Subject: Re: [RFC 7/7] KVM: VMX: Enable PKS for nested VM


On 8/11/2020 8:05 AM, Jim Mattson wrote:
> On Fri, Aug 7, 2020 at 1:47 AM Chenyi Qiang <chenyi.qiang@intel.com> wrote:
>>
>> The PKS MSR is passed through to the guest directly. Configure the MSR
>> to match the L0/L1 settings so that a nested VM runs PKS properly.
>>
>> Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
>> ---
>> arch/x86/kvm/vmx/nested.c | 32 ++++++++++++++++++++++++++++++++
>> arch/x86/kvm/vmx/vmcs12.c | 2 ++
>> arch/x86/kvm/vmx/vmcs12.h | 6 +++++-
>> arch/x86/kvm/vmx/vmx.c | 10 ++++++++++
>> arch/x86/kvm/vmx/vmx.h | 1 +
>> 5 files changed, 50 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>> index df2c2e733549..1f9823d21ecd 100644
>> --- a/arch/x86/kvm/vmx/nested.c
>> +++ b/arch/x86/kvm/vmx/nested.c
>> @@ -647,6 +647,12 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
>> MSR_IA32_PRED_CMD,
>> MSR_TYPE_W);
>>
>> + if (!msr_write_intercepted_l01(vcpu, MSR_IA32_PKRS))
>> + nested_vmx_disable_intercept_for_msr(
>> + msr_bitmap_l1, msr_bitmap_l0,
>> + MSR_IA32_PKRS,
>> + MSR_TYPE_R | MSR_TYPE_W);
>
> What if L1 intercepts only *reads* of MSR_IA32_PKRS?
>
>> kvm_vcpu_unmap(vcpu, &to_vmx(vcpu)->nested.msr_bitmap_map, false);
>>
>> return true;
>
>> @@ -2509,6 +2519,11 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
>> if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
>> !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
>> vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
>> +
>> + if (kvm_cpu_cap_has(X86_FEATURE_PKS) &&
>
> Is the above check superfluous? I would assume that the L1 guest can't
> set VM_ENTRY_LOAD_IA32_PKRS unless this is true.
>

I enforce this check to ensure that the vmcs_write to GUEST_IA32_PKRS
succeeds. If it were deleted, the vmcs_write to GUEST_IA32_PKRS could be
executed even when PKS is unsupported.
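
To spell out the intent, here is the same hunk annotated (a sketch; the
comments are mine, not part of the patch):

	/*
	 * GUEST_IA32_PKRS only exists when the CPU supports PKS; a vmwrite
	 * to it on other hardware would fail. The rest of the condition
	 * mirrors the BNDCFGS handling above: restore vmcs01's value unless
	 * this very entry is the one that loads PKRS from vmcs12.
	 */
	if (kvm_cpu_cap_has(X86_FEATURE_PKS) &&
	    (!vmx->nested.nested_run_pending ||
	     !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PKRS)))
		vmcs_write64(GUEST_IA32_PKRS, vmx->nested.vmcs01_guest_pkrs);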

>> + (!vmx->nested.nested_run_pending ||
>> + !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PKRS)))
>> + vmcs_write64(GUEST_IA32_PKRS, vmx->nested.vmcs01_guest_pkrs);
>
> This doesn't seem right to me. On the target of a live migration, with
> L2 active at the time the snapshot was taken (i.e.,
> vmx->nested.nested_run_pending=0), it looks like we're going to try to
> overwrite the current L2 PKRS value with L1's PKRS value (except that
> in this situation, vmx->nested.vmcs01_guest_pkrs should actually be
> 0). Am I missing something?
>

We overwrite the L2 PKRS with L1's value when vmcs12 doesn't enable
VM_ENTRY_LOAD_IA32_PKRS. Because L1's VM_ENTRY_LOAD_IA32_PKRS is off in
that case, L2 keeps running with L1's PKRS, so we need to migrate L1's
PKRS to L2.
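
For reference, the save side mirrors the BNDCFGS handling; roughly (this
paraphrases the part of the patch not quoted above, so treat it as a
sketch rather than the exact hunk):

	/*
	 * On nested VM-entry, if L1 is not asking to load a separate PKRS
	 * for L2, remember L1's current PKRS so that prepare_vmcs02() can
	 * carry it into vmcs02.
	 */
	if (kvm_cpu_cap_has(X86_FEATURE_PKS) &&
	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PKRS))
		vmx->nested.vmcs01_guest_pkrs = vmcs_read64(GUEST_IA32_PKRS);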

>> vmx_set_rflags(vcpu, vmcs12->guest_rflags);
>>
>> /* EXCEPTION_BITMAP and CR0_GUEST_HOST_MASK should basically be the
>
>
>> @@ -3916,6 +3943,8 @@ static void sync_vmcs02_to_vmcs12_rare(struct kvm_vcpu *vcpu,
>> vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS);
>> if (kvm_mpx_supported())
>> vmcs12->guest_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
>> + if (kvm_cpu_cap_has(X86_FEATURE_PKS))
>
> Shouldn't we be checking to see if the *virtual* CPU supports PKS
> before writing anything into vmcs12->guest_ia32_pkrs?
>

Yes, it's reasonable.
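
For example, something along these lines (untested sketch, using the
existing guest_cpuid_has() helper):

	/* Only sync PKRS into vmcs12 when the vCPU itself exposes PKS. */
	if (guest_cpuid_has(vcpu, X86_FEATURE_PKS))
		vmcs12->guest_ia32_pkrs = vmcs_read64(GUEST_IA32_PKRS);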

>> + vmcs12->guest_ia32_pkrs = vmcs_read64(GUEST_IA32_PKRS);
>>
>> vmx->nested.need_sync_vmcs02_to_vmcs12_rare = false;
>> }
