Subject: Re: [RFC KVM 18/27] kvm/isolation: function to copy page table entries for percpu buffer

On Tue, May 14, 2019 at 01:33:21PM -0700, Andy Lutomirski wrote:
> On Tue, May 14, 2019 at 11:09 AM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> > For IRQs it's somewhat feasible, but not for NMIs since NMIs are unblocked
> > on VMX immediately after VM-Exit, i.e. there's no way to prevent an NMI
> > from occurring while KVM's page tables are loaded.
> >
> > Back to Andy's question about enabling IRQs, the answer is "it depends".
> > Exits due to INTR, NMI and #MC are considered high priority and are
> > serviced before re-enabling IRQs and preemption[1]. All other exits are
> > handled after IRQs and preemption are re-enabled.
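
(To make the ordering concrete, here is a stripped-down sketch of the
flow around VM-Enter/VM-Exit. This is illustrative only, not the actual
vcpu_enter_guest() code, and handle_high_priority_exits() is a made-up
stand-in for the real INTR/NMI/#MC handling:)

	static int vcpu_enter_guest_sketch(struct kvm_vcpu *vcpu)
	{
		int r;

		preempt_disable();
		local_irq_disable();

		vmx_vcpu_run(vcpu);	/* VMLAUNCH/VMRESUME, then VM-Exit */

		/*
		 * NMIs are unblocked as soon as the VM-Exit completes, so
		 * INTR/NMI/#MC must be serviced here, before IRQs come back.
		 */
		handle_high_priority_exits(vcpu);	/* hypothetical */

		local_irq_enable();
		preempt_enable();

		/* Everything else, e.g. CPUID, EPT violations, emulation. */
		r = kvm_x86_ops->handle_exit(vcpu);
		return r;
	}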
> >
> > A decent number of exit handlers are quite short, e.g. CPUID, most RDMSR
> > and WRMSR, any event-related exit, etc... But many exit handlers require
> > significantly longer flows, e.g. EPT violations (page faults) and anything
> > that requires extensive emulation, e.g. nested VMX. In short, leaving
> > IRQs disabled across all exits is not practical.
> >
> > Before going down the path of figuring out how to handle the corner cases
> > regarding kvm_mm, I think it makes sense to pinpoint exactly what exits
> > are a) in the hot path for the use case (configuration) and b) can be
> > handled fast enough that they can run with IRQs disabled. Generating that
> > list might allow us to tightly bound the contents of kvm_mm and sidestep
> > many of the corner cases, i.e. select VM-Exits are handled with IRQs
> > disabled using KVM's mm, while "slow" VM-Exits go through the full context
> > switch.
>
> I suspect that the context switch is a bit of a red herring. A
> PCID-don't-flush CR3 write is IIRC under 300 cycles. Sure, it's slow,
> but it's probably minor compared to the full cost of the vm exit. The
> pain point is kicking the sibling thread.

Speaking of PCIDs, a separate mm for KVM would mean consuming another
ASID, which isn't good.
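
(For the curious: with PCID enabled, CR3 bits 11:0 carry the PCID and
bit 63 suppresses the TLB flush on the write, which is roughly what
build_cr3_noflush() in arch/x86/mm/tlb.c does. The kernel only
multiplexes a handful of dynamic ASIDs per CPU (see TLB_NR_DYN_ASIDS),
and PTI already claims a user twin for each, so a dedicated kvm_mm
would eat into an already tight budget. Simplified sketch, not the
real helpers:)

	#define CR3_NOFLUSH	(1UL << 63)

	/*
	 * Simplified from build_cr3_noflush(); hardware PCID 0 is
	 * reserved, so the kernel maps ASID n to PCID n + 1.
	 */
	static unsigned long kvm_mm_cr3_sketch(unsigned long pgd_pa, u16 asid)
	{
		return pgd_pa | (asid + 1) | CR3_NOFLUSH;
	}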

> When I worked on the PTI stuff, I went to great lengths to never have
> a copy of the vmalloc page tables. The top-level entry is either
> there or it isn't, so everything is always in sync. I'm sure it's
> *possible* to populate just part of it for this KVM isolation, but
> it's going to be ugly. It would be really nice if we could avoid it.
> Unfortunately, this interacts unpleasantly with having the kernel
> stack in there. We can freely use a different stack (the IRQ stack,
> for example) as long as we don't schedule, but that means we can't run
> preemptible code.
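
(For reference, "populate just part of it" amounts to something like
the sketch below: clone the top-level entries covering a given range
into the KVM tables. The names are illustrative, and it has exactly
the staleness problem described above: a PGD entry populated later in
init_mm is never propagated to the copy.)

	static void kvm_copy_pgd_range_sketch(struct mm_struct *kvm_mm,
					      unsigned long start,
					      unsigned long end)
	{
		unsigned long addr, next;
		pgd_t *src, *dst;

		for (addr = start; addr < end; addr = next) {
			next = pgd_addr_end(addr, end);
			src = pgd_offset_k(addr);	/* init_mm's tables */
			dst = pgd_offset(kvm_mm, addr);

			if (!pgd_none(*src))
				set_pgd(dst, *src);
		}
	}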
>
> Another issue is tracing, kprobes, etc -- I don't think anyone will
> like it if a kprobe in KVM either dramatically changes performance by
> triggering isolation exits or crashes outright. So you may need to
> restrict the isolated code to a file that is compiled with tracing off
> and has everything marked NOKPROBE. Yuck.
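
(Concretely, that restriction would look something like the below; the
file and function names are made up, but the mechanics are the standard
ones:)

	/*
	 * In the Makefile, strip ftrace instrumentation from the object:
	 *
	 *	CFLAGS_REMOVE_kvm_isolation.o = $(CC_FLAGS_FTRACE)
	 */
	#include <linux/kprobes.h>

	static void kvm_isolation_enter(void)
	{
		/* code that must never hit a kprobe or a tracing hook */
	}
	NOKPROBE_SYMBOL(kvm_isolation_enter);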

Right, and all of the above is largely why I suggested compiling a list
of VM-Exits that "need" preferential treatment. If the cumulative amount
of code and data that needs to be accessed is tiny, then this might be
feasible. But if the goal is to be able to do things like handle IRQs
using the KVM mm, ouch.
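
As a strawman, the triage could look something like the below. The
split shown is a guess rather than a measured list, but it gives an
idea of the shape:

	static bool exit_can_run_isolated(u32 exit_reason)
	{
		switch (exit_reason) {
		case EXIT_REASON_CPUID:
		case EXIT_REASON_MSR_READ:
		case EXIT_REASON_MSR_WRITE:
			/* short, self-contained: keep kvm_mm, IRQs off */
			return true;
		default:
			/* EPT violations, emulation, etc.: full switch */
			return false;
		}
	}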

> I hate to say this, but at what point do we declare that "if you have
> SMT on, you get to keep both pieces, simultaneously!"?
