From: Jim Mattson <jmattson@google.com>
Date: Fri, 13 May 2022
Subject: Re: [PATCH v4] x86/speculation, KVM: remove IBPB on vCPU load
On Fri, May 13, 2022 at 8:21 AM Jon Kohler <jon@nutanix.com> wrote:
>
>
>
> > On May 12, 2022, at 11:50 PM, Jim Mattson <jmattson@google.com> wrote:
> >
> > On Thu, May 12, 2022 at 8:19 PM Jon Kohler <jon@nutanix.com> wrote:
> >>
> >>
> >>
> >>> On May 12, 2022, at 11:06 PM, Jim Mattson <jmattson@google.com> wrote:
> >>>
> >>> On Thu, May 12, 2022 at 5:50 PM Jon Kohler <jon@nutanix.com> wrote:
> >>>
> >>>> You mentioned the case where someone is concerned about performance.
> >>>> Are you saying they care about performance so critically that they are
> >>>> willing to *not* use IBPB at all, and instead just use taskset, hope
> >>>> nothing else ever gets scheduled there, and then hope that the
> >>>> hypervisor does the job for them?
> >>>
> >>> I am saying that IBPB is not the only viable mitigation for
> >>> cross-process indirect branch steering. Proper scheduling can also
> >>> solve the problem, without the overhead of IBPB. Say that you have two
> >>> security domains: trusted and untrusted. If you have a two-socket
> >>> system, and you always run trusted workloads on socket#0 and untrusted
> >>> workloads on socket#1, IBPB is completely superfluous. However, if the
> >>> hypervisor chooses to schedule a vCPU thread from virtual socket#0
> >>> after a vCPU thread from virtual socket#1 on the same logical
> >>> processor, then it *must* execute an IBPB between those two vCPU
> >>> threads. Otherwise, it has introduced a non-architectural
> >>> vulnerability that the guest can't possibly be aware of.
> >>>
> >>> If you can't trust your OS to schedule tasks where you tell it to
> >>> schedule them, can you really trust it to provide you with any kind of
> >>> inter-process security?
> >>
> >> Fair enough, so going forward:
> >> Should this be mandatory in all cases? This whole effort came about
> >> because a user could configure their KVM host with conditional IBPB,
> >> yet this particular mitigation is now always on no matter what.
> >>
> >> In our previous patch review threads, Sean and I mostly settled on making
> >> this particular avenue active only when a user configures always_ibpb, so
> >> that cases like the one you describe (and others like it that come up in
> >> the future) are covered, while for cond_ibpb we document that this case
> >> is not covered.
> >>
> >> Would that be acceptable here?
> >
> > That would make me unhappy. We use cond_ibpb, and I don't want to
> > switch to always_ibpb, yet I do want this barrier.
>
> Ok, gotcha. That is a good point for cloud providers, since the
> workloads there are especially opaque.
>
> How about this: I could work up a v5 patch that makes this at minimum
> a system-level knob (similar to other mitigation knobs) and documents
> it in more detail. That way, folks who want more control here have a
> basic way to get it without recompiling the kernel. Such a knob would
> be on by default, so there is no functional regression.
>
> Would that be ok with you as a middle ground?

That would be great. Module parameter or sysctl is fine with me.

Thanks!
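
A minimal sketch of what such a module parameter could look like, purely
for illustration; the parameter name (vcpu_load_ibpb), the 0444
permissions, and the placement are assumptions, not taken from the
posted patch:

/*
 * Illustrative sketch only, not the actual v5 patch. Gate the IBPB
 * issued on vCPU load behind a module parameter that defaults to on,
 * so existing behavior is unchanged unless an admin explicitly opts
 * out.
 */
static bool __read_mostly vcpu_load_ibpb = true;
module_param(vcpu_load_ibpb, bool, 0444);

With 0444 the knob is read-only at runtime and would be set at load
time, e.g. "modprobe kvm vcpu_load_ibpb=0" or "kvm.vcpu_load_ibpb=0" on
the kernel command line; 0644 (or a sysctl) would allow runtime
toggling instead.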

> Thanks again,
> Jon
>
> >
> >>>
> >>>> Would this be the expectation of just KVM? Or all hypervisors on the
> >>>> market?
> >>>
> >>> Any hypervisor that doesn't do this is broken, but that won't keep it
> >>> off the market. :-)
> >>
> >> Very true :)
> >>
>
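
For context on the barrier itself: the requirement Jim describes above
is that a logical processor must execute an IBPB between two vCPU
threads from different security domains. A simplified sketch of that
shape, combined with the knob sketched earlier; the names here are
illustrative, and the real KVM code keys on the per-CPU VMCS/VMCB
pointer rather than the vCPU pointer:

/*
 * Illustrative sketch, not KVM's actual code: issue a predictor
 * barrier when this pCPU is about to run a different vCPU than the
 * one it ran last, so one guest cannot steer another guest's
 * indirect branches.
 */
static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu_run);

static void vcpu_load_maybe_ibpb(struct kvm_vcpu *vcpu)
{
	if (vcpu_load_ibpb && __this_cpu_read(last_vcpu_run) != vcpu)
		indirect_branch_prediction_barrier();
	__this_cpu_write(last_vcpu_run, vcpu);
}

indirect_branch_prediction_barrier() is the real helper from
arch/x86/include/asm/nospec-branch.h; everything else above is assumed
for the example.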
