Date: Thu, 18 Jan 2018 14:48:26 +0100
From: Peter Zijlstra <>
Subject: [PATCH 26/35] x86/enter: Create macros to stop/restart Indirect Branch Speculation
From: Tim Chen <tim.c.chen@linux.intel.com>
Create macros to control Indirect Branch Speculation.
Name them so that they reflect what they are actually doing. The Intel-supplied names suggest that they 'enable' something, while in reality they disable it.
[ tglx: Changed macro names and rewrote changelog ]
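
For orientation, both directions come down to a single write of the IA32_SPEC_CTRL MSR; the sketch below shows the two raw wrmsr sequences that the macros wrap. SPEC_CTRL_ENABLE_IBRS and SPEC_CTRL_DISABLE_IBRS are defined elsewhere in this series; the values noted in the comments follow the architectural definition (IBRS is bit 0 of IA32_SPEC_CTRL, MSR 0x48), not this patch.

	/* "Stop" indirect branch speculation: set IA32_SPEC_CTRL.IBRS */
	movl	$MSR_IA32_SPEC_CTRL, %ecx	/* architecturally MSR 0x48 */
	movl	$0, %edx
	movl	$SPEC_CTRL_ENABLE_IBRS, %eax	/* defined elsewhere in the series; IBRS is bit 0 */
	wrmsr

	/* "Restart" indirect branch speculation: clear IA32_SPEC_CTRL.IBRS */
	movl	$MSR_IA32_SPEC_CTRL, %ecx
	movl	$0, %edx
	movl	$SPEC_CTRL_DISABLE_IBRS, %eax	/* IBRS bit clear */
	wrmsr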
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Ashok Raj <ashok.raj@intel.com>
---
 arch/x86/entry/calling.h | 73 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -6,6 +6,8 @@
 #include <asm/percpu.h>
 #include <asm/asm-offsets.h>
 #include <asm/processor-flags.h>
+#include <asm/msr-index.h>
+#include <asm/cpufeatures.h>
 
 /*
@@ -349,3 +351,74 @@ For 32-bit we have the following convent
 .Lafter_call_\@:
 #endif
 .endm
+
+/*
+ * IBRS related macros
+ */
+.macro PUSH_MSR_REGS
+	pushq	%rax
+	pushq	%rcx
+	pushq	%rdx
+.endm
+
+.macro POP_MSR_REGS
+	popq	%rdx
+	popq	%rcx
+	popq	%rax
+.endm
+
+.macro WRMSR_ASM msr_nr:req edx_val:req eax_val:req
+	movl	\msr_nr, %ecx
+	movl	\edx_val, %edx
+	movl	\eax_val, %eax
+	wrmsr
+.endm
+
+.macro STOP_IB_SPEC
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_IBRS
+	PUSH_MSR_REGS
+	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $SPEC_CTRL_ENABLE_IBRS
+	POP_MSR_REGS
+.Lskip_\@:
+.endm
+
+.macro RESTART_IB_SPEC
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_IBRS
+	PUSH_MSR_REGS
+	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $SPEC_CTRL_DISABLE_IBRS
+	POP_MSR_REGS
+.Lskip_\@:
+.endm
+
+.macro STOP_IB_SPEC_CLOBBER
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_IBRS
+	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $SPEC_CTRL_ENABLE_IBRS
+.Lskip_\@:
+.endm
+
+.macro RESTART_IB_SPEC_CLOBBER
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_IBRS
+	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $SPEC_CTRL_DISABLE_IBRS
+.Lskip_\@:
+.endm
+
+.macro STOP_IB_SPEC_SAVE_AND_CLOBBER save_reg:req
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_IBRS
+	movl	$MSR_IA32_SPEC_CTRL, %ecx
+	rdmsr
+	movl	%eax, \save_reg
+	movl	$0, %edx
+	movl	$SPEC_CTRL_ENABLE_IBRS, %eax
+	wrmsr
+.Lskip_\@:
+.endm
+
+.macro RESTORE_IB_SPEC_CLOBBER save_reg:req
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_IBRS
+	/* Set IBRS to the value saved in the save_reg */
+	movl	$MSR_IA32_SPEC_CTRL, %ecx
+	movl	$0, %edx
+	movl	\save_reg, %eax
+	wrmsr
+.Lskip_\@:
+.endm
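
As a usage illustration (not part of this patch), the paired macros are meant to bracket the kernel side of an entry point: stop indirect branch speculation once kernel state is established, restart it just before returning to user space. A hypothetical sketch, with placeholder names and surrounding code:

	/*
	 * Hypothetical entry/exit sketch -- illustrative only, not taken
	 * from this series.
	 */
example_entry:
	/* ... switch to kernel CR3/stack, save registers ... */
	STOP_IB_SPEC		/* restrict indirect branch speculation while in the kernel */

	/* ... regular kernel work ... */

	RESTART_IB_SPEC		/* allow it again just before returning to user mode */
	/* ... restore user state and return ... */

The _CLOBBER variants drop the PUSH_MSR_REGS/POP_MSR_REGS pair for call sites where %rax, %rcx and %rdx are already dead, and STOP_IB_SPEC_SAVE_AND_CLOBBER / RESTORE_IB_SPEC_CLOBBER read and later restore the previous MSR value for paths that must put back whatever was in place before entry. In all of them the ALTERNATIVE emits a plain jump over the sequence when X86_FEATURE_IBRS is not set, so CPUs without the MSR never execute the wrmsr.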