Date:	Mon, 16 Feb 2015 22:09:38 +0100
From:	Borislav Petkov <>
Subject: Re: [PATCH 3/8] x86, fpu: kill save_init_fpu(), change math_error() to use unlazy_fpu()
On Fri, Feb 06, 2015 at 03:02:00PM -0500, riel@redhat.com wrote:
> From: Oleg Nesterov <oleg@redhat.com>
> 
> math_error() calls save_init_fpu() after conditional_sti(), this means
> that the caller can be preempted. If !use_eager_fpu() we can hit the
> WARN_ON_ONCE(!__thread_has_fpu(tsk)) and/or save the wrong FPU state.
> 
> Change math_error() to use unlazy_fpu() and kill save_init_fpu().
> 
> Signed-off-by: Oleg Nesterov <oleg@redhat.com>
> Signed-off-by: Rik van Riel <riel@redhat.com>
> ---
>  arch/x86/include/asm/fpu-internal.h | 18 ------------------
>  arch/x86/kernel/traps.c             |  2 +-
>  2 files changed, 1 insertion(+), 19 deletions(-)
> 
> diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
> index 0dbc08282291..27d00e04f911 100644
> --- a/arch/x86/include/asm/fpu-internal.h
> +++ b/arch/x86/include/asm/fpu-internal.h
> @@ -520,24 +520,6 @@ static inline void __save_fpu(struct task_struct *tsk)
>  }
>  
>  /*
> - * These disable preemption on their own and are safe
> - */
> -static inline void save_init_fpu(struct task_struct *tsk)
> -{
> -	WARN_ON_ONCE(!__thread_has_fpu(tsk));
> -
> -	if (use_eager_fpu()) {
> -		__save_fpu(tsk);
> -		return;
> -	}
> -
> -	preempt_disable();
> -	__save_init_fpu(tsk);
> -	__thread_fpu_end(tsk);
> -	preempt_enable();
> -}
> -
> -/*
>   * i387 state interaction
>   */
>  static inline unsigned short get_fpu_cwd(struct task_struct *tsk)
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index fb4cb6adf225..51c465846f06 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -663,7 +663,7 @@ static void math_error(struct pt_regs *regs, int error_code, int trapnr)
>  	/*
>  	 * Save the info for the exception handler and clear the error.
>  	 */
> -	save_init_fpu(task);
> +	unlazy_fpu(task);
Do I see it correctly that even with this there's a not-so-small hole *after* conditional_sti() and *before* unlazy_fpu() where the caller can still get preempted?
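
To make it concrete, here is a rough sketch of the post-patch math_error() flow I mean. Only conditional_sti(), unlazy_fpu(), the function signature and the quoted comment come from the patch/changelog above; the "task = current" line and the elided parts are assumed for illustration, not quoted:

static void math_error(struct pt_regs *regs, int error_code, int trapnr)
{
	struct task_struct *task = current;	/* assumed, not in the quoted hunk */

	/* ... earlier checks elided ... */

	conditional_sti(regs);		/* interrupts may be back on from here */

	/*
	 * <-- the hole: everything between here and unlazy_fpu()
	 *     still runs with preemption possible
	 */

	/*
	 * Save the info for the exception handler and clear the error.
	 */
	unlazy_fpu(task);		/* was save_init_fpu(task) */

	/* ... rest of the handler ... */
}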
Thanks.
-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.