From: riel@redhat ...
Subject: [PATCH 7/8] x86,fpu: use disable_task_lazy_fpu_restore helper
Date: Fri, 6 Feb 2015 15:02:04 -0500
From: Rik van Riel <riel@redhat.com>
Replace the magic assignments of fpu.last_cpu = ~0 with more explicit task_disable_lazy_fpu_restore() calls.
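
For reference, task_disable_lazy_fpu_restore() is introduced earlier in this series and is little more than a named wrapper around the same assignment. A sketch of its intended shape (not part of this diff, shown here only for context):

	/*
	 * Sketch of the helper this patch switches to: invalidate
	 * fpu.last_cpu so a later fpu_lazy_restore() can never match
	 * a real CPU number.
	 */
	static inline void task_disable_lazy_fpu_restore(struct task_struct *tsk)
	{
		tsk->thread.fpu.last_cpu = ~0;
	}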
This also fixes the lazy FPU restore disabling in drop_fpu, which only really works when !use_eager_fpu(). That is fine for now, because fpu_lazy_restore() is currently only used when !use_eager_fpu(), but we may want to expand that later.
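
To illustrate why writing ~0 disables the lazy restore path: fpu_lazy_restore() only reuses the register contents when the task was the last FPU owner on the CPU it is being switched in on, so an impossible last_cpu value makes the check fail unconditionally. Roughly (a sketch of the existing !use_eager_fpu() check, not part of this patch):

	/*
	 * Rough shape of the lazy-restore check: registers are reused only
	 * if this task is still the FPU owner on this CPU and last_cpu
	 * matches.  last_cpu == ~0 never equals a valid CPU number.
	 */
	static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
	{
		return new == this_cpu_read_stable(fpu_owner_task) &&
			cpu == new->thread.fpu.last_cpu;
	}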
Signed-off-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/include/asm/fpu-internal.h | 4 ++--
 arch/x86/kernel/i387.c              | 2 +-
 arch/x86/kernel/process.c           | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 04063751ac80..06af286593d7 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -440,7 +440,7 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 		 new->thread.fpu_counter > 5);
 	if (__thread_has_fpu(old)) {
 		if (!__save_init_fpu(old))
-			old->thread.fpu.last_cpu = ~0;
+			task_disable_lazy_fpu_restore(old);
 		else
 			old->thread.fpu.last_cpu = cpu;
 		old->thread.fpu.has_fpu = 0;	/* But leave fpu_owner_task! */
@@ -454,7 +454,7 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 			stts();
 	} else {
 		old->thread.fpu_counter = 0;
-		old->thread.fpu.last_cpu = ~0;
+		task_disable_lazy_fpu_restore(old);
 		if (fpu.preload) {
 			new->thread.fpu_counter++;
 			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 8e070a6c30e5..8416b5f85806 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -250,7 +250,7 @@ int init_fpu(struct task_struct *tsk)
 	if (tsk_used_math(tsk)) {
 		if (cpu_has_fpu && tsk == current)
 			unlazy_fpu(tsk);
-		tsk->thread.fpu.last_cpu = ~0;
+		task_disable_lazy_fpu_restore(tsk);
 		return 0;
 	}
 
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index dd9a069a5ec5..83480373a642 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -68,8 +68,8 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 
 	dst->thread.fpu_counter = 0;
 	dst->thread.fpu.has_fpu = 0;
-	dst->thread.fpu.last_cpu = ~0;
 	dst->thread.fpu.state = NULL;
+	task_disable_lazy_fpu_restore(dst);
 	if (tsk_used_math(src)) {
 		int err = fpu_alloc(&dst->thread.fpu);
 		if (err)
-- 
1.9.3