Subject: [PATCH 2/2] sched: Add smp_rmb() in task rq locking cycles
From: Kirill Tkhai <ktkhai@parallels.com>
Date: Tue, 17 Feb 2015 13:47:01 +0300
We migrate a task using the TASK_ON_RQ_MIGRATING state of on_rq:
	raw_spin_lock(&old_rq->lock);
	deactivate_task(old_rq, p, 0);
	p->on_rq = TASK_ON_RQ_MIGRATING;
	set_task_cpu(p, new_cpu);
	raw_spin_unlock(&old_rq->lock);
I.e.:
	write TASK_ON_RQ_MIGRATING
	smp_wmb()	(in __set_task_cpu())
	write new_cpu
But {,__}task_rq_lock() do not use smp_rmb(), so the two reads may be
satisfied in the opposite order: they can observe the new cpu together
with the old on_rq value. In that case {,__}task_rq_lock() locks new_rq
before the task is actually queued on it.
Fix that by ordering the reads with smp_rmb(), mirroring the write side:

	read new_cpu
	smp_rmb()
	read TASK_ON_RQ_MIGRATING
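To make the reordering concrete, here is a minimal userspace litmus-style
sketch (an editor's illustration, not part of the patch): C11 relaxed
atomics stand in for the plain kernel accesses, a release fence for
smp_wmb(), and an acquire fence for smp_rmb(). With both fences, seeing
the new cpu implies seeing TASK_ON_RQ_MIGRATING; remove the acquire fence
and the forbidden outcome becomes possible. Compile with -pthread; on
strongly ordered x86 the loads are not reordered anyway, so the message
would only ever appear on weakly ordered hardware (e.g. ARM, POWER) with
the acquire fence deleted:

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	#define TASK_ON_RQ_QUEUED	1
	#define TASK_ON_RQ_MIGRATING	2

	static atomic_int on_rq;
	static atomic_int cpu;

	static void *migrate(void *arg)
	{
		(void)arg;
		/* Write side, as in the changelog: mark MIGRATING, then publish cpu. */
		atomic_store_explicit(&on_rq, TASK_ON_RQ_MIGRATING, memory_order_relaxed);
		atomic_thread_fence(memory_order_release);	/* smp_wmb() stand-in */
		atomic_store_explicit(&cpu, 1, memory_order_relaxed);
		return NULL;
	}

	static void *lock_task_rq(void *arg)
	{
		(void)arg;
		/* Read side, as in {,__}task_rq_lock() with this patch applied. */
		int c = atomic_load_explicit(&cpu, memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);	/* smp_rmb() stand-in */
		int o = atomic_load_explicit(&on_rq, memory_order_relaxed);

		/*
		 * With both fences, c == 1 implies o == TASK_ON_RQ_MIGRATING.
		 * Without the acquire fence, the two loads may be satisfied
		 * in the opposite order, which is exactly the bug.
		 */
		if (c == 1 && o != TASK_ON_RQ_MIGRATING)
			puts("saw new cpu but not TASK_ON_RQ_MIGRATING");
		return NULL;
	}

	int main(void)
	{
		for (int i = 0; i < 100000; i++) {
			pthread_t w, r;

			atomic_store(&on_rq, TASK_ON_RQ_QUEUED);
			atomic_store(&cpu, 0);
			pthread_create(&w, NULL, migrate, NULL);
			pthread_create(&r, NULL, lock_task_rq, NULL);
			pthread_join(w, NULL);
			pthread_join(r, NULL);
		}
		return 0;
	}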
Fixes: cca26e8009d1 ("sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state")
Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
---
 kernel/sched/core.c  | 8 ++++++--
 kernel/sched/sched.h | 8 ++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fc12a1d..a42fb88 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -319,8 +319,12 @@ static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
 		raw_spin_lock_irqsave(&p->pi_lock, *flags);
 		rq = task_rq(p);
 		raw_spin_lock(&rq->lock);
-		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-			return rq;
+		if (likely(rq == task_rq(p))) {
+			/* Pairs with smp_wmb() in __set_task_cpu() */
+			smp_rmb();
+			if (likely(!task_on_rq_migrating(p)))
+				return rq;
+		}
 		raw_spin_unlock(&rq->lock);
 		raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f65f57c..4d7b03c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1031,8 +1031,12 @@ static inline struct rq *__task_rq_lock(struct task_struct *p)
 	for (;;) {
 		rq = task_rq(p);
 		raw_spin_lock(&rq->lock);
-		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-			return rq;
+		if (likely(rq == task_rq(p))) {
+			/* Pairs with smp_wmb() in __set_task_cpu() */
+			smp_rmb();
+			if (likely(!task_on_rq_migrating(p)))
+				return rq;
+		}
 		raw_spin_unlock(&rq->lock);
 
 		while (unlikely(task_on_rq_migrating(p)))
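As a side note on the shape of the fix: the same read ordering could also
be obtained by making the cpu read itself an acquire load (in kernel terms,
roughly smp_load_acquire(), available since v3.14) instead of inserting a
standalone smp_rmb() between the two reads. A hedged C11 sketch of that
alternative shape; task_cpu_of_p, on_rq_of_p and check_task_rq() are
illustrative stand-ins, not kernel identifiers:

	#include <stdatomic.h>

	#define TASK_ON_RQ_MIGRATING	2

	/* Illustrative stand-ins for p->cpu and p->on_rq. */
	static _Atomic int task_cpu_of_p;
	static _Atomic int on_rq_of_p;

	/*
	 * Returns the observed cpu, or -1 if the task was seen in flight.
	 * The acquire load of the cpu keeps the later on_rq load from
	 * being satisfied first, just like the smp_rmb() in the patch.
	 */
	static int check_task_rq(void)
	{
		int c = atomic_load_explicit(&task_cpu_of_p, memory_order_acquire);

		if (atomic_load_explicit(&on_rq_of_p, memory_order_relaxed) ==
		    TASK_ON_RQ_MIGRATING)
			return -1;
		return c;
	}

	int main(void)
	{
		return check_task_rq() < 0;
	}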