Date: Tue, 17 Feb 2015 13:12:58 +0100
From: Peter Zijlstra <>
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles
On Tue, Feb 17, 2015 at 01:47:01PM +0300, Kirill Tkhai wrote:
>
> We migrate a task using the TASK_ON_RQ_MIGRATING state of on_rq:
>
> 	raw_spin_lock(&old_rq->lock);
> 	deactivate_task(old_rq, p, 0);
> 	p->on_rq = TASK_ON_RQ_MIGRATING;
> 	set_task_cpu(p, new_cpu);
> 	raw_spin_unlock(&rq->lock);
>
> I.e.:
>
> 	write TASK_ON_RQ_MIGRATING
> 	smp_wmb() (in __set_task_cpu)
> 	write new_cpu
>
> But {,__}task_rq_lock() don't use smp_rmb(), and they may see
> the cpu and TASK_ON_RQ_MIGRATING in opposite order. In this case
> {,__}task_rq_lock() lock new_rq before the task is actually queued
> on it.
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index fc12a1d..a42fb88 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -319,8 +319,12 @@ static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
>  		raw_spin_lock_irqsave(&p->pi_lock, *flags);
>  		rq = task_rq(p);
>  		raw_spin_lock(&rq->lock);
> -		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
> -			return rq;
> +		if (likely(rq == task_rq(p))) {
> +			/* Pairs with smp_wmb() in __set_task_cpu() */
That comment really is insufficient; but aside from that:
If we observe the old cpu value, we've just acquired the old rq->lock and therefore we must observe the new cpu value and retry -- we don't care about the migrate value in this case.
If we observe the new cpu value, we've acquired the new rq->lock and its ACQUIRE will pair with the WMB to ensure we see the migrate value.
So I think the current code is correct, though it could use a comment.
> +			smp_rmb();
> +			if (likely(!task_on_rq_migrating(p)))
> +				return rq;
> +		}
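As an aside for readers following along: the reordering described above is easy to reproduce in a userspace model with C11 atomics. The sketch below is illustrative only -- the names are made up, a release fence stands in for smp_wmb() in __set_task_cpu(), and pthreads stand in for the two CPUs; it is not kernel code.

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	#define TASK_ON_RQ_QUEUED	1
	#define TASK_ON_RQ_MIGRATING	2

	static _Atomic int on_rq = TASK_ON_RQ_QUEUED;
	static _Atomic int task_cpu = 0;

	/* Writer: models the store sequence in move_queued_task(). */
	static void *mover(void *arg)
	{
		(void)arg;
		atomic_store_explicit(&on_rq, TASK_ON_RQ_MIGRATING,
				      memory_order_relaxed);
		/* stands in for smp_wmb() in __set_task_cpu() */
		atomic_thread_fence(memory_order_release);
		atomic_store_explicit(&task_cpu, 1, memory_order_relaxed);
		return NULL;
	}

	/* Reader with no ordering of its own. */
	static void *reader(void *arg)
	{
		(void)arg;
		int c = atomic_load_explicit(&task_cpu, memory_order_relaxed);
		int o = atomic_load_explicit(&on_rq, memory_order_relaxed);

		/*
		 * c == 1 && o == TASK_ON_RQ_QUEUED is a permitted outcome:
		 * the new cpu is visible while MIGRATING is not.
		 */
		printf("cpu=%d on_rq=%d\n", c, o);
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, mover, NULL);
		pthread_create(&t2, NULL, reader, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}

task_rq_lock() is not such a naked reader, though: per the two cases above, the ACQUIRE of rq->lock supplies the read-side ordering, which the patch below documents.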
---
Subject: sched: Clarify ordering between task_rq_lock() and move_queued_task()
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue Feb 17 13:07:38 CET 2015
There was a wee bit of confusion around the exact ordering here; clarify things.
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Reported-by: Kirill Tkhai <ktkhai@parallels.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/core.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -341,6 +341,22 @@ static struct rq *task_rq_lock(struct ta
 		raw_spin_lock_irqsave(&p->pi_lock, *flags);
 		rq = task_rq(p);
 		raw_spin_lock(&rq->lock);
+		/*
+		 *	move_queued_task()		task_rq_lock()
+		 *
+		 *	ACQUIRE (rq->lock)
+		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
+		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
+		 *	[S] ->cpu = new_cpu		[L] task_rq()
+		 *					[L] ->on_rq
+		 *	RELEASE (rq->lock)
+		 *
+		 * If we observe the old cpu in task_rq_lock, the acquire of
+		 * the old rq->lock will fully serialize against the stores.
+		 *
+		 * If we observe the new cpu in task_rq_lock, the acquire will
+		 * pair with the WMB to ensure we must then also see migrating.
+		 */
 		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
 			return rq;
 		raw_spin_unlock(&rq->lock);
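For the curious, the second case in that comment can be modelled as a small C11 litmus-style test. Again this is a userspace sketch with made-up names, not kernel code: the acquire fence stands in for the ACQUIRE of the new rq->lock, and (simplifying the diagram) the relaxed cpu load ahead of the fence plays the role of task_rq() observing the new cpu. Once that load returns the new value, the release/acquire fence pairing guarantees the MIGRATING store is visible too:

	#include <assert.h>
	#include <pthread.h>
	#include <stdatomic.h>

	#define TASK_ON_RQ_QUEUED	1
	#define TASK_ON_RQ_MIGRATING	2

	static _Atomic int on_rq = TASK_ON_RQ_QUEUED;
	static _Atomic int task_cpu = 0;

	/* [S] ->on_rq = MIGRATING; WMB; [S] ->cpu = new_cpu */
	static void *mover(void *arg)
	{
		(void)arg;
		atomic_store_explicit(&on_rq, TASK_ON_RQ_MIGRATING,
				      memory_order_relaxed);
		atomic_thread_fence(memory_order_release);	/* the WMB */
		atomic_store_explicit(&task_cpu, 1, memory_order_relaxed);
		return NULL;
	}

	/* [L] task_rq(); ACQUIRE; [L] ->on_rq */
	static void *locker(void *arg)
	{
		(void)arg;
		int c = atomic_load_explicit(&task_cpu, memory_order_relaxed);

		/* stands in for the ACQUIRE of the new rq->lock */
		atomic_thread_fence(memory_order_acquire);

		if (c == 1) {
			/* release/acquire fence pairing: MIGRATING must be seen */
			assert(atomic_load_explicit(&on_rq, memory_order_relaxed) ==
			       TASK_ON_RQ_MIGRATING);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, mover, NULL);
		pthread_create(&t2, NULL, locker, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}

Drop the acquire fence (i.e. the lock) and the assert may trip; that is exactly the window the smp_rmb() patch at the top of the thread was aimed at.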