Date: Mon, 18 Sep 2017 12:12:13 -0400
From: Steven Rostedt <>
Subject: Re: Query regarding synchronize_sched_expedited and resched_cpu
On Mon, 18 Sep 2017 09:01:25 -0700 "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> sched: Make resched_cpu() unconditional
>
> The current implementation of synchronize_sched_expedited() incorrectly
> assumes that resched_cpu() is unconditional, which it is not. This means
> that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
> fails as follows (analysis by Neeraj Upadhyay):
>
> o   CPU1 is waiting for expedited wait to complete:
>         sync_rcu_exp_select_cpus
>             rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
>             IPI sent to CPU5
>
>         synchronize_sched_expedited_wait
>             ret = swait_event_timeout(
>                         rsp->expedited_wq,
>                         sync_rcu_preempt_exp_done(rnp_root),
>                         jiffies_stall);
>
>         expmask = 0x20, and CPU 5 is in idle path (in cpuidle_enter())
>
> o   CPU5 handles IPI and fails to acquire rq lock.
>
>         Handles IPI
>             sync_sched_exp_handler
>                 resched_cpu
>                     returns while failing to try lock acquire rq->lock
>                     need_resched is not set
>
> o   CPU5 calls rcu_idle_enter() and as need_resched is not set, goes to
>     idle (schedule() is not called).
>
> o   CPU 1 reports RCU stall.
>
> Given that resched_cpu() is used only by RCU, this commit fixes the
> assumption by making resched_cpu() unconditional.
Probably want to run this with several workloads with lockdep enabled first.
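
For anyone reproducing that testing, a minimal set of lock-debugging
options would be something like the fragment below; the exact option
list is a suggestion, not prescriptive:

    CONFIG_PROVE_LOCKING=y
    CONFIG_DEBUG_SPINLOCK=y
    CONFIG_DEBUG_LOCK_ALLOC=y
    CONFIG_DEBUG_ATOMIC_SLEEP=y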
-- Steve
>
> Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
> Suggested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Steven Rostedt <rostedt@goodmis.org>
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index cab8c5ec128e..b2281971894c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -505,8 +505,7 @@ void resched_cpu(int cpu)
>  	struct rq *rq = cpu_rq(cpu);
>  	unsigned long flags;
>  
> -	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
> -		return;
> +	raw_spin_lock_irqsave(&rq->lock, flags);
>  	resched_curr(rq);
>  	raw_spin_unlock_irqrestore(&rq->lock, flags);
>  }
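
To make the failure mode concrete, here is an illustrative userspace
sketch (plain C with pthreads, not kernel code; the function names
mirror resched_cpu() but the mutex and flag are stand-ins) of why a
failed trylock silently drops the reschedule request while an
unconditional lock cannot:

    /* Illustrative analogy only, not the kernel implementation.
     * resched_cpu_old() mirrors the pre-patch trylock behavior;
     * resched_cpu_new() mirrors the post-patch unconditional lock. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool need_resched;

    static void resched_cpu_old(void)
    {
            /* If the lock is contended, the request is silently
             * dropped: need_resched stays false, so the "CPU" can
             * go idle without ever calling schedule(). */
            if (pthread_mutex_trylock(&rq_lock) != 0)
                    return;
            need_resched = true;
            pthread_mutex_unlock(&rq_lock);
    }

    static void resched_cpu_new(void)
    {
            /* Waiting unconditionally guarantees the flag is set
             * before this function returns. */
            pthread_mutex_lock(&rq_lock);
            need_resched = true;
            pthread_mutex_unlock(&rq_lock);
    }

    int main(void)
    {
            resched_cpu_new();
            printf("need_resched = %s\n",
                   need_resched ? "true" : "false");
            return 0;
    }

With the old variant, a single contended acquisition is enough to lose
the request, which is exactly the window Neeraj's analysis describes;
the patched variant trades that window for (bounded) spinning on
rq->lock.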