Subject: RE: rcu_preempt self-detected stall on CPU from 4.5-rc3, since 3.17
Hi Paul,

On 2016-03-21, Paul E. McKenney wrote:
> On Mon, Mar 21, 2016 at 09:22:30AM -0700, Jacob Pan wrote:
>> On Fri, 18 Mar 2016 16:56:41 -0700
>> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
>>> On Fri, Mar 18, 2016 at 02:00:11PM -0700, Josh Triplett wrote:
>>>> On Thu, Feb 25, 2016 at 04:56:38PM -0800, Paul E. McKenney wrote:
>
> [ . . . ]
>
>>>> We're seeing a similar stall (~60 seconds) on an x86 development
>>>> system here. Any luck tracking down the cause of this? If not, any
>>>> suggestions for traces that might be helpful?
>>>
>>> The dmesg containing the stall, the kernel version, and the .config
>>> would be helpful! Working on a torture test specific to this bug...
>>>
>>> Thanx, Paul
>>>
>> +Reinette, she has the system that can reproduce the issue. I
>> believe she is having some other problems with it at the moment. But
>> the .config should be available. Version is v4.5.
>
> A couple of additional questions:
>
> 1. Is the test running on bare metal or virtualized? If the
> latter, what is the host?

Bare metal.

> 2. Does the workload involve CPU hotplug?

No.

> 3. Are you seeing things like this in dmesg?
>
> "rcu_preempt kthread starved for 21033 jiffies"
> "rcu_sched kthread starved for 32103 jiffies"
> "rcu_bh kthread starved for 84031 jiffies"
>
> If not, you are probably facing some other bug, and should
> proceed debugging as described in Documentation/RCU/stallwarn.txt.
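
One way to check for those starvation messages (assuming the stall is still in the kernel ring buffer; the grep pattern is just an example) is:

    dmesg | grep 'kthread starved'

and the rcu_preempt starvation line does show up here, as seen in the trace below.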

Below is a sample of what I see as captured with v4.5. The kernel configuration is attached.

[ 135.456197] INFO: rcu_preempt detected stalls on CPUs/tasks:
[ 135.457729] 3-...: (0 ticks this GP) idle=722/0/0 softirq=5532/5532 fqs=0
[ 135.459604] (detected by 2, t=60004 jiffies, g=2105, c=2104, q=165)
[ 135.461318] Task dump for CPU 3:
[ 135.461321] swapper/3 R running task 0 0 1 0x00200000
[ 135.461325] 00000078560040e5 ffff88017846fed0 ffffffff818af2cc ffff880100000000
[ 135.461330] 0000000600000003 ffff880178470000 ffff880072f32200 ffffffff822dcec0
[ 135.461334] ffff88017846c000 ffff88017846c000 ffff88017846fee0 ffffffff818af517
[ 135.461338] Call Trace:
[ 135.461345] [<ffffffff818af2cc>] ? cpuidle_enter_state+0xfc/0x310
[ 135.461349] [<ffffffff818af517>] ? cpuidle_enter+0x17/0x20
[ 135.461353] [<ffffffff811515aa>] ? call_cpuidle+0x2a/0x40
[ 135.461355] [<ffffffff8115197d>] ? cpu_startup_entry+0x28d/0x360
[ 135.461360] [<ffffffff8108c874>] ? start_secondary+0x114/0x140
[ 135.461365] rcu_preempt kthread starved for 60004 jiffies! g2105 c2104 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[ 135.463965] rcu_preempt S ffff88017844fd68 0 7 2 0x00000000
[ 135.463969] ffff88017844fd68 ffff88017dd8cc80 ffff880177ff0000 ffff880178443b80
[ 135.463973] ffff880178450000 ffff88017844fda0 ffff88017dd8cc80 ffff88017dd8cc80
[ 135.463977] 0000000000000003 ffff88017844fd80 ffffffff81ab031f 0000000100031504
[ 135.463981] Call Trace:
[ 135.463986] [<ffffffff81ab031f>] schedule+0x3f/0xa0
[ 135.463989] [<ffffffff81ab42d7>] schedule_timeout+0x127/0x270
[ 135.463993] [<ffffffff81171a50>] ? detach_if_pending+0x120/0x120
[ 135.463997] [<ffffffff8116da5d>] rcu_gp_kthread+0x6bd/0xa30
[ 135.464000] [<ffffffff81151390>] ? wake_atomic_t_function+0x70/0x70
[ 135.464003] [<ffffffff8116d3a0>] ? force_qs_rnp+0x1b0/0x1b0
[ 135.464006] [<ffffffff8112f846>] kthread+0xe6/0x100
[ 135.464009] [<ffffffff8112f760>] ? kthread_worker_fn+0x190/0x190
[ 135.464012] [<ffffffff81ab5c0f>] ret_from_fork+0x3f/0x70
[ 135.464015] [<ffffffff8112f760>] ? kthread_worker_fn+0x190/0x190

Thank you very much

Reinette

