From: Waiman Long <longman@redhat.com>
Subject: [PATCH] locking/qspinlock: Add bug check for exceeding MAX_NODES
Date: 15 Jan 2019
On some architectures, it is possible for nested NMIs to take
spinlocks in a nested fashion. Even though the chance of having more
than 4 nested spinlock acquisitions under contention is extremely
small, it could still happen one day, leading to a system panic.

What we don't want is silent corruption followed by a system panic
somewhere else. So add a BUG_ON() check to make sure that a panic
caused by this condition points to the correct root cause.
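
For reference, the 4-node limit comes from the per-CPU MCS node pool
in qspinlock.c. A rough sketch of the relevant definitions is below
(simplified; the exact comments and layout in the source may differ):

	/* One MCS node per context: task, softirq, hardirq, NMI */
	#define MAX_NODES	4

	static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock,
				      mcs_nodes[MAX_NODES]);

A nested NMI taking a contended spinlock on top of those four
contexts would need a fifth node, which is the case the BUG_ON()
below catches.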

Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/locking/qspinlock.c | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 8a8c3c2..f823221 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -412,6 +412,16 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	idx = node->count++;
 	tail = encode_tail(smp_processor_id(), idx);
 
+	/*
+	 * 4 nodes are allocated based on the assumption that there will
+	 * not be nested NMIs taking spinlocks. That may not be true in
+	 * some architectures even though the chance of needing more than
+	 * 4 nodes will still be extremely unlikely. Adding a bug check
+	 * here to make sure there won't be a silent corruption in case
+	 * this condition happens.
+	 */
+	BUG_ON(idx >= MAX_NODES);
+
 	node = grab_mcs_node(node, idx);
 
 	/*
--
1.8.3.1