Subject: Re: [PATCH v5 14/21] kprobes: Remove NMI context check
On Tue, 3 Nov 2020 14:39:38 +0900
Masami Hiramatsu <mhiramat@kernel.org> wrote:

> Ah, OK. This looks good to me.
>
> BTW, in_nmi() in pre_handler_kretprobe() will always be true because
> int3 is now treated as an NMI. So you can always pass 1 there.

What about the below patch then?

>
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

From 29ac1a5c9068df06f3196173d4325c8076759551 Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Date: Mon, 2 Nov 2020 09:17:49 -0500
Subject: [PATCH] kprobes: Tell lockdep about kprobe nesting

The kprobe handlers have protection that prohibits other handlers from
executing in other contexts (for example, if an NMI comes in while
processing a kprobe and executes the same kprobe, it will fail with a
"busy" return), but lockdep is unaware of this protection. Use lockdep's
nesting API to differentiate between locks taken in INT3 context and in
other contexts, which suppresses the false warnings.

Link: https://lore.kernel.org/r/20201102160234.fa0ae70915ad9e2b21c08b85@kernel.org

Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
kernel/kprobes.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 8a12a25fa40d..30889ea5514f 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1249,7 +1249,12 @@ __acquires(hlist_lock)
 
 	*head = &kretprobe_inst_table[hash];
 	hlist_lock = kretprobe_table_lock_ptr(hash);
-	raw_spin_lock_irqsave(hlist_lock, *flags);
+	/*
+	 * Nested is a workaround that will soon not be needed.
+	 * There are other protections, which lockdep is unaware of,
+	 * that make sure the same lock is not taken on the same CPU.
+	 */
+	raw_spin_lock_irqsave_nested(hlist_lock, *flags, 1);
 }
 NOKPROBE_SYMBOL(kretprobe_hash_lock);
 
@@ -1258,7 +1263,12 @@ static void kretprobe_table_lock(unsigned long hash,
 __acquires(hlist_lock)
 {
 	raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
-	raw_spin_lock_irqsave(hlist_lock, *flags);
+	/*
+	 * Nested is a workaround that will soon not be needed.
+	 * There are other protections, which lockdep is unaware of,
+	 * that make sure the same lock is not taken on the same CPU.
+	 */
+	raw_spin_lock_irqsave_nested(hlist_lock, *flags, 1);
 }
 NOKPROBE_SYMBOL(kretprobe_table_lock);
 
@@ -2028,7 +2038,12 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
 
 	/* TODO: consider to only swap the RA after the last pre_handler fired */
 	hash = hash_ptr(current, KPROBE_HASH_BITS);
-	raw_spin_lock_irqsave(&rp->lock, flags);
+	/*
+	 * Nested is a workaround that will soon not be needed.
+	 * There are other protections, which lockdep is unaware of,
+	 * that make sure the same lock is not taken on the same CPU.
+	 */
+	raw_spin_lock_irqsave_nested(&rp->lock, flags, 1);
 	if (!hlist_empty(&rp->free_instances)) {
 		ri = hlist_entry(rp->free_instances.first,
 				struct kretprobe_instance, hlist);
@@ -2039,7 +2054,7 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
 	ri->task = current;
 
 	if (rp->entry_handler && rp->entry_handler(ri, regs)) {
-		raw_spin_lock_irqsave(&rp->lock, flags);
+		raw_spin_lock_irqsave_nested(&rp->lock, flags, 1);
 		hlist_add_head(&ri->hlist, &rp->free_instances);
 		raw_spin_unlock_irqrestore(&rp->lock, flags);
 		return 0;
--
2.25.4
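
For readers unfamiliar with the lockdep annotation used in the patch above,
here is a minimal, self-contained sketch (illustration only; the lock and
functions are hypothetical and not part of the patch) of how passing a
subclass of 1 to raw_spin_lock_irqsave_nested() keeps lockdep from reporting
the nested INT3-context acquisition as possible recursive locking, while the
kretprobe "busy" protection is what actually prevents real recursion:

#include <linux/spinlock.h>

/* Hypothetical lock for illustration; same lock class in both paths. */
static DEFINE_RAW_SPINLOCK(demo_lock);

static void demo_normal_context(void)
{
	unsigned long flags;

	/* Default subclass 0: the acquisition lockdep already tracks. */
	raw_spin_lock_irqsave(&demo_lock, flags);
	/* ... normal kprobe bookkeeping ... */
	raw_spin_unlock_irqrestore(&demo_lock, flags);
}

static void demo_int3_context(void)
{
	unsigned long flags;

	/*
	 * Acquisition from the nested breakpoint context, where in_nmi()
	 * is always true now that int3 is handled as an NMI.  Subclass 1
	 * tells lockdep this nesting is intentional, so it does not warn
	 * about recursive locking of the same class.
	 */
	raw_spin_lock_irqsave_nested(&demo_lock, flags, 1);
	/* ... kretprobe instance handling ... */
	raw_spin_unlock_irqrestore(&demo_lock, flags);
}

When lockdep is not enabled, the _nested() variant behaves the same as the
plain raw_spin_lock_irqsave(); the subclass argument only affects lock-class
tracking in debug builds.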