Subject: Re: 5.13-rt1 + KVM = WARNING: at fs/eventfd.c:74 eventfd_signal()
From: Jason Wang <>
Date: Thu, 15 Jul 2021 14:45:14 +0800
On 2021/7/15 1:58 PM, Paolo Bonzini wrote:
> On 15/07/21 06:14, Jason Wang wrote:
>>> This obviously does not fly with PREEMPT_RT. If eventfd_signal is
>>> preempted and an unrelated thread calls eventfd_signal, the result is
>>> a spurious WARN. To avoid this, protect the percpu variable with a
>>> local_lock.
>>
>> But local_lock only disables migration, not preemption.
>
> On mainline PREEMPT_RT, local_lock is an array of per-CPU spinlocks.
> When two eventfd_signals run on the same CPU and one is preempted, the
> spinlocks prevent the second from seeing eventfd_wake_count > 0.
>
> Thanks,
>
> Paolo
Right, I see.
Thanks
>
>> Or anything I missed here?
>>
>> Thanks
>>
>>
>>>
>>> Reported-by: Daniel Bristot de Oliveira <bristot@redhat.com>
>>> Fixes: b5e683d5cab8 ("eventfd: track eventfd_signal() recursion depth")
>>> Cc: stable@vger.kernel.org
>>> Cc: He Zhe <zhe.he@windriver.com>
>>> Cc: Jens Axboe <axboe@kernel.dk>
>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>>
>>> diff --git a/fs/eventfd.c b/fs/eventfd.c
>>> index e265b6dd4f34..7d27b6e080ea 100644
>>> --- a/fs/eventfd.c
>>> +++ b/fs/eventfd.c
>>> @@ -12,6 +12,7 @@
>>>  #include <linux/fs.h>
>>>  #include <linux/sched/signal.h>
>>>  #include <linux/kernel.h>
>>> +#include <linux/local_lock.h>
>>>  #include <linux/slab.h>
>>>  #include <linux/list.h>
>>>  #include <linux/spinlock.h>
>>> @@ -25,6 +26,7 @@
>>>  #include <linux/idr.h>
>>>  #include <linux/uio.h>
>>>
>>> +static local_lock_t eventfd_wake_lock = INIT_LOCAL_LOCK(eventfd_wake_lock);
>>>  DEFINE_PER_CPU(int, eventfd_wake_count);
>>>
>>>  static DEFINE_IDA(eventfd_ida);
>>> @@ -71,8 +73,11 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
>>>  	 * it returns true, the eventfd_signal() call should be deferred to a
>>>  	 * safe context.
>>>  	 */
>>> -	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
>>> +	local_lock(&eventfd_wake_lock);
>>> +	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count))) {
>>> +		local_unlock(&eventfd_wake_lock);
>>>  		return 0;
>>> +	}
>>>
>>>  	spin_lock_irqsave(&ctx->wqh.lock, flags);
>>>  	this_cpu_inc(eventfd_wake_count);
>>> @@ -83,6 +88,7 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
>>>  	wake_up_locked_poll(&ctx->wqh, EPOLLIN);
>>>  	this_cpu_dec(eventfd_wake_count);
>>>  	spin_unlock_irqrestore(&ctx->wqh.lock, flags);
>>> +	local_unlock(&eventfd_wake_lock);
>>>
>>>  	return n;
>>>  }
>>>
>>
>
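For readers following along: a minimal sketch of the local_lock pattern the
patch relies on, reduced to hypothetical names (demo_lock, demo_count) rather
than the eventfd symbols above. On !PREEMPT_RT, local_lock() amounts to
preempt_disable(); on PREEMPT_RT it takes a per-CPU spinlock, so a task
preempted inside the critical section still excludes other tasks on the same
CPU — which is why the WARN cannot fire spuriously.

	/* Hypothetical example, not kernel code from the patch. */
	#include <linux/local_lock.h>
	#include <linux/percpu.h>

	static local_lock_t demo_lock = INIT_LOCAL_LOCK(demo_lock);
	static DEFINE_PER_CPU(int, demo_count);

	static void demo_enter(void)
	{
		/* On RT: acquires this CPU's spinlock; otherwise: disables preemption. */
		local_lock(&demo_lock);
		/* Only code on this CPU, holding demo_lock, can observe the counter. */
		this_cpu_inc(demo_count);
	}

	static void demo_exit(void)
	{
		this_cpu_dec(demo_count);
		local_unlock(&demo_lock);
	}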