Subject: Re: 5.13-rt1 + KVM = WARNING: at fs/eventfd.c:74 eventfd_signal()
From: He Zhe <>
Date: Thu, 15 Jul 2021 16:44:25 +0800
On 7/15/21 4:22 PM, Daniel Bristot de Oliveira wrote:
> On 7/14/21 12:35 PM, Paolo Bonzini wrote:
>> On 14/07/21 11:23, Jason Wang wrote:
>>>> This was added in 2020, so it's unlikely to be the direct cause of the
>>>> change. What is a known-good version for the host?
>>>>
>>>> Since it is not KVM stuff, I'm CCing Michael and Jason.
>>> I think this can probably be fixed here:
>>>
>>> https://lore.kernel.org/lkml/20210618084412.18257-1-zhe.he@windriver.com/
>> That seems wrong; in particular it wouldn't protect against AB/BA deadlocks.
>> In fact, the bug is with the locking; the code assumes that
>> spin_lock_irqsave/spin_unlock_irqrestore is non-preemptable and therefore
>> increments and decrements the percpu variable inside the critical section.
>>
>> This obviously does not fly with PREEMPT_RT; the right fix should be
>> using a local_lock. Something like this (untested!!):
> the lock needs to be per-cpu... but so far, so good. I will continue using the
> system in the next days to see if it blows up in another way.
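For context, the local_lock pattern Paolo refers to bundles a per-CPU
local_lock_t with the per-CPU data it protects. A minimal sketch of the
generic pattern, with illustrative names only (demo_pcpu/demo_inc are not
from the patch below):

/*
 * Minimal sketch of the generic local_lock pattern. On !PREEMPT_RT,
 * local_lock() boils down to disabling preemption; on PREEMPT_RT it is
 * a per-CPU sleeping lock, so the per-CPU data stays consistent even
 * though the critical section is preemptible.
 */
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct demo_pcpu {
	local_lock_t lock;
	int count;
};

static DEFINE_PER_CPU(struct demo_pcpu, demo_pcpu) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void demo_inc(void)
{
	local_lock(&demo_pcpu.lock);	/* serializes access to this CPU's instance */
	this_cpu_inc(demo_pcpu.count);
	local_unlock(&demo_pcpu.lock);
}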
The original patch was created before preempt-rt was fully introduced into mainline. It increased the recursion depth to 2 so that vhost_worker and the kvm_vcpu_ioctl syscall could work in parallel, as shown in the original commit log.
So the limit on event_fd_recursion.count should still be 2 to fix the original issue, no matter how the locking is tweaked.
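Purely to illustrate that point (this is not the patch from the link above,
and EFD_RECURSION_LIMIT is a made-up name), the check would compare against
a limit of 2 instead of warning on any non-zero count:

/*
 * Illustration only: keep the original depth limit of 2, i.e. warn and
 * bail out only when a further nested eventfd_signal() would exceed it.
 * EFD_RECURSION_LIMIT is a hypothetical name, not from the kernel tree.
 */
#define EFD_RECURSION_LIMIT	2

	local_lock(&event_fd_recursion.lock);
	if (WARN_ON_ONCE(this_cpu_read(event_fd_recursion.count) >= EFD_RECURSION_LIMIT)) {
		local_unlock(&event_fd_recursion.lock);
		return 0;
	}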
Zhe
>
> The patch looks like this now:
>
> ------------------------- 8< ---------------------
> Subject: [PATCH] eventfd: protect eventfd_wake_count with a local_lock
>
> eventfd_signal assumes that spin_lock_irqsave/spin_unlock_irqrestore is
> non-preemptable and therefore increments and decrements the percpu
> variable inside the critical section.
>
> This obviously does not fly with PREEMPT_RT. If eventfd_signal is
> preempted and an unrelated thread calls eventfd_signal, the result is
> a spurious WARN. To avoid this, protect the percpu variable with a
> local_lock.
>
> Reported-by: Daniel Bristot de Oliveira <bristot@redhat.com>
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Fixes: b5e683d5cab8 ("eventfd: track eventfd_signal() recursion depth")
> Cc: stable@vger.kernel.org
> Cc: He Zhe <zhe.he@windriver.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Co-developed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
> Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
> ---
>  fs/eventfd.c            | 27 ++++++++++++++++++++++-----
>  include/linux/eventfd.h |  7 +------
>  2 files changed, 23 insertions(+), 11 deletions(-)
>
> diff --git a/fs/eventfd.c b/fs/eventfd.c
> index e265b6dd4f34..9754fcd38690 100644
> --- a/fs/eventfd.c
> +++ b/fs/eventfd.c
> @@ -12,6 +12,7 @@
>  #include <linux/fs.h>
>  #include <linux/sched/signal.h>
>  #include <linux/kernel.h>
> +#include <linux/local_lock.h>
>  #include <linux/slab.h>
>  #include <linux/list.h>
>  #include <linux/spinlock.h>
> @@ -25,8 +26,6 @@
>  #include <linux/idr.h>
>  #include <linux/uio.h>
>
> -DEFINE_PER_CPU(int, eventfd_wake_count);
> -
>  static DEFINE_IDA(eventfd_ida);
>
>  struct eventfd_ctx {
> @@ -45,6 +44,20 @@ struct eventfd_ctx {
>  	int id;
>  };
>
> +struct event_fd_recursion {
> +	local_lock_t lock;
> +	int count;
> +};
> +
> +static DEFINE_PER_CPU(struct event_fd_recursion, event_fd_recursion) = {
> +	.lock = INIT_LOCAL_LOCK(lock),
> +};
> +
> +bool eventfd_signal_count(void)
> +{
> +	return this_cpu_read(event_fd_recursion.count);
> +}
> +
>  /**
>   * eventfd_signal - Adds @n to the eventfd counter.
>   * @ctx: [in] Pointer to the eventfd context.
> @@ -71,18 +84,22 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
>  	 * it returns true, the eventfd_signal() call should be deferred to a
>  	 * safe context.
>  	 */
> -	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
> +	local_lock(&event_fd_recursion.lock);
> +	if (WARN_ON_ONCE(this_cpu_read(event_fd_recursion.count))) {
> +		local_unlock(&event_fd_recursion.lock);
>  		return 0;
> +	}
>
>  	spin_lock_irqsave(&ctx->wqh.lock, flags);
> -	this_cpu_inc(eventfd_wake_count);
> +	this_cpu_inc(event_fd_recursion.count);
>  	if (ULLONG_MAX - ctx->count < n)
>  		n = ULLONG_MAX - ctx->count;
>  	ctx->count += n;
>  	if (waitqueue_active(&ctx->wqh))
>  		wake_up_locked_poll(&ctx->wqh, EPOLLIN);
> -	this_cpu_dec(eventfd_wake_count);
> +	this_cpu_dec(event_fd_recursion.count);
>  	spin_unlock_irqrestore(&ctx->wqh.lock, flags);
> +	local_unlock(&event_fd_recursion.lock);
>
>  	return n;
>  }
> diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
> index fa0a524baed0..ca89d6c409c1 100644
> --- a/include/linux/eventfd.h
> +++ b/include/linux/eventfd.h
> @@ -43,12 +43,7 @@ int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx, wait_queue_entry_t *wait,
>  				  __u64 *cnt);
>  void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt);
>
> -DECLARE_PER_CPU(int, eventfd_wake_count);
> -
> -static inline bool eventfd_signal_count(void)
> -{
> -	return this_cpu_read(eventfd_wake_count);
> -}
> +bool eventfd_signal_count(void);
>
>  #else /* CONFIG_EVENTFD */
>
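As a side note, eventfd_signal_count() exists so that code which may already
be running inside eventfd_signal() on this CPU can detect that and defer its
own signal to a safe context. A hedged sketch of such a caller (every my_*
identifier is a made-up name, not kernel code):

/*
 * Hedged sketch of a caller deferring eventfd_signal() when it is already
 * inside one on this CPU. All my_* identifiers are hypothetical; ctx->work
 * is assumed to have been set up elsewhere with
 * INIT_WORK(&ctx->work, my_deferred_signal).
 */
#include <linux/eventfd.h>
#include <linux/workqueue.h>

struct my_ctx {
	struct eventfd_ctx *evfd;
	struct work_struct work;
};

static void my_deferred_signal(struct work_struct *work)
{
	struct my_ctx *ctx = container_of(work, struct my_ctx, work);

	eventfd_signal(ctx->evfd, 1);		/* safe: plain process context */
}

static void my_notify(struct my_ctx *ctx)
{
	if (eventfd_signal_count())
		schedule_work(&ctx->work);	/* would recurse, defer instead */
	else
		eventfd_signal(ctx->evfd, 1);
}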