From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
Subject: [PATCH 11/11] [rfc] locking/ww_mutex: Always spin optimistically for the first waiter
Date: 2016-11-28

Check the current owner's context once against our stamp. If our stamp is
lower, i.e. we are the older context and have priority, continue to spin
optimistically instead of backing off.
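
For reference, stamps follow the wound/wait convention: they come from a
monotonic counter, so a lower stamp means an older context that wins the
tie. A stand-alone sketch of the ordering test (names here are illustrative,
not the kernel's; the in-tree __ww_mutex_stamp_after() helper performs a
similar wraparound-safe comparison):

#include <stdbool.h>

struct ctx_sketch {
	unsigned long stamp;
};

/* True if @a was initialized after @b, i.e. @a is younger and backs off. */
static bool sketch_stamp_after(const struct ctx_sketch *a,
			       const struct ctx_sketch *b)
{
	/* wraparound-safe ordering via signed difference */
	return (long)(a->stamp - b->stamp) > 0;
}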

This is correct with respect to deadlock detection: while the same
(owner, ww_ctx) pair may re-appear if the owner task manages to unlock
and re-acquire the lock while we're spinning, the context can only have
been re-initialized with an even higher stamp. We also still detect when
we have to back off for other waiters that join the list while we're
spinning.
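
The "even higher stamp" guarantee holds because stamps are handed out by a
strictly increasing counter at context-init time (in-tree, ww_acquire_init()
does the equivalent with an atomic counter in the ww_class; the sketch below
is a simplified stand-in, not the actual kernel code):

#include <stdatomic.h>

static atomic_ulong class_stamp;	/* lives in the ww_class in-tree */

static void sketch_acquire_init(struct ctx_sketch *ctx)
{
	/* strictly increasing: every re-initialization gets a larger stamp */
	ctx->stamp = atomic_fetch_add(&class_stamp, 1) + 1;
}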

But taking the wait_lock in mutex_spin_on_owner feels iffy, even if it is
done only once.

Median timings from a contention-heavy GPU workload:

Before:
real    0m53.086s
user    0m7.360s
sys     1m46.204s

After:
real    0m52.577s
user    0m7.544s
sys     1m49.200s
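
(That is roughly a 1% reduction in wall-clock time, bought with about 3%
more system time.)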

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 kernel/locking/mutex.c | 35 ++++++++++++++++++++++++++++++++---
 1 file changed, 32 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index ee84007..e7d5fac 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -378,6 +378,28 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 			 struct mutex_waiter *waiter)
 {
 	bool ret = true;
+	struct ww_acquire_ctx *owner_ww_ctx = NULL;
+
+	if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
+		struct ww_mutex *ww;
+		unsigned long flags;
+
+		ww = container_of(lock, struct ww_mutex, base);
+
+		/*
+		 * Check the stamp of the current owner once. This allows us
+		 * to spin optimistically in the case where the current owner
+		 * has a higher stamp than us.
+		 */
+		spin_lock_mutex(&lock->wait_lock, flags);
+		owner_ww_ctx = ww->ctx;
+		if (owner_ww_ctx &&
+		    __ww_mutex_stamp_after(ww_ctx, owner_ww_ctx)) {
+			spin_unlock_mutex(&lock->wait_lock, flags);
+			return false;
+		}
+		spin_unlock_mutex(&lock->wait_lock, flags);
+	}
 
 	rcu_read_lock();
 	while (__mutex_owner(lock) == owner) {
@@ -414,9 +436,16 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 			 * Check this in every inner iteration because we may
 			 * be racing against another thread's ww_mutex_lock.
 			 */
-			if (ww_ctx->acquired > 0 && READ_ONCE(ww->ctx)) {
-				ret = false;
-				break;
+			if (ww_ctx->acquired > 0) {
+				struct ww_acquire_ctx *current_ctx;
+
+				current_ctx = READ_ONCE(ww->ctx);
+
+				if (current_ctx &&
+				    current_ctx != owner_ww_ctx) {
+					ret = false;
+					break;
+				}
 			}
 
 			/*
-- 
2.7.4