From: Manfred Spraul <manfred@colorfullife.com>
Subject: [PATCH 2/4 v3] spinlock.h: Move smp_mb__after_unlock_lock to spinlock.h
Date: 28 Aug 2016
v3: With smp_mb__after_unlock_lock() in barrier.h, kernel/rcu/tree.c
    does not compile for arm64, because kernel/rcu/tree.c does not
    include barrier.h.

    (v2: added the example from Paul, an outcome that can happen on real HW)

    spin_unlock() + spin_lock() together do not form a full memory barrier:
    (everything initialized to 0)

    CPU1:
    a=1;
    spin_unlock(&b);
    spin_lock(&c);
    + smp_mb__after_unlock_lock();
    r1=d;

    CPU2:
    d=1;
    smp_mb();
    r2=a;

    Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would
    be possible.
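
    To make this concrete, the same scenario written against the real
    spinlock API (a minimal sketch, not part of the patch; the lock and
    variable names, the cpu1()/cpu2() helpers, and the assumption that
    CPU1 already holds b are illustrative only):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(b);
    static DEFINE_SPINLOCK(c);
    static int a, d;

    static void cpu1(void)			/* runs on CPU1 */
    {
    	int r1;

    	a = 1;
    	spin_unlock(&b);		/* RELEASE: orders a=1 before the unlock */
    	spin_lock(&c);			/* ACQUIRE: note, a different lock variable */
    	smp_mb__after_unlock_lock();	/* upgrades UNLOCK+LOCK to a full smp_mb() */
    	r1 = d;
    }

    static void cpu2(void)			/* runs on CPU2 */
    {
    	int r2;

    	d = 1;
    	smp_mb();
    	r2 = a;
    }

    With the barrier, r1==0 && r2==0 is forbidden: both CPUs then have a
    full barrier between their store and their load, the classic
    store-buffering pattern. Without it, RELEASE (the unlock) plus
    ACQUIRE (the lock of a different lock variable) does not order CPU1's
    store to a against its later load from d on PowerPC, so both reads
    can observe 0.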

    Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
    ---
    include/linux/spinlock.h | 16 ++++++++++++++++
    kernel/rcu/tree.h | 12 ------------
    2 files changed, 16 insertions(+), 12 deletions(-)

    diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
    index d79000e..5075c88 100644
    --- a/include/linux/spinlock.h
    +++ b/include/linux/spinlock.h
    @@ -142,6 +142,22 @@ do { \
    #define smp_mb__after_spin_lock() smp_mb()
    #endif

    +#ifndef smp_mb__after_unlock_lock
    +/**
    + * smp_mb__after_unlock_lock() - Provide smp_mb() after unlock+lock
    + *
    + * Place this after a lock-acquisition primitive to guarantee that
    + * an UNLOCK+LOCK pair act as a full barrier. This guarantee applies
    + * if the UNLOCK and LOCK are executed by the same CPU or if the
    + * UNLOCK and LOCK operate on the same lock variable.
    + */
    +#ifdef CONFIG_PPC
    +#define smp_mb__after_unlock_lock() smp_mb() /* Full ordering for lock. */
    +#else /* #ifdef CONFIG_PPC */
    +#define smp_mb__after_unlock_lock() do { } while (0)
    +#endif /* #else #ifdef CONFIG_PPC */
    +#endif
    +
    /**
    * raw_spin_unlock_wait - wait until the spinlock gets unlocked
    * @lock: the spinlock in question.
    diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
    index e99a523..a0cd9ab 100644
    --- a/kernel/rcu/tree.h
    +++ b/kernel/rcu/tree.h
    @@ -687,18 +687,6 @@ static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
    #endif /* #ifdef CONFIG_RCU_TRACE */

    /*
    - * Place this after a lock-acquisition primitive to guarantee that
    - * an UNLOCK+LOCK pair act as a full barrier. This guarantee applies
    - * if the UNLOCK and LOCK are executed by the same CPU or if the
    - * UNLOCK and LOCK operate on the same lock variable.
    - */
    -#ifdef CONFIG_PPC
    -#define smp_mb__after_unlock_lock() smp_mb() /* Full ordering for lock. */
    -#else /* #ifdef CONFIG_PPC */
    -#define smp_mb__after_unlock_lock() do { } while (0)
    -#endif /* #else #ifdef CONFIG_PPC */
    -
    -/*
    * Wrappers for the rcu_node::lock acquire and release.
    *
    * Because the rcu_nodes form a tree, the tree traversal locking will observe
    --
    2.5.5
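
For context (not part of this diff): the rcu_node lock wrappers that
the remaining tree.h comment describes are the primary user of this
barrier. The pattern is roughly the sketch below (modeled on
kernel/rcu/tree.h of this era; treat the exact wrapper name and the
ACCESS_PRIVATE() annotation as assumptions):

/*
 * Acquire an rcu_node's ->lock and guarantee that, together with the
 * CPU's previous unlock (possibly of a *different* rcu_node's lock
 * while walking the tree), the UNLOCK+LOCK sequence acts as a full
 * memory barrier.
 */
#define raw_spin_lock_rcu_node(rnp)					\
do {									\
	raw_spin_lock(&ACCESS_PRIVATE(rnp, lock));			\
	smp_mb__after_unlock_lock();					\
} while (0)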