Subject: Re: [PATCH 2/4] barrier.h: Move smp_mb__after_unlock_lock to barrier.h
On Sun, Aug 28, 2016 at 01:56:14PM +0200, Manfred Spraul wrote:
> spin_unlock() + spin_lock() together do not form a full memory barrier:
>
> 	a=1;
> 	spin_unlock(&b);
> 	spin_lock(&c);
> +	smp_mb__after_unlock_lock();
> 	d=1;

Better would be s/d=1/r1=d/ above.

Then another process doing this:

	d = 1;
	smp_mb();
	r2 = a;

might have the after-the-dust-settles outcome of r1 == 0 && r2 == 0.

The advantage of this scenario is that it can happen on real hardware.
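
Putting the two sides together gives the following store-buffering
litmus sketch (informal; initial values are zero, r1/r2 are CPU-local,
and P0 is assumed to already hold lock b):

	P0					P1
	a = 1;					d = 1;
	spin_unlock(&b);			smp_mb();
	spin_lock(&c);				r2 = a;
	smp_mb__after_unlock_lock();
	r1 = d;

Without the smp_mb__after_unlock_lock(), the outcome r1 == 0 && r2 == 0
is possible on real hardware (PowerPC, for example); with it, the
UNLOCK+LOCK pair acts as a full barrier and that outcome is forbidden.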

>
> Without the smp_mb__after_unlock_lock(), other CPUs can observe the
> write to d without seeing the write to a.
>
> Signed-off-by: Manfred Spraul <manfred@colorfullife.com>

With the upgraded commit log, I am OK with the patch below.
However, others will probably want to see at least one use of
smp_mb__after_unlock_lock() outside of RCU.

Thanx, Paul

> ---
>  include/asm-generic/barrier.h | 16 ++++++++++++++++
>  kernel/rcu/tree.h             | 12 ------------
>  2 files changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index fe297b5..9b4d28f 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -244,6 +244,22 @@ do { \
>  	smp_acquire__after_ctrl_dep();				\
>  	VAL;							\
>  })
> +
> +#ifndef smp_mb__after_unlock_lock
> +/*
> + * Place this after a lock-acquisition primitive to guarantee that
> + * an UNLOCK+LOCK pair act as a full barrier.  This guarantee applies
> + * if the UNLOCK and LOCK are executed by the same CPU or if the
> + * UNLOCK and LOCK operate on the same lock variable.
> + */
> +#ifdef CONFIG_PPC
> +#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
> +#else /* #ifdef CONFIG_PPC */
> +#define smp_mb__after_unlock_lock()	do { } while (0)
> +#endif /* #else #ifdef CONFIG_PPC */
> +
> +#endif
> +
>  #endif
> 
>  #endif /* !__ASSEMBLY__ */
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index e99a523..a0cd9ab 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -687,18 +687,6 @@ static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
>  #endif /* #ifdef CONFIG_RCU_TRACE */
> 
>  /*
> - * Place this after a lock-acquisition primitive to guarantee that
> - * an UNLOCK+LOCK pair act as a full barrier.  This guarantee applies
> - * if the UNLOCK and LOCK are executed by the same CPU or if the
> - * UNLOCK and LOCK operate on the same lock variable.
> - */
> -#ifdef CONFIG_PPC
> -#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
> -#else /* #ifdef CONFIG_PPC */
> -#define smp_mb__after_unlock_lock()	do { } while (0)
> -#endif /* #else #ifdef CONFIG_PPC */
> -
> -/*
>   * Wrappers for the rcu_node::lock acquire and release.
>   *
>   * Because the rcu_nodes form a tree, the tree traversal locking will observe
> --
> 2.5.5
>
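
As for a use of smp_mb__after_unlock_lock() outside of RCU: one shape
such a use could take is a hand-off between two locks on the same CPU,
as in the sketch below (hypothetical; struct bucket and move_all() are
invented for illustration, and only the barrier placement follows the
guarantee documented in the patch):

	struct bucket {
		spinlock_t lock;
		struct list_head list;
	};

	/* Move all entries from @src to @dst. */
	static void move_all(struct bucket *src, struct bucket *dst)
	{
		LIST_HEAD(tmp);

		spin_lock(&src->lock);
		list_splice_init(&src->list, &tmp);	/* writes under src->lock */
		spin_unlock(&src->lock);

		spin_lock(&dst->lock);
		smp_mb__after_unlock_lock();	/* UNLOCK+LOCK acts as full barrier */
		/*
		 * The list updates made under src->lock are now ordered
		 * before the splice below as observed by all CPUs, not
		 * only those that later acquire one of the two locks.
		 */
		list_splice(&tmp, &dst->list);
		spin_unlock(&dst->lock);
	}

Because the UNLOCK and LOCK are executed by the same CPU, the guarantee
applies even though two different lock variables are involved.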
