Subject: [PATCH 3.16 261/346] arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
3.16.39-rc1 review patch.  If anyone has any objections, please let me know.


From: Will Deacon <>

commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream.

smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.

Unfortunately, the core code defines the barrier as smp_wmb(), which
is insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.
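To illustrate (this sketch is not part of the commit; the lock and variable
names are made up), smp_wmb() only orders the prior store against later
stores, so with a load-acquire lock that store can still be reordered with a
load performed inside the critical section:

	/* Hypothetical sketch of the insufficient ordering with smp_wmb();
	 * assume 'lock', 'flag', 'other_flag' and 'seen' are declared elsewhere. */
	flag = 1;			/* store before taking the lock         */
	smp_wmb();			/* orders store->store only             */
	spin_lock(&lock);		/* acquire (e.g. ldaxr): only prevents  */
					/* later accesses moving before it      */
	seen = other_flag;		/* this load can be satisfied before    */
					/* the store to 'flag' becomes visible  */
	spin_unlock(&lock);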

This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().
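As a usage sketch only (not from the patch; the variable names and helper are
illustrative, loosely modelled on the try_to_wake_up() case mentioned above),
the pairing after this change looks roughly like:

	cond = 1;			/* store that must not be reordered      */
					/* with accesses in the critical section */
	smp_mb__before_spinlock();	/* full smp_mb() on arm64 after this     */
					/* patch, rather than just smp_wmb()     */
	spin_lock(&lock);
	if (task_sleeping)		/* load is now ordered after the store   */
		wake_it_up();		/* hypothetical helper                   */
	spin_unlock(&lock);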

Cc: Peter Zijlstra <>
Reported-by: Alan Stern <>
Signed-off-by: Will Deacon <>
Signed-off-by: Catalin Marinas <>
Signed-off-by: Ben Hutchings <>
---
 arch/arm64/include/asm/spinlock.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -231,4 +231,14 @@ static inline int arch_read_trylock(arch
 #define arch_read_relax(lock)	cpu_relax()
 #define arch_write_relax(lock)	cpu_relax()
 
+/*
+ * Accesses appearing in program order before a spin_lock() operation
+ * can be reordered with accesses inside the critical section, by virtue
+ * of arch_spin_lock being constructed using acquire semantics.
+ *
+ * In cases where this is problematic (e.g. try_to_wake_up), an
+ * smp_mb__before_spinlock() can restore the required ordering.
+ */
+#define smp_mb__before_spinlock()	smp_mb()
+
 #endif /* __ASM_SPINLOCK_H */