Subject: Re: [PATCH 4.19 079/271] x86/atomic: Fix smp_mb__{before,after}_atomic()
On Fri, Jul 26, 2019 at 01:18:06PM +0300, Jari Ruusu wrote:
> Greg Kroah-Hartman wrote:
> > [ Upstream commit 69d927bba39517d0980462efc051875b7f4db185 ]
> >
> > Recent probing at the Linux Kernel Memory Model uncovered a
> > 'surprise'. Strongly ordered architectures where the atomic RmW
> > primitive implies full memory ordering and
> > smp_mb__{before,after}_atomic() are a simple barrier() (such as x86)
> > fail for:
> >
> > *x = 1;
> > atomic_inc(u);
> > smp_mb__after_atomic();
> > r0 = *y;
>
> [snip]
>
> > --- a/arch/x86/include/asm/atomic.h
> > +++ b/arch/x86/include/asm/atomic.h
> > @@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v)
> > {
> > asm volatile(LOCK_PREFIX "addl %1,%0"
> > : "+m" (v->counter)
> > - : "ir" (i));
> > + : "ir" (i) : "memory");
> > }
> >
> > /**
>
> Shouldn't those clobber constraints actually be: "memory", "cc"?
> That is because addl, subl (and other) machine instructions
> actually modify the flags register too.
>
> gcc docs say: The "cc" clobber indicates that the assembler
> code modifies the flags register.

On x86, GCC assumes any asm() will clobber "cc".
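As an illustrative sketch only (a standalone userspace re-creation, not the
kernel's actual atomic.h, assuming GCC on x86-64): the explicit "memory"
clobber is what makes the asm act as a compiler barrier for surrounding
loads/stores, while the flags ("cc") clobber is assumed implicitly by GCC
for every asm on x86, so listing it changes nothing:

    #include <stdio.h>

    /* Hypothetical stand-in for the patched arch_atomic_add(): the
     * "memory" clobber stops GCC from caching or reordering memory
     * accesses across the asm; "cc" is omitted because GCC on x86
     * already treats the flags register as clobbered by any asm.
     */
    static inline void my_atomic_add(int i, int *counter)
    {
            asm volatile("lock addl %1,%0"
                         : "+m" (*counter)
                         : "ir" (i)
                         : "memory");
    }

    int main(void)
    {
            int v = 41;
            my_atomic_add(1, &v);
            printf("%d\n", v);      /* prints 42 */
            return 0;
    }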
