Date: Wed, 21 Aug 2019
From: Paul E. McKenney
Subject: Re: [PATCH 1/1] Fix: trace sched switch start/stop racy updates
On Wed, Aug 21, 2019 at 02:32:48PM +0100, Will Deacon wrote:
> On Wed, Aug 21, 2019 at 06:23:10AM -0700, Paul E. McKenney wrote:
> > On Wed, Aug 21, 2019 at 11:32:01AM +0100, Will Deacon wrote:
> > > void bar(u64 *x)
> > > {
> > > *(volatile u64 *)x = 0xabcdef10abcdef10;
> > > }
> > >
> > > then I get:
> > >
> > > bar:
> > > mov w1, 61200
> > > movk w1, 0xabcd, lsl 16
> > > str w1, [x0]
> > > str w1, [x0, 4]
> > > ret
> > >
> > > so I'm not sure that WRITE_ONCE would even help :/
> >
> > Well, I can have the LWN article cite your email, then. So thank you
> > very much!
> >
> > Is generation of this code for a 64-bit volatile store considered a bug?
>
> I consider it a bug for the volatile case, and the one compiler person I've
> spoken to also seems to reckon it's a bug, so hopefully it will be fixed.
> I'm led to believe it's an optimisation in the AArch64 backend of GCC.

Here is hoping for the fix!
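
To spell out why this matters for WRITE_ONCE(): modulo the kernel's
size-dispatch machinery in <linux/compiler.h>, it boils down to a
full-width volatile store, so the tearing above defeats it just the
same. A rough sketch (WRITE_ONCE_SKETCH and set_ctl are made-up names
for illustration, not the kernel's own definitions):

	#include <linux/types.h>	/* for u64, assuming kernel context */

	/* Simplified stand-in for WRITE_ONCE(); the real macro also
	 * dispatches on the access size. */
	#define WRITE_ONCE_SKETCH(x, val)			\
		(*(volatile typeof(x) *)&(x) = (val))

	void set_ctl(u64 *x)
	{
		/*
		 * If GCC tears this volatile u64 store into two 32-bit
		 * str instructions, as in the asm above, a concurrent
		 * reader can observe a half-written value -- exactly the
		 * case WRITE_ONCE() is supposed to rule out.
		 */
		WRITE_ONCE_SKETCH(*x, 0xabcdef10abcdef10ULL);
	}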

> > Or does ARMv8 exclude the possibility of 64-bit MMIO registers? And I
> > would guess that Thomas and Linus would ask a similar bugginess question
> > for normal stores. ;-)
>
> We use inline asm for MMIO, fwiw.

I should have remembered that, shouldn't I have? ;-)
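
(For reference, the arm64 write accessor is roughly the following --
from memory, so see arch/arm64/include/asm/io.h for the real thing.
The point is that a single str instruction gives a non-torn 64-bit
access no matter what the compiler would do with a plain volatile
store.)

	/* Assuming kernel context (u64 and __iomem from the usual
	 * headers); approximates arm64's __raw_writeq(). */
	static inline void my_writeq(u64 val, volatile void __iomem *addr)
	{
		/* One 64-bit store instruction: the access cannot be split. */
		asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
	}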

Is that also common practice across other embedded kernels these days?

Thanx, Paul
