From: Marco Elver <>
Date: Tue, 4 Feb 2020 18:22:45 +0100
Subject: Re: [PATCH 1/3] kcsan: Add option to assume plain writes up to word size are atomic
On Tue, 4 Feb 2020 at 16:40, Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Tue, Feb 04, 2020 at 04:28:47PM +0100, Marco Elver wrote:
> > On Tue, 4 Feb 2020 at 15:04, Marco Elver <elver@google.com> wrote:
> > >
> > > This adds option KCSAN_ASSUME_PLAIN_WRITES_ATOMIC. If enabled, plain
> > > writes up to word size are also assumed to be atomic, and also not
> > > subject to other unsafe compiler optimizations resulting in data races.
> >
> > I just realized we should probably also check for alignedness. Would
> > this be fair to add as an additional constraint? It would be my
> > preference.
>
> Checking for alignment makes a lot of sense to me! Otherwise, write
> tearing is expected behavior on some systems.
Sent v2: http://lkml.kernel.org/r/20200204172112.234455-1-elver@google.com
Thanks,
-- Marco
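[A minimal sketch of the alignment constraint discussed above, for illustration only; the actual change is in the v2 linked above. This sketch assumes the kernel's existing IS_ALIGNED() helper and power-of-two access sizes:]

static __always_inline bool
is_atomic(const volatile void *ptr, size_t size, int type)
{
	if ((type & KCSAN_ACCESS_ATOMIC) != 0)
		return true;

	/*
	 * Only assume a plain write is atomic if it is at most word size
	 * *and* naturally aligned; misaligned plain writes may tear.
	 */
	if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC) &&
	    (type & KCSAN_ACCESS_WRITE) != 0 && size <= sizeof(long) &&
	    IS_ALIGNED((unsigned long)ptr, size))
		return true;

	/* ... remainder as in the patch quoted below ... */
}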
>                                                        Thanx, Paul
>
> > Thanks,
> > -- Marco
> >
> > > This option has been enabled by default to reflect current kernel-wide
> > > preferences.
> > >
> > > Signed-off-by: Marco Elver <elver@google.com>
> > > ---
> > >  kernel/kcsan/core.c | 20 +++++++++++++++-----
> > >  lib/Kconfig.kcsan   | 26 +++++++++++++++++++-------
> > >  2 files changed, 34 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
> > > index 64b30f7716a12..3bd1bf8d6bfeb 100644
> > > --- a/kernel/kcsan/core.c
> > > +++ b/kernel/kcsan/core.c
> > > @@ -169,10 +169,19 @@ static __always_inline struct kcsan_ctx *get_ctx(void)
> > >  	return in_task() ? &current->kcsan_ctx : raw_cpu_ptr(&kcsan_cpu_ctx);
> > >  }
> > >
> > > -static __always_inline bool is_atomic(const volatile void *ptr)
> > > +static __always_inline bool
> > > +is_atomic(const volatile void *ptr, size_t size, int type)
> > >  {
> > > -	struct kcsan_ctx *ctx = get_ctx();
> > > +	struct kcsan_ctx *ctx;
> > > +
> > > +	if ((type & KCSAN_ACCESS_ATOMIC) != 0)
> > > +		return true;
> > >
> > > +	if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC) &&
> > > +	    (type & KCSAN_ACCESS_WRITE) != 0 && size <= sizeof(long))
> > > +		return true; /* Assume all writes up to word size are atomic. */
> > > +
> > > +	ctx = get_ctx();
> > >  	if (unlikely(ctx->atomic_next > 0)) {
> > >  		/*
> > >  		 * Because we do not have separate contexts for nested
> > > @@ -193,7 +202,8 @@ static __always_inline bool is_atomic(const volatile void *ptr)
> > >  	return kcsan_is_atomic(ptr);
> > >  }
> > >
> > > -static __always_inline bool should_watch(const volatile void *ptr, int type)
> > > +static __always_inline bool
> > > +should_watch(const volatile void *ptr, size_t size, int type)
> > >  {
> > >  	/*
> > >  	 * Never set up watchpoints when memory operations are atomic.
> > > @@ -202,7 +212,7 @@ static __always_inline bool should_watch(const volatile void *ptr, int type)
> > >  	 * should not count towards skipped instructions, and (2) to actually
> > >  	 * decrement kcsan_atomic_next for consecutive instruction stream.
> > >  	 */
> > > -	if ((type & KCSAN_ACCESS_ATOMIC) != 0 || is_atomic(ptr))
> > > +	if (is_atomic(ptr, size, type))
> > >  		return false;
> > >
> > >  	if (this_cpu_dec_return(kcsan_skip) >= 0)
> > > @@ -460,7 +470,7 @@ static __always_inline void check_access(const volatile void *ptr, size_t size,
> > >  	if (unlikely(watchpoint != NULL))
> > >  		kcsan_found_watchpoint(ptr, size, type, watchpoint,
> > >  				       encoded_watchpoint);
> > > -	else if (unlikely(should_watch(ptr, type)))
> > > +	else if (unlikely(should_watch(ptr, size, type)))
> > >  		kcsan_setup_watchpoint(ptr, size, type);
> > >  }
> > >
> > > diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
> > > index 3552990abcfe5..08972376f0454 100644
> > > --- a/lib/Kconfig.kcsan
> > > +++ b/lib/Kconfig.kcsan
> > > @@ -91,13 +91,13 @@ config KCSAN_REPORT_ONCE_IN_MS
> > >  	  limiting reporting to avoid flooding the console with reports.
> > >  	  Setting this to 0 disables rate limiting.
> > >
> > > -# Note that, while some of the below options could be turned into boot
> > > -# parameters, to optimize for the common use-case, we avoid this because: (a)
> > > -# it would impact performance (and we want to avoid static branch for all
> > > -# {READ,WRITE}_ONCE, atomic_*, bitops, etc.), and (b) complicate the design
> > > -# without real benefit. The main purpose of the below options is for use in
> > > -# fuzzer configs to control reported data races, and they are not expected
> > > -# to be switched frequently by a user.
> > > +# The main purpose of the below options is to control reported data races (e.g.
> > > +# in fuzzer configs), and are not expected to be switched frequently by other
> > > +# users. We could turn some of them into boot parameters, but given they should
> > > +# not be switched normally, let's keep them here to simplify configuration.
> > > +#
> > > +# The defaults below are chosen to be very conservative, and may miss certain
> > > +# bugs.
> > >
> > >  config KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
> > >  	bool "Report races of unknown origin"
> > > @@ -116,6 +116,18 @@ config KCSAN_REPORT_VALUE_CHANGE_ONLY
> > >  	  the data value of the memory location was observed to remain
> > >  	  unchanged, do not report the data race.
> > >
> > > +config KCSAN_ASSUME_PLAIN_WRITES_ATOMIC
> > > +	bool "Assume that plain writes up to word size are atomic"
> > > +	default y
> > > +	help
> > > +	  Assume that plain writes up to word size are atomic by default, and
> > > +	  also not subject to other unsafe compiler optimizations resulting in
> > > +	  data races. This will cause KCSAN to not report data races due to
> > > +	  conflicts where the only plain accesses are writes up to word size:
> > > +	  conflicts between marked reads and plain writes up to word size will
> > > +	  not be reported as data races; notice that data races between two
> > > +	  conflicting plain writes will also not be reported.
> > > +
> > >  config KCSAN_IGNORE_ATOMICS
> > >  	bool "Do not instrument marked atomic accesses"
> > >  	help
> > > --
> > > 2.25.0.341.g760bfbb309-goog
> > >
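[For clarity on what the Kconfig help text above means in practice, a hypothetical example, not taken from the patch, assuming kernel context where READ_ONCE() is available: with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y, the race below is no longer reported, because the only plain access involved is a write of at most word size.]

long flag;

void writer(void)
{
	flag = 1;		/* plain write, sizeof(long): assumed atomic */
}

long reader(void)
{
	return READ_ONCE(flag);	/* marked read: this pair is not reported */
}

[A plain read of 'flag' racing with the plain write would still be reported, since the option only covers plain writes up to word size, not plain reads.]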