Date:	Tue, 30 Jan 2018 10:56:53 +0100
From:	Peter Zijlstra <>
Subject:	Re: [PATCH 20/24] objtool: Another static block fail
On Mon, Jan 29, 2018 at 04:52:53PM -0600, Josh Poimboeuf wrote:
> On Tue, Jan 23, 2018 at 04:25:59PM +0100, Peter Zijlstra wrote:
> > I've observed GCC generate:
> >
> > sym:
> >    NOP/JMP 1f	(static_branch)
> >    JMP 2f
> > 1: /* crud */
> >    JMP 3f
> > 2: /* other crud */
> >
> > 3: RETQ
> >
> >
> > This means we need to follow unconditional jumps; be conservative and
> > only follow if its a unique jump.
> >
> > (I've not yet figured out which CONFIG option is responsible for this,
> > a normal defconfig build does not generate crap like this)
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> Any chance we can just add a compiler barrier to the assertion macro and
> avoid all this grow_static_blocks() mess?  It seems a bit... fragile.
It is all rather unfortunate yes.. :/ I've tried to keep the grow stuff as conservative as possible while still covering all the weirdness I found. And while it was great fun, I do agree it would be much better to not have to do this.
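Roughly, the grow step only follows an unconditional jump when the jump's
destination is reached by that single jump; a sketch with hypothetical types
and names, not the actual objtool code:

struct insn {
	bool		uncond_jump;
	struct insn	*jump_dest;	/* target if uncond_jump */
	int		num_jump_srcs;	/* how many jumps land here */
	struct insn	*next;		/* next instruction in .text */
};

/* next instruction to pull into the static block, or NULL to stop growing */
static struct insn *static_block_next(struct insn *insn)
{
	if (insn->uncond_jump) {
		/* be conservative: only follow a unique jump */
		if (insn->jump_dest && insn->jump_dest->num_jump_srcs == 1)
			return insn->jump_dest;
		return NULL;
	}
	return insn->next;
}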
You're thinking of something like this?
static __always_inline void arch_static_assert(void)
{
	asm volatile ("1:\n\t"
		      ".pushsection .discard.jump_assert \n\t"
		      _ASM_ALIGN "\n\t"
		      _ASM_PTR "1b \n\t"
-		      ".popsection \n\t");
+		      ".popsection \n\t" ::: "memory");
}
That doesn't seem to matter much; see here:
static void ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
{
	struct rq *rq;

	if (!schedstat_enabled())
		return;

	rq = this_rq();
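For context (modulo kernel version), schedstat_enabled() is a static branch
keyed on sched_schedstats, roughly as below; that is what emits the NOP at
0x191f and the __jump_table entries naming sched_schedstats in the dumps
that follow:

/* roughly what kernel/sched/stats.h provides with CONFIG_SCHEDSTATS=y */
extern struct static_key_false sched_schedstats;
#define schedstat_enabled()	static_branch_unlikely(&sched_schedstats)

/*
 * static_branch_unlikely() plants a 5-byte NOP that can be live-patched
 * into a JMP when the key is enabled, and records the site plus the key
 * in the __jump_table section.
 */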
$ objdump -dr build/kernel/sched/core.o
0000000000001910 <ttwu_stat>:
    1910:	e8 00 00 00 00       	callq  1915 <ttwu_stat+0x5>
			1911: R_X86_64_PC32	__fentry__-0x4
    1915:	41 57                	push   %r15
    1917:	41 56                	push   %r14
    1919:	41 55                	push   %r13
    191b:	41 54                	push   %r12
    191d:	55                   	push   %rbp
    191e:	53                   	push   %rbx
    191f:	0f 1f 44 00 00       	nopl   0x0(%rax,%rax,1)
    1924:	eb 25                	jmp    194b <ttwu_stat+0x3b>
    1926:	41 89 d5             	mov    %edx,%r13d
    1929:	41 89 f4             	mov    %esi,%r12d
    192c:	48 89 fb             	mov    %rdi,%rbx
    192f:	49 c7 c6 00 00 00 00 	mov    $0x0,%r14
			1932: R_X86_64_32S	runqueues
$ objdump -j __jump_table -sr build/kernel/sched/core.o
0000000000000048 R_X86_64_64  .text+0x000000000000191f
0000000000000050 R_X86_64_64  .text+0x0000000000001926
0000000000000058 R_X86_64_64  sched_schedstats
$ objdump -j .discard.jump_assert -dr build/kernel/sched/core.o
0000000000000000 R_X86_64_64 .text+0x000000000000192f
It still lifts random crud over that initial statement (the rq load).
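One way to read this: the lifted instructions (the argument saves and the
materialization of the runqueues address) never touch memory, and a "memory"
clobber only orders memory accesses, so the compiler stays free to schedule
them above the asm. A minimal, hypothetical illustration (not from the patch
set):

/* hypothetical example; 'stats' stands in for a global like runqueues */
extern int stats[64];

int example(int cpu)
{
	/* stand-in for arch_static_assert() with the "memory" clobber */
	asm volatile ("" ::: "memory");

	/*
	 * Computing &stats[cpu] is pure register arithmetic; nothing stops
	 * the compiler from scheduling that address computation above the
	 * asm, even though the load itself must stay below it.
	 */
	return stats[cpu];
}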