Subject: Re: [PATCH v2 1/3] x86/entry: Clear extra registers beyond syscall arguments for 64bit kernels
On Mon, Feb 5, 2018 at 2:10 PM, Andy Lutomirski <> wrote:
> On Mon, Feb 5, 2018 at 9:58 PM, Linus Torvalds
> <> wrote:
>> On Mon, Feb 5, 2018 at 1:33 PM, Dan Williams <> wrote:
>>> On a suggestion from Arjan it also appears worthwhile to interleave
>>> 'mov' with 'xor'. Perf stat says that this test gets 3.45 instructions
>>> per cycle:
>> Ugh.
>> A "xor %reg/reg" is two bytes (three for the high regs due to REX
>> prefix). A "mov $0" is 7 bytes because unlike most of the ALU ops,
>> "mov" doesn't have a 8-bit expanding immediate.
>> So replacing those xors with movq's will add at least four bytes per
>> replacement. So you may well end up adding an L1 cache miss.
>> At which point "3.45 ipc" vs "2.88 ipc" is pretty much a non-issue.
>> I suspect that a bigger win would be if you try to interleave those
>> "xor" instructions with the "pushq" instructions in the entry code.
>> Those push instructions tend to be limited by the LSU store
>> bandwidth, so you can probably put in xor instructions almost for
>> free in there.
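As a sketch of what that interleaving could look like in the entry path
(the register pairing here is arbitrary and the pt_regs comments only
mimic the entry_64.S style, they are not taken from the actual code):

	pushq	%rdi		/* pt_regs->di */
	xorl	%r10d, %r10d	/* zeroing rides along with the store */
	pushq	%rsi		/* pt_regs->si */
	xorl	%r11d, %r11d
	pushq	%rdx		/* pt_regs->dx */
	xorl	%ebx, %ebx
	...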
> At the risk of over-optimizing a dead horse, what about:
> xorl %ebx, %ebx
> movq %rbx, %r10
> xorl %r11d, %r11d
> movq %rbx, %r12
> etc.
> We'll have a cycle of latency from xor to mov, but I'd be rather
> surprised if the CPU can't hide that.

Hmm, this again gets 2.88 ipc:

	for (i = 0; i < INT_MAX/1024; i++)
		asm(".rept 1024\n"
		    "xorl %%ebx, %%ebx\n"
		    "movq %%rbx, %%r10\n"
		    "xorq %%r11, %%r11\n"
		    "movq %%rbx, %%r12\n"
		    "xorq %%r13, %%r13\n"
		    "movq %%rbx, %%r14\n"
		    "xorq %%r15, %%r15\n"
		    ".endr\n"
		    : : : "r15", "r14", "r13", "r12",
			  "rbx", "r11", "r10");
