ca96b162bf
Intel CPUs have shipped with ERMS for over a decade, but this is not true
for AMD. In particular, one reasonably recent uarch (EPYC 7R13) does not
have it (or at least the bit is inactive when running on the Amazon EC2
cloud -- I found rather conflicting information about AMD CPUs vs. the
extension).
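
For reference, a minimal userspace sketch (not part of the change) of how
the ERMS bit can be probed: CPUID leaf 7, subleaf 0, reports ERMS in
EBX bit 9.

  #include <cpuid.h>
  #include <stdio.h>

  int main(void)
  {
  	unsigned int eax, ebx, ecx, edx;

  	/* CPUID.(EAX=7, ECX=0):EBX bit 9 advertises ERMS. */
  	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
  		puts("CPUID leaf 7 not supported");
  		return 1;
  	}
  	printf("ERMS: %s\n", (ebx & (1u << 9)) ? "yes" : "no");
  	return 0;
  }

The same information is exposed as the "erms" flag in /proc/cpuinfo.
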
The hand-rolled mov loops executed in this case are quite pessimal
compared to rep movsq for bigger sizes. While the exact crossover point
depends on the uarch, everyone is well south of 1KB AFAICS, and sizes
bigger than that are common.
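
As a rough illustration (not the actual copy_user_64.S code, and with
made-up helper names), the two strategies boil down to something like:

  #include <stddef.h>

  /* Bulk copy via rep movsq; len must be a multiple of 8 here. */
  static void copy_rep_movsq(void *dst, const void *src, size_t len)
  {
  	size_t qwords = len / 8;

  	asm volatile("rep movsq"
  		     : "+D" (dst), "+S" (src), "+c" (qwords)
  		     : : "memory");
  }

  /* The kind of open-coded loop that falls behind on larger sizes. */
  static void copy_mov_loop(void *dst, const void *src, size_t len)
  {
  	unsigned char *d = dst;
  	const unsigned char *s = src;

  	while (len--)
  		*d++ = *s++;
  }

The real user-copy routines additionally need STAC/CLAC and exception
table entries, which are omitted here.
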
While, technically, ancient CPUs may suffer from rep usage, gcc has been
emitting it for years all over kernel code, so I don't think this is a
legitimate concern.
Sample result from will-it-scale's read1_processes (4KB reads/s):
before: 1507021
after: 1721828 (+14%)
Note that the cutoff point for rep usage is set to 64 bytes, which is
way too conservative but I'm sticking to what was done in
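
Using the hypothetical helpers from the sketch above, the resulting
size-based dispatch on a non-ERMS CPU can be pictured roughly as:

  /* Illustrative only: 64-byte cutoff, no ERMS assumed. */
  static void copy_no_erms(void *dst, const void *src, size_t len)
  {
  	size_t bulk = len & ~(size_t)7;

  	if (len < 64) {
  		/* Small copies: loop overhead beats rep startup cost. */
  		copy_mov_loop(dst, src, len);
  		return;
  	}

  	/* Bulk in 8-byte chunks, then mop up the tail bytes. */
  	copy_rep_movsq(dst, src, bulk);
  	copy_mov_loop((unsigned char *)dst + bulk,
  		      (const unsigned char *)src + bulk, len & 7);
  }
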
Directory listing of arch/x86/lib/ at this commit:

.gitignore
atomic64_32.c
atomic64_386_32.S
atomic64_cx8_32.S
cache-smp.c
checksum_32.S
clear_page_64.S
cmdline.c
cmpxchg8b_emu.S
cmpxchg16b_emu.S
copy_mc.c
copy_mc_64.S
copy_page_64.S
copy_user_64.S
copy_user_uncached_64.S
cpu.c
csum-copy_64.S
csum-partial_64.c
csum-wrappers_64.c
delay.c
error-inject.c
getuser.S
hweight.S
inat.c
insn-eval.c
insn.c
iomap_copy_64.S
iomem.c
kaslr.c
Makefile
memcpy_32.c
memcpy_64.S
memmove_32.S
memmove_64.S
memset_64.S
misc.c
msr-reg-export.c
msr-reg.S
msr-smp.c
msr.c
pc-conf-reg.c
putuser.S
retpoline.S
string_32.c
strstr_32.c
usercopy.c
usercopy_32.c
usercopy_64.c
x86-opcode-map.txt