linux-stable/arch
Mark Rutland 18ac0da526 locking/atomic: arm: fix sync ops
[ Upstream commit dda5f312bb ]

The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.

Fix this by defining sync ops with the required barriers.

Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.

Fixes: e54d2f6152 ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-2-mark.rutland@arm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-07-11 19:39:26 +02:00
alpha
arc
arm locking/atomic: arm: fix sync ops 2023-07-11 19:39:26 +02:00
arm64
csky csky: fix up lock_mm_and_find_vma() conversion 2023-07-01 13:14:47 +02:00
hexagon
ia64
loongarch
m68k
microblaze
mips
nios2
openrisc
parisc
powerpc
riscv
s390
sh
sparc
um
x86 x86/mm: Fix __swp_entry_to_pte() for Xen PV guests 2023-07-11 19:39:26 +02:00
xtensa
.gitignore
Kconfig