Mark Rutland 1cea22f585 locking/atomic: arm: fix sync ops
[ Upstream commit dda5f312bb ]

The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.
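
For context, before this fix arch/arm's sync_bitops.h simply mapped
the sync ops straight onto the regular bitops, along these lines (an
illustrative reconstruction, not a verbatim quote of the header):

  /* Sync ops were plain aliases of the regular bitops. */
  #define sync_set_bit(nr, p)             _set_bit(nr, p)
  #define sync_clear_bit(nr, p)           _clear_bit(nr, p)
  #define sync_change_bit(nr, p)          _change_bit(nr, p)
  #define sync_test_and_set_bit(nr, p)    _test_and_set_bit(nr, p)
  #define sync_test_and_clear_bit(nr, p)  _test_and_clear_bit(nr, p)
  #define sync_test_and_change_bit(nr, p) _test_and_change_bit(nr, p)
  #define sync_test_bit(nr, addr)         test_bit(nr, addr)

On ARMv6+ the regular RMW bitops are ldrex/strex based even in UP
builds, so the atomicity of the operation itself is preserved; what a
UP build compiles out is the barriers, and those are exactly what a
guest sharing memory with a hypervisor still needs.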

Fix this by defining sync ops with the required barriers.
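
A minimal sketch of the shape of the fix (illustrative only; the
helper name below is hypothetical, and the actual patch's naming and
placement differ): bracket the existing atomic op with unconditional
DMBs instead of the smp_*() barriers that compile away on UP kernels.
This assumes ARMv7's dmb(ish), which is acceptable here since Xen
already requires ARMv7:

  #include <linux/bitops.h>
  #include <asm/barrier.h>

  /* Hypothetical helper: fully-ordered test_and_set even on UP builds. */
  static inline int sync_test_and_set_bit_sketch(int nr,
                                                 volatile unsigned long *p)
  {
          int ret;

          dmb(ish);                       /* order all earlier accesses */
          ret = _test_and_set_bit(nr, p); /* ldrex/strex RMW, no barriers */
          dmb(ish);                       /* order all later accesses  */
          return ret;
  }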

Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.

Fixes: e54d2f6152 ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-2-mark.rutland@arm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-07-19 16:35:19 +02:00