mirror of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
s390/ctl_reg: make __ctl_load a full memory barrier
We have quite a lot of code that depends on the order of the __ctl_load inline assembly and subsequent memory accesses, like e.g. disabling lowcore protection and the writing to lowcore.

Since the __ctl_load macro does not have memory barrier semantics, nor any other dependencies, the compiler is, theoretically, free to shuffle code around. Or in other words: storing to lowcore could happen before lowcore protection is disabled.

In order to avoid this class of potential bugs simply add a full memory barrier to the __ctl_load macro.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This commit is contained in:
parent 49def18533
commit e991c24d68
1 changed file with 3 additions and 1 deletion
@@ -15,7 +15,9 @@
 	BUILD_BUG_ON(sizeof(addrtype) != (high - low + 1) * sizeof(long));\
 	asm volatile(						\
 		"	lctlg	%1,%2,%0\n"			\
-		: : "Q" (*(addrtype *)(&array)), "i" (low), "i" (high));\
+		:						\
+		: "Q" (*(addrtype *)(&array)), "i" (low), "i" (high)	\
+		: "memory");					\
 }
 
 #define __ctl_store(array, low, high) {			\
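
The patch turns __ctl_load into a full compiler barrier by adding a "memory" clobber to its inline assembly. As a rough illustration of why that matters, here is a minimal user-space sketch of the same compiler-barrier semantics; the fake_ctl_load_weak/fake_ctl_load_strong macros and the fake_cr0/lowcore_word variables are invented names for this example and are not part of the kernel code. The sketch only shows compiler-level ordering, which is the kind of reordering the commit message is concerned about.

/*
 * Minimal user-space sketch of the barrier semantics; this is not the
 * kernel macro itself.  All names here are invented for illustration.
 */
#include <stdio.h>

static unsigned long fake_cr0;     /* stands in for a control register */
static unsigned long lowcore_word; /* stands in for a store to lowcore */

/* No "memory" clobber: the compiler sees no memory side effects and may
 * reorder independent loads/stores around this statement. */
#define fake_ctl_load_weak(val) \
	asm volatile("" : : "r" (val))

/* With a "memory" clobber the asm acts as a full compiler barrier, like
 * __ctl_load after the patch: the compiler may not move memory accesses
 * across it in either direction. */
#define fake_ctl_load_strong(val) \
	asm volatile("" : : "r" (val) : "memory")

int main(void)
{
	fake_cr0 = 0;                   /* "disable lowcore protection"    */
	fake_ctl_load_strong(fake_cr0); /* barrier: ordering is now fixed  */
	lowcore_word = 42;              /* store must stay after the barrier */

	printf("cr0=%lu lowcore=%lu\n", fake_cr0, lowcore_word);
	return 0;
}

With fake_ctl_load_weak instead, the compiler would be allowed to move the store to lowcore_word above the asm statement, which is exactly the class of bug the patch rules out for __ctl_load.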