linux-stable/scripts/atomic/gen-atomic-fallback.sh

#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
ATOMICDIR=$(dirname "$0")

. ${ATOMICDIR}/atomic-tbl.sh
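
# This script walks atomics.tbl and emits the raw_atomic*() fallback header
# on stdout. A minimal invocation sketch (assuming, as with the sibling
# gen-atomic-*.sh scripts, that the table path is passed as "$1"; the
# canonical output path is managed by scripts/atomic/gen-atomics.sh and may
# differ between kernel versions):
#
#	sh scripts/atomic/gen-atomic-fallback.sh scripts/atomic/atomics.tbl \
#		> include/linux/atomic/atomic-arch-fallback.h
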
#gen_template_fallback(template, meta, pfx, name, sfx, order, atomic, int, args...)
gen_template_fallback()
{
	local template="$1"; shift
	local meta="$1"; shift
	local pfx="$1"; shift
	local name="$1"; shift
	local sfx="$1"; shift
	local order="$1"; shift
	local atomic="$1"; shift
	local int="$1"; shift

	local ret="$(gen_ret_type "${meta}" "${int}")"
	local retstmt="$(gen_ret_stmt "${meta}")"
	local params="$(gen_params "${int}" "${atomic}" "$@")"
	local args="$(gen_args "$@")"

	. ${template}
}
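
# Templates under fallbacks/ are sourced (". ${template}") with the locals
# above in scope, so they can splice them into the generated C. As an
# illustration, for the relaxed fetch_add variant of atomic64 a template
# would see roughly the following values (${atomicname} is set by
# gen_proto_order_variant() below):
#
#	atomic="atomic64"  int="s64"  pfx="fetch_"  name="add"  sfx=""
#	order="_relaxed"   atomicname="atomic64_fetch_add_relaxed"
#	ret="s64"  retstmt="return "  params="s64 i, atomic64_t *v"  args="i, v"
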
#gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
gen_order_fallback()
{
	local meta="$1"; shift
	local pfx="$1"; shift
	local name="$1"; shift
	local sfx="$1"; shift
	local order="$1"; shift

	local tmpl_order=${order#_}
	local tmpl="${ATOMICDIR}/fallbacks/${tmpl_order:-fence}"

	gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
}
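
# The ordering suffix maps directly onto a barrier template: "_acquire"
# selects fallbacks/acquire, "_release" selects fallbacks/release, and the
# empty suffix (a fully-ordered op synthesised from its _relaxed form)
# selects fallbacks/fence, which in this tree wraps the _relaxed call
# between full barriers.
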
#gen_proto_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
gen_proto_fallback()
{
	local meta="$1"; shift
	local pfx="$1"; shift
	local name="$1"; shift
	local sfx="$1"; shift
	local order="$1"; shift

	local tmpl="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"

	gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
}
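
# Sketch of the lookup done by find_fallback_template() (as implemented in
# atomic-tbl.sh): it tries the most specific template name first and then
# the bare operation name -- e.g. for inc_return_acquire it would try
# fallbacks/inc_return_acquire before fallbacks/inc -- and prints nothing
# when neither exists, leaving ${tmpl} empty.
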
#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, args...)
gen_proto_order_variant()
{
	local meta="$1"; shift
	local pfx="$1"; shift
	local name="$1"; shift
	local sfx="$1"; shift
	local order="$1"; shift
	local atomic="$1"; shift
	local int="$1"; shift

	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
	local basename="${atomic}_${pfx}${name}${sfx}"

	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"

	local ret="$(gen_ret_type "${meta}" "${int}")"
	local retstmt="$(gen_ret_stmt "${meta}")"
	local params="$(gen_params "${int}" "${atomic}" "$@")"
	local args="$(gen_args "$@")"

	gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"

	printf "static __always_inline ${ret}\n"
	printf "raw_${atomicname}(${params})\n"
	printf "{\n"

	# Where there is no possible fallback, this order variant is mandatory
	# and must be provided by arch code. Add a comment to the header to
	# make this obvious.
	#
	# Ideally we'd error on a missing definition, but arch code might
	# define this order variant as a C function without a preprocessor
	# symbol.
	if [ -z "${template}" ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then
printf "\t${retstmt}arch_${atomicname}(${args});\n"
printf "}\n\n"
		return
	fi
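
	# For instance (a sketch, assuming atomic_read, which has no fallback
	# template and no relaxed variants), the early return above leaves a
	# raw_ op that simply requires the arch_ op to exist:
	#
	#	static __always_inline int
	#	raw_atomic_read(const atomic_t *v)
	#	{
	#		return arch_atomic_read(v);
	#	}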
printf "#if defined(arch_${atomicname})\n"
printf "\t${retstmt}arch_${atomicname}(${args});\n"
	# Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops
	if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then
		printf "#elif defined(arch_${basename}_relaxed)\n"
		gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
	fi
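
	# A sketch of what gen_order_fallback emits for an acquire op built
	# from a relaxed op (using the fetch_andnot example):
	#
	#	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
	#	__atomic_acquire_fence();
	#	return ret;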
	# Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops
	if [ ! -z "${order}" ]; then
		printf "#elif defined(arch_${basename})\n"
printf "\t${retstmt}arch_${basename}(${args});\n"
	fi
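
	# e.g. raw_atomic_fetch_andnot_acquire can fall back to the full-order
	# op, emitting "return arch_atomic_fetch_andnot(i, v);" in this branch.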
printf "#else\n"
	if [ ! -z "${template}" ]; then
		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
	else
		printf "#error \"Unable to define raw_${atomicname}\"\n"
	fi
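
	# In the #else branch, a fallback template (if any) supplies the op in
	# terms of another raw_ op, e.g. (a sketch, from the fetch_andnot
	# template): "return raw_atomic_fetch_and_acquire(~i, v);"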
printf "#endif\n"
printf "}\n\n"
}
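
# Putting the branches together, the ifdeffery generated for one ordering
# variant has the following shape (a sketch for atomic_fetch_andnot_acquire):
#
#	static __always_inline int
#	raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
#	{
#	#if defined(arch_atomic_fetch_andnot_acquire)
#		return arch_atomic_fetch_andnot_acquire(i, v);
#	#elif defined(arch_atomic_fetch_andnot_relaxed)
#		int ret = arch_atomic_fetch_andnot_relaxed(i, v);
#		__atomic_acquire_fence();
#		return ret;
#	#elif defined(arch_atomic_fetch_andnot)
#		return arch_atomic_fetch_andnot(i, v);
#	#else
#		return raw_atomic_fetch_and_acquire(~i, v);
#	#endif
#	}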
#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
gen_proto_order_variants()
{
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
local atomic="$1"
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
if meta_has_acquire "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
fi
if meta_has_release "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
fi
if meta_has_relaxed "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
fi
}
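
# Illustrative sketch (a comment only, not part of the generated output):
# for an atomics.tbl entry such as fetch_andnot on atomic_t whose ${meta}
# flags all ordering variants, the calls above emit the per-variant
# ifdeffery for:
#
#   raw_atomic_fetch_andnot()
#   raw_atomic_fetch_andnot_acquire()
#   raw_atomic_fetch_andnot_release()
#   raw_atomic_fetch_andnot_relaxed()
#
# with each variant's definition selected by whichever arch_atomic_*()
# variants the architecture actually provides.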
#gen_basic_fallbacks(basename)
gen_basic_fallbacks()
{
local basename="$1"; shift
cat << EOF
#define raw_${basename}_acquire arch_${basename}
#define raw_${basename}_release arch_${basename}
#define raw_${basename}_relaxed arch_${basename}
EOF
}
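
# For illustration (a sketch, substituting the example basename
# "atomic_fetch_andnot"), the here-document above expands to:
#
#   #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
#   #define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
#   #define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
#
# i.e. every ordering variant falls back to the fully-ordered arch_ op,
# which is at least as strongly ordered as any of them requires.

#gen_order_fallbacks(xchg)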
gen_order_fallbacks()
{
local xchg="$1"; shift
cat <<EOF
#define raw_${xchg}_relaxed arch_${xchg}_relaxed
#ifdef arch_${xchg}_acquire
#define raw_${xchg}_acquire arch_${xchg}_acquire
#else
#define raw_${xchg}_acquire(...) \\
__atomic_op_acquire(arch_${xchg}, __VA_ARGS__)
#endif
#ifdef arch_${xchg}_release
#define raw_${xchg}_release arch_${xchg}_release
#else
#define raw_${xchg}_release(...) \\
__atomic_op_release(arch_${xchg}, __VA_ARGS__)
#endif
#ifdef arch_${xchg}
#define raw_${xchg} arch_${xchg}
#else
#define raw_${xchg}(...) \\
__atomic_op_fence(arch_${xchg}, __VA_ARGS__)
#endif
EOF
}
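
# For illustration (a sketch, substituting the example operation "xchg"),
# the acquire stanza above expands to:
#
#   #ifdef arch_xchg_acquire
#   #define raw_xchg_acquire arch_xchg_acquire
#   #else
#   #define raw_xchg_acquire(...) \
#           __atomic_op_acquire(arch_xchg, __VA_ARGS__)
#   #endif
#
# with equivalent stanzas for _release and the fully-ordered op, while the
# _relaxed form is taken directly from arch_xchg_relaxed.

#gen_xchg_order_fallback(xchg, order)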
gen_xchg_order_fallback()
{
local xchg="$1"; shift
local order="$1"; shift
local forder="${order:-_fence}"

	printf "#if defined(arch_${xchg}${order})\n"
	printf "#define raw_${xchg}${order} arch_${xchg}${order}\n"

	if [ "${order}" != "_relaxed" ]; then
		printf "#elif defined(arch_${xchg}_relaxed)\n"
		printf "#define raw_${xchg}${order}(...) \\\\\n"
		printf "	__atomic_op${forder}(arch_${xchg}, __VA_ARGS__)\n"
	fi

	if [ ! -z "${order}" ]; then
		printf "#elif defined(arch_${xchg})\n"
		printf "#define raw_${xchg}${order} arch_${xchg}\n"
	fi

	printf "#else\n"
	printf "extern void raw_${xchg}${order}_not_implemented(void);\n"
	printf "#define raw_${xchg}${order}(...) raw_${xchg}${order}_not_implemented()\n"
	printf "#endif\n\n"
}
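
# Illustrative only (a hand trace of gen_xchg_order_fallback above, not part
# of the script itself): for xchg="xchg" and order="_acquire" it emits
# ifdeffery of this shape:
#
# | #if defined(arch_xchg_acquire)
# | #define raw_xchg_acquire arch_xchg_acquire
# | #elif defined(arch_xchg_relaxed)
# | #define raw_xchg_acquire(...) \
# |	__atomic_op_acquire(arch_xchg, __VA_ARGS__)
# | #elif defined(arch_xchg)
# | #define raw_xchg_acquire arch_xchg
# | #else
# | extern void raw_xchg_acquire_not_implemented(void);
# | #define raw_xchg_acquire(...) raw_xchg_acquire_not_implemented()
# | #endif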

#gen_xchg_fallbacks(xchg)
gen_xchg_fallbacks()
{
	local xchg="$1"; shift

	for order in "" "_acquire" "_release" "_relaxed"; do
		gen_xchg_order_fallback "${xchg}" "${order}"
	done
}
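
# As a sketch of the result: gen_xchg_fallbacks "xchg" emits the ifdeffery
# for raw_xchg, raw_xchg_acquire, raw_xchg_release and raw_xchg_relaxed; the
# empty order string is the fully-ordered variant.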

#gen_try_cmpxchg_fallback(cmpxchg, order)
gen_try_cmpxchg_fallback()
{
	local cmpxchg="$1"; shift;
	local order="$1"; shift;

cat <<EOF
#define raw_try_${cmpxchg}${order}(_ptr, _oldp, _new) \\
({ \\
	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \\
	___r = raw_${cmpxchg}${order}((_ptr), ___o, (_new)); \\
	if (unlikely(___r != ___o)) \\
		*___op = ___r; \\
	likely(___r == ___o); \\
})
EOF
}
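
# For illustration, with cmpxchg="cmpxchg" and an empty order the heredoc
# above produces:
#
# | #define raw_try_cmpxchg(_ptr, _oldp, _new) \
# | ({ \
# |	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
# |	___r = raw_cmpxchg((_ptr), ___o, (_new)); \
# |	if (unlikely(___r != ___o)) \
# |		*___op = ___r; \
# |	likely(___r == ___o); \
# | })
#
# i.e. the macro succeeds iff *_ptr held *_oldp, writes the observed value
# back through _oldp on failure, and evaluates to a boolean. A typical
# (hypothetical) caller retries in a loop:
#
# | old = READ_ONCE(*ptr);
# | do {
# |	new = old + 1;
# | } while (!raw_try_cmpxchg(ptr, &old, new));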

#gen_try_cmpxchg_order_fallback(cmpxchg, order)
gen_try_cmpxchg_order_fallback()
{
	local cmpxchg="$1"; shift
	local order="$1"; shift
	local forder="${order:-_fence}"

	printf "#if defined(arch_try_${cmpxchg}${order})\n"
	printf "#define raw_try_${cmpxchg}${order} arch_try_${cmpxchg}${order}\n"

	if [ "${order}" != "_relaxed" ]; then
		printf "#elif defined(arch_try_${cmpxchg}_relaxed)\n"
		printf "#define raw_try_${cmpxchg}${order}(...) \\\\\n"
		printf "	__atomic_op${forder}(arch_try_${cmpxchg}, __VA_ARGS__)\n"
	fi

	if [ ! -z "${order}" ]; then
		printf "#elif defined(arch_try_${cmpxchg})\n"
		printf "#define raw_try_${cmpxchg}${order} arch_try_${cmpxchg}\n"
	fi

	printf "#else\n"
	gen_try_cmpxchg_fallback "${cmpxchg}" "${order}"
	printf "#endif\n\n"
}
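
# Note (a reading of the code above, not emitted output): unlike the plain
# xchg/cmpxchg case, the final #else branch never falls back to a
# _not_implemented() stub; it emits the try_cmpxchg macro built on
# raw_${cmpxchg}${order}, which the earlier gen_xchg_fallbacks output has
# already defined (possibly itself as a _not_implemented() stub).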

#gen_try_cmpxchg_fallbacks(cmpxchg)
gen_try_cmpxchg_fallbacks()
{
	local cmpxchg="$1"; shift;

	for order in "" "_acquire" "_release" "_relaxed"; do
		gen_try_cmpxchg_order_fallback "${cmpxchg}" "${order}"
	done
}

#gen_cmpxchg_local_fallbacks(cmpxchg)
gen_cmpxchg_local_fallbacks()
{
	local cmpxchg="$1"; shift

	printf "#define raw_${cmpxchg} arch_${cmpxchg}\n\n"
	printf "#ifdef arch_try_${cmpxchg}\n"
	printf "#define raw_try_${cmpxchg} arch_try_${cmpxchg}\n"
	printf "#else\n"
	gen_try_cmpxchg_fallback "${cmpxchg}" ""
	printf "#endif\n\n"
}
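
# A sketch of the result for cmpxchg="cmpxchg_local": raw_cmpxchg_local maps
# straight to arch_cmpxchg_local with no fallback (the script assumes the
# architecture provides it), while raw_try_cmpxchg_local either maps to
# arch_try_cmpxchg_local or, failing that, is the generated try_cmpxchg
# macro built on raw_cmpxchg_local.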

cat << EOF
// SPDX-License-Identifier: GPL-2.0

// Generated by $0
// DO NOT MODIFY THIS FILE DIRECTLY

#ifndef _LINUX_ATOMIC_FALLBACK_H
#define _LINUX_ATOMIC_FALLBACK_H

#include <linux/compiler.h>

EOF

for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128"; do
	gen_xchg_fallbacks "${xchg}"
done
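
# Generate the raw_try_cmpxchg*() ordering variants. Where an architecture
# provides no try_cmpxchg*() at all, the fallback is built from the
# corresponding raw_cmpxchg*() plus a comparison against the old value
# (the fallbacks/ templates supply the implementation).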
for cmpxchg in "cmpxchg" "cmpxchg64" "cmpxchg128"; do
	gen_try_cmpxchg_fallbacks "${cmpxchg}"
done
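
# The cmpxchg*_local() operations have no acquire/release/relaxed variants;
# generate a single raw_*() definition for each (no ordering suffix, hence
# the empty second argument).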
for cmpxchg in "cmpxchg_local" "cmpxchg64_local" "cmpxchg128_local"; do
	gen_cmpxchg_local_fallbacks "${cmpxchg}" ""
done
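
# sync_cmpxchg has no generated fallback; alias it directly, emitting:
#
#	#define raw_sync_cmpxchg arch_sync_cmpxchg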
for cmpxchg in "sync_cmpxchg"; do
printf "#define raw_${cmpxchg} arch_${cmpxchg}\n\n"
done
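
# Generate the raw_atomic_<op>() definitions from the table passed in "$1".
# Each ordering variant gets its own ifdeffery checking for the
# arch_atomic_*() ordering variants, e.g. for fetch_andnot_acquire:
#
#	#if defined(arch_atomic_fetch_andnot_acquire)
#	#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
#	#elif defined(arch_atomic_fetch_andnot_relaxed)
#	static __always_inline int
#	raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
#	{
#		int ret = arch_atomic_fetch_andnot_relaxed(i, v);
#		__atomic_acquire_fence();
#		return ret;
#	}
#	#elif defined(arch_atomic_fetch_andnot)
#	#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
#	#else
#	static __always_inline int
#	raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
#	{
#		return raw_atomic_fetch_and_acquire(~i, v);
#	}
#	#endif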
grep '^[a-z]' "$1" | while read name meta args; do
	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
done
cat <<EOF
#ifdef CONFIG_GENERIC_ATOMIC64
#include <asm-generic/atomic64.h>
#endif
EOF
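
# Repeat for the 64-bit atomics, with s64 as the integer type. When
# CONFIG_GENERIC_ATOMIC64 is set, the generic library header included
# above provides the base arch_atomic64_*() operations.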
grep '^[a-z]' "$1" | while read name meta args; do
	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
done
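
# Close the header's include guard.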
cat <<EOF
#endif /* _LINUX_ATOMIC_FALLBACK_H */
EOF