# SPDX-License-Identifier: GPL-2.0
#
# Internal CPU capabilities constants, keep this list sorted
ALWAYS_BOOT
ALWAYS_SYSTEM
BTI
# Unreliable: use system_supports_32bit_el0() instead.
HAS_32BIT_EL0_DO_NOT_USE
HAS_32BIT_EL1
HAS_ADDRESS_AUTH
HAS_ADDRESS_AUTH_ARCH_QARMA3
HAS_ADDRESS_AUTH_ARCH_QARMA5
HAS_ADDRESS_AUTH_IMP_DEF
HAS_AMU_EXTN
HAS_ARMv8_4_TTL
HAS_CACHE_DIC
HAS_CACHE_IDC
HAS_CNP
HAS_CRC32
HAS_DCPODP
HAS_DCPOP
HAS_DIT
HAS_E0PD
HAS_ECV
HAS_ECV_CNTPOFF
HAS_EPAN
HAS_EVT
HAS_FGT
HAS_FPSIMD
HAS_GENERIC_AUTH
HAS_GENERIC_AUTH_ARCH_QARMA3
HAS_GENERIC_AUTH_ARCH_QARMA5
HAS_GENERIC_AUTH_IMP_DEF
HAS_GIC_CPUIF_SYSREGS
HAS_GIC_PRIO_MASKING
HAS_GIC_PRIO_RELAXED_SYNC
HAS_HCX
HAS_LDAPR
HAS_LPA2
HAS_LSE_ATOMICS
HAS_MOPS
HAS_NESTED_VIRT
HAS_PAN
HAS_RAS_EXTN
HAS_S1PIE
HAS_RNG
HAS_SB
HAS_STAGE2_FWB
HAS_TCR2
HAS_TIDCP1
HAS_TLB_RANGE
HAS_VIRT_HOST_EXTN
HAS_WFXT
HW_DBM
KVM_HVHE
KVM_PROTECTED_MODE
MISMATCHED_CACHE_TYPE
MTE
MTE_ASYMM
SME
SME_FA64
SME2
SPECTRE_V2
SPECTRE_V3A
SPECTRE_V4
SPECTRE_BHB
SSBS
SVE
UNMAP_KERNEL_AT_EL0
WORKAROUND_834220
WORKAROUND_843419
WORKAROUND_845719
WORKAROUND_858921
WORKAROUND_1418040
WORKAROUND_1463225
WORKAROUND_1508412
WORKAROUND_1542419
WORKAROUND_1742098
WORKAROUND_1902691
WORKAROUND_2038923
WORKAROUND_2064142
WORKAROUND_2077057
WORKAROUND_2457168
WORKAROUND_2645198
WORKAROUND_2658417
WORKAROUND_AMPERE_AC03_CPU_38
WORKAROUND_TRBE_OVERWRITE_FILL_MODE
WORKAROUND_TSB_FLUSH_FAILURE
WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
WORKAROUND_CAVIUM_23154
WORKAROUND_CAVIUM_27456
WORKAROUND_CAVIUM_30115
WORKAROUND_CAVIUM_TX2_219_PRFM
WORKAROUND_CAVIUM_TX2_219_TVM
WORKAROUND_CLEAN_CACHE
WORKAROUND_DEVICE_LOAD_ACQUIRE
WORKAROUND_NVIDIA_CARMEL_CNP
WORKAROUND_QCOM_FALKOR_E1003
WORKAROUND_REPEAT_TLBI
WORKAROUND_SPECULATIVE_AT
WORKAROUND_SPECULATIVE_UNPRIV_LOAD