linux-stable/arch/arm64/kernel/efi-entry.S

/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * EFI entry point.
 *
 * Copyright (C) 2013, 2014 Red Hat, Inc.
 * Author: Mark Salter <msalter@redhat.com>
 */
#include <linux/linkage.h>
#include <linux/init.h>

#include <asm/assembler.h>

	__INIT

SYM_CODE_START(efi_enter_kernel)
	/*
	 * efi_pe_entry() will have copied the kernel image if necessary and we
	 * end up here with device tree address in x1 and the kernel entry
	 * point stored in x0. Save those values in registers which are
	 * callee preserved.
	 */
	ldr	w2, =primary_entry_offset
	add	x19, x0, x2		// relocated Image entrypoint
	mov	x20, x1			// DTB address

	/*
	 * Clean the copied Image to the PoC, and ensure it is not shadowed by
	 * stale icache entries from before relocation.
	 */
	ldr	w1, =kernel_size
	add	x1, x0, x1
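	// dcache_clean_poc takes a start address in x0 and an end address in
	// x1, i.e. the Image base and base + kernel_size here.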
	bl	dcache_clean_poc
	ic	ialluis
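	// No separate barrier is needed to complete the IC IALLUIS above:
	// the dcache_clean_poc call below already contains a DSB.
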
	/*
	 * Clean the remainder of this routine to the PoC
	 * so that we can safely disable the MMU and caches.
	 */
	adr	x0, 0f
	adr	x1, 3f
	bl	dcache_clean_poc
0:
	/*
	 * Turn off the D-cache and MMU. The stub may be running at either
	 * EL1 or EL2, so check CurrentEL to pick the right SCTLR register.
	 */
	mrs	x0, CurrentEL
	cmp	x0, #CurrentEL_EL2
	b.ne	1f
	mrs	x0, sctlr_el2
	bic	x0, x0, #1 << 0		// clear SCTLR.M
	bic	x0, x0, #1 << 2		// clear SCTLR.C
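	// pre_disable_mmu_workaround issues an ISB immediately before the
	// MSR that clears SCTLR_ELn.M, as required by Qualcomm Falkor
	// erratum E1041 (when that erratum workaround is configured in).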
	pre_disable_mmu_workaround
	msr	sctlr_el2, x0
	isb
	b	2f
1:
	mrs	x0, sctlr_el1
	bic	x0, x0, #1 << 0		// clear SCTLR.M
	bic	x0, x0, #1 << 2		// clear SCTLR.C
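	// As above: ISB before clearing SCTLR_EL1.M (Falkor erratum E1041).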
	pre_disable_mmu_workaround
	msr	sctlr_el1, x0
	isb
2:
	/*
	 * Jump to the relocated kernel entry point. Per the arm64 boot
	 * protocol, x0 carries the DTB address and x1-x3 must be zero.
	 */
	mov	x0, x20
	mov	x1, xzr
	mov	x2, xzr
	mov	x3, xzr
	br	x19
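	// Label 3 below marks the end of the region cleaned to the PoC
	// (see the "adr x1, 3f" above).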
3:
SYM_CODE_END(efi_enter_kernel)