linux-stable/arch/arm/mach-shmobile/headsmp-scu.S
Linus Walleij a9ff696160 ARM: mm: Make virt_to_pfn() a static inline
Making virt_to_pfn() a static inline taking a strongly typed
(const void *) makes the contract of passing a pointer of that
type to the function explicit and exposes any misuse of the
macro virt_to_pfn() acting polymorphically and accepting many types
such as (void *), (uintptr_t) or (unsigned long) as arguments
without warnings.
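
To illustrate, a minimal sketch of the change (not the verbatim kernel
patch, assuming the usual ARM definitions of PAGE_OFFSET, PAGE_SHIFT
and PHYS_PFN_OFFSET):

  /* Before: a macro that silently accepts any pointer- or integer-like type */
  #define virt_to_pfn(kaddr) \
          ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
           PHYS_PFN_OFFSET)

  /* After: a static inline that makes the (const void *) contract explicit */
  static inline unsigned long virt_to_pfn(const void *p)
  {
          unsigned long kaddr = (unsigned long)p;

          return ((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET;
  }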

Doing this is a bit intrusive: virt_to_pfn() requires
PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and these are defined in
<asm/page.h>, so that header must be included *before* <asm/memory.h>.

The use of macros was obscuring the unclear inclusion order here,
as the macros would eventually be resolved, but a static inline
like this cannot be compiled with unresolved macros.

The naive solution of including <asm/page.h> at the top of
<asm/memory.h> does not work, because <asm/memory.h> sometimes
includes <asm/page.h> at the end of itself, which would create a
confusing inclusion loop. So instead, take the approach of always
unconditionally including <asm/page.h> at the end of <asm/memory.h>.

arch/arm uses <asm/memory.h> explicitly in a lot of places; however,
it turns out that if we just unconditionally include
<asm/memory.h> into <asm/page.h> and switch all inclusions of
<asm/memory.h> to <asm/page.h> instead, we enforce the right
order and <asm/memory.h> will always have access to the
definitions.

Put an inclusion guard in place, making it impossible to include
<asm/memory.h> explicitly.
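
A minimal sketch of the resulting header layout (the guard macro name
and the #error wording are illustrative, not the verbatim kernel headers):

  /* asm/page.h - the header users are expected to include */
  #ifndef _ASMARM_PAGE_H
  #define _ASMARM_PAGE_H

  /* PAGE_SHIFT, PHYS_PFN_OFFSET and friends are defined here... */

  #include <asm/memory.h>      /* sees the definitions above */

  #endif

  /* asm/memory.h - refuses to be included on its own */
  #ifndef _ASMARM_PAGE_H
  #error "Do not include <asm/memory.h> directly, include <asm/page.h> instead"
  #endif

  /* virt_to_pfn() and the rest of the memory definitions... */

  #include <asm/page.h>        /* unconditional; a no-op when reached via page.h */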

Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2023-05-29 11:27:08 +02:00

/* SPDX-License-Identifier: GPL-2.0+
 *
 * Shared SCU setup for mach-shmobile
 *
 * Copyright (C) 2012 Bastian Hecht
 */
#include <linux/linkage.h>
#include <linux/init.h>
#include <asm/page.h>

/*
 * Boot code for secondary CPUs.
 *
 * First we turn on L1 cache coherency for our CPU. Then we jump to
 * secondary_startup that invalidates the cache and hands over control
 * to the common ARM startup code.
 */
ENTRY(shmobile_boot_scu)
	@ r0 = SCU base address
	mrc	p15, 0, r1, c0, c0, 5	@ read MPIDR
	and	r1, r1, #3		@ mask out cpu ID
	lsl	r1, r1, #3		@ we will shift by cpu_id * 8 bits
	ldr	r2, [r0, #8]		@ SCU Power Status Register
	mov	r3, #3			@ 2-bit power status field for one CPU
	lsl	r3, r3, r1		@ shift mask to our CPU's byte
	bic	r2, r2, r3		@ Clear bits of our CPU (Run Mode)
	str	r2, [r0, #8]		@ write back
	b	secondary_startup
ENDPROC(shmobile_boot_scu)