powerpc updates for 5.11

  - Switch to the generic C VDSO, as well as some cleanups of our VDSO
    setup/handling code.
 
  - Support for KUAP (Kernel User Access Prevention) on systems using the hashed
    page table MMU, using memory protection keys.
 
  - Better handling of PowerVM SMT8 systems where all threads of a core do not
    share an L2, allowing the scheduler to make better scheduling decisions.
 
  - Further improvements to our machine check handling.
 
  - Show registers when unwinding interrupt frames during stack traces.
 
  - Improvements to our pseries (PowerVM) partition migration code.
 
  - Several series from Christophe refactoring and cleaning up various parts of
    the 32-bit code.
 
  - Other smaller features, fixes & cleanups.
 
 Thanks to:
   Alan Modra, Alexey Kardashevskiy, Andrew Donnellan, Aneesh Kumar K.V, Ard
   Biesheuvel, Athira Rajeev, Balamuruhan S, Bill Wendling, Cédric Le Goater,
   Christophe Leroy, Christophe Lombard, Colin Ian King, Daniel Axtens, David
   Hildenbrand, Frederic Barrat, Ganesh Goudar, Gautham R. Shenoy, Geert
   Uytterhoeven, Giuseppe Sacco, Greg Kurz, Harish, Jan Kratochvil, Jordan
   Niethe, Kaixu Xia, Laurent Dufour, Leonardo Bras, Madhavan Srinivasan, Mahesh
   Salgaonkar, Mathieu Desnoyers, Nathan Lynch, Nicholas Piggin, Oleg Nesterov,
   Oliver O'Halloran, Oscar Salvador, Po-Hsu Lin, Qian Cai, Qinglang Miao, Randy
   Dunlap, Ravi Bangoria, Sachin Sant, Sandipan Das, Sebastian Andrzej Siewior,
   Segher Boessenkool, Srikar Dronamraju, Tyrel Datwyler, Uwe Kleine-König,
   Vincent Stehlé, Youling Tang, Zhang Xiaoxu.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCAAxFiEEJFGtCPCthwEv2Y/bUevqPMjhpYAFAl/bURITHG1wZUBlbGxl
 cm1hbi5pZC5hdQAKCRBR6+o8yOGlgEzBEAC1Vwibcog2P9rkJPb0q3UGWSYSx25V
 h/LwwxtM9Tm14j/LZsSgkOgIsfMaWEBIw/8D4efQ7AX9aFo+R0c2DdQMx1MG5MXz
 gZk58+l3LwId6h9+OrwurpEW+ZmURLAtGMSyFdkeiZ3/XTnkbf1XnewC0QWQe56a
 EGLmjx1MFl45jspoy7UIUXsXoNJIfflEKhrgUzSUh8X2eLmvB9ws6A4BXxbVzyZl
 lZv3+uWimU2pFgdkB9jOCxoG4zFEr2o5ovLHG7zCCVo5JoXmTPQ5cMVBraH206ms
 +5vCmu4qI8uP5UlZW/mZfhrtDiMdHdQqqFOaQwVlOmoUbU6L6E6rxm3iVnov2Bbi
 iUgxoeJDxAb2cM2EWFK6oWVgr7+NkwvXM1IG8xtprhHrCdnC9r+psQr/dswb3LSg
 MJ7u/RCq3uixy2kWP8E0NEHw7ngQZ/ZKPqzfnmIWOC7tYUxgaL02I8Ff9/ZXAI2J
 CnmqFYOjrimHkcwXGOtKkXNvfU0DiL97qpK2AQNWElE8+bUUmpw+ltUrsdSycYmv
 Afc4WIcVrTA+a9laSLgjdZbolbNSa3p+cMIYdPrVx9g+xqygbxIKv+EDGNv1WHfD
 GU1gmohMY+ZkMOvFRMi8LAsEm0DH/etWE0py/8uyxDYKnGyD1Ur6452DStkmgGb2
 azmcaOyLdb+HXA==
 =Ga3K
 -----END PGP SIGNATURE-----

Merge tag 'powerpc-5.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

 - Switch to the generic C VDSO, as well as some cleanups of our VDSO
   setup/handling code.

 - Support for KUAP (Kernel User Access Prevention) on systems using the
   hashed page table MMU, using memory protection keys.

 - Better handling of PowerVM SMT8 systems where all threads of a core
   do not share an L2, allowing the scheduler to make better scheduling
   decisions.

 - Further improvements to our machine check handling.

 - Show registers when unwinding interrupt frames during stack traces.

 - Improvements to our pseries (PowerVM) partition migration code.

 - Several series from Christophe refactoring and cleaning up various
   parts of the 32-bit code.

 - Other smaller features, fixes & cleanups.

Thanks to: Alan Modra, Alexey Kardashevskiy, Andrew Donnellan, Aneesh
Kumar K.V, Ard Biesheuvel, Athira Rajeev, Balamuruhan S, Bill Wendling,
Cédric Le Goater, Christophe Leroy, Christophe Lombard, Colin Ian King,
Daniel Axtens, David Hildenbrand, Frederic Barrat, Ganesh Goudar,
Gautham R. Shenoy, Geert Uytterhoeven, Giuseppe Sacco, Greg Kurz,
Harish, Jan Kratochvil, Jordan Niethe, Kaixu Xia, Laurent Dufour,
Leonardo Bras, Madhavan Srinivasan, Mahesh Salgaonkar, Mathieu
Desnoyers, Nathan Lynch, Nicholas Piggin, Oleg Nesterov, Oliver
O'Halloran, Oscar Salvador, Po-Hsu Lin, Qian Cai, Qinglang Miao, Randy
Dunlap, Ravi Bangoria, Sachin Sant, Sandipan Das, Sebastian Andrzej
Siewior, Segher Boessenkool, Srikar Dronamraju, Tyrel Datwyler, Uwe
Kleine-König, Vincent Stehlé, Youling Tang, and Zhang Xiaoxu.

* tag 'powerpc-5.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (304 commits)
  powerpc/32s: Fix cleanup_cpu_mmu_context() compile bug
  powerpc: Add config fragment for disabling -Werror
  powerpc/configs: Add ppc64le_allnoconfig target
  powerpc/powernv: Rate limit opal-elog read failure message
  powerpc/pseries/memhotplug: Quieten some DLPAR operations
  powerpc/ps3: use dma_mapping_error()
  powerpc: force inlining of csum_partial() to avoid multiple csum_partial() with GCC10
  powerpc/perf: Fix Threshold Event Counter Multiplier width for P10
  powerpc/mm: Fix hugetlb_free_pmd_range() and hugetlb_free_pud_range()
  KVM: PPC: Book3S HV: Fix mask size for emulated msgsndp
  KVM: PPC: fix comparison to bool warning
  KVM: PPC: Book3S: Assign boolean values to a bool variable
  powerpc: Inline setup_kup()
  powerpc/64s: Mark the kuap/kuep functions non __init
  KVM: PPC: Book3S HV: XIVE: Add a comment regarding VP numbering
  powerpc/xive: Improve error reporting of OPAL calls
  powerpc/xive: Simplify xive_do_source_eoi()
  powerpc/xive: Remove P9 DD1 flag XIVE_IRQ_FLAG_EOI_FW
  powerpc/xive: Remove P9 DD1 flag XIVE_IRQ_FLAG_MASK_FW
  powerpc/xive: Remove P9 DD1 flag XIVE_IRQ_FLAG_SHIFT_BUG
  ...
Commit 8a5be36b93 (Linus Torvalds, 2020-12-17 13:34:25 -08:00)
258 changed files with 6041 additions and 4995 deletions

@ -66,7 +66,7 @@ config NEED_PER_CPU_PAGE_FIRST_CHUNK
config NR_IRQS
int "Number of virtual interrupt numbers"
range 32 32768
range 32 1048576
default "512"
help
This defines the number of virtual interrupt numbers the kernel
@ -87,7 +87,7 @@ config PPC_WATCHDOG
help
This is a placeholder when the powerpc hardlockup detector
watchdog is selected (arch/powerpc/kernel/watchdog.c). It is
seleted via the generic lockup detector menu which is why we
selected via the generic lockup detector menu which is why we
have no standalone config option for it here.
config STACKTRACE_SUPPORT
@ -177,6 +177,7 @@ config PPC
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
select GENERIC_TIME_VSYSCALL
select GENERIC_GETTIMEOFDAY
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_HUGE_VMAP if PPC_BOOK3S_64 && PPC_RADIX_MMU
select HAVE_ARCH_JUMP_LABEL
@ -207,6 +208,7 @@ config PPC
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS if GCC_VERSION >= 50200 # plugin support on gcc <= 5.1 is buggy on PPC
select HAVE_GENERIC_VDSO
select HAVE_HW_BREAKPOINT if PERF_EVENTS && (PPC_BOOK3S || PPC_8xx)
select HAVE_IDE
select HAVE_IOREMAP_PROT
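
The GENERIC_GETTIMEOFDAY and HAVE_GENERIC_VDSO selects added above are what wire powerpc into the generic C vDSO mentioned in the pull description. The user-visible effect is that an ordinary clock_gettime() call is normally satisfied entirely in userspace; a minimal sketch using only standard libc, nothing powerpc-specific assumed:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
            struct timespec ts;

            /* With a working vDSO this typically completes without entering the kernel. */
            clock_gettime(CLOCK_MONOTONIC, &ts);
            printf("monotonic: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
            return 0;
    }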
@ -312,6 +314,10 @@ config GENERIC_BUG
default y
depends on BUG
config GENERIC_BUG_RELATIVE_POINTERS
def_bool y
depends on GENERIC_BUG
config SYS_SUPPORTS_APM_EMULATION
default y if PMAC_APM_EMU
bool
@ -418,6 +424,7 @@ config HUGETLB_PAGE_SIZE_VARIABLE
config MATH_EMULATION
bool "Math emulation"
depends on 4xx || PPC_8xx || PPC_MPC832x || BOOKE
select PPC_FPU_REGS
help
Some PowerPC chips designed for embedded applications do not have
a floating-point unit and therefore do not implement the
@ -657,9 +664,15 @@ config IRQ_ALL_CPUS
reported with SMP Power Macintoshes with this option enabled.
config NUMA
bool "NUMA support"
depends on PPC64
default y if SMP && PPC_PSERIES
bool "NUMA Memory Allocation and Scheduler Support"
depends on PPC64 && SMP
default y if PPC_PSERIES || PPC_POWERNV
help
Enable NUMA (Non-Uniform Memory Access) support.
The kernel will try to allocate memory used by a CPU on the
local memory controller of the CPU and add some more
NUMA awareness to the kernel.
config NODES_SHIFT
int
@ -793,8 +806,7 @@ config DATA_SHIFT_BOOL
bool "Set custom data alignment"
depends on ADVANCED_OPTIONS
depends on STRICT_KERNEL_RWX || DEBUG_PAGEALLOC
depends on PPC_BOOK3S_32 || (PPC_8xx && !PIN_TLB_DATA && \
(!PIN_TLB_TEXT || !STRICT_KERNEL_RWX))
depends on PPC_BOOK3S_32 || (PPC_8xx && !PIN_TLB_DATA && !STRICT_KERNEL_RWX)
help
This option allows you to set the kernel data alignment. When
RAM is mapped by blocks, the alignment needs to fit the size and


@ -374,6 +374,11 @@ ppc64le_allmodconfig:
$(Q)$(MAKE) KCONFIG_ALLCONFIG=$(srctree)/arch/powerpc/configs/le.config \
-f $(srctree)/Makefile allmodconfig
PHONY += ppc64le_allnoconfig
ppc64le_allnoconfig:
$(Q)$(MAKE) KCONFIG_ALLCONFIG=$(srctree)/arch/powerpc/configs/ppc64le.config \
-f $(srctree)/Makefile allnoconfig
PHONY += ppc64_book3e_allmodconfig
ppc64_book3e_allmodconfig:
$(Q)$(MAKE) KCONFIG_ALLCONFIG=$(srctree)/arch/powerpc/configs/85xx-64bit.config \
@ -405,18 +410,24 @@ PHONY += install
install:
$(Q)$(MAKE) $(build)=$(boot) install
PHONY += vdso_install
vdso_install:
ifdef CONFIG_PPC64
$(Q)$(MAKE) $(build)=arch/$(ARCH)/kernel/vdso64 $@
endif
ifdef CONFIG_VDSO32
$(Q)$(MAKE) $(build)=arch/$(ARCH)/kernel/vdso32 $@
endif
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
ifeq ($(KBUILD_EXTMOD),)
# We need to generate vdso-offsets.h before compiling certain files in kernel/.
# In order to do that, we should use the archprepare target, but we can't since
# asm-offsets.h is included in some files used to generate vdso-offsets.h, and
# asm-offsets.h is built in prepare0, for which archprepare is a dependency.
# Therefore we need to generate the header after prepare0 has been made, hence
# this hack.
prepare: vdso_prepare
vdso_prepare: prepare0
$(if $(CONFIG_VDSO32),$(Q)$(MAKE) \
$(build)=arch/powerpc/kernel/vdso32 include/generated/vdso32-offsets.h)
$(if $(CONFIG_PPC64),$(Q)$(MAKE) \
$(build)=arch/powerpc/kernel/vdso64 include/generated/vdso64-offsets.h)
endif
archprepare: checkbin
archheaders:


@ -21,7 +21,11 @@
all: $(obj)/zImage
ifdef CROSS32_COMPILE
ifdef CONFIG_CC_IS_CLANG
BOOTCC := $(CROSS32_COMPILE)clang
else
BOOTCC := $(CROSS32_COMPILE)gcc
endif
BOOTAR := $(CROSS32_COMPILE)ar
else
BOOTCC := $(CC)


@ -21,13 +21,6 @@ extern int lv1_get_logical_ppe_id(u64 *out_1);
extern int lv1_get_repository_node_value(u64 in_1, u64 in_2, u64 in_3,
u64 in_4, u64 in_5, u64 *out_1, u64 *out_2);
#ifdef DEBUG
#define DBG(fmt...) printf(fmt)
#else
static inline int __attribute__ ((format (printf, 1, 2))) DBG(
const char *fmt, ...) {return 0;}
#endif
BSS_STACK(4096);
/* A buffer that may be edited by tools operating on a zImage binary so as to


@ -42,14 +42,11 @@ udelay:
* (nanoseconds + (timebase_period_ns - 1 )) / timebase_period_ns
* timebase_period_ns defaults to 60 (16.6MHz) */
mflr r5
bl 0f
bcl 20,31,0f
0: mflr r6
mtlr r5
lis r5,0b@ha
addi r5,r5,0b@l
subf r5,r5,r6 /* In case we're relocated */
addis r5,r5,timebase_period_ns@ha
lwz r5,timebase_period_ns@l(r5)
addis r5,r6,(timebase_period_ns-0b)@ha
lwz r5,(timebase_period_ns-0b)@l(r5)
add r4,r4,r5
addi r4,r4,-1
divw r4,r4,r5 /* BUS ticks */
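
The comment above describes a round-up (ceiling) division from nanoseconds to timebase ticks. The same arithmetic as a C sketch, using the default 60 ns period (roughly 16.6MHz) that the comment mentions:

    /* Round a delay in nanoseconds up to whole timebase ("BUS") ticks. */
    static unsigned int ns_to_timebase_ticks(unsigned int ns, unsigned int timebase_period_ns)
    {
            return (ns + timebase_period_ns - 1) / timebase_period_ns;
    }

    /* e.g. 1000 ns with the default 60 ns period: (1000 + 59) / 60 = 17 ticks */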


@ -46,6 +46,8 @@ compression=.gz
uboot_comp=gzip
pie=
format=
notext=
rodynamic=
# cross-compilation prefix
CROSS=
@ -353,6 +355,8 @@ epapr)
platformo="$object/pseries-head.o $object/epapr.o $object/epapr-wrapper.o"
link_address='0x20000000'
pie=-pie
notext='-z notext'
rodynamic=$(if ${CROSS}ld -V 2>&1 | grep -q LLD ; then echo "-z rodynamic"; fi)
;;
mvme5100)
platformo="$object/fixed-head.o $object/mvme5100.o"
@ -493,7 +497,7 @@ if [ "$platform" != "miboot" ]; then
text_start="-Ttext $link_address"
fi
#link everything
${CROSS}ld -m $format -T $lds $text_start $pie $nodl -o "$ofile" $map \
${CROSS}ld -m $format -T $lds $text_start $pie $nodl $rodynamic $notext -o "$ofile" $map \
$platformo $tmp $object/wrapper.a
rm $tmp
fi


@ -34,6 +34,17 @@ SECTIONS
__dynamic_start = .;
*(.dynamic)
}
#ifdef CONFIG_PPC64_BOOT_WRAPPER
. = ALIGN(256);
.got :
{
__toc_start = .;
*(.got)
*(.toc)
}
#endif
.hash : { *(.hash) }
.interp : { *(.interp) }
.rela.dyn :
@ -76,16 +87,6 @@ SECTIONS
_esm_blob_end = .;
}
#ifdef CONFIG_PPC64_BOOT_WRAPPER
. = ALIGN(256);
.got :
{
__toc_start = .;
*(.got)
*(.toc)
}
#endif
. = ALIGN(4096);
.bss :
{


@ -0,0 +1 @@
CONFIG_PPC_DISABLE_WERROR=y


@ -0,0 +1,2 @@
CONFIG_PPC64=y
CONFIG_CPU_LITTLE_ENDIAN=y


@ -0,0 +1,15 @@
# This is the equivalent of booting with lockdown=integrity
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_LOCKDOWN_LSM=y
CONFIG_SECURITY_LOCKDOWN_LSM_EARLY=y
CONFIG_LOCK_DOWN_KERNEL_FORCE_INTEGRITY=y
# These are some general, reasonably inexpensive hardening options
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
# UBSAN bounds checking is very cheap and good for hardening
CONFIG_UBSAN=y
# CONFIG_UBSAN_MISC is not set


@ -10,6 +10,7 @@
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#include <asm/asm-const.h>
/*
* Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
@ -26,14 +27,14 @@ static __inline__ int atomic_read(const atomic_t *v)
{
int t;
__asm__ __volatile__("lwz%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
__asm__ __volatile__("lwz%U1%X1 %0,%1" : "=r"(t) : "m"UPD_CONSTR(v->counter));
return t;
}
static __inline__ void atomic_set(atomic_t *v, int i)
{
__asm__ __volatile__("stw%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i));
__asm__ __volatile__("stw%U0%X0 %1,%0" : "=m"UPD_CONSTR(v->counter) : "r"(i));
}
#define ATOMIC_OP(op, asm_op) \
@ -316,14 +317,14 @@ static __inline__ s64 atomic64_read(const atomic64_t *v)
{
s64 t;
__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"UPD_CONSTR(v->counter));
return t;
}
static __inline__ void atomic64_set(atomic64_t *v, s64 i)
{
__asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i));
__asm__ __volatile__("std%U0%X0 %1,%0" : "=m"UPD_CONSTR(v->counter) : "r"(i));
}
#define ATOMIC64_OP(op, asm_op) \


@ -40,7 +40,7 @@
#define wmb() __asm__ __volatile__ ("sync" : : : "memory")
/* The sub-arch has lwsync */
#if defined(__powerpc64__) || defined(CONFIG_PPC_E500MC)
#if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
# define SMPWMB LWSYNC
#else
# define SMPWMB eieio


@ -216,15 +216,34 @@ static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
*/
static inline int fls(unsigned int x)
{
return 32 - __builtin_clz(x);
int lz;
if (__builtin_constant_p(x))
return x ? 32 - __builtin_clz(x) : 0;
asm("cntlzw %0,%1" : "=r" (lz) : "r" (x));
return 32 - lz;
}
#include <asm-generic/bitops/builtin-__fls.h>
/*
* 64-bit can do this using one cntlzd (count leading zeroes doubleword)
* instruction; for 32-bit we use the generic version, which does two
* 32-bit fls calls.
*/
#ifdef CONFIG_PPC64
static inline int fls64(__u64 x)
{
return 64 - __builtin_clzll(x);
int lz;
if (__builtin_constant_p(x))
return x ? 64 - __builtin_clzll(x) : 0;
asm("cntlzd %0,%1" : "=r" (lz) : "r" (x));
return 64 - lz;
}
#else
#include <asm-generic/bitops/fls64.h>
#endif
#ifdef CONFIG_PPC64
unsigned int __arch_hweight8(unsigned int w);
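
The open-coded cntlzw/cntlzd paths keep the usual fls() semantics: a 1-based index of the most significant set bit, with 0 for an input of 0, which is also what the constant-folded builtin path computes. A small reference sketch of those semantics, assuming nothing beyond the GCC/clang builtin:

    #include <assert.h>

    /* Reference semantics of fls(): 1-based position of the highest set bit. */
    static int fls_ref(unsigned int x)
    {
            return x ? 32 - __builtin_clz(x) : 0;
    }

    int main(void)
    {
            assert(fls_ref(0) == 0);
            assert(fls_ref(1) == 1);
            assert(fls_ref(0x00400000u) == 23);
            assert(fls_ref(0x80000000u) == 32);
            return 0;
    }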


@ -183,11 +183,7 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
unsigned long begin = regs->kuap & 0xf0000000;
unsigned long end = regs->kuap << 28;
if (!is_write)
return false;
return WARN(address < begin || address >= end,
"Bug: write fault blocked by segment registers !");
return is_write && (address < begin || address >= end);
}
#endif /* CONFIG_PPC_KUAP */
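
For illustration only, a hedged sketch of the window decode that the begin/end lines above imply (start segment in the top nibble of regs->kuap, end segment in the low nibble; unsigned long is 32 bits on these CPUs), with hypothetical values:

    /* Hypothetical: user access was opened for 0x20000000-0x2fffffff, so the
     * kuap field holds 0x20000003 (begin segment 0x2, end segment 0x3). */
    static int example_write_fault_is_blocked(unsigned long kuap, unsigned long address)
    {
            unsigned long begin = kuap & 0xf0000000;        /* 0x20000000 */
            unsigned long end   = kuap << 28;               /* 0x30000000 with a 32-bit shift */

            /* True when the faulting write lies outside the opened window. */
            return address < begin || address >= end;
    }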


@ -90,10 +90,11 @@ struct hash_pte {
typedef struct {
unsigned long id;
unsigned long vdso_base;
void __user *vdso;
} mm_context_t;
void update_bats(void);
static inline void cleanup_cpu_mmu_context(void) { };
/* patch sites */
extern s32 patch__hash_page_A0, patch__hash_page_A1, patch__hash_page_A2;


@ -240,8 +240,14 @@ extern void add_hash_page(unsigned context, unsigned long va,
unsigned long pmdval);
/* Flush an entry from the TLB/hash table */
extern void flush_hash_entry(struct mm_struct *mm, pte_t *ptep,
unsigned long address);
static inline void flush_hash_entry(struct mm_struct *mm, pte_t *ptep, unsigned long addr)
{
if (mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
unsigned long ptephys = __pa(ptep) & PAGE_MASK;
flush_hash_pages(mm->context.id, addr, ptephys, 1);
}
}
/*
* PTE updates. This function is called whenever an existing
@ -293,10 +299,9 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
{
unsigned long old;
old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
if (old & _PAGE_HASHPTE) {
unsigned long ptephys = __pa(ptep) & PAGE_MASK;
flush_hash_pages(mm->context.id, addr, ptephys, 1);
}
if (old & _PAGE_HASHPTE)
flush_hash_entry(mm, ptep, addr);
return (old & _PAGE_ACCESSED) != 0;
}
#define ptep_test_and_clear_young(__vma, __addr, __ptep) \
@ -524,9 +529,9 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
if (pte_val(*ptep) & _PAGE_HASHPTE)
flush_hash_entry(mm, ptep, addr);
__asm__ __volatile__("\
stw%U0%X0 %2,%0\n\
stw%X0 %2,%0\n\
eieio\n\
stw%U0%X0 %L2,%1"
stw%X1 %L2,%1"
: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
: "r" (pte) : "memory");


@ -6,12 +6,69 @@
/*
* TLB flushing for "classic" hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
*/
extern void flush_tlb_mm(struct mm_struct *mm);
extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
extern void flush_tlb_page_nohash(struct vm_area_struct *vma, unsigned long addr);
extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end);
extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
void hash__flush_tlb_mm(struct mm_struct *mm);
void hash__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
void hash__flush_range(struct mm_struct *mm, unsigned long start, unsigned long end);
#ifdef CONFIG_SMP
void _tlbie(unsigned long address);
#else
static inline void _tlbie(unsigned long address)
{
asm volatile ("tlbie %0; sync" : : "r" (address) : "memory");
}
#endif
void _tlbia(void);
/*
* Called at the end of a mmu_gather operation to make sure the
* TLB flush is completely done.
*/
static inline void tlb_flush(struct mmu_gather *tlb)
{
/* 603 needs to flush the whole TLB here since it doesn't use a hash table. */
if (!mmu_has_feature(MMU_FTR_HPTE_TABLE))
_tlbia();
}
static inline void flush_range(struct mm_struct *mm, unsigned long start, unsigned long end)
{
start &= PAGE_MASK;
if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
hash__flush_range(mm, start, end);
else if (end - start <= PAGE_SIZE)
_tlbie(start);
else
_tlbia();
}
static inline void flush_tlb_mm(struct mm_struct *mm)
{
if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
hash__flush_tlb_mm(mm);
else
_tlbia();
}
static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
{
if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
hash__flush_tlb_page(vma, vmaddr);
else
_tlbie(vmaddr);
}
static inline void
flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
{
flush_range(vma->vm_mm, start, end);
}
static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
flush_range(&init_mm, start, end);
}
static inline void local_flush_tlb_page(struct vm_area_struct *vma,
unsigned long vmaddr)
{


@ -2,6 +2,9 @@
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
#define _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
/* We use key 3 for KERNEL */
#define HASH_DEFAULT_KERNEL_KEY (HPTE_R_KEY_BIT0 | HPTE_R_KEY_BIT1)
static inline u64 hash__vmflag_to_pte_pkey_bits(u64 vm_flags)
{
return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT0 : 0x0UL) |
@ -11,13 +14,23 @@ static inline u64 hash__vmflag_to_pte_pkey_bits(u64 vm_flags)
((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT4 : 0x0UL));
}
static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
static inline u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
{
return (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
unsigned long pte_pkey;
pte_pkey = (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP) ||
mmu_has_feature(MMU_FTR_BOOK3S_KUEP)) {
if ((pte_pkey == 0) && (flags & HPTE_USE_KERNEL_KEY))
return HASH_DEFAULT_KERNEL_KEY;
}
return pte_pkey;
}
static inline u16 hash__pte_to_pkey_bits(u64 pteflags)


@ -145,7 +145,7 @@ extern void hash__mark_initmem_nx(void);
extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long pte, int huge);
extern unsigned long htab_convert_pte_flags(unsigned long pteflags);
unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags);
/* Atomic PTE updates */
static inline unsigned long hash__pte_update(struct mm_struct *mm,
unsigned long addr,


@ -3,6 +3,7 @@
#ifndef _ASM_POWERPC_BOOK3S_64_KEXEC_H_
#define _ASM_POWERPC_BOOK3S_64_KEXEC_H_
#include <asm/plpar_wrappers.h>
#define reset_sprs reset_sprs
static inline void reset_sprs(void)
@ -14,6 +15,10 @@ static inline void reset_sprs(void)
if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
mtspr(SPRN_IAMR, 0);
if (cpu_has_feature(CPU_FTR_HVMODE))
mtspr(SPRN_CIABR, 0);
else
plpar_set_ciabr(0);
}
/* Do we need isync()? We are going via a kexec reset */


@ -1,205 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H
#define _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H
#include <linux/const.h>
#include <asm/reg.h>
#define AMR_KUAP_BLOCK_READ UL(0x4000000000000000)
#define AMR_KUAP_BLOCK_WRITE UL(0x8000000000000000)
#define AMR_KUAP_BLOCKED (AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)
#define AMR_KUAP_SHIFT 62
#ifdef __ASSEMBLY__
.macro kuap_restore_amr gpr1, gpr2
#ifdef CONFIG_PPC_KUAP
BEGIN_MMU_FTR_SECTION_NESTED(67)
mfspr \gpr1, SPRN_AMR
ld \gpr2, STACK_REGS_KUAP(r1)
cmpd \gpr1, \gpr2
beq 998f
isync
mtspr SPRN_AMR, \gpr2
/* No isync required, see kuap_restore_amr() */
998:
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
#endif
.endm
#ifdef CONFIG_PPC_KUAP
.macro kuap_check_amr gpr1, gpr2
#ifdef CONFIG_PPC_KUAP_DEBUG
BEGIN_MMU_FTR_SECTION_NESTED(67)
mfspr \gpr1, SPRN_AMR
li \gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
sldi \gpr2, \gpr2, AMR_KUAP_SHIFT
999: tdne \gpr1, \gpr2
EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
#endif
.endm
#endif
.macro kuap_save_amr_and_lock gpr1, gpr2, use_cr, msr_pr_cr
#ifdef CONFIG_PPC_KUAP
BEGIN_MMU_FTR_SECTION_NESTED(67)
.ifnb \msr_pr_cr
bne \msr_pr_cr, 99f
.endif
mfspr \gpr1, SPRN_AMR
std \gpr1, STACK_REGS_KUAP(r1)
li \gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
sldi \gpr2, \gpr2, AMR_KUAP_SHIFT
cmpd \use_cr, \gpr1, \gpr2
beq \use_cr, 99f
// We don't isync here because we very recently entered via rfid
mtspr SPRN_AMR, \gpr2
isync
99:
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
#endif
.endm
#else /* !__ASSEMBLY__ */
#include <linux/jump_label.h>
DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
#ifdef CONFIG_PPC_KUAP
#include <asm/mmu.h>
#include <asm/ptrace.h>
static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
{
if (mmu_has_feature(MMU_FTR_RADIX_KUAP) && unlikely(regs->kuap != amr)) {
isync();
mtspr(SPRN_AMR, regs->kuap);
/*
* No isync required here because we are about to RFI back to
* previous context before any user accesses would be made,
* which is a CSI.
*/
}
}
static inline unsigned long kuap_get_and_check_amr(void)
{
if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
unsigned long amr = mfspr(SPRN_AMR);
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) /* kuap_check_amr() */
WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
return amr;
}
return 0;
}
static inline void kuap_check_amr(void)
{
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_RADIX_KUAP))
WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
}
/*
* We support individually allowing read or write, but we don't support nesting
* because that would require an expensive read/modify write of the AMR.
*/
static inline unsigned long get_kuap(void)
{
/*
* We return AMR_KUAP_BLOCKED when we don't support KUAP because
* prevent_user_access_return needs to return AMR_KUAP_BLOCKED to
* cause restore_user_access to do a flush.
*
* This has no effect in terms of actually blocking things on hash,
* so it doesn't break anything.
*/
if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
return AMR_KUAP_BLOCKED;
return mfspr(SPRN_AMR);
}
static inline void set_kuap(unsigned long value)
{
if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
return;
/*
* ISA v3.0B says we need a CSI (Context Synchronising Instruction) both
* before and after the move to AMR. See table 6 on page 1134.
*/
isync();
mtspr(SPRN_AMR, value);
isync();
}
static inline bool
bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
(regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
"Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
}
#else /* CONFIG_PPC_KUAP */
static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr) { }
static inline unsigned long kuap_get_and_check_amr(void)
{
return 0UL;
}
static inline unsigned long get_kuap(void)
{
return AMR_KUAP_BLOCKED;
}
static inline void set_kuap(unsigned long value) { }
#endif /* !CONFIG_PPC_KUAP */
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
// This is written so we can resolve to a single case at build time
BUILD_BUG_ON(!__builtin_constant_p(dir));
if (dir == KUAP_READ)
set_kuap(AMR_KUAP_BLOCK_WRITE);
else if (dir == KUAP_WRITE)
set_kuap(AMR_KUAP_BLOCK_READ);
else if (dir == KUAP_READ_WRITE)
set_kuap(0);
else
BUILD_BUG();
}
static inline void prevent_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
set_kuap(AMR_KUAP_BLOCKED);
if (static_branch_unlikely(&uaccess_flush_key))
do_uaccess_flush();
}
static inline unsigned long prevent_user_access_return(void)
{
unsigned long flags = get_kuap();
set_kuap(AMR_KUAP_BLOCKED);
if (static_branch_unlikely(&uaccess_flush_key))
do_uaccess_flush();
return flags;
}
static inline void restore_user_access(unsigned long flags)
{
set_kuap(flags);
if (static_branch_unlikely(&uaccess_flush_key) && flags == AMR_KUAP_BLOCKED)
do_uaccess_flush();
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H */


@ -0,0 +1,442 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_BOOK3S_64_KUP_H
#define _ASM_POWERPC_BOOK3S_64_KUP_H
#include <linux/const.h>
#include <asm/reg.h>
#define AMR_KUAP_BLOCK_READ UL(0x5455555555555555)
#define AMR_KUAP_BLOCK_WRITE UL(0xa8aaaaaaaaaaaaaa)
#define AMR_KUEP_BLOCKED UL(0x5455555555555555)
#define AMR_KUAP_BLOCKED (AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)
#ifdef __ASSEMBLY__
.macro kuap_user_restore gpr1, gpr2
#if defined(CONFIG_PPC_PKEY)
BEGIN_MMU_FTR_SECTION_NESTED(67)
b 100f // skip_restore_amr
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY, 67)
/*
* AMR and IAMR are going to be different when
* returning to userspace.
*/
ld \gpr1, STACK_REGS_AMR(r1)
/*
* If kuap feature is not enabled, do the mtspr
* only if AMR value is different.
*/
BEGIN_MMU_FTR_SECTION_NESTED(68)
mfspr \gpr2, SPRN_AMR
cmpd \gpr1, \gpr2
beq 99f
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_BOOK3S_KUAP, 68)
isync
mtspr SPRN_AMR, \gpr1
99:
/*
* Restore IAMR only when returning to userspace
*/
ld \gpr1, STACK_REGS_IAMR(r1)
/*
* If kuep feature is not enabled, do the mtspr
* only if IAMR value is different.
*/
BEGIN_MMU_FTR_SECTION_NESTED(69)
mfspr \gpr2, SPRN_IAMR
cmpd \gpr1, \gpr2
beq 100f
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_BOOK3S_KUEP, 69)
isync
mtspr SPRN_IAMR, \gpr1
100: //skip_restore_amr
/* No isync required, see kuap_user_restore() */
#endif
.endm
.macro kuap_kernel_restore gpr1, gpr2
#if defined(CONFIG_PPC_PKEY)
BEGIN_MMU_FTR_SECTION_NESTED(67)
/*
* AMR is going to be mostly the same since we are
* returning to the kernel. Compare and do a mtspr.
*/
ld \gpr2, STACK_REGS_AMR(r1)
mfspr \gpr1, SPRN_AMR
cmpd \gpr1, \gpr2
beq 100f
isync
mtspr SPRN_AMR, \gpr2
/*
* No isync required, see kuap_restore_amr()
* No need to restore IAMR when returning to kernel space.
*/
100:
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 67)
#endif
.endm
#ifdef CONFIG_PPC_KUAP
.macro kuap_check_amr gpr1, gpr2
#ifdef CONFIG_PPC_KUAP_DEBUG
BEGIN_MMU_FTR_SECTION_NESTED(67)
mfspr \gpr1, SPRN_AMR
/* Prevent access to userspace using any key values */
LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
999: tdne \gpr1, \gpr2
EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 67)
#endif
.endm
#endif
/*
* if (pkey) {
*
* save AMR -> stack;
* if (kuap) {
* if (AMR != BLOCKED)
* KUAP_BLOCKED -> AMR;
* }
* if (from_user) {
* save IAMR -> stack;
* if (kuep) {
* KUEP_BLOCKED ->IAMR
* }
* }
* return;
* }
*
* if (kuap) {
* if (from_kernel) {
* save AMR -> stack;
* if (AMR != BLOCKED)
* KUAP_BLOCKED -> AMR;
* }
*
* }
*/
.macro kuap_save_amr_and_lock gpr1, gpr2, use_cr, msr_pr_cr
#if defined(CONFIG_PPC_PKEY)
/*
* if both pkey and kuap is disabled, nothing to do
*/
BEGIN_MMU_FTR_SECTION_NESTED(68)
b 100f // skip_save_amr
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_BOOK3S_KUAP, 68)
/*
* if pkey is disabled and we are entering from userspace
* don't do anything.
*/
BEGIN_MMU_FTR_SECTION_NESTED(67)
.ifnb \msr_pr_cr
/*
* Without pkey we are not changing AMR outside the kernel
* hence skip this completely.
*/
bne \msr_pr_cr, 100f // from userspace
.endif
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY, 67)
/*
* pkey is enabled or pkey is disabled but entering from kernel
*/
mfspr \gpr1, SPRN_AMR
std \gpr1, STACK_REGS_AMR(r1)
/*
* update kernel AMR with AMR_KUAP_BLOCKED only
* if KUAP feature is enabled
*/
BEGIN_MMU_FTR_SECTION_NESTED(69)
LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
cmpd \use_cr, \gpr1, \gpr2
beq \use_cr, 102f
/*
* We don't isync here because we very recently entered via an interrupt
*/
mtspr SPRN_AMR, \gpr2
isync
102:
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 69)
/*
* if entering from kernel we don't need save IAMR
*/
.ifnb \msr_pr_cr
beq \msr_pr_cr, 100f // from kernel space
mfspr \gpr1, SPRN_IAMR
std \gpr1, STACK_REGS_IAMR(r1)
/*
* update kernel IAMR with AMR_KUEP_BLOCKED only
* if KUEP feature is enabled
*/
BEGIN_MMU_FTR_SECTION_NESTED(70)
LOAD_REG_IMMEDIATE(\gpr2, AMR_KUEP_BLOCKED)
mtspr SPRN_IAMR, \gpr2
isync
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUEP, 70)
.endif
100: // skip_save_amr
#endif
.endm
#else /* !__ASSEMBLY__ */
#include <linux/jump_label.h>
DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
#ifdef CONFIG_PPC_PKEY
#include <asm/mmu.h>
#include <asm/ptrace.h>
/*
* For kernel thread that doesn't have thread.regs return
* default AMR/IAMR values.
*/
static inline u64 current_thread_amr(void)
{
if (current->thread.regs)
return current->thread.regs->amr;
return AMR_KUAP_BLOCKED;
}
static inline u64 current_thread_iamr(void)
{
if (current->thread.regs)
return current->thread.regs->iamr;
return AMR_KUEP_BLOCKED;
}
#endif /* CONFIG_PPC_PKEY */
#ifdef CONFIG_PPC_KUAP
static inline void kuap_user_restore(struct pt_regs *regs)
{
bool restore_amr = false, restore_iamr = false;
unsigned long amr, iamr;
if (!mmu_has_feature(MMU_FTR_PKEY))
return;
if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
amr = mfspr(SPRN_AMR);
if (amr != regs->amr)
restore_amr = true;
} else {
restore_amr = true;
}
if (!mmu_has_feature(MMU_FTR_BOOK3S_KUEP)) {
iamr = mfspr(SPRN_IAMR);
if (iamr != regs->iamr)
restore_iamr = true;
} else {
restore_iamr = true;
}
if (restore_amr || restore_iamr) {
isync();
if (restore_amr)
mtspr(SPRN_AMR, regs->amr);
if (restore_iamr)
mtspr(SPRN_IAMR, regs->iamr);
}
/*
* No isync required here because we are about to rfi
* back to previous context before any user accesses
* would be made, which is a CSI.
*/
}
static inline void kuap_kernel_restore(struct pt_regs *regs,
unsigned long amr)
{
if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
if (unlikely(regs->amr != amr)) {
isync();
mtspr(SPRN_AMR, regs->amr);
/*
* No isync required here because we are about to rfi
* back to previous context before any user accesses
* would be made, which is a CSI.
*/
}
}
/*
* No need to restore IAMR when returning to kernel space.
*/
}
static inline unsigned long kuap_get_and_check_amr(void)
{
if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
unsigned long amr = mfspr(SPRN_AMR);
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) /* kuap_check_amr() */
WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
return amr;
}
return 0;
}
#else /* CONFIG_PPC_PKEY */
static inline void kuap_user_restore(struct pt_regs *regs)
{
}
static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
{
}
static inline unsigned long kuap_get_and_check_amr(void)
{
return 0;
}
#endif /* CONFIG_PPC_PKEY */
#ifdef CONFIG_PPC_KUAP
static inline void kuap_check_amr(void)
{
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
}
/*
* We support individually allowing read or write, but we don't support nesting
* because that would require an expensive read/modify write of the AMR.
*/
static inline unsigned long get_kuap(void)
{
/*
* We return AMR_KUAP_BLOCKED when we don't support KUAP because
* prevent_user_access_return needs to return AMR_KUAP_BLOCKED to
* cause restore_user_access to do a flush.
*
* This has no effect in terms of actually blocking things on hash,
* so it doesn't break anything.
*/
if (!early_mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
return AMR_KUAP_BLOCKED;
return mfspr(SPRN_AMR);
}
static inline void set_kuap(unsigned long value)
{
if (!early_mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
return;
/*
* ISA v3.0B says we need a CSI (Context Synchronising Instruction) both
* before and after the move to AMR. See table 6 on page 1134.
*/
isync();
mtspr(SPRN_AMR, value);
isync();
}
static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
bool is_write)
{
if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
return false;
/*
* For radix this will be a storage protection fault (DSISR_PROTFAULT).
* For hash this will be a key fault (DSISR_KEYFAULT)
*/
/*
* We do have exception table entry, but accessing the
* userspace results in fault. This could be because we
* didn't unlock the AMR or access is denied by userspace
* using a key value that blocks access. We are only interested
* in catching the use case of accessing without unlocking
* the AMR. Hence check for BLOCK_WRITE/READ against AMR.
*/
if (is_write) {
return (regs->amr & AMR_KUAP_BLOCK_WRITE) == AMR_KUAP_BLOCK_WRITE;
}
return (regs->amr & AMR_KUAP_BLOCK_READ) == AMR_KUAP_BLOCK_READ;
}
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
unsigned long thread_amr = 0;
// This is written so we can resolve to a single case at build time
BUILD_BUG_ON(!__builtin_constant_p(dir));
if (mmu_has_feature(MMU_FTR_PKEY))
thread_amr = current_thread_amr();
if (dir == KUAP_READ)
set_kuap(thread_amr | AMR_KUAP_BLOCK_WRITE);
else if (dir == KUAP_WRITE)
set_kuap(thread_amr | AMR_KUAP_BLOCK_READ);
else if (dir == KUAP_READ_WRITE)
set_kuap(thread_amr);
else
BUILD_BUG();
}
#else /* CONFIG_PPC_KUAP */
static inline unsigned long get_kuap(void)
{
return AMR_KUAP_BLOCKED;
}
static inline void set_kuap(unsigned long value) { }
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{ }
#endif /* !CONFIG_PPC_KUAP */
static inline void prevent_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
set_kuap(AMR_KUAP_BLOCKED);
if (static_branch_unlikely(&uaccess_flush_key))
do_uaccess_flush();
}
static inline unsigned long prevent_user_access_return(void)
{
unsigned long flags = get_kuap();
set_kuap(AMR_KUAP_BLOCKED);
if (static_branch_unlikely(&uaccess_flush_key))
do_uaccess_flush();
return flags;
}
static inline void restore_user_access(unsigned long flags)
{
set_kuap(flags);
if (static_branch_unlikely(&uaccess_flush_key) && flags == AMR_KUAP_BLOCKED)
do_uaccess_flush();
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_KUP_H */


@ -452,6 +452,7 @@ static inline unsigned long hpt_hash(unsigned long vpn,
#define HPTE_LOCAL_UPDATE 0x1
#define HPTE_NOHPTE_UPDATE 0x2
#define HPTE_USE_KERNEL_KEY 0x4
extern int __hash_page_4K(unsigned long ea, unsigned long access,
unsigned long vsid, pte_t *ptep, unsigned long trap,
@ -842,6 +843,32 @@ static inline unsigned long get_kernel_vsid(unsigned long ea, int ssize)
unsigned htab_shift_for_mem_size(unsigned long mem_size);
#endif /* __ASSEMBLY__ */
enum slb_index {
LINEAR_INDEX = 0, /* Kernel linear map (0xc000000000000000) */
KSTACK_INDEX = 1, /* Kernel stack map */
};
#define slb_esid_mask(ssize) \
(((ssize) == MMU_SEGSIZE_256M) ? ESID_MASK : ESID_MASK_1T)
static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
enum slb_index index)
{
return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | index;
}
static inline unsigned long __mk_vsid_data(unsigned long vsid, int ssize,
unsigned long flags)
{
return (vsid << slb_vsid_shift(ssize)) | flags |
((unsigned long)ssize << SLB_VSID_SSIZE_SHIFT);
}
static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
unsigned long flags)
{
return __mk_vsid_data(get_kernel_vsid(ea, ssize), ssize, flags);
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_ */


@ -111,7 +111,7 @@ typedef struct {
struct hash_mm_context *hash_context;
unsigned long vdso_base;
void __user *vdso;
/*
* pagetable fragment support
*/
@ -199,7 +199,7 @@ extern int mmu_io_psize;
void mmu_early_init_devtree(void);
void hash__early_init_devtree(void);
void radix__early_init_devtree(void);
#ifdef CONFIG_PPC_MEM_KEYS
#ifdef CONFIG_PPC_PKEY
void pkey_early_init_devtree(void);
#else
static inline void pkey_early_init_devtree(void) {}


@ -1231,13 +1231,28 @@ static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
return hash__pmd_same(pmd_a, pmd_b);
}
static inline pmd_t pmd_mkhuge(pmd_t pmd)
static inline pmd_t __pmd_mkhuge(pmd_t pmd)
{
if (radix_enabled())
return radix__pmd_mkhuge(pmd);
return hash__pmd_mkhuge(pmd);
}
/*
* pfn_pmd return a pmd_t that can be used as pmd pte entry.
*/
static inline pmd_t pmd_mkhuge(pmd_t pmd)
{
#ifdef CONFIG_DEBUG_VM
if (radix_enabled())
WARN_ON((pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE)) == 0);
else
WARN_ON((pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE | H_PAGE_THP_HUGE)) !=
cpu_to_be64(_PAGE_PTE | H_PAGE_THP_HUGE));
#endif
return pmd;
}
#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
extern int pmdp_set_access_flags(struct vm_area_struct *vma,
unsigned long address, pmd_t *pmdp,


@ -6,6 +6,8 @@
#include <asm/book3s/64/hash-pkey.h>
extern u64 __ro_after_init default_uamor;
extern u64 __ro_after_init default_amr;
extern u64 __ro_after_init default_iamr;
static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
{


@ -12,7 +12,7 @@
#ifdef CONFIG_DEBUG_BUGVERBOSE
.macro EMIT_BUG_ENTRY addr,file,line,flags
.section __bug_table,"aw"
5001: PPC_LONG \addr, 5002f
5001: .4byte \addr - 5001b, 5002f - 5001b
.short \line, \flags
.org 5001b+BUG_ENTRY_SIZE
.previous
@ -23,7 +23,7 @@
#else
.macro EMIT_BUG_ENTRY addr,file,line,flags
.section __bug_table,"aw"
5001: PPC_LONG \addr
5001: .4byte \addr - 5001b
.short \flags
.org 5001b+BUG_ENTRY_SIZE
.previous
@ -36,14 +36,14 @@
#ifdef CONFIG_DEBUG_BUGVERBOSE
#define _EMIT_BUG_ENTRY \
".section __bug_table,\"aw\"\n" \
"2:\t" PPC_LONG "1b, %0\n" \
"2:\t.4byte 1b - 2b, %0 - 2b\n" \
"\t.short %1, %2\n" \
".org 2b+%3\n" \
".previous\n"
#else
#define _EMIT_BUG_ENTRY \
".section __bug_table,\"aw\"\n" \
"2:\t" PPC_LONG "1b\n" \
"2:\t.4byte 1b - 2b\n" \
"\t.short %2\n" \
".org 2b+%3\n" \
".previous\n"
@ -113,6 +113,7 @@
struct pt_regs;
extern int do_page_fault(struct pt_regs *, unsigned long, unsigned long);
extern void bad_page_fault(struct pt_regs *, unsigned long, int);
void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);
extern void _exception(int, struct pt_regs *, int, unsigned long);
extern void _exception_pkey(struct pt_regs *, unsigned long, int);
extern void die(const char *, struct pt_regs *, long);
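
Returning to the bug-table macros above: the change from PPC_LONG absolute addresses to .4byte displacements pairs with the GENERIC_BUG_RELATIVE_POINTERS select added earlier in this diff, so each bug table entry now stores a 32-bit offset from itself rather than a pointer. A hedged sketch of how such an entry is resolved back to an absolute address (field names here are illustrative, not necessarily the exact struct bug_entry layout):

    struct rel_bug_entry {
            signed int      bug_addr_disp;  /* .4byte addr - entry */
            signed int      file_disp;      /* .4byte file - entry (verbose builds only) */
            unsigned short  line;
            unsigned short  flags;
    };

    static unsigned long rel_bug_addr(const struct rel_bug_entry *e)
    {
            /* The trapping address is the entry's own address plus the stored displacement. */
            return (unsigned long)&e->bug_addr_disp + e->bug_addr_disp;
    }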


@ -163,7 +163,7 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
*/
__wsum __csum_partial(const void *buff, int len, __wsum sum);
static inline __wsum csum_partial(const void *buff, int len, __wsum sum)
static __always_inline __wsum csum_partial(const void *buff, int len, __wsum sum)
{
if (__builtin_constant_p(len) && len <= 16 && (len & 1) == 0) {
if (len == 2)


@ -0,0 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_CLOCKSOURCE_H
#define _ASM_POWERPC_CLOCKSOURCE_H
#include <asm/vdso/clocksource.h>
#endif /* _ASM_POWERPC_CLOCKSOURCE_H */


@ -68,6 +68,7 @@ extern void cpm_reset(void);
#define PROFF_SPI ((uint)0x0180)
#define PROFF_SCC3 ((uint)0x0200)
#define PROFF_SMC1 ((uint)0x0280)
#define PROFF_DSP1 ((uint)0x02c0)
#define PROFF_SCC4 ((uint)0x0300)
#define PROFF_SMC2 ((uint)0x0380)


@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2020 IBM Corporation
*/
void __setup_cpu_power7(unsigned long offset, struct cpu_spec *spec);
void __restore_cpu_power7(void);
void __setup_cpu_power8(unsigned long offset, struct cpu_spec *spec);
void __restore_cpu_power8(void);
void __setup_cpu_power9(unsigned long offset, struct cpu_spec *spec);
void __restore_cpu_power9(void);
void __setup_cpu_power10(unsigned long offset, struct cpu_spec *spec);
void __restore_cpu_power10(void);


@ -41,7 +41,6 @@ extern int machine_check_4xx(struct pt_regs *regs);
extern int machine_check_440A(struct pt_regs *regs);
extern int machine_check_e500mc(struct pt_regs *regs);
extern int machine_check_e500(struct pt_regs *regs);
extern int machine_check_e200(struct pt_regs *regs);
extern int machine_check_47x(struct pt_regs *regs);
int machine_check_8xx(struct pt_regs *regs);
int machine_check_83xx(struct pt_regs *regs);
@ -137,7 +136,7 @@ static inline void cpu_feature_keys_init(void) { }
#define CPU_FTR_DBELL ASM_CONST(0x00000004)
#define CPU_FTR_CAN_NAP ASM_CONST(0x00000008)
#define CPU_FTR_DEBUG_LVL_EXC ASM_CONST(0x00000010)
#define CPU_FTR_NODSISRALIGN ASM_CONST(0x00000020)
// ASM_CONST(0x00000020) Free
#define CPU_FTR_FPU_UNAVAILABLE ASM_CONST(0x00000040)
#define CPU_FTR_LWSYNC ASM_CONST(0x00000080)
#define CPU_FTR_NOEXECUTE ASM_CONST(0x00000100)
@ -219,9 +218,7 @@ static inline void cpu_feature_keys_init(void) { }
#ifndef __ASSEMBLY__
#define CPU_FTR_PPCAS_ARCH_V2 (CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN)
#define MMU_FTR_PPCAS_ARCH_V2 (MMU_FTR_TLBIEL | MMU_FTR_16M_PAGE)
#define CPU_FTR_PPCAS_ARCH_V2 (CPU_FTR_NOEXECUTE)
/* We only set the altivec features if the kernel was compiled with altivec
* support
@ -369,7 +366,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX)
#define CPU_FTRS_82XX (CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_G2_LE (CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_MAYBE_CAN_NAP)
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_E300 (CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_COMMON | CPU_FTR_NOEXECUTE)
@ -378,38 +375,33 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_COMMON | CPU_FTR_FPU_UNAVAILABLE | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_CLASSIC32 (CPU_FTR_COMMON)
#define CPU_FTRS_8XX (CPU_FTR_NOEXECUTE)
#define CPU_FTRS_40X (CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_44X (CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_440x6 (CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE | \
#define CPU_FTRS_40X (CPU_FTR_NOEXECUTE)
#define CPU_FTRS_44X (CPU_FTR_NOEXECUTE)
#define CPU_FTRS_440x6 (CPU_FTR_NOEXECUTE | \
CPU_FTR_INDEXED_DCR)
#define CPU_FTRS_47X (CPU_FTRS_440x6)
#define CPU_FTRS_E200 (CPU_FTR_SPE_COMP | \
CPU_FTR_NODSISRALIGN | CPU_FTR_COHERENT_ICACHE | \
CPU_FTR_NOEXECUTE | \
CPU_FTR_DEBUG_LVL_EXC)
#define CPU_FTRS_E500 (CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NODSISRALIGN | \
CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_NOEXECUTE)
#define CPU_FTRS_E500_2 (CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_E500MC (CPU_FTR_NODSISRALIGN | \
CPU_FTR_NOEXECUTE)
#define CPU_FTRS_E500MC ( \
CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL | CPU_FTR_DEBUG_LVL_EXC | CPU_FTR_EMB_HV)
/*
* e5500/e6500 erratum A-006958 is a timebase bug that can use the
* same workaround as CPU_FTR_CELL_TB_BUG.
*/
#define CPU_FTRS_E5500 (CPU_FTR_NODSISRALIGN | \
#define CPU_FTRS_E5500 ( \
CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_DEBUG_LVL_EXC | CPU_FTR_EMB_HV | CPU_FTR_CELL_TB_BUG)
#define CPU_FTRS_E6500 (CPU_FTR_NODSISRALIGN | \
#define CPU_FTRS_E6500 ( \
CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_DEBUG_LVL_EXC | CPU_FTR_EMB_HV | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_CELL_TB_BUG | CPU_FTR_SMT)
#define CPU_FTRS_GENERIC_32 (CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN)
/* 64-bit CPUs */
#define CPU_FTRS_PPC970 (CPU_FTR_LWSYNC | \
@ -489,7 +481,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_PURR | CPU_FTR_REAL_LE | CPU_FTR_DABRX)
#define CPU_FTRS_COMPATIBLE (CPU_FTR_PPCAS_ARCH_V2)
#ifdef __powerpc64__
#ifdef CONFIG_PPC64
#ifdef CONFIG_PPC_BOOK3E
#define CPU_FTRS_POSSIBLE (CPU_FTRS_E6500 | CPU_FTRS_E5500)
#else
@ -510,18 +502,19 @@ static inline void cpu_feature_keys_init(void) { }
#else
enum {
CPU_FTRS_POSSIBLE =
#ifdef CONFIG_PPC_BOOK3S_32
CPU_FTRS_603 | CPU_FTRS_604 | CPU_FTRS_740_NOTAU |
#ifdef CONFIG_PPC_BOOK3S_604
CPU_FTRS_604 | CPU_FTRS_740_NOTAU |
CPU_FTRS_740 | CPU_FTRS_750 | CPU_FTRS_750FX1 |
CPU_FTRS_750FX2 | CPU_FTRS_750FX | CPU_FTRS_750GX |
CPU_FTRS_7400_NOTAU | CPU_FTRS_7400 | CPU_FTRS_7450_20 |
CPU_FTRS_7450_21 | CPU_FTRS_7450_23 | CPU_FTRS_7455_1 |
CPU_FTRS_7455_20 | CPU_FTRS_7455 | CPU_FTRS_7447_10 |
CPU_FTRS_7447 | CPU_FTRS_7447A | CPU_FTRS_82XX |
CPU_FTRS_G2_LE | CPU_FTRS_E300 | CPU_FTRS_E300C2 |
CPU_FTRS_7447 | CPU_FTRS_7447A |
CPU_FTRS_CLASSIC32 |
#else
CPU_FTRS_GENERIC_32 |
#endif
#ifdef CONFIG_PPC_BOOK3S_603
CPU_FTRS_603 | CPU_FTRS_82XX |
CPU_FTRS_G2_LE | CPU_FTRS_E300 | CPU_FTRS_E300C2 |
#endif
#ifdef CONFIG_PPC_8xx
CPU_FTRS_8XX |
@ -529,14 +522,10 @@ enum {
#ifdef CONFIG_40x
CPU_FTRS_40X |
#endif
#ifdef CONFIG_44x
CPU_FTRS_44X | CPU_FTRS_440x6 |
#endif
#ifdef CONFIG_PPC_47x
CPU_FTRS_47X | CPU_FTR_476_DD2 |
#endif
#ifdef CONFIG_E200
CPU_FTRS_E200 |
#elif defined(CONFIG_44x)
CPU_FTRS_44X | CPU_FTRS_440x6 |
#endif
#ifdef CONFIG_E500
CPU_FTRS_E500 | CPU_FTRS_E500_2 |
@ -548,7 +537,7 @@ enum {
};
#endif /* __powerpc64__ */
#ifdef __powerpc64__
#ifdef CONFIG_PPC64
#ifdef CONFIG_PPC_BOOK3E
#define CPU_FTRS_ALWAYS (CPU_FTRS_E6500 & CPU_FTRS_E5500)
#else
@ -557,7 +546,6 @@ enum {
#define CPU_FTRS_DT_CPU_BASE \
(CPU_FTR_LWSYNC | \
CPU_FTR_FPU_UNAVAILABLE | \
CPU_FTR_NODSISRALIGN | \
CPU_FTR_NOEXECUTE | \
CPU_FTR_COHERENT_ICACHE | \
CPU_FTR_STCX_CHECKS_ADDRESS | \
@ -586,18 +574,19 @@ enum {
#else
enum {
CPU_FTRS_ALWAYS =
#ifdef CONFIG_PPC_BOOK3S_32
CPU_FTRS_603 & CPU_FTRS_604 & CPU_FTRS_740_NOTAU &
#ifdef CONFIG_PPC_BOOK3S_604
CPU_FTRS_604 & CPU_FTRS_740_NOTAU &
CPU_FTRS_740 & CPU_FTRS_750 & CPU_FTRS_750FX1 &
CPU_FTRS_750FX2 & CPU_FTRS_750FX & CPU_FTRS_750GX &
CPU_FTRS_7400_NOTAU & CPU_FTRS_7400 & CPU_FTRS_7450_20 &
CPU_FTRS_7450_21 & CPU_FTRS_7450_23 & CPU_FTRS_7455_1 &
CPU_FTRS_7455_20 & CPU_FTRS_7455 & CPU_FTRS_7447_10 &
CPU_FTRS_7447 & CPU_FTRS_7447A & CPU_FTRS_82XX &
CPU_FTRS_G2_LE & CPU_FTRS_E300 & CPU_FTRS_E300C2 &
CPU_FTRS_7447 & CPU_FTRS_7447A &
CPU_FTRS_CLASSIC32 &
#else
CPU_FTRS_GENERIC_32 &
#endif
#ifdef CONFIG_PPC_BOOK3S_603
CPU_FTRS_603 & CPU_FTRS_82XX &
CPU_FTRS_G2_LE & CPU_FTRS_E300 & CPU_FTRS_E300C2 &
#endif
#ifdef CONFIG_PPC_8xx
CPU_FTRS_8XX &
@ -605,12 +594,11 @@ enum {
#ifdef CONFIG_40x
CPU_FTRS_40X &
#endif
#ifdef CONFIG_44x
#ifdef CONFIG_PPC_47x
CPU_FTRS_47X &
#elif defined(CONFIG_44x)
CPU_FTRS_44X & CPU_FTRS_440x6 &
#endif
#ifdef CONFIG_E200
CPU_FTRS_E200 &
#endif
#ifdef CONFIG_E500
CPU_FTRS_E500 & CPU_FTRS_E500_2 &
#endif


@ -168,8 +168,8 @@ do { \
/* Cache size items */ \
NEW_AUX_ENT(AT_DCACHEBSIZE, dcache_bsize); \
NEW_AUX_ENT(AT_ICACHEBSIZE, icache_bsize); \
NEW_AUX_ENT(AT_UCACHEBSIZE, ucache_bsize); \
VDSO_AUX_ENT(AT_SYSINFO_EHDR, current->mm->context.vdso_base); \
NEW_AUX_ENT(AT_UCACHEBSIZE, 0); \
VDSO_AUX_ENT(AT_SYSINFO_EHDR, (unsigned long)current->mm->context.vdso);\
ARCH_DLINFO_CACHE_GEOMETRY; \
} while (0)
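
For context, AT_SYSINFO_EHDR is how userspace locates the vDSO image the kernel mapped for it; a minimal sketch using the standard glibc getauxval() interface:

    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void)
    {
            /* Base address of the vDSO ELF image mapped into this process. */
            unsigned long vdso = getauxval(AT_SYSINFO_EHDR);

            printf("vDSO mapped at %#lx\n", vdso);
            return 0;
    }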


@ -36,6 +36,24 @@ label##2: \
.align 2; \
label##3:
#ifndef CONFIG_CC_IS_CLANG
#define CHECK_ALT_SIZE(else_size, body_size) \
.ifgt (else_size) - (body_size); \
.error "Feature section else case larger than body"; \
.endif;
#else
/*
* If we use the ifgt syntax above, clang's assembler complains about the
* expression being non-absolute when the code appears in an inline assembly
* statement.
* As a workaround use an .org directive that has no effect if the else case
* instructions are smaller than the body, but fails otherwise.
*/
#define CHECK_ALT_SIZE(else_size, body_size) \
.org . + ((else_size) > (body_size));
#endif
#define MAKE_FTR_SECTION_ENTRY(msk, val, label, sect) \
label##4: \
.popsection; \
@ -48,9 +66,7 @@ label##5: \
FTR_ENTRY_OFFSET label##2b-label##5b; \
FTR_ENTRY_OFFSET label##3b-label##5b; \
FTR_ENTRY_OFFSET label##4b-label##5b; \
.ifgt (label##4b- label##3b)-(label##2b- label##1b); \
.error "Feature section else case larger than body"; \
.endif; \
CHECK_ALT_SIZE((label##4b-label##3b), (label##2b-label##1b)); \
.popsection;
@ -100,6 +116,9 @@ label##5: \
#define END_MMU_FTR_SECTION_NESTED_IFSET(msk, label) \
END_MMU_FTR_SECTION_NESTED((msk), (msk), label)
#define END_MMU_FTR_SECTION_NESTED_IFCLR(msk, label) \
END_MMU_FTR_SECTION_NESTED((msk), 0, label)
#define END_MMU_FTR_SECTION_IFSET(msk) END_MMU_FTR_SECTION((msk), (msk))
#define END_MMU_FTR_SECTION_IFCLR(msk) END_MMU_FTR_SECTION((msk), 0)


@ -134,12 +134,6 @@ extern int ibm_nmi_interlock_token;
extern unsigned int __start___fw_ftr_fixup, __stop___fw_ftr_fixup;
#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
bool is_kvm_guest(void);
#else
static inline bool is_kvm_guest(void) { return false; }
#endif
#ifdef CONFIG_PPC_PSERIES
void pseries_probe_fw_features(void);
#else


@ -155,6 +155,14 @@
#define H_VASI_RESUMED 5
#define H_VASI_COMPLETED 6
/* VASI signal codes. Only the Cancel code is valid for H_VASI_SIGNAL. */
#define H_VASI_SIGNAL_CANCEL 1
#define H_VASI_SIGNAL_ABORT 2
#define H_VASI_SIGNAL_SUSPEND 3
#define H_VASI_SIGNAL_COMPLETE 4
#define H_VASI_SIGNAL_ENABLE 5
#define H_VASI_SIGNAL_FAILOVER 6
/* Each control block has to be on a 4K boundary */
#define H_CB_ALIGNMENT 4096
@ -261,6 +269,7 @@
#define H_ADD_CONN 0x284
#define H_DEL_CONN 0x288
#define H_JOIN 0x298
#define H_VASI_SIGNAL 0x2A0
#define H_VASI_STATE 0x2A4
#define H_VIOCTL 0x2A8
#define H_ENABLE_CRQ 0x2B0


@ -122,7 +122,7 @@ static inline u##size name(const volatile u##size __iomem *addr) \
{ \
u##size ret; \
__asm__ __volatile__("sync;"#insn"%U1%X1 %0,%1;twi 0,%0,0;isync"\
: "=r" (ret) : "m" (*addr) : "memory"); \
: "=r" (ret) : "m"UPD_CONSTR (*addr) : "memory"); \
return ret; \
}
@ -130,7 +130,7 @@ static inline u##size name(const volatile u##size __iomem *addr) \
static inline void name(volatile u##size __iomem *addr, u##size val) \
{ \
__asm__ __volatile__("sync;"#insn"%U0%X0 %1,%0" \
: "=m" (*addr) : "r" (val) : "memory"); \
: "=m"UPD_CONSTR (*addr) : "r" (val) : "memory"); \
mmiowb_set_pending(); \
}
@ -302,41 +302,56 @@ static inline unsigned char __raw_readb(const volatile void __iomem *addr)
{
return *(volatile unsigned char __force *)PCI_FIX_ADDR(addr);
}
#define __raw_readb __raw_readb
static inline unsigned short __raw_readw(const volatile void __iomem *addr)
{
return *(volatile unsigned short __force *)PCI_FIX_ADDR(addr);
}
#define __raw_readw __raw_readw
static inline unsigned int __raw_readl(const volatile void __iomem *addr)
{
return *(volatile unsigned int __force *)PCI_FIX_ADDR(addr);
}
#define __raw_readl __raw_readl
static inline void __raw_writeb(unsigned char v, volatile void __iomem *addr)
{
*(volatile unsigned char __force *)PCI_FIX_ADDR(addr) = v;
}
#define __raw_writeb __raw_writeb
static inline void __raw_writew(unsigned short v, volatile void __iomem *addr)
{
*(volatile unsigned short __force *)PCI_FIX_ADDR(addr) = v;
}
#define __raw_writew __raw_writew
static inline void __raw_writel(unsigned int v, volatile void __iomem *addr)
{
*(volatile unsigned int __force *)PCI_FIX_ADDR(addr) = v;
}
#define __raw_writel __raw_writel
#ifdef __powerpc64__
static inline unsigned long __raw_readq(const volatile void __iomem *addr)
{
return *(volatile unsigned long __force *)PCI_FIX_ADDR(addr);
}
#define __raw_readq __raw_readq
static inline void __raw_writeq(unsigned long v, volatile void __iomem *addr)
{
*(volatile unsigned long __force *)PCI_FIX_ADDR(addr) = v;
}
#define __raw_writeq __raw_writeq
static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
{
__raw_writeq((__force unsigned long)cpu_to_be64(v), addr);
}
#define __raw_writeq_be __raw_writeq_be
/*
* Real mode versions of the above. Those instructions are only supposed
@ -609,10 +624,37 @@ static inline void name at \
/* Some drivers check for the presence of readq & writeq with
* a #ifdef, so we make them happy here.
*/
#define readb readb
#define readw readw
#define readl readl
#define writeb writeb
#define writew writew
#define writel writel
#define readsb readsb
#define readsw readsw
#define readsl readsl
#define writesb writesb
#define writesw writesw
#define writesl writesl
#define inb inb
#define inw inw
#define inl inl
#define outb outb
#define outw outw
#define outl outl
#define insb insb
#define insw insw
#define insl insl
#define outsb outsb
#define outsw outsw
#define outsl outsl
#ifdef __powerpc64__
#define readq readq
#define writeq writeq
#endif
#define memset_io memset_io
#define memcpy_fromio memcpy_fromio
#define memcpy_toio memcpy_toio
/*
* Convert a physical pointer to a virtual kernel pointer for /dev/mem
@ -637,7 +679,106 @@ static inline void name at \
#define writel_relaxed(v, addr) writel(v, addr)
#define writeq_relaxed(v, addr) writeq(v, addr)
#ifdef CONFIG_GENERIC_IOMAP
#include <asm-generic/iomap.h>
#else
/*
* Here comes the implementation of the IOMAP interfaces.
*/
static inline unsigned int ioread16be(const void __iomem *addr)
{
return readw_be(addr);
}
#define ioread16be ioread16be
static inline unsigned int ioread32be(const void __iomem *addr)
{
return readl_be(addr);
}
#define ioread32be ioread32be
#ifdef __powerpc64__
static inline u64 ioread64_lo_hi(const void __iomem *addr)
{
return readq(addr);
}
#define ioread64_lo_hi ioread64_lo_hi
static inline u64 ioread64_hi_lo(const void __iomem *addr)
{
return readq(addr);
}
#define ioread64_hi_lo ioread64_hi_lo
static inline u64 ioread64be(const void __iomem *addr)
{
return readq_be(addr);
}
#define ioread64be ioread64be
static inline u64 ioread64be_lo_hi(const void __iomem *addr)
{
return readq_be(addr);
}
#define ioread64be_lo_hi ioread64be_lo_hi
static inline u64 ioread64be_hi_lo(const void __iomem *addr)
{
return readq_be(addr);
}
#define ioread64be_hi_lo ioread64be_hi_lo
#endif /* __powerpc64__ */
static inline void iowrite16be(u16 val, void __iomem *addr)
{
writew_be(val, addr);
}
#define iowrite16be iowrite16be
static inline void iowrite32be(u32 val, void __iomem *addr)
{
writel_be(val, addr);
}
#define iowrite32be iowrite32be
#ifdef __powerpc64__
static inline void iowrite64_lo_hi(u64 val, void __iomem *addr)
{
writeq(val, addr);
}
#define iowrite64_lo_hi iowrite64_lo_hi
static inline void iowrite64_hi_lo(u64 val, void __iomem *addr)
{
writeq(val, addr);
}
#define iowrite64_hi_lo iowrite64_hi_lo
static inline void iowrite64be(u64 val, void __iomem *addr)
{
writeq_be(val, addr);
}
#define iowrite64be iowrite64be
static inline void iowrite64be_lo_hi(u64 val, void __iomem *addr)
{
writeq_be(val, addr);
}
#define iowrite64be_lo_hi iowrite64be_lo_hi
static inline void iowrite64be_hi_lo(u64 val, void __iomem *addr)
{
writeq_be(val, addr);
}
#define iowrite64be_hi_lo iowrite64be_hi_lo
#endif /* __powerpc64__ */
struct pci_dev;
void pci_iounmap(struct pci_dev *dev, void __iomem *addr);
#define pci_iounmap pci_iounmap
void __iomem *ioport_map(unsigned long port, unsigned int len);
#define ioport_map ioport_map
#endif
static inline void iosync(void)
{
@ -670,7 +811,6 @@ static inline void iosync(void)
#define IO_SPACE_LIMIT ~(0UL)
/**
* ioremap - map bus memory into CPU space
* @address: bus address of the memory
@ -706,7 +846,13 @@ extern void __iomem *ioremap(phys_addr_t address, unsigned long size);
extern void __iomem *ioremap_prot(phys_addr_t address, unsigned long size,
unsigned long flags);
extern void __iomem *ioremap_wc(phys_addr_t address, unsigned long size);
#define ioremap_wc ioremap_wc
#ifdef CONFIG_PPC32
void __iomem *ioremap_wt(phys_addr_t address, unsigned long size);
#define ioremap_wt ioremap_wt
#endif
void __iomem *ioremap_coherent(phys_addr_t address, unsigned long size);
#define ioremap_uc(addr, size) ioremap((addr), (size))
#define ioremap_cache(addr, size) \
@ -766,6 +912,7 @@ static inline unsigned long virt_to_phys(volatile void * address)
return __pa((unsigned long)address);
}
#define virt_to_phys virt_to_phys
/**
* phys_to_virt - map physical address to virtual
@ -783,6 +930,7 @@ static inline void * phys_to_virt(unsigned long address)
{
return (void *)__va(address);
}
#define phys_to_virt phys_to_virt
/*
* Change "struct page" to physical address.
@ -810,6 +958,7 @@ static inline unsigned long virt_to_bus(volatile void * address)
return 0;
return __pa(address) + PCI_DRAM_OFFSET;
}
#define virt_to_bus virt_to_bus
static inline void * bus_to_virt(unsigned long address)
{
@ -817,6 +966,7 @@ static inline void * bus_to_virt(unsigned long address)
return NULL;
return __va(address - PCI_DRAM_OFFSET);
}
#define bus_to_virt bus_to_virt
#define page_to_bus(page) (page_to_phys(page) + PCI_DRAM_OFFSET)
@ -855,6 +1005,8 @@ static inline void * bus_to_virt(unsigned long address)
#define clrsetbits_8(addr, clear, set) clrsetbits(8, addr, clear, set)
#include <asm-generic/io.h>
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_IO_H */
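
For illustration only (not part of the patch): a minimal sketch of how a driver might exercise the big-endian iomap accessors defined above on a mapped PCI BAR. The function name demo_mmio_access and the register offsets are invented for the example; pci_iomap(), pci_iounmap(), ioread32be() and iowrite32be() are the interfaces declared or referenced in this header.

#include <linux/pci.h>
#include <linux/io.h>

/* Illustrative only: read a hypothetical big-endian status register at
 * offset 0x10 and write it back to a hypothetical ack register at 0x14. */
static int demo_mmio_access(struct pci_dev *pdev)
{
	void __iomem *regs = pci_iomap(pdev, 0, 0);	/* map BAR 0 */
	u32 status;

	if (!regs)
		return -ENOMEM;

	status = ioread32be(regs + 0x10);
	iowrite32be(status, regs + 0x14);

	pci_iounmap(pdev, regs);
	return 0;
}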


@ -15,11 +15,13 @@
#define KUAP_CURRENT (KUAP_CURRENT_READ | KUAP_CURRENT_WRITE)
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/book3s/64/kup-radix.h>
#include <asm/book3s/64/kup.h>
#endif
#ifdef CONFIG_PPC_8xx
#include <asm/nohash/32/kup-8xx.h>
#endif
#ifdef CONFIG_PPC_BOOK3S_32
#include <asm/book3s/32/kup.h>
#endif
@ -42,9 +44,10 @@
#else /* !__ASSEMBLY__ */
#include <linux/pgtable.h>
extern bool disable_kuep;
extern bool disable_kuap;
void setup_kup(void);
#include <linux/pgtable.h>
#ifdef CONFIG_PPC_KUEP
void setup_kuep(bool disabled);
@ -80,6 +83,12 @@ static inline void restore_user_access(unsigned long flags) { }
#endif /* CONFIG_PPC_BOOK3S_64 */
#endif /* CONFIG_PPC_KUAP */
static __always_inline void setup_kup(void)
{
setup_kuep(disable_kuep);
setup_kuap(disable_kuap);
}
static inline void allow_read_from_user(const void __user *from, unsigned long size)
{
allow_user_access(NULL, from, size, KUAP_READ);


@ -0,0 +1,25 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020 IBM Corporation
*/
#ifndef _ASM_POWERPC_KVM_GUEST_H_
#define _ASM_POWERPC_KVM_GUEST_H_
#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
#include <linux/jump_label.h>
DECLARE_STATIC_KEY_FALSE(kvm_guest);
static inline bool is_kvm_guest(void)
{
return static_branch_unlikely(&kvm_guest);
}
bool check_kvm_guest(void);
#else
static inline bool is_kvm_guest(void) { return false; }
static inline bool check_kvm_guest(void) { return false; }
#endif
#endif /* _ASM_POWERPC_KVM_GUEST_H_ */
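
For illustration only, a sketch of the .c side this header implies: one plausible way check_kvm_guest() could flip the static key at boot so that is_kvm_guest() becomes a patched-in branch. The device-tree path "/hypervisor" and compatible string "linux,kvm" are assumptions for the example, not taken from the patch.

#include <linux/jump_label.h>
#include <linux/of.h>
#include <asm/kvm_guest.h>

DEFINE_STATIC_KEY_FALSE(kvm_guest);

bool check_kvm_guest(void)
{
	struct device_node *hyper_node;

	hyper_node = of_find_node_by_path("/hypervisor");
	if (!hyper_node)
		return false;

	/* Enable the key only when the hypervisor node looks like KVM. */
	if (of_device_is_compatible(hyper_node, "linux,kvm"))
		static_branch_enable(&kvm_guest);

	of_node_put(hyper_node);
	return is_kvm_guest();
}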


@ -8,7 +8,7 @@
#ifndef __POWERPC_KVM_PARA_H__
#define __POWERPC_KVM_PARA_H__
#include <asm/firmware.h>
#include <asm/kvm_guest.h>
#include <uapi/asm/kvm_para.h>


@ -207,7 +207,6 @@ struct machdep_calls {
void (*suspend_disable_irqs)(void);
void (*suspend_enable_irqs)(void);
#endif
int (*suspend_disable_cpu)(void);
#ifdef CONFIG_ARCH_CPU_PROBE_RELEASE
ssize_t (*cpu_probe)(const char *, size_t);


@ -228,6 +228,7 @@ int mce_register_notifier(struct notifier_block *nb);
int mce_unregister_notifier(struct notifier_block *nb);
#ifdef CONFIG_PPC_BOOK3S_64
void flush_and_reload_slb(void);
void flush_erat(void);
long __machine_check_early_realmode_p7(struct pt_regs *regs);
long __machine_check_early_realmode_p8(struct pt_regs *regs);
long __machine_check_early_realmode_p9(struct pt_regs *regs);


@ -1,25 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Architecture specific mm hooks
*
* Copyright (C) 2015, IBM Corporation
* Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
*/
#ifndef _ASM_POWERPC_MM_ARCH_HOOKS_H
#define _ASM_POWERPC_MM_ARCH_HOOKS_H
static inline void arch_remap(struct mm_struct *mm,
unsigned long old_start, unsigned long old_end,
unsigned long new_start, unsigned long new_end)
{
/*
* mremap() doesn't allow moving multiple vmas so we can limit the
* check to old_start == vdso_base.
*/
if (old_start == mm->context.vdso_base)
mm->context.vdso_base = new_start;
}
#define arch_remap arch_remap
#endif /* _ASM_POWERPC_MM_ARCH_HOOKS_H */


@ -29,9 +29,18 @@
*/
/*
* Support for KUEP feature.
* Supports KUAP feature
* key 0 controlling userspace addresses on radix
* Key 3 on hash
*/
#define MMU_FTR_KUEP ASM_CONST(0x00000400)
#define MMU_FTR_BOOK3S_KUAP ASM_CONST(0x00000200)
/*
* Supports KUEP feature
* key 0 controlling userspace addresses on radix
* Key 3 on hash
*/
#define MMU_FTR_BOOK3S_KUEP ASM_CONST(0x00000400)
/*
* Support for memory protection keys.
@ -120,14 +129,8 @@
*/
#define MMU_FTR_1T_SEGMENT ASM_CONST(0x40000000)
/*
* Supports KUAP (key 0 controlling userspace addresses) on radix
*/
#define MMU_FTR_RADIX_KUAP ASM_CONST(0x80000000)
/* MMU feature bit sets for various CPUs */
#define MMU_FTRS_DEFAULT_HPTE_ARCH_V2 \
MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
#define MMU_FTRS_DEFAULT_HPTE_ARCH_V2 (MMU_FTR_HPTE_TABLE | MMU_FTR_TLBIEL | MMU_FTR_16M_PAGE)
#define MMU_FTRS_POWER MMU_FTRS_DEFAULT_HPTE_ARCH_V2
#define MMU_FTRS_PPC970 MMU_FTRS_POWER | MMU_FTR_TLBIE_CROP_VA
#define MMU_FTRS_POWER5 MMU_FTRS_POWER | MMU_FTR_LOCKLESS_TLBIE
@ -154,7 +157,7 @@ DECLARE_PER_CPU(int, next_tlbcam_idx);
enum {
MMU_FTRS_POSSIBLE =
#ifdef CONFIG_PPC_BOOK3S
#if defined(CONFIG_PPC_BOOK3S_64) || defined(CONFIG_PPC_BOOK3S_604)
MMU_FTR_HPTE_TABLE |
#endif
#ifdef CONFIG_PPC_8xx
@ -163,17 +166,19 @@ enum {
#ifdef CONFIG_40x
MMU_FTR_TYPE_40x |
#endif
#ifdef CONFIG_44x
MMU_FTR_TYPE_44x |
#endif
#if defined(CONFIG_E200) || defined(CONFIG_E500)
MMU_FTR_TYPE_FSL_E | MMU_FTR_BIG_PHYS | MMU_FTR_USE_TLBILX |
#endif
#ifdef CONFIG_PPC_47x
MMU_FTR_TYPE_47x | MMU_FTR_USE_TLBIVAX_BCAST | MMU_FTR_LOCK_BCAST_INVAL |
#elif defined(CONFIG_44x)
MMU_FTR_TYPE_44x |
#endif
#ifdef CONFIG_E500
MMU_FTR_TYPE_FSL_E | MMU_FTR_BIG_PHYS | MMU_FTR_USE_TLBILX |
#endif
#ifdef CONFIG_PPC_BOOK3S_32
MMU_FTR_USE_HIGH_BATS | MMU_FTR_NEED_DTLB_SW_LRU |
MMU_FTR_USE_HIGH_BATS |
#endif
#ifdef CONFIG_PPC_83xx
MMU_FTR_NEED_DTLB_SW_LRU |
#endif
#ifdef CONFIG_PPC_BOOK3E_64
MMU_FTR_USE_TLBRSRV | MMU_FTR_USE_PAIRED_MAS |
@ -187,22 +192,47 @@ enum {
#ifdef CONFIG_PPC_RADIX_MMU
MMU_FTR_TYPE_RADIX |
MMU_FTR_GTSE |
#ifdef CONFIG_PPC_KUAP
MMU_FTR_RADIX_KUAP |
#endif /* CONFIG_PPC_KUAP */
#endif /* CONFIG_PPC_RADIX_MMU */
#ifdef CONFIG_PPC_KUAP
MMU_FTR_BOOK3S_KUAP |
#endif /* CONFIG_PPC_KUAP */
#ifdef CONFIG_PPC_MEM_KEYS
MMU_FTR_PKEY |
#endif
#ifdef CONFIG_PPC_KUEP
MMU_FTR_KUEP |
MMU_FTR_BOOK3S_KUEP |
#endif /* CONFIG_PPC_KUAP */
0,
};
#if defined(CONFIG_PPC_BOOK3S_604) && !defined(CONFIG_PPC_BOOK3S_603)
#define MMU_FTRS_ALWAYS MMU_FTR_HPTE_TABLE
#endif
#ifdef CONFIG_PPC_8xx
#define MMU_FTRS_ALWAYS MMU_FTR_TYPE_8xx
#endif
#ifdef CONFIG_40x
#define MMU_FTRS_ALWAYS MMU_FTR_TYPE_40x
#endif
#ifdef CONFIG_PPC_47x
#define MMU_FTRS_ALWAYS MMU_FTR_TYPE_47x
#elif defined(CONFIG_44x)
#define MMU_FTRS_ALWAYS MMU_FTR_TYPE_44x
#endif
#if defined(CONFIG_E200) || defined(CONFIG_E500)
#define MMU_FTRS_ALWAYS MMU_FTR_TYPE_FSL_E
#endif
#ifndef MMU_FTRS_ALWAYS
#define MMU_FTRS_ALWAYS 0
#endif
static inline bool early_mmu_has_feature(unsigned long feature)
{
if (MMU_FTRS_ALWAYS & feature)
return true;
return !!(MMU_FTRS_POSSIBLE & cur_cpu_spec->mmu_features & feature);
}
@ -231,6 +261,9 @@ static __always_inline bool mmu_has_feature(unsigned long feature)
}
#endif
if (MMU_FTRS_ALWAYS & feature)
return true;
if (!(MMU_FTRS_POSSIBLE & feature))
return false;
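
As a side note (not from the patch): the point of the new MMU_FTRS_ALWAYS mask is that on a kernel configured for a single MMU type the corresponding MMU_FTR_TYPE_* bit is known at compile time, so feature checks fold to constants. A minimal sketch, with a hypothetical helper name:

/* Illustration only: on an 8xx-only build MMU_FTRS_ALWAYS includes
 * MMU_FTR_TYPE_8xx, so this compiles down to "return true" and any
 * branch on it disappears entirely. */
static inline bool demo_running_on_8xx_mmu(void)
{
	return mmu_has_feature(MMU_FTR_TYPE_8xx);
}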


@ -263,8 +263,10 @@ extern void arch_exit_mmap(struct mm_struct *mm);
static inline void arch_unmap(struct mm_struct *mm,
unsigned long start, unsigned long end)
{
if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
mm->context.vdso_base = 0;
unsigned long vdso_base = (unsigned long)mm->context.vdso - PAGE_SIZE;
if (start <= vdso_base && vdso_base < end)
mm->context.vdso = NULL;
}
#ifdef CONFIG_PPC_MEM_KEYS
@ -285,7 +287,7 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
#define thread_pkey_regs_init(thread)
#define arch_dup_pkeys(oldmm, mm)
static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
static inline u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
{
return 0x0UL;
}


@ -63,8 +63,7 @@ static inline void restore_user_access(unsigned long flags)
static inline bool
bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xff000000),
"Bug: fault blocked by AP register !");
return !((regs->kuap ^ MD_APG_KUAP) & 0xff000000);
}
#endif /* !__ASSEMBLY__ */


@ -57,7 +57,7 @@
typedef struct {
unsigned int id;
unsigned int active;
unsigned long vdso_base;
void __user *vdso;
} mm_context_t;
#endif /* !__ASSEMBLY__ */


@ -108,7 +108,7 @@ extern unsigned int tlb_44x_index;
typedef struct {
unsigned int id;
unsigned int active;
unsigned long vdso_base;
void __user *vdso;
} mm_context_t;
/* patch sites */


@ -181,7 +181,7 @@ void mmu_pin_tlb(unsigned long top, bool readonly);
typedef struct {
unsigned int id;
unsigned int active;
unsigned long vdso_base;
void __user *vdso;
void *pte_frag;
} mm_context_t;


@ -238,7 +238,7 @@ extern unsigned int tlbcam_index;
typedef struct {
unsigned int id;
unsigned int active;
unsigned long vdso_base;
void __user *vdso;
} mm_context_t;
/* Page size definitions, common between 32 and 64-bit


@ -192,9 +192,9 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
*/
if (IS_ENABLED(CONFIG_PPC32) && IS_ENABLED(CONFIG_PTE_64BIT) && !percpu) {
__asm__ __volatile__("\
stw%U0%X0 %2,%0\n\
stw%X0 %2,%0\n\
eieio\n\
stw%U0%X0 %L2,%1"
stw%X1 %L2,%1"
: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
: "r" (pte) : "memory");
return;


@ -10,7 +10,6 @@
* - local_flush_tlb_mm(mm, full) flushes the specified mm context on
* the local processor
* - local_flush_tlb_page(vma, vmaddr) flushes one page on the local processor
* - flush_tlb_page_nohash(vma, vmaddr) flushes one page if SW loaded TLB
* - flush_tlb_range(vma, start, end) flushes a range of pages
* - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
*


@ -1091,9 +1091,9 @@ enum {
OPAL_XIVE_IRQ_TRIGGER_PAGE = 0x00000001,
OPAL_XIVE_IRQ_STORE_EOI = 0x00000002,
OPAL_XIVE_IRQ_LSI = 0x00000004,
OPAL_XIVE_IRQ_SHIFT_BUG = 0x00000008,
OPAL_XIVE_IRQ_MASK_VIA_FW = 0x00000010,
OPAL_XIVE_IRQ_EOI_VIA_FW = 0x00000020,
OPAL_XIVE_IRQ_SHIFT_BUG = 0x00000008, /* P9 DD1.0 workaround */
OPAL_XIVE_IRQ_MASK_VIA_FW = 0x00000010, /* P9 DD1.0 workaround */
OPAL_XIVE_IRQ_EOI_VIA_FW = 0x00000020, /* P9 DD1.0 workaround */
};
/* Flags for OPAL_XIVE_GET/SET_QUEUE_INFO */


@ -16,12 +16,6 @@
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
#endif
#ifdef CONFIG_PTE_64BIT
#define PTE_FLAGS_OFFSET 4 /* offset of PTE flags, in bytes */
#else
#define PTE_FLAGS_OFFSET 0
#endif
#if defined(CONFIG_PPC_256K_PAGES) || \
(defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES))
#define PTE_SHIFT (PAGE_SHIFT - PTE_T_LOG2 - 2) /* 1/4 of a page */


@ -10,6 +10,9 @@
#endif
#ifdef CONFIG_PPC_SPLPAR
#include <asm/kvm_guest.h>
#include <asm/cputhreads.h>
DECLARE_STATIC_KEY_FALSE(shared_processor);
static inline bool is_shared_processor(void)
@ -74,6 +77,21 @@ static inline bool vcpu_is_preempted(int cpu)
{
if (!is_shared_processor())
return false;
#ifdef CONFIG_PPC_SPLPAR
if (!is_kvm_guest()) {
int first_cpu = cpu_first_thread_sibling(smp_processor_id());
/*
* Preemption can only happen at core granularity. This CPU
* is not preempted if one of the CPUs of this core is not
* preempted.
*/
if (cpu_first_thread_sibling(cpu) == first_cpu)
return false;
}
#endif
if (yield_count_of(cpu) & 1)
return true;
return false;


@ -82,6 +82,7 @@ struct power_pmu {
#define PPMU_ARCH_207S 0x00000080 /* PMC is architecture v2.07S */
#define PPMU_NO_SIAR 0x00000100 /* Do not use SIAR */
#define PPMU_ARCH_31 0x00000200 /* Has MMCR3, SIER2 and SIER3 */
#define PPMU_P10_DD1 0x00000400 /* Is power10 DD1 processor version */
/*
* Values for flags to get_alternatives()


@ -3,12 +3,59 @@
#ifndef _ASM_PNV_OCXL_H
#define _ASM_PNV_OCXL_H
#include <linux/bitfield.h>
#include <linux/pci.h>
#define PNV_OCXL_TL_MAX_TEMPLATE 63
#define PNV_OCXL_TL_BITS_PER_RATE 4
#define PNV_OCXL_TL_RATE_BUF_SIZE ((PNV_OCXL_TL_MAX_TEMPLATE+1) * PNV_OCXL_TL_BITS_PER_RATE / 8)
#define PNV_OCXL_ATSD_TIMEOUT 1
/* TLB Management Instructions */
#define PNV_OCXL_ATSD_LNCH 0x00
/* Radix Invalidate */
#define PNV_OCXL_ATSD_LNCH_R PPC_BIT(0)
/* Radix Invalidation Control
* 0b00 Just invalidate TLB.
* 0b01 Invalidate just Page Walk Cache.
* 0b10 Invalidate TLB, Page Walk Cache, and any
* caching of Partition and Process Table Entries.
*/
#define PNV_OCXL_ATSD_LNCH_RIC PPC_BITMASK(1, 2)
/* Number and Page Size of translations to be invalidated */
#define PNV_OCXL_ATSD_LNCH_LP PPC_BITMASK(3, 10)
/* Invalidation Criteria
* 0b00 Invalidate just the target VA.
* 0b01 Invalidate matching PID.
*/
#define PNV_OCXL_ATSD_LNCH_IS PPC_BITMASK(11, 12)
/* 0b1: Process Scope, 0b0: Partition Scope */
#define PNV_OCXL_ATSD_LNCH_PRS PPC_BIT(13)
/* Invalidation Flag */
#define PNV_OCXL_ATSD_LNCH_B PPC_BIT(14)
/* Actual Page Size to be invalidated
* 000 4KB
* 101 64KB
* 001 2MB
* 010 1GB
*/
#define PNV_OCXL_ATSD_LNCH_AP PPC_BITMASK(15, 17)
/* Defines the large page select
* L=0b0 for 4KB pages
* L=0b1 for large pages
*/
#define PNV_OCXL_ATSD_LNCH_L PPC_BIT(18)
/* Process ID */
#define PNV_OCXL_ATSD_LNCH_PID PPC_BITMASK(19, 38)
/* NoFlush Assumed to be 0b0 */
#define PNV_OCXL_ATSD_LNCH_F PPC_BIT(39)
#define PNV_OCXL_ATSD_LNCH_OCAPI_SLBI PPC_BIT(40)
#define PNV_OCXL_ATSD_LNCH_OCAPI_SINGLETON PPC_BIT(41)
#define PNV_OCXL_ATSD_AVA 0x08
#define PNV_OCXL_ATSD_AVA_AVA PPC_BITMASK(0, 51)
#define PNV_OCXL_ATSD_STAT 0x10
int pnv_ocxl_get_actag(struct pci_dev *dev, u16 *base, u16 *enabled, u16 *supported);
int pnv_ocxl_get_pasid_count(struct pci_dev *dev, int *count);
@ -28,4 +75,11 @@ int pnv_ocxl_spa_setup(struct pci_dev *dev, void *spa_mem, int PE_mask, void **p
void pnv_ocxl_spa_release(void *platform_data);
int pnv_ocxl_spa_remove_pe_from_cache(void *platform_data, int pe_handle);
int pnv_ocxl_map_lpar(struct pci_dev *dev, uint64_t lparid,
uint64_t lpcr, void __iomem **arva);
void pnv_ocxl_unmap_lpar(void __iomem *arva);
void pnv_ocxl_tlb_invalidate(void __iomem *arva,
unsigned long pid,
unsigned long addr,
unsigned long page_size);
#endif /* _ASM_PNV_OCXL_H */
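
Purely as an illustration of how the new ATSD launch-register bit definitions fit together (this is not the driver's implementation): a sketch that composes a process-scoped, single-address invalidate of one 4K page and writes it through the mapped ATSD window. The helper name, the use of FIELD_PREP, and the AVA encoding (addr >> 12) are assumptions; completion handling (polling PNV_OCXL_ATSD_STAT within PNV_OCXL_ATSD_TIMEOUT) is omitted.

#include <linux/bitfield.h>
#include <asm/io.h>
#include <asm/pnv-ocxl.h>

static void demo_atsd_invalidate_4k(void __iomem *arva, unsigned long pid,
				    unsigned long addr)
{
	u64 launch = PNV_OCXL_ATSD_LNCH_R;			/* radix invalidate */

	launch |= FIELD_PREP(PNV_OCXL_ATSD_LNCH_IS, 0);		/* target VA only */
	launch |= PNV_OCXL_ATSD_LNCH_PRS;			/* process scope */
	launch |= FIELD_PREP(PNV_OCXL_ATSD_LNCH_AP, 0);		/* 4K page (0b000) */
	launch |= FIELD_PREP(PNV_OCXL_ATSD_LNCH_PID, pid);

	/* Assumed AVA encoding: page-shifted effective address. */
	out_be64(arva + PNV_OCXL_ATSD_AVA,
		 FIELD_PREP(PNV_OCXL_ATSD_AVA_AVA, addr >> 12));
	out_be64(arva + PNV_OCXL_ATSD_LNCH, launch);
}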


@ -78,6 +78,9 @@
#define IMM_L(i) ((uintptr_t)(i) & 0xffff)
#define IMM_DS(i) ((uintptr_t)(i) & 0xfffc)
#define IMM_DQ(i) ((uintptr_t)(i) & 0xfff0)
#define IMM_D0(i) (((uintptr_t)(i) >> 16) & 0x3ffff)
#define IMM_D1(i) IMM_L(i)
/*
* 16-bit immediate helper macros: HA() is for use with sign-extending instrs
@ -230,7 +233,6 @@
#define PPC_INST_POPCNTB_MASK 0xfc0007fe
#define PPC_INST_RFEBB 0x4c000124
#define PPC_INST_RFID 0x4c000024
#define PPC_INST_MFSPR 0x7c0002a6
#define PPC_INST_MFSPR_DSCR 0x7c1102a6
#define PPC_INST_MFSPR_DSCR_MASK 0xfc1ffffe
#define PPC_INST_MTSPR_DSCR 0x7c1103a6
@ -295,6 +297,8 @@
#define __PPC_XB(b) ((((b) & 0x1f) << 11) | (((b) & 0x20) >> 4))
#define __PPC_XS(s) ((((s) & 0x1f) << 21) | (((s) & 0x20) >> 5))
#define __PPC_XT(s) __PPC_XS(s)
#define __PPC_XSP(s) ((((s) & 0x1e) | (((s) >> 5) & 0x1)) << 21)
#define __PPC_XTP(s) __PPC_XSP(s)
#define __PPC_T_TLB(t) (((t) & 0x3) << 21)
#define __PPC_WC(w) (((w) & 0x3) << 21)
#define __PPC_WS(w) (((w) & 0x1f) << 11)
@ -395,6 +399,14 @@
#define PPC_RAW_XVCPSGNDP(t, a, b) ((0xf0000780 | VSX_XX3((t), (a), (b))))
#define PPC_RAW_VPERMXOR(vrt, vra, vrb, vrc) \
((0x1000002d | ___PPC_RT(vrt) | ___PPC_RA(vra) | ___PPC_RB(vrb) | (((vrc) & 0x1f) << 6)))
#define PPC_RAW_LXVP(xtp, a, i) (0x18000000 | __PPC_XTP(xtp) | ___PPC_RA(a) | IMM_DQ(i))
#define PPC_RAW_STXVP(xsp, a, i) (0x18000001 | __PPC_XSP(xsp) | ___PPC_RA(a) | IMM_DQ(i))
#define PPC_RAW_LXVPX(xtp, a, b) (0x7c00029a | __PPC_XTP(xtp) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_STXVPX(xsp, a, b) (0x7c00039a | __PPC_XSP(xsp) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_PLXVP(xtp, i, a, pr) \
((PPC_PREFIX_8LS | __PPC_PRFX_R(pr) | IMM_D0(i)) << 32 | (0xe8000000 | __PPC_XTP(xtp) | ___PPC_RA(a) | IMM_D1(i)))
#define PPC_RAW_PSTXVP(xsp, i, a, pr) \
((PPC_PREFIX_8LS | __PPC_PRFX_R(pr) | IMM_D0(i)) << 32 | (0xf8000000 | __PPC_XSP(xsp) | ___PPC_RA(a) | IMM_D1(i)))
#define PPC_RAW_NAP (0x4c000364)
#define PPC_RAW_SLEEP (0x4c0003a4)
#define PPC_RAW_WINKLE (0x4c0003e4)
@ -507,6 +519,8 @@
#define PPC_RAW_NEG(d, a) (0x7c0000d0 | ___PPC_RT(d) | ___PPC_RA(a))
#define PPC_RAW_MFSPR(d, spr) (0x7c0002a6 | ___PPC_RT(d) | __PPC_SPR(spr))
/* Deal with instructions that older assemblers aren't aware of */
#define PPC_BCCTR_FLUSH stringify_in_c(.long PPC_INST_BCCTR_FLUSH)
#define PPC_CP_ABORT stringify_in_c(.long PPC_RAW_CP_ABORT)


@ -251,6 +251,8 @@ n:
#define _GLOBAL_TOC(name) _GLOBAL(name)
#define DOTSYM(a) a
#endif
/*
@ -495,15 +497,9 @@ END_FTR_SECTION_NESTED(CPU_FTR_CELL_TB_BUG, CPU_FTR_CELL_TB_BUG, 96)
#endif
#ifdef CONFIG_PPC_BOOK3S_64
#define RFI rfid
#define MTMSRD(r) mtmsrd r
#define MTMSR_EERI(reg) mtmsrd reg,1
#else
#ifndef CONFIG_40x
#define RFI rfi
#else
#define RFI rfi; b . /* Prevent prefetch past rfi */
#endif
#define MTMSRD(r) mtmsr r
#define MTMSR_EERI(reg) mtmsr reg
#endif


@ -6,6 +6,8 @@
* Copyright (C) 2001 PPC 64 Team, IBM Corp
*/
#include <vdso/processor.h>
#include <asm/reg.h>
#ifdef CONFIG_VSX
@ -63,14 +65,6 @@ extern int _chrp_type;
#endif /* defined(__KERNEL__) && defined(CONFIG_PPC32) */
/* Macros for adjusting thread priority (hardware multi-threading) */
#define HMT_very_low() asm volatile("or 31,31,31 # very low priority")
#define HMT_low() asm volatile("or 1,1,1 # low priority")
#define HMT_medium_low() asm volatile("or 6,6,6 # medium low priority")
#define HMT_medium() asm volatile("or 2,2,2 # medium priority")
#define HMT_medium_high() asm volatile("or 5,5,5 # medium high priority")
#define HMT_high() asm volatile("or 3,3,3 # high priority")
#ifdef __KERNEL__
#ifdef CONFIG_PPC64
@ -170,8 +164,10 @@ struct thread_struct {
#endif
/* Debug Registers */
struct debug_reg debug;
#ifdef CONFIG_PPC_FPU_REGS
struct thread_fp_state fp_state;
struct thread_fp_state *fp_save_area;
#endif
int fpexc_mode; /* floating-point exception mode */
unsigned int align_ctl; /* alignment handling control */
#ifdef CONFIG_HAVE_HW_BREAKPOINT
@ -230,10 +226,6 @@ struct thread_struct {
struct thread_vr_state ckvr_state; /* Checkpointed VR state */
unsigned long ckvrsave; /* Checkpointed VRSAVE */
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
#ifdef CONFIG_PPC_MEM_KEYS
unsigned long amr;
unsigned long iamr;
#endif
#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
void* kvm_shadow_vcpu; /* KVM internal data */
#endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
@ -344,7 +336,6 @@ static inline unsigned long __pack_fe01(unsigned int fpmode)
}
#ifdef CONFIG_PPC64
#define cpu_relax() do { HMT_low(); HMT_medium(); barrier(); } while (0)
#define spin_begin() HMT_low()
@ -363,8 +354,6 @@ do { \
} \
} while (0)
#else
#define cpu_relax() barrier()
#endif
/* Check that a certain kernel stack pointer is valid in task_struct p */
@ -398,20 +387,6 @@ static inline void prefetchw(const void *x)
#define HAVE_ARCH_PICK_MMAP_LAYOUT
#ifdef CONFIG_PPC64
static inline unsigned long get_clean_sp(unsigned long sp, int is_32)
{
if (is_32)
return sp & 0x0ffffffffUL;
return sp;
}
#else
static inline unsigned long get_clean_sp(unsigned long sp, int is_32)
{
return sp;
}
#endif
/* asm stubs */
extern unsigned long isa300_idle_stop_noloss(unsigned long psscr_val);
extern unsigned long isa300_idle_stop_mayloss(unsigned long psscr_val);


@ -378,8 +378,8 @@ struct ps3_system_bus_driver {
enum ps3_match_sub_id match_sub_id;
struct device_driver core;
int (*probe)(struct ps3_system_bus_device *);
int (*remove)(struct ps3_system_bus_device *);
int (*shutdown)(struct ps3_system_bus_device *);
void (*remove)(struct ps3_system_bus_device *);
void (*shutdown)(struct ps3_system_bus_device *);
/* int (*suspend)(struct ps3_system_bus_device *, pm_message_t); */
/* int (*resume)(struct ps3_system_bus_device *); */
};


@ -53,11 +53,19 @@ struct pt_regs
#ifdef CONFIG_PPC64
unsigned long ppr;
#endif
union {
#ifdef CONFIG_PPC_KUAP
unsigned long kuap;
unsigned long kuap;
#endif
#ifdef CONFIG_PPC_PKEY
unsigned long amr;
#endif
};
#ifdef CONFIG_PPC_PKEY
unsigned long iamr;
#endif
};
unsigned long __pad[2]; /* Maintain 16 byte interrupt stack alignment */
unsigned long __pad[4]; /* Maintain 16 byte interrupt stack alignment */
};
};
#endif
@ -171,12 +179,6 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
set_thread_flag(TIF_NOERROR); \
} while(0)
struct task_struct;
extern int ptrace_get_reg(struct task_struct *task, int regno,
unsigned long *data);
extern int ptrace_put_reg(struct task_struct *task, int regno,
unsigned long data);
#define current_pt_regs() \
((struct pt_regs *)((unsigned long)task_stack_page(current) + THREAD_SIZE) - 1)


@ -29,7 +29,6 @@
#include <asm/reg_8xx.h>
#define MSR_SF_LG 63 /* Enable 64 bit mode */
#define MSR_ISF_LG 61 /* Interrupt 64b mode valid on 630 */
#define MSR_HV_LG 60 /* Hypervisor state */
#define MSR_TS_T_LG 34 /* Trans Mem state: Transactional */
#define MSR_TS_S_LG 33 /* Trans Mem state: Suspended */
@ -69,13 +68,11 @@
#ifdef CONFIG_PPC64
#define MSR_SF __MASK(MSR_SF_LG) /* Enable 64 bit mode */
#define MSR_ISF __MASK(MSR_ISF_LG) /* Interrupt 64b mode valid on 630 */
#define MSR_HV __MASK(MSR_HV_LG) /* Hypervisor state */
#define MSR_S __MASK(MSR_S_LG) /* Secure state */
#else
/* so tests for these bits fail on 32-bit */
#define MSR_SF 0
#define MSR_ISF 0
#define MSR_HV 0
#define MSR_S 0
#endif
@ -134,7 +131,7 @@
#define MSR_64BIT MSR_SF
/* Server variant */
#define __MSR (MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_ISF |MSR_HV)
#define __MSR (MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_HV)
#ifdef __BIG_ENDIAN__
#define MSR_ __MSR
#define MSR_IDLE (MSR_ME | MSR_SF | MSR_HV)
@ -864,6 +861,7 @@
#define MMCR0_BHRBA 0x00200000UL /* BHRB Access allowed in userspace */
#define MMCR0_EBE 0x00100000UL /* Event based branch enable */
#define MMCR0_PMCC 0x000c0000UL /* PMC control */
#define MMCR0_PMCCEXT ASM_CONST(0x00000200) /* PMCCEXT control */
#define MMCR0_PMCC_U6 0x00080000UL /* PMC1-6 are R/W by user (PR) */
#define MMCR0_PMC1CE 0x00008000UL /* PMC1 count enable*/
#define MMCR0_PMCjCE ASM_CONST(0x00004000) /* PMCj count enable*/
@ -1203,7 +1201,7 @@
#ifdef CONFIG_PPC_BOOK3S_32
#define SPRN_SPRG_SCRATCH0 SPRN_SPRG0
#define SPRN_SPRG_SCRATCH1 SPRN_SPRG1
#define SPRN_SPRG_PGDIR SPRN_SPRG2
#define SPRN_SPRG_SCRATCH2 SPRN_SPRG2
#define SPRN_SPRG_603_LRU SPRN_SPRG4
#endif
@ -1232,14 +1230,9 @@
#define SPRN_SPRG_WSCRATCH_MC SPRN_SPRG1
#define SPRN_SPRG_RSCRATCH4 SPRN_SPRG7R
#define SPRN_SPRG_WSCRATCH4 SPRN_SPRG7W
#ifdef CONFIG_E200
#define SPRN_SPRG_RSCRATCH_DBG SPRN_SPRG6R
#define SPRN_SPRG_WSCRATCH_DBG SPRN_SPRG6W
#else
#define SPRN_SPRG_RSCRATCH_DBG SPRN_SPRG9
#define SPRN_SPRG_WSCRATCH_DBG SPRN_SPRG9
#endif
#endif
#ifdef CONFIG_PPC_8xx
#define SPRN_SPRG_SCRATCH0 SPRN_SPRG0
@ -1419,37 +1412,6 @@ static inline void msr_check_and_clear(unsigned long bits)
__msr_check_and_clear(bits);
}
#if defined(CONFIG_PPC_CELL) || defined(CONFIG_E500)
#define mftb() ({unsigned long rval; \
asm volatile( \
"90: mfspr %0, %2;\n" \
ASM_FTR_IFSET( \
"97: cmpwi %0,0;\n" \
" beq- 90b;\n", "", %1) \
: "=r" (rval) \
: "i" (CPU_FTR_CELL_TB_BUG), "i" (SPRN_TBRL) : "cr0"); \
rval;})
#elif defined(CONFIG_PPC_8xx)
#define mftb() ({unsigned long rval; \
asm volatile("mftbl %0" : "=r" (rval)); rval;})
#else
#define mftb() ({unsigned long rval; \
asm volatile("mfspr %0, %1" : \
"=r" (rval) : "i" (SPRN_TBRL)); rval;})
#endif /* !CONFIG_PPC_CELL */
#if defined(CONFIG_PPC_8xx)
#define mftbu() ({unsigned long rval; \
asm volatile("mftbu %0" : "=r" (rval)); rval;})
#else
#define mftbu() ({unsigned long rval; \
asm volatile("mfspr %0, %1" : "=r" (rval) : \
"i" (SPRN_TBRU)); rval;})
#endif
#define mttbl(v) asm volatile("mttbl %0":: "r"(v))
#define mttbu(v) asm volatile("mttbu %0":: "r"(v))
#ifdef CONFIG_PPC32
#define mfsrin(v) ({unsigned int rval; \
asm volatile("mfsrin %0,%1" : "=r" (rval) : "r" (v)); \


@ -281,18 +281,6 @@
#define MSRP_PMMP 0x00000004 /* Protect MSR[PMM] */
#endif
#ifdef CONFIG_E200
#define MCSR_MCP 0x80000000UL /* Machine Check Input Pin */
#define MCSR_CP_PERR 0x20000000UL /* Cache Push Parity Error */
#define MCSR_CPERR 0x10000000UL /* Cache Parity Error */
#define MCSR_EXCP_ERR 0x08000000UL /* ISI, ITLB, or Bus Error on 1st insn
fetch for an exception handler */
#define MCSR_BUS_IRERR 0x00000010UL /* Read Bus Error on instruction fetch*/
#define MCSR_BUS_DRERR 0x00000008UL /* Read Bus Error on data load */
#define MCSR_BUS_WRERR 0x00000004UL /* Write Bus Error on buffered
store or cache line push */
#endif
/* Bit definitions for the HID1 */
#ifdef CONFIG_E500
/* e500v1/v2 */


@ -23,14 +23,6 @@ struct rtas_t {
struct device_node *dev; /* virtual address pointer */
};
struct rtas_suspend_me_data {
atomic_t working; /* number of cpus accessing this struct */
atomic_t done;
int token; /* ibm,suspend-me */
atomic_t error;
struct completion *complete; /* wait on this until working == 0 */
};
struct rtas_error_log {
/* Byte 0 */
u8 byte0; /* Architectural version */


@ -23,11 +23,16 @@
#define RTAS_RMOBUF_MAX (64 * 1024)
/* RTAS return status codes */
#define RTAS_NOT_SUSPENDABLE -9004
#define RTAS_BUSY -2 /* RTAS Busy */
#define RTAS_EXTENDED_DELAY_MIN 9900
#define RTAS_EXTENDED_DELAY_MAX 9905
/* statuses specific to ibm,suspend-me */
#define RTAS_SUSPEND_ABORTED 9000 /* Suspension aborted */
#define RTAS_NOT_SUSPENDABLE -9004 /* Partition not suspendable */
#define RTAS_THREADS_ACTIVE -9005 /* Multiple processor threads active */
#define RTAS_OUTSTANDING_COPROC -9006 /* Outstanding coprocessor operations */
/*
* In general to call RTAS use rtas_token("string") to lookup
* an RTAS token for the given string (e.g. "event-scan").
@ -242,6 +247,7 @@ extern void __noreturn rtas_restart(char *cmd);
extern void rtas_power_off(void);
extern void __noreturn rtas_halt(void);
extern void rtas_os_term(char *str);
void rtas_activate_firmware(void);
extern int rtas_get_sensor(int sensor, int index, int *state);
extern int rtas_get_sensor_fast(int sensor, int index, int *state);
extern int rtas_get_power_level(int powerdomain, int *level);
@ -250,9 +256,7 @@ extern bool rtas_indicator_present(int token, int *maxindex);
extern int rtas_set_indicator(int indicator, int index, int new_value);
extern int rtas_set_indicator_fast(int indicator, int index, int new_value);
extern void rtas_progress(char *s, unsigned short hex);
extern int rtas_suspend_cpu(struct rtas_suspend_me_data *data);
extern int rtas_suspend_last_cpu(struct rtas_suspend_me_data *data);
extern int rtas_ibm_suspend_me(u64 handle);
int rtas_ibm_suspend_me(int *fw_status);
struct rtc_time;
extern time64_t rtas_get_boot_time(void);
@ -272,8 +276,13 @@ extern time64_t last_rtas_event;
extern int clobbering_unread_rtas_event(void);
extern int pseries_devicetree_update(s32 scope);
extern void post_mobility_fixup(void);
int rtas_syscall_dispatch_ibm_suspend_me(u64 handle);
#else
static inline int clobbering_unread_rtas_event(void) { return 0; }
static inline int rtas_syscall_dispatch_ibm_suspend_me(u64 handle)
{
return -EINVAL;
}
#endif
#ifdef CONFIG_PPC_RTAS_DAEMON


@ -134,6 +134,7 @@ static inline struct cpumask *cpu_smallcore_mask(int cpu)
extern int cpu_to_core_id(int cpu);
extern bool has_big_cores;
extern bool thread_group_shares_l2;
#define cpu_smt_mask cpu_smt_mask
#ifdef CONFIG_SCHED_SMT
@ -187,6 +188,7 @@ extern void __cpu_die(unsigned int cpu);
/* for UP */
#define hard_smp_processor_id() get_hard_smp_processor_id(0)
#define smp_setup_cpu_maps()
#define thread_group_shares_l2 0
static inline void inhibit_secondary_onlining(void) {}
static inline void uninhibit_secondary_onlining(void) {}
static inline const struct cpumask *cpu_sibling_mask(int cpu)
@ -199,6 +201,10 @@ static inline const struct cpumask *cpu_smallcore_mask(int cpu)
return cpumask_of(cpu);
}
static inline const struct cpumask *cpu_l2_cache_mask(int cpu)
{
return cpumask_of(cpu);
}
#endif /* CONFIG_SMP */
#ifdef CONFIG_PPC64


@ -77,10 +77,8 @@ struct thread_info {
/* how to get the thread information struct from C */
extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
#ifdef CONFIG_PPC_BOOK3S_64
void arch_setup_new_exec(void);
#define arch_setup_new_exec arch_setup_new_exec
#endif
#endif /* __ASSEMBLY__ */


@ -15,6 +15,7 @@
#include <asm/processor.h>
#include <asm/cpu_has_feature.h>
#include <asm/vdso/timebase.h>
/* time.c */
extern unsigned long tb_ticks_per_jiffy;
@ -38,44 +39,14 @@ struct div_result {
u64 result_low;
};
/* For compatibility, get_tbl() is defined as get_tb() on ppc64 */
static inline unsigned long get_tbl(void)
{
return mftb();
}
static inline u64 get_vtb(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
if (cpu_has_feature(CPU_FTR_ARCH_207S))
return mfspr(SPRN_VTB);
#endif
return 0;
}
static inline u64 get_tb(void)
{
unsigned int tbhi, tblo, tbhi2;
if (IS_ENABLED(CONFIG_PPC64))
return mftb();
do {
tbhi = mftbu();
tblo = mftb();
tbhi2 = mftbu();
} while (tbhi != tbhi2);
return ((u64)tbhi << 32) | tblo;
}
static inline void set_tb(unsigned int upper, unsigned int lower)
{
mtspr(SPRN_TBWL, 0);
mtspr(SPRN_TBWU, upper);
mtspr(SPRN_TBWL, lower);
}
/* Accessor functions for the decrementer register.
* The 4xx doesn't even have a decrementer. I tried to use the
* generic timer interrupt code, which seems OK, with the 4xx PIT


@ -9,7 +9,7 @@
*/
#include <asm/cputable.h>
#include <asm/reg.h>
#include <asm/vdso/timebase.h>
#define CLOCK_TICK_RATE 1024000 /* Underlying HZ */


@ -40,9 +40,6 @@ extern void tlb_flush(struct mmu_gather *tlb);
/* Get the generic bits... */
#include <asm-generic/tlb.h>
extern void flush_hash_entry(struct mm_struct *mm, pte_t *ptep,
unsigned long address);
static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
unsigned long address)
{


@ -1,12 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __PPC64_VDSO_H__
#define __PPC64_VDSO_H__
#ifdef __KERNEL__
/* Default link addresses for the vDSOs */
#define VDSO32_LBASE 0x0
#define VDSO64_LBASE 0x0
#ifndef _ASM_POWERPC_VDSO_H
#define _ASM_POWERPC_VDSO_H
/* Default map addresses for 32bit vDSO */
#define VDSO32_MBASE 0x100000
@ -15,10 +9,17 @@
#ifndef __ASSEMBLY__
/* Offsets relative to thread->vdso_base */
extern unsigned long vdso64_rt_sigtramp;
extern unsigned long vdso32_sigtramp;
extern unsigned long vdso32_rt_sigtramp;
#ifdef CONFIG_PPC64
#include <generated/vdso64-offsets.h>
#endif
#ifdef CONFIG_VDSO32
#include <generated/vdso32-offsets.h>
#endif
#define VDSO64_SYMBOL(base, name) ((unsigned long)(base) + (vdso64_offset_##name))
#define VDSO32_SYMBOL(base, name) ((unsigned long)(base) + (vdso32_offset_##name))
int vdso_getcpu_init(void);
@ -51,6 +52,4 @@ int vdso_getcpu_init(void);
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* __PPC64_VDSO_H__ */
#endif /* _ASM_POWERPC_VDSO_H */
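
For illustration, a one-line sketch of how the new VDSO64_SYMBOL() macro is meant to replace the removed vdso64_rt_sigtramp offset. The "sigtramp_rt64" suffix stands in for one of the generated vdso64_offset_* names and is an assumption here; the actual signal code may use a different symbol.

/* Sketch only: compute the 64-bit rt-sigreturn trampoline address from the
 * mapped VDSO base stored in the mm context. */
unsigned long tramp = VDSO64_SYMBOL(current->mm->context.vdso, sigtramp_rt64);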


@ -0,0 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_VDSO_CLOCKSOURCE_H
#define _ASM_POWERPC_VDSO_CLOCKSOURCE_H
#define VDSO_ARCH_CLOCKMODES VDSO_CLOCKMODE_ARCHTIMER
#endif


@ -0,0 +1,201 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_VDSO_GETTIMEOFDAY_H
#define _ASM_POWERPC_VDSO_GETTIMEOFDAY_H
#ifdef __ASSEMBLY__
#include <asm/ppc_asm.h>
/*
* These macros set up two stack frames, one for the caller and one for the
* callee, because there is no requirement for the caller to set up a stack
* frame when calling the VDSO, so it may have omitted one, especially on PPC64
*/
.macro cvdso_call funct
.cfi_startproc
PPC_STLU r1, -PPC_MIN_STKFRM(r1)
mflr r0
.cfi_register lr, r0
PPC_STLU r1, -PPC_MIN_STKFRM(r1)
PPC_STL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)
#ifdef __powerpc64__
PPC_STL r2, PPC_MIN_STKFRM + STK_GOT(r1)
#endif
get_datapage r5
addi r5, r5, VDSO_DATA_OFFSET
bl DOTSYM(\funct)
PPC_LL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)
#ifdef __powerpc64__
PPC_LL r2, PPC_MIN_STKFRM + STK_GOT(r1)
#endif
cmpwi r3, 0
mtlr r0
.cfi_restore lr
addi r1, r1, 2 * PPC_MIN_STKFRM
crclr so
beqlr+
crset so
neg r3, r3
blr
.cfi_endproc
.endm
.macro cvdso_call_time funct
.cfi_startproc
PPC_STLU r1, -PPC_MIN_STKFRM(r1)
mflr r0
.cfi_register lr, r0
PPC_STLU r1, -PPC_MIN_STKFRM(r1)
PPC_STL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)
#ifdef __powerpc64__
PPC_STL r2, PPC_MIN_STKFRM + STK_GOT(r1)
#endif
get_datapage r4
addi r4, r4, VDSO_DATA_OFFSET
bl DOTSYM(\funct)
PPC_LL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)
#ifdef __powerpc64__
PPC_LL r2, PPC_MIN_STKFRM + STK_GOT(r1)
#endif
crclr so
mtlr r0
.cfi_restore lr
addi r1, r1, 2 * PPC_MIN_STKFRM
blr
.cfi_endproc
.endm
#else
#include <asm/vdso/timebase.h>
#include <asm/barrier.h>
#include <asm/unistd.h>
#include <uapi/linux/time.h>
#define VDSO_HAS_CLOCK_GETRES 1
#define VDSO_HAS_TIME 1
static __always_inline int do_syscall_2(const unsigned long _r0, const unsigned long _r3,
const unsigned long _r4)
{
register long r0 asm("r0") = _r0;
register unsigned long r3 asm("r3") = _r3;
register unsigned long r4 asm("r4") = _r4;
register int ret asm ("r3");
asm volatile(
" sc\n"
" bns+ 1f\n"
" neg %0, %0\n"
"1:\n"
: "=r" (ret), "+r" (r4), "+r" (r0)
: "r" (r3)
: "memory", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12", "cr0", "ctr");
return ret;
}
static __always_inline
int gettimeofday_fallback(struct __kernel_old_timeval *_tv, struct timezone *_tz)
{
return do_syscall_2(__NR_gettimeofday, (unsigned long)_tv, (unsigned long)_tz);
}
static __always_inline
int clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
return do_syscall_2(__NR_clock_gettime, _clkid, (unsigned long)_ts);
}
static __always_inline
int clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
return do_syscall_2(__NR_clock_getres, _clkid, (unsigned long)_ts);
}
#ifdef CONFIG_VDSO32
#define BUILD_VDSO32 1
static __always_inline
int clock_gettime32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
return do_syscall_2(__NR_clock_gettime, _clkid, (unsigned long)_ts);
}
static __always_inline
int clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
return do_syscall_2(__NR_clock_getres, _clkid, (unsigned long)_ts);
}
#endif
static __always_inline u64 __arch_get_hw_counter(s32 clock_mode,
const struct vdso_data *vd)
{
return get_tb();
}
const struct vdso_data *__arch_get_vdso_data(void);
static inline bool vdso_clocksource_ok(const struct vdso_data *vd)
{
return true;
}
#define vdso_clocksource_ok vdso_clocksource_ok
/*
* powerpc specific delta calculation.
*
* This variant removes the masking of the subtraction because the
* clocksource mask of all VDSO capable clocksources on powerpc is U64_MAX
* which would result in a pointless operation. The compiler cannot
* optimize it away as the mask comes from the vdso data and is not compile
* time constant.
*/
static __always_inline u64 vdso_calc_delta(u64 cycles, u64 last, u64 mask, u32 mult)
{
return (cycles - last) * mult;
}
#define vdso_calc_delta vdso_calc_delta
#ifndef __powerpc64__
static __always_inline u64 vdso_shift_ns(u64 ns, unsigned long shift)
{
u32 hi = ns >> 32;
u32 lo = ns;
lo >>= shift;
lo |= hi << (32 - shift);
hi >>= shift;
if (likely(hi == 0))
return lo;
return ((u64)hi << 32) | lo;
}
#define vdso_shift_ns vdso_shift_ns
#endif
#ifdef __powerpc64__
int __c_kernel_clock_gettime(clockid_t clock, struct __kernel_timespec *ts,
const struct vdso_data *vd);
int __c_kernel_clock_getres(clockid_t clock_id, struct __kernel_timespec *res,
const struct vdso_data *vd);
#else
int __c_kernel_clock_gettime(clockid_t clock, struct old_timespec32 *ts,
const struct vdso_data *vd);
int __c_kernel_clock_gettime64(clockid_t clock, struct __kernel_timespec *ts,
const struct vdso_data *vd);
int __c_kernel_clock_getres(clockid_t clock_id, struct old_timespec32 *res,
const struct vdso_data *vd);
#endif
int __c_kernel_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz,
const struct vdso_data *vd);
__kernel_old_time_t __c_kernel_time(__kernel_old_time_t *time,
const struct vdso_data *vd);
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_VDSO_GETTIMEOFDAY_H */
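
To show where the prototypes above lead, a plausible sketch of the C side of the VDSO (something like arch/powerpc/kernel/vdso64/vgettimeofday.c), delegating to the generic C VDSO. The __cvdso_*_data() helpers are the generic implementations in lib/vdso/gettimeofday.c and are assumed to be pulled in by the vdso Makefile (e.g. via -include of that file); treat this as a sketch, not the patch's exact code.

#include <linux/time.h>
#include <linux/types.h>

int __c_kernel_clock_gettime(clockid_t clock, struct __kernel_timespec *ts,
			     const struct vdso_data *vd)
{
	return __cvdso_clock_gettime_data(vd, clock, ts);
}

int __c_kernel_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz,
			    const struct vdso_data *vd)
{
	return __cvdso_gettimeofday_data(vd, tv, tz);
}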


@ -0,0 +1,23 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef _ASM_POWERPC_VDSO_PROCESSOR_H
#define _ASM_POWERPC_VDSO_PROCESSOR_H
#ifndef __ASSEMBLY__
/* Macros for adjusting thread priority (hardware multi-threading) */
#define HMT_very_low() asm volatile("or 31, 31, 31 # very low priority")
#define HMT_low() asm volatile("or 1, 1, 1 # low priority")
#define HMT_medium_low() asm volatile("or 6, 6, 6 # medium low priority")
#define HMT_medium() asm volatile("or 2, 2, 2 # medium priority")
#define HMT_medium_high() asm volatile("or 5, 5, 5 # medium high priority")
#define HMT_high() asm volatile("or 3, 3, 3 # high priority")
#ifdef CONFIG_PPC64
#define cpu_relax() do { HMT_low(); HMT_medium(); barrier(); } while (0)
#else
#define cpu_relax() barrier()
#endif
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_VDSO_PROCESSOR_H */


@ -0,0 +1,79 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Common timebase prototypes and such for all ppc machines.
*/
#ifndef _ASM_POWERPC_VDSO_TIMEBASE_H
#define _ASM_POWERPC_VDSO_TIMEBASE_H
#include <asm/reg.h>
/*
* We use __powerpc64__ here because we want the compat VDSO to use the 32-bit
* version below in the else case of the ifdef.
*/
#if defined(__powerpc64__) && (defined(CONFIG_PPC_CELL) || defined(CONFIG_E500))
#define mftb() ({unsigned long rval; \
asm volatile( \
"90: mfspr %0, %2;\n" \
ASM_FTR_IFSET( \
"97: cmpwi %0,0;\n" \
" beq- 90b;\n", "", %1) \
: "=r" (rval) \
: "i" (CPU_FTR_CELL_TB_BUG), "i" (SPRN_TBRL) : "cr0"); \
rval;})
#elif defined(CONFIG_PPC_8xx)
#define mftb() ({unsigned long rval; \
asm volatile("mftbl %0" : "=r" (rval)); rval;})
#else
#define mftb() ({unsigned long rval; \
asm volatile("mfspr %0, %1" : \
"=r" (rval) : "i" (SPRN_TBRL)); rval;})
#endif /* !CONFIG_PPC_CELL */
#if defined(CONFIG_PPC_8xx)
#define mftbu() ({unsigned long rval; \
asm volatile("mftbu %0" : "=r" (rval)); rval;})
#else
#define mftbu() ({unsigned long rval; \
asm volatile("mfspr %0, %1" : "=r" (rval) : \
"i" (SPRN_TBRU)); rval;})
#endif
#define mttbl(v) asm volatile("mttbl %0":: "r"(v))
#define mttbu(v) asm volatile("mttbu %0":: "r"(v))
/* For compatibility, get_tbl() is defined as get_tb() on ppc64 */
static inline unsigned long get_tbl(void)
{
return mftb();
}
static inline u64 get_tb(void)
{
unsigned int tbhi, tblo, tbhi2;
/*
* We use __powerpc64__ here not CONFIG_PPC64 because we want the compat
* VDSO to use the 32-bit compatible version in the while loop below.
*/
if (__is_defined(__powerpc64__))
return mftb();
do {
tbhi = mftbu();
tblo = mftb();
tbhi2 = mftbu();
} while (tbhi != tbhi2);
return ((u64)tbhi << 32) | tblo;
}
static inline void set_tb(unsigned int upper, unsigned int lower)
{
mtspr(SPRN_TBWL, 0);
mtspr(SPRN_TBWU, upper);
mtspr(SPRN_TBWL, lower);
}
#endif /* _ASM_POWERPC_VDSO_TIMEBASE_H */
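
For illustration only (not from the patch): converting a timebase delta read with get_tb() into nanoseconds using tb_ticks_per_sec from asm/time.h. The helper name is hypothetical, and the 64-bit multiply is a simplification that can overflow for very long intervals.

#include <linux/math64.h>
#include <linux/time64.h>
#include <asm/time.h>

static u64 demo_tb_delta_to_ns(u64 start_tb, u64 end_tb)
{
	return div64_u64((end_tb - start_tb) * NSEC_PER_SEC, tb_ticks_per_sec);
}

/* usage sketch: u64 t0 = get_tb(); ... ; ns = demo_tb_delta_to_ns(t0, get_tb()); */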


@ -0,0 +1,25 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_VDSO_VSYSCALL_H
#define _ASM_POWERPC_VDSO_VSYSCALL_H
#ifndef __ASSEMBLY__
#include <linux/timekeeper_internal.h>
#include <asm/vdso_datapage.h>
/*
* Update the vDSO data page to keep in sync with kernel timekeeping.
*/
static __always_inline
struct vdso_data *__arch_get_k_vdso_data(void)
{
return vdso_data->data;
}
#define __arch_get_k_vdso_data __arch_get_k_vdso_data
/* The asm-generic header needs to be included after the definitions above */
#include <asm-generic/vdso/vsyscall.h>
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_VDSO_VSYSCALL_H */


@ -36,6 +36,7 @@
#include <linux/unistd.h>
#include <linux/time.h>
#include <vdso/datapage.h>
#define SYSCALL_MAP_SIZE ((NR_syscalls + 31) / 32)
@ -45,7 +46,7 @@
#ifdef CONFIG_PPC64
struct vdso_data {
struct vdso_arch_data {
__u8 eye_catcher[16]; /* Eyecatcher: SYSTEMCFG:PPC64 0x00 */
struct { /* Systemcfg version numbers */
__u32 major; /* Major number 0x10 */
@ -59,13 +60,13 @@ struct vdso_data {
__u32 processor; /* Processor type 0x1C */
__u64 processorCount; /* # of physical processors 0x20 */
__u64 physicalMemorySize; /* Size of real memory(B) 0x28 */
__u64 tb_orig_stamp; /* Timebase at boot 0x30 */
__u64 tb_orig_stamp; /* (NU) Timebase at boot 0x30 */
__u64 tb_ticks_per_sec; /* Timebase tics / sec 0x38 */
__u64 tb_to_xs; /* Inverse of TB to 2^20 0x40 */
__u64 stamp_xsec; /* 0x48 */
__u64 tb_update_count; /* Timebase atomicity ctr 0x50 */
__u32 tz_minuteswest; /* Minutes west of Greenwich 0x58 */
__u32 tz_dsttime; /* Type of dst correction 0x5C */
__u64 tb_to_xs; /* (NU) Inverse of TB to 2^20 0x40 */
__u64 stamp_xsec; /* (NU) 0x48 */
__u64 tb_update_count; /* (NU) Timebase atomicity ctr 0x50 */
__u32 tz_minuteswest; /* (NU) Min. west of Greenwich 0x58 */
__u32 tz_dsttime; /* (NU) Type of dst correction 0x5C */
__u32 dcache_size; /* L1 d-cache size 0x60 */
__u32 dcache_line_size; /* L1 d-cache line size 0x64 */
__u32 icache_size; /* L1 i-cache size 0x68 */
@ -78,14 +79,10 @@ struct vdso_data {
__u32 icache_block_size; /* L1 i-cache block size */
__u32 dcache_log_block_size; /* L1 d-cache log block size */
__u32 icache_log_block_size; /* L1 i-cache log block size */
__u32 stamp_sec_fraction; /* fractional seconds of stamp_xtime */
__s32 wtom_clock_nsec; /* Wall to monotonic clock nsec */
__s64 wtom_clock_sec; /* Wall to monotonic clock sec */
__s64 stamp_xtime_sec; /* xtime secs as at tb_orig_stamp */
__s64 stamp_xtime_nsec; /* xtime nsecs as at tb_orig_stamp */
__u32 hrtimer_res; /* hrtimer resolution */
__u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls */
__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
__u32 syscall_map[SYSCALL_MAP_SIZE]; /* Map of syscalls */
__u32 compat_syscall_map[SYSCALL_MAP_SIZE]; /* Map of compat syscalls */
struct vdso_data data[CS_BASES];
};
#else /* CONFIG_PPC64 */
@ -93,35 +90,27 @@ struct vdso_data {
/*
* And here is the simpler 32 bits version
*/
struct vdso_data {
__u64 tb_orig_stamp; /* Timebase at boot 0x30 */
struct vdso_arch_data {
__u64 tb_ticks_per_sec; /* Timebase tics / sec 0x38 */
__u64 tb_to_xs; /* Inverse of TB to 2^20 0x40 */
__u64 stamp_xsec; /* 0x48 */
__u32 tb_update_count; /* Timebase atomicity ctr 0x50 */
__u32 tz_minuteswest; /* Minutes west of Greenwich 0x58 */
__u32 tz_dsttime; /* Type of dst correction 0x5C */
__s32 wtom_clock_sec; /* Wall to monotonic clock */
__s32 wtom_clock_nsec;
__s32 stamp_xtime_sec; /* xtime seconds as at tb_orig_stamp */
__s32 stamp_xtime_nsec; /* xtime nsecs as at tb_orig_stamp */
__u32 stamp_sec_fraction; /* fractional seconds of stamp_xtime */
__u32 hrtimer_res; /* hrtimer resolution */
__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
__u32 syscall_map[SYSCALL_MAP_SIZE]; /* Map of syscalls */
__u32 compat_syscall_map[0]; /* No compat syscalls on PPC32 */
struct vdso_data data[CS_BASES];
};
#endif /* CONFIG_PPC64 */
extern struct vdso_data *vdso_data;
extern struct vdso_arch_data *vdso_data;
#else /* __ASSEMBLY__ */
.macro get_datapage ptr, tmp
.macro get_datapage ptr
bcl 20, 31, .+4
999:
mflr \ptr
addi \ptr, \ptr, (__kernel_datapage_offset - (.-4))@l
lwz \tmp, 0(\ptr)
add \ptr, \tmp, \ptr
#if CONFIG_PPC_PAGE_SHIFT > 14
addis \ptr, \ptr, (_vdso_datapage - 999b)@ha
#endif
addi \ptr, \ptr, (_vdso_datapage - 999b)@l
.endm
#endif /* __ASSEMBLY__ */


@ -60,13 +60,13 @@ struct xive_irq_data {
};
#define XIVE_IRQ_FLAG_STORE_EOI 0x01
#define XIVE_IRQ_FLAG_LSI 0x02
#define XIVE_IRQ_FLAG_SHIFT_BUG 0x04
#define XIVE_IRQ_FLAG_MASK_FW 0x08
#define XIVE_IRQ_FLAG_EOI_FW 0x10
/* #define XIVE_IRQ_FLAG_SHIFT_BUG 0x04 */ /* P9 DD1.0 workaround */
/* #define XIVE_IRQ_FLAG_MASK_FW 0x08 */ /* P9 DD1.0 workaround */
/* #define XIVE_IRQ_FLAG_EOI_FW 0x10 */ /* P9 DD1.0 workaround */
#define XIVE_IRQ_FLAG_H_INT_ESB 0x20
/* Special flag set by KVM for escalation interrupts */
#define XIVE_IRQ_NO_EOI 0x80
#define XIVE_IRQ_FLAG_NO_EOI 0x80
#define XIVE_INVALID_CHIP_ID -1


@ -173,6 +173,9 @@ KCOV_INSTRUMENT_cputable.o := n
KCOV_INSTRUMENT_setup_64.o := n
KCOV_INSTRUMENT_paca.o := n
CFLAGS_setup_64.o += -fno-stack-protector
CFLAGS_paca.o += -fno-stack-protector
extra-$(CONFIG_PPC_FPU) += fpu.o
extra-$(CONFIG_ALTIVEC) += vector.o
extra-$(CONFIG_PPC64) += entry_64.o


@ -110,9 +110,11 @@ int main(void)
#ifdef CONFIG_BOOKE
OFFSET(THREAD_NORMSAVES, thread_struct, normsave[0]);
#endif
#ifdef CONFIG_PPC_FPU
OFFSET(THREAD_FPEXC_MODE, thread_struct, fpexc_mode);
OFFSET(THREAD_FPSTATE, thread_struct, fp_state.fpr);
OFFSET(THREAD_FPSAVEAREA, thread_struct, fp_save_area);
#endif
OFFSET(FPSTATE_FPSCR, thread_fp_state, fpscr);
OFFSET(THREAD_LOAD_FP, thread_struct, load_fp);
#ifdef CONFIG_ALTIVEC
@ -354,10 +356,15 @@ int main(void)
STACK_PT_REGS_OFFSET(_PPR, ppr);
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_PPC_PKEY
STACK_PT_REGS_OFFSET(STACK_REGS_AMR, amr);
STACK_PT_REGS_OFFSET(STACK_REGS_IAMR, iamr);
#endif
#ifdef CONFIG_PPC_KUAP
STACK_PT_REGS_OFFSET(STACK_REGS_KUAP, kuap);
#endif
#if defined(CONFIG_PPC32)
#if defined(CONFIG_BOOKE) || defined(CONFIG_40x)
DEFINE(EXC_LVL_SIZE, STACK_EXC_LVL_FRAME_SIZE);
@ -398,47 +405,18 @@ int main(void)
#endif /* ! CONFIG_PPC64 */
/* datapage offsets for use by vdso */
OFFSET(CFG_TB_ORIG_STAMP, vdso_data, tb_orig_stamp);
OFFSET(CFG_TB_TICKS_PER_SEC, vdso_data, tb_ticks_per_sec);
OFFSET(CFG_TB_TO_XS, vdso_data, tb_to_xs);
OFFSET(CFG_TB_UPDATE_COUNT, vdso_data, tb_update_count);
OFFSET(CFG_TZ_MINUTEWEST, vdso_data, tz_minuteswest);
OFFSET(CFG_TZ_DSTTIME, vdso_data, tz_dsttime);
OFFSET(CFG_SYSCALL_MAP32, vdso_data, syscall_map_32);
OFFSET(WTOM_CLOCK_SEC, vdso_data, wtom_clock_sec);
OFFSET(WTOM_CLOCK_NSEC, vdso_data, wtom_clock_nsec);
OFFSET(STAMP_XTIME_SEC, vdso_data, stamp_xtime_sec);
OFFSET(STAMP_XTIME_NSEC, vdso_data, stamp_xtime_nsec);
OFFSET(STAMP_SEC_FRAC, vdso_data, stamp_sec_fraction);
OFFSET(CLOCK_HRTIMER_RES, vdso_data, hrtimer_res);
OFFSET(VDSO_DATA_OFFSET, vdso_arch_data, data);
OFFSET(CFG_TB_TICKS_PER_SEC, vdso_arch_data, tb_ticks_per_sec);
#ifdef CONFIG_PPC64
OFFSET(CFG_ICACHE_BLOCKSZ, vdso_data, icache_block_size);
OFFSET(CFG_DCACHE_BLOCKSZ, vdso_data, dcache_block_size);
OFFSET(CFG_ICACHE_LOGBLOCKSZ, vdso_data, icache_log_block_size);
OFFSET(CFG_DCACHE_LOGBLOCKSZ, vdso_data, dcache_log_block_size);
OFFSET(CFG_SYSCALL_MAP64, vdso_data, syscall_map_64);
OFFSET(TVAL64_TV_SEC, __kernel_old_timeval, tv_sec);
OFFSET(TVAL64_TV_USEC, __kernel_old_timeval, tv_usec);
OFFSET(CFG_ICACHE_BLOCKSZ, vdso_arch_data, icache_block_size);
OFFSET(CFG_DCACHE_BLOCKSZ, vdso_arch_data, dcache_block_size);
OFFSET(CFG_ICACHE_LOGBLOCKSZ, vdso_arch_data, icache_log_block_size);
OFFSET(CFG_DCACHE_LOGBLOCKSZ, vdso_arch_data, dcache_log_block_size);
OFFSET(CFG_SYSCALL_MAP64, vdso_arch_data, syscall_map);
OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, compat_syscall_map);
#else
OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, syscall_map);
#endif
OFFSET(TSPC64_TV_SEC, __kernel_timespec, tv_sec);
OFFSET(TSPC64_TV_NSEC, __kernel_timespec, tv_nsec);
OFFSET(TVAL32_TV_SEC, old_timeval32, tv_sec);
OFFSET(TVAL32_TV_USEC, old_timeval32, tv_usec);
OFFSET(TSPC32_TV_SEC, old_timespec32, tv_sec);
OFFSET(TSPC32_TV_NSEC, old_timespec32, tv_nsec);
/* timeval/timezone offsets for use by vdso */
OFFSET(TZONE_TZ_MINWEST, timezone, tz_minuteswest);
OFFSET(TZONE_TZ_DSTTIME, timezone, tz_dsttime);
/* Other bits used by the vdso */
DEFINE(CLOCK_REALTIME, CLOCK_REALTIME);
DEFINE(CLOCK_MONOTONIC, CLOCK_MONOTONIC);
DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
DEFINE(CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE);
DEFINE(CLOCK_MAX, CLOCK_TAI);
DEFINE(NSEC_PER_SEC, NSEC_PER_SEC);
DEFINE(EINVAL, EINVAL);
DEFINE(KTIME_LOW_RES, KTIME_LOW_RES);
#ifdef CONFIG_BUG
DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));


@ -655,11 +655,27 @@ static unsigned int index_dir_to_cpu(struct cache_index_dir *index)
* On big-core systems, each core has two groups of CPUs each of which
* has its own L1-cache. The thread-siblings which share l1-cache with
* @cpu can be obtained via cpu_smallcore_mask().
*
* On some big-core systems, the L2 cache is shared only between some
* groups of siblings. This is already parsed and encoded in
* cpu_l2_cache_mask().
*
* TODO: cache_lookup_or_instantiate() needs to be made aware of the
* "ibm,thread-groups" property so that cache->shared_cpu_map
* reflects the correct siblings on platforms that have this
* device-tree property. This helper function is only a stop-gap
* solution so that we report the correct siblings to the
* userspace via sysfs.
*/
static const struct cpumask *get_big_core_shared_cpu_map(int cpu, struct cache *cache)
static const struct cpumask *get_shared_cpu_map(struct cache_index_dir *index, struct cache *cache)
{
if (cache->level == 1)
return cpu_smallcore_mask(cpu);
if (has_big_cores) {
int cpu = index_dir_to_cpu(index);
if (cache->level == 1)
return cpu_smallcore_mask(cpu);
if (cache->level == 2 && thread_group_shares_l2)
return cpu_l2_cache_mask(cpu);
}
return &cache->shared_cpu_map;
}
@ -670,17 +686,11 @@ show_shared_cpumap(struct kobject *k, struct kobj_attribute *attr, char *buf, bo
struct cache_index_dir *index;
struct cache *cache;
const struct cpumask *mask;
int cpu;
index = kobj_to_cache_index_dir(k);
cache = index->cache;
if (has_big_cores) {
cpu = index_dir_to_cpu(index);
mask = get_big_core_shared_cpu_map(cpu, cache);
} else {
mask = &cache->shared_cpu_map;
}
mask = get_shared_cpu_map(index, cache);
return cpumap_print_to_pagebuf(list, buf, mask);
}


@ -108,15 +108,6 @@ _GLOBAL(__setup_cpu_e6500)
#endif /* CONFIG_PPC_E500MC */
#ifdef CONFIG_PPC32
#ifdef CONFIG_E200
_GLOBAL(__setup_cpu_e200)
/* enable dedicated debug exception handling resources (Debug APU) */
mfspr r3,SPRN_HID0
ori r3,r3,HID0_DAPUEN@l
mtspr SPRN_HID0,r3
b __setup_e200_ivors
#endif /* CONFIG_E200 */
#ifdef CONFIG_E500
#ifndef CONFIG_PPC_E500MC
_GLOBAL(__setup_cpu_e500v1)


@ -1,252 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* This file contains low level CPU setup functions.
* Copyright (C) 2003 Benjamin Herrenschmidt (benh@kernel.crashing.org)
*/
#include <asm/processor.h>
#include <asm/page.h>
#include <asm/cputable.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/cache.h>
#include <asm/book3s/64/mmu-hash.h>
/* Entry: r3 = crap, r4 = ptr to cputable entry
*
* Note that we can be called twice for pseudo-PVRs
*/
_GLOBAL(__setup_cpu_power7)
mflr r11
bl __init_hvmode_206
mtlr r11
beqlr
li r0,0
mtspr SPRN_LPID,r0
LOAD_REG_IMMEDIATE(r0, PCR_MASK)
mtspr SPRN_PCR,r0
mfspr r3,SPRN_LPCR
li r4,(LPCR_LPES1 >> LPCR_LPES_SH)
bl __init_LPCR_ISA206
mtlr r11
blr
_GLOBAL(__restore_cpu_power7)
mflr r11
mfmsr r3
rldicl. r0,r3,4,63
beqlr
li r0,0
mtspr SPRN_LPID,r0
LOAD_REG_IMMEDIATE(r0, PCR_MASK)
mtspr SPRN_PCR,r0
mfspr r3,SPRN_LPCR
li r4,(LPCR_LPES1 >> LPCR_LPES_SH)
bl __init_LPCR_ISA206
mtlr r11
blr
_GLOBAL(__setup_cpu_power8)
mflr r11
bl __init_FSCR
bl __init_PMU
bl __init_PMU_ISA207
bl __init_hvmode_206
mtlr r11
beqlr
li r0,0
mtspr SPRN_LPID,r0
LOAD_REG_IMMEDIATE(r0, PCR_MASK)
mtspr SPRN_PCR,r0
mfspr r3,SPRN_LPCR
ori r3, r3, LPCR_PECEDH
li r4,0 /* LPES = 0 */
bl __init_LPCR_ISA206
bl __init_HFSCR
bl __init_PMU_HV
bl __init_PMU_HV_ISA207
mtlr r11
blr
_GLOBAL(__restore_cpu_power8)
mflr r11
bl __init_FSCR
bl __init_PMU
bl __init_PMU_ISA207
mfmsr r3
rldicl. r0,r3,4,63
mtlr r11
beqlr
li r0,0
mtspr SPRN_LPID,r0
LOAD_REG_IMMEDIATE(r0, PCR_MASK)
mtspr SPRN_PCR,r0
mfspr r3,SPRN_LPCR
ori r3, r3, LPCR_PECEDH
li r4,0 /* LPES = 0 */
bl __init_LPCR_ISA206
bl __init_HFSCR
bl __init_PMU_HV
bl __init_PMU_HV_ISA207
mtlr r11
blr
_GLOBAL(__setup_cpu_power10)
mflr r11
bl __init_FSCR_power10
bl __init_PMU
bl __init_PMU_ISA31
b 1f
_GLOBAL(__setup_cpu_power9)
mflr r11
bl __init_FSCR_power9
bl __init_PMU
1: bl __init_hvmode_206
mtlr r11
beqlr
li r0,0
mtspr SPRN_PSSCR,r0
mtspr SPRN_LPID,r0
mtspr SPRN_PID,r0
LOAD_REG_IMMEDIATE(r0, PCR_MASK)
mtspr SPRN_PCR,r0
mfspr r3,SPRN_LPCR
LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE | LPCR_HEIC)
or r3, r3, r4
LOAD_REG_IMMEDIATE(r4, LPCR_UPRT | LPCR_HR)
andc r3, r3, r4
li r4,0 /* LPES = 0 */
bl __init_LPCR_ISA300
bl __init_HFSCR
bl __init_PMU_HV
mtlr r11
blr
_GLOBAL(__restore_cpu_power10)
mflr r11
bl __init_FSCR_power10
bl __init_PMU
bl __init_PMU_ISA31
b 1f
_GLOBAL(__restore_cpu_power9)
mflr r11
bl __init_FSCR_power9
bl __init_PMU
1: mfmsr r3
rldicl. r0,r3,4,63
mtlr r11
beqlr
li r0,0
mtspr SPRN_PSSCR,r0
mtspr SPRN_LPID,r0
mtspr SPRN_PID,r0
LOAD_REG_IMMEDIATE(r0, PCR_MASK)
mtspr SPRN_PCR,r0
mfspr r3,SPRN_LPCR
LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE | LPCR_HEIC)
or r3, r3, r4
LOAD_REG_IMMEDIATE(r4, LPCR_UPRT | LPCR_HR)
andc r3, r3, r4
li r4,0 /* LPES = 0 */
bl __init_LPCR_ISA300
bl __init_HFSCR
bl __init_PMU_HV
mtlr r11
blr
__init_hvmode_206:
/* Disable CPU_FTR_HVMODE and exit if MSR:HV is not set */
mfmsr r3
rldicl. r0,r3,4,63
bnelr
ld r5,CPU_SPEC_FEATURES(r4)
LOAD_REG_IMMEDIATE(r6,CPU_FTR_HVMODE | CPU_FTR_P9_TM_HV_ASSIST)
andc r5,r5,r6
std r5,CPU_SPEC_FEATURES(r4)
blr
__init_LPCR_ISA206:
/* Setup a sane LPCR:
* Called with initial LPCR in R3 and desired LPES 2-bit value in R4
*
* LPES = 0b01 (HSRR0/1 used for 0x500)
* PECE = 0b111
* DPFD = 4
* HDICE = 0
* VC = 0b100 (VPM0=1, VPM1=0, ISL=0)
* VRMASD = 0b10000 (L=1, LP=00)
*
* Other bits untouched for now
*/
li r5,0x10
rldimi r3,r5, LPCR_VRMASD_SH, 64-LPCR_VRMASD_SH-5
/* POWER9 has no VRMASD */
__init_LPCR_ISA300:
rldimi r3,r4, LPCR_LPES_SH, 64-LPCR_LPES_SH-2
ori r3,r3,(LPCR_PECE0|LPCR_PECE1|LPCR_PECE2)
li r5,4
rldimi r3,r5, LPCR_DPFD_SH, 64-LPCR_DPFD_SH-3
clrrdi r3,r3,1 /* clear HDICE */
li r5,4
rldimi r3,r5, LPCR_VC_SH, 0
mtspr SPRN_LPCR,r3
isync
blr
__init_FSCR_power10:
mfspr r3, SPRN_FSCR
ori r3, r3, FSCR_PREFIX
mtspr SPRN_FSCR, r3
// fall through
__init_FSCR_power9:
mfspr r3, SPRN_FSCR
ori r3, r3, FSCR_SCV
mtspr SPRN_FSCR, r3
// fall through
__init_FSCR:
mfspr r3,SPRN_FSCR
ori r3,r3,FSCR_TAR|FSCR_EBB
mtspr SPRN_FSCR,r3
blr
__init_HFSCR:
mfspr r3,SPRN_HFSCR
ori r3,r3,HFSCR_TAR|HFSCR_TM|HFSCR_BHRB|HFSCR_PM|\
HFSCR_DSCR|HFSCR_VECVSX|HFSCR_FP|HFSCR_EBB|HFSCR_MSGP
mtspr SPRN_HFSCR,r3
blr
__init_PMU_HV:
li r5,0
mtspr SPRN_MMCRC,r5
blr
__init_PMU_HV_ISA207:
li r5,0
mtspr SPRN_MMCRH,r5
blr
__init_PMU:
li r5,0
mtspr SPRN_MMCRA,r5
mtspr SPRN_MMCR0,r5
mtspr SPRN_MMCR1,r5
mtspr SPRN_MMCR2,r5
blr
__init_PMU_ISA207:
li r5,0
mtspr SPRN_MMCRS,r5
blr
__init_PMU_ISA31:
li r5,0
mtspr SPRN_MMCR3,r5
LOAD_REG_IMMEDIATE(r5, MMCRA_BHRB_DISABLE)
mtspr SPRN_MMCRA,r5
blr


@ -0,0 +1,272 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright 2020, Jordan Niethe, IBM Corporation.
*
* This file contains low level CPU setup functions.
* Originally written in assembly by Benjamin Herrenschmidt & various other
* authors.
*/
#include <asm/reg.h>
#include <asm/synch.h>
#include <linux/bitops.h>
#include <asm/cputable.h>
#include <asm/cpu_setup_power.h>
/* Disable CPU_FTR_HVMODE and return false if MSR:HV is not set */
static bool init_hvmode_206(struct cpu_spec *t)
{
u64 msr;
msr = mfmsr();
if (msr & MSR_HV)
return true;
t->cpu_features &= ~(CPU_FTR_HVMODE | CPU_FTR_P9_TM_HV_ASSIST);
return false;
}
static void init_LPCR_ISA300(u64 lpcr, u64 lpes)
{
/* POWER9 has no VRMASD */
lpcr |= (lpes << LPCR_LPES_SH) & LPCR_LPES;
lpcr |= LPCR_PECE0|LPCR_PECE1|LPCR_PECE2;
lpcr |= (4ull << LPCR_DPFD_SH) & LPCR_DPFD;
lpcr &= ~LPCR_HDICE; /* clear HDICE */
lpcr |= (4ull << LPCR_VC_SH);
mtspr(SPRN_LPCR, lpcr);
isync();
}
/*
* Setup a sane LPCR:
* Called with initial LPCR and desired LPES 2-bit value
*
* LPES = 0b01 (HSRR0/1 used for 0x500)
* PECE = 0b111
* DPFD = 4
* HDICE = 0
* VC = 0b100 (VPM0=1, VPM1=0, ISL=0)
* VRMASD = 0b10000 (L=1, LP=00)
*
* Other bits untouched for now
*/
static void init_LPCR_ISA206(u64 lpcr, u64 lpes)
{
lpcr |= (0x10ull << LPCR_VRMASD_SH) & LPCR_VRMASD;
init_LPCR_ISA300(lpcr, lpes);
}
static void init_FSCR(void)
{
u64 fscr;
fscr = mfspr(SPRN_FSCR);
fscr |= FSCR_TAR|FSCR_EBB;
mtspr(SPRN_FSCR, fscr);
}
static void init_FSCR_power9(void)
{
u64 fscr;
fscr = mfspr(SPRN_FSCR);
fscr |= FSCR_SCV;
mtspr(SPRN_FSCR, fscr);
init_FSCR();
}
static void init_FSCR_power10(void)
{
u64 fscr;
fscr = mfspr(SPRN_FSCR);
fscr |= FSCR_PREFIX;
mtspr(SPRN_FSCR, fscr);
init_FSCR_power9();
}
static void init_HFSCR(void)
{
u64 hfscr;
hfscr = mfspr(SPRN_HFSCR);
hfscr |= HFSCR_TAR|HFSCR_TM|HFSCR_BHRB|HFSCR_PM|HFSCR_DSCR|\
HFSCR_VECVSX|HFSCR_FP|HFSCR_EBB|HFSCR_MSGP;
mtspr(SPRN_HFSCR, hfscr);
}
static void init_PMU_HV(void)
{
mtspr(SPRN_MMCRC, 0);
}
static void init_PMU_HV_ISA207(void)
{
mtspr(SPRN_MMCRH, 0);
}
static void init_PMU(void)
{
mtspr(SPRN_MMCRA, 0);
mtspr(SPRN_MMCR0, 0);
mtspr(SPRN_MMCR1, 0);
mtspr(SPRN_MMCR2, 0);
}
static void init_PMU_ISA207(void)
{
mtspr(SPRN_MMCRS, 0);
}
static void init_PMU_ISA31(void)
{
mtspr(SPRN_MMCR3, 0);
mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
}
/*
* Note that we can be called twice for pseudo-PVRs.
* The parameter offset is not used.
*/
void __setup_cpu_power7(unsigned long offset, struct cpu_spec *t)
{
if (!init_hvmode_206(t))
return;
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
}
void __restore_cpu_power7(void)
{
u64 msr;
msr = mfmsr();
if (!(msr & MSR_HV))
return;
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
}
void __setup_cpu_power8(unsigned long offset, struct cpu_spec *t)
{
init_FSCR();
init_PMU();
init_PMU_ISA207();
if (!init_hvmode_206(t))
return;
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
init_HFSCR();
init_PMU_HV();
init_PMU_HV_ISA207();
}
void __restore_cpu_power8(void)
{
u64 msr;
init_FSCR();
init_PMU();
init_PMU_ISA207();
msr = mfmsr();
if (!(msr & MSR_HV))
return;
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
init_HFSCR();
init_PMU_HV();
init_PMU_HV_ISA207();
}
void __setup_cpu_power9(unsigned long offset, struct cpu_spec *t)
{
init_FSCR_power9();
init_PMU();
if (!init_hvmode_206(t))
return;
mtspr(SPRN_PSSCR, 0);
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PID, 0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
init_HFSCR();
init_PMU_HV();
}
void __restore_cpu_power9(void)
{
u64 msr;
init_FSCR_power9();
init_PMU();
msr = mfmsr();
if (!(msr & MSR_HV))
return;
mtspr(SPRN_PSSCR, 0);
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PID, 0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
init_HFSCR();
init_PMU_HV();
}
void __setup_cpu_power10(unsigned long offset, struct cpu_spec *t)
{
init_FSCR_power10();
init_PMU();
init_PMU_ISA31();
if (!init_hvmode_206(t))
return;
mtspr(SPRN_PSSCR, 0);
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PID, 0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
init_HFSCR();
init_PMU_HV();
}
void __restore_cpu_power10(void)
{
u64 msr;
init_FSCR_power10();
init_PMU();
init_PMU_ISA31();
msr = mfmsr();
if (!(msr & MSR_HV))
return;
mtspr(SPRN_PSSCR, 0);
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PID, 0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
init_HFSCR();
init_PMU_HV();
}


@ -36,7 +36,6 @@ const char *powerpc_base_platform;
* and ppc64
*/
#ifdef CONFIG_PPC32
extern void __setup_cpu_e200(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_e500v1(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_e500v2(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_e500mc(unsigned long offset, struct cpu_spec* spec);
@ -60,19 +59,15 @@ extern void __setup_cpu_7410(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_745x(unsigned long offset, struct cpu_spec* spec);
#endif /* CONFIG_PPC32 */
#ifdef CONFIG_PPC64
#include <asm/cpu_setup_power.h>
extern void __setup_cpu_ppc970(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_ppc970MP(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_pa6t(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_pa6t(void);
extern void __restore_cpu_ppc970(void);
extern void __setup_cpu_power7(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_power7(void);
extern void __setup_cpu_power8(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_power8(void);
extern void __setup_cpu_power9(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_power9(void);
extern void __setup_cpu_power10(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_power10(void);
extern long __machine_check_early_realmode_p7(struct pt_regs *regs);
extern long __machine_check_early_realmode_p8(struct pt_regs *regs);
extern long __machine_check_early_realmode_p9(struct pt_regs *regs);
#endif /* CONFIG_PPC64 */
#if defined(CONFIG_E500)
extern void __setup_cpu_e5500(unsigned long offset, struct cpu_spec* spec);
@ -616,46 +611,8 @@ static struct cpu_spec __initdata cpu_specs[] = {
#endif /* CONFIG_PPC_BOOK3S_64 */
#ifdef CONFIG_PPC32
#ifdef CONFIG_PPC_BOOK3S_6xx
{ /* 603 */
.pvr_mask = 0xffff0000,
.pvr_value = 0x00030000,
.cpu_name = "603",
.cpu_features = CPU_FTRS_603,
.cpu_user_features = COMMON_USER | PPC_FEATURE_PPC_LE,
.mmu_features = 0,
.icache_bsize = 32,
.dcache_bsize = 32,
.cpu_setup = __setup_cpu_603,
.machine_check = machine_check_generic,
.platform = "ppc603",
},
{ /* 603e */
.pvr_mask = 0xffff0000,
.pvr_value = 0x00060000,
.cpu_name = "603e",
.cpu_features = CPU_FTRS_603,
.cpu_user_features = COMMON_USER | PPC_FEATURE_PPC_LE,
.mmu_features = 0,
.icache_bsize = 32,
.dcache_bsize = 32,
.cpu_setup = __setup_cpu_603,
.machine_check = machine_check_generic,
.platform = "ppc603",
},
{ /* 603ev */
.pvr_mask = 0xffff0000,
.pvr_value = 0x00070000,
.cpu_name = "603ev",
.cpu_features = CPU_FTRS_603,
.cpu_user_features = COMMON_USER | PPC_FEATURE_PPC_LE,
.mmu_features = 0,
.icache_bsize = 32,
.dcache_bsize = 32,
.cpu_setup = __setup_cpu_603,
.machine_check = machine_check_generic,
.platform = "ppc603",
},
#ifdef CONFIG_PPC_BOOK3S_32
#ifdef CONFIG_PPC_BOOK3S_604
{ /* 604 */
.pvr_mask = 0xffff0000,
.pvr_value = 0x00040000,
@ -1145,6 +1102,47 @@ static struct cpu_spec __initdata cpu_specs[] = {
.machine_check = machine_check_generic,
.platform = "ppc7450",
},
#endif /* CONFIG_PPC_BOOK3S_604 */
#ifdef CONFIG_PPC_BOOK3S_603
{ /* 603 */
.pvr_mask = 0xffff0000,
.pvr_value = 0x00030000,
.cpu_name = "603",
.cpu_features = CPU_FTRS_603,
.cpu_user_features = COMMON_USER | PPC_FEATURE_PPC_LE,
.mmu_features = 0,
.icache_bsize = 32,
.dcache_bsize = 32,
.cpu_setup = __setup_cpu_603,
.machine_check = machine_check_generic,
.platform = "ppc603",
},
{ /* 603e */
.pvr_mask = 0xffff0000,
.pvr_value = 0x00060000,
.cpu_name = "603e",
.cpu_features = CPU_FTRS_603,
.cpu_user_features = COMMON_USER | PPC_FEATURE_PPC_LE,
.mmu_features = 0,
.icache_bsize = 32,
.dcache_bsize = 32,
.cpu_setup = __setup_cpu_603,
.machine_check = machine_check_generic,
.platform = "ppc603",
},
{ /* 603ev */
.pvr_mask = 0xffff0000,
.pvr_value = 0x00070000,
.cpu_name = "603ev",
.cpu_features = CPU_FTRS_603,
.cpu_user_features = COMMON_USER | PPC_FEATURE_PPC_LE,
.mmu_features = 0,
.icache_bsize = 32,
.dcache_bsize = 32,
.cpu_setup = __setup_cpu_603,
.machine_check = machine_check_generic,
.platform = "ppc603",
},
{ /* 82xx (8240, 8245, 8260 are all 603e cores) */
.pvr_mask = 0x7fff0000,
.pvr_value = 0x00810000,
@ -1234,6 +1232,8 @@ static struct cpu_spec __initdata cpu_specs[] = {
.platform = "ppc603",
},
#endif
#endif /* CONFIG_PPC_BOOK3S_603 */
#ifdef CONFIG_PPC_BOOK3S_604
{ /* default match, we assume split I/D cache & TB (non-601)... */
.pvr_mask = 0x00000000,
.pvr_value = 0x00000000,
@ -1246,7 +1246,8 @@ static struct cpu_spec __initdata cpu_specs[] = {
.machine_check = machine_check_generic,
.platform = "ppc603",
},
#endif /* CONFIG_PPC_BOOK3S_6xx */
#endif /* CONFIG_PPC_BOOK3S_604 */
#endif /* CONFIG_PPC_BOOK3S_32 */
#ifdef CONFIG_PPC_8xx
{ /* 8xx */
.pvr_mask = 0xffff0000,
@ -1540,6 +1541,7 @@ static struct cpu_spec __initdata cpu_specs[] = {
#endif /* CONFIG_40x */
#ifdef CONFIG_44x
#ifndef CONFIG_PPC_47x
{
.pvr_mask = 0xf0000fff,
.pvr_value = 0x40000850,
@ -1822,7 +1824,19 @@ static struct cpu_spec __initdata cpu_specs[] = {
.machine_check = machine_check_440A,
.platform = "ppc440",
},
#ifdef CONFIG_PPC_47x
{ /* default match */
.pvr_mask = 0x00000000,
.pvr_value = 0x00000000,
.cpu_name = "(generic 44x PPC)",
.cpu_features = CPU_FTRS_44X,
.cpu_user_features = COMMON_USER_BOOKE,
.mmu_features = MMU_FTR_TYPE_44x,
.icache_bsize = 32,
.dcache_bsize = 32,
.machine_check = machine_check_4xx,
.platform = "ppc440",
}
#else /* CONFIG_PPC_47x */
{ /* 476 DD2 core */
.pvr_mask = 0xffffffff,
.pvr_value = 0x11a52080,
@ -1879,65 +1893,20 @@ static struct cpu_spec __initdata cpu_specs[] = {
.machine_check = machine_check_47x,
.platform = "ppc470",
},
#endif /* CONFIG_PPC_47x */
{ /* default match */
.pvr_mask = 0x00000000,
.pvr_value = 0x00000000,
.cpu_name = "(generic 44x PPC)",
.cpu_features = CPU_FTRS_44X,
.cpu_name = "(generic 47x PPC)",
.cpu_features = CPU_FTRS_47X,
.cpu_user_features = COMMON_USER_BOOKE,
.mmu_features = MMU_FTR_TYPE_44x,
.mmu_features = MMU_FTR_TYPE_47x,
.icache_bsize = 32,
.dcache_bsize = 32,
.machine_check = machine_check_4xx,
.platform = "ppc440",
.dcache_bsize = 128,
.machine_check = machine_check_47x,
.platform = "ppc470",
}
#endif /* CONFIG_PPC_47x */
#endif /* CONFIG_44x */
#ifdef CONFIG_E200
{ /* e200z5 */
.pvr_mask = 0xfff00000,
.pvr_value = 0x81000000,
.cpu_name = "e200z5",
/* xxx - galak: add CPU_FTR_MAYBE_CAN_DOZE */
.cpu_features = CPU_FTRS_E200,
.cpu_user_features = COMMON_USER_BOOKE |
PPC_FEATURE_HAS_EFP_SINGLE |
PPC_FEATURE_UNIFIED_CACHE,
.mmu_features = MMU_FTR_TYPE_FSL_E,
.dcache_bsize = 32,
.machine_check = machine_check_e200,
.platform = "ppc5554",
},
{ /* e200z6 */
.pvr_mask = 0xfff00000,
.pvr_value = 0x81100000,
.cpu_name = "e200z6",
/* xxx - galak: add CPU_FTR_MAYBE_CAN_DOZE */
.cpu_features = CPU_FTRS_E200,
.cpu_user_features = COMMON_USER_BOOKE |
PPC_FEATURE_HAS_SPE_COMP |
PPC_FEATURE_HAS_EFP_SINGLE_COMP |
PPC_FEATURE_UNIFIED_CACHE,
.mmu_features = MMU_FTR_TYPE_FSL_E,
.dcache_bsize = 32,
.machine_check = machine_check_e200,
.platform = "ppc5554",
},
{ /* default match */
.pvr_mask = 0x00000000,
.pvr_value = 0x00000000,
.cpu_name = "(generic E200 PPC)",
.cpu_features = CPU_FTRS_E200,
.cpu_user_features = COMMON_USER_BOOKE |
PPC_FEATURE_HAS_EFP_SINGLE |
PPC_FEATURE_UNIFIED_CACHE,
.mmu_features = MMU_FTR_TYPE_FSL_E,
.dcache_bsize = 32,
.cpu_setup = __setup_cpu_e200,
.machine_check = machine_check_e200,
.platform = "ppc5554",
}
#endif /* CONFIG_E200 */
#endif /* CONFIG_PPC32 */
#ifdef CONFIG_E500
#ifdef CONFIG_PPC32


@ -69,7 +69,6 @@ static int hv_mode;
static struct {
u64 lpcr;
u64 lpcr_clear;
u64 hfscr;
u64 fscr;
u64 pcr;
@ -79,24 +78,7 @@ static void (*init_pmu_registers)(void);
static void __restore_cpu_cpufeatures(void)
{
u64 lpcr;
/*
* LPCR is restored by the power on engine already. It can be changed
* after early init e.g., by radix enable, and we have no unified API
* for saving and restoring such SPRs.
*
* This ->restore hook should really be removed from idle and register
* restore moved directly into the idle restore code, because this code
* doesn't know how idle is implemented or what it needs restored here.
*
* The best we can do to accommodate secondary boot and idle restore
* for now is "or" LPCR with existing.
*/
lpcr = mfspr(SPRN_LPCR);
lpcr |= system_registers.lpcr;
lpcr &= ~system_registers.lpcr_clear;
mtspr(SPRN_LPCR, lpcr);
mtspr(SPRN_LPCR, system_registers.lpcr);
if (hv_mode) {
mtspr(SPRN_LPID, 0);
mtspr(SPRN_HFSCR, system_registers.hfscr);
@ -273,13 +255,6 @@ static int __init feat_enable_idle_nap(struct dt_cpu_feature *f)
return 1;
}
static int __init feat_enable_align_dsisr(struct dt_cpu_feature *f)
{
cur_cpu_spec->cpu_features &= ~CPU_FTR_NODSISRALIGN;
return 1;
}
static int __init feat_enable_idle_stop(struct dt_cpu_feature *f)
{
u64 lpcr;
@ -317,7 +292,6 @@ static int __init feat_enable_mmu_hash_v3(struct dt_cpu_feature *f)
{
u64 lpcr;
system_registers.lpcr_clear |= (LPCR_ISL | LPCR_UPRT | LPCR_HR);
lpcr = mfspr(SPRN_LPCR);
lpcr &= ~(LPCR_ISL | LPCR_UPRT | LPCR_HR);
mtspr(SPRN_LPCR, lpcr);
@ -454,6 +428,7 @@ static void init_pmu_power10(void)
mtspr(SPRN_MMCR3, 0);
mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
}
static int __init feat_enable_pmu_power10(struct dt_cpu_feature *f)
@ -641,7 +616,7 @@ static struct dt_cpu_feature_match __initdata
{"tm-suspend-hypervisor-assist", feat_enable, CPU_FTR_P9_TM_HV_ASSIST},
{"tm-suspend-xer-so-bug", feat_enable, CPU_FTR_P9_TM_XER_SO_BUG},
{"idle-nap", feat_enable_idle_nap, 0},
{"alignment-interrupt-dsisr", feat_enable_align_dsisr, 0},
/* alignment-interrupt-dsisr ignored */
{"idle-stop", feat_enable_idle_stop, 0},
{"machine-check-power8", feat_enable_mce_power8, 0},
{"performance-monitor-power8", feat_enable_pmu_power8, 0},


@ -234,7 +234,10 @@ transfer_to_handler_cont:
mtspr SPRN_SRR0,r11
mtspr SPRN_SRR1,r10
mtlr r9
RFI /* jump to handler, enable MMU */
rfi /* jump to handler, enable MMU */
#ifdef CONFIG_40x
b . /* Prevent prefetch past rfi */
#endif
#if defined (CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
4: rlwinm r12,r12,0,~_TLF_NAPPING
@ -263,7 +266,10 @@ _ASM_NOKPROBE_SYMBOL(transfer_to_handler_cont)
LOAD_REG_IMMEDIATE(r0, MSR_KERNEL)
mtspr SPRN_SRR0,r12
mtspr SPRN_SRR1,r0
RFI
rfi
#ifdef CONFIG_40x
b . /* Prevent prefetch past rfi */
#endif
reenable_mmu:
/*
@ -321,7 +327,10 @@ stack_ovf:
#endif
mtspr SPRN_SRR0,r9
mtspr SPRN_SRR1,r10
RFI
rfi
#ifdef CONFIG_40x
b . /* Prevent prefetch past rfi */
#endif
_ASM_NOKPROBE_SYMBOL(stack_ovf)
#endif
@ -439,15 +448,13 @@ syscall_exit_cont:
andis. r10,r0,DBCR0_IDM@h
bnel- load_dbcr0
#endif
#ifdef CONFIG_44x
BEGIN_MMU_FTR_SECTION
#ifdef CONFIG_PPC_47x
lis r4,icache_44x_need_flush@ha
lwz r5,icache_44x_need_flush@l(r4)
cmplwi cr0,r5,0
bne- 2f
#endif /* CONFIG_PPC_47x */
1:
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_47x)
#endif /* CONFIG_44x */
BEGIN_FTR_SECTION
lwarx r7,0,r1
END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
@ -470,7 +477,10 @@ syscall_exit_finish:
#endif
mtspr SPRN_SRR0,r7
mtspr SPRN_SRR1,r8
RFI
rfi
#ifdef CONFIG_40x
b . /* Prevent prefetch past rfi */
#endif
_ASM_NOKPROBE_SYMBOL(syscall_exit_finish)
#ifdef CONFIG_44x
2: li r7,0
@ -600,7 +610,10 @@ ret_from_kernel_syscall:
#endif
mtspr SPRN_SRR0, r9
mtspr SPRN_SRR1, r10
RFI
rfi
#ifdef CONFIG_40x
b . /* Prevent prefetch past rfi */
#endif
_ASM_NOKPROBE_SYMBOL(ret_from_kernel_syscall)
/*
@ -671,7 +684,7 @@ handle_page_fault:
mr r5,r3
addi r3,r1,STACK_FRAME_OVERHEAD
lwz r4,_DAR(r1)
bl bad_page_fault
bl __bad_page_fault
b ret_from_except_full
#ifdef CONFIG_PPC_BOOK3S_32
@ -803,7 +816,10 @@ fast_exception_return:
REST_GPR(9, r11)
REST_GPR(12, r11)
lwz r11,GPR11(r11)
RFI
rfi
#ifdef CONFIG_40x
b . /* Prevent prefetch past rfi */
#endif
_ASM_NOKPROBE_SYMBOL(fast_exception_return)
#if !(defined(CONFIG_4xx) || defined(CONFIG_BOOKE))
@ -948,10 +964,7 @@ restore_kuap:
/* interrupts are hard-disabled at this point */
restore:
#ifdef CONFIG_44x
BEGIN_MMU_FTR_SECTION
b 1f
END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_47x)
#if defined(CONFIG_44x) && !defined(CONFIG_PPC_47x)
lis r4,icache_44x_need_flush@ha
lwz r5,icache_44x_need_flush@l(r4)
cmplwi cr0,r5,0
@ -1027,7 +1040,7 @@ exc_exit_restart:
lwz r1,GPR1(r1)
.globl exc_exit_restart_end
exc_exit_restart_end:
RFI
rfi
_ASM_NOKPROBE_SYMBOL(exc_exit_restart)
_ASM_NOKPROBE_SYMBOL(exc_exit_restart_end)
@ -1356,7 +1369,7 @@ _GLOBAL(enter_rtas)
stw r7, THREAD + RTAS_SP(r2)
mtspr SPRN_SRR0,r8
mtspr SPRN_SRR1,r9
RFI
rfi
1: tophys_novmstack r9, r1
#ifdef CONFIG_VMAP_STACK
li r0, MSR_KERNEL & ~MSR_IR /* can take DTLB miss */
@ -1371,6 +1384,6 @@ _GLOBAL(enter_rtas)
stw r0, THREAD + RTAS_SP(r7)
mtspr SPRN_SRR0,r8
mtspr SPRN_SRR1,r9
RFI /* return to caller */
rfi /* return to caller */
_ASM_NOKPROBE_SYMBOL(enter_rtas)
#endif /* CONFIG_PPC_RTAS */


@ -653,8 +653,8 @@ _ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
kuap_check_amr r3, r4
ld r5,_MSR(r1)
andi. r0,r5,MSR_PR
bne .Lfast_user_interrupt_return
kuap_restore_amr r3, r4
bne .Lfast_user_interrupt_return_amr
kuap_kernel_restore r3, r4
andi. r0,r5,MSR_RI
li r3,0 /* 0 return value, no EMULATE_STACK_STORE */
bne+ .Lfast_kernel_interrupt_return
@ -674,6 +674,8 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return)
cmpdi r3,0
bne- .Lrestore_nvgprs
.Lfast_user_interrupt_return_amr:
kuap_user_restore r3, r4
.Lfast_user_interrupt_return:
ld r11,_NIP(r1)
ld r12,_MSR(r1)
@ -967,7 +969,7 @@ _GLOBAL(enter_prom)
mtsrr1 r11
rfi
#else /* CONFIG_PPC_BOOK3E */
LOAD_REG_IMMEDIATE(r12, MSR_SF | MSR_ISF | MSR_LE)
LOAD_REG_IMMEDIATE(r12, MSR_SF | MSR_LE)
andc r11,r11,r12
mtsrr1 r11
RFI_TO_KERNEL


@ -1023,7 +1023,7 @@ storage_fault_common:
mr r5,r3
addi r3,r1,STACK_FRAME_OVERHEAD
ld r4,_DAR(r1)
bl bad_page_fault
bl __bad_page_fault
b ret_from_except
/*


@ -1059,7 +1059,7 @@ EXC_COMMON_BEGIN(system_reset_common)
ld r10,SOFTE(r1)
stb r10,PACAIRQSOFTMASK(r13)
kuap_restore_amr r9, r10
kuap_kernel_restore r9, r10
EXCEPTION_RESTORE_REGS
RFI_TO_USER_OR_KERNEL
@ -2875,7 +2875,7 @@ EXC_COMMON_BEGIN(soft_nmi_common)
ld r10,SOFTE(r1)
stb r10,PACAIRQSOFTMASK(r13)
kuap_restore_amr r9, r10
kuap_kernel_restore r9, r10
EXCEPTION_RESTORE_REGS hsrr=0
RFI_TO_KERNEL
@ -3259,7 +3259,7 @@ handle_page_fault:
mr r5,r3
addi r3,r1,STACK_FRAME_OVERHEAD
ld r4,_DAR(r1)
bl bad_page_fault
bl __bad_page_fault
b interrupt_return
/* We have a data breakpoint exception - handle it */


@ -14,6 +14,7 @@
#include <linux/of.h>
#include <asm/firmware.h>
#include <asm/kvm_guest.h>
#ifdef CONFIG_PPC64
unsigned long powerpc_firmware_features __read_mostly;
@ -21,17 +22,19 @@ EXPORT_SYMBOL_GPL(powerpc_firmware_features);
#endif
#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
bool is_kvm_guest(void)
DEFINE_STATIC_KEY_FALSE(kvm_guest);
bool check_kvm_guest(void)
{
struct device_node *hyper_node;
hyper_node = of_find_node_by_path("/hypervisor");
if (!hyper_node)
return 0;
return false;
if (!of_device_is_compatible(hyper_node, "linux,kvm"))
return 0;
return false;
return 1;
static_branch_enable(&kvm_guest);
return true;
}
#endif


@ -40,38 +40,31 @@
.macro EXCEPTION_PROLOG_1 for_rtas=0
#ifdef CONFIG_VMAP_STACK
mr r11, r1
mtspr SPRN_SPRG_SCRATCH2,r1
subi r1, r1, INT_FRAME_SIZE /* use r1 if kernel */
beq 1f
mfspr r1,SPRN_SPRG_THREAD
lwz r1,TASK_STACK-THREAD(r1)
addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
1:
mtcrf 0x7f, r1
bt 32 - THREAD_ALIGN_SHIFT, stack_overflow
#else
subi r11, r1, INT_FRAME_SIZE /* use r1 if kernel */
beq 1f
mfspr r11,SPRN_SPRG_THREAD
lwz r11,TASK_STACK-THREAD(r11)
addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE
#endif
1:
tophys_novmstack r11, r11
#ifdef CONFIG_VMAP_STACK
mtcrf 0x7f, r1
bt 32 - THREAD_ALIGN_SHIFT, stack_overflow
1: tophys(r11, r11)
#endif
.endm
.macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
#ifdef CONFIG_VMAP_STACK
mtcr r10
li r10, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
mtmsr r10
li r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
mtmsr r11
isync
#else
stw r10,_CCR(r11) /* save registers */
#endif
mfspr r10, SPRN_SPRG_SCRATCH0
#ifdef CONFIG_VMAP_STACK
mfspr r11, SPRN_SPRG_SCRATCH2
stw r11,GPR1(r1)
stw r11,0(r1)
mr r11, r1
@ -80,14 +73,12 @@
stw r1,0(r11)
tovirt(r1, r11) /* set new kernel sp */
#endif
stw r10,_CCR(r11) /* save registers */
stw r12,GPR12(r11)
stw r9,GPR9(r11)
stw r10,GPR10(r11)
#ifdef CONFIG_VMAP_STACK
mfcr r10
stw r10, _CCR(r11)
#endif
mfspr r10,SPRN_SPRG_SCRATCH0
mfspr r12,SPRN_SPRG_SCRATCH1
stw r10,GPR10(r11)
stw r12,GPR11(r11)
mflr r10
stw r10,_LINK(r11)
@ -101,7 +92,6 @@
stw r10, _DSISR(r11)
.endif
lwz r9, SRR1(r12)
andi. r10, r9, MSR_PR
lwz r12, SRR0(r12)
#else
mfspr r12,SPRN_SRR0
@ -222,7 +212,10 @@
#endif
mtspr SPRN_SRR1,r10
mtspr SPRN_SRR0,r11
RFI /* jump to handler, enable MMU */
rfi /* jump to handler, enable MMU */
#ifdef CONFIG_40x
b . /* Prevent prefetch past rfi */
#endif
99: b ret_from_kernel_syscall
.endm


@ -41,6 +41,11 @@
#include <asm/ppc-opcode.h>
#include <asm/export.h>
#include <asm/feature-fixups.h>
#ifdef CONFIG_PPC_BOOK3S
#include <asm/exception-64s.h>
#else
#include <asm/exception-64e.h>
#endif
/* The physical memory is laid out such that the secondary processor
* spin code sits at 0x0000...0x00ff. On server, the vectors follow
@ -417,6 +422,10 @@ generic_secondary_common_init:
/* From now on, r24 is expected to be logical cpuid */
mr r24,r5
/* Create a temp kernel stack for use before relocation is on. */
ld r1,PACAEMERGSP(r13)
subi r1,r1,STACK_FRAME_OVERHEAD
/* See if we need to call a cpu state restore handler */
LOAD_REG_ADDR(r23, cur_cpu_spec)
ld r23,0(r23)
@ -445,10 +454,6 @@ generic_secondary_common_init:
sync /* order paca.run and cur_cpu_spec */
isync /* In case code patching happened */
/* Create a temp kernel stack for use before relocation is on. */
ld r1,PACAEMERGSP(r13)
subi r1,r1,STACK_FRAME_OVERHEAD
b __secondary_start
#endif /* SMP */
@ -829,7 +834,7 @@ __secondary_start:
mtspr SPRN_SRR0,r3
mtspr SPRN_SRR1,r4
RFI
RFI_TO_KERNEL
b . /* prevent speculative execution */
/*
@ -865,8 +870,7 @@ enable_64b_mode:
oris r11,r11,0x8000 /* CM bit set, we'll set ICM later */
mtmsr r11
#else /* CONFIG_PPC_BOOK3E */
li r12,(MSR_64BIT | MSR_ISF)@highest
sldi r12,r12,48
LOAD_REG_IMMEDIATE(r12, MSR_64BIT)
or r11,r11,r12
mtmsrd r11
isync
@ -966,7 +970,7 @@ start_here_multiplatform:
ld r4,PACAKMSR(r13)
mtspr SPRN_SRR0,r3
mtspr SPRN_SRR1,r4
RFI
RFI_TO_KERNEL
b . /* prevent speculative execution */
/* This is where all platforms converge execution */
@ -990,7 +994,7 @@ start_here_common:
bl start_kernel
/* Not reached */
trap
0: trap
EMIT_BUG_ENTRY 0b, __FILE__, __LINE__, 0
.previous


@ -42,16 +42,6 @@
#endif
.endm
/*
* We need an ITLB miss handler for kernel addresses if:
* - Either we have modules
* - Or we have not pinned the first 8M
*/
#if defined(CONFIG_MODULES) || !defined(CONFIG_PIN_TLB_TEXT) || \
defined(CONFIG_DEBUG_PAGEALLOC)
#define ITLB_MISS_KERNEL 1
#endif
/*
* Value for the bits that have fixed value in RPN entries.
* Also used for tagging DAR for DTLBerror.
@ -190,32 +180,31 @@ SystemCall:
*/
#ifdef CONFIG_8xx_CPU15
#define INVALIDATE_ADJACENT_PAGES_CPU15(addr) \
addi addr, addr, PAGE_SIZE; \
tlbie addr; \
addi addr, addr, -(PAGE_SIZE << 1); \
tlbie addr; \
addi addr, addr, PAGE_SIZE
#define INVALIDATE_ADJACENT_PAGES_CPU15(addr, tmp) \
addi tmp, addr, PAGE_SIZE; \
tlbie tmp; \
addi tmp, addr, -PAGE_SIZE; \
tlbie tmp
#else
#define INVALIDATE_ADJACENT_PAGES_CPU15(addr)
#define INVALIDATE_ADJACENT_PAGES_CPU15(addr, tmp)
#endif
InstructionTLBMiss:
mtspr SPRN_SPRG_SCRATCH0, r10
mtspr SPRN_SPRG_SCRATCH1, r11
mtspr SPRN_SPRG_SCRATCH2, r10
mtspr SPRN_M_TW, r11
/* If we are faulting a kernel address, we have to use the
* kernel page tables.
*/
mfspr r10, SPRN_SRR0 /* Get effective address of fault */
INVALIDATE_ADJACENT_PAGES_CPU15(r10)
INVALIDATE_ADJACENT_PAGES_CPU15(r10, r11)
mtspr SPRN_MD_EPN, r10
#ifdef ITLB_MISS_KERNEL
#ifdef CONFIG_MODULES
mfcr r11
compare_to_kernel_boundary r10, r10
#endif
mfspr r10, SPRN_M_TWB /* Get level 1 table */
#ifdef ITLB_MISS_KERNEL
#ifdef CONFIG_MODULES
blt+ 3f
rlwinm r10, r10, 0, 20, 31
oris r10, r10, (swapper_pg_dir - PAGE_OFFSET)@ha
@ -241,8 +230,8 @@ InstructionTLBMiss:
mtspr SPRN_MI_RPN, r10 /* Update TLB entry */
/* Restore registers */
0: mfspr r10, SPRN_SPRG_SCRATCH0
mfspr r11, SPRN_SPRG_SCRATCH1
0: mfspr r10, SPRN_SPRG_SCRATCH2
mfspr r11, SPRN_M_TW
rfi
patch_site 0b, patch__itlbmiss_exit_1
@ -251,14 +240,14 @@ InstructionTLBMiss:
0: lwz r10, (itlb_miss_counter - PAGE_OFFSET)@l(0)
addi r10, r10, 1
stw r10, (itlb_miss_counter - PAGE_OFFSET)@l(0)
mfspr r10, SPRN_SPRG_SCRATCH0
mfspr r11, SPRN_SPRG_SCRATCH1
mfspr r10, SPRN_SPRG_SCRATCH2
mfspr r11, SPRN_M_TW
rfi
#endif
. = 0x1200
DataStoreTLBMiss:
mtspr SPRN_DAR, r10
mtspr SPRN_SPRG_SCRATCH2, r10
mtspr SPRN_M_TW, r11
mfcr r11
@ -297,11 +286,11 @@ DataStoreTLBMiss:
li r11, RPN_PATTERN
rlwimi r10, r11, 0, 24, 27 /* Set 24-27 */
mtspr SPRN_MD_RPN, r10 /* Update TLB entry */
mtspr SPRN_DAR, r11 /* Tag DAR */
/* Restore registers */
0: mfspr r10, SPRN_DAR
mtspr SPRN_DAR, r11 /* Tag DAR */
0: mfspr r10, SPRN_SPRG_SCRATCH2
mfspr r11, SPRN_M_TW
rfi
patch_site 0b, patch__dtlbmiss_exit_1
@ -311,8 +300,7 @@ DataStoreTLBMiss:
0: lwz r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0)
addi r10, r10, 1
stw r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0)
mfspr r10, SPRN_DAR
mtspr SPRN_DAR, r11 /* Tag DAR */
mfspr r10, SPRN_SPRG_SCRATCH2
mfspr r11, SPRN_M_TW
rfi
#endif
@ -619,10 +607,6 @@ start_here:
lis r0, (MD_TWAM | MD_RSV4I)@h
mtspr SPRN_MD_CTR, r0
#endif
#ifndef CONFIG_PIN_TLB_TEXT
li r0, 0
mtspr SPRN_MI_CTR, r0
#endif
#if !defined(CONFIG_PIN_TLB_DATA) && !defined(CONFIG_PIN_TLB_IMMR)
lis r0, MD_TWAM@h
mtspr SPRN_MD_CTR, r0
@ -718,7 +702,6 @@ initial_mmu:
mtspr SPRN_DER, r8
blr
#ifdef CONFIG_PIN_TLB
_GLOBAL(mmu_pin_tlb)
lis r9, (1f - PAGE_OFFSET)@h
ori r9, r9, (1f - PAGE_OFFSET)@l
@ -740,7 +723,6 @@ _GLOBAL(mmu_pin_tlb)
mtspr SPRN_MD_CTR, r6
tlbia
#ifdef CONFIG_PIN_TLB_TEXT
LOAD_REG_IMMEDIATE(r5, 28 << 8)
LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET)
LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED)
@ -761,7 +743,7 @@ _GLOBAL(mmu_pin_tlb)
bdnzt lt, 2b
lis r0, MI_RSV4I@h
mtspr SPRN_MI_CTR, r0
#endif
LOAD_REG_IMMEDIATE(r5, 28 << 8 | MD_TWAM)
#ifdef CONFIG_PIN_TLB_DATA
LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET)
@ -819,7 +801,6 @@ _GLOBAL(mmu_pin_tlb)
mtspr SPRN_SRR1, r10
mtspr SPRN_SRR0, r11
rfi
#endif /* CONFIG_PIN_TLB */
/*
* We put a few things here that have to be page-aligned.


@ -155,10 +155,8 @@ __after_mmu_off:
bl initial_bats
bl load_segment_registers
BEGIN_MMU_FTR_SECTION
bl reloc_offset
bl early_hash_table
END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
#if defined(CONFIG_BOOTX_TEXT)
bl setup_disp_bat
#endif
@ -207,7 +205,7 @@ turn_on_mmu:
lis r0,start_here@h
ori r0,r0,start_here@l
mtspr SPRN_SRR0,r0
RFI /* enables MMU */
rfi /* enables MMU */
/*
* We need __secondary_hold as a place to hold the other cpus on
@ -288,51 +286,35 @@ MachineCheck:
DO_KVM 0x300
DataAccess:
#ifdef CONFIG_VMAP_STACK
mtspr SPRN_SPRG_SCRATCH0,r10
mfspr r10, SPRN_SPRG_THREAD
BEGIN_MMU_FTR_SECTION
mtspr SPRN_SPRG_SCRATCH2,r10
mfspr r10, SPRN_SPRG_THREAD
stw r11, THR11(r10)
mfspr r10, SPRN_DSISR
mfcr r11
#ifdef CONFIG_PPC_KUAP
andis. r10, r10, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH | DSISR_PROTFAULT)@h
#else
andis. r10, r10, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h
#endif
mfspr r10, SPRN_SPRG_THREAD
beq hash_page_dsi
.Lhash_page_dsi_cont:
mtcr r11
lwz r11, THR11(r10)
END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
mtspr SPRN_SPRG_SCRATCH1,r11
mfspr r11, SPRN_DAR
stw r11, DAR(r10)
mfspr r11, SPRN_DSISR
stw r11, DSISR(r10)
mfspr r11, SPRN_SRR0
stw r11, SRR0(r10)
mfspr r11, SPRN_SRR1 /* check whether user or kernel */
stw r11, SRR1(r10)
mfcr r10
andi. r11, r11, MSR_PR
mfspr r10, SPRN_SPRG_SCRATCH2
MMU_FTR_SECTION_ELSE
b 1f
ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
1: EXCEPTION_PROLOG_0 handle_dar_dsisr=1
EXCEPTION_PROLOG_1
b handle_page_fault_tramp_1
#else /* CONFIG_VMAP_STACK */
EXCEPTION_PROLOG handle_dar_dsisr=1
get_and_save_dar_dsisr_on_stack r4, r5, r11
BEGIN_MMU_FTR_SECTION
#ifdef CONFIG_PPC_KUAP
andis. r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH | DSISR_PROTFAULT)@h
#else
andis. r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h
#endif
bne handle_page_fault_tramp_2 /* if not, try to put a PTE */
rlwinm r3, r5, 32 - 15, 21, 21 /* DSISR_STORE -> _PAGE_RW */
bl hash_page
b handle_page_fault_tramp_1
FTR_SECTION_ELSE
MMU_FTR_SECTION_ELSE
b handle_page_fault_tramp_2
ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
#endif /* CONFIG_VMAP_STACK */
@ -394,6 +376,7 @@ Alignment:
. = 0x800
DO_KVM 0x800
FPUnavailable:
#ifdef CONFIG_PPC_FPU
BEGIN_FTR_SECTION
/*
* Certain Freescale cores don't have a FPU and treat fp instructions
@ -407,6 +390,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
b fast_exception_return
1: addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_LITE(0x800, kernel_fp_unavailable_exception)
#else
b ProgramCheck
#endif
/* Decrementer */
EXCEPTION(0x900, Decrementer, timer_interrupt, EXC_XFER_LITE)
@ -453,13 +439,14 @@ InstructionTLBMiss:
*/
/* Get PTE (linux-style) and check access */
mfspr r3,SPRN_IMISS
#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
#ifdef CONFIG_MODULES
lis r1, TASK_SIZE@h /* check if kernel address */
cmplw 0,r1,r3
#endif
mfspr r2, SPRN_SPRG_PGDIR
mfspr r2, SPRN_SDR1
li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
rlwinm r2, r2, 28, 0xfffff000
#ifdef CONFIG_MODULES
bgt- 112f
lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */
@ -519,8 +506,9 @@ DataLoadTLBMiss:
mfspr r3,SPRN_DMISS
lis r1, TASK_SIZE@h /* check if kernel address */
cmplw 0,r1,r3
mfspr r2, SPRN_SPRG_PGDIR
mfspr r2, SPRN_SDR1
li r1, _PAGE_PRESENT | _PAGE_ACCESSED
rlwinm r2, r2, 28, 0xfffff000
bgt- 112f
lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */
@ -595,8 +583,9 @@ DataStoreTLBMiss:
mfspr r3,SPRN_DMISS
lis r1, TASK_SIZE@h /* check if kernel address */
cmplw 0,r1,r3
mfspr r2, SPRN_SPRG_PGDIR
mfspr r2, SPRN_SDR1
li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
rlwinm r2, r2, 28, 0xfffff000
bgt- 112f
lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */
@ -757,14 +746,14 @@ fast_hash_page_return:
/* DSI */
mtcr r11
lwz r11, THR11(r10)
mfspr r10, SPRN_SPRG_SCRATCH0
RFI
mfspr r10, SPRN_SPRG_SCRATCH2
rfi
1: /* ISI */
mtcr r11
mfspr r11, SPRN_SPRG_SCRATCH1
mfspr r10, SPRN_SPRG_SCRATCH0
RFI
rfi
stack_overflow:
vmap_stack_overflow_exception
@ -889,9 +878,12 @@ __secondary_start:
tophys(r4,r2)
addi r4,r4,THREAD /* phys address of our thread_struct */
mtspr SPRN_SPRG_THREAD,r4
BEGIN_MMU_FTR_SECTION
lis r4, (swapper_pg_dir - PAGE_OFFSET)@h
ori r4, r4, (swapper_pg_dir - PAGE_OFFSET)@l
mtspr SPRN_SPRG_PGDIR, r4
rlwinm r4, r4, 4, 0xffff01ff
mtspr SPRN_SDR1, r4
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_HPTE_TABLE)
/* enable MMU and jump to start_secondary */
li r4,MSR_KERNEL
@ -899,7 +891,7 @@ __secondary_start:
ori r3,r3,start_secondary@l
mtspr SPRN_SRR0,r3
mtspr SPRN_SRR1,r4
RFI
rfi
#endif /* CONFIG_SMP */
#ifdef CONFIG_KVM_BOOK3S_HANDLER
@ -920,9 +912,6 @@ early_hash_table:
lis r6, early_hash - PAGE_OFFSET@h
ori r6, r6, 3 /* 256kB table */
mtspr SPRN_SDR1, r6
lis r6, early_hash@h
addis r3, r3, Hash@ha
stw r6, Hash@l(r3)
blr
load_up_mmu:
@ -931,11 +920,13 @@ load_up_mmu:
tlbia /* Clear all TLB entries */
sync /* wait for tlbia/tlbie to finish */
TLBSYNC /* ... on all CPUs */
BEGIN_MMU_FTR_SECTION
/* Load the SDR1 register (hash table base & size) */
lis r6,_SDR1@ha
tophys(r6,r6)
lwz r6,_SDR1@l(r6)
mtspr SPRN_SDR1,r6
END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
/* Load the BAT registers with the values set up by MMU_init. */
lis r3,BATS@ha
@ -991,9 +982,12 @@ start_here:
tophys(r4,r2)
addi r4,r4,THREAD /* init task's THREAD */
mtspr SPRN_SPRG_THREAD,r4
BEGIN_MMU_FTR_SECTION
lis r4, (swapper_pg_dir - PAGE_OFFSET)@h
ori r4, r4, (swapper_pg_dir - PAGE_OFFSET)@l
mtspr SPRN_SPRG_PGDIR, r4
rlwinm r4, r4, 4, 0xffff01ff
mtspr SPRN_SDR1, r4
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_HPTE_TABLE)
/* stack */
lis r1,init_thread_union@ha
@ -1027,7 +1021,7 @@ start_here:
.align 4
mtspr SPRN_SRR0,r4
mtspr SPRN_SRR1,r3
RFI
rfi
/* Load up the kernel context */
2: bl load_up_mmu
@ -1051,7 +1045,7 @@ start_here:
ori r3,r3,start_kernel@l
mtspr SPRN_SRR0,r3
mtspr SPRN_SRR1,r4
RFI
rfi
/*
* void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next);
@ -1073,16 +1067,22 @@ _ENTRY(switch_mmu_context)
li r0,NUM_USER_SEGMENTS
mtctr r0
lwz r4, MM_PGD(r4)
#ifdef CONFIG_BDI_SWITCH
/* Context switch the PTE pointer for the Abatron BDI2000.
* The PGDIR is passed as second argument.
*/
lwz r4, MM_PGD(r4)
lis r5, abatron_pteptrs@ha
stw r4, abatron_pteptrs@l + 0x4(r5)
#endif
BEGIN_MMU_FTR_SECTION
#ifndef CONFIG_BDI_SWITCH
lwz r4, MM_PGD(r4)
#endif
tophys(r4, r4)
mtspr SPRN_SPRG_PGDIR, r4
rlwinm r4, r4, 4, 0xffff01ff
mtspr SPRN_SDR1, r4
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_HPTE_TABLE)
li r4,0
isync
3:
@ -1166,7 +1166,7 @@ _ENTRY(update_bats)
.align 4
mtspr SPRN_SRR0, r4
mtspr SPRN_SRR1, r3
RFI
rfi
1: bl clear_bats
lis r3, BATS@ha
addi r3, r3, BATS@l
@ -1185,7 +1185,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
mtmsr r3
mtspr SPRN_SRR0, r7
mtspr SPRN_SRR1, r6
RFI
rfi
flush_tlbs:
lis r10, 0x40
@ -1206,7 +1206,7 @@ mmu_off:
mtspr SPRN_SRR0,r4
mtspr SPRN_SRR1,r3
sync
RFI
rfi
/* We use one BAT to map up to 256M of RAM at _PAGE_OFFSET */
initial_bats:


@ -176,7 +176,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
#endif
mtspr SPRN_SRR1,r10
mtspr SPRN_SRR0,r11
RFI /* jump to handler, enable MMU */
rfi /* jump to handler, enable MMU */
99: b ret_from_kernel_syscall
.endm
@ -185,7 +185,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
*
* On 40x critical is the only additional level
* On 44x/e500 we have critical and machine check
* On e200 we have critical and debug (machine check occurs via critical)
*
* Additionally we reserve a SPRG for each priority level so we can free up a
* GPR to use as the base for indirect access to the exception stacks. This
@ -201,7 +200,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
#define MC_STACK_BASE mcheckirq_ctx
#define CRIT_STACK_BASE critirq_ctx
/* only on e500mc/e200 */
/* only on e500mc */
#define DBG_STACK_BASE dbgirq_ctx
#define EXC_LVL_FRAME_OVERHEAD (THREAD_SIZE - INT_FRAME_SIZE - EXC_LVL_SIZE)


@ -187,9 +187,6 @@ set_ivor:
/* Setup the defaults for TLB entries */
li r2,(MAS4_TSIZED(BOOK3E_PAGESZ_4K))@l
#ifdef CONFIG_E200
oris r2,r2,MAS4_TLBSELD(1)@h
#endif
mtspr SPRN_MAS4, r2
#if !defined(CONFIG_BDI_SWITCH)
@ -362,13 +359,7 @@ interrupt_base:
CRITICAL_EXCEPTION(0x0100, CRITICAL, CriticalInput, unknown_exception)
/* Machine Check Interrupt */
#ifdef CONFIG_E200
/* no RFMCI, MCSRRs on E200 */
CRITICAL_EXCEPTION(0x0200, MACHINE_CHECK, MachineCheck, \
machine_check_exception)
#else
MCHECK_EXCEPTION(0x0200, MachineCheck, machine_check_exception)
#endif
/* Data Storage Interrupt */
START_EXCEPTION(DataStorage)
@ -399,15 +390,9 @@ interrupt_base:
/* Floating Point Unavailable Interrupt */
#ifdef CONFIG_PPC_FPU
FP_UNAVAILABLE_EXCEPTION
#else
#ifdef CONFIG_E200
/* E200 treats 'normal' floating point instructions as FP Unavail exception */
EXCEPTION(0x0800, FP_UNAVAIL, FloatingPointUnavailable, \
program_check_exception, EXC_XFER_STD)
#else
EXCEPTION(0x0800, FP_UNAVAIL, FloatingPointUnavailable, \
unknown_exception, EXC_XFER_STD)
#endif
#endif
/* System Call Interrupt */
@ -625,7 +610,7 @@ END_BTB_FLUSH_SECTION
mfspr r10, SPRN_SPRG_RSCRATCH0
b InstructionStorage
/* Define SPE handlers for e200 and e500v2 */
/* Define SPE handlers for e500v2 */
#ifdef CONFIG_SPE
/* SPE Unavailable */
START_EXCEPTION(SPEUnavailable)
@ -807,31 +792,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
#endif
3: mtspr SPRN_MAS2, r12
#ifdef CONFIG_E200
/* Round robin TLB1 entries assignment */
mfspr r12, SPRN_MAS0
/* Extract TLB1CFG(NENTRY) */
mfspr r11, SPRN_TLB1CFG
andi. r11, r11, 0xfff
/* Extract MAS0(NV) */
andi. r13, r12, 0xfff
addi r13, r13, 1
cmpw 0, r13, r11
addi r12, r12, 1
/* check if we need to wrap */
blt 7f
/* wrap back to first free tlbcam entry */
lis r13, tlbcam_index@ha
lwz r13, tlbcam_index@l(r13)
rlwimi r12, r13, 0, 20, 31
7:
mtspr SPRN_MAS0,r12
#endif /* CONFIG_E200 */
tlb_write_entry:
tlbwe
@ -933,21 +893,6 @@ get_phys_addr:
* Global functions
*/
#ifdef CONFIG_E200
/* Adjust or setup IVORs for e200 */
_GLOBAL(__setup_e200_ivors)
li r3,DebugDebug@l
mtspr SPRN_IVOR15,r3
li r3,SPEUnavailable@l
mtspr SPRN_IVOR32,r3
li r3,SPEFloatingPointData@l
mtspr SPRN_IVOR33,r3
li r3,SPEFloatingPointRound@l
mtspr SPRN_IVOR34,r3
sync
blr
#endif
#ifdef CONFIG_E500
#ifndef CONFIG_PPC_E500MC
/* Adjust or setup IVORs for e500v1/v2 */


@ -499,6 +499,11 @@ static bool is_larx_stcx_instr(int type)
return type == LARX || type == STCX;
}
static bool is_octword_vsx_instr(int type, int size)
{
return ((type == LOAD_VSX || type == STORE_VSX) && size == 32);
}
/*
* We've failed in reliably handling the hw-breakpoint. Unregister
* it and throw a warning message to let the user know about it.
@ -549,6 +554,58 @@ static bool stepping_handler(struct pt_regs *regs, struct perf_event **bp,
return true;
}
static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info,
int *hit, unsigned long ea)
{
int i;
unsigned long hw_end_addr;
/*
* Handle spurious exception only when any bp_per_reg is set.
* Otherwise this might be created by xmon and not actually a
* spurious exception.
*/
for (i = 0; i < nr_wp_slots(); i++) {
if (!info[i])
continue;
hw_end_addr = ALIGN(info[i]->address + info[i]->len, HW_BREAKPOINT_SIZE);
/*
* Ending address of DAWR range is less than starting
* address of op.
*/
if ((hw_end_addr - 1) >= ea)
continue;
/*
* Those addresses need to be in the same or in two
* consecutive 512B blocks;
*/
if (((hw_end_addr - 1) >> 10) != (ea >> 10))
continue;
/*
* 'op address + 64B' generates an address that has a
* carry into bit 52 (crosses 2K boundary).
*/
if ((ea & 0x800) == ((ea + 64) & 0x800))
continue;
break;
}
if (i == nr_wp_slots())
return;
for (i = 0; i < nr_wp_slots(); i++) {
if (info[i]) {
hit[i] = 1;
info[i]->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
}
}
}
int hw_breakpoint_handler(struct die_args *args)
{
bool err = false;
@ -607,8 +664,14 @@ int hw_breakpoint_handler(struct die_args *args)
goto reset;
if (!nr_hit) {
rc = NOTIFY_DONE;
goto out;
/* Workaround for Power10 DD1 */
if (!IS_ENABLED(CONFIG_PPC_8xx) && mfspr(SPRN_PVR) == 0x800100 &&
is_octword_vsx_instr(type, size)) {
handle_p10dd1_spurious_exception(info, hit, ea);
} else {
rc = NOTIFY_DONE;
goto out;
}
}
/*


@ -11,177 +11,11 @@
#include <asm/pci-bridge.h>
#include <asm/isa-bridge.h>
/*
* Here comes the ppc64 implementation of the IOMAP
* interfaces.
*/
unsigned int ioread8(const void __iomem *addr)
{
return readb(addr);
}
unsigned int ioread16(const void __iomem *addr)
{
return readw(addr);
}
unsigned int ioread16be(const void __iomem *addr)
{
return readw_be(addr);
}
unsigned int ioread32(const void __iomem *addr)
{
return readl(addr);
}
unsigned int ioread32be(const void __iomem *addr)
{
return readl_be(addr);
}
EXPORT_SYMBOL(ioread8);
EXPORT_SYMBOL(ioread16);
EXPORT_SYMBOL(ioread16be);
EXPORT_SYMBOL(ioread32);
EXPORT_SYMBOL(ioread32be);
#ifdef __powerpc64__
u64 ioread64(const void __iomem *addr)
{
return readq(addr);
}
u64 ioread64_lo_hi(const void __iomem *addr)
{
return readq(addr);
}
u64 ioread64_hi_lo(const void __iomem *addr)
{
return readq(addr);
}
u64 ioread64be(const void __iomem *addr)
{
return readq_be(addr);
}
u64 ioread64be_lo_hi(const void __iomem *addr)
{
return readq_be(addr);
}
u64 ioread64be_hi_lo(const void __iomem *addr)
{
return readq_be(addr);
}
EXPORT_SYMBOL(ioread64);
EXPORT_SYMBOL(ioread64_lo_hi);
EXPORT_SYMBOL(ioread64_hi_lo);
EXPORT_SYMBOL(ioread64be);
EXPORT_SYMBOL(ioread64be_lo_hi);
EXPORT_SYMBOL(ioread64be_hi_lo);
#endif /* __powerpc64__ */
void iowrite8(u8 val, void __iomem *addr)
{
writeb(val, addr);
}
void iowrite16(u16 val, void __iomem *addr)
{
writew(val, addr);
}
void iowrite16be(u16 val, void __iomem *addr)
{
writew_be(val, addr);
}
void iowrite32(u32 val, void __iomem *addr)
{
writel(val, addr);
}
void iowrite32be(u32 val, void __iomem *addr)
{
writel_be(val, addr);
}
EXPORT_SYMBOL(iowrite8);
EXPORT_SYMBOL(iowrite16);
EXPORT_SYMBOL(iowrite16be);
EXPORT_SYMBOL(iowrite32);
EXPORT_SYMBOL(iowrite32be);
#ifdef __powerpc64__
void iowrite64(u64 val, void __iomem *addr)
{
writeq(val, addr);
}
void iowrite64_lo_hi(u64 val, void __iomem *addr)
{
writeq(val, addr);
}
void iowrite64_hi_lo(u64 val, void __iomem *addr)
{
writeq(val, addr);
}
void iowrite64be(u64 val, void __iomem *addr)
{
writeq_be(val, addr);
}
void iowrite64be_lo_hi(u64 val, void __iomem *addr)
{
writeq_be(val, addr);
}
void iowrite64be_hi_lo(u64 val, void __iomem *addr)
{
writeq_be(val, addr);
}
EXPORT_SYMBOL(iowrite64);
EXPORT_SYMBOL(iowrite64_lo_hi);
EXPORT_SYMBOL(iowrite64_hi_lo);
EXPORT_SYMBOL(iowrite64be);
EXPORT_SYMBOL(iowrite64be_lo_hi);
EXPORT_SYMBOL(iowrite64be_hi_lo);
#endif /* __powerpc64__ */
/*
* These are the "repeat read/write" functions. Note the
* non-CPU byte order. We do things in "IO byteorder"
* here.
*
* FIXME! We could make these do EEH handling if we really
* wanted. Not clear if we do.
*/
void ioread8_rep(const void __iomem *addr, void *dst, unsigned long count)
{
readsb(addr, dst, count);
}
void ioread16_rep(const void __iomem *addr, void *dst, unsigned long count)
{
readsw(addr, dst, count);
}
void ioread32_rep(const void __iomem *addr, void *dst, unsigned long count)
{
readsl(addr, dst, count);
}
EXPORT_SYMBOL(ioread8_rep);
EXPORT_SYMBOL(ioread16_rep);
EXPORT_SYMBOL(ioread32_rep);
void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count)
{
writesb(addr, src, count);
}
void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count)
{
writesw(addr, src, count);
}
void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count)
{
writesl(addr, src, count);
}
EXPORT_SYMBOL(iowrite8_rep);
EXPORT_SYMBOL(iowrite16_rep);
EXPORT_SYMBOL(iowrite32_rep);
void __iomem *ioport_map(unsigned long port, unsigned int len)
{
return (void __iomem *) (port + _IO_BASE);
}
void ioport_unmap(void __iomem *addr)
{
/* Nothing to do */
}
EXPORT_SYMBOL(ioport_map);
EXPORT_SYMBOL(ioport_unmap);
#ifdef CONFIG_PCI
void pci_iounmap(struct pci_dev *dev, void __iomem *addr)
