powerpc updates for 5.5

Merge tag 'powerpc-5.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Highlights:

   - Infrastructure for secure boot on some bare metal Power9 machines.
     The firmware support is still in development, so the code here
     won't actually activate secure boot on any existing systems.

   - A change to xmon (our crash handler / pseudo-debugger) to restrict
     it to read-only mode when the kernel is lockdown'ed, otherwise it's
     trivial to drop into xmon and modify kernel data, such as the
     lockdown state.

   - Support for KASLR on 32-bit BookE machines (Freescale / NXP).

   - Fixes for our flush_icache_range() and __kernel_sync_dicache()
     (VDSO) to work with memory ranges >4GB.

   - Some reworks of the pseries CMM (Cooperative Memory Management)
     driver to make it behave more like other balloon drivers and enable
     some cleanups of generic mm code.

   - A series of fixes to our hardware breakpoint support to properly
     handle unaligned watchpoint addresses.

  Plus a bunch of other smaller improvements, fixes and cleanups.

  Thanks to: Alastair D'Silva, Andrew Donnellan, Aneesh Kumar K.V,
  Anthony Steinhauser, Cédric Le Goater, Chris Packham, Chris Smart,
  Christophe Leroy, Christopher M. Riedl, Christoph Hellwig, Claudio
  Carvalho, Daniel Axtens, David Hildenbrand, Deb McLemore, Diana
  Craciun, Eric Richter, Geert Uytterhoeven, Greg Kroah-Hartman, Greg
  Kurz, Gustavo L. F. Walbon, Hari Bathini, Harish, Jason Yan, Krzysztof
  Kozlowski, Leonardo Bras, Mathieu Malaterre, Mauro S. M. Rodrigues,
  Michal Suchanek, Mimi Zohar, Nathan Chancellor, Nathan Lynch, Nayna
  Jain, Nick Desaulniers, Oliver O'Halloran, Qian Cai, Rasmus Villemoes,
  Ravi Bangoria, Sam Bobroff, Santosh Sivaraj, Scott Wood, Thomas Huth,
  Tyrel Datwyler, Vaibhav Jain, Valentin Longchamp, YueHaibing"

* tag 'powerpc-5.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (144 commits)
  powerpc/fixmap: fix crash with HIGHMEM
  x86/efi: remove unused variables
  powerpc: Define arch_is_kernel_initmem_freed() for lockdep
  powerpc/prom_init: Use -ffreestanding to avoid a reference to bcmp
  powerpc: Avoid clang warnings around setjmp and longjmp
  powerpc: Don't add -mabi= flags when building with Clang
  powerpc: Fix Kconfig indentation
  powerpc/fixmap: don't clear fixmap area in paging_init()
  selftests/powerpc: spectre_v2 test must be built 64-bit
  powerpc/powernv: Disable native PCIe port management
  powerpc/kexec: Move kexec files into a dedicated subdir.
  powerpc/32: Split kexec low level code out of misc_32.S
  powerpc/sysdev: drop simple gpio
  powerpc/83xx: map IMMR with a BAT.
  powerpc/32s: automatically allocate BAT in setbat()
  powerpc/ioremap: warn on early use of ioremap()
  powerpc: Add support for GENERIC_EARLY_IOREMAP
  powerpc/fixmap: Use __fix_to_virt() instead of fix_to_virt()
  powerpc/8xx: use the fixmapped IMMR in cpm_reset()
  powerpc/8xx: add __init to cpm1 init functions
  ...
Merged by Linus Torvalds, 2019-11-30 14:35:43 -08:00
commit 7794b1d418
215 changed files with 4574 additions and 2468 deletions

View File

@ -25,6 +25,7 @@ Description:
lsm: [[subj_user=] [subj_role=] [subj_type=]
[obj_user=] [obj_role=] [obj_type=]]
option: [[appraise_type=]] [template=] [permit_directio]
[appraise_flag=]
base: func:= [BPRM_CHECK][MMAP_CHECK][CREDS_CHECK][FILE_CHECK][MODULE_CHECK]
[FIRMWARE_CHECK]
[KEXEC_KERNEL_CHECK] [KEXEC_INITRAMFS_CHECK]
@ -38,6 +39,9 @@ Description:
fowner:= decimal value
lsm: are LSM specific
option: appraise_type:= [imasig] [imasig|modsig]
appraise_flag:= [check_blacklist]
Currently, the blacklist check is only done for files signed with an
appended signature.
template:= name of a defined IMA template type
(eg, ima-ng). Only valid when action is "measure".
pcr:= decimal value
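
As a purely illustrative example of the new option (a hypothetical rule, not
text from this ABI file), a policy entry that appraises kexec kernel images
via an appended signature and also checks the blacklist could look like:

    appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig

With such a rule, an image whose appended signature verifies but whose hash
appears on the blacklist would still fail appraisal.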

View File

@ -0,0 +1,46 @@
What: /sys/firmware/secvar
Date: August 2019
Contact: Nayna Jain <nayna@linux.ibm.com>
Description: This directory is created if the POWER firmware supports OS
secure boot, and hence secure variables. It exposes the interface
for reading and writing the secure variables.
What: /sys/firmware/secvar/vars
Date: August 2019
Contact: Nayna Jain <nayna@linux.ibm.com>
Description: This directory lists all the secure variables that are supported
by the firmware.
What: /sys/firmware/secvar/format
Date: August 2019
Contact: Nayna Jain <nayna@linux.ibm.com>
Description: A string indicating which backend is in use by the firmware.
This determines the format of the variable and the accepted
format of variable updates.
What: /sys/firmware/secvar/vars/<variable name>
Date: August 2019
Contact: Nayna Jain <nayna@linux.ibm.com>
Description: Each secure variable is represented as a directory named as
<variable_name>. The variable name is unique and is in ASCII
representation. The data and size can be determined by reading
their respective attribute files.
What: /sys/firmware/secvar/vars/<variable_name>/size
Date: August 2019
Contact: Nayna Jain <nayna@linux.ibm.com>
Description: An integer representation of the size of the content of the
variable. In other words, it represents the size of the data.
What: /sys/firmware/secvar/vars/<variable_name>/data
Date: August 2019
Contact: Nayna Jain <nayna@linux.ibm.com>
Description: A read-only file containing the value of the variable. The size
of the file represents the maximum size of the variable data.
What: /sys/firmware/secvar/vars/<variable_name>/update
Date: August 2019
Contact: Nayna Jain <nayna@linux.ibm.com>
Description: A write-only file that is used to submit the new value for the
variable. The size of the file represents the maximum size of
the variable data that can be written.
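
As a hedged user-space sketch of consuming the size and data files described
above (the variable name "db" is only an example and depends on the firmware
backend; error handling is abbreviated):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f;
    unsigned long size;
    unsigned char *buf;

    /* The size file gives the current length of the variable's data. */
    f = fopen("/sys/firmware/secvar/vars/db/size", "r");
    if (!f || fscanf(f, "%lu", &size) != 1)
        return 1;
    fclose(f);

    buf = malloc(size);
    if (!buf)
        return 1;

    /* The data file holds the raw variable contents. */
    f = fopen("/sys/firmware/secvar/vars/db/data", "rb");
    if (!f || fread(buf, 1, size, f) != size)
        return 1;
    fclose(f);

    printf("read %lu bytes of variable data\n", size);
    free(buf);
    return 0;
}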

View File

@ -47,36 +47,6 @@ Example (LS2080A-RDB):
reg = <0x3 0 0x10000>;
};
* Freescale BCSR GPIO banks
Some BCSR registers act as simple GPIO controllers, each such
register can be represented by the gpio-controller node.
Required properties:
- compatible : Should be "fsl,<board>-bcsr-gpio".
- reg : Should contain the address and the length of the GPIO bank
register.
- #gpio-cells : Should be two. The first cell is the pin number and the
second cell is used to specify optional parameters (currently unused).
- gpio-controller : Marks the port as GPIO controller.
Example:
bcsr@1,0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "fsl,mpc8360mds-bcsr";
reg = <1 0 0x8000>;
ranges = <0 1 0 0x8000>;
bcsr13: gpio-controller@d {
#gpio-cells = <2>;
compatible = "fsl,mpc8360mds-bcsr-gpio";
reg = <0xd 1>;
gpio-controller;
};
};
* Freescale on-board FPGA connected on I2C bus
Some Freescale boards like BSC9132QDS have on board FPGA connected on

View File

@ -19,6 +19,7 @@ powerpc
firmware-assisted-dump
hvcs
isa-versions
kaslr-booke32
mpc52xx
pci_iov_resource_on_powernv
pmu-ebb

View File

@ -0,0 +1,42 @@
.. SPDX-License-Identifier: GPL-2.0
===========================
KASLR for Freescale BookE32
===========================
The word KASLR stands for Kernel Address Space Layout Randomization.
This document tries to explain the implementation of the KASLR for
Freescale BookE32. KASLR is a security feature that deters exploit
attempts relying on knowledge of the location of kernel internals.
Since CONFIG_RELOCATABLE is already supported, all we need to do is
map or copy the kernel to a proper place and relocate it. Freescale Book-E
parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
entries are not suitable for mapping the kernel directly in a randomized
region, so we choose to copy the kernel to a suitable place and restart in
order to relocate.
Entropy is derived from the banner and the timer base, which change with
every build and boot. This is not particularly strong, so the bootloader may
additionally pass entropy via the /chosen/kaslr-seed node in the device tree.
We use the first 512M of low memory to randomize the kernel image. That
memory is split into 64M zones, the lower 8 bits of the entropy select the
index of the 64M zone, and a 16K-aligned offset inside that zone is then
chosen to place the kernel in::
KERNELBASE
|--> 64M <--|
| |
+---------------+ +----------------+---------------+
| |....| |kernel| | |
+---------------+ +----------------+---------------+
| |
|-----> offset <-----|
kernstart_virt_addr
To enable KASLR, set CONFIG_RANDOMIZE_BASE = y. If KASLR is enabled and you
want to disable it at runtime, add "nokaslr" to the kernel cmdline.
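
A condensed sketch of the placement computation described above, with
illustrative names rather than the kernel's own (the real logic lives in the
kaslr_booke code added by this series, and handles region sizing and seed
gathering that is omitted here):

#define SZ_64M  0x04000000UL
#define SZ_16K  0x00004000UL

/* Assumes linear_size is a multiple of 64M (at most 512M) and the kernel
 * image fits well inside one 64M zone. */
static unsigned long pick_kernel_offset(unsigned long entropy,
                                        unsigned long linear_size,
                                        unsigned long kernel_size)
{
    unsigned long zones = linear_size / SZ_64M;          /* up to 8 zones */
    unsigned long zone = (entropy & 0xff) % zones;       /* low 8 bits pick the zone */
    unsigned long off = (entropy % (SZ_64M - kernel_size)) & ~(SZ_16K - 1);

    return zone * SZ_64M + off;   /* offset of the kernel from KERNELBASE */
}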

View File

@ -14,4 +14,5 @@ obj-$(CONFIG_XMON) += xmon/
obj-$(CONFIG_KVM) += kvm/
obj-$(CONFIG_PERF_EVENTS) += perf/
obj-$(CONFIG_KEXEC_CORE) += kexec/
obj-$(CONFIG_KEXEC_FILE) += purgatory/

View File

@ -161,6 +161,7 @@ config PPC
select GENERIC_CMOS_UPDATE
select GENERIC_CPU_AUTOPROBE
select GENERIC_CPU_VULNERABILITIES if PPC_BARRIER_NOSPEC
select GENERIC_EARLY_IOREMAP
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
select GENERIC_PCI_IOMAP if PCI
@ -551,6 +552,17 @@ config RELOCATABLE
setting can still be useful to bootwrappers that need to know the
load address of the kernel (eg. u-boot/mkimage).
config RANDOMIZE_BASE
bool "Randomize the address of the kernel image"
depends on (FSL_BOOKE && FLATMEM && PPC32)
depends on RELOCATABLE
help
Randomizes the virtual address at which the kernel image is
loaded, as a security feature that deters exploit attempts
relying on knowledge of the location of kernel internals.
If unsure, say Y.
config RELOCATABLE_TEST
bool "Test relocatable kernel"
depends on (PPC64 && RELOCATABLE)
@ -874,15 +886,33 @@ config CMDLINE
some command-line options at build time by entering them here. In
most cases you will need to specify the root device here.
choice
prompt "Kernel command line type" if CMDLINE != ""
default CMDLINE_FROM_BOOTLOADER
config CMDLINE_FROM_BOOTLOADER
bool "Use bootloader kernel arguments if available"
help
Uses the command-line options passed by the boot loader. If
the boot loader doesn't provide any, the default kernel command
string provided in CMDLINE will be used.
config CMDLINE_EXTEND
bool "Extend bootloader kernel arguments"
help
The command-line arguments provided by the boot loader will be
appended to the default kernel command string.
config CMDLINE_FORCE
bool "Always use the default kernel command string"
depends on CMDLINE_BOOL
help
Always use the default kernel command string, even if the boot
loader passes other arguments to the kernel.
This is useful if you cannot or don't want to change the
command-line options your boot loader passes to the kernel.
endchoice
config EXTRA_TARGETS
string "Additional default image types"
help
@ -934,6 +964,28 @@ config PPC_MEM_KEYS
If unsure, say y.
config PPC_SECURE_BOOT
prompt "Enable secure boot support"
bool
depends on PPC_POWERNV
depends on IMA_ARCH_POLICY
help
Systems with firmware secure boot enabled need to define security
policies to extend secure boot to the OS. This config allows a user
to enable OS secure boot on systems that have firmware support for
it. If in doubt say N.
config PPC_SECVAR_SYSFS
bool "Enable sysfs interface for POWER secure variables"
default y
depends on PPC_SECURE_BOOT
depends on SYSFS
help
POWER secure variables are managed and controlled by firmware.
These variables are exposed to userspace via sysfs to enable
read/write operations on these variables. Say Y if you have
secure boot enabled and want to expose variables to userspace.
endmenu
config ISA_DMA_API

View File

@ -122,8 +122,8 @@ config XMON_DEFAULT_RO_MODE
depends on XMON
default y
help
Operate xmon in read-only mode. The cmdline options 'xmon=rw' and
'xmon=ro' override this default.
Operate xmon in read-only mode. The cmdline options 'xmon=rw' and
'xmon=ro' override this default.
config DEBUGGER
bool
@ -222,7 +222,7 @@ config PPC_EARLY_DEBUG_44x
help
Select this to enable early debugging for IBM 44x chips via the
inbuilt serial port. If you enable this, ensure you set
PPC_EARLY_DEBUG_44x_PHYSLOW below to suit your target board.
PPC_EARLY_DEBUG_44x_PHYSLOW below to suit your target board.
config PPC_EARLY_DEBUG_40x
bool "Early serial debugging for IBM/AMCC 40x CPUs"
@ -325,7 +325,7 @@ config PPC_EARLY_DEBUG_44x_PHYSLOW
default "0x40000200"
help
You probably want 0x40000200 for ebony boards and
0x40000300 for taishan
0x40000300 for taishan
config PPC_EARLY_DEBUG_44x_PHYSHIGH
hex "EPRN of early debug UART physical address"
@ -359,9 +359,9 @@ config FAIL_IOMMU
If you are unsure, say N.
config PPC_PTDUMP
bool "Export kernel pagetable layout to userspace via debugfs"
depends on DEBUG_KERNEL && DEBUG_FS
help
bool "Export kernel pagetable layout to userspace via debugfs"
depends on DEBUG_KERNEL && DEBUG_FS
help
This option exports the state of the kernel pagetables to a
debugfs file. This is only useful for kernel developers who are
working in architecture specific areas of the kernel - probably
@ -390,8 +390,8 @@ config PPC_DEBUG_WX
config PPC_FAST_ENDIAN_SWITCH
bool "Deprecated fast endian-switch syscall"
depends on DEBUG_KERNEL && PPC_BOOK3S_64
help
depends on DEBUG_KERNEL && PPC_BOOK3S_64
help
If you're unsure what this is, say N.
config KASAN_SHADOW_OFFSET

View File

@ -91,11 +91,13 @@ MULTIPLEWORD := -mmultiple
endif
ifdef CONFIG_PPC64
ifndef CONFIG_CC_IS_CLANG
cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mabi=elfv1)
cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mcall-aixdesc)
aflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mabi=elfv1)
aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mabi=elfv2
endif
endif
ifndef CONFIG_CC_IS_CLANG
cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mno-strict-align
@ -141,6 +143,7 @@ endif
endif
CFLAGS-$(CONFIG_PPC64) := $(call cc-option,-mtraceback=no)
ifndef CONFIG_CC_IS_CLANG
ifdef CONFIG_CPU_LITTLE_ENDIAN
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv2,$(call cc-option,-mcall-aixdesc))
AFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv2)
@ -149,6 +152,7 @@ CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv1)
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcall-aixdesc)
AFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv1)
endif
endif
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcmodel=medium,$(call cc-option,-mminimal-toc))
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mno-pointers-to-nested-functions)
@ -330,32 +334,32 @@ powernv_be_defconfig:
PHONY += mpc85xx_defconfig
mpc85xx_defconfig:
$(call merge_into_defconfig,mpc85xx_basic_defconfig,\
$(call merge_into_defconfig,mpc85xx_base.config,\
85xx-32bit 85xx-hw fsl-emb-nonhw)
PHONY += mpc85xx_smp_defconfig
mpc85xx_smp_defconfig:
$(call merge_into_defconfig,mpc85xx_basic_defconfig,\
$(call merge_into_defconfig,mpc85xx_base.config,\
85xx-32bit 85xx-smp 85xx-hw fsl-emb-nonhw)
PHONY += corenet32_smp_defconfig
corenet32_smp_defconfig:
$(call merge_into_defconfig,corenet_basic_defconfig,\
$(call merge_into_defconfig,corenet_base.config,\
85xx-32bit 85xx-smp 85xx-hw fsl-emb-nonhw dpaa)
PHONY += corenet64_smp_defconfig
corenet64_smp_defconfig:
$(call merge_into_defconfig,corenet_basic_defconfig,\
$(call merge_into_defconfig,corenet_base.config,\
85xx-64bit 85xx-smp altivec 85xx-hw fsl-emb-nonhw dpaa)
PHONY += mpc86xx_defconfig
mpc86xx_defconfig:
$(call merge_into_defconfig,mpc86xx_basic_defconfig,\
$(call merge_into_defconfig,mpc86xx_base.config,\
86xx-hw fsl-emb-nonhw)
PHONY += mpc86xx_smp_defconfig
mpc86xx_smp_defconfig:
$(call merge_into_defconfig,mpc86xx_basic_defconfig,\
$(call merge_into_defconfig,mpc86xx_base.config,\
86xx-smp 86xx-hw fsl-emb-nonhw)
PHONY += ppc32_allmodconfig

View File

@ -210,13 +210,19 @@
fman@400000 {
ethernet@e0000 {
fixed-link = <0 1 1000 0 0>;
phy-connection-type = "sgmii";
phy-mode = "sgmii";
fixed-link {
speed = <1000>;
full-duplex;
};
};
ethernet@e2000 {
fixed-link = <1 1 1000 0 0>;
phy-connection-type = "sgmii";
phy-mode = "sgmii";
fixed-link {
speed = <1000>;
full-duplex;
};
};
ethernet@e4000 {
@ -229,7 +235,7 @@
ethernet@e8000 {
phy-handle = <&front_phy>;
phy-connection-type = "rgmii";
phy-mode = "rgmii-id";
};
mdio0: mdio@fc000 {
@ -258,14 +264,50 @@
pci1: pcie@ffe250000 {
status = "disabled";
reg = <0xf 0xfe250000 0 0x10000>;
ranges = <0x02000000 0 0xe0000000 0xc 0x10000000 0 0x10000000
0x01000000 0 0 0xf 0xf8010000 0 0x00010000>;
pcie@0 {
ranges = <0x02000000 0 0xe0000000
0x02000000 0 0xe0000000
0 0x10000000
0x01000000 0 0x00000000
0x01000000 0 0x00000000
0 0x00010000>;
};
};
pci2: pcie@ffe260000 {
status = "disabled";
reg = <0xf 0xfe260000 0 0x10000>;
ranges = <0x02000000 0 0xe0000000 0xc 0x20000000 0 0x10000000
0x01000000 0 0x00000000 0xf 0xf8020000 0 0x00010000>;
pcie@0 {
ranges = <0x02000000 0 0xe0000000
0x02000000 0 0xe0000000
0 0x10000000
0x01000000 0 0x00000000
0x01000000 0 0x00000000
0 0x00010000>;
};
};
pci3: pcie@ffe270000 {
status = "disabled";
reg = <0xf 0xfe270000 0 0x10000>;
ranges = <0x02000000 0 0xe0000000 0xc 0x30000000 0 0x10000000
0x01000000 0 0x00000000 0xf 0xf8030000 0 0x00010000>;
pcie@0 {
ranges = <0x02000000 0 0xe0000000
0x02000000 0 0xe0000000
0 0x10000000
0x01000000 0 0x00000000
0x01000000 0 0x00000000
0 0x00010000>;
};
};
qe: qe@ffe140000 {

View File

@ -18,9 +18,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -17,9 +17,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -20,9 +20,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -17,9 +17,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -20,9 +20,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -15,9 +15,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -29,9 +29,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y

View File

@ -20,9 +20,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -18,9 +18,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_BLK_DEV_RAM=y

View File

@ -20,9 +20,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -27,9 +27,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y

View File

@ -16,9 +16,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -21,9 +21,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -39,9 +39,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_VLAN_8021Q=m
CONFIG_DEVTMPFS=y

View File

@ -20,9 +20,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -29,9 +29,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -18,9 +18,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -19,9 +19,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -21,9 +21,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -23,9 +23,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_BLK_DEV_LOOP=y

View File

@ -20,9 +20,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -18,9 +18,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -31,9 +31,6 @@ CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
# CONFIG_FW_LOADER is not set

View File

@ -25,9 +25,6 @@ CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_TIPC=y
CONFIG_BRIDGE=m

View File

@ -22,9 +22,6 @@ CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
# CONFIG_FW_LOADER is not set
CONFIG_BLK_DEV_LOOP=y

View File

@ -60,7 +60,6 @@ CONFIG_SYN_COOKIES=y
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_IPCOMP=m
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET6_AH=m
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_TUNNEL=m

View File

@ -22,9 +22,6 @@ CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y

View File

@ -26,9 +26,6 @@ CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_ADVANCED is not set

View File

@ -51,11 +51,9 @@ CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
CONFIG_NET_IPIP=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_IPCOMP=m
# CONFIG_INET6_XFRM_MODE_BEET is not set
# CONFIG_IPV6_SIT is not set
CONFIG_IPV6_TUNNEL=m
CONFIG_NETFILTER=y

View File

@ -27,9 +27,6 @@ CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_ADVANCED is not set

View File

@ -0,0 +1 @@
CONFIG_SCOM_DEBUGFS=y

View File

@ -24,9 +24,6 @@ CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y

View File

@ -29,9 +29,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set

View File

@ -25,9 +25,6 @@ CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_PNP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
CONFIG_CAN=y

View File

@ -15,7 +15,6 @@ CONFIG_PPC_MEDIA5200=y
CONFIG_PPC_MPC5200_BUGFIX=y
CONFIG_PPC_MPC5200_LPBFIFO=m
# CONFIG_PPC_PMAC is not set
CONFIG_SIMPLE_GPIO=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y

View File

@ -23,9 +23,6 @@ CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y

View File

@ -38,8 +38,6 @@ CONFIG_IP_MULTICAST=y
CONFIG_SYN_COOKIES=y
CONFIG_INET_AH=y
CONFIG_INET_ESP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_IPV6 is not set
CONFIG_NETFILTER=y
CONFIG_NF_CONNTRACK=m

View File

@ -83,9 +83,6 @@ CONFIG_INET_IPCOMP=m
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_IPCOMP=m
CONFIG_INET6_XFRM_MODE_TRANSPORT=m
CONFIG_INET6_XFRM_MODE_TUNNEL=m
CONFIG_INET6_XFRM_MODE_BEET=m
CONFIG_IPV6_SIT=m
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_ADVANCED is not set

View File

@ -32,9 +32,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_BRIDGE=m
CONFIG_CONNECTOR=y
CONFIG_MTD=y

View File

@ -109,9 +109,6 @@ CONFIG_SYN_COOKIES=y
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_IPCOMP=m
CONFIG_INET_XFRM_MODE_TRANSPORT=m
CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_INET_XFRM_MODE_BEET=m
CONFIG_INET_DIAG=m
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_HSTCP=m
@ -129,7 +126,6 @@ CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION=m
CONFIG_IPV6_TUNNEL=m
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_IPV6_SUBTREES=y

View File

@ -47,9 +47,6 @@ CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
CONFIG_BT=m
CONFIG_BT_RFCOMM=m

View File

@ -46,6 +46,7 @@ CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_IDLE=y
CONFIG_HZ_100=y
CONFIG_KEXEC=y
CONFIG_PRESERVE_FA_DUMP=y
CONFIG_IRQ_ALL_CPUS=y
CONFIG_NUMA=y
# CONFIG_COMPACTION is not set
@ -63,9 +64,6 @@ CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_NET_IPIP=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_DNS_RESOLVER=y
# CONFIG_WIRELESS is not set
CONFIG_DEVTMPFS=y

View File

@ -22,9 +22,6 @@ CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y

View File

@ -27,9 +27,6 @@ CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
# CONFIG_FW_LOADER is not set

View File

@ -29,9 +29,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
CONFIG_BT=y

View File

@ -103,6 +103,7 @@ static int __init crc_test_init(void)
crc32, verify32, len);
break;
}
cond_resched();
}
pr_info("crc-vpmsum_test done, completed %lu iterations\n", i);
} while (0);

View File

@ -4,6 +4,7 @@ generated-y += syscall_table_64.h
generated-y += syscall_table_c32.h
generated-y += syscall_table_spu.h
generic-y += div64.h
generic-y += dma-mapping.h
generic-y += export.h
generic-y += irq_regs.h
generic-y += local64.h
@ -11,3 +12,4 @@ generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += vtime.h
generic-y += msi.h
generic-y += early_ioremap.h

View File

@ -122,11 +122,6 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
unsigned long address)
{
/*
* By now all the pud entries should be none entries. So go
* ahead and flush the page walk cache
*/
flush_tlb_pgtable(tlb, address);
pgtable_free_tlb(tlb, pud, PUD_INDEX);
}
@ -143,11 +138,6 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
unsigned long address)
{
/*
* By now all the pud entries should be none entries. So go
* ahead and flush the page walk cache
*/
flush_tlb_pgtable(tlb, address);
return pgtable_free_tlb(tlb, pmd, PMD_INDEX);
}
@ -166,11 +156,6 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
unsigned long address)
{
/*
* By now all the pud entries should be none entries. So go
* ahead and flush the page walk cache
*/
flush_tlb_pgtable(tlb, address);
pgtable_free_tlb(tlb, table, PTE_INDEX);
}

View File

@ -147,22 +147,6 @@ static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
flush_tlb_page(vma, address);
}
/*
* flush the page walk cache for the address
*/
static inline void flush_tlb_pgtable(struct mmu_gather *tlb, unsigned long address)
{
/*
* Flush the page table walk cache on freeing a page table. We already
* have marked the upper/higher level page table entry none by now.
* So it is safe to flush PWC here.
*/
if (!radix_enabled())
return;
radix__flush_tlb_pwc(tlb, address);
}
extern bool tlbie_capable;
extern bool tlbie_enabled;

View File

@ -49,6 +49,15 @@
".previous\n"
#endif
#define BUG_ENTRY(insn, flags, ...) \
__asm__ __volatile__( \
"1: " insn "\n" \
_EMIT_BUG_ENTRY \
: : "i" (__FILE__), "i" (__LINE__), \
"i" (flags), \
"i" (sizeof(struct bug_entry)), \
##__VA_ARGS__)
/*
* BUG_ON() and WARN_ON() do their best to cooperate with compile-time
* optimisations. However depending on the complexity of the condition
@ -56,11 +65,7 @@
*/
#define BUG() do { \
__asm__ __volatile__( \
"1: twi 31,0,0\n" \
_EMIT_BUG_ENTRY \
: : "i" (__FILE__), "i" (__LINE__), \
"i" (0), "i" (sizeof(struct bug_entry))); \
BUG_ENTRY("twi 31, 0, 0", 0); \
unreachable(); \
} while (0)
@ -69,23 +74,11 @@
if (x) \
BUG(); \
} else { \
__asm__ __volatile__( \
"1: "PPC_TLNEI" %4,0\n" \
_EMIT_BUG_ENTRY \
: : "i" (__FILE__), "i" (__LINE__), "i" (0), \
"i" (sizeof(struct bug_entry)), \
"r" ((__force long)(x))); \
BUG_ENTRY(PPC_TLNEI " %4, 0", 0, "r" ((__force long)(x))); \
} \
} while (0)
#define __WARN_FLAGS(flags) do { \
__asm__ __volatile__( \
"1: twi 31,0,0\n" \
_EMIT_BUG_ENTRY \
: : "i" (__FILE__), "i" (__LINE__), \
"i" (BUGFLAG_WARNING|(flags)), \
"i" (sizeof(struct bug_entry))); \
} while (0)
#define __WARN_FLAGS(flags) BUG_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags))
#define WARN_ON(x) ({ \
int __ret_warn_on = !!(x); \
@ -93,13 +86,9 @@
if (__ret_warn_on) \
__WARN(); \
} else { \
__asm__ __volatile__( \
"1: "PPC_TLNEI" %4,0\n" \
_EMIT_BUG_ENTRY \
: : "i" (__FILE__), "i" (__LINE__), \
"i" (BUGFLAG_WARNING|BUGFLAG_TAINT(TAINT_WARN)),\
"i" (sizeof(struct bug_entry)), \
"r" (__ret_warn_on)); \
BUG_ENTRY(PPC_TLNEI " %4, 0", \
BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN), \
"r" (__ret_warn_on)); \
} \
unlikely(__ret_warn_on); \
})

View File

@ -55,42 +55,48 @@ struct ppc64_caches {
extern struct ppc64_caches ppc64_caches;
static inline u32 l1_cache_shift(void)
static inline u32 l1_dcache_shift(void)
{
return ppc64_caches.l1d.log_block_size;
}
static inline u32 l1_cache_bytes(void)
static inline u32 l1_dcache_bytes(void)
{
return ppc64_caches.l1d.block_size;
}
static inline u32 l1_icache_shift(void)
{
return ppc64_caches.l1i.log_block_size;
}
static inline u32 l1_icache_bytes(void)
{
return ppc64_caches.l1i.block_size;
}
#else
static inline u32 l1_cache_shift(void)
static inline u32 l1_dcache_shift(void)
{
return L1_CACHE_SHIFT;
}
static inline u32 l1_cache_bytes(void)
static inline u32 l1_dcache_bytes(void)
{
return L1_CACHE_BYTES;
}
static inline u32 l1_icache_shift(void)
{
return L1_CACHE_SHIFT;
}
static inline u32 l1_icache_bytes(void)
{
return L1_CACHE_BYTES;
}
#endif
#endif /* ! __ASSEMBLY__ */
#if defined(__ASSEMBLY__)
/*
* For a snooping icache, we still need a dummy icbi to purge all the
* prefetched instructions from the ifetch buffers. We also need a sync
* before the icbi to order the actual stores to memory that might
* have modified instructions with the icbi.
*/
#define PURGE_PREFETCHED_INS \
sync; \
icbi 0,r3; \
sync; \
isync
#else
#define __read_mostly __attribute__((__section__(".data..read_mostly")))
#ifdef CONFIG_PPC_BOOK3S_32
@ -124,6 +130,17 @@ static inline void dcbst(void *addr)
{
__asm__ __volatile__ ("dcbst 0, %0" : : "r"(addr) : "memory");
}
static inline void icbi(void *addr)
{
asm volatile ("icbi 0, %0" : : "r"(addr) : "memory");
}
static inline void iccci(void *addr)
{
asm volatile ("iccci 0, %0" : : "r"(addr) : "memory");
}
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_CACHE_H */

View File

@ -42,29 +42,25 @@ extern void flush_dcache_page(struct page *page);
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
extern void flush_icache_range(unsigned long, unsigned long);
void flush_icache_range(unsigned long start, unsigned long stop);
extern void flush_icache_user_range(struct vm_area_struct *vma,
struct page *page, unsigned long addr,
int len);
extern void __flush_dcache_icache(void *page_va);
extern void flush_dcache_icache_page(struct page *page);
#if defined(CONFIG_PPC32) && !defined(CONFIG_BOOKE)
extern void __flush_dcache_icache_phys(unsigned long physaddr);
#else
static inline void __flush_dcache_icache_phys(unsigned long physaddr)
{
BUG();
}
#endif
void __flush_dcache_icache(void *page);
/*
* Write any modified data cache blocks out to memory and invalidate them.
* Does not invalidate the corresponding instruction cache blocks.
/**
* flush_dcache_range(): Write any modified data cache blocks out to memory and
* invalidate them. Does not invalidate the corresponding instruction cache
* blocks.
*
* @start: the start address
* @stop: the stop address (exclusive)
*/
static inline void flush_dcache_range(unsigned long start, unsigned long stop)
{
unsigned long shift = l1_cache_shift();
unsigned long bytes = l1_cache_bytes();
unsigned long shift = l1_dcache_shift();
unsigned long bytes = l1_dcache_bytes();
void *addr = (void *)(start & ~(bytes - 1));
unsigned long size = stop - (unsigned long)addr + (bytes - 1);
unsigned long i;
@ -89,8 +85,8 @@ static inline void flush_dcache_range(unsigned long start, unsigned long stop)
*/
static inline void clean_dcache_range(unsigned long start, unsigned long stop)
{
unsigned long shift = l1_cache_shift();
unsigned long bytes = l1_cache_bytes();
unsigned long shift = l1_dcache_shift();
unsigned long bytes = l1_dcache_bytes();
void *addr = (void *)(start & ~(bytes - 1));
unsigned long size = stop - (unsigned long)addr + (bytes - 1);
unsigned long i;
@ -108,8 +104,8 @@ static inline void clean_dcache_range(unsigned long start, unsigned long stop)
static inline void invalidate_dcache_range(unsigned long start,
unsigned long stop)
{
unsigned long shift = l1_cache_shift();
unsigned long bytes = l1_cache_bytes();
unsigned long shift = l1_dcache_shift();
unsigned long bytes = l1_dcache_bytes();
void *addr = (void *)(start & ~(bytes - 1));
unsigned long size = stop - (unsigned long)addr + (bytes - 1);
unsigned long i;
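
For context on the highlights item about ranges larger than 4GB: the old asm
flush routines did not handle such ranges correctly, and this series replaces
them with C built on the split l1_dcache/l1_icache accessors and the
dcbst()/icbi() helpers shown above. A simplified sketch of the approach
(barriers and the coherent-icache fast path are abbreviated; the real
flush_icache_range() differs in detail):

#include <asm/cache.h>
#include <asm/barrier.h>
#include <asm/synch.h>

static void flush_icache_range_sketch(unsigned long start, unsigned long stop)
{
    unsigned long dbytes = l1_dcache_bytes();
    unsigned long ibytes = l1_icache_bytes();
    unsigned long addr;

    /* Write back any modified data cache blocks covering the range. */
    for (addr = start & ~(dbytes - 1); addr < stop; addr += dbytes)
        dcbst((void *)addr);
    mb();    /* order the stores before invalidating the icache */

    /* Invalidate the corresponding instruction cache blocks. */
    for (addr = start & ~(ibytes - 1); addr < stop; addr += ibytes)
        icbi((void *)addr);
    mb();
    isync();
}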

View File

@ -1,18 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2004 IBM
*/
#ifndef _ASM_DMA_MAPPING_H
#define _ASM_DMA_MAPPING_H
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
/* We don't handle the NULL dev case for ISA for now. We could
* do it via an out of line call but it is not needed for now. The
* only ISA DMA device we support is the floppy and we have a hack
* in the floppy driver directly to get a device for us.
*/
return NULL;
}
#endif /* _ASM_DMA_MAPPING_H */

View File

@ -15,6 +15,7 @@
#define _ASM_FIXMAP_H
#ifndef __ASSEMBLY__
#include <linux/sizes.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#ifdef CONFIG_HIGHMEM
@ -62,8 +63,23 @@ enum fixed_addresses {
FIX_IMMR_START,
FIX_IMMR_BASE = __ALIGN_MASK(FIX_IMMR_START, FIX_IMMR_SIZE - 1) - 1 +
FIX_IMMR_SIZE,
#endif
#ifdef CONFIG_PPC_83xx
/* For IMMR we need an aligned 2M area */
#define FIX_IMMR_SIZE (SZ_2M / PAGE_SIZE)
FIX_IMMR_START,
FIX_IMMR_BASE = __ALIGN_MASK(FIX_IMMR_START, FIX_IMMR_SIZE - 1) - 1 +
FIX_IMMR_SIZE,
#endif
/* FIX_PCIE_MCFG, */
__end_of_permanent_fixed_addresses,
#define NR_FIX_BTMAPS (SZ_256K / PAGE_SIZE)
#define FIX_BTMAPS_SLOTS 16
#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
FIX_BTMAP_END = __end_of_permanent_fixed_addresses,
FIX_BTMAP_BEGIN = FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1,
__end_of_fixed_addresses
};
@ -71,14 +87,22 @@ enum fixed_addresses {
#define FIXADDR_START (FIXADDR_TOP - __FIXADDR_SIZE)
#define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_NCG
#define FIXMAP_PAGE_IO PAGE_KERNEL_NCG
#include <asm-generic/fixmap.h>
static inline void __set_fixmap(enum fixed_addresses idx,
phys_addr_t phys, pgprot_t flags)
{
map_kernel_page(fix_to_virt(idx), phys, flags);
if (__builtin_constant_p(idx))
BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
else if (WARN_ON(idx >= __end_of_fixed_addresses))
return;
map_kernel_page(__fix_to_virt(idx), phys, flags);
}
#define __early_set_fixmap __set_fixmap
#endif /* !__ASSEMBLY__ */
#endif

View File

@ -14,6 +14,7 @@ struct arch_hw_breakpoint {
unsigned long address;
u16 type;
u16 len; /* length of the target data symbol */
u16 hw_len; /* length programmed in hw */
};
/* Note: Don't change the first 6 bits below as they are in the same order
@ -33,6 +34,11 @@ struct arch_hw_breakpoint {
#define HW_BRK_TYPE_PRIV_ALL (HW_BRK_TYPE_USER | HW_BRK_TYPE_KERNEL | \
HW_BRK_TYPE_HYP)
#define HW_BREAKPOINT_ALIGN 0x7
#define DABR_MAX_LEN 8
#define DAWR_MAX_LEN 512
#ifdef CONFIG_HAVE_HW_BREAKPOINT
#include <linux/kdebug.h>
#include <asm/reg.h>
@ -44,8 +50,6 @@ struct pmu;
struct perf_sample_data;
struct task_struct;
#define HW_BREAKPOINT_ALIGN 0x7
extern int hw_breakpoint_slots(int type);
extern int arch_bp_generic_fields(int type, int *gen_bp_type);
extern int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw);
@ -70,6 +74,7 @@ static inline void hw_breakpoint_disable(void)
brk.address = 0;
brk.type = 0;
brk.len = 0;
brk.hw_len = 0;
if (ppc_breakpoint_available())
__set_breakpoint(&brk);
}

View File

@ -226,8 +226,8 @@ static inline bool arch_irqs_disabled(void)
#endif /* CONFIG_PPC_BOOK3S */
#ifdef CONFIG_PPC_BOOK3E
#define __hard_irq_enable() asm volatile("wrteei 1" : : : "memory")
#define __hard_irq_disable() asm volatile("wrteei 0" : : : "memory")
#define __hard_irq_enable() wrtee(MSR_EE)
#define __hard_irq_disable() wrtee(0)
#else
#define __hard_irq_enable() __mtmsrd(MSR_EE|MSR_RI, 1)
#define __hard_irq_disable() __mtmsrd(MSR_RI, 1)
@ -280,8 +280,6 @@ extern void force_external_irq_replay(void);
#else /* CONFIG_PPC64 */
#define SET_MSR_EE(x) mtmsr(x)
static inline unsigned long arch_local_save_flags(void)
{
return mfmsr();
@ -289,47 +287,44 @@ static inline unsigned long arch_local_save_flags(void)
static inline void arch_local_irq_restore(unsigned long flags)
{
#if defined(CONFIG_BOOKE)
asm volatile("wrtee %0" : : "r" (flags) : "memory");
#else
mtmsr(flags);
#endif
if (IS_ENABLED(CONFIG_BOOKE))
wrtee(flags);
else
mtmsr(flags);
}
static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags = arch_local_save_flags();
#ifdef CONFIG_BOOKE
asm volatile("wrteei 0" : : : "memory");
#elif defined(CONFIG_PPC_8xx)
wrtspr(SPRN_EID);
#else
SET_MSR_EE(flags & ~MSR_EE);
#endif
if (IS_ENABLED(CONFIG_BOOKE))
wrtee(0);
else if (IS_ENABLED(CONFIG_PPC_8xx))
wrtspr(SPRN_EID);
else
mtmsr(flags & ~MSR_EE);
return flags;
}
static inline void arch_local_irq_disable(void)
{
#ifdef CONFIG_BOOKE
asm volatile("wrteei 0" : : : "memory");
#elif defined(CONFIG_PPC_8xx)
wrtspr(SPRN_EID);
#else
arch_local_irq_save();
#endif
if (IS_ENABLED(CONFIG_BOOKE))
wrtee(0);
else if (IS_ENABLED(CONFIG_PPC_8xx))
wrtspr(SPRN_EID);
else
mtmsr(mfmsr() & ~MSR_EE);
}
static inline void arch_local_irq_enable(void)
{
#ifdef CONFIG_BOOKE
asm volatile("wrteei 1" : : : "memory");
#elif defined(CONFIG_PPC_8xx)
wrtspr(SPRN_EIE);
#else
unsigned long msr = mfmsr();
SET_MSR_EE(msr | MSR_EE);
#endif
if (IS_ENABLED(CONFIG_BOOKE))
wrtee(MSR_EE);
else if (IS_ENABLED(CONFIG_PPC_8xx))
wrtspr(SPRN_EIE);
else
mtmsr(mfmsr() | MSR_EE);
}
static inline bool arch_irqs_disabled_flags(unsigned long flags)

View File

@ -3,6 +3,7 @@
#define _ASM_POWERPC_KUP_8XX_H_
#include <asm/bug.h>
#include <asm/mmu.h>
#ifdef CONFIG_PPC_KUAP

View File

@ -75,7 +75,6 @@
#define MAS2_E 0x00000001
#define MAS2_WIMGE_MASK 0x0000001f
#define MAS2_EPN_MASK(size) (~0 << (size + 10))
#define MAS2_VAL(addr, size, flags) ((addr) & MAS2_EPN_MASK(size) | (flags))
#define MAS3_RPN 0xFFFFF000
#define MAS3_U0 0x00000200
@ -221,6 +220,16 @@
#define TLBILX_T_CLASS2 6
#define TLBILX_T_CLASS3 7
/*
* The mapping only needs to be cache-coherent on SMP, except on
* Freescale e500mc derivatives where it's also needed for coherent DMA.
*/
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define MAS2_M_IF_NEEDED MAS2_M
#else
#define MAS2_M_IF_NEEDED 0
#endif
#ifndef __ASSEMBLY__
#include <asm/bug.h>

View File

@ -211,7 +211,10 @@
#define OPAL_MPIPL_UPDATE 173
#define OPAL_MPIPL_REGISTER_TAG 174
#define OPAL_MPIPL_QUERY_TAG 175
#define OPAL_LAST 175
#define OPAL_SECVAR_GET 176
#define OPAL_SECVAR_GET_NEXT 177
#define OPAL_SECVAR_ENQUEUE_UPDATE 178
#define OPAL_LAST 178
#define QUIESCE_HOLD 1 /* Spin all calls at entry */
#define QUIESCE_REJECT 2 /* Fail all calls with OPAL_BUSY */

View File

@ -298,6 +298,13 @@ int opal_sensor_group_clear(u32 group_hndl, int token);
int opal_sensor_group_enable(u32 group_hndl, int token, bool enable);
int opal_nx_coproc_init(uint32_t chip_id, uint32_t ct);
int opal_secvar_get(const char *key, uint64_t key_len, u8 *data,
uint64_t *data_size);
int opal_secvar_get_next(const char *key, uint64_t *key_len,
uint64_t key_buf_size);
int opal_secvar_enqueue_update(const char *key, uint64_t key_len, u8 *data,
uint64_t data_size);
s64 opal_mpipl_update(enum opal_mpipl_ops op, u64 src, u64 dest, u64 size);
s64 opal_mpipl_register_tag(enum opal_mpipl_tags tag, u64 addr);
s64 opal_mpipl_query_tag(enum opal_mpipl_tags tag, u64 *addr);

View File

@ -325,6 +325,13 @@ void arch_free_page(struct page *page, int order);
struct vm_area_struct;
extern unsigned long kernstart_virt_addr;
static inline unsigned long kaslr_offset(void)
{
return kernstart_virt_addr - KERNELBASE;
}
#include <asm-generic/memory_model.h>
#endif /* __ASSEMBLY__ */
#include <asm/slice.h>

View File

@ -157,13 +157,9 @@ static inline bool pgd_is_leaf(pgd_t pgd)
#define is_ioremap_addr is_ioremap_addr
static inline bool is_ioremap_addr(const void *x)
{
#ifdef CONFIG_MMU
unsigned long addr = (unsigned long)x;
return addr >= IOREMAP_BASE && addr < IOREMAP_END;
#else
return false;
#endif
}
#endif /* CONFIG_PPC64 */

View File

@ -25,9 +25,7 @@
#include <asm/reg_fsl_emb.h>
#endif
#ifdef CONFIG_PPC_8xx
#include <asm/reg_8xx.h>
#endif /* CONFIG_PPC_8xx */
#define MSR_SF_LG 63 /* Enable 64 bit mode */
#define MSR_ISF_LG 61 /* Interrupt 64b mode valid on 630 */
@ -1382,6 +1380,14 @@ static inline void mtmsr_isync(unsigned long val)
#define wrtspr(rn) asm volatile("mtspr " __stringify(rn) ",0" : \
: : "memory")
static inline void wrtee(unsigned long val)
{
if (__builtin_constant_p(val))
asm volatile("wrteei %0" : : "i" ((val & MSR_EE) ? 1 : 0) : "memory");
else
asm volatile("wrtee %0" : : "r" (val) : "memory");
}
extern unsigned long msr_check_and_set(unsigned long bits);
extern bool strict_msr_control;
extern void __msr_check_and_clear(unsigned long bits);
@ -1396,19 +1402,9 @@ static inline void msr_check_and_clear(unsigned long bits)
#define mftb() ({unsigned long rval; \
asm volatile( \
"90: mfspr %0, %2;\n" \
"97: cmpwi %0,0;\n" \
" beq- 90b;\n" \
"99:\n" \
".section __ftr_fixup,\"a\"\n" \
".align 3\n" \
"98:\n" \
" .8byte %1\n" \
" .8byte %1\n" \
" .8byte 97b-98b\n" \
" .8byte 99b-98b\n" \
" .8byte 0\n" \
" .8byte 0\n" \
".previous" \
ASM_FTR_IFSET( \
"97: cmpwi %0,0;\n" \
" beq- 90b;\n", "", %1) \
: "=r" (rval) \
: "i" (CPU_FTR_CELL_TB_BUG), "i" (SPRN_TBRL) : "cr0"); \
rval;})

View File

@ -5,8 +5,6 @@
#ifndef _ASM_POWERPC_REG_8xx_H
#define _ASM_POWERPC_REG_8xx_H
#include <asm/mmu.h>
/* Cache control on the MPC8xx is provided through some additional
* special purpose registers.
*/
@ -38,7 +36,9 @@
#define SPRN_CMPF 153
#define SPRN_LCTRL1 156
#define SPRN_LCTRL2 157
#ifdef CONFIG_PPC_8xx
#define SPRN_ICTRL 158
#endif
#define SPRN_BAR 159
/* Commands. Only the first few are available to the instruction cache.

View File

@ -5,8 +5,22 @@
#include <linux/elf.h>
#include <linux/uaccess.h>
#define arch_is_kernel_initmem_freed arch_is_kernel_initmem_freed
#include <asm-generic/sections.h>
extern bool init_mem_is_free;
static inline int arch_is_kernel_initmem_freed(unsigned long addr)
{
if (!init_mem_is_free)
return 0;
return addr >= (unsigned long)__init_begin &&
addr < (unsigned long)__init_end;
}
extern char __head_end[];
#ifdef __powerpc64__

View File

@ -0,0 +1,29 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Secure boot definitions
*
* Copyright (C) 2019 IBM Corporation
* Author: Nayna Jain
*/
#ifndef _ASM_POWER_SECURE_BOOT_H
#define _ASM_POWER_SECURE_BOOT_H
#ifdef CONFIG_PPC_SECURE_BOOT
bool is_ppc_secureboot_enabled(void);
bool is_ppc_trustedboot_enabled(void);
#else
static inline bool is_ppc_secureboot_enabled(void)
{
return false;
}
static inline bool is_ppc_trustedboot_enabled(void)
{
return false;
}
#endif
#endif
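
A minimal sketch of how an IMA arch hook could consume these accessors
(assumed shape only; the ima_arch.c added by this series may differ in
detail):

#include <linux/ima.h>
#include <asm/secure_boot.h>

/* Report the firmware secure boot state to the IMA core. */
bool arch_ima_get_secureboot(void)
{
    return is_ppc_secureboot_enabled();
}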

View File

@ -9,7 +9,7 @@
#define _ASM_POWERPC_SECURITY_FEATURES_H
extern unsigned long powerpc_security_features;
extern u64 powerpc_security_features;
extern bool rfi_flush;
/* These are bit flags */
@ -24,17 +24,17 @@ void setup_stf_barrier(void);
void do_stf_barrier_fixups(enum stf_barrier_type types);
void setup_count_cache_flush(void);
static inline void security_ftr_set(unsigned long feature)
static inline void security_ftr_set(u64 feature)
{
powerpc_security_features |= feature;
}
static inline void security_ftr_clear(unsigned long feature)
static inline void security_ftr_clear(u64 feature)
{
powerpc_security_features &= ~feature;
}
static inline bool security_ftr_enabled(unsigned long feature)
static inline bool security_ftr_enabled(u64 feature)
{
return !!(powerpc_security_features & feature);
}

View File

@ -0,0 +1,35 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2019 IBM Corporation
* Author: Nayna Jain
*
* PowerPC secure variable operations.
*/
#ifndef SECVAR_OPS_H
#define SECVAR_OPS_H
#include <linux/types.h>
#include <linux/errno.h>
extern const struct secvar_operations *secvar_ops;
struct secvar_operations {
int (*get)(const char *key, uint64_t key_len, u8 *data,
uint64_t *data_size);
int (*get_next)(const char *key, uint64_t *key_len,
uint64_t keybufsize);
int (*set)(const char *key, uint64_t key_len, u8 *data,
uint64_t data_size);
};
#ifdef CONFIG_PPC_SECURE_BOOT
extern void set_secvar_ops(const struct secvar_operations *ops);
#else
static inline void set_secvar_ops(const struct secvar_operations *ops) { }
#endif
#endif
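
The powernv backend wires these hooks to the OPAL calls added earlier in this
series; a condensed sketch of such a registration (the real opal-secvar code
also validates key lengths and translates OPAL return codes to errno values):

#include <asm/secvar.h>
#include <asm/opal.h>

static int opal_get_variable(const char *key, uint64_t ksize,
                             u8 *data, uint64_t *dsize)
{
    return opal_secvar_get(key, ksize, data, dsize);
}

static int opal_get_next_variable(const char *key, uint64_t *ksize,
                                  uint64_t keybufsize)
{
    return opal_secvar_get_next(key, ksize, keybufsize);
}

static int opal_set_variable(const char *key, uint64_t ksize,
                             u8 *data, uint64_t dsize)
{
    return opal_secvar_enqueue_update(key, ksize, data, dsize);
}

static const struct secvar_operations opal_secvar_ops_sketch = {
    .get = opal_get_variable,
    .get_next = opal_get_next_variable,
    .set = opal_set_variable,
};

/* A powernv initcall would then hand the ops to the common secvar code: */
static int __init opal_secvar_init_sketch(void)
{
    set_secvar_ops(&opal_secvar_ops_sketch);
    return 0;
}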

View File

@ -5,20 +5,6 @@
* (C) Copyright 2006 IBM Corp.
*
* Author: Dwayne Grant McConnell <decimal@us.ibm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef _UAPI_SPU_INFO_H

View File

@ -5,9 +5,6 @@
CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"'
# Disable clang warning for using setjmp without setjmp.h header
CFLAGS_crash.o += $(call cc-disable-warning, builtin-requires-header)
ifdef CONFIG_PPC64
CFLAGS_prom_init.o += $(NO_MINIMAL_TOC)
endif
@ -22,6 +19,8 @@ CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_prom_init.o += $(call cc-option, -fno-stack-protector)
CFLAGS_prom_init.o += -DDISABLE_BRANCH_PROFILING
CFLAGS_prom_init.o += -ffreestanding
ifdef CONFIG_FUNCTION_TRACER
# Do not trace early boot code
@ -39,7 +38,6 @@ KASAN_SANITIZE_btext.o := n
ifdef CONFIG_KASAN
CFLAGS_early_32.o += -DDISABLE_BRANCH_PROFILING
CFLAGS_cputable.o += -DDISABLE_BRANCH_PROFILING
CFLAGS_prom_init.o += -DDISABLE_BRANCH_PROFILING
CFLAGS_btext.o += -DDISABLE_BRANCH_PROFILING
endif
@ -78,9 +76,8 @@ obj-$(CONFIG_EEH) += eeh.o eeh_pe.o eeh_dev.o eeh_cache.o \
eeh_driver.o eeh_event.o eeh_sysfs.o
obj-$(CONFIG_GENERIC_TBSYNC) += smp-tbsync.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
ifneq ($(CONFIG_FA_DUMP)$(CONFIG_PRESERVE_FA_DUMP),)
obj-y += fadump.o
endif
obj-$(CONFIG_FA_DUMP) += fadump.o
obj-$(CONFIG_PRESERVE_FA_DUMP) += fadump.o
ifdef CONFIG_PPC32
obj-$(CONFIG_E500) += idle_e500.o
endif
@ -126,14 +123,6 @@ pci64-$(CONFIG_PPC64) += pci_dn.o pci-hotplug.o isa-bridge.o
obj-$(CONFIG_PCI) += pci_$(BITS).o $(pci64-y) \
pci-common.o pci_of_scan.o
obj-$(CONFIG_PCI_MSI) += msi.o
obj-$(CONFIG_KEXEC_CORE) += machine_kexec.o crash.o \
machine_kexec_$(BITS).o
obj-$(CONFIG_KEXEC_FILE) += machine_kexec_file_$(BITS).o kexec_elf_$(BITS).o
ifdef CONFIG_HAVE_IMA_KEXEC
ifdef CONFIG_IMA
obj-y += ima_kexec.o
endif
endif
obj-$(CONFIG_AUDIT) += audit.o
obj64-$(CONFIG_AUDIT) += compat_audit.o
@ -161,16 +150,13 @@ ifneq ($(CONFIG_PPC_POWERNV)$(CONFIG_PPC_SVM),)
obj-y += ucall.o
endif
obj-$(CONFIG_PPC_SECURE_BOOT) += secure_boot.o ima_arch.o secvar-ops.o
obj-$(CONFIG_PPC_SECVAR_SYSFS) += secvar-sysfs.o
# Disable GCOV, KCOV & sanitizers in odd or sensitive code
GCOV_PROFILE_prom_init.o := n
KCOV_INSTRUMENT_prom_init.o := n
UBSAN_SANITIZE_prom_init.o := n
GCOV_PROFILE_machine_kexec_64.o := n
KCOV_INSTRUMENT_machine_kexec_64.o := n
UBSAN_SANITIZE_machine_kexec_64.o := n
GCOV_PROFILE_machine_kexec_32.o := n
KCOV_INSTRUMENT_machine_kexec_32.o := n
UBSAN_SANITIZE_machine_kexec_32.o := n
GCOV_PROFILE_kprobes.o := n
KCOV_INSTRUMENT_kprobes.o := n
UBSAN_SANITIZE_kprobes.o := n

View File

@ -231,7 +231,7 @@ _GLOBAL(__setup_cpu_e5500)
blr
#endif
/* flush L1 date cache, it can apply to e500v2, e500mc and e5500 */
/* flush L1 data cache, it can apply to e500v2, e500mc and e5500 */
_GLOBAL(flush_dcache_L1)
mfmsr r10
wrteei 0

View File

@ -30,10 +30,10 @@ int set_dawr(struct arch_hw_breakpoint *brk)
* DAWR length is stored in field MDR bits 48:53. Matches range in
* doublewords (64 bits) biased by -1 eg. 0b000000=1DW and
* 0b111111=64DW.
* brk->len is in bytes.
* brk->hw_len is in bytes.
* This aligns up to double word size, shifts and does the bias.
*/
mrd = ((brk->len + 7) >> 3) - 1;
mrd = ((brk->hw_len + 7) >> 3) - 1;
dawrx |= (mrd & 0x3f) << (63 - 53);
if (ppc_md.set_dawr)
@ -54,7 +54,7 @@ static ssize_t dawr_write_file_bool(struct file *file,
const char __user *user_buf,
size_t count, loff_t *ppos)
{
struct arch_hw_breakpoint null_brk = {0, 0, 0};
struct arch_hw_breakpoint null_brk = {0};
size_t rc;
/* Send error to user if the hypervisor won't allow us to write DAWR */

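The MDR encoding described in the comment above is easy to sanity-check with a few concrete lengths. The snippet below is a standalone illustration of the same arithmetic, not code from the file:

#include <assert.h>

/* Doubleword-granular DAWR length encoding: round hw_len up to 8-byte
 * units, then bias by -1 so that 1 doubleword encodes as 0b000000 and
 * 64 doublewords encode as 0b111111, mirroring set_dawr() above. */
static unsigned int dawr_mrd(unsigned int hw_len)
{
	return ((hw_len + 7) >> 3) - 1;
}

int main(void)
{
	assert(dawr_mrd(1) == 0);	/* 1 byte    -> 1 DW  -> 0b000000 */
	assert(dawr_mrd(8) == 0);	/* 8 bytes   -> 1 DW  -> 0b000000 */
	assert(dawr_mrd(9) == 1);	/* 9 bytes   -> 2 DW  -> 0b000001 */
	assert(dawr_mrd(512) == 63);	/* 512 bytes -> 64 DW -> 0b111111 */
	return 0;
}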
View File

@ -19,10 +19,13 @@
*/
notrace unsigned long __init early_init(unsigned long dt_ptr)
{
unsigned long offset = reloc_offset();
unsigned long kva, offset = reloc_offset();
kva = *PTRRELOC(&kernstart_virt_addr);
/* First zero the BSS */
memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
if (kva == KERNELBASE)
memset(PTRRELOC(&__bss_start), 0, __bss_stop - __bss_start);
/*
* Identify the CPU type and fix up code sections
@ -32,5 +35,5 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
apply_feature_fixups();
return KERNELBASE + offset;
return kva + offset;
}

View File

@ -1,25 +1,9 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* PCI Error Recovery Driver for RPA-compliant PPC64 platform.
* Copyright IBM Corp. 2004 2005
* Copyright Linas Vepstas <linas@linas.org> 2004, 2005
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
* Send comments and feedback to Linas Vepstas <linas@austin.ibm.com>
*/
#include <linux/delay.h>
@ -897,12 +881,12 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
/* Log the event */
if (pe->type & EEH_PE_PHB) {
pr_err("EEH: PHB#%x failure detected, location: %s\n",
pr_err("EEH: Recovering PHB#%x, location: %s\n",
pe->phb->global_number, eeh_pe_loc_get(pe));
} else {
struct eeh_pe *phb_pe = eeh_phb_pe_get(pe->phb);
pr_err("EEH: Frozen PHB#%x-PE#%x detected\n",
pr_err("EEH: Recovering PHB#%x-PE#%x\n",
pe->phb->global_number, pe->addr);
pr_err("EEH: PE location: %s, PHB location: %s\n",
eeh_pe_loc_get(pe), eeh_pe_loc_get(phb_pe));

View File

@ -1,25 +1,9 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Sysfs entries for PCI Error Recovery for PAPR-compliant platform.
* Copyright IBM Corporation 2007
* Copyright Linas Vepstas <linas@austin.ibm.com> 2007
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
* Send comments and feedback to Linas Vepstas <linas@austin.ibm.com>
*/
#include <linux/pci.h>

View File

@ -1346,16 +1346,6 @@ skpinv: addi r6,r6,1 /* Increment */
sync
isync
/*
* The mapping only needs to be cache-coherent on SMP, except on
* Freescale e500mc derivatives where it's also needed for coherent DMA.
*/
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define M_IF_NEEDED MAS2_M
#else
#define M_IF_NEEDED 0
#endif
/* 6. Setup KERNELBASE mapping in TLB[0]
*
* r3 = MAS0 w/TLBSEL & ESEL for the entry we started in
@ -1368,7 +1358,7 @@ skpinv: addi r6,r6,1 /* Increment */
ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_1GB))@l
mtspr SPRN_MAS1,r6
LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET | M_IF_NEEDED)
LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET | MAS2_M_IF_NEEDED)
mtspr SPRN_MAS2,r6
rlwinm r5,r5,0,0,25

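The open-coded M_IF_NEEDED block removed above (and its twins in the other low-level entry files later in this diff) is replaced by a single shared MAS2_M_IF_NEEDED macro. A minimal sketch of that shared definition, reconstructed from the removed block — the exact header it lands in is not shown here:

/* Sketch of the consolidated definition: the mapping only needs to be
 * cache-coherent on SMP, except on Freescale e500mc derivatives where
 * it is also needed for coherent DMA. */
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define MAS2_M_IF_NEEDED	MAS2_M
#else
#define MAS2_M_IF_NEEDED	0
#endif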
View File

@ -514,7 +514,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
* If stack=0, then the stack is already set in r1, and r1 is saved in r10.
* PPR save and CPU accounting is not done for the !stack case (XXX why not?)
*/
.macro INT_COMMON vec, area, stack, kaup, reconcile, dar, dsisr
.macro INT_COMMON vec, area, stack, kuap, reconcile, dar, dsisr
.if \stack
andi. r10,r12,MSR_PR /* See if coming from user */
mr r10,r1 /* Save r1 */
@ -533,7 +533,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
std r10,GPR1(r1) /* save r1 in stackframe */
.if \stack
.if \kaup
.if \kuap
kuap_save_amr_and_lock r9, r10, cr1, cr0
.endif
beq 101f /* if from kernel mode */
@ -541,7 +541,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
SAVE_PPR(\area, r9)
101:
.else
.if \kaup
.if \kuap
kuap_save_amr_and_lock r9, r10, cr1
.endif
.endif

View File

@ -1466,16 +1466,15 @@ static void fadump_init_files(void)
*/
int __init setup_fadump(void)
{
if (!fw_dump.fadump_enabled)
if (!fw_dump.fadump_supported)
return 0;
if (!fw_dump.fadump_supported) {
printk(KERN_ERR "Firmware-assisted dump is not supported on"
" this hardware\n");
return 0;
}
fadump_init_files();
fadump_show_config();
if (!fw_dump.fadump_enabled)
return 1;
/*
* If dump data is available then see if it is valid and prepare for
* saving it to the disk.
@ -1492,8 +1491,6 @@ int __init setup_fadump(void)
else if (fw_dump.reserve_dump_area_size)
fw_dump.ops->fadump_init_mem_struct(&fw_dump);
fadump_init_files();
return 1;
}
subsys_initcall(setup_fadump);

View File

@ -153,35 +153,24 @@ skpinv: addi r6,r6,1 /* Increment */
tlbivax 0,r9
TLBSYNC
/*
* The mapping only needs to be cache-coherent on SMP, except on
* Freescale e500mc derivatives where it's also needed for coherent DMA.
*/
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define M_IF_NEEDED MAS2_M
#else
#define M_IF_NEEDED 0
#endif
#if defined(ENTRY_MAPPING_BOOT_SETUP)
/* 6. Setup KERNELBASE mapping in TLB1[0] */
/* 6. Setup kernstart_virt_addr mapping in TLB1[0] */
lis r6,0x1000 /* Set MAS0(TLBSEL) = TLB1(1), ESEL = 0 */
mtspr SPRN_MAS0,r6
lis r6,(MAS1_VALID|MAS1_IPROT)@h
ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
mtspr SPRN_MAS1,r6
lis r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@h
ori r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@l
lis r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
ori r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
and r6,r6,r20
ori r6,r6,MAS2_M_IF_NEEDED@l
mtspr SPRN_MAS2,r6
mtspr SPRN_MAS3,r8
tlbwe
/* 7. Jump to KERNELBASE mapping */
lis r6,(KERNELBASE & ~0xfff)@h
ori r6,r6,(KERNELBASE & ~0xfff)@l
rlwinm r7,r25,0,0x03ffffff
add r6,r7,r6
/* 7. Jump to kernstart_virt_addr mapping */
mr r6,r20
#elif defined(ENTRY_MAPPING_KEXEC_SETUP)
/*

View File

@ -155,6 +155,8 @@ _ENTRY(_start);
*/
_ENTRY(__early_start)
LOAD_REG_ADDR_PIC(r20, kernstart_virt_addr)
lwz r20,0(r20)
#define ENTRY_MAPPING_BOOT_SETUP
#include "fsl_booke_entry_mapping.S"
@ -277,8 +279,8 @@ set_ivor:
ori r6, r6, swapper_pg_dir@l
lis r5, abatron_pteptrs@h
ori r5, r5, abatron_pteptrs@l
lis r4, KERNELBASE@h
ori r4, r4, KERNELBASE@l
lis r3, kernstart_virt_addr@ha
lwz r4, kernstart_virt_addr@l(r3)
stw r5, 0(r4) /* Save abatron_pteptrs at a fixed location */
stw r6, 0(r5)
@ -1067,7 +1069,12 @@ __secondary_start:
mr r5,r25 /* phys kernel start */
rlwinm r5,r5,0,~0x3ffffff /* aligned 64M */
subf r4,r5,r4 /* memstart_addr - phys kernel start */
li r5,0 /* no device tree */
lis r7,KERNELBASE@h
ori r7,r7,KERNELBASE@l
cmpw r20,r7 /* if kernstart_virt_addr != KERNELBASE, randomized */
beq 2f
li r4,0
2: li r5,0 /* no device tree */
li r6,0 /* not boot cpu */
bl restore_to_as0
@ -1114,6 +1121,54 @@ __secondary_hold_acknowledge:
.long -1
#endif
/*
* Create a 64M tlb by address and entry
* r3 - entry
* r4 - virtual address
* r5/r6 - physical address
*/
_GLOBAL(create_kaslr_tlb_entry)
lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
rlwimi r7,r3,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r6) */
mtspr SPRN_MAS0,r7 /* Write MAS0 */
lis r3,(MAS1_VALID|MAS1_IPROT)@h
ori r3,r3,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
mtspr SPRN_MAS1,r3 /* Write MAS1 */
lis r3,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
ori r3,r3,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
and r3,r3,r4
ori r3,r3,MAS2_M_IF_NEEDED@l
mtspr SPRN_MAS2,r3 /* Write MAS2(EPN) */
#ifdef CONFIG_PHYS_64BIT
ori r8,r6,(MAS3_SW|MAS3_SR|MAS3_SX)
mtspr SPRN_MAS3,r8 /* Write MAS3(RPN) */
mtspr SPRN_MAS7,r5
#else
ori r8,r5,(MAS3_SW|MAS3_SR|MAS3_SX)
mtspr SPRN_MAS3,r8 /* Write MAS3(RPN) */
#endif
tlbwe /* Write TLB */
isync
sync
blr
/*
* Return to the start of the relocated kernel and run again
* r3 - virtual address of fdt
* r4 - entry of the kernel
*/
_GLOBAL(reloc_kernel_entry)
mfmsr r7
rlwinm r7, r7, 0, ~(MSR_IS | MSR_DS)
mtspr SPRN_SRR0,r4
mtspr SPRN_SRR1,r7
rfi
/*
* Create a tlb entry with the same effective and physical address as
* the tlb entry used by the current running code. But set the TS to 1.

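A hedged sketch of how the new KASLR helpers might be driven from C. The prototypes are inferred from the register comments above (r3 = entry, r4 = virtual address, r5/r6 = physical address for create_kaslr_tlb_entry; r3 = fdt, r4 = kernel entry for reloc_kernel_entry); the actual caller lives in the BookE KASLR setup code, which is not part of this hunk.

#include <linux/types.h>
#include <linux/init.h>

/* Prototypes inferred from the assembly comments; not copied from a header. */
void create_kaslr_tlb_entry(int entry, unsigned long virt, phys_addr_t phys);
void reloc_kernel_entry(void *fdt, unsigned long addr);

static void __init jump_to_randomized_kernel(void *dt_ptr, unsigned long kva,
					     phys_addr_t pa)
{
	/* Map a 64M TLB entry covering the randomized virtual base... */
	create_kaslr_tlb_entry(1, kva, pa);
	/* ...then restart the kernel at its new virtual address. */
	reloc_kernel_entry(dt_ptr, kva);
}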
View File

@ -126,6 +126,49 @@ int arch_bp_generic_fields(int type, int *gen_bp_type)
return 0;
}
/*
* Watchpoint match range is always doubleword (8 bytes) aligned on
* powerpc. If the given range crosses a doubleword boundary, we
* need to increase the length so that the next doubleword is also
* covered. For example,
*
* address len = 6 bytes
* |=========.
* |------------v--|------v--------|
* | | | | | | | | | | | | | | | | |
* |---------------|---------------|
* <---8 bytes--->
*
* In this case, we should configure hw as:
* start_addr = address & ~HW_BREAKPOINT_ALIGN
* len = 16 bytes
*
* @start_addr and @end_addr are inclusive.
*/
static int hw_breakpoint_validate_len(struct arch_hw_breakpoint *hw)
{
u16 max_len = DABR_MAX_LEN;
u16 hw_len;
unsigned long start_addr, end_addr;
start_addr = hw->address & ~HW_BREAKPOINT_ALIGN;
end_addr = (hw->address + hw->len - 1) | HW_BREAKPOINT_ALIGN;
hw_len = end_addr - start_addr + 1;
if (dawr_enabled()) {
max_len = DAWR_MAX_LEN;
/* DAWR region can't cross 512 bytes boundary */
if ((start_addr >> 9) != (end_addr >> 9))
return -EINVAL;
}
if (hw_len > max_len)
return -EINVAL;
hw->hw_len = hw_len;
return 0;
}
/*
* Validate the arch-specific HW Breakpoint register settings
*/
@ -133,9 +176,9 @@ int hw_breakpoint_arch_parse(struct perf_event *bp,
const struct perf_event_attr *attr,
struct arch_hw_breakpoint *hw)
{
int ret = -EINVAL, length_max;
int ret = -EINVAL;
if (!bp)
if (!bp || !attr->bp_len)
return ret;
hw->type = HW_BRK_TYPE_TRANSLATE;
@ -155,26 +198,10 @@ int hw_breakpoint_arch_parse(struct perf_event *bp,
hw->address = attr->bp_addr;
hw->len = attr->bp_len;
/*
* Since breakpoint length can be a maximum of HW_BREAKPOINT_LEN(8)
* and breakpoint addresses are aligned to nearest double-word
* HW_BREAKPOINT_ALIGN by rounding off to the lower address, the
* 'symbolsize' should satisfy the check below.
*/
if (!ppc_breakpoint_available())
return -ENODEV;
length_max = 8; /* DABR */
if (dawr_enabled()) {
length_max = 512 ; /* 64 doublewords */
/* DAWR region can't cross 512 boundary */
if ((attr->bp_addr >> 9) !=
((attr->bp_addr + attr->bp_len - 1) >> 9))
return -EINVAL;
}
if (hw->len >
(length_max - (hw->address & HW_BREAKPOINT_ALIGN)))
return -EINVAL;
return 0;
return hw_breakpoint_validate_len(hw);
}
/*
@ -195,33 +222,49 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
tsk->thread.last_hit_ubp = NULL;
}
static bool is_larx_stcx_instr(struct pt_regs *regs, unsigned int instr)
static bool dar_within_range(unsigned long dar, struct arch_hw_breakpoint *info)
{
int ret, type;
struct instruction_op op;
return ((info->address <= dar) && (dar - info->address < info->len));
}
ret = analyse_instr(&op, regs, instr);
type = GETTYPE(op.type);
return (!ret && (type == LARX || type == STCX));
static bool
dar_range_overlaps(unsigned long dar, int size, struct arch_hw_breakpoint *info)
{
return ((dar <= info->address + info->len - 1) &&
(dar + size - 1 >= info->address));
}
/*
* Handle debug exception notifications.
*/
static bool stepping_handler(struct pt_regs *regs, struct perf_event *bp,
unsigned long addr)
struct arch_hw_breakpoint *info)
{
unsigned int instr = 0;
int ret, type, size;
struct instruction_op op;
unsigned long addr = info->address;
if (__get_user_inatomic(instr, (unsigned int *)regs->nip))
goto fail;
if (is_larx_stcx_instr(regs, instr)) {
ret = analyse_instr(&op, regs, instr);
type = GETTYPE(op.type);
size = GETSIZE(op.type);
if (!ret && (type == LARX || type == STCX)) {
printk_ratelimited("Breakpoint hit on instruction that can't be emulated."
" Breakpoint at 0x%lx will be disabled.\n", addr);
goto disable;
}
/*
* If it's an extraneous event, we still need to emulate/single-
* step the instruction, but we don't generate an event.
*/
if (size && !dar_range_overlaps(regs->dar, size, info))
info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
/* Do not emulate user-space instructions, instead single-step them */
if (user_mode(regs)) {
current->thread.last_hit_ubp = bp;
@ -253,7 +296,6 @@ int hw_breakpoint_handler(struct die_args *args)
struct perf_event *bp;
struct pt_regs *regs = args->regs;
struct arch_hw_breakpoint *info;
unsigned long dar = regs->dar;
/* Disable breakpoints during exception handling */
hw_breakpoint_disable();
@ -285,19 +327,14 @@ int hw_breakpoint_handler(struct die_args *args)
goto out;
}
/*
* Verify if dar lies within the address range occupied by the symbol
* being watched to filter extraneous exceptions. If it doesn't,
* we still need to single-step the instruction, but we don't
* generate an event.
*/
info->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
if (!((bp->attr.bp_addr <= dar) &&
(dar - bp->attr.bp_addr < bp->attr.bp_len)))
info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
if (!IS_ENABLED(CONFIG_PPC_8xx) && !stepping_handler(regs, bp, info->address))
goto out;
if (IS_ENABLED(CONFIG_PPC_8xx)) {
if (!dar_within_range(regs->dar, info))
info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
} else {
if (!stepping_handler(regs, bp, info))
goto out;
}
/*
* As a policy, the callback is invoked in a 'trigger-after-execute'

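The doubleword rounding in hw_breakpoint_validate_len() above is easiest to see with the diagram's own numbers. The snippet below reproduces the same arithmetic in a standalone form; the 0x7 mask stands in for HW_BREAKPOINT_ALIGN (doubleword alignment) and is written out only for illustration:

#include <assert.h>

#define ALIGN_MASK	0x7UL	/* doubleword alignment, as HW_BREAKPOINT_ALIGN */

/* Same rounding as hw_breakpoint_validate_len(): align the start down and
 * the end up to doubleword boundaries, then measure the covered span. */
static unsigned long covered_len(unsigned long address, unsigned long len)
{
	unsigned long start_addr = address & ~ALIGN_MASK;
	unsigned long end_addr = (address + len - 1) | ALIGN_MASK;

	return end_addr - start_addr + 1;
}

int main(void)
{
	/* The diagram's case: a 6-byte watchpoint straddling a doubleword
	 * boundary must be widened to 16 bytes of hardware coverage. */
	assert(covered_len(0x1006, 6) == 16);
	/* A fully-contained 6-byte range still needs only one doubleword. */
	assert(covered_len(0x1000, 6) == 8);
	return 0;
}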
View File

@ -0,0 +1,78 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2019 IBM Corporation
* Author: Nayna Jain
*/
#include <linux/ima.h>
#include <asm/secure_boot.h>
bool arch_ima_get_secureboot(void)
{
return is_ppc_secureboot_enabled();
}
/*
* The "secure_rules" are enabled only on "secureboot" enabled systems.
* These rules verify the file signatures against known good values.
* The "appraise_type=imasig|modsig" option allows the known good signature
* to be stored as an xattr or as an appended signature.
*
* To avoid duplicate signature verification as much as possible, the IMA
* policy rule for module appraisal is added only if CONFIG_MODULE_SIG_FORCE
* is not enabled.
*/
static const char *const secure_rules[] = {
"appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
#ifndef CONFIG_MODULE_SIG_FORCE
"appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
#endif
NULL
};
/*
* The "trusted_rules" are enabled only on "trustedboot" enabled systems.
* These rules add the kexec kernel image and kernel modules file hashes to
* the IMA measurement list.
*/
static const char *const trusted_rules[] = {
"measure func=KEXEC_KERNEL_CHECK",
"measure func=MODULE_CHECK",
NULL
};
/*
* The "secure_and_trusted_rules" contains rules for both the secure boot and
* trusted boot. The "template=ima-modsig" option includes the appended
* signature, when available, in the IMA measurement list.
*/
static const char *const secure_and_trusted_rules[] = {
"measure func=KEXEC_KERNEL_CHECK template=ima-modsig",
"measure func=MODULE_CHECK template=ima-modsig",
"appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
#ifndef CONFIG_MODULE_SIG_FORCE
"appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
#endif
NULL
};
/*
* Returns the relevant IMA arch-specific policies based on the system secure
* boot state.
*/
const char *const *arch_get_ima_policy(void)
{
if (is_ppc_secureboot_enabled()) {
if (IS_ENABLED(CONFIG_MODULE_SIG))
set_module_sig_enforced();
if (is_ppc_trustedboot_enabled())
return secure_and_trusted_rules;
else
return secure_rules;
} else if (is_ppc_trustedboot_enabled()) {
return trusted_rules;
}
return NULL;
}

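Each rule table above is a NULL-terminated array of IMA policy strings, and arch_get_ima_policy() hands back the whole table (or NULL when neither secure boot nor trusted boot is enabled). Purely as an illustration — this is not the IMA core's code — a consumer would walk it like this:

/* Illustrative only: iterate a NULL-terminated rule table such as
 * secure_rules or trusted_rules and feed each string to a parser. */
static void walk_arch_rules(void (*parse_rule)(const char *rule))
{
	const char *const *rules = arch_get_ima_policy();

	if (!rules)
		return;		/* neither secure boot nor trusted boot */

	for (; *rules; rules++)
		parse_rule(*rules);
}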
View File

@ -6,11 +6,6 @@
* Largely rewritten by Cort Dougan (cort@cs.nmt.edu)
* and Paul Mackerras.
*
* kexec bits:
* Copyright (C) 2002-2003 Eric Biederman <ebiederm@xmission.com>
* GameCube/ppc32 port Copyright (C) 2004 Albert Herranz
* PPC44x port. Copyright (C) 2011, IBM Corporation
* Author: Suzuki Poulose <suzuki@in.ibm.com>
*/
#include <linux/sys.h>
@ -25,7 +20,6 @@
#include <asm/thread_info.h>
#include <asm/asm-offsets.h>
#include <asm/processor.h>
#include <asm/kexec.h>
#include <asm/bug.h>
#include <asm/ptrace.h>
#include <asm/export.h>
@ -316,126 +310,6 @@ _GLOBAL(flush_instruction_cache)
EXPORT_SYMBOL(flush_instruction_cache)
#endif /* CONFIG_PPC_8xx */
/*
* Write any modified data cache blocks out to memory
* and invalidate the corresponding instruction cache blocks.
* This is a no-op on the 601.
*
* flush_icache_range(unsigned long start, unsigned long stop)
*/
_GLOBAL(flush_icache_range)
#if defined(CONFIG_PPC_BOOK3S_601) || defined(CONFIG_E200)
PURGE_PREFETCHED_INS
blr /* for 601 and e200, do nothing */
#else
rlwinm r3,r3,0,0,31 - L1_CACHE_SHIFT
subf r4,r3,r4
addi r4,r4,L1_CACHE_BYTES - 1
srwi. r4,r4,L1_CACHE_SHIFT
beqlr
mtctr r4
mr r6,r3
1: dcbst 0,r3
addi r3,r3,L1_CACHE_BYTES
bdnz 1b
sync /* wait for dcbst's to get to ram */
#ifndef CONFIG_44x
mtctr r4
2: icbi 0,r6
addi r6,r6,L1_CACHE_BYTES
bdnz 2b
#else
/* Flash invalidate on 44x because we are passed kmapped addresses and
this doesn't work for userspace pages due to the virtually tagged
icache. Sigh. */
iccci 0, r0
#endif
sync /* additional sync needed on g4 */
isync
blr
#endif
_ASM_NOKPROBE_SYMBOL(flush_icache_range)
EXPORT_SYMBOL(flush_icache_range)
/*
* Flush a particular page from the data cache to RAM.
* Note: this is necessary because the instruction cache does *not*
* snoop from the data cache.
* This is a no-op on the 601 and e200 which have a unified cache.
*
* void __flush_dcache_icache(void *page)
*/
_GLOBAL(__flush_dcache_icache)
#if defined(CONFIG_PPC_BOOK3S_601) || defined(CONFIG_E200)
PURGE_PREFETCHED_INS
blr
#else
rlwinm r3,r3,0,0,31-PAGE_SHIFT /* Get page base address */
li r4,PAGE_SIZE/L1_CACHE_BYTES /* Number of lines in a page */
mtctr r4
mr r6,r3
0: dcbst 0,r3 /* Write line to ram */
addi r3,r3,L1_CACHE_BYTES
bdnz 0b
sync
#ifdef CONFIG_44x
/* We don't flush the icache on 44x. Those have a virtual icache
* and we don't have access to the virtual address here (it's
* not the page vaddr but where it's mapped in user space). The
* flushing of the icache on these is handled elsewhere, when
* a change in the address space occurs, before returning to
* user space
*/
BEGIN_MMU_FTR_SECTION
blr
END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_44x)
#endif /* CONFIG_44x */
mtctr r4
1: icbi 0,r6
addi r6,r6,L1_CACHE_BYTES
bdnz 1b
sync
isync
blr
#endif
#ifndef CONFIG_BOOKE
/*
* Flush a particular page from the data cache to RAM, identified
* by its physical address. We turn off the MMU so we can just use
* the physical address (this may be a highmem page without a kernel
* mapping).
*
* void __flush_dcache_icache_phys(unsigned long physaddr)
*/
_GLOBAL(__flush_dcache_icache_phys)
#if defined(CONFIG_PPC_BOOK3S_601) || defined(CONFIG_E200)
PURGE_PREFETCHED_INS
blr /* for 601 and e200, do nothing */
#else
mfmsr r10
rlwinm r0,r10,0,28,26 /* clear DR */
mtmsr r0
isync
rlwinm r3,r3,0,0,31-PAGE_SHIFT /* Get page base address */
li r4,PAGE_SIZE/L1_CACHE_BYTES /* Number of lines in a page */
mtctr r4
mr r6,r3
0: dcbst 0,r3 /* Write line to ram */
addi r3,r3,L1_CACHE_BYTES
bdnz 0b
sync
mtctr r4
1: icbi 0,r6
addi r6,r6,L1_CACHE_BYTES
bdnz 1b
sync
mtmsr r10 /* restore DR */
isync
blr
#endif
#endif /* CONFIG_BOOKE */
/*
* Copy a whole page. We use the dcbz instruction on the destination
* to reduce memory traffic (it eliminates the unnecessary reads of
@ -614,488 +488,3 @@ _GLOBAL(start_secondary_resume)
*/
_GLOBAL(__main)
blr
#ifdef CONFIG_KEXEC_CORE
/*
* Must be relocatable PIC code callable as a C function.
*/
.globl relocate_new_kernel
relocate_new_kernel:
/* r3 = page_list */
/* r4 = reboot_code_buffer */
/* r5 = start_address */
#ifdef CONFIG_FSL_BOOKE
mr r29, r3
mr r30, r4
mr r31, r5
#define ENTRY_MAPPING_KEXEC_SETUP
#include "fsl_booke_entry_mapping.S"
#undef ENTRY_MAPPING_KEXEC_SETUP
mr r3, r29
mr r4, r30
mr r5, r31
li r0, 0
#elif defined(CONFIG_44x)
/* Save our parameters */
mr r29, r3
mr r30, r4
mr r31, r5
#ifdef CONFIG_PPC_47x
/* Check for 47x cores */
mfspr r3,SPRN_PVR
srwi r3,r3,16
cmplwi cr0,r3,PVR_476FPE@h
beq setup_map_47x
cmplwi cr0,r3,PVR_476@h
beq setup_map_47x
cmplwi cr0,r3,PVR_476_ISS@h
beq setup_map_47x
#endif /* CONFIG_PPC_47x */
/*
* Code for setting up 1:1 mapping for PPC440x for KEXEC
*
* We cannot switch off the MMU on PPC44x.
* So we:
* 1) Invalidate all the mappings except the one we are running from.
* 2) Create a tmp mapping for our code in the other address space(TS) and
* jump to it. Invalidate the entry we started in.
* 3) Create a 1:1 mapping for 0-2GiB in chunks of 256M in original TS.
* 4) Jump to the 1:1 mapping in original TS.
* 5) Invalidate the tmp mapping.
*
* - Based on the kexec support code for FSL BookE
*
*/
/*
* Load the PID with kernel PID (0).
* Also load our MSR_IS and TID to MMUCR for TLB search.
*/
li r3, 0
mtspr SPRN_PID, r3
mfmsr r4
andi. r4,r4,MSR_IS@l
beq wmmucr
oris r3,r3,PPC44x_MMUCR_STS@h
wmmucr:
mtspr SPRN_MMUCR,r3
sync
/*
* Invalidate all the TLB entries except the current entry
* where we are running from
*/
bl 0f /* Find our address */
0: mflr r5 /* Make it accessible */
tlbsx r23,0,r5 /* Find entry we are in */
li r4,0 /* Start at TLB entry 0 */
li r3,0 /* Set PAGEID inval value */
1: cmpw r23,r4 /* Is this our entry? */
beq skip /* If so, skip the inval */
tlbwe r3,r4,PPC44x_TLB_PAGEID /* If not, inval the entry */
skip:
addi r4,r4,1 /* Increment */
cmpwi r4,64 /* Are we done? */
bne 1b /* If not, repeat */
isync
/* Create a temp mapping and jump to it */
andi. r6, r23, 1 /* Find the index to use */
addi r24, r6, 1 /* r24 will contain 1 or 2 */
mfmsr r9 /* get the MSR */
rlwinm r5, r9, 27, 31, 31 /* Extract the MSR[IS] */
xori r7, r5, 1 /* Use the other address space */
/* Read the current mapping entries */
tlbre r3, r23, PPC44x_TLB_PAGEID
tlbre r4, r23, PPC44x_TLB_XLAT
tlbre r5, r23, PPC44x_TLB_ATTRIB
/* Save our current XLAT entry */
mr r25, r4
/* Extract the TLB PageSize */
li r10, 1 /* r10 will hold PageSize */
rlwinm r11, r3, 0, 24, 27 /* bits 24-27 */
/* XXX: As of now we use 256M, 4K pages */
cmpwi r11, PPC44x_TLB_256M
bne tlb_4k
rotlwi r10, r10, 28 /* r10 = 256M */
b write_out
tlb_4k:
cmpwi r11, PPC44x_TLB_4K
bne default
rotlwi r10, r10, 12 /* r10 = 4K */
b write_out
default:
rotlwi r10, r10, 10 /* r10 = 1K */
write_out:
/*
* Write out the tmp 1:1 mapping for this code in other address space
* Fixup EPN = RPN , TS=other address space
*/
insrwi r3, r7, 1, 23 /* Bit 23 is TS for PAGEID field */
/* Write out the tmp mapping entries */
tlbwe r3, r24, PPC44x_TLB_PAGEID
tlbwe r4, r24, PPC44x_TLB_XLAT
tlbwe r5, r24, PPC44x_TLB_ATTRIB
subi r11, r10, 1 /* PageOffset Mask = PageSize - 1 */
not r10, r11 /* Mask for PageNum */
/* Switch to other address space in MSR */
insrwi r9, r7, 1, 26 /* Set MSR[IS] = r7 */
bl 1f
1: mflr r8
addi r8, r8, (2f-1b) /* Find the target offset */
/* Jump to the tmp mapping */
mtspr SPRN_SRR0, r8
mtspr SPRN_SRR1, r9
rfi
2:
/* Invalidate the entry we were executing from */
li r3, 0
tlbwe r3, r23, PPC44x_TLB_PAGEID
/* attribute fields. rwx for SUPERVISOR mode */
li r5, 0
ori r5, r5, (PPC44x_TLB_SW | PPC44x_TLB_SR | PPC44x_TLB_SX | PPC44x_TLB_G)
/* Create 1:1 mapping in 256M pages */
xori r7, r7, 1 /* Revert back to Original TS */
li r8, 0 /* PageNumber */
li r6, 3 /* TLB Index, start at 3 */
next_tlb:
rotlwi r3, r8, 28 /* Create EPN (bits 0-3) */
mr r4, r3 /* RPN = EPN */
ori r3, r3, (PPC44x_TLB_VALID | PPC44x_TLB_256M) /* SIZE = 256M, Valid */
insrwi r3, r7, 1, 23 /* Set TS from r7 */
tlbwe r3, r6, PPC44x_TLB_PAGEID /* PageID field : EPN, V, SIZE */
tlbwe r4, r6, PPC44x_TLB_XLAT /* Address translation : RPN */
tlbwe r5, r6, PPC44x_TLB_ATTRIB /* Attributes */
addi r8, r8, 1 /* Increment PN */
addi r6, r6, 1 /* Increment TLB Index */
cmpwi r8, 8 /* Are we done ? */
bne next_tlb
isync
/* Jump to the new mapping 1:1 */
li r9,0
insrwi r9, r7, 1, 26 /* Set MSR[IS] = r7 */
bl 1f
1: mflr r8
and r8, r8, r11 /* Get our offset within page */
addi r8, r8, (2f-1b)
and r5, r25, r10 /* Get our target PageNum */
or r8, r8, r5 /* Target jump address */
mtspr SPRN_SRR0, r8
mtspr SPRN_SRR1, r9
rfi
2:
/* Invalidate the tmp entry we used */
li r3, 0
tlbwe r3, r24, PPC44x_TLB_PAGEID
sync
b ppc44x_map_done
#ifdef CONFIG_PPC_47x
/* 1:1 mapping for 47x */
setup_map_47x:
/*
* Load the kernel pid (0) to PID and also to MMUCR[TID].
* Also set the MSR IS->MMUCR STS
*/
li r3, 0
mtspr SPRN_PID, r3 /* Set PID */
mfmsr r4 /* Get MSR */
andi. r4, r4, MSR_IS@l /* TS=1? */
beq 1f /* If not, leave STS=0 */
oris r3, r3, PPC47x_MMUCR_STS@h /* Set STS=1 */
1: mtspr SPRN_MMUCR, r3 /* Put MMUCR */
sync
/* Find the entry we are running from */
bl 2f
2: mflr r23
tlbsx r23, 0, r23
tlbre r24, r23, 0 /* TLB Word 0 */
tlbre r25, r23, 1 /* TLB Word 1 */
tlbre r26, r23, 2 /* TLB Word 2 */
/*
* Invalidates all the tlb entries by writing to 256 RPNs(r4)
* of 4k page size in all 4 ways (0-3 in r3).
* This would invalidate the entire UTLB including the one we are
* running from. However the shadow TLB entries would help us
* to continue the execution, until we flush them (rfi/isync).
*/
addis r3, 0, 0x8000 /* specify the way */
addi r4, 0, 0 /* TLB Word0 = (EPN=0, VALID = 0) */
addi r5, 0, 0
b clear_utlb_entry
/* Align the loop to speed things up. from head_44x.S */
.align 6
clear_utlb_entry:
tlbwe r4, r3, 0
tlbwe r5, r3, 1
tlbwe r5, r3, 2
addis r3, r3, 0x2000 /* Increment the way */
cmpwi r3, 0
bne clear_utlb_entry
addis r3, 0, 0x8000
addis r4, r4, 0x100 /* Increment the EPN */
cmpwi r4, 0
bne clear_utlb_entry
/* Create the entries in the other address space */
mfmsr r5
rlwinm r7, r5, 27, 31, 31 /* Get the TS (Bit 26) from MSR */
xori r7, r7, 1 /* r7 = !TS */
insrwi r24, r7, 1, 21 /* Change the TS in the saved TLB word 0 */
/*
* write out the TLB entries for the tmp mapping
* Use way '0' so that we could easily invalidate it later.
*/
lis r3, 0x8000 /* Way '0' */
tlbwe r24, r3, 0
tlbwe r25, r3, 1
tlbwe r26, r3, 2
/* Update the msr to the new TS */
insrwi r5, r7, 1, 26
bl 1f
1: mflr r6
addi r6, r6, (2f-1b)
mtspr SPRN_SRR0, r6
mtspr SPRN_SRR1, r5
rfi
/*
* Now we are in the tmp address space.
* Create a 1:1 mapping for 0-2GiB in the original TS.
*/
2:
li r3, 0
li r4, 0 /* TLB Word 0 */
li r5, 0 /* TLB Word 1 */
li r6, 0
ori r6, r6, PPC47x_TLB2_S_RWX /* TLB word 2 */
li r8, 0 /* PageIndex */
xori r7, r7, 1 /* revert back to original TS */
write_utlb:
rotlwi r5, r8, 28 /* RPN = PageIndex * 256M */
/* ERPN = 0 as we don't use memory above 2G */
mr r4, r5 /* EPN = RPN */
ori r4, r4, (PPC47x_TLB0_VALID | PPC47x_TLB0_256M)
insrwi r4, r7, 1, 21 /* Insert the TS to Word 0 */
tlbwe r4, r3, 0 /* Write out the entries */
tlbwe r5, r3, 1
tlbwe r6, r3, 2
addi r8, r8, 1
cmpwi r8, 8 /* Have we completed ? */
bne write_utlb
/* make sure we complete the TLB write up */
isync
/*
* Prepare to jump to the 1:1 mapping.
* 1) Extract page size of the tmp mapping
* DSIZ = TLB_Word0[22:27]
* 2) Calculate the physical address of the address
* to jump to.
*/
rlwinm r10, r24, 0, 22, 27
cmpwi r10, PPC47x_TLB0_4K
bne 0f
li r10, 0x1000 /* r10 = 4k */
bl 1f
0:
/* Defaults to 256M */
lis r10, 0x1000
bl 1f
1: mflr r4
addi r4, r4, (2f-1b) /* virtual address of 2f */
subi r11, r10, 1 /* offsetmask = Pagesize - 1 */
not r10, r11 /* Pagemask = ~(offsetmask) */
and r5, r25, r10 /* Physical page */
and r6, r4, r11 /* offset within the current page */
or r5, r5, r6 /* Physical address for 2f */
/* Switch the TS in MSR to the original one */
mfmsr r8
insrwi r8, r7, 1, 26
mtspr SPRN_SRR1, r8
mtspr SPRN_SRR0, r5
rfi
2:
/* Invalidate the tmp mapping */
lis r3, 0x8000 /* Way '0' */
clrrwi r24, r24, 12 /* Clear the valid bit */
tlbwe r24, r3, 0
tlbwe r25, r3, 1
tlbwe r26, r3, 2
/* Make sure we complete the TLB write and flush the shadow TLB */
isync
#endif
ppc44x_map_done:
/* Restore the parameters */
mr r3, r29
mr r4, r30
mr r5, r31
li r0, 0
#else
li r0, 0
/*
* Set Machine Status Register to a known status,
* switch the MMU off and jump to 1: in a single step.
*/
mr r8, r0
ori r8, r8, MSR_RI|MSR_ME
mtspr SPRN_SRR1, r8
addi r8, r4, 1f - relocate_new_kernel
mtspr SPRN_SRR0, r8
sync
rfi
1:
#endif
/* from this point address translation is turned off */
/* and interrupts are disabled */
/* set a new stack at the bottom of our page... */
/* (not really needed now) */
addi r1, r4, KEXEC_CONTROL_PAGE_SIZE - 8 /* for LR Save+Back Chain */
stw r0, 0(r1)
/* Do the copies */
li r6, 0 /* checksum */
mr r0, r3
b 1f
0: /* top, read another word for the indirection page */
lwzu r0, 4(r3)
1:
/* is it a destination page? (r8) */
rlwinm. r7, r0, 0, 31, 31 /* IND_DESTINATION (1<<0) */
beq 2f
rlwinm r8, r0, 0, 0, 19 /* clear kexec flags, page align */
b 0b
2: /* is it an indirection page? (r3) */
rlwinm. r7, r0, 0, 30, 30 /* IND_INDIRECTION (1<<1) */
beq 2f
rlwinm r3, r0, 0, 0, 19 /* clear kexec flags, page align */
subi r3, r3, 4
b 0b
2: /* are we done? */
rlwinm. r7, r0, 0, 29, 29 /* IND_DONE (1<<2) */
beq 2f
b 3f
2: /* is it a source page? (r9) */
rlwinm. r7, r0, 0, 28, 28 /* IND_SOURCE (1<<3) */
beq 0b
rlwinm r9, r0, 0, 0, 19 /* clear kexec flags, page align */
li r7, PAGE_SIZE / 4
mtctr r7
subi r9, r9, 4
subi r8, r8, 4
9:
lwzu r0, 4(r9) /* do the copy */
xor r6, r6, r0
stwu r0, 4(r8)
dcbst 0, r8
sync
icbi 0, r8
bdnz 9b
addi r9, r9, 4
addi r8, r8, 4
b 0b
3:
/* To be certain of avoiding problems with self-modifying code
* execute a serializing instruction here.
*/
isync
sync
mfspr r3, SPRN_PIR /* current core we are running on */
mr r4, r5 /* load physical address of chunk called */
/* jump to the entry point, usually the setup routine */
mtlr r5
blrl
1: b 1b
relocate_new_kernel_end:
.globl relocate_new_kernel_size
relocate_new_kernel_size:
.long relocate_new_kernel_end - relocate_new_kernel
#endif

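The hand-written flush_icache_range() / __flush_dcache_icache() / __flush_dcache_icache_phys() assembly removed above (and the 64-bit equivalents in the next file) is presumably superseded by C implementations elsewhere in this series. Purely as a sketch of the underlying technique — not the kernel's actual replacement, and with an assumed cache-line size — a range flush in C with inline assembly looks like this:

#define L1_LINE_SIZE	32UL	/* assumed for illustration; real code uses the CPU's line size */

static inline void sketch_flush_icache_range(unsigned long start,
					     unsigned long stop)
{
	unsigned long addr;

	/* Write modified data cache lines back to memory... */
	for (addr = start & ~(L1_LINE_SIZE - 1); addr < stop; addr += L1_LINE_SIZE)
		asm volatile("dcbst 0, %0" : : "r" (addr) : "memory");
	asm volatile("sync" : : : "memory");	/* wait for dcbst to reach RAM */

	/* ...then invalidate the corresponding instruction cache lines. */
	for (addr = start & ~(L1_LINE_SIZE - 1); addr < stop; addr += L1_LINE_SIZE)
		asm volatile("icbi 0, %0" : : "r" (addr) : "memory");
	asm volatile("sync" : : : "memory");
	asm volatile("isync" : : : "memory");	/* drop any prefetched instructions */
}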
View File

@ -49,108 +49,6 @@ _GLOBAL(call_do_irq)
mtlr r0
blr
.section ".toc","aw"
PPC64_CACHES:
.tc ppc64_caches[TC],ppc64_caches
.section ".text"
/*
* Write any modified data cache blocks out to memory
* and invalidate the corresponding instruction cache blocks.
*
* flush_icache_range(unsigned long start, unsigned long stop)
*
* flush all bytes from start through stop-1 inclusive
*/
_GLOBAL_TOC(flush_icache_range)
BEGIN_FTR_SECTION
PURGE_PREFETCHED_INS
blr
END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
/*
* Flush the data cache to memory
*
* Different systems have different cache line sizes
* and in some cases i-cache and d-cache line sizes differ from
* each other.
*/
ld r10,PPC64_CACHES@toc(r2)
lwz r7,DCACHEL1BLOCKSIZE(r10)/* Get cache block size */
addi r5,r7,-1
andc r6,r3,r5 /* round low to line bdy */
subf r8,r6,r4 /* compute length */
add r8,r8,r5 /* ensure we get enough */
lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of cache block size */
srw. r8,r8,r9 /* compute line count */
beqlr /* nothing to do? */
mtctr r8
1: dcbst 0,r6
add r6,r6,r7
bdnz 1b
sync
/* Now invalidate the instruction cache */
lwz r7,ICACHEL1BLOCKSIZE(r10) /* Get Icache block size */
addi r5,r7,-1
andc r6,r3,r5 /* round low to line bdy */
subf r8,r6,r4 /* compute length */
add r8,r8,r5
lwz r9,ICACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of Icache block size */
srw. r8,r8,r9 /* compute line count */
beqlr /* nothing to do? */
mtctr r8
2: icbi 0,r6
add r6,r6,r7
bdnz 2b
isync
blr
_ASM_NOKPROBE_SYMBOL(flush_icache_range)
EXPORT_SYMBOL(flush_icache_range)
/*
* Flush a particular page from the data cache to RAM.
* Note: this is necessary because the instruction cache does *not*
* snoop from the data cache.
*
* void __flush_dcache_icache(void *page)
*/
_GLOBAL(__flush_dcache_icache)
/*
* Flush the data cache to memory
*
* Different systems have different cache line sizes
*/
BEGIN_FTR_SECTION
PURGE_PREFETCHED_INS
blr
END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
/* Flush the dcache */
ld r7,PPC64_CACHES@toc(r2)
clrrdi r3,r3,PAGE_SHIFT /* Page align */
lwz r4,DCACHEL1BLOCKSPERPAGE(r7) /* Get # dcache blocks per page */
lwz r5,DCACHEL1BLOCKSIZE(r7) /* Get dcache block size */
mr r6,r3
mtctr r4
0: dcbst 0,r6
add r6,r6,r5
bdnz 0b
sync
/* Now invalidate the icache */
lwz r4,ICACHEL1BLOCKSPERPAGE(r7) /* Get # icache blocks per page */
lwz r5,ICACHEL1BLOCKSIZE(r7) /* Get icache block size */
mtctr r4
1: icbi 0,r3
add r3,r3,r5
bdnz 1b
isync
blr
_GLOBAL(__bswapdi2)
EXPORT_SYMBOL(__bswapdi2)
srdi r8,r3,32
@ -432,18 +330,13 @@ kexec_create_tlb:
rlwimi r9,r10,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r9) */
/* Set up a temp identity mapping v:0 to p:0 and return to it. */
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define M_IF_NEEDED MAS2_M
#else
#define M_IF_NEEDED 0
#endif
mtspr SPRN_MAS0,r9
lis r9,(MAS1_VALID|MAS1_IPROT)@h
ori r9,r9,(MAS1_TSIZE(BOOK3E_PAGESZ_1GB))@l
mtspr SPRN_MAS1,r9
LOAD_REG_IMMEDIATE(r9, 0x0 | M_IF_NEEDED)
LOAD_REG_IMMEDIATE(r9, 0x0 | MAS2_M_IF_NEEDED)
mtspr SPRN_MAS2,r9
LOAD_REG_IMMEDIATE(r9, 0x0 | MAS3_SR | MAS3_SW | MAS3_SX)

View File

@ -715,6 +715,8 @@ static void set_debug_reg_defaults(struct thread_struct *thread)
{
thread->hw_brk.address = 0;
thread->hw_brk.type = 0;
thread->hw_brk.len = 0;
thread->hw_brk.hw_len = 0;
if (ppc_breakpoint_available())
set_breakpoint(&thread->hw_brk);
}
@ -816,6 +818,7 @@ static inline bool hw_brk_match(struct arch_hw_breakpoint *a,
return false;
if (a->len != b->len)
return false;
/* no need to check hw_len. it's calculated from address and len */
return true;
}

View File

@ -303,16 +303,24 @@ static char __init *prom_strstr(const char *s1, const char *s2)
return NULL;
}
static size_t __init prom_strlcpy(char *dest, const char *src, size_t size)
static size_t __init prom_strlcat(char *dest, const char *src, size_t count)
{
size_t ret = prom_strlen(src);
size_t dsize = prom_strlen(dest);
size_t len = prom_strlen(src);
size_t res = dsize + len;
/* This would be a bug */
if (dsize >= count)
return count;
dest += dsize;
count -= dsize;
if (len >= count)
len = count-1;
memcpy(dest, src, len);
dest[len] = 0;
return res;
if (size) {
size_t len = (ret >= size) ? size - 1 : ret;
memcpy(dest, src, len);
dest[len] = '\0';
}
return ret;
}
#ifdef CONFIG_PPC_PSERIES
@ -764,10 +772,14 @@ static void __init early_cmdline_parse(void)
prom_cmd_line[0] = 0;
p = prom_cmd_line;
if ((long)prom.chosen > 0)
if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && (long)prom.chosen > 0)
l = prom_getprop(prom.chosen, "bootargs", p, COMMAND_LINE_SIZE-1);
if (IS_ENABLED(CONFIG_CMDLINE_BOOL) && (l <= 0 || p[0] == '\0')) /* dbl check */
prom_strlcpy(prom_cmd_line, CONFIG_CMDLINE, sizeof(prom_cmd_line));
if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) || l <= 0 || p[0] == '\0')
prom_strlcat(prom_cmd_line, " " CONFIG_CMDLINE,
sizeof(prom_cmd_line));
prom_printf("command line: %s\n", prom_cmd_line);
#ifdef CONFIG_PPC64
@ -1053,7 +1065,7 @@ static const struct ibm_arch_vec ibm_architecture_vec_template __initconst = {
.reserved2 = 0,
.reserved3 = 0,
.subprocessors = 1,
.byte22 = OV5_FEAT(OV5_DRMEM_V2),
.byte22 = OV5_FEAT(OV5_DRMEM_V2) | OV5_FEAT(OV5_DRC_INFO),
.intarch = 0,
.mmu = 0,
.hash_ext = 0,

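prom_strlcat() above follows the usual strlcat() contract: append src to dest within a fixed-size buffer, always NUL-terminate, and return the length the combined string would have had without truncation. A small illustration with a hypothetical buffer — this is the semantics, not code from the file:

static void __init cmdline_append_example(void)
{
	char cmd[16] = "root=/dev/sda";	/* 13 characters already present */
	size_t would_be;

	/*
	 * Appends as much of " quiet" as fits: the buffer ends up holding
	 * "root=/dev/sda q" plus the terminating NUL. The return value,
	 * 13 + 6 = 19, is >= sizeof(cmd), which tells the caller the
	 * result was truncated.
	 */
	would_be = prom_strlcat(cmd, " quiet", sizeof(cmd));
	(void)would_be;
}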
View File

@ -2425,7 +2425,8 @@ static int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
return -EIO;
hw_brk.address = data & (~HW_BRK_TYPE_DABR);
hw_brk.type = (data & HW_BRK_TYPE_DABR) | HW_BRK_TYPE_PRIV_ALL;
hw_brk.len = 8;
hw_brk.len = DABR_MAX_LEN;
hw_brk.hw_len = DABR_MAX_LEN;
set_bp = (data) && (hw_brk.type & HW_BRK_TYPE_RDWR);
#ifdef CONFIG_HAVE_HW_BREAKPOINT
bp = thread->ptrace_bps[0];
@ -2439,6 +2440,7 @@ static int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
if (bp) {
attr = bp->attr;
attr.bp_addr = hw_brk.address;
attr.bp_len = DABR_MAX_LEN;
arch_bp_generic_fields(hw_brk.type, &attr.bp_type);
/* Enable breakpoint */
@ -2456,7 +2458,7 @@ static int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
/* Create a new breakpoint request if one doesn't exist already */
hw_breakpoint_init(&attr);
attr.bp_addr = hw_brk.address;
attr.bp_len = 8;
attr.bp_len = DABR_MAX_LEN;
arch_bp_generic_fields(hw_brk.type,
&attr.bp_type);
@ -2880,18 +2882,14 @@ static long ppc_set_hwdebug(struct task_struct *child,
if ((unsigned long)bp_info->addr >= TASK_SIZE)
return -EIO;
brk.address = bp_info->addr & ~7UL;
brk.address = bp_info->addr & ~HW_BREAKPOINT_ALIGN;
brk.type = HW_BRK_TYPE_TRANSLATE;
brk.len = 8;
brk.len = DABR_MAX_LEN;
if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_READ)
brk.type |= HW_BRK_TYPE_READ;
if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_WRITE)
brk.type |= HW_BRK_TYPE_WRITE;
#ifdef CONFIG_HAVE_HW_BREAKPOINT
/*
* Check if the request is for 'range' breakpoints. We can
* support it if range < 8 bytes.
*/
if (bp_info->addr_mode == PPC_BREAKPOINT_MODE_RANGE_INCLUSIVE)
len = bp_info->addr2 - bp_info->addr;
else if (bp_info->addr_mode == PPC_BREAKPOINT_MODE_EXACT)
@ -2904,7 +2902,7 @@ static long ppc_set_hwdebug(struct task_struct *child,
/* Create a new breakpoint request if one doesn't exist already */
hw_breakpoint_init(&attr);
attr.bp_addr = (unsigned long)bp_info->addr & ~HW_BREAKPOINT_ALIGN;
attr.bp_addr = (unsigned long)bp_info->addr;
attr.bp_len = len;
arch_bp_generic_fields(brk.type, &attr.bp_type);
@ -3361,6 +3359,12 @@ void do_syscall_trace_leave(struct pt_regs *regs)
user_enter();
}
void __init pt_regs_check(void);
/*
* Dummy function, its purpose is to break the build if struct pt_regs and
* struct user_pt_regs don't match.
*/
void __init pt_regs_check(void)
{
BUILD_BUG_ON(offsetof(struct pt_regs, gpr) !=
@ -3398,4 +3402,67 @@ void __init pt_regs_check(void)
offsetof(struct user_pt_regs, result));
BUILD_BUG_ON(sizeof(struct user_pt_regs) > sizeof(struct pt_regs));
// Now check that the pt_regs offsets match the uapi #defines
#define CHECK_REG(_pt, _reg) \
BUILD_BUG_ON(_pt != (offsetof(struct user_pt_regs, _reg) / \
sizeof(unsigned long)));
CHECK_REG(PT_R0, gpr[0]);
CHECK_REG(PT_R1, gpr[1]);
CHECK_REG(PT_R2, gpr[2]);
CHECK_REG(PT_R3, gpr[3]);
CHECK_REG(PT_R4, gpr[4]);
CHECK_REG(PT_R5, gpr[5]);
CHECK_REG(PT_R6, gpr[6]);
CHECK_REG(PT_R7, gpr[7]);
CHECK_REG(PT_R8, gpr[8]);
CHECK_REG(PT_R9, gpr[9]);
CHECK_REG(PT_R10, gpr[10]);
CHECK_REG(PT_R11, gpr[11]);
CHECK_REG(PT_R12, gpr[12]);
CHECK_REG(PT_R13, gpr[13]);
CHECK_REG(PT_R14, gpr[14]);
CHECK_REG(PT_R15, gpr[15]);
CHECK_REG(PT_R16, gpr[16]);
CHECK_REG(PT_R17, gpr[17]);
CHECK_REG(PT_R18, gpr[18]);
CHECK_REG(PT_R19, gpr[19]);
CHECK_REG(PT_R20, gpr[20]);
CHECK_REG(PT_R21, gpr[21]);
CHECK_REG(PT_R22, gpr[22]);
CHECK_REG(PT_R23, gpr[23]);
CHECK_REG(PT_R24, gpr[24]);
CHECK_REG(PT_R25, gpr[25]);
CHECK_REG(PT_R26, gpr[26]);
CHECK_REG(PT_R27, gpr[27]);
CHECK_REG(PT_R28, gpr[28]);
CHECK_REG(PT_R29, gpr[29]);
CHECK_REG(PT_R30, gpr[30]);
CHECK_REG(PT_R31, gpr[31]);
CHECK_REG(PT_NIP, nip);
CHECK_REG(PT_MSR, msr);
CHECK_REG(PT_ORIG_R3, orig_gpr3);
CHECK_REG(PT_CTR, ctr);
CHECK_REG(PT_LNK, link);
CHECK_REG(PT_XER, xer);
CHECK_REG(PT_CCR, ccr);
#ifdef CONFIG_PPC64
CHECK_REG(PT_SOFTE, softe);
#else
CHECK_REG(PT_MQ, mq);
#endif
CHECK_REG(PT_TRAP, trap);
CHECK_REG(PT_DAR, dar);
CHECK_REG(PT_DSISR, dsisr);
CHECK_REG(PT_RESULT, result);
#undef CHECK_REG
BUILD_BUG_ON(PT_REGS_COUNT != sizeof(struct user_pt_regs) / sizeof(unsigned long));
/*
* PT_DSCR isn't a real reg, but it's important that it doesn't overlap the
* real registers.
*/
BUILD_BUG_ON(PT_DSCR < sizeof(struct user_pt_regs) / sizeof(unsigned long));
}

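As a concrete illustration of the macro above, CHECK_REG(PT_R3, gpr[3]) expands inside pt_regs_check() to the compile-time check below, which is why any drift between the uapi PT_* indices and struct user_pt_regs breaks the build:

/* Expansion of CHECK_REG(PT_R3, gpr[3]) from pt_regs_check() above. */
BUILD_BUG_ON(PT_R3 != (offsetof(struct user_pt_regs, gpr[3]) /
		       sizeof(unsigned long)));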
Some files were not shown because too many files have changed in this diff.