Merge 5.8-rc2 into usb-linus

Felipe has based his patches on that tag, so update my usb-linus branch
to it as well so that I can pull his patches in here more easily.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman 2020-06-26 16:51:14 +02:00
commit 603ea288dc
453 changed files with 3946 additions and 2618 deletions


@ -0,0 +1,27 @@
What: /sys/bus/nd/devices/nmemX/papr/flags
Date: Apr, 2020
KernelVersion: v5.8
Contact: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, linux-nvdimm@lists.01.org,
Description:
(RO) Report flags indicating various states of a
papr-pmem NVDIMM device. Each flag maps to one or
more bits set in the dimm-health-bitmap retrieved in
response to the H_SCM_HEALTH hcall. The details of the bit
flags returned in response to this hcall are available
at 'Documentation/powerpc/papr_hcalls.rst'. Below are
the flags reported in this sysfs file:
* "not_armed" : Indicates that NVDIMM contents will not
survive a power cycle.
* "flush_fail" : Indicates that NVDIMM contents
couldn't be flushed during last
shut-down event.
* "restore_fail": Indicates that NVDIMM contents
couldn't be restored during NVDIMM
initialization.
* "encrypted" : NVDIMM contents are encrypted.
* "smart_notify": There is a health event for the NVDIMM.
* "scrubbed" : Indicates that contents of the
NVDIMM have been scrubbed.
* "locked" : Indicates that NVDIMM contents can't
be modified until the next power cycle.
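Illustrative sketch (not part of the ABI text above): a minimal user-space reader for this attribute. The device name nmem0 is an assumption; real systems enumerate nmemX devices under /sys/bus/nd/devices/.

/* Dump the papr health flags of one NVDIMM (hypothetical device nmem0). */
#include <stdio.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/sys/bus/nd/devices/nmem0/papr/flags", "r");

	if (!f) {
		perror("open papr/flags");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("papr flags: %s", line);	/* e.g. "flush_fail encrypted" */
	fclose(f);
	return 0;
}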


@ -186,7 +186,7 @@ prctl(PR_SVE_SET_VL, unsigned long arg)
flags:
PR_SVE_SET_VL_INHERIT
PR_SVE_VL_INHERIT
Inherit the current vector length across execve(). Otherwise, the
vector length is reset to the system default at execve(). (See
@ -247,7 +247,7 @@ prctl(PR_SVE_GET_VL)
The following flag may be OR-ed into the result:
PR_SVE_SET_VL_INHERIT
PR_SVE_VL_INHERIT
Vector length will be inherited across execve().
@ -393,7 +393,7 @@ The regset data starts with struct user_sve_header, containing:
* At every execve() call, the new vector length of the new process is set to
the system default vector length, unless
* PR_SVE_SET_VL_INHERIT (or equivalently SVE_PT_VL_INHERIT) is set for the
* PR_SVE_VL_INHERIT (or equivalently SVE_PT_VL_INHERIT) is set for the
calling thread, or
* a deferred vector length change is pending, established via the
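Illustrative sketch (not part of the document above): how a thread might use the renamed flag with PR_SVE_SET_VL. The requested length of 32 bytes is only an example and is rounded to a supported vector length by the kernel; the PR_SVE_* constants come from <linux/prctl.h> on an SVE-capable arm64 kernel.

/* Set the SVE vector length to (at most) 32 bytes and keep it across
 * execve() via PR_SVE_VL_INHERIT. The return value is encoded like
 * PR_SVE_GET_VL: flags OR-ed with the VL in PR_SVE_VL_LEN_MASK. */
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	int ret = prctl(PR_SVE_SET_VL, 32 | PR_SVE_VL_INHERIT, 0, 0, 0);

	if (ret < 0) {
		perror("PR_SVE_SET_VL");
		return 1;
	}
	printf("vector length now %d bytes\n", ret & PR_SVE_VL_LEN_MASK);
	return 0;
}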


@ -451,7 +451,7 @@ The bridge driver also has some helper functions it can use:
"module_foo", "chipid", 0x36, NULL);
This loads the given module (can be ``NULL`` if no module needs to be loaded)
and calls :c:func:`i2c_new_device` with the given ``i2c_adapter`` and
and calls :c:func:`i2c_new_client_device` with the given ``i2c_adapter`` and
chip/address arguments. If all goes well, then it registers the subdev with
the v4l2_device.


@ -25,7 +25,7 @@ size when creating the filesystem.
Currently 3 filesystems support DAX: ext2, ext4 and xfs. Enabling DAX on them
is different.
Enabling DAX on ext4 and ext2
Enabling DAX on ext2
-----------------------------
When mounting the filesystem, use the "-o dax" option on the command line or
@ -33,8 +33,8 @@ add 'dax' to the options in /etc/fstab. This works to enable DAX on all files
within the filesystem. It is equivalent to the '-o dax=always' behavior below.
Enabling DAX on xfs
-------------------
Enabling DAX on xfs and ext4
----------------------------
Summary
-------


@ -39,3 +39,6 @@ is encrypted as well as the data itself.
Verity files cannot have blocks allocated past the end of the verity
metadata.
Verity and DAX are not compatible and attempts to set both of these flags
on a file will fail.


@ -197,11 +197,14 @@ pp_power_profile_mode
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
:doc: pp_power_profile_mode
busy_percent
~~~~~~~~~~~~
*_busy_percent
~~~~~~~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
:doc: busy_percent
:doc: gpu_busy_percent
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
:doc: mem_busy_percent
GPU Product Information
=======================


@ -57,7 +57,7 @@ SMBus Quick Command
This sends a single bit to the device, at the place of the Rd/Wr bit::
A Addr Rd/Wr [A] P
S Addr Rd/Wr [A] P
Functionality flag: I2C_FUNC_SMBUS_QUICK


@ -220,13 +220,51 @@ from the LPAR memory.
**H_SCM_HEALTH**
| Input: drcIndex
| Out: *health-bitmap, health-bit-valid-bitmap*
| Out: *health-bitmap (r4), health-bit-valid-bitmap (r5)*
| Return Value: *H_Success, H_Parameter, H_Hardware*
Given a DRC Index return the info on predictive failure and overall health of
the NVDIMM. The asserted bits in the health-bitmap indicate a single predictive
failure and health-bit-valid-bitmap indicate which bits in health-bitmap are
valid.
the PMEM device. The asserted bits in the health-bitmap indicate one or more
states (described in the table below) of the PMEM device, and the
health-bit-valid-bitmap indicates which bits in the health-bitmap are valid.
The bits are reported in reverse bit ordering; for example, a value of
0xC400000000000000 indicates that bits 0, 1, and 5 are valid.
Health Bitmap Flags:
+------+-----------------------------------------------------------------------+
| Bit | Definition |
+======+=======================================================================+
| 00 | PMEM device is unable to persist memory contents. |
| | If the system is powered down, nothing will be saved. |
+------+-----------------------------------------------------------------------+
| 01 | PMEM device failed to persist memory contents. Either contents were |
| | not saved successfully on power down or were not restored properly on |
| | power up. |
+------+-----------------------------------------------------------------------+
| 02 | PMEM device contents are persisted from previous IPL. The data from |
| | the last boot were successfully restored. |
+------+-----------------------------------------------------------------------+
| 03 | PMEM device contents are not persisted from previous IPL. There was no|
| | data to restore from the last boot. |
+------+-----------------------------------------------------------------------+
| 04 | PMEM device memory life remaining is critically low |
+------+-----------------------------------------------------------------------+
| 05 | PMEM device will be garded off next IPL due to failure |
+------+-----------------------------------------------------------------------+
| 06 | PMEM device contents cannot persist due to current platform health |
| | status. A hardware failure may prevent data from being saved or |
| | restored. |
+------+-----------------------------------------------------------------------+
| 07 | PMEM device is unable to persist memory contents in certain conditions|
+------+-----------------------------------------------------------------------+
| 08 | PMEM device is encrypted |
+------+-----------------------------------------------------------------------+
| 09 | PMEM device has successfully completed a requested erase or secure |
| | erase procedure. |
+------+-----------------------------------------------------------------------+
|10:63 | Reserved / Unused |
+------+-----------------------------------------------------------------------+
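To make the "reverse bit ordering" above concrete, a small illustrative sketch of the numbering convention (it mirrors the (1ULL << (63 - n)) masks the papr_scm driver defines later in this merge; the PAPR_BIT macro name is made up here):

/* PAPR numbers health-bitmap bits from the MSB: bit n is (1ULL << (63 - n)). */
#include <stdio.h>

#define PAPR_BIT(n)	(1ULL << (63 - (n)))

int main(void)
{
	unsigned long long valid = PAPR_BIT(0) | PAPR_BIT(1) | PAPR_BIT(5);

	/* Prints 0xc400000000000000, the example value quoted above. */
	printf("0x%016llx\n", valid);
	return 0;
}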
**H_SCM_PERFORMANCE_STATS**


@ -16,18 +16,6 @@ Store Queue API
.. kernel-doc:: arch/sh/kernel/cpu/sh4/sq.c
:export:
SH-5
----
TLB Interfaces
~~~~~~~~~~~~~~
.. kernel-doc:: arch/sh/mm/tlb-sh5.c
:internal:
.. kernel-doc:: arch/sh/include/asm/tlb_64.h
:internal:
Machine Specific Interfaces
===========================


@ -27,7 +27,7 @@ nitpick_ignore = [
("c:func", "copy_to_user"),
("c:func", "determine_valid_ioctls"),
("c:func", "ERR_PTR"),
("c:func", "i2c_new_device"),
("c:func", "i2c_new_client_device"),
("c:func", "ioctl"),
("c:func", "IS_ERR"),
("c:func", "KERNEL_VERSION"),


@ -11369,14 +11369,6 @@ L: dmaengine@vger.kernel.org
S: Supported
F: drivers/dma/at_xdmac.c
MICROSEMI ETHERNET SWITCH DRIVER
M: Alexandre Belloni <alexandre.belloni@bootlin.com>
M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/mscc/
F: include/soc/mscc/ocelot*
MICROSEMI MIPS SOCS
M: Alexandre Belloni <alexandre.belloni@bootlin.com>
M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com>
@ -12335,6 +12327,18 @@ M: Peter Zijlstra <peterz@infradead.org>
S: Supported
F: tools/objtool/
OCELOT ETHERNET SWITCH DRIVER
M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com>
M: Vladimir Oltean <vladimir.oltean@nxp.com>
M: Claudiu Manoil <claudiu.manoil@nxp.com>
M: Alexandre Belloni <alexandre.belloni@bootlin.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/dsa/ocelot/*
F: drivers/net/ethernet/mscc/
F: include/soc/mscc/ocelot*
F: net/dsa/tag_ocelot.c
OCXL (Open Coherent Accelerator Processor Interface OpenCAPI) DRIVER
M: Frederic Barrat <fbarrat@linux.ibm.com>
M: Andrew Donnellan <ajd@linux.ibm.com>
@ -14192,6 +14196,15 @@ L: dmaengine@vger.kernel.org
S: Supported
F: drivers/dma/qcom/hidma*
QUALCOMM I2C CCI DRIVER
M: Loic Poulain <loic.poulain@linaro.org>
M: Robert Foss <robert.foss@linaro.org>
L: linux-i2c@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/i2c/i2c-qcom-cci.txt
F: drivers/i2c/busses/i2c-qcom-cci.c
QUALCOMM IOMMU
M: Rob Clark <robdclark@gmail.com>
L: iommu@lists.linux-foundation.org
@ -14534,7 +14547,7 @@ F: Documentation/devicetree/bindings/i2c/renesas,iic-emev2.txt
F: drivers/i2c/busses/i2c-emev2.c
RENESAS ETHERNET DRIVERS
R: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
R: Sergei Shtylyov <sergei.shtylyov@gmail.com>
L: netdev@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
F: Documentation/devicetree/bindings/net/renesas,*.txt
@ -18254,14 +18267,6 @@ S: Maintained
F: drivers/input/serio/userio.c
F: include/uapi/linux/userio.h
VITESSE FELIX ETHERNET SWITCH DRIVER
M: Vladimir Oltean <vladimir.oltean@nxp.com>
M: Claudiu Manoil <claudiu.manoil@nxp.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/dsa/ocelot/*
F: net/dsa/tag_ocelot.c
VIVID VIRTUAL VIDEO DRIVER
M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 8
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Kleptomaniac Octopus
# *DOCUMENTATION*
@ -828,7 +828,7 @@ endif
ifdef CONFIG_DEBUG_INFO_COMPRESSED
DEBUG_CFLAGS += -gz=zlib
KBUILD_AFLAGS += -Wa,--compress-debug-sections=zlib
KBUILD_AFLAGS += -gz=zlib
KBUILD_LDFLAGS += --compress-debug-sections=zlib
endif
@ -1336,16 +1336,6 @@ dt_binding_check: scripts_dtc
# ---------------------------------------------------------------------------
# Modules
# install modules.builtin regardless of CONFIG_MODULES
PHONY += _builtin_inst_
_builtin_inst_:
@mkdir -p $(MODLIB)/
@cp -f modules.builtin $(MODLIB)/
@cp -f $(objtree)/modules.builtin.modinfo $(MODLIB)/
PHONY += install
install: _builtin_inst_
ifdef CONFIG_MODULES
# By default, build modules as well
@ -1389,7 +1379,7 @@ PHONY += modules_install
modules_install: _modinst_ _modinst_post
PHONY += _modinst_
_modinst_: _builtin_inst_
_modinst_:
@rm -rf $(MODLIB)/kernel
@rm -f $(MODLIB)/source
@mkdir -p $(MODLIB)/kernel
@ -1399,6 +1389,8 @@ _modinst_: _builtin_inst_
ln -s $(CURDIR) $(MODLIB)/build ; \
fi
@sed 's:^:kernel/:' modules.order > $(MODLIB)/modules.order
@cp -f modules.builtin $(MODLIB)/
@cp -f $(objtree)/modules.builtin.modinfo $(MODLIB)/
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modinst
# This depmod is only for convenience to give the initial


@ -84,7 +84,8 @@ static int ftrace_modify_code(unsigned long pc, unsigned long old,
old = __opcode_to_mem_arm(old);
if (validate) {
if (probe_kernel_read(&replaced, (void *)pc, MCOUNT_INSN_SIZE))
if (copy_from_kernel_nofault(&replaced, (void *)pc,
MCOUNT_INSN_SIZE))
return -EFAULT;
if (replaced != old)
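Most of the arch hunks in this merge apply the tree-wide probe_kernel_read()/probe_kernel_write() to copy_from_kernel_nofault()/copy_to_kernel_nofault() rename. A minimal hedged sketch of the new helper's calling convention, with a hypothetical caller:

/* copy_from_kernel_nofault() (declared in <linux/uaccess.h>) returns 0 on
 * success or -EFAULT if the kernel address cannot be read safely.
 * peek_insn() is hypothetical, for illustration only. */
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

static int peek_insn(unsigned long pc, u32 *insn)
{
	if (copy_from_kernel_nofault(insn, (void *)pc, sizeof(*insn)))
		return -EFAULT;
	return 0;
}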


@ -236,7 +236,7 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
/* patch_text() only supports int-sized breakpoints */
BUILD_BUG_ON(sizeof(int) != BREAK_INSTR_SIZE);
err = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr,
err = copy_from_kernel_nofault(bpt->saved_instr, (char *)bpt->bpt_addr,
BREAK_INSTR_SIZE);
if (err)
return err;


@ -396,7 +396,7 @@ int is_valid_bugaddr(unsigned long pc)
u32 insn = __opcode_to_mem_arm(BUG_INSTR_VALUE);
#endif
if (probe_kernel_address((unsigned *)pc, bkpt))
if (get_kernel_nofault(bkpt, (void *)pc))
return 0;
return bkpt == insn;


@ -774,7 +774,7 @@ static int alignment_get_arm(struct pt_regs *regs, u32 *ip, u32 *inst)
if (user_mode(regs))
fault = get_user(instr, ip);
else
fault = probe_kernel_address(ip, instr);
fault = get_kernel_nofault(instr, ip);
*inst = __mem_to_opcode_arm(instr);
@ -789,7 +789,7 @@ static int alignment_get_thumb(struct pt_regs *regs, u16 *ip, u16 *inst)
if (user_mode(regs))
fault = get_user(instr, ip);
else
fault = probe_kernel_address(ip, instr);
fault = get_kernel_nofault(instr, ip);
*inst = __mem_to_opcode_thumb16(instr);


@ -1564,7 +1564,7 @@ config CC_HAS_SIGN_RETURN_ADDRESS
def_bool $(cc-option,-msign-return-address=all)
config AS_HAS_PAC
def_bool $(as-option,-Wa$(comma)-march=armv8.3-a)
def_bool $(cc-option,-Wa$(comma)-march=armv8.3-a)
config AS_HAS_CFI_NEGATE_RA_STATE
def_bool $(as-instr,.cfi_startproc\n.cfi_negate_ra_state\n.cfi_endproc\n)
@ -1630,6 +1630,8 @@ config ARM64_BTI_KERNEL
depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
depends on !CC_IS_GCC || GCC_VERSION >= 100100
# https://reviews.llvm.org/rGb8ae3fdfa579dbf366b1bb1cbfdbf8c51db7fa55
depends on !CC_IS_CLANG || CLANG_VERSION >= 100001
depends on !(CC_IS_CLANG && GCOV_KERNEL)
depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
help


@ -8,21 +8,6 @@ config PID_IN_CONTEXTIDR
instructions during context switch. Say Y here only if you are
planning to use hardware trace tools with this kernel.
config ARM64_RANDOMIZE_TEXT_OFFSET
bool "Randomize TEXT_OFFSET at build time"
help
Say Y here if you want the image load offset (AKA TEXT_OFFSET)
of the kernel to be randomized at build-time. When selected,
this option will cause TEXT_OFFSET to be randomized upon any
build of the kernel, and the offset will be reflected in the
text_offset field of the resulting Image. This can be used to
fuzz-test bootloaders which respect text_offset.
This option is intended for bootloader and/or kernel testing
only. Bootloaders must make no assumptions regarding the value
of TEXT_OFFSET and platforms must not require a specific
value.
config DEBUG_EFI
depends on EFI && DEBUG_INFO
bool "UEFI debugging"


@ -121,13 +121,7 @@ endif
head-y := arch/arm64/kernel/head.o
# The byte offset of the kernel image in RAM from the start of RAM.
ifeq ($(CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET), y)
TEXT_OFFSET := $(shell awk "BEGIN {srand(); printf \"0x%06x\n\", \
int(2 * 1024 * 1024 / (2 ^ $(CONFIG_ARM64_PAGE_SHIFT)) * \
rand()) * (2 ^ $(CONFIG_ARM64_PAGE_SHIFT))}")
else
TEXT_OFFSET := 0x0
endif
ifeq ($(CONFIG_KASAN_SW_TAGS), y)
KASAN_SHADOW_SCALE_SHIFT := 4


@ -416,7 +416,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
__pgprot((pgprot_val(prot) & ~(mask)) | (bits))
#define pgprot_nx(prot) \
__pgprot_modify(prot, 0, PTE_PXN)
__pgprot_modify(prot, PTE_MAYBE_GP, PTE_PXN)
/*
* Mark the prot value as uncacheable and unbufferable.


@ -12,6 +12,7 @@
#include <linux/bug.h>
#include <linux/cache.h>
#include <linux/compat.h>
#include <linux/compiler.h>
#include <linux/cpu.h>
#include <linux/cpu_pm.h>
#include <linux/kernel.h>
@ -119,10 +120,20 @@ struct fpsimd_last_state_struct {
static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state);
/* Default VL for tasks that don't set it explicitly: */
static int sve_default_vl = -1;
static int __sve_default_vl = -1;
static int get_sve_default_vl(void)
{
return READ_ONCE(__sve_default_vl);
}
#ifdef CONFIG_ARM64_SVE
static void set_sve_default_vl(int val)
{
WRITE_ONCE(__sve_default_vl, val);
}
/* Maximum supported vector length across all CPUs (initially poisoned) */
int __ro_after_init sve_max_vl = SVE_VL_MIN;
int __ro_after_init sve_max_virtualisable_vl = SVE_VL_MIN;
@ -338,13 +349,13 @@ static unsigned int find_supported_vector_length(unsigned int vl)
return sve_vl_from_vq(__bit_to_vq(bit));
}
#ifdef CONFIG_SYSCTL
#if defined(CONFIG_ARM64_SVE) && defined(CONFIG_SYSCTL)
static int sve_proc_do_default_vl(struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
int ret;
int vl = sve_default_vl;
int vl = get_sve_default_vl();
struct ctl_table tmp_table = {
.data = &vl,
.maxlen = sizeof(vl),
@ -361,7 +372,7 @@ static int sve_proc_do_default_vl(struct ctl_table *table, int write,
if (!sve_vl_valid(vl))
return -EINVAL;
sve_default_vl = find_supported_vector_length(vl);
set_sve_default_vl(find_supported_vector_length(vl));
return 0;
}
@ -383,9 +394,9 @@ static int __init sve_sysctl_init(void)
return 0;
}
#else /* ! CONFIG_SYSCTL */
#else /* ! (CONFIG_ARM64_SVE && CONFIG_SYSCTL) */
static int __init sve_sysctl_init(void) { return 0; }
#endif /* ! CONFIG_SYSCTL */
#endif /* ! (CONFIG_ARM64_SVE && CONFIG_SYSCTL) */
#define ZREG(sve_state, vq, n) ((char *)(sve_state) + \
(SVE_SIG_ZREG_OFFSET(vq, n) - SVE_SIG_REGS_OFFSET))
@ -868,7 +879,7 @@ void __init sve_setup(void)
* For the default VL, pick the maximum supported value <= 64.
* VL == 64 is guaranteed not to grow the signal frame.
*/
sve_default_vl = find_supported_vector_length(64);
set_sve_default_vl(find_supported_vector_length(64));
bitmap_andnot(tmp_map, sve_vq_partial_map, sve_vq_map,
SVE_VQ_MAX);
@ -889,7 +900,7 @@ void __init sve_setup(void)
pr_info("SVE: maximum available vector length %u bytes per vector\n",
sve_max_vl);
pr_info("SVE: default vector length %u bytes per vector\n",
sve_default_vl);
get_sve_default_vl());
/* KVM decides whether to support mismatched systems. Just warn here: */
if (sve_max_virtualisable_vl < sve_max_vl)
@ -1029,13 +1040,13 @@ void fpsimd_flush_thread(void)
* vector length configured: no kernel task can become a user
* task without an exec and hence a call to this function.
* By the time the first call to this function is made, all
* early hardware probing is complete, so sve_default_vl
* early hardware probing is complete, so __sve_default_vl
* should be valid.
* If a bug causes this to go wrong, we make some noise and
* try to fudge thread.sve_vl to a safe value here.
*/
vl = current->thread.sve_vl_onexec ?
current->thread.sve_vl_onexec : sve_default_vl;
current->thread.sve_vl_onexec : get_sve_default_vl();
if (WARN_ON(!sve_vl_valid(vl)))
vl = SVE_VL_MIN;


@ -730,6 +730,27 @@ static u64 get_distance_from_watchpoint(unsigned long addr, u64 val,
return 0;
}
static int watchpoint_report(struct perf_event *wp, unsigned long addr,
struct pt_regs *regs)
{
int step = is_default_overflow_handler(wp);
struct arch_hw_breakpoint *info = counter_arch_bp(wp);
info->trigger = addr;
/*
* If we triggered a user watchpoint from a uaccess routine, then
* handle the stepping ourselves since userspace really can't help
* us with this.
*/
if (!user_mode(regs) && info->ctrl.privilege == AARCH64_BREAKPOINT_EL0)
step = 1;
else
perf_bp_event(wp, regs);
return step;
}
static int watchpoint_handler(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
{
@ -739,7 +760,6 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
u64 val;
struct perf_event *wp, **slots;
struct debug_info *debug_info;
struct arch_hw_breakpoint *info;
struct arch_hw_breakpoint_ctrl ctrl;
slots = this_cpu_ptr(wp_on_reg);
@ -777,25 +797,13 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
if (dist != 0)
continue;
info = counter_arch_bp(wp);
info->trigger = addr;
perf_bp_event(wp, regs);
/* Do we need to handle the stepping? */
if (is_default_overflow_handler(wp))
step = 1;
step = watchpoint_report(wp, addr, regs);
}
if (min_dist > 0 && min_dist != -1) {
/* No exact match found. */
wp = slots[closest_match];
info = counter_arch_bp(wp);
info->trigger = addr;
perf_bp_event(wp, regs);
/* Do we need to handle the stepping? */
if (is_default_overflow_handler(wp))
step = 1;
}
/* No exact match found? */
if (min_dist > 0 && min_dist != -1)
step = watchpoint_report(slots[closest_match], addr, regs);
rcu_read_unlock();
if (!step)


@ -135,7 +135,7 @@ int __kprobes aarch64_insn_read(void *addr, u32 *insnp)
int ret;
__le32 val;
ret = probe_kernel_read(&val, addr, AARCH64_INSN_SIZE);
ret = copy_from_kernel_nofault(&val, addr, AARCH64_INSN_SIZE);
if (!ret)
*insnp = le32_to_cpu(val);
@ -151,7 +151,7 @@ static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
raw_spin_lock_irqsave(&patch_lock, flags);
waddr = patch_map(addr, FIX_TEXT_POKE0);
ret = probe_kernel_write(waddr, &insn, AARCH64_INSN_SIZE);
ret = copy_to_kernel_nofault(waddr, &insn, AARCH64_INSN_SIZE);
patch_unmap(FIX_TEXT_POKE0);
raw_spin_unlock_irqrestore(&patch_lock, flags);


@ -219,8 +219,7 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
MEMBLOCK_NONE, &start, &end, NULL)
nr_ranges++;
cmem = kmalloc(sizeof(struct crash_mem) +
sizeof(struct crash_mem_range) * nr_ranges, GFP_KERNEL);
cmem = kmalloc(struct_size(cmem, ranges, nr_ranges), GFP_KERNEL);
if (!cmem)
return -ENOMEM;


@ -376,7 +376,7 @@ static int call_undef_hook(struct pt_regs *regs)
if (!user_mode(regs)) {
__le32 instr_le;
if (probe_kernel_address((__force __le32 *)pc, instr_le))
if (get_kernel_nofault(instr_le, (__force __le32 *)pc))
goto exit;
instr = le32_to_cpu(instr_le);
} else if (compat_thumb_mode(regs)) {
@ -813,6 +813,7 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
handler[reason], smp_processor_id(), esr,
esr_get_class_string(esr));
__show_regs(regs);
local_daif_mask();
panic("bad mode");
}


@ -404,11 +404,6 @@ void __init arm64_memblock_init(void)
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
dma_contiguous_reserve(arm64_dma32_phys_limit);
#ifdef CONFIG_ARM64_4K_PAGES
hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
#endif
}
void __init bootmem_init(void)
@ -424,6 +419,16 @@ void __init bootmem_init(void)
min_low_pfn = min;
arm64_numa_init();
/*
* must be done after arm64_numa_init() which calls numa_init() to
* initialize node_online_map that gets used in hugetlb_cma_reserve()
* while allocating required CMA size across online nodes.
*/
#ifdef CONFIG_ARM64_4K_PAGES
hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
#endif
/*
* Sparsemem tries to allocate bootmem in memory_present(), so must be
* done after the fixed reservations.


@ -723,6 +723,7 @@ int kern_addr_valid(unsigned long addr)
pmd_t *pmdp, pmd;
pte_t *ptep, pte;
addr = arch_kasan_reset_tag(addr);
if ((((long)addr) >> VA_BITS) != -1UL)
return 0;


@ -72,7 +72,8 @@ static int ftrace_check_current_nop(unsigned long hook)
uint16_t olds[7];
unsigned long hook_pos = hook - 2;
if (probe_kernel_read((void *)olds, (void *)hook_pos, sizeof(nops)))
if (copy_from_kernel_nofault((void *)olds, (void *)hook_pos,
sizeof(nops)))
return -EFAULT;
if (memcmp((void *)nops, (void *)olds, sizeof(nops))) {
@ -97,7 +98,7 @@ static int ftrace_modify_code(unsigned long hook, unsigned long target,
make_jbsr(target, hook, call, nolr);
ret = probe_kernel_write((void *)hook_pos, enable ? call : nops,
ret = copy_to_kernel_nofault((void *)hook_pos, enable ? call : nops,
sizeof(nops));
if (ret)
return -EPERM;


@ -35,7 +35,7 @@ static inline void *dereference_function_descriptor(void *ptr)
struct fdesc *desc = ptr;
void *p;
if (!probe_kernel_address(&desc->ip, p))
if (!get_kernel_nofault(p, (void *)&desc->ip))
ptr = p;
return ptr;
}


@ -108,7 +108,7 @@ ftrace_modify_code(unsigned long ip, unsigned char *old_code,
goto skip_check;
/* read the text we want to modify */
if (probe_kernel_read(replaced, (void *)ip, MCOUNT_INSN_SIZE))
if (copy_from_kernel_nofault(replaced, (void *)ip, MCOUNT_INSN_SIZE))
return -EFAULT;
/* Make sure it is what we expect it to be */
@ -117,7 +117,7 @@ ftrace_modify_code(unsigned long ip, unsigned char *old_code,
skip_check:
/* replace the text with the new text */
if (probe_kernel_write(((void *)ip), new_code, MCOUNT_INSN_SIZE))
if (copy_to_kernel_nofault(((void *)ip), new_code, MCOUNT_INSN_SIZE))
return -EPERM;
flush_icache_range(ip, ip + MCOUNT_INSN_SIZE);
@ -129,7 +129,7 @@ static int ftrace_make_nop_check(struct dyn_ftrace *rec, unsigned long addr)
unsigned char __attribute__((aligned(8))) replaced[MCOUNT_INSN_SIZE];
unsigned long ip = rec->ip;
if (probe_kernel_read(replaced, (void *)ip, MCOUNT_INSN_SIZE))
if (copy_from_kernel_nofault(replaced, (void *)ip, MCOUNT_INSN_SIZE))
return -EFAULT;
if (rec->flags & FTRACE_FL_CONVERTED) {
struct ftrace_call_insn *call_insn, *tmp_call;


@ -42,7 +42,7 @@ enum unw_register_index {
struct unw_info_block {
u64 header;
u64 desc[0]; /* unwind descriptors */
u64 desc[]; /* unwind descriptors */
/* personality routine and language-specific data follow behind descriptors */
};


@ -86,9 +86,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
goto out;
}
if ((probe_kernel_read(&prev_insn, p->addr - 1,
sizeof(mips_instruction)) == 0) &&
insn_has_delayslot(prev_insn)) {
if (copy_from_kernel_nofault(&prev_insn, p->addr - 1,
sizeof(mips_instruction)) == 0 &&
insn_has_delayslot(prev_insn)) {
pr_notice("Kprobes for branch delayslot are not supported\n");
ret = -EINVAL;
goto out;


@ -131,13 +131,14 @@ static int __ftrace_modify_code(unsigned long pc, unsigned long *old_insn,
unsigned long orig_insn[3];
if (validate) {
if (probe_kernel_read(orig_insn, (void *)pc, MCOUNT_INSN_SIZE))
if (copy_from_kernel_nofault(orig_insn, (void *)pc,
MCOUNT_INSN_SIZE))
return -EFAULT;
if (memcmp(orig_insn, old_insn, MCOUNT_INSN_SIZE))
return -EINVAL;
}
if (probe_kernel_write((void *)pc, new_insn, MCOUNT_INSN_SIZE))
if (copy_to_kernel_nofault((void *)pc, new_insn, MCOUNT_INSN_SIZE))
return -EPERM;
return 0;


@ -172,7 +172,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
ip = (void *)(rec->ip + 4 - size);
ret = probe_kernel_read(insn, ip, size);
ret = copy_from_kernel_nofault(insn, ip, size);
if (ret)
return ret;


@ -154,8 +154,8 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long ip)
int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
{
int ret = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr,
BREAK_INSTR_SIZE);
int ret = copy_from_kernel_nofault(bpt->saved_instr,
(char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
if (ret)
return ret;


@ -293,7 +293,7 @@ void *dereference_function_descriptor(void *ptr)
Elf64_Fdesc *desc = ptr;
void *p;
if (!probe_kernel_address(&desc->addr, p))
if (!get_kernel_nofault(p, (void *)&desc->addr))
ptr = p;
return ptr;
}


@ -57,7 +57,7 @@ void * memcpy(void * dst,const void *src, size_t count)
EXPORT_SYMBOL(raw_copy_in_user);
EXPORT_SYMBOL(memcpy);
bool probe_kernel_read_allowed(const void *unsafe_src, size_t size)
bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
{
if ((unsigned long)unsafe_src < PAGE_SIZE)
return false;


@ -205,10 +205,6 @@ static inline void pmd_clear(pmd_t *pmdp)
*pmdp = __pmd(0);
}
/* to find an entry in a page-table-directory */
#define pgd_index(address) ((address) >> PGDIR_SHIFT)
#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
/*
* PTE updates. This function is called whenever an existing
* valid PTE is updated. This does -not- include set_pte_at()
@ -230,6 +226,8 @@ static inline void pmd_clear(pmd_t *pmdp)
* For other page sizes, we have a single entry in the table.
*/
#ifdef CONFIG_PPC_8xx
static pmd_t *pmd_off(struct mm_struct *mm, unsigned long addr);
static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, pte_t *p,
unsigned long clr, unsigned long set, int huge)
{
@ -237,7 +235,7 @@ static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, p
pte_basic_t old = pte_val(*p);
pte_basic_t new = (old & ~(pte_basic_t)clr) | set;
int num, i;
pmd_t *pmd = pmd_offset(pud_offset(p4d_offset(pgd_offset(mm, addr), addr), addr), addr);
pmd_t *pmd = pmd_off(mm, addr);
if (!huge)
num = PAGE_SIZE / SZ_4K;
@ -286,6 +284,16 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
return __pte(pte_update(mm, addr, ptep, ~0, 0, 0));
}
#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES)
#define __HAVE_ARCH_PTEP_GET
static inline pte_t ptep_get(pte_t *ptep)
{
pte_t pte = {READ_ONCE(ptep->pte), 0, 0, 0};
return pte;
}
#endif
#define __HAVE_ARCH_PTEP_SET_WRPROTECT
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)


@ -85,7 +85,7 @@ static inline void *dereference_function_descriptor(void *ptr)
struct ppc64_opd_entry *desc = ptr;
void *p;
if (!probe_kernel_address(&desc->funcaddr, p))
if (!get_kernel_nofault(p, (void *)&desc->funcaddr))
ptr = p;
return ptr;
}


@ -0,0 +1,132 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* PAPR nvDimm Specific Methods (PDSM) and structs for libndctl
*
* (C) Copyright IBM 2020
*
* Author: Vaibhav Jain <vaibhav at linux.ibm.com>
*/
#ifndef _UAPI_ASM_POWERPC_PAPR_PDSM_H_
#define _UAPI_ASM_POWERPC_PAPR_PDSM_H_
#include <linux/types.h>
#include <linux/ndctl.h>
/*
* PDSM Envelope:
*
* The ioctl ND_CMD_CALL exchanges data between user-space and kernel via an
* envelope which consists of 2 header sections and a payload section as
* illustrated below:
* +-----------------+---------------+---------------------------+
* | 64-Bytes | 8-Bytes | Max 184-Bytes |
* +-----------------+---------------+---------------------------+
* | ND-HEADER | PDSM-HEADER | PDSM-PAYLOAD |
* +-----------------+---------------+---------------------------+
* | nd_family | | |
* | nd_size_out | cmd_status | |
* | nd_size_in | reserved | nd_pdsm_payload |
* | nd_command | payload --> | |
* | nd_fw_size | | |
* | nd_payload ---> | | |
* +---------------+-----------------+---------------------------+
*
* ND Header:
* This is the generic libnvdimm header described as 'struct nd_cmd_pkg'
* which is interpreted by libnvdimm before being passed on to papr_scm. Important
* member fields used are:
* 'nd_family' : (In) NVDIMM_FAMILY_PAPR_SCM
* 'nd_size_in' : (In) PDSM-HEADER + PDSM-IN-PAYLOAD (usually 0)
* 'nd_size_out' : (In) PDSM-HEADER + PDSM-RETURN-PAYLOAD
* 'nd_command' : (In) One of PAPR_PDSM_XXX
* 'nd_fw_size' : (Out) PDSM-HEADER + size of actual payload returned
*
* PDSM Header:
* This is a papr-scm specific header that precedes the payload. It is defined
* as 'struct nd_pkg_pdsm'. The following fields are available in this header:
*
* 'cmd_status' : (Out) Errors if any encountered while servicing PDSM.
* 'reserved' : Not used, reserved for future and should be set to 0.
* 'payload' : A union of all the possible payload structs
*
* PDSM Payload:
*
* The layout of the PDSM Payload is defined by various structs shared between
* papr_scm and libndctl so that the contents of the payload can be interpreted.
* As such it is defined as a union of all possible payload structs,
* 'union nd_pdsm_payload'. Based on the value of 'nd_cmd_pkg.nd_command' the
* appropriate member of the union is accessed.
*/
/* Max payload size that we can handle */
#define ND_PDSM_PAYLOAD_MAX_SIZE 184
/* Size of the PDSM header that precedes the payload */
#define ND_PDSM_HDR_SIZE \
(sizeof(struct nd_pkg_pdsm) - ND_PDSM_PAYLOAD_MAX_SIZE)
/* Various nvdimm health indicators */
#define PAPR_PDSM_DIMM_HEALTHY 0
#define PAPR_PDSM_DIMM_UNHEALTHY 1
#define PAPR_PDSM_DIMM_CRITICAL 2
#define PAPR_PDSM_DIMM_FATAL 3
/*
* Struct exchanged between kernel & ndctl in for PAPR_PDSM_HEALTH
* Various flags indicate the health status of the dimm.
*
* extension_flags : Any extension fields present in the struct.
* dimm_unarmed : Dimm not armed, so contents won't persist.
* dimm_bad_shutdown : Previous shutdown did not persist contents.
* dimm_bad_restore : Contents from previous shutdown weren't restored.
* dimm_scrubbed : Contents of the dimm have been scrubbed.
* dimm_locked : Contents of the dimm can't be modified until CEC reboot.
* dimm_encrypted : Contents of dimm are encrypted.
* dimm_health : Dimm health indicator. One of PAPR_PDSM_DIMM_XXXX
*/
struct nd_papr_pdsm_health {
union {
struct {
__u32 extension_flags;
__u8 dimm_unarmed;
__u8 dimm_bad_shutdown;
__u8 dimm_bad_restore;
__u8 dimm_scrubbed;
__u8 dimm_locked;
__u8 dimm_encrypted;
__u16 dimm_health;
};
__u8 buf[ND_PDSM_PAYLOAD_MAX_SIZE];
};
};
/*
* Methods to be embedded in ND_CMD_CALL request. These are sent to the kernel
* via 'nd_cmd_pkg.nd_command' member of the ioctl struct
*/
enum papr_pdsm {
PAPR_PDSM_MIN = 0x0,
PAPR_PDSM_HEALTH,
PAPR_PDSM_MAX,
};
/* Maximal union that can hold all possible payload types */
union nd_pdsm_payload {
struct nd_papr_pdsm_health health;
__u8 buf[ND_PDSM_PAYLOAD_MAX_SIZE];
} __packed;
/*
* PDSM-header + payload expected with ND_CMD_CALL ioctl from libnvdimm
* Valid member of union 'payload' is identified via 'nd_cmd_pkg.nd_command'
* that should always precede this struct when sent to papr_scm via CMD_CALL
* interface.
*/
struct nd_pkg_pdsm {
__s32 cmd_status; /* Out: Sub-cmd status returned back */
__u16 reserved[2]; /* Ignored and to be set as '0' */
union nd_pdsm_payload payload;
} __packed;
#endif /* _UAPI_ASM_POWERPC_PAPR_PDSM_H_ */
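A hedged user-space sketch of the envelope described above, issuing PAPR_PDSM_HEALTH through the generic libnvdimm ND_CMD_CALL ioctl. The /dev/nmem0 path is an assumption, error handling is minimal, and the UAPI headers (linux/ndctl.h, asm/papr_pdsm.h) are assumed to come from a kernel containing this merge.

/* Build the ND-HEADER + PDSM-HEADER envelope for PAPR_PDSM_HEALTH and
 * send it with ND_IOCTL_CALL to a (hypothetical) /dev/nmem0 device. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ndctl.h>
#include <asm/papr_pdsm.h>

int main(void)
{
	size_t len = sizeof(struct nd_cmd_pkg) + ND_PDSM_HDR_SIZE +
		     sizeof(struct nd_papr_pdsm_health);
	struct nd_cmd_pkg *pkg = calloc(1, len);
	struct nd_pkg_pdsm *pdsm;
	int fd;

	if (!pkg)
		return 1;
	pdsm = (struct nd_pkg_pdsm *)pkg->nd_payload;

	pkg->nd_family = NVDIMM_FAMILY_PAPR;	/* family id added by this series */
	pkg->nd_command = PAPR_PDSM_HEALTH;
	pkg->nd_size_in = 0;
	pkg->nd_size_out = ND_PDSM_HDR_SIZE + sizeof(struct nd_papr_pdsm_health);

	fd = open("/dev/nmem0", O_RDWR);
	if (fd < 0) {
		perror("open /dev/nmem0");
		free(pkg);
		return 1;
	}
	if (ioctl(fd, ND_IOCTL_CALL, pkg) == 0 && pdsm->cmd_status == 0)
		printf("dimm_health=%u unarmed=%u\n",
		       pdsm->payload.health.dimm_health,
		       pdsm->payload.health.dimm_unarmed);
	close(fd);
	free(pkg);
	return 0;
}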


@ -270,7 +270,7 @@ BEGIN_FTR_SECTION
END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
.endif
ld r10,PACA_EXGEN+EX_CTR(r13)
ld r10,IAREA+EX_CTR(r13)
mtctr r10
BEGIN_FTR_SECTION
ld r10,IAREA+EX_PPR(r13)
@ -298,7 +298,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
.if IKVM_SKIP
89: mtocrf 0x80,r9
ld r10,PACA_EXGEN+EX_CTR(r13)
ld r10,IAREA+EX_CTR(r13)
mtctr r10
ld r9,IAREA+EX_R9(r13)
ld r10,IAREA+EX_R10(r13)


@ -421,7 +421,7 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
unsigned int instr;
struct ppc_inst *addr = (struct ppc_inst *)bpt->bpt_addr;
err = probe_kernel_address(addr, instr);
err = get_kernel_nofault(instr, (unsigned *) addr);
if (err)
return err;


@ -289,7 +289,7 @@ int kprobe_handler(struct pt_regs *regs)
if (!p) {
unsigned int instr;
if (probe_kernel_address(addr, instr))
if (get_kernel_nofault(instr, addr))
goto no_kprobe;
if (instr != BREAKPOINT_INSTRUCTION) {


@ -756,7 +756,8 @@ int module_trampoline_target(struct module *mod, unsigned long addr,
stub = (struct ppc64_stub_entry *)addr;
if (probe_kernel_read(&magic, &stub->magic, sizeof(magic))) {
if (copy_from_kernel_nofault(&magic, &stub->magic,
sizeof(magic))) {
pr_err("%s: fault reading magic for stub %lx for %s\n", __func__, addr, mod->name);
return -EFAULT;
}
@ -766,7 +767,8 @@ int module_trampoline_target(struct module *mod, unsigned long addr,
return -EFAULT;
}
if (probe_kernel_read(&funcdata, &stub->funcdata, sizeof(funcdata))) {
if (copy_from_kernel_nofault(&funcdata, &stub->funcdata,
sizeof(funcdata))) {
pr_err("%s: fault reading funcdata for stub %lx for %s\n", __func__, addr, mod->name);
return -EFAULT;
}


@ -1252,29 +1252,31 @@ struct task_struct *__switch_to(struct task_struct *prev,
static void show_instructions(struct pt_regs *regs)
{
int i;
unsigned long nip = regs->nip;
unsigned long pc = regs->nip - (NR_INSN_TO_PRINT * 3 / 4 * sizeof(int));
printk("Instruction dump:");
/*
* If we were executing with the MMU off for instructions, adjust pc
* rather than printing XXXXXXXX.
*/
if (!IS_ENABLED(CONFIG_BOOKE) && !(regs->msr & MSR_IR)) {
pc = (unsigned long)phys_to_virt(pc);
nip = (unsigned long)phys_to_virt(regs->nip);
}
for (i = 0; i < NR_INSN_TO_PRINT; i++) {
int instr;
if (!(i % 8))
pr_cont("\n");
#if !defined(CONFIG_BOOKE)
/* If executing with the IMMU off, adjust pc rather
* than print XXXXXXXX.
*/
if (!(regs->msr & MSR_IR))
pc = (unsigned long)phys_to_virt(pc);
#endif
if (!__kernel_text_address(pc) ||
probe_kernel_address((const void *)pc, instr)) {
get_kernel_nofault(instr, (const void *)pc)) {
pr_cont("XXXXXXXX ");
} else {
if (regs->nip == pc)
if (nip == pc)
pr_cont("<%08x> ", instr);
else
pr_cont("%08x ", instr);
@ -1305,7 +1307,8 @@ void show_user_instructions(struct pt_regs *regs)
for (i = 0; i < 8 && n; i++, n--, pc += sizeof(int)) {
int instr;
if (probe_user_read(&instr, (void __user *)pc, sizeof(instr))) {
if (copy_from_user_nofault(&instr, (void __user *)pc,
sizeof(instr))) {
seq_buf_printf(&s, "XXXXXXXX ");
continue;
}


@ -226,7 +226,7 @@ __ftrace_make_nop(struct module *mod,
unsigned long ip = rec->ip;
unsigned long tramp;
if (probe_kernel_read(&op, (void *)ip, MCOUNT_INSN_SIZE))
if (copy_from_kernel_nofault(&op, (void *)ip, MCOUNT_INSN_SIZE))
return -EFAULT;
/* Make sure that this is still a 24bit jump */
@ -249,7 +249,7 @@ __ftrace_make_nop(struct module *mod,
pr_devel("ip:%lx jumps to %lx", ip, tramp);
/* Find where the trampoline jumps to */
if (probe_kernel_read(jmp, (void *)tramp, sizeof(jmp))) {
if (copy_from_kernel_nofault(jmp, (void *)tramp, sizeof(jmp))) {
pr_err("Failed to read %lx\n", tramp);
return -EFAULT;
}


@ -64,9 +64,9 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
isync();
if (is_load)
ret = probe_user_read(to, (const void __user *)from, n);
ret = copy_from_user_nofault(to, (const void __user *)from, n);
else
ret = probe_user_write((void __user *)to, from, n);
ret = copy_to_user_nofault((void __user *)to, from, n);
/* switch the pid first to avoid running host with unallocated pid */
if (quadrant == 1 && pid != old_pid)


@ -15,11 +15,11 @@ int probe_user_read_inst(struct ppc_inst *inst,
unsigned int val, suffix;
int err;
err = probe_user_read(&val, nip, sizeof(val));
err = copy_from_user_nofault(&val, nip, sizeof(val));
if (err)
return err;
if (get_op(val) == OP_PREFIX) {
err = probe_user_read(&suffix, (void __user *)nip + 4, 4);
err = copy_from_user_nofault(&suffix, (void __user *)nip + 4, 4);
*inst = ppc_inst_prefix(val, suffix);
} else {
*inst = ppc_inst(val);
@ -33,11 +33,11 @@ int probe_kernel_read_inst(struct ppc_inst *inst,
unsigned int val, suffix;
int err;
err = probe_kernel_read(&val, src, sizeof(val));
err = copy_from_kernel_nofault(&val, src, sizeof(val));
if (err)
return err;
if (get_op(val) == OP_PREFIX) {
err = probe_kernel_read(&suffix, (void *)src + 4, 4);
err = copy_from_kernel_nofault(&suffix, (void *)src + 4, 4);
*inst = ppc_inst_prefix(val, suffix);
} else {
*inst = ppc_inst(val);
@ -51,7 +51,7 @@ int probe_user_read_inst(struct ppc_inst *inst,
unsigned int val;
int err;
err = probe_user_read(&val, nip, sizeof(val));
err = copy_from_user_nofault(&val, nip, sizeof(val));
if (!err)
*inst = ppc_inst(val);
@ -64,7 +64,7 @@ int probe_kernel_read_inst(struct ppc_inst *inst,
unsigned int val;
int err;
err = probe_kernel_read(&val, src, sizeof(val));
err = copy_from_kernel_nofault(&val, src, sizeof(val));
if (!err)
*inst = ppc_inst(val);


@ -33,7 +33,8 @@ static unsigned int user_getsp32(unsigned int sp, int is_first)
* which means that we've done all that we can do from
* interrupt context.
*/
if (probe_user_read(stack_frame, (void __user *)p, sizeof(stack_frame)))
if (copy_from_user_nofault(stack_frame, (void __user *)p,
sizeof(stack_frame)))
return 0;
if (!is_first)
@ -51,7 +52,8 @@ static unsigned long user_getsp64(unsigned long sp, int is_first)
{
unsigned long stack_frame[3];
if (probe_user_read(stack_frame, (void __user *)sp, sizeof(stack_frame)))
if (copy_from_user_nofault(stack_frame, (void __user *)sp,
sizeof(stack_frame)))
return 0;
if (!is_first)


@ -44,7 +44,7 @@ static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
((unsigned long)ptr & 3))
return -EFAULT;
rc = probe_user_read(ret, ptr, sizeof(*ret));
rc = copy_from_user_nofault(ret, ptr, sizeof(*ret));
if (IS_ENABLED(CONFIG_PPC64) && rc)
return read_user_stack_slow(ptr, ret, 4);


@ -50,7 +50,7 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
((unsigned long)ptr & 7))
return -EFAULT;
if (!probe_user_read(ret, ptr, sizeof(*ret)))
if (!copy_from_user_nofault(ret, ptr, sizeof(*ret)))
return 0;
return read_user_stack_slow(ptr, ret, 8);


@ -418,14 +418,16 @@ static __u64 power_pmu_bhrb_to(u64 addr)
__u64 target;
if (is_kernel_addr(addr)) {
if (probe_kernel_read(&instr, (void *)addr, sizeof(instr)))
if (copy_from_kernel_nofault(&instr, (void *)addr,
sizeof(instr)))
return 0;
return branch_target((struct ppc_inst *)&instr);
}
/* Userspace: need copy instruction here then translate it */
if (probe_user_read(&instr, (unsigned int __user *)addr, sizeof(instr)))
if (copy_from_user_nofault(&instr, (unsigned int __user *)addr,
sizeof(instr)))
return 0;
target = branch_target((struct ppc_inst *)&instr);


@ -35,7 +35,7 @@
*/
static void *spu_syscall_table[] = {
#define __SYSCALL(nr, entry) entry,
#define __SYSCALL(nr, entry) [nr] = entry,
#include <asm/syscall_table_spu.h>
#undef __SYSCALL
};
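The switch to a designated initializer matters because the SPU syscall table can have gaps in its numbering; with positional initializers a gap would silently shift every later entry. A small stand-alone illustration (the table contents here are made up):

/* With "[nr] = entry" a missing number stays NULL instead of shifting
 * the following entries to the wrong index. */
#include <stdio.h>

static const char *table[] = {
	[0] = "sys_restart_syscall",
	[1] = "sys_exit",
	/* number 2 intentionally absent: the slot stays NULL */
	[3] = "sys_read",
};

int main(void)
{
	for (int i = 0; i < 4; i++)
		printf("%d -> %s\n", i, table[i] ? table[i] : "(hole)");
	return 0;
}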


@ -12,16 +12,57 @@
#include <linux/libnvdimm.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/seq_buf.h>
#include <asm/plpar_wrappers.h>
#include <asm/papr_pdsm.h>
#define BIND_ANY_ADDR (~0ul)
#define PAPR_SCM_DIMM_CMD_MASK \
((1ul << ND_CMD_GET_CONFIG_SIZE) | \
(1ul << ND_CMD_GET_CONFIG_DATA) | \
(1ul << ND_CMD_SET_CONFIG_DATA))
(1ul << ND_CMD_SET_CONFIG_DATA) | \
(1ul << ND_CMD_CALL))
/* DIMM health bitmap indicators */
/* SCM device is unable to persist memory contents */
#define PAPR_PMEM_UNARMED (1ULL << (63 - 0))
/* SCM device failed to persist memory contents */
#define PAPR_PMEM_SHUTDOWN_DIRTY (1ULL << (63 - 1))
/* SCM device contents are persisted from previous IPL */
#define PAPR_PMEM_SHUTDOWN_CLEAN (1ULL << (63 - 2))
/* SCM device contents are not persisted from previous IPL */
#define PAPR_PMEM_EMPTY (1ULL << (63 - 3))
/* SCM device memory life remaining is critically low */
#define PAPR_PMEM_HEALTH_CRITICAL (1ULL << (63 - 4))
/* SCM device will be garded off next IPL due to failure */
#define PAPR_PMEM_HEALTH_FATAL (1ULL << (63 - 5))
/* SCM contents cannot persist due to current platform health status */
#define PAPR_PMEM_HEALTH_UNHEALTHY (1ULL << (63 - 6))
/* SCM device is unable to persist memory contents in certain conditions */
#define PAPR_PMEM_HEALTH_NON_CRITICAL (1ULL << (63 - 7))
/* SCM device is encrypted */
#define PAPR_PMEM_ENCRYPTED (1ULL << (63 - 8))
/* SCM device has been scrubbed and locked */
#define PAPR_PMEM_SCRUBBED_AND_LOCKED (1ULL << (63 - 9))
/* Bits status indicators for health bitmap indicating unarmed dimm */
#define PAPR_PMEM_UNARMED_MASK (PAPR_PMEM_UNARMED | \
PAPR_PMEM_HEALTH_UNHEALTHY)
/* Bits status indicators for health bitmap indicating unflushed dimm */
#define PAPR_PMEM_BAD_SHUTDOWN_MASK (PAPR_PMEM_SHUTDOWN_DIRTY)
/* Bits status indicators for health bitmap indicating unrestored dimm */
#define PAPR_PMEM_BAD_RESTORE_MASK (PAPR_PMEM_EMPTY)
/* Bit status indicators for smart event notification */
#define PAPR_PMEM_SMART_EVENT_MASK (PAPR_PMEM_HEALTH_CRITICAL | \
PAPR_PMEM_HEALTH_FATAL | \
PAPR_PMEM_HEALTH_UNHEALTHY)
/* private struct associated with each region */
struct papr_scm_priv {
struct platform_device *pdev;
struct device_node *dn;
@ -39,6 +80,15 @@ struct papr_scm_priv {
struct resource res;
struct nd_region *region;
struct nd_interleave_set nd_set;
/* Protect dimm health data from concurrent read/writes */
struct mutex health_mutex;
/* Last time the health information of the dimm was updated */
unsigned long lasthealth_jiffies;
/* Health information for the dimm */
u64 health_bitmap;
};
static int drc_pmem_bind(struct papr_scm_priv *p)
@ -144,6 +194,61 @@ static int drc_pmem_query_n_bind(struct papr_scm_priv *p)
return drc_pmem_bind(p);
}
/*
* Issue hcall to retrieve dimm health info and populate papr_scm_priv with the
* health information.
*/
static int __drc_pmem_query_health(struct papr_scm_priv *p)
{
unsigned long ret[PLPAR_HCALL_BUFSIZE];
long rc;
/* issue the hcall */
rc = plpar_hcall(H_SCM_HEALTH, ret, p->drc_index);
if (rc != H_SUCCESS) {
dev_err(&p->pdev->dev,
"Failed to query health information, Err:%ld\n", rc);
return -ENXIO;
}
p->lasthealth_jiffies = jiffies;
p->health_bitmap = ret[0] & ret[1];
dev_dbg(&p->pdev->dev,
"Queried dimm health info. Bitmap:0x%016lx Mask:0x%016lx\n",
ret[0], ret[1]);
return 0;
}
/* Min interval in seconds for assuming stable dimm health */
#define MIN_HEALTH_QUERY_INTERVAL 60
/* Query cached health info and if needed call drc_pmem_query_health */
static int drc_pmem_query_health(struct papr_scm_priv *p)
{
unsigned long cache_timeout;
int rc;
/* Protect concurrent modifications to papr_scm_priv */
rc = mutex_lock_interruptible(&p->health_mutex);
if (rc)
return rc;
/* Jiffies offset for which the health data is assumed to be the same */
cache_timeout = p->lasthealth_jiffies +
msecs_to_jiffies(MIN_HEALTH_QUERY_INTERVAL * 1000);
/* Fetch new health info if it's older than MIN_HEALTH_QUERY_INTERVAL */
if (time_after(jiffies, cache_timeout))
rc = __drc_pmem_query_health(p);
else
/* Assume cached health data is valid */
rc = 0;
mutex_unlock(&p->health_mutex);
return rc;
}
static int papr_scm_meta_get(struct papr_scm_priv *p,
struct nd_cmd_get_config_data_hdr *hdr)
@ -246,16 +351,250 @@ static int papr_scm_meta_set(struct papr_scm_priv *p,
return 0;
}
/*
* Do sanity checks on the input args to the dimm-control function and return
* '0' if valid. Validation of PDSM payloads happens later in
* papr_scm_service_pdsm.
*/
static int is_cmd_valid(struct nvdimm *nvdimm, unsigned int cmd, void *buf,
unsigned int buf_len)
{
unsigned long cmd_mask = PAPR_SCM_DIMM_CMD_MASK;
struct nd_cmd_pkg *nd_cmd;
struct papr_scm_priv *p;
enum papr_pdsm pdsm;
/* Only dimm-specific calls are supported atm */
if (!nvdimm)
return -EINVAL;
/* get the provider data from struct nvdimm */
p = nvdimm_provider_data(nvdimm);
if (!test_bit(cmd, &cmd_mask)) {
dev_dbg(&p->pdev->dev, "Unsupported cmd=%u\n", cmd);
return -EINVAL;
}
/* For CMD_CALL verify pdsm request */
if (cmd == ND_CMD_CALL) {
/* Verify the envelope and envelope size */
if (!buf ||
buf_len < (sizeof(struct nd_cmd_pkg) + ND_PDSM_HDR_SIZE)) {
dev_dbg(&p->pdev->dev, "Invalid pkg size=%u\n",
buf_len);
return -EINVAL;
}
/* Verify that the nd_cmd_pkg.nd_family is correct */
nd_cmd = (struct nd_cmd_pkg *)buf;
if (nd_cmd->nd_family != NVDIMM_FAMILY_PAPR) {
dev_dbg(&p->pdev->dev, "Invalid pkg family=0x%llx\n",
nd_cmd->nd_family);
return -EINVAL;
}
pdsm = (enum papr_pdsm)nd_cmd->nd_command;
/* Verify if the pdsm command is valid */
if (pdsm <= PAPR_PDSM_MIN || pdsm >= PAPR_PDSM_MAX) {
dev_dbg(&p->pdev->dev, "PDSM[0x%x]: Invalid PDSM\n",
pdsm);
return -EINVAL;
}
/* Have enough space to hold returned 'nd_pkg_pdsm' header */
if (nd_cmd->nd_size_out < ND_PDSM_HDR_SIZE) {
dev_dbg(&p->pdev->dev, "PDSM[0x%x]: Invalid payload\n",
pdsm);
return -EINVAL;
}
}
/* Let the command be further processed */
return 0;
}
/* Fetch the DIMM health info and populate it in provided package. */
static int papr_pdsm_health(struct papr_scm_priv *p,
union nd_pdsm_payload *payload)
{
int rc;
/* Ensure dimm health mutex is taken preventing concurrent access */
rc = mutex_lock_interruptible(&p->health_mutex);
if (rc)
goto out;
/* Always fetch up-to-date dimm health data, ignoring cached values */
rc = __drc_pmem_query_health(p);
if (rc) {
mutex_unlock(&p->health_mutex);
goto out;
}
/* update health struct with various flags derived from health bitmap */
payload->health = (struct nd_papr_pdsm_health) {
.extension_flags = 0,
.dimm_unarmed = !!(p->health_bitmap & PAPR_PMEM_UNARMED_MASK),
.dimm_bad_shutdown = !!(p->health_bitmap & PAPR_PMEM_BAD_SHUTDOWN_MASK),
.dimm_bad_restore = !!(p->health_bitmap & PAPR_PMEM_BAD_RESTORE_MASK),
.dimm_scrubbed = !!(p->health_bitmap & PAPR_PMEM_SCRUBBED_AND_LOCKED),
.dimm_locked = !!(p->health_bitmap & PAPR_PMEM_SCRUBBED_AND_LOCKED),
.dimm_encrypted = !!(p->health_bitmap & PAPR_PMEM_ENCRYPTED),
.dimm_health = PAPR_PDSM_DIMM_HEALTHY,
};
/* Update field dimm_health based on health_bitmap flags */
if (p->health_bitmap & PAPR_PMEM_HEALTH_FATAL)
payload->health.dimm_health = PAPR_PDSM_DIMM_FATAL;
else if (p->health_bitmap & PAPR_PMEM_HEALTH_CRITICAL)
payload->health.dimm_health = PAPR_PDSM_DIMM_CRITICAL;
else if (p->health_bitmap & PAPR_PMEM_HEALTH_UNHEALTHY)
payload->health.dimm_health = PAPR_PDSM_DIMM_UNHEALTHY;
/* struct populated hence can release the mutex now */
mutex_unlock(&p->health_mutex);
rc = sizeof(struct nd_papr_pdsm_health);
out:
return rc;
}
/*
* 'struct pdsm_cmd_desc'
* Identifies supported PDSMs' expected length of in/out payloads
* and pdsm service function.
*
* size_in : Size of input payload if any in the PDSM request.
* size_out : Size of output payload if any in the PDSM request.
* service : Service function for the PDSM request. Return semantics:
* rc < 0 : Error servicing PDSM and rc indicates the error.
* rc >= 0 : Serviced successfully and 'rc' indicates the number of
* bytes written to payload.
*/
struct pdsm_cmd_desc {
u32 size_in;
u32 size_out;
int (*service)(struct papr_scm_priv *dimm,
union nd_pdsm_payload *payload);
};
/* Holds all supported PDSMs' command descriptors */
static const struct pdsm_cmd_desc __pdsm_cmd_descriptors[] = {
[PAPR_PDSM_MIN] = {
.size_in = 0,
.size_out = 0,
.service = NULL,
},
/* New PDSM command descriptors to be added below */
[PAPR_PDSM_HEALTH] = {
.size_in = 0,
.size_out = sizeof(struct nd_papr_pdsm_health),
.service = papr_pdsm_health,
},
/* Empty */
[PAPR_PDSM_MAX] = {
.size_in = 0,
.size_out = 0,
.service = NULL,
},
};
/* Given a valid pdsm cmd return its command descriptor else return NULL */
static inline const struct pdsm_cmd_desc *pdsm_cmd_desc(enum papr_pdsm cmd)
{
if (cmd >= 0 && cmd < ARRAY_SIZE(__pdsm_cmd_descriptors))
return &__pdsm_cmd_descriptors[cmd];
return NULL;
}
/*
* For a given pdsm request call an appropriate service function.
* Returns errors if any while handling the pdsm command package.
*/
static int papr_scm_service_pdsm(struct papr_scm_priv *p,
struct nd_cmd_pkg *pkg)
{
/* Get the PDSM header and PDSM command */
struct nd_pkg_pdsm *pdsm_pkg = (struct nd_pkg_pdsm *)pkg->nd_payload;
enum papr_pdsm pdsm = (enum papr_pdsm)pkg->nd_command;
const struct pdsm_cmd_desc *pdsc;
int rc;
/* Fetch corresponding pdsm descriptor for validation and servicing */
pdsc = pdsm_cmd_desc(pdsm);
/* Validate pdsm descriptor */
/* Ensure that reserved fields are 0 */
if (pdsm_pkg->reserved[0] || pdsm_pkg->reserved[1]) {
dev_dbg(&p->pdev->dev, "PDSM[0x%x]: Invalid reserved field\n",
pdsm);
return -EINVAL;
}
/* If pdsm expects some input, then ensure that the size_in matches */
if (pdsc->size_in &&
pkg->nd_size_in != (pdsc->size_in + ND_PDSM_HDR_SIZE)) {
dev_dbg(&p->pdev->dev, "PDSM[0x%x]: Mismatched size_in=%d\n",
pdsm, pkg->nd_size_in);
return -EINVAL;
}
/* If pdsm wants to return data, then ensure that size_out matches */
if (pdsc->size_out &&
pkg->nd_size_out != (pdsc->size_out + ND_PDSM_HDR_SIZE)) {
dev_dbg(&p->pdev->dev, "PDSM[0x%x]: Mismatched size_out=%d\n",
pdsm, pkg->nd_size_out);
return -EINVAL;
}
/* Service the pdsm */
if (pdsc->service) {
dev_dbg(&p->pdev->dev, "PDSM[0x%x]: Servicing..\n", pdsm);
rc = pdsc->service(p, &pdsm_pkg->payload);
if (rc < 0) {
/* error encountered while servicing pdsm */
pdsm_pkg->cmd_status = rc;
pkg->nd_fw_size = ND_PDSM_HDR_SIZE;
} else {
/* pdsm serviced and 'rc' bytes written to payload */
pdsm_pkg->cmd_status = 0;
pkg->nd_fw_size = ND_PDSM_HDR_SIZE + rc;
}
} else {
dev_dbg(&p->pdev->dev, "PDSM[0x%x]: Unsupported PDSM request\n",
pdsm);
pdsm_pkg->cmd_status = -ENOENT;
pkg->nd_fw_size = ND_PDSM_HDR_SIZE;
}
return pdsm_pkg->cmd_status;
}
static int papr_scm_ndctl(struct nvdimm_bus_descriptor *nd_desc,
struct nvdimm *nvdimm, unsigned int cmd, void *buf,
unsigned int buf_len, int *cmd_rc)
{
struct nd_cmd_get_config_size *get_size_hdr;
struct nd_cmd_pkg *call_pkg = NULL;
struct papr_scm_priv *p;
int rc;
/* Only dimm-specific calls are supported atm */
if (!nvdimm)
return -EINVAL;
rc = is_cmd_valid(nvdimm, cmd, buf, buf_len);
if (rc) {
pr_debug("Invalid cmd=0x%x. Err=%d\n", cmd, rc);
return rc;
}
/* Use a local variable in case cmd_rc pointer is NULL */
if (!cmd_rc)
cmd_rc = &rc;
p = nvdimm_provider_data(nvdimm);
@ -277,7 +616,13 @@ static int papr_scm_ndctl(struct nvdimm_bus_descriptor *nd_desc,
*cmd_rc = papr_scm_meta_set(p, buf);
break;
case ND_CMD_CALL:
call_pkg = (struct nd_cmd_pkg *)buf;
*cmd_rc = papr_scm_service_pdsm(p, call_pkg);
break;
default:
dev_dbg(&p->pdev->dev, "Unknown command = %d\n", cmd);
return -EINVAL;
}
@ -286,6 +631,64 @@ static int papr_scm_ndctl(struct nvdimm_bus_descriptor *nd_desc,
return 0;
}
static ssize_t flags_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct nvdimm *dimm = to_nvdimm(dev);
struct papr_scm_priv *p = nvdimm_provider_data(dimm);
struct seq_buf s;
u64 health;
int rc;
rc = drc_pmem_query_health(p);
if (rc)
return rc;
/* Copy health_bitmap locally, check masks & update out buffer */
health = READ_ONCE(p->health_bitmap);
seq_buf_init(&s, buf, PAGE_SIZE);
if (health & PAPR_PMEM_UNARMED_MASK)
seq_buf_printf(&s, "not_armed ");
if (health & PAPR_PMEM_BAD_SHUTDOWN_MASK)
seq_buf_printf(&s, "flush_fail ");
if (health & PAPR_PMEM_BAD_RESTORE_MASK)
seq_buf_printf(&s, "restore_fail ");
if (health & PAPR_PMEM_ENCRYPTED)
seq_buf_printf(&s, "encrypted ");
if (health & PAPR_PMEM_SMART_EVENT_MASK)
seq_buf_printf(&s, "smart_notify ");
if (health & PAPR_PMEM_SCRUBBED_AND_LOCKED)
seq_buf_printf(&s, "scrubbed locked ");
if (seq_buf_used(&s))
seq_buf_printf(&s, "\n");
return seq_buf_used(&s);
}
DEVICE_ATTR_RO(flags);
/* papr_scm specific dimm attributes */
static struct attribute *papr_nd_attributes[] = {
&dev_attr_flags.attr,
NULL,
};
static struct attribute_group papr_nd_attribute_group = {
.name = "papr",
.attrs = papr_nd_attributes,
};
static const struct attribute_group *papr_nd_attr_groups[] = {
&papr_nd_attribute_group,
NULL,
};
static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
{
struct device *dev = &p->pdev->dev;
@ -312,8 +715,8 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
dimm_flags = 0;
set_bit(NDD_LABELING, &dimm_flags);
p->nvdimm = nvdimm_create(p->bus, p, NULL, dimm_flags,
PAPR_SCM_DIMM_CMD_MASK, 0, NULL);
p->nvdimm = nvdimm_create(p->bus, p, papr_nd_attr_groups,
dimm_flags, PAPR_SCM_DIMM_CMD_MASK, 0, NULL);
if (!p->nvdimm) {
dev_err(dev, "Error creating DIMM object for %pOF\n", p->dn);
goto err;
@ -399,6 +802,9 @@ static int papr_scm_probe(struct platform_device *pdev)
if (!p)
return -ENOMEM;
/* Initialize the dimm mutex */
mutex_init(&p->health_mutex);
/* optional DT properties */
of_property_read_u32(dn, "ibm,metadata-size", &metadata_size);


@ -1066,10 +1066,10 @@ int fsl_pci_mcheck_exception(struct pt_regs *regs)
if (is_in_pci_mem_space(addr)) {
if (user_mode(regs))
ret = probe_user_read(&inst, (void __user *)regs->nip,
sizeof(inst));
ret = copy_from_user_nofault(&inst,
(void __user *)regs->nip, sizeof(inst));
else
ret = probe_kernel_address((void *)regs->nip, inst);
ret = get_kernel_nofault(inst, (void *)regs->nip);
if (!ret && mcheck_handle_load(regs, inst)) {
regs->nip += 4;


@ -179,7 +179,7 @@
" bnez %1, 0b\n" \
"1:\n" \
: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
: "rJ" (__old), "rJ" (__new) \
: "rJ" ((long)__old), "rJ" (__new) \
: "memory"); \
break; \
case 8: \
@ -224,7 +224,7 @@
RISCV_ACQUIRE_BARRIER \
"1:\n" \
: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
: "rJ" (__old), "rJ" (__new) \
: "rJ" ((long)__old), "rJ" (__new) \
: "memory"); \
break; \
case 8: \
@ -270,7 +270,7 @@
" bnez %1, 0b\n" \
"1:\n" \
: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
: "rJ" (__old), "rJ" (__new) \
: "rJ" ((long)__old), "rJ" (__new) \
: "memory"); \
break; \
case 8: \
@ -316,7 +316,7 @@
" fence rw, rw\n" \
"1:\n" \
: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
: "rJ" (__old), "rJ" (__new) \
: "rJ" ((long)__old), "rJ" (__new) \
: "memory"); \
break; \
case 8: \
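The repeated (long) casts above exist because lr.w sign-extends the 32-bit word it loads and the following bne compares full 64-bit registers; widening the "old" operand gives it well-defined upper bits that, for a signed 32-bit value, are sign-extended to match. A small illustrative sketch of the mismatch being avoided:

/* On RV64, lr.w yields a sign-extended 64-bit register value. The (long)
 * widening of "old" produces the same upper bits, so the comparison in
 * the LR/SC loop can succeed even when bit 31 is set. */
#include <stdio.h>

int main(void)
{
	int old = (int)0x80000000;			/* bit 31 set */
	unsigned long as_loaded = 0xffffffff80000000ul;	/* what lr.w yields */

	printf("old widened to long:  0x%016lx\n", (unsigned long)(long)old);
	printf("value loaded by lr.w: 0x%016lx\n", as_loaded);
	return 0;
}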


@ -38,7 +38,8 @@ static int ftrace_check_current_call(unsigned long hook_pos,
* Read the text we want to modify;
* return must be -EFAULT on read error
*/
if (probe_kernel_read(replaced, (void *)hook_pos, MCOUNT_INSN_SIZE))
if (copy_from_kernel_nofault(replaced, (void *)hook_pos,
MCOUNT_INSN_SIZE))
return -EFAULT;
/*


@ -62,7 +62,7 @@ int get_step_address(struct pt_regs *regs, unsigned long *next_addr)
unsigned int rs1_num, rs2_num;
int op_code;
if (probe_kernel_address((void *)pc, op_code))
if (get_kernel_nofault(op_code, (void *)pc))
return -EINVAL;
if ((op_code & __INSN_LENGTH_MASK) != __INSN_LENGTH_GE_32) {
if (is_c_jalr_insn(op_code) || is_c_jr_insn(op_code)) {
@ -146,14 +146,14 @@ int do_single_step(struct pt_regs *regs)
return error;
/* Store the op code in the stepped address */
error = probe_kernel_address((void *)addr, stepped_opcode);
error = get_kernel_nofault(stepped_opcode, (void *)addr);
if (error)
return error;
stepped_address = addr;
/* Replace the op code with the break instruction */
error = probe_kernel_write((void *)stepped_address,
error = copy_to_kernel_nofault((void *)stepped_address,
arch_kgdb_ops.gdb_bpt_instr,
BREAK_INSTR_SIZE);
/* Flush and return */
@@ -173,7 +173,7 @@ int do_single_step(struct pt_regs *regs)
static void undo_single_step(struct pt_regs *regs)
{
if (stepped_opcode != 0) {
probe_kernel_write((void *)stepped_address,
copy_to_kernel_nofault((void *)stepped_address,
(void *)&stepped_opcode, BREAK_INSTR_SIZE);
flush_icache_range(stepped_address,
stepped_address + BREAK_INSTR_SIZE);

View file

@@ -63,7 +63,7 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
waddr = patch_map(addr, FIX_TEXT_POKE0);
ret = probe_kernel_write(waddr, insn, len);
ret = copy_to_kernel_nofault(waddr, insn, len);
patch_unmap(FIX_TEXT_POKE0);
@@ -76,7 +76,7 @@ NOKPROBE_SYMBOL(patch_insn_write);
#else
static int patch_insn_write(void *addr, const void *insn, size_t len)
{
return probe_kernel_write(addr, insn, len);
return copy_to_kernel_nofault(addr, insn, len);
}
NOKPROBE_SYMBOL(patch_insn_write);
#endif /* CONFIG_MMU */

View file

@@ -8,6 +8,7 @@
#include <linux/syscalls.h>
#include <asm/unistd.h>
#include <asm/cacheflush.h>
#include <asm-generic/mman-common.h>
static long riscv_sys_mmap(unsigned long addr, unsigned long len,
unsigned long prot, unsigned long flags,
@@ -16,6 +17,11 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
{
if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
return -EINVAL;
if ((prot & PROT_WRITE) && (prot & PROT_EXEC))
if (unlikely(!(prot & PROT_READ)))
return -EINVAL;
return ksys_mmap_pgoff(addr, len, prot, flags, fd,
offset >> (PAGE_SHIFT - page_shift_offset));
}
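For illustration only (not part of the patch): the added check rejects riscv mappings that request write and execute permission without read. A hedged userspace sketch of a request that a kernel with this change would refuse with -EINVAL; behaviour on other kernels or architectures may differ.
/* Illustrative userspace probe: W+X without R is rejected by the new riscv check. */
#define _DEFAULT_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
int main(void)
{
	void *p = mmap(NULL, 4096, PROT_WRITE | PROT_EXEC,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		printf("mmap(PROT_WRITE|PROT_EXEC) failed: %s\n", strerror(errno));
	else
		printf("mapping succeeded at %p (kernel without this check, or other policy)\n", p);
	return 0;
}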

View file

@@ -137,7 +137,7 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
{
bug_insn_t insn;
if (probe_kernel_address((bug_insn_t *)pc, insn))
if (get_kernel_nofault(insn, (bug_insn_t *)pc))
return 0;
return GET_INSN_LENGTH(insn);
@@ -165,7 +165,7 @@ int is_valid_bugaddr(unsigned long pc)
if (pc < VMALLOC_START)
return 0;
if (probe_kernel_address((bug_insn_t *)pc, insn))
if (get_kernel_nofault(insn, (bug_insn_t *)pc))
return 0;
if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
return (insn == __BUG_INSN_32);

View file

@@ -151,6 +151,7 @@ int set_memory_nx(unsigned long addr, int numpages)
int set_direct_map_invalid_noflush(struct page *page)
{
int ret;
unsigned long start = (unsigned long)page_address(page);
unsigned long end = start + PAGE_SIZE;
struct pageattr_masks masks = {
@@ -158,11 +159,16 @@ int set_direct_map_invalid_noflush(struct page *page)
.clear_mask = __pgprot(_PAGE_PRESENT)
};
return walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
mmap_read_lock(&init_mm);
ret = walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
mmap_read_unlock(&init_mm);
return ret;
}
int set_direct_map_default_noflush(struct page *page)
{
int ret;
unsigned long start = (unsigned long)page_address(page);
unsigned long end = start + PAGE_SIZE;
struct pageattr_masks masks = {
@@ -170,7 +176,11 @@ int set_direct_map_default_noflush(struct page *page)
.clear_mask = __pgprot(0)
};
return walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
mmap_read_lock(&init_mm);
ret = walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
mmap_read_unlock(&init_mm);
return ret;
}
void __kernel_map_pages(struct page *page, int numpages, int enable)

View file

@@ -462,6 +462,7 @@ config NUMA
config NODES_SHIFT
int
depends on NEED_MULTIPLE_NODES
default "1"
config SCHED_SMT

View file

@@ -693,7 +693,7 @@ static ssize_t prng_chunksize_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, "%u\n", prng_chunk_size);
return scnprintf(buf, PAGE_SIZE, "%u\n", prng_chunk_size);
}
static DEVICE_ATTR(chunksize, 0444, prng_chunksize_show, NULL);
@@ -712,7 +712,7 @@ static ssize_t prng_counter_show(struct device *dev,
counter = prng_data->prngws.byte_counter;
mutex_unlock(&prng_data->mutex);
return snprintf(buf, PAGE_SIZE, "%llu\n", counter);
return scnprintf(buf, PAGE_SIZE, "%llu\n", counter);
}
static DEVICE_ATTR(byte_counter, 0444, prng_counter_show, NULL);
@@ -721,7 +721,7 @@ static ssize_t prng_errorflag_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, "%d\n", prng_errorflag);
return scnprintf(buf, PAGE_SIZE, "%d\n", prng_errorflag);
}
static DEVICE_ATTR(errorflag, 0444, prng_errorflag_show, NULL);
@@ -731,9 +731,9 @@ static ssize_t prng_mode_show(struct device *dev,
char *buf)
{
if (prng_mode == PRNG_MODE_TDES)
return snprintf(buf, PAGE_SIZE, "TDES\n");
return scnprintf(buf, PAGE_SIZE, "TDES\n");
else
return snprintf(buf, PAGE_SIZE, "SHA512\n");
return scnprintf(buf, PAGE_SIZE, "SHA512\n");
}
static DEVICE_ATTR(mode, 0444, prng_mode_show, NULL);
@@ -756,7 +756,7 @@ static ssize_t prng_reseed_limit_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, "%u\n", prng_reseed_limit);
return scnprintf(buf, PAGE_SIZE, "%u\n", prng_reseed_limit);
}
static ssize_t prng_reseed_limit_store(struct device *dev,
struct device_attribute *attr,
@@ -787,7 +787,7 @@ static ssize_t prng_strength_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, "256\n");
return scnprintf(buf, PAGE_SIZE, "256\n");
}
static DEVICE_ATTR(strength, 0444, prng_strength_show, NULL);
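For illustration only (not part of the patch): snprintf() returns the length the output would have had, which can exceed the buffer and so exceed what a sysfs show() callback is allowed to report, while scnprintf() returns the number of bytes actually written. A small userspace sketch with a simplified stand-in for the kernel's scnprintf(); the helper name is an assumption for illustration.
/* Why sysfs show() callbacks prefer scnprintf(): return value demo. */
#include <stdarg.h>
#include <stdio.h>
/* Simplified stand-in for the kernel's scnprintf(): returns bytes actually written. */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;
	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);
	if (i >= (int)size)
		i = size ? (int)size - 1 : 0;
	return i;
}
int main(void)
{
	char buf[8];
	/* snprintf() reports the would-be length, which can exceed the buffer... */
	printf("snprintf  -> %d\n", snprintf(buf, sizeof(buf), "%s", "0123456789"));    /* 10 */
	/* ...while the scnprintf()-style helper reports what actually fit. */
	printf("scnprintf -> %d\n", my_scnprintf(buf, sizeof(buf), "%s", "0123456789")); /* 7 */
	return 0;
}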

View file

@@ -33,7 +33,17 @@ static inline void syscall_rollback(struct task_struct *task,
static inline long syscall_get_error(struct task_struct *task,
struct pt_regs *regs)
{
return IS_ERR_VALUE(regs->gprs[2]) ? regs->gprs[2] : 0;
unsigned long error = regs->gprs[2];
#ifdef CONFIG_COMPAT
if (test_tsk_thread_flag(task, TIF_31BIT)) {
/*
* Sign-extend the value so (int)-EFOO becomes (long)-EFOO
* and will match correctly in comparisons.
*/
error = (long)(int)error;
}
#endif
return IS_ERR_VALUE(error) ? error : 0;
}
static inline long syscall_get_return_value(struct task_struct *task,
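For illustration only (not part of the patch): a 31-bit compat task stores its return value in the low 32 bits of gprs[2], so without the (long)(int) sign extension above a negative errno looks like a large positive number and IS_ERR_VALUE() misses it. A tiny userspace demonstration (64-bit host assumed):
/* Why the compat error value is sign-extended: -22 (-EINVAL) seen as a 32-bit pattern. */
#include <stdio.h>
int main(void)
{
	unsigned long error = 0xffffffeaUL;	/* low 32 bits of a 31-bit task returning -22 */
	printf("raw      : %ld\n", (long)error);       /* 4294967274: not a negative errno */
	printf("extended : %ld\n", (long)(int)error);  /* -22: matches IS_ERR_VALUE() */
	return 0;
}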

View file

@@ -36,6 +36,7 @@ struct vdso_data {
__u32 tk_shift; /* Shift used for xtime_nsec 0x60 */
__u32 ts_dir; /* TOD steering direction 0x64 */
__u64 ts_end; /* TOD steering end 0x68 */
__u32 hrtimer_res; /* hrtimer resolution 0x70 */
};
struct vdso_per_cpu_data {

View file

@@ -76,6 +76,7 @@ int main(void)
OFFSET(__VDSO_TK_SHIFT, vdso_data, tk_shift);
OFFSET(__VDSO_TS_DIR, vdso_data, ts_dir);
OFFSET(__VDSO_TS_END, vdso_data, ts_end);
OFFSET(__VDSO_CLOCK_REALTIME_RES, vdso_data, hrtimer_res);
OFFSET(__VDSO_ECTG_BASE, vdso_per_cpu_data, ectg_timer_base);
OFFSET(__VDSO_ECTG_USER, vdso_per_cpu_data, ectg_user_time);
OFFSET(__VDSO_GETCPU_VAL, vdso_per_cpu_data, getcpu_val);
@@ -86,7 +87,6 @@ int main(void)
DEFINE(__CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
DEFINE(__CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE);
DEFINE(__CLOCK_THREAD_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID);
DEFINE(__CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
DEFINE(__CLOCK_COARSE_RES, LOW_RES_NSEC);
BLANK();
/* idle data offsets */

View file

@@ -401,9 +401,9 @@ ENTRY(system_call)
jnz .Lsysc_nr_ok
# svc 0: system call number in %r1
llgfr %r1,%r1 # clear high word in r1
sth %r1,__PT_INT_CODE+2(%r11)
cghi %r1,NR_syscalls
jnl .Lsysc_nr_ok
sth %r1,__PT_INT_CODE+2(%r11)
slag %r8,%r1,3
.Lsysc_nr_ok:
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)

View file

@@ -83,7 +83,7 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
{
struct ftrace_insn orig, new, old;
if (probe_kernel_read(&old, (void *) rec->ip, sizeof(old)))
if (copy_from_kernel_nofault(&old, (void *) rec->ip, sizeof(old)))
return -EFAULT;
if (addr == MCOUNT_ADDR) {
/* Initial code replacement */
@@ -105,7 +105,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
struct ftrace_insn orig, new, old;
if (probe_kernel_read(&old, (void *) rec->ip, sizeof(old)))
if (copy_from_kernel_nofault(&old, (void *) rec->ip, sizeof(old)))
return -EFAULT;
/* Replace nop with an ftrace call. */
ftrace_generate_nop_insn(&orig);

View file

@@ -181,7 +181,7 @@ static ssize_t sys_##_prefix##_##_name##_show(struct kobject *kobj, \
struct kobj_attribute *attr, \
char *page) \
{ \
return snprintf(page, PAGE_SIZE, _format, ##args); \
return scnprintf(page, PAGE_SIZE, _format, ##args); \
}
#define IPL_ATTR_CCW_STORE_FN(_prefix, _name, _ipl_blk) \

View file

@@ -323,6 +323,25 @@ static inline void __poke_user_per(struct task_struct *child,
child->thread.per_user.end = data;
}
static void fixup_int_code(struct task_struct *child, addr_t data)
{
struct pt_regs *regs = task_pt_regs(child);
int ilc = regs->int_code >> 16;
u16 insn;
if (ilc > 6)
return;
if (ptrace_access_vm(child, regs->psw.addr - (regs->int_code >> 16),
&insn, sizeof(insn), FOLL_FORCE) != sizeof(insn))
return;
/* double check that tracee stopped on svc instruction */
if ((insn >> 8) != 0xa)
return;
regs->int_code = 0x20000 | (data & 0xffff);
}
/*
* Write a word to the user area of a process at location addr. This
* operation does have an additional problem compared to peek_user.
@@ -334,7 +353,9 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
struct user *dummy = NULL;
addr_t offset;
if (addr < (addr_t) &dummy->regs.acrs) {
struct pt_regs *regs = task_pt_regs(child);
/*
* psw and gprs are stored on the stack
*/
@@ -352,7 +373,11 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
/* Invalid addressing mode bits */
return -EINVAL;
}
*(addr_t *)((addr_t) &task_pt_regs(child)->psw + addr) = data;
if (test_pt_regs_flag(regs, PIF_SYSCALL) &&
addr == offsetof(struct user, regs.gprs[2]))
fixup_int_code(child, data);
*(addr_t *)((addr_t) &regs->psw + addr) = data;
} else if (addr < (addr_t) (&dummy->regs.orig_gpr2)) {
/*
@@ -718,6 +743,10 @@ static int __poke_user_compat(struct task_struct *child,
regs->psw.mask = (regs->psw.mask & ~PSW_MASK_BA) |
(__u64)(tmp & PSW32_ADDR_AMODE);
} else {
if (test_pt_regs_flag(regs, PIF_SYSCALL) &&
addr == offsetof(struct compat_user, regs.gprs[2]))
fixup_int_code(child, data);
/* gpr 0-15 */
*(__u32*)((addr_t) &regs->psw + addr*2 + 4) = tmp;
}
@@ -837,40 +866,66 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
{
unsigned long mask = -1UL;
long ret = -1;
if (is_compat_task())
mask = 0xffffffff;
/*
* The sysc_tracesys code in entry.S stored the system
* call number to gprs[2].
*/
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
(tracehook_report_syscall_entry(regs) ||
regs->gprs[2] >= NR_syscalls)) {
tracehook_report_syscall_entry(regs)) {
/*
* Tracing decided this syscall should not happen or the
* debugger stored an invalid system call number. Skip
* Tracing decided this syscall should not happen. Skip
* the system call and the system call restart handling.
*/
clear_pt_regs_flag(regs, PIF_SYSCALL);
return -1;
goto skip;
}
#ifdef CONFIG_SECCOMP
/* Do the secure computing check after ptrace. */
if (secure_computing()) {
/* seccomp failures shouldn't expose any additional code. */
return -1;
if (unlikely(test_thread_flag(TIF_SECCOMP))) {
struct seccomp_data sd;
if (is_compat_task()) {
sd.instruction_pointer = regs->psw.addr & 0x7fffffff;
sd.arch = AUDIT_ARCH_S390;
} else {
sd.instruction_pointer = regs->psw.addr;
sd.arch = AUDIT_ARCH_S390X;
}
sd.nr = regs->int_code & 0xffff;
sd.args[0] = regs->orig_gpr2 & mask;
sd.args[1] = regs->gprs[3] & mask;
sd.args[2] = regs->gprs[4] & mask;
sd.args[3] = regs->gprs[5] & mask;
sd.args[4] = regs->gprs[6] & mask;
sd.args[5] = regs->gprs[7] & mask;
if (__secure_computing(&sd) == -1)
goto skip;
}
#endif /* CONFIG_SECCOMP */
if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
trace_sys_enter(regs, regs->gprs[2]);
trace_sys_enter(regs, regs->int_code & 0xffff);
if (is_compat_task())
mask = 0xffffffff;
audit_syscall_entry(regs->gprs[2], regs->orig_gpr2 & mask,
audit_syscall_entry(regs->int_code & 0xffff, regs->orig_gpr2 & mask,
regs->gprs[3] &mask, regs->gprs[4] &mask,
regs->gprs[5] &mask);
if ((signed long)regs->gprs[2] >= NR_syscalls) {
regs->gprs[2] = -ENOSYS;
ret = -ENOSYS;
}
return regs->gprs[2];
skip:
clear_pt_regs_flag(regs, PIF_SYSCALL);
return ret;
}
asmlinkage void do_syscall_trace_exit(struct pt_regs *regs)

View file

@@ -301,6 +301,7 @@ void update_vsyscall(struct timekeeper *tk)
vdso_data->tk_mult = tk->tkr_mono.mult;
vdso_data->tk_shift = tk->tkr_mono.shift;
vdso_data->hrtimer_res = hrtimer_resolution;
smp_wmb();
++vdso_data->tb_update_count;
}

View file

@@ -331,7 +331,7 @@ EXPORT_SYMBOL_GPL(arch_make_page_accessible);
static ssize_t uv_query_facilities(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return snprintf(page, PAGE_SIZE, "%lx\n%lx\n%lx\n%lx\n",
return scnprintf(page, PAGE_SIZE, "%lx\n%lx\n%lx\n%lx\n",
uv_info.inst_calls_list[0],
uv_info.inst_calls_list[1],
uv_info.inst_calls_list[2],
@@ -344,7 +344,7 @@ static struct kobj_attribute uv_query_facilities_attr =
static ssize_t uv_query_max_guest_cpus(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return snprintf(page, PAGE_SIZE, "%d\n",
return scnprintf(page, PAGE_SIZE, "%d\n",
uv_info.max_guest_cpus);
}
@@ -354,7 +354,7 @@ static struct kobj_attribute uv_query_max_guest_cpus_attr =
static ssize_t uv_query_max_guest_vms(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return snprintf(page, PAGE_SIZE, "%d\n",
return scnprintf(page, PAGE_SIZE, "%d\n",
uv_info.max_num_sec_conf);
}
@@ -364,7 +364,7 @@ static struct kobj_attribute uv_query_max_guest_vms_attr =
static ssize_t uv_query_max_guest_addr(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return snprintf(page, PAGE_SIZE, "%lx\n",
return scnprintf(page, PAGE_SIZE, "%lx\n",
uv_info.max_sec_stor_addr);
}

View file

@@ -18,8 +18,8 @@ KBUILD_AFLAGS_64 += -m64 -s
KBUILD_CFLAGS_64 := $(filter-out -m64,$(KBUILD_CFLAGS))
KBUILD_CFLAGS_64 += -m64 -fPIC -shared -fno-common -fno-builtin
KBUILD_CFLAGS_64 += -nostdlib -Wl,-soname=linux-vdso64.so.1 \
-Wl,--hash-style=both
ldflags-y := -fPIC -shared -nostdlib -soname=linux-vdso64.so.1 \
--hash-style=both --build-id -T
$(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_64)
$(targets:%=$(obj)/%.dbg): KBUILD_AFLAGS = $(KBUILD_AFLAGS_64)
@@ -37,8 +37,8 @@ KASAN_SANITIZE := n
$(obj)/vdso64_wrapper.o : $(obj)/vdso64.so
# link rule for the .so file, .lds has to be first
$(obj)/vdso64.so.dbg: $(src)/vdso64.lds $(obj-vdso64) FORCE
$(call if_changed,vdso64ld)
$(obj)/vdso64.so.dbg: $(obj)/vdso64.lds $(obj-vdso64) FORCE
$(call if_changed,ld)
# strip rule for the .so file
$(obj)/%.so: OBJCOPYFLAGS := -S
@@ -50,8 +50,6 @@ $(obj-vdso64): %.o: %.S FORCE
$(call if_changed_dep,vdso64as)
# actual build commands
quiet_cmd_vdso64ld = VDSO64L $@
cmd_vdso64ld = $(CC) $(c_flags) -Wl,-T $(filter %.lds %.o,$^) -o $@
quiet_cmd_vdso64as = VDSO64A $@
cmd_vdso64as = $(CC) $(a_flags) -c -o $@ $<

View file

@@ -17,12 +17,14 @@
.type __kernel_clock_getres,@function
__kernel_clock_getres:
CFI_STARTPROC
larl %r1,4f
larl %r1,3f
lg %r0,0(%r1)
cghi %r2,__CLOCK_REALTIME_COARSE
je 0f
cghi %r2,__CLOCK_MONOTONIC_COARSE
je 0f
larl %r1,3f
larl %r1,_vdso_data
llgf %r0,__VDSO_CLOCK_REALTIME_RES(%r1)
cghi %r2,__CLOCK_REALTIME
je 0f
cghi %r2,__CLOCK_MONOTONIC
@@ -36,7 +38,6 @@ __kernel_clock_getres:
jz 2f
0: ltgr %r3,%r3
jz 1f /* res == NULL */
lg %r0,0(%r1)
xc 0(8,%r3),0(%r3) /* set tp->tv_sec to zero */
stg %r0,8(%r3) /* store tp->tv_usec */
1: lghi %r2,0
@@ -45,6 +46,5 @@ __kernel_clock_getres:
svc 0
br %r14
CFI_ENDPROC
3: .quad __CLOCK_REALTIME_RES
4: .quad __CLOCK_COARSE_RES
3: .quad __CLOCK_COARSE_RES
.size __kernel_clock_getres,.-__kernel_clock_getres

View file

@@ -105,7 +105,7 @@ static int bad_address(void *p)
{
unsigned long dummy;
return probe_kernel_address((unsigned long *)p, dummy);
return get_kernel_nofault(dummy, (unsigned long *)p);
}
static void dump_pagetable(unsigned long asce, unsigned long address)

View file

@@ -119,7 +119,7 @@ static void ftrace_mod_code(void)
* But if one were to fail, then they all should, and if one were
* to succeed, then they all should.
*/
mod_code_status = probe_kernel_write(mod_code_ip, mod_code_newcode,
mod_code_status = copy_to_kernel_nofault(mod_code_ip, mod_code_newcode,
MCOUNT_INSN_SIZE);
/* if we fail, then kill any new writers */
@@ -203,7 +203,7 @@ static int ftrace_modify_code(unsigned long ip, unsigned char *old_code,
*/
/* read the text we want to modify */
if (probe_kernel_read(replaced, (void *)ip, MCOUNT_INSN_SIZE))
if (copy_from_kernel_nofault(replaced, (void *)ip, MCOUNT_INSN_SIZE))
return -EFAULT;
/* Make sure it is what we expect it to be */
@@ -268,7 +268,7 @@ static int ftrace_mod(unsigned long ip, unsigned long old_addr,
{
unsigned char code[MCOUNT_INSN_SIZE];
if (probe_kernel_read(code, (void *)ip, MCOUNT_INSN_SIZE))
if (copy_from_kernel_nofault(code, (void *)ip, MCOUNT_INSN_SIZE))
return -EFAULT;
if (old_addr != __raw_readl((unsigned long *)code))

View file

@@ -118,7 +118,7 @@ int is_valid_bugaddr(unsigned long addr)
if (addr < PAGE_OFFSET)
return 0;
if (probe_kernel_address((insn_size_t *)addr, opcode))
if (get_kernel_nofault(opcode, (insn_size_t *)addr))
return 0;
if (opcode == TRAPA_BUG_OPCODE)
return 1;

View file

@@ -7,7 +7,7 @@
#include <linux/kernel.h>
#include <os.h>
bool probe_kernel_read_allowed(const void *src, size_t size)
bool copy_from_kernel_nofault_allowed(const void *src, size_t size)
{
void *psrc = (void *)rounddown((unsigned long)src, PAGE_SIZE);

View file

@@ -278,7 +278,7 @@ static inline unsigned long *regs_get_kernel_stack_nth_addr(struct pt_regs *regs
}
/* To avoid include hell, we can't include uaccess.h */
extern long probe_kernel_read(void *dst, const void *src, size_t size);
extern long copy_from_kernel_nofault(void *dst, const void *src, size_t size);
/**
* regs_get_kernel_stack_nth() - get Nth entry of the stack
@@ -298,7 +298,7 @@ static inline unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
addr = regs_get_kernel_stack_nth_addr(regs, n);
if (addr) {
ret = probe_kernel_read(&val, addr, sizeof(val));
ret = copy_from_kernel_nofault(&val, addr, sizeof(val));
if (!ret)
return val;
}

View file

@@ -106,7 +106,7 @@ void show_opcodes(struct pt_regs *regs, const char *loglvl)
bad_ip = user_mode(regs) &&
__chk_range_not_ok(prologue, OPCODE_BUFSIZE, TASK_SIZE_MAX);
if (bad_ip || probe_kernel_read(opcodes, (u8 *)prologue,
if (bad_ip || copy_from_kernel_nofault(opcodes, (u8 *)prologue,
OPCODE_BUFSIZE)) {
printk("%sCode: Bad RIP value.\n", loglvl);
} else {

View file

@@ -86,7 +86,7 @@ static int ftrace_verify_code(unsigned long ip, const char *old_code)
* sure what we read is what we expected it to be before modifying it.
*/
/* read the text we want to modify */
if (probe_kernel_read(cur_code, (void *)ip, MCOUNT_INSN_SIZE)) {
if (copy_from_kernel_nofault(cur_code, (void *)ip, MCOUNT_INSN_SIZE)) {
WARN_ON(1);
return -EFAULT;
}
@@ -355,7 +355,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
/* Copy ftrace_caller onto the trampoline memory */
ret = probe_kernel_read(trampoline, (void *)start_offset, size);
ret = copy_from_kernel_nofault(trampoline, (void *)start_offset, size);
if (WARN_ON(ret < 0))
goto fail;
@@ -363,13 +363,13 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
/* The trampoline ends with ret(q) */
retq = (unsigned long)ftrace_stub;
ret = probe_kernel_read(ip, (void *)retq, RET_SIZE);
ret = copy_from_kernel_nofault(ip, (void *)retq, RET_SIZE);
if (WARN_ON(ret < 0))
goto fail;
if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
ip = trampoline + (ftrace_regs_caller_ret - ftrace_regs_caller);
ret = probe_kernel_read(ip, (void *)retq, RET_SIZE);
ret = copy_from_kernel_nofault(ip, (void *)retq, RET_SIZE);
if (WARN_ON(ret < 0))
goto fail;
}
@@ -506,7 +506,7 @@ static void *addr_from_call(void *ptr)
union text_poke_insn call;
int ret;
ret = probe_kernel_read(&call, ptr, CALL_INSN_SIZE);
ret = copy_from_kernel_nofault(&call, ptr, CALL_INSN_SIZE);
if (WARN_ON_ONCE(ret < 0))
return NULL;

View file

@@ -732,11 +732,11 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
int err;
bpt->type = BP_BREAKPOINT;
err = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr,
err = copy_from_kernel_nofault(bpt->saved_instr, (char *)bpt->bpt_addr,
BREAK_INSTR_SIZE);
if (err)
return err;
err = probe_kernel_write((char *)bpt->bpt_addr,
err = copy_to_kernel_nofault((char *)bpt->bpt_addr,
arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE);
if (!err)
return err;
@@ -768,7 +768,7 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
return 0;
knl_write:
return probe_kernel_write((char *)bpt->bpt_addr,
return copy_to_kernel_nofault((char *)bpt->bpt_addr,
(char *)bpt->saved_instr, BREAK_INSTR_SIZE);
}

View file

@@ -243,7 +243,7 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
* Fortunately, we know that the original code is the ideal 5-byte
* long NOP.
*/
if (probe_kernel_read(buf, (void *)addr,
if (copy_from_kernel_nofault(buf, (void *)addr,
MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
return 0UL;
@@ -346,7 +346,8 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
return 0;
/* This can access kernel text if given address is not recovered */
if (probe_kernel_read(dest, (void *)recovered_insn, MAX_INSN_SIZE))
if (copy_from_kernel_nofault(dest, (void *)recovered_insn,
MAX_INSN_SIZE))
return 0;
kernel_insn_init(insn, dest, MAX_INSN_SIZE);
@@ -753,16 +754,11 @@ asm(
NOKPROBE_SYMBOL(kretprobe_trampoline);
STACK_FRAME_NON_STANDARD(kretprobe_trampoline);
static struct kprobe kretprobe_kprobe = {
.addr = (void *)kretprobe_trampoline,
};
/*
* Called from kretprobe_trampoline
*/
__used __visible void *trampoline_handler(struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb;
struct kretprobe_instance *ri = NULL;
struct hlist_head *head, empty_rp;
struct hlist_node *tmp;
@@ -772,16 +768,12 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
void *frame_pointer;
bool skipped = false;
preempt_disable();
/*
* Set a dummy kprobe for avoiding kretprobe recursion.
* Since kretprobe never run in kprobe handler, kprobe must not
* be running at this point.
*/
kcb = get_kprobe_ctlblk();
__this_cpu_write(current_kprobe, &kretprobe_kprobe);
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
kprobe_busy_begin();
INIT_HLIST_HEAD(&empty_rp);
kretprobe_hash_lock(current, &head, &flags);
@@ -857,7 +849,7 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
__this_cpu_write(current_kprobe, &ri->rp->kp);
ri->ret_addr = correct_ret_addr;
ri->rp->handler(ri, regs);
__this_cpu_write(current_kprobe, &kretprobe_kprobe);
__this_cpu_write(current_kprobe, &kprobe_busy);
}
recycle_rp_inst(ri, &empty_rp);
@@ -873,8 +865,7 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
kretprobe_hash_unlock(current, &flags);
__this_cpu_write(current_kprobe, NULL);
preempt_enable();
kprobe_busy_end();
hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
hlist_del(&ri->hlist);

View file

@@ -56,7 +56,7 @@ unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr)
* overwritten by jump destination address. In this case, original
* bytes must be recovered from op->optinsn.copied_insn buffer.
*/
if (probe_kernel_read(buf, (void *)addr,
if (copy_from_kernel_nofault(buf, (void *)addr,
MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
return 0UL;

View file

@@ -94,12 +94,12 @@ static bool match_id(struct pci_dev *pdev, unsigned short vendor, unsigned short
}
static bool probe_list(struct pci_dev *pdev, unsigned short vendor,
const unsigned char *rom_list)
const void *rom_list)
{
unsigned short device;
do {
if (probe_kernel_address(rom_list, device) != 0)
if (get_kernel_nofault(device, rom_list) != 0)
device = 0;
if (device && match_id(pdev, vendor, device))
@@ -119,19 +119,19 @@ static struct resource *find_oprom(struct pci_dev *pdev)
for (i = 0; i < ARRAY_SIZE(adapter_rom_resources); i++) {
struct resource *res = &adapter_rom_resources[i];
unsigned short offset, vendor, device, list, rev;
const unsigned char *rom;
const void *rom;
if (res->end == 0)
break;
rom = isa_bus_to_virt(res->start);
if (probe_kernel_address(rom + 0x18, offset) != 0)
if (get_kernel_nofault(offset, rom + 0x18) != 0)
continue;
if (probe_kernel_address(rom + offset + 0x4, vendor) != 0)
if (get_kernel_nofault(vendor, rom + offset + 0x4) != 0)
continue;
if (probe_kernel_address(rom + offset + 0x6, device) != 0)
if (get_kernel_nofault(device, rom + offset + 0x6) != 0)
continue;
if (match_id(pdev, vendor, device)) {
@@ -139,8 +139,8 @@ static struct resource *find_oprom(struct pci_dev *pdev)
break;
}
if (probe_kernel_address(rom + offset + 0x8, list) == 0 &&
probe_kernel_address(rom + offset + 0xc, rev) == 0 &&
if (get_kernel_nofault(list, rom + offset + 0x8) == 0 &&
get_kernel_nofault(rev, rom + offset + 0xc) == 0 &&
rev >= 3 && list &&
probe_list(pdev, vendor, rom + offset + list)) {
oprom = res;
@@ -183,14 +183,14 @@ static int __init romsignature(const unsigned char *rom)
const unsigned short * const ptr = (const unsigned short *)rom;
unsigned short sig;
return probe_kernel_address(ptr, sig) == 0 && sig == ROMSIGNATURE;
return get_kernel_nofault(sig, ptr) == 0 && sig == ROMSIGNATURE;
}
static int __init romchecksum(const unsigned char *rom, unsigned long length)
{
unsigned char sum, c;
for (sum = 0; length && probe_kernel_address(rom++, c) == 0; length--)
for (sum = 0; length && get_kernel_nofault(c, rom++) == 0; length--)
sum += c;
return !length && !sum;
}
@@ -211,7 +211,7 @@ void __init probe_roms(void)
video_rom_resource.start = start;
if (probe_kernel_address(rom + 2, c) != 0)
if (get_kernel_nofault(c, rom + 2) != 0)
continue;
/* 0 < length <= 0x7f * 512, historically */
@@ -249,7 +249,7 @@ void __init probe_roms(void)
if (!romsignature(rom))
continue;
if (probe_kernel_address(rom + 2, c) != 0)
if (get_kernel_nofault(c, rom + 2) != 0)
continue;
/* 0 < length <= 0x7f * 512, historically */
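For illustration only (not part of the patch): romchecksum() above accepts an option ROM only if the byte-sum of the whole image is zero modulo 256, with the ROM's length taken from byte 2 of its header in 512-byte units. A standalone sketch of the same arithmetic; the buffer contents are made up for the example.
/* ROM checksum rule: the sum of all bytes must wrap to zero mod 256. */
#include <stdio.h>
#include <string.h>
static int rom_checksum_ok(const unsigned char *rom, unsigned long length)
{
	unsigned char sum = 0;
	while (length--)
		sum += *rom++;
	return sum == 0;
}
int main(void)
{
	unsigned char rom[512];
	memset(rom, 0, sizeof(rom));
	rom[0] = 0x55;		/* example payload byte */
	rom[511] = 0xab;	/* fix-up byte: 0x55 + 0xab = 0x100, i.e. 0 mod 256 */
	printf("checksum ok: %d\n", rom_checksum_ok(rom, sizeof(rom)));
	return 0;
}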

View file

@@ -91,7 +91,7 @@ int is_valid_bugaddr(unsigned long addr)
if (addr < TASK_SIZE_MAX)
return 0;
if (probe_kernel_address((unsigned short *)addr, ud))
if (get_kernel_nofault(ud, (unsigned short *)addr))
return 0;
return ud == INSN_UD0 || ud == INSN_UD2;
@@ -488,7 +488,8 @@ static enum kernel_gp_hint get_kernel_gp_address(struct pt_regs *regs,
u8 insn_buf[MAX_INSN_SIZE];
struct insn insn;
if (probe_kernel_read(insn_buf, (void *)regs->ip, MAX_INSN_SIZE))
if (copy_from_kernel_nofault(insn_buf, (void *)regs->ip,
MAX_INSN_SIZE))
return GP_NO_HINT;
kernel_insn_init(&insn, insn_buf, MAX_INSN_SIZE);

View file

@@ -99,7 +99,7 @@ check_prefetch_opcode(struct pt_regs *regs, unsigned char *instr,
return !instr_lo || (instr_lo>>1) == 1;
case 0x00:
/* Prefetch instruction is 0x0F0D or 0x0F18 */
if (probe_kernel_address(instr, opcode))
if (get_kernel_nofault(opcode, instr))
return 0;
*prefetch = (instr_lo == 0xF) &&
@@ -133,7 +133,7 @@ is_prefetch(struct pt_regs *regs, unsigned long error_code, unsigned long addr)
while (instr < max_instr) {
unsigned char opcode;
if (probe_kernel_address(instr, opcode))
if (get_kernel_nofault(opcode, instr))
break;
instr++;
@@ -301,7 +301,7 @@ static int bad_address(void *p)
{
unsigned long dummy;
return probe_kernel_address((unsigned long *)p, dummy);
return get_kernel_nofault(dummy, (unsigned long *)p);
}
static void dump_pagetable(unsigned long address)
@@ -442,7 +442,7 @@ static void show_ldttss(const struct desc_ptr *gdt, const char *name, u16 index)
return;
}
if (probe_kernel_read(&desc, (void *)(gdt->address + offset),
if (copy_from_kernel_nofault(&desc, (void *)(gdt->address + offset),
sizeof(struct ldttss_desc))) {
pr_alert("%s: 0x%hx -- GDT entry is not readable\n",
name, index);

View file

@@ -737,7 +737,7 @@ static void __init test_wp_bit(void)
__set_fixmap(FIX_WP_TEST, __pa_symbol(empty_zero_page), PAGE_KERNEL_RO);
if (probe_kernel_write((char *)fix_to_virt(FIX_WP_TEST), &z, 1)) {
if (copy_to_kernel_nofault((char *)fix_to_virt(FIX_WP_TEST), &z, 1)) {
clear_fixmap(FIX_WP_TEST);
printk(KERN_CONT "Ok.\n");
return;

View file

@@ -9,7 +9,7 @@ static __always_inline u64 canonical_address(u64 vaddr, u8 vaddr_bits)
return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
}
bool probe_kernel_read_allowed(const void *unsafe_src, size_t size)
bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
{
unsigned long vaddr = (unsigned long)unsafe_src;
@@ -22,7 +22,7 @@ bool probe_kernel_read_allowed(const void *unsafe_src, size_t size)
canonical_address(vaddr, boot_cpu_data.x86_virt_bits) == vaddr;
}
#else
bool probe_kernel_read_allowed(const void *unsafe_src, size_t size)
bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
{
return (unsigned long)unsafe_src >= TASK_SIZE_MAX;
}

View file

@@ -302,7 +302,7 @@ static const struct pci_raw_ops *__init pci_find_bios(void)
check <= (union bios32 *) __va(0xffff0);
++check) {
long sig;
if (probe_kernel_address(&check->fields.signature, sig))
if (get_kernel_nofault(sig, &check->fields.signature))
continue;
if (check->fields.signature != BIOS32_SIGNATURE)

View file

@@ -287,8 +287,8 @@ void intel_scu_devices_create(void)
adapter = i2c_get_adapter(i2c_bus[i]);
if (adapter) {
client = i2c_new_device(adapter, i2c_devs[i]);
if (!client)
client = i2c_new_client_device(adapter, i2c_devs[i]);
if (IS_ERR(client))
pr_err("can't create i2c device %s\n",
i2c_devs[i]->type);
} else

View file

@@ -34,6 +34,7 @@ KCOV_INSTRUMENT := n
PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss
PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
PURGATORY_CFLAGS += $(call cc-option,-fno-stack-protector)
# Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
# in turn leaves some undefined symbols like __fentry__ in purgatory and not

View file

@@ -386,7 +386,7 @@ static void set_aliased_prot(void *v, pgprot_t prot)
preempt_disable();
probe_kernel_read(&dummy, v, 1);
copy_from_kernel_nofault(&dummy, v, 1);
if (HYPERVISOR_update_va_mapping((unsigned long)v, pte, 0))
BUG();

View file

@@ -376,7 +376,7 @@ static void __blk_mq_all_tag_iter(struct blk_mq_tags *tags,
void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
void *priv)
{
return __blk_mq_all_tag_iter(tags, fn, priv, BT_TAG_ITER_STATIC_RQS);
__blk_mq_all_tag_iter(tags, fn, priv, BT_TAG_ITER_STATIC_RQS);
}
/**

View file

@@ -3479,7 +3479,9 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
if (set->nr_maps == 1 && nr_hw_queues > nr_cpu_ids)
nr_hw_queues = nr_cpu_ids;
if (nr_hw_queues < 1 || nr_hw_queues == set->nr_hw_queues)
if (nr_hw_queues < 1)
return;
if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues)
return;
list_for_each_entry(q, &set->tag_list, tag_set_list)

View file

@@ -910,7 +910,7 @@ static bool ldm_parse_dsk4 (const u8 *buffer, int buflen, struct vblk *vb)
return false;
disk = &vb->vblk.disk;
uuid_copy(&disk->disk_id, (uuid_t *)(buffer + 0x18 + r_name));
import_uuid(&disk->disk_id, buffer + 0x18 + r_name);
return true;
}

View file

@@ -93,7 +93,7 @@ struct frag { /* VBLK Fragment handling */
u8 num; /* Total number of records */
u8 rec; /* This is record number n */
u8 map; /* Which portions are in use */
u8 data[0];
u8 data[];
};
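For illustration only (not part of the patch): "u8 data[0]" is the old GNU zero-length-array idiom, while "u8 data[]" is the standard C99 flexible array member; it must be the last member, contributes nothing to sizeof, and allocation works the same way. A minimal sketch with a hypothetical stand-in struct (not the kernel's struct frag):
/* Flexible array member sketch: header plus trailing payload in one allocation. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct frag_example {		/* hypothetical stand-in, for illustration only */
	unsigned char num;	/* total number of records */
	unsigned char rec;	/* this is record number n */
	unsigned char data[];	/* flexible array member: must be last, adds no size */
};
int main(void)
{
	size_t payload = 16;
	struct frag_example *f = malloc(sizeof(*f) + payload);
	if (!f)
		return 1;
	f->num = 1;
	f->rec = 0;
	memset(f->data, 0xab, payload);	/* trailing bytes live right after the header */
	printf("sizeof(struct frag_example) = %zu\n", sizeof(struct frag_example));
	free(f);
	return 0;
}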
/* In memory LDM database structures. */

View file

@@ -178,8 +178,6 @@ static int cryptomgr_schedule_probe(struct crypto_larval *larval)
if (IS_ERR(thread))
goto err_put_larval;
wait_for_completion_interruptible(&larval->completion);
return NOTIFY_STOP;
err_put_larval:

View file

@@ -74,14 +74,10 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
return PTR_ERR(areq);
/* convert iovecs of output buffers into RX SGL */
err = af_alg_get_rsgl(sk, msg, flags, areq, -1, &len);
err = af_alg_get_rsgl(sk, msg, flags, areq, ctx->used, &len);
if (err)
goto free;
/* Process only as much RX buffers for which we have TX data */
if (len > ctx->used)
len = ctx->used;
/*
* If more buffers are to be expected to be processed, process only
* full block size buffers.

Some files were not shown because too many files have changed in this diff.