diff --git a/CREDITS b/CREDITS index 5f67fc1cd335..d61ba9b5117f 100644 --- a/CREDITS +++ b/CREDITS @@ -319,6 +319,9 @@ N: Ohad Ben Cohen E: ohad@wizery.com D: Remote Processor (remoteproc) subsystem D: Remote Processor Messaging (rpmsg) subsystem +D: Hardware spinlock (hwspinlock) subsystem +D: OMAP hwspinlock driver +D: OMAP remoteproc driver N: Krzysztof Benedyczak E: golbi@mat.uni.torun.pl diff --git a/Documentation/driver-api/surface_aggregator/ssh.rst b/Documentation/driver-api/surface_aggregator/ssh.rst index b955b673838b..58a757319931 100644 --- a/Documentation/driver-api/surface_aggregator/ssh.rst +++ b/Documentation/driver-api/surface_aggregator/ssh.rst @@ -39,7 +39,7 @@ Note that the standard disclaimer for this subsystem also applies to this document: All of this has been reverse-engineered and may thus be erroneous and/or incomplete. -All CRCs used in the following are two-byte ``crc_ccitt_false(0xffff, ...)``. +All CRCs used in the following are two-byte ``crc_itu_t(0xffff, ...)``. All multi-byte values are little-endian, there is no implicit padding between values. diff --git a/Documentation/filesystems/squashfs.rst b/Documentation/filesystems/squashfs.rst index df42106bae71..4af8d6207509 100644 --- a/Documentation/filesystems/squashfs.rst +++ b/Documentation/filesystems/squashfs.rst @@ -64,6 +64,66 @@ obtained from this site also. The squashfs-tools development tree is now located on kernel.org git://git.kernel.org/pub/scm/fs/squashfs/squashfs-tools.git +2.1 Mount options +----------------- +=================== ========================================================= +errors=%s Specify whether squashfs errors trigger a kernel panic + or not + + ========== ============================================= + continue errors don't trigger a panic (default) + panic trigger a panic when errors are encountered, + similar to several other filesystems (e.g. + btrfs, ext4, f2fs, GFS2, jfs, ntfs, ubifs) + + This allows a kernel dump to be saved, + useful for analyzing and debugging the + corruption. + ========== ============================================= +threads=%s Select the decompression mode or the number of threads + + If SQUASHFS_CHOICE_DECOMP_BY_MOUNT is set: + + ========== ============================================= + single use single-threaded decompression (default) + + Only one block (data or metadata) can be + decompressed at any one time. This limits + CPU and memory usage to a minimum, but it + also gives poor performance on parallel I/O + workloads when using multiple CPU machines + due to waiting on decompressor availability. + multi use up to two parallel decompressors per core + + If you have a parallel I/O workload and your + system has enough memory, using this option + may improve overall I/O performance. It + dynamically allocates decompressors on a + demand basis. + percpu use a maximum of one decompressor per core + + It uses percpu variables to ensure + decompression is load-balanced across the + cores. + 1|2|3|... configure the number of threads used for + decompression + + The upper limit is num_online_cpus() * 2. + ========== ============================================= + + If SQUASHFS_CHOICE_DECOMP_BY_MOUNT is **not** set and + SQUASHFS_DECOMP_MULTI, SQUASHFS_MOUNT_DECOMP_THREADS are + both set: + + ========== ============================================= + 2|3|... configure the number of threads used for + decompression + + The upper limit is num_online_cpus() * 2. 
+ ========== ============================================= + +=================== ========================================================= + 3. Squashfs Filesystem Design ----------------------------- diff --git a/Documentation/translations/ja_JP/SubmitChecklist b/Documentation/translations/ja_JP/SubmitChecklist index 4429447b0965..1759c6b452d6 100644 --- a/Documentation/translations/ja_JP/SubmitChecklist +++ b/Documentation/translations/ja_JP/SubmitChecklist @@ -56,8 +56,8 @@ Linux カーネルパッチ投稿者向けチェックリスト 9: sparseを利用してちゃんとしたコードチェックをしてください。 -10: 'make checkstack' と 'make namespacecheck' を利用し、問題が発見されたら - 修正してください。'make checkstack' は明示的に問題を示しませんが、どれか +10: 'make checkstack' を利用し、問題が発見されたら修正してください。 + 'make checkstack' は明示的に問題を示しませんが、どれか 1つの関数が512バイトより大きいスタックを使っていれば、修正すべき候補と なります。 diff --git a/Documentation/translations/zh_CN/process/submit-checklist.rst b/Documentation/translations/zh_CN/process/submit-checklist.rst index 3d6ee21c74ae..10536b74aeec 100644 --- a/Documentation/translations/zh_CN/process/submit-checklist.rst +++ b/Documentation/translations/zh_CN/process/submit-checklist.rst @@ -53,8 +53,7 @@ Linux内核补丁提交检查单 9) 通过 sparse 清查。 (参见 Documentation/translations/zh_CN/dev-tools/sparse.rst ) -10) 使用 ``make checkstack`` 和 ``make namespacecheck`` 并修复他们发现的任何 - 问题。 +10) 使用 ``make checkstack`` 并修复他们发现的任何问题。 .. note:: diff --git a/Documentation/translations/zh_TW/process/submit-checklist.rst b/Documentation/translations/zh_TW/process/submit-checklist.rst index 942962d1e2f4..dda456a73147 100644 --- a/Documentation/translations/zh_TW/process/submit-checklist.rst +++ b/Documentation/translations/zh_TW/process/submit-checklist.rst @@ -56,8 +56,7 @@ Linux內核補丁提交檢查單 9) 通過 sparse 清查。 (參見 Documentation/translations/zh_CN/dev-tools/sparse.rst ) -10) 使用 ``make checkstack`` 和 ``make namespacecheck`` 並修復他們發現的任何 - 問題。 +10) 使用 ``make checkstack`` 並修復他們發現的任何問題。 .. note:: diff --git a/MAINTAINERS b/MAINTAINERS index 8ed56b374eae..4f48ecfc5c58 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9283,7 +9283,6 @@ F: drivers/char/hw_random/ F: include/linux/hw_random.h HARDWARE SPINLOCK CORE -M: Ohad Ben-Cohen M: Bjorn Andersson R: Baolin Wang L: linux-remoteproc@vger.kernel.org @@ -15715,9 +15714,8 @@ F: Documentation/devicetree/bindings/gpio/ti,omap-gpio.yaml F: drivers/gpio/gpio-omap.c OMAP HARDWARE SPINLOCK SUPPORT -M: Ohad Ben-Cohen L: linux-omap@vger.kernel.org -S: Maintained +S: Orphan F: drivers/hwspinlock/omap_hwspinlock.c OMAP HS MMC SUPPORT diff --git a/Makefile b/Makefile index c6f549f6a4ae..f1b2fd977275 100644 --- a/Makefile +++ b/Makefile @@ -1576,7 +1576,8 @@ help: echo ' (default: $(INSTALL_HDR_PATH))'; \ echo '' @echo 'Static analysers:' - @echo ' checkstack - Generate a list of stack hogs' + @echo ' checkstack - Generate a list of stack hogs and consider all functions' + @echo ' with a stack size larger than MINSTACKSIZE (default: 100)' @echo ' versioncheck - Sanity check on version.h usage' @echo ' includecheck - Check for duplicate included header files' @echo ' export_report - List the usages of all exported symbols' @@ -2016,9 +2017,10 @@ CHECKSTACK_ARCH := $(SUBARCH) else CHECKSTACK_ARCH := $(ARCH) endif +MINSTACKSIZE ?= 100 checkstack: $(OBJDUMP) -d vmlinux $$(find . 
-name '*.ko') | \ - $(PERL) $(srctree)/scripts/checkstack.pl $(CHECKSTACK_ARCH) + $(PERL) $(srctree)/scripts/checkstack.pl $(CHECKSTACK_ARCH) $(MINSTACKSIZE) kernelrelease: @$(filechk_kernel.release) diff --git a/arch/Kconfig b/arch/Kconfig index 8c8901f80586..5ca66aad0d08 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -301,17 +301,8 @@ config ARCH_HAS_DMA_CLEAR_UNCACHED config ARCH_HAS_CPU_FINALIZE_INIT bool -# Select if arch init_task must go in the __init_task_data section -config ARCH_TASK_STRUCT_ON_STACK - bool - -# Select if arch has its private alloc_task_struct() function -config ARCH_TASK_STRUCT_ALLOCATOR - bool - config HAVE_ARCH_THREAD_STRUCT_WHITELIST bool - depends on !ARCH_TASK_STRUCT_ALLOCATOR help An architecture should select this to provide hardened usercopy knowledge about what region of the thread_struct should be @@ -320,10 +311,6 @@ config HAVE_ARCH_THREAD_STRUCT_WHITELIST should be implemented. Without this, the entire thread_struct field in task_struct will be left whitelisted. -# Select if arch has its private alloc_thread_stack() function -config ARCH_THREAD_STACK_ALLOCATOR - bool - # Select if arch wants to size task_struct dynamically via arch_task_struct_size: config ARCH_WANTS_DYNAMIC_TASK_STRUCT bool diff --git a/arch/alpha/lib/Makefile b/arch/alpha/lib/Makefile index 1cc74f7b50ef..6a779b9018fd 100644 --- a/arch/alpha/lib/Makefile +++ b/arch/alpha/lib/Makefile @@ -4,7 +4,6 @@ # asflags-y := $(KBUILD_CFLAGS) -ccflags-y := -Werror # Many of these routines have implementations tuned for ev6. # Choose them iff we're targeting ev6 specifically. diff --git a/arch/alpha/mm/Makefile b/arch/alpha/mm/Makefile index bd770302eb82..101dbd06b4ce 100644 --- a/arch/alpha/mm/Makefile +++ b/arch/alpha/mm/Makefile @@ -3,6 +3,4 @@ # Makefile for the linux alpha-specific parts of the memory manager. 
# -ccflags-y := -Werror - obj-y := init.o fault.o diff --git a/arch/arm64/kernel/kexec_image.c b/arch/arm64/kernel/kexec_image.c index 636be6715155..532d72ea42ee 100644 --- a/arch/arm64/kernel/kexec_image.c +++ b/arch/arm64/kernel/kexec_image.c @@ -122,9 +122,9 @@ static void *image_load(struct kimage *image, kernel_segment->memsz -= text_offset; image->start = kernel_segment->mem; - pr_debug("Loaded kernel at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - kernel_segment->mem, kbuf.bufsz, - kernel_segment->memsz); + kexec_dprintk("Loaded kernel at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + kernel_segment->mem, kbuf.bufsz, + kernel_segment->memsz); return NULL; } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 078910db77a4..b38aae5b488d 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -32,26 +32,12 @@ static void _kexec_image_info(const char *func, int line, const struct kimage *kimage) { - unsigned long i; - - pr_debug("%s:%d:\n", func, line); - pr_debug(" kexec kimage info:\n"); - pr_debug(" type: %d\n", kimage->type); - pr_debug(" start: %lx\n", kimage->start); - pr_debug(" head: %lx\n", kimage->head); - pr_debug(" nr_segments: %lu\n", kimage->nr_segments); - pr_debug(" dtb_mem: %pa\n", &kimage->arch.dtb_mem); - pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); - pr_debug(" el2_vectors: %pa\n", &kimage->arch.el2_vectors); - - for (i = 0; i < kimage->nr_segments; i++) { - pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n", - i, - kimage->segment[i].mem, - kimage->segment[i].mem + kimage->segment[i].memsz, - kimage->segment[i].memsz, - kimage->segment[i].memsz / PAGE_SIZE); - } + kexec_dprintk("%s:%d:\n", func, line); + kexec_dprintk(" kexec kimage info:\n"); + kexec_dprintk(" type: %d\n", kimage->type); + kexec_dprintk(" head: %lx\n", kimage->head); + kexec_dprintk(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); + kexec_dprintk(" el2_vectors: %pa\n", &kimage->arch.el2_vectors); } void machine_kexec_cleanup(struct kimage *kimage) diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c index a11a6e14ba89..0e017358f4ba 100644 --- a/arch/arm64/kernel/machine_kexec_file.c +++ b/arch/arm64/kernel/machine_kexec_file.c @@ -127,8 +127,8 @@ int load_other_segments(struct kimage *image, image->elf_load_addr = kbuf.mem; image->elf_headers_sz = headers_sz; - pr_debug("Loaded elf core header at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - image->elf_load_addr, kbuf.bufsz, kbuf.memsz); + kexec_dprintk("Loaded elf core header at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + image->elf_load_addr, kbuf.bufsz, kbuf.memsz); } /* load initrd */ @@ -148,8 +148,8 @@ int load_other_segments(struct kimage *image, goto out_err; initrd_load_addr = kbuf.mem; - pr_debug("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - initrd_load_addr, kbuf.bufsz, kbuf.memsz); + kexec_dprintk("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + initrd_load_addr, kbuf.bufsz, kbuf.memsz); } /* load dtb */ @@ -179,8 +179,8 @@ int load_other_segments(struct kimage *image, image->arch.dtb = dtb; image->arch.dtb_mem = kbuf.mem; - pr_debug("Loaded dtb at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - kbuf.mem, kbuf.bufsz, kbuf.memsz); + kexec_dprintk("Loaded dtb at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + kbuf.mem, kbuf.bufsz, kbuf.memsz); return 0; diff --git a/arch/hexagon/include/asm/irq.h b/arch/hexagon/include/asm/irq.h index 1f7f1292f701..a60d26754caa 100644 --- a/arch/hexagon/include/asm/irq.h +++ b/arch/hexagon/include/asm/irq.h @@ 
-20,4 +20,7 @@ #include +struct pt_regs; +void arch_do_IRQ(struct pt_regs *); + #endif diff --git a/arch/hexagon/kernel/process.c b/arch/hexagon/kernel/process.c index dd7f74ea2c20..2a77bfd75694 100644 --- a/arch/hexagon/kernel/process.c +++ b/arch/hexagon/kernel/process.c @@ -5,6 +5,7 @@ * Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. */ +#include #include #include #include @@ -152,6 +153,7 @@ unsigned long __get_wchan(struct task_struct *p) * Returns 0 if there's no need to re-check for more work. */ +int do_work_pending(struct pt_regs *regs, u32 thread_info_flags); int do_work_pending(struct pt_regs *regs, u32 thread_info_flags) { if (!(thread_info_flags & _TIF_WORK_MASK)) { diff --git a/arch/hexagon/kernel/reset.c b/arch/hexagon/kernel/reset.c index da36114d928f..efd70a8d2526 100644 --- a/arch/hexagon/kernel/reset.c +++ b/arch/hexagon/kernel/reset.c @@ -3,6 +3,7 @@ * Copyright (c) 2010-2011, The Linux Foundation. All rights reserved. */ +#include #include #include diff --git a/arch/hexagon/kernel/signal.c b/arch/hexagon/kernel/signal.c index bcba31e9e0ae..d301f4621553 100644 --- a/arch/hexagon/kernel/signal.c +++ b/arch/hexagon/kernel/signal.c @@ -220,7 +220,7 @@ no_restart: * Architecture-specific wrappers for signal-related system calls */ -asmlinkage int sys_rt_sigreturn(void) +SYSCALL_DEFINE0(rt_sigreturn) { struct pt_regs *regs = current_pt_regs(); struct rt_sigframe __user *frame; diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c index 4e8bee25b8c6..608884bc3396 100644 --- a/arch/hexagon/kernel/smp.c +++ b/arch/hexagon/kernel/smp.c @@ -79,7 +79,7 @@ void smp_vm_unmask_irq(void *info) * Specifically, first arg is irq, second is the irq_desc. */ -irqreturn_t handle_ipi(int irq, void *desc) +static irqreturn_t handle_ipi(int irq, void *desc) { int cpu = smp_processor_id(); struct ipi_data *ipi = &per_cpu(ipi_data, cpu); @@ -124,7 +124,7 @@ void __init smp_prepare_boot_cpu(void) * to point to current thread info */ -void start_secondary(void) +static void start_secondary(void) { unsigned long thread_ptr; unsigned int cpu, irq; diff --git a/arch/hexagon/kernel/time.c b/arch/hexagon/kernel/time.c index febc95714d75..f0f207e2a694 100644 --- a/arch/hexagon/kernel/time.c +++ b/arch/hexagon/kernel/time.c @@ -17,7 +17,9 @@ #include #include +#include #include +#include #define TIMER_ENABLE BIT(0) @@ -160,7 +162,7 @@ static irqreturn_t timer_interrupt(int irq, void *devid) * This runs just before the delay loop is calibrated, and * is used for delay calibration. 
*/ -void __init time_init_deferred(void) +static void __init time_init_deferred(void) { struct resource *resource = NULL; struct clock_event_device *ce_dev = &hexagon_clockevent_dev; diff --git a/arch/hexagon/kernel/traps.c b/arch/hexagon/kernel/traps.c index 6447763ce5a9..75e062722d28 100644 --- a/arch/hexagon/kernel/traps.c +++ b/arch/hexagon/kernel/traps.c @@ -281,6 +281,7 @@ static void cache_error(struct pt_regs *regs) /* * General exception handler */ +void do_genex(struct pt_regs *regs); void do_genex(struct pt_regs *regs) { /* @@ -331,13 +332,7 @@ void do_genex(struct pt_regs *regs) } } -/* Indirect system call dispatch */ -long sys_syscall(void) -{ - printk(KERN_ERR "sys_syscall invoked!\n"); - return -ENOSYS; -} - +void do_trap0(struct pt_regs *regs); void do_trap0(struct pt_regs *regs) { syscall_fn syscall; @@ -415,6 +410,7 @@ void do_trap0(struct pt_regs *regs) /* * Machine check exception handler */ +void do_machcheck(struct pt_regs *regs); void do_machcheck(struct pt_regs *regs) { /* Halt and catch fire */ @@ -425,6 +421,7 @@ void do_machcheck(struct pt_regs *regs) * Treat this like the old 0xdb trap. */ +void do_debug_exception(struct pt_regs *regs); void do_debug_exception(struct pt_regs *regs) { regs->hvmer.vmest &= ~HVM_VMEST_CAUSE_MSK; diff --git a/arch/hexagon/kernel/vdso.c b/arch/hexagon/kernel/vdso.c index b70970ac809f..2e4872d62124 100644 --- a/arch/hexagon/kernel/vdso.c +++ b/arch/hexagon/kernel/vdso.c @@ -10,6 +10,7 @@ #include #include +#include #include static struct page *vdso_page; diff --git a/arch/hexagon/kernel/vm_events.c b/arch/hexagon/kernel/vm_events.c index 59ef72e4a4e5..2b881a89b206 100644 --- a/arch/hexagon/kernel/vm_events.c +++ b/arch/hexagon/kernel/vm_events.c @@ -73,13 +73,6 @@ void show_regs(struct pt_regs *regs) pt_psp(regs), pt_badva(regs), ints_enabled(regs)); } -void dummy_handler(struct pt_regs *regs) -{ - unsigned int elr = pt_elr(regs); - printk(KERN_ERR "Unimplemented handler; ELR=0x%08x\n", elr); -} - - void arch_do_IRQ(struct pt_regs *regs) { int irq = pt_cause(regs); diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c index 146115c9de61..3458f39ca2ac 100644 --- a/arch/hexagon/mm/init.c +++ b/arch/hexagon/mm/init.c @@ -12,6 +12,7 @@ #include #include #include +#include #include /* @@ -86,7 +87,7 @@ void sync_icache_dcache(pte_t pte) * In this mode, we only have one pg_data_t * structure: contig_mem_data. 
*/ -void __init paging_init(void) +static void __init paging_init(void) { unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, }; diff --git a/arch/hexagon/mm/uaccess.c b/arch/hexagon/mm/uaccess.c index 650bca92f0b7..3204e9ba6d6f 100644 --- a/arch/hexagon/mm/uaccess.c +++ b/arch/hexagon/mm/uaccess.c @@ -35,11 +35,3 @@ __kernel_size_t __clear_user_hexagon(void __user *dest, unsigned long count) return count; } - -unsigned long clear_user_hexagon(void __user *dest, unsigned long count) -{ - if (!access_ok(dest, count)) - return count; - else - return __clear_user_hexagon(dest, count); -} diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c index 7295ea3f8cc8..3771fb453898 100644 --- a/arch/hexagon/mm/vm_fault.c +++ b/arch/hexagon/mm/vm_fault.c @@ -12,6 +12,7 @@ */ #include +#include #include #include #include @@ -33,7 +34,7 @@ /* * Canonical page fault handler */ -void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) +static void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) { struct vm_area_struct *vma; struct mm_struct *mm = current->mm; diff --git a/arch/hexagon/mm/vm_tlb.c b/arch/hexagon/mm/vm_tlb.c index 53482f2a9ff9..8b6405e2234b 100644 --- a/arch/hexagon/mm/vm_tlb.c +++ b/arch/hexagon/mm/vm_tlb.c @@ -14,6 +14,7 @@ #include #include #include +#include /* * Initial VM implementation has only one map active at a time, with diff --git a/arch/mips/Kbuild b/arch/mips/Kbuild index af2967bffb73..e2d623621a00 100644 --- a/arch/mips/Kbuild +++ b/arch/mips/Kbuild @@ -1,10 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 -# Fail on warnings - also for files referenced in subdirs -# -Werror can be disabled for specific files using: -# CFLAGS_ := -Wno-error -ifeq ($(W),) -subdir-ccflags-y := -Werror -endif # platform specific definitions include $(srctree)/arch/mips/Kbuild.platforms diff --git a/arch/mips/boot/compressed/dbg.c b/arch/mips/boot/compressed/dbg.c index f6728a8fd1c3..2f1ac38fe1cc 100644 --- a/arch/mips/boot/compressed/dbg.c +++ b/arch/mips/boot/compressed/dbg.c @@ -9,6 +9,8 @@ #include #include +#include "decompress.h" + void __weak putc(char c) { } diff --git a/arch/mips/boot/compressed/decompress.c b/arch/mips/boot/compressed/decompress.c index c5dd415254d3..adb6d5b0e6eb 100644 --- a/arch/mips/boot/compressed/decompress.c +++ b/arch/mips/boot/compressed/decompress.c @@ -19,6 +19,8 @@ #include #include +#include "decompress.h" + /* * These two variables specify the free mem region * that can be used for temporary malloc area @@ -26,20 +28,6 @@ unsigned long free_mem_ptr; unsigned long free_mem_end_ptr; -/* The linker tells us where the image is. */ -extern unsigned char __image_begin[], __image_end[]; - -/* debug interfaces */ -#ifdef CONFIG_DEBUG_ZBOOT -extern void puts(const char *s); -extern void puthex(unsigned long long val); -#else -#define puts(s) do {} while (0) -#define puthex(val) do {} while (0) -#endif - -extern char __appended_dtb[]; - void error(char *x) { puts("\n\n"); diff --git a/arch/mips/boot/compressed/decompress.h b/arch/mips/boot/compressed/decompress.h new file mode 100644 index 000000000000..073b64593b3d --- /dev/null +++ b/arch/mips/boot/compressed/decompress.h @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef _DECOMPRESSOR_H +#define _DECOMPRESSOR_H + +/* The linker tells us where the image is. 
*/ +extern unsigned char __image_begin[], __image_end[]; + +/* debug interfaces */ +#ifdef CONFIG_DEBUG_ZBOOT +extern void putc(char c); +extern void puts(const char *s); +extern void puthex(unsigned long long val); +#else +#define putc(s) do {} while (0) +#define puts(s) do {} while (0) +#define puthex(val) do {} while (0) +#endif + +extern char __appended_dtb[]; + +void error(char *x); +void decompress_kernel(unsigned long boot_heap_start); + +#endif diff --git a/arch/mips/boot/compressed/string.c b/arch/mips/boot/compressed/string.c index 0b593b709228..f0eb251e44e5 100644 --- a/arch/mips/boot/compressed/string.c +++ b/arch/mips/boot/compressed/string.c @@ -7,6 +7,7 @@ #include #include +#include void *memcpy(void *dest, const void *src, size_t n) { diff --git a/arch/mips/include/asm/cache.h b/arch/mips/include/asm/cache.h index 3424a7908c0f..8b08db3fb17a 100644 --- a/arch/mips/include/asm/cache.h +++ b/arch/mips/include/asm/cache.h @@ -17,5 +17,11 @@ #define __read_mostly __section(".data..read_mostly") extern void cache_noop(void); +extern void r3k_cache_init(void); +extern unsigned long r3k_cache_size(unsigned long); +extern unsigned long r3k_cache_lsize(unsigned long); +extern void r4k_cache_init(void); +extern void octeon_cache_init(void); +extern void au1x00_fixup_config_od(void); #endif /* _ASM_CACHE_H */ diff --git a/arch/mips/include/asm/jump_label.h b/arch/mips/include/asm/jump_label.h index c5c6864e64bc..081be98c71ef 100644 --- a/arch/mips/include/asm/jump_label.h +++ b/arch/mips/include/asm/jump_label.h @@ -15,6 +15,9 @@ #include #include +struct module; +extern void jump_label_apply_nops(struct module *mod); + #define JUMP_LABEL_NOP_SIZE 4 #ifdef CONFIG_64BIT diff --git a/arch/mips/include/asm/mach-loongson64/mmzone.h b/arch/mips/include/asm/mach-loongson64/mmzone.h index ebb1deaa77b9..a3d65d37b8b5 100644 --- a/arch/mips/include/asm/mach-loongson64/mmzone.h +++ b/arch/mips/include/asm/mach-loongson64/mmzone.h @@ -18,7 +18,6 @@ extern struct pglist_data *__node_data[]; #define NODE_DATA(n) (__node_data[n]) -extern void setup_zero_pages(void); extern void __init prom_init_numa_memory(void); #endif /* _ASM_MACH_MMZONE_H */ diff --git a/arch/mips/include/asm/mmzone.h b/arch/mips/include/asm/mmzone.h index 602a21aee9d4..14226ea42036 100644 --- a/arch/mips/include/asm/mmzone.h +++ b/arch/mips/include/asm/mmzone.h @@ -20,4 +20,6 @@ #define nid_to_addrbase(nid) 0 #endif +extern void setup_zero_pages(void); + #endif /* _ASM_MMZONE_H_ */ diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h index ae2cd37a38f0..ca7662cc65a7 100644 --- a/arch/mips/include/asm/processor.h +++ b/arch/mips/include/asm/processor.h @@ -402,4 +402,6 @@ extern int mips_set_process_fp_mode(struct task_struct *task, #define GET_FP_MODE(task) mips_get_process_fp_mode(task) #define SET_FP_MODE(task,value) mips_set_process_fp_mode(task, value) +void show_registers(struct pt_regs *regs); + #endif /* _ASM_PROCESSOR_H */ diff --git a/arch/mips/include/asm/r4kcache.h b/arch/mips/include/asm/r4kcache.h index 431a1c9d53fc..da1cd1bbdbc5 100644 --- a/arch/mips/include/asm/r4kcache.h +++ b/arch/mips/include/asm/r4kcache.h @@ -24,6 +24,10 @@ #include #include +extern void r5k_sc_init(void); +extern void rm7k_sc_init(void); +extern int mips_sc_init(void); + extern void (*r4k_blast_dcache)(void); extern void (*r4k_blast_icache)(void); diff --git a/arch/mips/include/asm/setup.h b/arch/mips/include/asm/setup.h index 8c56b862fd9c..4dce41138bad 100644 --- a/arch/mips/include/asm/setup.h +++ 
b/arch/mips/include/asm/setup.h @@ -27,5 +27,6 @@ extern unsigned long ebase; extern unsigned int hwrena; extern void per_cpu_trap_init(bool); extern void cpu_cache_init(void); +extern void tlb_init(void); #endif /* __SETUP_H */ diff --git a/arch/mips/include/asm/signal.h b/arch/mips/include/asm/signal.h index 23d6b8015c79..8de81ccef7ad 100644 --- a/arch/mips/include/asm/signal.h +++ b/arch/mips/include/asm/signal.h @@ -31,5 +31,6 @@ extern struct mips_abi mips_abi_32; extern int protected_save_fp_context(void __user *sc); extern int protected_restore_fp_context(void __user *sc); +void do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags); #endif /* _ASM_SIGNAL_H */ diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h index 901bc61fa7ae..695928637db4 100644 --- a/arch/mips/include/asm/smp.h +++ b/arch/mips/include/asm/smp.h @@ -63,6 +63,8 @@ extern asmlinkage void smp_bootstrap(void); extern void calculate_cpu_foreign_map(void); +asmlinkage void start_secondary(void); + /* * this function sends a 'reschedule' IPI to another CPU. * it goes straight through and wastes no time serializing diff --git a/arch/mips/include/asm/spram.h b/arch/mips/include/asm/spram.h index 373f2a5d495d..9f6a2cb1943a 100644 --- a/arch/mips/include/asm/spram.h +++ b/arch/mips/include/asm/spram.h @@ -3,7 +3,7 @@ #define _MIPS_SPRAM_H #if defined(CONFIG_MIPS_SPRAM) -extern __init void spram_config(void); +extern void spram_config(void); #else static inline void spram_config(void) { } #endif /* CONFIG_MIPS_SPRAM */ diff --git a/arch/mips/include/asm/syscalls.h b/arch/mips/include/asm/syscalls.h new file mode 100644 index 000000000000..59f9c0c9fa0a --- /dev/null +++ b/arch/mips/include/asm/syscalls.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef _ASM_MIPS_SYSCALLS_H +#define _ASM_MIPS_SYSCALLS_H + +#include +#include + +asmlinkage void sys_sigreturn(void); +asmlinkage void sys_rt_sigreturn(void); +asmlinkage int sysm_pipe(void); +asmlinkage long mipsmt_sys_sched_setaffinity(pid_t pid, unsigned int len, + unsigned long __user *user_mask_ptr); +asmlinkage long mipsmt_sys_sched_getaffinity(pid_t pid, unsigned int len, + unsigned long __user *user_mask_ptr); +asmlinkage long sys32_fallocate(int fd, int mode, unsigned offset_a2, + unsigned offset_a3, unsigned len_a4, + unsigned len_a5); +asmlinkage long sys32_fadvise64_64(int fd, int __pad, + unsigned long a2, unsigned long a3, + unsigned long a4, unsigned long a5, + int flags); +asmlinkage ssize_t sys32_readahead(int fd, u32 pad0, u64 a2, u64 a3, + size_t count); +asmlinkage long sys32_sync_file_range(int fd, int __pad, + unsigned long a2, unsigned long a3, + unsigned long a4, unsigned long a5, + int flags); +asmlinkage void sys32_rt_sigreturn(void); +asmlinkage void sys32_sigreturn(void); +asmlinkage int sys32_sigsuspend(compat_sigset_t __user *uset); +asmlinkage void sysn32_rt_sigreturn(void); + +#endif diff --git a/arch/mips/include/asm/tlbex.h b/arch/mips/include/asm/tlbex.h index 6d97e23f30ab..24a2d06cc1c3 100644 --- a/arch/mips/include/asm/tlbex.h +++ b/arch/mips/include/asm/tlbex.h @@ -23,6 +23,7 @@ void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep); void build_tlb_write_entry(u32 **p, struct uasm_label **l, struct uasm_reloc **r, enum tlb_write_entry wmode); +void build_tlb_refill_handler(void); extern void handle_tlbl(void); extern char handle_tlbl_end[]; diff --git a/arch/mips/include/asm/traps.h b/arch/mips/include/asm/traps.h index 15cde638b407..2c2b26f1e464 100644 --- 
a/arch/mips/include/asm/traps.h +++ b/arch/mips/include/asm/traps.h @@ -39,4 +39,28 @@ extern char except_vec_nmi[]; register_nmi_notifier(&fn##_nb); \ }) +asmlinkage void do_ade(struct pt_regs *regs); +asmlinkage void do_be(struct pt_regs *regs); +asmlinkage void do_ov(struct pt_regs *regs); +asmlinkage void do_fpe(struct pt_regs *regs, unsigned long fcr31); +asmlinkage void do_bp(struct pt_regs *regs); +asmlinkage void do_tr(struct pt_regs *regs); +asmlinkage void do_ri(struct pt_regs *regs); +asmlinkage void do_cpu(struct pt_regs *regs); +asmlinkage void do_msa_fpe(struct pt_regs *regs, unsigned int msacsr); +asmlinkage void do_msa(struct pt_regs *regs); +asmlinkage void do_mdmx(struct pt_regs *regs); +asmlinkage void do_watch(struct pt_regs *regs); +asmlinkage void do_mcheck(struct pt_regs *regs); +asmlinkage void do_mt(struct pt_regs *regs); +asmlinkage void do_dsp(struct pt_regs *regs); +asmlinkage void do_reserved(struct pt_regs *regs); +asmlinkage void do_ftlb(void); +asmlinkage void do_gsexc(struct pt_regs *regs, u32 diag1); +asmlinkage void do_daddi_ov(struct pt_regs *regs); + +asmlinkage void cache_parity_error(void); +asmlinkage void ejtag_exception_handler(struct pt_regs *regs); +asmlinkage void __noreturn nmi_exception_handler(struct pt_regs *regs); + #endif /* _ASM_TRAPS_H */ diff --git a/arch/mips/include/asm/uasm.h b/arch/mips/include/asm/uasm.h index 296bcf31abb5..b43bfd445252 100644 --- a/arch/mips/include/asm/uasm.h +++ b/arch/mips/include/asm/uasm.h @@ -193,9 +193,7 @@ struct uasm_label { void uasm_build_label(struct uasm_label **lab, u32 *addr, int lid); -#ifdef CONFIG_64BIT int uasm_in_compat_space_p(long addr); -#endif int uasm_rel_hi(long val); int uasm_rel_lo(long val); void UASM_i_LA_mostly(u32 **buf, unsigned int rs, long addr); diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c index b406d8bfb15a..de7460c3a72e 100644 --- a/arch/mips/kernel/cpu-probe.c +++ b/arch/mips/kernel/cpu-probe.c @@ -179,7 +179,6 @@ void __init check_bugs32(void) static inline int cpu_has_confreg(void) { #ifdef CONFIG_CPU_R3000 - extern unsigned long r3k_cache_size(unsigned long); unsigned long size1, size2; unsigned long cfg = read_c0_conf(); diff --git a/arch/mips/kernel/cpu-r3k-probe.c b/arch/mips/kernel/cpu-r3k-probe.c index be93469c0e0e..0c826f729f75 100644 --- a/arch/mips/kernel/cpu-r3k-probe.c +++ b/arch/mips/kernel/cpu-r3k-probe.c @@ -42,7 +42,6 @@ void __init check_bugs32(void) static inline int cpu_has_confreg(void) { #ifdef CONFIG_CPU_R3000 - extern unsigned long r3k_cache_size(unsigned long); unsigned long size1, size2; unsigned long cfg = read_c0_conf(); diff --git a/arch/mips/kernel/linux32.c b/arch/mips/kernel/linux32.c index 6b61be486303..a0c0a7a654e9 100644 --- a/arch/mips/kernel/linux32.c +++ b/arch/mips/kernel/linux32.c @@ -42,6 +42,7 @@ #include #include #include +#include #ifdef __MIPSEB__ #define merge_64(r1, r2) ((((r1) & 0xffffffffUL) << 32) + ((r2) & 0xffffffffUL)) diff --git a/arch/mips/kernel/machine_kexec.c b/arch/mips/kernel/machine_kexec.c index 432bfd3e7f22..4e3579bbd620 100644 --- a/arch/mips/kernel/machine_kexec.c +++ b/arch/mips/kernel/machine_kexec.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include diff --git a/arch/mips/kernel/mips-cm.c b/arch/mips/kernel/mips-cm.c index 3f00788b0871..84b3affb9de8 100644 --- a/arch/mips/kernel/mips-cm.c +++ b/arch/mips/kernel/mips-cm.c @@ -201,7 +201,7 @@ phys_addr_t __mips_cm_phys_base(void) phys_addr_t mips_cm_phys_base(void) __attribute__((weak, 
alias("__mips_cm_phys_base"))); -phys_addr_t __mips_cm_l2sync_phys_base(void) +static phys_addr_t __mips_cm_l2sync_phys_base(void) { u32 base_reg; diff --git a/arch/mips/kernel/mips-mt-fpaff.c b/arch/mips/kernel/mips-mt-fpaff.c index 67e130d3f038..10172fc4f627 100644 --- a/arch/mips/kernel/mips-mt-fpaff.c +++ b/arch/mips/kernel/mips-mt-fpaff.c @@ -15,6 +15,7 @@ #include #include #include +#include /* * CPU mask used to set process affinity for MT VPEs/TCs with FPUs diff --git a/arch/mips/kernel/mips-mt.c b/arch/mips/kernel/mips-mt.c index f88b7919f11f..c07d64438b5b 100644 --- a/arch/mips/kernel/mips-mt.c +++ b/arch/mips/kernel/mips-mt.c @@ -19,6 +19,7 @@ #include #include #include +#include int vpelimit; diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c index 0c936cbf20c5..7b2fbaa9cac5 100644 --- a/arch/mips/kernel/module.c +++ b/arch/mips/kernel/module.c @@ -20,8 +20,7 @@ #include #include #include - -extern void jump_label_apply_nops(struct module *mod); +#include struct mips_hi16 { struct mips_hi16 *next; diff --git a/arch/mips/kernel/r4k-bugs64.c b/arch/mips/kernel/r4k-bugs64.c index 6ffefb2c6971..1e300330078d 100644 --- a/arch/mips/kernel/r4k-bugs64.c +++ b/arch/mips/kernel/r4k-bugs64.c @@ -14,6 +14,7 @@ #include #include #include +#include static char bug64hit[] __initdata = "reliable operation impossible!\n%s"; diff --git a/arch/mips/kernel/signal-common.h b/arch/mips/kernel/signal-common.h index f50d48435c68..136eb20ac024 100644 --- a/arch/mips/kernel/signal-common.h +++ b/arch/mips/kernel/signal-common.h @@ -40,4 +40,7 @@ _restore_fp_context(void __user *fpregs, void __user *csr); extern asmlinkage int _save_msa_all_upper(void __user *buf); extern asmlinkage int _restore_msa_all_upper(void __user *buf); +extern int setup_sigcontext(struct pt_regs *, struct sigcontext __user *); +extern int restore_sigcontext(struct pt_regs *, struct sigcontext __user *); + #endif /* __SIGNAL_COMMON_H */ diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c index 479999b7f2de..ccbf580827f6 100644 --- a/arch/mips/kernel/signal.c +++ b/arch/mips/kernel/signal.c @@ -38,6 +38,7 @@ #include #include #include +#include #include "signal-common.h" diff --git a/arch/mips/kernel/signal32.c b/arch/mips/kernel/signal32.c index 59b8965433c2..73081d4ee8c1 100644 --- a/arch/mips/kernel/signal32.c +++ b/arch/mips/kernel/signal32.c @@ -18,6 +18,7 @@ #include #include #include +#include #include "signal-common.h" diff --git a/arch/mips/kernel/signal_n32.c b/arch/mips/kernel/signal_n32.c index cfc77b69420a..139d2596b0d4 100644 --- a/arch/mips/kernel/signal_n32.c +++ b/arch/mips/kernel/signal_n32.c @@ -24,6 +24,7 @@ #include #include #include +#include #include "signal-common.h" @@ -32,9 +33,6 @@ */ #define __NR_N32_restart_syscall 6214 -extern int setup_sigcontext(struct pt_regs *, struct sigcontext __user *); -extern int restore_sigcontext(struct pt_regs *, struct sigcontext __user *); - struct ucontextn32 { u32 uc_flags; s32 uc_link; diff --git a/arch/mips/kernel/signal_o32.c b/arch/mips/kernel/signal_o32.c index 299a7a28ca33..4f0458459650 100644 --- a/arch/mips/kernel/signal_o32.c +++ b/arch/mips/kernel/signal_o32.c @@ -19,6 +19,7 @@ #include #include #include +#include #include "signal-common.h" diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c index 82e2e051b416..0b53d35a116e 100644 --- a/arch/mips/kernel/smp.c +++ b/arch/mips/kernel/smp.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -468,11 +469,13 @@ int __cpu_up(unsigned int cpu, 
struct task_struct *tidle) return 0; } +#ifdef CONFIG_PROFILING /* Not really SMP stuff ... */ int setup_profiling_timer(unsigned int multiplier) { return 0; } +#endif static void flush_tlb_all_ipi(void *info) { diff --git a/arch/mips/kernel/spram.c b/arch/mips/kernel/spram.c index d5d96214cce5..71c7e5e27567 100644 --- a/arch/mips/kernel/spram.c +++ b/arch/mips/kernel/spram.c @@ -12,6 +12,7 @@ #include #include #include +#include /* * These definitions are correct for the 24K/34K/74K SPRAM sample diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c index ae93a607ddf7..1bfc34a2e5b3 100644 --- a/arch/mips/kernel/syscall.c +++ b/arch/mips/kernel/syscall.c @@ -39,6 +39,7 @@ #include #include #include +#include #include /* diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c index 246c6a6b0261..c58c0c3c5b40 100644 --- a/arch/mips/kernel/traps.c +++ b/arch/mips/kernel/traps.c @@ -2157,8 +2157,6 @@ void *set_vi_handler(int n, vi_handler_t addr) return set_vi_srs_handler(n, addr, 0); } -extern void tlb_init(void); - /* * Timer interrupt */ diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c index f4cf94e92ec3..db652c99b72e 100644 --- a/arch/mips/kernel/unaligned.c +++ b/arch/mips/kernel/unaligned.c @@ -91,6 +91,7 @@ #include #include #include +#include #include #include "access-helper.h" diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c index 187d1c16361c..b45bf026ee55 100644 --- a/arch/mips/mm/c-r4k.c +++ b/arch/mips/mm/c-r4k.c @@ -1485,10 +1485,6 @@ static void loongson3_sc_init(void) return; } -extern int r5k_sc_init(void); -extern int rm7k_sc_init(void); -extern int mips_sc_init(void); - static void setup_scache(void) { struct cpuinfo_mips *c = ¤t_cpu_data; @@ -1828,7 +1824,7 @@ static struct notifier_block r4k_cache_pm_notifier_block = { .notifier_call = r4k_cache_pm_notifier, }; -int __init r4k_cache_init_pm(void) +static int __init r4k_cache_init_pm(void) { return cpu_pm_register_notifier(&r4k_cache_pm_notifier_block); } diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c index 7f830634dbe7..df1ced4fc3b5 100644 --- a/arch/mips/mm/cache.c +++ b/arch/mips/mm/cache.c @@ -205,22 +205,13 @@ static inline void setup_protection_map(void) void cpu_cache_init(void) { - if (cpu_has_3k_cache) { - extern void __weak r3k_cache_init(void); - + if (IS_ENABLED(CONFIG_CPU_R3000) && cpu_has_3k_cache) r3k_cache_init(); - } - if (cpu_has_4k_cache) { - extern void __weak r4k_cache_init(void); - + if (IS_ENABLED(CONFIG_CPU_R4K_CACHE_TLB) && cpu_has_4k_cache) r4k_cache_init(); - } - - if (cpu_has_octeon_cache) { - extern void __weak octeon_cache_init(void); + if (IS_ENABLED(CONFIG_CPU_CAVIUM_OCTEON) && cpu_has_octeon_cache) octeon_cache_init(); - } setup_protection_map(); } diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c index d7878208bd3f..aaa9a242ebba 100644 --- a/arch/mips/mm/fault.c +++ b/arch/mips/mm/fault.c @@ -26,6 +26,7 @@ #include #include #include /* For VMALLOC_END */ +#include #include int show_unhandled_signals = 1; diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 5dcb525a8995..c2e0e5aebe90 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -38,6 +38,7 @@ #include #include #include +#include #include #include #include diff --git a/arch/mips/mm/pgtable-64.c b/arch/mips/mm/pgtable-64.c index c76d21f7dffb..1e544827dea9 100644 --- a/arch/mips/mm/pgtable-64.c +++ b/arch/mips/mm/pgtable-64.c @@ -89,6 +89,7 @@ void pud_init(void *addr) } #endif +#ifdef CONFIG_TRANSPARENT_HUGEPAGE pmd_t mk_pmd(struct page *page, 
pgprot_t prot) { pmd_t pmd; @@ -103,6 +104,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr, { *pmdp = pmd; } +#endif void __init pagetable_init(void) { diff --git a/arch/mips/mm/tlb-r3k.c b/arch/mips/mm/tlb-r3k.c index 53dfa2b9316b..f6db65410c65 100644 --- a/arch/mips/mm/tlb-r3k.c +++ b/arch/mips/mm/tlb-r3k.c @@ -23,11 +23,11 @@ #include #include #include +#include +#include #undef DEBUG_TLB -extern void build_tlb_refill_handler(void); - /* CP0 hazard avoidance. */ #define BARRIER \ __asm__ __volatile__( \ diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c index 93c2d695588a..7e2a0011a6fb 100644 --- a/arch/mips/mm/tlb-r4k.c +++ b/arch/mips/mm/tlb-r4k.c @@ -22,9 +22,9 @@ #include #include #include +#include #include - -extern void build_tlb_refill_handler(void); +#include /* * LOONGSON-2 has a 4 entry itlb which is a subset of jtlb, LOONGSON-3 has @@ -458,6 +458,7 @@ EXPORT_SYMBOL(has_transparent_hugepage); int temp_tlb_entry; +#ifndef CONFIG_64BIT __init int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1, unsigned long entryhi, unsigned long pagemask) { @@ -496,6 +497,7 @@ out: local_irq_restore(flags); return ret; } +#endif static int ntlb; static int __init set_ntlb(char *str) diff --git a/arch/mips/power/cpu.c b/arch/mips/power/cpu.c index a15e29dfc7b3..d8ef7778e535 100644 --- a/arch/mips/power/cpu.c +++ b/arch/mips/power/cpu.c @@ -6,6 +6,7 @@ * Author: Hu Hongbing * Wu Zhangjin */ +#include #include #include #include diff --git a/arch/mips/power/hibernate.c b/arch/mips/power/hibernate.c index 94ab17c3c49d..192879e76c85 100644 --- a/arch/mips/power/hibernate.c +++ b/arch/mips/power/hibernate.c @@ -1,4 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 +#include #include extern int restore_image(void); diff --git a/arch/parisc/kernel/kexec_file.c b/arch/parisc/kernel/kexec_file.c index 8c534204f0fd..3fc82130b6c3 100644 --- a/arch/parisc/kernel/kexec_file.c +++ b/arch/parisc/kernel/kexec_file.c @@ -38,8 +38,8 @@ static void *elf_load(struct kimage *image, char *kernel_buf, for (i = 0; i < image->nr_segments; i++) image->segment[i].mem = __pa(image->segment[i].mem); - pr_debug("Loaded the kernel at 0x%lx, entry at 0x%lx\n", - kernel_load_addr, image->start); + kexec_dprintk("Loaded the kernel at 0x%lx, entry at 0x%lx\n", + kernel_load_addr, image->start); if (initrd != NULL) { kbuf.buffer = initrd; @@ -51,7 +51,7 @@ static void *elf_load(struct kimage *image, char *kernel_buf, if (ret) goto out; - pr_debug("Loaded initrd at 0x%lx\n", kbuf.mem); + kexec_dprintk("Loaded initrd at 0x%lx\n", kbuf.mem); image->arch.initrd_start = kbuf.mem; image->arch.initrd_end = kbuf.mem + initrd_len; } @@ -68,7 +68,7 @@ static void *elf_load(struct kimage *image, char *kernel_buf, if (ret) goto out; - pr_debug("Loaded cmdline at 0x%lx\n", kbuf.mem); + kexec_dprintk("Loaded cmdline at 0x%lx\n", kbuf.mem); image->arch.cmdline = kbuf.mem; } out: diff --git a/arch/powerpc/kexec/core_64.c b/arch/powerpc/kexec/core_64.c index 0bee7ca9a77c..762e4d09aacf 100644 --- a/arch/powerpc/kexec/core_64.c +++ b/arch/powerpc/kexec/core_64.c @@ -283,8 +283,7 @@ static void kexec_prepare_cpus(void) * We could use a smaller stack if we don't care about anything using * current, but that audit has not been performed. 
*/ -static union thread_union kexec_stack __init_task_data = - { }; +static union thread_union kexec_stack = { }; /* * For similar reasons to the stack above, the kexecing CPU needs to be on a diff --git a/arch/powerpc/kexec/elf_64.c b/arch/powerpc/kexec/elf_64.c index eeb258002d1e..904016cf89ea 100644 --- a/arch/powerpc/kexec/elf_64.c +++ b/arch/powerpc/kexec/elf_64.c @@ -59,7 +59,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf, if (ret) goto out; - pr_debug("Loaded the kernel at 0x%lx\n", kernel_load_addr); + kexec_dprintk("Loaded the kernel at 0x%lx\n", kernel_load_addr); ret = kexec_load_purgatory(image, &pbuf); if (ret) { @@ -67,7 +67,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf, goto out; } - pr_debug("Loaded purgatory at 0x%lx\n", pbuf.mem); + kexec_dprintk("Loaded purgatory at 0x%lx\n", pbuf.mem); /* Load additional segments needed for panic kernel */ if (image->type == KEXEC_TYPE_CRASH) { @@ -99,7 +99,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf, goto out; initrd_load_addr = kbuf.mem; - pr_debug("Loaded initrd at 0x%lx\n", initrd_load_addr); + kexec_dprintk("Loaded initrd at 0x%lx\n", initrd_load_addr); } fdt = of_kexec_alloc_and_setup_fdt(image, initrd_load_addr, @@ -132,7 +132,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf, fdt_load_addr = kbuf.mem; - pr_debug("Loaded device tree at 0x%lx\n", fdt_load_addr); + kexec_dprintk("Loaded device tree at 0x%lx\n", fdt_load_addr); slave_code = elf_info.buffer + elf_info.proghdrs[0].p_offset; ret = setup_purgatory_ppc64(image, slave_code, fdt, kernel_load_addr, diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c index 961a6dd67365..5b4c5cb23354 100644 --- a/arch/powerpc/kexec/file_load_64.c +++ b/arch/powerpc/kexec/file_load_64.c @@ -577,7 +577,7 @@ static int add_usable_mem_property(void *fdt, struct device_node *dn, NODE_PATH_LEN, dn); return -EOVERFLOW; } - pr_debug("Memory node path: %s\n", path); + kexec_dprintk("Memory node path: %s\n", path); /* Now that we know the path, find its offset in kdump kernel's fdt */ node = fdt_path_offset(fdt, path); @@ -590,8 +590,8 @@ static int add_usable_mem_property(void *fdt, struct device_node *dn, /* Get the address & size cells */ n_mem_addr_cells = of_n_addr_cells(dn); n_mem_size_cells = of_n_size_cells(dn); - pr_debug("address cells: %d, size cells: %d\n", n_mem_addr_cells, - n_mem_size_cells); + kexec_dprintk("address cells: %d, size cells: %d\n", n_mem_addr_cells, + n_mem_size_cells); um_info->idx = 0; if (!check_realloc_usable_mem(um_info, 2)) { @@ -664,7 +664,7 @@ static int update_usable_mem_fdt(void *fdt, struct crash_mem *usable_mem) node = fdt_path_offset(fdt, "/ibm,dynamic-reconfiguration-memory"); if (node == -FDT_ERR_NOTFOUND) - pr_debug("No dynamic reconfiguration memory found\n"); + kexec_dprintk("No dynamic reconfiguration memory found\n"); else if (node < 0) { pr_err("Malformed device tree: error reading /ibm,dynamic-reconfiguration-memory.\n"); return -EINVAL; @@ -776,8 +776,8 @@ static void update_backup_region_phdr(struct kimage *image, Elf64_Ehdr *ehdr) for (i = 0; i < ehdr->e_phnum; i++) { if (phdr->p_paddr == BACKUP_SRC_START) { phdr->p_offset = image->arch.backup_start; - pr_debug("Backup region offset updated to 0x%lx\n", - image->arch.backup_start); + kexec_dprintk("Backup region offset updated to 0x%lx\n", + image->arch.backup_start); return; } } @@ -850,7 +850,7 @@ int load_crashdump_segments_ppc64(struct kimage *image, pr_err("Failed to load backup 
segment\n"); return ret; } - pr_debug("Loaded the backup region at 0x%lx\n", kbuf->mem); + kexec_dprintk("Loaded the backup region at 0x%lx\n", kbuf->mem); /* Load elfcorehdr segment - to export crashing kernel's vmcore */ ret = load_elfcorehdr_segment(image, kbuf); @@ -858,8 +858,8 @@ int load_crashdump_segments_ppc64(struct kimage *image, pr_err("Failed to load elfcorehdr segment\n"); return ret; } - pr_debug("Loaded elf core header at 0x%lx, bufsz=0x%lx memsz=0x%lx\n", - image->elf_load_addr, kbuf->bufsz, kbuf->memsz); + kexec_dprintk("Loaded elf core header at 0x%lx, bufsz=0x%lx memsz=0x%lx\n", + image->elf_load_addr, kbuf->bufsz, kbuf->memsz); return 0; } diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile index fee22a3d1b53..82940b6a79a2 100644 --- a/arch/riscv/kernel/Makefile +++ b/arch/riscv/kernel/Makefile @@ -11,7 +11,7 @@ endif CFLAGS_syscall_table.o += $(call cc-option,-Wno-override-init,) CFLAGS_compat_syscall_table.o += $(call cc-option,-Wno-override-init,) -ifdef CONFIG_KEXEC +ifdef CONFIG_KEXEC_CORE AFLAGS_kexec_relocate.o := -mcmodel=medany $(call cc-option,-mno-relax) endif diff --git a/arch/riscv/kernel/elf_kexec.c b/arch/riscv/kernel/elf_kexec.c index e60fbd8660c4..5bd1ec3341fe 100644 --- a/arch/riscv/kernel/elf_kexec.c +++ b/arch/riscv/kernel/elf_kexec.c @@ -216,7 +216,6 @@ static void *elf_kexec_load(struct kimage *image, char *kernel_buf, if (ret) goto out; kernel_start = image->start; - pr_notice("The entry point of kernel at 0x%lx\n", image->start); /* Add the kernel binary to the image */ ret = riscv_kexec_elf_load(image, &ehdr, &elf_info, @@ -252,8 +251,8 @@ static void *elf_kexec_load(struct kimage *image, char *kernel_buf, image->elf_load_addr = kbuf.mem; image->elf_headers_sz = headers_sz; - pr_debug("Loaded elf core header at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - image->elf_load_addr, kbuf.bufsz, kbuf.memsz); + kexec_dprintk("Loaded elf core header at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + image->elf_load_addr, kbuf.bufsz, kbuf.memsz); /* Setup cmdline for kdump kernel case */ modified_cmdline = setup_kdump_cmdline(image, cmdline, @@ -275,6 +274,8 @@ static void *elf_kexec_load(struct kimage *image, char *kernel_buf, pr_err("Error loading purgatory ret=%d\n", ret); goto out; } + kexec_dprintk("Loaded purgatory at 0x%lx\n", kbuf.mem); + ret = kexec_purgatory_get_set_symbol(image, "riscv_kernel_entry", &kernel_start, sizeof(kernel_start), 0); @@ -293,7 +294,7 @@ static void *elf_kexec_load(struct kimage *image, char *kernel_buf, if (ret) goto out; initrd_pbase = kbuf.mem; - pr_notice("Loaded initrd at 0x%lx\n", initrd_pbase); + kexec_dprintk("Loaded initrd at 0x%lx\n", initrd_pbase); } /* Add the DTB to the image */ @@ -318,7 +319,7 @@ static void *elf_kexec_load(struct kimage *image, char *kernel_buf, } /* Cache the fdt buffer address for memory cleanup */ image->arch.fdt = fdt; - pr_notice("Loaded device tree at 0x%lx\n", kbuf.mem); + kexec_dprintk("Loaded device tree at 0x%lx\n", kbuf.mem); goto out; out_free_fdt: diff --git a/arch/riscv/kernel/machine_kexec.c b/arch/riscv/kernel/machine_kexec.c index 2d139b724bc8..ed9cad20c039 100644 --- a/arch/riscv/kernel/machine_kexec.c +++ b/arch/riscv/kernel/machine_kexec.c @@ -18,30 +18,6 @@ #include #include -/* - * kexec_image_info - Print received image details - */ -static void -kexec_image_info(const struct kimage *image) -{ - unsigned long i; - - pr_debug("Kexec image info:\n"); - pr_debug("\ttype: %d\n", image->type); - pr_debug("\tstart: %lx\n", image->start); - pr_debug("\thead: %lx\n", 
image->head); - pr_debug("\tnr_segments: %lu\n", image->nr_segments); - - for (i = 0; i < image->nr_segments; i++) { - pr_debug("\t segment[%lu]: %016lx - %016lx", i, - image->segment[i].mem, - image->segment[i].mem + image->segment[i].memsz); - pr_debug("\t\t0x%lx bytes, %lu pages\n", - (unsigned long) image->segment[i].memsz, - (unsigned long) image->segment[i].memsz / PAGE_SIZE); - } -} - /* * machine_kexec_prepare - Initialize kexec * @@ -60,8 +36,6 @@ machine_kexec_prepare(struct kimage *image) unsigned int control_code_buffer_sz = 0; int i = 0; - kexec_image_info(image); - /* Find the Flattened Device Tree and save its physical address */ for (i = 0; i < image->nr_segments; i++) { if (image->segment[i].memsz <= sizeof(fdt)) diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c index 1d2aa448d103..cc3e3a01dfa5 100644 --- a/arch/s390/kernel/traps.c +++ b/arch/s390/kernel/traps.c @@ -43,10 +43,12 @@ static inline void __user *get_trap_ip(struct pt_regs *regs) return (void __user *) (address - (regs->int_code >> 16)); } +#ifdef CONFIG_GENERIC_BUG int is_valid_bugaddr(unsigned long addr) { return 1; } +#endif void do_report_trap(struct pt_regs *regs, int si_signo, int si_code, char *str) { diff --git a/arch/sparc/kernel/Makefile b/arch/sparc/kernel/Makefile index 0984bb6f0f17..58ea4ef9b622 100644 --- a/arch/sparc/kernel/Makefile +++ b/arch/sparc/kernel/Makefile @@ -5,7 +5,6 @@ # asflags-y := -ansi -ccflags-y := -Werror # Undefine sparc when processing vmlinux.lds - it is used # And teach CPP we are doing $(BITS) builds (for this case) diff --git a/arch/sparc/lib/Makefile b/arch/sparc/lib/Makefile index 063556fe2cb1..59669ebddd4e 100644 --- a/arch/sparc/lib/Makefile +++ b/arch/sparc/lib/Makefile @@ -3,7 +3,6 @@ # asflags-y := -ansi -DST_DIV0=0x02 -ccflags-y := -Werror lib-$(CONFIG_SPARC32) += ashrdi3.o lib-$(CONFIG_SPARC32) += memcpy.o memset.o diff --git a/arch/sparc/mm/Makefile b/arch/sparc/mm/Makefile index 871354aa3c00..809d993f6d88 100644 --- a/arch/sparc/mm/Makefile +++ b/arch/sparc/mm/Makefile @@ -3,7 +3,6 @@ # asflags-y := -ansi -ccflags-y := -Werror obj-$(CONFIG_SPARC64) += ultra.o tlb.o tsb.o obj-y += fault_$(BITS).o diff --git a/arch/sparc/prom/Makefile b/arch/sparc/prom/Makefile index 397b79af77f7..a1adc75d8055 100644 --- a/arch/sparc/prom/Makefile +++ b/arch/sparc/prom/Makefile @@ -3,7 +3,6 @@ # Linux. # asflags := -ansi -ccflags := -Werror lib-y := bootstr_$(BITS).o lib-y += init_$(BITS).o diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c index c92d88680dbf..b6b044356f1b 100644 --- a/arch/x86/kernel/crash.c +++ b/arch/x86/kernel/crash.c @@ -170,7 +170,7 @@ static int elf_header_exclude_ranges(struct crash_mem *cmem) int ret = 0; /* Exclude the low 1M because it is always reserved */ - ret = crash_exclude_mem_range(cmem, 0, (1<<20)-1); + ret = crash_exclude_mem_range(cmem, 0, SZ_1M - 1); if (ret) return ret; @@ -198,8 +198,8 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg) } /* Prepare elf headers. 
Return addr and size */ -static int prepare_elf_headers(struct kimage *image, void **addr, - unsigned long *sz, unsigned long *nr_mem_ranges) +static int prepare_elf_headers(void **addr, unsigned long *sz, + unsigned long *nr_mem_ranges) { struct crash_mem *cmem; int ret; @@ -221,7 +221,7 @@ static int prepare_elf_headers(struct kimage *image, void **addr, *nr_mem_ranges = cmem->nr_ranges; /* By default prepare 64bit headers */ - ret = crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz); + ret = crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz); out: vfree(cmem); @@ -349,7 +349,7 @@ int crash_load_segments(struct kimage *image) .buf_max = ULONG_MAX, .top_down = false }; /* Prepare elf headers and add a segment */ - ret = prepare_elf_headers(image, &kbuf.buffer, &kbuf.bufsz, &pnum); + ret = prepare_elf_headers(&kbuf.buffer, &kbuf.bufsz, &pnum); if (ret) return ret; @@ -386,8 +386,8 @@ int crash_load_segments(struct kimage *image) if (ret) return ret; image->elf_load_addr = kbuf.mem; - pr_debug("Loaded ELF headers at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - image->elf_load_addr, kbuf.bufsz, kbuf.memsz); + kexec_dprintk("Loaded ELF headers at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + image->elf_load_addr, kbuf.bufsz, kbuf.memsz); return ret; } @@ -452,7 +452,7 @@ void arch_crash_handle_hotplug_event(struct kimage *image) * Create the new elfcorehdr reflecting the changes to CPU and/or * memory resources. */ - if (prepare_elf_headers(image, &elfbuf, &elfsz, &nr_mem_ranges)) { + if (prepare_elf_headers(&elfbuf, &elfsz, &nr_mem_ranges)) { pr_err("unable to create new elfcorehdr"); goto out; } diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c index a61c12c01270..2a422e00ed4b 100644 --- a/arch/x86/kernel/kexec-bzimage64.c +++ b/arch/x86/kernel/kexec-bzimage64.c @@ -82,7 +82,7 @@ static int setup_cmdline(struct kimage *image, struct boot_params *params, cmdline_ptr[cmdline_len - 1] = '\0'; - pr_debug("Final command line is: %s\n", cmdline_ptr); + kexec_dprintk("Final command line is: %s\n", cmdline_ptr); cmdline_ptr_phys = bootparams_load_addr + cmdline_offset; cmdline_low_32 = cmdline_ptr_phys & 0xffffffffUL; cmdline_ext_32 = cmdline_ptr_phys >> 32; @@ -272,7 +272,12 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params, nr_e820_entries = params->e820_entries; + kexec_dprintk("E820 memmap:\n"); for (i = 0; i < nr_e820_entries; i++) { + kexec_dprintk("%016llx-%016llx (%d)\n", + params->e820_table[i].addr, + params->e820_table[i].addr + params->e820_table[i].size - 1, + params->e820_table[i].type); if (params->e820_table[i].type != E820_TYPE_RAM) continue; start = params->e820_table[i].addr; @@ -424,7 +429,7 @@ static void *bzImage64_load(struct kimage *image, char *kernel, * command line. 
Make sure it does not overflow */ if (cmdline_len + MAX_ELFCOREHDR_STR_LEN > header->cmdline_size) { - pr_debug("Appending elfcorehdr= to command line exceeds maximum allowed length\n"); + pr_err("Appending elfcorehdr= to command line exceeds maximum allowed length\n"); return ERR_PTR(-EINVAL); } @@ -445,7 +450,7 @@ static void *bzImage64_load(struct kimage *image, char *kernel, return ERR_PTR(ret); } - pr_debug("Loaded purgatory at 0x%lx\n", pbuf.mem); + kexec_dprintk("Loaded purgatory at 0x%lx\n", pbuf.mem); /* @@ -490,8 +495,8 @@ static void *bzImage64_load(struct kimage *image, char *kernel, if (ret) goto out_free_params; bootparam_load_addr = kbuf.mem; - pr_debug("Loaded boot_param, command line and misc at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - bootparam_load_addr, kbuf.bufsz, kbuf.bufsz); + kexec_dprintk("Loaded boot_param, command line and misc at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + bootparam_load_addr, kbuf.bufsz, kbuf.memsz); /* Load kernel */ kbuf.buffer = kernel + kern16_size; @@ -505,8 +510,8 @@ static void *bzImage64_load(struct kimage *image, char *kernel, goto out_free_params; kernel_load_addr = kbuf.mem; - pr_debug("Loaded 64bit kernel at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - kernel_load_addr, kbuf.bufsz, kbuf.memsz); + kexec_dprintk("Loaded 64bit kernel at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + kernel_load_addr, kbuf.bufsz, kbuf.memsz); /* Load initrd high */ if (initrd) { @@ -520,8 +525,8 @@ static void *bzImage64_load(struct kimage *image, char *kernel, goto out_free_params; initrd_load_addr = kbuf.mem; - pr_debug("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n", - initrd_load_addr, initrd_len, initrd_len); + kexec_dprintk("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n", + initrd_load_addr, initrd_len, initrd_len); setup_initrd(params, initrd_load_addr, initrd_len); } diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c index 1a3e2c05a8a5..bc0a5348b4a6 100644 --- a/arch/x86/kernel/machine_kexec_64.c +++ b/arch/x86/kernel/machine_kexec_64.c @@ -42,12 +42,9 @@ struct init_pgtable_data { static int mem_region_callback(struct resource *res, void *arg) { struct init_pgtable_data *data = arg; - unsigned long mstart, mend; - mstart = res->start; - mend = mstart + resource_size(res) - 1; - - return kernel_ident_mapping_init(data->info, data->level4p, mstart, mend); + return kernel_ident_mapping_init(data->info, data->level4p, + res->start, res->end + 1); } static int diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c index 7368afc03998..8c8ddc4dcc08 100644 --- a/arch/x86/pci/sta2x11-fixup.c +++ b/arch/x86/pci/sta2x11-fixup.c @@ -14,6 +14,7 @@ #include #include #include +#include #define STA2X11_SWIOTLB_SIZE (4*1024*1024) diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig index 90ce58fd629e..925c19ee513b 100644 --- a/drivers/mfd/Kconfig +++ b/drivers/mfd/Kconfig @@ -2255,7 +2255,7 @@ config MFD_VEXPRESS_SYSREG config RAVE_SP_CORE tristate "RAVE SP MCU core driver" depends on SERIAL_DEV_BUS - select CRC_CCITT + select CRC_ITU_T help Select this to get support for the Supervisory Processor device found on several devices in RAVE line of hardware. 
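The CRC_CCITT -> CRC_ITU_T conversions in this series (the ssh.rst note above, this Kconfig, and the rave-sp.c hunk below) all rest on the same fact: crc_ccitt_false(0xffff, ...) and crc_itu_t(0xffff, ...) compute the identical checksum, the MSB-first CRC-16 with polynomial 0x1021; the two helpers differ only in their conventional seed. A self-contained sketch of that CRC, in plain C outside the kernel (crc16_msb is a name made up here for illustration; 0x29b1 is the published check value of CRC-16/IBM-3740, a.k.a. CCITT-FALSE, over "123456789"):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* MSB-first CRC-16, polynomial 0x1021. Seeded with 0xffff this is
 * CRC-16/IBM-3740 ("CCITT-FALSE") -- the value both crc_ccitt_false()
 * and crc_itu_t() return for that seed. */
static uint16_t crc16_msb(uint16_t crc, const uint8_t *buf, size_t len)
{
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (bit = 0; bit < 8; bit++)
			crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : crc << 1;
	}
	return crc;
}

int main(void)
{
	const uint8_t msg[] = "123456789";

	printf("0x%04x\n", crc16_msb(0xffff, msg, 9));	/* prints 0x29b1 */
	return 0;
}

Seeded with 0x0000 instead, the same loop yields the XMODEM variant that crc_itu_t is named for; only the 0xffff seed makes it interchangeable with crc_ccitt_false, which is why the converted call sites in this series pass 0xffff explicitly.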
diff --git a/drivers/mfd/rave-sp.c b/drivers/mfd/rave-sp.c index da50eba10014..f62422740de2 100644 --- a/drivers/mfd/rave-sp.c +++ b/drivers/mfd/rave-sp.c @@ -9,7 +9,7 @@ */ #include -#include <linux/crc-ccitt.h> +#include <linux/crc-itu-t.h> #include #include #include @@ -251,7 +251,7 @@ static void csum_8b2c(const u8 *buf, size_t size, u8 *crc) static void csum_ccitt(const u8 *buf, size_t size, u8 *crc) { - const u16 calculated = crc_ccitt_false(0xffff, buf, size); + const u16 calculated = crc_itu_t(0xffff, buf, size); /* * While the rest of the wire protocol is little-endian, diff --git a/drivers/platform/mips/rs780e-acpi.c b/drivers/platform/mips/rs780e-acpi.c index bb0e8ae0eefd..5b8f9cc32589 100644 --- a/drivers/platform/mips/rs780e-acpi.c +++ b/drivers/platform/mips/rs780e-acpi.c @@ -32,29 +32,25 @@ static u8 pmio_read_index(u16 index, u8 reg) return inb(index + 1); } -void pm_iowrite(u8 reg, u8 value) +static void pm_iowrite(u8 reg, u8 value) { pmio_write_index(PM_INDEX, reg, value); } -EXPORT_SYMBOL(pm_iowrite); -u8 pm_ioread(u8 reg) +static u8 pm_ioread(u8 reg) { return pmio_read_index(PM_INDEX, reg); } -EXPORT_SYMBOL(pm_ioread); -void pm2_iowrite(u8 reg, u8 value) +static void pm2_iowrite(u8 reg, u8 value) { pmio_write_index(PM2_INDEX, reg, value); } -EXPORT_SYMBOL(pm2_iowrite); -u8 pm2_ioread(u8 reg) +static u8 pm2_ioread(u8 reg) { return pmio_read_index(PM2_INDEX, reg); } -EXPORT_SYMBOL(pm2_ioread); static void acpi_hw_clear_status(void) { diff --git a/drivers/platform/surface/aggregator/Kconfig b/drivers/platform/surface/aggregator/Kconfig index 88afc38ffdc5..957c216c180c 100644 --- a/drivers/platform/surface/aggregator/Kconfig +++ b/drivers/platform/surface/aggregator/Kconfig @@ -5,7 +5,7 @@ menuconfig SURFACE_AGGREGATOR tristate "Microsoft Surface System Aggregator Module Subsystem and Drivers" depends on SERIAL_DEV_BUS depends on ACPI && !RISCV - select CRC_CCITT + select CRC_ITU_T help The Surface System Aggregator Module (Surface SAM or SSAM) is an embedded controller (EC) found on 5th- and later-generation Microsoft diff --git a/drivers/rapidio/devices/tsi721.c b/drivers/rapidio/devices/tsi721.c index 83323c3d10af..4b84270a8906 100644 --- a/drivers/rapidio/devices/tsi721.c +++ b/drivers/rapidio/devices/tsi721.c @@ -51,8 +51,9 @@ static void tsi721_imsg_handler(struct tsi721_device *priv, int ch); * @len: Length (in bytes) of the maintenance transaction * @data: Value to be read into * - * Generates a local SREP space read. Returns %0 on - * success or %-EINVAL on failure. + * Generates a local SREP space read. + * + * Returns: %0 on success or %-EINVAL on failure. */ static int tsi721_lcread(struct rio_mport *mport, int index, u32 offset, int len, u32 *data) @@ -75,8 +76,9 @@ static int tsi721_lcread(struct rio_mport *mport, int index, u32 offset, * @len: Length (in bytes) of the maintenance transaction * @data: Value to be written * - * Generates a local write into SREP configuration space. Returns %0 on - * success or %-EINVAL on failure. + * Generates a local write into SREP configuration space. + * + * Returns: %0 on success or %-EINVAL on failure. */ static int tsi721_lcwrite(struct rio_mport *mport, int index, u32 offset, int len, u32 data) @@ -104,7 +106,7 @@ static int tsi721_lcwrite(struct rio_mport *mport, int index, u32 offset, * @do_wr: Operation flag (1 == MAINT_WR) * * Generates a RapidIO maintenance transaction (Read or Write). - * Returns %0 on success and %-EINVAL or %-EFAULT on failure. + * Returns: %0 on success and %-EINVAL or %-EFAULT on failure.
*/ static int tsi721_maint_dma(struct tsi721_device *priv, u32 sys_size, u16 destid, u8 hopcount, u32 offset, int len, @@ -205,10 +207,10 @@ err_out: * @hopcount: Number of hops to target device * @offset: Offset into configuration space * @len: Length (in bytes) of the maintenance transaction - * @val: Location to be read into + * @data: Location to be read into * * Generates a RapidIO maintenance read transaction. - * Returns %0 on success and %-EINVAL or %-EFAULT on failure. + * Returns: %0 on success and %-EINVAL or %-EFAULT on failure. */ static int tsi721_cread_dma(struct rio_mport *mport, int index, u16 destid, u8 hopcount, u32 offset, int len, u32 *data) @@ -228,10 +230,10 @@ static int tsi721_cread_dma(struct rio_mport *mport, int index, u16 destid, * @hopcount: Number of hops to target device * @offset: Offset into configuration space * @len: Length (in bytes) of the maintenance transaction - * @val: Value to be written + * @data: Value to be written * * Generates a RapidIO maintenance write transaction. - * Returns %0 on success and %-EINVAL or %-EFAULT on failure. + * Returns: %0 on success and %-EINVAL or %-EFAULT on failure. */ static int tsi721_cwrite_dma(struct rio_mport *mport, int index, u16 destid, u8 hopcount, u32 offset, int len, u32 data) @@ -250,6 +252,8 @@ static int tsi721_cwrite_dma(struct rio_mport *mport, int index, u16 destid, * Handles inbound port-write interrupts. Copies PW message from an internal * buffer into PW message FIFO and schedules deferred routine to process * queued messages. + * + * Returns: %0 */ static int tsi721_pw_handler(struct tsi721_device *priv) @@ -307,6 +311,8 @@ static void tsi721_pw_dpc(struct work_struct *work) * tsi721_pw_enable - enable/disable port-write interface init * @mport: Master port implementing the port write unit * @enable: 1=enable; 0=disable port-write message handling + * + * Returns: %0 */ static int tsi721_pw_enable(struct rio_mport *mport, int enable) { @@ -336,7 +342,9 @@ static int tsi721_pw_enable(struct rio_mport *mport, int enable) * @destid: Destination ID of target device * @data: 16-bit info field of RapidIO doorbell * - * Sends a RapidIO doorbell message. Always returns %0. + * Sends a RapidIO doorbell message. + * + * Returns: %0 */ static int tsi721_dsend(struct rio_mport *mport, int index, u16 destid, u16 data) @@ -361,6 +369,8 @@ static int tsi721_dsend(struct rio_mport *mport, int index, * Handles inbound doorbell interrupts. Copies doorbell entry from an internal * buffer into DB message FIFO and schedules deferred routine to process * queued DBs. + * + * Returns: %0 */ static int tsi721_dbell_handler(struct tsi721_device *priv) @@ -453,6 +463,8 @@ static void tsi721_db_dpc(struct work_struct *work) * * Handles Tsi721 interrupts signaled using MSI and INTA. Checks reported * interrupt events and calls an event-specific handler(s). + * + * Returns: %IRQ_HANDLED or %IRQ_NONE */ static irqreturn_t tsi721_irqhandler(int irq, void *ptr) { @@ -607,6 +619,8 @@ static void tsi721_interrupts_init(struct tsi721_device *priv) * @ptr: Pointer to interrupt-specific data (tsi721_device structure) * * Handles outbound messaging interrupts signaled using MSI-X. + * + * Returns: %IRQ_HANDLED */ static irqreturn_t tsi721_omsg_msix(int irq, void *ptr) { @@ -624,6 +638,8 @@ static irqreturn_t tsi721_omsg_msix(int irq, void *ptr) * @ptr: Pointer to interrupt-specific data (tsi721_device structure) * * Handles inbound messaging interrupts signaled using MSI-X. 
+ * + * Returns: %IRQ_HANDLED */ static irqreturn_t tsi721_imsg_msix(int irq, void *ptr) { @@ -641,6 +657,8 @@ static irqreturn_t tsi721_imsg_msix(int irq, void *ptr) * @ptr: Pointer to interrupt-specific data (tsi721_device structure) * * Handles Tsi721 interrupts from SRIO MAC. + * + * Returns: %IRQ_HANDLED */ static irqreturn_t tsi721_srio_msix(int irq, void *ptr) { @@ -663,6 +681,8 @@ static irqreturn_t tsi721_srio_msix(int irq, void *ptr) * Handles Tsi721 interrupts from SR2PC Channel. * NOTE: At this moment services only one SR2PC channel associated with inbound * doorbells. + * + * Returns: %IRQ_HANDLED */ static irqreturn_t tsi721_sr2pc_ch_msix(int irq, void *ptr) { @@ -689,6 +709,8 @@ static irqreturn_t tsi721_sr2pc_ch_msix(int irq, void *ptr) * Registers MSI-X interrupt service routines for interrupts that are active * immediately after mport initialization. Messaging interrupt service routines * should be registered during corresponding open requests. + * + * Returns: %0 on success or -errno value on failure. */ static int tsi721_request_msix(struct tsi721_device *priv) { @@ -717,6 +739,8 @@ static int tsi721_request_msix(struct tsi721_device *priv) * * Configures MSI-X support for Tsi721. Supports only an exact number * of requested vectors. + * + * Returns: %0 on success or -errno value on failure. */ static int tsi721_enable_msix(struct tsi721_device *priv) { @@ -1334,7 +1358,7 @@ static void tsi721_close_sr2pc_mapping(struct tsi721_device *priv) * @priv: pointer to tsi721 private data * * Initializes inbound port write handler. - * Returns %0 on success or %-ENOMEM on failure. + * Returns: %0 on success or %-ENOMEM on failure. */ static int tsi721_port_write_init(struct tsi721_device *priv) { @@ -1412,7 +1436,8 @@ static void tsi721_doorbell_free(struct tsi721_device *priv) * * Initialize BDMA channel allocated for RapidIO maintenance read/write * request generation - * Returns %0 on success or %-ENOMEM on failure. + * + * Returns: %0 on success or %-ENOMEM on failure. */ static int tsi721_bdma_maint_init(struct tsi721_device *priv) { @@ -1662,6 +1687,8 @@ tsi721_omsg_interrupt_disable(struct tsi721_device *priv, int ch, * @mbox: Outbound mailbox * @buffer: Message to add to outbound queue * @len: Length of message + * + * Returns: %0 on success or -errno value on failure. */ static int tsi721_add_outb_message(struct rio_mport *mport, struct rio_dev *rdev, int mbox, @@ -1869,6 +1896,8 @@ no_sts_update: * @dev_id: Device specific pointer to pass on event * @mbox: Mailbox to open * @entries: Number of entries in the outbound mailbox ring + * + * Returns: %0 on success or -errno value on failure. */ static int tsi721_open_outb_mbox(struct rio_mport *mport, void *dev_id, int mbox, int entries) @@ -2156,6 +2185,8 @@ static void tsi721_imsg_handler(struct tsi721_device *priv, int ch) * @dev_id: Device specific pointer to pass on event * @mbox: Mailbox to open * @entries: Number of entries in the inbound mailbox ring + * + * Returns: %0 on success or -errno value on failure. */ static int tsi721_open_inb_mbox(struct rio_mport *mport, void *dev_id, int mbox, int entries) @@ -2409,6 +2440,8 @@ static void tsi721_close_inb_mbox(struct rio_mport *mport, int mbox) * @mport: Master port implementing the Inbound Messaging Engine * @mbox: Inbound mailbox number * @buf: Buffer to add to inbound queue + * + * Returns: %0 on success or -errno value on failure. 
*/ static int tsi721_add_inb_buffer(struct rio_mport *mport, int mbox, void *buf) { @@ -2439,7 +2472,7 @@ out: * @mport: Master port implementing the Inbound Messaging Engine * @mbox: Inbound mailbox number * - * Returns pointer to the message on success or NULL on failure. + * Returns: pointer to the message on success or %NULL on failure. */ static void *tsi721_get_inb_message(struct rio_mport *mport, int mbox) { @@ -2507,6 +2540,8 @@ out: * @priv: pointer to tsi721 private data * * Configures Tsi721 messaging engine. + * + * Returns: %0 */ static int tsi721_messages_init(struct tsi721_device *priv) { @@ -2539,9 +2574,9 @@ static int tsi721_messages_init(struct tsi721_device *priv) /** - * tsi721_query_mport - Fetch inbound message from the Tsi721 MSG Queue + * tsi721_query_mport - Fetch mport attributes * @mport: Master port implementing the Inbound Messaging Engine - * @mbox: Inbound mailbox number + * @attr: mport device attributes * - * Returns pointer to the message on success or NULL on failure. + * Returns: %0 */ static int tsi721_query_mport(struct rio_mport *mport, struct rio_mport_attr *attr) @@ -2653,6 +2688,8 @@ static void tsi721_mport_release(struct device *dev) * @priv: pointer to tsi721 private data * * Configures Tsi721 as RapidIO master port. + * + * Returns: %0 on success or -errno value on failure. */ static int tsi721_setup_mport(struct tsi721_device *priv) { diff --git a/drivers/rapidio/devices/tsi721_dma.c b/drivers/rapidio/devices/tsi721_dma.c index d375c02059f3..f77f75172bdc 100644 --- a/drivers/rapidio/devices/tsi721_dma.c +++ b/drivers/rapidio/devices/tsi721_dma.c @@ -283,11 +283,13 @@ void tsi721_bdma_handler(struct tsi721_bdma_chan *bdma_chan) #ifdef CONFIG_PCI_MSI /** - * tsi721_omsg_msix - MSI-X interrupt handler for BDMA channels + * tsi721_bdma_msix - MSI-X interrupt handler for BDMA channels * @irq: Linux interrupt number * @ptr: Pointer to interrupt-specific data (BDMA channel structure) * * Handles BDMA channel interrupts signaled using MSI-X. + * + * Returns: %IRQ_HANDLED */ static irqreturn_t tsi721_bdma_msix(int irq, void *ptr) { diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c index 833cfab7d877..7327e81352e9 100644 --- a/drivers/s390/block/dasd.c +++ b/drivers/s390/block/dasd.c @@ -1106,12 +1106,6 @@ static void dasd_statistics_removeroot(void) return; } -int dasd_stats_generic_show(struct seq_file *m, void *v) -{ - seq_puts(m, "Statistics are not activated in this kernel\n"); - return 0; -} - static void dasd_profile_init(struct dasd_profile *profile, struct dentry *base_dentry) { diff --git a/drivers/usb/host/fsl-mph-dr-of.c b/drivers/usb/host/fsl-mph-dr-of.c index 8508d37a2aff..6cdc3d805c32 100644 --- a/drivers/usb/host/fsl-mph-dr-of.c +++ b/drivers/usb/host/fsl-mph-dr-of.c @@ -288,7 +288,7 @@ static void fsl_usb2_mph_dr_of_remove(struct platform_device *ofdev) #define PHYCTRL_LSFE (1 << 1) /* Line State Filter Enable */ #define PHYCTRL_PXE (1 << 0) /* PHY oscillator enable */ -int fsl_usb2_mpc5121_init(struct platform_device *pdev) +static int fsl_usb2_mpc5121_init(struct platform_device *pdev) { struct fsl_usb2_platform_data *pdata = dev_get_platdata(&pdev->dev); struct clk *clk; diff --git a/fs/exec.c b/fs/exec.c index 4aa19b24f281..ee43597cb453 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -1578,11 +1578,10 @@ static void check_unsafe_exec(struct linux_binprm *bprm) * will be able to manipulate the current directory, etc. * It would be nice to force an unshare instead...
*/ - t = p; n_fs = 1; spin_lock(&p->fs->lock); rcu_read_lock(); - while_each_thread(p, t) { + for_other_threads(p, t) { if (t->fs == p->fs) n_fs++; } diff --git a/fs/freevxfs/vxfs_bmap.c b/fs/freevxfs/vxfs_bmap.c index de2a5bccb930..26d367e3668d 100644 --- a/fs/freevxfs/vxfs_bmap.c +++ b/fs/freevxfs/vxfs_bmap.c @@ -29,7 +29,7 @@ vxfs_typdump(struct vxfs_typed *typ) /** * vxfs_bmap_ext4 - do bmap for ext4 extents * @ip: pointer to the inode we do bmap for - * @iblock: logical block. + * @bn: logical block. * * Description: * vxfs_bmap_ext4 performs the bmap operation for inodes with @@ -97,7 +97,7 @@ fail_buf: * vxfs_bmap_indir reads a &struct vxfs_typed at @indir * and performs the type-defined action. * - * Return Value: + * Returns: * The physical block number on success, else Zero. * * Note: @@ -179,7 +179,7 @@ out: * Description: * Performs the bmap operation for typed extents. * - * Return Value: + * Returns: * The physical block number on success, else Zero. */ static daddr_t @@ -243,7 +243,7 @@ vxfs_bmap_typed(struct inode *ip, long iblock) * vxfs_bmap1 performs a logical to physical block mapping * for vxfs-internal purposes. * - * Return Value: + * Returns: * The physical block number on success, else Zero. */ daddr_t diff --git a/fs/freevxfs/vxfs_immed.c b/fs/freevxfs/vxfs_immed.c index 9b49ec36e667..ed51fcd34757 100644 --- a/fs/freevxfs/vxfs_immed.c +++ b/fs/freevxfs/vxfs_immed.c @@ -15,7 +15,7 @@ /** * vxfs_immed_read_folio - read part of an immed inode into pagecache - * @file: file context (unused) + * @fp: file context (unused) * @folio: folio to fill in. * * Description: diff --git a/fs/freevxfs/vxfs_lookup.c b/fs/freevxfs/vxfs_lookup.c index f04ba2ed1e1a..1b0bca8b4cc6 100644 --- a/fs/freevxfs/vxfs_lookup.c +++ b/fs/freevxfs/vxfs_lookup.c @@ -177,8 +177,7 @@ vxfs_lookup(struct inode *dip, struct dentry *dp, unsigned int flags) /** * vxfs_readdir - read a directory * @fp: the directory to read - * @retp: return buffer - * @filler: filldir callback + * @ctx: dir_context for filldir/readdir * * Description: * vxfs_readdir fills @ctx with directory entries from @fp diff --git a/fs/jffs2/debug.c b/fs/jffs2/debug.c index 9d26b1b9fc01..0925caab23c4 100644 --- a/fs/jffs2/debug.c +++ b/fs/jffs2/debug.c @@ -157,7 +157,7 @@ __jffs2_dbg_prewrite_paranoia_check(struct jffs2_sb_info *c, kfree(buf); } -void __jffs2_dbg_superblock_counts(struct jffs2_sb_info *c) +static void __jffs2_dbg_superblock_counts(struct jffs2_sb_info *c) { struct jffs2_eraseblock *jeb; uint32_t free = 0, dirty = 0, used = 0, wasted = 0, diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c index 5710833ac1cc..0131d83b912d 100644 --- a/fs/nilfs2/btnode.c +++ b/fs/nilfs2/btnode.c @@ -64,8 +64,8 @@ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr) set_buffer_mapped(bh); set_buffer_uptodate(bh); - unlock_page(bh->b_page); - put_page(bh->b_page); + folio_unlock(bh->b_folio); + folio_put(bh->b_folio); return bh; } @@ -75,7 +75,7 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr, { struct buffer_head *bh; struct inode *inode = btnc->host; - struct page *page; + struct folio *folio; int err; bh = nilfs_grab_buffer(inode, btnc, blocknr, BIT(BH_NILFS_Node)); @@ -83,7 +83,7 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr, return -ENOMEM; err = -EEXIST; /* internal code */ - page = bh->b_page; + folio = bh->b_folio; if (buffer_uptodate(bh) || buffer_dirty(bh)) goto found; @@ -130,8 +130,8 @@ found: *pbh = bh; out_locked: - unlock_page(page); -
put_page(page); + folio_unlock(folio); + folio_put(folio); return err; } @@ -145,19 +145,19 @@ out_locked: void nilfs_btnode_delete(struct buffer_head *bh) { struct address_space *mapping; - struct page *page = bh->b_page; - pgoff_t index = page_index(page); + struct folio *folio = bh->b_folio; + pgoff_t index = folio->index; int still_dirty; - get_page(page); - lock_page(page); - wait_on_page_writeback(page); + folio_get(folio); + folio_lock(folio); + folio_wait_writeback(folio); nilfs_forget_buffer(bh); - still_dirty = PageDirty(page); - mapping = page->mapping; - unlock_page(page); - put_page(page); + still_dirty = folio_test_dirty(folio); + mapping = folio->mapping; + folio_unlock(folio); + folio_put(folio); if (!still_dirty && mapping) invalidate_inode_pages2_range(mapping, index, index); @@ -185,23 +185,23 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc, ctxt->newbh = NULL; if (inode->i_blkbits == PAGE_SHIFT) { - struct page *opage = obh->b_page; - lock_page(opage); + struct folio *ofolio = obh->b_folio; + folio_lock(ofolio); retry: /* BUG_ON(oldkey != obh->b_folio->index); */ - if (unlikely(oldkey != opage->index)) - NILFS_PAGE_BUG(opage, + if (unlikely(oldkey != ofolio->index)) + NILFS_FOLIO_BUG(ofolio, "invalid oldkey %lld (newkey=%lld)", (unsigned long long)oldkey, (unsigned long long)newkey); xa_lock_irq(&btnc->i_pages); - err = __xa_insert(&btnc->i_pages, newkey, opage, GFP_NOFS); + err = __xa_insert(&btnc->i_pages, newkey, ofolio, GFP_NOFS); xa_unlock_irq(&btnc->i_pages); /* - * Note: page->index will not change to newkey until + * Note: folio->index will not change to newkey until * nilfs_btnode_commit_change_key() will be called. - * To protect the page in intermediate state, the page lock + * To protect the folio in intermediate state, the folio lock * is held. 
*/ if (!err) @@ -213,7 +213,7 @@ retry: if (!err) goto retry; /* fallback to copy mode */ - unlock_page(opage); + folio_unlock(ofolio); } nbh = nilfs_btnode_create_block(btnc, newkey); @@ -225,7 +225,7 @@ retry: return 0; failed_unlock: - unlock_page(obh->b_page); + folio_unlock(obh->b_folio); return err; } @@ -238,15 +238,15 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc, { struct buffer_head *obh = ctxt->bh, *nbh = ctxt->newbh; __u64 oldkey = ctxt->oldkey, newkey = ctxt->newkey; - struct page *opage; + struct folio *ofolio; if (oldkey == newkey) return; if (nbh == NULL) { /* blocksize == pagesize */ - opage = obh->b_page; - if (unlikely(oldkey != opage->index)) - NILFS_PAGE_BUG(opage, + ofolio = obh->b_folio; + if (unlikely(oldkey != ofolio->index)) + NILFS_FOLIO_BUG(ofolio, "invalid oldkey %lld (newkey=%lld)", (unsigned long long)oldkey, (unsigned long long)newkey); @@ -257,8 +257,8 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc, __xa_set_mark(&btnc->i_pages, newkey, PAGECACHE_TAG_DIRTY); xa_unlock_irq(&btnc->i_pages); - opage->index = obh->b_blocknr = newkey; - unlock_page(opage); + ofolio->index = obh->b_blocknr = newkey; + folio_unlock(ofolio); } else { nilfs_copy_buffer(nbh, obh); mark_buffer_dirty(nbh); @@ -284,7 +284,7 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc, if (nbh == NULL) { /* blocksize == pagesize */ xa_erase_irq(&btnc->i_pages, newkey); - unlock_page(ctxt->bh->b_page); + folio_unlock(ctxt->bh->b_folio); } else { /* * When canceling a buffer that a prepare operation has diff --git a/fs/nilfs2/cpfile.c b/fs/nilfs2/cpfile.c index 9ebefb3acb0e..39136637f715 100644 --- a/fs/nilfs2/cpfile.c +++ b/fs/nilfs2/cpfile.c @@ -552,11 +552,29 @@ static ssize_t nilfs_cpfile_do_get_ssinfo(struct inode *cpfile, __u64 *cnop, } /** - * nilfs_cpfile_get_cpinfo - - * @cpfile: - * @cno: - * @ci: - * @nci: + * nilfs_cpfile_get_cpinfo - get information on checkpoints + * @cpfile: checkpoint file inode + * @cnop: place to pass a starting checkpoint number and receive a + * checkpoint number to continue the search + * @mode: mode of checkpoints that the caller wants to retrieve + * @buf: buffer for storing checkpoints' information + * @cisz: byte size of one checkpoint info item in array + * @nci: number of checkpoint info items to retrieve + * + * nilfs_cpfile_get_cpinfo() searches for checkpoints in @mode state + * starting from the checkpoint number stored in @cnop, and stores + * information about found checkpoints in @buf. + * The buffer pointed to by @buf must be large enough to store information + * for @nci checkpoints. If at least one checkpoint info item is + * successfully retrieved, @cnop is updated to point to the checkpoint + * number to continue searching. + * + * Return: Count of checkpoint info items stored in the output buffer on + * success, or the following negative error code on failure. + * * %-EINVAL - Invalid checkpoint mode. + * * %-ENOMEM - Insufficient memory available. + * * %-EIO - I/O error (including metadata corruption). + * * %-ENOENT - Invalid checkpoint number specified.
*/ ssize_t nilfs_cpfile_get_cpinfo(struct inode *cpfile, __u64 *cnop, int mode, diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c index de2073c47651..bc846b904b68 100644 --- a/fs/nilfs2/dir.c +++ b/fs/nilfs2/dir.c @@ -64,12 +64,6 @@ static inline unsigned int nilfs_chunk_size(struct inode *inode) return inode->i_sb->s_blocksize; } -static inline void nilfs_put_page(struct page *page) -{ - kunmap(page); - put_page(page); -} - /* * Return the offset into page `page_nr' of the last valid * byte in that page, plus one. @@ -84,48 +78,46 @@ static unsigned int nilfs_last_byte(struct inode *inode, unsigned long page_nr) return last_byte; } -static int nilfs_prepare_chunk(struct page *page, unsigned int from, +static int nilfs_prepare_chunk(struct folio *folio, unsigned int from, unsigned int to) { - loff_t pos = page_offset(page) + from; + loff_t pos = folio_pos(folio) + from; - return __block_write_begin(page, pos, to - from, nilfs_get_block); + return __block_write_begin(&folio->page, pos, to - from, nilfs_get_block); } -static void nilfs_commit_chunk(struct page *page, - struct address_space *mapping, - unsigned int from, unsigned int to) +static void nilfs_commit_chunk(struct folio *folio, + struct address_space *mapping, size_t from, size_t to) { struct inode *dir = mapping->host; - loff_t pos = page_offset(page) + from; - unsigned int len = to - from; - unsigned int nr_dirty, copied; + loff_t pos = folio_pos(folio) + from; + size_t copied, len = to - from; + unsigned int nr_dirty; int err; - nr_dirty = nilfs_page_count_clean_buffers(page, from, to); - copied = block_write_end(NULL, mapping, pos, len, len, page, NULL); + nr_dirty = nilfs_page_count_clean_buffers(&folio->page, from, to); + copied = block_write_end(NULL, mapping, pos, len, len, &folio->page, NULL); if (pos + copied > dir->i_size) i_size_write(dir, pos + copied); if (IS_DIRSYNC(dir)) nilfs_set_transaction_flag(NILFS_TI_SYNC); err = nilfs_set_file_dirty(dir, nr_dirty); WARN_ON(err); /* do not happen */ - unlock_page(page); + folio_unlock(folio); } -static bool nilfs_check_page(struct page *page) +static bool nilfs_check_folio(struct folio *folio, char *kaddr) { - struct inode *dir = page->mapping->host; + struct inode *dir = folio->mapping->host; struct super_block *sb = dir->i_sb; unsigned int chunk_size = nilfs_chunk_size(dir); - char *kaddr = page_address(page); - unsigned int offs, rec_len; - unsigned int limit = PAGE_SIZE; + size_t offs, rec_len; + size_t limit = folio_size(folio); struct nilfs_dir_entry *p; char *error; - if ((dir->i_size >> PAGE_SHIFT) == page->index) { - limit = dir->i_size & ~PAGE_MASK; + if (dir->i_size < folio_pos(folio) + limit) { + limit = dir->i_size - folio_pos(folio); if (limit & (chunk_size - 1)) goto Ebadsize; if (!limit) @@ -147,7 +139,7 @@ static bool nilfs_check_page(struct page *page) if (offs != limit) goto Eend; out: - SetPageChecked(page); + folio_set_checked(folio); return true; /* Too bad, we had an error */ @@ -170,8 +162,8 @@ Espan: error = "directory entry across blocks"; bad_entry: nilfs_error(sb, - "bad entry in directory #%lu: %s - offset=%lu, inode=%lu, rec_len=%d, name_len=%d", - dir->i_ino, error, (page->index << PAGE_SHIFT) + offs, + "bad entry in directory #%lu: %s - offset=%lu, inode=%lu, rec_len=%zd, name_len=%d", + dir->i_ino, error, (folio->index << PAGE_SHIFT) + offs, (unsigned long)le64_to_cpu(p->inode), rec_len, p->name_len); goto fail; @@ -179,29 +171,34 @@ Eend: p = (struct nilfs_dir_entry *)(kaddr + offs); nilfs_error(sb, "entry in directory #%lu spans the page 
boundary offset=%lu, inode=%lu", - dir->i_ino, (page->index << PAGE_SHIFT) + offs, + dir->i_ino, (folio->index << PAGE_SHIFT) + offs, (unsigned long)le64_to_cpu(p->inode)); fail: - SetPageError(page); + folio_set_error(folio); return false; } -static struct page *nilfs_get_page(struct inode *dir, unsigned long n) +static void *nilfs_get_folio(struct inode *dir, unsigned long n, + struct folio **foliop) { struct address_space *mapping = dir->i_mapping; - struct page *page = read_mapping_page(mapping, n, NULL); + struct folio *folio = read_mapping_folio(mapping, n, NULL); + void *kaddr; - if (!IS_ERR(page)) { - kmap(page); - if (unlikely(!PageChecked(page))) { - if (!nilfs_check_page(page)) - goto fail; - } + if (IS_ERR(folio)) + return folio; + + kaddr = kmap_local_folio(folio, 0); + if (unlikely(!folio_test_checked(folio))) { + if (!nilfs_check_folio(folio, kaddr)) + goto fail; } - return page; + + *foliop = folio; + return kaddr; fail: - nilfs_put_page(page); + folio_release_kmap(folio, kaddr); return ERR_PTR(-EIO); } @@ -275,21 +272,21 @@ static int nilfs_readdir(struct file *file, struct dir_context *ctx) for ( ; n < npages; n++, offset = 0) { char *kaddr, *limit; struct nilfs_dir_entry *de; - struct page *page = nilfs_get_page(inode, n); + struct folio *folio; - if (IS_ERR(page)) { + kaddr = nilfs_get_folio(inode, n, &folio); + if (IS_ERR(kaddr)) { nilfs_error(sb, "bad page in #%lu", inode->i_ino); ctx->pos += PAGE_SIZE - offset; return -EIO; } - kaddr = page_address(page); de = (struct nilfs_dir_entry *)(kaddr + offset); limit = kaddr + nilfs_last_byte(inode, n) - NILFS_DIR_REC_LEN(1); for ( ; (char *)de <= limit; de = nilfs_next_entry(de)) { if (de->rec_len == 0) { nilfs_error(sb, "zero-length directory entry"); - nilfs_put_page(page); + folio_release_kmap(folio, kaddr); return -EIO; } if (de->inode) { @@ -302,72 +299,67 @@ static int nilfs_readdir(struct file *file, struct dir_context *ctx) if (!dir_emit(ctx, de->name, de->name_len, le64_to_cpu(de->inode), t)) { - nilfs_put_page(page); + folio_release_kmap(folio, kaddr); return 0; } } ctx->pos += nilfs_rec_len_from_disk(de->rec_len); } - nilfs_put_page(page); + folio_release_kmap(folio, kaddr); } return 0; } /* - * nilfs_find_entry() + * nilfs_find_entry() * - * finds an entry in the specified directory with the wanted name. It - * returns the page in which the entry was found, and the entry itself - * (as a parameter - res_dir). Page is returned mapped and unlocked. - * Entry is guaranteed to be valid. + * Finds an entry in the specified directory with the wanted name. It + * returns the folio in which the entry was found, and the entry itself. + * The folio is mapped and unlocked. When the caller is finished with + * the entry, it should call folio_release_kmap(). + * + * On failure, returns NULL and the caller should ignore foliop. 
*/ -struct nilfs_dir_entry * -nilfs_find_entry(struct inode *dir, const struct qstr *qstr, - struct page **res_page) +struct nilfs_dir_entry *nilfs_find_entry(struct inode *dir, + const struct qstr *qstr, struct folio **foliop) { const unsigned char *name = qstr->name; int namelen = qstr->len; unsigned int reclen = NILFS_DIR_REC_LEN(namelen); unsigned long start, n; unsigned long npages = dir_pages(dir); - struct page *page = NULL; struct nilfs_inode_info *ei = NILFS_I(dir); struct nilfs_dir_entry *de; if (npages == 0) goto out; - /* OFFSET_CACHE */ - *res_page = NULL; - start = ei->i_dir_start_lookup; if (start >= npages) start = 0; n = start; do { - char *kaddr; + char *kaddr = nilfs_get_folio(dir, n, foliop); - page = nilfs_get_page(dir, n); - if (!IS_ERR(page)) { - kaddr = page_address(page); + if (!IS_ERR(kaddr)) { de = (struct nilfs_dir_entry *)kaddr; kaddr += nilfs_last_byte(dir, n) - reclen; while ((char *) de <= kaddr) { if (de->rec_len == 0) { nilfs_error(dir->i_sb, "zero-length directory entry"); - nilfs_put_page(page); + folio_release_kmap(*foliop, kaddr); goto out; } if (nilfs_match(namelen, name, de)) goto found; de = nilfs_next_entry(de); } - nilfs_put_page(page); + folio_release_kmap(*foliop, kaddr); } if (++n >= npages) n = 0; - /* next page is past the blocks we've got */ + /* next folio is past the blocks we've got */ if (unlikely(n > (dir->i_blocks >> (PAGE_SHIFT - 9)))) { nilfs_error(dir->i_sb, "dir %lu size %lld exceeds block count %llu", @@ -380,55 +372,47 @@ out: return NULL; found: - *res_page = page; ei->i_dir_start_lookup = n; return de; } -struct nilfs_dir_entry *nilfs_dotdot(struct inode *dir, struct page **p) +struct nilfs_dir_entry *nilfs_dotdot(struct inode *dir, struct folio **foliop) { - struct page *page = nilfs_get_page(dir, 0); - struct nilfs_dir_entry *de = NULL; + struct nilfs_dir_entry *de = nilfs_get_folio(dir, 0, foliop); - if (!IS_ERR(page)) { - de = nilfs_next_entry( - (struct nilfs_dir_entry *)page_address(page)); - *p = page; - } - return de; + if (IS_ERR(de)) + return NULL; + return nilfs_next_entry(de); } ino_t nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr) { ino_t res = 0; struct nilfs_dir_entry *de; - struct page *page; + struct folio *folio; - de = nilfs_find_entry(dir, qstr, &page); + de = nilfs_find_entry(dir, qstr, &folio); if (de) { res = le64_to_cpu(de->inode); - kunmap(page); - put_page(page); + folio_release_kmap(folio, de); } return res; } -/* Releases the page */ void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de, - struct page *page, struct inode *inode) + struct folio *folio, struct inode *inode) { - unsigned int from = (char *)de - (char *)page_address(page); - unsigned int to = from + nilfs_rec_len_from_disk(de->rec_len); - struct address_space *mapping = page->mapping; + size_t from = offset_in_folio(folio, de); + size_t to = from + nilfs_rec_len_from_disk(de->rec_len); + struct address_space *mapping = folio->mapping; int err; - lock_page(page); - err = nilfs_prepare_chunk(page, from, to); + folio_lock(folio); + err = nilfs_prepare_chunk(folio, from, to); BUG_ON(err); de->inode = cpu_to_le64(inode->i_ino); nilfs_set_de_type(de, inode); - nilfs_commit_chunk(page, mapping, from, to); - nilfs_put_page(page); + nilfs_commit_chunk(folio, mapping, from, to); inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir)); } @@ -443,31 +427,28 @@ int nilfs_add_link(struct dentry *dentry, struct inode *inode) unsigned int chunk_size = nilfs_chunk_size(dir); unsigned int reclen = NILFS_DIR_REC_LEN(namelen); 
unsigned short rec_len, name_len; - struct page *page = NULL; + struct folio *folio = NULL; struct nilfs_dir_entry *de; unsigned long npages = dir_pages(dir); unsigned long n; - char *kaddr; - unsigned int from, to; + size_t from, to; int err; /* * We take care of directory expansion in the same loop. - * This code plays outside i_size, so it locks the page + * This code plays outside i_size, so it locks the folio * to protect that region. */ for (n = 0; n <= npages; n++) { + char *kaddr = nilfs_get_folio(dir, n, &folio); char *dir_end; - page = nilfs_get_page(dir, n); - err = PTR_ERR(page); - if (IS_ERR(page)) - goto out; - lock_page(page); - kaddr = page_address(page); + if (IS_ERR(kaddr)) + return PTR_ERR(kaddr); + folio_lock(folio); dir_end = kaddr + nilfs_last_byte(dir, n); de = (struct nilfs_dir_entry *)kaddr; - kaddr += PAGE_SIZE - reclen; + kaddr += folio_size(folio) - reclen; while ((char *)de <= kaddr) { if ((char *)de == dir_end) { /* We hit i_size */ @@ -494,16 +475,16 @@ int nilfs_add_link(struct dentry *dentry, struct inode *inode) goto got_it; de = (struct nilfs_dir_entry *)((char *)de + rec_len); } - unlock_page(page); - nilfs_put_page(page); + folio_unlock(folio); + folio_release_kmap(folio, kaddr); } BUG(); return -EINVAL; got_it: - from = (char *)de - (char *)page_address(page); + from = offset_in_folio(folio, de); to = from + rec_len; - err = nilfs_prepare_chunk(page, from, to); + err = nilfs_prepare_chunk(folio, from, to); if (err) goto out_unlock; if (de->inode) { @@ -518,29 +499,28 @@ got_it: memcpy(de->name, name, namelen); de->inode = cpu_to_le64(inode->i_ino); nilfs_set_de_type(de, inode); - nilfs_commit_chunk(page, page->mapping, from, to); + nilfs_commit_chunk(folio, folio->mapping, from, to); inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir)); nilfs_mark_inode_dirty(dir); /* OFFSET_CACHE */ out_put: - nilfs_put_page(page); -out: + folio_release_kmap(folio, de); return err; out_unlock: - unlock_page(page); + folio_unlock(folio); goto out_put; } /* * nilfs_delete_entry deletes a directory entry by merging it with the - * previous entry. Page is up-to-date. Releases the page. + * previous entry. Folio is up-to-date. 
*/ -int nilfs_delete_entry(struct nilfs_dir_entry *dir, struct page *page) +int nilfs_delete_entry(struct nilfs_dir_entry *dir, struct folio *folio) { - struct address_space *mapping = page->mapping; + struct address_space *mapping = folio->mapping; struct inode *inode = mapping->host; - char *kaddr = page_address(page); - unsigned int from, to; + char *kaddr = (char *)((unsigned long)dir & ~(folio_size(folio) - 1)); + size_t from, to; struct nilfs_dir_entry *de, *pde = NULL; int err; @@ -559,17 +539,16 @@ int nilfs_delete_entry(struct nilfs_dir_entry *dir, struct page *page) de = nilfs_next_entry(de); } if (pde) - from = (char *)pde - (char *)page_address(page); - lock_page(page); - err = nilfs_prepare_chunk(page, from, to); + from = (char *)pde - kaddr; + folio_lock(folio); + err = nilfs_prepare_chunk(folio, from, to); BUG_ON(err); if (pde) pde->rec_len = nilfs_rec_len_to_disk(to - from); dir->inode = 0; - nilfs_commit_chunk(page, mapping, from, to); + nilfs_commit_chunk(folio, mapping, from, to); inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode)); out: - nilfs_put_page(page); return err; } @@ -579,21 +558,21 @@ out: int nilfs_make_empty(struct inode *inode, struct inode *parent) { struct address_space *mapping = inode->i_mapping; - struct page *page = grab_cache_page(mapping, 0); + struct folio *folio = filemap_grab_folio(mapping, 0); unsigned int chunk_size = nilfs_chunk_size(inode); struct nilfs_dir_entry *de; int err; void *kaddr; - if (!page) - return -ENOMEM; + if (IS_ERR(folio)) + return PTR_ERR(folio); - err = nilfs_prepare_chunk(page, 0, chunk_size); + err = nilfs_prepare_chunk(folio, 0, chunk_size); if (unlikely(err)) { - unlock_page(page); + folio_unlock(folio); goto fail; } - kaddr = kmap_atomic(page); + kaddr = kmap_local_folio(folio, 0); memset(kaddr, 0, chunk_size); de = (struct nilfs_dir_entry *)kaddr; de->name_len = 1; @@ -608,10 +587,10 @@ int nilfs_make_empty(struct inode *inode, struct inode *parent) de->inode = cpu_to_le64(parent->i_ino); memcpy(de->name, "..\0", 4); nilfs_set_de_type(de, inode); - kunmap_atomic(kaddr); - nilfs_commit_chunk(page, mapping, 0, chunk_size); + kunmap_local(kaddr); + nilfs_commit_chunk(folio, mapping, 0, chunk_size); fail: - put_page(page); + folio_put(folio); return err; } @@ -620,18 +599,17 @@ fail: */ int nilfs_empty_dir(struct inode *inode) { - struct page *page = NULL; + struct folio *folio = NULL; + char *kaddr; unsigned long i, npages = dir_pages(inode); for (i = 0; i < npages; i++) { - char *kaddr; struct nilfs_dir_entry *de; - page = nilfs_get_page(inode, i); - if (IS_ERR(page)) + kaddr = nilfs_get_folio(inode, i, &folio); + if (IS_ERR(kaddr)) continue; - kaddr = page_address(page); de = (struct nilfs_dir_entry *)kaddr; kaddr += nilfs_last_byte(inode, i) - NILFS_DIR_REC_LEN(1); @@ -657,12 +635,12 @@ int nilfs_empty_dir(struct inode *inode) } de = nilfs_next_entry(de); } - nilfs_put_page(page); + folio_release_kmap(folio, kaddr); } return 1; not_empty: - nilfs_put_page(page); + folio_release_kmap(folio, kaddr); return 0; } diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c index 740ce26d1e76..bec33b89a075 100644 --- a/fs/nilfs2/file.c +++ b/fs/nilfs2/file.c @@ -45,34 +45,36 @@ int nilfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; - struct page *page = vmf->page; + struct folio *folio = page_folio(vmf->page); struct inode *inode = file_inode(vma->vm_file); struct nilfs_transaction_info ti; + struct 
buffer_head *bh, *head; int ret = 0; if (unlikely(nilfs_near_disk_full(inode->i_sb->s_fs_info))) return VM_FAULT_SIGBUS; /* -ENOSPC */ sb_start_pagefault(inode->i_sb); - lock_page(page); - if (page->mapping != inode->i_mapping || - page_offset(page) >= i_size_read(inode) || !PageUptodate(page)) { - unlock_page(page); + folio_lock(folio); + if (folio->mapping != inode->i_mapping || + folio_pos(folio) >= i_size_read(inode) || + !folio_test_uptodate(folio)) { + folio_unlock(folio); ret = -EFAULT; /* make the VM retry the fault */ goto out; } /* - * check to see if the page is mapped already (no holes) + * check to see if the folio is mapped already (no holes) */ - if (PageMappedToDisk(page)) + if (folio_test_mappedtodisk(folio)) goto mapped; - if (page_has_buffers(page)) { - struct buffer_head *bh, *head; + head = folio_buffers(folio); + if (head) { int fully_mapped = 1; - bh = head = page_buffers(page); + bh = head; do { if (!buffer_mapped(bh)) { fully_mapped = 0; @@ -81,11 +83,11 @@ static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf) } while (bh = bh->b_this_page, bh != head); if (fully_mapped) { - SetPageMappedToDisk(page); + folio_set_mappedtodisk(folio); goto mapped; } } - unlock_page(page); + folio_unlock(folio); /* * fill hole blocks @@ -105,7 +107,7 @@ static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf) nilfs_transaction_commit(inode->i_sb); mapped: - wait_for_stable_page(page); + folio_wait_stable(folio); out: sb_end_pagefault(inode->i_sb); return vmf_fs_error(ret); diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c index 8beb2730929d..bf9a11d58817 100644 --- a/fs/nilfs2/gcinode.c +++ b/fs/nilfs2/gcinode.c @@ -98,8 +98,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff, *out_bh = bh; failed: - unlock_page(bh->b_page); - put_page(bh->b_page); + folio_unlock(bh->b_folio); + folio_put(bh->b_folio); if (unlikely(err)) brelse(bh); return err; diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c index 2ead36dfa2a3..9c334c722fc1 100644 --- a/fs/nilfs2/inode.c +++ b/fs/nilfs2/inode.c @@ -175,7 +175,8 @@ static int nilfs_writepages(struct address_space *mapping, static int nilfs_writepage(struct page *page, struct writeback_control *wbc) { - struct inode *inode = page->mapping->host; + struct folio *folio = page_folio(page); + struct inode *inode = folio->mapping->host; int err; if (sb_rdonly(inode->i_sb)) { @@ -185,13 +186,13 @@ static int nilfs_writepage(struct page *page, struct writeback_control *wbc) * have dirty pages that try to be flushed in background. * So, here we simply discard this dirty page. */ - nilfs_clear_dirty_page(page, false); - unlock_page(page); + nilfs_clear_folio_dirty(folio, false); + folio_unlock(folio); return -EROFS; } - redirty_page_for_writepage(wbc, page); - unlock_page(page); + folio_redirty_for_writepage(wbc, folio); + folio_unlock(folio); if (wbc->sync_mode == WB_SYNC_ALL) { err = nilfs_construct_segment(inode->i_sb); diff --git a/fs/nilfs2/ioctl.c b/fs/nilfs2/ioctl.c index 40ffade49f38..cfb6aca5ec38 100644 --- a/fs/nilfs2/ioctl.c +++ b/fs/nilfs2/ioctl.c @@ -872,16 +872,14 @@ static int nilfs_ioctl_clean_segments(struct inode *inode, struct file *filp, nsegs = argv[4].v_nmembs; if (argv[4].v_size != argsz[4]) goto out; - if (nsegs > UINT_MAX / sizeof(__u64)) - goto out; /* * argv[4] points to segment numbers this ioctl cleans. We - * use kmalloc() for its buffer because memory used for the - * segment numbers is enough small. 
+ * use kmalloc() for its buffer because the memory used for the + * segment numbers is small enough. */ - kbufs[4] = memdup_user((void __user *)(unsigned long)argv[4].v_base, - nsegs * sizeof(__u64)); + kbufs[4] = memdup_array_user((void __user *)(unsigned long)argv[4].v_base, + nsegs, sizeof(__u64)); if (IS_ERR(kbufs[4])) { ret = PTR_ERR(kbufs[4]); goto out; diff --git a/fs/nilfs2/mdt.c b/fs/nilfs2/mdt.c index c97c77a39668..e45c01a559c0 100644 --- a/fs/nilfs2/mdt.c +++ b/fs/nilfs2/mdt.c @@ -97,8 +97,8 @@ static int nilfs_mdt_create_block(struct inode *inode, unsigned long block, } failed_bh: - unlock_page(bh->b_page); - put_page(bh->b_page); + folio_unlock(bh->b_folio); + folio_put(bh->b_folio); brelse(bh); failed_unlock: @@ -158,8 +158,8 @@ nilfs_mdt_submit_block(struct inode *inode, unsigned long blkoff, blk_opf_t opf, *out_bh = bh; failed_bh: - unlock_page(bh->b_page); - put_page(bh->b_page); + folio_unlock(bh->b_folio); + folio_put(bh->b_folio); brelse(bh); failed: return ret; @@ -399,7 +399,8 @@ int nilfs_mdt_fetch_dirty(struct inode *inode) static int nilfs_mdt_write_page(struct page *page, struct writeback_control *wbc) { - struct inode *inode = page->mapping->host; + struct folio *folio = page_folio(page); + struct inode *inode = folio->mapping->host; struct super_block *sb; int err = 0; @@ -407,16 +408,16 @@ nilfs_mdt_write_page(struct page *page, struct writeback_control *wbc) /* * It means that filesystem was remounted in read-only * mode because of error or metadata corruption. But we - * have dirty pages that try to be flushed in background. - * So, here we simply discard this dirty page. + * have dirty folios that try to be flushed in background. + * So, here we simply discard this dirty folio. */ - nilfs_clear_dirty_page(page, false); - unlock_page(page); + nilfs_clear_folio_dirty(folio, false); + folio_unlock(folio); return -EROFS; } - redirty_page_for_writepage(wbc, page); - unlock_page(page); + folio_redirty_for_writepage(wbc, folio); + folio_unlock(folio); if (!inode) return 0; diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c index 2a4e7f4a8102..959bd9fb3d81 100644 --- a/fs/nilfs2/namei.c +++ b/fs/nilfs2/namei.c @@ -260,11 +260,11 @@ static int nilfs_do_unlink(struct inode *dir, struct dentry *dentry) { struct inode *inode; struct nilfs_dir_entry *de; - struct page *page; + struct folio *folio; int err; err = -ENOENT; - de = nilfs_find_entry(dir, &dentry->d_name, &page); + de = nilfs_find_entry(dir, &dentry->d_name, &folio); if (!de) goto out; @@ -279,7 +279,8 @@ static int nilfs_do_unlink(struct inode *dir, struct dentry *dentry) inode->i_ino, inode->i_nlink); set_nlink(inode, 1); } - err = nilfs_delete_entry(de, page); + err = nilfs_delete_entry(de, folio); + folio_release_kmap(folio, de); if (err) goto out; @@ -347,9 +348,9 @@ static int nilfs_rename(struct mnt_idmap *idmap, { struct inode *old_inode = d_inode(old_dentry); struct inode *new_inode = d_inode(new_dentry); - struct page *dir_page = NULL; + struct folio *dir_folio = NULL; struct nilfs_dir_entry *dir_de = NULL; - struct page *old_page; + struct folio *old_folio; struct nilfs_dir_entry *old_de; struct nilfs_transaction_info ti; int err; @@ -362,19 +363,19 @@ static int nilfs_rename(struct mnt_idmap *idmap, return err; err = -ENOENT; - old_de = nilfs_find_entry(old_dir, &old_dentry->d_name, &old_page); + old_de = nilfs_find_entry(old_dir, &old_dentry->d_name, &old_folio); if (!old_de) goto out; if (S_ISDIR(old_inode->i_mode)) { err = -EIO; - dir_de = nilfs_dotdot(old_inode, &dir_page); + dir_de = 
nilfs_dotdot(old_inode, &dir_folio); if (!dir_de) goto out_old; } if (new_inode) { - struct page *new_page; + struct folio *new_folio; struct nilfs_dir_entry *new_de; err = -ENOTEMPTY; @@ -382,10 +383,11 @@ static int nilfs_rename(struct mnt_idmap *idmap, goto out_dir; err = -ENOENT; - new_de = nilfs_find_entry(new_dir, &new_dentry->d_name, &new_page); + new_de = nilfs_find_entry(new_dir, &new_dentry->d_name, &new_folio); if (!new_de) goto out_dir; - nilfs_set_link(new_dir, new_de, new_page, old_inode); + nilfs_set_link(new_dir, new_de, new_folio, old_inode); + folio_release_kmap(new_folio, new_de); nilfs_mark_inode_dirty(new_dir); inode_set_ctime_current(new_inode); if (dir_de) @@ -408,12 +410,15 @@ static int nilfs_rename(struct mnt_idmap *idmap, */ inode_set_ctime_current(old_inode); - nilfs_delete_entry(old_de, old_page); + nilfs_delete_entry(old_de, old_folio); if (dir_de) { - nilfs_set_link(old_inode, dir_de, dir_page, new_dir); + nilfs_set_link(old_inode, dir_de, dir_folio, new_dir); + folio_release_kmap(dir_folio, dir_de); drop_nlink(old_dir); } + folio_release_kmap(old_folio, old_de); + nilfs_mark_inode_dirty(old_dir); nilfs_mark_inode_dirty(old_inode); @@ -421,13 +426,10 @@ static int nilfs_rename(struct mnt_idmap *idmap, return err; out_dir: - if (dir_de) { - kunmap(dir_page); - put_page(dir_page); - } + if (dir_de) + folio_release_kmap(dir_folio, dir_de); out_old: - kunmap(old_page); - put_page(old_page); + folio_release_kmap(old_folio, old_de); out: nilfs_transaction_abort(old_dir->i_sb); return err; diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h index 8046490cd7fe..98cffaf0ac12 100644 --- a/fs/nilfs2/nilfs.h +++ b/fs/nilfs2/nilfs.h @@ -226,16 +226,16 @@ static inline __u32 nilfs_mask_flags(umode_t mode, __u32 flags) } /* dir.c */ -extern int nilfs_add_link(struct dentry *, struct inode *); -extern ino_t nilfs_inode_by_name(struct inode *, const struct qstr *); -extern int nilfs_make_empty(struct inode *, struct inode *); -extern struct nilfs_dir_entry * -nilfs_find_entry(struct inode *, const struct qstr *, struct page **); -extern int nilfs_delete_entry(struct nilfs_dir_entry *, struct page *); -extern int nilfs_empty_dir(struct inode *); -extern struct nilfs_dir_entry *nilfs_dotdot(struct inode *, struct page **); -extern void nilfs_set_link(struct inode *, struct nilfs_dir_entry *, - struct page *, struct inode *); +int nilfs_add_link(struct dentry *, struct inode *); +ino_t nilfs_inode_by_name(struct inode *, const struct qstr *); +int nilfs_make_empty(struct inode *, struct inode *); +struct nilfs_dir_entry *nilfs_find_entry(struct inode *, const struct qstr *, + struct folio **); +int nilfs_delete_entry(struct nilfs_dir_entry *, struct folio *); +int nilfs_empty_dir(struct inode *); +struct nilfs_dir_entry *nilfs_dotdot(struct inode *, struct folio **); +void nilfs_set_link(struct inode *, struct nilfs_dir_entry *, + struct folio *, struct inode *); /* file.c */ extern int nilfs_sync_file(struct file *, loff_t, loff_t, int); diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c index 06b04758f289..5c2eba1987bd 100644 --- a/fs/nilfs2/page.c +++ b/fs/nilfs2/page.c @@ -73,7 +73,7 @@ struct buffer_head *nilfs_grab_buffer(struct inode *inode, */ void nilfs_forget_buffer(struct buffer_head *bh) { - struct page *page = bh->b_page; + struct folio *folio = bh->b_folio; const unsigned long clear_bits = (BIT(BH_Uptodate) | BIT(BH_Dirty) | BIT(BH_Mapped) | BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) | @@ -81,12 +81,12 @@ void nilfs_forget_buffer(struct buffer_head *bh) 
lock_buffer(bh); set_mask_bits(&bh->b_state, clear_bits, 0); - if (nilfs_page_buffers_clean(page)) - __nilfs_clear_page_dirty(page); + if (nilfs_folio_buffers_clean(folio)) + __nilfs_clear_folio_dirty(folio); bh->b_blocknr = -1; - ClearPageUptodate(page); - ClearPageMappedToDisk(page); + folio_clear_uptodate(folio); + folio_clear_mappedtodisk(folio); unlock_buffer(bh); brelse(bh); } @@ -131,48 +131,49 @@ void nilfs_copy_buffer(struct buffer_head *dbh, struct buffer_head *sbh) } /** - * nilfs_page_buffers_clean - check if a page has dirty buffers or not. - * @page: page to be checked + * nilfs_folio_buffers_clean - Check if a folio has dirty buffers or not. + * @folio: Folio to be checked. * - * nilfs_page_buffers_clean() returns zero if the page has dirty buffers. - * Otherwise, it returns non-zero value. + * nilfs_folio_buffers_clean() returns false if the folio has dirty buffers. + * Otherwise, it returns true. */ -int nilfs_page_buffers_clean(struct page *page) +bool nilfs_folio_buffers_clean(struct folio *folio) { struct buffer_head *bh, *head; - bh = head = page_buffers(page); + bh = head = folio_buffers(folio); do { if (buffer_dirty(bh)) - return 0; + return false; bh = bh->b_this_page; } while (bh != head); - return 1; + return true; } -void nilfs_page_bug(struct page *page) +void nilfs_folio_bug(struct folio *folio) { + struct buffer_head *bh, *head; struct address_space *m; unsigned long ino; - if (unlikely(!page)) { - printk(KERN_CRIT "NILFS_PAGE_BUG(NULL)\n"); + if (unlikely(!folio)) { + printk(KERN_CRIT "NILFS_FOLIO_BUG(NULL)\n"); return; } - m = page->mapping; + m = folio->mapping; ino = m ? m->host->i_ino : 0; - printk(KERN_CRIT "NILFS_PAGE_BUG(%p): cnt=%d index#=%llu flags=0x%lx " + printk(KERN_CRIT "NILFS_FOLIO_BUG(%p): cnt=%d index#=%llu flags=0x%lx " "mapping=%p ino=%lu\n", - page, page_ref_count(page), - (unsigned long long)page->index, page->flags, m, ino); + folio, folio_ref_count(folio), + (unsigned long long)folio->index, folio->flags, m, ino); - if (page_has_buffers(page)) { - struct buffer_head *bh, *head; + head = folio_buffers(folio); + if (head) { int i = 0; - bh = head = page_buffers(page); + bh = head; do { printk(KERN_CRIT " BH[%d] %p: cnt=%d block#=%llu state=0x%lx\n", @@ -258,7 +259,7 @@ repeat: folio_lock(folio); if (unlikely(!folio_test_dirty(folio))) - NILFS_PAGE_BUG(&folio->page, "inconsistent dirty state"); + NILFS_FOLIO_BUG(folio, "inconsistent dirty state"); dfolio = filemap_grab_folio(dmap, folio->index); if (unlikely(IS_ERR(dfolio))) { @@ -268,7 +269,7 @@ repeat: break; } if (unlikely(!folio_buffers(folio))) - NILFS_PAGE_BUG(&folio->page, + NILFS_FOLIO_BUG(folio, "found empty page in dat page cache"); nilfs_copy_folio(dfolio, folio, true); @@ -379,7 +380,7 @@ void nilfs_clear_dirty_pages(struct address_space *mapping, bool silent) * was acquired. Skip processing in that case. 
*/ if (likely(folio->mapping == mapping)) - nilfs_clear_dirty_page(&folio->page, silent); + nilfs_clear_folio_dirty(folio, silent); folio_unlock(folio); } @@ -389,32 +390,33 @@ void nilfs_clear_dirty_pages(struct address_space *mapping, bool silent) } /** - * nilfs_clear_dirty_page - discard dirty page - * @page: dirty page that will be discarded + * nilfs_clear_folio_dirty - discard dirty folio + * @folio: dirty folio that will be discarded * @silent: suppress [true] or print [false] warning messages */ -void nilfs_clear_dirty_page(struct page *page, bool silent) +void nilfs_clear_folio_dirty(struct folio *folio, bool silent) { - struct inode *inode = page->mapping->host; + struct inode *inode = folio->mapping->host; struct super_block *sb = inode->i_sb; + struct buffer_head *bh, *head; - BUG_ON(!PageLocked(page)); + BUG_ON(!folio_test_locked(folio)); if (!silent) nilfs_warn(sb, "discard dirty page: offset=%lld, ino=%lu", - page_offset(page), inode->i_ino); + folio_pos(folio), inode->i_ino); - ClearPageUptodate(page); - ClearPageMappedToDisk(page); + folio_clear_uptodate(folio); + folio_clear_mappedtodisk(folio); - if (page_has_buffers(page)) { - struct buffer_head *bh, *head; + head = folio_buffers(folio); + if (head) { const unsigned long clear_bits = (BIT(BH_Uptodate) | BIT(BH_Dirty) | BIT(BH_Mapped) | BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) | BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected)); - bh = head = page_buffers(page); + bh = head; do { lock_buffer(bh); if (!silent) @@ -427,7 +429,7 @@ void nilfs_clear_dirty_page(struct page *page, bool silent) } while (bh = bh->b_this_page, bh != head); } - __nilfs_clear_page_dirty(page); + __nilfs_clear_folio_dirty(folio); } unsigned int nilfs_page_count_clean_buffers(struct page *page, @@ -457,22 +459,23 @@ unsigned int nilfs_page_count_clean_buffers(struct page *page, * 2) Some B-tree operations like insertion or deletion may dispose buffers * in dirty state, and this needs to cancel the dirty state of their pages. 
*/ -int __nilfs_clear_page_dirty(struct page *page) +void __nilfs_clear_folio_dirty(struct folio *folio) { - struct address_space *mapping = page->mapping; + struct address_space *mapping = folio->mapping; if (mapping) { xa_lock_irq(&mapping->i_pages); - if (test_bit(PG_dirty, &page->flags)) { - __xa_clear_mark(&mapping->i_pages, page_index(page), + if (folio_test_dirty(folio)) { + __xa_clear_mark(&mapping->i_pages, folio->index, PAGECACHE_TAG_DIRTY); xa_unlock_irq(&mapping->i_pages); - return clear_page_dirty_for_io(page); + folio_clear_dirty_for_io(folio); + return; } xa_unlock_irq(&mapping->i_pages); - return 0; + return; } - return TestClearPageDirty(page); + folio_clear_dirty(folio); } /** diff --git a/fs/nilfs2/page.h b/fs/nilfs2/page.h index d249ea1cefff..7e1a2c455a10 100644 --- a/fs/nilfs2/page.h +++ b/fs/nilfs2/page.h @@ -30,18 +30,18 @@ BUFFER_FNS(NILFS_Checked, nilfs_checked) /* buffer is verified */ BUFFER_FNS(NILFS_Redirected, nilfs_redirected) /* redirected to a copy */ -int __nilfs_clear_page_dirty(struct page *); +void __nilfs_clear_folio_dirty(struct folio *); struct buffer_head *nilfs_grab_buffer(struct inode *, struct address_space *, unsigned long, unsigned long); void nilfs_forget_buffer(struct buffer_head *); void nilfs_copy_buffer(struct buffer_head *, struct buffer_head *); -int nilfs_page_buffers_clean(struct page *); -void nilfs_page_bug(struct page *); +bool nilfs_folio_buffers_clean(struct folio *); +void nilfs_folio_bug(struct folio *); int nilfs_copy_dirty_pages(struct address_space *, struct address_space *); void nilfs_copy_back_pages(struct address_space *, struct address_space *); -void nilfs_clear_dirty_page(struct page *, bool); +void nilfs_clear_folio_dirty(struct folio *, bool); void nilfs_clear_dirty_pages(struct address_space *, bool); unsigned int nilfs_page_count_clean_buffers(struct page *, unsigned int, unsigned int); @@ -49,7 +49,7 @@ unsigned long nilfs_find_uncommitted_extent(struct inode *inode, sector_t start_blk, sector_t *blkoff); -#define NILFS_PAGE_BUG(page, m, a...) \ - do { nilfs_page_bug(page); BUG(); } while (0) +#define NILFS_FOLIO_BUG(folio, m, a...) \ + do { nilfs_folio_bug(folio); BUG(); } while (0) #endif /* _NILFS_PAGE_H */ diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c index 55e31cc903d1..2590a0860eab 100644 --- a/fs/nilfs2/segment.c +++ b/fs/nilfs2/segment.c @@ -1665,39 +1665,39 @@ static int nilfs_segctor_assign(struct nilfs_sc_info *sci, int mode) return 0; } -static void nilfs_begin_page_io(struct page *page) +static void nilfs_begin_folio_io(struct folio *folio) { - if (!page || PageWriteback(page)) + if (!folio || folio_test_writeback(folio)) /* * For split b-tree node pages, this function may be called * twice. We ignore the 2nd or later calls by this check. 
*/ return; - lock_page(page); - clear_page_dirty_for_io(page); - set_page_writeback(page); - unlock_page(page); + folio_lock(folio); + folio_clear_dirty_for_io(folio); + folio_start_writeback(folio); + folio_unlock(folio); } static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci) { struct nilfs_segment_buffer *segbuf; - struct page *bd_page = NULL, *fs_page = NULL; + struct folio *bd_folio = NULL, *fs_folio = NULL; list_for_each_entry(segbuf, &sci->sc_segbufs, sb_list) { struct buffer_head *bh; list_for_each_entry(bh, &segbuf->sb_segsum_buffers, b_assoc_buffers) { - if (bh->b_page != bd_page) { - if (bd_page) { - lock_page(bd_page); - clear_page_dirty_for_io(bd_page); - set_page_writeback(bd_page); - unlock_page(bd_page); + if (bh->b_folio != bd_folio) { + if (bd_folio) { + folio_lock(bd_folio); + folio_clear_dirty_for_io(bd_folio); + folio_start_writeback(bd_folio); + folio_unlock(bd_folio); } - bd_page = bh->b_page; + bd_folio = bh->b_folio; } } @@ -1705,28 +1705,28 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci) b_assoc_buffers) { set_buffer_async_write(bh); if (bh == segbuf->sb_super_root) { - if (bh->b_page != bd_page) { - lock_page(bd_page); - clear_page_dirty_for_io(bd_page); - set_page_writeback(bd_page); - unlock_page(bd_page); - bd_page = bh->b_page; + if (bh->b_folio != bd_folio) { + folio_lock(bd_folio); + folio_clear_dirty_for_io(bd_folio); + folio_start_writeback(bd_folio); + folio_unlock(bd_folio); + bd_folio = bh->b_folio; } break; } - if (bh->b_page != fs_page) { - nilfs_begin_page_io(fs_page); - fs_page = bh->b_page; + if (bh->b_folio != fs_folio) { + nilfs_begin_folio_io(fs_folio); + fs_folio = bh->b_folio; } } } - if (bd_page) { - lock_page(bd_page); - clear_page_dirty_for_io(bd_page); - set_page_writeback(bd_page); - unlock_page(bd_page); + if (bd_folio) { + folio_lock(bd_folio); + folio_clear_dirty_for_io(bd_folio); + folio_start_writeback(bd_folio); + folio_unlock(bd_folio); } - nilfs_begin_page_io(fs_page); + nilfs_begin_folio_io(fs_folio); } static int nilfs_segctor_write(struct nilfs_sc_info *sci, @@ -1739,17 +1739,18 @@ static int nilfs_segctor_write(struct nilfs_sc_info *sci, return ret; } -static void nilfs_end_page_io(struct page *page, int err) +static void nilfs_end_folio_io(struct folio *folio, int err) { - if (!page) + if (!folio) return; - if (buffer_nilfs_node(page_buffers(page)) && !PageWriteback(page)) { + if (buffer_nilfs_node(folio_buffers(folio)) && + !folio_test_writeback(folio)) { /* * For b-tree node pages, this function may be called twice * or more because they might be split in a segment. */ - if (PageDirty(page)) { + if (folio_test_dirty(folio)) { /* * For pages holding split b-tree node buffers, dirty * flag on the buffers may be cleared discretely. @@ -1757,30 +1758,30 @@ static void nilfs_end_page_io(struct page *page, int err) * remaining buffers, and it must be cancelled if * all the buffers get cleaned later. 
*/ - lock_page(page); - if (nilfs_page_buffers_clean(page)) - __nilfs_clear_page_dirty(page); - unlock_page(page); + folio_lock(folio); + if (nilfs_folio_buffers_clean(folio)) + __nilfs_clear_folio_dirty(folio); + folio_unlock(folio); } return; } if (!err) { - if (!nilfs_page_buffers_clean(page)) - __set_page_dirty_nobuffers(page); - ClearPageError(page); + if (!nilfs_folio_buffers_clean(folio)) + filemap_dirty_folio(folio->mapping, folio); + folio_clear_error(folio); } else { - __set_page_dirty_nobuffers(page); - SetPageError(page); + filemap_dirty_folio(folio->mapping, folio); + folio_set_error(folio); } - end_page_writeback(page); + folio_end_writeback(folio); } static void nilfs_abort_logs(struct list_head *logs, int err) { struct nilfs_segment_buffer *segbuf; - struct page *bd_page = NULL, *fs_page = NULL; + struct folio *bd_folio = NULL, *fs_folio = NULL; struct buffer_head *bh; if (list_empty(logs)) @@ -1790,10 +1791,10 @@ static void nilfs_abort_logs(struct list_head *logs, int err) list_for_each_entry(bh, &segbuf->sb_segsum_buffers, b_assoc_buffers) { clear_buffer_uptodate(bh); - if (bh->b_page != bd_page) { - if (bd_page) - end_page_writeback(bd_page); - bd_page = bh->b_page; + if (bh->b_folio != bd_folio) { + if (bd_folio) + folio_end_writeback(bd_folio); + bd_folio = bh->b_folio; } } @@ -1802,22 +1803,22 @@ static void nilfs_abort_logs(struct list_head *logs, int err) clear_buffer_async_write(bh); if (bh == segbuf->sb_super_root) { clear_buffer_uptodate(bh); - if (bh->b_page != bd_page) { - end_page_writeback(bd_page); - bd_page = bh->b_page; + if (bh->b_folio != bd_folio) { + folio_end_writeback(bd_folio); + bd_folio = bh->b_folio; } break; } - if (bh->b_page != fs_page) { - nilfs_end_page_io(fs_page, err); - fs_page = bh->b_page; + if (bh->b_folio != fs_folio) { + nilfs_end_folio_io(fs_folio, err); + fs_folio = bh->b_folio; } } } - if (bd_page) - end_page_writeback(bd_page); + if (bd_folio) + folio_end_writeback(bd_folio); - nilfs_end_page_io(fs_page, err); + nilfs_end_folio_io(fs_folio, err); } static void nilfs_segctor_abort_construction(struct nilfs_sc_info *sci, @@ -1859,7 +1860,7 @@ static void nilfs_set_next_segment(struct the_nilfs *nilfs, static void nilfs_segctor_complete_write(struct nilfs_sc_info *sci) { struct nilfs_segment_buffer *segbuf; - struct page *bd_page = NULL, *fs_page = NULL; + struct folio *bd_folio = NULL, *fs_folio = NULL; struct the_nilfs *nilfs = sci->sc_super->s_fs_info; int update_sr = false; @@ -1870,21 +1871,21 @@ static void nilfs_segctor_complete_write(struct nilfs_sc_info *sci) b_assoc_buffers) { set_buffer_uptodate(bh); clear_buffer_dirty(bh); - if (bh->b_page != bd_page) { - if (bd_page) - end_page_writeback(bd_page); - bd_page = bh->b_page; + if (bh->b_folio != bd_folio) { + if (bd_folio) + folio_end_writeback(bd_folio); + bd_folio = bh->b_folio; } } /* - * We assume that the buffers which belong to the same page + * We assume that the buffers which belong to the same folio * continue over the buffer list. - * Under this assumption, the last BHs of pages is - * identifiable by the discontinuity of bh->b_page - * (page != fs_page). + * Under this assumption, the last BHs of folios are + * identifiable by the discontinuity of bh->b_folio + * (folio != fs_folio). * * For B-tree node blocks, however, this assumption is not - * guaranteed. The cleanup code of B-tree node pages needs + * guaranteed. The cleanup code of B-tree node folios needs * special care.
*/ list_for_each_entry(bh, &segbuf->sb_payload_buffers, @@ -1897,16 +1898,16 @@ static void nilfs_segctor_complete_write(struct nilfs_sc_info *sci) set_mask_bits(&bh->b_state, clear_bits, set_bits); if (bh == segbuf->sb_super_root) { - if (bh->b_page != bd_page) { - end_page_writeback(bd_page); - bd_page = bh->b_page; + if (bh->b_folio != bd_folio) { + folio_end_writeback(bd_folio); + bd_folio = bh->b_folio; } update_sr = true; break; } - if (bh->b_page != fs_page) { - nilfs_end_page_io(fs_page, 0); - fs_page = bh->b_page; + if (bh->b_folio != fs_folio) { + nilfs_end_folio_io(fs_folio, 0); + fs_folio = bh->b_folio; } } @@ -1920,13 +1921,13 @@ static void nilfs_segctor_complete_write(struct nilfs_sc_info *sci) } } /* - * Since pages may continue over multiple segment buffers, - * end of the last page must be checked outside of the loop. + * Since folios may continue over multiple segment buffers, + * end of the last folio must be checked outside of the loop. */ - if (bd_page) - end_page_writeback(bd_page); + if (bd_folio) + folio_end_writeback(bd_folio); - nilfs_end_page_io(fs_page, 0); + nilfs_end_folio_io(fs_folio, 0); nilfs_drop_collected_inodes(&sci->sc_dirty_files); @@ -2587,6 +2588,7 @@ static int nilfs_segctor_thread(void *arg) "segctord starting. Construction interval = %lu seconds, CP frequency < %lu seconds", sci->sc_interval / HZ, sci->sc_mjcp_freq / HZ); + set_freezable(); spin_lock(&sci->sc_state_lock); loop: for (;;) { diff --git a/fs/nilfs2/sufile.c b/fs/nilfs2/sufile.c index 58ca7c936393..0a8119456c21 100644 --- a/fs/nilfs2/sufile.c +++ b/fs/nilfs2/sufile.c @@ -471,10 +471,15 @@ void nilfs_sufile_do_free(struct inode *sufile, __u64 segnum, kunmap_atomic(kaddr); return; } - WARN_ON(nilfs_segment_usage_error(su)); - WARN_ON(!nilfs_segment_usage_dirty(su)); + if (unlikely(nilfs_segment_usage_error(su))) + nilfs_warn(sufile->i_sb, "free segment %llu marked in error", + (unsigned long long)segnum); sudirty = nilfs_segment_usage_dirty(su); + if (unlikely(!sudirty)) + nilfs_warn(sufile->i_sb, "free unallocated segment %llu", + (unsigned long long)segnum); + nilfs_segment_usage_set_clean(su); kunmap_atomic(kaddr); mark_buffer_dirty(su_bh); diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c index 8ba8c4c50770..e8df6430444b 100644 --- a/fs/squashfs/file.c +++ b/fs/squashfs/file.c @@ -544,7 +544,8 @@ static void squashfs_readahead(struct readahead_control *ractl) struct squashfs_page_actor *actor; unsigned int nr_pages = 0; struct page **pages; - int i, file_end = i_size_read(inode) >> msblk->block_log; + int i; + loff_t file_end = i_size_read(inode) >> msblk->block_log; unsigned int max_pages = 1UL << shift; readahead_expand(ractl, start, (len | mask) + 1); diff --git a/fs/squashfs/file_direct.c b/fs/squashfs/file_direct.c index f1ccad519e28..763a3f7a75f6 100644 --- a/fs/squashfs/file_direct.c +++ b/fs/squashfs/file_direct.c @@ -26,10 +26,10 @@ int squashfs_readpage_block(struct page *target_page, u64 block, int bsize, struct inode *inode = target_page->mapping->host; struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info; - int file_end = (i_size_read(inode) - 1) >> PAGE_SHIFT; + loff_t file_end = (i_size_read(inode) - 1) >> PAGE_SHIFT; int mask = (1 << (msblk->block_log - PAGE_SHIFT)) - 1; - int start_index = target_page->index & ~mask; - int end_index = start_index | mask; + loff_t start_index = target_page->index & ~mask; + loff_t end_index = start_index | mask; int i, n, pages, bytes, res = -ENOMEM; struct page **page; struct squashfs_page_actor *actor; diff --git 
a/include/linux/crash_core.h b/include/linux/crash_core.h index 5126a4fecb44..9eaeaafe0cad 100644 --- a/include/linux/crash_core.h +++ b/include/linux/crash_core.h @@ -87,12 +87,6 @@ Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type, void *data, size_t data_len); void final_note(Elf_Word *buf); -#ifdef CONFIG_ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION -#ifndef DEFAULT_CRASH_KERNEL_LOW_SIZE -#define DEFAULT_CRASH_KERNEL_LOW_SIZE (128UL << 20) -#endif -#endif - int __init parse_crashkernel(char *cmdline, unsigned long long system_ram, unsigned long long *crash_size, unsigned long long *crash_base, unsigned long long *low_size, bool *high); diff --git a/include/linux/crc-ccitt.h b/include/linux/crc-ccitt.h index 72c92c396bb8..cd4f420231ba 100644 --- a/include/linux/crc-ccitt.h +++ b/include/linux/crc-ccitt.h @@ -5,19 +5,12 @@ #include <linux/types.h> extern u16 const crc_ccitt_table[256]; -extern u16 const crc_ccitt_false_table[256]; extern u16 crc_ccitt(u16 crc, const u8 *buffer, size_t len); -extern u16 crc_ccitt_false(u16 crc, const u8 *buffer, size_t len); static inline u16 crc_ccitt_byte(u16 crc, const u8 c) { return (crc >> 8) ^ crc_ccitt_table[(crc ^ c) & 0xff]; } -static inline u16 crc_ccitt_false_byte(u16 crc, const u8 c) -{ - return (crc << 8) ^ crc_ccitt_false_table[(crc >> 8) ^ c]; -} - #endif /* _LINUX_CRC_CCITT_H */ diff --git a/include/linux/init_task.h b/include/linux/init_task.h index 40fc5813cf93..bccb3f1f6262 100644 --- a/include/linux/init_task.h +++ b/include/linux/init_task.h @@ -37,13 +37,6 @@ extern struct cred init_cred; #define INIT_TASK_COMM "swapper" -/* Attach to the init_task data structure for proper alignment */ -#ifdef CONFIG_ARCH_TASK_STRUCT_ON_STACK -#define __init_task_data __section(".data..init_task") -#else -#define __init_task_data /**/ -#endif - /* Attach to the thread_info data structure for proper alignment */ #define __init_thread_info __section(".data..init_thread_info") diff --git a/include/linux/ioport.h b/include/linux/ioport.h index 14f5cfabbbc8..db7fe25f3370 100644 --- a/include/linux/ioport.h +++ b/include/linux/ioport.h @@ -331,6 +331,9 @@ extern int walk_system_ram_res(u64 start, u64 end, void *arg, int (*func)(struct resource *, void *)); extern int +walk_system_ram_res_rev(u64 start, u64 end, void *arg, + int (*func)(struct resource *, void *)); +extern int walk_iomem_res_desc(unsigned long desc, unsigned long flags, u64 start, u64 end, void *arg, int (*func)(struct resource *, void *)); diff --git a/include/linux/kexec.h b/include/linux/kexec.h index 8227455192b7..400cb6c02176 100644 --- a/include/linux/kexec.h +++ b/include/linux/kexec.h @@ -403,7 +403,7 @@ bool kexec_load_permitted(int kexec_image_type); /* List of defined/legal kexec file flags */ #define KEXEC_FILE_FLAGS (KEXEC_FILE_UNLOAD | KEXEC_FILE_ON_CRASH | \ - KEXEC_FILE_NO_INITRAMFS) + KEXEC_FILE_NO_INITRAMFS | KEXEC_FILE_DEBUG) /* flag to track if kexec reboot is in progress */ extern bool kexec_in_progress; @@ -500,6 +500,13 @@ static inline int crash_hotplug_memory_support(void) { return 0; } static inline unsigned int crash_get_elfcorehdr_size(void) { return 0; } #endif +extern bool kexec_file_dbg_print; + +#define kexec_dprintk(fmt, ...) \ + printk("%s" fmt, \ + kexec_file_dbg_print ?
KERN_INFO : KERN_DEBUG, \ + ##__VA_ARGS__) + #else /* !CONFIG_KEXEC_CORE */ struct pt_regs; struct task_struct; diff --git a/include/linux/sched.h b/include/linux/sched.h index 03bfe9ab2951..d3097f0682d7 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1950,9 +1950,7 @@ extern void ia64_set_curr_task(int cpu, struct task_struct *p); void yield(void); union thread_union { -#ifndef CONFIG_ARCH_TASK_STRUCT_ON_STACK struct task_struct task; -#endif #ifndef CONFIG_THREAD_INFO_IN_TASK struct thread_info thread_info; #endif diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h index 3499c1a8b929..015c0e3a3e1d 100644 --- a/include/linux/sched/signal.h +++ b/include/linux/sched/signal.h @@ -432,7 +432,6 @@ static inline bool fault_signal_pending(vm_fault_t fault_flags, * This is required every time the blocked sigset_t changes. * callers must hold sighand->siglock. */ -extern void recalc_sigpending_and_wake(struct task_struct *t); extern void recalc_sigpending(void); extern void calculate_sigpending(void); @@ -646,6 +645,9 @@ extern bool current_is_single_threaded(void); #define while_each_thread(g, t) \ while ((t = next_thread(t)) != g) +#define for_other_threads(p, t) \ + for (t = p; (t = next_thread(t)) != p; ) + #define __for_each_thread(signal, t) \ list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node, \ lockdep_is_held(&tasklist_lock)) diff --git a/include/linux/surface_aggregator/serial_hub.h b/include/linux/surface_aggregator/serial_hub.h index 5c4ae1a26183..d8dbef6b7fc2 100644 --- a/include/linux/surface_aggregator/serial_hub.h +++ b/include/linux/surface_aggregator/serial_hub.h @@ -12,7 +12,7 @@ #ifndef _LINUX_SURFACE_AGGREGATOR_SERIAL_HUB_H #define _LINUX_SURFACE_AGGREGATOR_SERIAL_HUB_H -#include <linux/crc-ccitt.h> +#include <linux/crc-itu-t.h> #include #include #include @@ -188,7 +188,7 @@ static_assert(sizeof(struct ssh_command) == 8); */ static inline u16 ssh_crc(const u8 *buf, size_t len) { - return crc_ccitt_false(0xffff, buf, len); + return crc_itu_t(0xffff, buf, len); } /* diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h index 01766dd839b0..c17bb096ea68 100644 --- a/include/uapi/linux/kexec.h +++ b/include/uapi/linux/kexec.h @@ -25,6 +25,7 @@ #define KEXEC_FILE_UNLOAD 0x00000001 #define KEXEC_FILE_ON_CRASH 0x00000002 #define KEXEC_FILE_NO_INITRAMFS 0x00000004 +#define KEXEC_FILE_DEBUG 0x00000008 /* These values match the ELF architecture values. * Unless there is a good reason that should continue to be the case. diff --git a/init/Kconfig b/init/Kconfig index 9ffb103fc927..8df18f3a9748 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1676,6 +1676,56 @@ config MEMBARRIER If unsure, say Y. +config KCMP + bool "Enable kcmp() system call" if EXPERT + help + Enable the kernel resource comparison system call. It provides + user-space with the ability to compare two processes to see if they + share a common resource, such as a file descriptor or even virtual + memory space. + + If unsure, say N. + +config RSEQ + bool "Enable rseq() system call" if EXPERT + default y + depends on HAVE_RSEQ + select MEMBARRIER + help + Enable the restartable sequences system call. It provides a + user-space cache for the current CPU number value, which + speeds up getting the current CPU number from user-space, + as well as an ABI to speed up user-space operations on + per-CPU data. + + If unsure, say Y.
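To make the kcmp() entry above concrete, a minimal userspace sketch; the fds_share_file() wrapper and the raw syscall(2) invocation are illustrative, not part of this patch:

	#include <linux/kcmp.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Returns 0 when fd1 in pid1 and fd2 in pid2 refer to the same open
	 * file, a positive ordering value when they do not, and -1 with
	 * errno set (ENOSYS when CONFIG_KCMP is disabled). */
	static int fds_share_file(pid_t pid1, int fd1, pid_t pid2, int fd2)
	{
		return syscall(SYS_kcmp, pid1, pid2, KCMP_FILE, fd1, fd2);
	}
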
+ +config DEBUG_RSEQ + default n + bool "Enable debugging of rseq() system call" if EXPERT + depends on RSEQ && DEBUG_KERNEL + help + Enable extra debugging checks for the rseq system call. + + If unsure, say N. + +config CACHESTAT_SYSCALL + bool "Enable cachestat() system call" if EXPERT + default y + help + Enable the cachestat system call, which queries the page cache + statistics of a file (number of cached pages, dirty pages, + pages marked for writeback, (recently) evicted pages). + + If unsure say Y here. + +config PC104 + bool "PC/104 support" if EXPERT + help + Expose PC/104 form factor device drivers and options available for + selection and configuration. Enable this option if your target + machine has a PC/104 bus. + config KALLSYMS bool "Load all symbols for debugging/ksymoops" if EXPERT default y @@ -1740,57 +1790,12 @@ config KALLSYMS_BASE_RELATIVE # end of the "standard kernel features (expert users)" menu -# syscall, maps, verifier - config ARCH_HAS_MEMBARRIER_CALLBACKS bool config ARCH_HAS_MEMBARRIER_SYNC_CORE bool -config KCMP - bool "Enable kcmp() system call" if EXPERT - help - Enable the kernel resource comparison system call. It provides - user-space with the ability to compare two processes to see if they - share a common resource, such as a file descriptor or even virtual - memory space. - - If unsure, say N. - -config RSEQ - bool "Enable rseq() system call" if EXPERT - default y - depends on HAVE_RSEQ - select MEMBARRIER - help - Enable the restartable sequences system call. It provides a - user-space cache for the current CPU number value, which - speeds up getting the current CPU number from user-space, - as well as an ABI to speed up user-space operations on - per-CPU data. - - If unsure, say Y. - -config CACHESTAT_SYSCALL - bool "Enable cachestat() system call" if EXPERT - default y - help - Enable the cachestat system call, which queries the page cache - statistics of a file (number of cached pages, dirty pages, - pages marked for writeback, (recently) evicted pages). - - If unsure say Y here. - -config DEBUG_RSEQ - default n - bool "Enabled debugging of rseq() system call" if EXPERT - depends on RSEQ && DEBUG_KERNEL - help - Enable extra debugging checks for the rseq system call. - - If unsure, say N. - config HAVE_PERF_EVENTS bool help @@ -1805,13 +1810,6 @@ config PERF_USE_VMALLOC help See tools/perf/design.txt for details -config PC104 - bool "PC/104 support" if EXPERT - help - Expose PC/104 form factor device drivers and options available for - selection and configuration. Enable this option if your target - machine has a PC/104 bus. - menu "Kernel Performance Events And Counters" config PERF_EVENTS diff --git a/init/init_task.c b/init/init_task.c index 5727d42149c3..6f6485d554df 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -51,8 +51,7 @@ static struct sighand_struct init_sighand = { }; #ifdef CONFIG_SHADOW_CALL_STACK -unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] - __init_task_data = { +unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] = { [(SCS_SIZE / sizeof(long)) - 1] = SCS_END_MAGIC }; #endif @@ -61,12 +60,7 @@ unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] * Set up the first task table, touch at your own risk!. 
Base=0, * limit=0x1fffff (=2MB) */ -struct task_struct init_task -#ifdef CONFIG_ARCH_TASK_STRUCT_ON_STACK - __init_task_data -#endif - __aligned(L1_CACHE_BYTES) -= { +struct task_struct init_task __aligned(L1_CACHE_BYTES) = { #ifdef CONFIG_THREAD_INFO_IN_TASK .thread_info = INIT_THREAD_INFO(init_task), .stack_refcount = REFCOUNT_INIT(1), diff --git a/kernel/crash_core.c b/kernel/crash_core.c index 56cf4ad7abbb..d48315667752 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -13,7 +13,6 @@ #include #include #include -#include #include #include @@ -551,9 +550,11 @@ int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map, phdr->p_filesz = phdr->p_memsz = mend - mstart + 1; phdr->p_align = 0; ehdr->e_phnum++; - pr_debug("Crash PT_LOAD ELF header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n", - phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz, - ehdr->e_phnum, phdr->p_offset); +#ifdef CONFIG_KEXEC_FILE + kexec_dprintk("Crash PT_LOAD ELF header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n", + phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz, + ehdr->e_phnum, phdr->p_offset); +#endif phdr++; } @@ -565,9 +566,8 @@ int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map, int crash_exclude_mem_range(struct crash_mem *mem, unsigned long long mstart, unsigned long long mend) { - int i, j; + int i; unsigned long long start, end, p_start, p_end; - struct range temp_range = {0, 0}; for (i = 0; i < mem->nr_ranges; i++) { start = mem->ranges[i].start; @@ -575,72 +575,51 @@ int crash_exclude_mem_range(struct crash_mem *mem, p_start = mstart; p_end = mend; - if (mstart > end || mend < start) + if (p_start > end) continue; + /* + * Because the memory ranges in mem->ranges are stored in + * ascending order, when we detect `p_end < start`, we can + * immediately exit the for loop, as the subsequent memory + * ranges will definitely be outside the range we are looking + * for. + */ + if (p_end < start) + break; + /* Truncate any area outside of range */ - if (mstart < start) + if (p_start < start) p_start = start; - if (mend > end) + if (p_end > end) p_end = end; /* Found completely overlapping range */ if (p_start == start && p_end == end) { - mem->ranges[i].start = 0; - mem->ranges[i].end = 0; - if (i < mem->nr_ranges - 1) { - /* Shift rest of the ranges to left */ - for (j = i; j < mem->nr_ranges - 1; j++) { - mem->ranges[j].start = - mem->ranges[j+1].start; - mem->ranges[j].end = - mem->ranges[j+1].end; - } - - /* - * Continue to check if there are another overlapping ranges - * from the current position because of shifting the above - * mem ranges. 
- */ - i--; - mem->nr_ranges--; - continue; - } + memmove(&mem->ranges[i], &mem->ranges[i + 1], + (mem->nr_ranges - (i + 1)) * sizeof(mem->ranges[i])); + i--; mem->nr_ranges--; - return 0; - } - - if (p_start > start && p_end < end) { + } else if (p_start > start && p_end < end) { /* Split original range */ + if (mem->nr_ranges >= mem->max_nr_ranges) + return -ENOMEM; + + memmove(&mem->ranges[i + 2], &mem->ranges[i + 1], + (mem->nr_ranges - (i + 1)) * sizeof(mem->ranges[i])); + mem->ranges[i].end = p_start - 1; - temp_range.start = p_end + 1; - temp_range.end = end; + mem->ranges[i + 1].start = p_end + 1; + mem->ranges[i + 1].end = end; + + i++; + mem->nr_ranges++; } else if (p_start != start) mem->ranges[i].end = p_start - 1; else mem->ranges[i].start = p_end + 1; - break; } - /* If a split happened, add the split to array */ - if (!temp_range.end) - return 0; - - /* Split happened */ - if (i == mem->max_nr_ranges - 1) - return -ENOMEM; - - /* Location where new range should go */ - j = i + 1; - if (j < mem->nr_ranges) { - /* Move over all ranges one slot towards the end */ - for (i = mem->nr_ranges - 1; i >= j; i--) - mem->ranges[i + 1] = mem->ranges[i]; - } - - mem->ranges[j].start = temp_range.start; - mem->ranges[j].end = temp_range.end; - mem->nr_ranges++; return 0; } diff --git a/kernel/fork.c b/kernel/fork.c index 56cf276432c8..b32e323adbbf 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -165,7 +165,6 @@ void __weak arch_release_task_struct(struct task_struct *tsk) { } -#ifndef CONFIG_ARCH_TASK_STRUCT_ALLOCATOR static struct kmem_cache *task_struct_cachep; static inline struct task_struct *alloc_task_struct_node(int node) @@ -177,9 +176,6 @@ static inline void free_task_struct(struct task_struct *tsk) { kmem_cache_free(task_struct_cachep, tsk); } -#endif - -#ifndef CONFIG_ARCH_THREAD_STACK_ALLOCATOR /* * Allocate pages if THREAD_SIZE is >= PAGE_SIZE, otherwise use a @@ -412,24 +408,6 @@ void thread_stack_cache_init(void) } # endif /* THREAD_SIZE >= PAGE_SIZE || defined(CONFIG_VMAP_STACK) */ -#else /* CONFIG_ARCH_THREAD_STACK_ALLOCATOR */ - -static int alloc_thread_stack_node(struct task_struct *tsk, int node) -{ - unsigned long *stack; - - stack = arch_alloc_thread_stack_node(tsk, node); - tsk->stack = stack; - return stack ? 0 : -ENOMEM; -} - -static void free_thread_stack(struct task_struct *tsk) -{ - arch_free_thread_stack(tsk); - tsk->stack = NULL; -} - -#endif /* !CONFIG_ARCH_THREAD_STACK_ALLOCATOR */ /* SLAB cache for signal_struct structures (tsk->signal) */ static struct kmem_cache *signal_cachep; @@ -1039,7 +1017,6 @@ static void set_max_threads(unsigned int max_threads_suggested) int arch_task_struct_size __read_mostly; #endif -#ifndef CONFIG_ARCH_TASK_STRUCT_ALLOCATOR static void task_struct_whitelist(unsigned long *offset, unsigned long *size) { /* Fetch thread_struct whitelist for the architecture. 
*/ @@ -1054,12 +1031,10 @@ static void task_struct_whitelist(unsigned long *offset, unsigned long *size) else *offset += offsetof(struct task_struct, thread); } -#endif /* CONFIG_ARCH_TASK_STRUCT_ALLOCATOR */ void __init fork_init(void) { int i; -#ifndef CONFIG_ARCH_TASK_STRUCT_ALLOCATOR #ifndef ARCH_MIN_TASKALIGN #define ARCH_MIN_TASKALIGN 0 #endif @@ -1072,7 +1047,6 @@ void __init fork_init(void) arch_task_struct_size, align, SLAB_PANIC|SLAB_ACCOUNT, useroffset, usersize, NULL); -#endif /* do the arch specific task caches init */ arch_task_cache_init(); @@ -1606,7 +1580,7 @@ static void complete_vfork_done(struct task_struct *tsk) static int wait_for_vfork_done(struct task_struct *child, struct completion *vfork) { - unsigned int state = TASK_UNINTERRUPTIBLE|TASK_KILLABLE|TASK_FREEZABLE; + unsigned int state = TASK_KILLABLE|TASK_FREEZABLE; int killed; cgroup_enter_frozen(); diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c index be5642a4ec49..a08031b57a61 100644 --- a/kernel/kexec_core.c +++ b/kernel/kexec_core.c @@ -52,6 +52,8 @@ atomic_t __kexec_lock = ATOMIC_INIT(0); /* Flag to indicate we are going to kexec a new kernel */ bool kexec_in_progress = false; +bool kexec_file_dbg_print; + int kexec_should_crash(struct task_struct *p) { /* @@ -276,8 +278,8 @@ int kimage_is_destination_range(struct kimage *image, unsigned long mstart, mend; mstart = image->segment[i].mem; - mend = mstart + image->segment[i].memsz; - if ((end > mstart) && (start < mend)) + mend = mstart + image->segment[i].memsz - 1; + if ((end >= mstart) && (start <= mend)) return 1; } @@ -370,7 +372,7 @@ static struct page *kimage_alloc_normal_control_pages(struct kimage *image, pfn = page_to_boot_pfn(pages); epfn = pfn + count; addr = pfn << PAGE_SHIFT; - eaddr = epfn << PAGE_SHIFT; + eaddr = (epfn << PAGE_SHIFT) - 1; if ((epfn >= (KEXEC_CONTROL_MEMORY_LIMIT >> PAGE_SHIFT)) || kimage_is_destination_range(image, addr, eaddr)) { list_add(&pages->lru, &extra_pages); @@ -430,7 +432,7 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image, pages = NULL; size = (1 << order) << PAGE_SHIFT; - hole_start = (image->control_page + (size - 1)) & ~(size - 1); + hole_start = ALIGN(image->control_page, size); hole_end = hole_start + size - 1; while (hole_end <= crashk_res.end) { unsigned long i; @@ -447,7 +449,7 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image, mend = mstart + image->segment[i].memsz - 1; if ((hole_end >= mstart) && (hole_start <= mend)) { /* Advance the hole to the end of the segment */ - hole_start = (mend + (size - 1)) & ~(size - 1); + hole_start = ALIGN(mend, size); hole_end = hole_start + size - 1; break; } @@ -455,7 +457,7 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image, /* If I don't overlap any segments I have found my hole! */ if (i == image->nr_segments) { pages = pfn_to_page(hole_start >> PAGE_SHIFT); - image->control_page = hole_end; + image->control_page = hole_end + 1; break; } } @@ -716,7 +718,7 @@ static struct page *kimage_alloc_page(struct kimage *image, /* If the page is not a destination page use it */ if (!kimage_is_destination_range(image, addr, - addr + PAGE_SIZE)) + addr + PAGE_SIZE - 1)) break; /* @@ -1063,9 +1065,10 @@ __bpf_kfunc void crash_kexec(struct pt_regs *regs) * panic(). Otherwise parallel calls of panic() and crash_kexec() * may stop each other. To exclude them, we use panic_cpu here too. 
*/ + old_cpu = PANIC_CPU_INVALID; this_cpu = raw_smp_processor_id(); - old_cpu = atomic_cmpxchg(&panic_cpu, PANIC_CPU_INVALID, this_cpu); - if (old_cpu == PANIC_CPU_INVALID) { + + if (atomic_try_cmpxchg(&panic_cpu, &old_cpu, this_cpu)) { /* This is the 1st CPU which comes here, so go ahead. */ __crash_kexec(regs); diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c index f9a419cd22d4..bef2f6f2571b 100644 --- a/kernel/kexec_file.c +++ b/kernel/kexec_file.c @@ -123,6 +123,8 @@ void kimage_file_post_load_cleanup(struct kimage *image) */ kfree(image->image_loader_data); image->image_loader_data = NULL; + + kexec_file_dbg_print = false; } #ifdef CONFIG_KEXEC_SIG @@ -202,6 +204,8 @@ kimage_file_prepare_segments(struct kimage *image, int kernel_fd, int initrd_fd, if (ret < 0) return ret; image->kernel_buf_len = ret; + kexec_dprintk("kernel: %p kernel_size: %#lx\n", + image->kernel_buf, image->kernel_buf_len); /* Call arch image probe handlers */ ret = arch_kexec_kernel_image_probe(image, image->kernel_buf, @@ -278,6 +282,7 @@ kimage_file_alloc_init(struct kimage **rimage, int kernel_fd, if (!image) return -ENOMEM; + kexec_file_dbg_print = !!(flags & KEXEC_FILE_DEBUG); image->file_mode = 1; if (kexec_on_panic) { @@ -384,13 +389,14 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd, if (ret) goto out; + kexec_dprintk("nr_segments = %lu\n", image->nr_segments); for (i = 0; i < image->nr_segments; i++) { struct kexec_segment *ksegment; ksegment = &image->segment[i]; - pr_debug("Loading segment %d: buf=0x%p bufsz=0x%zx mem=0x%lx memsz=0x%zx\n", - i, ksegment->buf, ksegment->bufsz, ksegment->mem, - ksegment->memsz); + kexec_dprintk("segment[%d]: buf=0x%p bufsz=0x%zx mem=0x%lx memsz=0x%zx\n", + i, ksegment->buf, ksegment->bufsz, ksegment->mem, + ksegment->memsz); ret = kimage_load_segment(image, &image->segment[i]); if (ret) @@ -403,6 +409,8 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd, if (ret) goto out; + kexec_dprintk("kexec_file_load: type:%u, start:0x%lx head:0x%lx flags:0x%lx\n", + image->type, image->start, image->head, flags); /* * Free up any temporary buffers allocated which are not needed * after image has been loaded @@ -426,11 +434,11 @@ static int locate_mem_hole_top_down(unsigned long start, unsigned long end, unsigned long temp_start, temp_end; temp_end = min(end, kbuf->buf_max); - temp_start = temp_end - kbuf->memsz; + temp_start = temp_end - kbuf->memsz + 1; do { /* align down start */ - temp_start = temp_start & (~(kbuf->buf_align - 1)); + temp_start = ALIGN_DOWN(temp_start, kbuf->buf_align); if (temp_start < start || temp_start < kbuf->buf_min) return 0; @@ -592,6 +600,8 @@ static int kexec_walk_resources(struct kexec_buf *kbuf, IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY, crashk_res.start, crashk_res.end, kbuf, func); + else if (kbuf->top_down) + return walk_system_ram_res_rev(0, ULONG_MAX, kbuf, func); else return walk_system_ram_res(0, ULONG_MAX, kbuf, func); } diff --git a/kernel/ptrace.c b/kernel/ptrace.c index 5c579fb9a5e3..2fabd497d659 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c @@ -145,20 +145,9 @@ void __ptrace_unlink(struct task_struct *child) */ if (!(child->flags & PF_EXITING) && (child->signal->flags & SIGNAL_STOP_STOPPED || - child->signal->group_stop_count)) { + child->signal->group_stop_count)) child->jobctl |= JOBCTL_STOP_PENDING; - /* - * This is only possible if this thread was cloned by the - * traced task running in the stopped group, set the signal - * for the future reports. 
- * FIXME: we should change ptrace_init_task() to handle this - * case. - */ - if (!(child->jobctl & JOBCTL_STOP_SIGMASK)) - child->jobctl |= SIGSTOP; - } - /* * If transition to TASK_STOPPED is pending or in TASK_TRACED, kick * @child in the butt. Note that @resume should be used iff @child diff --git a/kernel/reboot.c b/kernel/reboot.c index 395a0ea3c7a8..c3a3b82c4f64 100644 --- a/kernel/reboot.c +++ b/kernel/reboot.c @@ -58,6 +58,14 @@ struct sys_off_handler { struct device *dev; }; +/* + * This variable is used to indicate if a halt was initiated instead of a + * reboot when the reboot call was invoked with LINUX_REBOOT_CMD_POWER_OFF, but + * the system cannot be powered off. This allows kernel_halt() to notify users + * of that. + */ +static bool poweroff_fallback_to_halt; + /* * Temporary stub that prevents linkage failure while we're in process * of removing all uses of legacy pm_power_off() around the kernel. @@ -297,7 +305,10 @@ void kernel_halt(void) kernel_shutdown_prepare(SYSTEM_HALT); migrate_to_reboot_cpu(); syscore_shutdown(); - pr_emerg("System halted\n"); + if (poweroff_fallback_to_halt) + pr_emerg("Power off not available: System halted instead\n"); + else + pr_emerg("System halted\n"); kmsg_dump(KMSG_DUMP_SHUTDOWN); machine_halt(); } @@ -732,8 +743,10 @@ SYSCALL_DEFINE4(reboot, int, magic1, int, magic2, unsigned int, cmd, /* Instead of trying to make the power_off code look like * halt when pm_power_off is not set do it the easy way. */ - if ((cmd == LINUX_REBOOT_CMD_POWER_OFF) && !kernel_can_power_off()) + if ((cmd == LINUX_REBOOT_CMD_POWER_OFF) && !kernel_can_power_off()) { + poweroff_fallback_to_halt = true; cmd = LINUX_REBOOT_CMD_HALT; + } mutex_lock(&system_transition_mutex); switch (cmd) { diff --git a/kernel/relay.c b/kernel/relay.c index 83fe0325cde1..a8e90e98bf2c 100644 --- a/kernel/relay.c +++ b/kernel/relay.c @@ -1073,167 +1073,6 @@ static ssize_t relay_file_read(struct file *filp, return written; } -static void relay_consume_bytes(struct rchan_buf *rbuf, int bytes_consumed) -{ - rbuf->bytes_consumed += bytes_consumed; - - if (rbuf->bytes_consumed >= rbuf->chan->subbuf_size) { - relay_subbufs_consumed(rbuf->chan, rbuf->cpu, 1); - rbuf->bytes_consumed %= rbuf->chan->subbuf_size; - } -} - -static void relay_pipe_buf_release(struct pipe_inode_info *pipe, - struct pipe_buffer *buf) -{ - struct rchan_buf *rbuf; - - rbuf = (struct rchan_buf *)page_private(buf->page); - relay_consume_bytes(rbuf, buf->private); -} - -static const struct pipe_buf_operations relay_pipe_buf_ops = { - .release = relay_pipe_buf_release, - .try_steal = generic_pipe_buf_try_steal, - .get = generic_pipe_buf_get, -}; - -static void relay_page_release(struct splice_pipe_desc *spd, unsigned int i) -{ -} - -/* - * subbuf_splice_actor - splice up to one subbuf's worth of data - */ -static ssize_t subbuf_splice_actor(struct file *in, - loff_t *ppos, - struct pipe_inode_info *pipe, - size_t len, - unsigned int flags, - int *nonpad_ret) -{ - unsigned int pidx, poff, total_len, subbuf_pages, nr_pages; - struct rchan_buf *rbuf = in->private_data; - unsigned int subbuf_size = rbuf->chan->subbuf_size; - uint64_t pos = (uint64_t) *ppos; - uint32_t alloc_size = (uint32_t) rbuf->chan->alloc_size; - size_t read_start = (size_t) do_div(pos, alloc_size); - size_t read_subbuf = read_start / subbuf_size; - size_t padding = rbuf->padding[read_subbuf]; - size_t nonpad_end = read_subbuf * subbuf_size + subbuf_size - padding; - struct page *pages[PIPE_DEF_BUFFERS]; - struct partial_page partial[PIPE_DEF_BUFFERS]; -
struct splice_pipe_desc spd = { - .pages = pages, - .nr_pages = 0, - .nr_pages_max = PIPE_DEF_BUFFERS, - .partial = partial, - .ops = &relay_pipe_buf_ops, - .spd_release = relay_page_release, - }; - ssize_t ret; - - if (rbuf->subbufs_produced == rbuf->subbufs_consumed) - return 0; - if (splice_grow_spd(pipe, &spd)) - return -ENOMEM; - - /* - * Adjust read len, if longer than what is available - */ - if (len > (subbuf_size - read_start % subbuf_size)) - len = subbuf_size - read_start % subbuf_size; - - subbuf_pages = rbuf->chan->alloc_size >> PAGE_SHIFT; - pidx = (read_start / PAGE_SIZE) % subbuf_pages; - poff = read_start & ~PAGE_MASK; - nr_pages = min_t(unsigned int, subbuf_pages, spd.nr_pages_max); - - for (total_len = 0; spd.nr_pages < nr_pages; spd.nr_pages++) { - unsigned int this_len, this_end, private; - unsigned int cur_pos = read_start + total_len; - - if (!len) - break; - - this_len = min_t(unsigned long, len, PAGE_SIZE - poff); - private = this_len; - - spd.pages[spd.nr_pages] = rbuf->page_array[pidx]; - spd.partial[spd.nr_pages].offset = poff; - - this_end = cur_pos + this_len; - if (this_end >= nonpad_end) { - this_len = nonpad_end - cur_pos; - private = this_len + padding; - } - spd.partial[spd.nr_pages].len = this_len; - spd.partial[spd.nr_pages].private = private; - - len -= this_len; - total_len += this_len; - poff = 0; - pidx = (pidx + 1) % subbuf_pages; - - if (this_end >= nonpad_end) { - spd.nr_pages++; - break; - } - } - - ret = 0; - if (!spd.nr_pages) - goto out; - - ret = *nonpad_ret = splice_to_pipe(pipe, &spd); - if (ret < 0 || ret < total_len) - goto out; - - if (read_start + ret == nonpad_end) - ret += padding; - -out: - splice_shrink_spd(&spd); - return ret; -} - -static ssize_t relay_file_splice_read(struct file *in, - loff_t *ppos, - struct pipe_inode_info *pipe, - size_t len, - unsigned int flags) -{ - ssize_t spliced; - int ret; - int nonpad_ret = 0; - - ret = 0; - spliced = 0; - - while (len && !spliced) { - ret = subbuf_splice_actor(in, ppos, pipe, len, flags, &nonpad_ret); - if (ret < 0) - break; - else if (!ret) { - if (flags & SPLICE_F_NONBLOCK) - ret = -EAGAIN; - break; - } - - *ppos += ret; - if (ret > len) - len = 0; - else - len -= ret; - spliced += nonpad_ret; - nonpad_ret = 0; - } - - if (spliced) - return spliced; - - return ret; -} const struct file_operations relay_file_operations = { .open = relay_file_open, @@ -1242,6 +1081,5 @@ const struct file_operations relay_file_operations = { .read = relay_file_read, .llseek = no_llseek, .release = relay_file_release, - .splice_read = relay_file_splice_read, }; EXPORT_SYMBOL_GPL(relay_file_operations); diff --git a/kernel/resource.c b/kernel/resource.c index 91be1bc50b60..fcbca39dbc45 100644 --- a/kernel/resource.c +++ b/kernel/resource.c @@ -27,6 +27,8 @@ #include #include #include +#include +#include #include @@ -429,6 +431,61 @@ int walk_system_ram_res(u64 start, u64 end, void *arg, func); } +/* + * This function, being a variant of walk_system_ram_res(), calls the @func + * callback against all memory ranges of type System RAM which are marked as + * IORESOURCE_SYSTEM_RAM and IORESOURCE_BUSY in reverse order, i.e., from + * higher to lower.
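+ * This walk order suits callers that want the highest usable region first, + * such as the top-down placement requested via kexec_walk_resources().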
+ */ +int walk_system_ram_res_rev(u64 start, u64 end, void *arg, + int (*func)(struct resource *, void *)) +{ + struct resource res, *rams; + int rams_size = 16, i; + unsigned long flags; + int ret = -1; + + /* create a list */ + rams = kvcalloc(rams_size, sizeof(struct resource), GFP_KERNEL); + if (!rams) + return ret; + + flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY; + i = 0; + while ((start < end) && + (!find_next_iomem_res(start, end, flags, IORES_DESC_NONE, &res))) { + if (i >= rams_size) { + /* re-alloc */ + struct resource *rams_new; + + rams_new = kvrealloc(rams, rams_size * sizeof(struct resource), + (rams_size + 16) * sizeof(struct resource), + GFP_KERNEL); + if (!rams_new) + goto out; + + rams = rams_new; + rams_size += 16; + } + + rams[i].start = res.start; + rams[i++].end = res.end; + + start = res.end + 1; + } + + /* go reverse */ + for (i--; i >= 0; i--) { + ret = (*func)(&rams[i], arg); + if (ret) + break; + } + +out: + kvfree(rams); + return ret; +} + /* * This function calls the @func callback against all memory ranges, which * are ranges marked as IORESOURCE_MEM and IORESOUCE_BUSY. diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index b803030c3a03..533547e3c90a 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -13097,19 +13097,6 @@ next_cpu: return 0; } -#else /* CONFIG_FAIR_GROUP_SCHED */ - -void free_fair_sched_group(struct task_group *tg) { } - -int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent) -{ - return 1; -} - -void online_fair_sched_group(struct task_group *tg) { } - -void unregister_fair_sched_group(struct task_group *tg) { } - #endif /* CONFIG_FAIR_GROUP_SCHED */ diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index e58a54bda77d..001fe047bd5d 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -461,10 +461,21 @@ static inline int walk_tg_tree(tg_visitor down, tg_visitor up, void *data) extern int tg_nop(struct task_group *tg, void *data); +#ifdef CONFIG_FAIR_GROUP_SCHED extern void free_fair_sched_group(struct task_group *tg); extern int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent); extern void online_fair_sched_group(struct task_group *tg); extern void unregister_fair_sched_group(struct task_group *tg); +#else +static inline void free_fair_sched_group(struct task_group *tg) { } +static inline int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent) +{ + return 1; +} +static inline void online_fair_sched_group(struct task_group *tg) { } +static inline void unregister_fair_sched_group(struct task_group *tg) { } +#endif + extern void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq, struct sched_entity *se, int cpu, struct sched_entity *parent); diff --git a/kernel/signal.c b/kernel/signal.c index 47a7602dfe8d..c9c57d053ce4 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -171,16 +171,6 @@ static bool recalc_sigpending_tsk(struct task_struct *t) return false; } -/* - * After recalculating TIF_SIGPENDING, we need to make sure the task wakes up. - * This is superfluous when called on current, the wakeup is a harmless no-op. 
- */ -void recalc_sigpending_and_wake(struct task_struct *t) -{ - if (recalc_sigpending_tsk(t)) - signal_wake_up(t, 0); -} - void recalc_sigpending(void) { if (!recalc_sigpending_tsk(current) && !freezing(current)) @@ -1348,10 +1338,8 @@ force_sig_info_to_task(struct kernel_siginfo *info, struct task_struct *t, action->sa.sa_handler = SIG_DFL; if (handler == HANDLER_EXIT) action->sa.sa_flags |= SA_IMMUTABLE; - if (blocked) { + if (blocked) sigdelset(&t->blocked, sig); - recalc_sigpending_and_wake(t); - } } /* * Don't clear SIGNAL_UNKILLABLE for traced tasks, users won't expect @@ -1361,6 +1349,9 @@ force_sig_info_to_task(struct kernel_siginfo *info, struct task_struct *t, (!t->ptrace || (handler == HANDLER_EXIT))) t->signal->flags &= ~SIGNAL_UNKILLABLE; ret = send_signal_locked(sig, info, t, PIDTYPE_PID); + /* This can happen if the signal was already pending and blocked */ + if (!task_sigpending(t)) + signal_wake_up(t, 0); spin_unlock_irqrestore(&t->sighand->siglock, flags); return ret; @@ -1376,12 +1367,12 @@ int force_sig_info(struct kernel_siginfo *info) */ int zap_other_threads(struct task_struct *p) { - struct task_struct *t = p; + struct task_struct *t; int count = 0; p->signal->group_stop_count = 0; - while_each_thread(p, t) { + for_other_threads(p, t) { task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK); /* Don't require de_thread to wait for the vhost_worker */ if ((t->flags & (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER) @@ -2465,12 +2456,10 @@ static bool do_signal_stop(int signr) sig->group_exit_code = signr; sig->group_stop_count = 0; - if (task_set_jobctl_pending(current, signr | gstop)) sig->group_stop_count++; - t = current; - while_each_thread(current, t) { + for_other_threads(current, t) { /* * Setting state to TASK_STOPPED for a group * stop is always done with the siglock held, @@ -2966,8 +2955,7 @@ static void retarget_shared_pending(struct task_struct *tsk, sigset_t *which) if (sigisemptyset(&retarget)) return; - t = tsk; - while_each_thread(tsk, t) { + for_other_threads(tsk, t) { if (t->flags & PF_EXITING) continue; diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c index 4f65824879ab..afb3c116da91 100644 --- a/kernel/stacktrace.c +++ b/kernel/stacktrace.c @@ -126,7 +126,7 @@ EXPORT_SYMBOL_GPL(stack_trace_save); /** * stack_trace_save_tsk - Save a task stack trace into a storage array - * @task: The task to examine + * @tsk: The task to examine * @store: Pointer to storage array * @size: Size of the storage array * @skipnr: Number of entries to skip at the start of the stack trace diff --git a/kernel/watchdog.c b/kernel/watchdog.c index 5cd6d4e26915..81a8862295d6 100644 --- a/kernel/watchdog.c +++ b/kernel/watchdog.c @@ -91,7 +91,7 @@ static DEFINE_PER_CPU(atomic_t, hrtimer_interrupts); static DEFINE_PER_CPU(int, hrtimer_interrupts_saved); static DEFINE_PER_CPU(bool, watchdog_hardlockup_warned); static DEFINE_PER_CPU(bool, watchdog_hardlockup_touched); -static unsigned long watchdog_hardlockup_all_cpu_dumped; +static unsigned long hard_lockup_nmi_warn; notrace void arch_touch_nmi_watchdog(void) { @@ -151,12 +151,32 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs) */ if (is_hardlockup(cpu)) { unsigned int this_cpu = smp_processor_id(); + unsigned long flags; /* Only print hardlockups once. */ if (per_cpu(watchdog_hardlockup_warned, cpu)) return; + /* + * Prevent multiple hard-lockup reports if one cpu is already + * engaged in dumping all cpu back traces. 
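+ * The bit is released again once the backtrace dump completes, unless the + * system is going to panic anyway (see the clear_bit_unlock() below).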
+ */ + if (sysctl_hardlockup_all_cpu_backtrace) { + if (test_and_set_bit_lock(0, &hard_lockup_nmi_warn)) + return; + } + + /* + * NOTE: we call printk_cpu_sync_get_irqsave() after printing + * the lockup message. While it would be nice to serialize + * that printout, we really want to make sure that if some + * other CPU somehow locked up while holding the lock associated + * with printk_cpu_sync_get_irqsave() that we can still at least + * get the message about the lockup out. + */ pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n", cpu); + printk_cpu_sync_get_irqsave(flags); + print_modules(); print_irqtrace_events(current); if (cpu == this_cpu) { @@ -164,17 +184,17 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs) show_regs(regs); else dump_stack(); + printk_cpu_sync_put_irqrestore(flags); } else { + printk_cpu_sync_put_irqrestore(flags); trigger_single_cpu_backtrace(cpu); } - /* - * Perform multi-CPU dump only once to avoid multiple - * hardlockups generating interleaving traces - */ - if (sysctl_hardlockup_all_cpu_backtrace && - !test_and_set_bit(0, &watchdog_hardlockup_all_cpu_dumped)) + if (sysctl_hardlockup_all_cpu_backtrace) { trigger_allbutcpu_cpu_backtrace(cpu); + if (!hardlockup_panic) + clear_bit_unlock(0, &hard_lockup_nmi_warn); + } if (hardlockup_panic) nmi_panic(regs, "Hard LOCKUP"); @@ -448,6 +468,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) struct pt_regs *regs = get_irq_regs(); int duration; int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace; + unsigned long flags; if (!watchdog_enabled) return HRTIMER_NORESTART; @@ -514,6 +535,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) /* Start period for the next softlockup warning. */ update_report_ts(); + printk_cpu_sync_get_irqsave(flags); pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n", smp_processor_id(), duration, current->comm, task_pid_nr(current)); @@ -523,10 +545,12 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) show_regs(regs); else dump_stack(); + printk_cpu_sync_put_irqrestore(flags); if (softlockup_all_cpu_backtrace) { trigger_allbutcpu_cpu_backtrace(smp_processor_id()); - clear_bit_unlock(0, &soft_lockup_nmi_warn); + if (!softlockup_panic) + clear_bit_unlock(0, &soft_lockup_nmi_warn); } add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK); diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 7d9416d6627c..97ce28f4d154 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -763,6 +763,8 @@ config DEBUG_STACK_USAGE help Enables the display of the minimum amount of free stack which each task has ever had available in the sysrq-T and sysrq-P debug output. + Also emits a message to dmesg when a process exits if that process + used more stack space than previously exiting processes. This option will slow down process creation somewhat. @@ -2087,10 +2089,6 @@ config KCOV KCOV exposes kernel code coverage information in a form suitable for coverage-guided fuzzing (randomized testing). - If RANDOMIZE_BASE is enabled, PC values will not be stable across - different machines and across reboots. If you need stable PC values, - disable RANDOMIZE_BASE. - For more details, see Documentation/dev-tools/kcov.rst. 
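As orientation for the KCOV entry above, a condensed consumer sketch after the example in Documentation/dev-tools/kcov.rst (error handling omitted; the ioctl numbers come from the kcov UAPI):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define KCOV_INIT_TRACE	_IOR('c', 1, unsigned long)
	#define KCOV_ENABLE	_IO('c', 100)
	#define KCOV_DISABLE	_IO('c', 101)
	#define COVER_SIZE	(64 << 10)
	#define KCOV_TRACE_PC	0

	int main(void)
	{
		unsigned long *cover, n, i;
		int fd = open("/sys/kernel/debug/kcov", O_RDWR);

		ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
		cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
			     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC);
		__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
		read(-1, NULL, 0);		/* the traced syscall */
		n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
		for (i = 0; i < n; i++)
			printf("0x%lx\n", cover[i + 1]);	/* kernel PCs */
		ioctl(fd, KCOV_DISABLE, 0);
		return 0;
	}
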
config KCOV_ENABLE_COMPARISONS diff --git a/lib/crc-ccitt.c b/lib/crc-ccitt.c index d1a7d29d2ac9..9cddf35d3b66 100644 --- a/lib/crc-ccitt.c +++ b/lib/crc-ccitt.c @@ -49,46 +49,6 @@ u16 const crc_ccitt_table[256] = { }; EXPORT_SYMBOL(crc_ccitt_table); -/* - * Similar table to calculate CRC16 variant known as CRC-CCITT-FALSE - * Reflected bits order, does not augment final value. - */ -u16 const crc_ccitt_false_table[256] = { - 0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50A5, 0x60C6, 0x70E7, - 0x8108, 0x9129, 0xA14A, 0xB16B, 0xC18C, 0xD1AD, 0xE1CE, 0xF1EF, - 0x1231, 0x0210, 0x3273, 0x2252, 0x52B5, 0x4294, 0x72F7, 0x62D6, - 0x9339, 0x8318, 0xB37B, 0xA35A, 0xD3BD, 0xC39C, 0xF3FF, 0xE3DE, - 0x2462, 0x3443, 0x0420, 0x1401, 0x64E6, 0x74C7, 0x44A4, 0x5485, - 0xA56A, 0xB54B, 0x8528, 0x9509, 0xE5EE, 0xF5CF, 0xC5AC, 0xD58D, - 0x3653, 0x2672, 0x1611, 0x0630, 0x76D7, 0x66F6, 0x5695, 0x46B4, - 0xB75B, 0xA77A, 0x9719, 0x8738, 0xF7DF, 0xE7FE, 0xD79D, 0xC7BC, - 0x48C4, 0x58E5, 0x6886, 0x78A7, 0x0840, 0x1861, 0x2802, 0x3823, - 0xC9CC, 0xD9ED, 0xE98E, 0xF9AF, 0x8948, 0x9969, 0xA90A, 0xB92B, - 0x5AF5, 0x4AD4, 0x7AB7, 0x6A96, 0x1A71, 0x0A50, 0x3A33, 0x2A12, - 0xDBFD, 0xCBDC, 0xFBBF, 0xEB9E, 0x9B79, 0x8B58, 0xBB3B, 0xAB1A, - 0x6CA6, 0x7C87, 0x4CE4, 0x5CC5, 0x2C22, 0x3C03, 0x0C60, 0x1C41, - 0xEDAE, 0xFD8F, 0xCDEC, 0xDDCD, 0xAD2A, 0xBD0B, 0x8D68, 0x9D49, - 0x7E97, 0x6EB6, 0x5ED5, 0x4EF4, 0x3E13, 0x2E32, 0x1E51, 0x0E70, - 0xFF9F, 0xEFBE, 0xDFDD, 0xCFFC, 0xBF1B, 0xAF3A, 0x9F59, 0x8F78, - 0x9188, 0x81A9, 0xB1CA, 0xA1EB, 0xD10C, 0xC12D, 0xF14E, 0xE16F, - 0x1080, 0x00A1, 0x30C2, 0x20E3, 0x5004, 0x4025, 0x7046, 0x6067, - 0x83B9, 0x9398, 0xA3FB, 0xB3DA, 0xC33D, 0xD31C, 0xE37F, 0xF35E, - 0x02B1, 0x1290, 0x22F3, 0x32D2, 0x4235, 0x5214, 0x6277, 0x7256, - 0xB5EA, 0xA5CB, 0x95A8, 0x8589, 0xF56E, 0xE54F, 0xD52C, 0xC50D, - 0x34E2, 0x24C3, 0x14A0, 0x0481, 0x7466, 0x6447, 0x5424, 0x4405, - 0xA7DB, 0xB7FA, 0x8799, 0x97B8, 0xE75F, 0xF77E, 0xC71D, 0xD73C, - 0x26D3, 0x36F2, 0x0691, 0x16B0, 0x6657, 0x7676, 0x4615, 0x5634, - 0xD94C, 0xC96D, 0xF90E, 0xE92F, 0x99C8, 0x89E9, 0xB98A, 0xA9AB, - 0x5844, 0x4865, 0x7806, 0x6827, 0x18C0, 0x08E1, 0x3882, 0x28A3, - 0xCB7D, 0xDB5C, 0xEB3F, 0xFB1E, 0x8BF9, 0x9BD8, 0xABBB, 0xBB9A, - 0x4A75, 0x5A54, 0x6A37, 0x7A16, 0x0AF1, 0x1AD0, 0x2AB3, 0x3A92, - 0xFD2E, 0xED0F, 0xDD6C, 0xCD4D, 0xBDAA, 0xAD8B, 0x9DE8, 0x8DC9, - 0x7C26, 0x6C07, 0x5C64, 0x4C45, 0x3CA2, 0x2C83, 0x1CE0, 0x0CC1, - 0xEF1F, 0xFF3E, 0xCF5D, 0xDF7C, 0xAF9B, 0xBFBA, 0x8FD9, 0x9FF8, - 0x6E17, 0x7E36, 0x4E55, 0x5E74, 0x2E93, 0x3EB2, 0x0ED1, 0x1EF0 -}; -EXPORT_SYMBOL(crc_ccitt_false_table); - /** * crc_ccitt - recompute the CRC (CRC-CCITT variant) for the data * buffer @@ -104,20 +64,5 @@ u16 crc_ccitt(u16 crc, u8 const *buffer, size_t len) } EXPORT_SYMBOL(crc_ccitt); -/** - * crc_ccitt_false - recompute the CRC (CRC-CCITT-FALSE variant) - * for the data buffer - * @crc: previous CRC value - * @buffer: data pointer - * @len: number of bytes in the buffer - */ -u16 crc_ccitt_false(u16 crc, u8 const *buffer, size_t len) -{ - while (len--) - crc = crc_ccitt_false_byte(crc, *buffer++); - return crc; -} -EXPORT_SYMBOL(crc_ccitt_false); - MODULE_DESCRIPTION("CRC-CCITT calculations"); MODULE_LICENSE("GPL"); diff --git a/lib/test_ida.c b/lib/test_ida.c index 55105baa19da..072a49897e71 100644 --- a/lib/test_ida.c +++ b/lib/test_ida.c @@ -13,7 +13,7 @@ static unsigned int tests_run; static unsigned int tests_passed; #ifdef __KERNEL__ -void ida_dump(struct ida *ida) { } +static void ida_dump(struct ida *ida) { } #endif #define IDA_BUG_ON(ida, x) do { \ 
tests_run++; \ diff --git a/lib/trace_readwrite.c b/lib/trace_readwrite.c index 62b4e8b3c733..a94cd56a1e4c 100644 --- a/lib/trace_readwrite.c +++ b/lib/trace_readwrite.c @@ -7,7 +7,7 @@ #include #include -#include +#include #define CREATE_TRACE_POINTS #include diff --git a/mm/highmem.c b/mm/highmem.c index e19269093a93..bd48ba445dd4 100644 --- a/mm/highmem.c +++ b/mm/highmem.c @@ -799,8 +799,6 @@ void set_page_address(struct page *page, void *virtual) } spin_unlock_irqrestore(&pas->lock, flags); } - - return; } void __init page_address_init(void) diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn index 2fe6f2828d37..c9725685aa76 100644 --- a/scripts/Makefile.extrawarn +++ b/scripts/Makefile.extrawarn @@ -17,6 +17,8 @@ KBUILD_CFLAGS += -Wno-format-security KBUILD_CFLAGS += -Wno-trigraphs KBUILD_CFLAGS += $(call cc-disable-warning,frame-address,) KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member) +KBUILD_CFLAGS += -Wmissing-declarations +KBUILD_CFLAGS += -Wmissing-prototypes ifneq ($(CONFIG_FRAME_WARN),0) KBUILD_CFLAGS += -Wframe-larger-than=$(CONFIG_FRAME_WARN) @@ -95,10 +97,8 @@ export KBUILD_EXTRA_WARN ifneq ($(findstring 1, $(KBUILD_EXTRA_WARN)),) KBUILD_CFLAGS += -Wextra -Wunused -Wno-unused-parameter -KBUILD_CFLAGS += -Wmissing-declarations KBUILD_CFLAGS += $(call cc-option, -Wrestrict) KBUILD_CFLAGS += -Wmissing-format-attribute -KBUILD_CFLAGS += -Wmissing-prototypes KBUILD_CFLAGS += -Wold-style-definition KBUILD_CFLAGS += -Wmissing-include-dirs KBUILD_CFLAGS += $(call cc-option, -Wunused-but-set-variable) diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index 25fdb7fda112..a94ed6c46a6d 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -4054,7 +4054,7 @@ sub process { if ($prevline =~ /^[\+ ]};?\s*$/ && $line =~ /^\+/ && !($line =~ /^\+\s*$/ || - $line =~ /^\+\s*(?:EXPORT_SYMBOL|early_param)/ || + $line =~ /^\+\s*(?:EXPORT_SYMBOL|early_param|ALLOW_ERROR_INJECTION)/ || $line =~ /^\+\s*MODULE_/i || $line =~ /^\+\s*\#\s*(?:end|elif|else)/ || $line =~ /^\+[a-z_]*init/ || diff --git a/scripts/checkstack.pl b/scripts/checkstack.pl index f27d552aec43..14ce31f732ee 100755 --- a/scripts/checkstack.pl +++ b/scripts/checkstack.pl @@ -8,7 +8,6 @@ # Original idea maybe from Keith Owens # s390 port and big speedup by Arnd Bergmann # Mips port by Juan Quintela -# IA64 port via Andreas Dilger # Arm port by Holger Schurig # Random bits by Matt Mackall # M68k port by Geert Uytterhoeven and Andreas Schwab @@ -16,9 +15,10 @@ # sparc port by Martin Habets # ppc64le port by Breno Leitao # riscv port by Wadim Mueller +# loongarch port by Youling Tang # # Usage: -# objdump -d vmlinux | scripts/checkstack.pl [arch] +# objdump -d vmlinux | scripts/checkstack.pl [arch] [min_stack] # # TODO : Port to all architectures (one regex per arch) @@ -47,7 +47,7 @@ my (@stack, $re, $dre, $sub, $x, $xs, $funcre, $min_stack); $min_stack = shift; if ($min_stack eq "" || $min_stack !~ /^\d+$/) { - $min_stack = 100; + $min_stack = 512; } $x = "[0-9a-f]"; # hex character @@ -56,7 +56,7 @@ my (@stack, $re, $dre, $sub, $x, $xs, $funcre, $min_stack); if ($arch =~ '^(aarch|arm)64$') { #ffffffc0006325cc: a9bb7bfd stp x29, x30, [sp, #-80]! 
#a110: d11643ff sub sp, sp, #0x590 - $re = qr/^.*stp.*sp, \#-([0-9]{1,8})\]\!/o; + $re = qr/^.*stp.*sp, ?\#-([0-9]{1,8})\]\!/o; $dre = qr/^.*sub.*sp, sp, #(0x$x{1,8})/o; } elsif ($arch eq 'arm') { #c0008ffc: e24dd064 sub sp, sp, #100 ; 0x64 @@ -68,25 +68,22 @@ my (@stack, $re, $dre, $sub, $x, $xs, $funcre, $min_stack); # 2f60: 48 81 ec e8 05 00 00 sub $0x5e8,%rsp $re = qr/^.*[as][du][db] \$(0x$x{1,8}),\%(e|r)sp$/o; $dre = qr/^.*[as][du][db] (%.*),\%(e|r)sp$/o; - } elsif ($arch eq 'ia64') { - #e0000000044011fc: 01 0f fc 8c adds r12=-384,r12 - $re = qr/.*adds.*r12=-(([0-9]{2}|[3-9])[0-9]{2}),r12/o; } elsif ($arch eq 'm68k') { # 2b6c: 4e56 fb70 linkw %fp,#-1168 # 1df770: defc ffe4 addaw #-28,%sp $re = qr/.*(?:linkw %fp,|addaw )#-([0-9]{1,4})(?:,%sp)?$/o; } elsif ($arch eq 'mips64') { #8800402c: 67bdfff0 daddiu sp,sp,-16 - $re = qr/.*daddiu.*sp,sp,-(([0-9]{2}|[3-9])[0-9]{2})/o; + $re = qr/.*daddiu.*sp,sp,-([0-9]{1,8})/o; } elsif ($arch eq 'mips') { #88003254: 27bdffe0 addiu sp,sp,-32 - $re = qr/.*addiu.*sp,sp,-(([0-9]{2}|[3-9])[0-9]{2})/o; + $re = qr/.*addiu.*sp,sp,-([0-9]{1,8})/o; } elsif ($arch eq 'nios2') { #25a8: defffb04 addi sp,sp,-20 - $re = qr/.*addi.*sp,sp,-(([0-9]{2}|[3-9])[0-9]{2})/o; + $re = qr/.*addi.*sp,sp,-([0-9]{1,8})/o; } elsif ($arch eq 'openrisc') { # c000043c: 9c 21 fe f0 l.addi r1,r1,-272 - $re = qr/.*l\.addi.*r1,r1,-(([0-9]{2}|[3-9])[0-9]{2})/o; + $re = qr/.*l\.addi.*r1,r1,-([0-9]{1,8})/o; } elsif ($arch eq 'parisc' || $arch eq 'parisc64') { $re = qr/.*ldo ($x{1,8})\(sp\),sp/o; } elsif ($arch eq 'powerpc' || $arch =~ /^ppc(64)?(le)?$/ ) { @@ -100,10 +97,13 @@ my (@stack, $re, $dre, $sub, $x, $xs, $funcre, $min_stack); $re = qr/.*(?:lay|ag?hi).*\%r15,-([0-9]+)(?:\(\%r15\))?$/o; } elsif ($arch eq 'sparc' || $arch eq 'sparc64') { # f0019d10: 9d e3 bf 90 save %sp, -112, %sp - $re = qr/.*save.*%sp, -(([0-9]{2}|[3-9])[0-9]{2}), %sp/o; + $re = qr/.*save.*%sp, -([0-9]{1,8}), %sp/o; } elsif ($arch =~ /^riscv(64)?$/) { #ffffffff8036e868: c2010113 addi sp,sp,-992 - $re = qr/.*addi.*sp,sp,-(([0-9]{2}|[3-9])[0-9]{2})/o; + $re = qr/.*addi.*sp,sp,-([0-9]{1,8})/o; + } elsif ($arch =~ /^loongarch(32|64)?$/) { + #9000000000224708: 02ff4063 addi.d $sp, $sp, -48(0xfd0) + $re = qr/.*addi\..*sp, .*sp, -([0-9]{1,8}).*/o; } else { print("wrong or unknown architecture \"$arch\"\n"); exit @@ -189,5 +189,20 @@ if ($total_size > $min_stack) { push @stack, "$intro$total_size\n"; } -# Sort output by size (last field) -print sort { ($b =~ /:\t*(\d+)$/)[0] <=> ($a =~ /:\t*(\d+)$/)[0] } @stack; +# Sort output by size (last field) and function name if size is the same +sub sort_lines { + my ($a, $b) = @_; + + my $num_a = $1 if $a =~ /:\t*(\d+)$/; + my $num_b = $1 if $b =~ /:\t*(\d+)$/; + my $func_a = $1 if $a =~ / (.*):/; + my $func_b = $1 if $b =~ / (.*):/; + + if ($num_a != $num_b) { + return $num_b <=> $num_a; + } else { + return $func_a cmp $func_b; + } +} + +print sort { sort_lines($a, $b) } @stack; diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh index 564c5632e1a2..cb980b144ca1 100755 --- a/scripts/decode_stacktrace.sh +++ b/scripts/decode_stacktrace.sh @@ -291,6 +291,9 @@ handle_line() { } while read line; do + # Strip unexpected carriage return at end of line + line=${line%$'\r'} + # Let's see if we have an address in the line if [[ $line =~ \[\<([^]]+)\>\] ]] || [[ $line =~ [^+\ ]+\+0x[0-9a-f]+/0x[0-9a-f]+ ]]; then diff --git a/scripts/decodecode b/scripts/decodecode index 8fe71c292381..6364218b2178 100755 --- a/scripts/decodecode +++ b/scripts/decodecode @@ -67,6 +67,7 
@@ if [ -z "$ARCH" ]; then case `uname -m` in aarch64*) ARCH=arm64 ;; arm*) ARCH=arm ;; + loongarch*) ARCH=loongarch ;; esac fi @@ -98,6 +99,10 @@ disas() { ${CROSS_COMPILE}strip $t.o fi + if [ "$ARCH" = "loongarch" ]; then + ${CROSS_COMPILE}strip $t.o + fi + if [ $pc_sub -ne 0 ]; then if [ $PC ]; then adj_vma=$(( $PC - $pc_sub )) diff --git a/scripts/gdb/linux/page_owner.py b/scripts/gdb/linux/page_owner.py index 844fd5d0c912..8e713a09cfe7 100644 --- a/scripts/gdb/linux/page_owner.py +++ b/scripts/gdb/linux/page_owner.py @@ -122,27 +122,24 @@ class DumpPageOwner(gdb.Command): if not (page_ext['flags'] & (1 << PAGE_EXT_OWNER_ALLOCATED)): gdb.write("page_owner is not allocated\n") - try: - page_owner = self.get_page_owner(page_ext) - gdb.write("Page last allocated via order %d, gfp_mask: 0x%x, pid: %d, tgid: %d (%s), ts %u ns, free_ts %u ns\n" %\ - (page_owner["order"], page_owner["gfp_mask"],\ - page_owner["pid"], page_owner["tgid"], page_owner["comm"],\ - page_owner["ts_nsec"], page_owner["free_ts_nsec"])) - gdb.write("PFN: %d, Flags: 0x%x\n" % (pfn, page['flags'])) - if page_owner["handle"] == 0: - gdb.write('page_owner allocation stack trace missing\n') - else: - stackdepot.stack_depot_print(page_owner["handle"]) + page_owner = self.get_page_owner(page_ext) + gdb.write("Page last allocated via order %d, gfp_mask: 0x%x, pid: %d, tgid: %d (%s), ts %u ns, free_ts %u ns\n" %\ + (page_owner["order"], page_owner["gfp_mask"],\ + page_owner["pid"], page_owner["tgid"], page_owner["comm"].string(),\ + page_owner["ts_nsec"], page_owner["free_ts_nsec"])) + gdb.write("PFN: %d, Flags: 0x%x\n" % (pfn, page['flags'])) + if page_owner["handle"] == 0: + gdb.write('page_owner allocation stack trace missing\n') + else: + stackdepot.stack_depot_print(page_owner["handle"]) - if page_owner["free_handle"] == 0: - gdb.write('page_owner free stack trace missing\n') - else: - gdb.write('page last free stack trace:\n') - stackdepot.stack_depot_print(page_owner["free_handle"]) - if page_owner['last_migrate_reason'] != -1: - gdb.write('page has been migrated, last migrate reason: %s\n' % self.migrate_reason_names[page_owner['last_migrate_reason']]) - except: - gdb.write("\n") + if page_owner["free_handle"] == 0: + gdb.write('page_owner free stack trace missing\n') + else: + gdb.write('page last free stack trace:\n') + stackdepot.stack_depot_print(page_owner["free_handle"]) + if page_owner['last_migrate_reason'] != -1: + gdb.write('page has been migrated, last migrate reason: %s\n' % self.migrate_reason_names[page_owner['last_migrate_reason']]) def read_page_owner(self): pfn = self.min_pfn @@ -173,18 +170,13 @@ class DumpPageOwner(gdb.Command): pfn += 1 continue - try: - page_owner = self.get_page_owner(page_ext) - gdb.write("Page allocated via order %d, gfp_mask: 0x%x, pid: %d, tgid: %d (%s), ts %u ns, free_ts %u ns\n" %\ - (page_owner["order"], page_owner["gfp_mask"],\ - page_owner["pid"], page_owner["tgid"], page_owner["comm"],\ - page_owner["ts_nsec"], page_owner["free_ts_nsec"])) - gdb.write("PFN: %d, Flags: 0x%x\n" % (pfn, page['flags'])) - stackdepot.stack_depot_print(page_owner["handle"]) - pfn += (1 << page_owner["order"]) - continue - except: - gdb.write("\n") - pfn += 1 + page_owner = self.get_page_owner(page_ext) + gdb.write("Page allocated via order %d, gfp_mask: 0x%x, pid: %d, tgid: %d (%s), ts %u ns, free_ts %u ns\n" %\ + (page_owner["order"], page_owner["gfp_mask"],\ + page_owner["pid"], page_owner["tgid"], page_owner["comm"].string(),\ + page_owner["ts_nsec"], page_owner["free_ts_nsec"])) + 
gdb.write("PFN: %d, Flags: 0x%x\n" % (pfn, page['flags'])) + stackdepot.stack_depot_print(page_owner["handle"]) + pfn += (1 << page_owner["order"]) DumpPageOwner() diff --git a/scripts/gdb/linux/slab.py b/scripts/gdb/linux/slab.py index f012ba38c7d9..0e2d93867fe2 100644 --- a/scripts/gdb/linux/slab.py +++ b/scripts/gdb/linux/slab.py @@ -228,8 +228,7 @@ def slabtrace(alloc, cache_name): nr_cpu = gdb.parse_and_eval('__num_online_cpus')['counter'] if nr_cpu > 1: gdb.write(" cpus=") - for i in loc['cpus']: - gdb.write("%d," % i) + gdb.write(','.join(str(cpu) for cpu in loc['cpus'])) gdb.write("\n") if constants.LX_CONFIG_STACKDEPOT: if loc['handle']: diff --git a/scripts/gdb/linux/stackdepot.py b/scripts/gdb/linux/stackdepot.py index 047d329a6a12..0281d9de4b7c 100644 --- a/scripts/gdb/linux/stackdepot.py +++ b/scripts/gdb/linux/stackdepot.py @@ -25,10 +25,10 @@ def stack_depot_fetch(handle): handle_parts_t = gdb.lookup_type("union handle_parts") parts = handle.cast(handle_parts_t) offset = parts['offset'] << DEPOT_STACK_ALIGN - pool_index_cached = gdb.parse_and_eval('pool_index') + pools_num = gdb.parse_and_eval('pools_num') - if parts['pool_index'] > pool_index_cached: - gdb.write("pool index %d out of bounds (%d) for stack id 0x%08x\n" % (parts['pool_index'], pool_index_cached, handle)) + if parts['pool_index'] > pools_num: + gdb.write("pool index %d out of bounds (%d) for stack id 0x%08x\n" % (parts['pool_index'], pools_num, handle)) return gdb.Value(0), 0 stack_pools = gdb.parse_and_eval('stack_pools') diff --git a/scripts/spelling.txt b/scripts/spelling.txt index 855c4863124b..edec60d39bbf 100644 --- a/scripts/spelling.txt +++ b/scripts/spelling.txt @@ -26,6 +26,7 @@ accelaration||acceleration accelearion||acceleration acceleratoin||acceleration accelleration||acceleration +accelrometer||accelerometer accesing||accessing accesnt||accent accessable||accessible @@ -137,6 +138,7 @@ anniversery||anniversary annoucement||announcement anomolies||anomalies anomoly||anomaly +anonynous||anonymous anway||anyway aplication||application appearence||appearance @@ -267,6 +269,7 @@ cadidate||candidate cahces||caches calender||calendar calescing||coalescing +calibraiton||calibration calle||called callibration||calibration callled||called @@ -288,6 +291,7 @@ capabitilies||capabilities capablity||capability capatibilities||capabilities capapbilities||capabilities +captuer||capture caputure||capture carefuly||carefully cariage||carriage @@ -340,6 +344,7 @@ comminucation||communication commited||committed commiting||committing committ||commit +commmand||command commnunication||communication commoditiy||commodity comsume||consume @@ -406,6 +411,7 @@ continious||continuous continous||continuous continously||continuously continueing||continuing +contiuous||continuous contraints||constraints contruct||construct contol||control @@ -757,6 +763,7 @@ hardward||hardware havind||having heirarchically||hierarchically heirarchy||hierarchy +heirachy||hierarchy helpfull||helpful hearbeat||heartbeat heterogenous||heterogeneous @@ -1199,6 +1206,7 @@ priting||printing privilaged||privileged privilage||privilege priviledge||privilege +priviledged||privileged priviledges||privileges privleges||privileges probaly||probably @@ -1251,6 +1259,7 @@ purgable||purgeable pwoer||power queing||queuing quering||querying +querrying||querying queus||queues randomally||randomly raoming||roaming @@ -1324,6 +1333,7 @@ reseting||resetting reseved||reserved reseverd||reserved resizeable||resizable +resonable||reasonable resotre||restore 
 resouce||resource
 resouces||resources
@@ -1427,6 +1437,7 @@ sliped||slipped
 softwade||software
 softwares||software
 soley||solely
+soluation||solution
 souce||source
 speach||speech
 specfic||specific
@@ -1458,6 +1469,7 @@ standart||standard
 standy||standby
 stardard||standard
 staticly||statically
+statisitcs||statistics
 statuss||status
 stoped||stopped
 stoping||stopping
@@ -1548,6 +1560,7 @@ threds||threads
 threee||three
 threshhold||threshold
 thresold||threshold
+throtting||throttling
 throught||through
 tansition||transition
 trackling||tracking
@@ -1571,6 +1584,7 @@ tranasction||transaction
 tranceiver||transceiver
 tranfer||transfer
 tranmission||transmission
+tranport||transport
 transcevier||transceiver
 transciever||transceiver
 transferd||transferred
diff --git a/security/integrity/ima/ima_kexec.c b/security/integrity/ima/ima_kexec.c
index ad133fe120db..dadc1d138118 100644
--- a/security/integrity/ima/ima_kexec.c
+++ b/security/integrity/ima/ima_kexec.c
@@ -129,8 +129,8 @@ void ima_add_kexec_buffer(struct kimage *image)
 	image->ima_buffer_size = kexec_segment_size;
 	image->ima_buffer = kexec_buffer;
 
-	pr_debug("kexec measurement buffer for the loaded kernel at 0x%lx.\n",
-		 kbuf.mem);
+	kexec_dprintk("kexec measurement buffer for the loaded kernel at 0x%lx.\n",
+		      kbuf.mem);
 }
 #endif /* IMA_KEXEC */
diff --git a/usr/Kconfig b/usr/Kconfig
index 8bbcf699fe3b..9279a2893ab0 100644
--- a/usr/Kconfig
+++ b/usr/Kconfig
@@ -185,9 +185,9 @@ config INITRAMFS_COMPRESSION_LZO
 	bool "LZO"
 	depends on RD_LZO
 	help
-	  It's compression ratio is the second poorest amongst the choices. The
-	  kernel size is about 10% bigger than gzip. Despite that, it's
-	  decompression speed is the second fastest and it's compression speed
+	  Its compression ratio is the second poorest amongst the choices. The
+	  kernel size is about 10% bigger than gzip. Despite that, its
+	  decompression speed is the second fastest and its compression speed
 	  is quite fast too.
 
 	  If you choose this, keep in mind that you may need to install the lzop
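
The new sort_lines() helper in scripts/checkstack.pl above orders the
report by stack size, largest first, and falls back to the function name
when two entries consume the same amount of stack, so repeated runs give
a stable order. A rough Python sketch of that ordering follows; it is an
illustration only, not part of the patch, and the sample records merely
mimic checkstack's "address function [object]:<tab>size" output lines.

    import re

    stack_lines = [
        "0xffff800080020000 foo [vmlinux]:\t512",
        "0xffff800080010000 bar [vmlinux]:\t512",
        "0xffff800080030000 baz [vmlinux]:\t768",
    ]

    def sort_key(line):
        # The same two fields the Perl helper extracts: the trailing
        # size and the "function [object]" text before the colon.
        size = int(re.search(r":\t*(\d+)$", line).group(1))
        func = re.search(r" (.*):", line).group(1)
        # Negating the size puts bigger stacks first while keeping the
        # name tie-breaker in ascending alphabetical order.
        return (-size, func)

    for line in sorted(stack_lines, key=sort_key):
        print(line)

    # Prints baz (768) first, then bar and foo (both 512) in name order.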