Modules updates for v5.19-rc1

As promised, for v5.19 I queued up quite a bit of work for modules, but
 still with a pretty conservative eye. These changes have been soaking on
 modules-next (and so linux-next) for quite some time: the code move was
 merged onto modules-next on March 22, and the last patch was queued on May
 5th.
 
 The following are the highlights of what bells and whistles we will get for
 v5.19:
 
  1) It was time to tidy up kernel/module.c, and one way to start that
     effort was to split it up into files. At my request Aaron Tomlin
     spearheaded that effort with the goal of not introducing any
     functional changes at all during that endeavour. The penalty for the
     split is +1322 bytes total: +112 bytes in data and +1210 bytes in
     text, while bss is unchanged. One benefit, other than making the code
     easier to read and review, is drawing more review help for
     livepatching-related changes: kernel/module/livepatch.c is now pegged
     as maintained by the live patching folks.
 
     The before and after with just the move on a defconfig on x86-64:
 
      $ size kernel/module.o
         text    data     bss     dec     hex filename
        38434    4540     104   43078    a846 kernel/module.o
 
      $ size -t kernel/module/*.o
         text    data     bss     dec     hex filename
        4785     120       0    4905    1329 kernel/module/kallsyms.o
       28577    4416     104   33097    8149 kernel/module/main.o
        1158       8       0    1166     48e kernel/module/procfs.o
         902     108       0    1010     3f2 kernel/module/strict_rwx.o
        3390       0       0    3390     d3e kernel/module/sysfs.o
         832       0       0     832     340 kernel/module/tree_lookup.o
       39644    4652     104   44400    ad70 (TOTALS)
 
  2) Aaron added module unload taint tracking (MODULE_UNLOAD_TAINT_TRACKING),
     which keeps a record of each unloaded module that tainted the kernel.
 
  3) Christophe Leroy added CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC,
     which lets architectures request that module data be placed in the
     vmalloc area instead of the module area. There are three reasons why
     an architecture might want this:
 
     a) On some architectures (like book3s/32) it is not possible to protect
        against execution on a page basis. Exec protection is instead
        applied per arch-defined segment (on book3s/32 that is 256M
        segments). By default the module area is in an Exec segment while
        the vmalloc area is in a NoExec segment. Using vmalloc lets module
        data be mapped NoExec on those architectures, which was not
        possible before.
 
     b) By pushing more module data into vmalloc you also increase the
        probability that module text remains within direct-branch range of
        core kernel text, which reduces the need for trampolines. This was
        reported on arm first, and the powerpc folks are following that
        lead.
 
     c) Freeing a module_alloc() area (Exec by default) leaves it exposed
        as Exec unless an architecture carries security enhancements to
        set it NoExec on free. Splitting module data from text also lets
        future generic special-purpose allocators be added to the kernel
        without developers having to grok the tribal knowledge of each
        architecture. Work like Rick Edgecombe's permission vmalloc
        interface [0] becomes easier to address over time.
 
        [0] https://lore.kernel.org/lkml/20201120202426.18009-1-rick.p.edgecombe@intel.com/#r
 
  4) Masahiro Yamada's symbol search enhancements
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCgAwFiEENnNq2KuOejlQLZofziMdCjCSiKcFAmKOnHkSHG1jZ3JvZkBr
 ZXJuZWwub3JnAAoJEM4jHQowkoinFw4P/1ADdvfj+b6wbAxou6tPa2ZKnx/ImEnE
 0T1P/n2guWg+2Q8oYjqifTpadGzr8td4c/PaGb5UpfdEOdBIyIGklrVZpQ+xkqfT
 X4KIvqsf4ajL24OKxOSNtvL8RXEIDUhJ4Veq6BImBk8CPrPjsUBlNyAIlvV0aom2
 BsFROQ2pMTSCiFY47gkMKLBlBny1l7zktoF0lhWTzHimw8VSDbTJFlu+fZvspd0o
 lCqiHTkpiBSJDSEEjqk0lT6wIb27fvdzjmjy+Ur71bBKiPIEPiL5XNUufkGe6oB3
 mnTOPow+wPTQc0dtkTpCHQYXE/a70Sbkwp1JfkbSYeHzJLlFru/tkmKiwN0RUo9l
 0mY7VPEKuQWmxsOkLqvwcPBGx5JOSWOJKrbgpFmH+RLgeEgEa8t7uQDURK2KeIj8
 P7ZzN5M2klKIHHA4vjfekYOJAb1Tii9Ibp7iGeiYxf93mPJBqwvRwbtBXBZpB4ce
 FoDrxwEq812KPW7P2O1kgOvq7Fn1KWh0wVeKc8iBGxFxJhzOQY86H1ZRWDLAxRss
 Rr1PMLt2TbTLUBt7MzR4vrg0NoQvpLYyf2jGFjWyZDRHU8nLeHkOlQot3xRDAtq9
 Bpx5mSlM9BGfPibd1Kw4BaxBha5vVCQ+AcleT+NWnCjw4I0wLoFi9RLUSyItn9No
 tlHLgdrM2a54
 =cxtr
 -----END PGP SIGNATURE-----

Merge tag 'modules-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux

Pull modules updates from Luis Chamberlain:

 - It was time to tidy up kernel/module.c, and one way to start that
   effort was to split it up into files. At my request Aaron Tomlin
   spearheaded that effort with the goal of not introducing any functional
   changes at all during that endeavour. The penalty for the split is
   +1322 bytes total: +112 bytes in data and +1210 bytes in text, while
   bss is unchanged. One benefit, other than making the code easier to
   read and review, is drawing more review help for livepatching-related
   changes: kernel/module/livepatch.c is now pegged as maintained by the
   live patching folks.

   The before and after with just the move on a defconfig on x86-64:

     $ size kernel/module.o
        text    data     bss     dec     hex filename
       38434    4540     104   43078    a846 kernel/module.o

     $ size -t kernel/module/*.o
        text    data     bss     dec     hex filename
       4785     120       0    4905    1329 kernel/module/kallsyms.o
      28577    4416     104   33097    8149 kernel/module/main.o
       1158       8       0    1166     48e kernel/module/procfs.o
        902     108       0    1010     3f2 kernel/module/strict_rwx.o
       3390       0       0    3390     d3e kernel/module/sysfs.o
        832       0       0     832     340 kernel/module/tree_lookup.o
      39644    4652     104   44400    ad70 (TOTALS)

 - Aaron added module unload taint tracking (MODULE_UNLOAD_TAINT_TRACKING),
   which keeps a record of each unloaded module that tainted the kernel
   (the record layout is quoted after this list).

 - Christophe Leroy added CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC,
   which lets architectures request that module data be placed in the
   vmalloc area instead of the module area (the opt-in mechanism is
   condensed after this list). There are three reasons why an
   architecture might want this:

    a) On some architectures (like book3s/32) it is not possible to
       protect against execution on a page basis. Exec protection is
       instead applied per arch-defined segment (on book3s/32 that is
       256M segments). By default the module area is in an Exec segment
       while the vmalloc area is in a NoExec segment. Using vmalloc lets
       module data be mapped NoExec on those architectures, which was
       not possible before.

    b) By pushing more module data into vmalloc you also increase the
       probability that module text remains within direct-branch range
       of core kernel text, which reduces the need for trampolines. This
       was reported on arm first, and the powerpc folks are following
       that lead.

    c) Freeing a module_alloc() area (Exec by default) leaves it exposed
       as Exec unless an architecture carries security enhancements to
       set it NoExec on free. Splitting module data from text also lets
       future generic special-purpose allocators be added to the kernel
       without developers having to grok the tribal knowledge of each
       architecture. Work like Rick Edgecombe's permission vmalloc
       interface [0] becomes easier to address over time.

       [0] https://lore.kernel.org/lkml/20201120202426.18009-1-rick.p.edgecombe@intel.com/#r

 - Masahiro Yamada's symbol search enhancements
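
   For reference, the record that unload taint tracking keeps per
   tainting module is the following struct, added by this series in
   kernel/module/internal.h (quoted from the diff below, with comments
   added here for orientation):

     struct mod_unload_taint {
             struct list_head list;      /* node in the list of unloaded tainted modules */
             char name[MODULE_NAME_LEN]; /* name of the unloaded module */
             unsigned long taints;       /* taint flags the module set */
             u64 count;                  /* number of times it was unloaded */
     };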

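   How an architecture opts in to the vmalloc module data placement is
   visible in the diff below: arch/Kconfig gains the symbol, and an
   architecture selects it. Condensed from the arch/Kconfig and
   arch/powerpc/Kconfig hunks in this merge:

     config ARCH_WANTS_MODULES_DATA_IN_VMALLOC
             bool
             help
               For architectures like powerpc/32 which have constraints on
               module allocation and need to allocate module data outside of
               the module area.

     # arch/powerpc/Kconfig, under "config PPC":
     select ARCH_WANTS_MODULES_DATA_IN_VMALLOC if PPC_BOOK3S_32 || PPC_8xx
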
* tag 'modules-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux: (33 commits)
  module: merge check_exported_symbol() into find_exported_symbol_in_section()
  module: do not binary-search in __ksymtab_gpl if fsa->gplok is false
  module: do not pass opaque pointer for symbol search
  module: show disallowed symbol name for inherit_taint()
  module: fix [e_shstrndx].sh_size=0 OOB access
  module: Introduce module unload taint tracking
  module: Move module_assert_mutex_or_preempt() to internal.h
  module: Make module_flags_taint() accept a module's taints bitmap and usable outside core code
  module.h: simplify MODULE_IMPORT_NS
  powerpc: Select ARCH_WANTS_MODULES_DATA_IN_VMALLOC on book3s/32 and 8xx
  module: Remove module_addr_min and module_addr_max
  module: Add CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
  module: Introduce data_layout
  module: Prepare for handling several RB trees
  module: Always have struct mod_tree_root
  module: Rename debug_align() as strict_align()
  module: Rework layout alignment to avoid BUG_ON()s
  module: Move module_enable_x() and frob_text() in strict_rwx.c
  module: Make module_enable_x() independent of CONFIG_ARCH_HAS_STRICT_MODULE_RWX
  module: Move version support into a separate file
  ...
Linus Torvalds 2022-05-26 17:13:43 -07:00
commit ef98f9cfe2
29 changed files with 2320 additions and 1988 deletions

MAINTAINERS

@@ -10987,6 +10987,7 @@ F: drivers/tty/serial/kgdboc.c
F: include/linux/kdb.h
F: include/linux/kgdb.h
F: kernel/debug/
F: kernel/module/kdb.c
KHADAS MCU MFD DRIVER
M: Neil Armstrong <narmstrong@baylibre.com>
@@ -11440,6 +11441,7 @@ F: arch/s390/include/asm/livepatch.h
F: arch/x86/include/asm/livepatch.h
F: include/linux/livepatch.h
F: kernel/livepatch/
F: kernel/module/livepatch.c
F: lib/livepatch/
F: samples/livepatch/
F: tools/testing/selftests/livepatch/
@@ -13371,7 +13373,7 @@ L: linux-kernel@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git modules-next
F: include/linux/module.h
F: kernel/module.c
F: kernel/module/
MONOLITHIC POWER SYSTEM PMIC DRIVER
M: Saravanan Sekar <sravanhome@gmail.com>

arch/Kconfig

@@ -892,6 +892,12 @@ config MODULES_USE_ELF_REL
Modules only use ELF REL relocations. Modules with ELF RELA
relocations will give an error.
config ARCH_WANTS_MODULES_DATA_IN_VMALLOC
bool
help
For architectures like powerpc/32 which have constraints on module
allocation and need to allocate module data outside of module area.
config HAVE_IRQ_EXIT_ON_IRQ_STACK
bool
help

arch/powerpc/Kconfig

@@ -158,6 +158,7 @@ config PPC
select ARCH_WANT_IPC_PARSE_VERSION
select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
select ARCH_WANT_LD_ORPHAN_WARN
select ARCH_WANTS_MODULES_DATA_IN_VMALLOC if PPC_BOOK3S_32 || PPC_8xx
select ARCH_WEAK_RELEASE_ACQUIRE
select BINFMT_ELF
select BUILDTIME_TABLE_SORT

include/linux/kdb.h

@@ -222,5 +222,6 @@ enum {
extern int kdbgetintenv(const char *, int *);
extern int kdb_set(int, const char **);
int kdb_lsmod(int argc, const char **argv);
#endif /* !_KDB_H */

include/linux/module.h

@@ -290,8 +290,7 @@ extern typeof(name) __mod_##type##__##name##_device_table \
* files require multiple MODULE_FIRMWARE() specifiers */
#define MODULE_FIRMWARE(_firmware) MODULE_INFO(firmware, _firmware)
#define _MODULE_IMPORT_NS(ns) MODULE_INFO(import_ns, #ns)
#define MODULE_IMPORT_NS(ns) _MODULE_IMPORT_NS(ns)
#define MODULE_IMPORT_NS(ns) MODULE_INFO(import_ns, __stringify(ns))
struct notifier_block;
@@ -422,6 +421,9 @@ struct module {
/* Core layout: rbtree is accessed frequently, so keep together. */
struct module_layout core_layout __module_layout_align;
struct module_layout init_layout;
#ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
struct module_layout data_layout;
#endif
/* Arch-specific module values */
struct mod_arch_specific arch;
@@ -569,6 +571,11 @@ bool is_module_text_address(unsigned long addr);
static inline bool within_module_core(unsigned long addr,
const struct module *mod)
{
#ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
if ((unsigned long)mod->data_layout.base <= addr &&
addr < (unsigned long)mod->data_layout.base + mod->data_layout.size)
return true;
#endif
return (unsigned long)mod->core_layout.base <= addr &&
addr < (unsigned long)mod->core_layout.base + mod->core_layout.size;
}
@@ -663,19 +670,15 @@ static inline bool module_requested_async_probing(struct module *module)
return module && module->async_probe_requested;
}
static inline bool is_livepatch_module(struct module *mod)
{
#ifdef CONFIG_LIVEPATCH
static inline bool is_livepatch_module(struct module *mod)
{
return mod->klp;
}
#else /* !CONFIG_LIVEPATCH */
static inline bool is_livepatch_module(struct module *mod)
{
#else
return false;
#endif
}
#endif /* CONFIG_LIVEPATCH */
bool is_module_sig_enforced(void);
void set_module_sig_enforced(void);
#else /* !CONFIG_MODULES... */
@@ -802,10 +805,6 @@ static inline bool module_requested_async_probing(struct module *module)
return false;
}
static inline bool is_module_sig_enforced(void)
{
return false;
}
static inline void set_module_sig_enforced(void)
{
@@ -857,11 +856,18 @@ static inline bool retpoline_module_ok(bool has_retpoline)
#endif
#ifdef CONFIG_MODULE_SIG
bool is_module_sig_enforced(void);
static inline bool module_sig_ok(struct module *module)
{
return module->sig_ok;
}
#else /* !CONFIG_MODULE_SIG */
static inline bool is_module_sig_enforced(void)
{
return false;
}
static inline bool module_sig_ok(struct module *module)
{
return true;

init/Kconfig

@@ -1983,6 +1983,17 @@ config MODULE_FORCE_UNLOAD
rmmod). This is mainly for kernel developers and desperate users.
If unsure, say N.
config MODULE_UNLOAD_TAINT_TRACKING
bool "Tainted module unload tracking"
depends on MODULE_UNLOAD
default n
help
This option allows you to maintain a record of each unloaded
module that tainted the kernel. In addition to displaying a
list of linked (or loaded) modules e.g. on detection of a bad
page (see bad_page()), the aforementioned details are also
shown. If unsure, say N.
config MODVERSIONS
bool "Module versioning support"
help

kernel/Makefile

@@ -29,7 +29,6 @@ KCOV_INSTRUMENT_softirq.o := n
KCSAN_SANITIZE_softirq.o = n
# These are called from save_stack_trace() on slub debug path,
# and produce insane amounts of uninteresting coverage.
KCOV_INSTRUMENT_module.o := n
KCOV_INSTRUMENT_extable.o := n
KCOV_INSTRUMENT_stacktrace.o := n
# Don't self-instrument.
@@ -53,6 +52,7 @@ obj-y += rcu/
obj-y += livepatch/
obj-y += dma/
obj-y += entry/
obj-$(CONFIG_MODULES) += module/
obj-$(CONFIG_KCMP) += kcmp.o
obj-$(CONFIG_FREEZER) += freezer.o
@@ -66,9 +66,6 @@ ifneq ($(CONFIG_SMP),y)
obj-y += up.o
endif
obj-$(CONFIG_UID16) += uid16.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_MODULE_DECOMPRESS) += module_decompress.o
obj-$(CONFIG_MODULE_SIG) += module_signing.o
obj-$(CONFIG_MODULE_SIG_FORMAT) += module_signature.o
obj-$(CONFIG_KALLSYMS) += kallsyms.o
obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o

kernel/debug/kdb/kdb_io.c

@@ -9,7 +9,6 @@
* Copyright (c) 2009 Wind River Systems, Inc. All Rights Reserved.
*/
#include <linux/module.h>
#include <linux/types.h>
#include <linux/ctype.h>
#include <linux/kernel.h>

kernel/debug/kdb/kdb_keyboard.c

@@ -11,7 +11,6 @@
#include <linux/kdb.h>
#include <linux/keyboard.h>
#include <linux/ctype.h>
#include <linux/module.h>
#include <linux/io.h>
/* Keyboard Controller Registers on normal PCs. */

kernel/debug/kdb/kdb_main.c

@@ -26,7 +26,6 @@
#include <linux/utsname.h>
#include <linux/vmalloc.h>
#include <linux/atomic.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/mm.h>
#include <linux/init.h>
@@ -2060,54 +2059,6 @@ static int kdb_ef(int argc, const char **argv)
return 0;
}
#if defined(CONFIG_MODULES)
/*
* kdb_lsmod - This function implements the 'lsmod' command. Lists
* currently loaded kernel modules.
* Mostly taken from userland lsmod.
*/
static int kdb_lsmod(int argc, const char **argv)
{
struct module *mod;
if (argc != 0)
return KDB_ARGCOUNT;
kdb_printf("Module Size modstruct Used by\n");
list_for_each_entry(mod, kdb_modules, list) {
if (mod->state == MODULE_STATE_UNFORMED)
continue;
kdb_printf("%-20s%8u 0x%px ", mod->name,
mod->core_layout.size, (void *)mod);
#ifdef CONFIG_MODULE_UNLOAD
kdb_printf("%4d ", module_refcount(mod));
#endif
if (mod->state == MODULE_STATE_GOING)
kdb_printf(" (Unloading)");
else if (mod->state == MODULE_STATE_COMING)
kdb_printf(" (Loading)");
else
kdb_printf(" (Live)");
kdb_printf(" 0x%px", mod->core_layout.base);
#ifdef CONFIG_MODULE_UNLOAD
{
struct module_use *use;
kdb_printf(" [ ");
list_for_each_entry(use, &mod->source_list,
source_list)
kdb_printf("%s ", use->target->name);
kdb_printf("]\n");
}
#endif
}
return 0;
}
#endif /* CONFIG_MODULES */
/*
* kdb_env - This function implements the 'env' command. Display the
* current environment variables.

kernel/debug/kdb/kdb_private.h

@@ -226,10 +226,6 @@ extern void kdb_kbd_cleanup_state(void);
#define kdb_kbd_cleanup_state()
#endif /* ! CONFIG_KDB_KEYBOARD */
#ifdef CONFIG_MODULES
extern struct list_head *kdb_modules;
#endif /* CONFIG_MODULES */
extern char kdb_prompt_str[];
#define KDB_WORD_SIZE ((int)sizeof(unsigned long))

kernel/debug/kdb/kdb_support.c

@@ -17,7 +17,6 @@
#include <linux/stddef.h>
#include <linux/vmalloc.h>
#include <linux/ptrace.h>
#include <linux/module.h>
#include <linux/highmem.h>
#include <linux/hardirq.h>
#include <linux/delay.h>

kernel/module-internal.h (deleted)

@@ -1,50 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/* Module internals
*
* Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/elf.h>
#include <asm/module.h>
struct load_info {
const char *name;
/* pointer to module in temporary copy, freed at end of load_module() */
struct module *mod;
Elf_Ehdr *hdr;
unsigned long len;
Elf_Shdr *sechdrs;
char *secstrings, *strtab;
unsigned long symoffs, stroffs, init_typeoffs, core_typeoffs;
struct _ddebug *debug;
unsigned int num_debug;
bool sig_ok;
#ifdef CONFIG_KALLSYMS
unsigned long mod_kallsyms_init_off;
#endif
#ifdef CONFIG_MODULE_DECOMPRESS
struct page **pages;
unsigned int max_pages;
unsigned int used_pages;
#endif
struct {
unsigned int sym, str, mod, vers, info, pcpu;
} index;
};
extern int mod_verify_sig(const void *mod, struct load_info *info);
#ifdef CONFIG_MODULE_DECOMPRESS
int module_decompress(struct load_info *info, const void *buf, size_t size);
void module_decompress_cleanup(struct load_info *info);
#else
static inline int module_decompress(struct load_info *info,
const void *buf, size_t size)
{
return -EOPNOTSUPP;
}
static inline void module_decompress_cleanup(struct load_info *info)
{
}
#endif

kernel/module/Makefile (new file)

@@ -0,0 +1,21 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for linux kernel module support
#
# These are called from save_stack_trace() on slub debug path,
# and produce insane amounts of uninteresting coverage.
KCOV_INSTRUMENT_module.o := n
obj-y += main.o strict_rwx.o
obj-$(CONFIG_MODULE_DECOMPRESS) += decompress.o
obj-$(CONFIG_MODULE_SIG) += signing.o
obj-$(CONFIG_LIVEPATCH) += livepatch.o
obj-$(CONFIG_MODULES_TREE_LOOKUP) += tree_lookup.o
obj-$(CONFIG_DEBUG_KMEMLEAK) += debug_kmemleak.o
obj-$(CONFIG_KALLSYMS) += kallsyms.o
obj-$(CONFIG_PROC_FS) += procfs.o
obj-$(CONFIG_SYSFS) += sysfs.o
obj-$(CONFIG_KGDB_KDB) += kdb.o
obj-$(CONFIG_MODVERSIONS) += version.o
obj-$(CONFIG_MODULE_UNLOAD_TAINT_TRACKING) += tracking.o

kernel/module/debug_kmemleak.c (new file)

@@ -0,0 +1,30 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Module kmemleak support
*
* Copyright (C) 2009 Catalin Marinas
*/
#include <linux/module.h>
#include <linux/kmemleak.h>
#include "internal.h"
void kmemleak_load_module(const struct module *mod,
const struct load_info *info)
{
unsigned int i;
/* only scan the sections containing data */
kmemleak_scan_area(mod, sizeof(struct module), GFP_KERNEL);
for (i = 1; i < info->hdr->e_shnum; i++) {
/* Scan all writable sections that's not executable */
if (!(info->sechdrs[i].sh_flags & SHF_ALLOC) ||
!(info->sechdrs[i].sh_flags & SHF_WRITE) ||
(info->sechdrs[i].sh_flags & SHF_EXECINSTR))
continue;
kmemleak_scan_area((void *)info->sechdrs[i].sh_addr,
info->sechdrs[i].sh_size, GFP_KERNEL);
}
}

kernel/module/decompress.c

@@ -12,7 +12,7 @@
#include <linux/sysfs.h>
#include <linux/vmalloc.h>
#include "module-internal.h"
#include "internal.h"
static int module_extend_max_pages(struct load_info *info, unsigned int extent)
{
@@ -113,6 +113,7 @@ static ssize_t module_gzip_decompress(struct load_info *info,
do {
struct page *page = module_get_next_page(info);
if (!page) {
retval = -ENOMEM;
goto out_inflate_end;
@@ -171,6 +172,7 @@ static ssize_t module_xz_decompress(struct load_info *info,
do {
struct page *page = module_get_next_page(info);
if (!page) {
retval = -ENOMEM;
goto out;
@@ -256,6 +258,7 @@ static ssize_t compression_show(struct kobject *kobj,
{
return sysfs_emit(buf, "%s\n", __stringify(MODULE_COMPRESSION));
}
static struct kobj_attribute module_compression_attr = __ATTR_RO(compression);
static int __init module_decompress_sysfs_init(void)

kernel/module/internal.h (new file)

@@ -0,0 +1,302 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/* Module internals
*
* Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/elf.h>
#include <linux/compiler.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#ifndef ARCH_SHF_SMALL
#define ARCH_SHF_SMALL 0
#endif
/* If this is set, the section belongs in the init part of the module */
#define INIT_OFFSET_MASK (1UL << (BITS_PER_LONG - 1))
/* Maximum number of characters written by module_flags() */
#define MODULE_FLAGS_BUF_SIZE (TAINT_FLAGS_COUNT + 4)
#ifndef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
#define data_layout core_layout
#endif
/*
* Modules' sections will be aligned on page boundaries
* to ensure complete separation of code and data, but
* only when CONFIG_STRICT_MODULE_RWX=y
*/
#ifdef CONFIG_STRICT_MODULE_RWX
# define strict_align(X) PAGE_ALIGN(X)
#else
# define strict_align(X) (X)
#endif
extern struct mutex module_mutex;
extern struct list_head modules;
extern struct module_attribute *modinfo_attrs[];
extern size_t modinfo_attrs_count;
/* Provided by the linker */
extern const struct kernel_symbol __start___ksymtab[];
extern const struct kernel_symbol __stop___ksymtab[];
extern const struct kernel_symbol __start___ksymtab_gpl[];
extern const struct kernel_symbol __stop___ksymtab_gpl[];
extern const s32 __start___kcrctab[];
extern const s32 __start___kcrctab_gpl[];
struct load_info {
const char *name;
/* pointer to module in temporary copy, freed at end of load_module() */
struct module *mod;
Elf_Ehdr *hdr;
unsigned long len;
Elf_Shdr *sechdrs;
char *secstrings, *strtab;
unsigned long symoffs, stroffs, init_typeoffs, core_typeoffs;
struct _ddebug *debug;
unsigned int num_debug;
bool sig_ok;
#ifdef CONFIG_KALLSYMS
unsigned long mod_kallsyms_init_off;
#endif
#ifdef CONFIG_MODULE_DECOMPRESS
struct page **pages;
unsigned int max_pages;
unsigned int used_pages;
#endif
struct {
unsigned int sym, str, mod, vers, info, pcpu;
} index;
};
enum mod_license {
NOT_GPL_ONLY,
GPL_ONLY,
};
struct find_symbol_arg {
/* Input */
const char *name;
bool gplok;
bool warn;
/* Output */
struct module *owner;
const s32 *crc;
const struct kernel_symbol *sym;
enum mod_license license;
};
int mod_verify_sig(const void *mod, struct load_info *info);
int try_to_force_load(struct module *mod, const char *reason);
bool find_symbol(struct find_symbol_arg *fsa);
struct module *find_module_all(const char *name, size_t len, bool even_unformed);
int cmp_name(const void *name, const void *sym);
long module_get_offset(struct module *mod, unsigned int *size, Elf_Shdr *sechdr,
unsigned int section);
char *module_flags(struct module *mod, char *buf);
size_t module_flags_taint(unsigned long taints, char *buf);
static inline void module_assert_mutex_or_preempt(void)
{
#ifdef CONFIG_LOCKDEP
if (unlikely(!debug_locks))
return;
WARN_ON_ONCE(!rcu_read_lock_sched_held() &&
!lockdep_is_held(&module_mutex));
#endif
}
static inline unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
{
#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
return (unsigned long)offset_to_ptr(&sym->value_offset);
#else
return sym->value;
#endif
}
#ifdef CONFIG_LIVEPATCH
int copy_module_elf(struct module *mod, struct load_info *info);
void free_module_elf(struct module *mod);
#else /* !CONFIG_LIVEPATCH */
static inline int copy_module_elf(struct module *mod, struct load_info *info)
{
return 0;
}
static inline void free_module_elf(struct module *mod) { }
#endif /* CONFIG_LIVEPATCH */
static inline bool set_livepatch_module(struct module *mod)
{
#ifdef CONFIG_LIVEPATCH
mod->klp = true;
return true;
#else
return false;
#endif
}
#ifdef CONFIG_MODULE_UNLOAD_TAINT_TRACKING
struct mod_unload_taint {
struct list_head list;
char name[MODULE_NAME_LEN];
unsigned long taints;
u64 count;
};
int try_add_tainted_module(struct module *mod);
void print_unloaded_tainted_modules(void);
#else /* !CONFIG_MODULE_UNLOAD_TAINT_TRACKING */
static inline int try_add_tainted_module(struct module *mod)
{
return 0;
}
static inline void print_unloaded_tainted_modules(void)
{
}
#endif /* CONFIG_MODULE_UNLOAD_TAINT_TRACKING */
#ifdef CONFIG_MODULE_DECOMPRESS
int module_decompress(struct load_info *info, const void *buf, size_t size);
void module_decompress_cleanup(struct load_info *info);
#else
static inline int module_decompress(struct load_info *info,
const void *buf, size_t size)
{
return -EOPNOTSUPP;
}
static inline void module_decompress_cleanup(struct load_info *info)
{
}
#endif
struct mod_tree_root {
#ifdef CONFIG_MODULES_TREE_LOOKUP
struct latch_tree_root root;
#endif
unsigned long addr_min;
unsigned long addr_max;
};
extern struct mod_tree_root mod_tree;
extern struct mod_tree_root mod_data_tree;
#ifdef CONFIG_MODULES_TREE_LOOKUP
void mod_tree_insert(struct module *mod);
void mod_tree_remove_init(struct module *mod);
void mod_tree_remove(struct module *mod);
struct module *mod_find(unsigned long addr, struct mod_tree_root *tree);
#else /* !CONFIG_MODULES_TREE_LOOKUP */
static inline void mod_tree_insert(struct module *mod) { }
static inline void mod_tree_remove_init(struct module *mod) { }
static inline void mod_tree_remove(struct module *mod) { }
static inline struct module *mod_find(unsigned long addr, struct mod_tree_root *tree)
{
struct module *mod;
list_for_each_entry_rcu(mod, &modules, list,
lockdep_is_held(&module_mutex)) {
if (within_module(addr, mod))
return mod;
}
return NULL;
}
#endif /* CONFIG_MODULES_TREE_LOOKUP */
void module_enable_ro(const struct module *mod, bool after_init);
void module_enable_nx(const struct module *mod);
void module_enable_x(const struct module *mod);
int module_enforce_rwx_sections(Elf_Ehdr *hdr, Elf_Shdr *sechdrs,
char *secstrings, struct module *mod);
bool module_check_misalignment(const struct module *mod);
#ifdef CONFIG_MODULE_SIG
int module_sig_check(struct load_info *info, int flags);
#else /* !CONFIG_MODULE_SIG */
static inline int module_sig_check(struct load_info *info, int flags)
{
return 0;
}
#endif /* !CONFIG_MODULE_SIG */
#ifdef CONFIG_DEBUG_KMEMLEAK
void kmemleak_load_module(const struct module *mod, const struct load_info *info);
#else /* !CONFIG_DEBUG_KMEMLEAK */
static inline void kmemleak_load_module(const struct module *mod,
const struct load_info *info) { }
#endif /* CONFIG_DEBUG_KMEMLEAK */
#ifdef CONFIG_KALLSYMS
void init_build_id(struct module *mod, const struct load_info *info);
void layout_symtab(struct module *mod, struct load_info *info);
void add_kallsyms(struct module *mod, const struct load_info *info);
unsigned long find_kallsyms_symbol_value(struct module *mod, const char *name);
static inline bool sect_empty(const Elf_Shdr *sect)
{
return !(sect->sh_flags & SHF_ALLOC) || sect->sh_size == 0;
}
#else /* !CONFIG_KALLSYMS */
static inline void init_build_id(struct module *mod, const struct load_info *info) { }
static inline void layout_symtab(struct module *mod, struct load_info *info) { }
static inline void add_kallsyms(struct module *mod, const struct load_info *info) { }
#endif /* CONFIG_KALLSYMS */
#ifdef CONFIG_SYSFS
int mod_sysfs_setup(struct module *mod, const struct load_info *info,
struct kernel_param *kparam, unsigned int num_params);
void mod_sysfs_teardown(struct module *mod);
void init_param_lock(struct module *mod);
#else /* !CONFIG_SYSFS */
static inline int mod_sysfs_setup(struct module *mod,
const struct load_info *info,
struct kernel_param *kparam,
unsigned int num_params)
{
return 0;
}
static inline void mod_sysfs_teardown(struct module *mod) { }
static inline void init_param_lock(struct module *mod) { }
#endif /* CONFIG_SYSFS */
#ifdef CONFIG_MODVERSIONS
int check_version(const struct load_info *info,
const char *symname, struct module *mod, const s32 *crc);
void module_layout(struct module *mod, struct modversion_info *ver, struct kernel_param *kp,
struct kernel_symbol *ks, struct tracepoint * const *tp);
int check_modstruct_version(const struct load_info *info, struct module *mod);
int same_magic(const char *amagic, const char *bmagic, bool has_crcs);
#else /* !CONFIG_MODVERSIONS */
static inline int check_version(const struct load_info *info,
const char *symname,
struct module *mod,
const s32 *crc)
{
return 1;
}
static inline int check_modstruct_version(const struct load_info *info,
struct module *mod)
{
return 1;
}
static inline int same_magic(const char *amagic, const char *bmagic, bool has_crcs)
{
return strcmp(amagic, bmagic) == 0;
}
#endif /* CONFIG_MODVERSIONS */

kernel/module/kallsyms.c (new file)

@@ -0,0 +1,512 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Module kallsyms support
*
* Copyright (C) 2010 Rusty Russell
*/
#include <linux/module.h>
#include <linux/kallsyms.h>
#include <linux/buildid.h>
#include <linux/bsearch.h>
#include "internal.h"
/* Lookup exported symbol in given range of kernel_symbols */
static const struct kernel_symbol *lookup_exported_symbol(const char *name,
const struct kernel_symbol *start,
const struct kernel_symbol *stop)
{
return bsearch(name, start, stop - start,
sizeof(struct kernel_symbol), cmp_name);
}
static int is_exported(const char *name, unsigned long value,
const struct module *mod)
{
const struct kernel_symbol *ks;
if (!mod)
ks = lookup_exported_symbol(name, __start___ksymtab, __stop___ksymtab);
else
ks = lookup_exported_symbol(name, mod->syms, mod->syms + mod->num_syms);
return ks && kernel_symbol_value(ks) == value;
}
/* As per nm */
static char elf_type(const Elf_Sym *sym, const struct load_info *info)
{
const Elf_Shdr *sechdrs = info->sechdrs;
if (ELF_ST_BIND(sym->st_info) == STB_WEAK) {
if (ELF_ST_TYPE(sym->st_info) == STT_OBJECT)
return 'v';
else
return 'w';
}
if (sym->st_shndx == SHN_UNDEF)
return 'U';
if (sym->st_shndx == SHN_ABS || sym->st_shndx == info->index.pcpu)
return 'a';
if (sym->st_shndx >= SHN_LORESERVE)
return '?';
if (sechdrs[sym->st_shndx].sh_flags & SHF_EXECINSTR)
return 't';
if (sechdrs[sym->st_shndx].sh_flags & SHF_ALLOC &&
sechdrs[sym->st_shndx].sh_type != SHT_NOBITS) {
if (!(sechdrs[sym->st_shndx].sh_flags & SHF_WRITE))
return 'r';
else if (sechdrs[sym->st_shndx].sh_flags & ARCH_SHF_SMALL)
return 'g';
else
return 'd';
}
if (sechdrs[sym->st_shndx].sh_type == SHT_NOBITS) {
if (sechdrs[sym->st_shndx].sh_flags & ARCH_SHF_SMALL)
return 's';
else
return 'b';
}
if (strstarts(info->secstrings + sechdrs[sym->st_shndx].sh_name,
".debug")) {
return 'n';
}
return '?';
}
static bool is_core_symbol(const Elf_Sym *src, const Elf_Shdr *sechdrs,
unsigned int shnum, unsigned int pcpundx)
{
const Elf_Shdr *sec;
if (src->st_shndx == SHN_UNDEF ||
src->st_shndx >= shnum ||
!src->st_name)
return false;
#ifdef CONFIG_KALLSYMS_ALL
if (src->st_shndx == pcpundx)
return true;
#endif
sec = sechdrs + src->st_shndx;
if (!(sec->sh_flags & SHF_ALLOC)
#ifndef CONFIG_KALLSYMS_ALL
|| !(sec->sh_flags & SHF_EXECINSTR)
#endif
|| (sec->sh_entsize & INIT_OFFSET_MASK))
return false;
return true;
}
/*
* We only allocate and copy the strings needed by the parts of symtab
* we keep. This is simple, but has the effect of making multiple
* copies of duplicates. We could be more sophisticated, see
* linux-kernel thread starting with
* <73defb5e4bca04a6431392cc341112b1@localhost>.
*/
void layout_symtab(struct module *mod, struct load_info *info)
{
Elf_Shdr *symsect = info->sechdrs + info->index.sym;
Elf_Shdr *strsect = info->sechdrs + info->index.str;
const Elf_Sym *src;
unsigned int i, nsrc, ndst, strtab_size = 0;
/* Put symbol section at end of init part of module. */
symsect->sh_flags |= SHF_ALLOC;
symsect->sh_entsize = module_get_offset(mod, &mod->init_layout.size, symsect,
info->index.sym) | INIT_OFFSET_MASK;
pr_debug("\t%s\n", info->secstrings + symsect->sh_name);
src = (void *)info->hdr + symsect->sh_offset;
nsrc = symsect->sh_size / sizeof(*src);
/* Compute total space required for the core symbols' strtab. */
for (ndst = i = 0; i < nsrc; i++) {
if (i == 0 || is_livepatch_module(mod) ||
is_core_symbol(src + i, info->sechdrs, info->hdr->e_shnum,
info->index.pcpu)) {
strtab_size += strlen(&info->strtab[src[i].st_name]) + 1;
ndst++;
}
}
/* Append room for core symbols at end of core part. */
info->symoffs = ALIGN(mod->data_layout.size, symsect->sh_addralign ?: 1);
info->stroffs = mod->data_layout.size = info->symoffs + ndst * sizeof(Elf_Sym);
mod->data_layout.size += strtab_size;
info->core_typeoffs = mod->data_layout.size;
mod->data_layout.size += ndst * sizeof(char);
mod->data_layout.size = strict_align(mod->data_layout.size);
/* Put string table section at end of init part of module. */
strsect->sh_flags |= SHF_ALLOC;
strsect->sh_entsize = module_get_offset(mod, &mod->init_layout.size, strsect,
info->index.str) | INIT_OFFSET_MASK;
pr_debug("\t%s\n", info->secstrings + strsect->sh_name);
/* We'll tack temporary mod_kallsyms on the end. */
mod->init_layout.size = ALIGN(mod->init_layout.size,
__alignof__(struct mod_kallsyms));
info->mod_kallsyms_init_off = mod->init_layout.size;
mod->init_layout.size += sizeof(struct mod_kallsyms);
info->init_typeoffs = mod->init_layout.size;
mod->init_layout.size += nsrc * sizeof(char);
mod->init_layout.size = strict_align(mod->init_layout.size);
}
/*
* We use the full symtab and strtab which layout_symtab arranged to
* be appended to the init section. Later we switch to the cut-down
* core-only ones.
*/
void add_kallsyms(struct module *mod, const struct load_info *info)
{
unsigned int i, ndst;
const Elf_Sym *src;
Elf_Sym *dst;
char *s;
Elf_Shdr *symsec = &info->sechdrs[info->index.sym];
/* Set up to point into init section. */
mod->kallsyms = (void __rcu *)mod->init_layout.base +
info->mod_kallsyms_init_off;
preempt_disable();
/* The following is safe since this pointer cannot change */
rcu_dereference_sched(mod->kallsyms)->symtab = (void *)symsec->sh_addr;
rcu_dereference_sched(mod->kallsyms)->num_symtab = symsec->sh_size / sizeof(Elf_Sym);
/* Make sure we get permanent strtab: don't use info->strtab. */
rcu_dereference_sched(mod->kallsyms)->strtab =
(void *)info->sechdrs[info->index.str].sh_addr;
rcu_dereference_sched(mod->kallsyms)->typetab = mod->init_layout.base + info->init_typeoffs;
/*
* Now populate the cut down core kallsyms for after init
* and set types up while we still have access to sections.
*/
mod->core_kallsyms.symtab = dst = mod->data_layout.base + info->symoffs;
mod->core_kallsyms.strtab = s = mod->data_layout.base + info->stroffs;
mod->core_kallsyms.typetab = mod->data_layout.base + info->core_typeoffs;
src = rcu_dereference_sched(mod->kallsyms)->symtab;
for (ndst = i = 0; i < rcu_dereference_sched(mod->kallsyms)->num_symtab; i++) {
rcu_dereference_sched(mod->kallsyms)->typetab[i] = elf_type(src + i, info);
if (i == 0 || is_livepatch_module(mod) ||
is_core_symbol(src + i, info->sechdrs, info->hdr->e_shnum,
info->index.pcpu)) {
mod->core_kallsyms.typetab[ndst] =
rcu_dereference_sched(mod->kallsyms)->typetab[i];
dst[ndst] = src[i];
dst[ndst++].st_name = s - mod->core_kallsyms.strtab;
s += strscpy(s,
&rcu_dereference_sched(mod->kallsyms)->strtab[src[i].st_name],
KSYM_NAME_LEN) + 1;
}
}
preempt_enable();
mod->core_kallsyms.num_symtab = ndst;
}
#if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID)
void init_build_id(struct module *mod, const struct load_info *info)
{
const Elf_Shdr *sechdr;
unsigned int i;
for (i = 0; i < info->hdr->e_shnum; i++) {
sechdr = &info->sechdrs[i];
if (!sect_empty(sechdr) && sechdr->sh_type == SHT_NOTE &&
!build_id_parse_buf((void *)sechdr->sh_addr, mod->build_id,
sechdr->sh_size))
break;
}
}
#else
void init_build_id(struct module *mod, const struct load_info *info)
{
}
#endif
/*
* This ignores the intensely annoying "mapping symbols" found
* in ARM ELF files: $a, $t and $d.
*/
static inline int is_arm_mapping_symbol(const char *str)
{
if (str[0] == '.' && str[1] == 'L')
return true;
return str[0] == '$' && strchr("axtd", str[1]) &&
(str[2] == '\0' || str[2] == '.');
}
static const char *kallsyms_symbol_name(struct mod_kallsyms *kallsyms, unsigned int symnum)
{
return kallsyms->strtab + kallsyms->symtab[symnum].st_name;
}
/*
* Given a module and address, find the corresponding symbol and return its name
* while providing its size and offset if needed.
*/
static const char *find_kallsyms_symbol(struct module *mod,
unsigned long addr,
unsigned long *size,
unsigned long *offset)
{
unsigned int i, best = 0;
unsigned long nextval, bestval;
struct mod_kallsyms *kallsyms = rcu_dereference_sched(mod->kallsyms);
/* At worse, next value is at end of module */
if (within_module_init(addr, mod))
nextval = (unsigned long)mod->init_layout.base + mod->init_layout.text_size;
else
nextval = (unsigned long)mod->core_layout.base + mod->core_layout.text_size;
bestval = kallsyms_symbol_value(&kallsyms->symtab[best]);
/*
* Scan for closest preceding symbol, and next symbol. (ELF
* starts real symbols at 1).
*/
for (i = 1; i < kallsyms->num_symtab; i++) {
const Elf_Sym *sym = &kallsyms->symtab[i];
unsigned long thisval = kallsyms_symbol_value(sym);
if (sym->st_shndx == SHN_UNDEF)
continue;
/*
* We ignore unnamed symbols: they're uninformative
* and inserted at a whim.
*/
if (*kallsyms_symbol_name(kallsyms, i) == '\0' ||
is_arm_mapping_symbol(kallsyms_symbol_name(kallsyms, i)))
continue;
if (thisval <= addr && thisval > bestval) {
best = i;
bestval = thisval;
}
if (thisval > addr && thisval < nextval)
nextval = thisval;
}
if (!best)
return NULL;
if (size)
*size = nextval - bestval;
if (offset)
*offset = addr - bestval;
return kallsyms_symbol_name(kallsyms, best);
}
void * __weak dereference_module_function_descriptor(struct module *mod,
void *ptr)
{
return ptr;
}
/*
* For kallsyms to ask for address resolution. NULL means not found. Careful
* not to lock to avoid deadlock on oopses, simply disable preemption.
*/
const char *module_address_lookup(unsigned long addr,
unsigned long *size,
unsigned long *offset,
char **modname,
const unsigned char **modbuildid,
char *namebuf)
{
const char *ret = NULL;
struct module *mod;
preempt_disable();
mod = __module_address(addr);
if (mod) {
if (modname)
*modname = mod->name;
if (modbuildid) {
#if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID)
*modbuildid = mod->build_id;
#else
*modbuildid = NULL;
#endif
}
ret = find_kallsyms_symbol(mod, addr, size, offset);
}
/* Make a copy in here where it's safe */
if (ret) {
strncpy(namebuf, ret, KSYM_NAME_LEN - 1);
ret = namebuf;
}
preempt_enable();
return ret;
}
int lookup_module_symbol_name(unsigned long addr, char *symname)
{
struct module *mod;
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
if (mod->state == MODULE_STATE_UNFORMED)
continue;
if (within_module(addr, mod)) {
const char *sym;
sym = find_kallsyms_symbol(mod, addr, NULL, NULL);
if (!sym)
goto out;
strscpy(symname, sym, KSYM_NAME_LEN);
preempt_enable();
return 0;
}
}
out:
preempt_enable();
return -ERANGE;
}
int lookup_module_symbol_attrs(unsigned long addr, unsigned long *size,
unsigned long *offset, char *modname, char *name)
{
struct module *mod;
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
if (mod->state == MODULE_STATE_UNFORMED)
continue;
if (within_module(addr, mod)) {
const char *sym;
sym = find_kallsyms_symbol(mod, addr, size, offset);
if (!sym)
goto out;
if (modname)
strscpy(modname, mod->name, MODULE_NAME_LEN);
if (name)
strscpy(name, sym, KSYM_NAME_LEN);
preempt_enable();
return 0;
}
}
out:
preempt_enable();
return -ERANGE;
}
int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
char *name, char *module_name, int *exported)
{
struct module *mod;
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
struct mod_kallsyms *kallsyms;
if (mod->state == MODULE_STATE_UNFORMED)
continue;
kallsyms = rcu_dereference_sched(mod->kallsyms);
if (symnum < kallsyms->num_symtab) {
const Elf_Sym *sym = &kallsyms->symtab[symnum];
*value = kallsyms_symbol_value(sym);
*type = kallsyms->typetab[symnum];
strscpy(name, kallsyms_symbol_name(kallsyms, symnum), KSYM_NAME_LEN);
strscpy(module_name, mod->name, MODULE_NAME_LEN);
*exported = is_exported(name, *value, mod);
preempt_enable();
return 0;
}
symnum -= kallsyms->num_symtab;
}
preempt_enable();
return -ERANGE;
}
/* Given a module and name of symbol, find and return the symbol's value */
unsigned long find_kallsyms_symbol_value(struct module *mod, const char *name)
{
unsigned int i;
struct mod_kallsyms *kallsyms = rcu_dereference_sched(mod->kallsyms);
for (i = 0; i < kallsyms->num_symtab; i++) {
const Elf_Sym *sym = &kallsyms->symtab[i];
if (strcmp(name, kallsyms_symbol_name(kallsyms, i)) == 0 &&
sym->st_shndx != SHN_UNDEF)
return kallsyms_symbol_value(sym);
}
return 0;
}
/* Look for this name: can be of form module:name. */
unsigned long module_kallsyms_lookup_name(const char *name)
{
struct module *mod;
char *colon;
unsigned long ret = 0;
/* Don't lock: we're in enough trouble already. */
preempt_disable();
if ((colon = strnchr(name, MODULE_NAME_LEN, ':')) != NULL) {
if ((mod = find_module_all(name, colon - name, false)) != NULL)
ret = find_kallsyms_symbol_value(mod, colon + 1);
} else {
list_for_each_entry_rcu(mod, &modules, list) {
if (mod->state == MODULE_STATE_UNFORMED)
continue;
if ((ret = find_kallsyms_symbol_value(mod, name)) != 0)
break;
}
}
preempt_enable();
return ret;
}
#ifdef CONFIG_LIVEPATCH
int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
struct module *, unsigned long),
void *data)
{
struct module *mod;
unsigned int i;
int ret = 0;
mutex_lock(&module_mutex);
list_for_each_entry(mod, &modules, list) {
struct mod_kallsyms *kallsyms;
if (mod->state == MODULE_STATE_UNFORMED)
continue;
/* Use rcu_dereference_sched() to remain compliant with the sparse tool */
preempt_disable();
kallsyms = rcu_dereference_sched(mod->kallsyms);
preempt_enable();
for (i = 0; i < kallsyms->num_symtab; i++) {
const Elf_Sym *sym = &kallsyms->symtab[i];
if (sym->st_shndx == SHN_UNDEF)
continue;
ret = fn(data, kallsyms_symbol_name(kallsyms, i),
mod, kallsyms_symbol_value(sym));
if (ret != 0)
goto out;
}
}
out:
mutex_unlock(&module_mutex);
return ret;
}
#endif /* CONFIG_LIVEPATCH */

kernel/module/kdb.c (new file)

@@ -0,0 +1,62 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Module kdb support
*
* Copyright (C) 2010 Jason Wessel
*/
#include <linux/module.h>
#include <linux/kdb.h>
#include "internal.h"
/*
* kdb_lsmod - This function implements the 'lsmod' command. Lists
* currently loaded kernel modules.
* Mostly taken from userland lsmod.
*/
int kdb_lsmod(int argc, const char **argv)
{
struct module *mod;
if (argc != 0)
return KDB_ARGCOUNT;
kdb_printf("Module Size modstruct Used by\n");
list_for_each_entry(mod, &modules, list) {
if (mod->state == MODULE_STATE_UNFORMED)
continue;
kdb_printf("%-20s%8u", mod->name, mod->core_layout.size);
#ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
kdb_printf("/%8u", mod->data_layout.size);
#endif
kdb_printf(" 0x%px ", (void *)mod);
#ifdef CONFIG_MODULE_UNLOAD
kdb_printf("%4d ", module_refcount(mod));
#endif
if (mod->state == MODULE_STATE_GOING)
kdb_printf(" (Unloading)");
else if (mod->state == MODULE_STATE_COMING)
kdb_printf(" (Loading)");
else
kdb_printf(" (Live)");
kdb_printf(" 0x%px", mod->core_layout.base);
#ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
kdb_printf("/0x%px", mod->data_layout.base);
#endif
#ifdef CONFIG_MODULE_UNLOAD
{
struct module_use *use;
kdb_printf(" [ ");
list_for_each_entry(use, &mod->source_list,
source_list)
kdb_printf("%s ", use->target->name);
kdb_printf("]\n");
}
#endif
}
return 0;
}

kernel/module/livepatch.c (new file)

@@ -0,0 +1,74 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Module livepatch support
*
* Copyright (C) 2016 Jessica Yu <jeyu@redhat.com>
*/
#include <linux/module.h>
#include <linux/string.h>
#include <linux/slab.h>
#include "internal.h"
/*
* Persist Elf information about a module. Copy the Elf header,
* section header table, section string table, and symtab section
* index from info to mod->klp_info.
*/
int copy_module_elf(struct module *mod, struct load_info *info)
{
unsigned int size, symndx;
int ret;
size = sizeof(*mod->klp_info);
mod->klp_info = kmalloc(size, GFP_KERNEL);
if (!mod->klp_info)
return -ENOMEM;
/* Elf header */
size = sizeof(mod->klp_info->hdr);
memcpy(&mod->klp_info->hdr, info->hdr, size);
/* Elf section header table */
size = sizeof(*info->sechdrs) * info->hdr->e_shnum;
mod->klp_info->sechdrs = kmemdup(info->sechdrs, size, GFP_KERNEL);
if (!mod->klp_info->sechdrs) {
ret = -ENOMEM;
goto free_info;
}
/* Elf section name string table */
size = info->sechdrs[info->hdr->e_shstrndx].sh_size;
mod->klp_info->secstrings = kmemdup(info->secstrings, size, GFP_KERNEL);
if (!mod->klp_info->secstrings) {
ret = -ENOMEM;
goto free_sechdrs;
}
/* Elf symbol section index */
symndx = info->index.sym;
mod->klp_info->symndx = symndx;
/*
* For livepatch modules, core_kallsyms.symtab is a complete
* copy of the original symbol table. Adjust sh_addr to point
* to core_kallsyms.symtab since the copy of the symtab in module
* init memory is freed at the end of do_init_module().
*/
mod->klp_info->sechdrs[symndx].sh_addr = (unsigned long)mod->core_kallsyms.symtab;
return 0;
free_sechdrs:
kfree(mod->klp_info->sechdrs);
free_info:
kfree(mod->klp_info);
return ret;
}
void free_module_elf(struct module *mod)
{
kfree(mod->klp_info->sechdrs);
kfree(mod->klp_info->secstrings);
kfree(mod->klp_info);
}

File diff suppressed because it is too large

kernel/module/procfs.c (new file)

@@ -0,0 +1,146 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Module proc support
*
* Copyright (C) 2008 Alexey Dobriyan
*/
#include <linux/module.h>
#include <linux/kallsyms.h>
#include <linux/mutex.h>
#include <linux/seq_file.h>
#include <linux/proc_fs.h>
#include "internal.h"
#ifdef CONFIG_MODULE_UNLOAD
static inline void print_unload_info(struct seq_file *m, struct module *mod)
{
struct module_use *use;
int printed_something = 0;
seq_printf(m, " %i ", module_refcount(mod));
/*
* Always include a trailing , so userspace can differentiate
* between this and the old multi-field proc format.
*/
list_for_each_entry(use, &mod->source_list, source_list) {
printed_something = 1;
seq_printf(m, "%s,", use->source->name);
}
if (mod->init && !mod->exit) {
printed_something = 1;
seq_puts(m, "[permanent],");
}
if (!printed_something)
seq_puts(m, "-");
}
#else /* !CONFIG_MODULE_UNLOAD */
static inline void print_unload_info(struct seq_file *m, struct module *mod)
{
/* We don't know the usage count, or what modules are using. */
seq_puts(m, " - -");
}
#endif /* CONFIG_MODULE_UNLOAD */
/* Called by the /proc file system to return a list of modules. */
static void *m_start(struct seq_file *m, loff_t *pos)
{
mutex_lock(&module_mutex);
return seq_list_start(&modules, *pos);
}
static void *m_next(struct seq_file *m, void *p, loff_t *pos)
{
return seq_list_next(p, &modules, pos);
}
static void m_stop(struct seq_file *m, void *p)
{
mutex_unlock(&module_mutex);
}
static int m_show(struct seq_file *m, void *p)
{
struct module *mod = list_entry(p, struct module, list);
char buf[MODULE_FLAGS_BUF_SIZE];
void *value;
unsigned int size;
/* We always ignore unformed modules. */
if (mod->state == MODULE_STATE_UNFORMED)
return 0;
size = mod->init_layout.size + mod->core_layout.size;
#ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
size += mod->data_layout.size;
#endif
seq_printf(m, "%s %u", mod->name, size);
print_unload_info(m, mod);
/* Informative for users. */
seq_printf(m, " %s",
mod->state == MODULE_STATE_GOING ? "Unloading" :
mod->state == MODULE_STATE_COMING ? "Loading" :
"Live");
/* Used by oprofile and other similar tools. */
value = m->private ? NULL : mod->core_layout.base;
seq_printf(m, " 0x%px", value);
/* Taints info */
if (mod->taints)
seq_printf(m, " %s", module_flags(mod, buf));
seq_puts(m, "\n");
return 0;
}
/*
* Format: modulename size refcount deps address
*
* Where refcount is a number or -, and deps is a comma-separated list
* of depends or -.
*/
static const struct seq_operations modules_op = {
.start = m_start,
.next = m_next,
.stop = m_stop,
.show = m_show
};
/*
* This also sets the "private" pointer to non-NULL if the
* kernel pointers should be hidden (so you can just test
* "m->private" to see if you should keep the values private).
*
* We use the same logic as for /proc/kallsyms.
*/
static int modules_open(struct inode *inode, struct file *file)
{
int err = seq_open(file, &modules_op);
if (!err) {
struct seq_file *m = file->private_data;
m->private = kallsyms_show_value(file->f_cred) ? NULL : (void *)8ul;
}
return err;
}
static const struct proc_ops modules_proc_ops = {
.proc_flags = PROC_ENTRY_PERMANENT,
.proc_open = modules_open,
.proc_read = seq_read,
.proc_lseek = seq_lseek,
.proc_release = seq_release,
};
static int __init proc_modules_init(void)
{
proc_create("modules", 0, NULL, &modules_proc_ops);
return 0;
}
module_init(proc_modules_init);

kernel/module/signing.c (new file)

@@ -0,0 +1,122 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Module signature checker
*
* Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/module_signature.h>
#include <linux/string.h>
#include <linux/verification.h>
#include <linux/security.h>
#include <crypto/public_key.h>
#include <uapi/linux/module.h>
#include "internal.h"
static bool sig_enforce = IS_ENABLED(CONFIG_MODULE_SIG_FORCE);
module_param(sig_enforce, bool_enable_only, 0644);
/*
* Export sig_enforce kernel cmdline parameter to allow other subsystems rely
* on that instead of directly to CONFIG_MODULE_SIG_FORCE config.
*/
bool is_module_sig_enforced(void)
{
return sig_enforce;
}
EXPORT_SYMBOL(is_module_sig_enforced);
void set_module_sig_enforced(void)
{
sig_enforce = true;
}
/*
* Verify the signature on a module.
*/
int mod_verify_sig(const void *mod, struct load_info *info)
{
struct module_signature ms;
size_t sig_len, modlen = info->len;
int ret;
pr_devel("==>%s(,%zu)\n", __func__, modlen);
if (modlen <= sizeof(ms))
return -EBADMSG;
memcpy(&ms, mod + (modlen - sizeof(ms)), sizeof(ms));
ret = mod_check_sig(&ms, modlen, "module");
if (ret)
return ret;
sig_len = be32_to_cpu(ms.sig_len);
modlen -= sig_len + sizeof(ms);
info->len = modlen;
return verify_pkcs7_signature(mod, modlen, mod + modlen, sig_len,
VERIFY_USE_SECONDARY_KEYRING,
VERIFYING_MODULE_SIGNATURE,
NULL, NULL);
}
int module_sig_check(struct load_info *info, int flags)
{
int err = -ENODATA;
const unsigned long markerlen = sizeof(MODULE_SIG_STRING) - 1;
const char *reason;
const void *mod = info->hdr;
bool mangled_module = flags & (MODULE_INIT_IGNORE_MODVERSIONS |
MODULE_INIT_IGNORE_VERMAGIC);
/*
* Do not allow mangled modules as a module with version information
* removed is no longer the module that was signed.
*/
if (!mangled_module &&
info->len > markerlen &&
memcmp(mod + info->len - markerlen, MODULE_SIG_STRING, markerlen) == 0) {
/* We truncate the module to discard the signature */
info->len -= markerlen;
err = mod_verify_sig(mod, info);
if (!err) {
info->sig_ok = true;
return 0;
}
}
/*
* We don't permit modules to be loaded into the trusted kernels
* without a valid signature on them, but if we're not enforcing,
* certain errors are non-fatal.
*/
switch (err) {
case -ENODATA:
reason = "unsigned module";
break;
case -ENOPKG:
reason = "module with unsupported crypto";
break;
case -ENOKEY:
reason = "module with unavailable key";
break;
default:
/*
* All other errors are fatal, including lack of memory,
* unparseable signatures, and signature check failures --
* even if signatures aren't required.
*/
return err;
}
if (is_module_sig_enforced()) {
pr_notice("Loading of %s is rejected\n", reason);
return -EKEYREJECTED;
}
return security_locked_down(LOCKDOWN_MODULE_SIGNATURE);
}

kernel/module/strict_rwx.c (new file)

@@ -0,0 +1,143 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Module strict rwx
*
* Copyright (C) 2015 Rusty Russell
*/
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/set_memory.h>
#include "internal.h"
/*
* LKM RO/NX protection: protect module's text/ro-data
* from modification and any data from execution.
*
* General layout of module is:
* [text] [read-only-data] [ro-after-init] [writable data]
* text_size -----^ ^ ^ ^
* ro_size ------------------------| | |
* ro_after_init_size -----------------------------| |
* size -----------------------------------------------------------|
*
* These values are always page-aligned (as is base) when
* CONFIG_STRICT_MODULE_RWX is set.
*/
/*
* Since some arches are moving towards PAGE_KERNEL module allocations instead
* of PAGE_KERNEL_EXEC, keep frob_text() and module_enable_x() independent of
* CONFIG_STRICT_MODULE_RWX because they are needed regardless of whether we
* are strict.
*/
static void frob_text(const struct module_layout *layout,
int (*set_memory)(unsigned long start, int num_pages))
{
set_memory((unsigned long)layout->base,
PAGE_ALIGN(layout->text_size) >> PAGE_SHIFT);
}
static void frob_rodata(const struct module_layout *layout,
int (*set_memory)(unsigned long start, int num_pages))
{
set_memory((unsigned long)layout->base + layout->text_size,
(layout->ro_size - layout->text_size) >> PAGE_SHIFT);
}
static void frob_ro_after_init(const struct module_layout *layout,
int (*set_memory)(unsigned long start, int num_pages))
{
set_memory((unsigned long)layout->base + layout->ro_size,
(layout->ro_after_init_size - layout->ro_size) >> PAGE_SHIFT);
}
static void frob_writable_data(const struct module_layout *layout,
int (*set_memory)(unsigned long start, int num_pages))
{
set_memory((unsigned long)layout->base + layout->ro_after_init_size,
(layout->size - layout->ro_after_init_size) >> PAGE_SHIFT);
}
static bool layout_check_misalignment(const struct module_layout *layout)
{
return WARN_ON(!PAGE_ALIGNED(layout->base)) ||
WARN_ON(!PAGE_ALIGNED(layout->text_size)) ||
WARN_ON(!PAGE_ALIGNED(layout->ro_size)) ||
WARN_ON(!PAGE_ALIGNED(layout->ro_after_init_size)) ||
WARN_ON(!PAGE_ALIGNED(layout->size));
}
bool module_check_misalignment(const struct module *mod)
{
if (!IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
return false;
return layout_check_misalignment(&mod->core_layout) ||
layout_check_misalignment(&mod->data_layout) ||
layout_check_misalignment(&mod->init_layout);
}
void module_enable_x(const struct module *mod)
{
if (!PAGE_ALIGNED(mod->core_layout.base) ||
!PAGE_ALIGNED(mod->init_layout.base))
return;
frob_text(&mod->core_layout, set_memory_x);
frob_text(&mod->init_layout, set_memory_x);
}
void module_enable_ro(const struct module *mod, bool after_init)
{
if (!IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
return;
#ifdef CONFIG_STRICT_MODULE_RWX
if (!rodata_enabled)
return;
#endif
set_vm_flush_reset_perms(mod->core_layout.base);
set_vm_flush_reset_perms(mod->init_layout.base);
frob_text(&mod->core_layout, set_memory_ro);
frob_rodata(&mod->data_layout, set_memory_ro);
frob_text(&mod->init_layout, set_memory_ro);
frob_rodata(&mod->init_layout, set_memory_ro);
if (after_init)
frob_ro_after_init(&mod->data_layout, set_memory_ro);
}
void module_enable_nx(const struct module *mod)
{
if (!IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
return;
frob_rodata(&mod->data_layout, set_memory_nx);
frob_ro_after_init(&mod->data_layout, set_memory_nx);
frob_writable_data(&mod->data_layout, set_memory_nx);
frob_rodata(&mod->init_layout, set_memory_nx);
frob_writable_data(&mod->init_layout, set_memory_nx);
}
int module_enforce_rwx_sections(Elf_Ehdr *hdr, Elf_Shdr *sechdrs,
char *secstrings, struct module *mod)
{
const unsigned long shf_wx = SHF_WRITE | SHF_EXECINSTR;
int i;
if (!IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
return 0;
for (i = 0; i < hdr->e_shnum; i++) {
if ((sechdrs[i].sh_flags & shf_wx) == shf_wx) {
pr_err("%s: section %s (index %d) has invalid WRITE|EXEC flags\n",
mod->name, secstrings + sechdrs[i].sh_name, i);
return -ENOEXEC;
}
}
return 0;
}

kernel/module/sysfs.c (new file)

@@ -0,0 +1,436 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Module sysfs support
 *
 * Copyright (C) 2008 Rusty Russell
 */

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/sysfs.h>
#include <linux/slab.h>
#include <linux/kallsyms.h>
#include <linux/mutex.h>
#include "internal.h"

/*
 * /sys/module/foo/sections stuff
 * J. Corbet <corbet@lwn.net>
 */
#ifdef CONFIG_KALLSYMS
struct module_sect_attr {
        struct bin_attribute battr;
        unsigned long address;
};

struct module_sect_attrs {
        struct attribute_group grp;
        unsigned int nsections;
        struct module_sect_attr attrs[];
};

#define MODULE_SECT_READ_SIZE (3 /* "0x", "\n" */ + (BITS_PER_LONG / 4))
static ssize_t module_sect_read(struct file *file, struct kobject *kobj,
                                struct bin_attribute *battr,
                                char *buf, loff_t pos, size_t count)
{
        struct module_sect_attr *sattr =
                container_of(battr, struct module_sect_attr, battr);
        char bounce[MODULE_SECT_READ_SIZE + 1];
        size_t wrote;

        if (pos != 0)
                return -EINVAL;

        /*
         * Since we're a binary read handler, we must account for the
         * trailing NUL byte that sprintf will write: if "buf" is
         * too small to hold the NUL, or the NUL is exactly the last
         * byte, the read will look like it got truncated by one byte.
         * Since there is no way to ask sprintf nicely to not write
         * the NUL, we have to use a bounce buffer.
         */
        wrote = scnprintf(bounce, sizeof(bounce), "0x%px\n",
                          kallsyms_show_value(file->f_cred)
                          ? (void *)sattr->address : NULL);
        count = min(count, wrote);
        memcpy(buf, bounce, count);

        return count;
}

static void free_sect_attrs(struct module_sect_attrs *sect_attrs)
{
        unsigned int section;

        for (section = 0; section < sect_attrs->nsections; section++)
                kfree(sect_attrs->attrs[section].battr.attr.name);
        kfree(sect_attrs);
}

static void add_sect_attrs(struct module *mod, const struct load_info *info)
{
        unsigned int nloaded = 0, i, size[2];
        struct module_sect_attrs *sect_attrs;
        struct module_sect_attr *sattr;
        struct bin_attribute **gattr;

        /* Count loaded sections and allocate structures */
        for (i = 0; i < info->hdr->e_shnum; i++)
                if (!sect_empty(&info->sechdrs[i]))
                        nloaded++;
        size[0] = ALIGN(struct_size(sect_attrs, attrs, nloaded),
                        sizeof(sect_attrs->grp.bin_attrs[0]));
        size[1] = (nloaded + 1) * sizeof(sect_attrs->grp.bin_attrs[0]);
        sect_attrs = kzalloc(size[0] + size[1], GFP_KERNEL);
        if (!sect_attrs)
                return;

        /* Setup section attributes. */
        sect_attrs->grp.name = "sections";
        sect_attrs->grp.bin_attrs = (void *)sect_attrs + size[0];
        sect_attrs->nsections = 0;
        sattr = &sect_attrs->attrs[0];
        gattr = &sect_attrs->grp.bin_attrs[0];
        for (i = 0; i < info->hdr->e_shnum; i++) {
                Elf_Shdr *sec = &info->sechdrs[i];

                if (sect_empty(sec))
                        continue;
                sysfs_bin_attr_init(&sattr->battr);
                sattr->address = sec->sh_addr;
                sattr->battr.attr.name =
                        kstrdup(info->secstrings + sec->sh_name, GFP_KERNEL);
                if (!sattr->battr.attr.name)
                        goto out;
                sect_attrs->nsections++;
                sattr->battr.read = module_sect_read;
                sattr->battr.size = MODULE_SECT_READ_SIZE;
                sattr->battr.attr.mode = 0400;
                *(gattr++) = &(sattr++)->battr;
        }
        *gattr = NULL;

        if (sysfs_create_group(&mod->mkobj.kobj, &sect_attrs->grp))
                goto out;

        mod->sect_attrs = sect_attrs;
        return;
out:
        free_sect_attrs(sect_attrs);
}

static void remove_sect_attrs(struct module *mod)
{
        if (mod->sect_attrs) {
                sysfs_remove_group(&mod->mkobj.kobj,
                                   &mod->sect_attrs->grp);
                /*
                 * We are positive that no one is using any sect attrs
                 * at this point. Deallocate immediately.
                 */
                free_sect_attrs(mod->sect_attrs);
                mod->sect_attrs = NULL;
        }
}
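
The net effect is one file per loaded section under
/sys/module/<name>/sections/. A userspace sketch of a consumer (not part of
the patch set; it assumes the loop module is loaded and must run as root,
since the attributes above are created mode 0400):

    #include <stdio.h>

    int main(void)
    {
            /* "loop" is just an example of a commonly loaded module */
            FILE *f = fopen("/sys/module/loop/sections/.text", "r");
            char addr[32];

            if (!f || !fgets(addr, sizeof(addr), f)) {
                    perror("read");
                    return 1;
            }
            printf(".text @ %s", addr);  /* e.g. "0xffffffffc03f5000" */
            fclose(f);
            return 0;
    }

An unprivileged reader still gets a file, but kallsyms_show_value() makes
module_sect_read() print NULL instead of the real address.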
/*
 * /sys/module/foo/notes/.section.name gives contents of SHT_NOTE sections.
 */

struct module_notes_attrs {
        struct kobject *dir;
        unsigned int notes;
        struct bin_attribute attrs[];
};

static ssize_t module_notes_read(struct file *filp, struct kobject *kobj,
                                 struct bin_attribute *bin_attr,
                                 char *buf, loff_t pos, size_t count)
{
        /*
         * The caller checked the pos and count against our size.
         */
        memcpy(buf, bin_attr->private + pos, count);
        return count;
}

static void free_notes_attrs(struct module_notes_attrs *notes_attrs,
                             unsigned int i)
{
        if (notes_attrs->dir) {
                while (i-- > 0)
                        sysfs_remove_bin_file(notes_attrs->dir,
                                              &notes_attrs->attrs[i]);
                kobject_put(notes_attrs->dir);
        }
        kfree(notes_attrs);
}

static void add_notes_attrs(struct module *mod, const struct load_info *info)
{
        unsigned int notes, loaded, i;
        struct module_notes_attrs *notes_attrs;
        struct bin_attribute *nattr;

        /* failed to create section attributes, so can't create notes */
        if (!mod->sect_attrs)
                return;

        /* Count notes sections and allocate structures.  */
        notes = 0;
        for (i = 0; i < info->hdr->e_shnum; i++)
                if (!sect_empty(&info->sechdrs[i]) &&
                    info->sechdrs[i].sh_type == SHT_NOTE)
                        ++notes;

        if (notes == 0)
                return;

        notes_attrs = kzalloc(struct_size(notes_attrs, attrs, notes),
                              GFP_KERNEL);
        if (!notes_attrs)
                return;

        notes_attrs->notes = notes;
        nattr = &notes_attrs->attrs[0];
        for (loaded = i = 0; i < info->hdr->e_shnum; ++i) {
                if (sect_empty(&info->sechdrs[i]))
                        continue;
                if (info->sechdrs[i].sh_type == SHT_NOTE) {
                        sysfs_bin_attr_init(nattr);
                        nattr->attr.name = mod->sect_attrs->attrs[loaded].battr.attr.name;
                        nattr->attr.mode = 0444;
                        nattr->size = info->sechdrs[i].sh_size;
                        nattr->private = (void *)info->sechdrs[i].sh_addr;
                        nattr->read = module_notes_read;
                        ++nattr;
                }
                ++loaded;
        }

        notes_attrs->dir = kobject_create_and_add("notes", &mod->mkobj.kobj);
        if (!notes_attrs->dir)
                goto out;

        for (i = 0; i < notes; ++i)
                if (sysfs_create_bin_file(notes_attrs->dir,
                                          &notes_attrs->attrs[i]))
                        goto out;

        mod->notes_attrs = notes_attrs;
        return;

out:
        free_notes_attrs(notes_attrs, i);
}

static void remove_notes_attrs(struct module *mod)
{
        if (mod->notes_attrs)
                free_notes_attrs(mod->notes_attrs, mod->notes_attrs->notes);
}

#else /* !CONFIG_KALLSYMS */
static inline void add_sect_attrs(struct module *mod, const struct load_info *info) { }
static inline void remove_sect_attrs(struct module *mod) { }
static inline void add_notes_attrs(struct module *mod, const struct load_info *info) { }
static inline void remove_notes_attrs(struct module *mod) { }
#endif /* CONFIG_KALLSYMS */
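
These files expose the raw bytes of each SHT_NOTE section, so consumers
(debuggers, systemtap, build-id tooling) parse them with the standard ELF
note record format; for reference, the 64-bit variant from <elf.h>:

    typedef struct {
            Elf64_Word n_namesz;    /* length of the name field, incl. NUL */
            Elf64_Word n_descsz;    /* length of the descriptor field */
            Elf64_Word n_type;      /* note type, e.g. NT_GNU_BUILD_ID */
    } Elf64_Nhdr;                   /* name and desc follow, 4-byte padded */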
static void del_usage_links(struct module *mod)
{
#ifdef CONFIG_MODULE_UNLOAD
        struct module_use *use;

        mutex_lock(&module_mutex);
        list_for_each_entry(use, &mod->target_list, target_list)
                sysfs_remove_link(use->target->holders_dir, mod->name);
        mutex_unlock(&module_mutex);
#endif
}

static int add_usage_links(struct module *mod)
{
        int ret = 0;
#ifdef CONFIG_MODULE_UNLOAD
        struct module_use *use;

        mutex_lock(&module_mutex);
        list_for_each_entry(use, &mod->target_list, target_list) {
                ret = sysfs_create_link(use->target->holders_dir,
                                        &mod->mkobj.kobj, mod->name);
                if (ret)
                        break;
        }
        mutex_unlock(&module_mutex);
        if (ret)
                del_usage_links(mod);
#endif
        return ret;
}

static void module_remove_modinfo_attrs(struct module *mod, int end)
{
        struct module_attribute *attr;
        int i;

        for (i = 0; (attr = &mod->modinfo_attrs[i]); i++) {
                if (end >= 0 && i > end)
                        break;
                /* pick a field to test for end of list */
                if (!attr->attr.name)
                        break;
                sysfs_remove_file(&mod->mkobj.kobj, &attr->attr);
                if (attr->free)
                        attr->free(mod);
        }
        kfree(mod->modinfo_attrs);
}

static int module_add_modinfo_attrs(struct module *mod)
{
        struct module_attribute *attr;
        struct module_attribute *temp_attr;
        int error = 0;
        int i;

        mod->modinfo_attrs = kzalloc((sizeof(struct module_attribute) *
                                      (modinfo_attrs_count + 1)),
                                     GFP_KERNEL);
        if (!mod->modinfo_attrs)
                return -ENOMEM;

        temp_attr = mod->modinfo_attrs;
        for (i = 0; (attr = modinfo_attrs[i]); i++) {
                if (!attr->test || attr->test(mod)) {
                        memcpy(temp_attr, attr, sizeof(*temp_attr));
                        sysfs_attr_init(&temp_attr->attr);
                        error = sysfs_create_file(&mod->mkobj.kobj,
                                                  &temp_attr->attr);
                        if (error)
                                goto error_out;
                        ++temp_attr;
                }
        }

        return 0;

error_out:
        if (i > 0)
                module_remove_modinfo_attrs(mod, --i);
        else
                kfree(mod->modinfo_attrs);
        return error;
}
static void mod_kobject_put(struct module *mod)
{
        DECLARE_COMPLETION_ONSTACK(c);

        mod->mkobj.kobj_completion = &c;
        kobject_put(&mod->mkobj.kobj);
        wait_for_completion(&c);
}

static int mod_sysfs_init(struct module *mod)
{
        int err;
        struct kobject *kobj;

        if (!module_sysfs_initialized) {
                pr_err("%s: module sysfs not initialized\n", mod->name);
                err = -EINVAL;
                goto out;
        }

        kobj = kset_find_obj(module_kset, mod->name);
        if (kobj) {
                pr_err("%s: module is already loaded\n", mod->name);
                kobject_put(kobj);
                err = -EINVAL;
                goto out;
        }

        mod->mkobj.mod = mod;

        memset(&mod->mkobj.kobj, 0, sizeof(mod->mkobj.kobj));
        mod->mkobj.kobj.kset = module_kset;
        err = kobject_init_and_add(&mod->mkobj.kobj, &module_ktype, NULL,
                                   "%s", mod->name);
        if (err)
                mod_kobject_put(mod);

out:
        return err;
}

int mod_sysfs_setup(struct module *mod,
                    const struct load_info *info,
                    struct kernel_param *kparam,
                    unsigned int num_params)
{
        int err;

        err = mod_sysfs_init(mod);
        if (err)
                goto out;

        mod->holders_dir = kobject_create_and_add("holders", &mod->mkobj.kobj);
        if (!mod->holders_dir) {
                err = -ENOMEM;
                goto out_unreg;
        }

        err = module_param_sysfs_setup(mod, kparam, num_params);
        if (err)
                goto out_unreg_holders;

        err = module_add_modinfo_attrs(mod);
        if (err)
                goto out_unreg_param;

        err = add_usage_links(mod);
        if (err)
                goto out_unreg_modinfo_attrs;

        add_sect_attrs(mod, info);
        add_notes_attrs(mod, info);

        return 0;

out_unreg_modinfo_attrs:
        module_remove_modinfo_attrs(mod, -1);
out_unreg_param:
        module_param_sysfs_remove(mod);
out_unreg_holders:
        kobject_put(mod->holders_dir);
out_unreg:
        mod_kobject_put(mod);
out:
        return err;
}

static void mod_sysfs_fini(struct module *mod)
{
        remove_notes_attrs(mod);
        remove_sect_attrs(mod);
        mod_kobject_put(mod);
}

void mod_sysfs_teardown(struct module *mod)
{
        del_usage_links(mod);
        module_remove_modinfo_attrs(mod, -1);
        module_param_sysfs_remove(mod);
        kobject_put(mod->mkobj.drivers_dir);
        kobject_put(mod->holders_dir);
        mod_sysfs_fini(mod);
}

void init_param_lock(struct module *mod)
{
        mutex_init(&mod->param_lock);
}
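
Putting the pieces of this file together, a loaded module ends up with a
sysfs subtree roughly like the following ("loop" again as the example; the
exact entries vary with config and module):

    /sys/module/loop/
    |-- holders/             symlinks from modules that use "loop"
    |-- notes/               raw SHT_NOTE sections (mode 0444)
    |-- sections/            one file per loaded section (mode 0400)
    |-- parameters/          from module_param_sysfs_setup()
    `-- refcnt, taint, ...   modinfo attributes, e.g. from module_add_modinfo_attrs()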

kernel/module/tracking.c (new file, 61 lines):

// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Module taint unload tracking support
 *
 * Copyright (C) 2022 Aaron Tomlin
 */

#include <linux/module.h>
#include <linux/string.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/rculist.h>
#include "internal.h"

static LIST_HEAD(unloaded_tainted_modules);

int try_add_tainted_module(struct module *mod)
{
        struct mod_unload_taint *mod_taint;

        module_assert_mutex_or_preempt();

        list_for_each_entry_rcu(mod_taint, &unloaded_tainted_modules, list,
                                lockdep_is_held(&module_mutex)) {
                if (!strcmp(mod_taint->name, mod->name) &&
                    mod_taint->taints & mod->taints) {
                        mod_taint->count++;
                        goto out;
                }
        }

        mod_taint = kmalloc(sizeof(*mod_taint), GFP_KERNEL);
        if (unlikely(!mod_taint))
                return -ENOMEM;
        strscpy(mod_taint->name, mod->name, MODULE_NAME_LEN);
        mod_taint->taints = mod->taints;
        list_add_rcu(&mod_taint->list, &unloaded_tainted_modules);
        mod_taint->count = 1;
out:
        return 0;
}

void print_unloaded_tainted_modules(void)
{
        struct mod_unload_taint *mod_taint;
        char buf[MODULE_FLAGS_BUF_SIZE];

        if (!list_empty(&unloaded_tainted_modules)) {
                printk(KERN_DEFAULT "Unloaded tainted modules:");
                list_for_each_entry_rcu(mod_taint, &unloaded_tainted_modules,
                                        list) {
                        size_t l;

                        l = module_flags_taint(mod_taint->taints, buf);
                        buf[l++] = '\0';
                        pr_cont(" %s(%s):%llu", mod_taint->name, buf,
                                mod_taint->count);
                }
        }
}
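
The bookkeeping record lives in kernel/module/internal.h; judging from the
call sites above it is presumably along these lines (a sketch, not a verbatim
quote from the patch):

    struct mod_unload_taint {
            struct list_head list;
            char name[MODULE_NAME_LEN];
            unsigned long taints;   /* taint flags at unload time */
            u64 count;              /* matching unloads seen so far */
    };

In an oops this shows up right after the regular module list, e.g.
"Unloaded tainted modules: dummy_mod(OE):2" (module name illustrative).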

kernel/module/tree_lookup.c (new file, 117 lines):

// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Modules tree lookup
 *
 * Copyright (C) 2015 Peter Zijlstra
 * Copyright (C) 2015 Rusty Russell
 */

#include <linux/module.h>
#include <linux/rbtree_latch.h>
#include "internal.h"

/*
 * Use a latched RB-tree for __module_address(); this allows us to use
 * RCU-sched lookups of the address from any context.
 *
 * This is conditional on PERF_EVENTS || TRACING because those can really hit
 * __module_address() hard by doing a lot of stack unwinding; potentially from
 * NMI context.
 */

static __always_inline unsigned long __mod_tree_val(struct latch_tree_node *n)
{
        struct module_layout *layout = container_of(n, struct module_layout, mtn.node);

        return (unsigned long)layout->base;
}

static __always_inline unsigned long __mod_tree_size(struct latch_tree_node *n)
{
        struct module_layout *layout = container_of(n, struct module_layout, mtn.node);

        return (unsigned long)layout->size;
}

static __always_inline bool
mod_tree_less(struct latch_tree_node *a, struct latch_tree_node *b)
{
        return __mod_tree_val(a) < __mod_tree_val(b);
}

static __always_inline int
mod_tree_comp(void *key, struct latch_tree_node *n)
{
        unsigned long val = (unsigned long)key;
        unsigned long start, end;

        start = __mod_tree_val(n);
        if (val < start)
                return -1;

        end = start + __mod_tree_size(n);
        if (val >= end)
                return 1;

        return 0;
}

static const struct latch_tree_ops mod_tree_ops = {
        .less = mod_tree_less,
        .comp = mod_tree_comp,
};

static noinline void __mod_tree_insert(struct mod_tree_node *node, struct mod_tree_root *tree)
{
        latch_tree_insert(&node->node, &tree->root, &mod_tree_ops);
}

static void __mod_tree_remove(struct mod_tree_node *node, struct mod_tree_root *tree)
{
        latch_tree_erase(&node->node, &tree->root, &mod_tree_ops);
}

/*
 * These modifications: insert, remove_init and remove; are serialized by the
 * module_mutex.
 */
void mod_tree_insert(struct module *mod)
{
        mod->core_layout.mtn.mod = mod;
        mod->init_layout.mtn.mod = mod;

        __mod_tree_insert(&mod->core_layout.mtn, &mod_tree);
        if (mod->init_layout.size)
                __mod_tree_insert(&mod->init_layout.mtn, &mod_tree);

#ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
        mod->data_layout.mtn.mod = mod;
        __mod_tree_insert(&mod->data_layout.mtn, &mod_data_tree);
#endif
}

void mod_tree_remove_init(struct module *mod)
{
        if (mod->init_layout.size)
                __mod_tree_remove(&mod->init_layout.mtn, &mod_tree);
}

void mod_tree_remove(struct module *mod)
{
        __mod_tree_remove(&mod->core_layout.mtn, &mod_tree);
        mod_tree_remove_init(mod);

#ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
        __mod_tree_remove(&mod->data_layout.mtn, &mod_data_tree);
#endif
}

struct module *mod_find(unsigned long addr, struct mod_tree_root *tree)
{
        struct latch_tree_node *ltn;

        ltn = latch_tree_find((void *)addr, &tree->root, &mod_tree_ops);
        if (!ltn)
                return NULL;

        return container_of(ltn, struct mod_tree_node, node)->mod;
}
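
mod_find() is what makes __module_address() cheap: lookups take no locks,
only RCU-sched protection. A minimal sketch of a caller, under the same rules
the real __module_address() in kernel/module/main.c follows (that function
additionally range-checks addr and, with
CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC, consults mod_data_tree as well):

    /*
     * Sketch: resolve a text address to its module. As with the real
     * __module_address(), the caller must hold module_mutex or have
     * preemption disabled so the result cannot be freed under us.
     */
    static struct module *module_for_addr(unsigned long addr)
    {
            module_assert_mutex_or_preempt();
            return mod_find(addr, &mod_tree);   /* NULL if no match */
    }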

kernel/module/version.c (new file, 101 lines):

// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Module version support
 *
 * Copyright (C) 2008 Rusty Russell
 */

#include <linux/module.h>
#include <linux/string.h>
#include <linux/printk.h>
#include "internal.h"

int check_version(const struct load_info *info,
                  const char *symname,
                  struct module *mod,
                  const s32 *crc)
{
        Elf_Shdr *sechdrs = info->sechdrs;
        unsigned int versindex = info->index.vers;
        unsigned int i, num_versions;
        struct modversion_info *versions;

        /* Exporting module didn't supply crcs? OK, we're already tainted. */
        if (!crc)
                return 1;

        /* No versions at all? modprobe --force does this. */
        if (versindex == 0)
                return try_to_force_load(mod, symname) == 0;

        versions = (void *)sechdrs[versindex].sh_addr;
        num_versions = sechdrs[versindex].sh_size
                / sizeof(struct modversion_info);

        for (i = 0; i < num_versions; i++) {
                u32 crcval;

                if (strcmp(versions[i].name, symname) != 0)
                        continue;

                crcval = *crc;
                if (versions[i].crc == crcval)
                        return 1;
                pr_debug("Found checksum %X vs module %lX\n",
                         crcval, versions[i].crc);
                goto bad_version;
        }

        /* Broken toolchain. Warn once, then let it go... */
        pr_warn_once("%s: no symbol version for %s\n", info->name, symname);
        return 1;

bad_version:
        pr_warn("%s: disagrees about version of symbol %s\n", info->name, symname);
        return 0;
}

int check_modstruct_version(const struct load_info *info,
                            struct module *mod)
{
        struct find_symbol_arg fsa = {
                .name   = "module_layout",
                .gplok  = true,
        };

        /*
         * Since this should be found in kernel (which can't be removed), no
         * locking is necessary -- use preempt_disable() to placate lockdep.
         */
        preempt_disable();
        if (!find_symbol(&fsa)) {
                preempt_enable();
                BUG();
        }
        preempt_enable();
        return check_version(info, "module_layout", mod, fsa.crc);
}

/* First part is kernel version, which we ignore if module has crcs. */
int same_magic(const char *amagic, const char *bmagic,
               bool has_crcs)
{
        if (has_crcs) {
                amagic += strcspn(amagic, " ");
                bmagic += strcspn(bmagic, " ");
        }
        return strcmp(amagic, bmagic) == 0;
}

/*
 * Generate the signature for all relevant module structures here.
 * If these change, we don't want to try to parse the module.
 */
void module_layout(struct module *mod,
                   struct modversion_info *ver,
                   struct kernel_param *kp,
                   struct kernel_symbol *ks,
                   struct tracepoint * const *tp)
{
}
EXPORT_SYMBOL(module_layout);
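
For reference, the __versions section that check_version() walks is a plain
array of this structure (from include/linux/module.h):

    struct modversion_info {
            unsigned long crc;              /* genksyms CRC of the symbol's ABI */
            char name[MODULE_NAME_LEN];     /* symbol name, NUL padded */
    };

same_magic() compares vermagic strings such as
"5.19.0 SMP preempt mod_unload modversions" (illustrative); when CRCs are
present, the kernel-release prefix up to the first space is ignored on both
sides, since the CRCs already pin down the ABI.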

Deleted file (45 lines; the module signature checker moves under kernel/module/ in this series):

// SPDX-License-Identifier: GPL-2.0-or-later
/* Module signature checker
 *
 * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
 * Written by David Howells (dhowells@redhat.com)
 */

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/module_signature.h>
#include <linux/string.h>
#include <linux/verification.h>
#include <crypto/public_key.h>
#include "module-internal.h"

/*
 * Verify the signature on a module.
 */
int mod_verify_sig(const void *mod, struct load_info *info)
{
        struct module_signature ms;
        size_t sig_len, modlen = info->len;
        int ret;

        pr_devel("==>%s(,%zu)\n", __func__, modlen);

        if (modlen <= sizeof(ms))
                return -EBADMSG;

        memcpy(&ms, mod + (modlen - sizeof(ms)), sizeof(ms));

        ret = mod_check_sig(&ms, modlen, "module");
        if (ret)
                return ret;

        sig_len = be32_to_cpu(ms.sig_len);
        modlen -= sig_len + sizeof(ms);
        info->len = modlen;

        return verify_pkcs7_signature(mod, modlen, mod + modlen, sig_len,
                                      VERIFY_USE_SECONDARY_KEYRING,
                                      VERIFYING_MODULE_SIGNATURE,
                                      NULL, NULL);
}
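
For reference, this is the trailer mod_verify_sig() peels apart. By the time
it runs, the trailing "~Module signature appended~\n" magic string has
already been stripped and info->len adjusted; the descriptor is from
include/linux/module_signature.h:

    /*
     * A signed module file ends with:
     *
     *   [ module data | PKCS#7 signature | struct module_signature | magic ]
     */
    struct module_signature {
            u8      algo;           /* Public-key crypto algorithm [0] */
            u8      hash;           /* Digest algorithm [0] */
            u8      id_type;        /* Key identifier type [PKEY_ID_PKCS7] */
            u8      signer_len;     /* Length of signer's name [0] */
            u8      key_len;        /* Length of key identifier [0] */
            u8      __pad[3];
            __be32  sig_len;        /* Length of signature data */
    };

Reading the struct from the end of the file gives sig_len, which locates the
PKCS#7 blob immediately before it; everything ahead of that is the module
proper, which is what verify_pkcs7_signature() checks.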