License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis, with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them were
included (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier                              # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                         270
GPL-2.0+ WITH Linux-syscall-note                        169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
LGPL-2.1+ WITH Linux-syscall-note                        15
GPL-1.0+ WITH Linux-syscall-note                         14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
LGPL-2.0+ WITH Linux-syscall-note                         4
LGPL-2.1 WITH Linux-syscall-note                          3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses), a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is in part based on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and they have been fixed to reflect
the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
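For reference, the two SPDX tag comment styles the script had to choose
between look like this (illustrative):
  /* SPDX-License-Identifier: GPL-2.0 */   <- header and uapi files
  // SPDX-License-Identifier: GPL-2.0      <- .c source files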
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_COMPILER_H
#define __LINUX_COMPILER_H

linux/compiler.h: Split into compiler.h and compiler_types.h
linux/compiler.h is included indirectly by linux/types.h via
uapi/linux/types.h -> uapi/linux/posix_types.h -> linux/stddef.h
-> uapi/linux/stddef.h and is needed to provide a proper definition of
offsetof.
Unfortunately, compiler.h requires a definition of
smp_read_barrier_depends() for defining lockless_dereference() and soon
for defining READ_ONCE(), which means that all
users of READ_ONCE() will need to include asm/barrier.h to avoid splats
such as:
In file included from include/uapi/linux/stddef.h:1:0,
from include/linux/stddef.h:4,
from arch/h8300/kernel/asm-offsets.c:11:
include/linux/list.h: In function 'list_empty':
>> include/linux/compiler.h:343:2: error: implicit declaration of function 'smp_read_barrier_depends' [-Werror=implicit-function-declaration]
smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
^
A better alternative is to include asm/barrier.h in linux/compiler.h,
but this requires a type definition for "bool" on some architectures
(e.g. x86), which is defined later by linux/types.h. Type "bool" is also
used directly in linux/compiler.h, so the whole thing is pretty fragile.
This patch splits compiler.h in two: compiler_types.h contains type
annotations, definitions and the compiler-specific parts, whereas
compiler.h #includes compiler_types.h and additionally defines macros
such as {READ,WRITE,ACCESS}_ONCE().
uapi/linux/stddef.h and linux/linkage.h are then moved over to include
linux/compiler_types.h, which fixes the build for h8 and blackfin.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1508840570-22169-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

#include <linux/compiler_types.h>

#ifndef __ASSEMBLY__

#ifdef __KERNEL__

/*
 * Note: DISABLE_BRANCH_PROFILING can be used by special lowlevel code
 * to disable branch tracing on a per file basis.
 */
void ftrace_likely_update(struct ftrace_likely_data *f, int val,

tracing: Process constants for (un)likely() profiler
When running the likely/unlikely profiler, one of the results did not look
accurate. It noted that the unlikely() in link_path_walk() was 100%
incorrect. When I added a trace_printk() to see what was happening there, it
became 80% correct! Looking deeper into what was happening, I found that
gcc split that if statement into two paths. One where the if statement
became a constant, the other path a variable. The other path had the if
statement always hit (making the unlikely there, always false), but since
the #define unlikely() has:
#define unlikely(x) (__builtin_constant_p(x) ? !!(x) : __branch_check__(x, 0))
Where constants are ignored by the branch profiler, the "constant" path
made by the compiler was ignored, even though it was hit 80% of the time.
By just passing the constant value to the __branch_check__() function and
tracing it out of line (as always correct, as likely/unlikely isn't a factor
for constants), then we get back the accurate readings of branches that were
optimized by gcc causing part of the execution to become constant.
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

                          int expect, int is_constant);

#if defined(CONFIG_TRACE_BRANCH_PROFILING) \
    && !defined(DISABLE_BRANCH_PROFILING) && !defined(__CHECKER__)
#define likely_notrace(x) __builtin_expect(!!(x), 1)
#define unlikely_notrace(x) __builtin_expect(!!(x), 0)

#define __branch_check__(x, expect, is_constant) ({ \
        long ______r; \
        static struct ftrace_likely_data \
                __aligned(4) \
                __section("_ftrace_annotated_branch") \
                ______f = { \
                .data.func = __func__, \
                .data.file = __FILE__, \
                .data.line = __LINE__, \
        }; \
        ______r = __builtin_expect(!!(x), expect); \
        ftrace_likely_update(&______f, ______r, \
                             expect, is_constant); \
        ______r; \
})

/*
 * Using __builtin_constant_p(x) to ignore cases where the return
 * value is always the same. This idea is taken from a similar patch
 * written by Daniel Walker.
 */
# ifndef likely
# define likely(x) (__branch_check__(x, 1, __builtin_constant_p(x)))
# endif
# ifndef unlikely
# define unlikely(x) (__branch_check__(x, 0, __builtin_constant_p(x)))
# endif

#ifdef CONFIG_PROFILE_ALL_BRANCHES
/*
 * "Define 'is'", Bill Clinton
 * "Define 'if'", Steven Rostedt
 */
#define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )

#define __trace_if_var(cond) (__builtin_constant_p(cond) ? (cond) : __trace_if_value(cond))

#define __trace_if_value(cond) ({ \
        static struct ftrace_branch_data \
                __aligned(4) \
                __section("_ftrace_branch") \
                __if_trace = { \
                .func = __func__, \
                .file = __FILE__, \
                .line = __LINE__, \
        }; \
        (cond) ? \
                (__if_trace.miss_hit[1]++,1) : \
                (__if_trace.miss_hit[0]++,0); \
})

#endif /* CONFIG_PROFILE_ALL_BRANCHES */

#else
# define likely(x) __builtin_expect(!!(x), 1)
# define unlikely(x) __builtin_expect(!!(x), 0)
# define likely_notrace(x) likely(x)
# define unlikely_notrace(x) unlikely(x)
#endif
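
/*
 * Usage sketch (illustrative, not part of this header; example_* names are
 * hypothetical): with CONFIG_TRACE_BRANCH_PROFILING enabled, each
 * likely()/unlikely() below expands to __branch_check__() and is counted by
 * the branch profiler; otherwise it compiles down to __builtin_expect().
 */
static inline int example_check_flags(unsigned int flags)
{
        if (unlikely(flags & ~0xffu))   /* rare error path, hinted cold */
                return -1;
        if (likely(!flags))             /* common fast path, hinted hot */
                return 0;
        return 1;
}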

/* Optimization barrier */
#ifndef barrier
/* The "volatile" is due to gcc bugs */
# define barrier() __asm__ __volatile__("": : :"memory")
#endif

lib: make memzero_explicit more robust against dead store elimination
In commit 0b053c951829 ("lib: memzero_explicit: use barrier instead
of OPTIMIZER_HIDE_VAR"), we made memzero_explicit() more robust in
case LTO would decide to inline memzero_explicit() and eventually
find out it could be eliminated as a dead store.
While using barrier() works well for the case of gcc, recent efforts
from LLVMLinux people suggest to use llvm as an alternative to gcc,
and there, Stephan found in a simple stand-alone user space example
that llvm could nevertheless optimize and thus eliminate the memset().
A similar issue has been observed in the referenced llvm bug report,
which is regarded as not-a-bug.
Based on some experiments, icc is a bit special on its own, while it
doesn't seem to eliminate the memset(), it could do so with an own
implementation, and then result in similar findings as with llvm.
The fix in this patch now works for all three compilers (also tested
with more aggressive optimization levels). Arguably, in the current
kernel tree it's more of a theoretical issue, but imho, it's better
to be pedantic about it.
It's clearly visible with gcc/llvm though, with the below code: if we
would have used barrier() only here, llvm would have omitted clearing,
not so with barrier_data() variant:
static inline void memzero_explicit(void *s, size_t count)
{
memset(s, 0, count);
barrier_data(s);
}
int main(void)
{
char buff[20];
memzero_explicit(buff, sizeof(buff));
return 0;
}
$ gcc -O2 test.c
$ gdb a.out
(gdb) disassemble main
Dump of assembler code for function main:
0x0000000000400400 <+0>: lea -0x28(%rsp),%rax
0x0000000000400405 <+5>: movq $0x0,-0x28(%rsp)
0x000000000040040e <+14>: movq $0x0,-0x20(%rsp)
0x0000000000400417 <+23>: movl $0x0,-0x18(%rsp)
0x000000000040041f <+31>: xor %eax,%eax
0x0000000000400421 <+33>: retq
End of assembler dump.
$ clang -O2 test.c
$ gdb a.out
(gdb) disassemble main
Dump of assembler code for function main:
0x00000000004004f0 <+0>: xorps %xmm0,%xmm0
0x00000000004004f3 <+3>: movaps %xmm0,-0x18(%rsp)
0x00000000004004f8 <+8>: movl $0x0,-0x8(%rsp)
0x0000000000400500 <+16>: lea -0x18(%rsp),%rax
0x0000000000400505 <+21>: xor %eax,%eax
0x0000000000400507 <+23>: retq
End of assembler dump.
As gcc, clang, and also icc define __GNUC__, it's sufficient to define
this in compiler-gcc.h only for it to be picked up. For a fallback or
otherwise unsupported compiler, we define it as a barrier; similarly for
ecc, which does not support gcc inline asm.
Reference: https://llvm.org/bugs/show_bug.cgi?id=15495
Reported-by: Stephan Mueller <smueller@chronox.de>
Tested-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Stephan Mueller <smueller@chronox.de>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: mancha security <mancha1@zoho.com>
Cc: Mark Charlebois <charlebm@gmail.com>
Cc: Behan Webster <behanw@converseincode.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

#ifndef barrier_data
/*
 * This version is here to prevent dead store elimination on @ptr,
 * where gcc and llvm may behave differently when otherwise using
 * normal barrier(): while gcc behavior gets along with a normal
 * barrier(), llvm needs an explicit input variable to be assumed
 * clobbered. The issue is as follows: while the inline asm might
 * access any memory it wants, the compiler could have fit all of
 * @ptr into memory registers instead, and since @ptr never escaped
 * from that, it proved that the inline asm wasn't touching any of
 * it. This version works well with both compilers, i.e. we're telling
 * the compiler that the inline asm absolutely may see the contents
 * of @ptr. See also: https://llvm.org/bugs/show_bug.cgi?id=15495
 */
# define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")
#endif

bug.h: work around GCC PR82365 in BUG()
Looking at functions with large stack frames across all architectures
led me to discover that BUG() suffers from the same problem as
fortify_panic(), which I've added a workaround for already.
In short, variables that go out of scope by calling a noreturn function
or __builtin_unreachable() keep using stack space in functions
afterwards.
A workaround that was identified is to insert an empty assembler
statement just before calling the function that doesn't return. I'm
adding a macro "barrier_before_unreachable()" to document this, and
insert calls to that in all instances of BUG() that currently suffer
from this problem.
The files that saw the largest change from this had these frame sizes
before, and much less with my patch:
fs/ext4/inode.c:82:1: warning: the frame size of 1672 bytes is larger than 800 bytes [-Wframe-larger-than=]
fs/ext4/namei.c:434:1: warning: the frame size of 904 bytes is larger than 800 bytes [-Wframe-larger-than=]
fs/ext4/super.c:2279:1: warning: the frame size of 1160 bytes is larger than 800 bytes [-Wframe-larger-than=]
fs/ext4/xattr.c:146:1: warning: the frame size of 1168 bytes is larger than 800 bytes [-Wframe-larger-than=]
fs/f2fs/inode.c:152:1: warning: the frame size of 1424 bytes is larger than 800 bytes [-Wframe-larger-than=]
net/netfilter/ipvs/ip_vs_core.c:1195:1: warning: the frame size of 1068 bytes is larger than 800 bytes [-Wframe-larger-than=]
net/netfilter/ipvs/ip_vs_core.c:395:1: warning: the frame size of 1084 bytes is larger than 800 bytes [-Wframe-larger-than=]
net/netfilter/ipvs/ip_vs_ftp.c:298:1: warning: the frame size of 928 bytes is larger than 800 bytes [-Wframe-larger-than=]
net/netfilter/ipvs/ip_vs_ftp.c:418:1: warning: the frame size of 908 bytes is larger than 800 bytes [-Wframe-larger-than=]
net/netfilter/ipvs/ip_vs_lblcr.c:718:1: warning: the frame size of 960 bytes is larger than 800 bytes [-Wframe-larger-than=]
drivers/net/xen-netback/netback.c:1500:1: warning: the frame size of 1088 bytes is larger than 800 bytes [-Wframe-larger-than=]
In case of ARC and CRIS, it turns out that the BUG() implementation
actually does return (or at least the compiler thinks it does),
resulting in lots of warnings about uninitialized variable use and
leaving noreturn functions, such as:
block/cfq-iosched.c: In function 'cfq_async_queue_prio':
block/cfq-iosched.c:3804:1: error: control reaches end of non-void function [-Werror=return-type]
include/linux/dmaengine.h: In function 'dma_maxpq':
include/linux/dmaengine.h:1123:1: error: control reaches end of non-void function [-Werror=return-type]
This makes them call __builtin_trap() instead, which should normally
dump the stack and kill the current process, like some of the other
architectures already do.
I tried adding barrier_before_unreachable() to panic() and
fortify_panic() as well, but that had very little effect, so I'm not
submitting that patch.
Vineet said:
: For ARC, it is double win.
:
: 1. Fixes 3 -Wreturn-type warnings
:
: | ../net/core/ethtool.c:311:1: warning: control reaches end of non-void function
: [-Wreturn-type]
: | ../kernel/sched/core.c:3246:1: warning: control reaches end of non-void function
: [-Wreturn-type]
: | ../include/linux/sunrpc/svc_xprt.h:180:1: warning: control reaches end of
: non-void function [-Wreturn-type]
:
: 2. bloat-o-meter reports code size improvements as gcc elides the
: generated code for stack return.
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82365
Link: http://lkml.kernel.org/r/20171219114112.939391-1-arnd@arndb.de
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc]
Tested-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc]
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christopher Li <sparse@chrisli.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

/* workaround for GCC PR82365 if needed */
#ifndef barrier_before_unreachable
# define barrier_before_unreachable() do { } while (0)
#endif

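/*
 * Usage sketch (illustrative, not part of this header): arch BUG()
 * implementations place this barrier just before a trapping/noreturn
 * construct so that, per GCC PR82365, the caller's stack slots for
 * out-of-scope variables are actually released.
 */
#define example_bug()                           \
do {                                            \
        barrier_before_unreachable();           \
        __builtin_trap();                       \
} while (0)
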
/* Unreachable code */
#ifdef CONFIG_OBJTOOL
/*
 * These macros help objtool understand GCC code flow for unreachable code.
 * The __COUNTER__ based labels are a hack to make each instance of the macros
 * unique, to convince GCC not to merge duplicate inline asm statements.
 */
#define __stringify_label(n) #n

#define __annotate_unreachable(c) ({ \
        asm volatile(__stringify_label(c) ":\n\t" \
                     ".pushsection .discard.unreachable\n\t" \
                     ".long " __stringify_label(c) "b - .\n\t" \
                     ".popsection\n\t" : : "i" (c)); \

objtool: Assume unannotated UD2 instructions are dead ends
Arnd reported some false positive warnings with GCC 7:
drivers/hid/wacom_wac.o: warning: objtool: wacom_bpt3_touch()+0x2a5: stack state mismatch: cfa1=7+8 cfa2=6+16
drivers/iio/adc/vf610_adc.o: warning: objtool: vf610_adc_calculate_rates() falls through to next function vf610_adc_sample_set()
drivers/pwm/pwm-hibvt.o: warning: objtool: hibvt_pwm_get_state() falls through to next function hibvt_pwm_remove()
drivers/pwm/pwm-mediatek.o: warning: objtool: mtk_pwm_config() falls through to next function mtk_pwm_enable()
drivers/spi/spi-bcm2835.o: warning: objtool: .text: unexpected end of section
drivers/spi/spi-bcm2835aux.o: warning: objtool: .text: unexpected end of section
drivers/watchdog/digicolor_wdt.o: warning: objtool: dc_wdt_get_timeleft() falls through to next function dc_wdt_restart()
When GCC 7 detects a potential divide-by-zero condition, it sometimes
inserts a UD2 instruction for the case where the divisor is zero,
instead of letting the hardware trap on the divide instruction.
Objtool doesn't consider UD2 to be fatal unless it's annotated with
unreachable(). So it considers the GCC-generated UD2 to be non-fatal,
and it tries to follow the control flow past the UD2 and gets
confused.
Previously, objtool *did* assume UD2 was always a dead end. That
changed with the following commit:
d1091c7fa3d5 ("objtool: Improve detection of BUG() and other dead ends")
The motivation behind that change was that Peter was planning on using
UD2 for __WARN(), which is *not* a dead end. However, it turns out
that some emulators rely on UD2 being fatal, so he ended up using
'ud0' instead:
9a93848fe787 ("x86/debug: Implement __WARN() using UD0")
For GCC 4.5+, it should be safe to go back to the previous assumption
that UD2 is fatal, even when it's not annotated with unreachable().
But for pre-4.5 versions of GCC, the unreachable() macro isn't
supported, so such cases of UD2 need to be explicitly annotated as
reachable.
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: d1091c7fa3d5 ("objtool: Improve detection of BUG() and other dead ends")
Link: http://lkml.kernel.org/r/e57fa9dfede25f79487da8126ee9cdf7b856db65.1501188854.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

})
#define annotate_unreachable() __annotate_unreachable(__COUNTER__)

/* Annotate a C jump table to allow objtool to follow the code flow */
#define __annotate_jump_table __section(".rodata..c_jump_table")

#else /* !CONFIG_OBJTOOL */
#define annotate_unreachable()
#define __annotate_jump_table
#endif /* CONFIG_OBJTOOL */

#ifndef unreachable
# define unreachable() do { \
        annotate_unreachable(); \
        __builtin_unreachable(); \
} while (0)
#endif

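/*
 * Usage sketch (illustrative, not part of this header): the caller
 * guarantees 'state' is 0 or 1, and unreachable() tells both the compiler
 * and objtool that the function cannot fall out of the switch, avoiding
 * a spurious "control reaches end of non-void function" warning.
 */
static inline const char *example_state_name(int state)
{
        switch (state) {
        case 0:
                return "idle";
        case 1:
                return "busy";
        }
        unreachable();
}
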
kbuild: allow archs to select link dead code/data elimination
Introduce LD_DEAD_CODE_DATA_ELIMINATION option for architectures to
select to build with -ffunction-sections, -fdata-sections, and link
with --gc-sections. It requires some work (documented) to ensure all
unreferenced entrypoints are live, and requires toolchain and build
verification, so it is made a per-arch option for now.
On a random powerpc64le build, this yields a significant size saving,
it boots and runs fine, but there is a lot I haven't tested as yet, so
these savings may be reduced if there are bugs in the link.
text data bss dec filename
11169741 1180744 1923176 14273661 vmlinux
10445269 1004127 1919707 13369103 vmlinux.dce
~700K text, ~170K data, 6% removed from kernel image size.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michal Marek <mmarek@suse.com>

/*
 * KENTRY - kernel entry point
 * This can be used to annotate symbols (functions or data) that are used
 * without their linker symbol being referenced explicitly. For example,
 * interrupt vector handlers, or functions in the kernel image that are found
 * programmatically.
 *
 * Not required for symbols exported with EXPORT_SYMBOL, or initcalls. Those
 * are handled in their own way (with KEEP() in linker scripts).
 *
 * KENTRY can be avoided if the symbols in question are marked as KEEP() in the
 * linker script. For example an architecture could KEEP() its entire
 * boot/exception vector code rather than annotate each function and data.
 */
#ifndef KENTRY
# define KENTRY(sym) \
        extern typeof(sym) sym; \
        static const unsigned long __kentry_##sym \
        __used \
        __attribute__((__section__("___kentry+" #sym))) \
        = (unsigned long)&sym;
#endif

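/*
 * Usage sketch (hypothetical symbol): keep an entry point alive under
 * CONFIG_LD_DEAD_CODE_DATA_ELIMINATION even though nothing references
 * it by name from C code.
 */
extern void example_vector_entry(void);
KENTRY(example_vector_entry);
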
#ifndef RELOC_HIDE
# define RELOC_HIDE(ptr, off) \
        ({ unsigned long __ptr; \
           __ptr = (unsigned long) (ptr); \
           (typeof(ptr)) (__ptr + (off)); })
#endif

#define absolute_pointer(val) RELOC_HIDE((void *)(val), 0)

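/*
 * Usage sketch (hypothetical address, assumes memcmp() from
 * <linux/string.h>): absolute_pointer() hides the constant from the
 * optimizer so that comparing against a fixed legacy address does not
 * trigger compile-time array-bounds warnings.
 */
#define EXAMPLE_SIGNATURE_ADDR 0x000ff000UL
static inline int example_signature_ok(void)
{
        return memcmp(absolute_pointer(EXAMPLE_SIGNATURE_ADDR), "OK", 2) == 0;
}
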
#ifndef OPTIMIZER_HIDE_VAR
/* Make the optimizer believe the variable can be manipulated arbitrarily. */
#define OPTIMIZER_HIDE_VAR(var) \
        __asm__ ("" : "=r" (var) : "0" (var))
#endif

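/*
 * Usage sketch (illustrative, assumes u8 from <linux/types.h>): hiding
 * the accumulator from the optimizer keeps a constant-time comparison
 * from being specialized or short-circuited.
 */
static inline int example_const_time_neq(const u8 *a, const u8 *b, int len)
{
        u8 diff = 0;
        int i;

        for (i = 0; i < len; i++) {
                diff |= a[i] ^ b[i];
                OPTIMIZER_HIDE_VAR(diff);
        }
        return diff != 0;
}
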
/* Not-quite-unique ID. */
#ifndef __UNIQUE_ID
# define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __LINE__)
#endif

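/*
 * Illustrative expansion: on (say) line 120 of a file,
 * __UNIQUE_ID(example_counter) pastes to __UNIQUE_ID_example_counter120,
 * so repeated expansions on different lines get distinct identifiers.
 */
static int __UNIQUE_ID(example_counter);
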
/**
 * data_race - mark an expression as containing intentional data races
 *
 * This data_race() macro is useful for situations in which data races
 * should be forgiven. One example is diagnostic code that accesses
 * shared variables but is not a part of the core synchronization design.
 *
 * This macro *does not* affect normal code generation, but is a hint
 * to tooling that data races here are to be ignored.
 */
#define data_race(expr) \
({ \
        __unqual_scalar_typeof(({ expr; })) __v = ({ \
                __kcsan_disable_current(); \
                expr; \
        }); \
        __kcsan_enable_current(); \
        __v; \
})

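/*
 * Usage sketch (hypothetical statistics counter): the lockless read is
 * an intentional, benign race, so KCSAN is told not to report it.
 */
static inline unsigned long example_stats_peek(const unsigned long *counter)
{
        return data_race(*counter);
}
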
#endif /* __KERNEL__ */

module: use relative references for __ksymtab entries
An ordinary arm64 defconfig build has ~64 KB worth of __ksymtab entries,
each consisting of two 64-bit fields containing absolute references, to
the symbol itself and to a char array containing its name, respectively.
When we build the same configuration with KASLR enabled, we end up with an
additional ~192 KB of relocations in the .init section, i.e., one 24 byte
entry for each absolute reference, which all need to be processed at boot
time.
Given how the struct kernel_symbol that describes each entry is completely
local to module.c (except for the references emitted by EXPORT_SYMBOL()
itself), we can easily modify it to contain two 32-bit relative references
instead. This reduces the size of the __ksymtab section by 50% for all
64-bit architectures, and gets rid of the runtime relocations entirely for
architectures implementing KASLR, either via standard PIE linking (arm64)
or using custom host tools (x86).
Note that the binary search involving __ksymtab contents relies on each
section being sorted by symbol name. This is implemented based on the
input section names, not the names in the ksymtab entries, so this patch
does not interfere with that.
Given that the use of place-relative relocations requires support both in
the toolchain and in the module loader, we cannot enable this feature for
all architectures. So make it dependent on whether
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is defined.
Link: http://lkml.kernel.org/r/20180704083651.24360-4-ard.biesheuvel@linaro.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Jessica Yu <jeyu@kernel.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morris <james.morris@microsoft.com>
Cc: James Morris <jmorris@namei.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

/*
 * Force the compiler to emit 'sym' as a symbol, so that we can reference
 * it from inline assembler. Necessary in case 'sym' could be inlined
 * otherwise, or eliminated entirely due to lack of references that are
 * visible to the compiler.
 */
#define ___ADDRESSABLE(sym, __attrs) \
        static void * __used __attrs \
        __UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)&sym;
#define __ADDRESSABLE(sym) \
        ___ADDRESSABLE(sym, __section(".discard.addressable"))

/**
 * offset_to_ptr - convert a relative memory offset to an absolute pointer
 * @off: the address of the 32-bit offset value
 */
static inline void *offset_to_ptr(const int *off)
{
        return (void *)((unsigned long)off + *off);
}

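/*
 * Usage sketch (hypothetical table entry): with
 * CONFIG_HAVE_ARCH_PREL32_RELOCATIONS, tables such as __ksymtab store
 * 32-bit place-relative offsets; offset_to_ptr() turns the address of
 * such an offset back into the absolute pointer it encodes.
 */
struct example_rel_entry {
        int name_offset;        /* signed 32-bit offset from this field */
};

static inline const char *example_entry_name(const struct example_rel_entry *e)
{
        return offset_to_ptr(&e->name_offset);
}
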
#endif /* __ASSEMBLY__ */

/* &a[0] degrades to a pointer: a different type from an array */
#define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))

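/*
 * Usage sketch (illustrative local copy of the kernel's ARRAY_SIZE()
 * pattern): the __must_be_array() term makes the macro fail to build
 * when handed a pointer instead of an actual array.
 */
#define EXAMPLE_ARRAY_SIZE(arr) \
        (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
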
/*
 * Whether 'type' is a signed type or an unsigned type. Supports scalar types,
 * bool and also pointer types.
 */
#define is_signed_type(type) (((type)(-1)) < (__force type)1)

overflow: Introduce overflows_type() and castable_to_type()
Implement a robust overflows_type() macro to test if a variable or
constant value would overflow another variable or type. This can be
used as a constant expression for static_assert() (which requires a
constant expression[1][2]) when used on constant values. This must be
constructed manually, since __builtin_add_overflow() does not produce
a constant expression[3].
Additionally adds castable_to_type(), similar to __same_type(), but for
checking if a constant value would overflow if cast to a given type.
Add unit tests for overflows_type(), __same_type(), and castable_to_type()
to the existing KUnit "overflow" test:
[16:03:33] ================== overflow (21 subtests) ==================
...
[16:03:33] [PASSED] overflows_type_test
[16:03:33] [PASSED] same_type_test
[16:03:33] [PASSED] castable_to_type_test
[16:03:33] ==================== [PASSED] overflow =====================
[16:03:33] ============================================================
[16:03:33] Testing complete. Ran 21 tests: passed: 21
[16:03:33] Elapsed time: 24.022s total, 0.002s configuring, 22.598s building, 0.767s running
[1] https://en.cppreference.com/w/c/language/_Static_assert
[2] C11 standard (ISO/IEC 9899:2011): 6.7.10 Static assertions
[3] https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins.html
6.56 Built-in Functions to Perform Arithmetic with Overflow Checking
Built-in Function: bool __builtin_add_overflow (type1 a, type2 b,
Cc: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Tom Rix <trix@redhat.com>
Cc: Daniel Latypov <dlatypov@google.com>
Cc: Vitor Massaru Iha <vitor@massaru.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: linux-hardening@vger.kernel.org
Cc: llvm@lists.linux.dev
Co-developed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20221024201125.1416422-1-gwan-gyeong.mun@intel.com

#define is_unsigned_type(type) (!is_signed_type(type))

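/*
 * Illustrative compile-time checks for the two helpers above (assumes
 * static_assert() from <linux/build_bug.h> is available):
 */
static_assert(is_signed_type(int));
static_assert(is_signed_type(long long));
static_assert(is_unsigned_type(unsigned long));
static_assert(is_unsigned_type(bool));
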
x86: Fix early boot crash on gcc-10, third try
... or the odyssey of trying to disable the stack protector for the
function which generates the stack canary value.
The whole story started with Sergei reporting a boot crash with a kernel
built with gcc-10:
Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
Call Trace:
dump_stack
panic
? start_secondary
__stack_chk_fail
start_secondary
secondary_startup_64
---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
This happens because gcc-10 tail-call optimizes the last function call
in start_secondary() - cpu_startup_entry() - and thus emits a stack
canary check which fails because the canary value changes after the
boot_init_stack_canary() call.
To fix that, the initial attempt was to mark the one function which
generates the stack canary with:
__attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)
however, using the optimize attribute doesn't work cumulatively
as the attribute does not add to but rather replaces previously
supplied optimization options - roughly all -fxxx options.
The key one among them being -fno-omit-frame-pointer, thus leading to a
missing frame pointer - a frame pointer which the kernel needs.
The next attempt to prevent compilers from tail-call optimizing
the last function call cpu_startup_entry(), shy of carving out
start_secondary() into a separate compilation unit and building it with
-fno-stack-protector, was to add an empty asm("").
This current solution was short and sweet, and reportedly, is supported
by both compilers but we didn't get very far this time: future (LTO?)
optimization passes could potentially eliminate this, which leads us
to the third attempt: having an actual memory barrier there which the
compiler cannot ignore or move around etc.
That should hold for a long time, but hey we said that about the other
two solutions too so...
Reported-by: Sergei Trofimovich <slyfox@gentoo.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Kalle Valo <kvalo@codeaurora.org>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org

/*
 * This is needed in functions which generate the stack canary, see
 * arch/x86/kernel/smpboot.c::start_secondary() for an example.
 */
#define prevent_tail_call_optimization() mb()

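/*
 * Usage sketch modelled on arch/x86/kernel/smpboot.c::start_secondary()
 * (example_* names are hypothetical): the mb() keeps the final call from
 * being tail-call optimized, so the canary written earlier in the
 * function is still the one checked in this function's epilogue.
 */
extern void example_init_stack_canary(void);    /* hypothetical: sets the canary */
extern void example_startup_entry(void);        /* hypothetical: does not return */

static inline void example_start_secondary(void)
{
        example_init_stack_canary();
        example_startup_entry();
        prevent_tail_call_optimization();
}
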
#include <asm/rwonce.h>

#endif /* __LINUX_COMPILER_H */