linux-stable/kernel/trace
Steven Rostedt (VMware) 021b6d11e5 tracing: Have all levels of checks prevent recursion
commit ed65df63a3 upstream.

While writing an email explaining the "bit = 0" logic for a discussion on
making ftrace_test_recursion_trylock() disable preemption, I discovered a
path that makes the "do not do the logic if bit is zero" optimization unsafe.

The recursion logic runs in hot paths like the function tracer, where any
extra code executed causes noticeable overhead, so tricks are used to limit
the amount of code that runs. This included the recursion testing logic.

Having recursion testing is important, as there are many paths that can
end up in an infinite recursion cycle when tracing every function in the
kernel. Thus protection is needed to prevent that from happening.

Because it is OK to recurse due to different running context levels (e.g.
an interrupt preempts a trace, and then a trace occurs in the interrupt
handler), a set of bits is used to track which context one is in (normal,
softirq, irq and NMI). If a recursion occurs at the same level, it is
prevented*.
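
As a rough sketch (simplified, hypothetical names; not the kernel's exact
implementation), the per-context protection amounts to one bit per context
in a per-task word:

  enum {
      CTX_NORMAL_BIT,
      CTX_SOFTIRQ_BIT,
      CTX_IRQ_BIT,
      CTX_NMI_BIT,
  };

  /* Returns the claimed bit on success, or -1 if this context is
   * already tracing (i.e. recursion at the same level). */
  static inline int recursion_try_acquire(unsigned long *word, int ctx_bit)
  {
      if (*word & (1UL << ctx_bit))
          return -1;
      *word |= 1UL << ctx_bit;
      return ctx_bit;
  }

  static inline void recursion_release(unsigned long *word, int bit)
  {
      *word &= ~(1UL << bit);
  }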

Then there are infrastructure levels of recursion as well. When more than
one callback is attached to the same function being traced, a loop function
is called to iterate over all the callbacks. Both the callbacks and the
loop function have recursion protection. The callbacks use
ftrace_test_recursion_trylock(), which tests a "function" set of context
bits, and the loop function calls the internal
trace_test_and_set_recursion() directly, with an "internal" set of bits.
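
For reference, the usage pattern inside a callback looks roughly like this
(a sketch only; my_callback is a made-up name and both the callback
prototype and the trylock arguments have varied across kernel versions):

  static void my_callback(unsigned long ip, unsigned long parent_ip,
                          struct ftrace_ops *op, struct ftrace_regs *fregs)
  {
      int bit;

      /* Bail out if we are already tracing in this context. */
      bit = ftrace_test_recursion_trylock(ip, parent_ip);
      if (bit < 0)
          return;

      /* ... actual tracing work ... */

      ftrace_test_recursion_unlock(bit);
  }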

If an architecture does not implement all the features supported by ftrace,
the callbacks are never called directly; the loop function is called
instead, and it implements the ftrace features.

Since both the loop function and the callbacks do recursion protection, it
seemed unnecessary to do it in both locations. Thus, a trick was made to
place the internal set of recursion bits at a more significant bit location
than the function bits. Then, if any of the higher bits were set, the logic
of the function bits could be skipped, as any new recursion would first
have to go through the loop function.
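
In a simplified sketch (hypothetical masks; the real bits live in the trace
recursion enum), the shortcut looked like this:

  #define FUNCTION_BITS_MASK   0x0fUL   /* bits 0-3: callback context bits      */
  #define INTERNAL_BITS_MASK   0xf0UL   /* bits 4-7: loop-function context bits */

  static inline int function_recursion_trylock(unsigned long *word, int ctx_bit)
  {
      /*
       * The shortcut: if any internal (loop-function) bit is set,
       * assume the loop function already protects against recursion
       * and skip the per-context check entirely.
       */
      if (*word & INTERNAL_BITS_MASK)
          return 0;                       /* the "bit == 0" case */

      if (*word & (1UL << ctx_bit))
          return -1;                      /* recursion at the same level */
      *word |= 1UL << ctx_bit;
      return ctx_bit;
  }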

This is true for architectures that do not support all the ftrace
features, because all functions being traced must first go through the
loop function before going to the callbacks. But it is not true for
architectures that support all the ftrace features. There, the loop
function may be called because two callbacks are attached to the same
function, but a function called from within the callback may be traced by
only one of those callbacks, and that callback will then be called
directly, bypassing the loop function.

i.e.

 traced_function_1: [ more than one callback tracing it ]
   call loop_func

 loop_func:
   trace_recursion set internal bit
   call callback

 callback:
   trace_recursion [ skipped because internal bit is set, return 0 ]
   call traced_function_2

 traced_function_2: [ only traced by above callback ]
   call callback

 callback:
   trace_recursion [ skipped because internal bit is set, return 0 ]
   call traced_function_2

 [ wash, rinse, repeat, BOOM! out of shampoo! ]

Thus, the "bit == 0 skip" trick is not safe unless the loop function is
called for all functions.

Since we want to encourage architectures to implement all ftrace features,
having the ones that do not slow down slightly due to this extra logic may
encourage their maintainers to update to the latest ftrace features. And
because this logic is only safe for those architectures anyway, remove it
completely.
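
With the shortcut removed, the sketch above reduces to always testing and
setting the caller's own context bit, whether or not the internal bits are
set (again simplified, hypothetical names):

  static inline int function_recursion_trylock(unsigned long *word, int ctx_bit)
  {
      /* No shortcut: always test and set this context's bit. */
      if (*word & (1UL << ctx_bit))
          return -1;
      *word |= 1UL << ctx_bit;
      return ctx_bit;
  }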

 [*] There is one layer of recursion that is allowed, and that is to allow
     for the transition between interrupt contexts (normal -> softirq ->
     irq -> NMI), because a trace may occur before the context update is
     visible to the trace recursion logic.
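
     That exception can be sketched as a single retry on a dedicated
     transition bit (hypothetical name), allowing exactly one extra level
     while the context bookkeeping catches up:

       #define CTX_TRANSITION_BIT   8   /* hypothetical bit number */

       static inline int recursion_trylock_transition(unsigned long *word, int ctx_bit)
       {
           if (!(*word & (1UL << ctx_bit))) {
               *word |= 1UL << ctx_bit;
               return ctx_bit;
           }

           /*
            * The context bit is already set, but an interrupt may have
            * arrived before the context counters were updated.  Allow
            * exactly one such transition.
            */
           if (!(*word & (1UL << CTX_TRANSITION_BIT))) {
               *word |= 1UL << CTX_TRANSITION_BIT;
               return CTX_TRANSITION_BIT;
           }

           return -1;   /* genuine recursion at the same level */
       }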

Link: https://lore.kernel.org/all/609b565a-ed6e-a1da-f025-166691b5d994@linux.alibaba.com/
Link: https://lkml.kernel.org/r/20211018154412.09fcad3c@gandalf.local.home

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Joe Lawrence <joe.lawrence@redhat.com>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Jisheng Zhang <jszhang@kernel.org>
Cc: 王贇 <yun.wang@linux.alibaba.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: stable@vger.kernel.org
Fixes: edc15cafcb ("tracing: Avoid unnecessary multiple recursion checks")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-27 09:56:56 +02:00
..
blktrace.c blktrace: Fix uaf in blk_trace access after removing by sysfs 2021-09-30 10:11:05 +02:00
bpf_trace.c bpf: Add lockdown check for probe_write_user helper 2021-08-15 14:00:25 +02:00
bpf_trace.h bpf: Use dedicated bpf_trace_printk event instead of trace_printk() 2020-07-13 16:55:49 -07:00
fgraph.c fgraph: Initialize tracing_graph_pause at task creation 2021-02-10 09:29:16 +01:00
ftrace.c tracing: Have all levels of checks prevent recursion 2021-10-27 09:56:56 +02:00
ftrace_internal.h x86/ftrace: Have ftrace trampolines turn read-only at the end of system boot up 2020-05-12 18:24:34 -04:00
Kconfig tracing/kprobes: Do the notrace functions check without kprobes on ftrace 2021-01-19 18:27:19 +01:00
kprobe_event_gen_test.c tracing: Add kprobe event command generation test module 2020-01-30 09:46:28 -05:00
Makefile Kbuild updates for v5.9 2020-08-09 14:10:26 -07:00
power-traces.c
preemptirq_delay_test.c tracing: Wait for preempt irq delay thread to execute 2020-05-11 17:00:34 -04:00
ring_buffer.c tracing: Fix bug in rb_per_cpu_empty() that might cause deadloop. 2021-07-28 14:35:45 +02:00
ring_buffer_benchmark.c sched,tracing: Convert to sched_set_fifo() 2020-07-29 11:43:53 +02:00
rpm-traces.c
synth_event_gen_test.c tracing: Add support for dynamic strings to synthetic events 2020-10-05 19:32:18 -04:00
trace.c tracing: Fix NULL pointer dereference in start_creating 2021-08-12 13:22:12 +02:00
trace.h tracing: Have all levels of checks prevent recursion 2021-10-27 09:56:56 +02:00
trace_benchmark.c
trace_benchmark.h
trace_boot.c tracing/boot: Fix a hist trigger dependency for boot time tracing 2021-09-22 12:28:03 +02:00
trace_branch.c
trace_clock.c tracing: Do no increment trace_clock_global() by one 2021-06-23 14:42:50 +02:00
trace_dynevent.c tracing: Delete repeated words in comments 2020-09-21 21:06:02 -04:00
trace_dynevent.h tracing: Remove check_arg() callbacks from dynevent args 2020-02-01 13:09:23 -05:00
trace_entries.h tracing: Make ftrace packed events have align of 1 2020-06-16 21:21:02 -04:00
trace_event_perf.c
trace_events.c tracing: Do not count ftrace events in top level enable output 2021-02-17 11:02:20 +01:00
trace_events_filter.c treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
trace_events_filter_test.h
trace_events_hist.c tracing / histogram: Fix NULL pointer dereference on strcmp() on NULL event name 2021-08-26 08:35:54 -04:00
trace_events_inject.c
trace_events_synth.c tracing: Make -ENOMEM the default error for parse_synth_field() 2020-11-02 15:58:32 -05:00
trace_events_trigger.c tracing: Fix event trigger to accept redundant spaces 2020-06-23 21:51:40 -04:00
trace_export.c treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
trace_functions.c tracing: Have all levels of checks prevent recursion 2021-10-27 09:56:56 +02:00
trace_functions_graph.c tracing: make tracing_init_dentry() returns an integer instead of a d_entry pointer 2020-09-18 22:17:14 -04:00
trace_hwlat.c tracing: Remove WARN_ON in start_thread() 2020-11-30 21:43:07 -05:00
trace_irqsoff.c tracing: Use pause-on-trace with the latency tracers 2021-02-10 09:29:16 +01:00
trace_kdb.c
trace_kprobe.c tracing/probes: Reject events which have the same name of existing one 2021-09-22 12:28:00 +02:00
trace_kprobe_selftest.c
trace_kprobe_selftest.h
trace_mmiotrace.c
trace_nop.c
trace_output.c tracing: Make the space reserved for the pid wider 2020-09-18 12:42:11 -04:00
trace_output.h
trace_preemptirq.c lockdep: fix order in trace_hardirqs_off_caller() 2020-09-14 10:08:07 +02:00
trace_printk.c Updates for tracing and bootconfig: 2020-10-15 15:51:28 -07:00
trace_probe.c tracing/probes: Reject events which have the same name of existing one 2021-09-22 12:28:00 +02:00
trace_probe.h tracing/probes: Reject events which have the same name of existing one 2021-09-22 12:28:00 +02:00
trace_probe_tmpl.h
trace_sched_switch.c tracing: Fix sched switch start/stop refcount racy updates 2020-01-30 09:46:10 -05:00
trace_sched_wakeup.c
trace_selftest.c tracing: Disable ftrace selftests when any tracer is running 2020-12-30 11:54:28 +01:00
trace_selftest_dynamic.c
trace_seq.c tracing: Remove unused TRACE_SEQ_BUF_USED 2020-01-21 18:39:54 -05:00
trace_stack.c tracing: make tracing_init_dentry() returns an integer instead of a d_entry pointer 2020-09-18 22:17:14 -04:00
trace_stat.c tracing: make tracing_init_dentry() returns an integer instead of a d_entry pointer 2020-09-18 22:17:14 -04:00
trace_stat.h
trace_synth.h tracing: Synthetic event field_pos is an index not a boolean 2021-07-28 14:35:45 +02:00
trace_syscalls.c Tracing updates: 2020-02-06 07:12:11 +00:00
trace_uprobe.c tracing/probes: Reject events which have the same name of existing one 2021-09-22 12:28:00 +02:00
tracing_map.c tracing: Delete repeated words in comments 2020-09-21 21:06:02 -04:00
tracing_map.h