RCU pull request for v6.7

This pull request contains the following branches:
 
 rcu/torture: RCU torture, locktorture and generic torture infrastructure
 	updates that include various fixes, cleanups and consolidations.
 	Among the user-visible changes, ftrace dumps can now be found in
 	their own file, and module parameters are now better documented and
 	reported in dumps.
 
 rcu/fixes: Generic and misc fixes all over the place. Some highlights:
 
 	* Hotplug handling has seen some light cleanups and comments.
 
 	* An RCU barrier can now be triggered through sysfs to serialize
 	memory stress testing and avoid OOM.
 
 	* Object information is now dumped in case of invalid callback
 	invocation.
 
 	* Various SRCU issues, too hard to trigger to warrant urgent
 	pull requests, have also been fixed.
 
 rcu/docs: RCU documentation updates
 
 rcu/refscale: RCU reference scalability test minor fixes and doc
 	improvements.
 
 rcu/tasks: RCU tasks minor fixes
 
 rcu/stall: Stall detection updates. Introduce RCU CPU Stall notifiers
 	that allow a subsystem to provide information to help debug stalls.
 	Also cure some false-positive stalls.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEd76+gtGM8MbftQlOhSRUR1COjHcFAmU21h0ACgkQhSRUR1CO
 jHdUgA/+Myy5K5OxNrqlF/gIK+flOSg635RyZ0DBx8OMXZ/fAg9qRI+PKt5I4Lha
 eXAg6EtmwSgHmIbjcg8WzsvwniEsqqjOF+n1qil447fHUI2Qqw6c7fIm/MXQkeHJ
 qA7CODDRtsAnwnjmTteasmMeGV0bmXDENxhNrAZBFnVkRgTqfyDbFcn+nxOaPK6b
 fmbKvnB07WUg1KOV8/MbEtAZPb8QgHo58bXSZRKjKkiqRQWB/D3On+tShFK7SYJi
 wIqQ96MLyUXLaIWQ47v6xEO4PZO+3o1wAryvP1DRdb5UrPjO6yKFfQaoo5Mza92G
 zhBJhnXkVvCoNoCU7GKJIDV54SgDHaB6Sf1GN5cjwfujOkLuGCyg0CpKktCGm7uH
 n3X66PVep608Uj2Y/pAo/hv3Hbv7lCu4nfrERvVLG9YoxUvTJDsKmBv+SF/g2mxF
 rHqFa39HUPr1yHA5WjqOQS3lLdqCXEGKvNi6zXCvOceiDbHbiJFkBo6p8TVrbSMX
 FCOWZ3LoE+6uiLu/lLOEroTjeBd8GhDh1LgWgyVK7o0LhP1018DSBolrpcSwnmOo
 Q/E4G2x+aPWs+5NTOmMGOIPY70khKQIM3c8YZelSRffJBo6O3yV68h6X45NQxYvx
 keLvrDaza8h4hKwaof/QaX4ZJgTOZ0xjpawr1vR0hbK8LNtPrUw=
 =cVD7
 -----END PGP SIGNATURE-----

Merge tag 'rcu-next-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks

Pull RCU updates from Frederic Weisbecker:

 - RCU torture, locktorture and generic torture infrastructure updates
   that include various fixes, cleanups and consolidations.

   Among the user-visible changes, ftrace dumps can now be found in
   their own file, and module parameters are now better documented and
   reported in dumps.

 - Generic and misc fixes all over the place. Some highlights:

     * Hotplug handling has seen some light cleanups and comments

     * An RCU barrier can now be triggered through sysfs to serialize
       memory stress testing and avoid OOM (see the sketch after this
       list)

     * Object information is now dumped in case of invalid callback
       invocation

     * Various SRCU issues, too hard to trigger to warrant urgent
       pull requests, have also been fixed

 - RCU documentation updates

 - RCU reference scalability test minor fixes and doc improvements.

 - RCU tasks minor fixes

 - Stall detection updates. Introduce RCU CPU Stall notifiers that
   allow a subsystem to provide information to help debug stalls. Also
   cure some false-positive stalls. A sketch of registering such a
   notifier follows below.
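
The new RCU CPU stall notifier API (see the <linux/rcu_notifier.h>
addition further down in this diff, and its use by rcutorture) could be
hooked from an interested subsystem roughly as follows. This is a
minimal sketch with hypothetical names, not code from the series:

    #include <linux/module.h>
    #include <linux/notifier.h>
    #include <linux/rcu_notifier.h>

    /* Called on stall detection; val is RCU_STALL_NOTIFY_NORM or
     * RCU_STALL_NOTIFY_EXP, and rcutorture interprets the pointer
     * argument as the stall duration in jiffies. */
    static int my_stall_nf(struct notifier_block *nb, unsigned long val, void *ptr)
    {
            pr_info("my_subsys: RCU stall, type=%lu duration=%lu\n",
                    val, (unsigned long)ptr);
            return NOTIFY_OK;
    }

    static struct notifier_block my_stall_nb = {
            .notifier_call = my_stall_nf,
    };

    static int __init my_stall_init(void)
    {
            /* The stub returns -EEXIST when CONFIG_RCU_STALL_COMMON is off. */
            return rcu_stall_chain_notifier_register(&my_stall_nb);
    }
    late_initcall(my_stall_init);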

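For the sysfs-triggered RCU barrier mentioned in the list above, a
minimal userspace sketch might look like the following. The sysfs path
is an assumption based on the usual module_param_cb() layout for the new
rcutree.do_rcu_barrier parameter, and the helper itself is not part of
the series:

    /* Hypothetical test-harness helper: flush pending RCU callbacks
     * between test runs by writing "1" to the (assumed) sysfs file for
     * rcutree.do_rcu_barrier.  The write returns once the throttled
     * rcu_barrier() has completed, or fails with EAGAIN if the RCU
     * grace-period machinery is not yet fully active. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            const char *path = "/sys/module/rcutree/parameters/do_rcu_barrier";
            int fd = open(path, O_WRONLY);

            if (fd < 0) {
                    perror(path);
                    return 1;
            }
            if (write(fd, "1", 1) != 1)
                    perror("write");
            close(fd);
            return 0;
    }
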
* tag 'rcu-next-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks: (56 commits)
  srcu: Only accelerate on enqueue time
  locktorture: Check the correct variable for allocation failure
  srcu: Fix callbacks acceleration mishandling
  rcu: Comment why callbacks migration can't wait for CPUHP_RCUTREE_PREP
  rcu: Standardize explicit CPU-hotplug calls
  rcu: Conditionally build CPU-hotplug teardown callbacks
  rcu: Remove references to rcu_migrate_callbacks() from diagrams
  rcu: Assume rcu_report_dead() is always called locally
  rcu: Assume IRQS disabled from rcu_report_dead()
  rcu: Use rcu_segcblist_segempty() instead of open coding it
  rcu: kmemleak: Ignore kmemleak false positives when RCU-freeing objects
  srcu: Fix srcu_struct node grpmask overflow on 64-bit systems
  torture: Convert parse-console.sh to mktemp
  rcutorture: Traverse possible cpu to set maxcpu in rcu_nocb_toggle()
  rcutorture: Replace schedule_timeout*() 1-jiffy waits with HZ/20
  torture: Add kvm.sh --debug-info argument
  locktorture: Rename readers_bind/writers_bind to bind_readers/bind_writers
  doc: Catch-up update for locktorture module parameters
  locktorture: Add call_rcu_chains module parameter
  locktorture: Add new module parameters to lock_torture_print_module_parms()
  ...
Linus Torvalds 2023-10-30 18:01:41 -10:00
commit 2656821f1f
45 changed files with 810 additions and 352 deletions

View File

@ -181,7 +181,7 @@ operations is carried out at several levels:
of this wait (or series of waits, as the case may be) is to permit a
concurrent CPU-hotplug operation to complete.
#. In the case of RCU-sched, one of the last acts of an outgoing CPU is
to invoke ``rcu_report_dead()``, which reports a quiescent state for
to invoke ``rcutree_report_cpu_dead()``, which reports a quiescent state for
that CPU. However, this is likely paranoia-induced redundancy.
+-----------------------------------------------------------------------+

View File

@ -564,15 +564,6 @@
font-size="192"
id="text202-7-9-6"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcutree_migrate_callbacks()</text>
<text
xml:space="preserve"
x="8335.4873"
y="5357.1006"
font-style="normal"
font-weight="bold"
font-size="192"
id="text202-7-9-6-0"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_migrate_callbacks()</text>
<text
xml:space="preserve"
x="8768.4678"


View File

@ -1135,7 +1135,7 @@
font-weight="bold"
font-size="192"
id="text202-7-5-3-27-6-5"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_report_dead()</text>
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcutree_report_cpu_dead()</text>
<text
xml:space="preserve"
x="3745.7725"
@ -1256,7 +1256,7 @@
font-style="normal"
y="3679.27"
x="-3804.9949"
xml:space="preserve">rcu_cpu_starting()</text>
xml:space="preserve">rcutree_report_cpu_starting()</text>
<g
style="fill:none;stroke-width:0.025in"
id="g3107-7-5-0"


View File

@ -1446,15 +1446,6 @@
font-size="192"
id="text202-7-9-6"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcutree_migrate_callbacks()</text>
<text
xml:space="preserve"
x="8335.4873"
y="5357.1006"
font-style="normal"
font-weight="bold"
font-size="192"
id="text202-7-9-6-0"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_migrate_callbacks()</text>
<text
xml:space="preserve"
x="8768.4678"
@ -3274,7 +3265,7 @@
font-weight="bold"
font-size="192"
id="text202-7-5-3-27-6-5"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_report_dead()</text>
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcutree_report_cpu_dead()</text>
<text
xml:space="preserve"
x="3745.7725"
@ -3395,7 +3386,7 @@
font-style="normal"
y="3679.27"
x="-3804.9949"
xml:space="preserve">rcu_cpu_starting()</text>
xml:space="preserve">rcutree_report_cpu_starting()</text>
<g
style="fill:none;stroke-width:0.025in"
id="g3107-7-5-0"


View File

@ -607,7 +607,7 @@
font-weight="bold"
font-size="192"
id="text202-7-5-3-27-6"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_report_dead()</text>
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcutree_report_cpu_dead()</text>
<text
xml:space="preserve"
x="3745.7725"
@ -728,7 +728,7 @@
font-style="normal"
y="3679.27"
x="-3804.9949"
xml:space="preserve">rcu_cpu_starting()</text>
xml:space="preserve">rcutree_report_cpu_starting()</text>
<g
style="fill:none;stroke-width:0.025in"
id="g3107-7-5-0"


View File

@ -1955,12 +1955,12 @@ if offline CPUs block an RCU grace period for too long.
An offline CPU's quiescent state will be reported either:
1. As the CPU goes offline using RCU's hotplug notifier (rcu_report_dead()).
1. As the CPU goes offline using RCU's hotplug notifier (rcutree_report_cpu_dead()).
2. When grace period initialization (rcu_gp_init()) detects a
race either with CPU offlining or with a task unblocking on a leaf
``rcu_node`` structure whose CPUs are all offline.
The CPU-online path (rcu_cpu_starting()) should never need to report
The CPU-online path (rcutree_report_cpu_starting()) should never need to report
a quiescent state for an offline CPU. However, as a debugging measure,
it does emit a warning if a quiescent state was not already reported
for that CPU.

View File

@ -8,6 +8,15 @@ One of the most common uses of RCU is protecting read-mostly linked lists
that all of the required memory ordering is provided by the list macros.
This document describes several list-based RCU use cases.
When iterating a list while holding the rcu_read_lock(), writers may
modify the list. The reader is guaranteed to see all of the elements
which were added to the list before they acquired the rcu_read_lock()
and are still on the list when they drop the rcu_read_unlock().
Elements which are added to, or removed from the list may or may not
be seen. If the writer calls list_replace_rcu(), the reader may see
either the old element or the new element; they will not see both,
nor will they see neither.
Example 1: Read-mostly list: Deferred Destruction
-------------------------------------------------

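The reader-side guarantee added to listRCU.rst in the hunk above can be
illustrated with a minimal sketch; struct foo, my_head and find_data()
are hypothetical and not taken from the document, and writers are
assumed to add and remove entries under their own lock:

    #include <linux/errno.h>
    #include <linux/list.h>
    #include <linux/rculist.h>
    #include <linux/rcupdate.h>

    struct foo {
            struct list_head list;
            int key;
            int data;
    };

    static LIST_HEAD(my_head);  /* writers add/remove under their own lock */

    /* Reader: may run concurrently with list_add_rcu()/list_del_rcu().
     * Every entry that stays on the list for the whole critical section
     * is guaranteed to be seen; concurrently added or removed entries
     * may or may not be. */
    static int find_data(int key)
    {
            struct foo *p;
            int ret = -ENOENT;

            rcu_read_lock();
            list_for_each_entry_rcu(p, &my_head, list) {
                    if (p->key == key) {
                            ret = p->data;
                            break;
                    }
            }
            rcu_read_unlock();
            return ret;
    }
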
View File

@ -59,8 +59,8 @@ experiment with should focus on Section 2. People who prefer to start
with example uses should focus on Sections 3 and 4. People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code. People who reason best by analogy should
focus on Section 6. Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.
focus on Section 6 and 7. Section 8 serves as an index to the docbook
API documentation, and Section 9 is the traditional answer key.
So, start with the section that makes the most sense to you and your
preferred method of learning. If you need to know everything about

View File

@ -2919,6 +2919,38 @@
to extract confidential information from the kernel
are also disabled.
locktorture.acq_writer_lim= [KNL]
Set the time limit in jiffies for a lock
acquisition. Acquisitions exceeding this limit
will result in a splat once they do complete.
locktorture.bind_readers= [KNL]
Specify the list of CPUs to which the readers are
to be bound.
locktorture.bind_writers= [KNL]
Specify the list of CPUs to which the writers are
to be bound.
locktorture.call_rcu_chains= [KNL]
Specify the number of self-propagating call_rcu()
chains to set up. These are used to ensure that
there is a high probability of an RCU grace period
in progress at any given time. Defaults to 0,
which disables these call_rcu() chains.
locktorture.long_hold= [KNL]
Specify the duration in milliseconds for the
occasional long-duration lock hold time. Defaults
to 100 milliseconds. Select 0 to disable.
locktorture.nested_locks= [KNL]
Specify the maximum lock nesting depth that
locktorture is to exercise, up to a limit of 8
(MAX_NESTED_LOCKS). Specify zero to disable.
Note that this parameter is ineffective on types
of locks that do not support nested acquisition.
locktorture.nreaders_stress= [KNL]
Set the number of locking read-acquisition kthreads.
Defaults to being automatically set based on the
@ -2934,6 +2966,25 @@
Set time (s) between CPU-hotplug operations, or
zero to disable CPU-hotplug testing.
locktorture.rt_boost= [KNL]
Do periodic testing of real-time lock priority
boosting. Select 0 to disable, 1 to boost
only rt_mutex, and 2 to boost unconditionally.
Defaults to 2, which might seem to be an
odd choice, but which should be harmless for
non-real-time spinlocks, due to their disabling
of preemption. Note that non-realtime mutexes
disable boosting.
locktorture.rt_boost_factor= [KNL]
Number that determines how often and for how
long priority boosting is exercised. This is
scaled down by the number of writers, so that the
number of boosts per unit time remains roughly
constant as the number of writers increases.
On the other hand, the duration of each boost
increases with the number of writers.
locktorture.shuffle_interval= [KNL]
Set task-shuffle interval (jiffies). Shuffling
tasks allows some CPUs to go into dyntick-idle
@ -2956,13 +3007,13 @@
locktorture.torture_type= [KNL]
Specify the locking implementation to test.
locktorture.verbose= [KNL]
Enable additional printk() statements.
locktorture.writer_fifo= [KNL]
Run the write-side locktorture kthreads at
sched_set_fifo() real-time priority.
locktorture.verbose= [KNL]
Enable additional printk() statements.
logibm.irq= [HW,MOUSE] Logitech Bus Mouse Driver
Format: <irq>
@ -4775,6 +4826,13 @@
Set maximum number of finished RCU callbacks to
process in one batch.
rcutree.do_rcu_barrier= [KNL]
Request a call to rcu_barrier(). This is
throttled so that userspace tests can safely
hammer on the sysfs variable if they so choose.
If triggered before the RCU grace-period machinery
is fully active, this will error out with EAGAIN.
rcutree.dump_tree= [KNL]
Dump the structure of the rcu_node combining tree
out at early boot. This is used for diagnostic
@ -5428,6 +5486,12 @@
test until boot completes in order to avoid
interference.
refscale.lookup_instances= [KNL]
Number of data elements to use for the forms of
SLAB_TYPESAFE_BY_RCU testing. A negative number
is negated and multiplied by nr_cpu_ids, while
zero specifies nr_cpu_ids.
refscale.loops= [KNL]
Set the number of loops over the synchronization
primitive under test. Increasing this number

View File

@ -215,7 +215,7 @@ asmlinkage notrace void secondary_start_kernel(void)
if (system_uses_irq_prio_masking())
init_gic_priority_masking();
rcu_cpu_starting(cpu);
rcutree_report_cpu_starting(cpu);
trace_hardirqs_off();
/*
@ -401,7 +401,7 @@ void __noreturn cpu_die_early(void)
/* Mark this CPU absent */
set_cpu_present(cpu, 0);
rcu_report_dead(cpu);
rcutree_report_cpu_dead();
if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
update_cpu_boot_status(CPU_KILL_ME);

View File

@ -1629,7 +1629,7 @@ void start_secondary(void *unused)
smp_store_cpu_info(cpu);
set_dec(tb_ticks_per_jiffy);
rcu_cpu_starting(cpu);
rcutree_report_cpu_starting(cpu);
cpu_callin_map[cpu] = 1;
if (smp_ops->setup_cpu)

View File

@ -898,7 +898,7 @@ static void smp_start_secondary(void *cpuvoid)
S390_lowcore.restart_flags = 0;
restore_access_regs(S390_lowcore.access_regs_save_area);
cpu_init();
rcu_cpu_starting(cpu);
rcutree_report_cpu_starting(cpu);
init_cpu_timer();
vtime_init();
vdso_getcpu_init();

View File

@ -302,7 +302,7 @@ static void notrace start_secondary(void *unused)
cpu_init();
fpu__init_cpu();
rcu_cpu_starting(raw_smp_processor_id());
rcutree_report_cpu_starting(raw_smp_processor_id());
x86_cpuinit.early_percpu_clock_init();
ap_starting();

View File

@ -566,7 +566,7 @@ enum
*
* _ RCU:
* 1) rcutree_migrate_callbacks() migrates the queue.
* 2) rcu_report_dead() reports the final quiescent states.
* 2) rcutree_report_cpu_dead() reports the final quiescent states.
*
* _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue
*

View File

@ -0,0 +1,32 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Read-Copy Update notifiers, initially RCU CPU stall notifier.
* Separate from rcupdate.h to avoid #include loops.
*
* Copyright (C) 2023 Paul E. McKenney.
*/
#ifndef __LINUX_RCU_NOTIFIER_H
#define __LINUX_RCU_NOTIFIER_H
// Actions for RCU CPU stall notifier calls.
#define RCU_STALL_NOTIFY_NORM 1
#define RCU_STALL_NOTIFY_EXP 2
#ifdef CONFIG_RCU_STALL_COMMON
#include <linux/notifier.h>
#include <linux/types.h>
int rcu_stall_chain_notifier_register(struct notifier_block *n);
int rcu_stall_chain_notifier_unregister(struct notifier_block *n);
#else // #ifdef CONFIG_RCU_STALL_COMMON
// No RCU CPU stall warnings in Tiny RCU.
static inline int rcu_stall_chain_notifier_register(struct notifier_block *n) { return -EEXIST; }
static inline int rcu_stall_chain_notifier_unregister(struct notifier_block *n) { return -ENOENT; }
#endif // #else // #ifdef CONFIG_RCU_STALL_COMMON
#endif /* __LINUX_RCU_NOTIFIER_H */

View File

@ -122,8 +122,6 @@ static inline void call_rcu_hurry(struct rcu_head *head, rcu_callback_t func)
void rcu_init(void);
extern int rcu_scheduler_active;
void rcu_sched_clock_irq(int user);
void rcu_report_dead(unsigned int cpu);
void rcutree_migrate_callbacks(int cpu);
#ifdef CONFIG_TASKS_RCU_GENERIC
void rcu_init_tasks_generic(void);

View File

@ -171,6 +171,6 @@ static inline void rcu_all_qs(void) { barrier(); }
#define rcutree_offline_cpu NULL
#define rcutree_dead_cpu NULL
#define rcutree_dying_cpu NULL
static inline void rcu_cpu_starting(unsigned int cpu) { }
static inline void rcutree_report_cpu_starting(unsigned int cpu) { }
#endif /* __LINUX_RCUTINY_H */

View File

@ -37,7 +37,6 @@ void synchronize_rcu_expedited(void);
void kvfree_call_rcu(struct rcu_head *head, void *ptr);
void rcu_barrier(void);
bool rcu_eqs_special_set(int cpu);
void rcu_momentary_dyntick_idle(void);
void kfree_rcu_scheduler_running(void);
bool rcu_gp_might_be_stalled(void);
@ -111,9 +110,21 @@ void rcu_all_qs(void);
/* RCUtree hotplug events */
int rcutree_prepare_cpu(unsigned int cpu);
int rcutree_online_cpu(unsigned int cpu);
int rcutree_offline_cpu(unsigned int cpu);
void rcutree_report_cpu_starting(unsigned int cpu);
#ifdef CONFIG_HOTPLUG_CPU
int rcutree_dead_cpu(unsigned int cpu);
int rcutree_dying_cpu(unsigned int cpu);
void rcu_cpu_starting(unsigned int cpu);
int rcutree_offline_cpu(unsigned int cpu);
#else
#define rcutree_dead_cpu NULL
#define rcutree_dying_cpu NULL
#define rcutree_offline_cpu NULL
#endif
void rcutree_migrate_callbacks(int cpu);
/* Called from hotplug and also arm64 early secondary boot failure */
void rcutree_report_cpu_dead(void);
#endif /* __LINUX_RCUTREE_H */

View File

@ -245,8 +245,9 @@ DEFINE_FREE(kfree, void *, if (_T) kfree(_T))
size_t ksize(const void *objp);
#ifdef CONFIG_PRINTK
bool kmem_valid_obj(void *object);
void kmem_dump_obj(void *object);
bool kmem_dump_obj(void *object);
#else
static inline bool kmem_dump_obj(void *object) { return false; }
#endif
/*

View File

@ -81,7 +81,8 @@ static inline void torture_random_init(struct torture_random_state *trsp)
}
/* Definitions for high-resolution-timer sleeps. */
int torture_hrtimeout_ns(ktime_t baset_ns, u32 fuzzt_ns, struct torture_random_state *trsp);
int torture_hrtimeout_ns(ktime_t baset_ns, u32 fuzzt_ns, const enum hrtimer_mode mode,
struct torture_random_state *trsp);
int torture_hrtimeout_us(u32 baset_us, u32 fuzzt_ns, struct torture_random_state *trsp);
int torture_hrtimeout_ms(u32 baset_ms, u32 fuzzt_us, struct torture_random_state *trsp);
int torture_hrtimeout_jiffies(u32 baset_j, struct torture_random_state *trsp);
@ -120,10 +121,15 @@ void _torture_stop_kthread(char *m, struct task_struct **tp);
#define torture_stop_kthread(n, tp) \
_torture_stop_kthread("Stopping " #n " task", &(tp))
/* Scheduler-related definitions. */
#ifdef CONFIG_PREEMPTION
#define torture_preempt_schedule() __preempt_schedule()
#else
#define torture_preempt_schedule() do { } while (0)
#endif
#if IS_ENABLED(CONFIG_RCU_TORTURE_TEST) || IS_MODULE(CONFIG_RCU_TORTURE_TEST) || IS_ENABLED(CONFIG_LOCK_TORTURE_TEST) || IS_MODULE(CONFIG_LOCK_TORTURE_TEST)
long torture_sched_setaffinity(pid_t pid, const struct cpumask *in_mask);
#endif
#endif /* __LINUX_TORTURE_H */

View File

@ -1380,7 +1380,14 @@ static int takedown_cpu(unsigned int cpu)
cpuhp_bp_sync_dead(cpu);
tick_cleanup_dead_cpu(cpu);
/*
* Callbacks must be re-integrated right away to the RCU state machine.
* Otherwise an RCU callback could block a further teardown function
* waiting for its completion.
*/
rcutree_migrate_callbacks(cpu);
return 0;
}
@ -1396,10 +1403,10 @@ void cpuhp_report_idle_dead(void)
struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);
BUG_ON(st->state != CPUHP_AP_OFFLINE);
rcu_report_dead(smp_processor_id());
rcutree_report_cpu_dead();
st->state = CPUHP_AP_IDLE_DEAD;
/*
* We cannot call complete after rcu_report_dead() so we delegate it
* We cannot call complete after rcutree_report_cpu_dead() so we delegate it
* to an online cpu.
*/
smp_call_function_single(cpumask_first(cpu_online_mask),
@ -1628,7 +1635,7 @@ void notify_cpu_starting(unsigned int cpu)
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
enum cpuhp_state target = min((int)st->target, CPUHP_AP_ONLINE);
rcu_cpu_starting(cpu); /* Enables RCU usage on this CPU. */
rcutree_report_cpu_starting(cpu); /* Enables RCU usage on this CPU. */
cpumask_set_cpu(cpu, &cpus_booted_once_mask);
/*

View File

@ -33,21 +33,23 @@
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com>");
torture_param(int, nwriters_stress, -1, "Number of write-locking stress-test threads");
torture_param(int, nreaders_stress, -1, "Number of read-locking stress-test threads");
torture_param(int, acq_writer_lim, 0, "Write_acquisition time limit (jiffies).");
torture_param(int, call_rcu_chains, 0, "Self-propagate call_rcu() chains during test (0=disable).");
torture_param(int, long_hold, 100, "Do occasional long hold of lock (ms), 0=disable");
torture_param(int, nested_locks, 0, "Number of nested locks (max = 8)");
torture_param(int, nreaders_stress, -1, "Number of read-locking stress-test threads");
torture_param(int, nwriters_stress, -1, "Number of write-locking stress-test threads");
torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)");
torture_param(int, onoff_interval, 0, "Time between CPU hotplugs (s), 0=disable");
torture_param(int, rt_boost, 2,
"Do periodic rt-boost. 0=Disable, 1=Only for rt_mutex, 2=For all lock types.");
torture_param(int, rt_boost_factor, 50, "A factor determining how often rt-boost happens.");
torture_param(int, shuffle_interval, 3, "Number of jiffies between shuffles, 0=disable");
torture_param(int, shutdown_secs, 0, "Shutdown time (j), <= zero to disable.");
torture_param(int, stat_interval, 60, "Number of seconds between stats printk()s");
torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable");
torture_param(int, rt_boost, 2,
"Do periodic rt-boost. 0=Disable, 1=Only for rt_mutex, 2=For all lock types.");
torture_param(int, rt_boost_factor, 50, "A factor determining how often rt-boost happens.");
torture_param(int, writer_fifo, 0, "Run writers at sched_set_fifo() priority");
torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
torture_param(int, nested_locks, 0, "Number of nested locks (max = 8)");
torture_param(int, writer_fifo, 0, "Run writers at sched_set_fifo() priority");
/* Going much higher trips "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!" errors */
#define MAX_NESTED_LOCKS 8
@ -56,6 +58,55 @@ module_param(torture_type, charp, 0444);
MODULE_PARM_DESC(torture_type,
"Type of lock to torture (spin_lock, spin_lock_irq, mutex_lock, ...)");
static cpumask_var_t bind_readers; // Bind the readers to the specified set of CPUs.
static cpumask_var_t bind_writers; // Bind the writers to the specified set of CPUs.
// Parse a cpumask kernel parameter. If there are more users later on,
// this might need to got to a more central location.
static int param_set_cpumask(const char *val, const struct kernel_param *kp)
{
cpumask_var_t *cm_bind = kp->arg;
int ret;
char *s;
if (!alloc_cpumask_var(cm_bind, GFP_KERNEL)) {
s = "Out of memory";
ret = -ENOMEM;
goto out_err;
}
ret = cpulist_parse(val, *cm_bind);
if (!ret)
return ret;
s = "Bad CPU range";
out_err:
pr_warn("%s: %s, all CPUs set\n", kp->name, s);
cpumask_setall(*cm_bind);
return ret;
}
// Output a cpumask kernel parameter.
static int param_get_cpumask(char *buffer, const struct kernel_param *kp)
{
cpumask_var_t *cm_bind = kp->arg;
return sprintf(buffer, "%*pbl", cpumask_pr_args(*cm_bind));
}
static bool cpumask_nonempty(cpumask_var_t mask)
{
return cpumask_available(mask) && !cpumask_empty(mask);
}
static const struct kernel_param_ops lt_bind_ops = {
.set = param_set_cpumask,
.get = param_get_cpumask,
};
module_param_cb(bind_readers, &lt_bind_ops, &bind_readers, 0644);
module_param_cb(bind_writers, &lt_bind_ops, &bind_writers, 0644);
long torture_sched_setaffinity(pid_t pid, const struct cpumask *in_mask);
static struct task_struct *stats_task;
static struct task_struct **writer_tasks;
static struct task_struct **reader_tasks;
@ -69,6 +120,12 @@ struct lock_stress_stats {
long n_lock_acquired;
};
struct call_rcu_chain {
struct rcu_head crc_rh;
bool crc_stop;
};
struct call_rcu_chain *call_rcu_chain;
/* Forward reference. */
static void lock_torture_cleanup(void);
@ -116,12 +173,9 @@ static int torture_lock_busted_write_lock(int tid __maybe_unused)
static void torture_lock_busted_write_delay(struct torture_random_state *trsp)
{
const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX;
/* We want a long delay occasionally to force massive contention. */
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms);
if (long_hold && !(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * long_hold)))
mdelay(long_hold);
if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
torture_preempt_schedule(); /* Allow test to be preempted. */
}
@ -194,15 +248,14 @@ __acquires(torture_spinlock)
static void torture_spin_lock_write_delay(struct torture_random_state *trsp)
{
const unsigned long shortdelay_us = 2;
const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX;
unsigned long j;
/* We want a short delay mostly to emulate likely code, and
* we want a long delay occasionally to force massive contention.
*/
if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * longdelay_ms))) {
if (long_hold && !(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * long_hold))) {
j = jiffies;
mdelay(longdelay_ms);
mdelay(long_hold);
pr_alert("%s: delay = %lu jiffies.\n", __func__, jiffies - j);
}
if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 200 * shortdelay_us)))
@ -320,14 +373,12 @@ __acquires(torture_rwlock)
static void torture_rwlock_write_delay(struct torture_random_state *trsp)
{
const unsigned long shortdelay_us = 2;
const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX;
/* We want a short delay mostly to emulate likely code, and
* we want a long delay occasionally to force massive contention.
*/
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms);
if (long_hold && !(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * long_hold)))
mdelay(long_hold);
else
udelay(shortdelay_us);
}
@ -348,14 +399,12 @@ __acquires(torture_rwlock)
static void torture_rwlock_read_delay(struct torture_random_state *trsp)
{
const unsigned long shortdelay_us = 10;
const unsigned long longdelay_ms = 100;
/* We want a short delay mostly to emulate likely code, and
* we want a long delay occasionally to force massive contention.
*/
if (!(torture_random(trsp) %
(cxt.nrealreaders_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms);
if (long_hold && !(torture_random(trsp) % (cxt.nrealreaders_stress * 2000 * long_hold)))
mdelay(long_hold);
else
udelay(shortdelay_us);
}
@ -453,12 +502,9 @@ __acquires(torture_mutex)
static void torture_mutex_delay(struct torture_random_state *trsp)
{
const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX;
/* We want a long delay occasionally to force massive contention. */
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms * 5);
if (long_hold && !(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * long_hold)))
mdelay(long_hold * 5);
if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
torture_preempt_schedule(); /* Allow test to be preempted. */
}
@ -626,15 +672,13 @@ __acquires(torture_rtmutex)
static void torture_rtmutex_delay(struct torture_random_state *trsp)
{
const unsigned long shortdelay_us = 2;
const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX;
/*
* We want a short delay mostly to emulate likely code, and
* we want a long delay occasionally to force massive contention.
*/
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms);
if (long_hold && !(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * long_hold)))
mdelay(long_hold);
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 200 * shortdelay_us)))
udelay(shortdelay_us);
@ -691,12 +735,9 @@ __acquires(torture_rwsem)
static void torture_rwsem_write_delay(struct torture_random_state *trsp)
{
const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX;
/* We want a long delay occasionally to force massive contention. */
if (!(torture_random(trsp) %
(cxt.nrealwriters_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms * 10);
if (long_hold && !(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * long_hold)))
mdelay(long_hold * 10);
if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
torture_preempt_schedule(); /* Allow test to be preempted. */
}
@ -716,14 +757,11 @@ __acquires(torture_rwsem)
static void torture_rwsem_read_delay(struct torture_random_state *trsp)
{
const unsigned long longdelay_ms = 100;
/* We want a long delay occasionally to force massive contention. */
if (!(torture_random(trsp) %
(cxt.nrealreaders_stress * 2000 * longdelay_ms)))
mdelay(longdelay_ms * 2);
if (long_hold && !(torture_random(trsp) % (cxt.nrealreaders_stress * 2000 * long_hold)))
mdelay(long_hold * 2);
else
mdelay(longdelay_ms / 2);
mdelay(long_hold / 2);
if (!(torture_random(trsp) % (cxt.nrealreaders_stress * 20000)))
torture_preempt_schedule(); /* Allow test to be preempted. */
}
@ -803,11 +841,13 @@ static struct lock_torture_ops percpu_rwsem_lock_ops = {
*/
static int lock_torture_writer(void *arg)
{
struct lock_stress_stats *lwsp = arg;
int tid = lwsp - cxt.lwsa;
DEFINE_TORTURE_RANDOM(rand);
unsigned long j;
unsigned long j1;
u32 lockset_mask;
struct lock_stress_stats *lwsp = arg;
DEFINE_TORTURE_RANDOM(rand);
bool skip_main_lock;
int tid = lwsp - cxt.lwsa;
VERBOSE_TOROUT_STRING("lock_torture_writer task started");
if (!rt_task(current))
@ -834,17 +874,24 @@ static int lock_torture_writer(void *arg)
cxt.cur_ops->nested_lock(tid, lockset_mask);
if (!skip_main_lock) {
if (acq_writer_lim > 0)
j = jiffies;
cxt.cur_ops->writelock(tid);
if (WARN_ON_ONCE(lock_is_write_held))
lwsp->n_lock_fail++;
lock_is_write_held = true;
if (WARN_ON_ONCE(atomic_read(&lock_is_read_held)))
lwsp->n_lock_fail++; /* rare, but... */
if (acq_writer_lim > 0) {
j1 = jiffies;
WARN_ONCE(time_after(j1, j + acq_writer_lim),
"%s: Lock acquisition took %lu jiffies.\n",
__func__, j1 - j);
}
lwsp->n_lock_acquired++;
}
if (!skip_main_lock) {
cxt.cur_ops->write_delay(&rand);
lock_is_write_held = false;
WRITE_ONCE(last_lock_release, jiffies);
cxt.cur_ops->writeunlock(tid);
@ -986,16 +1033,69 @@ static int lock_torture_stats(void *arg)
return 0;
}
static inline void
lock_torture_print_module_parms(struct lock_torture_ops *cur_ops,
const char *tag)
{
static cpumask_t cpumask_all;
cpumask_t *rcmp = cpumask_nonempty(bind_readers) ? bind_readers : &cpumask_all;
cpumask_t *wcmp = cpumask_nonempty(bind_writers) ? bind_writers : &cpumask_all;
cpumask_setall(&cpumask_all);
pr_alert("%s" TORTURE_FLAG
"--- %s%s: nwriters_stress=%d nreaders_stress=%d nested_locks=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
"--- %s%s: acq_writer_lim=%d bind_readers=%*pbl bind_writers=%*pbl call_rcu_chains=%d long_hold=%d nested_locks=%d nreaders_stress=%d nwriters_stress=%d onoff_holdoff=%d onoff_interval=%d rt_boost=%d rt_boost_factor=%d shuffle_interval=%d shutdown_secs=%d stat_interval=%d stutter=%d verbose=%d writer_fifo=%d\n",
torture_type, tag, cxt.debug_lock ? " [debug]": "",
cxt.nrealwriters_stress, cxt.nrealreaders_stress,
nested_locks, stat_interval, verbose, shuffle_interval,
stutter, shutdown_secs, onoff_interval, onoff_holdoff);
acq_writer_lim, cpumask_pr_args(rcmp), cpumask_pr_args(wcmp),
call_rcu_chains, long_hold, nested_locks, cxt.nrealreaders_stress,
cxt.nrealwriters_stress, onoff_holdoff, onoff_interval, rt_boost,
rt_boost_factor, shuffle_interval, shutdown_secs, stat_interval, stutter,
verbose, writer_fifo);
}
// If requested, maintain call_rcu() chains to keep a grace period always
// in flight. These increase the probability of getting an RCU CPU stall
// warning and associated diagnostics when a locking primitive stalls.
static void call_rcu_chain_cb(struct rcu_head *rhp)
{
struct call_rcu_chain *crcp = container_of(rhp, struct call_rcu_chain, crc_rh);
if (!smp_load_acquire(&crcp->crc_stop)) {
(void)start_poll_synchronize_rcu(); // Start one grace period...
call_rcu(&crcp->crc_rh, call_rcu_chain_cb); // ... and later start another.
}
}
// Start the requested number of call_rcu() chains.
static int call_rcu_chain_init(void)
{
int i;
if (call_rcu_chains <= 0)
return 0;
call_rcu_chain = kcalloc(call_rcu_chains, sizeof(*call_rcu_chain), GFP_KERNEL);
if (!call_rcu_chain)
return -ENOMEM;
for (i = 0; i < call_rcu_chains; i++) {
call_rcu_chain[i].crc_stop = false;
call_rcu(&call_rcu_chain[i].crc_rh, call_rcu_chain_cb);
}
return 0;
}
// Stop all of the call_rcu() chains.
static void call_rcu_chain_cleanup(void)
{
int i;
if (!call_rcu_chain)
return;
for (i = 0; i < call_rcu_chains; i++)
smp_store_release(&call_rcu_chain[i].crc_stop, true);
rcu_barrier();
kfree(call_rcu_chain);
call_rcu_chain = NULL;
}
static void lock_torture_cleanup(void)
@ -1048,6 +1148,8 @@ static void lock_torture_cleanup(void)
kfree(cxt.lrsa);
cxt.lrsa = NULL;
call_rcu_chain_cleanup();
end:
if (cxt.init_called) {
if (cxt.cur_ops->exit)
@ -1177,6 +1279,10 @@ static int __init lock_torture_init(void)
}
}
firsterr = call_rcu_chain_init();
if (torture_init_error(firsterr))
goto unwind;
lock_torture_print_module_parms(cxt.cur_ops, "Start of test");
/* Prepare torture context. */
@ -1250,6 +1356,8 @@ static int __init lock_torture_init(void)
writer_fifo ? sched_set_fifo : NULL);
if (torture_init_error(firsterr))
goto unwind;
if (cpumask_nonempty(bind_writers))
torture_sched_setaffinity(writer_tasks[i]->pid, bind_writers);
create_reader:
if (cxt.cur_ops->readlock == NULL || (j >= cxt.nrealreaders_stress))
@ -1259,6 +1367,8 @@ static int __init lock_torture_init(void)
reader_tasks[j]);
if (torture_init_error(firsterr))
goto unwind;
if (cpumask_nonempty(bind_readers))
torture_sched_setaffinity(reader_tasks[j]->pid, bind_readers);
}
if (stat_interval > 0) {
firsterr = torture_create_kthread(lock_torture_stats, NULL,

View File

@ -10,6 +10,7 @@
#ifndef __LINUX_RCU_H
#define __LINUX_RCU_H
#include <linux/slab.h>
#include <trace/events/rcu.h>
/*
@ -248,6 +249,12 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)
}
#endif /* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
static inline void debug_rcu_head_callback(struct rcu_head *rhp)
{
if (unlikely(!rhp->func))
kmem_dump_obj(rhp);
}
extern int rcu_cpu_stall_suppress_at_boot;
static inline bool rcu_stall_is_suppressed_at_boot(void)
@ -568,10 +575,6 @@ void do_trace_rcu_torture_read(const char *rcutorturename,
static inline void rcu_gp_set_torture_wait(int duration) { }
#endif
#if IS_ENABLED(CONFIG_RCU_TORTURE_TEST) || IS_MODULE(CONFIG_RCU_TORTURE_TEST)
long rcutorture_sched_setaffinity(pid_t pid, const struct cpumask *in_mask);
#endif
#ifdef CONFIG_TINY_SRCU
static inline void srcutorture_get_gp_data(enum rcutorture_type test_type,
@ -654,4 +657,10 @@ static inline bool rcu_cpu_beenfullyonline(int cpu) { return true; }
bool rcu_cpu_beenfullyonline(int cpu);
#endif
#ifdef CONFIG_RCU_STALL_COMMON
int rcu_stall_notifier_call_chain(unsigned long val, void *v);
#else // #ifdef CONFIG_RCU_STALL_COMMON
static inline int rcu_stall_notifier_call_chain(unsigned long val, void *v) { return NOTIFY_DONE; }
#endif // #else // #ifdef CONFIG_RCU_STALL_COMMON
#endif /* __LINUX_RCU_H */

View File

@ -368,7 +368,7 @@ bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp,
smp_mb(); /* Ensure counts are updated before callback is entrained. */
rhp->next = NULL;
for (i = RCU_NEXT_TAIL; i > RCU_DONE_TAIL; i--)
if (rsclp->tails[i] != rsclp->tails[i - 1])
if (!rcu_segcblist_segempty(rsclp, i))
break;
rcu_segcblist_inc_seglen(rsclp, i);
WRITE_ONCE(*rsclp->tails[i], rhp);
@ -551,7 +551,7 @@ bool rcu_segcblist_accelerate(struct rcu_segcblist *rsclp, unsigned long seq)
* as their ->gp_seq[] grace-period completion sequence number.
*/
for (i = RCU_NEXT_READY_TAIL; i > RCU_DONE_TAIL; i--)
if (rsclp->tails[i] != rsclp->tails[i - 1] &&
if (!rcu_segcblist_segempty(rsclp, i) &&
ULONG_CMP_LT(rsclp->gp_seq[i], seq))
break;

View File

@ -21,6 +21,7 @@
#include <linux/spinlock.h>
#include <linux/smp.h>
#include <linux/rcupdate_wait.h>
#include <linux/rcu_notifier.h>
#include <linux/interrupt.h>
#include <linux/sched/signal.h>
#include <uapi/linux/sched/types.h>
@ -810,7 +811,7 @@ static void synchronize_rcu_trivial(void)
int cpu;
for_each_online_cpu(cpu) {
rcutorture_sched_setaffinity(current->pid, cpumask_of(cpu));
torture_sched_setaffinity(current->pid, cpumask_of(cpu));
WARN_ON_ONCE(raw_smp_processor_id() != cpu);
}
}
@ -1149,7 +1150,7 @@ static int rcu_torture_boost(void *arg)
mutex_unlock(&boost_mutex);
break;
}
schedule_timeout_uninterruptible(1);
schedule_timeout_uninterruptible(HZ / 20);
}
/* Go do the stutter. */
@ -1160,7 +1161,7 @@ checkwait: if (stutter_wait("rcu_torture_boost"))
/* Clean up and exit. */
while (!kthread_should_stop()) {
torture_shutdown_absorb("rcu_torture_boost");
schedule_timeout_uninterruptible(1);
schedule_timeout_uninterruptible(HZ / 20);
}
torture_kthread_stopping("rcu_torture_boost");
return 0;
@ -1183,7 +1184,7 @@ rcu_torture_fqs(void *arg)
fqs_resume_time = jiffies + fqs_stutter * HZ;
while (time_before(jiffies, fqs_resume_time) &&
!kthread_should_stop()) {
schedule_timeout_interruptible(1);
schedule_timeout_interruptible(HZ / 20);
}
fqs_burst_remaining = fqs_duration;
while (fqs_burst_remaining > 0 &&
@ -2126,7 +2127,7 @@ static int rcu_nocb_toggle(void *arg)
VERBOSE_TOROUT_STRING("rcu_nocb_toggle task started");
while (!rcu_inkernel_boot_has_ended())
schedule_timeout_interruptible(HZ / 10);
for_each_online_cpu(cpu)
for_each_possible_cpu(cpu)
maxcpu = cpu;
WARN_ON(maxcpu < 0);
if (toggle_interval > ULONG_MAX)
@ -2428,6 +2429,16 @@ static int rcutorture_booster_init(unsigned int cpu)
return 0;
}
static int rcu_torture_stall_nf(struct notifier_block *nb, unsigned long v, void *ptr)
{
pr_info("%s: v=%lu, duration=%lu.\n", __func__, v, (unsigned long)ptr);
return NOTIFY_OK;
}
static struct notifier_block rcu_torture_stall_block = {
.notifier_call = rcu_torture_stall_nf,
};
/*
* CPU-stall kthread. It waits as specified by stall_cpu_holdoff, then
* induces a CPU stall for the time specified by stall_cpu.
@ -2435,9 +2446,14 @@ static int rcutorture_booster_init(unsigned int cpu)
static int rcu_torture_stall(void *args)
{
int idx;
int ret;
unsigned long stop_at;
VERBOSE_TOROUT_STRING("rcu_torture_stall task started");
ret = rcu_stall_chain_notifier_register(&rcu_torture_stall_block);
if (ret)
pr_info("%s: rcu_stall_chain_notifier_register() returned %d, %sexpected.\n",
__func__, ret, !IS_ENABLED(CONFIG_RCU_STALL_COMMON) ? "un" : "");
if (stall_cpu_holdoff > 0) {
VERBOSE_TOROUT_STRING("rcu_torture_stall begin holdoff");
schedule_timeout_interruptible(stall_cpu_holdoff * HZ);
@ -2481,6 +2497,11 @@ static int rcu_torture_stall(void *args)
cur_ops->readunlock(idx);
}
pr_alert("%s end.\n", __func__);
if (!ret) {
ret = rcu_stall_chain_notifier_unregister(&rcu_torture_stall_block);
if (ret)
pr_info("%s: rcu_stall_chain_notifier_unregister() returned %d.\n", __func__, ret);
}
torture_shutdown_absorb("rcu_torture_stall");
while (!kthread_should_stop())
schedule_timeout_interruptible(10 * HZ);
@ -2899,7 +2920,7 @@ static int rcu_torture_fwd_prog(void *args)
WRITE_ONCE(rcu_fwd_seq, rcu_fwd_seq + 1);
} else {
while (READ_ONCE(rcu_fwd_seq) == oldseq && !torture_must_stop())
schedule_timeout_interruptible(1);
schedule_timeout_interruptible(HZ / 20);
oldseq = READ_ONCE(rcu_fwd_seq);
}
pr_alert("%s: Starting forward-progress test %d\n", __func__, rfp->rcu_fwd_id);
@ -3200,7 +3221,7 @@ static int rcu_torture_read_exit_child(void *trsp_in)
set_user_nice(current, MAX_NICE);
// Minimize time between reading and exiting.
while (!kthread_should_stop())
schedule_timeout_uninterruptible(1);
schedule_timeout_uninterruptible(HZ / 20);
(void)rcu_torture_one_read(trsp, -1);
return 0;
}
@ -3248,7 +3269,7 @@ static int rcu_torture_read_exit(void *unused)
smp_mb(); // Store before wakeup.
wake_up(&read_exit_wq);
while (!torture_must_stop())
schedule_timeout_uninterruptible(1);
schedule_timeout_uninterruptible(HZ / 20);
torture_kthread_stopping("rcu_torture_read_exit");
return 0;
}

View File

@ -655,12 +655,12 @@ retry:
goto retry;
}
un_delay(udl, ndl);
b = READ_ONCE(rtsp->a);
// Remember, seqlock read-side release can fail.
if (!rts_release(rtsp, start)) {
rcu_read_unlock();
goto retry;
}
b = READ_ONCE(rtsp->a);
WARN_ONCE(a != b, "Re-read of ->a changed from %u to %u.\n", a, b);
b = rtsp->b;
rcu_read_unlock();
@ -1025,8 +1025,8 @@ static void
ref_scale_print_module_parms(struct ref_scale_ops *cur_ops, const char *tag)
{
pr_alert("%s" SCALE_FLAG
"--- %s: verbose=%d shutdown=%d holdoff=%d loops=%ld nreaders=%d nruns=%d readdelay=%d\n", scale_type, tag,
verbose, shutdown, holdoff, loops, nreaders, nruns, readdelay);
"--- %s: verbose=%d verbose_batched=%d shutdown=%d holdoff=%d lookup_instances=%ld loops=%ld nreaders=%d nruns=%d readdelay=%d\n", scale_type, tag,
verbose, verbose_batched, shutdown, holdoff, lookup_instances, loops, nreaders, nruns, readdelay);
}
static void

View File

@ -138,6 +138,7 @@ void srcu_drive_gp(struct work_struct *wp)
while (lh) {
rhp = lh;
lh = lh->next;
debug_rcu_head_callback(rhp);
local_bh_disable();
rhp->func(rhp);
local_bh_enable();

View File

@ -223,7 +223,7 @@ static bool init_srcu_struct_nodes(struct srcu_struct *ssp, gfp_t gfp_flags)
snp->grplo = cpu;
snp->grphi = cpu;
}
sdp->grpmask = 1 << (cpu - sdp->mynode->grplo);
sdp->grpmask = 1UL << (cpu - sdp->mynode->grplo);
}
smp_store_release(&ssp->srcu_sup->srcu_size_state, SRCU_SIZE_WAIT_BARRIER);
return true;
@ -255,29 +255,31 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
ssp->srcu_sup->sda_is_static = is_static;
if (!is_static)
ssp->sda = alloc_percpu(struct srcu_data);
if (!ssp->sda) {
if (!is_static)
kfree(ssp->srcu_sup);
return -ENOMEM;
}
if (!ssp->sda)
goto err_free_sup;
init_srcu_struct_data(ssp);
ssp->srcu_sup->srcu_gp_seq_needed_exp = 0;
ssp->srcu_sup->srcu_last_gp_end = ktime_get_mono_fast_ns();
if (READ_ONCE(ssp->srcu_sup->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) {
if (!init_srcu_struct_nodes(ssp, GFP_ATOMIC)) {
if (!ssp->srcu_sup->sda_is_static) {
free_percpu(ssp->sda);
ssp->sda = NULL;
kfree(ssp->srcu_sup);
return -ENOMEM;
}
} else {
WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG);
}
if (!init_srcu_struct_nodes(ssp, GFP_ATOMIC))
goto err_free_sda;
WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG);
}
ssp->srcu_sup->srcu_ssp = ssp;
smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed, 0); /* Init done. */
return 0;
err_free_sda:
if (!is_static) {
free_percpu(ssp->sda);
ssp->sda = NULL;
}
err_free_sup:
if (!is_static) {
kfree(ssp->srcu_sup);
ssp->srcu_sup = NULL;
}
return -ENOMEM;
}
#ifdef CONFIG_DEBUG_LOCK_ALLOC
@ -782,8 +784,7 @@ static void srcu_gp_start(struct srcu_struct *ssp)
spin_lock_rcu_node(sdp); /* Interrupts already disabled. */
rcu_segcblist_advance(&sdp->srcu_cblist,
rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
(void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq));
WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
spin_unlock_rcu_node(sdp); /* Interrupts remain disabled. */
WRITE_ONCE(ssp->srcu_sup->srcu_gp_start, jiffies);
WRITE_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay, 0);
@ -833,7 +834,7 @@ static void srcu_schedule_cbs_snp(struct srcu_struct *ssp, struct srcu_node *snp
int cpu;
for (cpu = snp->grplo; cpu <= snp->grphi; cpu++) {
if (!(mask & (1 << (cpu - snp->grplo))))
if (!(mask & (1UL << (cpu - snp->grplo))))
continue;
srcu_schedule_cbs_sdp(per_cpu_ptr(ssp->sda, cpu), delay);
}
@ -1242,10 +1243,37 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
spin_lock_irqsave_sdp_contention(sdp, &flags);
if (rhp)
rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
/*
* The snapshot for acceleration must be taken _before_ the read of the
* current gp sequence used for advancing, otherwise advancing may fail
* and acceleration may then fail too.
*
* This could happen if:
*
* 1) The RCU_WAIT_TAIL segment has callbacks (gp_num = X + 4) and the
* RCU_NEXT_READY_TAIL also has callbacks (gp_num = X + 8).
*
* 2) The grace period for RCU_WAIT_TAIL is seen as started but not
* completed so rcu_seq_current() returns X + SRCU_STATE_SCAN1.
*
* 3) This value is passed to rcu_segcblist_advance() which can't move
* any segment forward and fails.
*
* 4) srcu_gp_start_if_needed() still proceeds with callback acceleration.
* But then the call to rcu_seq_snap() observes the grace period for the
* RCU_WAIT_TAIL segment as completed and the subsequent one for the
* RCU_NEXT_READY_TAIL segment as started (ie: X + 4 + SRCU_STATE_SCAN1)
* so it returns a snapshot of the next grace period, which is X + 12.
*
* 5) The value of X + 12 is passed to rcu_segcblist_accelerate() but the
* freshly enqueued callback in RCU_NEXT_TAIL can't move to
* RCU_NEXT_READY_TAIL which already has callbacks for a previous grace
* period (gp_num = X + 8). So acceleration fails.
*/
s = rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq);
rcu_segcblist_advance(&sdp->srcu_cblist,
rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
s = rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq);
(void)rcu_segcblist_accelerate(&sdp->srcu_cblist, s);
WARN_ON_ONCE(!rcu_segcblist_accelerate(&sdp->srcu_cblist, s) && rhp);
if (ULONG_CMP_LT(sdp->srcu_gp_seq_needed, s)) {
sdp->srcu_gp_seq_needed = s;
needgp = true;
@ -1692,6 +1720,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
ssp = sdp->ssp;
rcu_cblist_init(&ready_cbs);
spin_lock_irq_rcu_node(sdp);
WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
rcu_segcblist_advance(&sdp->srcu_cblist,
rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
if (sdp->srcu_cblist_invoking ||
@ -1708,6 +1737,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
rhp = rcu_cblist_dequeue(&ready_cbs);
for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) {
debug_rcu_head_unqueue(rhp);
debug_rcu_head_callback(rhp);
local_bh_disable();
rhp->func(rhp);
local_bh_enable();
@ -1720,8 +1750,6 @@ static void srcu_invoke_callbacks(struct work_struct *work)
*/
spin_lock_irq_rcu_node(sdp);
rcu_segcblist_add_len(&sdp->srcu_cblist, -len);
(void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq));
sdp->srcu_cblist_invoking = false;
more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
spin_unlock_irq_rcu_node(sdp);

View File

@ -432,6 +432,7 @@ static void rcu_barrier_tasks_generic(struct rcu_tasks *rtp)
static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
{
int cpu;
int dequeue_limit;
unsigned long flags;
bool gpdone = poll_state_synchronize_rcu(rtp->percpu_dequeue_gpseq);
long n;
@ -439,7 +440,8 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
long ncbsnz = 0;
int needgpcb = 0;
for (cpu = 0; cpu < smp_load_acquire(&rtp->percpu_dequeue_lim); cpu++) {
dequeue_limit = smp_load_acquire(&rtp->percpu_dequeue_lim);
for (cpu = 0; cpu < dequeue_limit; cpu++) {
struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
/* Advance and accelerate any new callbacks. */
@ -538,6 +540,7 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
len = rcl.len;
for (rhp = rcu_cblist_dequeue(&rcl); rhp; rhp = rcu_cblist_dequeue(&rcl)) {
debug_rcu_head_callback(rhp);
local_bh_disable();
rhp->func(rhp);
local_bh_enable();
@ -1084,7 +1087,7 @@ void rcu_barrier_tasks(void)
}
EXPORT_SYMBOL_GPL(rcu_barrier_tasks);
int rcu_tasks_lazy_ms = -1;
static int rcu_tasks_lazy_ms = -1;
module_param(rcu_tasks_lazy_ms, int, 0444);
static int __init rcu_spawn_tasks_kthread(void)
@ -1979,20 +1982,22 @@ static void test_rcu_tasks_callback(struct rcu_head *rhp)
static void rcu_tasks_initiate_self_tests(void)
{
pr_info("Running RCU-tasks wait API self tests\n");
#ifdef CONFIG_TASKS_RCU
pr_info("Running RCU Tasks wait API self tests\n");
tests[0].runstart = jiffies;
synchronize_rcu_tasks();
call_rcu_tasks(&tests[0].rh, test_rcu_tasks_callback);
#endif
#ifdef CONFIG_TASKS_RUDE_RCU
pr_info("Running RCU Tasks Rude wait API self tests\n");
tests[1].runstart = jiffies;
synchronize_rcu_tasks_rude();
call_rcu_tasks_rude(&tests[1].rh, test_rcu_tasks_callback);
#endif
#ifdef CONFIG_TASKS_TRACE_RCU
pr_info("Running RCU Tasks Trace wait API self tests\n");
tests[2].runstart = jiffies;
synchronize_rcu_tasks_trace();
call_rcu_tasks_trace(&tests[2].rh, test_rcu_tasks_callback);

View File

@ -97,6 +97,7 @@ static inline bool rcu_reclaim_tiny(struct rcu_head *head)
trace_rcu_invoke_callback("", head);
f = head->func;
debug_rcu_head_callback(head);
WRITE_ONCE(head->func, (rcu_callback_t)0L);
f(head);
rcu_lock_release(&rcu_callback_map);

View File

@ -31,6 +31,7 @@
#include <linux/bitops.h>
#include <linux/export.h>
#include <linux/completion.h>
#include <linux/kmemleak.h>
#include <linux/moduleparam.h>
#include <linux/panic.h>
#include <linux/panic_notifier.h>
@ -1260,7 +1261,7 @@ EXPORT_SYMBOL_GPL(rcu_gp_slow_register);
/* Unregister a counter, with NULL for not caring which. */
void rcu_gp_slow_unregister(atomic_t *rgssp)
{
WARN_ON_ONCE(rgssp && rgssp != rcu_gp_slow_suppress);
WARN_ON_ONCE(rgssp && rgssp != rcu_gp_slow_suppress && rcu_gp_slow_suppress != NULL);
WRITE_ONCE(rcu_gp_slow_suppress, NULL);
}
@ -1556,10 +1557,22 @@ static bool rcu_gp_fqs_check_wake(int *gfp)
*/
static void rcu_gp_fqs(bool first_time)
{
int nr_fqs = READ_ONCE(rcu_state.nr_fqs_jiffies_stall);
struct rcu_node *rnp = rcu_get_root();
WRITE_ONCE(rcu_state.gp_activity, jiffies);
WRITE_ONCE(rcu_state.n_force_qs, rcu_state.n_force_qs + 1);
WARN_ON_ONCE(nr_fqs > 3);
/* Only countdown nr_fqs for stall purposes if jiffies moves. */
if (nr_fqs) {
if (nr_fqs == 1) {
WRITE_ONCE(rcu_state.jiffies_stall,
jiffies + rcu_jiffies_till_stall_check());
}
WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, --nr_fqs);
}
if (first_time) {
/* Collect dyntick-idle snapshots. */
force_qs_rnp(dyntick_save_progress_counter);
@ -2135,6 +2148,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
trace_rcu_invoke_callback(rcu_state.name, rhp);
f = rhp->func;
debug_rcu_head_callback(rhp);
WRITE_ONCE(rhp->func, (rcu_callback_t)0L);
f(rhp);
@ -2713,7 +2727,7 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy_in)
*/
void call_rcu_hurry(struct rcu_head *head, rcu_callback_t func)
{
return __call_rcu_common(head, func, false);
__call_rcu_common(head, func, false);
}
EXPORT_SYMBOL_GPL(call_rcu_hurry);
#endif
@ -2764,7 +2778,7 @@ EXPORT_SYMBOL_GPL(call_rcu_hurry);
*/
void call_rcu(struct rcu_head *head, rcu_callback_t func)
{
return __call_rcu_common(head, func, IS_ENABLED(CONFIG_RCU_LAZY));
__call_rcu_common(head, func, IS_ENABLED(CONFIG_RCU_LAZY));
}
EXPORT_SYMBOL_GPL(call_rcu);
@ -3388,6 +3402,14 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
success = true;
}
/*
* The kvfree_rcu() caller considers the pointer freed at this point
* and likely removes any references to it. Since the actual slab
* freeing (and kmemleak_free()) is deferred, tell kmemleak to ignore
* this object (no scanning or false positives reporting).
*/
kmemleak_ignore(ptr);
// Set timer to drain after KFREE_DRAIN_JIFFIES.
if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING)
schedule_delayed_monitor_work(krcp);
@ -4083,6 +4105,82 @@ retry:
}
EXPORT_SYMBOL_GPL(rcu_barrier);
static unsigned long rcu_barrier_last_throttle;
/**
* rcu_barrier_throttled - Do rcu_barrier(), but limit to one per second
*
* This can be thought of as guard rails around rcu_barrier() that
* permits unrestricted userspace use, at least assuming the hardware's
* try_cmpxchg() is robust. There will be at most one call per second to
* rcu_barrier() system-wide from use of this function, which means that
* callers might needlessly wait a second or three.
*
* This is intended for use by test suites to avoid OOM by flushing RCU
* callbacks from the previous test before starting the next. See the
* rcutree.do_rcu_barrier module parameter for more information.
*
* Why not simply make rcu_barrier() more scalable? That might be
* the eventual endpoint, but let's keep it simple for the time being.
* Note that the module parameter infrastructure serializes calls to a
* given .set() function, but should concurrent .set() invocation ever be
* possible, we are ready!
*/
static void rcu_barrier_throttled(void)
{
unsigned long j = jiffies;
unsigned long old = READ_ONCE(rcu_barrier_last_throttle);
unsigned long s = rcu_seq_snap(&rcu_state.barrier_sequence);
while (time_in_range(j, old, old + HZ / 16) ||
!try_cmpxchg(&rcu_barrier_last_throttle, &old, j)) {
schedule_timeout_idle(HZ / 16);
if (rcu_seq_done(&rcu_state.barrier_sequence, s)) {
smp_mb(); /* caller's subsequent code after above check. */
return;
}
j = jiffies;
old = READ_ONCE(rcu_barrier_last_throttle);
}
rcu_barrier();
}
/*
* Invoke rcu_barrier_throttled() when a rcutree.do_rcu_barrier
* request arrives. We insist on a true value to allow for possible
* future expansion.
*/
static int param_set_do_rcu_barrier(const char *val, const struct kernel_param *kp)
{
bool b;
int ret;
if (rcu_scheduler_active != RCU_SCHEDULER_RUNNING)
return -EAGAIN;
ret = kstrtobool(val, &b);
if (!ret && b) {
atomic_inc((atomic_t *)kp->arg);
rcu_barrier_throttled();
atomic_dec((atomic_t *)kp->arg);
}
return ret;
}
/*
* Output the number of outstanding rcutree.do_rcu_barrier requests.
*/
static int param_get_do_rcu_barrier(char *buffer, const struct kernel_param *kp)
{
return sprintf(buffer, "%d\n", atomic_read((atomic_t *)kp->arg));
}
static const struct kernel_param_ops do_rcu_barrier_ops = {
.set = param_set_do_rcu_barrier,
.get = param_get_do_rcu_barrier,
};
static atomic_t do_rcu_barrier;
module_param_cb(do_rcu_barrier, &do_rcu_barrier_ops, &do_rcu_barrier, 0644);
/*
* Compute the mask of online CPUs for the specified rcu_node structure.
* This will not be stable unless the rcu_node structure's ->lock is
@ -4130,7 +4228,7 @@ bool rcu_lockdep_current_cpu_online(void)
rdp = this_cpu_ptr(&rcu_data);
/*
* Strictly, we care here about the case where the current CPU is
* in rcu_cpu_starting() and thus has an excuse for rdp->grpmask
* in rcutree_report_cpu_starting() and thus has an excuse for rdp->grpmask
* not being up to date. So arch_spin_is_locked() might have a
* false positive if it's held by some *other* CPU, but that's
* OK because that just means a false *negative* on the warning.
@ -4151,25 +4249,6 @@ static bool rcu_init_invoked(void)
return !!rcu_state.n_online_cpus;
}
/*
* Near the end of the offline process. Trace the fact that this CPU
* is going offline.
*/
int rcutree_dying_cpu(unsigned int cpu)
{
bool blkd;
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
struct rcu_node *rnp = rdp->mynode;
if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
return 0;
blkd = !!(READ_ONCE(rnp->qsmask) & rdp->grpmask);
trace_rcu_grace_period(rcu_state.name, READ_ONCE(rnp->gp_seq),
blkd ? TPS("cpuofl-bgp") : TPS("cpuofl"));
return 0;
}
/*
* All CPUs for the specified rcu_node structure have gone offline,
* and all tasks that were preempted within an RCU read-side critical
@ -4215,23 +4294,6 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
}
}
/*
* The CPU has been completely removed, and some other CPU is reporting
* this fact from process context. Do the remainder of the cleanup.
* There can only be one CPU hotplug operation at a time, so no need for
* explicit locking.
*/
int rcutree_dead_cpu(unsigned int cpu)
{
if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
return 0;
WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
// Stop-machine done, so allow nohz_full to disable tick.
tick_dep_clear(TICK_DEP_BIT_RCU);
return 0;
}
/*
* Propagate ->qsinitmask bits up the rcu_node tree to account for the
* first CPU in a given leaf rcu_node structure coming online. The caller
@ -4384,29 +4446,6 @@ int rcutree_online_cpu(unsigned int cpu)
return 0;
}
/*
* Near the beginning of the process. The CPU is still very much alive
* with pretty much all services enabled.
*/
int rcutree_offline_cpu(unsigned int cpu)
{
unsigned long flags;
struct rcu_data *rdp;
struct rcu_node *rnp;
rdp = per_cpu_ptr(&rcu_data, cpu);
rnp = rdp->mynode;
raw_spin_lock_irqsave_rcu_node(rnp, flags);
rnp->ffmask &= ~rdp->grpmask;
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
rcutree_affinity_setting(cpu, cpu);
// nohz_full CPUs need the tick for stop-machine to work quickly
tick_dep_set(TICK_DEP_BIT_RCU);
return 0;
}
/*
* Mark the specified CPU as being online so that subsequent grace periods
* (both expedited and normal) will wait on it. Note that this means that
@ -4418,8 +4457,10 @@ int rcutree_offline_cpu(unsigned int cpu)
* from the incoming CPU rather than from the cpuhp_step mechanism.
* This is because this function must be invoked at a precise location.
* This incoming CPU must not have enabled interrupts yet.
*
* This mirrors the effects of rcutree_report_cpu_dead().
*/
void rcu_cpu_starting(unsigned int cpu)
void rcutree_report_cpu_starting(unsigned int cpu)
{
unsigned long mask;
struct rcu_data *rdp;
@ -4473,14 +4514,21 @@ void rcu_cpu_starting(unsigned int cpu)
* Note that this function is special in that it is invoked directly
* from the outgoing CPU rather than from the cpuhp_step mechanism.
* This is because this function must be invoked at a precise location.
*
* This mirrors the effect of rcutree_report_cpu_starting().
*/
void rcu_report_dead(unsigned int cpu)
void rcutree_report_cpu_dead(void)
{
unsigned long flags, seq_flags;
unsigned long flags;
unsigned long mask;
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
struct rcu_node *rnp = rdp->mynode; /* Outgoing CPU's rdp & rnp. */
/*
* IRQs must be disabled from now on and until the CPU dies, or an interrupt
* may open a new read-side critical section while the CPU is actually off
* the QS masks.
*/
lockdep_assert_irqs_disabled();
// Do any dangling deferred wakeups.
do_nocb_deferred_wakeup(rdp);
@ -4488,7 +4536,6 @@ void rcu_report_dead(unsigned int cpu)
/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
mask = rdp->grpmask;
local_irq_save(seq_flags);
arch_spin_lock(&rcu_state.ofl_lock);
raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Enforce GP memory-order guarantee. */
rdp->rcu_ofl_gp_seq = READ_ONCE(rcu_state.gp_seq);
@ -4502,8 +4549,6 @@ void rcu_report_dead(unsigned int cpu)
WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext & ~mask);
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
arch_spin_unlock(&rcu_state.ofl_lock);
local_irq_restore(seq_flags);
rdp->cpu_started = false;
}
@ -4558,7 +4603,60 @@ void rcutree_migrate_callbacks(int cpu)
cpu, rcu_segcblist_n_cbs(&rdp->cblist),
rcu_segcblist_first_cb(&rdp->cblist));
}
#endif
/*
* The CPU has been completely removed, and some other CPU is reporting
* this fact from process context. Do the remainder of the cleanup.
* There can only be one CPU hotplug operation at a time, so no need for
* explicit locking.
*/
int rcutree_dead_cpu(unsigned int cpu)
{
WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
// Stop-machine done, so allow nohz_full to disable tick.
tick_dep_clear(TICK_DEP_BIT_RCU);
return 0;
}
/*
* Near the end of the offline process. Trace the fact that this CPU
* is going offline.
*/
int rcutree_dying_cpu(unsigned int cpu)
{
bool blkd;
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
struct rcu_node *rnp = rdp->mynode;
blkd = !!(READ_ONCE(rnp->qsmask) & rdp->grpmask);
trace_rcu_grace_period(rcu_state.name, READ_ONCE(rnp->gp_seq),
blkd ? TPS("cpuofl-bgp") : TPS("cpuofl"));
return 0;
}
/*
* Near the beginning of the process. The CPU is still very much alive
* with pretty much all services enabled.
*/
int rcutree_offline_cpu(unsigned int cpu)
{
unsigned long flags;
struct rcu_data *rdp;
struct rcu_node *rnp;
rdp = per_cpu_ptr(&rcu_data, cpu);
rnp = rdp->mynode;
raw_spin_lock_irqsave_rcu_node(rnp, flags);
rnp->ffmask &= ~rdp->grpmask;
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
rcutree_affinity_setting(cpu, cpu);
// nohz_full CPUs need the tick for stop-machine to work quickly
tick_dep_set(TICK_DEP_BIT_RCU);
return 0;
}
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
/*
* On non-huge systems, use expedited RCU grace periods to make suspend
@ -4990,7 +5088,7 @@ void __init rcu_init(void)
pm_notifier(rcu_pm_notify, 0);
WARN_ON(num_online_cpus() > 1); // Only one CPU this early in boot.
rcutree_prepare_cpu(cpu);
rcu_cpu_starting(cpu);
rcutree_report_cpu_starting(cpu);
rcutree_online_cpu(cpu);
/* Create workqueue for Tree SRCU and for expedited GPs. */

@ -386,6 +386,10 @@ struct rcu_state {
/* in jiffies. */
unsigned long jiffies_stall; /* Time at which to check */
/* for CPU stalls. */
int nr_fqs_jiffies_stall; /* Number of fqs loops after
* which read jiffies and set
* jiffies_stall. Stall
* warnings disabled if !0. */
unsigned long jiffies_resched; /* Time at which to resched */
/* a reluctant CPU. */
unsigned long n_force_qs_gpstart; /* Snapshot of n_force_qs at */

@ -621,10 +621,14 @@ static void synchronize_rcu_expedited_wait(void)
}
for (;;) {
unsigned long j;
if (synchronize_rcu_expedited_wait_once(jiffies_stall))
return;
if (rcu_stall_is_suppressed())
continue;
j = jiffies;
rcu_stall_notifier_call_chain(RCU_STALL_NOTIFY_EXP, (void *)(j - jiffies_start));
trace_rcu_stall_warning(rcu_state.name, TPS("ExpeditedStall"));
pr_err("INFO: %s detected expedited stalls on CPUs/tasks: {",
rcu_state.name);
@ -647,7 +651,7 @@ static void synchronize_rcu_expedited_wait(void)
}
}
pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
jiffies - jiffies_start, rcu_state.expedited_sequence,
j - jiffies_start, rcu_state.expedited_sequence,
data_race(rnp_root->expmask),
".T"[!!data_race(rnp_root->exp_tasks)]);
if (ndetected) {

@ -8,6 +8,7 @@
*/
#include <linux/kvm_para.h>
#include <linux/rcu_notifier.h>
//////////////////////////////////////////////////////////////////////////////
//
@ -149,12 +150,17 @@ static void panic_on_rcu_stall(void)
/**
* rcu_cpu_stall_reset - restart stall-warning timeout for current grace period
*
* To perform the reset request from the caller, disable stall detection until
* 3 fqs loops have passed. This is required to ensure a fresh jiffies is
* loaded. It should be safe to do from the fqs loop as enough timer
* interrupts and context switches should have passed.
*
* The caller must disable hard irqs.
*/
void rcu_cpu_stall_reset(void)
{
WRITE_ONCE(rcu_state.jiffies_stall,
jiffies + rcu_jiffies_till_stall_check());
WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, 3);
WRITE_ONCE(rcu_state.jiffies_stall, ULONG_MAX);
}
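For context, a caller that knows jiffies may have gone stale (for example, a resume path after the system was stopped for a long time) would invoke this with hard irqs disabled. A minimal, purely illustrative sketch; the function name is hypothetical and not part of this patch:

static void example_resume_after_long_stop(void)
{
	unsigned long flags;

	/* rcu_cpu_stall_reset() requires hard irqs to be disabled. */
	local_irq_save(flags);
	rcu_cpu_stall_reset();
	local_irq_restore(flags);
}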
//////////////////////////////////////////////////////////////////////////////
@ -170,6 +176,7 @@ static void record_gp_stall_check_time(void)
WRITE_ONCE(rcu_state.gp_start, j);
j1 = rcu_jiffies_till_stall_check();
smp_mb(); // ->gp_start before ->jiffies_stall and caller's ->gp_seq.
WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, 0);
WRITE_ONCE(rcu_state.jiffies_stall, j + j1);
rcu_state.jiffies_resched = j + j1 / 2;
rcu_state.n_force_qs_gpstart = READ_ONCE(rcu_state.n_force_qs);
@ -534,16 +541,16 @@ static void rcu_check_gp_kthread_starvation(void)
data_race(READ_ONCE(rcu_state.gp_state)),
gpk ? data_race(READ_ONCE(gpk->__state)) : ~0, cpu);
if (gpk) {
struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
pr_err("\tUnless %s kthread gets sufficient CPU time, OOM is now expected behavior.\n", rcu_state.name);
pr_err("RCU grace-period kthread stack dump:\n");
sched_show_task(gpk);
if (cpu >= 0) {
if (cpu_is_offline(cpu)) {
pr_err("RCU GP kthread last ran on offline CPU %d.\n", cpu);
} else {
pr_err("Stack dump where RCU GP kthread last ran:\n");
dump_cpu_task(cpu);
}
if (cpu_is_offline(cpu)) {
pr_err("RCU GP kthread last ran on offline CPU %d.\n", cpu);
} else if (!(data_race(READ_ONCE(rdp->mynode->qsmask)) & rdp->grpmask)) {
pr_err("Stack dump where RCU GP kthread last ran:\n");
dump_cpu_task(cpu);
}
wake_up_process(gpk);
}
@ -711,7 +718,7 @@ static void print_cpu_stall(unsigned long gps)
static void check_cpu_stall(struct rcu_data *rdp)
{
bool didstall = false;
bool self_detected;
unsigned long gs1;
unsigned long gs2;
unsigned long gps;
@ -725,6 +732,16 @@ static void check_cpu_stall(struct rcu_data *rdp)
!rcu_gp_in_progress())
return;
rcu_stall_kick_kthreads();
/*
* Check if it was requested (via rcu_cpu_stall_reset()) that the FQS
* loop has to set jiffies to ensure a non-stale jiffies value. This
* is required to have a good jiffies value after coming out of long
* breaks of jiffies updates. Not doing so can cause false positives.
*/
if (READ_ONCE(rcu_state.nr_fqs_jiffies_stall) > 0)
return;
j = jiffies;
/*
@ -758,10 +775,10 @@ static void check_cpu_stall(struct rcu_data *rdp)
return; /* No stall or GP completed since entering function. */
rnp = rdp->mynode;
jn = jiffies + ULONG_MAX / 2;
self_detected = READ_ONCE(rnp->qsmask) & rdp->grpmask;
if (rcu_gp_in_progress() &&
(READ_ONCE(rnp->qsmask) & rdp->grpmask) &&
(self_detected || ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY)) &&
cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
/*
* If a virtual machine is stopped by the host it can look to
* the watchdog like an RCU stall. Check to see if the host
@ -770,39 +787,28 @@ static void check_cpu_stall(struct rcu_data *rdp)
if (kvm_check_and_clear_guest_paused())
return;
/* We haven't checked in, so go dump stack. */
print_cpu_stall(gps);
rcu_stall_notifier_call_chain(RCU_STALL_NOTIFY_NORM, (void *)(j - gps));
if (self_detected) {
/* We haven't checked in, so go dump stack. */
print_cpu_stall(gps);
} else {
/* They had a few time units to dump stack, so complain. */
print_other_cpu_stall(gs2, gps);
}
if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
rcu_ftrace_dump(DUMP_ALL);
didstall = true;
} else if (rcu_gp_in_progress() &&
ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) &&
cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
/*
* If a virtual machine is stopped by the host it can look to
* the watchdog like an RCU stall. Check to see if the host
* stopped the vm.
*/
if (kvm_check_and_clear_guest_paused())
return;
/* They had a few time units to dump stack, so complain. */
print_other_cpu_stall(gs2, gps);
if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
rcu_ftrace_dump(DUMP_ALL);
didstall = true;
}
if (didstall && READ_ONCE(rcu_state.jiffies_stall) == jn) {
jn = jiffies + 3 * rcu_jiffies_till_stall_check() + 3;
WRITE_ONCE(rcu_state.jiffies_stall, jn);
if (READ_ONCE(rcu_state.jiffies_stall) == jn) {
jn = jiffies + 3 * rcu_jiffies_till_stall_check() + 3;
WRITE_ONCE(rcu_state.jiffies_stall, jn);
}
}
}
//////////////////////////////////////////////////////////////////////////////
//
// RCU forward-progress mechanisms, including of callback invocation.
// RCU forward-progress mechanisms, including for callback invocation.
/*
@ -1054,3 +1060,58 @@ static int __init rcu_sysrq_init(void)
return 0;
}
early_initcall(rcu_sysrq_init);
//////////////////////////////////////////////////////////////////////////////
//
// RCU CPU stall-warning notifiers
static ATOMIC_NOTIFIER_HEAD(rcu_cpu_stall_notifier_list);
/**
* rcu_stall_chain_notifier_register - Add an RCU CPU stall notifier
* @n: Entry to add.
*
* Adds an RCU CPU stall notifier to an atomic notifier chain.
* The @action passed to a notifier will be @RCU_STALL_NOTIFY_NORM or
* friends. The @data will be the duration of the stalled grace period,
* in jiffies, coerced to a void* pointer.
*
* Returns 0 on success, %-EEXIST on error.
*/
int rcu_stall_chain_notifier_register(struct notifier_block *n)
{
return atomic_notifier_chain_register(&rcu_cpu_stall_notifier_list, n);
}
EXPORT_SYMBOL_GPL(rcu_stall_chain_notifier_register);
/**
* rcu_stall_chain_notifier_unregister - Remove an RCU CPU stall notifier
* @n: Entry to remove.
*
* Removes an RCU CPU stall notifier from an atomic notifier chain.
*
* Returns zero on success, %-ENOENT on failure.
*/
int rcu_stall_chain_notifier_unregister(struct notifier_block *n)
{
return atomic_notifier_chain_unregister(&rcu_cpu_stall_notifier_list, n);
}
EXPORT_SYMBOL_GPL(rcu_stall_chain_notifier_unregister);
/*
* rcu_stall_notifier_call_chain - Call functions in an RCU CPU stall notifier chain
* @val: Value passed unmodified to notifier function
* @v: Pointer passed unmodified to notifier function
*
* Calls each function in the RCU CPU stall notifier chain in turn, which
* is an atomic call chain. See atomic_notifier_call_chain() for more
* information.
*
* This is for use within RCU, hence the omission of the extra asterisk
* to indicate a non-kerneldoc format header comment.
*/
int rcu_stall_notifier_call_chain(unsigned long val, void *v)
{
return atomic_notifier_call_chain(&rcu_cpu_stall_notifier_list, val, v);
}
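A minimal sketch of how a subsystem might hook into this chain: the callback, its message, and the initcall are hypothetical, but the registration API and the RCU_STALL_NOTIFY_NORM/RCU_STALL_NOTIFY_EXP actions are the ones introduced above.

#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/printk.h>
#include <linux/rcu_notifier.h>

static int example_rcu_stall_cb(struct notifier_block *nb,
				unsigned long action, void *data)
{
	/* @data is the stall duration so far, in jiffies. */
	unsigned long duration = (unsigned long)data;

	pr_info("example: %s RCU stall, %lu jiffies so far\n",
		action == RCU_STALL_NOTIFY_EXP ? "expedited" : "normal",
		duration);
	return NOTIFY_OK;
}

static struct notifier_block example_rcu_stall_nb = {
	.notifier_call = example_rcu_stall_cb,
};

static int __init example_stall_notifier_init(void)
{
	return rcu_stall_chain_notifier_register(&example_rcu_stall_nb);
}
late_initcall(example_stall_notifier_init);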

@ -25,6 +25,7 @@
#include <linux/interrupt.h>
#include <linux/sched/signal.h>
#include <linux/sched/debug.h>
#include <linux/torture.h>
#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/percpu.h>
@ -524,17 +525,17 @@ EXPORT_SYMBOL_GPL(do_trace_rcu_torture_read);
do { } while (0)
#endif
#if IS_ENABLED(CONFIG_RCU_TORTURE_TEST) || IS_MODULE(CONFIG_RCU_TORTURE_TEST)
#if IS_ENABLED(CONFIG_RCU_TORTURE_TEST) || IS_MODULE(CONFIG_RCU_TORTURE_TEST) || IS_ENABLED(CONFIG_LOCK_TORTURE_TEST) || IS_MODULE(CONFIG_LOCK_TORTURE_TEST)
/* Get rcutorture access to sched_setaffinity(). */
long rcutorture_sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
long torture_sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
{
int ret;
ret = sched_setaffinity(pid, in_mask);
WARN_ONCE(ret, "%s: sched_setaffinity() returned %d\n", __func__, ret);
WARN_ONCE(ret, "%s: sched_setaffinity(%d) returned %d\n", __func__, pid, ret);
return ret;
}
EXPORT_SYMBOL_GPL(rcutorture_sched_setaffinity);
EXPORT_SYMBOL_GPL(torture_sched_setaffinity);
#endif
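As an illustration of the renamed helper now shared with locktorture (this snippet is not in the patch, the function name is made up, and it assumes the declaration is visible via <linux/torture.h>), a torture kthread could pin itself to a given CPU like so:

#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/torture.h>

static void example_bind_self_to_cpu(int cpu)
{
	cpumask_var_t mask;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return;
	cpumask_set_cpu(cpu, mask);
	torture_sched_setaffinity(current->pid, mask);
	free_cpumask_var(mask);
}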
#ifdef CONFIG_RCU_STALL_COMMON

@ -87,14 +87,15 @@ EXPORT_SYMBOL_GPL(verbose_torout_sleep);
* nanosecond random fuzz. This function and its friends desynchronize
* testing from the timer wheel.
*/
int torture_hrtimeout_ns(ktime_t baset_ns, u32 fuzzt_ns, struct torture_random_state *trsp)
int torture_hrtimeout_ns(ktime_t baset_ns, u32 fuzzt_ns, const enum hrtimer_mode mode,
struct torture_random_state *trsp)
{
ktime_t hto = baset_ns;
if (trsp)
hto += torture_random(trsp) % fuzzt_ns;
set_current_state(TASK_IDLE);
return schedule_hrtimeout(&hto, HRTIMER_MODE_REL);
return schedule_hrtimeout(&hto, mode);
}
EXPORT_SYMBOL_GPL(torture_hrtimeout_ns);
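The new mode argument lets callers wait until an absolute deadline rather than for a relative delay, which is what the reworked stutter code later in this series relies on. A small illustrative call, not taken from the patch:

#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/torture.h>

static void example_wait_for_deadline(void)
{
	/* Sleep until roughly 2 ms from now, with no random fuzz. */
	ktime_t deadline = ktime_add_ns(ktime_get(), 2 * NSEC_PER_MSEC);

	torture_hrtimeout_ns(deadline, 0, HRTIMER_MODE_ABS, NULL);
}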
@ -106,7 +107,7 @@ int torture_hrtimeout_us(u32 baset_us, u32 fuzzt_ns, struct torture_random_state
{
ktime_t baset_ns = baset_us * NSEC_PER_USEC;
return torture_hrtimeout_ns(baset_ns, fuzzt_ns, trsp);
return torture_hrtimeout_ns(baset_ns, fuzzt_ns, HRTIMER_MODE_REL, trsp);
}
EXPORT_SYMBOL_GPL(torture_hrtimeout_us);
@ -123,7 +124,7 @@ int torture_hrtimeout_ms(u32 baset_ms, u32 fuzzt_us, struct torture_random_state
fuzzt_ns = (u32)~0U;
else
fuzzt_ns = fuzzt_us * NSEC_PER_USEC;
return torture_hrtimeout_ns(baset_ns, fuzzt_ns, trsp);
return torture_hrtimeout_ns(baset_ns, fuzzt_ns, HRTIMER_MODE_REL, trsp);
}
EXPORT_SYMBOL_GPL(torture_hrtimeout_ms);
@ -136,7 +137,7 @@ int torture_hrtimeout_jiffies(u32 baset_j, struct torture_random_state *trsp)
{
ktime_t baset_ns = jiffies_to_nsecs(baset_j);
return torture_hrtimeout_ns(baset_ns, jiffies_to_nsecs(1), trsp);
return torture_hrtimeout_ns(baset_ns, jiffies_to_nsecs(1), HRTIMER_MODE_REL, trsp);
}
EXPORT_SYMBOL_GPL(torture_hrtimeout_jiffies);
@ -153,7 +154,7 @@ int torture_hrtimeout_s(u32 baset_s, u32 fuzzt_ms, struct torture_random_state *
fuzzt_ns = (u32)~0U;
else
fuzzt_ns = fuzzt_ms * NSEC_PER_MSEC;
return torture_hrtimeout_ns(baset_ns, fuzzt_ns, trsp);
return torture_hrtimeout_ns(baset_ns, fuzzt_ns, HRTIMER_MODE_REL, trsp);
}
EXPORT_SYMBOL_GPL(torture_hrtimeout_s);
@ -520,9 +521,8 @@ static void torture_shuffle_task_unregister_all(void)
* A special case is when shuffle_idle_cpu = -1, in which case we allow
* the tasks to run on all CPUs.
*/
static void torture_shuffle_tasks(void)
static void torture_shuffle_tasks(struct torture_random_state *trp)
{
DEFINE_TORTURE_RANDOM(rand);
struct shuffle_task *stp;
cpumask_setall(shuffle_tmp_mask);
@ -543,7 +543,7 @@ static void torture_shuffle_tasks(void)
mutex_lock(&shuffle_task_mutex);
list_for_each_entry(stp, &shuffle_task_list, st_l) {
if (!random_shuffle || torture_random(&rand) & 0x1)
if (!random_shuffle || torture_random(trp) & 0x1)
set_cpus_allowed_ptr(stp->st_t, shuffle_tmp_mask);
}
mutex_unlock(&shuffle_task_mutex);
@ -562,7 +562,7 @@ static int torture_shuffle(void *arg)
VERBOSE_TOROUT_STRING("torture_shuffle task started");
do {
torture_hrtimeout_jiffies(shuffle_interval, &rand);
torture_shuffle_tasks();
torture_shuffle_tasks(&rand);
torture_shutdown_absorb("torture_shuffle");
} while (!torture_must_stop());
torture_kthread_stopping("torture_shuffle");
@ -673,7 +673,7 @@ int torture_shutdown_init(int ssecs, void (*cleanup)(void))
if (ssecs > 0) {
shutdown_time = ktime_add(ktime_get(), ktime_set(ssecs, 0));
return torture_create_kthread(torture_shutdown, NULL,
shutdown_task);
shutdown_task);
}
return 0;
}
@ -720,7 +720,7 @@ static void torture_shutdown_cleanup(void)
* suddenly applied to or removed from the system.
*/
static struct task_struct *stutter_task;
static int stutter_pause_test;
static ktime_t stutter_till_abs_time;
static int stutter;
static int stutter_gap;
@ -730,30 +730,16 @@ static int stutter_gap;
*/
bool stutter_wait(const char *title)
{
unsigned int i = 0;
bool ret = false;
int spt;
ktime_t till_ns;
cond_resched_tasks_rcu_qs();
spt = READ_ONCE(stutter_pause_test);
for (; spt; spt = READ_ONCE(stutter_pause_test)) {
if (!ret && !rt_task(current)) {
sched_set_normal(current, MAX_NICE);
ret = true;
}
if (spt == 1) {
torture_hrtimeout_jiffies(1, NULL);
} else if (spt == 2) {
while (READ_ONCE(stutter_pause_test)) {
if (!(i++ & 0xffff))
torture_hrtimeout_us(10, 0, NULL);
cond_resched();
}
} else {
torture_hrtimeout_jiffies(round_jiffies_relative(HZ), NULL);
}
torture_shutdown_absorb(title);
till_ns = READ_ONCE(stutter_till_abs_time);
if (till_ns && ktime_before(ktime_get(), till_ns)) {
torture_hrtimeout_ns(till_ns, 0, HRTIMER_MODE_ABS, NULL);
ret = true;
}
torture_shutdown_absorb(title);
return ret;
}
EXPORT_SYMBOL_GPL(stutter_wait);
@ -764,23 +750,16 @@ EXPORT_SYMBOL_GPL(stutter_wait);
*/
static int torture_stutter(void *arg)
{
DEFINE_TORTURE_RANDOM(rand);
int wtime;
ktime_t till_ns;
VERBOSE_TOROUT_STRING("torture_stutter task started");
do {
if (!torture_must_stop() && stutter > 1) {
wtime = stutter;
if (stutter > 2) {
WRITE_ONCE(stutter_pause_test, 1);
wtime = stutter - 3;
torture_hrtimeout_jiffies(wtime, &rand);
wtime = 2;
}
WRITE_ONCE(stutter_pause_test, 2);
torture_hrtimeout_jiffies(wtime, NULL);
till_ns = ktime_add_ns(ktime_get(),
jiffies_to_nsecs(stutter));
WRITE_ONCE(stutter_till_abs_time, till_ns);
torture_hrtimeout_jiffies(stutter - 1, NULL);
}
WRITE_ONCE(stutter_pause_test, 0);
if (!torture_must_stop())
torture_hrtimeout_jiffies(stutter_gap, NULL);
torture_shutdown_absorb("torture_stutter");
@ -812,6 +791,13 @@ static void torture_stutter_cleanup(void)
stutter_task = NULL;
}
static void
torture_print_module_parms(void)
{
pr_alert("torture module --- %s: disable_onoff_at_boot=%d ftrace_dump_at_shutdown=%d verbose_sleep_frequency=%d verbose_sleep_duration=%d random_shuffle=%d\n",
torture_type, disable_onoff_at_boot, ftrace_dump_at_shutdown, verbose_sleep_frequency, verbose_sleep_duration, random_shuffle);
}
/*
* Initialize torture module. Please note that this is -not- invoked via
* the usual module_init() mechanism, but rather by an explicit call from
@ -834,6 +820,7 @@ bool torture_init_begin(char *ttype, int v)
torture_type = ttype;
verbose = v;
fullstop = FULLSTOP_DONTSTOP;
torture_print_module_parms();
return true;
}
EXPORT_SYMBOL_GPL(torture_init_begin);

@ -528,26 +528,6 @@ bool slab_is_available(void)
}
#ifdef CONFIG_PRINTK
/**
* kmem_valid_obj - does the pointer reference a valid slab object?
* @object: pointer to query.
*
* Return: %true if the pointer is to a not-yet-freed object from
* kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
* is to an already-freed object, and %false otherwise.
*/
bool kmem_valid_obj(void *object)
{
struct folio *folio;
/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
return false;
folio = virt_to_folio(object);
return folio_test_slab(folio);
}
EXPORT_SYMBOL_GPL(kmem_valid_obj);
static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
{
if (__kfence_obj_info(kpp, object, slab))
@ -566,11 +546,11 @@ static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *
* and, if available, the slab name, return address, and stack trace from
* the allocation and last free path of that object.
*
* This function will splat if passed a pointer to a non-slab object.
* If you are not sure what type of object you have, you should instead
* use mem_dump_obj().
* Return: %true if the pointer is to a not-yet-freed object from
* kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
* is to an already-freed object, and %false otherwise.
*/
void kmem_dump_obj(void *object)
bool kmem_dump_obj(void *object)
{
char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
int i;
@ -578,13 +558,13 @@ void kmem_dump_obj(void *object)
unsigned long ptroffset;
struct kmem_obj_info kp = { };
if (WARN_ON_ONCE(!virt_addr_valid(object)))
return;
/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
return false;
slab = virt_to_slab(object);
if (WARN_ON_ONCE(!slab)) {
pr_cont(" non-slab memory.\n");
return;
}
if (!slab)
return false;
kmem_obj_info(&kp, object, slab);
if (kp.kp_slab_cache)
pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
@ -621,6 +601,7 @@ void kmem_dump_obj(void *object)
pr_info(" %pS\n", kp.kp_free_stack[i]);
}
return true;
}
EXPORT_SYMBOL_GPL(kmem_dump_obj);
#endif

@ -1060,10 +1060,8 @@ void mem_dump_obj(void *object)
{
const char *type;
if (kmem_valid_obj(object)) {
kmem_dump_obj(object);
if (kmem_dump_obj(object))
return;
}
if (vmalloc_dump_obj(object))
return;
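For example, a debugging path that is handed a suspicious pointer can simply call mem_dump_obj(), which now falls through to vmalloc and plain-pointer reporting when kmem_dump_obj() returns false. A hypothetical sketch (the helper name is made up):

#include <linux/mm.h>
#include <linux/printk.h>

static void example_report_suspect_pointer(void *ptr)
{
	pr_warn("example: suspicious pointer %px:\n", ptr);
	mem_dump_obj(ptr);	/* reports slab, vmalloc, or plain-pointer provenance */
}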

@ -6427,15 +6427,6 @@ sub process {
}
}
# check for soon-to-be-deprecated single-argument k[v]free_rcu() API
if ($line =~ /\bk[v]?free_rcu\s*\([^(]+\)/) {
if ($line =~ /\bk[v]?free_rcu\s*\([^,]+\)/) {
ERROR("DEPRECATED_API",
"Single-argument k[v]free_rcu() API is deprecated, please pass rcu_head object or call k[v]free_rcu_mightsleep()." . $herecurr);
}
}
# check for unnecessary "Out of Memory" messages
if ($line =~ /^\+.*\b$logFunctions\s*\(/ &&
$prevline =~ /^[ \+]\s*if\s*\(\s*(\!\s*|NULL\s*==\s*)?($Lval)(\s*==\s*NULL\s*)?\s*\)/ &&

tools/testing/selftests/rcutorture/bin/functions.sh Normal file → Executable file
@ -331,3 +331,32 @@ specify_qemu_net () {
echo $1 -net none
fi
}
# Extract the ftrace output from the console log output.
# The ftrace output in the original logs looks like:
# Dumping ftrace buffer:
# ---------------------------------
# [...]
# ---------------------------------
extract_ftrace_from_console() {
awk < "$1" '
/Dumping ftrace buffer:/ {
buffer_count++
print "Ftrace dump " buffer_count ":"
capture = 1
next
}
/---------------------------------/ {
if(capture == 1) {
capture = 2
next
} else if(capture == 2) {
capture = 0
print ""
}
}
capture == 2'
}

@ -13,7 +13,7 @@
#
# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
T=/tmp/kvm-recheck.sh.$$
T="`mktemp ${TMPDIR-/tmp}/kvm-recheck.sh.XXXXXX`"
trap 'rm -f $T' 0 2
configerrors=0

@ -49,6 +49,7 @@ TORTURE_SHUTDOWN_GRACE=180
TORTURE_SUITE=rcu
TORTURE_MOD=rcutorture
TORTURE_TRUST_MAKE=""
debuginfo="CONFIG_DEBUG_INFO_NONE=n CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y"
resdir=""
configs=""
cpus=0
@ -68,6 +69,7 @@ usage () {
echo " --cpus N"
echo " --datestamp string"
echo " --defconfig string"
echo " --debug-info"
echo " --dryrun batches|scenarios|sched|script"
echo " --duration minutes | <seconds>s | <hours>h | <days>d"
echo " --gdb"
@ -135,6 +137,15 @@ do
ds=$2
shift
;;
--debug-info|--debuginfo)
if test -z "$TORTURE_KCONFIG_KCSAN_ARG" && test -z "$TORTURE_BOOT_GDB_ARG"
then
TORTURE_KCONFIG_KCSAN_ARG="$debuginfo"; export TORTURE_KCONFIG_KCSAN_ARG
TORTURE_BOOT_GDB_ARG="nokaslr"; export TORTURE_BOOT_GDB_ARG
else
echo "Ignored redundant --debug-info (implied by --kcsan &c)"
fi
;;
--defconfig)
checkarg --defconfig "defconfigtype" "$#" "$2" '^[^/][^/]*$' '^--'
TORTURE_DEFCONFIG=$2
@ -163,7 +174,7 @@ do
shift
;;
--gdb)
TORTURE_KCONFIG_GDB_ARG="CONFIG_DEBUG_INFO_NONE=n CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y"; export TORTURE_KCONFIG_GDB_ARG
TORTURE_KCONFIG_GDB_ARG="$debuginfo"; export TORTURE_KCONFIG_GDB_ARG
TORTURE_BOOT_GDB_ARG="nokaslr"; export TORTURE_BOOT_GDB_ARG
TORTURE_QEMU_GDB_ARG="-s -S"; export TORTURE_QEMU_GDB_ARG
;;
@ -179,7 +190,7 @@ do
shift
;;
--kasan)
TORTURE_KCONFIG_KASAN_ARG="CONFIG_DEBUG_INFO_NONE=n CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y CONFIG_KASAN=y"; export TORTURE_KCONFIG_KASAN_ARG
TORTURE_KCONFIG_KASAN_ARG="$debuginfo CONFIG_KASAN=y"; export TORTURE_KCONFIG_KASAN_ARG
if test -n "$torture_qemu_mem_default"
then
TORTURE_QEMU_MEM=2G
@ -191,7 +202,7 @@ do
shift
;;
--kcsan)
TORTURE_KCONFIG_KCSAN_ARG="CONFIG_DEBUG_INFO_NONE=n CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y CONFIG_KCSAN=y CONFIG_KCSAN_STRICT=y CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y"; export TORTURE_KCONFIG_KCSAN_ARG
TORTURE_KCONFIG_KCSAN_ARG="$debuginfo CONFIG_KCSAN=y CONFIG_KCSAN_STRICT=y CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y"; export TORTURE_KCONFIG_KCSAN_ARG
;;
--kmake-arg|--kmake-args)
checkarg --kmake-arg "(kernel make arguments)" $# "$2" '.*' '^error$'

@ -11,7 +11,7 @@
#
# Authors: Paul E. McKenney <paulmck@linux.ibm.com>
T=${TMPDIR-/tmp}/parse-console.sh.$$
T="`mktemp -d ${TMPDIR-/tmp}/parse-console.sh.XXXXXX`"
file="$1"
title="$2"
@ -182,3 +182,10 @@ if ! test -s $file.diags
then
rm -f $file.diags
fi
# Call the extract_ftrace_from_console function. If the output is empty,
# don't create $file.ftrace; otherwise write the results to $file.ftrace.
extract_ftrace_from_console $file > $file.ftrace
if [ ! -s $file.ftrace ]; then
rm -f $file.ftrace
fi

@ -472,7 +472,7 @@ do
if test -n "$firsttime"
then
torture_bootargs="refscale.scale_type="$prim" refscale.nreaders=$HALF_ALLOTED_CPUS refscale.loops=10000 refscale.holdoff=20 torture.disable_onoff_at_boot"
torture_set "refscale-$prim" tools/testing/selftests/rcutorture/bin/kvm.sh --torture refscale --allcpus --duration 5 --kconfig "CONFIG_TASKS_TRACE_RCU=y CONFIG_NR_CPUS=$HALF_ALLOTED_CPUS" --bootargs "verbose_batched=$VERBOSE_BATCH_CPUS torture.verbose_sleep_frequency=8 torture.verbose_sleep_duration=$VERBOSE_BATCH_CPUS" --trust-make
torture_set "refscale-$prim" tools/testing/selftests/rcutorture/bin/kvm.sh --torture refscale --allcpus --duration 5 --kconfig "CONFIG_TASKS_TRACE_RCU=y CONFIG_NR_CPUS=$HALF_ALLOTED_CPUS" --bootargs "refscale.verbose_batched=$VERBOSE_BATCH_CPUS torture.verbose_sleep_frequency=8 torture.verbose_sleep_duration=$VERBOSE_BATCH_CPUS" --trust-make
mv $T/last-resdir-nodebug $T/first-resdir-nodebug || :
if test -f "$T/last-resdir-kasan"
then

@ -11,3 +11,4 @@ CONFIG_FORCE_TASKS_TRACE_RCU=y
#CHECK#CONFIG_TASKS_TRACE_RCU=y
CONFIG_TASKS_TRACE_RCU_READ_MB=n
CONFIG_RCU_EXPERT=y
CONFIG_DEBUG_OBJECTS=y