linux-stable/kernel/rcu
Zqiang 0de2180ba3 rcu-tasks: Handle queue-shrink/callback-enqueue race condition
[ Upstream commit a4fcfbee8f ]

The rcu_tasks_need_gpcb() function determines whether: (1) There are
callbacks needing another grace period, (2) There are callbacks ready
to be invoked, and (3) It would be a good time to shrink back down to a
single-CPU callback list.  This third case is interesting because some
other CPU might be adding new callbacks, which might suddenly make this
a very bad time to be shrinking.

This is currently handled by requiring call_rcu_tasks_generic() to
enqueue callbacks under the protection of rcu_read_lock() and requiring
rcu_tasks_need_gpcb() to wait for an RCU grace period to elapse before
finalizing the transition.  This works well in practice.
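
As a rough illustration (a simplified sketch, not the exact upstream
code; the _sketch suffix and the ->lock field name are assumptions made
for this example), the enqueue side looks something like this:

/*
 * Simplified sketch of the enqueue side; not the exact upstream code.
 * The key point is that the callback is enqueued inside an RCU
 * read-side critical section, so the shrink path can wait out
 * concurrent enqueuers with a single RCU grace period.
 */
static void call_rcu_tasks_generic_sketch(struct rcu_head *rhp,
					  rcu_callback_t func,
					  struct rcu_tasks *rtp)
{
	struct rcu_tasks_percpu *rtpcp;
	unsigned long flags;
	int cpu;

	rhp->next = NULL;
	rhp->func = func;
	rcu_read_lock();
	/* ->percpu_enqueue_shift bounds the choice of list to the
	 * ->percpu_enqueue_lim currently-enabled callback lists. */
	cpu = raw_smp_processor_id() >> READ_ONCE(rtp->percpu_enqueue_shift);
	rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
	raw_spin_lock_irqsave(&rtpcp->lock, flags);	/* Field name assumed. */
	rcu_segcblist_enqueue(&rtpcp->cblist, rhp);
	raw_spin_unlock_irqrestore(&rtpcp->lock, flags);
	rcu_read_unlock();
}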

Unfortunately, the current code assumes that a grace period whose end is
detected by the poll_state_synchronize_rcu() in the second "if" condition
actually ended before the earlier code counted the callbacks queued on
CPUs other than CPU 0 (local variable "ncbsnz").  Given the current code,
it is possible that a long-delayed call_rcu_tasks_generic() invocation
will queue a callback on a non-zero CPU after these CPUs have had their
callbacks counted and zero has been stored to ncbsnz.  Such a callback
would trigger the WARN_ON_ONCE() in the second "if" statement.
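
For orientation, the following is a condensed sketch of the pre-fix
ordering in rcu_tasks_need_gpcb() (heavily simplified, with locking and
the queue-expand path omitted; the _sketch name is illustrative, not
the upstream function):

/*
 * Condensed sketch of the pre-fix ordering; heavily simplified and
 * not the exact upstream code (locking and the expand path omitted).
 */
static int rcu_tasks_need_gpcb_sketch(struct rcu_tasks *rtp)
{
	long ncbs = 0, ncbsnz = 0;
	int cpu;

	/* Count callbacks, noting any queued on CPUs other than CPU 0. */
	for (cpu = 0; cpu < smp_load_acquire(&rtp->percpu_dequeue_lim); cpu++) {
		struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
		long n = rcu_segcblist_n_cbs(&rtpcp->cblist);

		ncbs += n;
		if (cpu > 0)
			ncbsnz += n;
	}

	/* First "if" (elided): maybe start shrinking by setting
	 * ->percpu_enqueue_lim to 1 and recording a grace-period cookie
	 * from get_state_synchronize_rcu() in ->percpu_dequeue_gpseq. */

	/* Second "if": maybe finish shrinking.  The poll below can be
	 * satisfied by a grace period that ended *before* the count
	 * above ran, which opens the window described next. */
	if (rcu_task_cb_adjust && !ncbsnz &&
	    poll_state_synchronize_rcu(rtp->percpu_dequeue_gpseq)) {
		WRITE_ONCE(rtp->percpu_dequeue_lim, 1);
		for (cpu = 1; cpu < nr_cpu_ids; cpu++) {
			struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);

			WARN_ON_ONCE(rcu_segcblist_n_cbs(&rtpcp->cblist));
		}
	}
	return !!ncbs;	/* The real function computes a more detailed answer. */
}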

To see this, consider the following sequence of events:

o	CPU 0 invokes rcu_tasks_one_gp(), and counts fewer than
	rcu_task_collapse_lim callbacks.  It sees at least one
	callback queued on some other CPU, thus setting ncbsnz
	to a non-zero value.

o	CPU 1 invokes call_rcu_tasks_generic() and loads 42 from
	->percpu_enqueue_lim.  It therefore decides to enqueue its
	callback onto CPU 1's callback list, but is delayed.

o	CPU 0 sees that rcu_task_cb_adjust is non-zero and that the number
	of callbacks does not exceed rcu_task_collapse_lim.  It therefore
	checks percpu_enqueue_lim, and sees that its value is greater
	than the value one.  CPU 0 therefore starts the shift back
	to a single callback list.  It sets ->percpu_enqueue_lim to 1,
	but CPU 1 has already read the old value of 42.  CPU 0 also gets
	a grace-period state value from get_state_synchronize_rcu().

o	CPU 0 sees that ncbsnz is non-zero in its second "if" statement,
	so it declines to finalize the shrink operation.

o	CPU 0 again invokes rcu_tasks_one_gp(), and counts fewer than
	rcu_task_collapse_lim callbacks.  It also sees that there are
	no callbacks queued on any other CPU, and thus sets ncbsnz to zero.

o	CPU 1 resumes execution and enqueues its callback onto its own
	list.  This invalidates the value of ncbsnz.

o	CPU 0 sees that rcu_task_cb_adjust is non-zero and that the number
	of callbacks does not exceed rcu_task_collapse_lim.  It therefore
	checks percpu_enqueue_lim, but sees that its value is already
	unity.  It therefore does not get a new grace-period state value.

o	CPU 0 sees that rcu_task_cb_adjust is non-zero, ncbsnz is zero,
	and that poll_state_synchronize_rcu() says that the grace period
	has completed.  It therefore finalizes the shrink operation,
	setting ->percpu_dequeue_lim to the value one.

o	CPU 0 does a debug check, scanning the other CPUs' callback lists.
	It sees that CPU 1's list has a callback, so it (rightly)
	triggers the WARN_ON_ONCE().  After all, the new value of
	->percpu_dequeue_lim says to not bother looking at CPU 1's
	callback list, which means that this callback will never be
	invoked.  This can result in hangs and maybe even OOMs.

Based on long experience with rcutorture, this is an extremely
low-probability race condition, but it really can happen, especially in
preemptible kernels or within guest OSes.

This commit therefore checks for completion of the grace period
before counting callbacks.  With this change, in the above failure
scenario CPU 0 would know not to prematurely end the shrink operation
because the grace period would not have completed before the count
operation started.
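
A hedged sketch of that reordering follows (again illustrative, with an
assumed _fixed_sketch name, and not the literal upstream patch); the key
difference from the pre-fix sketch is that the grace-period poll is
sampled before the count and invalidated whenever a new shrink
operation starts:

/*
 * Sketch of the fixed ordering; illustrative only, not the literal
 * upstream patch.  Differences from the pre-fix sketch are commented.
 */
static int rcu_tasks_need_gpcb_fixed_sketch(struct rcu_tasks *rtp)
{
	/* Sample the poll *before* counting, so a grace period is treated
	 * as complete only if it ended before the count below began. */
	bool gpdone = poll_state_synchronize_rcu(rtp->percpu_dequeue_gpseq);
	long ncbs = 0, ncbsnz = 0;
	int cpu;

	for (cpu = 0; cpu < smp_load_acquire(&rtp->percpu_dequeue_lim); cpu++) {
		struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
		long n = rcu_segcblist_n_cbs(&rtpcp->cblist);

		ncbs += n;
		if (cpu > 0)
			ncbsnz += n;
	}

	/* First "if" (elided): when a new shrink operation starts here,
	 * record a fresh cookie with get_state_synchronize_rcu() and also
	 * clear gpdone, so the stale sample above cannot end this shrink. */

	if (rcu_task_cb_adjust && !ncbsnz && gpdone) {
		WRITE_ONCE(rtp->percpu_dequeue_lim, 1);
		/* The upstream patch also rechecks ->percpu_dequeue_lim before
		 * this debug scan (see the second bracketed note below), which
		 * avoids a spurious WARN_ON_ONCE(). */
		for (cpu = 1; cpu < nr_cpu_ids; cpu++) {
			struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);

			WARN_ON_ONCE(rcu_segcblist_n_cbs(&rtpcp->cblist));
		}
	}
	return !!ncbs;
}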

[ paulmck: Adjust grace-period end rather than adding RCU reader. ]
[ paulmck: Avoid spurious WARN_ON_ONCE() with ->percpu_dequeue_lim check. ]

Signed-off-by: Zqiang <qiang1.zhang@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-10 09:29:08 +01:00
Kconfig printk changes for 6.2 2022-12-12 09:01:36 -08:00
Kconfig.debug rcu: Make SRCU mandatory 2022-11-29 15:00:06 -08:00
Makefile rcuperf: Change rcuperf to rcuscale 2020-08-24 18:39:24 -07:00
rcu.h printk changes for 6.2 2022-12-12 09:01:36 -08:00
rcu_segcblist.c rcu: Clarify fill-the-gap comment in rcu_segcblist_advance() 2022-04-11 17:28:48 -07:00
rcu_segcblist.h rcu: Mark writes to the rcu_segcblist structure's ->flags field 2022-02-14 10:36:58 -08:00
rcuscale.c rcu/rcuscale: Use call_rcu_hurry() for async reader test 2022-11-29 14:04:33 -08:00
rcutorture.c Merge branches 'doc.2022.10.20a', 'fixes.2022.10.21a', 'lazy.2022.11.30a', 'srcunmisafe.2022.11.09a', 'torture.2022.10.18c' and 'torturescript.2022.10.20a' into HEAD 2022-11-30 13:20:05 -08:00
refscale.c refscale: Convert test_lock spinlock to raw_spinlock 2022-06-21 15:57:04 -07:00
srcutiny.c srcu: Make Tiny synchronize_srcu() check for readers 2022-12-01 15:49:12 -08:00
srcutree.c srcu: Delegate work to the boot cpu if using SRCU_SIZE_SMALL 2023-03-10 09:29:07 +01:00
sync.c rcu/sync: Use call_rcu_hurry() instead of call_rcu 2022-11-29 14:04:33 -08:00
tasks.h rcu-tasks: Handle queue-shrink/callback-enqueue race condition 2023-03-10 09:29:08 +01:00
tiny.c rcu: Make call_rcu() lazy to save power 2022-11-29 14:02:23 -08:00
tree.c Urgent RCU pull request for v6.2 2022-12-21 07:59:57 -08:00
tree.h rcu: Make call_rcu() lazy to save power 2022-11-29 14:02:23 -08:00
tree_exp.h rcu: Suppress smp_processor_id() complaint in synchronize_rcu_expedited_wait() 2023-03-10 09:29:07 +01:00
tree_nocb.h rcu: Shrinker for lazy rcu 2022-11-29 14:02:52 -08:00
tree_plugin.h rcu: Synchronize ->qsmaskinitnext in rcu_boost_kthread_setaffinity() 2022-10-18 14:59:57 -07:00
tree_stall.h sched/debug: Try trigger_single_cpu_backtrace(cpu) in dump_cpu_task() 2022-08-31 05:03:14 -07:00
update.c rcu: Make SRCU mandatory 2022-11-29 15:00:06 -08:00