rcu/nocb: Use appropriate rcu_nocb_lock_irqsave()

Instead of hardcoding the IRQ-save and nocb-lock sequence, use the
consolidated rcu_nocb_lock_irqsave() API (and fix a comment as per
Valentin Schneider's suggestion).

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Frederic Weisbecker, 2021-10-19 02:08:12 +02:00 (committed by Paul E. McKenney)
parent 344e219d7d
commit 7b65dfa32d

@@ -2478,12 +2478,11 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	}
 	/*
-	 * Extract the list of ready callbacks, disabling to prevent
+	 * Extract the list of ready callbacks, disabling IRQs to prevent
 	 * races with call_rcu() from interrupt handlers.  Leave the
 	 * callback counts, as rcu_barrier() needs to be conservative.
 	 */
-	local_irq_save(flags);
-	rcu_nocb_lock(rdp);
+	rcu_nocb_lock_irqsave(rdp, flags);
 	WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
 	pending = rcu_segcblist_n_cbs(&rdp->cblist);
 	div = READ_ONCE(rcu_divisor);
@@ -2546,8 +2545,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 		}
 	}
-	local_irq_save(flags);
-	rcu_nocb_lock(rdp);
+	rcu_nocb_lock_irqsave(rdp, flags);
 	rdp->n_cbs_invoked += count;
 	trace_rcu_batch_end(rcu_state.name, count, !!rcl.head, need_resched(),
 			    is_idle_task(current), rcu_is_callbacks_kthread());