net: Revert the softirq will run annotation in ____napi_schedule().

The lockdep annotation lockdep_assert_softirq_will_run() expects that
either hard or soft interrupts are disabled, because both guarantee that
the "raised" soft-interrupts will be processed once the context is left.

This triggers in flush_smp_call_function_from_idle(), but that path
explicitly calls do_softirq() in case of pending softirqs.
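Roughly (a simplified sketch of that path, not code from this patch),
the idle path looks like the following. local_irq_save() disables hard
interrupts but does not raise hardirq_count(), so the annotation fires
even though pending softirqs are handled right below:

	void flush_smp_call_function_from_idle(void)
	{
		unsigned long flags;

		local_irq_save(flags);
		/* may ____napi_schedule() and raise NET_RX */
		flush_smp_call_function_queue(true);
		if (local_softirq_pending())
			do_softirq();	/* pending softirqs run here */
		local_irq_restore(flags);
	}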

Revert the "softirq will run" annotation in ____napi_schedule() and move
the check back to __netif_rx() as it was. Keep the IRQ-off assert in
____napi_schedule() because this is always required.

Fixes: fbd9a2ceba ("net: Add lockdep asserts to ____napi_schedule().")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Link: https://lore.kernel.org/r/YjhD3ZKWysyw8rc6@linutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -329,12 +329,6 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 #define lockdep_assert_none_held_once()	\
 	lockdep_assert_once(!current->lockdep_depth)
 
-/*
- * Ensure that softirq is handled within the callchain and not delayed and
- * handled by chance.
- */
-#define lockdep_assert_softirq_will_run()	\
-	lockdep_assert_once(hardirq_count() | softirq_count())
 
 #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
@@ -420,7 +414,6 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert_held_read(l)	do { (void)(l); } while (0)
 #define lockdep_assert_held_once(l)	do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()	do { } while (0)
-#define lockdep_assert_softirq_will_run()	do { } while (0)
 
 #define lockdep_recursing(tsk)	(0)

--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4277,7 +4277,6 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 {
 	struct task_struct *thread;
 
-	lockdep_assert_softirq_will_run();
 	lockdep_assert_irqs_disabled();
 
 	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
@@ -4887,7 +4886,7 @@ int __netif_rx(struct sk_buff *skb)
 {
 	int ret;
 
-	lockdep_assert_softirq_will_run();
+	lockdep_assert_once(hardirq_count() | softirq_count());
 
 	trace_netif_rx_entry(skb);
 
 	ret = netif_rx_internal(skb);