irq_work: Fix irq_work_claim() memory ordering

When irq_work_claim() finds IRQ_WORK_PENDING flag already set, we just
return and don't raise a new IPI. We expect the destination to see
and handle our latest updates thanks to the pairing atomic_xchg()
in irq_work_run_list().
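
As a rough illustration of that expectation, here is a minimal userspace
sketch of the run side using C11 atomics (fake_work, fake_run_one and the
flag value are inventions of this sketch, not the kernel implementation):

#include <stdatomic.h>

#define WORK_PENDING	1

struct fake_work {
	atomic_int flags;
	void (*func)(struct fake_work *);
	int data;			/* whatever the claimer published */
};

/* Model of the destination side: the fully ordered exchange on ->flags
 * is what should let the handler observe everything the claimer stored
 * into the work item before it set the PENDING bit. */
static void fake_run_one(struct fake_work *work)
{
	atomic_exchange(&work->flags, 0);	/* pairs with the claim */
	work->func(work);			/* may now read work->data */
}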

But cmpxchg() doesn't guarantee a full memory barrier upon failure. So
it's possible that the destination misses our latest updates.
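
In C11 terms the failing path looks roughly like the sketch below; the names
and flag values are hypothetical, and the relaxed failure ordering is used
here just to make the hole explicit:

#include <stdatomic.h>
#include <stdbool.h>

#define WORK_PENDING	1
#define WORK_CLAIMED	3	/* hypothetical PENDING|BUSY encoding */

/* Rough model of the old claim loop.  When the compare-exchange fails
 * because PENDING is already set, we return without ever executing a
 * fully ordered operation, so our earlier stores to the work item may
 * not yet be visible to the CPU that will run it. */
static bool broken_claim(atomic_int *flags)
{
	int oflags, nflags;

	oflags = atomic_load_explicit(flags, memory_order_relaxed) & ~WORK_PENDING;
	for (;;) {
		nflags = oflags | WORK_CLAIMED;
		if (atomic_compare_exchange_strong_explicit(flags, &oflags, nflags,
							    memory_order_seq_cst,
							    memory_order_relaxed))
			break;			/* success: fully ordered */
		if (oflags & WORK_PENDING)
			return false;		/* failure path: no barrier here */
	}
	return true;
}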

So use atomic_fetch_or() instead: it is unconditionally fully ordered,
does exactly what we want here, and simplifies the code.
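
For comparison, the claim then reduces to a single unconditional RMW.
A userspace sketch, with the same hypothetical names as above (note that
C11's atomic_fetch_or() takes its arguments in the opposite order to the
kernel's atomic_fetch_or()):

#include <stdatomic.h>
#include <stdbool.h>

#define WORK_PENDING	1
#define WORK_CLAIMED	3	/* hypothetical PENDING|BUSY encoding */

/* One fully ordered fetch-or replaces the whole loop: our earlier
 * stores are ordered before the flag update whether or not the work
 * was already pending, so skipping the IPI in that case is safe. */
static bool fixed_claim(atomic_int *flags)
{
	int oflags = atomic_fetch_or(flags, WORK_CLAIMED);	/* seq_cst */

	if (oflags & WORK_PENDING)
		return false;	/* already queued, no new IPI needed */
	return true;		/* caller raises the IPI */
}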

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191108160858.31665-3-frederic@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Frederic Weisbecker 2019-11-08 17:08:56 +01:00, committed by Ingo Molnar
parent 153bedbac2
commit 25269871db
1 changed file with 7 additions and 15 deletions

@@ -29,24 +29,16 @@ static DEFINE_PER_CPU(struct llist_head, lazy_list);
  */
 static bool irq_work_claim(struct irq_work *work)
 {
-	int flags, oflags, nflags;
+	int oflags;
 
+	oflags = atomic_fetch_or(IRQ_WORK_CLAIMED, &work->flags);
 	/*
-	 * Start with our best wish as a premise but only trust any
-	 * flag value after cmpxchg() result.
+	 * If the work is already pending, no need to raise the IPI.
+	 * The pairing atomic_xchg() in irq_work_run() makes sure
+	 * everything we did before is visible.
 	 */
-	flags = atomic_read(&work->flags) & ~IRQ_WORK_PENDING;
-	for (;;) {
-		nflags = flags | IRQ_WORK_CLAIMED;
-		oflags = atomic_cmpxchg(&work->flags, flags, nflags);
-		if (oflags == flags)
-			break;
-		if (oflags & IRQ_WORK_PENDING)
-			return false;
-		flags = oflags;
-		cpu_relax();
-	}
-
+	if (oflags & IRQ_WORK_PENDING)
+		return false;
 	return true;
 }
 