mm/rmap: use atomic_try_cmpxchg in set_tlb_ubc_flush_pending

Use atomic_try_cmpxchg instead of atomic_cmpxchg (*ptr, old, new) == old
in set_tlb_ubc_flush_pending.  The x86 CMPXCHG instruction returns success
in the ZF flag, so this change saves a compare after cmpxchg (and the
related move instruction in front of cmpxchg).

Also, try_cmpxchg implicitly assigns the old *ptr value to "old" when the
cmpxchg fails.
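
As a rough user-space sketch of that semantic (using C11 <stdatomic.h>
rather than the kernel's atomic_t API, so the helper and variable names
below are purely illustrative), atomic_compare_exchange_strong() likewise
returns a boolean and writes the observed value back into the "expected"
operand on failure, which is what lets a retry loop drop the explicit
compare and the manual reload:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  /* Illustrative counter standing in for an atomic_t such as tlb_flush_batched. */
  static _Atomic int counter = 7;

  /*
   * cmpxchg-style pattern: the caller compares the returned old value and
   * copies it back by hand.  try_cmpxchg-style pattern: the primitive
   * returns success/failure and refreshes "expected" on failure itself.
   */
  static bool reset_if_unchanged(int *expected)
  {
          return atomic_compare_exchange_strong(&counter, expected, 1);
  }

  int main(void)
  {
          int snap = 3;   /* stale snapshot: the first attempt fails */

          while (!reset_if_unchanged(&snap))
                  printf("raced, observed %d\n", snap);   /* snap refreshed to 7 */

          printf("counter reset, last observed value %d\n", snap);
          return 0;
  }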

No functional change intended.

Link: https://lkml.kernel.org/r/20230227214228.3533299-1-ubizjak@gmail.com
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/rmap.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -644,7 +644,7 @@ void try_to_unmap_flush_dirty(void)
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
 {
         struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
-        int batch, nbatch;
+        int batch;
 
         arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
         tlb_ubc->flush_required = true;
@@ -662,11 +662,8 @@ retry:
                  * overflow. Reset `pending' and `flushed' to be 1 and 0 if
                  * `pending' becomes large.
                  */
-                nbatch = atomic_cmpxchg(&mm->tlb_flush_batched, batch, 1);
-                if (nbatch != batch) {
-                        batch = nbatch;
+                if (!atomic_try_cmpxchg(&mm->tlb_flush_batched, &batch, 1))
                         goto retry;
-                }
         } else {
                 atomic_inc(&mm->tlb_flush_batched);
         }
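
For context, a condensed user-space approximation of how the retry loop
behaves after the patch (C11 atomics in place of the kernel atomic API;
PENDING_MASK and PENDING_LARGE are made-up stand-ins for the real
threshold constants, and this is not the kernel function itself): when
the compare-exchange loses a race it reports failure, and the goto
re-reads the counter and re-evaluates the threshold.

  #include <stdatomic.h>

  /* Made-up stand-ins for the kernel's pending-counter layout and threshold. */
  #define PENDING_MASK    0x0000ffff
  #define PENDING_LARGE   (PENDING_MASK / 2)

  static _Atomic int tlb_flush_batched;

  /* Shape of the patched logic: reset the counter to 1 once the pending
   * part grows large, otherwise just increment it. */
  static void note_pending_flush(void)
  {
          int batch;

  retry:
          batch = atomic_load(&tlb_flush_batched);
          if ((batch & PENDING_MASK) > PENDING_LARGE) {
                  /* On a lost race "batch" is refreshed and the goto
                   * re-checks the threshold with the new value. */
                  if (!atomic_compare_exchange_strong(&tlb_flush_batched,
                                                      &batch, 1))
                          goto retry;
          } else {
                  atomic_fetch_add(&tlb_flush_batched, 1);
          }
  }

  int main(void)
  {
          for (int i = 0; i < 10; i++)
                  note_pending_flush();
          return atomic_load(&tlb_flush_batched) != 10;
  }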