locking/qspinlock: Use smp_cond_acquire() in pending code

The newly introduced smp_cond_acquire() was used to replace the
slowpath lock acquisition loop. Similarly, the new function can also
be applied to the pending bit locking loop. This patch uses the new
function in that loop.
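
For readers less familiar with the idiom, the hunk below swaps a loop that performs an acquire load on every iteration for a relaxed spin whose ordering is supplied once, at the end, by smp_cond_acquire(). A minimal userspace analogue of the two patterns, written with C11 atomics (the names lockval, Q_LOCKED_MASK and wait_for_unlock_* are illustrative stand-ins, not kernel identifiers), might look like this:

	#include <stdatomic.h>

	/* Illustrative stand-ins; not the kernel's qspinlock types. */
	static atomic_uint lockval;              /* models lock->val      */
	#define Q_LOCKED_MASK 0xffU              /* models _Q_LOCKED_MASK */

	/* Old pattern: an acquire load on every spin iteration. */
	static unsigned int wait_for_unlock_old(void)
	{
		unsigned int val;

		while ((val = atomic_load_explicit(&lockval,
				memory_order_acquire)) & Q_LOCKED_MASK)
			;	/* cpu_relax() elided in this sketch */
		return val;
	}

	/*
	 * New pattern: spin with relaxed loads, then order once at the
	 * end, which is roughly what smp_cond_acquire() does via its
	 * control dependency plus smp_rmb().
	 */
	static unsigned int wait_for_unlock_new(void)
	{
		unsigned int val;

		while ((val = atomic_load_explicit(&lockval,
				memory_order_relaxed)) & Q_LOCKED_MASK)
			;
		atomic_thread_fence(memory_order_acquire);
		return val;
	}

The kernel helper relies on the control dependency of the spin plus smp_rmb() rather than a full acquire fence, but for lock acquisition the result is the same: later loads and stores cannot be reordered before the observation that the locked byte is clear.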

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1449778666-13593-1-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit cb037fdad6
parent eaff0e7003
Author:     Waiman Long <Waiman.Long@hpe.com>
AuthorDate: 2015-12-10 15:17:44 -05:00
Committer:  Ingo Molnar <mingo@kernel.org>

@@ -358,8 +358,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * sequentiality; this is because not all clear_pending_set_locked()
 	 * implementations imply full barriers.
 	 */
-	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
-		cpu_relax();
+	smp_cond_acquire(!(atomic_read(&lock->val) & _Q_LOCKED_MASK));
 
 	/*
 	 * take ownership and clear the pending bit.
@@ -435,7 +434,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 *
 	 * The PV pv_wait_head_or_lock function, if active, will acquire
 	 * the lock and return a non-zero value. So we have to skip the
-	 * smp_load_acquire() call. As the next PV queue head hasn't been
+	 * smp_cond_acquire() call. As the next PV queue head hasn't been
 	 * designated yet, there is no way for the locked value to become
 	 * _Q_SLOW_VAL. So both the set_locked() and the
 	 * atomic_cmpxchg_relaxed() calls will be safe.
@@ -466,7 +465,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 			break;
 		}
 		/*
-		 * The smp_load_acquire() call above has provided the necessary
+		 * The smp_cond_acquire() call above has provided the necessary
		 * acquire semantics required for locking. At most two
 		 * iterations of this loop may be ran.
 		 */