linux-stable/kernel/locking
Xunlei Pang 2a1c602994 rtmutex: Deboost before waking up the top waiter
We should deboost before waking the high-priority task, such that we
don't run two tasks with the same "state" (priority, deadline,
sched_class, etc.).

In order to make sure the boosting task doesn't start running between
unlock and deboost (due to a 'spurious' wakeup), we move the deboost
under the wait_lock, so that it is serialized against the wait loop in
__rt_mutex_slowlock().

Doing the deboost early can, however, lead to priority inversion if
current gets preempted after the deboost but before waking our
high-prio task; hence we disable preemption before doing the deboost
and enable it again only after the wakeup is done.

This gets us the right semantic order, but more importantly, this
change ensures pointer stability for the next patch, where we have
rt_mutex_setprio() cache a pointer to the top-most waiter task. If we
did the wakeup first and only then deboosted, as before this change,
that pointer might point into thin air.
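
For illustration only, a minimal sketch of the resulting release
ordering (not the actual patch). It assumes the kernel/locking/rtmutex.c
context for the wake_q, preempt and wait_lock primitives;
deboost_current() is a placeholder name for the real priority-adjustment
helper, not a function in the tree:

/*
 * Illustrative sketch of the ordering described above.
 */
static void rt_mutex_slowunlock_sketch(struct rt_mutex *lock)
{
	DEFINE_WAKE_Q(wake_q);
	unsigned long flags;

	raw_spin_lock_irqsave(&lock->wait_lock, flags);

	/* Keep preemption off from before the deboost until after the wakeup. */
	preempt_disable();

	/*
	 * Deboost while still holding wait_lock, so the deboost is
	 * serialized against the wait loop in __rt_mutex_slowlock().
	 */
	deboost_current();	/* placeholder for the deboost helper */

	/* Queue the top waiter; the actual wakeup happens after the unlock. */
	wake_q_add(&wake_q, rt_mutex_top_waiter(lock)->task);

	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

	/* Wake the top waiter, and only afterwards allow preemption again. */
	wake_up_q(&wake_q);
	preempt_enable();
}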

[peterz: Changelog + patch munging]
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: juri.lelli@arm.com
Cc: bigeasy@linutronix.de
Cc: mathieu.desnoyers@efficios.com
Cc: jdesfossez@efficios.com
Cc: bristot@redhat.com
Link: http://lkml.kernel.org/r/20170323150216.110065320@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-04-04 11:44:05 +02:00
lockdep.c locking/lockdep: Handle statically initialized PER_CPU locks properly 2017-03-16 09:57:08 +01:00
lockdep_internals.h lockdep: Limit static allocations if PROVE_LOCKING_SMALL is defined 2016-11-18 11:33:19 -08:00
lockdep_proc.c Replace <asm/uaccess.h> with <linux/uaccess.h> globally 2016-12-24 11:46:01 -08:00
lockdep_states.h
locktorture.c sched/headers: Prepare for the removal of <linux/rtmutex.h> from <linux/sched.h> 2017-03-02 08:42:32 +01:00
Makefile locking/ww_mutex: Begin kselftests for ww_mutex 2017-01-14 11:37:14 +01:00
mcs_spinlock.h locking/core: Remove cpu_relax_lowlatency() users 2016-11-16 10:15:10 +01:00
mutex-debug.c locking/mutex: Rework mutex::owner 2016-10-25 11:31:50 +02:00
mutex-debug.h locking/mutex: Fix lockdep_assert_held() fail 2017-01-30 11:42:59 +01:00
mutex.c sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> 2017-03-02 08:42:34 +01:00
mutex.h locking/mutex: Fix lockdep_assert_held() fail 2017-01-30 11:42:59 +01:00
osq_lock.c locking/osq: Break out of spin-wait busy waiting loop for a preempted vCPU in osq_lock() 2016-11-22 12:48:10 +01:00
percpu-rwsem.c locking/percpu-rwsem: Replace waitqueue with rcuwait 2017-01-14 11:14:35 +01:00
qrwlock.c locking/core: Remove cpu_relax_lowlatency() users 2016-11-16 10:15:10 +01:00
qspinlock.c locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec() 2016-06-27 11:37:41 +02:00
qspinlock_paravirt.h locking/pvqspinlock: Don't wait if vCPU is preempted 2017-01-12 09:35:57 +01:00
qspinlock_stat.h sched/headers: Prepare for new header dependencies before moving code to <linux/sched/clock.h> 2017-03-02 08:42:27 +01:00
rtmutex-debug.c futex: Remove rt_mutex_deadlock_account_*() 2017-03-23 19:10:07 +01:00
rtmutex-debug.h futex: Remove rt_mutex_deadlock_account_*() 2017-03-23 19:10:07 +01:00
rtmutex.c rtmutex: Deboost before waking up the top waiter 2017-04-04 11:44:05 +02:00
rtmutex.h futex: Remove rt_mutex_deadlock_account_*() 2017-03-23 19:10:07 +01:00
rtmutex_common.h rtmutex: Deboost before waking up the top waiter 2017-04-04 11:44:05 +02:00
rwsem-spinlock.c locking/rwsem: Fix down_write_killable() for CONFIG_RWSEM_GENERIC_SPINLOCK=y 2017-03-16 09:28:30 +01:00
rwsem-xadd.c sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> 2017-03-02 08:42:34 +01:00
rwsem.c locking/lockdep: Add new check to lock_downgrade() 2017-03-16 09:57:07 +01:00
rwsem.h locking/rwsem: Protect all writes to owner by WRITE_ONCE() 2016-06-08 15:16:59 +02:00
semaphore.c sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> 2017-03-02 08:42:34 +01:00
spinlock.c locking/spinlocks: Remove the unused spin_lock_bh_nested() API 2017-01-12 09:33:39 +01:00
spinlock_debug.c locking/spinlock/debug: Remove spinlock lockup detection code 2017-02-10 09:09:49 +01:00
test-ww_mutex.c locking/ww-mutex: Limit stress test to 2 seconds 2017-03-30 09:49:47 +02:00