linux-stable/kernel/locking
Sebastian Andrzej Siewior 4051a81774 locking/lockdep: Use sched_clock() for random numbers
Since the rewrite of prandom_u32() in the commit mentioned below, the
function uses sleeping locks while extracting random numbers and filling
the batch.
This breaks lockdep on PREEMPT_RT because lock_pin_lock() disables
interrupts while calling __lock_pin_lock(). The random number generation
can't be moved to an earlier point, before interrupts are disabled,
because the main user of the function (rq_pin_lock()) invokes it only
after disabling interrupts in order to acquire the lock.
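
For context, a minimal sketch of the call ordering described above, loosely
modeled on the scheduler's rq_lock_irqsave()/rq_pin_lock() pair; the helper
name example_rq_lock() and its body are illustrative assumptions, not the
verbatim kernel code:

    /*
     * Illustrative sketch only: the scheduler disables interrupts and
     * acquires rq->lock before pinning it, so the pin cookie must be
     * generated in an IRQs-off, non-sleeping context. A prandom_u32()
     * that takes sleeping locks cannot run here on PREEMPT_RT.
     */
    static void example_rq_lock(struct rq *rq, struct rq_flags *rf)
    {
        raw_spin_rq_lock_irqsave(rq, rf->flags); /* interrupts off */
        rq_pin_lock(rq, rf); /* -> lockdep_pin_lock() -> __lock_pin_lock() */
    }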

The cookie does not require cryptographic-quality random numbers: its
goal is merely to provide a value that is unlikely to repeat, so that
unexpected "unlock + lock" sites can be noticed.

Use sched_clock() to provide random numbers.
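
A minimal sketch of what the change amounts to in the pin-cookie path; the
helper name lockdep_pin_cookie() and the exact mask/offset are illustrative
assumptions rather than the verbatim kernel code:

    /*
     * Sketch only (hypothetical helper): derive the 16-bit pin cookie
     * from sched_clock() instead of prandom_u32(). sched_clock() may be
     * called with interrupts disabled, and its low bits change often
     * enough that an unexpected "unlock + lock" of a pinned lock is
     * still caught.
     */
    static inline unsigned int lockdep_pin_cookie(void)
    {
        /* +1 keeps the cookie non-zero; the mask keeps it within 16 bits */
        return 1 + ((unsigned int)sched_clock() & 0xffff);
    }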

Fixes: a0103f4d86f88 ("random32: use real rng for non-deterministic randomness")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/YoNn3pTkm5+QzE5k@linutronix.de
2022-06-13 10:29:57 +02:00
irqflag-debug.c
lock_events.c
lock_events.h
lock_events_list.h
lockdep.c locking/lockdep: Use sched_clock() for random numbers 2022-06-13 10:29:57 +02:00
lockdep_internals.h locking/lockdep: Iterate lock_classes directly when reading lockdep files 2022-02-16 15:57:58 +01:00
lockdep_proc.c locking/lockdep: Iterate lock_classes directly when reading lockdep files 2022-02-16 15:57:58 +01:00
lockdep_states.h
locktorture.c locktorture,rcutorture,torture: Always log error message 2021-12-07 16:36:17 -08:00
Makefile locking/ww_mutex: Implement rtmutex based ww_mutex API functions 2021-08-17 19:05:26 +02:00
mcs_spinlock.h
mutex-debug.c locking/ww_mutex: Gather mutex_waiter initialization 2021-08-17 19:04:41 +02:00
mutex.c locking/mutex: Make contention tracepoints more consistent wrt adaptive spinning 2022-04-05 10:24:36 +02:00
mutex.h locking/mutex: Move the 'struct mutex_waiter' definition from <linux/mutex.h> to the internal header 2021-08-17 18:24:31 +02:00
osq_lock.c
percpu-rwsem.c locking: Apply contention tracepoints in the slow path 2022-04-05 10:24:35 +02:00
qrwlock.c locking/qrwlock: Change "queue rwlock" to "queued rwlock" 2022-05-11 16:27:04 +02:00
qspinlock.c locking: Apply contention tracepoints in the slow path 2022-04-05 10:24:35 +02:00
qspinlock_paravirt.h
qspinlock_stat.h
rtmutex.c locking: Apply contention tracepoints in the slow path 2022-04-05 10:24:35 +02:00
rtmutex_api.c locking/rtmutex: Add rt_mutex_lock_nest_lock() and rt_mutex_lock_killable(). 2021-12-04 10:56:23 +01:00
rtmutex_common.h locking/rtmutex: Dont dereference waiter lockless 2021-08-25 15:42:32 +02:00
rwbase_rt.c locking: Apply contention tracepoints in the slow path 2022-04-05 10:24:35 +02:00
rwsem.c locking: Apply contention tracepoints in the slow path 2022-04-05 10:24:35 +02:00
semaphore.c locking: Apply contention tracepoints in the slow path 2022-04-05 10:24:35 +02:00
spinlock.c locking/rwlocks: introduce write_lock_nested 2022-01-22 08:33:37 +02:00
spinlock_debug.c locking/rwlock: Provide RT variant 2021-08-17 17:50:51 +02:00
spinlock_rt.c locking/rwlocks: introduce write_lock_nested 2022-01-22 08:33:37 +02:00
test-ww_mutex.c locking/ww-mutex: Fix uninitialized use of ret in test_aa() 2021-10-01 13:57:49 +02:00
ww_mutex.h locking/ww_mutex: Add rt_mutex based lock type and accessors 2021-08-17 19:05:11 +02:00
ww_rt_mutex.c kernel/locking: Use a pointer in ww_mutex_trylock(). 2021-11-17 14:48:49 +01:00