linux-stable/tools/perf/Documentation/perf-lock.txt
Namhyung Kim 3477f079fe perf lock contention: Add -o/--lock-owner option
When there are many lock contentions in the system, people sometimes want
to know who caused the contention, IOW who's the owner of the locks.

The -o/--lock-owner option tries to follow the lock owners of the
contended mutexes and rwsems from BPF, and then attributes the
contention time to the owner instead of the waiter.  It's a best-effort
approach that reads the owner info at the time of the contention; it
doesn't guarantee precise tracking if the owner changes over time.

Currently it only handles mutexes and rwsems, which have an owner field
in their struct; that field basically points to the task_struct that
owns the lock at the moment.

Technically the field's type is atomic_long_t, and some of its LSB bits
are used for other meanings, so they need to be cleared when casting the
value to a pointer to task_struct.

Also, atomic_long_t is a typedef of the 32-bit or 64-bit atomic type
depending on the arch, which is a wrapper struct around the counter
value.  I'm not aware of a proper way to access those kernel atomic
types from BPF, so I just read the internal counter value directly.
Please let me know if there's a better way.
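
For illustration, a minimal sketch of the mutex case (this is not the
actual perf BPF skeleton code; the helper name is made up here, it
assumes CO-RE via vmlinux.h, and it masks the low flag bits like the
kernel's MUTEX_FLAGS):

  #include "vmlinux.h"
  #include <bpf/bpf_core_read.h>

  static struct task_struct *get_mutex_owner(struct mutex *lock)
  {
          unsigned long owner;

          /* owner is atomic_long_t; read its internal counter directly */
          owner = (unsigned long)BPF_CORE_READ(lock, owner.counter);

          /* clear the flag bits kept in the LSBs before using the pointer */
          return (struct task_struct *)(owner & ~7UL);
  }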

When the -o/--lock-owner option is used, it switches to the task
aggregation mode like the -t/--threads option does.  However, it cannot
get the owner for other lock types like spinlocks, and sometimes not
even for mutexes.

  $ sudo ./perf lock con -abo -- ./perf bench sched pipe
  # Running 'sched/pipe' benchmark:
  # Executed 1000000 pipe operations between two processes

       Total time: 4.766 [sec]

         4.766540 usecs/op
           209795 ops/sec
   contended   total wait     max wait     avg wait          pid   owner

         403    565.32 us     26.81 us      1.40 us           -1   Unknown
           4     27.99 us      8.57 us      7.00 us      1583145   sched-pipe
           1      8.25 us      8.25 us      8.25 us      1583144   sched-pipe
           1      2.03 us      2.03 us      2.03 us         5068   chrome

As you can see, the owner is unknown in most cases.  But if we filter
only for the mutex locks, it's more likely to get the owners.

  $ sudo ./perf lock con -abo -Y mutex -- ./perf bench sched pipe
  # Running 'sched/pipe' benchmark:
  # Executed 1000000 pipe operations between two processes

       Total time: 4.910 [sec]

         4.910435 usecs/op
           203647 ops/sec
   contended   total wait     max wait     avg wait          pid   owner

           2     15.50 us      8.29 us      7.75 us      1582852   sched-pipe
           7      7.20 us      2.47 us      1.03 us           -1   Unknown
           1      6.74 us      6.74 us      6.74 us      1582851   sched-pipe

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Hao Luo <haoluo@google.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <song@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: bpf@vger.kernel.org
Link: https://lore.kernel.org/r/20230207002403.63590-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2023-02-08 10:33:32 -03:00


perf-lock(1)
============

NAME
----
perf-lock - Analyze lock events

SYNOPSIS
--------
[verse]
'perf lock' {record|report|script|info|contention}

DESCRIPTION
-----------
You can analyze various lock behaviours
and statistics with this 'perf lock' command.

  'perf lock record <command>' records lock events
  from the start to the end of <command>, and produces
  the file "perf.data" which contains the tracing results
  of lock events.

  'perf lock report' reports statistical data.

  'perf lock script' shows raw lock events.

  'perf lock info' shows metadata like threads or addresses
  of lock instances.

  'perf lock contention' shows contention statistics.
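
For example, a typical workflow records lock events for a workload and
then reports the statistics (the workload below is only an illustration):

  $ perf lock record sleep 1
  $ perf lock report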

COMMON OPTIONS
--------------
-i::
--input=<file>::
        Input file name. (default: perf.data unless stdin is a fifo)

-v::
--verbose::
        Be more verbose (show symbol address, etc).

-q::
--quiet::
        Do not show any warnings or messages. (Suppress -v)

-D::
--dump-raw-trace::
        Dump raw trace in ASCII.

-f::
--force::
        Don't complain, do it.

--vmlinux=<file>::
        vmlinux pathname

--kallsyms=<file>::
        kallsyms pathname

REPORT OPTIONS
--------------
-k::
--key=<value>::
        Sorting key. Possible values: acquired (default), contended,
        avg_wait, wait_total, wait_max, wait_min.

-F::
--field=<value>::
        Output fields. By default it shows all the fields but users can
        customize that using this. Possible values: acquired, contended,
        avg_wait, wait_total, wait_max, wait_min.

-c::
--combine-locks::
        Merge lock instances in the same class (based on name).

-t::
--threads::
        The -t option shows per-thread lock statistics like below:

          $ perf lock report -t -F acquired,contended,avg_wait

                        Name   acquired   contended   avg wait (ns)

                        perf     240569           9            5784
                     swapper     106610          19             543
                      :15789      17370           2           14538
                ContainerMgr       8981           6             874
                       sleep       5275           1           11281
             ContainerThread       4416           4             944
             RootPressureThr       3215           5            1215
                 rcu_preempt       2954           0               0
                ContainerMgr       2560           0               0
                     unnamed       1873           0               0
             EventManager_De       1845           1             636
             futex-default-S       1609           0               0

-E::
--entries=<value>::
        Display this many entries.
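
For example, to merge lock instances of the same class and sort them by
total wait time:

  $ perf lock report -c -k wait_total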

INFO OPTIONS
------------
-t::
--threads::
        dump thread list in perf.data

-m::
--map::
        dump map of lock instances (address:name table)
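
For example, to dump the thread list and the lock instance map from a
recorded perf.data file:

  $ perf lock info -t
  $ perf lock info -m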

CONTENTION OPTIONS
------------------
-k::
--key=<value>::
        Sorting key. Possible values: contended, wait_total (default),
        wait_max, wait_min, avg_wait.

-F::
--field=<value>::
        Output fields. By default it shows all fields except wait_min,
        and users can customize that using this. Possible values:
        contended, wait_total, wait_max, wait_min, avg_wait.

-t::
--threads::
        Show per-thread lock contention stat

-b::
--use-bpf::
        Use BPF program to collect lock contention stats instead of
        using the input data.

-a::
--all-cpus::
        System-wide collection from all CPUs.

-C::
--cpu=<value>::
        Collect samples only on the list of CPUs provided. Multiple CPUs can be
        provided as a comma-separated list with no space: 0,1. Ranges of CPUs
        are specified with -: 0-2. Default is to monitor all CPUs.

-p::
--pid=<value>::
        Record events on existing process ID (comma separated list).

--tid=<value>::
        Record events on existing thread ID (comma separated list).

--map-nr-entries=<value>::
        Maximum number of BPF map entries (default: 10240).

--max-stack=<value>::
        Maximum stack depth when collecting lock contention (default: 8).

--stack-skip=<value>::
        Number of stack entries to skip when finding a lock caller (default: 3).

-E::
--entries=<value>::
        Display this many entries.

-l::
--lock-addr::
        Show lock contention stat by address

-o::
--lock-owner::
        Show lock contention stat by owners. Implies --threads and
        requires --use-bpf.

-Y::
--type-filter=<value>::
        Show lock contention only for the given lock types (comma separated list).
        Available values are:
        semaphore, spinlock, rwlock, rwlock:R, rwlock:W, rwsem, rwsem:R, rwsem:W,
        rtmutex, rwlock-rt, rwlock-rt:R, rwlock-rt:W, pcpu-sem, pcpu-sem:R, pcpu-sem:W,
        mutex

        Note that the RW-variants of locks have :R and :W suffixes. Names without
        a suffix are shortcuts for both variants. Ex) rwsem = rwsem:R + rwsem:W.

-L::
--lock-filter=<value>::
        Show lock contention only for given lock addresses or names (comma separated list).

-S::
--callstack-filter=<value>::
        Show lock contention only if the callstack contains the given string.
        Note that it matches the substring so 'rq' would match both 'raw_spin_rq_lock'
        and 'irq_enter_rcu'.
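
For example, to limit BPF-based contention tracking to rwsems, or to show
only entries whose callstack contains a given substring (the workload and
the substring here are only illustrations):

  $ sudo perf lock con -ab -Y rwsem -- sleep 1
  $ sudo perf lock con -ab -S unix_stream -- sleep 1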

SEE ALSO
--------
linkperf:perf[1]