Merge tag 'perf-tools-for-v6.6-1-2023-09-05' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools

Pull perf tools updates from Arnaldo Carvalho de Melo:
 "perf tools maintainership:

   - Add git information for perf-tools and perf-tools-next trees and
     branches to the MAINTAINERS file. That is where development now
     takes place; Namhyung Kim and I have write access, with more
     people to come as we emulate other maintainer groups.

  perf record:

   - Record kernel data maps when 'perf record --data' is used, so that
     global variables can be resolved and used in tools that do data
     profiling.
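
     A minimal sketch (data profiling proper would typically go via
     'perf mem record' / 'perf mem report'):

       # perf record --data -a -- sleep 1
       # perf report --mem-mode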

  perf trace:

   - Remove the old, experimental support for BPF events, in which a .c
     file was passed as an event ("perf trace -e hello.c") to be
     compiled and loaded on the fly.

     The only known user of that feature, shipped with the kernel as an
     example for such events, augmented the raw_syscalls tracepoints;
     it was converted to a libbpf skeleton, reusing all the user space
     components and the BPF code connected to the syscalls.

     In the end, only the way the BPF part is glued to the user space
     type beautifiers changed, now being done via libbpf skeletons.

     The next step is to use BTF to do pretty printing of all syscall
     types, as discussed with Alan Maguire and others.

     Now, on a perf built with BUILD_BPF_SKEL=1, we get most if not all
     paths/filenames/strings, some of the networking data structures,
     perf_event_attr, etc. For instance, systemwide tracing of
     nanosleep calls and perf_event_open syscalls while 'perf stat'
     runs 'sleep' for 5 seconds:

      # perf trace -a -e *nanosleep,perf* perf stat -e cycles,instructions sleep 5
         0.000 (   9.034 ms): perf/327641 perf_event_open(attr_uptr: { type: 0 (PERF_TYPE_HARDWARE), size: 136, config: 0 (PERF_COUNT_HW_CPU_CYCLES), sample_type: IDENTIFIER, read_format: TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING, disabled: 1, inherit: 1, enable_on_exec: 1, exclude_guest: 1 }, pid: 327642 (perf), cpu: -1, group_fd: -1, flags: FD_CLOEXEC) = 3
         9.039 (   0.006 ms): perf/327641 perf_event_open(attr_uptr: { type: 0 (PERF_TYPE_HARDWARE), size: 136, config: 0x1 (PERF_COUNT_HW_INSTRUCTIONS), sample_type: IDENTIFIER, read_format: TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING, disabled: 1, inherit: 1, enable_on_exec: 1, exclude_guest: 1 }, pid: 327642 (perf-exec), cpu: -1, group_fd: -1, flags: FD_CLOEXEC) = 4
             ? (           ): gpm/991  ... [continued]: clock_nanosleep())               = 0
        10.133 (           ): sleep/327642 clock_nanosleep(rqtp: { .tv_sec: 5, .tv_nsec: 0 }, rmtp: 0x7ffd36f83ed0) ...
             ? (           ): pool-gsd-smart/3051  ... [continued]: clock_nanosleep())   = 0
        30.276 (           ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ...
       223.215 (1000.430 ms): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) = 0
        30.276 (2000.394 ms): gpm/991  ... [continued]: clock_nanosleep())               = 0
      1230.814 (           ): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) ...
      1230.814 (1000.404 ms): pool-gsd-smart/3051  ... [continued]: clock_nanosleep())   = 0
      2030.886 (           ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ...
      2237.709 (1000.153 ms): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) = 0
             ? (           ): crond/1172  ... [continued]: clock_nanosleep())            = 0
      3242.699 (           ): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) ...
      2030.886 (2000.385 ms): gpm/991  ... [continued]: clock_nanosleep())               = 0
      3728.078 (           ): crond/1172 clock_nanosleep(rqtp: { .tv_sec: 60, .tv_nsec: 0 }, rmtp: 0x7ffe0971dcf0) ...
      3242.699 (1000.158 ms): pool-gsd-smart/3051  ... [continued]: clock_nanosleep())   = 0
      4031.409 (           ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ...
        10.133 (5000.375 ms): sleep/327642  ... [continued]: clock_nanosleep())          = 0

      Performance counter stats for 'sleep 5':

             2,617,347      cycles
             1,855,997      instructions                     #    0.71  insn per cycle

           5.002282128 seconds time elapsed

           0.000855000 seconds user
           0.000852000 seconds sys

  perf annotate:

   - Building with binutils' libopcodes is now opt-in
     (BUILD_NONDISTRO=1) for licensing reasons, and we missed a build
     test in the tools/perf/tests makefile.

     Since we now default to NDEBUG=1, which compiles asserts out, we
     ended up segfaulting when building with BUILD_NONDISTRO=1 because
     a needed initialization routine was being "error checked" via an
     assert, and so was never called at all.

     Fix it by explicitly checking the result and aborting instead if it
     fails.

     We'd better propagate the error back, but at least 'perf annotate'
     on samples collected for a BPF program works again when perf is
     built with BUILD_NONDISTRO=1.

  perf report/top:

   - Add back the TUI hierarchy mode header, seen when using 'perf
     report/top --hierarchy'.

   - Fix the number of entries for the 'e' key in the TUI, which was
     preventing navigation of lines when expanding an entry.

  perf report/script:

   - Support cross-platform register handling, allowing a perf.data
     file collected on one architecture to have its sampled registers
     correctly displayed when analysis tools such as 'perf report' and
     'perf script' are used on a different architecture.
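
     A sketch of such a round trip (the 'iregs' field assumes
     --intr-regs was used at record time):

       aarch64 # perf record --intr-regs -a -- sleep 1
       x86_64  $ perf script -F ip,sym,iregs -i perf.data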

   - Fix handling of event attributes in pipe mode, i.e. when one uses:

        perf record -o - | perf report -i -

     and no perf.data file is involved.

   - Handle files generated via pipe mode with one version of perf and
     then read, also via pipe mode, with a different version of perf,
     where the event attr record may have changed: use the record size
     field to properly support this version mismatch.
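
     I.e., something like this, with a different perf version on each
     side of the pipe (binary names here are hypothetical), now works:

       $ perf-6.4 record -o - -- true | perf-6.6 report -i -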

  perf probe:

   - Accessing global variables from uprobes isn't supported; make the
     error message state that instead of claiming that some minimal
     kernel version is needed to have that feature. This seems to be
     just a tool limitation, the kernel probably has all that is
     needed.

  perf tests:

   - Fix a reference count related leak in the dlfilter v0 API, where
     the result of a thread__find_symbol_fb() call was not matched with
     an addr_location__exit() to drop the reference counts of the
     resolved components (machine, thread, map, symbol, etc). Add a
     dlfilter test to make sure that doesn't regress.

   - Lots of fixes for the 'perf test' entries written as shell
     scripts, addressing problems found with the shellcheck utility.

   - Fixes for 'perf test' shell scripts testing features enabled when
     perf is built with BUILD_BPF_SKEL=1, such as 'perf stat' BPF
     counters.

   - Add a perf record sample filtering test, covering things like the
     following example, which gets implemented as a BPF filter attached
     to the event:

       # perf record -e task-clock -c 10000 --filter 'ip < 0xffffffff00000000'

   - Improve the way the task_analyzer test checks if libtraceevent is
     linked, using 'perf version --build-options' instead of the more
     expensive 'perf record -e "sched:sched_switch"'.
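
     Roughly (the exact build-options output format may vary):

       $ perf version --build-options | grep libtraceevent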

   - Add support for RISC-V in the mmap-basic test (this went as well
     via the RISC-V tree, same contents).

  libperf:

   - Implement RISC-V mmap support (this went as well via the RISC-V
     tree, same contents).

  perf script:

   - New script that converts perf.data files to the Firefox Profiler
     format so that one can use the visualizer at
     https://profiler.firefox.com/. Done by Anup Sharma as part of this
     year's Google Summer of Code.

     One can generate the output and upload it to the web interface,
     but Anup also automated everything:

       perf script gecko -F 99 -a sleep 60
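
     It can also be done in two steps, e.g. (a sketch, assuming the
     gecko-record/gecko-report helpers that back the one-liner above):

       # perf script record gecko -F 99 -a sleep 60
       # perf script report gecko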

   - Support syscall name parsing on arm64.

   - Print "cgroup" field on the same line as "comm".

  perf bench:

   - Add a new 'uprobe' benchmark to measure the overhead of uprobes
     with/without BPF programs attached to them.
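
     E.g., running all the variants of the new benchmark:

       # perf bench uprobe all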

   - Breakpoints are not available on power9, so skip that test.

  perf stat:

   - Add #num_cpus_online literal to be used in 'perf stat' metrics, and
     add this extra 'perf test' check that exemplifies its purpose:

  	TEST_ASSERT_VAL("#num_cpus_online",
                         expr__parse(&num_cpus_online, ctx, "#num_cpus_online") == 0);
  	TEST_ASSERT_VAL("#num_cpus", expr__parse(&num_cpus, ctx, "#num_cpus") == 0);
  	TEST_ASSERT_VAL("#num_cpus >= #num_cpus_online", num_cpus >= num_cpus_online);

  Miscellaneous:

   - Improve tool startup time by lazily reading PMU, JSON, sysfs data.

   - Improve error reporting in the parsing of events, passing YYLTYPE
     to error routines, so that the output can show where the parsing
     error was found.

   - Add 'perf test' entries to check the event parsing improvements.

   - Fix various leaks detected by -fsanitize=address, mostly things
     that would be freed at tool exit, including:

       - Free evsel->filter in the destructor.

       - Allow tools to register a thread->priv destructor and use it in
         'perf trace'.

       - Free evsel->priv in 'perf trace'.

       - Free the string returned by synthesize_perf_probe_point() when
         the caller fails to do all it needs.

   - Adjust various compiler options to not treat some warnings as
     errors when building with broken headers found in things like
     python, flex and bison, as we otherwise build with -Werror. Some
     are for gcc, some for clang, some for specific versions of those,
     some for specific versions of flex or bison, or for specific
     combinations of these components, bah.

   - Allow customization of clang options for the BPF target. This
     helps building on Gentoo, where there are other oddities in which
     the BPF target gets passed some compiler options intended for the
     native build, so building with WERROR=0 helps while these oddities
     are fixed.
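
     E.g., a build sidestepping the warnings-as-errors (illustrative):

       $ make -C tools/perf BUILD_BPF_SKEL=1 WERROR=0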

   - Don't pass ERR_PTR() values to perf_session__delete() in 'perf
     top' and 'perf lock', fixing some segfaults when handling some odd
     failures.

   - Add LTO build option.
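
     E.g. (illustrative):

       $ make -C tools/perf LTO=1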

   - Fix the format of unordered lists in the perf docs
     (tools/perf/Documentation).

   - Overhaul the bison files, using constructs such as YYNOMEM.

   - Remove unused tokens from the bison .y files.

   - Add more comments to various structs.

   - A few LoongArch enablement patches.

  Vendor events (JSON):

   - Add JSON metrics for Yitian 710 DDR (aarch64). Things like:

  	EventName, BriefDescription
  	visible_window_limit_reached_rd, "At least one entry in read queue reaches the visible window limit.",
  	visible_window_limit_reached_wr, "At least one entry in write queue reaches the visible window limit.",
  	op_is_dqsosc_mpc	       , "A DQS Oscillator MPC command to DRAM.",
  	op_is_dqsosc_mrr	       , "A DQS Oscillator MRR command to DRAM.",
  	op_is_tcr_mrr		       , "A Temperature Compensated Refresh(TCR) MRR command to DRAM.",
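
     These back bandwidth metrics that can be used as in the examples
     from the accompanying documentation:

       # perf stat -M ddr_read_bandwidth.all -- sleep 10
       # perf stat -M ddr_write_bandwidth.all -- sleep 10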

   - Add AmpereOne metrics (aarch64).

   - Update N2 and V2 metrics (aarch64) and events using the Arm
     telemetry repo.

   - Update scale units and descriptions of common topdown metrics on
     aarch64. Things like:
       - "MetricExpr": "stall_slot_frontend / (#slots * cpu_cycles)",
       - "BriefDescription": "Frontend bound L1 topdown metric",
       + "MetricExpr": "100 * (stall_slot_frontend / (#slots * cpu_cycles))",
       + "BriefDescription": "This metric is the percentage of total slots that were stalled due to resource constraints in the frontend of the processor.",

   - Update Intel vendor events: meteorlake to 1.04, sapphirerapids to
     1.15, and Icelake+ metric constraints.

   - Update files for the power10 platform"

* tag 'perf-tools-for-v6.6-1-2023-09-05' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools: (217 commits)
  perf parse-events: Fix driver config term
  perf parse-events: Fixes relating to no_value terms
  perf parse-events: Fix propagation of term's no_value when cloning
  perf parse-events: Name the two term enums
  perf list: Don't print Unit for "default_core"
  perf vendor events intel: Fix modifier in tma_info_system_mem_parallel_reads for skylake
  perf dlfilter: Avoid leak in v0 API test use of resolve_address()
  perf metric: Add #num_cpus_online literal
  perf pmu: Remove str from perf_pmu_alias
  perf parse-events: Make common term list to strbuf helper
  perf parse-events: Minor help message improvements
  perf pmu: Avoid uninitialized use of alias->str
  perf jevents: Use "default_core" for events with no Unit
  perf test stat_bpf_counters_cgrp: Enhance perf stat cgroup BPF counter test
  perf test shell stat_bpf_counters: Fix test on Intel
  perf test shell record_bpf_filter: Skip 6.2 kernel
  libperf: Get rid of attr.id field
  perf tools: Convert to perf_record_header_attr_id()
  libperf: Add perf_record_header_attr_id()
  perf tools: Handle old data in PERF_RECORD_ATTR
  ...
Merged by Linus Torvalds, 2023-09-09 20:06:17 -07:00 (commit 535a265d7f).
284 changed files with 8019 additions and 9127 deletions.

View File

@ -88,6 +88,11 @@ data bandwidth::
-e ali_drw_27080/hif_rmw/ \
-e ali_drw_27080/cycle/ -- sleep 10
Example usage of counting all memory read/write bandwidth by metric::
perf stat -M ddr_read_bandwidth.all -- sleep 10
perf stat -M ddr_write_bandwidth.all -- sleep 10
The average DRAM bandwidth can be calculated as follows:
- Read Bandwidth = perf_hif_rd * DDRC_WIDTH * DDRC_Freq / DDRC_Cycle

View File

@ -16763,6 +16763,8 @@ L: linux-kernel@vger.kernel.org
S: Supported
W: https://perf.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
T: git git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools.git perf-tools
T: git git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git perf-tools-next
F: arch/*/events/*
F: arch/*/events/*/*
F: arch/*/include/asm/perf_event.h

View File

@ -117,6 +117,16 @@ $(OUTPUT)%.s: %.c FORCE
$(call rule_mkdir)
$(call if_changed_dep,cc_s_c)
# bison and flex files are generated in the OUTPUT directory
# so it needs a separate rule to depend on them properly
$(OUTPUT)%-bison.o: $(OUTPUT)%-bison.c FORCE
$(call rule_mkdir)
$(call if_changed_dep,$(host)cc_o_c)
$(OUTPUT)%-flex.o: $(OUTPUT)%-flex.c FORCE
$(call rule_mkdir)
$(call if_changed_dep,$(host)cc_o_c)
# Gather build data:
# obj-y - list of build objects
# subdir-y - list of directories to nest

View File

@ -340,7 +340,7 @@ $(OUTPUT)test-jvmti-cmlr.bin:
$(BUILD)
$(OUTPUT)test-llvm.bin:
$(BUILDXX) -std=gnu++14 \
$(BUILDXX) -std=gnu++17 \
-I$(shell $(LLVM_CONFIG) --includedir) \
-L$(shell $(LLVM_CONFIG) --libdir) \
$(shell $(LLVM_CONFIG) --libs Core BPF) \
@ -348,17 +348,15 @@ $(OUTPUT)test-llvm.bin:
> $(@:.bin=.make.output) 2>&1
$(OUTPUT)test-llvm-version.bin:
$(BUILDXX) -std=gnu++14 \
$(BUILDXX) -std=gnu++17 \
-I$(shell $(LLVM_CONFIG) --includedir) \
> $(@:.bin=.make.output) 2>&1
$(OUTPUT)test-clang.bin:
$(BUILDXX) -std=gnu++14 \
$(BUILDXX) -std=gnu++17 \
-I$(shell $(LLVM_CONFIG) --includedir) \
-L$(shell $(LLVM_CONFIG) --libdir) \
-Wl,--start-group -lclangBasic -lclangDriver \
-lclangFrontend -lclangEdit -lclangLex \
-lclangAST -Wl,--end-group \
-Wl,--start-group -lclang-cpp -Wl,--end-group \
$(shell $(LLVM_CONFIG) --libs Core option) \
$(shell $(LLVM_CONFIG) --system-libs) \
> $(@:.bin=.make.output) 2>&1

View File

@ -1,28 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include "clang/Basic/Version.h"
#if CLANG_VERSION_MAJOR < 8
#include "clang/Basic/VirtualFileSystem.h"
#endif
#include "clang/Driver/Driver.h"
#include "clang/Frontend/TextDiagnosticPrinter.h"
#include "llvm/ADT/IntrusiveRefCntPtr.h"
#include "llvm/Support/ManagedStatic.h"
#if CLANG_VERSION_MAJOR >= 8
#include "llvm/Support/VirtualFileSystem.h"
#endif
#include "llvm/Support/raw_ostream.h"
using namespace clang;
using namespace clang::driver;
int main()
{
IntrusiveRefCntPtr<DiagnosticIDs> DiagID(new DiagnosticIDs());
IntrusiveRefCntPtr<DiagnosticOptions> DiagOpts = new DiagnosticOptions();
DiagnosticsEngine Diags(DiagID, &*DiagOpts);
Driver TheDriver("test", "bpf-pc-linux", Diags);
llvm::llvm_shutdown();
return 0;
}

View File

@ -1,16 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <iostream>
#include <memory>
static void print_str(std::string s)
{
std::cout << s << std::endl;
}
int main()
{
std::string s("Hello World!");
print_str(std::move(s));
std::cout << "|" << s << "|" << std::endl;
return 0;
}

View File

@ -1,12 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <cstdio>
#include "llvm/Config/llvm-config.h"
#define NUM_VERSION (((LLVM_VERSION_MAJOR) << 16) + (LLVM_VERSION_MINOR << 8) + LLVM_VERSION_PATCH)
#define pass int main() {printf("%x\n", NUM_VERSION); return 0;}
#if NUM_VERSION >= 0x030900
pass
#else
# error This LLVM is not tested yet.
#endif

View File

@ -1,14 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include "llvm/Support/ManagedStatic.h"
#include "llvm/Support/raw_ostream.h"
#define NUM_VERSION (((LLVM_VERSION_MAJOR) << 16) + (LLVM_VERSION_MINOR << 8) + LLVM_VERSION_PATCH)
#if NUM_VERSION < 0x030900
# error "LLVM version too low"
#endif
int main()
{
llvm::errs() << "Hello World!\n";
llvm::llvm_shutdown();
return 0;
}

View File

@ -148,9 +148,19 @@ struct perf_record_switch {
struct perf_record_header_attr {
struct perf_event_header header;
struct perf_event_attr attr;
__u64 id[];
/*
* Array of u64 id follows here but we cannot use a flexible array
* because size of attr in the data can be different then current
* version. Please use perf_record_header_attr_id() below.
*
* __u64 id[]; // do not use this
*/
};
/* Returns the pointer to id array based on the actual attr size. */
#define perf_record_header_attr_id(evt) \
((void *)&(evt)->attr.attr + (evt)->attr.attr.size)
enum {
PERF_CPU_MAP__CPUS = 0,
PERF_CPU_MAP__MASK = 1,

View File

@ -67,6 +67,9 @@ SUBSYSTEM
'internals'::
Benchmark internal perf functionality.
'uprobe'::
Benchmark overhead of uprobe + BPF.
'all'::
All benchmark subsystems.

View File

@ -125,9 +125,6 @@ Given a $HOME/.perfconfig like this:
group = true
skip-empty = true
[llvm]
dump-obj = true
clang-opt = -g
You can hide source code of annotate feature setting the config to false with
@ -657,36 +654,6 @@ ftrace.*::
-F option is not specified. Possible values are 'function' and
'function_graph'.
llvm.*::
llvm.clang-path::
Path to clang. If omit, search it from $PATH.
llvm.clang-bpf-cmd-template::
Cmdline template. Below lines show its default value. Environment
variable is used to pass options.
"$CLANG_EXEC -D__KERNEL__ -D__NR_CPUS__=$NR_CPUS "\
"-DLINUX_VERSION_CODE=$LINUX_VERSION_CODE " \
"$CLANG_OPTIONS $PERF_BPF_INC_OPTIONS $KERNEL_INC_OPTIONS " \
"-Wno-unused-value -Wno-pointer-sign " \
"-working-directory $WORKING_DIR " \
"-c \"$CLANG_SOURCE\" --target=bpf $CLANG_EMIT_LLVM -O2 -o - $LLVM_OPTIONS_PIPE"
llvm.clang-opt::
Options passed to clang.
llvm.kbuild-dir::
kbuild directory. If not set, use /lib/modules/`uname -r`/build.
If set to "" deliberately, skip kernel header auto-detector.
llvm.kbuild-opts::
Options passed to 'make' when detecting kernel header options.
llvm.dump-obj::
Enable perf dump BPF object files compiled by LLVM.
llvm.opts::
Options passed to llc.
samples.*::
samples.context::

View File

@ -64,6 +64,12 @@ internal filtering.
If implemented, 'filter_description' should return a one-line description
of the filter, and optionally a longer description.
Do not assume the 'sample' argument is valid (dereferenceable)
after 'filter_event' and 'filter_event_early' return.
Do not assume data referenced by pointers in struct perf_dlfilter_sample
is valid (dereferenceable) after 'filter_event' and 'filter_event_early' return.
The perf_dlfilter_sample structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -150,7 +156,8 @@ struct perf_dlfilter_fns {
const char *(*srcline)(void *ctx, __u32 *line_number);
struct perf_event_attr *(*attr)(void *ctx);
__s32 (*object_code)(void *ctx, __u64 ip, void *buf, __u32 len);
void *(*reserved[120])(void *);
void (*al_cleanup)(void *ctx, struct perf_dlfilter_al *al);
void *(*reserved[119])(void *);
};
----
@ -161,7 +168,8 @@ struct perf_dlfilter_fns {
'args' returns arguments from --dlarg options.
'resolve_address' provides information about 'address'. al->size must be set
before calling. Returns 0 on success, -1 otherwise.
before calling. Returns 0 on success, -1 otherwise. Call al_cleanup() (if present,
see below) when 'al' data is no longer needed.
'insn' returns instruction bytes and length.
@ -171,6 +179,12 @@ before calling. Returns 0 on success, -1 otherwise.
'object_code' reads object code and returns the number of bytes read.
'al_cleanup' must be called (if present, so check perf_dlfilter_fns.al_cleanup != NULL)
after resolve_address() to free any associated resources.
Do not assume pointers obtained via perf_dlfilter_fns are valid (dereferenceable)
after 'filter_event' and 'filter_event_early' return.
The perf_dlfilter_al structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -197,9 +211,13 @@ struct perf_dlfilter_al {
/* Below members are only populated by resolve_ip() */
__u8 filtered; /* true if this sample event will be filtered out */
const char *comm;
void *priv; /* Private data. Do not change */
};
----
Do not assume data referenced by pointers in struct perf_dlfilter_al
is valid (dereferenceable) after 'filter_event' and 'filter_event_early' return.
perf_dlfilter_sample flags
~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -96,8 +96,9 @@ OPTIONS for 'perf ftrace trace'
--func-opts::
List of options allowed to set:
call-graph - Display kernel stack trace for function tracer.
irq-info - Display irq context info for function tracer.
- call-graph - Display kernel stack trace for function tracer.
- irq-info - Display irq context info for function tracer.
-G::
--graph-funcs=::
@ -118,11 +119,12 @@ OPTIONS for 'perf ftrace trace'
--graph-opts::
List of options allowed to set:
nosleep-time - Measure on-CPU time only for function_graph tracer.
noirqs - Ignore functions that happen inside interrupt.
verbose - Show process names, PIDs, timestamps, etc.
thresh=<n> - Setup trace duration threshold in microseconds.
depth=<n> - Set max depth for function graph tracer to follow.
- nosleep-time - Measure on-CPU time only for function_graph tracer.
- noirqs - Ignore functions that happen inside interrupt.
- verbose - Show process names, PIDs, timestamps, etc.
- thresh=<n> - Setup trace duration threshold in microseconds.
- depth=<n> - Set max depth for function graph tracer to follow.
OPTIONS for 'perf ftrace latency'

View File

@ -99,20 +99,6 @@ OPTIONS
If you want to profile write accesses in [0x1000~1008), just set
'mem:0x1000/8:w'.
- a BPF source file (ending in .c) or a precompiled object file (ending
in .o) selects one or more BPF events.
The BPF program can attach to various perf events based on the ELF section
names.
When processing a '.c' file, perf searches an installed LLVM to compile it
into an object file first. Optional clang options can be passed via the
'--clang-opt' command line option, e.g.:
perf record --clang-opt "-DLINUX_VERSION_CODE=0x50000" \
-e tests/bpf-script-example.c
Note: '--clang-opt' must be placed before '--event/-e'.
- a group of events surrounded by a pair of brace ("{event1,event2,...}").
Each event is separated by commas and the group should be quoted to
prevent the shell interpretation. You also need to use --group on
@ -523,9 +509,10 @@ CLOCK_BOOTTIME, CLOCK_REALTIME and CLOCK_TAI.
Select AUX area tracing Snapshot Mode. This option is valid only with an
AUX area tracing event. Optionally, certain snapshot capturing parameters
can be specified in a string that follows this option:
'e': take one last snapshot on exit; guarantees that there is at least one
- 'e': take one last snapshot on exit; guarantees that there is at least one
snapshot in the output file;
<size>: if the PMU supports this, specify the desired snapshot size.
- <size>: if the PMU supports this, specify the desired snapshot size.
In Snapshot Mode trace data is captured only when signal SIGUSR2 is received
and on exit if the above 'e' option is given.
@ -547,14 +534,6 @@ PERF_RECORD_SWITCH_CPU_WIDE. In some cases (e.g. Intel PT, CoreSight or Arm SPE)
switch events will be enabled automatically, which can be suppressed by
by the option --no-switch-events.
--clang-path=PATH::
Path to clang binary to use for compiling BPF scriptlets.
(enabled when BPF support is on)
--clang-opt=OPTIONS::
Options passed to clang when compiling BPF scriptlets.
(enabled when BPF support is on)
--vmlinux=PATH::
Specify vmlinux path which has debuginfo.
(enabled when BPF prologue is on)
@ -572,8 +551,9 @@ providing implementation for Posix AIO API.
--affinity=mode::
Set affinity mask of trace reading thread according to the policy defined by 'mode' value:
node - thread affinity mask is set to NUMA node cpu mask of the processed mmap buffer
cpu - thread affinity mask is set to cpu of the processed mmap buffer
- node - thread affinity mask is set to NUMA node cpu mask of the processed mmap buffer
- cpu - thread affinity mask is set to cpu of the processed mmap buffer
--mmap-flush=number::
@ -625,16 +605,17 @@ Record timestamp boundary (time of first/last samples).
--switch-output[=mode]::
Generate multiple perf.data files, timestamp prefixed, switching to a new one
based on 'mode' value:
"signal" - when receiving a SIGUSR2 (default value) or
<size> - when reaching the size threshold, size is expected to
be a number with appended unit character - B/K/M/G
<time> - when reaching the time threshold, size is expected to
be a number with appended unit character - s/m/h/d
Note: the precision of the size threshold hugely depends
on your configuration - the number and size of your ring
buffers (-m). It is generally more precise for higher sizes
(like >5M), for lower values expect different sizes.
- "signal" - when receiving a SIGUSR2 (default value) or
- <size> - when reaching the size threshold, size is expected to
be a number with appended unit character - B/K/M/G
- <time> - when reaching the time threshold, size is expected to
be a number with appended unit character - s/m/h/d
Note: the precision of the size threshold hugely depends
on your configuration - the number and size of your ring
buffers (-m). It is generally more precise for higher sizes
(like >5M), for lower values expect different sizes.
A possible use case is to, given an external event, slice the perf.data file
that gets then processed, possibly via a perf script, to decide if that
@ -680,11 +661,12 @@ choice in this option. For example, --synth=no would have MMAP events for
kernel and modules.
Available types are:
'task' - synthesize FORK and COMM events for each task
'mmap' - synthesize MMAP events for each process (implies 'task')
'cgroup' - synthesize CGROUP events for each cgroup
'all' - synthesize all events (default)
'no' - do not synthesize any of the above events
- 'task' - synthesize FORK and COMM events for each task
- 'mmap' - synthesize MMAP events for each process (implies 'task')
- 'cgroup' - synthesize CGROUP events for each cgroup
- 'all' - synthesize all events (default)
- 'no' - do not synthesize any of the above events
--tail-synthesize::
Instead of collecting non-sample events (for example, fork, comm, mmap) at
@ -736,18 +718,19 @@ ctl-fifo / ack-fifo are opened and used as ctl-fd / ack-fd as follows.
Listen on ctl-fd descriptor for command to control measurement.
Available commands:
'enable' : enable events
'disable' : disable events
'enable name' : enable event 'name'
'disable name' : disable event 'name'
'snapshot' : AUX area tracing snapshot).
'stop' : stop perf record
'ping' : ping
'evlist [-v|-g|-F] : display all events
-F Show just the sample frequency used for each event.
-v Show all fields.
-g Show event group information.
- 'enable' : enable events
- 'disable' : disable events
- 'enable name' : enable event 'name'
- 'disable name' : disable event 'name'
- 'snapshot' : AUX area tracing snapshot).
- 'stop' : stop perf record
- 'ping' : ping
- 'evlist [-v|-g|-F] : display all events
-F Show just the sample frequency used for each event.
-v Show all fields.
-g Show event group information.
Measurements can be started with events disabled using --delay=-1 option. Optionally
send control command completion ('ack\n') to ack-fd descriptor to synchronize with the
@ -808,10 +791,10 @@ the second monitors CPUs 1 and 5-7 with the affinity mask 5-7.
<spec> value can also be a string meaning predefined parallel threads
layout:
cpu - create new data streaming thread for every monitored cpu
core - create new thread to monitor CPUs grouped by a core
package - create new thread to monitor CPUs grouped by a package
numa - create new threed to monitor CPUs grouped by a NUMA domain
- cpu - create new data streaming thread for every monitored cpu
- core - create new thread to monitor CPUs grouped by a core
- package - create new thread to monitor CPUs grouped by a package
- numa - create new threed to monitor CPUs grouped by a NUMA domain
Predefined layouts can be used on systems with large number of CPUs in
order not to spawn multiple per-cpu streaming threads but still avoid LOST

View File

@ -43,7 +43,7 @@ struct perf_file_section {
Flags section:
For each of the optional features a perf_file_section it placed after the data
For each of the optional features a perf_file_section is placed after the data
section if the feature bit is set in the perf_header flags bitset. The
respective perf_file_section points to the data of the additional header and
defines its size.

View File

@ -246,6 +246,9 @@ ifeq ($(CC_NO_CLANG), 0)
else
CORE_CFLAGS += -O6
endif
else
CORE_CFLAGS += -g
CXXFLAGS += -g
endif
ifdef PARSER_DEBUG
@ -256,6 +259,11 @@ ifdef PARSER_DEBUG
$(call detected_var,PARSER_DEBUG_FLEX)
endif
ifdef LTO
CORE_CFLAGS += -flto
CXXFLAGS += -flto
endif
# Try different combinations to accommodate systems that only have
# python[2][3]-config in weird combinations in the following order of
# priority from lowest to highest:
@ -319,18 +327,14 @@ FEATURE_CHECK_LDFLAGS-disassembler-four-args = -lbfd -lopcodes -ldl
FEATURE_CHECK_LDFLAGS-disassembler-init-styled = -lbfd -lopcodes -ldl
CORE_CFLAGS += -fno-omit-frame-pointer
CORE_CFLAGS += -ggdb3
CORE_CFLAGS += -funwind-tables
CORE_CFLAGS += -Wall
CORE_CFLAGS += -Wextra
CORE_CFLAGS += -std=gnu11
CXXFLAGS += -std=gnu++14 -fno-exceptions -fno-rtti
CXXFLAGS += -std=gnu++17 -fno-exceptions -fno-rtti
CXXFLAGS += -Wall
CXXFLAGS += -Wextra
CXXFLAGS += -fno-omit-frame-pointer
CXXFLAGS += -ggdb3
CXXFLAGS += -funwind-tables
CXXFLAGS += -Wno-strict-aliasing
HOSTCFLAGS += -Wall
HOSTCFLAGS += -Wextra
@ -585,18 +589,6 @@ ifndef NO_LIBELF
LIBBPF_STATIC := 1
endif
endif
ifndef NO_DWARF
ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
CFLAGS += -DHAVE_BPF_PROLOGUE
$(call detected,CONFIG_BPF_PROLOGUE)
else
msg := $(warning BPF prologue is not supported by architecture $(SRCARCH), missing regs_query_register_offset());
endif
else
msg := $(warning DWARF support is off, BPF prologue is disabled);
endif
endif # NO_LIBBPF
endif # NO_LIBELF
@ -1123,37 +1115,6 @@ ifndef NO_JVMTI
endif
endif
USE_CXX = 0
USE_CLANGLLVM = 0
ifdef LIBCLANGLLVM
$(call feature_check,cxx)
ifneq ($(feature-cxx), 1)
msg := $(warning No g++ found, disable clang and llvm support. Please install g++)
else
$(call feature_check,llvm)
$(call feature_check,llvm-version)
ifneq ($(feature-llvm), 1)
msg := $(warning No suitable libLLVM found, disabling builtin clang and LLVM support. Please install llvm-dev(el) (>= 3.9.0))
else
$(call feature_check,clang)
ifneq ($(feature-clang), 1)
msg := $(warning No suitable libclang found, disabling builtin clang and LLVM support. Please install libclang-dev(el) (>= 3.9.0))
else
CFLAGS += -DHAVE_LIBCLANGLLVM_SUPPORT
CXXFLAGS += -DHAVE_LIBCLANGLLVM_SUPPORT -I$(shell $(LLVM_CONFIG) --includedir)
$(call detected,CONFIG_CXX)
$(call detected,CONFIG_CLANGLLVM)
USE_CXX = 1
USE_LLVM = 1
USE_CLANG = 1
ifneq ($(feature-llvm-version),1)
msg := $(warning This version of LLVM is not tested. May cause build errors)
endif
endif
endif
endif
endif
ifndef NO_LIBPFM4
$(call feature_check,libpfm4)
ifeq ($(feature-libpfm4), 1)

View File

@ -99,10 +99,6 @@ include ../scripts/utilities.mak
# Define NO_JVMTI_CMLR (debug only) if you do not want to process CMLR
# data for java source lines.
#
# Define LIBCLANGLLVM if you DO want builtin clang and llvm support.
# When selected, pass LLVM_CONFIG=/path/to/llvm-config to `make' if
# llvm-config is not in $PATH.
#
# Define CORESIGHT if you DO WANT support for CoreSight trace decoding.
#
# Define NO_AIO if you do not want support of Posix AIO based trace
@ -381,7 +377,7 @@ ifndef NO_JVMTI
PROGRAMS += $(OUTPUT)$(LIBJVMTI)
endif
DLFILTERS := dlfilter-test-api-v0.so dlfilter-show-cycles.so
DLFILTERS := dlfilter-test-api-v0.so dlfilter-test-api-v2.so dlfilter-show-cycles.so
DLFILTERS := $(patsubst %,$(OUTPUT)dlfilters/%,$(DLFILTERS))
# what 'all' will build and 'install' will install, in perfexecdir
@ -425,22 +421,6 @@ endif
EXTLIBS := $(call filter-out,$(EXCLUDE_EXTLIBS),$(EXTLIBS))
LIBS = -Wl,--whole-archive $(PERFLIBS) $(EXTRA_PERFLIBS) -Wl,--no-whole-archive -Wl,--start-group $(EXTLIBS) -Wl,--end-group
ifeq ($(USE_CLANG), 1)
CLANGLIBS_LIST = AST Basic CodeGen Driver Frontend Lex Tooling Edit Sema Analysis Parse Serialization
CLANGLIBS_NOEXT_LIST = $(foreach l,$(CLANGLIBS_LIST),$(shell $(LLVM_CONFIG) --libdir)/libclang$(l))
LIBCLANG = $(foreach l,$(CLANGLIBS_NOEXT_LIST),$(wildcard $(l).a $(l).so))
LIBS += -Wl,--start-group $(LIBCLANG) -Wl,--end-group
endif
ifeq ($(USE_LLVM), 1)
LIBLLVM = $(shell $(LLVM_CONFIG) --libs all) $(shell $(LLVM_CONFIG) --system-libs)
LIBS += -L$(shell $(LLVM_CONFIG) --libdir) $(LIBLLVM)
endif
ifeq ($(USE_CXX), 1)
LIBS += -lstdc++
endif
export INSTALL SHELL_PATH
### Build rules
@ -978,11 +958,6 @@ ifndef NO_JVMTI
endif
$(call QUIET_INSTALL, libexec) \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
ifndef NO_LIBBPF
$(call QUIET_INSTALL, bpf-examples) \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'; \
$(INSTALL) examples/bpf/*.c -m 644 -t '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'
endif
$(call QUIET_INSTALL, perf-archive) \
$(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
$(call QUIET_INSTALL, perf-iostat) \
@ -1057,6 +1032,8 @@ SKELETONS += $(SKEL_OUT)/bperf_leader.skel.h $(SKEL_OUT)/bperf_follower.skel.h
SKELETONS += $(SKEL_OUT)/bperf_cgroup.skel.h $(SKEL_OUT)/func_latency.skel.h
SKELETONS += $(SKEL_OUT)/off_cpu.skel.h $(SKEL_OUT)/lock_contention.skel.h
SKELETONS += $(SKEL_OUT)/kwork_trace.skel.h $(SKEL_OUT)/sample_filter.skel.h
SKELETONS += $(SKEL_OUT)/bench_uprobe.skel.h
SKELETONS += $(SKEL_OUT)/augmented_raw_syscalls.skel.h
$(SKEL_TMP_OUT) $(LIBAPI_OUTPUT) $(LIBBPF_OUTPUT) $(LIBPERF_OUTPUT) $(LIBSUBCMD_OUTPUT) $(LIBSYMBOL_OUTPUT):
$(Q)$(MKDIR) -p $@
@ -1079,10 +1056,15 @@ ifneq ($(CROSS_COMPILE),)
CLANG_TARGET_ARCH = --target=$(notdir $(CROSS_COMPILE:%-=%))
endif
CLANG_OPTIONS = -Wall
CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG),$(CLANG_TARGET_ARCH))
BPF_INCLUDE := -I$(SKEL_TMP_OUT)/.. -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES)
TOOLS_UAPI_INCLUDE := -I$(srctree)/tools/include/uapi
ifneq ($(WERROR),0)
CLANG_OPTIONS += -Werror
endif
$(BPFTOOL): | $(SKEL_TMP_OUT)
$(Q)CFLAGS= $(MAKE) -C ../bpf/bpftool \
OUTPUT=$(SKEL_TMP_OUT)/ bootstrap
@ -1124,7 +1106,7 @@ else
endif
$(SKEL_TMP_OUT)/%.bpf.o: util/bpf_skel/%.bpf.c $(LIBBPF) $(SKEL_OUT)/vmlinux.h | $(SKEL_TMP_OUT)
$(QUIET_CLANG)$(CLANG) -g -O2 --target=bpf -Wall -Werror $(BPF_INCLUDE) $(TOOLS_UAPI_INCLUDE) \
$(QUIET_CLANG)$(CLANG) -g -O2 --target=bpf $(CLANG_OPTIONS) $(BPF_INCLUDE) $(TOOLS_UAPI_INCLUDE) \
-c $(filter util/bpf_skel/%.bpf.c,$^) -o $@
$(SKEL_OUT)/%.skel.h: $(SKEL_TMP_OUT)/%.bpf.o | $(BPFTOOL)

View File

@ -12,7 +12,4 @@ void perf_regs_load(u64 *regs);
#define PERF_REGS_MAX PERF_REG_ARM_MAX
#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32
#define PERF_REG_IP PERF_REG_ARM_PC
#define PERF_REG_SP PERF_REG_ARM_SP
#endif /* ARCH_PERF_REGS_H */

View File

@ -79,9 +79,9 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr,
int err;
u32 val;
u64 contextid = evsel->core.attr.config &
(perf_pmu__format_bits(&cs_etm_pmu->format, "contextid") |
perf_pmu__format_bits(&cs_etm_pmu->format, "contextid1") |
perf_pmu__format_bits(&cs_etm_pmu->format, "contextid2"));
(perf_pmu__format_bits(cs_etm_pmu, "contextid") |
perf_pmu__format_bits(cs_etm_pmu, "contextid1") |
perf_pmu__format_bits(cs_etm_pmu, "contextid2"));
if (!contextid)
return 0;
@ -106,7 +106,7 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr,
}
if (contextid &
perf_pmu__format_bits(&cs_etm_pmu->format, "contextid1")) {
perf_pmu__format_bits(cs_etm_pmu, "contextid1")) {
/*
* TRCIDR2.CIDSIZE, bit [9-5], indicates whether contextID
* tracing is supported:
@ -122,7 +122,7 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr,
}
if (contextid &
perf_pmu__format_bits(&cs_etm_pmu->format, "contextid2")) {
perf_pmu__format_bits(cs_etm_pmu, "contextid2")) {
/*
* TRCIDR2.VMIDOPT[30:29] != 0 and
* TRCIDR2.VMIDSIZE[14:10] == 0b00100 (32bit virtual contextid)
@ -151,7 +151,7 @@ static int cs_etm_validate_timestamp(struct auxtrace_record *itr,
u32 val;
if (!(evsel->core.attr.config &
perf_pmu__format_bits(&cs_etm_pmu->format, "timestamp")))
perf_pmu__format_bits(cs_etm_pmu, "timestamp")))
return 0;
if (!cs_etm_is_etmv4(itr, cpu)) {

View File

@ -1,6 +1,17 @@
// SPDX-License-Identifier: GPL-2.0
#include "perf_regs.h"
#include "../../../util/perf_regs.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
uint64_t arch__intr_reg_mask(void)
{
return PERF_REGS_MASK;
}
uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}

View File

@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <elfutils/libdwfl.h>
#include "perf_regs.h"
#include "../../../util/unwind-libdw.h"
#include "../../../util/perf_regs.h"
#include "../../../util/sample.h"

View File

@ -2,6 +2,9 @@
#ifndef ARCH_TESTS_H
#define ARCH_TESTS_H
struct test_suite;
int test__cpuid_match(struct test_suite *test, int subtest);
extern struct test_suite *arch_tests[];
#endif

View File

@ -14,7 +14,4 @@ void perf_regs_load(u64 *regs);
#define PERF_REGS_MAX PERF_REG_ARM64_MAX
#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_64
#define PERF_REG_IP PERF_REG_ARM64_PC
#define PERF_REG_SP PERF_REG_ARM64_SP
#endif /* ARCH_PERF_REGS_H */

View File

@ -2,3 +2,4 @@ perf-y += regs_load.o
perf-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
perf-y += arch-tests.o
perf-y += cpuid-match.o

View File

@ -3,9 +3,13 @@
#include "tests/tests.h"
#include "arch-tests.h"
DEFINE_SUITE("arm64 CPUID matching", cpuid_match);
struct test_suite *arch_tests[] = {
#ifdef HAVE_DWARF_UNWIND_SUPPORT
&suite__dwarf_unwind,
#endif
&suite__cpuid_match,
NULL,
};

View File

@ -0,0 +1,37 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/compiler.h>
#include "arch-tests.h"
#include "tests/tests.h"
#include "util/header.h"
int test__cpuid_match(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
/* midr with no leading zeros matches */
if (strcmp_cpuid_str("0x410fd0c0", "0x00000000410fd0c0"))
return -1;
/* Upper case matches */
if (strcmp_cpuid_str("0x410fd0c0", "0x00000000410FD0C0"))
return -1;
/* r0p0 = r0p0 matches */
if (strcmp_cpuid_str("0x00000000410fd480", "0x00000000410fd480"))
return -1;
/* r0p1 > r0p0 matches */
if (strcmp_cpuid_str("0x00000000410fd480", "0x00000000410fd481"))
return -1;
/* r1p0 > r0p0 matches*/
if (strcmp_cpuid_str("0x00000000410fd480", "0x00000000411fd480"))
return -1;
/* r0p0 < r0p1 doesn't match */
if (!strcmp_cpuid_str("0x00000000410fd481", "0x00000000410fd480"))
return -1;
/* r0p0 < r1p0 doesn't match */
if (!strcmp_cpuid_str("0x00000000411fd480", "0x00000000410fd480"))
return -1;
/* Different CPU doesn't match */
if (!strcmp_cpuid_str("0x00000000410fd4c0", "0x00000000430f0af0"))
return -1;
return 0;
}

View File

@ -230,7 +230,7 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
* inform that the resulting output's SPE samples contain physical addresses
* where applicable.
*/
bit = perf_pmu__format_bits(&arm_spe_pmu->format, "pa_enable");
bit = perf_pmu__format_bits(arm_spe_pmu, "pa_enable");
if (arm_spe_evsel->core.attr.config & bit)
evsel__set_sample_bit(arm_spe_evsel, PHYS_ADDR);

View File

@ -1,3 +1,6 @@
#include <linux/kernel.h>
#include <linux/bits.h>
#include <linux/bitfield.h>
#include <stdio.h>
#include <stdlib.h>
#include <perf/cpumap.h>
@ -10,15 +13,14 @@
#define MIDR "/regs/identification/midr_el1"
#define MIDR_SIZE 19
#define MIDR_REVISION_MASK 0xf
#define MIDR_VARIANT_SHIFT 20
#define MIDR_VARIANT_MASK (0xf << MIDR_VARIANT_SHIFT)
#define MIDR_REVISION_MASK GENMASK(3, 0)
#define MIDR_VARIANT_MASK GENMASK(23, 20)
static int _get_cpuid(char *buf, size_t sz, struct perf_cpu_map *cpus)
{
const char *sysfs = sysfs__mountpoint();
u64 midr = 0;
int cpu;
int ret = EINVAL;
if (!sysfs || sz < MIDR_SIZE)
return EINVAL;
@ -44,22 +46,13 @@ static int _get_cpuid(char *buf, size_t sz, struct perf_cpu_map *cpus)
}
fclose(file);
/* Ignore/clear Variant[23:20] and
* Revision[3:0] of MIDR
*/
midr = strtoul(buf, NULL, 16);
midr &= (~(MIDR_VARIANT_MASK | MIDR_REVISION_MASK));
scnprintf(buf, MIDR_SIZE, "0x%016lx", midr);
/* got midr break loop */
ret = 0;
break;
}
perf_cpu_map__put(cpus);
if (!midr)
return EINVAL;
return 0;
return ret;
}
int get_cpuid(char *buf, size_t sz)
@ -99,3 +92,47 @@ char *get_cpuid_str(struct perf_pmu *pmu)
return buf;
}
/*
* Return 0 if idstr is a higher or equal to version of the same part as
* mapcpuid. Therefore, if mapcpuid has 0 for revision and variant then any
* version of idstr will match as long as it's the same CPU type.
*
* Return 1 if the CPU type is different or the version of idstr is lower.
*/
int strcmp_cpuid_str(const char *mapcpuid, const char *idstr)
{
u64 map_id = strtoull(mapcpuid, NULL, 16);
char map_id_variant = FIELD_GET(MIDR_VARIANT_MASK, map_id);
char map_id_revision = FIELD_GET(MIDR_REVISION_MASK, map_id);
u64 id = strtoull(idstr, NULL, 16);
char id_variant = FIELD_GET(MIDR_VARIANT_MASK, id);
char id_revision = FIELD_GET(MIDR_REVISION_MASK, id);
u64 id_fields = ~(MIDR_VARIANT_MASK | MIDR_REVISION_MASK);
/* Compare without version first */
if ((map_id & id_fields) != (id & id_fields))
return 1;
/*
* ID matches, now compare version.
*
* Arm revisions (like r0p0) are compared here like two digit semver
* values eg. 1.3 < 2.0 < 2.1 < 2.2.
*
* r = high value = 'Variant' field in MIDR
* p = low value = 'Revision' field in MIDR
*
*/
if (id_variant > map_id_variant)
return 0;
if (id_variant == map_id_variant && id_revision >= map_id_revision)
return 0;
/*
* variant is less than mapfile variant or variants are the same but
* the revision doesn't match. Return no match.
*/
return 1;
}

View File

@ -6,6 +6,7 @@
#include "debug.h"
#include "symbol.h"
#include "callchain.h"
#include "perf_regs.h"
#include "record.h"
#include "util/perf_regs.h"

View File

@ -20,7 +20,7 @@ struct perf_mem_event *perf_mem_events__ptr(int i)
return &perf_mem_events[i];
}
char *perf_mem_events__name(int i, char *pmu_name __maybe_unused)
const char *perf_mem_events__name(int i, const char *pmu_name __maybe_unused)
{
struct perf_mem_event *e = perf_mem_events__ptr(i);

View File

@ -6,6 +6,7 @@
#include <linux/kernel.h>
#include <linux/zalloc.h>
#include "perf_regs.h"
#include "../../../perf-sys.h"
#include "../../../util/debug.h"
#include "../../../util/event.h"
@ -139,6 +140,11 @@ int arch_sdt_arg_parse_op(char *old_op, char **new_op)
return SDT_ARG_VALID;
}
uint64_t arch__intr_reg_mask(void)
{
return PERF_REGS_MASK;
}
uint64_t arch__user_reg_mask(void)
{
struct perf_event_attr attr = {

View File

@ -2,28 +2,12 @@
#include <internal/cpumap.h>
#include "../../../util/cpumap.h"
#include "../../../util/header.h"
#include "../../../util/pmu.h"
#include "../../../util/pmus.h"
#include <api/fs/fs.h>
#include <math.h>
static struct perf_pmu *pmu__find_core_pmu(void)
{
struct perf_pmu *pmu = NULL;
while ((pmu = perf_pmus__scan_core(pmu))) {
/*
* The cpumap should cover all CPUs. Otherwise, some CPUs may
* not support some events or have different event IDs.
*/
if (RC_CHK_ACCESS(pmu->cpus)->nr != cpu__max_cpu().cpu)
return NULL;
return pmu;
}
return NULL;
}
const struct pmu_metrics_table *pmu_metrics_table__find(void)
{
struct perf_pmu *pmu = pmu__find_core_pmu();

View File

@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <elfutils/libdwfl.h>
#include "perf_regs.h"
#include "../../../util/unwind-libdw.h"
#include "../../../util/perf_regs.h"
#include "../../../util/sample.h"


@ -12,7 +12,4 @@
#define PERF_REGS_MAX PERF_REG_CSKY_MAX
#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32
#define PERF_REG_IP PERF_REG_CSKY_PC
#define PERF_REG_SP PERF_REG_CSKY_SP
#endif /* ARCH_PERF_REGS_H */


@ -1,6 +1,17 @@
// SPDX-License-Identifier: GPL-2.0
#include "perf_regs.h"
#include "../../util/perf_regs.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
uint64_t arch__intr_reg_mask(void)
{
return PERF_REGS_MASK;
}
uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}


@ -2,6 +2,7 @@
// Copyright (C) 2019 Hangzhou C-SKY Microsystems co.,ltd.
#include <elfutils/libdwfl.h>
#include "perf_regs.h"
#include "../../util/unwind-libdw.h"
#include "../../util/perf_regs.h"
#include "../../util/event.h"


@ -7,8 +7,6 @@
#include <asm/perf_regs.h>
#define PERF_REGS_MAX PERF_REG_LOONGARCH_MAX
#define PERF_REG_IP PERF_REG_LOONGARCH_PC
#define PERF_REG_SP PERF_REG_LOONGARCH_R3
#define PERF_REGS_MASK ((1ULL << PERF_REG_LOONGARCH_MAX) - 1)


@ -1,6 +1,17 @@
// SPDX-License-Identifier: GPL-2.0
#include "perf_regs.h"
#include "../../../util/perf_regs.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
uint64_t arch__intr_reg_mask(void)
{
return PERF_REGS_MASK;
}
uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}


@ -2,6 +2,7 @@
/* Copyright (C) 2020-2023 Loongson Technology Corporation Limited */
#include <elfutils/libdwfl.h>
#include "perf_regs.h"
#include "../../util/unwind-libdw.h"
#include "../../util/perf_regs.h"
#include "../../util/sample.h"


@ -7,8 +7,6 @@
#include <asm/perf_regs.h>
#define PERF_REGS_MAX PERF_REG_MIPS_MAX
#define PERF_REG_IP PERF_REG_MIPS_PC
#define PERF_REG_SP PERF_REG_MIPS_R29
#define PERF_REGS_MASK ((1ULL << PERF_REG_MIPS_MAX) - 1)


@ -1,6 +1,17 @@
// SPDX-License-Identifier: GPL-2.0
#include "perf_regs.h"
#include "../../util/perf_regs.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
uint64_t arch__intr_reg_mask(void)
{
return PERF_REGS_MASK;
}
uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}


@ -16,7 +16,4 @@ void perf_regs_load(u64 *regs);
#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32
#endif
#define PERF_REG_IP PERF_REG_POWERPC_NIP
#define PERF_REG_SP PERF_REG_POWERPC_R1
#endif /* ARCH_PERF_REGS_H */


@ -3,10 +3,10 @@
#include "mem-events.h"
/* PowerPC does not support 'ldlat' parameter. */
char *perf_mem_events__name(int i, char *pmu_name __maybe_unused)
const char *perf_mem_events__name(int i, const char *pmu_name __maybe_unused)
{
if (i == PERF_MEM_EVENTS__LOAD)
return (char *) "cpu/mem-loads/";
return "cpu/mem-loads/";
return (char *) "cpu/mem-stores/";
return "cpu/mem-stores/";
}


@ -4,6 +4,7 @@
#include <regex.h>
#include <linux/zalloc.h>
#include "perf_regs.h"
#include "../../../util/perf_regs.h"
#include "../../../util/debug.h"
#include "../../../util/event.h"
@ -226,3 +227,8 @@ uint64_t arch__intr_reg_mask(void)
}
return mask;
}
uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}


@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <elfutils/libdwfl.h>
#include <linux/kernel.h>
#include "perf_regs.h"
#include "../../../util/unwind-libdw.h"
#include "../../../util/perf_regs.h"
#include "../../../util/sample.h"


@ -16,7 +16,4 @@
#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32
#endif
#define PERF_REG_IP PERF_REG_RISCV_PC
#define PERF_REG_SP PERF_REG_RISCV_SP
#endif /* ARCH_PERF_REGS_H */


@ -1,6 +1,17 @@
// SPDX-License-Identifier: GPL-2.0
#include "perf_regs.h"
#include "../../util/perf_regs.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
uint64_t arch__intr_reg_mask(void)
{
return PERF_REGS_MASK;
}
uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}


@ -2,6 +2,7 @@
/* Copyright (C) 2019 Hangzhou C-SKY Microsystems co.,ltd. */
#include <elfutils/libdwfl.h>
#include "perf_regs.h"
#include "../../util/unwind-libdw.h"
#include "../../util/perf_regs.h"
#include "../../util/sample.h"


@ -11,7 +11,4 @@ void perf_regs_load(u64 *regs);
#define PERF_REGS_MAX PERF_REG_S390_MAX
#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_64
#define PERF_REG_IP PERF_REG_S390_PC
#define PERF_REG_SP PERF_REG_S390_R15
#endif /* ARCH_PERF_REGS_H */


@ -1,6 +1,17 @@
// SPDX-License-Identifier: GPL-2.0
#include "perf_regs.h"
#include "../../util/perf_regs.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
uint64_t arch__intr_reg_mask(void)
{
return PERF_REGS_MASK;
}
uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}


@ -5,6 +5,7 @@
#include "../../util/event.h"
#include "../../util/sample.h"
#include "dwarf-regs-table.h"
#include "perf_regs.h"
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)


@ -24,7 +24,7 @@ sorted_table=$(mktemp /tmp/syscalltbl.XXXXXX)
grep '^[0-9]' "$in" | sort -n > $sorted_table
max_nr=0
while read nr abi name entry compat; do
while read nr _abi name entry _compat; do
if [ $nr -ge 512 ] ; then # discard compat syscalls
break
fi


@ -20,7 +20,5 @@ void perf_regs_load(u64 *regs);
#define PERF_REGS_MASK (((1ULL << PERF_REG_X86_64_MAX) - 1) & ~REG_NOSUPPORT)
#define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_64
#endif
#define PERF_REG_IP PERF_REG_X86_IP
#define PERF_REG_SP PERF_REG_X86_SP
#endif /* ARCH_PERF_REGS_H */


@ -75,11 +75,12 @@ int arch_evlist__add_default_attrs(struct evlist *evlist,
int arch_evlist__cmp(const struct evsel *lhs, const struct evsel *rhs)
{
if (topdown_sys_has_perf_metrics() && evsel__sys_has_perf_metrics(lhs)) {
if (topdown_sys_has_perf_metrics() &&
(arch_evsel__must_be_in_group(lhs) || arch_evsel__must_be_in_group(rhs))) {
/* Ensure the topdown slots comes first. */
if (strcasestr(lhs->name, "slots"))
if (strcasestr(lhs->name, "slots") && !strcasestr(lhs->name, "uops_retired.slots"))
return -1;
if (strcasestr(rhs->name, "slots"))
if (strcasestr(rhs->name, "slots") && !strcasestr(rhs->name, "uops_retired.slots"))
return 1;
/* Followed by topdown events. */
if (strcasestr(lhs->name, "topdown") && !strcasestr(rhs->name, "topdown"))


@ -40,12 +40,11 @@ bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
bool arch_evsel__must_be_in_group(const struct evsel *evsel)
{
if (!evsel__sys_has_perf_metrics(evsel))
if (!evsel__sys_has_perf_metrics(evsel) || !evsel->name ||
strcasestr(evsel->name, "uops_retired.slots"))
return false;
return evsel->name &&
(strcasestr(evsel->name, "slots") ||
strcasestr(evsel->name, "topdown"));
return strcasestr(evsel->name, "topdown") || strcasestr(evsel->name, "slots");
}
int arch_evsel__hw_name(struct evsel *evsel, char *bf, size_t size)


@ -60,8 +60,7 @@ struct intel_pt_recording {
size_t priv_size;
};
static int intel_pt_parse_terms_with_default(const char *pmu_name,
struct list_head *formats,
static int intel_pt_parse_terms_with_default(struct perf_pmu *pmu,
const char *str,
u64 *config)
{
@ -75,13 +74,12 @@ static int intel_pt_parse_terms_with_default(const char *pmu_name,
INIT_LIST_HEAD(terms);
err = parse_events_terms(terms, str);
err = parse_events_terms(terms, str, /*input=*/ NULL);
if (err)
goto out_free;
attr.config = *config;
err = perf_pmu__config_terms(pmu_name, formats, &attr, terms, true,
NULL);
err = perf_pmu__config_terms(pmu, &attr, terms, /*zero=*/true, /*err=*/NULL);
if (err)
goto out_free;
@ -91,12 +89,10 @@ out_free:
return err;
}
static int intel_pt_parse_terms(const char *pmu_name, struct list_head *formats,
const char *str, u64 *config)
static int intel_pt_parse_terms(struct perf_pmu *pmu, const char *str, u64 *config)
{
*config = 0;
return intel_pt_parse_terms_with_default(pmu_name, formats, str,
config);
return intel_pt_parse_terms_with_default(pmu, str, config);
}
static u64 intel_pt_masked_bits(u64 mask, u64 bits)
@ -126,7 +122,7 @@ static int intel_pt_read_config(struct perf_pmu *intel_pt_pmu, const char *str,
*res = 0;
mask = perf_pmu__format_bits(&intel_pt_pmu->format, str);
mask = perf_pmu__format_bits(intel_pt_pmu, str);
if (!mask)
return -EINVAL;
@ -236,8 +232,7 @@ static u64 intel_pt_default_config(struct perf_pmu *intel_pt_pmu)
pr_debug2("%s default config: %s\n", intel_pt_pmu->name, buf);
intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format, buf,
&config);
intel_pt_parse_terms(intel_pt_pmu, buf, &config);
close(dirfd);
return config;
@ -348,16 +343,11 @@ static int intel_pt_info_fill(struct auxtrace_record *itr,
if (priv_size != ptr->priv_size)
return -EINVAL;
intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format,
"tsc", &tsc_bit);
intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format,
"noretcomp", &noretcomp_bit);
intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format,
"mtc", &mtc_bit);
mtc_freq_bits = perf_pmu__format_bits(&intel_pt_pmu->format,
"mtc_period");
intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format,
"cyc", &cyc_bit);
intel_pt_parse_terms(intel_pt_pmu, "tsc", &tsc_bit);
intel_pt_parse_terms(intel_pt_pmu, "noretcomp", &noretcomp_bit);
intel_pt_parse_terms(intel_pt_pmu, "mtc", &mtc_bit);
mtc_freq_bits = perf_pmu__format_bits(intel_pt_pmu, "mtc_period");
intel_pt_parse_terms(intel_pt_pmu, "cyc", &cyc_bit);
intel_pt_tsc_ctc_ratio(&tsc_ctc_ratio_n, &tsc_ctc_ratio_d);
@ -511,7 +501,7 @@ static int intel_pt_val_config_term(struct perf_pmu *intel_pt_pmu, int dirfd,
valid |= 1;
bits = perf_pmu__format_bits(&intel_pt_pmu->format, name);
bits = perf_pmu__format_bits(intel_pt_pmu, name);
config &= bits;
@ -781,8 +771,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
intel_pt_evsel->core.attr.aux_watermark = aux_watermark;
}
intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format,
"tsc", &tsc_bit);
intel_pt_parse_terms(intel_pt_pmu, "tsc", &tsc_bit);
if (opts->full_auxtrace && (intel_pt_evsel->core.attr.config & tsc_bit))
have_timing_info = true;


@ -52,7 +52,7 @@ bool is_mem_loads_aux_event(struct evsel *leader)
return leader->core.attr.config == MEM_LOADS_AUX;
}
char *perf_mem_events__name(int i, char *pmu_name)
const char *perf_mem_events__name(int i, const char *pmu_name)
{
struct perf_mem_event *e = perf_mem_events__ptr(i);
@ -65,7 +65,7 @@ char *perf_mem_events__name(int i, char *pmu_name)
if (!pmu_name) {
mem_loads_name__init = true;
pmu_name = (char *)"cpu";
pmu_name = "cpu";
}
if (perf_pmus__have_event(pmu_name, "mem-loads-aux")) {
@ -82,12 +82,12 @@ char *perf_mem_events__name(int i, char *pmu_name)
if (i == PERF_MEM_EVENTS__STORE) {
if (!pmu_name)
pmu_name = (char *)"cpu";
pmu_name = "cpu";
scnprintf(mem_stores_name, sizeof(mem_stores_name),
e->name, pmu_name);
return mem_stores_name;
}
return (char *)e->name;
return e->name;
}


@ -5,6 +5,7 @@
#include <linux/kernel.h>
#include <linux/zalloc.h>
#include "perf_regs.h"
#include "../../../perf-sys.h"
#include "../../../util/perf_regs.h"
#include "../../../util/debug.h"
@ -317,3 +318,8 @@ uint64_t arch__intr_reg_mask(void)
return PERF_REGS_MASK;
}
uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}


@ -126,7 +126,7 @@ close_dir:
return ret;
}
static char *__pmu_find_real_name(const char *name)
static const char *__pmu_find_real_name(const char *name)
{
struct pmu_alias *pmu_alias;
@ -135,10 +135,10 @@ static char *__pmu_find_real_name(const char *name)
return pmu_alias->name;
}
return (char *)name;
return name;
}
char *pmu_find_real_name(const char *name)
const char *pmu_find_real_name(const char *name)
{
if (cached_list)
return __pmu_find_real_name(name);
@ -149,7 +149,7 @@ char *pmu_find_real_name(const char *name)
return __pmu_find_real_name(name);
}
static char *__pmu_find_alias_name(const char *name)
static const char *__pmu_find_alias_name(const char *name)
{
struct pmu_alias *pmu_alias;
@ -160,7 +160,7 @@ static char *__pmu_find_alias_name(const char *name)
return NULL;
}
char *pmu_find_alias_name(const char *name)
const char *pmu_find_alias_name(const char *name)
{
if (cached_list)
return __pmu_find_alias_name(name);


@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <elfutils/libdwfl.h>
#include "perf_regs.h"
#include "../../../util/unwind-libdw.h"
#include "../../../util/perf_regs.h"
#include "util/sample.h"


@ -17,6 +17,7 @@ perf-y += inject-buildid.o
perf-y += evlist-open-close.o
perf-y += breakpoint.o
perf-y += pmu-scan.o
perf-y += uprobe.o
perf-$(CONFIG_X86_64) += mem-memcpy-x86-64-asm.o
perf-$(CONFIG_X86_64) += mem-memset-x86-64-asm.o


@ -43,6 +43,9 @@ int bench_inject_build_id(int argc, const char **argv);
int bench_evlist_open_close(int argc, const char **argv);
int bench_breakpoint_thread(int argc, const char **argv);
int bench_breakpoint_enable(int argc, const char **argv);
int bench_uprobe_baseline(int argc, const char **argv);
int bench_uprobe_empty(int argc, const char **argv);
int bench_uprobe_trace_printk(int argc, const char **argv);
int bench_pmu_scan(int argc, const char **argv);
#define BENCH_FORMAT_DEFAULT_STR "default"


@ -47,6 +47,7 @@ struct breakpoint {
static int breakpoint_setup(void *addr)
{
struct perf_event_attr attr = { .size = 0, };
int fd;
attr.type = PERF_TYPE_BREAKPOINT;
attr.size = sizeof(attr);
@ -56,7 +57,12 @@ static int breakpoint_setup(void *addr)
attr.bp_addr = (unsigned long)addr;
attr.bp_type = HW_BREAKPOINT_RW;
attr.bp_len = HW_BREAKPOINT_LEN_1;
return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
if (fd < 0)
fd = -errno;
return fd;
}
static void *passive_thread(void *arg)
@ -122,8 +128,14 @@ int bench_breakpoint_thread(int argc, const char **argv)
for (i = 0; i < thread_params.nbreakpoints; i++) {
breakpoints[i].fd = breakpoint_setup(&breakpoints[i].watched);
if (breakpoints[i].fd == -1)
if (breakpoints[i].fd < 0) {
if (breakpoints[i].fd == -ENODEV) {
printf("Skipping perf bench breakpoint thread: No hardware support\n");
return 0;
}
exit((perror("perf_event_open"), EXIT_FAILURE));
}
}
gettimeofday(&start, NULL);
for (i = 0; i < thread_params.nparallel; i++) {
@ -196,8 +208,14 @@ int bench_breakpoint_enable(int argc, const char **argv)
exit(EXIT_FAILURE);
}
fd = breakpoint_setup(&watched);
if (fd == -1)
if (fd < 0) {
if (fd == -ENODEV) {
printf("Skipping perf bench breakpoint enable: No hardware support\n");
return 0;
}
exit((perror("perf_event_open"), EXIT_FAILURE));
}
nthreads = enable_params.npassive + enable_params.nactive;
threads = calloc(nthreads, sizeof(threads[0]));
if (!threads)


@ -57,9 +57,7 @@ static int save_result(void)
r->is_core = pmu->is_core;
r->nr_caps = pmu->nr_caps;
r->nr_aliases = 0;
list_for_each(list, &pmu->aliases)
r->nr_aliases++;
r->nr_aliases = perf_pmu__num_events(pmu);
r->nr_formats = 0;
list_for_each(list, &pmu->format)
@ -98,9 +96,7 @@ static int check_result(bool core_only)
return -1;
}
nr = 0;
list_for_each(list, &pmu->aliases)
nr++;
nr = perf_pmu__num_events(pmu);
if (nr != r->nr_aliases) {
pr_err("Unmatched number of event aliases in %s: expect %d vs got %d\n",
pmu->name, r->nr_aliases, nr);

tools/perf/bench/uprobe.c (new file, 198 lines)

@ -0,0 +1,198 @@
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/*
* uprobe.c
*
* uprobe benchmarks
*
* Copyright (C) 2023, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com>
*/
#include "../perf.h"
#include "../util/util.h"
#include <subcmd/parse-options.h>
#include "../builtin.h"
#include "bench.h"
#include <linux/compiler.h>
#include <linux/time64.h>
#include <inttypes.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>
#include <stdlib.h>
#define LOOPS_DEFAULT 1000
static int loops = LOOPS_DEFAULT;
enum bench_uprobe {
BENCH_UPROBE__BASELINE,
BENCH_UPROBE__EMPTY,
BENCH_UPROBE__TRACE_PRINTK,
};
static const struct option options[] = {
OPT_INTEGER('l', "loop", &loops, "Specify number of loops"),
OPT_END()
};
static const char * const bench_uprobe_usage[] = {
"perf bench uprobe <options>",
NULL
};
#ifdef HAVE_BPF_SKEL
#include "bpf_skel/bench_uprobe.skel.h"
#define bench_uprobe__attach_uprobe(prog) \
skel->links.prog = bpf_program__attach_uprobe_opts(/*prog=*/skel->progs.prog, \
/*pid=*/-1, \
/*binary_path=*/"/lib64/libc.so.6", \
/*func_offset=*/0, \
/*opts=*/&uprobe_opts); \
if (!skel->links.prog) { \
err = -errno; \
fprintf(stderr, "Failed to attach bench uprobe \"%s\": %s\n", #prog, strerror(errno)); \
goto cleanup; \
}
struct bench_uprobe_bpf *skel;
static int bench_uprobe__setup_bpf_skel(enum bench_uprobe bench)
{
DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
int err;
/* Load and verify BPF application */
skel = bench_uprobe_bpf__open();
if (!skel) {
fprintf(stderr, "Failed to open and load uprobes bench BPF skeleton\n");
return -1;
}
err = bench_uprobe_bpf__load(skel);
if (err) {
fprintf(stderr, "Failed to load and verify BPF skeleton\n");
goto cleanup;
}
uprobe_opts.func_name = "usleep";
switch (bench) {
case BENCH_UPROBE__BASELINE: break;
case BENCH_UPROBE__EMPTY: bench_uprobe__attach_uprobe(empty); break;
case BENCH_UPROBE__TRACE_PRINTK: bench_uprobe__attach_uprobe(trace_printk); break;
default:
fprintf(stderr, "Invalid bench: %d\n", bench);
err = -EINVAL;
goto cleanup;
}
return err;
cleanup:
bench_uprobe_bpf__destroy(skel);
return err;
}
static void bench_uprobe__teardown_bpf_skel(void)
{
if (skel) {
bench_uprobe_bpf__destroy(skel);
skel = NULL;
}
}
#else
static int bench_uprobe__setup_bpf_skel(enum bench_uprobe bench __maybe_unused) { return 0; }
static void bench_uprobe__teardown_bpf_skel(void) {}
#endif
static int bench_uprobe_format__default_fprintf(const char *name, const char *unit, u64 diff, FILE *fp)
{
static u64 baseline, previous;
s64 diff_to_baseline = diff - baseline,
diff_to_previous = diff - previous;
int printed = fprintf(fp, "# Executed %'d %s calls\n", loops, name);
printed += fprintf(fp, " %14s: %'" PRIu64 " %ss", "Total time", diff, unit);
if (baseline) {
printed += fprintf(fp, " %s%'" PRId64 " to baseline", diff_to_baseline > 0 ? "+" : "", diff_to_baseline);
if (previous != baseline)
printed += fprintf(fp, " %s%'" PRId64 " to previous", diff_to_previous > 0 ? "+" : "", diff_to_previous);
}
printed += fprintf(fp, "\n\n %'.3f %ss/op", (double)diff / (double)loops, unit);
if (baseline) {
printed += fprintf(fp, " %'.3f %ss/op to baseline", (double)diff_to_baseline / (double)loops, unit);
if (previous != baseline)
printed += fprintf(fp, " %'.3f %ss/op to previous", (double)diff_to_previous / (double)loops, unit);
} else {
baseline = diff;
}
fputc('\n', fp);
previous = diff;
return printed + 1;
}
static int bench_uprobe(int argc, const char **argv, enum bench_uprobe bench)
{
const char *name = "usleep(1000)", *unit = "usec";
struct timespec start, end;
u64 diff;
int i;
argc = parse_options(argc, argv, options, bench_uprobe_usage, 0);
if (bench != BENCH_UPROBE__BASELINE && bench_uprobe__setup_bpf_skel(bench) < 0)
return 0;
clock_gettime(CLOCK_REALTIME, &start);
for (i = 0; i < loops; i++) {
usleep(USEC_PER_MSEC);
}
clock_gettime(CLOCK_REALTIME, &end);
diff = end.tv_sec * NSEC_PER_SEC + end.tv_nsec - (start.tv_sec * NSEC_PER_SEC + start.tv_nsec);
diff /= NSEC_PER_USEC;
switch (bench_format) {
case BENCH_FORMAT_DEFAULT:
bench_uprobe_format__default_fprintf(name, unit, diff, stdout);
break;
case BENCH_FORMAT_SIMPLE:
printf("%" PRIu64 "\n", diff);
break;
default:
/* reaching here is something of a disaster */
fprintf(stderr, "Unknown format:%d\n", bench_format);
exit(1);
}
if (bench != BENCH_UPROBE__BASELINE)
bench_uprobe__teardown_bpf_skel();
return 0;
}
int bench_uprobe_baseline(int argc, const char **argv)
{
return bench_uprobe(argc, argv, BENCH_UPROBE__BASELINE);
}
int bench_uprobe_empty(int argc, const char **argv)
{
return bench_uprobe(argc, argv, BENCH_UPROBE__EMPTY);
}
int bench_uprobe_trace_printk(int argc, const char **argv)
{
return bench_uprobe(argc, argv, BENCH_UPROBE__TRACE_PRINTK);
}


@ -105,6 +105,13 @@ static struct bench breakpoint_benchmarks[] = {
{ NULL, NULL, NULL },
};
static struct bench uprobe_benchmarks[] = {
{ "baseline", "Baseline libc usleep(1000) call", bench_uprobe_baseline, },
{ "empty", "Attach empty BPF prog to uprobe on usleep, system wide", bench_uprobe_empty, },
{ "trace_printk", "Attach trace_printk BPF prog to uprobe on usleep syswide", bench_uprobe_trace_printk, },
{ NULL, NULL, NULL },
};
struct collection {
const char *name;
const char *summary;
@ -124,6 +131,7 @@ static struct collection collections[] = {
#endif
{ "internals", "Perf-internals benchmarks", internals_benchmarks },
{ "breakpoint", "Breakpoint benchmarks", breakpoint_benchmarks },
{ "uprobe", "uprobe benchmarks", uprobe_benchmarks },
{ "all", "All benchmarks", NULL },
{ NULL, NULL, NULL }
};
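
With the collection registered as above, the new benchmarks become reachable through the usual 'perf bench' interface, e.g. (output elided; the empty and trace_printk variants only attach their BPF programs on a BUILD_BPF_SKEL=1 build):

   # perf bench uprobe baseline
   # perf bench uprobe empty
   # perf bench uprobe trace_printk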


@ -1915,8 +1915,8 @@ static int data_init(int argc, const char **argv)
struct perf_data *data = &d->data;
data->path = use_default ? defaults[i] : argv[i];
data->mode = PERF_DATA_MODE_READ,
data->force = force,
data->mode = PERF_DATA_MODE_READ;
data->force = force;
d->idx = i;
}


@ -145,9 +145,20 @@ static void default_print_event(void *ps, const char *pmu_name, const char *topi
putchar('\n');
if (desc && print_state->desc) {
char *desc_with_unit = NULL;
int desc_len = -1;
if (pmu_name && strcmp(pmu_name, "default_core")) {
desc_len = strlen(desc);
desc_len = asprintf(&desc_with_unit,
desc[desc_len - 1] != '.'
? "%s. Unit: %s" : "%s Unit: %s",
desc, pmu_name);
}
printf("%*s", 8, "[");
wordwrap(desc, 8, pager_get_columns(), 0);
wordwrap(desc_len > 0 ? desc_with_unit : desc, 8, pager_get_columns(), 0);
printf("]\n");
free(desc_with_unit);
}
long_desc = long_desc ?: desc;
if (long_desc && print_state->long_desc) {
@ -423,6 +434,13 @@ static void json_print_metric(void *ps __maybe_unused, const char *group,
strbuf_release(&buf);
}
static bool default_skip_duplicate_pmus(void *ps)
{
struct print_state *print_state = ps;
return !print_state->long_desc;
}
int cmd_list(int argc, const char **argv)
{
int i, ret = 0;
@ -434,6 +452,7 @@ int cmd_list(int argc, const char **argv)
.print_end = default_print_end,
.print_event = default_print_event,
.print_metric = default_print_metric,
.skip_duplicate_pmus = default_skip_duplicate_pmus,
};
const char *cputype = NULL;
const char *unit_name = NULL;
@ -502,7 +521,7 @@ int cmd_list(int argc, const char **argv)
ret = -1;
goto out;
}
default_ps.pmu_glob = pmu->name;
default_ps.pmu_glob = strdup(pmu->name);
}
}
print_cb.print_start(ps);


@ -2052,6 +2052,7 @@ static int __cmd_contention(int argc, const char **argv)
if (IS_ERR(session)) {
pr_err("Initializing perf session failed\n");
err = PTR_ERR(session);
session = NULL;
goto out_delete;
}
@ -2506,7 +2507,7 @@ int cmd_lock(int argc, const char **argv)
OPT_CALLBACK('M', "map-nr-entries", &bpf_map_entries, "num",
"Max number of BPF map entries", parse_map_entry),
OPT_CALLBACK(0, "max-stack", &max_stack_depth, "num",
"Set the maximum stack depth when collecting lopck contention, "
"Set the maximum stack depth when collecting lock contention, "
"Default: " __stringify(CONTENTION_STACK_DEPTH), parse_max_stack),
OPT_INTEGER(0, "stack-skip", &stack_skip,
"Set the number of stack depth to skip when finding a lock caller, "


@ -37,8 +37,6 @@
#include "util/parse-branch-options.h"
#include "util/parse-regs-options.h"
#include "util/perf_api_probe.h"
#include "util/llvm-utils.h"
#include "util/bpf-loader.h"
#include "util/trigger.h"
#include "util/perf-hooks.h"
#include "util/cpu-set-sched.h"
@ -2465,16 +2463,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
}
}
err = bpf__apply_obj_config();
if (err) {
char errbuf[BUFSIZ];
bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf));
pr_err("ERROR: Apply config to BPF failed: %s\n",
errbuf);
goto out_free_threads;
}
/*
* Normally perf_session__new would do this, but it doesn't have the
* evlist.
@ -3486,10 +3474,6 @@ static struct option __record_options[] = {
"collect kernel callchains"),
OPT_BOOLEAN(0, "user-callchains", &record.opts.user_callchains,
"collect user callchains"),
OPT_STRING(0, "clang-path", &llvm_param.clang_path, "clang path",
"clang binary to use for compiling BPF scriptlets"),
OPT_STRING(0, "clang-opt", &llvm_param.clang_opt, "clang options",
"options passed to clang when compiling BPF scriptlets"),
OPT_STRING(0, "vmlinux", &symbol_conf.vmlinux_name,
"file", "vmlinux pathname"),
OPT_BOOLEAN(0, "buildid-all", &record.buildid_all,
@ -3967,27 +3951,6 @@ int cmd_record(int argc, const char **argv)
setlocale(LC_ALL, "");
#ifndef HAVE_LIBBPF_SUPPORT
# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, "NO_LIBBPF=1", c)
set_nobuild('\0', "clang-path", true);
set_nobuild('\0', "clang-opt", true);
# undef set_nobuild
#endif
#ifndef HAVE_BPF_PROLOGUE
# if !defined (HAVE_DWARF_SUPPORT)
# define REASON "NO_DWARF=1"
# elif !defined (HAVE_LIBBPF_SUPPORT)
# define REASON "NO_LIBBPF=1"
# else
# define REASON "this architecture doesn't support BPF prologue"
# endif
# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, REASON, c)
set_nobuild('\0', "vmlinux", true);
# undef set_nobuild
# undef REASON
#endif
#ifndef HAVE_BPF_SKEL
# define set_nobuild(s, l, m, c) set_option_nobuild(record_options, s, l, m, c)
set_nobuild('\0', "off-cpu", "no BUILD_BPF_SKEL=1", true);
@ -4116,14 +4079,6 @@ int cmd_record(int argc, const char **argv)
if (dry_run)
goto out;
err = bpf__setup_stdout(rec->evlist);
if (err) {
bpf__strerror_setup_stdout(rec->evlist, err, errbuf, sizeof(errbuf));
pr_err("ERROR: Setup BPF stdout failed: %s\n",
errbuf);
goto out;
}
err = -ENOMEM;
if (rec->no_buildid_cache || rec->no_buildid) {


@ -2199,6 +2199,17 @@ static void process_event(struct perf_script *script,
if (PRINT_FIELD(RETIRE_LAT))
fprintf(fp, "%16" PRIu16, sample->retire_lat);
if (PRINT_FIELD(CGROUP)) {
const char *cgrp_name;
struct cgroup *cgrp = cgroup__find(machine->env,
sample->cgroup);
if (cgrp != NULL)
cgrp_name = cgrp->name;
else
cgrp_name = "unknown";
fprintf(fp, " %s", cgrp_name);
}
if (PRINT_FIELD(IP)) {
struct callchain_cursor *cursor = NULL;
@ -2243,17 +2254,6 @@ static void process_event(struct perf_script *script,
if (PRINT_FIELD(CODE_PAGE_SIZE))
fprintf(fp, " %s", get_page_size_name(sample->code_page_size, str));
if (PRINT_FIELD(CGROUP)) {
const char *cgrp_name;
struct cgroup *cgrp = cgroup__find(machine->env,
sample->cgroup);
if (cgrp != NULL)
cgrp_name = cgrp->name;
else
cgrp_name = "unknown";
fprintf(fp, " %s", cgrp_name);
}
perf_sample__fprintf_ipc(sample, attr, fp);
fprintf(fp, "\n");


@ -1805,6 +1805,7 @@ int cmd_top(int argc, const char **argv)
top.session = perf_session__new(NULL, NULL);
if (IS_ERR(top.session)) {
status = PTR_ERR(top.session);
top.session = NULL;
goto out_delete_evlist;
}


@ -18,6 +18,10 @@
#include <api/fs/tracing_path.h>
#ifdef HAVE_LIBBPF_SUPPORT
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#ifdef HAVE_BPF_SKEL
#include "bpf_skel/augmented_raw_syscalls.skel.h"
#endif
#endif
#include "util/bpf_map.h"
#include "util/rlimit.h"
@ -53,7 +57,6 @@
#include "trace/beauty/beauty.h"
#include "trace-event.h"
#include "util/parse-events.h"
#include "util/bpf-loader.h"
#include "util/tracepoint.h"
#include "callchain.h"
#include "print_binary.h"
@ -127,25 +130,19 @@ struct trace {
struct syscalltbl *sctbl;
struct {
struct syscall *table;
struct { // per syscall BPF_MAP_TYPE_PROG_ARRAY
struct bpf_map *sys_enter,
*sys_exit;
} prog_array;
struct {
struct evsel *sys_enter,
*sys_exit,
*augmented;
*sys_exit,
*bpf_output;
} events;
struct bpf_program *unaugmented_prog;
} syscalls;
struct {
struct bpf_map *map;
} dump;
#ifdef HAVE_BPF_SKEL
struct augmented_raw_syscalls_bpf *skel;
#endif
struct record_opts opts;
struct evlist *evlist;
struct machine *host;
struct thread *current;
struct bpf_object *bpf_obj;
struct cgroup *cgroup;
u64 base_time;
FILE *output;
@ -415,6 +412,7 @@ static int evsel__init_syscall_tp(struct evsel *evsel)
if (evsel__init_tp_uint_field(evsel, &sc->id, "__syscall_nr") &&
evsel__init_tp_uint_field(evsel, &sc->id, "nr"))
return -ENOENT;
return 0;
}
@ -1296,6 +1294,22 @@ static struct thread_trace *thread_trace__new(void)
return ttrace;
}
static void thread_trace__free_files(struct thread_trace *ttrace);
static void thread_trace__delete(void *pttrace)
{
struct thread_trace *ttrace = pttrace;
if (!ttrace)
return;
intlist__delete(ttrace->syscall_stats);
ttrace->syscall_stats = NULL;
thread_trace__free_files(ttrace);
zfree(&ttrace->entry_str);
free(ttrace);
}
static struct thread_trace *thread__trace(struct thread *thread, FILE *fp)
{
struct thread_trace *ttrace;
@ -1333,6 +1347,17 @@ void syscall_arg__set_ret_scnprintf(struct syscall_arg *arg,
static const size_t trace__entry_str_size = 2048;
static void thread_trace__free_files(struct thread_trace *ttrace)
{
for (int i = 0; i < ttrace->files.max; ++i) {
struct file *file = ttrace->files.table + i;
zfree(&file->pathname);
}
zfree(&ttrace->files.table);
ttrace->files.max = -1;
}
static struct file *thread_trace__files_entry(struct thread_trace *ttrace, int fd)
{
if (fd < 0)
@ -1635,6 +1660,8 @@ static int trace__symbols_init(struct trace *trace, struct evlist *evlist)
if (trace->host == NULL)
return -ENOMEM;
thread__set_priv_destructor(thread_trace__delete);
err = trace_event__register_resolver(trace->host, trace__machine__resolve_kernel_addr);
if (err < 0)
goto out;
@ -2816,7 +2843,7 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
if (thread)
trace__fprintf_comm_tid(trace, thread, trace->output);
if (evsel == trace->syscalls.events.augmented) {
if (evsel == trace->syscalls.events.bpf_output) {
int id = perf_evsel__sc_tp_uint(evsel, id, sample);
struct syscall *sc = trace__syscall_info(trace, evsel, id);
@ -3136,13 +3163,8 @@ static void evlist__free_syscall_tp_fields(struct evlist *evlist)
struct evsel *evsel;
evlist__for_each_entry(evlist, evsel) {
struct evsel_trace *et = evsel->priv;
if (!et || !evsel->tp_format || strcmp(evsel->tp_format->system, "syscalls"))
continue;
zfree(&et->fmt);
free(et);
evsel_trace__delete(evsel->priv);
evsel->priv = NULL;
}
}
@ -3254,35 +3276,16 @@ out_enomem:
goto out;
}
#ifdef HAVE_LIBBPF_SUPPORT
static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace, const char *name)
{
if (trace->bpf_obj == NULL)
return NULL;
return bpf_object__find_map_by_name(trace->bpf_obj, name);
}
static void trace__set_bpf_map_filtered_pids(struct trace *trace)
{
trace->filter_pids.map = trace__find_bpf_map_by_name(trace, "pids_filtered");
}
static void trace__set_bpf_map_syscalls(struct trace *trace)
{
trace->syscalls.prog_array.sys_enter = trace__find_bpf_map_by_name(trace, "syscalls_sys_enter");
trace->syscalls.prog_array.sys_exit = trace__find_bpf_map_by_name(trace, "syscalls_sys_exit");
}
#ifdef HAVE_BPF_SKEL
static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace, const char *name)
{
struct bpf_program *pos, *prog = NULL;
const char *sec_name;
if (trace->bpf_obj == NULL)
if (trace->skel->obj == NULL)
return NULL;
bpf_object__for_each_program(pos, trace->bpf_obj) {
bpf_object__for_each_program(pos, trace->skel->obj) {
sec_name = bpf_program__section_name(pos);
if (sec_name && !strcmp(sec_name, name)) {
prog = pos;
@ -3300,12 +3303,12 @@ static struct bpf_program *trace__find_syscall_bpf_prog(struct trace *trace, str
if (prog_name == NULL) {
char default_prog_name[256];
scnprintf(default_prog_name, sizeof(default_prog_name), "!syscalls:sys_%s_%s", type, sc->name);
scnprintf(default_prog_name, sizeof(default_prog_name), "tp/syscalls/sys_%s_%s", type, sc->name);
prog = trace__find_bpf_program_by_title(trace, default_prog_name);
if (prog != NULL)
goto out_found;
if (sc->fmt && sc->fmt->alias) {
scnprintf(default_prog_name, sizeof(default_prog_name), "!syscalls:sys_%s_%s", type, sc->fmt->alias);
scnprintf(default_prog_name, sizeof(default_prog_name), "tp/syscalls/sys_%s_%s", type, sc->fmt->alias);
prog = trace__find_bpf_program_by_title(trace, default_prog_name);
if (prog != NULL)
goto out_found;
@ -3323,7 +3326,7 @@ out_found:
pr_debug("Couldn't find BPF prog \"%s\" to associate with syscalls:sys_%s_%s, not augmenting it\n",
prog_name, type, sc->name);
out_unaugmented:
return trace->syscalls.unaugmented_prog;
return trace->skel->progs.syscall_unaugmented;
}
static void trace__init_syscall_bpf_progs(struct trace *trace, int id)
@ -3340,13 +3343,13 @@ static void trace__init_syscall_bpf_progs(struct trace *trace, int id)
static int trace__bpf_prog_sys_enter_fd(struct trace *trace, int id)
{
struct syscall *sc = trace__syscall_info(trace, NULL, id);
return sc ? bpf_program__fd(sc->bpf_prog.sys_enter) : bpf_program__fd(trace->syscalls.unaugmented_prog);
return sc ? bpf_program__fd(sc->bpf_prog.sys_enter) : bpf_program__fd(trace->skel->progs.syscall_unaugmented);
}
static int trace__bpf_prog_sys_exit_fd(struct trace *trace, int id)
{
struct syscall *sc = trace__syscall_info(trace, NULL, id);
return sc ? bpf_program__fd(sc->bpf_prog.sys_exit) : bpf_program__fd(trace->syscalls.unaugmented_prog);
return sc ? bpf_program__fd(sc->bpf_prog.sys_exit) : bpf_program__fd(trace->skel->progs.syscall_unaugmented);
}
static struct bpf_program *trace__find_usable_bpf_prog_entry(struct trace *trace, struct syscall *sc)
@ -3371,7 +3374,7 @@ try_to_find_pair:
bool is_candidate = false;
if (pair == NULL || pair == sc ||
pair->bpf_prog.sys_enter == trace->syscalls.unaugmented_prog)
pair->bpf_prog.sys_enter == trace->skel->progs.syscall_unaugmented)
continue;
for (field = sc->args, candidate_field = pair->args;
@ -3395,6 +3398,19 @@ try_to_find_pair:
if (strcmp(field->type, candidate_field->type))
goto next_candidate;
/*
* This is limited in the BPF program but sys_write
* uses "const char *" for its "buf" arg so we need to
* use some heuristic that is kinda future proof...
*/
if (strcmp(field->type, "const char *") == 0 &&
!(strstr(field->name, "name") ||
strstr(field->name, "path") ||
strstr(field->name, "file") ||
strstr(field->name, "root") ||
strstr(field->name, "description")))
goto next_candidate;
is_candidate = true;
}
@ -3424,7 +3440,7 @@ try_to_find_pair:
*/
if (pair_prog == NULL) {
pair_prog = trace__find_syscall_bpf_prog(trace, pair, pair->fmt ? pair->fmt->bpf_prog_name.sys_enter : NULL, "enter");
if (pair_prog == trace->syscalls.unaugmented_prog)
if (pair_prog == trace->skel->progs.syscall_unaugmented)
goto next_candidate;
}
@ -3439,8 +3455,8 @@ try_to_find_pair:
static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
{
int map_enter_fd = bpf_map__fd(trace->syscalls.prog_array.sys_enter),
map_exit_fd = bpf_map__fd(trace->syscalls.prog_array.sys_exit);
int map_enter_fd = bpf_map__fd(trace->skel->maps.syscalls_sys_enter);
int map_exit_fd = bpf_map__fd(trace->skel->maps.syscalls_sys_exit);
int err = 0, key;
for (key = 0; key < trace->sctbl->syscalls.nr_entries; ++key) {
@ -3502,7 +3518,7 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
* For now we're just reusing the sys_enter prog, and if it
* already has an augmenter, we don't need to find one.
*/
if (sc->bpf_prog.sys_enter != trace->syscalls.unaugmented_prog)
if (sc->bpf_prog.sys_enter != trace->skel->progs.syscall_unaugmented)
continue;
/*
@ -3525,74 +3541,9 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
break;
}
return err;
}
static void trace__delete_augmented_syscalls(struct trace *trace)
{
struct evsel *evsel, *tmp;
evlist__remove(trace->evlist, trace->syscalls.events.augmented);
evsel__delete(trace->syscalls.events.augmented);
trace->syscalls.events.augmented = NULL;
evlist__for_each_entry_safe(trace->evlist, tmp, evsel) {
if (evsel->bpf_obj == trace->bpf_obj) {
evlist__remove(trace->evlist, evsel);
evsel__delete(evsel);
}
}
bpf_object__close(trace->bpf_obj);
trace->bpf_obj = NULL;
}
#else // HAVE_LIBBPF_SUPPORT
static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace __maybe_unused,
const char *name __maybe_unused)
{
return NULL;
}
static void trace__set_bpf_map_filtered_pids(struct trace *trace __maybe_unused)
{
}
static void trace__set_bpf_map_syscalls(struct trace *trace __maybe_unused)
{
}
static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace __maybe_unused,
const char *name __maybe_unused)
{
return NULL;
}
static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace __maybe_unused)
{
return 0;
}
static void trace__delete_augmented_syscalls(struct trace *trace __maybe_unused)
{
}
#endif // HAVE_LIBBPF_SUPPORT
static bool trace__only_augmented_syscalls_evsels(struct trace *trace)
{
struct evsel *evsel;
evlist__for_each_entry(trace->evlist, evsel) {
if (evsel == trace->syscalls.events.augmented ||
evsel->bpf_obj == trace->bpf_obj)
continue;
return false;
}
return true;
}
#endif // HAVE_BPF_SKEL
static int trace__set_ev_qualifier_filter(struct trace *trace)
{
@ -3956,23 +3907,31 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
err = evlist__open(evlist);
if (err < 0)
goto out_error_open;
#ifdef HAVE_BPF_SKEL
if (trace->syscalls.events.bpf_output) {
struct perf_cpu cpu;
err = bpf__apply_obj_config();
if (err) {
char errbuf[BUFSIZ];
bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf));
pr_err("ERROR: Apply config to BPF failed: %s\n",
errbuf);
goto out_error_open;
/*
* Set up the __augmented_syscalls__ BPF map to hold for each
* CPU the bpf-output event's file descriptor.
*/
perf_cpu_map__for_each_cpu(cpu, i, trace->syscalls.events.bpf_output->core.cpus) {
bpf_map__update_elem(trace->skel->maps.__augmented_syscalls__,
&cpu.cpu, sizeof(int),
xyarray__entry(trace->syscalls.events.bpf_output->core.fd,
cpu.cpu, 0),
sizeof(__u32), BPF_ANY);
}
}
#endif
err = trace__set_filter_pids(trace);
if (err < 0)
goto out_error_mem;
if (trace->syscalls.prog_array.sys_enter)
#ifdef HAVE_BPF_SKEL
if (trace->skel && trace->skel->progs.sys_enter)
trace__init_syscalls_bpf_prog_array_maps(trace);
#endif
if (trace->ev_qualifier_ids.nr > 0) {
err = trace__set_ev_qualifier_filter(trace);
@ -4005,9 +3964,6 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
if (err < 0)
goto out_error_apply_filters;
if (trace->dump.map)
bpf_map__fprintf(trace->dump.map, trace->output);
err = evlist__mmap(evlist, trace->opts.mmap_pages);
if (err < 0)
goto out_error_mmap;
@ -4704,6 +4660,18 @@ static void trace__exit(struct trace *trace)
zfree(&trace->perfconfig_events);
}
#ifdef HAVE_BPF_SKEL
static int bpf__setup_bpf_output(struct evlist *evlist)
{
int err = parse_event(evlist, "bpf-output/no-inherit=1,name=__augmented_syscalls__/");
if (err)
pr_debug("ERROR: failed to create the \"__augmented_syscalls__\" bpf-output event\n");
return err;
}
#endif
int cmd_trace(int argc, const char **argv)
{
const char *trace_usage[] = {
@ -4735,7 +4703,6 @@ int cmd_trace(int argc, const char **argv)
.max_stack = UINT_MAX,
.max_events = ULONG_MAX,
};
const char *map_dump_str = NULL;
const char *output_name = NULL;
const struct option trace_options[] = {
OPT_CALLBACK('e', "event", &trace, "event",
@ -4769,9 +4736,6 @@ int cmd_trace(int argc, const char **argv)
OPT_CALLBACK(0, "duration", &trace, "float",
"show only events with duration > N.M ms",
trace__set_duration),
#ifdef HAVE_LIBBPF_SUPPORT
OPT_STRING(0, "map-dump", &map_dump_str, "BPF map", "BPF map to periodically dump"),
#endif
OPT_BOOLEAN(0, "sched", &trace.sched, "show blocking scheduler events"),
OPT_INCR('v', "verbose", &verbose, "be more verbose"),
OPT_BOOLEAN('T', "time", &trace.full_time,
@ -4898,87 +4862,48 @@ int cmd_trace(int argc, const char **argv)
"cgroup monitoring only available in system-wide mode");
}
evsel = bpf__setup_output_event(trace.evlist, "__augmented_syscalls__");
if (IS_ERR(evsel)) {
bpf__strerror_setup_output_event(trace.evlist, PTR_ERR(evsel), bf, sizeof(bf));
pr_err("ERROR: Setup trace syscalls enter failed: %s\n", bf);
goto out;
}
#ifdef HAVE_BPF_SKEL
if (!trace.trace_syscalls)
goto skip_augmentation;
if (evsel) {
trace.syscalls.events.augmented = evsel;
trace.skel = augmented_raw_syscalls_bpf__open();
if (!trace.skel) {
pr_debug("Failed to open augmented syscalls BPF skeleton");
} else {
/*
* Disable attaching the BPF programs except for sys_enter and
* sys_exit that tail call into this as necessary.
*/
struct bpf_program *prog;
evsel = evlist__find_tracepoint_by_name(trace.evlist, "raw_syscalls:sys_enter");
if (evsel == NULL) {
pr_err("ERROR: raw_syscalls:sys_enter not found in the augmented BPF object\n");
goto out;
bpf_object__for_each_program(prog, trace.skel->obj) {
if (prog != trace.skel->progs.sys_enter && prog != trace.skel->progs.sys_exit)
bpf_program__set_autoattach(prog, /*autoattach=*/false);
}
if (evsel->bpf_obj == NULL) {
pr_err("ERROR: raw_syscalls:sys_enter not associated to a BPF object\n");
goto out;
}
err = augmented_raw_syscalls_bpf__load(trace.skel);
trace.bpf_obj = evsel->bpf_obj;
/*
* If we have _just_ the augmenter event but don't have an
* explicit --syscalls, then assume we want all strace-like
* syscalls:
*/
if (!trace.trace_syscalls && trace__only_augmented_syscalls_evsels(&trace))
trace.trace_syscalls = true;
/*
* So, if we have a syscall augmenter, but trace_syscalls, aka
* strace-like syscall tracing is not set, then we need to throw
* away the augmenter, i.e. all the events that were created
* from that BPF object file.
*
* This is more to fix the current .perfconfig trace.add_events
* style of setting up the strace-like eBPF based syscall point
* payload augmenter.
*
* All this complexity will be avoided by adding an alternative
* to trace.add_events in the form of
* trace.bpf_augmented_syscalls, that will be only parsed if we
* need it.
*
* .perfconfig trace.add_events is still useful if we want, for
* instance, have msr_write.msr in some .perfconfig profile based
* 'perf trace --config determinism.profile' mode, where for some
* particular goal/workload type we want a set of events and
* output mode (with timings, etc) instead of having to add
* all via the command line.
*
* Also --config to specify an alternate .perfconfig file needs
* to be implemented.
*/
if (!trace.trace_syscalls) {
trace__delete_augmented_syscalls(&trace);
if (err < 0) {
libbpf_strerror(err, bf, sizeof(bf));
pr_debug("Failed to load augmented syscalls BPF skeleton: %s\n", bf);
} else {
trace__set_bpf_map_filtered_pids(&trace);
trace__set_bpf_map_syscalls(&trace);
trace.syscalls.unaugmented_prog = trace__find_bpf_program_by_title(&trace, "!raw_syscalls:unaugmented");
augmented_raw_syscalls_bpf__attach(trace.skel);
trace__add_syscall_newtp(&trace);
}
}
err = bpf__setup_stdout(trace.evlist);
err = bpf__setup_bpf_output(trace.evlist);
if (err) {
bpf__strerror_setup_stdout(trace.evlist, err, bf, sizeof(bf));
pr_err("ERROR: Setup BPF stdout failed: %s\n", bf);
libbpf_strerror(err, bf, sizeof(bf));
pr_err("ERROR: Setup BPF output event failed: %s\n", bf);
goto out;
}
trace.syscalls.events.bpf_output = evlist__last(trace.evlist);
assert(!strcmp(evsel__name(trace.syscalls.events.bpf_output), "__augmented_syscalls__"));
skip_augmentation:
#endif
err = -1;
if (map_dump_str) {
trace.dump.map = trace__find_bpf_map_by_name(&trace, map_dump_str);
if (trace.dump.map == NULL) {
pr_err("ERROR: BPF map \"%s\" not found\n", map_dump_str);
goto out;
}
}
if (trace.trace_pgfaults) {
trace.opts.sample_address = true;
trace.opts.sample_time = true;
@ -5029,7 +4954,7 @@ int cmd_trace(int argc, const char **argv)
* buffers that are being copied from kernel to userspace, think 'read'
* syscall.
*/
if (trace.syscalls.events.augmented) {
if (trace.syscalls.events.bpf_output) {
evlist__for_each_entry(trace.evlist, evsel) {
bool raw_syscalls_sys_exit = strcmp(evsel__name(evsel), "raw_syscalls:sys_exit") == 0;
@ -5038,9 +4963,9 @@ int cmd_trace(int argc, const char **argv)
goto init_augmented_syscall_tp;
}
if (trace.syscalls.events.augmented->priv == NULL &&
if (trace.syscalls.events.bpf_output->priv == NULL &&
strstr(evsel__name(evsel), "syscalls:sys_enter")) {
struct evsel *augmented = trace.syscalls.events.augmented;
struct evsel *augmented = trace.syscalls.events.bpf_output;
if (evsel__init_augmented_syscall_tp(augmented, evsel) ||
evsel__init_augmented_syscall_tp_args(augmented))
goto out;
@ -5145,5 +5070,8 @@ out_close:
fclose(trace.output);
out:
trace__exit(&trace);
#ifdef HAVE_BPF_SKEL
augmented_raw_syscalls_bpf__destroy(trace.skel);
#endif
return err;
}


@ -123,7 +123,7 @@ check () {
shift
check_2 "tools/$file" "$file" $*
check_2 "tools/$file" "$file" "$@"
}
beauty_check () {
@ -131,7 +131,7 @@ beauty_check () {
shift
check_2 "tools/perf/trace/beauty/$file" "$file" $*
check_2 "tools/perf/trace/beauty/$file" "$file" "$@"
}
# Check if we have the kernel headers (tools/perf/../../include), else
@ -183,7 +183,7 @@ done
check_2 tools/perf/util/hashmap.h tools/lib/bpf/hashmap.h
check_2 tools/perf/util/hashmap.c tools/lib/bpf/hashmap.c
cd tools/perf
cd tools/perf || exit
if [ ${#FAILURES[@]} -gt 0 ]
then


@ -254,6 +254,30 @@ static int check_addr_al(void *ctx)
return 0;
}
static int check_address_al(void *ctx, const struct perf_dlfilter_sample *sample)
{
struct perf_dlfilter_al address_al;
const struct perf_dlfilter_al *al;
al = perf_dlfilter_fns.resolve_ip(ctx);
if (!al)
return test_fail("resolve_ip() failed");
address_al.size = sizeof(address_al);
if (perf_dlfilter_fns.resolve_address(ctx, sample->ip, &address_al))
return test_fail("resolve_address() failed");
CHECK(address_al.sym && al->sym);
CHECK(!strcmp(address_al.sym, al->sym));
CHECK(address_al.addr == al->addr);
CHECK(address_al.sym_start == al->sym_start);
CHECK(address_al.sym_end == al->sym_end);
CHECK(address_al.dso && al->dso);
CHECK(!strcmp(address_al.dso, al->dso));
return 0;
}
static int check_attr(void *ctx)
{
struct perf_event_attr *attr = perf_dlfilter_fns.attr(ctx);
@ -290,7 +314,7 @@ static int do_checks(void *data, const struct perf_dlfilter_sample *sample, void
if (early && !d->do_early)
return 0;
if (check_al(ctx) || check_addr_al(ctx))
if (check_al(ctx) || check_addr_al(ctx) || check_address_al(ctx, sample))
return -1;
if (early)


@ -0,0 +1,377 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Test v2 API for perf --dlfilter shared object
* Copyright (c) 2023, Intel Corporation.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
/*
* Copy v2 API instead of including current API
*/
#include <linux/perf_event.h>
#include <linux/types.h>
/*
* The following macro can be used to determine if this header defines
* perf_dlfilter_sample machine_pid and vcpu.
*/
#define PERF_DLFILTER_HAS_MACHINE_PID
/* Definitions for perf_dlfilter_sample flags */
enum {
PERF_DLFILTER_FLAG_BRANCH = 1ULL << 0,
PERF_DLFILTER_FLAG_CALL = 1ULL << 1,
PERF_DLFILTER_FLAG_RETURN = 1ULL << 2,
PERF_DLFILTER_FLAG_CONDITIONAL = 1ULL << 3,
PERF_DLFILTER_FLAG_SYSCALLRET = 1ULL << 4,
PERF_DLFILTER_FLAG_ASYNC = 1ULL << 5,
PERF_DLFILTER_FLAG_INTERRUPT = 1ULL << 6,
PERF_DLFILTER_FLAG_TX_ABORT = 1ULL << 7,
PERF_DLFILTER_FLAG_TRACE_BEGIN = 1ULL << 8,
PERF_DLFILTER_FLAG_TRACE_END = 1ULL << 9,
PERF_DLFILTER_FLAG_IN_TX = 1ULL << 10,
PERF_DLFILTER_FLAG_VMENTRY = 1ULL << 11,
PERF_DLFILTER_FLAG_VMEXIT = 1ULL << 12,
};
/*
* perf sample event information (as per perf script and <linux/perf_event.h>)
*/
struct perf_dlfilter_sample {
__u32 size; /* Size of this structure (for compatibility checking) */
__u16 ins_lat; /* Refer PERF_SAMPLE_WEIGHT_TYPE in <linux/perf_event.h> */
__u16 p_stage_cyc; /* Refer PERF_SAMPLE_WEIGHT_TYPE in <linux/perf_event.h> */
__u64 ip;
__s32 pid;
__s32 tid;
__u64 time;
__u64 addr;
__u64 id;
__u64 stream_id;
__u64 period;
__u64 weight; /* Refer PERF_SAMPLE_WEIGHT_TYPE in <linux/perf_event.h> */
__u64 transaction; /* Refer PERF_SAMPLE_TRANSACTION in <linux/perf_event.h> */
__u64 insn_cnt; /* For instructions-per-cycle (IPC) */
__u64 cyc_cnt; /* For instructions-per-cycle (IPC) */
__s32 cpu;
__u32 flags; /* Refer PERF_DLFILTER_FLAG_* above */
__u64 data_src; /* Refer PERF_SAMPLE_DATA_SRC in <linux/perf_event.h> */
__u64 phys_addr; /* Refer PERF_SAMPLE_PHYS_ADDR in <linux/perf_event.h> */
__u64 data_page_size; /* Refer PERF_SAMPLE_DATA_PAGE_SIZE in <linux/perf_event.h> */
__u64 code_page_size; /* Refer PERF_SAMPLE_CODE_PAGE_SIZE in <linux/perf_event.h> */
__u64 cgroup; /* Refer PERF_SAMPLE_CGROUP in <linux/perf_event.h> */
__u8 cpumode; /* Refer CPUMODE_MASK etc in <linux/perf_event.h> */
__u8 addr_correlates_sym; /* True => resolve_addr() can be called */
__u16 misc; /* Refer perf_event_header in <linux/perf_event.h> */
__u32 raw_size; /* Refer PERF_SAMPLE_RAW in <linux/perf_event.h> */
const void *raw_data; /* Refer PERF_SAMPLE_RAW in <linux/perf_event.h> */
__u64 brstack_nr; /* Number of brstack entries */
const struct perf_branch_entry *brstack; /* Refer <linux/perf_event.h> */
__u64 raw_callchain_nr; /* Number of raw_callchain entries */
const __u64 *raw_callchain; /* Refer <linux/perf_event.h> */
const char *event;
__s32 machine_pid;
__s32 vcpu;
};
/*
* Address location (as per perf script)
*/
struct perf_dlfilter_al {
__u32 size; /* Size of this structure (for compatibility checking) */
__u32 symoff;
const char *sym;
__u64 addr; /* Mapped address (from dso) */
__u64 sym_start;
__u64 sym_end;
const char *dso;
__u8 sym_binding; /* STB_LOCAL, STB_GLOBAL or STB_WEAK, refer <elf.h> */
__u8 is_64_bit; /* Only valid if dso is not NULL */
__u8 is_kernel_ip; /* True if in kernel space */
__u32 buildid_size;
__u8 *buildid;
/* Below members are only populated by resolve_ip() */
__u8 filtered; /* True if this sample event will be filtered out */
const char *comm;
void *priv; /* Private data (v2 API) */
};
struct perf_dlfilter_fns {
/* Return information about ip */
const struct perf_dlfilter_al *(*resolve_ip)(void *ctx);
/* Return information about addr (if addr_correlates_sym) */
const struct perf_dlfilter_al *(*resolve_addr)(void *ctx);
/* Return arguments from --dlarg option */
char **(*args)(void *ctx, int *dlargc);
/*
* Return information about address (al->size must be set before
* calling). Returns 0 on success, -1 otherwise. Call al_cleanup()
* when 'al' data is no longer needed.
*/
__s32 (*resolve_address)(void *ctx, __u64 address, struct perf_dlfilter_al *al);
/* Return instruction bytes and length */
const __u8 *(*insn)(void *ctx, __u32 *length);
/* Return source file name and line number */
const char *(*srcline)(void *ctx, __u32 *line_number);
/* Return perf_event_attr, refer <linux/perf_event.h> */
struct perf_event_attr *(*attr)(void *ctx);
/* Read object code, return number of bytes read */
__s32 (*object_code)(void *ctx, __u64 ip, void *buf, __u32 len);
/*
* If present (i.e. must check al_cleanup != NULL), call after
* resolve_address() to free any associated resources. (v2 API)
*/
void (*al_cleanup)(void *ctx, struct perf_dlfilter_al *al);
/* Reserved */
void *(*reserved[119])(void *);
};
struct perf_dlfilter_fns perf_dlfilter_fns;
static int verbose;
#define pr_debug(fmt, ...) do { \
if (verbose > 0) \
fprintf(stderr, fmt, ##__VA_ARGS__); \
} while (0)
static int test_fail(const char *msg)
{
pr_debug("%s\n", msg);
return -1;
}
#define CHECK(x) do { \
if (!(x)) \
return test_fail("Check '" #x "' failed\n"); \
} while (0)
struct filter_data {
__u64 ip;
__u64 addr;
int do_early;
int early_filter_cnt;
int filter_cnt;
};
static struct filter_data *filt_dat;
int start(void **data, void *ctx)
{
int dlargc;
char **dlargv;
struct filter_data *d;
static bool called;
verbose = 1;
CHECK(!filt_dat && !called);
called = true;
d = calloc(1, sizeof(*d));
if (!d)
return test_fail("Failed to allocate memory");
filt_dat = d;
*data = d;
dlargv = perf_dlfilter_fns.args(ctx, &dlargc);
CHECK(dlargc == 6);
CHECK(!strcmp(dlargv[0], "first"));
verbose = strtol(dlargv[1], NULL, 0);
d->ip = strtoull(dlargv[2], NULL, 0);
d->addr = strtoull(dlargv[3], NULL, 0);
d->do_early = strtol(dlargv[4], NULL, 0);
CHECK(!strcmp(dlargv[5], "last"));
pr_debug("%s API\n", __func__);
return 0;
}
#define CHECK_SAMPLE(x) do { \
if (sample->x != expected.x) \
return test_fail("'" #x "' not expected value\n"); \
} while (0)
static int check_sample(struct filter_data *d, const struct perf_dlfilter_sample *sample)
{
struct perf_dlfilter_sample expected = {
.ip = d->ip,
.pid = 12345,
.tid = 12346,
.time = 1234567890,
.addr = d->addr,
.id = 99,
.stream_id = 101,
.period = 543212345,
.cpu = 31,
.cpumode = PERF_RECORD_MISC_USER,
.addr_correlates_sym = 1,
.misc = PERF_RECORD_MISC_USER,
};
CHECK(sample->size >= sizeof(struct perf_dlfilter_sample));
CHECK_SAMPLE(ip);
CHECK_SAMPLE(pid);
CHECK_SAMPLE(tid);
CHECK_SAMPLE(time);
CHECK_SAMPLE(addr);
CHECK_SAMPLE(id);
CHECK_SAMPLE(stream_id);
CHECK_SAMPLE(period);
CHECK_SAMPLE(cpu);
CHECK_SAMPLE(cpumode);
CHECK_SAMPLE(addr_correlates_sym);
CHECK_SAMPLE(misc);
CHECK(!sample->raw_data);
CHECK_SAMPLE(brstack_nr);
CHECK(!sample->brstack);
CHECK_SAMPLE(raw_callchain_nr);
CHECK(!sample->raw_callchain);
#define EVENT_NAME "branches:"
CHECK(!strncmp(sample->event, EVENT_NAME, strlen(EVENT_NAME)));
return 0;
}
static int check_al(void *ctx)
{
const struct perf_dlfilter_al *al;
al = perf_dlfilter_fns.resolve_ip(ctx);
if (!al)
return test_fail("resolve_ip() failed");
CHECK(al->sym && !strcmp("foo", al->sym));
CHECK(!al->symoff);
return 0;
}
static int check_addr_al(void *ctx)
{
const struct perf_dlfilter_al *addr_al;
addr_al = perf_dlfilter_fns.resolve_addr(ctx);
if (!addr_al)
return test_fail("resolve_addr() failed");
CHECK(addr_al->sym && !strcmp("bar", addr_al->sym));
CHECK(!addr_al->symoff);
return 0;
}
static int check_address_al(void *ctx, const struct perf_dlfilter_sample *sample)
{
struct perf_dlfilter_al address_al;
const struct perf_dlfilter_al *al;
al = perf_dlfilter_fns.resolve_ip(ctx);
if (!al)
return test_fail("resolve_ip() failed");
address_al.size = sizeof(address_al);
if (perf_dlfilter_fns.resolve_address(ctx, sample->ip, &address_al))
return test_fail("resolve_address() failed");
CHECK(address_al.sym && al->sym);
CHECK(!strcmp(address_al.sym, al->sym));
CHECK(address_al.addr == al->addr);
CHECK(address_al.sym_start == al->sym_start);
CHECK(address_al.sym_end == al->sym_end);
CHECK(address_al.dso && al->dso);
CHECK(!strcmp(address_al.dso, al->dso));
/* al_cleanup() is v2 API so may not be present */
if (perf_dlfilter_fns.al_cleanup)
perf_dlfilter_fns.al_cleanup(ctx, &address_al);
return 0;
}
static int check_attr(void *ctx)
{
struct perf_event_attr *attr = perf_dlfilter_fns.attr(ctx);
CHECK(attr);
CHECK(attr->type == PERF_TYPE_HARDWARE);
CHECK(attr->config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
return 0;
}
static int do_checks(void *data, const struct perf_dlfilter_sample *sample, void *ctx, bool early)
{
struct filter_data *d = data;
CHECK(data && filt_dat == data);
if (early) {
CHECK(!d->early_filter_cnt);
d->early_filter_cnt += 1;
} else {
CHECK(!d->filter_cnt);
CHECK(d->early_filter_cnt);
CHECK(d->do_early != 2);
d->filter_cnt += 1;
}
if (check_sample(data, sample))
return -1;
if (check_attr(ctx))
return -1;
if (early && !d->do_early)
return 0;
if (check_al(ctx) || check_addr_al(ctx) || check_address_al(ctx, sample))
return -1;
if (early)
return d->do_early == 2;
return 1;
}
int filter_event_early(void *data, const struct perf_dlfilter_sample *sample, void *ctx)
{
pr_debug("%s API\n", __func__);
return do_checks(data, sample, ctx, true);
}
int filter_event(void *data, const struct perf_dlfilter_sample *sample, void *ctx)
{
pr_debug("%s API\n", __func__);
return do_checks(data, sample, ctx, false);
}
int stop(void *data, void *ctx)
{
static bool called;
pr_debug("%s API\n", __func__);
CHECK(data && filt_dat == data && !called);
called = true;
free(data);
filt_dat = NULL;
return 0;
}
const char *filter_description(const char **long_description)
{
*long_description = "Filter used by the 'dlfilter C API' perf test";
return "dlfilter to test v2 C API";
}
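
A side note on the PERF_DLFILTER_HAS_MACHINE_PID macro copied near the top of this test: a dlfilter that has to build against either header generation can use it for feature detection. A minimal sketch, assuming only what the header copy above defines (the show_guest_ids() helper is made up for illustration):

#include <stdio.h>

static void show_guest_ids(const struct perf_dlfilter_sample *sample)
{
#ifdef PERF_DLFILTER_HAS_MACHINE_PID
	/* This header generation carries the guest correlation fields */
	fprintf(stderr, "machine_pid=%d vcpu=%d\n", sample->machine_pid, sample->vcpu);
#else
	(void)sample;	/* older header: machine_pid/vcpu not present */
#endif
}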


@ -1,53 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
Description:
. Disable strace like syscall tracing (--no-syscalls), or try tracing
just some (-e *sleep).
. Attach a filter function to a kernel function, returning when it should
be considered, i.e. appear on the output.
. Run it system wide, so that any sleep of >= 5 seconds and less than 6
seconds gets caught.
. Ask for callgraphs using DWARF info, so that userspace can be unwound
. While this is running, run something like "sleep 5s".
. If we decide to add tv_nsec as well, then it becomes:
int probe(hrtimer_nanosleep, rqtp->tv_sec rqtp->tv_nsec)(void *ctx, int err, long sec, long nsec)
I.e. add where it comes from (rqtp->tv_nsec) and where it will be
accessible in the function body (nsec)
# perf trace --no-syscalls -e tools/perf/examples/bpf/5sec.c/call-graph=dwarf/
0.000 perf_bpf_probe:func:(ffffffff9811b5f0) tv_sec=5
hrtimer_nanosleep ([kernel.kallsyms])
__x64_sys_nanosleep ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
__GI___nanosleep (/usr/lib64/libc-2.26.so)
rpl_nanosleep (/usr/bin/sleep)
xnanosleep (/usr/bin/sleep)
main (/usr/bin/sleep)
__libc_start_main (/usr/lib64/libc-2.26.so)
_start (/usr/bin/sleep)
^C#
Copyright (C) 2018 Red Hat, Inc., Arnaldo Carvalho de Melo <acme@redhat.com>
*/
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#define NSEC_PER_SEC 1000000000L
SEC("hrtimer_nanosleep=hrtimer_nanosleep rqtp")
int hrtimer_nanosleep(void *ctx, int err, long long sec)
{
return sec / NSEC_PER_SEC == 5ULL;
}
char _license[] SEC("license") = "GPL";
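The comment above sketches how the probe spec would grow a second fetch argument.
Written out, that hypothetical variant of this now-removed experimental interface
would have looked roughly like (untested sketch, following the comment's own form):

SEC("hrtimer_nanosleep=hrtimer_nanosleep rqtp->tv_sec rqtp->tv_nsec")
int hrtimer_nanosleep(void *ctx, int err, long sec, long nsec)
{
        /* match sleeps of exactly 5s and 0ns */
        return sec == 5 && nsec == 0;
}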


@@ -1,12 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct syscall_enter_args;

SEC("raw_syscalls:sys_enter")
int sys_enter(struct syscall_enter_args *args)
{
        return 0;
}
char _license[] SEC("license") = "GPL";


@@ -1,27 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct __bpf_stdout__ {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __type(key, int);
        __type(value, __u32);
        __uint(max_entries, __NR_CPUS__);
} __bpf_stdout__ SEC(".maps");

#define puts(from) \
        ({ const int __len = sizeof(from); \
           char __from[sizeof(from)] = from; \
           bpf_perf_event_output(args, &__bpf_stdout__, BPF_F_CURRENT_CPU, \
                                 &__from, __len & (sizeof(from) - 1)); })

struct syscall_enter_args;

SEC("raw_syscalls:sys_enter")
int sys_enter(struct syscall_enter_args *args)
{
        puts("Hello, world\n");
        return 0;
}

char _license[] SEC("license") = "GPL";
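The puts() macro above is a thin wrapper around the bpf_perf_event_output() helper:
it copies the string literal onto the BPF stack and emits it through the
__bpf_stdout__ perf event array, which the perf side reads and prints. Expanded by
hand, the call is roughly (a sketch, without the macro's length masking):

SEC("raw_syscalls:sys_enter")
int sys_enter(struct syscall_enter_args *args)
{
        char msg[] = "Hello, world\n";

        /* emit the bytes to the __bpf_stdout__ map on the current CPU */
        bpf_perf_event_output(args, &__bpf_stdout__, BPF_F_CURRENT_CPU,
                              msg, sizeof(msg));
        return 0;
}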


@@ -1,33 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Hook into 'openat' syscall entry tracepoint
 *
 * Test it with:
 *
 * perf trace -e tools/perf/examples/bpf/sys_enter_openat.c cat /etc/passwd > /dev/null
 *
 * It'll catch some openat syscalls related to the dynamic linker and
 * the last one should be the one for '/etc/passwd'.
 *
 * The syscall_enter_openat_args can be used to get the syscall fields
 * and use them for filtering calls, i.e. use in expressions for
 * the return value.
 */

#include <bpf/bpf.h>

struct syscall_enter_openat_args {
        unsigned long long unused;
        long               syscall_nr;
        long               dfd;
        char               *filename_ptr;
        long               flags;
        long               mode;
};

int syscall_enter(openat)(struct syscall_enter_openat_args *args)
{
        return 1;
}

license(GPL);
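As the comment says, the struct fields can be used in the return expression to filter
calls. A hypothetical variant (again using the removed experimental interface) that
reports only openat calls passing a real directory fd, i.e. anything other than
AT_FDCWD, which is -100, would have been:

int syscall_enter(openat)(struct syscall_enter_openat_args *args)
{
        return args->dfd != -100 /* AT_FDCWD */;
}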


@@ -91,6 +91,7 @@ struct perf_dlfilter_al {
         /* Below members are only populated by resolve_ip() */
         __u8 filtered; /* True if this sample event will be filtered out */
         const char *comm;
+        void *priv; /* Private data. Do not change */
 };
 
 struct perf_dlfilter_fns {
@@ -102,7 +103,8 @@ struct perf_dlfilter_fns {
         char **(*args)(void *ctx, int *dlargc);
         /*
          * Return information about address (al->size must be set before
-         * calling). Returns 0 on success, -1 otherwise.
+         * calling). Returns 0 on success, -1 otherwise. Call al_cleanup()
+         * when 'al' data is no longer needed.
          */
         __s32 (*resolve_address)(void *ctx, __u64 address, struct perf_dlfilter_al *al);
         /* Return instruction bytes and length */
@@ -113,8 +115,13 @@ struct perf_dlfilter_fns {
         struct perf_event_attr *(*attr)(void *ctx);
         /* Read object code, return numbers of bytes read */
         __s32 (*object_code)(void *ctx, __u64 ip, void *buf, __u32 len);
+        /*
+         * If present (i.e. must check al_cleanup != NULL), call after
+         * resolve_address() to free any associated resources.
+         */
+        void (*al_cleanup)(void *ctx, struct perf_dlfilter_al *al);
         /* Reserved */
-        void *(*reserved[120])(void *);
+        void *(*reserved[119])(void *);
 };
 
 /*
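The usage pattern implied by the new member follows what check_address_al() exercises
in the test above; a sketch, where show_symbol() is a made-up name for illustration:

static void show_symbol(void *ctx, __u64 address)
{
        struct perf_dlfilter_al al = { .size = sizeof(al) };

        if (perf_dlfilter_fns.resolve_address(ctx, address, &al))
                return;

        /* ... use al.sym, al.dso, al.sym_start, al.sym_end ... */

        /* al_cleanup() is v2 API, so check for it before calling */
        if (perf_dlfilter_fns.al_cleanup)
                perf_dlfilter_fns.al_cleanup(ctx, &al);
}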


@ -18,7 +18,6 @@
#include <subcmd/run-command.h>
#include "util/parse-events.h"
#include <subcmd/parse-options.h>
#include "util/bpf-loader.h"
#include "util/debug.h"
#include "util/event.h"
#include "util/util.h" // usage()
@ -324,7 +323,6 @@ static int run_builtin(struct cmd_struct *p, int argc, const char **argv)
perf_config__exit();
exit_browser(status);
perf_env__exit(&perf_env);
bpf__clear();
if (status)
return status & 0xff;


@@ -35,3 +35,9 @@ $(PMU_EVENTS_C): $(JSON) $(JSON_TEST) $(JEVENTS_PY) $(METRIC_PY) $(METRIC_TEST_L
 	$(call rule_mkdir)
 	$(Q)$(call echo-cmd,gen)$(PYTHON) $(JEVENTS_PY) $(JEVENTS_ARCH) $(JEVENTS_MODEL) pmu-events/arch $@
 endif
+
+# pmu-events.c file is generated in the OUTPUT directory so it needs a
+# separate rule to depend on it properly
+$(OUTPUT)pmu-events/pmu-events.o: $(PMU_EVENTS_C)
+	$(call rule_mkdir)
+	$(call if_changed_dep,cc_o_c)


@@ -92,9 +92,6 @@
     {
         "ArchStdEvent": "L1D_CACHE_LMISS_RD"
     },
-    {
-        "ArchStdEvent": "L1D_CACHE_LMISS"
-    },
     {
         "ArchStdEvent": "L1I_CACHE_LMISS"
     },

@@ -533,66 +533,6 @@
         "EventName": "MMU_D_OTB_ALLOC",
         "BriefDescription": "L2D OTB allocate"
     },
-    {
-        "PublicDescription": "DTLB Translation cache hit on S1L2 walk cache entry",
-        "EventCode": "0xD801",
-        "EventName": "MMU_D_TRANS_CACHE_HIT_S1L2_WALK",
-        "BriefDescription": "DTLB Translation cache hit on S1L2 walk cache entry"
-    },
-    {
-        "PublicDescription": "DTLB Translation cache hit on S1L1 walk cache entry",
-        "EventCode": "0xD802",
-        "EventName": "MMU_D_TRANS_CACHE_HIT_S1L1_WALK",
-        "BriefDescription": "DTLB Translation cache hit on S1L1 walk cache entry"
-    },
-    {
-        "PublicDescription": "DTLB Translation cache hit on S1L0 walk cache entry",
-        "EventCode": "0xD803",
-        "EventName": "MMU_D_TRANS_CACHE_HIT_S1L0_WALK",
-        "BriefDescription": "DTLB Translation cache hit on S1L0 walk cache entry"
-    },
-    {
-        "PublicDescription": "DTLB Translation cache hit on S2L2 walk cache entry",
-        "EventCode": "0xD804",
-        "EventName": "MMU_D_TRANS_CACHE_HIT_S2L2_WALK",
-        "BriefDescription": "DTLB Translation cache hit on S2L2 walk cache entry"
-    },
-    {
-        "PublicDescription": "DTLB Translation cache hit on S2L1 walk cache entry",
-        "EventCode": "0xD805",
-        "EventName": "MMU_D_TRANS_CACHE_HIT_S2L1_WALK",
-        "BriefDescription": "DTLB Translation cache hit on S2L1 walk cache entry"
-    },
-    {
-        "PublicDescription": "DTLB Translation cache hit on S2L0 walk cache entry",
-        "EventCode": "0xD806",
-        "EventName": "MMU_D_TRANS_CACHE_HIT_S2L0_WALK",
-        "BriefDescription": "DTLB Translation cache hit on S2L0 walk cache entry"
-    },
-    {
-        "PublicDescription": "D-side S1 Page walk cache lookup",
-        "EventCode": "0xD807",
-        "EventName": "MMU_D_S1_WALK_CACHE_LOOKUP",
-        "BriefDescription": "D-side S1 Page walk cache lookup"
-    },
-    {
-        "PublicDescription": "D-side S1 Page walk cache refill",
-        "EventCode": "0xD808",
-        "EventName": "MMU_D_S1_WALK_CACHE_REFILL",
-        "BriefDescription": "D-side S1 Page walk cache refill"
-    },
-    {
-        "PublicDescription": "D-side S2 Page walk cache lookup",
-        "EventCode": "0xD809",
-        "EventName": "MMU_D_S2_WALK_CACHE_LOOKUP",
-        "BriefDescription": "D-side S2 Page walk cache lookup"
-    },
-    {
-        "PublicDescription": "D-side S2 Page walk cache refill",
-        "EventCode": "0xD80A",
-        "EventName": "MMU_D_S2_WALK_CACHE_REFILL",
-        "BriefDescription": "D-side S2 Page walk cache refill"
-    },
     {
         "PublicDescription": "D-side Stage1 tablewalk fault",
         "EventCode": "0xD80B",
@@ -617,66 +557,6 @@
         "EventName": "MMU_I_OTB_ALLOC",
         "BriefDescription": "L2I OTB allocate"
     },
-    {
-        "PublicDescription": "ITLB Translation cache hit on S1L2 walk cache entry",
-        "EventCode": "0xD901",
-        "EventName": "MMU_I_TRANS_CACHE_HIT_S1L2_WALK",
-        "BriefDescription": "ITLB Translation cache hit on S1L2 walk cache entry"
-    },
-    {
-        "PublicDescription": "ITLB Translation cache hit on S1L1 walk cache entry",
-        "EventCode": "0xD902",
-        "EventName": "MMU_I_TRANS_CACHE_HIT_S1L1_WALK",
-        "BriefDescription": "ITLB Translation cache hit on S1L1 walk cache entry"
-    },
-    {
-        "PublicDescription": "ITLB Translation cache hit on S1L0 walk cache entry",
-        "EventCode": "0xD903",
-        "EventName": "MMU_I_TRANS_CACHE_HIT_S1L0_WALK",
-        "BriefDescription": "ITLB Translation cache hit on S1L0 walk cache entry"
-    },
-    {
-        "PublicDescription": "ITLB Translation cache hit on S2L2 walk cache entry",
-        "EventCode": "0xD904",
-        "EventName": "MMU_I_TRANS_CACHE_HIT_S2L2_WALK",
-        "BriefDescription": "ITLB Translation cache hit on S2L2 walk cache entry"
-    },
-    {
-        "PublicDescription": "ITLB Translation cache hit on S2L1 walk cache entry",
-        "EventCode": "0xD905",
-        "EventName": "MMU_I_TRANS_CACHE_HIT_S2L1_WALK",
-        "BriefDescription": "ITLB Translation cache hit on S2L1 walk cache entry"
-    },
-    {
-        "PublicDescription": "ITLB Translation cache hit on S2L0 walk cache entry",
-        "EventCode": "0xD906",
-        "EventName": "MMU_I_TRANS_CACHE_HIT_S2L0_WALK",
-        "BriefDescription": "ITLB Translation cache hit on S2L0 walk cache entry"
-    },
-    {
-        "PublicDescription": "I-side S1 Page walk cache lookup",
-        "EventCode": "0xD907",
-        "EventName": "MMU_I_S1_WALK_CACHE_LOOKUP",
-        "BriefDescription": "I-side S1 Page walk cache lookup"
-    },
-    {
-        "PublicDescription": "I-side S1 Page walk cache refill",
-        "EventCode": "0xD908",
-        "EventName": "MMU_I_S1_WALK_CACHE_REFILL",
-        "BriefDescription": "I-side S1 Page walk cache refill"
-    },
-    {
-        "PublicDescription": "I-side S2 Page walk cache lookup",
-        "EventCode": "0xD909",
-        "EventName": "MMU_I_S2_WALK_CACHE_LOOKUP",
-        "BriefDescription": "I-side S2 Page walk cache lookup"
-    },
-    {
-        "PublicDescription": "I-side S2 Page walk cache refill",
-        "EventCode": "0xD90A",
-        "EventName": "MMU_I_S2_WALK_CACHE_REFILL",
-        "BriefDescription": "I-side S2 Page walk cache refill"
-    },
     {
         "PublicDescription": "I-side Stage1 tablewalk fault",
         "EventCode": "0xD90B",


@@ -0,0 +1,362 @@
[
{
"MetricExpr": "BR_MIS_PRED / BR_PRED",
"BriefDescription": "Branch predictor misprediction rate. May not count branches that are never resolved because they are in the misprediction shadow of an earlier branch",
"MetricGroup": "Branch Prediction",
"MetricName": "Misprediction"
},
{
"MetricExpr": "BR_MIS_PRED_RETIRED / BR_RETIRED",
"BriefDescription": "Branch predictor misprediction rate",
"MetricGroup": "Branch Prediction",
"MetricName": "Misprediction (retired)"
},
{
"MetricExpr": "BUS_ACCESS / ( BUS_CYCLES * 1)",
"BriefDescription": "Core-to-uncore bus utilization",
"MetricGroup": "Bus",
"MetricName": "Bus utilization"
},
{
"MetricExpr": "L1D_CACHE_REFILL / L1D_CACHE",
"BriefDescription": "L1D cache miss rate",
"MetricGroup": "Cache",
"MetricName": "L1D cache miss"
},
{
"MetricExpr": "L1D_CACHE_LMISS_RD / L1D_CACHE_RD",
"BriefDescription": "L1D cache read miss rate",
"MetricGroup": "Cache",
"MetricName": "L1D cache read miss"
},
{
"MetricExpr": "L1I_CACHE_REFILL / L1I_CACHE",
"BriefDescription": "L1I cache miss rate",
"MetricGroup": "Cache",
"MetricName": "L1I cache miss"
},
{
"MetricExpr": "L2D_CACHE_REFILL / L2D_CACHE",
"BriefDescription": "L2 cache miss rate",
"MetricGroup": "Cache",
"MetricName": "L2 cache miss"
},
{
"MetricExpr": "L1I_CACHE_LMISS / L1I_CACHE",
"BriefDescription": "L1I cache read miss rate",
"MetricGroup": "Cache",
"MetricName": "L1I cache read miss"
},
{
"MetricExpr": "L2D_CACHE_LMISS_RD / L2D_CACHE_RD",
"BriefDescription": "L2 cache read miss rate",
"MetricGroup": "Cache",
"MetricName": "L2 cache read miss"
},
{
"MetricExpr": "(L1D_CACHE_LMISS_RD * 1000) / INST_RETIRED",
"BriefDescription": "Misses per thousand instructions (data)",
"MetricGroup": "Cache",
"MetricName": "MPKI data"
},
{
"MetricExpr": "(L1I_CACHE_LMISS * 1000) / INST_RETIRED",
"BriefDescription": "Misses per thousand instructions (instruction)",
"MetricGroup": "Cache",
"MetricName": "MPKI instruction"
},
{
"MetricExpr": "ASE_SPEC / OP_SPEC",
"BriefDescription": "Proportion of advanced SIMD data processing operations (excluding DP_SPEC/LD_SPEC) operations",
"MetricGroup": "Instruction",
"MetricName": "ASE mix"
},
{
"MetricExpr": "CRYPTO_SPEC / OP_SPEC",
"BriefDescription": "Proportion of crypto data processing operations",
"MetricGroup": "Instruction",
"MetricName": "Crypto mix"
},
{
"MetricExpr": "VFP_SPEC / (duration_time *1000000000)",
"BriefDescription": "Giga-floating point operations per second",
"MetricGroup": "Instruction",
"MetricName": "GFLOPS_ISSUED"
},
{
"MetricExpr": "DP_SPEC / OP_SPEC",
"BriefDescription": "Proportion of integer data processing operations",
"MetricGroup": "Instruction",
"MetricName": "Integer mix"
},
{
"MetricExpr": "INST_RETIRED / CPU_CYCLES",
"BriefDescription": "Instructions per cycle",
"MetricGroup": "Instruction",
"MetricName": "IPC"
},
{
"MetricExpr": "LD_SPEC / OP_SPEC",
"BriefDescription": "Proportion of load operations",
"MetricGroup": "Instruction",
"MetricName": "Load mix"
},
{
"MetricExpr": "LDST_SPEC/ OP_SPEC",
"BriefDescription": "Proportion of load & store operations",
"MetricGroup": "Instruction",
"MetricName": "Load-store mix"
},
{
"MetricExpr": "INST_RETIRED / (duration_time * 1000000)",
"BriefDescription": "Millions of instructions per second",
"MetricGroup": "Instruction",
"MetricName": "MIPS_RETIRED"
},
{
"MetricExpr": "INST_SPEC / (duration_time * 1000000)",
"BriefDescription": "Millions of instructions per second",
"MetricGroup": "Instruction",
"MetricName": "MIPS_UTILIZATION"
},
{
"MetricExpr": "PC_WRITE_SPEC / OP_SPEC",
"BriefDescription": "Proportion of software change of PC operations",
"MetricGroup": "Instruction",
"MetricName": "PC write mix"
},
{
"MetricExpr": "ST_SPEC / OP_SPEC",
"BriefDescription": "Proportion of store operations",
"MetricGroup": "Instruction",
"MetricName": "Store mix"
},
{
"MetricExpr": "VFP_SPEC / OP_SPEC",
"BriefDescription": "Proportion of FP operations",
"MetricGroup": "Instruction",
"MetricName": "VFP mix"
},
{
"MetricExpr": "1 - (OP_RETIRED/ (CPU_CYCLES * 4))",
"BriefDescription": "Proportion of slots lost",
"MetricGroup": "Speculation / TDA",
"MetricName": "CPU lost"
},
{
"MetricExpr": "OP_RETIRED/ (CPU_CYCLES * 4)",
"BriefDescription": "Proportion of slots retiring",
"MetricGroup": "Speculation / TDA",
"MetricName": "CPU utilization"
},
{
"MetricExpr": "OP_RETIRED - OP_SPEC",
"BriefDescription": "Operations lost due to misspeculation",
"MetricGroup": "Speculation / TDA",
"MetricName": "Operations lost"
},
{
"MetricExpr": "1 - (OP_RETIRED / OP_SPEC)",
"BriefDescription": "Proportion of operations lost",
"MetricGroup": "Speculation / TDA",
"MetricName": "Operations lost (ratio)"
},
{
"MetricExpr": "OP_RETIRED / OP_SPEC",
"BriefDescription": "Proportion of operations retired",
"MetricGroup": "Speculation / TDA",
"MetricName": "Operations retired"
},
{
"MetricExpr": "STALL_BACKEND_CACHE / CPU_CYCLES",
"BriefDescription": "Proportion of cycles stalled and no operations issued to backend and cache miss",
"MetricGroup": "Stall",
"MetricName": "Stall backend cache cycles"
},
{
"MetricExpr": "STALL_BACKEND_RESOURCE / CPU_CYCLES",
"BriefDescription": "Proportion of cycles stalled and no operations issued to backend and resource full",
"MetricGroup": "Stall",
"MetricName": "Stall backend resource cycles"
},
{
"MetricExpr": "STALL_BACKEND_TLB / CPU_CYCLES",
"BriefDescription": "Proportion of cycles stalled and no operations issued to backend and TLB miss",
"MetricGroup": "Stall",
"MetricName": "Stall backend tlb cycles"
},
{
"MetricExpr": "STALL_FRONTEND_CACHE / CPU_CYCLES",
"BriefDescription": "Proportion of cycles stalled and no ops delivered from frontend and cache miss",
"MetricGroup": "Stall",
"MetricName": "Stall frontend cache cycles"
},
{
"MetricExpr": "STALL_FRONTEND_TLB / CPU_CYCLES",
"BriefDescription": "Proportion of cycles stalled and no ops delivered from frontend and TLB miss",
"MetricGroup": "Stall",
"MetricName": "Stall frontend tlb cycles"
},
{
"MetricExpr": "DTLB_WALK / L1D_TLB",
"BriefDescription": "D-side walk per d-side translation request",
"MetricGroup": "TLB",
"MetricName": "DTLB walks"
},
{
"MetricExpr": "ITLB_WALK / L1I_TLB",
"BriefDescription": "I-side walk per i-side translation request",
"MetricGroup": "TLB",
"MetricName": "ITLB walks"
},
{
"MetricExpr": "STALL_SLOT_BACKEND / (CPU_CYCLES * 4)",
"BriefDescription": "Fraction of slots backend bound",
"MetricGroup": "TopDownL1",
"MetricName": "backend"
},
{
"MetricExpr": "1 - (retiring + lost + backend)",
"BriefDescription": "Fraction of slots frontend bound",
"MetricGroup": "TopDownL1",
"MetricName": "frontend"
},
{
"MetricExpr": "((OP_SPEC - OP_RETIRED) / (CPU_CYCLES * 4))",
"BriefDescription": "Fraction of slots lost due to misspeculation",
"MetricGroup": "TopDownL1",
"MetricName": "lost"
},
{
"MetricExpr": "(OP_RETIRED / (CPU_CYCLES * 4))",
"BriefDescription": "Fraction of slots retiring, useful work",
"MetricGroup": "TopDownL1",
"MetricName": "retiring"
},
{
"MetricExpr": "backend - backend_memory",
"BriefDescription": "Fraction of slots the CPU was stalled due to backend non-memory subsystem issues",
"MetricGroup": "TopDownL2",
"MetricName": "backend_core"
},
{
"MetricExpr": "(STALL_BACKEND_TLB + STALL_BACKEND_CACHE + STALL_BACKEND_MEM) / CPU_CYCLES ",
"BriefDescription": "Fraction of slots the CPU was stalled due to backend memory subsystem issues (cache/tlb miss)",
"MetricGroup": "TopDownL2",
"MetricName": "backend_memory"
},
{
"MetricExpr": " (BR_MIS_PRED_RETIRED / GPC_FLUSH) * lost",
"BriefDescription": "Fraction of slots lost due to branch misprediciton",
"MetricGroup": "TopDownL2",
"MetricName": "branch_mispredict"
},
{
"MetricExpr": "frontend - frontend_latency",
"BriefDescription": "Fraction of slots the CPU did not dispatch at full bandwidth - able to dispatch partial slots only (1, 2, or 3 uops)",
"MetricGroup": "TopDownL2",
"MetricName": "frontend_bandwidth"
},
{
"MetricExpr": "(STALL_FRONTEND - ((STALL_SLOT_FRONTEND - (frontend * CPU_CYCLES * 4)) / 4)) / CPU_CYCLES",
"BriefDescription": "Fraction of slots the CPU was stalled due to frontend latency issues (cache/tlb miss); nothing to dispatch",
"MetricGroup": "TopDownL2",
"MetricName": "frontend_latency"
},
{
"MetricExpr": "lost - branch_mispredict",
"BriefDescription": "Fraction of slots lost due to other/non-branch misprediction misspeculation",
"MetricGroup": "TopDownL2",
"MetricName": "other_clears"
},
{
"MetricExpr": "(IXU_NUM_UOPS_ISSUED + FSU_ISSUED) / (CPU_CYCLES * 6)",
"BriefDescription": "Fraction of execute slots utilized",
"MetricGroup": "TopDownL2",
"MetricName": "pipe_utilization"
},
{
"MetricExpr": "STALL_BACKEND_MEM / CPU_CYCLES",
"BriefDescription": "Fraction of cycles the CPU was stalled due to data L2 cache miss",
"MetricGroup": "TopDownL3",
"MetricName": "d_cache_l2_miss"
},
{
"MetricExpr": "STALL_BACKEND_CACHE / CPU_CYCLES",
"BriefDescription": "Fraction of cycles the CPU was stalled due to data cache miss",
"MetricGroup": "TopDownL3",
"MetricName": "d_cache_miss"
},
{
"MetricExpr": "STALL_BACKEND_TLB / CPU_CYCLES",
"BriefDescription": "Fraction of cycles the CPU was stalled due to data TLB miss",
"MetricGroup": "TopDownL3",
"MetricName": "d_tlb_miss"
},
{
"MetricExpr": "FSU_ISSUED / (CPU_CYCLES * 2)",
"BriefDescription": "Fraction of FSU execute slots utilized",
"MetricGroup": "TopDownL3",
"MetricName": "fsu_pipe_utilization"
},
{
"MetricExpr": "STALL_FRONTEND_CACHE / CPU_CYCLES",
"BriefDescription": "Fraction of cycles the CPU was stalled due to instruction cache miss",
"MetricGroup": "TopDownL3",
"MetricName": "i_cache_miss"
},
{
"MetricExpr": " STALL_FRONTEND_TLB / CPU_CYCLES ",
"BriefDescription": "Fraction of cycles the CPU was stalled due to instruction TLB miss",
"MetricGroup": "TopDownL3",
"MetricName": "i_tlb_miss"
},
{
"MetricExpr": "IXU_NUM_UOPS_ISSUED / (CPU_CYCLES / 4)",
"BriefDescription": "Fraction of IXU execute slots utilized",
"MetricGroup": "TopDownL3",
"MetricName": "ixu_pipe_utilization"
},
{
"MetricExpr": "IDR_STALL_FLUSH / CPU_CYCLES",
"BriefDescription": "Fraction of cycles the CPU was stalled due to flush recovery",
"MetricGroup": "TopDownL3",
"MetricName": "recovery"
},
{
"MetricExpr": "STALL_BACKEND_RESOURCE / CPU_CYCLES",
"BriefDescription": "Fraction of cycles the CPU was stalled due to core resource shortage",
"MetricGroup": "TopDownL3",
"MetricName": "resource"
},
{
"MetricExpr": "IDR_STALL_FSU_SCHED / CPU_CYCLES ",
"BriefDescription": "Fraction of cycles the CPU was stalled and FSU was full",
"MetricGroup": "TopDownL4",
"MetricName": "stall_fsu_sched"
},
{
"MetricExpr": "IDR_STALL_IXU_SCHED / CPU_CYCLES ",
"BriefDescription": "Fraction of cycles the CPU was stalled and IXU was full",
"MetricGroup": "TopDownL4",
"MetricName": "stall_ixu_sched"
},
{
"MetricExpr": "IDR_STALL_LOB_ID / CPU_CYCLES ",
"BriefDescription": "Fraction of cycles the CPU was stalled and LOB was full",
"MetricGroup": "TopDownL4",
"MetricName": "stall_lob_id"
},
{
"MetricExpr": "IDR_STALL_ROB_ID / CPU_CYCLES",
"BriefDescription": "Fraction of cycles the CPU was stalled and ROB was full",
"MetricGroup": "TopDownL4",
"MetricName": "stall_rob_id"
},
{
"MetricExpr": "IDR_STALL_SOB_ID / CPU_CYCLES ",
"BriefDescription": "Fraction of cycles the CPU was stalled and SOB was full",
"MetricGroup": "TopDownL4",
"MetricName": "stall_sob_id"
}
]
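To make the TopDownL1 expressions above concrete, here is a small worked example
with hypothetical counter values: CPU_CYCLES = 1e9 (so 4e9 slots at the 4 slots per
cycle the expressions assume), OP_RETIRED = 2.4e9, OP_SPEC = 2.7e9 and
STALL_SLOT_BACKEND = 6e8.

#include <stdio.h>

/* Hypothetical counter values, only to illustrate the metric expressions */
int main(void)
{
        double cpu_cycles = 1e9, op_retired = 2.4e9, op_spec = 2.7e9;
        double stall_slot_backend = 6e8;
        double slots = cpu_cycles * 4;

        double retiring = op_retired / slots;               /* 0.600 */
        double lost     = (op_spec - op_retired) / slots;   /* 0.075 */
        double backend  = stall_slot_backend / slots;       /* 0.150 */
        double frontend = 1 - (retiring + lost + backend);  /* 0.175 */

        printf("retiring=%.3f lost=%.3f backend=%.3f frontend=%.3f\n",
               retiring, lost, backend, frontend);
        return 0;
}

The four fractions sum to 1 by construction, since frontend is defined as the
remainder of the other three.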


@@ -1,18 +1,24 @@
 [
     {
-        "ArchStdEvent": "STALL_FRONTEND"
+        "ArchStdEvent": "STALL_FRONTEND",
+        "Errata": "Errata AC03_CPU_29",
+        "BriefDescription": "Impacted by errata, use metrics instead -"
     },
     {
         "ArchStdEvent": "STALL_BACKEND"
     },
     {
-        "ArchStdEvent": "STALL"
+        "ArchStdEvent": "STALL",
+        "Errata": "Errata AC03_CPU_29",
+        "BriefDescription": "Impacted by errata, use metrics instead -"
     },
     {
         "ArchStdEvent": "STALL_SLOT_BACKEND"
     },
     {
-        "ArchStdEvent": "STALL_SLOT_FRONTEND"
+        "ArchStdEvent": "STALL_SLOT_FRONTEND",
+        "Errata": "Errata AC03_CPU_29",
+        "BriefDescription": "Impacted by errata, use metrics instead -"
     },
     {
         "ArchStdEvent": "STALL_SLOT"


@@ -1,8 +0,0 @@
[
{
"ArchStdEvent": "BR_MIS_PRED"
},
{
"ArchStdEvent": "BR_PRED"
}
]


@@ -1,20 +1,18 @@
 [
     {
-        "ArchStdEvent": "CPU_CYCLES"
+        "ArchStdEvent": "BUS_ACCESS",
+        "PublicDescription": "Counts memory transactions issued by the CPU to the external bus, including snoop requests and snoop responses. Each beat of data is counted individually."
     },
     {
-        "ArchStdEvent": "BUS_ACCESS"
+        "ArchStdEvent": "BUS_CYCLES",
+        "PublicDescription": "Counts bus cycles in the CPU. Bus cycles represent a clock cycle in which a transaction could be sent or received on the interface from the CPU to the external bus. Since that interface is driven at the same clock speed as the CPU, this event is a duplicate of CPU_CYCLES."
     },
     {
-        "ArchStdEvent": "BUS_CYCLES"
+        "ArchStdEvent": "BUS_ACCESS_RD",
+        "PublicDescription": "Counts memory read transactions seen on the external bus. Each beat of data is counted individually."
     },
     {
-        "ArchStdEvent": "BUS_ACCESS_RD"
-    },
-    {
-        "ArchStdEvent": "BUS_ACCESS_WR"
-    },
-    {
-        "ArchStdEvent": "CNT_CYCLES"
+        "ArchStdEvent": "BUS_ACCESS_WR",
+        "PublicDescription": "Counts memory write transactions seen on the external bus. Each beat of data is counted individually."
     }
 ]


@@ -1,155 +0,0 @@
[
{
"ArchStdEvent": "L1I_CACHE_REFILL"
},
{
"ArchStdEvent": "L1I_TLB_REFILL"
},
{
"ArchStdEvent": "L1D_CACHE_REFILL"
},
{
"ArchStdEvent": "L1D_CACHE"
},
{
"ArchStdEvent": "L1D_TLB_REFILL"
},
{
"ArchStdEvent": "L1I_CACHE"
},
{
"ArchStdEvent": "L1D_CACHE_WB"
},
{
"ArchStdEvent": "L2D_CACHE"
},
{
"ArchStdEvent": "L2D_CACHE_REFILL"
},
{
"ArchStdEvent": "L2D_CACHE_WB"
},
{
"ArchStdEvent": "L2D_CACHE_ALLOCATE"
},
{
"ArchStdEvent": "L1D_TLB"
},
{
"ArchStdEvent": "L1I_TLB"
},
{
"ArchStdEvent": "L3D_CACHE_ALLOCATE"
},
{
"ArchStdEvent": "L3D_CACHE_REFILL"
},
{
"ArchStdEvent": "L3D_CACHE"
},
{
"ArchStdEvent": "L2D_TLB_REFILL"
},
{
"ArchStdEvent": "L2D_TLB"
},
{
"ArchStdEvent": "DTLB_WALK"
},
{
"ArchStdEvent": "ITLB_WALK"
},
{
"ArchStdEvent": "LL_CACHE_RD"
},
{
"ArchStdEvent": "LL_CACHE_MISS_RD"
},
{
"ArchStdEvent": "L1D_CACHE_LMISS_RD"
},
{
"ArchStdEvent": "L1D_CACHE_RD"
},
{
"ArchStdEvent": "L1D_CACHE_WR"
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_RD"
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_WR"
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_INNER"
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_OUTER"
},
{
"ArchStdEvent": "L1D_CACHE_WB_VICTIM"
},
{
"ArchStdEvent": "L1D_CACHE_WB_CLEAN"
},
{
"ArchStdEvent": "L1D_CACHE_INVAL"
},
{
"ArchStdEvent": "L1D_TLB_REFILL_RD"
},
{
"ArchStdEvent": "L1D_TLB_REFILL_WR"
},
{
"ArchStdEvent": "L1D_TLB_RD"
},
{
"ArchStdEvent": "L1D_TLB_WR"
},
{
"ArchStdEvent": "L2D_CACHE_RD"
},
{
"ArchStdEvent": "L2D_CACHE_WR"
},
{
"ArchStdEvent": "L2D_CACHE_REFILL_RD"
},
{
"ArchStdEvent": "L2D_CACHE_REFILL_WR"
},
{
"ArchStdEvent": "L2D_CACHE_WB_VICTIM"
},
{
"ArchStdEvent": "L2D_CACHE_WB_CLEAN"
},
{
"ArchStdEvent": "L2D_CACHE_INVAL"
},
{
"ArchStdEvent": "L2D_TLB_REFILL_RD"
},
{
"ArchStdEvent": "L2D_TLB_REFILL_WR"
},
{
"ArchStdEvent": "L2D_TLB_RD"
},
{
"ArchStdEvent": "L2D_TLB_WR"
},
{
"ArchStdEvent": "L3D_CACHE_RD"
},
{
"ArchStdEvent": "L1I_CACHE_LMISS"
},
{
"ArchStdEvent": "L2D_CACHE_LMISS_RD"
},
{
"ArchStdEvent": "L3D_CACHE_LMISS_RD"
}
]


@@ -1,47 +1,62 @@
 [
     {
-        "ArchStdEvent": "EXC_TAKEN"
+        "ArchStdEvent": "EXC_TAKEN",
+        "PublicDescription": "Counts any taken architecturally visible exceptions such as IRQ, FIQ, SError, and other synchronous exceptions. Exceptions are counted whether or not they are taken locally."
     },
     {
-        "ArchStdEvent": "MEMORY_ERROR"
+        "ArchStdEvent": "EXC_RETURN",
+        "PublicDescription": "Counts any architecturally executed exception return instructions. E.g. AArch64: ERET"
     },
     {
-        "ArchStdEvent": "EXC_UNDEF"
+        "ArchStdEvent": "EXC_UNDEF",
+        "PublicDescription": "Counts the number of synchronous exceptions which are taken locally that are due to attempting to execute an instruction that is UNDEFINED. Attempting to execute instruction bit patterns that have not been allocated. Attempting to execute instructions when they are disabled. Attempting to execute instructions at an inappropriate Exception level. Attempting to execute an instruction when the value of PSTATE.IL is 1."
     },
     {
-        "ArchStdEvent": "EXC_SVC"
+        "ArchStdEvent": "EXC_SVC",
+        "PublicDescription": "Counts SVC exceptions taken locally."
     },
     {
-        "ArchStdEvent": "EXC_PABORT"
+        "ArchStdEvent": "EXC_PABORT",
+        "PublicDescription": "Counts synchronous exceptions that are taken locally and caused by Instruction Aborts."
     },
     {
-        "ArchStdEvent": "EXC_DABORT"
+        "ArchStdEvent": "EXC_DABORT",
+        "PublicDescription": "Counts exceptions that are taken locally and are caused by data aborts or SErrors. Conditions that could cause those exceptions are attempting to read or write memory where the MMU generates a fault, attempting to read or write memory with a misaligned address, interrupts from the nSEI inputs and internally generated SErrors."
     },
     {
-        "ArchStdEvent": "EXC_IRQ"
+        "ArchStdEvent": "EXC_IRQ",
+        "PublicDescription": "Counts IRQ exceptions including the virtual IRQs that are taken locally."
     },
     {
-        "ArchStdEvent": "EXC_FIQ"
+        "ArchStdEvent": "EXC_FIQ",
+        "PublicDescription": "Counts FIQ exceptions including the virtual FIQs that are taken locally."
     },
     {
-        "ArchStdEvent": "EXC_SMC"
+        "ArchStdEvent": "EXC_SMC",
+        "PublicDescription": "Counts SMC exceptions taken to EL3."
     },
     {
-        "ArchStdEvent": "EXC_HVC"
+        "ArchStdEvent": "EXC_HVC",
+        "PublicDescription": "Counts HVC exceptions taken to EL2."
     },
     {
-        "ArchStdEvent": "EXC_TRAP_PABORT"
+        "ArchStdEvent": "EXC_TRAP_PABORT",
+        "PublicDescription": "Counts exceptions which are traps not taken locally and are caused by Instruction Aborts. For example, attempting to execute an instruction with a misaligned PC."
     },
     {
-        "ArchStdEvent": "EXC_TRAP_DABORT"
+        "ArchStdEvent": "EXC_TRAP_DABORT",
+        "PublicDescription": "Counts exceptions which are traps not taken locally and are caused by Data Aborts or SError interrupts. Conditions that could cause those exceptions are:\n\n1. Attempting to read or write memory where the MMU generates a fault,\n2. Attempting to read or write memory with a misaligned address,\n3. Interrupts from the SEI input.\n4. Internally generated SErrors."
     },
     {
-        "ArchStdEvent": "EXC_TRAP_OTHER"
+        "ArchStdEvent": "EXC_TRAP_OTHER",
+        "PublicDescription": "Counts the number of synchronous trap exceptions which are not taken locally and are not SVC, SMC, HVC, data aborts, Instruction Aborts, or interrupts."
     },
     {
-        "ArchStdEvent": "EXC_TRAP_IRQ"
+        "ArchStdEvent": "EXC_TRAP_IRQ",
+        "PublicDescription": "Counts IRQ exceptions including the virtual IRQs that are not taken locally."
     },
     {
-        "ArchStdEvent": "EXC_TRAP_FIQ"
+        "ArchStdEvent": "EXC_TRAP_FIQ",
+        "PublicDescription": "Counts FIQs which are not taken locally but taken from EL0, EL1, or EL2 to EL3 (which would be the normal behavior for FIQs when not executing in EL3)."
     }
 ]


@@ -0,0 +1,22 @@
[
{
"ArchStdEvent": "FP_HP_SPEC",
"PublicDescription": "Counts speculatively executed half precision floating point operations."
},
{
"ArchStdEvent": "FP_SP_SPEC",
"PublicDescription": "Counts speculatively executed single precision floating point operations."
},
{
"ArchStdEvent": "FP_DP_SPEC",
"PublicDescription": "Counts speculatively executed double precision floating point operations."
},
{
"ArchStdEvent": "FP_SCALE_OPS_SPEC",
"PublicDescription": "Counts speculatively executed scalable single precision floating point operations."
},
{
"ArchStdEvent": "FP_FIXED_OPS_SPEC",
"PublicDescription": "Counts speculatively executed non-scalable single precision floating point operations."
}
]


@@ -0,0 +1,10 @@
[
{
"ArchStdEvent": "CPU_CYCLES",
"PublicDescription": "Counts CPU clock cycles (not timer cycles). The clock measured by this event is defined as the physical clock driving the CPU logic."
},
{
"ArchStdEvent": "CNT_CYCLES",
"PublicDescription": "Counts constant frequency cycles"
}
]


@@ -1,143 +0,0 @@
[
{
"ArchStdEvent": "SW_INCR"
},
{
"ArchStdEvent": "INST_RETIRED"
},
{
"ArchStdEvent": "EXC_RETURN"
},
{
"ArchStdEvent": "CID_WRITE_RETIRED"
},
{
"ArchStdEvent": "INST_SPEC"
},
{
"ArchStdEvent": "TTBR_WRITE_RETIRED"
},
{
"ArchStdEvent": "BR_RETIRED"
},
{
"ArchStdEvent": "BR_MIS_PRED_RETIRED"
},
{
"ArchStdEvent": "OP_RETIRED"
},
{
"ArchStdEvent": "OP_SPEC"
},
{
"ArchStdEvent": "LDREX_SPEC"
},
{
"ArchStdEvent": "STREX_PASS_SPEC"
},
{
"ArchStdEvent": "STREX_FAIL_SPEC"
},
{
"ArchStdEvent": "STREX_SPEC"
},
{
"ArchStdEvent": "LD_SPEC"
},
{
"ArchStdEvent": "ST_SPEC"
},
{
"ArchStdEvent": "DP_SPEC"
},
{
"ArchStdEvent": "ASE_SPEC"
},
{
"ArchStdEvent": "VFP_SPEC"
},
{
"ArchStdEvent": "PC_WRITE_SPEC"
},
{
"ArchStdEvent": "CRYPTO_SPEC"
},
{
"ArchStdEvent": "BR_IMMED_SPEC"
},
{
"ArchStdEvent": "BR_RETURN_SPEC"
},
{
"ArchStdEvent": "BR_INDIRECT_SPEC"
},
{
"ArchStdEvent": "ISB_SPEC"
},
{
"ArchStdEvent": "DSB_SPEC"
},
{
"ArchStdEvent": "DMB_SPEC"
},
{
"ArchStdEvent": "RC_LD_SPEC"
},
{
"ArchStdEvent": "RC_ST_SPEC"
},
{
"ArchStdEvent": "ASE_INST_SPEC"
},
{
"ArchStdEvent": "SVE_INST_SPEC"
},
{
"ArchStdEvent": "FP_HP_SPEC"
},
{
"ArchStdEvent": "FP_SP_SPEC"
},
{
"ArchStdEvent": "FP_DP_SPEC"
},
{
"ArchStdEvent": "SVE_PRED_SPEC"
},
{
"ArchStdEvent": "SVE_PRED_EMPTY_SPEC"
},
{
"ArchStdEvent": "SVE_PRED_FULL_SPEC"
},
{
"ArchStdEvent": "SVE_PRED_PARTIAL_SPEC"
},
{
"ArchStdEvent": "SVE_PRED_NOT_FULL_SPEC"
},
{
"ArchStdEvent": "SVE_LDFF_SPEC"
},
{
"ArchStdEvent": "SVE_LDFF_FAULT_SPEC"
},
{
"ArchStdEvent": "FP_SCALE_OPS_SPEC"
},
{
"ArchStdEvent": "FP_FIXED_OPS_SPEC"
},
{
"ArchStdEvent": "ASE_SVE_INT8_SPEC"
},
{
"ArchStdEvent": "ASE_SVE_INT16_SPEC"
},
{
"ArchStdEvent": "ASE_SVE_INT32_SPEC"
},
{
"ArchStdEvent": "ASE_SVE_INT64_SPEC"
}
]


@@ -0,0 +1,54 @@
[
{
"ArchStdEvent": "L1D_CACHE_REFILL",
"PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load or store operations that missed in the level 1 data cache. This event only counts one event per cache line. This event does not count cache line allocations from preload instructions or from hardware cache prefetching."
},
{
"ArchStdEvent": "L1D_CACHE",
"PublicDescription": "Counts level 1 data cache accesses from any load/store operations. Atomic operations that resolve in the CPUs caches (near atomic operations) count as both a write access and read access. Each access to a cache line is counted including the multiple accesses caused by single instructions such as LDM or STM. Each access to other level 1 data or unified memory structures, for example refill buffers, write buffers, and write-back buffers, are also counted."
},
{
"ArchStdEvent": "L1D_CACHE_WB",
"PublicDescription": "Counts write-backs of dirty data from the L1 data cache to the L2 cache. This occurs when either a dirty cache line is evicted from L1 data cache and allocated in the L2 cache or dirty data is written to the L2 and possibly to the next level of cache. This event counts both victim cache line evictions and cache write-backs from snoops or cache maintenance operations. The following cache operations are not counted:\n\n1. Invalidations which do not result in data being transferred out of the L1 (such as evictions of clean data),\n2. Full line writes which write to L2 without writing L1, such as write streaming mode."
},
{
"ArchStdEvent": "L1D_CACHE_LMISS_RD",
"PublicDescription": "Counts cache line refills into the level 1 data cache from any memory read operations, that incurred additional latency."
},
{
"ArchStdEvent": "L1D_CACHE_RD",
"PublicDescription": "Counts level 1 data cache accesses from any load operation. Atomic load operations that resolve in the CPUs caches count as both a write access and read access."
},
{
"ArchStdEvent": "L1D_CACHE_WR",
"PublicDescription": "Counts level 1 data cache accesses generated by store operations. This event also counts accesses caused by a DC ZVA (data cache zero, specified by virtual address) instruction. Near atomic operations that resolve in the CPUs caches count as a write access and read access."
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_RD",
"PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load instructions where the memory read operation misses in the level 1 data cache. This event only counts one event per cache line."
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_WR",
"PublicDescription": "Counts level 1 data cache refills caused by speculatively executed store instructions where the memory write operation misses in the level 1 data cache. This event only counts one event per cache line."
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_INNER",
"PublicDescription": "Counts level 1 data cache refills where the cache line data came from caches inside the immediate cluster of the core."
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_OUTER",
"PublicDescription": "Counts level 1 data cache refills for which the cache line data came from outside the immediate cluster of the core, like an SLC in the system interconnect or DRAM."
},
{
"ArchStdEvent": "L1D_CACHE_WB_VICTIM",
"PublicDescription": "Counts dirty cache line evictions from the level 1 data cache caused by a new cache line allocation. This event does not count evictions caused by cache maintenance operations."
},
{
"ArchStdEvent": "L1D_CACHE_WB_CLEAN",
"PublicDescription": "Counts write-backs from the level 1 data cache that are a result of a coherency operation made by another CPU. Event count includes cache maintenance operations."
},
{
"ArchStdEvent": "L1D_CACHE_INVAL",
"PublicDescription": "Counts each explicit invalidation of a cache line in the level 1 data cache caused by:\n\n- Cache Maintenance Operations (CMO) that operate by a virtual address.\n- Broadcast cache coherency operations from another CPU in the system.\n\nThis event does not count for the following conditions:\n\n1. A cache refill invalidates a cache line.\n2. A CMO which is executed on that CPU and invalidates a cache line specified by set/way.\n\nNote that CMOs that operate by set/way cannot be broadcast from one CPU to another."
}
]


@@ -0,0 +1,14 @@
[
{
"ArchStdEvent": "L1I_CACHE_REFILL",
"PublicDescription": "Counts cache line refills in the level 1 instruction cache caused by a missed instruction fetch. Instruction fetches may include accessing multiple instructions, but the single cache line allocation is counted once."
},
{
"ArchStdEvent": "L1I_CACHE",
"PublicDescription": "Counts instruction fetches which access the level 1 instruction cache. Instruction cache accesses caused by cache maintenance operations are not counted."
},
{
"ArchStdEvent": "L1I_CACHE_LMISS",
"PublicDescription": "Counts cache line refills into the level 1 instruction cache, that incurred additional latency."
}
]


@@ -0,0 +1,50 @@
[
{
"ArchStdEvent": "L2D_CACHE",
"PublicDescription": "Counts level 2 cache accesses. level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level caches or translation resolutions due to accesses. This event also counts write back of dirty data from level 1 data cache to the L2 cache."
},
{
"ArchStdEvent": "L2D_CACHE_REFILL",
"PublicDescription": "Counts cache line refills into the level 2 cache. level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 caches or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_WB",
"PublicDescription": "Counts write-backs of data from the L2 cache to outside the CPU. This includes snoops to the L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line invalidations which do not write data outside the CPU and snoops which return data from an L1 cache are not counted. Data would not be written outside the cache when invalidating a clean cache line."
},
{
"ArchStdEvent": "L2D_CACHE_ALLOCATE",
"PublicDescription": "TBD"
},
{
"ArchStdEvent": "L2D_CACHE_RD",
"PublicDescription": "Counts level 2 cache accesses due to memory read operations. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_WR",
"PublicDescription": "Counts level 2 cache accesses due to memory write operations. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_REFILL_RD",
"PublicDescription": "Counts refills for memory accesses due to memory read operation counted by L2D_CACHE_RD. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_REFILL_WR",
"PublicDescription": "Counts refills for memory accesses due to memory write operation counted by L2D_CACHE_WR. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_WB_VICTIM",
"PublicDescription": "Counts evictions from the level 2 cache because of a line being allocated into the L2 cache."
},
{
"ArchStdEvent": "L2D_CACHE_WB_CLEAN",
"PublicDescription": "Counts write-backs from the level 2 cache that are a result of either:\n\n1. Cache maintenance operations,\n\n2. Snoop responses or,\n\n3. Direct cache transfers to another CPU due to a forwarding snoop request."
},
{
"ArchStdEvent": "L2D_CACHE_INVAL",
"PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by cache maintenance operations that operate by a virtual address, or by external coherency operations. This event does not count if either:\n\n1. A cache refill invalidates a cache line or,\n2. A Cache Maintenance Operation (CMO), which invalidates a cache line specified by set/way, is executed on that CPU.\n\nCMOs that operate by set/way cannot be broadcast from one CPU to another."
},
{
"ArchStdEvent": "L2D_CACHE_LMISS_RD",
"PublicDescription": "Counts cache line refills into the level 2 unified cache from any memory read operations that incurred additional latency."
}
]


@@ -0,0 +1,22 @@
[
{
"ArchStdEvent": "L3D_CACHE_ALLOCATE",
"PublicDescription": "Counts level 3 cache line allocates that do not fetch data from outside the level 3 data or unified cache. For example, allocates due to streaming stores."
},
{
"ArchStdEvent": "L3D_CACHE_REFILL",
"PublicDescription": "Counts level 3 accesses that receive data from outside the L3 cache."
},
{
"ArchStdEvent": "L3D_CACHE",
"PublicDescription": "Counts level 3 cache accesses. level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L3D_CACHE_RD",
"PublicDescription": "TBD"
},
{
"ArchStdEvent": "L3D_CACHE_LMISS_RD",
"PublicDescription": "Counts any cache line refill into the level 3 cache from memory read operations that incurred additional latency."
}
]


@@ -0,0 +1,10 @@
[
{
"ArchStdEvent": "LL_CACHE_RD",
"PublicDescription": "Counts read transactions that were returned from outside the core cluster. This event counts when the system register CPUECTLR.EXTLLC bit is set. This event counts read transactions returned from outside the core if those transactions are either hit in the system level cache or missed in the SLC and are returned from any other external sources."
},
{
"ArchStdEvent": "LL_CACHE_MISS_RD",
"PublicDescription": "Counts read transactions that were returned from outside the core cluster but missed in the system level cache. This event counts when the system register CPUECTLR.EXTLLC bit is set. This event counts read transactions returned from outside the core if those transactions are missed in the System level Cache. The data source of the transaction is indicated by a field in the CHI transaction returning to the CPU. This event does not count reads caused by cache maintenance operations."
}
]


@@ -1,41 +1,46 @@
 [
     {
-        "ArchStdEvent": "MEM_ACCESS"
+        "ArchStdEvent": "MEM_ACCESS",
+        "PublicDescription": "Counts memory accesses issued by the CPU load store unit, where those accesses are issued due to load or store operations. This event counts memory accesses no matter whether the data is received from any level of cache hierarchy or external memory. If memory accesses are broken up into smaller transactions than what were specified in the load or store instructions, then the event counts those smaller memory transactions."
     },
     {
-        "ArchStdEvent": "REMOTE_ACCESS"
+        "ArchStdEvent": "MEMORY_ERROR",
+        "PublicDescription": "Counts any detected correctable or uncorrectable physical memory errors (ECC or parity) in protected CPUs RAMs. On the core, this event counts errors in the caches (including data and tag rams). Any detected memory error (from either a speculative and abandoned access, or an architecturally executed access) is counted. Note that errors are only detected when the actual protected memory is accessed by an operation."
     },
     {
-        "ArchStdEvent": "MEM_ACCESS_RD"
+        "ArchStdEvent": "REMOTE_ACCESS",
+        "PublicDescription": "Counts accesses to another chip, which is implemented as a different CMN mesh in the system. If the CHI bus response back to the core indicates that the data source is from another chip (mesh), then the counter is updated. If no data is returned, even if the system snoops another chip/mesh, then the counter is not updated."
     },
     {
-        "ArchStdEvent": "MEM_ACCESS_WR"
+        "ArchStdEvent": "MEM_ACCESS_RD",
+        "PublicDescription": "Counts memory accesses issued by the CPU due to load operations. The event counts any memory load access, no matter whether the data is received from any level of cache hierarchy or external memory. The event also counts atomic load operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions."
     },
     {
-        "ArchStdEvent": "UNALIGNED_LD_SPEC"
+        "ArchStdEvent": "MEM_ACCESS_WR",
+        "PublicDescription": "Counts memory accesses issued by the CPU due to store operations. The event counts any memory store access, no matter whether the data is located in any level of cache or external memory. The event also counts atomic load and store operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions."
     },
     {
-        "ArchStdEvent": "UNALIGNED_ST_SPEC"
+        "ArchStdEvent": "LDST_ALIGN_LAT",
+        "PublicDescription": "Counts the number of memory read and write accesses in a cycle that incurred additional latency, due to the alignment of the address and the size of data being accessed, which results in store crossing a single cache line."
     },
     {
-        "ArchStdEvent": "UNALIGNED_LDST_SPEC"
+        "ArchStdEvent": "LD_ALIGN_LAT",
+        "PublicDescription": "Counts the number of memory read accesses in a cycle that incurred additional latency, due to the alignment of the address and size of data being accessed, which results in load crossing a single cache line."
     },
     {
-        "ArchStdEvent": "LDST_ALIGN_LAT"
+        "ArchStdEvent": "ST_ALIGN_LAT",
+        "PublicDescription": "Counts the number of memory write accesses in a cycle that incurred additional latency, due to the alignment of the address and the size of data being accessed, which results in the store crossing a single cache line."
     },
     {
-        "ArchStdEvent": "LD_ALIGN_LAT"
+        "ArchStdEvent": "MEM_ACCESS_CHECKED",
+        "PublicDescription": "Counts the number of memory read and write accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)."
     },
     {
-        "ArchStdEvent": "ST_ALIGN_LAT"
+        "ArchStdEvent": "MEM_ACCESS_CHECKED_RD",
+        "PublicDescription": "Counts the number of memory read accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)."
     },
     {
-        "ArchStdEvent": "MEM_ACCESS_CHECKED"
-    },
-    {
-        "ArchStdEvent": "MEM_ACCESS_CHECKED_RD"
-    },
-    {
-        "ArchStdEvent": "MEM_ACCESS_CHECKED_WR"
+        "ArchStdEvent": "MEM_ACCESS_CHECKED_WR",
+        "PublicDescription": "Counts the number of memory write accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)."
     }
 ]

Some files were not shown because too many files have changed in this diff.