linux-stable/arch/x86/events
Paolo Bonzini 9710794640 KVM: x86/pmu: fix masking logic for MSR_CORE_PERF_GLOBAL_CTRL
When commit c59a1f106f ("KVM: x86/pmu: Add IA32_PEBS_ENABLE
MSR emulation for extended PEBS") switched the initialization of
cpuc->guest_switch_msrs to use compound literals, it screwed up
the boolean logic:

+	u64 pebs_mask = cpuc->pebs_enabled & x86_pmu.pebs_capable;
...
-	arr[0].guest = intel_ctrl & ~cpuc->intel_ctrl_host_mask;
-	arr[0].guest &= ~(cpuc->pebs_enabled & x86_pmu.pebs_capable);
+               .guest = intel_ctrl & (~cpuc->intel_ctrl_host_mask | ~pebs_mask),

Before the patch, the value of arr[0].guest would have been intel_ctrl &
~cpuc->intel_ctrl_host_mask & ~pebs_mask.  The intent is to always treat
PEBS events as host-only because, while the guest runs, there is no way
to tell the processor the virtual address at which to store PEBS records
intended for the host.
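
As an illustration only (a stand-alone user-space sketch, not the kernel
code; the bit values below are invented), the intended guest view of
MSR_CORE_PERF_GLOBAL_CTRL drops both the host-only bits and the
PEBS-enabled bits:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t intel_ctrl = 0x0f; /* counters 0-3 globally enabled */
		uint64_t host_mask  = 0x01; /* counter 0 is host-only        */
		uint64_t pebs_mask  = 0x02; /* counter 1 uses PEBS           */

		/* Intended value: clear host-only bits, then clear PEBS bits. */
		uint64_t guest = intel_ctrl & ~host_mask & ~pebs_mask;

		printf("guest = %#llx\n", (unsigned long long)guest); /* 0xc */
		return 0;
	}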

Unfortunately, the new expression can be expanded to

	(intel_ctrl & ~cpuc->intel_ctrl_host_mask) | (intel_ctrl & ~pebs_mask)

which makes no sense; it includes any bit that isn't *both* marked as
exclude_guest and using PEBS.  So, reinstate the old logic.  Another
way to write it could be "intel_ctrl & ~(cpuc->intel_ctrl_host_mask |
pebs_mask)", presumably the intention of the author of the faulty.
However, I personally find the repeated application of A AND NOT B to
be a bit more readable.

This shows up as guest failures when running concurrent long-running
perf workloads on the host, and was reported to happen with rcutorture.
All guests on a given host would die simultaneously with something like an
instruction fault or a segmentation violation.

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Analyzed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Cc: stable@vger.kernel.org
Fixes: c59a1f106f ("KVM: x86/pmu: Add IA32_PEBS_ENABLE MSR emulation for extended PEBS")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-01-04 16:31:27 +01:00
amd X86 core code updates: 2023-10-30 17:37:47 -10:00
intel KVM: x86/pmu: fix masking logic for MSR_CORE_PERF_GLOBAL_CTRL 2024-01-04 16:31:27 +01:00
zhaoxin x86/perf/zhaoxin: Add stepping check for ZXC 2023-02-11 11:18:12 +01:00
core.c perf/x86/intel: Clean up the hybrid CPU type handling code 2023-08-29 20:59:23 +02:00
Kconfig perf/x86/Kconfig: Fix indentation in the Kconfig file 2022-05-25 15:54:26 +02:00
Makefile perf/x86: Move branch classifier 2022-08-27 00:05:44 +02:00
msr.c x86/cpu: Fix Gracemont uarch 2023-08-09 21:51:06 +02:00
perf_event.h perf/x86/intel: Clean up the hybrid CPU type handling code 2023-08-29 20:59:23 +02:00
perf_event_flags.h x86/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH 2022-09-07 21:54:01 +02:00
probe.c perf/x86/rapl: Add msr mask support 2021-02-10 14:44:54 +01:00
probe.h perf/x86/rapl: Add msr mask support 2021-02-10 14:44:54 +01:00
rapl.c perf/x86/rapl: Annotate 'struct rapl_pmus' with __counted_by 2023-10-08 12:18:17 +02:00
utils.c perf/x86/lbr: Filter vsyscall addresses 2023-10-08 12:25:18 +02:00