platform-drivers-x86 for v6.8-1

Highlights:
  -  Intel PMC / PMT / TPMI / uncore-freq / vsec improvements and
     new platform support
  -  AMD PMC / PMF improvements and new platform support
  -  AMD ACPI based Wifi band RFI mitigation feature (WBRF)
  -  WMI bus driver cleanups and improvements (Armin Wolf)
  -  acer-wmi Predator PHN16-71 support
  -  New Silicom network appliance EC LEDs / GPIOs driver
 
 The following is an automated git shortlog grouped by driver:
 
 ACPI:
  -  scan: Add LNXVIDEO HID to ignore_serial_bus_ids[]
 
 Add Silicom Platform Driver:
  - Add Silicom Platform Driver
 
 Documentation/driver-api:
  -  Add document about WBRF mechanism
 
 ISST:
  -  Process read/write blocked feature status
 
 Merge tag 'platform-drivers-x86-amd-wbrf-v6.8-1' into review-hans:
  - Merge tag 'platform-drivers-x86-amd-wbrf-v6.8-1' into review-hans
 
 Merge tag 'platform-drivers-x86-v6.7-3' into pdx86/for-next:
  - Merge tag 'platform-drivers-x86-v6.7-3' into pdx86/for-next
 
 Merge tag 'platform-drivers-x86-v6.7-6' into pdx86/for-next:
  - Merge tag 'platform-drivers-x86-v6.7-6' into pdx86/for-next
 
 Remove "X86 PLATFORM DRIVERS - ARCH" from MAINTAINERS:
  - Remove "X86 PLATFORM DRIVERS - ARCH" from MAINTAINERS
 
 acer-wmi:
  -  add fan speed monitoring for Predator PHN16-71
  -  Depend on ACPI_VIDEO instead of selecting it
  -  Add platform profile and mode key support for Predator PHN16-71
 
 asus-laptop:
  -  remove redundant braces in if statements
 
 asus-wmi:
  -  Convert to platform remove callback returning void
 
 clk:
  -  x86: lpss-atom: Drop unneeded 'extern' in the header
 
 dell-smbios-wmi:
  -  Stop using WMI chardev
  -  Use devm_get_free_pages()
 
 hp-bioscfg:
  -  Removed needless asm-generic
 
 hp-wmi:
  -  Convert to platform remove callback returning void
 
 intel-uncore-freq:
  -  Add additional client processors
 
 intel-wmi-sbl-fw-update:
  -  Use bus-based WMI interface
 
 intel/pmc:
  -  Call pmc_get_low_power_modes from platform init
 
 ips:
  -  Remove unused debug code
 
 platform/mellanox:
  -  mlxbf-tmfifo: Remove unnecessary bool conversion
 
 platform/x86/amd:
  -  Add support for AMD ACPI based Wifi band RFI mitigation feature
 
 platform/x86/amd/pmc:
  -  Modify SMU message port for latest AMD platform
  -  Add 1Ah family series to STB support list
  -  Add idlemask support for 1Ah family
  -  call amd_pmc_get_ip_info() during driver probe
  -  Add VPE information for AMDI000A platform
  -  Send OS_HINT command for AMDI000A platform
 
 platform/x86/amd/pmf:
  -  Return a status code only as a constant in two functions
  -  Return directly after a failed apmf_if_call() in apmf_sbios_heartbeat_notify()
  -  dump policy binary data
  -  Add capability to sideload of policy binary
  -  Add facility to dump TA inputs
  -  Make source_as_str() as non-static
  -  Add support to update system state
  -  Add support update p3t limit
  -  Add support to get inputs from other subsystems
  -  change amd_pmf_init_features() call sequence
  -  Add support for PMF Policy Binary
  -  Change return type of amd_pmf_set_dram_addr()
  -  Add support for PMF-TA interaction
  -  Add PMF TEE interface
 
 platform/x86/dell:
  -  alienware-wmi: Use kasprintf()
 
 platform/x86/intel-uncore-freq:
  -  Process read/write blocked feature status
 
 platform/x86/intel/pmc:
  -  Add missing extern
  -  Add Lunar Lake M support to intel_pmc_core driver
  -  Add Arrow Lake S support to intel_pmc_core driver
  -  Add ssram_init flag in PMC discovery in Meteor Lake
  -  Move common code to core.c
  -  Add PSON residency counter for Alder Lake
  -  Add regmap for Tiger Lake H PCH
  -  Add PSON residency counter
  -  Fix in mtl_punit_pmt_init()
  -  Fix in pmc_core_ssram_get_pmc()
  -  Show Die C6 counter on Meteor Lake
  -  Add debug attribute for Die C6 counter
  -  Read low power mode requirements for MTL-M and MTL-P
  -  Retrieve LPM information using Intel PMT
  -  Display LPM requirements for multiple PMCs
  -  Find and register PMC telemetry entries
  -  Cleanup SSRAM discovery
  -  Allow pmc_core_ssram_init to fail
 
 platform/x86/intel/pmc/arl:
  -  Add GBE LTR ignore during suspend
 
 platform/x86/intel/pmc/lnl:
  -  Add GBE LTR ignore during suspend
 
 platform/x86/intel/pmc/mtl:
  -  Use return value from pmc_core_ssram_init()
 
 platform/x86/intel/pmt:
  -  telemetry: Export API to read telemetry
  -  Add header to struct intel_pmt_entry
 
 platform/x86/intel/tpmi:
  -  Move TPMI ID definition
  -  Modify external interface to get read/write state
  -  Don't create devices for disabled features
 
 platform/x86/intel/vsec:
  -  Add support for Lunar Lake M
  -  Add base address field
  -  Add intel_vsec_register
  -  Assign auxdev parent by argument
  -  Use cleanup.h
  -  remove platform_info from vsec device structure
  -  Move structures to header
  -  Remove unnecessary return
  -  Fix xa_alloc memory leak
 
 platform/x86/intel/wmi:
  -  thunderbolt: Use bus-based WMI interface
 
 silicom-platform:
  -  Fix spelling mistake "platfomr" -> "platform"
 
 wmi:
  -  linux/wmi.h: fix Excess kernel-doc description warning
  -  Simplify get_subobj_info()
  -  Decouple ACPI notify handler from wmi_block_list
  -  Create WMI bus device first
  -  Use devres for resource handling
  -  Remove ACPI handlers after WMI devices
  -  Remove unused variable in address space handler
  -  Remove chardev interface
  -  Remove debug_event module param
  -  Remove debug_dump_wdg module param
  -  Add to_wmi_device() helper macro
  -  Add wmidev_block_set()
 
 x86-android-tablets:
  -  Fix an IS_ERR() vs NULL check in probe
  -  Fix backlight ctrl for Lenovo Yoga Tab 3 Pro YT3-X90F
  -  Add audio codec info for Lenovo Yoga Tab 3 Pro YT3-X90F
  -  Add support for SPI device instantiation
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEEuvA7XScYQRpenhd+kuxHeUQDJ9wFAmWb29kUHGhkZWdvZWRl
 QHJlZGhhdC5jb20ACgkQkuxHeUQDJ9wliAf/bjJw8xGjzKYzipUhhDhM/tPdmuCs
 9zXCYXSeioSDCmhQNq6E4R+aBWJvcKAq5tNyf45x6tvWJVY2hk1HVnnJeu7qyVNk
 9fA7pGpqsUJg9w9nHW6YLU0AMSSl2a3k/E3lDwhDjYXwZiwQ2Y8atQ702HjiV5US
 cZ8/ZblIYqv/WKmhHpBf6dGlGzOUiwnXNIVQDIhXbde7yQPt3gGisftiu6bGyYvA
 FuBxm3o0UREX6yHAtII4TQfmcM45BrQa7/7FO3c0ZkpRmGCwiJJjRewUi8Rd2epJ
 mP9bNGkGdPriGMdlqcVDMyWvB5XJCaZTlbHHHSyEAaCRTEqeUMKWxp8OMg==
 =bmmu
 -----END PGP SIGNATURE-----

Merge tag 'platform-drivers-x86-v6.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver updates from Hans de Goede:

 - Intel PMC / PMT / TPMI / uncore-freq / vsec improvements and new
   platform support

 - AMD PMC / PMF improvements and new platform support

 - AMD ACPI based Wifi band RFI mitigation feature (WBRF)

 - WMI bus driver cleanups and improvements (Armin Wolf)

 - acer-wmi Predator PHN16-71 support

 - New Silicom network appliance EC LEDs / GPIOs driver

* tag 'platform-drivers-x86-v6.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86: (96 commits)
  platform/x86/amd/pmc: Modify SMU message port for latest AMD platform
  platform/x86/amd/pmc: Add 1Ah family series to STB support list
  platform/x86/amd/pmc: Add idlemask support for 1Ah family
  platform/x86/amd/pmc: call amd_pmc_get_ip_info() during driver probe
  platform/x86/amd/pmc: Add VPE information for AMDI000A platform
  platform/x86/amd/pmc: Send OS_HINT command for AMDI000A platform
  platform/x86/amd/pmf: Return a status code only as a constant in two functions
  platform/x86/amd/pmf: Return directly after a failed apmf_if_call() in apmf_sbios_heartbeat_notify()
  platform/x86: wmi: linux/wmi.h: fix Excess kernel-doc description warning
  platform/x86/intel/pmc: Add missing extern
  platform/x86/intel/pmc/lnl: Add GBE LTR ignore during suspend
  platform/x86/intel/pmc/arl: Add GBE LTR ignore during suspend
  platform/x86: intel-uncore-freq: Add additional client processors
  platform/x86: Remove "X86 PLATFORM DRIVERS - ARCH" from MAINTAINERS
  platform/x86: hp-bioscfg: Removed needless asm-generic
  platform/x86/intel/pmc: Add Lunar Lake M support to intel_pmc_core driver
  platform/x86/intel/pmc: Add Arrow Lake S support to intel_pmc_core driver
  platform/x86/intel/pmc: Add ssram_init flag in PMC discovery in Meteor Lake
  platform/x86/intel/pmc: Move common code to core.c
  platform/x86/intel/pmc: Add PSON residency counter for Alder Lake
  ...
Merged by Linus Torvalds, 2024-01-09 17:07:12 -08:00
commit 5fda5698c2
66 changed files with 5842 additions and 795 deletions


@@ -0,0 +1,29 @@
What: /sys/devices/platform/silicom-platform/uc_version
Date: November 2023
KernelVersion: 6.7
Contact: Henry Shi <henrys@silicom-usa.com>
Description:
	This file allows reading the microcontroller firmware
	version of the current platform.

What: /sys/devices/platform/silicom-platform/power_cycle
Date: November 2023
KernelVersion: 6.7
Contact: Henry Shi <henrys@silicom-usa.com>
Description:
	This file allows the user to power cycle the platform.
	The default value is 0; when set to 1, the platform
	powers down, waits 5 seconds, then powers the device
	back on. The value returns to 0 after the power cycle.

What: /sys/devices/platform/silicom-platform/efuse_status
Date: November 2023
KernelVersion: 6.7
Contact: Henry Shi <henrys@silicom-usa.com>
Description:
	This file is read only. It returns the current
	OTP status:
	0 - not programmed.
	1 - programmed.
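As a userspace illustration (not part of this commit), consuming one of the attributes above boils down to a plain sysfs file read. The helper below is a hypothetical sketch; only the attribute paths come from the ABI entries above:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: read a single sysfs attribute, e.g.
 * /sys/devices/platform/silicom-platform/uc_version, into buf and
 * strip the trailing newline. Returns 0 on success, -1 on error. */
static int read_sysfs_attr(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, (int)len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';
	return 0;
}
```

On real hardware, the string read from efuse_status would then map "0"/"1" to not programmed/programmed as described above.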


@@ -119,6 +119,7 @@ configure specific aspects of kernel behavior to your liking.
parport
perf-security
pm/index
pmf
pnp
rapidio
ras


@@ -0,0 +1,24 @@
.. SPDX-License-Identifier: GPL-2.0
Set udev rules for PMF Smart PC Builder
---------------------------------------
AMD PMF (Platform Management Framework) Smart PC Solution builder has to set system states
such as S0i3, screen lock and hibernate based on the output actions provided by the PMF
TA (Trusted Application).

In order for this to work, the PMF driver generates a uevent for userspace to react to. Below are
sample udev rules that can facilitate this experience when a machine has the PMF Smart PC solution
builder enabled.

Please add the following line(s) to
``/etc/udev/rules.d/99-local.rules``::

    DRIVERS=="amd-pmf", ACTION=="change", ENV{EVENT_ID}=="0", RUN+="/usr/bin/systemctl suspend"
    DRIVERS=="amd-pmf", ACTION=="change", ENV{EVENT_ID}=="1", RUN+="/usr/bin/systemctl hibernate"
    DRIVERS=="amd-pmf", ACTION=="change", ENV{EVENT_ID}=="2", RUN+="/bin/loginctl lock-sessions"

EVENT_ID values:

0 = Put the system into S0i3/S2Idle
1 = Put the system into hibernate
2 = Lock the screen


@@ -115,6 +115,7 @@ available subsections can be seen below.
hte/index
wmi
dpll
wbrf
.. only:: subproject and html


@@ -0,0 +1,78 @@
.. SPDX-License-Identifier: GPL-2.0-or-later

=================================
WBRF - Wifi Band RFI Mitigations
=================================

Due to electrical and mechanical constraints in certain platform designs,
relatively high-powered harmonics of the GPU memory clocks may interfere
with the frequency bands used by the local Wifi module.

To mitigate possible RFI, producers can advertise the frequencies they are
using and consumers can use this information to avoid using these
frequencies for sensitive features.

When a platform is known to have this issue with any contained devices,
the platform designer will advertise the availability of this feature via
ACPI devices with a device specific method (_DSM).

* Producers with this _DSM will be able to advertise the frequencies in use.
* Consumers with this _DSM will be able to register for notifications of
  frequencies in use.

Some general terms
==================

Producer: a component that can emit high-powered radio frequencies.

Consumer: a component that can adjust its in-use frequency in response to
the radio frequencies of other components, to mitigate possible RFI.

For the mechanism to work, producers should announce when they start or
stop using a particular frequency range, so that consumers can make the
internal adjustments necessary to avoid this resonance.

ACPI interface
==============

Although initially used for Wifi + dGPU use cases, the ACPI interface can
be scaled to any type of device that a platform designer discovers can
cause interference.

The GUID used for the _DSM is 7B7656CF-DC3D-4C1C-83E9-66E721DE3070.

3 functions are available in this _DSM:

* 0: discover the number of functions available
* 1: record RF bands in use
* 2: retrieve RF bands in use

Driver programming interface
============================

.. kernel-doc:: drivers/platform/x86/amd/wbrf.c

Sample Usage
=============

The expected flow for the producers:

1. During probe, call `acpi_amd_wbrf_supported_producer` to check if WBRF
   can be enabled for the device.
2. When starting to use a frequency band, call `acpi_amd_wbrf_add_remove`
   with the 'add' parameter to get other consumers properly notified.
3. When no longer using a frequency band, call `acpi_amd_wbrf_add_remove`
   with the 'remove' parameter to get other consumers notified.

The expected flow for the consumers:

1. During probe, call `acpi_amd_wbrf_supported_consumer` to check if WBRF
   can be enabled for the device.
2. Call `amd_wbrf_register_notifier` to register for notifications of
   frequency band changes (add or remove) from producers.
3. Call `amd_wbrf_retrieve_freq_band` once initially to retrieve the
   currently active frequency bands, since some producers may broadcast
   this information before the consumer is up.
4. On receiving a notification of a frequency band change, run
   `amd_wbrf_retrieve_freq_band` again to retrieve the latest active
   frequency bands.
5. During driver cleanup, call `amd_wbrf_unregister_notifier` to
   unregister the notifier.
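To make the add/remove/retrieve flow concrete, here is a small self-contained C model of the band bookkeeping only. It is a sketch, not the kernel implementation: the real code lives in drivers/platform/x86/amd/wbrf.c and works through ACPI and a notifier chain, and every name below (struct band, band_add, band_remove, freq_in_use, MAX_BANDS) is hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of one advertised frequency band, in Hz. */
struct band { unsigned long long start, end; };

#define MAX_BANDS 16 /* arbitrary capacity for this sketch */

static struct band active[MAX_BANDS];
static size_t nr_active;

/* Producer side: advertise a band in use ("add"). */
static bool band_add(struct band b)
{
	if (nr_active >= MAX_BANDS)
		return false;
	active[nr_active++] = b;
	return true;
}

/* Producer side: retract a previously advertised band ("remove"). */
static bool band_remove(struct band b)
{
	for (size_t i = 0; i < nr_active; i++) {
		if (active[i].start == b.start && active[i].end == b.end) {
			active[i] = active[--nr_active];
			return true;
		}
	}
	return false;
}

/* Consumer side: should this frequency be avoided right now? */
static bool freq_in_use(unsigned long long freq)
{
	for (size_t i = 0; i < nr_active; i++)
		if (freq >= active[i].start && freq <= active[i].end)
			return true;
	return false;
}
```

In the real driver, the consumer would call the retrieve function on each notification instead of sharing state directly with producers.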


@@ -23635,15 +23635,6 @@ F: drivers/platform/olpc/
F: drivers/platform/x86/
F: include/linux/platform_data/x86/
X86 PLATFORM DRIVERS - ARCH
R: Darren Hart <dvhart@infradead.org>
R: Andy Shevchenko <andy@infradead.org>
L: platform-driver-x86@vger.kernel.org
L: x86@kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/core
F: arch/x86/platform
X86 PLATFORM UV HPE SUPERDOME FLEX
M: Steve Wahl <steve.wahl@hpe.com>
R: Justin Ernst <justin.ernst@hpe.com>


@@ -1732,6 +1732,7 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
* Some ACPI devs contain SerialBus resources even though they are not
* attached to a serial bus at all.
*/
{ACPI_VIDEO_HID, },
{"MSHW0028", },
/*
* HIDs of device with an UartSerialBusV2 resource for which userspace


@@ -91,7 +91,7 @@ struct mlxbf_tmfifo_vring {
/* Check whether vring is in drop mode. */
#define IS_VRING_DROP(_r) ({ \
typeof(_r) (r) = (_r); \
(r->desc_head == &r->drop_desc ? true : false); })
r->desc_head == &r->drop_desc; })
/* A stub length to drop maximum length packet. */
#define VRING_DROP_DESC_MAX_LEN GENMASK(15, 0)


@@ -177,10 +177,12 @@ config ACER_WMI
depends on INPUT
depends on RFKILL || RFKILL = n
depends on ACPI_WMI
select ACPI_VIDEO
depends on ACPI_VIDEO || ACPI_VIDEO = n
depends on HWMON
select INPUT_SPARSEKMAP
select LEDS_CLASS
select NEW_LEDS
select ACPI_PLATFORM_PROFILE
help
This is a driver for newer Acer (and Wistron) laptops. It adds
wireless radio and bluetooth control, and on some laptops,
@@ -1087,6 +1089,21 @@ config INTEL_SCU_IPC_UTIL
source "drivers/platform/x86/siemens/Kconfig"
config SILICOM_PLATFORM
tristate "Silicom Edge Networking device support"
depends on HWMON
depends on GPIOLIB
depends on LEDS_CLASS_MULTICOLOR
help
This option enables support for the LEDs/GPIO/etc downstream of the
embedded controller on Silicom "Cordoba" hardware and derivatives.
This platform driver provides support for various functions via
the Linux LED framework, GPIO framework, Hardware Monitoring (HWMON)
and device attributes.
If you have a Silicom network appliance, say Y or M here.
config WINMATE_FM07_KEYS
tristate "Winmate FM07/FM07P front-panel keys driver"
depends on INPUT


@@ -136,6 +136,9 @@ obj-$(CONFIG_X86_INTEL_LPSS) += pmc_atom.o
# Siemens Simatic Industrial PCs
obj-$(CONFIG_SIEMENS_SIMATIC_IPC) += siemens/
# Silicom
obj-$(CONFIG_SILICOM_PLATFORM) += silicom-platform.o
# Winmate
obj-$(CONFIG_WINMATE_FM07_KEYS) += winmate-fm07-keys.o


@@ -20,6 +20,7 @@
#include <linux/backlight.h>
#include <linux/leds.h>
#include <linux/platform_device.h>
#include <linux/platform_profile.h>
#include <linux/acpi.h>
#include <linux/i8042.h>
#include <linux/rfkill.h>
@@ -29,6 +30,8 @@
#include <linux/input.h>
#include <linux/input/sparse-keymap.h>
#include <acpi/video.h>
#include <linux/hwmon.h>
#include <linux/bitfield.h>
MODULE_AUTHOR("Carlos Corbacho");
MODULE_DESCRIPTION("Acer Laptop WMI Extras Driver");
@@ -62,9 +65,14 @@ MODULE_LICENSE("GPL");
#define ACER_WMID_SET_GAMING_LED_METHODID 2
#define ACER_WMID_GET_GAMING_LED_METHODID 4
#define ACER_WMID_GET_GAMING_SYS_INFO_METHODID 5
#define ACER_WMID_SET_GAMING_FAN_BEHAVIOR 14
#define ACER_WMID_SET_GAMING_MISC_SETTING_METHODID 22
#define ACER_PREDATOR_V4_THERMAL_PROFILE_EC_OFFSET 0x54
#define ACER_PREDATOR_V4_FAN_SPEED_READ_BIT_MASK GENMASK(20, 8)
/*
* Acer ACPI method GUIDs
*/
@@ -90,6 +98,12 @@ enum acer_wmi_event_ids {
WMID_GAMING_TURBO_KEY_EVENT = 0x7,
};
enum acer_wmi_predator_v4_sys_info_command {
ACER_WMID_CMD_GET_PREDATOR_V4_BAT_STATUS = 0x02,
ACER_WMID_CMD_GET_PREDATOR_V4_CPU_FAN_SPEED = 0x0201,
ACER_WMID_CMD_GET_PREDATOR_V4_GPU_FAN_SPEED = 0x0601,
};
static const struct key_entry acer_wmi_keymap[] __initconst = {
{KE_KEY, 0x01, {KEY_WLAN} }, /* WiFi */
{KE_KEY, 0x03, {KEY_WLAN} }, /* WiFi */
@@ -229,9 +243,11 @@ struct hotkey_function_type_aa {
#define ACER_CAP_THREEG BIT(4)
#define ACER_CAP_SET_FUNCTION_MODE BIT(5)
#define ACER_CAP_KBD_DOCK BIT(6)
#define ACER_CAP_TURBO_OC BIT(7)
#define ACER_CAP_TURBO_LED BIT(8)
#define ACER_CAP_TURBO_FAN BIT(9)
#define ACER_CAP_TURBO_OC BIT(7)
#define ACER_CAP_TURBO_LED BIT(8)
#define ACER_CAP_TURBO_FAN BIT(9)
#define ACER_CAP_PLATFORM_PROFILE BIT(10)
#define ACER_CAP_FAN_SPEED_READ BIT(11)
/*
* Interface type flags
@@ -259,6 +275,7 @@ static bool ec_raw_mode;
static bool has_type_aa;
static u16 commun_func_bitmap;
static u8 commun_fn_key_number;
static bool cycle_gaming_thermal_profile = true;
module_param(mailled, int, 0444);
module_param(brightness, int, 0444);
@@ -266,12 +283,15 @@ module_param(threeg, int, 0444);
module_param(force_series, int, 0444);
module_param(force_caps, int, 0444);
module_param(ec_raw_mode, bool, 0444);
module_param(cycle_gaming_thermal_profile, bool, 0644);
MODULE_PARM_DESC(mailled, "Set initial state of Mail LED");
MODULE_PARM_DESC(brightness, "Set initial LCD backlight brightness");
MODULE_PARM_DESC(threeg, "Set initial state of 3G hardware");
MODULE_PARM_DESC(force_series, "Force a different laptop series");
MODULE_PARM_DESC(force_caps, "Force the capability bitmask to this value");
MODULE_PARM_DESC(ec_raw_mode, "Enable EC raw mode");
MODULE_PARM_DESC(cycle_gaming_thermal_profile,
"Set thermal mode key in cycle mode. Disabling it sets the mode key in turbo toggle mode");
struct acer_data {
int mailled;
@@ -321,6 +341,7 @@ struct quirk_entry {
u8 turbo;
u8 cpu_fans;
u8 gpu_fans;
u8 predator_v4;
};
static struct quirk_entry *quirks;
@@ -336,6 +357,10 @@ static void __init set_quirks(void)
if (quirks->turbo)
interface->capability |= ACER_CAP_TURBO_OC | ACER_CAP_TURBO_LED
| ACER_CAP_TURBO_FAN;
if (quirks->predator_v4)
interface->capability |= ACER_CAP_PLATFORM_PROFILE |
ACER_CAP_FAN_SPEED_READ;
}
static int __init dmi_matched(const struct dmi_system_id *dmi)
@ -370,6 +395,10 @@ static struct quirk_entry quirk_acer_predator_ph315_53 = {
.gpu_fans = 1,
};
static struct quirk_entry quirk_acer_predator_v4 = {
.predator_v4 = 1,
};
/* This AMW0 laptop has no bluetooth */
static struct quirk_entry quirk_medion_md_98300 = {
.wireless = 1,
@@ -546,6 +575,15 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
},
.driver_data = &quirk_acer_predator_ph315_53,
},
{
.callback = dmi_matched,
.ident = "Acer Predator PHN16-71",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
DMI_MATCH(DMI_PRODUCT_NAME, "Predator PHN16-71"),
},
.driver_data = &quirk_acer_predator_v4,
},
{
.callback = set_force_caps,
.ident = "Acer Aspire Switch 10E SW3-016",
@@ -659,6 +697,31 @@ static const struct dmi_system_id non_acer_quirks[] __initconst = {
{}
};
static struct platform_profile_handler platform_profile_handler;
static bool platform_profile_support;
/*
* The profile used before turbo mode. This variable is needed for
* returning from turbo mode when the mode key is in toggle mode.
*/
static int last_non_turbo_profile;
enum acer_predator_v4_thermal_profile_ec {
ACER_PREDATOR_V4_THERMAL_PROFILE_ECO = 0x04,
ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO = 0x03,
ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE = 0x02,
ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET = 0x01,
ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED = 0x00,
};
enum acer_predator_v4_thermal_profile_wmi {
ACER_PREDATOR_V4_THERMAL_PROFILE_ECO_WMI = 0x060B,
ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO_WMI = 0x050B,
ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE_WMI = 0x040B,
ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET_WMI = 0x0B,
ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED_WMI = 0x010B,
};
/* Find which quirks are needed for a particular vendor/ model pair */
static void __init find_quirks(void)
{
@@ -1339,7 +1402,7 @@ WMI_gaming_execute_u64(u32 method_id, u64 in, u64 *out)
struct acpi_buffer input = { (acpi_size) sizeof(u64), (void *)(&in) };
struct acpi_buffer result = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
u32 tmp = 0;
u64 tmp = 0;
acpi_status status;
status = wmi_evaluate_method(WMID_GUID4, 0, method_id, &input, &result);
@@ -1663,6 +1726,26 @@ static int acer_gsensor_event(void)
return 0;
}
static int acer_get_fan_speed(int fan)
{
if (quirks->predator_v4) {
acpi_status status;
u64 fanspeed;
status = WMI_gaming_execute_u64(
ACER_WMID_GET_GAMING_SYS_INFO_METHODID,
fan == 0 ? ACER_WMID_CMD_GET_PREDATOR_V4_CPU_FAN_SPEED :
ACER_WMID_CMD_GET_PREDATOR_V4_GPU_FAN_SPEED,
&fanspeed);
if (ACPI_FAILURE(status))
return -EIO;
return FIELD_GET(ACER_PREDATOR_V4_FAN_SPEED_READ_BIT_MASK, fanspeed);
}
return -EOPNOTSUPP;
}
/*
* Predator series turbo button
*/
@@ -1698,6 +1781,199 @@ static int acer_toggle_turbo(void)
return turbo_led_state;
}
static int
acer_predator_v4_platform_profile_get(struct platform_profile_handler *pprof,
enum platform_profile_option *profile)
{
u8 tp;
int err;
err = ec_read(ACER_PREDATOR_V4_THERMAL_PROFILE_EC_OFFSET, &tp);
if (err < 0)
return err;
switch (tp) {
case ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO:
*profile = PLATFORM_PROFILE_PERFORMANCE;
break;
case ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE:
*profile = PLATFORM_PROFILE_BALANCED_PERFORMANCE;
break;
case ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED:
*profile = PLATFORM_PROFILE_BALANCED;
break;
case ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET:
*profile = PLATFORM_PROFILE_QUIET;
break;
case ACER_PREDATOR_V4_THERMAL_PROFILE_ECO:
*profile = PLATFORM_PROFILE_LOW_POWER;
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
static int
acer_predator_v4_platform_profile_set(struct platform_profile_handler *pprof,
enum platform_profile_option profile)
{
int tp;
acpi_status status;
switch (profile) {
case PLATFORM_PROFILE_PERFORMANCE:
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO_WMI;
break;
case PLATFORM_PROFILE_BALANCED_PERFORMANCE:
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE_WMI;
break;
case PLATFORM_PROFILE_BALANCED:
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED_WMI;
break;
case PLATFORM_PROFILE_QUIET:
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET_WMI;
break;
case PLATFORM_PROFILE_LOW_POWER:
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO_WMI;
break;
default:
return -EOPNOTSUPP;
}
status = WMI_gaming_execute_u64(
ACER_WMID_SET_GAMING_MISC_SETTING_METHODID, tp, NULL);
if (ACPI_FAILURE(status))
return -EIO;
if (tp != ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO_WMI)
last_non_turbo_profile = tp;
return 0;
}
static int acer_platform_profile_setup(void)
{
if (quirks->predator_v4) {
int err;
platform_profile_handler.profile_get =
acer_predator_v4_platform_profile_get;
platform_profile_handler.profile_set =
acer_predator_v4_platform_profile_set;
set_bit(PLATFORM_PROFILE_PERFORMANCE,
platform_profile_handler.choices);
set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE,
platform_profile_handler.choices);
set_bit(PLATFORM_PROFILE_BALANCED,
platform_profile_handler.choices);
set_bit(PLATFORM_PROFILE_QUIET,
platform_profile_handler.choices);
set_bit(PLATFORM_PROFILE_LOW_POWER,
platform_profile_handler.choices);
err = platform_profile_register(&platform_profile_handler);
if (err)
return err;
platform_profile_support = true;
/* Set default non-turbo profile */
last_non_turbo_profile =
ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED_WMI;
}
return 0;
}
static int acer_thermal_profile_change(void)
{
/*
* This mode key can rotate each mode or toggle turbo mode.
* On battery, only ECO and BALANCED mode are available.
*/
if (quirks->predator_v4) {
u8 current_tp;
int tp, err;
u64 on_AC;
acpi_status status;
err = ec_read(ACER_PREDATOR_V4_THERMAL_PROFILE_EC_OFFSET,
&current_tp);
if (err < 0)
return err;
/* Check power source */
status = WMI_gaming_execute_u64(
ACER_WMID_GET_GAMING_SYS_INFO_METHODID,
ACER_WMID_CMD_GET_PREDATOR_V4_BAT_STATUS, &on_AC);
if (ACPI_FAILURE(status))
return -EIO;
switch (current_tp) {
case ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO:
if (!on_AC)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED_WMI;
else if (cycle_gaming_thermal_profile)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO_WMI;
else
tp = last_non_turbo_profile;
break;
case ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE:
if (!on_AC)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED_WMI;
else
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO_WMI;
break;
case ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED:
if (!on_AC)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO_WMI;
else if (cycle_gaming_thermal_profile)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE_WMI;
else
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO_WMI;
break;
case ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET:
if (!on_AC)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED_WMI;
else if (cycle_gaming_thermal_profile)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED_WMI;
else
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO_WMI;
break;
case ACER_PREDATOR_V4_THERMAL_PROFILE_ECO:
if (!on_AC)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED_WMI;
else if (cycle_gaming_thermal_profile)
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET_WMI;
else
tp = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO_WMI;
break;
default:
return -EOPNOTSUPP;
}
status = WMI_gaming_execute_u64(
ACER_WMID_SET_GAMING_MISC_SETTING_METHODID, tp, NULL);
if (ACPI_FAILURE(status))
return -EIO;
/* Store non-turbo profile for turbo mode toggle*/
if (tp != ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO_WMI)
last_non_turbo_profile = tp;
platform_profile_notify();
}
return 0;
}
/*
* Switch series keyboard dock status
*/
@@ -1997,6 +2273,8 @@ static void acer_wmi_notify(u32 value, void *context)
case WMID_GAMING_TURBO_KEY_EVENT:
if (return_value.key_num == 0x4)
acer_toggle_turbo();
if (return_value.key_num == 0x5 && has_cap(ACER_CAP_PLATFORM_PROFILE))
acer_thermal_profile_change();
break;
default:
pr_warn("Unknown function number - %d - %d\n",
@@ -2222,6 +2500,8 @@ static u32 get_wmid_devices(void)
return devices;
}
static int acer_wmi_hwmon_init(void);
/*
* Platform device
*/
@@ -2245,8 +2525,25 @@ static int acer_platform_probe(struct platform_device *device)
if (err)
goto error_rfkill;
return err;
if (has_cap(ACER_CAP_PLATFORM_PROFILE)) {
err = acer_platform_profile_setup();
if (err)
goto error_platform_profile;
}
if (has_cap(ACER_CAP_FAN_SPEED_READ)) {
err = acer_wmi_hwmon_init();
if (err)
goto error_hwmon;
}
return 0;
error_hwmon:
if (platform_profile_support)
platform_profile_remove();
error_platform_profile:
acer_rfkill_exit();
error_rfkill:
if (has_cap(ACER_CAP_BRIGHTNESS))
acer_backlight_exit();
@@ -2265,6 +2562,9 @@ static void acer_platform_remove(struct platform_device *device)
acer_backlight_exit();
acer_rfkill_exit();
if (platform_profile_support)
platform_profile_remove();
}
#ifdef CONFIG_PM_SLEEP
@@ -2351,6 +2651,73 @@ static void __init create_debugfs(void)
&interface->debug.wmid_devices);
}
static umode_t acer_wmi_hwmon_is_visible(const void *data,
enum hwmon_sensor_types type, u32 attr,
int channel)
{
switch (type) {
case hwmon_fan:
if (acer_get_fan_speed(channel) >= 0)
return 0444;
break;
default:
return 0;
}
return 0;
}
static int acer_wmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
u32 attr, int channel, long *val)
{
int ret;
switch (type) {
case hwmon_fan:
ret = acer_get_fan_speed(channel);
if (ret < 0)
return ret;
*val = ret;
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
static const struct hwmon_channel_info *const acer_wmi_hwmon_info[] = {
HWMON_CHANNEL_INFO(fan, HWMON_F_INPUT, HWMON_F_INPUT), NULL
};
static const struct hwmon_ops acer_wmi_hwmon_ops = {
.read = acer_wmi_hwmon_read,
.is_visible = acer_wmi_hwmon_is_visible,
};
static const struct hwmon_chip_info acer_wmi_hwmon_chip_info = {
.ops = &acer_wmi_hwmon_ops,
.info = acer_wmi_hwmon_info,
};
static int acer_wmi_hwmon_init(void)
{
struct device *dev = &acer_platform_device->dev;
struct device *hwmon;
hwmon = devm_hwmon_device_register_with_info(dev, "acer",
&acer_platform_driver,
&acer_wmi_hwmon_chip_info,
NULL);
if (IS_ERR(hwmon)) {
dev_err(dev, "Could not register acer hwmon device\n");
return PTR_ERR(hwmon);
}
return 0;
}
static int __init acer_wmi_init(void)
{
int err;

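The fan speed path added in acer_get_fan_speed() above masks the WMI result with ACER_PREDATOR_V4_FAN_SPEED_READ_BIT_MASK, i.e. GENMASK(20, 8), and extracts the field with FIELD_GET(). A standalone sketch of that bit manipulation, using userspace stand-ins for the kernel helpers (the sample value in the note below is made up):

```c
/* Userspace stand-ins for the kernel's GENMASK()/FIELD_GET() helpers. */
#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))
#define FIELD_GET_ULL(mask, val) \
	(((val) & (mask)) >> __builtin_ctzll(mask))

/* Bits 20:8 of the WMI result hold the fan speed, as in the driver. */
#define FAN_SPEED_MASK GENMASK_ULL(20, 8)

static unsigned long long fan_rpm(unsigned long long wmi_result)
{
	return FIELD_GET_ULL(FAN_SPEED_MASK, wmi_result);
}
```

For example, a raw value with 0x5DC in bits 20:8 decodes to 1500; the low byte and any bits above bit 20 are masked off.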

@@ -18,3 +18,17 @@ config AMD_HSMP
If you choose to compile this driver as a module the module will be
called amd_hsmp.
config AMD_WBRF
bool "AMD Wifi RF Band mitigations (WBRF)"
depends on ACPI
help
WBRF (Wifi Band RFI mitigation) mechanism allows Wifi drivers
to notify the frequencies they are using so that other hardware
can be reconfigured to avoid harmonic conflicts.
AMD provides an ACPI based mechanism to support WBRF on platforms
with appropriate underlying support.
This mechanism will only be activated on platforms that advertise a
need for it.


@@ -8,3 +8,4 @@ obj-$(CONFIG_AMD_PMC) += pmc/
amd_hsmp-y := hsmp.o
obj-$(CONFIG_AMD_HSMP) += amd_hsmp.o
obj-$(CONFIG_AMD_PMF) += pmf/
obj-$(CONFIG_AMD_WBRF) += wbrf.o


@@ -31,13 +31,13 @@
#include "pmc.h"
/* SMU communication registers */
#define AMD_PMC_REGISTER_MESSAGE 0x538
#define AMD_PMC_REGISTER_RESPONSE 0x980
#define AMD_PMC_REGISTER_ARGUMENT 0x9BC
/* PMC Scratch Registers */
#define AMD_PMC_SCRATCH_REG_CZN 0x94
#define AMD_PMC_SCRATCH_REG_YC 0xD14
#define AMD_PMC_SCRATCH_REG_1AH 0xF14
/* STB Registers */
#define AMD_PMC_STB_PMI_0 0x03E30600
@@ -145,6 +145,7 @@ static const struct amd_pmc_bit_map soc15_ip_blk[] = {
{"JPEG", BIT(18)},
{"IPU", BIT(19)},
{"UMSCH", BIT(20)},
{"VPE", BIT(21)},
{}
};
@@ -350,10 +351,17 @@ static void amd_pmc_get_ip_info(struct amd_pmc_dev *dev)
case AMD_CPU_ID_CB:
dev->num_ips = 12;
dev->s2d_msg_id = 0xBE;
dev->smu_msg = 0x538;
break;
case AMD_CPU_ID_PS:
dev->num_ips = 21;
dev->s2d_msg_id = 0x85;
dev->smu_msg = 0x538;
break;
case PCI_DEVICE_ID_AMD_1AH_M20H_ROOT:
dev->num_ips = 22;
dev->s2d_msg_id = 0xDE;
dev->smu_msg = 0x938;
break;
}
}
@@ -588,6 +596,9 @@ static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev,
case AMD_CPU_ID_PS:
val = amd_pmc_reg_read(pdev, AMD_PMC_SCRATCH_REG_YC);
break;
case PCI_DEVICE_ID_AMD_1AH_M20H_ROOT:
val = amd_pmc_reg_read(pdev, AMD_PMC_SCRATCH_REG_1AH);
break;
default:
return -EINVAL;
}
@@ -618,6 +629,7 @@ static bool amd_pmc_is_stb_supported(struct amd_pmc_dev *dev)
case AMD_CPU_ID_YC:
case AMD_CPU_ID_CB:
case AMD_CPU_ID_PS:
case PCI_DEVICE_ID_AMD_1AH_M20H_ROOT:
return true;
default:
return false;
@@ -653,7 +665,7 @@ static void amd_pmc_dump_registers(struct amd_pmc_dev *dev)
argument = AMD_S2D_REGISTER_ARGUMENT;
response = AMD_S2D_REGISTER_RESPONSE;
} else {
message = AMD_PMC_REGISTER_MESSAGE;
message = dev->smu_msg;
argument = AMD_PMC_REGISTER_ARGUMENT;
response = AMD_PMC_REGISTER_RESPONSE;
}
@@ -680,7 +692,7 @@ static int amd_pmc_send_cmd(struct amd_pmc_dev *dev, u32 arg, u32 *data, u8 msg,
argument = AMD_S2D_REGISTER_ARGUMENT;
response = AMD_S2D_REGISTER_RESPONSE;
} else {
message = AMD_PMC_REGISTER_MESSAGE;
message = dev->smu_msg;
argument = AMD_PMC_REGISTER_ARGUMENT;
response = AMD_PMC_REGISTER_RESPONSE;
}
@@ -751,6 +763,7 @@ static int amd_pmc_get_os_hint(struct amd_pmc_dev *dev)
case AMD_CPU_ID_YC:
case AMD_CPU_ID_CB:
case AMD_CPU_ID_PS:
case PCI_DEVICE_ID_AMD_1AH_M20H_ROOT:
return MSG_OS_HINT_RN;
}
return -EINVAL;
@@ -967,9 +980,6 @@ static int amd_pmc_s2d_init(struct amd_pmc_dev *dev)
/* Spill to DRAM feature uses separate SMU message port */
dev->msg_port = 1;
/* Get num of IP blocks within the SoC */
amd_pmc_get_ip_info(dev);
amd_pmc_send_cmd(dev, S2D_TELEMETRY_SIZE, &size, dev->s2d_msg_id, true);
if (size != S2D_TELEMETRY_BYTES_MAX)
return -EIO;
@@ -1077,6 +1087,9 @@ static int amd_pmc_probe(struct platform_device *pdev)
mutex_init(&dev->lock);
/* Get num of IP blocks within the SoC */
amd_pmc_get_ip_info(dev);
if (enable_stb && amd_pmc_is_stb_supported(dev)) {
err = amd_pmc_s2d_init(dev);
if (err)

@@ -26,6 +26,7 @@ struct amd_pmc_dev {
u32 dram_size;
u32 num_ips;
u32 s2d_msg_id;
u32 smu_msg;
/* SMU version information */
u8 smu_program;
u8 major;

@@ -9,6 +9,7 @@ config AMD_PMF
depends on POWER_SUPPLY
depends on AMD_NB
select ACPI_PLATFORM_PROFILE
depends on TEE && AMDTEE
help
This driver provides support for the AMD Platform Management Framework.
The goal is to enhance end user experience by making AMD PCs smarter,

@@ -6,4 +6,5 @@
obj-$(CONFIG_AMD_PMF) += amd-pmf.o
amd-pmf-objs := core.o acpi.o sps.o \
auto-mode.o cnqf.o
auto-mode.o cnqf.o \
tee-if.o spc.o

@@ -111,7 +111,6 @@ int apmf_os_power_slider_update(struct amd_pmf_dev *pdev, u8 event)
struct os_power_slider args;
struct acpi_buffer params;
union acpi_object *info;
int err = 0;
args.size = sizeof(args);
args.slider_event = event;
@@ -121,10 +120,10 @@ int apmf_os_power_slider_update(struct amd_pmf_dev *pdev, u8 event)
info = apmf_if_call(pdev, APMF_FUNC_OS_POWER_SLIDER_UPDATE, &params);
if (!info)
err = -EIO;
return -EIO;
kfree(info);
return err;
return 0;
}
static void apmf_sbios_heartbeat_notify(struct work_struct *work)
@@ -135,11 +134,9 @@ static void apmf_sbios_heartbeat_notify(struct work_struct *work)
dev_dbg(dev->dev, "Sending heartbeat to SBIOS\n");
info = apmf_if_call(dev, APMF_FUNC_SBIOS_HEARTBEAT, NULL);
if (!info)
goto out;
return;
schedule_delayed_work(&dev->heart_beat, msecs_to_jiffies(dev->hb_interval * 1000));
out:
kfree(info);
}
@@ -148,7 +145,6 @@ int apmf_update_fan_idx(struct amd_pmf_dev *pdev, bool manual, u32 idx)
union acpi_object *info;
struct apmf_fan_idx args;
struct acpi_buffer params;
int err = 0;
args.size = sizeof(args);
args.fan_ctl_mode = manual;
@@ -158,14 +154,11 @@ int apmf_update_fan_idx(struct amd_pmf_dev *pdev, bool manual, u32 idx)
params.pointer = (void *)&args;
info = apmf_if_call(pdev, APMF_FUNC_SET_FAN_IDX, &params);
if (!info) {
err = -EIO;
goto out;
}
if (!info)
return -EIO;
out:
kfree(info);
return err;
return 0;
}
int apmf_get_auto_mode_def(struct amd_pmf_dev *pdev, struct apmf_auto_mode *data)
@@ -286,6 +279,43 @@ int apmf_install_handler(struct amd_pmf_dev *pmf_dev)
return 0;
}
static acpi_status apmf_walk_resources(struct acpi_resource *res, void *data)
{
struct amd_pmf_dev *dev = data;
switch (res->type) {
case ACPI_RESOURCE_TYPE_ADDRESS64:
dev->policy_addr = res->data.address64.address.minimum;
dev->policy_sz = res->data.address64.address.address_length;
break;
case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
dev->policy_addr = res->data.fixed_memory32.address;
dev->policy_sz = res->data.fixed_memory32.address_length;
break;
}
if (!dev->policy_addr || dev->policy_sz > POLICY_BUF_MAX_SZ || dev->policy_sz == 0) {
pr_err("Incorrect Policy params, possibly a SBIOS bug\n");
return AE_ERROR;
}
return AE_OK;
}
int apmf_check_smart_pc(struct amd_pmf_dev *pmf_dev)
{
acpi_handle ahandle = ACPI_HANDLE(pmf_dev->dev);
acpi_status status;
status = acpi_walk_resources(ahandle, METHOD_NAME__CRS, apmf_walk_resources, pmf_dev);
if (ACPI_FAILURE(status)) {
dev_err(pmf_dev->dev, "acpi_walk_resources failed :%d\n", status);
return -EINVAL;
}
return 0;
}
void apmf_acpi_deinit(struct amd_pmf_dev *pmf_dev)
{
acpi_handle ahandle = ACPI_HANDLE(pmf_dev->dev);

@@ -251,29 +251,37 @@ static const struct pci_device_id pmf_pci_ids[] = {
{ }
};
static void amd_pmf_set_dram_addr(struct amd_pmf_dev *dev)
int amd_pmf_set_dram_addr(struct amd_pmf_dev *dev, bool alloc_buffer)
{
u64 phys_addr;
u32 hi, low;
/* Get Metrics Table Address */
if (alloc_buffer) {
dev->buf = kzalloc(sizeof(dev->m_table), GFP_KERNEL);
if (!dev->buf)
return -ENOMEM;
}
phys_addr = virt_to_phys(dev->buf);
hi = phys_addr >> 32;
low = phys_addr & GENMASK(31, 0);
amd_pmf_send_cmd(dev, SET_DRAM_ADDR_HIGH, 0, hi, NULL);
amd_pmf_send_cmd(dev, SET_DRAM_ADDR_LOW, 0, low, NULL);
return 0;
}
int amd_pmf_init_metrics_table(struct amd_pmf_dev *dev)
{
/* Get Metrics Table Address */
dev->buf = kzalloc(sizeof(dev->m_table), GFP_KERNEL);
if (!dev->buf)
return -ENOMEM;
int ret;
INIT_DELAYED_WORK(&dev->work_buffer, amd_pmf_get_metrics);
amd_pmf_set_dram_addr(dev);
ret = amd_pmf_set_dram_addr(dev, true);
if (ret)
return ret;
/*
* Start collecting the metrics data after a small delay
@@ -284,17 +292,30 @@ int amd_pmf_init_metrics_table(struct amd_pmf_dev *dev)
return 0;
}
static int amd_pmf_resume_handler(struct device *dev)
static int amd_pmf_suspend_handler(struct device *dev)
{
struct amd_pmf_dev *pdev = dev_get_drvdata(dev);
if (pdev->buf)
amd_pmf_set_dram_addr(pdev);
kfree(pdev->buf);
return 0;
}
static DEFINE_SIMPLE_DEV_PM_OPS(amd_pmf_pm, NULL, amd_pmf_resume_handler);
static int amd_pmf_resume_handler(struct device *dev)
{
struct amd_pmf_dev *pdev = dev_get_drvdata(dev);
int ret;
if (pdev->buf) {
ret = amd_pmf_set_dram_addr(pdev, false);
if (ret)
return ret;
}
return 0;
}
static DEFINE_SIMPLE_DEV_PM_OPS(amd_pmf_pm, amd_pmf_suspend_handler, amd_pmf_resume_handler);
static void amd_pmf_init_features(struct amd_pmf_dev *dev)
{
@@ -309,13 +330,13 @@ static void amd_pmf_init_features(struct amd_pmf_dev *dev)
dev_dbg(dev->dev, "SPS enabled and Platform Profiles registered\n");
}
/* Enable Auto Mode */
if (is_apmf_func_supported(dev, APMF_FUNC_AUTO_MODE)) {
if (!amd_pmf_init_smart_pc(dev)) {
dev_dbg(dev->dev, "Smart PC Solution Enabled\n");
} else if (is_apmf_func_supported(dev, APMF_FUNC_AUTO_MODE)) {
amd_pmf_init_auto_mode(dev);
dev_dbg(dev->dev, "Auto Mode Init done\n");
} else if (is_apmf_func_supported(dev, APMF_FUNC_DYN_SLIDER_AC) ||
is_apmf_func_supported(dev, APMF_FUNC_DYN_SLIDER_DC)) {
/* Enable Cool n Quiet Framework (CnQF) */
ret = amd_pmf_init_cnqf(dev);
if (ret)
dev_warn(dev->dev, "CnQF Init failed\n");
@@ -330,7 +351,9 @@ static void amd_pmf_deinit_features(struct amd_pmf_dev *dev)
amd_pmf_deinit_sps(dev);
}
if (is_apmf_func_supported(dev, APMF_FUNC_AUTO_MODE)) {
if (!dev->smart_pc_enabled) {
amd_pmf_deinit_smart_pc(dev);
} else if (is_apmf_func_supported(dev, APMF_FUNC_AUTO_MODE)) {
amd_pmf_deinit_auto_mode(dev);
} else if (is_apmf_func_supported(dev, APMF_FUNC_DYN_SLIDER_AC) ||
is_apmf_func_supported(dev, APMF_FUNC_DYN_SLIDER_DC)) {
@@ -408,9 +431,9 @@ static int amd_pmf_probe(struct platform_device *pdev)
apmf_acpi_init(dev);
platform_set_drvdata(pdev, dev);
amd_pmf_dbgfs_register(dev);
amd_pmf_init_features(dev);
apmf_install_handler(dev);
amd_pmf_dbgfs_register(dev);
dev_info(dev->dev, "registered PMF device successfully\n");
@@ -448,3 +471,4 @@ module_platform_driver(amd_pmf_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("AMD Platform Management Framework Driver");
MODULE_SOFTDEP("pre: amdtee");

@@ -14,6 +14,11 @@
#include <linux/acpi.h>
#include <linux/platform_profile.h>
#define POLICY_BUF_MAX_SZ 0x4b000
#define POLICY_SIGN_COOKIE 0x31535024
#define POLICY_COOKIE_OFFSET 0x10
#define POLICY_COOKIE_LEN 0x14
/* APMF Functions */
#define APMF_FUNC_VERIFY_INTERFACE 0
#define APMF_FUNC_GET_SYS_PARAMS 1
@@ -44,6 +49,7 @@
#define GET_STT_MIN_LIMIT 0x1F
#define GET_STT_LIMIT_APU 0x20
#define GET_STT_LIMIT_HS2 0x21
#define SET_P3T 0x23 /* P3T: Peak Package Power Limit */
/* OS slider update notification */
#define DC_BEST_PERF 0
@@ -59,6 +65,24 @@
#define ARG_NONE 0
#define AVG_SAMPLE_SIZE 3
/* Policy Actions */
#define PMF_POLICY_SPL 2
#define PMF_POLICY_SPPT 3
#define PMF_POLICY_FPPT 4
#define PMF_POLICY_SPPT_APU_ONLY 5
#define PMF_POLICY_STT_MIN 6
#define PMF_POLICY_STT_SKINTEMP_APU 7
#define PMF_POLICY_STT_SKINTEMP_HS2 8
#define PMF_POLICY_SYSTEM_STATE 9
#define PMF_POLICY_P3T 38
/* TA macros */
#define PMF_TA_IF_VERSION_MAJOR 1
#define TA_PMF_ACTION_MAX 32
#define TA_PMF_UNDO_MAX 8
#define TA_OUTPUT_RESERVED_MEM 906
#define MAX_OPERATION_PARAMS 4
/* AMD PMF BIOS interfaces */
struct apmf_verify_interface {
u16 size;
@@ -129,6 +153,21 @@ struct smu_pmf_metrics {
u16 infra_gfx_maxfreq; /* in MHz */
u16 skin_temp; /* in centi-Celsius */
u16 device_state;
u16 curtemp; /* in centi-Celsius */
u16 filter_alpha_value;
u16 avg_gfx_clkfrequency;
u16 avg_fclk_frequency;
u16 avg_gfx_activity;
u16 avg_socclk_frequency;
u16 avg_vclk_frequency;
u16 avg_vcn_activity;
u16 avg_dram_reads;
u16 avg_dram_writes;
u16 avg_socket_power;
u16 avg_core_power[2];
u16 avg_core_c0residency[16];
u16 spare1;
u32 metrics_counter;
} __packed;
enum amd_stt_skin_temp {
@@ -179,6 +218,19 @@ struct amd_pmf_dev {
bool cnqf_enabled;
bool cnqf_supported;
struct notifier_block pwr_src_notifier;
/* Smart PC solution builder */
struct dentry *esbin;
unsigned char *policy_buf;
u32 policy_sz;
struct tee_context *tee_ctx;
struct tee_shm *fw_shm_pool;
u32 session_id;
void *shbuf;
struct delayed_work pb_work;
struct pmf_action_table *prev_data;
u64 policy_addr;
void *policy_base;
bool smart_pc_enabled;
};
struct apmf_sps_prop_granular {
@@ -389,6 +441,145 @@ struct apmf_dyn_slider_output {
struct apmf_cnqf_power_set ps[APMF_CNQF_MAX];
} __packed;
enum smart_pc_status {
PMF_SMART_PC_ENABLED,
PMF_SMART_PC_DISABLED,
};
/* Smart PC - TA internals */
enum system_state {
SYSTEM_STATE_S0i3,
SYSTEM_STATE_S4,
SYSTEM_STATE_SCREEN_LOCK,
SYSTEM_STATE_MAX,
};
enum ta_slider {
TA_BEST_BATTERY,
TA_BETTER_BATTERY,
TA_BETTER_PERFORMANCE,
TA_BEST_PERFORMANCE,
TA_MAX,
};
/* Command ids for TA communication */
enum ta_pmf_command {
TA_PMF_COMMAND_POLICY_BUILDER_INITIALIZE,
TA_PMF_COMMAND_POLICY_BUILDER_ENACT_POLICIES,
};
enum ta_pmf_error_type {
TA_PMF_TYPE_SUCCESS,
TA_PMF_ERROR_TYPE_GENERIC,
TA_PMF_ERROR_TYPE_CRYPTO,
TA_PMF_ERROR_TYPE_CRYPTO_VALIDATE,
TA_PMF_ERROR_TYPE_CRYPTO_VERIFY_OEM,
TA_PMF_ERROR_TYPE_POLICY_BUILDER,
TA_PMF_ERROR_TYPE_PB_CONVERT,
TA_PMF_ERROR_TYPE_PB_SETUP,
TA_PMF_ERROR_TYPE_PB_ENACT,
TA_PMF_ERROR_TYPE_ASD_GET_DEVICE_INFO,
TA_PMF_ERROR_TYPE_ASD_GET_DEVICE_PCIE_INFO,
TA_PMF_ERROR_TYPE_SYS_DRV_FW_VALIDATION,
TA_PMF_ERROR_TYPE_MAX,
};
struct pmf_action_table {
enum system_state system_state;
u32 spl; /* in mW */
u32 sppt; /* in mW */
u32 sppt_apuonly; /* in mW */
u32 fppt; /* in mW */
u32 stt_minlimit; /* in mW */
u32 stt_skintemp_apu; /* in C */
u32 stt_skintemp_hs2; /* in C */
u32 p3t_limit; /* in mW */
};
/* Input conditions */
struct ta_pmf_condition_info {
u32 power_source;
u32 bat_percentage;
u32 power_slider;
u32 lid_state;
bool user_present;
u32 rsvd1[2];
u32 monitor_count;
u32 rsvd2[2];
u32 bat_design;
u32 full_charge_capacity;
int drain_rate;
bool user_engaged;
u32 device_state;
u32 socket_power;
u32 skin_temperature;
u32 rsvd3[5];
u32 ambient_light;
u32 length;
u32 avg_c0residency;
u32 max_c0residency;
u32 s0i3_entry;
u32 gfx_busy;
u32 rsvd4[7];
bool camera_state;
u32 workload_type;
u32 display_type;
u32 display_state;
u32 rsvd5[150];
};
struct ta_pmf_load_policy_table {
u32 table_size;
u8 table[POLICY_BUF_MAX_SZ];
};
/* TA initialization params */
struct ta_pmf_init_table {
u32 frequency; /* SMU sampling frequency */
bool validate;
bool sku_check;
bool metadata_macrocheck;
struct ta_pmf_load_policy_table policies_table;
};
/* Everything the TA needs to Enact Policies */
struct ta_pmf_enact_table {
struct ta_pmf_condition_info ev_info;
u32 name;
};
struct ta_pmf_action {
u32 action_index;
u32 value;
};
/* Output actions from TA */
struct ta_pmf_enact_result {
u32 actions_count;
struct ta_pmf_action actions_list[TA_PMF_ACTION_MAX];
u32 undo_count;
struct ta_pmf_action undo_list[TA_PMF_UNDO_MAX];
};
union ta_pmf_input {
struct ta_pmf_enact_table enact_table;
struct ta_pmf_init_table init_table;
};
union ta_pmf_output {
struct ta_pmf_enact_result policy_apply_table;
u32 rsvd[TA_OUTPUT_RESERVED_MEM];
};
struct ta_pmf_shared_memory {
int command_id;
int resp_id;
u32 pmf_result;
u32 if_version;
union ta_pmf_output pmf_output;
union ta_pmf_input pmf_input;
};
/* Core Layer */
int apmf_acpi_init(struct amd_pmf_dev *pmf_dev);
void apmf_acpi_deinit(struct amd_pmf_dev *pmf_dev);
@@ -398,6 +589,7 @@ int amd_pmf_init_metrics_table(struct amd_pmf_dev *dev);
int amd_pmf_get_power_source(void);
int apmf_install_handler(struct amd_pmf_dev *pmf_dev);
int apmf_os_power_slider_update(struct amd_pmf_dev *dev, u8 flag);
int amd_pmf_set_dram_addr(struct amd_pmf_dev *dev, bool alloc_buffer);
/* SPS Layer */
int amd_pmf_get_pprof_modes(struct amd_pmf_dev *pmf);
@@ -409,7 +601,9 @@ int apmf_get_static_slider_granular(struct amd_pmf_dev *pdev,
struct apmf_static_slider_granular_output *output);
bool is_pprof_balanced(struct amd_pmf_dev *pmf);
int amd_pmf_power_slider_update_event(struct amd_pmf_dev *dev);
const char *amd_pmf_source_as_str(unsigned int state);
int apmf_update_fan_idx(struct amd_pmf_dev *pdev, bool manual, u32 idx);
int amd_pmf_set_sps_power_limits(struct amd_pmf_dev *pmf);
@@ -433,4 +627,13 @@ void amd_pmf_deinit_cnqf(struct amd_pmf_dev *dev);
int amd_pmf_trans_cnqf(struct amd_pmf_dev *dev, int socket_power, ktime_t time_lapsed_ms);
extern const struct attribute_group cnqf_feature_attribute_group;
/* Smart PC builder Layer */
int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev);
void amd_pmf_deinit_smart_pc(struct amd_pmf_dev *dev);
int apmf_check_smart_pc(struct amd_pmf_dev *pmf_dev);
/* Smart PC - TA interfaces */
void amd_pmf_populate_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in);
void amd_pmf_dump_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in);
#endif /* PMF_H */

@@ -0,0 +1,158 @@
// SPDX-License-Identifier: GPL-2.0
/*
* AMD Platform Management Framework Driver - Smart PC Capabilities
*
* Copyright (c) 2023, Advanced Micro Devices, Inc.
* All Rights Reserved.
*
* Authors: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
* Patil Rajesh Reddy <Patil.Reddy@amd.com>
*/
#include <acpi/button.h>
#include <linux/power_supply.h>
#include <linux/units.h>
#include "pmf.h"
#ifdef CONFIG_AMD_PMF_DEBUG
static const char *ta_slider_as_str(unsigned int state)
{
switch (state) {
case TA_BEST_PERFORMANCE:
return "PERFORMANCE";
case TA_BETTER_PERFORMANCE:
return "BALANCED";
case TA_BEST_BATTERY:
return "POWER_SAVER";
default:
return "Unknown TA Slider State";
}
}
void amd_pmf_dump_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
{
dev_dbg(dev->dev, "==== TA inputs START ====\n");
dev_dbg(dev->dev, "Slider State: %s\n", ta_slider_as_str(in->ev_info.power_slider));
dev_dbg(dev->dev, "Power Source: %s\n", amd_pmf_source_as_str(in->ev_info.power_source));
dev_dbg(dev->dev, "Battery Percentage: %u\n", in->ev_info.bat_percentage);
dev_dbg(dev->dev, "Designed Battery Capacity: %u\n", in->ev_info.bat_design);
dev_dbg(dev->dev, "Fully Charged Capacity: %u\n", in->ev_info.full_charge_capacity);
dev_dbg(dev->dev, "Drain Rate: %d\n", in->ev_info.drain_rate);
dev_dbg(dev->dev, "Socket Power: %u\n", in->ev_info.socket_power);
dev_dbg(dev->dev, "Skin Temperature: %u\n", in->ev_info.skin_temperature);
dev_dbg(dev->dev, "Avg C0 Residency: %u\n", in->ev_info.avg_c0residency);
dev_dbg(dev->dev, "Max C0 Residency: %u\n", in->ev_info.max_c0residency);
dev_dbg(dev->dev, "GFX Busy: %u\n", in->ev_info.gfx_busy);
dev_dbg(dev->dev, "LID State: %s\n", in->ev_info.lid_state ? "close" : "open");
dev_dbg(dev->dev, "==== TA inputs END ====\n");
}
#else
void amd_pmf_dump_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in) {}
#endif
static void amd_pmf_get_smu_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
{
u16 max, avg = 0;
int i;
memset(dev->buf, 0, sizeof(dev->m_table));
amd_pmf_send_cmd(dev, SET_TRANSFER_TABLE, 0, 7, NULL);
memcpy(&dev->m_table, dev->buf, sizeof(dev->m_table));
in->ev_info.socket_power = dev->m_table.apu_power + dev->m_table.dgpu_power;
in->ev_info.skin_temperature = dev->m_table.skin_temp;
/* Get the avg and max C0 residency of all the cores */
max = dev->m_table.avg_core_c0residency[0];
for (i = 0; i < ARRAY_SIZE(dev->m_table.avg_core_c0residency); i++) {
avg += dev->m_table.avg_core_c0residency[i];
if (dev->m_table.avg_core_c0residency[i] > max)
max = dev->m_table.avg_core_c0residency[i];
}
avg = DIV_ROUND_CLOSEST(avg, ARRAY_SIZE(dev->m_table.avg_core_c0residency));
in->ev_info.avg_c0residency = avg;
in->ev_info.max_c0residency = max;
in->ev_info.gfx_busy = dev->m_table.avg_gfx_activity;
}
static const char * const pmf_battery_supply_name[] = {
"BATT",
"BAT0",
};
static int amd_pmf_get_battery_prop(enum power_supply_property prop)
{
union power_supply_propval value;
struct power_supply *psy;
int i, ret;
for (i = 0; i < ARRAY_SIZE(pmf_battery_supply_name); i++) {
psy = power_supply_get_by_name(pmf_battery_supply_name[i]);
if (!psy)
continue;
ret = power_supply_get_property(psy, prop, &value);
if (ret) {
power_supply_put(psy);
return ret;
}
}
return value.intval;
}
static int amd_pmf_get_battery_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
{
int val;
val = amd_pmf_get_battery_prop(POWER_SUPPLY_PROP_PRESENT);
if (val < 0)
return val;
if (val != 1)
return -ENODEV;
in->ev_info.bat_percentage = amd_pmf_get_battery_prop(POWER_SUPPLY_PROP_CAPACITY);
/* all values in mWh metrics */
in->ev_info.bat_design = amd_pmf_get_battery_prop(POWER_SUPPLY_PROP_ENERGY_FULL_DESIGN) /
MILLIWATT_PER_WATT;
in->ev_info.full_charge_capacity = amd_pmf_get_battery_prop(POWER_SUPPLY_PROP_ENERGY_FULL) /
MILLIWATT_PER_WATT;
in->ev_info.drain_rate = amd_pmf_get_battery_prop(POWER_SUPPLY_PROP_POWER_NOW) /
MILLIWATT_PER_WATT;
return 0;
}
static int amd_pmf_get_slider_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
{
int val;
switch (dev->current_profile) {
case PLATFORM_PROFILE_PERFORMANCE:
val = TA_BEST_PERFORMANCE;
break;
case PLATFORM_PROFILE_BALANCED:
val = TA_BETTER_PERFORMANCE;
break;
case PLATFORM_PROFILE_LOW_POWER:
val = TA_BEST_BATTERY;
break;
default:
dev_err(dev->dev, "Unknown Platform Profile.\n");
return -EOPNOTSUPP;
}
in->ev_info.power_slider = val;
return 0;
}
void amd_pmf_populate_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
{
/* TA side lid open is 1 and close is 0, hence the ! here */
in->ev_info.lid_state = !acpi_lid_open();
in->ev_info.power_source = amd_pmf_get_power_source();
amd_pmf_get_smu_info(dev, in);
amd_pmf_get_battery_info(dev, in);
amd_pmf_get_slider_info(dev, in);
}

@@ -27,7 +27,7 @@ static const char *slider_as_str(unsigned int state)
}
}
static const char *source_as_str(unsigned int state)
const char *amd_pmf_source_as_str(unsigned int state)
{
switch (state) {
case POWER_SOURCE_AC:
@@ -47,7 +47,8 @@ static void amd_pmf_dump_sps_defaults(struct amd_pmf_static_slider_granular *dat
for (i = 0; i < POWER_SOURCE_MAX; i++) {
for (j = 0; j < POWER_MODE_MAX; j++) {
pr_debug("--- Source:%s Mode:%s ---\n", source_as_str(i), slider_as_str(j));
pr_debug("--- Source:%s Mode:%s ---\n", amd_pmf_source_as_str(i),
slider_as_str(j));
pr_debug("SPL: %u mW\n", data->prop[i][j].spl);
pr_debug("SPPT: %u mW\n", data->prop[i][j].sppt);
pr_debug("SPPT_ApuOnly: %u mW\n", data->prop[i][j].sppt_apu_only);

@@ -0,0 +1,472 @@
// SPDX-License-Identifier: GPL-2.0
/*
* AMD Platform Management Framework Driver - TEE Interface
*
* Copyright (c) 2023, Advanced Micro Devices, Inc.
* All Rights Reserved.
*
* Author: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
*/
#include <linux/debugfs.h>
#include <linux/tee_drv.h>
#include <linux/uuid.h>
#include "pmf.h"
#define MAX_TEE_PARAM 4
/* Policy binary actions sampling frequency (in ms) */
static int pb_actions_ms = MSEC_PER_SEC;
/* Sideload policy binaries to debug policy failures */
static bool pb_side_load;
#ifdef CONFIG_AMD_PMF_DEBUG
module_param(pb_actions_ms, int, 0644);
MODULE_PARM_DESC(pb_actions_ms, "Policy binary actions sampling frequency (default = 1000ms)");
module_param(pb_side_load, bool, 0444);
MODULE_PARM_DESC(pb_side_load, "Sideload policy binaries to debug policy failures");
#endif
static const uuid_t amd_pmf_ta_uuid = UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d,
0xb1, 0x2d, 0xc5, 0x29, 0xb1, 0x3d, 0x85, 0x43);
static const char *amd_pmf_uevent_as_str(unsigned int state)
{
switch (state) {
case SYSTEM_STATE_S0i3:
return "S0i3";
case SYSTEM_STATE_S4:
return "S4";
case SYSTEM_STATE_SCREEN_LOCK:
return "SCREEN_LOCK";
default:
return "Unknown Smart PC event";
}
}
static void amd_pmf_prepare_args(struct amd_pmf_dev *dev, int cmd,
struct tee_ioctl_invoke_arg *arg,
struct tee_param *param)
{
memset(arg, 0, sizeof(*arg));
memset(param, 0, MAX_TEE_PARAM * sizeof(*param));
arg->func = cmd;
arg->session = dev->session_id;
arg->num_params = MAX_TEE_PARAM;
/* Fill invoke cmd params */
param[0].u.memref.size = sizeof(struct ta_pmf_shared_memory);
param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT;
param[0].u.memref.shm = dev->fw_shm_pool;
param[0].u.memref.shm_offs = 0;
}
static int amd_pmf_update_uevents(struct amd_pmf_dev *dev, u16 event)
{
char *envp[2] = {};
envp[0] = kasprintf(GFP_KERNEL, "EVENT_ID=%d", event);
if (!envp[0])
return -EINVAL;
kobject_uevent_env(&dev->dev->kobj, KOBJ_CHANGE, envp);
kfree(envp[0]);
return 0;
}
static void amd_pmf_apply_policies(struct amd_pmf_dev *dev, struct ta_pmf_enact_result *out)
{
u32 val;
int idx;
for (idx = 0; idx < out->actions_count; idx++) {
val = out->actions_list[idx].value;
switch (out->actions_list[idx].action_index) {
case PMF_POLICY_SPL:
if (dev->prev_data->spl != val) {
amd_pmf_send_cmd(dev, SET_SPL, false, val, NULL);
dev_dbg(dev->dev, "update SPL: %u\n", val);
dev->prev_data->spl = val;
}
break;
case PMF_POLICY_SPPT:
if (dev->prev_data->sppt != val) {
amd_pmf_send_cmd(dev, SET_SPPT, false, val, NULL);
dev_dbg(dev->dev, "update SPPT: %u\n", val);
dev->prev_data->sppt = val;
}
break;
case PMF_POLICY_FPPT:
if (dev->prev_data->fppt != val) {
amd_pmf_send_cmd(dev, SET_FPPT, false, val, NULL);
dev_dbg(dev->dev, "update FPPT: %u\n", val);
dev->prev_data->fppt = val;
}
break;
case PMF_POLICY_SPPT_APU_ONLY:
if (dev->prev_data->sppt_apuonly != val) {
amd_pmf_send_cmd(dev, SET_SPPT_APU_ONLY, false, val, NULL);
dev_dbg(dev->dev, "update SPPT_APU_ONLY: %u\n", val);
dev->prev_data->sppt_apuonly = val;
}
break;
case PMF_POLICY_STT_MIN:
if (dev->prev_data->stt_minlimit != val) {
amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, val, NULL);
dev_dbg(dev->dev, "update STT_MIN: %u\n", val);
dev->prev_data->stt_minlimit = val;
}
break;
case PMF_POLICY_STT_SKINTEMP_APU:
if (dev->prev_data->stt_skintemp_apu != val) {
amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, val, NULL);
dev_dbg(dev->dev, "update STT_SKINTEMP_APU: %u\n", val);
dev->prev_data->stt_skintemp_apu = val;
}
break;
case PMF_POLICY_STT_SKINTEMP_HS2:
if (dev->prev_data->stt_skintemp_hs2 != val) {
amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, val, NULL);
dev_dbg(dev->dev, "update STT_SKINTEMP_HS2: %u\n", val);
dev->prev_data->stt_skintemp_hs2 = val;
}
break;
case PMF_POLICY_P3T:
if (dev->prev_data->p3t_limit != val) {
amd_pmf_send_cmd(dev, SET_P3T, false, val, NULL);
dev_dbg(dev->dev, "update P3T: %u\n", val);
dev->prev_data->p3t_limit = val;
}
break;
case PMF_POLICY_SYSTEM_STATE:
amd_pmf_update_uevents(dev, val);
dev_dbg(dev->dev, "update SYSTEM_STATE: %s\n",
amd_pmf_uevent_as_str(val));
break;
}
}
}
static int amd_pmf_invoke_cmd_enact(struct amd_pmf_dev *dev)
{
struct ta_pmf_shared_memory *ta_sm = NULL;
struct ta_pmf_enact_result *out = NULL;
struct ta_pmf_enact_table *in = NULL;
struct tee_param param[MAX_TEE_PARAM];
struct tee_ioctl_invoke_arg arg;
int ret = 0;
if (!dev->tee_ctx)
return -ENODEV;
memset(dev->shbuf, 0, dev->policy_sz);
ta_sm = dev->shbuf;
out = &ta_sm->pmf_output.policy_apply_table;
in = &ta_sm->pmf_input.enact_table;
memset(ta_sm, 0, sizeof(*ta_sm));
ta_sm->command_id = TA_PMF_COMMAND_POLICY_BUILDER_ENACT_POLICIES;
ta_sm->if_version = PMF_TA_IF_VERSION_MAJOR;
amd_pmf_populate_ta_inputs(dev, in);
amd_pmf_prepare_args(dev, TA_PMF_COMMAND_POLICY_BUILDER_ENACT_POLICIES, &arg, param);
ret = tee_client_invoke_func(dev->tee_ctx, &arg, param);
if (ret < 0 || arg.ret != 0) {
dev_err(dev->dev, "TEE enact cmd failed. err: %x, ret:%d\n", arg.ret, ret);
return ret;
}
if (ta_sm->pmf_result == TA_PMF_TYPE_SUCCESS && out->actions_count) {
amd_pmf_dump_ta_inputs(dev, in);
dev_dbg(dev->dev, "action count:%u result:%x\n", out->actions_count,
ta_sm->pmf_result);
amd_pmf_apply_policies(dev, out);
}
return 0;
}
static int amd_pmf_invoke_cmd_init(struct amd_pmf_dev *dev)
{
struct ta_pmf_shared_memory *ta_sm = NULL;
struct tee_param param[MAX_TEE_PARAM];
struct ta_pmf_init_table *in = NULL;
struct tee_ioctl_invoke_arg arg;
int ret = 0;
if (!dev->tee_ctx) {
dev_err(dev->dev, "Failed to get TEE context\n");
return -ENODEV;
}
dev_dbg(dev->dev, "Policy Binary size: %u bytes\n", dev->policy_sz);
memset(dev->shbuf, 0, dev->policy_sz);
ta_sm = dev->shbuf;
in = &ta_sm->pmf_input.init_table;
ta_sm->command_id = TA_PMF_COMMAND_POLICY_BUILDER_INITIALIZE;
ta_sm->if_version = PMF_TA_IF_VERSION_MAJOR;
in->metadata_macrocheck = false;
in->sku_check = false;
in->validate = true;
in->frequency = pb_actions_ms;
in->policies_table.table_size = dev->policy_sz;
memcpy(in->policies_table.table, dev->policy_buf, dev->policy_sz);
amd_pmf_prepare_args(dev, TA_PMF_COMMAND_POLICY_BUILDER_INITIALIZE, &arg, param);
ret = tee_client_invoke_func(dev->tee_ctx, &arg, param);
if (ret < 0 || arg.ret != 0) {
dev_err(dev->dev, "Failed to invoke TEE init cmd. err: %x, ret:%d\n", arg.ret, ret);
return ret;
}
return ta_sm->pmf_result;
}
static void amd_pmf_invoke_cmd(struct work_struct *work)
{
struct amd_pmf_dev *dev = container_of(work, struct amd_pmf_dev, pb_work.work);
amd_pmf_invoke_cmd_enact(dev);
schedule_delayed_work(&dev->pb_work, msecs_to_jiffies(pb_actions_ms));
}
static int amd_pmf_start_policy_engine(struct amd_pmf_dev *dev)
{
u32 cookie, length;
int res;
cookie = readl(dev->policy_buf + POLICY_COOKIE_OFFSET);
length = readl(dev->policy_buf + POLICY_COOKIE_LEN);
if (cookie != POLICY_SIGN_COOKIE || !length)
return -EINVAL;
/* Update the actual length */
dev->policy_sz = length + 512;
res = amd_pmf_invoke_cmd_init(dev);
if (res == TA_PMF_TYPE_SUCCESS) {
/* Now its safe to announce that smart pc is enabled */
dev->smart_pc_enabled = PMF_SMART_PC_ENABLED;
/*
* Start collecting the data from TA FW after a small delay
* or else, we might end up getting stale values.
*/
schedule_delayed_work(&dev->pb_work, msecs_to_jiffies(pb_actions_ms * 3));
} else {
dev_err(dev->dev, "ta invoke cmd init failed err: %x\n", res);
dev->smart_pc_enabled = PMF_SMART_PC_DISABLED;
return res;
}
return 0;
}
#ifdef CONFIG_AMD_PMF_DEBUG
static void amd_pmf_hex_dump_pb(struct amd_pmf_dev *dev)
{
print_hex_dump_debug("(pb): ", DUMP_PREFIX_OFFSET, 16, 1, dev->policy_buf,
dev->policy_sz, false);
}
static ssize_t amd_pmf_get_pb_data(struct file *filp, const char __user *buf,
size_t length, loff_t *pos)
{
struct amd_pmf_dev *dev = filp->private_data;
unsigned char *new_policy_buf;
int ret;
/* Policy binary size cannot exceed POLICY_BUF_MAX_SZ */
if (length > POLICY_BUF_MAX_SZ || length == 0)
return -EINVAL;
/* re-alloc to the new buffer length of the policy binary */
new_policy_buf = kzalloc(length, GFP_KERNEL);
if (!new_policy_buf)
return -ENOMEM;
if (copy_from_user(new_policy_buf, buf, length))
return -EFAULT;
kfree(dev->policy_buf);
dev->policy_buf = new_policy_buf;
dev->policy_sz = length;
amd_pmf_hex_dump_pb(dev);
ret = amd_pmf_start_policy_engine(dev);
if (ret)
return -EINVAL;
return length;
}
static const struct file_operations pb_fops = {
.write = amd_pmf_get_pb_data,
.open = simple_open,
};
static void amd_pmf_open_pb(struct amd_pmf_dev *dev, struct dentry *debugfs_root)
{
dev->esbin = debugfs_create_dir("pb", debugfs_root);
debugfs_create_file("update_policy", 0644, dev->esbin, dev, &pb_fops);
}
static void amd_pmf_remove_pb(struct amd_pmf_dev *dev)
{
debugfs_remove_recursive(dev->esbin);
}
#else
static void amd_pmf_open_pb(struct amd_pmf_dev *dev, struct dentry *debugfs_root) {}
static void amd_pmf_remove_pb(struct amd_pmf_dev *dev) {}
static void amd_pmf_hex_dump_pb(struct amd_pmf_dev *dev) {}
#endif
static int amd_pmf_get_bios_buffer(struct amd_pmf_dev *dev)
{
dev->policy_buf = kzalloc(dev->policy_sz, GFP_KERNEL);
if (!dev->policy_buf)
return -ENOMEM;
dev->policy_base = devm_ioremap(dev->dev, dev->policy_addr, dev->policy_sz);
if (!dev->policy_base)
return -ENOMEM;
memcpy(dev->policy_buf, dev->policy_base, dev->policy_sz);
amd_pmf_hex_dump_pb(dev);
if (pb_side_load)
amd_pmf_open_pb(dev, dev->dbgfs_dir);
return amd_pmf_start_policy_engine(dev);
}
static int amd_pmf_amdtee_ta_match(struct tee_ioctl_version_data *ver, const void *data)
{
return ver->impl_id == TEE_IMPL_ID_AMDTEE;
}
static int amd_pmf_ta_open_session(struct tee_context *ctx, u32 *id)
{
struct tee_ioctl_open_session_arg sess_arg = {};
int rc;
export_uuid(sess_arg.uuid, &amd_pmf_ta_uuid);
sess_arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
sess_arg.num_params = 0;
rc = tee_client_open_session(ctx, &sess_arg, NULL);
if (rc < 0 || sess_arg.ret != 0) {
pr_err("Failed to open TEE session err:%#x, rc:%d\n", sess_arg.ret, rc);
return rc;
}
*id = sess_arg.session;
return rc;
}
static int amd_pmf_tee_init(struct amd_pmf_dev *dev)
{
u32 size;
int ret;
dev->tee_ctx = tee_client_open_context(NULL, amd_pmf_amdtee_ta_match, NULL, NULL);
if (IS_ERR(dev->tee_ctx)) {
dev_err(dev->dev, "Failed to open TEE context\n");
return PTR_ERR(dev->tee_ctx);
}
ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id);
if (ret) {
dev_err(dev->dev, "Failed to open TA session (%d)\n", ret);
ret = -EINVAL;
goto out_ctx;
}
size = sizeof(struct ta_pmf_shared_memory) + dev->policy_sz;
dev->fw_shm_pool = tee_shm_alloc_kernel_buf(dev->tee_ctx, size);
if (IS_ERR(dev->fw_shm_pool)) {
dev_err(dev->dev, "Failed to alloc TEE shared memory\n");
ret = PTR_ERR(dev->fw_shm_pool);
goto out_sess;
}
dev->shbuf = tee_shm_get_va(dev->fw_shm_pool, 0);
if (IS_ERR(dev->shbuf)) {
dev_err(dev->dev, "Failed to get TEE virtual address\n");
ret = PTR_ERR(dev->shbuf);
goto out_shm;
}
dev_dbg(dev->dev, "TEE init done\n");
return 0;
out_shm:
tee_shm_free(dev->fw_shm_pool);
out_sess:
tee_client_close_session(dev->tee_ctx, dev->session_id);
out_ctx:
tee_client_close_context(dev->tee_ctx);
return ret;
}
static void amd_pmf_tee_deinit(struct amd_pmf_dev *dev)
{
tee_shm_free(dev->fw_shm_pool);
tee_client_close_session(dev->tee_ctx, dev->session_id);
tee_client_close_context(dev->tee_ctx);
}
int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
{
int ret;
ret = apmf_check_smart_pc(dev);
if (ret) {
/*
* Lets not return from here if Smart PC bit is not advertised in
* the BIOS. This way, there will be some amount of power savings
* to the user with static slider (if enabled).
*/
dev_info(dev->dev, "PMF Smart PC not advertised in BIOS!:%d\n", ret);
return -ENODEV;
}
ret = amd_pmf_tee_init(dev);
if (ret)
return ret;
INIT_DELAYED_WORK(&dev->pb_work, amd_pmf_invoke_cmd);
amd_pmf_set_dram_addr(dev, true);
amd_pmf_get_bios_buffer(dev);
dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL);
if (!dev->prev_data)
return -ENOMEM;
return dev->smart_pc_enabled;
}
void amd_pmf_deinit_smart_pc(struct amd_pmf_dev *dev)
{
if (pb_side_load)
amd_pmf_remove_pb(dev);
kfree(dev->prev_data);
kfree(dev->policy_buf);
cancel_delayed_work_sync(&dev->pb_work);
amd_pmf_tee_deinit(dev);
}

@@ -0,0 +1,317 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Wifi Frequency Band Manage Interface
* Copyright (C) 2023 Advanced Micro Devices
*/
#include <linux/acpi.h>
#include <linux/acpi_amd_wbrf.h>
/*
* Functions bit vector for WBRF method
*
* Bit 0: WBRF supported.
* Bit 1: Function 1 (Add / Remove frequency) is supported.
* Bit 2: Function 2 (Get frequency list) is supported.
*/
#define WBRF_ENABLED 0x0
#define WBRF_RECORD 0x1
#define WBRF_RETRIEVE 0x2
#define WBRF_REVISION 0x1
/*
* The data structure used for WBRF_RETRIEVE is not naturally aligned.
* And unfortunately the design has been settled down.
*/
struct amd_wbrf_ranges_out {
u32 num_of_ranges;
struct freq_band_range band_list[MAX_NUM_OF_WBRF_RANGES];
} __packed;
static const guid_t wifi_acpi_dsm_guid =
GUID_INIT(0x7b7656cf, 0xdc3d, 0x4c1c,
0x83, 0xe9, 0x66, 0xe7, 0x21, 0xde, 0x30, 0x70);
/*
* Used to notify consumers (currently the amdgpu driver) about
* wifi frequency band changes.
*/
static BLOCKING_NOTIFIER_HEAD(wbrf_chain_head);
static int wbrf_record(struct acpi_device *adev, uint8_t action, struct wbrf_ranges_in_out *in)
{
union acpi_object argv4;
union acpi_object *tmp;
union acpi_object *obj;
u32 num_of_ranges = 0;
u32 num_of_elements;
u32 arg_idx = 0;
int ret;
u32 i;
if (!in)
return -EINVAL;
for (i = 0; i < ARRAY_SIZE(in->band_list); i++) {
if (in->band_list[i].start && in->band_list[i].end)
num_of_ranges++;
}
/*
* The num_of_ranges value supplied by the caller in the "in" object
* is required to match the number of valid entries in its band_list
* array.
*/
if (num_of_ranges != in->num_of_ranges)
return -EINVAL;
/*
* Every input frequency band comes with two end points (start/end)
* and each is accounted as an element. Meanwhile the range count
* and action type are accounted as an element each.
* So, the total element count = 2 * num_of_ranges + 1 + 1.
*/
num_of_elements = 2 * num_of_ranges + 2;
tmp = kcalloc(num_of_elements, sizeof(*tmp), GFP_KERNEL);
if (!tmp)
return -ENOMEM;
argv4.package.type = ACPI_TYPE_PACKAGE;
argv4.package.count = num_of_elements;
argv4.package.elements = tmp;
/* save the number of ranges */
tmp[0].integer.type = ACPI_TYPE_INTEGER;
tmp[0].integer.value = num_of_ranges;
/* save the action(WBRF_RECORD_ADD/REMOVE/RETRIEVE) */
tmp[1].integer.type = ACPI_TYPE_INTEGER;
tmp[1].integer.value = action;
arg_idx = 2;
for (i = 0; i < ARRAY_SIZE(in->band_list); i++) {
if (!in->band_list[i].start || !in->band_list[i].end)
continue;
tmp[arg_idx].integer.type = ACPI_TYPE_INTEGER;
tmp[arg_idx++].integer.value = in->band_list[i].start;
tmp[arg_idx].integer.type = ACPI_TYPE_INTEGER;
tmp[arg_idx++].integer.value = in->band_list[i].end;
}
obj = acpi_evaluate_dsm(adev->handle, &wifi_acpi_dsm_guid,
WBRF_REVISION, WBRF_RECORD, &argv4);
if (!obj) {
/* don't leak tmp on _DSM failure; ACPI_FREE(NULL) is a no-op */
ret = -EINVAL;
goto out;
}
if (obj->type != ACPI_TYPE_INTEGER) {
ret = -EINVAL;
goto out;
}
ret = obj->integer.value;
if (ret)
ret = -EINVAL;
out:
ACPI_FREE(obj);
kfree(tmp);
return ret;
}
/**
* acpi_amd_wbrf_add_remove - add or remove the frequency band the device is using
*
* @dev: device pointer
* @action: add or remove the frequency band to/from the BIOS
* @in: input structure containing the frequency band the device is using
*
* Broadcast to other consumers the frequency band the device starts
* or stops using. Under the surface, the information is first cached
* in an internal buffer, then a notification is sent to all registered
* consumers so that they can retrieve that buffer and learn the latest
* active frequency bands. Consumers that haven't yet been registered
* can retrieve the information from the cache when they register.
*
* Return:
* 0 on successfully adding/removing the wifi frequency band.
* Returns a negative error code on failure.
*/
int acpi_amd_wbrf_add_remove(struct device *dev, uint8_t action, struct wbrf_ranges_in_out *in)
{
struct acpi_device *adev;
int ret;
adev = ACPI_COMPANION(dev);
if (!adev)
return -ENODEV;
ret = wbrf_record(adev, action, in);
if (ret)
return ret;
blocking_notifier_call_chain(&wbrf_chain_head, WBRF_CHANGED, NULL);
return 0;
}
EXPORT_SYMBOL_GPL(acpi_amd_wbrf_add_remove);
/**
* acpi_amd_wbrf_supported_producer - determine if the WBRF can be enabled
* for the device as a producer
*
* @dev: device pointer
*
* Check whether the platform is equipped with the necessary firmware
* support to enable WBRF for the device as a producer.
*
* Return:
* true if WBRF is supported, otherwise false
*/
bool acpi_amd_wbrf_supported_producer(struct device *dev)
{
struct acpi_device *adev;
adev = ACPI_COMPANION(dev);
if (!adev)
return false;
return acpi_check_dsm(adev->handle, &wifi_acpi_dsm_guid,
WBRF_REVISION, BIT(WBRF_RECORD));
}
EXPORT_SYMBOL_GPL(acpi_amd_wbrf_supported_producer);
/**
* acpi_amd_wbrf_supported_consumer - determine if the WBRF can be enabled
* for the device as a consumer
*
* @dev: device pointer
*
* Check whether the platform is equipped with the necessary firmware
* support to enable WBRF for the device as a consumer.
*
* Return:
* true if WBRF is supported, otherwise false.
*/
bool acpi_amd_wbrf_supported_consumer(struct device *dev)
{
struct acpi_device *adev;
adev = ACPI_COMPANION(dev);
if (!adev)
return false;
return acpi_check_dsm(adev->handle, &wifi_acpi_dsm_guid,
WBRF_REVISION, BIT(WBRF_RETRIEVE));
}
EXPORT_SYMBOL_GPL(acpi_amd_wbrf_supported_consumer);
/**
* amd_wbrf_retrieve_freq_band - retrieve current active frequency bands
*
* @dev: device pointer
* @out: output structure containing all the active frequency bands
*
* Retrieve the currently active frequency bands that have been
* broadcast by producers. The consumer calling this API should take
* proper action if any of those bands may cause RFI with the
* frequency band it uses itself.
*
* Return:
* 0 on successfully retrieving the wifi frequency bands.
* Returns a negative error code on failure.
*/
int amd_wbrf_retrieve_freq_band(struct device *dev, struct wbrf_ranges_in_out *out)
{
struct amd_wbrf_ranges_out acpi_out = {0};
struct acpi_device *adev;
union acpi_object *obj;
union acpi_object param;
int ret = 0;
adev = ACPI_COMPANION(dev);
if (!adev)
return -ENODEV;
param.type = ACPI_TYPE_STRING;
param.string.length = 0;
param.string.pointer = NULL;
obj = acpi_evaluate_dsm(adev->handle, &wifi_acpi_dsm_guid,
WBRF_REVISION, WBRF_RETRIEVE, &param);
if (!obj)
return -EINVAL;
/*
* The return buffer is with variable length and the format below:
* number_of_entries(1 DWORD): Number of entries
* start_freq of 1st entry(1 QWORD): Start frequency of the 1st entry
* end_freq of 1st entry(1 QWORD): End frequency of the 1st entry
* ...
* ...
* start_freq of the last entry(1 QWORD)
* end_freq of the last entry(1 QWORD)
*
* Thus the buffer length is determined by the number of entries.
* - For zero entry scenario, the buffer length will be 4 bytes.
* - For one entry scenario, the buffer length will be 20 bytes.
*/
if (obj->buffer.length > sizeof(acpi_out) || obj->buffer.length < 4) {
dev_err(dev, "Wrong sized WBRF information\n");
ret = -EINVAL;
goto out;
}
memcpy(&acpi_out, obj->buffer.pointer, obj->buffer.length);
out->num_of_ranges = acpi_out.num_of_ranges;
memcpy(out->band_list, acpi_out.band_list, sizeof(acpi_out.band_list));
out:
ACPI_FREE(obj);
return ret;
}
EXPORT_SYMBOL_GPL(amd_wbrf_retrieve_freq_band);
/**
* amd_wbrf_register_notifier - register for notifications of frequency
* band update
*
* @nb: driver notifier block
*
* The consumer should register itself via this API so that it can be
* notified of frequency band updates from producers.
*
* Return:
* 0 on successfully registering a consumer driver.
* Returns a negative error code for failure.
*/
int amd_wbrf_register_notifier(struct notifier_block *nb)
{
return blocking_notifier_chain_register(&wbrf_chain_head, nb);
}
EXPORT_SYMBOL_GPL(amd_wbrf_register_notifier);
/**
* amd_wbrf_unregister_notifier - unregister for notifications of
* frequency band update
*
* @nb: driver notifier block
*
* The consumer should call this API when it is no longer interested in
* the frequency band updates from producers. Usually, this should
* be performed during driver cleanup.
*
* Return:
* 0 on successfully unregistering a consumer driver.
* Returns a negative error code for failure.
*/
int amd_wbrf_unregister_notifier(struct notifier_block *nb)
{
return blocking_notifier_chain_unregister(&wbrf_chain_head, nb);
}
EXPORT_SYMBOL_GPL(amd_wbrf_unregister_notifier);


@@ -1816,9 +1816,8 @@ static void asus_dmi_check(void)
return;
/* On L1400B WLED control the sound card, don't mess with it ... */
if (strncmp(model, "L1400B", 6) == 0) {
if (strncmp(model, "L1400B", 6) == 0)
wlan_status = -1;
}
}
static bool asus_device_present;


@@ -4615,7 +4615,7 @@ fail_platform:
return err;
}
static int asus_wmi_remove(struct platform_device *device)
static void asus_wmi_remove(struct platform_device *device)
{
struct asus_wmi *asus;
@@ -4638,7 +4638,6 @@ static int asus_wmi_remove(struct platform_device *device)
platform_profile_remove();
kfree(asus);
return 0;
}
/* Platform driver - hibernate/resume callbacks *******************************/
@@ -4799,7 +4798,7 @@ int __init_or_module asus_wmi_register_driver(struct asus_wmi_driver *driver)
return -EBUSY;
platform_driver = &driver->platform_driver;
platform_driver->remove = asus_wmi_remove;
platform_driver->remove_new = asus_wmi_remove;
platform_driver->driver.owner = driver->owner;
platform_driver->driver.name = driver->name;
platform_driver->driver.pm = &asus_pm_ops;


@@ -429,7 +429,6 @@ static DEVICE_ATTR(lighting_control_state, 0644, show_control_state,
static int alienware_zone_init(struct platform_device *dev)
{
u8 zone;
char buffer[10];
char *name;
if (interface == WMAX) {
@@ -466,8 +465,7 @@ static int alienware_zone_init(struct platform_device *dev)
return -ENOMEM;
for (zone = 0; zone < quirks->num_zones; zone++) {
sprintf(buffer, "zone%02hhX", zone);
name = kstrdup(buffer, GFP_KERNEL);
name = kasprintf(GFP_KERNEL, "zone%02hhX", zone);
if (name == NULL)
return 1;
sysfs_attr_init(&zone_dev_attrs[zone].attr);


@@ -6,12 +6,16 @@
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/device.h>
#include <linux/dmi.h>
#include <linux/fs.h>
#include <linux/list.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/uaccess.h>
#include <linux/wmi.h>
#include <uapi/linux/wmi.h>
#include "dell-smbios.h"
#include "dell-wmi-descriptor.h"
@@ -32,7 +36,8 @@ struct wmi_smbios_priv {
struct list_head list;
struct wmi_device *wdev;
struct device *child;
u32 req_buf_size;
u64 req_buf_size;
struct miscdevice char_dev;
};
static LIST_HEAD(wmi_list);
@@ -108,48 +113,115 @@ out_wmi_call:
return ret;
}
static long dell_smbios_wmi_filter(struct wmi_device *wdev, unsigned int cmd,
struct wmi_ioctl_buffer *arg)
static int dell_smbios_wmi_open(struct inode *inode, struct file *filp)
{
struct wmi_smbios_priv *priv;
int ret = 0;
switch (cmd) {
case DELL_WMI_SMBIOS_CMD:
mutex_lock(&call_mutex);
priv = dev_get_drvdata(&wdev->dev);
if (!priv) {
ret = -ENODEV;
goto fail_smbios_cmd;
}
memcpy(priv->buf, arg, priv->req_buf_size);
if (dell_smbios_call_filter(&wdev->dev, &priv->buf->std)) {
dev_err(&wdev->dev, "Invalid call %d/%d:%8x\n",
priv->buf->std.cmd_class,
priv->buf->std.cmd_select,
priv->buf->std.input[0]);
ret = -EFAULT;
goto fail_smbios_cmd;
}
ret = run_smbios_call(priv->wdev);
if (ret)
goto fail_smbios_cmd;
memcpy(arg, priv->buf, priv->req_buf_size);
fail_smbios_cmd:
mutex_unlock(&call_mutex);
break;
default:
ret = -ENOIOCTLCMD;
priv = container_of(filp->private_data, struct wmi_smbios_priv, char_dev);
filp->private_data = priv;
return nonseekable_open(inode, filp);
}
static ssize_t dell_smbios_wmi_read(struct file *filp, char __user *buffer, size_t length,
loff_t *offset)
{
struct wmi_smbios_priv *priv = filp->private_data;
return simple_read_from_buffer(buffer, length, offset, &priv->req_buf_size,
sizeof(priv->req_buf_size));
}
static long dell_smbios_wmi_do_ioctl(struct wmi_smbios_priv *priv,
struct dell_wmi_smbios_buffer __user *arg)
{
long ret;
if (get_user(priv->buf->length, &arg->length))
return -EFAULT;
if (priv->buf->length < priv->req_buf_size)
return -EINVAL;
/* if it's too big, warn; the driver will only use what is needed */
if (priv->buf->length > priv->req_buf_size)
dev_warn(&priv->wdev->dev, "Buffer %llu is bigger than required %llu\n",
priv->buf->length, priv->req_buf_size);
if (copy_from_user(priv->buf, arg, priv->req_buf_size))
return -EFAULT;
if (dell_smbios_call_filter(&priv->wdev->dev, &priv->buf->std)) {
dev_err(&priv->wdev->dev, "Invalid call %d/%d:%8x\n",
priv->buf->std.cmd_class,
priv->buf->std.cmd_select,
priv->buf->std.input[0]);
return -EINVAL;
}
ret = run_smbios_call(priv->wdev);
if (ret)
return ret;
if (copy_to_user(arg, priv->buf, priv->req_buf_size))
return -EFAULT;
return 0;
}
static long dell_smbios_wmi_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
struct dell_wmi_smbios_buffer __user *input = (struct dell_wmi_smbios_buffer __user *)arg;
struct wmi_smbios_priv *priv = filp->private_data;
long ret;
if (cmd != DELL_WMI_SMBIOS_CMD)
return -ENOIOCTLCMD;
mutex_lock(&call_mutex);
ret = dell_smbios_wmi_do_ioctl(priv, input);
mutex_unlock(&call_mutex);
return ret;
}
static const struct file_operations dell_smbios_wmi_fops = {
.owner = THIS_MODULE,
.open = dell_smbios_wmi_open,
.read = dell_smbios_wmi_read,
.unlocked_ioctl = dell_smbios_wmi_ioctl,
.compat_ioctl = compat_ptr_ioctl,
};
static void dell_smbios_wmi_unregister_chardev(void *data)
{
struct miscdevice *char_dev = data;
misc_deregister(char_dev);
}
static int dell_smbios_wmi_register_chardev(struct wmi_smbios_priv *priv)
{
int ret;
priv->char_dev.minor = MISC_DYNAMIC_MINOR;
priv->char_dev.name = "wmi/dell-smbios";
priv->char_dev.fops = &dell_smbios_wmi_fops;
priv->char_dev.mode = 0444;
ret = misc_register(&priv->char_dev);
if (ret < 0)
return ret;
return devm_add_action_or_reset(&priv->wdev->dev, dell_smbios_wmi_unregister_chardev,
&priv->char_dev);
}
static int dell_smbios_wmi_probe(struct wmi_device *wdev, const void *context)
{
struct wmi_driver *wdriver =
container_of(wdev->dev.driver, struct wmi_driver, driver);
struct wmi_smbios_priv *priv;
u32 hotfix;
u32 buffer_size, hotfix;
int count;
int ret;
@@ -162,62 +234,58 @@ static int dell_smbios_wmi_probe(struct wmi_device *wdev, const void *context)
if (!priv)
return -ENOMEM;
priv->wdev = wdev;
dev_set_drvdata(&wdev->dev, priv);
/* WMI buffer size will be either 4k or 32k depending on machine */
if (!dell_wmi_get_size(&priv->req_buf_size))
if (!dell_wmi_get_size(&buffer_size))
return -EPROBE_DEFER;
priv->req_buf_size = buffer_size;
/* some SMBIOS calls fail unless BIOS contains hotfix */
if (!dell_wmi_get_hotfix(&hotfix))
return -EPROBE_DEFER;
if (!hotfix) {
if (!hotfix)
dev_warn(&wdev->dev,
"WMI SMBIOS userspace interface not supported(%u), try upgrading to a newer BIOS\n",
hotfix);
wdriver->filter_callback = NULL;
}
/* add in the length object we will use internally with ioctl */
priv->req_buf_size += sizeof(u64);
ret = set_required_buffer_size(wdev, priv->req_buf_size);
if (ret)
return ret;
count = get_order(priv->req_buf_size);
priv->buf = (void *)__get_free_pages(GFP_KERNEL, count);
priv->buf = (void *)devm_get_free_pages(&wdev->dev, GFP_KERNEL, count);
if (!priv->buf)
return -ENOMEM;
ret = dell_smbios_wmi_register_chardev(priv);
if (ret)
return ret;
/* ID is used by dell-smbios to set priority of drivers */
wdev->dev.id = 1;
ret = dell_smbios_register_device(&wdev->dev, &dell_smbios_wmi_call);
if (ret)
goto fail_register;
return ret;
priv->wdev = wdev;
dev_set_drvdata(&wdev->dev, priv);
mutex_lock(&list_mutex);
list_add_tail(&priv->list, &wmi_list);
mutex_unlock(&list_mutex);
return 0;
fail_register:
free_pages((unsigned long)priv->buf, count);
return ret;
}
static void dell_smbios_wmi_remove(struct wmi_device *wdev)
{
struct wmi_smbios_priv *priv = dev_get_drvdata(&wdev->dev);
int count;
mutex_lock(&call_mutex);
mutex_lock(&list_mutex);
list_del(&priv->list);
mutex_unlock(&list_mutex);
dell_smbios_unregister_device(&wdev->dev);
count = get_order(priv->req_buf_size);
free_pages((unsigned long)priv->buf, count);
mutex_unlock(&call_mutex);
}
@@ -256,7 +324,6 @@ static struct wmi_driver dell_smbios_wmi_driver = {
.probe = dell_smbios_wmi_probe,
.remove = dell_smbios_wmi_remove,
.id_table = dell_smbios_wmi_id_table,
.filter_callback = dell_smbios_wmi_filter,
};
int init_dell_smbios_wmi(void)


@@ -7,7 +7,6 @@
*/
#include "bioscfg.h"
#include <asm-generic/posix_types.h>
GET_INSTANCE_ID(password);
/*


@@ -1478,7 +1478,7 @@ static int __init hp_wmi_bios_setup(struct platform_device *device)
return 0;
}
static int __exit hp_wmi_bios_remove(struct platform_device *device)
static void __exit hp_wmi_bios_remove(struct platform_device *device)
{
int i;
@@ -1502,8 +1502,6 @@ static int __exit hp_wmi_bios_remove(struct platform_device *device)
if (platform_profile_support)
platform_profile_remove();
return 0;
}
static int hp_wmi_resume_handler(struct device *device)
@@ -1560,7 +1558,7 @@ static struct platform_driver hp_wmi_driver __refdata = {
.pm = &hp_wmi_pm_ops,
.dev_groups = hp_wmi_groups,
},
.remove = __exit_p(hp_wmi_bios_remove),
.remove_new = __exit_p(hp_wmi_bios_remove),
};
static umode_t hp_wmi_hwmon_is_visible(const void *data,


@@ -7,6 +7,7 @@ config INTEL_PMC_CORE
tristate "Intel PMC Core driver"
depends on PCI
depends on ACPI
depends on INTEL_PMT_TELEMETRY
help
The Intel Platform Controller Hub for Intel Core SoCs provides access
to Power Management Controller registers via various interfaces. This


@@ -4,7 +4,7 @@
#
intel_pmc_core-y := core.o core_ssram.o spt.o cnp.o \
icl.o tgl.o adl.o mtl.o
icl.o tgl.o adl.o mtl.o arl.o lnl.o
obj-$(CONFIG_INTEL_PMC_CORE) += intel_pmc_core.o
intel_pmc_core_pltdrv-y := pltdrv.o
obj-$(CONFIG_INTEL_PMC_CORE) += intel_pmc_core_pltdrv.o


@@ -307,6 +307,8 @@ const struct pmc_reg_map adl_reg_map = {
.lpm_sts = adl_lpm_maps,
.lpm_status_offset = ADL_LPM_STATUS_OFFSET,
.lpm_live_status_offset = ADL_LPM_LIVE_STATUS_OFFSET,
.pson_residency_offset = TGL_PSON_RESIDENCY_OFFSET,
.pson_residency_counter_step = TGL_PSON_RES_COUNTER_STEP,
};
int adl_core_init(struct pmc_dev *pmcdev)
@@ -322,5 +324,7 @@ int adl_core_init(struct pmc_dev *pmcdev)
if (ret)
return ret;
pmc_core_get_low_power_modes(pmcdev);
return 0;
}


@@ -0,0 +1,729 @@
// SPDX-License-Identifier: GPL-2.0
/*
* This file contains platform specific structure definitions
* and init function used by Arrow Lake PCH.
*
* Copyright (c) 2022, Intel Corporation.
* All Rights Reserved.
*
*/
#include <linux/pci.h>
#include "core.h"
#include "../pmt/telemetry.h"
/* PMC SSRAM PMT Telemetry GUID */
#define IOEP_LPM_REQ_GUID 0x5077612
#define SOCS_LPM_REQ_GUID 0x8478657
#define PCHS_LPM_REQ_GUID 0x9684572
static const u8 ARL_LPM_REG_INDEX[] = {0, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20};
const struct pmc_bit_map arl_socs_ltr_show_map[] = {
{"SOUTHPORT_A", CNP_PMC_LTR_SPA},
{"SOUTHPORT_B", CNP_PMC_LTR_SPB},
{"SATA", CNP_PMC_LTR_SATA},
{"GIGABIT_ETHERNET", CNP_PMC_LTR_GBE},
{"XHCI", CNP_PMC_LTR_XHCI},
{"SOUTHPORT_F", ADL_PMC_LTR_SPF},
{"ME", CNP_PMC_LTR_ME},
/* EVA is Enterprise Value Add, doesn't really exist on PCH */
{"SATA1", CNP_PMC_LTR_EVA},
{"SOUTHPORT_C", CNP_PMC_LTR_SPC},
{"HD_AUDIO", CNP_PMC_LTR_AZ},
{"CNV", CNP_PMC_LTR_CNV},
{"LPSS", CNP_PMC_LTR_LPSS},
{"SOUTHPORT_D", CNP_PMC_LTR_SPD},
{"SOUTHPORT_E", CNP_PMC_LTR_SPE},
{"SATA2", CNP_PMC_LTR_CAM},
{"ESPI", CNP_PMC_LTR_ESPI},
{"SCC", CNP_PMC_LTR_SCC},
{"ISH", CNP_PMC_LTR_ISH},
{"UFSX2", CNP_PMC_LTR_UFSX2},
{"EMMC", CNP_PMC_LTR_EMMC},
/*
* Check intel_pmc_core_ids[] users of cnp_reg_map for
* a list of core SoCs using this.
*/
{"WIGIG", ICL_PMC_LTR_WIGIG},
{"THC0", TGL_PMC_LTR_THC0},
{"THC1", TGL_PMC_LTR_THC1},
{"SOUTHPORT_G", MTL_PMC_LTR_SPG},
{"Reserved", ARL_SOCS_PMC_LTR_RESERVED},
{"IOE_PMC", MTL_PMC_LTR_IOE_PMC},
{"DMI3", ARL_PMC_LTR_DMI3},
/* Below two cannot be used for LTR_IGNORE */
{"CURRENT_PLATFORM", CNP_PMC_LTR_CUR_PLT},
{"AGGREGATED_SYSTEM", CNP_PMC_LTR_CUR_ASLT},
{}
};
const struct pmc_bit_map arl_socs_clocksource_status_map[] = {
{"AON2_OFF_STS", BIT(0)},
{"AON3_OFF_STS", BIT(1)},
{"AON4_OFF_STS", BIT(2)},
{"AON5_OFF_STS", BIT(3)},
{"AON1_OFF_STS", BIT(4)},
{"XTAL_LVM_OFF_STS", BIT(5)},
{"AON3_SPL_OFF_STS", BIT(9)},
{"DMI3FPW_0_PLL_OFF_STS", BIT(10)},
{"DMI3FPW_1_PLL_OFF_STS", BIT(11)},
{"G5X16FPW_0_PLL_OFF_STS", BIT(14)},
{"G5X16FPW_1_PLL_OFF_STS", BIT(15)},
{"G5X16FPW_2_PLL_OFF_STS", BIT(16)},
{"XTAL_AGGR_OFF_STS", BIT(17)},
{"USB2_PLL_OFF_STS", BIT(18)},
{"G5X16FPW_3_PLL_OFF_STS", BIT(19)},
{"BCLK_EXT_INJ_CLK_OFF_STS", BIT(20)},
{"PHY_OC_EXT_INJ_CLK_OFF_STS", BIT(21)},
{"FILTER_PLL_OFF_STS", BIT(22)},
{"FABRIC_PLL_OFF_STS", BIT(25)},
{"SOC_PLL_OFF_STS", BIT(26)},
{"PCIEFAB_PLL_OFF_STS", BIT(27)},
{"REF_PLL_OFF_STS", BIT(28)},
{"GENLOCK_FILTER_PLL_OFF_STS", BIT(30)},
{"RTC_PLL_OFF_STS", BIT(31)},
{}
};
const struct pmc_bit_map arl_socs_power_gating_status_0_map[] = {
{"PMC_PGD0_PG_STS", BIT(0)},
{"DMI_PGD0_PG_STS", BIT(1)},
{"ESPISPI_PGD0_PG_STS", BIT(2)},
{"XHCI_PGD0_PG_STS", BIT(3)},
{"SPA_PGD0_PG_STS", BIT(4)},
{"SPB_PGD0_PG_STS", BIT(5)},
{"SPC_PGD0_PG_STS", BIT(6)},
{"GBE_PGD0_PG_STS", BIT(7)},
{"SATA_PGD0_PG_STS", BIT(8)},
{"FIACPCB_P5x16_PGD0_PG_STS", BIT(9)},
{"G5x16FPW_PGD0_PG_STS", BIT(10)},
{"FIA_D_PGD0_PG_STS", BIT(11)},
{"MPFPW2_PGD0_PG_STS", BIT(12)},
{"SPD_PGD0_PG_STS", BIT(13)},
{"LPSS_PGD0_PG_STS", BIT(14)},
{"LPC_PGD0_PG_STS", BIT(15)},
{"SMB_PGD0_PG_STS", BIT(16)},
{"ISH_PGD0_PG_STS", BIT(17)},
{"P2S_PGD0_PG_STS", BIT(18)},
{"NPK_PGD0_PG_STS", BIT(19)},
{"DMI3FPW_PGD0_PG_STS", BIT(20)},
{"GBETSN1_PGD0_PG_STS", BIT(21)},
{"FUSE_PGD0_PG_STS", BIT(22)},
{"FIACPCB_D_PGD0_PG_STS", BIT(23)},
{"FUSEGPSB_PGD0_PG_STS", BIT(24)},
{"XDCI_PGD0_PG_STS", BIT(25)},
{"EXI_PGD0_PG_STS", BIT(26)},
{"CSE_PGD0_PG_STS", BIT(27)},
{"KVMCC_PGD0_PG_STS", BIT(28)},
{"PMT_PGD0_PG_STS", BIT(29)},
{"CLINK_PGD0_PG_STS", BIT(30)},
{"PTIO_PGD0_PG_STS", BIT(31)},
{}
};
const struct pmc_bit_map arl_socs_power_gating_status_1_map[] = {
{"USBR0_PGD0_PG_STS", BIT(0)},
{"SUSRAM_PGD0_PG_STS", BIT(1)},
{"SMT1_PGD0_PG_STS", BIT(2)},
{"FIACPCB_U_PGD0_PG_STS", BIT(3)},
{"SMS2_PGD0_PG_STS", BIT(4)},
{"SMS1_PGD0_PG_STS", BIT(5)},
{"CSMERTC_PGD0_PG_STS", BIT(6)},
{"CSMEPSF_PGD0_PG_STS", BIT(7)},
{"SBR0_PGD0_PG_STS", BIT(8)},
{"SBR1_PGD0_PG_STS", BIT(9)},
{"SBR2_PGD0_PG_STS", BIT(10)},
{"SBR3_PGD0_PG_STS", BIT(11)},
{"MPFPW1_PGD0_PG_STS", BIT(12)},
{"SBR5_PGD0_PG_STS", BIT(13)},
{"FIA_X_PGD0_PG_STS", BIT(14)},
{"FIACPCB_X_PGD0_PG_STS", BIT(15)},
{"SBRG_PGD0_PG_STS", BIT(16)},
{"SOC_D2D_PGD1_PG_STS", BIT(17)},
{"PSF4_PGD0_PG_STS", BIT(18)},
{"CNVI_PGD0_PG_STS", BIT(19)},
{"UFSX2_PGD0_PG_STS", BIT(20)},
{"ENDBG_PGD0_PG_STS", BIT(21)},
{"DBG_PSF_PGD0_PG_STS", BIT(22)},
{"SBR6_PGD0_PG_STS", BIT(23)},
{"SOC_D2D_PGD2_PG_STS", BIT(24)},
{"NPK_PGD1_PG_STS", BIT(25)},
{"DMI3_PGD0_PG_STS", BIT(26)},
{"DBG_SBR_PGD0_PG_STS", BIT(27)},
{"SOC_D2D_PGD0_PG_STS", BIT(28)},
{"PSF6_PGD0_PG_STS", BIT(29)},
{"PSF7_PGD0_PG_STS", BIT(30)},
{"MPFPW3_PGD0_PG_STS", BIT(31)},
{}
};
const struct pmc_bit_map arl_socs_power_gating_status_2_map[] = {
{"PSF8_PGD0_PG_STS", BIT(0)},
{"FIA_PGD0_PG_STS", BIT(1)},
{"SOC_D2D_PGD3_PG_STS", BIT(2)},
{"FIA_U_PGD0_PG_STS", BIT(3)},
{"TAM_PGD0_PG_STS", BIT(4)},
{"GBETSN_PGD0_PG_STS", BIT(5)},
{"TBTLSX_PGD0_PG_STS", BIT(6)},
{"THC0_PGD0_PG_STS", BIT(7)},
{"THC1_PGD0_PG_STS", BIT(8)},
{"PMC_PGD1_PG_STS", BIT(9)},
{"FIA_P5x16_PGD0_PG_STS", BIT(10)},
{"GNA_PGD0_PG_STS", BIT(11)},
{"ACE_PGD0_PG_STS", BIT(12)},
{"ACE_PGD1_PG_STS", BIT(13)},
{"ACE_PGD2_PG_STS", BIT(14)},
{"ACE_PGD3_PG_STS", BIT(15)},
{"ACE_PGD4_PG_STS", BIT(16)},
{"ACE_PGD5_PG_STS", BIT(17)},
{"ACE_PGD6_PG_STS", BIT(18)},
{"ACE_PGD7_PG_STS", BIT(19)},
{"ACE_PGD8_PG_STS", BIT(20)},
{"FIA_PGS_PGD0_PG_STS", BIT(21)},
{"FIACPCB_PGS_PGD0_PG_STS", BIT(22)},
{"FUSEPMSB_PGD0_PG_STS", BIT(23)},
{}
};
const struct pmc_bit_map arl_socs_d3_status_2_map[] = {
{"CSMERTC_D3_STS", BIT(1)},
{"SUSRAM_D3_STS", BIT(2)},
{"CSE_D3_STS", BIT(4)},
{"KVMCC_D3_STS", BIT(5)},
{"USBR0_D3_STS", BIT(6)},
{"ISH_D3_STS", BIT(7)},
{"SMT1_D3_STS", BIT(8)},
{"SMT2_D3_STS", BIT(9)},
{"SMT3_D3_STS", BIT(10)},
{"GNA_D3_STS", BIT(12)},
{"CLINK_D3_STS", BIT(14)},
{"PTIO_D3_STS", BIT(16)},
{"PMT_D3_STS", BIT(17)},
{"SMS1_D3_STS", BIT(18)},
{"SMS2_D3_STS", BIT(19)},
{}
};
const struct pmc_bit_map arl_socs_d3_status_3_map[] = {
{"GBETSN_D3_STS", BIT(13)},
{"THC0_D3_STS", BIT(14)},
{"THC1_D3_STS", BIT(15)},
{"ACE_D3_STS", BIT(23)},
{}
};
const struct pmc_bit_map arl_socs_vnn_req_status_3_map[] = {
{"DTS0_VNN_REQ_STS", BIT(7)},
{"GPIOCOM5_VNN_REQ_STS", BIT(11)},
{}
};
const struct pmc_bit_map *arl_socs_lpm_maps[] = {
arl_socs_clocksource_status_map,
arl_socs_power_gating_status_0_map,
arl_socs_power_gating_status_1_map,
arl_socs_power_gating_status_2_map,
mtl_socm_d3_status_0_map,
mtl_socm_d3_status_1_map,
arl_socs_d3_status_2_map,
arl_socs_d3_status_3_map,
mtl_socm_vnn_req_status_0_map,
mtl_socm_vnn_req_status_1_map,
mtl_socm_vnn_req_status_2_map,
arl_socs_vnn_req_status_3_map,
mtl_socm_vnn_misc_status_map,
mtl_socm_signal_status_map,
NULL
};
const struct pmc_bit_map arl_socs_pfear_map[] = {
{"RSVD64", BIT(0)},
{"RSVD65", BIT(1)},
{"RSVD66", BIT(2)},
{"RSVD67", BIT(3)},
{"RSVD68", BIT(4)},
{"GBETSN", BIT(5)},
{"TBTLSX", BIT(6)},
{}
};
const struct pmc_bit_map *ext_arl_socs_pfear_map[] = {
mtl_socm_pfear_map,
arl_socs_pfear_map,
NULL
};
const struct pmc_reg_map arl_socs_reg_map = {
.pfear_sts = ext_arl_socs_pfear_map,
.ppfear_buckets = ARL_SOCS_PPFEAR_NUM_ENTRIES,
.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
.lpm_sts = arl_socs_lpm_maps,
.ltr_ignore_max = ARL_SOCS_NUM_IP_IGN_ALLOWED,
.ltr_show_sts = arl_socs_ltr_show_map,
.slp_s0_offset = CNP_PMC_SLP_S0_RES_COUNTER_OFFSET,
.slp_s0_res_counter_step = TGL_PMC_SLP_S0_RES_COUNTER_STEP,
.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
.msr_sts = msr_map,
.ltr_ignore_offset = CNP_PMC_LTR_IGNORE_OFFSET,
.regmap_length = MTL_SOC_PMC_MMIO_REG_LEN,
.ppfear0_offset = CNP_PMC_HOST_PPFEAR0A,
.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
.lpm_priority_offset = MTL_LPM_PRI_OFFSET,
.lpm_en_offset = MTL_LPM_EN_OFFSET,
.lpm_residency_offset = MTL_LPM_RESIDENCY_OFFSET,
.lpm_status_offset = MTL_LPM_STATUS_OFFSET,
.lpm_sts_latch_en_offset = MTL_LPM_STATUS_LATCH_EN_OFFSET,
.lpm_live_status_offset = MTL_LPM_LIVE_STATUS_OFFSET,
.lpm_num_maps = ADL_LPM_NUM_MAPS,
.lpm_reg_index = ARL_LPM_REG_INDEX,
.etr3_offset = ETR3_OFFSET,
.pson_residency_offset = TGL_PSON_RESIDENCY_OFFSET,
.pson_residency_counter_step = TGL_PSON_RES_COUNTER_STEP,
};
const struct pmc_bit_map arl_pchs_ltr_show_map[] = {
{"SOUTHPORT_A", CNP_PMC_LTR_SPA},
{"SOUTHPORT_B", CNP_PMC_LTR_SPB},
{"SATA", CNP_PMC_LTR_SATA},
{"GIGABIT_ETHERNET", CNP_PMC_LTR_GBE},
{"XHCI", CNP_PMC_LTR_XHCI},
{"SOUTHPORT_F", ADL_PMC_LTR_SPF},
{"ME", CNP_PMC_LTR_ME},
/* EVA is Enterprise Value Add, doesn't really exist on PCH */
{"SATA1", CNP_PMC_LTR_EVA},
{"SOUTHPORT_C", CNP_PMC_LTR_SPC},
{"HD_AUDIO", CNP_PMC_LTR_AZ},
{"CNV", CNP_PMC_LTR_CNV},
{"LPSS", CNP_PMC_LTR_LPSS},
{"SOUTHPORT_D", CNP_PMC_LTR_SPD},
{"SOUTHPORT_E", CNP_PMC_LTR_SPE},
{"SATA2", CNP_PMC_LTR_CAM},
{"ESPI", CNP_PMC_LTR_ESPI},
{"SCC", CNP_PMC_LTR_SCC},
{"ISH", CNP_PMC_LTR_ISH},
{"UFSX2", CNP_PMC_LTR_UFSX2},
{"EMMC", CNP_PMC_LTR_EMMC},
/*
* Check intel_pmc_core_ids[] users of cnp_reg_map for
* a list of core SoCs using this.
*/
{"WIGIG", ICL_PMC_LTR_WIGIG},
{"THC0", TGL_PMC_LTR_THC0},
{"THC1", TGL_PMC_LTR_THC1},
{"SOUTHPORT_G", MTL_PMC_LTR_SPG},
{"ESE", MTL_PMC_LTR_ESE},
{"IOE_PMC", MTL_PMC_LTR_IOE_PMC},
{"DMI3", ARL_PMC_LTR_DMI3},
/* Below two cannot be used for LTR_IGNORE */
{"CURRENT_PLATFORM", CNP_PMC_LTR_CUR_PLT},
{"AGGREGATED_SYSTEM", CNP_PMC_LTR_CUR_ASLT},
{}
};
const struct pmc_bit_map arl_pchs_clocksource_status_map[] = {
{"AON2_OFF_STS", BIT(0)},
{"AON3_OFF_STS", BIT(1)},
{"AON4_OFF_STS", BIT(2)},
{"AON2_SPL_OFF_STS", BIT(3)},
{"AONL_OFF_STS", BIT(4)},
{"XTAL_LVM_OFF_STS", BIT(5)},
{"AON5_ACRO_OFF_STS", BIT(6)},
{"AON6_ACRO_OFF_STS", BIT(7)},
{"USB3_PLL_OFF_STS", BIT(8)},
{"ACRO_OFF_STS", BIT(9)},
{"AUDIO_PLL_OFF_STS", BIT(10)},
{"MAIN_CRO_OFF_STS", BIT(11)},
{"MAIN_DIVIDER_OFF_STS", BIT(12)},
{"REF_PLL_NON_OC_OFF_STS", BIT(13)},
{"DMI_PLL_OFF_STS", BIT(14)},
{"PHY_EXT_INJ_OFF_STS", BIT(15)},
{"AON6_MCRO_OFF_STS", BIT(16)},
{"XTAL_AGGR_OFF_STS", BIT(17)},
{"USB2_PLL_OFF_STS", BIT(18)},
{"TSN0_PLL_OFF_STS", BIT(19)},
{"TSN1_PLL_OFF_STS", BIT(20)},
{"GBE_PLL_OFF_STS", BIT(21)},
{"SATA_PLL_OFF_STS", BIT(22)},
{"PCIE0_PLL_OFF_STS", BIT(23)},
{"PCIE1_PLL_OFF_STS", BIT(24)},
{"PCIE2_PLL_OFF_STS", BIT(26)},
{"PCIE3_PLL_OFF_STS", BIT(27)},
{"REF_PLL_OFF_STS", BIT(28)},
{"PCIE4_PLL_OFF_STS", BIT(29)},
{"PCIE5_PLL_OFF_STS", BIT(30)},
{"REF38P4_PLL_OFF_STS", BIT(31)},
{}
};
const struct pmc_bit_map arl_pchs_power_gating_status_0_map[] = {
{"PMC_PGD0_PG_STS", BIT(0)},
{"DMI_PGD0_PG_STS", BIT(1)},
{"ESPISPI_PGD0_PG_STS", BIT(2)},
{"XHCI_PGD0_PG_STS", BIT(3)},
{"SPA_PGD0_PG_STS", BIT(4)},
{"SPB_PGD0_PG_STS", BIT(5)},
{"SPC_PGD0_PG_STS", BIT(6)},
{"GBE_PGD0_PG_STS", BIT(7)},
{"SATA_PGD0_PG_STS", BIT(8)},
{"FIA_X_PGD0_PG_STS", BIT(9)},
{"MPFPW4_PGD0_PG_STS", BIT(10)},
{"EAH_PGD0_PG_STS", BIT(11)},
{"MPFPW1_PGD0_PG_STS", BIT(12)},
{"SPD_PGD0_PG_STS", BIT(13)},
{"LPSS_PGD0_PG_STS", BIT(14)},
{"LPC_PGD0_PG_STS", BIT(15)},
{"SMB_PGD0_PG_STS", BIT(16)},
{"ISH_PGD0_PG_STS", BIT(17)},
{"P2S_PGD0_PG_STS", BIT(18)},
{"NPK_PGD0_PG_STS", BIT(19)},
{"U3FPW1_PGD0_PG_STS", BIT(20)},
{"PECI_PGD0_PG_STS", BIT(21)},
{"FUSE_PGD0_PG_STS", BIT(22)},
{"SBR8_PGD0_PG_STS", BIT(23)},
{"EXE_PGD0_PG_STS", BIT(24)},
{"XDCI_PGD0_PG_STS", BIT(25)},
{"EXI_PGD0_PG_STS", BIT(26)},
{"CSE_PGD0_PG_STS", BIT(27)},
{"KVMCC_PGD0_PG_STS", BIT(28)},
{"PMT_PGD0_PG_STS", BIT(29)},
{"CLINK_PGD0_PG_STS", BIT(30)},
{"PTIO_PGD0_PG_STS", BIT(31)},
{}
};
const struct pmc_bit_map arl_pchs_power_gating_status_1_map[] = {
{"USBR0_PGD0_PG_STS", BIT(0)},
{"SUSRAM_PGD0_PG_STS", BIT(1)},
{"SMT1_PGD0_PG_STS", BIT(2)},
{"SMT4_PGD0_PG_STS", BIT(3)},
{"SMS2_PGD0_PG_STS", BIT(4)},
{"SMS1_PGD0_PG_STS", BIT(5)},
{"CSMERTC_PGD0_PG_STS", BIT(6)},
{"CSMEPSF_PGD0_PG_STS", BIT(7)},
{"SBR0_PGD0_PG_STS", BIT(8)},
{"SBR1_PGD0_PG_STS", BIT(9)},
{"SBR2_PGD0_PG_STS", BIT(10)},
{"SBR3_PGD0_PG_STS", BIT(11)},
{"SBR4_PGD0_PG_STS", BIT(12)},
{"SBR5_PGD0_PG_STS", BIT(13)},
{"MPFPW3_PGD0_PG_STS", BIT(14)},
{"PSF1_PGD0_PG_STS", BIT(15)},
{"PSF2_PGD0_PG_STS", BIT(16)},
{"PSF3_PGD0_PG_STS", BIT(17)},
{"PSF4_PGD0_PG_STS", BIT(18)},
{"CNVI_PGD0_PG_STS", BIT(19)},
{"DMI3_PGD0_PG_STS", BIT(20)},
{"ENDBG_PGD0_PG_STS", BIT(21)},
{"DBG_SBR_PGD0_PG_STS", BIT(22)},
{"SBR6_PGD0_PG_STS", BIT(23)},
{"SBR7_PGD0_PG_STS", BIT(24)},
{"NPK_PGD1_PG_STS", BIT(25)},
{"U3FPW3_PGD0_PG_STS", BIT(26)},
{"MPFPW2_PGD0_PG_STS", BIT(27)},
{"MPFPW7_PGD0_PG_STS", BIT(28)},
{"GBETSN1_PGD0_PG_STS", BIT(29)},
{"PSF7_PGD0_PG_STS", BIT(30)},
{"FIA2_PGD0_PG_STS", BIT(31)},
{}
};
const struct pmc_bit_map arl_pchs_power_gating_status_2_map[] = {
{"U3FPW2_PGD0_PG_STS", BIT(0)},
{"FIA_PGD0_PG_STS", BIT(1)},
{"FIACPCB_X_PGD0_PG_STS", BIT(2)},
{"FIA1_PGD0_PG_STS", BIT(3)},
{"TAM_PGD0_PG_STS", BIT(4)},
{"GBETSN_PGD0_PG_STS", BIT(5)},
{"SBR9_PGD0_PG_STS", BIT(6)},
{"THC0_PGD0_PG_STS", BIT(7)},
{"THC1_PGD0_PG_STS", BIT(8)},
{"PMC_PGD1_PG_STS", BIT(9)},
{"DBC_PGD0_PG_STS", BIT(10)},
{"DBG_PSF_PGD0_PG_STS", BIT(11)},
{"SPF_PGD0_PG_STS", BIT(12)},
{"ACE_PGD0_PG_STS", BIT(13)},
{"ACE_PGD1_PG_STS", BIT(14)},
{"ACE_PGD2_PG_STS", BIT(15)},
{"ACE_PGD3_PG_STS", BIT(16)},
{"ACE_PGD4_PG_STS", BIT(17)},
{"ACE_PGD5_PG_STS", BIT(18)},
{"ACE_PGD6_PG_STS", BIT(19)},
{"ACE_PGD7_PG_STS", BIT(20)},
{"SPE_PGD0_PG_STS", BIT(21)},
{"MPFPW5_PG_STS", BIT(22)},
{}
};
const struct pmc_bit_map arl_pchs_d3_status_0_map[] = {
{"SPF_D3_STS", BIT(0)},
{"LPSS_D3_STS", BIT(3)},
{"XDCI_D3_STS", BIT(4)},
{"XHCI_D3_STS", BIT(5)},
{"SPA_D3_STS", BIT(12)},
{"SPB_D3_STS", BIT(13)},
{"SPC_D3_STS", BIT(14)},
{"SPD_D3_STS", BIT(15)},
{"SPE_D3_STS", BIT(16)},
{"ESPISPI_D3_STS", BIT(18)},
{"SATA_D3_STS", BIT(20)},
{"PSTH_D3_STS", BIT(21)},
{"DMI_D3_STS", BIT(22)},
{}
};
const struct pmc_bit_map arl_pchs_d3_status_1_map[] = {
{"GBETSN1_D3_STS", BIT(14)},
{"GBE_D3_STS", BIT(19)},
{"ITSS_D3_STS", BIT(23)},
{"P2S_D3_STS", BIT(24)},
{"CNVI_D3_STS", BIT(27)},
{}
};
const struct pmc_bit_map arl_pchs_d3_status_2_map[] = {
{"CSMERTC_D3_STS", BIT(1)},
{"SUSRAM_D3_STS", BIT(2)},
{"CSE_D3_STS", BIT(4)},
{"KVMCC_D3_STS", BIT(5)},
{"USBR0_D3_STS", BIT(6)},
{"ISH_D3_STS", BIT(7)},
{"SMT1_D3_STS", BIT(8)},
{"SMT2_D3_STS", BIT(9)},
{"SMT3_D3_STS", BIT(10)},
{"SMT4_D3_STS", BIT(11)},
{"SMT5_D3_STS", BIT(12)},
{"SMT6_D3_STS", BIT(13)},
{"CLINK_D3_STS", BIT(14)},
{"PTIO_D3_STS", BIT(16)},
{"PMT_D3_STS", BIT(17)},
{"SMS1_D3_STS", BIT(18)},
{"SMS2_D3_STS", BIT(19)},
{}
};
const struct pmc_bit_map arl_pchs_d3_status_3_map[] = {
{"ESE_D3_STS", BIT(3)},
{"GBETSN_D3_STS", BIT(13)},
{"THC0_D3_STS", BIT(14)},
{"THC1_D3_STS", BIT(15)},
{"ACE_D3_STS", BIT(23)},
{}
};
const struct pmc_bit_map arl_pchs_vnn_req_status_0_map[] = {
{"FIA_VNN_REQ_STS", BIT(17)},
{"ESPISPI_VNN_REQ_STS", BIT(18)},
{}
};
const struct pmc_bit_map arl_pchs_vnn_req_status_1_map[] = {
{"NPK_VNN_REQ_STS", BIT(4)},
{"DFXAGG_VNN_REQ_STS", BIT(8)},
{"EXI_VNN_REQ_STS", BIT(9)},
{"GBE_VNN_REQ_STS", BIT(19)},
{"SMB_VNN_REQ_STS", BIT(25)},
{"LPC_VNN_REQ_STS", BIT(26)},
{"CNVI_VNN_REQ_STS", BIT(27)},
{}
};
const struct pmc_bit_map arl_pchs_vnn_req_status_2_map[] = {
{"FIA2_VNN_REQ_STS", BIT(0)},
{"CSMERTC_VNN_REQ_STS", BIT(1)},
{"CSE_VNN_REQ_STS", BIT(4)},
{"ISH_VNN_REQ_STS", BIT(7)},
{"SMT1_VNN_REQ_STS", BIT(8)},
{"SMT4_VNN_REQ_STS", BIT(11)},
{"CLINK_VNN_REQ_STS", BIT(14)},
{"SMS1_VNN_REQ_STS", BIT(18)},
{"SMS2_VNN_REQ_STS", BIT(19)},
{"GPIOCOM4_VNN_REQ_STS", BIT(20)},
{"GPIOCOM3_VNN_REQ_STS", BIT(21)},
{"GPIOCOM2_VNN_REQ_STS", BIT(22)},
{"GPIOCOM1_VNN_REQ_STS", BIT(23)},
{"GPIOCOM0_VNN_REQ_STS", BIT(24)},
{}
};
const struct pmc_bit_map arl_pchs_vnn_req_status_3_map[] = {
{"ESE_VNN_REQ_STS", BIT(3)},
{"DTS0_VNN_REQ_STS", BIT(7)},
{"GPIOCOM5_VNN_REQ_STS", BIT(11)},
{"FIA1_VNN_REQ_STS", BIT(12)},
{}
};
const struct pmc_bit_map arl_pchs_vnn_misc_status_map[] = {
{"CPU_C10_REQ_STS", BIT(0)},
{"TS_OFF_REQ_STS", BIT(1)},
{"PNDE_MET_REQ_STS", BIT(2)},
{"PCIE_DEEP_PM_REQ_STS", BIT(3)},
{"FW_THROTTLE_ALLOWED_REQ_STS", BIT(4)},
{"ISH_VNNAON_REQ_STS", BIT(7)},
{"IOE_COND_MET_S02I2_0_REQ_STS", BIT(8)},
{"IOE_COND_MET_S02I2_1_REQ_STS", BIT(9)},
{"IOE_COND_MET_S02I2_2_REQ_STS", BIT(10)},
{"PLT_GREATER_REQ_STS", BIT(11)},
{"PMC_IDLE_FB_OCP_REQ_STS", BIT(13)},
{"PM_SYNC_STATES_REQ_STS", BIT(14)},
{"EA_REQ_STS", BIT(15)},
{"DMI_CLKREQ_B_REQ_STS", BIT(16)},
{"BRK_EV_EN_REQ_STS", BIT(17)},
{"AUTO_DEMO_EN_REQ_STS", BIT(18)},
{"ITSS_CLK_SRC_REQ_STS", BIT(19)},
{"ARC_IDLE_REQ_STS", BIT(21)},
{"DMI_IN_REQ_STS", BIT(22)},
{"FIA_DEEP_PM_REQ_STS", BIT(23)},
{"XDCI_ATTACHED_REQ_STS", BIT(24)},
{"ARC_INTERRUPT_WAKE_REQ_STS", BIT(25)},
{"PRE_WAKE0_REQ_STS", BIT(27)},
{"PRE_WAKE1_REQ_STS", BIT(28)},
{"PRE_WAKE2_EN_REQ_STS", BIT(29)},
{"CNVI_V1P05_REQ_STS", BIT(31)},
{}
};
const struct pmc_bit_map arl_pchs_signal_status_map[] = {
{"LSX_Wake0_STS", BIT(0)},
{"LSX_Wake1_STS", BIT(1)},
{"LSX_Wake2_STS", BIT(2)},
{"LSX_Wake3_STS", BIT(3)},
{"LSX_Wake4_STS", BIT(4)},
{"LSX_Wake5_STS", BIT(5)},
{"LSX_Wake6_STS", BIT(6)},
{"LSX_Wake7_STS", BIT(7)},
{"Int_Timer_SS_Wake0_STS", BIT(8)},
{"Int_Timer_SS_Wake1_STS", BIT(9)},
{"Int_Timer_SS_Wake0_STS", BIT(10)},
{"Int_Timer_SS_Wake1_STS", BIT(11)},
{"Int_Timer_SS_Wake2_STS", BIT(12)},
{"Int_Timer_SS_Wake3_STS", BIT(13)},
{"Int_Timer_SS_Wake4_STS", BIT(14)},
{"Int_Timer_SS_Wake5_STS", BIT(15)},
{}
};
const struct pmc_bit_map *arl_pchs_lpm_maps[] = {
arl_pchs_clocksource_status_map,
arl_pchs_power_gating_status_0_map,
arl_pchs_power_gating_status_1_map,
arl_pchs_power_gating_status_2_map,
arl_pchs_d3_status_0_map,
arl_pchs_d3_status_1_map,
arl_pchs_d3_status_2_map,
arl_pchs_d3_status_3_map,
arl_pchs_vnn_req_status_0_map,
arl_pchs_vnn_req_status_1_map,
arl_pchs_vnn_req_status_2_map,
arl_pchs_vnn_req_status_3_map,
arl_pchs_vnn_misc_status_map,
arl_pchs_signal_status_map,
NULL
};
const struct pmc_reg_map arl_pchs_reg_map = {
.pfear_sts = ext_arl_socs_pfear_map,
.ppfear_buckets = ARL_SOCS_PPFEAR_NUM_ENTRIES,
.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
.ltr_ignore_max = ARL_SOCS_NUM_IP_IGN_ALLOWED,
.lpm_sts = arl_pchs_lpm_maps,
.ltr_show_sts = arl_pchs_ltr_show_map,
.slp_s0_offset = CNP_PMC_SLP_S0_RES_COUNTER_OFFSET,
.slp_s0_res_counter_step = TGL_PMC_SLP_S0_RES_COUNTER_STEP,
.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
.msr_sts = msr_map,
.ltr_ignore_offset = CNP_PMC_LTR_IGNORE_OFFSET,
.regmap_length = ARL_PCH_PMC_MMIO_REG_LEN,
.ppfear0_offset = CNP_PMC_HOST_PPFEAR0A,
.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
.lpm_priority_offset = MTL_LPM_PRI_OFFSET,
.lpm_en_offset = MTL_LPM_EN_OFFSET,
.lpm_residency_offset = MTL_LPM_RESIDENCY_OFFSET,
.lpm_status_offset = MTL_LPM_STATUS_OFFSET,
.lpm_sts_latch_en_offset = MTL_LPM_STATUS_LATCH_EN_OFFSET,
.lpm_live_status_offset = MTL_LPM_LIVE_STATUS_OFFSET,
.lpm_num_maps = ADL_LPM_NUM_MAPS,
.lpm_reg_index = ARL_LPM_REG_INDEX,
.etr3_offset = ETR3_OFFSET,
};
#define PMC_DEVID_SOCS 0xae7f
#define PMC_DEVID_IOEP 0x7ecf
#define PMC_DEVID_PCHS 0x7f27
static struct pmc_info arl_pmc_info_list[] = {
{
.guid = IOEP_LPM_REQ_GUID,
.devid = PMC_DEVID_IOEP,
.map = &mtl_ioep_reg_map,
},
{
.guid = SOCS_LPM_REQ_GUID,
.devid = PMC_DEVID_SOCS,
.map = &arl_socs_reg_map,
},
{
.guid = PCHS_LPM_REQ_GUID,
.devid = PMC_DEVID_PCHS,
.map = &arl_pchs_reg_map,
},
{}
};
#define ARL_NPU_PCI_DEV 0xad1d
/*
* Set power state of select devices that do not have drivers to D3
* so that they do not block Package C entry.
*/
static void arl_d3_fixup(void)
{
pmc_core_set_device_d3(ARL_NPU_PCI_DEV);
}
static int arl_resume(struct pmc_dev *pmcdev)
{
arl_d3_fixup();
pmc_core_send_ltr_ignore(pmcdev, 3, 0);
return pmc_core_resume_common(pmcdev);
}
int arl_core_init(struct pmc_dev *pmcdev)
{
struct pmc *pmc = pmcdev->pmcs[PMC_IDX_SOC];
int ret;
int func = 0;
bool ssram_init = true;
arl_d3_fixup();
pmcdev->suspend = cnl_suspend;
pmcdev->resume = arl_resume;
pmcdev->regmap_list = arl_pmc_info_list;
/*
* If ssram init fails use legacy method to at least get the
* primary PMC
*/
ret = pmc_core_ssram_init(pmcdev, func);
if (ret) {
ssram_init = false;
pmc->map = &arl_socs_reg_map;
ret = get_primary_reg_base(pmc);
if (ret)
return ret;
}
pmc_core_get_low_power_modes(pmcdev);
pmc_core_punit_pmt_init(pmcdev, ARL_PMT_DMU_GUID);
if (ssram_init) {
ret = pmc_core_ssram_get_lpm_reqs(pmcdev);
if (ret)
return ret;
}
return 0;
}


@@ -234,5 +234,7 @@ int cnp_core_init(struct pmc_dev *pmcdev)
if (ret)
return ret;
pmc_core_get_low_power_modes(pmcdev);
return 0;
}


@@ -20,6 +20,7 @@
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/suspend.h>
#include <linux/units.h>
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
@@ -27,6 +28,7 @@
#include <asm/tsc.h>
#include "core.h"
#include "../pmt/telemetry.h"
/* Maximum number of modes supported by platforms that have low power mode capability */
const char *pmc_lpm_modes[] = {
@@ -206,6 +208,20 @@ static int pmc_core_dev_state_get(void *data, u64 *val)
DEFINE_DEBUGFS_ATTRIBUTE(pmc_core_dev_state, pmc_core_dev_state_get, NULL, "%llu\n");
static int pmc_core_pson_residency_get(void *data, u64 *val)
{
struct pmc *pmc = data;
const struct pmc_reg_map *map = pmc->map;
u32 value;
value = pmc_core_reg_read(pmc, map->pson_residency_offset);
*val = (u64)value * map->pson_residency_counter_step;
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(pmc_core_pson_residency, pmc_core_pson_residency_get, NULL, "%llu\n");
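As a standalone illustration (not part of the driver), the conversion in pmc_core_pson_residency_get() above is just a raw counter scaled by a per-platform step into microseconds (the debugfs file added later in this series is named `pson_residency_usec`). A minimal sketch, using the Tiger Lake step value `TGL_PSON_RES_COUNTER_STEP` (0x7A) from core.h as sample input:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone mirror of the math in pmc_core_pson_residency_get():
 * the PSON residency register holds a raw count, and the per-platform
 * pson_residency_counter_step converts it to microseconds. The
 * function name here is illustrative, not a kernel symbol. */
static uint64_t pson_residency_usec(uint32_t raw, uint32_t counter_step)
{
	return (uint64_t)raw * counter_step;
}
```

With the Tiger Lake step of 0x7A (122), a raw count of 1000 corresponds to 122000 us of PSON residency.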
static int pmc_core_check_read_lock_bit(struct pmc *pmc)
{
u32 value;
@@ -731,7 +747,7 @@ static int pmc_core_substate_l_sts_regs_show(struct seq_file *s, void *unused)
}
DEFINE_SHOW_ATTRIBUTE(pmc_core_substate_l_sts_regs);
static void pmc_core_substate_req_header_show(struct seq_file *s)
static void pmc_core_substate_req_header_show(struct seq_file *s, int pmc_index)
{
struct pmc_dev *pmcdev = s->private;
int i, mode;
@@ -746,72 +762,126 @@ static void pmc_core_substate_req_header_show(struct seq_file *s)
static int pmc_core_substate_req_regs_show(struct seq_file *s, void *unused)
{
struct pmc_dev *pmcdev = s->private;
struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
const struct pmc_bit_map **maps = pmc->map->lpm_sts;
const struct pmc_bit_map *map;
const int num_maps = pmc->map->lpm_num_maps;
u32 sts_offset = pmc->map->lpm_status_offset;
u32 *lpm_req_regs = pmc->lpm_req_regs;
int mp;
u32 sts_offset;
u32 *lpm_req_regs;
int num_maps, mp, pmc_index;
/* Display the header */
pmc_core_substate_req_header_show(s);
for (pmc_index = 0; pmc_index < ARRAY_SIZE(pmcdev->pmcs); ++pmc_index) {
struct pmc *pmc = pmcdev->pmcs[pmc_index];
const struct pmc_bit_map **maps;
/* Loop over maps */
for (mp = 0; mp < num_maps; mp++) {
u32 req_mask = 0;
u32 lpm_status;
int mode, idx, i, len = 32;
if (!pmc)
continue;
maps = pmc->map->lpm_sts;
num_maps = pmc->map->lpm_num_maps;
sts_offset = pmc->map->lpm_status_offset;
lpm_req_regs = pmc->lpm_req_regs;
/*
* Capture the requirements and create a mask so that we only
* show an element if it's required for at least one of the
* enabled low power modes
* When there are multiple PMCs, though the PMC may exist, the
* requirement register discovery could have failed so check
* before accessing.
*/
pmc_for_each_mode(idx, mode, pmcdev)
req_mask |= lpm_req_regs[mp + (mode * num_maps)];
if (!lpm_req_regs)
continue;
/* Get the last latched status for this map */
lpm_status = pmc_core_reg_read(pmc, sts_offset + (mp * 4));
/* Display the header */
pmc_core_substate_req_header_show(s, pmc_index);
/* Loop over elements in this map */
map = maps[mp];
for (i = 0; map[i].name && i < len; i++) {
u32 bit_mask = map[i].bit_mask;
/* Loop over maps */
for (mp = 0; mp < num_maps; mp++) {
u32 req_mask = 0;
u32 lpm_status;
const struct pmc_bit_map *map;
int mode, idx, i, len = 32;
if (!(bit_mask & req_mask))
/*
* Not required for any enabled states
* so don't display
*/
continue;
/*
* Capture the requirements and create a mask so that we only
* show an element if it's required for at least one of the
* enabled low power modes
*/
pmc_for_each_mode(idx, mode, pmcdev)
req_mask |= lpm_req_regs[mp + (mode * num_maps)];
/* Display the element name in the first column */
seq_printf(s, "%30s |", map[i].name);
/* Get the last latched status for this map */
lpm_status = pmc_core_reg_read(pmc, sts_offset + (mp * 4));
/* Loop over the enabled states and display if required */
pmc_for_each_mode(idx, mode, pmcdev) {
if (lpm_req_regs[mp + (mode * num_maps)] & bit_mask)
seq_printf(s, " %9s |",
"Required");
else
seq_printf(s, " %9s |", " ");
/* Loop over elements in this map */
map = maps[mp];
for (i = 0; map[i].name && i < len; i++) {
u32 bit_mask = map[i].bit_mask;
if (!(bit_mask & req_mask)) {
/*
* Not required for any enabled states
* so don't display
*/
continue;
}
/* Display the element name in the first column */
seq_printf(s, "pmc%d: %26s |", pmc_index, map[i].name);
/* Loop over the enabled states and display if required */
pmc_for_each_mode(idx, mode, pmcdev) {
bool required = lpm_req_regs[mp + (mode * num_maps)] &
bit_mask;
seq_printf(s, " %9s |", required ? "Required" : " ");
}
/* In Status column, show the last captured state of this agent */
seq_printf(s, " %9s |", lpm_status & bit_mask ? "Yes" : " ");
seq_puts(s, "\n");
}
/* In Status column, show the last captured state of this agent */
if (lpm_status & bit_mask)
seq_printf(s, " %9s |", "Yes");
else
seq_printf(s, " %9s |", " ");
seq_puts(s, "\n");
}
}
return 0;
}
DEFINE_SHOW_ATTRIBUTE(pmc_core_substate_req_regs);
static unsigned int pmc_core_get_crystal_freq(void)
{
unsigned int eax_denominator, ebx_numerator, ecx_hz, edx;
if (boot_cpu_data.cpuid_level < 0x15)
return 0;
eax_denominator = ebx_numerator = ecx_hz = edx = 0;
/* CPUID 15H TSC/Crystal ratio, plus optionally Crystal Hz */
cpuid(0x15, &eax_denominator, &ebx_numerator, &ecx_hz, &edx);
if (ebx_numerator == 0 || eax_denominator == 0)
return 0;
return ecx_hz;
}
static int pmc_core_die_c6_us_show(struct seq_file *s, void *unused)
{
struct pmc_dev *pmcdev = s->private;
u64 die_c6_res, count;
int ret;
if (!pmcdev->crystal_freq) {
dev_warn_once(&pmcdev->pdev->dev, "Crystal frequency unavailable\n");
return -ENXIO;
}
ret = pmt_telem_read(pmcdev->punit_ep, pmcdev->die_c6_offset,
&count, 1);
if (ret)
return ret;
die_c6_res = div64_u64(count * HZ_PER_MHZ, pmcdev->crystal_freq);
seq_printf(s, "%llu\n", die_c6_res);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(pmc_core_die_c6_us);
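The telemetry count read in pmc_core_die_c6_us_show() above ticks at the crystal frequency reported by CPUID leaf 0x15, so converting it to microseconds is `count * HZ_PER_MHZ / crystal_freq`. A standalone sketch of that arithmetic (the 38.4 MHz crystal in the test values is an assumed example frequency, not something stated in this patch):

```c
#include <assert.h>
#include <stdint.h>

#define HZ_PER_MHZ 1000000ULL

/* Standalone mirror of the div64_u64() math in
 * pmc_core_die_c6_us_show(): a counter ticking at crystal_hz,
 * expressed in microseconds. Illustrative helper, not a kernel
 * symbol. */
static uint64_t die_c6_us(uint64_t count, unsigned int crystal_hz)
{
	return (count * HZ_PER_MHZ) / crystal_hz;
}
```

For example, with an assumed 38.4 MHz crystal, a count equal to one second's worth of ticks yields 1000000 us.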
static int pmc_core_lpm_latch_mode_show(struct seq_file *s, void *unused)
{
struct pmc_dev *pmcdev = s->private;
@@ -969,9 +1039,8 @@ static bool pmc_core_pri_verify(u32 lpm_pri, u8 *mode_order)
return true;
}
static void pmc_core_get_low_power_modes(struct platform_device *pdev)
void pmc_core_get_low_power_modes(struct pmc_dev *pmcdev)
{
struct pmc_dev *pmcdev = platform_get_drvdata(pdev);
struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
u8 pri_order[LPM_MAX_NUM_MODES] = LPM_DEFAULT_PRI;
u8 mode_order[LPM_MAX_NUM_MODES];
@@ -1003,7 +1072,8 @@ static void pmc_core_get_low_power_modes(struct platform_device *pdev)
for (mode = 0; mode < LPM_MAX_NUM_MODES; mode++)
pri_order[mode_order[mode]] = mode;
else
dev_warn(&pdev->dev, "Assuming a default substate order for this platform\n");
dev_warn(&pmcdev->pdev->dev,
"Assuming a default substate order for this platform\n");
/*
* Loop through all modes from lowest to highest priority,
@@ -1039,6 +1109,69 @@ int get_primary_reg_base(struct pmc *pmc)
return 0;
}
void pmc_core_punit_pmt_init(struct pmc_dev *pmcdev, u32 guid)
{
struct telem_endpoint *ep;
struct pci_dev *pcidev;
pcidev = pci_get_domain_bus_and_slot(0, 0, PCI_DEVFN(10, 0));
if (!pcidev) {
dev_err(&pmcdev->pdev->dev, "PUNIT PMT device not found.");
return;
}
ep = pmt_telem_find_and_register_endpoint(pcidev, guid, 0);
pci_dev_put(pcidev);
if (IS_ERR(ep)) {
dev_err(&pmcdev->pdev->dev,
"pmc_core: couldn't get DMU telem endpoint %ld",
PTR_ERR(ep));
return;
}
pmcdev->punit_ep = ep;
pmcdev->has_die_c6 = true;
pmcdev->die_c6_offset = MTL_PMT_DMU_DIE_C6_OFFSET;
}
void pmc_core_set_device_d3(unsigned int device)
{
struct pci_dev *pcidev;
pcidev = pci_get_device(PCI_VENDOR_ID_INTEL, device, NULL);
if (pcidev) {
if (!device_trylock(&pcidev->dev)) {
pci_dev_put(pcidev);
return;
}
if (!pcidev->dev.driver) {
dev_info(&pcidev->dev, "Setting to D3hot\n");
pci_set_power_state(pcidev, PCI_D3hot);
}
device_unlock(&pcidev->dev);
pci_dev_put(pcidev);
}
}
static bool pmc_core_is_pson_residency_enabled(struct pmc_dev *pmcdev)
{
struct platform_device *pdev = pmcdev->pdev;
struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
u8 val;
if (!adev)
return false;
if (fwnode_property_read_u8(acpi_fwnode_handle(adev),
"intel-cec-pson-switching-enabled-in-s0",
&val))
return false;
return val == 1;
}
static void pmc_core_dbgfs_unregister(struct pmc_dev *pmcdev)
{
debugfs_remove_recursive(pmcdev->dbgfs_dir);
@@ -1108,6 +1241,17 @@ static void pmc_core_dbgfs_register(struct pmc_dev *pmcdev)
pmcdev->dbgfs_dir, pmcdev,
&pmc_core_substate_req_regs_fops);
}
if (primary_pmc->map->pson_residency_offset && pmc_core_is_pson_residency_enabled(pmcdev)) {
debugfs_create_file("pson_residency_usec", 0444,
pmcdev->dbgfs_dir, primary_pmc, &pmc_core_pson_residency);
}
if (pmcdev->has_die_c6) {
debugfs_create_file("die_c6_us_show", 0444,
pmcdev->dbgfs_dir, pmcdev,
&pmc_core_die_c6_us_fops);
}
}
static const struct x86_cpu_id intel_pmc_core_ids[] = {
@@ -1120,18 +1264,20 @@ static const struct x86_cpu_id intel_pmc_core_ids[] = {
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_NNPI, icl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, cnp_core_init),
X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE_L, cnp_core_init),
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, tgl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, tgl_l_core_init),
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, tgl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT, tgl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT, tgl_l_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, icl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ROCKETLAKE, tgl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, tgl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_GRACEMONT, tgl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, tgl_l_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_GRACEMONT, tgl_l_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, adl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, tgl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, tgl_l_core_init),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, adl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S, adl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, mtl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(ARROWLAKE, arl_core_init),
X86_MATCH_INTEL_FAM6_MODEL(LUNARLAKE_M, lnl_core_init),
{}
};
@@ -1202,6 +1348,10 @@ static void pmc_core_clean_structure(struct platform_device *pdev)
pci_dev_put(pmcdev->ssram_pcidev);
pci_disable_device(pmcdev->ssram_pcidev);
}
if (pmcdev->punit_ep)
pmt_telem_unregister_endpoint(pmcdev->punit_ep);
platform_set_drvdata(pdev, NULL);
mutex_destroy(&pmcdev->lock);
}
@@ -1222,6 +1372,8 @@ static int pmc_core_probe(struct platform_device *pdev)
if (!pmcdev)
return -ENOMEM;
pmcdev->crystal_freq = pmc_core_get_crystal_freq();
platform_set_drvdata(pdev, pmcdev);
pmcdev->pdev = pdev;
@@ -1253,7 +1405,6 @@ static int pmc_core_probe(struct platform_device *pdev)
}
pmcdev->pmc_xram_read_bit = pmc_core_check_read_lock_bit(primary_pmc);
pmc_core_get_low_power_modes(pdev);
pmc_core_do_dmi_quirks(primary_pmc);
pmc_core_dbgfs_register(pmcdev);


@@ -16,6 +16,8 @@
#include <linux/bits.h>
#include <linux/platform_device.h>
struct telem_endpoint;
#define SLP_S0_RES_COUNTER_MASK GENMASK(31, 0)
#define PMC_BASE_ADDR_DEFAULT 0xFE000000
@@ -221,6 +223,10 @@ enum ppfear_regs {
#define TGL_LPM_PRI_OFFSET 0x1C7C
#define TGL_LPM_NUM_MAPS 6
/* Tigerlake PSON residency register */
#define TGL_PSON_RESIDENCY_OFFSET 0x18f8
#define TGL_PSON_RES_COUNTER_STEP 0x7A
/* Extended Test Mode Register 3 (CNL and later) */
#define ETR3_OFFSET 0x1048
#define ETR3_CF9GR BIT(20)
@@ -257,10 +263,25 @@ enum ppfear_regs {
#define MTL_SOCM_NUM_IP_IGN_ALLOWED 25
#define MTL_SOC_PMC_MMIO_REG_LEN 0x2708
#define MTL_PMC_LTR_SPG 0x1B74
#define ARL_SOCS_PMC_LTR_RESERVED 0x1B88
#define ARL_SOCS_NUM_IP_IGN_ALLOWED 26
#define ARL_PMC_LTR_DMI3 0x1BE4
#define ARL_PCH_PMC_MMIO_REG_LEN 0x2720
/* Meteor Lake PGD PFET Enable Ack Status */
#define MTL_SOCM_PPFEAR_NUM_ENTRIES 8
#define MTL_IOE_PPFEAR_NUM_ENTRIES 10
#define ARL_SOCS_PPFEAR_NUM_ENTRIES 9
/* Die C6 from PUNIT telemetry */
#define MTL_PMT_DMU_DIE_C6_OFFSET 15
#define MTL_PMT_DMU_GUID 0x1A067102
#define ARL_PMT_DMU_GUID 0x1A06A000
#define LNL_PMC_MMIO_REG_LEN 0x2708
#define LNL_PMC_LTR_OSSE 0x1B88
#define LNL_NUM_IP_IGN_ALLOWED 27
#define LNL_PPFEAR_NUM_ENTRIES 12
extern const char *pmc_lpm_modes[];
@@ -320,6 +341,9 @@ struct pmc_reg_map {
const u32 lpm_status_offset;
const u32 lpm_live_status_offset;
const u32 etr3_offset;
const u8 *lpm_reg_index;
const u32 pson_residency_offset;
const u32 pson_residency_counter_step;
};
/**
@@ -329,6 +353,7 @@ struct pmc_reg_map {
* specific attributes
*/
struct pmc_info {
u32 guid;
u16 devid;
const struct pmc_reg_map *map;
};
@@ -355,6 +380,7 @@ struct pmc {
* @devs: pointer to an array of pmc pointers
* @pdev: pointer to platform_device struct
* @ssram_pcidev: pointer to pci device struct for the PMC SSRAM
* @crystal_freq: crystal frequency from cpuid
* @dbgfs_dir: path to debugfs interface
* @pmc_xram_read_bit: flag to indicate whether PMC XRAM shadow registers
* used to read MPHY PG and PLL status are available
@@ -373,6 +399,7 @@ struct pmc_dev {
struct dentry *dbgfs_dir;
struct platform_device *pdev;
struct pci_dev *ssram_pcidev;
unsigned int crystal_freq;
int pmc_xram_read_bit;
struct mutex lock; /* generic mutex lock for PMC Core */
@@ -425,6 +452,7 @@ extern const struct pmc_bit_map tgl_vnn_misc_status_map[];
extern const struct pmc_bit_map tgl_signal_status_map[];
extern const struct pmc_bit_map *tgl_lpm_maps[];
extern const struct pmc_reg_map tgl_reg_map;
extern const struct pmc_reg_map tgl_h_reg_map;
extern const struct pmc_bit_map adl_pfear_map[];
extern const struct pmc_bit_map *ext_adl_pfear_map[];
extern const struct pmc_bit_map adl_ltr_show_map[];
@@ -486,21 +514,80 @@ extern const struct pmc_bit_map mtl_ioem_power_gating_status_1_map[];
extern const struct pmc_bit_map mtl_ioem_vnn_req_status_1_map[];
extern const struct pmc_bit_map *mtl_ioem_lpm_maps[];
extern const struct pmc_reg_map mtl_ioem_reg_map;
extern const struct pmc_reg_map lnl_socm_reg_map;
/* LNL */
extern const struct pmc_bit_map lnl_ltr_show_map[];
extern const struct pmc_bit_map lnl_clocksource_status_map[];
extern const struct pmc_bit_map lnl_power_gating_status_0_map[];
extern const struct pmc_bit_map lnl_power_gating_status_1_map[];
extern const struct pmc_bit_map lnl_power_gating_status_2_map[];
extern const struct pmc_bit_map lnl_d3_status_0_map[];
extern const struct pmc_bit_map lnl_d3_status_1_map[];
extern const struct pmc_bit_map lnl_d3_status_2_map[];
extern const struct pmc_bit_map lnl_d3_status_3_map[];
extern const struct pmc_bit_map lnl_vnn_req_status_0_map[];
extern const struct pmc_bit_map lnl_vnn_req_status_1_map[];
extern const struct pmc_bit_map lnl_vnn_req_status_2_map[];
extern const struct pmc_bit_map lnl_vnn_req_status_3_map[];
extern const struct pmc_bit_map lnl_vnn_misc_status_map[];
extern const struct pmc_bit_map *lnl_lpm_maps[];
extern const struct pmc_bit_map lnl_pfear_map[];
extern const struct pmc_bit_map *ext_lnl_pfear_map[];
/* ARL */
extern const struct pmc_bit_map arl_socs_ltr_show_map[];
extern const struct pmc_bit_map arl_socs_clocksource_status_map[];
extern const struct pmc_bit_map arl_socs_power_gating_status_0_map[];
extern const struct pmc_bit_map arl_socs_power_gating_status_1_map[];
extern const struct pmc_bit_map arl_socs_power_gating_status_2_map[];
extern const struct pmc_bit_map arl_socs_d3_status_2_map[];
extern const struct pmc_bit_map arl_socs_d3_status_3_map[];
extern const struct pmc_bit_map arl_socs_vnn_req_status_3_map[];
extern const struct pmc_bit_map *arl_socs_lpm_maps[];
extern const struct pmc_bit_map arl_socs_pfear_map[];
extern const struct pmc_bit_map *ext_arl_socs_pfear_map[];
extern const struct pmc_reg_map arl_socs_reg_map;
extern const struct pmc_bit_map arl_pchs_ltr_show_map[];
extern const struct pmc_bit_map arl_pchs_clocksource_status_map[];
extern const struct pmc_bit_map arl_pchs_power_gating_status_0_map[];
extern const struct pmc_bit_map arl_pchs_power_gating_status_1_map[];
extern const struct pmc_bit_map arl_pchs_power_gating_status_2_map[];
extern const struct pmc_bit_map arl_pchs_d3_status_0_map[];
extern const struct pmc_bit_map arl_pchs_d3_status_1_map[];
extern const struct pmc_bit_map arl_pchs_d3_status_2_map[];
extern const struct pmc_bit_map arl_pchs_d3_status_3_map[];
extern const struct pmc_bit_map arl_pchs_vnn_req_status_0_map[];
extern const struct pmc_bit_map arl_pchs_vnn_req_status_1_map[];
extern const struct pmc_bit_map arl_pchs_vnn_req_status_2_map[];
extern const struct pmc_bit_map arl_pchs_vnn_req_status_3_map[];
extern const struct pmc_bit_map arl_pchs_vnn_misc_status_map[];
extern const struct pmc_bit_map arl_pchs_signal_status_map[];
extern const struct pmc_bit_map *arl_pchs_lpm_maps[];
extern const struct pmc_reg_map arl_pchs_reg_map;
extern void pmc_core_get_tgl_lpm_reqs(struct platform_device *pdev);
extern int pmc_core_ssram_get_lpm_reqs(struct pmc_dev *pmcdev);
int pmc_core_send_ltr_ignore(struct pmc_dev *pmcdev, u32 value, int ignore);
int pmc_core_resume_common(struct pmc_dev *pmcdev);
int get_primary_reg_base(struct pmc *pmc);
extern void pmc_core_get_low_power_modes(struct pmc_dev *pmcdev);
extern void pmc_core_punit_pmt_init(struct pmc_dev *pmcdev, u32 guid);
extern void pmc_core_set_device_d3(unsigned int device);
extern void pmc_core_ssram_init(struct pmc_dev *pmcdev);
extern int pmc_core_ssram_init(struct pmc_dev *pmcdev, int func);
int spt_core_init(struct pmc_dev *pmcdev);
int cnp_core_init(struct pmc_dev *pmcdev);
int icl_core_init(struct pmc_dev *pmcdev);
int tgl_core_init(struct pmc_dev *pmcdev);
int tgl_l_core_init(struct pmc_dev *pmcdev);
int tgl_core_generic_init(struct pmc_dev *pmcdev, int pch_tp);
int adl_core_init(struct pmc_dev *pmcdev);
int mtl_core_init(struct pmc_dev *pmcdev);
int arl_core_init(struct pmc_dev *pmcdev);
int lnl_core_init(struct pmc_dev *pmcdev);
void cnl_suspend(struct pmc_dev *pmcdev);
int cnl_resume(struct pmc_dev *pmcdev);


@@ -8,10 +8,13 @@
*
*/
#include <linux/cleanup.h>
#include <linux/pci.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include "core.h"
#include "../vsec.h"
#include "../pmt/telemetry.h"
#define SSRAM_HDR_SIZE 0x100
#define SSRAM_PWRM_OFFSET 0x14
@@ -21,6 +24,185 @@
#define SSRAM_IOE_OFFSET 0x68
#define SSRAM_DEVID_OFFSET 0x70
/* PCH query */
#define LPM_HEADER_OFFSET 1
#define LPM_REG_COUNT 28
#define LPM_MODE_OFFSET 1
DEFINE_FREE(pmc_core_iounmap, void __iomem *, iounmap(_T));
static u32 pmc_core_find_guid(struct pmc_info *list, const struct pmc_reg_map *map)
{
for (; list->map; ++list)
if (list->map == map)
return list->guid;
return 0;
}
static int pmc_core_get_lpm_req(struct pmc_dev *pmcdev, struct pmc *pmc)
{
struct telem_endpoint *ep;
const u8 *lpm_indices;
int num_maps, mode_offset = 0;
int ret, mode, i;
int lpm_size;
u32 guid;
lpm_indices = pmc->map->lpm_reg_index;
num_maps = pmc->map->lpm_num_maps;
lpm_size = LPM_MAX_NUM_MODES * num_maps;
guid = pmc_core_find_guid(pmcdev->regmap_list, pmc->map);
if (!guid)
return -ENXIO;
ep = pmt_telem_find_and_register_endpoint(pmcdev->ssram_pcidev, guid, 0);
if (IS_ERR(ep)) {
dev_dbg(&pmcdev->pdev->dev, "couldn't get telem endpoint %ld",
PTR_ERR(ep));
return -EPROBE_DEFER;
}
pmc->lpm_req_regs = devm_kzalloc(&pmcdev->pdev->dev,
lpm_size * sizeof(u32),
GFP_KERNEL);
if (!pmc->lpm_req_regs) {
ret = -ENOMEM;
goto unregister_ep;
}
/*
* PMC Low Power Mode (LPM) table
*
* In telemetry space, the LPM table contains a 4 byte header followed
* by 8 consecutive mode blocks (one for each LPM mode). Each block
* has a 4 byte header followed by a set of registers that describe the
* IP state requirements for the given mode. The IP mapping is platform
* specific but the same for each block, making for easy analysis.
* Platforms only use a subset of the space to track the requirements
* for their IPs. Callers provide the requirement registers they use as
* a list of indices. Each requirement register is associated with an
* IP map that's maintained by the caller.
*
* Header
* +----+----------------------------+----------------------------+
* | 0 | REVISION | ENABLED MODES |
* +----+--------------+-------------+-------------+--------------+
*
* Low Power Mode 0 Block
* +----+--------------+-------------+-------------+--------------+
* | 1 | SUB ID | SIZE | MAJOR | MINOR |
* +----+--------------+-------------+-------------+--------------+
* | 2 | LPM0 Requirements 0 |
* +----+---------------------------------------------------------+
* | | ... |
* +----+---------------------------------------------------------+
* | 29 | LPM0 Requirements 27 |
* +----+---------------------------------------------------------+
*
* ...
*
* Low Power Mode 7 Block
* +----+--------------+-------------+-------------+--------------+
* | | SUB ID | SIZE | MAJOR | MINOR |
* +----+--------------+-------------+-------------+--------------+
* | 60 | LPM7 Requirements 0 |
* +----+---------------------------------------------------------+
* | | ... |
* +----+---------------------------------------------------------+
* | 87 | LPM7 Requirements 27 |
* +----+---------------------------------------------------------+
*
*/
mode_offset = LPM_HEADER_OFFSET + LPM_MODE_OFFSET;
pmc_for_each_mode(i, mode, pmcdev) {
u32 *req_offset = pmc->lpm_req_regs + (mode * num_maps);
int m;
for (m = 0; m < num_maps; m++) {
u8 sample_id = lpm_indices[m] + mode_offset;
ret = pmt_telem_read32(ep, sample_id, req_offset, 1);
if (ret) {
dev_err(&pmcdev->pdev->dev,
"couldn't read Low Power Mode requirements: %d\n", ret);
devm_kfree(&pmcdev->pdev->dev, pmc->lpm_req_regs);
goto unregister_ep;
}
++req_offset;
}
mode_offset += LPM_REG_COUNT + LPM_MODE_OFFSET;
}
unregister_ep:
pmt_telem_unregister_endpoint(ep);
return ret;
}
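The sample-id arithmetic in pmc_core_get_lpm_req() above can be checked in isolation: the mode offset starts past the table header (LPM_HEADER_OFFSET + LPM_MODE_OFFSET) and advances by one block header plus LPM_REG_COUNT registers per enabled mode, matching the diagram (row 2 is "LPM0 Requirements 0", row 29 is "LPM0 Requirements 27"). A standalone sketch, where `reg_index` stands in for the platform's `lpm_indices[m]`:

```c
#include <assert.h>

/* Offsets mirrored from the driver: a one-entry table header, a
 * one-entry per-mode block header, and 28 requirement registers per
 * mode block. */
#define LPM_HEADER_OFFSET 1
#define LPM_MODE_OFFSET   1
#define LPM_REG_COUNT     28

/* Telemetry sample id of requirement register 'reg_index' in the
 * nth *enabled* low power mode, mirroring the loop in
 * pmc_core_get_lpm_req(). Illustrative helper, not a kernel symbol. */
static int lpm_sample_id(int nth_enabled_mode, int reg_index)
{
	int mode_offset = LPM_HEADER_OFFSET + LPM_MODE_OFFSET +
			  nth_enabled_mode * (LPM_REG_COUNT + LPM_MODE_OFFSET);
	return reg_index + mode_offset;
}
```

So the first enabled mode's requirements occupy sample ids 2 through 29, and each subsequent enabled mode's block starts 29 entries later.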
int pmc_core_ssram_get_lpm_reqs(struct pmc_dev *pmcdev)
{
int ret, i;
if (!pmcdev->ssram_pcidev)
return -ENODEV;
for (i = 0; i < ARRAY_SIZE(pmcdev->pmcs); ++i) {
if (!pmcdev->pmcs[i])
continue;
ret = pmc_core_get_lpm_req(pmcdev, pmcdev->pmcs[i]);
if (ret)
return ret;
}
return 0;
}
static void
pmc_add_pmt(struct pmc_dev *pmcdev, u64 ssram_base, void __iomem *ssram)
{
struct pci_dev *pcidev = pmcdev->ssram_pcidev;
struct intel_vsec_platform_info info = {};
struct intel_vsec_header *headers[2] = {};
struct intel_vsec_header header;
void __iomem *dvsec;
u32 dvsec_offset;
u32 table, hdr;
ssram = ioremap(ssram_base, SSRAM_HDR_SIZE);
if (!ssram)
return;
dvsec_offset = readl(ssram + SSRAM_DVSEC_OFFSET);
iounmap(ssram);
dvsec = ioremap(ssram_base + dvsec_offset, SSRAM_DVSEC_SIZE);
if (!dvsec)
return;
hdr = readl(dvsec + PCI_DVSEC_HEADER1);
header.id = readw(dvsec + PCI_DVSEC_HEADER2);
header.rev = PCI_DVSEC_HEADER1_REV(hdr);
header.length = PCI_DVSEC_HEADER1_LEN(hdr);
header.num_entries = readb(dvsec + INTEL_DVSEC_ENTRIES);
header.entry_size = readb(dvsec + INTEL_DVSEC_SIZE);
table = readl(dvsec + INTEL_DVSEC_TABLE);
header.tbir = INTEL_DVSEC_TABLE_BAR(table);
header.offset = INTEL_DVSEC_TABLE_OFFSET(table);
iounmap(dvsec);
headers[0] = &header;
info.caps = VSEC_CAP_TELEMETRY;
info.headers = headers;
info.base_addr = ssram_base;
info.parent = &pmcdev->pdev->dev;
intel_vsec_register(pcidev, &info);
}
static const struct pmc_reg_map *pmc_core_find_regmap(struct pmc_info *list, u16 devid)
{
for (; list->map; ++list)
@@ -35,20 +217,20 @@ static inline u64 get_base(void __iomem *addr, u32 offset)
return lo_hi_readq(addr + offset) & GENMASK_ULL(63, 3);
}
static void
static int
pmc_core_pmc_add(struct pmc_dev *pmcdev, u64 pwrm_base,
const struct pmc_reg_map *reg_map, int pmc_index)
{
struct pmc *pmc = pmcdev->pmcs[pmc_index];
if (!pwrm_base)
return;
return -ENODEV;
/* Memory for primary PMC has been allocated in core.c */
if (!pmc) {
pmc = devm_kzalloc(&pmcdev->pdev->dev, sizeof(*pmc), GFP_KERNEL);
if (!pmc)
return;
return -ENOMEM;
}
pmc->map = reg_map;
@@ -57,77 +239,88 @@ pmc_core_pmc_add(struct pmc_dev *pmcdev, u64 pwrm_base,
if (!pmc->regbase) {
devm_kfree(&pmcdev->pdev->dev, pmc);
return;
return -ENOMEM;
}
pmcdev->pmcs[pmc_index] = pmc;
return 0;
}
static void
pmc_core_ssram_get_pmc(struct pmc_dev *pmcdev, void __iomem *ssram, u32 offset,
int pmc_idx)
static int
pmc_core_ssram_get_pmc(struct pmc_dev *pmcdev, int pmc_idx, u32 offset)
{
u64 pwrm_base;
struct pci_dev *ssram_pcidev = pmcdev->ssram_pcidev;
void __iomem __free(pmc_core_iounmap) *tmp_ssram = NULL;
void __iomem __free(pmc_core_iounmap) *ssram = NULL;
const struct pmc_reg_map *map;
u64 ssram_base, pwrm_base;
u16 devid;
if (pmc_idx != PMC_IDX_SOC) {
u64 ssram_base = get_base(ssram, offset);
if (!pmcdev->regmap_list)
return -ENOENT;
if (!ssram_base)
return;
ssram_base = ssram_pcidev->resource[0].start;
tmp_ssram = ioremap(ssram_base, SSRAM_HDR_SIZE);
if (pmc_idx != PMC_IDX_MAIN) {
/*
* The secondary PMC BARS (which are behind hidden PCI devices)
* are read from fixed offsets in MMIO of the primary PMC BAR.
*/
ssram_base = get_base(tmp_ssram, offset);
ssram = ioremap(ssram_base, SSRAM_HDR_SIZE);
if (!ssram)
return;
return -ENOMEM;
} else {
ssram = no_free_ptr(tmp_ssram);
}
pwrm_base = get_base(ssram, SSRAM_PWRM_OFFSET);
devid = readw(ssram + SSRAM_DEVID_OFFSET);
if (pmcdev->regmap_list) {
const struct pmc_reg_map *map;
/* Find and register the PMC telemetry entries */
pmc_add_pmt(pmcdev, ssram_base, ssram);
map = pmc_core_find_regmap(pmcdev->regmap_list, devid);
if (map)
pmc_core_pmc_add(pmcdev, pwrm_base, map, pmc_idx);
}
map = pmc_core_find_regmap(pmcdev->regmap_list, devid);
if (!map)
return -ENODEV;
if (pmc_idx != PMC_IDX_SOC)
iounmap(ssram);
return pmc_core_pmc_add(pmcdev, pwrm_base, map, pmc_idx);
}
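The `__free(pmc_core_iounmap)` annotations above rely on the kernel's scope-based cleanup helpers: the annotated pointer is released automatically when it goes out of scope, and `no_free_ptr()` transfers ownership so the cleanup is skipped. A rough userspace analog, assuming only GCC/Clang's `cleanup` attribute (`__free_malloc`, `take_ptr`, and `demo` are illustrative names, not kernel API):

```c
#include <stdlib.h>

/* Runs when the annotated variable leaves scope. free(NULL) is a no-op. */
static void cleanup_free(void *p)
{
	void **pp = p;
	free(*pp);
}
#define __free_malloc __attribute__((cleanup(cleanup_free)))

/* Like the kernel's no_free_ptr(): take ownership, disarm the cleanup. */
static void *take_ptr(void **pp)
{
	void *p = *pp;
	*pp = NULL;
	return p;
}

static int demo(int transfer)
{
	__free_malloc void *buf = malloc(16);

	if (!buf)
		return -1;
	if (transfer) {
		void *owned = take_ptr(&buf);	/* scope no longer frees buf */
		free(owned);			/* new owner frees explicitly */
		return 1;
	}
	return 0;	/* buf freed automatically on scope exit */
}
```

Both early returns are safe: either the cleanup handler frees the buffer, or ownership has already been moved out and the handler sees NULL.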
int pmc_core_ssram_init(struct pmc_dev *pmcdev, int func)
{
struct pci_dev *pcidev;
int ret;
pcidev = pci_get_domain_bus_and_slot(0, 0, PCI_DEVFN(20, func));
if (!pcidev)
return -ENODEV;
ret = pcim_enable_device(pcidev);
if (ret)
goto release_dev;
pmcdev->ssram_pcidev = pcidev;
ret = pmc_core_ssram_get_pmc(pmcdev, PMC_IDX_MAIN, 0);
if (ret)
goto disable_dev;
pmc_core_ssram_get_pmc(pmcdev, PMC_IDX_IOE, SSRAM_IOE_OFFSET);
pmc_core_ssram_get_pmc(pmcdev, PMC_IDX_PCH, SSRAM_PCH_OFFSET);
return 0;
disable_dev:
pmcdev->ssram_pcidev = NULL;
pci_disable_device(pcidev);
release_dev:
pci_dev_put(pcidev);
return ret;
}
MODULE_IMPORT_NS(INTEL_VSEC);
MODULE_IMPORT_NS(INTEL_PMT_TELEMETRY);
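The lookup above targets PCI device 20, function `func` on bus 0. The kernel's `PCI_DEVFN()` helper packs slot and function into a single devfn byte as `(slot << 3) | func`; a minimal standalone check of that packing:

```c
#include <stdint.h>

/* Same packing used by the kernel's PCI_DEVFN()/PCI_SLOT()/PCI_FUNC(). */
#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_SLOT(devfn)		(((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)		((devfn) & 0x07)
```

For the Meteor Lake SSRAM device (device 20, function 2) this yields devfn 0xa2, and the slot/function can be recovered from it.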


@@ -53,7 +53,15 @@ const struct pmc_reg_map icl_reg_map = {
int icl_core_init(struct pmc_dev *pmcdev)
{
struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
int ret;
pmc->map = &icl_reg_map;
ret = get_primary_reg_base(pmc);
if (ret)
return ret;
pmc_core_get_low_power_modes(pmcdev);
return ret;
}


@@ -0,0 +1,549 @@
// SPDX-License-Identifier: GPL-2.0
/*
* This file contains platform specific structure definitions
* and init function used by Meteor Lake PCH.
*
* Copyright (c) 2022, Intel Corporation.
* All Rights Reserved.
*
*/
#include <linux/cpu.h>
#include <linux/pci.h>
#include "core.h"
#define SOCM_LPM_REQ_GUID 0x11594920
#define PMC_DEVID_SOCM 0xa87f
static const u8 LNL_LPM_REG_INDEX[] = {0, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20};
static struct pmc_info lnl_pmc_info_list[] = {
{
.guid = SOCM_LPM_REQ_GUID,
.devid = PMC_DEVID_SOCM,
.map = &lnl_socm_reg_map,
},
{}
};
const struct pmc_bit_map lnl_ltr_show_map[] = {
{"SOUTHPORT_A", CNP_PMC_LTR_SPA},
{"SOUTHPORT_B", CNP_PMC_LTR_SPB},
{"SATA", CNP_PMC_LTR_SATA},
{"GIGABIT_ETHERNET", CNP_PMC_LTR_GBE},
{"XHCI", CNP_PMC_LTR_XHCI},
{"SOUTHPORT_F", ADL_PMC_LTR_SPF},
{"ME", CNP_PMC_LTR_ME},
/* EVA is Enterprise Value Add, doesn't really exist on PCH */
{"SATA1", CNP_PMC_LTR_EVA},
{"SOUTHPORT_C", CNP_PMC_LTR_SPC},
{"HD_AUDIO", CNP_PMC_LTR_AZ},
{"CNV", CNP_PMC_LTR_CNV},
{"LPSS", CNP_PMC_LTR_LPSS},
{"SOUTHPORT_D", CNP_PMC_LTR_SPD},
{"SOUTHPORT_E", CNP_PMC_LTR_SPE},
{"SATA2", CNP_PMC_LTR_CAM},
{"ESPI", CNP_PMC_LTR_ESPI},
{"SCC", CNP_PMC_LTR_SCC},
{"ISH", CNP_PMC_LTR_ISH},
{"UFSX2", CNP_PMC_LTR_UFSX2},
{"EMMC", CNP_PMC_LTR_EMMC},
/*
* Check intel_pmc_core_ids[] users of cnp_reg_map for
* a list of core SoCs using this.
*/
{"WIGIG", ICL_PMC_LTR_WIGIG},
{"THC0", TGL_PMC_LTR_THC0},
{"THC1", TGL_PMC_LTR_THC1},
{"SOUTHPORT_G", CNP_PMC_LTR_RESERVED},
{"ESE", MTL_PMC_LTR_ESE},
{"IOE_PMC", MTL_PMC_LTR_IOE_PMC},
{"DMI3", ARL_PMC_LTR_DMI3},
{"OSSE", LNL_PMC_LTR_OSSE},
/* Below two cannot be used for LTR_IGNORE */
{"CURRENT_PLATFORM", CNP_PMC_LTR_CUR_PLT},
{"AGGREGATED_SYSTEM", CNP_PMC_LTR_CUR_ASLT},
{}
};
const struct pmc_bit_map lnl_power_gating_status_0_map[] = {
{"PMC_PGD0_PG_STS", BIT(0)},
{"FUSE_OSSE_PGD0_PG_STS", BIT(1)},
{"ESPISPI_PGD0_PG_STS", BIT(2)},
{"XHCI_PGD0_PG_STS", BIT(3)},
{"SPA_PGD0_PG_STS", BIT(4)},
{"SPB_PGD0_PG_STS", BIT(5)},
{"SPR16B0_PGD0_PG_STS", BIT(6)},
{"GBE_PGD0_PG_STS", BIT(7)},
{"SBR8B7_PGD0_PG_STS", BIT(8)},
{"SBR8B6_PGD0_PG_STS", BIT(9)},
{"SBR16B1_PGD0_PG_STS", BIT(10)},
{"SBR8B8_PGD0_PG_STS", BIT(11)},
{"ESE_PGD3_PG_STS", BIT(12)},
{"D2D_DISP_PGD0_PG_STS", BIT(13)},
{"LPSS_PGD0_PG_STS", BIT(14)},
{"LPC_PGD0_PG_STS", BIT(15)},
{"SMB_PGD0_PG_STS", BIT(16)},
{"ISH_PGD0_PG_STS", BIT(17)},
{"SBR8B2_PGD0_PG_STS", BIT(18)},
{"NPK_PGD0_PG_STS", BIT(19)},
{"D2D_NOC_PGD0_PG_STS", BIT(20)},
{"SAFSS_PGD0_PG_STS", BIT(21)},
{"FUSE_PGD0_PG_STS", BIT(22)},
{"D2D_DISP_PGD1_PG_STS", BIT(23)},
{"MPFPW1_PGD0_PG_STS", BIT(24)},
{"XDCI_PGD0_PG_STS", BIT(25)},
{"EXI_PGD0_PG_STS", BIT(26)},
{"CSE_PGD0_PG_STS", BIT(27)},
{"KVMCC_PGD0_PG_STS", BIT(28)},
{"PMT_PGD0_PG_STS", BIT(29)},
{"CLINK_PGD0_PG_STS", BIT(30)},
{"PTIO_PGD0_PG_STS", BIT(31)},
{}
};
const struct pmc_bit_map lnl_power_gating_status_1_map[] = {
{"USBR0_PGD0_PG_STS", BIT(0)},
{"SUSRAM_PGD0_PG_STS", BIT(1)},
{"SMT1_PGD0_PG_STS", BIT(2)},
{"U3FPW1_PGD0_PG_STS", BIT(3)},
{"SMS2_PGD0_PG_STS", BIT(4)},
{"SMS1_PGD0_PG_STS", BIT(5)},
{"CSMERTC_PGD0_PG_STS", BIT(6)},
{"CSMEPSF_PGD0_PG_STS", BIT(7)},
{"FIA_PG_PGD0_PG_STS", BIT(8)},
{"SBR16B4_PGD0_PG_STS", BIT(9)},
{"P2SB8B_PGD0_PG_STS", BIT(10)},
{"DBG_SBR_PGD0_PG_STS", BIT(11)},
{"SBR8B9_PGD0_PG_STS", BIT(12)},
{"OSSE_SMT1_PGD0_PG_STS", BIT(13)},
{"SBR8B10_PGD0_PG_STS", BIT(14)},
{"SBR16B3_PGD0_PG_STS", BIT(15)},
{"G5FPW1_PGD0_PG_STS", BIT(16)},
{"SBRG_PGD0_PG_STS", BIT(17)},
{"PSF4_PGD0_PG_STS", BIT(18)},
{"CNVI_PGD0_PG_STS", BIT(19)},
{"USFX2_PGD0_PG_STS", BIT(20)},
{"ENDBG_PGD0_PG_STS", BIT(21)},
{"FIACPCB_P5X4_PGD0_PG_STS", BIT(22)},
{"SBR8B3_PGD0_PG_STS", BIT(23)},
{"SBR8B0_PGD0_PG_STS", BIT(24)},
{"NPK_PGD1_PG_STS", BIT(25)},
{"OSSE_HOTHAM_PGD0_PG_STS", BIT(26)},
{"D2D_NOC_PGD2_PG_STS", BIT(27)},
{"SBR8B1_PGD0_PG_STS", BIT(28)},
{"PSF6_PGD0_PG_STS", BIT(29)},
{"PSF7_PGD0_PG_STS", BIT(30)},
{"FIA_U_PGD0_PG_STS", BIT(31)},
{}
};
const struct pmc_bit_map lnl_power_gating_status_2_map[] = {
{"PSF8_PGD0_PG_STS", BIT(0)},
{"SBR16B2_PGD0_PG_STS", BIT(1)},
{"D2D_IPU_PGD0_PG_STS", BIT(2)},
{"FIACPCB_U_PGD0_PG_STS", BIT(3)},
{"TAM_PGD0_PG_STS", BIT(4)},
{"D2D_NOC_PGD1_PG_STS", BIT(5)},
{"TBTLSX_PGD0_PG_STS", BIT(6)},
{"THC0_PGD0_PG_STS", BIT(7)},
{"THC1_PGD0_PG_STS", BIT(8)},
{"PMC_PGD0_PG_STS", BIT(9)},
{"SBR8B5_PGD0_PG_STS", BIT(10)},
{"UFSPW1_PGD0_PG_STS", BIT(11)},
{"DBC_PGD0_PG_STS", BIT(12)},
{"TCSS_PGD0_PG_STS", BIT(13)},
{"FIA_P5X4_PGD0_PG_STS", BIT(14)},
{"DISP_PGA_PGD0_PG_STS", BIT(15)},
{"DISP_PSF_PGD0_PG_STS", BIT(16)},
{"PSF0_PGD0_PG_STS", BIT(17)},
{"P2SB16B_PGD0_PG_STS", BIT(18)},
{"ACE_PGD0_PG_STS", BIT(19)},
{"ACE_PGD1_PG_STS", BIT(20)},
{"ACE_PGD2_PG_STS", BIT(21)},
{"ACE_PGD3_PG_STS", BIT(22)},
{"ACE_PGD4_PG_STS", BIT(23)},
{"ACE_PGD5_PG_STS", BIT(24)},
{"ACE_PGD6_PG_STS", BIT(25)},
{"ACE_PGD7_PG_STS", BIT(26)},
{"ACE_PGD8_PG_STS", BIT(27)},
{"ACE_PGD9_PG_STS", BIT(28)},
{"ACE_PGD10_PG_STS", BIT(29)},
{"FIACPCB_PG_PGD0_PG_STS", BIT(30)},
{"OSSE_PGD0_PG_STS", BIT(31)},
{}
};
const struct pmc_bit_map lnl_d3_status_0_map[] = {
{"LPSS_D3_STS", BIT(3)},
{"XDCI_D3_STS", BIT(4)},
{"XHCI_D3_STS", BIT(5)},
{"SPA_D3_STS", BIT(12)},
{"SPB_D3_STS", BIT(13)},
{"OSSE_D3_STS", BIT(15)},
{"ESPISPI_D3_STS", BIT(18)},
{"PSTH_D3_STS", BIT(21)},
{}
};
const struct pmc_bit_map lnl_d3_status_1_map[] = {
{"OSSE_SMT1_D3_STS", BIT(7)},
{"GBE_D3_STS", BIT(19)},
{"ITSS_D3_STS", BIT(23)},
{"CNVI_D3_STS", BIT(27)},
{"UFSX2_D3_STS", BIT(28)},
{"OSSE_HOTHAM_D3_STS", BIT(31)},
{}
};
const struct pmc_bit_map lnl_d3_status_2_map[] = {
{"ESE_D3_STS", BIT(0)},
{"CSMERTC_D3_STS", BIT(1)},
{"SUSRAM_D3_STS", BIT(2)},
{"CSE_D3_STS", BIT(4)},
{"KVMCC_D3_STS", BIT(5)},
{"USBR0_D3_STS", BIT(6)},
{"ISH_D3_STS", BIT(7)},
{"SMT1_D3_STS", BIT(8)},
{"SMT2_D3_STS", BIT(9)},
{"SMT3_D3_STS", BIT(10)},
{"OSSE_SMT2_D3_STS", BIT(13)},
{"CLINK_D3_STS", BIT(14)},
{"PTIO_D3_STS", BIT(16)},
{"PMT_D3_STS", BIT(17)},
{"SMS1_D3_STS", BIT(18)},
{"SMS2_D3_STS", BIT(19)},
{}
};
const struct pmc_bit_map lnl_d3_status_3_map[] = {
{"THC0_D3_STS", BIT(14)},
{"THC1_D3_STS", BIT(15)},
{"OSSE_SMT3_D3_STS", BIT(21)},
{"ACE_D3_STS", BIT(23)},
{}
};
const struct pmc_bit_map lnl_vnn_req_status_0_map[] = {
{"LPSS_VNN_REQ_STS", BIT(3)},
{"OSSE_VNN_REQ_STS", BIT(15)},
{"ESPISPI_VNN_REQ_STS", BIT(18)},
{}
};
const struct pmc_bit_map lnl_vnn_req_status_1_map[] = {
{"NPK_VNN_REQ_STS", BIT(4)},
{"OSSE_SMT1_VNN_REQ_STS", BIT(7)},
{"DFXAGG_VNN_REQ_STS", BIT(8)},
{"EXI_VNN_REQ_STS", BIT(9)},
{"P2D_VNN_REQ_STS", BIT(18)},
{"GBE_VNN_REQ_STS", BIT(19)},
{"SMB_VNN_REQ_STS", BIT(25)},
{"LPC_VNN_REQ_STS", BIT(26)},
{}
};
const struct pmc_bit_map lnl_vnn_req_status_2_map[] = {
{"eSE_VNN_REQ_STS", BIT(0)},
{"CSMERTC_VNN_REQ_STS", BIT(1)},
{"CSE_VNN_REQ_STS", BIT(4)},
{"ISH_VNN_REQ_STS", BIT(7)},
{"SMT1_VNN_REQ_STS", BIT(8)},
{"CLINK_VNN_REQ_STS", BIT(14)},
{"SMS1_VNN_REQ_STS", BIT(18)},
{"SMS2_VNN_REQ_STS", BIT(19)},
{"GPIOCOM4_VNN_REQ_STS", BIT(20)},
{"GPIOCOM3_VNN_REQ_STS", BIT(21)},
{"GPIOCOM2_VNN_REQ_STS", BIT(22)},
{"GPIOCOM1_VNN_REQ_STS", BIT(23)},
{"GPIOCOM0_VNN_REQ_STS", BIT(24)},
{}
};
const struct pmc_bit_map lnl_vnn_req_status_3_map[] = {
{"DISP_SHIM_VNN_REQ_STS", BIT(2)},
{"DTS0_VNN_REQ_STS", BIT(7)},
{"GPIOCOM5_VNN_REQ_STS", BIT(11)},
{}
};
const struct pmc_bit_map lnl_vnn_misc_status_map[] = {
{"CPU_C10_REQ_STS", BIT(0)},
{"TS_OFF_REQ_STS", BIT(1)},
{"PNDE_MET_REQ_STS", BIT(2)},
{"PCIE_DEEP_PM_REQ_STS", BIT(3)},
{"PMC_CLK_THROTTLE_EN_REQ_STS", BIT(4)},
{"NPK_VNNAON_REQ_STS", BIT(5)},
{"VNN_SOC_REQ_STS", BIT(6)},
{"ISH_VNNAON_REQ_STS", BIT(7)},
{"D2D_NOC_CFI_QACTIVE_REQ_STS", BIT(8)},
{"D2D_NOC_GPSB_QACTIVE_REQ_STS", BIT(9)},
{"D2D_NOC_IPU_QACTIVE_REQ_STS", BIT(10)},
{"PLT_GREATER_REQ_STS", BIT(11)},
{"PCIE_CLKREQ_REQ_STS", BIT(12)},
{"PMC_IDLE_FB_OCP_REQ_STS", BIT(13)},
{"PM_SYNC_STATES_REQ_STS", BIT(14)},
{"EA_REQ_STS", BIT(15)},
{"MPHY_CORE_OFF_REQ_STS", BIT(16)},
{"BRK_EV_EN_REQ_STS", BIT(17)},
{"AUTO_DEMO_EN_REQ_STS", BIT(18)},
{"ITSS_CLK_SRC_REQ_STS", BIT(19)},
{"LPC_CLK_SRC_REQ_STS", BIT(20)},
{"ARC_IDLE_REQ_STS", BIT(21)},
{"MPHY_SUS_REQ_STS", BIT(22)},
{"FIA_DEEP_PM_REQ_STS", BIT(23)},
{"UXD_CONNECTED_REQ_STS", BIT(24)},
{"ARC_INTERRUPT_WAKE_REQ_STS", BIT(25)},
{"D2D_NOC_DISP_DDI_QACTIVE_REQ_STS", BIT(26)},
{"PRE_WAKE0_REQ_STS", BIT(27)},
{"PRE_WAKE1_REQ_STS", BIT(28)},
{"PRE_WAKE2_EN_REQ_STS", BIT(29)},
{"WOV_REQ_STS", BIT(30)},
{"D2D_NOC_DISP_EDP_QACTIVE_REQ_STS_31", BIT(31)},
{}
};
const struct pmc_bit_map lnl_clocksource_status_map[] = {
{"AON2_OFF_STS", BIT(0)},
{"AON3_OFF_STS", BIT(1)},
{"AON4_OFF_STS", BIT(2)},
{"AON5_OFF_STS", BIT(3)},
{"AON1_OFF_STS", BIT(4)},
{"MPFPW1_0_PLL_OFF_STS", BIT(6)},
{"USB3_PLL_OFF_STS", BIT(8)},
{"AON3_SPL_OFF_STS", BIT(9)},
{"G5FPW1_PLL_OFF_STS", BIT(15)},
{"XTAL_AGGR_OFF_STS", BIT(17)},
{"USB2_PLL_OFF_STS", BIT(18)},
{"SAF_PLL_OFF_STS", BIT(19)},
{"SE_TCSS_PLL_OFF_STS", BIT(20)},
{"DDI_PLL_OFF_STS", BIT(21)},
{"FILTER_PLL_OFF_STS", BIT(22)},
{"ACE_PLL_OFF_STS", BIT(24)},
{"FABRIC_PLL_OFF_STS", BIT(25)},
{"SOC_PLL_OFF_STS", BIT(26)},
{"REF_OFF_STS", BIT(28)},
{"IMG_OFF_STS", BIT(29)},
{"RTC_PLL_OFF_STS", BIT(31)},
{}
};
const struct pmc_bit_map *lnl_lpm_maps[] = {
lnl_clocksource_status_map,
lnl_power_gating_status_0_map,
lnl_power_gating_status_1_map,
lnl_power_gating_status_2_map,
lnl_d3_status_0_map,
lnl_d3_status_1_map,
lnl_d3_status_2_map,
lnl_d3_status_3_map,
lnl_vnn_req_status_0_map,
lnl_vnn_req_status_1_map,
lnl_vnn_req_status_2_map,
lnl_vnn_req_status_3_map,
lnl_vnn_misc_status_map,
mtl_socm_signal_status_map,
NULL
};
const struct pmc_bit_map lnl_pfear_map[] = {
{"PMC_0", BIT(0)},
{"FUSE_OSSE", BIT(1)},
{"ESPISPI", BIT(2)},
{"XHCI", BIT(3)},
{"SPA", BIT(4)},
{"SPB", BIT(5)},
{"SBR16B0", BIT(6)},
{"GBE", BIT(7)},
{"SBR8B7", BIT(0)},
{"SBR8B6", BIT(1)},
{"SBR16B1", BIT(1)},
{"SBR8B8", BIT(2)},
{"ESE", BIT(3)},
{"SBR8B10", BIT(4)},
{"D2D_DISP_0", BIT(5)},
{"LPSS", BIT(6)},
{"LPC", BIT(7)},
{"SMB", BIT(0)},
{"ISH", BIT(1)},
{"SBR8B2", BIT(2)},
{"NPK_0", BIT(3)},
{"D2D_NOC_0", BIT(4)},
{"SAFSS", BIT(5)},
{"FUSE", BIT(6)},
{"D2D_DISP_1", BIT(7)},
{"MPFPW1", BIT(0)},
{"XDCI", BIT(1)},
{"EXI", BIT(2)},
{"CSE", BIT(3)},
{"KVMCC", BIT(4)},
{"PMT", BIT(5)},
{"CLINK", BIT(6)},
{"PTIO", BIT(7)},
{"USBR", BIT(0)},
{"SUSRAM", BIT(1)},
{"SMT1", BIT(2)},
{"U3FPW1", BIT(3)},
{"SMS2", BIT(4)},
{"SMS1", BIT(5)},
{"CSMERTC", BIT(6)},
{"CSMEPSF", BIT(7)},
{"FIA_PG", BIT(0)},
{"SBR16B4", BIT(1)},
{"P2SB8B", BIT(2)},
{"DBG_SBR", BIT(3)},
{"SBR8B9", BIT(4)},
{"OSSE_SMT1", BIT(5)},
{"SBR8B10", BIT(6)},
{"SBR16B3", BIT(7)},
{"G5FPW1", BIT(0)},
{"SBRG", BIT(1)},
{"PSF4", BIT(2)},
{"CNVI", BIT(3)},
{"UFSX2", BIT(4)},
{"ENDBG", BIT(5)},
{"FIACPCB_P5X4", BIT(6)},
{"SBR8B3", BIT(7)},
{"SBR8B0", BIT(0)},
{"NPK_1", BIT(1)},
{"OSSE_HOTHAM", BIT(2)},
{"D2D_NOC_2", BIT(3)},
{"SBR8B1", BIT(4)},
{"PSF6", BIT(5)},
{"PSF7", BIT(6)},
{"FIA_U", BIT(7)},
{"PSF8", BIT(0)},
{"SBR16B2", BIT(1)},
{"D2D_IPU", BIT(2)},
{"FIACPCB_U", BIT(3)},
{"TAM", BIT(4)},
{"D2D_NOC_1", BIT(5)},
{"TBTLSX", BIT(6)},
{"THC0", BIT(7)},
{"THC1", BIT(0)},
{"PMC_1", BIT(1)},
{"SBR8B5", BIT(2)},
{"UFSPW1", BIT(3)},
{"DBC", BIT(4)},
{"TCSS", BIT(5)},
{"FIA_P5X4", BIT(6)},
{"DISP_PGA", BIT(7)},
{"DBG_PSF", BIT(0)},
{"PSF0", BIT(1)},
{"P2SB16B", BIT(2)},
{"ACE0", BIT(3)},
{"ACE1", BIT(4)},
{"ACE2", BIT(5)},
{"ACE3", BIT(6)},
{"ACE4", BIT(7)},
{"ACE5", BIT(0)},
{"ACE6", BIT(1)},
{"ACE7", BIT(2)},
{"ACE8", BIT(3)},
{"ACE9", BIT(4)},
{"ACE10", BIT(5)},
{"FIACPCB", BIT(6)},
{"OSSE", BIT(7)},
{}
};
const struct pmc_bit_map *ext_lnl_pfear_map[] = {
lnl_pfear_map,
NULL
};
const struct pmc_reg_map lnl_socm_reg_map = {
.pfear_sts = ext_lnl_pfear_map,
.slp_s0_offset = CNP_PMC_SLP_S0_RES_COUNTER_OFFSET,
.slp_s0_res_counter_step = TGL_PMC_SLP_S0_RES_COUNTER_STEP,
.ltr_show_sts = lnl_ltr_show_map,
.msr_sts = msr_map,
.ltr_ignore_offset = CNP_PMC_LTR_IGNORE_OFFSET,
.regmap_length = LNL_PMC_MMIO_REG_LEN,
.ppfear0_offset = CNP_PMC_HOST_PPFEAR0A,
.ppfear_buckets = LNL_PPFEAR_NUM_ENTRIES,
.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
.ltr_ignore_max = LNL_NUM_IP_IGN_ALLOWED,
.lpm_num_maps = ADL_LPM_NUM_MAPS,
.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
.etr3_offset = ETR3_OFFSET,
.lpm_sts_latch_en_offset = MTL_LPM_STATUS_LATCH_EN_OFFSET,
.lpm_priority_offset = MTL_LPM_PRI_OFFSET,
.lpm_en_offset = MTL_LPM_EN_OFFSET,
.lpm_residency_offset = MTL_LPM_RESIDENCY_OFFSET,
.lpm_sts = lnl_lpm_maps,
.lpm_status_offset = MTL_LPM_STATUS_OFFSET,
.lpm_live_status_offset = MTL_LPM_LIVE_STATUS_OFFSET,
.lpm_reg_index = LNL_LPM_REG_INDEX,
};
#define LNL_NPU_PCI_DEV 0x643e
#define LNL_IPU_PCI_DEV 0x645d
/*
* Set power state of select devices that do not have drivers to D3
* so that they do not block Package C entry.
*/
static void lnl_d3_fixup(void)
{
pmc_core_set_device_d3(LNL_IPU_PCI_DEV);
pmc_core_set_device_d3(LNL_NPU_PCI_DEV);
}
static int lnl_resume(struct pmc_dev *pmcdev)
{
lnl_d3_fixup();
pmc_core_send_ltr_ignore(pmcdev, 3, 0);
return pmc_core_resume_common(pmcdev);
}
int lnl_core_init(struct pmc_dev *pmcdev)
{
int ret;
int func = 2;
bool ssram_init = true;
struct pmc *pmc = pmcdev->pmcs[PMC_IDX_SOC];
lnl_d3_fixup();
pmcdev->suspend = cnl_suspend;
pmcdev->resume = lnl_resume;
pmcdev->regmap_list = lnl_pmc_info_list;
ret = pmc_core_ssram_init(pmcdev, func);
/* If regbase not assigned, set map and discover using legacy method */
if (ret) {
ssram_init = false;
pmc->map = &lnl_socm_reg_map;
ret = get_primary_reg_base(pmc);
if (ret)
return ret;
}
pmc_core_get_low_power_modes(pmcdev);
if (ssram_init) {
ret = pmc_core_ssram_get_lpm_reqs(pmcdev);
if (ret)
return ret;
}
return 0;
}


@@ -10,6 +10,14 @@
#include <linux/pci.h>
#include "core.h"
#include "../pmt/telemetry.h"
/* PMC SSRAM PMT Telemetry GUIDS */
#define SOCP_LPM_REQ_GUID 0x2625030
#define IOEM_LPM_REQ_GUID 0x4357464
#define IOEP_LPM_REQ_GUID 0x5077612
static const u8 MTL_LPM_REG_INDEX[] = {0, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20};
/*
* Die Mapping to Product.
@@ -465,6 +473,7 @@ const struct pmc_reg_map mtl_socm_reg_map = {
.lpm_sts = mtl_socm_lpm_maps,
.lpm_status_offset = MTL_LPM_STATUS_OFFSET,
.lpm_live_status_offset = MTL_LPM_LIVE_STATUS_OFFSET,
.lpm_reg_index = MTL_LPM_REG_INDEX,
};
const struct pmc_bit_map mtl_ioep_pfear_map[] = {
@@ -782,6 +791,13 @@ const struct pmc_reg_map mtl_ioep_reg_map = {
.ltr_show_sts = mtl_ioep_ltr_show_map,
.ltr_ignore_offset = CNP_PMC_LTR_IGNORE_OFFSET,
.ltr_ignore_max = ADL_NUM_IP_IGN_ALLOWED,
.lpm_num_maps = ADL_LPM_NUM_MAPS,
.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
.lpm_residency_offset = MTL_LPM_RESIDENCY_OFFSET,
.lpm_priority_offset = MTL_LPM_PRI_OFFSET,
.lpm_en_offset = MTL_LPM_EN_OFFSET,
.lpm_sts_latch_en_offset = MTL_LPM_STATUS_LATCH_EN_OFFSET,
.lpm_reg_index = MTL_LPM_REG_INDEX,
};
const struct pmc_bit_map mtl_ioem_pfear_map[] = {
@@ -922,6 +938,13 @@ const struct pmc_reg_map mtl_ioem_reg_map = {
.ltr_show_sts = mtl_ioep_ltr_show_map,
.ltr_ignore_offset = CNP_PMC_LTR_IGNORE_OFFSET,
.ltr_ignore_max = ADL_NUM_IP_IGN_ALLOWED,
.lpm_sts_latch_en_offset = MTL_LPM_STATUS_LATCH_EN_OFFSET,
.lpm_num_maps = ADL_LPM_NUM_MAPS,
.lpm_priority_offset = MTL_LPM_PRI_OFFSET,
.lpm_en_offset = MTL_LPM_EN_OFFSET,
.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
.lpm_residency_offset = MTL_LPM_RESIDENCY_OFFSET,
.lpm_reg_index = MTL_LPM_REG_INDEX,
};
#define PMC_DEVID_SOCM 0x7e7f
@@ -929,16 +952,19 @@ const struct pmc_reg_map mtl_ioem_reg_map = {
#define PMC_DEVID_IOEM 0x7ebf
static struct pmc_info mtl_pmc_info_list[] = {
{
.guid = SOCP_LPM_REQ_GUID,
.devid = PMC_DEVID_SOCM,
.map = &mtl_socm_reg_map,
},
{
.guid = IOEP_LPM_REQ_GUID,
.devid = PMC_DEVID_IOEP,
.map = &mtl_ioep_reg_map,
},
{
.guid = IOEM_LPM_REQ_GUID,
.devid = PMC_DEVID_IOEM,
.map = &mtl_ioem_reg_map
},
{}
};
@@ -946,34 +972,15 @@ static struct pmc_info mtl_pmc_info_list[] = {
#define MTL_GNA_PCI_DEV 0x7e4c
#define MTL_IPU_PCI_DEV 0x7d19
#define MTL_VPU_PCI_DEV 0x7d1d
/*
* Set power state of select devices that do not have drivers to D3
* so that they do not block Package C entry.
*/
static void mtl_d3_fixup(void)
{
pmc_core_set_device_d3(MTL_GNA_PCI_DEV);
pmc_core_set_device_d3(MTL_IPU_PCI_DEV);
pmc_core_set_device_d3(MTL_VPU_PCI_DEV);
}
static int mtl_resume(struct pmc_dev *pmcdev)
@@ -987,23 +994,36 @@ static int mtl_resume(struct pmc_dev *pmcdev)
int mtl_core_init(struct pmc_dev *pmcdev)
{
struct pmc *pmc = pmcdev->pmcs[PMC_IDX_SOC];
int ret;
int func = 2;
bool ssram_init = true;
mtl_d3_fixup();
pmcdev->suspend = cnl_suspend;
pmcdev->resume = mtl_resume;
pmcdev->regmap_list = mtl_pmc_info_list;
/*
* If ssram init fails use legacy method to at least get the
* primary PMC
*/
ret = pmc_core_ssram_init(pmcdev, func);
if (ret) {
ssram_init = false;
dev_warn(&pmcdev->pdev->dev,
"ssram init failed, %d, using legacy init\n", ret);
pmc->map = &mtl_socm_reg_map;
ret = get_primary_reg_base(pmc);
if (ret)
return ret;
}
pmc_core_get_low_power_modes(pmcdev);
pmc_core_punit_pmt_init(pmcdev, MTL_PMT_DMU_GUID);
if (ssram_init)
return pmc_core_ssram_get_lpm_reqs(pmcdev);
return 0;
}


@@ -137,7 +137,15 @@ const struct pmc_reg_map spt_reg_map = {
int spt_core_init(struct pmc_dev *pmcdev)
{
struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
int ret;
pmc->map = &spt_reg_map;
ret = get_primary_reg_base(pmc);
if (ret)
return ret;
pmc_core_get_low_power_modes(pmcdev);
return ret;
}


@@ -13,6 +13,11 @@
#define ACPI_S0IX_DSM_UUID "57a6512e-3979-4e9d-9708-ff13b2508972"
#define ACPI_GET_LOW_MODE_REGISTERS 1
enum pch_type {
PCH_H,
PCH_LP
};
const struct pmc_bit_map tgl_pfear_map[] = {
{"PSF9", BIT(0)},
{"RES_66", BIT(1)},
@@ -205,6 +210,33 @@ const struct pmc_reg_map tgl_reg_map = {
.etr3_offset = ETR3_OFFSET,
};
const struct pmc_reg_map tgl_h_reg_map = {
.pfear_sts = ext_tgl_pfear_map,
.slp_s0_offset = CNP_PMC_SLP_S0_RES_COUNTER_OFFSET,
.slp_s0_res_counter_step = TGL_PMC_SLP_S0_RES_COUNTER_STEP,
.ltr_show_sts = cnp_ltr_show_map,
.msr_sts = msr_map,
.ltr_ignore_offset = CNP_PMC_LTR_IGNORE_OFFSET,
.regmap_length = CNP_PMC_MMIO_REG_LEN,
.ppfear0_offset = CNP_PMC_HOST_PPFEAR0A,
.ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES,
.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
.ltr_ignore_max = TGL_NUM_IP_IGN_ALLOWED,
.lpm_num_maps = TGL_LPM_NUM_MAPS,
.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
.lpm_sts_latch_en_offset = TGL_LPM_STS_LATCH_EN_OFFSET,
.lpm_en_offset = TGL_LPM_EN_OFFSET,
.lpm_priority_offset = TGL_LPM_PRI_OFFSET,
.lpm_residency_offset = TGL_LPM_RESIDENCY_OFFSET,
.lpm_sts = tgl_lpm_maps,
.lpm_status_offset = TGL_LPM_STATUS_OFFSET,
.lpm_live_status_offset = TGL_LPM_LIVE_STATUS_OFFSET,
.etr3_offset = ETR3_OFFSET,
.pson_residency_offset = TGL_PSON_RESIDENCY_OFFSET,
.pson_residency_counter_step = TGL_PSON_RES_COUNTER_STEP,
};
void pmc_core_get_tgl_lpm_reqs(struct platform_device *pdev)
{
struct pmc_dev *pmcdev = platform_get_drvdata(pdev);
@@ -253,12 +285,25 @@ free_acpi_obj:
ACPI_FREE(out_obj);
}
int tgl_l_core_init(struct pmc_dev *pmcdev)
{
return tgl_core_generic_init(pmcdev, PCH_LP);
}
int tgl_core_init(struct pmc_dev *pmcdev)
{
return tgl_core_generic_init(pmcdev, PCH_H);
}
int tgl_core_generic_init(struct pmc_dev *pmcdev, int pch_tp)
{
struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
int ret;
if (pch_tp == PCH_H)
pmc->map = &tgl_h_reg_map;
else
pmc->map = &tgl_reg_map;
pmcdev->suspend = cnl_suspend;
pmcdev->resume = cnl_resume;
@@ -267,6 +312,7 @@ int tgl_core_init(struct pmc_dev *pmcdev)
if (ret)
return ret;
pmc_core_get_low_power_modes(pmcdev);
pmc_core_get_tgl_lpm_reqs(pmcdev->pdev);
return 0;


@@ -17,7 +17,7 @@
#include "../vsec.h"
#include "class.h"
#define PMT_XA_START 1
#define PMT_XA_MAX INT_MAX
#define PMT_XA_LIMIT XA_LIMIT(PMT_XA_START, PMT_XA_MAX)
#define GUID_SPR_PUNIT 0x9956f43f
@@ -31,7 +31,7 @@ bool intel_pmt_is_early_client_hw(struct device *dev)
* differences from the server platforms (which use the Out Of Band
* Management Services Module OOBMSM).
*/
return !!(ivdev->quirks & VSEC_QUIRK_EARLY_HW);
}
EXPORT_SYMBOL_NS_GPL(intel_pmt_is_early_client_hw, INTEL_PMT);
@@ -159,11 +159,12 @@ static struct class intel_pmt_class = {
};
static int intel_pmt_populate_entry(struct intel_pmt_entry *entry,
struct intel_vsec_device *ivdev,
struct resource *disc_res)
{
struct pci_dev *pci_dev = ivdev->pcidev;
struct device *dev = &ivdev->auxdev.dev;
struct intel_pmt_header *header = &entry->header;
u8 bir;
/*
@@ -215,6 +216,13 @@ static int intel_pmt_populate_entry(struct intel_pmt_entry *entry,
break;
case ACCESS_BARID:
/* Use the provided base address if it exists */
if (ivdev->base_addr) {
entry->base_addr = ivdev->base_addr +
GET_ADDRESS(header->base_offset);
break;
}
/*
* If another BAR was specified then the base offset
* represents the offset within that BAR. So retrieve the
@@ -239,6 +247,7 @@ static int intel_pmt_dev_register(struct intel_pmt_entry *entry,
struct intel_pmt_namespace *ns,
struct device *parent)
{
struct intel_vsec_device *ivdev = dev_to_ivdev(parent);
struct resource res = {0};
struct device *dev;
int ret;
@@ -262,7 +271,7 @@ static int intel_pmt_dev_register(struct intel_pmt_entry *entry,
if (ns->attr_grp) {
ret = sysfs_create_group(entry->kobj, ns->attr_grp);
if (ret)
goto fail_sysfs_create_group;
}
/* if size is 0 assume no data buffer, so no file needed */
@@ -287,13 +296,23 @@ static int intel_pmt_dev_register(struct intel_pmt_entry *entry,
entry->pmt_bin_attr.size = entry->size;
ret = sysfs_create_bin_file(&dev->kobj, &entry->pmt_bin_attr);
if (ret)
goto fail_ioremap;
if (ns->pmt_add_endpoint) {
ret = ns->pmt_add_endpoint(entry, ivdev->pcidev);
if (ret)
goto fail_add_endpoint;
}
return 0;
fail_add_endpoint:
sysfs_remove_bin_file(entry->kobj, &entry->pmt_bin_attr);
fail_ioremap:
if (ns->attr_grp)
sysfs_remove_group(entry->kobj, ns->attr_grp);
fail_sysfs_create_group:
device_unregister(dev);
fail_dev_create:
xa_erase(ns->xa, entry->devid);
@@ -305,7 +324,6 @@ int intel_pmt_dev_create(struct intel_pmt_entry *entry, struct intel_pmt_namespa
struct intel_vsec_device *intel_vsec_dev, int idx)
{
struct device *dev = &intel_vsec_dev->auxdev.dev;
struct resource *disc_res;
int ret;
@@ -315,16 +333,15 @@ int intel_pmt_dev_create(struct intel_pmt_entry *entry, struct intel_pmt_namespa
if (IS_ERR(entry->disc_table))
return PTR_ERR(entry->disc_table);
ret = ns->pmt_header_decode(entry, dev);
if (ret)
return ret;
ret = intel_pmt_populate_entry(entry, intel_vsec_dev, disc_res);
if (ret)
return ret;
return intel_pmt_dev_register(entry, ns, dev);
}
EXPORT_SYMBOL_NS_GPL(intel_pmt_dev_create, INTEL_PMT);


@@ -9,6 +9,7 @@
#include <linux/io.h>
#include "../vsec.h"
#include "telemetry.h"
/* PMT access types */
#define ACCESS_BARID 2
@@ -18,7 +19,26 @@
#define GET_BIR(v) ((v) & GENMASK(2, 0))
#define GET_ADDRESS(v) ((v) & GENMASK(31, 3))
struct pci_dev;
struct telem_endpoint {
struct pci_dev *pcidev;
struct telem_header header;
void __iomem *base;
bool present;
struct kref kref;
};
struct intel_pmt_header {
u32 base_offset;
u32 size;
u32 guid;
u8 access_type;
};
struct intel_pmt_entry {
struct telem_endpoint *ep;
struct intel_pmt_header header;
struct bin_attribute pmt_bin_attr;
struct kobject *kobj;
void __iomem *disc_table;
@@ -29,20 +49,14 @@ struct intel_pmt_entry {
int devid;
};
struct intel_pmt_namespace {
const char *name;
struct xarray *xa;
const struct attribute_group *attr_grp;
int (*pmt_header_decode)(struct intel_pmt_entry *entry,
struct device *dev);
int (*pmt_add_endpoint)(struct intel_pmt_entry *entry,
struct pci_dev *pdev);
};
bool intel_pmt_is_early_client_hw(struct device *dev);


@@ -223,10 +223,10 @@ static const struct attribute_group pmt_crashlog_group = {
};
static int pmt_crashlog_header_decode(struct intel_pmt_entry *entry,
struct device *dev)
{
void __iomem *disc_table = entry->disc_table;
struct intel_pmt_header *header = &entry->header;
struct crashlog_entry *crashlog;
if (!pmt_crashlog_supported(entry))


@@ -30,6 +30,15 @@
/* Used by client hardware to identify a fixed telemetry entry*/
#define TELEM_CLIENT_FIXED_BLOCK_GUID 0x10000000
#define NUM_BYTES_QWORD(v) ((v) << 3)
#define SAMPLE_ID_OFFSET(v) ((v) << 3)
#define NUM_BYTES_DWORD(v) ((v) << 2)
#define SAMPLE_ID_OFFSET32(v) ((v) << 2)
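These helpers encode the telemetry layout: samples are 8-byte (qword) or 4-byte (dword) slots, and a sample id is converted to a byte offset by the same shift. The bounds check in `pmt_telem_read()` below then rejects any read that would run past the region. A standalone illustration of that arithmetic (`read_in_bounds` is an illustrative helper, not driver API):

```c
/* Same shifts as the driver's helpers above. */
#define NUM_BYTES_QWORD(v)	((v) << 3)
#define SAMPLE_ID_OFFSET(v)	((v) << 3)

/* Mirrors pmt_telem_read()'s check: offset + length must fit in size. */
static int read_in_bounds(unsigned int id, unsigned int count,
			  unsigned int size)
{
	return SAMPLE_ID_OFFSET(id) + NUM_BYTES_QWORD(count) <= size;
}
```

Reading 4 qwords starting at sample id 2 touches bytes [16, 48), so it fits a 64-byte region, while 7 qwords from the same id would not (and would return -EINVAL in the driver).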
/* Protects access to the xarray of telemetry endpoint handles */
static DEFINE_MUTEX(ep_lock);
enum telem_type {
TELEM_TYPE_PUNIT = 0,
TELEM_TYPE_CRASHLOG,
@@ -58,10 +67,10 @@ static bool pmt_telem_region_overlaps(struct intel_pmt_entry *entry,
}
static int pmt_telem_header_decode(struct intel_pmt_entry *entry,
struct device *dev)
{
void __iomem *disc_table = entry->disc_table;
struct intel_pmt_header *header = &entry->header;
if (pmt_telem_region_overlaps(entry, dev))
return 1;
@@ -84,21 +93,195 @@ static int pmt_telem_header_decode(struct intel_pmt_entry *entry,
return 0;
}
static int pmt_telem_add_endpoint(struct intel_pmt_entry *entry,
struct pci_dev *pdev)
{
struct telem_endpoint *ep;
/* Endpoint lifetimes are managed by kref, not devres */
entry->ep = kzalloc(sizeof(*(entry->ep)), GFP_KERNEL);
if (!entry->ep)
return -ENOMEM;
ep = entry->ep;
ep->pcidev = pdev;
ep->header.access_type = entry->header.access_type;
ep->header.guid = entry->header.guid;
ep->header.base_offset = entry->header.base_offset;
ep->header.size = entry->header.size;
ep->base = entry->base;
ep->present = true;
kref_init(&ep->kref);
return 0;
}
static DEFINE_XARRAY_ALLOC(telem_array);
static struct intel_pmt_namespace pmt_telem_ns = {
.name = "telem",
.xa = &telem_array,
.pmt_header_decode = pmt_telem_header_decode,
.pmt_add_endpoint = pmt_telem_add_endpoint,
};
/* Called when all users unregister and the device is removed */
static void pmt_telem_ep_release(struct kref *kref)
{
struct telem_endpoint *ep;
ep = container_of(kref, struct telem_endpoint, kref);
kfree(ep);
}
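As the comment notes, the endpoint lives until both the driver's initial reference and every registered user's reference are dropped; only the final `kref_put()` invokes the release callback. A userspace sketch of that pattern with C11 atomics (the `endpoint` struct and `ep_get`/`ep_put` names are illustrative, and a `released` flag stands in for `kfree()` so the behavior is observable):

```c
#include <stdatomic.h>

struct endpoint {
	atomic_int refcount;
	int released;	/* stand-in for the kfree() in the release callback */
};

static void ep_get(struct endpoint *ep)	/* like kref_get() */
{
	atomic_fetch_add(&ep->refcount, 1);
}

static void ep_put(struct endpoint *ep)	/* like kref_put() */
{
	/* fetch_sub returns the old value; 1 means we dropped the last ref */
	if (atomic_fetch_sub(&ep->refcount, 1) == 1)
		ep->released = 1;	/* release callback would free here */
}
```

Unregistering a user while the driver still holds its probe-time reference leaves the object alive; the object is released only when `remove()` drops that last reference, matching the ordering in `pmt_telem_remove()`.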
unsigned long pmt_telem_get_next_endpoint(unsigned long start)
{
struct intel_pmt_entry *entry;
unsigned long found_idx;
mutex_lock(&ep_lock);
xa_for_each_start(&telem_array, found_idx, entry, start) {
/*
* Return first found index after start.
* 0 is not valid id.
*/
if (found_idx > start)
break;
}
mutex_unlock(&ep_lock);
return found_idx == start ? 0 : found_idx;
}
EXPORT_SYMBOL_NS_GPL(pmt_telem_get_next_endpoint, INTEL_PMT_TELEMETRY);
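The iteration contract above is: return the first present devid strictly greater than `start`, or 0 when none remains (0 is never a valid id), so callers can walk all endpoints with a simple while loop. A standalone sketch of that contract over a sorted id list (`next_endpoint` is an illustrative stand-in for the xarray walk):

```c
#include <stddef.h>

/* First id greater than start, or 0 when there is none. */
static unsigned long next_endpoint(const unsigned long *ids, size_t n,
				   unsigned long start)
{
	for (size_t i = 0; i < n; i++)
		if (ids[i] > start)
			return ids[i];
	return 0;
}
```

Starting from 0 and feeding each result back in visits every id exactly once and terminates when 0 is returned, which is exactly how `pmt_telem_find_and_register_endpoint()` below drives its loop.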
struct telem_endpoint *pmt_telem_register_endpoint(int devid)
{
struct intel_pmt_entry *entry;
unsigned long index = devid;
mutex_lock(&ep_lock);
entry = xa_find(&telem_array, &index, index, XA_PRESENT);
if (!entry) {
mutex_unlock(&ep_lock);
return ERR_PTR(-ENXIO);
}
kref_get(&entry->ep->kref);
mutex_unlock(&ep_lock);
return entry->ep;
}
EXPORT_SYMBOL_NS_GPL(pmt_telem_register_endpoint, INTEL_PMT_TELEMETRY);
void pmt_telem_unregister_endpoint(struct telem_endpoint *ep)
{
kref_put(&ep->kref, pmt_telem_ep_release);
}
EXPORT_SYMBOL_NS_GPL(pmt_telem_unregister_endpoint, INTEL_PMT_TELEMETRY);
int pmt_telem_get_endpoint_info(int devid, struct telem_endpoint_info *info)
{
struct intel_pmt_entry *entry;
unsigned long index = devid;
int err = 0;
if (!info)
return -EINVAL;
mutex_lock(&ep_lock);
entry = xa_find(&telem_array, &index, index, XA_PRESENT);
if (!entry) {
err = -ENXIO;
goto unlock;
}
info->pdev = entry->ep->pcidev;
info->header = entry->ep->header;
unlock:
mutex_unlock(&ep_lock);
return err;
}
EXPORT_SYMBOL_NS_GPL(pmt_telem_get_endpoint_info, INTEL_PMT_TELEMETRY);
int pmt_telem_read(struct telem_endpoint *ep, u32 id, u64 *data, u32 count)
{
u32 offset, size;
if (!ep->present)
return -ENODEV;
offset = SAMPLE_ID_OFFSET(id);
size = ep->header.size;
if (offset + NUM_BYTES_QWORD(count) > size)
return -EINVAL;
memcpy_fromio(data, ep->base + offset, NUM_BYTES_QWORD(count));
return ep->present ? 0 : -EPIPE;
}
EXPORT_SYMBOL_NS_GPL(pmt_telem_read, INTEL_PMT_TELEMETRY);
int pmt_telem_read32(struct telem_endpoint *ep, u32 id, u32 *data, u32 count)
{
u32 offset, size;
if (!ep->present)
return -ENODEV;
offset = SAMPLE_ID_OFFSET32(id);
size = ep->header.size;
if (offset + NUM_BYTES_DWORD(count) > size)
return -EINVAL;
memcpy_fromio(data, ep->base + offset, NUM_BYTES_DWORD(count));
return ep->present ? 0 : -EPIPE;
}
EXPORT_SYMBOL_NS_GPL(pmt_telem_read32, INTEL_PMT_TELEMETRY);
struct telem_endpoint *
pmt_telem_find_and_register_endpoint(struct pci_dev *pcidev, u32 guid, u16 pos)
{
int devid = 0;
int inst = 0;
int err = 0;
while ((devid = pmt_telem_get_next_endpoint(devid))) {
struct telem_endpoint_info ep_info;
err = pmt_telem_get_endpoint_info(devid, &ep_info);
if (err)
return ERR_PTR(err);
if (ep_info.header.guid == guid && ep_info.pdev == pcidev) {
if (inst == pos)
return pmt_telem_register_endpoint(devid);
++inst;
}
}
return ERR_PTR(-ENXIO);
}
EXPORT_SYMBOL_NS_GPL(pmt_telem_find_and_register_endpoint, INTEL_PMT_TELEMETRY);
static void pmt_telem_remove(struct auxiliary_device *auxdev)
{
struct pmt_telem_priv *priv = auxiliary_get_drvdata(auxdev);
int i;
for (i = 0; i < priv->num_entries; i++)
intel_pmt_dev_destroy(&priv->entry[i], &pmt_telem_ns);
}
mutex_lock(&ep_lock);
for (i = 0; i < priv->num_entries; i++) {
struct intel_pmt_entry *entry = &priv->entry[i];
kref_put(&entry->ep->kref, pmt_telem_ep_release);
intel_pmt_dev_destroy(entry, &pmt_telem_ns);
}
mutex_unlock(&ep_lock);
}
static int pmt_telem_probe(struct auxiliary_device *auxdev, const struct auxiliary_device_id *id)
{
@ -117,7 +300,9 @@ static int pmt_telem_probe(struct auxiliary_device *auxdev, const struct auxilia
for (i = 0; i < intel_vsec_dev->num_resources; i++) {
struct intel_pmt_entry *entry = &priv->entry[priv->num_entries];
mutex_lock(&ep_lock);
ret = intel_pmt_dev_create(entry, &pmt_telem_ns, intel_vsec_dev, i);
mutex_unlock(&ep_lock);
if (ret < 0)
goto abort_probe;
if (ret)


@ -0,0 +1,126 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _TELEMETRY_H
#define _TELEMETRY_H
/* Telemetry types */
#define PMT_TELEM_TELEMETRY 0
#define PMT_TELEM_CRASHLOG 1
struct telem_endpoint;
struct pci_dev;
struct telem_header {
u8 access_type;
u16 size;
u32 guid;
u32 base_offset;
};
struct telem_endpoint_info {
struct pci_dev *pdev;
struct telem_header header;
};
/**
* pmt_telem_get_next_endpoint() - Get next device id for a telemetry endpoint
* @start: starting devid to look from
*
* This function can be used in a while loop predicate to retrieve the devid
* of all available telemetry endpoints. pmt_telem_get_endpoint_info()
* and pmt_telem_register_endpoint() can be used inside of the loop to examine
* endpoint info and register to receive a pointer to the endpoint. The pointer
* is then usable in the telemetry read calls to access the telemetry data.
*
* Return:
* * devid - devid of the next present endpoint from start
* * 0 - when no more endpoints are present after start
*/
unsigned long pmt_telem_get_next_endpoint(unsigned long start);
/**
* pmt_telem_register_endpoint() - Register a telemetry endpoint
* @devid: device id/handle of the telemetry endpoint
*
* Increments the kref usage counter for the endpoint.
*
* Return:
* * endpoint - On success returns pointer to the telemetry endpoint
* * -ENXIO - telemetry endpoint not found
*/
struct telem_endpoint *pmt_telem_register_endpoint(int devid);
/**
* pmt_telem_unregister_endpoint() - Unregister a telemetry endpoint
* @ep: Telemetry endpoint to unregister
*
* Decrements the kref usage counter for the endpoint.
*/
void pmt_telem_unregister_endpoint(struct telem_endpoint *ep);
/**
* pmt_telem_get_endpoint_info() - Get info for an endpoint from its devid
* @devid: device id/handle of the telemetry endpoint
* @info: Endpoint info structure to be populated
*
* Return:
* * 0 - Success
* * -ENXIO - telemetry endpoint not found for the devid
* * -EINVAL - @info is NULL
*/
int pmt_telem_get_endpoint_info(int devid, struct telem_endpoint_info *info);
/**
* pmt_telem_find_and_register_endpoint() - Get a telemetry endpoint from
* pci_dev device, guid and pos
* @pcidev: PCI device inside the Intel VSEC
* @guid: GUID of the telemetry space
* @pos: Instance of the guid
*
* Return:
* * endpoint - On success returns pointer to the telemetry endpoint
* * -ENXIO - telemetry endpoint not found
*/
struct telem_endpoint *pmt_telem_find_and_register_endpoint(struct pci_dev *pcidev,
u32 guid, u16 pos);
/**
* pmt_telem_read() - Read qwords from counter SRAM using sample id
* @ep: Telemetry endpoint to be read
* @id: The beginning sample id of the metric(s) to be read
* @data: Allocated qword buffer
* @count: Number of qwords requested
*
* Callers must ensure reads are aligned. When the call returns -ENODEV,
* the device has been removed and callers should unregister the telemetry
* endpoint.
*
* Return:
* * 0 - Success
* * -ENODEV - The device is not present.
* * -EINVAL - The offset is out of bounds
* * -EPIPE - The device was removed during the read. Data was written
* but should be considered invalid.
*/
int pmt_telem_read(struct telem_endpoint *ep, u32 id, u64 *data, u32 count);
/**
* pmt_telem_read32() - Read dwords from counter SRAM using sample id
* @ep: Telemetry endpoint to be read
* @id: The beginning sample id of the metric(s) to be read
* @data: Allocated dword buffer
* @count: Number of dwords requested
*
* Callers must ensure reads are aligned. When the call returns -ENODEV,
* the device has been removed and callers should unregister the telemetry
* endpoint.
*
* Return:
* * 0 - Success
* * -ENODEV - The device is not present.
* * -EINVAL - The offset is out of bounds
* * -EPIPE - The device was removed during the read. Data was written
* but should be considered invalid.
*/
int pmt_telem_read32(struct telem_endpoint *ep, u32 id, u32 *data, u32 count);
#endif


@ -234,6 +234,7 @@ struct perf_level {
* @saved_clos_configs: Saved SST-CP CLOS configuration, restored on resume
* @saved_clos_assocs: Saved SST-CP CLOS associations, restored on resume
* @saved_pp_control: Saved SST-PP control information, restored on resume
* @write_blocked: Write operations are blocked, so SST state can't be changed
*
* This structure is used to store complete SST information for a power_domain. This information
* is used to service read/write requests for any SST IOCTL. Each physical CPU package can have multiple
@ -259,6 +260,7 @@ struct tpmi_per_power_domain_info {
u64 saved_clos_configs[4];
u64 saved_clos_assocs[4];
u64 saved_pp_control;
bool write_blocked;
};
/**
@ -515,6 +517,9 @@ static long isst_if_clos_param(void __user *argp)
return -EINVAL;
if (clos_param.get_set) {
if (power_domain_info->write_blocked)
return -EPERM;
_write_cp_info("clos.min_freq", clos_param.min_freq_mhz,
(SST_CLOS_CONFIG_0_OFFSET + clos_param.clos * SST_REG_SIZE),
SST_CLOS_CONFIG_MIN_START, SST_CLOS_CONFIG_MIN_WIDTH,
@ -602,6 +607,9 @@ static long isst_if_clos_assoc(void __user *argp)
power_domain_info = &sst_inst->power_domain_info[punit_id];
if (assoc_cmds.get_set && power_domain_info->write_blocked)
return -EPERM;
offset = SST_CLOS_ASSOC_0_OFFSET +
(punit_cpu_no / SST_CLOS_ASSOC_CPUS_PER_REG) * SST_REG_SIZE;
shift = punit_cpu_no % SST_CLOS_ASSOC_CPUS_PER_REG;
@ -752,6 +760,9 @@ static int isst_if_set_perf_level(void __user *argp)
if (!power_domain_info)
return -EINVAL;
if (power_domain_info->write_blocked)
return -EPERM;
if (!(power_domain_info->pp_header.allowed_level_mask & BIT(perf_level.level)))
return -EINVAL;
@ -809,6 +820,9 @@ static int isst_if_set_perf_feature(void __user *argp)
if (!power_domain_info)
return -EINVAL;
if (power_domain_info->write_blocked)
return -EPERM;
_write_pp_info("perf_feature", perf_feature.feature, SST_PP_CONTROL_OFFSET,
SST_PP_FEATURE_STATE_START, SST_PP_FEATURE_STATE_WIDTH,
SST_MUL_FACTOR_NONE)
@ -1257,11 +1271,21 @@ static long isst_if_def_ioctl(struct file *file, unsigned int cmd,
int tpmi_sst_dev_add(struct auxiliary_device *auxdev)
{
bool read_blocked = 0, write_blocked = 0;
struct intel_tpmi_plat_info *plat_info;
struct tpmi_sst_struct *tpmi_sst;
int i, ret, pkg = 0, inst = 0;
int num_resources;
ret = tpmi_get_feature_status(auxdev, TPMI_ID_SST, &read_blocked, &write_blocked);
if (ret)
dev_info(&auxdev->dev, "Can't read feature status: ignoring read/write blocked status\n");
if (read_blocked) {
dev_info(&auxdev->dev, "Firmware has blocked reads, exiting\n");
return -ENODEV;
}
plat_info = tpmi_get_platform_data(auxdev);
if (!plat_info) {
dev_err(&auxdev->dev, "No platform info\n");
@ -1306,6 +1330,7 @@ int tpmi_sst_dev_add(struct auxiliary_device *auxdev)
tpmi_sst->power_domain_info[i].package_id = pkg;
tpmi_sst->power_domain_info[i].power_domain_id = i;
tpmi_sst->power_domain_info[i].auxdev = auxdev;
tpmi_sst->power_domain_info[i].write_blocked = write_blocked;
tpmi_sst->power_domain_info[i].sst_base = devm_ioremap_resource(&auxdev->dev, res);
if (IS_ERR(tpmi_sst->power_domain_info[i].sst_base))
return PTR_ERR(tpmi_sst->power_domain_info[i].sst_base);


@ -170,19 +170,6 @@ struct tpmi_feature_state {
u32 locked:1;
} __packed;
/*
* List of supported TPMI IDs.
* Some TPMI IDs are not used by Linux, so the numbers are not consecutive.
*/
enum intel_tpmi_id {
TPMI_ID_RAPL = 0, /* Running Average Power Limit */
TPMI_ID_PEM = 1, /* Power and Perf excursion Monitor */
TPMI_ID_UNCORE = 2, /* Uncore Frequency Scaling */
TPMI_ID_SST = 5, /* Speed Select Technology */
TPMI_CONTROL_ID = 0x80, /* Special ID for getting feature status */
TPMI_INFO_ID = 0x81, /* Special ID for PCI BDF and Package ID information */
};
/*
* The size from hardware is in u32 units. This size comes from trusted hardware,
* but it is better to verify on pre-silicon platforms. Set size to 0 when invalid.
@ -345,8 +332,8 @@ err_unlock:
return ret;
}
int tpmi_get_feature_status(struct auxiliary_device *auxdev, int feature_id,
int *locked, int *disabled)
int tpmi_get_feature_status(struct auxiliary_device *auxdev,
int feature_id, bool *read_blocked, bool *write_blocked)
{
struct intel_vsec_device *intel_vsec_dev = dev_to_ivdev(auxdev->dev.parent);
struct intel_tpmi_info *tpmi_info = auxiliary_get_drvdata(&intel_vsec_dev->auxdev);
@ -357,8 +344,8 @@ int tpmi_get_feature_status(struct auxiliary_device *auxdev, int feature_id,
if (ret)
return ret;
*locked = feature_state.locked;
*disabled = !feature_state.enabled;
*read_blocked = feature_state.read_blocked;
*write_blocked = feature_state.write_blocked;
return 0;
}
@ -598,9 +585,21 @@ static int tpmi_create_device(struct intel_tpmi_info *tpmi_info,
struct intel_vsec_device *vsec_dev = tpmi_info->vsec_dev;
char feature_id_name[TPMI_FEATURE_NAME_LEN];
struct intel_vsec_device *feature_vsec_dev;
struct tpmi_feature_state feature_state;
struct resource *res, *tmp;
const char *name;
int i;
int i, ret;
ret = tpmi_read_feature_status(tpmi_info, pfs->pfs_header.tpmi_id, &feature_state);
if (ret)
return ret;
/*
* If not enabled, continue to look at other features in the PFS, so return -EOPNOTSUPP.
* This will not cause failure of loading of this driver.
*/
if (!feature_state.enabled)
return -EOPNOTSUPP;
name = intel_tpmi_name(pfs->pfs_header.tpmi_id);
if (!name)


@ -66,6 +66,7 @@ struct tpmi_uncore_struct {
int min_ratio;
struct tpmi_uncore_power_domain_info *pd_info;
struct tpmi_uncore_cluster_info root_cluster;
bool write_blocked;
};
#define UNCORE_GENMASK_MIN_RATIO GENMASK_ULL(21, 15)
@ -157,6 +158,9 @@ static int uncore_write_control_freq(struct uncore_data *data, unsigned int inpu
cluster_info = container_of(data, struct tpmi_uncore_cluster_info, uncore_data);
uncore_root = cluster_info->uncore_root;
if (uncore_root->write_blocked)
return -EPERM;
/* Update each cluster in a package */
if (cluster_info->root_domain) {
struct tpmi_uncore_struct *uncore_root = cluster_info->uncore_root;
@ -233,11 +237,21 @@ static void remove_cluster_entries(struct tpmi_uncore_struct *tpmi_uncore)
static int uncore_probe(struct auxiliary_device *auxdev, const struct auxiliary_device_id *id)
{
bool read_blocked = 0, write_blocked = 0;
struct intel_tpmi_plat_info *plat_info;
struct tpmi_uncore_struct *tpmi_uncore;
int ret, i, pkg = 0;
int num_resources;
ret = tpmi_get_feature_status(auxdev, TPMI_ID_UNCORE, &read_blocked, &write_blocked);
if (ret)
dev_info(&auxdev->dev, "Can't read feature status: ignoring blocked status\n");
if (read_blocked) {
dev_info(&auxdev->dev, "Firmware has blocked reads, exiting\n");
return -ENODEV;
}
/* Get number of power domains, which is equal to number of resources */
num_resources = tpmi_get_resource_count(auxdev);
if (!num_resources)
@ -266,6 +280,7 @@ static int uncore_probe(struct auxiliary_device *auxdev, const struct auxiliary_
}
tpmi_uncore->power_domain_count = num_resources;
tpmi_uncore->write_blocked = write_blocked;
/* Get the package ID from the TPMI core */
plat_info = tpmi_get_platform_data(auxdev);


@ -205,6 +205,16 @@ static const struct x86_cpu_id intel_uncore_cpu_ids[] = {
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, NULL),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, NULL),
X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, NULL),
X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE, NULL),
X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE_L, NULL),
X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, NULL),
X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE_L, NULL),
X86_MATCH_INTEL_FAM6_MODEL(CANNONLAKE_L, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_L, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ROCKETLAKE, NULL),
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, NULL),
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, NULL),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, NULL),
@ -212,6 +222,9 @@ static const struct x86_cpu_id intel_uncore_cpu_ids[] = {
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S, NULL),
X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE, NULL),
X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ARROWLAKE, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ARROWLAKE_H, NULL),
X86_MATCH_INTEL_FAM6_MODEL(LUNARLAKE_M, NULL),
{}
};
MODULE_DEVICE_TABLE(x86cpu, intel_uncore_cpu_ids);


@ -15,6 +15,7 @@
#include <linux/auxiliary_bus.h>
#include <linux/bits.h>
#include <linux/cleanup.h>
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/idr.h>
@ -24,13 +25,6 @@
#include "vsec.h"
/* Intel DVSEC offsets */
#define INTEL_DVSEC_ENTRIES 0xA
#define INTEL_DVSEC_SIZE 0xB
#define INTEL_DVSEC_TABLE 0xC
#define INTEL_DVSEC_TABLE_BAR(x) ((x) & GENMASK(2, 0))
#define INTEL_DVSEC_TABLE_OFFSET(x) ((x) & GENMASK(31, 3))
#define TABLE_OFFSET_SHIFT 3
#define PMT_XA_START 0
#define PMT_XA_MAX INT_MAX
#define PMT_XA_LIMIT XA_LIMIT(PMT_XA_START, PMT_XA_MAX)
@ -39,34 +33,6 @@ static DEFINE_IDA(intel_vsec_ida);
static DEFINE_IDA(intel_vsec_sdsi_ida);
static DEFINE_XARRAY_ALLOC(auxdev_array);
/**
* struct intel_vsec_header - Common fields of Intel VSEC and DVSEC registers.
* @rev: Revision ID of the VSEC/DVSEC register space
* @length: Length of the VSEC/DVSEC register space
* @id: ID of the feature
* @num_entries: Number of instances of the feature
* @entry_size: Size of the discovery table for each feature
* @tbir: BAR containing the discovery tables
* @offset: BAR offset of start of the first discovery table
*/
struct intel_vsec_header {
u8 rev;
u16 length;
u16 id;
u8 num_entries;
u8 entry_size;
u8 tbir;
u32 offset;
};
enum intel_vsec_id {
VSEC_ID_TELEMETRY = 2,
VSEC_ID_WATCHER = 3,
VSEC_ID_CRASHLOG = 4,
VSEC_ID_SDSI = 65,
VSEC_ID_TPMI = 66,
};
static const char *intel_vsec_name(enum intel_vsec_id id)
{
switch (id) {
@ -120,6 +86,8 @@ static void intel_vsec_dev_release(struct device *dev)
{
struct intel_vsec_device *intel_vsec_dev = dev_to_ivdev(dev);
xa_erase(&auxdev_array, intel_vsec_dev->id);
mutex_lock(&vsec_ida_lock);
ida_free(intel_vsec_dev->ida, intel_vsec_dev->auxdev.id);
mutex_unlock(&vsec_ida_lock);
@ -135,19 +103,28 @@ int intel_vsec_add_aux(struct pci_dev *pdev, struct device *parent,
struct auxiliary_device *auxdev = &intel_vsec_dev->auxdev;
int ret, id;
mutex_lock(&vsec_ida_lock);
ret = ida_alloc(intel_vsec_dev->ida, GFP_KERNEL);
mutex_unlock(&vsec_ida_lock);
if (!parent)
return -EINVAL;
ret = xa_alloc(&auxdev_array, &intel_vsec_dev->id, intel_vsec_dev,
PMT_XA_LIMIT, GFP_KERNEL);
if (ret < 0) {
kfree(intel_vsec_dev->resource);
kfree(intel_vsec_dev);
return ret;
}
if (!parent)
parent = &pdev->dev;
mutex_lock(&vsec_ida_lock);
id = ida_alloc(intel_vsec_dev->ida, GFP_KERNEL);
mutex_unlock(&vsec_ida_lock);
if (id < 0) {
xa_erase(&auxdev_array, intel_vsec_dev->id);
kfree(intel_vsec_dev->resource);
kfree(intel_vsec_dev);
return id;
}
auxdev->id = ret;
auxdev->id = id;
auxdev->name = name;
auxdev->dev.parent = parent;
auxdev->dev.release = intel_vsec_dev_release;
@ -164,29 +141,27 @@ int intel_vsec_add_aux(struct pci_dev *pdev, struct device *parent,
return ret;
}
ret = devm_add_action_or_reset(parent, intel_vsec_remove_aux,
return devm_add_action_or_reset(parent, intel_vsec_remove_aux,
auxdev);
if (ret < 0)
return ret;
/* Add auxdev to list */
ret = xa_alloc(&auxdev_array, &id, intel_vsec_dev, PMT_XA_LIMIT,
GFP_KERNEL);
if (ret)
return ret;
return 0;
}
EXPORT_SYMBOL_NS_GPL(intel_vsec_add_aux, INTEL_VSEC);
static int intel_vsec_add_dev(struct pci_dev *pdev, struct intel_vsec_header *header,
struct intel_vsec_platform_info *info)
{
struct intel_vsec_device *intel_vsec_dev;
struct resource *res, *tmp;
struct intel_vsec_device __free(kfree) *intel_vsec_dev = NULL;
struct resource __free(kfree) *res = NULL;
struct resource *tmp;
struct device *parent;
unsigned long quirks = info->quirks;
u64 base_addr;
int i;
if (info->parent)
parent = info->parent;
else
parent = &pdev->dev;
if (!intel_vsec_supported(header->id, info->caps))
return -EINVAL;
@ -205,37 +180,50 @@ static int intel_vsec_add_dev(struct pci_dev *pdev, struct intel_vsec_header *he
return -ENOMEM;
res = kcalloc(header->num_entries, sizeof(*res), GFP_KERNEL);
if (!res) {
kfree(intel_vsec_dev);
if (!res)
return -ENOMEM;
}
if (quirks & VSEC_QUIRK_TABLE_SHIFT)
header->offset >>= TABLE_OFFSET_SHIFT;
if (info->base_addr)
base_addr = info->base_addr;
else
base_addr = pdev->resource[header->tbir].start;
/*
* The DVSEC/VSEC contains the starting offset and count for a block of
* discovery tables. Create a resource array of these tables for the
* auxiliary device driver.
*/
for (i = 0, tmp = res; i < header->num_entries; i++, tmp++) {
tmp->start = pdev->resource[header->tbir].start +
header->offset + i * (header->entry_size * sizeof(u32));
tmp->start = base_addr + header->offset + i * (header->entry_size * sizeof(u32));
tmp->end = tmp->start + (header->entry_size * sizeof(u32)) - 1;
tmp->flags = IORESOURCE_MEM;
/* Check resource is not in use */
if (!request_mem_region(tmp->start, resource_size(tmp), ""))
return -EBUSY;
release_mem_region(tmp->start, resource_size(tmp));
}
intel_vsec_dev->pcidev = pdev;
intel_vsec_dev->resource = res;
intel_vsec_dev->resource = no_free_ptr(res);
intel_vsec_dev->num_resources = header->num_entries;
intel_vsec_dev->info = info;
intel_vsec_dev->quirks = info->quirks;
intel_vsec_dev->base_addr = info->base_addr;
if (header->id == VSEC_ID_SDSI)
intel_vsec_dev->ida = &intel_vsec_sdsi_ida;
else
intel_vsec_dev->ida = &intel_vsec_ida;
return intel_vsec_add_aux(pdev, NULL, intel_vsec_dev,
/*
* Pass the ownership of intel_vsec_dev and resource within it to
* intel_vsec_add_aux()
*/
return intel_vsec_add_aux(pdev, parent, no_free_ptr(intel_vsec_dev),
intel_vsec_name(header->id));
}
@ -353,6 +341,16 @@ static bool intel_vsec_walk_vsec(struct pci_dev *pdev,
return have_devices;
}
void intel_vsec_register(struct pci_dev *pdev,
struct intel_vsec_platform_info *info)
{
if (!pdev || !info)
return;
intel_vsec_walk_header(pdev, info);
}
EXPORT_SYMBOL_NS_GPL(intel_vsec_register, INTEL_VSEC);
static int intel_vsec_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct intel_vsec_platform_info *info;
@ -421,6 +419,11 @@ static const struct intel_vsec_platform_info tgl_info = {
.quirks = VSEC_QUIRK_TABLE_SHIFT | VSEC_QUIRK_EARLY_HW,
};
/* LNL info */
static const struct intel_vsec_platform_info lnl_info = {
.caps = VSEC_CAP_TELEMETRY | VSEC_CAP_WATCHER,
};
#define PCI_DEVICE_ID_INTEL_VSEC_ADL 0x467d
#define PCI_DEVICE_ID_INTEL_VSEC_DG1 0x490e
#define PCI_DEVICE_ID_INTEL_VSEC_MTL_M 0x7d0d
@ -428,6 +431,7 @@ static const struct intel_vsec_platform_info tgl_info = {
#define PCI_DEVICE_ID_INTEL_VSEC_OOBMSM 0x09a7
#define PCI_DEVICE_ID_INTEL_VSEC_RPL 0xa77d
#define PCI_DEVICE_ID_INTEL_VSEC_TGL 0x9a0d
#define PCI_DEVICE_ID_INTEL_VSEC_LNL_M 0x647d
static const struct pci_device_id intel_vsec_pci_ids[] = {
{ PCI_DEVICE_DATA(INTEL, VSEC_ADL, &tgl_info) },
{ PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) },
@ -436,6 +440,7 @@ static const struct pci_device_id intel_vsec_pci_ids[] = {
{ PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM, &oobmsm_info) },
{ PCI_DEVICE_DATA(INTEL, VSEC_RPL, &tgl_info) },
{ PCI_DEVICE_DATA(INTEL, VSEC_TGL, &tgl_info) },
{ PCI_DEVICE_DATA(INTEL, VSEC_LNL_M, &lnl_info) },
{ }
};
MODULE_DEVICE_TABLE(pci, intel_vsec_pci_ids);


@ -11,9 +11,45 @@
#define VSEC_CAP_SDSI BIT(3)
#define VSEC_CAP_TPMI BIT(4)
/* Intel DVSEC offsets */
#define INTEL_DVSEC_ENTRIES 0xA
#define INTEL_DVSEC_SIZE 0xB
#define INTEL_DVSEC_TABLE 0xC
#define INTEL_DVSEC_TABLE_BAR(x) ((x) & GENMASK(2, 0))
#define INTEL_DVSEC_TABLE_OFFSET(x) ((x) & GENMASK(31, 3))
#define TABLE_OFFSET_SHIFT 3
struct pci_dev;
struct resource;
enum intel_vsec_id {
VSEC_ID_TELEMETRY = 2,
VSEC_ID_WATCHER = 3,
VSEC_ID_CRASHLOG = 4,
VSEC_ID_SDSI = 65,
VSEC_ID_TPMI = 66,
};
/**
* struct intel_vsec_header - Common fields of Intel VSEC and DVSEC registers.
* @rev: Revision ID of the VSEC/DVSEC register space
* @length: Length of the VSEC/DVSEC register space
* @id: ID of the feature
* @num_entries: Number of instances of the feature
* @entry_size: Size of the discovery table for each feature
* @tbir: BAR containing the discovery tables
* @offset: BAR offset of start of the first discovery table
*/
struct intel_vsec_header {
u8 rev;
u16 length;
u16 id;
u8 num_entries;
u8 entry_size;
u8 tbir;
u32 offset;
};
enum intel_vsec_quirks {
/* Watcher feature not supported */
VSEC_QUIRK_NO_WATCHER = BIT(0),
@ -33,9 +69,11 @@ enum intel_vsec_quirks {
/* Platform specific data */
struct intel_vsec_platform_info {
struct device *parent;
struct intel_vsec_header **headers;
unsigned long caps;
unsigned long quirks;
u64 base_addr;
};
struct intel_vsec_device {
@ -43,10 +81,12 @@ struct intel_vsec_device {
struct pci_dev *pcidev;
struct resource *resource;
struct ida *ida;
struct intel_vsec_platform_info *info;
int num_resources;
int id; /* xa */
void *priv_data;
size_t priv_data_size;
unsigned long quirks;
u64 base_addr;
};
int intel_vsec_add_aux(struct pci_dev *pdev, struct device *parent,
@ -62,4 +102,7 @@ static inline struct intel_vsec_device *auxdev_to_ivdev(struct auxiliary_device
{
return container_of(auxdev, struct intel_vsec_device, auxdev);
}
void intel_vsec_register(struct pci_dev *pdev,
struct intel_vsec_platform_info *info);
#endif


@ -25,18 +25,13 @@
static int get_fwu_request(struct device *dev, u32 *out)
{
struct acpi_buffer result = {ACPI_ALLOCATE_BUFFER, NULL};
union acpi_object *obj;
acpi_status status;
status = wmi_query_block(INTEL_WMI_SBL_GUID, 0, &result);
if (ACPI_FAILURE(status)) {
dev_err(dev, "wmi_query_block failed\n");
obj = wmidev_block_query(to_wmi_device(dev), 0);
if (!obj)
return -ENODEV;
}
obj = (union acpi_object *)result.pointer;
if (!obj || obj->type != ACPI_TYPE_INTEGER) {
if (obj->type != ACPI_TYPE_INTEGER) {
dev_warn(dev, "wmi_query_block returned invalid value\n");
kfree(obj);
return -EINVAL;
@ -58,7 +53,7 @@ static int set_fwu_request(struct device *dev, u32 in)
input.length = sizeof(u32);
input.pointer = &value;
status = wmi_set_block(INTEL_WMI_SBL_GUID, 0, &input);
status = wmidev_block_set(to_wmi_device(dev), 0, &input);
if (ACPI_FAILURE(status)) {
dev_err(dev, "wmi_set_block failed\n");
return -ENODEV;


@ -32,8 +32,7 @@ static ssize_t force_power_store(struct device *dev,
mode = hex_to_bin(buf[0]);
dev_dbg(dev, "force_power: storing %#x\n", mode);
if (mode == 0 || mode == 1) {
status = wmi_evaluate_method(INTEL_WMI_THUNDERBOLT_GUID, 0, 1,
&input, NULL);
status = wmidev_evaluate_method(to_wmi_device(dev), 0, 1, &input, NULL);
if (ACPI_FAILURE(status)) {
dev_dbg(dev, "force_power: failed to evaluate ACPI method\n");
return -ENODEV;


@ -1114,39 +1114,6 @@ static int ips_monitor(void *data)
return 0;
}
#if 0
#define THM_DUMPW(reg) \
{ \
u16 val = thm_readw(reg); \
dev_dbg(ips->dev, #reg ": 0x%04x\n", val); \
}
#define THM_DUMPL(reg) \
{ \
u32 val = thm_readl(reg); \
dev_dbg(ips->dev, #reg ": 0x%08x\n", val); \
}
#define THM_DUMPQ(reg) \
{ \
u64 val = thm_readq(reg); \
dev_dbg(ips->dev, #reg ": 0x%016x\n", val); \
}
static void dump_thermal_info(struct ips_driver *ips)
{
u16 ptl;
ptl = thm_readw(THM_PTL);
dev_dbg(ips->dev, "Processor temp limit: %d\n", ptl);
THM_DUMPW(THM_CTA);
THM_DUMPW(THM_TRC);
THM_DUMPW(THM_CTV1);
THM_DUMPL(THM_STS);
THM_DUMPW(THM_PTV);
THM_DUMPQ(THM_MGTV);
}
#endif
/**
* ips_irq_handler - handle temperature triggers and other IPS events
* @irq: irq number

File diff suppressed because it is too large.

@ -23,17 +23,14 @@
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/uuid.h>
#include <linux/wmi.h>
#include <linux/fs.h>
#include <uapi/linux/wmi.h>
MODULE_AUTHOR("Carlos Corbacho");
MODULE_DESCRIPTION("ACPI-WMI Mapping Driver");
@ -66,12 +63,9 @@ struct wmi_block {
struct wmi_device dev;
struct list_head list;
struct guid_block gblock;
struct miscdevice char_dev;
struct mutex char_mutex;
struct acpi_device *acpi_device;
wmi_notify_handler handler;
void *handler_data;
u64 req_buf_size;
unsigned long flags;
};
@ -85,16 +79,6 @@ struct wmi_block {
#define ACPI_WMI_STRING BIT(2) /* GUID takes & returns a string */
#define ACPI_WMI_EVENT BIT(3) /* GUID is an event */
static bool debug_event;
module_param(debug_event, bool, 0444);
MODULE_PARM_DESC(debug_event,
"Log WMI Events [0/1]");
static bool debug_dump_wdg;
module_param(debug_dump_wdg, bool, 0444);
MODULE_PARM_DESC(debug_dump_wdg,
"Dump available WMI interfaces [0/1]");
static const struct acpi_device_id wmi_device_ids[] = {
{"PNP0C14", 0},
{"pnp0c14", 0},
@ -106,6 +90,8 @@ MODULE_DEVICE_TABLE(acpi, wmi_device_ids);
static const char * const allow_duplicates[] = {
"05901221-D566-11D1-B2F0-00A0C9062910", /* wmi-bmof */
"8A42EA14-4F2A-FD45-6422-0087F7A7E608", /* dell-wmi-ddv */
"44FADEB1-B204-40F2-8581-394BBDC1B651", /* intel-wmi-sbl-fw-update */
"86CCFD48-205E-4A77-9C48-2021CBEDE341", /* intel-wmi-thunderbolt */
NULL
};
@ -146,23 +132,19 @@ static const void *find_guid_context(struct wmi_block *wblock,
static int get_subobj_info(acpi_handle handle, const char *pathname,
struct acpi_device_info **info)
{
struct acpi_device_info *dummy_info, **info_ptr;
acpi_handle subobj_handle;
acpi_status status;
status = acpi_get_handle(handle, (char *)pathname, &subobj_handle);
status = acpi_get_handle(handle, pathname, &subobj_handle);
if (status == AE_NOT_FOUND)
return -ENOENT;
else if (ACPI_FAILURE(status))
return -EIO;
info_ptr = info ? info : &dummy_info;
status = acpi_get_object_info(subobj_handle, info_ptr);
if (ACPI_FAILURE(status))
return -EIO;
if (!info)
kfree(dummy_info);
status = acpi_get_object_info(subobj_handle, info);
if (ACPI_FAILURE(status))
return -EIO;
return 0;
}
@ -264,26 +246,6 @@ static void wmi_device_put(struct wmi_device *wdev)
* Exported WMI functions
*/
/**
* set_required_buffer_size - Sets the buffer size needed for performing IOCTL
* @wdev: A wmi bus device from a driver
* @length: Required buffer size
*
* Allocates memory needed for buffer, stores the buffer size in that memory.
*
* Return: 0 on success or a negative error code for failure.
*/
int set_required_buffer_size(struct wmi_device *wdev, u64 length)
{
struct wmi_block *wblock;
wblock = container_of(wdev, struct wmi_block, dev);
wblock->req_buf_size = length;
return 0;
}
EXPORT_SYMBOL_GPL(set_required_buffer_size);
/**
* wmi_instance_count - Get number of WMI object instances
* @guid_string: 36 char string of the form fa50ff2b-f2e8-45de-83fa-65417f2f49ba
@ -536,41 +498,50 @@ EXPORT_SYMBOL_GPL(wmidev_block_query);
*
* Return: acpi_status signaling success or error.
*/
acpi_status wmi_set_block(const char *guid_string, u8 instance,
const struct acpi_buffer *in)
acpi_status wmi_set_block(const char *guid_string, u8 instance, const struct acpi_buffer *in)
{
struct wmi_block *wblock;
struct guid_block *block;
struct wmi_device *wdev;
acpi_handle handle;
struct acpi_object_list input;
union acpi_object params[2];
char method[WMI_ACPI_METHOD_NAME_SIZE];
acpi_status status;
if (!in)
return AE_BAD_DATA;
wdev = wmi_find_device_by_guid(guid_string);
if (IS_ERR(wdev))
return AE_ERROR;
wblock = container_of(wdev, struct wmi_block, dev);
block = &wblock->gblock;
handle = wblock->acpi_device->handle;
status = wmidev_block_set(wdev, instance, in);
wmi_device_put(wdev);
if (block->instance_count <= instance) {
status = AE_BAD_PARAMETER;
return status;
}
EXPORT_SYMBOL_GPL(wmi_set_block);
goto err_wdev_put;
}
/**
* wmidev_block_set - Write to a WMI block
* @wdev: A wmi bus device from a driver
* @instance: Instance index
* @in: Buffer containing new values for the data block
*
* Write contents of the input buffer to an ACPI-WMI data block.
*
* Return: acpi_status signaling success or error.
*/
acpi_status wmidev_block_set(struct wmi_device *wdev, u8 instance, const struct acpi_buffer *in)
{
struct wmi_block *wblock = container_of(wdev, struct wmi_block, dev);
acpi_handle handle = wblock->acpi_device->handle;
struct guid_block *block = &wblock->gblock;
char method[WMI_ACPI_METHOD_NAME_SIZE];
struct acpi_object_list input;
union acpi_object params[2];
if (!in)
return AE_BAD_DATA;
if (block->instance_count <= instance)
return AE_BAD_PARAMETER;
/* Check GUID is a data block */
if (block->flags & (ACPI_WMI_EVENT | ACPI_WMI_METHOD)) {
status = AE_ERROR;
goto err_wdev_put;
}
if (block->flags & (ACPI_WMI_EVENT | ACPI_WMI_METHOD))
return AE_ERROR;
input.count = 2;
input.pointer = params;
@ -582,73 +553,9 @@ acpi_status wmi_set_block(const char *guid_string, u8 instance,
get_acpi_method_name(wblock, 'S', method);
status = acpi_evaluate_object(handle, method, &input, NULL);
err_wdev_put:
wmi_device_put(wdev);
return status;
}
EXPORT_SYMBOL_GPL(wmi_set_block);
static void wmi_dump_wdg(const struct guid_block *g)
{
pr_info("%pUL:\n", &g->guid);
if (g->flags & ACPI_WMI_EVENT)
pr_info("\tnotify_id: 0x%02X\n", g->notify_id);
else
pr_info("\tobject_id: %2pE\n", g->object_id);
pr_info("\tinstance_count: %d\n", g->instance_count);
pr_info("\tflags: %#x", g->flags);
if (g->flags) {
if (g->flags & ACPI_WMI_EXPENSIVE)
pr_cont(" ACPI_WMI_EXPENSIVE");
if (g->flags & ACPI_WMI_METHOD)
pr_cont(" ACPI_WMI_METHOD");
if (g->flags & ACPI_WMI_STRING)
pr_cont(" ACPI_WMI_STRING");
if (g->flags & ACPI_WMI_EVENT)
pr_cont(" ACPI_WMI_EVENT");
}
pr_cont("\n");
}
static void wmi_notify_debug(u32 value, void *context)
{
struct acpi_buffer response = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
acpi_status status;
status = wmi_get_event_data(value, &response);
if (status != AE_OK) {
pr_info("bad event status 0x%x\n", status);
return;
}
obj = response.pointer;
if (!obj)
return;
pr_info("DEBUG: event 0x%02X ", value);
switch (obj->type) {
case ACPI_TYPE_BUFFER:
pr_cont("BUFFER_TYPE - length %u\n", obj->buffer.length);
break;
case ACPI_TYPE_STRING:
pr_cont("STRING_TYPE - %s\n", obj->string.pointer);
break;
case ACPI_TYPE_INTEGER:
pr_cont("INTEGER_TYPE - %llu\n", obj->integer.value);
break;
case ACPI_TYPE_PACKAGE:
pr_cont("PACKAGE_TYPE - %u elements\n", obj->package.count);
break;
default:
pr_cont("object type 0x%X\n", obj->type);
}
kfree(obj);
return acpi_evaluate_object(handle, method, &input, NULL);
}
EXPORT_SYMBOL_GPL(wmidev_block_set);
/**
* wmi_install_notify_handler - Register handler for WMI events (deprecated)
@ -678,8 +585,7 @@ acpi_status wmi_install_notify_handler(const char *guid,
acpi_status wmi_status;
if (guid_equal(&block->gblock.guid, &guid_input)) {
if (block->handler &&
block->handler != wmi_notify_debug)
if (block->handler)
return AE_ALREADY_ACQUIRED;
block->handler = handler;
@ -720,22 +626,14 @@ acpi_status wmi_remove_notify_handler(const char *guid)
acpi_status wmi_status;
if (guid_equal(&block->gblock.guid, &guid_input)) {
if (!block->handler ||
block->handler == wmi_notify_debug)
if (!block->handler)
return AE_NULL_ENTRY;
if (debug_event) {
block->handler = wmi_notify_debug;
status = AE_OK;
} else {
wmi_status = wmi_method_enable(block, false);
block->handler = NULL;
block->handler_data = NULL;
if ((wmi_status != AE_OK) ||
((wmi_status == AE_OK) &&
(status == AE_NOT_EXIST)))
status = wmi_status;
}
wmi_status = wmi_method_enable(block, false);
block->handler = NULL;
block->handler_data = NULL;
if (wmi_status != AE_OK || (wmi_status == AE_OK && status == AE_NOT_EXIST))
status = wmi_status;
}
}
@ -956,111 +854,12 @@ static int wmi_dev_match(struct device *dev, struct device_driver *driver)
return 0;
}
static int wmi_char_open(struct inode *inode, struct file *filp)
{
/*
* The miscdevice already stores a pointer to itself
* inside filp->private_data
*/
struct wmi_block *wblock = container_of(filp->private_data, struct wmi_block, char_dev);
filp->private_data = wblock;
return nonseekable_open(inode, filp);
}
static ssize_t wmi_char_read(struct file *filp, char __user *buffer,
size_t length, loff_t *offset)
{
struct wmi_block *wblock = filp->private_data;
return simple_read_from_buffer(buffer, length, offset,
&wblock->req_buf_size,
sizeof(wblock->req_buf_size));
}
static long wmi_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
struct wmi_ioctl_buffer __user *input =
(struct wmi_ioctl_buffer __user *) arg;
struct wmi_block *wblock = filp->private_data;
struct wmi_ioctl_buffer *buf;
struct wmi_driver *wdriver;
int ret;
if (_IOC_TYPE(cmd) != WMI_IOC)
return -ENOTTY;
/* make sure we're not calling a higher instance than exists*/
if (_IOC_NR(cmd) >= wblock->gblock.instance_count)
return -EINVAL;
mutex_lock(&wblock->char_mutex);
buf = wblock->handler_data;
if (get_user(buf->length, &input->length)) {
dev_dbg(&wblock->dev.dev, "Read length from user failed\n");
ret = -EFAULT;
goto out_ioctl;
}
/* if it's too small, abort */
if (buf->length < wblock->req_buf_size) {
dev_err(&wblock->dev.dev,
"Buffer %lld too small, need at least %lld\n",
buf->length, wblock->req_buf_size);
ret = -EINVAL;
goto out_ioctl;
}
/* if it's too big, warn, driver will only use what is needed */
if (buf->length > wblock->req_buf_size)
dev_warn(&wblock->dev.dev,
"Buffer %lld is bigger than required %lld\n",
buf->length, wblock->req_buf_size);
/* copy the structure from userspace */
if (copy_from_user(buf, input, wblock->req_buf_size)) {
dev_dbg(&wblock->dev.dev, "Copy %llu from user failed\n",
wblock->req_buf_size);
ret = -EFAULT;
goto out_ioctl;
}
/* let the driver do any filtering and do the call */
wdriver = drv_to_wdrv(wblock->dev.dev.driver);
if (!try_module_get(wdriver->driver.owner)) {
ret = -EBUSY;
goto out_ioctl;
}
ret = wdriver->filter_callback(&wblock->dev, cmd, buf);
module_put(wdriver->driver.owner);
if (ret)
goto out_ioctl;
/* return the result (only up to our internal buffer size) */
if (copy_to_user(input, buf, wblock->req_buf_size)) {
dev_dbg(&wblock->dev.dev, "Copy %llu to user failed\n",
wblock->req_buf_size);
ret = -EFAULT;
}
out_ioctl:
mutex_unlock(&wblock->char_mutex);
return ret;
}
static const struct file_operations wmi_fops = {
.owner = THIS_MODULE,
.read = wmi_char_read,
.open = wmi_char_open,
.unlocked_ioctl = wmi_ioctl,
.compat_ioctl = compat_ptr_ioctl,
};
static int wmi_dev_probe(struct device *dev)
{
struct wmi_block *wblock = dev_to_wblock(dev);
struct wmi_driver *wdriver = drv_to_wdrv(dev->driver);
int ret = 0;
char *buf;
if (ACPI_FAILURE(wmi_method_enable(wblock, true)))
dev_warn(dev, "failed to enable device -- probing anyway\n");
@@ -1068,55 +867,17 @@ static int wmi_dev_probe(struct device *dev)
if (wdriver->probe) {
ret = wdriver->probe(dev_to_wdev(dev),
find_guid_context(wblock, wdriver));
if (ret != 0)
goto probe_failure;
}
if (!ret) {
if (ACPI_FAILURE(wmi_method_enable(wblock, false)))
dev_warn(dev, "Failed to disable device\n");
/* driver wants a character device made */
if (wdriver->filter_callback) {
/* check that required buffer size declared by driver or MOF */
if (!wblock->req_buf_size) {
dev_err(&wblock->dev.dev,
"Required buffer size not set\n");
ret = -EINVAL;
goto probe_failure;
}
wblock->handler_data = kmalloc(wblock->req_buf_size,
GFP_KERNEL);
if (!wblock->handler_data) {
ret = -ENOMEM;
goto probe_failure;
}
buf = kasprintf(GFP_KERNEL, "wmi/%s", wdriver->driver.name);
if (!buf) {
ret = -ENOMEM;
goto probe_string_failure;
}
wblock->char_dev.minor = MISC_DYNAMIC_MINOR;
wblock->char_dev.name = buf;
wblock->char_dev.fops = &wmi_fops;
wblock->char_dev.mode = 0444;
ret = misc_register(&wblock->char_dev);
if (ret) {
dev_warn(dev, "failed to register char dev: %d\n", ret);
ret = -ENOMEM;
goto probe_misc_failure;
return ret;
}
}
set_bit(WMI_PROBED, &wblock->flags);
return 0;
probe_misc_failure:
kfree(buf);
probe_string_failure:
kfree(wblock->handler_data);
probe_failure:
if (ACPI_FAILURE(wmi_method_enable(wblock, false)))
dev_warn(dev, "failed to disable device\n");
return ret;
return 0;
}
static void wmi_dev_remove(struct device *dev)
@@ -1126,12 +887,6 @@ static void wmi_dev_remove(struct device *dev)
clear_bit(WMI_PROBED, &wblock->flags);
if (wdriver->filter_callback) {
misc_deregister(&wblock->char_dev);
kfree(wblock->char_dev.name);
kfree(wblock->handler_data);
}
if (wdriver->remove)
wdriver->remove(dev_to_wdev(dev));
@@ -1203,7 +958,6 @@ static int wmi_create_device(struct device *wmi_bus_dev,
if (wblock->gblock.flags & ACPI_WMI_METHOD) {
wblock->dev.dev.type = &wmi_type_method;
mutex_init(&wblock->char_mutex);
goto out_init;
}
@@ -1240,9 +994,7 @@ static int wmi_create_device(struct device *wmi_bus_dev,
kfree(info);
get_acpi_method_name(wblock, 'S', method);
result = get_subobj_info(device->handle, method, NULL);
if (result == 0)
if (acpi_has_method(device->handle, method))
wblock->dev.setable = true;
out_init:
@@ -1337,9 +1089,6 @@ static int parse_wdg(struct device *wmi_bus_dev, struct platform_device *pdev)
total = obj->buffer.length / sizeof(struct guid_block);
for (i = 0; i < total; i++) {
if (debug_dump_wdg)
wmi_dump_wdg(&gblock[i]);
if (!gblock[i].instance_count) {
dev_info(wmi_bus_dev, FW_INFO "%pUL has zero instances\n", &gblock[i].guid);
continue;
@@ -1365,17 +1114,10 @@ static int parse_wdg(struct device *wmi_bus_dev, struct platform_device *pdev)
list_add_tail(&wblock->list, &wmi_block_list);
if (debug_event) {
wblock->handler = wmi_notify_debug;
wmi_method_enable(wblock, true);
}
retval = wmi_add_device(pdev, &wblock->dev);
if (retval) {
dev_err(wmi_bus_dev, "failed to register %pUL\n",
&wblock->gblock.guid);
if (debug_event)
wmi_method_enable(wblock, false);
list_del(&wblock->list);
put_device(&wblock->dev.dev);
@@ -1396,7 +1138,7 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
u32 bits, u64 *value,
void *handler_context, void *region_context)
{
int result = 0, i = 0;
int result = 0;
u8 temp = 0;
if ((address > 0xFF) || !value)
@@ -1410,9 +1152,9 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
if (function == ACPI_READ) {
result = ec_read(address, &temp);
(*value) |= ((u64)temp) << i;
*value = temp;
} else {
temp = 0xff & ((*value) >> i);
temp = 0xff & *value;
result = ec_write(address, temp);
}
@@ -1428,24 +1170,13 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
}
}
static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
void *context)
static int wmi_notify_device(struct device *dev, void *data)
{
struct wmi_block *wblock = NULL, *iter;
struct wmi_block *wblock = dev_to_wblock(dev);
u32 *event = data;
list_for_each_entry(iter, &wmi_block_list, list) {
struct guid_block *block = &iter->gblock;
if (iter->acpi_device->handle == handle &&
(block->flags & ACPI_WMI_EVENT) &&
(block->notify_id == event)) {
wblock = iter;
break;
}
}
if (!wblock)
return;
if (!(wblock->gblock.flags & ACPI_WMI_EVENT && wblock->gblock.notify_id == *event))
return 0;
/* If a driver is bound, then notify the driver. */
if (test_bit(WMI_PROBED, &wblock->flags) && wblock->dev.dev.driver) {
@@ -1457,7 +1188,7 @@ static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
status = get_event_data(wblock, &evdata);
if (ACPI_FAILURE(status)) {
dev_warn(&wblock->dev.dev, "failed to get event data\n");
return;
return -EIO;
}
}
@@ -1467,16 +1198,20 @@ static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
kfree(evdata.pointer);
} else if (wblock->handler) {
/* Legacy handler */
wblock->handler(event, wblock->handler_data);
wblock->handler(*event, wblock->handler_data);
}
if (debug_event)
pr_info("DEBUG: GUID %pUL event 0x%02X\n", &wblock->gblock.guid, event);
acpi_bus_generate_netlink_event(wblock->acpi_device->pnp.device_class,
dev_name(&wblock->dev.dev), *event, 0);
acpi_bus_generate_netlink_event(
wblock->acpi_device->pnp.device_class,
dev_name(&wblock->dev.dev),
event, 0);
return -EBUSY;
}
static void acpi_wmi_notify_handler(acpi_handle handle, u32 event, void *context)
{
struct device *wmi_bus_dev = context;
device_for_each_child(wmi_bus_dev, &event, wmi_notify_device);
}
static int wmi_remove_device(struct device *dev, void *data)
@@ -1491,16 +1226,31 @@ static int wmi_remove_device(struct device *dev, void *data)
static void acpi_wmi_remove(struct platform_device *device)
{
struct acpi_device *acpi_device = ACPI_COMPANION(&device->dev);
struct device *wmi_bus_device = dev_get_drvdata(&device->dev);
acpi_remove_notify_handler(acpi_device->handle, ACPI_ALL_NOTIFY,
acpi_wmi_notify_handler);
acpi_remove_address_space_handler(acpi_device->handle,
ACPI_ADR_SPACE_EC, &acpi_wmi_ec_space_handler);
device_for_each_child_reverse(wmi_bus_device, NULL, wmi_remove_device);
device_unregister(wmi_bus_device);
}
static void acpi_wmi_remove_notify_handler(void *data)
{
struct acpi_device *acpi_device = data;
acpi_remove_notify_handler(acpi_device->handle, ACPI_ALL_NOTIFY, acpi_wmi_notify_handler);
}
static void acpi_wmi_remove_address_space_handler(void *data)
{
struct acpi_device *acpi_device = data;
acpi_remove_address_space_handler(acpi_device->handle, ACPI_ADR_SPACE_EC,
&acpi_wmi_ec_space_handler);
}
static void acpi_wmi_remove_bus_device(void *data)
{
struct device *wmi_bus_dev = data;
device_unregister(wmi_bus_dev);
}
static int acpi_wmi_probe(struct platform_device *device)
@@ -1516,6 +1266,17 @@ static int acpi_wmi_probe(struct platform_device *device)
return -ENODEV;
}
wmi_bus_dev = device_create(&wmi_bus_class, &device->dev, MKDEV(0, 0), NULL, "wmi_bus-%s",
dev_name(&device->dev));
if (IS_ERR(wmi_bus_dev))
return PTR_ERR(wmi_bus_dev);
error = devm_add_action_or_reset(&device->dev, acpi_wmi_remove_bus_device, wmi_bus_dev);
if (error < 0)
return error;
dev_set_drvdata(&device->dev, wmi_bus_dev);
status = acpi_install_address_space_handler(acpi_device->handle,
ACPI_ADR_SPACE_EC,
&acpi_wmi_ec_space_handler,
@@ -1524,46 +1285,29 @@ static int acpi_wmi_probe(struct platform_device *device)
dev_err(&device->dev, "Error installing EC region handler\n");
return -ENODEV;
}
error = devm_add_action_or_reset(&device->dev, acpi_wmi_remove_address_space_handler,
acpi_device);
if (error < 0)
return error;
status = acpi_install_notify_handler(acpi_device->handle,
ACPI_ALL_NOTIFY,
acpi_wmi_notify_handler,
NULL);
status = acpi_install_notify_handler(acpi_device->handle, ACPI_ALL_NOTIFY,
acpi_wmi_notify_handler, wmi_bus_dev);
if (ACPI_FAILURE(status)) {
dev_err(&device->dev, "Error installing notify handler\n");
error = -ENODEV;
goto err_remove_ec_handler;
return -ENODEV;
}
wmi_bus_dev = device_create(&wmi_bus_class, &device->dev, MKDEV(0, 0),
NULL, "wmi_bus-%s", dev_name(&device->dev));
if (IS_ERR(wmi_bus_dev)) {
error = PTR_ERR(wmi_bus_dev);
goto err_remove_notify_handler;
}
dev_set_drvdata(&device->dev, wmi_bus_dev);
error = devm_add_action_or_reset(&device->dev, acpi_wmi_remove_notify_handler,
acpi_device);
if (error < 0)
return error;
error = parse_wdg(wmi_bus_dev, device);
if (error) {
pr_err("Failed to parse WDG method\n");
goto err_remove_busdev;
return error;
}
return 0;
err_remove_busdev:
device_unregister(wmi_bus_dev);
err_remove_notify_handler:
acpi_remove_notify_handler(acpi_device->handle, ACPI_ALL_NOTIFY,
acpi_wmi_notify_handler);
err_remove_ec_handler:
acpi_remove_address_space_handler(acpi_device->handle,
ACPI_ADR_SPACE_EC,
&acpi_wmi_ec_space_handler);
return error;
}
int __must_check __wmi_driver_register(struct wmi_driver *driver,


@@ -141,9 +141,11 @@ int x86_acpi_irq_helper_get(const struct x86_acpi_irq_data *data)
}
static int i2c_client_count;
static int spi_dev_count;
static int pdev_count;
static int serdev_count;
static struct i2c_client **i2c_clients;
static struct spi_device **spi_devs;
static struct platform_device **pdevs;
static struct serdev_device **serdevs;
static struct gpio_keys_button *buttons;
@@ -185,6 +187,46 @@ static __init int x86_instantiate_i2c_client(const struct x86_dev_info *dev_info
return 0;
}
static __init int x86_instantiate_spi_dev(const struct x86_dev_info *dev_info, int idx)
{
const struct x86_spi_dev_info *spi_dev_info = &dev_info->spi_dev_info[idx];
struct spi_board_info board_info = spi_dev_info->board_info;
struct spi_controller *controller;
struct acpi_device *adev;
acpi_handle handle;
acpi_status status;
board_info.irq = x86_acpi_irq_helper_get(&spi_dev_info->irq_data);
if (board_info.irq < 0)
return board_info.irq;
status = acpi_get_handle(NULL, spi_dev_info->ctrl_path, &handle);
if (ACPI_FAILURE(status)) {
pr_err("Error could not get %s handle\n", spi_dev_info->ctrl_path);
return -ENODEV;
}
adev = acpi_fetch_acpi_dev(handle);
if (!adev) {
pr_err("Error could not get adev for %s\n", spi_dev_info->ctrl_path);
return -ENODEV;
}
controller = acpi_spi_find_controller_by_adev(adev);
if (!controller) {
pr_err("Error could not get SPI controller for %s\n", spi_dev_info->ctrl_path);
return -ENODEV;
}
spi_devs[idx] = spi_new_device(controller, &board_info);
put_device(&controller->dev);
if (!spi_devs[idx])
return dev_err_probe(&controller->dev, -ENOMEM,
"creating SPI-device %d\n", idx);
return 0;
}
static __init int x86_instantiate_serdev(const struct x86_serdev_info *info, int idx)
{
struct acpi_device *ctrl_adev, *serdev_adev;
@@ -263,6 +305,11 @@ static void x86_android_tablet_remove(struct platform_device *pdev)
kfree(pdevs);
kfree(buttons);
for (i = 0; i < spi_dev_count; i++)
spi_unregister_device(spi_devs[i]);
kfree(spi_devs);
for (i = 0; i < i2c_client_count; i++)
i2c_unregister_device(i2c_clients[i]);
@@ -333,6 +380,21 @@ static __init int x86_android_tablet_probe(struct platform_device *pdev)
}
}
spi_devs = kcalloc(dev_info->spi_dev_count, sizeof(*spi_devs), GFP_KERNEL);
if (!spi_devs) {
x86_android_tablet_remove(pdev);
return -ENOMEM;
}
spi_dev_count = dev_info->spi_dev_count;
for (i = 0; i < spi_dev_count; i++) {
ret = x86_instantiate_spi_dev(dev_info, i);
if (ret < 0) {
x86_android_tablet_remove(pdev);
return ret;
}
}
/* + 1 to make space for (optional) gpio_keys_button pdev */
pdevs = kcalloc(dev_info->pdev_count + 1, sizeof(*pdevs), GFP_KERNEL);
if (!pdevs) {


@@ -12,6 +12,8 @@
#include <linux/efi.h>
#include <linux/gpio/machine.h>
#include <linux/mfd/arizona/pdata.h>
#include <linux/mfd/arizona/registers.h>
#include <linux/mfd/intel_soc_pmic.h>
#include <linux/pinctrl/consumer.h>
#include <linux/pinctrl/machine.h>
@@ -32,12 +34,30 @@
*
* To avoid having to have a similar hack in the mainline kernel program the
* LP8557 to directly set the level and use the lp855x_bl driver for control.
*
* The LP8557 can either be configured to multiply its PWM input and
* the I2C register set level (requiring both to be at 100% for 100% output);
* or to only take the I2C register set level into account.
*
* Multiplying the 2 levels is useful because this will turn off the backlight
* when the panel goes off and turns off its PWM output.
*
* But on some models the panel's PWM output defaults to a duty-cycle of
much less than 100%, severely limiting max brightness. In this case
* the LP8557 should be configured to only take the I2C register into
* account and the i915 driver must turn off the panel and the backlight
* separately using e.g. VBT MIPI sequences to turn off the backlight.
*/
static struct lp855x_platform_data lenovo_lp8557_pdata = {
static struct lp855x_platform_data lenovo_lp8557_pwm_and_reg_pdata = {
.device_control = 0x86,
.initial_brightness = 128,
};
static struct lp855x_platform_data lenovo_lp8557_reg_only_pdata = {
.device_control = 0x85,
.initial_brightness = 128,
};
/* Lenovo Yoga Book X90F / X90L's Android factory img has everything hardcoded */
static const struct property_entry lenovo_yb1_x90_wacom_props[] = {
@@ -120,7 +140,7 @@ static const struct x86_i2c_client_info lenovo_yb1_x90_i2c_clients[] __initconst
.type = "lp8557",
.addr = 0x2c,
.dev_name = "lp8557",
.platform_data = &lenovo_lp8557_pdata,
.platform_data = &lenovo_lp8557_pwm_and_reg_pdata,
},
.adapter_path = "\\_SB_.PCI0.I2C4",
}, {
@@ -356,7 +376,7 @@ static struct x86_i2c_client_info lenovo_yoga_tab2_830_1050_i2c_clients[] __init
.type = "lp8557",
.addr = 0x2c,
.dev_name = "lp8557",
.platform_data = &lenovo_lp8557_pdata,
.platform_data = &lenovo_lp8557_pwm_and_reg_pdata,
},
.adapter_path = "\\_SB_.I2C3",
},
@@ -653,12 +673,94 @@ static const struct x86_i2c_client_info lenovo_yt3_i2c_clients[] __initconst = {
.type = "lp8557",
.addr = 0x2c,
.dev_name = "lp8557",
.platform_data = &lenovo_lp8557_pdata,
.platform_data = &lenovo_lp8557_reg_only_pdata,
},
.adapter_path = "\\_SB_.PCI0.I2C1",
}
};
/*
* The AOSP 3.5 mm Headset: Accessory Specification gives the following values:
* Function A Play/Pause: 0 ohm
* Function D Voice assistant: 135 ohm
* Function B Volume Up 240 ohm
* Function C Volume Down 470 ohm
* Minimum Mic DC resistance 1000 ohm
* Minimum Ear speaker impedance 16 ohm
Note the first max value below must be less than the min. speaker impedance,
* to allow CTIA/OMTP detection to work. The other max values are the closest
value from extcon-arizona.c:arizona_micd_levels halfway between 2 button resistances.
*/
static const struct arizona_micd_range arizona_micd_aosp_ranges[] = {
{ .max = 11, .key = KEY_PLAYPAUSE },
{ .max = 186, .key = KEY_VOICECOMMAND },
{ .max = 348, .key = KEY_VOLUMEUP },
{ .max = 752, .key = KEY_VOLUMEDOWN },
};
/* YT3 WM5102 arizona_micd_config comes from Android kernel sources */
static struct arizona_micd_config lenovo_yt3_wm5102_micd_config[] = {
{ 0, 1, 0 },
{ ARIZONA_ACCDET_SRC, 2, 1 },
};
static struct arizona_pdata lenovo_yt3_wm5102_pdata = {
.irq_flags = IRQF_TRIGGER_LOW,
.micd_detect_debounce = 200,
.micd_ranges = arizona_micd_aosp_ranges,
.num_micd_ranges = ARRAY_SIZE(arizona_micd_aosp_ranges),
.hpdet_channel = ARIZONA_ACCDET_MODE_HPL,
/* Below settings come from Android kernel sources */
.micd_bias_start_time = 1,
.micd_rate = 6,
.micd_configs = lenovo_yt3_wm5102_micd_config,
.num_micd_configs = ARRAY_SIZE(lenovo_yt3_wm5102_micd_config),
.micbias = {
[0] = { /* MICBIAS1 */
.mV = 2800,
.ext_cap = 1,
.discharge = 1,
.soft_start = 0,
.bypass = 0,
},
[1] = { /* MICBIAS2 */
.mV = 2800,
.ext_cap = 1,
.discharge = 1,
.soft_start = 0,
.bypass = 0,
},
[2] = { /* MICBIAS3 */
.mV = 2800,
.ext_cap = 1,
.discharge = 1,
.soft_start = 0,
.bypass = 0,
},
},
};
static const struct x86_spi_dev_info lenovo_yt3_spi_devs[] __initconst = {
{
/* WM5102 codec */
.board_info = {
.modalias = "wm5102",
.platform_data = &lenovo_yt3_wm5102_pdata,
.max_speed_hz = 5000000,
},
.ctrl_path = "\\_SB_.PCI0.SPI1",
.irq_data = {
.type = X86_ACPI_IRQ_TYPE_GPIOINT,
.chip = "INT33FF:00",
.index = 91,
.trigger = ACPI_LEVEL_SENSITIVE,
.polarity = ACPI_ACTIVE_LOW,
.con_id = "wm5102_irq",
},
}
};
static int __init lenovo_yt3_init(void)
{
int ret;
@@ -702,14 +804,28 @@ static struct gpiod_lookup_table lenovo_yt3_hideep_gpios = {
},
};
static struct gpiod_lookup_table lenovo_yt3_wm5102_gpios = {
.dev_id = "spi1.0",
.table = {
GPIO_LOOKUP("INT33FF:00", 75, "wlf,spkvdd-ena", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("INT33FF:00", 81, "wlf,ldoena", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("INT33FF:00", 82, "reset", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("arizona", 2, "wlf,micd-pol", GPIO_ACTIVE_HIGH),
{ }
},
};
static struct gpiod_lookup_table * const lenovo_yt3_gpios[] = {
&lenovo_yt3_hideep_gpios,
&lenovo_yt3_wm5102_gpios,
NULL
};
const struct x86_dev_info lenovo_yt3_info __initconst = {
.i2c_client_info = lenovo_yt3_i2c_clients,
.i2c_client_count = ARRAY_SIZE(lenovo_yt3_i2c_clients),
.spi_dev_info = lenovo_yt3_spi_devs,
.spi_dev_count = ARRAY_SIZE(lenovo_yt3_spi_devs),
.gpiod_lookup_tables = lenovo_yt3_gpios,
.init = lenovo_yt3_init,
};


@@ -14,6 +14,7 @@
#include <linux/gpio_keys.h>
#include <linux/i2c.h>
#include <linux/irqdomain_defs.h>
#include <linux/spi/spi.h>
struct gpio_desc;
struct gpiod_lookup_table;
@@ -48,6 +49,12 @@ struct x86_i2c_client_info {
struct x86_acpi_irq_data irq_data;
};
struct x86_spi_dev_info {
struct spi_board_info board_info;
char *ctrl_path;
struct x86_acpi_irq_data irq_data;
};
struct x86_serdev_info {
const char *ctrl_hid;
const char *ctrl_uid;
@@ -72,10 +79,12 @@ struct x86_dev_info {
const struct software_node *bat_swnode;
struct gpiod_lookup_table * const *gpiod_lookup_tables;
const struct x86_i2c_client_info *i2c_client_info;
const struct x86_spi_dev_info *spi_dev_info;
const struct platform_device_info *pdev_info;
const struct x86_serdev_info *serdev_info;
const struct x86_gpio_button *gpio_button;
int i2c_client_count;
int spi_dev_count;
int pdev_count;
int serdev_count;
int gpio_button_count;


@@ -0,0 +1,91 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Wifi Band Exclusion Interface (AMD ACPI Implementation)
* Copyright (C) 2023 Advanced Micro Devices
*/
#ifndef _ACPI_AMD_WBRF_H
#define _ACPI_AMD_WBRF_H
#include <linux/device.h>
#include <linux/notifier.h>
/* The maximum number of frequency band ranges */
#define MAX_NUM_OF_WBRF_RANGES 11
/* Record actions */
#define WBRF_RECORD_ADD 0x0
#define WBRF_RECORD_REMOVE 0x1
/**
* struct freq_band_range - Wifi frequency band range definition
* @start: start frequency point (in Hz)
* @end: end frequency point (in Hz)
*/
struct freq_band_range {
u64 start;
u64 end;
};
/**
* struct wbrf_ranges_in_out - wbrf ranges info
* @num_of_ranges: total number of band ranges in this struct
* @band_list: array of Wifi band ranges
*/
struct wbrf_ranges_in_out {
u64 num_of_ranges;
struct freq_band_range band_list[MAX_NUM_OF_WBRF_RANGES];
};
/**
* enum wbrf_notifier_actions - wbrf notifier actions index
* @WBRF_CHANGED: there was some frequency band updates. The consumers
* should retrieve the latest active frequency bands.
*/
enum wbrf_notifier_actions {
WBRF_CHANGED,
};
#if IS_ENABLED(CONFIG_AMD_WBRF)
bool acpi_amd_wbrf_supported_producer(struct device *dev);
int acpi_amd_wbrf_add_remove(struct device *dev, uint8_t action, struct wbrf_ranges_in_out *in);
bool acpi_amd_wbrf_supported_consumer(struct device *dev);
int amd_wbrf_retrieve_freq_band(struct device *dev, struct wbrf_ranges_in_out *out);
int amd_wbrf_register_notifier(struct notifier_block *nb);
int amd_wbrf_unregister_notifier(struct notifier_block *nb);
#else
static inline
bool acpi_amd_wbrf_supported_consumer(struct device *dev)
{
return false;
}
static inline
int acpi_amd_wbrf_add_remove(struct device *dev, uint8_t action, struct wbrf_ranges_in_out *in)
{
return -ENODEV;
}
static inline
bool acpi_amd_wbrf_supported_producer(struct device *dev)
{
return false;
}
static inline
int amd_wbrf_retrieve_freq_band(struct device *dev, struct wbrf_ranges_in_out *out)
{
return -ENODEV;
}
static inline
int amd_wbrf_register_notifier(struct notifier_block *nb)
{
return -ENODEV;
}
static inline
int amd_wbrf_unregister_notifier(struct notifier_block *nb)
{
return -ENODEV;
}
#endif /* CONFIG_AMD_WBRF */
#endif /* _ACPI_AMD_WBRF_H */


@@ -12,6 +12,19 @@
#define TPMI_MINOR_VERSION(val) FIELD_GET(GENMASK(4, 0), val)
#define TPMI_MAJOR_VERSION(val) FIELD_GET(GENMASK(7, 5), val)
/*
* List of supported TPMI IDs.
* Some TPMI IDs are not used by Linux, so the numbers are not consecutive.
*/
enum intel_tpmi_id {
TPMI_ID_RAPL = 0, /* Running Average Power Limit */
TPMI_ID_PEM = 1, /* Power and Perf excursion Monitor */
TPMI_ID_UNCORE = 2, /* Uncore Frequency Scaling */
TPMI_ID_SST = 5, /* Speed Select Technology */
TPMI_CONTROL_ID = 0x80, /* Special ID for getting feature status */
TPMI_INFO_ID = 0x81, /* Special ID for PCI BDF and Package ID information */
};
/**
* struct intel_tpmi_plat_info - Platform information for a TPMI device instance
* @package_id: CPU Package id
@@ -32,7 +45,6 @@ struct intel_tpmi_plat_info {
struct intel_tpmi_plat_info *tpmi_get_platform_data(struct auxiliary_device *auxdev);
struct resource *tpmi_get_resource_at_index(struct auxiliary_device *auxdev, int index);
int tpmi_get_resource_count(struct auxiliary_device *auxdev);
int tpmi_get_feature_status(struct auxiliary_device *auxdev, int feature_id, int *locked,
int *disabled);
int tpmi_get_feature_status(struct auxiliary_device *auxdev, int feature_id, bool *read_blocked,
bool *write_blocked);
#endif


@@ -15,6 +15,6 @@ struct lpss_clk_data {
struct clk *clk;
};
extern int lpss_atom_clk_init(void);
int lpss_atom_clk_init(void);
#endif /* __CLK_LPSS_H */


@@ -11,7 +11,6 @@
#include <linux/device.h>
#include <linux/acpi.h>
#include <linux/mod_devicetable.h>
#include <uapi/linux/wmi.h>
/**
* struct wmi_device - WMI device structure
@@ -22,11 +21,17 @@
*/
struct wmi_device {
struct device dev;
/* private: used by the WMI driver core */
bool setable;
};
/**
* to_wmi_device() - Helper macro to cast a device to a wmi_device
* @device: device struct
*
* Cast a struct device to a struct wmi_device.
*/
#define to_wmi_device(device) container_of(device, struct wmi_device, dev)
extern acpi_status wmidev_evaluate_method(struct wmi_device *wdev,
u8 instance, u32 method_id,
const struct acpi_buffer *in,
@@ -35,9 +40,9 @@ extern acpi_status wmidev_evaluate_method(struct wmi_device *wdev,
extern union acpi_object *wmidev_block_query(struct wmi_device *wdev,
u8 instance);
u8 wmidev_instance_count(struct wmi_device *wdev);
acpi_status wmidev_block_set(struct wmi_device *wdev, u8 instance, const struct acpi_buffer *in);
extern int set_required_buffer_size(struct wmi_device *wdev, u64 length);
u8 wmidev_instance_count(struct wmi_device *wdev);
/**
* struct wmi_driver - WMI driver structure
@@ -47,11 +52,8 @@ extern int set_required_buffer_size(struct wmi_device *wdev, u64 length);
* @probe: Callback for device binding
* @remove: Callback for device unbinding
* @notify: Callback for receiving WMI events
* @filter_callback: Callback for filtering device IOCTLs
*
* This represents WMI drivers which handle WMI devices.
* @filter_callback is only necessary for drivers which
* want to set up a WMI IOCTL interface.
*/
struct wmi_driver {
struct device_driver driver;
@@ -61,8 +63,6 @@ struct wmi_driver {
int (*probe)(struct wmi_device *wdev, const void *context);
void (*remove)(struct wmi_device *wdev);
void (*notify)(struct wmi_device *device, union acpi_object *data);
long (*filter_callback)(struct wmi_device *wdev, unsigned int cmd,
struct wmi_ioctl_buffer *arg);
};
extern int __must_check __wmi_driver_register(struct wmi_driver *driver,