Merge tag 'pm-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These rework cpuidle governors to call tick_nohz_get_sleep_length()
  less often and fix one of them, rework hibernation to avoid storing
  pages filled with zeros in hibernation images, switch over some
  cpufreq drivers to use void remove callbacks, fix and clean up
  multiple cpufreq drivers, fix the devfreq core, update the cpupower
  utility and make other assorted improvements.

  Specifics:

    - Rework the menu and teo cpuidle governors to avoid calling
      tick_nohz_get_sleep_length(), which is likely to become quite
      expensive going forward, too often, and to improve the decisions
      on whether or not to stop the scheduler tick in the teo governor
      (Rafael Wysocki)

   - Improve the performance of cpufreq_stats_create_table() in some
     cases (Liao Chang)

   - Fix two issues in the amd-pstate-ut cpufreq driver (Swapnil Sapkal)

    - Use the clamp() helper macro to improve code readability in
      cpufreq_verify_within_limits() (Liao Chang; see the first sketch
      after this list)

   - Set stale CPU frequency to minimum in intel_pstate (Doug Smythies)

   - Migrate cpufreq drivers for various platforms to use void remove
     callback (Yangtao Li)

   - Add online/offline/exit hooks for Tegra driver (Sumit Gupta)

   - Explicitly include correct DT includes in cpufreq (Rob Herring)

   - Frequency domain updates for qcom-hw driver (Neil Armstrong)

    - Modify the AMD pstate driver to return the highest_perf value
      (Meng Li)

    - Generic cleanups for the cppc, mediatek and powernow drivers (Liao
      Chang, Konrad Dybcio)

   - Add more platforms to cpufreq-arm driver's blocklist
     (AngeloGioacchino Del Regno and Konrad Dybcio)

   - brcmstb-avs-cpufreq: Fix -Warray-bounds bug (Gustavo A. R. Silva)

   - Add device PM helpers to allow a device to remain powered-on during
     system-wide transitions (Ulf Hansson)

   - Rework hibernation memory snapshotting to avoid storing pages
     filled with zeros in hibernation image files (Brian Geffon)

    - Add a check to make sure that CPU latency QoS constraints do not
      use negative values (Clive Lin; see the second sketch after this
      list)

   - Optimize rp->domains memory allocation in the Intel RAPL power
     capping driver (xiongxin)

   - Remove recursion while parsing zones in the arm_scmi power capping
     driver (Cristian Marussi)

   - Fix memory leak in devfreq_dev_release() (Boris Brezillon)

   - Rewrite devfreq_monitor_start() kerneldoc comment (Manivannan
     Sadhasivam)

   - Explicitly include correct DT includes in devfreq (Rob Herring)

    - Remove the unused pm_runtime_update_max_time_suspended() extern
      declaration (YueHaibing)

   - Add turbo-boost support to cpupower (Wyes Karny)

   - Add support for amd_pstate mode change to cpupower (Wyes Karny)

   - Fix 'cpupower idle_set' command to accept only numeric values of
     arguments (Likhitha Korrapati)

   - Clean up OPP code and add new frequency related APIs to it (Viresh
     Kumar, Manivannan Sadhasivam)

   - Convert ti cpufreq/opp bindings to json schema (Nishanth Menon)"
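
   First sketch (illustrative only): the clamp() cleanup above replaces
   open-coded range checks in cpufreq_verify_within_limits() with the
   clamp() macro from <linux/minmax.h>. The function below is a
   self-contained approximation of the resulting shape, not the verbatim
   patch:

       #include <linux/cpufreq.h>
       #include <linux/minmax.h>

       /* Approximate post-cleanup shape of cpufreq_verify_within_limits(). */
       static void example_verify_within_limits(struct cpufreq_policy_data *policy,
                                                unsigned int min, unsigned int max)
       {
               /* clamp(val, lo, hi) == max(lo, min(val, hi)) */
               policy->min = clamp(policy->min, min, max);
               policy->max = clamp(policy->max, min, max);
       }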
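
   Second sketch (illustrative only): the CPU latency QoS check above
   means callers must pass non-negative latency bounds. A minimal,
   hypothetical user of the cpu_latency_qos_*() API looks roughly like
   this (the 20 usec bound and the my_* names are placeholders):

       #include <linux/pm_qos.h>

       static struct pm_qos_request my_qos_req;

       static void my_enter_low_latency(void)
       {
               /* Request a CPU wakeup latency bound; must be >= 0 now. */
               cpu_latency_qos_add_request(&my_qos_req, 20);
       }

       static void my_exit_low_latency(void)
       {
               cpu_latency_qos_remove_request(&my_qos_req);
       }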

* tag 'pm-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (74 commits)
  cpufreq: tegra194: remove opp table in exit hook
  cpufreq: powernow-k8: Use related_cpus instead of cpus in driver.exit()
  cpufreq: tegra194: add online/offline hooks
  cpuidle: teo: Avoid unnecessary variable assignments
  cpufreq: qcom-cpufreq-hw: add support for 4 freq domains
  dt-bindings: cpufreq: qcom-hw: add a 4th frequency domain
  cpufreq: amd-pstate-ut: Fix kernel panic when loading the driver
  cpufreq: amd-pstate-ut: Remove module parameter access
  cpufreq: Use clamp() helper macro to improve the code readability
  PM: sleep: Add helpers to allow a device to remain powered-on
  PM: QoS: Add check to make sure CPU latency is non-negative
  PM: runtime: Remove unused extern declaration of pm_runtime_update_max_time_suspended()
  cpufreq: intel_pstate: set stale CPU frequency to minimum
  cpufreq: stats: Improve the performance of cpufreq_stats_create_table()
  dt-bindings: cpufreq: Convert ti-cpufreq to json schema
  dt-bindings: opp: Convert ti-omap5-opp-supply to json schema
  OPP: Fix argument name in doc comment
  cpuidle: menu: Skip tick_nohz_get_sleep_length() call in some cases
  cpufreq: cppc: Set fie_disabled to FIE_DISABLED if fails to create kworker_fie
  cpufreq: cppc: cppc_cpufreq_get_rate() returns zero in all error cases.
  ...
Linus Torvalds 2023-08-28 18:04:39 -07:00
commit ccc5e98177
63 changed files with 1103 additions and 619 deletions

View File

@ -49,6 +49,7 @@ properties:
- description: Frequency domain 0 register region
- description: Frequency domain 1 register region
- description: Frequency domain 2 register region
- description: Frequency domain 3 register region
reg-names:
minItems: 1
@ -56,6 +57,7 @@ properties:
- const: freq-domain0
- const: freq-domain1
- const: freq-domain2
- const: freq-domain3
clocks:
items:
@ -69,7 +71,7 @@ properties:
interrupts:
minItems: 1
maxItems: 3
maxItems: 4
interrupt-names:
minItems: 1
@ -77,6 +79,7 @@ properties:
- const: dcvsh-irq-0
- const: dcvsh-irq-1
- const: dcvsh-irq-2
- const: dcvsh-irq-3
'#freq-domain-cells':
const: 1

View File

@ -1,132 +0,0 @@
TI CPUFreq and OPP bindings
================================
Certain TI SoCs, like those in the am335x, am437x, am57xx, and dra7xx
families support different OPPs depending on the silicon variant in use.
The ti-cpufreq driver can use revision and an efuse value from the SoC to
provide the OPP framework with supported hardware information. This is
used to determine which OPPs from the operating-points-v2 table get enabled
when it is parsed by the OPP framework.
Required properties:
--------------------
In 'cpus' nodes:
- operating-points-v2: Phandle to the operating-points-v2 table to use.
In 'operating-points-v2' table:
- compatible: Should be
- 'operating-points-v2-ti-cpu' for am335x, am43xx, and dra7xx/am57xx,
omap34xx, omap36xx and am3517 SoCs
- syscon: A phandle pointing to a syscon node representing the control module
register space of the SoC.
Optional properties:
--------------------
- "vdd-supply", "vbb-supply": to define two regulators for dra7xx
- "cpu0-supply", "vbb-supply": to define two regulators for omap36xx
For each opp entry in 'operating-points-v2' table:
- opp-supported-hw: Two bitfields indicating:
1. Which revision of the SoC the OPP is supported by
2. Which eFuse bits indicate this OPP is available
A bitwise AND is performed against these values and if any bit
matches, the OPP gets enabled.
Example:
--------
/* From arch/arm/boot/dts/am33xx.dtsi */
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
compatible = "arm,cortex-a8";
device_type = "cpu";
reg = <0>;
operating-points-v2 = <&cpu0_opp_table>;
clocks = <&dpll_mpu_ck>;
clock-names = "cpu";
clock-latency = <300000>; /* From omap-cpufreq driver */
};
};
/*
* cpu0 has different OPPs depending on SoC revision and some on revisions
* 0x2 and 0x4 have eFuse bits that indicate if they are available or not
*/
cpu0_opp_table: opp-table {
compatible = "operating-points-v2-ti-cpu";
syscon = <&scm_conf>;
/*
* The three following nodes are marked with opp-suspend
* because they can not be enabled simultaneously on a
* single SoC.
*/
opp50-300000000 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <950000 931000 969000>;
opp-supported-hw = <0x06 0x0010>;
opp-suspend;
};
opp100-275000000 {
opp-hz = /bits/ 64 <275000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x01 0x00FF>;
opp-suspend;
};
opp100-300000000 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x06 0x0020>;
opp-suspend;
};
opp100-500000000 {
opp-hz = /bits/ 64 <500000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x01 0xFFFF>;
};
opp100-600000000 {
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x06 0x0040>;
};
opp120-600000000 {
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <1200000 1176000 1224000>;
opp-supported-hw = <0x01 0xFFFF>;
};
opp120-720000000 {
opp-hz = /bits/ 64 <720000000>;
opp-microvolt = <1200000 1176000 1224000>;
opp-supported-hw = <0x06 0x0080>;
};
oppturbo-720000000 {
opp-hz = /bits/ 64 <720000000>;
opp-microvolt = <1260000 1234800 1285200>;
opp-supported-hw = <0x01 0xFFFF>;
};
oppturbo-800000000 {
opp-hz = /bits/ 64 <800000000>;
opp-microvolt = <1260000 1234800 1285200>;
opp-supported-hw = <0x06 0x0100>;
};
oppnitro-1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <1325000 1298500 1351500>;
opp-supported-hw = <0x04 0x0200>;
};
};

View File

@ -0,0 +1,92 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/opp/operating-points-v2-ti-cpu.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI CPU OPP (Operating Performance Points)
description:
On TI SoCs, like those in the AM335x, AM437x, AM57xx, AM62x, and DRA7xx
families, the supported subset of CPU frequencies and the voltage value of
each OPP vary based on the silicon variant in use. The data sheet sections
corresponding to "Operating Performance Points" describe the frequency
and voltage values based on device type and speed bin information
blown in corresponding eFuse bits as referred to by the Technical
Reference Manual.
This document extends the operating-points-v2 binding by providing
the hardware description for the scheme mentioned above.
maintainers:
- Nishanth Menon <nm@ti.com>
allOf:
- $ref: opp-v2-base.yaml#
properties:
compatible:
const: operating-points-v2-ti-cpu
syscon:
$ref: /schemas/types.yaml#/definitions/phandle
description: |
points to syscon node representing the control module
register space of the SoC.
opp-shared: true
patternProperties:
'^opp(-?[0-9]+)*$':
type: object
additionalProperties: false
properties:
clock-latency-ns: true
opp-hz: true
opp-microvolt: true
opp-supported-hw: true
opp-suspend: true
turbo-mode: true
required:
- opp-hz
- opp-supported-hw
required:
- compatible
- syscon
additionalProperties: false
examples:
- |
opp-table {
compatible = "operating-points-v2-ti-cpu";
syscon = <&scm_conf>;
opp-300000000 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x06 0x0020>;
opp-suspend;
};
opp-500000000 {
opp-hz = /bits/ 64 <500000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x01 0xFFFF>;
};
opp-600000000 {
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <1100000 1078000 1122000>;
opp-supported-hw = <0x06 0x0040>;
};
opp-1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <1325000 1298500 1351500>;
opp-supported-hw = <0x04 0x0200>;
};
};

View File

@ -56,7 +56,7 @@ patternProperties:
need to be configured and that is left for the implementation
specific binding.
minItems: 1
maxItems: 16
maxItems: 32
items:
maxItems: 1

View File

@ -0,0 +1,101 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/opp/ti,omap-opp-supply.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments OMAP compatible OPP supply
description:
OMAP5, DRA7, and AM57 families of SoCs have Class 0 AVS eFuse
registers, which contain OPP-specific voltage information tailored
for the specific device. This binding provides the information
needed to describe such hardware values and use them to program
the primary regulator during an OPP transition.
Also, some supplies may have an associated vbb-supply, an Adaptive
Body Bias regulator, which must transition in a specific sequence
w.r.t the vdd-supply and clk when making an OPP transition. By
supplying two regulators to the device that will undergo OPP
transitions, we can use the multi-regulator support implemented by
the OPP core to describe both regulators the platform needs. The
OPP core binding Documentation/devicetree/bindings/opp/opp-v2.yaml
provides further information (refer to Example 4 Handling multiple
regulators).
maintainers:
- Nishanth Menon <nm@ti.com>
properties:
$nodename:
pattern: '^opp-supply(@[0-9a-f]+)?$'
compatible:
oneOf:
- description: Basic OPP supply controlling VDD and VBB
const: ti,omap-opp-supply
- description: OMAP5+ optimized voltages in efuse(Class 0) VDD along with
VBB.
const: ti,omap5-opp-supply
- description: OMAP5+ optimized voltages in efuse(class0) VDD but no VBB
const: ti,omap5-core-opp-supply
reg:
maxItems: 1
ti,absolute-max-voltage-uv:
$ref: /schemas/types.yaml#/definitions/uint32
description: Absolute maximum voltage for the OPP supply in micro-volts.
minimum: 750000
maximum: 1500000
ti,efuse-settings:
description: An array of u32 tuple items providing information about
optimized efuse configuration.
minItems: 1
$ref: /schemas/types.yaml#/definitions/uint32-matrix
items:
items:
- description: Reference voltage in micro-volts (OPP Voltage)
minimum: 750000
maximum: 1500000
multipleOf: 10000
- description: efuse offset where the optimized voltage is located
multipleOf: 4
maximum: 256
required:
- compatible
- ti,absolute-max-voltage-uv
allOf:
- if:
not:
properties:
compatible:
contains:
const: ti,omap-opp-supply
then:
required:
- reg
- ti,efuse-settings
additionalProperties: false
examples:
- |
opp-supply {
compatible = "ti,omap-opp-supply";
ti,absolute-max-voltage-uv = <1375000>;
};
- |
opp-supply@4a003b20 {
compatible = "ti,omap5-opp-supply";
reg = <0x4a003b20 0x8>;
ti,efuse-settings =
/* uV offset */
<1060000 0x0>,
<1160000 0x4>,
<1210000 0x8>;
ti,absolute-max-voltage-uv = <1500000>;
};

View File

@ -1,63 +0,0 @@
Texas Instruments OMAP compatible OPP supply description
OMAP5, DRA7, and AM57 family of SoCs have Class0 AVS eFuse registers which
contain data that can be used to adjust voltages programmed for some of their
supplies for more efficient operation. This binding provides the information
needed to read these values and use them to program the main regulator during
an OPP transitions.
Also, some supplies may have an associated vbb-supply which is an Adaptive Body
Bias regulator which must be transitioned in a specific sequence with regard
to the vdd-supply and clk when making an OPP transition. By supplying two
regulators to the device that will undergo OPP transitions we can make use
of the multi regulator binding that is part of the OPP core described here [1]
to describe both regulators needed by the platform.
[1] Documentation/devicetree/bindings/opp/opp-v2.yaml
Required Properties for Device Node:
- vdd-supply: phandle to regulator controlling VDD supply
- vbb-supply: phandle to regulator controlling Body Bias supply
(Usually Adaptive Body Bias regulator)
Required Properties for opp-supply node:
- compatible: Should be one of:
"ti,omap-opp-supply" - basic OPP supply controlling VDD and VBB
"ti,omap5-opp-supply" - OMAP5+ optimized voltages in efuse(class0)VDD
along with VBB
"ti,omap5-core-opp-supply" - OMAP5+ optimized voltages in efuse(class0) VDD
but no VBB.
- reg: Address and length of the efuse register set for the device (mandatory
only for "ti,omap5-opp-supply")
- ti,efuse-settings: An array of u32 tuple items providing information about
optimized efuse configuration. Each item consists of the following:
volt: voltage in uV - reference voltage (OPP voltage)
efuse_offset: efuse offset from reg where the optimized voltage is stored.
- ti,absolute-max-voltage-uv: absolute maximum voltage for the OPP supply.
Example:
/* Device Node (CPU) */
cpus {
cpu0: cpu@0 {
device_type = "cpu";
...
vdd-supply = <&vcc>;
vbb-supply = <&abb_mpu>;
};
};
/* OMAP OPP Supply with Class0 registers */
opp_supply_mpu: opp_supply@4a003b20 {
compatible = "ti,omap5-opp-supply";
reg = <0x4a003b20 0x8>;
ti,efuse-settings = <
/* uV offset */
1060000 0x0
1160000 0x4
1210000 0x8
>;
ti,absolute-max-voltage-uv = <1500000>;
};

View File

@ -1011,22 +1011,20 @@ static int __init acpi_cpufreq_probe(struct platform_device *pdev)
return ret;
}
static int acpi_cpufreq_remove(struct platform_device *pdev)
static void acpi_cpufreq_remove(struct platform_device *pdev)
{
pr_debug("%s\n", __func__);
cpufreq_unregister_driver(&acpi_cpufreq_driver);
free_acpi_perf_data();
return 0;
}
static struct platform_driver acpi_cpufreq_platdrv = {
.driver = {
.name = "acpi-cpufreq",
},
.remove = acpi_cpufreq_remove,
.remove_new = acpi_cpufreq_remove,
};
static int __init acpi_cpufreq_init(void)

View File

@ -64,27 +64,9 @@ static struct amd_pstate_ut_struct amd_pstate_ut_cases[] = {
static bool get_shared_mem(void)
{
bool result = false;
char path[] = "/sys/module/amd_pstate/parameters/shared_mem";
char buf[5] = {0};
struct file *filp = NULL;
loff_t pos = 0;
ssize_t ret;
if (!boot_cpu_has(X86_FEATURE_CPPC)) {
filp = filp_open(path, O_RDONLY, 0);
if (IS_ERR(filp))
pr_err("%s unable to open %s file!\n", __func__, path);
else {
ret = kernel_read(filp, &buf, sizeof(buf), &pos);
if (ret < 0)
pr_err("%s read %s file fail ret=%ld!\n",
__func__, path, (long)ret);
filp_close(filp, NULL);
}
if ('Y' == *buf)
result = true;
}
if (!boot_cpu_has(X86_FEATURE_CPPC))
result = true;
return result;
}
@ -145,8 +127,6 @@ static void amd_pstate_ut_check_perf(u32 index)
struct cpufreq_policy *policy = NULL;
struct amd_cpudata *cpudata = NULL;
highest_perf = amd_get_highest_perf();
for_each_possible_cpu(cpu) {
policy = cpufreq_cpu_get(cpu);
if (!policy)
@ -158,9 +138,10 @@ static void amd_pstate_ut_check_perf(u32 index)
if (ret) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cppc_get_perf_caps ret=%d error!\n", __func__, ret);
return;
goto skip_test;
}
highest_perf = cppc_perf.highest_perf;
nominal_perf = cppc_perf.nominal_perf;
lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf;
lowest_perf = cppc_perf.lowest_perf;
@ -169,9 +150,10 @@ static void amd_pstate_ut_check_perf(u32 index)
if (ret) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret);
return;
goto skip_test;
}
highest_perf = AMD_CPPC_HIGHEST_PERF(cap1);
nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1);
lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1);
lowest_perf = AMD_CPPC_LOWEST_PERF(cap1);
@ -187,7 +169,7 @@ static void amd_pstate_ut_check_perf(u32 index)
nominal_perf, cpudata->nominal_perf,
lowest_nonlinear_perf, cpudata->lowest_nonlinear_perf,
lowest_perf, cpudata->lowest_perf);
return;
goto skip_test;
}
if (!((highest_perf >= nominal_perf) &&
@ -198,11 +180,15 @@ static void amd_pstate_ut_check_perf(u32 index)
pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n",
__func__, cpu, highest_perf, nominal_perf,
lowest_nonlinear_perf, lowest_perf);
return;
goto skip_test;
}
cpufreq_cpu_put(policy);
}
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
return;
skip_test:
cpufreq_cpu_put(policy);
}
/*
@ -230,14 +216,14 @@ static void amd_pstate_ut_check_freq(u32 index)
pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
__func__, cpu, cpudata->max_freq, cpudata->nominal_freq,
cpudata->lowest_nonlinear_freq, cpudata->min_freq);
return;
goto skip_test;
}
if (cpudata->min_freq != policy->min) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d cpudata_min_freq=%d policy_min=%d, they should be equal!\n",
__func__, cpu, cpudata->min_freq, policy->min);
return;
goto skip_test;
}
if (cpudata->boost_supported) {
@ -249,16 +235,20 @@ static void amd_pstate_ut_check_freq(u32 index)
pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
__func__, cpu, policy->max, cpudata->max_freq,
cpudata->nominal_freq);
return;
goto skip_test;
}
} else {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d must support boost!\n", __func__, cpu);
return;
goto skip_test;
}
cpufreq_cpu_put(policy);
}
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
return;
skip_test:
cpufreq_cpu_put(policy);
}
static int __init amd_pstate_ut_init(void)

View File

@ -14,10 +14,8 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/mfd/syscon.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/regmap.h>

View File

@ -434,7 +434,11 @@ brcm_avs_get_freq_table(struct device *dev, struct private_data *priv)
if (ret)
return ERR_PTR(ret);
table = devm_kcalloc(dev, AVS_PSTATE_MAX + 1, sizeof(*table),
/*
* We allocate space for the 5 different P-STATES AVS,
* plus extra space for a terminating element.
*/
table = devm_kcalloc(dev, AVS_PSTATE_MAX + 1 + 1, sizeof(*table),
GFP_KERNEL);
if (!table)
return ERR_PTR(-ENOMEM);
@ -749,13 +753,11 @@ static int brcm_avs_cpufreq_probe(struct platform_device *pdev)
return ret;
}
static int brcm_avs_cpufreq_remove(struct platform_device *pdev)
static void brcm_avs_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&brcm_avs_driver);
brcm_avs_prepare_uninit(pdev);
return 0;
}
static const struct of_device_id brcm_avs_cpufreq_match[] = {
@ -770,7 +772,7 @@ static struct platform_driver brcm_avs_cpufreq_platdrv = {
.of_match_table = brcm_avs_cpufreq_match,
},
.probe = brcm_avs_cpufreq_probe,
.remove = brcm_avs_cpufreq_remove,
.remove_new = brcm_avs_cpufreq_remove,
};
module_platform_driver(brcm_avs_cpufreq_platdrv);

View File

@ -249,15 +249,19 @@ static void __init cppc_freq_invariance_init(void)
return;
kworker_fie = kthread_create_worker(0, "cppc_fie");
if (IS_ERR(kworker_fie))
if (IS_ERR(kworker_fie)) {
pr_warn("%s: failed to create kworker_fie: %ld\n", __func__,
PTR_ERR(kworker_fie));
fie_disabled = FIE_DISABLED;
return;
}
ret = sched_setattr_nocheck(kworker_fie->task, &attr);
if (ret) {
pr_warn("%s: failed to set SCHED_DEADLINE: %d\n", __func__,
ret);
kthread_destroy_worker(kworker_fie);
return;
fie_disabled = FIE_DISABLED;
}
}
@ -267,7 +271,6 @@ static void cppc_freq_invariance_exit(void)
return;
kthread_destroy_worker(kworker_fie);
kworker_fie = NULL;
}
#else
@ -849,13 +852,13 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
if (ret)
return ret;
return 0;
udelay(2); /* 2usec delay between sampling */
ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1);
if (ret)
return ret;
return 0;
delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0,
&fb_ctrs_t1);

View File

@ -143,14 +143,19 @@ static const struct of_device_id blocklist[] __initconst = {
{ .compatible = "qcom,apq8096", },
{ .compatible = "qcom,msm8996", },
{ .compatible = "qcom,msm8998", },
{ .compatible = "qcom,qcm2290", },
{ .compatible = "qcom,qcs404", },
{ .compatible = "qcom,qdu1000", },
{ .compatible = "qcom,sa8155p" },
{ .compatible = "qcom,sa8540p" },
{ .compatible = "qcom,sa8775p" },
{ .compatible = "qcom,sc7180", },
{ .compatible = "qcom,sc7280", },
{ .compatible = "qcom,sc8180x", },
{ .compatible = "qcom,sc8280xp", },
{ .compatible = "qcom,sdm845", },
{ .compatible = "qcom,sdx75", },
{ .compatible = "qcom,sm6115", },
{ .compatible = "qcom,sm6350", },
{ .compatible = "qcom,sm6375", },
@ -158,6 +163,8 @@ static const struct of_device_id blocklist[] __initconst = {
{ .compatible = "qcom,sm8150", },
{ .compatible = "qcom,sm8250", },
{ .compatible = "qcom,sm8350", },
{ .compatible = "qcom,sm8450", },
{ .compatible = "qcom,sm8550", },
{ .compatible = "st,stih407", },
{ .compatible = "st,stih410", },

View File

@ -349,11 +349,10 @@ err:
return ret;
}
static int dt_cpufreq_remove(struct platform_device *pdev)
static void dt_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&dt_cpufreq_driver);
dt_cpufreq_release();
return 0;
}
static struct platform_driver dt_cpufreq_platdrv = {
@ -361,7 +360,7 @@ static struct platform_driver dt_cpufreq_platdrv = {
.name = "cpufreq-dt",
},
.probe = dt_cpufreq_probe,
.remove = dt_cpufreq_remove,
.remove_new = dt_cpufreq_remove,
};
module_platform_driver(dt_cpufreq_platdrv);

View File

@ -1234,16 +1234,16 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
ret = freq_qos_add_notifier(&policy->constraints, FREQ_QOS_MIN,
&policy->nb_min);
if (ret) {
dev_err(dev, "Failed to register MIN QoS notifier: %d (%*pbl)\n",
ret, cpumask_pr_args(policy->cpus));
dev_err(dev, "Failed to register MIN QoS notifier: %d (CPU%u)\n",
ret, cpu);
goto err_kobj_remove;
}
ret = freq_qos_add_notifier(&policy->constraints, FREQ_QOS_MAX,
&policy->nb_max);
if (ret) {
dev_err(dev, "Failed to register MAX QoS notifier: %d (%*pbl)\n",
ret, cpumask_pr_args(policy->cpus));
dev_err(dev, "Failed to register MAX QoS notifier: %d (CPU%u)\n",
ret, cpu);
goto err_min_qos_notifier;
}

View File

@ -243,7 +243,8 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
/* Find valid-unique entries */
cpufreq_for_each_valid_entry(pos, policy->freq_table)
if (freq_table_get_index(stats, pos->frequency) == -1)
if (policy->freq_table_sorted != CPUFREQ_TABLE_UNSORTED ||
freq_table_get_index(stats, pos->frequency) == -1)
stats->freq_table[i++] = pos->frequency;
stats->state_num = i;

View File

@ -131,7 +131,7 @@ static int __init davinci_cpufreq_probe(struct platform_device *pdev)
return cpufreq_register_driver(&davinci_driver);
}
static int __exit davinci_cpufreq_remove(struct platform_device *pdev)
static void __exit davinci_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&davinci_driver);
@ -139,15 +139,13 @@ static int __exit davinci_cpufreq_remove(struct platform_device *pdev)
if (cpufreq.asyncclk)
clk_put(cpufreq.asyncclk);
return 0;
}
static struct platform_driver davinci_cpufreq_driver = {
.driver = {
.name = "cpufreq-davinci",
},
.remove = __exit_p(davinci_cpufreq_remove),
.remove_new = __exit_p(davinci_cpufreq_remove),
};
int __init davinci_cpufreq_init(void)

View File

@ -172,20 +172,18 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
return 0;
}
static int imx_cpufreq_dt_remove(struct platform_device *pdev)
static void imx_cpufreq_dt_remove(struct platform_device *pdev)
{
platform_device_unregister(cpufreq_dt_pdev);
if (!of_machine_is_compatible("fsl,imx7ulp"))
dev_pm_opp_put_supported_hw(cpufreq_opp_token);
else
clk_bulk_put(ARRAY_SIZE(imx7ulp_clks), imx7ulp_clks);
return 0;
}
static struct platform_driver imx_cpufreq_dt_driver = {
.probe = imx_cpufreq_dt_probe,
.remove = imx_cpufreq_dt_remove,
.remove_new = imx_cpufreq_dt_remove,
.driver = {
.name = "imx-cpufreq-dt",
},

View File

@ -519,7 +519,7 @@ put_node:
return ret;
}
static int imx6q_cpufreq_remove(struct platform_device *pdev)
static void imx6q_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&imx6q_cpufreq_driver);
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
@ -530,8 +530,6 @@ static int imx6q_cpufreq_remove(struct platform_device *pdev)
regulator_put(soc_reg);
clk_bulk_put(num_clks, clks);
return 0;
}
static struct platform_driver imx6q_cpufreq_platdrv = {
@ -539,7 +537,7 @@ static struct platform_driver imx6q_cpufreq_platdrv = {
.name = "imx6q-cpufreq",
},
.probe = imx6q_cpufreq_probe,
.remove = imx6q_cpufreq_remove,
.remove_new = imx6q_cpufreq_remove,
};
module_platform_driver(imx6q_cpufreq_platdrv);

View File

@ -2609,6 +2609,11 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
intel_pstate_clear_update_util_hook(policy->cpu);
intel_pstate_hwp_set(policy->cpu);
}
/*
* policy->cur is never updated with the intel_pstate driver, but it
* is used as a stale frequency value. So, keep it within limits.
*/
policy->cur = policy->min;
mutex_unlock(&intel_pstate_limits_lock);

View File

@ -178,20 +178,18 @@ out_node:
return err;
}
static int kirkwood_cpufreq_remove(struct platform_device *pdev)
static void kirkwood_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&kirkwood_cpufreq_driver);
clk_disable_unprepare(priv.powersave_clk);
clk_disable_unprepare(priv.ddr_clk);
clk_disable_unprepare(priv.cpu_clk);
return 0;
}
static struct platform_driver kirkwood_cpufreq_platform_driver = {
.probe = kirkwood_cpufreq_probe,
.remove = kirkwood_cpufreq_remove,
.remove_new = kirkwood_cpufreq_remove,
.driver = {
.name = "kirkwood-cpufreq",
},

View File

@ -10,8 +10,9 @@
#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#define LUT_MAX_ENTRIES 32U
@ -315,11 +316,9 @@ static int mtk_cpufreq_hw_driver_probe(struct platform_device *pdev)
return ret;
}
static int mtk_cpufreq_hw_driver_remove(struct platform_device *pdev)
static void mtk_cpufreq_hw_driver_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&cpufreq_mtk_hw_driver);
return 0;
}
static const struct of_device_id mtk_cpufreq_hw_match[] = {
@ -330,7 +329,7 @@ MODULE_DEVICE_TABLE(of, mtk_cpufreq_hw_match);
static struct platform_driver mtk_cpufreq_hw_driver = {
.probe = mtk_cpufreq_hw_driver_probe,
.remove = mtk_cpufreq_hw_driver_remove,
.remove_new = mtk_cpufreq_hw_driver_remove,
.driver = {
.name = "mtk-cpufreq-hw",
.of_match_table = mtk_cpufreq_hw_match,

View File

@ -313,8 +313,6 @@ out:
return ret;
}
#define DYNAMIC_POWER "dynamic-power-coefficient"
static int mtk_cpufreq_opp_notifier(struct notifier_block *nb,
unsigned long event, void *data)
{

View File

@ -182,11 +182,9 @@ static int omap_cpufreq_probe(struct platform_device *pdev)
return cpufreq_register_driver(&omap_driver);
}
static int omap_cpufreq_remove(struct platform_device *pdev)
static void omap_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&omap_driver);
return 0;
}
static struct platform_driver omap_cpufreq_platdrv = {
@ -194,7 +192,7 @@ static struct platform_driver omap_cpufreq_platdrv = {
.name = "omap-cpufreq",
},
.probe = omap_cpufreq_probe,
.remove = omap_cpufreq_remove,
.remove_new = omap_cpufreq_remove,
};
module_platform_driver(omap_cpufreq_platdrv);

View File

@ -608,22 +608,20 @@ static int __init pcc_cpufreq_probe(struct platform_device *pdev)
return ret;
}
static int pcc_cpufreq_remove(struct platform_device *pdev)
static void pcc_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&pcc_cpufreq_driver);
pcc_clear_mapping();
free_percpu(pcc_cpu_info);
return 0;
}
static struct platform_driver pcc_cpufreq_platdrv = {
.driver = {
.name = "pcc-cpufreq",
},
.remove = pcc_cpufreq_remove,
.remove_new = pcc_cpufreq_remove,
};
static int __init pcc_cpufreq_init(void)

View File

@ -1101,7 +1101,8 @@ static int powernowk8_cpu_exit(struct cpufreq_policy *pol)
kfree(data->powernow_table);
kfree(data);
for_each_cpu(cpu, pol->cpus)
/* pol->cpus will be empty here, use related_cpus instead. */
for_each_cpu(cpu, pol->related_cpus)
per_cpu(powernow_data, cpu) = NULL;
return 0;

View File

@ -9,7 +9,7 @@
#include <linux/cpufreq.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/of.h>
#include <asm/machdep.h>
#include <asm/cell-regs.h>

View File

@ -11,7 +11,6 @@
#include <linux/types.h>
#include <linux/timer.h>
#include <linux/init.h>
#include <linux/of_platform.h>
#include <linux/pm_qos.h>
#include <linux/slab.h>

View File

@ -28,7 +28,7 @@
#define GT_IRQ_STATUS BIT(2)
#define MAX_FREQ_DOMAINS 3
#define MAX_FREQ_DOMAINS 4
struct qcom_cpufreq_soc_data {
u32 reg_enable;
@ -730,16 +730,14 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
return ret;
}
static int qcom_cpufreq_hw_driver_remove(struct platform_device *pdev)
static void qcom_cpufreq_hw_driver_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&cpufreq_qcom_hw_driver);
return 0;
}
static struct platform_driver qcom_cpufreq_hw_driver = {
.probe = qcom_cpufreq_hw_driver_probe,
.remove = qcom_cpufreq_hw_driver_remove,
.remove_new = qcom_cpufreq_hw_driver_remove,
.driver = {
.name = "qcom-cpufreq-hw",
.of_match_table = qcom_cpufreq_hw_match,

View File

@ -22,7 +22,6 @@
#include <linux/module.h>
#include <linux/nvmem-consumer.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
@ -334,7 +333,7 @@ free_drv:
return ret;
}
static int qcom_cpufreq_remove(struct platform_device *pdev)
static void qcom_cpufreq_remove(struct platform_device *pdev)
{
struct qcom_cpufreq_drv *drv = platform_get_drvdata(pdev);
unsigned int cpu;
@ -346,13 +345,11 @@ static int qcom_cpufreq_remove(struct platform_device *pdev)
kfree(drv->opp_tokens);
kfree(drv);
return 0;
}
static struct platform_driver qcom_cpufreq_driver = {
.probe = qcom_cpufreq_probe,
.remove = qcom_cpufreq_remove,
.remove_new = qcom_cpufreq_remove,
.driver = {
.name = "qcom-cpufreq-nvmem",
},

View File

@ -288,11 +288,9 @@ static int qoriq_cpufreq_probe(struct platform_device *pdev)
return 0;
}
static int qoriq_cpufreq_remove(struct platform_device *pdev)
static void qoriq_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&qoriq_cpufreq_driver);
return 0;
}
static struct platform_driver qoriq_cpufreq_platform_driver = {
@ -300,7 +298,7 @@ static struct platform_driver qoriq_cpufreq_platform_driver = {
.name = "qoriq-cpufreq",
},
.probe = qoriq_cpufreq_probe,
.remove = qoriq_cpufreq_remove,
.remove_new = qoriq_cpufreq_remove,
};
module_platform_driver(qoriq_cpufreq_platform_driver);

View File

@ -65,7 +65,7 @@ remove_opp:
return ret;
}
static int raspberrypi_cpufreq_remove(struct platform_device *pdev)
static void raspberrypi_cpufreq_remove(struct platform_device *pdev)
{
struct device *cpu_dev;
@ -74,8 +74,6 @@ static int raspberrypi_cpufreq_remove(struct platform_device *pdev)
dev_pm_opp_remove_all_dynamic(cpu_dev);
platform_device_unregister(cpufreq_dt);
return 0;
}
/*
@ -87,7 +85,7 @@ static struct platform_driver raspberrypi_cpufreq_driver = {
.name = "raspberrypi-cpufreq",
},
.probe = raspberrypi_cpufreq_probe,
.remove = raspberrypi_cpufreq_remove,
.remove_new = raspberrypi_cpufreq_remove,
};
module_platform_driver(raspberrypi_cpufreq_driver);

View File

@ -14,7 +14,7 @@
#include <linux/cpumask.h>
#include <linux/export.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/scpi_protocol.h>
#include <linux/slab.h>
@ -208,11 +208,10 @@ static int scpi_cpufreq_probe(struct platform_device *pdev)
return ret;
}
static int scpi_cpufreq_remove(struct platform_device *pdev)
static void scpi_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&scpi_cpufreq_driver);
scpi_ops = NULL;
return 0;
}
static struct platform_driver scpi_cpufreq_platdrv = {
@ -220,7 +219,7 @@ static struct platform_driver scpi_cpufreq_platdrv = {
.name = "scpi-cpufreq",
},
.probe = scpi_cpufreq_probe,
.remove = scpi_cpufreq_remove,
.remove_new = scpi_cpufreq_remove,
};
module_platform_driver(scpi_cpufreq_platdrv);

View File

@ -13,7 +13,7 @@
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/regmap.h>

View File

@ -137,7 +137,7 @@ free_opp:
return ret;
}
static int sun50i_cpufreq_nvmem_remove(struct platform_device *pdev)
static void sun50i_cpufreq_nvmem_remove(struct platform_device *pdev)
{
int *opp_tokens = platform_get_drvdata(pdev);
unsigned int cpu;
@ -148,13 +148,11 @@ static int sun50i_cpufreq_nvmem_remove(struct platform_device *pdev)
dev_pm_opp_put_prop_name(opp_tokens[cpu]);
kfree(opp_tokens);
return 0;
}
static struct platform_driver sun50i_cpufreq_driver = {
.probe = sun50i_cpufreq_nvmem_probe,
.remove = sun50i_cpufreq_nvmem_remove,
.remove_new = sun50i_cpufreq_nvmem_remove,
.driver = {
.name = "sun50i-cpufreq-nvmem",
},

View File

@ -259,11 +259,9 @@ put_bpmp:
return err;
}
static int tegra186_cpufreq_remove(struct platform_device *pdev)
static void tegra186_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&tegra186_cpufreq_driver);
return 0;
}
static const struct of_device_id tegra186_cpufreq_of_match[] = {
@ -278,7 +276,7 @@ static struct platform_driver tegra186_cpufreq_platform_driver = {
.of_match_table = tegra186_cpufreq_of_match,
},
.probe = tegra186_cpufreq_probe,
.remove = tegra186_cpufreq_remove,
.remove_new = tegra186_cpufreq_remove,
};
module_platform_driver(tegra186_cpufreq_platform_driver);

View File

@ -508,6 +508,32 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
return 0;
}
static int tegra194_cpufreq_online(struct cpufreq_policy *policy)
{
/* We did light-weight tear down earlier, nothing to do here */
return 0;
}
static int tegra194_cpufreq_offline(struct cpufreq_policy *policy)
{
/*
* Preserve policy->driver_data and don't free resources on light-weight
* tear down.
*/
return 0;
}
static int tegra194_cpufreq_exit(struct cpufreq_policy *policy)
{
struct device *cpu_dev = get_cpu_device(policy->cpu);
dev_pm_opp_remove_all_dynamic(cpu_dev);
dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
return 0;
}
static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
unsigned int index)
{
@ -535,6 +561,9 @@ static struct cpufreq_driver tegra194_cpufreq_driver = {
.target_index = tegra194_cpufreq_set_target,
.get = tegra194_get_speed,
.init = tegra194_cpufreq_init,
.exit = tegra194_cpufreq_exit,
.online = tegra194_cpufreq_online,
.offline = tegra194_cpufreq_offline,
.attr = cpufreq_generic_attr,
};
@ -708,12 +737,10 @@ put_bpmp:
return err;
}
static int tegra194_cpufreq_remove(struct platform_device *pdev)
static void tegra194_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&tegra194_cpufreq_driver);
tegra194_cpufreq_free_resources();
return 0;
}
static const struct of_device_id tegra194_cpufreq_of_match[] = {
@ -730,7 +757,7 @@ static struct platform_driver tegra194_ccplex_driver = {
.of_match_table = tegra194_cpufreq_of_match,
},
.probe = tegra194_cpufreq_probe,
.remove = tegra194_cpufreq_remove,
.remove_new = tegra194_cpufreq_remove,
};
module_platform_driver(tegra194_ccplex_driver);

View File

@ -12,7 +12,7 @@
#include <linux/module.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/regmap.h>
#include <linux/slab.h>

View File

@ -18,7 +18,6 @@
#include <linux/device.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
@ -552,7 +551,7 @@ static int ve_spc_cpufreq_probe(struct platform_device *pdev)
return ret;
}
static int ve_spc_cpufreq_remove(struct platform_device *pdev)
static void ve_spc_cpufreq_remove(struct platform_device *pdev)
{
bL_switcher_get_enabled();
__bLs_unregister_notifier();
@ -560,7 +559,6 @@ static int ve_spc_cpufreq_remove(struct platform_device *pdev)
bL_switcher_put_enabled();
pr_info("%s: Un-registered platform driver: %s\n", __func__,
ve_spc_cpufreq_driver.name);
return 0;
}
static struct platform_driver ve_spc_cpufreq_platdrv = {
@ -568,7 +566,7 @@ static struct platform_driver ve_spc_cpufreq_platdrv = {
.name = "vexpress-spc-cpufreq",
},
.probe = ve_spc_cpufreq_probe,
.remove = ve_spc_cpufreq_remove,
.remove_new = ve_spc_cpufreq_remove,
};
module_platform_driver(ve_spc_cpufreq_platdrv);

View File

@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Common definitions for cpuidle governors. */
#ifndef __CPUIDLE_GOVERNOR_H
#define __CPUIDLE_GOVERNOR_H
/*
* Idle state target residency threshold used for deciding whether or not to
* check the time till the closest expected timer event.
*/
#define RESIDENCY_THRESHOLD_NS (15 * NSEC_PER_USEC)
#endif /* __CPUIDLE_GOVERNOR_H */

View File

@ -19,6 +19,8 @@
#include <linux/sched/stat.h>
#include <linux/math64.h>
#include "gov.h"
#define BUCKETS 12
#define INTERVAL_SHIFT 3
#define INTERVALS (1UL << INTERVAL_SHIFT)
@ -166,8 +168,7 @@ static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev);
* of points is below a threshold. If it is... then use the
* average of these 8 points as the estimated value.
*/
static unsigned int get_typical_interval(struct menu_device *data,
unsigned int predicted_us)
static unsigned int get_typical_interval(struct menu_device *data)
{
int i, divisor;
unsigned int min, max, thresh, avg;
@ -195,11 +196,7 @@ again:
}
}
/*
* If the result of the computation is going to be discarded anyway,
* avoid the computation altogether.
*/
if (min >= predicted_us)
if (!max)
return UINT_MAX;
if (divisor == INTERVALS)
@ -267,7 +264,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
{
struct menu_device *data = this_cpu_ptr(&menu_devices);
s64 latency_req = cpuidle_governor_latency_req(dev->cpu);
unsigned int predicted_us;
u64 predicted_ns;
u64 interactivity_req;
unsigned int nr_iowaiters;
@ -279,16 +275,41 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
data->needs_update = 0;
}
/* determine the expected residency time, round up */
delta = tick_nohz_get_sleep_length(&delta_tick);
if (unlikely(delta < 0)) {
delta = 0;
delta_tick = 0;
}
data->next_timer_ns = delta;
nr_iowaiters = nr_iowait_cpu(dev->cpu);
data->bucket = which_bucket(data->next_timer_ns, nr_iowaiters);
/* Find the shortest expected idle interval. */
predicted_ns = get_typical_interval(data) * NSEC_PER_USEC;
if (predicted_ns > RESIDENCY_THRESHOLD_NS) {
unsigned int timer_us;
/* Determine the time till the closest timer. */
delta = tick_nohz_get_sleep_length(&delta_tick);
if (unlikely(delta < 0)) {
delta = 0;
delta_tick = 0;
}
data->next_timer_ns = delta;
data->bucket = which_bucket(data->next_timer_ns, nr_iowaiters);
/* Round up the result for half microseconds. */
timer_us = div_u64((RESOLUTION * DECAY * NSEC_PER_USEC) / 2 +
data->next_timer_ns *
data->correction_factor[data->bucket],
RESOLUTION * DECAY * NSEC_PER_USEC);
/* Use the lowest expected idle interval to pick the idle state. */
predicted_ns = min((u64)timer_us * NSEC_PER_USEC, predicted_ns);
} else {
/*
* Because the next timer event is not going to be determined
* in this case, assume that without the tick the closest timer
* will be in the distant future and that the closest tick will occur
* after 1/2 of the tick period.
*/
data->next_timer_ns = KTIME_MAX;
delta_tick = TICK_NSEC / 2;
data->bucket = which_bucket(KTIME_MAX, nr_iowaiters);
}
if (unlikely(drv->state_count <= 1 || latency_req == 0) ||
((data->next_timer_ns < drv->states[1].target_residency_ns ||
@ -303,16 +324,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
return 0;
}
/* Round up the result for half microseconds. */
predicted_us = div_u64(data->next_timer_ns *
data->correction_factor[data->bucket] +
(RESOLUTION * DECAY * NSEC_PER_USEC) / 2,
RESOLUTION * DECAY * NSEC_PER_USEC);
/* Use the lowest expected idle interval to pick the idle state. */
predicted_ns = (u64)min(predicted_us,
get_typical_interval(data, predicted_us)) *
NSEC_PER_USEC;
if (tick_nohz_tick_stopped()) {
/*
* If the tick is already stopped, the cost of possible short

View File

@ -140,6 +140,8 @@
#include <linux/sched/topology.h>
#include <linux/tick.h>
#include "gov.h"
/*
* The number of bits to shift the CPU's capacity by in order to determine
* the utilized threshold.
@ -152,7 +154,6 @@
*/
#define UTIL_THRESHOLD_SHIFT 6
/*
* The PULSE value is added to metrics when they grow and the DECAY_SHIFT value
* is used for decreasing metrics on a regular basis.
@ -186,8 +187,8 @@ struct teo_bin {
* @total: Grand total of the "intercepts" and "hits" metrics for all bins.
* @next_recent_idx: Index of the next @recent_idx entry to update.
* @recent_idx: Indices of bins corresponding to recent "intercepts".
* @tick_hits: Number of "hits" after TICK_NSEC.
* @util_threshold: Threshold above which the CPU is considered utilized
* @utilized: Whether the last sleep on the CPU happened while utilized
*/
struct teo_cpu {
s64 time_span_ns;
@ -196,8 +197,8 @@ struct teo_cpu {
unsigned int total;
int next_recent_idx;
int recent_idx[NR_RECENT];
unsigned int tick_hits;
unsigned long util_threshold;
bool utilized;
};
static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
@ -228,6 +229,7 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
int i, idx_timer = 0, idx_duration = 0;
s64 target_residency_ns;
u64 measured_ns;
if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) {
@ -268,7 +270,6 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
* fall into.
*/
for (i = 0; i < drv->state_count; i++) {
s64 target_residency_ns = drv->states[i].target_residency_ns;
struct teo_bin *bin = &cpu_data->state_bins[i];
bin->hits -= bin->hits >> DECAY_SHIFT;
@ -276,6 +277,8 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
cpu_data->total += bin->hits + bin->intercepts;
target_residency_ns = drv->states[i].target_residency_ns;
if (target_residency_ns <= cpu_data->sleep_length_ns) {
idx_timer = i;
if (target_residency_ns <= measured_ns)
@ -290,6 +293,26 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
if (cpu_data->recent_idx[i] >= 0)
cpu_data->state_bins[cpu_data->recent_idx[i]].recent--;
/*
* If the deepest state's target residency is below the tick length,
* make a record of it to help teo_select() decide whether or not
* to stop the tick. This effectively adds an extra hits-only bin
* beyond the last state-related one.
*/
if (target_residency_ns < TICK_NSEC) {
cpu_data->tick_hits -= cpu_data->tick_hits >> DECAY_SHIFT;
cpu_data->total += cpu_data->tick_hits;
if (TICK_NSEC <= cpu_data->sleep_length_ns) {
idx_timer = drv->state_count;
if (TICK_NSEC <= measured_ns) {
cpu_data->tick_hits += PULSE;
goto end;
}
}
}
/*
* If the measured idle duration falls into the same bin as the sleep
* length, this is a "hit", so update the "hits" metric for that bin.
@ -305,18 +328,14 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
cpu_data->recent_idx[i] = idx_duration;
}
end:
cpu_data->total += PULSE;
}
static bool teo_time_ok(u64 interval_ns)
static bool teo_state_ok(int i, struct cpuidle_driver *drv)
{
return !tick_nohz_tick_stopped() || interval_ns >= TICK_NSEC;
}
static s64 teo_middle_of_bin(int idx, struct cpuidle_driver *drv)
{
return (drv->states[idx].target_residency_ns +
drv->states[idx+1].target_residency_ns) / 2;
return !tick_nohz_tick_stopped() ||
drv->states[i].target_residency_ns >= TICK_NSEC;
}
/**
@ -356,6 +375,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
s64 latency_req = cpuidle_governor_latency_req(dev->cpu);
ktime_t delta_tick = TICK_NSEC / 2;
unsigned int tick_intercept_sum = 0;
unsigned int idx_intercept_sum = 0;
unsigned int intercept_sum = 0;
unsigned int idx_recent_sum = 0;
@ -365,7 +386,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
int constraint_idx = 0;
int idx0 = 0, idx = -1;
bool alt_intercepts, alt_recent;
ktime_t delta_tick;
bool cpu_utilized;
s64 duration_ns;
int i;
@ -375,44 +396,48 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
}
cpu_data->time_span_ns = local_clock();
duration_ns = tick_nohz_get_sleep_length(&delta_tick);
cpu_data->sleep_length_ns = duration_ns;
/*
* Set the expected sleep length to infinity in case of an early
* return.
*/
cpu_data->sleep_length_ns = KTIME_MAX;
/* Check if there is any choice in the first place. */
if (drv->state_count < 2) {
idx = 0;
goto end;
}
if (!dev->states_usage[0].disable) {
idx = 0;
if (drv->states[1].target_residency_ns > duration_ns)
goto end;
goto out_tick;
}
cpu_data->utilized = teo_cpu_is_utilized(dev->cpu, cpu_data);
if (!dev->states_usage[0].disable)
idx = 0;
cpu_utilized = teo_cpu_is_utilized(dev->cpu, cpu_data);
/*
* If the CPU is being utilized over the threshold and there are only 2
* states to choose from, the metrics need not be considered, so choose
* the shallowest non-polling state and exit.
*/
if (drv->state_count < 3 && cpu_data->utilized) {
for (i = 0; i < drv->state_count; ++i) {
if (!dev->states_usage[i].disable &&
!(drv->states[i].flags & CPUIDLE_FLAG_POLLING)) {
idx = i;
goto end;
}
if (drv->state_count < 3 && cpu_utilized) {
/*
* If state 0 is enabled and it is not a polling one, select it
* right away unless the scheduler tick has been stopped, in
* which case care needs to be taken to leave the CPU in a deep
* enough state in case it is not woken up any time soon after
* all. If state 1 is disabled, though, state 0 must be used
* anyway.
*/
if ((!idx && !(drv->states[0].flags & CPUIDLE_FLAG_POLLING) &&
teo_state_ok(0, drv)) || dev->states_usage[1].disable) {
idx = 0;
goto out_tick;
}
/* Assume that state 1 is not a polling one and use it. */
idx = 1;
duration_ns = drv->states[1].target_residency_ns;
goto end;
}
/*
* Find the deepest idle state whose target residency does not exceed
* the current sleep length and the deepest idle state not deeper than
* the former whose exit latency does not exceed the current latency
* constraint. Compute the sums of metrics for early wakeup pattern
* detection.
*/
/* Compute the sums of metrics for early wakeup pattern detection. */
for (i = 1; i < drv->state_count; i++) {
struct teo_bin *prev_bin = &cpu_data->state_bins[i-1];
struct cpuidle_state *s = &drv->states[i];
@ -428,19 +453,15 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
if (dev->states_usage[i].disable)
continue;
if (idx < 0) {
idx = i; /* first enabled state */
idx0 = i;
}
if (s->target_residency_ns > duration_ns)
break;
if (idx < 0)
idx0 = i; /* first enabled state */
idx = i;
if (s->exit_latency_ns <= latency_req)
constraint_idx = i;
/* Save the sums for the current state. */
idx_intercept_sum = intercept_sum;
idx_hit_sum = hit_sum;
idx_recent_sum = recent_sum;
@ -449,11 +470,21 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
/* Avoid unnecessary overhead. */
if (idx < 0) {
idx = 0; /* No states enabled, must use 0. */
goto end;
} else if (idx == idx0) {
goto out_tick;
}
if (idx == idx0) {
/*
* Only one idle state is enabled, so use it, but do not
* allow the tick to be stopped if it is shallow enough.
*/
duration_ns = drv->states[idx].target_residency_ns;
goto end;
}
tick_intercept_sum = intercept_sum +
cpu_data->state_bins[drv->state_count-1].intercepts;
/*
* If the sum of the intercepts metric for all of the idle states
* shallower than the current candidate one (idx) is greater than the
@ -461,13 +492,11 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
* all of the deeper states, or the sum of the numbers of recent
* intercepts over all of the states shallower than the candidate one
* is greater than a half of the number of recent events taken into
* account, the CPU is likely to wake up early, so find an alternative
* idle state to select.
* account, a shallower idle state is likely to be a better choice.
*/
alt_intercepts = 2 * idx_intercept_sum > cpu_data->total - idx_hit_sum;
alt_recent = idx_recent_sum > NR_RECENT / 2;
if (alt_recent || alt_intercepts) {
s64 first_suitable_span_ns = duration_ns;
int first_suitable_idx = idx;
/*
@ -476,44 +505,39 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
* cases (both with respect to intercepts overall and with
* respect to the recent intercepts only) in the past.
*
* Take the possible latency constraint and duration limitation
* present if the tick has been stopped already into account.
* Take the possible duration limitation present if the tick
* has been stopped already into account.
*/
intercept_sum = 0;
recent_sum = 0;
for (i = idx - 1; i >= 0; i--) {
struct teo_bin *bin = &cpu_data->state_bins[i];
s64 span_ns;
intercept_sum += bin->intercepts;
recent_sum += bin->recent;
span_ns = teo_middle_of_bin(i, drv);
if ((!alt_recent || 2 * recent_sum > idx_recent_sum) &&
(!alt_intercepts ||
2 * intercept_sum > idx_intercept_sum)) {
if (teo_time_ok(span_ns) &&
!dev->states_usage[i].disable) {
/*
* Use the current state unless it is too
* shallow or disabled, in which case take the
* first enabled state that is deep enough.
*/
if (teo_state_ok(i, drv) &&
!dev->states_usage[i].disable)
idx = i;
duration_ns = span_ns;
} else {
/*
* The current state is too shallow or
* disabled, so take the first enabled
* deeper state with suitable time span.
*/
else
idx = first_suitable_idx;
duration_ns = first_suitable_span_ns;
}
break;
}
if (dev->states_usage[i].disable)
continue;
if (!teo_time_ok(span_ns)) {
if (!teo_state_ok(i, drv)) {
/*
* The current state is too shallow, but if an
* alternative candidate state has been found,
@ -525,7 +549,6 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
break;
}
first_suitable_span_ns = span_ns;
first_suitable_idx = i;
}
}
@ -539,31 +562,75 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
/*
* If the CPU is being utilized over the threshold, choose a shallower
* non-polling state to improve latency
* non-polling state to improve latency, unless the scheduler tick has
* been stopped already and the shallower state's target residency is
* not sufficiently large.
*/
if (cpu_data->utilized)
idx = teo_find_shallower_state(drv, dev, idx, duration_ns, true);
if (cpu_utilized) {
i = teo_find_shallower_state(drv, dev, idx, KTIME_MAX, true);
if (teo_state_ok(i, drv))
idx = i;
}
/*
* Skip the timers check if state 0 is the current candidate one,
* because an immediate non-timer wakeup is expected in that case.
*/
if (!idx)
goto out_tick;
/*
* If state 0 is a polling one, check if the target residency of
* the current candidate state is low enough and skip the timers
* check in that case too.
*/
if ((drv->states[0].flags & CPUIDLE_FLAG_POLLING) &&
drv->states[idx].target_residency_ns < RESIDENCY_THRESHOLD_NS)
goto out_tick;
duration_ns = tick_nohz_get_sleep_length(&delta_tick);
cpu_data->sleep_length_ns = duration_ns;
/*
* If the closest expected timer is before the target residency of the
* candidate state, a shallower one needs to be found.
*/
if (drv->states[idx].target_residency_ns > duration_ns) {
i = teo_find_shallower_state(drv, dev, idx, duration_ns, false);
if (teo_state_ok(i, drv))
idx = i;
}
/*
* If the selected state's target residency is below the tick length
* and intercepts occurring before the tick length are the majority of
* total wakeup events, do not stop the tick.
*/
if (drv->states[idx].target_residency_ns < TICK_NSEC &&
tick_intercept_sum > cpu_data->total / 2 + cpu_data->total / 8)
duration_ns = TICK_NSEC / 2;
end:
/*
* Don't stop the tick if the selected state is a polling one or if the
* expected idle duration is shorter than the tick period length.
* Allow the tick to be stopped unless the selected state is a polling
* one or the expected idle duration is shorter than the tick period
* length.
*/
if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
duration_ns < TICK_NSEC) && !tick_nohz_tick_stopped()) {
*stop_tick = false;
if ((!(drv->states[idx].flags & CPUIDLE_FLAG_POLLING) &&
duration_ns >= TICK_NSEC) || tick_nohz_tick_stopped())
return idx;
/*
* The tick is not going to be stopped, so if the target
* residency of the state to be returned is not within the time
* till the closest timer including the tick, try to correct
* that.
*/
if (idx > idx0 &&
drv->states[idx].target_residency_ns > delta_tick)
idx = teo_find_shallower_state(drv, dev, idx, delta_tick, false);
}
/*
* The tick is not going to be stopped, so if the target residency of
* the state to be returned is not within the time till the closest
* timer including the tick, try to correct that.
*/
if (idx > idx0 &&
drv->states[idx].target_residency_ns > delta_tick)
idx = teo_find_shallower_state(drv, dev, idx, delta_tick, false);
out_tick:
*stop_tick = false;
return idx;
}
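
For reference, the selection logic above pivots on the "intercepts" bookkeeping: a wakeup arriving before the candidate state's target residency counts as an intercept, an on-time one as a hit. Below is a minimal standalone model of the majority test that triggers the shallower-state search; the bin contents are made-up example numbers, not kernel data.

    /* Standalone model of the teo "intercepts majority" check. */
    #include <stdbool.h>
    #include <stdio.h>

    struct bin { unsigned int intercepts, hits; };

    static bool prefer_shallower(const struct bin *bins, int idx, unsigned int total)
    {
        unsigned int idx_intercept_sum = 0, idx_hit_sum = 0;

        /* Sum the events recorded for states shallower than the candidate. */
        for (int i = 0; i < idx; i++) {
            idx_intercept_sum += bins[i].intercepts;
            idx_hit_sum += bins[i].hits;
        }
        /* Early wakeups below idx outweigh everything at or above it. */
        return 2 * idx_intercept_sum > total - idx_hit_sum;
    }

    int main(void)
    {
        struct bin bins[4] = { {12, 3}, {9, 4}, {2, 10}, {1, 20} };
        unsigned int total = 61; /* sum of all intercepts and hits */

        printf("pick a shallower state: %d\n", prefer_shallower(bins, 3, total));
        return 0;
    }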


@@ -472,10 +472,11 @@ static void devfreq_monitor(struct work_struct *work)
* devfreq_monitor_start() - Start load monitoring of devfreq instance
* @devfreq: the devfreq instance.
*
* Helper function for starting devfreq device load monitoring. By
* default delayed work based monitoring is supported. Function
* to be called from governor in response to DEVFREQ_GOV_START
* event when device is added to devfreq framework.
* Helper function for starting devfreq device load monitoring. By default,
* deferrable timer is used for load monitoring. But the users can change this
* behavior using the "timer" type in devfreq_dev_profile. This function will be
* called by devfreq governor in response to the DEVFREQ_GOV_START event
* generated while adding a device to the devfreq framework.
*/
void devfreq_monitor_start(struct devfreq *devfreq)
{
@@ -763,6 +764,7 @@ static void devfreq_dev_release(struct device *dev)
dev_pm_opp_put_opp_table(devfreq->opp_table);
mutex_destroy(&devfreq->lock);
srcu_cleanup_notifier_head(&devfreq->transition_notifier_list);
kfree(devfreq);
}
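
To illustrate the "timer" knob that the rewritten kerneldoc above refers to, here is a hypothetical driver profile that opts out of the default deferrable timer; foo_target() and foo_get_dev_status() stand in for the driver's real callbacks and are not from the patch.

    #include <linux/devfreq.h>

    static int foo_target(struct device *dev, unsigned long *freq, u32 flags);
    static int foo_get_dev_status(struct device *dev,
                                  struct devfreq_dev_status *stat);

    /* A delayed (rather than deferrable) timer keeps load sampling running
     * even while the polling CPU is idle. */
    static struct devfreq_dev_profile foo_profile = {
        .polling_ms     = 100,
        .timer          = DEVFREQ_TIMER_DELAYED,
        .target         = foo_target,
        .get_dev_status = foo_get_dev_status,
    };

The profile would then be passed to devm_devfreq_add_device() as usual.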


@@ -7,7 +7,7 @@
#include <linux/devfreq.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of.h>
#include <linux/pm_opp.h>
#include <linux/platform_device.h>
#include <linux/slab.h>


@@ -3,9 +3,9 @@
* Copyright 2019 NXP
*/
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/devfreq.h>
#include <linux/pm_opp.h>


@@ -8,7 +8,6 @@
#include <linux/minmax.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/regulator/consumer.h>


@@ -13,7 +13,7 @@
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/reset.h>


@@ -177,25 +177,24 @@ unsigned long dev_pm_opp_get_power(struct dev_pm_opp *opp)
EXPORT_SYMBOL_GPL(dev_pm_opp_get_power);
/**
* dev_pm_opp_get_freq() - Gets the frequency corresponding to an available opp
* @opp: opp for which frequency has to be returned for
* dev_pm_opp_get_freq_indexed() - Gets the frequency corresponding to an
* available opp with specified index
* @opp: opp for which the frequency has to be returned
* @index: index of the frequency within the required opp
*
* Return: frequency in hertz corresponding to the opp, else
* return 0
* Return: frequency in hertz corresponding to the opp with specified index,
* else return 0
*/
unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
unsigned long dev_pm_opp_get_freq_indexed(struct dev_pm_opp *opp, u32 index)
{
if (IS_ERR_OR_NULL(opp)) {
if (IS_ERR_OR_NULL(opp) || index >= opp->opp_table->clk_count) {
pr_err("%s: Invalid parameters\n", __func__);
return 0;
}
if (!assert_single_clk(opp->opp_table))
return 0;
return opp->rates[0];
return opp->rates[index];
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq);
EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq_indexed);
/**
* dev_pm_opp_get_level() - Gets the level corresponding to an available opp
@@ -227,20 +226,18 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_level);
unsigned int dev_pm_opp_get_required_pstate(struct dev_pm_opp *opp,
unsigned int index)
{
struct opp_table *opp_table = opp->opp_table;
if (IS_ERR_OR_NULL(opp) || !opp->available ||
index >= opp_table->required_opp_count) {
index >= opp->opp_table->required_opp_count) {
pr_err("%s: Invalid parameters\n", __func__);
return 0;
}
/* required-opps not fully initialized yet */
if (lazy_linking_pending(opp_table))
if (lazy_linking_pending(opp->opp_table))
return 0;
/* The required OPP table must belong to a genpd */
if (unlikely(!opp_table->required_opp_tables[index]->is_genpd)) {
if (unlikely(!opp->opp_table->required_opp_tables[index]->is_genpd)) {
pr_err("%s: Performance state is only valid for genpds.\n", __func__);
return 0;
}
@@ -450,7 +447,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
/* Helpers to read keys */
static unsigned long _read_freq(struct dev_pm_opp *opp, int index)
{
return opp->rates[0];
return opp->rates[index];
}
static unsigned long _read_level(struct dev_pm_opp *opp, int index)
@@ -626,6 +623,34 @@ struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact);
/**
* dev_pm_opp_find_freq_exact_indexed() - Search for an exact freq for the
* clock corresponding to the index
* @dev: Device for which we do this operation
* @freq: frequency to search for
* @index: Clock index
* @available: true/false - match for available opp
*
* Search for the matching exact OPP for the clock corresponding to the
* specified index from a starting freq for a device.
*
* Return: matching *opp, else returns ERR_PTR in case of error and should be
* handled using IS_ERR. Error return values can be:
* EINVAL: for bad pointer
* ERANGE: no match found for search
* ENODEV: if device not found in list of registered devices
*
* The callers are required to call dev_pm_opp_put() for the returned OPP after
* use.
*/
struct dev_pm_opp *
dev_pm_opp_find_freq_exact_indexed(struct device *dev, unsigned long freq,
u32 index, bool available)
{
return _find_key_exact(dev, freq, index, available, _read_freq, NULL);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact_indexed);
static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table,
unsigned long *freq)
{
@@ -658,6 +683,34 @@ struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil);
/**
* dev_pm_opp_find_freq_ceil_indexed() - Search for a rounded ceil freq for the
* clock corresponding to the index
* @dev: Device for which we do this operation
* @freq: Start frequency
* @index: Clock index
*
* Search for the matching ceil *available* OPP for the clock corresponding to
* the specified index from a starting freq for a device.
*
* Return: matching *opp and refreshes *freq accordingly, else returns
* ERR_PTR in case of error and should be handled using IS_ERR. Error return
* values can be:
* EINVAL: for bad pointer
* ERANGE: no match found for search
* ENODEV: if device not found in list of registered devices
*
* The callers are required to call dev_pm_opp_put() for the returned OPP after
* use.
*/
struct dev_pm_opp *
dev_pm_opp_find_freq_ceil_indexed(struct device *dev, unsigned long *freq,
u32 index)
{
return _find_key_ceil(dev, freq, index, true, _read_freq, NULL);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil_indexed);
/**
* dev_pm_opp_find_freq_floor() - Search for a rounded floor freq
* @dev: device for which we do this operation
@@ -683,6 +736,34 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor);
/**
* dev_pm_opp_find_freq_floor_indexed() - Search for a rounded floor freq for the
* clock corresponding to the index
* @dev: Device for which we do this operation
* @freq: Start frequency
* @index: Clock index
*
* Search for the matching floor *available* OPP for the clock corresponding to
* the specified index from a starting freq for a device.
*
* Return: matching *opp and refreshes *freq accordingly, else returns
* ERR_PTR in case of error and should be handled using IS_ERR. Error return
* values can be:
* EINVAL: for bad pointer
* ERANGE: no match found for search
* ENODEV: if device not found in list of registered devices
*
* The callers are required to call dev_pm_opp_put() for the returned OPP after
* use.
*/
struct dev_pm_opp *
dev_pm_opp_find_freq_floor_indexed(struct device *dev, unsigned long *freq,
u32 index)
{
return _find_key_floor(dev, freq, index, true, _read_freq, NULL);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor_indexed);
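
A hedged usage sketch for the new *_indexed() lookups, assuming a consumer whose OPP table carries more than one clock; foo_get_second_clk_rate() is a hypothetical helper, not part of the patch.

    #include <linux/err.h>
    #include <linux/pm_opp.h>

    static int foo_get_second_clk_rate(struct device *dev, unsigned long *rate)
    {
        unsigned long freq = *rate;
        struct dev_pm_opp *opp;

        /* Ceil lookup against the frequencies of clock index 1. */
        opp = dev_pm_opp_find_freq_ceil_indexed(dev, &freq, 1);
        if (IS_ERR(opp))
            return PTR_ERR(opp);

        *rate = dev_pm_opp_get_freq_indexed(opp, 1);
        dev_pm_opp_put(opp); /* the find helpers take a reference */
        return 0;
    }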
/**
* dev_pm_opp_find_level_exact() - search for an exact level
* @dev: device for which we do this operation
@@ -2379,7 +2460,7 @@ static int _opp_attach_genpd(struct opp_table *opp_table, struct device *dev,
virt_dev = dev_pm_domain_attach_by_name(dev, *name);
if (IS_ERR_OR_NULL(virt_dev)) {
ret = PTR_ERR(virt_dev) ? : -ENODEV;
ret = virt_dev ? PTR_ERR(virt_dev) : -ENODEV;
dev_err(dev, "Couldn't attach to pm_domain: %d\n", ret);
goto err;
}


@@ -24,7 +24,7 @@
/**
* dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device
* @dev: device for which we do this operation
* @table: Cpufreq table returned back to caller
* @opp_table: Cpufreq table returned back to caller
*
* Generate a cpufreq table for a provided device - this assumes that the
* opp table is already initialized and ready for usage.
@@ -89,7 +89,7 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table);
/**
* dev_pm_opp_free_cpufreq_table() - free the cpufreq table
* @dev: device for which we do this operation
* @table: table to free
* @opp_table: table to free
*
* Free up the table allocated by dev_pm_opp_init_cpufreq_table
*/


@@ -12,6 +12,7 @@
#include <linux/module.h>
#include <linux/powercap.h>
#include <linux/scmi_protocol.h>
#include <linux/slab.h>
#define to_scmi_powercap_zone(z) \
container_of(z, struct scmi_powercap_zone, zone)
@@ -19,6 +20,8 @@
static const struct scmi_powercap_proto_ops *powercap_ops;
struct scmi_powercap_zone {
bool registered;
bool invalid;
unsigned int height;
struct device *dev;
struct scmi_protocol_handle *ph;
@@ -32,6 +35,7 @@ struct scmi_powercap_root {
unsigned int num_zones;
struct scmi_powercap_zone *spzones;
struct list_head *registered_zones;
struct list_head scmi_zones;
};
static struct powercap_control_type *scmi_top_pcntrl;
@@ -271,12 +275,6 @@ static void scmi_powercap_unregister_all_zones(struct scmi_powercap_root *pr)
}
}
static inline bool
scmi_powercap_is_zone_registered(struct scmi_powercap_zone *spz)
{
return !list_empty(&spz->node);
}
static inline unsigned int
scmi_powercap_get_zone_height(struct scmi_powercap_zone *spz)
{
@@ -295,11 +293,46 @@ scmi_powercap_get_parent_zone(struct scmi_powercap_zone *spz)
return &spz->spzones[spz->info->parent_id];
}
static int scmi_powercap_register_zone(struct scmi_powercap_root *pr,
struct scmi_powercap_zone *spz,
struct scmi_powercap_zone *parent)
{
int ret = 0;
struct powercap_zone *z;
if (spz->invalid) {
list_del(&spz->node);
return -EINVAL;
}
z = powercap_register_zone(&spz->zone, scmi_top_pcntrl, spz->info->name,
parent ? &parent->zone : NULL,
&zone_ops, 1, &constraint_ops);
if (!IS_ERR(z)) {
spz->height = scmi_powercap_get_zone_height(spz);
spz->registered = true;
list_move(&spz->node, &pr->registered_zones[spz->height]);
dev_dbg(spz->dev, "Registered node %s - parent %s - height:%d\n",
spz->info->name, parent ? parent->info->name : "ROOT",
spz->height);
} else {
list_del(&spz->node);
ret = PTR_ERR(z);
dev_err(spz->dev,
"Error registering node:%s - parent:%s - h:%d - ret:%d\n",
spz->info->name,
parent ? parent->info->name : "ROOT",
spz->height, ret);
}
return ret;
}
/**
* scmi_powercap_register_zone - Register an SCMI powercap zone recursively
* scmi_zones_register() - Register SCMI powercap zones starting from parent zones
*
* @dev: A reference to the SCMI device
* @pr: A reference to the root powercap zones descriptors
* @spz: A reference to the SCMI powercap zone to register
*
* When registering SCMI powercap zones with the powercap framework we should
* take care to always register zones starting from the root ones and to
@@ -309,10 +342,10 @@ scmi_powercap_get_parent_zone(struct scmi_powercap_zone *spz)
* zones provided by the SCMI platform firmware is built to comply with such
* requirement.
*
* This function, given an SCMI powercap zone to register, takes care to walk
* the SCMI powercap zones tree up to the root looking recursively for
* unregistered parent zones before registering the provided zone; at the same
* time each registered zone height in such a tree is accounted for and each
* This function, given the set of SCMI powercap zones to register, takes care
* to walk the SCMI powercap zones trees up to the root registering any
* unregistered parent zone before registering the child zones; at the same
* time each registered-zone height in such a tree is accounted for and each
* zone, once registered, is stored in the @registered_zones array that is
* indexed by zone height: this way it will be trivial, at unregister time, to walk
* the @registered_zones array backward and unregister all the zones starting
@@ -330,57 +363,55 @@ scmi_powercap_get_parent_zone(struct scmi_powercap_zone *spz)
*
* Return: 0 on Success
*/
static int scmi_powercap_register_zone(struct scmi_powercap_root *pr,
struct scmi_powercap_zone *spz)
static int scmi_zones_register(struct device *dev,
struct scmi_powercap_root *pr)
{
int ret = 0;
struct scmi_powercap_zone *parent;
unsigned int sp = 0, reg_zones = 0;
struct scmi_powercap_zone *spz, **zones_stack;
if (!spz->info)
return ret;
zones_stack = kcalloc(pr->num_zones, sizeof(spz), GFP_KERNEL);
if (!zones_stack)
return -ENOMEM;
parent = scmi_powercap_get_parent_zone(spz);
if (parent && !scmi_powercap_is_zone_registered(parent)) {
/*
* Bail out if a parent domain was marked as unsupported:
* only domains participating as leaves can be skipped.
*/
if (!parent->info)
return -ENODEV;
spz = list_first_entry_or_null(&pr->scmi_zones,
struct scmi_powercap_zone, node);
while (spz) {
struct scmi_powercap_zone *parent;
ret = scmi_powercap_register_zone(pr, parent);
if (ret)
return ret;
}
if (!scmi_powercap_is_zone_registered(spz)) {
struct powercap_zone *z;
z = powercap_register_zone(&spz->zone,
scmi_top_pcntrl,
spz->info->name,
parent ? &parent->zone : NULL,
&zone_ops, 1, &constraint_ops);
if (!IS_ERR(z)) {
spz->height = scmi_powercap_get_zone_height(spz);
list_add(&spz->node,
&pr->registered_zones[spz->height]);
dev_dbg(spz->dev,
"Registered node %s - parent %s - height:%d\n",
spz->info->name,
parent ? parent->info->name : "ROOT",
spz->height);
ret = 0;
parent = scmi_powercap_get_parent_zone(spz);
if (parent && !parent->registered) {
zones_stack[sp++] = spz;
spz = parent;
} else {
ret = PTR_ERR(z);
dev_err(spz->dev,
"Error registering node:%s - parent:%s - h:%d - ret:%d\n",
spz->info->name,
parent ? parent->info->name : "ROOT",
spz->height, ret);
ret = scmi_powercap_register_zone(pr, spz, parent);
if (!ret) {
reg_zones++;
} else if (sp) {
/* Failed to register a non-leaf zone.
* Bail-out.
*/
dev_err(dev,
"Failed to register non-leaf zone - ret:%d\n",
ret);
scmi_powercap_unregister_all_zones(pr);
reg_zones = 0;
goto out;
}
/* Pick next zone to process */
if (sp)
spz = zones_stack[--sp];
else
spz = list_first_entry_or_null(&pr->scmi_zones,
struct scmi_powercap_zone,
node);
}
}
out:
kfree(zones_stack);
dev_info(dev, "Registered %d SCMI Powercap domains !\n", reg_zones);
return ret;
}
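
The recursion removal follows a standard conversion: when a zone's parent is not registered yet, the zone is parked on an explicit stack and the walk moves to the parent, so parents are always registered first without consuming kernel stack proportional to the tree depth. A minimal standalone model of the pattern, with simplified types and made-up data:

    #include <stdio.h>
    #include <stdlib.h>

    struct zone {
        int parent;     /* index of the parent zone, -1 for a root */
        int registered;
    };

    static int register_all(struct zone *z, int nz)
    {
        struct zone **stack = calloc(nz, sizeof(*stack));
        if (!stack)
            return -1;

        for (int i = 0; i < nz; i++) {
            struct zone *cur = &z[i];
            int sp = 0;

            while (!cur->registered) {
                struct zone *parent =
                    cur->parent >= 0 ? &z[cur->parent] : NULL;

                if (parent && !parent->registered) {
                    stack[sp++] = cur;  /* revisit later */
                    cur = parent;       /* walk up first */
                    continue;
                }
                cur->registered = 1;
                printf("registered zone %td\n", cur - z);
                if (sp)
                    cur = stack[--sp];  /* resume a parked child */
            }
        }
        free(stack);
        return 0;
    }

    int main(void)
    {
        /* zone 2 -> zone 0 -> zone 1 (root); prints 1, 0, 2 in that order */
        struct zone z[] = { { 1, 0 }, { -1, 0 }, { 0, 0 } };

        return register_all(z, 3);
    }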
@@ -424,6 +455,8 @@ static int scmi_powercap_probe(struct scmi_device *sdev)
if (!pr->registered_zones)
return -ENOMEM;
INIT_LIST_HEAD(&pr->scmi_zones);
for (i = 0, spz = pr->spzones; i < pr->num_zones; i++, spz++) {
/*
* Powercap domains are validated by the protocol layer, i.e.
@@ -438,6 +471,7 @@ static int scmi_powercap_probe(struct scmi_device *sdev)
INIT_LIST_HEAD(&spz->node);
INIT_LIST_HEAD(&pr->registered_zones[i]);
list_add_tail(&spz->node, &pr->scmi_zones);
/*
* Forcibly skip powercap domains using an abstract scale.
* Note that only leaves domains can be skipped, so this could
@@ -448,7 +482,7 @@ static int scmi_powercap_probe(struct scmi_device *sdev)
dev_warn(dev,
"Abstract power scale not supported. Skip %s.\n",
spz->info->name);
spz->info = NULL;
spz->invalid = true;
continue;
}
}
@@ -457,21 +491,12 @@ static int scmi_powercap_probe(struct scmi_device *sdev)
* Scan array of retrieved SCMI powercap domains and register them
* recursively starting from the root domains.
*/
for (i = 0, spz = pr->spzones; i < pr->num_zones; i++, spz++) {
ret = scmi_powercap_register_zone(pr, spz);
if (ret) {
dev_err(dev,
"Failed to register powercap zone %s - ret:%d\n",
spz->info->name, ret);
scmi_powercap_unregister_all_zones(pr);
return ret;
}
}
ret = scmi_zones_register(dev, pr);
if (ret)
return ret;
dev_set_drvdata(dev, pr);
dev_info(dev, "Registered %d SCMI Powercap domains !\n", pr->num_zones);
return ret;
}


@@ -1485,7 +1485,7 @@ static int rapl_detect_domains(struct rapl_package *rp)
}
pr_debug("found %d domains on %s\n", rp->nr_domains, rp->name);
rp->domains = kcalloc(rp->nr_domains + 1, sizeof(struct rapl_domain),
rp->domains = kcalloc(rp->nr_domains, sizeof(struct rapl_domain),
GFP_KERNEL);
if (!rp->domains)
return -ENOMEM;


@@ -19,6 +19,7 @@
#include <linux/pm_qos.h>
#include <linux/spinlock.h>
#include <linux/sysfs.h>
#include <linux/minmax.h>
/*********************************************************************
* CPUFREQ INTERFACE *
@@ -370,7 +371,7 @@ struct cpufreq_driver {
int (*target_intermediate)(struct cpufreq_policy *policy,
unsigned int index);
/* should be defined, if possible */
/* should be defined, if possible, return 0 on error */
unsigned int (*get)(unsigned int cpu);
/* Called to update policy limits on firmware notifications. */
@@ -467,17 +468,8 @@ static inline void cpufreq_verify_within_limits(struct cpufreq_policy_data *poli
unsigned int min,
unsigned int max)
{
if (policy->min < min)
policy->min = min;
if (policy->max < min)
policy->max = min;
if (policy->min > max)
policy->min = max;
if (policy->max > max)
policy->max = max;
if (policy->min > policy->max)
policy->min = policy->max;
return;
policy->max = clamp(policy->max, min, max);
policy->min = clamp(policy->min, min, policy->max);
}
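
The two clamp() calls are deliberately ordered: policy->max is clamped into [min, max] first, and policy->min is then clamped against the already-corrected policy->max, which preserves policy->min <= policy->max exactly as the removed if-chain did. A small userspace model of the same logic (it assumes min <= max on entry, which the kernel's clamp() would warn about otherwise):

    #include <assert.h>

    /* Userspace stand-in for the kernel clamp() macro. */
    #define clamp(val, lo, hi) ((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

    struct policy { unsigned int min, max; };

    static void verify_within_limits(struct policy *p,
                                     unsigned int min, unsigned int max)
    {
        p->max = clamp(p->max, min, max);
        p->min = clamp(p->min, min, p->max); /* also enforces min <= max */
    }

    int main(void)
    {
        struct policy p = { .min = 3000000, .max = 800000 };

        verify_within_limits(&p, 1000000, 2000000);
        assert(p.min >= 1000000 && p.max <= 2000000 && p.min <= p.max);
        return 0;
    }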
static inline void


@@ -103,7 +103,7 @@ int dev_pm_opp_get_supplies(struct dev_pm_opp *opp, struct dev_pm_opp_supply *su
unsigned long dev_pm_opp_get_power(struct dev_pm_opp *opp);
unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp);
unsigned long dev_pm_opp_get_freq_indexed(struct dev_pm_opp *opp, u32 index);
unsigned int dev_pm_opp_get_level(struct dev_pm_opp *opp);
@@ -121,17 +121,29 @@ unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev);
struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
unsigned long freq,
bool available);
struct dev_pm_opp *
dev_pm_opp_find_freq_exact_indexed(struct device *dev, unsigned long freq,
u32 index, bool available);
struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
unsigned long *freq);
struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
unsigned int level);
struct dev_pm_opp *dev_pm_opp_find_level_ceil(struct device *dev,
unsigned int *level);
struct dev_pm_opp *dev_pm_opp_find_freq_floor_indexed(struct device *dev,
unsigned long *freq, u32 index);
struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
unsigned long *freq);
struct dev_pm_opp *dev_pm_opp_find_freq_ceil_indexed(struct device *dev,
unsigned long *freq, u32 index);
struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
unsigned int level);
struct dev_pm_opp *dev_pm_opp_find_level_ceil(struct device *dev,
unsigned int *level);
struct dev_pm_opp *dev_pm_opp_find_bw_ceil(struct device *dev,
unsigned int *bw, int index);
@@ -200,7 +212,7 @@ static inline unsigned long dev_pm_opp_get_power(struct dev_pm_opp *opp)
return 0;
}
static inline unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
static inline unsigned long dev_pm_opp_get_freq_indexed(struct dev_pm_opp *opp, u32 index)
{
return 0;
}
@@ -247,36 +259,55 @@ static inline unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev)
return 0;
}
static inline struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
unsigned int level)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_level_ceil(struct device *dev,
unsigned int *level)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
unsigned long freq, bool available)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *
dev_pm_opp_find_freq_exact_indexed(struct device *dev, unsigned long freq,
u32 index, bool available)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
unsigned long *freq)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *
dev_pm_opp_find_freq_floor_indexed(struct device *dev, unsigned long *freq, u32 index)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
unsigned long *freq)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *
dev_pm_opp_find_freq_ceil_indexed(struct device *dev, unsigned long *freq, u32 index)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
unsigned int level)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_level_ceil(struct device *dev,
unsigned int *level)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_bw_ceil(struct device *dev,
unsigned int *bw, int index)
{
@@ -631,4 +662,9 @@ static inline void dev_pm_opp_put_prop_name(int token)
dev_pm_opp_clear_config(token);
}
static inline unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
{
return dev_pm_opp_get_freq_indexed(opp, 0);
}
#endif /* __LINUX_OPP_H__ */


@@ -85,8 +85,6 @@ extern void pm_runtime_irq_safe(struct device *dev);
extern void __pm_runtime_use_autosuspend(struct device *dev, bool use);
extern void pm_runtime_set_autosuspend_delay(struct device *dev, int delay);
extern u64 pm_runtime_autosuspend_expiration(struct device *dev);
extern void pm_runtime_update_max_time_suspended(struct device *dev,
s64 delta_ns);
extern void pm_runtime_set_memalloc_noio(struct device *dev, bool enable);
extern void pm_runtime_get_suppliers(struct device *dev);
extern void pm_runtime_put_suppliers(struct device *dev);


@@ -194,6 +194,16 @@ static inline void pm_wakeup_dev_event(struct device *dev, unsigned int msec,
#endif /* !CONFIG_PM_SLEEP */
static inline bool device_awake_path(struct device *dev)
{
return device_wakeup_path(dev);
}
static inline void device_set_awake_path(struct device *dev)
{
device_set_wakeup_path(dev);
}
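
A hedged sketch of how a driver might use the new helper from its system-wide suspend callback; foo_suspend() and foo_needs_power_during_suspend() are hypothetical, not from the patch.

    #include <linux/device.h>
    #include <linux/pm.h>

    static bool foo_needs_power_during_suspend(struct device *dev);

    static int foo_suspend(struct device *dev)
    {
        /* Flag the device as part of the "awake path" so that, e.g., its
         * PM domain can keep it powered across the transition. */
        if (foo_needs_power_during_suspend(dev))
            device_set_awake_path(dev);
        return 0;
    }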
static inline void __pm_wakeup_event(struct wakeup_source *ws, unsigned int msec)
{
return pm_wakeup_ws_event(ws, msec, false);


@@ -220,6 +220,11 @@ static struct pm_qos_constraints cpu_latency_constraints = {
.type = PM_QOS_MIN,
};
static inline bool cpu_latency_qos_value_invalid(s32 value)
{
return value < 0 && value != PM_QOS_DEFAULT_VALUE;
}
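
Note that PM_QOS_DEFAULT_VALUE is -1, which is why it has to be exempted from the negative-value check. A short usage sketch (foo_req and the values are illustrative only):

    #include <linux/pm_qos.h>

    static struct pm_qos_request foo_req;

    static void foo_qos_demo(void)
    {
        /* Rejected now: negative and not PM_QOS_DEFAULT_VALUE, so the
         * request is silently ignored and never becomes active. */
        cpu_latency_qos_add_request(&foo_req, -5);

        /* Accepted: request a 20 us CPU wakeup latency limit. */
        cpu_latency_qos_add_request(&foo_req, 20);
    }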
/**
* cpu_latency_qos_limit - Return current system-wide CPU latency QoS limit.
*/
@@ -263,7 +268,7 @@ static void cpu_latency_qos_apply(struct pm_qos_request *req,
*/
void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value)
{
if (!req)
if (!req || cpu_latency_qos_value_invalid(value))
return;
if (cpu_latency_qos_request_active(req)) {
@@ -289,7 +294,7 @@ EXPORT_SYMBOL_GPL(cpu_latency_qos_add_request);
*/
void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value)
{
if (!req)
if (!req || cpu_latency_qos_value_invalid(new_value))
return;
if (!cpu_latency_qos_request_active(req)) {


@@ -404,6 +404,7 @@ struct bm_position {
struct mem_zone_bm_rtree *zone;
struct rtree_node *node;
unsigned long node_pfn;
unsigned long cur_pfn;
int node_bit;
};
@@ -589,6 +590,7 @@ static void memory_bm_position_reset(struct memory_bitmap *bm)
bm->cur.node = list_entry(bm->cur.zone->leaves.next,
struct rtree_node, list);
bm->cur.node_pfn = 0;
bm->cur.cur_pfn = BM_END_OF_MAP;
bm->cur.node_bit = 0;
}
@@ -799,6 +801,7 @@ node_found:
bm->cur.zone = zone;
bm->cur.node = node;
bm->cur.node_pfn = (pfn - zone->start_pfn) & ~BM_BLOCK_MASK;
bm->cur.cur_pfn = pfn;
/* Set return values */
*addr = node->data;
@@ -850,6 +853,11 @@ static void memory_bm_clear_current(struct memory_bitmap *bm)
clear_bit(bit, bm->cur.node->data);
}
static unsigned long memory_bm_get_current(struct memory_bitmap *bm)
{
return bm->cur.cur_pfn;
}
static int memory_bm_test_bit(struct memory_bitmap *bm, unsigned long pfn)
{
void *addr;
@@ -929,10 +937,12 @@ static unsigned long memory_bm_next_pfn(struct memory_bitmap *bm)
if (bit < bits) {
pfn = bm->cur.zone->start_pfn + bm->cur.node_pfn + bit;
bm->cur.node_bit = bit + 1;
bm->cur.cur_pfn = pfn;
return pfn;
}
} while (rtree_next_node(bm));
bm->cur.cur_pfn = BM_END_OF_MAP;
return BM_END_OF_MAP;
}
@@ -1423,14 +1433,19 @@ static unsigned int count_data_pages(void)
/*
* This is needed, because copy_page and memcpy are not usable for copying
* task structs.
* task structs. Returns true if the page was filled with only zeros,
* otherwise false.
*/
static inline void do_copy_page(long *dst, long *src)
static inline bool do_copy_page(long *dst, long *src)
{
long z = 0;
int n;
for (n = PAGE_SIZE / sizeof(long); n; n--)
for (n = PAGE_SIZE / sizeof(long); n; n--) {
z |= *src;
*dst++ = *src++;
}
return !z;
}
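
The zero detection folds into the copy loop itself: OR-ing every source word into an accumulator classifies the page in the same pass that copies it. A standalone model of the same trick:

    #include <assert.h>

    static int copy_and_test_zero(long *dst, const long *src, int nwords)
    {
        long z = 0;

        for (int n = nwords; n; n--) {
            z |= *src;      /* accumulate while copying */
            *dst++ = *src++;
        }
        return !z;          /* true iff every word was zero */
    }

    int main(void)
    {
        long zeros[4] = { 0, 0, 0, 0 }, mixed[4] = { 0, 5, 0, 0 }, dst[4];

        assert(copy_and_test_zero(dst, zeros, 4) == 1);
        assert(copy_and_test_zero(dst, mixed, 4) == 0);
        return 0;
    }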
/**
@@ -1439,17 +1454,21 @@ static inline void do_copy_page(long *dst, long *src)
* Check if the page we are going to copy is marked as present in the kernel
* page tables. This always is the case if CONFIG_DEBUG_PAGEALLOC or
* CONFIG_ARCH_HAS_SET_DIRECT_MAP is not set. In that case kernel_page_present()
* always returns 'true'.
* always returns 'true'. Returns true if the page was entirely composed of
* zeros, otherwise it will return false.
*/
static void safe_copy_page(void *dst, struct page *s_page)
static bool safe_copy_page(void *dst, struct page *s_page)
{
bool zeros_only;
if (kernel_page_present(s_page)) {
do_copy_page(dst, page_address(s_page));
zeros_only = do_copy_page(dst, page_address(s_page));
} else {
hibernate_map_page(s_page);
do_copy_page(dst, page_address(s_page));
zeros_only = do_copy_page(dst, page_address(s_page));
hibernate_unmap_page(s_page);
}
return zeros_only;
}
#ifdef CONFIG_HIGHMEM
@@ -1459,17 +1478,18 @@ static inline struct page *page_is_saveable(struct zone *zone, unsigned long pfn
saveable_highmem_page(zone, pfn) : saveable_page(zone, pfn);
}
static void copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
{
struct page *s_page, *d_page;
void *src, *dst;
bool zeros_only;
s_page = pfn_to_page(src_pfn);
d_page = pfn_to_page(dst_pfn);
if (PageHighMem(s_page)) {
src = kmap_atomic(s_page);
dst = kmap_atomic(d_page);
do_copy_page(dst, src);
zeros_only = do_copy_page(dst, src);
kunmap_atomic(dst);
kunmap_atomic(src);
} else {
@@ -1478,30 +1498,39 @@ static void copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
* The page pointed to by src may contain some kernel
* data modified by kmap_atomic()
*/
safe_copy_page(buffer, s_page);
zeros_only = safe_copy_page(buffer, s_page);
dst = kmap_atomic(d_page);
copy_page(dst, buffer);
kunmap_atomic(dst);
} else {
safe_copy_page(page_address(d_page), s_page);
zeros_only = safe_copy_page(page_address(d_page), s_page);
}
}
return zeros_only;
}
#else
#define page_is_saveable(zone, pfn) saveable_page(zone, pfn)
static inline void copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
static inline int copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
{
safe_copy_page(page_address(pfn_to_page(dst_pfn)),
return safe_copy_page(page_address(pfn_to_page(dst_pfn)),
pfn_to_page(src_pfn));
}
#endif /* CONFIG_HIGHMEM */
static void copy_data_pages(struct memory_bitmap *copy_bm,
struct memory_bitmap *orig_bm)
/*
* Copy all saveable pages into pages pulled from @copy_bm. If a page was
* entirely filled with zeros, it is marked in @zero_bm instead of being
* copied.
*
* Returns the number of pages copied.
*/
static unsigned long copy_data_pages(struct memory_bitmap *copy_bm,
struct memory_bitmap *orig_bm,
struct memory_bitmap *zero_bm)
{
unsigned long copied_pages = 0;
struct zone *zone;
unsigned long pfn;
unsigned long pfn, copy_pfn;
for_each_populated_zone(zone) {
unsigned long max_zone_pfn;
@@ -1514,18 +1543,29 @@ static void copy_data_pages(struct memory_bitmap *copy_bm,
}
memory_bm_position_reset(orig_bm);
memory_bm_position_reset(copy_bm);
copy_pfn = memory_bm_next_pfn(copy_bm);
for(;;) {
pfn = memory_bm_next_pfn(orig_bm);
if (unlikely(pfn == BM_END_OF_MAP))
break;
copy_data_page(memory_bm_next_pfn(copy_bm), pfn);
if (copy_data_page(copy_pfn, pfn)) {
memory_bm_set_bit(zero_bm, pfn);
/* Use this copy_pfn for a page that is not full of zeros */
continue;
}
copied_pages++;
copy_pfn = memory_bm_next_pfn(copy_bm);
}
return copied_pages;
}
/* Total number of image pages */
static unsigned int nr_copy_pages;
/* Number of pages needed for saving the original pfns of the image pages */
static unsigned int nr_meta_pages;
/* Number of zero pages */
static unsigned int nr_zero_pages;
/*
* Numbers of normal and highmem page frames allocated for hibernation image
* before suspending devices.
@@ -1546,6 +1586,9 @@ static struct memory_bitmap orig_bm;
*/
static struct memory_bitmap copy_bm;
/* Memory bitmap which tracks which saveable pages were zero filled. */
static struct memory_bitmap zero_bm;
/**
* swsusp_free - Free pages allocated for hibernation image.
*
@ -1590,6 +1633,7 @@ loop:
out:
nr_copy_pages = 0;
nr_meta_pages = 0;
nr_zero_pages = 0;
restore_pblist = NULL;
buffer = NULL;
alloc_normal = 0;
@@ -1808,8 +1852,15 @@ int hibernate_preallocate_memory(void)
goto err_out;
}
error = memory_bm_create(&zero_bm, GFP_IMAGE, PG_ANY);
if (error) {
pr_err("Cannot allocate zero bitmap\n");
goto err_out;
}
alloc_normal = 0;
alloc_highmem = 0;
nr_zero_pages = 0;
/* Count the number of saveable data pages. */
save_highmem = count_highmem_pages();
@@ -2089,19 +2140,19 @@ asmlinkage __visible int swsusp_save(void)
* Kill them.
*/
drain_local_pages(NULL);
copy_data_pages(&copy_bm, &orig_bm);
nr_copy_pages = copy_data_pages(&copy_bm, &orig_bm, &zero_bm);
/*
* End of critical section. From now on, we can write to memory,
* but we should not touch disk. This specially means we must _not_
* touch swap space! Except we must write out our image of course.
*/
nr_pages += nr_highmem;
nr_copy_pages = nr_pages;
/* We don't actually copy the zero pages */
nr_zero_pages = nr_pages - nr_copy_pages;
nr_meta_pages = DIV_ROUND_UP(nr_pages * sizeof(long), PAGE_SIZE);
pr_info("Image created (%d pages copied)\n", nr_pages);
pr_info("Image created (%d pages copied, %d zero pages)\n", nr_copy_pages, nr_zero_pages);
return 0;
}
@@ -2146,15 +2197,22 @@ static int init_header(struct swsusp_info *info)
return init_header_complete(info);
}
#define ENCODED_PFN_ZERO_FLAG ((unsigned long)1 << (BITS_PER_LONG - 1))
#define ENCODED_PFN_MASK (~ENCODED_PFN_ZERO_FLAG)
/**
* pack_pfns - Prepare PFNs for saving.
* @bm: Memory bitmap.
* @buf: Memory buffer to store the PFNs in.
* @zero_bm: Memory bitmap containing PFNs of zero pages.
*
* PFNs corresponding to set bits in @bm are stored in the area of memory
* pointed to by @buf (1 page at a time).
* pointed to by @buf (1 page at a time). Pages which were filled with only
* zeros will have the highest bit set in the packed format to distinguish
* them from PFNs which will be contained in the image file.
*/
static inline void pack_pfns(unsigned long *buf, struct memory_bitmap *bm)
static inline void pack_pfns(unsigned long *buf, struct memory_bitmap *bm,
struct memory_bitmap *zero_bm)
{
int j;
@@ -2162,6 +2220,8 @@ static inline void pack_pfns(unsigned long *buf, struct memory_bitmap *bm)
buf[j] = memory_bm_next_pfn(bm);
if (unlikely(buf[j] == BM_END_OF_MAP))
break;
if (memory_bm_test_bit(zero_bm, buf[j]))
buf[j] |= ENCODED_PFN_ZERO_FLAG;
}
}
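
The flag lives in the top bit of the packed word, which no valid PFN reaches in practice. A standalone round-trip check of the encoding (the macros are copied from the patch above):

    #include <assert.h>
    #include <limits.h>

    #define BITS_PER_LONG         (sizeof(unsigned long) * CHAR_BIT)
    #define ENCODED_PFN_ZERO_FLAG ((unsigned long)1 << (BITS_PER_LONG - 1))
    #define ENCODED_PFN_MASK      (~ENCODED_PFN_ZERO_FLAG)

    int main(void)
    {
        unsigned long pfn = 0x12345;
        unsigned long packed = pfn | ENCODED_PFN_ZERO_FLAG; /* a zero-filled page */

        assert(packed & ENCODED_PFN_ZERO_FLAG);     /* page was all zeros */
        assert((packed & ENCODED_PFN_MASK) == pfn); /* original PFN recovered */
        return 0;
    }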
@@ -2203,7 +2263,7 @@ int snapshot_read_next(struct snapshot_handle *handle)
memory_bm_position_reset(&copy_bm);
} else if (handle->cur <= nr_meta_pages) {
clear_page(buffer);
pack_pfns(buffer, &orig_bm);
pack_pfns(buffer, &orig_bm, &zero_bm);
} else {
struct page *page;
@@ -2299,24 +2359,35 @@ static int load_header(struct swsusp_info *info)
* unpack_orig_pfns - Set bits corresponding to given PFNs in a memory bitmap.
* @bm: Memory bitmap.
* @buf: Area of memory containing the PFNs.
* @zero_bm: Memory bitmap with the zero PFNs marked.
*
* For each element of the array pointed to by @buf (1 page at a time), set the
* corresponding bit in @bm.
* corresponding bit in @bm. If the page was originally populated with only
* zeros then a corresponding bit will also be set in @zero_bm.
*/
static int unpack_orig_pfns(unsigned long *buf, struct memory_bitmap *bm)
static int unpack_orig_pfns(unsigned long *buf, struct memory_bitmap *bm,
struct memory_bitmap *zero_bm)
{
unsigned long decoded_pfn;
bool zero;
int j;
for (j = 0; j < PAGE_SIZE / sizeof(long); j++) {
if (unlikely(buf[j] == BM_END_OF_MAP))
break;
if (pfn_valid(buf[j]) && memory_bm_pfn_present(bm, buf[j])) {
memory_bm_set_bit(bm, buf[j]);
zero = !!(buf[j] & ENCODED_PFN_ZERO_FLAG);
decoded_pfn = buf[j] & ENCODED_PFN_MASK;
if (pfn_valid(decoded_pfn) && memory_bm_pfn_present(bm, decoded_pfn)) {
memory_bm_set_bit(bm, decoded_pfn);
if (zero) {
memory_bm_set_bit(zero_bm, decoded_pfn);
nr_zero_pages++;
}
} else {
if (!pfn_valid(buf[j]))
if (!pfn_valid(decoded_pfn))
pr_err(FW_BUG "Memory map mismatch at 0x%llx after hibernation\n",
(unsigned long long)PFN_PHYS(buf[j]));
(unsigned long long)PFN_PHYS(decoded_pfn));
return -EFAULT;
}
}
@@ -2538,6 +2609,7 @@ static inline void free_highmem_data(void) {}
* prepare_image - Make room for loading hibernation image.
* @new_bm: Uninitialized memory bitmap structure.
* @bm: Memory bitmap with unsafe pages marked.
* @zero_bm: Memory bitmap containing the zero pages.
*
* Use @bm to mark the pages that will be overwritten in the process of
* restoring the system memory state from the suspend image ("unsafe" pages)
@@ -2548,10 +2620,15 @@ static inline void free_highmem_data(void) {}
* pages will be used for just yet. Instead, we mark them all as allocated and
* create a lists of "safe" pages to be used later. On systems with high
* memory a list of "safe" highmem pages is created too.
*
* Because it was not known which pages were unsafe when @zero_bm was created,
* make a copy of it and recreate it within safe pages.
*/
static int prepare_image(struct memory_bitmap *new_bm, struct memory_bitmap *bm)
static int prepare_image(struct memory_bitmap *new_bm, struct memory_bitmap *bm,
struct memory_bitmap *zero_bm)
{
unsigned int nr_pages, nr_highmem;
struct memory_bitmap tmp;
struct linked_page *lp;
int error;
@@ -2568,6 +2645,24 @@ static int prepare_image(struct memory_bitmap *new_bm, struct memory_bitmap *bm)
duplicate_memory_bitmap(new_bm, bm);
memory_bm_free(bm, PG_UNSAFE_KEEP);
/* Make a copy of zero_bm so it can be created in safe pages */
error = memory_bm_create(&tmp, GFP_ATOMIC, PG_ANY);
if (error)
goto Free;
duplicate_memory_bitmap(&tmp, zero_bm);
memory_bm_free(zero_bm, PG_UNSAFE_KEEP);
/* Recreate zero_bm in safe pages */
error = memory_bm_create(zero_bm, GFP_ATOMIC, PG_SAFE);
if (error)
goto Free;
duplicate_memory_bitmap(zero_bm, &tmp);
memory_bm_free(&tmp, PG_UNSAFE_KEEP);
/* At this point zero_bm is in safe pages and it can be used for restoring. */
if (nr_highmem > 0) {
error = prepare_highmem_image(bm, &nr_highmem);
if (error)
@@ -2582,7 +2677,7 @@ static int prepare_image(struct memory_bitmap *new_bm, struct memory_bitmap *bm)
*
* nr_copy_pages cannot be less than allocated_unsafe_pages too.
*/
nr_pages = nr_copy_pages - nr_highmem - allocated_unsafe_pages;
nr_pages = (nr_zero_pages + nr_copy_pages) - nr_highmem - allocated_unsafe_pages;
nr_pages = DIV_ROUND_UP(nr_pages, PBES_PER_LINKED_PAGE);
while (nr_pages > 0) {
lp = get_image_page(GFP_ATOMIC, PG_SAFE);
@@ -2595,7 +2690,7 @@ static int prepare_image(struct memory_bitmap *new_bm, struct memory_bitmap *bm)
nr_pages--;
}
/* Preallocate memory for the image */
nr_pages = nr_copy_pages - nr_highmem - allocated_unsafe_pages;
nr_pages = (nr_zero_pages + nr_copy_pages) - nr_highmem - allocated_unsafe_pages;
while (nr_pages > 0) {
lp = (struct linked_page *)get_zeroed_page(GFP_ATOMIC);
if (!lp) {
@@ -2683,8 +2778,9 @@ int snapshot_write_next(struct snapshot_handle *handle)
static struct chain_allocator ca;
int error = 0;
next:
/* Check if we have already loaded the entire image */
if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages)
if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages + nr_zero_pages)
return 0;
handle->sync_read = 1;
@@ -2709,19 +2805,26 @@ int snapshot_write_next(struct snapshot_handle *handle)
if (error)
return error;
error = memory_bm_create(&zero_bm, GFP_ATOMIC, PG_ANY);
if (error)
return error;
nr_zero_pages = 0;
hibernate_restore_protection_begin();
} else if (handle->cur <= nr_meta_pages + 1) {
error = unpack_orig_pfns(buffer, &copy_bm);
error = unpack_orig_pfns(buffer, &copy_bm, &zero_bm);
if (error)
return error;
if (handle->cur == nr_meta_pages + 1) {
error = prepare_image(&orig_bm, &copy_bm);
error = prepare_image(&orig_bm, &copy_bm, &zero_bm);
if (error)
return error;
chain_init(&ca, GFP_ATOMIC, PG_SAFE);
memory_bm_position_reset(&orig_bm);
memory_bm_position_reset(&zero_bm);
restore_pblist = NULL;
handle->buffer = get_buffer(&orig_bm, &ca);
handle->sync_read = 0;
@@ -2738,6 +2841,14 @@ int snapshot_write_next(struct snapshot_handle *handle)
handle->sync_read = 0;
}
handle->cur++;
/* Zero pages were not included in the image; clear the buffer and move on. */
if (handle->cur > nr_meta_pages + 1 &&
memory_bm_test_bit(&zero_bm, memory_bm_get_current(&orig_bm))) {
memset(handle->buffer, 0, PAGE_SIZE);
goto next;
}
return PAGE_SIZE;
}
@@ -2754,7 +2865,7 @@ void snapshot_write_finalize(struct snapshot_handle *handle)
copy_last_highmem_page();
hibernate_restore_protect_page(handle->buffer);
/* Do that only if we have loaded the image entirely */
if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages) {
if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages + nr_zero_pages) {
memory_bm_recycle(&orig_bm);
free_highmem_data();
}
@@ -2763,7 +2874,7 @@ int snapshot_image_loaded(struct snapshot_handle *handle)
int snapshot_image_loaded(struct snapshot_handle *handle)
{
return !(!nr_copy_pages || !last_highmem_page_copied() ||
handle->cur <= nr_meta_pages + nr_copy_pages);
handle->cur <= nr_meta_pages + nr_copy_pages + nr_zero_pages);
}
#ifdef CONFIG_HIGHMEM


@@ -53,7 +53,7 @@ DESTDIR ?=
VERSION:= $(shell ./utils/version-gen.sh)
LIB_MAJ= 0.0.1
LIB_MIN= 0
LIB_MIN= 1
PACKAGE = cpupower
PACKAGE_BUGREPORT = linux-pm@vger.kernel.org


@@ -14,6 +14,13 @@
#include "cpupower.h"
#include "cpupower_intern.h"
int is_valid_path(const char *path)
{
if (access(path, F_OK) == -1)
return 0;
return 1;
}
unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen)
{
ssize_t numread;


@@ -7,5 +7,6 @@
#define SYSFS_PATH_MAX 255
int is_valid_path(const char *path);
unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen);
unsigned int cpupower_write_sysfs(const char *path, char *buf, size_t buflen);


@@ -41,14 +41,6 @@ int cmd_idle_set(int argc, char **argv)
cont = 0;
break;
case 'd':
if (param) {
param = -1;
cont = 0;
break;
}
param = ret;
idlestate = atoi(optarg);
break;
case 'e':
if (param) {
param = -1;
@@ -56,7 +48,13 @@ int cmd_idle_set(int argc, char **argv)
break;
}
param = ret;
idlestate = atoi(optarg);
strtol(optarg, &endptr, 10);
if (*endptr != '\0') {
printf(_("Bad value: %s, Integer expected\n"), optarg);
exit(EXIT_FAILURE);
} else {
idlestate = atoi(optarg);
}
break;
case 'D':
if (param) {


@@ -18,6 +18,9 @@
static struct option set_opts[] = {
{"perf-bias", required_argument, NULL, 'b'},
{"epp", required_argument, NULL, 'e'},
{"amd-pstate-mode", required_argument, NULL, 'm'},
{"turbo-boost", required_argument, NULL, 't'},
{ },
};
@@ -37,11 +40,15 @@ int cmd_set(int argc, char **argv)
union {
struct {
int perf_bias:1;
int epp:1;
int mode:1;
int turbo_boost:1;
};
int params;
} params;
int perf_bias = 0;
int perf_bias = 0, turbo_boost = 1;
int ret = 0;
char epp[30], mode[20];
ret = uname(&uts);
if (!ret && (!strcmp(uts.machine, "ppc64le") ||
@@ -55,7 +62,7 @@ int cmd_set(int argc, char **argv)
params.params = 0;
/* parameter parsing */
while ((ret = getopt_long(argc, argv, "b:",
while ((ret = getopt_long(argc, argv, "b:e:m:",
set_opts, NULL)) != -1) {
switch (ret) {
case 'b':
@@ -69,6 +76,38 @@ int cmd_set(int argc, char **argv)
}
params.perf_bias = 1;
break;
case 'e':
if (params.epp)
print_wrong_arg_exit();
if (sscanf(optarg, "%29s", epp) != 1) {
print_wrong_arg_exit();
return -EINVAL;
}
params.epp = 1;
break;
case 'm':
if (cpupower_cpu_info.vendor != X86_VENDOR_AMD)
print_wrong_arg_exit();
if (params.mode)
print_wrong_arg_exit();
if (sscanf(optarg, "%19s", mode) != 1) {
print_wrong_arg_exit();
return -EINVAL;
}
params.mode = 1;
break;
case 't':
if (params.turbo_boost)
print_wrong_arg_exit();
turbo_boost = atoi(optarg);
if (turbo_boost < 0 || turbo_boost > 1) {
printf("--turbo-boost param out of range [0-1]\n");
print_wrong_arg_exit();
}
params.turbo_boost = 1;
break;
default:
print_wrong_arg_exit();
}
@@ -77,6 +116,18 @@ int cmd_set(int argc, char **argv)
if (!params.params)
print_wrong_arg_exit();
if (params.mode) {
ret = cpupower_set_amd_pstate_mode(mode);
if (ret)
fprintf(stderr, "Error setting mode\n");
}
if (params.turbo_boost) {
ret = cpupower_set_turbo_boost(turbo_boost);
if (ret)
fprintf(stderr, "Error setting turbo-boost\n");
}
/* Default is: set all CPUs */
if (bitmask_isallclear(cpus_chosen))
bitmask_setall(cpus_chosen);
@@ -102,6 +153,16 @@ int cmd_set(int argc, char **argv)
break;
}
}
if (params.epp) {
ret = cpupower_set_epp(cpu, epp);
if (ret) {
fprintf(stderr,
"Error setting epp value on CPU %d\n", cpu);
break;
}
}
}
return ret;
}


@@ -116,6 +116,10 @@ extern int cpupower_intel_set_perf_bias(unsigned int cpu, unsigned int val);
extern int cpupower_intel_get_perf_bias(unsigned int cpu);
extern unsigned long long msr_intel_get_turbo_ratio(unsigned int cpu);
extern int cpupower_set_epp(unsigned int cpu, char *epp);
extern int cpupower_set_amd_pstate_mode(char *mode);
extern int cpupower_set_turbo_boost(int turbo_boost);
/* Read/Write msr ****************************/
/* PCI stuff ****************************/
@@ -173,6 +177,13 @@ static inline int cpupower_intel_get_perf_bias(unsigned int cpu)
static inline unsigned long long msr_intel_get_turbo_ratio(unsigned int cpu)
{ return 0; };
static inline int cpupower_set_epp(unsigned int cpu, char *epp)
{ return -1; };
static inline int cpupower_set_amd_pstate_mode(char *mode)
{ return -1; };
static inline int cpupower_set_turbo_boost(int turbo_boost)
{ return -1; };
/* Read/Write msr ****************************/
static inline int cpufreq_has_boost_support(unsigned int cpu, int *support,


@@ -87,6 +87,61 @@ int cpupower_intel_set_perf_bias(unsigned int cpu, unsigned int val)
return 0;
}
int cpupower_set_epp(unsigned int cpu, char *epp)
{
char path[SYSFS_PATH_MAX];
char linebuf[30] = {};
snprintf(path, sizeof(path),
PATH_TO_CPU "cpu%u/cpufreq/energy_performance_preference", cpu);
if (!is_valid_path(path))
return -1;
snprintf(linebuf, sizeof(linebuf), "%s", epp);
if (cpupower_write_sysfs(path, linebuf, 30) <= 0)
return -1;
return 0;
}
int cpupower_set_amd_pstate_mode(char *mode)
{
char path[SYSFS_PATH_MAX];
char linebuf[20] = {};
snprintf(path, sizeof(path), PATH_TO_CPU "amd_pstate/status");
if (!is_valid_path(path))
return -1;
snprintf(linebuf, sizeof(linebuf), "%s\n", mode);
if (cpupower_write_sysfs(path, linebuf, 20) <= 0)
return -1;
return 0;
}
int cpupower_set_turbo_boost(int turbo_boost)
{
char path[SYSFS_PATH_MAX];
char linebuf[2] = {};
snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost");
if (!is_valid_path(path))
return -1;
snprintf(linebuf, sizeof(linebuf), "%d", turbo_boost);
if (cpupower_write_sysfs(path, linebuf, 2) <= 0)
return -1;
return 0;
}
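
With these helpers wired into 'cpupower set', the new options map directly onto the sysfs files written above; for example (root required, and 'passive' is one of the mode strings the amd_pstate driver accepts):

    # cpupower set --turbo-boost 0
    # cpupower set --amd-pstate-mode passive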
bool cpupower_amd_pstate_enabled(void)
{
char *driver = cpufreq_get_driver(0);
@@ -95,7 +150,7 @@ bool cpupower_amd_pstate_enabled(void)
if (!driver)
return ret;
if (!strcmp(driver, "amd-pstate"))
if (!strncmp(driver, "amd", 3))
ret = true;
cpufreq_put_driver(driver);