Merge branch 'pm-cpufreq'

Merge cpufreq updates for 6.4-rc1:

 - Fix the frequency unit used in the cpufreq_verify_current_freq() checks
   (Sanjay Chandrashekara).

 - Make mode_state_machine in amd-pstate static (Tom Rix).

 - Make the cpufreq core require drivers with target_index() to set
   freq_table (Viresh Kumar).

 - Fix typo in the ARM_BRCMSTB_AVS_CPUFREQ Kconfig entry (Jingyu Wang).

 - Use of_property_read_bool() for boolean properties in the pmac32
   cpufreq driver (Rob Herring).

 - Make the cpufreq sysfs interface return proper error codes on
   obviously invalid input (qinyu).

 - Add guided autonomous mode support to the AMD P-state driver (Wyes
   Karny).

 - Make the Intel P-state driver enable HWP IO boost on all server
   platforms (Srinivas Pandruvada).

 - Add opp and bandwidth support to tegra194 cpufreq driver (Sumit
   Gupta).

 - Use of_property_present() for testing DT property presence (Rob
   Herring).

 - Remove MODULE_LICENSE in non-modules (Nick Alcock).

 - Add SM7225 to cpufreq-dt-platdev blocklist (Luca Weiss).

 - Optimizations and fixes for qcom-cpufreq-hw driver (Krzysztof
   Kozlowski, Konrad Dybcio, and Bjorn Andersson).

 - DT binding updates for qcom-cpufreq-hw driver (Konrad Dybcio and
   Bartosz Golaszewski).

 - Updates and fixes for mediatek driver (Jia-Wei Chang and
   AngeloGioacchino Del Regno).

* pm-cpufreq: (29 commits)
  cpufreq: use correct unit when verifying cur freq
  cpufreq: tegra194: add OPP support and set bandwidth
  cpufreq: amd-pstate: Make variable mode_state_machine static
  cpufreq: drivers with target_index() must set freq_table
  cpufreq: qcom-cpufreq-hw: Revert adding cpufreq qos
  dt-bindings: cpufreq: cpufreq-qcom-hw: Add QCM2290
  dt-bindings: cpufreq: cpufreq-qcom-hw: Sanitize data per compatible
  dt-bindings: cpufreq: cpufreq-qcom-hw: Allow just 1 frequency domain
  cpufreq: Add SM7225 to cpufreq-dt-platdev blocklist
  cpufreq: qcom-cpufreq-hw: fix double IO unmap and resource release on exit
  cpufreq: mediatek: Raise proc and sram max voltage for MT7622/7623
  cpufreq: mediatek: raise proc/sram max voltage for MT8516
  cpufreq: mediatek: fix KP caused by handler usage after regulator_put/clk_put
  cpufreq: mediatek: fix passing zero to 'PTR_ERR'
  cpufreq: pmac32: Use of_property_read_bool() for boolean properties
  cpufreq: Fix typo in the ARM_BRCMSTB_AVS_CPUFREQ Kconfig entry
  cpufreq: warn about invalid vals to scaling_max/min_freq interfaces
  Documentation: cpufreq: amd-pstate: Update amd_pstate status sysfs for guided
  cpufreq: amd-pstate: Add guided mode control support via sysfs
  cpufreq: amd-pstate: Add guided autonomous mode
  ...
Merged by Rafael J. Wysocki on 2023-04-24 18:24:47 +02:00 as commit 640324e3e6; 22 changed files with 653 additions and 208 deletions.

@ -339,6 +339,29 @@
This mode requires kvm-amd.avic=1.
(Default when IOMMU HW support is present.)
amd_pstate= [X86]
disable
Do not enable amd_pstate as the default
scaling driver for the supported processors
passive
Use amd_pstate in passive mode as the scaling driver.
In this mode autonomous selection is disabled. The driver
requests a desired performance level and the platform tries
to match it, as long as the request can be satisfied within
the guaranteed performance level.
active
Use the amd_pstate_epp driver instance as the scaling driver.
The driver provides a hint to the CPPC firmware indicating
whether software wants to bias toward performance (0x0) or
energy efficiency (0xff). The CPPC power algorithm then
evaluates the runtime workload and adjusts the core
frequencies in real time.
guided
Activate guided autonomous mode. The driver requests
minimum and maximum performance levels and the platform
autonomously selects a performance level in this range
that is appropriate for the current workload.
amijoy.map= [HW,JOY] Amiga joystick support
Map of devices attached to JOY0DAT and JOY1DAT
Format: <a>,<b>
@ -7059,20 +7082,3 @@
xmon commands.
off xmon is disabled.
amd_pstate= [X86]
disable
Do not enable amd_pstate as the default
scaling driver for the supported processors
passive
Use amd_pstate as a scaling driver, driver requests a
desired performance on this abstract scale and the power
management firmware translates the requests into actual
hardware states (core frequency, data fabric and memory
clocks etc.)
active
Use amd_pstate_epp driver instance as the scaling driver,
driver provides a hint to the hardware if software wants
to bias toward performance (0x0) or energy efficiency (0xff)
to the CPPC firmware. then CPPC power algorithm will
calculate the runtime workload and adjust the realtime cores
frequency.


@ -303,13 +303,18 @@ efficiency frequency management method on AMD processors.
AMD Pstate Driver Operation Modes
=================================
``amd_pstate`` CPPC has two operation modes: CPPC Autonomous(active) mode and
CPPC non-autonomous(passive) mode.
active mode and passive mode can be chosen by different kernel parameters.
When in Autonomous mode, CPPC ignores requests done in the Desired Performance
Target register and takes into account only the values set to the Minimum requested
performance, Maximum requested performance, and Energy Performance Preference
registers. When Autonomous is disabled, it only considers the Desired Performance Target.
``amd_pstate`` CPPC has three operation modes: autonomous (active) mode,
non-autonomous (passive) mode and guided autonomous (guided) mode.
The active, passive or guided mode can be selected with the corresponding kernel parameter.
- In autonomous mode, the platform ignores the desired performance level request
and takes into account only the values set in the minimum, maximum and energy
performance preference registers.
- In non-autonomous mode, the platform gets the desired performance level
from the OS directly through the Desired Performance Register.
- In guided-autonomous mode, the platform sets the operating performance level
autonomously according to the current workload, within the limits set by the
OS through the min and max performance registers (see the sketch after this list).
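As a rough illustration of how these modes translate into CPPC register programming, below is a minimal, hedged kernel-side sketch built on the cppc_set_perf() and cppc_set_auto_sel() helpers extended in this series; the wrapper function itself is hypothetical and elides error handling, but the register usage mirrors the driver's behavior.

/* Hypothetical sketch: how passive vs. guided mode drive the CPPC
 * registers. cppc_set_perf() and cppc_set_auto_sel() are the real
 * helpers from this series; the wrapper below is illustrative only. */
static int example_request_perf(int cpu, int mode, u32 des, u32 min, u32 max)
{
	struct cppc_perf_ctrls ctrls = {};

	switch (mode) {
	case AMD_PSTATE_PASSIVE:
		/* Autonomous selection off: only desired_perf matters. */
		cppc_set_auto_sel(cpu, false);
		ctrls.desired_perf = des;
		break;
	case AMD_PSTATE_GUIDED:
		/* Autonomous selection on: bound the platform with min/max
		 * and leave desired_perf at zero so it is skipped. */
		cppc_set_auto_sel(cpu, true);
		ctrls.min_perf = min;
		ctrls.max_perf = max;
		break;
	default:
		return -EINVAL;
	}

	return cppc_set_perf(cpu, &ctrls);
}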
Active Mode
------------
@ -338,6 +343,15 @@ to the Performance Reduction Tolerance register. Above the nominal performance l
processor must provide at least nominal performance requested and go higher if current
operating conditions allow.
Guided Mode
-----------
``amd_pstate=guided``
If ``amd_pstate=guided`` is passed on the kernel command line, this mode
is activated. In this mode, the driver requests minimum and maximum performance
levels and the platform autonomously selects a performance level in this range
that is appropriate for the current workload.
User Space Interface in ``sysfs`` - General
===========================================
@ -358,6 +372,9 @@ control its functionality at the system level. They are located in the
"passive"
The driver is functional and in the ``passive mode``
"guided"
The driver is functional and in the ``guided mode``
"disable"
The driver is unregistered and not functional now.
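For illustration, here is a small, hedged userspace sketch that reads this attribute and requests a switch to guided mode; the sysfs path is the one documented for amd-pstate, and error handling is kept minimal.

#include <stdio.h>

/* Hedged userspace sketch: query and switch the amd_pstate driver mode
 * through the status attribute described above (requires privileges). */
#define STATUS_PATH "/sys/devices/system/cpu/amd_pstate/status"

int main(void)
{
	char mode[16] = "";
	FILE *f = fopen(STATUS_PATH, "r+");

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(mode, sizeof(mode), f))
		printf("current mode: %s", mode);
	rewind(f);
	/* The kernel validates the string and walks its mode state
	 * machine (see amd_pstate_update_status() in this series). */
	if (fputs("guided", f) == EOF || fflush(f) != 0)
		perror("write");
	fclose(f);
	return 0;
}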


@ -20,12 +20,20 @@ properties:
oneOf:
- description: v1 of CPUFREQ HW
items:
- enum:
- qcom,qcm2290-cpufreq-hw
- qcom,sc7180-cpufreq-hw
- qcom,sdm845-cpufreq-hw
- qcom,sm6115-cpufreq-hw
- qcom,sm6350-cpufreq-hw
- qcom,sm8150-cpufreq-hw
- const: qcom,cpufreq-hw
- description: v2 of CPUFREQ HW (EPSS)
items:
- enum:
- qcom,qdu1000-cpufreq-epss
- qcom,sa8775p-cpufreq-epss
- qcom,sc7280-cpufreq-epss
- qcom,sc8280xp-cpufreq-epss
- qcom,sm6375-cpufreq-epss
@ -36,14 +44,14 @@ properties:
- const: qcom,cpufreq-epss
reg:
minItems: 2
minItems: 1
items:
- description: Frequency domain 0 register region
- description: Frequency domain 1 register region
- description: Frequency domain 2 register region
reg-names:
minItems: 2
minItems: 1
items:
- const: freq-domain0
- const: freq-domain1
@ -85,6 +93,111 @@ required:
additionalProperties: false
allOf:
- if:
properties:
compatible:
contains:
enum:
- qcom,qcm2290-cpufreq-hw
then:
properties:
reg:
minItems: 1
maxItems: 1
reg-names:
minItems: 1
maxItems: 1
interrupts:
minItems: 1
maxItems: 1
interrupt-names:
minItems: 1
- if:
properties:
compatible:
contains:
enum:
- qcom,qdu1000-cpufreq-epss
- qcom,sc7180-cpufreq-hw
- qcom,sc8280xp-cpufreq-epss
- qcom,sdm845-cpufreq-hw
- qcom,sm6115-cpufreq-hw
- qcom,sm6350-cpufreq-hw
- qcom,sm6375-cpufreq-epss
then:
properties:
reg:
minItems: 2
maxItems: 2
reg-names:
minItems: 2
maxItems: 2
interrupts:
minItems: 2
maxItems: 2
interrupt-names:
minItems: 2
- if:
properties:
compatible:
contains:
enum:
- qcom,sc7280-cpufreq-epss
- qcom,sm8250-cpufreq-epss
- qcom,sm8350-cpufreq-epss
- qcom,sm8450-cpufreq-epss
- qcom,sm8550-cpufreq-epss
then:
properties:
reg:
minItems: 3
maxItems: 3
reg-names:
minItems: 3
maxItems: 3
interrupts:
minItems: 3
maxItems: 3
interrupt-names:
minItems: 3
- if:
properties:
compatible:
contains:
enum:
- qcom,sm8150-cpufreq-hw
then:
properties:
reg:
minItems: 3
maxItems: 3
reg-names:
minItems: 3
maxItems: 3
# On some SoCs the Prime core shares the LMH irq with Big cores
interrupts:
minItems: 2
maxItems: 2
interrupt-names:
minItems: 2
examples:
- |
#include <dt-bindings/clock/qcom,gcc-sdm845.h>
@ -235,7 +348,7 @@ examples:
#size-cells = <1>;
cpufreq@17d43000 {
compatible = "qcom,cpufreq-hw";
compatible = "qcom,sdm845-cpufreq-hw", "qcom,cpufreq-hw";
reg = <0x17d43000 0x1400>, <0x17d45800 0x1400>;
reg-names = "freq-domain0", "freq-domain1";


@ -1433,6 +1433,102 @@ int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
}
EXPORT_SYMBOL_GPL(cppc_set_epp_perf);
/**
* cppc_get_auto_sel_caps - Read autonomous selection register.
* @cpunum : CPU from which to read the register.
* @perf_caps : struct in which the autonomous selection register value is returned.
*/
int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps)
{
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
struct cpc_register_resource *auto_sel_reg;
u64 auto_sel;
if (!cpc_desc) {
pr_debug("No CPC descriptor for CPU:%d\n", cpunum);
return -ENODEV;
}
auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
if (!CPC_SUPPORTED(auto_sel_reg))
pr_warn_once("Autonomous mode is not supported!\n");
if (CPC_IN_PCC(auto_sel_reg)) {
int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum);
struct cppc_pcc_data *pcc_ss_data = NULL;
int ret = 0;
if (pcc_ss_id < 0)
return -ENODEV;
pcc_ss_data = pcc_data[pcc_ss_id];
down_write(&pcc_ss_data->pcc_lock);
if (send_pcc_cmd(pcc_ss_id, CMD_READ) >= 0) {
cpc_read(cpunum, auto_sel_reg, &auto_sel);
perf_caps->auto_sel = (bool)auto_sel;
} else {
ret = -EIO;
}
up_write(&pcc_ss_data->pcc_lock);
return ret;
}
return 0;
}
EXPORT_SYMBOL_GPL(cppc_get_auto_sel_caps);
/**
* cppc_set_auto_sel - Write autonomous selection register.
* @cpu : CPU to which to write register.
* @enable : the desired value of the autonomous selection register to be written.
*/
int cppc_set_auto_sel(int cpu, bool enable)
{
int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
struct cpc_register_resource *auto_sel_reg;
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
struct cppc_pcc_data *pcc_ss_data = NULL;
int ret = -EINVAL;
if (!cpc_desc) {
pr_debug("No CPC descriptor for CPU:%d\n", cpu);
return -ENODEV;
}
auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
if (CPC_IN_PCC(auto_sel_reg)) {
if (pcc_ss_id < 0) {
pr_debug("Invalid pcc_ss_id\n");
return -ENODEV;
}
if (CPC_SUPPORTED(auto_sel_reg)) {
ret = cpc_write(cpu, auto_sel_reg, enable);
if (ret)
return ret;
}
pcc_ss_data = pcc_data[pcc_ss_id];
down_write(&pcc_ss_data->pcc_lock);
/* after writing CPC, transfer the ownership of PCC to platform */
ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
up_write(&pcc_ss_data->pcc_lock);
} else {
ret = -ENOTSUPP;
pr_debug("_CPC in PCC is not supported\n");
}
return ret;
}
EXPORT_SYMBOL_GPL(cppc_set_auto_sel);
/**
* cppc_set_enable - Set to enable CPPC on the processor by writing the
* Continuous Performance Control package EnableRegister field.
@ -1488,7 +1584,7 @@ EXPORT_SYMBOL_GPL(cppc_set_enable);
int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
{
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
struct cpc_register_resource *desired_reg;
struct cpc_register_resource *desired_reg, *min_perf_reg, *max_perf_reg;
int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
struct cppc_pcc_data *pcc_ss_data = NULL;
int ret = 0;
@ -1499,6 +1595,8 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
}
desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
min_perf_reg = &cpc_desc->cpc_regs[MIN_PERF];
max_perf_reg = &cpc_desc->cpc_regs[MAX_PERF];
/*
* This is Phase-I where we want to write to CPC registers
@ -1507,7 +1605,7 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
* Since read_lock can be acquired by multiple CPUs simultaneously we
* achieve that goal here
*/
if (CPC_IN_PCC(desired_reg)) {
if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) {
if (pcc_ss_id < 0) {
pr_debug("Invalid pcc_ss_id\n");
return -ENODEV;
@ -1530,13 +1628,19 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
cpc_desc->write_cmd_status = 0;
}
/*
* Skip writing MIN/MAX until Linux knows how to come up with
* useful values.
*/
cpc_write(cpu, desired_reg, perf_ctrls->desired_perf);
if (CPC_IN_PCC(desired_reg))
/*
* Only write min_perf and max_perf if they are non-zero. Some drivers
* pass zero for min and max perf; they do not mean to set the value to
* zero, they simply do not want those registers written.
*/
if (perf_ctrls->min_perf)
cpc_write(cpu, min_perf_reg, perf_ctrls->min_perf);
if (perf_ctrls->max_perf)
cpc_write(cpu, max_perf_reg, perf_ctrls->max_perf);
if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg))
up_read(&pcc_ss_data->pcc_lock); /* END Phase-I */
/*
* This is Phase-II where we transfer the ownership of PCC to Platform
@ -1584,7 +1688,7 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
* case during a CMD_READ and if there are pending writes it delivers
* the write command before servicing the read command
*/
if (CPC_IN_PCC(desired_reg)) {
if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) {
if (down_write_trylock(&pcc_ss_data->pcc_lock)) {/* BEGIN Phase-II */
/* Update only if there are pending write commands */
if (pcc_ss_data->pending_pcc_write_cmd)


@ -95,7 +95,7 @@ config ARM_BRCMSTB_AVS_CPUFREQ
help
Some Broadcom STB SoCs use a co-processor running proprietary firmware
("AVS") to handle voltage and frequency scaling. This driver provides
a standard CPUfreq interface to to the firmware.
a standard CPUfreq interface to the firmware.
Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS.


@ -106,6 +106,8 @@ static unsigned int epp_values[] = {
[EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE,
};
typedef int (*cppc_mode_transition_fn)(int);
static inline int get_mode_idx_from_str(const char *str, size_t size)
{
int i;
@ -308,7 +310,22 @@ static int cppc_init_perf(struct amd_cpudata *cpudata)
cppc_perf.lowest_nonlinear_perf);
WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf);
return 0;
if (cppc_state == AMD_PSTATE_ACTIVE)
return 0;
ret = cppc_get_auto_sel_caps(cpudata->cpu, &cppc_perf);
if (ret) {
pr_warn("failed to get auto_sel, ret: %d\n", ret);
return 0;
}
ret = cppc_set_auto_sel(cpudata->cpu,
(cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1);
if (ret)
pr_warn("failed to set auto_sel, ret: %d\n", ret);
return ret;
}
DEFINE_STATIC_CALL(amd_pstate_init_perf, pstate_init_perf);
@ -385,12 +402,18 @@ static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
}
static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
u32 des_perf, u32 max_perf, bool fast_switch)
u32 des_perf, u32 max_perf, bool fast_switch, int gov_flags)
{
u64 prev = READ_ONCE(cpudata->cppc_req_cached);
u64 value = prev;
des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
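/*
 * In guided mode with a dynamic-switching governor, promote the desired
 * level to the minimum and clear desired_perf, so the platform picks a
 * level in [min_perf, max_perf] autonomously.
 */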
if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) {
min_perf = des_perf;
des_perf = 0;
}
value &= ~AMD_CPPC_MIN_PERF(~0L);
value |= AMD_CPPC_MIN_PERF(min_perf);
@ -445,7 +468,7 @@ static int amd_pstate_target(struct cpufreq_policy *policy,
cpufreq_freq_transition_begin(policy, &freqs);
amd_pstate_update(cpudata, min_perf, des_perf,
max_perf, false);
max_perf, false, policy->governor->flags);
cpufreq_freq_transition_end(policy, &freqs, false);
return 0;
@ -479,7 +502,8 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
if (max_perf < min_perf)
max_perf = min_perf;
amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true);
amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true,
policy->governor->flags);
cpufreq_cpu_put(policy);
}
@ -816,6 +840,98 @@ static ssize_t show_energy_performance_preference(
return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]);
}
static void amd_pstate_driver_cleanup(void)
{
amd_pstate_enable(false);
cppc_state = AMD_PSTATE_DISABLE;
current_pstate_driver = NULL;
}
static int amd_pstate_register_driver(int mode)
{
int ret;
if (mode == AMD_PSTATE_PASSIVE || mode == AMD_PSTATE_GUIDED)
current_pstate_driver = &amd_pstate_driver;
else if (mode == AMD_PSTATE_ACTIVE)
current_pstate_driver = &amd_pstate_epp_driver;
else
return -EINVAL;
cppc_state = mode;
ret = cpufreq_register_driver(current_pstate_driver);
if (ret) {
amd_pstate_driver_cleanup();
return ret;
}
return 0;
}
static int amd_pstate_unregister_driver(int dummy)
{
cpufreq_unregister_driver(current_pstate_driver);
amd_pstate_driver_cleanup();
return 0;
}
static int amd_pstate_change_mode_without_dvr_change(int mode)
{
int cpu = 0;
cppc_state = mode;
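/*
 * MSR-based systems and the active (EPP) mode need no per-CPU auto_sel
 * update; only shared-memory systems switching between passive and
 * guided must rewrite the register on each CPU.
 */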
if (boot_cpu_has(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE)
return 0;
for_each_present_cpu(cpu) {
cppc_set_auto_sel(cpu, (cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1);
}
return 0;
}
static int amd_pstate_change_driver_mode(int mode)
{
int ret;
ret = amd_pstate_unregister_driver(0);
if (ret)
return ret;
ret = amd_pstate_register_driver(mode);
if (ret)
return ret;
return 0;
}
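/*
 * Rows index the current cppc_state, columns the requested mode; a NULL
 * entry marks a no-op transition. amd_pstate_update_status() simply
 * indexes this table and invokes the handler, replacing the old
 * open-coded switch statement.
 */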
static cppc_mode_transition_fn mode_state_machine[AMD_PSTATE_MAX][AMD_PSTATE_MAX] = {
[AMD_PSTATE_DISABLE] = {
[AMD_PSTATE_DISABLE] = NULL,
[AMD_PSTATE_PASSIVE] = amd_pstate_register_driver,
[AMD_PSTATE_ACTIVE] = amd_pstate_register_driver,
[AMD_PSTATE_GUIDED] = amd_pstate_register_driver,
},
[AMD_PSTATE_PASSIVE] = {
[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
[AMD_PSTATE_PASSIVE] = NULL,
[AMD_PSTATE_ACTIVE] = amd_pstate_change_driver_mode,
[AMD_PSTATE_GUIDED] = amd_pstate_change_mode_without_dvr_change,
},
[AMD_PSTATE_ACTIVE] = {
[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
[AMD_PSTATE_PASSIVE] = amd_pstate_change_driver_mode,
[AMD_PSTATE_ACTIVE] = NULL,
[AMD_PSTATE_GUIDED] = amd_pstate_change_driver_mode,
},
[AMD_PSTATE_GUIDED] = {
[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
[AMD_PSTATE_PASSIVE] = amd_pstate_change_mode_without_dvr_change,
[AMD_PSTATE_ACTIVE] = amd_pstate_change_driver_mode,
[AMD_PSTATE_GUIDED] = NULL,
},
};
static ssize_t amd_pstate_show_status(char *buf)
{
if (!current_pstate_driver)
@ -824,55 +940,22 @@ static ssize_t amd_pstate_show_status(char *buf)
return sysfs_emit(buf, "%s\n", amd_pstate_mode_string[cppc_state]);
}
static void amd_pstate_driver_cleanup(void)
{
current_pstate_driver = NULL;
}
static int amd_pstate_update_status(const char *buf, size_t size)
{
int ret = 0;
int mode_idx;
if (size > 7 || size < 6)
if (size > strlen("passive") || size < strlen("active"))
return -EINVAL;
mode_idx = get_mode_idx_from_str(buf, size);
switch(mode_idx) {
case AMD_PSTATE_DISABLE:
if (current_pstate_driver) {
cpufreq_unregister_driver(current_pstate_driver);
amd_pstate_driver_cleanup();
}
break;
case AMD_PSTATE_PASSIVE:
if (current_pstate_driver) {
if (current_pstate_driver == &amd_pstate_driver)
return 0;
cpufreq_unregister_driver(current_pstate_driver);
}
if (mode_idx < 0 || mode_idx >= AMD_PSTATE_MAX)
return -EINVAL;
current_pstate_driver = &amd_pstate_driver;
cppc_state = AMD_PSTATE_PASSIVE;
ret = cpufreq_register_driver(current_pstate_driver);
break;
case AMD_PSTATE_ACTIVE:
if (current_pstate_driver) {
if (current_pstate_driver == &amd_pstate_epp_driver)
return 0;
cpufreq_unregister_driver(current_pstate_driver);
}
if (mode_state_machine[cppc_state][mode_idx])
return mode_state_machine[cppc_state][mode_idx](mode_idx);
current_pstate_driver = &amd_pstate_epp_driver;
cppc_state = AMD_PSTATE_ACTIVE;
ret = cpufreq_register_driver(current_pstate_driver);
break;
default:
ret = -EINVAL;
break;
}
return ret;
return 0;
}
static ssize_t show_status(struct kobject *kobj,
@ -1277,7 +1360,7 @@ static int __init amd_pstate_init(void)
/* capability check */
if (boot_cpu_has(X86_FEATURE_CPPC)) {
pr_debug("AMD CPPC MSR based functionality is supported\n");
if (cppc_state == AMD_PSTATE_PASSIVE)
if (cppc_state != AMD_PSTATE_ACTIVE)
current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
} else {
pr_debug("AMD CPPC shared memory based functionality is supported\n");
@ -1339,7 +1422,7 @@ static int __init amd_pstate_param(char *str)
if (cppc_state == AMD_PSTATE_ACTIVE)
current_pstate_driver = &amd_pstate_epp_driver;
if (cppc_state == AMD_PSTATE_PASSIVE)
if (cppc_state == AMD_PSTATE_PASSIVE || cppc_state == AMD_PSTATE_GUIDED)
current_pstate_driver = &amd_pstate_driver;
return 0;


@ -152,6 +152,7 @@ static const struct of_device_id blocklist[] __initconst = {
{ .compatible = "qcom,sm6115", },
{ .compatible = "qcom,sm6350", },
{ .compatible = "qcom,sm6375", },
{ .compatible = "qcom,sm7225", },
{ .compatible = "qcom,sm8150", },
{ .compatible = "qcom,sm8250", },
{ .compatible = "qcom,sm8350", },
@ -179,7 +180,7 @@ static bool __init cpu0_node_has_opp_v2_prop(void)
struct device_node *np = of_cpu_device_node_get(0);
bool ret = false;
if (of_get_property(np, "operating-points-v2", NULL))
if (of_property_present(np, "operating-points-v2"))
ret = true;
of_node_put(np);


@ -73,6 +73,11 @@ static inline bool has_target(void)
return cpufreq_driver->target_index || cpufreq_driver->target;
}
bool has_target_index(void)
{
return !!cpufreq_driver->target_index;
}
/* internal prototypes */
static unsigned int __cpufreq_get(struct cpufreq_policy *policy);
static int cpufreq_init_governor(struct cpufreq_policy *policy);
@ -725,9 +730,9 @@ static ssize_t store_##file_name \
unsigned long val; \
int ret; \
\
ret = sscanf(buf, "%lu", &val); \
if (ret != 1) \
return -EINVAL; \
ret = kstrtoul(buf, 0, &val); \
if (ret) \
return ret; \
\
ret = freq_qos_update_request(policy->object##_freq_req, val);\
return ret >= 0 ? count : ret; \
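The switch from sscanf() to kstrtoul() is what makes the interface strict: sscanf("%lu") accepts a numeric prefix and silently ignores trailing garbage, while kstrtoul() rejects the whole string. A hedged userspace analogue using strtoul() illustrates the semantics that kstrtoul() enforces in the kernel:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reject empty input and trailing garbage, as kstrtoul() does
 * (a single trailing newline is tolerated). */
static int strict_parse(const char *s, unsigned long *val)
{
	char *end;

	*val = strtoul(s, &end, 0);
	if (end == s || (*end != '\0' && strcmp(end, "\n") != 0))
		return -1;
	return 0;
}

int main(void)
{
	const char *inputs[] = { "2000000", "2000000MHz", "banana" };
	unsigned long v;

	for (int i = 0; i < 3; i++) {
		printf("sscanf(\"%s\") -> %d field(s) matched\n",
		       inputs[i], sscanf(inputs[i], "%lu", &v));
		printf("strict(\"%s\") -> %s\n", inputs[i],
		       strict_parse(inputs[i], &v) ? "error" : "ok");
	}
	return 0;
}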
@ -1727,7 +1732,7 @@ static unsigned int cpufreq_verify_current_freq(struct cpufreq_policy *policy, b
* MHz. In such cases it is better to avoid getting into
* unnecessary frequency updates.
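* Note that policy->cur and new_freq are both expressed in kHz, so a
* 1 MHz tolerance is KHZ_PER_MHZ (1000), not HZ_PER_MHZ.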
*/
if (abs(policy->cur - new_freq) < HZ_PER_MHZ)
if (abs(policy->cur - new_freq) < KHZ_PER_MHZ)
return policy->cur;
cpufreq_out_of_sync(policy, new_freq);


@ -355,8 +355,13 @@ int cpufreq_table_validate_and_sort(struct cpufreq_policy *policy)
{
int ret;
if (!policy->freq_table)
if (!policy->freq_table) {
/* Freq table must be passed by drivers with target_index() */
if (has_target_index())
return -EINVAL;
return 0;
}
ret = cpufreq_frequency_table_cpuinfo(policy, policy->freq_table);
if (ret)
@ -367,4 +372,3 @@ int cpufreq_table_validate_and_sort(struct cpufreq_policy *policy)
MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>");
MODULE_DESCRIPTION("CPUfreq frequency table helpers");
MODULE_LICENSE("GPL");
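The new requirement is straightforward to satisfy: a driver implementing target_index() assigns policy->freq_table in its init() callback before the core validates the policy. A minimal, hypothetical sketch (names and table contents are illustrative):

/* Hypothetical driver satisfying the rule that target_index()
 * implies a populated policy->freq_table. */
static struct cpufreq_frequency_table demo_freq_table[] = {
	{ .frequency = 500000 },		/* kHz */
	{ .frequency = 1000000 },
	{ .frequency = CPUFREQ_TABLE_END },
};

static int demo_cpufreq_init(struct cpufreq_policy *policy)
{
	policy->freq_table = demo_freq_table;	/* mandatory with target_index() */
	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
	return 0;
}

static int demo_cpufreq_target_index(struct cpufreq_policy *policy,
				     unsigned int index)
{
	/* Program the hardware for demo_freq_table[index] here. */
	return 0;
}

static struct cpufreq_driver demo_cpufreq_driver = {
	.name		= "demo",
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= demo_cpufreq_target_index,
	.init		= demo_cpufreq_init,
};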


@ -89,7 +89,7 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
cpu_dev = get_cpu_device(0);
if (!of_find_property(cpu_dev->of_node, "cpu-supply", NULL))
if (!of_property_present(cpu_dev->of_node, "cpu-supply"))
return -ENODEV;
if (of_machine_is_compatible("fsl,imx7ulp")) {


@ -222,7 +222,7 @@ static int imx6q_opp_check_speed_grading(struct device *dev)
u32 val;
int ret;
if (of_find_property(dev->of_node, "nvmem-cells", NULL)) {
if (of_property_present(dev->of_node, "nvmem-cells")) {
ret = nvmem_cell_read_u32(dev, "speed_grade", &val);
if (ret)
return ret;
@ -279,7 +279,7 @@ static int imx6ul_opp_check_speed_grading(struct device *dev)
u32 val;
int ret = 0;
if (of_find_property(dev->of_node, "nvmem-cells", NULL)) {
if (of_property_present(dev->of_node, "nvmem-cells")) {
ret = nvmem_cell_read_u32(dev, "speed_grade", &val);
if (ret)
return ret;


@ -2384,12 +2384,6 @@ static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = {
{}
};
static const struct x86_cpu_id intel_pstate_hwp_boost_ids[] = {
X86_MATCH(SKYLAKE_X, core_funcs),
X86_MATCH(SKYLAKE, core_funcs),
{}
};
static int intel_pstate_init_cpu(unsigned int cpunum)
{
struct cpudata *cpu;
@ -2408,12 +2402,9 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
cpu->epp_default = -EINVAL;
if (hwp_active) {
const struct x86_cpu_id *id;
intel_pstate_hwp_enable(cpu);
id = x86_match_cpu(intel_pstate_hwp_boost_ids);
if (id && intel_pstate_acpi_pm_profile_server())
if (intel_pstate_acpi_pm_profile_server())
hwp_boost = true;
}
} else if (hwp_active) {


@ -373,13 +373,13 @@ static struct device *of_get_cci(struct device *cpu_dev)
struct platform_device *pdev;
np = of_parse_phandle(cpu_dev->of_node, "mediatek,cci", 0);
if (IS_ERR_OR_NULL(np))
return NULL;
if (!np)
return ERR_PTR(-ENODEV);
pdev = of_find_device_by_node(np);
of_node_put(np);
if (IS_ERR_OR_NULL(pdev))
return NULL;
if (!pdev)
return ERR_PTR(-ENODEV);
return &pdev->dev;
}
@ -401,7 +401,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
info->ccifreq_bound = false;
if (info->soc_data->ccifreq_supported) {
info->cci_dev = of_get_cci(info->cpu_dev);
if (IS_ERR_OR_NULL(info->cci_dev)) {
if (IS_ERR(info->cci_dev)) {
ret = PTR_ERR(info->cci_dev);
dev_err(cpu_dev, "cpu%d: failed to get cci device\n", cpu);
return -ENODEV;
@ -420,7 +420,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
ret = PTR_ERR(info->inter_clk);
dev_err_probe(cpu_dev, ret,
"cpu%d: failed to get intermediate clk\n", cpu);
goto out_free_resources;
goto out_free_mux_clock;
}
info->proc_reg = regulator_get_optional(cpu_dev, "proc");
@ -428,13 +428,13 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
ret = PTR_ERR(info->proc_reg);
dev_err_probe(cpu_dev, ret,
"cpu%d: failed to get proc regulator\n", cpu);
goto out_free_resources;
goto out_free_inter_clock;
}
ret = regulator_enable(info->proc_reg);
if (ret) {
dev_warn(cpu_dev, "cpu%d: failed to enable vproc\n", cpu);
goto out_free_resources;
goto out_free_proc_reg;
}
/* Both presence and absence of sram regulator are valid cases. */
@ -442,14 +442,14 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
if (IS_ERR(info->sram_reg)) {
ret = PTR_ERR(info->sram_reg);
if (ret == -EPROBE_DEFER)
goto out_free_resources;
goto out_disable_proc_reg;
info->sram_reg = NULL;
} else {
ret = regulator_enable(info->sram_reg);
if (ret) {
dev_warn(cpu_dev, "cpu%d: failed to enable vsram\n", cpu);
goto out_free_resources;
goto out_free_sram_reg;
}
}
@ -458,13 +458,13 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
if (ret) {
dev_err(cpu_dev,
"cpu%d: failed to get OPP-sharing information\n", cpu);
goto out_free_resources;
goto out_disable_sram_reg;
}
ret = dev_pm_opp_of_cpumask_add_table(&info->cpus);
if (ret) {
dev_warn(cpu_dev, "cpu%d: no OPP table\n", cpu);
goto out_free_resources;
goto out_disable_sram_reg;
}
ret = clk_prepare_enable(info->cpu_clk);
@ -533,43 +533,41 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
out_free_opp_table:
dev_pm_opp_of_cpumask_remove_table(&info->cpus);
out_free_resources:
if (regulator_is_enabled(info->proc_reg))
regulator_disable(info->proc_reg);
if (info->sram_reg && regulator_is_enabled(info->sram_reg))
out_disable_sram_reg:
if (info->sram_reg)
regulator_disable(info->sram_reg);
if (!IS_ERR(info->proc_reg))
regulator_put(info->proc_reg);
if (!IS_ERR(info->sram_reg))
out_free_sram_reg:
if (info->sram_reg)
regulator_put(info->sram_reg);
if (!IS_ERR(info->cpu_clk))
clk_put(info->cpu_clk);
if (!IS_ERR(info->inter_clk))
clk_put(info->inter_clk);
out_disable_proc_reg:
regulator_disable(info->proc_reg);
out_free_proc_reg:
regulator_put(info->proc_reg);
out_free_inter_clock:
clk_put(info->inter_clk);
out_free_mux_clock:
clk_put(info->cpu_clk);
return ret;
}
static void mtk_cpu_dvfs_info_release(struct mtk_cpu_dvfs_info *info)
{
if (!IS_ERR(info->proc_reg)) {
regulator_disable(info->proc_reg);
regulator_put(info->proc_reg);
}
if (!IS_ERR(info->sram_reg)) {
regulator_disable(info->proc_reg);
regulator_put(info->proc_reg);
if (info->sram_reg) {
regulator_disable(info->sram_reg);
regulator_put(info->sram_reg);
}
if (!IS_ERR(info->cpu_clk)) {
clk_disable_unprepare(info->cpu_clk);
clk_put(info->cpu_clk);
}
if (!IS_ERR(info->inter_clk)) {
clk_disable_unprepare(info->inter_clk);
clk_put(info->inter_clk);
}
clk_disable_unprepare(info->cpu_clk);
clk_put(info->cpu_clk);
clk_disable_unprepare(info->inter_clk);
clk_put(info->inter_clk);
dev_pm_opp_of_cpumask_remove_table(&info->cpus);
dev_pm_opp_unregister_notifier(info->cpu_dev, &info->opp_nb);
}
@ -695,6 +693,15 @@ static const struct mtk_cpufreq_platform_data mt2701_platform_data = {
.ccifreq_supported = false,
};
static const struct mtk_cpufreq_platform_data mt7622_platform_data = {
.min_volt_shift = 100000,
.max_volt_shift = 200000,
.proc_max_volt = 1360000,
.sram_min_volt = 0,
.sram_max_volt = 1360000,
.ccifreq_supported = false,
};
static const struct mtk_cpufreq_platform_data mt8183_platform_data = {
.min_volt_shift = 100000,
.max_volt_shift = 200000,
@ -713,20 +720,29 @@ static const struct mtk_cpufreq_platform_data mt8186_platform_data = {
.ccifreq_supported = true,
};
static const struct mtk_cpufreq_platform_data mt8516_platform_data = {
.min_volt_shift = 100000,
.max_volt_shift = 200000,
.proc_max_volt = 1310000,
.sram_min_volt = 0,
.sram_max_volt = 1310000,
.ccifreq_supported = false,
};
/* List of machines supported by this driver */
static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
{ .compatible = "mediatek,mt2701", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt2712", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt7622", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt7623", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt8167", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt7622", .data = &mt7622_platform_data },
{ .compatible = "mediatek,mt7623", .data = &mt7622_platform_data },
{ .compatible = "mediatek,mt8167", .data = &mt8516_platform_data },
{ .compatible = "mediatek,mt817x", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt8173", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt8176", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt8183", .data = &mt8183_platform_data },
{ .compatible = "mediatek,mt8186", .data = &mt8186_platform_data },
{ .compatible = "mediatek,mt8365", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt8516", .data = &mt2701_platform_data },
{ .compatible = "mediatek,mt8516", .data = &mt8516_platform_data },
{ }
};
MODULE_DEVICE_TABLE(of, mtk_cpufreq_machines);


@ -546,7 +546,7 @@ static int pmac_cpufreq_init_7447A(struct device_node *cpunode)
{
struct device_node *volt_gpio_np;
if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL)
if (!of_property_read_bool(cpunode, "dynamic-power-step"))
return 1;
volt_gpio_np = of_find_node_by_name(NULL, "cpu-vcore-select");
@ -576,7 +576,7 @@ static int pmac_cpufreq_init_750FX(struct device_node *cpunode)
u32 pvr;
const u32 *value;
if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL)
if (!of_property_read_bool(cpunode, "dynamic-power-step"))
return 1;
hi_freq = cur_freq;
@ -632,7 +632,7 @@ static int __init pmac_cpufreq_setup(void)
/* Check for 7447A based MacRISC3 */
if (of_machine_is_compatible("MacRISC3") &&
of_get_property(cpunode, "dynamic-power-step", NULL) &&
of_property_read_bool(cpunode, "dynamic-power-step") &&
PVR_VER(mfspr(SPRN_PVR)) == 0x8003) {
pmac_cpufreq_init_7447A(cpunode);


@ -14,7 +14,6 @@
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/pm_opp.h>
#include <linux/pm_qos.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/units.h>
@ -29,6 +28,8 @@
#define GT_IRQ_STATUS BIT(2)
#define MAX_FREQ_DOMAINS 3
struct qcom_cpufreq_soc_data {
u32 reg_enable;
u32 reg_domain_state;
@ -43,7 +44,6 @@ struct qcom_cpufreq_soc_data {
struct qcom_cpufreq_data {
void __iomem *base;
struct resource *res;
/*
* Mutex to synchronize between de-init sequence and re-starting LMh
@ -58,8 +58,6 @@ struct qcom_cpufreq_data {
struct clk_hw cpu_clk;
bool per_core_dcvs;
struct freq_qos_request throttle_freq_req;
};
static struct {
@ -349,8 +347,6 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
throttled_freq = freq_hz / HZ_PER_KHZ;
freq_qos_update_request(&data->throttle_freq_req, throttled_freq);
/* Update thermal pressure (the boost frequencies are accepted) */
arch_update_thermal_pressure(policy->related_cpus, throttled_freq);
@ -443,14 +439,6 @@ static int qcom_cpufreq_hw_lmh_init(struct cpufreq_policy *policy, int index)
if (data->throttle_irq < 0)
return data->throttle_irq;
ret = freq_qos_add_request(&policy->constraints,
&data->throttle_freq_req, FREQ_QOS_MAX,
FREQ_QOS_MAX_DEFAULT_VALUE);
if (ret < 0) {
dev_err(&pdev->dev, "Failed to add freq constraint (%d)\n", ret);
return ret;
}
data->cancel_throttle = false;
data->policy = policy;
@ -517,7 +505,6 @@ static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data)
if (data->throttle_irq <= 0)
return;
freq_qos_remove_request(&data->throttle_freq_req);
free_irq(data->throttle_irq, data);
}
@ -590,16 +577,12 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
{
struct device *cpu_dev = get_cpu_device(policy->cpu);
struct qcom_cpufreq_data *data = policy->driver_data;
struct resource *res = data->res;
void __iomem *base = data->base;
dev_pm_opp_remove_all_dynamic(cpu_dev);
dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
qcom_cpufreq_hw_lmh_exit(data);
kfree(policy->freq_table);
kfree(data);
iounmap(base);
release_mem_region(res->start, resource_size(res));
return 0;
}
@ -651,10 +634,9 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
{
struct clk_hw_onecell_data *clk_data;
struct device *dev = &pdev->dev;
struct device_node *soc_node;
struct device *cpu_dev;
struct clk *clk;
int ret, i, num_domains, reg_sz;
int ret, i, num_domains;
clk = clk_get(dev, "xo");
if (IS_ERR(clk))
@ -681,24 +663,9 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
if (ret)
return ret;
/* Allocate qcom_cpufreq_data based on the available frequency domains in DT */
soc_node = of_get_parent(dev->of_node);
if (!soc_node)
return -EINVAL;
ret = of_property_read_u32(soc_node, "#address-cells", &reg_sz);
if (ret)
goto of_exit;
ret = of_property_read_u32(soc_node, "#size-cells", &i);
if (ret)
goto of_exit;
reg_sz += i;
num_domains = of_property_count_elems_of_size(dev->of_node, "reg", sizeof(u32) * reg_sz);
if (num_domains <= 0)
return num_domains;
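/*
 * Count frequency domains by probing the 'reg' resources directly,
 * capped at MAX_FREQ_DOMAINS, instead of deriving the count from the
 * parent node's #address-cells/#size-cells.
 */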
for (num_domains = 0; num_domains < MAX_FREQ_DOMAINS; num_domains++)
if (!platform_get_resource(pdev, IORESOURCE_MEM, num_domains))
break;
qcom_cpufreq.data = devm_kzalloc(dev, sizeof(struct qcom_cpufreq_data) * num_domains,
GFP_KERNEL);
@ -718,17 +685,15 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
for (i = 0; i < num_domains; i++) {
struct qcom_cpufreq_data *data = &qcom_cpufreq.data[i];
struct clk_init_data clk_init = {};
struct resource *res;
void __iomem *base;
base = devm_platform_get_and_ioremap_resource(pdev, i, &res);
base = devm_platform_ioremap_resource(pdev, i);
if (IS_ERR(base)) {
dev_err(dev, "Failed to map resource %pR\n", res);
dev_err(dev, "Failed to map resource index %d\n", i);
return PTR_ERR(base);
}
data->base = base;
data->res = res;
/* Register CPU clock for each frequency domain */
clk_init.name = kasprintf(GFP_KERNEL, "qcom_cpufreq%d", i);
@ -762,9 +727,6 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
else
dev_dbg(dev, "QCOM CPUFreq HW driver initialized\n");
of_exit:
of_node_put(soc_node);
return ret;
}


@ -310,7 +310,7 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev)
#ifdef CONFIG_COMMON_CLK
/* dummy clock provider as needed by OPP if clocks property is used */
if (of_find_property(dev->of_node, "#clock-cells", NULL))
if (of_property_present(dev->of_node, "#clock-cells"))
devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL);
#endif


@ -221,4 +221,3 @@ module_init(tegra_cpufreq_init);
MODULE_AUTHOR("Tuomas Tynkkynen <ttynkkynen@nvidia.com>");
MODULE_DESCRIPTION("cpufreq driver for NVIDIA Tegra124");
MODULE_LICENSE("GPL v2");


@ -12,6 +12,7 @@
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/units.h>
#include <asm/smp_plat.h>
@ -65,12 +66,36 @@ struct tegra_cpufreq_soc {
struct tegra194_cpufreq_data {
void __iomem *regs;
struct cpufreq_frequency_table **tables;
struct cpufreq_frequency_table **bpmp_luts;
const struct tegra_cpufreq_soc *soc;
bool icc_dram_bw_scaling;
};
static struct workqueue_struct *read_counters_wq;
static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
{
struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
struct dev_pm_opp *opp;
struct device *dev;
int ret;
dev = get_cpu_device(policy->cpu);
if (!dev)
return -ENODEV;
opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true);
if (IS_ERR(opp))
return PTR_ERR(opp);
ret = dev_pm_opp_set_opp(dev, opp);
if (ret)
data->icc_dram_bw_scaling = false;
dev_pm_opp_put(opp);
return ret;
}
static void tegra_get_cpu_mpidr(void *mpidr)
{
*((u64 *)mpidr) = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
@ -354,7 +379,7 @@ static unsigned int tegra194_get_speed(u32 cpu)
* to the last written ndiv value from freq_table. This is
* done to return consistent value.
*/
cpufreq_for_each_valid_entry(pos, data->tables[clusterid]) {
cpufreq_for_each_valid_entry(pos, data->bpmp_luts[clusterid]) {
if (pos->driver_data != ndiv)
continue;
@ -369,16 +394,93 @@ static unsigned int tegra194_get_speed(u32 cpu)
return rate;
}
static int tegra_cpufreq_init_cpufreq_table(struct cpufreq_policy *policy,
struct cpufreq_frequency_table *bpmp_lut,
struct cpufreq_frequency_table **opp_table)
{
struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
struct cpufreq_frequency_table *freq_table = NULL;
struct cpufreq_frequency_table *pos;
struct device *cpu_dev;
struct dev_pm_opp *opp;
unsigned long rate;
int ret, max_opps;
int j = 0;
cpu_dev = get_cpu_device(policy->cpu);
if (!cpu_dev) {
pr_err("%s: failed to get cpu%d device\n", __func__, policy->cpu);
return -ENODEV;
}
/* Initialize OPP table mentioned in operating-points-v2 property in DT */
ret = dev_pm_opp_of_add_table_indexed(cpu_dev, 0);
if (!ret) {
max_opps = dev_pm_opp_get_opp_count(cpu_dev);
if (max_opps <= 0) {
dev_err(cpu_dev, "Failed to add OPPs\n");
return max_opps;
}
/* Disable all opps and cross-validate against LUT later */
for (rate = 0; ; rate++) {
opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
if (IS_ERR(opp))
break;
dev_pm_opp_put(opp);
dev_pm_opp_disable(cpu_dev, rate);
}
} else {
dev_err(cpu_dev, "Invalid or empty opp table in device tree\n");
data->icc_dram_bw_scaling = false;
return ret;
}
freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_KERNEL);
if (!freq_table)
return -ENOMEM;
/*
* Cross check the frequencies from the BPMP-FW LUT against the OPPs present in DT.
* Enable only those DT OPPs which are also present in the LUT.
*/
cpufreq_for_each_valid_entry(pos, bpmp_lut) {
opp = dev_pm_opp_find_freq_exact(cpu_dev, pos->frequency * KHZ, false);
if (IS_ERR(opp))
continue;
ret = dev_pm_opp_enable(cpu_dev, pos->frequency * KHZ);
if (ret < 0)
return ret;
freq_table[j].driver_data = pos->driver_data;
freq_table[j].frequency = pos->frequency;
j++;
}
freq_table[j].driver_data = pos->driver_data;
freq_table[j].frequency = CPUFREQ_TABLE_END;
*opp_table = &freq_table[0];
dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
return ret;
}
static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
{
struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
int maxcpus_per_cluster = data->soc->maxcpus_per_cluster;
struct cpufreq_frequency_table *freq_table;
struct cpufreq_frequency_table *bpmp_lut;
u32 start_cpu, cpu;
u32 clusterid;
int ret;
data->soc->ops->get_cpu_cluster_id(policy->cpu, NULL, &clusterid);
if (clusterid >= data->soc->num_clusters || !data->tables[clusterid])
if (clusterid >= data->soc->num_clusters || !data->bpmp_luts[clusterid])
return -EINVAL;
start_cpu = rounddown(policy->cpu, maxcpus_per_cluster);
@ -387,9 +489,22 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
if (cpu_possible(cpu))
cpumask_set_cpu(cpu, policy->cpus);
}
policy->freq_table = data->tables[clusterid];
policy->cpuinfo.transition_latency = TEGRA_CPUFREQ_TRANSITION_LATENCY;
bpmp_lut = data->bpmp_luts[clusterid];
if (data->icc_dram_bw_scaling) {
ret = tegra_cpufreq_init_cpufreq_table(policy, bpmp_lut, &freq_table);
if (!ret) {
policy->freq_table = freq_table;
return 0;
}
}
data->icc_dram_bw_scaling = false;
policy->freq_table = bpmp_lut;
pr_info("OPP tables missing from DT, EMC frequency scaling disabled\n");
return 0;
}
@ -406,6 +521,9 @@ static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
*/
data->soc->ops->set_cpu_ndiv(policy, (u64)tbl->driver_data);
if (data->icc_dram_bw_scaling)
tegra_cpufreq_set_bw(policy, tbl->frequency);
return 0;
}
@ -439,8 +557,8 @@ static void tegra194_cpufreq_free_resources(void)
}
static struct cpufreq_frequency_table *
init_freq_table(struct platform_device *pdev, struct tegra_bpmp *bpmp,
unsigned int cluster_id)
tegra_cpufreq_bpmp_read_lut(struct platform_device *pdev, struct tegra_bpmp *bpmp,
unsigned int cluster_id)
{
struct cpufreq_frequency_table *freq_table;
struct mrq_cpu_ndiv_limits_response resp;
@ -515,6 +633,7 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
const struct tegra_cpufreq_soc *soc;
struct tegra194_cpufreq_data *data;
struct tegra_bpmp *bpmp;
struct device *cpu_dev;
int err, i;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
@ -530,9 +649,9 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
return -EINVAL;
}
data->tables = devm_kcalloc(&pdev->dev, data->soc->num_clusters,
sizeof(*data->tables), GFP_KERNEL);
if (!data->tables)
data->bpmp_luts = devm_kcalloc(&pdev->dev, data->soc->num_clusters,
sizeof(*data->bpmp_luts), GFP_KERNEL);
if (!data->bpmp_luts)
return -ENOMEM;
if (soc->actmon_cntr_base) {
@ -556,15 +675,26 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
}
for (i = 0; i < data->soc->num_clusters; i++) {
data->tables[i] = init_freq_table(pdev, bpmp, i);
if (IS_ERR(data->tables[i])) {
err = PTR_ERR(data->tables[i]);
data->bpmp_luts[i] = tegra_cpufreq_bpmp_read_lut(pdev, bpmp, i);
if (IS_ERR(data->bpmp_luts[i])) {
err = PTR_ERR(data->bpmp_luts[i]);
goto err_free_res;
}
}
tegra194_cpufreq_driver.driver_data = data;
/* Check for optional OPPv2 and interconnect paths on CPU0 to enable ICC scaling */
cpu_dev = get_cpu_device(0);
if (!cpu_dev)
return -EPROBE_DEFER;
if (dev_pm_opp_of_get_opp_desc_node(cpu_dev)) {
err = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
if (!err)
data->icc_dram_bw_scaling = true;
}
err = cpufreq_register_driver(&tegra194_cpufreq_driver);
if (!err)
goto put_bpmp;


@ -25,7 +25,7 @@ static bool cpu0_node_has_opp_v2_prop(void)
struct device_node *np = of_cpu_device_node_get(0);
bool ret = false;
if (of_get_property(np, "operating-points-v2", NULL))
if (of_property_present(np, "operating-points-v2"))
ret = true;
of_node_put(np);


@ -109,6 +109,7 @@ struct cppc_perf_caps {
u32 lowest_freq;
u32 nominal_freq;
u32 energy_perf;
bool auto_sel;
};
struct cppc_perf_ctrls {
@ -153,6 +154,8 @@ extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val);
extern int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val);
extern int cppc_get_epp_perf(int cpunum, u64 *epp_perf);
extern int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable);
extern int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps);
extern int cppc_set_auto_sel(int cpu, bool enable);
#else /* !CONFIG_ACPI_CPPC_LIB */
static inline int cppc_get_desired_perf(int cpunum, u64 *desired_perf)
{
@ -214,6 +217,14 @@ static inline int cppc_get_epp_perf(int cpunum, u64 *epp_perf)
{
return -ENOTSUPP;
}
static inline int cppc_set_auto_sel(int cpu, bool enable)
{
return -ENOTSUPP;
}
static inline int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps)
{
return -ENOTSUPP;
}
#endif /* !CONFIG_ACPI_CPPC_LIB */
#endif /* _CPPC_ACPI_H*/


@ -97,6 +97,7 @@ enum amd_pstate_mode {
AMD_PSTATE_DISABLE = 0,
AMD_PSTATE_PASSIVE,
AMD_PSTATE_ACTIVE,
AMD_PSTATE_GUIDED,
AMD_PSTATE_MAX,
};
@ -104,6 +105,7 @@ static const char * const amd_pstate_mode_string[] = {
[AMD_PSTATE_DISABLE] = "disable",
[AMD_PSTATE_PASSIVE] = "passive",
[AMD_PSTATE_ACTIVE] = "active",
[AMD_PSTATE_GUIDED] = "guided",
NULL,
};
#endif /* _LINUX_AMD_PSTATE_H */


@ -237,6 +237,7 @@ bool cpufreq_supports_freq_invariance(void);
struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy);
void cpufreq_enable_fast_switch(struct cpufreq_policy *policy);
void cpufreq_disable_fast_switch(struct cpufreq_policy *policy);
bool has_target_index(void);
#else
static inline unsigned int cpufreq_get(unsigned int cpu)
{