Update devfreq next for v5.19

Detailed description for this pull request:
 1. Update devfreq core
 - Add CPU-based scaling support to the passive governor. Some devices,
 such as caches, require dynamic frequency scaling but are tied very
 tightly to the CPU frequency, so the passive governor is used to scale
 their frequency according to the current CPU frequency.
 
 To decide the frequency of the device, the governor does one of the following:
 : Derives the optimal devfreq device opp from required-opps property of
   the parent cpu opp_table.
 
 : Scales the device frequency in proportion to the CPU frequency. So, if
   the CPUs are running at their max frequency, the device runs at its
   max frequency. If the CPUs are running at their min frequency, the
   device runs at its min frequency. Frequencies in between are
   interpolated.
 
 2. Update devfreq drivers
 - Update rk3399_dmc.c as follows:
 : Convert dt-binding document to YAML and deprecate unused properties.
 
 : Use Hz units for the device-tree properties of rk3399_dmc.
 
 : rk3399_dmc is able to set the idle time before changing the DMC clock.
   Specify the idle-time parameters in nanoseconds in the DT binding.
 
 : Add new disable-freq properties to optimize the power-saving feature
   of rk3399_dmc.
 
 : Disable devfreq-event device on remove() to fix unbalanced
   enable-disable count.
 
 : Use devm_pm_opp_of_add_table()
 
 : Block PMU (Power Management Unit) transitions while the frequency is
   being scaled by ARM Trusted Firmware, in order to fix the conflict
   between the PMU and the DMC (Dynamic Memory Controller).
 -----BEGIN PGP SIGNATURE-----
 
 iQJKBAABCgA0FiEEsSpuqBtbWtRe4rLGnM3fLN7rz1MFAmKDddYWHGN3MDAuY2hv
 aUBzYW1zdW5nLmNvbQAKCRCczd8s3uvPU1o0EACGE7FV/pA/sbOFhhLXQlI1XMQf
 C2XVvjigP5M7Y9qgM6qLBKCSmRPbBjuO4VLKkIycmo2GRcUs+Sk5CqALjWZF0tDR
 aiAV+P3wIC2pCZNtAXJG6BhP5GNzGYYdv2zJfNmVUsYUYVC3ezppuE1Yp2Sscq5t
 K43eAOq0c9+TTeLfdDKdi07QtevMHwbOxjNUhwnzN5+yD9cknK1Ht5FVjDc8eYTG
 yBJPgMy9qhNqEZ4gl9zpKJkpAJMOhquT4ZtytNioIfnmKCrpyiOy+yApvW5WiU1d
 /8KMiG42C47rUBYCiEGEdn6PaTOpuOK0hnuoxlxE07dqJP349SbozFTqZxfbCJhI
 OjZSdLTolPFq82zuAafBJmijmeAiYdI0FKrhXmQo4doeleIQG9z0h8YpkQSnF47L
 xsj7QswD0LxtMQbYv5zds1NoGnUn/U+TpoJ8mbpg5gPWKaBTNdTZFJQj+HXtrDyy
 1ouR41u6kOoLLDHu2I5f9dTMptJRg3jS+kQfDv17K8u5mAmcwFidpOiyac6m2xad
 Nnn85NAE9JaxpzVe3+S3uQdscEe2FBWEZCkbrCcPn2T88BZwrfcby02YRBw22WkS
 p4xPAOncw4lhZ6z3pedcH/1+kk6uoTube5QK8ddSRxYrRavtOsqchAaec+hYRNif
 CbjJOqsdbDLa7ANLxw==
 =/3W8
 -----END PGP SIGNATURE-----

Merge tag 'devfreq-next-for-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/linux

Pull devfreq changes for 5.19-rc1 from Chanwoo Choi:

"1. Update devfreq core

 - Add CPU-based scaling support to the passive governor. Some devices,
   such as caches, require dynamic frequency scaling but are tied very
   tightly to the CPU frequency, so the passive governor is used to scale
   their frequency according to the current CPU frequency.

  To decide the frequency of the device, the governor does one of the following:
  : Derives the optimal devfreq device opp from required-opps property of
    the parent cpu opp_table.

  : Scales the device frequency in proportion to the CPU frequency. So, if
    the CPUs are running at their max frequency, the device runs at its
    max frequency. If the CPUs are running at their min frequency, the
    device runs at its min frequency. Frequencies in between are
    interpolated.

 2. Update devfreq drivers

  - Update rk3399_dmc.c as follows:

   : Convert dt-binding document to YAML and deprecate unused properties.

   : Use Hz units for the device-tree properties of rk3399_dmc.

   : rk3399_dmc is able to set the idle time before changing the DMC clock.
     Specify the idle-time parameters in nanoseconds in the DT binding.

   : Add new disable-freq properties to optimize the power-saving feature
     of rk3399_dmc.

   : Disable devfreq-event device on remove() to fix unbalanced
     enable-disable count.

   : Use devm_pm_opp_of_add_table()

   : Block PMU (Power Management Unit) transitions while the frequency is
     being scaled by ARM Trusted Firmware, in order to fix the conflict
     between the PMU and the DMC (Dynamic Memory Controller)."

* tag 'devfreq-next-for-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/linux:
  PM / devfreq: passive: Keep cpufreq_policy for possible cpus
  PM / devfreq: passive: Reduce duplicate code when passive_devfreq case
  PM / devfreq: Add cpu based scaling support to passive governor
  PM / devfreq: Export devfreq_get_freq_range symbol within devfreq
  PM / devfreq: rk3399_dmc: Block PMU during transitions
  soc: rockchip: power-domain: Manage resource conflicts with firmware
  PM / devfreq: rk3399_dmc: Avoid static (reused) profile
  PM / devfreq: rk3399_dmc: Use devm_pm_opp_of_add_table()
  PM / devfreq: rk3399_dmc: Disable edev on remove()
  PM / devfreq: rk3399_dmc: Support new *-ns properties
  PM / devfreq: rk3399_dmc: Support new disable-freq properties
  PM / devfreq: rk3399_dmc: Use bitfield macro definitions for ODT_PD
  PM / devfreq: rk3399_dmc: Drop excess timing properties
  PM / devfreq: rk3399_dmc: Drop undocumented ondemand DT props
  dt-bindings: devfreq: rk3399_dmc: Add more disable-freq properties
  dt-bindings: devfreq: rk3399_dmc: Specify idle params in nanoseconds
  dt-bindings: devfreq: rk3399_dmc: Fix Hz units
  dt-bindings: devfreq: rk3399_dmc: Deprecate unused/redundant properties
  dt-bindings: devfreq: rk3399_dmc: Convert to YAML
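
As an illustration of the proportional mapping described in the passive-governor
notes above, here is a standalone sketch (plain userspace C, with made-up example
frequency ranges, not taken from any real SoC) of how a device frequency range can
be scaled in step with the CPU frequency range:

/*
 * Illustrative only: the CPU and device ranges below are invented
 * example values, not taken from this series.
 */
#include <stdio.h>

static unsigned long scale_dev_freq(unsigned long cpu_cur,
				    unsigned long cpu_min, unsigned long cpu_max,
				    unsigned long dev_min, unsigned long dev_max)
{
	/* Fraction (in percent) of the CPU range currently in use. */
	unsigned long cpu_percent = ((cpu_cur - cpu_min) * 100) / (cpu_max - cpu_min);

	/* Map that fraction onto the device's own min..max range. */
	return dev_min + ((dev_max - dev_min) * cpu_percent) / 100;
}

int main(void)
{
	/* CPU range 400..1800 MHz, device range 200..900 MHz. */
	printf("%lu\n", scale_dev_freq(1800, 400, 1800, 200, 900)); /* 900: CPU at max */
	printf("%lu\n", scale_dev_freq(400,  400, 1800, 200, 900)); /* 200: CPU at min */
	printf("%lu\n", scale_dev_freq(1100, 400, 1800, 200, 900)); /* 550: interpolated */
	return 0;
}

When the parent CPU OPP table carries required-opps entries pointing at the device's
OPP table, the governor instead derives the device OPP via
dev_pm_opp_xlate_required_opp(), as seen in the governor_passive.c diff below.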
Rafael J. Wysocki 2022-05-18 20:55:34 +02:00
commit d44d6c4a3a
9 changed files with 1069 additions and 461 deletions

Documentation/devicetree/bindings/devfreq/rk3399_dmc.txt

@ -1,212 +0,0 @@
* Rockchip rk3399 DMC (Dynamic Memory Controller) device
Required properties:
- compatible: Must be "rockchip,rk3399-dmc".
- devfreq-events: Node to get DDR loading, Refer to
Documentation/devicetree/bindings/devfreq/event/
rockchip-dfi.txt
- clocks: Phandles for clock specified in "clock-names" property
- clock-names : The name of clock used by the DFI, must be
"pclk_ddr_mon";
- operating-points-v2: Refer to Documentation/devicetree/bindings/opp/opp-v2.yaml
for details.
- center-supply: DMC supply node.
- status: Marks the node enabled/disabled.
- rockchip,pmu: Phandle to the syscon managing the "PMU general register
files".
Optional properties:
- interrupts: The CPU interrupt number. The interrupt specifier
format depends on the interrupt controller.
It should be a DCF interrupt. When DDR DVFS finishes
a DCF interrupt is triggered.
- rockchip,pmu: Phandle to the syscon managing the "PMU general register
files".
Following properties relate to DDR timing:
- rockchip,dram_speed_bin : Value reference include/dt-bindings/clock/rk3399-ddr.h,
it selects the DDR3 cl-trp-trcd type. It must be
set according to "Speed Bin" in DDR3 datasheet,
DO NOT use a smaller "Speed Bin" than specified
for the DDR3 being used.
- rockchip,pd_idle : Configure the PD_IDLE value. Defines the
power-down idle period in which memories are
placed into power-down mode if bus is idle
for PD_IDLE DFI clock cycles.
- rockchip,sr_idle : Configure the SR_IDLE value. Defines the
self-refresh idle period in which memories are
placed into self-refresh mode if bus is idle
for SR_IDLE * 1024 DFI clock cycles (DFI
clocks freq is half of DRAM clock), default
value is "0".
- rockchip,sr_mc_gate_idle : Defines the memory self-refresh and controller
clock gating idle period. Memories are placed
into self-refresh mode and memory controller
clock arg gating started if bus is idle for
sr_mc_gate_idle*1024 DFI clock cycles.
- rockchip,srpd_lite_idle : Defines the self-refresh power down idle
period in which memories are placed into
self-refresh power down mode if bus is idle
for srpd_lite_idle * 1024 DFI clock cycles.
This parameter is for LPDDR4 only.
- rockchip,standby_idle : Defines the standby idle period in which
memories are placed into self-refresh mode.
The controller, pi, PHY and DRAM clock will
be gated if bus is idle for standby_idle * DFI
clock cycles.
- rockchip,dram_dll_dis_freq : Defines the DDR3 DLL bypass frequency in MHz.
When DDR frequency is less than DRAM_DLL_DISB_FREQ,
DDR3 DLL will be bypassed. Note: if DLL was bypassed,
the odt will also stop working.
- rockchip,phy_dll_dis_freq : Defines the PHY dll bypass frequency in
MHz (Mega Hz). When DDR frequency is less than
DRAM_DLL_DISB_FREQ, PHY DLL will be bypassed.
Note: PHY DLL and PHY ODT are independent.
- rockchip,ddr3_odt_dis_freq : When the DRAM type is DDR3, this parameter defines
the ODT disable frequency in MHz (Mega Hz).
when the DDR frequency is less then ddr3_odt_dis_freq,
the ODT on the DRAM side and controller side are
both disabled.
- rockchip,ddr3_drv : When the DRAM type is DDR3, this parameter defines
the DRAM side driver strength in ohms. Default
value is 40.
- rockchip,ddr3_odt : When the DRAM type is DDR3, this parameter defines
the DRAM side ODT strength in ohms. Default value
is 120.
- rockchip,phy_ddr3_ca_drv : When the DRAM type is DDR3, this parameter defines
the phy side CA line (incluing command line,
address line and clock line) driver strength.
Default value is 40.
- rockchip,phy_ddr3_dq_drv : When the DRAM type is DDR3, this parameter defines
the PHY side DQ line (including DQS/DQ/DM line)
driver strength. Default value is 40.
- rockchip,phy_ddr3_odt : When the DRAM type is DDR3, this parameter defines
the PHY side ODT strength. Default value is 240.
- rockchip,lpddr3_odt_dis_freq : When the DRAM type is LPDDR3, this parameter defines
then ODT disable frequency in MHz (Mega Hz).
When DDR frequency is less then ddr3_odt_dis_freq,
the ODT on the DRAM side and controller side are
both disabled.
- rockchip,lpddr3_drv : When the DRAM type is LPDDR3, this parameter defines
the DRAM side driver strength in ohms. Default
value is 34.
- rockchip,lpddr3_odt : When the DRAM type is LPDDR3, this parameter defines
the DRAM side ODT strength in ohms. Default value
is 240.
- rockchip,phy_lpddr3_ca_drv : When the DRAM type is LPDDR3, this parameter defines
the PHY side CA line (including command line,
address line and clock line) driver strength.
Default value is 40.
- rockchip,phy_lpddr3_dq_drv : When the DRAM type is LPDDR3, this parameter defines
the PHY side DQ line (including DQS/DQ/DM line)
driver strength. Default value is 40.
- rockchip,phy_lpddr3_odt : When dram type is LPDDR3, this parameter define
the phy side odt strength, default value is 240.
- rockchip,lpddr4_odt_dis_freq : When the DRAM type is LPDDR4, this parameter
defines the ODT disable frequency in
MHz (Mega Hz). When the DDR frequency is less then
ddr3_odt_dis_freq, the ODT on the DRAM side and
controller side are both disabled.
- rockchip,lpddr4_drv : When the DRAM type is LPDDR4, this parameter defines
the DRAM side driver strength in ohms. Default
value is 60.
- rockchip,lpddr4_dq_odt : When the DRAM type is LPDDR4, this parameter defines
the DRAM side ODT on DQS/DQ line strength in ohms.
Default value is 40.
- rockchip,lpddr4_ca_odt : When the DRAM type is LPDDR4, this parameter defines
the DRAM side ODT on CA line strength in ohms.
Default value is 40.
- rockchip,phy_lpddr4_ca_drv : When the DRAM type is LPDDR4, this parameter defines
the PHY side CA line (including command address
line) driver strength. Default value is 40.
- rockchip,phy_lpddr4_ck_cs_drv : When the DRAM type is LPDDR4, this parameter defines
the PHY side clock line and CS line driver
strength. Default value is 80.
- rockchip,phy_lpddr4_dq_drv : When the DRAM type is LPDDR4, this parameter defines
the PHY side DQ line (including DQS/DQ/DM line)
driver strength. Default value is 80.
- rockchip,phy_lpddr4_odt : When the DRAM type is LPDDR4, this parameter defines
the PHY side ODT strength. Default value is 60.
Example:
dmc_opp_table: dmc_opp_table {
compatible = "operating-points-v2";
opp00 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <900000>;
};
opp01 {
opp-hz = /bits/ 64 <666000000>;
opp-microvolt = <900000>;
};
};
dmc: dmc {
compatible = "rockchip,rk3399-dmc";
devfreq-events = <&dfi>;
interrupts = <GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cru SCLK_DDRC>;
clock-names = "dmc_clk";
operating-points-v2 = <&dmc_opp_table>;
center-supply = <&ppvar_centerlogic>;
upthreshold = <15>;
downdifferential = <10>;
rockchip,ddr3_speed_bin = <21>;
rockchip,pd_idle = <0x40>;
rockchip,sr_idle = <0x2>;
rockchip,sr_mc_gate_idle = <0x3>;
rockchip,srpd_lite_idle = <0x4>;
rockchip,standby_idle = <0x2000>;
rockchip,dram_dll_dis_freq = <300>;
rockchip,phy_dll_dis_freq = <125>;
rockchip,auto_pd_dis_freq = <666>;
rockchip,ddr3_odt_dis_freq = <333>;
rockchip,ddr3_drv = <40>;
rockchip,ddr3_odt = <120>;
rockchip,phy_ddr3_ca_drv = <40>;
rockchip,phy_ddr3_dq_drv = <40>;
rockchip,phy_ddr3_odt = <240>;
rockchip,lpddr3_odt_dis_freq = <333>;
rockchip,lpddr3_drv = <34>;
rockchip,lpddr3_odt = <240>;
rockchip,phy_lpddr3_ca_drv = <40>;
rockchip,phy_lpddr3_dq_drv = <40>;
rockchip,phy_lpddr3_odt = <240>;
rockchip,lpddr4_odt_dis_freq = <333>;
rockchip,lpddr4_drv = <60>;
rockchip,lpddr4_dq_odt = <40>;
rockchip,lpddr4_ca_odt = <40>;
rockchip,phy_lpddr4_ca_drv = <40>;
rockchip,phy_lpddr4_ck_cs_drv = <80>;
rockchip,phy_lpddr4_dq_drv = <80>;
rockchip,phy_lpddr4_odt = <60>;
};

Documentation/devicetree/bindings/memory-controllers/rockchip,rk3399-dmc.yaml

@ -0,0 +1,384 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# %YAML 1.2
---
$id: http://devicetree.org/schemas/memory-controllers/rockchip,rk3399-dmc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip rk3399 DMC (Dynamic Memory Controller) device
maintainers:
- Brian Norris <briannorris@chromium.org>
properties:
compatible:
enum:
- rockchip,rk3399-dmc
devfreq-events:
$ref: /schemas/types.yaml#/definitions/phandle
description:
Node to get DDR loading. Refer to
Documentation/devicetree/bindings/devfreq/event/rockchip-dfi.txt.
clocks:
maxItems: 1
clock-names:
items:
- const: dmc_clk
operating-points-v2: true
center-supply:
description:
DMC regulator supply.
rockchip,pmu:
$ref: /schemas/types.yaml#/definitions/phandle
description:
Phandle to the syscon managing the "PMU general register files".
interrupts:
maxItems: 1
description:
The CPU interrupt number. It should be a DCF interrupt. When DDR DVFS
finishes, a DCF interrupt is triggered.
rockchip,ddr3_speed_bin:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
For values, reference include/dt-bindings/clock/rk3399-ddr.h. Selects the
DDR3 cl-trp-trcd type. It must be set according to "Speed Bin" in DDR3
datasheet; DO NOT use a smaller "Speed Bin" than specified for the DDR3
being used.
rockchip,pd_idle:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
Configure the PD_IDLE value. Defines the power-down idle period in which
memories are placed into power-down mode if bus is idle for PD_IDLE DFI
clock cycles.
See also rockchip,pd-idle-ns.
rockchip,sr_idle:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
Configure the SR_IDLE value. Defines the self-refresh idle period in
which memories are placed into self-refresh mode if bus is idle for
SR_IDLE * 1024 DFI clock cycles (DFI clocks freq is half of DRAM clock).
See also rockchip,sr-idle-ns.
default: 0
rockchip,sr_mc_gate_idle:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
Defines the memory self-refresh and controller clock gating idle period.
Memories are placed into self-refresh mode and memory controller clock
arg gating started if bus is idle for sr_mc_gate_idle*1024 DFI clock
cycles.
See also rockchip,sr-mc-gate-idle-ns.
rockchip,srpd_lite_idle:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
Defines the self-refresh power down idle period in which memories are
placed into self-refresh power down mode if bus is idle for
srpd_lite_idle * 1024 DFI clock cycles. This parameter is for LPDDR4
only.
See also rockchip,srpd-lite-idle-ns.
rockchip,standby_idle:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
Defines the standby idle period in which memories are placed into
self-refresh mode. The controller, pi, PHY and DRAM clock will be gated
if bus is idle for standby_idle * DFI clock cycles.
See also rockchip,standby-idle-ns.
rockchip,dram_dll_dis_freq:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description: |
Defines the DDR3 DLL bypass frequency in MHz. When DDR frequency is less
than DRAM_DLL_DISB_FREQ, DDR3 DLL will be bypassed.
Note: if DLL was bypassed, the odt will also stop working.
rockchip,phy_dll_dis_freq:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description: |
Defines the PHY dll bypass frequency in MHz (Mega Hz). When DDR frequency
is less than DRAM_DLL_DISB_FREQ, PHY DLL will be bypassed.
Note: PHY DLL and PHY ODT are independent.
rockchip,auto_pd_dis_freq:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
Defines the auto PD disable frequency in MHz.
rockchip,ddr3_odt_dis_freq:
$ref: /schemas/types.yaml#/definitions/uint32
minimum: 1000000 # In case anyone thought this was MHz.
description:
When the DRAM type is DDR3, this parameter defines the ODT disable
frequency in Hz. When the DDR frequency is less than ddr3_odt_dis_freq,
the ODT on the DRAM side and controller side are both disabled.
rockchip,ddr3_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is DDR3, this parameter defines the DRAM side drive
strength in ohms.
default: 40
rockchip,ddr3_odt:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is DDR3, this parameter defines the DRAM side ODT
strength in ohms.
default: 120
rockchip,phy_ddr3_ca_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is DDR3, this parameter defines the phy side CA line
(including command line, address line and clock line) drive strength.
default: 40
rockchip,phy_ddr3_dq_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is DDR3, this parameter defines the PHY side DQ line
(including DQS/DQ/DM line) drive strength.
default: 40
rockchip,phy_ddr3_odt:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is DDR3, this parameter defines the PHY side ODT
strength.
default: 240
rockchip,lpddr3_odt_dis_freq:
$ref: /schemas/types.yaml#/definitions/uint32
minimum: 1000000 # In case anyone thought this was MHz.
description:
When the DRAM type is LPDDR3, this parameter defines the ODT disable
frequency in Hz. When the DDR frequency is less than ddr3_odt_dis_freq, the
ODT on the DRAM side and controller side are both disabled.
rockchip,lpddr3_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR3, this parameter defines the DRAM side drive
strength in ohms.
default: 34
rockchip,lpddr3_odt:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR3, this parameter defines the DRAM side ODT
strength in ohms.
default: 240
rockchip,phy_lpddr3_ca_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR3, this parameter defines the PHY side CA line
(including command line, address line and clock line) drive strength.
default: 40
rockchip,phy_lpddr3_dq_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR3, this parameter defines the PHY side DQ line
(including DQS/DQ/DM line) drive strength.
default: 40
rockchip,phy_lpddr3_odt:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR3, this parameter defines the PHY side ODT
strength, default value is 240.
rockchip,lpddr4_odt_dis_freq:
$ref: /schemas/types.yaml#/definitions/uint32
minimum: 1000000 # In case anyone thought this was MHz.
description:
When the DRAM type is LPDDR4, this parameter defines the ODT disable
frequency in Hz. When the DDR frequency is less than ddr3_odt_dis_freq,
the ODT on the DRAM side and controller side are both disabled.
rockchip,lpddr4_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR4, this parameter defines the DRAM side drive
strength in ohms.
default: 60
rockchip,lpddr4_dq_odt:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR4, this parameter defines the DRAM side ODT on
DQS/DQ line strength in ohms.
default: 40
rockchip,lpddr4_ca_odt:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR4, this parameter defines the DRAM side ODT on
CA line strength in ohms.
default: 40
rockchip,phy_lpddr4_ca_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR4, this parameter defines the PHY side CA line
(including command address line) drive strength.
default: 40
rockchip,phy_lpddr4_ck_cs_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR4, this parameter defines the PHY side clock
line and CS line drive strength.
default: 80
rockchip,phy_lpddr4_dq_drv:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR4, this parameter defines the PHY side DQ line
(including DQS/DQ/DM line) drive strength.
default: 80
rockchip,phy_lpddr4_odt:
deprecated: true
$ref: /schemas/types.yaml#/definitions/uint32
description:
When the DRAM type is LPDDR4, this parameter defines the PHY side ODT
strength.
default: 60
rockchip,pd-idle-ns:
description:
Configure the PD_IDLE value in nanoseconds. Defines the power-down idle
period in which memories are placed into power-down mode if bus is idle
for PD_IDLE nanoseconds.
rockchip,sr-idle-ns:
description:
Configure the SR_IDLE value in nanoseconds. Defines the self-refresh idle
period in which memories are placed into self-refresh mode if bus is idle
for SR_IDLE nanoseconds.
default: 0
rockchip,sr-mc-gate-idle-ns:
description:
Defines the memory self-refresh and controller clock gating idle period in nanoseconds.
Memories are placed into self-refresh mode and memory controller clock
arg gating started if bus is idle for sr_mc_gate_idle nanoseconds.
rockchip,srpd-lite-idle-ns:
description:
Defines the self-refresh power down idle period in which memories are
placed into self-refresh power down mode if bus is idle for
srpd_lite_idle nanoseconds. This parameter is for LPDDR4 only.
rockchip,standby-idle-ns:
description:
Defines the standby idle period in which memories are placed into
self-refresh mode. The controller, pi, PHY and DRAM clock will be gated
if bus is idle for standby_idle nanoseconds.
rockchip,pd-idle-dis-freq-hz:
description:
Defines the power-down idle disable frequency in Hz. When the DDR
frequency is greater than pd-idle-dis-freq, power-down idle is disabled.
See also rockchip,pd-idle-ns.
rockchip,sr-idle-dis-freq-hz:
description:
Defines the self-refresh idle disable frequency in Hz. When the DDR
frequency is greater than sr-idle-dis-freq, self-refresh idle is
disabled. See also rockchip,sr-idle-ns.
rockchip,sr-mc-gate-idle-dis-freq-hz:
description:
Defines the self-refresh and memory-controller clock gating disable
frequency in Hz. When the DDR frequency is greater than
sr-mc-gate-idle-dis-freq, the clock will not be gated when idle. See also
rockchip,sr-mc-gate-idle-ns.
rockchip,srpd-lite-idle-dis-freq-hz:
description:
Defines the self-refresh power down idle disable frequency in Hz. When
the DDR frequency is greater than srpd-lite-idle-dis-freq, memory will
not be placed into self-refresh power down mode when idle. See also
rockchip,srpd-lite-idle-ns.
rockchip,standby-idle-dis-freq-hz:
description:
Defines the standby idle disable frequency in Hz. When the DDR frequency
is greater than standby-idle-dis-freq, standby idle is disabled. See also
rockchip,standby-idle-ns.
required:
- compatible
- devfreq-events
- clocks
- clock-names
- operating-points-v2
- center-supply
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/rk3399-cru.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
memory-controller {
compatible = "rockchip,rk3399-dmc";
devfreq-events = <&dfi>;
rockchip,pmu = <&pmu>;
interrupts = <GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cru SCLK_DDRC>;
clock-names = "dmc_clk";
operating-points-v2 = <&dmc_opp_table>;
center-supply = <&ppvar_centerlogic>;
rockchip,pd-idle-ns = <160>;
rockchip,sr-idle-ns = <10240>;
rockchip,sr-mc-gate-idle-ns = <40960>;
rockchip,srpd-lite-idle-ns = <61440>;
rockchip,standby-idle-ns = <81920>;
rockchip,ddr3_odt_dis_freq = <333000000>;
rockchip,lpddr3_odt_dis_freq = <333000000>;
rockchip,lpddr4_odt_dis_freq = <333000000>;
rockchip,pd-idle-dis-freq-hz = <1000000000>;
rockchip,sr-idle-dis-freq-hz = <1000000000>;
rockchip,sr-mc-gate-idle-dis-freq-hz = <1000000000>;
rockchip,srpd-lite-idle-dis-freq-hz = <0>;
rockchip,standby-idle-dis-freq-hz = <928000000>;
};

drivers/devfreq/devfreq.c

@ -112,16 +112,16 @@ static unsigned long find_available_max_freq(struct devfreq *devfreq)
}
/**
* get_freq_range() - Get the current freq range
* devfreq_get_freq_range() - Get the current freq range
* @devfreq: the devfreq instance
* @min_freq: the min frequency
* @max_freq: the max frequency
*
* This takes into consideration all constraints.
*/
static void get_freq_range(struct devfreq *devfreq,
unsigned long *min_freq,
unsigned long *max_freq)
void devfreq_get_freq_range(struct devfreq *devfreq,
unsigned long *min_freq,
unsigned long *max_freq)
{
unsigned long *freq_table = devfreq->profile->freq_table;
s32 qos_min_freq, qos_max_freq;
@ -158,6 +158,7 @@ static void get_freq_range(struct devfreq *devfreq,
if (*min_freq > *max_freq)
*min_freq = *max_freq;
}
EXPORT_SYMBOL(devfreq_get_freq_range);
/**
* devfreq_get_freq_level() - Lookup freq_table for the frequency
@ -418,7 +419,7 @@ int devfreq_update_target(struct devfreq *devfreq, unsigned long freq)
err = devfreq->governor->get_target_freq(devfreq, &freq);
if (err)
return err;
get_freq_range(devfreq, &min_freq, &max_freq);
devfreq_get_freq_range(devfreq, &min_freq, &max_freq);
if (freq < min_freq) {
freq = min_freq;
@ -785,6 +786,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
{
struct devfreq *devfreq;
struct devfreq_governor *governor;
unsigned long min_freq, max_freq;
int err = 0;
if (!dev || !profile || !governor_name) {
@ -849,6 +851,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
goto err_dev;
}
devfreq_get_freq_range(devfreq, &min_freq, &max_freq);
devfreq->suspend_freq = dev_pm_opp_get_suspend_opp_freq(dev);
devfreq->opp_table = dev_pm_opp_get_opp_table(dev);
if (IS_ERR(devfreq->opp_table))
@ -1587,7 +1591,7 @@ static ssize_t min_freq_show(struct device *dev, struct device_attribute *attr,
unsigned long min_freq, max_freq;
mutex_lock(&df->lock);
get_freq_range(df, &min_freq, &max_freq);
devfreq_get_freq_range(df, &min_freq, &max_freq);
mutex_unlock(&df->lock);
return sprintf(buf, "%lu\n", min_freq);
@ -1641,7 +1645,7 @@ static ssize_t max_freq_show(struct device *dev, struct device_attribute *attr,
unsigned long min_freq, max_freq;
mutex_lock(&df->lock);
get_freq_range(df, &min_freq, &max_freq);
devfreq_get_freq_range(df, &min_freq, &max_freq);
mutex_unlock(&df->lock);
return sprintf(buf, "%lu\n", max_freq);
@ -1955,7 +1959,7 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
mutex_lock(&devfreq->lock);
cur_freq = devfreq->previous_freq;
get_freq_range(devfreq, &min_freq, &max_freq);
devfreq_get_freq_range(devfreq, &min_freq, &max_freq);
timer = devfreq->profile->timer;
if (IS_SUPPORTED_ATTR(devfreq->governor->attrs, POLLING_INTERVAL))

drivers/devfreq/governor.h

@ -47,6 +47,31 @@
#define DEVFREQ_GOV_ATTR_POLLING_INTERVAL BIT(0)
#define DEVFREQ_GOV_ATTR_TIMER BIT(1)
/**
* struct devfreq_cpu_data - Hold the per-cpu data
* @node: list node
* @dev: reference to cpu device.
* @first_cpu: the cpumask of the first cpu of a policy.
* @opp_table: reference to cpu opp table.
* @cur_freq: the current frequency of the cpu.
* @min_freq: the min frequency of the cpu.
* @max_freq: the max frequency of the cpu.
*
* This structure stores the required cpu_data of a cpu.
* This is auto-populated by the governor.
*/
struct devfreq_cpu_data {
struct list_head node;
struct device *dev;
unsigned int first_cpu;
struct opp_table *opp_table;
unsigned int cur_freq;
unsigned int min_freq;
unsigned int max_freq;
};
/**
* struct devfreq_governor - Devfreq policy governor
* @node: list node - contains registered devfreq governors
@ -89,6 +114,8 @@ int devm_devfreq_add_governor(struct device *dev,
int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);
int devfreq_update_target(struct devfreq *devfreq, unsigned long freq);
void devfreq_get_freq_range(struct devfreq *devfreq, unsigned long *min_freq,
unsigned long *max_freq);
static inline int devfreq_update_stats(struct devfreq *df)
{

drivers/devfreq/governor_passive.c

@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0-only
// SPDX-License-Identifier: GPL-2.0-only
/*
* linux/drivers/devfreq/governor_passive.c
*
@ -8,20 +8,159 @@
*/
#include <linux/module.h>
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/devfreq.h>
#include "governor.h"
static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
#define HZ_PER_KHZ 1000
static struct devfreq_cpu_data *
get_parent_cpu_data(struct devfreq_passive_data *p_data,
struct cpufreq_policy *policy)
{
struct devfreq_cpu_data *parent_cpu_data;
if (!p_data || !policy)
return NULL;
list_for_each_entry(parent_cpu_data, &p_data->cpu_data_list, node)
if (parent_cpu_data->first_cpu == cpumask_first(policy->related_cpus))
return parent_cpu_data;
return NULL;
}
static unsigned long get_target_freq_by_required_opp(struct device *p_dev,
struct opp_table *p_opp_table,
struct opp_table *opp_table,
unsigned long *freq)
{
struct dev_pm_opp *opp = NULL, *p_opp = NULL;
unsigned long target_freq;
if (!p_dev || !p_opp_table || !opp_table || !freq)
return 0;
p_opp = devfreq_recommended_opp(p_dev, freq, 0);
if (IS_ERR(p_opp))
return 0;
opp = dev_pm_opp_xlate_required_opp(p_opp_table, opp_table, p_opp);
dev_pm_opp_put(p_opp);
if (IS_ERR(opp))
return 0;
target_freq = dev_pm_opp_get_freq(opp);
dev_pm_opp_put(opp);
return target_freq;
}
static int get_target_freq_with_cpufreq(struct devfreq *devfreq,
unsigned long *target_freq)
{
struct devfreq_passive_data *p_data =
(struct devfreq_passive_data *)devfreq->data;
struct devfreq_cpu_data *parent_cpu_data;
struct cpufreq_policy *policy;
unsigned long cpu, cpu_cur, cpu_min, cpu_max, cpu_percent;
unsigned long dev_min, dev_max;
unsigned long freq = 0;
int ret = 0;
for_each_online_cpu(cpu) {
policy = cpufreq_cpu_get(cpu);
if (!policy) {
ret = -EINVAL;
continue;
}
parent_cpu_data = get_parent_cpu_data(p_data, policy);
if (!parent_cpu_data) {
cpufreq_cpu_put(policy);
continue;
}
/* Get target freq via required opps */
cpu_cur = parent_cpu_data->cur_freq * HZ_PER_KHZ;
freq = get_target_freq_by_required_opp(parent_cpu_data->dev,
parent_cpu_data->opp_table,
devfreq->opp_table, &cpu_cur);
if (freq) {
*target_freq = max(freq, *target_freq);
cpufreq_cpu_put(policy);
continue;
}
/* Use interpolation if required opps is not available */
devfreq_get_freq_range(devfreq, &dev_min, &dev_max);
cpu_min = parent_cpu_data->min_freq;
cpu_max = parent_cpu_data->max_freq;
cpu_cur = parent_cpu_data->cur_freq;
cpu_percent = ((cpu_cur - cpu_min) * 100) / (cpu_max - cpu_min);
freq = dev_min + mult_frac(dev_max - dev_min, cpu_percent, 100);
*target_freq = max(freq, *target_freq);
cpufreq_cpu_put(policy);
}
return ret;
}
static int get_target_freq_with_devfreq(struct devfreq *devfreq,
unsigned long *freq)
{
struct devfreq_passive_data *p_data
= (struct devfreq_passive_data *)devfreq->data;
struct devfreq *parent_devfreq = (struct devfreq *)p_data->parent;
unsigned long child_freq = ULONG_MAX;
struct dev_pm_opp *opp, *p_opp;
int i, count;
/* Get target freq via required opps */
child_freq = get_target_freq_by_required_opp(parent_devfreq->dev.parent,
parent_devfreq->opp_table,
devfreq->opp_table, freq);
if (child_freq)
goto out;
/* Use interpolation if required opps is not available */
for (i = 0; i < parent_devfreq->profile->max_state; i++)
if (parent_devfreq->profile->freq_table[i] == *freq)
break;
if (i == parent_devfreq->profile->max_state)
return -EINVAL;
if (i < devfreq->profile->max_state) {
child_freq = devfreq->profile->freq_table[i];
} else {
count = devfreq->profile->max_state;
child_freq = devfreq->profile->freq_table[count - 1];
}
out:
*freq = child_freq;
return 0;
}
static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
unsigned long *freq)
{
struct devfreq_passive_data *p_data =
(struct devfreq_passive_data *)devfreq->data;
int ret;
if (!p_data)
return -EINVAL;
/*
* If the devfreq device with passive governor has the specific method
* to determine the next frequency, should use the get_target_freq()
@ -30,75 +169,178 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
if (p_data->get_target_freq)
return p_data->get_target_freq(devfreq, freq);
/*
* If the parent and passive devfreq device uses the OPP table,
* get the next frequency by using the OPP table.
*/
switch (p_data->parent_type) {
case DEVFREQ_PARENT_DEV:
ret = get_target_freq_with_devfreq(devfreq, freq);
break;
case CPUFREQ_PARENT_DEV:
ret = get_target_freq_with_cpufreq(devfreq, freq);
break;
default:
ret = -EINVAL;
dev_err(&devfreq->dev, "Invalid parent type\n");
break;
}
/*
* - parent devfreq device uses the governors except for passive.
* - passive devfreq device uses the passive governor.
*
* Each devfreq has the OPP table. After deciding the new frequency
* from the governor of parent devfreq device, the passive governor
* need to get the index of new frequency on OPP table of parent
* device. And then the index is used for getting the suitable
* new frequency for passive devfreq device.
*/
if (!devfreq->profile || !devfreq->profile->freq_table
|| devfreq->profile->max_state <= 0)
return -EINVAL;
return ret;
}
/*
* The passive governor have to get the correct frequency from OPP
* list of parent device. Because in this case, *freq is temporary
* value which is decided by ondemand governor.
*/
if (devfreq->opp_table && parent_devfreq->opp_table) {
p_opp = devfreq_recommended_opp(parent_devfreq->dev.parent,
freq, 0);
if (IS_ERR(p_opp))
return PTR_ERR(p_opp);
opp = dev_pm_opp_xlate_required_opp(parent_devfreq->opp_table,
devfreq->opp_table, p_opp);
dev_pm_opp_put(p_opp);
if (IS_ERR(opp))
goto no_required_opp;
*freq = dev_pm_opp_get_freq(opp);
dev_pm_opp_put(opp);
static int cpufreq_passive_notifier_call(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct devfreq_passive_data *p_data =
container_of(nb, struct devfreq_passive_data, nb);
struct devfreq *devfreq = (struct devfreq *)p_data->this;
struct devfreq_cpu_data *parent_cpu_data;
struct cpufreq_freqs *freqs = ptr;
unsigned int cur_freq;
int ret;
if (event != CPUFREQ_POSTCHANGE || !freqs)
return 0;
parent_cpu_data = get_parent_cpu_data(p_data, freqs->policy);
if (!parent_cpu_data || parent_cpu_data->cur_freq == freqs->new)
return 0;
cur_freq = parent_cpu_data->cur_freq;
parent_cpu_data->cur_freq = freqs->new;
mutex_lock(&devfreq->lock);
ret = devfreq_update_target(devfreq, freqs->new);
mutex_unlock(&devfreq->lock);
if (ret) {
parent_cpu_data->cur_freq = cur_freq;
dev_err(&devfreq->dev, "failed to update the frequency.\n");
return ret;
}
no_required_opp:
/*
* Get the OPP table's index of decided frequency by governor
* of parent device.
*/
for (i = 0; i < parent_devfreq->profile->max_state; i++)
if (parent_devfreq->profile->freq_table[i] == *freq)
break;
if (i == parent_devfreq->profile->max_state)
return -EINVAL;
/* Get the suitable frequency by using index of parent device. */
if (i < devfreq->profile->max_state) {
child_freq = devfreq->profile->freq_table[i];
} else {
count = devfreq->profile->max_state;
child_freq = devfreq->profile->freq_table[count - 1];
}
/* Return the suitable frequency for passive device. */
*freq = child_freq;
return 0;
}
static int cpufreq_passive_unregister_notifier(struct devfreq *devfreq)
{
struct devfreq_passive_data *p_data
= (struct devfreq_passive_data *)devfreq->data;
struct devfreq_cpu_data *parent_cpu_data;
int cpu, ret = 0;
if (p_data->nb.notifier_call) {
ret = cpufreq_unregister_notifier(&p_data->nb,
CPUFREQ_TRANSITION_NOTIFIER);
if (ret < 0)
return ret;
}
for_each_possible_cpu(cpu) {
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
if (!policy) {
ret = -EINVAL;
continue;
}
parent_cpu_data = get_parent_cpu_data(p_data, policy);
if (!parent_cpu_data) {
cpufreq_cpu_put(policy);
continue;
}
list_del(&parent_cpu_data->node);
if (parent_cpu_data->opp_table)
dev_pm_opp_put_opp_table(parent_cpu_data->opp_table);
kfree(parent_cpu_data);
cpufreq_cpu_put(policy);
}
return ret;
}
static int cpufreq_passive_register_notifier(struct devfreq *devfreq)
{
struct devfreq_passive_data *p_data
= (struct devfreq_passive_data *)devfreq->data;
struct device *dev = devfreq->dev.parent;
struct opp_table *opp_table = NULL;
struct devfreq_cpu_data *parent_cpu_data;
struct cpufreq_policy *policy;
struct device *cpu_dev;
unsigned int cpu;
int ret;
p_data->cpu_data_list
= (struct list_head)LIST_HEAD_INIT(p_data->cpu_data_list);
p_data->nb.notifier_call = cpufreq_passive_notifier_call;
ret = cpufreq_register_notifier(&p_data->nb, CPUFREQ_TRANSITION_NOTIFIER);
if (ret) {
dev_err(dev, "failed to register cpufreq notifier\n");
p_data->nb.notifier_call = NULL;
goto err;
}
for_each_possible_cpu(cpu) {
policy = cpufreq_cpu_get(cpu);
if (!policy) {
ret = -EPROBE_DEFER;
goto err;
}
parent_cpu_data = get_parent_cpu_data(p_data, policy);
if (parent_cpu_data) {
cpufreq_cpu_put(policy);
continue;
}
parent_cpu_data = kzalloc(sizeof(*parent_cpu_data),
GFP_KERNEL);
if (!parent_cpu_data) {
ret = -ENOMEM;
goto err_put_policy;
}
cpu_dev = get_cpu_device(cpu);
if (!cpu_dev) {
dev_err(dev, "failed to get cpu device\n");
ret = -ENODEV;
goto err_free_cpu_data;
}
opp_table = dev_pm_opp_get_opp_table(cpu_dev);
if (IS_ERR(opp_table)) {
dev_err(dev, "failed to get opp_table of cpu%d\n", cpu);
ret = PTR_ERR(opp_table);
goto err_free_cpu_data;
}
parent_cpu_data->dev = cpu_dev;
parent_cpu_data->opp_table = opp_table;
parent_cpu_data->first_cpu = cpumask_first(policy->related_cpus);
parent_cpu_data->cur_freq = policy->cur;
parent_cpu_data->min_freq = policy->cpuinfo.min_freq;
parent_cpu_data->max_freq = policy->cpuinfo.max_freq;
list_add_tail(&parent_cpu_data->node, &p_data->cpu_data_list);
cpufreq_cpu_put(policy);
}
mutex_lock(&devfreq->lock);
ret = devfreq_update_target(devfreq, 0L);
mutex_unlock(&devfreq->lock);
if (ret)
dev_err(dev, "failed to update the frequency\n");
return ret;
err_free_cpu_data:
kfree(parent_cpu_data);
err_put_policy:
cpufreq_cpu_put(policy);
err:
WARN_ON(cpufreq_passive_unregister_notifier(devfreq));
return ret;
}
static int devfreq_passive_notifier_call(struct notifier_block *nb,
unsigned long event, void *ptr)
{
@ -131,30 +373,55 @@ static int devfreq_passive_notifier_call(struct notifier_block *nb,
return NOTIFY_DONE;
}
static int devfreq_passive_event_handler(struct devfreq *devfreq,
unsigned int event, void *data)
static int devfreq_passive_unregister_notifier(struct devfreq *devfreq)
{
struct devfreq_passive_data *p_data
= (struct devfreq_passive_data *)devfreq->data;
struct devfreq *parent = (struct devfreq *)p_data->parent;
struct notifier_block *nb = &p_data->nb;
return devfreq_unregister_notifier(parent, nb, DEVFREQ_TRANSITION_NOTIFIER);
}
static int devfreq_passive_register_notifier(struct devfreq *devfreq)
{
struct devfreq_passive_data *p_data
= (struct devfreq_passive_data *)devfreq->data;
struct devfreq *parent = (struct devfreq *)p_data->parent;
struct notifier_block *nb = &p_data->nb;
int ret = 0;
if (!parent)
return -EPROBE_DEFER;
nb->notifier_call = devfreq_passive_notifier_call;
return devfreq_register_notifier(parent, nb, DEVFREQ_TRANSITION_NOTIFIER);
}
static int devfreq_passive_event_handler(struct devfreq *devfreq,
unsigned int event, void *data)
{
struct devfreq_passive_data *p_data
= (struct devfreq_passive_data *)devfreq->data;
int ret = -EINVAL;
if (!p_data)
return -EINVAL;
if (!p_data->this)
p_data->this = devfreq;
switch (event) {
case DEVFREQ_GOV_START:
if (!p_data->this)
p_data->this = devfreq;
nb->notifier_call = devfreq_passive_notifier_call;
ret = devfreq_register_notifier(parent, nb,
DEVFREQ_TRANSITION_NOTIFIER);
if (p_data->parent_type == DEVFREQ_PARENT_DEV)
ret = devfreq_passive_register_notifier(devfreq);
else if (p_data->parent_type == CPUFREQ_PARENT_DEV)
ret = cpufreq_passive_register_notifier(devfreq);
break;
case DEVFREQ_GOV_STOP:
WARN_ON(devfreq_unregister_notifier(parent, nb,
DEVFREQ_TRANSITION_NOTIFIER));
if (p_data->parent_type == DEVFREQ_PARENT_DEV)
WARN_ON(devfreq_passive_unregister_notifier(devfreq));
else if (p_data->parent_type == CPUFREQ_PARENT_DEV)
WARN_ON(cpufreq_passive_unregister_notifier(devfreq));
break;
default:
break;

drivers/devfreq/rk3399_dmc.c

@ -5,6 +5,7 @@
*/
#include <linux/arm-smccc.h>
#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/devfreq.h>
@ -20,55 +21,49 @@
#include <linux/rwsem.h>
#include <linux/suspend.h>
#include <soc/rockchip/pm_domains.h>
#include <soc/rockchip/rk3399_grf.h>
#include <soc/rockchip/rockchip_sip.h>
struct dram_timing {
unsigned int ddr3_speed_bin;
unsigned int pd_idle;
unsigned int sr_idle;
unsigned int sr_mc_gate_idle;
unsigned int srpd_lite_idle;
unsigned int standby_idle;
unsigned int auto_pd_dis_freq;
unsigned int dram_dll_dis_freq;
unsigned int phy_dll_dis_freq;
unsigned int ddr3_odt_dis_freq;
unsigned int ddr3_drv;
unsigned int ddr3_odt;
unsigned int phy_ddr3_ca_drv;
unsigned int phy_ddr3_dq_drv;
unsigned int phy_ddr3_odt;
unsigned int lpddr3_odt_dis_freq;
unsigned int lpddr3_drv;
unsigned int lpddr3_odt;
unsigned int phy_lpddr3_ca_drv;
unsigned int phy_lpddr3_dq_drv;
unsigned int phy_lpddr3_odt;
unsigned int lpddr4_odt_dis_freq;
unsigned int lpddr4_drv;
unsigned int lpddr4_dq_odt;
unsigned int lpddr4_ca_odt;
unsigned int phy_lpddr4_ca_drv;
unsigned int phy_lpddr4_ck_cs_drv;
unsigned int phy_lpddr4_dq_drv;
unsigned int phy_lpddr4_odt;
};
#define NS_TO_CYCLE(NS, MHz) (((NS) * (MHz)) / NSEC_PER_USEC)
#define RK3399_SET_ODT_PD_0_SR_IDLE GENMASK(7, 0)
#define RK3399_SET_ODT_PD_0_SR_MC_GATE_IDLE GENMASK(15, 8)
#define RK3399_SET_ODT_PD_0_STANDBY_IDLE GENMASK(31, 16)
#define RK3399_SET_ODT_PD_1_PD_IDLE GENMASK(11, 0)
#define RK3399_SET_ODT_PD_1_SRPD_LITE_IDLE GENMASK(27, 16)
#define RK3399_SET_ODT_PD_2_ODT_ENABLE BIT(0)
struct rk3399_dmcfreq {
struct device *dev;
struct devfreq *devfreq;
struct devfreq_dev_profile profile;
struct devfreq_simple_ondemand_data ondemand_data;
struct clk *dmc_clk;
struct devfreq_event_dev *edev;
struct mutex lock;
struct dram_timing timing;
struct regulator *vdd_center;
struct regmap *regmap_pmu;
unsigned long rate, target_rate;
unsigned long volt, target_volt;
unsigned int odt_dis_freq;
int odt_pd_arg0, odt_pd_arg1;
unsigned int pd_idle_ns;
unsigned int sr_idle_ns;
unsigned int sr_mc_gate_idle_ns;
unsigned int srpd_lite_idle_ns;
unsigned int standby_idle_ns;
unsigned int ddr3_odt_dis_freq;
unsigned int lpddr3_odt_dis_freq;
unsigned int lpddr4_odt_dis_freq;
unsigned int pd_idle_dis_freq;
unsigned int sr_idle_dis_freq;
unsigned int sr_mc_gate_idle_dis_freq;
unsigned int srpd_lite_idle_dis_freq;
unsigned int standby_idle_dis_freq;
};
static int rk3399_dmcfreq_target(struct device *dev, unsigned long *freq,
@ -78,10 +73,14 @@ static int rk3399_dmcfreq_target(struct device *dev, unsigned long *freq,
struct dev_pm_opp *opp;
unsigned long old_clk_rate = dmcfreq->rate;
unsigned long target_volt, target_rate;
unsigned int ddrcon_mhz;
struct arm_smccc_res res;
bool odt_enable = false;
int err;
u32 odt_pd_arg0 = 0;
u32 odt_pd_arg1 = 0;
u32 odt_pd_arg2 = 0;
opp = devfreq_recommended_opp(dev, freq, flags);
if (IS_ERR(opp))
return PTR_ERR(opp);
@ -95,19 +94,71 @@ static int rk3399_dmcfreq_target(struct device *dev, unsigned long *freq,
mutex_lock(&dmcfreq->lock);
/*
* Ensure power-domain transitions don't interfere with ARM Trusted
* Firmware power-domain idling.
*/
err = rockchip_pmu_block();
if (err) {
dev_err(dev, "Failed to block PMU: %d\n", err);
goto out_unlock;
}
/*
* Some idle parameters may be based on the DDR controller clock, which
* is half of the DDR frequency.
* pd_idle and standby_idle are based on the controller clock cycle.
* sr_idle_cycle, sr_mc_gate_idle_cycle, and srpd_lite_idle_cycle
* are based on the 1024 controller clock cycle
*/
ddrcon_mhz = target_rate / USEC_PER_SEC / 2;
u32p_replace_bits(&odt_pd_arg1,
NS_TO_CYCLE(dmcfreq->pd_idle_ns, ddrcon_mhz),
RK3399_SET_ODT_PD_1_PD_IDLE);
u32p_replace_bits(&odt_pd_arg0,
NS_TO_CYCLE(dmcfreq->standby_idle_ns, ddrcon_mhz),
RK3399_SET_ODT_PD_0_STANDBY_IDLE);
u32p_replace_bits(&odt_pd_arg0,
DIV_ROUND_UP(NS_TO_CYCLE(dmcfreq->sr_idle_ns,
ddrcon_mhz), 1024),
RK3399_SET_ODT_PD_0_SR_IDLE);
u32p_replace_bits(&odt_pd_arg0,
DIV_ROUND_UP(NS_TO_CYCLE(dmcfreq->sr_mc_gate_idle_ns,
ddrcon_mhz), 1024),
RK3399_SET_ODT_PD_0_SR_MC_GATE_IDLE);
u32p_replace_bits(&odt_pd_arg1,
DIV_ROUND_UP(NS_TO_CYCLE(dmcfreq->srpd_lite_idle_ns,
ddrcon_mhz), 1024),
RK3399_SET_ODT_PD_1_SRPD_LITE_IDLE);
if (dmcfreq->regmap_pmu) {
if (target_rate >= dmcfreq->sr_idle_dis_freq)
odt_pd_arg0 &= ~RK3399_SET_ODT_PD_0_SR_IDLE;
if (target_rate >= dmcfreq->sr_mc_gate_idle_dis_freq)
odt_pd_arg0 &= ~RK3399_SET_ODT_PD_0_SR_MC_GATE_IDLE;
if (target_rate >= dmcfreq->standby_idle_dis_freq)
odt_pd_arg0 &= ~RK3399_SET_ODT_PD_0_STANDBY_IDLE;
if (target_rate >= dmcfreq->pd_idle_dis_freq)
odt_pd_arg1 &= ~RK3399_SET_ODT_PD_1_PD_IDLE;
if (target_rate >= dmcfreq->srpd_lite_idle_dis_freq)
odt_pd_arg1 &= ~RK3399_SET_ODT_PD_1_SRPD_LITE_IDLE;
if (target_rate >= dmcfreq->odt_dis_freq)
odt_enable = true;
odt_pd_arg2 |= RK3399_SET_ODT_PD_2_ODT_ENABLE;
/*
* This makes a SMC call to the TF-A to set the DDR PD
* (power-down) timings and to enable or disable the
* ODT (on-die termination) resistors.
*/
arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, dmcfreq->odt_pd_arg0,
dmcfreq->odt_pd_arg1,
ROCKCHIP_SIP_CONFIG_DRAM_SET_ODT_PD,
odt_enable, 0, 0, 0, &res);
arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, odt_pd_arg0, odt_pd_arg1,
ROCKCHIP_SIP_CONFIG_DRAM_SET_ODT_PD, odt_pd_arg2,
0, 0, 0, &res);
}
/*
@ -158,6 +209,8 @@ static int rk3399_dmcfreq_target(struct device *dev, unsigned long *freq,
dmcfreq->volt = target_volt;
out:
rockchip_pmu_unblock();
out_unlock:
mutex_unlock(&dmcfreq->lock);
return err;
}
@ -189,13 +242,6 @@ static int rk3399_dmcfreq_get_cur_freq(struct device *dev, unsigned long *freq)
return 0;
}
static struct devfreq_dev_profile rk3399_devfreq_dmc_profile = {
.polling_ms = 200,
.target = rk3399_dmcfreq_target,
.get_dev_status = rk3399_dmcfreq_get_dev_status,
.get_cur_freq = rk3399_dmcfreq_get_cur_freq,
};
static __maybe_unused int rk3399_dmcfreq_suspend(struct device *dev)
{
struct rk3399_dmcfreq *dmcfreq = dev_get_drvdata(dev);
@ -238,69 +284,48 @@ static __maybe_unused int rk3399_dmcfreq_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(rk3399_dmcfreq_pm, rk3399_dmcfreq_suspend,
rk3399_dmcfreq_resume);
static int of_get_ddr_timings(struct dram_timing *timing,
struct device_node *np)
static int rk3399_dmcfreq_of_props(struct rk3399_dmcfreq *data,
struct device_node *np)
{
int ret = 0;
ret = of_property_read_u32(np, "rockchip,ddr3_speed_bin",
&timing->ddr3_speed_bin);
ret |= of_property_read_u32(np, "rockchip,pd_idle",
&timing->pd_idle);
ret |= of_property_read_u32(np, "rockchip,sr_idle",
&timing->sr_idle);
ret |= of_property_read_u32(np, "rockchip,sr_mc_gate_idle",
&timing->sr_mc_gate_idle);
ret |= of_property_read_u32(np, "rockchip,srpd_lite_idle",
&timing->srpd_lite_idle);
ret |= of_property_read_u32(np, "rockchip,standby_idle",
&timing->standby_idle);
ret |= of_property_read_u32(np, "rockchip,auto_pd_dis_freq",
&timing->auto_pd_dis_freq);
ret |= of_property_read_u32(np, "rockchip,dram_dll_dis_freq",
&timing->dram_dll_dis_freq);
ret |= of_property_read_u32(np, "rockchip,phy_dll_dis_freq",
&timing->phy_dll_dis_freq);
/*
* These are all optional, and serve as minimum bounds. Give them large
* (i.e., never "disabled") values if the DT doesn't specify one.
*/
data->pd_idle_dis_freq =
data->sr_idle_dis_freq =
data->sr_mc_gate_idle_dis_freq =
data->srpd_lite_idle_dis_freq =
data->standby_idle_dis_freq = UINT_MAX;
ret |= of_property_read_u32(np, "rockchip,pd-idle-ns",
&data->pd_idle_ns);
ret |= of_property_read_u32(np, "rockchip,sr-idle-ns",
&data->sr_idle_ns);
ret |= of_property_read_u32(np, "rockchip,sr-mc-gate-idle-ns",
&data->sr_mc_gate_idle_ns);
ret |= of_property_read_u32(np, "rockchip,srpd-lite-idle-ns",
&data->srpd_lite_idle_ns);
ret |= of_property_read_u32(np, "rockchip,standby-idle-ns",
&data->standby_idle_ns);
ret |= of_property_read_u32(np, "rockchip,ddr3_odt_dis_freq",
&timing->ddr3_odt_dis_freq);
ret |= of_property_read_u32(np, "rockchip,ddr3_drv",
&timing->ddr3_drv);
ret |= of_property_read_u32(np, "rockchip,ddr3_odt",
&timing->ddr3_odt);
ret |= of_property_read_u32(np, "rockchip,phy_ddr3_ca_drv",
&timing->phy_ddr3_ca_drv);
ret |= of_property_read_u32(np, "rockchip,phy_ddr3_dq_drv",
&timing->phy_ddr3_dq_drv);
ret |= of_property_read_u32(np, "rockchip,phy_ddr3_odt",
&timing->phy_ddr3_odt);
&data->ddr3_odt_dis_freq);
ret |= of_property_read_u32(np, "rockchip,lpddr3_odt_dis_freq",
&timing->lpddr3_odt_dis_freq);
ret |= of_property_read_u32(np, "rockchip,lpddr3_drv",
&timing->lpddr3_drv);
ret |= of_property_read_u32(np, "rockchip,lpddr3_odt",
&timing->lpddr3_odt);
ret |= of_property_read_u32(np, "rockchip,phy_lpddr3_ca_drv",
&timing->phy_lpddr3_ca_drv);
ret |= of_property_read_u32(np, "rockchip,phy_lpddr3_dq_drv",
&timing->phy_lpddr3_dq_drv);
ret |= of_property_read_u32(np, "rockchip,phy_lpddr3_odt",
&timing->phy_lpddr3_odt);
&data->lpddr3_odt_dis_freq);
ret |= of_property_read_u32(np, "rockchip,lpddr4_odt_dis_freq",
&timing->lpddr4_odt_dis_freq);
ret |= of_property_read_u32(np, "rockchip,lpddr4_drv",
&timing->lpddr4_drv);
ret |= of_property_read_u32(np, "rockchip,lpddr4_dq_odt",
&timing->lpddr4_dq_odt);
ret |= of_property_read_u32(np, "rockchip,lpddr4_ca_odt",
&timing->lpddr4_ca_odt);
ret |= of_property_read_u32(np, "rockchip,phy_lpddr4_ca_drv",
&timing->phy_lpddr4_ca_drv);
ret |= of_property_read_u32(np, "rockchip,phy_lpddr4_ck_cs_drv",
&timing->phy_lpddr4_ck_cs_drv);
ret |= of_property_read_u32(np, "rockchip,phy_lpddr4_dq_drv",
&timing->phy_lpddr4_dq_drv);
ret |= of_property_read_u32(np, "rockchip,phy_lpddr4_odt",
&timing->phy_lpddr4_odt);
&data->lpddr4_odt_dis_freq);
ret |= of_property_read_u32(np, "rockchip,pd-idle-dis-freq-hz",
&data->pd_idle_dis_freq);
ret |= of_property_read_u32(np, "rockchip,sr-idle-dis-freq-hz",
&data->sr_idle_dis_freq);
ret |= of_property_read_u32(np, "rockchip,sr-mc-gate-idle-dis-freq-hz",
&data->sr_mc_gate_idle_dis_freq);
ret |= of_property_read_u32(np, "rockchip,srpd-lite-idle-dis-freq-hz",
&data->srpd_lite_idle_dis_freq);
ret |= of_property_read_u32(np, "rockchip,standby-idle-dis-freq-hz",
&data->standby_idle_dis_freq);
return ret;
}
@ -311,8 +336,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct device_node *np = pdev->dev.of_node, *node;
struct rk3399_dmcfreq *data;
int ret, index, size;
uint32_t *timing;
int ret;
struct dev_pm_opp *opp;
u32 ddr_type;
u32 val;
@ -343,26 +367,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
return ret;
}
/*
* Get dram timing and pass it to arm trust firmware,
* the dram driver in arm trust firmware will get these
* timing and to do dram initial.
*/
if (!of_get_ddr_timings(&data->timing, np)) {
timing = &data->timing.ddr3_speed_bin;
size = sizeof(struct dram_timing) / 4;
for (index = 0; index < size; index++) {
arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, *timing++, index,
ROCKCHIP_SIP_CONFIG_DRAM_SET_PARAM,
0, 0, 0, 0, &res);
if (res.a0) {
dev_err(dev, "Failed to set dram param: %ld\n",
res.a0);
ret = -EINVAL;
goto err_edev;
}
}
}
rk3399_dmcfreq_of_props(data, np);
node = of_parse_phandle(np, "rockchip,pmu", 0);
if (!node)
@ -381,13 +386,13 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
switch (ddr_type) {
case RK3399_PMUGRF_DDRTYPE_DDR3:
data->odt_dis_freq = data->timing.ddr3_odt_dis_freq;
data->odt_dis_freq = data->ddr3_odt_dis_freq;
break;
case RK3399_PMUGRF_DDRTYPE_LPDDR3:
data->odt_dis_freq = data->timing.lpddr3_odt_dis_freq;
data->odt_dis_freq = data->lpddr3_odt_dis_freq;
break;
case RK3399_PMUGRF_DDRTYPE_LPDDR4:
data->odt_dis_freq = data->timing.lpddr4_odt_dis_freq;
data->odt_dis_freq = data->lpddr4_odt_dis_freq;
break;
default:
ret = -EINVAL;
@ -399,63 +404,46 @@ no_pmu:
ROCKCHIP_SIP_CONFIG_DRAM_INIT,
0, 0, 0, 0, &res);
/*
* In TF-A there is a platform SIP call to set the PD (power-down)
* timings and to enable or disable the ODT (on-die termination).
* This call needs three arguments as follows:
*
* arg0:
* bit[0-7] : sr_idle
* bit[8-15] : sr_mc_gate_idle
* bit[16-31] : standby idle
* arg1:
* bit[0-11] : pd_idle
* bit[16-27] : srpd_lite_idle
* arg2:
* bit[0] : odt enable
*/
data->odt_pd_arg0 = (data->timing.sr_idle & 0xff) |
((data->timing.sr_mc_gate_idle & 0xff) << 8) |
((data->timing.standby_idle & 0xffff) << 16);
data->odt_pd_arg1 = (data->timing.pd_idle & 0xfff) |
((data->timing.srpd_lite_idle & 0xfff) << 16);
/*
* We add a devfreq driver to our parent since it has a device tree node
* with operating points.
*/
if (dev_pm_opp_of_add_table(dev)) {
if (devm_pm_opp_of_add_table(dev)) {
dev_err(dev, "Invalid operating-points in device tree.\n");
ret = -EINVAL;
goto err_edev;
}
of_property_read_u32(np, "upthreshold",
&data->ondemand_data.upthreshold);
of_property_read_u32(np, "downdifferential",
&data->ondemand_data.downdifferential);
data->ondemand_data.upthreshold = 25;
data->ondemand_data.downdifferential = 15;
data->rate = clk_get_rate(data->dmc_clk);
opp = devfreq_recommended_opp(dev, &data->rate, 0);
if (IS_ERR(opp)) {
ret = PTR_ERR(opp);
goto err_free_opp;
goto err_edev;
}
data->rate = dev_pm_opp_get_freq(opp);
data->volt = dev_pm_opp_get_voltage(opp);
dev_pm_opp_put(opp);
rk3399_devfreq_dmc_profile.initial_freq = data->rate;
data->profile = (struct devfreq_dev_profile) {
.polling_ms = 200,
.target = rk3399_dmcfreq_target,
.get_dev_status = rk3399_dmcfreq_get_dev_status,
.get_cur_freq = rk3399_dmcfreq_get_cur_freq,
.initial_freq = data->rate,
};
data->devfreq = devm_devfreq_add_device(dev,
&rk3399_devfreq_dmc_profile,
&data->profile,
DEVFREQ_GOV_SIMPLE_ONDEMAND,
&data->ondemand_data);
if (IS_ERR(data->devfreq)) {
ret = PTR_ERR(data->devfreq);
goto err_free_opp;
goto err_edev;
}
devm_devfreq_register_opp_notifier(dev, data->devfreq);
@ -465,8 +453,6 @@ no_pmu:
return 0;
err_free_opp:
dev_pm_opp_of_remove_table(&pdev->dev);
err_edev:
devfreq_event_disable_edev(data->edev);
@ -477,11 +463,7 @@ static int rk3399_dmcfreq_remove(struct platform_device *pdev)
{
struct rk3399_dmcfreq *dmcfreq = dev_get_drvdata(&pdev->dev);
/*
* Before remove the opp table we need to unregister the opp notifier.
*/
devm_devfreq_unregister_opp_notifier(dmcfreq->dev, dmcfreq->devfreq);
dev_pm_opp_of_remove_table(dmcfreq->dev);
devfreq_event_disable_edev(dmcfreq->edev);
return 0;
}
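
To make the nanosecond-to-cycle conversion above concrete, here is a worked example
(plain userspace C, assuming a target DDR rate of 800 MHz and the
rockchip,pd-idle-ns / rockchip,sr-idle-ns values from the binding example; the
kernel code uses the same NS_TO_CYCLE() arithmetic):

/*
 * Worked example of NS_TO_CYCLE(): at a DDR rate of 800 MHz the DFI/controller
 * clock runs at 400 MHz. Illustration only.
 */
#include <stdio.h>

#define NSEC_PER_USEC		1000UL
#define NS_TO_CYCLE(NS, MHz)	(((NS) * (MHz)) / NSEC_PER_USEC)

int main(void)
{
	unsigned long ddrcon_mhz = 800 / 2;	/* controller clock is half the DDR rate */
	unsigned long pd_cycles = NS_TO_CYCLE(160, ddrcon_mhz);
	unsigned long sr_cycles = NS_TO_CYCLE(10240, ddrcon_mhz);

	/* PD_IDLE is programmed in raw controller cycles: 160 ns -> 64 cycles. */
	printf("pd_idle = %lu cycles\n", pd_cycles);

	/* SR_IDLE is programmed in units of 1024 cycles: 10240 ns -> 4096 cycles -> 4. */
	printf("sr_idle = %lu (x1024 cycles)\n", (sr_cycles + 1023) / 1024);

	return 0;
}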

drivers/soc/rockchip/pm_domains.c

@ -8,6 +8,7 @@
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/err.h>
#include <linux/mutex.h>
#include <linux/pm_clock.h>
#include <linux/pm_domain.h>
#include <linux/of_address.h>
@ -16,6 +17,7 @@
#include <linux/clk.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include <soc/rockchip/pm_domains.h>
#include <dt-bindings/power/px30-power.h>
#include <dt-bindings/power/rk3036-power.h>
#include <dt-bindings/power/rk3066-power.h>
@ -139,6 +141,109 @@ struct rockchip_pmu {
#define DOMAIN_RK3568(name, pwr, req, wakeup) \
DOMAIN_M(name, pwr, pwr, req, req, req, wakeup)
/*
* Dynamic Memory Controller may need to coordinate with us -- see
* rockchip_pmu_block().
*
* dmc_pmu_mutex protects registration-time races, so DMC driver doesn't try to
* block() while we're initializing the PMU.
*/
static DEFINE_MUTEX(dmc_pmu_mutex);
static struct rockchip_pmu *dmc_pmu;
/*
* Block PMU transitions and make sure they don't interfere with ARM Trusted
* Firmware operations. There are two conflicts, noted in the comments below.
*
* Caller must unblock PMU transitions via rockchip_pmu_unblock().
*/
int rockchip_pmu_block(void)
{
struct rockchip_pmu *pmu;
struct generic_pm_domain *genpd;
struct rockchip_pm_domain *pd;
int i, ret;
mutex_lock(&dmc_pmu_mutex);
/* No PMU (yet)? Then we just block rockchip_pmu_probe(). */
if (!dmc_pmu)
return 0;
pmu = dmc_pmu;
/*
* mutex blocks all idle transitions: we can't touch the
* PMU_BUS_IDLE_REQ (our ".idle_offset") register while ARM Trusted
* Firmware might be using it.
*/
mutex_lock(&pmu->mutex);
/*
* Power domain clocks: Per Rockchip, we *must* keep certain clocks
* enabled for the duration of power-domain transitions. Most
* transitions are handled by this driver, but some cases (in
* particular, DRAM DVFS / memory-controller idle) must be handled by
* firmware. Firmware can handle most clock management via a special
* "ungate" register (PMU_CRU_GATEDIS_CON0), but unfortunately, this
* doesn't handle PLLs. We can assist this transition by doing the
* clock management on behalf of firmware.
*/
for (i = 0; i < pmu->genpd_data.num_domains; i++) {
genpd = pmu->genpd_data.domains[i];
if (genpd) {
pd = to_rockchip_pd(genpd);
ret = clk_bulk_enable(pd->num_clks, pd->clks);
if (ret < 0) {
dev_err(pmu->dev,
"failed to enable clks for domain '%s': %d\n",
genpd->name, ret);
goto err;
}
}
}
return 0;
err:
for (i = i - 1; i >= 0; i--) {
genpd = pmu->genpd_data.domains[i];
if (genpd) {
pd = to_rockchip_pd(genpd);
clk_bulk_disable(pd->num_clks, pd->clks);
}
}
mutex_unlock(&pmu->mutex);
mutex_unlock(&dmc_pmu_mutex);
return ret;
}
EXPORT_SYMBOL_GPL(rockchip_pmu_block);
/* Unblock PMU transitions. */
void rockchip_pmu_unblock(void)
{
struct rockchip_pmu *pmu;
struct generic_pm_domain *genpd;
struct rockchip_pm_domain *pd;
int i;
if (dmc_pmu) {
pmu = dmc_pmu;
for (i = 0; i < pmu->genpd_data.num_domains; i++) {
genpd = pmu->genpd_data.domains[i];
if (genpd) {
pd = to_rockchip_pd(genpd);
clk_bulk_disable(pd->num_clks, pd->clks);
}
}
mutex_unlock(&pmu->mutex);
}
mutex_unlock(&dmc_pmu_mutex);
}
EXPORT_SYMBOL_GPL(rockchip_pmu_unblock);
static bool rockchip_pmu_domain_is_idle(struct rockchip_pm_domain *pd)
{
struct rockchip_pmu *pmu = pd->pmu;
@ -690,6 +795,12 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
error = -ENODEV;
/*
* Prevent any rockchip_pmu_block() from racing with the remainder of
* setup (clocks, register initialization).
*/
mutex_lock(&dmc_pmu_mutex);
for_each_available_child_of_node(np, node) {
error = rockchip_pm_add_one_domain(pmu, node);
if (error) {
@ -719,10 +830,17 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
goto err_out;
}
/* We only expect one PMU. */
if (!WARN_ON_ONCE(dmc_pmu))
dmc_pmu = pmu;
mutex_unlock(&dmc_pmu_mutex);
return 0;
err_out:
rockchip_pm_domain_cleanup(pmu);
mutex_unlock(&dmc_pmu_mutex);
return error;
}

include/linux/devfreq.h

@ -38,6 +38,7 @@ enum devfreq_timer {
struct devfreq;
struct devfreq_governor;
struct devfreq_cpu_data;
struct thermal_cooling_device;
/**
@ -288,6 +289,11 @@ struct devfreq_simple_ondemand_data {
#endif
#if IS_ENABLED(CONFIG_DEVFREQ_GOV_PASSIVE)
enum devfreq_parent_dev_type {
DEVFREQ_PARENT_DEV,
CPUFREQ_PARENT_DEV,
};
/**
* struct devfreq_passive_data - ``void *data`` fed to struct devfreq
* and devfreq_add_device
@ -299,8 +305,11 @@ struct devfreq_simple_ondemand_data {
* using governors except for passive governor.
* If the devfreq device has the specific method to decide
* the next frequency, should use this callback.
* @this: the devfreq instance of own device.
* @nb: the notifier block for DEVFREQ_TRANSITION_NOTIFIER list
* @parent_type: the parent type of the device.
* @this: the devfreq instance of own device.
* @nb: the notifier block for DEVFREQ_TRANSITION_NOTIFIER or
* CPUFREQ_TRANSITION_NOTIFIER list.
* @cpu_data_list: the list of cpu frequency data for all cpufreq_policy.
*
* The devfreq_passive_data have to set the devfreq instance of parent
* device with governors except for the passive governor. But, don't need to
@ -314,9 +323,13 @@ struct devfreq_passive_data {
/* Optional callback to decide the next frequency of passvice device */
int (*get_target_freq)(struct devfreq *this, unsigned long *freq);
/* Should set the type of parent device */
enum devfreq_parent_dev_type parent_type;
/* For passive governor's internal use. Don't need to set them */
struct devfreq *this;
struct notifier_block nb;
struct list_head cpu_data_list;
};
#endif
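
For reference, a consumer driver would select the cpufreq-parent mode roughly as in
the sketch below. This is a hedged illustration: my_target(), my_profile,
my_passive_data and my_probe() are hypothetical names, while devfreq_add_device(),
DEVFREQ_GOV_PASSIVE and the parent_type field are the interfaces touched by this
series.

#include <linux/devfreq.h>
#include <linux/err.h>

/* Hypothetical consumer: let a device follow CPU frequency via the
 * passive governor in CPUFREQ_PARENT_DEV mode. */
static int my_target(struct device *dev, unsigned long *freq, u32 flags)
{
	/* Program the device clock to *freq here. */
	return 0;
}

static struct devfreq_dev_profile my_profile = {
	.target = my_target,
};

static struct devfreq_passive_data my_passive_data = {
	.parent_type = CPUFREQ_PARENT_DEV,	/* follow cpufreq, not a parent devfreq */
};

static int my_probe(struct device *dev)
{
	struct devfreq *df;

	/* OPPs for the device itself come from its operating-points-v2 table. */
	df = devfreq_add_device(dev, &my_profile, DEVFREQ_GOV_PASSIVE,
				&my_passive_data);
	return PTR_ERR_OR_ZERO(df);
}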

include/soc/rockchip/pm_domains.h

@ -0,0 +1,25 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright 2022, The Chromium OS Authors. All rights reserved.
*/
#ifndef __SOC_ROCKCHIP_PM_DOMAINS_H__
#define __SOC_ROCKCHIP_PM_DOMAINS_H__
#ifdef CONFIG_ROCKCHIP_PM_DOMAINS
int rockchip_pmu_block(void);
void rockchip_pmu_unblock(void);
#else /* CONFIG_ROCKCHIP_PM_DOMAINS */
static inline int rockchip_pmu_block(void)
{
return 0;
}
static inline void rockchip_pmu_unblock(void) { }
#endif /* CONFIG_ROCKCHIP_PM_DOMAINS */
#endif /* __SOC_ROCKCHIP_PM_DOMAINS_H__ */