USB / Thunderbolt driver updates for 6.5-rc1

Merge tag 'usb-6.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / Thunderbolt driver updates from Greg KH:
 "Here is the big set of USB and Thunderbolt driver updates for 6.5-rc1.

  Included in here are:

   - Lots of USB4/Thunderbolt additions and updates for new hardware
     types and fixes as people are starting to get access to the
     hardware in the wild

   - new gadget controller driver, cdns2, added

   - new typec drivers added

   - xhci driver updates

   - typec driver updates

   - usbip driver fixes

   - usb-serial driver updates and fixes

   - lots of smaller USB driver updates

  All of these have been in linux-next for a while with no reported
  problems"

* tag 'usb-6.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (265 commits)
  usb: host: xhci-plat: Set XHCI_STATE_REMOVING before resuming XHCI HC
  usb: host: xhci: Do not re-initialize the XHCI HC if being removed
  usb: typec: nb7vpq904m: fix CONFIG_DRM dependency
  usbip: usbip_host: Replace strlcpy with strscpy
  usb: dwc3: gadget: Propagate core init errors to UDC during pullup
  USB: serial: option: add LARA-R6 01B PIDs
  usb: ulpi: Make container_of() no-op in to_ulpi_dev()
  usb: gadget: legacy: fix error return code in gfs_bind
  usb: typec: fsa4480: add support for Audio Accessory Mode
  usb: typec: fsa4480: rework mux & switch setup to handle more states
  usb: typec: ucsi: call typec_set_mode on non-altmode partner change
  USB: gadget: f_hid: make hidg_class a static const structure
  USB: gadget: f_printer: make usb_gadget_class a static const structure
  USB: mon: make mon_bin_class a static const structure
  USB: gadget: udc: core: make udc_class a static const structure
  USB: roles: make role_class a static const structure
  dt-bindings: usb: dwc3: Add interrupt-names property support for wakeup interrupt
  dt-bindings: usb: Add StarFive JH7110 USB controller
  dt-bindings: usb: dwc3: Add IPQ9574 compatible
  usb: cdns2: Fix spelling mistake in a trace message "Wakupe" -> "Wakeup"
  ...
Committed by Linus Torvalds on 2023-07-03 13:23:10 -07:00 (commit 56cbceab92).
264 changed files with 11504 additions and 2330 deletions.


@ -292,6 +292,16 @@ Description:
which is marked with early_stop has failed to initialize, it will ignore
all future connections until this attribute is clear.
What: /sys/bus/usb/devices/.../<hub_interface>/port<X>/state
Date: June 2023
Contact: Roy Luo <royluo@google.com>
Description:
Indicates current state of the USB device attached to the port.
Valid states are: 'not-attached', 'attached', 'powered',
'reconnecting', 'unauthenticated', 'default', 'addressed',
'configured', and 'suspended'. This file supports poll() to
monitor the state change from user space.
What: /sys/bus/usb/devices/.../power/usb2_lpm_l1_timeout
Date: May 2013
Contact: Mathias Nyman <mathias.nyman@linux.intel.com>
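
The new per-port "state" attribute supports poll(); below is a minimal
user-space sketch of monitoring it. The device path is illustrative only,
and as with any sysfs notification the file must be read before blocking
on POLLPRI:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[32];
	struct pollfd pfd;
	ssize_t n;
	/* Hypothetical path; substitute a real hub port directory. */
	int fd = open("/sys/bus/usb/devices/usb1/1-0:1.0/port1/state", O_RDONLY);

	if (fd < 0)
		return 1;
	pfd.fd = fd;
	pfd.events = POLLPRI;
	for (;;) {
		/* Consume the current value, then block until it changes. */
		lseek(fd, 0, SEEK_SET);
		n = read(fd, buf, sizeof(buf) - 1);
		if (n > 0) {
			buf[n] = '\0';
			printf("state: %s", buf);
		}
		if (poll(&pfd, 1, -1) < 0)
			break;
	}
	close(fd);
	return 0;
}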


@ -1,4 +1,4 @@
What: /sys/bus/platform/drivers/eud/.../enable
What: /sys/bus/platform/drivers/qcom_eud/.../enable
Date: February 2022
Contact: Souradeep Chowdhury <quic_schowdhu@quicinc.com>
Description:


@ -61,6 +61,10 @@ properties:
power-domains:
maxItems: 1
orientation-switch:
description: Flag the port as possible handler of orientation switching
type: boolean
resets:
items:
- description: reset of phy block.
@ -251,6 +255,8 @@ examples:
vdda-phy-supply = <&vdda_usb2_ss_1p2>;
vdda-pll-supply = <&vdda_usb2_ss_core>;
orientation-switch;
usb3-phy@200 {
reg = <0x200 0x128>,
<0x400 0x200>,


@ -14,6 +14,9 @@ description: |
regulator will be enabled in situations where the device is required to
provide power to the connected peripheral.
allOf:
- $ref: regulator.yaml#
properties:
compatible:
enum:
@ -25,8 +28,11 @@ properties:
required:
- compatible
- reg
- regulator-min-microamp
- regulator-max-microamp
additionalProperties: false
unevaluatedProperties: false
examples:
- |
@ -36,6 +42,8 @@ examples:
pm8150b_vbus: usb-vbus-regulator@1100 {
compatible = "qcom,pm8150b-vbus-reg";
reg = <0x1100>;
regulator-min-microamp = <500000>;
regulator-max-microamp = <3000000>;
};
};
...


@ -1,55 +0,0 @@
--------------------------------------------------------------------------
= Zynq UltraScale+ MPSoC and Versal reset driver binding =
--------------------------------------------------------------------------
The Zynq UltraScale+ MPSoC and Versal has several different resets.
See Chapter 36 of the Zynq UltraScale+ MPSoC TRM (UG) for more information
about zynqmp resets.
Please also refer to reset.txt in this directory for common reset
controller binding usage.
Required Properties:
- compatible: "xlnx,zynqmp-reset" for Zynq UltraScale+ MPSoC platform
"xlnx,versal-reset" for Versal platform
- #reset-cells: Specifies the number of cells needed to encode reset
line, should be 1
-------
Example
-------
firmware {
zynqmp_firmware: zynqmp-firmware {
compatible = "xlnx,zynqmp-firmware";
method = "smc";
zynqmp_reset: reset-controller {
compatible = "xlnx,zynqmp-reset";
#reset-cells = <1>;
};
};
};
Specifying reset lines connected to IP modules
==============================================
Device nodes that need access to reset lines should
specify them as a reset phandle in their corresponding node as
specified in reset.txt.
For list of all valid reset indices for Zynq UltraScale+ MPSoC see
<dt-bindings/reset/xlnx-zynqmp-resets.h>
For list of all valid reset indices for Versal see
<dt-bindings/reset/xlnx-versal-resets.h>
Example:
serdes: zynqmp_phy@fd400000 {
...
resets = <&zynqmp_reset ZYNQMP_RESET_SATA>;
reset-names = "sata_rst";
...
};


@ -0,0 +1,52 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/reset/xlnx,zynqmp-reset.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Zynq UltraScale+ MPSoC and Versal reset
maintainers:
- Piyush Mehta <piyush.mehta@amd.com>
description: |
The Zynq UltraScale+ MPSoC and Versal has several different resets.
The PS reset subsystem is responsible for handling the external reset
input to the device and that all internal reset requirements are met
for the system (as a whole) and for the functional units.
Please also refer to reset.txt in this directory for common reset
controller binding usage. Device nodes that need access to reset
lines should specify them as a reset phandle in their corresponding
node as specified in reset.txt.
For list of all valid reset indices for Zynq UltraScale+ MPSoC
<dt-bindings/reset/xlnx-zynqmp-resets.h>
For list of all valid reset indices for Versal
<dt-bindings/reset/xlnx-versal-resets.h>
properties:
compatible:
enum:
- xlnx,zynqmp-reset
- xlnx,versal-reset
"#reset-cells":
const: 1
required:
- compatible
- "#reset-cells"
additionalProperties: false
examples:
- |
zynqmp_reset: reset-controller {
compatible = "xlnx,zynqmp-reset";
#reset-cells = <1>;
};
...


@ -45,7 +45,9 @@ properties:
- fsl,vf610-usb
- const: fsl,imx27-usb
- items:
- const: fsl,imx8dxl-usb
- enum:
- fsl,imx8dxl-usb
- fsl,imx8ulp-usb
- const: fsl,imx7ulp-usb
- const: fsl,imx6ul-usb
- items:


@ -53,6 +53,7 @@ properties:
- amlogic,meson8b-usb
- amlogic,meson-gxbb-usb
- amlogic,meson-g12a-usb
- amlogic,meson-a1-usb
- intel,socfpga-agilex-hsotg
- const: snps,dwc2
- const: amcc,dwc-otg


@ -0,0 +1,103 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (c) 2020 NXP
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/fsl,imx8qm-cdns3.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: NXP iMX8QM Soc USB Controller
maintainers:
- Frank Li <Frank.Li@nxp.com>
properties:
compatible:
const: fsl,imx8qm-usb3
reg:
items:
- description: Register set for iMX USB3 Platform Control
"#address-cells":
enum: [ 1, 2 ]
"#size-cells":
enum: [ 1, 2 ]
ranges: true
clocks:
items:
- description: Standby clock. Used during ultra low power states.
- description: USB bus clock for usb3 controller.
- description: AXI clock for AXI interface.
- description: ipg clock for register access.
- description: Core clock for usb3 controller.
clock-names:
items:
- const: lpm
- const: bus
- const: aclk
- const: ipg
- const: core
power-domains:
maxItems: 1
# Required child node:
patternProperties:
"^usb@[0-9a-f]+$":
$ref: cdns,usb3.yaml#
required:
- compatible
- reg
- "#address-cells"
- "#size-cells"
- ranges
- clocks
- clock-names
- power-domains
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/imx8-lpcg.h>
#include <dt-bindings/firmware/imx/rsrc.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
usb@5b110000 {
compatible = "fsl,imx8qm-usb3";
reg = <0x5b110000 0x10000>;
ranges;
clocks = <&usb3_lpcg IMX_LPCG_CLK_1>,
<&usb3_lpcg IMX_LPCG_CLK_0>,
<&usb3_lpcg IMX_LPCG_CLK_7>,
<&usb3_lpcg IMX_LPCG_CLK_4>,
<&usb3_lpcg IMX_LPCG_CLK_5>;
clock-names = "lpm", "bus", "aclk", "ipg", "core";
assigned-clocks = <&clk IMX_SC_R_USB_2 IMX_SC_PM_CLK_MST_BUS>;
assigned-clock-rates = <250000000>;
power-domains = <&pd IMX_SC_R_USB_2>;
#address-cells = <1>;
#size-cells = <1>;
usb@5b120000 {
compatible = "cdns,usb3";
reg = <0x5b120000 0x10000>, /* memory area for OTG/DRD registers */
<0x5b130000 0x10000>, /* memory area for HOST registers */
<0x5b140000 0x10000>; /* memory area for DEVICE registers */
reg-names = "otg", "xhci", "dev";
interrupt-parent = <&gic>;
interrupts = <GIC_SPI 271 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 271 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 271 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 271 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "host", "peripheral", "otg", "wakeup";
phys = <&usb3_phy>;
phy-names = "cdns3,usb3-phy";
};
};


@ -61,6 +61,7 @@ properties:
- ibm,476gtr-ehci
- nxp,lpc1850-ehci
- qca,ar7100-ehci
- rockchip,rk3588-ehci
- snps,hsdk-v1.0-ehci
- socionext,uniphier-ehci
- const: generic-ehci


@ -44,6 +44,7 @@ properties:
- hpe,gxp-ohci
- ibm,476gtr-ohci
- ingenic,jz4740-ohci
- rockchip,rk3588-ohci
- snps,hsdk-v1.0-ohci
- const: generic-ohci
- enum:
@ -69,7 +70,7 @@ properties:
clocks:
minItems: 1
maxItems: 3
maxItems: 4
description: |
In case of the Renesas R-Car Gen3 SoCs:
- if a host only channel: first clock should be host.
@ -147,6 +148,20 @@ allOf:
then:
properties:
transceiver: false
- if:
properties:
compatible:
contains:
const: rockchip,rk3588-ohci
then:
properties:
clocks:
minItems: 4
else:
properties:
clocks:
minItems: 1
maxItems: 3
unevaluatedProperties: false


@ -0,0 +1,107 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/microchip,usb5744.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Microchip USB5744 4-port Hub Controller
description:
Microchip's USB5744 SmartHub(TM) IC is a 4-port, SuperSpeed (SS)/Hi-Speed (HS),
low-power, low pin count hub controller, configurable and fully compliant with
the USB 3.1 Gen 1 specification. The USB5744 also supports Full Speed (FS) and
Low Speed (LS) USB signaling, offering complete coverage of all defined USB
operating speeds. The SuperSpeed hubs operate in parallel with the USB 2.0
controller, so 5 Gbps SuperSpeed data transfers are not affected by slower
USB 2.0 traffic.
maintainers:
- Piyush Mehta <piyush.mehta@amd.com>
- Michal Simek <michal.simek@amd.com>
properties:
compatible:
enum:
- usb424,2744
- usb424,5744
- microchip,usb5744
reg:
maxItems: 1
reset-gpios:
maxItems: 1
description:
GPIO controlling the GRST# pin.
vdd-supply:
description:
VDD power supply to the hub
peer-hub:
$ref: /schemas/types.yaml#/definitions/phandle
description:
phandle to the peer hub on the controller.
i2c-bus:
$ref: /schemas/types.yaml#/definitions/phandle
description:
phandle of a USB hub connected via the i2c bus.
required:
- compatible
- reg
allOf:
- if:
properties:
compatible:
contains:
const: microchip,usb5744
then:
properties:
reset-gpios: false
vdd-supply: false
peer-hub: false
i2c-bus: false
else:
$ref: /schemas/usb/usb-device.yaml
required:
- peer-hub
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
i2c: i2c {
#address-cells = <1>;
#size-cells = <0>;
hub: usb-hub@2d {
compatible = "microchip,usb5744";
reg = <0x2d>;
};
};
usb {
#address-cells = <1>;
#size-cells = <0>;
/* 2.0 hub on port 1 */
hub_2_0: hub@1 {
compatible = "usb424,2744";
reg = <1>;
peer-hub = <&hub_3_0>;
i2c-bus = <&hub>;
reset-gpios = <&gpio 3 GPIO_ACTIVE_LOW>;
};
/* 3.0 hub on port 2 */
hub_3_0: hub@2 {
compatible = "usb424,5744";
reg = <2>;
peer-hub = <&hub_2_0>;
i2c-bus = <&hub>;
reset-gpios = <&gpio 3 GPIO_ACTIVE_LOW>;
};
};


@ -0,0 +1,141 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/onnn,nb7vpq904m.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: ON Semiconductor Type-C DisplayPort ALT Mode Linear Redriver
maintainers:
- Neil Armstrong <neil.armstrong@linaro.org>
properties:
compatible:
enum:
- onnn,nb7vpq904m
reg:
maxItems: 1
vcc-supply:
description: power supply (1.8V)
enable-gpios: true
retimer-switch:
description: Flag the port as possible handler of SuperSpeed signal retiming
type: boolean
orientation-switch:
description: Flag the port as possible handler of orientation switching
type: boolean
ports:
$ref: /schemas/graph.yaml#/properties/ports
properties:
port@0:
$ref: /schemas/graph.yaml#/properties/port
description: Super Speed (SS) Output endpoint to the Type-C connector
port@1:
$ref: /schemas/graph.yaml#/$defs/port-base
description: Super Speed (SS) Input endpoint from the Super-Speed PHY
unevaluatedProperties: false
properties:
endpoint:
$ref: /schemas/graph.yaml#/$defs/endpoint-base
unevaluatedProperties: false
properties:
data-lanes:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
An array of physical data lane indexes. Position determines how
lanes are connected to the redriver. It is assumed the same order
is kept on the other side of the redriver.
Lane number represents the following
- 0 is RX2 lane
- 1 is TX2 lane
- 2 is TX1 lane
- 3 is RX1 lane
The position determines the physical port of the redriver, in the
order A, B, C & D.
oneOf:
- items:
- const: 0
- const: 1
- const: 2
- const: 3
description: |
This is the lanes default layout
- Port A to RX2 lane
- Port B to TX2 lane
- Port C to TX1 lane
- Port D to RX1 lane
- items:
- const: 3
- const: 2
- const: 1
- const: 0
description: |
This is the USBRX2/USBTX2 and USBRX1/USBTX1 swapped lanes layout
- Port A to RX1 lane
- Port B to TX1 lane
- Port C to TX2 lane
- Port D to RX2 lane
port@2:
$ref: /schemas/graph.yaml#/properties/port
description:
Sideband Use (SBU) AUX lines endpoint to the Type-C connector for the purpose of
handling altmode muxing and orientation switching.
required:
- compatible
- reg
additionalProperties: false
examples:
- |
i2c {
#address-cells = <1>;
#size-cells = <0>;
typec-mux@32 {
compatible = "onnn,nb7vpq904m";
reg = <0x32>;
vcc-supply = <&vreg_l15b_1p8>;
retimer-switch;
orientation-switch;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
usb_con_ss: endpoint {
remote-endpoint = <&typec_con_ss>;
};
};
port@1 {
reg = <1>;
phy_con_ss: endpoint {
remote-endpoint = <&usb_phy_ss>;
data-lanes = <3 2 1 0>;
};
};
port@2 {
reg = <2>;
usb_con_sbu: endpoint {
remote-endpoint = <&typec_dp_aux>;
};
};
};
};
};
...


@ -17,6 +17,7 @@ properties:
- qcom,ipq6018-dwc3
- qcom,ipq8064-dwc3
- qcom,ipq8074-dwc3
- qcom,ipq9574-dwc3
- qcom,msm8953-dwc3
- qcom,msm8994-dwc3
- qcom,msm8996-dwc3
@ -133,7 +134,6 @@ required:
- "#address-cells"
- "#size-cells"
- ranges
- power-domains
- clocks
- clock-names
- interrupts
@ -177,6 +177,7 @@ allOf:
compatible:
contains:
enum:
- qcom,ipq9574-dwc3
- qcom,msm8953-dwc3
- qcom,msm8996-dwc3
- qcom,msm8998-dwc3


@ -0,0 +1,190 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/qcom,pmic-typec.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm PMIC based USB Type-C block
maintainers:
- Bryan O'Donoghue <bryan.odonoghue@linaro.org>
description:
Qualcomm PMIC Type-C block
properties:
compatible:
enum:
- qcom,pm8150b-typec
connector:
type: object
$ref: /schemas/connector/usb-connector.yaml#
unevaluatedProperties: false
reg:
description: Type-C port and pdphy SPMI register base offsets
maxItems: 2
interrupts:
items:
- description: Type-C CC attach notification, VBUS error, tCCDebounce done
- description: Type-C VCONN powered
- description: Type-C CC state change
- description: Type-C VCONN over-current
- description: Type-C VBUS state change
- description: Type-C Attach/detach notification
- description: Type-C Legacy cable detect
- description: Type-C Try.Src Try.Snk state change
- description: Power Domain Signal TX - HardReset or CableReset signal TX
- description: Power Domain Signal RX - HardReset or CableReset signal RX
- description: Power Domain TX complete
- description: Power Domain RX complete
- description: Power Domain TX fail
- description: Power Domain TX message discard
- description: Power Domain RX message discard
- description: Power Domain Fast Role Swap event
interrupt-names:
items:
- const: or-rid-detect-change
- const: vpd-detect
- const: cc-state-change
- const: vconn-oc
- const: vbus-change
- const: attach-detach
- const: legacy-cable-detect
- const: try-snk-src-detect
- const: sig-tx
- const: sig-rx
- const: msg-tx
- const: msg-rx
- const: msg-tx-failed
- const: msg-tx-discarded
- const: msg-rx-discarded
- const: fr-swap
vdd-vbus-supply:
description: VBUS power supply.
vdd-pdphy-supply:
description: VDD regulator supply to the PDPHY.
port:
$ref: /schemas/graph.yaml#/properties/port
description:
Contains a port which produces data-role switching messages.
required:
- compatible
- reg
- interrupts
- interrupt-names
- vdd-vbus-supply
- vdd-pdphy-supply
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/usb/pd.h>
pmic {
#address-cells = <1>;
#size-cells = <0>;
pm8150b_typec: typec@1500 {
compatible = "qcom,pm8150b-typec";
reg = <0x1500>,
<0x1700>;
interrupts = <0x2 0x15 0x00 IRQ_TYPE_EDGE_RISING>,
<0x2 0x15 0x01 IRQ_TYPE_EDGE_BOTH>,
<0x2 0x15 0x02 IRQ_TYPE_EDGE_RISING>,
<0x2 0x15 0x03 IRQ_TYPE_EDGE_BOTH>,
<0x2 0x15 0x04 IRQ_TYPE_EDGE_RISING>,
<0x2 0x15 0x05 IRQ_TYPE_EDGE_RISING>,
<0x2 0x15 0x06 IRQ_TYPE_EDGE_BOTH>,
<0x2 0x15 0x07 IRQ_TYPE_EDGE_RISING>,
<0x2 0x17 0x00 IRQ_TYPE_EDGE_RISING>,
<0x2 0x17 0x01 IRQ_TYPE_EDGE_RISING>,
<0x2 0x17 0x02 IRQ_TYPE_EDGE_RISING>,
<0x2 0x17 0x03 IRQ_TYPE_EDGE_RISING>,
<0x2 0x17 0x04 IRQ_TYPE_EDGE_RISING>,
<0x2 0x17 0x05 IRQ_TYPE_EDGE_RISING>,
<0x2 0x17 0x06 IRQ_TYPE_EDGE_RISING>,
<0x2 0x17 0x07 IRQ_TYPE_EDGE_RISING>;
interrupt-names = "or-rid-detect-change",
"vpd-detect",
"cc-state-change",
"vconn-oc",
"vbus-change",
"attach-detach",
"legacy-cable-detect",
"try-snk-src-detect",
"sig-tx",
"sig-rx",
"msg-tx",
"msg-rx",
"msg-tx-failed",
"msg-tx-discarded",
"msg-rx-discarded",
"fr-swap";
vdd-vbus-supply = <&pm8150b_vbus>;
vdd-pdphy-supply = <&vreg_l2a_3p1>;
connector {
compatible = "usb-c-connector";
power-role = "source";
data-role = "dual";
self-powered;
source-pdos = <PDO_FIXED(5000, 3000, PDO_FIXED_DUAL_ROLE |
PDO_FIXED_USB_COMM | PDO_FIXED_DATA_SWAP)>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
pmic_typec_mux_out: endpoint {
remote-endpoint = <&usb_phy_typec_mux_in>;
};
};
port@1 {
reg = <1>;
pmic_typec_role_switch_out: endpoint {
remote-endpoint = <&usb_role_switch_in>;
};
};
};
};
};
};
usb {
dr_mode = "otg";
usb-role-switch;
port {
usb_role_switch_in: endpoint {
remote-endpoint = <&pmic_typec_role_switch_out>;
};
};
};
usb-phy {
orientation-switch;
port {
usb_phy_typec_mux_in: endpoint {
remote-endpoint = <&pmic_typec_mux_out>;
};
};
};
...


@ -44,15 +44,15 @@ properties:
It's either a single common DWC3 interrupt (dwc_usb3) or individual
interrupts for the host, gadget and DRD modes.
minItems: 1
maxItems: 3
maxItems: 4
interrupt-names:
minItems: 1
maxItems: 3
maxItems: 4
oneOf:
- const: dwc_usb3
- items:
enum: [host, peripheral, otg]
enum: [host, peripheral, otg, wakeup]
clocks:
description:


@ -0,0 +1,115 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/starfive,jh7110-usb.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: StarFive JH7110 wrapper module for the Cadence USBSS-DRD controller
maintainers:
- Minda Chen <minda.chen@starfivetech.com>
properties:
compatible:
const: starfive,jh7110-usb
ranges: true
starfive,stg-syscon:
$ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to System Register Controller stg_syscon node.
- description: dr mode register offset of STG_SYSCONSAIF__SYSCFG register for USB.
description:
The phandle to System Register Controller syscon node and the offset
of STG_SYSCONSAIF__SYSCFG register for USB.
dr_mode:
enum: [host, otg, peripheral]
"#address-cells":
enum: [1, 2]
"#size-cells":
enum: [1, 2]
clocks:
items:
- description: link power management clock
- description: standby clock
- description: APB clock
- description: AXI clock
- description: UTMI APB clock
clock-names:
items:
- const: lpm
- const: stb
- const: apb
- const: axi
- const: utmi_apb
resets:
items:
- description: Power up reset
- description: APB clock reset
- description: AXI clock reset
- description: UTMI APB clock reset
reset-names:
items:
- const: pwrup
- const: apb
- const: axi
- const: utmi_apb
patternProperties:
"^usb@[0-9a-f]+$":
$ref: cdns,usb3.yaml#
description: Required child node
required:
- compatible
- ranges
- starfive,stg-syscon
- '#address-cells'
- '#size-cells'
- dr_mode
- clocks
- resets
additionalProperties: false
examples:
- |
usb@10100000 {
compatible = "starfive,jh7110-usb";
ranges = <0x0 0x10100000 0x100000>;
#address-cells = <1>;
#size-cells = <1>;
starfive,stg-syscon = <&stg_syscon 0x4>;
clocks = <&syscrg 4>,
<&stgcrg 5>,
<&stgcrg 1>,
<&stgcrg 3>,
<&stgcrg 2>;
clock-names = "lpm", "stb", "apb", "axi", "utmi_apb";
resets = <&stgcrg 10>,
<&stgcrg 8>,
<&stgcrg 7>,
<&stgcrg 9>;
reset-names = "pwrup", "apb", "axi", "utmi_apb";
dr_mode = "host";
usb@0 {
compatible = "cdns,usb3";
reg = <0x0 0x10000>,
<0x10000 0x10000>,
<0x20000 0x10000>;
reg-names = "otg", "xhci", "dev";
interrupts = <100>, <108>, <110>;
interrupt-names = "host", "peripheral", "otg";
maximum-speed = "super-speed";
};
};


@ -231,7 +231,7 @@ properties:
power-on sequence to a port until the port has adequate power.
swap-dx-lanes:
$ref: /schemas/types.yaml#/definitions/uint8-array
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
Specifies the ports which will swap the differential-pair (D+/D-),
default is not-swapped.


@ -4540,6 +4540,12 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git
F: drivers/usb/cdns3/
X: drivers/usb/cdns3/cdns3*
CADENCE USBHS DRIVER
M: Pawel Laszczak <pawell@cadence.com>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/gadget/udc/cdns2
CADET FM/AM RADIO RECEIVER DRIVER
M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org
@ -17694,6 +17700,14 @@ S: Maintained
F: Documentation/devicetree/bindings/thermal/qcom-tsens.yaml
F: drivers/thermal/qcom/
QUALCOMM TYPEC PORT MANAGER DRIVER
M: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
L: linux-arm-msm@vger.kernel.org
L: linux-usb@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/usb/qcom,pmic-*.yaml
F: drivers/usb/typec/tcpm/qcom/
QUALCOMM VENUS VIDEO ACCELERATOR DRIVER
M: Stanimir Varbanov <stanimir.k.varbanov@gmail.com>
M: Vikash Garodia <quic_vgarodia@quicinc.com>
@ -20329,6 +20343,12 @@ F: Documentation/devicetree/bindings/reset/starfive,jh7100-reset.yaml
F: drivers/reset/starfive/reset-starfive-jh71*
F: include/dt-bindings/reset/starfive?jh71*.h
STARFIVE JH71X0 USB DRIVERS
M: Minda Chen <minda.chen@starfivetech.com>
S: Maintained
F: Documentation/devicetree/bindings/usb/starfive,jh7110-usb.yaml
F: drivers/usb/cdns3/cdns3-starfive.c
STARFIVE JH71XX PMU CONTROLLER DRIVER
M: Walker Chen <walker.chen@starfivetech.com>
S: Supported
@ -22126,6 +22146,7 @@ F: drivers/usb/
F: include/dt-bindings/usb/
F: include/linux/usb.h
F: include/linux/usb/
F: include/uapi/linux/usb/
USB TYPEC BUS FOR ALTERNATE MODES
M: Heikki Krogerus <heikki.krogerus@linux.intel.com>


@ -77,7 +77,7 @@ static int cros_typec_get_switch_handles(struct cros_typec_port *port,
{
int ret = 0;
port->mux = fwnode_typec_mux_get(fwnode, NULL);
port->mux = fwnode_typec_mux_get(fwnode);
if (IS_ERR(port->mux)) {
ret = PTR_ERR(port->mux);
dev_dbg(dev, "Mux handle not found: %d.\n", ret);


@ -369,7 +369,6 @@ static int pmic_glink_altmode_probe(struct auxiliary_device *adev,
{
struct pmic_glink_altmode_port *alt_port;
struct pmic_glink_altmode *altmode;
struct typec_altmode_desc mux_desc = {};
const struct of_device_id *match;
struct fwnode_handle *fwnode;
struct device *dev = &adev->dev;
@ -427,9 +426,7 @@ static int pmic_glink_altmode_probe(struct auxiliary_device *adev,
alt_port->dp_alt.mode = USB_TYPEC_DP_MODE;
alt_port->dp_alt.active = 1;
mux_desc.svid = USB_TYPEC_DP_SID;
mux_desc.mode = USB_TYPEC_DP_MODE;
alt_port->typec_mux = fwnode_typec_mux_get(fwnode, &mux_desc);
alt_port->typec_mux = fwnode_typec_mux_get(fwnode);
if (IS_ERR(alt_port->typec_mux))
return dev_err_probe(dev, PTR_ERR(alt_port->typec_mux),
"failed to acquire mode-switch for port: %d\n",


@ -2,7 +2,7 @@
obj-${CONFIG_USB4} := thunderbolt.o
thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o
thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o clx.o
thunderbolt-${CONFIG_ACPI} += acpi.o
thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o


@ -296,16 +296,15 @@ static bool tb_acpi_bus_match(struct device *dev)
static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
{
struct tb_switch *parent_sw = tb_switch_parent(sw);
struct acpi_device *adev = NULL;
struct tb_switch *parent_sw;
/*
* Device routers exists under the downstream facing USB4 port
* of the parent router. Their _ADR is always 0.
*/
parent_sw = tb_switch_parent(sw);
if (parent_sw) {
struct tb_port *port = tb_port_at(tb_route(sw), parent_sw);
struct tb_port *port = tb_switch_downstream_port(sw);
struct acpi_device *port_adev;
port_adev = acpi_find_child_by_adr(ACPI_COMPANION(&parent_sw->dev),

drivers/thunderbolt/clx.c (new file, 423 lines)

@ -0,0 +1,423 @@
// SPDX-License-Identifier: GPL-2.0
/*
* CLx support
*
* Copyright (C) 2020 - 2023, Intel Corporation
* Authors: Gil Fine <gil.fine@intel.com>
* Mika Westerberg <mika.westerberg@linux.intel.com>
*/
#include <linux/module.h>
#include "tb.h"
static bool clx_enabled = true;
module_param_named(clx, clx_enabled, bool, 0444);
MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)");
static const char *clx_name(unsigned int clx)
{
switch (clx) {
case TB_CL0S | TB_CL1 | TB_CL2:
return "CL0s/CL1/CL2";
case TB_CL1 | TB_CL2:
return "CL1/CL2";
case TB_CL0S | TB_CL2:
return "CL0s/CL2";
case TB_CL0S | TB_CL1:
return "CL0s/CL1";
case TB_CL0S:
return "CL0s";
case 0:
return "disabled";
default:
return "unknown";
}
}
static int tb_port_pm_secondary_set(struct tb_port *port, bool secondary)
{
u32 phy;
int ret;
ret = tb_port_read(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
if (secondary)
phy |= LANE_ADP_CS_1_PMS;
else
phy &= ~LANE_ADP_CS_1_PMS;
return tb_port_write(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
static int tb_port_pm_secondary_enable(struct tb_port *port)
{
return tb_port_pm_secondary_set(port, true);
}
static int tb_port_pm_secondary_disable(struct tb_port *port)
{
return tb_port_pm_secondary_set(port, false);
}
/* Called for USB4 or Titan Ridge routers only */
static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx)
{
u32 val, mask = 0;
bool ret;
/* Don't enable CLx in case of two single-lane links */
if (!port->bonded && port->dual_link_port)
return false;
/* Don't enable CLx in case of inter-domain link */
if (port->xdomain)
return false;
if (tb_switch_is_usb4(port->sw)) {
if (!usb4_port_clx_supported(port))
return false;
} else if (!tb_lc_is_clx_supported(port)) {
return false;
}
if (clx & TB_CL0S)
mask |= LANE_ADP_CS_0_CL0S_SUPPORT;
if (clx & TB_CL1)
mask |= LANE_ADP_CS_0_CL1_SUPPORT;
if (clx & TB_CL2)
mask |= LANE_ADP_CS_0_CL2_SUPPORT;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_0, 1);
if (ret)
return false;
return !!(val & mask);
}
static int tb_port_clx_set(struct tb_port *port, unsigned int clx, bool enable)
{
u32 phy, mask = 0;
int ret;
if (clx & TB_CL0S)
mask |= LANE_ADP_CS_1_CL0S_ENABLE;
if (clx & TB_CL1)
mask |= LANE_ADP_CS_1_CL1_ENABLE;
if (clx & TB_CL2)
mask |= LANE_ADP_CS_1_CL2_ENABLE;
if (!mask)
return -EOPNOTSUPP;
ret = tb_port_read(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
if (enable)
phy |= mask;
else
phy &= ~mask;
return tb_port_write(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
static int tb_port_clx_disable(struct tb_port *port, unsigned int clx)
{
return tb_port_clx_set(port, clx, false);
}
static int tb_port_clx_enable(struct tb_port *port, unsigned int clx)
{
return tb_port_clx_set(port, clx, true);
}
static int tb_port_clx(struct tb_port *port)
{
u32 val;
int ret;
if (!tb_port_clx_supported(port, TB_CL0S | TB_CL1 | TB_CL2))
return 0;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
if (val & LANE_ADP_CS_1_CL0S_ENABLE)
ret |= TB_CL0S;
if (val & LANE_ADP_CS_1_CL1_ENABLE)
ret |= TB_CL1;
if (val & LANE_ADP_CS_1_CL2_ENABLE)
ret |= TB_CL2;
return ret;
}
/**
* tb_port_clx_is_enabled() - Is given CL state enabled
* @port: USB4 port to check
* @clx: Mask of CL states to check
*
* Returns true if any of the given CL states is enabled for @port.
*/
bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx)
{
return !!(tb_port_clx(port) & clx);
}
/**
* tb_switch_clx_init() - Initialize router CL states
* @sw: Router
*
* Can be called for any router. Initializes the current CL state by
* reading it from the hardware.
*
* Returns %0 in case of success and negative errno in case of failure.
*/
int tb_switch_clx_init(struct tb_switch *sw)
{
struct tb_port *up, *down;
unsigned int clx, tmp;
if (tb_switch_is_icm(sw))
return 0;
if (!tb_route(sw))
return 0;
if (!tb_switch_clx_is_supported(sw))
return 0;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
clx = tb_port_clx(up);
tmp = tb_port_clx(down);
if (clx != tmp)
tb_sw_warn(sw, "CLx: inconsistent configuration %#x != %#x\n",
clx, tmp);
tb_sw_dbg(sw, "CLx: current mode: %s\n", clx_name(clx));
sw->clx = clx;
return 0;
}
static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
{
struct tb_port *up, *down;
int ret;
if (!tb_route(sw))
return 0;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
ret = tb_port_pm_secondary_enable(up);
if (ret)
return ret;
return tb_port_pm_secondary_disable(down);
}
static int tb_switch_mask_clx_objections(struct tb_switch *sw)
{
int up_port = sw->config.upstream_port_number;
u32 offset, val[2], mask_obj, unmask_obj;
int ret, i;
/* Only Titan Ridge of pre-USB4 devices support CLx states */
if (!tb_switch_is_titan_ridge(sw))
return 0;
if (!tb_route(sw))
return 0;
/*
* In Titan Ridge there are only 2 dual-lane Thunderbolt ports:
* Port A consists of lane adapters 1,2 and
* Port B consists of lane adapters 3,4
* If upstream port is A, (lanes are 1,2), we mask objections from
* port B (lanes 3,4) and unmask objections from Port A and vice-versa.
*/
if (up_port == 1) {
mask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
offset = TB_LOW_PWR_C1_CL1;
} else {
mask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
offset = TB_LOW_PWR_C3_CL1;
}
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->cap_lp + offset, ARRAY_SIZE(val));
if (ret)
return ret;
for (i = 0; i < ARRAY_SIZE(val); i++) {
val[i] |= mask_obj;
val[i] &= ~unmask_obj;
}
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->cap_lp + offset, ARRAY_SIZE(val));
}
/**
* tb_switch_clx_is_supported() - Is CLx supported on this type of router
* @sw: The router to check CLx support for
*/
bool tb_switch_clx_is_supported(const struct tb_switch *sw)
{
if (!clx_enabled)
return false;
if (sw->quirks & QUIRK_NO_CLX)
return false;
/*
* CLx is not enabled and validated on Intel USB4 platforms
* before Alder Lake.
*/
if (tb_switch_is_tiger_lake(sw))
return false;
return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
}
static bool validate_mask(unsigned int clx)
{
/* Previous states need to be enabled */
if (clx & TB_CL1)
return (clx & TB_CL0S) == TB_CL0S;
return true;
}
/**
* tb_switch_clx_enable() - Enable CLx on upstream port of specified router
* @sw: Router to enable CLx for
* @clx: The CLx state to enable
*
* CLx is enabled only if both sides of the link support CLx, and if both sides
* of the link are not configured as two single lane links and only if the link
* is not inter-domain link. The complete set of conditions is described in CM
* Guide 1.0 section 8.1.
*
* Returns %0 on success or an error code on failure.
*/
int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
{
bool up_clx_support, down_clx_support;
struct tb_switch *parent_sw;
struct tb_port *up, *down;
int ret;
if (!clx || sw->clx == clx)
return 0;
if (!validate_mask(clx))
return -EINVAL;
parent_sw = tb_switch_parent(sw);
if (!parent_sw)
return 0;
if (!tb_switch_clx_is_supported(parent_sw) ||
!tb_switch_clx_is_supported(sw))
return 0;
/* Only support CL2 for v2 routers */
if ((clx & TB_CL2) &&
(usb4_switch_version(parent_sw) < 2 ||
usb4_switch_version(sw) < 2))
return -EOPNOTSUPP;
ret = tb_switch_pm_secondary_resolve(sw);
if (ret)
return ret;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
up_clx_support = tb_port_clx_supported(up, clx);
down_clx_support = tb_port_clx_supported(down, clx);
tb_port_dbg(up, "CLx: %s %ssupported\n", clx_name(clx),
up_clx_support ? "" : "not ");
tb_port_dbg(down, "CLx: %s %ssupported\n", clx_name(clx),
down_clx_support ? "" : "not ");
if (!up_clx_support || !down_clx_support)
return -EOPNOTSUPP;
ret = tb_port_clx_enable(up, clx);
if (ret)
return ret;
ret = tb_port_clx_enable(down, clx);
if (ret) {
tb_port_clx_disable(up, clx);
return ret;
}
ret = tb_switch_mask_clx_objections(sw);
if (ret) {
tb_port_clx_disable(up, clx);
tb_port_clx_disable(down, clx);
return ret;
}
sw->clx |= clx;
tb_sw_dbg(sw, "CLx: %s enabled\n", clx_name(clx));
return 0;
}
/**
* tb_switch_clx_disable() - Disable CLx on upstream port of specified router
* @sw: Router to disable CLx for
*
* Disables all CL states of the given router. Can be called on any
* router and if the states were not enabled already does nothing.
*
* Returns the CL states that were disabled or negative errno in case of
* failure.
*/
int tb_switch_clx_disable(struct tb_switch *sw)
{
unsigned int clx = sw->clx;
struct tb_port *up, *down;
int ret;
if (!tb_switch_clx_is_supported(sw))
return 0;
if (!clx)
return 0;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
ret = tb_port_clx_disable(up, clx);
if (ret)
return ret;
ret = tb_port_clx_disable(down, clx);
if (ret)
return ret;
sw->clx = 0;
tb_sw_dbg(sw, "CLx: %s disabled\n", clx_name(clx));
return clx;
}
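
For reference, a hedged sketch of the calling pattern these helpers enable,
mirroring how the lane margining change later in this series uses them:
temporarily disable CL states around an operation they would interfere with,
then restore whatever was disabled. do_sensitive_operation() is a hypothetical
placeholder; real callers hold tb->lock as in the code above.

/* Sketch only: sw is a CLx-capable router and tb->lock is held. */
static int run_without_clx(struct tb_switch *sw)
{
	int ret, clx;

	clx = tb_switch_clx_disable(sw);
	if (clx < 0)
		return clx;		/* failed to disable CL states */

	ret = do_sensitive_operation(sw);	/* hypothetical */

	if (clx)
		tb_switch_clx_enable(sw, clx);	/* restore previous states */
	return ret;
}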


@ -409,6 +409,13 @@ static int tb_async_error(const struct ctl_pkg *pkg)
case TB_CFG_ERROR_HEC_ERROR_DETECTED:
case TB_CFG_ERROR_FLOW_CONTROL_ERROR:
case TB_CFG_ERROR_DP_BW:
case TB_CFG_ERROR_ROP_CMPLT:
case TB_CFG_ERROR_POP_CMPLT:
case TB_CFG_ERROR_PCIE_WAKE:
case TB_CFG_ERROR_DP_CON_CHANGE:
case TB_CFG_ERROR_DPTX_DISCOVERY:
case TB_CFG_ERROR_LINK_RECOVERY:
case TB_CFG_ERROR_ASYM_LINK:
return true;
default:
@ -758,6 +765,27 @@ int tb_cfg_ack_notification(struct tb_ctl *ctl, u64 route,
case TB_CFG_ERROR_DP_BW:
name = "DP_BW";
break;
case TB_CFG_ERROR_ROP_CMPLT:
name = "router operation completion";
break;
case TB_CFG_ERROR_POP_CMPLT:
name = "port operation completion";
break;
case TB_CFG_ERROR_PCIE_WAKE:
name = "PCIe wake";
break;
case TB_CFG_ERROR_DP_CON_CHANGE:
name = "DP connector change";
break;
case TB_CFG_ERROR_DPTX_DISCOVERY:
name = "DPTX discovery";
break;
case TB_CFG_ERROR_LINK_RECOVERY:
name = "link recovery";
break;
case TB_CFG_ERROR_ASYM_LINK:
name = "asymmetric link";
break;
default:
name = "unknown";
break;


@ -14,12 +14,15 @@
#include "tb.h"
#include "sb_regs.h"
#define PORT_CAP_PCIE_LEN 1
#define PORT_CAP_V1_PCIE_LEN 1
#define PORT_CAP_V2_PCIE_LEN 2
#define PORT_CAP_POWER_LEN 2
#define PORT_CAP_LANE_LEN 3
#define PORT_CAP_USB3_LEN 5
#define PORT_CAP_DP_LEN 8
#define PORT_CAP_TMU_LEN 8
#define PORT_CAP_DP_V1_LEN 9
#define PORT_CAP_DP_V2_LEN 14
#define PORT_CAP_TMU_V1_LEN 8
#define PORT_CAP_TMU_V2_LEN 10
#define PORT_CAP_BASIC_LEN 9
#define PORT_CAP_USB4_LEN 20
@ -553,8 +556,9 @@ static int margining_run_write(void *data, u64 val)
struct usb4_port *usb4 = port->usb4;
struct tb_switch *sw = port->sw;
struct tb_margining *margining;
struct tb_switch *down_sw;
struct tb *tb = sw->tb;
int ret;
int ret, clx;
if (val != 1)
return -EINVAL;
@ -566,15 +570,24 @@ static int margining_run_write(void *data, u64 val)
goto out_rpm_put;
}
/*
* CL states may interfere with lane margining so inform the user know
* and bail out.
*/
if (tb_port_is_clx_enabled(port, TB_CL1 | TB_CL2)) {
tb_port_warn(port,
"CL states are enabled, Disable them with clx=0 and re-connect\n");
ret = -EINVAL;
goto out_unlock;
if (tb_is_upstream_port(port))
down_sw = sw;
else if (port->remote)
down_sw = port->remote->sw;
else
down_sw = NULL;
if (down_sw) {
/*
* CL states may interfere with lane margining so
* disable them temporarily now.
*/
ret = tb_switch_clx_disable(down_sw);
if (ret < 0) {
tb_sw_warn(down_sw, "failed to disable CL states\n");
goto out_unlock;
}
clx = ret;
}
margining = usb4->margining;
@ -586,7 +599,7 @@ static int margining_run_write(void *data, u64 val)
margining->right_high,
USB4_MARGIN_SW_COUNTER_CLEAR);
if (ret)
goto out_unlock;
goto out_clx;
ret = usb4_port_sw_margin_errors(port, &margining->results[0]);
} else {
@ -600,6 +613,9 @@ static int margining_run_write(void *data, u64 val)
margining->right_high, margining->results);
}
out_clx:
if (down_sw)
tb_switch_clx_enable(down_sw, clx);
out_unlock:
mutex_unlock(&tb->lock);
out_rpm_put:
@ -1148,7 +1164,10 @@ static void port_cap_show(struct tb_port *port, struct seq_file *s,
break;
case TB_PORT_CAP_TIME1:
length = PORT_CAP_TMU_LEN;
if (usb4_switch_version(port->sw) < 2)
length = PORT_CAP_TMU_V1_LEN;
else
length = PORT_CAP_TMU_V2_LEN;
break;
case TB_PORT_CAP_POWER:
@ -1157,12 +1176,17 @@ static void port_cap_show(struct tb_port *port, struct seq_file *s,
case TB_PORT_CAP_ADAP:
if (tb_port_is_pcie_down(port) || tb_port_is_pcie_up(port)) {
length = PORT_CAP_PCIE_LEN;
} else if (tb_port_is_dpin(port) || tb_port_is_dpout(port)) {
if (usb4_dp_port_bw_mode_supported(port))
length = PORT_CAP_DP_LEN + 1;
if (usb4_switch_version(port->sw) < 2)
length = PORT_CAP_V1_PCIE_LEN;
else
length = PORT_CAP_DP_LEN;
length = PORT_CAP_V2_PCIE_LEN;
} else if (tb_port_is_dpin(port)) {
if (usb4_switch_version(port->sw) < 2)
length = PORT_CAP_DP_V1_LEN;
else
length = PORT_CAP_DP_V2_LEN;
} else if (tb_port_is_dpout(port)) {
length = PORT_CAP_DP_V1_LEN;
} else if (tb_port_is_usb3_down(port) ||
tb_port_is_usb3_up(port)) {
length = PORT_CAP_USB3_LEN;


@ -412,6 +412,7 @@ static void speed_get(const struct dma_test *dt, u64 *val)
static int speed_validate(u64 val)
{
switch (val) {
case 40:
case 20:
case 10:
case 0:
@ -489,9 +490,12 @@ static void dma_test_check_errors(struct dma_test *dt, int ret)
if (!dt->error_code) {
if (dt->link_speed && dt->xd->link_speed != dt->link_speed) {
dt->error_code = DMA_TEST_SPEED_ERROR;
} else if (dt->link_width &&
dt->xd->link_width != dt->link_width) {
dt->error_code = DMA_TEST_WIDTH_ERROR;
} else if (dt->link_width) {
const struct tb_xdomain *xd = dt->xd;
if ((dt->link_width == 1 && xd->link_width != TB_LINK_WIDTH_SINGLE) ||
(dt->link_width == 2 && xd->link_width < TB_LINK_WIDTH_DUAL))
dt->error_code = DMA_TEST_WIDTH_ERROR;
} else if (dt->packets_to_send != dt->packets_sent ||
dt->packets_to_receive != dt->packets_received ||
dt->crc_errors || dt->buffer_overflow_errors) {
@ -756,5 +760,5 @@ module_exit(dma_test_exit);
MODULE_AUTHOR("Isaac Hazan <isaac.hazan@intel.com>");
MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
MODULE_DESCRIPTION("DMA traffic test driver");
MODULE_DESCRIPTION("Thunderbolt/USB4 DMA traffic test driver");
MODULE_LICENSE("GPL v2");


@ -605,9 +605,8 @@ static int usb4_drom_parse(struct tb_switch *sw)
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
tb_sw_warn(sw,
"DROM data CRC32 mismatch (expected: %#x, got: %#x), aborting\n",
"DROM data CRC32 mismatch (expected: %#x, got: %#x), continuing\n",
header->data_crc32, crc);
return -EINVAL;
}
return tb_drom_parse_entries(sw, USB4_DROM_HEADER_SIZE);


@ -644,13 +644,14 @@ static int add_switch(struct tb_switch *parent_sw, struct tb_switch *sw)
return ret;
}
static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw,
u64 route, u8 connection_id, u8 connection_key,
u8 link, u8 depth, bool boot)
static void update_switch(struct tb_switch *sw, u64 route, u8 connection_id,
u8 connection_key, u8 link, u8 depth, bool boot)
{
struct tb_switch *parent_sw = tb_switch_parent(sw);
/* Disconnect from parent */
tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
/* Re-connect via updated port*/
tb_switch_downstream_port(sw)->remote = NULL;
/* Re-connect via updated port */
tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw);
/* Update with the new addressing information */
@ -671,10 +672,7 @@ static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw,
static void remove_switch(struct tb_switch *sw)
{
struct tb_switch *parent_sw;
parent_sw = tb_to_switch(sw->dev.parent);
tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
tb_switch_downstream_port(sw)->remote = NULL;
tb_switch_remove(sw);
}
@ -755,7 +753,6 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
if (sw) {
u8 phy_port, sw_phy_port;
parent_sw = tb_to_switch(sw->dev.parent);
sw_phy_port = tb_phy_port_from_link(sw->link);
phy_port = tb_phy_port_from_link(link);
@ -785,7 +782,7 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
route = tb_route(sw);
}
update_switch(parent_sw, sw, route, pkg->connection_id,
update_switch(sw, route, pkg->connection_id,
pkg->connection_key, link, depth, boot);
tb_switch_put(sw);
return;
@ -853,7 +850,8 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
sw->security_level = security_level;
sw->boot = boot;
sw->link_speed = speed_gen3 ? 20 : 10;
sw->link_width = dual_lane ? 2 : 1;
sw->link_width = dual_lane ? TB_LINK_WIDTH_DUAL :
TB_LINK_WIDTH_SINGLE;
sw->rpm = intel_vss_is_rtd3(pkg->ep_name, sizeof(pkg->ep_name));
if (add_switch(parent_sw, sw))
@ -1236,9 +1234,8 @@ __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr,
if (sw) {
/* Update the switch if it is still in the same place */
if (tb_route(sw) == route && !!sw->authorized == authorized) {
parent_sw = tb_to_switch(sw->dev.parent);
update_switch(parent_sw, sw, route, pkg->connection_id,
0, 0, 0, boot);
update_switch(sw, route, pkg->connection_id, 0, 0, 0,
boot);
tb_switch_put(sw);
return;
}
@ -1276,7 +1273,8 @@ __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr,
sw->security_level = security_level;
sw->boot = boot;
sw->link_speed = speed_gen3 ? 20 : 10;
sw->link_width = dual_lane ? 2 : 1;
sw->link_width = dual_lane ? TB_LINK_WIDTH_DUAL :
TB_LINK_WIDTH_SINGLE;
sw->rpm = force_rtd3;
if (!sw->rpm)
sw->rpm = intel_vss_is_rtd3(pkg->ep_name,


@ -46,6 +46,10 @@
#define QUIRK_AUTO_CLEAR_INT BIT(0)
#define QUIRK_E2E BIT(1)
static bool host_reset = true;
module_param(host_reset, bool, 0444);
MODULE_PARM_DESC(host_reset, "reset USBv2 host router (default: true)");
static int ring_interrupt_index(const struct tb_ring *ring)
{
int bit = ring->hop;
@ -1217,6 +1221,37 @@ static void nhi_check_iommu(struct tb_nhi *nhi)
str_enabled_disabled(port_ok));
}
static void nhi_reset(struct tb_nhi *nhi)
{
ktime_t timeout;
u32 val;
val = ioread32(nhi->iobase + REG_CAPS);
/* Reset only v2 and later routers */
if (FIELD_GET(REG_CAPS_VERSION_MASK, val) < REG_CAPS_VERSION_2)
return;
if (!host_reset) {
dev_dbg(&nhi->pdev->dev, "skipping host router reset\n");
return;
}
iowrite32(REG_RESET_HRR, nhi->iobase + REG_RESET);
msleep(100);
timeout = ktime_add_ms(ktime_get(), 500);
do {
val = ioread32(nhi->iobase + REG_RESET);
if (!(val & REG_RESET_HRR)) {
dev_warn(&nhi->pdev->dev, "host router reset successful\n");
return;
}
usleep_range(10, 20);
} while (ktime_before(ktime_get(), timeout));
dev_warn(&nhi->pdev->dev, "timeout resetting host router\n");
}
static int nhi_init_msi(struct tb_nhi *nhi)
{
struct pci_dev *pdev = nhi->pdev;
@ -1317,7 +1352,7 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
nhi->ops = (const struct tb_nhi_ops *)id->driver_data;
/* cannot fail - table is allocated in pcim_iomap_regions */
nhi->iobase = pcim_iomap_table(pdev)[0];
nhi->hop_count = ioread32(nhi->iobase + REG_HOP_COUNT) & 0x3ff;
nhi->hop_count = ioread32(nhi->iobase + REG_CAPS) & 0x3ff;
dev_dbg(dev, "total paths: %d\n", nhi->hop_count);
nhi->tx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
@ -1330,6 +1365,8 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
nhi_check_quirks(nhi);
nhi_check_iommu(nhi);
nhi_reset(nhi);
res = nhi_init_msi(nhi);
if (res)
return dev_err_probe(dev, res, "cannot enable MSI, aborting\n");
@ -1480,6 +1517,8 @@ static struct pci_device_id nhi_ids[] = {
.driver_data = (kernel_ulong_t)&icl_nhi_ops },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1),
.driver_data = (kernel_ulong_t)&icl_nhi_ops },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },
/* Any USB4 compliant host */
{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
@ -1488,6 +1527,7 @@ static struct pci_device_id nhi_ids[] = {
};
MODULE_DEVICE_TABLE(pci, nhi_ids);
MODULE_DESCRIPTION("Thunderbolt/USB4 core driver");
MODULE_LICENSE("GPL");
static struct pci_driver nhi_driver = {


@ -75,6 +75,10 @@ extern const struct tb_nhi_ops icl_nhi_ops;
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE 0x15ef
#define PCI_DEVICE_ID_INTEL_ADL_NHI0 0x463e
#define PCI_DEVICE_ID_INTEL_ADL_NHI1 0x466d
#define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI 0x5781
#define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI 0x5784
#define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HUB_80G_BRIDGE 0x5786
#define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HUB_40G_BRIDGE 0x57a4
#define PCI_DEVICE_ID_INTEL_MTL_M_NHI0 0x7eb2
#define PCI_DEVICE_ID_INTEL_MTL_P_NHI0 0x7ec2
#define PCI_DEVICE_ID_INTEL_MTL_P_NHI1 0x7ec3


@ -37,7 +37,7 @@ struct ring_desc {
/* NHI registers in bar 0 */
/*
* 16 bytes per entry, one entry for every hop (REG_HOP_COUNT)
* 16 bytes per entry, one entry for every hop (REG_CAPS)
* 00: physical pointer to an array of struct ring_desc
* 08: ring tail (set by NHI)
* 10: ring head (index of first non posted descriptor)
@ -46,7 +46,7 @@ struct ring_desc {
#define REG_TX_RING_BASE 0x00000
/*
* 16 bytes per entry, one entry for every hop (REG_HOP_COUNT)
* 16 bytes per entry, one entry for every hop (REG_CAPS)
* 00: physical pointer to an array of struct ring_desc
* 08: ring head (index of first not posted descriptor)
* 10: ring tail (set by NHI)
@ -56,7 +56,7 @@ struct ring_desc {
#define REG_RX_RING_BASE 0x08000
/*
* 32 bytes per entry, one entry for every hop (REG_HOP_COUNT)
* 32 bytes per entry, one entry for every hop (REG_CAPS)
* 00: enum_ring_flags
* 04: isoch time stamp ?? (write 0)
* ..: unknown
@ -64,7 +64,7 @@ struct ring_desc {
#define REG_TX_OPTIONS_BASE 0x19800
/*
* 32 bytes per entry, one entry for every hop (REG_HOP_COUNT)
* 32 bytes per entry, one entry for every hop (REG_CAPS)
* 00: enum ring_flags
* If RING_FLAG_E2E_FLOW_CONTROL is set then bits 13-23 must be set to
* the corresponding TX hop id.
@ -77,7 +77,7 @@ struct ring_desc {
/*
* three bitfields: tx, rx, rx overflow
* Every bitfield contains one bit for every hop (REG_HOP_COUNT).
* Every bitfield contains one bit for every hop (REG_CAPS).
* New interrupts are fired only after ALL registers have been
* read (even those containing only disabled rings).
*/
@ -87,7 +87,7 @@ struct ring_desc {
/*
* two bitfields: rx, tx
* Both bitfields contains one bit for every hop (REG_HOP_COUNT). To
* Both bitfields contains one bit for every hop (REG_CAPS). To
* enable/disable interrupts set/clear the corresponding bits.
*/
#define REG_RING_INTERRUPT_BASE 0x38200
@ -104,12 +104,17 @@ struct ring_desc {
#define REG_INT_VEC_ALLOC_REGS (32 / REG_INT_VEC_ALLOC_BITS)
/* The last 11 bits contain the number of hops supported by the NHI port. */
#define REG_HOP_COUNT 0x39640
#define REG_CAPS 0x39640
#define REG_CAPS_VERSION_MASK GENMASK(23, 16)
#define REG_CAPS_VERSION_2 0x40
#define REG_DMA_MISC 0x39864
#define REG_DMA_MISC_INT_AUTO_CLEAR BIT(2)
#define REG_DMA_MISC_DISABLE_AUTO_CLEAR BIT(17)
#define REG_RESET 0x39898
#define REG_RESET_HRR BIT(0)
#define REG_INMAIL_DATA 0x39900
#define REG_INMAIL_CMD 0x39904


@ -12,6 +12,10 @@
#include "tb.h"
#define NVM_MIN_SIZE SZ_32K
#define NVM_MAX_SIZE SZ_1M
#define NVM_DATA_DWORDS 16
/* Intel specific NVM offsets */
#define INTEL_NVM_DEVID 0x05
#define INTEL_NVM_VERSION 0x08


@ -10,6 +10,7 @@
static void quirk_force_power_link(struct tb_switch *sw)
{
sw->quirks |= QUIRK_FORCE_POWER_LINK_CONTROLLER;
tb_sw_dbg(sw, "forcing power to link controller\n");
}
static void quirk_dp_credit_allocation(struct tb_switch *sw)
@ -74,6 +75,14 @@ static const struct tb_quirk tb_quirks[] = {
quirk_usb3_maximum_bandwidth },
{ 0x8087, PCI_DEVICE_ID_INTEL_MTL_P_NHI1, 0x0000, 0x0000,
quirk_usb3_maximum_bandwidth },
{ 0x8087, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI, 0x0000, 0x0000,
quirk_usb3_maximum_bandwidth },
{ 0x8087, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI, 0x0000, 0x0000,
quirk_usb3_maximum_bandwidth },
{ 0x8087, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HUB_80G_BRIDGE, 0x0000, 0x0000,
quirk_usb3_maximum_bandwidth },
{ 0x8087, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HUB_40G_BRIDGE, 0x0000, 0x0000,
quirk_usb3_maximum_bandwidth },
/*
* CLx is not supported on AMD USB4 Yellow Carp and Pink Sardine platforms.
*/
@ -105,6 +114,7 @@ void tb_check_quirks(struct tb_switch *sw)
if (q->device && q->device != sw->device)
continue;
tb_sw_dbg(sw, "running %ps\n", q->hook);
q->hook(sw);
}
}


@ -187,10 +187,34 @@ static ssize_t nvm_authenticate_show(struct device *dev,
return ret;
}
static void tb_retimer_nvm_authenticate_status(struct tb_port *port, u32 *status)
{
int i;
tb_port_dbg(port, "reading NVM authentication status of retimers\n");
/*
* Before doing anything else, read the authentication status.
* If the retimer has it set, store it for the new retimer
* device instance.
*/
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
usb4_port_retimer_nvm_authenticate_status(port, i, &status[i]);
}
static void tb_retimer_set_inbound_sbtx(struct tb_port *port)
{
int i;
/*
* When USB4 port is online sideband communications are
* already up.
*/
if (!usb4_port_device_is_offline(port->usb4))
return;
tb_port_dbg(port, "enabling sideband transactions\n");
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
usb4_port_retimer_set_inbound_sbtx(port, i);
}
@ -199,6 +223,16 @@ static void tb_retimer_unset_inbound_sbtx(struct tb_port *port)
{
int i;
/*
* When USB4 port is offline we need to keep the sideband
* communications up to make it possible to communicate with
* the connected retimers.
*/
if (usb4_port_device_is_offline(port->usb4))
return;
tb_port_dbg(port, "disabling sideband transactions\n");
for (i = TB_MAX_RETIMER_INDEX; i >= 1; i--)
usb4_port_retimer_unset_inbound_sbtx(port, i);
}
@ -229,6 +263,13 @@ static ssize_t nvm_authenticate_store(struct device *dev,
rt->auth_status = 0;
if (val) {
/*
* When NVM authentication starts the retimer is not
* accessible so calling tb_retimer_unset_inbound_sbtx()
* will fail and therefore we do not call it. Exception
* is when the validation fails or we only write the new
* NVM image without authentication.
*/
tb_retimer_set_inbound_sbtx(rt->port);
if (val == AUTHENTICATE_ONLY) {
ret = tb_retimer_nvm_authenticate(rt, true);
@ -249,7 +290,8 @@ static ssize_t nvm_authenticate_store(struct device *dev,
}
exit_unlock:
tb_retimer_unset_inbound_sbtx(rt->port);
if (ret || val == WRITE_ONLY)
tb_retimer_unset_inbound_sbtx(rt->port);
mutex_unlock(&rt->tb->lock);
exit_rpm:
pm_runtime_mark_last_busy(&rt->dev);
@ -341,12 +383,6 @@ static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
return ret;
}
if (vendor != PCI_VENDOR_ID_INTEL && vendor != 0x8087) {
tb_port_info(port, "retimer NVM format of vendor %#x is not supported\n",
vendor);
return -EOPNOTSUPP;
}
/*
* Check that it supports NVM operations. If not then don't add
* the device at all.
@ -454,20 +490,18 @@ int tb_retimer_scan(struct tb_port *port, bool add)
if (ret)
return ret;
/*
* Immediately after sending enumerate retimers read the
* authentication status of each retimer.
*/
tb_retimer_nvm_authenticate_status(port, status);
/*
* Enable sideband channel for each retimer. We can do this
* regardless whether there is device connected or not.
*/
tb_retimer_set_inbound_sbtx(port);
/*
* Before doing anything else, read the authentication status.
* If the retimer has it set, store it for the new retimer
* device instance.
*/
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
usb4_port_retimer_nvm_authenticate_status(port, i, &status[i]);
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++) {
/*
* Last retimer is true only for the last on-board


@ -26,10 +26,6 @@ struct nvm_auth_status {
u32 status;
};
static bool clx_enabled = true;
module_param_named(clx, clx_enabled, bool, 0444);
MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)");
/*
* Hold NVM authentication failure status per switch This information
* needs to stay around even when the switch gets power cycled so we
@ -727,7 +723,7 @@ static int tb_init_port(struct tb_port *port)
* can be read from the path config space. Legacy
* devices we use hard-coded value.
*/
if (tb_switch_is_usb4(port->sw)) {
if (port->cap_usb4) {
struct tb_regs_hop hop;
if (!tb_port_read(port, &hop, TB_CFG_HOPS, 0, 2))
@ -907,15 +903,23 @@ int tb_port_get_link_speed(struct tb_port *port)
speed = (val & LANE_ADP_CS_1_CURRENT_SPEED_MASK) >>
LANE_ADP_CS_1_CURRENT_SPEED_SHIFT;
return speed == LANE_ADP_CS_1_CURRENT_SPEED_GEN3 ? 20 : 10;
switch (speed) {
case LANE_ADP_CS_1_CURRENT_SPEED_GEN4:
return 40;
case LANE_ADP_CS_1_CURRENT_SPEED_GEN3:
return 20;
default:
return 10;
}
}
/**
* tb_port_get_link_width() - Get current link width
* @port: Port to check (USB4 or CIO)
*
* Returns link width. Return values can be 1 (Single-Lane), 2 (Dual-Lane)
* or negative errno in case of failure.
* Returns link width. Return the link width as encoded in &enum
* tb_link_width or negative errno in case of failure.
*/
int tb_port_get_link_width(struct tb_port *port)
{
@ -930,11 +934,13 @@ int tb_port_get_link_width(struct tb_port *port)
if (ret)
return ret;
/* Matches the values in enum tb_link_width */
return (val & LANE_ADP_CS_1_CURRENT_WIDTH_MASK) >>
LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT;
}
static bool tb_port_is_width_supported(struct tb_port *port, int width)
static bool tb_port_is_width_supported(struct tb_port *port,
unsigned int width_mask)
{
u32 phy, widths;
int ret;
@@ -950,20 +956,25 @@ static bool tb_port_is_width_supported(struct tb_port *port, int width)
widths = (phy & LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK) >>
LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT;
return !!(widths & width);
return widths & width_mask;
}
static bool is_gen4_link(struct tb_port *port)
{
return tb_port_get_link_speed(port) > 20;
}
/**
* tb_port_set_link_width() - Set target link width of the lane adapter
* @port: Lane adapter
* @width: Target link width (%1 or %2)
* @width: Target link width
*
* Sets the target link width of the lane adapter to @width. Does not
* enable/disable lane bonding. For that call tb_port_set_lane_bonding().
*
* Return: %0 in case of success and negative errno in case of error
*/
int tb_port_set_link_width(struct tb_port *port, unsigned int width)
int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width)
{
u32 val;
int ret;
@@ -978,11 +989,14 @@ int tb_port_set_link_width(struct tb_port *port, unsigned int width)
val &= ~LANE_ADP_CS_1_TARGET_WIDTH_MASK;
switch (width) {
case 1:
case TB_LINK_WIDTH_SINGLE:
/* Gen 4 link cannot be single */
if (is_gen4_link(port))
return -EOPNOTSUPP;
val |= LANE_ADP_CS_1_TARGET_WIDTH_SINGLE <<
LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
break;
case 2:
case TB_LINK_WIDTH_DUAL:
val |= LANE_ADP_CS_1_TARGET_WIDTH_DUAL <<
LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
break;
@@ -1004,12 +1018,9 @@ int tb_port_set_link_width(struct tb_port *port, unsigned int width)
* cases one should use tb_port_lane_bonding_enable() instead to enable
* lane bonding.
*
* As a side effect sets @port->bonding accordingly (and does the same
* for lane 1 too).
*
* Return: %0 in case of success and negative errno in case of error
*/
int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
static int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
{
u32 val;
int ret;
@@ -1027,19 +1038,8 @@ int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
else
val &= ~LANE_ADP_CS_1_LB;
ret = tb_port_write(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
/*
* When lane 0 bonding is set it will affect lane 1 too so
* update both.
*/
port->bonded = bonding;
port->dual_link_port->bonded = bonding;
return 0;
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
/**
@@ -1056,36 +1056,52 @@ int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
*/
int tb_port_lane_bonding_enable(struct tb_port *port)
{
enum tb_link_width width;
int ret;
/*
* Enable lane bonding for both links if not already enabled by,
* for example, the boot firmware.
*/
ret = tb_port_get_link_width(port);
if (ret == 1) {
ret = tb_port_set_link_width(port, 2);
width = tb_port_get_link_width(port);
if (width == TB_LINK_WIDTH_SINGLE) {
ret = tb_port_set_link_width(port, TB_LINK_WIDTH_DUAL);
if (ret)
goto err_lane0;
}
ret = tb_port_get_link_width(port->dual_link_port);
if (ret == 1) {
ret = tb_port_set_link_width(port->dual_link_port, 2);
width = tb_port_get_link_width(port->dual_link_port);
if (width == TB_LINK_WIDTH_SINGLE) {
ret = tb_port_set_link_width(port->dual_link_port,
TB_LINK_WIDTH_DUAL);
if (ret)
goto err_lane0;
}
ret = tb_port_set_lane_bonding(port, true);
if (ret)
goto err_lane1;
/*
* Only set bonding if the link was not already bonded. This
* avoids the lane adapter re-entering the bonding state.
*/
if (width == TB_LINK_WIDTH_SINGLE) {
ret = tb_port_set_lane_bonding(port, true);
if (ret)
goto err_lane1;
}
/*
* When lane 0 bonding is set it will affect lane 1 too so
* update both.
*/
port->bonded = true;
port->dual_link_port->bonded = true;
return 0;
err_lane1:
tb_port_set_link_width(port->dual_link_port, 1);
tb_port_set_link_width(port->dual_link_port, TB_LINK_WIDTH_SINGLE);
err_lane0:
tb_port_set_link_width(port, 1);
tb_port_set_link_width(port, TB_LINK_WIDTH_SINGLE);
return ret;
}
@@ -1099,27 +1115,34 @@ err_lane0:
void tb_port_lane_bonding_disable(struct tb_port *port)
{
tb_port_set_lane_bonding(port, false);
tb_port_set_link_width(port->dual_link_port, 1);
tb_port_set_link_width(port, 1);
tb_port_set_link_width(port->dual_link_port, TB_LINK_WIDTH_SINGLE);
tb_port_set_link_width(port, TB_LINK_WIDTH_SINGLE);
port->dual_link_port->bonded = false;
port->bonded = false;
}
/**
* tb_port_wait_for_link_width() - Wait until link reaches specific width
* @port: Port to wait for
* @width: Expected link width (%1 or %2)
* @width_mask: Expected link width mask
* @timeout_msec: Timeout in ms for how long to wait
*
* Should be used after both ends of the link have been bonded (or
* bonding has been disabled) to wait until the link actually reaches
* the expected state. Returns %-ETIMEDOUT if the @width was not reached
* within the given timeout, %0 if it did.
* the expected state. Returns %-ETIMEDOUT if the width was not reached
* within the given timeout, %0 if it was. Can be passed a mask of
* expected widths and succeeds if any of the widths is reached.
*/
int tb_port_wait_for_link_width(struct tb_port *port, int width,
int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width_mask,
int timeout_msec)
{
ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
int ret;
/* Gen 4 link does not support single lane */
if ((width_mask & TB_LINK_WIDTH_SINGLE) && is_gen4_link(port))
return -EOPNOTSUPP;
do {
ret = tb_port_get_link_width(port);
if (ret < 0) {
@@ -1130,7 +1153,7 @@ int tb_port_wait_for_link_width(struct tb_port *port, int width,
*/
if (ret != -EACCES)
return ret;
} else if (ret == width) {
} else if (ret & width_mask) {
return 0;
}
@@ -1183,135 +1206,6 @@ int tb_port_update_credits(struct tb_port *port)
return tb_port_do_update_credits(port->dual_link_port);
}
static int __tb_port_pm_secondary_set(struct tb_port *port, bool secondary)
{
u32 phy;
int ret;
ret = tb_port_read(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
if (secondary)
phy |= LANE_ADP_CS_1_PMS;
else
phy &= ~LANE_ADP_CS_1_PMS;
return tb_port_write(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
static int tb_port_pm_secondary_enable(struct tb_port *port)
{
return __tb_port_pm_secondary_set(port, true);
}
static int tb_port_pm_secondary_disable(struct tb_port *port)
{
return __tb_port_pm_secondary_set(port, false);
}
/* Called for USB4 or Titan Ridge routers only */
static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask)
{
u32 val, mask = 0;
bool ret;
/* Don't enable CLx in case of two single-lane links */
if (!port->bonded && port->dual_link_port)
return false;
/* Don't enable CLx in case of inter-domain link */
if (port->xdomain)
return false;
if (tb_switch_is_usb4(port->sw)) {
if (!usb4_port_clx_supported(port))
return false;
} else if (!tb_lc_is_clx_supported(port)) {
return false;
}
if (clx_mask & TB_CL1) {
/* CL0s and CL1 are enabled and supported together */
mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT;
}
if (clx_mask & TB_CL2)
mask |= LANE_ADP_CS_0_CL2_SUPPORT;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_0, 1);
if (ret)
return false;
return !!(val & mask);
}
static int __tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable)
{
u32 phy, mask;
int ret;
/* CL0s and CL1 are enabled and supported together */
if (clx == TB_CL1)
mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
else
/* For now we support only CL0s and CL1. Not CL2 */
return -EOPNOTSUPP;
ret = tb_port_read(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
if (enable)
phy |= mask;
else
phy &= ~mask;
return tb_port_write(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx)
{
return __tb_port_clx_set(port, clx, false);
}
static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx)
{
return __tb_port_clx_set(port, clx, true);
}
/**
* tb_port_is_clx_enabled() - Is given CL state enabled
* @port: USB4 port to check
* @clx_mask: Mask of CL states to check
*
* Returns true if any of the given CL states is enabled for @port.
*/
bool tb_port_is_clx_enabled(struct tb_port *port, unsigned int clx_mask)
{
u32 val, mask = 0;
int ret;
if (!tb_port_clx_supported(port, clx_mask))
return false;
if (clx_mask & TB_CL1)
mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
if (clx_mask & TB_CL2)
mask |= LANE_ADP_CS_1_CL2_ENABLE;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return false;
return !!(val & mask);
}
static int tb_port_start_lane_initialization(struct tb_port *port)
{
int ret;
@@ -1911,20 +1805,57 @@ static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);
static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
char *buf)
static ssize_t rx_lanes_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_switch *sw = tb_to_switch(dev);
unsigned int width;
return sysfs_emit(buf, "%u\n", sw->link_width);
switch (sw->link_width) {
case TB_LINK_WIDTH_SINGLE:
case TB_LINK_WIDTH_ASYM_TX:
width = 1;
break;
case TB_LINK_WIDTH_DUAL:
width = 2;
break;
case TB_LINK_WIDTH_ASYM_RX:
width = 3;
break;
default:
WARN_ON_ONCE(1);
return -EINVAL;
}
return sysfs_emit(buf, "%u\n", width);
}
static DEVICE_ATTR(rx_lanes, 0444, rx_lanes_show, NULL);
/*
* Currently link has same amount of lanes both directions (1 or 2) but
* expose them separately to allow possible asymmetric links in the future.
*/
static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
static ssize_t tx_lanes_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_switch *sw = tb_to_switch(dev);
unsigned int width;
switch (sw->link_width) {
case TB_LINK_WIDTH_SINGLE:
case TB_LINK_WIDTH_ASYM_RX:
width = 1;
break;
case TB_LINK_WIDTH_DUAL:
width = 2;
break;
case TB_LINK_WIDTH_ASYM_TX:
width = 3;
break;
default:
WARN_ON_ONCE(1);
return -EINVAL;
}
return sysfs_emit(buf, "%u\n", width);
}
static DEVICE_ATTR(tx_lanes, 0444, tx_lanes_show, NULL);
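The width encoding above deliberately lets rx_lanes and tx_lanes disagree on an asymmetric link. A minimal userspace sketch of reading the two attributes follows; the device name "0-1" is an illustrative placeholder rather than something from this series, and on a TB_LINK_WIDTH_ASYM_TX link this would print tx_lanes=3 and rx_lanes=1:

#include <stdio.h>

static void print_lanes(const char *attr)
{
	char path[128], buf[8];
	FILE *f;

	/* Illustrative sysfs path; the router name depends on the topology */
	snprintf(path, sizeof(path),
		 "/sys/bus/thunderbolt/devices/0-1/%s", attr);
	f = fopen(path, "r");
	if (!f)
		return;
	if (fgets(buf, sizeof(buf), f))
		printf("%s=%s", attr, buf);
	fclose(f);
}

int main(void)
{
	print_lanes("tx_lanes");
	print_lanes("rx_lanes");
	return 0;
}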
static ssize_t nvm_authenticate_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -2189,8 +2120,9 @@ static int tb_switch_uevent(const struct device *dev, struct kobj_uevent_env *en
const struct tb_switch *sw = tb_to_switch(dev);
const char *type;
if (sw->config.thunderbolt_version == USB4_VERSION_1_0) {
if (add_uevent_var(env, "USB4_VERSION=1.0"))
if (tb_switch_is_usb4(sw)) {
if (add_uevent_var(env, "USB4_VERSION=%u.0",
usb4_switch_version(sw)))
return -ENOMEM;
}
@@ -2498,9 +2430,13 @@ int tb_switch_configure(struct tb_switch *sw)
/*
* For USB4 devices, we need to program the CM version
* accordingly so that it knows to expose all the
* additional capabilities.
* additional capabilities. Program it according to USB4
* version to avoid changing existing (v1) routers' behaviour.
*/
sw->config.cmuv = USB4_VERSION_1_0;
if (usb4_switch_version(sw) < 2)
sw->config.cmuv = ROUTER_CS_4_CMUV_V1;
else
sw->config.cmuv = ROUTER_CS_4_CMUV_V2;
sw->config.plug_events_delay = 0xa;
/* Enumerate the switch */
@@ -2530,6 +2466,22 @@ int tb_switch_configure(struct tb_switch *sw)
return tb_plug_events_active(sw, true);
}
/**
* tb_switch_configuration_valid() - Set the tunneling configuration to be valid
* @sw: Router to configure
*
* Needs to be called before any tunnels can be set up through the
* router. Can be called for any router.
*
* Returns %0 on success and negative errno otherwise.
*/
int tb_switch_configuration_valid(struct tb_switch *sw)
{
if (tb_switch_is_usb4(sw))
return usb4_switch_configuration_valid(sw);
return 0;
}
static int tb_switch_set_uuid(struct tb_switch *sw)
{
bool uid = false;
@@ -2754,9 +2706,9 @@ static int tb_switch_update_link_attributes(struct tb_switch *sw)
*/
int tb_switch_lane_bonding_enable(struct tb_switch *sw)
{
struct tb_switch *parent = tb_to_switch(sw->dev.parent);
struct tb_port *up, *down;
u64 route = tb_route(sw);
unsigned int width_mask;
int ret;
if (!route)
@@ -2766,10 +2718,10 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
return 0;
up = tb_upstream_port(sw);
down = tb_port_at(route, parent);
down = tb_switch_downstream_port(sw);
if (!tb_port_is_width_supported(up, 2) ||
!tb_port_is_width_supported(down, 2))
if (!tb_port_is_width_supported(up, TB_LINK_WIDTH_DUAL) ||
!tb_port_is_width_supported(down, TB_LINK_WIDTH_DUAL))
return 0;
ret = tb_port_lane_bonding_enable(up);
@@ -2785,7 +2737,11 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
return ret;
}
ret = tb_port_wait_for_link_width(down, 2, 100);
/* Any of these widths means the link is bonded */
width_mask = TB_LINK_WIDTH_DUAL | TB_LINK_WIDTH_ASYM_TX |
TB_LINK_WIDTH_ASYM_RX;
ret = tb_port_wait_for_link_width(down, width_mask, 100);
if (ret) {
tb_port_warn(down, "timeout enabling lane bonding\n");
return ret;
@@ -2808,8 +2764,8 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
*/
void tb_switch_lane_bonding_disable(struct tb_switch *sw)
{
struct tb_switch *parent = tb_to_switch(sw->dev.parent);
struct tb_port *up, *down;
int ret;
if (!tb_route(sw))
return;
@@ -2818,7 +2774,7 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
if (!up->bonded)
return;
down = tb_port_at(tb_route(sw), parent);
down = tb_switch_downstream_port(sw);
tb_port_lane_bonding_disable(up);
tb_port_lane_bonding_disable(down);
@@ -2827,7 +2783,8 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
* It is fine if we get other errors as the router might have
* been unplugged.
*/
if (tb_port_wait_for_link_width(down, 1, 100) == -ETIMEDOUT)
ret = tb_port_wait_for_link_width(down, TB_LINK_WIDTH_SINGLE, 100);
if (ret == -ETIMEDOUT)
tb_sw_warn(sw, "timeout disabling lane bonding\n");
tb_port_update_credits(down);
@@ -2994,6 +2951,10 @@ int tb_switch_add(struct tb_switch *sw)
if (ret)
return ret;
ret = tb_switch_clx_init(sw);
if (ret)
return ret;
ret = tb_switch_tmu_init(sw);
if (ret)
return ret;
@@ -3246,13 +3207,8 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
/*
* Actually only needed for Titan Ridge but for simplicity can be
* done for USB4 device too as CLx is re-enabled at resume.
* CL0s and CL1 are enabled and supported together.
*/
if (tb_switch_is_clx_enabled(sw, TB_CL1)) {
if (tb_switch_disable_clx(sw, TB_CL1))
tb_sw_warn(sw, "failed to disable %s on upstream port\n",
tb_switch_clx_name(TB_CL1));
}
tb_switch_clx_disable(sw);
err = tb_plug_events_active(sw, false);
if (err)
@@ -3474,234 +3430,6 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw,
return NULL;
}
static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
int ret;
if (!tb_route(sw))
return 0;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_port_pm_secondary_enable(up);
if (ret)
return ret;
return tb_port_pm_secondary_disable(down);
}
static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
{
struct tb_switch *parent = tb_switch_parent(sw);
bool up_clx_support, down_clx_support;
struct tb_port *up, *down;
int ret;
if (!tb_switch_is_clx_supported(sw))
return 0;
/*
* Enable CLx for host router's downstream port as part of the
* downstream router enabling procedure.
*/
if (!tb_route(sw))
return 0;
/* Enable CLx only for first hop router (depth = 1) */
if (tb_route(parent))
return 0;
ret = tb_switch_pm_secondary_resolve(sw);
if (ret)
return ret;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
up_clx_support = tb_port_clx_supported(up, clx);
down_clx_support = tb_port_clx_supported(down, clx);
tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx),
up_clx_support ? "" : "not ");
tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx),
down_clx_support ? "" : "not ");
if (!up_clx_support || !down_clx_support)
return -EOPNOTSUPP;
ret = tb_port_clx_enable(up, clx);
if (ret)
return ret;
ret = tb_port_clx_enable(down, clx);
if (ret) {
tb_port_clx_disable(up, clx);
return ret;
}
ret = tb_switch_mask_clx_objections(sw);
if (ret) {
tb_port_clx_disable(up, clx);
tb_port_clx_disable(down, clx);
return ret;
}
sw->clx = clx;
tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx));
return 0;
}
/**
* tb_switch_enable_clx() - Enable CLx on upstream port of specified router
* @sw: Router to enable CLx for
* @clx: The CLx state to enable
*
* Enable CLx state only for first hop router. That is the most common
* use-case, that is intended for better thermal management, and so helps
* to improve performance. CLx is enabled only if both sides of the link
* support CLx, and if both sides of the link are not configured as two
* single lane links and only if the link is not inter-domain link. The
* complete set of conditions is described in CM Guide 1.0 section 8.1.
*
* Return: Returns 0 on success or an error code on failure.
*/
int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
{
struct tb_switch *root_sw = sw->tb->root_switch;
if (!clx_enabled)
return 0;
/*
* CLx is not enabled and validated on Intel USB4 platforms before
* Alder Lake.
*/
if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw))
return 0;
switch (clx) {
case TB_CL1:
/* CL0s and CL1 are enabled and supported together */
return __tb_switch_enable_clx(sw, clx);
default:
return -EOPNOTSUPP;
}
}
static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
int ret;
if (!tb_switch_is_clx_supported(sw))
return 0;
/*
* Disable CLx for host router's downstream port as part of the
* downstream router enabling procedure.
*/
if (!tb_route(sw))
return 0;
/* Disable CLx only for first hop router (depth = 1) */
if (tb_route(parent))
return 0;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_port_clx_disable(up, clx);
if (ret)
return ret;
ret = tb_port_clx_disable(down, clx);
if (ret)
return ret;
sw->clx = TB_CLX_DISABLE;
tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx));
return 0;
}
/**
* tb_switch_disable_clx() - Disable CLx on upstream port of specified router
* @sw: Router to disable CLx for
* @clx: The CLx state to disable
*
* Return: Returns 0 on success or an error code on failure.
*/
int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
{
if (!clx_enabled)
return 0;
switch (clx) {
case TB_CL1:
/* CL0s and CL1 are enabled and supported together */
return __tb_switch_disable_clx(sw, clx);
default:
return -EOPNOTSUPP;
}
}
/**
* tb_switch_mask_clx_objections() - Mask CLx objections for a router
* @sw: Router to mask objections for
*
* Mask the objections coming from the second depth routers in order to
* stop these objections from interfering with the CLx states of the first
* depth link.
*/
int tb_switch_mask_clx_objections(struct tb_switch *sw)
{
int up_port = sw->config.upstream_port_number;
u32 offset, val[2], mask_obj, unmask_obj;
int ret, i;
/* Only Titan Ridge of pre-USB4 devices support CLx states */
if (!tb_switch_is_titan_ridge(sw))
return 0;
if (!tb_route(sw))
return 0;
/*
* In Titan Ridge there are only 2 dual-lane Thunderbolt ports:
* Port A consists of lane adapters 1,2 and
* Port B consists of lane adapters 3,4
* If upstream port is A, (lanes are 1,2), we mask objections from
* port B (lanes 3,4) and unmask objections from Port A and vice-versa.
*/
if (up_port == 1) {
mask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
offset = TB_LOW_PWR_C1_CL1;
} else {
mask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
offset = TB_LOW_PWR_C3_CL1;
}
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->cap_lp + offset, ARRAY_SIZE(val));
if (ret)
return ret;
for (i = 0; i < ARRAY_SIZE(val); i++) {
val[i] |= mask_obj;
val[i] &= ~unmask_obj;
}
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->cap_lp + offset, ARRAY_SIZE(val));
}
/*
* Can be used for read/write a specified PCIe bridge for any Thunderbolt 3
* device. For now used only for Titan Ridge.


@@ -131,7 +131,7 @@ tb_attach_bandwidth_group(struct tb_cm *tcm, struct tb_port *in,
static void tb_discover_bandwidth_group(struct tb_cm *tcm, struct tb_port *in,
struct tb_port *out)
{
if (usb4_dp_port_bw_mode_enabled(in)) {
if (usb4_dp_port_bandwidth_mode_enabled(in)) {
int index, i;
index = usb4_dp_port_group_id(in);
@@ -240,6 +240,147 @@ static void tb_discover_dp_resources(struct tb *tb)
}
}
/* Enables CL states up to host router */
static int tb_enable_clx(struct tb_switch *sw)
{
struct tb_cm *tcm = tb_priv(sw->tb);
unsigned int clx = TB_CL0S | TB_CL1;
const struct tb_tunnel *tunnel;
int ret;
/*
* Currently only enable CLx for the first link. This is enough
* to allow the CPU to save energy at least on Intel hardware
* and makes it slightly simpler to implement. We may change
* this in the future to cover the whole topology if it turns
* out to be beneficial.
*/
while (sw && sw->config.depth > 1)
sw = tb_switch_parent(sw);
if (!sw)
return 0;
if (sw->config.depth != 1)
return 0;
/*
* If we are re-enabling then check if there is an active DMA
* tunnel and in that case bail out.
*/
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
if (tb_tunnel_is_dma(tunnel)) {
if (tb_tunnel_port_on_path(tunnel, tb_upstream_port(sw)))
return 0;
}
}
/*
* Initially try with CL2. If that's not supported by the
* topology try with CL0s and CL1 and then give up.
*/
ret = tb_switch_clx_enable(sw, clx | TB_CL2);
if (ret == -EOPNOTSUPP)
ret = tb_switch_clx_enable(sw, clx);
return ret == -EOPNOTSUPP ? 0 : ret;
}
/* Disables CL states up to the host router */
static void tb_disable_clx(struct tb_switch *sw)
{
do {
if (tb_switch_clx_disable(sw) < 0)
tb_sw_warn(sw, "failed to disable CL states\n");
sw = tb_switch_parent(sw);
} while (sw);
}
static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data)
{
struct tb_switch *sw;
sw = tb_to_switch(dev);
if (!sw)
return 0;
if (tb_switch_tmu_is_configured(sw, TB_SWITCH_TMU_MODE_LOWRES)) {
enum tb_switch_tmu_mode mode;
int ret;
if (tb_switch_clx_is_enabled(sw, TB_CL1))
mode = TB_SWITCH_TMU_MODE_HIFI_UNI;
else
mode = TB_SWITCH_TMU_MODE_HIFI_BI;
ret = tb_switch_tmu_configure(sw, mode);
if (ret)
return ret;
return tb_switch_tmu_enable(sw);
}
return 0;
}
static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
{
struct tb_switch *sw;
if (!tunnel)
return;
/*
* Once the first DP tunnel is established we change the TMU
* accuracy of first depth child routers (and the host router)
* to the highest. This is needed for the DP tunneling to work
* but also allows CL0s.
*
* If both routers are v2 then we don't need to do anything as
* they are using enhanced TMU mode that allows all CLx.
*/
sw = tunnel->tb->root_switch;
device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
}
static int tb_enable_tmu(struct tb_switch *sw)
{
int ret;
/*
* If both routers at the end of the link are v2 we simply
* enable the enhanced uni-directional mode. That covers all
* the CL states. For v1 and before we need to use the normal
* rate to allow CL1 (when supported). Otherwise we keep the TMU
* running at the highest accuracy.
*/
ret = tb_switch_tmu_configure(sw,
TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI);
if (ret == -EOPNOTSUPP) {
if (tb_switch_clx_is_enabled(sw, TB_CL1))
ret = tb_switch_tmu_configure(sw,
TB_SWITCH_TMU_MODE_LOWRES);
else
ret = tb_switch_tmu_configure(sw,
TB_SWITCH_TMU_MODE_HIFI_BI);
}
if (ret)
return ret;
/* If it is already enabled in correct mode, don't touch it */
if (tb_switch_tmu_is_enabled(sw))
return 0;
ret = tb_switch_tmu_disable(sw);
if (ret)
return ret;
ret = tb_switch_tmu_post_time(sw);
if (ret)
return ret;
return tb_switch_tmu_enable(sw);
}
static void tb_switch_discover_tunnels(struct tb_switch *sw,
struct list_head *list,
bool alloc_hopids)
@@ -253,13 +394,7 @@ static void tb_switch_discover_tunnels(struct tb_switch *sw,
switch (port->config.type) {
case TB_TYPE_DP_HDMI_IN:
tunnel = tb_tunnel_discover_dp(tb, port, alloc_hopids);
/*
* In case of DP tunnel exists, change host router's
* 1st children TMU mode to HiFi for CL0s to work.
*/
if (tunnel)
tb_switch_enable_tmu_1st_child(tb->root_switch,
TB_SWITCH_TMU_RATE_HIFI);
tb_increase_tmu_accuracy(tunnel);
break;
case TB_TYPE_PCIE_DOWN:
@@ -357,25 +492,6 @@ static void tb_scan_xdomain(struct tb_port *port)
}
}
static int tb_enable_tmu(struct tb_switch *sw)
{
int ret;
/* If it is already enabled in correct mode, don't touch it */
if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
return 0;
ret = tb_switch_tmu_disable(sw);
if (ret)
return ret;
ret = tb_switch_tmu_post_time(sw);
if (ret)
return ret;
return tb_switch_tmu_enable(sw);
}
/**
* tb_find_unused_port() - return the first inactive port on @sw
* @sw: Switch to find the port on
@@ -480,7 +596,8 @@ static int tb_available_bandwidth(struct tb *tb, struct tb_port *src_port,
usb3_consumed_down = 0;
}
*available_up = *available_down = 40000;
/* Maximum possible bandwidth of an asymmetric Gen 4 link is 120 Gb/s */
*available_up = *available_down = 120000;
/* Find the minimum available bandwidth over all links */
tb_for_each_port_on_path(src_port, dst_port, port) {
@@ -491,18 +608,45 @@ static int tb_available_bandwidth(struct tb *tb, struct tb_port *src_port,
if (tb_is_upstream_port(port)) {
link_speed = port->sw->link_speed;
/*
* sw->link_width is from upstream perspective
* so we use the opposite for downstream of the
* host router.
*/
if (port->sw->link_width == TB_LINK_WIDTH_ASYM_TX) {
up_bw = link_speed * 3 * 1000;
down_bw = link_speed * 1 * 1000;
} else if (port->sw->link_width == TB_LINK_WIDTH_ASYM_RX) {
up_bw = link_speed * 1 * 1000;
down_bw = link_speed * 3 * 1000;
} else {
up_bw = link_speed * port->sw->link_width * 1000;
down_bw = up_bw;
}
} else {
link_speed = tb_port_get_link_speed(port);
if (link_speed < 0)
return link_speed;
link_width = tb_port_get_link_width(port);
if (link_width < 0)
return link_width;
if (link_width == TB_LINK_WIDTH_ASYM_TX) {
up_bw = link_speed * 1 * 1000;
down_bw = link_speed * 3 * 1000;
} else if (link_width == TB_LINK_WIDTH_ASYM_RX) {
up_bw = link_speed * 3 * 1000;
down_bw = link_speed * 1 * 1000;
} else {
up_bw = link_speed * link_width * 1000;
down_bw = up_bw;
}
}
link_width = port->bonded ? 2 : 1;
up_bw = link_speed * link_width * 1000; /* Mb/s */
/* Leave 10% guard band */
up_bw -= up_bw / 10;
down_bw = up_bw;
down_bw -= down_bw / 10;
tb_port_dbg(port, "link total bandwidth %d/%d Mb/s\n", up_bw,
down_bw);
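For a concrete instance of the arithmetic above: on an asymmetric Gen 4 link running 40 Gb/s per lane with three transmit lanes and one receive lane, the numbers work out as in this self-contained sketch (the per-lane rate and the 10% guard band mirror the code above; nothing else is assumed):

#include <stdio.h>

int main(void)
{
	unsigned int link_speed = 40;			/* Gen 4, Gb/s per lane */
	unsigned int up_bw = link_speed * 3 * 1000;	/* TB_LINK_WIDTH_ASYM_TX: 3 lanes up */
	unsigned int down_bw = link_speed * 1 * 1000;	/* 1 lane down */

	/* Leave 10% guard band, as the driver does */
	up_bw -= up_bw / 10;
	down_bw -= down_bw / 10;
	printf("%u/%u Mb/s\n", up_bw, down_bw);		/* prints 108000/36000 Mb/s */
	return 0;
}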
@@ -628,7 +772,7 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
* Look up available down port. Since we are chaining it should
* be found right above this switch.
*/
port = tb_port_at(tb_route(sw), parent);
port = tb_switch_downstream_port(sw);
down = tb_find_usb3_down(parent, port);
if (!down)
return 0;
@@ -739,7 +883,6 @@ static void tb_scan_port(struct tb_port *port)
struct tb_port *upstream_port;
bool discovery = false;
struct tb_switch *sw;
int ret;
if (tb_is_upstream_port(port))
return;
@@ -838,28 +981,20 @@ static void tb_scan_port(struct tb_port *port)
* CL0s and CL1 are enabled and supported together.
* Silently ignore CLx enabling in case CLx is not supported.
*/
if (discovery) {
if (discovery)
tb_sw_dbg(sw, "discovery, not touching CL states\n");
} else {
ret = tb_switch_enable_clx(sw, TB_CL1);
if (ret && ret != -EOPNOTSUPP)
tb_sw_warn(sw, "failed to enable %s on upstream port\n",
tb_switch_clx_name(TB_CL1));
}
if (tb_switch_is_clx_enabled(sw, TB_CL1))
/*
* To support highest CLx state, we set router's TMU to
* Normal-Uni mode.
*/
tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
else
/* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/
tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
else if (tb_enable_clx(sw))
tb_sw_warn(sw, "failed to enable CL states\n");
if (tb_enable_tmu(sw))
tb_sw_warn(sw, "failed to enable TMU\n");
/*
* Configuration valid needs to be set after the TMU has been
* enabled for the upstream port of the router so we do it here.
*/
tb_switch_configuration_valid(sw);
/* Scan upstream retimers */
tb_retimer_scan(upstream_port, true);
@@ -1034,7 +1169,7 @@ tb_recalc_estimated_bandwidth_for_group(struct tb_bandwidth_group *group)
struct tb_tunnel *tunnel;
struct tb_port *out;
if (!usb4_dp_port_bw_mode_enabled(in))
if (!usb4_dp_port_bandwidth_mode_enabled(in))
continue;
tunnel = tb_find_tunnel(tb, TB_TUNNEL_DP, in, NULL);
@@ -1082,7 +1217,7 @@ tb_recalc_estimated_bandwidth_for_group(struct tb_bandwidth_group *group)
else
estimated_bw = estimated_up;
if (usb4_dp_port_set_estimated_bw(in, estimated_bw))
if (usb4_dp_port_set_estimated_bandwidth(in, estimated_bw))
tb_port_warn(in, "failed to update estimated bandwidth\n");
}
@@ -1263,8 +1398,7 @@ static void tb_tunnel_dp(struct tb *tb)
* In case of DP tunnel exists, change host router's 1st children
* TMU mode to HiFi for CL0s to work.
*/
tb_switch_enable_tmu_1st_child(tb->root_switch, TB_SWITCH_TMU_RATE_HIFI);
tb_increase_tmu_accuracy(tunnel);
return;
err_free:
@@ -1378,7 +1512,6 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
{
struct tb_port *up, *down, *port;
struct tb_cm *tcm = tb_priv(tb);
struct tb_switch *parent_sw;
struct tb_tunnel *tunnel;
up = tb_switch_find_port(sw, TB_TYPE_PCIE_UP);
@@ -1389,9 +1522,8 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
* Look up available down port. Since we are chaining it should
* be found right above this switch.
*/
parent_sw = tb_to_switch(sw->dev.parent);
port = tb_port_at(tb_route(sw), parent_sw);
down = tb_find_pcie_down(parent_sw, port);
port = tb_switch_downstream_port(sw);
down = tb_find_pcie_down(tb_switch_parent(sw), port);
if (!down)
return 0;
@@ -1428,30 +1560,45 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
struct tb_port *nhi_port, *dst_port;
struct tb_tunnel *tunnel;
struct tb_switch *sw;
int ret;
sw = tb_to_switch(xd->dev.parent);
dst_port = tb_port_at(xd->route, sw);
nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI);
mutex_lock(&tb->lock);
/*
* When tunneling DMA paths the link should not enter CL states
* so disable them now.
*/
tb_disable_clx(sw);
tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, transmit_path,
transmit_ring, receive_path, receive_ring);
if (!tunnel) {
mutex_unlock(&tb->lock);
return -ENOMEM;
ret = -ENOMEM;
goto err_clx;
}
if (tb_tunnel_activate(tunnel)) {
tb_port_info(nhi_port,
"DMA tunnel activation failed, aborting\n");
tb_tunnel_free(tunnel);
mutex_unlock(&tb->lock);
return -EIO;
ret = -EIO;
goto err_free;
}
list_add_tail(&tunnel->list, &tcm->tunnel_list);
mutex_unlock(&tb->lock);
return 0;
err_free:
tb_tunnel_free(tunnel);
err_clx:
tb_enable_clx(sw);
mutex_unlock(&tb->lock);
return ret;
}
static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
@@ -1477,6 +1624,13 @@ static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
receive_path, receive_ring))
tb_deactivate_and_free_tunnel(tunnel);
}
/*
* Try to re-enable CL states now; it is OK if this fails
* because we may still have another DMA tunnel active through
* the same host router USB4 downstream port.
*/
tb_enable_clx(sw);
}
static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
@@ -1758,12 +1912,12 @@ static void tb_handle_dp_bandwidth_request(struct work_struct *work)
tb_port_dbg(in, "handling bandwidth allocation request\n");
if (!usb4_dp_port_bw_mode_enabled(in)) {
if (!usb4_dp_port_bandwidth_mode_enabled(in)) {
tb_port_warn(in, "bandwidth allocation mode not enabled\n");
goto unlock;
}
ret = usb4_dp_port_requested_bw(in);
ret = usb4_dp_port_requested_bandwidth(in);
if (ret < 0) {
if (ret == -ENODATA)
tb_port_dbg(in, "no bandwidth request active\n");
@@ -1830,17 +1984,26 @@ static void tb_queue_dp_bandwidth_request(struct tb *tb, u64 route, u8 port)
static void tb_handle_notification(struct tb *tb, u64 route,
const struct cfg_error_pkg *error)
{
if (tb_cfg_ack_notification(tb->ctl, route, error))
tb_warn(tb, "could not ack notification on %llx\n", route);
switch (error->error) {
case TB_CFG_ERROR_PCIE_WAKE:
case TB_CFG_ERROR_DP_CON_CHANGE:
case TB_CFG_ERROR_DPTX_DISCOVERY:
if (tb_cfg_ack_notification(tb->ctl, route, error))
tb_warn(tb, "could not ack notification on %llx\n",
route);
break;
case TB_CFG_ERROR_DP_BW:
if (tb_cfg_ack_notification(tb->ctl, route, error))
tb_warn(tb, "could not ack notification on %llx\n",
route);
tb_queue_dp_bandwidth_request(tb, route, error->port);
break;
default:
/* Ack is enough */
return;
/* Ignore for now */
break;
}
}
@@ -1955,8 +2118,7 @@ static int tb_start(struct tb *tb)
* To support highest CLx state, we set host router's TMU to
* Normal mode.
*/
tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_RATE_NORMAL,
false);
tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_MODE_LOWRES);
/* Enable TMU if it is off */
tb_switch_tmu_enable(tb->root_switch);
/* Full scan to discover devices added before the driver was loaded. */
@@ -1997,34 +2159,19 @@ static int tb_suspend_noirq(struct tb *tb)
static void tb_restore_children(struct tb_switch *sw)
{
struct tb_port *port;
int ret;
/* No need to restore if the router is already unplugged */
if (sw->is_unplugged)
return;
/*
* CL0s and CL1 are enabled and supported together.
* Silently ignore CLx re-enabling in case CLx is not supported.
*/
ret = tb_switch_enable_clx(sw, TB_CL1);
if (ret && ret != -EOPNOTSUPP)
tb_sw_warn(sw, "failed to re-enable %s on upstream port\n",
tb_switch_clx_name(TB_CL1));
if (tb_switch_is_clx_enabled(sw, TB_CL1))
/*
* To support highest CLx state, we set router's TMU to
* Normal-Uni mode.
*/
tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
else
/* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/
tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
if (tb_enable_clx(sw))
tb_sw_warn(sw, "failed to re-enable CL states\n");
if (tb_enable_tmu(sw))
tb_sw_warn(sw, "failed to restore TMU configuration\n");
tb_switch_configuration_valid(sw);
tb_switch_for_each_port(sw, port) {
if (!tb_port_has_remote(port) && !port->xdomain)
continue;


@@ -19,10 +19,6 @@
#include "ctl.h"
#include "dma_port.h"
#define NVM_MIN_SIZE SZ_32K
#define NVM_MAX_SIZE SZ_512K
#define NVM_DATA_DWORDS 16
/* Keep link controller awake during update */
#define QUIRK_FORCE_POWER_LINK_CONTROLLER BIT(0)
/* Disable CLx if not supported */
@@ -77,51 +73,37 @@ enum tb_nvm_write_ops {
#define USB4_SWITCH_MAX_DEPTH 5
/**
* enum tb_switch_tmu_rate - TMU refresh rate
* @TB_SWITCH_TMU_RATE_OFF: %0 (Disable Time Sync handshake)
* @TB_SWITCH_TMU_RATE_HIFI: %16 us time interval between successive
* transmission of the Delay Request TSNOS
* (Time Sync Notification Ordered Set) on a Link
* @TB_SWITCH_TMU_RATE_NORMAL: %1 ms time interval between successive
* transmission of the Delay Request TSNOS on
* a Link
* enum tb_switch_tmu_mode - TMU mode
* @TB_SWITCH_TMU_MODE_OFF: TMU is off
* @TB_SWITCH_TMU_MODE_LOWRES: Uni-directional, normal mode
* @TB_SWITCH_TMU_MODE_HIFI_UNI: Uni-directional, HiFi mode
* @TB_SWITCH_TMU_MODE_HIFI_BI: Bi-directional, HiFi mode
* @TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI: Enhanced Uni-directional, MedRes mode
*
* Ordering is based on TMU accuracy level (highest last).
*/
enum tb_switch_tmu_rate {
TB_SWITCH_TMU_RATE_OFF = 0,
TB_SWITCH_TMU_RATE_HIFI = 16,
TB_SWITCH_TMU_RATE_NORMAL = 1000,
enum tb_switch_tmu_mode {
TB_SWITCH_TMU_MODE_OFF,
TB_SWITCH_TMU_MODE_LOWRES,
TB_SWITCH_TMU_MODE_HIFI_UNI,
TB_SWITCH_TMU_MODE_HIFI_BI,
TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI,
};
/**
* struct tb_switch_tmu - Structure holding switch TMU configuration
* struct tb_switch_tmu - Structure holding router TMU configuration
* @cap: Offset to the TMU capability (%0 if not found)
* @has_ucap: Does the switch support uni-directional mode
* @rate: TMU refresh rate related to upstream switch. In case of root
* switch this holds the domain rate. Reflects the HW setting.
* @unidirectional: Is the TMU in uni-directional or bi-directional mode
* related to upstream switch. Don't care for root switch.
* Reflects the HW setting.
* @unidirectional_request: Is the new TMU mode: uni-directional or bi-directional
* that is requested to be set. Related to upstream switch.
* Don't care for root switch.
* @rate_request: TMU new refresh rate related to upstream switch that is
* requested to be set. In case of root switch, this holds
* the new domain rate that is requested to be set.
* @mode: TMU mode related to the upstream router. Reflects the HW
* setting. Don't care for host router.
* @mode_request: TMU mode requested to be set. Related to upstream router.
* Don't care for host router.
*/
struct tb_switch_tmu {
int cap;
bool has_ucap;
enum tb_switch_tmu_rate rate;
bool unidirectional;
bool unidirectional_request;
enum tb_switch_tmu_rate rate_request;
};
enum tb_clx {
TB_CLX_DISABLE,
/* CL0s and CL1 are enabled and supported together */
TB_CL1 = BIT(0),
TB_CL2 = BIT(1),
enum tb_switch_tmu_mode mode;
enum tb_switch_tmu_mode mode_request;
};
/**
@@ -142,7 +124,7 @@ enum tb_clx {
* @vendor_name: Name of the vendor (or %NULL if not known)
* @device_name: Name of the device (or %NULL if not known)
* @link_speed: Speed of the link in Gb/s
* @link_width: Width of the link (1 or 2)
* @link_width: Width of the upstream facing link
* @link_usb4: Upstream link is USB4
* @generation: Switch Thunderbolt generation
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
@@ -174,12 +156,17 @@ enum tb_clx {
* @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN
* @max_pcie_credits: Router preferred number of buffers for PCIe
* @max_dma_credits: Router preferred number of buffers for DMA/P2P
* @clx: CLx state on the upstream link of the router
* @clx: CLx states on the upstream link of the router
*
* When the switch is being added or removed to the domain (other
* switches) you need to have domain lock held.
*
* In USB4 terminology this structure represents a router.
*
* Note @link_width is not the same as whether the link is bonded or not.
* For Gen 4 links the link is also bonded when it is asymmetric. The
* correct way to find out whether the link is bonded or not is to look
* at the @bonded field of the upstream port.
*/
struct tb_switch {
struct device dev;
@@ -195,7 +182,7 @@ struct tb_switch {
const char *vendor_name;
const char *device_name;
unsigned int link_speed;
unsigned int link_width;
enum tb_link_width link_width;
bool link_usb4;
unsigned int generation;
int cap_plug_events;
@@ -225,7 +212,7 @@ struct tb_switch {
unsigned int min_dp_main_credits;
unsigned int max_pcie_credits;
unsigned int max_dma_credits;
enum tb_clx clx;
unsigned int clx;
};
/**
@@ -455,6 +442,11 @@ struct tb_path {
#define TB_WAKE_ON_PCIE BIT(4)
#define TB_WAKE_ON_DP BIT(5)
/* CL states */
#define TB_CL0S BIT(0)
#define TB_CL1 BIT(1)
#define TB_CL2 BIT(2)
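Because the CLx interface now takes a bitmask rather than a single enum value, a caller can request several states at once and fall back when the topology rejects one. A minimal sketch of that pattern, mirroring what tb_enable_clx() in tb.c does with these bits:

/* Request CL0s/CL1 plus CL2, falling back when CL2 is unsupported */
static int example_enable_clx(struct tb_switch *sw)
{
	unsigned int clx = TB_CL0S | TB_CL1;
	int ret;

	ret = tb_switch_clx_enable(sw, clx | TB_CL2);
	if (ret == -EOPNOTSUPP)
		ret = tb_switch_clx_enable(sw, clx);
	return ret == -EOPNOTSUPP ? 0 : ret;
}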
/**
* struct tb_cm_ops - Connection manager specific operations vector
* @driver_ready: Called right after control channel is started. Used by
@@ -802,6 +794,7 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
struct device *parent, u64 route);
int tb_switch_configure(struct tb_switch *sw);
int tb_switch_configuration_valid(struct tb_switch *sw);
int tb_switch_add(struct tb_switch *sw);
void tb_switch_remove(struct tb_switch *sw);
void tb_switch_suspend(struct tb_switch *sw, bool runtime);
@@ -857,6 +850,20 @@ static inline struct tb_switch *tb_switch_parent(struct tb_switch *sw)
return tb_to_switch(sw->dev.parent);
}
/**
* tb_switch_downstream_port() - Return downstream facing port of parent router
* @sw: Device router pointer
*
* Only call for device routers. Returns the downstream facing port of
* the parent router.
*/
static inline struct tb_port *tb_switch_downstream_port(struct tb_switch *sw)
{
if (WARN_ON(!tb_route(sw)))
return NULL;
return tb_port_at(tb_route(sw), tb_switch_parent(sw));
}
static inline bool tb_switch_is_light_ridge(const struct tb_switch *sw)
{
return sw->config.vendor_id == PCI_VENDOR_ID_INTEL &&
@@ -935,17 +942,6 @@ static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw)
return false;
}
/**
* tb_switch_is_usb4() - Is the switch USB4 compliant
* @sw: Switch to check
*
* Returns true if the @sw is USB4 compliant router, false otherwise.
*/
static inline bool tb_switch_is_usb4(const struct tb_switch *sw)
{
return sw->config.thunderbolt_version == USB4_VERSION_1_0;
}
/**
* tb_switch_is_icm() - Is the switch handled by ICM firmware
* @sw: Switch to check
@@ -973,68 +969,58 @@ int tb_switch_tmu_init(struct tb_switch *sw);
int tb_switch_tmu_post_time(struct tb_switch *sw);
int tb_switch_tmu_disable(struct tb_switch *sw);
int tb_switch_tmu_enable(struct tb_switch *sw);
void tb_switch_tmu_configure(struct tb_switch *sw,
enum tb_switch_tmu_rate rate,
bool unidirectional);
void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
enum tb_switch_tmu_rate rate);
int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_mode mode);
/**
* tb_switch_tmu_is_configured() - Is given TMU mode configured
* @sw: Router whose mode to check
* @mode: Mode to check
*
* Checks if the given router TMU mode is configured to @mode. Note the
* router TMU might not be enabled in this mode.
*/
static inline bool tb_switch_tmu_is_configured(const struct tb_switch *sw,
enum tb_switch_tmu_mode mode)
{
return sw->tmu.mode_request == mode;
}
/**
* tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
* @sw: Router whose TMU mode to check
* @unidirectional: If uni-directional (bi-directional otherwise)
*
* Return true if hardware TMU configuration matches the one passed in
* as parameter. That is HiFi/Normal and either uni-directional or bi-directional.
* Return true if hardware TMU configuration matches the requested
* configuration (and is not %TB_SWITCH_TMU_MODE_OFF).
*/
static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw,
bool unidirectional)
static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
{
return sw->tmu.rate == sw->tmu.rate_request &&
sw->tmu.unidirectional == unidirectional;
return sw->tmu.mode != TB_SWITCH_TMU_MODE_OFF &&
sw->tmu.mode == sw->tmu.mode_request;
}
static inline const char *tb_switch_clx_name(enum tb_clx clx)
{
switch (clx) {
/* CL0s and CL1 are enabled and supported together */
case TB_CL1:
return "CL0s/CL1";
default:
return "unknown";
}
}
bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx);
int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx);
int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx);
int tb_switch_clx_init(struct tb_switch *sw);
bool tb_switch_clx_is_supported(const struct tb_switch *sw);
int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx);
int tb_switch_clx_disable(struct tb_switch *sw);
/**
* tb_switch_is_clx_enabled() - Checks if the CLx is enabled
* tb_switch_clx_is_enabled() - Checks if the CLx is enabled
* @sw: Router to check for the CLx
* @clx: The CLx state to check for
* @clx: The CLx states to check for
*
* Checks if the specified CLx is enabled on the router upstream link.
* Returns true if any of the given states is enabled.
*
* Not applicable for a host router.
*/
static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw,
enum tb_clx clx)
static inline bool tb_switch_clx_is_enabled(const struct tb_switch *sw,
unsigned int clx)
{
return sw->clx == clx;
return sw->clx & clx;
}
/**
* tb_switch_is_clx_supported() - Is CLx supported on this type of router
* @sw: The router to check CLx support for
*/
static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw)
{
if (sw->quirks & QUIRK_NO_CLX)
return false;
return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
}
int tb_switch_mask_clx_objections(struct tb_switch *sw);
int tb_switch_pcie_l1_enable(struct tb_switch *sw);
int tb_switch_xhci_connect(struct tb_switch *sw);
@@ -1073,14 +1059,12 @@ static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
int tb_port_get_link_speed(struct tb_port *port);
int tb_port_get_link_width(struct tb_port *port);
int tb_port_set_link_width(struct tb_port *port, unsigned int width);
int tb_port_set_lane_bonding(struct tb_port *port, bool bonding);
int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width);
int tb_port_lane_bonding_enable(struct tb_port *port);
void tb_port_lane_bonding_disable(struct tb_port *port);
int tb_port_wait_for_link_width(struct tb_port *port, int width,
int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width_mask,
int timeout_msec);
int tb_port_update_credits(struct tb_port *port);
bool tb_port_is_clx_enabled(struct tb_port *port, unsigned int clx);
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
@@ -1183,6 +1167,17 @@ static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
return tb_to_switch(xd->dev.parent);
}
/**
* tb_xdomain_downstream_port() - Return downstream facing port of parent router
* @xd: Xdomain pointer
*
* Returns the downstream port the XDomain is connected to.
*/
static inline struct tb_port *tb_xdomain_downstream_port(struct tb_xdomain *xd)
{
return tb_port_at(xd->route, tb_xdomain_parent(xd));
}
int tb_retimer_nvm_read(struct tb_retimer *rt, unsigned int address, void *buf,
size_t size);
int tb_retimer_scan(struct tb_port *port, bool add);
@@ -1200,7 +1195,31 @@ static inline struct tb_retimer *tb_to_retimer(struct device *dev)
return NULL;
}
/**
* usb4_switch_version() - Returns USB4 version of the router
* @sw: Router to check
*
* Returns major version of USB4 router (%1 for v1, %2 for v2 and so
* on). Can be called for a pre-USB4 router too and in that case returns %0.
*/
static inline unsigned int usb4_switch_version(const struct tb_switch *sw)
{
return FIELD_GET(USB4_VERSION_MAJOR_MASK, sw->config.thunderbolt_version);
}
/**
* tb_switch_is_usb4() - Is the switch USB4 compliant
* @sw: Switch to check
*
* Returns true if the @sw is USB4 compliant router, false otherwise.
*/
static inline bool tb_switch_is_usb4(const struct tb_switch *sw)
{
return usb4_switch_version(sw) > 0;
}
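A short worked decode of the version field used by the two helpers above. The raw register values follow directly from USB4_VERSION_MAJOR_MASK being GENMASK(7, 5); they are illustrations, not values quoted from hardware:

/*
 * FIELD_GET(GENMASK(7, 5), 0x20) == 1  -> USB4 v1 router
 * FIELD_GET(GENMASK(7, 5), 0x40) == 2  -> USB4 v2 router
 * FIELD_GET(GENMASK(7, 5), 0x00) == 0  -> pre-USB4 (Thunderbolt) device
 */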
int usb4_switch_setup(struct tb_switch *sw);
int usb4_switch_configuration_valid(struct tb_switch *sw);
int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size);
@@ -1273,19 +1292,22 @@ int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw);
int usb4_dp_port_set_cm_id(struct tb_port *port, int cm_id);
bool usb4_dp_port_bw_mode_supported(struct tb_port *port);
bool usb4_dp_port_bw_mode_enabled(struct tb_port *port);
int usb4_dp_port_set_cm_bw_mode_supported(struct tb_port *port, bool supported);
bool usb4_dp_port_bandwidth_mode_supported(struct tb_port *port);
bool usb4_dp_port_bandwidth_mode_enabled(struct tb_port *port);
int usb4_dp_port_set_cm_bandwidth_mode_supported(struct tb_port *port,
bool supported);
int usb4_dp_port_group_id(struct tb_port *port);
int usb4_dp_port_set_group_id(struct tb_port *port, int group_id);
int usb4_dp_port_nrd(struct tb_port *port, int *rate, int *lanes);
int usb4_dp_port_set_nrd(struct tb_port *port, int rate, int lanes);
int usb4_dp_port_granularity(struct tb_port *port);
int usb4_dp_port_set_granularity(struct tb_port *port, int granularity);
int usb4_dp_port_set_estimated_bw(struct tb_port *port, int bw);
int usb4_dp_port_allocated_bw(struct tb_port *port);
int usb4_dp_port_allocate_bw(struct tb_port *port, int bw);
int usb4_dp_port_requested_bw(struct tb_port *port);
int usb4_dp_port_set_estimated_bandwidth(struct tb_port *port, int bw);
int usb4_dp_port_allocated_bandwidth(struct tb_port *port);
int usb4_dp_port_allocate_bandwidth(struct tb_port *port, int bw);
int usb4_dp_port_requested_bandwidth(struct tb_port *port);
int usb4_pci_port_set_ext_encapsulation(struct tb_port *port, bool enable);
static inline bool tb_is_usb4_port_device(const struct device *dev)
{
@@ -1303,6 +1325,11 @@ struct usb4_port *usb4_port_device_add(struct tb_port *port);
void usb4_port_device_remove(struct usb4_port *usb4);
int usb4_port_device_resume(struct usb4_port *usb4);
static inline bool usb4_port_device_is_offline(const struct usb4_port *usb4)
{
return usb4->offline;
}
void tb_check_quirks(struct tb_switch *sw);
#ifdef CONFIG_ACPI


@@ -30,6 +30,13 @@ enum tb_cfg_error {
TB_CFG_ERROR_FLOW_CONTROL_ERROR = 13,
TB_CFG_ERROR_LOCK = 15,
TB_CFG_ERROR_DP_BW = 32,
TB_CFG_ERROR_ROP_CMPLT = 33,
TB_CFG_ERROR_POP_CMPLT = 34,
TB_CFG_ERROR_PCIE_WAKE = 35,
TB_CFG_ERROR_DP_CON_CHANGE = 36,
TB_CFG_ERROR_DPTX_DISCOVERY = 37,
TB_CFG_ERROR_LINK_RECOVERY = 38,
TB_CFG_ERROR_ASYM_LINK = 39,
};
/* common header */


@@ -190,11 +190,14 @@ struct tb_regs_switch_header {
u32 thunderbolt_version:8;
} __packed;
/* USB4 version 1.0 */
#define USB4_VERSION_1_0 0x20
/* Used with the router thunderbolt_version */
#define USB4_VERSION_MAJOR_MASK GENMASK(7, 5)
#define ROUTER_CS_1 0x01
#define ROUTER_CS_4 0x04
/* Used with the router cmuv field */
#define ROUTER_CS_4_CMUV_V1 0x10
#define ROUTER_CS_4_CMUV_V2 0x20
#define ROUTER_CS_5 0x05
#define ROUTER_CS_5_SLP BIT(0)
#define ROUTER_CS_5_WOP BIT(1)
@@ -249,11 +252,13 @@ enum usb4_switch_op {
#define TMU_RTR_CS_3_LOCAL_TIME_NS_MASK GENMASK(15, 0)
#define TMU_RTR_CS_3_TS_PACKET_INTERVAL_MASK GENMASK(31, 16)
#define TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT 16
#define TMU_RTR_CS_15 0xf
#define TMU_RTR_CS_15 0x0f
#define TMU_RTR_CS_15_FREQ_AVG_MASK GENMASK(5, 0)
#define TMU_RTR_CS_15_DELAY_AVG_MASK GENMASK(11, 6)
#define TMU_RTR_CS_15_OFFSET_AVG_MASK GENMASK(17, 12)
#define TMU_RTR_CS_15_ERROR_AVG_MASK GENMASK(23, 18)
#define TMU_RTR_CS_18 0x12
#define TMU_RTR_CS_18_DELTA_AVG_CONST_MASK GENMASK(23, 16)
#define TMU_RTR_CS_22 0x16
#define TMU_RTR_CS_24 0x18
#define TMU_RTR_CS_25 0x19
@@ -319,6 +324,14 @@ struct tb_regs_port_header {
#define TMU_ADP_CS_3_UDM BIT(29)
#define TMU_ADP_CS_6 0x06
#define TMU_ADP_CS_6_DTS BIT(1)
#define TMU_ADP_CS_8 0x08
#define TMU_ADP_CS_8_REPL_TIMEOUT_MASK GENMASK(14, 0)
#define TMU_ADP_CS_8_EUDM BIT(15)
#define TMU_ADP_CS_8_REPL_THRESHOLD_MASK GENMASK(25, 16)
#define TMU_ADP_CS_9 0x09
#define TMU_ADP_CS_9_REPL_N_MASK GENMASK(7, 0)
#define TMU_ADP_CS_9_DIRSWITCH_N_MASK GENMASK(15, 8)
#define TMU_ADP_CS_9_ADP_TS_INTERVAL_MASK GENMASK(31, 16)
/* Lane adapter registers */
#define LANE_ADP_CS_0 0x00
@@ -346,6 +359,7 @@ struct tb_regs_port_header {
#define LANE_ADP_CS_1_CURRENT_SPEED_SHIFT 16
#define LANE_ADP_CS_1_CURRENT_SPEED_GEN2 0x8
#define LANE_ADP_CS_1_CURRENT_SPEED_GEN3 0x4
#define LANE_ADP_CS_1_CURRENT_SPEED_GEN4 0x2
#define LANE_ADP_CS_1_CURRENT_WIDTH_MASK GENMASK(25, 20)
#define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT 20
#define LANE_ADP_CS_1_PMS BIT(30)
@@ -436,6 +450,9 @@ struct tb_regs_port_header {
#define DP_COMMON_CAP_1_LANE 0x0
#define DP_COMMON_CAP_2_LANES 0x1
#define DP_COMMON_CAP_4_LANES 0x2
#define DP_COMMON_CAP_UHBR10 BIT(17)
#define DP_COMMON_CAP_UHBR20 BIT(18)
#define DP_COMMON_CAP_UHBR13_5 BIT(19)
#define DP_COMMON_CAP_LTTPR_NS BIT(27)
#define DP_COMMON_CAP_BW_MODE BIT(28)
#define DP_COMMON_CAP_DPRX_DONE BIT(31)
@@ -447,6 +464,8 @@ struct tb_regs_port_header {
/* PCIe adapter registers */
#define ADP_PCIE_CS_0 0x00
#define ADP_PCIE_CS_0_PE BIT(31)
#define ADP_PCIE_CS_1 0x01
#define ADP_PCIE_CS_1_EE BIT(0)
/* USB adapter registers */
#define ADP_USB3_CS_0 0x00


@@ -170,6 +170,23 @@ static struct tb_switch *alloc_host_usb4(struct kunit *test)
return sw;
}
static struct tb_switch *alloc_host_br(struct kunit *test)
{
struct tb_switch *sw;
sw = alloc_host_usb4(test);
if (!sw)
return NULL;
sw->ports[10].config.type = TB_TYPE_DP_HDMI_IN;
sw->ports[10].config.max_in_hop_id = 9;
sw->ports[10].config.max_out_hop_id = 9;
sw->ports[10].cap_adap = -1;
sw->ports[10].disabled = false;
return sw;
}
static struct tb_switch *alloc_dev_default(struct kunit *test,
struct tb_switch *parent,
u64 route, bool bonded)
@@ -1583,6 +1600,71 @@ static void tb_test_tunnel_dp_max_length(struct kunit *test)
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_3dp(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5;
struct tb_port *in1, *in2, *in3, *out1, *out2, *out3;
struct tb_tunnel *tunnel1, *tunnel2, *tunnel3;
/*
* Create 3 DP tunnels from Host to Devices #2, #5 and #4.
*
* [Host]
* 3 |
* 1 |
* [Device #1]
* 3 / | 5 \ 7
* 1 / | \ 1
* [Device #2] | [Device #4]
* | 1
* [Device #3]
* | 5
* | 1
* [Device #5]
*/
host = alloc_host_br(test);
dev1 = alloc_dev_default(test, host, 0x3, true);
dev2 = alloc_dev_default(test, dev1, 0x303, true);
dev3 = alloc_dev_default(test, dev1, 0x503, true);
dev4 = alloc_dev_default(test, dev1, 0x703, true);
dev5 = alloc_dev_default(test, dev3, 0x50503, true);
in1 = &host->ports[5];
in2 = &host->ports[6];
in3 = &host->ports[10];
out1 = &dev2->ports[13];
out2 = &dev5->ports[13];
out3 = &dev4->ports[14];
tunnel1 = tb_tunnel_alloc_dp(NULL, in1, out1, 1, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
KUNIT_EXPECT_EQ(test, tunnel1->type, TB_TUNNEL_DP);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->src_port, in1);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->dst_port, out1);
KUNIT_ASSERT_EQ(test, tunnel1->npaths, 3);
KUNIT_ASSERT_EQ(test, tunnel1->paths[0]->path_length, 3);
tunnel2 = tb_tunnel_alloc_dp(NULL, in2, out2, 1, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
KUNIT_EXPECT_EQ(test, tunnel2->type, TB_TUNNEL_DP);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->src_port, in2);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->dst_port, out2);
KUNIT_ASSERT_EQ(test, tunnel2->npaths, 3);
KUNIT_ASSERT_EQ(test, tunnel2->paths[0]->path_length, 4);
tunnel3 = tb_tunnel_alloc_dp(NULL, in3, out3, 1, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel3 != NULL);
KUNIT_EXPECT_EQ(test, tunnel3->type, TB_TUNNEL_DP);
KUNIT_EXPECT_PTR_EQ(test, tunnel3->src_port, in3);
KUNIT_EXPECT_PTR_EQ(test, tunnel3->dst_port, out3);
KUNIT_ASSERT_EQ(test, tunnel3->npaths, 3);
KUNIT_ASSERT_EQ(test, tunnel3->paths[0]->path_length, 3);
tb_tunnel_free(tunnel3);
tb_tunnel_free(tunnel2);
tb_tunnel_free(tunnel1);
}
static void tb_test_tunnel_usb3(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2;
@@ -2790,6 +2872,7 @@ static struct kunit_case tb_test_cases[] = {
KUNIT_CASE(tb_test_tunnel_dp_chain),
KUNIT_CASE(tb_test_tunnel_dp_tree),
KUNIT_CASE(tb_test_tunnel_dp_max_length),
KUNIT_CASE(tb_test_tunnel_3dp),
KUNIT_CASE(tb_test_tunnel_port_on_path),
KUNIT_CASE(tb_test_tunnel_usb3),
KUNIT_CASE(tb_test_tunnel_dma),


@@ -11,23 +11,63 @@
#include "tb.h"
static int tb_switch_set_tmu_mode_params(struct tb_switch *sw,
enum tb_switch_tmu_rate rate)
static const unsigned int tmu_rates[] = {
[TB_SWITCH_TMU_MODE_OFF] = 0,
[TB_SWITCH_TMU_MODE_LOWRES] = 1000,
[TB_SWITCH_TMU_MODE_HIFI_UNI] = 16,
[TB_SWITCH_TMU_MODE_HIFI_BI] = 16,
[TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI] = 16,
};
const struct {
unsigned int freq_meas_window;
unsigned int avg_const;
unsigned int delta_avg_const;
unsigned int repl_timeout;
unsigned int repl_threshold;
unsigned int repl_n;
unsigned int dirswitch_n;
} tmu_params[] = {
[TB_SWITCH_TMU_MODE_OFF] = { },
[TB_SWITCH_TMU_MODE_LOWRES] = { 30, 4, },
[TB_SWITCH_TMU_MODE_HIFI_UNI] = { 800, 8, },
[TB_SWITCH_TMU_MODE_HIFI_BI] = { 800, 8, },
[TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI] = {
800, 4, 0, 3125, 25, 128, 255,
},
};
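Both tables are indexed by the mode enum, so a given mode pulls its TS packet interval and measurement parameters from the same slot; for example TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI resolves to a rate of 16 with a freq_meas_window of 800 and a repl_timeout of 3125. A hypothetical accessor to show the intended lookup (the helper name is illustrative only):

static unsigned int example_tmu_rate(enum tb_switch_tmu_mode mode)
{
	/* TB_SWITCH_TMU_MODE_HIFI_UNI -> 16, TB_SWITCH_TMU_MODE_LOWRES -> 1000 */
	return tmu_rates[mode];
}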
static const char *tmu_mode_name(enum tb_switch_tmu_mode mode)
{
switch (mode) {
case TB_SWITCH_TMU_MODE_OFF:
return "off";
case TB_SWITCH_TMU_MODE_LOWRES:
return "uni-directional, LowRes";
case TB_SWITCH_TMU_MODE_HIFI_UNI:
return "uni-directional, HiFi";
case TB_SWITCH_TMU_MODE_HIFI_BI:
return "bi-directional, HiFi";
case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI:
return "enhanced uni-directional, MedRes";
default:
return "unknown";
}
}
static bool tb_switch_tmu_enhanced_is_supported(const struct tb_switch *sw)
{
return usb4_switch_version(sw) > 1;
}
static int tb_switch_set_tmu_mode_params(struct tb_switch *sw,
enum tb_switch_tmu_mode mode)
{
u32 freq_meas_wind[2] = { 30, 800 };
u32 avg_const[2] = { 4, 8 };
u32 freq, avg, val;
int ret;
if (rate == TB_SWITCH_TMU_RATE_NORMAL) {
freq = freq_meas_wind[0];
avg = avg_const[0];
} else if (rate == TB_SWITCH_TMU_RATE_HIFI) {
freq = freq_meas_wind[1];
avg = avg_const[1];
} else {
return 0;
}
freq = tmu_params[mode].freq_meas_window;
avg = tmu_params[mode].avg_const;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_0, 1);
@@ -56,37 +96,30 @@ static int tb_switch_set_tmu_mode_params(struct tb_switch *sw,
FIELD_PREP(TMU_RTR_CS_15_OFFSET_AVG_MASK, avg) |
FIELD_PREP(TMU_RTR_CS_15_ERROR_AVG_MASK, avg);
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_15, 1);
}
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_15, 1);
if (ret)
return ret;
static const char *tb_switch_tmu_mode_name(const struct tb_switch *sw)
{
bool root_switch = !tb_route(sw);
if (tb_switch_tmu_enhanced_is_supported(sw)) {
u32 delta_avg = tmu_params[mode].delta_avg_const;
switch (sw->tmu.rate) {
case TB_SWITCH_TMU_RATE_OFF:
return "off";
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_18, 1);
if (ret)
return ret;
case TB_SWITCH_TMU_RATE_HIFI:
/* Root switch does not have upstream directionality */
if (root_switch)
return "HiFi";
if (sw->tmu.unidirectional)
return "uni-directional, HiFi";
return "bi-directional, HiFi";
val &= ~TMU_RTR_CS_18_DELTA_AVG_CONST_MASK;
val |= FIELD_PREP(TMU_RTR_CS_18_DELTA_AVG_CONST_MASK, delta_avg);
case TB_SWITCH_TMU_RATE_NORMAL:
if (root_switch)
return "normal";
return "uni-directional, normal";
default:
return "unknown";
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_18, 1);
}
return ret;
}
static bool tb_switch_tmu_ucap_supported(struct tb_switch *sw)
static bool tb_switch_tmu_ucap_is_supported(struct tb_switch *sw)
{
int ret;
u32 val;
@ -182,6 +215,103 @@ static bool tb_port_tmu_is_unidirectional(struct tb_port *port)
return val & TMU_ADP_CS_3_UDM;
}
static bool tb_port_tmu_is_enhanced(struct tb_port *port)
{
int ret;
u32 val;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_8, 1);
if (ret)
return false;
return val & TMU_ADP_CS_8_EUDM;
}
/* Can be called to non-v2 lane adapters too */
static int tb_port_tmu_enhanced_enable(struct tb_port *port, bool enable)
{
int ret;
u32 val;
if (!tb_switch_tmu_enhanced_is_supported(port->sw))
return 0;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_8, 1);
if (ret)
return ret;
if (enable)
val |= TMU_ADP_CS_8_EUDM;
else
val &= ~TMU_ADP_CS_8_EUDM;
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_8, 1);
}
static int tb_port_set_tmu_mode_params(struct tb_port *port,
enum tb_switch_tmu_mode mode)
{
u32 repl_timeout, repl_threshold, repl_n, dirswitch_n, val;
int ret;
repl_timeout = tmu_params[mode].repl_timeout;
repl_threshold = tmu_params[mode].repl_threshold;
repl_n = tmu_params[mode].repl_n;
dirswitch_n = tmu_params[mode].dirswitch_n;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_8, 1);
if (ret)
return ret;
val &= ~TMU_ADP_CS_8_REPL_TIMEOUT_MASK;
val &= ~TMU_ADP_CS_8_REPL_THRESHOLD_MASK;
val |= FIELD_PREP(TMU_ADP_CS_8_REPL_TIMEOUT_MASK, repl_timeout);
val |= FIELD_PREP(TMU_ADP_CS_8_REPL_THRESHOLD_MASK, repl_threshold);
ret = tb_port_write(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_8, 1);
if (ret)
return ret;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_9, 1);
if (ret)
return ret;
val &= ~TMU_ADP_CS_9_REPL_N_MASK;
val &= ~TMU_ADP_CS_9_DIRSWITCH_N_MASK;
val |= FIELD_PREP(TMU_ADP_CS_9_REPL_N_MASK, repl_n);
val |= FIELD_PREP(TMU_ADP_CS_9_DIRSWITCH_N_MASK, dirswitch_n);
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_9, 1);
}
/* Can be called to non-v2 lane adapters too */
static int tb_port_tmu_rate_write(struct tb_port *port, int rate)
{
int ret;
u32 val;
if (!tb_switch_tmu_enhanced_is_supported(port->sw))
return 0;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_9, 1);
if (ret)
return ret;
val &= ~TMU_ADP_CS_9_ADP_TS_INTERVAL_MASK;
val |= FIELD_PREP(TMU_ADP_CS_9_ADP_TS_INTERVAL_MASK, rate);
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_9, 1);
}
static int tb_port_tmu_time_sync(struct tb_port *port, bool time_sync)
{
u32 val = time_sync ? TMU_ADP_CS_6_DTS : 0;
@ -224,6 +354,50 @@ static int tb_switch_tmu_set_time_disruption(struct tb_switch *sw, bool set)
return tb_sw_write(sw, &val, TB_CFG_SWITCH, offset, 1);
}
static int tmu_mode_init(struct tb_switch *sw)
{
bool enhanced, ucap;
int ret, rate;
ucap = tb_switch_tmu_ucap_is_supported(sw);
if (ucap)
tb_sw_dbg(sw, "TMU: supports uni-directional mode\n");
enhanced = tb_switch_tmu_enhanced_is_supported(sw);
if (enhanced)
tb_sw_dbg(sw, "TMU: supports enhanced uni-directional mode\n");
ret = tb_switch_tmu_rate_read(sw);
if (ret < 0)
return ret;
rate = ret;
/* Off by default */
sw->tmu.mode = TB_SWITCH_TMU_MODE_OFF;
if (tb_route(sw)) {
struct tb_port *up = tb_upstream_port(sw);
if (enhanced && tb_port_tmu_is_enhanced(up)) {
sw->tmu.mode = TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI;
} else if (ucap && tb_port_tmu_is_unidirectional(up)) {
if (tmu_rates[TB_SWITCH_TMU_MODE_LOWRES] == rate)
sw->tmu.mode = TB_SWITCH_TMU_MODE_LOWRES;
else if (tmu_rates[TB_SWITCH_TMU_MODE_HIFI_UNI] == rate)
sw->tmu.mode = TB_SWITCH_TMU_MODE_HIFI_UNI;
} else if (rate) {
sw->tmu.mode = TB_SWITCH_TMU_MODE_HIFI_BI;
}
} else if (rate) {
sw->tmu.mode = TB_SWITCH_TMU_MODE_HIFI_BI;
}
/* Update the initial request to match the current mode */
sw->tmu.mode_request = sw->tmu.mode;
sw->tmu.has_ucap = ucap;
return 0;
}
/**
* tb_switch_tmu_init() - Initialize switch TMU structures
* @sw: Switch to initialize
@ -252,27 +426,11 @@ int tb_switch_tmu_init(struct tb_switch *sw)
port->cap_tmu = cap;
}
ret = tb_switch_tmu_rate_read(sw);
if (ret < 0)
ret = tmu_mode_init(sw);
if (ret)
return ret;
sw->tmu.rate = ret;
sw->tmu.has_ucap = tb_switch_tmu_ucap_supported(sw);
if (sw->tmu.has_ucap) {
tb_sw_dbg(sw, "TMU: supports uni-directional mode\n");
if (tb_route(sw)) {
struct tb_port *up = tb_upstream_port(sw);
sw->tmu.unidirectional =
tb_port_tmu_is_unidirectional(up);
}
} else {
sw->tmu.unidirectional = false;
}
tb_sw_dbg(sw, "TMU: current mode: %s\n", tb_switch_tmu_mode_name(sw));
tb_sw_dbg(sw, "TMU: current mode: %s\n", tmu_mode_name(sw->tmu.mode));
return 0;
}
@ -308,7 +466,7 @@ int tb_switch_tmu_post_time(struct tb_switch *sw)
return ret;
for (i = 0; i < ARRAY_SIZE(gm_local_time); i++)
tb_sw_dbg(root_switch, "local_time[%d]=0x%08x\n", i,
tb_sw_dbg(root_switch, "TMU: local_time[%d]=0x%08x\n", i,
gm_local_time[i]);
/* Convert to nanoseconds (drop fractional part) */
@ -375,6 +533,23 @@ out:
return ret;
}
static int disable_enhanced(struct tb_port *up, struct tb_port *down)
{
int ret;
/*
* Router may already have been disconnected so ignore errors on the
* upstream port.
*/
tb_port_tmu_rate_write(up, 0);
tb_port_tmu_enhanced_enable(up, false);
ret = tb_port_tmu_rate_write(down, 0);
if (ret)
return ret;
return tb_port_tmu_enhanced_enable(down, false);
}
/**
* tb_switch_tmu_disable() - Disable TMU of a switch
* @sw: Switch whose TMU to disable
@ -383,26 +558,15 @@ out:
*/
int tb_switch_tmu_disable(struct tb_switch *sw)
{
/*
* No need to disable TMU on devices that don't support CLx since
* on these devices e.g. Alpine Ridge and earlier, the TMU mode
* HiFi bi-directional is enabled by default and we don't change it.
*/
if (!tb_switch_is_clx_supported(sw))
return 0;
/* Already disabled? */
if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF)
if (sw->tmu.mode == TB_SWITCH_TMU_MODE_OFF)
return 0;
if (tb_route(sw)) {
bool unidirectional = sw->tmu.unidirectional;
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *down, *up;
int ret;
down = tb_port_at(tb_route(sw), parent);
down = tb_switch_downstream_port(sw);
up = tb_upstream_port(sw);
/*
* In case of uni-directional time sync, TMU handshake is
@ -415,37 +579,49 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
* uni-directional mode and we don't want to change its TMU
* mode.
*/
tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
tb_switch_tmu_rate_write(sw, tmu_rates[TB_SWITCH_TMU_MODE_OFF]);
tb_port_tmu_time_sync_disable(up);
ret = tb_port_tmu_time_sync_disable(down);
if (ret)
return ret;
if (unidirectional) {
switch (sw->tmu.mode) {
case TB_SWITCH_TMU_MODE_LOWRES:
case TB_SWITCH_TMU_MODE_HIFI_UNI:
/* The switch may be unplugged so ignore any errors */
tb_port_tmu_unidirectional_disable(up);
ret = tb_port_tmu_unidirectional_disable(down);
if (ret)
return ret;
break;
case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI:
ret = disable_enhanced(up, down);
if (ret)
return ret;
break;
default:
break;
}
} else {
tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
tb_switch_tmu_rate_write(sw, tmu_rates[TB_SWITCH_TMU_MODE_OFF]);
}
sw->tmu.unidirectional = false;
sw->tmu.rate = TB_SWITCH_TMU_RATE_OFF;
sw->tmu.mode = TB_SWITCH_TMU_MODE_OFF;
tb_sw_dbg(sw, "TMU: disabled\n");
return 0;
}
static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
/* Called only when there is failure enabling requested mode */
static void tb_switch_tmu_off(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
unsigned int rate = tmu_rates[TB_SWITCH_TMU_MODE_OFF];
struct tb_port *down, *up;
down = tb_port_at(tb_route(sw), parent);
down = tb_switch_downstream_port(sw);
up = tb_upstream_port(sw);
/*
* In case of any failure in one of the steps when setting
@ -456,28 +632,38 @@ static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
*/
tb_port_tmu_time_sync_disable(down);
tb_port_tmu_time_sync_disable(up);
if (unidirectional)
tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF);
else
tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
tb_switch_set_tmu_mode_params(sw, sw->tmu.rate);
switch (sw->tmu.mode_request) {
case TB_SWITCH_TMU_MODE_LOWRES:
case TB_SWITCH_TMU_MODE_HIFI_UNI:
tb_switch_tmu_rate_write(tb_switch_parent(sw), rate);
break;
case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI:
disable_enhanced(up, down);
break;
default:
break;
}
/* Always set the rate to 0 */
tb_switch_tmu_rate_write(sw, rate);
tb_switch_set_tmu_mode_params(sw, sw->tmu.mode);
tb_port_tmu_unidirectional_disable(down);
tb_port_tmu_unidirectional_disable(up);
}
/*
* This function is called when the previous TMU mode was
* TB_SWITCH_TMU_RATE_OFF.
* TB_SWITCH_TMU_MODE_OFF.
*/
static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
int ret;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
down = tb_switch_downstream_port(sw);
ret = tb_port_tmu_unidirectional_disable(up);
if (ret)
@ -487,7 +673,7 @@ static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
if (ret)
goto out;
ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI);
ret = tb_switch_tmu_rate_write(sw, tmu_rates[TB_SWITCH_TMU_MODE_HIFI_BI]);
if (ret)
goto out;
@ -502,12 +688,14 @@ static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
return 0;
out:
__tb_switch_tmu_off(sw, false);
tb_switch_tmu_off(sw);
return ret;
}
static int tb_switch_tmu_objection_mask(struct tb_switch *sw)
/* Only needed for Titan Ridge */
static int tb_switch_tmu_disable_objections(struct tb_switch *sw)
{
struct tb_port *up = tb_upstream_port(sw);
u32 val;
int ret;
@ -518,36 +706,34 @@ static int tb_switch_tmu_objection_mask(struct tb_switch *sw)
val &= ~TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK;
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
}
static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
{
struct tb_port *up = tb_upstream_port(sw);
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
if (ret)
return ret;
return tb_port_tmu_write(up, TMU_ADP_CS_6,
TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK,
TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK);
TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL1 |
TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL2);
}
/*
* This function is called when the previous TMU mode was
* TB_SWITCH_TMU_RATE_OFF.
* TB_SWITCH_TMU_MODE_OFF.
*/
static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
static int tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
int ret;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_switch_tmu_rate_write(parent, sw->tmu.rate_request);
down = tb_switch_downstream_port(sw);
ret = tb_switch_tmu_rate_write(tb_switch_parent(sw),
tmu_rates[sw->tmu.mode_request]);
if (ret)
return ret;
ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.rate_request);
ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.mode_request);
if (ret)
return ret;
@ -570,16 +756,65 @@ static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
return 0;
out:
__tb_switch_tmu_off(sw, true);
tb_switch_tmu_off(sw);
return ret;
}
static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
/*
* This function is called when the previous TMU mode was
* TB_SWITCH_TMU_MODE_OFF.
*/
static int tb_switch_tmu_enable_enhanced(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
unsigned int rate = tmu_rates[sw->tmu.mode_request];
struct tb_port *up, *down;
int ret;
/* Router specific parameters first */
ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.mode_request);
if (ret)
return ret;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
ret = tb_port_set_tmu_mode_params(up, sw->tmu.mode_request);
if (ret)
goto out;
ret = tb_port_tmu_rate_write(up, rate);
if (ret)
goto out;
ret = tb_port_tmu_enhanced_enable(up, true);
if (ret)
goto out;
ret = tb_port_set_tmu_mode_params(down, sw->tmu.mode_request);
if (ret)
goto out;
ret = tb_port_tmu_rate_write(down, rate);
if (ret)
goto out;
ret = tb_port_tmu_enhanced_enable(down, true);
if (ret)
goto out;
return 0;
out:
tb_switch_tmu_off(sw);
return ret;
}
static void tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
{
unsigned int rate = tmu_rates[sw->tmu.mode];
struct tb_port *down, *up;
down = tb_port_at(tb_route(sw), parent);
down = tb_switch_downstream_port(sw);
up = tb_upstream_port(sw);
/*
* In case of any failure in one of the steps when changing the mode,
@ -587,42 +822,97 @@ static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
* In case of additional failures in the functions below,
* ignore them since the caller shall already report a failure.
*/
tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional);
if (sw->tmu.unidirectional_request)
tb_switch_tmu_rate_write(parent, sw->tmu.rate);
else
tb_switch_tmu_rate_write(sw, sw->tmu.rate);
switch (sw->tmu.mode) {
case TB_SWITCH_TMU_MODE_LOWRES:
case TB_SWITCH_TMU_MODE_HIFI_UNI:
tb_port_tmu_set_unidirectional(down, true);
tb_switch_tmu_rate_write(tb_switch_parent(sw), rate);
break;
tb_switch_set_tmu_mode_params(sw, sw->tmu.rate);
tb_port_tmu_set_unidirectional(up, sw->tmu.unidirectional);
case TB_SWITCH_TMU_MODE_HIFI_BI:
tb_port_tmu_set_unidirectional(down, false);
tb_switch_tmu_rate_write(sw, rate);
break;
default:
break;
}
tb_switch_set_tmu_mode_params(sw, sw->tmu.mode);
switch (sw->tmu.mode) {
case TB_SWITCH_TMU_MODE_LOWRES:
case TB_SWITCH_TMU_MODE_HIFI_UNI:
tb_port_tmu_set_unidirectional(up, true);
break;
case TB_SWITCH_TMU_MODE_HIFI_BI:
tb_port_tmu_set_unidirectional(up, false);
break;
default:
break;
}
}
static int __tb_switch_tmu_change_mode(struct tb_switch *sw)
static int tb_switch_tmu_change_mode(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
unsigned int rate = tmu_rates[sw->tmu.mode_request];
struct tb_port *up, *down;
int ret;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional_request);
if (ret)
goto out;
down = tb_switch_downstream_port(sw);
if (sw->tmu.unidirectional_request)
ret = tb_switch_tmu_rate_write(parent, sw->tmu.rate_request);
else
ret = tb_switch_tmu_rate_write(sw, sw->tmu.rate_request);
/* Program the upstream router downstream facing lane adapter */
switch (sw->tmu.mode_request) {
case TB_SWITCH_TMU_MODE_LOWRES:
case TB_SWITCH_TMU_MODE_HIFI_UNI:
ret = tb_port_tmu_set_unidirectional(down, true);
if (ret)
goto out;
ret = tb_switch_tmu_rate_write(tb_switch_parent(sw), rate);
if (ret)
goto out;
break;
case TB_SWITCH_TMU_MODE_HIFI_BI:
ret = tb_port_tmu_set_unidirectional(down, false);
if (ret)
goto out;
ret = tb_switch_tmu_rate_write(sw, rate);
if (ret)
goto out;
break;
default:
/* Not allowed to change modes from other than above */
return -EINVAL;
}
ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.mode_request);
if (ret)
return ret;
ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.rate_request);
if (ret)
return ret;
/* Program the new mode and the downstream router lane adapter */
switch (sw->tmu.mode_request) {
case TB_SWITCH_TMU_MODE_LOWRES:
case TB_SWITCH_TMU_MODE_HIFI_UNI:
ret = tb_port_tmu_set_unidirectional(up, true);
if (ret)
goto out;
break;
ret = tb_port_tmu_set_unidirectional(up, sw->tmu.unidirectional_request);
if (ret)
goto out;
case TB_SWITCH_TMU_MODE_HIFI_BI:
ret = tb_port_tmu_set_unidirectional(up, false);
if (ret)
goto out;
break;
default:
/* Changing to modes other than those above is not allowed */
return -EINVAL;
}
ret = tb_port_tmu_time_sync_enable(down);
if (ret)
@ -635,7 +925,7 @@ static int __tb_switch_tmu_change_mode(struct tb_switch *sw)
return 0;
out:
__tb_switch_tmu_change_mode_prev(sw);
tb_switch_tmu_change_mode_prev(sw);
return ret;
}
@ -643,45 +933,21 @@ out:
* tb_switch_tmu_enable() - Enable TMU on a router
* @sw: Router whose TMU to enable
*
* Enables TMU of a router to be in uni-directional Normal/HiFi
* or bi-directional HiFi mode. Calling tb_switch_tmu_configure() is required
* before calling this function, to select the mode Normal/HiFi and
* directionality (uni-directional/bi-directional).
* In HiFi mode all tunneling should work. In Normal mode, DP tunneling can't
* work. Uni-directional mode is required for CLx (Link Low-Power) to work.
* Enables TMU of a router to be in uni-directional Normal/HiFi or
* bi-directional HiFi mode. Calling tb_switch_tmu_configure() is
* required before calling this function.
*/
int tb_switch_tmu_enable(struct tb_switch *sw)
{
bool unidirectional = sw->tmu.unidirectional_request;
int ret;
if (unidirectional && !sw->tmu.has_ucap)
return -EOPNOTSUPP;
/*
* No need to enable TMU on devices that don't support CLx since on
* these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi
* bi-directional is enabled by default.
*/
if (!tb_switch_is_clx_supported(sw))
if (tb_switch_tmu_is_enabled(sw))
return 0;
if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
return 0;
if (tb_switch_is_titan_ridge(sw) && unidirectional) {
/*
* Titan Ridge supports CL0s and CL1 only. CL0s and CL1 are
* enabled and supported together.
*/
if (!tb_switch_is_clx_enabled(sw, TB_CL1))
return -EOPNOTSUPP;
ret = tb_switch_tmu_objection_mask(sw);
if (ret)
return ret;
ret = tb_switch_tmu_unidirectional_enable(sw);
if (tb_switch_is_titan_ridge(sw) &&
(sw->tmu.mode_request == TB_SWITCH_TMU_MODE_LOWRES ||
sw->tmu.mode_request == TB_SWITCH_TMU_MODE_HIFI_UNI)) {
ret = tb_switch_tmu_disable_objections(sw);
if (ret)
return ret;
}
@ -696,19 +962,30 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
* HiFi-Uni/HiFi-BiDir/Normal-Uni or from Normal-Uni to
* HiFi-Uni.
*/
if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) {
if (unidirectional)
ret = __tb_switch_tmu_enable_unidirectional(sw);
else
ret = __tb_switch_tmu_enable_bidirectional(sw);
if (ret)
return ret;
} else if (sw->tmu.rate == TB_SWITCH_TMU_RATE_NORMAL) {
ret = __tb_switch_tmu_change_mode(sw);
if (ret)
return ret;
if (sw->tmu.mode == TB_SWITCH_TMU_MODE_OFF) {
switch (sw->tmu.mode_request) {
case TB_SWITCH_TMU_MODE_LOWRES:
case TB_SWITCH_TMU_MODE_HIFI_UNI:
ret = tb_switch_tmu_enable_unidirectional(sw);
break;
case TB_SWITCH_TMU_MODE_HIFI_BI:
ret = tb_switch_tmu_enable_bidirectional(sw);
break;
case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI:
ret = tb_switch_tmu_enable_enhanced(sw);
break;
default:
ret = -EINVAL;
break;
}
} else if (sw->tmu.mode == TB_SWITCH_TMU_MODE_LOWRES ||
sw->tmu.mode == TB_SWITCH_TMU_MODE_HIFI_UNI ||
sw->tmu.mode == TB_SWITCH_TMU_MODE_HIFI_BI) {
ret = tb_switch_tmu_change_mode(sw);
} else {
ret = -EINVAL;
}
sw->tmu.unidirectional = unidirectional;
} else {
/*
* Host router port configurations are written as
@ -716,58 +993,68 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
* of the child node - see above.
* Here only the host router's rate configuration is written.
*/
ret = tb_switch_tmu_rate_write(sw, sw->tmu.rate_request);
if (ret)
return ret;
ret = tb_switch_tmu_rate_write(sw, tmu_rates[sw->tmu.mode_request]);
}
sw->tmu.rate = sw->tmu.rate_request;
if (ret) {
tb_sw_warn(sw, "TMU: failed to enable mode %s: %d\n",
tmu_mode_name(sw->tmu.mode_request), ret);
} else {
sw->tmu.mode = sw->tmu.mode_request;
tb_sw_dbg(sw, "TMU: mode set to: %s\n", tmu_mode_name(sw->tmu.mode));
}
tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw));
return tb_switch_tmu_set_time_disruption(sw, false);
}
/**
* tb_switch_tmu_configure() - Configure the TMU rate and directionality
* tb_switch_tmu_configure() - Configure the TMU mode
* @sw: Router whose mode to change
* @rate: Rate to configure Off/Normal/HiFi
* @unidirectional: If uni-directional (bi-directional otherwise)
* @mode: Mode to configure
*
* Selects the rate of the TMU and directionality (uni-directional or
* bi-directional). Must be called before tb_switch_tmu_enable().
* Selects the TMU mode that is enabled when tb_switch_tmu_enable() is
* next called.
*
* Returns %0 in success and negative errno otherwise. Specifically
* returns %-EOPNOTSUPP if the requested mode is not possible (not
* supported by the router and/or topology).
*/
void tb_switch_tmu_configure(struct tb_switch *sw,
enum tb_switch_tmu_rate rate, bool unidirectional)
int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_mode mode)
{
sw->tmu.unidirectional_request = unidirectional;
sw->tmu.rate_request = rate;
}
switch (mode) {
case TB_SWITCH_TMU_MODE_OFF:
break;
static int tb_switch_tmu_config_enable(struct device *dev, void *rate)
{
if (tb_is_switch(dev)) {
struct tb_switch *sw = tb_to_switch(dev);
case TB_SWITCH_TMU_MODE_LOWRES:
case TB_SWITCH_TMU_MODE_HIFI_UNI:
if (!sw->tmu.has_ucap)
return -EOPNOTSUPP;
break;
tb_switch_tmu_configure(sw, *(enum tb_switch_tmu_rate *)rate,
tb_switch_is_clx_enabled(sw, TB_CL1));
if (tb_switch_tmu_enable(sw))
tb_sw_dbg(sw, "fail switching TMU mode for 1st depth router\n");
case TB_SWITCH_TMU_MODE_HIFI_BI:
break;
case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI: {
const struct tb_switch *parent_sw = tb_switch_parent(sw);
if (!parent_sw || !tb_switch_tmu_enhanced_is_supported(parent_sw))
return -EOPNOTSUPP;
if (!tb_switch_tmu_enhanced_is_supported(sw))
return -EOPNOTSUPP;
break;
}
default:
tb_sw_warn(sw, "TMU: unsupported mode %u\n", mode);
return -EINVAL;
}
if (sw->tmu.mode_request != mode) {
tb_sw_dbg(sw, "TMU: mode change %s -> %s requested\n",
tmu_mode_name(sw->tmu.mode), tmu_mode_name(mode));
sw->tmu.mode_request = mode;
}
return 0;
}
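As a usage sketch (illustrative only; both functions appear in this patch), a caller selects a mode and then enables it:

/* Sketch: pick the mode first, then enable it */
ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_MODE_HIFI_BI);
if (ret)
	return ret;
return tb_switch_tmu_enable(sw);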
/**
* tb_switch_enable_tmu_1st_child - Configure and enable TMU for 1st depth children
* @sw: The router whose children's TMU to configure and enable
* @rate: Rate of the TMU to configure the router's children to
*
* Configures and enables the TMU mode of 1st depth children of the specified
* router to the specified rate.
*/
void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
enum tb_switch_tmu_rate rate)
{
device_for_each_child(&sw->dev, &rate,
tb_switch_tmu_config_enable);
}


@ -10,6 +10,7 @@
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/ktime.h>
#include <linux/string_helpers.h>
#include "tunnel.h"
#include "tb.h"
@ -41,9 +42,14 @@
* Number of credits we try to allocate for each DMA path if not limited
* by the host router baMaxHI.
*/
#define TB_DMA_CREDITS 14U
#define TB_DMA_CREDITS 14
/* Minimum number of credits for DMA path */
#define TB_MIN_DMA_CREDITS 1U
#define TB_MIN_DMA_CREDITS 1
static unsigned int dma_credits = TB_DMA_CREDITS;
module_param(dma_credits, uint, 0444);
MODULE_PARM_DESC(dma_credits, "specify custom credits for DMA tunnels (default: "
__MODULE_STRING(TB_DMA_CREDITS) ")");
static bool bw_alloc_mode = true;
module_param(bw_alloc_mode, bool, 0444);
@ -95,7 +101,7 @@ static unsigned int tb_available_credits(const struct tb_port *port,
pcie = tb_acpi_may_tunnel_pcie() ? sw->max_pcie_credits : 0;
if (tb_acpi_is_xdomain_allowed()) {
spare = min_not_zero(sw->max_dma_credits, TB_DMA_CREDITS);
spare = min_not_zero(sw->max_dma_credits, dma_credits);
/* Add some credits for potential second DMA tunnel */
spare += TB_MIN_DMA_CREDITS;
} else {
@ -148,18 +154,49 @@ static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
return tunnel;
}
static int tb_pci_set_ext_encapsulation(struct tb_tunnel *tunnel, bool enable)
{
int ret;
/* Only supported if both routers are at least USB4 v2 */
if (usb4_switch_version(tunnel->src_port->sw) < 2 ||
usb4_switch_version(tunnel->dst_port->sw) < 2)
return 0;
ret = usb4_pci_port_set_ext_encapsulation(tunnel->src_port, enable);
if (ret)
return ret;
ret = usb4_pci_port_set_ext_encapsulation(tunnel->dst_port, enable);
if (ret)
return ret;
tb_tunnel_dbg(tunnel, "extended encapsulation %s\n",
str_enabled_disabled(enable));
return 0;
}
static int tb_pci_activate(struct tb_tunnel *tunnel, bool activate)
{
int res;
if (activate) {
res = tb_pci_set_ext_encapsulation(tunnel, activate);
if (res)
return res;
}
res = tb_pci_port_enable(tunnel->src_port, activate);
if (res)
return res;
if (tb_port_is_pcie_up(tunnel->dst_port))
return tb_pci_port_enable(tunnel->dst_port, activate);
if (tb_port_is_pcie_up(tunnel->dst_port)) {
res = tb_pci_port_enable(tunnel->dst_port, activate);
if (res)
return res;
}
return 0;
return activate ? 0 : tb_pci_set_ext_encapsulation(tunnel, activate);
}
static int tb_pci_init_credits(struct tb_path_hop *hop)
@ -381,6 +418,10 @@ static int tb_dp_cm_handshake(struct tb_port *in, struct tb_port *out,
return -ETIMEDOUT;
}
/*
* Returns maximum possible rate from capability supporting only DP 2.0
* and below. Used when DP BW allocation mode is not enabled.
*/
static inline u32 tb_dp_cap_get_rate(u32 val)
{
u32 rate = (val & DP_COMMON_CAP_RATE_MASK) >> DP_COMMON_CAP_RATE_SHIFT;
@ -399,6 +440,28 @@ static inline u32 tb_dp_cap_get_rate(u32 val)
}
}
/*
* Returns maximum possible rate from capability supporting DP 2.1
* UHBR20, 13.5 and 10 rates as well. Use only when DP BW allocation
* mode is enabled.
*/
static inline u32 tb_dp_cap_get_rate_ext(u32 val)
{
if (val & DP_COMMON_CAP_UHBR20)
return 20000;
else if (val & DP_COMMON_CAP_UHBR13_5)
return 13500;
else if (val & DP_COMMON_CAP_UHBR10)
return 10000;
return tb_dp_cap_get_rate(val);
}
static inline bool tb_dp_is_uhbr_rate(unsigned int rate)
{
return rate >= 10000;
}
static inline u32 tb_dp_cap_set_rate(u32 val, u32 rate)
{
val &= ~DP_COMMON_CAP_RATE_MASK;
@ -461,7 +524,9 @@ static inline u32 tb_dp_cap_set_lanes(u32 val, u32 lanes)
static unsigned int tb_dp_bandwidth(unsigned int rate, unsigned int lanes)
{
/* Tunneling removes the DP 8b/10b encoding */
/* Tunneling removes the DP 8b/10b or 128b/132b encoding */
if (tb_dp_is_uhbr_rate(rate))
return rate * lanes * 128 / 132;
return rate * lanes * 8 / 10;
}
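For a concrete sense of the overhead removed (illustrative arithmetic only):

/*
 * tb_dp_bandwidth(8100, 4)  = 8100 * 4 * 8 / 10     = 25920 Mb/s (HBR3, 8b/10b)
 * tb_dp_bandwidth(10000, 4) = 10000 * 4 * 128 / 132 = 38787 Mb/s (UHBR10, 128b/132b)
 */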
@ -604,7 +669,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
in->cap_adap + DP_REMOTE_CAP, 1);
}
static int tb_dp_bw_alloc_mode_enable(struct tb_tunnel *tunnel)
static int tb_dp_bandwidth_alloc_mode_enable(struct tb_tunnel *tunnel)
{
int ret, estimated_bw, granularity, tmp;
struct tb_port *out = tunnel->dst_port;
@ -616,7 +681,7 @@ static int tb_dp_bw_alloc_mode_enable(struct tb_tunnel *tunnel)
if (!bw_alloc_mode)
return 0;
ret = usb4_dp_port_set_cm_bw_mode_supported(in, true);
ret = usb4_dp_port_set_cm_bandwidth_mode_supported(in, true);
if (ret)
return ret;
@ -654,6 +719,19 @@ static int tb_dp_bw_alloc_mode_enable(struct tb_tunnel *tunnel)
if (ret)
return ret;
/*
* Pick up granularity that supports maximum possible bandwidth.
* For that we use the UHBR rates too.
*/
in_rate = tb_dp_cap_get_rate_ext(in_dp_cap);
out_rate = tb_dp_cap_get_rate_ext(out_dp_cap);
rate = min(in_rate, out_rate);
tmp = tb_dp_bandwidth(rate, lanes);
tb_port_dbg(in,
"maximum bandwidth through allocation mode %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tmp);
for (granularity = 250; tmp / granularity > 255 && granularity <= 1000;
granularity *= 2)
;
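A quick trace of the loop above (illustrative; the 255 bound presumably keeps the granularity-scaled bandwidth within an 8-bit register field):

/*
 * Example with UHBR20 x4: tmp = 20000 * 4 * 128 / 132 = 77575 Mb/s
 *   granularity 250: 77575 / 250 = 310 > 255 -> double
 *   granularity 500: 77575 / 500 = 155 <= 255 -> stop, use 500
 */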
@ -680,12 +758,12 @@ static int tb_dp_bw_alloc_mode_enable(struct tb_tunnel *tunnel)
tb_port_dbg(in, "estimated bandwidth %d Mb/s\n", estimated_bw);
ret = usb4_dp_port_set_estimated_bw(in, estimated_bw);
ret = usb4_dp_port_set_estimated_bandwidth(in, estimated_bw);
if (ret)
return ret;
/* Initial allocation should be 0 according the spec */
ret = usb4_dp_port_allocate_bw(in, 0);
ret = usb4_dp_port_allocate_bandwidth(in, 0);
if (ret)
return ret;
@ -707,7 +785,7 @@ static int tb_dp_init(struct tb_tunnel *tunnel)
if (!tb_switch_is_usb4(sw))
return 0;
if (!usb4_dp_port_bw_mode_supported(in))
if (!usb4_dp_port_bandwidth_mode_supported(in))
return 0;
tb_port_dbg(in, "bandwidth allocation mode supported\n");
@ -716,17 +794,17 @@ static int tb_dp_init(struct tb_tunnel *tunnel)
if (ret)
return ret;
return tb_dp_bw_alloc_mode_enable(tunnel);
return tb_dp_bandwidth_alloc_mode_enable(tunnel);
}
static void tb_dp_deinit(struct tb_tunnel *tunnel)
{
struct tb_port *in = tunnel->src_port;
if (!usb4_dp_port_bw_mode_supported(in))
if (!usb4_dp_port_bandwidth_mode_supported(in))
return;
if (usb4_dp_port_bw_mode_enabled(in)) {
usb4_dp_port_set_cm_bw_mode_supported(in, false);
if (usb4_dp_port_bandwidth_mode_enabled(in)) {
usb4_dp_port_set_cm_bandwidth_mode_supported(in, false);
tb_port_dbg(in, "bandwidth allocation mode disabled\n");
}
}
@ -769,15 +847,42 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
}
/* max_bw is rounded up to next granularity */
static int tb_dp_nrd_bandwidth(struct tb_tunnel *tunnel, int *max_bw)
static int tb_dp_bandwidth_mode_maximum_bandwidth(struct tb_tunnel *tunnel,
int *max_bw)
{
struct tb_port *in = tunnel->src_port;
int ret, rate, lanes, nrd_bw;
u32 cap;
ret = usb4_dp_port_nrd(in, &rate, &lanes);
/*
* DP IN adapter DP_LOCAL_CAP gets updated to the lowest AUX
* read parameter values, so we can use them to determine
* the maximum possible bandwidth over this link.
*
* See USB4 v2 spec 1.0 10.4.4.5.
*/
ret = tb_port_read(in, &cap, TB_CFG_PORT,
in->cap_adap + DP_LOCAL_CAP, 1);
if (ret)
return ret;
rate = tb_dp_cap_get_rate_ext(cap);
if (tb_dp_is_uhbr_rate(rate)) {
/*
* When UHBR is used there is no reduction in lanes so
* we can use this directly.
*/
lanes = tb_dp_cap_get_lanes(cap);
} else {
/*
* If UHBR is not supported, check the
* non-reduced rate and lanes.
*/
ret = usb4_dp_port_nrd(in, &rate, &lanes);
if (ret)
return ret;
}
nrd_bw = tb_dp_bandwidth(rate, lanes);
if (max_bw) {
@ -790,26 +895,27 @@ static int tb_dp_nrd_bandwidth(struct tb_tunnel *tunnel, int *max_bw)
return nrd_bw;
}
static int tb_dp_bw_mode_consumed_bandwidth(struct tb_tunnel *tunnel,
int *consumed_up, int *consumed_down)
static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel,
int *consumed_up,
int *consumed_down)
{
struct tb_port *out = tunnel->dst_port;
struct tb_port *in = tunnel->src_port;
int ret, allocated_bw, max_bw;
if (!usb4_dp_port_bw_mode_enabled(in))
if (!usb4_dp_port_bandwidth_mode_enabled(in))
return -EOPNOTSUPP;
if (!tunnel->bw_mode)
return -EOPNOTSUPP;
/* Read what was allocated previously if any */
ret = usb4_dp_port_allocated_bw(in);
ret = usb4_dp_port_allocated_bandwidth(in);
if (ret < 0)
return ret;
allocated_bw = ret;
ret = tb_dp_nrd_bandwidth(tunnel, &max_bw);
ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw);
if (ret < 0)
return ret;
if (allocated_bw == max_bw)
@ -839,15 +945,15 @@ static int tb_dp_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up
* If we have already set the allocated bandwidth then use that.
* Otherwise we read it from the DPRX.
*/
if (usb4_dp_port_bw_mode_enabled(in) && tunnel->bw_mode) {
if (usb4_dp_port_bandwidth_mode_enabled(in) && tunnel->bw_mode) {
int ret, allocated_bw, max_bw;
ret = usb4_dp_port_allocated_bw(in);
ret = usb4_dp_port_allocated_bandwidth(in);
if (ret < 0)
return ret;
allocated_bw = ret;
ret = tb_dp_nrd_bandwidth(tunnel, &max_bw);
ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw);
if (ret < 0)
return ret;
if (allocated_bw == max_bw)
@ -874,23 +980,23 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up,
struct tb_port *in = tunnel->src_port;
int max_bw, ret, tmp;
if (!usb4_dp_port_bw_mode_enabled(in))
if (!usb4_dp_port_bandwidth_mode_enabled(in))
return -EOPNOTSUPP;
ret = tb_dp_nrd_bandwidth(tunnel, &max_bw);
ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw);
if (ret < 0)
return ret;
if (in->sw->config.depth < out->sw->config.depth) {
tmp = min(*alloc_down, max_bw);
ret = usb4_dp_port_allocate_bw(in, tmp);
ret = usb4_dp_port_allocate_bandwidth(in, tmp);
if (ret)
return ret;
*alloc_down = tmp;
*alloc_up = 0;
} else {
tmp = min(*alloc_up, max_bw);
ret = usb4_dp_port_allocate_bw(in, tmp);
ret = usb4_dp_port_allocate_bandwidth(in, tmp);
if (ret)
return ret;
*alloc_down = 0;
@ -900,6 +1006,9 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up,
/* Now we can use BW mode registers to figure out the bandwidth */
/* TODO: need to handle discovery too */
tunnel->bw_mode = true;
tb_port_dbg(in, "allocated bandwidth through allocation mode %d Mb/s\n",
tmp);
return 0;
}
@ -974,23 +1083,20 @@ static int tb_dp_maximum_bandwidth(struct tb_tunnel *tunnel, int *max_up,
int *max_down)
{
struct tb_port *in = tunnel->src_port;
u32 rate, lanes;
int ret;
/*
* DP IN adapter DP_LOCAL_CAP gets updated to the lowest AUX read
* parameter values, so we can use them to determine the
* maximum possible bandwidth over this link.
*/
ret = tb_dp_read_cap(tunnel, DP_LOCAL_CAP, &rate, &lanes);
if (ret)
if (!usb4_dp_port_bandwidth_mode_enabled(in))
return -EOPNOTSUPP;
ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, NULL);
if (ret < 0)
return ret;
if (in->sw->config.depth < tunnel->dst_port->sw->config.depth) {
*max_up = 0;
*max_down = tb_dp_bandwidth(rate, lanes);
*max_down = ret;
} else {
*max_up = tb_dp_bandwidth(rate, lanes);
*max_up = ret;
*max_down = 0;
}
@ -1011,8 +1117,8 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
* mode is enabled first and then read the bandwidth
* through those registers.
*/
ret = tb_dp_bw_mode_consumed_bandwidth(tunnel, consumed_up,
consumed_down);
ret = tb_dp_bandwidth_mode_consumed_bandwidth(tunnel, consumed_up,
consumed_down);
if (ret < 0) {
if (ret != -EOPNOTSUPP)
return ret;
@ -1132,6 +1238,47 @@ static int tb_dp_init_video_path(struct tb_path *path)
return 0;
}
static void tb_dp_dump(struct tb_tunnel *tunnel)
{
struct tb_port *in, *out;
u32 dp_cap, rate, lanes;
in = tunnel->src_port;
out = tunnel->dst_port;
if (tb_port_read(in, &dp_cap, TB_CFG_PORT,
in->cap_adap + DP_LOCAL_CAP, 1))
return;
rate = tb_dp_cap_get_rate(dp_cap);
lanes = tb_dp_cap_get_lanes(dp_cap);
tb_port_dbg(in, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
if (tb_port_read(out, &dp_cap, TB_CFG_PORT,
out->cap_adap + DP_LOCAL_CAP, 1))
return;
rate = tb_dp_cap_get_rate(dp_cap);
lanes = tb_dp_cap_get_lanes(dp_cap);
tb_port_dbg(out, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
if (tb_port_read(in, &dp_cap, TB_CFG_PORT,
in->cap_adap + DP_REMOTE_CAP, 1))
return;
rate = tb_dp_cap_get_rate(dp_cap);
lanes = tb_dp_cap_get_lanes(dp_cap);
tb_port_dbg(in, "reduced bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
}
/**
* tb_tunnel_discover_dp() - Discover existing Display Port tunnels
* @tb: Pointer to the domain structure
@ -1209,6 +1356,8 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
goto err_deactivate;
}
tb_dp_dump(tunnel);
tb_tunnel_dbg(tunnel, "discovered\n");
return tunnel;
@ -1452,6 +1601,10 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_path *path;
int credits;
/* Ring 0 is reserved for control channel */
if (WARN_ON(!receive_ring || !transmit_ring))
return NULL;
if (receive_ring > 0)
npaths++;
if (transmit_ring > 0)
@ -1468,7 +1621,7 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
tunnel->dst_port = dst;
tunnel->deinit = tb_dma_deinit;
credits = min_not_zero(TB_DMA_CREDITS, nhi->sw->max_dma_credits);
credits = min_not_zero(dma_credits, nhi->sw->max_dma_credits);
if (receive_ring > 0) {
path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,


@ -15,6 +15,7 @@
#include "tb.h"
#define USB4_DATA_RETRIES 3
#define USB4_DATA_DWORDS 16
enum usb4_sb_target {
USB4_SB_TARGET_ROUTER,
@ -112,7 +113,7 @@ static int __usb4_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata,
{
const struct tb_cm_ops *cm_ops = sw->tb->cm_ops;
if (tx_dwords > NVM_DATA_DWORDS || rx_dwords > NVM_DATA_DWORDS)
if (tx_dwords > USB4_DATA_DWORDS || rx_dwords > USB4_DATA_DWORDS)
return -EINVAL;
/*
@ -231,11 +232,14 @@ static bool link_is_usb4(struct tb_port *port)
* is not available for some reason (e.g. there is a Thunderbolt 3
* switch upstream) then the internal xHCI controller is enabled
* instead.
*
* This does not set the configuration valid bit of the router. To do
* that call usb4_switch_configuration_valid().
*/
int usb4_switch_setup(struct tb_switch *sw)
{
struct tb_port *downstream_port;
struct tb_switch *parent;
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *down;
bool tbt3, xhci;
u32 val = 0;
int ret;
@ -249,9 +253,8 @@ int usb4_switch_setup(struct tb_switch *sw)
if (ret)
return ret;
parent = tb_switch_parent(sw);
downstream_port = tb_port_at(tb_route(sw), parent);
sw->link_usb4 = link_is_usb4(downstream_port);
down = tb_switch_downstream_port(sw);
sw->link_usb4 = link_is_usb4(down);
tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT");
xhci = val & ROUTER_CS_6_HCI;
@ -288,7 +291,33 @@ int usb4_switch_setup(struct tb_switch *sw)
/* TBT3 supported by the CM */
val |= ROUTER_CS_5_C3S;
/* Tunneling configuration is ready now */
return tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
}
/**
* usb4_switch_configuration_valid() - Set tunneling configuration to be valid
* @sw: USB4 router
*
* Sets configuration valid bit for the router. Must be called before
* any tunnels can be set through the router and after
* usb4_switch_setup() has been called. Can be called to host and device
* routers (does nothing for the former).
*
* Returns %0 in success and negative errno otherwise.
*/
int usb4_switch_configuration_valid(struct tb_switch *sw)
{
u32 val;
int ret;
if (!tb_route(sw))
return 0;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
if (ret)
return ret;
val |= ROUTER_CS_5_CV;
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
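For illustration, the call order implied by the kernel-doc above, as a minimal sketch (error handling trimmed; both functions are defined in this file):

/* Sketch: set up first, then declare the tunneling configuration
 * valid; only after that may tunnels be established.
 */
ret = usb4_switch_setup(sw);
if (ret)
	return ret;
ret = usb4_switch_configuration_valid(sw);
if (ret)
	return ret;
/* ...tunnels can now be set up through @sw... */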
@ -703,7 +732,7 @@ int usb4_switch_credits_init(struct tb_switch *sw)
int max_usb3, min_dp_aux, min_dp_main, max_pcie, max_dma;
int ret, length, i, nports;
const struct tb_port *port;
u32 data[NVM_DATA_DWORDS];
u32 data[USB4_DATA_DWORDS];
u32 metadata = 0;
u8 status = 0;
@ -1199,7 +1228,7 @@ static int usb4_port_wait_for_bit(struct tb_port *port, u32 offset, u32 bit,
static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
{
if (dwords > NVM_DATA_DWORDS)
if (dwords > USB4_DATA_DWORDS)
return -EINVAL;
return tb_port_read(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
@ -1209,7 +1238,7 @@ static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
static int usb4_port_write_data(struct tb_port *port, const void *data,
size_t dwords)
{
if (dwords > NVM_DATA_DWORDS)
if (dwords > USB4_DATA_DWORDS)
return -EINVAL;
return tb_port_write(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
@ -1845,7 +1874,7 @@ static int usb4_port_retimer_nvm_read_block(void *data, unsigned int dwaddress,
int ret;
metadata = dwaddress << USB4_NVM_READ_OFFSET_SHIFT;
if (dwords < NVM_DATA_DWORDS)
if (dwords < USB4_DATA_DWORDS)
metadata |= dwords << USB4_NVM_READ_LENGTH_SHIFT;
ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata,
@ -2265,13 +2294,14 @@ int usb4_dp_port_set_cm_id(struct tb_port *port, int cm_id)
}
/**
* usb4_dp_port_bw_mode_supported() - Is the bandwidth allocation mode supported
* usb4_dp_port_bandwidth_mode_supported() - Is the bandwidth allocation mode
* supported
* @port: DP IN adapter to check
*
* Can be called to any DP IN adapter. Returns true if the adapter
* supports USB4 bandwidth allocation mode, false otherwise.
*/
bool usb4_dp_port_bw_mode_supported(struct tb_port *port)
bool usb4_dp_port_bandwidth_mode_supported(struct tb_port *port)
{
int ret;
u32 val;
@ -2288,13 +2318,14 @@ bool usb4_dp_port_bw_mode_supported(struct tb_port *port)
}
/**
* usb4_dp_port_bw_mode_enabled() - Is the bandwidth allocation mode enabled
* usb4_dp_port_bandwidth_mode_enabled() - Is the bandwidth allocation mode
* enabled
* @port: DP IN adapter to check
*
* Can be called to any DP IN adapter. Returns true if the bandwidth
* allocation mode has been enabled, false otherwise.
*/
bool usb4_dp_port_bw_mode_enabled(struct tb_port *port)
bool usb4_dp_port_bandwidth_mode_enabled(struct tb_port *port)
{
int ret;
u32 val;
@ -2311,7 +2342,8 @@ bool usb4_dp_port_bw_mode_enabled(struct tb_port *port)
}
/**
* usb4_dp_port_set_cm_bw_mode_supported() - Set/clear CM support for bandwidth allocation mode
* usb4_dp_port_set_cm_bandwidth_mode_supported() - Set/clear CM support for
* bandwidth allocation mode
* @port: DP IN adapter
* @supported: Does the CM support bandwidth allocation mode
*
@ -2320,7 +2352,8 @@ bool usb4_dp_port_bw_mode_enabled(struct tb_port *port)
* otherwise. Specifically returns %-EOPNOTSUPP if the passed in adapter
* does not support this.
*/
int usb4_dp_port_set_cm_bw_mode_supported(struct tb_port *port, bool supported)
int usb4_dp_port_set_cm_bandwidth_mode_supported(struct tb_port *port,
bool supported)
{
u32 val;
int ret;
@ -2594,7 +2627,7 @@ int usb4_dp_port_set_granularity(struct tb_port *port, int granularity)
}
/**
* usb4_dp_port_set_estimated_bw() - Set estimated bandwidth
* usb4_dp_port_set_estimated_bandwidth() - Set estimated bandwidth
* @port: DP IN adapter
* @bw: Estimated bandwidth in Mb/s.
*
@ -2604,7 +2637,7 @@ int usb4_dp_port_set_granularity(struct tb_port *port, int granularity)
* and negative errno otherwise. Specifically returns %-EOPNOTSUPP if
* the adapter does not support this.
*/
int usb4_dp_port_set_estimated_bw(struct tb_port *port, int bw)
int usb4_dp_port_set_estimated_bandwidth(struct tb_port *port, int bw)
{
u32 val, granularity;
int ret;
@ -2630,14 +2663,14 @@ int usb4_dp_port_set_estimated_bw(struct tb_port *port, int bw)
}
/**
* usb4_dp_port_allocated_bw() - Return allocated bandwidth
* usb4_dp_port_allocated_bandwidth() - Return allocated bandwidth
* @port: DP IN adapter
*
* Reads and returns allocated bandwidth for @port in Mb/s (taking into
* account the programmed granularity). Returns negative errno in case
* of error.
*/
int usb4_dp_port_allocated_bw(struct tb_port *port)
int usb4_dp_port_allocated_bandwidth(struct tb_port *port)
{
u32 val, granularity;
int ret;
@ -2723,7 +2756,7 @@ static int usb4_dp_port_wait_and_clear_cm_ack(struct tb_port *port,
}
/**
* usb4_dp_port_allocate_bw() - Set allocated bandwidth
* usb4_dp_port_allocate_bandwidth() - Set allocated bandwidth
* @port: DP IN adapter
* @bw: New allocated bandwidth in Mb/s
*
@ -2731,7 +2764,7 @@ static int usb4_dp_port_wait_and_clear_cm_ack(struct tb_port *port,
* driver). Takes into account the programmed granularity. Returns %0 in
* success and negative errno in case of error.
*/
int usb4_dp_port_allocate_bw(struct tb_port *port, int bw)
int usb4_dp_port_allocate_bandwidth(struct tb_port *port, int bw)
{
u32 val, granularity;
int ret;
@ -2765,7 +2798,7 @@ int usb4_dp_port_allocate_bw(struct tb_port *port, int bw)
}
/**
* usb4_dp_port_requested_bw() - Read requested bandwidth
* usb4_dp_port_requested_bandwidth() - Read requested bandwidth
* @port: DP IN adapter
*
* Reads the DPCD (graphics driver) requested bandwidth and returns it
@ -2774,7 +2807,7 @@ int usb4_dp_port_allocate_bw(struct tb_port *port, int bw)
* the adapter does not support bandwidth allocation mode, and %-ENODATA
* if there is no active bandwidth request from the graphics driver.
*/
int usb4_dp_port_requested_bw(struct tb_port *port)
int usb4_dp_port_requested_bandwidth(struct tb_port *port)
{
u32 val, granularity;
int ret;
@ -2797,3 +2830,34 @@ int usb4_dp_port_requested_bw(struct tb_port *port)
return (val & ADP_DP_CS_8_REQUESTED_BW_MASK) * granularity;
}
/**
* usb4_pci_port_set_ext_encapsulation() - Enable/disable extended encapsulation
* @port: PCIe adapter
* @enable: Enable/disable extended encapsulation
*
* Enables or disables extended encapsulation used in PCIe tunneling. Caller
* needs to make sure both adapters support this before enabling. Returns %0 on
* success and negative errno otherwise.
*/
int usb4_pci_port_set_ext_encapsulation(struct tb_port *port, bool enable)
{
u32 val;
int ret;
if (!tb_port_is_pcie_up(port) && !tb_port_is_pcie_down(port))
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_PCIE_CS_1, 1);
if (ret)
return ret;
if (enable)
val |= ADP_PCIE_CS_1_EE;
else
val &= ~ADP_PCIE_CS_1_EE;
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_PCIE_CS_1, 1);
}


@ -537,9 +537,8 @@ static int tb_xdp_link_state_status_request(struct tb_ctl *ctl, u64 route,
static int tb_xdp_link_state_status_response(struct tb *tb, struct tb_ctl *ctl,
struct tb_xdomain *xd, u8 sequence)
{
struct tb_switch *sw = tb_to_switch(xd->dev.parent);
struct tb_xdp_link_state_status_response res;
struct tb_port *port = tb_port_at(xd->route, sw);
struct tb_port *port = tb_xdomain_downstream_port(xd);
u32 val[2];
int ret;
@ -1137,7 +1136,7 @@ static int tb_xdomain_update_link_attributes(struct tb_xdomain *xd)
struct tb_port *port;
int ret;
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
port = tb_xdomain_downstream_port(xd);
ret = tb_port_get_link_speed(port);
if (ret < 0)
@ -1251,8 +1250,7 @@ static int tb_xdomain_get_link_status(struct tb_xdomain *xd)
static int tb_xdomain_link_state_change(struct tb_xdomain *xd,
unsigned int width)
{
struct tb_switch *sw = tb_to_switch(xd->dev.parent);
struct tb_port *port = tb_port_at(xd->route, sw);
struct tb_port *port = tb_xdomain_downstream_port(xd);
struct tb *tb = xd->tb;
u8 tlw, tls;
u32 val;
@ -1292,13 +1290,16 @@ static int tb_xdomain_link_state_change(struct tb_xdomain *xd,
static int tb_xdomain_bond_lanes_uuid_high(struct tb_xdomain *xd)
{
unsigned int width, width_mask;
struct tb_port *port;
int ret, width;
int ret;
if (xd->target_link_width == LANE_ADP_CS_1_TARGET_WIDTH_SINGLE) {
width = 1;
width = TB_LINK_WIDTH_SINGLE;
width_mask = width;
} else if (xd->target_link_width == LANE_ADP_CS_1_TARGET_WIDTH_DUAL) {
width = 2;
width = TB_LINK_WIDTH_DUAL;
width_mask = width | TB_LINK_WIDTH_ASYM_TX | TB_LINK_WIDTH_ASYM_RX;
} else {
if (xd->state_retries-- > 0) {
dev_dbg(&xd->dev,
@ -1309,7 +1310,7 @@ static int tb_xdomain_bond_lanes_uuid_high(struct tb_xdomain *xd)
return -ETIMEDOUT;
}
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
port = tb_xdomain_downstream_port(xd);
/*
* We can't use tb_xdomain_lane_bonding_enable() here because it
@ -1330,15 +1331,16 @@ static int tb_xdomain_bond_lanes_uuid_high(struct tb_xdomain *xd)
return ret;
}
ret = tb_port_wait_for_link_width(port, width, XDOMAIN_BONDING_TIMEOUT);
ret = tb_port_wait_for_link_width(port, width_mask,
XDOMAIN_BONDING_TIMEOUT);
if (ret) {
dev_warn(&xd->dev, "error waiting for link width to become %d\n",
width);
width_mask);
return ret;
}
port->bonded = width == 2;
port->dual_link_port->bonded = width == 2;
port->bonded = width > TB_LINK_WIDTH_SINGLE;
port->dual_link_port->bonded = width > TB_LINK_WIDTH_SINGLE;
tb_port_update_credits(port);
tb_xdomain_update_link_attributes(xd);
@ -1425,7 +1427,7 @@ static int tb_xdomain_get_properties(struct tb_xdomain *xd)
if (xd->bonding_possible) {
struct tb_port *port;
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
port = tb_xdomain_downstream_port(xd);
if (!port->bonded)
tb_port_disable(port->dual_link_port);
}
@ -1737,16 +1739,57 @@ static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);
static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
char *buf)
static ssize_t rx_lanes_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
unsigned int width;
return sysfs_emit(buf, "%u\n", xd->link_width);
switch (xd->link_width) {
case TB_LINK_WIDTH_SINGLE:
case TB_LINK_WIDTH_ASYM_RX:
width = 1;
break;
case TB_LINK_WIDTH_DUAL:
width = 2;
break;
case TB_LINK_WIDTH_ASYM_TX:
width = 3;
break;
default:
WARN_ON_ONCE(1);
return -EINVAL;
}
return sysfs_emit(buf, "%u\n", width);
}
static DEVICE_ATTR(rx_lanes, 0444, rx_lanes_show, NULL);
static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
static ssize_t tx_lanes_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
unsigned int width;
switch (xd->link_width) {
case TB_LINK_WIDTH_SINGLE:
case TB_LINK_WIDTH_ASYM_TX:
width = 1;
break;
case TB_LINK_WIDTH_DUAL:
width = 2;
break;
case TB_LINK_WIDTH_ASYM_RX:
width = 3;
break;
default:
WARN_ON_ONCE(1);
return -EINVAL;
}
return sysfs_emit(buf, "%u\n", width);
}
static DEVICE_ATTR(tx_lanes, 0444, tx_lanes_show, NULL);
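To summarize the mapping implemented by the two show functions above (derived directly from their switch statements):

/*
 * link_width              rx_lanes  tx_lanes
 * TB_LINK_WIDTH_SINGLE       1         1
 * TB_LINK_WIDTH_DUAL         2         2
 * TB_LINK_WIDTH_ASYM_TX      3         1
 * TB_LINK_WIDTH_ASYM_RX      1         3
 */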
static struct attribute *xdomain_attrs[] = {
&dev_attr_device.attr,
@ -1976,10 +2019,11 @@ void tb_xdomain_remove(struct tb_xdomain *xd)
*/
int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
{
unsigned int width_mask;
struct tb_port *port;
int ret;
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
port = tb_xdomain_downstream_port(xd);
if (!port->dual_link_port)
return -ENODEV;
@ -1999,7 +2043,12 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
return ret;
}
ret = tb_port_wait_for_link_width(port, 2, XDOMAIN_BONDING_TIMEOUT);
/* Any of these widths means the lanes are bonded */
width_mask = TB_LINK_WIDTH_DUAL | TB_LINK_WIDTH_ASYM_TX |
TB_LINK_WIDTH_ASYM_RX;
ret = tb_port_wait_for_link_width(port, width_mask,
XDOMAIN_BONDING_TIMEOUT);
if (ret) {
tb_port_warn(port, "failed to enable lane bonding\n");
return ret;
@ -2024,10 +2073,13 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
{
struct tb_port *port;
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
port = tb_xdomain_downstream_port(xd);
if (port->dual_link_port) {
int ret;
tb_port_lane_bonding_disable(port);
if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)
ret = tb_port_wait_for_link_width(port, TB_LINK_WIDTH_SINGLE, 100);
if (ret == -ETIMEDOUT)
tb_port_warn(port, "timeout disabling lane bonding\n");
tb_port_disable(port->dual_link_port);
tb_port_update_credits(port);


@ -177,7 +177,7 @@ static int c67x00_drv_probe(struct platform_device *pdev)
return ret;
}
static int c67x00_drv_remove(struct platform_device *pdev)
static void c67x00_drv_remove(struct platform_device *pdev)
{
struct c67x00_device *c67x00 = platform_get_drvdata(pdev);
struct resource *res;
@ -197,13 +197,11 @@ static int c67x00_drv_remove(struct platform_device *pdev)
release_mem_region(res->start, resource_size(res));
kfree(c67x00);
return 0;
}
static struct platform_driver c67x00_driver = {
.probe = c67x00_drv_probe,
.remove = c67x00_drv_remove,
.remove_new = c67x00_drv_remove,
.driver = {
.name = "c67x00",
},


@ -78,6 +78,17 @@ config USB_CDNS3_IMX
For example, imx8qm and imx8qxp.
config USB_CDNS3_STARFIVE
tristate "Cadence USB3 support on StarFive SoC platforms"
depends on ARCH_STARFIVE || COMPILE_TEST
help
Say 'Y' or 'M' here if you are building for StarFive SoC
platforms that contain a Cadence USB3 controller core,
e.g. the JH7110.
If you choose to build this driver as a module, it will
be dynamically linked and the module will be called cdns3-starfive.ko.
endif
if USB_CDNS_SUPPORT


@ -24,6 +24,7 @@ endif
obj-$(CONFIG_USB_CDNS3_PCI_WRAP) += cdns3-pci-wrap.o
obj-$(CONFIG_USB_CDNS3_TI) += cdns3-ti.o
obj-$(CONFIG_USB_CDNS3_IMX) += cdns3-imx.o
obj-$(CONFIG_USB_CDNS3_STARFIVE) += cdns3-starfive.o
cdnsp-udc-pci-y := cdnsp-pci.o


@ -800,7 +800,8 @@ void cdns3_gadget_giveback(struct cdns3_endpoint *priv_ep,
if (request->status == -EINPROGRESS)
request->status = status;
usb_gadget_unmap_request_by_dev(priv_dev->sysdev, request,
if (likely(!(priv_req->flags & REQUEST_UNALIGNED)))
usb_gadget_unmap_request_by_dev(priv_dev->sysdev, request,
priv_ep->dir);
if ((priv_req->flags & REQUEST_UNALIGNED) &&
@ -808,10 +809,10 @@ void cdns3_gadget_giveback(struct cdns3_endpoint *priv_ep,
/* Make DMA buffer CPU accessible */
dma_sync_single_for_cpu(priv_dev->sysdev,
priv_req->aligned_buf->dma,
priv_req->aligned_buf->size,
request->actual,
priv_req->aligned_buf->dir);
memcpy(request->buf, priv_req->aligned_buf->buf,
request->length);
request->actual);
}
priv_req->flags &= ~(REQUEST_PENDING | REQUEST_UNALIGNED);
@ -2543,10 +2544,12 @@ static int __cdns3_gadget_ep_queue(struct usb_ep *ep,
if (ret < 0)
return ret;
ret = usb_gadget_map_request_by_dev(priv_dev->sysdev, request,
if (likely(!(priv_req->flags & REQUEST_UNALIGNED))) {
ret = usb_gadget_map_request_by_dev(priv_dev->sysdev, request,
usb_endpoint_dir_in(ep->desc));
if (ret)
return ret;
if (ret)
return ret;
}
list_add_tail(&request->list, &priv_ep->deferred_req_list);


@ -105,11 +105,11 @@ static inline void cdns_imx_writel(struct cdns_imx *data, u32 offset, u32 value)
}
static const struct clk_bulk_data imx_cdns3_core_clks[] = {
{ .id = "usb3_lpm_clk" },
{ .id = "usb3_bus_clk" },
{ .id = "usb3_aclk" },
{ .id = "usb3_ipg_clk" },
{ .id = "usb3_core_pclk" },
{ .id = "lpm" },
{ .id = "bus" },
{ .id = "aclk" },
{ .id = "ipg" },
{ .id = "core" },
};
static int cdns_imx_noncore_init(struct cdns_imx *data)
@ -218,7 +218,7 @@ err:
return ret;
}
static int cdns_imx_remove(struct platform_device *pdev)
static void cdns_imx_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct cdns_imx *data = dev_get_drvdata(dev);
@ -229,8 +229,6 @@ static int cdns_imx_remove(struct platform_device *pdev)
pm_runtime_disable(dev);
pm_runtime_put_noidle(dev);
platform_set_drvdata(pdev, NULL);
return 0;
}
#ifdef CONFIG_PM
@ -375,14 +373,22 @@ static inline bool cdns_imx_is_power_lost(struct cdns_imx *data)
return false;
}
static int __maybe_unused cdns_imx_system_suspend(struct device *dev)
{
pm_runtime_put_sync(dev);
return 0;
}
static int __maybe_unused cdns_imx_system_resume(struct device *dev)
{
struct cdns_imx *data = dev_get_drvdata(dev);
int ret;
ret = cdns_imx_resume(dev);
if (ret)
ret = pm_runtime_resume_and_get(dev);
if (ret < 0) {
dev_err(dev, "Could not get runtime PM.\n");
return ret;
}
if (cdns_imx_is_power_lost(data)) {
dev_dbg(dev, "resume from power lost\n");
@ -405,7 +411,7 @@ static int cdns_imx_platform_suspend(struct device *dev,
static const struct dev_pm_ops cdns_imx_pm_ops = {
SET_RUNTIME_PM_OPS(cdns_imx_suspend, cdns_imx_resume, NULL)
SET_SYSTEM_SLEEP_PM_OPS(cdns_imx_suspend, cdns_imx_system_resume)
SET_SYSTEM_SLEEP_PM_OPS(cdns_imx_system_suspend, cdns_imx_system_resume)
};
static const struct of_device_id cdns_imx_of_match[] = {
@ -416,7 +422,7 @@ MODULE_DEVICE_TABLE(of, cdns_imx_of_match);
static struct platform_driver cdns_imx_driver = {
.probe = cdns_imx_probe,
.remove = cdns_imx_remove,
.remove_new = cdns_imx_remove,
.driver = {
.name = "cdns3-imx",
.of_match_table = cdns_imx_of_match,


@ -175,7 +175,7 @@ err_phy3_init:
*
* Returns 0 on success otherwise negative errno
*/
static int cdns3_plat_remove(struct platform_device *pdev)
static void cdns3_plat_remove(struct platform_device *pdev)
{
struct cdns *cdns = platform_get_drvdata(pdev);
struct device *dev = cdns->dev;
@ -187,7 +187,6 @@ static int cdns3_plat_remove(struct platform_device *pdev)
set_phy_power_off(cdns);
phy_exit(cdns->usb2_phy);
phy_exit(cdns->usb3_phy);
return 0;
}
#ifdef CONFIG_PM
@ -320,7 +319,7 @@ MODULE_DEVICE_TABLE(of, of_cdns3_match);
static struct platform_driver cdns3_driver = {
.probe = cdns3_plat_probe,
.remove = cdns3_plat_remove,
.remove_new = cdns3_plat_remove,
.driver = {
.name = "cdns-usb3",
.of_match_table = of_match_ptr(of_cdns3_match),

View File

@ -0,0 +1,246 @@
// SPDX-License-Identifier: GPL-2.0
/**
* cdns3-starfive.c - StarFive specific Glue layer for Cadence USB Controller
*
* Copyright (C) 2023 StarFive Technology Co., Ltd.
*
* Author: Minda Chen <minda.chen@starfivetech.com>
*/
#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/module.h>
#include <linux/mfd/syscon.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/of_platform.h>
#include <linux/reset.h>
#include <linux/regmap.h>
#include <linux/usb/otg.h>
#include "core.h"
#define USB_STRAP_HOST BIT(17)
#define USB_STRAP_DEVICE BIT(18)
#define USB_STRAP_MASK GENMASK(18, 16)
#define USB_SUSPENDM_HOST BIT(19)
#define USB_SUSPENDM_MASK BIT(19)
#define USB_MISC_CFG_MASK GENMASK(23, 20)
#define USB_SUSPENDM_BYPS BIT(20)
#define USB_PLL_EN BIT(22)
#define USB_REFCLK_MODE BIT(23)
struct cdns_starfive {
struct device *dev;
struct regmap *stg_syscon;
struct reset_control *resets;
struct clk_bulk_data *clks;
int num_clks;
u32 stg_usb_mode;
};
static void cdns_mode_init(struct platform_device *pdev,
struct cdns_starfive *data)
{
enum usb_dr_mode mode;
regmap_update_bits(data->stg_syscon, data->stg_usb_mode,
USB_MISC_CFG_MASK,
USB_SUSPENDM_BYPS | USB_PLL_EN | USB_REFCLK_MODE);
/* dr mode setting */
mode = usb_get_dr_mode(&pdev->dev);
switch (mode) {
case USB_DR_MODE_HOST:
regmap_update_bits(data->stg_syscon,
data->stg_usb_mode,
USB_STRAP_MASK,
USB_STRAP_HOST);
regmap_update_bits(data->stg_syscon,
data->stg_usb_mode,
USB_SUSPENDM_MASK,
USB_SUSPENDM_HOST);
break;
case USB_DR_MODE_PERIPHERAL:
regmap_update_bits(data->stg_syscon, data->stg_usb_mode,
USB_STRAP_MASK, USB_STRAP_DEVICE);
regmap_update_bits(data->stg_syscon, data->stg_usb_mode,
USB_SUSPENDM_MASK, 0);
break;
default:
break;
}
}
static int cdns_clk_rst_init(struct cdns_starfive *data)
{
int ret;
ret = clk_bulk_prepare_enable(data->num_clks, data->clks);
if (ret)
return dev_err_probe(data->dev, ret,
"failed to enable clocks\n");
ret = reset_control_deassert(data->resets);
if (ret) {
dev_err(data->dev, "failed to deassert resets\n");
goto err_clk_init;
}
return ret;
err_clk_init:
clk_bulk_disable_unprepare(data->num_clks, data->clks);
return ret;
}
static void cdns_clk_rst_deinit(struct cdns_starfive *data)
{
reset_control_assert(data->resets);
clk_bulk_disable_unprepare(data->num_clks, data->clks);
}
static int cdns_starfive_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct cdns_starfive *data;
unsigned int args;
int ret;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->dev = dev;
data->stg_syscon =
syscon_regmap_lookup_by_phandle_args(pdev->dev.of_node,
"starfive,stg-syscon", 1, &args);
if (IS_ERR(data->stg_syscon))
return dev_err_probe(dev, PTR_ERR(data->stg_syscon),
"Failed to parse starfive,stg-syscon\n");
data->stg_usb_mode = args;
data->num_clks = devm_clk_bulk_get_all(data->dev, &data->clks);
if (data->num_clks < 0)
return dev_err_probe(data->dev, -ENODEV,
"Failed to get clocks\n");
data->resets = devm_reset_control_array_get_exclusive(data->dev);
if (IS_ERR(data->resets))
return dev_err_probe(data->dev, PTR_ERR(data->resets),
"Failed to get resets");
cdns_mode_init(pdev, data);
ret = cdns_clk_rst_init(data);
if (ret)
return ret;
ret = of_platform_populate(dev->of_node, NULL, NULL, dev);
if (ret) {
dev_err(dev, "Failed to create children\n");
cdns_clk_rst_deinit(data);
return ret;
}
device_set_wakeup_capable(dev, true);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
platform_set_drvdata(pdev, data);
return 0;
}
static int cdns_starfive_remove_core(struct device *dev, void *c)
{
struct platform_device *pdev = to_platform_device(dev);
platform_device_unregister(pdev);
return 0;
}
static int cdns_starfive_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct cdns_starfive *data = dev_get_drvdata(dev);
pm_runtime_get_sync(dev);
device_for_each_child(dev, NULL, cdns_starfive_remove_core);
pm_runtime_disable(dev);
pm_runtime_put_noidle(dev);
cdns_clk_rst_deinit(data);
platform_set_drvdata(pdev, NULL);
return 0;
}
#ifdef CONFIG_PM
static int cdns_starfive_runtime_resume(struct device *dev)
{
struct cdns_starfive *data = dev_get_drvdata(dev);
return clk_bulk_prepare_enable(data->num_clks, data->clks);
}
static int cdns_starfive_runtime_suspend(struct device *dev)
{
struct cdns_starfive *data = dev_get_drvdata(dev);
clk_bulk_disable_unprepare(data->num_clks, data->clks);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int cdns_starfive_resume(struct device *dev)
{
struct cdns_starfive *data = dev_get_drvdata(dev);
return cdns_clk_rst_init(data);
}
static int cdns_starfive_suspend(struct device *dev)
{
struct cdns_starfive *data = dev_get_drvdata(dev);
cdns_clk_rst_deinit(data);
return 0;
}
#endif
#endif
static const struct dev_pm_ops cdns_starfive_pm_ops = {
SET_RUNTIME_PM_OPS(cdns_starfive_runtime_suspend,
cdns_starfive_runtime_resume, NULL)
SET_SYSTEM_SLEEP_PM_OPS(cdns_starfive_suspend, cdns_starfive_resume)
};
static const struct of_device_id cdns_starfive_of_match[] = {
{ .compatible = "starfive,jh7110-usb", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, cdns_starfive_of_match);
static struct platform_driver cdns_starfive_driver = {
.probe = cdns_starfive_probe,
.remove = cdns_starfive_remove,
.driver = {
.name = "cdns3-starfive",
.of_match_table = cdns_starfive_of_match,
.pm = &cdns_starfive_pm_ops,
},
};
module_platform_driver(cdns_starfive_driver);
MODULE_ALIAS("platform:cdns3-starfive");
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Cadence USB3 StarFive Glue Layer");


@ -199,7 +199,7 @@ static int cdns_ti_remove_core(struct device *dev, void *c)
return 0;
}
static int cdns_ti_remove(struct platform_device *pdev)
static void cdns_ti_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -208,8 +208,6 @@ static int cdns_ti_remove(struct platform_device *pdev)
pm_runtime_disable(dev);
platform_set_drvdata(pdev, NULL);
return 0;
}
static const struct of_device_id cdns_ti_of_match[] = {
@ -221,7 +219,7 @@ MODULE_DEVICE_TABLE(of, cdns_ti_of_match);
static struct platform_driver cdns_ti_driver = {
.probe = cdns_ti_probe,
.remove = cdns_ti_remove,
.remove_new = cdns_ti_remove,
.driver = {
.name = "cdns3-ti",
.of_match_table = cdns_ti_of_match,


@ -70,6 +70,10 @@ static const struct ci_hdrc_imx_platform_flag imx7ulp_usb_data = {
CI_HDRC_PMQOS,
};
static const struct ci_hdrc_imx_platform_flag imx8ulp_usb_data = {
.flags = CI_HDRC_SUPPORTS_RUNTIME_PM,
};
static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
{ .compatible = "fsl,imx23-usb", .data = &imx23_usb_data},
{ .compatible = "fsl,imx28-usb", .data = &imx28_usb_data},
@ -80,6 +84,7 @@ static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
{ .compatible = "fsl,imx6ul-usb", .data = &imx6ul_usb_data},
{ .compatible = "fsl,imx7d-usb", .data = &imx7d_usb_data},
{ .compatible = "fsl,imx7ulp-usb", .data = &imx7ulp_usb_data},
{ .compatible = "fsl,imx8ulp-usb", .data = &imx8ulp_usb_data},
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, ci_hdrc_imx_dt_ids);
@ -502,7 +507,7 @@ disable_hsic_regulator:
return ret;
}
static int ci_hdrc_imx_remove(struct platform_device *pdev)
static void ci_hdrc_imx_remove(struct platform_device *pdev)
{
struct ci_hdrc_imx_data *data = platform_get_drvdata(pdev);
@ -522,8 +527,6 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev)
if (data->hsic_pad_regulator)
regulator_disable(data->hsic_pad_regulator);
}
return 0;
}
static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
@ -650,7 +653,7 @@ static const struct dev_pm_ops ci_hdrc_imx_pm_ops = {
};
static struct platform_driver ci_hdrc_imx_driver = {
.probe = ci_hdrc_imx_probe,
.remove = ci_hdrc_imx_remove,
.remove_new = ci_hdrc_imx_remove,
.shutdown = ci_hdrc_imx_shutdown,
.driver = {
.name = "imx_usb",


@ -274,7 +274,7 @@ err_iface:
return ret;
}
static int ci_hdrc_msm_remove(struct platform_device *pdev)
static void ci_hdrc_msm_remove(struct platform_device *pdev)
{
struct ci_hdrc_msm *ci = platform_get_drvdata(pdev);
@ -282,8 +282,6 @@ static int ci_hdrc_msm_remove(struct platform_device *pdev)
ci_hdrc_remove_device(ci->ci);
clk_disable_unprepare(ci->iface_clk);
clk_disable_unprepare(ci->core_clk);
return 0;
}
static const struct of_device_id msm_ci_dt_match[] = {
@ -294,7 +292,7 @@ MODULE_DEVICE_TABLE(of, msm_ci_dt_match);
static struct platform_driver ci_hdrc_msm_driver = {
.probe = ci_hdrc_msm_probe,
.remove = ci_hdrc_msm_remove,
.remove_new = ci_hdrc_msm_remove,
.driver = {
.name = "msm_hsusb",
.of_match_table = msm_ci_dt_match,


@ -362,7 +362,7 @@ fail_power_off:
return err;
}
static int tegra_usb_remove(struct platform_device *pdev)
static void tegra_usb_remove(struct platform_device *pdev)
{
struct tegra_usb *usb = platform_get_drvdata(pdev);
@ -371,8 +371,6 @@ static int tegra_usb_remove(struct platform_device *pdev)
pm_runtime_put_sync_suspend(&pdev->dev);
pm_runtime_force_suspend(&pdev->dev);
return 0;
}
static int __maybe_unused tegra_usb_runtime_resume(struct device *dev)
@ -410,7 +408,7 @@ static struct platform_driver tegra_usb_driver = {
.pm = &tegra_usb_pm,
},
.probe = tegra_usb_probe,
.remove = tegra_usb_remove,
.remove_new = tegra_usb_remove,
};
module_platform_driver(tegra_usb_driver);


@ -106,20 +106,18 @@ clk_err:
return ret;
}
static int ci_hdrc_usb2_remove(struct platform_device *pdev)
static void ci_hdrc_usb2_remove(struct platform_device *pdev)
{
struct ci_hdrc_usb2_priv *priv = platform_get_drvdata(pdev);
pm_runtime_disable(&pdev->dev);
ci_hdrc_remove_device(priv->ci_pdev);
clk_disable_unprepare(priv->clk);
return 0;
}
static struct platform_driver ci_hdrc_usb2_driver = {
.probe = ci_hdrc_usb2_probe,
.remove = ci_hdrc_usb2_remove,
.remove_new = ci_hdrc_usb2_remove,
.driver = {
.name = "chipidea-usb2",
.of_match_table = of_match_ptr(ci_hdrc_usb2_of_match),


@ -1227,7 +1227,7 @@ ulpi_exit:
return ret;
}
static int ci_hdrc_remove(struct platform_device *pdev)
static void ci_hdrc_remove(struct platform_device *pdev)
{
struct ci_hdrc *ci = platform_get_drvdata(pdev);
@ -1245,8 +1245,6 @@ static int ci_hdrc_remove(struct platform_device *pdev)
ci_hdrc_enter_lpm(ci, true);
ci_usb_phy_exit(ci);
ci_ulpi_exit(ci);
return 0;
}
#ifdef CONFIG_PM
@ -1485,7 +1483,7 @@ static const struct dev_pm_ops ci_pm_ops = {
static struct platform_driver ci_hdrc_driver = {
.probe = ci_hdrc_probe,
.remove = ci_hdrc_remove,
.remove_new = ci_hdrc_remove,
.driver = {
.name = "ci_hdrc",
.pm = &ci_pm_ops,


@ -113,7 +113,6 @@
#define MX7D_USBNC_USB_CTRL2_DP_DM_MASK (BIT(12) | BIT(13) | \
BIT(14) | BIT(15))
#define MX7D_USB_OTG_PHY_CFG1 0x30
#define MX7D_USB_OTG_PHY_CFG2_CHRG_CHRGSEL BIT(0)
#define MX7D_USB_OTG_PHY_CFG2_CHRG_VDATDETENB0 BIT(1)
#define MX7D_USB_OTG_PHY_CFG2_CHRG_VDATSRCENB0 BIT(2)
@ -135,7 +134,7 @@
#define TXVREFTUNE0_MASK (0xf << 20)
#define MX6_USB_OTG_WAKEUP_BITS (MX6_BM_WAKEUP_ENABLE | MX6_BM_VBUS_WAKEUP | \
MX6_BM_ID_WAKEUP)
MX6_BM_ID_WAKEUP | MX6SX_BM_DPDM_WAKEUP_EN)
struct usbmisc_ops {
/* Called once when probing a USB device */
@ -152,6 +151,7 @@ struct usbmisc_ops {
int (*charger_detection)(struct imx_usbmisc_data *data);
/* Called when the system resumes from USB power loss */
int (*power_lost_check)(struct imx_usbmisc_data *data);
void (*vbus_comparator_on)(struct imx_usbmisc_data *data, bool on);
};
struct imx_usbmisc {
@ -875,6 +875,33 @@ static int imx7d_charger_detection(struct imx_usbmisc_data *data)
return ret;
}
static void usbmisc_imx7d_vbus_comparator_on(struct imx_usbmisc_data *data,
bool on)
{
unsigned long flags;
struct imx_usbmisc *usbmisc = dev_get_drvdata(data->dev);
u32 val;
if (data->hsic)
return;
spin_lock_irqsave(&usbmisc->lock, flags);
/*
* Disable the VBUS valid comparator when in suspend mode:
* when OTG is disabled and DRVVBUS0 is asserted, the Bandgap
* circuitry and the VBUS valid comparator stay powered, even
* in Suspend or Sleep mode.
*/
val = readl(usbmisc->base + MX7D_USB_OTG_PHY_CFG2);
if (on)
val |= MX7D_USB_OTG_PHY_CFG2_DRVVBUS0;
else
val &= ~MX7D_USB_OTG_PHY_CFG2_DRVVBUS0;
writel(val, usbmisc->base + MX7D_USB_OTG_PHY_CFG2);
spin_unlock_irqrestore(&usbmisc->lock, flags);
}
static int usbmisc_imx7ulp_init(struct imx_usbmisc_data *data)
{
struct imx_usbmisc *usbmisc = dev_get_drvdata(data->dev);
@ -1018,6 +1045,7 @@ static const struct usbmisc_ops imx7d_usbmisc_ops = {
.set_wakeup = usbmisc_imx7d_set_wakeup,
.charger_detection = imx7d_charger_detection,
.power_lost_check = usbmisc_imx7d_power_lost_check,
.vbus_comparator_on = usbmisc_imx7d_vbus_comparator_on,
};
static const struct usbmisc_ops imx7ulp_usbmisc_ops = {
@ -1132,6 +1160,9 @@ int imx_usbmisc_suspend(struct imx_usbmisc_data *data, bool wakeup)
usbmisc = dev_get_drvdata(data->dev);
if (usbmisc->ops->vbus_comparator_on)
usbmisc->ops->vbus_comparator_on(data, false);
if (wakeup && usbmisc->ops->set_wakeup)
ret = usbmisc->ops->set_wakeup(data, true);
if (ret) {
@ -1185,6 +1216,9 @@ int imx_usbmisc_resume(struct imx_usbmisc_data *data, bool wakeup)
goto hsic_set_clk_fail;
}
if (usbmisc->ops->vbus_comparator_on)
usbmisc->ops->vbus_comparator_on(data, true);
return 0;
hsic_set_clk_fail:


@ -267,7 +267,7 @@ put_role_sw:
return ret;
}
static int usb_conn_remove(struct platform_device *pdev)
static void usb_conn_remove(struct platform_device *pdev)
{
struct usb_conn_info *info = platform_get_drvdata(pdev);
@ -277,8 +277,6 @@ static int usb_conn_remove(struct platform_device *pdev)
regulator_disable(info->vbus);
usb_role_switch_put(info->role_sw);
return 0;
}
static int __maybe_unused usb_conn_suspend(struct device *dev)
@ -338,7 +336,7 @@ MODULE_DEVICE_TABLE(of, usb_conn_dt_match);
static struct platform_driver usb_conn_driver = {
.probe = usb_conn_probe,
.remove = usb_conn_remove,
.remove_new = usb_conn_remove,
.driver = {
.name = "usb-conn-gpio",
.pm = &usb_conn_pm_ops,

View File

@ -746,6 +746,7 @@ static int driver_resume(struct usb_interface *intf)
return 0;
}
#ifdef CONFIG_PM
/* The following routines apply to the entire device, not interfaces */
void usbfs_notify_suspend(struct usb_device *udev)
{
@ -764,6 +765,7 @@ void usbfs_notify_resume(struct usb_device *udev)
}
mutex_unlock(&usbfs_mutex);
}
#endif
struct usb_driver usbfs_driver = {
.name = "usbfs",
@ -2640,21 +2642,21 @@ static long usbdev_do_ioctl(struct file *file, unsigned int cmd,
snoop(&dev->dev, "%s: CONTROL\n", __func__);
ret = proc_control(ps, p);
if (ret >= 0)
inode->i_mtime = current_time(inode);
inode->i_mtime = inode->i_ctime = current_time(inode);
break;
case USBDEVFS_BULK:
snoop(&dev->dev, "%s: BULK\n", __func__);
ret = proc_bulk(ps, p);
if (ret >= 0)
inode->i_mtime = current_time(inode);
inode->i_mtime = inode->i_ctime = current_time(inode);
break;
case USBDEVFS_RESETEP:
snoop(&dev->dev, "%s: RESETEP\n", __func__);
ret = proc_resetep(ps, p);
if (ret >= 0)
inode->i_mtime = current_time(inode);
inode->i_mtime = inode->i_ctime = current_time(inode);
break;
case USBDEVFS_RESET:
@ -2666,7 +2668,7 @@ static long usbdev_do_ioctl(struct file *file, unsigned int cmd,
snoop(&dev->dev, "%s: CLEAR_HALT\n", __func__);
ret = proc_clearhalt(ps, p);
if (ret >= 0)
inode->i_mtime = current_time(inode);
inode->i_mtime = inode->i_ctime = current_time(inode);
break;
case USBDEVFS_GETDRIVER:
@ -2693,7 +2695,7 @@ static long usbdev_do_ioctl(struct file *file, unsigned int cmd,
snoop(&dev->dev, "%s: SUBMITURB\n", __func__);
ret = proc_submiturb(ps, p);
if (ret >= 0)
inode->i_mtime = current_time(inode);
inode->i_mtime = inode->i_ctime = current_time(inode);
break;
#ifdef CONFIG_COMPAT
@ -2701,14 +2703,14 @@ static long usbdev_do_ioctl(struct file *file, unsigned int cmd,
snoop(&dev->dev, "%s: CONTROL32\n", __func__);
ret = proc_control_compat(ps, p);
if (ret >= 0)
inode->i_mtime = current_time(inode);
inode->i_mtime = inode->i_ctime = current_time(inode);
break;
case USBDEVFS_BULK32:
snoop(&dev->dev, "%s: BULK32\n", __func__);
ret = proc_bulk_compat(ps, p);
if (ret >= 0)
inode->i_mtime = current_time(inode);
inode->i_mtime = inode->i_ctime = current_time(inode);
break;
case USBDEVFS_DISCSIGNAL32:
@ -2720,7 +2722,7 @@ static long usbdev_do_ioctl(struct file *file, unsigned int cmd,
snoop(&dev->dev, "%s: SUBMITURB32\n", __func__);
ret = proc_submiturb_compat(ps, p);
if (ret >= 0)
inode->i_mtime = current_time(inode);
inode->i_mtime = inode->i_ctime = current_time(inode);
break;
case USBDEVFS_IOCTL32:


@ -415,12 +415,15 @@ static int check_root_hub_suspended(struct device *dev)
return 0;
}
static int suspend_common(struct device *dev, bool do_wakeup)
static int suspend_common(struct device *dev, pm_message_t msg)
{
struct pci_dev *pci_dev = to_pci_dev(dev);
struct usb_hcd *hcd = pci_get_drvdata(pci_dev);
bool do_wakeup;
int retval;
do_wakeup = PMSG_IS_AUTO(msg) ? true : device_may_wakeup(dev);
/* Root hub suspend should have stopped all downstream traffic,
* and all bus master traffic. And done so for both the interface
* and the stub usb_device (which we check here). But maybe it
@ -447,7 +450,7 @@ static int suspend_common(struct device *dev, bool do_wakeup)
(retval == 0 && do_wakeup && hcd->shared_hcd &&
HCD_WAKEUP_PENDING(hcd->shared_hcd))) {
if (hcd->driver->pci_resume)
hcd->driver->pci_resume(hcd, false);
hcd->driver->pci_resume(hcd, msg);
retval = -EBUSY;
}
if (retval)
@ -470,7 +473,7 @@ static int suspend_common(struct device *dev, bool do_wakeup)
return retval;
}
static int resume_common(struct device *dev, int event)
static int resume_common(struct device *dev, pm_message_t msg)
{
struct pci_dev *pci_dev = to_pci_dev(dev);
struct usb_hcd *hcd = pci_get_drvdata(pci_dev);
@ -498,12 +501,11 @@ static int resume_common(struct device *dev, int event)
* No locking is needed because PCI controller drivers do not
* get unbound during system resume.
*/
if (pci_dev->class == CL_EHCI && event != PM_EVENT_AUTO_RESUME)
if (pci_dev->class == CL_EHCI && msg.event != PM_EVENT_AUTO_RESUME)
for_each_companion(pci_dev, hcd,
ehci_wait_for_companions);
retval = hcd->driver->pci_resume(hcd,
event == PM_EVENT_RESTORE);
retval = hcd->driver->pci_resume(hcd, msg);
if (retval) {
dev_err(dev, "PCI post-resume error %d!\n", retval);
usb_hc_died(hcd);
@ -516,7 +518,7 @@ static int resume_common(struct device *dev, int event)
static int hcd_pci_suspend(struct device *dev)
{
return suspend_common(dev, device_may_wakeup(dev));
return suspend_common(dev, PMSG_SUSPEND);
}
static int hcd_pci_suspend_noirq(struct device *dev)
@ -577,12 +579,12 @@ static int hcd_pci_resume_noirq(struct device *dev)
static int hcd_pci_resume(struct device *dev)
{
return resume_common(dev, PM_EVENT_RESUME);
return resume_common(dev, PMSG_RESUME);
}
static int hcd_pci_restore(struct device *dev)
{
return resume_common(dev, PM_EVENT_RESTORE);
return resume_common(dev, PMSG_RESTORE);
}
#else
@ -600,7 +602,7 @@ static int hcd_pci_runtime_suspend(struct device *dev)
{
int retval;
retval = suspend_common(dev, true);
retval = suspend_common(dev, PMSG_AUTO_SUSPEND);
if (retval == 0)
powermac_set_asic(to_pci_dev(dev), 0);
dev_dbg(dev, "hcd_pci_runtime_suspend: %d\n", retval);
@ -612,7 +614,7 @@ static int hcd_pci_runtime_resume(struct device *dev)
int retval;
powermac_set_asic(to_pci_dev(dev), 1);
retval = resume_common(dev, PM_EVENT_AUTO_RESUME);
retval = resume_common(dev, PMSG_AUTO_RESUME);
dev_dbg(dev, "hcd_pci_runtime_resume: %d\n", retval);
return retval;
}
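suspend_common() and resume_common() now carry the full pm_message_t down to the host controller driver's pci_suspend()/pci_resume() methods instead of collapsing it to a bool, so an HC driver can distinguish runtime PM, ordinary system sleep, and restore from hibernation. A sketch of what a driver can do with that, assuming a hypothetical my_hc_reinit() helper:

#include <linux/usb/hcd.h>

int my_hc_reinit(struct usb_hcd *hcd, bool power_lost);  /* hypothetical */

static int my_pci_resume(struct usb_hcd *hcd, pm_message_t msg)
{
        /*
         * PM_EVENT_RESTORE means we are coming back from hibernation,
         * where the boot kernel may have driven the controller, so a
         * full reinitialization is the safe choice.
         */
        bool power_lost = msg.event == PM_EVENT_RESTORE;

        return my_hc_reinit(hcd, power_lost);
}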


@ -2018,6 +2018,19 @@ bool usb_device_is_owned(struct usb_device *udev)
return !!hub->ports[udev->portnum - 1]->port_owner;
}
static void update_port_device_state(struct usb_device *udev)
{
struct usb_hub *hub;
struct usb_port *port_dev;
if (udev->parent) {
hub = usb_hub_to_struct_hub(udev->parent);
port_dev = hub->ports[udev->portnum - 1];
WRITE_ONCE(port_dev->state, udev->state);
sysfs_notify_dirent(port_dev->state_kn);
}
}
static void recursively_mark_NOTATTACHED(struct usb_device *udev)
{
struct usb_hub *hub = usb_hub_to_struct_hub(udev);
@ -2030,6 +2043,7 @@ static void recursively_mark_NOTATTACHED(struct usb_device *udev)
if (udev->state == USB_STATE_SUSPENDED)
udev->active_duration -= jiffies;
udev->state = USB_STATE_NOTATTACHED;
update_port_device_state(udev);
}
/**
@ -2086,6 +2100,7 @@ void usb_set_device_state(struct usb_device *udev,
udev->state != USB_STATE_SUSPENDED)
udev->active_duration += jiffies;
udev->state = new_state;
update_port_device_state(udev);
} else
recursively_mark_NOTATTACHED(udev);
spin_unlock_irqrestore(&device_state_lock, flags);


@ -84,6 +84,8 @@ struct usb_hub {
* @peer: related usb2 and usb3 ports (share the same connector)
* @req: default pm qos request for hubs without port power control
* @connect_type: port's connect type
* @state: device state of the usb device attached to the port
* @state_kn: kernfs_node of the sysfs attribute that accesses @state
* @location: opaque representation of platform connector location
* @status_lock: synchronize port_event() vs usb_port_{suspend|resume}
* @portnum: port index number (1-based)
@ -100,6 +102,8 @@ struct usb_port {
struct usb_port *peer;
struct dev_pm_qos_request *req;
enum usb_port_connect_type connect_type;
enum usb_device_state state;
struct kernfs_node *state_kn;
usb_port_location_t location;
struct mutex status_lock;
u32 over_current_count;


@ -160,6 +160,16 @@ static ssize_t connect_type_show(struct device *dev,
}
static DEVICE_ATTR_RO(connect_type);
static ssize_t state_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct usb_port *port_dev = to_usb_port(dev);
enum usb_device_state state = READ_ONCE(port_dev->state);
return sysfs_emit(buf, "%s\n", usb_state_string(state));
}
static DEVICE_ATTR_RO(state);
static ssize_t over_current_count_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@ -259,6 +269,7 @@ static DEVICE_ATTR_RW(usb3_lpm_permit);
static struct attribute *port_dev_attrs[] = {
&dev_attr_connect_type.attr,
&dev_attr_state.attr,
&dev_attr_location.attr,
&dev_attr_quirks.attr,
&dev_attr_over_current_count.attr,
@ -705,19 +716,24 @@ int usb_hub_create_port_device(struct usb_hub *hub, int port1)
return retval;
}
port_dev->state_kn = sysfs_get_dirent(port_dev->dev.kobj.sd, "state");
if (!port_dev->state_kn) {
dev_err(&port_dev->dev, "failed to sysfs_get_dirent 'state'\n");
retval = -ENODEV;
goto err_unregister;
}
/* Set default policy of port-poweroff disabled. */
retval = dev_pm_qos_add_request(&port_dev->dev, port_dev->req,
DEV_PM_QOS_FLAGS, PM_QOS_FLAG_NO_POWER_OFF);
if (retval < 0) {
device_unregister(&port_dev->dev);
return retval;
goto err_put_kn;
}
retval = component_add(&port_dev->dev, &connector_ops);
if (retval) {
dev_warn(&port_dev->dev, "failed to add component\n");
device_unregister(&port_dev->dev);
return retval;
goto err_put_kn;
}
find_and_link_peer(hub, port1);
@ -754,6 +770,13 @@ int usb_hub_create_port_device(struct usb_hub *hub, int port1)
port_dev->req = NULL;
}
return 0;
err_put_kn:
sysfs_put(port_dev->state_kn);
err_unregister:
device_unregister(&port_dev->dev);
return retval;
}
void usb_hub_remove_port_device(struct usb_hub *hub, int port1)
@ -765,5 +788,6 @@ void usb_hub_remove_port_device(struct usb_hub *hub, int port1)
if (peer)
unlink_peers(port_dev, peer);
component_del(&port_dev->dev, &connector_ops);
sysfs_put(port_dev->state_kn);
device_unregister(&port_dev->dev);
}
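Together with the hub.c hunk above, this exposes the attached device's state as a plain-text, pollable sysfs attribute: update_port_device_state() writes the new state and kicks sysfs_notify_dirent(), so waiters polling the file wake up on every change. A small userspace sketch that reads it once; the device path below is made up for illustration:

#include <stdio.h>

int main(void)
{
        const char *path = "/sys/bus/usb/devices/usb1/1-0:1.0/port1/state";
        char buf[32];
        FILE *f = fopen(path, "r");

        if (!f)
                return 1;
        if (fgets(buf, sizeof(buf), f))
                printf("port state: %s", buf);
        fclose(f);
        return 0;
}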


@ -161,6 +161,25 @@ static void dwc2_set_amlogic_g12a_params(struct dwc2_hsotg *hsotg)
p->hird_threshold_en = false;
}
static void dwc2_set_amlogic_a1_params(struct dwc2_hsotg *hsotg)
{
struct dwc2_core_params *p = &hsotg->params;
p->otg_caps.hnp_support = false;
p->otg_caps.srp_support = false;
p->speed = DWC2_SPEED_PARAM_HIGH;
p->host_rx_fifo_size = 192;
p->host_nperio_tx_fifo_size = 128;
p->host_perio_tx_fifo_size = 128;
p->phy_type = DWC2_PHY_TYPE_PARAM_UTMI;
p->phy_utmi_width = 8;
p->ahbcfg = GAHBCFG_HBSTLEN_INCR8 << GAHBCFG_HBSTLEN_SHIFT;
p->lpm = false;
p->lpm_clock_gating = false;
p->besl = false;
p->hird_threshold_en = false;
}
static void dwc2_set_amcc_params(struct dwc2_hsotg *hsotg)
{
struct dwc2_core_params *p = &hsotg->params;
@ -258,6 +277,8 @@ const struct of_device_id dwc2_of_match_table[] = {
.data = dwc2_set_amlogic_params },
{ .compatible = "amlogic,meson-g12a-usb",
.data = dwc2_set_amlogic_g12a_params },
{ .compatible = "amlogic,meson-a1-usb",
.data = dwc2_set_amlogic_a1_params },
{ .compatible = "amcc,dwc-otg", .data = dwc2_set_amcc_params },
{ .compatible = "apm,apm82181-dwc-otg", .data = dwc2_set_amcc_params },
{ .compatible = "st,stm32f4x9-fsotg",


@ -203,6 +203,11 @@ int dwc2_lowlevel_hw_disable(struct dwc2_hsotg *hsotg)
return ret;
}
static void dwc2_reset_control_assert(void *data)
{
reset_control_assert(data);
}
static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
{
int i, ret;
@ -213,6 +218,10 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
"error getting reset control\n");
reset_control_deassert(hsotg->reset);
ret = devm_add_action_or_reset(hsotg->dev, dwc2_reset_control_assert,
hsotg->reset);
if (ret)
return ret;
hsotg->reset_ecc = devm_reset_control_get_optional(hsotg->dev, "dwc2-ecc");
if (IS_ERR(hsotg->reset_ecc))
@ -220,6 +229,10 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
"error getting reset control for ecc\n");
reset_control_deassert(hsotg->reset_ecc);
ret = devm_add_action_or_reset(hsotg->dev, dwc2_reset_control_assert,
hsotg->reset_ecc);
if (ret)
return ret;
/*
* Attempt to find a generic PHY, then look for an old style
@ -288,7 +301,7 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
* stops device processing. Any resources used on behalf of this device are
* freed.
*/
static int dwc2_driver_remove(struct platform_device *dev)
static void dwc2_driver_remove(struct platform_device *dev)
{
struct dwc2_hsotg *hsotg = platform_get_drvdata(dev);
struct dwc2_gregs_backup *gr;
@ -338,11 +351,6 @@ static int dwc2_driver_remove(struct platform_device *dev)
if (hsotg->ll_hw_enabled)
dwc2_lowlevel_hw_disable(hsotg);
reset_control_assert(hsotg->reset);
reset_control_assert(hsotg->reset_ecc);
return 0;
}
/**
@ -746,7 +754,7 @@ static struct platform_driver dwc2_platform_driver = {
.pm = &dwc2_dev_pm_ops,
},
.probe = dwc2_driver_probe,
.remove = dwc2_driver_remove,
.remove_new = dwc2_driver_remove,
.shutdown = dwc2_driver_shutdown,
};
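dwc2 now hands the reset re-assertion to devres with devm_add_action_or_reset(), so it runs both when a later probe step fails and at unbind, and the explicit reset_control_assert() calls disappear from the remove path. The general shape of the pattern, sketched with hypothetical my_hw_*() helpers standing in for the reset_control_*() calls:

#include <linux/device.h>

struct my_hw;
int my_hw_enable(struct my_hw *hw);     /* hypothetical */
void my_hw_disable(struct my_hw *hw);   /* hypothetical */

static void my_hw_undo(void *data)
{
        my_hw_disable(data);
}

static int my_hw_bringup(struct device *dev, struct my_hw *hw)
{
        int ret;

        ret = my_hw_enable(hw);
        if (ret)
                return ret;

        /*
         * Registers my_hw_undo(hw) to run if any later probe step
         * fails and again at unbind, so remove() needs no explicit
         * cleanup for this resource.
         */
        return devm_add_action_or_reset(dev, my_hw_undo, hw);
}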


@ -1800,6 +1800,17 @@ static int dwc3_probe(struct platform_device *pdev)
dwc_res = *res;
dwc_res.start += DWC3_GLOBALS_REGS_START;
if (dev->of_node) {
struct device_node *parent = of_get_parent(dev->of_node);
if (of_device_is_compatible(parent, "realtek,rtd-dwc3")) {
dwc_res.start -= DWC3_GLOBALS_REGS_START;
dwc_res.start += DWC3_RTK_RTD_GLOBALS_REGS_START;
}
of_node_put(parent);
}
regs = devm_ioremap_resource(dev, &dwc_res);
if (IS_ERR(regs))
return PTR_ERR(regs);
@ -1913,7 +1924,7 @@ err_put_psy:
return ret;
}
static int dwc3_remove(struct platform_device *pdev)
static void dwc3_remove(struct platform_device *pdev)
{
struct dwc3 *dwc = platform_get_drvdata(pdev);
@ -1940,8 +1951,6 @@ static int dwc3_remove(struct platform_device *pdev)
if (dwc->usb_psy)
power_supply_put(dwc->usb_psy);
return 0;
}
#ifdef CONFIG_PM
@ -2252,7 +2261,7 @@ MODULE_DEVICE_TABLE(acpi, dwc3_acpi_match);
static struct platform_driver dwc3_driver = {
.probe = dwc3_probe,
.remove = dwc3_remove,
.remove_new = dwc3_remove,
.driver = {
.name = "dwc3",
.of_match_table = of_match_ptr(of_dwc3_match),


@ -84,6 +84,8 @@
#define DWC3_OTG_REGS_START 0xcc00
#define DWC3_OTG_REGS_END 0xccff
#define DWC3_RTK_RTD_GLOBALS_REGS_START 0x8100
/* Global Registers */
#define DWC3_GSBUSCFG0 0xc100
#define DWC3_GSBUSCFG1 0xc104


@ -275,7 +275,7 @@ static int dwc3_ti_remove_core(struct device *dev, void *c)
return 0;
}
static int dwc3_ti_remove(struct platform_device *pdev)
static void dwc3_ti_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dwc3_data *data = platform_get_drvdata(pdev);
@ -294,7 +294,6 @@ static int dwc3_ti_remove(struct platform_device *pdev)
pm_runtime_set_suspended(dev);
platform_set_drvdata(pdev, NULL);
return 0;
}
#ifdef CONFIG_PM
@ -362,7 +361,7 @@ MODULE_DEVICE_TABLE(of, dwc3_ti_of_match);
static struct platform_driver dwc3_ti_driver = {
.probe = dwc3_ti_probe,
.remove = dwc3_ti_remove,
.remove_new = dwc3_ti_remove,
.driver = {
.name = "dwc3-am62",
.pm = DEV_PM_OPS,


@ -128,7 +128,7 @@ vdd33_err:
return ret;
}
static int dwc3_exynos_remove(struct platform_device *pdev)
static void dwc3_exynos_remove(struct platform_device *pdev)
{
struct dwc3_exynos *exynos = platform_get_drvdata(pdev);
int i;
@ -143,8 +143,6 @@ static int dwc3_exynos_remove(struct platform_device *pdev)
regulator_disable(exynos->vdd33);
regulator_disable(exynos->vdd10);
return 0;
}
static const struct dwc3_exynos_driverdata exynos5250_drvdata = {
@ -234,7 +232,7 @@ static const struct dev_pm_ops dwc3_exynos_dev_pm_ops = {
static struct platform_driver dwc3_exynos_driver = {
.probe = dwc3_exynos_probe,
.remove = dwc3_exynos_remove,
.remove_new = dwc3_exynos_remove,
.driver = {
.name = "exynos-dwc3",
.of_match_table = exynos_dwc3_match,


@ -266,7 +266,7 @@ disable_hsio_clk:
return err;
}
static int dwc3_imx8mp_remove(struct platform_device *pdev)
static void dwc3_imx8mp_remove(struct platform_device *pdev)
{
struct dwc3_imx8mp *dwc3_imx = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
@ -280,8 +280,6 @@ static int dwc3_imx8mp_remove(struct platform_device *pdev)
pm_runtime_disable(dev);
pm_runtime_put_noidle(dev);
platform_set_drvdata(pdev, NULL);
return 0;
}
static int __maybe_unused dwc3_imx8mp_suspend(struct dwc3_imx8mp *dwc3_imx,
@ -411,7 +409,7 @@ MODULE_DEVICE_TABLE(of, dwc3_imx8mp_of_match);
static struct platform_driver dwc3_imx8mp_driver = {
.probe = dwc3_imx8mp_probe,
.remove = dwc3_imx8mp_remove,
.remove_new = dwc3_imx8mp_remove,
.driver = {
.name = "imx8mp-dwc3",
.pm = &dwc3_imx8mp_dev_pm_ops,


@ -181,7 +181,7 @@ static int kdwc3_remove_core(struct device *dev, void *c)
return 0;
}
static int kdwc3_remove(struct platform_device *pdev)
static void kdwc3_remove(struct platform_device *pdev)
{
struct dwc3_keystone *kdwc = platform_get_drvdata(pdev);
struct device_node *node = pdev->dev.of_node;
@ -198,8 +198,6 @@ static int kdwc3_remove(struct platform_device *pdev)
phy_pm_runtime_put_sync(kdwc->usb3_phy);
platform_set_drvdata(pdev, NULL);
return 0;
}
static const struct of_device_id kdwc3_of_match[] = {
@ -211,7 +209,7 @@ MODULE_DEVICE_TABLE(of, kdwc3_of_match);
static struct platform_driver kdwc3_driver = {
.probe = kdwc3_probe,
.remove = kdwc3_remove,
.remove_new = kdwc3_remove,
.driver = {
.name = "keystone-dwc3",
.of_match_table = kdwc3_of_match,


@ -140,7 +140,6 @@ static const char * const meson_a1_phy_names[] = {
struct dwc3_meson_g12a;
struct dwc3_meson_g12a_drvdata {
bool otg_switch_supported;
bool otg_phy_host_port_disable;
struct clk_bulk_data *clks;
int num_clks;
@ -189,7 +188,6 @@ static int dwc3_meson_gxl_usb_post_init(struct dwc3_meson_g12a *priv);
*/
static const struct dwc3_meson_g12a_drvdata gxl_drvdata = {
.otg_switch_supported = true,
.otg_phy_host_port_disable = true,
.clks = meson_gxl_clocks,
.num_clks = ARRAY_SIZE(meson_g12a_clocks),
@ -203,7 +201,6 @@ static const struct dwc3_meson_g12a_drvdata gxl_drvdata = {
};
static const struct dwc3_meson_g12a_drvdata gxm_drvdata = {
.otg_switch_supported = true,
.otg_phy_host_port_disable = true,
.clks = meson_gxl_clocks,
.num_clks = ARRAY_SIZE(meson_g12a_clocks),
@ -217,7 +214,6 @@ static const struct dwc3_meson_g12a_drvdata gxm_drvdata = {
};
static const struct dwc3_meson_g12a_drvdata axg_drvdata = {
.otg_switch_supported = true,
.clks = meson_gxl_clocks,
.num_clks = ARRAY_SIZE(meson_gxl_clocks),
.phy_names = meson_a1_phy_names,
@ -230,7 +226,6 @@ static const struct dwc3_meson_g12a_drvdata axg_drvdata = {
};
static const struct dwc3_meson_g12a_drvdata g12a_drvdata = {
.otg_switch_supported = true,
.clks = meson_g12a_clocks,
.num_clks = ARRAY_SIZE(meson_g12a_clocks),
.phy_names = meson_g12a_phy_names,
@ -242,7 +237,6 @@ static const struct dwc3_meson_g12a_drvdata g12a_drvdata = {
};
static const struct dwc3_meson_g12a_drvdata a1_drvdata = {
.otg_switch_supported = false,
.clks = meson_a1_clocks,
.num_clks = ARRAY_SIZE(meson_a1_clocks),
.phy_names = meson_a1_phy_names,
@ -307,7 +301,7 @@ static int dwc3_meson_g12a_usb2_init_phy(struct dwc3_meson_g12a *priv, int i,
U2P_R0_POWER_ON_RESET,
U2P_R0_POWER_ON_RESET);
if (priv->drvdata->otg_switch_supported && i == USB2_OTG_PHY) {
if (i == USB2_OTG_PHY) {
regmap_update_bits(priv->u2p_regmap[i], U2P_R0,
U2P_R0_ID_PULLUP | U2P_R0_DRV_VBUS,
U2P_R0_ID_PULLUP | U2P_R0_DRV_VBUS);
@ -490,7 +484,7 @@ static int dwc3_meson_g12a_otg_mode_set(struct dwc3_meson_g12a *priv,
{
int ret;
if (!priv->drvdata->otg_switch_supported || !priv->phys[USB2_OTG_PHY])
if (!priv->phys[USB2_OTG_PHY])
return -EINVAL;
if (mode == PHY_MODE_USB_HOST)
@ -589,9 +583,6 @@ static int dwc3_meson_g12a_otg_init(struct platform_device *pdev,
int ret, irq;
struct device *dev = &pdev->dev;
if (!priv->drvdata->otg_switch_supported)
return 0;
if (priv->otg_mode == USB_DR_MODE_OTG) {
/* Ack irq before registering */
regmap_update_bits(priv->usb_glue_regmap, USB_R5,
@ -805,7 +796,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
ret = dwc3_meson_g12a_otg_init(pdev, priv);
if (ret)
goto err_phys_power;
goto err_plat_depopulate;
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
@ -813,6 +804,9 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
return 0;
err_plat_depopulate:
of_platform_depopulate(dev);
err_phys_power:
for (i = 0 ; i < PHY_COUNT ; ++i)
phy_power_off(priv->phys[i]);
@ -835,14 +829,13 @@ err_disable_clks:
return ret;
}
static int dwc3_meson_g12a_remove(struct platform_device *pdev)
static void dwc3_meson_g12a_remove(struct platform_device *pdev)
{
struct dwc3_meson_g12a *priv = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
int i;
if (priv->drvdata->otg_switch_supported)
usb_role_switch_unregister(priv->role_switch);
usb_role_switch_unregister(priv->role_switch);
of_platform_depopulate(dev);
@ -859,8 +852,6 @@ static int dwc3_meson_g12a_remove(struct platform_device *pdev)
clk_bulk_disable_unprepare(priv->drvdata->num_clks,
priv->drvdata->clks);
return 0;
}
static int __maybe_unused dwc3_meson_g12a_runtime_suspend(struct device *dev)
@ -971,7 +962,7 @@ MODULE_DEVICE_TABLE(of, dwc3_meson_g12a_match);
static struct platform_driver dwc3_meson_g12a_driver = {
.probe = dwc3_meson_g12a_probe,
.remove = dwc3_meson_g12a_remove,
.remove_new = dwc3_meson_g12a_remove,
.driver = {
.name = "dwc3-meson-g12a",
.of_match_table = dwc3_meson_g12a_match,


@ -112,13 +112,11 @@ static void __dwc3_of_simple_teardown(struct dwc3_of_simple *simple)
pm_runtime_set_suspended(simple->dev);
}
static int dwc3_of_simple_remove(struct platform_device *pdev)
static void dwc3_of_simple_remove(struct platform_device *pdev)
{
struct dwc3_of_simple *simple = platform_get_drvdata(pdev);
__dwc3_of_simple_teardown(simple);
return 0;
}
static void dwc3_of_simple_shutdown(struct platform_device *pdev)
@ -183,7 +181,7 @@ MODULE_DEVICE_TABLE(of, of_dwc3_simple_match);
static struct platform_driver dwc3_of_simple_driver = {
.probe = dwc3_of_simple_probe,
.remove = dwc3_of_simple_remove,
.remove_new = dwc3_of_simple_remove,
.shutdown = dwc3_of_simple_shutdown,
.driver = {
.name = "dwc3-of-simple",


@ -534,7 +534,7 @@ err1:
return ret;
}
static int dwc3_omap_remove(struct platform_device *pdev)
static void dwc3_omap_remove(struct platform_device *pdev)
{
struct dwc3_omap *omap = platform_get_drvdata(pdev);
@ -543,8 +543,6 @@ static int dwc3_omap_remove(struct platform_device *pdev)
of_platform_depopulate(omap->dev);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
return 0;
}
static const struct of_device_id of_dwc3_match[] = {
@ -611,7 +609,7 @@ static const struct dev_pm_ops dwc3_omap_dev_pm_ops = {
static struct platform_driver dwc3_omap_driver = {
.probe = dwc3_omap_probe,
.remove = dwc3_omap_remove,
.remove_new = dwc3_omap_remove,
.driver = {
.name = "omap-dwc3",
.of_match_table = of_dwc3_match,


@ -167,7 +167,8 @@ static int dwc3_qcom_register_extcon(struct dwc3_qcom *qcom)
qcom->edev = extcon_get_edev_by_phandle(dev, 0);
if (IS_ERR(qcom->edev))
return PTR_ERR(qcom->edev);
return dev_err_probe(dev, PTR_ERR(qcom->edev),
"Failed to get extcon\n");
qcom->vbus_nb.notifier_call = dwc3_qcom_vbus_notifier;
@ -252,16 +253,14 @@ static int dwc3_qcom_interconnect_init(struct dwc3_qcom *qcom)
qcom->icc_path_ddr = of_icc_get(dev, "usb-ddr");
if (IS_ERR(qcom->icc_path_ddr)) {
dev_err(dev, "failed to get usb-ddr path: %ld\n",
PTR_ERR(qcom->icc_path_ddr));
return PTR_ERR(qcom->icc_path_ddr);
return dev_err_probe(dev, PTR_ERR(qcom->icc_path_ddr),
"failed to get usb-ddr path\n");
}
qcom->icc_path_apps = of_icc_get(dev, "apps-usb");
if (IS_ERR(qcom->icc_path_apps)) {
dev_err(dev, "failed to get apps-usb path: %ld\n",
PTR_ERR(qcom->icc_path_apps));
ret = PTR_ERR(qcom->icc_path_apps);
ret = dev_err_probe(dev, PTR_ERR(qcom->icc_path_apps),
"failed to get apps-usb path\n");
goto put_path_ddr;
}
@ -800,6 +799,7 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct dwc3_qcom *qcom;
struct resource *res, *parent_res = NULL;
struct resource local_res;
int ret, i;
bool ignore_pipe_clk;
bool wakeup_source;
@ -821,9 +821,8 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
qcom->resets = devm_reset_control_array_get_optional_exclusive(dev);
if (IS_ERR(qcom->resets)) {
ret = PTR_ERR(qcom->resets);
dev_err(&pdev->dev, "failed to get resets, err=%d\n", ret);
return ret;
return dev_err_probe(&pdev->dev, PTR_ERR(qcom->resets),
"failed to get resets\n");
}
ret = reset_control_assert(qcom->resets);
@ -842,7 +841,7 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
ret = dwc3_qcom_clk_init(qcom, of_clk_get_parent_count(np));
if (ret) {
dev_err(dev, "failed to get clocks\n");
dev_err_probe(dev, ret, "failed to get clocks\n");
goto reset_assert;
}
@ -851,9 +850,8 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
if (np) {
parent_res = res;
} else {
parent_res = kmemdup(res, sizeof(struct resource), GFP_KERNEL);
if (!parent_res)
return -ENOMEM;
memcpy(&local_res, res, sizeof(struct resource));
parent_res = &local_res;
parent_res->start = res->start +
qcom->acpi_pdata->qscratch_base_offset;
@ -865,9 +863,10 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
if (IS_ERR_OR_NULL(qcom->urs_usb)) {
dev_err(dev, "failed to create URS USB platdev\n");
if (!qcom->urs_usb)
return -ENODEV;
ret = -ENODEV;
else
return PTR_ERR(qcom->urs_usb);
ret = PTR_ERR(qcom->urs_usb);
goto clk_disable;
}
}
}
@ -947,14 +946,18 @@ reset_assert:
return ret;
}
static int dwc3_qcom_remove(struct platform_device *pdev)
static void dwc3_qcom_remove(struct platform_device *pdev)
{
struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
struct device_node *np = pdev->dev.of_node;
struct device *dev = &pdev->dev;
int i;
device_remove_software_node(&qcom->dwc3->dev);
of_platform_depopulate(dev);
if (np)
of_platform_depopulate(&pdev->dev);
else
platform_device_put(pdev);
for (i = qcom->num_clocks - 1; i >= 0; i--) {
clk_disable_unprepare(qcom->clks[i]);
@ -967,8 +970,6 @@ static int dwc3_qcom_remove(struct platform_device *pdev)
pm_runtime_allow(dev);
pm_runtime_disable(dev);
return 0;
}
static int __maybe_unused dwc3_qcom_pm_suspend(struct device *dev)
@ -1061,7 +1062,7 @@ MODULE_DEVICE_TABLE(acpi, dwc3_qcom_acpi_match);
static struct platform_driver dwc3_qcom_driver = {
.probe = dwc3_qcom_probe,
.remove = dwc3_qcom_remove,
.remove_new = dwc3_qcom_remove,
.driver = {
.name = "dwc3-qcom",
.pm = &dwc3_qcom_dev_pm_ops,
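Several dwc3-qcom error paths above are collapsed into dev_err_probe(), which logs the failure (staying quiet for -EPROBE_DEFER, while recording the reason for the devices_deferred debugfs file) and returns the error in a single expression. The idiom in isolation, with an illustrative clock lookup:

#include <linux/clk.h>
#include <linux/device.h>

static int my_get_clock(struct device *dev, struct clk **clk)
{
        *clk = devm_clk_get(dev, NULL);
        if (IS_ERR(*clk))
                return dev_err_probe(dev, PTR_ERR(*clk),
                                     "failed to get clock\n");
        return 0;
}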


@ -305,7 +305,7 @@ undo_platform_dev_alloc:
return ret;
}
static int st_dwc3_remove(struct platform_device *pdev)
static void st_dwc3_remove(struct platform_device *pdev)
{
struct st_dwc3 *dwc3_data = platform_get_drvdata(pdev);
@ -313,8 +313,6 @@ static int st_dwc3_remove(struct platform_device *pdev)
reset_control_assert(dwc3_data->rstc_pwrdn);
reset_control_assert(dwc3_data->rstc_rst);
return 0;
}
#ifdef CONFIG_PM_SLEEP
@ -364,7 +362,7 @@ MODULE_DEVICE_TABLE(of, st_dwc3_match);
static struct platform_driver st_dwc3_driver = {
.probe = st_dwc3_probe,
.remove = st_dwc3_remove,
.remove_new = st_dwc3_remove,
.driver = {
.name = "usb-st-dwc3",
.of_match_table = st_dwc3_match,


@ -305,7 +305,7 @@ err_clk_put:
return ret;
}
static int dwc3_xlnx_remove(struct platform_device *pdev)
static void dwc3_xlnx_remove(struct platform_device *pdev)
{
struct dwc3_xlnx *priv_data = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
@ -318,8 +318,6 @@ static int dwc3_xlnx_remove(struct platform_device *pdev)
pm_runtime_disable(dev);
pm_runtime_put_noidle(dev);
pm_runtime_set_suspended(dev);
return 0;
}
static int __maybe_unused dwc3_xlnx_runtime_suspend(struct device *dev)
@ -388,7 +386,7 @@ static const struct dev_pm_ops dwc3_xlnx_dev_pm_ops = {
static struct platform_driver dwc3_xlnx_driver = {
.probe = dwc3_xlnx_probe,
.remove = dwc3_xlnx_remove,
.remove_new = dwc3_xlnx_remove,
.driver = {
.name = "dwc3-xilinx",
.of_match_table = dwc3_xlnx_of_match,


@ -1207,5 +1207,8 @@ void dwc3_ep0_interrupt(struct dwc3 *dwc,
dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
}
break;
default:
dev_err(dwc->dev, "unknown endpoint event %d\n", event->endpoint_event);
break;
}
}


@ -2703,13 +2703,17 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
static int dwc3_gadget_soft_connect(struct dwc3 *dwc)
{
int ret;
/*
* Section 4.1.9 of the Synopsys DWC_usb31 1.90a programming
* guide specifies that a reconnect after a device-initiated
* disconnect requires a core soft reset (DCTL.CSftRst) before
* re-enabling the run/stop bit.
*/
dwc3_core_soft_reset(dwc);
ret = dwc3_core_soft_reset(dwc);
if (ret)
return ret;
dwc3_event_buffers_setup(dwc);
__dwc3_gadget_start(dwc);
@ -2744,7 +2748,9 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
ret = pm_runtime_get_sync(dwc->dev);
if (!ret || ret < 0) {
pm_runtime_put(dwc->dev);
return 0;
if (ret < 0)
pm_runtime_set_suspended(dwc->dev);
return ret;
}
if (dwc->pullups_connected == is_on) {
@ -3809,6 +3815,9 @@ static void dwc3_endpoint_interrupt(struct dwc3 *dwc,
break;
case DWC3_DEPEVT_RXTXFIFOEVT:
break;
default:
dev_err(dwc->dev, "unknown endpoint event %d\n", event->endpoint_event);
break;
}
}


@ -165,7 +165,7 @@ static int fotg210_probe(struct platform_device *pdev)
return ret;
}
static int fotg210_remove(struct platform_device *pdev)
static void fotg210_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
enum usb_dr_mode mode;
@ -176,8 +176,6 @@ static int fotg210_remove(struct platform_device *pdev)
fotg210_udc_remove(pdev);
else
fotg210_hcd_remove(pdev);
return 0;
}
#ifdef CONFIG_OF
@ -196,7 +194,7 @@ static struct platform_driver fotg210_driver = {
.of_match_table = of_match_ptr(fotg210_of_match),
},
.probe = fotg210_probe,
.remove = fotg210_remove,
.remove_new = fotg210_remove,
};
static int __init fotg210_init(void)


@ -23,7 +23,11 @@
#define HIDG_MINORS 4
static int major, minors;
static struct class *hidg_class;
static const struct class hidg_class = {
.name = "hidg",
};
static DEFINE_IDA(hidg_ida);
static DEFINE_MUTEX(hidg_ida_lock); /* protects access to hidg_ida */
@ -1272,7 +1276,7 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
device_initialize(&hidg->dev);
hidg->dev.release = hidg_release;
hidg->dev.class = hidg_class;
hidg->dev.class = &hidg_class;
hidg->dev.devt = MKDEV(major, opts->minor);
ret = dev_set_name(&hidg->dev, "hidg%d", opts->minor);
if (ret)
@ -1325,17 +1329,13 @@ int ghid_setup(struct usb_gadget *g, int count)
int status;
dev_t dev;
hidg_class = class_create("hidg");
if (IS_ERR(hidg_class)) {
status = PTR_ERR(hidg_class);
hidg_class = NULL;
status = class_register(&hidg_class);
if (status)
return status;
}
status = alloc_chrdev_region(&dev, 0, count, "hidg");
if (status) {
class_destroy(hidg_class);
hidg_class = NULL;
class_unregister(&hidg_class);
return status;
}
@ -1352,6 +1352,5 @@ void ghid_cleanup(void)
major = minors = 0;
}
class_destroy(hidg_class);
hidg_class = NULL;
class_unregister(&hidg_class);
}
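f_hid here, and f_printer below, stop allocating their struct class at runtime: the class becomes a static const object, and class_create()/class_destroy() give way to class_register()/class_unregister(). A minimal sketch of the new idiom (the "my_driver" class name is made up):

#include <linux/device/class.h>
#include <linux/module.h>

static const struct class my_class = {
        .name = "my_driver",
};

static int __init my_init(void)
{
        /*
         * The class core works on this static const object directly;
         * there is no runtime allocation left to unwind on errors.
         */
        return class_register(&my_class);
}

static void __exit my_exit(void)
{
        class_unregister(&my_class);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");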


@ -2857,7 +2857,7 @@ int fsg_common_create_lun(struct fsg_common *common, struct fsg_lun_config *cfg,
const char **name_pfx)
{
struct fsg_lun *lun;
char *pathbuf, *p;
char *pathbuf = NULL, *p = "(no medium)";
int rc = -ENOMEM;
if (id >= ARRAY_SIZE(common->luns))
@ -2907,12 +2907,9 @@ int fsg_common_create_lun(struct fsg_common *common, struct fsg_lun_config *cfg,
rc = fsg_lun_open(lun, cfg->filename);
if (rc)
goto error_lun;
}
pathbuf = kmalloc(PATH_MAX, GFP_KERNEL);
p = "(no medium)";
if (fsg_lun_is_open(lun)) {
p = "(error)";
pathbuf = kmalloc(PATH_MAX, GFP_KERNEL);
if (pathbuf) {
p = file_path(lun->filp, pathbuf, PATH_MAX);
if (IS_ERR(p))
@ -2931,7 +2928,6 @@ int fsg_common_create_lun(struct fsg_common *common, struct fsg_lun_config *cfg,
error_lun:
if (device_is_registered(&lun->dev))
device_unregister(&lun->dev);
fsg_lun_close(lun);
common->luns[id] = NULL;
error_sysfs:
kfree(lun);


@ -54,7 +54,10 @@
#define DEFAULT_Q_LEN 10 /* same as legacy g_printer gadget */
static int major, minors;
static struct class *usb_gadget_class;
static const struct class usb_gadget_class = {
.name = "usb_printer_gadget",
};
static DEFINE_IDA(printer_ida);
static DEFINE_MUTEX(printer_ida_lock); /* protects access to printer_ida */
@ -1120,7 +1123,7 @@ autoconf_fail:
/* Setup the sysfs files for the printer gadget. */
devt = MKDEV(major, dev->minor);
pdev = device_create(usb_gadget_class, NULL, devt,
pdev = device_create(&usb_gadget_class, NULL, devt,
NULL, "g_printer%d", dev->minor);
if (IS_ERR(pdev)) {
ERROR(dev, "Failed to create device: g_printer\n");
@ -1143,7 +1146,7 @@ autoconf_fail:
return 0;
fail_cdev_add:
device_destroy(usb_gadget_class, devt);
device_destroy(&usb_gadget_class, devt);
fail_rx_reqs:
while (!list_empty(&dev->rx_reqs)) {
@ -1211,8 +1214,8 @@ static ssize_t f_printer_opts_pnp_string_show(struct config_item *item,
if (!opts->pnp_string)
goto unlock;
result = strlcpy(page, opts->pnp_string, PAGE_SIZE);
if (result >= PAGE_SIZE) {
result = strscpy(page, opts->pnp_string, PAGE_SIZE);
if (result < 1) {
result = PAGE_SIZE;
} else if (page[result - 1] != '\n' && result + 1 < PAGE_SIZE) {
page[result++] = '\n';
@ -1410,7 +1413,7 @@ static void printer_func_unbind(struct usb_configuration *c,
dev = func_to_printer(f);
device_destroy(usb_gadget_class, MKDEV(major, dev->minor));
device_destroy(&usb_gadget_class, MKDEV(major, dev->minor));
/* Remove Character Device */
cdev_del(&dev->printer_cdev);
@ -1512,19 +1515,14 @@ static int gprinter_setup(int count)
int status;
dev_t devt;
usb_gadget_class = class_create("usb_printer_gadget");
if (IS_ERR(usb_gadget_class)) {
status = PTR_ERR(usb_gadget_class);
usb_gadget_class = NULL;
pr_err("unable to create usb_gadget class %d\n", status);
status = class_register(&usb_gadget_class);
if (status)
return status;
}
status = alloc_chrdev_region(&devt, 0, count, "USB printer gadget");
if (status) {
pr_err("alloc_chrdev_region %d\n", status);
class_destroy(usb_gadget_class);
usb_gadget_class = NULL;
class_unregister(&usb_gadget_class);
return status;
}
@ -1540,6 +1538,5 @@ static void gprinter_cleanup(void)
unregister_chrdev_region(MKDEV(major, 0), minors);
major = minors = 0;
}
class_destroy(usb_gadget_class);
usb_gadget_class = NULL;
class_unregister(&usb_gadget_class);
}


@ -539,16 +539,20 @@ static int gs_alloc_requests(struct usb_ep *ep, struct list_head *head,
static int gs_start_io(struct gs_port *port)
{
struct list_head *head = &port->read_pool;
struct usb_ep *ep = port->port_usb->out;
struct usb_ep *ep;
int status;
unsigned started;
if (!port->port_usb || !port->port.tty)
return -EIO;
/* Allocate RX and TX I/O buffers. We can't easily do this much
* earlier (with GFP_KERNEL) because the requests are coupled to
* endpoints, as are the packet sizes we'll be using. Different
* configurations may use different endpoints with a given port;
* and high speed vs full speed changes packet sizes too.
*/
ep = port->port_usb->out;
status = gs_alloc_requests(ep, head, gs_read_complete,
&port->read_allocated);
if (status)
@ -916,8 +920,11 @@ static void __gs_console_push(struct gs_console *cons)
}
req->length = size;
spin_unlock_irq(&cons->lock);
if (usb_ep_queue(ep, req, GFP_ATOMIC))
req->length = 0;
spin_lock_irq(&cons->lock);
}
static void gs_console_work(struct work_struct *work)
@ -1420,10 +1427,19 @@ EXPORT_SYMBOL_GPL(gserial_disconnect);
void gserial_suspend(struct gserial *gser)
{
struct gs_port *port = gser->ioport;
struct gs_port *port;
unsigned long flags;
spin_lock_irqsave(&port->port_lock, flags);
spin_lock_irqsave(&serial_port_lock, flags);
port = gser->ioport;
if (!port) {
spin_unlock_irqrestore(&serial_port_lock, flags);
return;
}
spin_lock(&port->port_lock);
spin_unlock(&serial_port_lock);
port->suspended = true;
spin_unlock_irqrestore(&port->port_lock, flags);
}
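The gserial_suspend() fix closes a race with disconnect: the global serial_port_lock is taken first so gser->ioport cannot be cleared underneath us, the per-port lock is nested inside it, and only then is the global lock dropped. The lock hand-over pattern in isolation, with made-up names:

#include <linux/spinlock.h>

struct obj {
        spinlock_t lock;
        bool suspended;
};

struct conn {
        struct obj *ioport;     /* cleared by the disconnect path */
};

static DEFINE_SPINLOCK(table_lock);     /* guards conn->ioport */

static void suspend_obj(struct conn *c)
{
        struct obj *o;
        unsigned long flags;

        spin_lock_irqsave(&table_lock, flags);
        o = c->ioport;
        if (!o) {               /* raced with disconnect */
                spin_unlock_irqrestore(&table_lock, flags);
                return;
        }
        spin_lock(&o->lock);    /* nest while table_lock pins o */
        spin_unlock(&table_lock);       /* o can no longer go away */

        o->suspended = true;
        spin_unlock_irqrestore(&o->lock, flags);
}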


@ -382,9 +382,12 @@ static void uvcg_video_pump(struct work_struct *work)
{
struct uvc_video *video = container_of(work, struct uvc_video, pump);
struct uvc_video_queue *queue = &video->queue;
/* video->max_payload_size is only set when using bulk transfer */
bool is_bulk = video->max_payload_size;
struct usb_request *req = NULL;
struct uvc_buffer *buf;
unsigned long flags;
bool buf_done;
int ret;
while (video->ep->enabled) {
@ -408,20 +411,47 @@ static void uvcg_video_pump(struct work_struct *work)
*/
spin_lock_irqsave(&queue->irqlock, flags);
buf = uvcg_queue_head(queue);
if (buf == NULL) {
if (buf != NULL) {
video->encode(req, video, buf);
buf_done = buf->state == UVC_BUF_STATE_DONE;
} else if (!(queue->flags & UVC_QUEUE_DISCONNECTED) && !is_bulk) {
/*
* No video buffer available; the queue is still connected and
* we're transferring over ISOC. Queue a 0 length request to
* prevent missed ISOC transfers.
*/
req->length = 0;
buf_done = false;
} else {
/*
* Either the queue has been disconnected or no video buffer
* is available for bulk transfer. Either way, stop processing
* further.
*/
spin_unlock_irqrestore(&queue->irqlock, flags);
break;
}
video->encode(req, video, buf);
/*
* With usb3 we have more requests. This will decrease the
* interrupt load to a quarter but also catches the corner
* cases, which needs to be handled.
* With USB3 handling more requests at a higher speed, we can't
* afford to generate an interrupt for every request. Decide to
* interrupt:
*
* - When no more requests are available in the free queue, as
* this may be our last chance to refill the endpoint's
* request queue.
*
* - When this request is the last one for the video
* buffer, as we want to start sending the next video buffer
* ASAP in case it doesn't get started already in the next
* iteration of this loop.
*
* - Four times over the length of the requests queue (as
* indicated by video->uvc_num_requests), as a trade-off
* between latency and interrupt load.
*/
if (list_empty(&video->req_free) ||
buf->state == UVC_BUF_STATE_DONE ||
if (list_empty(&video->req_free) || buf_done ||
!(video->req_int_count %
DIV_ROUND_UP(video->uvc_num_requests, 4))) {
video->req_int_count = 0;
@ -441,8 +471,7 @@ static void uvcg_video_pump(struct work_struct *work)
/* Endpoint now owns the request */
req = NULL;
if (buf->state != UVC_BUF_STATE_DONE)
video->req_int_count++;
video->req_int_count++;
}
if (!req)
@ -527,4 +556,3 @@ int uvcg_video_init(struct uvc_video *video, struct uvc_device *uvc)
V4L2_BUF_TYPE_VIDEO_OUTPUT, &video->mutex);
return 0;
}
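The new interrupt-moderation comment reduces to a three-way predicate. For example, with video->uvc_num_requests = 16, DIV_ROUND_UP(16, 4) = 4, so in the steady state only every fourth completed request raises an interrupt. The predicate in isolation (a sketch, not the driver's exact code):

#include <linux/math.h>
#include <linux/types.h>

static bool want_interrupt(unsigned int req_int_count, unsigned int nreq,
                           bool last_req_for_buffer, bool free_list_empty)
{
        return free_list_empty || last_req_for_buffer ||
               !(req_int_count % DIV_ROUND_UP(nreq, 4));
}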


@ -389,8 +389,10 @@ static int gfs_bind(struct usb_composite_dev *cdev)
struct usb_descriptor_header *usb_desc;
usb_desc = usb_otg_descriptor_alloc(cdev->gadget);
if (!usb_desc)
if (!usb_desc) {
ret = -ENOMEM;
goto error_rndis;
}
usb_otg_descriptor_init(cdev->gadget, usb_desc);
gfs_otg_desc[0] = usb_desc;
gfs_otg_desc[1] = NULL;


@ -237,7 +237,7 @@ static int hidg_plat_driver_probe(struct platform_device *pdev)
return 0;
}
static int hidg_plat_driver_remove(struct platform_device *pdev)
static void hidg_plat_driver_remove(struct platform_device *pdev)
{
struct hidg_func_node *e, *n;
@ -245,8 +245,6 @@ static int hidg_plat_driver_remove(struct platform_device *pdev)
list_del(&e->node);
kfree(e);
}
return 0;
}
@ -263,7 +261,7 @@ static struct usb_composite_driver hidg_driver = {
};
static struct platform_driver hidg_plat_driver = {
.remove = hidg_plat_driver_remove,
.remove_new = hidg_plat_driver_remove,
.driver = {
.name = "hidg",
},


@ -463,6 +463,8 @@ config USB_ASPEED_UDC
source "drivers/usb/gadget/udc/aspeed-vhub/Kconfig"
source "drivers/usb/gadget/udc/cdns2/Kconfig"
#
# LAST -- dummy/emulated controller
#


@ -42,3 +42,4 @@ obj-$(CONFIG_USB_ASPEED_VHUB) += aspeed-vhub/
obj-$(CONFIG_USB_ASPEED_UDC) += aspeed_udc.o
obj-$(CONFIG_USB_BDC_UDC) += bdc/
obj-$(CONFIG_USB_MAX3420_UDC) += max3420_udc.o
obj-$(CONFIG_USB_CDNS2_UDC) += cdns2/


@ -253,14 +253,14 @@ void ast_vhub_init_hw(struct ast_vhub *vhub)
vhub->regs + AST_VHUB_IER);
}
static int ast_vhub_remove(struct platform_device *pdev)
static void ast_vhub_remove(struct platform_device *pdev)
{
struct ast_vhub *vhub = platform_get_drvdata(pdev);
unsigned long flags;
int i;
if (!vhub || !vhub->regs)
return 0;
return;
/* Remove devices */
for (i = 0; i < vhub->max_ports; i++)
@ -289,8 +289,6 @@ static int ast_vhub_remove(struct platform_device *pdev)
vhub->ep0_bufs,
vhub->ep0_bufs_dma);
vhub->ep0_bufs = NULL;
return 0;
}
static int ast_vhub_probe(struct platform_device *pdev)
@ -431,7 +429,7 @@ MODULE_DEVICE_TABLE(of, ast_vhub_dt_ids);
static struct platform_driver ast_vhub_driver = {
.probe = ast_vhub_probe,
.remove = ast_vhub_remove,
.remove_new = ast_vhub_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = ast_vhub_dt_ids,


@ -2369,7 +2369,7 @@ static int usba_udc_probe(struct platform_device *pdev)
return 0;
}
static int usba_udc_remove(struct platform_device *pdev)
static void usba_udc_remove(struct platform_device *pdev)
{
struct usba_udc *udc;
int i;
@ -2382,8 +2382,6 @@ static int usba_udc_remove(struct platform_device *pdev)
for (i = 1; i < udc->num_ep; i++)
usba_ep_cleanup_debugfs(&udc->usba_ep[i]);
usba_cleanup_debugfs(udc);
return 0;
}
#ifdef CONFIG_PM_SLEEP
@ -2450,7 +2448,7 @@ static SIMPLE_DEV_PM_OPS(usba_udc_pm_ops, usba_udc_suspend, usba_udc_resume);
static struct platform_driver udc_driver = {
.probe = usba_udc_probe,
.remove = usba_udc_remove,
.remove_new = usba_udc_remove,
.driver = {
.name = "atmel_usba_udc",
.pm = &usba_udc_pm_ops,

View File

@ -2354,7 +2354,7 @@ report_request_failure:
* bcm63xx_udc_remove - Remove the device from the system.
* @pdev: Platform device struct from the bcm63xx BSP code.
*/
static int bcm63xx_udc_remove(struct platform_device *pdev)
static void bcm63xx_udc_remove(struct platform_device *pdev)
{
struct bcm63xx_udc *udc = platform_get_drvdata(pdev);
@ -2363,13 +2363,11 @@ static int bcm63xx_udc_remove(struct platform_device *pdev)
BUG_ON(udc->driver);
bcm63xx_uninit_udc_hw(udc);
return 0;
}
static struct platform_driver bcm63xx_udc_driver = {
.probe = bcm63xx_udc_probe,
.remove = bcm63xx_udc_remove,
.remove_new = bcm63xx_udc_remove,
.driver = {
.name = DRV_MODULE_NAME,
},


@ -583,7 +583,7 @@ disable_clk:
return ret;
}
static int bdc_remove(struct platform_device *pdev)
static void bdc_remove(struct platform_device *pdev)
{
struct bdc *bdc;
@ -593,7 +593,6 @@ static int bdc_remove(struct platform_device *pdev)
bdc_hw_exit(bdc);
bdc_phy_exit(bdc);
clk_disable_unprepare(bdc->clk);
return 0;
}
#ifdef CONFIG_PM_SLEEP
@ -648,7 +647,7 @@ static struct platform_driver bdc_driver = {
.of_match_table = bdc_of_match,
},
.probe = bdc_probe,
.remove = bdc_remove,
.remove_new = bdc_remove,
};
module_platform_driver(bdc_driver);


@ -0,0 +1,11 @@
config USB_CDNS2_UDC
tristate "Cadence USBHS Device Controller"
depends on USB_PCI && ACPI && HAS_DMA
help
The Cadence USBHS device controller is a PCI-based USB
peripheral controller that supports both full- and high-speed
USB 2.0 data transfers.
Say "y" to link the driver statically, or "m" to build a
dynamically linked module called "cdns2-udc-pci.ko" and to
force all gadget drivers to also be dynamically linked.


@ -0,0 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
# define_trace.h needs to know how to find our header
CFLAGS_cdns2-trace.o := -I$(src)
obj-$(CONFIG_USB_CDNS2_UDC) += cdns2-udc-pci.o
cdns2-udc-pci-$(CONFIG_USB_CDNS2_UDC) += cdns2-pci.o cdns2-gadget.o cdns2-ep0.o
cdns2-udc-pci-$(CONFIG_TRACING) += cdns2-trace.o


@ -0,0 +1,203 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cadence USBHS-DEV Driver.
* Debug header file.
*
* Copyright (C) 2023 Cadence.
*
* Author: Pawel Laszczak <pawell@cadence.com>
*/
#ifndef __LINUX_CDNS2_DEBUG
#define __LINUX_CDNS2_DEBUG
static inline const char *cdns2_decode_usb_irq(char *str, size_t size,
u8 usb_irq, u8 ext_irq)
{
int ret;
ret = snprintf(str, size, "usbirq: 0x%02x - ", usb_irq);
if (usb_irq & USBIRQ_SOF)
ret += snprintf(str + ret, size - ret, "SOF ");
if (usb_irq & USBIRQ_SUTOK)
ret += snprintf(str + ret, size - ret, "SUTOK ");
if (usb_irq & USBIRQ_SUDAV)
ret += snprintf(str + ret, size - ret, "SETUP ");
if (usb_irq & USBIRQ_SUSPEND)
ret += snprintf(str + ret, size - ret, "Suspend ");
if (usb_irq & USBIRQ_URESET)
ret += snprintf(str + ret, size - ret, "Reset ");
if (usb_irq & USBIRQ_HSPEED)
ret += snprintf(str + ret, size - ret, "HS ");
if (usb_irq & USBIRQ_LPM)
ret += snprintf(str + ret, size - ret, "LPM ");
ret += snprintf(str + ret, size - ret, ", EXT: 0x%02x - ", ext_irq);
if (ext_irq & EXTIRQ_WAKEUP)
ret += snprintf(str + ret, size - ret, "Wakeup ");
if (ext_irq & EXTIRQ_VBUSFAULT_FALL)
ret += snprintf(str + ret, size - ret, "VBUS_FALL ");
if (ext_irq & EXTIRQ_VBUSFAULT_RISE)
ret += snprintf(str + ret, size - ret, "VBUS_RISE ");
if (ret >= size)
pr_info("CDNS2: buffer overflowed.\n");
return str;
}
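
A sketch of how a decode helper of this shape is used: the caller supplies a scratch buffer and logs the returned string. The call site below is illustrative, not taken from this diff (the real driver invokes these helpers from its trace events):

	/* Hypothetical debug path in an interrupt handler. */
	char dbg[256];

	pr_debug("%s\n",
		 cdns2_decode_usb_irq(dbg, sizeof(dbg), usb_irq, ext_irq));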
static inline const char *cdns2_decode_dma_irq(char *str, size_t size,
u32 ep_ists, u32 ep_sts,
const char *ep_name)
{
int ret;
ret = snprintf(str, size, "ISTS: %08x, %s: %08x ",
ep_ists, ep_name, ep_sts);
if (ep_sts & DMA_EP_STS_IOC)
ret += snprintf(str + ret, size - ret, "IOC ");
if (ep_sts & DMA_EP_STS_ISP)
ret += snprintf(str + ret, size - ret, "ISP ");
if (ep_sts & DMA_EP_STS_DESCMIS)
ret += snprintf(str + ret, size - ret, "DESCMIS ");
if (ep_sts & DMA_EP_STS_TRBERR)
ret += snprintf(str + ret, size - ret, "TRBERR ");
if (ep_sts & DMA_EP_STS_OUTSMM)
ret += snprintf(str + ret, size - ret, "OUTSMM ");
if (ep_sts & DMA_EP_STS_ISOERR)
ret += snprintf(str + ret, size - ret, "ISOERR ");
if (ep_sts & DMA_EP_STS_DBUSY)
ret += snprintf(str + ret, size - ret, "DBUSY ");
if (DMA_EP_STS_CCS(ep_sts))
ret += snprintf(str + ret, size - ret, "CCS ");
if (ret >= size)
pr_info("CDNS2: buffer overflowed.\n");
return str;
}
static inline const char *cdns2_decode_epx_irq(char *str, size_t size,
char *ep_name, u32 ep_ists,
u32 ep_sts)
{
return cdns2_decode_dma_irq(str, size, ep_ists, ep_sts, ep_name);
}
static inline const char *cdns2_decode_ep0_irq(char *str, size_t size,
u32 ep_ists, u32 ep_sts,
int dir)
{
return cdns2_decode_dma_irq(str, size, ep_ists, ep_sts,
dir ? "ep0IN" : "ep0OUT");
}
static inline const char *cdns2_raw_ring(struct cdns2_endpoint *pep,
struct cdns2_trb *trbs,
char *str, size_t size)
{
struct cdns2_ring *ring = &pep->ring;
struct cdns2_trb *trb;
dma_addr_t dma;
int ret;
int i;
ret = snprintf(str, size, "\n\t\tTR for %s:", pep->name);
trb = &trbs[ring->dequeue];
dma = cdns2_trb_virt_to_dma(pep, trb);
ret += snprintf(str + ret, size - ret,
"\n\t\tRing deq index: %d, trb: V=%p, P=0x%pad\n",
ring->dequeue, trb, &dma);
trb = &trbs[ring->enqueue];
dma = cdns2_trb_virt_to_dma(pep, trb);
ret += snprintf(str + ret, size - ret,
"\t\tRing enq index: %d, trb: V=%p, P=0x%pad\n",
ring->enqueue, trb, &dma);
ret += snprintf(str + ret, size - ret,
"\t\tfree trbs: %d, CCS=%d, PCS=%d\n",
ring->free_trbs, ring->ccs, ring->pcs);
if (TRBS_PER_SEGMENT > 40) {
ret += snprintf(str + ret, size - ret,
"\t\tTransfer ring %d too big\n", TRBS_PER_SEGMENT);
return str;
}
dma = ring->dma;
for (i = 0; i < TRBS_PER_SEGMENT; ++i) {
trb = &trbs[i];
ret += snprintf(str + ret, size - ret,
"\t\t@%pad %08x %08x %08x\n", &dma,
le32_to_cpu(trb->buffer),
le32_to_cpu(trb->length),
le32_to_cpu(trb->control));
dma += sizeof(*trb);
}
if (ret >= size)
pr_info("CDNS2: buffer overflowed.\n");
return str;
}
static inline const char *cdns2_trb_type_string(u8 type)
{
switch (type) {
case TRB_NORMAL:
return "Normal";
case TRB_LINK:
return "Link";
default:
return "UNKNOWN";
}
}
static inline const char *cdns2_decode_trb(char *str, size_t size, u32 flags,
u32 length, u32 buffer)
{
int type = TRB_FIELD_TO_TYPE(flags);
int ret;
switch (type) {
case TRB_LINK:
ret = snprintf(str, size,
"LINK %08x type '%s' flags %c:%c:%c%c:%c",
buffer, cdns2_trb_type_string(type),
flags & TRB_CYCLE ? 'C' : 'c',
flags & TRB_TOGGLE ? 'T' : 't',
flags & TRB_CHAIN ? 'C' : 'c',
flags & TRB_CHAIN ? 'H' : 'h',
flags & TRB_IOC ? 'I' : 'i');
break;
case TRB_NORMAL:
ret = snprintf(str, size,
"type: '%s', Buffer: %08x, length: %ld, burst len: %ld, "
"flags %c:%c:%c%c:%c",
cdns2_trb_type_string(type),
buffer, TRB_LEN(length),
TRB_FIELD_TO_BURST(length),
flags & TRB_CYCLE ? 'C' : 'c',
flags & TRB_ISP ? 'I' : 'i',
flags & TRB_CHAIN ? 'C' : 'c',
flags & TRB_CHAIN ? 'H' : 'h',
flags & TRB_IOC ? 'I' : 'i');
break;
default:
ret = snprintf(str, size, "type '%s' -> raw %08x %08x %08x",
cdns2_trb_type_string(type),
buffer, length, flags);
}
if (ret >= size)
pr_info("CDNS2: buffer overflowed.\n");
return str;
}
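
A note on the format produced above: each flag letter is printed uppercase when the bit is set and lowercase when clear, so a normal TRB decoded as "C:i:ch:I" has TRB_CYCLE and TRB_IOC set while TRB_ISP and TRB_CHAIN are clear.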
#endif /*__LINUX_CDNS2_DEBUG*/


@@ -0,0 +1,659 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cadence USBHS-DEV driver.
*
* Copyright (C) 2023 Cadence Design Systems.
*
* Authors: Pawel Laszczak <pawell@cadence.com>
*/
#include <linux/usb/composite.h>
#include <asm/unaligned.h>
#include "cdns2-gadget.h"
#include "cdns2-trace.h"
static struct usb_endpoint_descriptor cdns2_gadget_ep0_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bmAttributes = USB_ENDPOINT_XFER_CONTROL,
.wMaxPacketSize = cpu_to_le16(64)
};
static int cdns2_w_index_to_ep_index(u16 wIndex)
{
if (!(wIndex & USB_ENDPOINT_NUMBER_MASK))
return 0;
return ((wIndex & USB_ENDPOINT_NUMBER_MASK) * 2) +
(wIndex & USB_ENDPOINT_DIR_MASK ? 1 : 0) - 1;
}
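
Since the interleaved endpoint layout is easy to misread, a few worked examples of the mapping above:

/* wIndex 0x00 (ep0, either direction) -> 0
 * wIndex 0x01 (ep1 OUT)               -> (1 * 2) + 0 - 1 = 1
 * wIndex 0x81 (ep1 IN)                -> (1 * 2) + 1 - 1 = 2
 * wIndex 0x02 (ep2 OUT)               -> (2 * 2) + 0 - 1 = 3
 * i.e. pdev->eps[] holds ep0 in slot 0, then OUT/IN pairs per endpoint.
 */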
static bool cdns2_check_new_setup(struct cdns2_device *pdev)
{
u8 reg;
reg = readb(&pdev->ep0_regs->cs);
return !!(reg & EP0CS_CHGSET);
}
static void cdns2_ep0_enqueue(struct cdns2_device *pdev, dma_addr_t dma_addr,
unsigned int length, int zlp)
{
struct cdns2_adma_regs __iomem *regs = pdev->adma_regs;
struct cdns2_endpoint *pep = &pdev->eps[0];
struct cdns2_ring *ring = &pep->ring;
ring->trbs[0].buffer = cpu_to_le32(TRB_BUFFER(dma_addr));
ring->trbs[0].length = cpu_to_le32(TRB_LEN(length));
if (zlp) {
ring->trbs[0].control = cpu_to_le32(TRB_CYCLE |
TRB_TYPE(TRB_NORMAL));
ring->trbs[1].buffer = cpu_to_le32(TRB_BUFFER(dma_addr));
ring->trbs[1].length = cpu_to_le32(TRB_LEN(0));
ring->trbs[1].control = cpu_to_le32(TRB_CYCLE | TRB_IOC |
TRB_TYPE(TRB_NORMAL));
} else {
ring->trbs[0].control = cpu_to_le32(TRB_CYCLE | TRB_IOC |
TRB_TYPE(TRB_NORMAL));
ring->trbs[1].control = 0;
}
trace_cdns2_queue_trb(pep, ring->trbs);
if (!pep->dir)
writel(0, &pdev->ep0_regs->rxbc);
cdns2_select_ep(pdev, pep->dir);
writel(DMA_EP_STS_TRBERR, &regs->ep_sts);
writel(pep->ring.dma, &regs->ep_traddr);
trace_cdns2_doorbell_ep0(pep, readl(&regs->ep_traddr));
writel(DMA_EP_CMD_DRDY, &regs->ep_cmd);
}
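
For readability, the TRB chain the function above programs in each case (a summary of the code, not an addition to it):

/* zlp == 0: trbs[0] = {buf, len, CYCLE|IOC|NORMAL}  -> one completion
 * zlp == 1: trbs[0] = {buf, len, CYCLE|NORMAL}      -> no interrupt yet
 *           trbs[1] = {buf, 0,   CYCLE|IOC|NORMAL}  -> completion after ZLP
 */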
static int cdns2_ep0_delegate_req(struct cdns2_device *pdev)
{
int ret;
spin_unlock(&pdev->lock);
ret = pdev->gadget_driver->setup(&pdev->gadget, &pdev->setup);
spin_lock(&pdev->lock);
return ret;
}
static void cdns2_ep0_stall(struct cdns2_device *pdev)
{
struct cdns2_endpoint *pep = &pdev->eps[0];
struct cdns2_request *preq;
preq = cdns2_next_preq(&pep->pending_list);
set_reg_bit_8(&pdev->ep0_regs->cs, EP0CS_DSTALL);
if (pdev->ep0_stage == CDNS2_DATA_STAGE && preq)
cdns2_gadget_giveback(pep, preq, -ECONNRESET);
else if (preq)
list_del_init(&preq->list);
pdev->ep0_stage = CDNS2_SETUP_STAGE;
pep->ep_state |= EP_STALLED;
}
static void cdns2_status_stage(struct cdns2_device *pdev)
{
struct cdns2_endpoint *pep = &pdev->eps[0];
struct cdns2_request *preq;
preq = cdns2_next_preq(&pep->pending_list);
if (preq)
list_del_init(&preq->list);
pdev->ep0_stage = CDNS2_SETUP_STAGE;
writeb(EP0CS_HSNAK, &pdev->ep0_regs->cs);
}
static int cdns2_req_ep0_set_configuration(struct cdns2_device *pdev,
struct usb_ctrlrequest *ctrl_req)
{
enum usb_device_state state = pdev->gadget.state;
u32 config = le16_to_cpu(ctrl_req->wValue);
int ret;
if (state < USB_STATE_ADDRESS) {
dev_err(pdev->dev, "Set Configuration - bad device state\n");
return -EINVAL;
}
ret = cdns2_ep0_delegate_req(pdev);
if (ret)
return ret;
trace_cdns2_device_state(config ? "configured" : "addressed");
if (!config)
usb_gadget_set_state(&pdev->gadget, USB_STATE_ADDRESS);
return 0;
}
static int cdns2_req_ep0_set_address(struct cdns2_device *pdev, u32 addr)
{
enum usb_device_state device_state = pdev->gadget.state;
u8 reg;
if (addr > USB_DEVICE_MAX_ADDRESS) {
dev_err(pdev->dev,
"Device address (%d) cannot be greater than %d\n",
addr, USB_DEVICE_MAX_ADDRESS);
return -EINVAL;
}
if (device_state == USB_STATE_CONFIGURED) {
dev_err(pdev->dev,
"can't set_address from configured state\n");
return -EINVAL;
}
reg = readb(&pdev->usb_regs->fnaddr);
pdev->dev_address = reg;
usb_gadget_set_state(&pdev->gadget,
(addr ? USB_STATE_ADDRESS : USB_STATE_DEFAULT));
trace_cdns2_device_state(addr ? "addressed" : "default");
return 0;
}
static int cdns2_req_ep0_handle_status(struct cdns2_device *pdev,
struct usb_ctrlrequest *ctrl)
{
struct cdns2_endpoint *pep;
__le16 *response_pkt;
u16 status = 0;
int ep_sts;
u32 recip;
recip = ctrl->bRequestType & USB_RECIP_MASK;
switch (recip) {
case USB_RECIP_DEVICE:
status = pdev->gadget.is_selfpowered;
status |= pdev->may_wakeup << USB_DEVICE_REMOTE_WAKEUP;
break;
case USB_RECIP_INTERFACE:
return cdns2_ep0_delegate_req(pdev);
case USB_RECIP_ENDPOINT:
ep_sts = cdns2_w_index_to_ep_index(le16_to_cpu(ctrl->wIndex));
pep = &pdev->eps[ep_sts];
if (pep->ep_state & EP_STALLED)
status = BIT(USB_ENDPOINT_HALT);
break;
default:
return -EINVAL;
}
put_unaligned_le16(status, (__le16 *)pdev->ep0_preq.request.buf);
cdns2_ep0_enqueue(pdev, pdev->ep0_preq.request.dma,
sizeof(*response_pkt), 0);
return 0;
}
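
The 16-bit word assembled above follows the standard USB 2.0 GET_STATUS(device) layout:

/* bit 0: self-powered        <- pdev->gadget.is_selfpowered
 * bit 1: remote wakeup armed <- pdev->may_wakeup
 * e.g. a self-powered device with wakeup enabled answers 0x0003,
 * sent little-endian, hence put_unaligned_le16() above.
 */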
static int cdns2_ep0_handle_feature_device(struct cdns2_device *pdev,
struct usb_ctrlrequest *ctrl,
int set)
{
enum usb_device_state state;
enum usb_device_speed speed;
int ret = 0;
u32 wValue;
u16 tmode;
wValue = le16_to_cpu(ctrl->wValue);
state = pdev->gadget.state;
speed = pdev->gadget.speed;
switch (wValue) {
case USB_DEVICE_REMOTE_WAKEUP:
pdev->may_wakeup = !!set;
break;
case USB_DEVICE_TEST_MODE:
if (state != USB_STATE_CONFIGURED || speed > USB_SPEED_HIGH)
return -EINVAL;
tmode = le16_to_cpu(ctrl->wIndex);
if (!set || (tmode & 0xff) != 0)
return -EINVAL;
tmode >>= 8;
switch (tmode) {
case USB_TEST_J:
case USB_TEST_K:
case USB_TEST_SE0_NAK:
case USB_TEST_PACKET:
/*
* The USBHS controller handles the Set_Feature(TEST_MODE)
* request automatically. The standard test modes with
* selector values 01h to 04h (Test_J, Test_K, Test_SE0_NAK,
* Test_Packet) are supported by the controller
* (HS: ACK, FS: stall).
*/
break;
default:
ret = -EINVAL;
}
break;
default:
ret = -EINVAL;
}
return ret;
}
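
A worked example of the test-mode decoding above, where the selector travels in the high byte of wIndex:

/* wIndex = 0x0400 -> low byte 0, tmode >> 8 == 4 == USB_TEST_PACKET: accepted
 * wIndex = 0x0401 -> rejected, the low byte must be zero
 * wIndex = 0x0500 -> rejected, selector 5 is not handled by the switch above
 */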
static int cdns2_ep0_handle_feature_intf(struct cdns2_device *pdev,
struct usb_ctrlrequest *ctrl,
int set)
{
int ret = 0;
u32 wValue;
wValue = le16_to_cpu(ctrl->wValue);
switch (wValue) {
case USB_INTRF_FUNC_SUSPEND:
break;
default:
ret = -EINVAL;
}
return ret;
}
static int cdns2_ep0_handle_feature_endpoint(struct cdns2_device *pdev,
struct usb_ctrlrequest *ctrl,
int set)
{
struct cdns2_endpoint *pep;
u8 wValue;
wValue = le16_to_cpu(ctrl->wValue);
pep = &pdev->eps[cdns2_w_index_to_ep_index(le16_to_cpu(ctrl->wIndex))];
if (wValue != USB_ENDPOINT_HALT)
return -EINVAL;
if (!(le16_to_cpu(ctrl->wIndex) & ~USB_DIR_IN))
return 0;
switch (wValue) {
case USB_ENDPOINT_HALT:
if (set || !(pep->ep_state & EP_WEDGE))
return cdns2_halt_endpoint(pdev, pep, set);
break;
default:
dev_warn(pdev->dev, "WARN Incorrect wValue %04x\n", wValue);
return -EINVAL;
}
return 0;
}
static int cdns2_req_ep0_handle_feature(struct cdns2_device *pdev,
struct usb_ctrlrequest *ctrl,
int set)
{
switch (ctrl->bRequestType & USB_RECIP_MASK) {
case USB_RECIP_DEVICE:
return cdns2_ep0_handle_feature_device(pdev, ctrl, set);
case USB_RECIP_INTERFACE:
return cdns2_ep0_handle_feature_intf(pdev, ctrl, set);
case USB_RECIP_ENDPOINT:
return cdns2_ep0_handle_feature_endpoint(pdev, ctrl, set);
default:
return -EINVAL;
}
}
static int cdns2_ep0_std_request(struct cdns2_device *pdev)
{
struct usb_ctrlrequest *ctrl = &pdev->setup;
int ret;
switch (ctrl->bRequest) {
case USB_REQ_SET_ADDRESS:
ret = cdns2_req_ep0_set_address(pdev,
le16_to_cpu(ctrl->wValue));
break;
case USB_REQ_SET_CONFIGURATION:
ret = cdns2_req_ep0_set_configuration(pdev, ctrl);
break;
case USB_REQ_GET_STATUS:
ret = cdns2_req_ep0_handle_status(pdev, ctrl);
break;
case USB_REQ_CLEAR_FEATURE:
ret = cdns2_req_ep0_handle_feature(pdev, ctrl, 0);
break;
case USB_REQ_SET_FEATURE:
ret = cdns2_req_ep0_handle_feature(pdev, ctrl, 1);
break;
default:
ret = cdns2_ep0_delegate_req(pdev);
break;
}
return ret;
}
static void __pending_setup_status_handler(struct cdns2_device *pdev)
{
struct usb_request *request = pdev->pending_status_request;
if (pdev->status_completion_no_call && request && request->complete) {
request->complete(&pdev->eps[0].endpoint, request);
pdev->status_completion_no_call = 0;
}
}
void cdns2_pending_setup_status_handler(struct work_struct *work)
{
struct cdns2_device *pdev = container_of(work, struct cdns2_device,
pending_status_wq);
unsigned long flags;
spin_lock_irqsave(&pdev->lock, flags);
__pending_setup_status_handler(pdev);
spin_unlock_irqrestore(&pdev->lock, flags);
}
void cdns2_handle_setup_packet(struct cdns2_device *pdev)
{
struct usb_ctrlrequest *ctrl = &pdev->setup;
struct cdns2_endpoint *pep = &pdev->eps[0];
struct cdns2_request *preq;
int ret = 0;
u16 len;
u8 reg;
int i;
writeb(EP0CS_CHGSET, &pdev->ep0_regs->cs);
for (i = 0; i < 8; i++)
((u8 *)&pdev->setup)[i] = readb(&pdev->ep0_regs->setupdat[i]);
/*
* If the SETUP packet was modified while being read, simply ignore it;
* the new one will be handled later.
*/
if (cdns2_check_new_setup(pdev)) {
trace_cdns2_ep0_setup("overridden");
return;
}
trace_cdns2_ctrl_req(ctrl);
if (!pdev->gadget_driver)
goto out;
if (pdev->gadget.state == USB_STATE_NOTATTACHED) {
dev_err(pdev->dev, "ERR: Setup detected in unattached state\n");
ret = -EINVAL;
goto out;
}
pep = &pdev->eps[0];
/* The halt on ep0 is cleared automatically when a SETUP packet arrives. */
pep->ep_state &= ~EP_STALLED;
if (!list_empty(&pep->pending_list)) {
preq = cdns2_next_preq(&pep->pending_list);
cdns2_gadget_giveback(pep, preq, -ECONNRESET);
}
len = le16_to_cpu(ctrl->wLength);
if (len)
pdev->ep0_stage = CDNS2_DATA_STAGE;
else
pdev->ep0_stage = CDNS2_STATUS_STAGE;
pep->dir = ctrl->bRequestType & USB_DIR_IN;
/*
* The SET_ADDRESS request is acknowledged automatically by the
* controller, so in the worst case the driver may not notice it.
* To check whether the request has been processed, the driver can
* read the fnaddr register.
*/
reg = readb(&pdev->usb_regs->fnaddr);
if (pdev->setup.bRequest != USB_REQ_SET_ADDRESS &&
pdev->dev_address != reg)
cdns2_req_ep0_set_address(pdev, reg);
if ((ctrl->bRequestType & USB_TYPE_MASK) == USB_TYPE_STANDARD)
ret = cdns2_ep0_std_request(pdev);
else
ret = cdns2_ep0_delegate_req(pdev);
if (ret == USB_GADGET_DELAYED_STATUS) {
trace_cdns2_ep0_status_stage("delayed");
return;
}
out:
if (ret < 0)
cdns2_ep0_stall(pdev);
else if (pdev->ep0_stage == CDNS2_STATUS_STAGE)
cdns2_status_stage(pdev);
}
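
In short, the stage bookkeeping this handler drives:

/* wLength != 0: SETUP -> CDNS2_DATA_STAGE   (status follows data completion)
 * wLength == 0: SETUP -> CDNS2_STATUS_STAGE (acked via cdns2_status_stage())
 * A negative result from the request handlers stalls ep0 instead.
 */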
static void cdns2_transfer_completed(struct cdns2_device *pdev)
{
struct cdns2_endpoint *pep = &pdev->eps[0];
if (!list_empty(&pep->pending_list)) {
struct cdns2_request *preq;
trace_cdns2_complete_trb(pep, pep->ring.trbs);
preq = cdns2_next_preq(&pep->pending_list);
preq->request.actual =
TRB_LEN(le32_to_cpu(pep->ring.trbs->length));
cdns2_gadget_giveback(pep, preq, 0);
}
cdns2_status_stage(pdev);
}
void cdns2_handle_ep0_interrupt(struct cdns2_device *pdev, int dir)
{
u32 ep_sts_reg;
cdns2_select_ep(pdev, dir);
trace_cdns2_ep0_irq(pdev);
ep_sts_reg = readl(&pdev->adma_regs->ep_sts);
writel(ep_sts_reg, &pdev->adma_regs->ep_sts);
__pending_setup_status_handler(pdev);
if ((ep_sts_reg & DMA_EP_STS_IOC) || (ep_sts_reg & DMA_EP_STS_ISP)) {
pdev->eps[0].dir = dir;
cdns2_transfer_completed(pdev);
}
}
/*
* The gadget driver should not call this function;
* endpoint 0 is always active.
*/
static int cdns2_gadget_ep0_enable(struct usb_ep *ep,
const struct usb_endpoint_descriptor *desc)
{
return -EINVAL;
}
/*
* The gadget driver should not call this function;
* endpoint 0 is always active.
*/
static int cdns2_gadget_ep0_disable(struct usb_ep *ep)
{
return -EINVAL;
}
static int cdns2_gadget_ep0_set_halt(struct usb_ep *ep, int value)
{
struct cdns2_endpoint *pep = ep_to_cdns2_ep(ep);
struct cdns2_device *pdev = pep->pdev;
unsigned long flags;
if (!value)
return 0;
spin_lock_irqsave(&pdev->lock, flags);
cdns2_ep0_stall(pdev);
spin_unlock_irqrestore(&pdev->lock, flags);
return 0;
}
static int cdns2_gadget_ep0_set_wedge(struct usb_ep *ep)
{
return cdns2_gadget_ep0_set_halt(ep, 1);
}
static int cdns2_gadget_ep0_queue(struct usb_ep *ep,
struct usb_request *request,
gfp_t gfp_flags)
{
struct cdns2_endpoint *pep = ep_to_cdns2_ep(ep);
struct cdns2_device *pdev = pep->pdev;
struct cdns2_request *preq;
unsigned long flags;
u8 zlp = 0;
int ret;
spin_lock_irqsave(&pdev->lock, flags);
preq = to_cdns2_request(request);
trace_cdns2_request_enqueue(preq);
/* Cancel the request if the controller has received a new SETUP packet. */
if (cdns2_check_new_setup(pdev)) {
trace_cdns2_ep0_setup("overridden");
spin_unlock_irqrestore(&pdev->lock, flags);
return -ECONNRESET;
}
/* Send the STATUS stage; this should happen only for SET_CONFIGURATION. */
if (pdev->ep0_stage == CDNS2_STATUS_STAGE) {
cdns2_status_stage(pdev);
request->actual = 0;
pdev->status_completion_no_call = true;
pdev->pending_status_request = request;
usb_gadget_set_state(&pdev->gadget, USB_STATE_CONFIGURED);
spin_unlock_irqrestore(&pdev->lock, flags);
/*
* Since there is no completion interrupt for the status stage,
* the driver must call ->complete() in software after
* cdns2_gadget_ep0_queue() returns.
*/
queue_work(system_freezable_wq, &pdev->pending_status_wq);
return 0;
}
if (!list_empty(&pep->pending_list)) {
trace_cdns2_ep0_setup("pending");
dev_err(pdev->dev,
"can't handle multiple requests for ep0\n");
spin_unlock_irqrestore(&pdev->lock, flags);
return -EBUSY;
}
ret = usb_gadget_map_request_by_dev(pdev->dev, request, pep->dir);
if (ret) {
spin_unlock_irqrestore(&pdev->lock, flags);
dev_err(pdev->dev, "failed to map request\n");
return -EINVAL;
}
request->status = -EINPROGRESS;
list_add_tail(&preq->list, &pep->pending_list);
if (request->zero && request->length &&
(request->length % ep->maxpacket == 0))
zlp = 1;
cdns2_ep0_enqueue(pdev, request->dma, request->length, zlp);
spin_unlock_irqrestore(&pdev->lock, flags);
return 0;
}
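
For context, a minimal sketch of the caller's side: a composite function driver answering a control request by queueing a short reply on ep0, the path that reaches cdns2_gadget_ep0_queue() via usb_ep_queue(). Names are hypothetical; cdev->req is the ep0 request pre-allocated by the composite core:

#include <linux/usb/composite.h>
#include <asm/unaligned.h>

static int demo_func_setup(struct usb_function *f,
			   const struct usb_ctrlrequest *ctrl)
{
	struct usb_composite_dev *cdev = f->config->cdev;
	struct usb_request *req = cdev->req;
	u16 w_length = le16_to_cpu(ctrl->wLength);

	put_unaligned_le16(0, req->buf);	/* illustrative 2-byte payload */
	req->length = min_t(u16, w_length, 2);
	req->zero = req->length < w_length;

	return usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
}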
static const struct usb_ep_ops cdns2_gadget_ep0_ops = {
.enable = cdns2_gadget_ep0_enable,
.disable = cdns2_gadget_ep0_disable,
.alloc_request = cdns2_gadget_ep_alloc_request,
.free_request = cdns2_gadget_ep_free_request,
.queue = cdns2_gadget_ep0_queue,
.dequeue = cdns2_gadget_ep_dequeue,
.set_halt = cdns2_gadget_ep0_set_halt,
.set_wedge = cdns2_gadget_ep0_set_wedge,
};
void cdns2_ep0_config(struct cdns2_device *pdev)
{
struct cdns2_endpoint *pep;
pep = &pdev->eps[0];
if (!list_empty(&pep->pending_list)) {
struct cdns2_request *preq;
preq = cdns2_next_preq(&pep->pending_list);
list_del_init(&preq->list);
}
writeb(EP0_FIFO_AUTO, &pdev->ep0_regs->fifo);
cdns2_select_ep(pdev, USB_DIR_OUT);
writel(DMA_EP_CFG_ENABLE, &pdev->adma_regs->ep_cfg);
writeb(EP0_FIFO_IO_TX | EP0_FIFO_AUTO, &pdev->ep0_regs->fifo);
cdns2_select_ep(pdev, USB_DIR_IN);
writel(DMA_EP_CFG_ENABLE, &pdev->adma_regs->ep_cfg);
writeb(pdev->gadget.ep0->maxpacket, &pdev->ep0_regs->maxpack);
writel(DMA_EP_IEN_EP_OUT0 | DMA_EP_IEN_EP_IN0,
&pdev->adma_regs->ep_ien);
}
void cdns2_init_ep0(struct cdns2_device *pdev,
struct cdns2_endpoint *pep)
{
u16 maxpacket = le16_to_cpu(cdns2_gadget_ep0_desc.wMaxPacketSize);
usb_ep_set_maxpacket_limit(&pep->endpoint, maxpacket);
pep->endpoint.ops = &cdns2_gadget_ep0_ops;
pep->endpoint.desc = &cdns2_gadget_ep0_desc;
pep->endpoint.caps.type_control = true;
pep->endpoint.caps.dir_in = true;
pep->endpoint.caps.dir_out = true;
pdev->gadget.ep0 = &pep->endpoint;
}

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff