USB/Thunderbolt/PHY driver updates for 5.6-rc1


Merge tag 'usb-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB/Thunderbolt/PHY driver updates from Greg KH:
 "Here are the big USB, Thunderbolt, and PHY driver updates for
  5.6-rc1.

  With the advent of USB4, "Thunderbolt" has really become USB4, so the
  renaming of the Kconfig option and starting to share subsystem code
  has begun, hence both subsystems coming in through the same tree here.

  PHY driver updates also touched USB drivers, so that is coming in
  through here as well.

  Major stuff included in here:
   - USB 4 initial support added (i.e. Thunderbolt)
   - musb driver updates
   - USB gadget driver updates
   - PHY driver updates
   - USB PHY driver updates
   - lots of USB serial stuff fixed up
   - USB typec updates
   - USB-IP fixes
   - lots of other smaller USB driver updates

  All of these have been in linux-next for a while now (the usb-serial
  tree is already tested in linux-next on its own before being merged
  in here), with no reported issues"

[ Removed an incorrect compile test enablement for PHY_EXYNOS5250_SATA
  that causes configuration warnings    - Linus ]

* tag 'usb-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (207 commits)
  Doc: ABI: add usb charger uevent
  usb: phy: show USB charger type for user
  usb: cdns3: fix spelling mistake and rework grammar in text
  usb: phy: phy-gpio-vbus-usb: Convert to GPIO descriptors
  USB: serial: cyberjack: fix spelling mistake "To" -> "Too"
  USB: serial: ir-usb: simplify endpoint check
  USB: serial: ir-usb: make set_termios synchronous
  USB: serial: ir-usb: fix IrLAP framing
  USB: serial: ir-usb: fix link-speed handling
  USB: serial: ir-usb: add missing endpoint sanity check
  usb: typec: fusb302: fix "op-sink-microwatt" default that was in mW
  usb: typec: wcove: fix "op-sink-microwatt" default that was in mW
  usb: dwc3: pci: add ID for the Intel Comet Lake -V variant
  usb: typec: tcpci: mask event interrupts when remove driver
  usb: host: xhci-tegra: set MODULE_FIRMWARE for tegra186
  usb: chipidea: add inline for ci_hdrc_host_driver_init if host is not defined
  usb: chipidea: handle single role for usb role class
  usb: musb: fix spelling mistake: "periperal" -> "peripheral"
  phy: ti: j721e-wiz: Fix build error without CONFIG_OF_ADDRESS
  USB: usbfs: Always unlink URBs in reverse order
  ...
Commit aac9662671 by Linus Torvalds, 2020-01-29 10:09:44 -08:00; 199 changed files with 9948 additions and 2491 deletions.


@@ -16,6 +16,10 @@ Description:
write UDC's name found in /sys/class/udc/*
to bind a gadget, empty string "" to unbind.
max_speed - maximum speed the driver supports. Valid
names are super-speed-plus, super-speed,
high-speed, full-speed, and low-speed.
bDeviceClass - USB device class code
bDeviceSubClass - USB device subclass code
bDeviceProtocol - USB device protocol code


@@ -0,0 +1,46 @@
What: Raise a uevent when a USB charger is inserted or removed
Date: 2020-01-14
KernelVersion: 5.6
Contact: linux-usb@vger.kernel.org
Description: There are two USB charger states:
USB_CHARGER_ABSENT
USB_CHARGER_PRESENT
There are five USB charger types:
USB_CHARGER_UNKNOWN_TYPE: Charger type is unknown
USB_CHARGER_SDP_TYPE: Standard Downstream Port
USB_CHARGER_CDP_TYPE: Charging Downstream Port
USB_CHARGER_DCP_TYPE: Dedicated Charging Port
USB_CHARGER_ACA_TYPE: Accessory Charging Adapter
https://www.usb.org/document-library/battery-charging-v12-spec-and-adopters-agreement
Here are two examples taken using "udevadm monitor -p". When the
USB charger is online:
UDEV change /devices/soc0/usbphynop1 (platform)
ACTION=change
DEVPATH=/devices/soc0/usbphynop1
DRIVER=usb_phy_generic
MODALIAS=of:Nusbphynop1T(null)Cusb-nop-xceiv
OF_COMPATIBLE_0=usb-nop-xceiv
OF_COMPATIBLE_N=1
OF_FULLNAME=/usbphynop1
OF_NAME=usbphynop1
SEQNUM=2493
SUBSYSTEM=platform
USB_CHARGER_STATE=USB_CHARGER_PRESENT
USB_CHARGER_TYPE=USB_CHARGER_SDP_TYPE
USEC_INITIALIZED=227422826
When the USB charger is offline:
KERNEL change /devices/soc0/usbphynop1 (platform)
ACTION=change
DEVPATH=/devices/soc0/usbphynop1
DRIVER=usb_phy_generic
MODALIAS=of:Nusbphynop1T(null)Cusb-nop-xceiv
OF_COMPATIBLE_0=usb-nop-xceiv
OF_COMPATIBLE_N=1
OF_FULLNAME=/usbphynop1
OF_NAME=usbphynop1
SEQNUM=2494
SUBSYSTEM=platform
USB_CHARGER_STATE=USB_CHARGER_ABSENT
USB_CHARGER_TYPE=USB_CHARGER_UNKNOWN_TYPE


@@ -1,6 +1,28 @@
=============
Thunderbolt
=============
.. SPDX-License-Identifier: GPL-2.0
======================
USB4 and Thunderbolt
======================
USB4 is the public specification based on Thunderbolt 3 protocol with
some differences at the register level among other things. A connection
manager is an entity running on the host router (host controller)
responsible for enumerating routers and establishing tunnels. A
connection manager can be implemented either in firmware or software.
Typically PCs come with a firmware connection manager for Thunderbolt 3
and early USB4 capable systems. Apple systems on the other hand use a
software connection manager, and later USB4 compliant devices follow
suit.
The Linux Thunderbolt driver supports both and can detect at runtime which
connection manager implementation is to be used. To be on the safe side the
software connection manager in Linux also advertises security level
``user`` which means PCIe tunneling is disabled by default. The
documentation below applies to both implementations with the exception that
the software connection manager only supports ``user`` security level and
is expected to be accompanied by IOMMU based DMA protection.
Security levels and how to use them
-----------------------------------
The interface presented here is not meant for end users. Instead there
should be a userspace tool that handles all the low-level details, keeps
a database of the authorized devices and prompts users for new connections.
@@ -18,8 +40,6 @@ This will authorize all devices automatically when they appear. However,
keep in mind that this bypasses the security levels and makes the system
vulnerable to DMA attacks.
Security levels and how to use them
-----------------------------------
Starting with Intel Falcon Ridge Thunderbolt controller there are 4
security levels available. Intel Titan Ridge added one more security level
(usbonly). The reason for these is the fact that the connected devices can


@@ -1,8 +1,8 @@
USB Connector
=============
USB connector node represents physical USB connector. It should be
a child of USB interface controller.
A USB connector node represents a physical USB connector. It should be
a child of a USB interface controller.
Required properties:
- compatible: describes type of the connector, must be one of:


@@ -0,0 +1,135 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/phy/allwinner,sun9i-a80-usb-phy.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Allwinner A80 USB PHY Device Tree Bindings
maintainers:
- Chen-Yu Tsai <wens@csie.org>
- Maxime Ripard <mripard@kernel.org>
properties:
"#phy-cells":
const: 0
compatible:
const: allwinner,sun9i-a80-usb-phy
reg:
maxItems: 1
clocks:
anyOf:
- description: Main PHY Clock
- items:
- description: Main PHY clock
- description: HSIC 12MHz clock
- description: HSIC 480MHz clock
clock-names:
oneOf:
- const: phy
- items:
- const: phy
- const: hsic_12M
- const: hsic_480M
resets:
anyOf:
- description: Normal USB PHY reset
- items:
- description: Normal USB PHY reset
- description: HSIC Reset
reset-names:
oneOf:
- const: phy
- items:
- const: phy
- const: hsic
phy_type:
const: hsic
description:
When absent, the PHY type will be assumed to be normal USB.
phy-supply:
description:
Regulator that powers VBUS
required:
- "#phy-cells"
- compatible
- reg
- clocks
- clock-names
- resets
- reset-names
additionalProperties: false
if:
properties:
phy_type:
const: hsic
required:
- phy_type
then:
properties:
clocks:
maxItems: 3
clock-names:
maxItems: 3
resets:
maxItems: 2
reset-names:
maxItems: 2
examples:
- |
#include <dt-bindings/clock/sun9i-a80-usb.h>
#include <dt-bindings/reset/sun9i-a80-usb.h>
usbphy1: phy@a00800 {
compatible = "allwinner,sun9i-a80-usb-phy";
reg = <0x00a00800 0x4>;
clocks = <&usb_clocks CLK_USB0_PHY>;
clock-names = "phy";
resets = <&usb_clocks RST_USB0_PHY>;
reset-names = "phy";
phy-supply = <&reg_usb1_vbus>;
#phy-cells = <0>;
};
- |
#include <dt-bindings/clock/sun9i-a80-usb.h>
#include <dt-bindings/reset/sun9i-a80-usb.h>
usbphy3: phy@a02800 {
compatible = "allwinner,sun9i-a80-usb-phy";
reg = <0x00a02800 0x4>;
clocks = <&usb_clocks CLK_USB2_PHY>,
<&usb_clocks CLK_USB_HSIC>,
<&usb_clocks CLK_USB2_HSIC>;
clock-names = "phy",
"hsic_12M",
"hsic_480M";
resets = <&usb_clocks RST_USB2_PHY>,
<&usb_clocks RST_USB2_HSIC>;
reset-names = "phy",
"hsic";
phy_type = "hsic";
phy-supply = <&reg_usb3_vbus>;
#phy-cells = <0>;
};


@@ -1,30 +1,49 @@
Broadcom STB USB PHY
Required properties:
- compatible: brcm,brcmstb-usb-phy
- reg: two offset and length pairs.
The first pair specifies a mandatory set of memory mapped
registers used for general control of the PHY.
The second pair specifies optional registers used by some of
the SoCs that support USB 3.x
- #phy-cells: Shall be 1 as it expects one argument for setting
the type of the PHY. Possible values are:
- PHY_TYPE_USB2 for USB1.1/2.0 PHY
- PHY_TYPE_USB3 for USB3.x PHY
- compatible: should be one of
"brcm,brcmstb-usb-phy"
"brcm,bcm7216-usb-phy"
"brcm,bcm7211-usb-phy"
- reg and reg-names properties requirements are specific to the
compatible string.
"brcm,brcmstb-usb-phy":
- reg: 1 or 2 offset and length pairs. One for the base CTRL registers
and an optional pair for systems with USB 3.x support
- reg-names: not specified
"brcm,bcm7216-usb-phy":
- reg: 3 offset and length pairs for CTRL, XHCI_EC and XHCI_GBL
registers
- reg-names: "ctrl", "xhci_ec", "xhci_gbl"
"brcm,bcm7211-usb-phy":
- reg: 5 offset and length pairs for CTRL, XHCI_EC, XHCI_GBL,
USB_PHY and USB_MDIO registers and an optional pair
for the BDC registers
- reg-names: "ctrl", "xhci_ec", "xhci_gbl", "usb_phy", "usb_mdio", "bdc_ec"
- #phy-cells: Shall be 1 as it expects one argument for setting
the type of the PHY. Possible values are:
- PHY_TYPE_USB2 for USB1.1/2.0 PHY
- PHY_TYPE_USB3 for USB3.x PHY
Optional Properties:
- clocks : clock phandles.
- clock-names: String, clock name.
- interrupts: wakeup interrupt
- interrupt-names: "wakeup"
- brcm,ipp: Boolean, Invert Port Power.
Possible values are: 0 (Don't invert), 1 (Invert)
- brcm,ioc: Boolean, Invert Over Current detection.
Possible values are: 0 (Don't invert), 1 (Invert)
NOTE: one or both of the following two properties must be set
- brcm,has-xhci: Boolean indicating the phy has an XHCI phy.
- brcm,has-eohci: Boolean indicating the phy has an EHCI/OHCI phy.
- dr_mode: String, PHY Device mode.
Possible values are: "host", "peripheral", "drd" or "typec-pd"
If this property is not defined, the phy will default to "host" mode.
- brcm,syscon-piarbctl: phandle to syscon for handling config registers
NOTE: one or both of the following two properties must be set
- brcm,has-xhci: Boolean indicating the phy has an XHCI phy.
- brcm,has-eohci: Boolean indicating the phy has an EHCI/OHCI phy.
Example:
@@ -41,3 +60,27 @@ usbphy_0: usb-phy@f0470200 {
clocks = <&usb20>, <&usb30>;
clock-names = "sw_usb", "sw_usb3";
};
usb-phy@29f0200 {
reg = <0x29f0200 0x200>,
<0x29c0880 0x30>,
<0x29cc100 0x534>,
<0x2808000 0x24>,
<0x2980080 0x8>;
reg-names = "ctrl",
"xhci_ec",
"xhci_gbl",
"usb_phy",
"usb_mdio";
brcm,ioc = <0x0>;
brcm,ipp = <0x0>;
compatible = "brcm,bcm7211-usb-phy";
interrupts = <0x30>;
interrupt-parent = <&vpu_intr1_nosec_intc>;
interrupt-names = "wake";
#phy-cells = <0x1>;
brcm,has-xhci;
syscon-piarbctl = <&syscon_piarbctl>;
clocks = <&scmi_clk 256>;
clock-names = "sw_usb";
};


@@ -2,6 +2,7 @@
Required properties:
- compatible: should be one or more of
"brcm,bcm7216-sata-phy"
"brcm,bcm7425-sata-phy"
"brcm,bcm7445-sata-phy"
"brcm,iproc-ns2-sata-phy"


@@ -0,0 +1,56 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/phy/intel,lgm-emmc-phy.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Intel Lightning Mountain (LGM) eMMC PHY Device Tree Bindings
maintainers:
- Ramuthevar Vadivel Murugan <vadivel.muruganx.ramuthevar@linux.intel.com>
description: |+
Bindings for eMMC PHY on Intel's Lightning Mountain SoC. A syscon
node is used to reference the base address of eMMC phy registers.
The eMMC PHY node should be the child of a syscon node with the
required property:
- compatible: Should be one of the following:
"intel,lgm-syscon", "syscon"
- reg:
maxItems: 1
properties:
compatible:
const: intel,lgm-emmc-phy
"#phy-cells":
const: 0
reg:
maxItems: 1
clocks:
maxItems: 1
required:
- "#phy-cells"
- compatible
- reg
- clocks
examples:
- |
sysconf: chiptop@e0200000 {
compatible = "intel,lgm-syscon", "syscon";
reg = <0xe0200000 0x100>;
emmc-phy: emmc-phy@a8 {
compatible = "intel,lgm-emmc-phy";
reg = <0x00a8 0x10>;
clocks = <&emmc>;
#phy-cells = <0>;
};
};
...


@@ -2,21 +2,24 @@ Cadence Sierra PHY
-----------------------
Required properties:
- compatible: cdns,sierra-phy-t0
- clocks: Must contain an entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must be "phy_clk"
- compatible: Must be "cdns,sierra-phy-t0" for Sierra in Cadence platform
Must be "ti,sierra-phy-t0" for Sierra in TI's J721E SoC.
- resets: Must contain an entry for each in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include "sierra_reset" and "sierra_apb".
"sierra_reset" must control the reset line to the PHY.
"sierra_apb" must control the reset line to the APB PHY
interface.
interface ("sierra_apb" is optional).
- reg: register range for the PHY.
- #address-cells: Must be 1
- #size-cells: Must be 0
Optional properties:
- clocks: Must contain an entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must contain "cmn_refclk_dig_div" and
"cmn_refclk1_dig_div" for configuring the frequency of
the clock to the lanes. "phy_clk" is deprecated.
- cdns,autoconf: A boolean property whose presence indicates that the
PHY registers will be configured by hardware. If not
present, all sub-node optional properties must be


@@ -13,9 +13,6 @@ properties:
"#phy-cells":
const: 0
"#clock-cells":
const: 0
compatible:
enum:
- rockchip,px30-dsi-dphy
@@ -49,7 +46,6 @@ properties:
required:
- "#phy-cells"
- "#clock-cells"
- compatible
- reg
- clocks
@@ -66,7 +62,6 @@ examples:
reg = <0x0 0xff2e0000 0x0 0x10000>;
clocks = <&pmucru 13>, <&cru 12>;
clock-names = "ref", "pclk";
#clock-cells = <0>;
resets = <&cru 12>;
reset-names = "apb";
#phy-cells = <0>;


@@ -1,37 +0,0 @@
Allwinner sun9i USB PHY
-----------------------
Required properties:
- compatible : should be one of
* allwinner,sun9i-a80-usb-phy
- reg : a list of offset + length pairs
- #phy-cells : from the generic phy bindings, must be 0
- phy_type : "hsic" for HSIC usage;
other values or absence of this property indicates normal USB
- clocks : phandle + clock specifier for the phy clocks
- clock-names : depending on the "phy_type" property,
* "phy" for normal USB
* "hsic_480M", "hsic_12M" for HSIC
- resets : a list of phandle + reset specifier pairs
- reset-names : depending on the "phy_type" property,
* "phy" for normal USB
* "hsic" for HSIC
Optional Properties:
- phy-supply : from the generic phy bindings, a phandle to a regulator that
provides power to VBUS.
It is recommended to list all clocks and resets available.
The driver will only use those matching the phy_type.
Example:
usbphy1: phy@a01800 {
compatible = "allwinner,sun9i-a80-usb-phy";
reg = <0x00a01800 0x4>;
clocks = <&usb_phy_clk 2>, <&usb_phy_clk 10>,
<&usb_phy_clk 3>;
clock-names = "hsic_480M", "hsic_12M", "phy";
resets = <&usb_phy_clk 18>, <&usb_phy_clk 19>;
reset-names = "hsic", "phy";
#phy-cells = <0>;
};


@@ -0,0 +1,221 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/
%YAML 1.2
---
$id: "http://devicetree.org/schemas/phy/ti,phy-j721e-wiz.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: TI J721E WIZ (SERDES Wrapper)
maintainers:
- Kishon Vijay Abraham I <kishon@ti.com>
properties:
compatible:
enum:
- ti,j721e-wiz-16g
- ti,j721e-wiz-10g
power-domains:
maxItems: 1
clocks:
maxItems: 3
description: clock-specifier to represent input to the WIZ
clock-names:
items:
- const: fck
- const: core_ref_clk
- const: ext_ref_clk
num-lanes:
minimum: 1
maximum: 4
"#address-cells":
const: 1
"#size-cells":
const: 1
"#reset-cells":
const: 1
ranges: true
assigned-clocks:
maxItems: 2
assigned-clock-parents:
maxItems: 2
typec-dir-gpios:
maxItems: 1
description:
GPIO to signal Type-C cable orientation for lane swap.
If GPIO is active, lane 0 and lane 1 of SERDES will be swapped to
achieve the functionality of an external type-C plug flip mux.
typec-dir-debounce-ms:
minimum: 100
maximum: 1000
default: 100
description:
Number of milliseconds to wait before sampling typec-dir-gpio.
If not specified, the default debounce of 100ms will be used.
Type-C spec states minimum CC pin debounce of 100 ms and maximum
of 200 ms. However, some solutions might need more than 200 ms.
patternProperties:
"^pll[0|1]-refclk$":
type: object
description: |
WIZ node should have subnodes for each of the PLLs present in
the SERDES.
properties:
clocks:
maxItems: 2
description: Phandle to clock nodes representing the two inputs to PLL.
"#clock-cells":
const: 0
assigned-clocks:
maxItems: 1
assigned-clock-parents:
maxItems: 1
required:
- clocks
- "#clock-cells"
- assigned-clocks
- assigned-clock-parents
"^cmn-refclk1?-dig-div$":
type: object
description:
WIZ node should have subnodes for each of the PMA common refclock
provided by the SERDES.
properties:
clocks:
maxItems: 1
description: Phandle to the clock node representing the input to the
divider clock.
"#clock-cells":
const: 0
required:
- clocks
- "#clock-cells"
"^refclk-dig$":
type: object
description: |
WIZ node should have subnode for refclk_dig to select the reference
clock source for the reference clock used in the PHY and PMA digital
logic.
properties:
clocks:
maxItems: 4
description: Phandle to four clock nodes representing the inputs to
refclk_dig
"#clock-cells":
const: 0
assigned-clocks:
maxItems: 1
assigned-clock-parents:
maxItems: 1
required:
- clocks
- "#clock-cells"
- assigned-clocks
- assigned-clock-parents
"^serdes@[0-9a-f]+$":
type: object
description: |
WIZ node should have '1' subnode for the SERDES. It could be either
Sierra SERDES or Torrent SERDES. Sierra SERDES should follow the
bindings specified in
Documentation/devicetree/bindings/phy/phy-cadence-sierra.txt
Torrent SERDES should follow the bindings specified in
Documentation/devicetree/bindings/phy/phy-cadence-dp.txt
required:
- compatible
- power-domains
- clocks
- clock-names
- num-lanes
- "#address-cells"
- "#size-cells"
- "#reset-cells"
- ranges
examples:
- |
#include <dt-bindings/soc/ti,sci_pm_domain.h>
wiz@5000000 {
compatible = "ti,j721e-wiz-16g";
#address-cells = <1>;
#size-cells = <1>;
power-domains = <&k3_pds 292 TI_SCI_PD_EXCLUSIVE>;
clocks = <&k3_clks 292 5>, <&k3_clks 292 11>, <&dummy_cmn_refclk>;
clock-names = "fck", "core_ref_clk", "ext_ref_clk";
assigned-clocks = <&k3_clks 292 11>, <&k3_clks 292 0>;
assigned-clock-parents = <&k3_clks 292 15>, <&k3_clks 292 4>;
num-lanes = <2>;
#reset-cells = <1>;
ranges = <0x5000000 0x5000000 0x10000>;
pll0-refclk {
clocks = <&k3_clks 293 13>, <&dummy_cmn_refclk>;
#clock-cells = <0>;
assigned-clocks = <&wiz1_pll0_refclk>;
assigned-clock-parents = <&k3_clks 293 13>;
};
pll1-refclk {
clocks = <&k3_clks 293 0>, <&dummy_cmn_refclk1>;
#clock-cells = <0>;
assigned-clocks = <&wiz1_pll1_refclk>;
assigned-clock-parents = <&k3_clks 293 0>;
};
cmn-refclk-dig-div {
clocks = <&wiz1_refclk_dig>;
#clock-cells = <0>;
};
cmn-refclk1-dig-div {
clocks = <&wiz1_pll1_refclk>;
#clock-cells = <0>;
};
refclk-dig {
clocks = <&k3_clks 292 11>, <&k3_clks 292 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
#clock-cells = <0>;
assigned-clocks = <&wiz0_refclk_dig>;
assigned-clock-parents = <&k3_clks 292 11>;
};
serdes@5000000 {
compatible = "cdns,ti,sierra-phy-t0";
reg-names = "serdes";
reg = <0x5000000 0x10000>;
#address-cells = <1>;
#size-cells = <0>;
resets = <&serdes_wiz0 0>;
reset-names = "sierra_reset";
clocks = <&wiz0_cmn_refclk_dig_div>, <&wiz0_cmn_refclk1_dig_div>;
clock-names = "cmn_refclk_dig_div", "cmn_refclk1_dig_div";
};
};


@@ -15,6 +15,10 @@ Required properties:
"qcom,ci-hdrc"
"chipidea,usb2"
"xlnx,zynq-usb-2.20a"
"nvidia,tegra20-udc"
"nvidia,tegra30-udc"
"nvidia,tegra114-udc"
"nvidia,tegra124-udc"
- reg: base address and length of the registers
- interrupts: interrupt for the USB controller


@@ -0,0 +1,57 @@
MediaTek musb DRD/OTG controller
-------------------------------------------
Required properties:
- compatible : should be one of:
"mediatek,mt2701-musb"
...
followed by "mediatek,mtk-musb"
- reg : specifies physical base address and size of
the registers
- interrupts : interrupt used by musb controller
- interrupt-names : must be "mc"
- phys : PHY specifier for the OTG phy
- dr_mode : should be one of "host", "peripheral" or "otg",
refer to usb/generic.txt
- clocks : a list of phandle + clock-specifier pairs, one for
each entry in clock-names
- clock-names : must contain "main", "mcu", "univpll"
for clocks of controller
Optional properties:
- power-domains : a phandle to USB power domain node to control USB's
MTCMOS
Required child nodes:
usb connector node as defined in bindings/connector/usb-connector.txt
Optional properties:
- id-gpios : input GPIO for USB ID pin.
- vbus-gpios : input GPIO for USB VBUS pin.
- vbus-supply : reference to the VBUS regulator, needed when
dual-role mode is supported
- usb-role-switch : use USB Role Switch to support dual-role switch, see
usb/generic.txt.
Example:
usb2: usb@11200000 {
compatible = "mediatek,mt2701-musb",
"mediatek,mtk-musb";
reg = <0 0x11200000 0 0x1000>;
interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_LOW>;
interrupt-names = "mc";
phys = <&u2port2 PHY_TYPE_USB2>;
dr_mode = "otg";
clocks = <&pericfg CLK_PERI_USB0>,
<&pericfg CLK_PERI_USB0_MCU>,
<&pericfg CLK_PERI_USB_SLV>;
clock-names = "main","mcu","univpll";
power-domains = <&scpsys MT2701_POWER_DOMAIN_IFR_MSC>;
usb-role-switch;
connector{
compatible = "gpio-usb-b-connector", "usb-b-connector";
type = "micro";
id-gpios = <&pio 44 GPIO_ACTIVE_HIGH>;
vbus-supply = <&usb_vbus>;
};
};


@@ -16474,6 +16474,7 @@ M: Andreas Noever <andreas.noever@gmail.com>
M: Michael Jamet <michael.jamet@intel.com>
M: Mika Westerberg <mika.westerberg@linux.intel.com>
M: Yehezkel Bernat <YehezkelShB@gmail.com>
L: linux-usb@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt.git
S: Maintained
F: Documentation/admin-guide/thunderbolt.rst


@@ -143,7 +143,7 @@
compatible = "smsc,usb3503a";
reg = <0x8>;
connect-gpios = <&gpioext2 1 GPIO_ACTIVE_HIGH>;
intn-gpios = <&gpioext2 0 GPIO_ACTIVE_LOW>;
intn-gpios = <&gpioext2 0 GPIO_ACTIVE_HIGH>;
initial-mode = <1>;
};
};


@@ -823,6 +823,17 @@ static int davinci_phy_fixup(struct phy_device *phydev)
#define HAS_NAND IS_ENABLED(CONFIG_MTD_NAND_DAVINCI)
#define GPIO_nVBUS_DRV 160
static struct gpiod_lookup_table dm644evm_usb_gpio_table = {
.dev_id = "musb-davinci",
.table = {
GPIO_LOOKUP("davinci_gpio", GPIO_nVBUS_DRV, NULL,
GPIO_ACTIVE_HIGH),
{ }
},
};
static __init void davinci_evm_init(void)
{
int ret;
@@ -875,6 +886,7 @@ static __init void davinci_evm_init(void)
dm644x_init_asp();
/* irlml6401 switches over 1A, in under 8 msec */
gpiod_add_lookup_table(&dm644evm_usb_gpio_table);
davinci_setup_usb(1000, 8);
if (IS_BUILTIN(CONFIG_PHYLIB)) {


@@ -11,9 +11,9 @@
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/gpio/machine.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/usb/gpio_vbus.h>
#include <asm/mach-types.h>
#include <linux/sizes.h>
@@ -144,17 +144,18 @@ static inline void __init colibri_pxa320_init_eth(void) {}
#endif /* CONFIG_AX88796 */
#if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE)
static struct gpio_vbus_mach_info colibri_pxa320_gpio_vbus_info = {
.gpio_vbus = mfp_to_gpio(MFP_PIN_GPIO96),
.gpio_pullup = -1,
static struct gpiod_lookup_table gpio_vbus_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", MFP_PIN_GPIO96,
"vbus", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device colibri_pxa320_gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &colibri_pxa320_gpio_vbus_info,
},
};
static void colibri_pxa320_udc_command(int cmd)
@@ -173,6 +174,7 @@ static struct pxa2xx_udc_mach_info colibri_pxa320_udc_info __initdata = {
static void __init colibri_pxa320_init_udc(void)
{
pxa_set_udc_info(&colibri_pxa320_udc_info);
gpiod_add_lookup_table(&gpio_vbus_gpiod_table);
platform_device_register(&colibri_pxa320_gpio_vbus);
}
#else


@@ -14,6 +14,7 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/clk-provider.h>
#include <linux/gpio/machine.h>
#include <linux/gpio.h>
#include <linux/delay.h>
#include <linux/platform_device.h>
@@ -22,7 +23,6 @@
#include <linux/mfd/t7l66xb.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/partitions.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/memblock.h>
#include <video/w100fb.h>
@@ -51,18 +51,20 @@ void __init eseries_fixup(struct tag *tags, char **cmdline)
memblock_add(0xa0000000, SZ_64M);
}
struct gpio_vbus_mach_info e7xx_udc_info = {
.gpio_vbus = GPIO_E7XX_USB_DISC,
.gpio_pullup = GPIO_E7XX_USB_PULLUP,
.gpio_pullup_inverted = 1
static struct gpiod_lookup_table e7xx_gpio_vbus_gpiod_table __maybe_unused = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", GPIO_E7XX_USB_DISC,
"vbus", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("gpio-pxa", GPIO_E7XX_USB_PULLUP,
"pullup", GPIO_ACTIVE_LOW),
{ },
},
};
static struct platform_device e7xx_gpio_vbus __maybe_unused = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &e7xx_udc_info,
},
};
struct pxaficp_platform_data e7xx_ficp_platform_data = {
@@ -165,6 +167,7 @@ static void __init e330_init(void)
pxa_set_stuart_info(NULL);
eseries_register_clks();
eseries_get_tmio_gpios();
gpiod_add_lookup_table(&e7xx_gpio_vbus_gpiod_table);
platform_add_devices(ARRAY_AND_SIZE(e330_devices));
}
@@ -216,6 +219,7 @@ static void __init e350_init(void)
pxa_set_stuart_info(NULL);
eseries_register_clks();
eseries_get_tmio_gpios();
gpiod_add_lookup_table(&e7xx_gpio_vbus_gpiod_table);
platform_add_devices(ARRAY_AND_SIZE(e350_devices));
}
@@ -340,6 +344,7 @@ static void __init e400_init(void)
eseries_register_clks();
eseries_get_tmio_gpios();
pxa_set_fb_info(NULL, &e400_pxafb_mach_info);
gpiod_add_lookup_table(&e7xx_gpio_vbus_gpiod_table);
platform_add_devices(ARRAY_AND_SIZE(e400_devices));
}
@@ -534,6 +539,7 @@ static void __init e740_init(void)
clk_add_alias("CLK_CK48M", e740_t7l66xb_device.name,
"UDCCLK", &pxa25x_device_udc.dev),
eseries_get_tmio_gpios();
gpiod_add_lookup_table(&e7xx_gpio_vbus_gpiod_table);
platform_add_devices(ARRAY_AND_SIZE(e740_devices));
pxa_set_ac97_info(NULL);
pxa_set_ficp_info(&e7xx_ficp_platform_data);
@@ -733,6 +739,7 @@ static void __init e750_init(void)
clk_add_alias("CLK_CK3P6MI", e750_tc6393xb_device.name,
"GPIO11_CLK", NULL),
eseries_get_tmio_gpios();
gpiod_add_lookup_table(&e7xx_gpio_vbus_gpiod_table);
platform_add_devices(ARRAY_AND_SIZE(e750_devices));
pxa_set_ac97_info(NULL);
pxa_set_ficp_info(&e7xx_ficp_platform_data);
@@ -888,18 +895,20 @@ static struct platform_device e800_fb_device = {
/* --------------------------- UDC definitions --------------------------- */
static struct gpio_vbus_mach_info e800_udc_info = {
.gpio_vbus = GPIO_E800_USB_DISC,
.gpio_pullup = GPIO_E800_USB_PULLUP,
.gpio_pullup_inverted = 1
static struct gpiod_lookup_table e800_gpio_vbus_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", GPIO_E800_USB_DISC,
"vbus", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("gpio-pxa", GPIO_E800_USB_PULLUP,
"pullup", GPIO_ACTIVE_LOW),
{ },
},
};
static struct platform_device e800_gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &e800_udc_info,
},
};
@@ -949,6 +958,7 @@ static void __init e800_init(void)
clk_add_alias("CLK_CK3P6MI", e800_tc6393xb_device.name,
"GPIO11_CLK", NULL),
eseries_get_tmio_gpios();
gpiod_add_lookup_table(&e800_gpio_vbus_gpiod_table);
platform_add_devices(ARRAY_AND_SIZE(e800_devices));
pxa_set_ac97_info(NULL);
}


@@ -20,10 +20,10 @@
#include <linux/delay.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/gpio/machine.h>
#include <linux/gpio.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/usb/gpio_vbus.h>
#include <asm/setup.h>
#include <asm/memory.h>
@@ -101,21 +101,25 @@ static void __init gumstix_mmc_init(void)
#endif
#ifdef CONFIG_USB_PXA25X
static struct gpio_vbus_mach_info gumstix_udc_info = {
.gpio_vbus = GPIO_GUMSTIX_USB_GPIOn,
.gpio_pullup = GPIO_GUMSTIX_USB_GPIOx,
static struct gpiod_lookup_table gumstix_gpio_vbus_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", GPIO_GUMSTIX_USB_GPIOn,
"vbus", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("gpio-pxa", GPIO_GUMSTIX_USB_GPIOx,
"pullup", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device gumstix_gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &gumstix_udc_info,
},
};
static void __init gumstix_udc_init(void)
{
gpiod_add_lookup_table(&gumstix_gpio_vbus_gpiod_table);
platform_device_register(&gumstix_gpio_vbus);
}
#else


@@ -34,7 +34,6 @@
#include <linux/spi/ads7846.h>
#include <linux/spi/spi.h>
#include <linux/spi/pxa2xx_spi.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/platform_data/i2c-pxa.h>
#include <mach/hardware.h>
@@ -578,18 +577,24 @@ static struct pwm_lookup hx4700_pwm_lookup[] = {
* USB "Transceiver"
*/
static struct gpio_vbus_mach_info gpio_vbus_info = {
.gpio_pullup = GPIO76_HX4700_USBC_PUEN,
.gpio_vbus = GPIOD14_nUSBC_DETECT,
.gpio_vbus_inverted = 1,
static struct gpiod_lookup_table gpio_vbus_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
/* This GPIO is on ASIC3 */
GPIO_LOOKUP("asic3",
/* Convert to a local offset on the ASIC3 */
GPIOD14_nUSBC_DETECT - HX4700_ASIC3_GPIO_BASE,
"vbus", GPIO_ACTIVE_LOW),
/* This one is on the primary SOC GPIO */
GPIO_LOOKUP("gpio-pxa", GPIO76_HX4700_USBC_PUEN,
"pullup", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &gpio_vbus_info,
},
};
static struct pxa2xx_udc_mach_info hx4700_udc_info;
@ -883,6 +888,7 @@ static void __init hx4700_init(void)
pxa_set_stuart_info(NULL);
gpiod_add_lookup_table(&bq24022_gpiod_table);
gpiod_add_lookup_table(&gpio_vbus_gpiod_table);
platform_add_devices(devices, ARRAY_SIZE(devices));
pwm_add_table(hx4700_pwm_lookup, ARRAY_SIZE(hx4700_pwm_lookup));
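The lookup entry above addresses the ASIC3 pin by a chip-local offset, computed by subtracting the chip's GPIO base from the old global number. A minimal userspace sketch of that conversion, with made-up `DEMO_*` base values standing in for the real hx4700 board constants:

```c
#include <assert.h>

/* Hypothetical base values, for illustration only; the real ones come
 * from the hx4700 board headers. */
#define DEMO_ASIC3_GPIO_BASE 128u
#define DEMO_nUSBC_DETECT    (DEMO_ASIC3_GPIO_BASE + 14u) /* global number */

/* Convert a global GPIO number into an offset local to one gpiochip,
 * as GPIOD14_nUSBC_DETECT - HX4700_ASIC3_GPIO_BASE does above. */
unsigned int to_local_hwnum(unsigned int global, unsigned int chip_base)
{
	return global - chip_base;
}
```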

View File

@ -27,7 +27,6 @@
#include <linux/regulator/fixed.h>
#include <linux/regulator/gpio-regulator.h>
#include <linux/regulator/machine.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/platform_data/i2c-pxa.h>
#include <mach/hardware.h>
@ -506,9 +505,20 @@ static struct resource gpio_vbus_resource = {
.end = IRQ_MAGICIAN_VBUS,
};
static struct gpio_vbus_mach_info gpio_vbus_info = {
.gpio_pullup = GPIO27_MAGICIAN_USBC_PUEN,
.gpio_vbus = EGPIO_MAGICIAN_CABLE_VBUS,
static struct gpiod_lookup_table gpio_vbus_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
/*
* EGPIO on register 4, index 1: the second EGPIO chip
* starts at register 4, so this will be at index 1 on
* that chip.
*/
GPIO_LOOKUP("htc-egpio-1", 1,
"vbus", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("gpio-pxa", GPIO27_MAGICIAN_USBC_PUEN,
"pullup", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device gpio_vbus = {
@ -516,9 +526,6 @@ static struct platform_device gpio_vbus = {
.id = -1,
.num_resources = 1,
.resource = &gpio_vbus_resource,
.dev = {
.platform_data = &gpio_vbus_info,
},
};
/*
@ -1032,6 +1039,7 @@ static void __init magician_init(void)
ARRAY_SIZE(pwm_backlight_supply), 5000000);
gpiod_add_lookup_table(&bq24022_gpiod_table);
gpiod_add_lookup_table(&gpio_vbus_gpiod_table);
platform_add_devices(ARRAY_AND_SIZE(devices));
}

View File

@ -24,7 +24,6 @@
#include <linux/power_supply.h>
#include <linux/wm97xx.h>
#include <linux/mtd/physmap.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/reboot.h>
#include <linux/regulator/fixed.h>
#include <linux/regulator/max1586.h>
@ -368,10 +367,13 @@ static struct pxa2xx_udc_mach_info mioa701_udc_info = {
.gpio_pullup = GPIO22_USB_ENABLE,
};
struct gpio_vbus_mach_info gpio_vbus_data = {
.gpio_vbus = GPIO13_nUSB_DETECT,
.gpio_vbus_inverted = 1,
.gpio_pullup = -1,
static struct gpiod_lookup_table gpio_vbus_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", GPIO13_nUSB_DETECT,
"vbus", GPIO_ACTIVE_LOW),
{ },
},
};
/*
@ -677,7 +679,7 @@ MIO_SIMPLE_DEV(mioa701_led, "leds-gpio", &gpio_led_info)
MIO_SIMPLE_DEV(pxa2xx_pcm, "pxa2xx-pcm", NULL)
MIO_SIMPLE_DEV(mioa701_sound, "mioa701-wm9713", NULL)
MIO_SIMPLE_DEV(mioa701_board, "mioa701-board", NULL)
MIO_SIMPLE_DEV(gpio_vbus, "gpio-vbus", &gpio_vbus_data);
MIO_SIMPLE_DEV(gpio_vbus, "gpio-vbus", NULL);
static struct platform_device *devices[] __initdata = {
&mioa701_gpio_keys,
@ -750,6 +752,7 @@ static void __init mioa701_machine_init(void)
pxa_set_ac97_info(&mioa701_ac97_info);
pm_power_off = mioa701_poweroff;
pwm_add_table(mioa701_pwm_lookup, ARRAY_SIZE(mioa701_pwm_lookup));
gpiod_add_lookup_table(&gpio_vbus_gpiod_table);
platform_add_devices(devices, ARRAY_SIZE(devices));
gsm_init();
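The conversion above drops `.gpio_vbus_inverted` in favour of `GPIO_ACTIVE_LOW`, moving polarity handling into the gpiod core. A simplified model of the logical-value translation the core performs (an illustration, not the real gpiolib logic):

```c
/* Return the logical level for a raw pin level, honouring an
 * active-low flag -- a simplified model of the difference between
 * gpiod_get_value() and the raw pin value. */
int logical_value(int raw, int active_low)
{
	return active_low ? !raw : !!raw;
}
```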

View File

@ -13,10 +13,10 @@
#include <linux/pda_power.h>
#include <linux/pwm.h>
#include <linux/pwm_backlight.h>
#include <linux/gpio/machine.h>
#include <linux/gpio.h>
#include <linux/wm97xx.h>
#include <linux/power_supply.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/regulator/max1586.h>
#include <linux/platform_data/i2c-pxa.h>
@ -159,32 +159,32 @@ void __init palm27x_lcd_init(int power, struct pxafb_mode_info *mode)
******************************************************************************/
#if defined(CONFIG_USB_PXA27X) || \
defined(CONFIG_USB_PXA27X_MODULE)
static struct gpio_vbus_mach_info palm27x_udc_info = {
.gpio_vbus_inverted = 1,
/* The actual GPIO offsets get filled in in the palm27x_udc_init() call */
static struct gpiod_lookup_table palm27x_udc_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", 0,
"vbus", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("gpio-pxa", 0,
"pullup", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device palm27x_gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &palm27x_udc_info,
},
};
void __init palm27x_udc_init(int vbus, int pullup, int vbus_inverted)
{
palm27x_udc_info.gpio_vbus = vbus;
palm27x_udc_info.gpio_pullup = pullup;
palm27x_udc_info.gpio_vbus_inverted = vbus_inverted;
if (!gpio_request(pullup, "USB Pullup")) {
gpio_direction_output(pullup,
palm27x_udc_info.gpio_vbus_inverted);
gpio_free(pullup);
} else
return;
palm27x_udc_gpiod_table.table[0].chip_hwnum = vbus;
palm27x_udc_gpiod_table.table[1].chip_hwnum = pullup;
if (vbus_inverted)
palm27x_udc_gpiod_table.table[0].flags = GPIO_ACTIVE_LOW;
gpiod_add_lookup_table(&palm27x_udc_gpiod_table);
platform_device_register(&palm27x_gpio_vbus);
}
#endif
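palm27x_udc_init() above fills the hwnum and flags slots of a static lookup table at boot time, before registering it. A userspace sketch of that patch-then-register pattern — the struct, enum, and function names here are simplified stand-ins, not the real gpiod machinery:

```c
#include <assert.h>

enum { DEMO_ACTIVE_HIGH = 0, DEMO_ACTIVE_LOW = 1 };

/* Simplified stand-in for a gpiod lookup entry. */
struct demo_lookup {
	unsigned int chip_hwnum;
	const char *con_id;
	int flags;
};

/* Statically allocated table; offsets 0 are placeholders that get
 * filled in at init time, as in palm27x_udc_gpiod_table above. */
struct demo_lookup demo_table[] = {
	{ 0, "vbus",   DEMO_ACTIVE_HIGH },
	{ 0, "pullup", DEMO_ACTIVE_HIGH },
};

void demo_udc_init(unsigned int vbus, unsigned int pullup, int vbus_inverted)
{
	demo_table[0].chip_hwnum = vbus;
	demo_table[1].chip_hwnum = pullup;
	if (vbus_inverted)
		demo_table[0].flags = DEMO_ACTIVE_LOW;
	/* the real code would now call gpiod_add_lookup_table() and
	 * register the gpio-vbus platform device */
}
```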

View File

@ -23,7 +23,6 @@
#include <linux/gpio.h>
#include <linux/wm97xx.h>
#include <linux/power_supply.h>
#include <linux/usb/gpio_vbus.h>
#include <asm/mach-types.h>
#include <asm/mach/arch.h>

View File

@ -23,7 +23,6 @@
#include <linux/power_supply.h>
#include <linux/gpio_keys.h>
#include <linux/mtd/physmap.h>
#include <linux/usb/gpio_vbus.h>
#include <asm/mach-types.h>
#include <asm/mach/arch.h>
@ -319,22 +318,25 @@ static inline void palmtc_mkp_init(void) {}
* UDC
******************************************************************************/
#if defined(CONFIG_USB_PXA25X)||defined(CONFIG_USB_PXA25X_MODULE)
static struct gpio_vbus_mach_info palmtc_udc_info = {
.gpio_vbus = GPIO_NR_PALMTC_USB_DETECT_N,
.gpio_vbus_inverted = 1,
.gpio_pullup = GPIO_NR_PALMTC_USB_POWER,
static struct gpiod_lookup_table palmtc_udc_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTC_USB_DETECT_N,
"vbus", GPIO_ACTIVE_LOW),
GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTC_USB_POWER,
"pullup", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device palmtc_gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &palmtc_udc_info,
},
};
static void __init palmtc_udc_init(void)
{
gpiod_add_lookup_table(&palmtc_udc_gpiod_table);
platform_device_register(&palmtc_gpio_vbus);
};
#else

View File

@ -23,7 +23,6 @@
#include <linux/gpio.h>
#include <linux/wm97xx.h>
#include <linux/power_supply.h>
#include <linux/usb/gpio_vbus.h>
#include <asm/mach-types.h>
#include <asm/mach/arch.h>
@ -201,18 +200,20 @@ static struct pxaficp_platform_data palmte2_ficp_platform_data = {
/******************************************************************************
* UDC
******************************************************************************/
static struct gpio_vbus_mach_info palmte2_udc_info = {
.gpio_vbus = GPIO_NR_PALMTE2_USB_DETECT_N,
.gpio_vbus_inverted = 1,
.gpio_pullup = GPIO_NR_PALMTE2_USB_PULLUP,
static struct gpiod_lookup_table palmte2_udc_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTE2_USB_DETECT_N,
"vbus", GPIO_ACTIVE_LOW),
GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTE2_USB_PULLUP,
"pullup", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device palmte2_gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &palmte2_udc_info,
},
};
/******************************************************************************
@ -368,6 +369,7 @@ static void __init palmte2_init(void)
pxa_set_ficp_info(&palmte2_ficp_platform_data);
pwm_add_table(palmte2_pwm_lookup, ARRAY_SIZE(palmte2_pwm_lookup));
gpiod_add_lookup_table(&palmte2_udc_gpiod_table);
platform_add_devices(devices, ARRAY_SIZE(devices));
}

View File

@ -23,7 +23,6 @@
#include <linux/gpio.h>
#include <linux/wm97xx.h>
#include <linux/power_supply.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/mtd/platnand.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/physmap.h>

View File

@ -25,7 +25,6 @@
#include <linux/gpio.h>
#include <linux/wm97xx.h>
#include <linux/power_supply.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/platform_data/i2c-gpio.h>
#include <linux/gpio/machine.h>

View File

@ -33,7 +33,6 @@
#include <linux/spi/pxa2xx_spi.h>
#include <linux/input/matrix_keypad.h>
#include <linux/platform_data/i2c-pxa.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/reboot.h>
#include <linux/memblock.h>
@ -240,18 +239,20 @@ static struct scoop_pcmcia_config tosa_pcmcia_config = {
/*
* USB Device Controller
*/
static struct gpio_vbus_mach_info tosa_udc_info = {
.gpio_pullup = TOSA_GPIO_USB_PULLUP,
.gpio_vbus = TOSA_GPIO_USB_IN,
.gpio_vbus_inverted = 1,
static struct gpiod_lookup_table tosa_udc_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_USB_IN,
"vbus", GPIO_ACTIVE_LOW),
GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_USB_PULLUP,
"pullup", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device tosa_gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &tosa_udc_info,
},
};
/*
@ -949,6 +950,7 @@ static void __init tosa_init(void)
clk_add_alias("CLK_CK3P6MI", tc6393xb_device.name, "GPIO11_CLK", NULL);
gpiod_add_lookup_table(&tosa_udc_gpiod_table);
platform_add_devices(devices, ARRAY_SIZE(devices));
}

View File

@ -14,7 +14,6 @@
#include <linux/leds.h>
#include <linux/gpio.h>
#include <linux/gpio/machine.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/physmap.h>
@ -352,17 +351,18 @@ static inline void vpac270_uhc_init(void) {}
* USB Gadget
******************************************************************************/
#if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE)
static struct gpio_vbus_mach_info vpac270_gpio_vbus_info = {
.gpio_vbus = GPIO41_VPAC270_UDC_DETECT,
.gpio_pullup = -1,
static struct gpiod_lookup_table vpac270_gpio_vbus_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("gpio-pxa", GPIO41_VPAC270_UDC_DETECT,
"vbus", GPIO_ACTIVE_HIGH),
{ },
},
};
static struct platform_device vpac270_gpio_vbus = {
.name = "gpio-vbus",
.id = -1,
.dev = {
.platform_data = &vpac270_gpio_vbus_info,
},
};
static void vpac270_udc_command(int cmd)
@ -381,6 +381,7 @@ static struct pxa2xx_udc_mach_info vpac270_udc_info __initdata = {
static void __init vpac270_udc_init(void)
{
pxa_set_udc_info(&vpac270_udc_info);
gpiod_add_lookup_table(&vpac270_gpio_vbus_gpiod_table);
platform_device_register(&vpac270_gpio_vbus);
}
#else

View File

@ -13,7 +13,6 @@
#include <linux/serial_core.h>
#include <linux/serial_s3c.h>
#include <linux/spi/spi_gpio.h>
#include <linux/usb/gpio_vbus.h>
#include <linux/platform_data/s3c-hsotg.h>
#include <asm/mach-types.h>
@ -124,15 +123,16 @@ static struct s3c2410_hcd_info smartq_usb_host_info = {
.enable_oc = smartq_usb_host_enableoc,
};
static struct gpio_vbus_mach_info smartq_usb_otg_vbus_pdata = {
.gpio_vbus = S3C64XX_GPL(9),
.gpio_pullup = -1,
.gpio_vbus_inverted = true,
static struct gpiod_lookup_table smartq_usb_otg_vbus_gpiod_table = {
.dev_id = "gpio-vbus",
.table = {
GPIO_LOOKUP("GPL", 9, "vbus", GPIO_ACTIVE_LOW),
{ },
},
};
static struct platform_device smartq_usb_otg_vbus_dev = {
.name = "gpio-vbus",
.dev.platform_data = &smartq_usb_otg_vbus_pdata,
};
static struct pwm_lookup smartq_pwm_lookup[] = {
@ -418,6 +418,7 @@ void __init smartq_machine_init(void)
pwm_add_table(smartq_pwm_lookup, ARRAY_SIZE(smartq_pwm_lookup));
gpiod_add_lookup_table(&smartq_lcd_control_gpiod_table);
gpiod_add_lookup_table(&smartq_usb_otg_vbus_gpiod_table);
platform_add_devices(smartq_devices, ARRAY_SIZE(smartq_devices));
gpiod_add_lookup_table(&smartq_audio_gpios);

View File

@ -171,7 +171,7 @@ obj-$(CONFIG_POWERCAP) += powercap/
obj-$(CONFIG_MCB) += mcb/
obj-$(CONFIG_PERF_EVENTS) += perf/
obj-$(CONFIG_RAS) += ras/
obj-$(CONFIG_THUNDERBOLT) += thunderbolt/
obj-$(CONFIG_USB4) += thunderbolt/
obj-$(CONFIG_CORESIGHT) += hwtracing/coresight/
obj-y += hwtracing/intel_th/
obj-$(CONFIG_STM) += hwtracing/stm/

View File

@ -532,12 +532,12 @@ config FUJITSU_ES
This driver provides support for Extended Socket network device
on Extended Partitioning of FUJITSU PRIMEQUEST 2000 E2 series.
config THUNDERBOLT_NET
tristate "Networking over Thunderbolt cable"
depends on THUNDERBOLT && INET
config USB4_NET
tristate "Networking over USB4 and Thunderbolt cables"
depends on USB4 && INET
help
Select this if you want to create network between two
computers over a Thunderbolt cable. The driver supports Apple
Select this if you want to create a network between two computers
over USB4 and Thunderbolt cables. The driver supports Apple
ThunderboltIP protocol and allows communication with any host
supporting the same protocol including Windows and macOS.

View File

@ -77,6 +77,6 @@ obj-$(CONFIG_NTB_NETDEV) += ntb_netdev.o
obj-$(CONFIG_FUJITSU_ES) += fjes/
thunderbolt-net-y += thunderbolt.o
obj-$(CONFIG_THUNDERBOLT_NET) += thunderbolt-net.o
obj-$(CONFIG_USB4_NET) += thunderbolt-net.o
obj-$(CONFIG_NETDEVSIM) += netdevsim/
obj-$(CONFIG_NET_FAILOVER) += net_failover.o

View File

@ -69,5 +69,6 @@ source "drivers/phy/socionext/Kconfig"
source "drivers/phy/st/Kconfig"
source "drivers/phy/tegra/Kconfig"
source "drivers/phy/ti/Kconfig"
source "drivers/phy/intel/Kconfig"
endmenu

View File

@ -18,6 +18,7 @@ obj-y += broadcom/ \
cadence/ \
freescale/ \
hisilicon/ \
intel/ \
lantiq/ \
marvell/ \
motorola/ \

View File

@ -48,7 +48,8 @@ config PHY_SUN9I_USB
config PHY_SUN50I_USB3
tristate "Allwinner H6 SoC USB3 PHY driver"
depends on ARCH_SUNXI && HAS_IOMEM && OF
depends on ARCH_SUNXI || COMPILE_TEST
depends on HAS_IOMEM && OF
depends on RESET_CONTROLLER
select GENERIC_PHY
help

View File

@ -50,7 +50,7 @@ config PHY_BCM_NS_USB3
config PHY_NS2_PCIE
tristate "Broadcom Northstar2 PCIe PHY driver"
depends on OF && MDIO_BUS_MUX_BCM_IPROC
depends on (OF && MDIO_BUS_MUX_BCM_IPROC) || (COMPILE_TEST && MDIO_BUS)
select GENERIC_PHY
default ARCH_BCM_IPROC
help
@ -83,7 +83,7 @@ config PHY_BRCM_SATA
config PHY_BRCM_USB
tristate "Broadcom STB USB PHY driver"
depends on ARCH_BRCMSTB
depends on ARCH_BRCMSTB || COMPILE_TEST
depends on OF
select GENERIC_PHY
select SOC_BRCMSTB

View File

@ -8,7 +8,7 @@ obj-$(CONFIG_PHY_NS2_USB_DRD) += phy-bcm-ns2-usbdrd.o
obj-$(CONFIG_PHY_BRCM_SATA) += phy-brcm-sata.o
obj-$(CONFIG_PHY_BRCM_USB) += phy-brcm-usb-dvr.o
phy-brcm-usb-dvr-objs := phy-brcm-usb.o phy-brcm-usb-init.o
phy-brcm-usb-dvr-objs := phy-brcm-usb.o phy-brcm-usb-init.o phy-brcm-usb-init-synopsys.o
obj-$(CONFIG_PHY_BCM_SR_PCIE) += phy-bcm-sr-pcie.o
obj-$(CONFIG_PHY_BCM_SR_USB) += phy-bcm-sr-usb.o

View File

@ -33,6 +33,7 @@
#define SATA_PHY_CTRL_REG_28NM_SPACE_SIZE 0x8
enum brcm_sata_phy_version {
BRCM_SATA_PHY_STB_16NM,
BRCM_SATA_PHY_STB_28NM,
BRCM_SATA_PHY_STB_40NM,
BRCM_SATA_PHY_IPROC_NS2,
@ -104,10 +105,13 @@ enum sata_phy_regs {
PLL1_ACTRL5 = 0x85,
PLL1_ACTRL6 = 0x86,
PLL1_ACTRL7 = 0x87,
PLL1_ACTRL8 = 0x88,
TX_REG_BANK = 0x070,
TX_ACTRL0 = 0x80,
TX_ACTRL0_TXPOL_FLIP = BIT(6),
TX_ACTRL5 = 0x85,
TX_ACTRL5_SSC_EN = BIT(11),
AEQRX_REG_BANK_0 = 0xd0,
AEQ_CONTROL1 = 0x81,
@ -116,6 +120,7 @@ enum sata_phy_regs {
AEQ_FRC_EQ = 0x83,
AEQ_FRC_EQ_FORCE = BIT(0),
AEQ_FRC_EQ_FORCE_VAL = BIT(1),
AEQ_RFZ_FRC_VAL = BIT(8),
AEQRX_REG_BANK_1 = 0xe0,
AEQRX_SLCAL0_CTRL0 = 0x82,
AEQRX_SLCAL1_CTRL0 = 0x86,
@ -152,7 +157,28 @@ enum sata_phy_regs {
TXPMD_TX_FREQ_CTRL_CONTROL3_FMAX_MASK = 0x3ff,
RXPMD_REG_BANK = 0x1c0,
RXPMD_RX_CDR_CONTROL1 = 0x81,
RXPMD_RX_PPM_VAL_MASK = 0x1ff,
RXPMD_RXPMD_EN_FRC = BIT(12),
RXPMD_RXPMD_EN_FRC_VAL = BIT(13),
RXPMD_RX_CDR_CDR_PROP_BW = 0x82,
RXPMD_G_CDR_PROP_BW_MASK = 0x7,
RXPMD_G1_CDR_PROP_BW_SHIFT = 0,
RXPMD_G2_CDR_PROP_BW_SHIFT = 3,
RXPMD_G3_CDR_PROB_BW_SHIFT = 6,
RXPMD_RX_CDR_CDR_ACQ_INTEG_BW = 0x83,
RXPMD_G_CDR_ACQ_INT_BW_MASK = 0x7,
RXPMD_G1_CDR_ACQ_INT_BW_SHIFT = 0,
RXPMD_G2_CDR_ACQ_INT_BW_SHIFT = 3,
RXPMD_G3_CDR_ACQ_INT_BW_SHIFT = 6,
RXPMD_RX_CDR_CDR_LOCK_INTEG_BW = 0x84,
RXPMD_G_CDR_LOCK_INT_BW_MASK = 0x7,
RXPMD_G1_CDR_LOCK_INT_BW_SHIFT = 0,
RXPMD_G2_CDR_LOCK_INT_BW_SHIFT = 3,
RXPMD_G3_CDR_LOCK_INT_BW_SHIFT = 6,
RXPMD_RX_FREQ_MON_CONTROL1 = 0x87,
RXPMD_MON_CORRECT_EN = BIT(8),
RXPMD_MON_MARGIN_VAL_MASK = 0xff,
};
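The new Gen1/2/3 CDR bandwidth fields above are 3-bit lanes at shifts 0, 3 and 6 within a single register. A sketch of how a per-generation triple packs into one register word, using the same mask and shift values as the enum:

```c
#include <stdint.h>

/* Mask/shift values copied from the RXPMD_G*_CDR_*_BW definitions. */
#define G_MASK   0x7u
#define G1_SHIFT 0
#define G2_SHIFT 3
#define G3_SHIFT 6

/* Pack one 3-bit bandwidth value per generation into a register word. */
uint32_t pack_gen_bw(uint32_t g1, uint32_t g2, uint32_t g3)
{
	return ((g1 & G_MASK) << G1_SHIFT) |
	       ((g2 & G_MASK) << G2_SHIFT) |
	       ((g3 & G_MASK) << G3_SHIFT);
}
```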
enum sata_phy_ctrl_regs {
@ -166,6 +192,7 @@ static inline void __iomem *brcm_sata_pcb_base(struct brcm_sata_port *port)
u32 size = 0;
switch (priv->version) {
case BRCM_SATA_PHY_STB_16NM:
case BRCM_SATA_PHY_STB_28NM:
case BRCM_SATA_PHY_IPROC_NS2:
case BRCM_SATA_PHY_DSL_28NM:
@ -287,6 +314,94 @@ static int brcm_stb_sata_init(struct brcm_sata_port *port)
return brcm_stb_sata_rxaeq_init(port);
}
static int brcm_stb_sata_16nm_ssc_init(struct brcm_sata_port *port)
{
void __iomem *base = brcm_sata_pcb_base(port);
u32 tmp, value;
/* Reduce CP tail current to 1/16th of its default value */
brcm_sata_phy_wr(base, PLL1_REG_BANK, PLL1_ACTRL6, 0, 0x141);
/* Turn off CP tail current boost */
brcm_sata_phy_wr(base, PLL1_REG_BANK, PLL1_ACTRL8, 0, 0xc006);
/* Set a specific AEQ equalizer value */
tmp = AEQ_FRC_EQ_FORCE_VAL | AEQ_FRC_EQ_FORCE;
brcm_sata_phy_wr(base, AEQRX_REG_BANK_0, AEQ_FRC_EQ,
~(tmp | AEQ_RFZ_FRC_VAL |
AEQ_FRC_EQ_VAL_MASK << AEQ_FRC_EQ_VAL_SHIFT),
tmp | 32 << AEQ_FRC_EQ_VAL_SHIFT);
/* Set RX PPM val center frequency */
if (port->ssc_en)
value = 0x52;
else
value = 0;
brcm_sata_phy_wr(base, RXPMD_REG_BANK, RXPMD_RX_CDR_CONTROL1,
~RXPMD_RX_PPM_VAL_MASK, value);
/* Set proportional loop bandwidth for Gen1/2/3 */
tmp = RXPMD_G_CDR_PROP_BW_MASK << RXPMD_G1_CDR_PROP_BW_SHIFT |
RXPMD_G_CDR_PROP_BW_MASK << RXPMD_G2_CDR_PROP_BW_SHIFT |
RXPMD_G_CDR_PROP_BW_MASK << RXPMD_G3_CDR_PROB_BW_SHIFT;
if (port->ssc_en)
value = 2 << RXPMD_G1_CDR_PROP_BW_SHIFT |
2 << RXPMD_G2_CDR_PROP_BW_SHIFT |
2 << RXPMD_G3_CDR_PROB_BW_SHIFT;
else
value = 1 << RXPMD_G1_CDR_PROP_BW_SHIFT |
1 << RXPMD_G2_CDR_PROP_BW_SHIFT |
1 << RXPMD_G3_CDR_PROB_BW_SHIFT;
brcm_sata_phy_wr(base, RXPMD_REG_BANK, RXPMD_RX_CDR_CDR_PROP_BW, ~tmp,
value);
/* Set CDR integral loop acquisition bandwidth for Gen1/2/3 */
tmp = RXPMD_G_CDR_ACQ_INT_BW_MASK << RXPMD_G1_CDR_ACQ_INT_BW_SHIFT |
RXPMD_G_CDR_ACQ_INT_BW_MASK << RXPMD_G2_CDR_ACQ_INT_BW_SHIFT |
RXPMD_G_CDR_ACQ_INT_BW_MASK << RXPMD_G3_CDR_ACQ_INT_BW_SHIFT;
if (port->ssc_en)
value = 1 << RXPMD_G1_CDR_ACQ_INT_BW_SHIFT |
1 << RXPMD_G2_CDR_ACQ_INT_BW_SHIFT |
1 << RXPMD_G3_CDR_ACQ_INT_BW_SHIFT;
else
value = 0;
brcm_sata_phy_wr(base, RXPMD_REG_BANK, RXPMD_RX_CDR_CDR_ACQ_INTEG_BW,
~tmp, value);
/* Set CDR integral loop locking bandwidth to 1 for Gen 1/2/3 */
tmp = RXPMD_G_CDR_LOCK_INT_BW_MASK << RXPMD_G1_CDR_LOCK_INT_BW_SHIFT |
RXPMD_G_CDR_LOCK_INT_BW_MASK << RXPMD_G2_CDR_LOCK_INT_BW_SHIFT |
RXPMD_G_CDR_LOCK_INT_BW_MASK << RXPMD_G3_CDR_LOCK_INT_BW_SHIFT;
if (port->ssc_en)
value = 1 << RXPMD_G1_CDR_LOCK_INT_BW_SHIFT |
1 << RXPMD_G2_CDR_LOCK_INT_BW_SHIFT |
1 << RXPMD_G3_CDR_LOCK_INT_BW_SHIFT;
else
value = 0;
brcm_sata_phy_wr(base, RXPMD_REG_BANK, RXPMD_RX_CDR_CDR_LOCK_INTEG_BW,
~tmp, value);
/* Set no guard band and clamp CDR */
tmp = RXPMD_MON_CORRECT_EN | RXPMD_MON_MARGIN_VAL_MASK;
if (port->ssc_en)
value = 0x51;
else
value = 0;
brcm_sata_phy_wr(base, RXPMD_REG_BANK, RXPMD_RX_FREQ_MON_CONTROL1,
~tmp, RXPMD_MON_CORRECT_EN | value);
/* Turn on/off SSC */
brcm_sata_phy_wr(base, TX_REG_BANK, TX_ACTRL5, ~TX_ACTRL5_SSC_EN,
port->ssc_en ? TX_ACTRL5_SSC_EN : 0);
return 0;
}
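Each brcm_sata_phy_wr() call above passes a preserve mask (often written as the complement of the bits to clear) plus a value to OR in. Assuming read-modify-write semantics of `new = (old & msk) | value`, a userspace sketch of the update it performs:

```c
#include <stdint.h>

/* Assumed semantics of the brcm_sata_phy_wr() masked update: keep the
 * bits selected by msk, then OR in value. */
uint32_t rmw(uint32_t old, uint32_t msk, uint32_t value)
{
	return (old & msk) | value;
}
```

For example, the RX PPM write above clears `RXPMD_RX_PPM_VAL_MASK` (0x1ff) by passing its complement as the mask, then ORs in 0x52 when SSC is enabled.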
static int brcm_stb_sata_16nm_init(struct brcm_sata_port *port)
{
return brcm_stb_sata_16nm_ssc_init(port);
}
/* NS2 SATA PLL1 defaults were characterized by H/W group */
#define NS2_PLL1_ACTRL2_MAGIC 0x1df8
#define NS2_PLL1_ACTRL3_MAGIC 0x2b00
@ -544,6 +659,9 @@ static int brcm_sata_phy_init(struct phy *phy)
struct brcm_sata_port *port = phy_get_drvdata(phy);
switch (port->phy_priv->version) {
case BRCM_SATA_PHY_STB_16NM:
rc = brcm_stb_sata_16nm_init(port);
break;
case BRCM_SATA_PHY_STB_28NM:
case BRCM_SATA_PHY_STB_40NM:
rc = brcm_stb_sata_init(port);
@ -601,6 +719,8 @@ static const struct phy_ops phy_ops = {
};
static const struct of_device_id brcm_sata_phy_of_match[] = {
{ .compatible = "brcm,bcm7216-sata-phy",
.data = (void *)BRCM_SATA_PHY_STB_16NM },
{ .compatible = "brcm,bcm7445-sata-phy",
.data = (void *)BRCM_SATA_PHY_STB_28NM },
{ .compatible = "brcm,bcm7425-sata-phy",

View File

@ -0,0 +1,414 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, Broadcom */
/*
* This module contains USB PHY initialization for power up and S3 resume
* for newer Synopsys based USB hardware first used on the bcm7216.
*/
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/soc/brcmstb/brcmstb.h>
#include "phy-brcm-usb-init.h"
#define PHY_LOCK_TIMEOUT_MS 200
/* Register definitions for syscon piarbctl registers */
#define PIARBCTL_CAM 0x00
#define PIARBCTL_SPLITTER 0x04
#define PIARBCTL_MISC 0x08
#define PIARBCTL_MISC_SECURE_MASK 0x80000000
#define PIARBCTL_MISC_USB_SELECT_MASK 0x40000000
#define PIARBCTL_MISC_USB_4G_SDRAM_MASK 0x20000000
#define PIARBCTL_MISC_USB_PRIORITY_MASK 0x000f0000
#define PIARBCTL_MISC_USB_MEM_PAGE_MASK 0x0000f000
#define PIARBCTL_MISC_CAM1_MEM_PAGE_MASK 0x00000f00
#define PIARBCTL_MISC_CAM0_MEM_PAGE_MASK 0x000000f0
#define PIARBCTL_MISC_SATA_PRIORITY_MASK 0x0000000f
#define PIARBCTL_MISC_USB_ONLY_MASK \
(PIARBCTL_MISC_USB_SELECT_MASK | \
PIARBCTL_MISC_USB_4G_SDRAM_MASK | \
PIARBCTL_MISC_USB_PRIORITY_MASK | \
PIARBCTL_MISC_USB_MEM_PAGE_MASK)
/* Register definitions for the USB CTRL block */
#define USB_CTRL_SETUP 0x00
#define USB_CTRL_SETUP_STRAP_IPP_SEL_MASK 0x02000000
#define USB_CTRL_SETUP_SCB2_EN_MASK 0x00008000
#define USB_CTRL_SETUP_tca_drv_sel_MASK 0x01000000
#define USB_CTRL_SETUP_SCB1_EN_MASK 0x00004000
#define USB_CTRL_SETUP_SOFT_SHUTDOWN_MASK 0x00000200
#define USB_CTRL_SETUP_IPP_MASK 0x00000020
#define USB_CTRL_SETUP_IOC_MASK 0x00000010
#define USB_CTRL_USB_PM 0x04
#define USB_CTRL_USB_PM_USB_PWRDN_MASK 0x80000000
#define USB_CTRL_USB_PM_SOFT_RESET_MASK 0x40000000
#define USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK 0x00800000
#define USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK 0x00400000
#define USB_CTRL_USB_PM_STATUS 0x08
#define USB_CTRL_USB_DEVICE_CTL1 0x10
#define USB_CTRL_USB_DEVICE_CTL1_PORT_MODE_MASK 0x00000003
#define USB_CTRL_TEST_PORT_CTL 0x30
#define USB_CTRL_TEST_PORT_CTL_TPOUT_SEL_MASK 0x000000ff
#define USB_CTRL_TEST_PORT_CTL_TPOUT_SEL_PME_GEN_MASK 0x0000002e
#define USB_CTRL_TP_DIAG1 0x34
#define USB_CTLR_TP_DIAG1_wake_MASK 0x00000002
#define USB_CTRL_CTLR_CSHCR 0x50
#define USB_CTRL_CTLR_CSHCR_ctl_pme_en_MASK 0x00040000
/* Register definitions for the USB_PHY block in 7211b0 */
#define USB_PHY_PLL_CTL 0x00
#define USB_PHY_PLL_CTL_PLL_RESETB_MASK 0x40000000
#define USB_PHY_PLL_LDO_CTL 0x08
#define USB_PHY_PLL_LDO_CTL_AFE_CORERDY_MASK 0x00000004
#define USB_PHY_PLL_LDO_CTL_AFE_LDO_PWRDWNB_MASK 0x00000002
#define USB_PHY_PLL_LDO_CTL_AFE_BG_PWRDWNB_MASK 0x00000001
#define USB_PHY_UTMI_CTL_1 0x04
#define USB_PHY_UTMI_CTL_1_POWER_UP_FSM_EN_MASK 0x00000800
#define USB_PHY_UTMI_CTL_1_PHY_MODE_MASK 0x0000000c
#define USB_PHY_UTMI_CTL_1_PHY_MODE_SHIFT 2
#define USB_PHY_IDDQ 0x1c
#define USB_PHY_IDDQ_phy_iddq_MASK 0x00000001
#define USB_PHY_STATUS 0x20
#define USB_PHY_STATUS_pll_lock_MASK 0x00000001
/* Register definitions for the MDIO registers in the DWC2 block of
* the 7211b0.
* NOTE: The PHY's MDIO registers are only accessible through the
* legacy DesignWare USB controller even though it's not being used.
*/
#define USB_GMDIOCSR 0
#define USB_GMDIOGEN 4
/* Register definitions for the BDC EC block in 7211b0 */
#define BDC_EC_AXIRDA 0x0c
#define BDC_EC_AXIRDA_RTS_MASK 0xf0000000
#define BDC_EC_AXIRDA_RTS_SHIFT 28
static void usb_mdio_write_7211b0(struct brcm_usb_init_params *params,
uint8_t addr, uint16_t data)
{
void __iomem *usb_mdio = params->regs[BRCM_REGS_USB_MDIO];
addr &= 0x1f; /* 5-bit address */
brcm_usb_writel(0xffffffff, usb_mdio + USB_GMDIOGEN);
while (brcm_usb_readl(usb_mdio + USB_GMDIOCSR) & (1<<31))
;
brcm_usb_writel(0x59020000 | (addr << 18) | data,
usb_mdio + USB_GMDIOGEN);
while (brcm_usb_readl(usb_mdio + USB_GMDIOCSR) & (1<<31))
;
brcm_usb_writel(0x00000000, usb_mdio + USB_GMDIOGEN);
while (brcm_usb_readl(usb_mdio + USB_GMDIOCSR) & (1<<31))
;
}
static uint16_t __maybe_unused usb_mdio_read_7211b0(
struct brcm_usb_init_params *params, uint8_t addr)
{
void __iomem *usb_mdio = params->regs[BRCM_REGS_USB_MDIO];
addr &= 0x1f; /* 5-bit address */
brcm_usb_writel(0xffffffff, usb_mdio + USB_GMDIOGEN);
while (brcm_usb_readl(usb_mdio + USB_GMDIOCSR) & (1<<31))
;
brcm_usb_writel(0x69020000 | (addr << 18), usb_mdio + USB_GMDIOGEN);
while (brcm_usb_readl(usb_mdio + USB_GMDIOCSR) & (1<<31))
;
brcm_usb_writel(0x00000000, usb_mdio + USB_GMDIOGEN);
while (brcm_usb_readl(usb_mdio + USB_GMDIOCSR) & (1<<31))
;
return brcm_usb_readl(usb_mdio + USB_GMDIOCSR) & 0xffff;
}
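The MDIO helpers above spin until bit 31 (busy) of GMDIOCSR clears, then use the low 16 bits as the result. A userspace model of that polling loop, with a fake register that reports busy for the first few reads (the 0xbeef data value is invented for the demo):

```c
#include <stdint.h>

uint32_t fake_csr_reads;

/* Fake GMDIOCSR: busy (bit 31 set) for the first three reads, then
 * done with data 0xbeef in the low 16 bits. */
uint32_t fake_csr_read(void)
{
	if (fake_csr_reads++ < 3)
		return (1u << 31) | 0xbeefu;
	return 0xbeefu;
}

/* Same busy-wait shape as usb_mdio_read_7211b0() above. */
uint16_t poll_mdio_result(void)
{
	uint32_t reg;

	while ((reg = fake_csr_read()) & (1u << 31))
		;
	return (uint16_t)(reg & 0xffffu);
}
```

Note the real helpers have no timeout either; they rely on the hardware always clearing the busy bit.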
static void usb2_eye_fix_7211b0(struct brcm_usb_init_params *params)
{
/* select bank */
usb_mdio_write_7211b0(params, 0x1f, 0x80a0);
/* Set the eye */
usb_mdio_write_7211b0(params, 0x0a, 0xc6a0);
}
static void xhci_soft_reset(struct brcm_usb_init_params *params,
int on_off)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
/* Assert reset */
if (on_off)
USB_CTRL_UNSET(ctrl, USB_PM, XHC_SOFT_RESETB);
/* De-assert reset */
else
USB_CTRL_SET(ctrl, USB_PM, XHC_SOFT_RESETB);
}
static void usb_init_ipp(struct brcm_usb_init_params *params)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
u32 reg;
u32 orig_reg;
pr_debug("%s\n", __func__);
orig_reg = reg = brcm_usb_readl(USB_CTRL_REG(ctrl, SETUP));
if (params->ipp != 2)
/* override ipp strap pin (if it exists) */
reg &= ~(USB_CTRL_MASK(SETUP, STRAP_IPP_SEL));
/* Override the default OC and PP polarity */
reg &= ~(USB_CTRL_MASK(SETUP, IPP) | USB_CTRL_MASK(SETUP, IOC));
if (params->ioc)
reg |= USB_CTRL_MASK(SETUP, IOC);
if (params->ipp == 1)
reg |= USB_CTRL_MASK(SETUP, IPP);
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, SETUP));
/*
* If we're changing IPP, make sure power is off long enough
* to turn off any connected devices.
*/
if ((reg ^ orig_reg) & USB_CTRL_MASK(SETUP, IPP))
msleep(50);
}
static void syscon_piarbctl_init(struct regmap *rmap)
{
/* Switch from legacy USB OTG controller to new STB USB controller */
regmap_update_bits(rmap, PIARBCTL_MISC, PIARBCTL_MISC_USB_ONLY_MASK,
PIARBCTL_MISC_USB_SELECT_MASK |
PIARBCTL_MISC_USB_4G_SDRAM_MASK);
}
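regmap_update_bits() performs a masked read-modify-write, roughly `new = (old & ~mask) | (val & mask)`; here it switches the arbiter to the new STB controller by setting the SELECT and 4G_SDRAM bits and clearing the priority/mem-page fields also covered by the mask. A sketch with the mask values copied from the PIARBCTL defines above:

```c
#include <stdint.h>

/* Values copied from the PIARBCTL_MISC_* defines above. */
#define DEMO_USB_SELECT   0x40000000u
#define DEMO_USB_4G_SDRAM 0x20000000u
#define DEMO_USB_PRIORITY 0x000f0000u
#define DEMO_USB_MEM_PAGE 0x0000f000u
#define DEMO_USB_ONLY \
	(DEMO_USB_SELECT | DEMO_USB_4G_SDRAM | \
	 DEMO_USB_PRIORITY | DEMO_USB_MEM_PAGE)

/* Approximate semantics of regmap_update_bits() on one register. */
uint32_t update_bits(uint32_t old, uint32_t mask, uint32_t val)
{
	return (old & ~mask) | (val & mask);
}
```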
static void usb_init_common(struct brcm_usb_init_params *params)
{
u32 reg;
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
pr_debug("%s\n", __func__);
USB_CTRL_UNSET(ctrl, USB_PM, USB_PWRDN);
/* 1 millisecond - for USB clocks to settle down */
usleep_range(1000, 2000);
if (USB_CTRL_MASK(USB_DEVICE_CTL1, PORT_MODE)) {
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg &= ~USB_CTRL_MASK(USB_DEVICE_CTL1, PORT_MODE);
reg |= params->mode;
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
}
switch (params->mode) {
case USB_CTLR_MODE_HOST:
USB_CTRL_UNSET(ctrl, USB_PM, BDC_SOFT_RESETB);
break;
default:
USB_CTRL_UNSET(ctrl, USB_PM, BDC_SOFT_RESETB);
USB_CTRL_SET(ctrl, USB_PM, BDC_SOFT_RESETB);
break;
}
}
static void usb_wake_enable_7211b0(struct brcm_usb_init_params *params,
bool enable)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
if (enable)
USB_CTRL_SET(ctrl, CTLR_CSHCR, ctl_pme_en);
else
USB_CTRL_UNSET(ctrl, CTLR_CSHCR, ctl_pme_en);
}
static void usb_init_common_7211b0(struct brcm_usb_init_params *params)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
void __iomem *usb_phy = params->regs[BRCM_REGS_USB_PHY];
void __iomem *bdc_ec = params->regs[BRCM_REGS_BDC_EC];
int timeout_ms = PHY_LOCK_TIMEOUT_MS;
u32 reg;
if (params->syscon_piarbctl)
syscon_piarbctl_init(params->syscon_piarbctl);
USB_CTRL_UNSET(ctrl, USB_PM, USB_PWRDN);
usb_wake_enable_7211b0(params, false);
if (!params->wake_enabled) {
/* undo possible suspend settings */
brcm_usb_writel(0, usb_phy + USB_PHY_IDDQ);
reg = brcm_usb_readl(usb_phy + USB_PHY_PLL_CTL);
reg |= USB_PHY_PLL_CTL_PLL_RESETB_MASK;
brcm_usb_writel(reg, usb_phy + USB_PHY_PLL_CTL);
/* temporarily enable FSM so PHY comes up properly */
reg = brcm_usb_readl(usb_phy + USB_PHY_UTMI_CTL_1);
reg |= USB_PHY_UTMI_CTL_1_POWER_UP_FSM_EN_MASK;
brcm_usb_writel(reg, usb_phy + USB_PHY_UTMI_CTL_1);
}
/* Init the PHY */
reg = USB_PHY_PLL_LDO_CTL_AFE_CORERDY_MASK |
USB_PHY_PLL_LDO_CTL_AFE_LDO_PWRDWNB_MASK |
USB_PHY_PLL_LDO_CTL_AFE_BG_PWRDWNB_MASK;
brcm_usb_writel(reg, usb_phy + USB_PHY_PLL_LDO_CTL);
/* wait for lock */
while (timeout_ms-- > 0) {
reg = brcm_usb_readl(usb_phy + USB_PHY_STATUS);
if (reg & USB_PHY_STATUS_pll_lock_MASK)
break;
usleep_range(1000, 2000);
}
/* Set the PHY_MODE */
reg = brcm_usb_readl(usb_phy + USB_PHY_UTMI_CTL_1);
reg &= ~USB_PHY_UTMI_CTL_1_PHY_MODE_MASK;
reg |= params->mode << USB_PHY_UTMI_CTL_1_PHY_MODE_SHIFT;
brcm_usb_writel(reg, usb_phy + USB_PHY_UTMI_CTL_1);
/* Fix the incorrect default */
reg = brcm_usb_readl(ctrl + USB_CTRL_SETUP);
reg &= ~USB_CTRL_SETUP_tca_drv_sel_MASK;
brcm_usb_writel(reg, ctrl + USB_CTRL_SETUP);
usb_init_common(params);
/*
* The BDC controller will get occasional failures with
* the default "Read Transaction Size" of 6 (1024 bytes).
* Set it to 4 (256 bytes).
*/
if ((params->mode != USB_CTLR_MODE_HOST) && bdc_ec) {
reg = brcm_usb_readl(bdc_ec + BDC_EC_AXIRDA);
reg &= ~BDC_EC_AXIRDA_RTS_MASK;
reg |= (0x4 << BDC_EC_AXIRDA_RTS_SHIFT);
brcm_usb_writel(reg, bdc_ec + BDC_EC_AXIRDA);
}
/*
* Disable FSM, otherwise the PHY will auto suspend when no
* device is connected and will be reset on resume.
*/
reg = brcm_usb_readl(usb_phy + USB_PHY_UTMI_CTL_1);
reg &= ~USB_PHY_UTMI_CTL_1_POWER_UP_FSM_EN_MASK;
brcm_usb_writel(reg, usb_phy + USB_PHY_UTMI_CTL_1);
usb2_eye_fix_7211b0(params);
}
static void usb_init_xhci(struct brcm_usb_init_params *params)
{
pr_debug("%s\n", __func__);
xhci_soft_reset(params, 0);
}
static void usb_uninit_common(struct brcm_usb_init_params *params)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
pr_debug("%s\n", __func__);
USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN);
}
static void usb_uninit_common_7211b0(struct brcm_usb_init_params *params)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
void __iomem *usb_phy = params->regs[BRCM_REGS_USB_PHY];
u32 reg;
pr_debug("%s\n", __func__);
if (params->wake_enabled) {
USB_CTRL_SET(ctrl, TEST_PORT_CTL, TPOUT_SEL_PME_GEN);
usb_wake_enable_7211b0(params, true);
} else {
USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN);
brcm_usb_writel(0, usb_phy + USB_PHY_PLL_LDO_CTL);
reg = brcm_usb_readl(usb_phy + USB_PHY_PLL_CTL);
reg &= ~USB_PHY_PLL_CTL_PLL_RESETB_MASK;
brcm_usb_writel(reg, usb_phy + USB_PHY_PLL_CTL);
brcm_usb_writel(USB_PHY_IDDQ_phy_iddq_MASK,
usb_phy + USB_PHY_IDDQ);
}
}
static void usb_uninit_xhci(struct brcm_usb_init_params *params)
{
pr_debug("%s\n", __func__);
if (!params->wake_enabled)
xhci_soft_reset(params, 1);
}
static int usb_get_dual_select(struct brcm_usb_init_params *params)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
u32 reg = 0;
pr_debug("%s\n", __func__);
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg &= USB_CTRL_MASK(USB_DEVICE_CTL1, PORT_MODE);
return reg;
}
static void usb_set_dual_select(struct brcm_usb_init_params *params, int mode)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
u32 reg;
pr_debug("%s\n", __func__);
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg &= ~USB_CTRL_MASK(USB_DEVICE_CTL1, PORT_MODE);
reg |= mode;
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
}
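usb_set_dual_select() above is a plain masked field update on USB_DEVICE_CTL1. A userspace sketch of the same update, with one extra defensive step of masking the incoming mode (the kernel code assumes the caller passes a value already within the 2-bit field):

```c
#include <stdint.h>

/* Same value as USB_CTRL_USB_DEVICE_CTL1_PORT_MODE_MASK above. */
#define DEMO_PORT_MODE_MASK 0x00000003u

/* Clear the PORT_MODE field, then insert the new mode. */
uint32_t set_port_mode(uint32_t reg, uint32_t mode)
{
	reg &= ~DEMO_PORT_MODE_MASK;
	reg |= (mode & DEMO_PORT_MODE_MASK);
	return reg;
}
```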
static const struct brcm_usb_init_ops bcm7216_ops = {
.init_ipp = usb_init_ipp,
.init_common = usb_init_common,
.init_xhci = usb_init_xhci,
.uninit_common = usb_uninit_common,
.uninit_xhci = usb_uninit_xhci,
.get_dual_select = usb_get_dual_select,
.set_dual_select = usb_set_dual_select,
};
static const struct brcm_usb_init_ops bcm7211b0_ops = {
.init_ipp = usb_init_ipp,
.init_common = usb_init_common_7211b0,
.init_xhci = usb_init_xhci,
.uninit_common = usb_uninit_common_7211b0,
.uninit_xhci = usb_uninit_xhci,
.get_dual_select = usb_get_dual_select,
.set_dual_select = usb_set_dual_select,
};
void brcm_usb_dvr_init_7216(struct brcm_usb_init_params *params)
{
pr_debug("%s\n", __func__);
params->family_name = "7216";
params->ops = &bcm7216_ops;
}
void brcm_usb_dvr_init_7211b0(struct brcm_usb_init_params *params)
{
pr_debug("%s\n", __func__);
params->family_name = "7211";
params->ops = &bcm7211b0_ops;
params->suspend_with_clocks = true;
}

View File

@ -42,6 +42,7 @@
#define USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK 0x80000000 /* option */
#define USB_CTRL_EBRIDGE 0x0c
#define USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK 0x00020000 /* option */
#define USB_CTRL_EBRIDGE_EBR_SCB_SIZE_MASK 0x00000f80 /* option */
#define USB_CTRL_OBRIDGE 0x10
#define USB_CTRL_OBRIDGE_LS_KEEP_ALIVE_MASK 0x08000000
#define USB_CTRL_MDIO 0x14
@ -57,6 +58,8 @@
#define USB_CTRL_USB_PM_SOFT_RESET_MASK 0x40000000 /* option */
#define USB_CTRL_USB_PM_USB20_HC_RESETB_MASK 0x30000000 /* option */
#define USB_CTRL_USB_PM_USB20_HC_RESETB_VAR_MASK 0x00300000 /* option */
#define USB_CTRL_USB_PM_RMTWKUP_EN_MASK 0x00000001
#define USB_CTRL_USB_PM_STATUS 0x38
#define USB_CTRL_USB30_CTL1 0x60
#define USB_CTRL_USB30_CTL1_PHY3_PLL_SEQ_START_MASK 0x00000010
#define USB_CTRL_USB30_CTL1_PHY3_RESETB_MASK 0x00010000
@ -126,10 +129,6 @@ enum {
USB_CTRL_SELECTOR_COUNT,
};
#define USB_CTRL_REG(base, reg) ((void __iomem *)base + USB_CTRL_##reg)
#define USB_XHCI_EC_REG(base, reg) ((void __iomem *)base + USB_XHCI_EC_##reg)
#define USB_CTRL_MASK(reg, field) \
USB_CTRL_##reg##_##field##_MASK
#define USB_CTRL_MASK_FAMILY(params, reg, field) \
(params->usb_reg_bits_map[USB_CTRL_##reg##_##field##_SELECTOR])
@ -140,13 +139,6 @@ enum {
usb_ctrl_unset_family(params, USB_CTRL_##reg, \
USB_CTRL_##reg##_##field##_SELECTOR)
#define USB_CTRL_SET(base, reg, field) \
usb_ctrl_set(USB_CTRL_REG(base, reg), \
USB_CTRL_##reg##_##field##_MASK)
#define USB_CTRL_UNSET(base, reg, field) \
usb_ctrl_unset(USB_CTRL_REG(base, reg), \
USB_CTRL_##reg##_##field##_MASK)
#define MDIO_USB2 0
#define MDIO_USB3 BIT(31)
@ -176,6 +168,7 @@ static const struct id_to_type id_to_type_table[] = {
{ 0x33900000, BRCM_FAMILY_3390A0 },
{ 0x72500010, BRCM_FAMILY_7250B0 },
{ 0x72600000, BRCM_FAMILY_7260A0 },
{ 0x72550000, BRCM_FAMILY_7260A0 },
{ 0x72680000, BRCM_FAMILY_7271A0 },
{ 0x72710000, BRCM_FAMILY_7271A0 },
{ 0x73640000, BRCM_FAMILY_7364A0 },
@ -401,26 +394,14 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
},
};
static inline u32 brcmusb_readl(void __iomem *addr)
{
return readl(addr);
}
static inline void brcmusb_writel(u32 val, void __iomem *addr)
{
writel(val, addr);
}
static inline
void usb_ctrl_unset_family(struct brcm_usb_init_params *params,
u32 reg_offset, u32 field)
{
u32 mask;
void __iomem *reg;
mask = params->usb_reg_bits_map[field];
reg = params->ctrl_regs + reg_offset;
brcmusb_writel(brcmusb_readl(reg) & ~mask, reg);
brcm_usb_ctrl_unset(params->regs[BRCM_REGS_CTRL] + reg_offset, mask);
};
static inline
@ -428,45 +409,27 @@ void usb_ctrl_set_family(struct brcm_usb_init_params *params,
u32 reg_offset, u32 field)
{
u32 mask;
void __iomem *reg;
mask = params->usb_reg_bits_map[field];
reg = params->ctrl_regs + reg_offset;
brcmusb_writel(brcmusb_readl(reg) | mask, reg);
brcm_usb_ctrl_set(params->regs[BRCM_REGS_CTRL] + reg_offset, mask);
};
static inline void usb_ctrl_set(void __iomem *reg, u32 field)
{
u32 value;
value = brcmusb_readl(reg);
brcmusb_writel(value | field, reg);
}
static inline void usb_ctrl_unset(void __iomem *reg, u32 field)
{
u32 value;
value = brcmusb_readl(reg);
brcmusb_writel(value & ~field, reg);
}
static u32 brcmusb_usb_mdio_read(void __iomem *ctrl_base, u32 reg, int mode)
{
u32 data;
data = (reg << 16) | mode;
brcmusb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
brcm_usb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
data |= (1 << 24);
brcmusb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
brcm_usb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
data &= ~(1 << 24);
/* wait for the 60MHz parallel to serial shifter */
usleep_range(10, 20);
brcmusb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
brcm_usb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
/* wait for the 60MHz parallel to serial shifter */
usleep_range(10, 20);
return brcmusb_readl(USB_CTRL_REG(ctrl_base, MDIO2)) & 0xffff;
return brcm_usb_readl(USB_CTRL_REG(ctrl_base, MDIO2)) & 0xffff;
}
static void brcmusb_usb_mdio_write(void __iomem *ctrl_base, u32 reg,
@ -475,14 +438,14 @@ static void brcmusb_usb_mdio_write(void __iomem *ctrl_base, u32 reg,
u32 data;
data = (reg << 16) | val | mode;
brcmusb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
brcm_usb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
data |= (1 << 25);
brcmusb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
brcm_usb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
data &= ~(1 << 25);
/* wait for the 60MHz parallel to serial shifter */
usleep_range(10, 20);
brcmusb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
brcm_usb_writel(data, USB_CTRL_REG(ctrl_base, MDIO));
/* wait for the 60MHz parallel to serial shifter */
usleep_range(10, 20);
}
@ -581,7 +544,7 @@ static void brcmusb_usb3_pll_54mhz(struct brcm_usb_init_params *params)
{
u32 ofs;
int ii;
void __iomem *ctrl_base = params->ctrl_regs;
void __iomem *ctrl_base = params->regs[BRCM_REGS_CTRL];
/*
* On newer B53 based SoC's, the reference clock for the
@ -662,7 +625,7 @@ static void brcmusb_usb3_ssc_enable(void __iomem *ctrl_base)
static void brcmusb_usb3_phy_workarounds(struct brcm_usb_init_params *params)
{
void __iomem *ctrl_base = params->ctrl_regs;
void __iomem *ctrl_base = params->regs[BRCM_REGS_CTRL];
brcmusb_usb3_pll_fix(ctrl_base);
brcmusb_usb3_pll_54mhz(params);
@ -704,21 +667,21 @@ static void brcmusb_memc_fix(struct brcm_usb_init_params *params)
static void brcmusb_usb3_otp_fix(struct brcm_usb_init_params *params)
{
void __iomem *xhci_ec_base = params->xhci_ec_regs;
void __iomem *xhci_ec_base = params->regs[BRCM_REGS_XHCI_EC];
u32 val;
if (params->family_id != 0x74371000 || !xhci_ec_base)
return;
brcmusb_writel(0xa20c, USB_XHCI_EC_REG(xhci_ec_base, IRAADR));
val = brcmusb_readl(USB_XHCI_EC_REG(xhci_ec_base, IRADAT));
brcm_usb_writel(0xa20c, USB_XHCI_EC_REG(xhci_ec_base, IRAADR));
val = brcm_usb_readl(USB_XHCI_EC_REG(xhci_ec_base, IRADAT));
/* set cfg_pick_ss_lock */
val |= (1 << 27);
brcmusb_writel(val, USB_XHCI_EC_REG(xhci_ec_base, IRADAT));
brcm_usb_writel(val, USB_XHCI_EC_REG(xhci_ec_base, IRADAT));
/* Reset USB 3.0 PHY for workaround to take effect */
USB_CTRL_UNSET(params->ctrl_regs, USB30_CTL1, PHY3_RESETB);
USB_CTRL_SET(params->ctrl_regs, USB30_CTL1, PHY3_RESETB);
USB_CTRL_UNSET(params->regs[BRCM_REGS_CTRL], USB30_CTL1, PHY3_RESETB);
USB_CTRL_SET(params->regs[BRCM_REGS_CTRL], USB30_CTL1, PHY3_RESETB);
}
static void brcmusb_xhci_soft_reset(struct brcm_usb_init_params *params,
@ -747,7 +710,7 @@ static void brcmusb_xhci_soft_reset(struct brcm_usb_init_params *params,
* - default chip/rev.
* NOTE: The minor rev is always ignored.
*/
static enum brcm_family_type brcmusb_get_family_type(
static enum brcm_family_type get_family_type(
struct brcm_usb_init_params *params)
{
int last_type = -1;
@ -775,9 +738,9 @@ static enum brcm_family_type brcmusb_get_family_type(
return last_type;
}
void brcm_usb_init_ipp(struct brcm_usb_init_params *params)
static void usb_init_ipp(struct brcm_usb_init_params *params)
{
void __iomem *ctrl = params->ctrl_regs;
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
u32 reg;
u32 orig_reg;
@ -791,7 +754,7 @@ void brcm_usb_init_ipp(struct brcm_usb_init_params *params)
USB_CTRL_SET_FAMILY(params, USB30_CTL1, USB3_IPP);
}
reg = brcmusb_readl(USB_CTRL_REG(ctrl, SETUP));
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, SETUP));
orig_reg = reg;
if (USB_CTRL_MASK_FAMILY(params, SETUP, STRAP_CC_DRD_MODE_ENABLE_SEL))
/* Never use the strap, it's going away. */
@ -799,8 +762,8 @@ void brcm_usb_init_ipp(struct brcm_usb_init_params *params)
SETUP,
STRAP_CC_DRD_MODE_ENABLE_SEL));
if (USB_CTRL_MASK_FAMILY(params, SETUP, STRAP_IPP_SEL))
/* override ipp strap pin (if it exists) */
if (params->ipp != 2)
/* override ipp strap pin (if it exists) */
reg &= ~(USB_CTRL_MASK_FAMILY(params, SETUP,
STRAP_IPP_SEL));
@ -808,50 +771,38 @@ void brcm_usb_init_ipp(struct brcm_usb_init_params *params)
reg &= ~(USB_CTRL_MASK(SETUP, IPP) | USB_CTRL_MASK(SETUP, IOC));
if (params->ioc)
reg |= USB_CTRL_MASK(SETUP, IOC);
if (params->ipp == 1 && ((reg & USB_CTRL_MASK(SETUP, IPP)) == 0))
if (params->ipp == 1)
reg |= USB_CTRL_MASK(SETUP, IPP);
brcmusb_writel(reg, USB_CTRL_REG(ctrl, SETUP));
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, SETUP));
/*
* If we're changing IPP, make sure power is off long enough
* to turn off any connected devices.
*/
if (reg != orig_reg)
if ((reg ^ orig_reg) & USB_CTRL_MASK(SETUP, IPP))
msleep(50);
}
int brcm_usb_init_get_dual_select(struct brcm_usb_init_params *params)
static void usb_wake_enable(struct brcm_usb_init_params *params,
bool enable)
{
void __iomem *ctrl = params->ctrl_regs;
u32 reg = 0;
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
if (USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1, PORT_MODE)) {
reg = brcmusb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg &= USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1,
PORT_MODE);
}
return reg;
if (enable)
USB_CTRL_SET(ctrl, USB_PM, RMTWKUP_EN);
else
USB_CTRL_UNSET(ctrl, USB_PM, RMTWKUP_EN);
}
void brcm_usb_init_set_dual_select(struct brcm_usb_init_params *params,
int mode)
{
void __iomem *ctrl = params->ctrl_regs;
u32 reg;
if (USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1, PORT_MODE)) {
reg = brcmusb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg &= ~USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1,
PORT_MODE);
reg |= mode;
brcmusb_writel(reg, USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
}
}
void brcm_usb_init_common(struct brcm_usb_init_params *params)
static void usb_init_common(struct brcm_usb_init_params *params)
{
u32 reg;
void __iomem *ctrl = params->ctrl_regs;
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
/* Clear any pending wake conditions */
usb_wake_enable(params, false);
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, USB_PM_STATUS));
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, USB_PM_STATUS));
/* Take USB out of power down */
if (USB_CTRL_MASK_FAMILY(params, PLL_CTL, PLL_IDDQ_PWRDN)) {
@ -877,7 +828,7 @@ void brcm_usb_init_common(struct brcm_usb_init_params *params)
/* Block auto PLL suspend by USB2 PHY (Sasi) */
USB_CTRL_SET(ctrl, PLL_CTL, PLL_SUSPEND_EN);
reg = brcmusb_readl(USB_CTRL_REG(ctrl, SETUP));
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, SETUP));
if (params->selected_family == BRCM_FAMILY_7364A0)
/* Suppress overcurrent indication from USB30 ports for A0 */
reg |= USB_CTRL_MASK_FAMILY(params, SETUP, OC3_DISABLE);
@ -893,16 +844,16 @@ void brcm_usb_init_common(struct brcm_usb_init_params *params)
reg |= USB_CTRL_MASK_FAMILY(params, SETUP, SCB1_EN);
if (USB_CTRL_MASK_FAMILY(params, SETUP, SCB2_EN))
reg |= USB_CTRL_MASK_FAMILY(params, SETUP, SCB2_EN);
brcmusb_writel(reg, USB_CTRL_REG(ctrl, SETUP));
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, SETUP));
brcmusb_memc_fix(params);
if (USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1, PORT_MODE)) {
reg = brcmusb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg &= ~USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1,
PORT_MODE);
reg |= params->mode;
brcmusb_writel(reg, USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
}
if (USB_CTRL_MASK_FAMILY(params, USB_PM, BDC_SOFT_RESETB)) {
switch (params->mode) {
@ -924,10 +875,10 @@ void brcm_usb_init_common(struct brcm_usb_init_params *params)
}
}
void brcm_usb_init_eohci(struct brcm_usb_init_params *params)
static void usb_init_eohci(struct brcm_usb_init_params *params)
{
u32 reg;
void __iomem *ctrl = params->ctrl_regs;
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
if (USB_CTRL_MASK_FAMILY(params, USB_PM, USB20_HC_RESETB))
USB_CTRL_SET_FAMILY(params, USB_PM, USB20_HC_RESETB);
@ -940,19 +891,30 @@ void brcm_usb_init_eohci(struct brcm_usb_init_params *params)
USB_CTRL_SET(ctrl, EBRIDGE, ESTOP_SCB_REQ);
/* Setup the endian bits */
reg = brcmusb_readl(USB_CTRL_REG(ctrl, SETUP));
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, SETUP));
reg &= ~USB_CTRL_SETUP_ENDIAN_BITS;
reg |= USB_CTRL_MASK_FAMILY(params, SETUP, ENDIAN);
brcmusb_writel(reg, USB_CTRL_REG(ctrl, SETUP));
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, SETUP));
if (params->selected_family == BRCM_FAMILY_7271A0)
/* Enable LS keep alive fix for certain keyboards */
USB_CTRL_SET(ctrl, OBRIDGE, LS_KEEP_ALIVE);
if (params->family_id == 0x72550000) {
/*
* Make the burst size 512 bytes to fix a hardware bug
* on the 7255a0. See HW7255-24.
*/
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, EBRIDGE));
reg &= ~USB_CTRL_MASK(EBRIDGE, EBR_SCB_SIZE);
reg |= 0x800;
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, EBRIDGE));
}
}
void brcm_usb_init_xhci(struct brcm_usb_init_params *params)
static void usb_init_xhci(struct brcm_usb_init_params *params)
{
void __iomem *ctrl = params->ctrl_regs;
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
USB_CTRL_UNSET(ctrl, USB30_PCTL, PHY3_IDDQ_OVERRIDE);
/* 1 millisecond - for USB clocks to settle down */
@ -978,34 +940,80 @@ void brcm_usb_init_xhci(struct brcm_usb_init_params *params)
brcmusb_usb3_otp_fix(params);
}
void brcm_usb_uninit_common(struct brcm_usb_init_params *params)
static void usb_uninit_common(struct brcm_usb_init_params *params)
{
if (USB_CTRL_MASK_FAMILY(params, USB_PM, USB_PWRDN))
USB_CTRL_SET_FAMILY(params, USB_PM, USB_PWRDN);
if (USB_CTRL_MASK_FAMILY(params, PLL_CTL, PLL_IDDQ_PWRDN))
USB_CTRL_SET_FAMILY(params, PLL_CTL, PLL_IDDQ_PWRDN);
if (params->wake_enabled)
usb_wake_enable(params, true);
}
void brcm_usb_uninit_eohci(struct brcm_usb_init_params *params)
static void usb_uninit_eohci(struct brcm_usb_init_params *params)
{
if (USB_CTRL_MASK_FAMILY(params, USB_PM, USB20_HC_RESETB))
USB_CTRL_UNSET_FAMILY(params, USB_PM, USB20_HC_RESETB);
}
void brcm_usb_uninit_xhci(struct brcm_usb_init_params *params)
static void usb_uninit_xhci(struct brcm_usb_init_params *params)
{
brcmusb_xhci_soft_reset(params, 1);
USB_CTRL_SET(params->ctrl_regs, USB30_PCTL, PHY3_IDDQ_OVERRIDE);
USB_CTRL_SET(params->regs[BRCM_REGS_CTRL], USB30_PCTL,
PHY3_IDDQ_OVERRIDE);
}
void brcm_usb_set_family_map(struct brcm_usb_init_params *params)
static int usb_get_dual_select(struct brcm_usb_init_params *params)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
u32 reg = 0;
pr_debug("%s\n", __func__);
if (USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1, PORT_MODE)) {
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg &= USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1,
PORT_MODE);
}
return reg;
}
static void usb_set_dual_select(struct brcm_usb_init_params *params, int mode)
{
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
u32 reg;
pr_debug("%s\n", __func__);
if (USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1, PORT_MODE)) {
reg = brcm_usb_readl(USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
reg &= ~USB_CTRL_MASK_FAMILY(params, USB_DEVICE_CTL1,
PORT_MODE);
reg |= mode;
brcm_usb_writel(reg, USB_CTRL_REG(ctrl, USB_DEVICE_CTL1));
}
}
static const struct brcm_usb_init_ops bcm7445_ops = {
.init_ipp = usb_init_ipp,
.init_common = usb_init_common,
.init_eohci = usb_init_eohci,
.init_xhci = usb_init_xhci,
.uninit_common = usb_uninit_common,
.uninit_eohci = usb_uninit_eohci,
.uninit_xhci = usb_uninit_xhci,
.get_dual_select = usb_get_dual_select,
.set_dual_select = usb_set_dual_select,
};
void brcm_usb_dvr_init_7445(struct brcm_usb_init_params *params)
{
int fam;
fam = brcmusb_get_family_type(params);
pr_debug("%s\n", __func__);
fam = get_family_type(params);
params->selected_family = fam;
params->usb_reg_bits_map =
&usb_reg_bits_map_table[fam][0];
params->family_name = family_names[fam];
params->ops = &bcm7445_ops;
}


@ -6,16 +6,50 @@
#ifndef _USB_BRCM_COMMON_INIT_H
#define _USB_BRCM_COMMON_INIT_H
#include <linux/regmap.h>
#define USB_CTLR_MODE_HOST 0
#define USB_CTLR_MODE_DEVICE 1
#define USB_CTLR_MODE_DRD 2
#define USB_CTLR_MODE_TYPEC_PD 3
enum brcmusb_reg_sel {
BRCM_REGS_CTRL = 0,
BRCM_REGS_XHCI_EC,
BRCM_REGS_XHCI_GBL,
BRCM_REGS_USB_PHY,
BRCM_REGS_USB_MDIO,
BRCM_REGS_BDC_EC,
BRCM_REGS_MAX
};
#define USB_CTRL_REG(base, reg) ((void __iomem *)base + USB_CTRL_##reg)
#define USB_XHCI_EC_REG(base, reg) ((void __iomem *)base + USB_XHCI_EC_##reg)
#define USB_CTRL_MASK(reg, field) \
USB_CTRL_##reg##_##field##_MASK
#define USB_CTRL_SET(base, reg, field) \
brcm_usb_ctrl_set(USB_CTRL_REG(base, reg), \
USB_CTRL_##reg##_##field##_MASK)
#define USB_CTRL_UNSET(base, reg, field) \
brcm_usb_ctrl_unset(USB_CTRL_REG(base, reg), \
USB_CTRL_##reg##_##field##_MASK)
struct brcm_usb_init_params;
struct brcm_usb_init_ops {
void (*init_ipp)(struct brcm_usb_init_params *params);
void (*init_common)(struct brcm_usb_init_params *params);
void (*init_eohci)(struct brcm_usb_init_params *params);
void (*init_xhci)(struct brcm_usb_init_params *params);
void (*uninit_common)(struct brcm_usb_init_params *params);
void (*uninit_eohci)(struct brcm_usb_init_params *params);
void (*uninit_xhci)(struct brcm_usb_init_params *params);
int (*get_dual_select)(struct brcm_usb_init_params *params);
void (*set_dual_select)(struct brcm_usb_init_params *params, int mode);
};
struct brcm_usb_init_params {
void __iomem *ctrl_regs;
void __iomem *xhci_ec_regs;
void __iomem *regs[BRCM_REGS_MAX];
int ioc;
int ipp;
int mode;
@ -24,19 +58,105 @@ struct brcm_usb_init_params {
int selected_family;
const char *family_name;
const u32 *usb_reg_bits_map;
const struct brcm_usb_init_ops *ops;
struct regmap *syscon_piarbctl;
bool wake_enabled;
bool suspend_with_clocks;
};
void brcm_usb_set_family_map(struct brcm_usb_init_params *params);
int brcm_usb_init_get_dual_select(struct brcm_usb_init_params *params);
void brcm_usb_init_set_dual_select(struct brcm_usb_init_params *params,
int mode);
void brcm_usb_dvr_init_7445(struct brcm_usb_init_params *params);
void brcm_usb_dvr_init_7216(struct brcm_usb_init_params *params);
void brcm_usb_dvr_init_7211b0(struct brcm_usb_init_params *params);
void brcm_usb_init_ipp(struct brcm_usb_init_params *ini);
void brcm_usb_init_common(struct brcm_usb_init_params *ini);
void brcm_usb_init_eohci(struct brcm_usb_init_params *ini);
void brcm_usb_init_xhci(struct brcm_usb_init_params *ini);
void brcm_usb_uninit_common(struct brcm_usb_init_params *ini);
void brcm_usb_uninit_eohci(struct brcm_usb_init_params *ini);
void brcm_usb_uninit_xhci(struct brcm_usb_init_params *ini);
static inline u32 brcm_usb_readl(void __iomem *addr)
{
/*
* MIPS endianness is configured by boot strap, which also reverses all
* bus endianness (i.e., big-endian CPU + big endian bus ==> native
* endian I/O).
*
* Other architectures (e.g., ARM) either do not support big endian, or
* else leave I/O in little endian mode.
*/
if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN))
return __raw_readl(addr);
else
return readl_relaxed(addr);
}
static inline void brcm_usb_writel(u32 val, void __iomem *addr)
{
/* See brcm_usb_readl() comments */
if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN))
__raw_writel(val, addr);
else
writel_relaxed(val, addr);
}
static inline void brcm_usb_ctrl_unset(void __iomem *reg, u32 mask)
{
brcm_usb_writel(brcm_usb_readl(reg) & ~(mask), reg);
};
static inline void brcm_usb_ctrl_set(void __iomem *reg, u32 mask)
{
brcm_usb_writel(brcm_usb_readl(reg) | (mask), reg);
};
static inline void brcm_usb_init_ipp(struct brcm_usb_init_params *ini)
{
if (ini->ops->init_ipp)
ini->ops->init_ipp(ini);
}
static inline void brcm_usb_init_common(struct brcm_usb_init_params *ini)
{
if (ini->ops->init_common)
ini->ops->init_common(ini);
}
static inline void brcm_usb_init_eohci(struct brcm_usb_init_params *ini)
{
if (ini->ops->init_eohci)
ini->ops->init_eohci(ini);
}
static inline void brcm_usb_init_xhci(struct brcm_usb_init_params *ini)
{
if (ini->ops->init_xhci)
ini->ops->init_xhci(ini);
}
static inline void brcm_usb_uninit_common(struct brcm_usb_init_params *ini)
{
if (ini->ops->uninit_common)
ini->ops->uninit_common(ini);
}
static inline void brcm_usb_uninit_eohci(struct brcm_usb_init_params *ini)
{
if (ini->ops->uninit_eohci)
ini->ops->uninit_eohci(ini);
}
static inline void brcm_usb_uninit_xhci(struct brcm_usb_init_params *ini)
{
if (ini->ops->uninit_xhci)
ini->ops->uninit_xhci(ini);
}
static inline int brcm_usb_get_dual_select(struct brcm_usb_init_params *ini)
{
if (ini->ops->get_dual_select)
return ini->ops->get_dual_select(ini);
return 0;
}
static inline void brcm_usb_set_dual_select(struct brcm_usb_init_params *ini,
int mode)
{
if (ini->ops->set_dual_select)
ini->ops->set_dual_select(ini, mode);
}
#endif /* _USB_BRCM_COMMON_INIT_H */


@ -16,6 +16,7 @@
#include <linux/interrupt.h>
#include <linux/soc/brcmstb/brcmstb.h>
#include <dt-bindings/phy/phy.h>
#include <linux/mfd/syscon.h>
#include "phy-brcm-usb-init.h"
@ -32,6 +33,12 @@ struct value_to_name_map {
const char *name;
};
struct match_chip_info {
void *init_func;
u8 required_regs[BRCM_REGS_MAX + 1];
u8 optional_reg;
};
static struct value_to_name_map brcm_dr_mode_to_name[] = {
{ USB_CTLR_MODE_HOST, "host" },
{ USB_CTLR_MODE_DEVICE, "peripheral" },
@ -57,11 +64,26 @@ struct brcm_usb_phy_data {
bool has_xhci;
struct clk *usb_20_clk;
struct clk *usb_30_clk;
struct clk *suspend_clk;
struct mutex mutex; /* serialize phy init */
int init_count;
int wake_irq;
struct brcm_usb_phy phys[BRCM_USB_PHY_ID_MAX];
};
static s8 *node_reg_names[BRCM_REGS_MAX] = {
"crtl", "xhci_ec", "xhci_gbl", "usb_phy", "usb_mdio", "bdc_ec"
};
static irqreturn_t brcm_usb_phy_wake_isr(int irq, void *dev_id)
{
struct phy *gphy = dev_id;
pm_wakeup_event(&gphy->dev, 0);
return IRQ_HANDLED;
}
static int brcm_usb_phy_init(struct phy *gphy)
{
struct brcm_usb_phy *phy = phy_get_drvdata(gphy);
@ -74,8 +96,9 @@ static int brcm_usb_phy_init(struct phy *gphy)
*/
mutex_lock(&priv->mutex);
if (priv->init_count++ == 0) {
clk_enable(priv->usb_20_clk);
clk_enable(priv->usb_30_clk);
clk_prepare_enable(priv->usb_20_clk);
clk_prepare_enable(priv->usb_30_clk);
clk_prepare_enable(priv->suspend_clk);
brcm_usb_init_common(&priv->ini);
}
mutex_unlock(&priv->mutex);
@ -106,8 +129,9 @@ static int brcm_usb_phy_exit(struct phy *gphy)
mutex_lock(&priv->mutex);
if (--priv->init_count == 0) {
brcm_usb_uninit_common(&priv->ini);
clk_disable(priv->usb_20_clk);
clk_disable(priv->usb_30_clk);
clk_disable_unprepare(priv->usb_20_clk);
clk_disable_unprepare(priv->usb_30_clk);
clk_disable_unprepare(priv->suspend_clk);
}
mutex_unlock(&priv->mutex);
phy->inited = false;
@ -194,7 +218,7 @@ static ssize_t dual_select_store(struct device *dev,
res = name_to_value(&brcm_dual_mode_to_name[0],
ARRAY_SIZE(brcm_dual_mode_to_name), buf, &value);
if (!res) {
brcm_usb_init_set_dual_select(&priv->ini, value);
brcm_usb_set_dual_select(&priv->ini, value);
res = len;
}
mutex_unlock(&sysfs_lock);
@ -209,7 +233,7 @@ static ssize_t dual_select_show(struct device *dev,
int value;
mutex_lock(&sysfs_lock);
value = brcm_usb_init_get_dual_select(&priv->ini);
value = brcm_usb_get_dual_select(&priv->ini);
mutex_unlock(&sysfs_lock);
return sprintf(buf, "%s\n",
value_to_name(&brcm_dual_mode_to_name[0],
@ -228,15 +252,106 @@ static const struct attribute_group brcm_usb_phy_group = {
.attrs = brcm_usb_phy_attrs,
};
static int brcm_usb_phy_dvr_init(struct device *dev,
static struct match_chip_info chip_info_7216 = {
.init_func = &brcm_usb_dvr_init_7216,
.required_regs = {
BRCM_REGS_CTRL,
BRCM_REGS_XHCI_EC,
BRCM_REGS_XHCI_GBL,
-1,
},
};
static struct match_chip_info chip_info_7211b0 = {
.init_func = &brcm_usb_dvr_init_7211b0,
.required_regs = {
BRCM_REGS_CTRL,
BRCM_REGS_XHCI_EC,
BRCM_REGS_XHCI_GBL,
BRCM_REGS_USB_PHY,
BRCM_REGS_USB_MDIO,
-1,
},
.optional_reg = BRCM_REGS_BDC_EC,
};
static struct match_chip_info chip_info_7445 = {
.init_func = &brcm_usb_dvr_init_7445,
.required_regs = {
BRCM_REGS_CTRL,
BRCM_REGS_XHCI_EC,
-1,
},
};
static const struct of_device_id brcm_usb_dt_ids[] = {
{
.compatible = "brcm,bcm7216-usb-phy",
.data = &chip_info_7216,
},
{
.compatible = "brcm,bcm7211-usb-phy",
.data = &chip_info_7211b0,
},
{
.compatible = "brcm,brcmstb-usb-phy",
.data = &chip_info_7445,
},
{ /* sentinel */ }
};
static int brcm_usb_get_regs(struct platform_device *pdev,
enum brcmusb_reg_sel regs,
struct brcm_usb_init_params *ini,
bool optional)
{
struct resource *res;
/* Older DT nodes have ctrl and optional xhci_ec by index only */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
node_reg_names[regs]);
if (res == NULL) {
if (regs == BRCM_REGS_CTRL) {
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
} else if (regs == BRCM_REGS_XHCI_EC) {
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
/* XHCI_EC registers are optional */
if (res == NULL)
return 0;
}
if (res == NULL) {
if (optional) {
dev_dbg(&pdev->dev,
"Optional reg %s not found\n",
node_reg_names[regs]);
return 0;
}
dev_err(&pdev->dev, "can't get %s base addr\n",
node_reg_names[regs]);
return 1;
}
}
ini->regs[regs] = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(ini->regs[regs])) {
dev_err(&pdev->dev, "can't map %s register space\n",
node_reg_names[regs]);
return 1;
}
return 0;
}
static int brcm_usb_phy_dvr_init(struct platform_device *pdev,
struct brcm_usb_phy_data *priv,
struct device_node *dn)
{
struct phy *gphy;
struct device *dev = &pdev->dev;
struct phy *gphy = NULL;
int err;
priv->usb_20_clk = of_clk_get_by_name(dn, "sw_usb");
if (IS_ERR(priv->usb_20_clk)) {
if (PTR_ERR(priv->usb_20_clk) == -EPROBE_DEFER)
return -EPROBE_DEFER;
dev_info(dev, "Clock not found in Device Tree\n");
priv->usb_20_clk = NULL;
}
@ -267,6 +382,8 @@ static int brcm_usb_phy_dvr_init(struct device *dev,
priv->usb_30_clk = of_clk_get_by_name(dn, "sw_usb3");
if (IS_ERR(priv->usb_30_clk)) {
if (PTR_ERR(priv->usb_30_clk) == -EPROBE_DEFER)
return -EPROBE_DEFER;
dev_info(dev,
"USB3.0 clock not found in Device Tree\n");
priv->usb_30_clk = NULL;
@ -275,18 +392,46 @@ static int brcm_usb_phy_dvr_init(struct device *dev,
if (err)
return err;
}
priv->suspend_clk = clk_get(dev, "usb0_freerun");
if (IS_ERR(priv->suspend_clk)) {
if (PTR_ERR(priv->suspend_clk) == -EPROBE_DEFER)
return -EPROBE_DEFER;
dev_err(dev, "Suspend Clock not found in Device Tree\n");
priv->suspend_clk = NULL;
}
priv->wake_irq = platform_get_irq_byname(pdev, "wake");
if (priv->wake_irq < 0)
priv->wake_irq = platform_get_irq_byname(pdev, "wakeup");
if (priv->wake_irq >= 0) {
err = devm_request_irq(dev, priv->wake_irq,
brcm_usb_phy_wake_isr, 0,
dev_name(dev), gphy);
if (err < 0)
return err;
device_set_wakeup_capable(dev, 1);
} else {
dev_info(dev,
"Wake interrupt missing, system wake not supported\n");
}
return 0;
}
static int brcm_usb_phy_probe(struct platform_device *pdev)
{
struct resource *res;
struct device *dev = &pdev->dev;
struct brcm_usb_phy_data *priv;
struct phy_provider *phy_provider;
struct device_node *dn = pdev->dev.of_node;
int err;
const char *mode;
const struct of_device_id *match;
void (*dvr_init)(struct brcm_usb_init_params *params);
const struct match_chip_info *info;
struct regmap *rmap;
int x;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
@ -295,30 +440,14 @@ static int brcm_usb_phy_probe(struct platform_device *pdev)
priv->ini.family_id = brcmstb_get_family_id();
priv->ini.product_id = brcmstb_get_product_id();
brcm_usb_set_family_map(&priv->ini);
match = of_match_node(brcm_usb_dt_ids, dev->of_node);
info = match->data;
dvr_init = info->init_func;
(*dvr_init)(&priv->ini);
dev_dbg(dev, "Best mapping table is for %s\n",
priv->ini.family_name);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev, "can't get USB_CTRL base address\n");
return -EINVAL;
}
priv->ini.ctrl_regs = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->ini.ctrl_regs)) {
dev_err(dev, "can't map CTRL register space\n");
return -EINVAL;
}
/* The XHCI EC registers are optional */
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (res) {
priv->ini.xhci_ec_regs =
devm_ioremap_resource(dev, res);
if (IS_ERR(priv->ini.xhci_ec_regs)) {
dev_err(dev, "can't map XHCI EC register space\n");
return -EINVAL;
}
}
of_property_read_u32(dn, "brcm,ipp", &priv->ini.ipp);
of_property_read_u32(dn, "brcm,ioc", &priv->ini.ioc);
@ -335,7 +464,23 @@ static int brcm_usb_phy_probe(struct platform_device *pdev)
if (of_property_read_bool(dn, "brcm,has-eohci"))
priv->has_eohci = true;
err = brcm_usb_phy_dvr_init(dev, priv, dn);
for (x = 0; x < BRCM_REGS_MAX; x++) {
if (info->required_regs[x] >= BRCM_REGS_MAX)
break;
err = brcm_usb_get_regs(pdev, info->required_regs[x],
&priv->ini, false);
if (err)
return -EINVAL;
}
if (info->optional_reg) {
err = brcm_usb_get_regs(pdev, info->optional_reg,
&priv->ini, true);
if (err)
return -EINVAL;
}
err = brcm_usb_phy_dvr_init(pdev, priv, dn);
if (err)
return err;
@ -354,14 +499,23 @@ static int brcm_usb_phy_probe(struct platform_device *pdev)
if (err)
dev_warn(dev, "Error creating sysfs attributes\n");
/* Get piarbctl syscon if it exists */
rmap = syscon_regmap_lookup_by_phandle(dev->of_node,
"syscon-piarbctl");
if (IS_ERR(rmap))
rmap = syscon_regmap_lookup_by_phandle(dev->of_node,
"brcm,syscon-piarbctl");
if (!IS_ERR(rmap))
priv->ini.syscon_piarbctl = rmap;
/* start with everything off */
if (priv->has_xhci)
brcm_usb_uninit_xhci(&priv->ini);
if (priv->has_eohci)
brcm_usb_uninit_eohci(&priv->ini);
brcm_usb_uninit_common(&priv->ini);
clk_disable(priv->usb_20_clk);
clk_disable(priv->usb_30_clk);
clk_disable_unprepare(priv->usb_20_clk);
clk_disable_unprepare(priv->usb_30_clk);
phy_provider = devm_of_phy_provider_register(dev, brcm_usb_phy_xlate);
@ -381,8 +535,28 @@ static int brcm_usb_phy_suspend(struct device *dev)
struct brcm_usb_phy_data *priv = dev_get_drvdata(dev);
if (priv->init_count) {
clk_disable(priv->usb_20_clk);
clk_disable(priv->usb_30_clk);
priv->ini.wake_enabled = device_may_wakeup(dev);
if (priv->phys[BRCM_USB_PHY_3_0].inited)
brcm_usb_uninit_xhci(&priv->ini);
if (priv->phys[BRCM_USB_PHY_2_0].inited)
brcm_usb_uninit_eohci(&priv->ini);
brcm_usb_uninit_common(&priv->ini);
/*
* Handle the clocks unless needed for wake. This has
* to work for both older XHCI->3.0-clks, EOHCI->2.0-clks
* and newer XHCI->2.0-clks/3.0-clks.
*/
if (!priv->ini.suspend_with_clocks) {
if (priv->phys[BRCM_USB_PHY_3_0].inited)
clk_disable_unprepare(priv->usb_30_clk);
if (priv->phys[BRCM_USB_PHY_2_0].inited ||
!priv->has_eohci)
clk_disable_unprepare(priv->usb_20_clk);
}
if (priv->wake_irq >= 0)
enable_irq_wake(priv->wake_irq);
}
return 0;
}
@ -391,8 +565,8 @@ static int brcm_usb_phy_resume(struct device *dev)
{
struct brcm_usb_phy_data *priv = dev_get_drvdata(dev);
clk_enable(priv->usb_20_clk);
clk_enable(priv->usb_30_clk);
clk_prepare_enable(priv->usb_20_clk);
clk_prepare_enable(priv->usb_30_clk);
brcm_usb_init_ipp(&priv->ini);
/*
@ -400,18 +574,22 @@ static int brcm_usb_phy_resume(struct device *dev)
* Uninitialize anything that wasn't previously initialized.
*/
if (priv->init_count) {
if (priv->wake_irq >= 0)
disable_irq_wake(priv->wake_irq);
brcm_usb_init_common(&priv->ini);
if (priv->phys[BRCM_USB_PHY_2_0].inited) {
brcm_usb_init_eohci(&priv->ini);
} else if (priv->has_eohci) {
brcm_usb_uninit_eohci(&priv->ini);
clk_disable(priv->usb_20_clk);
clk_disable_unprepare(priv->usb_20_clk);
}
if (priv->phys[BRCM_USB_PHY_3_0].inited) {
brcm_usb_init_xhci(&priv->ini);
} else if (priv->has_xhci) {
brcm_usb_uninit_xhci(&priv->ini);
clk_disable(priv->usb_30_clk);
clk_disable_unprepare(priv->usb_30_clk);
if (!priv->has_eohci)
clk_disable_unprepare(priv->usb_20_clk);
}
} else {
if (priv->has_xhci)
@ -419,10 +597,10 @@ static int brcm_usb_phy_resume(struct device *dev)
if (priv->has_eohci)
brcm_usb_uninit_eohci(&priv->ini);
brcm_usb_uninit_common(&priv->ini);
clk_disable(priv->usb_20_clk);
clk_disable(priv->usb_30_clk);
clk_disable_unprepare(priv->usb_20_clk);
clk_disable_unprepare(priv->usb_30_clk);
}
priv->ini.wake_enabled = false;
return 0;
}
#endif /* CONFIG_PM_SLEEP */
@ -431,11 +609,6 @@ static const struct dev_pm_ops brcm_usb_phy_pm_ops = {
SET_LATE_SYSTEM_SLEEP_PM_OPS(brcm_usb_phy_suspend, brcm_usb_phy_resume)
};
static const struct of_device_id brcm_usb_dt_ids[] = {
{ .compatible = "brcm,brcmstb-usb-phy" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, brcm_usb_dt_ids);
static struct platform_driver brcm_usb_driver = {


@ -22,48 +22,134 @@
#include <dt-bindings/phy/phy.h>
/* PHY register offsets */
#define SIERRA_PHY_PLL_CFG (0xc00e << 2)
#define SIERRA_DET_STANDEC_A (0x4000 << 2)
#define SIERRA_DET_STANDEC_B (0x4001 << 2)
#define SIERRA_DET_STANDEC_C (0x4002 << 2)
#define SIERRA_DET_STANDEC_D (0x4003 << 2)
#define SIERRA_DET_STANDEC_E (0x4004 << 2)
#define SIERRA_PSM_LANECAL (0x4008 << 2)
#define SIERRA_PSM_DIAG (0x4015 << 2)
#define SIERRA_PSC_TX_A0 (0x4028 << 2)
#define SIERRA_PSC_TX_A1 (0x4029 << 2)
#define SIERRA_PSC_TX_A2 (0x402A << 2)
#define SIERRA_PSC_TX_A3 (0x402B << 2)
#define SIERRA_PSC_RX_A0 (0x4030 << 2)
#define SIERRA_PSC_RX_A1 (0x4031 << 2)
#define SIERRA_PSC_RX_A2 (0x4032 << 2)
#define SIERRA_PSC_RX_A3 (0x4033 << 2)
#define SIERRA_PLLCTRL_SUBRATE (0x403A << 2)
#define SIERRA_PLLCTRL_GEN_D (0x403E << 2)
#define SIERRA_DRVCTRL_ATTEN (0x406A << 2)
#define SIERRA_CLKPATHCTRL_TMR (0x4081 << 2)
#define SIERRA_RX_CREQ_FLTR_A_MODE1 (0x4087 << 2)
#define SIERRA_RX_CREQ_FLTR_A_MODE0 (0x4088 << 2)
#define SIERRA_CREQ_CCLKDET_MODE01 (0x408E << 2)
#define SIERRA_RX_CTLE_MAINTENANCE (0x4091 << 2)
#define SIERRA_CREQ_FSMCLK_SEL (0x4092 << 2)
#define SIERRA_CTLELUT_CTRL (0x4098 << 2)
#define SIERRA_DFE_ECMP_RATESEL (0x40C0 << 2)
#define SIERRA_DFE_SMP_RATESEL (0x40C1 << 2)
#define SIERRA_DEQ_VGATUNE_CTRL (0x40E1 << 2)
#define SIERRA_TMRVAL_MODE3 (0x416E << 2)
#define SIERRA_TMRVAL_MODE2 (0x416F << 2)
#define SIERRA_TMRVAL_MODE1 (0x4170 << 2)
#define SIERRA_TMRVAL_MODE0 (0x4171 << 2)
#define SIERRA_PICNT_MODE1 (0x4174 << 2)
#define SIERRA_CPI_OUTBUF_RATESEL (0x417C << 2)
#define SIERRA_LFPSFILT_NS (0x418A << 2)
#define SIERRA_LFPSFILT_RD (0x418B << 2)
#define SIERRA_LFPSFILT_MP (0x418C << 2)
#define SIERRA_SDFILT_H2L_A (0x4191 << 2)
#define SIERRA_COMMON_CDB_OFFSET 0x0
#define SIERRA_MACRO_ID_REG 0x0
#define SIERRA_CMN_PLLLC_MODE_PREG 0x48
#define SIERRA_CMN_PLLLC_LF_COEFF_MODE1_PREG 0x49
#define SIERRA_CMN_PLLLC_LF_COEFF_MODE0_PREG 0x4A
#define SIERRA_CMN_PLLLC_LOCK_CNTSTART_PREG 0x4B
#define SIERRA_CMN_PLLLC_BWCAL_MODE1_PREG 0x4F
#define SIERRA_CMN_PLLLC_BWCAL_MODE0_PREG 0x50
#define SIERRA_CMN_PLLLC_SS_TIME_STEPSIZE_MODE_PREG 0x62
#define SIERRA_MACRO_ID 0x00007364
#define SIERRA_MAX_LANES 4
#define SIERRA_LANE_CDB_OFFSET(ln, block_offset, reg_offset) \
((0x4000 << (block_offset)) + \
(((ln) << 9) << (reg_offset)))
#define SIERRA_DET_STANDEC_A_PREG 0x000
#define SIERRA_DET_STANDEC_B_PREG 0x001
#define SIERRA_DET_STANDEC_C_PREG 0x002
#define SIERRA_DET_STANDEC_D_PREG 0x003
#define SIERRA_DET_STANDEC_E_PREG 0x004
#define SIERRA_PSM_LANECAL_DLY_A1_RESETS_PREG 0x008
#define SIERRA_PSM_A0IN_TMR_PREG 0x009
#define SIERRA_PSM_DIAG_PREG 0x015
#define SIERRA_PSC_TX_A0_PREG 0x028
#define SIERRA_PSC_TX_A1_PREG 0x029
#define SIERRA_PSC_TX_A2_PREG 0x02A
#define SIERRA_PSC_TX_A3_PREG 0x02B
#define SIERRA_PSC_RX_A0_PREG 0x030
#define SIERRA_PSC_RX_A1_PREG 0x031
#define SIERRA_PSC_RX_A2_PREG 0x032
#define SIERRA_PSC_RX_A3_PREG 0x033
#define SIERRA_PLLCTRL_SUBRATE_PREG 0x03A
#define SIERRA_PLLCTRL_GEN_D_PREG 0x03E
#define SIERRA_PLLCTRL_CPGAIN_MODE_PREG 0x03F
#define SIERRA_PLLCTRL_STATUS_PREG 0x044
#define SIERRA_CLKPATH_BIASTRIM_PREG 0x04B
#define SIERRA_DFE_BIASTRIM_PREG 0x04C
#define SIERRA_DRVCTRL_ATTEN_PREG 0x06A
#define SIERRA_CLKPATHCTRL_TMR_PREG 0x081
#define SIERRA_RX_CREQ_FLTR_A_MODE3_PREG 0x085
#define SIERRA_RX_CREQ_FLTR_A_MODE2_PREG 0x086
#define SIERRA_RX_CREQ_FLTR_A_MODE1_PREG 0x087
#define SIERRA_RX_CREQ_FLTR_A_MODE0_PREG 0x088
#define SIERRA_CREQ_CCLKDET_MODE01_PREG 0x08E
#define SIERRA_RX_CTLE_MAINTENANCE_PREG 0x091
#define SIERRA_CREQ_FSMCLK_SEL_PREG 0x092
#define SIERRA_CREQ_EQ_CTRL_PREG 0x093
#define SIERRA_CREQ_SPARE_PREG 0x096
#define SIERRA_CREQ_EQ_OPEN_EYE_THRESH_PREG 0x097
#define SIERRA_CTLELUT_CTRL_PREG 0x098
#define SIERRA_DFE_ECMP_RATESEL_PREG 0x0C0
#define SIERRA_DFE_SMP_RATESEL_PREG 0x0C1
#define SIERRA_DEQ_PHALIGN_CTRL 0x0C4
#define SIERRA_DEQ_CONCUR_CTRL1_PREG 0x0C8
#define SIERRA_DEQ_CONCUR_CTRL2_PREG 0x0C9
#define SIERRA_DEQ_EPIPWR_CTRL2_PREG 0x0CD
#define SIERRA_DEQ_FAST_MAINT_CYCLES_PREG 0x0CE
#define SIERRA_DEQ_ERRCMP_CTRL_PREG 0x0D0
#define SIERRA_DEQ_OFFSET_CTRL_PREG 0x0D8
#define SIERRA_DEQ_GAIN_CTRL_PREG 0x0E0
#define SIERRA_DEQ_VGATUNE_CTRL_PREG 0x0E1
#define SIERRA_DEQ_GLUT0 0x0E8
#define SIERRA_DEQ_GLUT1 0x0E9
#define SIERRA_DEQ_GLUT2 0x0EA
#define SIERRA_DEQ_GLUT3 0x0EB
#define SIERRA_DEQ_GLUT4 0x0EC
#define SIERRA_DEQ_GLUT5 0x0ED
#define SIERRA_DEQ_GLUT6 0x0EE
#define SIERRA_DEQ_GLUT7 0x0EF
#define SIERRA_DEQ_GLUT8 0x0F0
#define SIERRA_DEQ_GLUT9 0x0F1
#define SIERRA_DEQ_GLUT10 0x0F2
#define SIERRA_DEQ_GLUT11 0x0F3
#define SIERRA_DEQ_GLUT12 0x0F4
#define SIERRA_DEQ_GLUT13 0x0F5
#define SIERRA_DEQ_GLUT14 0x0F6
#define SIERRA_DEQ_GLUT15 0x0F7
#define SIERRA_DEQ_GLUT16 0x0F8
#define SIERRA_DEQ_ALUT0 0x108
#define SIERRA_DEQ_ALUT1 0x109
#define SIERRA_DEQ_ALUT2 0x10A
#define SIERRA_DEQ_ALUT3 0x10B
#define SIERRA_DEQ_ALUT4 0x10C
#define SIERRA_DEQ_ALUT5 0x10D
#define SIERRA_DEQ_ALUT6 0x10E
#define SIERRA_DEQ_ALUT7 0x10F
#define SIERRA_DEQ_ALUT8 0x110
#define SIERRA_DEQ_ALUT9 0x111
#define SIERRA_DEQ_ALUT10 0x112
#define SIERRA_DEQ_ALUT11 0x113
#define SIERRA_DEQ_ALUT12 0x114
#define SIERRA_DEQ_ALUT13 0x115
#define SIERRA_DEQ_DFETAP_CTRL_PREG 0x128
#define SIERRA_DFE_EN_1010_IGNORE_PREG 0x134
#define SIERRA_DEQ_TAU_CTRL1_SLOW_MAINT_PREG 0x150
#define SIERRA_DEQ_TAU_CTRL2_PREG 0x151
#define SIERRA_DEQ_PICTRL_PREG 0x161
#define SIERRA_CPICAL_TMRVAL_MODE1_PREG 0x170
#define SIERRA_CPICAL_TMRVAL_MODE0_PREG 0x171
#define SIERRA_CPICAL_PICNT_MODE1_PREG 0x174
#define SIERRA_CPI_OUTBUF_RATESEL_PREG 0x17C
#define SIERRA_CPICAL_RES_STARTCODE_MODE23_PREG 0x183
#define SIERRA_LFPSDET_SUPPORT_PREG 0x188
#define SIERRA_LFPSFILT_NS_PREG 0x18A
#define SIERRA_LFPSFILT_RD_PREG 0x18B
#define SIERRA_LFPSFILT_MP_PREG 0x18C
#define SIERRA_SIGDET_SUPPORT_PREG 0x190
#define SIERRA_SDFILT_H2L_A_PREG 0x191
#define SIERRA_SDFILT_L2H_PREG 0x193
#define SIERRA_RXBUFFER_CTLECTRL_PREG 0x19E
#define SIERRA_RXBUFFER_RCDFECTRL_PREG 0x19F
#define SIERRA_RXBUFFER_DFECTRL_PREG 0x1A0
#define SIERRA_DEQ_TAU_CTRL1_FAST_MAINT_PREG 0x14F
#define SIERRA_DEQ_TAU_CTRL1_SLOW_MAINT_PREG 0x150
#define SIERRA_PHY_CONFIG_CTRL_OFFSET(block_offset) \
(0xc000 << (block_offset))
#define SIERRA_PHY_PLL_CFG 0xe
#define SIERRA_MACRO_ID 0x00007364
#define SIERRA_MAX_LANES 16
#define PLL_LOCK_TIME 100000
static const struct reg_field macro_id_type =
REG_FIELD(SIERRA_MACRO_ID_REG, 0, 15);
static const struct reg_field phy_pll_cfg_1 =
REG_FIELD(SIERRA_PHY_PLL_CFG, 1, 1);
static const struct reg_field pllctrl_lock =
REG_FIELD(SIERRA_PLLCTRL_STATUS_PREG, 0, 0);
struct cdns_sierra_inst {
struct phy *phy;
@@ -80,53 +166,172 @@ struct cdns_reg_pairs {
struct cdns_sierra_data {
u32 id_value;
u32 pcie_regs;
u32 usb_regs;
struct cdns_reg_pairs *pcie_vals;
struct cdns_reg_pairs *usb_vals;
u8 block_offset_shift;
u8 reg_offset_shift;
u32 pcie_cmn_regs;
u32 pcie_ln_regs;
u32 usb_cmn_regs;
u32 usb_ln_regs;
struct cdns_reg_pairs *pcie_cmn_vals;
struct cdns_reg_pairs *pcie_ln_vals;
struct cdns_reg_pairs *usb_cmn_vals;
struct cdns_reg_pairs *usb_ln_vals;
};
struct cdns_regmap_cdb_context {
struct device *dev;
void __iomem *base;
u8 reg_offset_shift;
};
struct cdns_sierra_phy {
struct device *dev;
void __iomem *base;
struct regmap *regmap;
struct cdns_sierra_data *init_data;
struct cdns_sierra_inst phys[SIERRA_MAX_LANES];
struct reset_control *phy_rst;
struct reset_control *apb_rst;
struct regmap *regmap_lane_cdb[SIERRA_MAX_LANES];
struct regmap *regmap_phy_config_ctrl;
struct regmap *regmap_common_cdb;
struct regmap_field *macro_id_type;
struct regmap_field *phy_pll_cfg_1;
struct regmap_field *pllctrl_lock[SIERRA_MAX_LANES];
struct clk *clk;
struct clk *cmn_refclk_dig_div;
struct clk *cmn_refclk1_dig_div;
int nsubnodes;
u32 num_lanes;
bool autoconf;
};
static void cdns_sierra_phy_init(struct phy *gphy)
static int cdns_regmap_write(void *context, unsigned int reg, unsigned int val)
{
struct cdns_regmap_cdb_context *ctx = context;
u32 offset = reg << ctx->reg_offset_shift;
writew(val, ctx->base + offset);
return 0;
}
static int cdns_regmap_read(void *context, unsigned int reg, unsigned int *val)
{
struct cdns_regmap_cdb_context *ctx = context;
u32 offset = reg << ctx->reg_offset_shift;
*val = readw(ctx->base + offset);
return 0;
}
#define SIERRA_LANE_CDB_REGMAP_CONF(n) \
{ \
.name = "sierra_lane" n "_cdb", \
.reg_stride = 1, \
.fast_io = true, \
.reg_write = cdns_regmap_write, \
.reg_read = cdns_regmap_read, \
}
static struct regmap_config cdns_sierra_lane_cdb_config[] = {
SIERRA_LANE_CDB_REGMAP_CONF("0"),
SIERRA_LANE_CDB_REGMAP_CONF("1"),
SIERRA_LANE_CDB_REGMAP_CONF("2"),
SIERRA_LANE_CDB_REGMAP_CONF("3"),
SIERRA_LANE_CDB_REGMAP_CONF("4"),
SIERRA_LANE_CDB_REGMAP_CONF("5"),
SIERRA_LANE_CDB_REGMAP_CONF("6"),
SIERRA_LANE_CDB_REGMAP_CONF("7"),
SIERRA_LANE_CDB_REGMAP_CONF("8"),
SIERRA_LANE_CDB_REGMAP_CONF("9"),
SIERRA_LANE_CDB_REGMAP_CONF("10"),
SIERRA_LANE_CDB_REGMAP_CONF("11"),
SIERRA_LANE_CDB_REGMAP_CONF("12"),
SIERRA_LANE_CDB_REGMAP_CONF("13"),
SIERRA_LANE_CDB_REGMAP_CONF("14"),
SIERRA_LANE_CDB_REGMAP_CONF("15"),
};
static struct regmap_config cdns_sierra_common_cdb_config = {
.name = "sierra_common_cdb",
.reg_stride = 1,
.fast_io = true,
.reg_write = cdns_regmap_write,
.reg_read = cdns_regmap_read,
};
static struct regmap_config cdns_sierra_phy_config_ctrl_config = {
.name = "sierra_phy_config_ctrl",
.reg_stride = 1,
.fast_io = true,
.reg_write = cdns_regmap_write,
.reg_read = cdns_regmap_read,
};
static int cdns_sierra_phy_init(struct phy *gphy)
{
struct cdns_sierra_inst *ins = phy_get_drvdata(gphy);
struct cdns_sierra_phy *phy = dev_get_drvdata(gphy->dev.parent);
struct regmap *regmap;
int i, j;
struct cdns_reg_pairs *vals;
u32 num_regs;
struct cdns_reg_pairs *cmn_vals, *ln_vals;
u32 num_cmn_regs, num_ln_regs;
/* Initialise the PHY registers, unless auto configured */
if (phy->autoconf)
return 0;
clk_set_rate(phy->cmn_refclk_dig_div, 25000000);
clk_set_rate(phy->cmn_refclk1_dig_div, 25000000);
if (ins->phy_type == PHY_TYPE_PCIE) {
num_regs = phy->init_data->pcie_regs;
vals = phy->init_data->pcie_vals;
num_cmn_regs = phy->init_data->pcie_cmn_regs;
num_ln_regs = phy->init_data->pcie_ln_regs;
cmn_vals = phy->init_data->pcie_cmn_vals;
ln_vals = phy->init_data->pcie_ln_vals;
} else if (ins->phy_type == PHY_TYPE_USB3) {
num_regs = phy->init_data->usb_regs;
vals = phy->init_data->usb_vals;
num_cmn_regs = phy->init_data->usb_cmn_regs;
num_ln_regs = phy->init_data->usb_ln_regs;
cmn_vals = phy->init_data->usb_cmn_vals;
ln_vals = phy->init_data->usb_ln_vals;
} else {
return;
return -EINVAL;
}
for (i = 0; i < ins->num_lanes; i++)
for (j = 0; j < num_regs ; j++)
writel(vals[j].val, phy->base +
vals[j].off + (i + ins->mlane) * 0x800);
regmap = phy->regmap_common_cdb;
for (j = 0; j < num_cmn_regs ; j++)
regmap_write(regmap, cmn_vals[j].off, cmn_vals[j].val);
for (i = 0; i < ins->num_lanes; i++) {
for (j = 0; j < num_ln_regs ; j++) {
regmap = phy->regmap_lane_cdb[i + ins->mlane];
regmap_write(regmap, ln_vals[j].off, ln_vals[j].val);
}
}
return 0;
}
static int cdns_sierra_phy_on(struct phy *gphy)
{
struct cdns_sierra_phy *sp = dev_get_drvdata(gphy->dev.parent);
struct cdns_sierra_inst *ins = phy_get_drvdata(gphy);
struct device *dev = sp->dev;
u32 val;
int ret;
/* Take the PHY lane group out of reset */
return reset_control_deassert(ins->lnk_rst);
ret = reset_control_deassert(ins->lnk_rst);
if (ret) {
dev_err(dev, "Failed to take the PHY lane out of reset\n");
return ret;
}
ret = regmap_field_read_poll_timeout(sp->pllctrl_lock[ins->mlane],
val, val, 1000, PLL_LOCK_TIME);
if (ret < 0)
dev_err(dev, "PLL lock of lane failed\n");
return ret;
}
static int cdns_sierra_phy_off(struct phy *gphy)
@@ -136,9 +341,20 @@ static int cdns_sierra_phy_off(struct phy *gphy)
return reset_control_assert(ins->lnk_rst);
}
static int cdns_sierra_phy_reset(struct phy *gphy)
{
struct cdns_sierra_phy *sp = dev_get_drvdata(gphy->dev.parent);
reset_control_assert(sp->phy_rst);
reset_control_deassert(sp->phy_rst);
return 0;
};
static const struct phy_ops ops = {
.init = cdns_sierra_phy_init,
.power_on = cdns_sierra_phy_on,
.power_off = cdns_sierra_phy_off,
.reset = cdns_sierra_phy_reset,
.owner = THIS_MODULE,
};
@@ -159,41 +375,152 @@ static int cdns_sierra_get_optional(struct cdns_sierra_inst *inst,
static const struct of_device_id cdns_sierra_id_table[];
static struct regmap *cdns_regmap_init(struct device *dev, void __iomem *base,
u32 block_offset, u8 reg_offset_shift,
const struct regmap_config *config)
{
struct cdns_regmap_cdb_context *ctx;
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return ERR_PTR(-ENOMEM);
ctx->dev = dev;
ctx->base = base + block_offset;
ctx->reg_offset_shift = reg_offset_shift;
return devm_regmap_init(dev, NULL, ctx, config);
}
static int cdns_regfield_init(struct cdns_sierra_phy *sp)
{
struct device *dev = sp->dev;
struct regmap_field *field;
struct regmap *regmap;
int i;
regmap = sp->regmap_common_cdb;
field = devm_regmap_field_alloc(dev, regmap, macro_id_type);
if (IS_ERR(field)) {
dev_err(dev, "MACRO_ID_TYPE reg field init failed\n");
return PTR_ERR(field);
}
sp->macro_id_type = field;
regmap = sp->regmap_phy_config_ctrl;
field = devm_regmap_field_alloc(dev, regmap, phy_pll_cfg_1);
if (IS_ERR(field)) {
dev_err(dev, "PHY_PLL_CFG_1 reg field init failed\n");
return PTR_ERR(field);
}
sp->phy_pll_cfg_1 = field;
for (i = 0; i < SIERRA_MAX_LANES; i++) {
regmap = sp->regmap_lane_cdb[i];
field = devm_regmap_field_alloc(dev, regmap, pllctrl_lock);
if (IS_ERR(field)) {
dev_err(dev, "P%d_ENABLE reg field init failed\n", i);
return PTR_ERR(field);
}
sp->pllctrl_lock[i] = field;
}
return 0;
}
static int cdns_regmap_init_blocks(struct cdns_sierra_phy *sp,
void __iomem *base, u8 block_offset_shift,
u8 reg_offset_shift)
{
struct device *dev = sp->dev;
struct regmap *regmap;
u32 block_offset;
int i;
for (i = 0; i < SIERRA_MAX_LANES; i++) {
block_offset = SIERRA_LANE_CDB_OFFSET(i, block_offset_shift,
reg_offset_shift);
regmap = cdns_regmap_init(dev, base, block_offset,
reg_offset_shift,
&cdns_sierra_lane_cdb_config[i]);
if (IS_ERR(regmap)) {
dev_err(dev, "Failed to init lane CDB regmap\n");
return PTR_ERR(regmap);
}
sp->regmap_lane_cdb[i] = regmap;
}
regmap = cdns_regmap_init(dev, base, SIERRA_COMMON_CDB_OFFSET,
reg_offset_shift,
&cdns_sierra_common_cdb_config);
if (IS_ERR(regmap)) {
dev_err(dev, "Failed to init common CDB regmap\n");
return PTR_ERR(regmap);
}
sp->regmap_common_cdb = regmap;
block_offset = SIERRA_PHY_CONFIG_CTRL_OFFSET(block_offset_shift);
regmap = cdns_regmap_init(dev, base, block_offset, reg_offset_shift,
&cdns_sierra_phy_config_ctrl_config);
if (IS_ERR(regmap)) {
dev_err(dev, "Failed to init PHY config and control regmap\n");
return PTR_ERR(regmap);
}
sp->regmap_phy_config_ctrl = regmap;
return 0;
}
static int cdns_sierra_phy_probe(struct platform_device *pdev)
{
struct cdns_sierra_phy *sp;
struct phy_provider *phy_provider;
struct device *dev = &pdev->dev;
const struct of_device_id *match;
struct cdns_sierra_data *data;
unsigned int id_value;
struct resource *res;
int i, ret, node = 0;
void __iomem *base;
struct clk *clk;
struct device_node *dn = dev->of_node, *child;
if (of_get_child_count(dn) == 0)
return -ENODEV;
/* Get init data for this PHY */
match = of_match_device(cdns_sierra_id_table, dev);
if (!match)
return -EINVAL;
data = (struct cdns_sierra_data *)match->data;
sp = devm_kzalloc(dev, sizeof(*sp), GFP_KERNEL);
if (!sp)
return -ENOMEM;
dev_set_drvdata(dev, sp);
sp->dev = dev;
sp->init_data = data;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
sp->base = devm_ioremap_resource(dev, res);
if (IS_ERR(sp->base)) {
base = devm_ioremap_resource(dev, res);
if (IS_ERR(base)) {
dev_err(dev, "missing \"reg\"\n");
return PTR_ERR(sp->base);
return PTR_ERR(base);
}
/* Get init data for this PHY */
match = of_match_device(cdns_sierra_id_table, dev);
if (!match)
return -EINVAL;
sp->init_data = (struct cdns_sierra_data *)match->data;
ret = cdns_regmap_init_blocks(sp, base, data->block_offset_shift,
data->reg_offset_shift);
if (ret)
return ret;
ret = cdns_regfield_init(sp);
if (ret)
return ret;
platform_set_drvdata(pdev, sp);
sp->clk = devm_clk_get(dev, "phy_clk");
sp->clk = devm_clk_get_optional(dev, "phy_clk");
if (IS_ERR(sp->clk)) {
dev_err(dev, "failed to get clock phy_clk\n");
return PTR_ERR(sp->clk);
@@ -205,12 +532,28 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
return PTR_ERR(sp->phy_rst);
}
sp->apb_rst = devm_reset_control_get(dev, "sierra_apb");
sp->apb_rst = devm_reset_control_get_optional(dev, "sierra_apb");
if (IS_ERR(sp->apb_rst)) {
dev_err(dev, "failed to get apb reset\n");
return PTR_ERR(sp->apb_rst);
}
clk = devm_clk_get_optional(dev, "cmn_refclk_dig_div");
if (IS_ERR(clk)) {
dev_err(dev, "cmn_refclk_dig_div clock not found\n");
ret = PTR_ERR(clk);
return ret;
}
sp->cmn_refclk_dig_div = clk;
clk = devm_clk_get_optional(dev, "cmn_refclk1_dig_div");
if (IS_ERR(clk)) {
dev_err(dev, "cmn_refclk1_dig_div clock not found\n");
ret = PTR_ERR(clk);
return ret;
}
sp->cmn_refclk1_dig_div = clk;
ret = clk_prepare_enable(sp->clk);
if (ret)
return ret;
@@ -219,7 +562,8 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
reset_control_deassert(sp->apb_rst);
/* Check that PHY is present */
if (sp->init_data->id_value != readl(sp->base)) {
regmap_field_read(sp->macro_id_type, &id_value);
if (sp->init_data->id_value != id_value) {
ret = -EINVAL;
goto clk_disable;
}
@@ -230,7 +574,7 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
struct phy *gphy;
sp->phys[node].lnk_rst =
of_reset_control_get_exclusive_by_index(child, 0);
of_reset_control_array_get_exclusive(child);
if (IS_ERR(sp->phys[node].lnk_rst)) {
dev_err(dev, "failed to get reset %s\n",
@@ -248,6 +592,8 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
}
}
sp->num_lanes += sp->phys[node].num_lanes;
gphy = devm_phy_create(dev, child, &ops);
if (IS_ERR(gphy)) {
@@ -257,17 +603,18 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
sp->phys[node].phy = gphy;
phy_set_drvdata(gphy, &sp->phys[node]);
/* Initialise the PHY registers, unless auto configured */
if (!sp->autoconf)
cdns_sierra_phy_init(gphy);
node++;
}
sp->nsubnodes = node;
if (sp->num_lanes > SIERRA_MAX_LANES) {
dev_err(dev, "Invalid lane configuration\n");
goto put_child2;
}
/* If more than one subnode, configure the PHY as multilink */
if (!sp->autoconf && sp->nsubnodes > 1)
writel(2, sp->base + SIERRA_PHY_PLL_CFG);
regmap_field_write(sp->phy_pll_cfg_1, 0x1);
pm_runtime_enable(dev);
phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
@@ -288,7 +635,7 @@ clk_disable:
static int cdns_sierra_phy_remove(struct platform_device *pdev)
{
struct cdns_sierra_phy *phy = dev_get_drvdata(pdev->dev.parent);
struct cdns_sierra_phy *phy = platform_get_drvdata(pdev);
int i;
reset_control_assert(phy->phy_rst);
@@ -306,68 +653,158 @@ static int cdns_sierra_phy_remove(struct platform_device *pdev)
return 0;
}
static struct cdns_reg_pairs cdns_usb_regs[] = {
/*
* Write USB configuration parameters to the PHY.
* These values are specific to this hardware configuration.
*/
{0xFE0A, SIERRA_DET_STANDEC_A},
{0x000F, SIERRA_DET_STANDEC_B},
{0x55A5, SIERRA_DET_STANDEC_C},
{0x69AD, SIERRA_DET_STANDEC_D},
{0x0241, SIERRA_DET_STANDEC_E},
{0x0110, SIERRA_PSM_LANECAL},
{0xCF00, SIERRA_PSM_DIAG},
{0x001F, SIERRA_PSC_TX_A0},
{0x0007, SIERRA_PSC_TX_A1},
{0x0003, SIERRA_PSC_TX_A2},
{0x0003, SIERRA_PSC_TX_A3},
{0x0FFF, SIERRA_PSC_RX_A0},
{0x0003, SIERRA_PSC_RX_A1},
{0x0003, SIERRA_PSC_RX_A2},
{0x0001, SIERRA_PSC_RX_A3},
{0x0001, SIERRA_PLLCTRL_SUBRATE},
{0x0406, SIERRA_PLLCTRL_GEN_D},
{0x0000, SIERRA_DRVCTRL_ATTEN},
{0x823E, SIERRA_CLKPATHCTRL_TMR},
{0x078F, SIERRA_RX_CREQ_FLTR_A_MODE1},
{0x078F, SIERRA_RX_CREQ_FLTR_A_MODE0},
{0x7B3C, SIERRA_CREQ_CCLKDET_MODE01},
{0x023C, SIERRA_RX_CTLE_MAINTENANCE},
{0x3232, SIERRA_CREQ_FSMCLK_SEL},
{0x8452, SIERRA_CTLELUT_CTRL},
{0x4121, SIERRA_DFE_ECMP_RATESEL},
{0x4121, SIERRA_DFE_SMP_RATESEL},
{0x9999, SIERRA_DEQ_VGATUNE_CTRL},
{0x0330, SIERRA_TMRVAL_MODE0},
{0x01FF, SIERRA_PICNT_MODE1},
{0x0009, SIERRA_CPI_OUTBUF_RATESEL},
{0x000F, SIERRA_LFPSFILT_NS},
{0x0009, SIERRA_LFPSFILT_RD},
{0x0001, SIERRA_LFPSFILT_MP},
{0x8013, SIERRA_SDFILT_H2L_A},
{0x0400, SIERRA_TMRVAL_MODE1},
/* refclk100MHz_32b_PCIe_cmn_pll_ext_ssc */
static struct cdns_reg_pairs cdns_pcie_cmn_regs_ext_ssc[] = {
{0x2106, SIERRA_CMN_PLLLC_LF_COEFF_MODE1_PREG},
{0x2106, SIERRA_CMN_PLLLC_LF_COEFF_MODE0_PREG},
{0x8A06, SIERRA_CMN_PLLLC_BWCAL_MODE1_PREG},
{0x8A06, SIERRA_CMN_PLLLC_BWCAL_MODE0_PREG},
{0x1B1B, SIERRA_CMN_PLLLC_SS_TIME_STEPSIZE_MODE_PREG}
};
static struct cdns_reg_pairs cdns_pcie_regs[] = {
/*
* Write PCIe configuration parameters to the PHY.
* These values are specific to this hardware configuration.
*/
{0x891f, SIERRA_DET_STANDEC_D},
{0x0053, SIERRA_DET_STANDEC_E},
{0x0400, SIERRA_TMRVAL_MODE2},
{0x0200, SIERRA_TMRVAL_MODE3},
/* refclk100MHz_32b_PCIe_ln_ext_ssc */
static struct cdns_reg_pairs cdns_pcie_ln_regs_ext_ssc[] = {
{0x813E, SIERRA_CLKPATHCTRL_TMR_PREG},
{0x8047, SIERRA_RX_CREQ_FLTR_A_MODE3_PREG},
{0x808F, SIERRA_RX_CREQ_FLTR_A_MODE2_PREG},
{0x808F, SIERRA_RX_CREQ_FLTR_A_MODE1_PREG},
{0x808F, SIERRA_RX_CREQ_FLTR_A_MODE0_PREG},
{0x033C, SIERRA_RX_CTLE_MAINTENANCE_PREG},
{0x44CC, SIERRA_CREQ_EQ_OPEN_EYE_THRESH_PREG}
};
/* refclk100MHz_20b_USB_cmn_pll_ext_ssc */
static struct cdns_reg_pairs cdns_usb_cmn_regs_ext_ssc[] = {
{0x2085, SIERRA_CMN_PLLLC_LF_COEFF_MODE1_PREG},
{0x2085, SIERRA_CMN_PLLLC_LF_COEFF_MODE0_PREG},
{0x0000, SIERRA_CMN_PLLLC_BWCAL_MODE0_PREG},
{0x0000, SIERRA_CMN_PLLLC_SS_TIME_STEPSIZE_MODE_PREG}
};
/* refclk100MHz_20b_USB_ln_ext_ssc */
static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
{0xFE0A, SIERRA_DET_STANDEC_A_PREG},
{0x000F, SIERRA_DET_STANDEC_B_PREG},
{0x00A5, SIERRA_DET_STANDEC_C_PREG},
{0x69ad, SIERRA_DET_STANDEC_D_PREG},
{0x0241, SIERRA_DET_STANDEC_E_PREG},
{0x0010, SIERRA_PSM_LANECAL_DLY_A1_RESETS_PREG},
{0x0014, SIERRA_PSM_A0IN_TMR_PREG},
{0xCF00, SIERRA_PSM_DIAG_PREG},
{0x001F, SIERRA_PSC_TX_A0_PREG},
{0x0007, SIERRA_PSC_TX_A1_PREG},
{0x0003, SIERRA_PSC_TX_A2_PREG},
{0x0003, SIERRA_PSC_TX_A3_PREG},
{0x0FFF, SIERRA_PSC_RX_A0_PREG},
{0x0619, SIERRA_PSC_RX_A1_PREG},
{0x0003, SIERRA_PSC_RX_A2_PREG},
{0x0001, SIERRA_PSC_RX_A3_PREG},
{0x0001, SIERRA_PLLCTRL_SUBRATE_PREG},
{0x0406, SIERRA_PLLCTRL_GEN_D_PREG},
{0x5233, SIERRA_PLLCTRL_CPGAIN_MODE_PREG},
{0x00CA, SIERRA_CLKPATH_BIASTRIM_PREG},
{0x2512, SIERRA_DFE_BIASTRIM_PREG},
{0x0000, SIERRA_DRVCTRL_ATTEN_PREG},
{0x873E, SIERRA_CLKPATHCTRL_TMR_PREG},
{0x03CF, SIERRA_RX_CREQ_FLTR_A_MODE1_PREG},
{0x01CE, SIERRA_RX_CREQ_FLTR_A_MODE0_PREG},
{0x7B3C, SIERRA_CREQ_CCLKDET_MODE01_PREG},
{0x033F, SIERRA_RX_CTLE_MAINTENANCE_PREG},
{0x3232, SIERRA_CREQ_FSMCLK_SEL_PREG},
{0x0000, SIERRA_CREQ_EQ_CTRL_PREG},
{0x8000, SIERRA_CREQ_SPARE_PREG},
{0xCC44, SIERRA_CREQ_EQ_OPEN_EYE_THRESH_PREG},
{0x8453, SIERRA_CTLELUT_CTRL_PREG},
{0x4110, SIERRA_DFE_ECMP_RATESEL_PREG},
{0x4110, SIERRA_DFE_SMP_RATESEL_PREG},
{0x0002, SIERRA_DEQ_PHALIGN_CTRL},
{0x3200, SIERRA_DEQ_CONCUR_CTRL1_PREG},
{0x5064, SIERRA_DEQ_CONCUR_CTRL2_PREG},
{0x0030, SIERRA_DEQ_EPIPWR_CTRL2_PREG},
{0x0048, SIERRA_DEQ_FAST_MAINT_CYCLES_PREG},
{0x5A5A, SIERRA_DEQ_ERRCMP_CTRL_PREG},
{0x02F5, SIERRA_DEQ_OFFSET_CTRL_PREG},
{0x02F5, SIERRA_DEQ_GAIN_CTRL_PREG},
{0x9A8A, SIERRA_DEQ_VGATUNE_CTRL_PREG},
{0x0014, SIERRA_DEQ_GLUT0},
{0x0014, SIERRA_DEQ_GLUT1},
{0x0014, SIERRA_DEQ_GLUT2},
{0x0014, SIERRA_DEQ_GLUT3},
{0x0014, SIERRA_DEQ_GLUT4},
{0x0014, SIERRA_DEQ_GLUT5},
{0x0014, SIERRA_DEQ_GLUT6},
{0x0014, SIERRA_DEQ_GLUT7},
{0x0014, SIERRA_DEQ_GLUT8},
{0x0014, SIERRA_DEQ_GLUT9},
{0x0014, SIERRA_DEQ_GLUT10},
{0x0014, SIERRA_DEQ_GLUT11},
{0x0014, SIERRA_DEQ_GLUT12},
{0x0014, SIERRA_DEQ_GLUT13},
{0x0014, SIERRA_DEQ_GLUT14},
{0x0014, SIERRA_DEQ_GLUT15},
{0x0014, SIERRA_DEQ_GLUT16},
{0x0BAE, SIERRA_DEQ_ALUT0},
{0x0AEB, SIERRA_DEQ_ALUT1},
{0x0A28, SIERRA_DEQ_ALUT2},
{0x0965, SIERRA_DEQ_ALUT3},
{0x08A2, SIERRA_DEQ_ALUT4},
{0x07DF, SIERRA_DEQ_ALUT5},
{0x071C, SIERRA_DEQ_ALUT6},
{0x0659, SIERRA_DEQ_ALUT7},
{0x0596, SIERRA_DEQ_ALUT8},
{0x0514, SIERRA_DEQ_ALUT9},
{0x0492, SIERRA_DEQ_ALUT10},
{0x0410, SIERRA_DEQ_ALUT11},
{0x038E, SIERRA_DEQ_ALUT12},
{0x030C, SIERRA_DEQ_ALUT13},
{0x03F4, SIERRA_DEQ_DFETAP_CTRL_PREG},
{0x0001, SIERRA_DFE_EN_1010_IGNORE_PREG},
{0x3C01, SIERRA_DEQ_TAU_CTRL1_FAST_MAINT_PREG},
{0x3C40, SIERRA_DEQ_TAU_CTRL1_SLOW_MAINT_PREG},
{0x1C08, SIERRA_DEQ_TAU_CTRL2_PREG},
{0x0033, SIERRA_DEQ_PICTRL_PREG},
{0x0400, SIERRA_CPICAL_TMRVAL_MODE1_PREG},
{0x0330, SIERRA_CPICAL_TMRVAL_MODE0_PREG},
{0x01FF, SIERRA_CPICAL_PICNT_MODE1_PREG},
{0x0009, SIERRA_CPI_OUTBUF_RATESEL_PREG},
{0x3232, SIERRA_CPICAL_RES_STARTCODE_MODE23_PREG},
{0x0005, SIERRA_LFPSDET_SUPPORT_PREG},
{0x000F, SIERRA_LFPSFILT_NS_PREG},
{0x0009, SIERRA_LFPSFILT_RD_PREG},
{0x0001, SIERRA_LFPSFILT_MP_PREG},
{0x8013, SIERRA_SDFILT_H2L_A_PREG},
{0x8009, SIERRA_SDFILT_L2H_PREG},
{0x0024, SIERRA_RXBUFFER_CTLECTRL_PREG},
{0x0020, SIERRA_RXBUFFER_RCDFECTRL_PREG},
{0x4243, SIERRA_RXBUFFER_DFECTRL_PREG}
};
static const struct cdns_sierra_data cdns_map_sierra = {
SIERRA_MACRO_ID,
ARRAY_SIZE(cdns_pcie_regs),
ARRAY_SIZE(cdns_usb_regs),
cdns_pcie_regs,
cdns_usb_regs
0x2,
0x2,
ARRAY_SIZE(cdns_pcie_cmn_regs_ext_ssc),
ARRAY_SIZE(cdns_pcie_ln_regs_ext_ssc),
ARRAY_SIZE(cdns_usb_cmn_regs_ext_ssc),
ARRAY_SIZE(cdns_usb_ln_regs_ext_ssc),
cdns_pcie_cmn_regs_ext_ssc,
cdns_pcie_ln_regs_ext_ssc,
cdns_usb_cmn_regs_ext_ssc,
cdns_usb_ln_regs_ext_ssc,
};
static const struct cdns_sierra_data cdns_ti_map_sierra = {
SIERRA_MACRO_ID,
0x0,
0x1,
ARRAY_SIZE(cdns_pcie_cmn_regs_ext_ssc),
ARRAY_SIZE(cdns_pcie_ln_regs_ext_ssc),
ARRAY_SIZE(cdns_usb_cmn_regs_ext_ssc),
ARRAY_SIZE(cdns_usb_ln_regs_ext_ssc),
cdns_pcie_cmn_regs_ext_ssc,
cdns_pcie_ln_regs_ext_ssc,
cdns_usb_cmn_regs_ext_ssc,
cdns_usb_ln_regs_ext_ssc,
};
static const struct of_device_id cdns_sierra_id_table[] = {
@@ -375,6 +812,10 @@ static const struct of_device_id cdns_sierra_id_table[] = {
.compatible = "cdns,sierra-phy-t0",
.data = &cdns_map_sierra,
},
{
.compatible = "ti,sierra-phy-t0",
.data = &cdns_ti_map_sierra,
},
{}
};
MODULE_DEVICE_TABLE(of, cdns_sierra_id_table);


@@ -33,14 +33,14 @@ config PHY_HISTB_COMBPHY
If unsure, say N.
config PHY_HISI_INNO_USB2
tristate "HiSilicon INNO USB2 PHY support"
depends on (ARCH_HISI && ARM64) || COMPILE_TEST
select GENERIC_PHY
select MFD_SYSCON
help
Support for INNO USB2 PHY on HiSilicon SoCs. This PHY supports
USB 1.5Mb/s, USB 12Mb/s, USB 480Mb/s speeds. It supports one
USB host port to accept one USB device.
tristate "HiSilicon INNO USB2 PHY support"
depends on (ARCH_HISI && ARM64) || COMPILE_TEST
select GENERIC_PHY
select MFD_SYSCON
help
Support for INNO USB2 PHY on HiSilicon SoCs. This PHY supports
USB 1.5Mb/s, USB 12Mb/s, USB 480Mb/s speeds. It supports one
USB host port to accept one USB device.
config PHY_HIX5HD2_SATA
tristate "HIX5HD2 SATA PHY Driver"


@@ -0,0 +1,9 @@
# SPDX-License-Identifier: GPL-2.0
#
# PHY drivers for Intel Lightning Mountain (LGM) platform
#
config PHY_INTEL_EMMC
tristate "Intel EMMC PHY driver"
select GENERIC_PHY
help
Enable this to support the Intel EMMC PHY


@@ -0,0 +1,2 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PHY_INTEL_EMMC) += phy-intel-emmc.o


@@ -0,0 +1,284 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Intel eMMC PHY driver
* Copyright (C) 2019 Intel, Corp.
*/
#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
/* eMMC phy register definitions */
#define EMMC_PHYCTRL0_REG 0xa8
#define DR_TY_MASK GENMASK(30, 28)
#define DR_TY_SHIFT(x) (((x) << 28) & DR_TY_MASK)
#define OTAPDLYENA BIT(14)
#define OTAPDLYSEL_MASK GENMASK(13, 10)
#define OTAPDLYSEL_SHIFT(x) (((x) << 10) & OTAPDLYSEL_MASK)
#define EMMC_PHYCTRL1_REG 0xac
#define PDB_MASK BIT(0)
#define PDB_SHIFT(x) (((x) << 0) & PDB_MASK)
#define ENDLL_MASK BIT(7)
#define ENDLL_SHIFT(x) (((x) << 7) & ENDLL_MASK)
#define EMMC_PHYCTRL2_REG 0xb0
#define FRQSEL_25M 0
#define FRQSEL_50M 1
#define FRQSEL_100M 2
#define FRQSEL_150M 3
#define FRQSEL_MASK GENMASK(24, 22)
#define FRQSEL_SHIFT(x) (((x) << 22) & FRQSEL_MASK)
#define EMMC_PHYSTAT_REG 0xbc
#define CALDONE_MASK BIT(9)
#define DLLRDY_MASK BIT(8)
#define IS_CALDONE(x) ((x) & CALDONE_MASK)
#define IS_DLLRDY(x) ((x) & DLLRDY_MASK)
struct intel_emmc_phy {
struct regmap *syscfg;
struct clk *emmcclk;
};
static int intel_emmc_phy_power(struct phy *phy, bool on_off)
{
struct intel_emmc_phy *priv = phy_get_drvdata(phy);
unsigned int caldone;
unsigned int dllrdy;
unsigned int freqsel;
unsigned long rate;
int ret, quot;
/*
* Keep phyctrl_pdb and phyctrl_endll low to allow
* initialization of CALIO state M/C DFFs
*/
ret = regmap_update_bits(priv->syscfg, EMMC_PHYCTRL1_REG, PDB_MASK,
PDB_SHIFT(0));
if (ret) {
dev_err(&phy->dev, "CALIO power down bar failed: %d\n", ret);
return ret;
}
/* Already finished power_off above */
if (!on_off)
return 0;
rate = clk_get_rate(priv->emmcclk);
quot = DIV_ROUND_CLOSEST(rate, 50000000);
if (quot > FRQSEL_150M)
dev_warn(&phy->dev, "Unsupported rate: %lu\n", rate);
freqsel = clamp_t(int, quot, FRQSEL_25M, FRQSEL_150M);
/*
* According to the user manual, the calpad calibration cycle
* takes more than 2 us, with no exact minimum recommended, so
* allow a little margin here
*/
udelay(5);
ret = regmap_update_bits(priv->syscfg, EMMC_PHYCTRL1_REG, PDB_MASK,
PDB_SHIFT(1));
if (ret) {
dev_err(&phy->dev, "CALIO power down bar failed: %d\n", ret);
return ret;
}
/*
* According to the user manual, the driver should wait 5 us for
* calpad busy trimming. However, this value is documented as
* PVT (process, voltage and temperature) dependent, and failure
* cases have been seen in practice, so be more tolerant when
* waiting for calpad busy trimming.
*/
ret = regmap_read_poll_timeout(priv->syscfg, EMMC_PHYSTAT_REG,
caldone, IS_CALDONE(caldone),
0, 50);
if (ret) {
dev_err(&phy->dev, "caldone failed, ret=%d\n", ret);
return ret;
}
/* Set the frequency of the DLL operation */
ret = regmap_update_bits(priv->syscfg, EMMC_PHYCTRL2_REG, FRQSEL_MASK,
FRQSEL_SHIFT(freqsel));
if (ret) {
dev_err(&phy->dev, "set the frequency of dll failed:%d\n", ret);
return ret;
}
/* Turn on the DLL */
ret = regmap_update_bits(priv->syscfg, EMMC_PHYCTRL1_REG, ENDLL_MASK,
ENDLL_SHIFT(1));
if (ret) {
dev_err(&phy->dev, "turn on the dll failed: %d\n", ret);
return ret;
}
/*
* After enabling the analog DLL circuits, the docs say we need 10.2 us
* if our source clock is at 50 MHz and that the lock time scales
* linearly with clock speed. If we are powering on the PHY and the
* card clock is very slow (like 100 kHz) this could take as long as
* 5.1 ms, per the math: 10.2 us * (50000000 Hz / 100000 Hz) => 5.1 ms.
* Hopefully we won't be running at 100 kHz, but we should still make
* sure we wait long enough.
*
* NOTE: There appear to be corner cases where the DLL seems to take
* extra long to lock for reasons that aren't understood. In some
extreme cases we've seen it take over 10 ms (!). We'll be
* generous and give it 50ms.
*/
ret = regmap_read_poll_timeout(priv->syscfg,
EMMC_PHYSTAT_REG,
dllrdy, IS_DLLRDY(dllrdy),
0, 50 * USEC_PER_MSEC);
if (ret) {
dev_err(&phy->dev, "dllrdy failed. ret=%d\n", ret);
return ret;
}
return 0;
}
static int intel_emmc_phy_init(struct phy *phy)
{
struct intel_emmc_phy *priv = phy_get_drvdata(phy);
/*
* We purposely get the clock here and not in probe to avoid the
* circular dependency problem. We expect:
* - PHY driver to probe
* - SDHCI driver to start probe
* - SDHCI driver to register its clock
* - SDHCI driver to get the PHY
* - SDHCI driver to init the PHY
*
* The clock is optional, so upon any error just return it like
* any other error to the user.
*
*/
priv->emmcclk = clk_get_optional(&phy->dev, "emmcclk");
if (IS_ERR(priv->emmcclk)) {
dev_err(&phy->dev, "ERROR: getting emmcclk\n");
return PTR_ERR(priv->emmcclk);
}
return 0;
}
static int intel_emmc_phy_exit(struct phy *phy)
{
struct intel_emmc_phy *priv = phy_get_drvdata(phy);
clk_put(priv->emmcclk);
return 0;
}
static int intel_emmc_phy_power_on(struct phy *phy)
{
struct intel_emmc_phy *priv = phy_get_drvdata(phy);
int ret;
/* Drive impedance: 50 Ohm */
ret = regmap_update_bits(priv->syscfg, EMMC_PHYCTRL0_REG, DR_TY_MASK,
DR_TY_SHIFT(6));
if (ret) {
dev_err(&phy->dev, "failed to set drive impedance to 50 Ohm: %d\n", ret);
return ret;
}
/* Output tap delay: disable */
ret = regmap_update_bits(priv->syscfg, EMMC_PHYCTRL0_REG, OTAPDLYENA,
0);
if (ret) {
dev_err(&phy->dev, "failed to disable output tap delay: %d\n", ret);
return ret;
}
/* Output tap delay */
ret = regmap_update_bits(priv->syscfg, EMMC_PHYCTRL0_REG,
OTAPDLYSEL_MASK, OTAPDLYSEL_SHIFT(4));
if (ret) {
dev_err(&phy->dev, "failed to select output tap delay: %d\n", ret);
return ret;
}
/* Power up eMMC phy analog blocks */
return intel_emmc_phy_power(phy, true);
}
static int intel_emmc_phy_power_off(struct phy *phy)
{
/* Power down eMMC phy analog blocks */
return intel_emmc_phy_power(phy, false);
}
static const struct phy_ops ops = {
.init = intel_emmc_phy_init,
.exit = intel_emmc_phy_exit,
.power_on = intel_emmc_phy_power_on,
.power_off = intel_emmc_phy_power_off,
.owner = THIS_MODULE,
};
static int intel_emmc_phy_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct intel_emmc_phy *priv;
struct phy *generic_phy;
struct phy_provider *phy_provider;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
/* Get eMMC phy (accessed via chiptop) regmap */
priv->syscfg = syscon_regmap_lookup_by_phandle(np, "intel,syscon");
if (IS_ERR(priv->syscfg)) {
dev_err(dev, "failed to find syscon\n");
return PTR_ERR(priv->syscfg);
}
generic_phy = devm_phy_create(dev, np, &ops);
if (IS_ERR(generic_phy)) {
dev_err(dev, "failed to create PHY\n");
return PTR_ERR(generic_phy);
}
phy_set_drvdata(generic_phy, priv);
phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
return PTR_ERR_OR_ZERO(phy_provider);
}
static const struct of_device_id intel_emmc_phy_dt_ids[] = {
{ .compatible = "intel,lgm-emmc-phy" },
{}
};
MODULE_DEVICE_TABLE(of, intel_emmc_phy_dt_ids);
static struct platform_driver intel_emmc_driver = {
.probe = intel_emmc_phy_probe,
.driver = {
.name = "intel-emmc-phy",
.of_match_table = intel_emmc_phy_dt_ids,
},
};
module_platform_driver(intel_emmc_driver);
MODULE_AUTHOR("Peter Harliman Liem <peter.harliman.liem@intel.com>");
MODULE_DESCRIPTION("Intel eMMC PHY driver");
MODULE_LICENSE("GPL v2");


@ -386,7 +386,7 @@ static struct phy *ltq_vrx200_pcie_phy_xlate(struct device *dev,
default:
dev_err(dev, "invalid PHY mode %u\n", mode);
return ERR_PTR(-EINVAL);
};
}
return priv->phy;
}


@ -10,14 +10,16 @@ config ARMADA375_USBCLUSTER_PHY
config PHY_BERLIN_SATA
tristate "Marvell Berlin SATA PHY driver"
depends on ARCH_BERLIN && HAS_IOMEM && OF
depends on ARCH_BERLIN || COMPILE_TEST
depends on OF && HAS_IOMEM
select GENERIC_PHY
help
Enable this to support the SATA PHY on Marvell Berlin SoCs.
config PHY_BERLIN_USB
tristate "Marvell Berlin USB PHY Driver"
depends on ARCH_BERLIN && RESET_CONTROLLER && HAS_IOMEM && OF
depends on ARCH_BERLIN || COMPILE_TEST
depends on OF && HAS_IOMEM && RESET_CONTROLLER
select GENERIC_PHY
help
Enable this to support the USB PHY on Marvell Berlin SoCs.
@ -95,7 +97,7 @@ config PHY_PXA_28NM_USB2
config PHY_PXA_USB
tristate "Marvell PXA USB PHY Driver"
depends on ARCH_PXA || ARCH_MMP
depends on ARCH_PXA || ARCH_MMP || COMPILE_TEST
select GENERIC_PHY
help
Enable this to support Marvell PXA USB PHY driver for Marvell


@ -3,12 +3,13 @@
# Phy drivers for Mediatek devices
#
config PHY_MTK_TPHY
tristate "MediaTek T-PHY Driver"
depends on ARCH_MEDIATEK && OF
select GENERIC_PHY
help
Say 'Y' here to add support for MediaTek T-PHY driver,
it supports multiple usb2.0, usb3.0 ports, PCIe and
tristate "MediaTek T-PHY Driver"
depends on ARCH_MEDIATEK || COMPILE_TEST
depends on OF
select GENERIC_PHY
help
Say 'Y' here to add support for MediaTek T-PHY driver,
it supports multiple usb2.0, usb3.0 ports, PCIe and
SATA, and meanwhile supports two version T-PHY which have
different banks layout, the T-PHY with shared banks between
multi-ports is the first version, otherwise the second version,
@ -16,7 +17,8 @@ config PHY_MTK_TPHY
config PHY_MTK_UFS
tristate "MediaTek UFS M-PHY driver"
depends on ARCH_MEDIATEK && OF
depends on ARCH_MEDIATEK || COMPILE_TEST
depends on OF
select GENERIC_PHY
help
Support for UFS M-PHY on MediaTek chipsets.
@ -25,10 +27,11 @@ config PHY_MTK_UFS
specified M-PHYs.
config PHY_MTK_XSPHY
tristate "MediaTek XS-PHY Driver"
depends on ARCH_MEDIATEK && OF
select GENERIC_PHY
help
tristate "MediaTek XS-PHY Driver"
depends on ARCH_MEDIATEK || COMPILE_TEST
depends on OF
select GENERIC_PHY
help
Enable this to support the SuperSpeedPlus XS-PHY transceiver for
USB3.1 GEN2 controllers on MediaTek chips. The driver supports
multiple USB2.0, USB3.1 GEN2 ports.


@ -29,7 +29,7 @@ static void devm_phy_release(struct device *dev, void *res)
{
struct phy *phy = *(struct phy **)res;
phy_put(phy);
phy_put(dev, phy);
}
static void devm_phy_provider_release(struct device *dev, void *res)
@ -566,12 +566,12 @@ struct phy *of_phy_get(struct device_node *np, const char *con_id)
EXPORT_SYMBOL_GPL(of_phy_get);
/**
* phy_put() - release the PHY
* @phy: the phy returned by phy_get()
* of_phy_put() - release the PHY
* @phy: the phy returned by of_phy_get()
*
* Releases a refcount the caller received from phy_get().
* Releases a refcount the caller received from of_phy_get().
*/
void phy_put(struct phy *phy)
void of_phy_put(struct phy *phy)
{
if (!phy || IS_ERR(phy))
return;
@ -584,6 +584,20 @@ void phy_put(struct phy *phy)
module_put(phy->ops->owner);
put_device(&phy->dev);
}
EXPORT_SYMBOL_GPL(of_phy_put);
/**
* phy_put() - release the PHY
* @dev: device that wants to release this phy
* @phy: the phy returned by phy_get()
*
* Releases a refcount the caller received from phy_get().
*/
void phy_put(struct device *dev, struct phy *phy)
{
device_link_remove(dev, &phy->dev);
of_phy_put(phy);
}
EXPORT_SYMBOL_GPL(phy_put);
/**
@ -651,6 +665,7 @@ struct phy *phy_get(struct device *dev, const char *string)
{
int index = 0;
struct phy *phy;
struct device_link *link;
if (string == NULL) {
dev_WARN(dev, "missing string\n");
@ -672,6 +687,13 @@ struct phy *phy_get(struct device *dev, const char *string)
get_device(&phy->dev);
link = device_link_add(dev, &phy->dev, DL_FLAG_STATELESS);
if (!link) {
dev_err(dev, "failed to create device link to %s\n",
dev_name(phy->dev.parent));
return ERR_PTR(-EINVAL);
}
return phy;
}
EXPORT_SYMBOL_GPL(phy_get);
@ -765,6 +787,7 @@ struct phy *devm_of_phy_get(struct device *dev, struct device_node *np,
const char *con_id)
{
struct phy **ptr, *phy;
struct device_link *link;
ptr = devres_alloc(devm_phy_release, sizeof(*ptr), GFP_KERNEL);
if (!ptr)
@ -776,6 +799,14 @@ struct phy *devm_of_phy_get(struct device *dev, struct device_node *np,
devres_add(dev, ptr);
} else {
devres_free(ptr);
return phy;
}
link = device_link_add(dev, &phy->dev, DL_FLAG_STATELESS);
if (!link) {
dev_err(dev, "failed to create device link to %s\n",
dev_name(phy->dev.parent));
return ERR_PTR(-EINVAL);
}
return phy;
@ -798,6 +829,7 @@ struct phy *devm_of_phy_get_by_index(struct device *dev, struct device_node *np,
int index)
{
struct phy **ptr, *phy;
struct device_link *link;
ptr = devres_alloc(devm_phy_release, sizeof(*ptr), GFP_KERNEL);
if (!ptr)
@ -819,6 +851,13 @@ struct phy *devm_of_phy_get_by_index(struct device *dev, struct device_node *np,
*ptr = phy;
devres_add(dev, ptr);
link = device_link_add(dev, &phy->dev, DL_FLAG_STATELESS);
if (!link) {
dev_err(dev, "failed to create device link to %s\n",
dev_name(phy->dev.parent));
return ERR_PTR(-EINVAL);
}
return phy;
}
EXPORT_SYMBOL_GPL(devm_of_phy_get_by_index);


@ -80,7 +80,7 @@ static int read_poll_timeout(void __iomem *addr, u32 mask)
if (readl_relaxed(addr) & mask)
return 0;
usleep_range(DELAY_INTERVAL_US, DELAY_INTERVAL_US + 50);
usleep_range(DELAY_INTERVAL_US, DELAY_INTERVAL_US + 50);
} while (!time_after(jiffies, timeout));
return (readl_relaxed(addr) & mask) ? 0 : -ETIMEDOUT;


@ -166,8 +166,9 @@ static const unsigned int sdm845_ufsphy_regs_layout[] = {
};
static const unsigned int sm8150_ufsphy_regs_layout[] = {
[QPHY_START_CTRL] = 0x00,
[QPHY_PCS_READY_STATUS] = 0x180,
[QPHY_START_CTRL] = QPHY_V4_PHY_START,
[QPHY_PCS_READY_STATUS] = QPHY_V4_PCS_READY_STATUS,
[QPHY_SW_RESET] = QPHY_V4_SW_RESET,
};
static const struct qmp_phy_init_tbl msm8996_pcie_serdes_tbl[] = {
@ -885,7 +886,6 @@ static const struct qmp_phy_init_tbl msm8998_usb3_pcs_tbl[] = {
};
static const struct qmp_phy_init_tbl sm8150_ufsphy_serdes_tbl[] = {
QMP_PHY_INIT_CFG(QPHY_POWER_DOWN_CONTROL, 0x01),
QMP_PHY_INIT_CFG(QSERDES_V4_COM_SYSCLK_EN_SEL, 0xd9),
QMP_PHY_INIT_CFG(QSERDES_V4_COM_HSCLK_SEL, 0x11),
QMP_PHY_INIT_CFG(QSERDES_V4_COM_HSCLK_HS_SWITCH_SEL, 0x00),
@ -1390,7 +1390,6 @@ static const struct qmp_phy_cfg sm8150_ufsphy_cfg = {
.pwrdn_ctrl = SW_PWRDN,
.is_dual_lane_phy = true,
.no_pcs_sw_reset = true,
};
static void qcom_qmp_phy_configure(void __iomem *base,


@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2017, The Linux Foundation. All rights reserved.
*/


@ -39,6 +39,7 @@ config PHY_ROCKCHIP_INNO_DSIDPHY
tristate "Rockchip Innosilicon MIPI/LVDS/TTL PHY driver"
depends on (ARCH_ROCKCHIP || COMPILE_TEST) && OF
select GENERIC_PHY
select GENERIC_PHY_MIPI_DPHY
help
Enable this to support the Rockchip MIPI/LVDS/TTL PHY with
Innosilicon IP block.


@ -16,6 +16,7 @@
#include <linux/platform_device.h>
#include <linux/reset.h>
#include <linux/phy/phy.h>
#include <linux/phy/phy-mipi-dphy.h>
#include <linux/pm_runtime.h>
#include <linux/mfd/syscon.h>
@ -167,31 +168,6 @@
#define DSI_PHY_STATUS 0xb0
#define PHY_LOCK BIT(0)
struct mipi_dphy_timing {
unsigned int clkmiss;
unsigned int clkpost;
unsigned int clkpre;
unsigned int clkprepare;
unsigned int clksettle;
unsigned int clktermen;
unsigned int clktrail;
unsigned int clkzero;
unsigned int dtermen;
unsigned int eot;
unsigned int hsexit;
unsigned int hsprepare;
unsigned int hszero;
unsigned int hssettle;
unsigned int hsskip;
unsigned int hstrail;
unsigned int init;
unsigned int lpx;
unsigned int taget;
unsigned int tago;
unsigned int tasure;
unsigned int wakeup;
};
struct inno_dsidphy {
struct device *dev;
struct clk *ref_clk;
@ -201,7 +177,9 @@ struct inno_dsidphy {
void __iomem *host_base;
struct reset_control *rst;
enum phy_mode mode;
struct phy_configure_opts_mipi_dphy dphy_cfg;
struct clk *pll_clk;
struct {
struct clk_hw hw;
u8 prediv;
@ -238,37 +216,79 @@ static void phy_update_bits(struct inno_dsidphy *inno,
writel(tmp, inno->phy_base + reg);
}
static void mipi_dphy_timing_get_default(struct mipi_dphy_timing *timing,
unsigned long period)
static unsigned long inno_dsidphy_pll_calc_rate(struct inno_dsidphy *inno,
unsigned long rate)
{
/* Global Operation Timing Parameters */
timing->clkmiss = 0;
timing->clkpost = 70000 + 52 * period;
timing->clkpre = 8 * period;
timing->clkprepare = 65000;
timing->clksettle = 95000;
timing->clktermen = 0;
timing->clktrail = 80000;
timing->clkzero = 260000;
timing->dtermen = 0;
timing->eot = 0;
timing->hsexit = 120000;
timing->hsprepare = 65000 + 4 * period;
timing->hszero = 145000 + 6 * period;
timing->hssettle = 85000 + 6 * period;
timing->hsskip = 40000;
timing->hstrail = max(8 * period, 60000 + 4 * period);
timing->init = 100000000;
timing->lpx = 60000;
timing->taget = 5 * timing->lpx;
timing->tago = 4 * timing->lpx;
timing->tasure = 2 * timing->lpx;
timing->wakeup = 1000000000;
unsigned long prate = clk_get_rate(inno->ref_clk);
unsigned long best_freq = 0;
unsigned long fref, fout;
u8 min_prediv, max_prediv;
u8 _prediv, best_prediv = 1;
u16 _fbdiv, best_fbdiv = 1;
u32 min_delta = UINT_MAX;
/*
* The PLL output frequency can be calculated using a simple formula:
* PLL_Output_Frequency = (FREF / PREDIV * FBDIV) / 2
* where PLL_Output_Frequency is twice the DDR clock frequency.
*/
fref = prate / 2;
if (rate > 1000000000UL)
fout = 1000000000UL;
else
fout = rate;
/* 5 MHz < Fref / prediv < 40 MHz */
min_prediv = DIV_ROUND_UP(fref, 40000000);
max_prediv = fref / 5000000;
for (_prediv = min_prediv; _prediv <= max_prediv; _prediv++) {
u64 tmp;
u32 delta;
tmp = (u64)fout * _prediv;
do_div(tmp, fref);
_fbdiv = tmp;
/*
* The possible settings of feedback divider are
* 12, 13, 14, and 16 to 511 (15 is not allowed)
*/
if (_fbdiv == 15)
continue;
if (_fbdiv < 12 || _fbdiv > 511)
continue;
tmp = (u64)_fbdiv * fref;
do_div(tmp, _prediv);
delta = abs(fout - tmp);
if (!delta) {
best_prediv = _prediv;
best_fbdiv = _fbdiv;
best_freq = tmp;
break;
} else if (delta < min_delta) {
best_prediv = _prediv;
best_fbdiv = _fbdiv;
best_freq = tmp;
min_delta = delta;
}
}
if (best_freq) {
inno->pll.prediv = best_prediv;
inno->pll.fbdiv = best_fbdiv;
inno->pll.rate = best_freq;
}
return best_freq;
}
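The divider search in inno_dsidphy_pll_calc_rate() can be mirrored in a short sketch. This is illustrative Python, not driver code; the constants and constraints are taken from the function above, and integer division matches the C do_div() truncation:

```python
def pll_calc(prate, rate):
    """Mirror of the prediv/fbdiv search: Fout = Fref / prediv * fbdiv,
    with Fref = prate / 2, 5 MHz < Fref/prediv < 40 MHz, and fbdiv in
    {12, 13, 14} or 16..511 (15 is skipped). Returns (freq, prediv, fbdiv)."""
    fref = prate // 2
    fout = min(rate, 1_000_000_000)       # output is capped at 1 GHz
    min_prediv = -(-fref // 40_000_000)   # DIV_ROUND_UP
    max_prediv = fref // 5_000_000
    best = (0, 1, 1)
    min_delta = float("inf")
    for prediv in range(min_prediv, max_prediv + 1):
        fbdiv = fout * prediv // fref
        if fbdiv == 15 or not 12 <= fbdiv <= 511:
            continue
        freq = fbdiv * fref // prediv
        delta = abs(fout - freq)
        if delta < min_delta:
            best, min_delta = (freq, prediv, fbdiv), delta
        if delta == 0:
            break
    return best
```

For example, with a 24 MHz reference (Fref = 12 MHz) and a requested 500 MHz, the search settles on prediv = 2, fbdiv = 83, giving 498 MHz, the closest achievable rate.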
static void inno_dsidphy_mipi_mode_enable(struct inno_dsidphy *inno)
{
struct mipi_dphy_timing gotp;
struct phy_configure_opts_mipi_dphy *cfg = &inno->dphy_cfg;
const struct {
unsigned long rate;
u8 hs_prepare;
@ -288,12 +308,14 @@ static void inno_dsidphy_mipi_mode_enable(struct inno_dsidphy *inno)
{ 800000000, 0x21, 0x1f, 0x09, 0x29},
{1000000000, 0x09, 0x20, 0x09, 0x27},
};
u32 t_txbyteclkhs, t_txclkesc, ui;
u32 t_txbyteclkhs, t_txclkesc;
u32 txbyteclkhs, txclkesc, esc_clk_div;
u32 hs_exit, clk_post, clk_pre, wakeup, lpx, ta_go, ta_sure, ta_wait;
u32 hs_prepare, hs_trail, hs_zero, clk_lane_hs_zero, data_lane_hs_zero;
unsigned int i;
inno_dsidphy_pll_calc_rate(inno, cfg->hs_clk_rate);
/* Select MIPI mode */
phy_update_bits(inno, REGISTER_PART_LVDS, 0x03,
MODE_ENABLE_MASK, MIPI_MODE_ENABLE);
@ -328,32 +350,27 @@ static void inno_dsidphy_mipi_mode_enable(struct inno_dsidphy *inno)
txclkesc = txbyteclkhs / esc_clk_div;
t_txclkesc = div_u64(PSEC_PER_SEC, txclkesc);
ui = div_u64(PSEC_PER_SEC, inno->pll.rate);
memset(&gotp, 0, sizeof(gotp));
mipi_dphy_timing_get_default(&gotp, ui);
/*
* The value of counter for HS Ths-exit
* Ths-exit = Tpin_txbyteclkhs * value
*/
hs_exit = DIV_ROUND_UP(gotp.hsexit, t_txbyteclkhs);
hs_exit = DIV_ROUND_UP(cfg->hs_exit, t_txbyteclkhs);
/*
* The value of counter for HS Tclk-post
* Tclk-post = Tpin_txbyteclkhs * value
*/
clk_post = DIV_ROUND_UP(gotp.clkpost, t_txbyteclkhs);
clk_post = DIV_ROUND_UP(cfg->clk_post, t_txbyteclkhs);
/*
* The value of counter for HS Tclk-pre
* Tclk-pre = Tpin_txbyteclkhs * value
*/
clk_pre = DIV_ROUND_UP(gotp.clkpre, t_txbyteclkhs);
clk_pre = DIV_ROUND_UP(cfg->clk_pre, t_txbyteclkhs);
/*
* The value of counter for HS Tlpx Time
* Tlpx = Tpin_txbyteclkhs * (2 + value)
*/
lpx = DIV_ROUND_UP(gotp.lpx, t_txbyteclkhs);
lpx = DIV_ROUND_UP(cfg->lpx, t_txbyteclkhs);
if (lpx >= 2)
lpx -= 2;
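Each D-PHY timing parameter (in picoseconds) is converted to a counter by rounding up against the relevant clock period; the Tlpx counter additionally has a built-in offset of two byte-clock cycles, which is why the code subtracts 2. An illustrative Python sketch of that conversion (the example periods are hypothetical, not from the driver):

```python
def div_round_up(n, d):
    """Ceiling division, matching the kernel's DIV_ROUND_UP()."""
    return -(-n // d)

def lpx_counter(t_lpx_ps, t_txbyteclkhs_ps):
    """Tlpx = Tpin_txbyteclkhs * (2 + value), so the programmed value is
    the rounded-up cycle count minus the fixed 2-cycle offset (when the
    count is at least 2, as in the code above)."""
    cycles = div_round_up(t_lpx_ps, t_txbyteclkhs_ps)
    return cycles - 2 if cycles >= 2 else cycles

# e.g. a 60 ns Tlpx with a 6.4 ns byte clock: ceil(60000/6400) = 10
# cycles, programmed as 10 - 2 = 8
print(lpx_counter(60_000, 6_400))
```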
@ -362,19 +379,19 @@ static void inno_dsidphy_mipi_mode_enable(struct inno_dsidphy *inno)
* Tta-go for turnaround
* Tta-go = Ttxclkesc * value
*/
ta_go = DIV_ROUND_UP(gotp.tago, t_txclkesc);
ta_go = DIV_ROUND_UP(cfg->ta_go, t_txclkesc);
/*
* The value of counter for HS Tta-sure
* Tta-sure for turnaround
* Tta-sure = Ttxclkesc * value
*/
ta_sure = DIV_ROUND_UP(gotp.tasure, t_txclkesc);
ta_sure = DIV_ROUND_UP(cfg->ta_sure, t_txclkesc);
/*
* The value of counter for HS Tta-wait
* Tta-wait for turnaround
* Tta-wait = Ttxclkesc * value
*/
ta_wait = DIV_ROUND_UP(gotp.taget, t_txclkesc);
ta_wait = DIV_ROUND_UP(cfg->ta_get, t_txclkesc);
for (i = 0; i < ARRAY_SIZE(timings); i++)
if (inno->pll.rate <= timings[i].rate)
@ -479,6 +496,7 @@ static int inno_dsidphy_power_on(struct phy *phy)
struct inno_dsidphy *inno = phy_get_drvdata(phy);
clk_prepare_enable(inno->pclk_phy);
clk_prepare_enable(inno->ref_clk);
pm_runtime_get_sync(inno->dev);
/* Bandgap power on */
@ -524,6 +542,7 @@ static int inno_dsidphy_power_off(struct phy *phy)
LVDS_PLL_POWER_OFF | LVDS_BANDGAP_POWER_DOWN);
pm_runtime_put(inno->dev);
clk_disable_unprepare(inno->ref_clk);
clk_disable_unprepare(inno->pclk_phy);
return 0;
@ -546,168 +565,32 @@ static int inno_dsidphy_set_mode(struct phy *phy, enum phy_mode mode,
return 0;
}
static int inno_dsidphy_configure(struct phy *phy,
union phy_configure_opts *opts)
{
struct inno_dsidphy *inno = phy_get_drvdata(phy);
int ret;
if (inno->mode != PHY_MODE_MIPI_DPHY)
return -EINVAL;
ret = phy_mipi_dphy_config_validate(&opts->mipi_dphy);
if (ret)
return ret;
memcpy(&inno->dphy_cfg, &opts->mipi_dphy, sizeof(inno->dphy_cfg));
return 0;
}
static const struct phy_ops inno_dsidphy_ops = {
.configure = inno_dsidphy_configure,
.set_mode = inno_dsidphy_set_mode,
.power_on = inno_dsidphy_power_on,
.power_off = inno_dsidphy_power_off,
.owner = THIS_MODULE,
};
static unsigned long inno_dsidphy_pll_round_rate(struct inno_dsidphy *inno,
unsigned long prate,
unsigned long rate,
u8 *prediv, u16 *fbdiv)
{
unsigned long best_freq = 0;
unsigned long fref, fout;
u8 min_prediv, max_prediv;
u8 _prediv, best_prediv = 1;
u16 _fbdiv, best_fbdiv = 1;
u32 min_delta = UINT_MAX;
/*
* The PLL output frequency can be calculated using a simple formula:
* PLL_Output_Frequency = (FREF / PREDIV * FBDIV) / 2
* where PLL_Output_Frequency is twice the DDR clock frequency.
*/
fref = prate / 2;
if (rate > 1000000000UL)
fout = 1000000000UL;
else
fout = rate;
/* 5 MHz < Fref / prediv < 40 MHz */
min_prediv = DIV_ROUND_UP(fref, 40000000);
max_prediv = fref / 5000000;
for (_prediv = min_prediv; _prediv <= max_prediv; _prediv++) {
u64 tmp;
u32 delta;
tmp = (u64)fout * _prediv;
do_div(tmp, fref);
_fbdiv = tmp;
/*
* The possible settings of feedback divider are
* 12, 13, 14, and 16 to 511 (15 is not allowed)
*/
if (_fbdiv == 15)
continue;
if (_fbdiv < 12 || _fbdiv > 511)
continue;
tmp = (u64)_fbdiv * fref;
do_div(tmp, _prediv);
delta = abs(fout - tmp);
if (!delta) {
best_prediv = _prediv;
best_fbdiv = _fbdiv;
best_freq = tmp;
break;
} else if (delta < min_delta) {
best_prediv = _prediv;
best_fbdiv = _fbdiv;
best_freq = tmp;
min_delta = delta;
}
}
if (best_freq) {
*prediv = best_prediv;
*fbdiv = best_fbdiv;
}
return best_freq;
}
static long inno_dsidphy_pll_clk_round_rate(struct clk_hw *hw,
unsigned long rate,
unsigned long *prate)
{
struct inno_dsidphy *inno = hw_to_inno(hw);
unsigned long fout;
u16 fbdiv = 1;
u8 prediv = 1;
fout = inno_dsidphy_pll_round_rate(inno, *prate, rate,
&prediv, &fbdiv);
return fout;
}
static int inno_dsidphy_pll_clk_set_rate(struct clk_hw *hw,
unsigned long rate,
unsigned long parent_rate)
{
struct inno_dsidphy *inno = hw_to_inno(hw);
unsigned long fout;
u16 fbdiv = 1;
u8 prediv = 1;
fout = inno_dsidphy_pll_round_rate(inno, parent_rate, rate,
&prediv, &fbdiv);
dev_dbg(inno->dev, "fin=%lu, fout=%lu, prediv=%u, fbdiv=%u\n",
parent_rate, fout, prediv, fbdiv);
inno->pll.prediv = prediv;
inno->pll.fbdiv = fbdiv;
inno->pll.rate = fout;
return 0;
}
static unsigned long
inno_dsidphy_pll_clk_recalc_rate(struct clk_hw *hw, unsigned long prate)
{
struct inno_dsidphy *inno = hw_to_inno(hw);
/* PLL_Output_Frequency = (FREF / PREDIV * FBDIV) / 2 */
return (prate / inno->pll.prediv * inno->pll.fbdiv) / 2;
}
static const struct clk_ops inno_dsidphy_pll_clk_ops = {
.round_rate = inno_dsidphy_pll_clk_round_rate,
.set_rate = inno_dsidphy_pll_clk_set_rate,
.recalc_rate = inno_dsidphy_pll_clk_recalc_rate,
};
static int inno_dsidphy_pll_register(struct inno_dsidphy *inno)
{
struct device *dev = inno->dev;
struct clk *clk;
const char *parent_name;
struct clk_init_data init;
int ret;
parent_name = __clk_get_name(inno->ref_clk);
init.name = "mipi_dphy_pll";
ret = of_property_read_string(dev->of_node, "clock-output-names",
&init.name);
if (ret < 0)
dev_dbg(dev, "phy should set clock-output-names property\n");
init.ops = &inno_dsidphy_pll_clk_ops;
init.parent_names = &parent_name;
init.num_parents = 1;
init.flags = 0;
inno->pll.hw.init = &init;
clk = devm_clk_register(dev, &inno->pll.hw);
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
dev_err(dev, "failed to register PLL: %d\n", ret);
return ret;
}
return devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get,
&inno->pll.hw);
}
static int inno_dsidphy_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -764,10 +647,6 @@ static int inno_dsidphy_probe(struct platform_device *pdev)
return ret;
}
ret = inno_dsidphy_pll_register(inno);
if (ret)
return ret;
pm_runtime_enable(dev);
return 0;


@ -32,7 +32,7 @@ config PHY_EXYNOS_PCIE
config PHY_SAMSUNG_USB2
tristate "Samsung USB 2.0 PHY driver"
depends on HAS_IOMEM
depends on USB_EHCI_EXYNOS || USB_OHCI_EXYNOS || USB_DWC2
depends on USB_EHCI_EXYNOS || USB_OHCI_EXYNOS || USB_DWC2 || COMPILE_TEST
select GENERIC_PHY
select MFD_SYSCON
default ARCH_EXYNOS
@ -60,7 +60,7 @@ config PHY_EXYNOS5250_USB2
config PHY_S5PV210_USB2
bool "Support for S5PV210"
depends on PHY_SAMSUNG_USB2
depends on ARCH_S5PV210
depends on ARCH_S5PV210 || COMPILE_TEST
help
Enable USB PHY support for S5PV210. This option requires that Samsung
USB 2.0 PHY driver is enabled and means that support for this
@ -69,7 +69,7 @@ config PHY_S5PV210_USB2
config PHY_EXYNOS5_USBDRD
tristate "Exynos5 SoC series USB DRD PHY driver"
depends on ARCH_EXYNOS && OF
depends on (ARCH_EXYNOS && OF) || COMPILE_TEST
depends on HAS_IOMEM
depends on USB_DWC3_EXYNOS
select GENERIC_PHY


@ -4,7 +4,7 @@
#
config PHY_DA8XX_USB
tristate "TI DA8xx USB PHY Driver"
depends on ARCH_DAVINCI_DA8XX
depends on ARCH_DAVINCI_DA8XX || COMPILE_TEST
select GENERIC_PHY
select MFD_SYSCON
help
@ -14,7 +14,7 @@ config PHY_DA8XX_USB
config PHY_DM816X_USB
tristate "TI dm816x USB PHY driver"
depends on ARCH_OMAP2PLUS
depends on ARCH_OMAP2PLUS || COMPILE_TEST
depends on USB_SUPPORT
select GENERIC_PHY
select USB_PHY
@ -33,6 +33,22 @@ config PHY_AM654_SERDES
This option enables support for TI AM654 SerDes PHY used for
PCIe.
config PHY_J721E_WIZ
tristate "TI J721E WIZ (SERDES Wrapper) support"
depends on OF && ARCH_K3 || COMPILE_TEST
depends on HAS_IOMEM && OF_ADDRESS
depends on COMMON_CLK
select GENERIC_PHY
select MULTIPLEXER
select REGMAP_MMIO
select MUX_MMIO
help
This option enables support for WIZ module present in TI's J721E
SoC. WIZ is a serdes wrapper used to configure some of the input
signals to the SERDES (Sierra/Torrent). This driver configures
three clock selects (pll0, pll1, dig) and resets for each of the
lanes.
config OMAP_CONTROL_PHY
tristate "OMAP CONTROL PHY Driver"
depends on ARCH_OMAP2PLUS || COMPILE_TEST


@ -8,3 +8,4 @@ obj-$(CONFIG_PHY_TUSB1210) += phy-tusb1210.o
obj-$(CONFIG_TWL4030_USB) += phy-twl4030-usb.o
obj-$(CONFIG_PHY_AM654_SERDES) += phy-am654-serdes.o
obj-$(CONFIG_PHY_TI_GMII_SEL) += phy-gmii-sel.o
obj-$(CONFIG_PHY_J721E_WIZ) += phy-j721e-wiz.o


@ -0,0 +1,959 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Wrapper driver for SERDES used in J721E
*
* Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/
* Author: Kishon Vijay Abraham I <kishon@ti.com>
*/
#include <dt-bindings/phy/phy.h>
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/mux/consumer.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>
#define WIZ_SERDES_CTRL 0x404
#define WIZ_SERDES_TOP_CTRL 0x408
#define WIZ_SERDES_RST 0x40c
#define WIZ_SERDES_TYPEC 0x410
#define WIZ_LANECTL(n) (0x480 + (0x40 * (n)))
#define WIZ_MAX_LANES 4
#define WIZ_MUX_NUM_CLOCKS 3
#define WIZ_DIV_NUM_CLOCKS_16G 2
#define WIZ_DIV_NUM_CLOCKS_10G 1
#define WIZ_SERDES_TYPEC_LN10_SWAP BIT(30)
enum wiz_lane_standard_mode {
LANE_MODE_GEN1,
LANE_MODE_GEN2,
LANE_MODE_GEN3,
LANE_MODE_GEN4,
};
enum wiz_refclk_mux_sel {
PLL0_REFCLK,
PLL1_REFCLK,
REFCLK_DIG,
};
enum wiz_refclk_div_sel {
CMN_REFCLK_DIG_DIV,
CMN_REFCLK1_DIG_DIV,
};
static const struct reg_field por_en = REG_FIELD(WIZ_SERDES_CTRL, 31, 31);
static const struct reg_field phy_reset_n = REG_FIELD(WIZ_SERDES_RST, 31, 31);
static const struct reg_field pll1_refclk_mux_sel =
REG_FIELD(WIZ_SERDES_RST, 29, 29);
static const struct reg_field pll0_refclk_mux_sel =
REG_FIELD(WIZ_SERDES_RST, 28, 28);
static const struct reg_field refclk_dig_sel_16g =
REG_FIELD(WIZ_SERDES_RST, 24, 25);
static const struct reg_field refclk_dig_sel_10g =
REG_FIELD(WIZ_SERDES_RST, 24, 24);
static const struct reg_field pma_cmn_refclk_int_mode =
REG_FIELD(WIZ_SERDES_TOP_CTRL, 28, 29);
static const struct reg_field pma_cmn_refclk_mode =
REG_FIELD(WIZ_SERDES_TOP_CTRL, 30, 31);
static const struct reg_field pma_cmn_refclk_dig_div =
REG_FIELD(WIZ_SERDES_TOP_CTRL, 26, 27);
static const struct reg_field pma_cmn_refclk1_dig_div =
REG_FIELD(WIZ_SERDES_TOP_CTRL, 24, 25);
static const struct reg_field p_enable[WIZ_MAX_LANES] = {
REG_FIELD(WIZ_LANECTL(0), 30, 31),
REG_FIELD(WIZ_LANECTL(1), 30, 31),
REG_FIELD(WIZ_LANECTL(2), 30, 31),
REG_FIELD(WIZ_LANECTL(3), 30, 31),
};
static const struct reg_field p_align[WIZ_MAX_LANES] = {
REG_FIELD(WIZ_LANECTL(0), 29, 29),
REG_FIELD(WIZ_LANECTL(1), 29, 29),
REG_FIELD(WIZ_LANECTL(2), 29, 29),
REG_FIELD(WIZ_LANECTL(3), 29, 29),
};
static const struct reg_field p_raw_auto_start[WIZ_MAX_LANES] = {
REG_FIELD(WIZ_LANECTL(0), 28, 28),
REG_FIELD(WIZ_LANECTL(1), 28, 28),
REG_FIELD(WIZ_LANECTL(2), 28, 28),
REG_FIELD(WIZ_LANECTL(3), 28, 28),
};
static const struct reg_field p_standard_mode[WIZ_MAX_LANES] = {
REG_FIELD(WIZ_LANECTL(0), 24, 25),
REG_FIELD(WIZ_LANECTL(1), 24, 25),
REG_FIELD(WIZ_LANECTL(2), 24, 25),
REG_FIELD(WIZ_LANECTL(3), 24, 25),
};
static const struct reg_field typec_ln10_swap =
REG_FIELD(WIZ_SERDES_TYPEC, 30, 30);
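REG_FIELD(reg, lsb, msb) describes an inclusive bit range inside a register; the regmap core derives the mask and shift from it when the field is read or written. The equivalent arithmetic, as an illustrative Python sketch (not the kernel implementation):

```python
def reg_field(lsb, msb):
    """Mask and shift for a field spanning bits lsb..msb inclusive,
    matching what regmap derives from a REG_FIELD() description."""
    mask = ((1 << (msb - lsb + 1)) - 1) << lsb
    return mask, lsb

# p_standard_mode occupies bits 24..25 of WIZ_LANECTL(n)
mask, shift = reg_field(24, 25)
print(hex(mask), shift)  # 0x3000000 24
```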
struct wiz_clk_mux {
struct clk_hw hw;
struct regmap_field *field;
u32 *table;
struct clk_init_data clk_data;
};
#define to_wiz_clk_mux(_hw) container_of(_hw, struct wiz_clk_mux, hw)
struct wiz_clk_divider {
struct clk_hw hw;
struct regmap_field *field;
struct clk_div_table *table;
struct clk_init_data clk_data;
};
#define to_wiz_clk_div(_hw) container_of(_hw, struct wiz_clk_divider, hw)
struct wiz_clk_mux_sel {
struct regmap_field *field;
u32 table[4];
const char *node_name;
};
struct wiz_clk_div_sel {
struct regmap_field *field;
struct clk_div_table *table;
const char *node_name;
};
static struct wiz_clk_mux_sel clk_mux_sel_16g[] = {
{
/*
* Mux value to be configured for each of the input clocks
* in the order populated in device tree
*/
.table = { 1, 0 },
.node_name = "pll0-refclk",
},
{
.table = { 1, 0 },
.node_name = "pll1-refclk",
},
{
.table = { 1, 3, 0, 2 },
.node_name = "refclk-dig",
},
};
static struct wiz_clk_mux_sel clk_mux_sel_10g[] = {
{
/*
* Mux value to be configured for each of the input clocks
* in the order populated in device tree
*/
.table = { 1, 0 },
.node_name = "pll0-refclk",
},
{
.table = { 1, 0 },
.node_name = "pll1-refclk",
},
{
.table = { 1, 0 },
.node_name = "refclk-dig",
},
};
static struct clk_div_table clk_div_table[] = {
{ .val = 0, .div = 1, },
{ .val = 1, .div = 2, },
{ .val = 2, .div = 4, },
{ .val = 3, .div = 8, },
};
static struct wiz_clk_div_sel clk_div_sel[] = {
{
.table = clk_div_table,
.node_name = "cmn-refclk-dig-div",
},
{
.table = clk_div_table,
.node_name = "cmn-refclk1-dig-div",
},
};
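The divider table above encodes the divider as a power of two of the register value (div = 1 << val). A minimal sketch of how a divided rate falls out of that encoding (illustrative Python; the 100 MHz parent rate is an example, not from the driver):

```python
CLK_DIV_TABLE = {0: 1, 1: 2, 2: 4, 3: 8}  # register val -> divider

def div_rate(parent_rate, val):
    """Divided clock rate for a register value; the table encoding is
    simply div = 1 << val."""
    return parent_rate // (1 << val)

print(div_rate(100_000_000, 2))  # -> 25000000
```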
enum wiz_type {
J721E_WIZ_16G,
J721E_WIZ_10G,
};
#define WIZ_TYPEC_DIR_DEBOUNCE_MIN 100 /* ms */
#define WIZ_TYPEC_DIR_DEBOUNCE_MAX 1000
struct wiz {
struct regmap *regmap;
enum wiz_type type;
struct wiz_clk_mux_sel *clk_mux_sel;
struct wiz_clk_div_sel *clk_div_sel;
unsigned int clk_div_sel_num;
struct regmap_field *por_en;
struct regmap_field *phy_reset_n;
struct regmap_field *p_enable[WIZ_MAX_LANES];
struct regmap_field *p_align[WIZ_MAX_LANES];
struct regmap_field *p_raw_auto_start[WIZ_MAX_LANES];
struct regmap_field *p_standard_mode[WIZ_MAX_LANES];
struct regmap_field *pma_cmn_refclk_int_mode;
struct regmap_field *pma_cmn_refclk_mode;
struct regmap_field *pma_cmn_refclk_dig_div;
struct regmap_field *pma_cmn_refclk1_dig_div;
struct regmap_field *typec_ln10_swap;
struct device *dev;
u32 num_lanes;
struct platform_device *serdes_pdev;
struct reset_controller_dev wiz_phy_reset_dev;
struct gpio_desc *gpio_typec_dir;
int typec_dir_delay;
};
static int wiz_reset(struct wiz *wiz)
{
int ret;
ret = regmap_field_write(wiz->por_en, 0x1);
if (ret)
return ret;
mdelay(1);
ret = regmap_field_write(wiz->por_en, 0x0);
if (ret)
return ret;
return 0;
}
static int wiz_mode_select(struct wiz *wiz)
{
u32 num_lanes = wiz->num_lanes;
int ret;
int i;
for (i = 0; i < num_lanes; i++) {
ret = regmap_field_write(wiz->p_standard_mode[i],
LANE_MODE_GEN4);
if (ret)
return ret;
}
return 0;
}
static int wiz_init_raw_interface(struct wiz *wiz, bool enable)
{
u32 num_lanes = wiz->num_lanes;
int i;
int ret;
for (i = 0; i < num_lanes; i++) {
ret = regmap_field_write(wiz->p_align[i], enable);
if (ret)
return ret;
ret = regmap_field_write(wiz->p_raw_auto_start[i], enable);
if (ret)
return ret;
}
return 0;
}
static int wiz_init(struct wiz *wiz)
{
struct device *dev = wiz->dev;
int ret;
ret = wiz_reset(wiz);
if (ret) {
dev_err(dev, "WIZ reset failed\n");
return ret;
}
ret = wiz_mode_select(wiz);
if (ret) {
dev_err(dev, "WIZ mode select failed\n");
return ret;
}
ret = wiz_init_raw_interface(wiz, true);
if (ret) {
dev_err(dev, "WIZ interface initialization failed\n");
return ret;
}
return 0;
}
static int wiz_regfield_init(struct wiz *wiz)
{
struct wiz_clk_mux_sel *clk_mux_sel;
struct wiz_clk_div_sel *clk_div_sel;
struct regmap *regmap = wiz->regmap;
int num_lanes = wiz->num_lanes;
struct device *dev = wiz->dev;
int i;
wiz->por_en = devm_regmap_field_alloc(dev, regmap, por_en);
if (IS_ERR(wiz->por_en)) {
dev_err(dev, "POR_EN reg field init failed\n");
return PTR_ERR(wiz->por_en);
}
wiz->phy_reset_n = devm_regmap_field_alloc(dev, regmap,
phy_reset_n);
if (IS_ERR(wiz->phy_reset_n)) {
dev_err(dev, "PHY_RESET_N reg field init failed\n");
return PTR_ERR(wiz->phy_reset_n);
}
wiz->pma_cmn_refclk_int_mode =
devm_regmap_field_alloc(dev, regmap, pma_cmn_refclk_int_mode);
if (IS_ERR(wiz->pma_cmn_refclk_int_mode)) {
dev_err(dev, "PMA_CMN_REFCLK_INT_MODE reg field init failed\n");
return PTR_ERR(wiz->pma_cmn_refclk_int_mode);
}
wiz->pma_cmn_refclk_mode =
devm_regmap_field_alloc(dev, regmap, pma_cmn_refclk_mode);
if (IS_ERR(wiz->pma_cmn_refclk_mode)) {
dev_err(dev, "PMA_CMN_REFCLK_MODE reg field init failed\n");
return PTR_ERR(wiz->pma_cmn_refclk_mode);
}
clk_div_sel = &wiz->clk_div_sel[CMN_REFCLK_DIG_DIV];
clk_div_sel->field = devm_regmap_field_alloc(dev, regmap,
pma_cmn_refclk_dig_div);
if (IS_ERR(clk_div_sel->field)) {
dev_err(dev, "PMA_CMN_REFCLK_DIG_DIV reg field init failed\n");
return PTR_ERR(clk_div_sel->field);
}
if (wiz->type == J721E_WIZ_16G) {
clk_div_sel = &wiz->clk_div_sel[CMN_REFCLK1_DIG_DIV];
clk_div_sel->field =
devm_regmap_field_alloc(dev, regmap,
pma_cmn_refclk1_dig_div);
if (IS_ERR(clk_div_sel->field)) {
dev_err(dev, "PMA_CMN_REFCLK1_DIG_DIV reg field init failed\n");
return PTR_ERR(clk_div_sel->field);
}
}
clk_mux_sel = &wiz->clk_mux_sel[PLL0_REFCLK];
clk_mux_sel->field = devm_regmap_field_alloc(dev, regmap,
pll0_refclk_mux_sel);
if (IS_ERR(clk_mux_sel->field)) {
dev_err(dev, "PLL0_REFCLK_SEL reg field init failed\n");
return PTR_ERR(clk_mux_sel->field);
}
clk_mux_sel = &wiz->clk_mux_sel[PLL1_REFCLK];
clk_mux_sel->field = devm_regmap_field_alloc(dev, regmap,
pll1_refclk_mux_sel);
if (IS_ERR(clk_mux_sel->field)) {
dev_err(dev, "PLL1_REFCLK_SEL reg field init failed\n");
return PTR_ERR(clk_mux_sel->field);
}
clk_mux_sel = &wiz->clk_mux_sel[REFCLK_DIG];
if (wiz->type == J721E_WIZ_10G)
clk_mux_sel->field =
devm_regmap_field_alloc(dev, regmap,
refclk_dig_sel_10g);
else
clk_mux_sel->field =
devm_regmap_field_alloc(dev, regmap,
refclk_dig_sel_16g);
if (IS_ERR(clk_mux_sel->field)) {
dev_err(dev, "REFCLK_DIG_SEL reg field init failed\n");
return PTR_ERR(clk_mux_sel->field);
}
for (i = 0; i < num_lanes; i++) {
wiz->p_enable[i] = devm_regmap_field_alloc(dev, regmap,
p_enable[i]);
if (IS_ERR(wiz->p_enable[i])) {
dev_err(dev, "P%d_ENABLE reg field init failed\n", i);
return PTR_ERR(wiz->p_enable[i]);
}
wiz->p_align[i] = devm_regmap_field_alloc(dev, regmap,
p_align[i]);
if (IS_ERR(wiz->p_align[i])) {
dev_err(dev, "P%d_ALIGN reg field init failed\n", i);
return PTR_ERR(wiz->p_align[i]);
}
wiz->p_raw_auto_start[i] =
devm_regmap_field_alloc(dev, regmap, p_raw_auto_start[i]);
if (IS_ERR(wiz->p_raw_auto_start[i])) {
dev_err(dev, "P%d_RAW_AUTO_START reg field init fail\n",
i);
return PTR_ERR(wiz->p_raw_auto_start[i]);
}
wiz->p_standard_mode[i] =
devm_regmap_field_alloc(dev, regmap, p_standard_mode[i]);
if (IS_ERR(wiz->p_standard_mode[i])) {
dev_err(dev, "P%d_STANDARD_MODE reg field init fail\n",
i);
return PTR_ERR(wiz->p_standard_mode[i]);
}
}
wiz->typec_ln10_swap = devm_regmap_field_alloc(dev, regmap,
typec_ln10_swap);
if (IS_ERR(wiz->typec_ln10_swap)) {
dev_err(dev, "LN10_SWAP reg field init failed\n");
return PTR_ERR(wiz->typec_ln10_swap);
}
return 0;
}
static u8 wiz_clk_mux_get_parent(struct clk_hw *hw)
{
struct wiz_clk_mux *mux = to_wiz_clk_mux(hw);
struct regmap_field *field = mux->field;
unsigned int val;
regmap_field_read(field, &val);
return clk_mux_val_to_index(hw, mux->table, 0, val);
}
static int wiz_clk_mux_set_parent(struct clk_hw *hw, u8 index)
{
struct wiz_clk_mux *mux = to_wiz_clk_mux(hw);
struct regmap_field *field = mux->field;
int val;
val = mux->table[index];
return regmap_field_write(field, val);
}
static const struct clk_ops wiz_clk_mux_ops = {
.set_parent = wiz_clk_mux_set_parent,
.get_parent = wiz_clk_mux_get_parent,
};
static int wiz_mux_clk_register(struct wiz *wiz, struct device_node *node,
struct regmap_field *field, u32 *table)
{
struct device *dev = wiz->dev;
struct clk_init_data *init;
const char **parent_names;
unsigned int num_parents;
struct wiz_clk_mux *mux;
char clk_name[100];
struct clk *clk;
int ret;
mux = devm_kzalloc(dev, sizeof(*mux), GFP_KERNEL);
if (!mux)
return -ENOMEM;
num_parents = of_clk_get_parent_count(node);
if (num_parents < 2) {
dev_err(dev, "SERDES clock must have at least 2 parents\n");
return -EINVAL;
}
}
parent_names = devm_kzalloc(dev, (sizeof(char *) * num_parents),
GFP_KERNEL);
if (!parent_names)
return -ENOMEM;
of_clk_parent_fill(node, parent_names, num_parents);
snprintf(clk_name, sizeof(clk_name), "%s_%s", dev_name(dev),
node->name);
init = &mux->clk_data;
init->ops = &wiz_clk_mux_ops;
init->flags = CLK_SET_RATE_NO_REPARENT;
init->parent_names = parent_names;
init->num_parents = num_parents;
init->name = clk_name;
mux->field = field;
mux->table = table;
mux->hw.init = init;
clk = devm_clk_register(dev, &mux->hw);
if (IS_ERR(clk))
return PTR_ERR(clk);
ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
if (ret)
dev_err(dev, "Failed to add clock provider: %s\n", clk_name);
return ret;
}
static unsigned long wiz_clk_div_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate)
{
struct wiz_clk_divider *div = to_wiz_clk_div(hw);
struct regmap_field *field = div->field;
unsigned int val;
regmap_field_read(field, &val);
return divider_recalc_rate(hw, parent_rate, val, div->table, 0x0, 2);
}
static long wiz_clk_div_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *prate)
{
struct wiz_clk_divider *div = to_wiz_clk_div(hw);
return divider_round_rate(hw, rate, prate, div->table, 2, 0x0);
}
static int wiz_clk_div_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
struct wiz_clk_divider *div = to_wiz_clk_div(hw);
struct regmap_field *field = div->field;
int val;
val = divider_get_val(rate, parent_rate, div->table, 2, 0x0);
if (val < 0)
return val;
return regmap_field_write(field, val);
}
static const struct clk_ops wiz_clk_div_ops = {
.recalc_rate = wiz_clk_div_recalc_rate,
.round_rate = wiz_clk_div_round_rate,
.set_rate = wiz_clk_div_set_rate,
};
static int wiz_div_clk_register(struct wiz *wiz, struct device_node *node,
struct regmap_field *field,
struct clk_div_table *table)
{
struct device *dev = wiz->dev;
struct wiz_clk_divider *div;
struct clk_init_data *init;
const char **parent_names;
char clk_name[100];
struct clk *clk;
int ret;
div = devm_kzalloc(dev, sizeof(*div), GFP_KERNEL);
if (!div)
return -ENOMEM;
snprintf(clk_name, sizeof(clk_name), "%s_%s", dev_name(dev),
node->name);
parent_names = devm_kzalloc(dev, sizeof(char *), GFP_KERNEL);
if (!parent_names)
return -ENOMEM;
of_clk_parent_fill(node, parent_names, 1);
init = &div->clk_data;
init->ops = &wiz_clk_div_ops;
init->flags = 0;
init->parent_names = parent_names;
init->num_parents = 1;
init->name = clk_name;
div->field = field;
div->table = table;
div->hw.init = init;
clk = devm_clk_register(dev, &div->hw);
if (IS_ERR(clk))
return PTR_ERR(clk);
ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
if (ret)
dev_err(dev, "Failed to add clock provider: %s\n", clk_name);
return ret;
}
static void wiz_clock_cleanup(struct wiz *wiz, struct device_node *node)
{
struct wiz_clk_mux_sel *clk_mux_sel = wiz->clk_mux_sel;
struct device_node *clk_node;
int i;
for (i = 0; i < WIZ_MUX_NUM_CLOCKS; i++) {
clk_node = of_get_child_by_name(node, clk_mux_sel[i].node_name);
of_clk_del_provider(clk_node);
of_node_put(clk_node);
}
}
static int wiz_clock_init(struct wiz *wiz, struct device_node *node)
{
struct wiz_clk_mux_sel *clk_mux_sel = wiz->clk_mux_sel;
struct device *dev = wiz->dev;
struct device_node *clk_node;
const char *node_name;
unsigned long rate;
struct clk *clk;
int ret;
int i;
clk = devm_clk_get(dev, "core_ref_clk");
if (IS_ERR(clk)) {
dev_err(dev, "core_ref_clk clock not found\n");
ret = PTR_ERR(clk);
return ret;
}
rate = clk_get_rate(clk);
if (rate >= 100000000)
regmap_field_write(wiz->pma_cmn_refclk_int_mode, 0x1);
else
regmap_field_write(wiz->pma_cmn_refclk_int_mode, 0x3);
clk = devm_clk_get(dev, "ext_ref_clk");
if (IS_ERR(clk)) {
dev_err(dev, "ext_ref_clk clock not found\n");
ret = PTR_ERR(clk);
return ret;
}
rate = clk_get_rate(clk);
if (rate >= 100000000)
regmap_field_write(wiz->pma_cmn_refclk_mode, 0x0);
else
regmap_field_write(wiz->pma_cmn_refclk_mode, 0x2);
for (i = 0; i < WIZ_MUX_NUM_CLOCKS; i++) {
node_name = clk_mux_sel[i].node_name;
clk_node = of_get_child_by_name(node, node_name);
if (!clk_node) {
dev_err(dev, "Unable to get %s node\n", node_name);
ret = -EINVAL;
goto err;
}
ret = wiz_mux_clk_register(wiz, clk_node, clk_mux_sel[i].field,
clk_mux_sel[i].table);
if (ret) {
dev_err(dev, "Failed to register %s clock\n",
node_name);
of_node_put(clk_node);
goto err;
}
of_node_put(clk_node);
}
for (i = 0; i < wiz->clk_div_sel_num; i++) {
node_name = clk_div_sel[i].node_name;
clk_node = of_get_child_by_name(node, node_name);
if (!clk_node) {
dev_err(dev, "Unable to get %s node\n", node_name);
ret = -EINVAL;
goto err;
}
ret = wiz_div_clk_register(wiz, clk_node, clk_div_sel[i].field,
clk_div_sel[i].table);
if (ret) {
dev_err(dev, "Failed to register %s clock\n",
node_name);
of_node_put(clk_node);
goto err;
}
of_node_put(clk_node);
}
return 0;
err:
wiz_clock_cleanup(wiz, node);
return ret;
}
static int wiz_phy_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct device *dev = rcdev->dev;
struct wiz *wiz = dev_get_drvdata(dev);
int ret = 0;
if (id == 0) {
ret = regmap_field_write(wiz->phy_reset_n, false);
return ret;
}
ret = regmap_field_write(wiz->p_enable[id - 1], false);
return ret;
}
static int wiz_phy_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct device *dev = rcdev->dev;
struct wiz *wiz = dev_get_drvdata(dev);
int ret;
/* if typec-dir gpio was specified, set LN10 SWAP bit based on that */
if (id == 0 && wiz->gpio_typec_dir) {
if (wiz->typec_dir_delay)
msleep_interruptible(wiz->typec_dir_delay);
if (gpiod_get_value_cansleep(wiz->gpio_typec_dir))
regmap_field_write(wiz->typec_ln10_swap, 1);
else
regmap_field_write(wiz->typec_ln10_swap, 0);
}
if (id == 0) {
ret = regmap_field_write(wiz->phy_reset_n, true);
return ret;
}
ret = regmap_field_write(wiz->p_enable[id - 1], true);
return ret;
}
static const struct reset_control_ops wiz_phy_reset_ops = {
.assert = wiz_phy_reset_assert,
.deassert = wiz_phy_reset_deassert,
};
static struct regmap_config wiz_regmap_config = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
.fast_io = true,
};
static const struct of_device_id wiz_id_table[] = {
{
.compatible = "ti,j721e-wiz-16g", .data = (void *)J721E_WIZ_16G
},
{
.compatible = "ti,j721e-wiz-10g", .data = (void *)J721E_WIZ_10G
},
{}
};
MODULE_DEVICE_TABLE(of, wiz_id_table);
static int wiz_probe(struct platform_device *pdev)
{
struct reset_controller_dev *phy_reset_dev;
struct device *dev = &pdev->dev;
struct device_node *node = dev->of_node;
struct platform_device *serdes_pdev;
struct device_node *child_node;
struct regmap *regmap;
struct resource res;
void __iomem *base;
struct wiz *wiz;
u32 num_lanes;
int ret;
wiz = devm_kzalloc(dev, sizeof(*wiz), GFP_KERNEL);
if (!wiz)
return -ENOMEM;
wiz->type = (enum wiz_type)of_device_get_match_data(dev);
child_node = of_get_child_by_name(node, "serdes");
if (!child_node) {
dev_err(dev, "Failed to get SERDES child DT node\n");
return -ENODEV;
}
ret = of_address_to_resource(child_node, 0, &res);
if (ret) {
dev_err(dev, "Failed to get memory resource\n");
goto err_addr_to_resource;
}
base = devm_ioremap(dev, res.start, resource_size(&res));
if (!base) {
ret = -ENOMEM;
goto err_addr_to_resource;
}
regmap = devm_regmap_init_mmio(dev, base, &wiz_regmap_config);
if (IS_ERR(regmap)) {
dev_err(dev, "Failed to initialize regmap\n");
ret = PTR_ERR(regmap);
goto err_addr_to_resource;
}
ret = of_property_read_u32(node, "num-lanes", &num_lanes);
if (ret) {
dev_err(dev, "Failed to read num-lanes property\n");
goto err_addr_to_resource;
}
if (num_lanes > WIZ_MAX_LANES) {
dev_err(dev, "Cannot support %d lanes\n", num_lanes);
ret = -ENODEV;
goto err_addr_to_resource;
}
wiz->gpio_typec_dir = devm_gpiod_get_optional(dev, "typec-dir",
GPIOD_IN);
if (IS_ERR(wiz->gpio_typec_dir)) {
ret = PTR_ERR(wiz->gpio_typec_dir);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to request typec-dir gpio: %d\n",
ret);
goto err_addr_to_resource;
}
if (wiz->gpio_typec_dir) {
ret = of_property_read_u32(node, "typec-dir-debounce-ms",
&wiz->typec_dir_delay);
if (ret && ret != -EINVAL) {
dev_err(dev, "Invalid typec-dir-debounce property\n");
goto err_addr_to_resource;
}
/* use min. debounce from Type-C spec if not provided in DT */
if (ret == -EINVAL)
wiz->typec_dir_delay = WIZ_TYPEC_DIR_DEBOUNCE_MIN;
if (wiz->typec_dir_delay < WIZ_TYPEC_DIR_DEBOUNCE_MIN ||
wiz->typec_dir_delay > WIZ_TYPEC_DIR_DEBOUNCE_MAX) {
ret = -EINVAL;
dev_err(dev, "Invalid typec-dir-debounce property\n");
goto err_addr_to_resource;
}
}
wiz->dev = dev;
wiz->regmap = regmap;
wiz->num_lanes = num_lanes;
if (wiz->type == J721E_WIZ_10G)
wiz->clk_mux_sel = clk_mux_sel_10g;
else
wiz->clk_mux_sel = clk_mux_sel_16g;
wiz->clk_div_sel = clk_div_sel;
if (wiz->type == J721E_WIZ_10G)
wiz->clk_div_sel_num = WIZ_DIV_NUM_CLOCKS_10G;
else
wiz->clk_div_sel_num = WIZ_DIV_NUM_CLOCKS_16G;
platform_set_drvdata(pdev, wiz);
ret = wiz_regfield_init(wiz);
if (ret) {
dev_err(dev, "Failed to initialize regfields\n");
goto err_addr_to_resource;
}
phy_reset_dev = &wiz->wiz_phy_reset_dev;
phy_reset_dev->dev = dev;
phy_reset_dev->ops = &wiz_phy_reset_ops;
phy_reset_dev->owner = THIS_MODULE;
phy_reset_dev->of_node = node;
/* One reset for each lane and one for the entire SERDES */
phy_reset_dev->nr_resets = num_lanes + 1;
ret = devm_reset_controller_register(dev, phy_reset_dev);
if (ret < 0) {
dev_warn(dev, "Failed to register reset controller\n");
goto err_addr_to_resource;
}
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
dev_err(dev, "pm_runtime_get_sync failed\n");
goto err_get_sync;
}
ret = wiz_clock_init(wiz, node);
if (ret < 0) {
dev_warn(dev, "Failed to initialize clocks\n");
goto err_get_sync;
}
serdes_pdev = of_platform_device_create(child_node, NULL, dev);
if (!serdes_pdev) {
dev_WARN(dev, "Unable to create SERDES platform device\n");
ret = -ENOMEM;
goto err_pdev_create;
}
wiz->serdes_pdev = serdes_pdev;
ret = wiz_init(wiz);
if (ret) {
dev_err(dev, "WIZ initialization failed\n");
goto err_wiz_init;
}
of_node_put(child_node);
return 0;
err_wiz_init:
of_platform_device_destroy(&serdes_pdev->dev, NULL);
err_pdev_create:
wiz_clock_cleanup(wiz, node);
err_get_sync:
pm_runtime_put(dev);
pm_runtime_disable(dev);
err_addr_to_resource:
of_node_put(child_node);
return ret;
}
static int wiz_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->of_node;
struct platform_device *serdes_pdev;
struct wiz *wiz;
wiz = dev_get_drvdata(dev);
serdes_pdev = wiz->serdes_pdev;
of_platform_device_destroy(&serdes_pdev->dev, NULL);
wiz_clock_cleanup(wiz, node);
pm_runtime_put(dev);
pm_runtime_disable(dev);
return 0;
}
static struct platform_driver wiz_driver = {
.probe = wiz_probe,
.remove = wiz_remove,
.driver = {
.name = "wiz",
.of_match_table = wiz_id_table,
},
};
module_platform_driver(wiz_driver);
MODULE_AUTHOR("Texas Instruments Inc.");
MODULE_DESCRIPTION("TI J721E WIZ driver");
MODULE_LICENSE("GPL v2");

@ -850,6 +850,12 @@ static int ti_pipe3_probe(struct platform_device *pdev)
static int ti_pipe3_remove(struct platform_device *pdev)
{
struct ti_pipe3 *phy = platform_get_drvdata(pdev);
if (phy->mode == PIPE3_MODE_SATA) {
clk_disable_unprepare(phy->refclk);
phy->sata_refclk_enabled = false;
}
pm_runtime_disable(&pdev->dev);
return 0;
@ -900,18 +906,8 @@ static void ti_pipe3_disable_clocks(struct ti_pipe3 *phy)
{
if (!IS_ERR(phy->wkupclk))
clk_disable_unprepare(phy->wkupclk);
if (!IS_ERR(phy->refclk)) {
if (!IS_ERR(phy->refclk))
clk_disable_unprepare(phy->refclk);
/*
* SATA refclk needs an additional disable as we left it
* on in probe to avoid Errata i783
*/
if (phy->sata_refclk_enabled) {
clk_disable_unprepare(phy->refclk);
phy->sata_refclk_enabled = false;
}
}
if (!IS_ERR(phy->div_clk))
clk_disable_unprepare(phy->div_clk);
}

@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
menuconfig THUNDERBOLT
tristate "Thunderbolt support"
menuconfig USB4
tristate "Unified support for USB4 and Thunderbolt"
depends on PCI
depends on X86 || COMPILE_TEST
select APPLE_PROPERTIES if EFI_STUB && X86
@ -9,9 +9,10 @@ menuconfig THUNDERBOLT
select CRYPTO_HASH
select NVMEM
help
Thunderbolt Controller driver. This driver is required if you
want to hotplug Thunderbolt devices on Apple hardware or on PCs
with Intel Falcon Ridge or newer.
USB4 and Thunderbolt driver. USB4 is the public specification
based on the Thunderbolt 3 protocol. This driver is required if
you want to hotplug Thunderbolt and USB4 compliant devices on
Apple hardware or on PCs with Intel Falcon Ridge or newer.
To compile this driver as a module, choose M here. The module will be
called thunderbolt.

@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
obj-${CONFIG_USB4} := thunderbolt.o
thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o

@ -113,7 +113,16 @@ int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
return ret;
}
static int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap)
/**
* tb_switch_find_cap() - Find switch capability
* @sw: Switch to find the capability for
* @cap: Capability to look for
*
* Returns offset to start of capability or %-ENOENT if no such
* capability was found. Negative errno is returned if there was an
* error.
*/
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap)
{
int offset = sw->config.first_cap_offset;

@ -708,19 +708,26 @@ void tb_ctl_stop(struct tb_ctl *ctl)
/* public interface, commands */
/**
* tb_cfg_error() - send error packet
* tb_cfg_ack_plug() - Ack hot plug/unplug event
* @ctl: Control channel to use
* @route: Router that originated the event
* @port: Port where the hot plug/unplug happened
* @unplug: Ack hot plug or unplug
*
* Return: Returns 0 on success or an error code on failure.
* Call this as response for hot plug/unplug event to ack it.
* Returns %0 on success or an error code on failure.
*/
int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
enum tb_cfg_error error)
int tb_cfg_ack_plug(struct tb_ctl *ctl, u64 route, u32 port, bool unplug)
{
struct cfg_error_pkg pkg = {
.header = tb_cfg_make_header(route),
.port = port,
.error = error,
.error = TB_CFG_ERROR_ACK_PLUG_EVENT,
.pg = unplug ? TB_CFG_ERROR_PG_HOT_UNPLUG
: TB_CFG_ERROR_PG_HOT_PLUG,
};
tb_ctl_dbg(ctl, "resetting error on %llx:%x.\n", route, port);
tb_ctl_dbg(ctl, "acking hot %splug event on %llx:%x\n",
unplug ? "un" : "", route, port);
return tb_ctl_tx(ctl, &pkg, sizeof(pkg), TB_CFG_PKG_ERROR);
}

@ -123,8 +123,7 @@ static inline struct tb_cfg_header tb_cfg_make_header(u64 route)
return header;
}
int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
enum tb_cfg_error error);
int tb_cfg_ack_plug(struct tb_ctl *ctl, u64 route, u32 port, bool unplug);
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
int timeout_msec);
struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,

@ -130,13 +130,52 @@ static int tb_eeprom_in(struct tb_switch *sw, u8 *val)
return 0;
}
/**
* tb_eeprom_get_drom_offset - get drom offset within eeprom
*/
static int tb_eeprom_get_drom_offset(struct tb_switch *sw, u16 *offset)
{
struct tb_cap_plug_events cap;
int res;
if (!sw->cap_plug_events) {
tb_sw_warn(sw, "no TB_CAP_PLUG_EVENTS, cannot read eeprom\n");
return -ENODEV;
}
res = tb_sw_read(sw, &cap, TB_CFG_SWITCH, sw->cap_plug_events,
sizeof(cap) / 4);
if (res)
return res;
if (!cap.eeprom_ctl.present || cap.eeprom_ctl.not_present) {
tb_sw_warn(sw, "no NVM\n");
return -ENODEV;
}
if (cap.drom_offset > 0xffff) {
tb_sw_warn(sw, "drom offset is larger than 0xffff: %#x\n",
cap.drom_offset);
return -ENXIO;
}
*offset = cap.drom_offset;
return 0;
}
/**
* tb_eeprom_read_n - read count bytes from offset into val
*/
static int tb_eeprom_read_n(struct tb_switch *sw, u16 offset, u8 *val,
size_t count)
{
u16 drom_offset;
int i, res;
res = tb_eeprom_get_drom_offset(sw, &drom_offset);
if (res)
return res;
offset += drom_offset;
res = tb_eeprom_active(sw, true);
if (res)
return res;
@ -238,36 +277,6 @@ struct tb_drom_entry_port {
} __packed;
/**
* tb_eeprom_get_drom_offset - get drom offset within eeprom
*/
static int tb_eeprom_get_drom_offset(struct tb_switch *sw, u16 *offset)
{
struct tb_cap_plug_events cap;
int res;
if (!sw->cap_plug_events) {
tb_sw_warn(sw, "no TB_CAP_PLUG_EVENTS, cannot read eeprom\n");
return -ENOSYS;
}
res = tb_sw_read(sw, &cap, TB_CFG_SWITCH, sw->cap_plug_events,
sizeof(cap) / 4);
if (res)
return res;
if (!cap.eeprom_ctl.present || cap.eeprom_ctl.not_present) {
tb_sw_warn(sw, "no NVM\n");
return -ENOSYS;
}
if (cap.drom_offset > 0xffff) {
tb_sw_warn(sw, "drom offset is larger than 0xffff: %#x\n",
cap.drom_offset);
return -ENXIO;
}
*offset = cap.drom_offset;
return 0;
}
/**
* tb_drom_read_uid_only - read uid directly from drom
*
@ -277,17 +286,11 @@ static int tb_eeprom_get_drom_offset(struct tb_switch *sw, u16 *offset)
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid)
{
u8 data[9];
u16 drom_offset;
u8 crc;
int res = tb_eeprom_get_drom_offset(sw, &drom_offset);
if (res)
return res;
if (drom_offset == 0)
return -ENODEV;
int res;
/* read uid */
res = tb_eeprom_read_n(sw, drom_offset, data, 9);
res = tb_eeprom_read_n(sw, 0, data, 9);
if (res)
return res;
@ -484,12 +487,42 @@ err_free:
return ret;
}
static int usb4_copy_host_drom(struct tb_switch *sw, u16 *size)
{
int ret;
ret = usb4_switch_drom_read(sw, 14, size, sizeof(*size));
if (ret)
return ret;
/* Size includes CRC8 + UID + CRC32 */
*size += 1 + 8 + 4;
sw->drom = kzalloc(*size, GFP_KERNEL);
if (!sw->drom)
return -ENOMEM;
ret = usb4_switch_drom_read(sw, 0, sw->drom, *size);
if (ret) {
kfree(sw->drom);
sw->drom = NULL;
}
return ret;
}
static int tb_drom_read_n(struct tb_switch *sw, u16 offset, u8 *val,
size_t count)
{
if (tb_switch_is_usb4(sw))
return usb4_switch_drom_read(sw, offset, val, count);
return tb_eeprom_read_n(sw, offset, val, count);
}
/**
* tb_drom_read - copy drom to sw->drom and parse it
*/
int tb_drom_read(struct tb_switch *sw)
{
u16 drom_offset;
u16 size;
u32 crc;
struct tb_drom_header *header;
@ -510,18 +543,26 @@ int tb_drom_read(struct tb_switch *sw)
goto parse;
/*
* The root switch contains only a dummy drom (header only,
* no entries). Hardcode the configuration here.
* USB4 hosts may support reading DROM through router
* operations.
*/
tb_drom_read_uid_only(sw, &sw->uid);
if (tb_switch_is_usb4(sw)) {
usb4_switch_read_uid(sw, &sw->uid);
if (!usb4_copy_host_drom(sw, &size))
goto parse;
} else {
/*
* The root switch contains only a dummy drom
* (header only, no entries). Hardcode the
* configuration here.
*/
tb_drom_read_uid_only(sw, &sw->uid);
}
return 0;
}
res = tb_eeprom_get_drom_offset(sw, &drom_offset);
if (res)
return res;
res = tb_eeprom_read_n(sw, drom_offset + 14, (u8 *) &size, 2);
res = tb_drom_read_n(sw, 14, (u8 *) &size, 2);
if (res)
return res;
size &= 0x3ff;
@ -535,7 +576,7 @@ int tb_drom_read(struct tb_switch *sw)
sw->drom = kzalloc(size, GFP_KERNEL);
if (!sw->drom)
return -ENOMEM;
res = tb_eeprom_read_n(sw, drom_offset, sw->drom, size);
res = tb_drom_read_n(sw, 0, sw->drom, size);
if (res)
goto err;

@ -1271,6 +1271,9 @@ static struct pci_device_id nhi_ids[] = {
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1),
.driver_data = (kernel_ulong_t)&icl_nhi_ops },
/* Any USB4 compliant host */
{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
{ 0,}
};

@ -74,4 +74,6 @@ extern const struct tb_nhi_ops icl_nhi_ops;
#define PCI_DEVICE_ID_INTEL_ICL_NHI1 0x8a0d
#define PCI_DEVICE_ID_INTEL_ICL_NHI0 0x8a17
#define PCI_CLASS_SERIAL_USB_USB4 0x0c0340
#endif

@ -163,10 +163,12 @@ static int nvm_validate_and_write(struct tb_switch *sw)
image_size -= hdr_size;
}
if (tb_switch_is_usb4(sw))
return usb4_switch_nvm_write(sw, 0, buf, image_size);
return dma_port_flash_write(sw->dma_port, 0, buf, image_size);
}
static int nvm_authenticate_host(struct tb_switch *sw)
static int nvm_authenticate_host_dma_port(struct tb_switch *sw)
{
int ret = 0;
@ -206,7 +208,7 @@ static int nvm_authenticate_host(struct tb_switch *sw)
return ret;
}
static int nvm_authenticate_device(struct tb_switch *sw)
static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
{
int ret, retries = 10;
@ -251,6 +253,78 @@ static int nvm_authenticate_device(struct tb_switch *sw)
return -ETIMEDOUT;
}
static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
{
struct pci_dev *root_port;
/*
* During host router NVM upgrade we should not allow root port to
* go into D3cold because some root ports cannot trigger PME
* itself. To be on the safe side keep the root port in D0 during
* the whole upgrade process.
*/
root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);
if (root_port)
pm_runtime_get_noresume(&root_port->dev);
}
static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
{
struct pci_dev *root_port;
root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);
if (root_port)
pm_runtime_put(&root_port->dev);
}
static inline bool nvm_readable(struct tb_switch *sw)
{
if (tb_switch_is_usb4(sw)) {
/*
* USB4 devices must support NVM operations but it is
* optional for hosts. Therefore we query the NVM sector
* size here and if it is supported assume NVM
* operations are implemented.
*/
return usb4_switch_nvm_sector_size(sw) > 0;
}
/* Thunderbolt 2 and 3 devices support NVM through DMA port */
return !!sw->dma_port;
}
static inline bool nvm_upgradeable(struct tb_switch *sw)
{
if (sw->no_nvm_upgrade)
return false;
return nvm_readable(sw);
}
static inline int nvm_read(struct tb_switch *sw, unsigned int address,
void *buf, size_t size)
{
if (tb_switch_is_usb4(sw))
return usb4_switch_nvm_read(sw, address, buf, size);
return dma_port_flash_read(sw->dma_port, address, buf, size);
}
static int nvm_authenticate(struct tb_switch *sw)
{
int ret;
if (tb_switch_is_usb4(sw))
return usb4_switch_nvm_authenticate(sw);
if (!tb_route(sw)) {
nvm_authenticate_start_dma_port(sw);
ret = nvm_authenticate_host_dma_port(sw);
} else {
ret = nvm_authenticate_device_dma_port(sw);
}
return ret;
}
static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
size_t bytes)
{
@ -264,7 +338,7 @@ static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
goto out;
}
ret = dma_port_flash_read(sw->dma_port, offset, val, bytes);
ret = nvm_read(sw, offset, val, bytes);
mutex_unlock(&sw->tb->lock);
out:
@ -341,9 +415,21 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
u32 val;
int ret;
if (!sw->dma_port)
if (!nvm_readable(sw))
return 0;
/*
* The NVM format of non-Intel hardware is not known so
* currently restrict NVM upgrade for Intel hardware. We may
* relax this in the future when we learn other NVM formats.
*/
if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL) {
dev_info(&sw->dev,
"NVM format of vendor %#x is not known, disabling NVM upgrade\n",
sw->config.vendor_id);
return 0;
}
nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
if (!nvm)
return -ENOMEM;
@ -358,8 +444,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
if (!sw->safe_mode) {
u32 nvm_size, hdr_size;
ret = dma_port_flash_read(sw->dma_port, NVM_FLASH_SIZE, &val,
sizeof(val));
ret = nvm_read(sw, NVM_FLASH_SIZE, &val, sizeof(val));
if (ret)
goto err_ida;
@ -367,8 +452,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
nvm_size = (SZ_1M << (val & 7)) / 8;
nvm_size = (nvm_size - hdr_size) / 2;
ret = dma_port_flash_read(sw->dma_port, NVM_VERSION, &val,
sizeof(val));
ret = nvm_read(sw, NVM_VERSION, &val, sizeof(val));
if (ret)
goto err_ida;
@ -619,6 +703,24 @@ int tb_port_clear_counter(struct tb_port *port, int counter)
return tb_port_write(port, zero, TB_CFG_COUNTERS, 3 * counter, 3);
}
/**
* tb_port_unlock() - Unlock downstream port
* @port: Port to unlock
*
* Needed for USB4 but can be called for any CIO/USB4 ports. Makes the
* downstream router accessible for CM.
*/
int tb_port_unlock(struct tb_port *port)
{
if (tb_switch_is_icm(port->sw))
return 0;
if (!tb_port_is_null(port))
return -EINVAL;
if (tb_switch_is_usb4(port->sw))
return usb4_port_unlock(port);
return 0;
}
/**
* tb_init_port() - initialize a port
*
@ -650,6 +752,10 @@ static int tb_init_port(struct tb_port *port)
port->cap_phy = cap;
else
tb_port_WARN(port, "non switch port without a PHY\n");
cap = tb_port_find_cap(port, TB_PORT_CAP_USB4);
if (cap > 0)
port->cap_usb4 = cap;
} else if (port->port != 0) {
cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
if (cap > 0)
@ -936,11 +1042,46 @@ bool tb_port_is_enabled(struct tb_port *port)
case TB_TYPE_DP_HDMI_OUT:
return tb_dp_port_is_enabled(port);
case TB_TYPE_USB3_UP:
case TB_TYPE_USB3_DOWN:
return tb_usb3_port_is_enabled(port);
default:
return false;
}
}
/**
* tb_usb3_port_is_enabled() - Is the USB3 adapter port enabled
* @port: USB3 adapter port to check
*/
bool tb_usb3_port_is_enabled(struct tb_port *port)
{
u32 data;
if (tb_port_read(port, &data, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_0, 1))
return false;
return !!(data & ADP_USB3_CS_0_PE);
}
/**
* tb_usb3_port_enable() - Enable USB3 adapter port
* @port: USB3 adapter port to enable
* @enable: Enable/disable the USB3 adapter
*/
int tb_usb3_port_enable(struct tb_port *port, bool enable)
{
u32 word = enable ? (ADP_USB3_CS_0_PE | ADP_USB3_CS_0_V)
: ADP_USB3_CS_0_V;
if (!port->cap_adap)
return -ENXIO;
return tb_port_write(port, &word, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_0, 1);
}
/**
* tb_pci_port_is_enabled() - Is the PCIe adapter port enabled
* @port: PCIe port to check
@ -1088,20 +1229,38 @@ int tb_dp_port_enable(struct tb_port *port, bool enable)
/* switch utility functions */
static void tb_dump_switch(struct tb *tb, struct tb_regs_switch_header *sw)
static const char *tb_switch_generation_name(const struct tb_switch *sw)
{
tb_dbg(tb, " Switch: %x:%x (Revision: %d, TB Version: %d)\n",
sw->vendor_id, sw->device_id, sw->revision,
sw->thunderbolt_version);
tb_dbg(tb, " Max Port Number: %d\n", sw->max_port_number);
switch (sw->generation) {
case 1:
return "Thunderbolt 1";
case 2:
return "Thunderbolt 2";
case 3:
return "Thunderbolt 3";
case 4:
return "USB4";
default:
return "Unknown";
}
}
static void tb_dump_switch(const struct tb *tb, const struct tb_switch *sw)
{
const struct tb_regs_switch_header *regs = &sw->config;
tb_dbg(tb, " %s Switch: %x:%x (Revision: %d, TB Version: %d)\n",
tb_switch_generation_name(sw), regs->vendor_id, regs->device_id,
regs->revision, regs->thunderbolt_version);
tb_dbg(tb, " Max Port Number: %d\n", regs->max_port_number);
tb_dbg(tb, " Config:\n");
tb_dbg(tb,
" Upstream Port Number: %d Depth: %d Route String: %#llx Enabled: %d, PlugEventsDelay: %dms\n",
sw->upstream_port_number, sw->depth,
(((u64) sw->route_hi) << 32) | sw->route_lo,
sw->enabled, sw->plug_events_delay);
regs->upstream_port_number, regs->depth,
(((u64) regs->route_hi) << 32) | regs->route_lo,
regs->enabled, regs->plug_events_delay);
tb_dbg(tb, " unknown1: %#x unknown4: %#x\n",
sw->__unknown1, sw->__unknown4);
regs->__unknown1, regs->__unknown4);
}
/**
@ -1148,6 +1307,10 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active)
if (res)
return res;
/* Plug events are always enabled in USB4 */
if (tb_switch_is_usb4(sw))
return 0;
res = tb_sw_read(sw, &data, TB_CFG_SWITCH, sw->cap_plug_events + 1, 1);
if (res)
return res;
@ -1359,30 +1522,6 @@ static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
static void nvm_authenticate_start(struct tb_switch *sw)
{
struct pci_dev *root_port;
/*
* During host router NVM upgrade we should not allow root port to
* go into D3cold because some root ports cannot trigger PME
* itself. To be on the safe side keep the root port in D0 during
* the whole upgrade process.
*/
root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);
if (root_port)
pm_runtime_get_noresume(&root_port->dev);
}
static void nvm_authenticate_complete(struct tb_switch *sw)
{
struct pci_dev *root_port;
root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);
if (root_port)
pm_runtime_put(&root_port->dev);
}
static ssize_t nvm_authenticate_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@ -1431,17 +1570,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
goto exit_unlock;
sw->nvm->authenticating = true;
if (!tb_route(sw)) {
/*
* Keep root port from suspending as long as the
* NVM upgrade process is running.
*/
nvm_authenticate_start(sw);
ret = nvm_authenticate_host(sw);
} else {
ret = nvm_authenticate_device(sw);
}
ret = nvm_authenticate(sw);
}
exit_unlock:
@ -1556,11 +1685,11 @@ static umode_t switch_attr_is_visible(struct kobject *kobj,
return attr->mode;
return 0;
} else if (attr == &dev_attr_nvm_authenticate.attr) {
if (sw->dma_port && !sw->no_nvm_upgrade)
if (nvm_upgradeable(sw))
return attr->mode;
return 0;
} else if (attr == &dev_attr_nvm_version.attr) {
if (sw->dma_port)
if (nvm_readable(sw))
return attr->mode;
return 0;
} else if (attr == &dev_attr_boot.attr) {
@ -1672,6 +1801,9 @@ static int tb_switch_get_generation(struct tb_switch *sw)
return 3;
default:
if (tb_switch_is_usb4(sw))
return 4;
/*
* For unknown switches assume generation to be 1 to be
* on the safe side.
@ -1682,6 +1814,19 @@ static int tb_switch_get_generation(struct tb_switch *sw)
}
}
static bool tb_switch_exceeds_max_depth(const struct tb_switch *sw, int depth)
{
int max_depth;
if (tb_switch_is_usb4(sw) ||
(sw->tb->root_switch && tb_switch_is_usb4(sw->tb->root_switch)))
max_depth = USB4_SWITCH_MAX_DEPTH;
else
max_depth = TB_SWITCH_MAX_DEPTH;
return depth > max_depth;
}
/**
* tb_switch_alloc() - allocate a switch
* @tb: Pointer to the owning domain
@ -1703,10 +1848,16 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
int upstream_port;
int i, ret, depth;
/* Make sure we do not exceed maximum topology limit */
/* Unlock the downstream port so we can access the switch below */
if (route) {
struct tb_switch *parent_sw = tb_to_switch(parent);
struct tb_port *down;
down = tb_port_at(route, parent_sw);
tb_port_unlock(down);
}
depth = tb_route_length(route);
if (depth > TB_SWITCH_MAX_DEPTH)
return ERR_PTR(-EADDRNOTAVAIL);
upstream_port = tb_cfg_get_upstream_port(tb->ctl, route);
if (upstream_port < 0)
@@ -1721,8 +1872,10 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
if (ret)
goto err_free_sw_ports;
sw->generation = tb_switch_get_generation(sw);
tb_dbg(tb, "current switch config:\n");
tb_dump_switch(tb, &sw->config);
tb_dump_switch(tb, sw);
/* configure switch */
sw->config.upstream_port_number = upstream_port;
@@ -1731,6 +1884,12 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
sw->config.route_lo = lower_32_bits(route);
sw->config.enabled = 0;
/* Make sure we do not exceed maximum topology limit */
if (tb_switch_exceeds_max_depth(sw, depth)) {
ret = -EADDRNOTAVAIL;
goto err_free_sw_ports;
}
/* initialize ports */
sw->ports = kcalloc(sw->config.max_port_number + 1, sizeof(*sw->ports),
GFP_KERNEL);
@@ -1745,14 +1904,9 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
sw->ports[i].port = i;
}
sw->generation = tb_switch_get_generation(sw);
ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
if (ret < 0) {
tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n");
goto err_free_sw_ports;
}
sw->cap_plug_events = ret;
if (ret > 0)
sw->cap_plug_events = ret;
ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
if (ret > 0)
@@ -1823,7 +1977,8 @@ tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
*
* Call this function before the switch is added to the system. It will
* upload configuration to the switch and makes it available for the
* connection manager to use.
* connection manager to use. Can be called for the switch again after
* resume from low power states to re-initialize it.
*
* Return: %0 in case of success and negative errno in case of failure
*/
@@ -1834,21 +1989,50 @@ int tb_switch_configure(struct tb_switch *sw)
int ret;
route = tb_route(sw);
tb_dbg(tb, "initializing Switch at %#llx (depth: %d, up port: %d)\n",
route, tb_route_length(route), sw->config.upstream_port_number);
if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL)
tb_sw_warn(sw, "unknown switch vendor id %#x\n",
sw->config.vendor_id);
tb_dbg(tb, "%s Switch at %#llx (depth: %d, up port: %d)\n",
sw->config.enabled ? "restoring " : "initializing", route,
tb_route_length(route), sw->config.upstream_port_number);
sw->config.enabled = 1;
/* upload configuration */
ret = tb_sw_write(sw, 1 + (u32 *)&sw->config, TB_CFG_SWITCH, 1, 3);
if (ret)
return ret;
if (tb_switch_is_usb4(sw)) {
/*
* For USB4 devices, we need to program the CM version
* accordingly so that it knows to expose all the
* additional capabilities.
*/
sw->config.cmuv = USB4_VERSION_1_0;
ret = tb_lc_configure_link(sw);
/* Enumerate the switch */
ret = tb_sw_write(sw, (u32 *)&sw->config + 1, TB_CFG_SWITCH,
ROUTER_CS_1, 4);
if (ret)
return ret;
ret = usb4_switch_setup(sw);
if (ret)
return ret;
ret = usb4_switch_configure_link(sw);
} else {
if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL)
tb_sw_warn(sw, "unknown switch vendor id %#x\n",
sw->config.vendor_id);
if (!sw->cap_plug_events) {
tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n");
return -ENODEV;
}
/* Enumerate the switch */
ret = tb_sw_write(sw, (u32 *)&sw->config + 1, TB_CFG_SWITCH,
ROUTER_CS_1, 3);
if (ret)
return ret;
ret = tb_lc_configure_link(sw);
}
if (ret)
return ret;
@@ -1857,18 +2041,32 @@ int tb_switch_configure(struct tb_switch *sw)
static int tb_switch_set_uuid(struct tb_switch *sw)
{
bool uid = false;
u32 uuid[4];
int ret;
if (sw->uuid)
return 0;
/*
* The newer controllers include fused UUID as part of link
* controller specific registers
*/
ret = tb_lc_read_uuid(sw, uuid);
if (ret) {
if (tb_switch_is_usb4(sw)) {
ret = usb4_switch_read_uid(sw, &sw->uid);
if (ret)
return ret;
uid = true;
} else {
/*
* The newer controllers include fused UUID as part of
* link controller specific registers
*/
ret = tb_lc_read_uuid(sw, uuid);
if (ret) {
if (ret != -EINVAL)
return ret;
uid = true;
}
}
if (uid) {
/*
* ICM generates UUID based on UID and fills the upper
* two words with ones. This is not strictly following
@@ -1935,7 +2133,7 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
nvm_get_auth_status(sw, &status);
if (status) {
if (!tb_route(sw))
nvm_authenticate_complete(sw);
nvm_authenticate_complete_dma_port(sw);
return 0;
}
@@ -1950,7 +2148,7 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
/* Now we can allow root port to suspend again */
if (!tb_route(sw))
nvm_authenticate_complete(sw);
nvm_authenticate_complete_dma_port(sw);
if (status) {
tb_sw_info(sw, "switch flash authentication failed\n");
@@ -2004,6 +2202,8 @@ static bool tb_switch_lane_bonding_possible(struct tb_switch *sw)
if (!up->dual_link_port || !up->dual_link_port->remote)
return false;
if (tb_switch_is_usb4(sw))
return usb4_switch_lane_bonding_possible(sw);
return tb_lc_lane_bonding_possible(sw);
}
@@ -2175,6 +2375,10 @@ int tb_switch_add(struct tb_switch *sw)
ret = tb_switch_update_link_attributes(sw);
if (ret)
return ret;
ret = tb_switch_tmu_init(sw);
if (ret)
return ret;
}
ret = device_add(&sw->dev);
@@ -2240,7 +2444,11 @@ void tb_switch_remove(struct tb_switch *sw)
if (!sw->is_unplugged)
tb_plug_events_active(sw, false);
tb_lc_unconfigure_link(sw);
if (tb_switch_is_usb4(sw))
usb4_switch_unconfigure_link(sw);
else
tb_lc_unconfigure_link(sw);
tb_switch_nvm_remove(sw);
@@ -2298,7 +2506,10 @@ int tb_switch_resume(struct tb_switch *sw)
return err;
}
err = tb_drom_read_uid_only(sw, &uid);
if (tb_switch_is_usb4(sw))
err = usb4_switch_read_uid(sw, &uid);
else
err = tb_drom_read_uid_only(sw, &uid);
if (err) {
tb_sw_warn(sw, "uid read failed\n");
return err;
@@ -2311,16 +2522,7 @@ int tb_switch_resume(struct tb_switch *sw)
}
}
/* upload configuration */
err = tb_sw_write(sw, 1 + (u32 *) &sw->config, TB_CFG_SWITCH, 1, 3);
if (err)
return err;
err = tb_lc_configure_link(sw);
if (err)
return err;
err = tb_plug_events_active(sw, true);
err = tb_switch_configure(sw);
if (err)
return err;
@@ -2336,8 +2538,14 @@ int tb_switch_resume(struct tb_switch *sw)
tb_sw_set_unplugged(port->remote->sw);
else if (port->xdomain)
port->xdomain->is_unplugged = true;
} else if (tb_port_has_remote(port)) {
if (tb_switch_resume(port->remote->sw)) {
} else if (tb_port_has_remote(port) || port->xdomain) {
/*
* Always unlock the port so the downstream
* switch/domain is accessible.
*/
if (tb_port_unlock(port))
tb_port_warn(port, "failed to unlock port\n");
if (port->remote && tb_switch_resume(port->remote->sw)) {
tb_port_warn(port,
"lost during suspend, disconnecting\n");
tb_sw_set_unplugged(port->remote->sw);
@@ -2361,7 +2569,10 @@ void tb_switch_suspend(struct tb_switch *sw)
tb_switch_suspend(port->remote->sw);
}
tb_lc_set_sleep(sw);
if (tb_switch_is_usb4(sw))
usb4_switch_set_sleep(sw);
else
tb_lc_set_sleep(sw);
}
/**
@@ -2374,6 +2585,8 @@ void tb_switch_suspend(struct tb_switch *sw)
*/
bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
if (tb_switch_is_usb4(sw))
return usb4_switch_query_dp_resource(sw, in);
return tb_lc_dp_sink_query(sw, in);
}
@@ -2388,6 +2601,8 @@ bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
*/
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
if (tb_switch_is_usb4(sw))
return usb4_switch_alloc_dp_resource(sw, in);
return tb_lc_dp_sink_alloc(sw, in);
}
@@ -2401,10 +2616,16 @@ int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
*/
void tb_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
if (tb_lc_dp_sink_dealloc(sw, in)) {
int ret;
if (tb_switch_is_usb4(sw))
ret = usb4_switch_dealloc_dp_resource(sw, in);
else
ret = tb_lc_dp_sink_dealloc(sw, in);
if (ret)
tb_sw_warn(sw, "failed to de-allocate DP resource for port %d\n",
in->port);
}
}
struct tb_sw_lookup {
@@ -2517,6 +2738,24 @@ struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route)
return NULL;
}
/**
* tb_switch_find_port() - return the first port of @type on @sw or NULL
* @sw: Switch to find the port from
* @type: Port type to look for
*/
struct tb_port *tb_switch_find_port(struct tb_switch *sw,
enum tb_port_type type)
{
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (port->config.type == type)
return port;
}
return NULL;
}
void tb_switch_exit(void)
{
ida_destroy(&nvm_ida);


@@ -111,6 +111,10 @@ static void tb_discover_tunnels(struct tb_switch *sw)
tunnel = tb_tunnel_discover_pci(tb, port);
break;
case TB_TYPE_USB3_DOWN:
tunnel = tb_tunnel_discover_usb3(tb, port);
break;
default:
break;
}
@@ -158,6 +162,137 @@ static void tb_scan_xdomain(struct tb_port *port)
}
}
static int tb_enable_tmu(struct tb_switch *sw)
{
int ret;
/* If it is already enabled in correct mode, don't touch it */
if (tb_switch_tmu_is_enabled(sw))
return 0;
ret = tb_switch_tmu_disable(sw);
if (ret)
return ret;
ret = tb_switch_tmu_post_time(sw);
if (ret)
return ret;
return tb_switch_tmu_enable(sw);
}
/**
* tb_find_unused_port() - return the first inactive port on @sw
* @sw: Switch to find the port on
* @type: Port type to look for
*/
static struct tb_port *tb_find_unused_port(struct tb_switch *sw,
enum tb_port_type type)
{
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (tb_is_upstream_port(port))
continue;
if (port->config.type != type)
continue;
if (!port->cap_adap)
continue;
if (tb_port_is_enabled(port))
continue;
return port;
}
return NULL;
}
static struct tb_port *tb_find_usb3_down(struct tb_switch *sw,
const struct tb_port *port)
{
struct tb_port *down;
down = usb4_switch_map_usb3_down(sw, port);
if (down) {
if (WARN_ON(!tb_port_is_usb3_down(down)))
goto out;
if (WARN_ON(tb_usb3_port_is_enabled(down)))
goto out;
return down;
}
out:
return tb_find_unused_port(sw, TB_TYPE_USB3_DOWN);
}
static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down, *port;
struct tb_cm *tcm = tb_priv(tb);
struct tb_tunnel *tunnel;
up = tb_switch_find_port(sw, TB_TYPE_USB3_UP);
if (!up)
return 0;
/*
* Look up an available down port. Since we are chaining, it should
* be found right above this switch.
*/
port = tb_port_at(tb_route(sw), parent);
down = tb_find_usb3_down(parent, port);
if (!down)
return 0;
if (tb_route(parent)) {
struct tb_port *parent_up;
/*
* Check first that the parent switch has its upstream USB3
* port enabled. Otherwise the chain is not complete and
* there is no point setting up a new tunnel.
*/
parent_up = tb_switch_find_port(parent, TB_TYPE_USB3_UP);
if (!parent_up || !tb_port_is_enabled(parent_up))
return 0;
}
tunnel = tb_tunnel_alloc_usb3(tb, up, down);
if (!tunnel)
return -ENOMEM;
if (tb_tunnel_activate(tunnel)) {
tb_port_info(up,
"USB3 tunnel activation failed, aborting\n");
tb_tunnel_free(tunnel);
return -EIO;
}
list_add_tail(&tunnel->list, &tcm->tunnel_list);
return 0;
}
static int tb_create_usb3_tunnels(struct tb_switch *sw)
{
struct tb_port *port;
int ret;
if (tb_route(sw)) {
ret = tb_tunnel_usb3(sw->tb, sw);
if (ret)
return ret;
}
tb_switch_for_each_port(sw, port) {
if (!tb_port_has_remote(port))
continue;
ret = tb_create_usb3_tunnels(port->remote->sw);
if (ret)
return ret;
}
return 0;
}
static void tb_scan_port(struct tb_port *port);
/**
@@ -257,6 +392,18 @@ static void tb_scan_port(struct tb_port *port)
if (tb_switch_lane_bonding_enable(sw))
tb_sw_warn(sw, "failed to enable lane bonding\n");
if (tb_enable_tmu(sw))
tb_sw_warn(sw, "failed to enable TMU\n");
/*
* Create USB 3.x tunnels only when the switch is plugged into the
* domain. This is because we scan the domain also during discovery
* and want to discover existing USB 3.x tunnels before we create
* any new ones.
*/
if (tcm->hotplug_active && tb_tunnel_usb3(sw->tb, sw))
tb_sw_warn(sw, "USB3 tunnel creation failed\n");
tb_scan_switch(sw);
}
@@ -338,57 +485,18 @@ static void tb_free_unplugged_children(struct tb_switch *sw)
}
}
/**
* tb_find_port() - return the first port of @type on @sw or NULL
* @sw: Switch to find the port from
* @type: Port type to look for
*/
static struct tb_port *tb_find_port(struct tb_switch *sw,
enum tb_port_type type)
{
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (port->config.type == type)
return port;
}
return NULL;
}
/**
* tb_find_unused_port() - return the first inactive port on @sw
* @sw: Switch to find the port on
* @type: Port type to look for
*/
static struct tb_port *tb_find_unused_port(struct tb_switch *sw,
enum tb_port_type type)
{
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (tb_is_upstream_port(port))
continue;
if (port->config.type != type)
continue;
if (port->cap_adap)
continue;
if (tb_port_is_enabled(port))
continue;
return port;
}
return NULL;
}
static struct tb_port *tb_find_pcie_down(struct tb_switch *sw,
const struct tb_port *port)
{
struct tb_port *down = NULL;
/*
* To keep plugging devices consistently in the same PCIe
* hierarchy, do mapping here for root switch downstream PCIe
* ports.
* hierarchy, do mapping here for switch downstream PCIe ports.
*/
if (!tb_route(sw)) {
if (tb_switch_is_usb4(sw)) {
down = usb4_switch_map_pcie_down(sw, port);
} else if (!tb_route(sw)) {
int phy_port = tb_phy_port_from_link(port->port);
int index;
@@ -409,12 +517,17 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw,
/* Validate the hard-coding */
if (WARN_ON(index > sw->config.max_port_number))
goto out;
if (WARN_ON(!tb_port_is_pcie_down(&sw->ports[index])))
down = &sw->ports[index];
}
if (down) {
if (WARN_ON(!tb_port_is_pcie_down(down)))
goto out;
if (WARN_ON(tb_pci_port_is_enabled(&sw->ports[index])))
if (WARN_ON(tb_pci_port_is_enabled(down)))
goto out;
return &sw->ports[index];
return down;
}
out:
@@ -586,7 +699,7 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
struct tb_switch *parent_sw;
struct tb_tunnel *tunnel;
up = tb_find_port(sw, TB_TYPE_PCIE_UP);
up = tb_switch_find_port(sw, TB_TYPE_PCIE_UP);
if (!up)
return 0;
@@ -624,7 +737,7 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
sw = tb_to_switch(xd->dev.parent);
dst_port = tb_port_at(xd->route, sw);
nhi_port = tb_find_port(tb->root_switch, TB_TYPE_NHI);
nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI);
mutex_lock(&tb->lock);
tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, xd->transmit_ring,
@@ -719,6 +832,7 @@ static void tb_handle_hotplug(struct work_struct *work)
tb_sw_set_unplugged(port->remote->sw);
tb_free_invalid_tunnels(tb);
tb_remove_dp_resources(port->remote->sw);
tb_switch_tmu_disable(port->remote->sw);
tb_switch_lane_bonding_disable(port->remote->sw);
tb_switch_remove(port->remote->sw);
port->remote = NULL;
@@ -786,8 +900,7 @@ static void tb_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
route = tb_cfg_get_route(&pkg->header);
if (tb_cfg_error(tb->ctl, route, pkg->port,
TB_CFG_ERROR_ACK_PLUG_EVENT)) {
if (tb_cfg_ack_plug(tb->ctl, route, pkg->port, pkg->unplug)) {
tb_warn(tb, "could not ack plug event on %llx:%x\n", route,
pkg->port);
}
@@ -866,10 +979,17 @@ static int tb_start(struct tb *tb)
return ret;
}
/* Enable TMU if it is off */
tb_switch_tmu_enable(tb->root_switch);
/* Full scan to discover devices added before the driver was loaded. */
tb_scan_switch(tb->root_switch);
/* Find out tunnels created by the boot firmware */
tb_discover_tunnels(tb->root_switch);
/*
* If the boot firmware did not create USB 3.x tunnels create them
* now for the whole topology.
*/
tb_create_usb3_tunnels(tb->root_switch);
/* Add DP IN resources for the root switch */
tb_add_dp_resources(tb->root_switch);
/* Make the discovered switches available to the userspace */
@@ -897,6 +1017,9 @@ static void tb_restore_children(struct tb_switch *sw)
{
struct tb_port *port;
if (tb_enable_tmu(sw))
tb_sw_warn(sw, "failed to restore TMU configuration\n");
tb_switch_for_each_port(sw, port) {
if (!tb_port_has_remote(port))
continue;


@@ -44,6 +44,39 @@ struct tb_switch_nvm {
#define TB_SWITCH_KEY_SIZE 32
#define TB_SWITCH_MAX_DEPTH 6
#define USB4_SWITCH_MAX_DEPTH 5
/**
* enum tb_switch_tmu_rate - TMU refresh rate
* @TB_SWITCH_TMU_RATE_OFF: %0 (Disable Time Sync handshake)
* @TB_SWITCH_TMU_RATE_HIFI: %16 us time interval between successive
* transmission of the Delay Request TSNOS
* (Time Sync Notification Ordered Set) on a Link
* @TB_SWITCH_TMU_RATE_NORMAL: %1 ms time interval between successive
* transmission of the Delay Request TSNOS on
* a Link
*/
enum tb_switch_tmu_rate {
TB_SWITCH_TMU_RATE_OFF = 0,
TB_SWITCH_TMU_RATE_HIFI = 16,
TB_SWITCH_TMU_RATE_NORMAL = 1000,
};
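The enum values above double as the raw TS packet interval programmed into the TMU_RTR_CS_3 register (see tb_switch_tmu_rate_write() in tmu.c below, which writes the rate straight into the interval field), so no translation table is needed. A minimal standalone sketch of that packing, with illustrative names not taken from the kernel headers:

```c
#include <stdint.h>

enum tmu_rate { TMU_RATE_OFF = 0, TMU_RATE_HIFI = 16, TMU_RATE_NORMAL = 1000 };

/* Pack the rate into bits 31:16 of a TMU_RTR_CS_3-style dword while
 * preserving the low 16 bits (the local time ns field). */
static uint32_t tmu_cs3_set_interval(uint32_t cs3, enum tmu_rate rate)
{
	cs3 &= ~0xffff0000u;         /* TS_PACKET_INTERVAL_MASK */
	cs3 |= (uint32_t)rate << 16; /* TS_PACKET_INTERVAL_SHIFT */
	return cs3;
}
```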
/**
* struct tb_switch_tmu - Structure holding switch TMU configuration
* @cap: Offset to the TMU capability (%0 if not found)
* @has_ucap: Does the switch support uni-directional mode
* @rate: TMU refresh rate related to upstream switch. In case of root
* switch this holds the domain rate.
* @unidirectional: Is the TMU in uni-directional or bi-directional mode
* related to upstream switch. Not applicable to the root switch.
*/
struct tb_switch_tmu {
int cap;
bool has_ucap;
enum tb_switch_tmu_rate rate;
bool unidirectional;
};
/**
* struct tb_switch - a thunderbolt switch
@@ -54,6 +87,7 @@ struct tb_switch_nvm {
* mailbox this will hold the pointer to that (%NULL
* otherwise). If set it also means the switch has
* upgradeable NVM.
* @tmu: The switch TMU configuration
* @tb: Pointer to the domain the switch belongs to
* @uid: Unique ID of the switch
* @uuid: UUID of the switch (or %NULL if not supported)
@@ -92,6 +126,7 @@ struct tb_switch {
struct tb_regs_switch_header config;
struct tb_port *ports;
struct tb_dma_port *dma_port;
struct tb_switch_tmu tmu;
struct tb *tb;
u64 uid;
uuid_t *uuid;
@@ -128,7 +163,9 @@ struct tb_switch {
* @remote: Remote port (%NULL if not connected)
* @xdomain: Remote host (%NULL if not connected)
* @cap_phy: Offset, zero if not found
* @cap_tmu: Offset of the adapter specific TMU capability (%0 if not present)
* @cap_adap: Offset of the adapter specific capability (%0 if not present)
* @cap_usb4: Offset to the USB4 port capability (%0 if not present)
* @port: Port number on switch
* @disabled: Disabled by eeprom
* @bonded: true if the port is bonded (two lanes combined as one)
@@ -145,7 +182,9 @@ struct tb_port {
struct tb_port *remote;
struct tb_xdomain *xdomain;
int cap_phy;
int cap_tmu;
int cap_adap;
int cap_usb4;
u8 port;
bool disabled;
bool bonded;
@@ -393,6 +432,16 @@ static inline bool tb_port_is_dpout(const struct tb_port *port)
return port && port->config.type == TB_TYPE_DP_HDMI_OUT;
}
static inline bool tb_port_is_usb3_down(const struct tb_port *port)
{
return port && port->config.type == TB_TYPE_USB3_DOWN;
}
static inline bool tb_port_is_usb3_up(const struct tb_port *port)
{
return port && port->config.type == TB_TYPE_USB3_UP;
}
static inline int tb_sw_read(struct tb_switch *sw, void *buffer,
enum tb_cfg_space space, u32 offset, u32 length)
{
@@ -533,6 +582,8 @@ void tb_switch_suspend(struct tb_switch *sw);
int tb_switch_resume(struct tb_switch *sw);
int tb_switch_reset(struct tb *tb, u64 route);
void tb_sw_set_unplugged(struct tb_switch *sw);
struct tb_port *tb_switch_find_port(struct tb_switch *sw,
enum tb_port_type type);
struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link,
u8 depth);
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid);
@@ -635,6 +686,17 @@ static inline bool tb_switch_is_titan_ridge(const struct tb_switch *sw)
}
}
/**
* tb_switch_is_usb4() - Is the switch USB4 compliant
* @sw: Switch to check
*
* Returns true if the @sw is USB4 compliant router, false otherwise.
*/
static inline bool tb_switch_is_usb4(const struct tb_switch *sw)
{
return sw->config.thunderbolt_version == USB4_VERSION_1_0;
}
/**
* tb_switch_is_icm() - Is the switch handled by ICM firmware
* @sw: Switch to check
@@ -656,10 +718,22 @@ bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
void tb_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
int tb_switch_tmu_init(struct tb_switch *sw);
int tb_switch_tmu_post_time(struct tb_switch *sw);
int tb_switch_tmu_disable(struct tb_switch *sw);
int tb_switch_tmu_enable(struct tb_switch *sw);
static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
{
return sw->tmu.rate == TB_SWITCH_TMU_RATE_HIFI &&
!sw->tmu.unidirectional;
}
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_set_initial_credits(struct tb_port *port, u32 credits);
int tb_port_clear_counter(struct tb_port *port, int counter);
int tb_port_unlock(struct tb_port *port);
int tb_port_alloc_in_hopid(struct tb_port *port, int hopid, int max_hopid);
void tb_port_release_in_hopid(struct tb_port *port, int hopid);
int tb_port_alloc_out_hopid(struct tb_port *port, int hopid, int max_hopid);
@@ -668,9 +742,13 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
struct tb_port *prev);
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
bool tb_port_is_enabled(struct tb_port *port);
bool tb_usb3_port_is_enabled(struct tb_port *port);
int tb_usb3_port_enable(struct tb_port *port, bool enable);
bool tb_pci_port_is_enabled(struct tb_port *port);
int tb_pci_port_enable(struct tb_port *port, bool enable);
@@ -734,4 +812,27 @@ void tb_xdomain_remove(struct tb_xdomain *xd);
struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
u8 depth);
int usb4_switch_setup(struct tb_switch *sw);
int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size);
int usb4_switch_configure_link(struct tb_switch *sw);
void usb4_switch_unconfigure_link(struct tb_switch *sw);
bool usb4_switch_lane_bonding_possible(struct tb_switch *sw);
int usb4_switch_set_sleep(struct tb_switch *sw);
int usb4_switch_nvm_sector_size(struct tb_switch *sw);
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size);
int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
const void *buf, size_t size);
int usb4_switch_nvm_authenticate(struct tb_switch *sw);
bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
const struct tb_port *port);
struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
const struct tb_port *port);
int usb4_port_unlock(struct tb_port *port);
#endif


@@ -67,9 +67,13 @@ struct cfg_error_pkg {
u32 zero1:4;
u32 port:6;
u32 zero2:2; /* Both should be zero, still they are different fields. */
u32 zero3:16;
u32 zero3:14;
u32 pg:2;
} __packed;
#define TB_CFG_ERROR_PG_HOT_PLUG 0x2
#define TB_CFG_ERROR_PG_HOT_UNPLUG 0x3
/* TB_CFG_PKG_EVENT */
struct cfg_event_pkg {
struct tb_cfg_header header;


@@ -26,6 +26,7 @@
#define TB_MAX_CONFIG_RW_LENGTH 60
enum tb_switch_cap {
TB_SWITCH_CAP_TMU = 0x03,
TB_SWITCH_CAP_VSE = 0x05,
};
@@ -41,6 +42,7 @@ enum tb_port_cap {
TB_PORT_CAP_TIME1 = 0x03,
TB_PORT_CAP_ADAP = 0x04,
TB_PORT_CAP_VSE = 0x05,
TB_PORT_CAP_USB4 = 0x06,
};
enum tb_port_state {
@@ -164,10 +166,52 @@ struct tb_regs_switch_header {
* milliseconds. Writing 0x00 is interpreted
* as 255ms.
*/
u32 __unknown4:16;
u32 cmuv:8;
u32 __unknown4:8;
u32 thunderbolt_version:8;
} __packed;
/* USB4 version 1.0 */
#define USB4_VERSION_1_0 0x20
#define ROUTER_CS_1 0x01
#define ROUTER_CS_4 0x04
#define ROUTER_CS_5 0x05
#define ROUTER_CS_5_SLP BIT(0)
#define ROUTER_CS_5_C3S BIT(23)
#define ROUTER_CS_5_PTO BIT(24)
#define ROUTER_CS_5_UTO BIT(25)
#define ROUTER_CS_5_HCO BIT(26)
#define ROUTER_CS_5_CV BIT(31)
#define ROUTER_CS_6 0x06
#define ROUTER_CS_6_SLPR BIT(0)
#define ROUTER_CS_6_TNS BIT(1)
#define ROUTER_CS_6_HCI BIT(18)
#define ROUTER_CS_6_CR BIT(25)
#define ROUTER_CS_7 0x07
#define ROUTER_CS_9 0x09
#define ROUTER_CS_25 0x19
#define ROUTER_CS_26 0x1a
#define ROUTER_CS_26_STATUS_MASK GENMASK(29, 24)
#define ROUTER_CS_26_STATUS_SHIFT 24
#define ROUTER_CS_26_ONS BIT(30)
#define ROUTER_CS_26_OV BIT(31)
/* Router TMU configuration */
#define TMU_RTR_CS_0 0x00
#define TMU_RTR_CS_0_TD BIT(27)
#define TMU_RTR_CS_0_UCAP BIT(30)
#define TMU_RTR_CS_1 0x01
#define TMU_RTR_CS_1_LOCAL_TIME_NS_MASK GENMASK(31, 16)
#define TMU_RTR_CS_1_LOCAL_TIME_NS_SHIFT 16
#define TMU_RTR_CS_2 0x02
#define TMU_RTR_CS_3 0x03
#define TMU_RTR_CS_3_LOCAL_TIME_NS_MASK GENMASK(15, 0)
#define TMU_RTR_CS_3_TS_PACKET_INTERVAL_MASK GENMASK(31, 16)
#define TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT 16
#define TMU_RTR_CS_22 0x16
#define TMU_RTR_CS_24 0x18
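Fields like ROUTER_CS_26_STATUS_MASK above use the kernel's GENMASK(hi, lo)/shift convention: build a contiguous bit mask, AND it with the register value, then shift down. A standalone sketch of that pattern (helper names are illustrative, not from the kernel headers):

```c
#include <stdint.h>

/* Build a contiguous 32-bit mask covering bits hi..lo, GENMASK() style. */
static inline uint32_t genmask32(unsigned int hi, unsigned int lo)
{
	return (~0u >> (31 - hi)) & (~0u << lo);
}

/* Extract a field such as the ROUTER_CS_26 status (bits 29:24). */
static inline uint32_t field_get32(uint32_t reg, unsigned int hi, unsigned int lo)
{
	return (reg & genmask32(hi, lo)) >> lo;
}
```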
enum tb_port_type {
TB_TYPE_INACTIVE = 0x000000,
TB_TYPE_PORT = 0x000001,
@@ -178,7 +222,8 @@ enum tb_port_type {
TB_TYPE_DP_HDMI_OUT = 0x0e0102,
TB_TYPE_PCIE_DOWN = 0x100101,
TB_TYPE_PCIE_UP = 0x100102,
/* TB_TYPE_USB = 0x200000, lower order bits are not known */
TB_TYPE_USB3_DOWN = 0x200101,
TB_TYPE_USB3_UP = 0x200102,
};
/* Present on every port in TB_CF_PORT at address zero. */
@@ -216,10 +261,15 @@ struct tb_regs_port_header {
#define ADP_CS_4_NFC_BUFFERS_MASK GENMASK(9, 0)
#define ADP_CS_4_TOTAL_BUFFERS_MASK GENMASK(29, 20)
#define ADP_CS_4_TOTAL_BUFFERS_SHIFT 20
#define ADP_CS_4_LCK BIT(31)
#define ADP_CS_5 0x05
#define ADP_CS_5_LCA_MASK GENMASK(28, 22)
#define ADP_CS_5_LCA_SHIFT 22
/* TMU adapter registers */
#define TMU_ADP_CS_3 0x03
#define TMU_ADP_CS_3_UDM BIT(29)
/* Lane adapter registers */
#define LANE_ADP_CS_0 0x00
#define LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK GENMASK(25, 20)
@@ -237,6 +287,12 @@ struct tb_regs_port_header {
#define LANE_ADP_CS_1_CURRENT_WIDTH_MASK GENMASK(25, 20)
#define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT 20
/* USB4 port registers */
#define PORT_CS_18 0x12
#define PORT_CS_18_BE BIT(8)
#define PORT_CS_19 0x13
#define PORT_CS_19_PC BIT(3)
/* Display Port adapter registers */
#define ADP_DP_CS_0 0x00
#define ADP_DP_CS_0_VIDEO_HOPID_MASK GENMASK(26, 16)
@@ -277,6 +333,11 @@ struct tb_regs_port_header {
#define ADP_PCIE_CS_0 0x00
#define ADP_PCIE_CS_0_PE BIT(31)
/* USB adapter registers */
#define ADP_USB3_CS_0 0x00
#define ADP_USB3_CS_0_V BIT(30)
#define ADP_USB3_CS_0_PE BIT(31)
/* Hop register from TB_CFG_HOPS. 8 byte per entry. */
struct tb_regs_hop {
/* DWORD 0 */

drivers/thunderbolt/tmu.c (new file, 383 lines)
@@ -0,0 +1,383 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Thunderbolt Time Management Unit (TMU) support
*
* Copyright (C) 2019, Intel Corporation
* Authors: Mika Westerberg <mika.westerberg@linux.intel.com>
* Rajmohan Mani <rajmohan.mani@intel.com>
*/
#include <linux/delay.h>
#include "tb.h"
static const char *tb_switch_tmu_mode_name(const struct tb_switch *sw)
{
bool root_switch = !tb_route(sw);
switch (sw->tmu.rate) {
case TB_SWITCH_TMU_RATE_OFF:
return "off";
case TB_SWITCH_TMU_RATE_HIFI:
/* Root switch does not have upstream directionality */
if (root_switch)
return "HiFi";
if (sw->tmu.unidirectional)
return "uni-directional, HiFi";
return "bi-directional, HiFi";
case TB_SWITCH_TMU_RATE_NORMAL:
if (root_switch)
return "normal";
return "uni-directional, normal";
default:
return "unknown";
}
}
static bool tb_switch_tmu_ucap_supported(struct tb_switch *sw)
{
int ret;
u32 val;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_0, 1);
if (ret)
return false;
return !!(val & TMU_RTR_CS_0_UCAP);
}
static int tb_switch_tmu_rate_read(struct tb_switch *sw)
{
int ret;
u32 val;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_3, 1);
if (ret)
return ret;
val >>= TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT;
return val;
}
static int tb_switch_tmu_rate_write(struct tb_switch *sw, int rate)
{
int ret;
u32 val;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_3, 1);
if (ret)
return ret;
val &= ~TMU_RTR_CS_3_TS_PACKET_INTERVAL_MASK;
val |= rate << TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT;
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_3, 1);
}
static int tb_port_tmu_write(struct tb_port *port, u8 offset, u32 mask,
u32 value)
{
u32 data;
int ret;
ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_tmu + offset, 1);
if (ret)
return ret;
data &= ~mask;
data |= value;
return tb_port_write(port, &data, TB_CFG_PORT,
port->cap_tmu + offset, 1);
}
static int tb_port_tmu_set_unidirectional(struct tb_port *port,
bool unidirectional)
{
u32 val;
if (!port->sw->tmu.has_ucap)
return 0;
val = unidirectional ? TMU_ADP_CS_3_UDM : 0;
return tb_port_tmu_write(port, TMU_ADP_CS_3, TMU_ADP_CS_3_UDM, val);
}
static inline int tb_port_tmu_unidirectional_disable(struct tb_port *port)
{
return tb_port_tmu_set_unidirectional(port, false);
}
static bool tb_port_tmu_is_unidirectional(struct tb_port *port)
{
int ret;
u32 val;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_tmu + TMU_ADP_CS_3, 1);
if (ret)
return false;
return val & TMU_ADP_CS_3_UDM;
}
static int tb_switch_tmu_set_time_disruption(struct tb_switch *sw, bool set)
{
int ret;
u32 val;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_0, 1);
if (ret)
return ret;
if (set)
val |= TMU_RTR_CS_0_TD;
else
val &= ~TMU_RTR_CS_0_TD;
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_0, 1);
}
/**
* tb_switch_tmu_init() - Initialize switch TMU structures
* @sw: Switch to initialize
*
* This function must be called before other TMU related functions to
* make sure the internal structures are filled in correctly. Does not
* change any hardware configuration.
*/
int tb_switch_tmu_init(struct tb_switch *sw)
{
struct tb_port *port;
int ret;
if (tb_switch_is_icm(sw))
return 0;
ret = tb_switch_find_cap(sw, TB_SWITCH_CAP_TMU);
if (ret > 0)
sw->tmu.cap = ret;
tb_switch_for_each_port(sw, port) {
int cap;
cap = tb_port_find_cap(port, TB_PORT_CAP_TIME1);
if (cap > 0)
port->cap_tmu = cap;
}
ret = tb_switch_tmu_rate_read(sw);
if (ret < 0)
return ret;
sw->tmu.rate = ret;
sw->tmu.has_ucap = tb_switch_tmu_ucap_supported(sw);
if (sw->tmu.has_ucap) {
tb_sw_dbg(sw, "TMU: supports uni-directional mode\n");
if (tb_route(sw)) {
struct tb_port *up = tb_upstream_port(sw);
sw->tmu.unidirectional =
tb_port_tmu_is_unidirectional(up);
}
} else {
sw->tmu.unidirectional = false;
}
tb_sw_dbg(sw, "TMU: current mode: %s\n", tb_switch_tmu_mode_name(sw));
return 0;
}
/**
* tb_switch_tmu_post_time() - Update switch local time
* @sw: Switch whose time to update
*
* Updates switch local time using time posting procedure.
*/
int tb_switch_tmu_post_time(struct tb_switch *sw)
{
unsigned int post_local_time_offset, post_time_offset;
struct tb_switch *root_switch = sw->tb->root_switch;
u64 hi, mid, lo, local_time, post_time;
int i, ret, retries = 100;
u32 gm_local_time[3];
if (!tb_route(sw))
return 0;
if (!tb_switch_is_usb4(sw))
return 0;
/* Need to be able to read the grand master time */
if (!root_switch->tmu.cap)
return 0;
ret = tb_sw_read(root_switch, gm_local_time, TB_CFG_SWITCH,
root_switch->tmu.cap + TMU_RTR_CS_1,
ARRAY_SIZE(gm_local_time));
if (ret)
return ret;
for (i = 0; i < ARRAY_SIZE(gm_local_time); i++)
tb_sw_dbg(root_switch, "local_time[%d]=0x%08x\n", i,
gm_local_time[i]);
/* Convert to nanoseconds (drop fractional part) */
hi = gm_local_time[2] & TMU_RTR_CS_3_LOCAL_TIME_NS_MASK;
mid = gm_local_time[1];
lo = (gm_local_time[0] & TMU_RTR_CS_1_LOCAL_TIME_NS_MASK) >>
TMU_RTR_CS_1_LOCAL_TIME_NS_SHIFT;
local_time = hi << 48 | mid << 16 | lo;
/* Tell the switch that time sync is disrupted for a while */
ret = tb_switch_tmu_set_time_disruption(sw, true);
if (ret)
return ret;
post_local_time_offset = sw->tmu.cap + TMU_RTR_CS_22;
post_time_offset = sw->tmu.cap + TMU_RTR_CS_24;
/*
* Write the Grandmaster time to the Post Local Time registers
* of the new switch.
*/
ret = tb_sw_write(sw, &local_time, TB_CFG_SWITCH,
post_local_time_offset, 2);
if (ret)
goto out;
/*
* Have the new switch update its local time (by writing 1 to
* the post_time registers) and wait for the completion of the
* same (post_time register becomes 0). This means the time has
* been converged properly.
*/
post_time = 1;
ret = tb_sw_write(sw, &post_time, TB_CFG_SWITCH, post_time_offset, 2);
if (ret)
goto out;
do {
usleep_range(5, 10);
ret = tb_sw_read(sw, &post_time, TB_CFG_SWITCH,
post_time_offset, 2);
if (ret)
goto out;
} while (--retries && post_time);
if (!retries) {
ret = -ETIMEDOUT;
goto out;
}
tb_sw_dbg(sw, "TMU: updated local time to %#llx\n", local_time);
out:
tb_switch_tmu_set_time_disruption(sw, false);
return ret;
}
/**
* tb_switch_tmu_disable() - Disable TMU of a switch
* @sw: Switch whose TMU to disable
*
 * Turns off TMU of @sw if it is enabled. If it is not enabled, does nothing.
*/
int tb_switch_tmu_disable(struct tb_switch *sw)
{
int ret;
if (!tb_switch_is_usb4(sw))
return 0;
/* Already disabled? */
if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF)
return 0;
if (sw->tmu.unidirectional) {
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
/* The switch may be unplugged so ignore any errors */
tb_port_tmu_unidirectional_disable(up);
ret = tb_port_tmu_unidirectional_disable(down);
if (ret)
return ret;
}
tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
sw->tmu.unidirectional = false;
sw->tmu.rate = TB_SWITCH_TMU_RATE_OFF;
tb_sw_dbg(sw, "TMU: disabled\n");
return 0;
}
/**
* tb_switch_tmu_enable() - Enable TMU on a switch
* @sw: Switch whose TMU to enable
*
* Enables TMU of a switch to be in bi-directional, HiFi mode. In this mode
* all tunneling should work.
*/
int tb_switch_tmu_enable(struct tb_switch *sw)
{
int ret;
if (!tb_switch_is_usb4(sw))
return 0;
if (tb_switch_tmu_is_enabled(sw))
return 0;
ret = tb_switch_tmu_set_time_disruption(sw, true);
if (ret)
return ret;
/* Change mode to bi-directional */
if (tb_route(sw) && sw->tmu.unidirectional) {
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_port_tmu_unidirectional_disable(down);
if (ret)
return ret;
ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI);
if (ret)
return ret;
ret = tb_port_tmu_unidirectional_disable(up);
if (ret)
return ret;
} else {
ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI);
if (ret)
return ret;
}
sw->tmu.unidirectional = false;
sw->tmu.rate = TB_SWITCH_TMU_RATE_HIFI;
tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw));
return tb_switch_tmu_set_time_disruption(sw, false);
}


@@ -19,6 +19,12 @@
#define TB_PCI_PATH_DOWN 0
#define TB_PCI_PATH_UP 1
/* USB3 adapters always use HopID 8 for both directions */
#define TB_USB3_HOPID 8
#define TB_USB3_PATH_DOWN 0
#define TB_USB3_PATH_UP 1
/* DP adapters use HopID 8 for AUX and 9 for Video */
#define TB_DP_AUX_TX_HOPID 8
#define TB_DP_AUX_RX_HOPID 8
@@ -31,7 +37,7 @@
#define TB_DMA_PATH_OUT 0
#define TB_DMA_PATH_IN 1
static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
do { \
@@ -243,6 +249,12 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
return tunnel;
}
static bool tb_dp_is_usb4(const struct tb_switch *sw)
{
/* Titan Ridge DP adapters need the same treatment as USB4 */
return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
}
static int tb_dp_cm_handshake(struct tb_port *in, struct tb_port *out)
{
int timeout = 10;
@@ -250,8 +262,7 @@ static int tb_dp_cm_handshake(struct tb_port *in, struct tb_port *out)
int ret;
/* Both ends need to support this */
if (!tb_dp_is_usb4(in->sw) || !tb_dp_is_usb4(out->sw))
return 0;
ret = tb_port_read(out, &val, TB_CFG_PORT,
@@ -531,7 +542,7 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
u32 val, rate = 0, lanes = 0;
int ret;
if (tb_dp_is_usb4(sw)) {
int timeout = 10;
/*
@@ -843,6 +854,156 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
return tunnel;
}
static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate)
{
int res;
res = tb_usb3_port_enable(tunnel->src_port, activate);
if (res)
return res;
if (tb_port_is_usb3_up(tunnel->dst_port))
return tb_usb3_port_enable(tunnel->dst_port, activate);
return 0;
}
static void tb_usb3_init_path(struct tb_path *path)
{
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_fc_enable = TB_PATH_ALL;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 3;
path->weight = 3;
path->drop_packages = 0;
path->nfc_credits = 0;
path->hops[0].initial_credits = 7;
path->hops[1].initial_credits =
tb_initial_credits(path->hops[1].in_port->sw);
}
/**
* tb_tunnel_discover_usb3() - Discover existing USB3 tunnels
* @tb: Pointer to the domain structure
* @down: USB3 downstream adapter
*
* If @down adapter is active, follows the tunnel to the USB3 upstream
* adapter and back. Returns the discovered tunnel or %NULL if there was
* no tunnel.
*/
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
{
struct tb_tunnel *tunnel;
struct tb_path *path;
if (!tb_usb3_port_is_enabled(down))
return NULL;
tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_USB3);
if (!tunnel)
return NULL;
tunnel->activate = tb_usb3_activate;
tunnel->src_port = down;
/*
* Discover both paths even if they are not complete. We will
* clean them up by calling tb_tunnel_deactivate() below in that
* case.
*/
path = tb_path_discover(down, TB_USB3_HOPID, NULL, -1,
&tunnel->dst_port, "USB3 Up");
if (!path) {
/* Just disable the downstream port */
tb_usb3_port_enable(down, false);
goto err_free;
}
tunnel->paths[TB_USB3_PATH_UP] = path;
tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_UP]);
path = tb_path_discover(tunnel->dst_port, -1, down, TB_USB3_HOPID, NULL,
"USB3 Down");
if (!path)
goto err_deactivate;
tunnel->paths[TB_USB3_PATH_DOWN] = path;
tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_DOWN]);
/* Validate that the tunnel is complete */
if (!tb_port_is_usb3_up(tunnel->dst_port)) {
tb_port_warn(tunnel->dst_port,
"path does not end on a USB3 adapter, cleaning up\n");
goto err_deactivate;
}
if (down != tunnel->src_port) {
tb_tunnel_warn(tunnel, "path is not complete, cleaning up\n");
goto err_deactivate;
}
if (!tb_usb3_port_is_enabled(tunnel->dst_port)) {
tb_tunnel_warn(tunnel,
"tunnel is not fully activated, cleaning up\n");
goto err_deactivate;
}
tb_tunnel_dbg(tunnel, "discovered\n");
return tunnel;
err_deactivate:
tb_tunnel_deactivate(tunnel);
err_free:
tb_tunnel_free(tunnel);
return NULL;
}
/**
* tb_tunnel_alloc_usb3() - allocate a USB3 tunnel
* @tb: Pointer to the domain structure
* @up: USB3 upstream adapter port
* @down: USB3 downstream adapter port
*
 * Allocate a USB3 tunnel. The ports must be of type %TB_TYPE_USB3_UP and
 * %TB_TYPE_USB3_DOWN.
*
* Return: Returns a tb_tunnel on success or %NULL on failure.
*/
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
struct tb_port *down)
{
struct tb_tunnel *tunnel;
struct tb_path *path;
tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_USB3);
if (!tunnel)
return NULL;
tunnel->activate = tb_usb3_activate;
tunnel->src_port = down;
tunnel->dst_port = up;
path = tb_path_alloc(tb, down, TB_USB3_HOPID, up, TB_USB3_HOPID, 0,
"USB3 Down");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_usb3_init_path(path);
tunnel->paths[TB_USB3_PATH_DOWN] = path;
path = tb_path_alloc(tb, up, TB_USB3_HOPID, down, TB_USB3_HOPID, 0,
"USB3 Up");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_usb3_init_path(path);
tunnel->paths[TB_USB3_PATH_UP] = path;
return tunnel;
}
/**
* tb_tunnel_free() - free a tunnel
* @tunnel: Tunnel to be freed


@@ -15,6 +15,7 @@ enum tb_tunnel_type {
TB_TUNNEL_PCI,
TB_TUNNEL_DP,
TB_TUNNEL_DMA,
TB_TUNNEL_USB3,
};
/**
@@ -57,6 +58,9 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_port *dst, int transmit_ring,
int transmit_path, int receive_ring,
int receive_path);
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down);
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
struct tb_port *down);
void tb_tunnel_free(struct tb_tunnel *tunnel);
int tb_tunnel_activate(struct tb_tunnel *tunnel);
@@ -82,5 +86,10 @@ static inline bool tb_tunnel_is_dma(const struct tb_tunnel *tunnel)
return tunnel->type == TB_TUNNEL_DMA;
}
static inline bool tb_tunnel_is_usb3(const struct tb_tunnel *tunnel)
{
return tunnel->type == TB_TUNNEL_USB3;
}
#endif

drivers/thunderbolt/usb4.c (new file)

@@ -0,0 +1,764 @@
// SPDX-License-Identifier: GPL-2.0
/*
* USB4 specific functionality
*
* Copyright (C) 2019, Intel Corporation
* Authors: Mika Westerberg <mika.westerberg@linux.intel.com>
* Rajmohan Mani <rajmohan.mani@intel.com>
*/
#include <linux/delay.h>
#include <linux/ktime.h>
#include "tb.h"
#define USB4_DATA_DWORDS 16
#define USB4_DATA_RETRIES 3
enum usb4_switch_op {
USB4_SWITCH_OP_QUERY_DP_RESOURCE = 0x10,
USB4_SWITCH_OP_ALLOC_DP_RESOURCE = 0x11,
USB4_SWITCH_OP_DEALLOC_DP_RESOURCE = 0x12,
USB4_SWITCH_OP_NVM_WRITE = 0x20,
USB4_SWITCH_OP_NVM_AUTH = 0x21,
USB4_SWITCH_OP_NVM_READ = 0x22,
USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23,
USB4_SWITCH_OP_DROM_READ = 0x24,
USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
};
#define USB4_NVM_READ_OFFSET_MASK GENMASK(23, 2)
#define USB4_NVM_READ_OFFSET_SHIFT 2
#define USB4_NVM_READ_LENGTH_MASK GENMASK(27, 24)
#define USB4_NVM_READ_LENGTH_SHIFT 24
#define USB4_NVM_SET_OFFSET_MASK USB4_NVM_READ_OFFSET_MASK
#define USB4_NVM_SET_OFFSET_SHIFT USB4_NVM_READ_OFFSET_SHIFT
#define USB4_DROM_ADDRESS_MASK GENMASK(14, 2)
#define USB4_DROM_ADDRESS_SHIFT 2
#define USB4_DROM_SIZE_MASK GENMASK(19, 15)
#define USB4_DROM_SIZE_SHIFT 15
#define USB4_NVM_SECTOR_SIZE_MASK GENMASK(23, 0)
typedef int (*read_block_fn)(struct tb_switch *, unsigned int, void *, size_t);
typedef int (*write_block_fn)(struct tb_switch *, const void *, size_t);
static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
u32 value, int timeout_msec)
{
ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
do {
u32 val;
int ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1);
if (ret)
return ret;
if ((val & bit) == value)
return 0;
usleep_range(50, 100);
} while (ktime_before(ktime_get(), timeout));
return -ETIMEDOUT;
}
static int usb4_switch_op_read_data(struct tb_switch *sw, void *data,
size_t dwords)
{
if (dwords > USB4_DATA_DWORDS)
return -EINVAL;
return tb_sw_read(sw, data, TB_CFG_SWITCH, ROUTER_CS_9, dwords);
}
static int usb4_switch_op_write_data(struct tb_switch *sw, const void *data,
size_t dwords)
{
if (dwords > USB4_DATA_DWORDS)
return -EINVAL;
return tb_sw_write(sw, data, TB_CFG_SWITCH, ROUTER_CS_9, dwords);
}
static int usb4_switch_op_read_metadata(struct tb_switch *sw, u32 *metadata)
{
return tb_sw_read(sw, metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1);
}
static int usb4_switch_op_write_metadata(struct tb_switch *sw, u32 metadata)
{
return tb_sw_write(sw, &metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1);
}
static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address,
void *buf, size_t size, read_block_fn read_block)
{
unsigned int retries = USB4_DATA_RETRIES;
unsigned int offset;
offset = address & 3;
address = address & ~3;
do {
size_t nbytes = min_t(size_t, size, USB4_DATA_DWORDS * 4);
unsigned int dwaddress, dwords;
u8 data[USB4_DATA_DWORDS * 4];
int ret;
dwaddress = address / 4;
dwords = ALIGN(nbytes, 4) / 4;
ret = read_block(sw, dwaddress, data, dwords);
if (ret) {
if (ret == -ETIMEDOUT) {
if (retries--)
continue;
ret = -EIO;
}
return ret;
}
memcpy(buf, data + offset, nbytes);
size -= nbytes;
address += nbytes;
buf += nbytes;
} while (size > 0);
return 0;
}
static int usb4_switch_do_write_data(struct tb_switch *sw, u16 address,
const void *buf, size_t size, write_block_fn write_next_block)
{
unsigned int retries = USB4_DATA_RETRIES;
unsigned int offset;
offset = address & 3;
address = address & ~3;
do {
u32 nbytes = min_t(u32, size, USB4_DATA_DWORDS * 4);
u8 data[USB4_DATA_DWORDS * 4];
int ret;
memcpy(data + offset, buf, nbytes);
ret = write_next_block(sw, data, nbytes / 4);
if (ret) {
if (ret == -ETIMEDOUT) {
if (retries--)
continue;
ret = -EIO;
}
return ret;
}
size -= nbytes;
address += nbytes;
buf += nbytes;
} while (size > 0);
return 0;
}
static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
{
u32 val;
int ret;
val = opcode | ROUTER_CS_26_OV;
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1);
if (ret)
return ret;
ret = usb4_switch_wait_for_bit(sw, ROUTER_CS_26, ROUTER_CS_26_OV, 0, 500);
if (ret)
return ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1);
if (ret)
return ret;
if (val & ROUTER_CS_26_ONS)
return -EOPNOTSUPP;
*status = (val & ROUTER_CS_26_STATUS_MASK) >> ROUTER_CS_26_STATUS_SHIFT;
return 0;
}
/**
* usb4_switch_setup() - Additional setup for USB4 device
* @sw: USB4 router to setup
*
* USB4 routers need additional settings in order to enable all the
* tunneling. This function enables USB and PCIe tunneling if it can be
* enabled (e.g. the parent switch also supports them). If USB tunneling
* is not available for some reason (such as a Thunderbolt 3 switch
* upstream) then the internal xHCI controller is enabled instead.
*/
int usb4_switch_setup(struct tb_switch *sw)
{
struct tb_switch *parent;
bool tbt3, xhci;
u32 val = 0;
int ret;
if (!tb_route(sw))
return 0;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_6, 1);
if (ret)
return ret;
xhci = val & ROUTER_CS_6_HCI;
tbt3 = !(val & ROUTER_CS_6_TNS);
tb_sw_dbg(sw, "TBT3 support: %s, xHCI: %s\n",
tbt3 ? "yes" : "no", xhci ? "yes" : "no");
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
if (ret)
return ret;
parent = tb_switch_parent(sw);
if (tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
val |= ROUTER_CS_5_UTO;
xhci = false;
}
/* Only enable PCIe tunneling if the parent router supports it */
if (tb_switch_find_port(parent, TB_TYPE_PCIE_DOWN)) {
val |= ROUTER_CS_5_PTO;
/*
* xHCI can be enabled if PCIe tunneling is supported
* and the parent does not have any USB3 downstream
* adapters (so we cannot do USB 3.x tunneling).
*/
if (xhci)
val |= ROUTER_CS_5_HCO;
}
/* TBT3 supported by the CM */
val |= ROUTER_CS_5_C3S;
/* Tunneling configuration is ready now */
val |= ROUTER_CS_5_CV;
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
if (ret)
return ret;
return usb4_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_CR,
ROUTER_CS_6_CR, 50);
}
/**
* usb4_switch_read_uid() - Read UID from USB4 router
* @sw: USB4 router
* @uid: UID is stored here
*
* Reads 64-bit UID from USB4 router config space.
*/
int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid)
{
return tb_sw_read(sw, uid, TB_CFG_SWITCH, ROUTER_CS_7, 2);
}
static int usb4_switch_drom_read_block(struct tb_switch *sw,
unsigned int dwaddress, void *buf,
size_t dwords)
{
u8 status = 0;
u32 metadata;
int ret;
metadata = (dwords << USB4_DROM_SIZE_SHIFT) & USB4_DROM_SIZE_MASK;
metadata |= (dwaddress << USB4_DROM_ADDRESS_SHIFT) &
USB4_DROM_ADDRESS_MASK;
ret = usb4_switch_op_write_metadata(sw, metadata);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_DROM_READ, &status);
if (ret)
return ret;
if (status)
return -EIO;
return usb4_switch_op_read_data(sw, buf, dwords);
}
/**
* usb4_switch_drom_read() - Read arbitrary bytes from USB4 router DROM
* @sw: USB4 router
* @address: Byte address inside DROM to start reading
* @buf: Read data is placed here
* @size: How many bytes to read
*
* Uses USB4 router operations to read router DROM. For devices this
* should always work but for hosts it may return %-EOPNOTSUPP in which
* case the host router does not have DROM.
*/
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size)
{
return usb4_switch_do_read_data(sw, address, buf, size,
usb4_switch_drom_read_block);
}
static int usb4_set_port_configured(struct tb_port *port, bool configured)
{
int ret;
u32 val;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_19, 1);
if (ret)
return ret;
if (configured)
val |= PORT_CS_19_PC;
else
val &= ~PORT_CS_19_PC;
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_19, 1);
}
/**
* usb4_switch_configure_link() - Set upstream USB4 link configured
* @sw: USB4 router
*
* Sets the upstream USB4 link to be configured for power management
* purposes.
*/
int usb4_switch_configure_link(struct tb_switch *sw)
{
struct tb_port *up;
if (!tb_route(sw))
return 0;
up = tb_upstream_port(sw);
return usb4_set_port_configured(up, true);
}
/**
* usb4_switch_unconfigure_link() - Un-set upstream USB4 link configuration
* @sw: USB4 router
*
* Reverse of usb4_switch_configure_link().
*/
void usb4_switch_unconfigure_link(struct tb_switch *sw)
{
struct tb_port *up;
if (sw->is_unplugged || !tb_route(sw))
return;
up = tb_upstream_port(sw);
usb4_set_port_configured(up, false);
}
/**
* usb4_switch_lane_bonding_possible() - Are conditions met for lane bonding
* @sw: USB4 router
*
* Checks whether conditions are met so that lane bonding can be
* established with the upstream router. Call only for device routers.
*/
bool usb4_switch_lane_bonding_possible(struct tb_switch *sw)
{
struct tb_port *up;
int ret;
u32 val;
up = tb_upstream_port(sw);
ret = tb_port_read(up, &val, TB_CFG_PORT, up->cap_usb4 + PORT_CS_18, 1);
if (ret)
return false;
return !!(val & PORT_CS_18_BE);
}
/**
* usb4_switch_set_sleep() - Prepare the router to enter sleep
* @sw: USB4 router
*
* Enables wakes and sets sleep bit for the router. Returns when the
* router sleep ready bit has been asserted.
*/
int usb4_switch_set_sleep(struct tb_switch *sw)
{
int ret;
u32 val;
/* Set sleep bit and wait for sleep ready to be asserted */
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
if (ret)
return ret;
val |= ROUTER_CS_5_SLP;
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
if (ret)
return ret;
return usb4_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_SLPR,
ROUTER_CS_6_SLPR, 500);
}
/**
* usb4_switch_nvm_sector_size() - Return router NVM sector size
* @sw: USB4 router
*
* If the router supports NVM operations this function returns the NVM
* sector size in bytes. If NVM operations are not supported returns
* %-EOPNOTSUPP.
*/
int usb4_switch_nvm_sector_size(struct tb_switch *sw)
{
u32 metadata;
u8 status;
int ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SECTOR_SIZE, &status);
if (ret)
return ret;
if (status)
return status == 0x2 ? -EOPNOTSUPP : -EIO;
ret = usb4_switch_op_read_metadata(sw, &metadata);
if (ret)
return ret;
return metadata & USB4_NVM_SECTOR_SIZE_MASK;
}
static int usb4_switch_nvm_read_block(struct tb_switch *sw,
unsigned int dwaddress, void *buf, size_t dwords)
{
u8 status = 0;
u32 metadata;
int ret;
metadata = (dwords << USB4_NVM_READ_LENGTH_SHIFT) &
USB4_NVM_READ_LENGTH_MASK;
metadata |= (dwaddress << USB4_NVM_READ_OFFSET_SHIFT) &
USB4_NVM_READ_OFFSET_MASK;
ret = usb4_switch_op_write_metadata(sw, metadata);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_READ, &status);
if (ret)
return ret;
if (status)
return -EIO;
return usb4_switch_op_read_data(sw, buf, dwords);
}
/**
* usb4_switch_nvm_read() - Read arbitrary bytes from router NVM
* @sw: USB4 router
* @address: Starting address in bytes
* @buf: Read data is placed here
* @size: How many bytes to read
*
* Reads NVM contents of the router. If NVM is not supported returns
* %-EOPNOTSUPP.
*/
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size)
{
return usb4_switch_do_read_data(sw, address, buf, size,
usb4_switch_nvm_read_block);
}
static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
unsigned int address)
{
u32 metadata, dwaddress;
u8 status = 0;
int ret;
dwaddress = address / 4;
metadata = (dwaddress << USB4_NVM_SET_OFFSET_SHIFT) &
USB4_NVM_SET_OFFSET_MASK;
ret = usb4_switch_op_write_metadata(sw, metadata);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SET_OFFSET, &status);
if (ret)
return ret;
return status ? -EIO : 0;
}
static int usb4_switch_nvm_write_next_block(struct tb_switch *sw,
const void *buf, size_t dwords)
{
u8 status;
int ret;
ret = usb4_switch_op_write_data(sw, buf, dwords);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_WRITE, &status);
if (ret)
return ret;
return status ? -EIO : 0;
}
/**
* usb4_switch_nvm_write() - Write to the router NVM
* @sw: USB4 router
* @address: Start address where to write in bytes
* @buf: Pointer to the data to write
* @size: Size of @buf in bytes
*
* Writes @buf to the router NVM using USB4 router operations. If NVM
* write is not supported returns %-EOPNOTSUPP.
*/
int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
const void *buf, size_t size)
{
int ret;
ret = usb4_switch_nvm_set_offset(sw, address);
if (ret)
return ret;
return usb4_switch_do_write_data(sw, address, buf, size,
usb4_switch_nvm_write_next_block);
}
/**
* usb4_switch_nvm_authenticate() - Authenticate new NVM
* @sw: USB4 router
*
* After the new NVM has been written via usb4_switch_nvm_write(), this
* function triggers NVM authentication process. If the authentication
* is successful the router is power cycled and the new NVM starts
* running. In case of failure returns negative errno.
*/
int usb4_switch_nvm_authenticate(struct tb_switch *sw)
{
u8 status = 0;
int ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_AUTH, &status);
if (ret)
return ret;
switch (status) {
case 0x0:
tb_sw_dbg(sw, "NVM authentication successful\n");
return 0;
case 0x1:
return -EINVAL;
case 0x2:
return -EAGAIN;
case 0x3:
return -EOPNOTSUPP;
default:
return -EIO;
}
}
/**
* usb4_switch_query_dp_resource() - Query availability of DP IN resource
* @sw: USB4 router
* @in: DP IN adapter
*
* For DP tunneling this function can be used to query availability of
* DP IN resource. Returns true if the resource is available for DP
* tunneling, false otherwise.
*/
bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
u8 status;
int ret;
ret = usb4_switch_op_write_metadata(sw, in->port);
if (ret)
return false;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_QUERY_DP_RESOURCE, &status);
/*
* If DP resource allocation is not supported assume it is
* always available.
*/
if (ret == -EOPNOTSUPP)
return true;
else if (ret)
return false;
return !status;
}
/**
* usb4_switch_alloc_dp_resource() - Allocate DP IN resource
* @sw: USB4 router
* @in: DP IN adapter
*
* Allocates DP IN resource for DP tunneling using USB4 router
* operations. If the resource was allocated returns %0. Otherwise
* returns negative errno, in particular %-EBUSY if the resource is
* already allocated.
*/
int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
u8 status;
int ret;
ret = usb4_switch_op_write_metadata(sw, in->port);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_ALLOC_DP_RESOURCE, &status);
if (ret == -EOPNOTSUPP)
return 0;
else if (ret)
return ret;
return status ? -EBUSY : 0;
}
/**
* usb4_switch_dealloc_dp_resource() - Releases allocated DP IN resource
* @sw: USB4 router
* @in: DP IN adapter
*
* Releases the previously allocated DP IN resource.
*/
int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
u8 status;
int ret;
ret = usb4_switch_op_write_metadata(sw, in->port);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_DEALLOC_DP_RESOURCE, &status);
if (ret == -EOPNOTSUPP)
return 0;
else if (ret)
return ret;
return status ? -EIO : 0;
}
static int usb4_port_idx(const struct tb_switch *sw, const struct tb_port *port)
{
struct tb_port *p;
int usb4_idx = 0;
/* Assume port is primary */
tb_switch_for_each_port(sw, p) {
if (!tb_port_is_null(p))
continue;
if (tb_is_upstream_port(p))
continue;
if (!p->link_nr) {
if (p == port)
break;
usb4_idx++;
}
}
return usb4_idx;
}
/**
* usb4_switch_map_pcie_down() - Map USB4 port to a PCIe downstream adapter
* @sw: USB4 router
* @port: USB4 port
*
* USB4 routers have direct mapping between USB4 ports and PCIe
* downstream adapters where the PCIe topology is extended. This
* function returns the corresponding downstream PCIe adapter or %NULL
* if no such mapping was possible.
*/
struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
const struct tb_port *port)
{
int usb4_idx = usb4_port_idx(sw, port);
struct tb_port *p;
int pcie_idx = 0;
/* Find PCIe down port matching usb4_port */
tb_switch_for_each_port(sw, p) {
if (!tb_port_is_pcie_down(p))
continue;
if (pcie_idx == usb4_idx && !tb_pci_port_is_enabled(p))
return p;
pcie_idx++;
}
return NULL;
}
/**
* usb4_switch_map_usb3_down() - Map USB4 port to a USB3 downstream adapter
* @sw: USB4 router
* @port: USB4 port
*
* USB4 routers have direct mapping between USB4 ports and USB 3.x
* downstream adapters where the USB 3.x topology is extended. This
* function returns the corresponding downstream USB 3.x adapter or
* %NULL if no such mapping was possible.
*/
struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
const struct tb_port *port)
{
int usb4_idx = usb4_port_idx(sw, port);
struct tb_port *p;
int usb_idx = 0;
/* Find USB3 down port matching usb4_port */
tb_switch_for_each_port(sw, p) {
if (!tb_port_is_usb3_down(p))
continue;
if (usb_idx == usb4_idx && !tb_usb3_port_is_enabled(p))
return p;
usb_idx++;
}
return NULL;
}
/**
* usb4_port_unlock() - Unlock USB4 downstream port
* @port: USB4 port to unlock
*
* Unlocks USB4 downstream port so that the connection manager can
* access the router below this port.
*/
int usb4_port_unlock(struct tb_port *port)
{
int ret;
u32 val;
ret = tb_port_read(port, &val, TB_CFG_PORT, ADP_CS_4, 1);
if (ret)
return ret;
val &= ~ADP_CS_4_LCK;
return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_4, 1);
}


@@ -1220,7 +1220,13 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
u64 route, const uuid_t *local_uuid,
const uuid_t *remote_uuid)
{
struct tb_switch *parent_sw = tb_to_switch(parent);
struct tb_xdomain *xd;
struct tb_port *down;
/* Make sure the downstream domain is accessible */
down = tb_port_at(route, parent_sw);
tb_port_unlock(down);
xd = kzalloc(sizeof(*xd), GFP_KERNEL);
if (!xd)


@@ -53,4 +53,14 @@ config USB_CDNS3_TI
e.g. J721e.
config USB_CDNS3_IMX
tristate "Cadence USB3 support on NXP i.MX platforms"
depends on ARCH_MXC || COMPILE_TEST
default USB_CDNS3
help
Say 'Y' or 'M' here if you are building for NXP i.MX
platforms that contain the Cadence USB3 controller core,
e.g. imx8qm and imx8qxp.
endif


@@ -15,3 +15,4 @@ cdns3-$(CONFIG_USB_CDNS3_HOST) += host.o
obj-$(CONFIG_USB_CDNS3_PCI_WRAP) += cdns3-pci-wrap.o
obj-$(CONFIG_USB_CDNS3_TI) += cdns3-ti.o
obj-$(CONFIG_USB_CDNS3_IMX) += cdns3-imx.o


@@ -0,0 +1,216 @@
// SPDX-License-Identifier: GPL-2.0
/*
* cdns3-imx.c - NXP i.MX specific Glue layer for Cadence USB Controller
*
* Copyright (C) 2019 NXP
*/
#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/of_platform.h>
#include <linux/iopoll.h>
#define USB3_CORE_CTRL1 0x00
#define USB3_CORE_CTRL2 0x04
#define USB3_INT_REG 0x08
#define USB3_CORE_STATUS 0x0c
#define XHCI_DEBUG_LINK_ST 0x10
#define XHCI_DEBUG_BUS 0x14
#define USB3_SSPHY_CTRL1 0x40
#define USB3_SSPHY_CTRL2 0x44
#define USB3_SSPHY_STATUS 0x4c
#define USB2_PHY_CTRL1 0x50
#define USB2_PHY_CTRL2 0x54
#define USB2_PHY_STATUS 0x5c
/* Register bits definition */
/* USB3_CORE_CTRL1 */
#define SW_RESET_MASK (0x3f << 26)
#define PWR_SW_RESET BIT(31)
#define APB_SW_RESET BIT(30)
#define AXI_SW_RESET BIT(29)
#define RW_SW_RESET BIT(28)
#define PHY_SW_RESET BIT(27)
#define PHYAHB_SW_RESET BIT(26)
#define ALL_SW_RESET (PWR_SW_RESET | APB_SW_RESET | AXI_SW_RESET | \
RW_SW_RESET | PHY_SW_RESET | PHYAHB_SW_RESET)
#define OC_DISABLE BIT(9)
#define MDCTRL_CLK_SEL BIT(7)
#define MODE_STRAP_MASK (0x7)
#define DEV_MODE (1 << 2)
#define HOST_MODE (1 << 1)
#define OTG_MODE (1 << 0)
/* USB3_INT_REG */
#define CLK_125_REQ BIT(29)
#define LPM_CLK_REQ BIT(28)
#define DEVU3_WAEKUP_EN BIT(14)
#define OTG_WAKEUP_EN BIT(12)
#define DEV_INT_EN (3 << 8) /* DEV INT b9:8 */
#define HOST_INT1_EN (1 << 0) /* HOST INT b7:0 */
/* USB3_CORE_STATUS */
#define MDCTRL_CLK_STATUS BIT(15)
#define DEV_POWER_ON_READY BIT(13)
#define HOST_POWER_ON_READY BIT(12)
/* USB3_SSPHY_STATUS */
#define CLK_VALID_MASK (0x3f << 26)
#define CLK_VALID_COMPARE_BITS (0xf << 28)
#define PHY_REFCLK_REQ (1 << 0)
struct cdns_imx {
struct device *dev;
void __iomem *noncore;
struct clk_bulk_data *clks;
int num_clks;
};
static inline u32 cdns_imx_readl(struct cdns_imx *data, u32 offset)
{
return readl(data->noncore + offset);
}
static inline void cdns_imx_writel(struct cdns_imx *data, u32 offset, u32 value)
{
writel(value, data->noncore + offset);
}
static const struct clk_bulk_data imx_cdns3_core_clks[] = {
{ .id = "usb3_lpm_clk" },
{ .id = "usb3_bus_clk" },
{ .id = "usb3_aclk" },
{ .id = "usb3_ipg_clk" },
{ .id = "usb3_core_pclk" },
};
static int cdns_imx_noncore_init(struct cdns_imx *data)
{
u32 value;
int ret;
struct device *dev = data->dev;
cdns_imx_writel(data, USB3_SSPHY_STATUS, CLK_VALID_MASK);
udelay(1);
ret = readl_poll_timeout(data->noncore + USB3_SSPHY_STATUS, value,
(value & CLK_VALID_COMPARE_BITS) == CLK_VALID_COMPARE_BITS,
10, 100000);
if (ret) {
dev_err(dev, "wait clkvld timeout\n");
return ret;
}
value = cdns_imx_readl(data, USB3_CORE_CTRL1);
value |= ALL_SW_RESET;
cdns_imx_writel(data, USB3_CORE_CTRL1, value);
udelay(1);
value = cdns_imx_readl(data, USB3_CORE_CTRL1);
value = (value & ~MODE_STRAP_MASK) | OTG_MODE | OC_DISABLE;
cdns_imx_writel(data, USB3_CORE_CTRL1, value);
value = cdns_imx_readl(data, USB3_INT_REG);
value |= HOST_INT1_EN | DEV_INT_EN;
cdns_imx_writel(data, USB3_INT_REG, value);
value = cdns_imx_readl(data, USB3_CORE_CTRL1);
value &= ~ALL_SW_RESET;
cdns_imx_writel(data, USB3_CORE_CTRL1, value);
return ret;
}
static int cdns_imx_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->of_node;
struct cdns_imx *data;
int ret;
if (!node)
return -ENODEV;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
platform_set_drvdata(pdev, data);
data->dev = dev;
data->noncore = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(data->noncore)) {
dev_err(dev, "can't map IOMEM resource\n");
return PTR_ERR(data->noncore);
}
data->num_clks = ARRAY_SIZE(imx_cdns3_core_clks);
data->clks = (struct clk_bulk_data *)imx_cdns3_core_clks;
ret = devm_clk_bulk_get(dev, data->num_clks, data->clks);
if (ret)
return ret;
ret = clk_bulk_prepare_enable(data->num_clks, data->clks);
if (ret)
return ret;
ret = cdns_imx_noncore_init(data);
if (ret)
goto err;
ret = of_platform_populate(node, NULL, NULL, dev);
if (ret) {
dev_err(dev, "failed to create children: %d\n", ret);
goto err;
}
return ret;
err:
clk_bulk_disable_unprepare(data->num_clks, data->clks);
return ret;
}
static int cdns_imx_remove_core(struct device *dev, void *data)
{
struct platform_device *pdev = to_platform_device(dev);
platform_device_unregister(pdev);
return 0;
}
static int cdns_imx_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
device_for_each_child(dev, NULL, cdns_imx_remove_core);
platform_set_drvdata(pdev, NULL);
return 0;
}
static const struct of_device_id cdns_imx_of_match[] = {
{ .compatible = "fsl,imx8qm-usb3", },
{},
};
MODULE_DEVICE_TABLE(of, cdns_imx_of_match);
static struct platform_driver cdns_imx_driver = {
.probe = cdns_imx_probe,
.remove = cdns_imx_remove,
.driver = {
.name = "cdns3-imx",
.of_match_table = cdns_imx_of_match,
},
};
module_platform_driver(cdns_imx_driver);
MODULE_ALIAS("platform:cdns3-imx");
MODULE_AUTHOR("Peter Chen <peter.chen@nxp.com>");
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Cadence USB3 i.MX Glue Layer");


@@ -140,7 +140,7 @@ static inline char *cdns3_dbg_ring(struct cdns3_endpoint *priv_ep,
trb_per_sector = TRBS_PER_SEGMENT;
if (trb_per_sector > TRBS_PER_SEGMENT) {
- sprintf(str + ret, "\t\tTo big transfer ring %d\n",
+ sprintf(str + ret, "\t\tTransfer ring %d too big\n",
trb_per_sector);
return str;
}


@@ -71,6 +71,23 @@ static int __cdns3_gadget_ep_queue(struct usb_ep *ep,
struct usb_request *request,
gfp_t gfp_flags);
static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
struct usb_request *request);
static int cdns3_ep_run_stream_transfer(struct cdns3_endpoint *priv_ep,
struct usb_request *request);
/**
* cdns3_clear_register_bit - clear bit in given register.
* @ptr: address of device controller register to be read and changed
* @mask: bits requested to clear
*/
void cdns3_clear_register_bit(void __iomem *ptr, u32 mask)
{
mask = readl(ptr) & ~mask;
writel(mask, ptr);
}
/**
* cdns3_set_register_bit - set bit in given register.
* @ptr: address of device controller register to be read and changed
@@ -150,6 +167,21 @@ void cdns3_select_ep(struct cdns3_device *priv_dev, u32 ep)
writel(ep, &priv_dev->regs->ep_sel);
}
/**
* cdns3_get_tdl - gets current tdl for selected endpoint.
* @priv_dev: extended gadget object
*
* Before calling this function the appropriate endpoint must
* be selected by means of cdns3_select_ep function.
*/
static int cdns3_get_tdl(struct cdns3_device *priv_dev)
{
if (priv_dev->dev_ver < DEV_VER_V3)
return EP_CMD_TDL_GET(readl(&priv_dev->regs->ep_cmd));
else
return readl(&priv_dev->regs->ep_tdl);
}
dma_addr_t cdns3_trb_virt_to_dma(struct cdns3_endpoint *priv_ep,
struct cdns3_trb *trb)
{
@@ -166,7 +198,22 @@ int cdns3_ring_size(struct cdns3_endpoint *priv_ep)
case USB_ENDPOINT_XFER_CONTROL:
return TRB_CTRL_RING_SIZE;
default:
- return TRB_RING_SIZE;
+ if (priv_ep->use_streams)
+ return TRB_STREAM_RING_SIZE;
+ else
+ return TRB_RING_SIZE;
}
}
static void cdns3_free_trb_pool(struct cdns3_endpoint *priv_ep)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
if (priv_ep->trb_pool) {
dma_free_coherent(priv_dev->sysdev,
cdns3_ring_size(priv_ep),
priv_ep->trb_pool, priv_ep->trb_pool_dma);
priv_ep->trb_pool = NULL;
}
}
@@ -180,8 +227,12 @@ int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
int ring_size = cdns3_ring_size(priv_ep);
int num_trbs = ring_size / TRB_SIZE;
struct cdns3_trb *link_trb;
if (priv_ep->trb_pool && priv_ep->alloc_ring_size < ring_size)
cdns3_free_trb_pool(priv_ep);
if (!priv_ep->trb_pool) {
priv_ep->trb_pool = dma_alloc_coherent(priv_dev->sysdev,
ring_size,
@@ -189,32 +240,30 @@ int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
GFP_DMA32 | GFP_ATOMIC);
if (!priv_ep->trb_pool)
return -ENOMEM;
} else {
priv_ep->alloc_ring_size = ring_size;
memset(priv_ep->trb_pool, 0, ring_size);
}
priv_ep->num_trbs = num_trbs;
if (!priv_ep->num)
return 0;
priv_ep->num_trbs = ring_size / TRB_SIZE;
- /* Initialize the last TRB as Link TRB. */
+ /* Initialize the last TRB as Link TRB */
link_trb = (priv_ep->trb_pool + (priv_ep->num_trbs - 1));
link_trb->buffer = TRB_BUFFER(priv_ep->trb_pool_dma);
link_trb->control = TRB_CYCLE | TRB_TYPE(TRB_LINK) | TRB_TOGGLE;
return 0;
}
static void cdns3_free_trb_pool(struct cdns3_endpoint *priv_ep)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
if (priv_ep->trb_pool) {
dma_free_coherent(priv_dev->sysdev,
cdns3_ring_size(priv_ep),
priv_ep->trb_pool, priv_ep->trb_pool_dma);
priv_ep->trb_pool = NULL;
if (priv_ep->use_streams) {
/*
* For stream-capable endpoints the driver uses a single correct TRB.
* The last TRB has its cycle bit zeroed.
*/
link_trb->control = 0;
} else {
link_trb->buffer = TRB_BUFFER(priv_ep->trb_pool_dma);
link_trb->control = TRB_CYCLE | TRB_TYPE(TRB_LINK) | TRB_TOGGLE;
}
return 0;
}
/**
@@ -253,6 +302,7 @@ void cdns3_hw_reset_eps_config(struct cdns3_device *priv_dev)
priv_dev->onchip_used_size = 0;
priv_dev->out_mem_is_allocated = 0;
priv_dev->wait_for_setup = 0;
priv_dev->using_streams = 0;
}
/**
@@ -356,17 +406,43 @@ static int cdns3_start_all_request(struct cdns3_device *priv_dev,
{
struct usb_request *request;
int ret = 0;
u8 pending_empty = list_empty(&priv_ep->pending_req_list);
/*
* If the last pending transfer is INTERNAL
* OR streams are enabled for this endpoint
* do NOT start a new transfer while the last one is still pending
*/
if (!pending_empty) {
struct cdns3_request *priv_req;
request = cdns3_next_request(&priv_ep->pending_req_list);
priv_req = to_cdns3_request(request);
if ((priv_req->flags & REQUEST_INTERNAL) ||
(priv_ep->flags & EP_TDLCHK_EN) ||
priv_ep->use_streams) {
trace_printk("Blocking external request\n");
return ret;
}
}
while (!list_empty(&priv_ep->deferred_req_list)) {
request = cdns3_next_request(&priv_ep->deferred_req_list);
- ret = cdns3_ep_run_transfer(priv_ep, request);
+ if (!priv_ep->use_streams) {
+ ret = cdns3_ep_run_transfer(priv_ep, request);
+ } else {
+ priv_ep->stream_sg_idx = 0;
+ ret = cdns3_ep_run_stream_transfer(priv_ep, request);
+ }
if (ret)
return ret;
list_del(&request->list);
list_add_tail(&request->list,
&priv_ep->pending_req_list);
if (request->stream_id != 0 || (priv_ep->flags & EP_TDLCHK_EN))
break;
}
priv_ep->flags &= ~EP_RING_FULL;
@@ -379,7 +455,7 @@ static int cdns3_start_all_request(struct cdns3_device *priv_dev,
* buffer for unblocking on-chip FIFO buffer. This flag will be cleared
* if the DMA is armed before the first DESCMISS interrupt.
*/
- #define cdns3_wa2_enable_detection(priv_dev, ep_priv, reg) do { \
+ #define cdns3_wa2_enable_detection(priv_dev, priv_ep, reg) do { \
if (!priv_ep->dir && priv_ep->type != USB_ENDPOINT_XFER_ISOC) { \
priv_ep->flags |= EP_QUIRK_EXTRA_BUF_DET; \
(reg) |= EP_STS_EN_DESCMISEN; \
@@ -450,10 +526,17 @@ struct usb_request *cdns3_wa2_gadget_giveback(struct cdns3_device *priv_dev,
if (!req)
return NULL;
/* unmap the gadget request before copying data */
usb_gadget_unmap_request_by_dev(priv_dev->sysdev, req,
priv_ep->dir);
cdns3_wa2_descmiss_copy_data(priv_ep, req);
if (!(priv_ep->flags & EP_QUIRK_END_TRANSFER) &&
req->length != req->actual) {
/* wait for next part of transfer */
/* re-map the gadget request buffer */
usb_gadget_map_request_by_dev(priv_dev->sysdev, req,
usb_endpoint_dir_in(priv_ep->endpoint.desc));
return NULL;
}
@@ -570,6 +653,13 @@ static void cdns3_wa2_descmissing_packet(struct cdns3_endpoint *priv_ep)
{
struct cdns3_request *priv_req;
struct usb_request *request;
u8 pending_empty = list_empty(&priv_ep->pending_req_list);
/* check for pending transfer */
if (!pending_empty) {
trace_cdns3_wa2(priv_ep, "Ignoring Descriptor missing IRQ\n");
return;
}
if (priv_ep->flags & EP_QUIRK_EXTRA_BUF_DET) {
priv_ep->flags &= ~EP_QUIRK_EXTRA_BUF_DET;
@@ -578,8 +668,10 @@ static void cdns3_wa2_descmissing_packet(struct cdns3_endpoint *priv_ep)
trace_cdns3_wa2(priv_ep, "Description Missing detected\n");
- if (priv_ep->wa2_counter >= CDNS3_WA2_NUM_BUFFERS)
+ if (priv_ep->wa2_counter >= CDNS3_WA2_NUM_BUFFERS) {
+ trace_cdns3_wa2(priv_ep, "WA2 overflow\n");
cdns3_wa2_remove_old_request(priv_ep);
+ }
request = cdns3_gadget_ep_alloc_request(&priv_ep->endpoint,
GFP_ATOMIC);
@@ -621,6 +713,78 @@ err:
"Failed: No sufficient memory for DESCMIS\n");
}
static void cdns3_wa2_reset_tdl(struct cdns3_device *priv_dev)
{
u16 tdl = EP_CMD_TDL_GET(readl(&priv_dev->regs->ep_cmd));
if (tdl) {
u16 reset_val = EP_CMD_TDL_MAX + 1 - tdl;
writel(EP_CMD_TDL_SET(reset_val) | EP_CMD_STDL,
&priv_dev->regs->ep_cmd);
}
}
static void cdns3_wa2_check_outq_status(struct cdns3_device *priv_dev)
{
u32 ep_sts_reg;
/* select EP0-out */
cdns3_select_ep(priv_dev, 0);
ep_sts_reg = readl(&priv_dev->regs->ep_sts);
if (EP_STS_OUTQ_VAL(ep_sts_reg)) {
u32 outq_ep_num = EP_STS_OUTQ_NO(ep_sts_reg);
struct cdns3_endpoint *outq_ep = priv_dev->eps[outq_ep_num];
if ((outq_ep->flags & EP_ENABLED) && !(outq_ep->use_streams) &&
outq_ep->type != USB_ENDPOINT_XFER_ISOC && outq_ep_num) {
u8 pending_empty = list_empty(&outq_ep->pending_req_list);
if ((outq_ep->flags & EP_QUIRK_EXTRA_BUF_DET) ||
(outq_ep->flags & EP_QUIRK_EXTRA_BUF_EN) ||
!pending_empty) {
} else {
u32 ep_sts_en_reg;
u32 ep_cmd_reg;
cdns3_select_ep(priv_dev, outq_ep->num |
outq_ep->dir);
ep_sts_en_reg = readl(&priv_dev->regs->ep_sts_en);
ep_cmd_reg = readl(&priv_dev->regs->ep_cmd);
outq_ep->flags |= EP_TDLCHK_EN;
cdns3_set_register_bit(&priv_dev->regs->ep_cfg,
EP_CFG_TDL_CHK);
cdns3_wa2_enable_detection(priv_dev, outq_ep,
ep_sts_en_reg);
writel(ep_sts_en_reg,
&priv_dev->regs->ep_sts_en);
/* reset tdl value to zero */
cdns3_wa2_reset_tdl(priv_dev);
/*
* Memory barrier - Reset tdl before ringing the
* doorbell.
*/
wmb();
if (EP_CMD_DRDY & ep_cmd_reg) {
trace_cdns3_wa2(outq_ep, "Enabling WA2 skipping doorbell\n");
} else {
trace_cdns3_wa2(outq_ep, "Enabling WA2 ringing doorbell\n");
/*
* ring doorbell to generate DESCMIS irq
*/
writel(EP_CMD_DRDY,
&priv_dev->regs->ep_cmd);
}
}
}
}
}
/**
* cdns3_gadget_giveback - call struct usb_request's ->complete callback
* @priv_ep: the endpoint to which the request belongs
@@ -807,14 +971,120 @@ static void cdns3_wa1_tray_restore_cycle_bit(struct cdns3_device *priv_dev,
cdns3_wa1_restore_cycle_bit(priv_ep);
}
static int cdns3_ep_run_stream_transfer(struct cdns3_endpoint *priv_ep,
struct usb_request *request)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
struct cdns3_request *priv_req;
struct cdns3_trb *trb;
dma_addr_t trb_dma;
int address;
u32 control;
u32 length;
u32 tdl;
unsigned int sg_idx = priv_ep->stream_sg_idx;
priv_req = to_cdns3_request(request);
address = priv_ep->endpoint.desc->bEndpointAddress;
priv_ep->flags |= EP_PENDING_REQUEST;
/* must allocate buffer aligned to 8 */
if (priv_req->flags & REQUEST_UNALIGNED)
trb_dma = priv_req->aligned_buf->dma;
else
trb_dma = request->dma;
/* For stream-capable endpoints the driver uses only a single TD. */
trb = priv_ep->trb_pool + priv_ep->enqueue;
priv_req->start_trb = priv_ep->enqueue;
priv_req->end_trb = priv_req->start_trb;
priv_req->trb = trb;
cdns3_select_ep(priv_ep->cdns3_dev, address);
control = TRB_TYPE(TRB_NORMAL) | TRB_CYCLE |
TRB_STREAM_ID(priv_req->request.stream_id) | TRB_ISP;
if (!request->num_sgs) {
trb->buffer = TRB_BUFFER(trb_dma);
length = request->length;
} else {
trb->buffer = TRB_BUFFER(request->sg[sg_idx].dma_address);
length = request->sg[sg_idx].length;
}
tdl = DIV_ROUND_UP(length, priv_ep->endpoint.maxpacket);
trb->length = TRB_BURST_LEN(16 /*priv_ep->trb_burst_size*/) |
TRB_LEN(length);
/*
* For DEV_VER_V2 controller version we have enabled
* USB_CONF2_EN_TDL_TRB in DMULT configuration.
* This enables TDL calculation based on TRB, hence setting TDL in TRB.
*/
if (priv_dev->dev_ver >= DEV_VER_V2) {
if (priv_dev->gadget.speed == USB_SPEED_SUPER)
trb->length |= TRB_TDL_SS_SIZE(tdl);
}
priv_req->flags |= REQUEST_PENDING;
trb->control = control;
trace_cdns3_prepare_trb(priv_ep, priv_req->trb);
/*
* Memory barrier - Cycle Bit must be set before trb->length and
* trb->buffer fields.
*/
wmb();
/* always first element */
writel(EP_TRADDR_TRADDR(priv_ep->trb_pool_dma),
&priv_dev->regs->ep_traddr);
if (!(priv_ep->flags & EP_STALLED)) {
trace_cdns3_ring(priv_ep);
/* clear TRBERR and EP_STS_DESCMIS before setting DRDY */
writel(EP_STS_TRBERR | EP_STS_DESCMIS, &priv_dev->regs->ep_sts);
priv_ep->prime_flag = false;
/*
* Controller version DEV_VER_V2 tdl calculation
* is based on TRB
*/
if (priv_dev->dev_ver < DEV_VER_V2)
writel(EP_CMD_TDL_SET(tdl) | EP_CMD_STDL,
&priv_dev->regs->ep_cmd);
else if (priv_dev->dev_ver > DEV_VER_V2)
writel(tdl, &priv_dev->regs->ep_tdl);
priv_ep->last_stream_id = priv_req->request.stream_id;
writel(EP_CMD_DRDY, &priv_dev->regs->ep_cmd);
writel(EP_CMD_ERDY_SID(priv_req->request.stream_id) |
EP_CMD_ERDY, &priv_dev->regs->ep_cmd);
trace_cdns3_doorbell_epx(priv_ep->name,
readl(&priv_dev->regs->ep_traddr));
}
/* WORKAROUND for transition to L0 */
__cdns3_gadget_wakeup(priv_dev);
return 0;
}
/**
* cdns3_ep_run_transfer - start transfer on non-default endpoint hardware
* @priv_ep: endpoint object
* @request: request to be transferred
*
* Returns zero on success or negative value on failure
*/
- int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
- struct usb_request *request)
+ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ struct usb_request *request)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
struct cdns3_request *priv_req;
@@ -826,6 +1096,7 @@ int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
int address;
u32 control;
int pcs;
u16 total_tdl = 0;
if (priv_ep->type == USB_ENDPOINT_XFER_ISOC)
num_trb = priv_ep->interval;
@@ -910,6 +1181,9 @@ int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
if (likely(priv_dev->dev_ver >= DEV_VER_V2))
td_size = DIV_ROUND_UP(length,
priv_ep->endpoint.maxpacket);
else if (priv_ep->flags & EP_TDLCHK_EN)
total_tdl += DIV_ROUND_UP(length,
priv_ep->endpoint.maxpacket);
trb->length = TRB_BURST_LEN(priv_ep->trb_burst_size) |
TRB_LEN(length);
@@ -954,6 +1228,23 @@ int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
if (sg_iter == 1)
trb->control |= TRB_IOC | TRB_ISP;
if (priv_dev->dev_ver < DEV_VER_V2 &&
(priv_ep->flags & EP_TDLCHK_EN)) {
u16 tdl = total_tdl;
u16 old_tdl = EP_CMD_TDL_GET(readl(&priv_dev->regs->ep_cmd));
if (tdl > EP_CMD_TDL_MAX) {
tdl = EP_CMD_TDL_MAX;
priv_ep->pending_tdl = total_tdl - EP_CMD_TDL_MAX;
}
if (old_tdl < tdl) {
tdl -= old_tdl;
writel(EP_CMD_TDL_SET(tdl) | EP_CMD_STDL,
&priv_dev->regs->ep_cmd);
}
}
/*
* Memory barrier - cycle bit must be set before other fields in TRB.
*/
@@ -1153,29 +1444,56 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
cdns3_move_deq_to_next_trb(priv_req);
}
/* Re-select endpoint. It could be changed by other CPU during
* handling usb_gadget_giveback_request.
*/
cdns3_select_ep(priv_dev, priv_ep->endpoint.address);
if (!request->stream_id) {
/* Re-select endpoint. It could be changed by other CPU
* during handling usb_gadget_giveback_request.
*/
cdns3_select_ep(priv_dev, priv_ep->endpoint.address);
if (!cdns3_request_handled(priv_ep, priv_req))
goto prepare_next_td;
if (!cdns3_request_handled(priv_ep, priv_req))
goto prepare_next_td;
trb = priv_ep->trb_pool + priv_ep->dequeue;
trace_cdns3_complete_trb(priv_ep, trb);
trb = priv_ep->trb_pool + priv_ep->dequeue;
trace_cdns3_complete_trb(priv_ep, trb);
if (trb != priv_req->trb)
dev_warn(priv_dev->dev,
"request_trb=0x%p, queue_trb=0x%p\n",
priv_req->trb, trb);
if (trb != priv_req->trb)
dev_warn(priv_dev->dev,
"request_trb=0x%p, queue_trb=0x%p\n",
priv_req->trb, trb);
request->actual = TRB_LEN(le32_to_cpu(trb->length));
cdns3_move_deq_to_next_trb(priv_req);
cdns3_gadget_giveback(priv_ep, priv_req, 0);
request->actual = TRB_LEN(le32_to_cpu(trb->length));
cdns3_move_deq_to_next_trb(priv_req);
cdns3_gadget_giveback(priv_ep, priv_req, 0);
if (priv_ep->type != USB_ENDPOINT_XFER_ISOC &&
TRBS_PER_SEGMENT == 2)
if (priv_ep->type != USB_ENDPOINT_XFER_ISOC &&
TRBS_PER_SEGMENT == 2)
break;
} else {
/* Re-select endpoint. It could be changed by other CPU
* during handling usb_gadget_giveback_request.
*/
cdns3_select_ep(priv_dev, priv_ep->endpoint.address);
trb = priv_ep->trb_pool;
trace_cdns3_complete_trb(priv_ep, trb);
if (trb != priv_req->trb)
dev_warn(priv_dev->dev,
"request_trb=0x%p, queue_trb=0x%p\n",
priv_req->trb, trb);
request->actual += TRB_LEN(le32_to_cpu(trb->length));
if (!request->num_sgs ||
(request->num_sgs == (priv_ep->stream_sg_idx + 1))) {
priv_ep->stream_sg_idx = 0;
cdns3_gadget_giveback(priv_ep, priv_req, 0);
} else {
priv_ep->stream_sg_idx++;
cdns3_ep_run_stream_transfer(priv_ep, request);
}
break;
}
}
priv_ep->flags &= ~EP_PENDING_REQUEST;
@@ -1205,6 +1523,21 @@ void cdns3_rearm_transfer(struct cdns3_endpoint *priv_ep, u8 rearm)
}
}
static void cdns3_reprogram_tdl(struct cdns3_endpoint *priv_ep)
{
u16 tdl = priv_ep->pending_tdl;
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
if (tdl > EP_CMD_TDL_MAX) {
tdl = EP_CMD_TDL_MAX;
priv_ep->pending_tdl -= EP_CMD_TDL_MAX;
} else {
priv_ep->pending_tdl = 0;
}
writel(EP_CMD_TDL_SET(tdl) | EP_CMD_STDL, &priv_dev->regs->ep_cmd);
}
/**
* cdns3_check_ep_interrupt_proceed - Processes interrupt related to endpoint
* @priv_ep: endpoint object
@@ -1215,6 +1548,9 @@ static int cdns3_check_ep_interrupt_proceed(struct cdns3_endpoint *priv_ep)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
u32 ep_sts_reg;
struct usb_request *deferred_request;
struct usb_request *pending_request;
u32 tdl = 0;
cdns3_select_ep(priv_dev, priv_ep->endpoint.address);
@@ -1223,6 +1559,36 @@ static int cdns3_check_ep_interrupt_proceed(struct cdns3_endpoint *priv_ep)
ep_sts_reg = readl(&priv_dev->regs->ep_sts);
writel(ep_sts_reg, &priv_dev->regs->ep_sts);
if ((ep_sts_reg & EP_STS_PRIME) && priv_ep->use_streams) {
bool dbusy = !!(ep_sts_reg & EP_STS_DBUSY);
tdl = cdns3_get_tdl(priv_dev);
/*
* Continue the previous transfer:
* There is a race between ERDY and PRIME. The device sends ERDY
* and, at almost the same time, the host sends PRIME. This causes
* the host to ignore the ERDY packet, so the driver has to send
* it again.
*/
if (tdl && (dbusy || !EP_STS_BUFFEMPTY(ep_sts_reg) ||
EP_STS_HOSTPP(ep_sts_reg))) {
writel(EP_CMD_ERDY |
EP_CMD_ERDY_SID(priv_ep->last_stream_id),
&priv_dev->regs->ep_cmd);
ep_sts_reg &= ~(EP_STS_MD_EXIT | EP_STS_IOC);
} else {
priv_ep->prime_flag = true;
pending_request = cdns3_next_request(&priv_ep->pending_req_list);
deferred_request = cdns3_next_request(&priv_ep->deferred_req_list);
if (deferred_request && !pending_request) {
cdns3_start_all_request(priv_dev, priv_ep);
}
}
}
if (ep_sts_reg & EP_STS_TRBERR) {
if (priv_ep->flags & EP_STALL_PENDING &&
!(ep_sts_reg & EP_STS_DESCMIS &&
@@ -1259,7 +1625,8 @@ static int cdns3_check_ep_interrupt_proceed(struct cdns3_endpoint *priv_ep)
}
}
- if ((ep_sts_reg & EP_STS_IOC) || (ep_sts_reg & EP_STS_ISP)) {
+ if ((ep_sts_reg & EP_STS_IOC) || (ep_sts_reg & EP_STS_ISP) ||
+ (ep_sts_reg & EP_STS_IOT)) {
if (priv_ep->flags & EP_QUIRK_EXTRA_BUF_EN) {
if (ep_sts_reg & EP_STS_ISP)
priv_ep->flags |= EP_QUIRK_END_TRANSFER;
@@ -1267,6 +1634,29 @@ static int cdns3_check_ep_interrupt_proceed(struct cdns3_endpoint *priv_ep)
priv_ep->flags &= ~EP_QUIRK_END_TRANSFER;
}
if (!priv_ep->use_streams) {
if ((ep_sts_reg & EP_STS_IOC) ||
(ep_sts_reg & EP_STS_ISP)) {
cdns3_transfer_completed(priv_dev, priv_ep);
} else if ((priv_ep->flags & EP_TDLCHK_EN) &&
priv_ep->pending_tdl) {
/* handle IOT with pending tdl */
cdns3_reprogram_tdl(priv_ep);
}
} else if (priv_ep->dir == USB_DIR_OUT) {
priv_ep->ep_sts_pending |= ep_sts_reg;
} else if (ep_sts_reg & EP_STS_IOT) {
cdns3_transfer_completed(priv_dev, priv_ep);
}
}
/*
* The MD_EXIT interrupt is set when a stream-capable endpoint exits
* the MOVE DATA state of the Bulk IN/OUT stream protocol state machine.
*/
if (priv_ep->dir == USB_DIR_OUT && (ep_sts_reg & EP_STS_MD_EXIT) &&
(priv_ep->ep_sts_pending & EP_STS_IOT) && priv_ep->use_streams) {
priv_ep->ep_sts_pending = 0;
cdns3_transfer_completed(priv_dev, priv_ep);
}
@@ -1274,7 +1664,7 @@ static int cdns3_check_ep_interrupt_proceed(struct cdns3_endpoint *priv_ep)
* WA2: this condition should only be met when
* priv_ep->flags & EP_QUIRK_EXTRA_BUF_DET or
* priv_ep->flags & EP_QUIRK_EXTRA_BUF_EN.
- * In other cases this interrupt will be disabled/
+ * In other cases this interrupt will be disabled.
*/
if (ep_sts_reg & EP_STS_DESCMIS && priv_dev->dev_ver < DEV_VER_V2 &&
!(priv_ep->flags & EP_STALLED))
@@ -1457,6 +1847,9 @@ static irqreturn_t cdns3_device_thread_irq_handler(int irq, void *data)
ret = IRQ_HANDLED;
}
if (priv_dev->dev_ver < DEV_VER_V2 && priv_dev->using_streams)
cdns3_wa2_check_outq_status(priv_dev);
irqend:
writel(~0, &priv_dev->regs->ep_ien);
spin_unlock_irqrestore(&priv_dev->lock, flags);
@@ -1511,6 +1904,27 @@ static int cdns3_ep_onchip_buffer_reserve(struct cdns3_device *priv_dev,
return 0;
}
void cdns3_stream_ep_reconfig(struct cdns3_device *priv_dev,
struct cdns3_endpoint *priv_ep)
{
if (!priv_ep->use_streams || priv_dev->gadget.speed < USB_SPEED_SUPER)
return;
if (priv_dev->dev_ver >= DEV_VER_V3) {
u32 mask = BIT(priv_ep->num + (priv_ep->dir ? 16 : 0));
/*
* Stream capable endpoints are handled by using ep_tdl
* register. Other endpoints use TDL from TRB feature.
*/
cdns3_clear_register_bit(&priv_dev->regs->tdl_from_trb, mask);
}
/* Enable Stream Bit TDL chk and SID chk */
cdns3_set_register_bit(&priv_dev->regs->ep_cfg, EP_CFG_STREAM_EN |
EP_CFG_TDL_CHK | EP_CFG_SID_CHK);
}
void cdns3_configure_dmult(struct cdns3_device *priv_dev,
struct cdns3_endpoint *priv_ep)
{
@@ -1772,6 +2186,7 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
{
struct cdns3_endpoint *priv_ep;
struct cdns3_device *priv_dev;
const struct usb_ss_ep_comp_descriptor *comp_desc;
u32 reg = EP_STS_EN_TRBERREN;
u32 bEndpointAddress;
unsigned long flags;
@@ -1781,6 +2196,7 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
priv_ep = ep_to_cdns3_ep(ep);
priv_dev = priv_ep->cdns3_dev;
comp_desc = priv_ep->endpoint.comp_desc;
if (!ep || !desc || desc->bDescriptorType != USB_DT_ENDPOINT) {
dev_dbg(priv_dev->dev, "usbss: invalid parameters\n");
@@ -1811,6 +2227,24 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
goto exit;
}
bEndpointAddress = priv_ep->num | priv_ep->dir;
cdns3_select_ep(priv_dev, bEndpointAddress);
if (usb_ss_max_streams(comp_desc) && usb_endpoint_xfer_bulk(desc)) {
/*
* Enable stream support (SS mode) related interrupts
* in EP_STS_EN Register
*/
if (priv_dev->gadget.speed >= USB_SPEED_SUPER) {
reg |= EP_STS_EN_IOTEN | EP_STS_EN_PRIMEEEN |
EP_STS_EN_SIDERREN | EP_STS_EN_MD_EXITEN |
EP_STS_EN_STREAMREN;
priv_ep->use_streams = true;
cdns3_stream_ep_reconfig(priv_dev, priv_ep);
priv_dev->using_streams |= true;
}
}
ret = cdns3_allocate_trb_pool(priv_ep);
if (ret)
@@ -1957,6 +2391,7 @@ static int cdns3_gadget_ep_disable(struct usb_ep *ep)
ep->desc = NULL;
priv_ep->flags &= ~EP_ENABLED;
priv_ep->use_streams = false;
spin_unlock_irqrestore(&priv_dev->lock, flags);
@@ -2005,13 +2440,21 @@ static int __cdns3_gadget_ep_queue(struct usb_ep *ep,
list_add_tail(&request->list, &priv_ep->deferred_req_list);
/*
* For a stream-capable endpoint, start the request only when the
* PRIME irq flag is set.
* If hardware endpoint configuration has not been set yet then
* just queue request in deferred list. Transfer will be started in
* cdns3_set_hw_configuration.
*/
- if (priv_dev->hw_configured_flag && !(priv_ep->flags & EP_STALLED) &&
- !(priv_ep->flags & EP_STALL_PENDING))
- cdns3_start_all_request(priv_dev, priv_ep);
+ if (!request->stream_id) {
+ if (priv_dev->hw_configured_flag &&
+ !(priv_ep->flags & EP_STALLED) &&
+ !(priv_ep->flags & EP_STALL_PENDING))
+ cdns3_start_all_request(priv_dev, priv_ep);
+ } else {
+ if (priv_dev->hw_configured_flag && priv_ep->prime_flag)
+ cdns3_start_all_request(priv_dev, priv_ep);
+ }
return 0;
}
@@ -2384,7 +2827,6 @@ static int cdns3_gadget_udc_stop(struct usb_gadget *gadget)
struct cdns3_endpoint *priv_ep;
u32 bEndpointAddress;
struct usb_ep *ep;
- int ret = 0;
int val;
priv_dev->gadget_driver = NULL;
@@ -2408,7 +2850,7 @@ static int cdns3_gadget_udc_stop(struct usb_gadget *gadget)
writel(0, &priv_dev->regs->usb_ien);
writel(USB_CONF_DEVDS, &priv_dev->regs->usb_conf);
- return ret;
+ return 0;
}
static const struct usb_gadget_ops cdns3_gadget_ops = {


@@ -599,6 +599,7 @@ struct cdns3_usb_regs {
#define EP_CMD_TDL_MASK GENMASK(15, 9)
#define EP_CMD_TDL_SET(p) (((p) << 9) & EP_CMD_TDL_MASK)
#define EP_CMD_TDL_GET(p) (((p) & EP_CMD_TDL_MASK) >> 9)
#define EP_CMD_TDL_MAX (EP_CMD_TDL_MASK >> 9)
/* ERDY Stream ID value (used in SS mode). */
#define EP_CMD_ERDY_SID_MASK GENMASK(31, 16)
@@ -969,10 +970,18 @@ struct cdns3_usb_regs {
#define ISO_MAX_INTERVAL 10
#define MAX_TRB_LENGTH BIT(16)
#if TRBS_PER_SEGMENT < 2
#error "Incorrect TRBS_PER_SEGMENT. Minimal Transfer Ring size is 2."
#endif
#define TRBS_PER_STREAM_SEGMENT 2
#if TRBS_PER_STREAM_SEGMENT < 2
#error "Incorrect TRBS_PER_STREAM_SEGMENT. Minimal Transfer Ring size is 2."
#endif
/*
* Only for ISOC endpoints - the maximum number of TRBs is calculated as
* pow(2, bInterval-1) * number of usb requests. It is a limitation made by
@@ -1000,6 +1009,7 @@ struct cdns3_trb {
#define TRB_SIZE (sizeof(struct cdns3_trb))
#define TRB_RING_SIZE (TRB_SIZE * TRBS_PER_SEGMENT)
#define TRB_STREAM_RING_SIZE (TRB_SIZE * TRBS_PER_STREAM_SEGMENT)
#define TRB_ISO_RING_SIZE (TRB_SIZE * TRBS_PER_ISOC_SEGMENT)
#define TRB_CTRL_RING_SIZE (TRB_SIZE * 2)
@@ -1078,7 +1088,7 @@ struct cdns3_trb {
#define CDNS3_ENDPOINTS_MAX_COUNT 32
#define CDNS3_EP_ZLP_BUF_SIZE 1024
- #define CDNS3_EP_BUF_SIZE 2 /* KB */
+ #define CDNS3_EP_BUF_SIZE 4 /* KB */
#define CDNS3_EP_ISO_HS_MULT 3
#define CDNS3_EP_ISO_SS_BURST 3
#define CDNS3_MAX_NUM_DESCMISS_BUF 32
@@ -1109,6 +1119,7 @@ struct cdns3_device;
* @interval: interval between packets used for ISOC endpoint.
* @free_trbs: number of free TRBs in transfer ring
* @num_trbs: number of all TRBs in transfer ring
* @alloc_ring_size: size of the allocated TRB ring
* @pcs: producer cycle state
* @ccs: consumer cycle state
* @enqueue: enqueue index in transfer ring
@@ -1142,6 +1153,7 @@ struct cdns3_endpoint {
#define EP_QUIRK_END_TRANSFER BIT(11)
#define EP_QUIRK_EXTRA_BUF_DET BIT(12)
#define EP_QUIRK_EXTRA_BUF_EN BIT(13)
#define EP_TDLCHK_EN BIT(15)
u32 flags;
struct cdns3_request *descmis_req;
@@ -1153,6 +1165,7 @@ struct cdns3_endpoint {
int free_trbs;
int num_trbs;
int alloc_ring_size;
u8 pcs;
u8 ccs;
int enqueue;
@@ -1163,6 +1176,14 @@ struct cdns3_endpoint {
struct cdns3_trb *wa1_trb;
unsigned int wa1_trb_index;
unsigned int wa1_cycle_bit:1;
/* Stream related */
unsigned int use_streams:1;
unsigned int prime_flag:1;
u32 ep_sts_pending;
u16 last_stream_id;
u16 pending_tdl;
unsigned int stream_sg_idx;
};
/**
@@ -1290,6 +1311,7 @@ struct cdns3_device {
int hw_configured_flag:1;
int wake_up_flag:1;
unsigned status_completion_no_call:1;
unsigned using_streams:1;
int out_mem_is_allocated;
struct work_struct pending_status_wq;
@@ -1310,8 +1332,6 @@ void cdns3_set_hw_configuration(struct cdns3_device *priv_dev);
void cdns3_select_ep(struct cdns3_device *priv_dev, u32 ep);
void cdns3_allow_enable_l1(struct cdns3_device *priv_dev, int enable);
struct usb_request *cdns3_next_request(struct list_head *list);
int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
struct usb_request *request);
void cdns3_rearm_transfer(struct cdns3_endpoint *priv_ep, u8 rearm);
int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep);
u8 cdns3_ep_addr_to_index(u8 ep_addr);


@@ -122,18 +122,24 @@ DECLARE_EVENT_CLASS(cdns3_log_epx_irq,
__string(ep_name, priv_ep->name)
__field(u32, ep_sts)
__field(u32, ep_traddr)
__field(u32, ep_last_sid)
__field(u32, use_streams)
__dynamic_array(char, str, CDNS3_MSG_MAX)
),
TP_fast_assign(
__assign_str(ep_name, priv_ep->name);
__entry->ep_sts = readl(&priv_dev->regs->ep_sts);
__entry->ep_traddr = readl(&priv_dev->regs->ep_traddr);
__entry->ep_last_sid = priv_ep->last_stream_id;
__entry->use_streams = priv_ep->use_streams;
),
- TP_printk("%s, ep_traddr: %08x",
+ TP_printk("%s, ep_traddr: %08x ep_last_sid: %08x use_streams: %d",
cdns3_decode_epx_irq(__get_str(str),
__get_str(ep_name),
__entry->ep_sts),
- __entry->ep_traddr)
+ __entry->ep_traddr,
+ __entry->ep_last_sid,
+ __entry->use_streams)
);
DEFINE_EVENT(cdns3_log_epx_irq, cdns3_epx_irq,
@@ -210,6 +216,7 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__field(int, end_trb)
__field(struct cdns3_trb *, start_trb_addr)
__field(int, flags)
__field(unsigned int, stream_id)
),
TP_fast_assign(
__assign_str(name, req->priv_ep->name);
@@ -225,9 +232,10 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__entry->end_trb = req->end_trb;
__entry->start_trb_addr = req->trb;
__entry->flags = req->flags;
__entry->stream_id = req->request.stream_id;
),
TP_printk("%s: req: %p, req buff %p, length: %u/%u %s%s%s, status: %d,"
- " trb: [start:%d, end:%d: virt addr %pa], flags:%x ",
+ " trb: [start:%d, end:%d: virt addr %pa], flags:%x SID: %u",
__get_str(name), __entry->req, __entry->buf, __entry->actual,
__entry->length,
__entry->zero ? "Z" : "z",
@@ -237,7 +245,8 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__entry->start_trb,
__entry->end_trb,
__entry->start_trb_addr,
- __entry->flags
+ __entry->flags,
+ __entry->stream_id
)
);
@@ -281,6 +290,39 @@ TRACE_EVENT(cdns3_ep0_queue,
__entry->length)
);
DECLARE_EVENT_CLASS(cdns3_stream_split_transfer_len,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req),
TP_STRUCT__entry(
__string(name, req->priv_ep->name)
__field(struct cdns3_request *, req)
__field(unsigned int, length)
__field(unsigned int, actual)
__field(unsigned int, stream_id)
),
TP_fast_assign(
__assign_str(name, req->priv_ep->name);
__entry->req = req;
__entry->length = req->request.length;
__entry->actual = req->request.actual;
__entry->stream_id = req->request.stream_id;
),
TP_printk("%s: req: %p, request length: %u actual length: %u SID: %u",
__get_str(name), __entry->req, __entry->length,
__entry->actual, __entry->stream_id)
);
DEFINE_EVENT(cdns3_stream_split_transfer_len, cdns3_stream_transfer_split,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req)
);
DEFINE_EVENT(cdns3_stream_split_transfer_len,
cdns3_stream_transfer_split_next_part,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req)
);
DECLARE_EVENT_CLASS(cdns3_log_aligned_request,
TP_PROTO(struct cdns3_request *priv_req),
TP_ARGS(priv_req),
@@ -319,6 +361,34 @@ DEFINE_EVENT(cdns3_log_aligned_request, cdns3_prepare_aligned_request,
TP_ARGS(req)
);
DECLARE_EVENT_CLASS(cdns3_log_map_request,
TP_PROTO(struct cdns3_request *priv_req),
TP_ARGS(priv_req),
TP_STRUCT__entry(
__string(name, priv_req->priv_ep->name)
__field(struct usb_request *, req)
__field(void *, buf)
__field(dma_addr_t, dma)
),
TP_fast_assign(
__assign_str(name, priv_req->priv_ep->name);
__entry->req = &priv_req->request;
__entry->buf = priv_req->request.buf;
__entry->dma = priv_req->request.dma;
),
TP_printk("%s: req: %p, req buf %p, dma %p",
__get_str(name), __entry->req, __entry->buf, &__entry->dma
)
);
DEFINE_EVENT(cdns3_log_map_request, cdns3_map_request,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req)
);
DEFINE_EVENT(cdns3_log_map_request, cdns3_mapped_request,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req)
);
DECLARE_EVENT_CLASS(cdns3_log_trb,
TP_PROTO(struct cdns3_endpoint *priv_ep, struct cdns3_trb *trb),
TP_ARGS(priv_ep, trb),
@@ -329,6 +399,7 @@ DECLARE_EVENT_CLASS(cdns3_log_trb,
__field(u32, length)
__field(u32, control)
__field(u32, type)
__field(unsigned int, last_stream_id)
),
TP_fast_assign(
__assign_str(name, priv_ep->name);
@@ -337,8 +408,9 @@ DECLARE_EVENT_CLASS(cdns3_log_trb,
__entry->length = trb->length;
__entry->control = trb->control;
__entry->type = usb_endpoint_type(priv_ep->endpoint.desc);
__entry->last_stream_id = priv_ep->last_stream_id;
),
- TP_printk("%s: trb %p, dma buf: 0x%08x, size: %ld, burst: %d ctrl: 0x%08x (%s%s%s%s%s%s%s)",
+ TP_printk("%s: trb %p, dma buf: 0x%08x, size: %ld, burst: %d ctrl: 0x%08x (%s%s%s%s%s%s%s) SID:%lu LAST_SID:%u",
__get_str(name), __entry->trb, __entry->buffer,
TRB_LEN(__entry->length),
(u8)TRB_BURST_LEN_GET(__entry->length),
@@ -349,7 +421,9 @@ DECLARE_EVENT_CLASS(cdns3_log_trb,
__entry->control & TRB_FIFO_MODE ? "FIFO, " : "",
__entry->control & TRB_CHAIN ? "CHAIN, " : "",
__entry->control & TRB_IOC ? "IOC, " : "",
- TRB_FIELD_TO_TYPE(__entry->control) == TRB_NORMAL ? "Normal" : "LINK"
+ TRB_FIELD_TO_TYPE(__entry->control) == TRB_NORMAL ? "Normal" : "LINK",
+ TRB_FIELD_TO_STREAMID(__entry->control),
+ __entry->last_stream_id
)
);
@@ -398,6 +472,7 @@ DECLARE_EVENT_CLASS(cdns3_log_ep,
__field(unsigned int, maxpacket)
__field(unsigned int, maxpacket_limit)
__field(unsigned int, max_streams)
__field(unsigned int, use_streams)
__field(unsigned int, maxburst)
__field(unsigned int, flags)
__field(unsigned int, dir)
@@ -409,16 +484,18 @@ DECLARE_EVENT_CLASS(cdns3_log_ep,
__entry->maxpacket = priv_ep->endpoint.maxpacket;
__entry->maxpacket_limit = priv_ep->endpoint.maxpacket_limit;
__entry->max_streams = priv_ep->endpoint.max_streams;
__entry->use_streams = priv_ep->use_streams;
__entry->maxburst = priv_ep->endpoint.maxburst;
__entry->flags = priv_ep->flags;
__entry->dir = priv_ep->dir;
__entry->enqueue = priv_ep->enqueue;
__entry->dequeue = priv_ep->dequeue;
),
TP_printk("%s: mps: %d/%d. streams: %d, burst: %d, enq idx: %d, "
"deq idx: %d, flags %s%s%s%s%s%s%s%s, dir: %s",
TP_printk("%s: mps: %d/%d. streams: %d, stream enable: %d, burst: %d, "
"enq idx: %d, deq idx: %d, flags %s%s%s%s%s%s%s%s, dir: %s",
__get_str(name), __entry->maxpacket,
__entry->maxpacket_limit, __entry->max_streams,
__entry->use_streams,
__entry->maxburst, __entry->enqueue,
__entry->dequeue,
__entry->flags & EP_ENABLED ? "EN | " : "",


@@ -7,6 +7,7 @@ config USB_CHIPIDEA
 	select RESET_CONTROLLER
 	select USB_ULPI_BUS
 	select USB_ROLE_SWITCH
+	select USB_TEGRA_PHY if ARCH_TEGRA
 	help
 	  Say Y here if your system has a dual role high speed USB
 	  controller based on ChipIdea silicon IP. It supports:


@@ -302,6 +302,16 @@ static inline enum usb_role ci_role_to_usb_role(struct ci_hdrc *ci)
 	return USB_ROLE_NONE;
 }

+static inline enum ci_role usb_role_to_ci_role(enum usb_role role)
+{
+	if (role == USB_ROLE_HOST)
+		return CI_ROLE_HOST;
+	else if (role == USB_ROLE_DEVICE)
+		return CI_ROLE_GADGET;
+	else
+		return CI_ROLE_END;
+}
+
 /**
  * hw_read_id_reg: reads from a identification register
  * @ci: the controller


@@ -83,13 +83,6 @@ static int tegra_udc_probe(struct platform_device *pdev)
 		return err;
 	}

-	/*
-	 * Tegra's USB PHY driver doesn't implement optional phy_init()
-	 * hook, so we have to power on UDC controller before ChipIdea
-	 * driver initialization kicks in.
-	 */
-	usb_phy_set_suspend(udc->phy, 0);
-
 	/* setup and register ChipIdea HDRC device */
 	udc->data.name = "tegra-udc";
 	udc->data.flags = soc->flags;
@@ -109,7 +102,6 @@ static int tegra_udc_probe(struct platform_device *pdev)
 	return 0;

 fail_power_off:
-	usb_phy_set_suspend(udc->phy, 1);
 	clk_disable_unprepare(udc->clk);
 	return err;
 }
@@ -119,7 +111,6 @@ static int tegra_udc_remove(struct platform_device *pdev)
 	struct tegra_udc *udc = platform_get_drvdata(pdev);

 	ci_hdrc_remove_device(udc->dev);
-	usb_phy_set_suspend(udc->phy, 1);
 	clk_disable_unprepare(udc->clk);

 	return 0;


@@ -618,9 +618,11 @@ static int ci_usb_role_switch_set(struct device *dev, enum usb_role role)
 	struct ci_hdrc *ci = dev_get_drvdata(dev);
 	struct ci_hdrc_cable *cable = NULL;
 	enum usb_role current_role = ci_role_to_usb_role(ci);
+	enum ci_role ci_role = usb_role_to_ci_role(role);
 	unsigned long flags;

-	if (current_role == role)
+	if ((ci_role != CI_ROLE_END && !ci->roles[ci_role]) ||
+	    (current_role == role))
 		return 0;

 	pm_runtime_get_sync(ci->dev);


@@ -20,7 +20,7 @@ static inline void ci_hdrc_host_destroy(struct ci_hdrc *ci)
 }

-static void ci_hdrc_host_driver_init(void)
+static inline void ci_hdrc_host_driver_init(void)
 {
 }


@@ -574,7 +574,7 @@ __acquires(ps->lock)
 	/* Now carefully unlink all the marked pending URBs */
 rescan:
-	list_for_each_entry(as, &ps->async_pending, asynclist) {
+	list_for_each_entry_reverse(as, &ps->async_pending, asynclist) {
 		if (as->bulk_status == AS_UNLINK) {
 			as->bulk_status = 0; /* Only once */
 			urb = as->urb;
@@ -636,7 +636,7 @@ static void destroy_async(struct usb_dev_state *ps, struct list_head *list)
 	spin_lock_irqsave(&ps->lock, flags);
 	while (!list_empty(list)) {
-		as = list_entry(list->next, struct async, asynclist);
+		as = list_last_entry(list, struct async, asynclist);
 		list_del_init(&as->asynclist);
 		urb = as->urb;
 		usb_get_urb(urb);


@@ -288,14 +288,9 @@ static void dwc2_handle_conn_id_status_change_intr(struct dwc2_hsotg *hsotg)
 	/*
 	 * Need to schedule a work, as there are possible DELAY function calls.
-	 * Release lock before scheduling workq as it holds spinlock during
-	 * scheduling.
 	 */
-	if (hsotg->wq_otg) {
-		spin_unlock(&hsotg->lock);
+	if (hsotg->wq_otg)
 		queue_work(hsotg->wq_otg, &hsotg->wf_otg);
-		spin_lock(&hsotg->lock);
-	}
 }

 /**
/**


@@ -183,6 +183,7 @@ DEFINE_SHOW_ATTRIBUTE(state);
 static int fifo_show(struct seq_file *seq, void *v)
 {
 	struct dwc2_hsotg *hsotg = seq->private;
+	int fifo_count = dwc2_hsotg_tx_fifo_count(hsotg);
 	u32 val;
 	int idx;
@@ -196,7 +197,7 @@ static int fifo_show(struct seq_file *seq, void *v)
 	seq_puts(seq, "\nPeriodic TXFIFOs:\n");
-	for (idx = 1; idx < hsotg->num_of_eps; idx++) {
+	for (idx = 1; idx <= fifo_count; idx++) {
 		val = dwc2_readl(hsotg, DPTXFSIZN(idx));
 		seq_printf(seq, "\tDPTXFIFO%2d: Size %d, Start 0x%08x\n", idx,


@@ -3784,15 +3784,26 @@ irq_retry:
 		for (idx = 1; idx < hsotg->num_of_eps; idx++) {
 			hs_ep = hsotg->eps_out[idx];
 			/* Proceed only unmasked ISOC EPs */
-			if ((BIT(idx) & ~daintmsk) || !hs_ep->isochronous)
+			if (BIT(idx) & ~daintmsk)
 				continue;

 			epctrl = dwc2_readl(hsotg, DOEPCTL(idx));

-			if (epctrl & DXEPCTL_EPENA) {
+			//ISOC Ep's only
+			if ((epctrl & DXEPCTL_EPENA) && hs_ep->isochronous) {
 				epctrl |= DXEPCTL_SNAK;
 				epctrl |= DXEPCTL_EPDIS;
 				dwc2_writel(hsotg, epctrl, DOEPCTL(idx));
+				continue;
+			}
+
+			//Non-ISOC EP's
+			if (hs_ep->halted) {
+				if (!(epctrl & DXEPCTL_EPENA))
+					epctrl |= DXEPCTL_EPENA;
+				epctrl |= DXEPCTL_EPDIS;
+				epctrl |= DXEPCTL_STALL;
+				dwc2_writel(hsotg, epctrl, DOEPCTL(idx));
 			}
 		}
@@ -4056,11 +4067,12 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
 	 * a unique tx-fifo even if it is non-periodic.
 	 */
 	if (dir_in && hsotg->dedicated_fifos) {
+		unsigned fifo_count = dwc2_hsotg_tx_fifo_count(hsotg);
 		u32 fifo_index = 0;
 		u32 fifo_size = UINT_MAX;

 		size = hs_ep->ep.maxpacket * hs_ep->mc;
-		for (i = 1; i < hsotg->num_of_eps; ++i) {
+		for (i = 1; i <= fifo_count; ++i) {
 			if (hsotg->fifo_map & (1 << i))
 				continue;
 			val = dwc2_readl(hsotg, DPTXFSIZN(i));
@@ -4310,19 +4322,20 @@ static int dwc2_hsotg_ep_sethalt(struct usb_ep *ep, int value, bool now)
 		epctl = dwc2_readl(hs, epreg);

 		if (value) {
 			epctl |= DXEPCTL_STALL;
+			if (!(dwc2_readl(hs, GINTSTS) & GINTSTS_GOUTNAKEFF))
+				dwc2_set_bit(hs, DCTL, DCTL_SGOUTNAK);
+			// STALL bit will be set in GOUTNAKEFF interrupt handler
 		} else {
 			epctl &= ~DXEPCTL_STALL;
 			xfertype = epctl & DXEPCTL_EPTYPE_MASK;
 			if (xfertype == DXEPCTL_EPTYPE_BULK ||
 			    xfertype == DXEPCTL_EPTYPE_INTERRUPT)
 				epctl |= DXEPCTL_SETD0PID;
+			dwc2_writel(hs, epctl, epreg);
 		}
-		dwc2_writel(hs, epctl, epreg);
 	}

 	hs_ep->halted = value;

 	return 0;
 }


@@ -2824,7 +2824,7 @@ static int dwc2_queue_transaction(struct dwc2_hsotg *hsotg,
 		list_move_tail(&chan->split_order_list_entry,
 			       &hsotg->split_order);

-	if (hsotg->params.host_dma) {
+	if (hsotg->params.host_dma && chan->qh) {
 		if (hsotg->params.dma_desc_enable) {
 			if (!chan->xfer_started ||
 			    chan->ep_type == USB_ENDPOINT_XFER_ISOC) {


@@ -1246,6 +1246,9 @@ static void dwc3_core_exit_mode(struct dwc3 *dwc)
 		/* do nothing */
 		break;
 	}
+
+	/* de-assert DRVVBUS for HOST and OTG mode */
+	dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
 }

 static void dwc3_get_properties(struct dwc3 *dwc)

(Some files were not shown because too many files have changed in this diff.)