USB/Thunderbolt changes for 6.7-rc1

Here is the "big" set of USB and Thunderbolt changes for 6.7-rc1.
 Nothing really major in here, just lots of constant development for new
 hardware.  Included in here are:
   - Thunderbolt (i.e. USB4) fixes for reported issues and support for
     new hardware types and devices
   - USB typec additions of new drivers and cleanups for some existing
     ones
   - xhci cleanups and expanded tracing support and some platform
     specific updates
   - USB "La Jolla Cove Adapter (LJCA)" support added, and the gpio, spi,
     and i2c drivers for that type of device (all acked by the respective
     subsystem maintainers.)
   - lots of USB gadget driver updates and cleanups
   - new USB dwc3 platforms supported, as well as other dwc3 fixes and
     cleanups
   - USB chipidea driver updates
   - other smaller driver cleanups and additions, full details in the
     shortlog
 
 All of these have been in the linux-next tree for a while with no
 reported problems, EXCEPT for some merge conflicts that you will run
 into in your tree.  2 of them are in device-tree files, which will be
 trivial to resolve (accept both sides), and the last in the
 drivers/gpio/gpio-ljca.c file, in the remove callback, resolution should
 be pretty trivial (take the version in this branch), see here:
 	https://lore.kernel.org/all/20231016134159.11d8f849@canb.auug.org.au/
 for details, or I can provide a resolved merge point if needed.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Merge tag 'usb-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB/Thunderbolt updates from Greg KH:
 "Here is the "big" set of USB and Thunderbolt changes for 6.7-rc1.
  Nothing really major in here, just lots of constant development for
  new hardware. Included in here are:

   - Thunderbolt (i.e. USB4) fixes for reported issues and support for
     new hardware types and devices

   - USB typec additions of new drivers and cleanups for some existing
     ones

   - xhci cleanups and expanded tracing support and some platform
     specific updates

   - USB "La Jolla Cove Adapter (LJCA)" support added, and the gpio,
     spi, and i2c drivers for that type of device (all acked by the
     respective subsystem maintainers.)

   - lots of USB gadget driver updates and cleanups

   - new USB dwc3 platforms supported, as well as other dwc3 fixes and
     cleanups

   - USB chipidea driver updates

   - other smaller driver cleanups and additions, full details in the
     shortlog

  All of these have been in the linux-next tree for a while with no
  reported problems"

* tag 'usb-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (167 commits)
  usb: gadget: uvc: Add missing initialization of ssp config descriptor
  usb: storage: set 1.50 as the lower bcdDevice for older "Super Top" compatibility
  usb: raw-gadget: report suspend, resume, reset, and disconnect events
  usb: raw-gadget: don't disable device if usb_ep_queue fails
  usb: raw-gadget: properly handle interrupted requests
  usb:cdnsp: remove TRB_FLUSH_ENDPOINT command
  usb: gadget: aspeed_udc: Convert to platform remove callback returning void
  dt-bindings: usb: fsa4480: Add compatible for OCP96011
  usb: typec: fsa4480: Add support to swap SBU orientation
  dt-bindings: usb: fsa4480: Add data-lanes property to endpoint
  usb: typec: tcpm: Fix NULL pointer dereference in tcpm_pd_svdm()
  Revert "dt-bindings: usb: Add bindings for multiport properties on DWC3 controller"
  Revert "dt-bindings: usb: qcom,dwc3: Add bindings for SC8280 Multiport"
  thunderbolt: Fix one kernel-doc comment
  usb: gadget: f_ncm: Always set current gadget in ncm_bind()
  usb: core: Remove duplicated check in usb_hub_create_port_device
  usb: typec: tcpm: Add additional checks for contaminant
  arm64: dts: rockchip: rk3588s: Add USB3 host controller
  usb: dwc3: add optional PHY interface clocks
  dt-bindings: usb: add rk3588 compatible to rockchip,dwc3
  ...
This commit is contained in:
Linus Torvalds 2023-11-03 16:00:42 -10:00
commit 2c40c1c6ad
150 changed files with 7061 additions and 1327 deletions


@ -35,4 +35,6 @@ Description:
req_number the number of pre-allocated requests
for both capture and playback
function_name name of the interface
c_terminal_type code of the capture terminal type
p_terminal_type code of the playback terminal type
===================== =======================================


@ -313,6 +313,15 @@ Description:
Inter-Chip SSIC devices support asymmetric lanes up to 4 lanes per
direction. Devices before USB 3.2 are single lane (tx_lanes = 1)
What: /sys/bus/usb/devices/.../typec
Date: November 2023
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
Symlink to the USB Type-C partner device. USB Type-C partner
represents the component that communicates over the
Configuration Channel (CC signal on USB Type-C connectors and
cables) with the local port.
What: /sys/bus/usb/devices/usbX/bAlternateSetting
Description:
The current interface alternate setting number, in decimal.


@ -124,6 +124,13 @@ Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
The voltage the supply supports in millivolts.
What: /sys/class/usb_power_delivery/.../source-capabilities/<position>:fixed_supply/peak_current
Date: October 2023
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
This file shows the value of the Fixed Power Source Peak Current
Capability field.
What: /sys/class/usb_power_delivery/.../source-capabilities/<position>:fixed_supply/maximum_current
Date: May 2022
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>


@ -20,6 +20,7 @@ properties:
- qcom,qcm2290-qmp-usb3-phy
- qcom,sa8775p-qmp-usb3-uni-phy
- qcom,sc8280xp-qmp-usb3-uni-phy
- qcom,sdx75-qmp-usb3-uni-phy
- qcom,sm6115-qmp-usb3-phy
reg:
@ -75,6 +76,7 @@ allOf:
contains:
enum:
- qcom,ipq9574-qmp-usb3-phy
- qcom,sdx75-qmp-usb3-uni-phy
then:
properties:
clock-names:


@ -14,7 +14,12 @@ description:
properties:
compatible:
const: qcom,sm8550-snps-eusb2-phy
oneOf:
- items:
- enum:
- qcom,sdx75-snps-eusb2-phy
- const: qcom,sm8550-snps-eusb2-phy
- const: qcom,sm8550-snps-eusb2-phy
reg:
maxItems: 1


@ -35,6 +35,12 @@ properties:
'#size-cells':
const: 0
orientation-gpios:
description: Array of input gpios for the Type-C connector orientation indication.
The GPIO indication is used to detect the orientation of the Type-C connector.
The array should contain a gpio entry for each PMIC Glink connector, in reg order.
It is defined that GPIO active level means "CC2" or Reversed/Flipped orientation.
patternProperties:
'^connector@\d$':
$ref: /schemas/connector/usb-connector.yaml#
@ -44,6 +50,19 @@ patternProperties:
required:
- compatible
allOf:
- if:
not:
properties:
compatible:
contains:
enum:
- qcom,sm8450-pmic-glink
- qcom,sm8550-pmic-glink
then:
properties:
orientation-gpios: false
additionalProperties: false
examples:


@ -15,7 +15,9 @@ properties:
oneOf:
- enum:
- chipidea,usb2
- fsl,imx27-usb
- lsi,zevio-usb
- nuvoton,npcm750-udc
- nvidia,tegra20-ehci
- nvidia,tegra20-udc
- nvidia,tegra30-ehci
@ -66,6 +68,10 @@ properties:
- items:
- const: xlnx,zynq-usb-2.20a
- const: chipidea,usb2
- items:
- enum:
- nuvoton,npcm845-udc
- const: nuvoton,npcm750-udc
reg:
minItems: 1
@ -388,6 +394,7 @@ allOf:
enum:
- chipidea,usb2
- lsi,zevio-usb
- nuvoton,npcm750-udc
- nvidia,tegra20-udc
- nvidia,tegra30-udc
- nvidia,tegra114-udc


@ -11,8 +11,12 @@ maintainers:
properties:
compatible:
enum:
- fcs,fsa4480
oneOf:
- const: fcs,fsa4480
- items:
- enum:
- ocs,ocp96011
- const: fcs,fsa4480
reg:
maxItems: 1
@ -32,10 +36,43 @@ properties:
type: boolean
port:
$ref: /schemas/graph.yaml#/properties/port
$ref: /schemas/graph.yaml#/$defs/port-base
description:
A port node to link the FSA4480 to a TypeC controller for the purpose of
handling altmode muxing and orientation switching.
unevaluatedProperties: false
properties:
endpoint:
$ref: /schemas/graph.yaml#/$defs/endpoint-base
unevaluatedProperties: false
properties:
data-lanes:
$ref: /schemas/types.yaml#/definitions/uint32-array
description:
Specifies how the AUX+/- lines are connected to SBU1/2.
oneOf:
- items:
- const: 0
- const: 1
description: |
Default AUX/SBU layout (FSA4480)
- AUX+ connected to SBU2
- AUX- connected to SBU1
Default AUX/SBU layout (OCP96011)
- AUX+ connected to SBU1
- AUX- connected to SBU2
- items:
- const: 1
- const: 0
description: |
Swapped AUX/SBU layout (FSA4480)
- AUX+ connected to SBU1
- AUX- connected to SBU2
Swapped AUX/SBU layout (OCP96011)
- AUX+ connected to SBU2
- AUX- connected to SBU1
required:
- compatible


@ -4,7 +4,7 @@
$id: http://devicetree.org/schemas/usb/genesys,gl850g.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Genesys Logic GL850G USB 2.0 hub controller
title: Genesys Logic USB hub controller
maintainers:
- Icenowy Zheng <uwu@icenowy.me>
@ -18,6 +18,7 @@ properties:
- usb5e3,608
- usb5e3,610
- usb5e3,620
- usb5e3,626
reg: true


@ -19,6 +19,7 @@ properties:
compatible:
items:
- enum:
- nxp,cbdtu02043
- onnn,fsusb43l10x
- pericom,pi3usb102
- const: gpio-sbu-mux
@ -50,7 +51,6 @@ required:
- compatible
- enable-gpios
- select-gpios
- mode-switch
- orientation-switch
- port


@ -0,0 +1,94 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/nxp,ptn36502.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: NXP PTN36502 Type-C USB 3.1 Gen 1 and DisplayPort v1.2 combo redriver
maintainers:
- Luca Weiss <luca.weiss@fairphone.com>
properties:
compatible:
enum:
- nxp,ptn36502
reg:
maxItems: 1
vdd18-supply:
description: Power supply for VDD18 pin
retimer-switch:
description: Flag the port as possible handle of SuperSpeed signals retiming
type: boolean
orientation-switch:
description: Flag the port as possible handler of orientation switching
type: boolean
ports:
$ref: /schemas/graph.yaml#/properties/ports
properties:
port@0:
$ref: /schemas/graph.yaml#/properties/port
description: Super Speed (SS) Output endpoint to the Type-C connector
port@1:
$ref: /schemas/graph.yaml#/properties/port
description: Super Speed (SS) Input endpoint from the Super-Speed PHY
port@2:
$ref: /schemas/graph.yaml#/properties/port
description:
Sideband Use (SBU) AUX lines endpoint to the Type-C connector for the purpose of
handling altmode muxing and orientation switching.
required:
- compatible
- reg
additionalProperties: false
examples:
- |
i2c {
#address-cells = <1>;
#size-cells = <0>;
typec-mux@1a {
compatible = "nxp,ptn36502";
reg = <0x1a>;
vdd18-supply = <&usb_redrive_1v8>;
retimer-switch;
orientation-switch;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
usb_con_ss: endpoint {
remote-endpoint = <&typec_con_ss>;
};
};
port@1 {
reg = <1>;
phy_con_ss: endpoint {
remote-endpoint = <&usb_phy_ss>;
};
};
port@2 {
reg = <2>;
usb_con_sbu: endpoint {
remote-endpoint = <&typec_dp_aux>;
};
};
};
};
};
...


@ -14,6 +14,7 @@ properties:
items:
- enum:
- qcom,ipq4019-dwc3
- qcom,ipq5018-dwc3
- qcom,ipq5332-dwc3
- qcom,ipq6018-dwc3
- qcom,ipq8064-dwc3
@ -34,6 +35,7 @@ properties:
- qcom,sdm845-dwc3
- qcom,sdx55-dwc3
- qcom,sdx65-dwc3
- qcom,sdx75-dwc3
- qcom,sm4250-dwc3
- qcom,sm6115-dwc3
- qcom,sm6125-dwc3
@ -180,6 +182,8 @@ allOf:
- qcom,sdm670-dwc3
- qcom,sdm845-dwc3
- qcom,sdx55-dwc3
- qcom,sdx65-dwc3
- qcom,sdx75-dwc3
- qcom,sm6350-dwc3
then:
properties:
@ -238,6 +242,7 @@ allOf:
compatible:
contains:
enum:
- qcom,ipq5018-dwc3
- qcom,ipq5332-dwc3
- qcom,msm8994-dwc3
- qcom,qcs404-dwc3
@ -363,6 +368,7 @@ allOf:
- qcom,sdm845-dwc3
- qcom,sdx55-dwc3
- qcom,sdx65-dwc3
- qcom,sdx75-dwc3
- qcom,sm4250-dwc3
- qcom,sm6125-dwc3
- qcom,sm6350-dwc3
@ -411,6 +417,7 @@ allOf:
compatible:
contains:
enum:
- qcom,ipq5018-dwc3
- qcom,ipq5332-dwc3
- qcom,sdm660-dwc3
then:


@ -0,0 +1,80 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
# Copyright 2023 Realtek Semiconductor Corporation
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/realtek,rtd-dwc3.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Realtek DWC3 USB SoC Controller Glue
maintainers:
- Stanley Chang <stanley_chang@realtek.com>
description:
The Realtek DHC SoC embeds a DWC3 USB IP Core configured for USB 2.0
and USB 3.0 in host or dual-role mode.
properties:
compatible:
items:
- enum:
- realtek,rtd1295-dwc3
- realtek,rtd1315e-dwc3
- realtek,rtd1319-dwc3
- realtek,rtd1319d-dwc3
- realtek,rtd1395-dwc3
- realtek,rtd1619-dwc3
- realtek,rtd1619b-dwc3
- const: realtek,rtd-dwc3
reg:
items:
- description: Address and length of register set for wrapper of dwc3 core.
- description: Address and length of register set for pm control.
'#address-cells':
const: 1
'#size-cells':
const: 1
ranges: true
patternProperties:
"^usb@[0-9a-f]+$":
$ref: snps,dwc3.yaml#
description: Required child node
required:
- compatible
- reg
- "#address-cells"
- "#size-cells"
- ranges
additionalProperties: false
examples:
- |
usb@98013e00 {
compatible = "realtek,rtd1319d-dwc3", "realtek,rtd-dwc3";
reg = <0x98013e00 0x140>, <0x98013f60 0x4>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
usb@98050000 {
compatible = "snps,dwc3";
reg = <0x98050000 0x9000>;
interrupts = <0 94 4>;
phys = <&usb2phy &usb3phy>;
phy-names = "usb2-phy", "usb3-phy";
dr_mode = "otg";
usb-role-switch;
role-switch-default-mode = "host";
snps,dis_u2_susphy_quirk;
snps,parkmode-disable-ss-quirk;
snps,parkmode-disable-hs-quirk;
maximum-speed = "high-speed";
};
};


@ -20,9 +20,6 @@ description:
Type-C PHY
Documentation/devicetree/bindings/phy/phy-rockchip-typec.txt
allOf:
- $ref: snps,dwc3.yaml#
select:
properties:
compatible:
@ -30,6 +27,7 @@ select:
enum:
- rockchip,rk3328-dwc3
- rockchip,rk3568-dwc3
- rockchip,rk3588-dwc3
required:
- compatible
@ -39,6 +37,7 @@ properties:
- enum:
- rockchip,rk3328-dwc3
- rockchip,rk3568-dwc3
- rockchip,rk3588-dwc3
- const: snps,dwc3
reg:
@ -58,7 +57,9 @@ properties:
Master/Core clock, must to be >= 62.5 MHz for SS
operation and >= 30MHz for HS operation
- description:
Controller grf clock
Controller grf clock OR UTMI clock
- description:
PIPE clock
clock-names:
minItems: 3
@ -66,7 +67,10 @@ properties:
- const: ref_clk
- const: suspend_clk
- const: bus_clk
- const: grf_clk
- enum:
- grf_clk
- utmi
- const: pipe
power-domains:
maxItems: 1
@ -86,6 +90,52 @@ required:
- clocks
- clock-names
allOf:
- $ref: snps,dwc3.yaml#
- if:
properties:
compatible:
contains:
const: rockchip,rk3328-dwc3
then:
properties:
clocks:
minItems: 3
maxItems: 4
clock-names:
minItems: 3
items:
- const: ref_clk
- const: suspend_clk
- const: bus_clk
- const: grf_clk
- if:
properties:
compatible:
contains:
const: rockchip,rk3568-dwc3
then:
properties:
clocks:
maxItems: 3
clock-names:
maxItems: 3
- if:
properties:
compatible:
contains:
const: rockchip,rk3588-dwc3
then:
properties:
clock-names:
minItems: 3
items:
- const: ref_clk
- const: suspend_clk
- const: bus_clk
- const: utmi
- const: pipe
examples:
- |
#include <dt-bindings/clock/rk3328-cru.h>


@ -310,6 +310,62 @@ properties:
maximum: 62
deprecated: true
snps,rx-thr-num-pkt:
description:
USB RX packet threshold count. In host mode, this field specifies
the space that must be available in the RX FIFO before the core can
start the corresponding USB RX transaction (burst).
In device mode, this field specifies the space that must be
available in the RX FIFO before the core can send ERDY for a
flow-controlled endpoint. It is only used for SuperSpeed.
The valid values for this field are from 1 to 15. (DWC3 SuperSpeed
USB 3.0 Controller Databook)
$ref: /schemas/types.yaml#/definitions/uint8
minimum: 1
maximum: 15
snps,rx-max-burst:
description:
Max USB RX burst size. In host mode, this field specifies the
Maximum Bulk IN burst the DWC_usb3 core can perform. When the system
bus is slower than the USB, RX FIFO can overrun during a long burst.
You can program a smaller value to this field to limit the RX burst
size that the core can perform. It only applies to SS Bulk,
Isochronous, and Interrupt IN endpoints in the host mode.
In device mode, this field specifies the NUMP value that is sent in
ERDY for an OUT endpoint.
The valid values for this field are from 1 to 16. (DWC3 SuperSpeed
USB 3.0 Controller Databook)
$ref: /schemas/types.yaml#/definitions/uint8
minimum: 1
maximum: 16
snps,tx-thr-num-pkt:
description:
USB TX packet threshold count. This field specifies the number of
packets that must be in the TXFIFO before the core can start
transmission for the corresponding USB transaction (burst).
This count is valid in both host and device modes. It is only used
for SuperSpeed operation.
Valid values are from 1 to 15. (DWC3 SuperSpeed USB 3.0 Controller
Databook)
$ref: /schemas/types.yaml#/definitions/uint8
minimum: 1
maximum: 15
snps,tx-max-burst:
description:
Max USB TX burst size. When the system bus is slower than the USB,
TX FIFO can underrun during a long burst. Program a smaller value
to this field to limit the TX burst size that the core can execute.
In Host mode, it only applies to SS Bulk, Isochronous, and Interrupt
OUT endpoints. This value is not used in device mode.
Valid values are from 1 to 16. (DWC3 SuperSpeed USB 3.0 Controller
Databook)
$ref: /schemas/types.yaml#/definitions/uint8
minimum: 1
maximum: 16
snps,rx-thr-num-pkt-prd:
description:
Periodic ESS RX packet threshold count (host mode only). Set this and


@ -20,8 +20,23 @@ properties:
enum:
- ti,tps6598x
- apple,cd321x
- ti,tps25750
reg:
maxItems: 1
minItems: 1
items:
- description: main PD controller address
- description: |
I2C slave address field in PBMs input data
which is used as the device address when writing the
patch for TPS25750.
The patch address can be any value except 0x00, 0x20,
0x21, 0x22, and 0x23
reg-names:
items:
- const: main
- const: patch-address
wakeup-source: true
@ -35,10 +50,42 @@ properties:
connector:
$ref: /schemas/connector/usb-connector.yaml#
firmware-name:
description: |
Should contain the name of the default patch binary
file located on the firmware search path which is
used to switch the controller into APP mode.
This is used when tps25750 doesn't have an EEPROM
connected to it.
maxItems: 1
required:
- compatible
- reg
allOf:
- if:
properties:
compatible:
contains:
const: ti,tps25750
then:
properties:
reg:
maxItems: 2
connector:
required:
- data-role
required:
- connector
- reg-names
else:
properties:
reg:
maxItems: 1
additionalProperties: false
examples:
@ -71,4 +118,36 @@ examples:
};
};
};
- |
#include <dt-bindings/interrupt-controller/irq.h>
i2c {
#address-cells = <1>;
#size-cells = <0>;
typec@21 {
compatible = "ti,tps25750";
reg = <0x21>, <0x0f>;
reg-names = "main", "patch-address";
interrupt-parent = <&msmgpio>;
interrupts = <100 IRQ_TYPE_LEVEL_LOW>;
interrupt-names = "irq";
firmware-name = "tps25750.bin";
pinctrl-names = "default";
pinctrl-0 = <&typec_pins>;
typec_con0: connector {
compatible = "usb-c-connector";
label = "USB-C";
data-role = "dual";
port {
typec_ep0: endpoint {
remote-endpoint = <&otg_ep>;
};
};
};
};
};
...


@ -37,7 +37,6 @@ properties:
required:
- compatible
- reg
- reset-gpios
- vdd-supply
- peer-hub


@ -93,44 +93,18 @@ DMA address space of the device. However, most buffers passed to your
driver can safely be used with such DMA mapping. (See the first section
of Documentation/core-api/dma-api-howto.rst, titled "What memory is DMA-able?")
- When you're using scatterlists, you can map everything at once. On some
systems, this kicks in an IOMMU and turns the scatterlists into single
DMA transactions::
- Once you have scatterlists mapped for the USB controller, you can use the
  new ``usb_sg_*()`` calls, which turn the scatterlists into URBs::
int usb_buffer_map_sg (struct usb_device *dev, unsigned pipe,
struct scatterlist *sg, int nents);
int usb_sg_init(struct usb_sg_request *io, struct usb_device *dev,
unsigned pipe, unsigned period, struct scatterlist *sg,
int nents, size_t length, gfp_t mem_flags);
void usb_buffer_dmasync_sg (struct usb_device *dev, unsigned pipe,
struct scatterlist *sg, int n_hw_ents);
void usb_sg_wait(struct usb_sg_request *io);
void usb_buffer_unmap_sg (struct usb_device *dev, unsigned pipe,
struct scatterlist *sg, int n_hw_ents);
void usb_sg_cancel(struct usb_sg_request *io);
It's probably easier to use the new ``usb_sg_*()`` calls, which do the DMA
mapping and apply other tweaks to make scatterlist i/o be fast.
- Some drivers may prefer to work with the model that they're mapping large
buffers, synchronizing their safe re-use. (If there's no re-use, then let
usbcore do the map/unmap.) Large periodic transfers make good examples
here, since it's cheaper to just synchronize the buffer than to unmap it
each time an urb completes and then re-map it on during resubmission.
These calls all work with initialized urbs: ``urb->dev``, ``urb->pipe``,
``urb->transfer_buffer``, and ``urb->transfer_buffer_length`` must all be
valid when these calls are used (``urb->setup_packet`` must be valid too
if urb is a control request)::
struct urb *usb_buffer_map (struct urb *urb);
void usb_buffer_dmasync (struct urb *urb);
void usb_buffer_unmap (struct urb *urb);
The calls manage ``urb->transfer_dma`` for you, and set
``URB_NO_TRANSFER_DMA_MAP`` so that usbcore won't map or unmap the buffer.
They cannot be used for setup_packet buffers in control requests.
Note that several of those interfaces are currently commented out, since
they don't have current users. See the source code. Other than the dmasync
calls (where the underlying DMA primitives have changed), most of them can
easily be commented back in if you want to use them.
When the USB controller doesn't support DMA, ``usb_sg_init()`` will try to
submit the URBs via PIO, as long as the pages in the scatterlist are not in
highmem; that is very rare on modern architectures.
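As a sketch of the flow above (kernel context, not a standalone program; ``dev``, ``pipe``, ``sg``, and ``nents`` are assumed to have been set up by the caller, and error handling is trimmed)::

	struct usb_sg_request io;
	int ret;

	/* build URBs covering the whole mapped scatterlist; period 0 = bulk */
	ret = usb_sg_init(&io, dev, pipe, 0, sg, nents, 0, GFP_KERNEL);
	if (ret)
		return ret;

	usb_sg_wait(&io);	/* blocks until all URBs complete */
	ret = io.status;	/* 0 on success; usb_sg_cancel() aborts early */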


@ -755,6 +755,8 @@ The uac2 function provides these attributes in its function directory:
req_number the number of pre-allocated request for both capture
and playback
function_name name of the interface
c_terminal_type code of the capture terminal type
p_terminal_type code of the playback terminal type
================ ====================================================
The attributes have sane default values.


@ -59,6 +59,7 @@
compatible = "qcom,sm8550-pmic-glink", "qcom,pmic-glink";
#address-cells = <1>;
#size-cells = <0>;
orientation-gpios = <&tlmm 11 GPIO_ACTIVE_HIGH>;
connector@0 {
compatible = "usb-c-connector";


@ -77,6 +77,7 @@
compatible = "qcom,sm8550-pmic-glink", "qcom,pmic-glink";
#address-cells = <1>;
#size-cells = <0>;
orientation-gpios = <&tlmm 11 GPIO_ACTIVE_HIGH>;
connector@0 {
compatible = "usb-c-connector";


@ -443,6 +443,27 @@
status = "disabled";
};
usb_host2_xhci: usb@fcd00000 {
compatible = "rockchip,rk3588-dwc3", "snps,dwc3";
reg = <0x0 0xfcd00000 0x0 0x400000>;
interrupts = <GIC_SPI 222 IRQ_TYPE_LEVEL_HIGH 0>;
clocks = <&cru REF_CLK_USB3OTG2>, <&cru SUSPEND_CLK_USB3OTG2>,
<&cru ACLK_USB3OTG2>, <&cru CLK_UTMI_OTG2>,
<&cru CLK_PIPEPHY2_PIPE_U3_G>;
clock-names = "ref_clk", "suspend_clk", "bus_clk", "utmi", "pipe";
dr_mode = "host";
phys = <&combphy2_psu PHY_TYPE_USB3>;
phy-names = "usb3-phy";
phy_type = "utmi_wide";
resets = <&cru SRST_A_USB3OTG2>;
snps,dis_enblslpm_quirk;
snps,dis-u2-freeclk-exists-quirk;
snps,dis-del-phy-power-chg-quirk;
snps,dis-tx-ipgap-linecheck-quirk;
snps,dis_rxdet_inp3_quirk;
status = "disabled";
};
pmu1grf: syscon@fd58a000 {
compatible = "rockchip,rk3588-pmugrf", "syscon", "simple-mfd";
reg = <0x0 0xfd58a000 0x0 0x10000>;


@ -1312,9 +1312,9 @@ config GPIO_KEMPLD
config GPIO_LJCA
tristate "INTEL La Jolla Cove Adapter GPIO support"
depends on MFD_LJCA
depends on USB_LJCA
select GPIOLIB_IRQCHIP
default MFD_LJCA
default USB_LJCA
help
Select this option to enable GPIO driver for the INTEL
La Jolla Cove Adapter (LJCA) board.


@ -6,6 +6,7 @@
*/
#include <linux/acpi.h>
#include <linux/auxiliary_bus.h>
#include <linux/bitfield.h>
#include <linux/bitops.h>
#include <linux/dev_printk.h>
@ -13,19 +14,18 @@
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/mfd/ljca.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/usb/ljca.h>
/* GPIO commands */
#define LJCA_GPIO_CONFIG 1
#define LJCA_GPIO_READ 2
#define LJCA_GPIO_WRITE 3
#define LJCA_GPIO_INT_EVENT 4
#define LJCA_GPIO_INT_MASK 5
#define LJCA_GPIO_INT_UNMASK 6
#define LJCA_GPIO_CONFIG 1
#define LJCA_GPIO_READ 2
#define LJCA_GPIO_WRITE 3
#define LJCA_GPIO_INT_EVENT 4
#define LJCA_GPIO_INT_MASK 5
#define LJCA_GPIO_INT_UNMASK 6
#define LJCA_GPIO_CONF_DISABLE BIT(0)
#define LJCA_GPIO_CONF_INPUT BIT(1)
@ -36,45 +36,49 @@
#define LJCA_GPIO_CONF_INTERRUPT BIT(6)
#define LJCA_GPIO_INT_TYPE BIT(7)
#define LJCA_GPIO_CONF_EDGE FIELD_PREP(LJCA_GPIO_INT_TYPE, 1)
#define LJCA_GPIO_CONF_LEVEL FIELD_PREP(LJCA_GPIO_INT_TYPE, 0)
#define LJCA_GPIO_CONF_EDGE FIELD_PREP(LJCA_GPIO_INT_TYPE, 1)
#define LJCA_GPIO_CONF_LEVEL FIELD_PREP(LJCA_GPIO_INT_TYPE, 0)
/* Intentional overlap with PULLUP / PULLDOWN */
#define LJCA_GPIO_CONF_SET BIT(3)
#define LJCA_GPIO_CONF_CLR BIT(4)
#define LJCA_GPIO_CONF_SET BIT(3)
#define LJCA_GPIO_CONF_CLR BIT(4)
struct gpio_op {
#define LJCA_GPIO_BUF_SIZE 60u
struct ljca_gpio_op {
u8 index;
u8 value;
} __packed;
struct gpio_packet {
struct ljca_gpio_packet {
u8 num;
struct gpio_op item[];
struct ljca_gpio_op item[] __counted_by(num);
} __packed;
#define LJCA_GPIO_BUF_SIZE 60
struct ljca_gpio_dev {
struct platform_device *pdev;
struct ljca_client *ljca;
struct gpio_chip gc;
struct ljca_gpio_info *gpio_info;
DECLARE_BITMAP(unmasked_irqs, LJCA_MAX_GPIO_NUM);
DECLARE_BITMAP(enabled_irqs, LJCA_MAX_GPIO_NUM);
DECLARE_BITMAP(reenable_irqs, LJCA_MAX_GPIO_NUM);
DECLARE_BITMAP(output_enabled, LJCA_MAX_GPIO_NUM);
u8 *connect_mode;
/* mutex to protect irq bus */
/* protect irq bus */
struct mutex irq_lock;
struct work_struct work;
/* lock to protect package transfer to Hardware */
/* protect package transfer to hardware */
struct mutex trans_lock;
u8 obuf[LJCA_GPIO_BUF_SIZE];
u8 ibuf[LJCA_GPIO_BUF_SIZE];
};
static int gpio_config(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id, u8 config)
static int ljca_gpio_config(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id,
u8 config)
{
struct gpio_packet *packet = (struct gpio_packet *)ljca_gpio->obuf;
struct ljca_gpio_packet *packet =
(struct ljca_gpio_packet *)ljca_gpio->obuf;
int ret;
mutex_lock(&ljca_gpio->trans_lock);
@ -82,43 +86,43 @@ static int gpio_config(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id, u8 config)
packet->item[0].value = config | ljca_gpio->connect_mode[gpio_id];
packet->num = 1;
ret = ljca_transfer(ljca_gpio->gpio_info->ljca, LJCA_GPIO_CONFIG, packet,
struct_size(packet, item, packet->num), NULL, NULL);
ret = ljca_transfer(ljca_gpio->ljca, LJCA_GPIO_CONFIG, (u8 *)packet,
struct_size(packet, item, packet->num), NULL, 0);
mutex_unlock(&ljca_gpio->trans_lock);
return ret;
return ret < 0 ? ret : 0;
}
static int ljca_gpio_read(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id)
{
struct gpio_packet *packet = (struct gpio_packet *)ljca_gpio->obuf;
struct gpio_packet *ack_packet = (struct gpio_packet *)ljca_gpio->ibuf;
unsigned int ibuf_len = LJCA_GPIO_BUF_SIZE;
struct ljca_gpio_packet *ack_packet =
(struct ljca_gpio_packet *)ljca_gpio->ibuf;
struct ljca_gpio_packet *packet =
(struct ljca_gpio_packet *)ljca_gpio->obuf;
int ret;
mutex_lock(&ljca_gpio->trans_lock);
packet->num = 1;
packet->item[0].index = gpio_id;
ret = ljca_transfer(ljca_gpio->gpio_info->ljca, LJCA_GPIO_READ, packet,
struct_size(packet, item, packet->num), ljca_gpio->ibuf, &ibuf_len);
if (ret)
goto out_unlock;
ret = ljca_transfer(ljca_gpio->ljca, LJCA_GPIO_READ, (u8 *)packet,
struct_size(packet, item, packet->num),
ljca_gpio->ibuf, LJCA_GPIO_BUF_SIZE);
if (!ibuf_len || ack_packet->num != packet->num) {
dev_err(&ljca_gpio->pdev->dev, "failed gpio_id:%u %u", gpio_id, ack_packet->num);
ret = -EIO;
if (ret <= 0 || ack_packet->num != packet->num) {
dev_err(&ljca_gpio->ljca->auxdev.dev,
"read package error, gpio_id: %u num: %u ret: %d\n",
gpio_id, ack_packet->num, ret);
ret = ret < 0 ? ret : -EIO;
}
out_unlock:
mutex_unlock(&ljca_gpio->trans_lock);
if (ret)
return ret;
return ack_packet->item[0].value > 0;
return ret < 0 ? ret : ack_packet->item[0].value > 0;
}
static int ljca_gpio_write(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id,
int value)
static int ljca_gpio_write(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id, int value)
{
struct gpio_packet *packet = (struct gpio_packet *)ljca_gpio->obuf;
struct ljca_gpio_packet *packet =
(struct ljca_gpio_packet *)ljca_gpio->obuf;
int ret;
mutex_lock(&ljca_gpio->trans_lock);
@ -126,10 +130,11 @@ static int ljca_gpio_write(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id,
packet->item[0].index = gpio_id;
packet->item[0].value = value & 1;
ret = ljca_transfer(ljca_gpio->gpio_info->ljca, LJCA_GPIO_WRITE, packet,
struct_size(packet, item, packet->num), NULL, NULL);
ret = ljca_transfer(ljca_gpio->ljca, LJCA_GPIO_WRITE, (u8 *)packet,
struct_size(packet, item, packet->num), NULL, 0);
mutex_unlock(&ljca_gpio->trans_lock);
return ret;
return ret < 0 ? ret : 0;
}
static int ljca_gpio_get_value(struct gpio_chip *chip, unsigned int offset)
@ -147,16 +152,24 @@ static void ljca_gpio_set_value(struct gpio_chip *chip, unsigned int offset,
ret = ljca_gpio_write(ljca_gpio, offset, val);
if (ret)
dev_err(chip->parent, "offset:%u val:%d set value failed %d\n", offset, val, ret);
dev_err(chip->parent,
"set value failed offset: %u val: %d ret: %d\n",
offset, val, ret);
}
static int ljca_gpio_direction_input(struct gpio_chip *chip,
unsigned int offset)
static int ljca_gpio_direction_input(struct gpio_chip *chip, unsigned int offset)
{
struct ljca_gpio_dev *ljca_gpio = gpiochip_get_data(chip);
u8 config = LJCA_GPIO_CONF_INPUT | LJCA_GPIO_CONF_CLR;
int ret;
return gpio_config(ljca_gpio, offset, config);
ret = ljca_gpio_config(ljca_gpio, offset, config);
if (ret)
return ret;
clear_bit(offset, ljca_gpio->output_enabled);
return 0;
}
static int ljca_gpio_direction_output(struct gpio_chip *chip,
@ -166,14 +179,26 @@ static int ljca_gpio_direction_output(struct gpio_chip *chip,
u8 config = LJCA_GPIO_CONF_OUTPUT | LJCA_GPIO_CONF_CLR;
int ret;
ret = gpio_config(ljca_gpio, offset, config);
ret = ljca_gpio_config(ljca_gpio, offset, config);
if (ret)
return ret;
ljca_gpio_set_value(chip, offset, val);
set_bit(offset, ljca_gpio->output_enabled);
return 0;
}
static int ljca_gpio_get_direction(struct gpio_chip *chip, unsigned int offset)
{
struct ljca_gpio_dev *ljca_gpio = gpiochip_get_data(chip);
if (test_bit(offset, ljca_gpio->output_enabled))
return GPIO_LINE_DIRECTION_OUT;
return GPIO_LINE_DIRECTION_IN;
}
static int ljca_gpio_set_config(struct gpio_chip *chip, unsigned int offset,
unsigned long config)
{
@ -197,7 +222,8 @@ static int ljca_gpio_set_config(struct gpio_chip *chip, unsigned int offset,
return 0;
}
static int ljca_gpio_init_valid_mask(struct gpio_chip *chip, unsigned long *valid_mask,
static int ljca_gpio_init_valid_mask(struct gpio_chip *chip,
unsigned long *valid_mask,
unsigned int ngpios)
{
struct ljca_gpio_dev *ljca_gpio = gpiochip_get_data(chip);
@ -208,15 +234,18 @@ static int ljca_gpio_init_valid_mask(struct gpio_chip *chip, unsigned long *vali
return 0;
}
static void ljca_gpio_irq_init_valid_mask(struct gpio_chip *chip, unsigned long *valid_mask,
static void ljca_gpio_irq_init_valid_mask(struct gpio_chip *chip,
unsigned long *valid_mask,
unsigned int ngpios)
{
ljca_gpio_init_valid_mask(chip, valid_mask, ngpios);
}
static int ljca_enable_irq(struct ljca_gpio_dev *ljca_gpio, int gpio_id, bool enable)
static int ljca_enable_irq(struct ljca_gpio_dev *ljca_gpio, int gpio_id,
bool enable)
{
struct gpio_packet *packet = (struct gpio_packet *)ljca_gpio->obuf;
struct ljca_gpio_packet *packet =
(struct ljca_gpio_packet *)ljca_gpio->obuf;
int ret;
mutex_lock(&ljca_gpio->trans_lock);
@@ -224,18 +253,20 @@ static int ljca_enable_irq(struct ljca_gpio_dev *ljca_gpio, int gpio_id, bool en
packet->item[0].index = gpio_id;
packet->item[0].value = 0;
ret = ljca_transfer(ljca_gpio->gpio_info->ljca,
enable ? LJCA_GPIO_INT_UNMASK : LJCA_GPIO_INT_MASK, packet,
struct_size(packet, item, packet->num), NULL, NULL);
ret = ljca_transfer(ljca_gpio->ljca,
enable ? LJCA_GPIO_INT_UNMASK : LJCA_GPIO_INT_MASK,
(u8 *)packet, struct_size(packet, item, packet->num),
NULL, 0);
mutex_unlock(&ljca_gpio->trans_lock);
return ret;
return ret < 0 ? ret : 0;
}
static void ljca_gpio_async(struct work_struct *work)
{
struct ljca_gpio_dev *ljca_gpio = container_of(work, struct ljca_gpio_dev, work);
int gpio_id;
int unmasked;
struct ljca_gpio_dev *ljca_gpio =
container_of(work, struct ljca_gpio_dev, work);
int gpio_id, unmasked;
for_each_set_bit(gpio_id, ljca_gpio->reenable_irqs, ljca_gpio->gc.ngpio) {
clear_bit(gpio_id, ljca_gpio->reenable_irqs);
@@ -245,20 +276,22 @@ static void ljca_gpio_async(struct work_struct *work)
}
}
static void ljca_gpio_event_cb(void *context, u8 cmd, const void *evt_data, int len)
static void ljca_gpio_event_cb(void *context, u8 cmd, const void *evt_data,
int len)
{
const struct gpio_packet *packet = evt_data;
const struct ljca_gpio_packet *packet = evt_data;
struct ljca_gpio_dev *ljca_gpio = context;
int i;
int irq;
int i, irq;
if (cmd != LJCA_GPIO_INT_EVENT)
return;
for (i = 0; i < packet->num; i++) {
irq = irq_find_mapping(ljca_gpio->gc.irq.domain, packet->item[i].index);
irq = irq_find_mapping(ljca_gpio->gc.irq.domain,
packet->item[i].index);
if (!irq) {
dev_err(ljca_gpio->gc.parent, "gpio_id %u does not mapped to IRQ yet\n",
dev_err(ljca_gpio->gc.parent,
"gpio_id %u does not mapped to IRQ yet\n",
packet->item[i].index);
return;
}
@@ -299,18 +332,22 @@ static int ljca_irq_set_type(struct irq_data *irqd, unsigned int type)
ljca_gpio->connect_mode[gpio_id] = LJCA_GPIO_CONF_INTERRUPT;
switch (type) {
case IRQ_TYPE_LEVEL_HIGH:
ljca_gpio->connect_mode[gpio_id] |= (LJCA_GPIO_CONF_LEVEL | LJCA_GPIO_CONF_PULLUP);
ljca_gpio->connect_mode[gpio_id] |=
(LJCA_GPIO_CONF_LEVEL | LJCA_GPIO_CONF_PULLUP);
break;
case IRQ_TYPE_LEVEL_LOW:
ljca_gpio->connect_mode[gpio_id] |= (LJCA_GPIO_CONF_LEVEL | LJCA_GPIO_CONF_PULLDOWN);
ljca_gpio->connect_mode[gpio_id] |=
(LJCA_GPIO_CONF_LEVEL | LJCA_GPIO_CONF_PULLDOWN);
break;
case IRQ_TYPE_EDGE_BOTH:
break;
case IRQ_TYPE_EDGE_RISING:
ljca_gpio->connect_mode[gpio_id] |= (LJCA_GPIO_CONF_EDGE | LJCA_GPIO_CONF_PULLUP);
ljca_gpio->connect_mode[gpio_id] |=
(LJCA_GPIO_CONF_EDGE | LJCA_GPIO_CONF_PULLUP);
break;
case IRQ_TYPE_EDGE_FALLING:
ljca_gpio->connect_mode[gpio_id] |= (LJCA_GPIO_CONF_EDGE | LJCA_GPIO_CONF_PULLDOWN);
ljca_gpio->connect_mode[gpio_id] |=
(LJCA_GPIO_CONF_EDGE | LJCA_GPIO_CONF_PULLDOWN);
break;
default:
return -EINVAL;
@@ -332,15 +369,14 @@ static void ljca_irq_bus_unlock(struct irq_data *irqd)
struct gpio_chip *gc = irq_data_get_irq_chip_data(irqd);
struct ljca_gpio_dev *ljca_gpio = gpiochip_get_data(gc);
int gpio_id = irqd_to_hwirq(irqd);
int enabled;
int unmasked;
int enabled, unmasked;
enabled = test_bit(gpio_id, ljca_gpio->enabled_irqs);
unmasked = test_bit(gpio_id, ljca_gpio->unmasked_irqs);
if (enabled != unmasked) {
if (unmasked) {
gpio_config(ljca_gpio, gpio_id, 0);
ljca_gpio_config(ljca_gpio, gpio_id, 0);
ljca_enable_irq(ljca_gpio, gpio_id, true);
set_bit(gpio_id, ljca_gpio->enabled_irqs);
} else {
@@ -363,43 +399,48 @@ static const struct irq_chip ljca_gpio_irqchip = {
GPIOCHIP_IRQ_RESOURCE_HELPERS,
};
static int ljca_gpio_probe(struct platform_device *pdev)
static int ljca_gpio_probe(struct auxiliary_device *auxdev,
const struct auxiliary_device_id *aux_dev_id)
{
struct ljca_client *ljca = auxiliary_dev_to_ljca_client(auxdev);
struct ljca_gpio_dev *ljca_gpio;
struct gpio_irq_chip *girq;
int ret;
ljca_gpio = devm_kzalloc(&pdev->dev, sizeof(*ljca_gpio), GFP_KERNEL);
ljca_gpio = devm_kzalloc(&auxdev->dev, sizeof(*ljca_gpio), GFP_KERNEL);
if (!ljca_gpio)
return -ENOMEM;
ljca_gpio->gpio_info = dev_get_platdata(&pdev->dev);
ljca_gpio->connect_mode = devm_kcalloc(&pdev->dev, ljca_gpio->gpio_info->num,
sizeof(*ljca_gpio->connect_mode), GFP_KERNEL);
ljca_gpio->ljca = ljca;
ljca_gpio->gpio_info = dev_get_platdata(&auxdev->dev);
ljca_gpio->connect_mode = devm_kcalloc(&auxdev->dev,
ljca_gpio->gpio_info->num,
sizeof(*ljca_gpio->connect_mode),
GFP_KERNEL);
if (!ljca_gpio->connect_mode)
return -ENOMEM;
mutex_init(&ljca_gpio->irq_lock);
mutex_init(&ljca_gpio->trans_lock);
ljca_gpio->pdev = pdev;
ljca_gpio->gc.direction_input = ljca_gpio_direction_input;
ljca_gpio->gc.direction_output = ljca_gpio_direction_output;
ljca_gpio->gc.get_direction = ljca_gpio_get_direction;
ljca_gpio->gc.get = ljca_gpio_get_value;
ljca_gpio->gc.set = ljca_gpio_set_value;
ljca_gpio->gc.set_config = ljca_gpio_set_config;
ljca_gpio->gc.init_valid_mask = ljca_gpio_init_valid_mask;
ljca_gpio->gc.can_sleep = true;
ljca_gpio->gc.parent = &pdev->dev;
ljca_gpio->gc.parent = &auxdev->dev;
ljca_gpio->gc.base = -1;
ljca_gpio->gc.ngpio = ljca_gpio->gpio_info->num;
ljca_gpio->gc.label = ACPI_COMPANION(&pdev->dev) ?
acpi_dev_name(ACPI_COMPANION(&pdev->dev)) :
dev_name(&pdev->dev);
ljca_gpio->gc.label = ACPI_COMPANION(&auxdev->dev) ?
acpi_dev_name(ACPI_COMPANION(&auxdev->dev)) :
dev_name(&auxdev->dev);
ljca_gpio->gc.owner = THIS_MODULE;
platform_set_drvdata(pdev, ljca_gpio);
ljca_register_event_cb(ljca_gpio->gpio_info->ljca, ljca_gpio_event_cb, ljca_gpio);
auxiliary_set_drvdata(auxdev, ljca_gpio);
ljca_register_event_cb(ljca, ljca_gpio_event_cb, ljca_gpio);
girq = &ljca_gpio->gc.irq;
gpio_irq_chip_set_chip(girq, &ljca_gpio_irqchip);
@@ -413,7 +454,7 @@ static int ljca_gpio_probe(struct platform_device *pdev)
INIT_WORK(&ljca_gpio->work, ljca_gpio_async);
ret = gpiochip_add_data(&ljca_gpio->gc, ljca_gpio);
if (ret) {
ljca_unregister_event_cb(ljca_gpio->gpio_info->ljca);
ljca_unregister_event_cb(ljca);
mutex_destroy(&ljca_gpio->irq_lock);
mutex_destroy(&ljca_gpio->trans_lock);
}
@@ -421,33 +462,33 @@ static int ljca_gpio_probe(struct platform_device *pdev)
return ret;
}
static void ljca_gpio_remove(struct platform_device *pdev)
static void ljca_gpio_remove(struct auxiliary_device *auxdev)
{
struct ljca_gpio_dev *ljca_gpio = platform_get_drvdata(pdev);
struct ljca_gpio_dev *ljca_gpio = auxiliary_get_drvdata(auxdev);
gpiochip_remove(&ljca_gpio->gc);
ljca_unregister_event_cb(ljca_gpio->gpio_info->ljca);
ljca_unregister_event_cb(ljca_gpio->ljca);
cancel_work_sync(&ljca_gpio->work);
mutex_destroy(&ljca_gpio->irq_lock);
mutex_destroy(&ljca_gpio->trans_lock);
}
#define LJCA_GPIO_DRV_NAME "ljca-gpio"
static const struct platform_device_id ljca_gpio_id[] = {
{ LJCA_GPIO_DRV_NAME, 0 },
{ /* sentinel */ }
static const struct auxiliary_device_id ljca_gpio_id_table[] = {
{ "usb_ljca.ljca-gpio", 0 },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(platform, ljca_gpio_id);
MODULE_DEVICE_TABLE(auxiliary, ljca_gpio_id_table);
static struct platform_driver ljca_gpio_driver = {
.driver.name = LJCA_GPIO_DRV_NAME,
static struct auxiliary_driver ljca_gpio_driver = {
.probe = ljca_gpio_probe,
.remove_new = ljca_gpio_remove,
.remove = ljca_gpio_remove,
.id_table = ljca_gpio_id_table,
};
module_platform_driver(ljca_gpio_driver);
module_auxiliary_driver(ljca_gpio_driver);
MODULE_AUTHOR("Ye Xiang <xiang.ye@intel.com>");
MODULE_AUTHOR("Wang Zhifeng <zhifeng.wang@intel.com>");
MODULE_AUTHOR("Zhang Lixu <lixu.zhang@intel.com>");
MODULE_AUTHOR("Wentong Wu <wentong.wu@intel.com>");
MODULE_AUTHOR("Zhifeng Wang <zhifeng.wang@intel.com>");
MODULE_AUTHOR("Lixu Zhang <lixu.zhang@intel.com>");
MODULE_DESCRIPTION("Intel La Jolla Cove Adapter USB-GPIO driver");
MODULE_LICENSE("GPL");
MODULE_IMPORT_NS(LJCA);
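Several hunks above change `return ret;` into `return ret < 0 ? ret : 0;` because the reworked `ljca_transfer()` reports the number of bytes received on success rather than zero. A minimal userspace sketch of that caller pattern (the `fake_` names are illustrative, not part of the driver):

```c
/* Stand-in for ljca_transfer(): the new API returns the number of
 * bytes received on success (>= 0) or a negative errno on failure. */
static int fake_ljca_transfer(int simulated_result)
{
	return simulated_result;
}

/* Mirrors the caller pattern in ljca_enable_irq(): collapse positive
 * byte counts to 0 so the function reports only success or failure. */
static int fake_enable_irq(int simulated_result)
{
	int ret = fake_ljca_transfer(simulated_result);

	return ret < 0 ? ret : 0;
}
```

Callers that previously forwarded `ret` unchanged would otherwise leak a positive byte count to code that treats any non-zero value as an error.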


@@ -1264,6 +1264,17 @@ config I2C_DLN2
This driver can also be built as a module. If so, the module
will be called i2c-dln2.
config I2C_LJCA
tristate "I2C functionality of Intel La Jolla Cove Adapter"
depends on USB_LJCA
default USB_LJCA
help
If you say yes to this option, I2C functionality support of Intel
La Jolla Cove Adapter (LJCA) will be included.
This driver can also be built as a module. If so, the module
will be called i2c-ljca.
config I2C_CP2615
tristate "Silicon Labs CP2615 USB sound card and I2C adapter"
depends on USB


@@ -133,6 +133,7 @@ obj-$(CONFIG_I2C_GXP) += i2c-gxp.o
# External I2C/SMBus adapter drivers
obj-$(CONFIG_I2C_DIOLAN_U2C) += i2c-diolan-u2c.o
obj-$(CONFIG_I2C_DLN2) += i2c-dln2.o
obj-$(CONFIG_I2C_LJCA) += i2c-ljca.o
obj-$(CONFIG_I2C_CP2615) += i2c-cp2615.o
obj-$(CONFIG_I2C_PARPORT) += i2c-parport.o
obj-$(CONFIG_I2C_PCI1XXXX) += i2c-mchp-pci1xxxx.o


@@ -0,0 +1,343 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Intel La Jolla Cove Adapter USB-I2C driver
*
* Copyright (c) 2023, Intel Corporation.
*/
#include <linux/acpi.h>
#include <linux/auxiliary_bus.h>
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/dev_printk.h>
#include <linux/i2c.h>
#include <linux/module.h>
#include <linux/usb/ljca.h>
/* I2C init flags */
#define LJCA_I2C_INIT_FLAG_MODE BIT(0)
#define LJCA_I2C_INIT_FLAG_MODE_POLLING FIELD_PREP(LJCA_I2C_INIT_FLAG_MODE, 0)
#define LJCA_I2C_INIT_FLAG_MODE_INTERRUPT FIELD_PREP(LJCA_I2C_INIT_FLAG_MODE, 1)
#define LJCA_I2C_INIT_FLAG_ADDR_16BIT BIT(0)
#define LJCA_I2C_INIT_FLAG_FREQ GENMASK(2, 1)
#define LJCA_I2C_INIT_FLAG_FREQ_100K FIELD_PREP(LJCA_I2C_INIT_FLAG_FREQ, 0)
#define LJCA_I2C_INIT_FLAG_FREQ_400K FIELD_PREP(LJCA_I2C_INIT_FLAG_FREQ, 1)
#define LJCA_I2C_INIT_FLAG_FREQ_1M FIELD_PREP(LJCA_I2C_INIT_FLAG_FREQ, 2)
#define LJCA_I2C_BUF_SIZE 60u
#define LJCA_I2C_MAX_XFER_SIZE (LJCA_I2C_BUF_SIZE - sizeof(struct ljca_i2c_rw_packet))
/* I2C commands */
enum ljca_i2c_cmd {
LJCA_I2C_INIT = 1,
LJCA_I2C_XFER,
LJCA_I2C_START,
LJCA_I2C_STOP,
LJCA_I2C_READ,
LJCA_I2C_WRITE,
};
enum ljca_xfer_type {
LJCA_I2C_WRITE_XFER_TYPE,
LJCA_I2C_READ_XFER_TYPE,
};
/* I2C raw commands: Init/Start/Read/Write/Stop */
struct ljca_i2c_rw_packet {
u8 id;
__le16 len;
u8 data[] __counted_by(len);
} __packed;
struct ljca_i2c_dev {
struct ljca_client *ljca;
struct ljca_i2c_info *i2c_info;
struct i2c_adapter adap;
u8 obuf[LJCA_I2C_BUF_SIZE];
u8 ibuf[LJCA_I2C_BUF_SIZE];
};
static int ljca_i2c_init(struct ljca_i2c_dev *ljca_i2c, u8 id)
{
struct ljca_i2c_rw_packet *w_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->obuf;
int ret;
w_packet->id = id;
w_packet->len = cpu_to_le16(sizeof(*w_packet->data));
w_packet->data[0] = LJCA_I2C_INIT_FLAG_FREQ_400K;
ret = ljca_transfer(ljca_i2c->ljca, LJCA_I2C_INIT, (u8 *)w_packet,
struct_size(w_packet, data, 1), NULL, 0);
return ret < 0 ? ret : 0;
}
static int ljca_i2c_start(struct ljca_i2c_dev *ljca_i2c, u8 slave_addr,
enum ljca_xfer_type type)
{
struct ljca_i2c_rw_packet *w_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->obuf;
struct ljca_i2c_rw_packet *r_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->ibuf;
s16 rp_len;
int ret;
w_packet->id = ljca_i2c->i2c_info->id;
w_packet->len = cpu_to_le16(sizeof(*w_packet->data));
w_packet->data[0] = (slave_addr << 1) | type;
ret = ljca_transfer(ljca_i2c->ljca, LJCA_I2C_START, (u8 *)w_packet,
struct_size(w_packet, data, 1), (u8 *)r_packet,
LJCA_I2C_BUF_SIZE);
if (ret < 0 || ret < sizeof(*r_packet))
return ret < 0 ? ret : -EIO;
rp_len = le16_to_cpu(r_packet->len);
if (rp_len < 0 || r_packet->id != w_packet->id) {
dev_dbg(&ljca_i2c->adap.dev,
"i2c start failed len: %d id: %d %d\n",
rp_len, r_packet->id, w_packet->id);
return -EIO;
}
return 0;
}
static void ljca_i2c_stop(struct ljca_i2c_dev *ljca_i2c, u8 slave_addr)
{
struct ljca_i2c_rw_packet *w_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->obuf;
struct ljca_i2c_rw_packet *r_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->ibuf;
s16 rp_len;
int ret;
w_packet->id = ljca_i2c->i2c_info->id;
w_packet->len = cpu_to_le16(sizeof(*w_packet->data));
w_packet->data[0] = 0;
ret = ljca_transfer(ljca_i2c->ljca, LJCA_I2C_STOP, (u8 *)w_packet,
struct_size(w_packet, data, 1), (u8 *)r_packet,
LJCA_I2C_BUF_SIZE);
if (ret < 0 || ret < sizeof(*r_packet)) {
dev_dbg(&ljca_i2c->adap.dev,
"i2c stop failed ret: %d id: %d\n",
ret, w_packet->id);
return;
}
rp_len = le16_to_cpu(r_packet->len);
if (rp_len < 0 || r_packet->id != w_packet->id)
dev_dbg(&ljca_i2c->adap.dev,
"i2c stop failed len: %d id: %d %d\n",
rp_len, r_packet->id, w_packet->id);
}
static int ljca_i2c_pure_read(struct ljca_i2c_dev *ljca_i2c, u8 *data, u8 len)
{
struct ljca_i2c_rw_packet *w_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->obuf;
struct ljca_i2c_rw_packet *r_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->ibuf;
s16 rp_len;
int ret;
w_packet->id = ljca_i2c->i2c_info->id;
w_packet->len = cpu_to_le16(len);
w_packet->data[0] = 0;
ret = ljca_transfer(ljca_i2c->ljca, LJCA_I2C_READ, (u8 *)w_packet,
struct_size(w_packet, data, 1), (u8 *)r_packet,
LJCA_I2C_BUF_SIZE);
if (ret < 0 || ret < sizeof(*r_packet))
return ret < 0 ? ret : -EIO;
rp_len = le16_to_cpu(r_packet->len);
if (rp_len != len || r_packet->id != w_packet->id) {
dev_dbg(&ljca_i2c->adap.dev,
"i2c raw read failed len: %d id: %d %d\n",
rp_len, r_packet->id, w_packet->id);
return -EIO;
}
memcpy(data, r_packet->data, len);
return 0;
}
static int ljca_i2c_read(struct ljca_i2c_dev *ljca_i2c, u8 slave_addr, u8 *data,
u8 len)
{
int ret;
ret = ljca_i2c_start(ljca_i2c, slave_addr, LJCA_I2C_READ_XFER_TYPE);
if (!ret)
ret = ljca_i2c_pure_read(ljca_i2c, data, len);
ljca_i2c_stop(ljca_i2c, slave_addr);
return ret;
}
static int ljca_i2c_pure_write(struct ljca_i2c_dev *ljca_i2c, u8 *data, u8 len)
{
struct ljca_i2c_rw_packet *w_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->obuf;
struct ljca_i2c_rw_packet *r_packet =
(struct ljca_i2c_rw_packet *)ljca_i2c->ibuf;
s16 rplen;
int ret;
w_packet->id = ljca_i2c->i2c_info->id;
w_packet->len = cpu_to_le16(len);
memcpy(w_packet->data, data, len);
ret = ljca_transfer(ljca_i2c->ljca, LJCA_I2C_WRITE, (u8 *)w_packet,
struct_size(w_packet, data, len), (u8 *)r_packet,
LJCA_I2C_BUF_SIZE);
if (ret < 0 || ret < sizeof(*r_packet))
return ret < 0 ? ret : -EIO;
rplen = le16_to_cpu(r_packet->len);
if (rplen != len || r_packet->id != w_packet->id) {
dev_dbg(&ljca_i2c->adap.dev,
"i2c write failed len: %d id: %d/%d\n",
rplen, r_packet->id, w_packet->id);
return -EIO;
}
return 0;
}
static int ljca_i2c_write(struct ljca_i2c_dev *ljca_i2c, u8 slave_addr,
u8 *data, u8 len)
{
int ret;
ret = ljca_i2c_start(ljca_i2c, slave_addr, LJCA_I2C_WRITE_XFER_TYPE);
if (!ret)
ret = ljca_i2c_pure_write(ljca_i2c, data, len);
ljca_i2c_stop(ljca_i2c, slave_addr);
return ret;
}
static int ljca_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msg,
int num)
{
struct ljca_i2c_dev *ljca_i2c;
struct i2c_msg *cur_msg;
int i, ret;
ljca_i2c = i2c_get_adapdata(adapter);
if (!ljca_i2c)
return -EINVAL;
for (i = 0; i < num; i++) {
cur_msg = &msg[i];
if (cur_msg->flags & I2C_M_RD)
ret = ljca_i2c_read(ljca_i2c, cur_msg->addr,
cur_msg->buf, cur_msg->len);
else
ret = ljca_i2c_write(ljca_i2c, cur_msg->addr,
cur_msg->buf, cur_msg->len);
if (ret)
return ret;
}
return num;
}
static u32 ljca_i2c_func(struct i2c_adapter *adap)
{
return I2C_FUNC_I2C | (I2C_FUNC_SMBUS_EMUL & ~I2C_FUNC_SMBUS_QUICK);
}
static const struct i2c_adapter_quirks ljca_i2c_quirks = {
.flags = I2C_AQ_NO_ZERO_LEN,
.max_read_len = LJCA_I2C_MAX_XFER_SIZE,
.max_write_len = LJCA_I2C_MAX_XFER_SIZE,
};
static const struct i2c_algorithm ljca_i2c_algo = {
.master_xfer = ljca_i2c_xfer,
.functionality = ljca_i2c_func,
};
static int ljca_i2c_probe(struct auxiliary_device *auxdev,
const struct auxiliary_device_id *aux_dev_id)
{
struct ljca_client *ljca = auxiliary_dev_to_ljca_client(auxdev);
struct ljca_i2c_dev *ljca_i2c;
int ret;
ljca_i2c = devm_kzalloc(&auxdev->dev, sizeof(*ljca_i2c), GFP_KERNEL);
if (!ljca_i2c)
return -ENOMEM;
ljca_i2c->ljca = ljca;
ljca_i2c->i2c_info = dev_get_platdata(&auxdev->dev);
ljca_i2c->adap.owner = THIS_MODULE;
ljca_i2c->adap.class = I2C_CLASS_HWMON;
ljca_i2c->adap.algo = &ljca_i2c_algo;
ljca_i2c->adap.quirks = &ljca_i2c_quirks;
ljca_i2c->adap.dev.parent = &auxdev->dev;
snprintf(ljca_i2c->adap.name, sizeof(ljca_i2c->adap.name), "%s-%s-%d",
dev_name(&auxdev->dev), dev_name(auxdev->dev.parent),
ljca_i2c->i2c_info->id);
device_set_node(&ljca_i2c->adap.dev, dev_fwnode(&auxdev->dev));
i2c_set_adapdata(&ljca_i2c->adap, ljca_i2c);
auxiliary_set_drvdata(auxdev, ljca_i2c);
ret = ljca_i2c_init(ljca_i2c, ljca_i2c->i2c_info->id);
if (ret)
return dev_err_probe(&auxdev->dev, -EIO,
"i2c init failed id: %d\n",
ljca_i2c->i2c_info->id);
ret = devm_i2c_add_adapter(&auxdev->dev, &ljca_i2c->adap);
if (ret)
return ret;
if (has_acpi_companion(&ljca_i2c->adap.dev))
acpi_dev_clear_dependencies(ACPI_COMPANION(&ljca_i2c->adap.dev));
return 0;
}
static void ljca_i2c_remove(struct auxiliary_device *auxdev)
{
struct ljca_i2c_dev *ljca_i2c = auxiliary_get_drvdata(auxdev);
i2c_del_adapter(&ljca_i2c->adap);
}
static const struct auxiliary_device_id ljca_i2c_id_table[] = {
{ "usb_ljca.ljca-i2c", 0 },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(auxiliary, ljca_i2c_id_table);
static struct auxiliary_driver ljca_i2c_driver = {
.probe = ljca_i2c_probe,
.remove = ljca_i2c_remove,
.id_table = ljca_i2c_id_table,
};
module_auxiliary_driver(ljca_i2c_driver);
MODULE_AUTHOR("Wentong Wu <wentong.wu@intel.com>");
MODULE_AUTHOR("Zhifeng Wang <zhifeng.wang@intel.com>");
MODULE_AUTHOR("Lixu Zhang <lixu.zhang@intel.com>");
MODULE_DESCRIPTION("Intel La Jolla Cove Adapter USB-I2C driver");
MODULE_LICENSE("GPL");
MODULE_IMPORT_NS(LJCA);
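The `ljca_i2c_rw_packet` wire format above pairs a `__counted_by(len)` flexible array with `struct_size()` when sizing transfers. A hedged userspace mock of that layout (the `mock_` names are mine, and `uint16_t` stands in for the driver's `__le16`):

```c
#include <stddef.h>
#include <stdint.h>

/* Mock of struct ljca_i2c_rw_packet: a packed 3-byte header followed
 * by a flexible payload of len bytes. */
struct mock_i2c_rw_packet {
	uint8_t id;
	uint16_t len;		/* __le16 in the driver */
	uint8_t data[];		/* flexible array, len bytes follow */
} __attribute__((packed));

/* What struct_size(pkt, data, n) evaluates to for this layout
 * (the kernel macro additionally guards against overflow). */
static size_t mock_struct_size(size_t n)
{
	return sizeof(struct mock_i2c_rw_packet) + n * sizeof(uint8_t);
}
```

With `LJCA_I2C_BUF_SIZE` at 60, the 3-byte header is why `LJCA_I2C_MAX_XFER_SIZE` works out to 57 bytes.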


@@ -492,6 +492,8 @@ static int cros_typec_enable_dp(struct cros_typec_data *typec,
{
struct cros_typec_port *port = typec->ports[port_num];
struct typec_displayport_data dp_data;
u32 cable_tbt_vdo;
u32 cable_dp_vdo;
int ret;
if (typec->pd_ctrl_ver < 2) {
@@ -524,6 +526,32 @@ static int cros_typec_enable_dp(struct cros_typec_data *typec,
port->state.data = &dp_data;
port->state.mode = TYPEC_MODAL_STATE(ffs(pd_ctrl->dp_mode));
/* Get cable VDO for cables with DPSID to check DPAM2.1 is supported */
cable_dp_vdo = cros_typec_get_cable_vdo(port, USB_TYPEC_DP_SID);
/**
* Get cable VDO for thunderbolt cables and cables with DPSID but does not
* support DPAM2.1.
*/
cable_tbt_vdo = cros_typec_get_cable_vdo(port, USB_TYPEC_TBT_SID);
if (cable_dp_vdo & DP_CAP_DPAM_VERSION) {
dp_data.conf |= cable_dp_vdo;
} else if (cable_tbt_vdo) {
dp_data.conf |= TBT_CABLE_SPEED(cable_tbt_vdo) << DP_CONF_SIGNALLING_SHIFT;
/* Cable Type */
if (cable_tbt_vdo & TBT_CABLE_OPTICAL)
dp_data.conf |= DP_CONF_CABLE_TYPE_OPTICAL << DP_CONF_CABLE_TYPE_SHIFT;
else if (cable_tbt_vdo & TBT_CABLE_RETIMER)
dp_data.conf |= DP_CONF_CABLE_TYPE_RE_TIMER << DP_CONF_CABLE_TYPE_SHIFT;
else if (cable_tbt_vdo & TBT_CABLE_ACTIVE_PASSIVE)
dp_data.conf |= DP_CONF_CABLE_TYPE_RE_DRIVER << DP_CONF_CABLE_TYPE_SHIFT;
} else if (PD_IDH_PTYPE(port->c_identity.id_header) == IDH_PTYPE_PCABLE) {
dp_data.conf |= VDO_TYPEC_CABLE_SPEED(port->c_identity.vdo[0]) <<
DP_CONF_SIGNALLING_SHIFT;
}
ret = cros_typec_retimer_set(port->retimer, port->state);
if (!ret)
ret = typec_mux_set(port->mux, &port->state);


@@ -237,7 +237,7 @@ static int tps65217_charger_probe(struct platform_device *pdev)
for (i = 0; i < NUM_CHARGER_IRQS; i++) {
ret = devm_request_threaded_irq(&pdev->dev, irq[i], NULL,
tps65217_charger_irq,
IRQF_ONESHOT, "tps65217-charger",
IRQF_SHARED, "tps65217-charger",
charger);
if (ret) {
dev_err(charger->dev,


@@ -616,6 +616,17 @@ config SPI_FSL_ESPI
From MPC8536, 85xx platform uses the controller, and all P10xx,
P20xx, P30xx,P40xx, P50xx uses this controller.
config SPI_LJCA
tristate "Intel La Jolla Cove Adapter SPI support"
depends on USB_LJCA
default USB_LJCA
help
Select this option to enable SPI driver for the Intel
La Jolla Cove Adapter (LJCA) board.
This driver can also be built as a module. If so, the module
will be called spi-ljca.
config SPI_MESON_SPICC
tristate "Amlogic Meson SPICC controller"
depends on COMMON_CLK


@@ -71,6 +71,7 @@ obj-$(CONFIG_SPI_INTEL_PCI) += spi-intel-pci.o
obj-$(CONFIG_SPI_INTEL_PLATFORM) += spi-intel-platform.o
obj-$(CONFIG_SPI_LANTIQ_SSC) += spi-lantiq-ssc.o
obj-$(CONFIG_SPI_JCORE) += spi-jcore.o
obj-$(CONFIG_SPI_LJCA) += spi-ljca.o
obj-$(CONFIG_SPI_LM70_LLP) += spi-lm70llp.o
obj-$(CONFIG_SPI_LOONGSON_CORE) += spi-loongson-core.o
obj-$(CONFIG_SPI_LOONGSON_PCI) += spi-loongson-pci.o

drivers/spi/spi-ljca.c (new file, 297 lines)

@@ -0,0 +1,297 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Intel La Jolla Cove Adapter USB-SPI driver
*
* Copyright (c) 2023, Intel Corporation.
*/
#include <linux/auxiliary_bus.h>
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/dev_printk.h>
#include <linux/module.h>
#include <linux/spi/spi.h>
#include <linux/usb/ljca.h>
#define LJCA_SPI_BUS_MAX_HZ 48000000
#define LJCA_SPI_BUF_SIZE 60u
#define LJCA_SPI_MAX_XFER_SIZE \
(LJCA_SPI_BUF_SIZE - sizeof(struct ljca_spi_xfer_packet))
#define LJCA_SPI_CLK_MODE_POLARITY BIT(0)
#define LJCA_SPI_CLK_MODE_PHASE BIT(1)
#define LJCA_SPI_XFER_INDICATOR_ID GENMASK(5, 0)
#define LJCA_SPI_XFER_INDICATOR_CMPL BIT(6)
#define LJCA_SPI_XFER_INDICATOR_INDEX BIT(7)
/* SPI commands */
enum ljca_spi_cmd {
LJCA_SPI_INIT = 1,
LJCA_SPI_READ,
LJCA_SPI_WRITE,
LJCA_SPI_WRITEREAD,
LJCA_SPI_DEINIT,
};
enum {
LJCA_SPI_BUS_SPEED_24M,
LJCA_SPI_BUS_SPEED_12M,
LJCA_SPI_BUS_SPEED_8M,
LJCA_SPI_BUS_SPEED_6M,
LJCA_SPI_BUS_SPEED_4_8M, /*4.8MHz*/
LJCA_SPI_BUS_SPEED_MIN = LJCA_SPI_BUS_SPEED_4_8M,
};
enum {
LJCA_SPI_CLOCK_LOW_POLARITY,
LJCA_SPI_CLOCK_HIGH_POLARITY,
};
enum {
LJCA_SPI_CLOCK_FIRST_PHASE,
LJCA_SPI_CLOCK_SECOND_PHASE,
};
struct ljca_spi_init_packet {
u8 index;
u8 speed;
u8 mode;
} __packed;
struct ljca_spi_xfer_packet {
u8 indicator;
u8 len;
u8 data[] __counted_by(len);
} __packed;
struct ljca_spi_dev {
struct ljca_client *ljca;
struct spi_controller *controller;
struct ljca_spi_info *spi_info;
u8 speed;
u8 mode;
u8 obuf[LJCA_SPI_BUF_SIZE];
u8 ibuf[LJCA_SPI_BUF_SIZE];
};
static int ljca_spi_read_write(struct ljca_spi_dev *ljca_spi, const u8 *w_data,
u8 *r_data, int len, int id, int complete,
int cmd)
{
struct ljca_spi_xfer_packet *w_packet =
(struct ljca_spi_xfer_packet *)ljca_spi->obuf;
struct ljca_spi_xfer_packet *r_packet =
(struct ljca_spi_xfer_packet *)ljca_spi->ibuf;
int ret;
w_packet->indicator = FIELD_PREP(LJCA_SPI_XFER_INDICATOR_ID, id) |
FIELD_PREP(LJCA_SPI_XFER_INDICATOR_CMPL, complete) |
FIELD_PREP(LJCA_SPI_XFER_INDICATOR_INDEX,
ljca_spi->spi_info->id);
if (cmd == LJCA_SPI_READ) {
w_packet->len = sizeof(u16);
*(__le16 *)&w_packet->data[0] = cpu_to_le16(len);
} else {
w_packet->len = len;
memcpy(w_packet->data, w_data, len);
}
ret = ljca_transfer(ljca_spi->ljca, cmd, (u8 *)w_packet,
struct_size(w_packet, data, w_packet->len),
(u8 *)r_packet, LJCA_SPI_BUF_SIZE);
if (ret < 0)
return ret;
else if (ret < sizeof(*r_packet) || r_packet->len <= 0)
return -EIO;
if (r_data)
memcpy(r_data, r_packet->data, r_packet->len);
return 0;
}
static int ljca_spi_init(struct ljca_spi_dev *ljca_spi, u8 div, u8 mode)
{
struct ljca_spi_init_packet w_packet = {};
int ret;
if (ljca_spi->mode == mode && ljca_spi->speed == div)
return 0;
w_packet.index = ljca_spi->spi_info->id;
w_packet.speed = div;
w_packet.mode = FIELD_PREP(LJCA_SPI_CLK_MODE_POLARITY,
(mode & SPI_CPOL) ? LJCA_SPI_CLOCK_HIGH_POLARITY :
LJCA_SPI_CLOCK_LOW_POLARITY) |
FIELD_PREP(LJCA_SPI_CLK_MODE_PHASE,
(mode & SPI_CPHA) ? LJCA_SPI_CLOCK_SECOND_PHASE :
LJCA_SPI_CLOCK_FIRST_PHASE);
ret = ljca_transfer(ljca_spi->ljca, LJCA_SPI_INIT, (u8 *)&w_packet,
sizeof(w_packet), NULL, 0);
if (ret < 0)
return ret;
ljca_spi->mode = mode;
ljca_spi->speed = div;
return 0;
}
static int ljca_spi_deinit(struct ljca_spi_dev *ljca_spi)
{
struct ljca_spi_init_packet w_packet = {};
int ret;
w_packet.index = ljca_spi->spi_info->id;
ret = ljca_transfer(ljca_spi->ljca, LJCA_SPI_DEINIT, (u8 *)&w_packet,
sizeof(w_packet), NULL, 0);
return ret < 0 ? ret : 0;
}
static inline int ljca_spi_transfer(struct ljca_spi_dev *ljca_spi,
const u8 *tx_data, u8 *rx_data, u16 len)
{
int complete, cur_len;
int remaining = len;
int cmd, ret, i;
int offset = 0;
if (tx_data && rx_data)
cmd = LJCA_SPI_WRITEREAD;
else if (tx_data)
cmd = LJCA_SPI_WRITE;
else if (rx_data)
cmd = LJCA_SPI_READ;
else
return -EINVAL;
for (i = 0; remaining > 0; i++) {
cur_len = min_t(unsigned int, remaining, LJCA_SPI_MAX_XFER_SIZE);
complete = (cur_len == remaining);
ret = ljca_spi_read_write(ljca_spi,
tx_data ? tx_data + offset : NULL,
rx_data ? rx_data + offset : NULL,
cur_len, i, complete, cmd);
if (ret)
return ret;
offset += cur_len;
remaining -= cur_len;
}
return 0;
}
static int ljca_spi_transfer_one(struct spi_controller *controller,
struct spi_device *spi,
struct spi_transfer *xfer)
{
u8 div = DIV_ROUND_UP(controller->max_speed_hz, xfer->speed_hz) / 2 - 1;
struct ljca_spi_dev *ljca_spi = spi_controller_get_devdata(controller);
int ret;
div = min_t(u8, LJCA_SPI_BUS_SPEED_MIN, div);
ret = ljca_spi_init(ljca_spi, div, spi->mode);
if (ret) {
dev_err(&ljca_spi->ljca->auxdev.dev,
"cannot initialize transfer ret %d\n", ret);
return ret;
}
ret = ljca_spi_transfer(ljca_spi, xfer->tx_buf, xfer->rx_buf, xfer->len);
if (ret)
dev_err(&ljca_spi->ljca->auxdev.dev,
"transfer failed len: %d\n", xfer->len);
return ret;
}
static int ljca_spi_probe(struct auxiliary_device *auxdev,
const struct auxiliary_device_id *aux_dev_id)
{
struct ljca_client *ljca = auxiliary_dev_to_ljca_client(auxdev);
struct spi_controller *controller;
struct ljca_spi_dev *ljca_spi;
int ret;
controller = devm_spi_alloc_master(&auxdev->dev, sizeof(*ljca_spi));
if (!controller)
return -ENOMEM;
ljca_spi = spi_controller_get_devdata(controller);
ljca_spi->ljca = ljca;
ljca_spi->spi_info = dev_get_platdata(&auxdev->dev);
ljca_spi->controller = controller;
controller->bus_num = -1;
controller->mode_bits = SPI_CPHA | SPI_CPOL;
controller->transfer_one = ljca_spi_transfer_one;
controller->auto_runtime_pm = false;
controller->max_speed_hz = LJCA_SPI_BUS_MAX_HZ;
device_set_node(&ljca_spi->controller->dev, dev_fwnode(&auxdev->dev));
auxiliary_set_drvdata(auxdev, controller);
ret = spi_register_controller(controller);
if (ret)
dev_err(&auxdev->dev, "Failed to register controller\n");
return ret;
}
static void ljca_spi_dev_remove(struct auxiliary_device *auxdev)
{
struct spi_controller *controller = auxiliary_get_drvdata(auxdev);
struct ljca_spi_dev *ljca_spi = spi_controller_get_devdata(controller);
spi_unregister_controller(controller);
ljca_spi_deinit(ljca_spi);
}
static int ljca_spi_dev_suspend(struct device *dev)
{
struct spi_controller *controller = dev_get_drvdata(dev);
return spi_controller_suspend(controller);
}
static int ljca_spi_dev_resume(struct device *dev)
{
struct spi_controller *controller = dev_get_drvdata(dev);
return spi_controller_resume(controller);
}
static const struct dev_pm_ops ljca_spi_pm = {
SYSTEM_SLEEP_PM_OPS(ljca_spi_dev_suspend, ljca_spi_dev_resume)
};
static const struct auxiliary_device_id ljca_spi_id_table[] = {
{ "usb_ljca.ljca-spi", 0 },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(auxiliary, ljca_spi_id_table);
static struct auxiliary_driver ljca_spi_driver = {
.driver.pm = &ljca_spi_pm,
.probe = ljca_spi_probe,
.remove = ljca_spi_dev_remove,
.id_table = ljca_spi_id_table,
};
module_auxiliary_driver(ljca_spi_driver);
MODULE_AUTHOR("Wentong Wu <wentong.wu@intel.com>");
MODULE_AUTHOR("Zhifeng Wang <zhifeng.wang@intel.com>");
MODULE_AUTHOR("Lixu Zhang <lixu.zhang@intel.com>");
MODULE_DESCRIPTION("Intel La Jolla Cove Adapter USB-SPI driver");
MODULE_LICENSE("GPL");
MODULE_IMPORT_NS(LJCA);
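`ljca_spi_transfer_one()` above derives the bus-speed index as `DIV_ROUND_UP(max_speed_hz, speed_hz) / 2 - 1`, clamped to `LJCA_SPI_BUS_SPEED_MIN`. A small sketch of that arithmetic under the 48 MHz maximum (the `sketch_` names are illustrative):

```c
/* Illustrative reimplementation of the divider math in
 * ljca_spi_transfer_one(); the constants mirror the driver's
 * LJCA_SPI_BUS_MAX_HZ and LJCA_SPI_BUS_SPEED_MIN. */
#define SKETCH_MAX_HZ		48000000u
#define SKETCH_SPEED_MIN	4u	/* LJCA_SPI_BUS_SPEED_4_8M */
#define SKETCH_DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static unsigned int sketch_spi_div(unsigned int speed_hz)
{
	unsigned int div = SKETCH_DIV_ROUND_UP(SKETCH_MAX_HZ, speed_hz) / 2 - 1;

	/* min_t(u8, LJCA_SPI_BUS_SPEED_MIN, div): cap at the 4.8 MHz index */
	return div < SKETCH_SPEED_MIN ? div : SKETCH_SPEED_MIN;
}
```

Each result maps onto the `LJCA_SPI_BUS_SPEED_*` enum above: 0 selects 24 MHz, 1 selects 12 MHz, and anything 4 or above collapses to the 4.8 MHz floor.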


@@ -174,6 +174,28 @@ bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx)
return !!(tb_port_clx(port) & clx);
}
/**
* tb_switch_clx_is_supported() - Is CLx supported on this type of router
* @sw: The router to check CLx support for
*/
static bool tb_switch_clx_is_supported(const struct tb_switch *sw)
{
if (!clx_enabled)
return false;
if (sw->quirks & QUIRK_NO_CLX)
return false;
/*
* CLx is not enabled and validated on Intel USB4 platforms
* before Alder Lake.
*/
if (tb_switch_is_tiger_lake(sw))
return false;
return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
}
/**
* tb_switch_clx_init() - Initialize router CL states
* @sw: Router
@@ -273,28 +295,6 @@ static int tb_switch_mask_clx_objections(struct tb_switch *sw)
sw->cap_lp + offset, ARRAY_SIZE(val));
}
/**
* tb_switch_clx_is_supported() - Is CLx supported on this type of router
* @sw: The router to check CLx support for
*/
bool tb_switch_clx_is_supported(const struct tb_switch *sw)
{
if (!clx_enabled)
return false;
if (sw->quirks & QUIRK_NO_CLX)
return false;
/*
* CLx is not enabled and validated on Intel USB4 platforms
* before Alder Lake.
*/
if (tb_switch_is_tiger_lake(sw))
return false;
return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
}
static bool validate_mask(unsigned int clx)
{
/* Previous states need to be enabled */
@@ -405,6 +405,9 @@ int tb_switch_clx_disable(struct tb_switch *sw)
if (!clx)
return 0;
if (sw->is_unplugged)
return clx;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);


@@ -101,7 +101,7 @@ struct dma_test {
unsigned int packets_sent;
unsigned int packets_received;
unsigned int link_speed;
unsigned int link_width;
enum tb_link_width link_width;
unsigned int crc_errors;
unsigned int buffer_overflow_errors;
enum dma_test_result result;
@@ -465,9 +465,9 @@ DMA_TEST_DEBUGFS_ATTR(packets_to_send, packets_to_send_get,
static int dma_test_set_bonding(struct dma_test *dt)
{
switch (dt->link_width) {
case 2:
case TB_LINK_WIDTH_DUAL:
return tb_xdomain_lane_bonding_enable(dt->xd);
case 1:
case TB_LINK_WIDTH_SINGLE:
tb_xdomain_lane_bonding_disable(dt->xd);
fallthrough;
default:
@@ -490,12 +490,8 @@ static void dma_test_check_errors(struct dma_test *dt, int ret)
if (!dt->error_code) {
if (dt->link_speed && dt->xd->link_speed != dt->link_speed) {
dt->error_code = DMA_TEST_SPEED_ERROR;
} else if (dt->link_width) {
const struct tb_xdomain *xd = dt->xd;
if ((dt->link_width == 1 && xd->link_width != TB_LINK_WIDTH_SINGLE) ||
(dt->link_width == 2 && xd->link_width < TB_LINK_WIDTH_DUAL))
dt->error_code = DMA_TEST_WIDTH_ERROR;
} else if (dt->link_width && dt->link_width != dt->xd->link_width) {
dt->error_code = DMA_TEST_WIDTH_ERROR;
} else if (dt->packets_to_send != dt->packets_sent ||
dt->packets_to_receive != dt->packets_received ||
dt->crc_errors || dt->buffer_overflow_errors) {


@@ -19,9 +19,9 @@ static void tb_dump_hop(const struct tb_path_hop *hop, const struct tb_regs_hop
tb_port_dbg(port, " In HopID: %d => Out port: %d Out HopID: %d\n",
hop->in_hop_index, regs->out_port, regs->next_hop);
tb_port_dbg(port, " Weight: %d Priority: %d Credits: %d Drop: %d\n",
regs->weight, regs->priority,
regs->initial_credits, regs->drop_packages);
tb_port_dbg(port, " Weight: %d Priority: %d Credits: %d Drop: %d PM: %d\n",
regs->weight, regs->priority, regs->initial_credits,
regs->drop_packages, regs->pmps);
tb_port_dbg(port, " Counter enabled: %d Counter index: %d\n",
regs->counter_enable, regs->counter);
tb_port_dbg(port, " Flow Control (In/Eg): %d/%d Shared Buffer (In/Eg): %d/%d\n",
@@ -535,6 +535,7 @@ int tb_path_activate(struct tb_path *path)
hop.next_hop = path->hops[i].next_hop_index;
hop.out_port = path->hops[i].out_port->port;
hop.initial_credits = path->hops[i].initial_credits;
hop.pmps = path->hops[i].pm_support;
hop.unknown1 = 0;
hop.enable = 1;


@@ -31,6 +31,9 @@ static void quirk_usb3_maximum_bandwidth(struct tb_switch *sw)
{
struct tb_port *port;
if (tb_switch_is_icm(sw))
return;
tb_switch_for_each_port(sw, port) {
if (!tb_port_is_usb3_down(port))
continue;


@@ -94,6 +94,7 @@ static int tb_retimer_nvm_add(struct tb_retimer *rt)
goto err_nvm;
rt->nvm = nvm;
dev_dbg(&rt->dev, "NVM version %x.%x\n", nvm->major, nvm->minor);
return 0;
err_nvm:


@ -372,6 +372,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
ret = tb_nvm_add_active(nvm, nvm_read);
if (ret)
goto err_nvm;
tb_sw_dbg(sw, "NVM version %x.%x\n", nvm->major, nvm->minor);
}
if (!sw->no_nvm_upgrade) {
@ -914,6 +915,48 @@ int tb_port_get_link_speed(struct tb_port *port)
}
}
/**
* tb_port_get_link_generation() - Returns link generation
* @port: Lane adapter
*
* Returns the link generation as a number or negative errno in case of
* failure. Does not distinguish between Thunderbolt 1 and Thunderbolt 2
* links, so for those it always returns 2.
*/
int tb_port_get_link_generation(struct tb_port *port)
{
int ret;
ret = tb_port_get_link_speed(port);
if (ret < 0)
return ret;
switch (ret) {
case 40:
return 4;
case 20:
return 3;
default:
return 2;
}
}
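The speed-to-generation mapping above can be sketched as a minimal userspace C function; `link_generation` is a hypothetical stand-in for `tb_port_get_link_generation()` with the register read factored out:

```c
#include <assert.h>

/* Map a link speed in Gb/s to a USB4/Thunderbolt link generation,
 * mirroring the switch statement in tb_port_get_link_generation():
 * 40 Gb/s is Gen 4 (USB4 v2), 20 Gb/s is Gen 3, and anything else is
 * treated as Gen 2 since Thunderbolt 1 and 2 are not distinguished. */
static int link_generation(int speed_gbps)
{
	switch (speed_gbps) {
	case 40:
		return 4;
	case 20:
		return 3;
	default:
		return 2;
	}
}
```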
static const char *width_name(enum tb_link_width width)
{
switch (width) {
case TB_LINK_WIDTH_SINGLE:
return "symmetric, single lane";
case TB_LINK_WIDTH_DUAL:
return "symmetric, dual lanes";
case TB_LINK_WIDTH_ASYM_TX:
return "asymmetric, 3 transmitters, 1 receiver";
case TB_LINK_WIDTH_ASYM_RX:
return "asymmetric, 3 receivers, 1 transmitter";
default:
return "unknown";
}
}
/**
* tb_port_get_link_width() - Get current link width
* @port: Port to check (USB4 or CIO)
@ -939,8 +982,15 @@ int tb_port_get_link_width(struct tb_port *port)
LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT;
}
static bool tb_port_is_width_supported(struct tb_port *port,
unsigned int width_mask)
/**
* tb_port_width_supported() - Is the given link width supported
* @port: Port to check
* @width: Widths to check (bitmask)
*
* Can be called for any lane adapter. Checks if the given @width is
* supported by the hardware and returns %true if it is.
*/
bool tb_port_width_supported(struct tb_port *port, unsigned int width)
{
u32 phy, widths;
int ret;
@ -948,20 +998,23 @@ static bool tb_port_is_width_supported(struct tb_port *port,
if (!port->cap_phy)
return false;
if (width & (TB_LINK_WIDTH_ASYM_TX | TB_LINK_WIDTH_ASYM_RX)) {
if (tb_port_get_link_generation(port) < 4 ||
!usb4_port_asym_supported(port))
return false;
}
ret = tb_port_read(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_0, 1);
if (ret)
return false;
widths = (phy & LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK) >>
LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT;
return widths & width_mask;
}
static bool is_gen4_link(struct tb_port *port)
{
return tb_port_get_link_speed(port) > 20;
/*
* The field encoding is the same as &enum tb_link_width (which is
* passed to @width).
*/
widths = FIELD_GET(LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK, phy);
return widths & width;
}
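Because the register field encoding matches `enum tb_link_width`, the supported-width test reduces to extracting the field and ANDing it with the requested bitmask. A userspace sketch, with a simplified stand-in for the kernel's `FIELD_GET()` and illustrative (not the real) mask bit positions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's FIELD_GET(): divide by the
 * mask's lowest set bit to shift the masked value down. */
#define FIELD_GET_SIMPLE(mask, reg) (((reg) & (mask)) / ((mask) & -(mask)))

/* Width bits, encoded the same way in the register field and in the
 * width bitmask passed by callers (as in enum tb_link_width). */
enum {
	WIDTH_SINGLE  = 0x1,
	WIDTH_DUAL    = 0x2,
	WIDTH_ASYM_TX = 0x4,
	WIDTH_ASYM_RX = 0x8,
};

/* Hypothetical field at bits 9:6; the real mask is
 * LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK. */
#define SUPPORTED_WIDTH_MASK 0x3c0u

static bool width_supported(uint32_t lane_adp_cs_0, unsigned int width)
{
	/* Field encoding matches the width bitmask, so a plain AND works. */
	return FIELD_GET_SIMPLE(SUPPORTED_WIDTH_MASK, lane_adp_cs_0) & width;
}
```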
/**
@ -991,15 +1044,23 @@ int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width)
switch (width) {
case TB_LINK_WIDTH_SINGLE:
/* Gen 4 link cannot be single */
if (is_gen4_link(port))
if (tb_port_get_link_generation(port) >= 4)
return -EOPNOTSUPP;
val |= LANE_ADP_CS_1_TARGET_WIDTH_SINGLE <<
LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
break;
case TB_LINK_WIDTH_DUAL:
if (tb_port_get_link_generation(port) >= 4)
return usb4_port_asym_set_link_width(port, width);
val |= LANE_ADP_CS_1_TARGET_WIDTH_DUAL <<
LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
break;
case TB_LINK_WIDTH_ASYM_TX:
case TB_LINK_WIDTH_ASYM_RX:
return usb4_port_asym_set_link_width(port, width);
default:
return -EINVAL;
}
@ -1124,7 +1185,7 @@ void tb_port_lane_bonding_disable(struct tb_port *port)
/**
* tb_port_wait_for_link_width() - Wait until link reaches specific width
* @port: Port to wait for
* @width_mask: Expected link width mask
* @width: Expected link width (bitmask)
* @timeout_msec: Timeout in ms how long to wait
*
* Should be used after both ends of the link have been bonded (or
@ -1133,14 +1194,15 @@ void tb_port_lane_bonding_disable(struct tb_port *port)
* within the given timeout, %0 if it did. Can be passed a mask of
* expected widths and succeeds if any of the widths is reached.
*/
int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width_mask,
int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width,
int timeout_msec)
{
ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
int ret;
/* Gen 4 link does not support single lane */
if ((width_mask & TB_LINK_WIDTH_SINGLE) && is_gen4_link(port))
if ((width & TB_LINK_WIDTH_SINGLE) &&
tb_port_get_link_generation(port) >= 4)
return -EOPNOTSUPP;
do {
@ -1153,7 +1215,7 @@ int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width_mask,
*/
if (ret != -EACCES)
return ret;
} else if (ret & width_mask) {
} else if (ret & width) {
return 0;
}
@ -1332,7 +1394,7 @@ int tb_pci_port_enable(struct tb_port *port, bool enable)
* tb_dp_port_hpd_is_active() - Is HPD already active
* @port: DP out port to check
*
* Checks if the DP OUT adapter port has HDP bit already set.
* Checks if the DP OUT adapter port has HPD bit already set.
*/
int tb_dp_port_hpd_is_active(struct tb_port *port)
{
@ -1344,14 +1406,14 @@ int tb_dp_port_hpd_is_active(struct tb_port *port)
if (ret)
return ret;
return !!(data & ADP_DP_CS_2_HDP);
return !!(data & ADP_DP_CS_2_HPD);
}
/**
* tb_dp_port_hpd_clear() - Clear HPD from DP IN port
* @port: Port to clear HPD
*
* If the DP IN port has HDP set, this function can be used to clear it.
* If the DP IN port has HPD set, this function can be used to clear it.
*/
int tb_dp_port_hpd_clear(struct tb_port *port)
{
@ -1363,7 +1425,7 @@ int tb_dp_port_hpd_clear(struct tb_port *port)
if (ret)
return ret;
data |= ADP_DP_CS_3_HDPC;
data |= ADP_DP_CS_3_HPDC;
return tb_port_write(port, &data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_3, 1);
}
@ -2697,6 +2759,38 @@ static int tb_switch_update_link_attributes(struct tb_switch *sw)
return 0;
}
/* Must be called after tb_switch_update_link_attributes() */
static void tb_switch_link_init(struct tb_switch *sw)
{
struct tb_port *up, *down;
bool bonded;
if (!tb_route(sw) || tb_switch_is_icm(sw))
return;
tb_sw_dbg(sw, "current link speed %u.0 Gb/s\n", sw->link_speed);
tb_sw_dbg(sw, "current link width %s\n", width_name(sw->link_width));
bonded = sw->link_width >= TB_LINK_WIDTH_DUAL;
/*
* Gen 4 links come up as bonded so update the port structures
* accordingly.
*/
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
up->bonded = bonded;
if (up->dual_link_port)
up->dual_link_port->bonded = bonded;
tb_port_update_credits(up);
down->bonded = bonded;
if (down->dual_link_port)
down->dual_link_port->bonded = bonded;
tb_port_update_credits(down);
}
/**
* tb_switch_lane_bonding_enable() - Enable lane bonding
* @sw: Switch to enable lane bonding
@ -2705,24 +2799,20 @@ static int tb_switch_update_link_attributes(struct tb_switch *sw)
* switch. If conditions are correct and both switches support the feature,
* lanes are bonded. It is safe to call this to any switch.
*/
int tb_switch_lane_bonding_enable(struct tb_switch *sw)
static int tb_switch_lane_bonding_enable(struct tb_switch *sw)
{
struct tb_port *up, *down;
u64 route = tb_route(sw);
unsigned int width_mask;
unsigned int width;
int ret;
if (!route)
return 0;
if (!tb_switch_lane_bonding_possible(sw))
return 0;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
if (!tb_port_is_width_supported(up, TB_LINK_WIDTH_DUAL) ||
!tb_port_is_width_supported(down, TB_LINK_WIDTH_DUAL))
if (!tb_port_width_supported(up, TB_LINK_WIDTH_DUAL) ||
!tb_port_width_supported(down, TB_LINK_WIDTH_DUAL))
return 0;
/*
@ -2746,21 +2836,10 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
}
/* All of these widths mean the link is bonded */
width_mask = TB_LINK_WIDTH_DUAL | TB_LINK_WIDTH_ASYM_TX |
TB_LINK_WIDTH_ASYM_RX;
width = TB_LINK_WIDTH_DUAL | TB_LINK_WIDTH_ASYM_TX |
TB_LINK_WIDTH_ASYM_RX;
ret = tb_port_wait_for_link_width(down, width_mask, 100);
if (ret) {
tb_port_warn(down, "timeout enabling lane bonding\n");
return ret;
}
tb_port_update_credits(down);
tb_port_update_credits(up);
tb_switch_update_link_attributes(sw);
tb_sw_dbg(sw, "lane bonding enabled\n");
return ret;
return tb_port_wait_for_link_width(down, width, 100);
}
/**
@ -2770,20 +2849,27 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
* Disables lane bonding between @sw and parent. This can be called even
* if lanes were not bonded originally.
*/
void tb_switch_lane_bonding_disable(struct tb_switch *sw)
static int tb_switch_lane_bonding_disable(struct tb_switch *sw)
{
struct tb_port *up, *down;
int ret;
if (!tb_route(sw))
return;
up = tb_upstream_port(sw);
if (!up->bonded)
return;
return 0;
/*
* If the link is Gen 4 there is no way to switch the link to
* two single lane links so avoid that here. Also don't bother
* if the link is not up anymore (sw is unplugged).
*/
ret = tb_port_get_link_generation(up);
if (ret < 0)
return ret;
if (ret >= 4)
return -EOPNOTSUPP;
down = tb_switch_downstream_port(sw);
tb_port_lane_bonding_disable(up);
tb_port_lane_bonding_disable(down);
@ -2791,15 +2877,160 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
* It is fine if we get other errors as the router might have
* been unplugged.
*/
ret = tb_port_wait_for_link_width(down, TB_LINK_WIDTH_SINGLE, 100);
if (ret == -ETIMEDOUT)
tb_sw_warn(sw, "timeout disabling lane bonding\n");
return tb_port_wait_for_link_width(down, TB_LINK_WIDTH_SINGLE, 100);
}
static int tb_switch_asym_enable(struct tb_switch *sw, enum tb_link_width width)
{
struct tb_port *up, *down, *port;
enum tb_link_width down_width;
int ret;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
if (width == TB_LINK_WIDTH_ASYM_TX) {
down_width = TB_LINK_WIDTH_ASYM_RX;
port = down;
} else {
down_width = TB_LINK_WIDTH_ASYM_TX;
port = up;
}
ret = tb_port_set_link_width(up, width);
if (ret)
return ret;
ret = tb_port_set_link_width(down, down_width);
if (ret)
return ret;
/*
* Initiate the change in the router that one of its TX lanes is
* changing to RX but do so only if there is an actual change.
*/
if (sw->link_width != width) {
ret = usb4_port_asym_start(port);
if (ret)
return ret;
ret = tb_port_wait_for_link_width(up, width, 100);
if (ret)
return ret;
}
sw->link_width = width;
return 0;
}
static int tb_switch_asym_disable(struct tb_switch *sw)
{
struct tb_port *up, *down;
int ret;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
ret = tb_port_set_link_width(up, TB_LINK_WIDTH_DUAL);
if (ret)
return ret;
ret = tb_port_set_link_width(down, TB_LINK_WIDTH_DUAL);
if (ret)
return ret;
/*
* Initiate the change in the router that has three TX lanes and
* is changing one of its TX lanes to RX but only if there is a
* change in the link width.
*/
if (sw->link_width > TB_LINK_WIDTH_DUAL) {
if (sw->link_width == TB_LINK_WIDTH_ASYM_TX)
ret = usb4_port_asym_start(up);
else
ret = usb4_port_asym_start(down);
if (ret)
return ret;
ret = tb_port_wait_for_link_width(up, TB_LINK_WIDTH_DUAL, 100);
if (ret)
return ret;
}
sw->link_width = TB_LINK_WIDTH_DUAL;
return 0;
}
/**
* tb_switch_set_link_width() - Configure router link width
* @sw: Router to configure
* @width: The new link width
*
* Set device router link width to @width from router upstream port
* perspective. Also supports asymmetric links if the routers on both
* sides of the link support it.
*
* Does nothing for host router.
*
* Returns %0 in case of success, negative errno otherwise.
*/
int tb_switch_set_link_width(struct tb_switch *sw, enum tb_link_width width)
{
struct tb_port *up, *down;
int ret = 0;
if (!tb_route(sw))
return 0;
up = tb_upstream_port(sw);
down = tb_switch_downstream_port(sw);
switch (width) {
case TB_LINK_WIDTH_SINGLE:
ret = tb_switch_lane_bonding_disable(sw);
break;
case TB_LINK_WIDTH_DUAL:
if (sw->link_width == TB_LINK_WIDTH_ASYM_TX ||
sw->link_width == TB_LINK_WIDTH_ASYM_RX) {
ret = tb_switch_asym_disable(sw);
if (ret)
break;
}
ret = tb_switch_lane_bonding_enable(sw);
break;
case TB_LINK_WIDTH_ASYM_TX:
case TB_LINK_WIDTH_ASYM_RX:
ret = tb_switch_asym_enable(sw, width);
break;
}
switch (ret) {
case 0:
break;
case -ETIMEDOUT:
tb_sw_warn(sw, "timeout changing link width\n");
return ret;
case -ENOTCONN:
case -EOPNOTSUPP:
case -ENODEV:
return ret;
default:
tb_sw_dbg(sw, "failed to change link width: %d\n", ret);
return ret;
}
tb_port_update_credits(down);
tb_port_update_credits(up);
tb_switch_update_link_attributes(sw);
tb_sw_dbg(sw, "lane bonding disabled\n");
tb_sw_dbg(sw, "link width set to %s\n", width_name(width));
return ret;
}
/**
@ -2959,6 +3190,8 @@ int tb_switch_add(struct tb_switch *sw)
if (ret)
return ret;
tb_switch_link_init(sw);
ret = tb_switch_clx_init(sw);
if (ret)
return ret;

File diff suppressed because it is too large.


@ -162,11 +162,6 @@ struct tb_switch_tmu {
* switches) you need to have domain lock held.
*
* In USB4 terminology this structure represents a router.
*
* Note @link_width is not the same as whether link is bonded or not.
* For Gen 4 links the link is also bonded when it is asymmetric. The
* correct way to find out whether the link is bonded or not is to look
* @bonded field of the upstream port.
*/
struct tb_switch {
struct device dev;
@ -348,6 +343,7 @@ struct tb_retimer {
* the path
* @nfc_credits: Number of non-flow controlled buffers allocated for the
* @in_port.
* @pm_support: Set path PM packet support bit to 1 (for USB4 v2 routers)
*
* Hop configuration is always done on the IN port of a switch.
* in_port and out_port have to be on the same switch. Packets arriving on
@ -368,6 +364,7 @@ struct tb_path_hop {
int next_hop_index;
unsigned int initial_credits;
unsigned int nfc_credits;
bool pm_support;
};
/**
@ -864,6 +861,15 @@ static inline struct tb_port *tb_switch_downstream_port(struct tb_switch *sw)
return tb_port_at(tb_route(sw), tb_switch_parent(sw));
}
/**
* tb_switch_depth() - Returns depth of the connected router
* @sw: Router
*/
static inline int tb_switch_depth(const struct tb_switch *sw)
{
return sw->config.depth;
}
static inline bool tb_switch_is_light_ridge(const struct tb_switch *sw)
{
return sw->config.vendor_id == PCI_VENDOR_ID_INTEL &&
@ -956,8 +962,7 @@ static inline bool tb_switch_is_icm(const struct tb_switch *sw)
return !sw->config.enabled;
}
int tb_switch_lane_bonding_enable(struct tb_switch *sw);
void tb_switch_lane_bonding_disable(struct tb_switch *sw);
int tb_switch_set_link_width(struct tb_switch *sw, enum tb_link_width width);
int tb_switch_configure_link(struct tb_switch *sw);
void tb_switch_unconfigure_link(struct tb_switch *sw);
@ -1001,7 +1006,6 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx);
int tb_switch_clx_init(struct tb_switch *sw);
bool tb_switch_clx_is_supported(const struct tb_switch *sw);
int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx);
int tb_switch_clx_disable(struct tb_switch *sw);
@ -1040,6 +1044,21 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid);
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
struct tb_port *prev);
/**
* tb_port_path_direction_downstream() - Checks if path directed downstream
* @src: Source adapter
* @dst: Destination adapter
*
* Returns %true only if the specified path from source adapter (@src)
* to destination adapter (@dst) is directed downstream.
*/
static inline bool
tb_port_path_direction_downstream(const struct tb_port *src,
const struct tb_port *dst)
{
return src->sw->config.depth < dst->sw->config.depth;
}
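The downstream check above relies only on router depth: a path heads downstream exactly when its source router sits closer to the host. A minimal sketch with the structures pared down to the one field the comparison uses:

```c
#include <assert.h>
#include <stdbool.h>

/* Pared-down stand-ins for struct tb_switch and struct tb_port,
 * keeping just the depth field that the direction check needs. */
struct tb_switch { int depth; };
struct tb_port { struct tb_switch *sw; };

/* Sketch of tb_port_path_direction_downstream(): the path is directed
 * downstream when the source router is closer to the host (smaller
 * depth) than the destination router. */
static bool path_direction_downstream(const struct tb_port *src,
				      const struct tb_port *dst)
{
	return src->sw->depth < dst->sw->depth;
}
```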
static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
{
return tb_port_is_null(port) && port->sw->credit_allocation;
@ -1057,12 +1076,29 @@ static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
for ((p) = tb_next_port_on_path((src), (dst), NULL); (p); \
(p) = tb_next_port_on_path((src), (dst), (p)))
/**
* tb_for_each_upstream_port_on_path() - Iterate over each upstream port on path
* @src: Source port
* @dst: Destination port
* @p: Port used as iterator
*
* Walks over each upstream lane adapter on path from @src to @dst.
*/
#define tb_for_each_upstream_port_on_path(src, dst, p) \
for ((p) = tb_next_port_on_path((src), (dst), NULL); (p); \
(p) = tb_next_port_on_path((src), (dst), (p))) \
if (!tb_port_is_null((p)) || !tb_is_upstream_port((p))) {\
continue; \
} else
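The macro above uses the `if (unwanted) continue; else` idiom so that the caller's loop body binds to the `else` branch and filtered items are skipped transparently. A self-contained sketch of the same idiom over a plain array (the macro name and filter are illustrative only):

```c
#include <assert.h>

/* Filtering-iterator idiom as used by tb_for_each_upstream_port_on_path():
 * wrap a for loop, then "if (!wanted) continue; else" so the statement
 * the caller writes after the macro attaches to the else branch. Here
 * the filter keeps only even values. */
#define for_each_even(arr, n, i, v)					\
	for ((i) = 0; (i) < (n) && ((v) = (arr)[(i)], 1); (i)++)	\
		if ((v) % 2) {						\
			continue;					\
		} else
```

Users then write `for_each_even(arr, n, i, v) use(v);` exactly as with a normal for-each macro.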
int tb_port_get_link_speed(struct tb_port *port);
int tb_port_get_link_generation(struct tb_port *port);
int tb_port_get_link_width(struct tb_port *port);
bool tb_port_width_supported(struct tb_port *port, unsigned int width);
int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width);
int tb_port_lane_bonding_enable(struct tb_port *port);
void tb_port_lane_bonding_disable(struct tb_port *port);
int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width_mask,
int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width,
int timeout_msec);
int tb_port_update_credits(struct tb_port *port);
@ -1256,6 +1292,11 @@ int usb4_port_router_online(struct tb_port *port);
int usb4_port_enumerate_retimers(struct tb_port *port);
bool usb4_port_clx_supported(struct tb_port *port);
int usb4_port_margining_caps(struct tb_port *port, u32 *caps);
bool usb4_port_asym_supported(struct tb_port *port);
int usb4_port_asym_set_link_width(struct tb_port *port, enum tb_link_width width);
int usb4_port_asym_start(struct tb_port *port);
int usb4_port_hw_margin(struct tb_port *port, unsigned int lanes,
unsigned int ber_level, bool timing, bool right_high,
u32 *results);
@ -1283,7 +1324,6 @@ int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
unsigned int address, void *buf, size_t size);
int usb4_usb3_port_max_link_rate(struct tb_port *port);
int usb4_usb3_port_actual_link_rate(struct tb_port *port);
int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw);
int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,


@ -346,10 +346,14 @@ struct tb_regs_port_header {
#define LANE_ADP_CS_1 0x01
#define LANE_ADP_CS_1_TARGET_SPEED_MASK GENMASK(3, 0)
#define LANE_ADP_CS_1_TARGET_SPEED_GEN3 0xc
#define LANE_ADP_CS_1_TARGET_WIDTH_MASK GENMASK(9, 4)
#define LANE_ADP_CS_1_TARGET_WIDTH_MASK GENMASK(5, 4)
#define LANE_ADP_CS_1_TARGET_WIDTH_SHIFT 4
#define LANE_ADP_CS_1_TARGET_WIDTH_SINGLE 0x1
#define LANE_ADP_CS_1_TARGET_WIDTH_DUAL 0x3
#define LANE_ADP_CS_1_TARGET_WIDTH_ASYM_MASK GENMASK(7, 6)
#define LANE_ADP_CS_1_TARGET_WIDTH_ASYM_TX 0x1
#define LANE_ADP_CS_1_TARGET_WIDTH_ASYM_RX 0x2
#define LANE_ADP_CS_1_TARGET_WIDTH_ASYM_DUAL 0x0
#define LANE_ADP_CS_1_CL0S_ENABLE BIT(10)
#define LANE_ADP_CS_1_CL1_ENABLE BIT(11)
#define LANE_ADP_CS_1_CL2_ENABLE BIT(12)
@ -382,12 +386,15 @@ struct tb_regs_port_header {
#define PORT_CS_18_WOCS BIT(16)
#define PORT_CS_18_WODS BIT(17)
#define PORT_CS_18_WOU4S BIT(18)
#define PORT_CS_18_CSA BIT(22)
#define PORT_CS_18_TIP BIT(24)
#define PORT_CS_19 0x13
#define PORT_CS_19_PC BIT(3)
#define PORT_CS_19_PID BIT(4)
#define PORT_CS_19_WOC BIT(16)
#define PORT_CS_19_WOD BIT(17)
#define PORT_CS_19_WOU4 BIT(18)
#define PORT_CS_19_START_ASYM BIT(24)
/* Display Port adapter registers */
#define ADP_DP_CS_0 0x00
@ -400,7 +407,7 @@ struct tb_regs_port_header {
#define ADP_DP_CS_1_AUX_RX_HOPID_SHIFT 11
#define ADP_DP_CS_2 0x02
#define ADP_DP_CS_2_NRD_MLC_MASK GENMASK(2, 0)
#define ADP_DP_CS_2_HDP BIT(6)
#define ADP_DP_CS_2_HPD BIT(6)
#define ADP_DP_CS_2_NRD_MLR_MASK GENMASK(9, 7)
#define ADP_DP_CS_2_NRD_MLR_SHIFT 7
#define ADP_DP_CS_2_CA BIT(10)
@ -417,7 +424,7 @@ struct tb_regs_port_header {
#define ADP_DP_CS_2_ESTIMATED_BW_MASK GENMASK(31, 24)
#define ADP_DP_CS_2_ESTIMATED_BW_SHIFT 24
#define ADP_DP_CS_3 0x03
#define ADP_DP_CS_3_HDPC BIT(9)
#define ADP_DP_CS_3_HPDC BIT(9)
#define DP_LOCAL_CAP 0x04
#define DP_REMOTE_CAP 0x05
/* For DP IN adapter */
@ -484,9 +491,6 @@ struct tb_regs_port_header {
#define ADP_USB3_CS_3 0x03
#define ADP_USB3_CS_3_SCALE_MASK GENMASK(5, 0)
#define ADP_USB3_CS_4 0x04
#define ADP_USB3_CS_4_ALR_MASK GENMASK(6, 0)
#define ADP_USB3_CS_4_ALR_20G 0x1
#define ADP_USB3_CS_4_ULV BIT(7)
#define ADP_USB3_CS_4_MSLR_MASK GENMASK(18, 12)
#define ADP_USB3_CS_4_MSLR_SHIFT 12
#define ADP_USB3_CS_4_MSLR_20G 0x1
@ -499,7 +503,8 @@ struct tb_regs_hop {
* out_port (on the incoming port of the next switch)
*/
u32 out_port:6; /* next port of the path (on the same switch) */
u32 initial_credits:8;
u32 initial_credits:7;
u32 pmps:1;
u32 unknown1:6; /* set to zero */
bool enable:1;


@ -21,12 +21,18 @@
#define TB_PCI_PATH_DOWN 0
#define TB_PCI_PATH_UP 1
#define TB_PCI_PRIORITY 3
#define TB_PCI_WEIGHT 1
/* USB3 adapters use always HopID of 8 for both directions */
#define TB_USB3_HOPID 8
#define TB_USB3_PATH_DOWN 0
#define TB_USB3_PATH_UP 1
#define TB_USB3_PRIORITY 3
#define TB_USB3_WEIGHT 2
/* DP adapters use HopID 8 for AUX and 9 for Video */
#define TB_DP_AUX_TX_HOPID 8
#define TB_DP_AUX_RX_HOPID 8
@ -36,6 +42,12 @@
#define TB_DP_AUX_PATH_OUT 1
#define TB_DP_AUX_PATH_IN 2
#define TB_DP_VIDEO_PRIORITY 1
#define TB_DP_VIDEO_WEIGHT 1
#define TB_DP_AUX_PRIORITY 2
#define TB_DP_AUX_WEIGHT 1
/* Minimum number of credits needed for PCIe path */
#define TB_MIN_PCIE_CREDITS 6U
/*
@ -46,6 +58,18 @@
/* Minimum number of credits for DMA path */
#define TB_MIN_DMA_CREDITS 1
#define TB_DMA_PRIORITY 5
#define TB_DMA_WEIGHT 1
/*
* Reserve additional bandwidth for USB 3.x and PCIe bulk traffic
* according to USB4 v2 Connection Manager guide. This ends up reserving
* 1500 Mb/s for PCIe and 3000 Mb/s for USB 3.x taking weights into
* account.
*/
#define USB4_V2_PCI_MIN_BANDWIDTH (1500 * TB_PCI_WEIGHT)
#define USB4_V2_USB3_MIN_BANDWIDTH (1500 * TB_USB3_WEIGHT)
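The reservation amounts fall out of a 1500 Mb/s base scaled by each protocol's scheduler weight, which is how the comment arrives at 1500 Mb/s for PCIe and 3000 Mb/s for USB 3.x. Restating the macros as standalone C:

```c
#include <assert.h>

/* Scheduler weights from the tunnel definitions above. */
#define TB_PCI_WEIGHT	1
#define TB_USB3_WEIGHT	2

/* USB4 v2 Connection Manager guide reservation: a 1500 Mb/s base
 * scaled by the protocol weight, so PCIe (weight 1) reserves 1500 Mb/s
 * and USB 3.x (weight 2) reserves 3000 Mb/s. */
#define USB4_V2_PCI_MIN_BANDWIDTH	(1500 * TB_PCI_WEIGHT)
#define USB4_V2_USB3_MIN_BANDWIDTH	(1500 * TB_USB3_WEIGHT)
```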
static unsigned int dma_credits = TB_DMA_CREDITS;
module_param(dma_credits, uint, 0444);
MODULE_PARM_DESC(dma_credits, "specify custom credits for DMA tunnels (default: "
@ -58,27 +82,6 @@ MODULE_PARM_DESC(bw_alloc_mode,
static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
do { \
struct tb_tunnel *__tunnel = (tunnel); \
level(__tunnel->tb, "%llx:%u <-> %llx:%u (%s): " fmt, \
tb_route(__tunnel->src_port->sw), \
__tunnel->src_port->port, \
tb_route(__tunnel->dst_port->sw), \
__tunnel->dst_port->port, \
tb_tunnel_names[__tunnel->type], \
## arg); \
} while (0)
#define tb_tunnel_WARN(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_WARN, tunnel, fmt, ##arg)
#define tb_tunnel_warn(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_warn, tunnel, fmt, ##arg)
#define tb_tunnel_info(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_info, tunnel, fmt, ##arg)
#define tb_tunnel_dbg(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_dbg, tunnel, fmt, ##arg)
static inline unsigned int tb_usable_credits(const struct tb_port *port)
{
return port->total_credits - port->ctl_credits;
@ -131,6 +134,16 @@ static unsigned int tb_available_credits(const struct tb_port *port,
return credits > 0 ? credits : 0;
}
static void tb_init_pm_support(struct tb_path_hop *hop)
{
struct tb_port *out_port = hop->out_port;
struct tb_port *in_port = hop->in_port;
if (tb_port_is_null(in_port) && tb_port_is_null(out_port) &&
usb4_switch_version(in_port->sw) >= 2)
hop->pm_support = true;
}
static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
enum tb_tunnel_type type)
{
@ -156,11 +169,11 @@ static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
static int tb_pci_set_ext_encapsulation(struct tb_tunnel *tunnel, bool enable)
{
struct tb_port *port = tb_upstream_port(tunnel->dst_port->sw);
int ret;
/* Only supported if both routers are at least USB4 v2 */
if (usb4_switch_version(tunnel->src_port->sw) < 2 ||
usb4_switch_version(tunnel->dst_port->sw) < 2)
if (tb_port_get_link_generation(port) < 4)
return 0;
ret = usb4_pci_port_set_ext_encapsulation(tunnel->src_port, enable);
@ -234,8 +247,8 @@ static int tb_pci_init_path(struct tb_path *path)
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_fc_enable = TB_PATH_ALL;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 3;
path->weight = 1;
path->priority = TB_PCI_PRIORITY;
path->weight = TB_PCI_WEIGHT;
path->drop_packages = 0;
tb_path_for_each_hop(path, hop) {
@ -376,6 +389,51 @@ err_free:
return NULL;
}
/**
* tb_tunnel_reserved_pci() - Amount of bandwidth to reserve for PCIe
* @port: Lane 0 adapter
* @reserved_up: Upstream bandwidth in Mb/s to reserve
* @reserved_down: Downstream bandwidth in Mb/s to reserve
*
* Can be called for any connected lane 0 adapter to find out how much
* bandwidth needs to be left in reserve for possible PCIe bulk traffic.
* Returns true if there is something to be reserved and writes the
* amount to @reserved_down/@reserved_up. Otherwise returns false and
* does not touch the parameters.
*/
bool tb_tunnel_reserved_pci(struct tb_port *port, int *reserved_up,
int *reserved_down)
{
if (WARN_ON_ONCE(!port->remote))
return false;
if (!tb_acpi_may_tunnel_pcie())
return false;
if (tb_port_get_link_generation(port) < 4)
return false;
/* Must have PCIe adapters */
if (tb_is_upstream_port(port)) {
if (!tb_switch_find_port(port->sw, TB_TYPE_PCIE_UP))
return false;
if (!tb_switch_find_port(port->remote->sw, TB_TYPE_PCIE_DOWN))
return false;
} else {
if (!tb_switch_find_port(port->sw, TB_TYPE_PCIE_DOWN))
return false;
if (!tb_switch_find_port(port->remote->sw, TB_TYPE_PCIE_UP))
return false;
}
*reserved_up = USB4_V2_PCI_MIN_BANDWIDTH;
*reserved_down = USB4_V2_PCI_MIN_BANDWIDTH;
tb_port_dbg(port, "reserving %u/%u Mb/s for PCIe\n", *reserved_up,
*reserved_down);
return true;
}
static bool tb_dp_is_usb4(const struct tb_switch *sw)
{
/* Titan Ridge DP adapters need the same treatment as USB4 */
@ -614,8 +672,9 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
in_rate = tb_dp_cap_get_rate(in_dp_cap);
in_lanes = tb_dp_cap_get_lanes(in_dp_cap);
tb_port_dbg(in, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
in_rate, in_lanes, tb_dp_bandwidth(in_rate, in_lanes));
tb_tunnel_dbg(tunnel,
"DP IN maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
in_rate, in_lanes, tb_dp_bandwidth(in_rate, in_lanes));
/*
* If the tunnel bandwidth is limited (max_bw is set) then see
@ -624,10 +683,11 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
out_rate = tb_dp_cap_get_rate(out_dp_cap);
out_lanes = tb_dp_cap_get_lanes(out_dp_cap);
bw = tb_dp_bandwidth(out_rate, out_lanes);
tb_port_dbg(out, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
out_rate, out_lanes, bw);
tb_tunnel_dbg(tunnel,
"DP OUT maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
out_rate, out_lanes, bw);
if (in->sw->config.depth < out->sw->config.depth)
if (tb_port_path_direction_downstream(in, out))
max_bw = tunnel->max_down;
else
max_bw = tunnel->max_up;
@ -639,13 +699,14 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
out_rate, out_lanes, &new_rate,
&new_lanes);
if (ret) {
tb_port_info(out, "not enough bandwidth for DP tunnel\n");
tb_tunnel_info(tunnel, "not enough bandwidth\n");
return ret;
}
new_bw = tb_dp_bandwidth(new_rate, new_lanes);
tb_port_dbg(out, "bandwidth reduced to %u Mb/s x%u = %u Mb/s\n",
new_rate, new_lanes, new_bw);
tb_tunnel_dbg(tunnel,
"bandwidth reduced to %u Mb/s x%u = %u Mb/s\n",
new_rate, new_lanes, new_bw);
/*
* Set new rate and number of lanes before writing it to
@ -662,7 +723,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
*/
if (tb_route(out->sw) && tb_switch_is_titan_ridge(out->sw)) {
out_dp_cap |= DP_COMMON_CAP_LTTPR_NS;
tb_port_dbg(out, "disabling LTTPR\n");
tb_tunnel_dbg(tunnel, "disabling LTTPR\n");
}
return tb_port_write(in, &out_dp_cap, TB_CFG_PORT,
@ -712,8 +773,8 @@ static int tb_dp_bandwidth_alloc_mode_enable(struct tb_tunnel *tunnel)
lanes = min(in_lanes, out_lanes);
tmp = tb_dp_bandwidth(rate, lanes);
tb_port_dbg(in, "non-reduced bandwidth %u Mb/s x%u = %u Mb/s\n", rate,
lanes, tmp);
tb_tunnel_dbg(tunnel, "non-reduced bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tmp);
ret = usb4_dp_port_set_nrd(in, rate, lanes);
if (ret)
@ -728,15 +789,15 @@ static int tb_dp_bandwidth_alloc_mode_enable(struct tb_tunnel *tunnel)
rate = min(in_rate, out_rate);
tmp = tb_dp_bandwidth(rate, lanes);
tb_port_dbg(in,
"maximum bandwidth through allocation mode %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tmp);
tb_tunnel_dbg(tunnel,
"maximum bandwidth through allocation mode %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tmp);
for (granularity = 250; tmp / granularity > 255 && granularity <= 1000;
granularity *= 2)
;
tb_port_dbg(in, "granularity %d Mb/s\n", granularity);
tb_tunnel_dbg(tunnel, "granularity %d Mb/s\n", granularity);
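The granularity loop above starts at 250 Mb/s and doubles (capping the search at 1000 Mb/s) until the bandwidth fits in the 8-bit allocation field, i.e. at most 255 granularity units. Extracted as a standalone helper (`pick_granularity` is a hypothetical name for illustration):

```c
#include <assert.h>

/* Pick the DP bandwidth allocation granularity: double from 250 Mb/s
 * up to 1000 Mb/s until bw_mbps expressed in granularity units fits in
 * the 8-bit field (<= 255 units). */
static int pick_granularity(int bw_mbps)
{
	int granularity;

	for (granularity = 250;
	     bw_mbps / granularity > 255 && granularity <= 1000;
	     granularity *= 2)
		;
	return granularity;
}
```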
/*
* Returns -EINVAL if granularity above is outside of the
@ -751,12 +812,12 @@ static int tb_dp_bandwidth_alloc_mode_enable(struct tb_tunnel *tunnel)
* max_up/down fields. For discovery we just read what the
* estimation was set to.
*/
if (in->sw->config.depth < out->sw->config.depth)
if (tb_port_path_direction_downstream(in, out))
estimated_bw = tunnel->max_down;
else
estimated_bw = tunnel->max_up;
tb_port_dbg(in, "estimated bandwidth %d Mb/s\n", estimated_bw);
tb_tunnel_dbg(tunnel, "estimated bandwidth %d Mb/s\n", estimated_bw);
ret = usb4_dp_port_set_estimated_bandwidth(in, estimated_bw);
if (ret)
@ -767,7 +828,7 @@ static int tb_dp_bandwidth_alloc_mode_enable(struct tb_tunnel *tunnel)
if (ret)
return ret;
tb_port_dbg(in, "bandwidth allocation mode enabled\n");
tb_tunnel_dbg(tunnel, "bandwidth allocation mode enabled\n");
return 0;
}
@ -788,7 +849,7 @@ static int tb_dp_init(struct tb_tunnel *tunnel)
if (!usb4_dp_port_bandwidth_mode_supported(in))
return 0;
tb_port_dbg(in, "bandwidth allocation mode supported\n");
tb_tunnel_dbg(tunnel, "bandwidth allocation mode supported\n");
ret = usb4_dp_port_set_cm_id(in, tb->index);
if (ret)
@ -805,7 +866,7 @@ static void tb_dp_deinit(struct tb_tunnel *tunnel)
return;
if (usb4_dp_port_bandwidth_mode_enabled(in)) {
usb4_dp_port_set_cm_bandwidth_mode_supported(in, false);
tb_port_dbg(in, "bandwidth allocation mode disabled\n");
tb_tunnel_dbg(tunnel, "bandwidth allocation mode disabled\n");
}
}
@ -921,10 +982,7 @@ static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel,
if (allocated_bw == max_bw)
allocated_bw = ret;
tb_port_dbg(in, "consumed bandwidth through allocation mode %d Mb/s\n",
allocated_bw);
if (in->sw->config.depth < out->sw->config.depth) {
if (tb_port_path_direction_downstream(in, out)) {
*consumed_up = 0;
*consumed_down = allocated_bw;
} else {
@ -959,7 +1017,7 @@ static int tb_dp_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up
if (allocated_bw == max_bw)
allocated_bw = ret;
if (in->sw->config.depth < out->sw->config.depth) {
if (tb_port_path_direction_downstream(in, out)) {
*allocated_up = 0;
*allocated_down = allocated_bw;
} else {
@ -987,7 +1045,7 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up,
if (ret < 0)
return ret;
if (in->sw->config.depth < out->sw->config.depth) {
if (tb_port_path_direction_downstream(in, out)) {
tmp = min(*alloc_down, max_bw);
ret = usb4_dp_port_allocate_bandwidth(in, tmp);
if (ret)
@ -1006,9 +1064,6 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up,
/* Now we can use BW mode registers to figure out the bandwidth */
/* TODO: need to handle discovery too */
tunnel->bw_mode = true;
tb_port_dbg(in, "allocated bandwidth through allocation mode %d Mb/s\n",
tmp);
return 0;
}
@ -1035,8 +1090,7 @@ static int tb_dp_read_dprx(struct tb_tunnel *tunnel, u32 *rate, u32 *lanes,
*rate = tb_dp_cap_get_rate(val);
*lanes = tb_dp_cap_get_lanes(val);
tb_port_dbg(in, "consumed bandwidth through DPRX %d Mb/s\n",
tb_dp_bandwidth(*rate, *lanes));
tb_tunnel_dbg(tunnel, "DPRX read done\n");
return 0;
}
usleep_range(100, 150);
@ -1073,9 +1127,6 @@ static int tb_dp_read_cap(struct tb_tunnel *tunnel, unsigned int cap, u32 *rate,
*rate = tb_dp_cap_get_rate(val);
*lanes = tb_dp_cap_get_lanes(val);
tb_port_dbg(in, "bandwidth from %#x capability %d Mb/s\n", cap,
tb_dp_bandwidth(*rate, *lanes));
return 0;
}
@ -1092,7 +1143,7 @@ static int tb_dp_maximum_bandwidth(struct tb_tunnel *tunnel, int *max_up,
if (ret < 0)
return ret;
if (in->sw->config.depth < tunnel->dst_port->sw->config.depth) {
if (tb_port_path_direction_downstream(in, tunnel->dst_port)) {
*max_up = 0;
*max_down = ret;
} else {
@ -1150,7 +1201,7 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
return 0;
}
if (in->sw->config.depth < tunnel->dst_port->sw->config.depth) {
if (tb_port_path_direction_downstream(in, tunnel->dst_port)) {
*consumed_up = 0;
*consumed_down = tb_dp_bandwidth(rate, lanes);
} else {
@@ -1172,7 +1223,7 @@ static void tb_dp_init_aux_credits(struct tb_path_hop *hop)
hop->initial_credits = 1;
}
static void tb_dp_init_aux_path(struct tb_path *path)
static void tb_dp_init_aux_path(struct tb_path *path, bool pm_support)
{
struct tb_path_hop *hop;
@@ -1180,11 +1231,14 @@ static void tb_dp_init_aux_path(struct tb_path *path)
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_fc_enable = TB_PATH_ALL;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 2;
path->weight = 1;
path->priority = TB_DP_AUX_PRIORITY;
path->weight = TB_DP_AUX_WEIGHT;
tb_path_for_each_hop(path, hop)
tb_path_for_each_hop(path, hop) {
tb_dp_init_aux_credits(hop);
if (pm_support)
tb_init_pm_support(hop);
}
}
static int tb_dp_init_video_credits(struct tb_path_hop *hop)
@@ -1216,7 +1270,7 @@ static int tb_dp_init_video_credits(struct tb_path_hop *hop)
return 0;
}
static int tb_dp_init_video_path(struct tb_path *path)
static int tb_dp_init_video_path(struct tb_path *path, bool pm_support)
{
struct tb_path_hop *hop;
@@ -1224,8 +1278,8 @@ static int tb_dp_init_video_path(struct tb_path *path)
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_fc_enable = TB_PATH_NONE;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 1;
path->weight = 1;
path->priority = TB_DP_VIDEO_PRIORITY;
path->weight = TB_DP_VIDEO_WEIGHT;
tb_path_for_each_hop(path, hop) {
int ret;
@@ -1233,6 +1287,8 @@ static int tb_dp_init_video_path(struct tb_path *path)
ret = tb_dp_init_video_credits(hop);
if (ret)
return ret;
if (pm_support)
tb_init_pm_support(hop);
}
return 0;
@@ -1253,8 +1309,9 @@ static void tb_dp_dump(struct tb_tunnel *tunnel)
rate = tb_dp_cap_get_rate(dp_cap);
lanes = tb_dp_cap_get_lanes(dp_cap);
tb_port_dbg(in, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
tb_tunnel_dbg(tunnel,
"DP IN maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
out = tunnel->dst_port;
@@ -1265,8 +1322,9 @@ static void tb_dp_dump(struct tb_tunnel *tunnel)
rate = tb_dp_cap_get_rate(dp_cap);
lanes = tb_dp_cap_get_lanes(dp_cap);
tb_port_dbg(out, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
tb_tunnel_dbg(tunnel,
"DP OUT maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
if (tb_port_read(in, &dp_cap, TB_CFG_PORT,
in->cap_adap + DP_REMOTE_CAP, 1))
@@ -1275,8 +1333,8 @@ static void tb_dp_dump(struct tb_tunnel *tunnel)
rate = tb_dp_cap_get_rate(dp_cap);
lanes = tb_dp_cap_get_lanes(dp_cap);
tb_port_dbg(in, "reduced bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
tb_tunnel_dbg(tunnel, "reduced bandwidth %u Mb/s x%u = %u Mb/s\n",
rate, lanes, tb_dp_bandwidth(rate, lanes));
}
/**
@@ -1322,7 +1380,7 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
goto err_free;
}
tunnel->paths[TB_DP_VIDEO_PATH_OUT] = path;
if (tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT]))
if (tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT], false))
goto err_free;
path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX",
@@ -1330,14 +1388,14 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
if (!path)
goto err_deactivate;
tunnel->paths[TB_DP_AUX_PATH_OUT] = path;
tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_OUT]);
tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_OUT], false);
path = tb_path_discover(tunnel->dst_port, -1, in, TB_DP_AUX_RX_HOPID,
&port, "AUX RX", alloc_hopid);
if (!path)
goto err_deactivate;
tunnel->paths[TB_DP_AUX_PATH_IN] = path;
tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_IN]);
tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_IN], false);
/* Validate that the tunnel is complete */
if (!tb_port_is_dpout(tunnel->dst_port)) {
@@ -1392,6 +1450,7 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_tunnel *tunnel;
struct tb_path **paths;
struct tb_path *path;
bool pm_support;
if (WARN_ON(!in->cap_adap || !out->cap_adap))
return NULL;
@@ -1413,26 +1472,27 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
tunnel->max_down = max_down;
paths = tunnel->paths;
pm_support = usb4_switch_version(in->sw) >= 2;
path = tb_path_alloc(tb, in, TB_DP_VIDEO_HOPID, out, TB_DP_VIDEO_HOPID,
link_nr, "Video");
if (!path)
goto err_free;
tb_dp_init_video_path(path);
tb_dp_init_video_path(path, pm_support);
paths[TB_DP_VIDEO_PATH_OUT] = path;
path = tb_path_alloc(tb, in, TB_DP_AUX_TX_HOPID, out,
TB_DP_AUX_TX_HOPID, link_nr, "AUX TX");
if (!path)
goto err_free;
tb_dp_init_aux_path(path);
tb_dp_init_aux_path(path, pm_support);
paths[TB_DP_AUX_PATH_OUT] = path;
path = tb_path_alloc(tb, out, TB_DP_AUX_RX_HOPID, in,
TB_DP_AUX_RX_HOPID, link_nr, "AUX RX");
if (!path)
goto err_free;
tb_dp_init_aux_path(path);
tb_dp_init_aux_path(path, pm_support);
paths[TB_DP_AUX_PATH_IN] = path;
return tunnel;
@@ -1497,8 +1557,8 @@ static int tb_dma_init_rx_path(struct tb_path *path, unsigned int credits)
path->ingress_fc_enable = TB_PATH_ALL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 5;
path->weight = 1;
path->priority = TB_DMA_PRIORITY;
path->weight = TB_DMA_WEIGHT;
path->clear_fc = true;
/*
@@ -1531,8 +1591,8 @@ static int tb_dma_init_tx_path(struct tb_path *path, unsigned int credits)
path->ingress_fc_enable = TB_PATH_ALL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 5;
path->weight = 1;
path->priority = TB_DMA_PRIORITY;
path->weight = TB_DMA_WEIGHT;
path->clear_fc = true;
tb_path_for_each_hop(path, hop) {
@@ -1758,14 +1818,23 @@ static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate)
static int tb_usb3_consumed_bandwidth(struct tb_tunnel *tunnel,
int *consumed_up, int *consumed_down)
{
int pcie_enabled = tb_acpi_may_tunnel_pcie();
struct tb_port *port = tb_upstream_port(tunnel->dst_port->sw);
int pcie_weight = tb_acpi_may_tunnel_pcie() ? TB_PCI_WEIGHT : 0;
/*
* PCIe tunneling, if enabled, affects the USB3 bandwidth so
* take that into account here.
*/
*consumed_up = tunnel->allocated_up * (3 + pcie_enabled) / 3;
*consumed_down = tunnel->allocated_down * (3 + pcie_enabled) / 3;
*consumed_up = tunnel->allocated_up *
(TB_USB3_WEIGHT + pcie_weight) / TB_USB3_WEIGHT;
*consumed_down = tunnel->allocated_down *
(TB_USB3_WEIGHT + pcie_weight) / TB_USB3_WEIGHT;
if (tb_port_get_link_generation(port) >= 4) {
*consumed_up = max(*consumed_up, USB4_V2_USB3_MIN_BANDWIDTH);
*consumed_down = max(*consumed_down, USB4_V2_USB3_MIN_BANDWIDTH);
}
return 0;
}
@@ -1790,17 +1859,10 @@ static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
{
int ret, max_rate, allocate_up, allocate_down;
ret = usb4_usb3_port_actual_link_rate(tunnel->src_port);
ret = tb_usb3_max_link_rate(tunnel->dst_port, tunnel->src_port);
if (ret < 0) {
tb_tunnel_warn(tunnel, "failed to read actual link rate\n");
tb_tunnel_warn(tunnel, "failed to read maximum link rate\n");
return;
} else if (!ret) {
/* Use maximum link rate if the link valid is not set */
ret = tb_usb3_max_link_rate(tunnel->dst_port, tunnel->src_port);
if (ret < 0) {
tb_tunnel_warn(tunnel, "failed to read maximum link rate\n");
return;
}
}
/*
@@ -1871,8 +1933,8 @@ static void tb_usb3_init_path(struct tb_path *path)
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_fc_enable = TB_PATH_ALL;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 3;
path->weight = 3;
path->priority = TB_USB3_PRIORITY;
path->weight = TB_USB3_WEIGHT;
path->drop_packages = 0;
tb_path_for_each_hop(path, hop)
@@ -2387,3 +2449,8 @@ void tb_tunnel_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
tunnel->reclaim_available_bandwidth(tunnel, available_up,
available_down);
}
const char *tb_tunnel_type_name(const struct tb_tunnel *tunnel)
{
return tb_tunnel_names[tunnel->type];
}

View File

@@ -80,6 +80,8 @@ struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down,
bool alloc_hopid);
struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_port *down);
bool tb_tunnel_reserved_pci(struct tb_port *port, int *reserved_up,
int *reserved_down);
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
bool alloc_hopid);
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
@@ -137,5 +139,27 @@ static inline bool tb_tunnel_is_usb3(const struct tb_tunnel *tunnel)
return tunnel->type == TB_TUNNEL_USB3;
}
#endif
const char *tb_tunnel_type_name(const struct tb_tunnel *tunnel);
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
do { \
struct tb_tunnel *__tunnel = (tunnel); \
level(__tunnel->tb, "%llx:%u <-> %llx:%u (%s): " fmt, \
tb_route(__tunnel->src_port->sw), \
__tunnel->src_port->port, \
tb_route(__tunnel->dst_port->sw), \
__tunnel->dst_port->port, \
tb_tunnel_type_name(__tunnel), \
## arg); \
} while (0)
#define tb_tunnel_WARN(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_WARN, tunnel, fmt, ##arg)
#define tb_tunnel_warn(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_warn, tunnel, fmt, ##arg)
#define tb_tunnel_info(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_info, tunnel, fmt, ##arg)
#define tb_tunnel_dbg(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_dbg, tunnel, fmt, ##arg)
#endif

View File

@@ -1454,6 +1454,112 @@ bool usb4_port_clx_supported(struct tb_port *port)
return !!(val & PORT_CS_18_CPS);
}
/**
* usb4_port_asym_supported() - If the port supports asymmetric link
* @port: USB4 port
*
* Checks if the port and the cable support asymmetric link and returns
* %true in that case.
*/
bool usb4_port_asym_supported(struct tb_port *port)
{
u32 val;
if (!port->cap_usb4)
return false;
if (tb_port_read(port, &val, TB_CFG_PORT, port->cap_usb4 + PORT_CS_18, 1))
return false;
return !!(val & PORT_CS_18_CSA);
}
/**
* usb4_port_asym_set_link_width() - Set link width to asymmetric or symmetric
* @port: USB4 port
* @width: Asymmetric width to configure
*
* Sets USB4 port link width to @width. Can be called for widths where
* usb4_port_asym_width_supported() returned %true.
*/
int usb4_port_asym_set_link_width(struct tb_port *port, enum tb_link_width width)
{
u32 val;
int ret;
if (!port->cap_phy)
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
val &= ~LANE_ADP_CS_1_TARGET_WIDTH_ASYM_MASK;
switch (width) {
case TB_LINK_WIDTH_DUAL:
val |= FIELD_PREP(LANE_ADP_CS_1_TARGET_WIDTH_ASYM_MASK,
LANE_ADP_CS_1_TARGET_WIDTH_ASYM_DUAL);
break;
case TB_LINK_WIDTH_ASYM_TX:
val |= FIELD_PREP(LANE_ADP_CS_1_TARGET_WIDTH_ASYM_MASK,
LANE_ADP_CS_1_TARGET_WIDTH_ASYM_TX);
break;
case TB_LINK_WIDTH_ASYM_RX:
val |= FIELD_PREP(LANE_ADP_CS_1_TARGET_WIDTH_ASYM_MASK,
LANE_ADP_CS_1_TARGET_WIDTH_ASYM_RX);
break;
default:
return -EINVAL;
}
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
/**
* usb4_port_asym_start() - Start symmetry change and wait for completion
* @port: USB4 port
*
* Start symmetry change of the link to asymmetric or symmetric
* (according to what was previously set in tb_port_set_link_width()).
* Wait for completion of the change.
*
* Returns %0 in case of success, %-ETIMEDOUT in case of timeout or
* a negative errno in case of a failure.
*/
int usb4_port_asym_start(struct tb_port *port)
{
int ret;
u32 val;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_19, 1);
if (ret)
return ret;
val &= ~PORT_CS_19_START_ASYM;
val |= FIELD_PREP(PORT_CS_19_START_ASYM, 1);
ret = tb_port_write(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_19, 1);
if (ret)
return ret;
/*
* Wait for PORT_CS_19_START_ASYM to be 0. This means the USB4
* port started the symmetry transition.
*/
ret = usb4_port_wait_for_bit(port, port->cap_usb4 + PORT_CS_19,
PORT_CS_19_START_ASYM, 0, 1000);
if (ret)
return ret;
/* Then wait for the transition to be completed */
return usb4_port_wait_for_bit(port, port->cap_usb4 + PORT_CS_18,
PORT_CS_18_TIP, 0, 5000);
}
/**
* usb4_port_margining_caps() - Read USB4 port margining capabilities
* @port: USB4 port
@@ -1946,35 +2052,6 @@ int usb4_usb3_port_max_link_rate(struct tb_port *port)
return usb4_usb3_port_max_bandwidth(port, ret);
}
/**
* usb4_usb3_port_actual_link_rate() - Established USB3 link rate
* @port: USB3 adapter port
*
* Return actual established link rate of a USB3 adapter in Mb/s. If the
* link is not up returns %0 and negative errno in case of failure.
*/
int usb4_usb3_port_actual_link_rate(struct tb_port *port)
{
int ret, lr;
u32 val;
if (!tb_port_is_usb3_down(port) && !tb_port_is_usb3_up(port))
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_4, 1);
if (ret)
return ret;
if (!(val & ADP_USB3_CS_4_ULV))
return 0;
lr = val & ADP_USB3_CS_4_ALR_MASK;
ret = lr == ADP_USB3_CS_4_ALR_20G ? 20000 : 10000;
return usb4_usb3_port_max_bandwidth(port, ret);
}
static int usb4_usb3_port_cm_request(struct tb_port *port, bool request)
{
int ret;

View File

@@ -91,6 +91,16 @@ config USB_PCI
If you have such a device you may say N here and PCI related code
will not be built in the USB driver.
config USB_PCI_AMD
bool "AMD PCI USB host support"
depends on USB_PCI && HAS_IOPORT
default X86 || MACH_LOONGSON64 || PPC_PASEMI
help
Enable workarounds for USB implementation quirks in SB600/SB700/SB800
and later south bridge implementations. These are common on x86 PCs
with AMD CPUs but rarely used elsewhere, with the exception of a few
powerpc and mips desktop machines.
if USB
source "drivers/usb/core/Kconfig"

View File

@@ -109,7 +109,6 @@ int c67x00_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status);
void c67x00_endpoint_disable(struct usb_hcd *hcd,
struct usb_host_endpoint *ep);
void c67x00_hcd_msg_received(struct c67x00_sie *sie, u16 msg);
void c67x00_sched_kick(struct c67x00_hcd *c67x00);
int c67x00_sched_start_scheduler(struct c67x00_hcd *c67x00);
void c67x00_sched_stop_scheduler(struct c67x00_hcd *c67x00);

View File

@@ -131,8 +131,6 @@ static inline const char *cdnsp_trb_type_string(u8 type)
return "Endpoint Not ready";
case TRB_HALT_ENDPOINT:
return "Halt Endpoint";
case TRB_FLUSH_ENDPOINT:
return "FLush Endpoint";
default:
return "UNKNOWN";
}
@@ -328,7 +326,6 @@ static inline const char *cdnsp_decode_trb(char *str, size_t size, u32 field0,
break;
case TRB_RESET_EP:
case TRB_HALT_ENDPOINT:
case TRB_FLUSH_ENDPOINT:
ret = snprintf(str, size,
"%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c",
cdnsp_trb_type_string(type),

View File

@@ -1024,10 +1024,8 @@ static int cdnsp_gadget_ep_disable(struct usb_ep *ep)
pep->ep_state |= EP_DIS_IN_RROGRESS;
/* Endpoint was unconfigured by Reset Device command. */
if (!(pep->ep_state & EP_UNCONFIGURED)) {
if (!(pep->ep_state & EP_UNCONFIGURED))
cdnsp_cmd_stop_ep(pdev, pep);
cdnsp_cmd_flush_ep(pdev, pep);
}
/* Remove all queued USB requests. */
while (!list_empty(&pep->pending_list)) {
@@ -1424,8 +1422,6 @@ static void cdnsp_stop(struct cdnsp_device *pdev)
{
u32 temp;
cdnsp_cmd_flush_ep(pdev, &pdev->eps[0]);
/* Remove internally queued request for ep0. */
if (!list_empty(&pdev->eps[0].pending_list)) {
struct cdnsp_request *req;

View File

@@ -1128,8 +1128,6 @@ union cdnsp_trb {
#define TRB_HALT_ENDPOINT 54
/* Doorbell Overflow Event. */
#define TRB_DRB_OVERFLOW 57
/* Flush Endpoint Command. */
#define TRB_FLUSH_ENDPOINT 58
#define TRB_TYPE_LINK(x) (((x) & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK))
#define TRB_TYPE_LINK_LE32(x) (((x) & cpu_to_le32(TRB_TYPE_BITMASK)) == \
@@ -1539,8 +1537,6 @@ void cdnsp_queue_configure_endpoint(struct cdnsp_device *pdev,
void cdnsp_queue_reset_ep(struct cdnsp_device *pdev, unsigned int ep_index);
void cdnsp_queue_halt_endpoint(struct cdnsp_device *pdev,
unsigned int ep_index);
void cdnsp_queue_flush_endpoint(struct cdnsp_device *pdev,
unsigned int ep_index);
void cdnsp_force_header_wakeup(struct cdnsp_device *pdev, int intf_num);
void cdnsp_queue_reset_device(struct cdnsp_device *pdev);
void cdnsp_queue_new_dequeue_state(struct cdnsp_device *pdev,
@@ -1574,7 +1570,6 @@ void cdnsp_irq_reset(struct cdnsp_device *pdev);
int cdnsp_halt_endpoint(struct cdnsp_device *pdev,
struct cdnsp_ep *pep, int value);
int cdnsp_cmd_stop_ep(struct cdnsp_device *pdev, struct cdnsp_ep *pep);
int cdnsp_cmd_flush_ep(struct cdnsp_device *pdev, struct cdnsp_ep *pep);
void cdnsp_setup_analyze(struct cdnsp_device *pdev);
int cdnsp_status_stage(struct cdnsp_device *pdev);
int cdnsp_reset_device(struct cdnsp_device *pdev);

View File

@@ -2123,19 +2123,6 @@ ep_stopped:
return ret;
}
int cdnsp_cmd_flush_ep(struct cdnsp_device *pdev, struct cdnsp_ep *pep)
{
int ret;
cdnsp_queue_flush_endpoint(pdev, pep->idx);
cdnsp_ring_cmd_db(pdev);
ret = cdnsp_wait_for_cmd_compl(pdev);
trace_cdnsp_handle_cmd_flush_ep(pep->out_ctx);
return ret;
}
/*
* The transfer burst count field of the isochronous TRB defines the number of
* bursts that are required to move all packets in this TD. Only SuperSpeed
@@ -2465,17 +2452,6 @@ void cdnsp_queue_halt_endpoint(struct cdnsp_device *pdev, unsigned int ep_index)
EP_ID_FOR_TRB(ep_index));
}
/*
* Queue a flush endpoint request on the command ring.
*/
void cdnsp_queue_flush_endpoint(struct cdnsp_device *pdev,
unsigned int ep_index)
{
cdnsp_queue_command(pdev, 0, 0, 0, TRB_TYPE(TRB_FLUSH_ENDPOINT) |
SLOT_ID_FOR_TRB(pdev->slot_id) |
EP_ID_FOR_TRB(ep_index));
}
void cdnsp_force_header_wakeup(struct cdnsp_device *pdev, int intf_num)
{
u32 lo, mid;

View File

@@ -43,6 +43,10 @@ config USB_CHIPIDEA_MSM
tristate "Enable MSM hsusb glue driver" if EXPERT
default USB_CHIPIDEA
config USB_CHIPIDEA_NPCM
tristate "Enable NPCM hsusb glue driver" if EXPERT
default USB_CHIPIDEA
config USB_CHIPIDEA_IMX
tristate "Enable i.MX USB glue driver" if EXPERT
depends on OF

View File

@@ -13,6 +13,7 @@ ci_hdrc-$(CONFIG_USB_OTG_FSM) += otg_fsm.o
obj-$(CONFIG_USB_CHIPIDEA_GENERIC) += ci_hdrc_usb2.o
obj-$(CONFIG_USB_CHIPIDEA_MSM) += ci_hdrc_msm.o
obj-$(CONFIG_USB_CHIPIDEA_NPCM) += ci_hdrc_npcm.o
obj-$(CONFIG_USB_CHIPIDEA_PCI) += ci_hdrc_pci.o
obj-$(CONFIG_USB_CHIPIDEA_IMX) += usbmisc_imx.o ci_hdrc_imx.o
obj-$(CONFIG_USB_CHIPIDEA_TEGRA) += ci_hdrc_tegra.o

View File

@@ -0,0 +1,114 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2023 Nuvoton Technology corporation.
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/usb/chipidea.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/reset-controller.h>
#include <linux/of.h>
#include "ci.h"
struct npcm_udc_data {
struct platform_device *ci;
struct clk *core_clk;
struct ci_hdrc_platform_data pdata;
};
static int npcm_udc_notify_event(struct ci_hdrc *ci, unsigned event)
{
struct device *dev = ci->dev->parent;
switch (event) {
case CI_HDRC_CONTROLLER_RESET_EVENT:
/* clear all mode bits */
hw_write(ci, OP_USBMODE, 0xffffffff, 0x0);
break;
default:
dev_dbg(dev, "unknown ci_hdrc event (%d)\n", event);
break;
}
return 0;
}
static int npcm_udc_probe(struct platform_device *pdev)
{
int ret;
struct npcm_udc_data *ci;
struct platform_device *plat_ci;
struct device *dev = &pdev->dev;
ci = devm_kzalloc(&pdev->dev, sizeof(*ci), GFP_KERNEL);
if (!ci)
return -ENOMEM;
platform_set_drvdata(pdev, ci);
ci->core_clk = devm_clk_get_optional(dev, NULL);
if (IS_ERR(ci->core_clk))
return PTR_ERR(ci->core_clk);
ret = clk_prepare_enable(ci->core_clk);
if (ret)
return dev_err_probe(dev, ret, "failed to enable the clock: %d\n", ret);
ci->pdata.name = dev_name(dev);
ci->pdata.capoffset = DEF_CAPOFFSET;
ci->pdata.flags = CI_HDRC_REQUIRES_ALIGNED_DMA |
CI_HDRC_FORCE_VBUS_ACTIVE_ALWAYS;
ci->pdata.phy_mode = USBPHY_INTERFACE_MODE_UTMI;
ci->pdata.notify_event = npcm_udc_notify_event;
plat_ci = ci_hdrc_add_device(dev, pdev->resource, pdev->num_resources,
&ci->pdata);
if (IS_ERR(plat_ci)) {
ret = PTR_ERR(plat_ci);
dev_err(dev, "failed to register HDRC NPCM device: %d\n", ret);
goto clk_err;
}
pm_runtime_no_callbacks(dev);
pm_runtime_enable(dev);
return 0;
clk_err:
clk_disable_unprepare(ci->core_clk);
return ret;
}
static int npcm_udc_remove(struct platform_device *pdev)
{
struct npcm_udc_data *ci = platform_get_drvdata(pdev);
pm_runtime_disable(&pdev->dev);
ci_hdrc_remove_device(ci->ci);
clk_disable_unprepare(ci->core_clk);
return 0;
}
static const struct of_device_id npcm_udc_dt_match[] = {
{ .compatible = "nuvoton,npcm750-udc", },
{ .compatible = "nuvoton,npcm845-udc", },
{ }
};
MODULE_DEVICE_TABLE(of, npcm_udc_dt_match);
static struct platform_driver npcm_udc_driver = {
.probe = npcm_udc_probe,
.remove = npcm_udc_remove,
.driver = {
.name = "npcm_udc",
.of_match_table = npcm_udc_dt_match,
},
};
module_platform_driver(npcm_udc_driver);
MODULE_DESCRIPTION("NPCM USB device controller driver");
MODULE_AUTHOR("Tomer Maimon <tomer.maimon@nuvoton.com>");
MODULE_LICENSE("GPL v2");

View File

@@ -293,14 +293,12 @@ static int tegra_usb_probe(struct platform_device *pdev)
usb->phy = devm_usb_get_phy_by_phandle(&pdev->dev, "nvidia,phy", 0);
if (IS_ERR(usb->phy))
return dev_err_probe(&pdev->dev, PTR_ERR(usb->phy),
"failed to get PHY\n");
"failed to get PHY");
usb->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(usb->clk)) {
err = PTR_ERR(usb->clk);
dev_err(&pdev->dev, "failed to get clock: %d\n", err);
return err;
}
if (IS_ERR(usb->clk))
return dev_err_probe(&pdev->dev, PTR_ERR(usb->clk),
"failed to get clock");
err = devm_tegra_core_dev_init_opp_table_common(&pdev->dev);
if (err)
@@ -316,7 +314,7 @@ static int tegra_usb_probe(struct platform_device *pdev)
err = tegra_usb_reset_controller(&pdev->dev);
if (err) {
dev_err(&pdev->dev, "failed to reset controller: %d\n", err);
dev_err_probe(&pdev->dev, err, "failed to reset controller");
goto fail_power_off;
}
@@ -347,8 +345,8 @@ static int tegra_usb_probe(struct platform_device *pdev)
usb->dev = ci_hdrc_add_device(&pdev->dev, pdev->resource,
pdev->num_resources, &usb->data);
if (IS_ERR(usb->dev)) {
err = PTR_ERR(usb->dev);
dev_err(&pdev->dev, "failed to add HDRC device: %d\n", err);
err = dev_err_probe(&pdev->dev, PTR_ERR(usb->dev),
"failed to add HDRC device");
goto phy_shutdown;
}

View File

@@ -9,9 +9,9 @@
#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/property.h>
#include <linux/usb/chipidea.h>
#include <linux/usb/hcd.h>
#include <linux/usb/ulpi.h>
@@ -51,8 +51,8 @@ static int ci_hdrc_usb2_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct ci_hdrc_usb2_priv *priv;
struct ci_hdrc_platform_data *ci_pdata = dev_get_platdata(dev);
const struct ci_hdrc_platform_data *data;
int ret;
const struct of_device_id *match;
if (!ci_pdata) {
ci_pdata = devm_kmalloc(dev, sizeof(*ci_pdata), GFP_KERNEL);
@@ -61,11 +61,10 @@ static int ci_hdrc_usb2_probe(struct platform_device *pdev)
*ci_pdata = ci_default_pdata; /* struct copy */
}
match = of_match_device(ci_hdrc_usb2_of_match, &pdev->dev);
if (match && match->data) {
data = device_get_match_data(&pdev->dev);
if (data)
/* struct copy */
*ci_pdata = *(struct ci_hdrc_platform_data *)match->data;
}
*ci_pdata = *data;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
@@ -120,7 +119,7 @@ static struct platform_driver ci_hdrc_usb2_driver = {
.remove_new = ci_hdrc_usb2_remove,
.driver = {
.name = "chipidea-usb2",
.of_match_table = of_match_ptr(ci_hdrc_usb2_of_match),
.of_match_table = ci_hdrc_usb2_of_match,
},
};
module_platform_driver(ci_hdrc_usb2_driver);

View File

@@ -30,8 +30,7 @@ struct ehci_ci_priv {
};
struct ci_hdrc_dma_aligned_buffer {
void *kmalloc_ptr;
void *old_xfer_buffer;
void *original_buffer;
u8 data[];
};
@@ -380,59 +379,52 @@ static int ci_ehci_bus_suspend(struct usb_hcd *hcd)
return 0;
}
static void ci_hdrc_free_dma_aligned_buffer(struct urb *urb)
static void ci_hdrc_free_dma_aligned_buffer(struct urb *urb, bool copy_back)
{
struct ci_hdrc_dma_aligned_buffer *temp;
size_t length;
if (!(urb->transfer_flags & URB_ALIGNED_TEMP_BUFFER))
return;
urb->transfer_flags &= ~URB_ALIGNED_TEMP_BUFFER;
temp = container_of(urb->transfer_buffer,
struct ci_hdrc_dma_aligned_buffer, data);
urb->transfer_buffer = temp->original_buffer;
if (copy_back && usb_urb_dir_in(urb)) {
size_t length;
if (usb_urb_dir_in(urb)) {
if (usb_pipeisoc(urb->pipe))
length = urb->transfer_buffer_length;
else
length = urb->actual_length;
memcpy(temp->old_xfer_buffer, temp->data, length);
memcpy(temp->original_buffer, temp->data, length);
}
urb->transfer_buffer = temp->old_xfer_buffer;
kfree(temp->kmalloc_ptr);
urb->transfer_flags &= ~URB_ALIGNED_TEMP_BUFFER;
kfree(temp);
}
static int ci_hdrc_alloc_dma_aligned_buffer(struct urb *urb, gfp_t mem_flags)
{
struct ci_hdrc_dma_aligned_buffer *temp, *kmalloc_ptr;
const unsigned int ci_hdrc_usb_dma_align = 32;
size_t kmalloc_size;
struct ci_hdrc_dma_aligned_buffer *temp;
if (urb->num_sgs || urb->sg || urb->transfer_buffer_length == 0 ||
!((uintptr_t)urb->transfer_buffer & (ci_hdrc_usb_dma_align - 1)))
if (urb->num_sgs || urb->sg || urb->transfer_buffer_length == 0)
return 0;
if (IS_ALIGNED((uintptr_t)urb->transfer_buffer, 4)
&& IS_ALIGNED(urb->transfer_buffer_length, 4))
return 0;
/* Allocate a buffer with enough padding for alignment */
kmalloc_size = urb->transfer_buffer_length +
sizeof(struct ci_hdrc_dma_aligned_buffer) +
ci_hdrc_usb_dma_align - 1;
kmalloc_ptr = kmalloc(kmalloc_size, mem_flags);
if (!kmalloc_ptr)
temp = kmalloc(sizeof(*temp) + ALIGN(urb->transfer_buffer_length, 4), mem_flags);
if (!temp)
return -ENOMEM;
/* Position our struct dma_aligned_buffer such that data is aligned */
temp = PTR_ALIGN(kmalloc_ptr + 1, ci_hdrc_usb_dma_align) - 1;
temp->kmalloc_ptr = kmalloc_ptr;
temp->old_xfer_buffer = urb->transfer_buffer;
if (usb_urb_dir_out(urb))
memcpy(temp->data, urb->transfer_buffer,
urb->transfer_buffer_length);
urb->transfer_buffer = temp->data;
temp->original_buffer = urb->transfer_buffer;
urb->transfer_buffer = temp->data;
urb->transfer_flags |= URB_ALIGNED_TEMP_BUFFER;
return 0;
@@ -449,7 +441,7 @@ static int ci_hdrc_map_urb_for_dma(struct usb_hcd *hcd, struct urb *urb,
ret = usb_hcd_map_urb_for_dma(hcd, urb, mem_flags);
if (ret)
ci_hdrc_free_dma_aligned_buffer(urb);
ci_hdrc_free_dma_aligned_buffer(urb, false);
return ret;
}
@@ -457,7 +449,7 @@ static int ci_hdrc_map_urb_for_dma(struct usb_hcd *hcd, struct urb *urb,
static void ci_hdrc_unmap_urb_for_dma(struct usb_hcd *hcd, struct urb *urb)
{
usb_hcd_unmap_urb_for_dma(hcd, urb);
ci_hdrc_free_dma_aligned_buffer(urb);
ci_hdrc_free_dma_aligned_buffer(urb, true);
}
#ifdef CONFIG_PM_SLEEP

View File

@@ -130,8 +130,11 @@ enum ci_role ci_otg_role(struct ci_hdrc *ci)
void ci_handle_vbus_change(struct ci_hdrc *ci)
{
if (!ci->is_otg)
if (!ci->is_otg) {
if (ci->platdata->flags & CI_HDRC_FORCE_VBUS_ACTIVE_ALWAYS)
usb_gadget_vbus_connect(&ci->gadget);
return;
}
if (hw_read_otgsc(ci, OTGSC_BSV) && !ci->vbus_active)
usb_gadget_vbus_connect(&ci->gadget);

View File

@@ -206,8 +206,7 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct hc_driver *driver)
goto free_irq_vectors;
}
hcd->amd_resume_bug = (usb_hcd_amd_remote_wakeup_quirk(dev) &&
driver->flags & (HCD_USB11 | HCD_USB3)) ? 1 : 0;
hcd->amd_resume_bug = usb_hcd_amd_resume_bug(dev, driver);
if (driver->flags & HCD_MEMORY) {
/* EHCI, OHCI */

View File

@@ -2274,6 +2274,8 @@ void usb_disconnect(struct usb_device **pdev)
*/
if (!test_and_set_bit(port1, hub->child_usage_bits))
pm_runtime_get_sync(&port_dev->dev);
typec_deattach(port_dev->connector, &udev->dev);
}
usb_remove_ep_devs(&udev->ep0);
@@ -2620,6 +2622,8 @@ int usb_new_device(struct usb_device *udev)
if (!test_and_set_bit(port1, hub->child_usage_bits))
pm_runtime_get_sync(&port_dev->dev);
typec_attach(port_dev->connector, &udev->dev);
}
(void) usb_create_ep_devs(&udev->dev, &udev->ep0, udev);

View File

@@ -14,6 +14,7 @@
#include <linux/usb.h>
#include <linux/usb/ch11.h>
#include <linux/usb/hcd.h>
#include <linux/usb/typec.h>
#include "usb.h"
struct usb_hub {
@@ -82,6 +83,7 @@ struct usb_hub {
* @dev: generic device interface
* @port_owner: port's owner
* @peer: related usb2 and usb3 ports (share the same connector)
* @connector: USB Type-C connector
* @req: default pm qos request for hubs without port power control
* @connect_type: port's connect type
* @state: device state of the usb device attached to the port
@@ -100,6 +102,7 @@ struct usb_port {
struct device dev;
struct usb_dev_state *port_owner;
struct usb_port *peer;
struct typec_connector *connector;
struct dev_pm_qos_request *req;
enum usb_port_connect_type connect_type;
enum usb_device_state state;

View File

@@ -653,6 +653,7 @@ static void find_and_link_peer(struct usb_hub *hub, int port1)
static int connector_bind(struct device *dev, struct device *connector, void *data)
{
struct usb_port *port_dev = to_usb_port(dev);
int ret;
ret = sysfs_create_link(&dev->kobj, &connector->kobj, "connector");
@@ -660,16 +661,30 @@ static int connector_bind(struct device *dev, struct device *connector, void *da
return ret;
ret = sysfs_create_link(&connector->kobj, &dev->kobj, dev_name(dev));
if (ret)
if (ret) {
sysfs_remove_link(&dev->kobj, "connector");
return ret;
}
return ret;
port_dev->connector = data;
/*
* If there is already a USB device connected to the port, let the
* Type-C connector know about it immediately.
*/
if (port_dev->child)
typec_attach(port_dev->connector, &port_dev->child->dev);
return 0;
}
static void connector_unbind(struct device *dev, struct device *connector, void *data)
{
struct usb_port *port_dev = to_usb_port(dev);
sysfs_remove_link(&connector->kobj, dev_name(dev));
sysfs_remove_link(&dev->kobj, "connector");
port_dev->connector = NULL;
}
static const struct component_ops connector_ops = {
@@ -698,6 +713,7 @@ int usb_hub_create_port_device(struct usb_hub *hub, int port1)
set_bit(port1, hub->power_bits);
port_dev->dev.parent = hub->intfdev;
if (hub_is_superspeed(hdev)) {
port_dev->is_superspeed = 1;
port_dev->usb3_lpm_u1_permit = 1;
port_dev->usb3_lpm_u2_permit = 1;
port_dev->dev.groups = port_dev_usb3_group;
@@ -705,8 +721,6 @@ int usb_hub_create_port_device(struct usb_hub *hub, int port1)
port_dev->dev.groups = port_dev_group;
port_dev->dev.type = &usb_port_device_type;
port_dev->dev.driver = &usb_port_driver;
if (hub_is_superspeed(hub->hdev))
port_dev->is_superspeed = 1;
dev_set_name(&port_dev->dev, "%s-port%d", dev_name(&hub->hdev->dev),
port1);
mutex_init(&port_dev->status_lock);

View File

@@ -4769,8 +4769,8 @@ fail3:
if (qh_allocated && qh->channel && qh->channel->qh == qh)
qh->channel->qh = NULL;
fail2:
spin_unlock_irqrestore(&hsotg->lock, flags);
urb->hcpriv = NULL;
spin_unlock_irqrestore(&hsotg->lock, flags);
kfree(qtd);
fail1:
if (qh_allocated) {

View File

@@ -5,7 +5,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of.h>
#include <linux/usb/of.h>
#include <linux/pci_ids.h>
#include <linux/pci.h>
@@ -968,26 +968,17 @@ typedef void (*set_params_cb)(struct dwc2_hsotg *data);
int dwc2_init_params(struct dwc2_hsotg *hsotg)
{
const struct of_device_id *match;
set_params_cb set_params;
dwc2_set_default_params(hsotg);
dwc2_get_device_properties(hsotg);
match = of_match_device(dwc2_of_match_table, hsotg->dev);
if (match && match->data) {
set_params = match->data;
set_params = device_get_match_data(hsotg->dev);
if (set_params) {
set_params(hsotg);
} else if (!match) {
const struct acpi_device_id *amatch;
const struct pci_device_id *pmatch = NULL;
amatch = acpi_match_device(dwc2_acpi_match, hsotg->dev);
if (amatch && amatch->driver_data) {
set_params = (set_params_cb)amatch->driver_data;
set_params(hsotg);
} else if (!amatch)
pmatch = pci_match_id(dwc2_pci_ids, to_pci_dev(hsotg->dev->parent));
} else {
const struct pci_device_id *pmatch =
pci_match_id(dwc2_pci_ids, to_pci_dev(hsotg->dev->parent));
if (pmatch && pmatch->driver_data) {
set_params = (set_params_cb)pmatch->driver_data;


@@ -178,4 +178,15 @@ config USB_DWC3_OCTEON
Only the host mode is currently supported.
Say 'Y' or 'M' here if you have one such device.
config USB_DWC3_RTK
tristate "Realtek DWC3 Platform Driver"
depends on OF && ARCH_REALTEK
default USB_DWC3
select USB_ROLE_SWITCH
help
RTK DHC RTD SoCs with DesignWare Core USB3 IP inside,
and IP Core configured for USB 2.0 and USB 3.0 in host
or dual-role mode.
Say 'Y' or 'M' if you have such device.
endif


@@ -55,3 +55,4 @@ obj-$(CONFIG_USB_DWC3_QCOM) += dwc3-qcom.o
obj-$(CONFIG_USB_DWC3_IMX8MP) += dwc3-imx8mp.o
obj-$(CONFIG_USB_DWC3_XILINX) += dwc3-xilinx.o
obj-$(CONFIG_USB_DWC3_OCTEON) += dwc3-octeon.o
obj-$(CONFIG_USB_DWC3_RTK) += dwc3-rtk.o


@@ -854,8 +854,20 @@ static int dwc3_clk_enable(struct dwc3 *dwc)
if (ret)
goto disable_ref_clk;
ret = clk_prepare_enable(dwc->utmi_clk);
if (ret)
goto disable_susp_clk;
ret = clk_prepare_enable(dwc->pipe_clk);
if (ret)
goto disable_utmi_clk;
return 0;
disable_utmi_clk:
clk_disable_unprepare(dwc->utmi_clk);
disable_susp_clk:
clk_disable_unprepare(dwc->susp_clk);
disable_ref_clk:
clk_disable_unprepare(dwc->ref_clk);
disable_bus_clk:
@@ -865,6 +877,8 @@ disable_bus_clk:
static void dwc3_clk_disable(struct dwc3 *dwc)
{
clk_disable_unprepare(dwc->pipe_clk);
clk_disable_unprepare(dwc->utmi_clk);
clk_disable_unprepare(dwc->susp_clk);
clk_disable_unprepare(dwc->ref_clk);
clk_disable_unprepare(dwc->bus_clk);
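The utmi/pipe additions to dwc3_clk_enable() above extend a goto-unwind ladder: each newly enabled clock gets a label that disables it, and a failure jumps to the label of the previously acquired resource so teardown runs in reverse order. A compressed sketch of the idiom, with hypothetical resources standing in for the clocks:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical resources standing in for bus/ref/susp/utmi/pipe clocks. */
static bool a_on, b_on, c_on;

/*
 * Goto-unwind idiom from dwc3_clk_enable(): each acquired resource gets
 * a label that releases it, and a failure jumps to the label of the
 * previously acquired resource, so teardown runs in reverse order.
 * fail_at simulates which acquisition fails (0 = none).
 */
static int enable_all(int fail_at)
{
	a_on = b_on = c_on = false;

	a_on = true;
	if (fail_at == 1)
		goto disable_a;

	b_on = true;
	if (fail_at == 2)
		goto disable_b;

	c_on = true;
	return 0;

disable_b:
	b_on = false;
disable_a:
	a_on = false;
	return -1;
}
```

The payoff of the idiom is that adding a new resource (as this hunk does for utmi_clk and pipe_clk) only requires one new label and one new jump target, without duplicating the earlier cleanup calls.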
@@ -1094,6 +1108,111 @@ static void dwc3_set_power_down_clk_scale(struct dwc3 *dwc)
}
}
static void dwc3_config_threshold(struct dwc3 *dwc)
{
u32 reg;
u8 rx_thr_num;
u8 rx_maxburst;
u8 tx_thr_num;
u8 tx_maxburst;
/*
* Must config both number of packets and max burst settings to enable
* RX and/or TX threshold.
*/
if (!DWC3_IP_IS(DWC3) && dwc->dr_mode == USB_DR_MODE_HOST) {
rx_thr_num = dwc->rx_thr_num_pkt_prd;
rx_maxburst = dwc->rx_max_burst_prd;
tx_thr_num = dwc->tx_thr_num_pkt_prd;
tx_maxburst = dwc->tx_max_burst_prd;
if (rx_thr_num && rx_maxburst) {
reg = dwc3_readl(dwc->regs, DWC3_GRXTHRCFG);
reg |= DWC31_RXTHRNUMPKTSEL_PRD;
reg &= ~DWC31_RXTHRNUMPKT_PRD(~0);
reg |= DWC31_RXTHRNUMPKT_PRD(rx_thr_num);
reg &= ~DWC31_MAXRXBURSTSIZE_PRD(~0);
reg |= DWC31_MAXRXBURSTSIZE_PRD(rx_maxburst);
dwc3_writel(dwc->regs, DWC3_GRXTHRCFG, reg);
}
if (tx_thr_num && tx_maxburst) {
reg = dwc3_readl(dwc->regs, DWC3_GTXTHRCFG);
reg |= DWC31_TXTHRNUMPKTSEL_PRD;
reg &= ~DWC31_TXTHRNUMPKT_PRD(~0);
reg |= DWC31_TXTHRNUMPKT_PRD(tx_thr_num);
reg &= ~DWC31_MAXTXBURSTSIZE_PRD(~0);
reg |= DWC31_MAXTXBURSTSIZE_PRD(tx_maxburst);
dwc3_writel(dwc->regs, DWC3_GTXTHRCFG, reg);
}
}
rx_thr_num = dwc->rx_thr_num_pkt;
rx_maxburst = dwc->rx_max_burst;
tx_thr_num = dwc->tx_thr_num_pkt;
tx_maxburst = dwc->tx_max_burst;
if (DWC3_IP_IS(DWC3)) {
if (rx_thr_num && rx_maxburst) {
reg = dwc3_readl(dwc->regs, DWC3_GRXTHRCFG);
reg |= DWC3_GRXTHRCFG_PKTCNTSEL;
reg &= ~DWC3_GRXTHRCFG_RXPKTCNT(~0);
reg |= DWC3_GRXTHRCFG_RXPKTCNT(rx_thr_num);
reg &= ~DWC3_GRXTHRCFG_MAXRXBURSTSIZE(~0);
reg |= DWC3_GRXTHRCFG_MAXRXBURSTSIZE(rx_maxburst);
dwc3_writel(dwc->regs, DWC3_GRXTHRCFG, reg);
}
if (tx_thr_num && tx_maxburst) {
reg = dwc3_readl(dwc->regs, DWC3_GTXTHRCFG);
reg |= DWC3_GTXTHRCFG_PKTCNTSEL;
reg &= ~DWC3_GTXTHRCFG_TXPKTCNT(~0);
reg |= DWC3_GTXTHRCFG_TXPKTCNT(tx_thr_num);
reg &= ~DWC3_GTXTHRCFG_MAXTXBURSTSIZE(~0);
reg |= DWC3_GTXTHRCFG_MAXTXBURSTSIZE(tx_maxburst);
dwc3_writel(dwc->regs, DWC3_GTXTHRCFG, reg);
}
} else {
if (rx_thr_num && rx_maxburst) {
reg = dwc3_readl(dwc->regs, DWC3_GRXTHRCFG);
reg |= DWC31_GRXTHRCFG_PKTCNTSEL;
reg &= ~DWC31_GRXTHRCFG_RXPKTCNT(~0);
reg |= DWC31_GRXTHRCFG_RXPKTCNT(rx_thr_num);
reg &= ~DWC31_GRXTHRCFG_MAXRXBURSTSIZE(~0);
reg |= DWC31_GRXTHRCFG_MAXRXBURSTSIZE(rx_maxburst);
dwc3_writel(dwc->regs, DWC3_GRXTHRCFG, reg);
}
if (tx_thr_num && tx_maxburst) {
reg = dwc3_readl(dwc->regs, DWC3_GTXTHRCFG);
reg |= DWC31_GTXTHRCFG_PKTCNTSEL;
reg &= ~DWC31_GTXTHRCFG_TXPKTCNT(~0);
reg |= DWC31_GTXTHRCFG_TXPKTCNT(tx_thr_num);
reg &= ~DWC31_GTXTHRCFG_MAXTXBURSTSIZE(~0);
reg |= DWC31_GTXTHRCFG_MAXTXBURSTSIZE(tx_maxburst);
dwc3_writel(dwc->regs, DWC3_GTXTHRCFG, reg);
}
}
}
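The new dwc3_config_threshold() above relies throughout on the clear-then-set idiom: because each field macro masks its argument before shifting, passing ~0 through the macro yields exactly the field's mask, so `reg &= ~FIELD(~0)` clears that field and nothing else. A minimal standalone sketch (FIELD_PKTCNT is illustrative, not a real DWC3 register definition):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative field macro in the style of DWC3_GRXTHRCFG_RXPKTCNT(n):
 * a 4-bit value shifted to bits 27:24.  Not a real DWC3 definition. */
#define FIELD_PKTCNT(n)	(((uint32_t)(n) & 0xf) << 24)

/*
 * Clear-then-set update: FIELD_PKTCNT(~0) evaluates to the full field
 * mask (0x0f000000), so "reg &= ~FIELD_PKTCNT(~0)" wipes exactly that
 * field before the new value is OR-ed in, leaving all other bits of the
 * register untouched.
 */
static uint32_t set_pktcnt(uint32_t reg, uint8_t n)
{
	reg &= ~FIELD_PKTCNT(~0);
	reg |= FIELD_PKTCNT(n);
	return reg;
}
```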
/**
* dwc3_core_init - Low-level initialization of DWC3 Core
* @dwc: Pointer to our controller context structure
@@ -1246,42 +1365,7 @@ static int dwc3_core_init(struct dwc3 *dwc)
dwc3_writel(dwc->regs, DWC3_GUCTL1, reg);
}
/*
* Must config both number of packets and max burst settings to enable
* RX and/or TX threshold.
*/
if (!DWC3_IP_IS(DWC3) && dwc->dr_mode == USB_DR_MODE_HOST) {
u8 rx_thr_num = dwc->rx_thr_num_pkt_prd;
u8 rx_maxburst = dwc->rx_max_burst_prd;
u8 tx_thr_num = dwc->tx_thr_num_pkt_prd;
u8 tx_maxburst = dwc->tx_max_burst_prd;
if (rx_thr_num && rx_maxburst) {
reg = dwc3_readl(dwc->regs, DWC3_GRXTHRCFG);
reg |= DWC31_RXTHRNUMPKTSEL_PRD;
reg &= ~DWC31_RXTHRNUMPKT_PRD(~0);
reg |= DWC31_RXTHRNUMPKT_PRD(rx_thr_num);
reg &= ~DWC31_MAXRXBURSTSIZE_PRD(~0);
reg |= DWC31_MAXRXBURSTSIZE_PRD(rx_maxburst);
dwc3_writel(dwc->regs, DWC3_GRXTHRCFG, reg);
}
if (tx_thr_num && tx_maxburst) {
reg = dwc3_readl(dwc->regs, DWC3_GTXTHRCFG);
reg |= DWC31_TXTHRNUMPKTSEL_PRD;
reg &= ~DWC31_TXTHRNUMPKT_PRD(~0);
reg |= DWC31_TXTHRNUMPKT_PRD(tx_thr_num);
reg &= ~DWC31_MAXTXBURSTSIZE_PRD(~0);
reg |= DWC31_MAXTXBURSTSIZE_PRD(tx_maxburst);
dwc3_writel(dwc->regs, DWC3_GTXTHRCFG, reg);
}
}
dwc3_config_threshold(dwc);
return 0;
@@ -1417,6 +1501,10 @@ static void dwc3_get_properties(struct dwc3 *dwc)
u8 lpm_nyet_threshold;
u8 tx_de_emphasis;
u8 hird_threshold;
u8 rx_thr_num_pkt = 0;
u8 rx_max_burst = 0;
u8 tx_thr_num_pkt = 0;
u8 tx_max_burst = 0;
u8 rx_thr_num_pkt_prd = 0;
u8 rx_max_burst_prd = 0;
u8 tx_thr_num_pkt_prd = 0;
@@ -1479,6 +1567,14 @@ static void dwc3_get_properties(struct dwc3 *dwc)
"snps,usb2-lpm-disable");
dwc->usb2_gadget_lpm_disable = device_property_read_bool(dev,
"snps,usb2-gadget-lpm-disable");
device_property_read_u8(dev, "snps,rx-thr-num-pkt",
&rx_thr_num_pkt);
device_property_read_u8(dev, "snps,rx-max-burst",
&rx_max_burst);
device_property_read_u8(dev, "snps,tx-thr-num-pkt",
&tx_thr_num_pkt);
device_property_read_u8(dev, "snps,tx-max-burst",
&tx_max_burst);
device_property_read_u8(dev, "snps,rx-thr-num-pkt-prd",
&rx_thr_num_pkt_prd);
device_property_read_u8(dev, "snps,rx-max-burst-prd",
@@ -1560,6 +1656,12 @@ static void dwc3_get_properties(struct dwc3 *dwc)
dwc->hird_threshold = hird_threshold;
dwc->rx_thr_num_pkt = rx_thr_num_pkt;
dwc->rx_max_burst = rx_max_burst;
dwc->tx_thr_num_pkt = tx_thr_num_pkt;
dwc->tx_max_burst = tx_max_burst;
dwc->rx_thr_num_pkt_prd = rx_thr_num_pkt_prd;
dwc->rx_max_burst_prd = rx_max_burst_prd;
@@ -1785,6 +1887,20 @@ static int dwc3_get_clocks(struct dwc3 *dwc)
}
}
/* specific to Rockchip RK3588 */
dwc->utmi_clk = devm_clk_get_optional(dev, "utmi");
if (IS_ERR(dwc->utmi_clk)) {
return dev_err_probe(dev, PTR_ERR(dwc->utmi_clk),
"could not get utmi clock\n");
}
/* specific to Rockchip RK3588 */
dwc->pipe_clk = devm_clk_get_optional(dev, "pipe");
if (IS_ERR(dwc->pipe_clk)) {
return dev_err_probe(dev, PTR_ERR(dwc->pipe_clk),
"could not get pipe clock\n");
}
return 0;
}


@@ -211,6 +211,11 @@
#define DWC3_GRXTHRCFG_RXPKTCNT(n) (((n) & 0xf) << 24)
#define DWC3_GRXTHRCFG_PKTCNTSEL BIT(29)
/* Global TX Threshold Configuration Register */
#define DWC3_GTXTHRCFG_MAXTXBURSTSIZE(n) (((n) & 0xff) << 16)
#define DWC3_GTXTHRCFG_TXPKTCNT(n) (((n) & 0xf) << 24)
#define DWC3_GTXTHRCFG_PKTCNTSEL BIT(29)
/* Global RX Threshold Configuration Register for DWC_usb31 only */
#define DWC31_GRXTHRCFG_MAXRXBURSTSIZE(n) (((n) & 0x1f) << 16)
#define DWC31_GRXTHRCFG_RXPKTCNT(n) (((n) & 0x1f) << 21)
@@ -991,6 +996,8 @@ struct dwc3_scratchpad_array {
* @bus_clk: clock for accessing the registers
* @ref_clk: reference clock
* @susp_clk: clock used when the SS phy is in low power (S3) state
* @utmi_clk: clock used for USB2 PHY communication
* @pipe_clk: clock used for USB3 PHY communication
* @reset: reset control
* @regs: base address for our registers
* @regs_size: address space size
@@ -1045,6 +1052,10 @@ struct dwc3_scratchpad_array {
* @test_mode_nr: test feature selector
* @lpm_nyet_threshold: LPM NYET response threshold
* @hird_threshold: HIRD threshold
* @rx_thr_num_pkt: USB receive packet count
* @rx_max_burst: max USB receive burst size
* @tx_thr_num_pkt: USB transmit packet count
* @tx_max_burst: max USB transmit burst size
* @rx_thr_num_pkt_prd: periodic ESS receive packet count
* @rx_max_burst_prd: max periodic ESS receive burst size
* @tx_thr_num_pkt_prd: periodic ESS transmit packet count
@@ -1106,6 +1117,8 @@ struct dwc3_scratchpad_array {
* instances in park mode.
* @parkmode_disable_hs_quirk: set if we need to disable all HighSpeed
* instances in park mode.
* @gfladj_refclk_lpm_sel: set if we need to enable SOF/ITP counter
* running based on ref_clk
* @tx_de_emphasis_quirk: set if we enable Tx de-emphasis quirk
* @tx_de_emphasis: Tx de-emphasis value
* 0 - -6dB de-emphasis
@@ -1156,6 +1169,8 @@ struct dwc3 {
struct clk *bus_clk;
struct clk *ref_clk;
struct clk *susp_clk;
struct clk *utmi_clk;
struct clk *pipe_clk;
struct reset_control *reset;
@@ -1273,6 +1288,10 @@ struct dwc3 {
u8 test_mode_nr;
u8 lpm_nyet_threshold;
u8 hird_threshold;
u8 rx_thr_num_pkt;
u8 rx_max_burst;
u8 tx_thr_num_pkt;
u8 tx_max_burst;
u8 rx_thr_num_pkt_prd;
u8 rx_max_burst_prd;
u8 tx_thr_num_pkt_prd;

drivers/usb/dwc3/dwc3-rtk.c (new file, 475 lines)

@@ -0,0 +1,475 @@
// SPDX-License-Identifier: GPL-2.0
/*
* dwc3-rtk.c - Realtek DWC3 Specific Glue layer
*
* Copyright (C) 2023 Realtek Semiconductor Corporation
*
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/suspend.h>
#include <linux/sys_soc.h>
#include <linux/usb/otg.h>
#include <linux/usb/of.h>
#include <linux/usb/role.h>
#include "core.h"
#define WRAP_CTR_REG 0x0
#define DISABLE_MULTI_REQ BIT(1)
#define DESC_R2W_MULTI_DISABLE BIT(9)
#define FORCE_PIPE3_PHY_STATUS_TO_0 BIT(13)
#define WRAP_USB2_PHY_UTMI_REG 0x8
#define TXHSVM_EN BIT(3)
#define WRAP_PHY_PIPE_REG 0xC
#define RESET_DISABLE_PIPE3_P0 BIT(0)
#define CLOCK_ENABLE_FOR_PIPE3_PCLK BIT(1)
#define WRAP_USB_HMAC_CTR0_REG 0x60
#define U3PORT_DIS BIT(8)
#define WRAP_USB2_PHY_REG 0x70
#define USB2_PHY_EN_PHY_PLL_PORT0 BIT(12)
#define USB2_PHY_EN_PHY_PLL_PORT1 BIT(13)
#define USB2_PHY_SWITCH_MASK 0x707
#define USB2_PHY_SWITCH_DEVICE 0x0
#define USB2_PHY_SWITCH_HOST 0x606
#define WRAP_APHY_REG 0x128
#define USB3_MBIAS_ENABLE BIT(1)
/* pm control */
#define WRAP_USB_DBUS_PWR_CTRL_REG 0x160
#define USB_DBUS_PWR_CTRL_REG 0x0
#define DBUS_PWR_CTRL_EN BIT(0)
struct dwc3_rtk {
struct device *dev;
void __iomem *regs;
size_t regs_size;
void __iomem *pm_base;
struct dwc3 *dwc;
enum usb_role cur_role;
struct usb_role_switch *role_switch;
};
static void switch_usb2_role(struct dwc3_rtk *rtk, enum usb_role role)
{
void __iomem *reg;
int val;
reg = rtk->regs + WRAP_USB2_PHY_REG;
val = ~USB2_PHY_SWITCH_MASK & readl(reg);
switch (role) {
case USB_ROLE_DEVICE:
writel(USB2_PHY_SWITCH_DEVICE | val, reg);
break;
case USB_ROLE_HOST:
writel(USB2_PHY_SWITCH_HOST | val, reg);
break;
default:
dev_dbg(rtk->dev, "%s: role=%d\n", __func__, role);
break;
}
}
static void switch_dwc3_role(struct dwc3_rtk *rtk, enum usb_role role)
{
if (!rtk->dwc->role_sw)
return;
usb_role_switch_set_role(rtk->dwc->role_sw, role);
}
static enum usb_role dwc3_rtk_get_role(struct dwc3_rtk *rtk)
{
enum usb_role role;
role = rtk->cur_role;
if (rtk->dwc && rtk->dwc->role_sw)
role = usb_role_switch_get_role(rtk->dwc->role_sw);
else
dev_dbg(rtk->dev, "%s not usb_role_switch role=%d\n", __func__, role);
return role;
}
static void dwc3_rtk_set_role(struct dwc3_rtk *rtk, enum usb_role role)
{
rtk->cur_role = role;
switch_dwc3_role(rtk, role);
mdelay(10);
switch_usb2_role(rtk, role);
}
#if IS_ENABLED(CONFIG_USB_ROLE_SWITCH)
static int dwc3_usb_role_switch_set(struct usb_role_switch *sw, enum usb_role role)
{
struct dwc3_rtk *rtk = usb_role_switch_get_drvdata(sw);
dwc3_rtk_set_role(rtk, role);
return 0;
}
static enum usb_role dwc3_usb_role_switch_get(struct usb_role_switch *sw)
{
struct dwc3_rtk *rtk = usb_role_switch_get_drvdata(sw);
return dwc3_rtk_get_role(rtk);
}
static int dwc3_rtk_setup_role_switch(struct dwc3_rtk *rtk)
{
struct usb_role_switch_desc dwc3_role_switch = {NULL};
dwc3_role_switch.name = dev_name(rtk->dev);
dwc3_role_switch.driver_data = rtk;
dwc3_role_switch.allow_userspace_control = true;
dwc3_role_switch.fwnode = dev_fwnode(rtk->dev);
dwc3_role_switch.set = dwc3_usb_role_switch_set;
dwc3_role_switch.get = dwc3_usb_role_switch_get;
rtk->role_switch = usb_role_switch_register(rtk->dev, &dwc3_role_switch);
if (IS_ERR(rtk->role_switch))
return PTR_ERR(rtk->role_switch);
return 0;
}
static int dwc3_rtk_remove_role_switch(struct dwc3_rtk *rtk)
{
if (rtk->role_switch)
usb_role_switch_unregister(rtk->role_switch);
rtk->role_switch = NULL;
return 0;
}
#else
#define dwc3_rtk_setup_role_switch(x) 0
#define dwc3_rtk_remove_role_switch(x) 0
#endif
static const char *const speed_names[] = {
[USB_SPEED_UNKNOWN] = "UNKNOWN",
[USB_SPEED_LOW] = "low-speed",
[USB_SPEED_FULL] = "full-speed",
[USB_SPEED_HIGH] = "high-speed",
[USB_SPEED_WIRELESS] = "wireless",
[USB_SPEED_SUPER] = "super-speed",
[USB_SPEED_SUPER_PLUS] = "super-speed-plus",
};
static enum usb_device_speed __get_dwc3_maximum_speed(struct device_node *np)
{
struct device_node *dwc3_np;
const char *maximum_speed;
int ret;
dwc3_np = of_get_compatible_child(np, "snps,dwc3");
if (!dwc3_np)
return USB_SPEED_UNKNOWN;
ret = of_property_read_string(dwc3_np, "maximum-speed", &maximum_speed);
if (ret < 0)
return USB_SPEED_UNKNOWN;
ret = match_string(speed_names, ARRAY_SIZE(speed_names), maximum_speed);
return (ret < 0) ? USB_SPEED_UNKNOWN : ret;
}
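__get_dwc3_maximum_speed() above leans on match_string() returning the index of the matching entry, which doubles as the enum usb_device_speed value because speed_names is indexed by that enum. A rough userspace equivalent (match_speed() is a hypothetical stand-in for the kernel helper, not its real signature):

```c
#include <assert.h>
#include <string.h>

static const char *const speed_names_sketch[] = {
	"UNKNOWN", "low-speed", "full-speed", "high-speed",
	"wireless", "super-speed", "super-speed-plus",
};

/* Linear scan mimicking match_string(): the matching index on a hit
 * (which is also the enum usb_device_speed value), -1 on a miss. */
static int match_speed(const char *s)
{
	unsigned int i;

	for (i = 0;
	     i < sizeof(speed_names_sketch) / sizeof(speed_names_sketch[0]);
	     i++)
		if (strcmp(speed_names_sketch[i], s) == 0)
			return (int)i;
	return -1;
}
```

This is why the driver can return `ret` directly as an enum usb_device_speed after the negative-result check: the table order mirrors the enum order.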
static int dwc3_rtk_init(struct dwc3_rtk *rtk)
{
struct device *dev = rtk->dev;
void __iomem *reg;
int val;
enum usb_device_speed maximum_speed;
const struct soc_device_attribute rtk_soc_kylin_a00[] = {
{ .family = "Realtek Kylin", .revision = "A00", },
{ /* empty */ } };
const struct soc_device_attribute rtk_soc_hercules[] = {
{ .family = "Realtek Hercules", }, { /* empty */ } };
const struct soc_device_attribute rtk_soc_thor[] = {
{ .family = "Realtek Thor", }, { /* empty */ } };
if (soc_device_match(rtk_soc_kylin_a00)) {
reg = rtk->regs + WRAP_CTR_REG;
val = readl(reg);
writel(DISABLE_MULTI_REQ | val, reg);
dev_info(dev, "[bug fixed] 1295/1296 A00: add workaround to disable multiple request for D-Bus");
}
if (soc_device_match(rtk_soc_hercules)) {
reg = rtk->regs + WRAP_USB2_PHY_REG;
val = readl(reg);
writel(USB2_PHY_EN_PHY_PLL_PORT1 | val, reg);
dev_info(dev, "[bug fixed] 1395 add workaround to disable usb2 port 2 suspend!");
}
reg = rtk->regs + WRAP_USB2_PHY_UTMI_REG;
val = readl(reg);
writel(TXHSVM_EN | val, reg);
maximum_speed = __get_dwc3_maximum_speed(dev->of_node);
if (maximum_speed != USB_SPEED_UNKNOWN && maximum_speed <= USB_SPEED_HIGH) {
if (soc_device_match(rtk_soc_thor)) {
reg = rtk->regs + WRAP_USB_HMAC_CTR0_REG;
val = readl(reg);
writel(U3PORT_DIS | val, reg);
} else {
reg = rtk->regs + WRAP_CTR_REG;
val = readl(reg);
writel(FORCE_PIPE3_PHY_STATUS_TO_0 | val, reg);
reg = rtk->regs + WRAP_PHY_PIPE_REG;
val = ~CLOCK_ENABLE_FOR_PIPE3_PCLK & readl(reg);
writel(RESET_DISABLE_PIPE3_P0 | val, reg);
reg = rtk->regs + WRAP_USB_HMAC_CTR0_REG;
val = readl(reg);
writel(U3PORT_DIS | val, reg);
reg = rtk->regs + WRAP_APHY_REG;
val = readl(reg);
writel(~USB3_MBIAS_ENABLE & val, reg);
dev_dbg(rtk->dev, "%s: disable usb 3.0 phy\n", __func__);
}
}
reg = rtk->regs + WRAP_CTR_REG;
val = readl(reg);
writel(DESC_R2W_MULTI_DISABLE | val, reg);
/* Set phy Dp/Dm initial state to host mode to avoid the Dp glitch */
reg = rtk->regs + WRAP_USB2_PHY_REG;
val = ~USB2_PHY_SWITCH_MASK & readl(reg);
writel(USB2_PHY_SWITCH_HOST | val, reg);
if (rtk->pm_base) {
reg = rtk->pm_base + USB_DBUS_PWR_CTRL_REG;
val = DBUS_PWR_CTRL_EN | readl(reg);
writel(val, reg);
}
return 0;
}
static int dwc3_rtk_probe_dwc3_core(struct dwc3_rtk *rtk)
{
struct device *dev = rtk->dev;
struct device_node *node = dev->of_node;
struct platform_device *dwc3_pdev;
struct device *dwc3_dev;
struct device_node *dwc3_node;
enum usb_dr_mode dr_mode;
int ret = 0;
ret = dwc3_rtk_init(rtk);
if (ret)
return -EINVAL;
ret = of_platform_populate(node, NULL, NULL, dev);
if (ret) {
dev_err(dev, "failed to add dwc3 core\n");
return ret;
}
dwc3_node = of_get_compatible_child(node, "snps,dwc3");
if (!dwc3_node) {
dev_err(dev, "failed to find dwc3 core node\n");
ret = -ENODEV;
goto depopulate;
}
dwc3_pdev = of_find_device_by_node(dwc3_node);
if (!dwc3_pdev) {
dev_err(dev, "failed to find dwc3 core platform_device\n");
ret = -ENODEV;
goto err_node_put;
}
dwc3_dev = &dwc3_pdev->dev;
rtk->dwc = platform_get_drvdata(dwc3_pdev);
if (!rtk->dwc) {
dev_err(dev, "failed to find dwc3 core\n");
ret = -ENODEV;
goto err_pdev_put;
}
dr_mode = usb_get_dr_mode(dwc3_dev);
if (dr_mode != rtk->dwc->dr_mode) {
dev_info(dev, "dts set dr_mode=%d, but dwc3 set dr_mode=%d\n",
dr_mode, rtk->dwc->dr_mode);
dr_mode = rtk->dwc->dr_mode;
}
switch (dr_mode) {
case USB_DR_MODE_PERIPHERAL:
rtk->cur_role = USB_ROLE_DEVICE;
break;
case USB_DR_MODE_HOST:
rtk->cur_role = USB_ROLE_HOST;
break;
default:
dev_dbg(rtk->dev, "%s: dr_mode=%d\n", __func__, dr_mode);
break;
}
if (device_property_read_bool(dwc3_dev, "usb-role-switch")) {
ret = dwc3_rtk_setup_role_switch(rtk);
if (ret) {
dev_err(dev, "dwc3_rtk_setup_role_switch fail=%d\n", ret);
goto err_pdev_put;
}
rtk->cur_role = dwc3_rtk_get_role(rtk);
}
switch_usb2_role(rtk, rtk->cur_role);
return 0;
err_pdev_put:
platform_device_put(dwc3_pdev);
err_node_put:
of_node_put(dwc3_node);
depopulate:
of_platform_depopulate(dev);
return ret;
}
static int dwc3_rtk_probe(struct platform_device *pdev)
{
struct dwc3_rtk *rtk;
struct device *dev = &pdev->dev;
struct resource *res;
void __iomem *regs;
int ret = 0;
rtk = devm_kzalloc(dev, sizeof(*rtk), GFP_KERNEL);
if (!rtk) {
ret = -ENOMEM;
goto out;
}
platform_set_drvdata(pdev, rtk);
rtk->dev = dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev, "missing memory resource\n");
ret = -ENODEV;
goto out;
}
regs = devm_ioremap_resource(dev, res);
if (IS_ERR(regs)) {
ret = PTR_ERR(regs);
goto out;
}
rtk->regs = regs;
rtk->regs_size = resource_size(res);
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (res) {
rtk->pm_base = devm_ioremap_resource(dev, res);
if (IS_ERR(rtk->pm_base)) {
ret = PTR_ERR(rtk->pm_base);
goto out;
}
}
ret = dwc3_rtk_probe_dwc3_core(rtk);
out:
return ret;
}
static void dwc3_rtk_remove(struct platform_device *pdev)
{
struct dwc3_rtk *rtk = platform_get_drvdata(pdev);
rtk->dwc = NULL;
dwc3_rtk_remove_role_switch(rtk);
of_platform_depopulate(rtk->dev);
}
static void dwc3_rtk_shutdown(struct platform_device *pdev)
{
struct dwc3_rtk *rtk = platform_get_drvdata(pdev);
of_platform_depopulate(rtk->dev);
}
static const struct of_device_id rtk_dwc3_match[] = {
{ .compatible = "realtek,rtd-dwc3" },
{},
};
MODULE_DEVICE_TABLE(of, rtk_dwc3_match);
#ifdef CONFIG_PM_SLEEP
static int dwc3_rtk_suspend(struct device *dev)
{
return 0;
}
static int dwc3_rtk_resume(struct device *dev)
{
struct dwc3_rtk *rtk = dev_get_drvdata(dev);
dwc3_rtk_init(rtk);
switch_usb2_role(rtk, rtk->cur_role);
/* runtime set active to reflect active state. */
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
return 0;
}
static const struct dev_pm_ops dwc3_rtk_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(dwc3_rtk_suspend, dwc3_rtk_resume)
};
#define DEV_PM_OPS (&dwc3_rtk_dev_pm_ops)
#else
#define DEV_PM_OPS NULL
#endif /* CONFIG_PM_SLEEP */
static struct platform_driver dwc3_rtk_driver = {
.probe = dwc3_rtk_probe,
.remove_new = dwc3_rtk_remove,
.driver = {
.name = "rtk-dwc3",
.of_match_table = rtk_dwc3_match,
.pm = DEV_PM_OPS,
},
.shutdown = dwc3_rtk_shutdown,
};
module_platform_driver(dwc3_rtk_driver);
MODULE_AUTHOR("Stanley Chang <stanley_chang@realtek.com>");
MODULE_DESCRIPTION("DesignWare USB3 Realtek Glue Layer");
MODULE_ALIAS("platform:rtk-dwc3");
MODULE_LICENSE("GPL");
MODULE_SOFTDEP("pre: phy_rtk_usb2 phy_rtk_usb3");


@@ -32,9 +32,6 @@
#define XLNX_USB_TRAFFIC_ROUTE_CONFIG 0x005C
#define XLNX_USB_TRAFFIC_ROUTE_FPD 0x1
/* Versal USB Reset ID */
#define VERSAL_USB_RESET_ID 0xC104036
#define XLNX_USB_FPD_PIPE_CLK 0x7c
#define PIPE_CLK_DESELECT 1
#define PIPE_CLK_SELECT 0
@@ -72,20 +69,23 @@ static void dwc3_xlnx_mask_phy_rst(struct dwc3_xlnx *priv_data, bool mask)
static int dwc3_xlnx_init_versal(struct dwc3_xlnx *priv_data)
{
struct device *dev = priv_data->dev;
struct reset_control *crst;
int ret;
crst = devm_reset_control_get_exclusive(dev, NULL);
if (IS_ERR(crst))
return dev_err_probe(dev, PTR_ERR(crst), "failed to get reset signal\n");
dwc3_xlnx_mask_phy_rst(priv_data, false);
/* Assert and De-assert reset */
ret = zynqmp_pm_reset_assert(VERSAL_USB_RESET_ID,
PM_RESET_ACTION_ASSERT);
ret = reset_control_assert(crst);
if (ret < 0) {
dev_err_probe(dev, ret, "failed to assert Reset\n");
return ret;
}
ret = zynqmp_pm_reset_assert(VERSAL_USB_RESET_ID,
PM_RESET_ACTION_RELEASE);
ret = reset_control_deassert(crst);
if (ret < 0) {
dev_err_probe(dev, ret, "failed to De-assert Reset\n");
return ret;


@@ -1410,7 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
struct usb_composite_dev *cdev = c->cdev;
struct f_ncm *ncm = func_to_ncm(f);
struct usb_string *us;
int status;
int status = 0;
struct usb_ep *ep;
struct f_ncm_opts *ncm_opts;
@@ -1428,22 +1428,17 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
f->os_desc_table[0].os_desc = &ncm_opts->ncm_os_desc;
}
/*
* in drivers/usb/gadget/configfs.c:configfs_composite_bind()
* configurations are bound in sequence with list_for_each_entry,
* in each configuration its functions are bound in sequence
* with list_for_each_entry, so we assume no race condition
* with regard to ncm_opts->bound access
*/
if (!ncm_opts->bound) {
mutex_lock(&ncm_opts->lock);
gether_set_gadget(ncm_opts->net, cdev->gadget);
mutex_lock(&ncm_opts->lock);
gether_set_gadget(ncm_opts->net, cdev->gadget);
if (!ncm_opts->bound)
status = gether_register_netdev(ncm_opts->net);
mutex_unlock(&ncm_opts->lock);
if (status)
goto fail;
ncm_opts->bound = true;
}
mutex_unlock(&ncm_opts->lock);
if (status)
goto fail;
ncm_opts->bound = true;
us = usb_gstrings_attach(cdev, ncm_strings,
ARRAY_SIZE(ncm_string_defs));
if (IS_ERR(us)) {
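The ncm_bind() rework above is a locking fix: the old code tested ncm_opts->bound before taking ncm_opts->lock, relying on configfs bind ordering to avoid races, while the new code performs the test inside the critical section. A minimal sketch of the test-and-set-under-lock pattern (all names here are hypothetical, with a counter standing in for gether_register_netdev()):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t opts_lock = PTHREAD_MUTEX_INITIALIZER;
static bool bound;
static int register_calls;

/*
 * Test-and-set under the lock: the "already bound?" check and the
 * one-time registration happen in the same critical section, so two
 * racing binders cannot both observe bound == false and register twice.
 */
static int bind_once(void)
{
	int status = 0;

	pthread_mutex_lock(&opts_lock);
	if (!bound) {
		register_calls++;	/* one-time side effect */
		bound = true;
	}
	pthread_mutex_unlock(&opts_lock);
	return status;
}
```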


@@ -212,7 +212,7 @@ static struct uac2_input_terminal_descriptor io_in_it_desc = {
.bDescriptorSubtype = UAC_INPUT_TERMINAL,
/* .bTerminalID = DYNAMIC */
.wTerminalType = cpu_to_le16(UAC_INPUT_TERMINAL_MICROPHONE),
/* .wTerminalType = DYNAMIC */
.bAssocTerminal = 0,
/* .bCSourceID = DYNAMIC */
.iChannelNames = 0,
@@ -240,7 +240,7 @@ static struct uac2_output_terminal_descriptor io_out_ot_desc = {
.bDescriptorSubtype = UAC_OUTPUT_TERMINAL,
/* .bTerminalID = DYNAMIC */
.wTerminalType = cpu_to_le16(UAC_OUTPUT_TERMINAL_SPEAKER),
/* .wTerminalType = DYNAMIC */
.bAssocTerminal = 0,
/* .bSourceID = DYNAMIC */
/* .bCSourceID = DYNAMIC */
@@ -977,6 +977,9 @@ static void setup_descriptor(struct f_uac2_opts *opts)
iad_desc.bInterfaceCount++;
}
io_in_it_desc.wTerminalType = cpu_to_le16(opts->c_terminal_type);
io_out_ot_desc.wTerminalType = cpu_to_le16(opts->p_terminal_type);
setup_headers(opts, fs_audio_desc, USB_SPEED_FULL);
setup_headers(opts, hs_audio_desc, USB_SPEED_HIGH);
setup_headers(opts, ss_audio_desc, USB_SPEED_SUPER);
@@ -2095,6 +2098,9 @@ UAC2_ATTRIBUTE(s16, c_volume_res);
UAC2_ATTRIBUTE(u32, fb_max);
UAC2_ATTRIBUTE_STRING(function_name);
UAC2_ATTRIBUTE(s16, p_terminal_type);
UAC2_ATTRIBUTE(s16, c_terminal_type);
static struct configfs_attribute *f_uac2_attrs[] = {
&f_uac2_opts_attr_p_chmask,
&f_uac2_opts_attr_p_srate,
@@ -2122,6 +2128,9 @@ static struct configfs_attribute *f_uac2_attrs[] = {
&f_uac2_opts_attr_function_name,
&f_uac2_opts_attr_p_terminal_type,
&f_uac2_opts_attr_c_terminal_type,
NULL,
};
@@ -2180,6 +2189,9 @@ static struct usb_function_instance *afunc_alloc_inst(void)
snprintf(opts->function_name, sizeof(opts->function_name), "Source/Sink");
opts->p_terminal_type = UAC2_DEF_P_TERM_TYPE;
opts->c_terminal_type = UAC2_DEF_C_TERM_TYPE;
return &opts->func_inst;
}


@@ -516,6 +516,7 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
void *mem;
switch (speed) {
case USB_SPEED_SUPER_PLUS:
case USB_SPEED_SUPER:
uvc_control_desc = uvc->desc.ss_control;
uvc_streaming_cls = uvc->desc.ss_streaming;
@@ -564,7 +565,8 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
bytes += uvc_interrupt_ep.bLength + uvc_interrupt_cs_ep.bLength;
n_desc += 2;
if (speed == USB_SPEED_SUPER) {
if (speed == USB_SPEED_SUPER ||
speed == USB_SPEED_SUPER_PLUS) {
bytes += uvc_ss_interrupt_comp.bLength;
n_desc += 1;
}
@@ -619,7 +621,8 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
if (uvc->enable_interrupt_ep) {
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_interrupt_ep);
if (speed == USB_SPEED_SUPER)
if (speed == USB_SPEED_SUPER ||
speed == USB_SPEED_SUPER_PLUS)
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_ss_interrupt_comp);
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_interrupt_cs_ep);
@@ -795,6 +798,13 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
goto error;
}
f->ssp_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_SUPER_PLUS);
if (IS_ERR(f->ssp_descriptors)) {
ret = PTR_ERR(f->ssp_descriptors);
f->ssp_descriptors = NULL;
goto error;
}
/* Preallocate control endpoint request. */
uvc->control_req = usb_ep_alloc_request(cdev->gadget->ep0, GFP_KERNEL);
uvc->control_buf = kmalloc(UVC_MAX_REQUEST_SIZE, GFP_KERNEL);


@@ -1200,7 +1200,7 @@ void gether_disconnect(struct gether *link)
DBG(dev, "%s\n", __func__);
netif_stop_queue(dev->net);
netif_device_detach(dev->net);
netif_carrier_off(dev->net);
/* disable endpoints, forcing (synchronous) completion


@@ -35,6 +35,11 @@
#define UAC2_DEF_REQ_NUM 2
#define UAC2_DEF_INT_REQ_NUM 10
#define UAC2_DEF_P_TERM_TYPE 0x301
/* UAC_OUTPUT_TERMINAL_SPEAKER */
#define UAC2_DEF_C_TERM_TYPE 0x201
/* UAC_INPUT_TERMINAL_MICROPHONE */
struct f_uac2_opts {
struct usb_function_instance func_inst;
int p_chmask;
@@ -65,6 +70,9 @@ struct f_uac2_opts {
char function_name[32];
s16 p_terminal_type;
s16 c_terminal_type;
struct mutex lock;
int refcnt;
};


@@ -31,6 +31,12 @@
#include <linux/usb/gadgetfs.h>
#include <linux/usb/gadget.h>
#include <linux/usb/composite.h> /* for USB_GADGET_DELAYED_STATUS */
/* Undef helpers from linux/usb/composite.h as gadgetfs redefines them */
#undef DBG
#undef ERROR
#undef INFO
/*
@@ -1511,7 +1517,16 @@ delegate:
event->u.setup = *ctrl;
ep0_readable (dev);
spin_unlock (&dev->lock);
return 0;
/*
* Return USB_GADGET_DELAYED_STATUS as a workaround to
* stop some UDC drivers (e.g. dwc3) from automatically
* proceeding with the status stage for 0-length
* transfers.
* Should be removed once all UDC drivers are fixed to
* always delay the status stage until a response is
* queued to EP0.
*/
return w_length == 0 ? USB_GADGET_DELAYED_STATUS : 0;
}
}


@@ -25,6 +25,7 @@
#include <linux/usb/ch9.h>
#include <linux/usb/ch11.h>
#include <linux/usb/gadget.h>
#include <linux/usb/composite.h>
#include <uapi/linux/usb/raw_gadget.h>
@@ -64,7 +65,7 @@ static int raw_event_queue_add(struct raw_event_queue *queue,
struct usb_raw_event *event;
spin_lock_irqsave(&queue->lock, flags);
if (WARN_ON(queue->size >= RAW_EVENT_QUEUE_SIZE)) {
if (queue->size >= RAW_EVENT_QUEUE_SIZE) {
spin_unlock_irqrestore(&queue->lock, flags);
return -ENOMEM;
}
@@ -310,9 +311,10 @@ static int gadget_bind(struct usb_gadget *gadget,
dev->eps_num = i;
spin_unlock_irqrestore(&dev->lock, flags);
dev_dbg(&gadget->dev, "gadget connected\n");
ret = raw_queue_event(dev, USB_RAW_EVENT_CONNECT, 0, NULL);
if (ret < 0) {
dev_err(&gadget->dev, "failed to queue event\n");
dev_err(&gadget->dev, "failed to queue connect event\n");
set_gadget_data(gadget, NULL);
return ret;
}
@@ -357,20 +359,65 @@ static int gadget_setup(struct usb_gadget *gadget,
ret = raw_queue_event(dev, USB_RAW_EVENT_CONTROL, sizeof(*ctrl), ctrl);
if (ret < 0)
dev_err(&gadget->dev, "failed to queue event\n");
dev_err(&gadget->dev, "failed to queue control event\n");
goto out;
out_unlock:
spin_unlock_irqrestore(&dev->lock, flags);
out:
if (ret == 0 && ctrl->wLength == 0) {
/*
* Return USB_GADGET_DELAYED_STATUS as a workaround to stop
* some UDC drivers (e.g. dwc3) from automatically proceeding
* with the status stage for 0-length transfers.
* Should be removed once all UDC drivers are fixed to always
* delay the status stage until a response is queued to EP0.
*/
return USB_GADGET_DELAYED_STATUS;
}
return ret;
}
/* These are currently unused but present in case UDC driver requires them. */
static void gadget_disconnect(struct usb_gadget *gadget) { }
static void gadget_suspend(struct usb_gadget *gadget) { }
static void gadget_resume(struct usb_gadget *gadget) { }
static void gadget_reset(struct usb_gadget *gadget) { }
static void gadget_disconnect(struct usb_gadget *gadget)
{
struct raw_dev *dev = get_gadget_data(gadget);
int ret;
dev_dbg(&gadget->dev, "gadget disconnected\n");
ret = raw_queue_event(dev, USB_RAW_EVENT_DISCONNECT, 0, NULL);
if (ret < 0)
dev_err(&gadget->dev, "failed to queue disconnect event\n");
}
static void gadget_suspend(struct usb_gadget *gadget)
{
struct raw_dev *dev = get_gadget_data(gadget);
int ret;
dev_dbg(&gadget->dev, "gadget suspended\n");
ret = raw_queue_event(dev, USB_RAW_EVENT_SUSPEND, 0, NULL);
if (ret < 0)
dev_err(&gadget->dev, "failed to queue suspend event\n");
}
static void gadget_resume(struct usb_gadget *gadget)
{
struct raw_dev *dev = get_gadget_data(gadget);
int ret;
dev_dbg(&gadget->dev, "gadget resumed\n");
ret = raw_queue_event(dev, USB_RAW_EVENT_RESUME, 0, NULL);
if (ret < 0)
dev_err(&gadget->dev, "failed to queue resume event\n");
}
static void gadget_reset(struct usb_gadget *gadget)
{
struct raw_dev *dev = get_gadget_data(gadget);
int ret;
dev_dbg(&gadget->dev, "gadget reset\n");
ret = raw_queue_event(dev, USB_RAW_EVENT_RESET, 0, NULL);
if (ret < 0)
dev_err(&gadget->dev, "failed to queue reset event\n");
}
/*----------------------------------------------------------------------*/
@@ -663,12 +710,12 @@ static int raw_process_ep0_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
if (WARN_ON(in && dev->ep0_out_pending)) {
ret = -ENODEV;
dev->state = STATE_DEV_FAILED;
goto out_done;
goto out_unlock;
}
if (WARN_ON(!in && dev->ep0_in_pending)) {
ret = -ENODEV;
dev->state = STATE_DEV_FAILED;
goto out_done;
goto out_unlock;
}
dev->req->buf = data;
@@ -682,8 +729,7 @@ static int raw_process_ep0_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
dev_err(&dev->gadget->dev,
"fail, usb_ep_queue returned %d\n", ret);
spin_lock_irqsave(&dev->lock, flags);
dev->state = STATE_DEV_FAILED;
goto out_done;
goto out_queue_failed;
}
ret = wait_for_completion_interruptible(&dev->ep0_done);
@@ -692,13 +738,16 @@ static int raw_process_ep0_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
usb_ep_dequeue(dev->gadget->ep0, dev->req);
wait_for_completion(&dev->ep0_done);
spin_lock_irqsave(&dev->lock, flags);
goto out_done;
if (dev->ep0_status == -ECONNRESET)
dev->ep0_status = -EINTR;
goto out_interrupted;
}
spin_lock_irqsave(&dev->lock, flags);
ret = dev->ep0_status;
out_done:
out_interrupted:
ret = dev->ep0_status;
out_queue_failed:
dev->ep0_urb_queued = false;
out_unlock:
spin_unlock_irqrestore(&dev->lock, flags);
@@ -1066,8 +1115,7 @@ static int raw_process_ep_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
dev_err(&dev->gadget->dev,
"fail, usb_ep_queue returned %d\n", ret);
spin_lock_irqsave(&dev->lock, flags);
dev->state = STATE_DEV_FAILED;
goto out_done;
goto out_queue_failed;
}
ret = wait_for_completion_interruptible(&done);
@@ -1076,13 +1124,16 @@ static int raw_process_ep_io(struct raw_dev *dev, struct usb_raw_ep_io *io,
usb_ep_dequeue(ep->ep, ep->req);
wait_for_completion(&done);
spin_lock_irqsave(&dev->lock, flags);
goto out_done;
if (ep->status == -ECONNRESET)
ep->status = -EINTR;
goto out_interrupted;
}
spin_lock_irqsave(&dev->lock, flags);
ret = ep->status;
out_done:
out_interrupted:
ret = ep->status;
out_queue_failed:
ep->urb_queued = false;
out_unlock:
spin_unlock_irqrestore(&dev->lock, flags);


@@ -1432,15 +1432,24 @@ static void ast_udc_init_hw(struct ast_udc_dev *udc)
ast_udc_write(udc, 0, AST_UDC_EP0_CTRL);
}
static int ast_udc_remove(struct platform_device *pdev)
static void ast_udc_remove(struct platform_device *pdev)
{
struct ast_udc_dev *udc = platform_get_drvdata(pdev);
unsigned long flags;
u32 ctrl;
usb_del_gadget_udc(&udc->gadget);
if (udc->driver)
return -EBUSY;
if (udc->driver) {
/*
 * This is broken as only some cleanup is skipped, *udev is
 * freed and the register mapping goes away. Any further usage
 * probably crashes. Also the device is unbound, so the skipped
 * cleanup is never caught up later.
 */
dev_alert(&pdev->dev,
"Driver is busy and still going away. Fasten your seat belts!\n");
return;
}
spin_lock_irqsave(&udc->lock, flags);
@@ -1459,8 +1468,6 @@ static int ast_udc_remove(struct platform_device *pdev)
udc->ep0_buf_dma);
udc->ep0_buf = NULL;
return 0;
}
static int ast_udc_probe(struct platform_device *pdev)
@@ -1581,7 +1588,7 @@ MODULE_DEVICE_TABLE(of, ast_udc_of_dt_ids);
static struct platform_driver ast_udc_driver = {
.probe = ast_udc_probe,
.remove = ast_udc_remove,
.remove_new = ast_udc_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = ast_udc_of_dt_ids,


@@ -2000,6 +2000,7 @@ static int at91udc_resume(struct platform_device *pdev)
#endif
static struct platform_driver at91_udc_driver = {
.probe = at91udc_probe,
.remove = at91udc_remove,
.shutdown = at91udc_shutdown,
.suspend = at91udc_suspend,
@@ -2010,7 +2011,7 @@ static struct platform_driver at91_udc_driver = {
},
};
module_platform_driver_probe(at91_udc_driver, at91udc_probe);
module_platform_driver(at91_udc_driver);
MODULE_DESCRIPTION("AT91 udc driver");
MODULE_AUTHOR("Thomas Rathbone, David Brownell");


@@ -1126,12 +1126,12 @@ EXPORT_SYMBOL_GPL(usb_gadget_set_state);
/* ------------------------------------------------------------------------- */
/* Acquire connect_lock before calling this function. */
static void usb_udc_connect_control_locked(struct usb_udc *udc) __must_hold(&udc->connect_lock)
static int usb_udc_connect_control_locked(struct usb_udc *udc) __must_hold(&udc->connect_lock)
{
if (udc->vbus)
usb_gadget_connect_locked(udc->gadget);
return usb_gadget_connect_locked(udc->gadget);
else
usb_gadget_disconnect_locked(udc->gadget);
return usb_gadget_disconnect_locked(udc->gadget);
}
static void vbus_event_work(struct work_struct *work)
@@ -1605,12 +1605,23 @@ static int gadget_bind_driver(struct device *dev)
}
usb_gadget_enable_async_callbacks(udc);
udc->allow_connect = true;
usb_udc_connect_control_locked(udc);
ret = usb_udc_connect_control_locked(udc);
if (ret)
goto err_connect_control;
mutex_unlock(&udc->connect_lock);
kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
return 0;
err_connect_control:
udc->allow_connect = false;
usb_gadget_disable_async_callbacks(udc);
if (gadget->irq)
synchronize_irq(gadget->irq);
usb_gadget_udc_stop_locked(udc);
mutex_unlock(&udc->connect_lock);
err_start:
driver->unbind(udc->gadget);


@@ -27,9 +27,10 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/moduleparam.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/usb/ch9.h>
#include <linux/usb/gadget.h>
@@ -2471,17 +2472,12 @@ static const struct of_device_id qe_udc_match[];
static int qe_udc_probe(struct platform_device *ofdev)
{
struct qe_udc *udc;
const struct of_device_id *match;
struct device_node *np = ofdev->dev.of_node;
struct qe_ep *ep;
unsigned int ret = 0;
unsigned int i;
const void *prop;
match = of_match_device(qe_udc_match, &ofdev->dev);
if (!match)
return -EINVAL;
prop = of_get_property(np, "mode", NULL);
if (!prop || strcmp(prop, "peripheral"))
return -ENODEV;
@@ -2493,7 +2489,7 @@ static int qe_udc_probe(struct platform_device *ofdev)
return -ENOMEM;
}
udc->soc_type = (unsigned long)match->data;
udc->soc_type = (unsigned long)device_get_match_data(&ofdev->dev);
udc->usb_regs = of_iomap(np, 0);
if (!udc->usb_regs) {
ret = -ENOMEM;


@@ -2666,6 +2666,7 @@ static const struct platform_device_id fsl_udc_devtype[] = {
};
MODULE_DEVICE_TABLE(platform, fsl_udc_devtype);
static struct platform_driver udc_driver = {
.probe = fsl_udc_probe,
.remove = fsl_udc_remove,
.id_table = fsl_udc_devtype,
/* these suspend and resume are not usb suspend and resume */
@@ -2679,7 +2680,7 @@ static struct platform_driver udc_driver = {
},
};
module_platform_driver_probe(udc_driver, fsl_udc_probe);
module_platform_driver(udc_driver);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_AUTHOR(DRIVER_AUTHOR);


@@ -1506,10 +1506,11 @@ clean_up:
}
static struct platform_driver fusb300_driver = {
.remove_new = fusb300_remove,
.driver = {
.probe = fusb300_probe,
.remove_new = fusb300_remove,
.driver = {
.name = udc_name,
},
};
module_platform_driver_probe(fusb300_driver, fusb300_probe);
module_platform_driver(fusb300_driver);


@@ -3254,6 +3254,7 @@ MODULE_DEVICE_TABLE(of, lpc32xx_udc_of_match);
#endif
static struct platform_driver lpc32xx_udc_driver = {
.probe = lpc32xx_udc_probe,
.remove = lpc32xx_udc_remove,
.shutdown = lpc32xx_udc_shutdown,
.suspend = lpc32xx_udc_suspend,
@@ -3264,7 +3265,7 @@ static struct platform_driver lpc32xx_udc_driver = {
},
};
module_platform_driver_probe(lpc32xx_udc_driver, lpc32xx_udc_probe);
module_platform_driver(lpc32xx_udc_driver);
MODULE_DESCRIPTION("LPC32XX udc driver");
MODULE_AUTHOR("Kevin Wells <kevin.wells@nxp.com>");


@@ -1687,10 +1687,11 @@ clean_up:
/*-------------------------------------------------------------------------*/
static struct platform_driver m66592_driver = {
.probe = m66592_probe,
.remove_new = m66592_remove,
.driver = {
.name = udc_name,
},
};
module_platform_driver_probe(m66592_driver, m66592_probe);
module_platform_driver(m66592_driver);


@@ -1964,13 +1964,14 @@ clean_up2:
/*-------------------------------------------------------------------------*/
static struct platform_driver r8a66597_driver = {
.probe = r8a66597_probe,
.remove_new = r8a66597_remove,
.driver = {
.name = udc_name,
},
};
module_platform_driver_probe(r8a66597_driver, r8a66597_probe);
module_platform_driver(r8a66597_driver);
MODULE_DESCRIPTION("R8A66597 USB gadget driver");
MODULE_LICENSE("GPL");


@@ -60,6 +60,23 @@
#define EHCI_USBLEGCTLSTS 4 /* legacy control/status */
#define EHCI_USBLEGCTLSTS_SOOE (1 << 13) /* SMI on ownership change */
/* ASMEDIA quirk use */
#define ASMT_DATA_WRITE0_REG 0xF8
#define ASMT_DATA_WRITE1_REG 0xFC
#define ASMT_CONTROL_REG 0xE0
#define ASMT_CONTROL_WRITE_BIT 0x02
#define ASMT_WRITEREG_CMD 0x10423
#define ASMT_FLOWCTL_ADDR 0xFA30
#define ASMT_FLOWCTL_DATA 0xBA
#define ASMT_PSEUDO_DATA 0
/* Intel quirk use */
#define USB_INTEL_XUSB2PR 0xD0
#define USB_INTEL_USB2PRM 0xD4
#define USB_INTEL_USB3_PSSEN 0xD8
#define USB_INTEL_USB3PRM 0xDC
#ifdef CONFIG_USB_PCI_AMD
/* AMD quirk use */
#define AB_REG_BAR_LOW 0xe0
#define AB_REG_BAR_HIGH 0xe1
@@ -93,21 +110,6 @@
#define NB_PIF0_PWRDOWN_0 0x01100012
#define NB_PIF0_PWRDOWN_1 0x01100013
#define USB_INTEL_XUSB2PR 0xD0
#define USB_INTEL_USB2PRM 0xD4
#define USB_INTEL_USB3_PSSEN 0xD8
#define USB_INTEL_USB3PRM 0xDC
/* ASMEDIA quirk use */
#define ASMT_DATA_WRITE0_REG 0xF8
#define ASMT_DATA_WRITE1_REG 0xFC
#define ASMT_CONTROL_REG 0xE0
#define ASMT_CONTROL_WRITE_BIT 0x02
#define ASMT_WRITEREG_CMD 0x10423
#define ASMT_FLOWCTL_ADDR 0xFA30
#define ASMT_FLOWCTL_DATA 0xBA
#define ASMT_PSEUDO_DATA 0
/*
* amd_chipset_gen values represent AMD different chipset generations
*/
@@ -458,50 +460,6 @@ void usb_amd_quirk_pll_disable(void)
}
EXPORT_SYMBOL_GPL(usb_amd_quirk_pll_disable);
static int usb_asmedia_wait_write(struct pci_dev *pdev)
{
unsigned long retry_count;
unsigned char value;
for (retry_count = 1000; retry_count > 0; --retry_count) {
pci_read_config_byte(pdev, ASMT_CONTROL_REG, &value);
if (value == 0xff) {
dev_err(&pdev->dev, "%s: check_ready ERROR", __func__);
return -EIO;
}
if ((value & ASMT_CONTROL_WRITE_BIT) == 0)
return 0;
udelay(50);
}
dev_warn(&pdev->dev, "%s: check_write_ready timeout", __func__);
return -ETIMEDOUT;
}
void usb_asmedia_modifyflowcontrol(struct pci_dev *pdev)
{
if (usb_asmedia_wait_write(pdev) != 0)
return;
/* send command and address to device */
pci_write_config_dword(pdev, ASMT_DATA_WRITE0_REG, ASMT_WRITEREG_CMD);
pci_write_config_dword(pdev, ASMT_DATA_WRITE1_REG, ASMT_FLOWCTL_ADDR);
pci_write_config_byte(pdev, ASMT_CONTROL_REG, ASMT_CONTROL_WRITE_BIT);
if (usb_asmedia_wait_write(pdev) != 0)
return;
/* send data to device */
pci_write_config_dword(pdev, ASMT_DATA_WRITE0_REG, ASMT_FLOWCTL_DATA);
pci_write_config_dword(pdev, ASMT_DATA_WRITE1_REG, ASMT_PSEUDO_DATA);
pci_write_config_byte(pdev, ASMT_CONTROL_REG, ASMT_CONTROL_WRITE_BIT);
}
EXPORT_SYMBOL_GPL(usb_asmedia_modifyflowcontrol);
void usb_amd_quirk_pll_enable(void)
{
usb_amd_quirk_pll(0);
@@ -630,7 +588,62 @@ bool usb_amd_pt_check_port(struct device *device, int port)
return !(value & BIT(port_shift));
}
EXPORT_SYMBOL_GPL(usb_amd_pt_check_port);
#endif /* CONFIG_USB_PCI_AMD */
static int usb_asmedia_wait_write(struct pci_dev *pdev)
{
unsigned long retry_count;
unsigned char value;
for (retry_count = 1000; retry_count > 0; --retry_count) {
pci_read_config_byte(pdev, ASMT_CONTROL_REG, &value);
if (value == 0xff) {
dev_err(&pdev->dev, "%s: check_ready ERROR", __func__);
return -EIO;
}
if ((value & ASMT_CONTROL_WRITE_BIT) == 0)
return 0;
udelay(50);
}
dev_warn(&pdev->dev, "%s: check_write_ready timeout", __func__);
return -ETIMEDOUT;
}
void usb_asmedia_modifyflowcontrol(struct pci_dev *pdev)
{
if (usb_asmedia_wait_write(pdev) != 0)
return;
/* send command and address to device */
pci_write_config_dword(pdev, ASMT_DATA_WRITE0_REG, ASMT_WRITEREG_CMD);
pci_write_config_dword(pdev, ASMT_DATA_WRITE1_REG, ASMT_FLOWCTL_ADDR);
pci_write_config_byte(pdev, ASMT_CONTROL_REG, ASMT_CONTROL_WRITE_BIT);
if (usb_asmedia_wait_write(pdev) != 0)
return;
/* send data to device */
pci_write_config_dword(pdev, ASMT_DATA_WRITE0_REG, ASMT_FLOWCTL_DATA);
pci_write_config_dword(pdev, ASMT_DATA_WRITE1_REG, ASMT_PSEUDO_DATA);
pci_write_config_byte(pdev, ASMT_CONTROL_REG, ASMT_CONTROL_WRITE_BIT);
}
EXPORT_SYMBOL_GPL(usb_asmedia_modifyflowcontrol);
static inline int io_type_enabled(struct pci_dev *pdev, unsigned int mask)
{
u16 cmd;
return !pci_read_config_word(pdev, PCI_COMMAND, &cmd) && (cmd & mask);
}
#define mmio_enabled(dev) io_type_enabled(dev, PCI_COMMAND_MEMORY)
#if defined(CONFIG_HAS_IOPORT) && IS_ENABLED(CONFIG_USB_UHCI_HCD)
/*
* Make sure the controller is completely inactive, unable to
* generate interrupts or do DMA.
@@ -712,14 +725,7 @@ reset_needed:
}
EXPORT_SYMBOL_GPL(uhci_check_and_reset_hc);
static inline int io_type_enabled(struct pci_dev *pdev, unsigned int mask)
{
u16 cmd;
return !pci_read_config_word(pdev, PCI_COMMAND, &cmd) && (cmd & mask);
}
#define pio_enabled(dev) io_type_enabled(dev, PCI_COMMAND_IO)
#define mmio_enabled(dev) io_type_enabled(dev, PCI_COMMAND_MEMORY)
static void quirk_usb_handoff_uhci(struct pci_dev *pdev)
{
@@ -739,6 +745,12 @@ static void quirk_usb_handoff_uhci(struct pci_dev *pdev)
uhci_check_and_reset_hc(pdev, base);
}
#else /* defined(CONFIG_HAS_IOPORT) && IS_ENABLED(CONFIG_USB_UHCI_HCD) */
static void quirk_usb_handoff_uhci(struct pci_dev *pdev) {}
#endif /* defined(CONFIG_HAS_IOPORT) && IS_ENABLED(CONFIG_USB_UHCI_HCD) */
static int mmio_resource_enabled(struct pci_dev *pdev, int idx)
{
return pci_resource_start(pdev, idx) && mmio_enabled(pdev);


@@ -2,9 +2,7 @@
#ifndef __LINUX_USB_PCI_QUIRKS_H
#define __LINUX_USB_PCI_QUIRKS_H
#ifdef CONFIG_USB_PCI
void uhci_reset_hc(struct pci_dev *pdev, unsigned long base);
int uhci_check_and_reset_hc(struct pci_dev *pdev, unsigned long base);
#ifdef CONFIG_USB_PCI_AMD
int usb_hcd_amd_remote_wakeup_quirk(struct pci_dev *pdev);
bool usb_amd_hang_symptom_quirk(void);
bool usb_amd_prefetch_quirk(void);
@@ -12,23 +10,41 @@ void usb_amd_dev_put(void);
bool usb_amd_quirk_pll_check(void);
void usb_amd_quirk_pll_disable(void);
void usb_amd_quirk_pll_enable(void);
void usb_asmedia_modifyflowcontrol(struct pci_dev *pdev);
void usb_enable_intel_xhci_ports(struct pci_dev *xhci_pdev);
void usb_disable_xhci_ports(struct pci_dev *xhci_pdev);
void sb800_prefetch(struct device *dev, int on);
bool usb_amd_pt_check_port(struct device *device, int port);
#else
struct pci_dev;
static inline bool usb_amd_hang_symptom_quirk(void)
{
return false;
};
static inline bool usb_amd_prefetch_quirk(void)
{
return false;
}
static inline void usb_amd_quirk_pll_disable(void) {}
static inline void usb_amd_quirk_pll_enable(void) {}
static inline void usb_asmedia_modifyflowcontrol(struct pci_dev *pdev) {}
static inline void usb_amd_dev_put(void) {}
static inline void usb_disable_xhci_ports(struct pci_dev *xhci_pdev) {}
static inline bool usb_amd_quirk_pll_check(void)
{
return false;
}
static inline void sb800_prefetch(struct device *dev, int on) {}
static inline bool usb_amd_pt_check_port(struct device *device, int port)
{
return false;
}
#endif /* CONFIG_USB_PCI_AMD */
#ifdef CONFIG_USB_PCI
void uhci_reset_hc(struct pci_dev *pdev, unsigned long base);
int uhci_check_and_reset_hc(struct pci_dev *pdev, unsigned long base);
void usb_asmedia_modifyflowcontrol(struct pci_dev *pdev);
void usb_enable_intel_xhci_ports(struct pci_dev *xhci_pdev);
void usb_disable_xhci_ports(struct pci_dev *xhci_pdev);
#else
struct pci_dev;
static inline void usb_asmedia_modifyflowcontrol(struct pci_dev *pdev) {}
static inline void usb_disable_xhci_ports(struct pci_dev *xhci_pdev) {}
#endif /* CONFIG_USB_PCI */
#endif /* __LINUX_USB_PCI_QUIRKS_H */


@@ -204,7 +204,7 @@ static void xhci_ring_dump_segment(struct seq_file *s,
for (i = 0; i < TRBS_PER_SEGMENT; i++) {
trb = &seg->trbs[i];
dma = seg->dma + i * sizeof(*trb);
seq_printf(s, "%pad: %s\n", &dma,
seq_printf(s, "%2u %pad: %s\n", seg->num, &dma,
xhci_decode_trb(str, XHCI_MSG_MAX, le32_to_cpu(trb->generic.field[0]),
le32_to_cpu(trb->generic.field[1]),
le32_to_cpu(trb->generic.field[2]),


@@ -79,6 +79,33 @@
/* true: Controller Not Ready to accept doorbell or op reg writes after reset */
#define XHCI_STS_CNR (1 << 11)
/**
* struct xhci_protocol_caps
* @revision: major revision, minor revision, capability ID,
* and next capability pointer.
* @name_string: Four ASCII characters to say which spec this xHC
* follows, typically "USB ".
* @port_info: Port offset, count, and protocol-defined information.
*/
struct xhci_protocol_caps {
u32 revision;
u32 name_string;
u32 port_info;
};
#define XHCI_EXT_PORT_MAJOR(x) (((x) >> 24) & 0xff)
#define XHCI_EXT_PORT_MINOR(x) (((x) >> 16) & 0xff)
#define XHCI_EXT_PORT_PSIC(x) (((x) >> 28) & 0x0f)
#define XHCI_EXT_PORT_OFF(x) ((x) & 0xff)
#define XHCI_EXT_PORT_COUNT(x) (((x) >> 8) & 0xff)
#define XHCI_EXT_PORT_PSIV(x) (((x) >> 0) & 0x0f)
#define XHCI_EXT_PORT_PSIE(x) (((x) >> 4) & 0x03)
#define XHCI_EXT_PORT_PLT(x) (((x) >> 6) & 0x03)
#define XHCI_EXT_PORT_PFD(x) (((x) >> 8) & 0x01)
#define XHCI_EXT_PORT_LP(x) (((x) >> 14) & 0x03)
#define XHCI_EXT_PORT_PSIM(x) (((x) >> 16) & 0xffff)
#include <linux/io.h>
/**


@@ -1262,7 +1262,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
retval = -ENODEV;
break;
}
trace_xhci_get_port_status(wIndex, temp);
trace_xhci_get_port_status(port, temp);
status = xhci_get_port_status(hcd, bus_state, wIndex, temp,
&flags);
if (status == 0xffffffff)
@@ -1687,7 +1687,7 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
retval = -ENODEV;
break;
}
trace_xhci_hub_status_data(i, temp);
trace_xhci_hub_status_data(ports[i], temp);
if ((temp & mask) != 0 ||
(bus_state->port_c_suspend & 1 << i) ||


@@ -29,6 +29,7 @@
static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci,
unsigned int cycle_state,
unsigned int max_packet,
unsigned int num,
gfp_t flags)
{
struct xhci_segment *seg;
@@ -60,6 +61,7 @@ static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci,
for (i = 0; i < TRBS_PER_SEGMENT; i++)
seg->trbs[i].link.control = cpu_to_le32(TRB_CYCLE);
}
seg->num = num;
seg->dma = dma;
seg->next = NULL;
@@ -128,7 +130,7 @@ static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring,
struct xhci_segment *first, struct xhci_segment *last,
unsigned int num_segs)
{
struct xhci_segment *next;
struct xhci_segment *next, *seg;
bool chain_links;
if (!ring || !first || !last)
@@ -144,13 +146,18 @@ static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring,
xhci_link_segments(last, next, ring->type, chain_links);
ring->num_segs += num_segs;
if (ring->type != TYPE_EVENT && ring->enq_seg == ring->last_seg) {
ring->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control
&= ~cpu_to_le32(LINK_TOGGLE);
last->trbs[TRBS_PER_SEGMENT-1].link.control
|= cpu_to_le32(LINK_TOGGLE);
if (ring->enq_seg == ring->last_seg) {
if (ring->type != TYPE_EVENT) {
ring->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control
&= ~cpu_to_le32(LINK_TOGGLE);
last->trbs[TRBS_PER_SEGMENT-1].link.control
|= cpu_to_le32(LINK_TOGGLE);
}
ring->last_seg = last;
}
for (seg = last; seg != ring->last_seg; seg = seg->next)
seg->next->num = seg->num + 1;
}
/*
@@ -320,8 +327,9 @@ void xhci_initialize_ring_info(struct xhci_ring *ring,
/* Allocate segments and link them for a ring */
static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci,
struct xhci_segment **first, struct xhci_segment **last,
unsigned int num_segs, unsigned int cycle_state,
enum xhci_ring_type type, unsigned int max_packet, gfp_t flags)
unsigned int num_segs, unsigned int num,
unsigned int cycle_state, enum xhci_ring_type type,
unsigned int max_packet, gfp_t flags)
{
struct xhci_segment *prev;
bool chain_links;
@@ -331,16 +339,17 @@ static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci,
(type == TYPE_ISOC &&
(xhci->quirks & XHCI_AMD_0x96_HOST)));
prev = xhci_segment_alloc(xhci, cycle_state, max_packet, flags);
prev = xhci_segment_alloc(xhci, cycle_state, max_packet, num, flags);
if (!prev)
return -ENOMEM;
num_segs--;
num++;
*first = prev;
while (num_segs > 0) {
while (num < num_segs) {
struct xhci_segment *next;
next = xhci_segment_alloc(xhci, cycle_state, max_packet, flags);
next = xhci_segment_alloc(xhci, cycle_state, max_packet, num,
flags);
if (!next) {
prev = *first;
while (prev) {
@@ -353,7 +362,7 @@ static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci,
xhci_link_segments(prev, next, type, chain_links);
prev = next;
num_segs--;
num++;
}
xhci_link_segments(prev, *first, type, chain_links);
*last = prev;
@@ -388,7 +397,7 @@ struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci,
return ring;
ret = xhci_alloc_segments_for_ring(xhci, &ring->first_seg,
&ring->last_seg, num_segs, cycle_state, type,
&ring->last_seg, num_segs, 0, cycle_state, type,
max_packet, flags);
if (ret)
goto fail;
@@ -428,7 +437,8 @@ int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring,
int ret;
ret = xhci_alloc_segments_for_ring(xhci, &first, &last,
num_new_segs, ring->cycle_state, ring->type,
num_new_segs, ring->enq_seg->num + 1,
ring->cycle_state, ring->type,
ring->bounce_buf_len, flags);
if (ret)
return -ENOMEM;
@@ -1766,7 +1776,7 @@ void xhci_free_command(struct xhci_hcd *xhci,
kfree(command);
}
int xhci_alloc_erst(struct xhci_hcd *xhci,
static int xhci_alloc_erst(struct xhci_hcd *xhci,
struct xhci_ring *evt_ring,
struct xhci_erst *erst,
gfp_t flags)
@@ -1797,23 +1807,13 @@ int xhci_alloc_erst(struct xhci_hcd *xhci,
}
static void
xhci_free_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
xhci_remove_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
{
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
size_t erst_size;
u64 tmp64;
u32 tmp;
if (!ir)
return;
erst_size = sizeof(struct xhci_erst_entry) * ir->erst.num_entries;
if (ir->erst.entries)
dma_free_coherent(dev, erst_size,
ir->erst.entries,
ir->erst.erst_dma_addr);
ir->erst.entries = NULL;
/*
* Clean out interrupter registers except ERSTBA. Clearing either the
* low or high 32 bits of ERSTBA immediately causes the controller to
@@ -1824,14 +1824,30 @@ xhci_free_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
tmp &= ERST_SIZE_MASK;
writel(tmp, &ir->ir_set->erst_size);
tmp64 = xhci_read_64(xhci, &ir->ir_set->erst_dequeue);
tmp64 &= (u64) ERST_PTR_MASK;
xhci_write_64(xhci, tmp64, &ir->ir_set->erst_dequeue);
xhci_write_64(xhci, ERST_EHB, &ir->ir_set->erst_dequeue);
}
}
/* free interrupter event ring */
static void
xhci_free_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
{
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
size_t erst_size;
if (!ir)
return;
erst_size = sizeof(struct xhci_erst_entry) * ir->erst.num_entries;
if (ir->erst.entries)
dma_free_coherent(dev, erst_size,
ir->erst.entries,
ir->erst.erst_dma_addr);
ir->erst.entries = NULL;
/* free interrupter event ring */
if (ir->event_ring)
xhci_ring_free(xhci, ir->event_ring);
ir->event_ring = NULL;
kfree(ir);
@@ -1844,6 +1860,7 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
cancel_delayed_work_sync(&xhci->cmd_timer);
xhci_remove_interrupter(xhci, xhci->interrupter);
xhci_free_interrupter(xhci, xhci->interrupter);
xhci->interrupter = NULL;
xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Freed primary event ring");
@@ -1933,7 +1950,6 @@ no_bw:
static void xhci_set_hc_event_deq(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
{
u64 temp;
dma_addr_t deq;
deq = xhci_trb_virt_to_dma(ir->event_ring->deq_seg,
@@ -1941,16 +1957,12 @@ static void xhci_set_hc_event_deq(struct xhci_hcd *xhci, struct xhci_interrupter
if (!deq)
xhci_warn(xhci, "WARN something wrong with SW event ring dequeue ptr.\n");
/* Update HC event ring dequeue pointer */
temp = xhci_read_64(xhci, &ir->ir_set->erst_dequeue);
temp &= ERST_PTR_MASK;
/* Don't clear the EHB bit (which is RW1C) because
* there might be more events to service.
*/
temp &= ~ERST_EHB;
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"// Write event ring dequeue pointer, preserving EHB bit");
xhci_write_64(xhci, ((u64) deq & (u64) ~ERST_PTR_MASK) | temp,
&ir->ir_set->erst_dequeue);
xhci_write_64(xhci, deq & ERST_PTR_MASK, &ir->ir_set->erst_dequeue);
}
static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
@@ -2238,14 +2250,18 @@ xhci_alloc_interrupter(struct xhci_hcd *xhci, gfp_t flags)
{
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
struct xhci_interrupter *ir;
unsigned int num_segs;
int ret;
ir = kzalloc_node(sizeof(*ir), flags, dev_to_node(dev));
if (!ir)
return NULL;
ir->event_ring = xhci_ring_alloc(xhci, ERST_NUM_SEGS, 1, TYPE_EVENT,
0, flags);
num_segs = min_t(unsigned int, 1 << HCS_ERST_MAX(xhci->hcs_params2),
ERST_MAX_SEGS);
ir->event_ring = xhci_ring_alloc(xhci, num_segs, 1, TYPE_EVENT, 0,
flags);
if (!ir->event_ring) {
xhci_warn(xhci, "Failed to allocate interrupter event ring\n");
kfree(ir);
@@ -2281,7 +2297,7 @@ xhci_add_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir,
/* set ERST count with the number of entries in the segment table */
erst_size = readl(&ir->ir_set->erst_size);
erst_size &= ERST_SIZE_MASK;
erst_size |= ERST_NUM_SEGS;
erst_size |= ir->event_ring->num_segs;
writel(erst_size, &ir->ir_set->erst_size);
erst_base = xhci_read_64(xhci, &ir->ir_set->erst_base);


@@ -19,6 +19,18 @@
#define HS_BW_BOUNDARY 6144
/* usb2 spec section11.18.1: at most 188 FS bytes per microframe */
#define FS_PAYLOAD_MAX 188
#define LS_PAYLOAD_MAX 18
/* section 11.18.1, per fs frame */
#define FS_BW_BOUNDARY 1157
#define LS_BW_BOUNDARY 144
/*
* max number of microframes for split transfer, assume extra-cs budget is 0
* for fs isoc in : 1 ss + 1 idle + 6 cs (roundup(1023/188))
*/
#define TT_MICROFRAMES_MAX 8
/* offset from SS for fs/ls isoc/intr ep (ss + idle) */
#define CS_OFFSET 2
#define DBG_BUF_EN 64
@@ -237,17 +249,26 @@ static void drop_tt(struct usb_device *udev)
static struct mu3h_sch_ep_info *
create_sch_ep(struct xhci_hcd_mtk *mtk, struct usb_device *udev,
struct usb_host_endpoint *ep)
struct usb_host_endpoint *ep, struct xhci_ep_ctx *ep_ctx)
{
struct mu3h_sch_ep_info *sch_ep;
struct mu3h_sch_bw_info *bw_info;
struct mu3h_sch_tt *tt = NULL;
u32 len;
bw_info = get_bw_info(mtk, udev, ep);
if (!bw_info)
return ERR_PTR(-ENODEV);
sch_ep = kzalloc(sizeof(*sch_ep), GFP_KERNEL);
if (is_fs_or_ls(udev->speed))
len = TT_MICROFRAMES_MAX;
else if ((udev->speed >= USB_SPEED_SUPER) &&
usb_endpoint_xfer_isoc(&ep->desc))
len = get_esit(ep_ctx);
else
len = 1;
sch_ep = kzalloc(struct_size(sch_ep, bw_budget_table, len), GFP_KERNEL);
if (!sch_ep)
return ERR_PTR(-ENOMEM);
@@ -279,7 +300,11 @@ static void setup_sch_info(struct xhci_ep_ctx *ep_ctx,
u32 mult;
u32 esit_pkts;
u32 max_esit_payload;
u32 bw_per_microframe;
u32 *bwb_table;
int i;
bwb_table = sch_ep->bw_budget_table;
ep_type = CTX_TO_EP_TYPE(le32_to_cpu(ep_ctx->ep_info2));
maxpkt = MAX_PACKET_DECODED(le32_to_cpu(ep_ctx->ep_info2));
max_burst = CTX_TO_MAX_BURST(le32_to_cpu(ep_ctx->ep_info2));
@@ -313,7 +338,7 @@ static void setup_sch_info(struct xhci_ep_ctx *ep_ctx,
* opportunities per microframe
*/
sch_ep->pkts = max_burst + 1;
sch_ep->bw_cost_per_microframe = maxpkt * sch_ep->pkts;
bwb_table[0] = maxpkt * sch_ep->pkts;
} else if (sch_ep->speed >= USB_SPEED_SUPER) {
/* usb3_r1 spec section4.4.7 & 4.4.8 */
sch_ep->cs_count = 0;
@@ -330,6 +355,7 @@ static void setup_sch_info(struct xhci_ep_ctx *ep_ctx,
if (ep_type == INT_IN_EP || ep_type == INT_OUT_EP) {
sch_ep->pkts = esit_pkts;
sch_ep->num_budget_microframes = 1;
bwb_table[0] = maxpkt * sch_ep->pkts;
}
if (ep_type == ISOC_IN_EP || ep_type == ISOC_OUT_EP) {
@@ -346,18 +372,52 @@ static void setup_sch_info(struct xhci_ep_ctx *ep_ctx,
DIV_ROUND_UP(esit_pkts, sch_ep->pkts);
sch_ep->repeat = !!(sch_ep->num_budget_microframes > 1);
bw_per_microframe = maxpkt * sch_ep->pkts;
for (i = 0; i < sch_ep->num_budget_microframes - 1; i++)
bwb_table[i] = bw_per_microframe;
/* last one <= bw_per_microframe */
bwb_table[i] = maxpkt * esit_pkts - i * bw_per_microframe;
}
sch_ep->bw_cost_per_microframe = maxpkt * sch_ep->pkts;
} else if (is_fs_or_ls(sch_ep->speed)) {
sch_ep->pkts = 1; /* at most one packet for each microframe */
/*
* num_budget_microframes and cs_count will be updated when
* @cs_count will be updated to add extra-cs when
* check TT for INT_OUT_EP, ISOC/INT_IN_EP type
* @maxpkt <= 1023;
*/
sch_ep->cs_count = DIV_ROUND_UP(maxpkt, FS_PAYLOAD_MAX);
sch_ep->num_budget_microframes = sch_ep->cs_count;
sch_ep->bw_cost_per_microframe = min_t(u32, maxpkt, FS_PAYLOAD_MAX);
/* init budget table */
if (ep_type == ISOC_OUT_EP) {
for (i = 0; i < sch_ep->cs_count - 1; i++)
bwb_table[i] = FS_PAYLOAD_MAX;
bwb_table[i] = maxpkt - i * FS_PAYLOAD_MAX;
} else if (ep_type == INT_OUT_EP) {
/* only first one used (maxpkt <= 64), others zero */
bwb_table[0] = maxpkt;
} else { /* INT_IN_EP or ISOC_IN_EP */
bwb_table[0] = 0; /* start split */
bwb_table[1] = 0; /* idle */
/*
 * @cs_count will be updated according to cs position
 * (add 1 or 2 extra-cs), but assume only the first
 * @num_budget_microframes elements will be used later,
 * although in fact they are not (an extra-cs budget may
 * receive some data for an IN ep);
 * @cs_count is 1 for INT_IN_EP (maxpkt <= 64);
 */
for (i = 0; i < sch_ep->cs_count - 1; i++)
bwb_table[i + CS_OFFSET] = FS_PAYLOAD_MAX;
bwb_table[i + CS_OFFSET] = maxpkt - i * FS_PAYLOAD_MAX;
/* ss + idle */
sch_ep->num_budget_microframes += CS_OFFSET;
}
}
}
@@ -374,7 +434,7 @@ static u32 get_max_bw(struct mu3h_sch_bw_info *sch_bw,
for (j = 0; j < sch_ep->num_budget_microframes; j++) {
k = XHCI_MTK_BW_INDEX(base + j);
bw = sch_bw->bus_bw[k] + sch_ep->bw_cost_per_microframe;
bw = sch_bw->bus_bw[k] + sch_ep->bw_budget_table[j];
if (bw > max_bw)
max_bw = bw;
}
@@ -382,56 +442,152 @@ static u32 get_max_bw(struct mu3h_sch_bw_info *sch_bw,
return max_bw;
}
/*
* for OUT: get first SS consumed bw;
* for IN: get first CS consumed bw;
*/
static u16 get_fs_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
u16 fs_bw;
if (sch_ep->ep_type == ISOC_OUT_EP || sch_ep->ep_type == INT_OUT_EP)
fs_bw = tt->fs_bus_bw_out[XHCI_MTK_BW_INDEX(offset)];
else /* skip ss + idle */
fs_bw = tt->fs_bus_bw_in[XHCI_MTK_BW_INDEX(offset + CS_OFFSET)];
return fs_bw;
}
static void update_bus_bw(struct mu3h_sch_bw_info *sch_bw,
struct mu3h_sch_ep_info *sch_ep, bool used)
{
int bw_updated;
u32 base;
int i, j;
bw_updated = sch_ep->bw_cost_per_microframe * (used ? 1 : -1);
for (i = 0; i < sch_ep->num_esit; i++) {
base = sch_ep->offset + i * sch_ep->esit;
for (j = 0; j < sch_ep->num_budget_microframes; j++)
sch_bw->bus_bw[XHCI_MTK_BW_INDEX(base + j)] += bw_updated;
}
}
static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
u32 tmp;
int base;
int i, j, k;
for (i = 0; i < sch_ep->num_esit; i++) {
base = offset + i * sch_ep->esit;
/*
* Compared with hs bus, no matter what ep type,
* the hub will always delay one uframe to send data
*/
base = sch_ep->offset + i * sch_ep->esit;
for (j = 0; j < sch_ep->num_budget_microframes; j++) {
k = XHCI_MTK_BW_INDEX(base + j);
tmp = tt->fs_bus_bw[k] + sch_ep->bw_cost_per_microframe;
if (tmp > FS_PAYLOAD_MAX)
return -ESCH_BW_OVERFLOW;
if (used)
sch_bw->bus_bw[k] += sch_ep->bw_budget_table[j];
else
sch_bw->bus_bw[k] -= sch_ep->bw_budget_table[j];
}
}
}
static int check_ls_budget_microframes(struct mu3h_sch_ep_info *sch_ep, int offset)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
int i;
if (sch_ep->speed != USB_SPEED_LOW)
return 0;
if (sch_ep->ep_type == INT_OUT_EP)
i = XHCI_MTK_BW_INDEX(offset);
else if (sch_ep->ep_type == INT_IN_EP)
i = XHCI_MTK_BW_INDEX(offset + CS_OFFSET); /* skip ss + idle */
else
return -EINVAL;
if (tt->ls_bus_bw[i] + sch_ep->maxpkt > LS_PAYLOAD_MAX)
return -ESCH_BW_OVERFLOW;
return 0;
}
static int check_fs_budget_microframes(struct mu3h_sch_ep_info *sch_ep, int offset)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
u32 tmp;
int i, k;
/*
* for OUT eps, will transfer exactly assigned length of data,
* so can't allocate more than 188 bytes;
* but it's not for IN eps, usually it can't receive full
* 188 bytes in a uframe, if it not assign full 188 bytes,
* can add another one;
*/
for (i = 0; i < sch_ep->num_budget_microframes; i++) {
k = XHCI_MTK_BW_INDEX(offset + i);
if (sch_ep->ep_type == ISOC_OUT_EP || sch_ep->ep_type == INT_OUT_EP)
tmp = tt->fs_bus_bw_out[k] + sch_ep->bw_budget_table[i];
else /* ep_type : ISOC IN / INTR IN */
tmp = tt->fs_bus_bw_in[k];
if (tmp > FS_PAYLOAD_MAX)
return -ESCH_BW_OVERFLOW;
}
return 0;
}
static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
static int check_fs_budget_frames(struct mu3h_sch_ep_info *sch_ep, int offset)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
u32 head, tail;
int i, j, k;
/* the scheduled budget may cross at most two fs frames */
j = XHCI_MTK_BW_INDEX(offset) / UFRAMES_PER_FRAME;
k = XHCI_MTK_BW_INDEX(offset + sch_ep->num_budget_microframes - 1) / UFRAMES_PER_FRAME;
if (j != k) {
head = tt->fs_frame_bw[j];
tail = tt->fs_frame_bw[k];
} else {
head = tt->fs_frame_bw[j];
tail = 0;
}
j = roundup(offset, UFRAMES_PER_FRAME);
for (i = 0; i < sch_ep->num_budget_microframes; i++) {
if ((offset + i) < j)
head += sch_ep->bw_budget_table[i];
else
tail += sch_ep->bw_budget_table[i];
}
if (head > FS_BW_BOUNDARY || tail > FS_BW_BOUNDARY)
return -ESCH_BW_OVERFLOW;
return 0;
}
static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
{
int i, base;
int ret = 0;
for (i = 0; i < sch_ep->num_esit; i++) {
base = offset + i * sch_ep->esit;
ret = check_ls_budget_microframes(sch_ep, base);
if (ret)
goto err;
ret = check_fs_budget_microframes(sch_ep, base);
if (ret)
goto err;
ret = check_fs_budget_frames(sch_ep, base);
if (ret)
goto err;
}
err:
return ret;
}
static int check_ss_and_cs(struct mu3h_sch_ep_info *sch_ep, u32 offset)
{
u32 start_ss, last_ss;
u32 start_cs, last_cs;
if (!sch_ep->sch_tt)
return 0;
-start_ss = offset % 8;
+start_ss = offset % UFRAMES_PER_FRAME;
if (sch_ep->ep_type == ISOC_OUT_EP) {
last_ss = start_ss + sch_ep->cs_count - 1;
@@ -444,6 +600,7 @@ static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
return -ESCH_SS_Y6;
} else {
/* maxpkt <= 1023, cs <= 6 */
u32 cs_count = DIV_ROUND_UP(sch_ep->maxpkt, FS_PAYLOAD_MAX);
/*
@@ -454,44 +611,164 @@ static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
return -ESCH_SS_Y6;
/* one uframe for ss + one uframe for idle */
-start_cs = (start_ss + 2) % 8;
+start_cs = (start_ss + CS_OFFSET) % UFRAMES_PER_FRAME;
last_cs = start_cs + cs_count - 1;
if (last_cs > 7)
return -ESCH_CS_OVERFLOW;
/* add extra-cs */
cs_count += (last_cs == 7) ? 1 : 2;
if (cs_count > 7)
cs_count = 7; /* HW limit */
sch_ep->cs_count = cs_count;
/* ss, idle are ignored */
sch_ep->num_budget_microframes = cs_count;
/*
* if interval == 1 and maxp > 752, num_budget_microframes is
* larger than sch_ep->esit and would overstep the boundary
*/
if (sch_ep->num_budget_microframes > sch_ep->esit)
sch_ep->num_budget_microframes = sch_ep->esit;
}
return 0;
}
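The complete-split budgeting for IN endpoints above can be sketched as a small standalone helper (illustrative only; `cs_budget` and the plain -1 return are stand-ins for the driver's -ESCH_CS_OVERFLOW path): given the start-split uframe and maxpkt, it derives the number of CS microframes, adds the extra complete-splits, and clamps to the hardware limit.

```c
#include <assert.h>

#define UFRAMES_PER_FRAME 8
#define FS_PAYLOAD_MAX 188
#define CS_OFFSET 2

/*
 * Hypothetical condensation of the IN-endpoint branch above.
 * Returns the budgeted cs_count, or -1 when the complete-splits
 * would run past uframe 7 of the frame.
 */
static int cs_budget(int start_ss, int maxpkt)
{
	/* maxpkt <= 1023, so at most 6 complete-splits carry the data */
	int cs_count = (maxpkt + FS_PAYLOAD_MAX - 1) / FS_PAYLOAD_MAX;
	/* one uframe for ss + one uframe for idle */
	int start_cs = (start_ss + CS_OFFSET) % UFRAMES_PER_FRAME;
	int last_cs = start_cs + cs_count - 1;

	if (last_cs > 7)
		return -1;	/* would be -ESCH_CS_OVERFLOW in the driver */

	/* add extra-cs: one if the last cs already sits in uframe 7, else two */
	cs_count += (last_cs == 7) ? 1 : 2;
	if (cs_count > 7)
		cs_count = 7;	/* HW limit */
	return cs_count;
}
```

For example, a 188-byte endpoint starting its SS in uframe 0 needs one data CS plus two extra, while a 1023-byte endpoint needs six data CS ending exactly in uframe 7, so only one extra fits.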
/*
* when isoc-out transfers 188 bytes in a uframe and an isoc/intr ss
* token is sent in the same uframe, it may cause a 'bit stuff error'
* on the downstream port;
* when isoc-out transfers less than 188 bytes in a uframe, isoc-in's
* ss shall be sent after isoc-out's ss (but hw can't ensure the
* sequence, so just avoid the overlap).
*/
static int check_isoc_ss_overlap(struct mu3h_sch_ep_info *sch_ep, u32 offset)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
int base;
int i, j, k;
if (!tt)
return 0;
for (i = 0; i < sch_ep->num_esit; i++) {
base = offset + i * sch_ep->esit;
if (sch_ep->ep_type == ISOC_OUT_EP) {
for (j = 0; j < sch_ep->num_budget_microframes; j++) {
k = XHCI_MTK_BW_INDEX(base + j + CS_OFFSET);
/* use cs to indicate existence of in-ss @(base+j) */
if (tt->fs_bus_bw_in[k])
return -ESCH_SS_OVERLAP;
}
} else if (sch_ep->ep_type == ISOC_IN_EP || sch_ep->ep_type == INT_IN_EP) {
k = XHCI_MTK_BW_INDEX(base);
/* only check IN's ss */
if (tt->fs_bus_bw_out[k])
return -ESCH_SS_OVERLAP;
}
}
return 0;
}
static int check_sch_tt_budget(struct mu3h_sch_ep_info *sch_ep, u32 offset)
{
int ret;
ret = check_ss_and_cs(sch_ep, offset);
if (ret)
return ret;
ret = check_isoc_ss_overlap(sch_ep, offset);
if (ret)
return ret;
return check_fs_bus_bw(sch_ep, offset);
}
/* allocate microframes in the ls/fs frame */
static int alloc_sch_portion_of_frame(struct mu3h_sch_ep_info *sch_ep)
{
struct mu3h_sch_bw_info *sch_bw = sch_ep->bw_info;
const u32 bw_boundary = get_bw_boundary(sch_ep->speed);
u32 bw_max, fs_bw_min;
u32 offset, offset_min;
u16 fs_bw;
int frames;
int i, j;
int ret;
frames = sch_ep->esit / UFRAMES_PER_FRAME;
for (i = 0; i < UFRAMES_PER_FRAME; i++) {
fs_bw_min = FS_PAYLOAD_MAX;
offset_min = XHCI_MTK_MAX_ESIT;
for (j = 0; j < frames; j++) {
offset = (i + j * UFRAMES_PER_FRAME) % sch_ep->esit;
ret = check_sch_tt_budget(sch_ep, offset);
if (ret)
continue;
/* check hs bw domain */
bw_max = get_max_bw(sch_bw, sch_ep, offset);
if (bw_max > bw_boundary) {
ret = -ESCH_BW_OVERFLOW;
continue;
}
/* use best-fit between frames */
fs_bw = get_fs_bw(sch_ep, offset);
if (fs_bw < fs_bw_min) {
fs_bw_min = fs_bw;
offset_min = offset;
}
if (!fs_bw_min)
break;
}
/* use first-fit between microframes in a frame */
if (offset_min < XHCI_MTK_MAX_ESIT)
break;
}
if (offset_min == XHCI_MTK_MAX_ESIT)
return -ESCH_BW_OVERFLOW;
sch_ep->offset = offset_min;
return 0;
}
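The search strategy above — first-fit across the uframe slots of a frame, best-fit across the frames of the service interval — probes candidate offsets in a specific order, which a standalone sketch (hypothetical `probe_order` helper, not driver code) makes explicit:

```c
#include <assert.h>

#define UFRAMES_PER_FRAME 8

/*
 * Record, in probe order, the candidate offsets that the allocation
 * loop above would try for a given esit: every frame of the interval
 * is probed at uframe slot i before slot i + 1 is considered.
 * Returns the number of offsets written to out[].
 */
static int probe_order(int esit, int out[])
{
	int frames = esit / UFRAMES_PER_FRAME;
	int i, j, n = 0;

	for (i = 0; i < UFRAMES_PER_FRAME; i++)
		for (j = 0; j < frames; j++)
			out[n++] = (i + j * UFRAMES_PER_FRAME) % esit;
	return n;
}
```

With esit = 16 (two FS frames) the probe order is 0, 8, 1, 9, 2, 10, …: both frames are compared at each uframe slot (best-fit between frames), and the loop stops at the first slot that yields a feasible offset (first-fit between microframes).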
static void update_sch_tt(struct mu3h_sch_ep_info *sch_ep, bool used)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
-int bw_updated;
+u16 *fs_bus_bw;
u32 base;
-int i, j;
+int i, j, k, f;
-bw_updated = sch_ep->bw_cost_per_microframe * (used ? 1 : -1);
+if (sch_ep->ep_type == ISOC_OUT_EP || sch_ep->ep_type == INT_OUT_EP)
+fs_bus_bw = tt->fs_bus_bw_out;
+else
+fs_bus_bw = tt->fs_bus_bw_in;
for (i = 0; i < sch_ep->num_esit; i++) {
base = sch_ep->offset + i * sch_ep->esit;
-for (j = 0; j < sch_ep->num_budget_microframes; j++)
-tt->fs_bus_bw[XHCI_MTK_BW_INDEX(base + j)] += bw_updated;
+for (j = 0; j < sch_ep->num_budget_microframes; j++) {
+k = XHCI_MTK_BW_INDEX(base + j);
+f = k / UFRAMES_PER_FRAME;
+if (used) {
+if (sch_ep->speed == USB_SPEED_LOW)
+tt->ls_bus_bw[k] += (u8)sch_ep->bw_budget_table[j];
+fs_bus_bw[k] += (u16)sch_ep->bw_budget_table[j];
+tt->fs_frame_bw[f] += (u16)sch_ep->bw_budget_table[j];
+} else {
+if (sch_ep->speed == USB_SPEED_LOW)
+tt->ls_bus_bw[k] -= (u8)sch_ep->bw_budget_table[j];
+fs_bus_bw[k] -= (u16)sch_ep->bw_budget_table[j];
+tt->fs_frame_bw[f] -= (u16)sch_ep->bw_budget_table[j];
+}
+}
}
if (used)
@@ -513,7 +790,8 @@ static int load_ep_bw(struct mu3h_sch_bw_info *sch_bw,
return 0;
}
-static int check_sch_bw(struct mu3h_sch_ep_info *sch_ep)
+/* allocate microframes for hs/ss/ssp */
+static int alloc_sch_microframes(struct mu3h_sch_ep_info *sch_ep)
{
struct mu3h_sch_bw_info *sch_bw = sch_ep->bw_info;
const u32 bw_boundary = get_bw_boundary(sch_ep->speed);
@@ -521,16 +799,12 @@ static int check_sch_bw(struct mu3h_sch_ep_info *sch_ep)
u32 worst_bw;
u32 min_bw = ~0;
int min_index = -1;
-int ret = 0;
/*
* Search through all possible schedule microframes,
* and find a microframe where its worst bandwidth is minimum.
*/
for (offset = 0; offset < sch_ep->esit; offset++) {
-ret = check_sch_tt(sch_ep, offset);
-if (ret)
-continue;
worst_bw = get_max_bw(sch_bw, sch_ep, offset);
if (worst_bw > bw_boundary)
@@ -540,21 +814,29 @@ static int check_sch_bw(struct mu3h_sch_ep_info *sch_ep)
min_bw = worst_bw;
min_index = offset;
}
-/* use first-fit for LS/FS */
-if (sch_ep->sch_tt && min_index >= 0)
-break;
if (min_bw == 0)
break;
}
if (min_index < 0)
-return ret ? ret : -ESCH_BW_OVERFLOW;
+return -ESCH_BW_OVERFLOW;
sch_ep->offset = min_index;
-return load_ep_bw(sch_bw, sch_ep, true);
+return 0;
}
+static int check_sch_bw(struct mu3h_sch_ep_info *sch_ep)
+{
+int ret;
+if (sch_ep->sch_tt)
+ret = alloc_sch_portion_of_frame(sch_ep);
+else
+ret = alloc_sch_microframes(sch_ep);
+if (ret)
+return ret;
+return load_ep_bw(sch_ep->bw_info, sch_ep, true);
+}
static void destroy_sch_ep(struct xhci_hcd_mtk *mtk, struct usb_device *udev,
@@ -651,7 +933,7 @@ static int add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
xhci_dbg(xhci, "%s %s\n", __func__, decode_ep(ep, udev->speed));
-sch_ep = create_sch_ep(mtk, udev, ep);
+sch_ep = create_sch_ep(mtk, udev, ep, ep_ctx);
if (IS_ERR_OR_NULL(sch_ep))
return -ENOMEM;


@@ -30,12 +30,21 @@
#define XHCI_MTK_MAX_ESIT (1 << 6)
#define XHCI_MTK_BW_INDEX(x) ((x) & (XHCI_MTK_MAX_ESIT - 1))
#define UFRAMES_PER_FRAME 8
#define XHCI_MTK_FRAMES_CNT (XHCI_MTK_MAX_ESIT / UFRAMES_PER_FRAME)
/**
-* @fs_bus_bw: array to keep track of bandwidth already used for FS
+* @fs_bus_bw_out: save bandwidth used by FS/LS OUT eps in each uframes
+* @fs_bus_bw_in: save bandwidth used by FS/LS IN eps in each uframes
+* @ls_bus_bw: save bandwidth used by LS eps in each uframes
+* @fs_frame_bw: save bandwidth used by FS/LS eps in each FS frames
* @ep_list: Endpoints using this TT
*/
struct mu3h_sch_tt {
-u32 fs_bus_bw[XHCI_MTK_MAX_ESIT];
+u16 fs_bus_bw_out[XHCI_MTK_MAX_ESIT];
+u16 fs_bus_bw_in[XHCI_MTK_MAX_ESIT];
+u8 ls_bus_bw[XHCI_MTK_MAX_ESIT];
+u16 fs_frame_bw[XHCI_MTK_FRAMES_CNT];
struct list_head ep_list;
};
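A quick standalone illustration of how these tables are indexed (mirroring the two defines above; `bw_index` and `frame_index` are illustrative helpers, not driver code): a free-running uframe counter is masked into the 64-entry per-uframe tables, and dividing that index by UFRAMES_PER_FRAME selects the per-frame slot in fs_frame_bw[].

```c
#include <assert.h>

/* mirrors the defines in the header above */
#define XHCI_MTK_MAX_ESIT (1 << 6)
#define XHCI_MTK_BW_INDEX(x) ((x) & (XHCI_MTK_MAX_ESIT - 1))
#define UFRAMES_PER_FRAME 8
#define XHCI_MTK_FRAMES_CNT (XHCI_MTK_MAX_ESIT / UFRAMES_PER_FRAME)

/* uframe counter -> index into the per-uframe bandwidth tables */
static int bw_index(int uframe)
{
	return XHCI_MTK_BW_INDEX(uframe);
}

/* uframe counter -> index into fs_frame_bw[] (one slot per FS frame) */
static int frame_index(int uframe)
{
	return bw_index(uframe) / UFRAMES_PER_FRAME;
}
```

Uframe 70 wraps to per-uframe slot 6, which lies in FS frame 0 of the 8-frame window; slot 63 is the last uframe of frame 7.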
@@ -58,7 +67,6 @@ struct mu3h_sch_bw_info {
* @num_esit: number of @esit in a period
* @num_budget_microframes: number of continuous uframes
* (@repeat==1) scheduled within the interval
* @bw_cost_per_microframe: bandwidth cost per microframe
* @hentry: hash table entry
* @endpoint: linked into bandwidth domain which it belongs to
* @tt_endpoint: linked into mu3h_sch_tt's list which it belongs to
@@ -83,12 +91,12 @@ struct mu3h_sch_bw_info {
* times; 1: distribute the (bMaxBurst+1)*(Mult+1) packets
* according to @pkts and @repeat. normal mode is used by
* default
* @bw_budget_table: table to record bandwidth budget per microframe
*/
struct mu3h_sch_ep_info {
u32 esit;
u32 num_esit;
u32 num_budget_microframes;
u32 bw_cost_per_microframe;
struct list_head endpoint;
struct hlist_node hentry;
struct list_head tt_endpoint;
@@ -108,6 +116,7 @@ struct mu3h_sch_ep_info {
u32 pkts;
u32 cs_count;
u32 burst_mode;
u32 bw_budget_table[];
};
#define MU3C_U3_PORT_MAX 4


@@ -535,6 +535,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
/* xHC spec requires PCI devices to support D3hot and D3cold */
if (xhci->hci_version >= 0x120)
xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
else if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version >= 0x110)
xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
if (xhci->quirks & XHCI_RESET_ON_RESUME)
xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
@@ -693,7 +695,9 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
/* USB-2 and USB-3 roothubs initialized, allow runtime pm suspend */
pm_runtime_put_noidle(&dev->dev);
-if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
+if (pci_choose_state(dev, PMSG_SUSPEND) == PCI_D0)
+pm_runtime_forbid(&dev->dev);
+else if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
pm_runtime_allow(&dev->dev);
dma_set_max_seg_size(&dev->dev, UINT_MAX);


@@ -458,23 +458,38 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
int ret;
if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
-clk_prepare_enable(xhci->clk);
-clk_prepare_enable(xhci->reg_clk);
+ret = clk_prepare_enable(xhci->clk);
+if (ret)
+return ret;
+ret = clk_prepare_enable(xhci->reg_clk);
+if (ret) {
+clk_disable_unprepare(xhci->clk);
+return ret;
+}
}
ret = xhci_priv_resume_quirk(hcd);
if (ret)
-return ret;
+goto disable_clks;
ret = xhci_resume(xhci, PMSG_RESUME);
if (ret)
-return ret;
+goto disable_clks;
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
return 0;
disable_clks:
if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
clk_disable_unprepare(xhci->clk);
clk_disable_unprepare(xhci->reg_clk);
}
return ret;
}
static int __maybe_unused xhci_plat_runtime_suspend(struct device *dev)


@@ -144,7 +144,7 @@ static void next_trb(struct xhci_hcd *xhci,
struct xhci_segment **seg,
union xhci_trb **trb)
{
-if (trb_is_link(*trb)) {
+if (trb_is_link(*trb) || last_trb_on_seg(*seg, *trb)) {
*seg = (*seg)->next;
*trb = ((*seg)->trbs);
} else {
@@ -450,8 +450,9 @@ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
* In the future we should distinguish between -ENODEV and -ETIMEDOUT
* and try to recover a -ETIMEDOUT with a host controller reset.
*/
-ret = xhci_handshake(&xhci->op_regs->cmd_ring,
-CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
+ret = xhci_handshake_check_state(xhci, &xhci->op_regs->cmd_ring,
+CMD_RING_RUNNING, 0, 5 * 1000 * 1000,
+XHCI_STATE_REMOVING);
if (ret < 0) {
xhci_err(xhci, "Abort failed to stop command ring: %d\n", ret);
xhci_halt(xhci);
@@ -1879,7 +1880,6 @@ static void handle_port_status(struct xhci_hcd *xhci,
if ((port_id <= 0) || (port_id > max_ports)) {
xhci_warn(xhci, "Port change event with invalid port ID %d\n",
port_id);
-inc_deq(xhci, ir->event_ring);
return;
}
@@ -1906,7 +1906,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
xhci_dbg(xhci, "Port change event, %d-%d, id %d, portsc: 0x%x\n",
hcd->self.busnum, hcd_portnum + 1, port_id, portsc);
-trace_xhci_handle_port_status(hcd_portnum, portsc);
+trace_xhci_handle_port_status(port, portsc);
if (hcd->state == HC_STATE_SUSPENDED) {
xhci_dbg(xhci, "resume root hub\n");
@@ -2007,8 +2007,6 @@ static void handle_port_status(struct xhci_hcd *xhci,
}
cleanup:
-/* Update event ring dequeue pointer before dropping the lock */
-inc_deq(xhci, ir->event_ring);
/* Don't make the USB core poll the roothub if we got a bad port status
* change event. Besides, at that point we can't tell which roothub
@@ -2884,13 +2882,6 @@ cleanup:
trb_comp_code != COMP_MISSED_SERVICE_ERROR &&
trb_comp_code != COMP_NO_PING_RESPONSE_ERROR;
-/*
-* Do not update event ring dequeue pointer if we're in a loop
-* processing missed tds.
-*/
-if (!handling_skipped_tds)
-inc_deq(xhci, ir->event_ring);
/*
* If ep->skip is set, it means there are missed tds on the
* endpoint ring need to take care of.
@@ -2922,9 +2913,7 @@ err_out:
static int xhci_handle_event(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
{
union xhci_trb *event;
-int update_ptrs = 1;
u32 trb_type;
-int ret;
/* Event ring hasn't been allocated yet. */
if (!ir || !ir->event_ring || !ir->event_ring->dequeue) {
@@ -2954,12 +2943,9 @@ static int xhci_handle_event(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
break;
case TRB_PORT_STATUS:
handle_port_status(xhci, ir, event);
-update_ptrs = 0;
break;
case TRB_TRANSFER:
-ret = handle_tx_event(xhci, ir, &event->trans_event);
-if (ret >= 0)
-update_ptrs = 0;
+handle_tx_event(xhci, ir, &event->trans_event);
break;
case TRB_DEV_NOTE:
handle_device_notification(xhci, event);
@@ -2979,9 +2965,8 @@ static int xhci_handle_event(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
return 0;
}
-if (update_ptrs)
-/* Update SW event ring dequeue pointer */
-inc_deq(xhci, ir->event_ring);
+/* Update SW event ring dequeue pointer */
+inc_deq(xhci, ir->event_ring);
/* Are there more items on the event ring? Caller will call us again to
* check.
@@ -3013,13 +2998,12 @@ static void xhci_update_erst_dequeue(struct xhci_hcd *xhci,
* Per 4.9.4, Software writes to the ERDP register shall
* always advance the Event Ring Dequeue Pointer value.
*/
-if ((temp_64 & (u64) ~ERST_PTR_MASK) ==
-((u64) deq & (u64) ~ERST_PTR_MASK))
+if ((temp_64 & ERST_PTR_MASK) == (deq & ERST_PTR_MASK))
return;
/* Update HC event ring dequeue pointer */
-temp_64 &= ERST_DESI_MASK;
-temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK);
+temp_64 = ir->event_ring->deq_seg->num & ERST_DESI_MASK;
+temp_64 |= deq & ERST_PTR_MASK;
}
/* Clear the event handler busy flag (RW1C) */


@@ -509,35 +509,38 @@ DEFINE_EVENT(xhci_log_ring, xhci_inc_deq,
);
DECLARE_EVENT_CLASS(xhci_log_portsc,
-TP_PROTO(u32 portnum, u32 portsc),
-TP_ARGS(portnum, portsc),
+TP_PROTO(struct xhci_port *port, u32 portsc),
+TP_ARGS(port, portsc),
TP_STRUCT__entry(
__field(u32, busnum)
__field(u32, portnum)
__field(u32, portsc)
),
TP_fast_assign(
-__entry->portnum = portnum;
+__entry->busnum = port->rhub->hcd->self.busnum;
+__entry->portnum = port->hcd_portnum;
__entry->portsc = portsc;
),
-TP_printk("port-%d: %s",
+TP_printk("port %d-%d: %s",
+__entry->busnum,
__entry->portnum,
xhci_decode_portsc(__get_buf(XHCI_MSG_MAX), __entry->portsc)
)
);
DEFINE_EVENT(xhci_log_portsc, xhci_handle_port_status,
-TP_PROTO(u32 portnum, u32 portsc),
-TP_ARGS(portnum, portsc)
+TP_PROTO(struct xhci_port *port, u32 portsc),
+TP_ARGS(port, portsc)
);
DEFINE_EVENT(xhci_log_portsc, xhci_get_port_status,
-TP_PROTO(u32 portnum, u32 portsc),
-TP_ARGS(portnum, portsc)
+TP_PROTO(struct xhci_port *port, u32 portsc),
+TP_ARGS(port, portsc)
);
DEFINE_EVENT(xhci_log_portsc, xhci_hub_status_data,
-TP_PROTO(u32 portnum, u32 portsc),
-TP_ARGS(portnum, portsc)
+TP_PROTO(struct xhci_port *port, u32 portsc),
+TP_ARGS(port, portsc)
);
DECLARE_EVENT_CLASS(xhci_log_doorbell,


@@ -81,6 +81,29 @@ int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us)
return ret;
}
/*
* xhci_handshake_check_state - same as xhci_handshake but takes an additional
* exit_state parameter, and bails out with an error immediately when xhc_state
* has exit_state flag set.
*/
int xhci_handshake_check_state(struct xhci_hcd *xhci, void __iomem *ptr,
u32 mask, u32 done, int usec, unsigned int exit_state)
{
u32 result;
int ret;
ret = readl_poll_timeout_atomic(ptr, result,
(result & mask) == done ||
result == U32_MAX ||
xhci->xhc_state & exit_state,
1, usec);
if (result == U32_MAX || xhci->xhc_state & exit_state)
return -ENODEV;
return ret;
}
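The bail-out behaviour described in the comment above can be modelled in plain C (a userspace sketch under simulated state, not the kernel implementation — readl_poll_timeout_atomic and the real error codes are replaced by a bounded loop and illustrative constants): poll until (value & mask) == done, but give up immediately when the exit_state flag is set or the register reads all-ones, as it does once the device is gone.

```c
#include <assert.h>

#define SIM_ENODEV 19		/* stand-in for -ENODEV */
#define SIM_ETIMEDOUT 110	/* stand-in for -ETIMEDOUT */

/*
 * Userspace analogue of xhci_handshake_check_state(): *reg simulates the
 * MMIO register, *xhc_state the host state word, attempts the timeout.
 */
static int handshake_check_state(const unsigned int *reg, unsigned int mask,
				 unsigned int done, int attempts,
				 const unsigned int *xhc_state,
				 unsigned int exit_state)
{
	while (attempts--) {
		unsigned int val = *reg;

		/* bail out at once: device removed or state flag raised */
		if (val == 0xffffffffu || (*xhc_state & exit_state))
			return -SIM_ENODEV;
		if ((val & mask) == done)
			return 0;
	}
	return -SIM_ETIMEDOUT;
}
```

This is why xhci_reset() and xhci_abort_cmd_ring() in this series switch to the _check_state variant with XHCI_STATE_REMOVING: a surprise-removed controller no longer pins the caller for the full timeout.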
/*
* Disable interrupts and begin the xHCI halting process.
*/
@@ -201,7 +224,8 @@ int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
if (xhci->quirks & XHCI_INTEL_HOST)
udelay(1000);
-ret = xhci_handshake(&xhci->op_regs->command, CMD_RESET, 0, timeout_us);
+ret = xhci_handshake_check_state(xhci, &xhci->op_regs->command,
+CMD_RESET, 0, timeout_us, XHCI_STATE_REMOVING);
if (ret)
return ret;
@@ -520,7 +544,7 @@ int xhci_run(struct usb_hcd *hcd)
xhci_dbg_trace(xhci, trace_xhci_dbg_init, "xhci_run");
temp_64 = xhci_read_64(xhci, &ir->ir_set->erst_dequeue);
-temp_64 &= ~ERST_PTR_MASK;
+temp_64 &= ERST_PTR_MASK;
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"ERST deq = 64'h%0lx", (long unsigned int) temp_64);
@@ -968,6 +992,7 @@ int xhci_resume(struct xhci_hcd *xhci, pm_message_t msg)
int retval = 0;
bool comp_timer_running = false;
bool pending_portevent = false;
bool suspended_usb3_devs = false;
bool reinit_xhc = false;
if (!hcd->state)
@@ -1115,10 +1140,17 @@ int xhci_resume(struct xhci_hcd *xhci, pm_message_t msg)
/*
* Resume roothubs only if there are pending events.
* USB 3 devices resend U3 LFPS wake after a 100ms delay if
-* the first wake signalling failed, give it that chance.
+* the first wake signalling failed, give it that chance if
+* there are suspended USB 3 devices.
*/
if (xhci->usb3_rhub.bus_state.suspended_ports ||
xhci->usb3_rhub.bus_state.bus_suspended)
suspended_usb3_devs = true;
pending_portevent = xhci_pending_portevent(xhci);
-if (!pending_portevent && msg.event == PM_EVENT_AUTO_RESUME) {
+if (suspended_usb3_devs && !pending_portevent &&
+msg.event == PM_EVENT_AUTO_RESUME) {
msleep(120);
pending_portevent = xhci_pending_portevent(xhci);
}


@@ -525,7 +525,7 @@ struct xhci_intr_reg {
* a work queue (or delayed service routine)?
*/
#define ERST_EHB (1 << 3)
-#define ERST_PTR_MASK (0xf)
+#define ERST_PTR_MASK (GENMASK_ULL(63, 4))
/**
* struct xhci_run_regs
@@ -558,33 +558,6 @@ struct xhci_doorbell_array {
#define DB_VALUE(ep, stream) ((((ep) + 1) & 0xff) | ((stream) << 16))
#define DB_VALUE_HOST 0x00000000
-/**
-* struct xhci_protocol_caps
-* @revision: major revision, minor revision, capability ID,
-* and next capability pointer.
-* @name_string: Four ASCII characters to say which spec this xHC
-* follows, typically "USB ".
-* @port_info: Port offset, count, and protocol-defined information.
-*/
-struct xhci_protocol_caps {
-u32 revision;
-u32 name_string;
-u32 port_info;
-};
-#define XHCI_EXT_PORT_MAJOR(x) (((x) >> 24) & 0xff)
-#define XHCI_EXT_PORT_MINOR(x) (((x) >> 16) & 0xff)
-#define XHCI_EXT_PORT_PSIC(x) (((x) >> 28) & 0x0f)
-#define XHCI_EXT_PORT_OFF(x) ((x) & 0xff)
-#define XHCI_EXT_PORT_COUNT(x) (((x) >> 8) & 0xff)
-#define XHCI_EXT_PORT_PSIV(x) (((x) >> 0) & 0x0f)
-#define XHCI_EXT_PORT_PSIE(x) (((x) >> 4) & 0x03)
-#define XHCI_EXT_PORT_PLT(x) (((x) >> 6) & 0x03)
-#define XHCI_EXT_PORT_PFD(x) (((x) >> 8) & 0x01)
-#define XHCI_EXT_PORT_LP(x) (((x) >> 14) & 0x03)
-#define XHCI_EXT_PORT_PSIM(x) (((x) >> 16) & 0xffff)
#define PLT_MASK (0x03 << 6)
#define PLT_SYM (0x00 << 6)
#define PLT_ASYM_RX (0x02 << 6)
@@ -1545,6 +1518,7 @@ struct xhci_segment {
union xhci_trb *trbs;
/* private to HCD */
struct xhci_segment *next;
unsigned int num;
dma_addr_t dma;
/* Max packet sized bounce buffer for td-fragmant alignment */
dma_addr_t bounce_dma;
@@ -1669,12 +1643,8 @@ struct urb_priv {
struct xhci_td td[] __counted_by(num_tds);
};
-/*
-* Each segment table entry is 4*32bits long. 1K seems like an ok size:
-* (1K bytes * 8bytes/bit) / (4*32 bits) = 64 segment entries in the table,
-* meaning 64 ring segments.
-*/
+/* Initial allocated size of the ERST, in number of entries */
#define ERST_NUM_SEGS 1
/* Reasonable limit for number of Event Ring segments (spec allows 32k) */
#define ERST_MAX_SEGS 2
/* Poll every 60 seconds */
#define POLL_TIMEOUT 60
/* Stop endpoint command timeout (secs) for URB cancellation watchdog timer */
@@ -2078,13 +2048,8 @@ struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci,
void xhci_ring_free(struct xhci_hcd *xhci, struct xhci_ring *ring);
int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring,
unsigned int num_trbs, gfp_t flags);
-int xhci_alloc_erst(struct xhci_hcd *xhci,
-struct xhci_ring *evt_ring,
-struct xhci_erst *erst,
-gfp_t flags);
void xhci_initialize_ring_info(struct xhci_ring *ring,
unsigned int cycle_state);
-void xhci_free_erst(struct xhci_hcd *xhci, struct xhci_erst *erst);
void xhci_free_endpoint_ring(struct xhci_hcd *xhci,
struct xhci_virt_device *virt_dev,
unsigned int ep_index);
@@ -2119,6 +2084,8 @@ void xhci_free_container_ctx(struct xhci_hcd *xhci,
/* xHCI host controller glue */
typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);
int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us);
int xhci_handshake_check_state(struct xhci_hcd *xhci, void __iomem *ptr,
u32 mask, u32 done, int usec, unsigned int exit_state);
void xhci_quiesce(struct xhci_hcd *xhci);
int xhci_halt(struct xhci_hcd *xhci);
int xhci_start(struct xhci_hcd *xhci);


@@ -165,6 +165,19 @@ config APPLE_MFI_FASTCHARGE
It is safe to say M here.
config USB_LJCA
tristate "Intel La Jolla Cove Adapter support"
select AUXILIARY_BUS
depends on USB && ACPI
help
This adds support for Intel La Jolla Cove USB-I2C/SPI/GPIO
Master Adapter (LJCA). Additional drivers such as I2C_LJCA,
GPIO_LJCA and SPI_LJCA must be enabled in order to use the
functionality of the device.
This driver can also be built as a module. If so, the module
will be called usb-ljca.
source "drivers/usb/misc/sisusbvga/Kconfig"
config USB_LD
