pci-v6.10-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmZLzNIUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vwr/Q//STe2XGKI8bAKqP2wbbkzm+ISnK4A
 Lqf3FEAIXunxDRspszfXKKV2p4vaIkmOFiwIdtp/kWvd0DQn5+ATXJ/iQtp8aFX/
 R+6BQ7EZc2G7fN5fbQuK54+CvmWEpkKEMbXYbd6ivQ14Cijdb3Nbu+w+DYFjS+6C
 k2a9lS1bTW7Xcy0fyiO1w6GQiWqtmOH8U3OlQtIrI0EVkDG9OG1LsLuc92/FgkOo
 REN+sU+hX1K5fHrvm2CtjYDn/9/B6bJ/It22H1dPgUL9nKvKC67fYzosMtUCOX1M
 6XSPjZIuXOmQGeZXHhpSlVwaidxoUjYO98I7nMquxKdCy6yct3geK7ULG/xeQCgD
 ML7MGQB4+sTiSWalXUQaziKqF1FIDEvU3HMGXFWnoBL5l56eRp8KS1EI9Eqk9pU3
 pk9fJaCkcFnkzPtMFzqPOm5q9zUZ6bGbfYb0hs72TUKplmVDhFo2T1YsW2AOyHZ7
 mjuDzUYZX0H7uM1tntA56IgZX+oNOrLvhBt5L5M/BQeCsZFBBUfIcAEaYoL9LwXO
 AYgIG3jdqzHHyAUzutJF+XHKinJLMHm0XVYbFmO6saPhFzrUJSNHqT7NzW1DGGTl
 OnO8e1WNMX1EcnKvnc6fXyGmM3SgVwy45FsbG/zRnhn4uBKqKtjrh6uX/myA22LK
 CSeqSUK9XmXxFNA=
 =xjoS
 -----END PGP SIGNATURE-----

Merge tag 'pci-v6.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Skip E820 checks for MCFG ECAM regions for new (2016+) machines,
     since there's no requirement to describe them in E820 and some
     platforms require ECAM to work (Bjorn Helgaas)

   - Rename PCI_IRQ_LEGACY to PCI_IRQ_INTX to be more specific (Damien
     Le Moal)

   - Remove the last user of pci_enable_device_io(), then remove the
     function itself (Heiner Kallweit)

   - Wait for Link Training==0 to avoid possible race (Ilpo Järvinen)

   - Skip waiting for devices that have been disconnected while
     suspended (Ilpo Järvinen)

   - Clear Secondary Status errors after enumeration since Master Aborts
     and Unsupported Request errors are an expected part of enumeration
     (Vidya Sagar)

  MSI:

   - Remove unused IMS (Interrupt Message Store) support (Bjorn Helgaas)

  Error handling:

   - Mask Genesys GL975x SD host controller Replay Timer Timeout
     correctable errors caused by a hardware defect; the errors cause
     interrupts that prevent system suspend (Kai-Heng Feng)

   - Fix EDR-related _DSM support, which previously evaluated revision 5
     but assumed revision 6 behavior (Kuppuswamy Sathyanarayanan)

  ASPM:

   - Simplify link state definitions and mask calculation (Ilpo
     Järvinen)

  Power management:

   - Avoid D3cold for HP Pavilion 17 PC/1972 PCIe Ports, where BIOS
     apparently doesn't know how to put them back in D0 (Mario
     Limonciello)

  CXL:

   - Support resetting CXL devices; special handling required because
     CXL Ports mask Secondary Bus Reset by default (Dave Jiang)

  DOE:

   - Support DOE Discovery Version 2 (Alexey Kardashevskiy)

  Endpoint framework:

   - Set endpoint BAR to be 64-bit if the driver says that's all the
     device supports, in addition to doing so if the size is >2GB
     (Niklas Cassel)

   - Simplify endpoint BAR allocation and setting interfaces (Niklas
     Cassel)

  Cadence PCIe controller driver:

   - Drop DT binding redundant msi-parent and pci-bus.yaml (Krzysztof
     Kozlowski)

  Cadence PCIe endpoint driver:

   - Configure endpoint BARs to be 64-bit based on the BAR type, not the
     BAR value (Niklas Cassel)

  Freescale Layerscape PCIe controller driver:

   - Convert DT binding to YAML (Frank Li)

  MediaTek MT7621 PCIe controller driver:

   - Add DT binding missing 'reg' property for child Root Ports
     (Krzysztof Kozlowski)

   - Fix theoretical string truncation in PHY name (Sergio Paracuellos)

  NVIDIA Tegra194 PCIe controller driver:

   - Return success for endpoint probe instead of falling through to the
     failure path (Vidya Sagar)

  Renesas R-Car PCIe controller driver:

   - Add DT binding missing IOMMU properties (Geert Uytterhoeven)

   - Add DT binding R-Car V4H compatible for host and endpoint mode
     (Yoshihiro Shimoda)

  Rockchip PCIe controller driver:

   - Configure endpoint BARs to be 64-bit based on the BAR type, not the
     BAR value (Niklas Cassel)

   - Add DT binding missing maxItems to ep-gpios (Krzysztof Kozlowski)

   - Set the Subsystem Vendor ID, which was previously zero because it
     was masked incorrectly (Rick Wertenbroek)

  Synopsys DesignWare PCIe controller driver:

   - Restructure DBI register access to accommodate devices where this
     requires Refclk to be active (Manivannan Sadhasivam)

   - Remove the deinit() callback, which was only needed by the
     pcie-rcar-gen4 driver, and do that work directly in that driver
     (Manivannan Sadhasivam)

   - Add dw_pcie_ep_cleanup() so drivers that support PERST# can clean
     up things like eDMA (Manivannan Sadhasivam)

   - Rename dw_pcie_ep_exit() to dw_pcie_ep_deinit() to make it parallel
     to dw_pcie_ep_init() (Manivannan Sadhasivam)

   - Rename dw_pcie_ep_init_complete() to dw_pcie_ep_init_registers() to
     reflect the actual functionality (Manivannan Sadhasivam)

   - Call dw_pcie_ep_init_registers() directly from all the glue
     drivers, not just those that require active Refclk from the host
     (Manivannan Sadhasivam)

   - Remove the "core_init_notifier" flag, which was an obscure way for
     glue drivers to indicate that they depend on Refclk from the host
     (Manivannan Sadhasivam)

  TI J721E PCIe driver:

   - Add DT binding J784S4 SoC Device ID (Siddharth Vadapalli)

   - Add DT binding J722S SoC support (Siddharth Vadapalli)

  TI Keystone PCIe controller driver:

   - Add DT binding missing num-viewport, phys and phy-name properties
     (Jan Kiszka)

  Miscellaneous:

   - Constify and annotate with __ro_after_init (Heiner Kallweit)

   - Convert DT bindings to YAML (Krzysztof Kozlowski)

   - Check for kcalloc() failure in of_pci_prop_intr_map() (Duoming
     Zhou)"

* tag 'pci-v6.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (97 commits)
  PCI: Do not wait for disconnected devices when resuming
  x86/pci: Skip early E820 check for ECAM region
  PCI: Remove unused pci_enable_device_io()
  ata: pata_cs5520: Remove unnecessary call to pci_enable_device_io()
  PCI: Update pci_find_capability() stub return types
  PCI: Remove PCI_IRQ_LEGACY
  scsi: vmw_pvscsi: Do not use PCI_IRQ_LEGACY
  scsi: pmcraid: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: mpt3sas: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: megaraid_sas: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: ipr: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: hpsa: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  scsi: arcmsr: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  wifi: rtw89: Use PCI_IRQ_INTX instead of PCI_IRQ_LEGACY
  dt-bindings: PCI: rockchip,rk3399-pcie: Add missing maxItems to ep-gpios
  Revert "genirq/msi: Provide constants for PCI/IMS support"
  Revert "x86/apic/msi: Enable PCI/IMS"
  Revert "iommu/vt-d: Enable PCI/IMS"
  Revert "iommu/amd: Enable PCI/IMS"
  Revert "PCI/MSI: Provide IMS (Interrupt Message Store) support"
  ...
Merged by Linus Torvalds on 2024-05-21 10:09:28 -07:00 (commit f0bae243b2)
130 changed files with 1232 additions and 761 deletions


@ -103,7 +103,7 @@ min_vecs argument set to this limit, and the PCI core will return -ENOSPC
if it can't meet the minimum number of vectors.
The flags argument is used to specify which type of interrupt can be used
by the device and the driver (PCI_IRQ_LEGACY, PCI_IRQ_MSI, PCI_IRQ_MSIX).
by the device and the driver (PCI_IRQ_INTX, PCI_IRQ_MSI, PCI_IRQ_MSIX).
A convenient short-hand (PCI_IRQ_ALL_TYPES) is also available to ask for
any possible kind of interrupt. If the PCI_IRQ_AFFINITY flag is set,
pci_alloc_irq_vectors() will spread the interrupts around the available CPUs.
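
As a minimal illustration of the flags discussed above (not part of this
series; the driver, handler, and name strings below are hypothetical), a
probe path might request vectors like this:

#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t mydrv_isr(int irq, void *dev_id)
{
	return IRQ_HANDLED;	/* placeholder handler for the sketch */
}

static int mydrv_setup_irq(struct pci_dev *pdev, void *priv)
{
	int nvec;

	/* Any of MSI-X, MSI, or INTx is acceptable; at least one vector. */
	nvec = pci_alloc_irq_vectors(pdev, 1, 4, PCI_IRQ_ALL_TYPES);
	if (nvec < 0)
		return nvec;	/* e.g. -ENOSPC if min_vecs cannot be met */

	/* pci_irq_vector() maps vector index 0 to its Linux IRQ number. */
	return request_irq(pci_irq_vector(pdev, 0), mydrv_isr, 0,
			   "mydrv", priv);
}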


@ -335,7 +335,7 @@ causes the PCI support to program CPU vector data into the PCI device
capability registers. Many architectures, chip-sets, or BIOSes do NOT
support MSI or MSI-X and a call to pci_alloc_irq_vectors with just
the PCI_IRQ_MSI and PCI_IRQ_MSIX flags will fail, so try to always
specify PCI_IRQ_LEGACY as well.
specify PCI_IRQ_INTX as well.
Drivers that have different interrupt handlers for MSI/MSI-X and
legacy INTx should chose the right one based on the msi_enabled
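
A hedged sketch of that pattern (hypothetical driver and ISR names, not
taken from this series): after pci_alloc_irq_vectors() succeeds, the
pci_dev msi_enabled/msix_enabled flags tell the driver which handler to
install:

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Placeholder ISRs for the sketch only. */
static irqreturn_t mydrv_msi_isr(int irq, void *dev_id) { return IRQ_HANDLED; }
static irqreturn_t mydrv_intx_isr(int irq, void *dev_id) { return IRQ_HANDLED; }

static int mydrv_request_irq(struct pci_dev *pdev, void *priv)
{
	irq_handler_t handler;
	unsigned long flags = 0;

	if (pdev->msix_enabled || pdev->msi_enabled) {
		handler = mydrv_msi_isr;
	} else {
		handler = mydrv_intx_isr;
		flags = IRQF_SHARED;	/* INTx lines may be shared */
	}

	return request_irq(pci_irq_vector(pdev, 0), handler, flags,
			   "mydrv", priv);
}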


@ -241,7 +241,7 @@ After reboot with new kernel or insert the module, a device file named
Then, you need a user space tool named aer-inject, which can be gotten
from:
https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/
https://github.com/intel/aer-inject.git
More information about aer-inject can be found in the document in
its source code.


@ -13,7 +13,7 @@ description:
Amlogic Meson PCIe host controller is based on the Synopsys DesignWare PCI core.
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/pci/snps,dw-pcie-common.yaml#
# We need a select here so we don't match all nodes with 'snps,dw-pcie'


@ -85,7 +85,7 @@ required:
unevaluatedProperties: false
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/interrupt-controller/msi-controller.yaml#
- if:
properties:


@ -11,7 +11,7 @@ maintainers:
- Scott Branden <scott.branden@broadcom.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -108,7 +108,7 @@ required:
- msi-controller
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/interrupt-controller/msi-controller.yaml#
- if:
properties:


@ -10,7 +10,6 @@ maintainers:
- Tom Joseph <tjoseph@cadence.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: cdns-pcie-host.yaml#
properties:
@ -25,8 +24,6 @@ properties:
- const: reg
- const: cfg
msi-parent: true
required:
- reg
- reg-names


@ -10,7 +10,7 @@ maintainers:
- Tom Joseph <tjoseph@cadence.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: cdns-pcie.yaml#
properties:


@ -51,7 +51,7 @@ description: |
<0x6000 0 0 4 &pci_intc 2>;
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -0,0 +1,102 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/fsl,layerscape-pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Freescale Layerscape PCIe Endpoint(EP) controller
maintainers:
- Frank Li <Frank.Li@nxp.com>
description:
This PCIe EP controller is based on the Synopsys DesignWare PCIe IP.
This controller derives its clocks from the Reset Configuration Word (RCW)
which is used to describe the PLL settings at the time of chip-reset.
Also as per the available Reference Manuals, there is no specific 'version'
register available in the Freescale PCIe controller register set,
which can allow determining the underlying DesignWare PCIe controller version
information.
properties:
compatible:
enum:
- fsl,ls2088a-pcie-ep
- fsl,ls1088a-pcie-ep
- fsl,ls1046a-pcie-ep
- fsl,ls1028a-pcie-ep
- fsl,lx2160ar2-pcie-ep
reg:
maxItems: 2
reg-names:
items:
- const: regs
- const: addr_space
fsl,pcie-scfg:
$ref: /schemas/types.yaml#/definitions/phandle
description: A phandle to the SCFG device node. The second entry is the
physical PCIe controller index starting from '0'. This is used to get
SCFG PEXN registers.
big-endian:
$ref: /schemas/types.yaml#/definitions/flag
description: If the PEX_LUT and PF register block is in big-endian, specify
this property.
dma-coherent: true
interrupts:
minItems: 1
maxItems: 2
interrupt-names:
minItems: 1
maxItems: 2
required:
- compatible
- reg
- reg-names
allOf:
- if:
properties:
compatible:
enum:
- fsl,ls1028a-pcie-ep
- fsl,ls1046a-pcie-ep
- fsl,ls1088a-pcie-ep
then:
properties:
interrupt-names:
items:
- const: pme
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie_ep1: pcie-ep@3400000 {
compatible = "fsl,ls1028a-pcie-ep";
reg = <0x00 0x03400000 0x0 0x00100000
0x80 0x00000000 0x8 0x00000000>;
reg-names = "regs", "addr_space";
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* PME interrupt */
interrupt-names = "pme";
num-ib-windows = <6>;
num-ob-windows = <8>;
status = "disabled";
};
};
...


@ -0,0 +1,167 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/fsl,layerscape-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Freescale Layerscape PCIe Root Complex(RC) controller
maintainers:
- Frank Li <Frank.Li@nxp.com>
description:
This PCIe RC controller is based on the Synopsys DesignWare PCIe IP
This controller derives its clocks from the Reset Configuration Word (RCW)
which is used to describe the PLL settings at the time of chip-reset.
Also as per the available Reference Manuals, there is no specific 'version'
register available in the Freescale PCIe controller register set,
which can allow determining the underlying DesignWare PCIe controller version
information.
properties:
compatible:
enum:
- fsl,ls1021a-pcie
- fsl,ls2080a-pcie
- fsl,ls2085a-pcie
- fsl,ls2088a-pcie
- fsl,ls1088a-pcie
- fsl,ls1046a-pcie
- fsl,ls1043a-pcie
- fsl,ls1012a-pcie
- fsl,ls1028a-pcie
- fsl,lx2160a-pcie
reg:
maxItems: 2
reg-names:
items:
- const: regs
- const: config
fsl,pcie-scfg:
$ref: /schemas/types.yaml#/definitions/phandle
description: A phandle to the SCFG device node. The second entry is the
physical PCIe controller index starting from '0'. This is used to get
SCFG PEXN registers.
big-endian:
$ref: /schemas/types.yaml#/definitions/flag
description: If the PEX_LUT and PF register block is in big-endian, specify
this property.
dma-coherent: true
msi-parent: true
iommu-map: true
interrupts:
minItems: 1
maxItems: 2
interrupt-names:
minItems: 1
maxItems: 2
required:
- compatible
- reg
- reg-names
- "#address-cells"
- "#size-cells"
- device_type
- bus-range
- ranges
- interrupts
- interrupt-names
- "#interrupt-cells"
- interrupt-map-mask
- interrupt-map
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- if:
properties:
compatible:
enum:
- fsl,ls1028a-pcie
- fsl,ls1046a-pcie
- fsl,ls1043a-pcie
- fsl,ls1012a-pcie
then:
properties:
interrupts:
maxItems: 2
interrupt-names:
items:
- const: pme
- const: aer
- if:
properties:
compatible:
enum:
- fsl,ls2080a-pcie
- fsl,ls2085a-pcie
- fsl,ls2088a-pcie
then:
properties:
interrupts:
maxItems: 1
interrupt-names:
items:
- const: intr
- if:
properties:
compatible:
enum:
- fsl,ls1088a-pcie
then:
properties:
interrupts:
maxItems: 1
interrupt-names:
items:
- const: aer
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@3400000 {
compatible = "fsl,ls1088a-pcie";
reg = <0x00 0x03400000 0x0 0x00100000>, /* controller registers */
<0x20 0x00000000 0x0 0x00002000>; /* configuration space */
reg-names = "regs", "config";
interrupts = <0 108 IRQ_TYPE_LEVEL_HIGH>; /* aer interrupt */
interrupt-names = "aer";
#address-cells = <3>;
#size-cells = <2>;
dma-coherent;
device_type = "pci";
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x20 0x00010000 0x0 0x00010000 /* downstream I/O */
0x82000000 0x0 0x40000000 0x20 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */
msi-parent = <&its>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0000 0 0 1 &gic 0 0 0 109 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 2 &gic 0 0 0 110 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 3 &gic 0 0 0 111 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 4 &gic 0 0 0 112 IRQ_TYPE_LEVEL_HIGH>;
iommu-map = <0 &smmu 0 1>; /* Fixed-up by bootloader */
};
};
...


@ -116,7 +116,7 @@ required:
- ranges
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- if:
properties:
compatible:


@ -12,7 +12,7 @@ maintainers:
description: PCI host controller found in the Intel IXP4xx SoC series.
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -11,7 +11,7 @@ maintainers:
- Srikanth Thokala <srikanth.thokala@intel.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -1,79 +0,0 @@
Freescale Layerscape PCIe controller
This PCIe host controller is based on the Synopsys DesignWare PCIe IP
and thus inherits all the common properties defined in snps,dw-pcie.yaml.
This controller derives its clocks from the Reset Configuration Word (RCW)
which is used to describe the PLL settings at the time of chip-reset.
Also as per the available Reference Manuals, there is no specific 'version'
register available in the Freescale PCIe controller register set,
which can allow determining the underlying DesignWare PCIe controller version
information.
Required properties:
- compatible: should contain the platform identifier such as:
RC mode:
"fsl,ls1021a-pcie"
"fsl,ls2080a-pcie", "fsl,ls2085a-pcie"
"fsl,ls2088a-pcie"
"fsl,ls1088a-pcie"
"fsl,ls1046a-pcie"
"fsl,ls1043a-pcie"
"fsl,ls1012a-pcie"
"fsl,ls1028a-pcie"
EP mode:
"fsl,ls1028a-pcie-ep", "fsl,ls-pcie-ep"
"fsl,ls1046a-pcie-ep", "fsl,ls-pcie-ep"
"fsl,ls1088a-pcie-ep", "fsl,ls-pcie-ep"
"fsl,ls2088a-pcie-ep", "fsl,ls-pcie-ep"
"fsl,lx2160ar2-pcie-ep", "fsl,ls-pcie-ep"
- reg: base addresses and lengths of the PCIe controller register blocks.
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
- interrupt-names: It could include the following entries:
"aer": Used for interrupt line which reports AER events when
non MSI/MSI-X/INTx mode is used
"pme": Used for interrupt line which reports PME events when
non MSI/MSI-X/INTx mode is used
"intr": Used for SoCs(like ls2080a, lx2160a, ls2080a, ls2088a, ls1088a)
which has a single interrupt line for miscellaneous controller
events(could include AER and PME events).
- fsl,pcie-scfg: Must include two entries.
The first entry must be a link to the SCFG device node
The second entry is the physical PCIe controller index starting from '0'.
This is used to get SCFG PEXN registers
- dma-coherent: Indicates that the hardware IP block can ensure the coherency
of the data transferred from/to the IP block. This can avoid the software
cache flush/invalid actions, and improve the performance significantly.
Optional properties:
- big-endian: If the PEX_LUT and PF register block is in big-endian, specify
this property.
Example:
pcie@3400000 {
compatible = "fsl,ls1088a-pcie";
reg = <0x00 0x03400000 0x0 0x00100000>, /* controller registers */
<0x20 0x00000000 0x0 0x00002000>; /* configuration space */
reg-names = "regs", "config";
interrupts = <0 108 IRQ_TYPE_LEVEL_HIGH>; /* aer interrupt */
interrupt-names = "aer";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-viewport = <256>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x20 0x00010000 0x0 0x00010000 /* downstream I/O */
0x82000000 0x0 0x40000000 0x20 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */
msi-parent = <&its>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0000 0 0 1 &gic 0 0 0 109 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 2 &gic 0 0 0 110 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 3 &gic 0 0 0 111 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 4 &gic 0 0 0 112 IRQ_TYPE_LEVEL_HIGH>;
iommu-map = <0 &smmu 0 1>; /* Fixed-up by bootloader */
};


@ -13,7 +13,7 @@ description: |+
PCI host controller found on Loongson PCHs and SoCs.
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -14,7 +14,7 @@ description: |+
with 3 Root Ports. Each Root Port supports a Gen1 1-lane Link
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:
@ -33,9 +33,12 @@ properties:
patternProperties:
'^pcie@[0-2],0$':
type: object
$ref: /schemas/pci/pci-bus.yaml#
$ref: /schemas/pci/pci-pci-bridge.yaml#
properties:
reg:
maxItems: 1
resets:
maxItems: 1


@ -140,7 +140,7 @@ required:
- interrupt-controller
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- if:
properties:
compatible:


@ -10,7 +10,7 @@ maintainers:
- Daire McNamara <daire.mcnamara@microchip.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/interrupt-controller/msi-controller.yaml#
properties:


@ -95,6 +95,6 @@ anyOf:
- msi-map
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
additionalProperties: true


@ -130,7 +130,7 @@ anyOf:
- msi-map
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- if:
properties:
compatible:


@ -16,7 +16,9 @@ allOf:
properties:
compatible:
items:
- const: renesas,r8a779f0-pcie-ep # R-Car S4-8
- enum:
- renesas,r8a779f0-pcie-ep # R-Car S4-8
- renesas,r8a779g0-pcie-ep # R-Car V4H
- const: renesas,rcar-gen4-pcie-ep # R-Car Gen4
reg:


@ -16,7 +16,9 @@ allOf:
properties:
compatible:
items:
- const: renesas,r8a779f0-pcie # R-Car S4-8
- enum:
- renesas,r8a779f0-pcie # R-Car S4-8
- renesas,r8a779g0-pcie # R-Car V4H
- const: renesas,rcar-gen4-pcie # R-Car Gen4
reg:


@ -12,7 +12,7 @@ maintainers:
- Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
allOf:
- $ref: pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:
@ -77,6 +77,9 @@ properties:
vpcie12v-supply:
description: The 12v regulator to use for PCIe.
iommu-map: true
iommu-map-mask: true
required:
- compatible
- reg


@ -110,7 +110,7 @@ required:
- "#interrupt-cells"
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- if:
properties:


@ -10,7 +10,7 @@ maintainers:
- Shawn Lin <shawn.lin@rock-chips.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: rockchip,rk3399-pcie-common.yaml#
properties:
@ -37,6 +37,7 @@ properties:
description: This property is needed if using 24MHz OSC for RC's PHY.
ep-gpios:
maxItems: 1
description: pre-reset GPIO
vpcie12v-supply:


@ -23,7 +23,7 @@ select:
- compatible
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/pci/snps,dw-pcie-common.yaml#
- if:
not:


@ -11,7 +11,7 @@ maintainers:
- Kishon Vijay Abraham I <kishon@ti.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:
@ -55,6 +55,20 @@ properties:
dma-coherent: true
num-viewport:
$ref: /schemas/types.yaml#/definitions/uint32
phys:
description: per-lane PHYs
minItems: 1
maxItems: 2
phy-names:
minItems: 1
maxItems: 2
items:
pattern: '^pcie-phy[0-1]$'
required:
- compatible
- reg
@ -74,6 +88,7 @@ then:
- dma-coherent
- power-domains
- msi-map
- num-viewport
unevaluatedProperties: false
@ -81,6 +96,7 @@ examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/phy/phy.h>
#include <dt-bindings/soc/ti,sci_pm_domain.h>
pcie0_rc: pcie@5500000 {
@ -98,9 +114,13 @@ examples:
ti,syscon-pcie-id = <&scm_conf 0x0210>;
ti,syscon-pcie-mode = <&scm_conf 0x4060>;
bus-range = <0x0 0xff>;
num-viewport = <16>;
max-link-speed = <2>;
dma-coherent;
interrupts = <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>;
msi-map = <0x0 &gic_its 0x0 0x10000>;
device_type = "pci";
num-lanes = <1>;
phys = <&serdes0 PHY_TYPE_PCIE 0>;
phy-names = "pcie-phy0";
};


@ -23,6 +23,10 @@ properties:
items:
- const: ti,j7200-pcie-host
- const: ti,j721e-pcie-host
- description: PCIe controller in J722S
items:
- const: ti,j722s-pcie-host
- const: ti,j721e-pcie-host
reg:
maxItems: 4
@ -68,6 +72,7 @@ properties:
- 0xb00d
- 0xb00f
- 0xb010
- 0xb012
- 0xb013
msi-map: true


@ -13,7 +13,7 @@ description: |+
PCI host controller found on the ARM Versatile PB board's FPGA.
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -10,7 +10,7 @@ maintainers:
- Bharat Kumar Gogada <bharat.kumar.gogada@amd.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -10,7 +10,7 @@ maintainers:
- Thippeswamy Havalige <thippeswamy.havalige@amd.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -10,7 +10,7 @@ maintainers:
- Thippeswamy Havalige <thippeswamy.havalige@amd.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/interrupt-controller/msi-controller.yaml#
properties:


@ -10,7 +10,7 @@ maintainers:
- Thippeswamy Havalige <thippeswamy.havalige@amd.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:


@ -88,7 +88,7 @@ MSI功能。
如果设备对最小数量的向量有要求驱动程序可以传递一个min_vecs参数设置为这个限制
如果PCI核不能满足最小数量的向量将返回-ENOSPC。
flags参数用来指定设备和驱动程序可以使用哪种类型的中断PCI_IRQ_LEGACY, PCI_IRQ_MSI,
flags参数用来指定设备和驱动程序可以使用哪种类型的中断PCI_IRQ_INTX, PCI_IRQ_MSI,
PCI_IRQ_MSIX。一个方便的短语PCI_IRQ_ALL_TYPES也可以用来要求任何可能的中断类型。
如果PCI_IRQ_AFFINITY标志被设置pci_alloc_irq_vectors()将把中断分散到可用的CPU上。


@ -304,7 +304,7 @@ MSI-X可以分配几个单独的向量。
的PCI_IRQ_MSI和/或PCI_IRQ_MSIX标志来启用MSI功能。这将导致PCI支持将CPU向量数
据编程到PCI设备功能寄存器中。许多架构、芯片组或BIOS不支持MSI或MSI-X调用
``pci_alloc_irq_vectors`` 时只使用PCI_IRQ_MSI和PCI_IRQ_MSIX标志会失败
所以尽量也要指定 ``PCI_IRQ_LEGACY``
所以尽量也要指定 ``PCI_IRQ_INTX``
对MSI/MSI-X和传统INTx有不同中断处理程序的驱动程序应该在调用
``pci_alloc_irq_vectors`` 后根据 ``pci_dev``结构体中的 ``msi_enabled``


@ -184,7 +184,6 @@ static int x86_msi_prepare(struct irq_domain *domain, struct device *dev,
alloc->type = X86_IRQ_ALLOC_TYPE_PCI_MSI;
return 0;
case DOMAIN_BUS_PCI_DEVICE_MSIX:
case DOMAIN_BUS_PCI_DEVICE_IMS:
alloc->type = X86_IRQ_ALLOC_TYPE_PCI_MSIX;
return 0;
default:
@ -229,10 +228,6 @@ static bool x86_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
case DOMAIN_BUS_PCI_DEVICE_MSI:
case DOMAIN_BUS_PCI_DEVICE_MSIX:
break;
case DOMAIN_BUS_PCI_DEVICE_IMS:
if (!(pops->supported_flags & MSI_FLAG_PCI_IMS))
return false;
break;
default:
WARN_ON_ONCE(1);
return false;


@ -518,7 +518,34 @@ static bool __ref pci_mmcfg_reserved(struct device *dev,
{
struct resource *conflict;
if (!early && !acpi_disabled) {
if (early) {
/*
* Don't try to do this check unless configuration type 1
* is available. How about type 2?
*/
/*
* 946f2ee5c731 ("Check that MCFG points to an e820
* reserved area") added this E820 check in 2006 to work
* around BIOS defects.
*
* Per PCI Firmware r3.3, sec 4.1.2, ECAM space must be
* reserved by a PNP0C02 resource, but it need not be
* mentioned in E820. Before the ACPI interpreter is
* available, we can't check for PNP0C02 resources, so
* there's no reliable way to verify the region in this
* early check. Keep it only for the old machines that
* motivated 946f2ee5c731.
*/
if (dmi_get_bios_year() < 2016 && raw_pci_ops)
return is_mmconf_reserved(e820__mapped_all, cfg, dev,
"E820 entry");
return true;
}
if (!acpi_disabled) {
if (is_mmconf_reserved(is_acpi_reserved, cfg, dev,
"ACPI motherboard resource"))
return true;
@ -551,16 +578,7 @@ static bool __ref pci_mmcfg_reserved(struct device *dev,
* For MCFG information constructed from hotpluggable host bridge's
* _CBA method, just assume it's reserved.
*/
if (pci_mmcfg_running_state)
return true;
/* Don't try to do this check unless configuration
type 1 is available. how about type 2 ?*/
if (raw_pci_ops)
return is_mmconf_reserved(e820__mapped_all, cfg, dev,
"E820 entry");
return false;
return pci_mmcfg_running_state;
}
static void __init pci_mmcfg_reject_broken(int early)


@ -154,9 +154,6 @@ static const uint32_t ehci_hdr[] = { /* dev f function 4 - devfn = 7d */
0x0, 0x40, 0x0, 0x40a, /* CapPtr INT-D, IRQA */
0xc8020001, 0x0, 0x0, 0x0, /* Capabilities - 40 is R/O, 44 is
mask 8103 (power control) */
#if 0
0x1, 0x40080000, 0x0, 0x0, /* EECP - see EHCI spec section 2.1.7 */
#endif
0x01000001, 0x0, 0x0, 0x0, /* EECP - see EHCI spec section 2.1.7 */
0x2020, 0x0, 0x0, 0x0, /* (EHCI page 8) 60 SBRN (R/O),
61 FLADJ (R/W), PORTWAKECAP */


@ -151,12 +151,6 @@ static int cs5520_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
if (!host)
return -ENOMEM;
/* Perform set up for DMA */
if (pci_enable_device_io(pdev)) {
dev_err(&pdev->dev, "unable to configure BAR2.\n");
return -ENODEV;
}
if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
dev_err(&pdev->dev, "unable to configure DMA mask.\n");
return -ENODEV;


@ -525,7 +525,7 @@ static int cxl_cdat_get_length(struct device *dev,
__le32 response[2];
int rc;
rc = pci_doe(doe_mb, PCI_DVSEC_VENDOR_ID_CXL,
rc = pci_doe(doe_mb, PCI_VENDOR_ID_CXL,
CXL_DOE_PROTOCOL_TABLE_ACCESS,
&request, sizeof(request),
&response, sizeof(response));
@ -555,7 +555,7 @@ static int cxl_cdat_read_table(struct device *dev,
__le32 request = CDAT_DOE_REQ(entry_handle);
int rc;
rc = pci_doe(doe_mb, PCI_DVSEC_VENDOR_ID_CXL,
rc = pci_doe(doe_mb, PCI_VENDOR_ID_CXL,
CXL_DOE_PROTOCOL_TABLE_ACCESS,
&request, sizeof(request),
rsp, sizeof(*rsp) + remaining);
@ -640,7 +640,7 @@ void read_cdat_data(struct cxl_port *port)
if (!pdev)
return;
doe_mb = pci_find_doe_mailbox(pdev, PCI_DVSEC_VENDOR_ID_CXL,
doe_mb = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_CXL,
CXL_DOE_PROTOCOL_TABLE_ACCESS);
if (!doe_mb) {
dev_dbg(dev, "No CDAT mailbox\n");
@ -1045,3 +1045,32 @@ long cxl_pci_get_latency(struct pci_dev *pdev)
return cxl_flit_size(pdev) * MEGA / bw;
}
static int __cxl_endpoint_decoder_reset_detected(struct device *dev, void *data)
{
struct cxl_port *port = data;
struct cxl_decoder *cxld;
struct cxl_hdm *cxlhdm;
void __iomem *hdm;
u32 ctrl;
if (!is_endpoint_decoder(dev))
return 0;
cxld = to_cxl_decoder(dev);
if ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)
return 0;
cxlhdm = dev_get_drvdata(&port->dev);
hdm = cxlhdm->regs.hdm_decoder;
ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(cxld->id));
return !FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl);
}
bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port)
{
return device_for_each_child(&port->dev, port,
__cxl_endpoint_decoder_reset_detected);
}
EXPORT_SYMBOL_NS_GPL(cxl_endpoint_decoder_reset_detected, CXL);


@ -314,7 +314,7 @@ int cxl_find_regblock_instance(struct pci_dev *pdev, enum cxl_regloc_type type,
.resource = CXL_RESOURCE_NONE,
};
regloc = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL,
regloc = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL,
CXL_DVSEC_REG_LOCATOR);
if (!regloc)
return -ENXIO;


@ -898,6 +898,8 @@ void cxl_coordinates_combine(struct access_coordinate *out,
struct access_coordinate *c1,
struct access_coordinate *c2);
bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port);
/*
* Unit test builds overrides this to __weak, find the 'strong' version
* of these symbols in tools/testing/cxl/.


@ -13,7 +13,6 @@
* "DVSEC" redundancies removed. When obvious, abbreviations may be used.
*/
#define PCI_DVSEC_HEADER1_LENGTH_MASK GENMASK(31, 20)
#define PCI_DVSEC_VENDOR_ID_CXL 0x1E98
/* CXL 2.0 8.1.3: PCIe DVSEC for CXL Device */
#define CXL_DVSEC_PCIE_DEVICE 0


@ -817,7 +817,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
cxlds->rcd = is_cxl_restricted(pdev);
cxlds->serial = pci_get_dsn(pdev);
cxlds->cxl_dvsec = pci_find_dvsec_capability(
pdev, PCI_DVSEC_VENDOR_ID_CXL, CXL_DVSEC_PCIE_DEVICE);
pdev, PCI_VENDOR_ID_CXL, CXL_DVSEC_PCIE_DEVICE);
if (!cxlds->cxl_dvsec)
dev_warn(&pdev->dev,
"Device DVSEC not present, skip CXL.mem init\n");
@ -957,11 +957,33 @@ static void cxl_error_resume(struct pci_dev *pdev)
dev->driver ? "successful" : "failed");
}
static void cxl_reset_done(struct pci_dev *pdev)
{
struct cxl_dev_state *cxlds = pci_get_drvdata(pdev);
struct cxl_memdev *cxlmd = cxlds->cxlmd;
struct device *dev = &pdev->dev;
/*
* FLR does not expect to touch the HDM decoders and related
* registers. SBR, however, will wipe all device configurations.
* Issue a warning if there was an active decoder before the reset
* that no longer exists.
*/
guard(device)(&cxlmd->dev);
if (cxlmd->endpoint &&
cxl_endpoint_decoder_reset_detected(cxlmd->endpoint)) {
dev_crit(dev, "SBR happened without memory regions removal.\n");
dev_crit(dev, "System may be unstable if regions hosted system memory.\n");
add_taint(TAINT_USER, LOCKDEP_STILL_OK);
}
}
static const struct pci_error_handlers cxl_error_handlers = {
.error_detected = cxl_error_detected,
.slot_reset = cxl_slot_reset,
.resume = cxl_error_resume,
.cor_error_detected = cxl_cor_error_detected,
.reset_done = cxl_reset_done,
};
static struct pci_driver cxl_pci_driver = {


@ -279,7 +279,7 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
adev->irq.msi_enabled = false;
if (!amdgpu_msi_ok(adev))
flags = PCI_IRQ_LEGACY;
flags = PCI_IRQ_INTX;
else
flags = PCI_IRQ_ALL_TYPES;


@ -3281,7 +3281,7 @@ static int qib_7220_intr_fallback(struct qib_devdata *dd)
qib_free_irq(dd);
dd->msi_lo = 0;
if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_LEGACY) < 0)
if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX) < 0)
qib_dev_err(dd, "Failed to enable INTx\n");
qib_setup_7220_interrupt(dd);
return 1;


@ -3471,8 +3471,7 @@ try_intx:
pci_irq_vector(dd->pcidev, msixnum),
ret);
qib_7322_free_irq(dd);
pci_alloc_irq_vectors(dd->pcidev, 1, 1,
PCI_IRQ_LEGACY);
pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX);
goto try_intx;
}
dd->cspec->msix_entries[msixnum].arg = arg;
@ -5143,7 +5142,7 @@ static int qib_7322_intr_fallback(struct qib_devdata *dd)
qib_devinfo(dd->pcidev,
"MSIx interrupt not detected, trying INTx interrupts\n");
qib_7322_free_irq(dd);
if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_LEGACY) < 0)
if (pci_alloc_irq_vectors(dd->pcidev, 1, 1, PCI_IRQ_INTX) < 0)
qib_dev_err(dd, "Failed to enable INTx\n");
qib_setup_7322_interrupt(dd, 0);
return 1;


@ -210,7 +210,7 @@ int qib_pcie_params(struct qib_devdata *dd, u32 minw, u32 *nent)
}
if (dd->flags & QIB_HAS_INTX)
flags |= PCI_IRQ_LEGACY;
flags |= PCI_IRQ_INTX;
maxvec = (nent && *nent) ? *nent : 1;
nvec = pci_alloc_irq_vectors(dd->pcidev, 1, maxvec, flags);
if (nvec < 0)

View File

@ -531,7 +531,7 @@ static int pvrdma_alloc_intrs(struct pvrdma_dev *dev)
PCI_IRQ_MSIX);
if (ret < 0) {
ret = pci_alloc_irq_vectors(pdev, 1, 1,
PCI_IRQ_MSI | PCI_IRQ_LEGACY);
PCI_IRQ_MSI | PCI_IRQ_INTX);
if (ret < 0)
return ret;
}


@ -3784,20 +3784,11 @@ static struct irq_chip amd_ir_chip = {
};
static const struct msi_parent_ops amdvi_msi_parent_ops = {
.supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED |
MSI_FLAG_MULTI_PCI_MSI |
MSI_FLAG_PCI_IMS,
.supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED | MSI_FLAG_MULTI_PCI_MSI,
.prefix = "IR-",
.init_dev_msi_info = msi_parent_init_dev_msi_info,
};
static const struct msi_parent_ops virt_amdvi_msi_parent_ops = {
.supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED |
MSI_FLAG_MULTI_PCI_MSI,
.prefix = "vIR-",
.init_dev_msi_info = msi_parent_init_dev_msi_info,
};
int amd_iommu_create_irq_domain(struct amd_iommu *iommu)
{
struct fwnode_handle *fn;
@ -3815,11 +3806,7 @@ int amd_iommu_create_irq_domain(struct amd_iommu *iommu)
irq_domain_update_bus_token(iommu->ir_domain, DOMAIN_BUS_AMDVI);
iommu->ir_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT |
IRQ_DOMAIN_FLAG_ISOLATED_MSI;
if (amd_iommu_np_cache)
iommu->ir_domain->msi_parent_ops = &virt_amdvi_msi_parent_ops;
else
iommu->ir_domain->msi_parent_ops = &amdvi_msi_parent_ops;
iommu->ir_domain->msi_parent_ops = &amdvi_msi_parent_ops;
return 0;
}


@ -85,7 +85,7 @@ static const struct irq_domain_ops intel_ir_domain_ops;
static void iommu_disable_irq_remapping(struct intel_iommu *iommu);
static int __init parse_ioapics_under_ir(void);
static const struct msi_parent_ops dmar_msi_parent_ops, virt_dmar_msi_parent_ops;
static const struct msi_parent_ops dmar_msi_parent_ops;
static bool ir_pre_enabled(struct intel_iommu *iommu)
{
@ -570,11 +570,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
irq_domain_update_bus_token(iommu->ir_domain, DOMAIN_BUS_DMAR);
iommu->ir_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT |
IRQ_DOMAIN_FLAG_ISOLATED_MSI;
if (cap_caching_mode(iommu->cap))
iommu->ir_domain->msi_parent_ops = &virt_dmar_msi_parent_ops;
else
iommu->ir_domain->msi_parent_ops = &dmar_msi_parent_ops;
iommu->ir_domain->msi_parent_ops = &dmar_msi_parent_ops;
ir_table->base = ir_table_base;
ir_table->bitmap = bitmap;
@ -1526,20 +1522,11 @@ static const struct irq_domain_ops intel_ir_domain_ops = {
};
static const struct msi_parent_ops dmar_msi_parent_ops = {
.supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED |
MSI_FLAG_MULTI_PCI_MSI |
MSI_FLAG_PCI_IMS,
.supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED | MSI_FLAG_MULTI_PCI_MSI,
.prefix = "IR-",
.init_dev_msi_info = msi_parent_init_dev_msi_info,
};
static const struct msi_parent_ops virt_dmar_msi_parent_ops = {
.supported_flags = X86_VECTOR_MSI_FLAGS_SUPPORTED |
MSI_FLAG_MULTI_PCI_MSI,
.prefix = "vIR-",
.init_dev_msi_info = msi_parent_init_dev_msi_info,
};
/*
* Support of Interrupt Remapping Unit Hotplug
*/


@ -54,7 +54,7 @@ static int intel_lpss_pci_probe(struct pci_dev *pdev,
if (ret)
return ret;
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_LEGACY);
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX);
if (ret < 0)
return ret;


@ -787,8 +787,7 @@ static int vmci_guest_probe_device(struct pci_dev *pdev,
error = pci_alloc_irq_vectors(pdev, num_irq_vectors, num_irq_vectors,
PCI_IRQ_MSIX);
if (error < 0) {
error = pci_alloc_irq_vectors(pdev, 1, 1,
PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY);
error = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
if (error < 0)
goto err_unsubscribe_event;
} else {


@ -170,7 +170,7 @@ static int xgbe_config_irqs(struct xgbe_prv_data *pdata)
goto out;
ret = pci_alloc_irq_vectors(pdata->pcidev, 1, 1,
PCI_IRQ_LEGACY | PCI_IRQ_MSI);
PCI_IRQ_INTX | PCI_IRQ_MSI);
if (ret < 0) {
dev_info(pdata->dev, "single IRQ enablement failed\n");
return ret;


@ -17,7 +17,7 @@
#define AQ_CFG_IS_POLLING_DEF 0U
#define AQ_CFG_FORCE_LEGACY_INT 0U
#define AQ_CFG_FORCE_INTX 0U
#define AQ_CFG_INTERRUPT_MODERATION_OFF 0
#define AQ_CFG_INTERRUPT_MODERATION_ON 1


@ -104,7 +104,7 @@ struct aq_stats_s {
};
#define AQ_HW_IRQ_INVALID 0U
#define AQ_HW_IRQ_LEGACY 1U
#define AQ_HW_IRQ_INTX 1U
#define AQ_HW_IRQ_MSI 2U
#define AQ_HW_IRQ_MSIX 3U


@ -127,7 +127,7 @@ void aq_nic_cfg_start(struct aq_nic_s *self)
cfg->irq_type = aq_pci_func_get_irq_type(self);
if ((cfg->irq_type == AQ_HW_IRQ_LEGACY) ||
if ((cfg->irq_type == AQ_HW_IRQ_INTX) ||
(cfg->aq_hw_caps->vecs == 1U) ||
(cfg->vecs == 1U)) {
cfg->is_rss = 0U;


@ -200,7 +200,7 @@ unsigned int aq_pci_func_get_irq_type(struct aq_nic_s *self)
if (self->pdev->msi_enabled)
return AQ_HW_IRQ_MSI;
return AQ_HW_IRQ_LEGACY;
return AQ_HW_IRQ_INTX;
}
static void aq_pci_free_irq_vectors(struct aq_nic_s *self)
@ -298,11 +298,8 @@ static int aq_pci_probe(struct pci_dev *pdev,
numvecs += AQ_HW_SERVICE_IRQS;
/*enable interrupts */
#if !AQ_CFG_FORCE_LEGACY_INT
err = pci_alloc_irq_vectors(self->pdev, 1, numvecs,
PCI_IRQ_MSIX | PCI_IRQ_MSI |
PCI_IRQ_LEGACY);
#if !AQ_CFG_FORCE_INTX
err = pci_alloc_irq_vectors(self->pdev, 1, numvecs, PCI_IRQ_ALL_TYPES);
if (err < 0)
goto err_hwinit;
numvecs = err;


@ -352,7 +352,7 @@ static int hw_atl_a0_hw_init(struct aq_hw_s *self, const u8 *mac_addr)
{
static u32 aq_hw_atl_igcr_table_[4][2] = {
[AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U },
[AQ_HW_IRQ_LEGACY] = { 0x20000080U, 0x20000080U },
[AQ_HW_IRQ_INTX] = { 0x20000080U, 0x20000080U },
[AQ_HW_IRQ_MSI] = { 0x20000021U, 0x20000025U },
[AQ_HW_IRQ_MSIX] = { 0x20000022U, 0x20000026U },
};


@ -562,7 +562,7 @@ static int hw_atl_b0_hw_init(struct aq_hw_s *self, const u8 *mac_addr)
{
static u32 aq_hw_atl_igcr_table_[4][2] = {
[AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U },
[AQ_HW_IRQ_LEGACY] = { 0x20000080U, 0x20000080U },
[AQ_HW_IRQ_INTX] = { 0x20000080U, 0x20000080U },
[AQ_HW_IRQ_MSI] = { 0x20000021U, 0x20000025U },
[AQ_HW_IRQ_MSIX] = { 0x20000022U, 0x20000026U },
};


@ -534,7 +534,7 @@ static int hw_atl2_hw_init(struct aq_hw_s *self, const u8 *mac_addr)
{
static u32 aq_hw_atl2_igcr_table_[4][2] = {
[AQ_HW_IRQ_INVALID] = { 0x20000000U, 0x20000000U },
[AQ_HW_IRQ_LEGACY] = { 0x20000080U, 0x20000080U },
[AQ_HW_IRQ_INTX] = { 0x20000080U, 0x20000080U },
[AQ_HW_IRQ_MSI] = { 0x20000021U, 0x20000025U },
[AQ_HW_IRQ_MSIX] = { 0x20000022U, 0x20000026U },
};


@ -901,7 +901,7 @@ static int alx_init_intr(struct alx_priv *alx)
int ret;
ret = pci_alloc_irq_vectors(alx->hw.pdev, 1, 1,
PCI_IRQ_MSI | PCI_IRQ_LEGACY);
PCI_IRQ_MSI | PCI_IRQ_INTX);
if (ret < 0)
return ret;


@ -5106,7 +5106,7 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
rtl_lock_config_regs(tp);
fallthrough;
case RTL_GIGA_MAC_VER_07 ... RTL_GIGA_MAC_VER_17:
flags = PCI_IRQ_LEGACY;
flags = PCI_IRQ_INTX;
break;
default:
flags = PCI_IRQ_ALL_TYPES;


@ -1674,14 +1674,14 @@ static int wx_set_interrupt_capability(struct wx *wx)
/* minmum one for queue, one for misc*/
nvecs = 1;
nvecs = pci_alloc_irq_vectors(pdev, nvecs,
nvecs, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
nvecs, PCI_IRQ_MSI | PCI_IRQ_INTX);
if (nvecs == 1) {
if (pdev->msi_enabled)
wx_err(wx, "Fallback to MSI.\n");
else
wx_err(wx, "Fallback to LEGACY.\n");
wx_err(wx, "Fallback to INTx.\n");
} else {
wx_err(wx, "Failed to allocate MSI/LEGACY interrupts. Error: %d\n", nvecs);
wx_err(wx, "Failed to allocate MSI/INTx interrupts. Error: %d\n", nvecs);
return nvecs;
}
@ -2127,7 +2127,7 @@ void wx_write_eitr(struct wx_q_vector *q_vector)
* wx_configure_vectors - Configure vectors for hardware
* @wx: board private structure
*
* wx_configure_vectors sets up the hardware to properly generate MSI-X/MSI/LEGACY
* wx_configure_vectors sets up the hardware to properly generate MSI-X/MSI/INTx
* interrupts.
**/
void wx_configure_vectors(struct wx *wx)


@ -394,14 +394,14 @@ static irqreturn_t ath10k_ahb_interrupt_handler(int irq, void *arg)
if (!ath10k_pci_irq_pending(ar))
return IRQ_NONE;
ath10k_pci_disable_and_clear_legacy_irq(ar);
ath10k_pci_disable_and_clear_intx_irq(ar);
ath10k_pci_irq_msi_fw_mask(ar);
napi_schedule(&ar->napi);
return IRQ_HANDLED;
}
static int ath10k_ahb_request_irq_legacy(struct ath10k *ar)
static int ath10k_ahb_request_irq_intx(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
struct ath10k_ahb *ar_ahb = ath10k_ahb_priv(ar);
@ -415,12 +415,12 @@ static int ath10k_ahb_request_irq_legacy(struct ath10k *ar)
ar_ahb->irq, ret);
return ret;
}
ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY;
ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_INTX;
return 0;
}
static void ath10k_ahb_release_irq_legacy(struct ath10k *ar)
static void ath10k_ahb_release_irq_intx(struct ath10k *ar)
{
struct ath10k_ahb *ar_ahb = ath10k_ahb_priv(ar);
@ -430,7 +430,7 @@ static void ath10k_ahb_release_irq_legacy(struct ath10k *ar)
static void ath10k_ahb_irq_disable(struct ath10k *ar)
{
ath10k_ce_disable_interrupts(ar);
ath10k_pci_disable_and_clear_legacy_irq(ar);
ath10k_pci_disable_and_clear_intx_irq(ar);
}
static int ath10k_ahb_resource_init(struct ath10k *ar)
@ -621,7 +621,7 @@ static int ath10k_ahb_hif_start(struct ath10k *ar)
ath10k_core_napi_enable(ar);
ath10k_ce_enable_interrupts(ar);
ath10k_pci_enable_legacy_irq(ar);
ath10k_pci_enable_intx_irq(ar);
ath10k_pci_rx_post(ar);
@ -775,7 +775,7 @@ static int ath10k_ahb_probe(struct platform_device *pdev)
ath10k_pci_init_napi(ar);
ret = ath10k_ahb_request_irq_legacy(ar);
ret = ath10k_ahb_request_irq_intx(ar);
if (ret)
goto err_free_pipes;
@ -806,7 +806,7 @@ err_halt_device:
ath10k_ahb_clock_disable(ar);
err_free_irq:
ath10k_ahb_release_irq_legacy(ar);
ath10k_ahb_release_irq_intx(ar);
err_free_pipes:
ath10k_pci_release_resource(ar);
@ -828,7 +828,7 @@ static void ath10k_ahb_remove(struct platform_device *pdev)
ath10k_core_unregister(ar);
ath10k_ahb_irq_disable(ar);
ath10k_ahb_release_irq_legacy(ar);
ath10k_ahb_release_irq_intx(ar);
ath10k_pci_release_resource(ar);
ath10k_ahb_halt_chip(ar);
ath10k_ahb_clock_disable(ar);


@ -721,7 +721,7 @@ bool ath10k_pci_irq_pending(struct ath10k *ar)
return false;
}
void ath10k_pci_disable_and_clear_legacy_irq(struct ath10k *ar)
void ath10k_pci_disable_and_clear_intx_irq(struct ath10k *ar)
{
/* IMPORTANT: INTR_CLR register has to be set after
* INTR_ENABLE is set to 0, otherwise interrupt can not be
@ -739,7 +739,7 @@ void ath10k_pci_disable_and_clear_legacy_irq(struct ath10k *ar)
PCIE_INTR_ENABLE_ADDRESS);
}
void ath10k_pci_enable_legacy_irq(struct ath10k *ar)
void ath10k_pci_enable_intx_irq(struct ath10k *ar)
{
ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS +
PCIE_INTR_ENABLE_ADDRESS,
@ -1935,7 +1935,7 @@ static void ath10k_pci_irq_msi_fw_unmask(struct ath10k *ar)
static void ath10k_pci_irq_disable(struct ath10k *ar)
{
ath10k_ce_disable_interrupts(ar);
ath10k_pci_disable_and_clear_legacy_irq(ar);
ath10k_pci_disable_and_clear_intx_irq(ar);
ath10k_pci_irq_msi_fw_mask(ar);
}
@ -1949,7 +1949,7 @@ static void ath10k_pci_irq_sync(struct ath10k *ar)
static void ath10k_pci_irq_enable(struct ath10k *ar)
{
ath10k_ce_enable_interrupts(ar);
ath10k_pci_enable_legacy_irq(ar);
ath10k_pci_enable_intx_irq(ar);
ath10k_pci_irq_msi_fw_unmask(ar);
}
@ -3111,11 +3111,11 @@ static irqreturn_t ath10k_pci_interrupt_handler(int irq, void *arg)
return IRQ_NONE;
}
if ((ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY) &&
if ((ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_INTX) &&
!ath10k_pci_irq_pending(ar))
return IRQ_NONE;
ath10k_pci_disable_and_clear_legacy_irq(ar);
ath10k_pci_disable_and_clear_intx_irq(ar);
ath10k_pci_irq_msi_fw_mask(ar);
napi_schedule(&ar->napi);
@ -3152,7 +3152,7 @@ static int ath10k_pci_napi_poll(struct napi_struct *ctx, int budget)
napi_schedule(ctx);
goto out;
}
ath10k_pci_enable_legacy_irq(ar);
ath10k_pci_enable_intx_irq(ar);
ath10k_pci_irq_msi_fw_unmask(ar);
}
@ -3177,7 +3177,7 @@ static int ath10k_pci_request_irq_msi(struct ath10k *ar)
return 0;
}
static int ath10k_pci_request_irq_legacy(struct ath10k *ar)
static int ath10k_pci_request_irq_intx(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
int ret;
@ -3199,8 +3199,8 @@ static int ath10k_pci_request_irq(struct ath10k *ar)
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
switch (ar_pci->oper_irq_mode) {
case ATH10K_PCI_IRQ_LEGACY:
return ath10k_pci_request_irq_legacy(ar);
case ATH10K_PCI_IRQ_INTX:
return ath10k_pci_request_irq_intx(ar);
case ATH10K_PCI_IRQ_MSI:
return ath10k_pci_request_irq_msi(ar);
default:
@ -3232,7 +3232,7 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
ath10k_pci_irq_mode);
/* Try MSI */
if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_LEGACY) {
if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_INTX) {
ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_MSI;
ret = pci_enable_msi(ar_pci->pdev);
if (ret == 0)
@ -3250,7 +3250,7 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
* For now, fix the race by repeating the write in below
* synchronization checking.
*/
ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY;
ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_INTX;
ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + PCIE_INTR_ENABLE_ADDRESS,
PCIE_INTR_FIRMWARE_MASK | PCIE_INTR_CE_MASK_ALL);
@ -3258,7 +3258,7 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
return 0;
}
static void ath10k_pci_deinit_irq_legacy(struct ath10k *ar)
static void ath10k_pci_deinit_irq_intx(struct ath10k *ar)
{
ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + PCIE_INTR_ENABLE_ADDRESS,
0);
@ -3269,8 +3269,8 @@ static int ath10k_pci_deinit_irq(struct ath10k *ar)
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
switch (ar_pci->oper_irq_mode) {
case ATH10K_PCI_IRQ_LEGACY:
ath10k_pci_deinit_irq_legacy(ar);
case ATH10K_PCI_IRQ_INTX:
ath10k_pci_deinit_irq_intx(ar);
break;
default:
pci_disable_msi(ar_pci->pdev);
@ -3307,14 +3307,14 @@ int ath10k_pci_wait_for_target_init(struct ath10k *ar)
if (val & FW_IND_INITIALIZED)
break;
if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY)
if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_INTX)
/* Fix potential race by repeating CORE_BASE writes */
ath10k_pci_enable_legacy_irq(ar);
ath10k_pci_enable_intx_irq(ar);
mdelay(10);
} while (time_before(jiffies, timeout));
ath10k_pci_disable_and_clear_legacy_irq(ar);
ath10k_pci_disable_and_clear_intx_irq(ar);
ath10k_pci_irq_msi_fw_mask(ar);
if (val == 0xffffffff) {

View File

@ -101,7 +101,7 @@ struct ath10k_pci_supp_chip {
enum ath10k_pci_irq_mode {
ATH10K_PCI_IRQ_AUTO = 0,
ATH10K_PCI_IRQ_LEGACY = 1,
ATH10K_PCI_IRQ_INTX = 1,
ATH10K_PCI_IRQ_MSI = 2,
};
@ -243,9 +243,9 @@ int ath10k_pci_init_pipes(struct ath10k *ar);
int ath10k_pci_init_config(struct ath10k *ar);
void ath10k_pci_rx_post(struct ath10k *ar);
void ath10k_pci_flush(struct ath10k *ar);
void ath10k_pci_enable_legacy_irq(struct ath10k *ar);
void ath10k_pci_enable_intx_irq(struct ath10k *ar);
bool ath10k_pci_irq_pending(struct ath10k *ar);
void ath10k_pci_disable_and_clear_legacy_irq(struct ath10k *ar);
void ath10k_pci_disable_and_clear_intx_irq(struct ath10k *ar);
void ath10k_pci_irq_msi_fw_mask(struct ath10k *ar);
int ath10k_pci_wait_for_target_init(struct ath10k *ar);
int ath10k_pci_setup_resource(struct ath10k *ar);


@ -1613,7 +1613,7 @@ static struct rtw_hci_ops rtw_pci_ops = {
static int rtw_pci_request_irq(struct rtw_dev *rtwdev, struct pci_dev *pdev)
{
unsigned int flags = PCI_IRQ_LEGACY;
unsigned int flags = PCI_IRQ_INTX;
int ret;
if (!rtw_disable_msi)


@ -3637,7 +3637,7 @@ static int rtw89_pci_request_irq(struct rtw89_dev *rtwdev,
unsigned long flags = 0;
int ret;
flags |= PCI_IRQ_LEGACY | PCI_IRQ_MSI;
flags |= PCI_IRQ_INTX | PCI_IRQ_MSI;
ret = pci_alloc_irq_vectors(pdev, 1, 1, flags);
if (ret < 0) {
rtw89_err(rtwdev, "failed to alloc irq vectors, ret %d\n", ret);


@ -2129,7 +2129,7 @@ static int idt_init_isr(struct idt_ntb_dev *ndev)
int ret;
/* Allocate just one interrupt vector for the ISR */
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_INTX);
if (ret != 1) {
dev_err(&pdev->dev, "Failed to allocate IRQ vector");
return ret;


@ -36,10 +36,13 @@ DEFINE_RAW_SPINLOCK(pci_lock);
int noinline pci_bus_read_config_##size \
(struct pci_bus *bus, unsigned int devfn, int pos, type *value) \
{ \
int res; \
unsigned long flags; \
u32 data = 0; \
if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \
int res; \
\
if (PCI_##size##_BAD) \
return PCIBIOS_BAD_REGISTER_NUMBER; \
\
pci_lock_config(flags); \
res = bus->ops->read(bus, devfn, pos, len, &data); \
if (res) \
@ -47,6 +50,7 @@ int noinline pci_bus_read_config_##size \
else \
*value = (type)data; \
pci_unlock_config(flags); \
\
return res; \
}
@ -54,12 +58,16 @@ int noinline pci_bus_read_config_##size \
int noinline pci_bus_write_config_##size \
(struct pci_bus *bus, unsigned int devfn, int pos, type value) \
{ \
int res; \
unsigned long flags; \
if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \
int res; \
\
if (PCI_##size##_BAD) \
return PCIBIOS_BAD_REGISTER_NUMBER; \
\
pci_lock_config(flags); \
res = bus->ops->write(bus, devfn, pos, len, value); \
pci_unlock_config(flags); \
\
return res; \
}
@ -216,24 +224,27 @@ static noinline void pci_wait_cfg(struct pci_dev *dev)
}
/* Returns 0 on success, negative values indicate error. */
#define PCI_USER_READ_CONFIG(size, type) \
#define PCI_USER_READ_CONFIG(size, type) \
int pci_user_read_config_##size \
(struct pci_dev *dev, int pos, type *val) \
{ \
int ret = PCIBIOS_SUCCESSFUL; \
u32 data = -1; \
int ret; \
\
if (PCI_##size##_BAD) \
return -EINVAL; \
raw_spin_lock_irq(&pci_lock); \
\
raw_spin_lock_irq(&pci_lock); \
if (unlikely(dev->block_cfg_access)) \
pci_wait_cfg(dev); \
ret = dev->bus->ops->read(dev->bus, dev->devfn, \
pos, sizeof(type), &data); \
raw_spin_unlock_irq(&pci_lock); \
pos, sizeof(type), &data); \
raw_spin_unlock_irq(&pci_lock); \
if (ret) \
PCI_SET_ERROR_RESPONSE(val); \
else \
*val = (type)data; \
\
return pcibios_err_to_errno(ret); \
} \
EXPORT_SYMBOL_GPL(pci_user_read_config_##size);
@ -243,15 +254,18 @@ EXPORT_SYMBOL_GPL(pci_user_read_config_##size);
int pci_user_write_config_##size \
(struct pci_dev *dev, int pos, type val) \
{ \
int ret = PCIBIOS_SUCCESSFUL; \
int ret; \
\
if (PCI_##size##_BAD) \
return -EINVAL; \
raw_spin_lock_irq(&pci_lock); \
\
raw_spin_lock_irq(&pci_lock); \
if (unlikely(dev->block_cfg_access)) \
pci_wait_cfg(dev); \
ret = dev->bus->ops->write(dev->bus, dev->devfn, \
pos, sizeof(type), val); \
raw_spin_unlock_irq(&pci_lock); \
pos, sizeof(type), val); \
raw_spin_unlock_irq(&pci_lock); \
\
return pcibios_err_to_errno(ret); \
} \
EXPORT_SYMBOL_GPL(pci_user_write_config_##size);
@ -275,6 +289,8 @@ void pci_cfg_access_lock(struct pci_dev *dev)
{
might_sleep();
lock_map_acquire(&dev->cfg_access_lock);
raw_spin_lock_irq(&pci_lock);
if (dev->block_cfg_access)
pci_wait_cfg(dev);
@ -329,6 +345,8 @@ void pci_cfg_access_unlock(struct pci_dev *dev)
raw_spin_unlock_irqrestore(&pci_lock, flags);
wake_up_all(&pci_cfg_wait);
lock_map_release(&dev->cfg_access_lock);
}
EXPORT_SYMBOL_GPL(pci_cfg_access_unlock);


@ -99,14 +99,11 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn,
ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS;
} else {
bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
bool is_64bits = sz > SZ_2G;
bool is_64bits = !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64);
if (is_64bits && (bar & 1))
return -EINVAL;
if (is_64bits && !(flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
if (is_64bits && is_prefetch)
ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
else if (is_prefetch)
@ -746,6 +743,8 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
spin_lock_init(&ep->lock);
pci_epc_init_notify(epc);
return 0;
free_epc_mem:


@ -467,6 +467,15 @@ static int dra7xx_add_pcie_ep(struct dra7xx_pcie *dra7xx,
return ret;
}
ret = dw_pcie_ep_init_registers(ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(ep);
return ret;
}
dw_pcie_ep_init_notify(ep);
return 0;
}


@ -1123,6 +1123,16 @@ static int imx6_add_pcie_ep(struct imx6_pcie *imx6_pcie,
dev_err(dev, "failed to initialize endpoint\n");
return ret;
}
ret = dw_pcie_ep_init_registers(ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(ep);
return ret;
}
dw_pcie_ep_init_notify(ep);
/* Start LTSSM. */
imx6_pcie_ltssm_enable(dev);


@ -1286,6 +1286,15 @@ static int ks_pcie_probe(struct platform_device *pdev)
ret = dw_pcie_ep_init(&pci->ep);
if (ret < 0)
goto err_get_sync;
ret = dw_pcie_ep_init_registers(&pci->ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
goto err_ep_init;
}
dw_pcie_ep_init_notify(&pci->ep);
break;
default:
dev_err(dev, "INVALID device type %d\n", mode);
@ -1295,6 +1304,8 @@ static int ks_pcie_probe(struct platform_device *pdev)
return 0;
err_ep_init:
dw_pcie_ep_deinit(&pci->ep);
err_get_sync:
pm_runtime_put(dev);
pm_runtime_disable(dev);


@ -279,6 +279,15 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = dw_pcie_ep_init_registers(&pci->ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(&pci->ep);
return ret;
}
dw_pcie_ep_init_notify(&pci->ep);
return ls_pcie_ep_interrupt_init(pcie, pdev);
}


@ -441,7 +441,20 @@ static int artpec6_pcie_probe(struct platform_device *pdev)
pci->ep.ops = &pcie_ep_ops;
return dw_pcie_ep_init(&pci->ep);
ret = dw_pcie_ep_init(&pci->ep);
if (ret)
return ret;
ret = dw_pcie_ep_init_registers(&pci->ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(&pci->ep);
return ret;
}
dw_pcie_ep_init_notify(&pci->ep);
break;
default:
dev_err(dev, "INVALID device type %d\n", artpec6_pcie->mode);
}


@ -15,6 +15,10 @@
#include <linux/pci-epc.h>
#include <linux/pci-epf.h>
/**
* dw_pcie_ep_linkup - Notify EPF drivers about Link Up event
* @ep: DWC EP device
*/
void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
{
struct pci_epc *epc = ep->epc;
@ -23,6 +27,10 @@ void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup);
/**
* dw_pcie_ep_init_notify - Notify EPF drivers about EPC initialization complete
* @ep: DWC EP device
*/
void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
{
struct pci_epc *epc = ep->epc;
@ -31,6 +39,14 @@ void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init_notify);
/**
* dw_pcie_ep_get_func_from_ep - Get the struct dw_pcie_ep_func corresponding to
* the endpoint function
* @ep: DWC EP device
* @func_no: Function number of the endpoint device
*
* Return: struct dw_pcie_ep_func if success, NULL otherwise.
*/
struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no)
{
@ -61,6 +77,11 @@ static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, u8 func_no,
dw_pcie_dbi_ro_wr_dis(pci);
}
/**
* dw_pcie_ep_reset_bar - Reset endpoint BAR
* @pci: DWC PCI device
* @bar: BAR number of the endpoint
*/
void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
{
u8 func_no, funcs;
@ -440,6 +461,13 @@ static const struct pci_epc_ops epc_ops = {
.get_features = dw_pcie_ep_get_features,
};
/**
* dw_pcie_ep_raise_intx_irq - Raise INTx IRQ to the host
* @ep: DWC EP device
* @func_no: Function number of the endpoint
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep, u8 func_no)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@ -451,6 +479,14 @@ int dw_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep, u8 func_no)
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_intx_irq);
/**
* dw_pcie_ep_raise_msi_irq - Raise MSI IRQ to the host
* @ep: DWC EP device
* @func_no: Function number of the endpoint
* @interrupt_num: Interrupt number to be raised
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
u8 interrupt_num)
{
@ -500,6 +536,15 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_msi_irq);
/**
* dw_pcie_ep_raise_msix_irq_doorbell - Raise MSI-X to the host using Doorbell
* method
* @ep: DWC EP device
* @func_no: Function number of the endpoint device
* @interrupt_num: Interrupt number to be raised
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
u16 interrupt_num)
{
@ -519,6 +564,14 @@ int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
return 0;
}
/**
* dw_pcie_ep_raise_msix_irq - Raise MSI-X to the host
* @ep: DWC EP device
* @func_no: Function number of the endpoint device
* @interrupt_num: Interrupt number to be raised
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
u16 interrupt_num)
{
@ -566,22 +619,42 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
return 0;
}
void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
/**
* dw_pcie_ep_cleanup - Cleanup DWC EP resources after fundamental reset
* @ep: DWC EP device
*
* Cleans up the DWC EP specific resources like eDMA etc... after fundamental
* reset like PERST#. Note that this API is only applicable for drivers
* supporting PERST# or any other methods of fundamental reset.
*/
void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct pci_epc *epc = ep->epc;
dw_pcie_edma_remove(pci);
ep->epc->init_complete = false;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_cleanup);
/**
* dw_pcie_ep_deinit - Deinitialize the endpoint device
* @ep: DWC EP device
*
* Deinitialize the endpoint device. The EPC device is not destroyed since that
* will be taken care of by devres.
*/
void dw_pcie_ep_deinit(struct dw_pcie_ep *ep)
{
struct pci_epc *epc = ep->epc;
dw_pcie_ep_cleanup(ep);
pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
epc->mem->window.page_size);
pci_epc_mem_exit(epc);
if (ep->ops->deinit)
ep->ops->deinit(ep);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_exit);
EXPORT_SYMBOL_GPL(dw_pcie_ep_deinit);
static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
{
@ -601,14 +674,27 @@ static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
return 0;
}
int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
/**
* dw_pcie_ep_init_registers - Initialize DWC EP specific registers
* @ep: DWC EP device
*
* Initialize the registers (CSRs) specific to DWC EP. This API should be called
* only when the endpoint receives an active refclk (either from host or
* generated locally).
*/
int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct dw_pcie_ep_func *ep_func;
struct device *dev = pci->dev;
struct pci_epc *epc = ep->epc;
unsigned int offset, ptm_cap_base;
unsigned int nbars;
u8 hdr_type;
u8 func_no;
int i, ret;
void *addr;
u32 reg;
int i;
hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE) &
PCI_HEADER_TYPE_MASK;
@ -619,6 +705,58 @@ int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
return -EIO;
}
dw_pcie_version_detect(pci);
dw_pcie_iatu_detect(pci);
ret = dw_pcie_edma_detect(pci);
if (ret)
return ret;
if (!ep->ib_window_map) {
ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows,
GFP_KERNEL);
if (!ep->ib_window_map)
goto err_remove_edma;
}
if (!ep->ob_window_map) {
ep->ob_window_map = devm_bitmap_zalloc(dev, pci->num_ob_windows,
GFP_KERNEL);
if (!ep->ob_window_map)
goto err_remove_edma;
}
if (!ep->outbound_addr) {
addr = devm_kcalloc(dev, pci->num_ob_windows, sizeof(phys_addr_t),
GFP_KERNEL);
if (!addr)
goto err_remove_edma;
ep->outbound_addr = addr;
}
for (func_no = 0; func_no < epc->max_functions; func_no++) {
ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (ep_func)
continue;
ep_func = devm_kzalloc(dev, sizeof(*ep_func), GFP_KERNEL);
if (!ep_func)
goto err_remove_edma;
ep_func->func_no = func_no;
ep_func->msi_cap = dw_pcie_ep_find_capability(ep, func_no,
PCI_CAP_ID_MSI);
ep_func->msix_cap = dw_pcie_ep_find_capability(ep, func_no,
PCI_CAP_ID_MSIX);
list_add_tail(&ep_func->list, &ep->func_list);
}
if (ep->ops->init)
ep->ops->init(ep);
offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
ptm_cap_base = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM);
@ -658,22 +796,32 @@ int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init_complete);
err_remove_edma:
dw_pcie_edma_remove(pci);
return ret;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init_registers);
/**
* dw_pcie_ep_init - Initialize the endpoint device
* @ep: DWC EP device
*
* Initialize the endpoint device. Allocate resources and create the EPC
* device with the endpoint framework.
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_init(struct dw_pcie_ep *ep)
{
int ret;
void *addr;
u8 func_no;
struct resource *res;
struct pci_epc *epc;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct device *dev = pci->dev;
struct platform_device *pdev = to_platform_device(dev);
struct device_node *np = dev->of_node;
const struct pci_epc_features *epc_features;
struct dw_pcie_ep_func *ep_func;
INIT_LIST_HEAD(&ep->func_list);
@ -691,26 +839,6 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
if (ep->ops->pre_init)
ep->ops->pre_init(ep);
dw_pcie_version_detect(pci);
dw_pcie_iatu_detect(pci);
ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows,
GFP_KERNEL);
if (!ep->ib_window_map)
return -ENOMEM;
ep->ob_window_map = devm_bitmap_zalloc(dev, pci->num_ob_windows,
GFP_KERNEL);
if (!ep->ob_window_map)
return -ENOMEM;
addr = devm_kcalloc(dev, pci->num_ob_windows, sizeof(phys_addr_t),
GFP_KERNEL);
if (!addr)
return -ENOMEM;
ep->outbound_addr = addr;
epc = devm_pci_epc_create(dev, &epc_ops);
if (IS_ERR(epc)) {
dev_err(dev, "Failed to create epc device\n");
@ -724,28 +852,11 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
if (ret < 0)
epc->max_functions = 1;
for (func_no = 0; func_no < epc->max_functions; func_no++) {
ep_func = devm_kzalloc(dev, sizeof(*ep_func), GFP_KERNEL);
if (!ep_func)
return -ENOMEM;
ep_func->func_no = func_no;
ep_func->msi_cap = dw_pcie_ep_find_capability(ep, func_no,
PCI_CAP_ID_MSI);
ep_func->msix_cap = dw_pcie_ep_find_capability(ep, func_no,
PCI_CAP_ID_MSIX);
list_add_tail(&ep_func->list, &ep->func_list);
}
if (ep->ops->init)
ep->ops->init(ep);
ret = pci_epc_mem_init(epc, ep->phys_base, ep->addr_size,
ep->page_size);
if (ret < 0) {
dev_err(dev, "Failed to initialize address space\n");
goto err_ep_deinit;
return ret;
}
ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys,
@ -756,36 +867,11 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
goto err_exit_epc_mem;
}
ret = dw_pcie_edma_detect(pci);
if (ret)
goto err_free_epc_mem;
if (ep->ops->get_features) {
epc_features = ep->ops->get_features(ep);
if (epc_features->core_init_notifier)
return 0;
}
ret = dw_pcie_ep_init_complete(ep);
if (ret)
goto err_remove_edma;
return 0;
err_remove_edma:
dw_pcie_edma_remove(pci);
err_free_epc_mem:
pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
epc->mem->window.page_size);
err_exit_epc_mem:
pci_epc_mem_exit(epc);
err_ep_deinit:
if (ep->ops->deinit)
ep->ops->deinit(ep);
return ret;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
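
The refactor above splits endpoint bring-up into a refclk-independent part (dw_pcie_ep_init()) and a refclk-dependent part (dw_pcie_ep_init_registers()), with dw_pcie_ep_deinit()/dw_pcie_ep_cleanup() as the matching teardown helpers. A hedged probe-time sketch for a hypothetical glue driver whose refclk is already active at probe, assuming the usual DWC glue-driver includes; all foo_* names are invented, the dw_pcie_ep_* calls are the helpers added or renamed here:

/* Minimal sketch, not taken from any of the patched drivers. */
static int foo_pcie_add_ep(struct dw_pcie *pci)
{
        struct device *dev = pci->dev;
        int ret;

        ret = dw_pcie_ep_init(&pci->ep);        /* refclk-independent setup */
        if (ret)
                return ret;

        ret = dw_pcie_ep_init_registers(&pci->ep); /* needs an active refclk */
        if (ret) {
                dev_err(dev, "Failed to initialize DWC endpoint registers\n");
                dw_pcie_ep_deinit(&pci->ep);
                return ret;
        }

        dw_pcie_ep_init_notify(&pci->ep);       /* EPC init complete -> EPF drivers */
        return 0;
}

The same sequence appears, with driver-specific error paths, in the glue-driver updates elsewhere in this series.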


@ -145,6 +145,17 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
pci->ep.ops = &pcie_ep_ops;
ret = dw_pcie_ep_init(&pci->ep);
if (ret)
return ret;
ret = dw_pcie_ep_init_registers(&pci->ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(&pci->ep);
}
dw_pcie_ep_init_notify(&pci->ep);
break;
default:
dev_err(dev, "INVALID device type %d\n", dw_plat_pcie->mode);


@ -333,7 +333,6 @@ struct dw_pcie_rp {
struct dw_pcie_ep_ops {
void (*pre_init)(struct dw_pcie_ep *ep);
void (*init)(struct dw_pcie_ep *ep);
void (*deinit)(struct dw_pcie_ep *ep);
int (*raise_irq)(struct dw_pcie_ep *ep, u8 func_no,
unsigned int type, u16 interrupt_num);
const struct pci_epc_features* (*get_features)(struct dw_pcie_ep *ep);
@ -670,9 +669,10 @@ static inline void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus,
#ifdef CONFIG_PCIE_DW_EP
void dw_pcie_ep_linkup(struct dw_pcie_ep *ep);
int dw_pcie_ep_init(struct dw_pcie_ep *ep);
int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep);
int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep);
void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep);
void dw_pcie_ep_exit(struct dw_pcie_ep *ep);
void dw_pcie_ep_deinit(struct dw_pcie_ep *ep);
void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep);
int dw_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep, u8 func_no);
int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
u8 interrupt_num);
@ -693,7 +693,7 @@ static inline int dw_pcie_ep_init(struct dw_pcie_ep *ep)
return 0;
}
static inline int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
static inline int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
{
return 0;
}
@ -702,7 +702,11 @@ static inline void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
{
}
static inline void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
static inline void dw_pcie_ep_deinit(struct dw_pcie_ep *ep)
{
}
static inline void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep)
{
}


@ -396,6 +396,7 @@ static int keembay_pcie_probe(struct platform_device *pdev)
struct keembay_pcie *pcie;
struct dw_pcie *pci;
enum dw_pcie_device_mode mode;
int ret;
data = device_get_match_data(dev);
if (!data)
@ -430,11 +431,26 @@ static int keembay_pcie_probe(struct platform_device *pdev)
return -ENODEV;
pci->ep.ops = &keembay_pcie_ep_ops;
return dw_pcie_ep_init(&pci->ep);
ret = dw_pcie_ep_init(&pci->ep);
if (ret)
return ret;
ret = dw_pcie_ep_init_registers(&pci->ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(&pci->ep);
return ret;
}
dw_pcie_ep_init_notify(&pci->ep);
break;
default:
dev_err(dev, "Invalid device type %d\n", pcie->mode);
return -ENODEV;
}
return 0;
}
static const struct keembay_pcie_of_data keembay_pcie_rc_of_data = {


@ -463,7 +463,7 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
PARF_INT_ALL_LINK_UP | PARF_INT_ALL_EDMA;
writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_MASK);
ret = dw_pcie_ep_init_complete(&pcie_ep->pci.ep);
ret = dw_pcie_ep_init_registers(&pcie_ep->pci.ep);
if (ret) {
dev_err(dev, "Failed to complete initialization: %d\n", ret);
goto err_disable_resources;
@ -507,6 +507,7 @@ static void qcom_pcie_perst_assert(struct dw_pcie *pci)
return;
}
dw_pcie_ep_cleanup(&pci->ep);
qcom_pcie_disable_resources(pcie_ep);
pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED;
}
@ -774,7 +775,6 @@ static void qcom_pcie_ep_init_debugfs(struct qcom_pcie_ep *pcie_ep)
static const struct pci_epc_features qcom_pcie_epc_features = {
.linkup_notifier = true,
.core_init_notifier = true,
.msi_capable = true,
.msix_capable = false,
.align = SZ_4K,
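
For controllers like this one, where the refclk only becomes usable once the host deasserts PERST#, the register programming moves into the deassert path and the new dw_pcie_ep_cleanup() tears down the refclk-dependent state (eDMA and related bookkeeping) on assert. A rough sketch of the pairing, with invented foo_* names and error handling trimmed:

/* Hedged sketch of how the helpers pair across a fundamental reset. */
static int foo_ep_perst_deassert(struct dw_pcie *pci)
{
        /* Reprogram the DWC EP CSRs now that refclk is stable. */
        return dw_pcie_ep_init_registers(&pci->ep);
}

static void foo_ep_perst_assert(struct dw_pcie *pci)
{
        /* Drop eDMA and mark the EPC as uninitialized before the reset. */
        dw_pcie_ep_cleanup(&pci->ep);
}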


@ -352,11 +352,8 @@ static void rcar_gen4_pcie_ep_init(struct dw_pcie_ep *ep)
dw_pcie_ep_reset_bar(pci, bar);
}
static void rcar_gen4_pcie_ep_deinit(struct dw_pcie_ep *ep)
static void rcar_gen4_pcie_ep_deinit(struct rcar_gen4_pcie *rcar)
{
struct dw_pcie *dw = to_dw_pcie_from_ep(ep);
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
writel(0, rcar->base + PCIEDMAINTSTSEN);
rcar_gen4_pcie_common_deinit(rcar);
}
@ -410,7 +407,6 @@ static unsigned int rcar_gen4_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep,
static const struct dw_pcie_ep_ops pcie_ep_ops = {
.pre_init = rcar_gen4_pcie_ep_pre_init,
.init = rcar_gen4_pcie_ep_init,
.deinit = rcar_gen4_pcie_ep_deinit,
.raise_irq = rcar_gen4_pcie_ep_raise_irq,
.get_features = rcar_gen4_pcie_ep_get_features,
.get_dbi_offset = rcar_gen4_pcie_ep_get_dbi_offset,
@ -420,18 +416,36 @@ static const struct dw_pcie_ep_ops pcie_ep_ops = {
static int rcar_gen4_add_dw_pcie_ep(struct rcar_gen4_pcie *rcar)
{
struct dw_pcie_ep *ep = &rcar->dw.ep;
struct device *dev = rcar->dw.dev;
int ret;
if (!IS_ENABLED(CONFIG_PCIE_RCAR_GEN4_EP))
return -ENODEV;
ep->ops = &pcie_ep_ops;
return dw_pcie_ep_init(ep);
ret = dw_pcie_ep_init(ep);
if (ret) {
rcar_gen4_pcie_ep_deinit(rcar);
return ret;
}
ret = dw_pcie_ep_init_registers(ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(ep);
rcar_gen4_pcie_ep_deinit(rcar);
}
dw_pcie_ep_init_notify(ep);
return ret;
}
static void rcar_gen4_remove_dw_pcie_ep(struct rcar_gen4_pcie *rcar)
{
dw_pcie_ep_exit(&rcar->dw.ep);
dw_pcie_ep_deinit(&rcar->dw.ep);
rcar_gen4_pcie_ep_deinit(rcar);
}
/* Common */


@ -1715,6 +1715,8 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
if (ret)
dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret);
dw_pcie_ep_cleanup(&pcie->pci.ep);
reset_control_assert(pcie->core_rst);
tegra_pcie_disable_phy(pcie);
@ -1895,7 +1897,7 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
val = (upper_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK);
dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_HIGH_OFF, val);
ret = dw_pcie_ep_init_complete(ep);
ret = dw_pcie_ep_init_registers(ep);
if (ret) {
dev_err(dev, "Failed to complete initialization: %d\n", ret);
goto fail_init_complete;
@ -2004,7 +2006,6 @@ static int tegra_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
static const struct pci_epc_features tegra_pcie_epc_features = {
.linkup_notifier = true,
.core_init_notifier = true,
.msi_capable = false,
.msix_capable = false,
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M,
@ -2273,11 +2274,14 @@ static int tegra_pcie_dw_probe(struct platform_device *pdev)
ret = tegra_pcie_config_ep(pcie, pdev);
if (ret < 0)
goto fail;
else
return 0;
break;
default:
dev_err(dev, "Invalid PCIe device type %d\n",
pcie->of_data->mode);
ret = -EINVAL;
}
fail:


@ -399,7 +399,20 @@ static int uniphier_pcie_ep_probe(struct platform_device *pdev)
return ret;
priv->pci.ep.ops = &uniphier_pcie_ep_ops;
return dw_pcie_ep_init(&priv->pci.ep);
ret = dw_pcie_ep_init(&priv->pci.ep);
if (ret)
return ret;
ret = dw_pcie_ep_init_registers(&priv->pci.ep);
if (ret) {
dev_err(dev, "Failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(&priv->pci.ep);
return ret;
}
dw_pcie_ep_init_notify(&priv->pci.ep);
return 0;
}
static const struct uniphier_pcie_ep_soc_data uniphier_pro5_data = {


@ -202,7 +202,7 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
struct mt7621_pcie_port *port;
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
char name[10];
char name[11];
int err;
port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);


@ -542,6 +542,8 @@ static int rcar_pcie_ep_probe(struct platform_device *pdev)
goto err_pm_put;
}
pci_epc_init_notify(epc);
return 0;
err_pm_put:


@ -98,10 +98,8 @@ static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
/* All functions share the same vendor ID with function 0 */
if (fn == 0) {
u32 vid_regs = (hdr->vendorid & GENMASK(15, 0)) |
(hdr->subsys_vendor_id & GENMASK(31, 16)) << 16;
rockchip_pcie_write(rockchip, vid_regs,
rockchip_pcie_write(rockchip,
hdr->vendorid | hdr->subsys_vendor_id << 16,
PCIE_CORE_CONFIG_VENDOR);
}
@ -153,7 +151,7 @@ static int rockchip_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn,
ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_IO_32BITS;
} else {
bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
bool is_64bits = sz > SZ_2G;
bool is_64bits = !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64);
if (is_64bits && (bar & 1))
return -EINVAL;
@ -609,6 +607,8 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev)
rockchip_pcie_write(rockchip, PCIE_CLIENT_CONF_ENABLE,
PCIE_CLIENT_CONFIG);
pci_epc_init_notify(epc);
return 0;
err_epc_mem_exit:
pci_epc_mem_exit(epc);


@ -383,11 +383,13 @@ static void pci_doe_task_complete(struct pci_doe_task *task)
complete(task->private);
}
static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 *index, u16 *vid,
static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 capver, u8 *index, u16 *vid,
u8 *protocol)
{
u32 request_pl = FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX,
*index);
*index) |
FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_VER,
(capver >= 2) ? 2 : 0);
__le32 request_pl_le = cpu_to_le32(request_pl);
__le32 response_pl_le;
u32 response_pl;
@ -421,13 +423,17 @@ static int pci_doe_cache_protocols(struct pci_doe_mb *doe_mb)
{
u8 index = 0;
u8 xa_idx = 0;
u32 hdr = 0;
pci_read_config_dword(doe_mb->pdev, doe_mb->cap_offset, &hdr);
do {
int rc;
u16 vid;
u8 prot;
rc = pci_doe_discovery(doe_mb, &index, &vid, &prot);
rc = pci_doe_discovery(doe_mb, PCI_EXT_CAP_VER(hdr), &index,
&vid, &prot);
if (rc)
return rc;
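
With this change the Discovery request advertises version 2 whenever the DOE capability header itself reports version 2 or later, and the capability version is read once in pci_doe_cache_protocols() via PCI_EXT_CAP_VER(hdr). A small sketch of how the request dword is composed; build_doe_disc_req() is an invented helper, the FIELD_PREP masks are the ones used above:

/* Hedged sketch: index as before, plus the discovery version for a
 * version 2+ DOE capability. */
static u32 build_doe_disc_req(u8 index, u8 capver)
{
        return FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX, index) |
               FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_VER,
                          capver >= 2 ? 2 : 0);
}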


@ -690,50 +690,35 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;
struct pci_epf_bar *epf_bar;
int bar;
cancel_delayed_work(&epf_test->cmd_handler);
pci_epf_test_clean_dma_chan(epf_test);
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
epf_bar = &epf->bar[bar];
if (!epf_test->reg[bar])
continue;
if (epf_test->reg[bar]) {
pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no,
epf_bar);
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
}
pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no,
&epf->bar[bar]);
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
}
}
static int pci_epf_test_set_bar(struct pci_epf *epf)
{
int bar, add;
int ret;
struct pci_epf_bar *epf_bar;
int bar, ret;
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
const struct pci_epc_features *epc_features;
epc_features = epf_test->epc_features;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar += add) {
epf_bar = &epf->bar[bar];
/*
* pci_epc_set_bar() sets PCI_BASE_ADDRESS_MEM_TYPE_64
* if the specific implementation required a 64-bit BAR,
* even if we only requested a 32-bit BAR.
*/
add = (epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ? 2 : 1;
if (epc_features->bar[bar].type == BAR_RESERVED)
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
if (!epf_test->reg[bar])
continue;
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no,
epf_bar);
&epf->bar[bar]);
if (ret) {
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
@ -753,6 +738,7 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
const struct pci_epc_features *epc_features;
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
bool linkup_notifier = false;
bool msix_capable = false;
bool msi_capable = true;
int ret;
@ -795,6 +781,10 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
}
}
linkup_notifier = epc_features->linkup_notifier;
if (!linkup_notifier)
queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work);
return 0;
}
@ -817,14 +807,13 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct device *dev = &epf->dev;
struct pci_epf_bar *epf_bar;
size_t msix_table_size = 0;
size_t test_reg_bar_size;
size_t pba_size = 0;
bool msix_capable;
void *base;
int bar, add;
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
enum pci_barno bar;
const struct pci_epc_features *epc_features;
size_t test_reg_size;
@ -849,16 +838,14 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
}
epf_test->reg[test_reg_bar] = base;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar += add) {
epf_bar = &epf->bar[bar];
add = (epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ? 2 : 1;
for (bar = BAR_0; bar < PCI_STD_NUM_BARS; bar++) {
bar = pci_epc_get_next_free_bar(epc_features, bar);
if (bar == NO_BAR)
break;
if (bar == test_reg_bar)
continue;
if (epc_features->bar[bar].type == BAR_RESERVED)
continue;
base = pci_epf_alloc_space(epf, bar_size[bar], bar,
epc_features, PRIMARY_INTERFACE);
if (!base)
@ -870,19 +857,6 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
return 0;
}
static void pci_epf_configure_bar(struct pci_epf *epf,
const struct pci_epc_features *epc_features)
{
struct pci_epf_bar *epf_bar;
int i;
for (i = 0; i < PCI_STD_NUM_BARS; i++) {
epf_bar = &epf->bar[i];
if (epc_features->bar[i].only_64bit)
epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
}
}
static int pci_epf_test_bind(struct pci_epf *epf)
{
int ret;
@ -890,8 +864,6 @@ static int pci_epf_test_bind(struct pci_epf *epf)
const struct pci_epc_features *epc_features;
enum pci_barno test_reg_bar = BAR_0;
struct pci_epc *epc = epf->epc;
bool linkup_notifier = false;
bool core_init_notifier = false;
if (WARN_ON_ONCE(!epc))
return -EINVAL;
@ -902,12 +874,9 @@ static int pci_epf_test_bind(struct pci_epf *epf)
return -EOPNOTSUPP;
}
linkup_notifier = epc_features->linkup_notifier;
core_init_notifier = epc_features->core_init_notifier;
test_reg_bar = pci_epc_get_first_free_bar(epc_features);
if (test_reg_bar < 0)
return -EINVAL;
pci_epf_configure_bar(epf, epc_features);
epf_test->test_reg_bar = test_reg_bar;
epf_test->epc_features = epc_features;
@ -916,21 +885,12 @@ static int pci_epf_test_bind(struct pci_epf *epf)
if (ret)
return ret;
if (!core_init_notifier) {
ret = pci_epf_test_core_init(epf);
if (ret)
return ret;
}
epf_test->dma_supported = true;
ret = pci_epf_test_init_dma_chan(epf_test);
if (ret)
epf_test->dma_supported = false;
if (!linkup_notifier && !core_init_notifier)
queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work);
return 0;
}
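
Most of the simplification in this file follows from the core now owning the 64-bit BAR bookkeeping: the "+2" stepping over 64-bit BARs and the explicit BAR_RESERVED checks disappear, and the allocation loop simply asks the EPC core for the next usable BAR. A hedged sketch of that idiom, with variables mirroring the driver's:

/* Sketch of the BAR iteration used in the alloc_space path above. */
enum pci_barno bar;

for (bar = BAR_0; bar < PCI_STD_NUM_BARS; bar++) {
        bar = pci_epc_get_next_free_bar(epc_features, bar);
        if (bar == NO_BAR)
                break;
        /* skip the test register BAR, then allocate this one */
}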


@ -64,6 +64,9 @@ static int pci_secondary_epc_epf_link(struct config_item *epf_item,
return ret;
}
/* Send any pending EPC initialization complete to the EPF driver */
pci_epc_notify_pending_init(epc, epf);
return 0;
}
@ -125,6 +128,9 @@ static int pci_primary_epc_epf_link(struct config_item *epf_item,
return ret;
}
/* Send any pending EPC initialization complete to the EPF driver */
pci_epc_notify_pending_init(epc, epf);
return 0;
}
@ -230,6 +236,9 @@ static int pci_epc_epf_link(struct config_item *epc_item,
return ret;
}
/* Send any pending EPC initialization complete to the EPF driver */
pci_epc_notify_pending_init(epc, epf);
return 0;
}


@ -748,10 +748,32 @@ void pci_epc_init_notify(struct pci_epc *epc)
epf->event_ops->core_init(epf);
mutex_unlock(&epf->lock);
}
epc->init_complete = true;
mutex_unlock(&epc->list_lock);
}
EXPORT_SYMBOL_GPL(pci_epc_init_notify);
/**
* pci_epc_notify_pending_init() - Notify the pending EPC device initialization
* complete to the EPF device
* @epc: the EPC device whose core initialization is pending to be notified
* @epf: the EPF device to be notified
*
* Invoke to notify the pending EPC device initialization complete to the EPF
* device. This is used to deliver the notification if the EPC initialization
* got completed before the EPF driver bind.
*/
void pci_epc_notify_pending_init(struct pci_epc *epc, struct pci_epf *epf)
{
if (epc->init_complete) {
mutex_lock(&epf->lock);
if (epf->event_ops && epf->event_ops->core_init)
epf->event_ops->core_init(epf);
mutex_unlock(&epf->lock);
}
}
EXPORT_SYMBOL_GPL(pci_epc_notify_pending_init);
/**
* pci_epc_bme_notify() - Notify the EPF device that the EPC device has received
* the BME event from the Root complex


@ -255,6 +255,8 @@ EXPORT_SYMBOL_GPL(pci_epf_free_space);
* @type: Identifies if the allocation is for primary EPC or secondary EPC
*
* Invoke to allocate memory for the PCI EPF register space.
* Flag PCI_BASE_ADDRESS_MEM_TYPE_64 will automatically get set if the BAR
* can only be a 64-bit BAR, or if the requested size is larger than 2 GB.
*/
void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
const struct pci_epc_features *epc_features,
@ -304,9 +306,10 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
epf_bar[bar].addr = space;
epf_bar[bar].size = size;
epf_bar[bar].barno = bar;
epf_bar[bar].flags |= upper_32_bits(size) ?
PCI_BASE_ADDRESS_MEM_TYPE_64 :
PCI_BASE_ADDRESS_MEM_TYPE_32;
if (upper_32_bits(size) || epc_features->bar[bar].only_64bit)
epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
else
epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_32;
return space;
}
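
An EPF driver therefore no longer needs to request a 64-bit BAR explicitly; the allocator applies PCI_BASE_ADDRESS_MEM_TYPE_64 from either the requested size or the controller's only_64bit constraint. A hedged usage sketch; foo_epf_alloc_bar2(), BAR_2 and SZ_4G are arbitrary example choices, and epf/epc_features are assumed to come from a normal bind():

/* Illustrative only: allocate 4 GiB in BAR 2 and let the core pick the
 * BAR width. */
static int foo_epf_alloc_bar2(struct pci_epf *epf,
                              const struct pci_epc_features *epc_features)
{
        void *base;

        base = pci_epf_alloc_space(epf, SZ_4G, BAR_2, epc_features,
                                   PRIMARY_INTERFACE);
        if (!base)
                return -ENOMEM;

        /* epf->bar[BAR_2].flags now carries PCI_BASE_ADDRESS_MEM_TYPE_64,
         * either because the size does not fit in 32 bits or because
         * epc_features->bar[BAR_2].only_64bit is set. */
        return 0;
}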


@ -6,6 +6,8 @@ cpcihp:
->set_power callbacks in struct cpci_hp_controller_ops. Why were they
introduced? Can they be removed from the struct?
* Returned code from pci_hp_add_bridge() is not checked.
cpqphp:
* The driver spawns a kthread cpqhp_event_thread() which is woken by the
@ -16,6 +18,8 @@ cpqphp:
* A large portion of cpqphp_ctrl.c and cpqphp_pci.c concerns resource
management. Doesn't this duplicate functionality in the core?
* Returned code from pci_hp_add_bridge() is not checked.
ibmphp:
* Implementations of hotplug_slot_ops callbacks such as get_adapter_present()
@ -43,13 +47,7 @@ ibmphp:
* A large portion of ibmphp_res.c and ibmphp_pci.c concerns resource
management. Doesn't this duplicate functionality in the core?
sgi_hotplug:
* Several functions access the pci_slot member in struct hotplug_slot even
though pci_hotplug.h declares it private. See sn_hp_destroy() for an
example. Either the pci_slot member should no longer be declared private
or sgi_hotplug should store a pointer to it in struct slot. Probably the
former.
* Returned code from pci_hp_add_bridge() is not checked.
shpchp:


@ -213,8 +213,8 @@ EXPORT_SYMBOL(pci_disable_msix);
* * %PCI_IRQ_MSIX Allow trying MSI-X vector allocations
* * %PCI_IRQ_MSI Allow trying MSI vector allocations
*
* * %PCI_IRQ_LEGACY Allow trying legacy INTx interrupts, if
* and only if @min_vecs == 1
* * %PCI_IRQ_INTX Allow trying INTx interrupts, if and
* only if @min_vecs == 1
*
* * %PCI_IRQ_AFFINITY Auto-manage IRQs affinity by spreading
* the vectors around available CPUs
@ -279,8 +279,8 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
return nvecs;
}
/* use legacy IRQ if allowed */
if (flags & PCI_IRQ_LEGACY) {
/* use INTx IRQ if allowed */
if (flags & PCI_IRQ_INTX) {
if (min_vecs == 1 && dev->irq) {
/*
* Invoke the affinity spreading logic to ensure that
@ -365,56 +365,6 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
}
EXPORT_SYMBOL(pci_irq_get_affinity);
/**
* pci_ims_alloc_irq - Allocate an interrupt on a PCI/IMS interrupt domain
* @dev: The PCI device to operate on
* @icookie: Pointer to an IMS implementation specific cookie for this
* IMS instance (PASID, queue ID, pointer...).
* The cookie content is copied into the MSI descriptor for the
* interrupt chip callbacks or domain specific setup functions.
* @affdesc: Optional pointer to an interrupt affinity descriptor
*
* There is no index for IMS allocations as IMS is an implementation
* specific storage and does not have any direct associations between
* index, which might be a pure software construct, and device
* functionality. This association is established by the driver either via
* the index - if there is a hardware table - or in case of purely software
* managed IMS implementation the association happens via the
* irq_write_msi_msg() callback of the implementation specific interrupt
* chip, which utilizes the provided @icookie to store the MSI message in
* the appropriate place.
*
* Return: A struct msi_map
*
* On success msi_map::index contains the allocated index (>= 0) and
* msi_map::virq the allocated Linux interrupt number (> 0).
*
* On fail msi_map::index contains the error code and msi_map::virq
* is set to 0.
*/
struct msi_map pci_ims_alloc_irq(struct pci_dev *dev, union msi_instance_cookie *icookie,
const struct irq_affinity_desc *affdesc)
{
return msi_domain_alloc_irq_at(&dev->dev, MSI_SECONDARY_DOMAIN, MSI_ANY_INDEX,
affdesc, icookie);
}
EXPORT_SYMBOL_GPL(pci_ims_alloc_irq);
/**
* pci_ims_free_irq - Allocate an interrupt on a PCI/IMS interrupt domain
* which was allocated via pci_ims_alloc_irq()
* @dev: The PCI device to operate on
* @map: A struct msi_map describing the interrupt to free as
* returned from pci_ims_alloc_irq()
*/
void pci_ims_free_irq(struct pci_dev *dev, struct msi_map map)
{
if (WARN_ON_ONCE(map.index < 0 || map.virq <= 0))
return;
msi_domain_free_irqs_range(&dev->dev, MSI_SECONDARY_DOMAIN, map.index, map.index);
}
EXPORT_SYMBOL_GPL(pci_ims_free_irq);
/**
* pci_free_irq_vectors() - Free previously allocated IRQs for a device
* @dev: the PCI device to operate on

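The PCI_IRQ_LEGACY to PCI_IRQ_INTX rename earlier in this file is mechanical for callers; the flag still only permits a single INTx vector when min_vecs is 1. A hedged caller-side sketch; foo_setup_irqs() and the 1..8 vector range are invented examples:

/* Try MSI-X, then MSI, then fall back to a single INTx. */
static int foo_setup_irqs(struct pci_dev *pdev)
{
        int nvecs;

        nvecs = pci_alloc_irq_vectors(pdev, 1, 8,
                                      PCI_IRQ_MSIX | PCI_IRQ_MSI |
                                      PCI_IRQ_INTX);
        if (nvecs < 0)
                return nvecs;

        /* Each vector is then mapped with pci_irq_vector(pdev, i). */
        return 0;
}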

@ -355,65 +355,6 @@ bool pci_msi_domain_supports(struct pci_dev *pdev, unsigned int feature_mask,
return (supported & feature_mask) == feature_mask;
}
/**
* pci_create_ims_domain - Create a secondary IMS domain for a PCI device
* @pdev: The PCI device to operate on
* @template: The MSI info template which describes the domain
* @hwsize: The size of the hardware entry table or 0 if the domain
* is purely software managed
* @data: Optional pointer to domain specific data to be stored
* in msi_domain_info::data
*
* Return: True on success, false otherwise
*
* An IMS domain is expected to have the following constraints:
* - The index space is managed by the core code
*
* - There is no requirement for consecutive index ranges
*
* - The interrupt chip must provide the following callbacks:
* - irq_mask()
* - irq_unmask()
* - irq_write_msi_msg()
*
* - The interrupt chip must provide the following optional callbacks
* when the irq_mask(), irq_unmask() and irq_write_msi_msg() callbacks
* cannot operate directly on hardware, e.g. in the case that the
* interrupt message store is in queue memory:
* - irq_bus_lock()
* - irq_bus_unlock()
*
* These callbacks are invoked from preemptible task context and are
* allowed to sleep. In this case the mandatory callbacks above just
* store the information. The irq_bus_unlock() callback is supposed
* to make the change effective before returning.
*
* - Interrupt affinity setting is handled by the underlying parent
* interrupt domain and communicated to the IMS domain via
* irq_write_msi_msg().
*
* The domain is automatically destroyed when the PCI device is removed.
*/
bool pci_create_ims_domain(struct pci_dev *pdev, const struct msi_domain_template *template,
unsigned int hwsize, void *data)
{
struct irq_domain *domain = dev_get_msi_domain(&pdev->dev);
if (!domain || !irq_domain_is_msi_parent(domain))
return false;
if (template->info.bus_token != DOMAIN_BUS_PCI_DEVICE_IMS ||
!(template->info.flags & MSI_FLAG_ALLOC_SIMPLE_MSI_DESCS) ||
!(template->info.flags & MSI_FLAG_FREE_MSI_DESCS) ||
!template->chip.irq_mask || !template->chip.irq_unmask ||
!template->chip.irq_write_msi_msg || template->chip.irq_set_affinity)
return false;
return msi_create_device_irq_domain(&pdev->dev, MSI_SECONDARY_DOMAIN, template,
hwsize, data, NULL);
}
EXPORT_SYMBOL_GPL(pci_create_ims_domain);
/*
* Users of the generic MSI infrastructure expect a device to have a single ID,
* so with DMA aliases we have to pick the least-worst compromise. Devices with


@ -86,9 +86,11 @@ static int pcim_setup_msi_release(struct pci_dev *dev)
return 0;
ret = devm_add_action(&dev->dev, pcim_msi_release, dev);
if (!ret)
dev->is_msi_managed = true;
return ret;
if (ret)
return ret;
dev->is_msi_managed = true;
return 0;
}
/*
@ -99,9 +101,10 @@ static int pci_setup_msi_context(struct pci_dev *dev)
{
int ret = msi_setup_device_data(&dev->dev);
if (!ret)
ret = pcim_setup_msi_release(dev);
return ret;
if (ret)
return ret;
return pcim_setup_msi_release(dev);
}
/*


@ -238,6 +238,8 @@ static int of_pci_prop_intr_map(struct pci_dev *pdev, struct of_changeset *ocs,
return 0;
int_map = kcalloc(map_sz, sizeof(u32), GFP_KERNEL);
if (!int_map)
return -ENOMEM;
mapp = int_map;
list_for_each_entry(child, &pdev->subordinate->devices, bus_list) {
