pci-v5.17-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmHgpugUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vz59g//eWRLb0j2Vgv84ZH4x1iv6MaBboQr
 2wScnfoN+MIoh+tuM4kRak15X4nB8rJhNZZCzesMUN6PeZvrkoPo4sz/xdzIrA/N
 qY3h8NZ3nC4yCvs/tGem0zZUcSCJsxUAD0eegyMSa142xGIOQTHBSJRflR9osKSo
 bnQlKTkugV8t4kD7NlQ5M3HzN3R+mjsII5JNzCqv2XlzAZG3D8DhPyIpZnRNAOmW
 KiHOVXvQOocfUlvSs5kBlhgR1HgJkGnruCrJ1iDCWQH1Zk0iuVgoZWgVda6Cs3Xv
 gcTJLB7VoSdNZKnct9aMNYPKziHkYc7clilPeDsJs5TlSv3kKERzLj6c/5ZAxFWN
 +RsH+zYHDXJSsL/w0twPnaF5WCuVYUyrs3UiSjUvShKl1T9k9J+Jo8zwUUZx8Xb0
 qXX8jRGMHolBGwPXm2fHEb4bwTUI8emPj29qK4L96KsQ3zKXWB8eGSosxUP52Tti
 RR2WZjkvwlREZCJp6jSEJYkhzoEaVAm8CjKpKUuneX9WcUOsMBSs9k7EXbUy7JeM
 hq5Keuqa8PZo/IK2DYYAchNnBJUDMsWJeduBW12qSmx3J+9victP2qOFu+9skP0a
 85xlO6Cx8beiQh+XnY7jyROvIFuxTnGKHgkq/89Ham/whEzdJ+GRIiYB218kLLCW
 ILdas3C2iiGz99I=
 =Vgg4
 -----END PGP SIGNATURE-----

Merge tag 'pci-v5.17-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:
   - Use pci_find_vsec_capability() instead of open-coding it (Andy
     Shevchenko); see the sketch after this list
   - Convert pci_dev_present() stub from macro to static inline to avoid
     'unused variable' errors (Hans de Goede)
   - Convert sysfs slot attributes from default_attrs to default_groups
     (Greg Kroah-Hartman)
   - Use DWORD accesses for LTR, L1 SS to avoid BayHub OZ711LV2 erratum
     (Rajat Jain)
   - Remove unnecessary initialization of static variables (Longji Guo)
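
   A minimal sketch of the pci_find_vsec_capability() conversion named
   above; the VSEC ID and the driver function are made-up examples,
   not code from this series:

      #include <linux/pci.h>

      #define MY_VSEC_ID 0x23   /* hypothetical Vendor-Specific cap ID */

      static int my_driver_find_vsec(struct pci_dev *pdev)
      {
              u16 vsec;

              /* Walks the Vendor-Specific Extended Capabilities and
               * matches both the device vendor and the VSEC ID. */
              vsec = pci_find_vsec_capability(pdev, pdev->vendor,
                                              MY_VSEC_ID);
              if (!vsec)
                      return -ENODEV;

              /* vsec is the config space offset of the capability */
              return 0;
      }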

  Resource management:
   - Always write Intel I210 ROM BAR on update to work around device
     defect (Bjorn Helgaas)

  PCIe native device hotplug:
   - Fix pciehp lockdep errors on Thunderbolt undock (Hans de Goede)
   - Fix infinite loop in pciehp IRQ handler on power fault (Lukas
     Wunner)

  Power management:
   - Convert amd64-agp, sis-agp, via-agp from legacy PCI power
     management to generic power management (Vaibhav Gupta)

  IOMMU:
   - Add function 1 DMA alias quirk for Marvell 88SE9125 SATA controller
     so it can work with an IOMMU (Yifeng Li)

  Error handling:
   - Add PCI_ERROR_RESPONSE and related definitions for signaling and
     checking for transaction errors on PCI (Naveen Naidu)
   - Fabricate PCI_ERROR_RESPONSE data (~0) in config read wrappers,
     instead of in host controller drivers, when transactions fail on
     PCI (Naveen Naidu)
   - Use PCI_POSSIBLE_ERROR() to check for possible failure of config
     reads (Naveen Naidu)
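
   In a driver, the new pattern looks roughly like this; the macros
   are from this series, the surrounding function is illustrative:

      static void my_check_link(struct pci_dev *pdev, int pos)
      {
              u32 lnkcap;

              pci_read_config_dword(pdev, pos + PCI_EXP_LNKCAP, &lnkcap);

              /* ~0 may mean the read failed (device gone, bus error) */
              if (PCI_POSSIBLE_ERROR(lnkcap))
                      return;

              /* ... use lnkcap normally ... */
      }

   Host controller drivers no longer write ~0 themselves on failure;
   the config read wrappers set it via PCI_SET_ERROR_RESPONSE(), as
   the drivers/pci/access.c hunks below show.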

  Peer-to-peer DMA:
   - Add Logan Gunthorpe as P2PDMA maintainer (Bjorn Helgaas)

  ASPM:
   - Calculate link L0s and L1 exit latencies when needed instead of
     caching them (Saheed O. Bolarinwa)
   - Calculate device L0s and L1 acceptable exit latencies when needed
     instead of caching them (Saheed O. Bolarinwa)
   - Remove struct aspm_latency since it's no longer needed (Saheed O.
     Bolarinwa)
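
   The exit latencies come straight out of Link Capabilities, so
   recomputing them is cheap; a sketch of the L0s decode along the
   lines of the reworked aspm.c (encodings per the PCIe spec):

      static u32 calc_l0s_exit_latency_ns(u32 lnkcap)
      {
              u32 enc = (lnkcap & PCI_EXP_LNKCAP_L0SEL) >> 12;

              if (enc == 0x7)
                      return 5 * 1000;  /* spec only says "> 4us" */
              return 64 << enc;         /* 64ns, 128ns, ... 4us */
      }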

  APM X-Gene PCIe controller driver:
   - Fix IB window setup, which was broken by the fact that IB resources
     are now sorted in address order instead of DT dma-ranges order (Rob
     Herring)

  Apple PCIe controller driver:
   - Enable clock gating to save power (Hector Martin)
   - Fix REFCLK1 enable/poll logic (Hector Martin)

  Broadcom STB PCIe controller driver:
   - Declare bitmap correctly for use by bitmap interfaces (Christophe
     JAILLET)
   - Clean up computation of legacy and non-legacy MSI bitmasks (Florian
     Fainelli)
   - Update suspend/resume/remove error handling to warn about errors
     and not fail the operation (Jim Quinlan)
   - Correct the "pcie" and "msi" interrupt descriptions in DT binding
     (Jim Quinlan)
   - Add DT bindings for endpoint voltage regulators (Jim Quinlan)
   - Split brcm_pcie_setup() into two functions (Jim Quinlan)
   - Add mechanism for turning on voltage regulators for connected
     devices (Jim Quinlan)
   - Turn voltage regulators for connected devices on/off when bus is
     added or removed (Jim Quinlan)
   - When suspending, don't turn off voltage regulators for wakeup
     devices (Jim Quinlan)
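
   The regulator mechanism amounts to the standard regulator_bulk
   pattern keyed off supply names in the port's DT node; a hedged
   sketch (supply names and the function are illustrative, not the
   driver's exact code):

      #include <linux/regulator/consumer.h>

      static const char * const ep_supplies[] = {
              "vpcie3v3", "vpcie3v3aux",
      };

      static int enable_ep_supplies(struct device *dev,
                                    struct regulator_bulk_data *data)
      {
              int i, ret;

              for (i = 0; i < ARRAY_SIZE(ep_supplies); i++)
                      data[i].supply = ep_supplies[i];

              ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ep_supplies),
                                            data);
              if (ret)
                      return ret;

              /* enabled when the bus is added; regulator_bulk_disable()
               * runs on removal and is skipped for wakeup devices */
              return regulator_bulk_enable(ARRAY_SIZE(ep_supplies), data);
      }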

  Freescale i.MX6 PCIe controller driver:
   - Add i.MX8MM support (Richard Zhu)

  Freescale Layerscape PCIe controller driver:
   - Use DWC common ops instead of layerscape-specific link-up functions
     (Hou Zhiqiang)

  Intel VMD host bridge driver:
   - Honor platform ACPI _OSC feature negotiation for Root Ports below
     VMD (Kai-Heng Feng)
   - Add support for Raptor Lake SKUs (Karthik L Gopalakrishnan)
   - Reset everything below VMD before enumerating to work around
     failure to enumerate NVMe devices when guest OS reboots (Nirmal
     Patel)

  Bridge emulation (used by Marvell Aardvark and MVEBU):
   - Make emulated ROM BAR read-only by default (Pali Rohár)
   - Make some emulated legacy PCI bits read-only for PCIe devices (Pali
     Rohár)
   - Update reserved bits in emulated PCIe Capability (Pali Rohár)
   - Allow drivers to emulate different PCIe Capability versions (Pali
     Rohár)
   - Set emulated Capabilities List bit for all PCIe devices, since they
     must have at least a PCIe Capability (Pali Rohár)
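
   Conceptually, the read-only and reserved-bit changes reduce to
   per-register bit masks applied on write; a simplified sketch of the
   idea, not the exact pci-bridge-emul.c code:

      /* rw bits take the new value, read-only bits keep the old one,
       * and write-1-to-clear bits clear when 1 is written */
      static u32 emul_conf_write(u32 old, u32 new, u32 rw, u32 w1c)
      {
              u32 val = (old & ~rw) | (new & rw);

              return val & ~(new & w1c);
      }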

  Marvell Aardvark PCIe controller driver:
   - Add bridge emulation definitions for PCIe DEVCAP2, DEVCTL2,
     DEVSTA2, LNKCAP2, LNKCTL2, LNKSTA2, SLTCAP2, SLTCTL2, SLTSTA2 (Pali
     Rohár)
   - Add aardvark support for DEVCAP2, DEVCTL2, LNKCAP2 and LNKCTL2
     registers (Pali Rohár)
   - Clear all MSIs at setup to avoid spurious interrupts (Pali Rohár)
   - Disable bus mastering when unbinding host controller driver (Pali
     Rohár)
   - Mask all interrupts when unbinding host controller driver (Pali
     Rohár)
   - Fix memory leak in host controller unbind (Pali Rohár)
   - Assert PERST# when unbinding host controller driver (Pali Rohár)
   - Disable link training when unbinding host controller driver (Pali
     Rohár)
   - Disable common PHY when unbinding host controller driver (Pali
     Rohár)
   - Fix resource type checking to check only IORESOURCE_MEM, not
     IORESOURCE_MEM_64, which is a flavor of IORESOURCE_MEM (Pali Rohár)

  Marvell MVEBU PCIe controller driver:
   - Implement pci_remap_iospace() for ARM so mvebu can use
     devm_pci_remap_iospace() instead of the previous ARM-specific
     pci_ioremap_io() interface (Pali Rohár)
   - Use the standard pci_host_probe() instead of the device-specific
     mvebu_pci_host_probe() (Pali Rohár)
   - Replace all uses of ARM-specific pci_ioremap_io() with the ARM
     implementation of the standard pci_remap_iospace() interface and
     remove pci_ioremap_io() (Pali Rohár); see the usage sketch after
     this list
   - Skip initializing invalid Root Ports (Pali Rohár)
   - Check for errors from pci_bridge_emul_init() (Pali Rohár)
   - Ignore any bridges at non-zero function numbers (Pali Rohár)
   - Return ~0 data for invalid config read size (Pali Rohár)
   - Disallow mapping interrupts on emulated bridges (Pali Rohár)
   - Clear Root Port Memory & I/O Space Enable and Bus Master Enable at
     initialization (Pali Rohár)
   - Make type bits in Root Port I/O Base register read-only (Pali
     Rohár)
   - Disable Root Port windows when base/limit set to invalid values
     (Pali Rohár)
   - Set controller to Root Complex mode (Pali Rohár)
   - Set Root Port Class Code to PCI Bridge (Pali Rohár)
   - Update emulated Root Port secondary bus numbers to better reflect
     the actual topology (Pali Rohár)
   - Add PCI_BRIDGE_CTL_BUS_RESET support to emulated Root Ports so
     pci_reset_secondary_bus() can reset connected devices (Pali Rohár)
   - Add PCI_EXP_DEVCTL Error Reporting Enable support to emulated Root
     Ports (Pali Rohár)
   - Add PCI_EXP_RTSTA PME Status bit support to emulated Root Ports
     (Pali Rohár)
   - Add DEVCAP2, DEVCTL2 and LNKCTL2 support to emulated Root Ports on
     Armada XP and newer devices (Pali Rohár)
   - Export mvebu-mbus.c symbols to allow pci-mvebu.c to be a module
     (Pali Rohár)
   - Add support for compiling as a module (Pali Rohár)
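
   With the ARM pci_remap_iospace() in place (its implementation is in
   the diff below), mapping an I/O window follows the generic pattern;
   a usage sketch for a probe path, where EXAMPLE_IO_PHYS_BASE is a
   placeholder physical address:

      struct resource realio = {
              .flags = IORESOURCE_IO,
              .start = 0,
              .end   = SZ_64K - 1,
      };
      int ret;

      ret = devm_pci_remap_iospace(dev, &realio, EXAMPLE_IO_PHYS_BASE);
      if (ret)
              dev_err(dev, "failed to remap I/O space: %d\n", ret);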

  MediaTek PCIe controller driver:
   - Assert PERST# for 100ms to allow power and clock to stabilize
     (qizhong cheng)

  MediaTek PCIe Gen3 controller driver:
   - Disable the MediaTek DVFSRC voltage request, since without a
     DVFSRC to respond to the request the controller fails to exit
     the L1 PM Substate (Jianjun Wang)

  MediaTek MT7621 PCIe controller driver:
   - Declare mt7621_pci_ops static (Sergio Paracuellos)
   - Give pcibios_root_bridge_prepare() access to host bridge windows
     (Sergio Paracuellos); see the sketch after this list
   - Move MIPS I/O coherency unit setup from driver to
     pcibios_root_bridge_prepare() (Sergio Paracuellos)
   - Add missing MODULE_LICENSE() (Sergio Paracuellos)
   - Allow COMPILE_TEST for all arches (Sergio Paracuellos)
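
   Access to bridge->windows lets the MIPS coherency setup find its
   memory window generically; a trimmed sketch of the hook (the full
   version appears in the MIPS hunks below):

      int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
      {
              struct resource_entry *entry;

              entry = resource_list_first_type(&bridge->windows,
                                               IORESOURCE_MEM);
              if (!entry)
                      return -EINVAL;

              /* entry->res->start/end bound the first MEM window */
              return 0;
      }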

  Microsoft Hyper-V host bridge driver:
   - Add hv-internal interfaces to encapsulate arch IRQ dependencies
     (Sunil Muthuswamy)
   - Add arm64 Hyper-V vPCI support (Sunil Muthuswamy)

  Qualcomm PCIe controller driver:
   - Undo PM setup in qcom_pcie_probe() error handling path (Christophe
     JAILLET)
   - Use __be16 type to store return value from cpu_to_be16()
     (Manivannan Sadhasivam)
   - Constify static dw_pcie_ep_ops (Rikard Falkeborn)

  Renesas R-Car PCIe controller driver:
   - Fix aarch32 abort handler so it doesn't check the wrong bus clock
     before accessing the host controller (Marek Vasut)

  TI Keystone PCIe controller driver:
   - Add register offset for ti,syscon-pcie-id and ti,syscon-pcie-mode
     DT properties (Kishon Vijay Abraham I)

  MicroSemi Switchtec management driver:
   - Add Gen4 automotive device IDs (Kelvin Cao)
   - Declare state_names[] as static so it's not allocated and
     initialized for every call (Kelvin Cao)

  Host controller driver cleanups:
   - Use of_device_get_match_data(), not of_match_device(), when we only
     need the device data in altera, artpec6, cadence, designware-plat,
     dra7xx, keystone, kirin (Fan Fei)
   - Drop pointless of_device_get_match_data() cast in j721e (Bjorn
     Helgaas)
   - Drop redundant struct device * from j721e since struct cdns_pcie
     already has one (Bjorn Helgaas)
   - Rename driver structs to *_pcie in intel-gw, iproc, ls-gen4,
     mediatek-gen3, microchip, mt7621, rcar-gen2, tegra194, uniphier,
     xgene, xilinx, xilinx-cpm for consistency across drivers (Fan Fei)
   - Fix invalid address space conversions in hisi, spear13xx (Bjorn
     Helgaas)

  Miscellaneous:
   - Sort Intel Device IDs by value (Andy Shevchenko)
   - Change Capability offsets to hex to match spec (Baruch Siach)
   - Correct misspellings (Krzysztof Wilczyński)
   - Terminate statement with semicolon in pci_endpoint_test.c (Ming
     Wang)"

* tag 'pci-v5.17-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (151 commits)
  PCI: mt7621: Allow COMPILE_TEST for all arches
  PCI: mt7621: Add missing MODULE_LICENSE()
  PCI: mt7621: Move MIPS setup to pcibios_root_bridge_prepare()
  PCI: Let pcibios_root_bridge_prepare() access bridge->windows
  PCI: mt7621: Declare mt7621_pci_ops static
  PCI: brcmstb: Do not turn off WOL regulators on suspend
  PCI: brcmstb: Add control of subdevice voltage regulators
  PCI: brcmstb: Add mechanism to turn on subdev regulators
  PCI: brcmstb: Split brcm_pcie_setup() into two funcs
  dt-bindings: PCI: Add bindings for Brcmstb EP voltage regulators
  dt-bindings: PCI: Correct brcmstb interrupts, interrupt-map.
  PCI: brcmstb: Fix function return value handling
  PCI: brcmstb: Do not use __GENMASK
  PCI: brcmstb: Declare 'used' as bitmap, not unsigned long
  PCI: hv: Add arm64 Hyper-V vPCI support
  PCI: hv: Make the code arch neutral by adding arch specific interfaces
  PCI: pciehp: Use down_read/write_nested(reset_lock) to fix lockdep errors
  x86/PCI: Remove initialization of static variables to false
  PCI: Use DWORD accesses for LTR, L1 SS to avoid erratum
  misc: pci_endpoint_test: Terminate statement with semicolon
  ...
Commit d0a231f01e by Linus Torvalds, 2022-01-16 08:08:11 +02:00
94 changed files with 2617 additions and 1738 deletions


@@ -146,11 +146,15 @@ examples:
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie", "msi";
interrupt-map-mask = <0x0 0x0 0x0 0x7>;
interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH
0 0 0 2 &gicv2 GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH
0 0 0 3 &gicv2 GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH
0 0 0 4 &gicv2 GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>;
msi-parent = <&pcie0>;
msi-controller;
ranges = <0x02000000 0x0 0xf8000000 0x6 0x00000000 0x0 0x04000000>;
@@ -158,5 +162,24 @@ examples:
<0x42000000 0x1 0x80000000 0x3 0x00000000 0x0 0x80000000>;
brcm,enable-ssc;
brcm,scb-sizes = <0x0000000080000000 0x0000000080000000>;
/* PCIe bridge, Root Port */
pci@0,0 {
#address-cells = <3>;
#size-cells = <2>;
reg = <0x0 0x0 0x0 0x0 0x0>;
compatible = "pciclass,0604";
device_type = "pci";
vpcie3v3-supply = <&vreg7>;
ranges;
/* PCIe endpoint */
pci-ep@0,0 {
assigned-addresses =
<0x82010000 0x0 0xf8000000 0x6 0x00000000 0x0 0x2000>;
reg = <0x0 0x0 0x0 0x0 0x0>;
compatible = "pci14e4,1688";
};
};
};
};


@@ -127,6 +127,12 @@ properties:
enum: [1, 2, 3, 4]
default: 1
phys:
maxItems: 1
phy-names:
const: pcie-phy
reset-gpio:
description: Should specify the GPIO for controlling the PCI bus device
reset signal. It's not polarity aware and defaults to active-low reset


@@ -32,8 +32,12 @@ properties:
maxItems: 1
ti,syscon-pcie-mode:
$ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: Phandle to the SYSCON entry
- description: pcie_ctrl register offset within SYSCON
description: Phandle to the SYSCON entry required for configuring PCIe in RC or EP mode.
$ref: /schemas/types.yaml#/definitions/phandle
interrupts:
minItems: 1
@@ -65,7 +69,7 @@ examples:
<0x5506000 0x1000>;
reg-names = "app", "dbics", "addr_space", "atu";
power-domains = <&k3_pds 120 TI_SCI_PD_EXCLUSIVE>;
ti,syscon-pcie-mode = <&pcie0_mode>;
ti,syscon-pcie-mode = <&scm_conf 0x4060>;
max-link-speed = <2>;
dma-coherent;
interrupts = <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>;


@@ -36,12 +36,20 @@ properties:
maxItems: 1
ti,syscon-pcie-id:
$ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: Phandle to the SYSCON entry
- description: pcie_device_id register offset within SYSCON
description: Phandle to the SYSCON entry required for getting PCIe device/vendor ID
$ref: /schemas/types.yaml#/definitions/phandle
ti,syscon-pcie-mode:
$ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: Phandle to the SYSCON entry
- description: pcie_ctrl register offset within SYSCON
description: Phandle to the SYSCON entry required for configuring PCIe in RC or EP mode.
$ref: /schemas/types.yaml#/definitions/phandle
msi-map: true
@@ -87,8 +95,8 @@ examples:
#size-cells = <2>;
ranges = <0x81000000 0 0 0x10020000 0 0x00010000>,
<0x82000000 0 0x10030000 0x10030000 0 0x07FD0000>;
ti,syscon-pcie-id = <&pcie_devid>;
ti,syscon-pcie-mode = <&pcie0_mode>;
ti,syscon-pcie-id = <&scm_conf 0x0210>;
ti,syscon-pcie-mode = <&scm_conf 0x4060>;
bus-range = <0x0 0xff>;
max-link-speed = <2>;
dma-coherent;


@@ -14890,6 +14890,19 @@ L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/PCI/pci-error-recovery.rst
PCI PEER-TO-PEER DMA (P2PDMA)
M: Bjorn Helgaas <bhelgaas@google.com>
M: Logan Gunthorpe <logang@deltatee.com>
L: linux-pci@vger.kernel.org
S: Supported
Q: https://patchwork.kernel.org/project/linux-pci/list/
B: https://bugzilla.kernel.org
C: irc://irc.oftc.net/linux-pci
T: git git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
F: Documentation/driver-api/pci/p2pdma.rst
F: drivers/pci/p2pdma.c
F: include/linux/pci-p2pdma.h
PCI MSI DRIVER FOR ALTERA MSI IP
M: Joyce Ooi <joyce.ooi@intel.com>
L: linux-pci@vger.kernel.org


@@ -180,7 +180,10 @@ void pci_ioremap_set_mem_type(int mem_type);
static inline void pci_ioremap_set_mem_type(int mem_type) {}
#endif
extern int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr);
struct resource;
#define pci_remap_iospace pci_remap_iospace
int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);
/*
* PCI configuration space mapping function.


@@ -38,6 +38,7 @@ static int num_pcie_ports;
static int __init dove_pcie_setup(int nr, struct pci_sys_data *sys)
{
struct pcie_port *pp;
struct resource realio;
if (nr >= num_pcie_ports)
return 0;
@@ -53,10 +54,10 @@ static int __init dove_pcie_setup(int nr, struct pci_sys_data *sys)
orion_pcie_setup(pp->base);
if (pp->index == 0)
pci_ioremap_io(sys->busnr * SZ_64K, DOVE_PCIE0_IO_PHYS_BASE);
else
pci_ioremap_io(sys->busnr * SZ_64K, DOVE_PCIE1_IO_PHYS_BASE);
realio.start = sys->busnr * SZ_64K;
realio.end = realio.start + SZ_64K - 1;
pci_remap_iospace(&realio, pp->index == 0 ? DOVE_PCIE0_IO_PHYS_BASE :
DOVE_PCIE1_IO_PHYS_BASE);
/*
* IORESOURCE_MEM


@@ -185,6 +185,7 @@ iop3xx_pci_abort(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
int iop3xx_pci_setup(int nr, struct pci_sys_data *sys)
{
struct resource *res;
struct resource realio;
if (nr != 0)
return 0;
@@ -206,7 +207,9 @@ int iop3xx_pci_setup(int nr, struct pci_sys_data *sys)
pci_add_resource_offset(&sys->resources, res, sys->mem_offset);
pci_ioremap_io(0, IOP3XX_PCI_LOWER_IO_PA);
realio.start = 0;
realio.end = realio.start + SZ_64K - 1;
pci_remap_iospace(&realio, IOP3XX_PCI_LOWER_IO_PA);
return 1;
}


@@ -101,6 +101,7 @@ static void __init mv78xx0_pcie_preinit(void)
static int __init mv78xx0_pcie_setup(int nr, struct pci_sys_data *sys)
{
struct pcie_port *pp;
struct resource realio;
if (nr >= num_pcie_ports)
return 0;
@@ -115,7 +116,9 @@ static int __init mv78xx0_pcie_setup(int nr, struct pci_sys_data *sys)
orion_pcie_set_local_bus_nr(pp->base, sys->busnr);
orion_pcie_setup(pp->base);
pci_ioremap_io(nr * SZ_64K, MV78XX0_PCIE_IO_PHYS_BASE(nr));
realio.start = nr * SZ_64K;
realio.end = realio.start + SZ_64K - 1;
pci_remap_iospace(&realio, MV78XX0_PCIE_IO_PHYS_BASE(nr));
pci_add_resource_offset(&sys->resources, &pp->res, sys->mem_offset);


@@ -142,6 +142,7 @@ static struct pci_ops pcie_ops = {
static int __init pcie_setup(struct pci_sys_data *sys)
{
struct resource *res;
struct resource realio;
int dev;
/*
@@ -164,7 +165,9 @@ static int __init pcie_setup(struct pci_sys_data *sys)
pcie_ops.read = pcie_rd_conf_wa;
}
pci_ioremap_io(sys->busnr * SZ_64K, ORION5X_PCIE_IO_PHYS_BASE);
realio.start = sys->busnr * SZ_64K;
realio.end = realio.start + SZ_64K - 1;
pci_remap_iospace(&realio, ORION5X_PCIE_IO_PHYS_BASE);
/*
* Request resources.
@@ -466,6 +469,7 @@ static void __init orion5x_setup_pci_wins(void)
static int __init pci_setup(struct pci_sys_data *sys)
{
struct resource *res;
struct resource realio;
/*
* Point PCI unit MBUS decode windows to DRAM space.
@@ -482,7 +486,9 @@ static int __init pci_setup(struct pci_sys_data *sys)
*/
orion5x_setbits(PCI_CMD, PCI_CMD_HOST_REORDER);
pci_ioremap_io(sys->busnr * SZ_64K, ORION5X_PCI_IO_PHYS_BASE);
realio.start = sys->busnr * SZ_64K;
realio.end = realio.start + SZ_64K - 1;
pci_remap_iospace(&realio, ORION5X_PCI_IO_PHYS_BASE);
/*
* Request resources


@@ -459,16 +459,20 @@ void pci_ioremap_set_mem_type(int mem_type)
pci_ioremap_mem_type = mem_type;
}
int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr)
int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
{
BUG_ON(offset + SZ_64K - 1 > IO_SPACE_LIMIT);
unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;
return ioremap_page_range(PCI_IO_VIRT_BASE + offset,
PCI_IO_VIRT_BASE + offset + SZ_64K,
phys_addr,
if (!(res->flags & IORESOURCE_IO))
return -EINVAL;
if (res->end > IO_SPACE_LIMIT)
return -EINVAL;
return ioremap_page_range(vaddr, vaddr + resource_size(res), phys_addr,
__pgprot(get_mem_type(pci_ioremap_mem_type)->prot_pte));
}
EXPORT_SYMBOL_GPL(pci_ioremap_io);
EXPORT_SYMBOL(pci_remap_iospace);
void __iomem *pci_remap_cfgspace(resource_size_t res_cookie, size_t size)
{


@@ -64,6 +64,15 @@
#define HV_REGISTER_STIMER0_CONFIG 0x000B0000
#define HV_REGISTER_STIMER0_COUNT 0x000B0001
union hv_msi_entry {
u64 as_uint64[2];
struct {
u64 address;
u32 data;
u32 reserved;
} __packed;
};
#include <asm-generic/hyperv-tlfs.h>
#endif


@@ -10,6 +10,8 @@
#include <linux/slab.h>
#include <linux/sys_soc.h>
#include <linux/memblock.h>
#include <linux/pci.h>
#include <linux/bug.h>
#include <asm/bootinfo.h>
#include <asm/mipsregs.h>
@@ -22,6 +24,35 @@
static void *detect_magic __initdata = detect_memory_region;
int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
{
struct resource_entry *entry;
resource_size_t mask;
entry = resource_list_first_type(&bridge->windows, IORESOURCE_MEM);
if (!entry) {
pr_err("Cannot get memory resource\n");
return -EINVAL;
}
if (mips_cps_numiocu(0)) {
/*
* Hardware doesn't accept mask values with 1s after
* 0s (e.g. 0xffef), so warn if that happens
*/
mask = ~(entry->res->end - entry->res->start) & CM_GCR_REGn_MASK_ADDRMASK;
WARN_ON(mask && BIT(ffz(~mask)) - 1 != ~mask);
write_gcr_reg1_base(entry->res->start);
write_gcr_reg1_mask(mask | CM_GCR_REGn_MASK_CMTGT_IOCU0);
pr_info("PCI coherence region base: 0x%08llx, mask/settings: 0x%08llx\n",
(unsigned long long)read_gcr_reg1_base(),
(unsigned long long)read_gcr_reg1_mask());
}
return 0;
}
phys_addr_t mips_cpc_default_phys_base(void)
{
panic("Cannot detect cpc address");


@@ -602,6 +602,39 @@ enum hv_interrupt_type {
HV_X64_INTERRUPT_TYPE_MAXIMUM = 0x000A,
};
union hv_msi_address_register {
u32 as_uint32;
struct {
u32 reserved1:2;
u32 destination_mode:1;
u32 redirection_hint:1;
u32 reserved2:8;
u32 destination_id:8;
u32 msi_base:12;
};
} __packed;
union hv_msi_data_register {
u32 as_uint32;
struct {
u32 vector:8;
u32 delivery_mode:3;
u32 reserved1:3;
u32 level_assert:1;
u32 trigger_mode:1;
u32 reserved2:16;
};
} __packed;
/* HvRetargetDeviceInterrupt hypercall */
union hv_msi_entry {
u64 as_uint64;
struct {
union hv_msi_address_register address;
union hv_msi_data_register data;
} __packed;
};
#include <asm-generic/hyperv-tlfs.h>
#endif


@@ -169,13 +169,6 @@ bool hv_vcpu_is_preempted(int vcpu);
static inline void hv_apic_init(void) {}
#endif
static inline void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
struct msi_desc *msi_desc)
{
msi_entry->address.as_uint32 = msi_desc->msg.address_lo;
msi_entry->data.as_uint32 = msi_desc->msg.data;
}
struct irq_domain *hv_create_pci_msi_domain(void);
int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector,


@@ -20,7 +20,7 @@ struct pci_root_info {
};
static bool pci_use_crs = true;
static bool pci_ignore_seg = false;
static bool pci_ignore_seg;
static int __init set_use_crs(const struct dmi_system_id *id)
{


@@ -914,6 +914,7 @@ int mvebu_mbus_add_window_remap_by_id(unsigned int target,
return mvebu_mbus_alloc_window(s, base, size, remap, target, attribute);
}
EXPORT_SYMBOL_GPL(mvebu_mbus_add_window_remap_by_id);
int mvebu_mbus_add_window_by_id(unsigned int target, unsigned int attribute,
phys_addr_t base, size_t size)
@@ -921,6 +922,7 @@ int mvebu_mbus_add_window_by_id(unsigned int target, unsigned int attribute,
return mvebu_mbus_add_window_remap_by_id(target, attribute, base,
size, MVEBU_MBUS_NO_REMAP);
}
EXPORT_SYMBOL_GPL(mvebu_mbus_add_window_by_id);
int mvebu_mbus_del_window(phys_addr_t base, size_t size)
{
@@ -933,6 +935,7 @@ int mvebu_mbus_del_window(phys_addr_t base, size_t size)
mvebu_mbus_disable_window(&mbus_state, win);
return 0;
}
EXPORT_SYMBOL_GPL(mvebu_mbus_del_window);
void mvebu_mbus_get_pcie_mem_aperture(struct resource *res)
{
@@ -940,6 +943,7 @@ void mvebu_mbus_get_pcie_mem_aperture(struct resource *res)
return;
*res = mbus_state.pcie_mem_aperture;
}
EXPORT_SYMBOL_GPL(mvebu_mbus_get_pcie_mem_aperture);
void mvebu_mbus_get_pcie_io_aperture(struct resource *res)
{
@@ -947,6 +951,7 @@ void mvebu_mbus_get_pcie_io_aperture(struct resource *res)
return;
*res = mbus_state.pcie_io_aperture;
}
EXPORT_SYMBOL_GPL(mvebu_mbus_get_pcie_io_aperture);
int mvebu_mbus_get_dram_win_info(phys_addr_t phyaddr, u8 *target, u8 *attr)
{


@@ -588,20 +588,11 @@ static void agp_amd64_remove(struct pci_dev *pdev)
agp_bridges_found--;
}
#ifdef CONFIG_PM
#define agp_amd64_suspend NULL
static int agp_amd64_suspend(struct pci_dev *pdev, pm_message_t state)
static int __maybe_unused agp_amd64_resume(struct device *dev)
{
pci_save_state(pdev);
pci_set_power_state(pdev, pci_choose_state(pdev, state));
return 0;
}
static int agp_amd64_resume(struct pci_dev *pdev)
{
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
struct pci_dev *pdev = to_pci_dev(dev);
if (pdev->vendor == PCI_VENDOR_ID_NVIDIA)
nforce3_agp_init(pdev);
@@ -609,8 +600,6 @@ static int agp_amd64_resume(struct pci_dev *pdev)
return amd_8151_configure();
}
#endif /* CONFIG_PM */
static const struct pci_device_id agp_amd64_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
@@ -738,15 +727,14 @@ static const struct pci_device_id agp_amd64_pci_promisc_table[] = {
{ }
};
static SIMPLE_DEV_PM_OPS(agp_amd64_pm_ops, agp_amd64_suspend, agp_amd64_resume);
static struct pci_driver agp_amd64_pci_driver = {
.name = "agpgart-amd64",
.id_table = agp_amd64_pci_table,
.probe = agp_amd64_probe,
.remove = agp_amd64_remove,
#ifdef CONFIG_PM
.suspend = agp_amd64_suspend,
.resume = agp_amd64_resume,
#endif
.driver.pm = &agp_amd64_pm_ops,
};


@@ -217,26 +217,14 @@ static void agp_sis_remove(struct pci_dev *pdev)
agp_put_bridge(bridge);
}
#ifdef CONFIG_PM
#define agp_sis_suspend NULL
static int agp_sis_suspend(struct pci_dev *pdev, pm_message_t state)
static int __maybe_unused agp_sis_resume(
__attribute__((unused)) struct device *dev)
{
pci_save_state(pdev);
pci_set_power_state(pdev, pci_choose_state(pdev, state));
return 0;
}
static int agp_sis_resume(struct pci_dev *pdev)
{
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
return sis_driver.configure();
}
#endif /* CONFIG_PM */
static const struct pci_device_id agp_sis_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
@@ -419,15 +407,14 @@ static const struct pci_device_id agp_sis_pci_table[] = {
MODULE_DEVICE_TABLE(pci, agp_sis_pci_table);
static SIMPLE_DEV_PM_OPS(agp_sis_pm_ops, agp_sis_suspend, agp_sis_resume);
static struct pci_driver agp_sis_pci_driver = {
.name = "agpgart-sis",
.id_table = agp_sis_pci_table,
.probe = agp_sis_probe,
.remove = agp_sis_remove,
#ifdef CONFIG_PM
.suspend = agp_sis_suspend,
.resume = agp_sis_resume,
#endif
.driver.pm = &agp_sis_pm_ops,
};
static int __init agp_sis_init(void)


@@ -492,22 +492,11 @@ static void agp_via_remove(struct pci_dev *pdev)
agp_put_bridge(bridge);
}
#ifdef CONFIG_PM
#define agp_via_suspend NULL
static int agp_via_suspend(struct pci_dev *pdev, pm_message_t state)
static int __maybe_unused agp_via_resume(struct device *dev)
{
pci_save_state (pdev);
pci_set_power_state (pdev, PCI_D3hot);
return 0;
}
static int agp_via_resume(struct pci_dev *pdev)
{
struct agp_bridge_data *bridge = pci_get_drvdata(pdev);
pci_set_power_state (pdev, PCI_D0);
pci_restore_state(pdev);
struct agp_bridge_data *bridge = dev_get_drvdata(dev);
if (bridge->driver == &via_agp3_driver)
return via_configure_agp3();
@@ -517,8 +506,6 @@ static int agp_via_resume(struct pci_dev *pdev)
return 0;
}
#endif /* CONFIG_PM */
/* must be the same order as name table above */
static const struct pci_device_id agp_via_pci_table[] = {
#define ID(x) \
@@ -567,16 +554,14 @@ static const struct pci_device_id agp_via_pci_table[] = {
MODULE_DEVICE_TABLE(pci, agp_via_pci_table);
static SIMPLE_DEV_PM_OPS(agp_via_pm_ops, agp_via_suspend, agp_via_resume);
static struct pci_driver agp_via_pci_driver = {
.name = "agpgart-via",
.id_table = agp_via_pci_table,
.probe = agp_via_probe,
.remove = agp_via_remove,
#ifdef CONFIG_PM
.suspend = agp_via_suspend,
.resume = agp_via_resume,
#endif
.driver.pm = &agp_via_pm_ops,
};


@@ -865,7 +865,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
goto err_release_irq;
}
misc_device->parent = &pdev->dev;
misc_device->fops = &pci_endpoint_test_fops,
misc_device->fops = &pci_endpoint_test_fops;
err = misc_register(misc_device);
if (err) {


@@ -184,7 +184,7 @@ config PCI_LABEL
config PCI_HYPERV
tristate "Hyper-V PCI Frontend"
depends on X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
select PCI_HYPERV_INTERFACE
help
The PCI device frontend driver allows the kernel to import arbitrary


@@ -42,7 +42,10 @@ int noinline pci_bus_read_config_##size \
if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \
pci_lock_config(flags); \
res = bus->ops->read(bus, devfn, pos, len, &data); \
*value = (type)data; \
if (res) \
PCI_SET_ERROR_RESPONSE(value); \
else \
*value = (type)data; \
pci_unlock_config(flags); \
return res; \
}
@@ -80,10 +83,8 @@ int pci_generic_config_read(struct pci_bus *bus, unsigned int devfn,
void __iomem *addr;
addr = bus->ops->map_bus(bus, devfn, where);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
if (size == 1)
*val = readb(addr);
@@ -122,10 +123,8 @@ int pci_generic_config_read32(struct pci_bus *bus, unsigned int devfn,
void __iomem *addr;
addr = bus->ops->map_bus(bus, devfn, where & ~0x3);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
*val = readl(addr);
@@ -228,7 +227,10 @@ int pci_user_read_config_##size \
ret = dev->bus->ops->read(dev->bus, dev->devfn, \
pos, sizeof(type), &data); \
raw_spin_unlock_irq(&pci_lock); \
*val = (type)data; \
if (ret) \
PCI_SET_ERROR_RESPONSE(val); \
else \
*val = (type)data; \
return pcibios_err_to_errno(ret); \
} \
EXPORT_SYMBOL_GPL(pci_user_read_config_##size);
@@ -410,9 +412,9 @@ int pcie_capability_read_word(struct pci_dev *dev, int pos, u16 *val)
if (pcie_capability_reg_implemented(dev, pos)) {
ret = pci_read_config_word(dev, pci_pcie_cap(dev) + pos, val);
/*
* Reset *val to 0 if pci_read_config_word() fails, it may
* have been written as 0xFFFF if hardware error happens
* during pci_read_config_word().
* Reset *val to 0 if pci_read_config_word() fails; it may
* have been written as 0xFFFF (PCI_ERROR_RESPONSE) if the
* config read failed on PCI.
*/
if (ret)
*val = 0;
@@ -445,9 +447,9 @@ int pcie_capability_read_dword(struct pci_dev *dev, int pos, u32 *val)
if (pcie_capability_reg_implemented(dev, pos)) {
ret = pci_read_config_dword(dev, pci_pcie_cap(dev) + pos, val);
/*
* Reset *val to 0 if pci_read_config_dword() fails, it may
* have been written as 0xFFFFFFFF if hardware error happens
* during pci_read_config_dword().
* Reset *val to 0 if pci_read_config_dword() fails; it may
* have been written as 0xFFFFFFFF (PCI_ERROR_RESPONSE) if
* the config read failed on PCI.
*/
if (ret)
*val = 0;
@@ -523,7 +525,7 @@ EXPORT_SYMBOL(pcie_capability_clear_and_set_dword);
int pci_read_config_byte(const struct pci_dev *dev, int where, u8 *val)
{
if (pci_dev_is_disconnected(dev)) {
*val = ~0;
PCI_SET_ERROR_RESPONSE(val);
return PCIBIOS_DEVICE_NOT_FOUND;
}
return pci_bus_read_config_byte(dev->bus, dev->devfn, where, val);
@@ -533,7 +535,7 @@ EXPORT_SYMBOL(pci_read_config_byte);
int pci_read_config_word(const struct pci_dev *dev, int where, u16 *val)
{
if (pci_dev_is_disconnected(dev)) {
*val = ~0;
PCI_SET_ERROR_RESPONSE(val);
return PCIBIOS_DEVICE_NOT_FOUND;
}
return pci_bus_read_config_word(dev->bus, dev->devfn, where, val);
@@ -544,7 +546,7 @@ int pci_read_config_dword(const struct pci_dev *dev, int where,
u32 *val)
{
if (pci_dev_is_disconnected(dev)) {
*val = ~0;
PCI_SET_ERROR_RESPONSE(val);
return PCIBIOS_DEVICE_NOT_FOUND;
}
return pci_bus_read_config_dword(dev->bus, dev->devfn, where, val);


@@ -4,7 +4,7 @@ menu "PCI controller drivers"
depends on PCI
config PCI_MVEBU
bool "Marvell EBU PCIe controller"
tristate "Marvell EBU PCIe controller"
depends on ARCH_MVEBU || ARCH_DOVE || COMPILE_TEST
depends on MVEBU_MBUS
depends on ARM
@@ -281,7 +281,7 @@ config PCIE_BRCMSTB
config PCI_HYPERV_INTERFACE
tristate "Hyper-V PCI Interface"
depends on X86 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && X86_64
depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN
help
The Hyper-V PCI Interface is a helper driver that allows other drivers to
have a common interface with the Hyper-V PCI frontend driver.
@@ -332,8 +332,8 @@ config PCIE_APPLE
If unsure, say Y if you have an Apple Silicon system.
config PCIE_MT7621
bool "MediaTek MT7621 PCIe Controller"
depends on SOC_MT7621 || (MIPS && COMPILE_TEST)
tristate "MediaTek MT7621 PCIe Controller"
depends on SOC_MT7621 || COMPILE_TEST
select PHY_MT7621_PCI
default SOC_MT7621
help


@@ -51,11 +51,10 @@ enum link_status {
#define MAX_LANES 2
struct j721e_pcie {
struct device *dev;
struct cdns_pcie *cdns_pcie;
struct clk *refclk;
u32 mode;
u32 num_lanes;
struct cdns_pcie *cdns_pcie;
void __iomem *user_cfg_base;
void __iomem *intd_cfg_base;
u32 linkdown_irq_regfield;
@@ -99,7 +98,7 @@ static inline void j721e_pcie_intd_writel(struct j721e_pcie *pcie, u32 offset,
static irqreturn_t j721e_pcie_link_irq_handler(int irq, void *priv)
{
struct j721e_pcie *pcie = priv;
struct device *dev = pcie->dev;
struct device *dev = pcie->cdns_pcie->dev;
u32 reg;
reg = j721e_pcie_intd_readl(pcie, STATUS_REG_SYS_2);
@@ -165,7 +164,7 @@ static const struct cdns_pcie_ops j721e_pcie_ops = {
static int j721e_pcie_set_mode(struct j721e_pcie *pcie, struct regmap *syscon,
unsigned int offset)
{
struct device *dev = pcie->dev;
struct device *dev = pcie->cdns_pcie->dev;
u32 mask = J721E_MODE_RC;
u32 mode = pcie->mode;
u32 val = 0;
@@ -184,7 +183,7 @@ static int j721e_pcie_set_link_speed(struct j721e_pcie *pcie,
static int j721e_pcie_set_link_speed(struct j721e_pcie *pcie,
struct regmap *syscon, unsigned int offset)
{
struct device *dev = pcie->dev;
struct device *dev = pcie->cdns_pcie->dev;
struct device_node *np = dev->of_node;
int link_speed;
u32 val = 0;
@@ -205,7 +204,7 @@ static int j721e_pcie_set_lane_count(struct j721e_pcie *pcie,
static int j721e_pcie_set_lane_count(struct j721e_pcie *pcie,
struct regmap *syscon, unsigned int offset)
{
struct device *dev = pcie->dev;
struct device *dev = pcie->cdns_pcie->dev;
u32 lanes = pcie->num_lanes;
u32 val = 0;
int ret;
@@ -220,7 +219,7 @@ static int j721e_pcie_ctrl_init(struct j721e_pcie *pcie)
static int j721e_pcie_ctrl_init(struct j721e_pcie *pcie)
{
struct device *dev = pcie->dev;
struct device *dev = pcie->cdns_pcie->dev;
struct device_node *node = dev->of_node;
struct of_phandle_args args;
unsigned int offset = 0;
@@ -354,7 +353,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct device_node *node = dev->of_node;
struct pci_host_bridge *bridge;
struct j721e_pcie_data *data;
const struct j721e_pcie_data *data;
struct cdns_pcie *cdns_pcie;
struct j721e_pcie *pcie;
struct cdns_pcie_rc *rc;
@@ -367,7 +366,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
int ret;
int irq;
data = (struct j721e_pcie_data *)of_device_get_match_data(dev);
data = of_device_get_match_data(dev);
if (!data)
return -EINVAL;
@@ -377,7 +376,6 @@ static int j721e_pcie_probe(struct platform_device *pdev)
if (!pcie)
return -ENOMEM;
pcie->dev = dev;
pcie->mode = mode;
pcie->linkdown_irq_regfield = data->linkdown_irq_regfield;


@@ -45,7 +45,6 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
{
const struct cdns_plat_pcie_of_data *data;
struct cdns_plat_pcie *cdns_plat_pcie;
const struct of_device_id *match;
struct device *dev = &pdev->dev;
struct pci_host_bridge *bridge;
struct cdns_pcie_ep *ep;
@@ -54,11 +53,10 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
bool is_rc;
int ret;
match = of_match_device(cdns_plat_pcie_of_match, dev);
if (!match)
data = of_device_get_match_data(dev);
if (!data)
return -EINVAL;
data = (struct cdns_plat_pcie_of_data *)match->data;
is_rc = data->is_rc;
pr_debug(" Started %s with is_rc: %d\n", __func__, is_rc);


@@ -310,7 +310,7 @@ struct cdns_pcie {
* single function at a time
* @vendor_id: PCI vendor ID
* @device_id: PCI device ID
* @avail_ib_bar: Satus of RP_BAR0, RP_BAR1 and RP_NO_BAR if it's free or
* @avail_ib_bar: Status of RP_BAR0, RP_BAR1 and RP_NO_BAR if it's free or
* available
* @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
* @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk


@@ -697,16 +697,14 @@ static int dra7xx_pcie_probe(struct platform_device *pdev)
struct device_node *np = dev->of_node;
char name[10];
struct gpio_desc *reset;
const struct of_device_id *match;
const struct dra7xx_pcie_of_data *data;
enum dw_pcie_device_mode mode;
u32 b1co_mode_sel_mask;
match = of_match_device(of_match_ptr(of_dra7xx_pcie_match), dev);
if (!match)
data = of_device_get_match_data(dev);
if (!data)
return -EINVAL;
data = (struct dra7xx_pcie_of_data *)match->data;
mode = (enum dw_pcie_device_mode)data->mode;
b1co_mode_sel_mask = data->b1co_mode_sel_mask;


@@ -217,10 +217,8 @@ static int exynos_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn,
{
struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);
if (PCI_SLOT(devfn)) {
*val = ~0;
if (PCI_SLOT(devfn))
return PCIBIOS_DEVICE_NOT_FOUND;
}
*val = dw_pcie_read_dbi(pci, where, size);
return PCIBIOS_SUCCESSFUL;


@@ -29,6 +29,7 @@
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/reset.h>
#include <linux/phy/phy.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
@@ -49,6 +50,7 @@ enum imx6_pcie_variants {
IMX6QP,
IMX7D,
IMX8MQ,
IMX8MM,
};
#define IMX6_PCIE_FLAG_IMX6_PHY BIT(0)
@@ -88,6 +90,7 @@ struct imx6_pcie {
struct device *pd_pcie;
/* power domain for pcie phy */
struct device *pd_pcie_phy;
struct phy *phy;
const struct imx6_pcie_drvdata *drvdata;
};
@@ -372,6 +375,8 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
case IMX7D:
case IMX8MQ:
reset_control_assert(imx6_pcie->pciephy_reset);
fallthrough;
case IMX8MM:
reset_control_assert(imx6_pcie->apps_reset);
break;
case IMX6SX:
@@ -407,7 +412,8 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie)
{
WARN_ON(imx6_pcie->drvdata->variant != IMX8MQ);
WARN_ON(imx6_pcie->drvdata->variant != IMX8MQ &&
imx6_pcie->drvdata->variant != IMX8MM);
return imx6_pcie->controller_id == 1 ? IOMUXC_GPR16 : IOMUXC_GPR14;
}
@@ -446,6 +452,11 @@ static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
break;
case IMX7D:
break;
case IMX8MM:
ret = clk_prepare_enable(imx6_pcie->pcie_aux);
if (ret)
dev_err(dev, "unable to enable pcie_aux clock\n");
break;
case IMX8MQ:
ret = clk_prepare_enable(imx6_pcie->pcie_aux);
if (ret) {
@@ -522,6 +533,14 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
goto err_ref_clk;
}
switch (imx6_pcie->drvdata->variant) {
case IMX8MM:
if (phy_power_on(imx6_pcie->phy))
dev_err(dev, "unable to power on PHY\n");
break;
default:
break;
}
/* allow the clocks to stabilize */
usleep_range(200, 500);
@@ -538,6 +557,10 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
case IMX8MQ:
reset_control_deassert(imx6_pcie->pciephy_reset);
break;
case IMX8MM:
if (phy_init(imx6_pcie->phy))
dev_err(dev, "waiting for phy ready timeout!\n");
break;
case IMX7D:
reset_control_deassert(imx6_pcie->pciephy_reset);
@@ -614,6 +637,12 @@ static void imx6_pcie_configure_type(struct imx6_pcie *imx6_pcie)
static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie)
{
switch (imx6_pcie->drvdata->variant) {
case IMX8MM:
/*
* The PHY initialization is done in the standalone PHY
* driver, so there is nothing more to do here.
*/
break;
case IMX8MQ:
/*
* TODO: Currently this code assumes external
@@ -753,6 +782,7 @@ static void imx6_pcie_ltssm_enable(struct device *dev)
break;
case IMX7D:
case IMX8MQ:
case IMX8MM:
reset_control_deassert(imx6_pcie->apps_reset);
break;
}
@@ -871,6 +901,7 @@ static void imx6_pcie_ltssm_disable(struct device *dev)
IMX6Q_GPR12_PCIE_CTL_2, 0);
break;
case IMX7D:
case IMX8MM:
reset_control_assert(imx6_pcie->apps_reset);
break;
default:
@@ -930,6 +961,7 @@ static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie)
IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
break;
case IMX8MQ:
case IMX8MM:
clk_disable_unprepare(imx6_pcie->pcie_aux);
break;
default:
@@ -945,8 +977,16 @@ static int imx6_pcie_suspend_noirq(struct device *dev)
return 0;
imx6_pcie_pm_turnoff(imx6_pcie);
imx6_pcie_clk_disable(imx6_pcie);
imx6_pcie_ltssm_disable(dev);
imx6_pcie_clk_disable(imx6_pcie);
switch (imx6_pcie->drvdata->variant) {
case IMX8MM:
if (phy_power_off(imx6_pcie->phy))
dev_err(dev, "unable to power off PHY\n");
break;
default:
break;
}
return 0;
}
@@ -1043,11 +1083,6 @@ static int imx6_pcie_probe(struct platform_device *pdev)
}
/* Fetch clocks */
imx6_pcie->pcie_phy = devm_clk_get(dev, "pcie_phy");
if (IS_ERR(imx6_pcie->pcie_phy))
return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_phy),
"pcie_phy clock source missing or invalid\n");
imx6_pcie->pcie_bus = devm_clk_get(dev, "pcie_bus");
if (IS_ERR(imx6_pcie->pcie_bus))
return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_bus),
@@ -1089,10 +1124,35 @@ static int imx6_pcie_probe(struct platform_device *pdev)
dev_err(dev, "Failed to get PCIE APPS reset control\n");
return PTR_ERR(imx6_pcie->apps_reset);
}
break;
case IMX8MM:
imx6_pcie->pcie_aux = devm_clk_get(dev, "pcie_aux");
if (IS_ERR(imx6_pcie->pcie_aux))
return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_aux),
"pcie_aux clock source missing or invalid\n");
imx6_pcie->apps_reset = devm_reset_control_get_exclusive(dev,
"apps");
if (IS_ERR(imx6_pcie->apps_reset))
return dev_err_probe(dev, PTR_ERR(imx6_pcie->apps_reset),
"failed to get pcie apps reset control\n");
imx6_pcie->phy = devm_phy_get(dev, "pcie-phy");
if (IS_ERR(imx6_pcie->phy))
return dev_err_probe(dev, PTR_ERR(imx6_pcie->phy),
"failed to get pcie phy\n");
break;
default:
break;
}
/* Don't fetch the pcie_phy clock if an abstract PHY driver is used */
if (imx6_pcie->phy == NULL) {
imx6_pcie->pcie_phy = devm_clk_get(dev, "pcie_phy");
if (IS_ERR(imx6_pcie->pcie_phy))
return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_phy),
"pcie_phy clock source missing or invalid\n");
}
/* Grab turnoff reset */
imx6_pcie->turnoff_reset = devm_reset_control_get_optional_exclusive(dev, "turnoff");
@@ -1202,6 +1262,10 @@ static const struct imx6_pcie_drvdata drvdata[] = {
[IMX8MQ] = {
.variant = IMX8MQ,
},
[IMX8MM] = {
.variant = IMX8MM,
.flags = IMX6_PCIE_FLAG_SUPPORTS_SUSPEND,
},
};
static const struct of_device_id imx6_pcie_of_match[] = {
@@ -1209,7 +1273,8 @@ static const struct of_device_id imx6_pcie_of_match[] = {
{ .compatible = "fsl,imx6sx-pcie", .data = &drvdata[IMX6SX], },
{ .compatible = "fsl,imx6qp-pcie", .data = &drvdata[IMX6QP], },
{ .compatible = "fsl,imx7d-pcie", .data = &drvdata[IMX7D], },
{ .compatible = "fsl,imx8mq-pcie", .data = &drvdata[IMX8MQ], } ,
{ .compatible = "fsl,imx8mq-pcie", .data = &drvdata[IMX8MQ], },
{ .compatible = "fsl,imx8mm-pcie", .data = &drvdata[IMX8MM], },
{},
};


@@ -747,9 +747,9 @@ err:
#ifdef CONFIG_ARM
/*
* When a PCI device does not exist during config cycles, keystone host gets a
* bus error instead of returning 0xffffffff. This handler always returns 0
* for this kind of faults.
* When a PCI device does not exist during config cycles, keystone host
* gets a bus error instead of returning 0xffffffff (PCI_ERROR_RESPONSE).
* This handler always returns 0 for this kind of fault.
*/
static int ks_pcie_fault(unsigned long addr, unsigned int fsr,
struct pt_regs *regs)
@@ -775,12 +775,19 @@ static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie)
struct dw_pcie *pci = ks_pcie->pci;
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
struct of_phandle_args args;
unsigned int offset = 0;
devctrl_regs = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-id");
if (IS_ERR(devctrl_regs))
return PTR_ERR(devctrl_regs);
ret = regmap_read(devctrl_regs, 0, &id);
/* Do not error out to maintain old DT compatibility */
ret = of_parse_phandle_with_fixed_args(np, "ti,syscon-pcie-id", 1, 0, &args);
if (!ret)
offset = args.args[0];
ret = regmap_read(devctrl_regs, offset, &id);
if (ret)
return ret;
@@ -989,6 +996,8 @@ err_phy:
static int ks_pcie_set_mode(struct device *dev)
{
struct device_node *np = dev->of_node;
struct of_phandle_args args;
unsigned int offset = 0;
struct regmap *syscon;
u32 val;
u32 mask;
@@ -998,10 +1007,15 @@ static int ks_pcie_set_mode(struct device *dev)
if (IS_ERR(syscon))
return 0;
/* Do not error out to maintain old DT compatibility */
ret = of_parse_phandle_with_fixed_args(np, "ti,syscon-pcie-mode", 1, 0, &args);
if (!ret)
offset = args.args[0];
mask = KS_PCIE_DEV_TYPE_MASK | KS_PCIE_SYSCLOCKOUTEN;
val = KS_PCIE_DEV_TYPE(RC) | KS_PCIE_SYSCLOCKOUTEN;
ret = regmap_update_bits(syscon, 0, mask, val);
ret = regmap_update_bits(syscon, offset, mask, val);
if (ret) {
dev_err(dev, "failed to set pcie mode\n");
return ret;
@@ -1014,6 +1028,8 @@ static int ks_pcie_am654_set_mode(struct device *dev,
enum dw_pcie_device_mode mode)
{
struct device_node *np = dev->of_node;
struct of_phandle_args args;
unsigned int offset = 0;
struct regmap *syscon;
u32 val;
u32 mask;
@@ -1023,6 +1039,11 @@ static int ks_pcie_am654_set_mode(struct device *dev,
if (IS_ERR(syscon))
return 0;
/* Do not error out to maintain old DT compatibility */
ret = of_parse_phandle_with_fixed_args(np, "ti,syscon-pcie-mode", 1, 0, &args);
if (!ret)
offset = args.args[0];
mask = AM654_PCIE_DEV_TYPE_MASK;
switch (mode) {
@@ -1037,7 +1058,7 @@ static int ks_pcie_am654_set_mode(struct device *dev,
return -EINVAL;
}
ret = regmap_update_bits(syscon, 0, mask, val);
ret = regmap_update_bits(syscon, offset, mask, val);
if (ret) {
dev_err(dev, "failed to set pcie mode\n");
return ret;
@@ -1087,7 +1108,6 @@ static int __init ks_pcie_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
const struct ks_pcie_of_data *data;
const struct of_device_id *match;
enum dw_pcie_device_mode mode;
struct dw_pcie *pci;
struct keystone_pcie *ks_pcie;
@@ -1104,8 +1124,7 @@ static int __init ks_pcie_probe(struct platform_device *pdev)
int irq;
int i;
match = of_match_device(of_match_ptr(ks_pcie_of_match), dev);
data = (struct ks_pcie_of_data *)match->data;
data = of_device_get_match_data(dev);
if (!data)
return -EINVAL;


@@ -3,6 +3,7 @@
* PCIe host controller driver for Freescale Layerscape SoCs
*
* Copyright (C) 2014 Freescale Semiconductor.
* Copyright 2021 NXP
*
* Author: Minghuan Lian <Minghuan.Lian@freescale.com>
*/
@@ -22,12 +23,6 @@
#include "pcie-designware.h"
/* PEX1/2 Misc Ports Status Register */
#define SCFG_PEXMSCPORTSR(pex_idx) (0x94 + (pex_idx) * 4)
#define LTSSM_STATE_SHIFT 20
#define LTSSM_STATE_MASK 0x3f
#define LTSSM_PCIE_L0 0x11 /* L0 state */
/* PEX Internal Configuration Registers */
#define PCIE_STRFMR1 0x71c /* Symbol Timer & Filter Mask Register1 */
#define PCIE_ABSERR 0x8d0 /* Bridge Slave Error Response Register */
@@ -35,20 +30,8 @@
#define PCIE_IATU_NUM 6
struct ls_pcie_drvdata {
u32 lut_offset;
u32 ltssm_shift;
u32 lut_dbg;
const struct dw_pcie_host_ops *ops;
const struct dw_pcie_ops *dw_pcie_ops;
};
struct ls_pcie {
struct dw_pcie *pci;
void __iomem *lut;
struct regmap *scfg;
const struct ls_pcie_drvdata *drvdata;
int index;
};
#define to_ls_pcie(x) dev_get_drvdata((x)->dev)
@@ -83,38 +66,6 @@ static void ls_pcie_drop_msg_tlp(struct ls_pcie *pcie)
iowrite32(val, pci->dbi_base + PCIE_STRFMR1);
}
static int ls1021_pcie_link_up(struct dw_pcie *pci)
{
u32 state;
struct ls_pcie *pcie = to_ls_pcie(pci);
if (!pcie->scfg)
return 0;
regmap_read(pcie->scfg, SCFG_PEXMSCPORTSR(pcie->index), &state);
state = (state >> LTSSM_STATE_SHIFT) & LTSSM_STATE_MASK;
if (state < LTSSM_PCIE_L0)
return 0;
return 1;
}
static int ls_pcie_link_up(struct dw_pcie *pci)
{
struct ls_pcie *pcie = to_ls_pcie(pci);
u32 state;
state = (ioread32(pcie->lut + pcie->drvdata->lut_dbg) >>
pcie->drvdata->ltssm_shift) &
LTSSM_STATE_MASK;
if (state < LTSSM_PCIE_L0)
return 0;
return 1;
}
/* Forward error response of outbound non-posted requests */
static void ls_pcie_fix_error_response(struct ls_pcie *pcie)
{
@@ -139,96 +90,20 @@ static int ls_pcie_host_init(struct pcie_port *pp)
return 0;
}
static int ls1021_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct ls_pcie *pcie = to_ls_pcie(pci);
struct device *dev = pci->dev;
u32 index[2];
int ret;
pcie->scfg = syscon_regmap_lookup_by_phandle(dev->of_node,
"fsl,pcie-scfg");
if (IS_ERR(pcie->scfg)) {
ret = PTR_ERR(pcie->scfg);
dev_err(dev, "No syscfg phandle specified\n");
pcie->scfg = NULL;
return ret;
}
if (of_property_read_u32_array(dev->of_node,
"fsl,pcie-scfg", index, 2)) {
pcie->scfg = NULL;
return -EINVAL;
}
pcie->index = index[1];
return ls_pcie_host_init(pp);
}
static const struct dw_pcie_host_ops ls1021_pcie_host_ops = {
.host_init = ls1021_pcie_host_init,
};
static const struct dw_pcie_host_ops ls_pcie_host_ops = {
.host_init = ls_pcie_host_init,
};
static const struct dw_pcie_ops dw_ls1021_pcie_ops = {
.link_up = ls1021_pcie_link_up,
};
static const struct dw_pcie_ops dw_ls_pcie_ops = {
.link_up = ls_pcie_link_up,
};
static const struct ls_pcie_drvdata ls1021_drvdata = {
.ops = &ls1021_pcie_host_ops,
.dw_pcie_ops = &dw_ls1021_pcie_ops,
};
static const struct ls_pcie_drvdata ls1043_drvdata = {
.lut_offset = 0x10000,
.ltssm_shift = 24,
.lut_dbg = 0x7fc,
.ops = &ls_pcie_host_ops,
.dw_pcie_ops = &dw_ls_pcie_ops,
};
static const struct ls_pcie_drvdata ls1046_drvdata = {
.lut_offset = 0x80000,
.ltssm_shift = 24,
.lut_dbg = 0x407fc,
.ops = &ls_pcie_host_ops,
.dw_pcie_ops = &dw_ls_pcie_ops,
};
static const struct ls_pcie_drvdata ls2080_drvdata = {
.lut_offset = 0x80000,
.ltssm_shift = 0,
.lut_dbg = 0x7fc,
.ops = &ls_pcie_host_ops,
.dw_pcie_ops = &dw_ls_pcie_ops,
};
static const struct ls_pcie_drvdata ls2088_drvdata = {
.lut_offset = 0x80000,
.ltssm_shift = 0,
.lut_dbg = 0x407fc,
.ops = &ls_pcie_host_ops,
.dw_pcie_ops = &dw_ls_pcie_ops,
};
static const struct of_device_id ls_pcie_of_match[] = {
{ .compatible = "fsl,ls1012a-pcie", .data = &ls1046_drvdata },
{ .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata },
{ .compatible = "fsl,ls1028a-pcie", .data = &ls2088_drvdata },
{ .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata },
{ .compatible = "fsl,ls1046a-pcie", .data = &ls1046_drvdata },
{ .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata },
{ .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata },
{ .compatible = "fsl,ls2088a-pcie", .data = &ls2088_drvdata },
{ .compatible = "fsl,ls1088a-pcie", .data = &ls2088_drvdata },
{ .compatible = "fsl,ls1012a-pcie", },
{ .compatible = "fsl,ls1021a-pcie", },
{ .compatible = "fsl,ls1028a-pcie", },
{ .compatible = "fsl,ls1043a-pcie", },
{ .compatible = "fsl,ls1046a-pcie", },
{ .compatible = "fsl,ls2080a-pcie", },
{ .compatible = "fsl,ls2085a-pcie", },
{ .compatible = "fsl,ls2088a-pcie", },
{ .compatible = "fsl,ls1088a-pcie", },
{ },
};
@@ -247,11 +122,8 @@ static int ls_pcie_probe(struct platform_device *pdev)
if (!pci)
return -ENOMEM;
pcie->drvdata = of_device_get_match_data(dev);
pci->dev = dev;
pci->ops = pcie->drvdata->dw_pcie_ops;
pci->pp.ops = pcie->drvdata->ops;
pci->pp.ops = &ls_pcie_host_ops;
pcie->pci = pci;
@@ -260,8 +132,6 @@ static int ls_pcie_probe(struct platform_device *pdev)
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
pcie->lut = pci->dbi_base + pcie->drvdata->lut_offset;
if (!ls_pcie_is_bridge(pcie))
return -ENODEV;


@@ -380,17 +380,15 @@ static int artpec6_pcie_probe(struct platform_device *pdev)
struct dw_pcie *pci;
struct artpec6_pcie *artpec6_pcie;
int ret;
const struct of_device_id *match;
const struct artpec_pcie_of_data *data;
enum artpec_pcie_variants variant;
enum dw_pcie_device_mode mode;
u32 val;
match = of_match_device(artpec6_pcie_of_match, dev);
if (!match)
data = of_device_get_match_data(dev);
if (!data)
return -EINVAL;
data = (struct artpec_pcie_of_data *)match->data;
variant = (enum artpec_pcie_variants)data->variant;
mode = (enum dw_pcie_device_mode)data->mode;


@@ -122,15 +122,13 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
struct dw_plat_pcie *dw_plat_pcie;
struct dw_pcie *pci;
int ret;
const struct of_device_id *match;
const struct dw_plat_pcie_of_data *data;
enum dw_pcie_device_mode mode;
match = of_match_device(dw_plat_pcie_of_match, dev);
if (!match)
data = of_device_get_match_data(dev);
if (!data)
return -EINVAL;
data = (struct dw_plat_pcie_of_data *)match->data;
mode = (enum dw_pcie_device_mode)data->mode;
dw_plat_pcie = devm_kzalloc(dev, sizeof(*dw_plat_pcie), GFP_KERNEL);


@@ -672,10 +672,11 @@ void dw_pcie_iatu_detect(struct dw_pcie *pci)
if (!pci->atu_base) {
struct resource *res =
platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu");
if (res)
if (res) {
pci->atu_size = resource_size(res);
pci->atu_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->atu_base))
pci->atu_base = devm_ioremap_resource(dev, res);
}
if (!pci->atu_base || IS_ERR(pci->atu_base))
pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
}


@@ -18,6 +18,10 @@
#if defined(CONFIG_PCI_HISI) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
struct hisi_pcie {
void __iomem *reg_base;
};
static int hisi_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
int size, u32 *val)
{
@@ -58,10 +62,10 @@ static void __iomem *hisi_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct pci_config_window *cfg = bus->sysdata;
void __iomem *reg_base = cfg->priv;
struct hisi_pcie *pcie = cfg->priv;
if (bus->number == cfg->busr.start)
return reg_base + where;
return pcie->reg_base + where;
else
return pci_ecam_map_bus(bus, devfn, where);
}
@@ -71,12 +75,16 @@ static void __iomem *hisi_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
static int hisi_pcie_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct hisi_pcie *pcie;
struct acpi_device *adev = to_acpi_device(dev);
struct acpi_pci_root *root = acpi_driver_data(adev);
struct resource *res;
void __iomem *reg_base;
int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
/*
* Retrieve RC base and size from a HISI0081 device with _UID
* matching our segment.
@@ -91,11 +99,11 @@ static int hisi_pcie_init(struct pci_config_window *cfg)
return -ENOMEM;
}
reg_base = devm_pci_remap_cfgspace(dev, res->start, resource_size(res));
if (!reg_base)
pcie->reg_base = devm_pci_remap_cfgspace(dev, res->start, resource_size(res));
if (!pcie->reg_base)
return -ENOMEM;
cfg->priv = reg_base;
cfg->priv = pcie;
return 0;
}
@@ -115,9 +123,13 @@ const struct pci_ecam_ops hisi_pcie_ops = {
static int hisi_pcie_platform_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct hisi_pcie *pcie;
struct platform_device *pdev = to_platform_device(dev);
struct resource *res;
void __iomem *reg_base;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res) {
@@ -125,11 +137,11 @@ static int hisi_pcie_platform_init(struct pci_config_window *cfg)
return -EINVAL;
}
reg_base = devm_pci_remap_cfgspace(dev, res->start, resource_size(res));
if (!reg_base)
pcie->reg_base = devm_pci_remap_cfgspace(dev, res->start, resource_size(res));
if (!pcie->reg_base)
return -ENOMEM;
cfg->priv = reg_base;
cfg->priv = pcie;
return 0;
}


@@ -127,10 +127,8 @@ static int histb_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn,
{
struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);
if (PCI_SLOT(devfn)) {
*val = ~0;
if (PCI_SLOT(devfn))
return PCIBIOS_DEVICE_NOT_FOUND;
}
*val = dw_pcie_read_dbi(pci, where, size);
return PCIBIOS_SUCCESSFUL;


@@ -62,7 +62,7 @@ struct intel_pcie_soc {
unsigned int pcie_ver;
};
struct intel_pcie_port {
struct intel_pcie {
struct dw_pcie pci;
void __iomem *app_base;
struct gpio_desc *reset_gpio;
@@ -83,53 +83,53 @@ static void pcie_update_bits(void __iomem *base, u32 ofs, u32 mask, u32 val)
writel(val, base + ofs);
}
static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 ofs, u32 val)
static inline void pcie_app_wr(struct intel_pcie *pcie, u32 ofs, u32 val)
{
writel(val, lpp->app_base + ofs);
writel(val, pcie->app_base + ofs);
}
static void pcie_app_wr_mask(struct intel_pcie_port *lpp, u32 ofs,
static void pcie_app_wr_mask(struct intel_pcie *pcie, u32 ofs,
u32 mask, u32 val)
{
pcie_update_bits(lpp->app_base, ofs, mask, val);
pcie_update_bits(pcie->app_base, ofs, mask, val);
}
static inline u32 pcie_rc_cfg_rd(struct intel_pcie_port *lpp, u32 ofs)
static inline u32 pcie_rc_cfg_rd(struct intel_pcie *pcie, u32 ofs)
{
return dw_pcie_readl_dbi(&lpp->pci, ofs);
return dw_pcie_readl_dbi(&pcie->pci, ofs);
}
static inline void pcie_rc_cfg_wr(struct intel_pcie_port *lpp, u32 ofs, u32 val)
static inline void pcie_rc_cfg_wr(struct intel_pcie *pcie, u32 ofs, u32 val)
{
dw_pcie_writel_dbi(&lpp->pci, ofs, val);
dw_pcie_writel_dbi(&pcie->pci, ofs, val);
}
static void pcie_rc_cfg_wr_mask(struct intel_pcie_port *lpp, u32 ofs,
static void pcie_rc_cfg_wr_mask(struct intel_pcie *pcie, u32 ofs,
u32 mask, u32 val)
{
pcie_update_bits(lpp->pci.dbi_base, ofs, mask, val);
pcie_update_bits(pcie->pci.dbi_base, ofs, mask, val);
}
static void intel_pcie_ltssm_enable(struct intel_pcie_port *lpp)
static void intel_pcie_ltssm_enable(struct intel_pcie *pcie)
{
pcie_app_wr_mask(lpp, PCIE_APP_CCR, PCIE_APP_CCR_LTSSM_ENABLE,
pcie_app_wr_mask(pcie, PCIE_APP_CCR, PCIE_APP_CCR_LTSSM_ENABLE,
PCIE_APP_CCR_LTSSM_ENABLE);
}
static void intel_pcie_ltssm_disable(struct intel_pcie_port *lpp)
static void intel_pcie_ltssm_disable(struct intel_pcie *pcie)
{
pcie_app_wr_mask(lpp, PCIE_APP_CCR, PCIE_APP_CCR_LTSSM_ENABLE, 0);
pcie_app_wr_mask(pcie, PCIE_APP_CCR, PCIE_APP_CCR_LTSSM_ENABLE, 0);
}
static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
static void intel_pcie_link_setup(struct intel_pcie *pcie)
{
u32 val;
u8 offset = dw_pcie_find_capability(&lpp->pci, PCI_CAP_ID_EXP);
u8 offset = dw_pcie_find_capability(&pcie->pci, PCI_CAP_ID_EXP);
val = pcie_rc_cfg_rd(lpp, offset + PCI_EXP_LNKCTL);
val = pcie_rc_cfg_rd(pcie, offset + PCI_EXP_LNKCTL);
val &= ~(PCI_EXP_LNKCTL_LD | PCI_EXP_LNKCTL_ASPMC);
pcie_rc_cfg_wr(lpp, offset + PCI_EXP_LNKCTL, val);
pcie_rc_cfg_wr(pcie, offset + PCI_EXP_LNKCTL, val);
}
static void intel_pcie_init_n_fts(struct dw_pcie *pci)
@@ -148,14 +148,14 @@ static void intel_pcie_init_n_fts(struct dw_pcie *pci)
pci->n_fts[0] = PORT_AFR_N_FTS_GEN12_DFT;
}
static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
static int intel_pcie_ep_rst_init(struct intel_pcie *pcie)
{
struct device *dev = lpp->pci.dev;
struct device *dev = pcie->pci.dev;
int ret;
lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(lpp->reset_gpio)) {
ret = PTR_ERR(lpp->reset_gpio);
pcie->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(pcie->reset_gpio)) {
ret = PTR_ERR(pcie->reset_gpio);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to request PCIe GPIO: %d\n", ret);
return ret;
@@ -167,19 +167,19 @@ static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
return 0;
}
static void intel_pcie_core_rst_assert(struct intel_pcie_port *lpp)
static void intel_pcie_core_rst_assert(struct intel_pcie *pcie)
{
reset_control_assert(lpp->core_rst);
reset_control_assert(pcie->core_rst);
}
static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
static void intel_pcie_core_rst_deassert(struct intel_pcie *pcie)
{
/*
* One micro-second delay to make sure the reset pulse
* wide enough so that core reset is clean.
*/
udelay(1);
reset_control_deassert(lpp->core_rst);
reset_control_deassert(pcie->core_rst);
/*
* Some SoC core reset also reset PHY, more delay needed
@@ -188,58 +188,58 @@ static void intel_pcie_core_rst_deassert(struct intel_pcie_port *lpp)
usleep_range(1000, 2000);
}
static void intel_pcie_device_rst_assert(struct intel_pcie_port *lpp)
static void intel_pcie_device_rst_assert(struct intel_pcie *pcie)
{
gpiod_set_value_cansleep(lpp->reset_gpio, 1);
gpiod_set_value_cansleep(pcie->reset_gpio, 1);
}
static void intel_pcie_device_rst_deassert(struct intel_pcie_port *lpp)
static void intel_pcie_device_rst_deassert(struct intel_pcie *pcie)
{
msleep(lpp->rst_intrvl);
gpiod_set_value_cansleep(lpp->reset_gpio, 0);
msleep(pcie->rst_intrvl);
gpiod_set_value_cansleep(pcie->reset_gpio, 0);
}
static void intel_pcie_core_irq_disable(struct intel_pcie_port *lpp)
static void intel_pcie_core_irq_disable(struct intel_pcie *pcie)
{
pcie_app_wr(lpp, PCIE_APP_IRNEN, 0);
pcie_app_wr(lpp, PCIE_APP_IRNCR, PCIE_APP_IRN_INT);
pcie_app_wr(pcie, PCIE_APP_IRNEN, 0);
pcie_app_wr(pcie, PCIE_APP_IRNCR, PCIE_APP_IRN_INT);
}
static int intel_pcie_get_resources(struct platform_device *pdev)
{
struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
struct dw_pcie *pci = &lpp->pci;
struct intel_pcie *pcie = platform_get_drvdata(pdev);
struct dw_pcie *pci = &pcie->pci;
struct device *dev = pci->dev;
int ret;
lpp->core_clk = devm_clk_get(dev, NULL);
if (IS_ERR(lpp->core_clk)) {
ret = PTR_ERR(lpp->core_clk);
pcie->core_clk = devm_clk_get(dev, NULL);
if (IS_ERR(pcie->core_clk)) {
ret = PTR_ERR(pcie->core_clk);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get clks: %d\n", ret);
return ret;
}
lpp->core_rst = devm_reset_control_get(dev, NULL);
if (IS_ERR(lpp->core_rst)) {
ret = PTR_ERR(lpp->core_rst);
pcie->core_rst = devm_reset_control_get(dev, NULL);
if (IS_ERR(pcie->core_rst)) {
ret = PTR_ERR(pcie->core_rst);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get resets: %d\n", ret);
return ret;
}
ret = device_property_read_u32(dev, "reset-assert-ms",
&lpp->rst_intrvl);
&pcie->rst_intrvl);
if (ret)
lpp->rst_intrvl = RESET_INTERVAL_MS;
pcie->rst_intrvl = RESET_INTERVAL_MS;
lpp->app_base = devm_platform_ioremap_resource_byname(pdev, "app");
if (IS_ERR(lpp->app_base))
return PTR_ERR(lpp->app_base);
pcie->app_base = devm_platform_ioremap_resource_byname(pdev, "app");
if (IS_ERR(pcie->app_base))
return PTR_ERR(pcie->app_base);
lpp->phy = devm_phy_get(dev, "pcie");
if (IS_ERR(lpp->phy)) {
ret = PTR_ERR(lpp->phy);
pcie->phy = devm_phy_get(dev, "pcie");
if (IS_ERR(pcie->phy)) {
ret = PTR_ERR(pcie->phy);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Couldn't get pcie-phy: %d\n", ret);
return ret;
@@ -248,137 +248,137 @@ static int intel_pcie_get_resources(struct platform_device *pdev)
return 0;
}
static int intel_pcie_wait_l2(struct intel_pcie_port *lpp)
static int intel_pcie_wait_l2(struct intel_pcie *pcie)
{
u32 value;
int ret;
struct dw_pcie *pci = &lpp->pci;
struct dw_pcie *pci = &pcie->pci;
if (pci->link_gen < 3)
return 0;
/* Send PME_TURN_OFF message */
pcie_app_wr_mask(lpp, PCIE_APP_MSG_CR, PCIE_APP_MSG_XMT_PM_TURNOFF,
pcie_app_wr_mask(pcie, PCIE_APP_MSG_CR, PCIE_APP_MSG_XMT_PM_TURNOFF,
PCIE_APP_MSG_XMT_PM_TURNOFF);
/* Read PMC status and wait for falling into L2 link state */
ret = readl_poll_timeout(lpp->app_base + PCIE_APP_PMC, value,
ret = readl_poll_timeout(pcie->app_base + PCIE_APP_PMC, value,
value & PCIE_APP_PMC_IN_L2, 20,
jiffies_to_usecs(5 * HZ));
if (ret)
dev_err(lpp->pci.dev, "PCIe link enter L2 timeout!\n");
dev_err(pcie->pci.dev, "PCIe link enter L2 timeout!\n");
return ret;
}
static void intel_pcie_turn_off(struct intel_pcie_port *lpp)
static void intel_pcie_turn_off(struct intel_pcie *pcie)
{
if (dw_pcie_link_up(&lpp->pci))
intel_pcie_wait_l2(lpp);
if (dw_pcie_link_up(&pcie->pci))
intel_pcie_wait_l2(pcie);
/* Put endpoint device in reset state */
intel_pcie_device_rst_assert(lpp);
pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND, PCI_COMMAND_MEMORY, 0);
intel_pcie_device_rst_assert(pcie);
pcie_rc_cfg_wr_mask(pcie, PCI_COMMAND, PCI_COMMAND_MEMORY, 0);
}
static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
static int intel_pcie_host_setup(struct intel_pcie *pcie)
{
int ret;
struct dw_pcie *pci = &lpp->pci;
struct dw_pcie *pci = &pcie->pci;
intel_pcie_core_rst_assert(lpp);
intel_pcie_device_rst_assert(lpp);
intel_pcie_core_rst_assert(pcie);
intel_pcie_device_rst_assert(pcie);
ret = phy_init(lpp->phy);
ret = phy_init(pcie->phy);
if (ret)
return ret;
intel_pcie_core_rst_deassert(lpp);
intel_pcie_core_rst_deassert(pcie);
ret = clk_prepare_enable(lpp->core_clk);
ret = clk_prepare_enable(pcie->core_clk);
if (ret) {
dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
dev_err(pcie->pci.dev, "Core clock enable failed: %d\n", ret);
goto clk_err;
}
pci->atu_base = pci->dbi_base + 0xC0000;
intel_pcie_ltssm_disable(lpp);
intel_pcie_link_setup(lpp);
intel_pcie_ltssm_disable(pcie);
intel_pcie_link_setup(pcie);
intel_pcie_init_n_fts(pci);
dw_pcie_setup_rc(&pci->pp);
dw_pcie_upconfig_setup(pci);
intel_pcie_device_rst_deassert(lpp);
intel_pcie_ltssm_enable(lpp);
intel_pcie_device_rst_deassert(pcie);
intel_pcie_ltssm_enable(pcie);
ret = dw_pcie_wait_for_link(pci);
if (ret)
goto app_init_err;
/* Enable integrated interrupts */
pcie_app_wr_mask(lpp, PCIE_APP_IRNEN, PCIE_APP_IRN_INT,
pcie_app_wr_mask(pcie, PCIE_APP_IRNEN, PCIE_APP_IRN_INT,
PCIE_APP_IRN_INT);
return 0;
app_init_err:
clk_disable_unprepare(lpp->core_clk);
clk_disable_unprepare(pcie->core_clk);
clk_err:
intel_pcie_core_rst_assert(lpp);
phy_exit(lpp->phy);
intel_pcie_core_rst_assert(pcie);
phy_exit(pcie->phy);
return ret;
}
static void __intel_pcie_remove(struct intel_pcie_port *lpp)
static void __intel_pcie_remove(struct intel_pcie *pcie)
{
intel_pcie_core_irq_disable(lpp);
intel_pcie_turn_off(lpp);
clk_disable_unprepare(lpp->core_clk);
intel_pcie_core_rst_assert(lpp);
phy_exit(lpp->phy);
intel_pcie_core_irq_disable(pcie);
intel_pcie_turn_off(pcie);
clk_disable_unprepare(pcie->core_clk);
intel_pcie_core_rst_assert(pcie);
phy_exit(pcie->phy);
}
static int intel_pcie_remove(struct platform_device *pdev)
{
struct intel_pcie_port *lpp = platform_get_drvdata(pdev);
struct pcie_port *pp = &lpp->pci.pp;
struct intel_pcie *pcie = platform_get_drvdata(pdev);
struct pcie_port *pp = &pcie->pci.pp;
dw_pcie_host_deinit(pp);
__intel_pcie_remove(lpp);
__intel_pcie_remove(pcie);
return 0;
}
static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev)
{
struct intel_pcie_port *lpp = dev_get_drvdata(dev);
struct intel_pcie *pcie = dev_get_drvdata(dev);
int ret;
intel_pcie_core_irq_disable(lpp);
ret = intel_pcie_wait_l2(lpp);
intel_pcie_core_irq_disable(pcie);
ret = intel_pcie_wait_l2(pcie);
if (ret)
return ret;
phy_exit(lpp->phy);
clk_disable_unprepare(lpp->core_clk);
phy_exit(pcie->phy);
clk_disable_unprepare(pcie->core_clk);
return ret;
}
static int __maybe_unused intel_pcie_resume_noirq(struct device *dev)
{
struct intel_pcie_port *lpp = dev_get_drvdata(dev);
struct intel_pcie *pcie = dev_get_drvdata(dev);
return intel_pcie_host_setup(lpp);
return intel_pcie_host_setup(pcie);
}
static int intel_pcie_rc_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
struct intel_pcie *pcie = dev_get_drvdata(pci->dev);
return intel_pcie_host_setup(lpp);
return intel_pcie_host_setup(pcie);
}
static u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
@@ -402,17 +402,17 @@ static int intel_pcie_probe(struct platform_device *pdev)
{
const struct intel_pcie_soc *data;
struct device *dev = &pdev->dev;
struct intel_pcie_port *lpp;
struct intel_pcie *pcie;
struct pcie_port *pp;
struct dw_pcie *pci;
int ret;
lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
if (!lpp)
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
platform_set_drvdata(pdev, lpp);
pci = &lpp->pci;
platform_set_drvdata(pdev, pcie);
pci = &pcie->pci;
pci->dev = dev;
pp = &pci->pp;
@@ -420,7 +420,7 @@ static int intel_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = intel_pcie_ep_rst_init(lpp);
ret = intel_pcie_ep_rst_init(pcie);
if (ret)
return ret;
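
The intel-gw hunks are a mechanical rename of struct intel_pcie_port
("lpp") to struct intel_pcie ("pcie"). As an aside, the
EPROBE_DEFER-guarded dev_err() calls the rename touches could also be
written with dev_err_probe(); a sketch under that assumption, which is
not something this series does:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int foo_get_core_clk(struct device *dev, struct clk **clk)
{
	*clk = devm_clk_get(dev, NULL);
	if (IS_ERR(*clk))
		/* logs real errors, stays quiet for -EPROBE_DEFER */
		return dev_err_probe(dev, PTR_ERR(*clk),
				     "Failed to get clks\n");

	return 0;
}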


@@ -530,10 +530,8 @@ static int kirin_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn,
{
struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);
if (PCI_SLOT(devfn)) {
*val = ~0;
if (PCI_SLOT(devfn))
return PCIBIOS_DEVICE_NOT_FOUND;
}
*val = dw_pcie_read_dbi(pci, where, size);
return PCIBIOS_SUCCESSFUL;
@@ -773,7 +771,6 @@ static const struct of_device_id kirin_pcie_match[] = {
static int kirin_pcie_probe(struct platform_device *pdev)
{
enum pcie_kirin_phy_type phy_type;
const struct of_device_id *of_id;
struct device *dev = &pdev->dev;
struct kirin_pcie *kirin_pcie;
struct dw_pcie *pci;
@@ -784,13 +781,12 @@ static int kirin_pcie_probe(struct platform_device *pdev)
return -EINVAL;
}
of_id = of_match_device(kirin_pcie_match, dev);
if (!of_id) {
phy_type = (long)of_device_get_match_data(dev);
if (!phy_type) {
dev_err(dev, "OF data missing\n");
return -EINVAL;
}
phy_type = (long)of_id->data;
kirin_pcie = devm_kzalloc(dev, sizeof(struct kirin_pcie), GFP_KERNEL);
if (!kirin_pcie)


@@ -553,10 +553,8 @@ static int qcom_pcie_ep_enable_irq_resources(struct platform_device *pdev,
int irq, ret;
irq = platform_get_irq_byname(pdev, "global");
if (irq < 0) {
dev_err(&pdev->dev, "Failed to get Global IRQ\n");
if (irq < 0)
return irq;
}
ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
qcom_pcie_ep_global_irq_thread,
@@ -620,7 +618,7 @@ static void qcom_pcie_ep_init(struct dw_pcie_ep *ep)
dw_pcie_ep_reset_bar(pci, bar);
}
static struct dw_pcie_ep_ops pci_ep_ops = {
static const struct dw_pcie_ep_ops pci_ep_ops = {
.ep_init = qcom_pcie_ep_init,
.raise_irq = qcom_pcie_ep_raise_irq,
.get_features = qcom_pcie_epc_get_features,
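
Dropping the dev_err() after platform_get_irq_byname() above avoids
duplicate logging: the platform core already prints an error when the
IRQ is missing, so callers only propagate the code. A sketch with
hypothetical foo_* names:

#include <linux/platform_device.h>

static int foo_get_global_irq(struct platform_device *pdev)
{
	int irq;

	irq = platform_get_irq_byname(pdev, "global");
	if (irq < 0)
		return irq;	/* the core already logged the failure */

	return irq;
}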


@@ -1343,7 +1343,7 @@ static int qcom_pcie_config_sid_sm8250(struct qcom_pcie *pcie)
/* Look for an available entry to hold the mapping */
for (i = 0; i < nr_map; i++) {
u16 bdf_be = cpu_to_be16(map[i].bdf);
__be16 bdf_be = cpu_to_be16(map[i].bdf);
u32 val;
u8 hash;
@@ -1534,6 +1534,12 @@ static int qcom_pcie_probe(struct platform_device *pdev)
const struct qcom_pcie_cfg *pcie_cfg;
int ret;
pcie_cfg = of_device_get_match_data(dev);
if (!pcie_cfg || !pcie_cfg->ops) {
dev_err(dev, "Invalid platform data\n");
return -EINVAL;
}
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
@@ -1553,12 +1559,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
pcie->pci = pci;
pcie_cfg = of_device_get_match_data(dev);
if (!pcie_cfg || !pcie_cfg->ops) {
dev_err(dev, "Invalid platform data\n");
return -EINVAL;
}
pcie->ops = pcie_cfg->ops;
pcie->pipe_clk_need_muxing = pcie_cfg->pipe_clk_need_muxing;
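
The __be16 fix above matters to sparse: cpu_to_be16() returns a __be16,
and storing it in a plain u16 silently mixes byte orders. A sketch of
the annotated pattern, with hypothetical foo_* names:

#include <linux/types.h>
#include <asm/byteorder.h>

struct foo_sid_entry {
	__be16 bdf_be;		/* the hardware expects big-endian */
};

static void foo_set_bdf(struct foo_sid_entry *entry, u16 bdf)
{
	/* cpu_to_be16() returns __be16; a plain u16 here trips sparse */
	entry->bdf_be = cpu_to_be16(bdf);
}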


@@ -69,7 +69,7 @@ struct pcie_app_reg {
static int spear13xx_pcie_start_link(struct dw_pcie *pci)
{
struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pci);
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
struct pcie_app_reg __iomem *app_reg = spear13xx_pcie->app_base;
/* enable ltssm */
writel(DEVICE_TYPE_RC | (1 << MISCTRL_EN_ID)
@@ -83,7 +83,7 @@ static int spear13xx_pcie_start_link(struct dw_pcie *pci)
static irqreturn_t spear13xx_pcie_irq_handler(int irq, void *arg)
{
struct spear13xx_pcie *spear13xx_pcie = arg;
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
struct pcie_app_reg __iomem *app_reg = spear13xx_pcie->app_base;
struct dw_pcie *pci = spear13xx_pcie->pci;
struct pcie_port *pp = &pci->pp;
unsigned int status;
@@ -102,7 +102,7 @@ static irqreturn_t spear13xx_pcie_irq_handler(int irq, void *arg)
static void spear13xx_pcie_enable_interrupts(struct spear13xx_pcie *spear13xx_pcie)
{
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
struct pcie_app_reg __iomem *app_reg = spear13xx_pcie->app_base;
/* Enable MSI interrupt */
if (IS_ENABLED(CONFIG_PCI_MSI))
@@ -113,7 +113,7 @@ static void spear13xx_pcie_enable_interrupts(struct spear13xx_pc
static int spear13xx_pcie_link_up(struct dw_pcie *pci)
{
struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pci);
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
struct pcie_app_reg __iomem *app_reg = spear13xx_pcie->app_base;
if (readl(&app_reg->app_status_1) & XMLH_LINK_UP)
return 1;
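
The __iomem annotations above let sparse check that the register block
behind app_base is only accessed through readl()/writel(). A sketch of
the idiom, with hypothetical foo_* names and register layout:

#include <linux/bits.h>
#include <linux/io.h>

struct foo_app_reg {
	u32 app_ctrl_0;
	u32 app_status_1;
};

#define FOO_XMLH_LINK_UP	BIT(6)

static int foo_link_up(void __iomem *base)
{
	struct foo_app_reg __iomem *app_reg = base;

	/* only readl()/writel() may touch it; *app_reg would be flagged */
	return !!(readl(&app_reg->app_status_1) & FOO_XMLH_LINK_UP);
}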


@@ -245,7 +245,7 @@ static const unsigned int pcie_gen_freq[] = {
GEN4_CORE_CLK_FREQ
};
struct tegra_pcie_dw {
struct tegra194_pcie {
struct device *dev;
struct resource *appl_res;
struct resource *dbi_res;
@@ -289,22 +289,22 @@ struct tegra_pcie_dw {
int ep_state;
};
struct tegra_pcie_dw_of_data {
struct tegra194_pcie_of_data {
enum dw_pcie_device_mode mode;
};
static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci)
static inline struct tegra194_pcie *to_tegra_pcie(struct dw_pcie *pci)
{
return container_of(pci, struct tegra_pcie_dw, pci);
return container_of(pci, struct tegra194_pcie, pci);
}
static inline void appl_writel(struct tegra_pcie_dw *pcie, const u32 value,
static inline void appl_writel(struct tegra194_pcie *pcie, const u32 value,
const u32 reg)
{
writel_relaxed(value, pcie->appl_base + reg);
}
static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg)
static inline u32 appl_readl(struct tegra194_pcie *pcie, const u32 reg)
{
return readl_relaxed(pcie->appl_base + reg);
}
@@ -316,7 +316,7 @@ struct tegra_pcie_soc {
static void apply_bad_link_workaround(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
u32 current_link_width;
u16 val;
@@ -349,7 +349,7 @@ static void apply_bad_link_workaround(struct pcie_port *pp)
static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
{
struct tegra_pcie_dw *pcie = arg;
struct tegra194_pcie *pcie = arg;
struct dw_pcie *pci = &pcie->pci;
struct pcie_port *pp = &pci->pp;
u32 val, tmp;
@@ -420,7 +420,7 @@ static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
return IRQ_HANDLED;
}
static void pex_ep_event_hot_rst_done(struct tegra_pcie_dw *pcie)
static void pex_ep_event_hot_rst_done(struct tegra194_pcie *pcie)
{
u32 val;
@@ -448,7 +448,7 @@ static void pex_ep_event_hot_rst_done(struct tegra_pcie_dw *pcie)
static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
{
struct tegra_pcie_dw *pcie = arg;
struct tegra194_pcie *pcie = arg;
struct dw_pcie *pci = &pcie->pci;
u32 val, speed;
@@ -494,7 +494,7 @@ static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
{
struct tegra_pcie_dw *pcie = arg;
struct tegra194_pcie *pcie = arg;
struct dw_pcie_ep *ep = &pcie->pci.ep;
int spurious = 1;
u32 status_l0, status_l1, link_status;
@@ -537,7 +537,7 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
return IRQ_HANDLED;
}
static int tegra_pcie_dw_rd_own_conf(struct pci_bus *bus, u32 devfn, int where,
static int tegra194_pcie_rd_own_conf(struct pci_bus *bus, u32 devfn, int where,
int size, u32 *val)
{
/*
@@ -554,7 +554,7 @@ static int tegra_pcie_dw_rd_own_conf(struct pci_bus *bus, u32 devfn, int where,
return pci_generic_config_read(bus, devfn, where, size, val);
}
static int tegra_pcie_dw_wr_own_conf(struct pci_bus *bus, u32 devfn, int where,
static int tegra194_pcie_wr_own_conf(struct pci_bus *bus, u32 devfn, int where,
int size, u32 val)
{
/*
@@ -571,8 +571,8 @@ static int tegra_pcie_dw_wr_own_conf(struct pci_bus *bus, u32 devfn, int where,
static struct pci_ops tegra_pci_ops = {
.map_bus = dw_pcie_own_conf_map_bus,
.read = tegra_pcie_dw_rd_own_conf,
.write = tegra_pcie_dw_wr_own_conf,
.read = tegra194_pcie_rd_own_conf,
.write = tegra194_pcie_wr_own_conf,
};
#if defined(CONFIG_PCIEASPM)
@@ -594,7 +594,7 @@ static const u32 event_cntr_data_offset[] = {
0x1dc
};
static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
static void disable_aspm_l11(struct tegra194_pcie *pcie)
{
u32 val;
@@ -603,7 +603,7 @@ static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
}
static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
static void disable_aspm_l12(struct tegra194_pcie *pcie)
{
u32 val;
@@ -612,7 +612,7 @@ static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
}
static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
static inline u32 event_counter_prog(struct tegra194_pcie *pcie, u32 event)
{
u32 val;
@@ -629,7 +629,7 @@ static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
static int aspm_state_cnt(struct seq_file *s, void *data)
{
struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *)
struct tegra194_pcie *pcie = (struct tegra194_pcie *)
dev_get_drvdata(s->private);
u32 val;
@@ -660,7 +660,7 @@ static int aspm_state_cnt(struct seq_file *s, void *data)
return 0;
}
static void init_host_aspm(struct tegra_pcie_dw *pcie)
static void init_host_aspm(struct tegra194_pcie *pcie)
{
struct dw_pcie *pci = &pcie->pci;
u32 val;
@@ -688,22 +688,22 @@ static void init_host_aspm(struct tegra_pcie_dw *pcie)
dw_pcie_writel_dbi(pci, PCIE_PORT_AFR, val);
}
static void init_debugfs(struct tegra_pcie_dw *pcie)
static void init_debugfs(struct tegra194_pcie *pcie)
{
debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt", pcie->debugfs,
aspm_state_cnt);
}
#else
static inline void disable_aspm_l12(struct tegra_pcie_dw *pcie) { return; }
static inline void disable_aspm_l11(struct tegra_pcie_dw *pcie) { return; }
static inline void init_host_aspm(struct tegra_pcie_dw *pcie) { return; }
static inline void init_debugfs(struct tegra_pcie_dw *pcie) { return; }
static inline void disable_aspm_l12(struct tegra194_pcie *pcie) { return; }
static inline void disable_aspm_l11(struct tegra194_pcie *pcie) { return; }
static inline void init_host_aspm(struct tegra194_pcie *pcie) { return; }
static inline void init_debugfs(struct tegra194_pcie *pcie) { return; }
#endif
static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
u32 val;
u16 val_w;
@@ -741,7 +741,7 @@ static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
u32 val;
/* Enable legacy interrupt generation */
@@ -762,7 +762,7 @@ static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
u32 val;
/* Enable MSI interrupt generation */
@@ -775,7 +775,7 @@ static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
/* Clear interrupt statuses before enabling interrupts */
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
@@ -800,7 +800,7 @@ static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
tegra_pcie_enable_msi_interrupts(pp);
}
static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
static void config_gen3_gen4_eq_presets(struct tegra194_pcie *pcie)
{
struct dw_pcie *pci = &pcie->pci;
u32 val, offset, i;
@@ -853,10 +853,10 @@ static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
}
static int tegra_pcie_dw_host_init(struct pcie_port *pp)
static int tegra194_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
u32 val;
pp->bridge->ops = &tegra_pci_ops;
@@ -914,10 +914,10 @@ static int tegra_pcie_dw_host_init(struct pcie_port *pp)
return 0;
}
static int tegra_pcie_dw_start_link(struct dw_pcie *pci)
static int tegra194_pcie_start_link(struct dw_pcie *pci)
{
u32 val, offset, speed, tmp;
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
struct pcie_port *pp = &pci->pp;
bool retry = true;
@@ -982,7 +982,7 @@ retry_link:
val &= ~PCI_DLF_EXCHANGE_ENABLE;
dw_pcie_writel_dbi(pci, offset, val);
tegra_pcie_dw_host_init(pp);
tegra194_pcie_host_init(pp);
dw_pcie_setup_rc(pp);
retry = false;
@@ -998,32 +998,32 @@ retry_link:
return 0;
}
static int tegra_pcie_dw_link_up(struct dw_pcie *pci)
static int tegra194_pcie_link_up(struct dw_pcie *pci)
{
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
return !!(val & PCI_EXP_LNKSTA_DLLLA);
}
static void tegra_pcie_dw_stop_link(struct dw_pcie *pci)
static void tegra194_pcie_stop_link(struct dw_pcie *pci)
{
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
disable_irq(pcie->pex_rst_irq);
}
static const struct dw_pcie_ops tegra_dw_pcie_ops = {
.link_up = tegra_pcie_dw_link_up,
.start_link = tegra_pcie_dw_start_link,
.stop_link = tegra_pcie_dw_stop_link,
.link_up = tegra194_pcie_link_up,
.start_link = tegra194_pcie_start_link,
.stop_link = tegra194_pcie_stop_link,
};
static const struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
.host_init = tegra_pcie_dw_host_init,
static const struct dw_pcie_host_ops tegra194_pcie_host_ops = {
.host_init = tegra194_pcie_host_init,
};
static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
static void tegra_pcie_disable_phy(struct tegra194_pcie *pcie)
{
unsigned int phy_count = pcie->phy_count;
@@ -1033,7 +1033,7 @@ static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
}
}
static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie)
static int tegra_pcie_enable_phy(struct tegra194_pcie *pcie)
{
unsigned int i;
int ret;
@@ -1060,7 +1060,7 @@ phy_exit:
return ret;
}
static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
static int tegra194_pcie_parse_dt(struct tegra194_pcie *pcie)
{
struct platform_device *pdev = to_platform_device(pcie->dev);
struct device_node *np = pcie->dev->of_node;
@@ -1156,7 +1156,7 @@ static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
return 0;
}
static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
static int tegra_pcie_bpmp_set_ctrl_state(struct tegra194_pcie *pcie,
bool enable)
{
struct mrq_uphy_response resp;
@@ -1184,7 +1184,7 @@ static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
return tegra_bpmp_transfer(pcie->bpmp, &msg);
}
static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie,
static int tegra_pcie_bpmp_set_pll_state(struct tegra194_pcie *pcie,
bool enable)
{
struct mrq_uphy_response resp;
@@ -1212,7 +1212,7 @@ static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie,
return tegra_bpmp_transfer(pcie->bpmp, &msg);
}
static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
static void tegra_pcie_downstream_dev_to_D0(struct tegra194_pcie *pcie)
{
struct pcie_port *pp = &pcie->pci.pp;
struct pci_bus *child, *root_bus = NULL;
@@ -1250,7 +1250,7 @@ static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
}
}
static int tegra_pcie_get_slot_regulators(struct tegra_pcie_dw *pcie)
static int tegra_pcie_get_slot_regulators(struct tegra194_pcie *pcie)
{
pcie->slot_ctl_3v3 = devm_regulator_get_optional(pcie->dev, "vpcie3v3");
if (IS_ERR(pcie->slot_ctl_3v3)) {
@@ -1271,7 +1271,7 @@ static int tegra_pcie_get_slot_regulators(struct tegra_pcie_dw *pcie)
return 0;
}
static int tegra_pcie_enable_slot_regulators(struct tegra_pcie_dw *pcie)
static int tegra_pcie_enable_slot_regulators(struct tegra194_pcie *pcie)
{
int ret;
@@ -1309,7 +1309,7 @@ fail_12v_enable:
return ret;
}
static void tegra_pcie_disable_slot_regulators(struct tegra_pcie_dw *pcie)
static void tegra_pcie_disable_slot_regulators(struct tegra194_pcie *pcie)
{
if (pcie->slot_ctl_12v)
regulator_disable(pcie->slot_ctl_12v);
@@ -1317,7 +1317,7 @@ static void tegra_pcie_disable_slot_regulators(struct tegra_pcie_dw *pcie)
regulator_disable(pcie->slot_ctl_3v3);
}
static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie,
static int tegra_pcie_config_controller(struct tegra194_pcie *pcie,
bool en_hw_hot_rst)
{
int ret;
@@ -1414,7 +1414,7 @@ fail_slot_reg_en:
return ret;
}
static void tegra_pcie_unconfig_controller(struct tegra_pcie_dw *pcie)
static void tegra_pcie_unconfig_controller(struct tegra194_pcie *pcie)
{
int ret;
@@ -1442,7 +1442,7 @@ static void tegra_pcie_unconfig_controller(struct tegra_pcie_dw *pcie)
pcie->cid, ret);
}
static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie)
static int tegra_pcie_init_controller(struct tegra194_pcie *pcie)
{
struct dw_pcie *pci = &pcie->pci;
struct pcie_port *pp = &pci->pp;
@@ -1452,7 +1452,7 @@ static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie)
if (ret < 0)
return ret;
pp->ops = &tegra_pcie_dw_host_ops;
pp->ops = &tegra194_pcie_host_ops;
ret = dw_pcie_host_init(pp);
if (ret < 0) {
@@ -1467,11 +1467,11 @@ fail_host_init:
return ret;
}
static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie)
static int tegra_pcie_try_link_l2(struct tegra194_pcie *pcie)
{
u32 val;
if (!tegra_pcie_dw_link_up(&pcie->pci))
if (!tegra194_pcie_link_up(&pcie->pci))
return 0;
val = appl_readl(pcie, APPL_RADM_STATUS);
@@ -1483,12 +1483,12 @@ static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie)
1, PME_ACK_TIMEOUT);
}
static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
static void tegra194_pcie_pme_turnoff(struct tegra194_pcie *pcie)
{
u32 data;
int err;
if (!tegra_pcie_dw_link_up(&pcie->pci)) {
if (!tegra194_pcie_link_up(&pcie->pci)) {
dev_dbg(pcie->dev, "PCIe link is not up...!\n");
return;
}
@@ -1545,15 +1545,15 @@ static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
appl_writel(pcie, data, APPL_PINMUX);
}
static void tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie)
static void tegra_pcie_deinit_controller(struct tegra194_pcie *pcie)
{
tegra_pcie_downstream_dev_to_D0(pcie);
dw_pcie_host_deinit(&pcie->pci.pp);
tegra_pcie_dw_pme_turnoff(pcie);
tegra194_pcie_pme_turnoff(pcie);
tegra_pcie_unconfig_controller(pcie);
}
static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie)
static int tegra_pcie_config_rp(struct tegra194_pcie *pcie)
{
struct device *dev = pcie->dev;
char *name;
@@ -1580,7 +1580,7 @@ static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie)
goto fail_pm_get_sync;
}
pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci);
pcie->link_state = tegra194_pcie_link_up(&pcie->pci);
if (!pcie->link_state) {
ret = -ENOMEDIUM;
goto fail_host_init;
@@ -1605,7 +1605,7 @@ fail_pm_get_sync:
return ret;
}
static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
static void pex_ep_event_pex_rst_assert(struct tegra194_pcie *pcie)
{
u32 val;
int ret;
@@ -1644,7 +1644,7 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
dev_dbg(pcie->dev, "Uninitialization of endpoint is completed\n");
}
static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
static void pex_ep_event_pex_rst_deassert(struct tegra194_pcie *pcie)
{
struct dw_pcie *pci = &pcie->pci;
struct dw_pcie_ep *ep = &pci->ep;
@@ -1809,7 +1809,7 @@ fail_pll_init:
static irqreturn_t tegra_pcie_ep_pex_rst_irq(int irq, void *arg)
{
struct tegra_pcie_dw *pcie = arg;
struct tegra194_pcie *pcie = arg;
if (gpiod_get_value(pcie->pex_rst_gpiod))
pex_ep_event_pex_rst_assert(pcie);
@@ -1819,7 +1819,7 @@ static irqreturn_t tegra_pcie_ep_pex_rst_irq(int irq, void *arg)
return IRQ_HANDLED;
}
static int tegra_pcie_ep_raise_legacy_irq(struct tegra_pcie_dw *pcie, u16 irq)
static int tegra_pcie_ep_raise_legacy_irq(struct tegra194_pcie *pcie, u16 irq)
{
/* Tegra194 supports only INTA */
if (irq > 1)
@@ -1831,7 +1831,7 @@ static int tegra_pcie_ep_raise_legacy_irq(struct tegra_pcie_dw *pcie, u16 irq)
return 0;
}
static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq)
static int tegra_pcie_ep_raise_msi_irq(struct tegra194_pcie *pcie, u16 irq)
{
if (unlikely(irq > 31))
return -EINVAL;
@@ -1841,7 +1841,7 @@ static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq)
return 0;
}
static int tegra_pcie_ep_raise_msix_irq(struct tegra_pcie_dw *pcie, u16 irq)
static int tegra_pcie_ep_raise_msix_irq(struct tegra194_pcie *pcie, u16 irq)
{
struct dw_pcie_ep *ep = &pcie->pci.ep;
@@ -1855,7 +1855,7 @@ static int tegra_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
u16 interrupt_num)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
struct tegra194_pcie *pcie = to_tegra_pcie(pci);
switch (type) {
case PCI_EPC_IRQ_LEGACY:
@@ -1896,7 +1896,7 @@ static const struct dw_pcie_ep_ops pcie_ep_ops = {
.get_features = tegra_pcie_ep_get_features,
};
static int tegra_pcie_config_ep(struct tegra_pcie_dw *pcie,
static int tegra_pcie_config_ep(struct tegra194_pcie *pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = &pcie->pci;
@@ -1957,12 +1957,12 @@ static int tegra_pcie_config_ep(struct tegra_pcie_dw *pcie,
return 0;
}
static int tegra_pcie_dw_probe(struct platform_device *pdev)
static int tegra194_pcie_probe(struct platform_device *pdev)
{
const struct tegra_pcie_dw_of_data *data;
const struct tegra194_pcie_of_data *data;
struct device *dev = &pdev->dev;
struct resource *atu_dma_res;
struct tegra_pcie_dw *pcie;
struct tegra194_pcie *pcie;
struct pcie_port *pp;
struct dw_pcie *pci;
struct phy **phys;
@@ -1988,7 +1988,7 @@ static int tegra_pcie_dw_probe(struct platform_device *pdev)
pcie->dev = &pdev->dev;
pcie->mode = (enum dw_pcie_device_mode)data->mode;
ret = tegra_pcie_dw_parse_dt(pcie);
ret = tegra194_pcie_parse_dt(pcie);
if (ret < 0) {
const char *level = KERN_ERR;
@@ -2146,9 +2146,9 @@ fail:
return ret;
}
static int tegra_pcie_dw_remove(struct platform_device *pdev)
static int tegra194_pcie_remove(struct platform_device *pdev)
{
struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
struct tegra194_pcie *pcie = platform_get_drvdata(pdev);
if (!pcie->link_state)
return 0;
@@ -2164,9 +2164,9 @@ static int tegra_pcie_dw_remove(struct platform_device *pdev)
return 0;
}
static int tegra_pcie_dw_suspend_late(struct device *dev)
static int tegra194_pcie_suspend_late(struct device *dev)
{
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
struct tegra194_pcie *pcie = dev_get_drvdata(dev);
u32 val;
if (!pcie->link_state)
@@ -2182,9 +2182,9 @@ static int tegra_pcie_dw_suspend_late(struct device *dev)
return 0;
}
static int tegra_pcie_dw_suspend_noirq(struct device *dev)
static int tegra194_pcie_suspend_noirq(struct device *dev)
{
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
struct tegra194_pcie *pcie = dev_get_drvdata(dev);
if (!pcie->link_state)
return 0;
@@ -2193,15 +2193,15 @@ static int tegra_pcie_dw_suspend_noirq(struct device *dev)
pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
PORT_LOGIC_MSI_CTRL_INT_0_EN);
tegra_pcie_downstream_dev_to_D0(pcie);
tegra_pcie_dw_pme_turnoff(pcie);
tegra194_pcie_pme_turnoff(pcie);
tegra_pcie_unconfig_controller(pcie);
return 0;
}
static int tegra_pcie_dw_resume_noirq(struct device *dev)
static int tegra194_pcie_resume_noirq(struct device *dev)
{
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
struct tegra194_pcie *pcie = dev_get_drvdata(dev);
int ret;
if (!pcie->link_state)
@@ -2211,7 +2211,7 @@ static int tegra_pcie_dw_resume_noirq(struct device *dev)
if (ret < 0)
return ret;
ret = tegra_pcie_dw_host_init(&pcie->pci.pp);
ret = tegra194_pcie_host_init(&pcie->pci.pp);
if (ret < 0) {
dev_err(dev, "Failed to init host: %d\n", ret);
goto fail_host_init;
@@ -2219,7 +2219,7 @@ static int tegra_pcie_dw_resume_noirq(struct device *dev)
dw_pcie_setup_rc(&pcie->pci.pp);
ret = tegra_pcie_dw_start_link(&pcie->pci);
ret = tegra194_pcie_start_link(&pcie->pci);
if (ret < 0)
goto fail_host_init;
@@ -2234,9 +2234,9 @@ fail_host_init:
return ret;
}
static int tegra_pcie_dw_resume_early(struct device *dev)
static int tegra194_pcie_resume_early(struct device *dev)
{
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
struct tegra194_pcie *pcie = dev_get_drvdata(dev);
u32 val;
if (pcie->mode == DW_PCIE_EP_TYPE) {
@@ -2259,9 +2259,9 @@ static int tegra_pcie_dw_resume_early(struct device *dev)
return 0;
}
static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
static void tegra194_pcie_shutdown(struct platform_device *pdev)
{
struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
struct tegra194_pcie *pcie = platform_get_drvdata(pdev);
if (!pcie->link_state)
return;
@@ -2273,50 +2273,50 @@ static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
if (IS_ENABLED(CONFIG_PCI_MSI))
disable_irq(pcie->pci.pp.msi_irq);
tegra_pcie_dw_pme_turnoff(pcie);
tegra194_pcie_pme_turnoff(pcie);
tegra_pcie_unconfig_controller(pcie);
}
static const struct tegra_pcie_dw_of_data tegra_pcie_dw_rc_of_data = {
static const struct tegra194_pcie_of_data tegra194_pcie_rc_of_data = {
.mode = DW_PCIE_RC_TYPE,
};
static const struct tegra_pcie_dw_of_data tegra_pcie_dw_ep_of_data = {
static const struct tegra194_pcie_of_data tegra194_pcie_ep_of_data = {
.mode = DW_PCIE_EP_TYPE,
};
static const struct of_device_id tegra_pcie_dw_of_match[] = {
static const struct of_device_id tegra194_pcie_of_match[] = {
{
.compatible = "nvidia,tegra194-pcie",
.data = &tegra_pcie_dw_rc_of_data,
.data = &tegra194_pcie_rc_of_data,
},
{
.compatible = "nvidia,tegra194-pcie-ep",
.data = &tegra_pcie_dw_ep_of_data,
.data = &tegra194_pcie_ep_of_data,
},
{},
};
static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
.suspend_late = tegra_pcie_dw_suspend_late,
.suspend_noirq = tegra_pcie_dw_suspend_noirq,
.resume_noirq = tegra_pcie_dw_resume_noirq,
.resume_early = tegra_pcie_dw_resume_early,
static const struct dev_pm_ops tegra194_pcie_pm_ops = {
.suspend_late = tegra194_pcie_suspend_late,
.suspend_noirq = tegra194_pcie_suspend_noirq,
.resume_noirq = tegra194_pcie_resume_noirq,
.resume_early = tegra194_pcie_resume_early,
};
static struct platform_driver tegra_pcie_dw_driver = {
.probe = tegra_pcie_dw_probe,
.remove = tegra_pcie_dw_remove,
.shutdown = tegra_pcie_dw_shutdown,
static struct platform_driver tegra194_pcie_driver = {
.probe = tegra194_pcie_probe,
.remove = tegra194_pcie_remove,
.shutdown = tegra194_pcie_shutdown,
.driver = {
.name = "tegra194-pcie",
.pm = &tegra_pcie_dw_pm_ops,
.of_match_table = tegra_pcie_dw_of_match,
.pm = &tegra194_pcie_pm_ops,
.of_match_table = tegra194_pcie_of_match,
},
};
module_platform_driver(tegra_pcie_dw_driver);
module_platform_driver(tegra194_pcie_driver);
MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match);
MODULE_DEVICE_TABLE(of, tegra194_pcie_of_match);
MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
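
The tegra194 hunks rename the tegra_pcie_dw_* namespace to
tegra194_pcie_* with no functional change; per-compatible match data
still selects RC vs EP mode at probe time. A sketch of that lookup,
reusing types from the diff above (the helper itself is illustrative):

#include <linux/of_device.h>
#include <linux/platform_device.h>

static int foo_get_mode(struct platform_device *pdev,
			enum dw_pcie_device_mode *mode)
{
	const struct tegra194_pcie_of_data *data;

	data = of_device_get_match_data(&pdev->dev);
	if (!data)
		return -EINVAL;

	*mode = data->mode;	/* DW_PCIE_RC_TYPE or DW_PCIE_EP_TYPE */
	return 0;
}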


@@ -61,9 +61,9 @@
#define PCL_RDLH_LINK_UP BIT(1)
#define PCL_XMLH_LINK_UP BIT(0)
struct uniphier_pcie_priv {
void __iomem *base;
struct uniphier_pcie {
struct dw_pcie pci;
void __iomem *base;
struct clk *clk;
struct reset_control *rst;
struct phy *phy;
@@ -72,62 +72,62 @@ struct uniphier_pcie_priv {
#define to_uniphier_pcie(x) dev_get_drvdata((x)->dev)
static void uniphier_pcie_ltssm_enable(struct uniphier_pcie_priv *priv,
static void uniphier_pcie_ltssm_enable(struct uniphier_pcie *pcie,
bool enable)
{
u32 val;
val = readl(priv->base + PCL_APP_READY_CTRL);
val = readl(pcie->base + PCL_APP_READY_CTRL);
if (enable)
val |= PCL_APP_LTSSM_ENABLE;
else
val &= ~PCL_APP_LTSSM_ENABLE;
writel(val, priv->base + PCL_APP_READY_CTRL);
writel(val, pcie->base + PCL_APP_READY_CTRL);
}
static void uniphier_pcie_init_rc(struct uniphier_pcie_priv *priv)
static void uniphier_pcie_init_rc(struct uniphier_pcie *pcie)
{
u32 val;
/* set RC MODE */
val = readl(priv->base + PCL_MODE);
val = readl(pcie->base + PCL_MODE);
val |= PCL_MODE_REGEN;
val &= ~PCL_MODE_REGVAL;
writel(val, priv->base + PCL_MODE);
writel(val, pcie->base + PCL_MODE);
/* use auxiliary power detection */
val = readl(priv->base + PCL_APP_PM0);
val = readl(pcie->base + PCL_APP_PM0);
val |= PCL_SYS_AUX_PWR_DET;
writel(val, priv->base + PCL_APP_PM0);
writel(val, pcie->base + PCL_APP_PM0);
/* assert PERST# */
val = readl(priv->base + PCL_PINCTRL0);
val = readl(pcie->base + PCL_PINCTRL0);
val &= ~(PCL_PERST_NOE_REGVAL | PCL_PERST_OUT_REGVAL
| PCL_PERST_PLDN_REGVAL);
val |= PCL_PERST_NOE_REGEN | PCL_PERST_OUT_REGEN
| PCL_PERST_PLDN_REGEN;
writel(val, priv->base + PCL_PINCTRL0);
writel(val, pcie->base + PCL_PINCTRL0);
uniphier_pcie_ltssm_enable(priv, false);
uniphier_pcie_ltssm_enable(pcie, false);
usleep_range(100000, 200000);
/* deassert PERST# */
val = readl(priv->base + PCL_PINCTRL0);
val = readl(pcie->base + PCL_PINCTRL0);
val |= PCL_PERST_OUT_REGVAL | PCL_PERST_OUT_REGEN;
writel(val, priv->base + PCL_PINCTRL0);
writel(val, pcie->base + PCL_PINCTRL0);
}
static int uniphier_pcie_wait_rc(struct uniphier_pcie_priv *priv)
static int uniphier_pcie_wait_rc(struct uniphier_pcie *pcie)
{
u32 status;
int ret;
/* wait PIPE clock */
ret = readl_poll_timeout(priv->base + PCL_PIPEMON, status,
ret = readl_poll_timeout(pcie->base + PCL_PIPEMON, status,
status & PCL_PCLK_ALIVE, 100000, 1000000);
if (ret) {
dev_err(priv->pci.dev,
dev_err(pcie->pci.dev,
"Failed to initialize controller in RC mode\n");
return ret;
}
@@ -137,10 +137,10 @@ static int uniphier_pcie_wait_rc(struct uniphier_pcie_priv *priv)
static int uniphier_pcie_link_up(struct dw_pcie *pci)
{
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
u32 val, mask;
val = readl(priv->base + PCL_STATUS_LINK);
val = readl(pcie->base + PCL_STATUS_LINK);
mask = PCL_RDLH_LINK_UP | PCL_XMLH_LINK_UP;
return (val & mask) == mask;
@@ -148,39 +148,40 @@ static int uniphier_pcie_link_up(struct dw_pcie *pci)
static int uniphier_pcie_start_link(struct dw_pcie *pci)
{
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
uniphier_pcie_ltssm_enable(priv, true);
uniphier_pcie_ltssm_enable(pcie, true);
return 0;
}
static void uniphier_pcie_stop_link(struct dw_pcie *pci)
{
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
uniphier_pcie_ltssm_enable(priv, false);
uniphier_pcie_ltssm_enable(pcie, false);
}
static void uniphier_pcie_irq_enable(struct uniphier_pcie_priv *priv)
static void uniphier_pcie_irq_enable(struct uniphier_pcie *pcie)
{
writel(PCL_RCV_INT_ALL_ENABLE, priv->base + PCL_RCV_INT);
writel(PCL_RCV_INTX_ALL_ENABLE, priv->base + PCL_RCV_INTX);
writel(PCL_RCV_INT_ALL_ENABLE, pcie->base + PCL_RCV_INT);
writel(PCL_RCV_INTX_ALL_ENABLE, pcie->base + PCL_RCV_INTX);
}
static void uniphier_pcie_irq_mask(struct irq_data *d)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&pp->lock, flags);
val = readl(priv->base + PCL_RCV_INTX);
val = readl(pcie->base + PCL_RCV_INTX);
val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
writel(val, priv->base + PCL_RCV_INTX);
writel(val, pcie->base + PCL_RCV_INTX);
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
@@ -189,15 +190,15 @@ static void uniphier_pcie_irq_unmask(struct irq_data *d)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&pp->lock, flags);
val = readl(priv->base + PCL_RCV_INTX);
val = readl(pcie->base + PCL_RCV_INTX);
val &= ~BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
writel(val, priv->base + PCL_RCV_INTX);
writel(val, pcie->base + PCL_RCV_INTX);
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
@@ -226,13 +227,13 @@ static void uniphier_pcie_irq_handler(struct irq_desc *desc)
{
struct pcie_port *pp = irq_desc_get_handler_data(desc);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned long reg;
u32 val, bit;
/* INT for debug */
val = readl(priv->base + PCL_RCV_INT);
val = readl(pcie->base + PCL_RCV_INT);
if (val & PCL_CFG_BW_MGT_STATUS)
dev_dbg(pci->dev, "Link Bandwidth Management Event\n");
@@ -243,16 +244,16 @@ static void uniphier_pcie_irq_handler(struct irq_desc *desc)
if (val & PCL_CFG_PME_MSI_STATUS)
dev_dbg(pci->dev, "PME Interrupt\n");
writel(val, priv->base + PCL_RCV_INT);
writel(val, pcie->base + PCL_RCV_INT);
/* INTx */
chained_irq_enter(chip, desc);
val = readl(priv->base + PCL_RCV_INTX);
val = readl(pcie->base + PCL_RCV_INTX);
reg = FIELD_GET(PCL_RCV_INTX_ALL_STATUS, val);
for_each_set_bit(bit, &reg, PCI_NUM_INTX)
generic_handle_domain_irq(priv->legacy_irq_domain, bit);
generic_handle_domain_irq(pcie->legacy_irq_domain, bit);
chained_irq_exit(chip, desc);
}
@@ -260,7 +261,7 @@ static void uniphier_pcie_irq_handler(struct irq_desc *desc)
static int uniphier_pcie_config_legacy_irq(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
struct device_node *np = pci->dev->of_node;
struct device_node *np_intc;
int ret = 0;
@@ -278,9 +279,9 @@ static int uniphier_pcie_config_legacy_irq(struct pcie_port *pp)
goto out_put_node;
}
priv->legacy_irq_domain = irq_domain_add_linear(np_intc, PCI_NUM_INTX,
pcie->legacy_irq_domain = irq_domain_add_linear(np_intc, PCI_NUM_INTX,
&uniphier_intx_domain_ops, pp);
if (!priv->legacy_irq_domain) {
if (!pcie->legacy_irq_domain) {
dev_err(pci->dev, "Failed to get INTx domain\n");
ret = -ENODEV;
goto out_put_node;
@@ -297,14 +298,14 @@ out_put_node:
static int uniphier_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
int ret;
ret = uniphier_pcie_config_legacy_irq(pp);
if (ret)
return ret;
uniphier_pcie_irq_enable(priv);
uniphier_pcie_irq_enable(pcie);
return 0;
}
@@ -313,36 +314,36 @@ static const struct dw_pcie_host_ops uniphier_pcie_host_ops = {
.host_init = uniphier_pcie_host_init,
};
static int uniphier_pcie_host_enable(struct uniphier_pcie_priv *priv)
static int uniphier_pcie_host_enable(struct uniphier_pcie *pcie)
{
int ret;
ret = clk_prepare_enable(priv->clk);
ret = clk_prepare_enable(pcie->clk);
if (ret)
return ret;
ret = reset_control_deassert(priv->rst);
ret = reset_control_deassert(pcie->rst);
if (ret)
goto out_clk_disable;
uniphier_pcie_init_rc(priv);
uniphier_pcie_init_rc(pcie);
ret = phy_init(priv->phy);
ret = phy_init(pcie->phy);
if (ret)
goto out_rst_assert;
ret = uniphier_pcie_wait_rc(priv);
ret = uniphier_pcie_wait_rc(pcie);
if (ret)
goto out_phy_exit;
return 0;
out_phy_exit:
phy_exit(priv->phy);
phy_exit(pcie->phy);
out_rst_assert:
reset_control_assert(priv->rst);
reset_control_assert(pcie->rst);
out_clk_disable:
clk_disable_unprepare(priv->clk);
clk_disable_unprepare(pcie->clk);
return ret;
}
@@ -356,41 +357,41 @@ static const struct dw_pcie_ops dw_pcie_ops = {
static int uniphier_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct uniphier_pcie_priv *priv;
struct uniphier_pcie *pcie;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
priv->pci.dev = dev;
priv->pci.ops = &dw_pcie_ops;
pcie->pci.dev = dev;
pcie->pci.ops = &dw_pcie_ops;
priv->base = devm_platform_ioremap_resource_byname(pdev, "link");
if (IS_ERR(priv->base))
return PTR_ERR(priv->base);
pcie->base = devm_platform_ioremap_resource_byname(pdev, "link");
if (IS_ERR(pcie->base))
return PTR_ERR(pcie->base);
priv->clk = devm_clk_get(dev, NULL);
if (IS_ERR(priv->clk))
return PTR_ERR(priv->clk);
pcie->clk = devm_clk_get(dev, NULL);
if (IS_ERR(pcie->clk))
return PTR_ERR(pcie->clk);
priv->rst = devm_reset_control_get_shared(dev, NULL);
if (IS_ERR(priv->rst))
return PTR_ERR(priv->rst);
pcie->rst = devm_reset_control_get_shared(dev, NULL);
if (IS_ERR(pcie->rst))
return PTR_ERR(pcie->rst);
priv->phy = devm_phy_optional_get(dev, "pcie-phy");
if (IS_ERR(priv->phy))
return PTR_ERR(priv->phy);
pcie->phy = devm_phy_optional_get(dev, "pcie-phy");
if (IS_ERR(pcie->phy))
return PTR_ERR(pcie->phy);
platform_set_drvdata(pdev, priv);
platform_set_drvdata(pdev, pcie);
ret = uniphier_pcie_host_enable(priv);
ret = uniphier_pcie_host_enable(pcie);
if (ret)
return ret;
priv->pci.pp.ops = &uniphier_pcie_host_ops;
pcie->pci.pp.ops = &uniphier_pcie_host_ops;
return dw_pcie_host_init(&priv->pci.pp);
return dw_pcie_host_init(&pcie->pci.pp);
}
static const struct of_device_id uniphier_pcie_match[] = {


@@ -34,31 +34,31 @@
#define PF_DBG_WE BIT(31)
#define PF_DBG_PABR BIT(27)
#define to_ls_pcie_g4(x) platform_get_drvdata((x)->pdev)
#define to_ls_g4_pcie(x) platform_get_drvdata((x)->pdev)
struct ls_pcie_g4 {
struct ls_g4_pcie {
struct mobiveil_pcie pci;
struct delayed_work dwork;
int irq;
};
static inline u32 ls_pcie_g4_pf_readl(struct ls_pcie_g4 *pcie, u32 off)
static inline u32 ls_g4_pcie_pf_readl(struct ls_g4_pcie *pcie, u32 off)
{
return ioread32(pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
}
static inline void ls_pcie_g4_pf_writel(struct ls_pcie_g4 *pcie,
static inline void ls_g4_pcie_pf_writel(struct ls_g4_pcie *pcie,
u32 off, u32 val)
{
iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
}
static int ls_pcie_g4_link_up(struct mobiveil_pcie *pci)
static int ls_g4_pcie_link_up(struct mobiveil_pcie *pci)
{
struct ls_pcie_g4 *pcie = to_ls_pcie_g4(pci);
struct ls_g4_pcie *pcie = to_ls_g4_pcie(pci);
u32 state;
state = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
state = ls_g4_pcie_pf_readl(pcie, PCIE_PF_DBG);
state = state & PF_DBG_LTSSM_MASK;
if (state == PF_DBG_LTSSM_L0)
@@ -67,14 +67,14 @@ static int ls_pcie_g4_link_up(struct mobiveil_pcie *pci)
return 0;
}
static void ls_pcie_g4_disable_interrupt(struct ls_pcie_g4 *pcie)
static void ls_g4_pcie_disable_interrupt(struct ls_g4_pcie *pcie)
{
struct mobiveil_pcie *mv_pci = &pcie->pci;
mobiveil_csr_writel(mv_pci, 0, PAB_INTP_AMBA_MISC_ENB);
}
static void ls_pcie_g4_enable_interrupt(struct ls_pcie_g4 *pcie)
static void ls_g4_pcie_enable_interrupt(struct ls_g4_pcie *pcie)
{
struct mobiveil_pcie *mv_pci = &pcie->pci;
u32 val;
@ -87,7 +87,7 @@ static void ls_pcie_g4_enable_interrupt(struct ls_pcie_g4 *pcie)
mobiveil_csr_writel(mv_pci, val, PAB_INTP_AMBA_MISC_ENB);
}
static int ls_pcie_g4_reinit_hw(struct ls_pcie_g4 *pcie)
static int ls_g4_pcie_reinit_hw(struct ls_g4_pcie *pcie)
{
struct mobiveil_pcie *mv_pci = &pcie->pci;
struct device *dev = &mv_pci->pdev->dev;
@@ -97,7 +97,7 @@ static int ls_pcie_g4_reinit_hw(struct ls_pcie_g4 *pcie)
/* Poll for pab_csb_reset to set and PAB activity to clear */
do {
usleep_range(10, 15);
val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_INT_STAT);
val = ls_g4_pcie_pf_readl(pcie, PCIE_PF_INT_STAT);
act_stat = mobiveil_csr_readl(mv_pci, PAB_ACTIVITY_STAT);
} while (((val & PF_INT_STAT_PABRST) == 0 || act_stat) && to--);
if (to < 0) {
@@ -106,22 +106,22 @@ static int ls_pcie_g4_reinit_hw(struct ls_pcie_g4 *pcie)
}
/* clear PEX_RESET bit in PEX_PF0_DBG register */
val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
val = ls_g4_pcie_pf_readl(pcie, PCIE_PF_DBG);
val |= PF_DBG_WE;
ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
ls_g4_pcie_pf_writel(pcie, PCIE_PF_DBG, val);
val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
val = ls_g4_pcie_pf_readl(pcie, PCIE_PF_DBG);
val |= PF_DBG_PABR;
ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
ls_g4_pcie_pf_writel(pcie, PCIE_PF_DBG, val);
val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
val = ls_g4_pcie_pf_readl(pcie, PCIE_PF_DBG);
val &= ~PF_DBG_WE;
ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
ls_g4_pcie_pf_writel(pcie, PCIE_PF_DBG, val);
mobiveil_host_init(mv_pci, true);
to = 100;
while (!ls_pcie_g4_link_up(mv_pci) && to--)
while (!ls_g4_pcie_link_up(mv_pci) && to--)
usleep_range(200, 250);
if (to < 0) {
dev_err(dev, "PCIe link training timeout\n");
@@ -131,9 +131,9 @@ static int ls_pcie_g4_reinit_hw(struct ls_pcie_g4 *pcie)
return 0;
}
static irqreturn_t ls_pcie_g4_isr(int irq, void *dev_id)
static irqreturn_t ls_g4_pcie_isr(int irq, void *dev_id)
{
struct ls_pcie_g4 *pcie = (struct ls_pcie_g4 *)dev_id;
struct ls_g4_pcie *pcie = (struct ls_g4_pcie *)dev_id;
struct mobiveil_pcie *mv_pci = &pcie->pci;
u32 val;
@@ -142,7 +142,7 @@ static irqreturn_t ls_pcie_g4_isr(int irq, void *dev_id)
return IRQ_NONE;
if (val & PAB_INTP_RESET) {
ls_pcie_g4_disable_interrupt(pcie);
ls_g4_pcie_disable_interrupt(pcie);
schedule_delayed_work(&pcie->dwork, msecs_to_jiffies(1));
}
@@ -151,9 +151,9 @@ static irqreturn_t ls_pcie_g4_isr(int irq, void *dev_id)
return IRQ_HANDLED;
}
static int ls_pcie_g4_interrupt_init(struct mobiveil_pcie *mv_pci)
static int ls_g4_pcie_interrupt_init(struct mobiveil_pcie *mv_pci)
{
struct ls_pcie_g4 *pcie = to_ls_pcie_g4(mv_pci);
struct ls_g4_pcie *pcie = to_ls_g4_pcie(mv_pci);
struct platform_device *pdev = mv_pci->pdev;
struct device *dev = &pdev->dev;
int ret;
@@ -162,7 +162,7 @@ static int ls_pcie_g4_interrupt_init(struct mobiveil_pcie *mv_pci)
if (pcie->irq < 0)
return pcie->irq;
ret = devm_request_irq(dev, pcie->irq, ls_pcie_g4_isr,
ret = devm_request_irq(dev, pcie->irq, ls_g4_pcie_isr,
IRQF_SHARED, pdev->name, pcie);
if (ret) {
dev_err(dev, "Can't register PCIe IRQ, errno = %d\n", ret);
@@ -172,11 +172,11 @@ static int ls_pcie_g4_interrupt_init(struct mobiveil_pcie *mv_pci)
return 0;
}
static void ls_pcie_g4_reset(struct work_struct *work)
static void ls_g4_pcie_reset(struct work_struct *work)
{
struct delayed_work *dwork = container_of(work, struct delayed_work,
work);
struct ls_pcie_g4 *pcie = container_of(dwork, struct ls_pcie_g4, dwork);
struct ls_g4_pcie *pcie = container_of(dwork, struct ls_g4_pcie, dwork);
struct mobiveil_pcie *mv_pci = &pcie->pci;
u16 ctrl;
@@ -184,26 +184,26 @@ static void ls_pcie_g4_reset(struct work_struct *work)
ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
mobiveil_csr_writew(mv_pci, ctrl, PCI_BRIDGE_CONTROL);
if (!ls_pcie_g4_reinit_hw(pcie))
if (!ls_g4_pcie_reinit_hw(pcie))
return;
ls_pcie_g4_enable_interrupt(pcie);
ls_g4_pcie_enable_interrupt(pcie);
}
static struct mobiveil_rp_ops ls_pcie_g4_rp_ops = {
.interrupt_init = ls_pcie_g4_interrupt_init,
static struct mobiveil_rp_ops ls_g4_pcie_rp_ops = {
.interrupt_init = ls_g4_pcie_interrupt_init,
};
static const struct mobiveil_pab_ops ls_pcie_g4_pab_ops = {
.link_up = ls_pcie_g4_link_up,
static const struct mobiveil_pab_ops ls_g4_pcie_pab_ops = {
.link_up = ls_g4_pcie_link_up,
};
static int __init ls_pcie_g4_probe(struct platform_device *pdev)
static int __init ls_g4_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct pci_host_bridge *bridge;
struct mobiveil_pcie *mv_pci;
struct ls_pcie_g4 *pcie;
struct ls_g4_pcie *pcie;
struct device_node *np = dev->of_node;
int ret;
@@ -220,13 +220,13 @@ static int __init ls_pcie_g4_probe(struct platform_device *pdev)
mv_pci = &pcie->pci;
mv_pci->pdev = pdev;
mv_pci->ops = &ls_pcie_g4_pab_ops;
mv_pci->rp.ops = &ls_pcie_g4_rp_ops;
mv_pci->ops = &ls_g4_pcie_pab_ops;
mv_pci->rp.ops = &ls_g4_pcie_rp_ops;
mv_pci->rp.bridge = bridge;
platform_set_drvdata(pdev, pcie);
INIT_DELAYED_WORK(&pcie->dwork, ls_pcie_g4_reset);
INIT_DELAYED_WORK(&pcie->dwork, ls_g4_pcie_reset);
ret = mobiveil_pcie_host_probe(mv_pci);
if (ret) {
@@ -234,22 +234,22 @@ static int __init ls_pcie_g4_probe(struct platform_device *pdev)
return ret;
}
ls_pcie_g4_enable_interrupt(pcie);
ls_g4_pcie_enable_interrupt(pcie);
return 0;
}
static const struct of_device_id ls_pcie_g4_of_match[] = {
static const struct of_device_id ls_g4_pcie_of_match[] = {
{ .compatible = "fsl,lx2160a-pcie", },
{ },
};
static struct platform_driver ls_pcie_g4_driver = {
static struct platform_driver ls_g4_pcie_driver = {
.driver = {
.name = "layerscape-pcie-gen4",
.of_match_table = ls_pcie_g4_of_match,
.of_match_table = ls_g4_pcie_of_match,
.suppress_bind_attrs = true,
},
};
builtin_platform_driver_probe(ls_pcie_g4_driver, ls_pcie_g4_probe);
builtin_platform_driver_probe(ls_g4_pcie_driver, ls_g4_pcie_probe);


@@ -115,6 +115,7 @@
#define PCIE_MSI_ADDR_HIGH_REG (CONTROL_BASE_ADDR + 0x54)
#define PCIE_MSI_STATUS_REG (CONTROL_BASE_ADDR + 0x58)
#define PCIE_MSI_MASK_REG (CONTROL_BASE_ADDR + 0x5C)
#define PCIE_MSI_ALL_MASK GENMASK(31, 0)
#define PCIE_MSI_PAYLOAD_REG (CONTROL_BASE_ADDR + 0x9C)
#define PCIE_MSI_DATA_MASK GENMASK(15, 0)
@@ -570,6 +571,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
/* Clear all interrupts */
advk_writel(pcie, PCIE_MSI_ALL_MASK, PCIE_MSI_STATUS_REG);
advk_writel(pcie, PCIE_ISR0_ALL_MASK, PCIE_ISR0_REG);
advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);
@@ -582,7 +584,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG);
/* Unmask all MSIs */
advk_writel(pcie, 0, PCIE_MSI_MASK_REG);
advk_writel(pcie, ~(u32)PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG);
/* Enable summary interrupt for GIC SPI source */
reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
@@ -872,11 +874,15 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
return PCI_BRIDGE_EMUL_HANDLED;
}
case PCI_CAP_LIST_ID:
case PCI_EXP_DEVCAP:
case PCI_EXP_DEVCTL:
case PCI_EXP_DEVCAP2:
case PCI_EXP_DEVCTL2:
case PCI_EXP_LNKCAP2:
case PCI_EXP_LNKCTL2:
*value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
return PCI_BRIDGE_EMUL_HANDLED;
default:
return PCI_BRIDGE_EMUL_NOT_HANDLED;
}
@ -890,10 +896,6 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
struct advk_pcie *pcie = bridge->data;
switch (reg) {
case PCI_EXP_DEVCTL:
advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg);
break;
case PCI_EXP_LNKCTL:
advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg);
if (new & PCI_EXP_LNKCTL_RL)
@ -915,6 +917,12 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
advk_writel(pcie, new, PCIE_ISR0_REG);
break;
case PCI_EXP_DEVCTL:
case PCI_EXP_DEVCTL2:
case PCI_EXP_LNKCTL2:
advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg);
break;
default:
break;
}
@ -953,6 +961,9 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
/* Support interrupt A for MSI feature */
bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
/* Aardvark HW provides PCIe Capability structure in version 2 */
bridge->pcie_conf.cap = cpu_to_le16(2);
/* Indicates support for Completion Retry Status */
bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
@ -1017,10 +1028,8 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
u32 reg;
int ret;
if (!advk_pcie_valid_device(pcie, bus, devfn)) {
*val = 0xffffffff;
if (!advk_pcie_valid_device(pcie, bus, devfn))
return PCIBIOS_DEVICE_NOT_FOUND;
}
if (pci_is_root_bus(bus))
return pci_bridge_emul_conf_read(&pcie->bridge, where,
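Hunks like the one above drop the per-driver `*val = 0xffffffff` fabrication because the config read wrappers now fabricate the error value centrally. As a hedged sketch of that mechanism: the macros below match the 5.17-era include/linux/pci.h, while the wrapper is a simplification (the real one also takes the config lock and validates the register offset):

#define PCI_ERROR_RESPONSE		(~0ULL)
#define PCI_SET_ERROR_RESPONSE(val)	(*(val) = ((typeof(**(val))) PCI_ERROR_RESPONSE))
#define PCI_POSSIBLE_ERROR(val)		((val) == ((typeof(val)) PCI_ERROR_RESPONSE))

/* Simplified read wrapper: fabricate all-ones data whenever the host
 * controller driver reports a failed read. */
int pci_bus_read_config_dword(struct pci_bus *bus, unsigned int devfn,
			      int where, u32 *val)
{
	int ret = bus->ops->read(bus, devfn, where, 4, val);

	if (ret != PCIBIOS_SUCCESSFUL)
		PCI_SET_ERROR_RESPONSE(val);
	return ret;
}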
@ -1383,7 +1392,7 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
msi_mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
msi_val = advk_readl(pcie, PCIE_MSI_STATUS_REG);
msi_status = msi_val & ~msi_mask;
msi_status = msi_val & ((~msi_mask) & PCIE_MSI_ALL_MASK);
for (msi_idx = 0; msi_idx < MSI_IRQ_NUM; msi_idx++) {
if (!(BIT(msi_idx) & msi_status))
@ -1535,8 +1544,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
* only PIO for issuing configuration transfers which does
* not use PCIe window configuration.
*/
if (type != IORESOURCE_MEM && type != IORESOURCE_MEM_64 &&
type != IORESOURCE_IO)
if (type != IORESOURCE_MEM && type != IORESOURCE_IO)
continue;
/*
@ -1544,8 +1552,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
* configuration is set to transparent memory access so it
* does not need window configuration.
*/
if ((type == IORESOURCE_MEM || type == IORESOURCE_MEM_64) &&
entry->offset == 0)
if (type == IORESOURCE_MEM && entry->offset == 0)
continue;
/*
@ -1677,20 +1684,64 @@ static int advk_pcie_remove(struct platform_device *pdev)
{
struct advk_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
u32 val;
int i;
/* Remove PCI bus with all devices */
pci_lock_rescan_remove();
pci_stop_root_bus(bridge->bus);
pci_remove_root_bus(bridge->bus);
pci_unlock_rescan_remove();
/* Disable Root Bridge I/O space, memory space and bus mastering */
val = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
val &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
advk_writel(pcie, val, PCIE_CORE_CMD_STATUS_REG);
/* Disable MSI */
val = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
val &= ~PCIE_CORE_CTRL2_MSI_ENABLE;
advk_writel(pcie, val, PCIE_CORE_CTRL2_REG);
/* Clear MSI address */
advk_writel(pcie, 0, PCIE_MSI_ADDR_LOW_REG);
advk_writel(pcie, 0, PCIE_MSI_ADDR_HIGH_REG);
/* Mask all interrupts */
advk_writel(pcie, PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG);
advk_writel(pcie, PCIE_ISR0_ALL_MASK, PCIE_ISR0_MASK_REG);
advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG);
advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_MASK_REG);
/* Clear all interrupts */
advk_writel(pcie, PCIE_MSI_ALL_MASK, PCIE_MSI_STATUS_REG);
advk_writel(pcie, PCIE_ISR0_ALL_MASK, PCIE_ISR0_REG);
advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);
/* Remove IRQ domains */
advk_pcie_remove_msi_irq_domain(pcie);
advk_pcie_remove_irq_domain(pcie);
/* Free config space for emulated root bridge */
pci_bridge_emul_cleanup(&pcie->bridge);
/* Assert PERST# signal which prepares PCIe card for power down */
if (pcie->reset_gpio)
gpiod_set_value_cansleep(pcie->reset_gpio, 1);
/* Disable link training */
val = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
val &= ~LINK_TRAINING_EN;
advk_writel(pcie, val, PCIE_CORE_CTRL0_REG);
/* Disable outbound address windows mapping */
for (i = 0; i < OB_WIN_COUNT; i++)
advk_pcie_disable_ob_win(pcie, i);
/* Disable phy */
advk_pcie_disable_phy(pcie);
return 0;
}
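The msi_status computation in the advk_pcie_handle_msi() hunk above filters out vectors that are pending but masked. A small user-space illustration of that masking arithmetic (the register values are invented for the example):

#include <stdint.h>
#include <stdio.h>

#define PCIE_MSI_ALL_MASK 0xffffffffu	/* GENMASK(31, 0) */

int main(void)
{
	uint32_t msi_val  = 0x80000005;	/* invented: pending status bits */
	uint32_t msi_mask = 0x80000004;	/* invented: 1 = vector masked */
	uint32_t msi_status = msi_val & ~msi_mask & PCIE_MSI_ALL_MASK;

	printf("%#x\n", msi_status);	/* 0x1: only unmasked+pending handled */
	return 0;
}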

View File

@ -43,13 +43,12 @@
#include <linux/pci-ecam.h>
#include <linux/delay.h>
#include <linux/semaphore.h>
#include <linux/irqdomain.h>
#include <asm/irqdomain.h>
#include <asm/apic.h>
#include <linux/irq.h>
#include <linux/msi.h>
#include <linux/hyperv.h>
#include <linux/refcount.h>
#include <linux/irqdomain.h>
#include <linux/acpi.h>
#include <asm/mshyperv.h>
/*
@ -583,6 +582,265 @@ struct hv_pci_compl {
static void hv_pci_onchannelcallback(void *context);
#ifdef CONFIG_X86
#define DELIVERY_MODE APIC_DELIVERY_MODE_FIXED
#define FLOW_HANDLER handle_edge_irq
#define FLOW_NAME "edge"
static int hv_pci_irqchip_init(void)
{
return 0;
}
static struct irq_domain *hv_pci_get_root_domain(void)
{
return x86_vector_domain;
}
static unsigned int hv_msi_get_int_vector(struct irq_data *data)
{
struct irq_cfg *cfg = irqd_cfg(data);
return cfg->vector;
}
static void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
struct msi_desc *msi_desc)
{
msi_entry->address.as_uint32 = msi_desc->msg.address_lo;
msi_entry->data.as_uint32 = msi_desc->msg.data;
}
static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
int nvec, msi_alloc_info_t *info)
{
return pci_msi_prepare(domain, dev, nvec, info);
}
#elif defined(CONFIG_ARM64)
/*
* SPI vectors to use for vPCI; the arch SPI range is [32, 1019], but we
* leave room at the start so SPIs can still be specified through ACPI,
* and we start at a power of two to satisfy the power-of-two multi-MSI
* alignment requirement.
*/
#define HV_PCI_MSI_SPI_START 64
#define HV_PCI_MSI_SPI_NR (1020 - HV_PCI_MSI_SPI_START)
#define DELIVERY_MODE 0
#define FLOW_HANDLER NULL
#define FLOW_NAME NULL
#define hv_msi_prepare NULL
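The SPI window above spans 1020 - 64 = 956 vectors. Starting at 64 matters because multi-MSI blocks must be power-of-two sized and aligned: bitmap_find_free_region() returns a 2^order-aligned index, and since HV_PCI_MSI_SPI_START is itself a power of two, hwirq = 64 + index keeps that alignment. A user-space sketch of the arithmetic, with get_count_order() re-implemented to mirror the kernel helper and an invented bitmap index:

#include <stdio.h>

/* mirrors the kernel's get_count_order(): smallest n with 2^n >= count */
static int get_count_order(unsigned int count)
{
	int order = 0;

	while ((1u << order) < count)
		order++;
	return order;
}

int main(void)
{
	unsigned int nvec = 3;		/* multi-MSI request, rounds up to 4 */
	int order = get_count_order(nvec);
	unsigned int index = 8;		/* bitmap region index, 2^order aligned */
	unsigned int hwirq = 64 + index; /* start is a power of two, so the */
					 /* SPI block stays 2^order aligned */

	printf("order=%d hwirq=%u..%u\n", order, hwirq, hwirq + nvec - 1);
	return 0;
}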
struct hv_pci_chip_data {
DECLARE_BITMAP(spi_map, HV_PCI_MSI_SPI_NR);
struct mutex map_lock;
};
/* Hyper-V vPCI MSI GIC IRQ domain */
static struct irq_domain *hv_msi_gic_irq_domain;
/* Hyper-V PCI MSI IRQ chip */
static struct irq_chip hv_arm64_msi_irq_chip = {
.name = "MSI",
.irq_set_affinity = irq_chip_set_affinity_parent,
.irq_eoi = irq_chip_eoi_parent,
.irq_mask = irq_chip_mask_parent,
.irq_unmask = irq_chip_unmask_parent
};
static unsigned int hv_msi_get_int_vector(struct irq_data *irqd)
{
return irqd->parent_data->hwirq;
}
static void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
struct msi_desc *msi_desc)
{
msi_entry->address = ((u64)msi_desc->msg.address_hi << 32) |
msi_desc->msg.address_lo;
msi_entry->data = msi_desc->msg.data;
}
/*
* @nr_bm_irqs: Indicates the number of IRQs that were allocated from
* the bitmap.
* @nr_dom_irqs: Indicates the number of IRQs that were allocated from
* the parent domain.
*/
static void hv_pci_vec_irq_free(struct irq_domain *domain,
unsigned int virq,
unsigned int nr_bm_irqs,
unsigned int nr_dom_irqs)
{
struct hv_pci_chip_data *chip_data = domain->host_data;
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
int first = d->hwirq - HV_PCI_MSI_SPI_START;
int i;
mutex_lock(&chip_data->map_lock);
bitmap_release_region(chip_data->spi_map,
first,
get_count_order(nr_bm_irqs));
mutex_unlock(&chip_data->map_lock);
for (i = 0; i < nr_dom_irqs; i++) {
if (i)
d = irq_domain_get_irq_data(domain, virq + i);
irq_domain_reset_irq_data(d);
}
irq_domain_free_irqs_parent(domain, virq, nr_dom_irqs);
}
static void hv_pci_vec_irq_domain_free(struct irq_domain *domain,
unsigned int virq,
unsigned int nr_irqs)
{
hv_pci_vec_irq_free(domain, virq, nr_irqs, nr_irqs);
}
static int hv_pci_vec_alloc_device_irq(struct irq_domain *domain,
unsigned int nr_irqs,
irq_hw_number_t *hwirq)
{
struct hv_pci_chip_data *chip_data = domain->host_data;
int index;
/* Find and allocate region from the SPI bitmap */
mutex_lock(&chip_data->map_lock);
index = bitmap_find_free_region(chip_data->spi_map,
HV_PCI_MSI_SPI_NR,
get_count_order(nr_irqs));
mutex_unlock(&chip_data->map_lock);
if (index < 0)
return -ENOSPC;
*hwirq = index + HV_PCI_MSI_SPI_START;
return 0;
}
static int hv_pci_vec_irq_gic_domain_alloc(struct irq_domain *domain,
unsigned int virq,
irq_hw_number_t hwirq)
{
struct irq_fwspec fwspec;
struct irq_data *d;
int ret;
fwspec.fwnode = domain->parent->fwnode;
fwspec.param_count = 2;
fwspec.param[0] = hwirq;
fwspec.param[1] = IRQ_TYPE_EDGE_RISING;
ret = irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec);
if (ret)
return ret;
/*
* Since the interrupt specifier is not coming from ACPI or DT, the
* trigger type will need to be set explicitly. Otherwise, it will be
* set to whatever is in the GIC configuration.
*/
d = irq_domain_get_irq_data(domain->parent, virq);
return d->chip->irq_set_type(d, IRQ_TYPE_EDGE_RISING);
}
static int hv_pci_vec_irq_domain_alloc(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs,
void *args)
{
irq_hw_number_t hwirq;
unsigned int i;
int ret;
ret = hv_pci_vec_alloc_device_irq(domain, nr_irqs, &hwirq);
if (ret)
return ret;
for (i = 0; i < nr_irqs; i++) {
ret = hv_pci_vec_irq_gic_domain_alloc(domain, virq + i,
hwirq + i);
if (ret) {
hv_pci_vec_irq_free(domain, virq, nr_irqs, i);
return ret;
}
irq_domain_set_hwirq_and_chip(domain, virq + i,
hwirq + i,
&hv_arm64_msi_irq_chip,
domain->host_data);
pr_debug("pID:%d vID:%u\n", (int)(hwirq + i), virq + i);
}
return 0;
}
/*
* Pick the first CPU as the temporary IRQ affinity, to be used while
* composing the MSI from the hypervisor. The GIC will eventually set
* the right affinity for the IRQ and the 'unmask' will retarget the
* interrupt to that CPU.
*/
static int hv_pci_vec_irq_domain_activate(struct irq_domain *domain,
struct irq_data *irqd, bool reserve)
{
int cpu = cpumask_first(cpu_present_mask);
irq_data_update_effective_affinity(irqd, cpumask_of(cpu));
return 0;
}
static const struct irq_domain_ops hv_pci_domain_ops = {
.alloc = hv_pci_vec_irq_domain_alloc,
.free = hv_pci_vec_irq_domain_free,
.activate = hv_pci_vec_irq_domain_activate,
};
static int hv_pci_irqchip_init(void)
{
static struct hv_pci_chip_data *chip_data;
struct fwnode_handle *fn = NULL;
int ret = -ENOMEM;
chip_data = kzalloc(sizeof(*chip_data), GFP_KERNEL);
if (!chip_data)
return ret;
mutex_init(&chip_data->map_lock);
fn = irq_domain_alloc_named_fwnode("hv_vpci_arm64");
if (!fn)
goto free_chip;
/*
* Once the IRQ domain is enabled it should not be removed, since
* there is no way to ensure that all the corresponding devices are
* gone and that no further interrupts will be generated.
*/
hv_msi_gic_irq_domain = acpi_irq_create_hierarchy(0, HV_PCI_MSI_SPI_NR,
fn, &hv_pci_domain_ops,
chip_data);
if (!hv_msi_gic_irq_domain) {
pr_err("Failed to create Hyper-V arm64 vPCI MSI IRQ domain\n");
goto free_chip;
}
return 0;
free_chip:
kfree(chip_data);
if (fn)
irq_domain_free_fwnode(fn);
return ret;
}
static struct irq_domain *hv_pci_get_root_domain(void)
{
return hv_msi_gic_irq_domain;
}
#endif /* CONFIG_ARM64 */
/**
* hv_pci_generic_compl() - Invoked for a completion packet
* @context: Set up by the sender of the packet.
@ -1191,17 +1449,11 @@ static void hv_msi_free(struct irq_domain *domain, struct msi_domain_info *info,
put_pcichild(hpdev);
}
static int hv_set_affinity(struct irq_data *data, const struct cpumask *dest,
bool force)
{
struct irq_data *parent = data->parent_data;
return parent->chip->irq_set_affinity(parent, dest, force);
}
static void hv_irq_mask(struct irq_data *data)
{
pci_msi_mask_irq(data);
if (data->parent_data->chip->irq_mask)
irq_chip_mask_parent(data);
}
/**
@ -1217,7 +1469,6 @@ static void hv_irq_mask(struct irq_data *data)
static void hv_irq_unmask(struct irq_data *data)
{
struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
struct irq_cfg *cfg = irqd_cfg(data);
struct hv_retarget_device_interrupt *params;
struct hv_pcibus_device *hbus;
struct cpumask *dest;
@ -1246,7 +1497,7 @@ static void hv_irq_unmask(struct irq_data *data)
(hbus->hdev->dev_instance.b[7] << 8) |
(hbus->hdev->dev_instance.b[6] & 0xf8) |
PCI_FUNC(pdev->devfn);
params->int_target.vector = cfg->vector;
params->int_target.vector = hv_msi_get_int_vector(data);
/*
* Honoring apic->delivery_mode set to APIC_DELIVERY_MODE_FIXED by
@ -1319,6 +1570,8 @@ exit_unlock:
dev_err(&hbus->hdev->device,
"%s() failed: %#llx", __func__, res);
if (data->parent_data->chip->irq_unmask)
irq_chip_unmask_parent(data);
pci_msi_unmask_irq(data);
}
@ -1347,7 +1600,7 @@ static u32 hv_compose_msi_req_v1(
int_pkt->wslot.slot = slot;
int_pkt->int_desc.vector = vector;
int_pkt->int_desc.vector_count = 1;
int_pkt->int_desc.delivery_mode = APIC_DELIVERY_MODE_FIXED;
int_pkt->int_desc.delivery_mode = DELIVERY_MODE;
/*
* Create MSI w/ dummy vCPU set, overwritten by subsequent retarget in
@ -1377,7 +1630,7 @@ static u32 hv_compose_msi_req_v2(
int_pkt->wslot.slot = slot;
int_pkt->int_desc.vector = vector;
int_pkt->int_desc.vector_count = 1;
int_pkt->int_desc.delivery_mode = APIC_DELIVERY_MODE_FIXED;
int_pkt->int_desc.delivery_mode = DELIVERY_MODE;
cpu = hv_compose_msi_req_get_cpu(affinity);
int_pkt->int_desc.processor_array[0] =
hv_cpu_number_to_vp_number(cpu);
@ -1397,7 +1650,7 @@ static u32 hv_compose_msi_req_v3(
int_pkt->int_desc.vector = vector;
int_pkt->int_desc.reserved = 0;
int_pkt->int_desc.vector_count = 1;
int_pkt->int_desc.delivery_mode = APIC_DELIVERY_MODE_FIXED;
int_pkt->int_desc.delivery_mode = DELIVERY_MODE;
cpu = hv_compose_msi_req_get_cpu(affinity);
int_pkt->int_desc.processor_array[0] =
hv_cpu_number_to_vp_number(cpu);
@ -1419,7 +1672,6 @@ static u32 hv_compose_msi_req_v3(
*/
static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct irq_cfg *cfg = irqd_cfg(data);
struct hv_pcibus_device *hbus;
struct vmbus_channel *channel;
struct hv_pci_dev *hpdev;
@ -1470,7 +1722,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1,
dest,
hpdev->desc.win_slot.slot,
cfg->vector);
hv_msi_get_int_vector(data));
break;
case PCI_PROTOCOL_VERSION_1_2:
@ -1478,14 +1730,14 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2,
dest,
hpdev->desc.win_slot.slot,
cfg->vector);
hv_msi_get_int_vector(data));
break;
case PCI_PROTOCOL_VERSION_1_4:
size = hv_compose_msi_req_v3(&ctxt.int_pkts.v3,
dest,
hpdev->desc.win_slot.slot,
cfg->vector);
hv_msi_get_int_vector(data));
break;
default:
@ -1594,14 +1846,18 @@ return_null_message:
static struct irq_chip hv_msi_irq_chip = {
.name = "Hyper-V PCIe MSI",
.irq_compose_msi_msg = hv_compose_msi_msg,
.irq_set_affinity = hv_set_affinity,
.irq_set_affinity = irq_chip_set_affinity_parent,
#ifdef CONFIG_X86
.irq_ack = irq_chip_ack_parent,
#elif defined(CONFIG_ARM64)
.irq_eoi = irq_chip_eoi_parent,
#endif
.irq_mask = hv_irq_mask,
.irq_unmask = hv_irq_unmask,
};
static struct msi_domain_ops hv_msi_ops = {
.msi_prepare = pci_msi_prepare,
.msi_prepare = hv_msi_prepare,
.msi_free = hv_msi_free,
};
@ -1625,12 +1881,12 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
hbus->msi_info.flags = (MSI_FLAG_USE_DEF_DOM_OPS |
MSI_FLAG_USE_DEF_CHIP_OPS | MSI_FLAG_MULTI_PCI_MSI |
MSI_FLAG_PCI_MSIX);
hbus->msi_info.handler = handle_edge_irq;
hbus->msi_info.handler_name = "edge";
hbus->msi_info.handler = FLOW_HANDLER;
hbus->msi_info.handler_name = FLOW_NAME;
hbus->msi_info.data = hbus;
hbus->irq_domain = pci_msi_create_irq_domain(hbus->fwnode,
&hbus->msi_info,
x86_vector_domain);
hv_pci_get_root_domain());
if (!hbus->irq_domain) {
dev_err(&hbus->hdev->device,
"Failed to build an MSI IRQ domain\n");
@ -1774,7 +2030,7 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
* If the memory enable bit is already set, Hyper-V silently ignores
* the below BAR updates, and the related PCI device driver cannot
* work, because reading from the device register(s) always returns
* 0xFFFFFFFF.
* 0xFFFFFFFF (PCI_ERROR_RESPONSE).
*/
list_for_each_entry(hpdev, &hbus->children, list_entry) {
_hv_pcifront_read_config(hpdev, PCI_COMMAND, 2, &command);
@ -3547,9 +3803,15 @@ static void __exit exit_hv_pci_drv(void)
static int __init init_hv_pci_drv(void)
{
int ret;
if (!hv_is_hyperv_initialized())
return -ENODEV;
ret = hv_pci_irqchip_init();
if (ret)
return ret;
/* Set the invalid domain number's bit, so it will not be used */
set_bit(HVPCI_DOM_INVALID, hvpci_dom_map);

View File

@ -6,6 +6,7 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/clk.h>
#include <linux/delay.h>
@ -51,10 +52,14 @@
PCIE_CONF_FUNC(PCI_FUNC(devfn)) | PCIE_CONF_REG(where) | \
PCIE_CONF_ADDR_EN)
#define PCIE_CONF_DATA_OFF 0x18fc
#define PCIE_INT_CAUSE_OFF 0x1900
#define PCIE_INT_PM_PME BIT(28)
#define PCIE_MASK_OFF 0x1910
#define PCIE_MASK_ENABLE_INTS 0x0f000000
#define PCIE_CTRL_OFF 0x1a00
#define PCIE_CTRL_X1_MODE 0x0001
#define PCIE_CTRL_RC_MODE BIT(1)
#define PCIE_CTRL_MASTER_HOT_RESET BIT(24)
#define PCIE_STAT_OFF 0x1a04
#define PCIE_STAT_BUS 0xff00
#define PCIE_STAT_DEV 0x1f0000
@ -125,6 +130,11 @@ static bool mvebu_pcie_link_up(struct mvebu_pcie_port *port)
return !(mvebu_readl(port, PCIE_STAT_OFF) & PCIE_STAT_LINK_DOWN);
}
static u8 mvebu_pcie_get_local_bus_nr(struct mvebu_pcie_port *port)
{
return (mvebu_readl(port, PCIE_STAT_OFF) & PCIE_STAT_BUS) >> 8;
}
static void mvebu_pcie_set_local_bus_nr(struct mvebu_pcie_port *port, int nr)
{
u32 stat;
@ -145,6 +155,30 @@ static void mvebu_pcie_set_local_dev_nr(struct mvebu_pcie_port *port, int nr)
mvebu_writel(port, stat, PCIE_STAT_OFF);
}
static void mvebu_pcie_disable_wins(struct mvebu_pcie_port *port)
{
int i;
mvebu_writel(port, 0, PCIE_BAR_LO_OFF(0));
mvebu_writel(port, 0, PCIE_BAR_HI_OFF(0));
for (i = 1; i < 3; i++) {
mvebu_writel(port, 0, PCIE_BAR_CTRL_OFF(i));
mvebu_writel(port, 0, PCIE_BAR_LO_OFF(i));
mvebu_writel(port, 0, PCIE_BAR_HI_OFF(i));
}
for (i = 0; i < 5; i++) {
mvebu_writel(port, 0, PCIE_WIN04_CTRL_OFF(i));
mvebu_writel(port, 0, PCIE_WIN04_BASE_OFF(i));
mvebu_writel(port, 0, PCIE_WIN04_REMAP_OFF(i));
}
mvebu_writel(port, 0, PCIE_WIN5_CTRL_OFF);
mvebu_writel(port, 0, PCIE_WIN5_BASE_OFF);
mvebu_writel(port, 0, PCIE_WIN5_REMAP_OFF);
}
/*
* Setup PCIE BARs and Address Decode Wins:
* BAR[0] -> internal registers (needed for MSI)
@ -161,21 +195,7 @@ static void mvebu_pcie_setup_wins(struct mvebu_pcie_port *port)
dram = mv_mbus_dram_info();
/* First, disable and clear BARs and windows. */
for (i = 1; i < 3; i++) {
mvebu_writel(port, 0, PCIE_BAR_CTRL_OFF(i));
mvebu_writel(port, 0, PCIE_BAR_LO_OFF(i));
mvebu_writel(port, 0, PCIE_BAR_HI_OFF(i));
}
for (i = 0; i < 5; i++) {
mvebu_writel(port, 0, PCIE_WIN04_CTRL_OFF(i));
mvebu_writel(port, 0, PCIE_WIN04_BASE_OFF(i));
mvebu_writel(port, 0, PCIE_WIN04_REMAP_OFF(i));
}
mvebu_writel(port, 0, PCIE_WIN5_CTRL_OFF);
mvebu_writel(port, 0, PCIE_WIN5_BASE_OFF);
mvebu_writel(port, 0, PCIE_WIN5_REMAP_OFF);
mvebu_pcie_disable_wins(port);
/* Setup windows for DDR banks. Count total DDR size on the fly. */
size = 0;
@ -213,18 +233,47 @@ static void mvebu_pcie_setup_wins(struct mvebu_pcie_port *port)
static void mvebu_pcie_setup_hw(struct mvebu_pcie_port *port)
{
u32 cmd, mask;
u32 ctrl, cmd, dev_rev, mask;
/* Setup PCIe controller to Root Complex mode. */
ctrl = mvebu_readl(port, PCIE_CTRL_OFF);
ctrl |= PCIE_CTRL_RC_MODE;
mvebu_writel(port, ctrl, PCIE_CTRL_OFF);
/* Disable Root Bridge I/O space, memory space and bus mastering. */
cmd = mvebu_readl(port, PCIE_CMD_OFF);
cmd &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
mvebu_writel(port, cmd, PCIE_CMD_OFF);
/*
* Change the Class Code of the PCI Bridge device to PCI Bridge
* (0x6004), because the default value is Memory controller (0x5080).
*
* Note that this mvebu PCI Bridge does not have a compliant Type 1
* Configuration Space: its Header Type is reported as Type 0 and its
* config space has the Type 0 layout.
*
* Moreover, the Type 0 BAR registers (ranges 0x10 - 0x28 and
* 0x30 - 0x34) have the same format in Marvell's specification as in
* the PCIe specification, but their meaning is totally different:
* they are aliased into internal mvebu registers (e.g.
* PCIE_BAR_LO_OFF) and must not be changed or reconfigured by PCI
* device drivers.
*
* Therefore the driver uses an emulated PCI Bridge, which forwards
* config space accesses to internal mvebu registers or to an
* emulated configuration buffer. The driver accesses these registers
* directly for simplicity, but they could also be reached via the
* standard mvebu mechanism for accessing PCI config space.
*/
dev_rev = mvebu_readl(port, PCIE_DEV_REV_OFF);
dev_rev &= ~0xffffff00;
dev_rev |= (PCI_CLASS_BRIDGE_PCI << 8) << 8;
mvebu_writel(port, dev_rev, PCIE_DEV_REV_OFF);
/* Point PCIe unit MBUS decode windows to DRAM space. */
mvebu_pcie_setup_wins(port);
/* Master + slave enable. */
cmd = mvebu_readl(port, PCIE_CMD_OFF);
cmd |= PCI_COMMAND_IO;
cmd |= PCI_COMMAND_MEMORY;
cmd |= PCI_COMMAND_MASTER;
mvebu_writel(port, cmd, PCIE_CMD_OFF);
/* Enable interrupt lines A-D. */
mask = mvebu_readl(port, PCIE_MASK_OFF);
mask |= PCIE_MASK_ENABLE_INTS;
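For the class code rewrite above: PCIE_DEV_REV_OFF carries the class code in bits [31:8] and the revision ID in bits [7:0], so PCI_CLASS_BRIDGE_PCI (0x0604) must be shifted into place twice. A stand-alone sketch with an invented revision value:

#include <stdint.h>
#include <stdio.h>

#define PCI_CLASS_BRIDGE_PCI 0x0604	/* base class 0x06, sub-class 0x04 */

int main(void)
{
	uint32_t dev_rev = 0x508000a1;	/* invented: Memory controller, rev 0xa1 */

	dev_rev &= ~0xffffff00;			     /* clear old class code */
	dev_rev |= (PCI_CLASS_BRIDGE_PCI << 8) << 8; /* 0x0604 -> 0x06040000 */
	printf("%#010x\n", dev_rev);		     /* 0x060400a1, revision kept */
	return 0;
}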
@ -250,6 +299,9 @@ static int mvebu_pcie_hw_rd_conf(struct mvebu_pcie_port *port,
case 4:
*val = readl_relaxed(conf_data);
break;
default:
*val = 0xffffffff;
return PCIBIOS_BAD_REGISTER_NUMBER;
}
return PCIBIOS_SUCCESSFUL;
@ -303,7 +355,7 @@ static void mvebu_pcie_del_windows(struct mvebu_pcie_port *port,
* areas each having a power of two size. We start from the largest
* one (i.e. the highest order bit set in the size).
*/
static void mvebu_pcie_add_windows(struct mvebu_pcie_port *port,
static int mvebu_pcie_add_windows(struct mvebu_pcie_port *port,
unsigned int target, unsigned int attribute,
phys_addr_t base, size_t size,
phys_addr_t remap)
@ -324,7 +376,7 @@ static void mvebu_pcie_add_windows(struct mvebu_pcie_port *port,
&base, &end, ret);
mvebu_pcie_del_windows(port, base - size_mapped,
size_mapped);
return;
return ret;
}
size -= sz;
@ -333,16 +385,20 @@ static void mvebu_pcie_add_windows(struct mvebu_pcie_port *port,
if (remap != MVEBU_MBUS_NO_REMAP)
remap += sz;
}
return 0;
}
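The splitting strategy described before mvebu_pcie_add_windows() can be tried out in isolation. A user-space sketch with invented base/size values, peeling off the largest power-of-two chunk on each iteration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t base = 0xe0000000, size = 0x00b00000;	/* invented example */

	while (size) {
		/* largest power of two <= remaining size */
		uint64_t sz = 1ULL << (63 - __builtin_clzll(size));

		printf("window %#llx + %#llx\n",
		       (unsigned long long)base, (unsigned long long)sz);
		base += sz;
		size -= sz;
	}
	return 0;	/* prints 8 MiB, 2 MiB and 1 MiB windows */
}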
static void mvebu_pcie_set_window(struct mvebu_pcie_port *port,
static int mvebu_pcie_set_window(struct mvebu_pcie_port *port,
unsigned int target, unsigned int attribute,
const struct mvebu_pcie_window *desired,
struct mvebu_pcie_window *cur)
{
int ret;
if (desired->base == cur->base && desired->remap == cur->remap &&
desired->size == cur->size)
return;
return 0;
if (cur->size != 0) {
mvebu_pcie_del_windows(port, cur->base, cur->size);
@ -357,31 +413,35 @@ static void mvebu_pcie_set_window(struct mvebu_pcie_port *port,
}
if (desired->size == 0)
return;
return 0;
ret = mvebu_pcie_add_windows(port, target, attribute, desired->base,
desired->size, desired->remap);
if (ret) {
cur->size = 0;
cur->base = 0;
return ret;
}
mvebu_pcie_add_windows(port, target, attribute, desired->base,
desired->size, desired->remap);
*cur = *desired;
return 0;
}
static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
static int mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
{
struct mvebu_pcie_window desired = {};
struct pci_bridge_emul_conf *conf = &port->bridge.conf;
/* Are the new iobase/iolimit values invalid? */
if (conf->iolimit < conf->iobase ||
conf->iolimitupper < conf->iobaseupper ||
!(conf->command & PCI_COMMAND_IO)) {
mvebu_pcie_set_window(port, port->io_target, port->io_attr,
&desired, &port->iowin);
return;
}
conf->iolimitupper < conf->iobaseupper)
return mvebu_pcie_set_window(port, port->io_target, port->io_attr,
&desired, &port->iowin);
if (!mvebu_has_ioport(port)) {
dev_WARN(&port->pcie->pdev->dev,
"Attempt to set IO when IO is disabled\n");
return;
return -EOPNOTSUPP;
}
/*
@ -399,22 +459,19 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
desired.remap) +
1;
mvebu_pcie_set_window(port, port->io_target, port->io_attr, &desired,
&port->iowin);
return mvebu_pcie_set_window(port, port->io_target, port->io_attr, &desired,
&port->iowin);
}
static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
static int mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
{
struct mvebu_pcie_window desired = {.remap = MVEBU_MBUS_NO_REMAP};
struct pci_bridge_emul_conf *conf = &port->bridge.conf;
/* Are the new membase/memlimit values invalid? */
if (conf->memlimit < conf->membase ||
!(conf->command & PCI_COMMAND_MEMORY)) {
mvebu_pcie_set_window(port, port->mem_target, port->mem_attr,
&desired, &port->memwin);
return;
}
if (conf->memlimit < conf->membase)
return mvebu_pcie_set_window(port, port->mem_target, port->mem_attr,
&desired, &port->memwin);
/*
* We read the PCI-to-PCI bridge emulated registers, and
@ -426,8 +483,56 @@ static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
desired.size = (((conf->memlimit & 0xFFF0) << 16) | 0xFFFFF) -
desired.base + 1;
mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired,
&port->memwin);
return mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired,
&port->memwin);
}
static pci_bridge_emul_read_status_t
mvebu_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge,
int reg, u32 *value)
{
struct mvebu_pcie_port *port = bridge->data;
switch (reg) {
case PCI_COMMAND:
*value = mvebu_readl(port, PCIE_CMD_OFF);
break;
case PCI_PRIMARY_BUS: {
/*
* Of the whole 32-bit register, only the secondary bus number (the
* mvebu local bus number) is read from HW. The other bits are
* retrieved only from the emulated config buffer.
*/
__le32 *cfgspace = (__le32 *)&bridge->conf;
u32 val = le32_to_cpu(cfgspace[PCI_PRIMARY_BUS / 4]);
val &= ~0xff00;
val |= mvebu_pcie_get_local_bus_nr(port) << 8;
*value = val;
break;
}
case PCI_INTERRUPT_LINE: {
/*
* Of the whole 32-bit register, only one bit is read from HW:
* PCI_BRIDGE_CTL_BUS_RESET. The other bits are retrieved only from
* the emulated config buffer.
*/
__le32 *cfgspace = (__le32 *)&bridge->conf;
u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]);
if (mvebu_readl(port, PCIE_CTRL_OFF) & PCIE_CTRL_MASTER_HOT_RESET)
val |= PCI_BRIDGE_CTL_BUS_RESET << 16;
else
val &= ~(PCI_BRIDGE_CTL_BUS_RESET << 16);
*value = val;
break;
}
default:
return PCI_BRIDGE_EMUL_NOT_HANDLED;
}
return PCI_BRIDGE_EMUL_HANDLED;
}
static pci_bridge_emul_read_status_t
@ -442,9 +547,7 @@ mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
break;
case PCI_EXP_DEVCTL:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL) &
~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
break;
case PCI_EXP_LNKCAP:
@ -468,6 +571,18 @@ mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
*value = mvebu_readl(port, PCIE_RC_RTSTA);
break;
case PCI_EXP_DEVCAP2:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCAP2);
break;
case PCI_EXP_DEVCTL2:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL2);
break;
case PCI_EXP_LNKCTL2:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL2);
break;
default:
return PCI_BRIDGE_EMUL_NOT_HANDLED;
}
@ -484,39 +599,62 @@ mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
switch (reg) {
case PCI_COMMAND:
{
if (!mvebu_has_ioport(port))
conf->command &= ~PCI_COMMAND_IO;
if ((old ^ new) & PCI_COMMAND_IO)
mvebu_pcie_handle_iobase_change(port);
if ((old ^ new) & PCI_COMMAND_MEMORY)
mvebu_pcie_handle_membase_change(port);
if (!mvebu_has_ioport(port)) {
conf->command = cpu_to_le16(
le16_to_cpu(conf->command) & ~PCI_COMMAND_IO);
new &= ~PCI_COMMAND_IO;
}
mvebu_writel(port, new, PCIE_CMD_OFF);
break;
}
case PCI_IO_BASE:
/*
* We keep bit 1 set; it is a read-only bit that indicates
* 32-bit I/O addressing support.
*/
conf->iobase |= PCI_IO_RANGE_TYPE_32;
conf->iolimit |= PCI_IO_RANGE_TYPE_32;
mvebu_pcie_handle_iobase_change(port);
if ((mask & 0xffff) && mvebu_pcie_handle_iobase_change(port)) {
/* On error disable IO range */
conf->iobase &= ~0xf0;
conf->iolimit &= ~0xf0;
conf->iobaseupper = cpu_to_le16(0x0000);
conf->iolimitupper = cpu_to_le16(0x0000);
if (mvebu_has_ioport(port))
conf->iobase |= 0xf0;
}
break;
case PCI_MEMORY_BASE:
mvebu_pcie_handle_membase_change(port);
if (mvebu_pcie_handle_membase_change(port)) {
/* On error disable mem range */
conf->membase = cpu_to_le16(le16_to_cpu(conf->membase) & ~0xfff0);
conf->memlimit = cpu_to_le16(le16_to_cpu(conf->memlimit) & ~0xfff0);
conf->membase = cpu_to_le16(le16_to_cpu(conf->membase) | 0xfff0);
}
break;
case PCI_IO_BASE_UPPER16:
mvebu_pcie_handle_iobase_change(port);
if (mvebu_pcie_handle_iobase_change(port)) {
/* On error disable IO range */
conf->iobase &= ~0xf0;
conf->iolimit &= ~0xf0;
conf->iobaseupper = cpu_to_le16(0x0000);
conf->iolimitupper = cpu_to_le16(0x0000);
if (mvebu_has_ioport(port))
conf->iobase |= 0xf0;
}
break;
case PCI_PRIMARY_BUS:
mvebu_pcie_set_local_bus_nr(port, conf->secondary_bus);
if (mask & 0xff00)
mvebu_pcie_set_local_bus_nr(port, conf->secondary_bus);
break;
case PCI_INTERRUPT_LINE:
if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
u32 ctrl = mvebu_readl(port, PCIE_CTRL_OFF);
if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16))
ctrl |= PCIE_CTRL_MASTER_HOT_RESET;
else
ctrl &= ~PCIE_CTRL_MASTER_HOT_RESET;
mvebu_writel(port, ctrl, PCIE_CTRL_OFF);
}
break;
default:
@ -532,13 +670,6 @@ mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
switch (reg) {
case PCI_EXP_DEVCTL:
/*
* Armada370 data says these bits must always
* be zero when in root complex mode.
*/
new &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
break;
@ -555,12 +686,31 @@ mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
break;
case PCI_EXP_RTSTA:
mvebu_writel(port, new, PCIE_RC_RTSTA);
/*
* PME Status bit in Root Status Register (PCIE_RC_RTSTA)
* is read-only and can be cleared only by writing 0b to the
* Interrupt Cause RW0C register (PCIE_INT_CAUSE_OFF). So
* clear PME via Interrupt Cause.
*/
if (new & PCI_EXP_RTSTA_PME)
mvebu_writel(port, ~PCIE_INT_PM_PME, PCIE_INT_CAUSE_OFF);
break;
case PCI_EXP_DEVCTL2:
mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL2);
break;
case PCI_EXP_LNKCTL2:
mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL2);
break;
default:
break;
}
}
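The PCI_EXP_RTSTA handling above relies on RW0C (write-zero-to-clear) semantics: writing ~PCIE_INT_PM_PME clears only the PME bit and leaves other pending causes alone. A user-space model of that register behaviour (the pending bits are invented for the example):

#include <stdint.h>
#include <stdio.h>

#define PCIE_INT_PM_PME (1u << 28)

int main(void)
{
	uint32_t cause = PCIE_INT_PM_PME | (1u << 3);	/* PME + one other cause */
	uint32_t write_val = ~PCIE_INT_PM_PME;		/* 0 only at the PME bit */

	cause &= write_val;	/* RW0C: a bit is cleared where 0 was written */
	printf("%#x\n", cause);	/* 0x8: the other pending cause survives */
	return 0;
}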
static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
.read_base = mvebu_pci_bridge_emul_base_conf_read,
.write_base = mvebu_pci_bridge_emul_base_conf_write,
.read_pcie = mvebu_pci_bridge_emul_pcie_conf_read,
.write_pcie = mvebu_pci_bridge_emul_pcie_conf_write,
@ -570,9 +720,11 @@ static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
* Initialize the configuration space of the PCI-to-PCI bridge
* associated with the given PCIe interface.
*/
static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
static int mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
{
struct pci_bridge_emul *bridge = &port->bridge;
u32 pcie_cap = mvebu_readl(port, PCIE_CAP_PCIEXP);
u8 pcie_cap_ver = ((pcie_cap >> 16) & PCI_EXP_FLAGS_VERS);
bridge->conf.vendor = PCI_VENDOR_ID_MARVELL;
bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
@ -585,11 +737,17 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
}
/*
* Older mvebu hardware provides the PCIe Capability structure only in
* version 1; newer hardware provides it in version 2.
*/
bridge->pcie_conf.cap = cpu_to_le16(pcie_cap_ver);
bridge->has_pcie = true;
bridge->data = port;
bridge->ops = &mvebu_pci_bridge_emul_ops;
pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
return pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
}
static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
@ -606,6 +764,9 @@ static struct mvebu_pcie_port *mvebu_pcie_find_port(struct mvebu_pcie *pcie,
for (i = 0; i < pcie->nports; i++) {
struct mvebu_pcie_port *port = &pcie->ports[i];
if (!port->base)
continue;
if (bus->number == 0 && port->devfn == devfn)
return port;
if (bus->number != 0 &&
@ -653,20 +814,16 @@ static int mvebu_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
int ret;
port = mvebu_pcie_find_port(pcie, bus, devfn);
if (!port) {
*val = 0xffffffff;
if (!port)
return PCIBIOS_DEVICE_NOT_FOUND;
}
/* Access the emulated PCI-to-PCI bridge */
if (bus->number == 0)
return pci_bridge_emul_conf_read(&port->bridge, where,
size, val);
if (!mvebu_pcie_link_up(port)) {
*val = 0xffffffff;
if (!mvebu_pcie_link_up(port))
return PCIBIOS_DEVICE_NOT_FOUND;
}
/* Access the real PCIe interface */
ret = mvebu_pcie_hw_rd_conf(port, bus, devfn,
@ -680,6 +837,15 @@ static struct pci_ops mvebu_pcie_ops = {
.write = mvebu_pcie_wr_conf,
};
static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
/* Interrupt support on mvebu emulated bridges is not implemented yet */
if (dev->bus->number == 0)
return 0; /* Proper return code 0 == NO_IRQ */
return of_irq_parse_and_map_pci(dev, slot, pin);
}
static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev,
const struct resource *res,
resource_size_t start,
@ -781,6 +947,8 @@ static int mvebu_pcie_suspend(struct device *dev)
pcie = dev_get_drvdata(dev);
for (i = 0; i < pcie->nports; i++) {
struct mvebu_pcie_port *port = pcie->ports + i;
if (!port->base)
continue;
port->saved_pcie_stat = mvebu_readl(port, PCIE_STAT_OFF);
}
@ -795,6 +963,8 @@ static int mvebu_pcie_resume(struct device *dev)
pcie = dev_get_drvdata(dev);
for (i = 0; i < pcie->nports; i++) {
struct mvebu_pcie_port *port = pcie->ports + i;
if (!port->base)
continue;
mvebu_writel(port, port->saved_pcie_stat, PCIE_STAT_OFF);
mvebu_pcie_setup_hw(port);
}
@ -838,6 +1008,11 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
port->devfn = of_pci_get_devfn(child);
if (port->devfn < 0)
goto skip;
if (PCI_FUNC(port->devfn) != 0) {
dev_err(dev, "%s: invalid function number, must be zero\n",
port->name);
goto skip;
}
ret = mvebu_get_tgt_attr(dev->of_node, port->devfn, IORESOURCE_MEM,
&port->mem_target, &port->mem_attr);
@ -992,6 +1167,10 @@ static int mvebu_pcie_parse_request_resources(struct mvebu_pcie *pcie)
resource_size(&pcie->io) - 1);
pcie->realio.name = "PCI I/O";
ret = devm_pci_remap_iospace(dev, &pcie->realio, pcie->io.start);
if (ret)
return ret;
pci_add_resource(&bridge->windows, &pcie->realio);
ret = devm_request_resource(dev, &ioport_resource, &pcie->realio);
if (ret)
@ -1001,54 +1180,6 @@ static int mvebu_pcie_parse_request_resources(struct mvebu_pcie *pcie)
return 0;
}
/*
* This is a copy of pci_host_probe(), except that it does the I/O
* remap as the last step, once we are sure we won't fail.
*
* It should be removed once the I/O remap error handling issue has
* been sorted out.
*/
static int mvebu_pci_host_probe(struct pci_host_bridge *bridge)
{
struct mvebu_pcie *pcie;
struct pci_bus *bus, *child;
int ret;
ret = pci_scan_root_bus_bridge(bridge);
if (ret < 0) {
dev_err(bridge->dev.parent, "Scanning root bridge failed");
return ret;
}
pcie = pci_host_bridge_priv(bridge);
if (resource_size(&pcie->io) != 0) {
unsigned int i;
for (i = 0; i < resource_size(&pcie->realio); i += SZ_64K)
pci_ioremap_io(i, pcie->io.start + i);
}
bus = bridge->bus;
/*
* We insert PCI resources into the iomem_resource and
* ioport_resource trees in either pci_bus_claim_resources()
* or pci_bus_assign_resources().
*/
if (pci_has_flag(PCI_PROBE_ONLY)) {
pci_bus_claim_resources(bus);
} else {
pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
}
pci_bus_add_devices(bus);
return 0;
}
static int mvebu_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -1112,9 +1243,93 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
continue;
}
ret = mvebu_pci_bridge_emul_init(port);
if (ret < 0) {
dev_err(dev, "%s: cannot init emulated bridge\n",
port->name);
devm_iounmap(dev, port->base);
port->base = NULL;
mvebu_pcie_powerdown(port);
continue;
}
/*
* The PCIe topology exported by mvebu hw is quite complicated. In
* reality it is something like N fully independent host bridges,
* where each host bridge has one PCIe Root Port (which acts as a
* PCI Bridge device). Each host bridge has its own independent
* internal registers, independent access to PCI config space,
* independent interrupt lines, and independent window and memory
* access configuration. Additionally there is some kind of
* peer-to-peer support between PCIe devices behind different
* host bridges, limited to forwarding of memory and I/O
* transactions (forwarding of error messages and config cycles
* is not supported). So we could say there are N independent
* PCIe Root Complexes.
*
* For this kind of setup the DT should have been structured into
* N independent PCIe controllers / host bridges. But instead the
* structure was defined in the past to put the PCIe Root Ports of
* all host bridges onto one bus zero, like in a classic multi-port
* Root Complex setup with just one host bridge.
*
* This means that the pci-mvebu.c driver provides a "virtual" bus 0
* on which it registers all PCIe Root Ports (PCI Bridge devices)
* specified in DT by their BDF addresses, and virtually routes
* PCI config accesses of each PCI bridge device to the specific
* PCIe host bridge.
*
* Normally a PCI Bridge should choose between Type 0 and Type 1
* config requests based on the primary and secondary bus numbers
* configured on the bridge itself. But because the mvebu PCI Bridge
* does not have registers for primary and secondary bus numbers
* in its config space, it determines the type of a config request
* in its own custom way.
*
* There are two ways mvebu determines the type of a config
* request.
*
* 1. If the Secondary Bus Number Enable bit is not set or is not
* available (applies to pre-XP PCIe controllers) then Type 0
* is used if the target bus number equals the Local Bus Number
* (bits [15:8] in register 0x1a04) and the target device number
* differs from the Local Device Number (bits [20:16] in register
* 0x1a04). Type 1 is used if the target bus number differs from
* the Local Bus Number. And when the target bus number equals the
* Local Bus Number and the target device equals the Local Device
* Number, the request is routed to the Local PCI Bridge (PCIe
* Root Port).
*
* 2. If the Secondary Bus Number Enable bit is set (bit 7 in
* register 0x1a2c) then the mvebu hw determines the type of a
* config request like a compliant PCI Bridge, based on the primary
* bus number, which is configured via the Local Bus Number (bits
* [15:8] in register 0x1a04), and the secondary bus number, which
* is configured via the Secondary Bus Number (bits [7:0] in
* register 0x1a2c). The Local PCI Bridge (PCIe Root Port) is
* available on the primary bus as the device with the Local Device
* Number (bits [20:16] in register 0x1a04).
*
* The Secondary Bus Number Enable bit is disabled by default and
* option 2 is not available on pre-XP PCIe controllers. Hence
* this driver always uses option 1.
*
* Basically it means that the primary and secondary buses share
* one virtual number configured via the Local Bus Number bits, and
* the Local Device Number bits determine whether the access targets
* the primary or the secondary bus. Set the Local Device Number to 1
* and redirect all writes of the PCI Bridge Secondary Bus Number
* register to the Local Bus Number (bits [15:8] in register 0x1a04).
*
* This way, accessing devices on buses behind the secondary bus
* number works correctly, and a config space access to device 0 at
* the secondary bus number is correctly routed to the secondary
* bus. Due to issues described in mvebu_pcie_setup_hw(), PCI
* Bridges at the primary bus (zero) are not accessed directly via
* PCI config space but rather indirectly via the kernel's emulated
* PCI bridge driver.
*/
mvebu_pcie_setup_hw(port);
mvebu_pcie_set_local_dev_nr(port, 1);
mvebu_pci_bridge_emul_init(port);
mvebu_pcie_set_local_dev_nr(port, 0);
}
pcie->nports = i;
@ -1122,8 +1337,55 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
bridge->sysdata = pcie;
bridge->ops = &mvebu_pcie_ops;
bridge->align_resource = mvebu_pcie_align_resource;
bridge->map_irq = mvebu_pcie_map_irq;
return mvebu_pci_host_probe(bridge);
return pci_host_probe(bridge);
}
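A compact reading of option 1 from the comment above, as a hypothetical helper (illustration only, not driver code) that classifies a config access given the Local Bus Number and a Local Device Number fixed at 1:

#include <stdint.h>
#include <stdio.h>

enum cfg_target { CFG_LOCAL_BRIDGE, CFG_TYPE0, CFG_TYPE1 };

/* Hypothetical classifier: Secondary Bus Number Enable clear,
 * Local Device Number = 1, Local Bus Number = lbn. */
static enum cfg_target classify(uint8_t lbn, uint8_t bus, uint8_t dev)
{
	if (bus != lbn)
		return CFG_TYPE1;	 /* forwarded as Type 1 */
	if (dev == 1)
		return CFG_LOCAL_BRIDGE; /* hits the Root Port itself */
	return CFG_TYPE0;		 /* device on the secondary bus */
}

int main(void)
{
	printf("%d %d %d\n", classify(1, 2, 0), classify(1, 1, 1),
	       classify(1, 1, 0));	/* prints 2 0 1 */
	return 0;
}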
static int mvebu_pcie_remove(struct platform_device *pdev)
{
struct mvebu_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
u32 cmd;
int i;
/* Remove PCI bus with all devices. */
pci_lock_rescan_remove();
pci_stop_root_bus(bridge->bus);
pci_remove_root_bus(bridge->bus);
pci_unlock_rescan_remove();
for (i = 0; i < pcie->nports; i++) {
struct mvebu_pcie_port *port = &pcie->ports[i];
if (!port->base)
continue;
/* Disable Root Bridge I/O space, memory space and bus mastering. */
cmd = mvebu_readl(port, PCIE_CMD_OFF);
cmd &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
mvebu_writel(port, cmd, PCIE_CMD_OFF);
/* Mask all interrupt sources. */
mvebu_writel(port, 0, PCIE_MASK_OFF);
/* Free config space for emulated root bridge. */
pci_bridge_emul_cleanup(&port->bridge);
/* Disable and clear BARs and windows. */
mvebu_pcie_disable_wins(port);
/* Delete PCIe IO and MEM windows. */
if (port->iowin.size)
mvebu_pcie_del_windows(port, port->iowin.base, port->iowin.size);
if (port->memwin.size)
mvebu_pcie_del_windows(port, port->memwin.base, port->memwin.size);
/* Power down card and disable clocks. Must be the last step. */
mvebu_pcie_powerdown(port);
}
return 0;
}
static const struct of_device_id mvebu_pcie_of_match_table[] = {
@ -1142,10 +1404,14 @@ static struct platform_driver mvebu_pcie_driver = {
.driver = {
.name = "mvebu-pcie",
.of_match_table = mvebu_pcie_of_match_table,
/* driver unloading/unbinding currently not supported */
.suppress_bind_attrs = true,
.pm = &mvebu_pcie_pm_ops,
},
.probe = mvebu_pcie_probe,
.remove = mvebu_pcie_remove,
};
builtin_platform_driver(mvebu_pcie_driver);
module_platform_driver(mvebu_pcie_driver);
MODULE_AUTHOR("Thomas Petazzoni <thomas.petazzoni@bootlin.com>");
MODULE_AUTHOR("Pali Rohár <pali@kernel.org>");
MODULE_DESCRIPTION("Marvell EBU PCIe controller");
MODULE_LICENSE("GPL v2");

View File

@ -93,7 +93,7 @@
#define RCAR_PCI_UNIT_REV_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x48)
struct rcar_pci_priv {
struct rcar_pci {
struct device *dev;
void __iomem *reg;
struct resource mem_res;
@ -105,7 +105,7 @@ struct rcar_pci_priv {
static void __iomem *rcar_pci_cfg_base(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct rcar_pci_priv *priv = bus->sysdata;
struct rcar_pci *priv = bus->sysdata;
int slot, val;
if (!pci_is_root_bus(bus) || PCI_FUNC(devfn))
@ -132,7 +132,7 @@ static void __iomem *rcar_pci_cfg_base(struct pci_bus *bus, unsigned int devfn,
static irqreturn_t rcar_pci_err_irq(int irq, void *pw)
{
struct rcar_pci_priv *priv = pw;
struct rcar_pci *priv = pw;
struct device *dev = priv->dev;
u32 status = ioread32(priv->reg + RCAR_PCI_INT_STATUS_REG);
@ -148,7 +148,7 @@ static irqreturn_t rcar_pci_err_irq(int irq, void *pw)
return IRQ_NONE;
}
static void rcar_pci_setup_errirq(struct rcar_pci_priv *priv)
static void rcar_pci_setup_errirq(struct rcar_pci *priv)
{
struct device *dev = priv->dev;
int ret;
@ -166,11 +166,11 @@ static void rcar_pci_setup_errirq(struct rcar_pci_priv *priv)
iowrite32(val, priv->reg + RCAR_PCI_INT_ENABLE_REG);
}
#else
static inline void rcar_pci_setup_errirq(struct rcar_pci_priv *priv) { }
static inline void rcar_pci_setup_errirq(struct rcar_pci *priv) { }
#endif
/* PCI host controller setup */
static void rcar_pci_setup(struct rcar_pci_priv *priv)
static void rcar_pci_setup(struct rcar_pci *priv)
{
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(priv);
struct device *dev = priv->dev;
@ -279,7 +279,7 @@ static int rcar_pci_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct resource *cfg_res, *mem_res;
struct rcar_pci_priv *priv;
struct rcar_pci *priv;
struct pci_host_bridge *bridge;
void __iomem *reg;

View File

@ -41,10 +41,9 @@ static int handle_ea_bar(u32 e0, int bar, struct pci_bus *bus,
}
if (where_a == 0x4) {
addr = bus->ops->map_bus(bus, devfn, bar); /* BAR 0 */
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
v = readl(addr);
v &= ~0xf;
v |= 2; /* EA entry-1. Base-L */
@ -56,10 +55,9 @@ static int handle_ea_bar(u32 e0, int bar, struct pci_bus *bus,
u32 barl_rb;
addr = bus->ops->map_bus(bus, devfn, bar); /* BAR 0 */
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
barl_orig = readl(addr + 0);
writel(0xffffffff, addr + 0);
barl_rb = readl(addr + 0);
@ -72,10 +70,9 @@ static int handle_ea_bar(u32 e0, int bar, struct pci_bus *bus,
}
if (where_a == 0xc) {
addr = bus->ops->map_bus(bus, devfn, bar + 4); /* BAR 1 */
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
v = readl(addr); /* EA entry-3. Base-H */
set_val(v, where, size, val);
return PCIBIOS_SUCCESSFUL;
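The write-all-ones/read-back sequence in the BAR 0 hunk above is the standard BAR sizing probe. A user-space sketch of the decode step, with an invented read-back value:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t bar_rb = 0xfff00008;	/* invented read-back after writing ~0 */
	uint32_t mask = bar_rb & ~0xfu;	/* drop the read-only flag bits */
	uint32_t size = ~mask + 1;	/* two's complement -> region size */

	printf("BAR size: %#x\n", size);	/* 0x100000: 1 MiB */
	return 0;
}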
@ -104,10 +101,8 @@ static int thunder_ecam_p2_config_read(struct pci_bus *bus, unsigned int devfn,
}
addr = bus->ops->map_bus(bus, devfn, where_a);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
v = readl(addr);
@ -135,10 +130,8 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn,
int where_a = where & ~3;
addr = bus->ops->map_bus(bus, devfn, 0xc);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
v = readl(addr);
@ -146,10 +139,8 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn,
cfg_type = (v >> 16) & 0x7f;
addr = bus->ops->map_bus(bus, devfn, 8);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
class_rev = readl(addr);
if (class_rev == 0xffffffff)
@ -176,10 +167,8 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn,
}
addr = bus->ops->map_bus(bus, devfn, 0);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
vendor_device = readl(addr);
if (vendor_device == 0xffffffff)
@ -196,10 +185,9 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn,
bool is_tns = (vendor_device == 0xa01f177d);
addr = bus->ops->map_bus(bus, devfn, 0x70);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
/* E_CAP */
v = readl(addr);
has_msix = (v & 0xff00) != 0;
@ -211,10 +199,9 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn,
}
if (where_a == 0xb0) {
addr = bus->ops->map_bus(bus, devfn, where_a);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
v = readl(addr);
if (v & 0xff00)
pr_err("Bad MSIX cap header: %08x\n", v);
@ -268,10 +255,9 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn,
if (where_a == 0x70) {
addr = bus->ops->map_bus(bus, devfn, where_a);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
v = readl(addr);
if (v & 0xff00)
pr_err("Bad PCIe cap header: %08x\n", v);

View File

@ -41,10 +41,8 @@ static int thunder_pem_bridge_read(struct pci_bus *bus, unsigned int devfn,
struct pci_config_window *cfg = bus->sysdata;
struct thunder_pem_pci *pem_pci = (struct thunder_pem_pci *)cfg->priv;
if (devfn != 0 || where >= 2048) {
*val = ~0;
if (devfn != 0 || where >= 2048)
return PCIBIOS_DEVICE_NOT_FOUND;
}
/*
* 32-bit accesses only. Write the address to the low order

View File

@ -269,9 +269,7 @@ static void xgene_free_domains(struct xgene_msi *msi)
static int xgene_msi_init_allocator(struct xgene_msi *xgene_msi)
{
int size = BITS_TO_LONGS(NR_MSI_VEC) * sizeof(long);
xgene_msi->bitmap = kzalloc(size, GFP_KERNEL);
xgene_msi->bitmap = bitmap_zalloc(NR_MSI_VEC, GFP_KERNEL);
if (!xgene_msi->bitmap)
return -ENOMEM;
@ -360,7 +358,7 @@ static int xgene_msi_remove(struct platform_device *pdev)
kfree(msi->msi_groups);
kfree(msi->bitmap);
bitmap_free(msi->bitmap);
msi->bitmap = NULL;
xgene_free_domains(msi);

View File

@ -60,7 +60,7 @@
#define XGENE_PCIE_IP_VER_2 2
#if defined(CONFIG_PCI_XGENE) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
struct xgene_pcie_port {
struct xgene_pcie {
struct device_node *node;
struct device *dev;
struct clk *clk;
@ -71,12 +71,12 @@ struct xgene_pcie_port {
u32 version;
};
static u32 xgene_pcie_readl(struct xgene_pcie_port *port, u32 reg)
static u32 xgene_pcie_readl(struct xgene_pcie *port, u32 reg)
{
return readl(port->csr_base + reg);
}
static void xgene_pcie_writel(struct xgene_pcie_port *port, u32 reg, u32 val)
static void xgene_pcie_writel(struct xgene_pcie *port, u32 reg, u32 val)
{
writel(val, port->csr_base + reg);
}
@ -86,15 +86,15 @@ static inline u32 pcie_bar_low_val(u32 addr, u32 flags)
return (addr & PCI_BASE_ADDRESS_MEM_MASK) | flags;
}
static inline struct xgene_pcie_port *pcie_bus_to_port(struct pci_bus *bus)
static inline struct xgene_pcie *pcie_bus_to_port(struct pci_bus *bus)
{
struct pci_config_window *cfg;
if (acpi_disabled)
return (struct xgene_pcie_port *)(bus->sysdata);
return (struct xgene_pcie *)(bus->sysdata);
cfg = bus->sysdata;
return (struct xgene_pcie_port *)(cfg->priv);
return (struct xgene_pcie *)(cfg->priv);
}
/*
@ -103,7 +103,7 @@ static inline struct xgene_pcie_port *pcie_bus_to_port(struct pci_bus *bus)
*/
static void __iomem *xgene_pcie_get_cfg_base(struct pci_bus *bus)
{
struct xgene_pcie_port *port = pcie_bus_to_port(bus);
struct xgene_pcie *port = pcie_bus_to_port(bus);
if (bus->number >= (bus->primary + 1))
return port->cfg_base + AXI_EP_CFG_ACCESS;
@ -117,7 +117,7 @@ static void __iomem *xgene_pcie_get_cfg_base(struct pci_bus *bus)
*/
static void xgene_pcie_set_rtdid_reg(struct pci_bus *bus, uint devfn)
{
struct xgene_pcie_port *port = pcie_bus_to_port(bus);
struct xgene_pcie *port = pcie_bus_to_port(bus);
unsigned int b, d, f;
u32 rtdid_val = 0;
@ -164,18 +164,18 @@ static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 *val)
{
struct xgene_pcie_port *port = pcie_bus_to_port(bus);
struct xgene_pcie *port = pcie_bus_to_port(bus);
if (pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val) !=
PCIBIOS_SUCCESSFUL)
return PCIBIOS_DEVICE_NOT_FOUND;
/*
* The v1 controller has a bug in its Configuration Request
* Retry Status (CRS) logic: when CRS Software Visibility is
* enabled and we read the Vendor and Device ID of a non-existent
* device, the controller fabricates return data of 0xFFFF0001
* ("device exists but is not ready") instead of 0xFFFFFFFF
* The v1 controller has a bug in its Configuration Request Retry
* Status (CRS) logic: when CRS Software Visibility is enabled and
* we read the Vendor and Device ID of a non-existent device, the
* controller fabricates return data of 0xFFFF0001 ("device exists
* but is not ready") instead of 0xFFFFFFFF (PCI_ERROR_RESPONSE)
* ("device does not exist"). This causes the PCI core to retry
* the read until it times out. Avoid this by not claiming to
* support CRS SV.
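For reference, CRS Software Visibility exposes a retried config read as the reserved Vendor ID 0x0001, so a 32-bit Vendor/Device read yields 0xFFFF0001. A minimal check mirroring how the PCI core recognizes it on the combined ID dword:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A completion with CRS is exposed as Vendor ID 0x0001 (reserved),
 * i.e. the low 16 bits of the 32-bit Vendor/Device ID dword. */
static bool is_crs_response(uint32_t id)
{
	return (id & 0xffff) == 0x0001;
}

int main(void)
{
	printf("%d\n", is_crs_response(0xffff0001));	/* 1: retry later */
	return 0;
}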
@ -227,7 +227,7 @@ static int xgene_pcie_ecam_init(struct pci_config_window *cfg, u32 ipversion)
{
struct device *dev = cfg->parent;
struct acpi_device *adev = to_acpi_device(dev);
struct xgene_pcie_port *port;
struct xgene_pcie *port;
struct resource csr;
int ret;
@ -281,7 +281,7 @@ const struct pci_ecam_ops xgene_v2_pcie_ecam_ops = {
#endif
#if defined(CONFIG_PCI_XGENE)
static u64 xgene_pcie_set_ib_mask(struct xgene_pcie_port *port, u32 addr,
static u64 xgene_pcie_set_ib_mask(struct xgene_pcie *port, u32 addr,
u32 flags, u64 size)
{
u64 mask = (~(size - 1) & PCI_BASE_ADDRESS_MEM_MASK) | flags;
@ -307,7 +307,7 @@ static u64 xgene_pcie_set_ib_mask(struct xgene_pcie_port *port, u32 addr,
return mask;
}
static void xgene_pcie_linkup(struct xgene_pcie_port *port,
static void xgene_pcie_linkup(struct xgene_pcie *port,
u32 *lanes, u32 *speed)
{
u32 val32;
@ -322,7 +322,7 @@ static void xgene_pcie_linkup(struct xgene_pcie_port *port,
}
}
static int xgene_pcie_init_port(struct xgene_pcie_port *port)
static int xgene_pcie_init_port(struct xgene_pcie *port)
{
struct device *dev = port->dev;
int rc;
@ -342,7 +342,7 @@ static int xgene_pcie_init_port(struct xgene_pcie_port *port)
return 0;
}
static int xgene_pcie_map_reg(struct xgene_pcie_port *port,
static int xgene_pcie_map_reg(struct xgene_pcie *port,
struct platform_device *pdev)
{
struct device *dev = port->dev;
@ -362,7 +362,7 @@ static int xgene_pcie_map_reg(struct xgene_pcie_port *port,
return 0;
}
static void xgene_pcie_setup_ob_reg(struct xgene_pcie_port *port,
static void xgene_pcie_setup_ob_reg(struct xgene_pcie *port,
struct resource *res, u32 offset,
u64 cpu_addr, u64 pci_addr)
{
@ -394,7 +394,7 @@ static void xgene_pcie_setup_ob_reg(struct xgene_pcie_port *port,
xgene_pcie_writel(port, offset + 0x14, upper_32_bits(pci_addr));
}
static void xgene_pcie_setup_cfg_reg(struct xgene_pcie_port *port)
static void xgene_pcie_setup_cfg_reg(struct xgene_pcie *port)
{
u64 addr = port->cfg_addr;
@ -403,7 +403,7 @@ static void xgene_pcie_setup_cfg_reg(struct xgene_pcie_port *port)
xgene_pcie_writel(port, CFGCTL, EN_REG);
}
static int xgene_pcie_map_ranges(struct xgene_pcie_port *port)
static int xgene_pcie_map_ranges(struct xgene_pcie *port)
{
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port);
struct resource_entry *window;
@ -444,7 +444,7 @@ static int xgene_pcie_map_ranges(struct xgene_pcie_port *port)
return 0;
}
static void xgene_pcie_setup_pims(struct xgene_pcie_port *port, u32 pim_reg,
static void xgene_pcie_setup_pims(struct xgene_pcie *port, u32 pim_reg,
u64 pim, u64 size)
{
xgene_pcie_writel(port, pim_reg, lower_32_bits(pim));
@ -465,7 +465,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
return 1;
}
if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) {
if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) {
*ib_reg_mask |= (1 << 0);
return 0;
}
@ -478,7 +478,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
return -EINVAL;
}
static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port,
static void xgene_pcie_setup_ib_reg(struct xgene_pcie *port,
struct resource_entry *entry,
u8 *ib_reg_mask)
{
@ -529,7 +529,7 @@ static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port,
xgene_pcie_setup_pims(port, pim_reg, pci_addr, ~(size - 1));
}
static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie_port *port)
static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie *port)
{
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port);
struct resource_entry *entry;
@ -542,7 +542,7 @@ static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie_port *port)
}
/* clear BAR configuration which was done by firmware */
static void xgene_pcie_clear_config(struct xgene_pcie_port *port)
static void xgene_pcie_clear_config(struct xgene_pcie *port)
{
int i;
@ -550,7 +550,7 @@ static void xgene_pcie_clear_config(struct xgene_pcie_port *port)
xgene_pcie_writel(port, i, 0);
}
static int xgene_pcie_setup(struct xgene_pcie_port *port)
static int xgene_pcie_setup(struct xgene_pcie *port)
{
struct device *dev = port->dev;
u32 val, lanes = 0, speed = 0;
@ -588,7 +588,7 @@ static int xgene_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *dn = dev->of_node;
struct xgene_pcie_port *port;
struct xgene_pcie *port;
struct pci_host_bridge *bridge;
int ret;

View File

@ -510,10 +510,8 @@ static int altera_pcie_cfg_read(struct pci_bus *bus, unsigned int devfn,
if (altera_pcie_hide_rc_bar(bus, devfn, where))
return PCIBIOS_BAD_REGISTER_NUMBER;
if (!altera_pcie_valid_device(pcie, bus, PCI_SLOT(devfn))) {
*value = 0xffffffff;
if (!altera_pcie_valid_device(pcie, bus, PCI_SLOT(devfn)))
return PCIBIOS_DEVICE_NOT_FOUND;
}
return _altera_pcie_cfg_read(pcie, bus->number, devfn, where, size,
value);
@ -767,7 +765,7 @@ static int altera_pcie_probe(struct platform_device *pdev)
struct altera_pcie *pcie;
struct pci_host_bridge *bridge;
int ret;
const struct of_device_id *match;
const struct altera_pcie_data *data;
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
if (!bridge)
@ -777,11 +775,11 @@ static int altera_pcie_probe(struct platform_device *pdev)
pcie->pdev = pdev;
platform_set_drvdata(pdev, pcie);
match = of_match_device(altera_pcie_of_match, &pdev->dev);
if (!match)
data = of_device_get_match_data(&pdev->dev);
if (!data)
return -ENODEV;
pcie->pcie_data = match->data;
pcie->pcie_data = data;
ret = altera_pcie_parse_dt(pcie);
if (ret) {

View File

@ -42,8 +42,9 @@
#define CORE_FABRIC_STAT_MASK 0x001F001F
#define CORE_LANE_CFG(port) (0x84000 + 0x4000 * (port))
#define CORE_LANE_CFG_REFCLK0REQ BIT(0)
#define CORE_LANE_CFG_REFCLK1 BIT(1)
#define CORE_LANE_CFG_REFCLK1REQ BIT(1)
#define CORE_LANE_CFG_REFCLK0ACK BIT(2)
#define CORE_LANE_CFG_REFCLK1ACK BIT(3)
#define CORE_LANE_CFG_REFCLKEN (BIT(9) | BIT(10))
#define CORE_LANE_CTL(port) (0x84004 + 0x4000 * (port))
#define CORE_LANE_CTL_CFGACC BIT(15)
@ -482,9 +483,9 @@ static int apple_pcie_setup_refclk(struct apple_pcie *pcie,
if (res < 0)
return res;
rmw_set(CORE_LANE_CFG_REFCLK1, pcie->base + CORE_LANE_CFG(port->idx));
rmw_set(CORE_LANE_CFG_REFCLK1REQ, pcie->base + CORE_LANE_CFG(port->idx));
res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx),
stat, stat & CORE_LANE_CFG_REFCLK1,
stat, stat & CORE_LANE_CFG_REFCLK1ACK,
100, 50000);
if (res < 0)
@ -563,6 +564,9 @@ static int apple_pcie_setup_port(struct apple_pcie *pcie,
return ret;
}
rmw_clear(PORT_REFCLK_CGDIS, port->base + PORT_REFCLK);
rmw_clear(PORT_APPCLK_CGDIS, port->base + PORT_APPCLK);
ret = apple_pcie_port_setup_irq(port);
if (ret)
return ret;
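
The REFCLK fix above pairs each request bit with its own acknowledge bit: the driver sets CORE_LANE_CFG_REFCLK1REQ and then polls CORE_LANE_CFG_REFCLK1ACK, where it previously set and polled the same (misnamed) REFCLK1 bit. A toy userspace model of that handshake (hw_tick() is an invented stand-in for the hardware, using the bit positions defined above):

    #include <stdint.h>
    #include <stdio.h>

    #define REFCLK1REQ (1u << 1) /* mirrors CORE_LANE_CFG_REFCLK1REQ */
    #define REFCLK1ACK (1u << 3) /* mirrors CORE_LANE_CFG_REFCLK1ACK */

    /* Toy "hardware": acknowledges a pending request on the next tick. */
    static void hw_tick(uint32_t *lane_cfg)
    {
        if (*lane_cfg & REFCLK1REQ)
            *lane_cfg |= REFCLK1ACK;
    }

    int main(void)
    {
        uint32_t lane_cfg = 0;

        lane_cfg |= REFCLK1REQ;          /* request the clock ... */
        while (!(lane_cfg & REFCLK1ACK)) /* ... but poll the ACK bit */
            hw_tick(&lane_cfg);
        printf("refclk1 handshake done: %#x\n", lane_cfg);
        return 0;
    }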


@ -24,6 +24,7 @@
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/printk.h>
#include <linux/regulator/consumer.h>
#include <linux/reset.h>
#include <linux/sizes.h>
#include <linux/slab.h>
@ -145,6 +146,9 @@
#define BRCM_INT_PCI_MSI_NR 32
#define BRCM_INT_PCI_MSI_LEGACY_NR 8
#define BRCM_INT_PCI_MSI_SHIFT 0
#define BRCM_INT_PCI_MSI_MASK GENMASK(BRCM_INT_PCI_MSI_NR - 1, 0)
#define BRCM_INT_PCI_MSI_LEGACY_MASK GENMASK(31, \
32 - BRCM_INT_PCI_MSI_LEGACY_NR)
/* MSI target addresses */
#define BRCM_MSI_TARGET_ADDR_LT_4GB 0x0fffffffcULL
@ -192,6 +196,8 @@ static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie,
static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val);
static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val);
static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val);
static int brcm_pcie_linkup(struct brcm_pcie *pcie);
static int brcm_pcie_add_bus(struct pci_bus *bus);
enum {
RGR1_SW_INIT_1,
@ -280,6 +286,14 @@ static const struct pcie_cfg_data bcm2711_cfg = {
.bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
};
struct subdev_regulators {
unsigned int num_supplies;
struct regulator_bulk_data supplies[];
};
static int pci_subdev_regulators_add_bus(struct pci_bus *bus);
static void pci_subdev_regulators_remove_bus(struct pci_bus *bus);
struct brcm_msi {
struct device *dev;
void __iomem *base;
@ -289,8 +303,7 @@ struct brcm_msi {
struct mutex lock; /* guards the alloc/free operations */
u64 target_addr;
int irq;
/* used indicates which MSI interrupts have been alloc'd */
unsigned long used;
DECLARE_BITMAP(used, BRCM_INT_PCI_MSI_NR);
bool legacy;
/* Some chips have MSIs in bits [31..24] of a shared register. */
int legacy_shift;
@ -318,6 +331,9 @@ struct brcm_pcie {
u32 hw_rev;
void (*perst_set)(struct brcm_pcie *pcie, u32 val);
void (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val);
bool refusal_mode;
struct subdev_regulators *sr;
bool ep_wakeup_capable;
};
static inline bool is_bmips(const struct brcm_pcie *pcie)
@ -434,6 +450,99 @@ static int brcm_pcie_set_ssc(struct brcm_pcie *pcie)
return ssc && pll ? 0 : -EIO;
}
static void *alloc_subdev_regulators(struct device *dev)
{
static const char * const supplies[] = {
"vpcie3v3",
"vpcie3v3aux",
"vpcie12v",
};
const size_t size = sizeof(struct subdev_regulators)
+ sizeof(struct regulator_bulk_data) * ARRAY_SIZE(supplies);
struct subdev_regulators *sr;
int i;
sr = devm_kzalloc(dev, size, GFP_KERNEL);
if (sr) {
sr->num_supplies = ARRAY_SIZE(supplies);
for (i = 0; i < ARRAY_SIZE(supplies); i++)
sr->supplies[i].supply = supplies[i];
}
return sr;
}
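
alloc_subdev_regulators() sizes a flexible-array struct by hand: one allocation holding the header plus one regulator_bulk_data per known supply name. A standalone userspace model of the same allocation pattern (struct bulk_data is a stand-in for regulator_bulk_data):

    #include <stdio.h>
    #include <stdlib.h>

    struct bulk_data { const char *supply; };

    struct subdev_regulators {
        unsigned int num_supplies;
        struct bulk_data supplies[]; /* flexible array member */
    };

    int main(void)
    {
        static const char * const supplies[] = {
            "vpcie3v3", "vpcie3v3aux", "vpcie12v",
        };
        size_t i, n = sizeof(supplies) / sizeof(supplies[0]);
        /* header plus n trailing elements, zeroed like devm_kzalloc() */
        struct subdev_regulators *sr =
            calloc(1, sizeof(*sr) + n * sizeof(sr->supplies[0]));

        if (!sr)
            return 1;
        sr->num_supplies = n;
        for (i = 0; i < n; i++)
            sr->supplies[i].supply = supplies[i];
        printf("%u supplies, first: %s\n", sr->num_supplies,
               sr->supplies[0].supply);
        free(sr);
        return 0;
    }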
static int pci_subdev_regulators_add_bus(struct pci_bus *bus)
{
struct device *dev = &bus->dev;
struct subdev_regulators *sr;
int ret;
if (!dev->of_node || !bus->parent || !pci_is_root_bus(bus->parent))
return 0;
if (dev->driver_data)
dev_err(dev, "dev.driver_data unexpectedly non-NULL\n");
sr = alloc_subdev_regulators(dev);
if (!sr)
return -ENOMEM;
dev->driver_data = sr;
ret = regulator_bulk_get(dev, sr->num_supplies, sr->supplies);
if (ret)
return ret;
ret = regulator_bulk_enable(sr->num_supplies, sr->supplies);
if (ret) {
dev_err(dev, "failed to enable regulators for downstream device\n");
return ret;
}
return 0;
}
static int brcm_pcie_add_bus(struct pci_bus *bus)
{
struct device *dev = &bus->dev;
struct brcm_pcie *pcie = (struct brcm_pcie *) bus->sysdata;
int ret;
if (!dev->of_node || !bus->parent || !pci_is_root_bus(bus->parent))
return 0;
ret = pci_subdev_regulators_add_bus(bus);
if (ret)
return ret;
/* Grab the regulators for suspend/resume */
pcie->sr = bus->dev.driver_data;
/*
* If we have failed linkup there is no point in returning an error, as
* currently it will cause a WARNING() from pci_alloc_child_bus().
* We return 0 and turn on the "refusal_mode" so that any further
* accesses to the pci_dev just get 0xffffffff
*/
if (brcm_pcie_linkup(pcie) != 0)
pcie->refusal_mode = true;
return 0;
}
static void pci_subdev_regulators_remove_bus(struct pci_bus *bus)
{
struct device *dev = &bus->dev;
struct subdev_regulators *sr = dev->driver_data;
if (!sr || !bus->parent || !pci_is_root_bus(bus->parent))
return;
if (regulator_bulk_disable(sr->num_supplies, sr->supplies))
dev_err(dev, "failed to disable regulators for downstream device\n");
dev->driver_data = NULL;
}
/* Limits operation to a specific generation (1, 2, or 3) */
static void brcm_pcie_set_gen(struct brcm_pcie *pcie, int gen)
{
@ -565,7 +674,7 @@ static int brcm_msi_alloc(struct brcm_msi *msi)
int hwirq;
mutex_lock(&msi->lock);
hwirq = bitmap_find_free_region(&msi->used, msi->nr, 0);
hwirq = bitmap_find_free_region(msi->used, msi->nr, 0);
mutex_unlock(&msi->lock);
return hwirq;
@ -574,7 +683,7 @@ static int brcm_msi_alloc(struct brcm_msi *msi)
static void brcm_msi_free(struct brcm_msi *msi, unsigned long hwirq)
{
mutex_lock(&msi->lock);
bitmap_release_region(&msi->used, hwirq, 0);
bitmap_release_region(msi->used, hwirq, 0);
mutex_unlock(&msi->lock);
}
@ -650,7 +759,8 @@ static void brcm_msi_remove(struct brcm_pcie *pcie)
static void brcm_msi_set_regs(struct brcm_msi *msi)
{
u32 val = __GENMASK(31, msi->legacy_shift);
u32 val = msi->legacy ? BRCM_INT_PCI_MSI_LEGACY_MASK :
BRCM_INT_PCI_MSI_MASK;
writel(val, msi->intr_base + MSI_INT_MASK_CLR);
writel(val, msi->intr_base + MSI_INT_CLR);
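
The MSI bookkeeping cleanup above does two things: DECLARE_BITMAP() turns 'used' into a properly sized unsigned long array (so the bitmap calls take msi->used directly rather than &msi->used), and the two precomputed masks replace the open-coded __GENMASK(31, msi->legacy_shift). A userspace model of both (GENMASK() here is a simplified 32-bit reimplementation, not the kernel macro):

    #include <limits.h>
    #include <stdio.h>

    #define BITS_PER_LONG      (CHAR_BIT * sizeof(long))
    #define BITS_TO_LONGS(n)   (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)
    #define DECLARE_BITMAP(name, bits) unsigned long name[BITS_TO_LONGS(bits)]

    #define GENMASK(h, l)  (((~0U) >> (31 - (h))) & ~((1U << (l)) - 1))
    #define MSI_NR         32
    #define MSI_LEGACY_NR  8

    struct msi_state {
        DECLARE_BITMAP(used, MSI_NR); /* array: decays to unsigned long * */
    };

    int main(void)
    {
        struct msi_state s = { { 0 } };

        printf("bitmap words: %zu\n", sizeof(s.used) / sizeof(s.used[0]));
        printf("msi mask: %#x\n", GENMASK(MSI_NR - 1, 0));  /* 0xffffffff */
        printf("legacy mask: %#x\n",
               GENMASK(31, 32 - MSI_LEGACY_NR));            /* 0xff000000 */
        return 0;
    }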
@ -692,6 +802,12 @@ static int brcm_pcie_enable_msi(struct brcm_pcie *pcie)
msi->irq = irq;
msi->legacy = pcie->hw_rev < BRCM_PCIE_HW_REV_33;
/*
* Sanity check to make sure that the 'used' bitmap in struct brcm_msi
* is large enough.
*/
BUILD_BUG_ON(BRCM_INT_PCI_MSI_LEGACY_NR > BRCM_INT_PCI_MSI_NR);
if (msi->legacy) {
msi->intr_base = msi->base + PCIE_INTR2_CPU_BASE;
msi->nr = BRCM_INT_PCI_MSI_LEGACY_NR;
@ -742,6 +858,18 @@ static void __iomem *brcm_pcie_map_conf(struct pci_bus *bus, unsigned int devfn,
/* Accesses to the RC go right to the RC registers if slot==0 */
if (pci_is_root_bus(bus))
return PCI_SLOT(devfn) ? NULL : base + where;
if (pcie->refusal_mode) {
/*
* At this point we do not have link. There will be a CPU
* abort -- a quirk with this controller -- if Linux tries
* to read any config-space registers besides those
* targeting the host bridge. To prevent this we hijack
* the address to point to a safe access that will return
* 0xffffffff.
*/
writel(0xffffffff, base + PCIE_MISC_RC_BAR2_CONFIG_HI);
return base + PCIE_MISC_RC_BAR2_CONFIG_HI + (where & 0x3);
}
/* For devices, write to the config space index register */
idx = PCIE_ECAM_OFFSET(bus->number, devfn, 0);
@ -770,6 +898,8 @@ static struct pci_ops brcm_pcie_ops = {
.map_bus = brcm_pcie_map_conf,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
.add_bus = brcm_pcie_add_bus,
.remove_bus = pci_subdev_regulators_remove_bus,
};
static struct pci_ops brcm_pcie_ops32 = {
@ -917,16 +1047,9 @@ static inline int brcm_pcie_get_rc_bar2_size_and_offset(struct brcm_pcie *pcie,
static int brcm_pcie_setup(struct brcm_pcie *pcie)
{
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
u64 rc_bar2_offset, rc_bar2_size;
void __iomem *base = pcie->base;
struct device *dev = pcie->dev;
struct resource_entry *entry;
bool ssc_good = false;
struct resource *res;
int num_out_wins = 0;
u16 nlw, cls, lnksta;
int i, ret, memc;
int ret, memc;
u32 tmp, burst, aspm_support;
/* Reset the bridge */
@ -1016,6 +1139,40 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
if (pcie->gen)
brcm_pcie_set_gen(pcie, pcie->gen);
/* Don't advertise L0s capability if 'aspm-no-l0s' */
aspm_support = PCIE_LINK_STATE_L1;
if (!of_property_read_bool(pcie->np, "aspm-no-l0s"))
aspm_support |= PCIE_LINK_STATE_L0S;
tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
u32p_replace_bits(&tmp, aspm_support,
PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK);
writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
/*
* For config space accesses on the RC, show the right class for
* a PCIe-PCIe bridge (the default setting is to be EP mode).
*/
tmp = readl(base + PCIE_RC_CFG_PRIV1_ID_VAL3);
u32p_replace_bits(&tmp, 0x060400,
PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK);
writel(tmp, base + PCIE_RC_CFG_PRIV1_ID_VAL3);
return 0;
}
static int brcm_pcie_linkup(struct brcm_pcie *pcie)
{
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
struct device *dev = pcie->dev;
void __iomem *base = pcie->base;
struct resource_entry *entry;
struct resource *res;
int num_out_wins = 0;
u16 nlw, cls, lnksta;
bool ssc_good = false;
u32 tmp;
int ret, i;
/* Unassert the fundamental reset */
pcie->perst_set(pcie, 0);
@ -1066,24 +1223,6 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
num_out_wins++;
}
/* Don't advertise L0s capability if 'aspm-no-l0s' */
aspm_support = PCIE_LINK_STATE_L1;
if (!of_property_read_bool(pcie->np, "aspm-no-l0s"))
aspm_support |= PCIE_LINK_STATE_L0S;
tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
u32p_replace_bits(&tmp, aspm_support,
PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK);
writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
/*
* For config space accesses on the RC, show the right class for
* a PCIe-PCIe bridge (the default setting is to be EP mode).
*/
tmp = readl(base + PCIE_RC_CFG_PRIV1_ID_VAL3);
u32p_replace_bits(&tmp, 0x060400,
PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK);
writel(tmp, base + PCIE_RC_CFG_PRIV1_ID_VAL3);
if (pcie->ssc) {
ret = brcm_pcie_set_ssc(pcie);
if (ret == 0)
@ -1212,17 +1351,60 @@ static void brcm_pcie_turn_off(struct brcm_pcie *pcie)
pcie->bridge_sw_init_set(pcie, 1);
}
static int pci_dev_may_wakeup(struct pci_dev *dev, void *data)
{
bool *ret = data;
if (device_may_wakeup(&dev->dev)) {
*ret = true;
dev_info(&dev->dev, "disable cancelled for wake-up device\n");
}
return (int) *ret;
}
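
For reference, pci_walk_bus() invokes the callback for every device below the bridge and stops the walk as soon as the callback returns non-zero, so the "(int) *ret" above ends the scan at the first wake-up-capable device. A minimal hypothetical callback with the same shape:

    #include <linux/pci.h>

    static int any_express_device(struct pci_dev *dev, void *data)
    {
        bool *found = data;

        if (pci_is_pcie(dev)) {
            *found = true;
            return 1;  /* non-zero return terminates pci_walk_bus() */
        }
        return 0;      /* zero: continue to the next device */
    }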
static int brcm_pcie_suspend(struct device *dev)
{
struct brcm_pcie *pcie = dev_get_drvdata(dev);
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
int ret;
brcm_pcie_turn_off(pcie);
ret = brcm_phy_stop(pcie);
reset_control_rearm(pcie->rescal);
/*
* If brcm_phy_stop() returns an error, just dev_err(). If we
* return the error it will cause the suspend to fail and this is a
* forgivable offense that will probably be erased on resume.
*/
if (brcm_phy_stop(pcie))
dev_err(dev, "Could not stop phy for suspend\n");
ret = reset_control_rearm(pcie->rescal);
if (ret) {
dev_err(dev, "Could not rearm rescal reset\n");
return ret;
}
if (pcie->sr) {
/*
* Now turn off the regulators, but if at least one
* downstream device is enabled as a wake-up source, do not
* turn off regulators.
*/
pcie->ep_wakeup_capable = false;
pci_walk_bus(bridge->bus, pci_dev_may_wakeup,
&pcie->ep_wakeup_capable);
if (!pcie->ep_wakeup_capable) {
ret = regulator_bulk_disable(pcie->sr->num_supplies,
pcie->sr->supplies);
if (ret) {
dev_err(dev, "Could not turn off regulators\n");
reset_control_reset(pcie->rescal);
return ret;
}
}
}
clk_disable_unprepare(pcie->clk);
return ret;
return 0;
}
static int brcm_pcie_resume(struct device *dev)
@ -1233,11 +1415,32 @@ static int brcm_pcie_resume(struct device *dev)
int ret;
base = pcie->base;
clk_prepare_enable(pcie->clk);
ret = clk_prepare_enable(pcie->clk);
if (ret)
return ret;
if (pcie->sr) {
if (pcie->ep_wakeup_capable) {
/*
* We are resuming from a suspend. In the suspend we
* did not disable the power supplies, so there is
* no need to enable them (and falsely increase their
* usage count).
*/
pcie->ep_wakeup_capable = false;
} else {
ret = regulator_bulk_enable(pcie->sr->num_supplies,
pcie->sr->supplies);
if (ret) {
dev_err(dev, "Could not turn on regulators\n");
goto err_disable_clk;
}
}
}
ret = reset_control_reset(pcie->rescal);
if (ret)
goto err_disable_clk;
goto err_regulator;
ret = brcm_phy_start(pcie);
if (ret)
@ -1258,6 +1461,10 @@ static int brcm_pcie_resume(struct device *dev)
if (ret)
goto err_reset;
ret = brcm_pcie_linkup(pcie);
if (ret)
goto err_reset;
if (pcie->msi)
brcm_msi_set_regs(pcie->msi);
@ -1265,6 +1472,9 @@ static int brcm_pcie_resume(struct device *dev)
err_reset:
reset_control_rearm(pcie->rescal);
err_regulator:
if (pcie->sr)
regulator_bulk_disable(pcie->sr->num_supplies, pcie->sr->supplies);
err_disable_clk:
clk_disable_unprepare(pcie->clk);
return ret;
@ -1274,8 +1484,10 @@ static void __brcm_pcie_remove(struct brcm_pcie *pcie)
{
brcm_msi_remove(pcie);
brcm_pcie_turn_off(pcie);
brcm_phy_stop(pcie);
reset_control_rearm(pcie->rescal);
if (brcm_phy_stop(pcie))
dev_err(pcie->dev, "Could not stop phy\n");
if (reset_control_rearm(pcie->rescal))
dev_err(pcie->dev, "Could not rearm rescal reset\n");
clk_disable_unprepare(pcie->clk);
}
@ -1394,7 +1606,17 @@ static int brcm_pcie_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, pcie);
return pci_host_probe(bridge);
ret = pci_host_probe(bridge);
if (!ret && !brcm_pcie_link_up(pcie))
ret = -ENODEV;
if (ret) {
brcm_pcie_remove(pdev);
return ret;
}
return 0;
fail:
__brcm_pcie_remove(pcie);
return ret;
@ -1403,8 +1625,8 @@ fail:
MODULE_DEVICE_TABLE(of, brcm_pcie_match);
static const struct dev_pm_ops brcm_pcie_pm_ops = {
.suspend = brcm_pcie_suspend,
.resume = brcm_pcie_resume,
.suspend_noirq = brcm_pcie_suspend,
.resume_noirq = brcm_pcie_resume,
};
static struct platform_driver brcm_pcie_driver = {


@ -23,7 +23,7 @@ static void bcma_pcie2_fixup_class(struct pci_dev *dev)
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8011, bcma_pcie2_fixup_class);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8012, bcma_pcie2_fixup_class);
static int iproc_pcie_bcma_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
static int iproc_bcma_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
struct iproc_pcie *pcie = dev->sysdata;
struct bcma_device *bdev = container_of(pcie->dev, struct bcma_device, dev);
@ -31,7 +31,7 @@ static int iproc_pcie_bcma_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
return bcma_core_irq(bdev, 5);
}
static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
static int iproc_bcma_pcie_probe(struct bcma_device *bdev)
{
struct device *dev = &bdev->dev;
struct iproc_pcie *pcie;
@ -64,33 +64,33 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
if (ret)
return ret;
pcie->map_irq = iproc_pcie_bcma_map_irq;
pcie->map_irq = iproc_bcma_pcie_map_irq;
bcma_set_drvdata(bdev, pcie);
return iproc_pcie_setup(pcie, &bridge->windows);
}
static void iproc_pcie_bcma_remove(struct bcma_device *bdev)
static void iproc_bcma_pcie_remove(struct bcma_device *bdev)
{
struct iproc_pcie *pcie = bcma_get_drvdata(bdev);
iproc_pcie_remove(pcie);
}
static const struct bcma_device_id iproc_pcie_bcma_table[] = {
static const struct bcma_device_id iproc_bcma_pcie_table[] = {
BCMA_CORE(BCMA_MANUF_BCM, BCMA_CORE_NS_PCIEG2, BCMA_ANY_REV, BCMA_ANY_CLASS),
{},
};
MODULE_DEVICE_TABLE(bcma, iproc_pcie_bcma_table);
MODULE_DEVICE_TABLE(bcma, iproc_bcma_pcie_table);
static struct bcma_driver iproc_pcie_bcma_driver = {
static struct bcma_driver iproc_bcma_pcie_driver = {
.name = KBUILD_MODNAME,
.id_table = iproc_pcie_bcma_table,
.probe = iproc_pcie_bcma_probe,
.remove = iproc_pcie_bcma_remove,
.id_table = iproc_bcma_pcie_table,
.probe = iproc_bcma_pcie_probe,
.remove = iproc_bcma_pcie_remove,
};
module_bcma_driver(iproc_pcie_bcma_driver);
module_bcma_driver(iproc_bcma_pcie_driver);
MODULE_AUTHOR("Hauke Mehrtens");
MODULE_DESCRIPTION("Broadcom iProc PCIe BCMA driver");


@ -37,7 +37,7 @@ static const struct of_device_id iproc_pcie_of_match_table[] = {
};
MODULE_DEVICE_TABLE(of, iproc_pcie_of_match_table);
static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
static int iproc_pltfm_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct iproc_pcie *pcie;
@ -115,30 +115,30 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
return 0;
}
static int iproc_pcie_pltfm_remove(struct platform_device *pdev)
static int iproc_pltfm_pcie_remove(struct platform_device *pdev)
{
struct iproc_pcie *pcie = platform_get_drvdata(pdev);
return iproc_pcie_remove(pcie);
}
static void iproc_pcie_pltfm_shutdown(struct platform_device *pdev)
static void iproc_pltfm_pcie_shutdown(struct platform_device *pdev)
{
struct iproc_pcie *pcie = platform_get_drvdata(pdev);
iproc_pcie_shutdown(pcie);
}
static struct platform_driver iproc_pcie_pltfm_driver = {
static struct platform_driver iproc_pltfm_pcie_driver = {
.driver = {
.name = "iproc-pcie",
.of_match_table = of_match_ptr(iproc_pcie_of_match_table),
},
.probe = iproc_pcie_pltfm_probe,
.remove = iproc_pcie_pltfm_remove,
.shutdown = iproc_pcie_pltfm_shutdown,
.probe = iproc_pltfm_pcie_probe,
.remove = iproc_pltfm_pcie_remove,
.shutdown = iproc_pltfm_pcie_shutdown,
};
module_platform_driver(iproc_pcie_pltfm_driver);
module_platform_driver(iproc_pltfm_pcie_driver);
MODULE_AUTHOR("Ray Jui <rjui@broadcom.com>");
MODULE_DESCRIPTION("Broadcom iPROC PCIe platform driver");


@ -659,10 +659,8 @@ static int iproc_pci_raw_config_read32(struct iproc_pcie *pcie,
void __iomem *addr;
addr = iproc_pcie_map_cfg_bus(pcie, 0, devfn, where & ~0x3);
if (!addr) {
*val = ~0;
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
}
*val = readl(addr);


@ -79,6 +79,9 @@
#define PCIE_ICMD_PM_REG 0x198
#define PCIE_TURN_OFF_LINK BIT(4)
#define PCIE_MISC_CTRL_REG 0x348
#define PCIE_DISABLE_DVFSRC_VLT_REQ BIT(1)
#define PCIE_TRANS_TABLE_BASE_REG 0x800
#define PCIE_ATR_SRC_ADDR_MSB_OFFSET 0x4
#define PCIE_ATR_TRSL_ADDR_LSB_OFFSET 0x8
@ -110,7 +113,7 @@ struct mtk_msi_set {
};
/**
* struct mtk_pcie_port - PCIe port information
* struct mtk_gen3_pcie - PCIe port information
* @dev: pointer to PCIe device
* @base: IO mapped register base
* @reg_base: physical register base
@ -129,7 +132,7 @@ struct mtk_msi_set {
* @lock: lock protecting IRQ bit map
* @msi_irq_in_use: bit map for assigned MSI IRQ
*/
struct mtk_pcie_port {
struct mtk_gen3_pcie {
struct device *dev;
void __iomem *base;
phys_addr_t reg_base;
@ -162,7 +165,7 @@ struct mtk_pcie_port {
static void mtk_pcie_config_tlp_header(struct pci_bus *bus, unsigned int devfn,
int where, int size)
{
struct mtk_pcie_port *port = bus->sysdata;
struct mtk_gen3_pcie *pcie = bus->sysdata;
int bytes;
u32 val;
@ -171,15 +174,15 @@ static void mtk_pcie_config_tlp_header(struct pci_bus *bus, unsigned int devfn,
val = PCIE_CFG_FORCE_BYTE_EN | PCIE_CFG_BYTE_EN(bytes) |
PCIE_CFG_HEADER(bus->number, devfn);
writel_relaxed(val, port->base + PCIE_CFGNUM_REG);
writel_relaxed(val, pcie->base + PCIE_CFGNUM_REG);
}
static void __iomem *mtk_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct mtk_pcie_port *port = bus->sysdata;
struct mtk_gen3_pcie *pcie = bus->sysdata;
return port->base + PCIE_CFG_OFFSET_ADDR + where;
return pcie->base + PCIE_CFG_OFFSET_ADDR + where;
}
static int mtk_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
@ -207,7 +210,7 @@ static struct pci_ops mtk_pcie_ops = {
.write = mtk_pcie_config_write,
};
static int mtk_pcie_set_trans_table(struct mtk_pcie_port *port,
static int mtk_pcie_set_trans_table(struct mtk_gen3_pcie *pcie,
resource_size_t cpu_addr,
resource_size_t pci_addr,
resource_size_t size,
@ -217,12 +220,12 @@ static int mtk_pcie_set_trans_table(struct mtk_pcie_port *port,
u32 val;
if (num >= PCIE_MAX_TRANS_TABLES) {
dev_err(port->dev, "not enough translate table for addr: %#llx, limited to [%d]\n",
dev_err(pcie->dev, "not enough translate table for addr: %#llx, limited to [%d]\n",
(unsigned long long)cpu_addr, PCIE_MAX_TRANS_TABLES);
return -ENODEV;
}
table = port->base + PCIE_TRANS_TABLE_BASE_REG +
table = pcie->base + PCIE_TRANS_TABLE_BASE_REG +
num * PCIE_ATR_TLB_SET_OFFSET;
writel_relaxed(lower_32_bits(cpu_addr) | PCIE_ATR_SIZE(fls(size) - 1),
@ -244,66 +247,71 @@ static int mtk_pcie_set_trans_table(struct mtk_pcie_port *port,
return 0;
}
static void mtk_pcie_enable_msi(struct mtk_pcie_port *port)
static void mtk_pcie_enable_msi(struct mtk_gen3_pcie *pcie)
{
int i;
u32 val;
for (i = 0; i < PCIE_MSI_SET_NUM; i++) {
struct mtk_msi_set *msi_set = &port->msi_sets[i];
struct mtk_msi_set *msi_set = &pcie->msi_sets[i];
msi_set->base = port->base + PCIE_MSI_SET_BASE_REG +
msi_set->base = pcie->base + PCIE_MSI_SET_BASE_REG +
i * PCIE_MSI_SET_OFFSET;
msi_set->msg_addr = port->reg_base + PCIE_MSI_SET_BASE_REG +
msi_set->msg_addr = pcie->reg_base + PCIE_MSI_SET_BASE_REG +
i * PCIE_MSI_SET_OFFSET;
/* Configure the MSI capture address */
writel_relaxed(lower_32_bits(msi_set->msg_addr), msi_set->base);
writel_relaxed(upper_32_bits(msi_set->msg_addr),
port->base + PCIE_MSI_SET_ADDR_HI_BASE +
pcie->base + PCIE_MSI_SET_ADDR_HI_BASE +
i * PCIE_MSI_SET_ADDR_HI_OFFSET);
}
val = readl_relaxed(port->base + PCIE_MSI_SET_ENABLE_REG);
val = readl_relaxed(pcie->base + PCIE_MSI_SET_ENABLE_REG);
val |= PCIE_MSI_SET_ENABLE;
writel_relaxed(val, port->base + PCIE_MSI_SET_ENABLE_REG);
writel_relaxed(val, pcie->base + PCIE_MSI_SET_ENABLE_REG);
val = readl_relaxed(port->base + PCIE_INT_ENABLE_REG);
val = readl_relaxed(pcie->base + PCIE_INT_ENABLE_REG);
val |= PCIE_MSI_ENABLE;
writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG);
writel_relaxed(val, pcie->base + PCIE_INT_ENABLE_REG);
}
static int mtk_pcie_startup_port(struct mtk_pcie_port *port)
static int mtk_pcie_startup_port(struct mtk_gen3_pcie *pcie)
{
struct resource_entry *entry;
struct pci_host_bridge *host = pci_host_bridge_from_priv(port);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
unsigned int table_index = 0;
int err;
u32 val;
/* Set as RC mode */
val = readl_relaxed(port->base + PCIE_SETTING_REG);
val = readl_relaxed(pcie->base + PCIE_SETTING_REG);
val |= PCIE_RC_MODE;
writel_relaxed(val, port->base + PCIE_SETTING_REG);
writel_relaxed(val, pcie->base + PCIE_SETTING_REG);
/* Set class code */
val = readl_relaxed(port->base + PCIE_PCI_IDS_1);
val = readl_relaxed(pcie->base + PCIE_PCI_IDS_1);
val &= ~GENMASK(31, 8);
val |= PCI_CLASS(PCI_CLASS_BRIDGE_PCI << 8);
writel_relaxed(val, port->base + PCIE_PCI_IDS_1);
writel_relaxed(val, pcie->base + PCIE_PCI_IDS_1);
/* Mask all INTx interrupts */
val = readl_relaxed(port->base + PCIE_INT_ENABLE_REG);
val = readl_relaxed(pcie->base + PCIE_INT_ENABLE_REG);
val &= ~PCIE_INTX_ENABLE;
writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG);
writel_relaxed(val, pcie->base + PCIE_INT_ENABLE_REG);
/* Disable DVFSRC voltage request */
val = readl_relaxed(pcie->base + PCIE_MISC_CTRL_REG);
val |= PCIE_DISABLE_DVFSRC_VLT_REQ;
writel_relaxed(val, pcie->base + PCIE_MISC_CTRL_REG);
/* Assert all reset signals */
val = readl_relaxed(port->base + PCIE_RST_CTRL_REG);
val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG);
val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB;
writel_relaxed(val, port->base + PCIE_RST_CTRL_REG);
writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
/*
* Described in PCIe CEM specification setctions 2.2 (PERST# Signal)
* Described in PCIe CEM specification sections 2.2 (PERST# Signal)
* and 2.2.1 (Initial Power-Up (G3 to S0)).
* The deassertion of PERST# should be delayed 100ms (TPVPERL)
* for the power and clock to become stable.
@ -312,19 +320,19 @@ static int mtk_pcie_startup_port(struct mtk_pcie_port *port)
/* De-assert reset signals */
val &= ~(PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB);
writel_relaxed(val, port->base + PCIE_RST_CTRL_REG);
writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
/* Check if the link is up or not */
err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_REG, val,
err = readl_poll_timeout(pcie->base + PCIE_LINK_STATUS_REG, val,
!!(val & PCIE_PORT_LINKUP), 20,
PCI_PM_D3COLD_WAIT * USEC_PER_MSEC);
if (err) {
val = readl_relaxed(port->base + PCIE_LTSSM_STATUS_REG);
dev_err(port->dev, "PCIe link down, ltssm reg val: %#x\n", val);
val = readl_relaxed(pcie->base + PCIE_LTSSM_STATUS_REG);
dev_err(pcie->dev, "PCIe link down, ltssm reg val: %#x\n", val);
return err;
}
mtk_pcie_enable_msi(port);
mtk_pcie_enable_msi(pcie);
/* Set PCIe translation windows */
resource_list_for_each_entry(entry, &host->windows) {
@ -347,12 +355,12 @@ static int mtk_pcie_startup_port(struct mtk_pcie_port *port)
pci_addr = res->start - entry->offset;
size = resource_size(res);
err = mtk_pcie_set_trans_table(port, cpu_addr, pci_addr, size,
err = mtk_pcie_set_trans_table(pcie, cpu_addr, pci_addr, size,
type, table_index);
if (err)
return err;
dev_dbg(port->dev, "set %s trans window[%d]: cpu_addr = %#llx, pci_addr = %#llx, size = %#llx\n",
dev_dbg(pcie->dev, "set %s trans window[%d]: cpu_addr = %#llx, pci_addr = %#llx, size = %#llx\n",
range_type, table_index, (unsigned long long)cpu_addr,
(unsigned long long)pci_addr, (unsigned long long)size);
@ -396,7 +404,7 @@ static struct msi_domain_info mtk_msi_domain_info = {
static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct mtk_msi_set *msi_set = irq_data_get_irq_chip_data(data);
struct mtk_pcie_port *port = data->domain->host_data;
struct mtk_gen3_pcie *pcie = data->domain->host_data;
unsigned long hwirq;
hwirq = data->hwirq % PCIE_MSI_IRQS_PER_SET;
@ -404,7 +412,7 @@ static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
msg->address_hi = upper_32_bits(msi_set->msg_addr);
msg->address_lo = lower_32_bits(msi_set->msg_addr);
msg->data = hwirq;
dev_dbg(port->dev, "msi#%#lx address_hi %#x address_lo %#x data %d\n",
dev_dbg(pcie->dev, "msi#%#lx address_hi %#x address_lo %#x data %d\n",
hwirq, msg->address_hi, msg->address_lo, msg->data);
}
@ -421,33 +429,33 @@ static void mtk_msi_bottom_irq_ack(struct irq_data *data)
static void mtk_msi_bottom_irq_mask(struct irq_data *data)
{
struct mtk_msi_set *msi_set = irq_data_get_irq_chip_data(data);
struct mtk_pcie_port *port = data->domain->host_data;
struct mtk_gen3_pcie *pcie = data->domain->host_data;
unsigned long hwirq, flags;
u32 val;
hwirq = data->hwirq % PCIE_MSI_IRQS_PER_SET;
raw_spin_lock_irqsave(&port->irq_lock, flags);
raw_spin_lock_irqsave(&pcie->irq_lock, flags);
val = readl_relaxed(msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET);
val &= ~BIT(hwirq);
writel_relaxed(val, msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET);
raw_spin_unlock_irqrestore(&port->irq_lock, flags);
raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
}
static void mtk_msi_bottom_irq_unmask(struct irq_data *data)
{
struct mtk_msi_set *msi_set = irq_data_get_irq_chip_data(data);
struct mtk_pcie_port *port = data->domain->host_data;
struct mtk_gen3_pcie *pcie = data->domain->host_data;
unsigned long hwirq, flags;
u32 val;
hwirq = data->hwirq % PCIE_MSI_IRQS_PER_SET;
raw_spin_lock_irqsave(&port->irq_lock, flags);
raw_spin_lock_irqsave(&pcie->irq_lock, flags);
val = readl_relaxed(msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET);
val |= BIT(hwirq);
writel_relaxed(val, msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET);
raw_spin_unlock_irqrestore(&port->irq_lock, flags);
raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
}
static struct irq_chip mtk_msi_bottom_irq_chip = {
@ -463,22 +471,22 @@ static int mtk_msi_bottom_domain_alloc(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs,
void *arg)
{
struct mtk_pcie_port *port = domain->host_data;
struct mtk_gen3_pcie *pcie = domain->host_data;
struct mtk_msi_set *msi_set;
int i, hwirq, set_idx;
mutex_lock(&port->lock);
mutex_lock(&pcie->lock);
hwirq = bitmap_find_free_region(port->msi_irq_in_use, PCIE_MSI_IRQS_NUM,
hwirq = bitmap_find_free_region(pcie->msi_irq_in_use, PCIE_MSI_IRQS_NUM,
order_base_2(nr_irqs));
mutex_unlock(&port->lock);
mutex_unlock(&pcie->lock);
if (hwirq < 0)
return -ENOSPC;
set_idx = hwirq / PCIE_MSI_IRQS_PER_SET;
msi_set = &port->msi_sets[set_idx];
msi_set = &pcie->msi_sets[set_idx];
for (i = 0; i < nr_irqs; i++)
irq_domain_set_info(domain, virq + i, hwirq + i,
@ -491,15 +499,15 @@ static int mtk_msi_bottom_domain_alloc(struct irq_domain *domain,
static void mtk_msi_bottom_domain_free(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs)
{
struct mtk_pcie_port *port = domain->host_data;
struct mtk_gen3_pcie *pcie = domain->host_data;
struct irq_data *data = irq_domain_get_irq_data(domain, virq);
mutex_lock(&port->lock);
mutex_lock(&pcie->lock);
bitmap_release_region(port->msi_irq_in_use, data->hwirq,
bitmap_release_region(pcie->msi_irq_in_use, data->hwirq,
order_base_2(nr_irqs));
mutex_unlock(&port->lock);
mutex_unlock(&pcie->lock);
irq_domain_free_irqs_common(domain, virq, nr_irqs);
}
@ -511,28 +519,28 @@ static const struct irq_domain_ops mtk_msi_bottom_domain_ops = {
static void mtk_intx_mask(struct irq_data *data)
{
struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
struct mtk_gen3_pcie *pcie = irq_data_get_irq_chip_data(data);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&port->irq_lock, flags);
val = readl_relaxed(port->base + PCIE_INT_ENABLE_REG);
raw_spin_lock_irqsave(&pcie->irq_lock, flags);
val = readl_relaxed(pcie->base + PCIE_INT_ENABLE_REG);
val &= ~BIT(data->hwirq + PCIE_INTX_SHIFT);
writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG);
raw_spin_unlock_irqrestore(&port->irq_lock, flags);
writel_relaxed(val, pcie->base + PCIE_INT_ENABLE_REG);
raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
}
static void mtk_intx_unmask(struct irq_data *data)
{
struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
struct mtk_gen3_pcie *pcie = irq_data_get_irq_chip_data(data);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&port->irq_lock, flags);
val = readl_relaxed(port->base + PCIE_INT_ENABLE_REG);
raw_spin_lock_irqsave(&pcie->irq_lock, flags);
val = readl_relaxed(pcie->base + PCIE_INT_ENABLE_REG);
val |= BIT(data->hwirq + PCIE_INTX_SHIFT);
writel_relaxed(val, port->base + PCIE_INT_ENABLE_REG);
raw_spin_unlock_irqrestore(&port->irq_lock, flags);
writel_relaxed(val, pcie->base + PCIE_INT_ENABLE_REG);
raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
}
/**
@ -545,11 +553,11 @@ static void mtk_intx_unmask(struct irq_data *data)
*/
static void mtk_intx_eoi(struct irq_data *data)
{
struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
struct mtk_gen3_pcie *pcie = irq_data_get_irq_chip_data(data);
unsigned long hwirq;
hwirq = data->hwirq + PCIE_INTX_SHIFT;
writel_relaxed(BIT(hwirq), port->base + PCIE_INT_STATUS_REG);
writel_relaxed(BIT(hwirq), pcie->base + PCIE_INT_STATUS_REG);
}
static struct irq_chip mtk_intx_irq_chip = {
@ -573,13 +581,13 @@ static const struct irq_domain_ops intx_domain_ops = {
.map = mtk_pcie_intx_map,
};
static int mtk_pcie_init_irq_domains(struct mtk_pcie_port *port)
static int mtk_pcie_init_irq_domains(struct mtk_gen3_pcie *pcie)
{
struct device *dev = port->dev;
struct device *dev = pcie->dev;
struct device_node *intc_node, *node = dev->of_node;
int ret;
raw_spin_lock_init(&port->irq_lock);
raw_spin_lock_init(&pcie->irq_lock);
/* Setup INTx */
intc_node = of_get_child_by_name(node, "interrupt-controller");
@ -588,28 +596,28 @@ static int mtk_pcie_init_irq_domains(struct mtk_pcie_port *port)
return -ENODEV;
}
port->intx_domain = irq_domain_add_linear(intc_node, PCI_NUM_INTX,
&intx_domain_ops, port);
if (!port->intx_domain) {
pcie->intx_domain = irq_domain_add_linear(intc_node, PCI_NUM_INTX,
&intx_domain_ops, pcie);
if (!pcie->intx_domain) {
dev_err(dev, "failed to create INTx IRQ domain\n");
return -ENODEV;
}
/* Setup MSI */
mutex_init(&port->lock);
mutex_init(&pcie->lock);
port->msi_bottom_domain = irq_domain_add_linear(node, PCIE_MSI_IRQS_NUM,
&mtk_msi_bottom_domain_ops, port);
if (!port->msi_bottom_domain) {
pcie->msi_bottom_domain = irq_domain_add_linear(node, PCIE_MSI_IRQS_NUM,
&mtk_msi_bottom_domain_ops, pcie);
if (!pcie->msi_bottom_domain) {
dev_err(dev, "failed to create MSI bottom domain\n");
ret = -ENODEV;
goto err_msi_bottom_domain;
}
port->msi_domain = pci_msi_create_irq_domain(dev->fwnode,
pcie->msi_domain = pci_msi_create_irq_domain(dev->fwnode,
&mtk_msi_domain_info,
port->msi_bottom_domain);
if (!port->msi_domain) {
pcie->msi_bottom_domain);
if (!pcie->msi_domain) {
dev_err(dev, "failed to create MSI domain\n");
ret = -ENODEV;
goto err_msi_domain;
@ -618,32 +626,32 @@ static int mtk_pcie_init_irq_domains(struct mtk_pcie_port *port)
return 0;
err_msi_domain:
irq_domain_remove(port->msi_bottom_domain);
irq_domain_remove(pcie->msi_bottom_domain);
err_msi_bottom_domain:
irq_domain_remove(port->intx_domain);
irq_domain_remove(pcie->intx_domain);
return ret;
}
static void mtk_pcie_irq_teardown(struct mtk_pcie_port *port)
static void mtk_pcie_irq_teardown(struct mtk_gen3_pcie *pcie)
{
irq_set_chained_handler_and_data(port->irq, NULL, NULL);
irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
if (port->intx_domain)
irq_domain_remove(port->intx_domain);
if (pcie->intx_domain)
irq_domain_remove(pcie->intx_domain);
if (port->msi_domain)
irq_domain_remove(port->msi_domain);
if (pcie->msi_domain)
irq_domain_remove(pcie->msi_domain);
if (port->msi_bottom_domain)
irq_domain_remove(port->msi_bottom_domain);
if (pcie->msi_bottom_domain)
irq_domain_remove(pcie->msi_bottom_domain);
irq_dispose_mapping(port->irq);
irq_dispose_mapping(pcie->irq);
}
static void mtk_pcie_msi_handler(struct mtk_pcie_port *port, int set_idx)
static void mtk_pcie_msi_handler(struct mtk_gen3_pcie *pcie, int set_idx)
{
struct mtk_msi_set *msi_set = &port->msi_sets[set_idx];
struct mtk_msi_set *msi_set = &pcie->msi_sets[set_idx];
unsigned long msi_enable, msi_status;
irq_hw_number_t bit, hwirq;
@ -658,59 +666,59 @@ static void mtk_pcie_msi_handler(struct mtk_pcie_port *port, int set_idx)
for_each_set_bit(bit, &msi_status, PCIE_MSI_IRQS_PER_SET) {
hwirq = bit + set_idx * PCIE_MSI_IRQS_PER_SET;
generic_handle_domain_irq(port->msi_bottom_domain, hwirq);
generic_handle_domain_irq(pcie->msi_bottom_domain, hwirq);
}
} while (true);
}
static void mtk_pcie_irq_handler(struct irq_desc *desc)
{
struct mtk_pcie_port *port = irq_desc_get_handler_data(desc);
struct mtk_gen3_pcie *pcie = irq_desc_get_handler_data(desc);
struct irq_chip *irqchip = irq_desc_get_chip(desc);
unsigned long status;
irq_hw_number_t irq_bit = PCIE_INTX_SHIFT;
chained_irq_enter(irqchip, desc);
status = readl_relaxed(port->base + PCIE_INT_STATUS_REG);
status = readl_relaxed(pcie->base + PCIE_INT_STATUS_REG);
for_each_set_bit_from(irq_bit, &status, PCI_NUM_INTX +
PCIE_INTX_SHIFT)
generic_handle_domain_irq(port->intx_domain,
generic_handle_domain_irq(pcie->intx_domain,
irq_bit - PCIE_INTX_SHIFT);
irq_bit = PCIE_MSI_SHIFT;
for_each_set_bit_from(irq_bit, &status, PCIE_MSI_SET_NUM +
PCIE_MSI_SHIFT) {
mtk_pcie_msi_handler(port, irq_bit - PCIE_MSI_SHIFT);
mtk_pcie_msi_handler(pcie, irq_bit - PCIE_MSI_SHIFT);
writel_relaxed(BIT(irq_bit), port->base + PCIE_INT_STATUS_REG);
writel_relaxed(BIT(irq_bit), pcie->base + PCIE_INT_STATUS_REG);
}
chained_irq_exit(irqchip, desc);
}
static int mtk_pcie_setup_irq(struct mtk_pcie_port *port)
static int mtk_pcie_setup_irq(struct mtk_gen3_pcie *pcie)
{
struct device *dev = port->dev;
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
int err;
err = mtk_pcie_init_irq_domains(port);
err = mtk_pcie_init_irq_domains(pcie);
if (err)
return err;
port->irq = platform_get_irq(pdev, 0);
if (port->irq < 0)
return port->irq;
pcie->irq = platform_get_irq(pdev, 0);
if (pcie->irq < 0)
return pcie->irq;
irq_set_chained_handler_and_data(port->irq, mtk_pcie_irq_handler, port);
irq_set_chained_handler_and_data(pcie->irq, mtk_pcie_irq_handler, pcie);
return 0;
}
static int mtk_pcie_parse_port(struct mtk_pcie_port *port)
static int mtk_pcie_parse_port(struct mtk_gen3_pcie *pcie)
{
struct device *dev = port->dev;
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
struct resource *regs;
int ret;
@ -718,77 +726,77 @@ static int mtk_pcie_parse_port(struct mtk_pcie_port *port)
regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pcie-mac");
if (!regs)
return -EINVAL;
port->base = devm_ioremap_resource(dev, regs);
if (IS_ERR(port->base)) {
pcie->base = devm_ioremap_resource(dev, regs);
if (IS_ERR(pcie->base)) {
dev_err(dev, "failed to map register base\n");
return PTR_ERR(port->base);
return PTR_ERR(pcie->base);
}
port->reg_base = regs->start;
pcie->reg_base = regs->start;
port->phy_reset = devm_reset_control_get_optional_exclusive(dev, "phy");
if (IS_ERR(port->phy_reset)) {
ret = PTR_ERR(port->phy_reset);
pcie->phy_reset = devm_reset_control_get_optional_exclusive(dev, "phy");
if (IS_ERR(pcie->phy_reset)) {
ret = PTR_ERR(pcie->phy_reset);
if (ret != -EPROBE_DEFER)
dev_err(dev, "failed to get PHY reset\n");
return ret;
}
port->mac_reset = devm_reset_control_get_optional_exclusive(dev, "mac");
if (IS_ERR(port->mac_reset)) {
ret = PTR_ERR(port->mac_reset);
pcie->mac_reset = devm_reset_control_get_optional_exclusive(dev, "mac");
if (IS_ERR(pcie->mac_reset)) {
ret = PTR_ERR(pcie->mac_reset);
if (ret != -EPROBE_DEFER)
dev_err(dev, "failed to get MAC reset\n");
return ret;
}
port->phy = devm_phy_optional_get(dev, "pcie-phy");
if (IS_ERR(port->phy)) {
ret = PTR_ERR(port->phy);
pcie->phy = devm_phy_optional_get(dev, "pcie-phy");
if (IS_ERR(pcie->phy)) {
ret = PTR_ERR(pcie->phy);
if (ret != -EPROBE_DEFER)
dev_err(dev, "failed to get PHY\n");
return ret;
}
port->num_clks = devm_clk_bulk_get_all(dev, &port->clks);
if (port->num_clks < 0) {
pcie->num_clks = devm_clk_bulk_get_all(dev, &pcie->clks);
if (pcie->num_clks < 0) {
dev_err(dev, "failed to get clocks\n");
return port->num_clks;
return pcie->num_clks;
}
return 0;
}
static int mtk_pcie_power_up(struct mtk_pcie_port *port)
static int mtk_pcie_power_up(struct mtk_gen3_pcie *pcie)
{
struct device *dev = port->dev;
struct device *dev = pcie->dev;
int err;
/* PHY power on and enable pipe clock */
reset_control_deassert(port->phy_reset);
reset_control_deassert(pcie->phy_reset);
err = phy_init(port->phy);
err = phy_init(pcie->phy);
if (err) {
dev_err(dev, "failed to initialize PHY\n");
goto err_phy_init;
}
err = phy_power_on(port->phy);
err = phy_power_on(pcie->phy);
if (err) {
dev_err(dev, "failed to power on PHY\n");
goto err_phy_on;
}
/* MAC power on and enable transaction layer clocks */
reset_control_deassert(port->mac_reset);
reset_control_deassert(pcie->mac_reset);
pm_runtime_enable(dev);
pm_runtime_get_sync(dev);
err = clk_bulk_prepare_enable(port->num_clks, port->clks);
err = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
if (err) {
dev_err(dev, "failed to enable clocks\n");
goto err_clk_init;
@ -799,55 +807,55 @@ static int mtk_pcie_power_up(struct mtk_pcie_port *port)
err_clk_init:
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
reset_control_assert(port->mac_reset);
phy_power_off(port->phy);
reset_control_assert(pcie->mac_reset);
phy_power_off(pcie->phy);
err_phy_on:
phy_exit(port->phy);
phy_exit(pcie->phy);
err_phy_init:
reset_control_assert(port->phy_reset);
reset_control_assert(pcie->phy_reset);
return err;
}
static void mtk_pcie_power_down(struct mtk_pcie_port *port)
static void mtk_pcie_power_down(struct mtk_gen3_pcie *pcie)
{
clk_bulk_disable_unprepare(port->num_clks, port->clks);
clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
pm_runtime_put_sync(port->dev);
pm_runtime_disable(port->dev);
reset_control_assert(port->mac_reset);
pm_runtime_put_sync(pcie->dev);
pm_runtime_disable(pcie->dev);
reset_control_assert(pcie->mac_reset);
phy_power_off(port->phy);
phy_exit(port->phy);
reset_control_assert(port->phy_reset);
phy_power_off(pcie->phy);
phy_exit(pcie->phy);
reset_control_assert(pcie->phy_reset);
}
static int mtk_pcie_setup(struct mtk_pcie_port *port)
static int mtk_pcie_setup(struct mtk_gen3_pcie *pcie)
{
int err;
err = mtk_pcie_parse_port(port);
err = mtk_pcie_parse_port(pcie);
if (err)
return err;
/* Don't touch the hardware registers before power up */
err = mtk_pcie_power_up(port);
err = mtk_pcie_power_up(pcie);
if (err)
return err;
/* Try link up */
err = mtk_pcie_startup_port(port);
err = mtk_pcie_startup_port(pcie);
if (err)
goto err_setup;
err = mtk_pcie_setup_irq(port);
err = mtk_pcie_setup_irq(pcie);
if (err)
goto err_setup;
return 0;
err_setup:
mtk_pcie_power_down(port);
mtk_pcie_power_down(pcie);
return err;
}
@ -855,30 +863,30 @@ err_setup:
static int mtk_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct mtk_pcie_port *port;
struct mtk_gen3_pcie *pcie;
struct pci_host_bridge *host;
int err;
host = devm_pci_alloc_host_bridge(dev, sizeof(*port));
host = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
if (!host)
return -ENOMEM;
port = pci_host_bridge_priv(host);
pcie = pci_host_bridge_priv(host);
port->dev = dev;
platform_set_drvdata(pdev, port);
pcie->dev = dev;
platform_set_drvdata(pdev, pcie);
err = mtk_pcie_setup(port);
err = mtk_pcie_setup(pcie);
if (err)
return err;
host->ops = &mtk_pcie_ops;
host->sysdata = port;
host->sysdata = pcie;
err = pci_host_probe(host);
if (err) {
mtk_pcie_irq_teardown(port);
mtk_pcie_power_down(port);
mtk_pcie_irq_teardown(pcie);
mtk_pcie_power_down(pcie);
return err;
}
@ -887,66 +895,66 @@ static int mtk_pcie_probe(struct platform_device *pdev)
static int mtk_pcie_remove(struct platform_device *pdev)
{
struct mtk_pcie_port *port = platform_get_drvdata(pdev);
struct pci_host_bridge *host = pci_host_bridge_from_priv(port);
struct mtk_gen3_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
pci_lock_rescan_remove();
pci_stop_root_bus(host->bus);
pci_remove_root_bus(host->bus);
pci_unlock_rescan_remove();
mtk_pcie_irq_teardown(port);
mtk_pcie_power_down(port);
mtk_pcie_irq_teardown(pcie);
mtk_pcie_power_down(pcie);
return 0;
}
static void __maybe_unused mtk_pcie_irq_save(struct mtk_pcie_port *port)
static void __maybe_unused mtk_pcie_irq_save(struct mtk_gen3_pcie *pcie)
{
int i;
raw_spin_lock(&port->irq_lock);
raw_spin_lock(&pcie->irq_lock);
port->saved_irq_state = readl_relaxed(port->base + PCIE_INT_ENABLE_REG);
pcie->saved_irq_state = readl_relaxed(pcie->base + PCIE_INT_ENABLE_REG);
for (i = 0; i < PCIE_MSI_SET_NUM; i++) {
struct mtk_msi_set *msi_set = &port->msi_sets[i];
struct mtk_msi_set *msi_set = &pcie->msi_sets[i];
msi_set->saved_irq_state = readl_relaxed(msi_set->base +
PCIE_MSI_SET_ENABLE_OFFSET);
}
raw_spin_unlock(&port->irq_lock);
raw_spin_unlock(&pcie->irq_lock);
}
static void __maybe_unused mtk_pcie_irq_restore(struct mtk_pcie_port *port)
static void __maybe_unused mtk_pcie_irq_restore(struct mtk_gen3_pcie *pcie)
{
int i;
raw_spin_lock(&port->irq_lock);
raw_spin_lock(&pcie->irq_lock);
writel_relaxed(port->saved_irq_state, port->base + PCIE_INT_ENABLE_REG);
writel_relaxed(pcie->saved_irq_state, pcie->base + PCIE_INT_ENABLE_REG);
for (i = 0; i < PCIE_MSI_SET_NUM; i++) {
struct mtk_msi_set *msi_set = &port->msi_sets[i];
struct mtk_msi_set *msi_set = &pcie->msi_sets[i];
writel_relaxed(msi_set->saved_irq_state,
msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET);
}
raw_spin_unlock(&port->irq_lock);
raw_spin_unlock(&pcie->irq_lock);
}
static int __maybe_unused mtk_pcie_turn_off_link(struct mtk_pcie_port *port)
static int __maybe_unused mtk_pcie_turn_off_link(struct mtk_gen3_pcie *pcie)
{
u32 val;
val = readl_relaxed(port->base + PCIE_ICMD_PM_REG);
val = readl_relaxed(pcie->base + PCIE_ICMD_PM_REG);
val |= PCIE_TURN_OFF_LINK;
writel_relaxed(val, port->base + PCIE_ICMD_PM_REG);
writel_relaxed(val, pcie->base + PCIE_ICMD_PM_REG);
/* Check the link is L2 */
return readl_poll_timeout(port->base + PCIE_LTSSM_STATUS_REG, val,
return readl_poll_timeout(pcie->base + PCIE_LTSSM_STATUS_REG, val,
(PCIE_LTSSM_STATE(val) ==
PCIE_LTSSM_STATE_L2_IDLE), 20,
50 * USEC_PER_MSEC);
@ -954,46 +962,46 @@ static int __maybe_unused mtk_pcie_turn_off_link(struct mtk_pcie_port *port)
static int __maybe_unused mtk_pcie_suspend_noirq(struct device *dev)
{
struct mtk_pcie_port *port = dev_get_drvdata(dev);
struct mtk_gen3_pcie *pcie = dev_get_drvdata(dev);
int err;
u32 val;
/* Trigger link to L2 state */
err = mtk_pcie_turn_off_link(port);
err = mtk_pcie_turn_off_link(pcie);
if (err) {
dev_err(port->dev, "cannot enter L2 state\n");
dev_err(pcie->dev, "cannot enter L2 state\n");
return err;
}
/* Pull down the PERST# pin */
val = readl_relaxed(port->base + PCIE_RST_CTRL_REG);
val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG);
val |= PCIE_PE_RSTB;
writel_relaxed(val, port->base + PCIE_RST_CTRL_REG);
writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
dev_dbg(port->dev, "entered L2 states successfully");
dev_dbg(pcie->dev, "entered L2 states successfully");
mtk_pcie_irq_save(port);
mtk_pcie_power_down(port);
mtk_pcie_irq_save(pcie);
mtk_pcie_power_down(pcie);
return 0;
}
static int __maybe_unused mtk_pcie_resume_noirq(struct device *dev)
{
struct mtk_pcie_port *port = dev_get_drvdata(dev);
struct mtk_gen3_pcie *pcie = dev_get_drvdata(dev);
int err;
err = mtk_pcie_power_up(port);
err = mtk_pcie_power_up(pcie);
if (err)
return err;
err = mtk_pcie_startup_port(port);
err = mtk_pcie_startup_port(pcie);
if (err) {
mtk_pcie_power_down(port);
mtk_pcie_power_down(pcie);
return err;
}
mtk_pcie_irq_restore(port);
mtk_pcie_irq_restore(pcie);
return 0;
}


@ -365,19 +365,12 @@ static int mtk_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
{
struct mtk_pcie_port *port;
u32 bn = bus->number;
int ret;
port = mtk_pcie_find_port(bus, devfn);
if (!port) {
*val = ~0;
if (!port)
return PCIBIOS_DEVICE_NOT_FOUND;
}
ret = mtk_pcie_hw_rd_cfg(port, bn, devfn, where, size, val);
if (ret)
*val = ~0;
return ret;
return mtk_pcie_hw_rd_cfg(port, bn, devfn, where, size, val);
}
static int mtk_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
@ -702,6 +695,13 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
*/
writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL);
/*
* Described in PCIe CEM specification sections 2.2 (PERST# Signal) and
* 2.2.1 (Initial Power-Up (G3 to S0)). The deassertion of PERST# should
* be delayed 100ms (TPVPERL) for the power and clock to become stable.
*/
msleep(100);
/* De-assert PHY, PE, PIPE, MAC and configuration reset */
val = readl(port->base + PCIE_RST_CTRL);
val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB |


@ -262,7 +262,7 @@ struct mc_msi {
DECLARE_BITMAP(used, MC_NUM_MSI_IRQS);
};
struct mc_port {
struct mc_pcie {
void __iomem *axi_base_addr;
struct device *dev;
struct irq_domain *intx_domain;
@ -382,7 +382,7 @@ static struct {
static char poss_clks[][5] = { "fic0", "fic1", "fic2", "fic3" };
static void mc_pcie_enable_msi(struct mc_port *port, void __iomem *base)
static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *base)
{
struct mc_msi *msi = &port->msi;
u32 cap_offset = MC_MSI_CAP_CTRL_OFFSET;
@ -405,7 +405,7 @@ static void mc_pcie_enable_msi(struct mc_port *port, void __iomem *base)
static void mc_handle_msi(struct irq_desc *desc)
{
struct mc_port *port = irq_desc_get_handler_data(desc);
struct mc_pcie *port = irq_desc_get_handler_data(desc);
struct device *dev = port->dev;
struct mc_msi *msi = &port->msi;
void __iomem *bridge_base_addr =
@ -428,7 +428,7 @@ static void mc_handle_msi(struct irq_desc *desc)
static void mc_msi_bottom_irq_ack(struct irq_data *data)
{
struct mc_port *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
u32 bitpos = data->hwirq;
@ -443,7 +443,7 @@ static void mc_msi_bottom_irq_ack(struct irq_data *data)
static void mc_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct mc_port *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
phys_addr_t addr = port->msi.vector_phy;
msg->address_lo = lower_32_bits(addr);
@ -470,7 +470,7 @@ static struct irq_chip mc_msi_bottom_irq_chip = {
static int mc_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct mc_port *port = domain->host_data;
struct mc_pcie *port = domain->host_data;
struct mc_msi *msi = &port->msi;
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
@ -503,7 +503,7 @@ static void mc_irq_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct mc_port *port = irq_data_get_irq_chip_data(d);
struct mc_pcie *port = irq_data_get_irq_chip_data(d);
struct mc_msi *msi = &port->msi;
mutex_lock(&msi->lock);
@ -534,7 +534,7 @@ static struct msi_domain_info mc_msi_domain_info = {
.chip = &mc_msi_irq_chip,
};
static int mc_allocate_msi_domains(struct mc_port *port)
static int mc_allocate_msi_domains(struct mc_pcie *port)
{
struct device *dev = port->dev;
struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
@ -562,7 +562,7 @@ static int mc_allocate_msi_domains(struct mc_port *port)
static void mc_handle_intx(struct irq_desc *desc)
{
struct mc_port *port = irq_desc_get_handler_data(desc);
struct mc_pcie *port = irq_desc_get_handler_data(desc);
struct device *dev = port->dev;
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
@ -585,7 +585,7 @@ static void mc_handle_intx(struct irq_desc *desc)
static void mc_ack_intx_irq(struct irq_data *data)
{
struct mc_port *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
@ -595,7 +595,7 @@ static void mc_ack_intx_irq(struct irq_data *data)
static void mc_mask_intx_irq(struct irq_data *data)
{
struct mc_port *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
unsigned long flags;
@ -611,7 +611,7 @@ static void mc_mask_intx_irq(struct irq_data *data)
static void mc_unmask_intx_irq(struct irq_data *data)
{
struct mc_port *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
unsigned long flags;
@ -698,7 +698,7 @@ static u32 local_events(void __iomem *addr)
return val;
}
static u32 get_events(struct mc_port *port)
static u32 get_events(struct mc_pcie *port)
{
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
@ -715,7 +715,7 @@ static u32 get_events(struct mc_port *port)
static irqreturn_t mc_event_handler(int irq, void *dev_id)
{
struct mc_port *port = dev_id;
struct mc_pcie *port = dev_id;
struct device *dev = port->dev;
struct irq_data *data;
@ -731,7 +731,7 @@ static irqreturn_t mc_event_handler(int irq, void *dev_id)
static void mc_handle_event(struct irq_desc *desc)
{
struct mc_port *port = irq_desc_get_handler_data(desc);
struct mc_pcie *port = irq_desc_get_handler_data(desc);
unsigned long events;
u32 bit;
struct irq_chip *chip = irq_desc_get_chip(desc);
@ -748,7 +748,7 @@ static void mc_handle_event(struct irq_desc *desc)
static void mc_ack_event_irq(struct irq_data *data)
{
struct mc_port *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
u32 event = data->hwirq;
void __iomem *addr;
u32 mask;
@ -763,7 +763,7 @@ static void mc_ack_event_irq(struct irq_data *data)
static void mc_mask_event_irq(struct irq_data *data)
{
struct mc_port *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
u32 event = data->hwirq;
void __iomem *addr;
u32 mask;
@ -793,7 +793,7 @@ static void mc_mask_event_irq(struct irq_data *data)
static void mc_unmask_event_irq(struct irq_data *data)
{
struct mc_port *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
u32 event = data->hwirq;
void __iomem *addr;
u32 mask;
@ -881,7 +881,7 @@ static int mc_pcie_init_clks(struct device *dev)
return 0;
}
static int mc_pcie_init_irq_domains(struct mc_port *port)
static int mc_pcie_init_irq_domains(struct mc_pcie *port)
{
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
@ -957,7 +957,7 @@ static void mc_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
}
static int mc_pcie_setup_windows(struct platform_device *pdev,
struct mc_port *port)
struct mc_pcie *port)
{
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
@ -983,7 +983,7 @@ static int mc_platform_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct platform_device *pdev = to_platform_device(dev);
struct mc_port *port;
struct mc_pcie *port;
void __iomem *bridge_base_addr;
void __iomem *ctrl_base_addr;
int ret;


@ -93,8 +93,8 @@ struct mt7621_pcie_port {
* reset lines are inverted.
*/
struct mt7621_pcie {
void __iomem *base;
struct device *dev;
void __iomem *base;
struct list_head ports;
bool resets_inverted;
};
@ -129,7 +129,7 @@ static inline void pcie_port_write(struct mt7621_pcie_port *port,
writel_relaxed(val, port->base + reg);
}
static inline u32 mt7621_pci_get_cfgaddr(unsigned int bus, unsigned int slot,
static inline u32 mt7621_pcie_get_cfgaddr(unsigned int bus, unsigned int slot,
unsigned int func, unsigned int where)
{
return (((where & 0xf00) >> 8) << 24) | (bus << 16) | (slot << 11) |
@ -140,7 +140,7 @@ static void __iomem *mt7621_pcie_map_bus(struct pci_bus *bus,
unsigned int devfn, int where)
{
struct mt7621_pcie *pcie = bus->sysdata;
u32 address = mt7621_pci_get_cfgaddr(bus->number, PCI_SLOT(devfn),
u32 address = mt7621_pcie_get_cfgaddr(bus->number, PCI_SLOT(devfn),
PCI_FUNC(devfn), where);
writel_relaxed(address, pcie->base + RALINK_PCI_CONFIG_ADDR);
@ -148,7 +148,7 @@ static void __iomem *mt7621_pcie_map_bus(struct pci_bus *bus,
return pcie->base + RALINK_PCI_CONFIG_DATA + (where & 3);
}
struct pci_ops mt7621_pci_ops = {
static struct pci_ops mt7621_pcie_ops = {
.map_bus = mt7621_pcie_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
@ -156,7 +156,7 @@ struct pci_ops mt7621_pci_ops = {
static u32 read_config(struct mt7621_pcie *pcie, unsigned int dev, u32 reg)
{
u32 address = mt7621_pci_get_cfgaddr(0, dev, 0, reg);
u32 address = mt7621_pcie_get_cfgaddr(0, dev, 0, reg);
pcie_write(pcie, address, RALINK_PCI_CONFIG_ADDR);
return pcie_read(pcie, RALINK_PCI_CONFIG_DATA);
@ -165,7 +165,7 @@ static u32 read_config(struct mt7621_pcie *pcie, unsigned int dev, u32 reg)
static void write_config(struct mt7621_pcie *pcie, unsigned int dev,
u32 reg, u32 val)
{
u32 address = mt7621_pci_get_cfgaddr(0, dev, 0, reg);
u32 address = mt7621_pcie_get_cfgaddr(0, dev, 0, reg);
pcie_write(pcie, address, RALINK_PCI_CONFIG_ADDR);
pcie_write(pcie, val, RALINK_PCI_CONFIG_DATA);
@ -208,37 +208,6 @@ static inline void mt7621_control_deassert(struct mt7621_pcie_port *port)
reset_control_assert(port->pcie_rst);
}
static int setup_cm_memory_region(struct pci_host_bridge *host)
{
struct mt7621_pcie *pcie = pci_host_bridge_priv(host);
struct device *dev = pcie->dev;
struct resource_entry *entry;
resource_size_t mask;
entry = resource_list_first_type(&host->windows, IORESOURCE_MEM);
if (!entry) {
dev_err(dev, "cannot get memory resource\n");
return -EINVAL;
}
if (mips_cps_numiocu(0)) {
/*
* FIXME: hardware doesn't accept mask values with 1s after
* 0s (e.g. 0xffef), so it would be great to warn if that's
* about to happen
*/
mask = ~(entry->res->end - entry->res->start);
write_gcr_reg1_base(entry->res->start);
write_gcr_reg1_mask(mask | CM_GCR_REGn_MASK_CMTGT_IOCU0);
dev_info(dev, "PCI coherence region base: 0x%08llx, mask/settings: 0x%08llx\n",
(unsigned long long)read_gcr_reg1_base(),
(unsigned long long)read_gcr_reg1_mask());
}
return 0;
}
static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
struct device_node *node,
int slot)
@ -505,16 +474,16 @@ static int mt7621_pcie_register_host(struct pci_host_bridge *host)
{
struct mt7621_pcie *pcie = pci_host_bridge_priv(host);
host->ops = &mt7621_pci_ops;
host->ops = &mt7621_pcie_ops;
host->sysdata = pcie;
return pci_host_probe(host);
}
static const struct soc_device_attribute mt7621_pci_quirks_match[] = {
static const struct soc_device_attribute mt7621_pcie_quirks_match[] = {
{ .soc_id = "mt7621", .revision = "E2" }
};
static int mt7621_pci_probe(struct platform_device *pdev)
static int mt7621_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
const struct soc_device_attribute *attr;
@ -535,7 +504,7 @@ static int mt7621_pci_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, pcie);
INIT_LIST_HEAD(&pcie->ports);
attr = soc_device_match(mt7621_pci_quirks_match);
attr = soc_device_match(mt7621_pcie_quirks_match);
if (attr)
pcie->resets_inverted = true;
@ -557,12 +526,6 @@ static int mt7621_pci_probe(struct platform_device *pdev)
goto remove_resets;
}
err = setup_cm_memory_region(bridge);
if (err) {
dev_err(dev, "error setting up iocu mem regions\n");
goto remove_resets;
}
return mt7621_pcie_register_host(bridge);
remove_resets:
@ -572,7 +535,7 @@ remove_resets:
return err;
}
static int mt7621_pci_remove(struct platform_device *pdev)
static int mt7621_pcie_remove(struct platform_device *pdev)
{
struct mt7621_pcie *pcie = platform_get_drvdata(pdev);
struct mt7621_pcie_port *port;
@ -583,18 +546,20 @@ static int mt7621_pci_remove(struct platform_device *pdev)
return 0;
}
static const struct of_device_id mt7621_pci_ids[] = {
static const struct of_device_id mt7621_pcie_ids[] = {
{ .compatible = "mediatek,mt7621-pci" },
{},
};
MODULE_DEVICE_TABLE(of, mt7621_pci_ids);
MODULE_DEVICE_TABLE(of, mt7621_pcie_ids);
static struct platform_driver mt7621_pci_driver = {
.probe = mt7621_pci_probe,
.remove = mt7621_pci_remove,
static struct platform_driver mt7621_pcie_driver = {
.probe = mt7621_pcie_probe,
.remove = mt7621_pcie_remove,
.driver = {
.name = "mt7621-pci",
.of_match_table = of_match_ptr(mt7621_pci_ids),
.of_match_table = of_match_ptr(mt7621_pcie_ids),
},
};
builtin_platform_driver(mt7621_pci_driver);
builtin_platform_driver(mt7621_pcie_driver);
MODULE_LICENSE("GPL v2");


@ -50,10 +50,10 @@ struct rcar_msi {
*/
static void __iomem *pcie_base;
/*
* Static copy of bus clock pointer, so we can check whether the clock
* is enabled or not.
* Static copy of PCIe device pointer, so we can check whether the
* device is runtime suspended or not.
*/
static struct clk *pcie_bus_clk;
static struct device *pcie_dev;
#endif
/* Structure representing the PCIe interface */
@ -159,10 +159,8 @@ static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn,
ret = rcar_pcie_config_access(host, RCAR_PCI_ACCESS_READ,
bus, devfn, where, val);
if (ret != PCIBIOS_SUCCESSFUL) {
*val = 0xffffffff;
if (ret != PCIBIOS_SUCCESSFUL)
return ret;
}
if (size == 1)
*val = (*val >> (BITS_PER_BYTE * (where & 3))) & 0xff;
@ -792,7 +790,7 @@ static int rcar_pcie_get_resources(struct rcar_pcie_host *host)
#ifdef CONFIG_ARM
/* Cache static copy for L1 link state fixup hook on aarch32 */
pcie_base = pcie->base;
pcie_bus_clk = host->bus_clk;
pcie_dev = pcie->dev;
#endif
return 0;
@ -1062,7 +1060,7 @@ static int rcar_pcie_aarch32_abort_handler(unsigned long addr,
spin_lock_irqsave(&pmsr_lock, flags);
if (!pcie_base || !__clk_is_enabled(pcie_bus_clk)) {
if (!pcie_base || pm_runtime_suspended(pcie_dev)) {
ret = 1;
goto unlock_exit;
}


@ -221,10 +221,8 @@ static int rockchip_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
{
struct rockchip_pcie *rockchip = bus->sysdata;
if (!rockchip_pcie_valid_device(rockchip, bus, PCI_SLOT(devfn))) {
*val = 0xffffffff;
if (!rockchip_pcie_valid_device(rockchip, bus, PCI_SLOT(devfn)))
return PCIBIOS_DEVICE_NOT_FOUND;
}
if (pci_is_root_bus(bus))
return rockchip_pcie_rd_own_conf(rockchip, where, size, val);


@ -99,10 +99,10 @@
#define XILINX_CPM_PCIE_REG_PSCR_LNKUP BIT(11)
/**
* struct xilinx_cpm_pcie_port - PCIe port information
* struct xilinx_cpm_pcie - PCIe port information
* @dev: Device pointer
* @reg_base: Bridge Register Base
* @cpm_base: CPM System Level Control and Status Register(SLCR) Base
* @dev: Device pointer
* @intx_domain: Legacy IRQ domain pointer
* @cpm_domain: CPM IRQ domain pointer
* @cfg: Holds mappings of config space window
@ -110,10 +110,10 @@
* @irq: Error interrupt number
* @lock: lock protecting shared register access
*/
struct xilinx_cpm_pcie_port {
struct xilinx_cpm_pcie {
struct device *dev;
void __iomem *reg_base;
void __iomem *cpm_base;
struct device *dev;
struct irq_domain *intx_domain;
struct irq_domain *cpm_domain;
struct pci_config_window *cfg;
@ -122,24 +122,24 @@ struct xilinx_cpm_pcie_port {
raw_spinlock_t lock;
};
static u32 pcie_read(struct xilinx_cpm_pcie_port *port, u32 reg)
static u32 pcie_read(struct xilinx_cpm_pcie *port, u32 reg)
{
return readl_relaxed(port->reg_base + reg);
}
static void pcie_write(struct xilinx_cpm_pcie_port *port,
static void pcie_write(struct xilinx_cpm_pcie *port,
u32 val, u32 reg)
{
writel_relaxed(val, port->reg_base + reg);
}
static bool cpm_pcie_link_up(struct xilinx_cpm_pcie_port *port)
static bool cpm_pcie_link_up(struct xilinx_cpm_pcie *port)
{
return (pcie_read(port, XILINX_CPM_PCIE_REG_PSCR) &
XILINX_CPM_PCIE_REG_PSCR_LNKUP);
}
static void cpm_pcie_clear_err_interrupts(struct xilinx_cpm_pcie_port *port)
static void cpm_pcie_clear_err_interrupts(struct xilinx_cpm_pcie *port)
{
unsigned long val = pcie_read(port, XILINX_CPM_PCIE_REG_RPEFR);
@ -153,7 +153,7 @@ static void cpm_pcie_clear_err_interrupts(struct xilinx_cpm_pcie_port *port)
static void xilinx_cpm_mask_leg_irq(struct irq_data *data)
{
struct xilinx_cpm_pcie_port *port = irq_data_get_irq_chip_data(data);
struct xilinx_cpm_pcie *port = irq_data_get_irq_chip_data(data);
unsigned long flags;
u32 mask;
u32 val;
@ -167,7 +167,7 @@ static void xilinx_cpm_mask_leg_irq(struct irq_data *data)
static void xilinx_cpm_unmask_leg_irq(struct irq_data *data)
{
struct xilinx_cpm_pcie_port *port = irq_data_get_irq_chip_data(data);
struct xilinx_cpm_pcie *port = irq_data_get_irq_chip_data(data);
unsigned long flags;
u32 mask;
u32 val;
@ -211,7 +211,7 @@ static const struct irq_domain_ops intx_domain_ops = {
static void xilinx_cpm_pcie_intx_flow(struct irq_desc *desc)
{
struct xilinx_cpm_pcie_port *port = irq_desc_get_handler_data(desc);
struct xilinx_cpm_pcie *port = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned long val;
int i;
@ -229,7 +229,7 @@ static void xilinx_cpm_pcie_intx_flow(struct irq_desc *desc)
static void xilinx_cpm_mask_event_irq(struct irq_data *d)
{
struct xilinx_cpm_pcie_port *port = irq_data_get_irq_chip_data(d);
struct xilinx_cpm_pcie *port = irq_data_get_irq_chip_data(d);
u32 val;
raw_spin_lock(&port->lock);
@ -241,7 +241,7 @@ static void xilinx_cpm_mask_event_irq(struct irq_data *d)
static void xilinx_cpm_unmask_event_irq(struct irq_data *d)
{
struct xilinx_cpm_pcie_port *port = irq_data_get_irq_chip_data(d);
struct xilinx_cpm_pcie *port = irq_data_get_irq_chip_data(d);
u32 val;
raw_spin_lock(&port->lock);
@ -273,7 +273,7 @@ static const struct irq_domain_ops event_domain_ops = {
static void xilinx_cpm_pcie_event_flow(struct irq_desc *desc)
{
struct xilinx_cpm_pcie_port *port = irq_desc_get_handler_data(desc);
struct xilinx_cpm_pcie *port = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned long val;
int i;
@ -327,7 +327,7 @@ static const struct {
static irqreturn_t xilinx_cpm_pcie_intr_handler(int irq, void *dev_id)
{
struct xilinx_cpm_pcie_port *port = dev_id;
struct xilinx_cpm_pcie *port = dev_id;
struct device *dev = port->dev;
struct irq_data *d;
@ -350,7 +350,7 @@ static irqreturn_t xilinx_cpm_pcie_intr_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
static void xilinx_cpm_free_irq_domains(struct xilinx_cpm_pcie_port *port)
static void xilinx_cpm_free_irq_domains(struct xilinx_cpm_pcie *port)
{
if (port->intx_domain) {
irq_domain_remove(port->intx_domain);
@ -369,7 +369,7 @@ static void xilinx_cpm_free_irq_domains(struct xilinx_cpm_pcie_port *port)
*
* Return: '0' on success and error value on failure
*/
static int xilinx_cpm_pcie_init_irq_domain(struct xilinx_cpm_pcie_port *port)
static int xilinx_cpm_pcie_init_irq_domain(struct xilinx_cpm_pcie *port)
{
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
@ -410,7 +410,7 @@ out:
return -ENOMEM;
}
static int xilinx_cpm_setup_irq(struct xilinx_cpm_pcie_port *port)
static int xilinx_cpm_setup_irq(struct xilinx_cpm_pcie *port)
{
struct device *dev = port->dev;
struct platform_device *pdev = to_platform_device(dev);
@ -462,7 +462,7 @@ static int xilinx_cpm_setup_irq(struct xilinx_cpm_pcie_port *port)
* xilinx_cpm_pcie_init_port - Initialize hardware
* @port: PCIe port information
*/
static void xilinx_cpm_pcie_init_port(struct xilinx_cpm_pcie_port *port)
static void xilinx_cpm_pcie_init_port(struct xilinx_cpm_pcie *port)
{
if (cpm_pcie_link_up(port))
dev_info(port->dev, "PCIe Link is UP\n");
@ -497,7 +497,7 @@ static void xilinx_cpm_pcie_init_port(struct xilinx_cpm_pcie_port *port)
*
* Return: '0' on success and error value on failure
*/
static int xilinx_cpm_pcie_parse_dt(struct xilinx_cpm_pcie_port *port,
static int xilinx_cpm_pcie_parse_dt(struct xilinx_cpm_pcie *port,
struct resource *bus_range)
{
struct device *dev = port->dev;
@ -523,7 +523,7 @@ static int xilinx_cpm_pcie_parse_dt(struct xilinx_cpm_pcie_port *port,
return 0;
}
static void xilinx_cpm_free_interrupts(struct xilinx_cpm_pcie_port *port)
static void xilinx_cpm_free_interrupts(struct xilinx_cpm_pcie *port)
{
irq_set_chained_handler_and_data(port->intx_irq, NULL, NULL);
irq_set_chained_handler_and_data(port->irq, NULL, NULL);
@ -537,7 +537,7 @@ static void xilinx_cpm_free_interrupts(struct xilinx_cpm_pcie_port *port)
*/
static int xilinx_cpm_pcie_probe(struct platform_device *pdev)
{
struct xilinx_cpm_pcie_port *port;
struct xilinx_cpm_pcie *port;
struct device *dev = &pdev->dev;
struct pci_host_bridge *bridge;
struct resource_entry *bus;


@ -146,7 +146,7 @@
struct nwl_msi { /* MSI information */
struct irq_domain *msi_domain;
unsigned long *bitmap;
DECLARE_BITMAP(bitmap, INT_PCI_MSI_NR);
struct irq_domain *dev_domain;
struct mutex lock; /* protect bitmap variable */
int irq_msi0;
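
For reference: DECLARE_BITMAP() from <linux/types.h> turns the pointer above into a fixed-size array, which is what lets the enable path further down drop the kzalloc()/kfree() bookkeeping and the error unwind label. A minimal sketch of the expansion:

	#define DECLARE_BITMAP(name, bits) \
		unsigned long name[BITS_TO_LONGS(bits)]

	/* so the new member is equivalent to: */
	unsigned long bitmap[BITS_TO_LONGS(INT_PCI_MSI_NR)];
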
@ -335,12 +335,10 @@ static void nwl_pcie_leg_handler(struct irq_desc *desc)
static void nwl_pcie_handle_msi_irq(struct nwl_pcie *pcie, u32 status_reg)
{
struct nwl_msi *msi;
struct nwl_msi *msi = &pcie->msi;
unsigned long status;
u32 bit;
msi = &pcie->msi;
while ((status = nwl_bridge_readl(pcie, status_reg)) != 0) {
for_each_set_bit(bit, &status, 32) {
nwl_bridge_writel(pcie, 1 << bit, status_reg);
@ -560,30 +558,21 @@ static int nwl_pcie_enable_msi(struct nwl_pcie *pcie)
struct nwl_msi *msi = &pcie->msi;
unsigned long base;
int ret;
int size = BITS_TO_LONGS(INT_PCI_MSI_NR) * sizeof(long);
mutex_init(&msi->lock);
msi->bitmap = kzalloc(size, GFP_KERNEL);
if (!msi->bitmap)
return -ENOMEM;
/* Get msi_1 IRQ number */
msi->irq_msi1 = platform_get_irq_byname(pdev, "msi1");
if (msi->irq_msi1 < 0) {
ret = -EINVAL;
goto err;
}
if (msi->irq_msi1 < 0)
return -EINVAL;
irq_set_chained_handler_and_data(msi->irq_msi1,
nwl_pcie_msi_handler_high, pcie);
/* Get msi_0 IRQ number */
msi->irq_msi0 = platform_get_irq_byname(pdev, "msi0");
if (msi->irq_msi0 < 0) {
ret = -EINVAL;
goto err;
}
if (msi->irq_msi0 < 0)
return -EINVAL;
irq_set_chained_handler_and_data(msi->irq_msi0,
nwl_pcie_msi_handler_low, pcie);
@ -592,8 +581,7 @@ static int nwl_pcie_enable_msi(struct nwl_pcie *pcie)
ret = nwl_bridge_readl(pcie, I_MSII_CAPABILITIES) & MSII_PRESENT;
if (!ret) {
dev_err(dev, "MSI not present\n");
ret = -EIO;
goto err;
return -EIO;
}
/* Enable MSII */
@ -632,10 +620,6 @@ static int nwl_pcie_enable_msi(struct nwl_pcie *pcie)
nwl_bridge_writel(pcie, MSGF_MSI_SR_LO_MASK, MSGF_MSI_MASK_LO);
return 0;
err:
kfree(msi->bitmap);
msi->bitmap = NULL;
return ret;
}
static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)


@ -91,18 +91,18 @@
#define XILINX_NUM_MSI_IRQS 128
/**
* struct xilinx_pcie_port - PCIe port information
* @reg_base: IO Mapped Register Base
* struct xilinx_pcie - PCIe port information
* @dev: Device pointer
* @reg_base: IO Mapped Register Base
* @msi_map: Bitmap of allocated MSIs
* @map_lock: Mutex protecting the MSI allocation
* @msi_domain: MSI IRQ domain pointer
* @leg_domain: Legacy IRQ domain pointer
* @resources: Bus Resources
*/
struct xilinx_pcie_port {
void __iomem *reg_base;
struct xilinx_pcie {
struct device *dev;
void __iomem *reg_base;
unsigned long msi_map[BITS_TO_LONGS(XILINX_NUM_MSI_IRQS)];
struct mutex map_lock;
struct irq_domain *msi_domain;
@ -110,35 +110,35 @@ struct xilinx_pcie_port {
struct list_head resources;
};
static inline u32 pcie_read(struct xilinx_pcie_port *port, u32 reg)
static inline u32 pcie_read(struct xilinx_pcie *pcie, u32 reg)
{
return readl(port->reg_base + reg);
return readl(pcie->reg_base + reg);
}
static inline void pcie_write(struct xilinx_pcie_port *port, u32 val, u32 reg)
static inline void pcie_write(struct xilinx_pcie *pcie, u32 val, u32 reg)
{
writel(val, port->reg_base + reg);
writel(val, pcie->reg_base + reg);
}
static inline bool xilinx_pcie_link_up(struct xilinx_pcie_port *port)
static inline bool xilinx_pcie_link_up(struct xilinx_pcie *pcie)
{
return (pcie_read(port, XILINX_PCIE_REG_PSCR) &
return (pcie_read(pcie, XILINX_PCIE_REG_PSCR) &
XILINX_PCIE_REG_PSCR_LNKUP) ? 1 : 0;
}
/**
* xilinx_pcie_clear_err_interrupts - Clear Error Interrupts
* @port: PCIe port information
* @pcie: PCIe port information
*/
static void xilinx_pcie_clear_err_interrupts(struct xilinx_pcie_port *port)
static void xilinx_pcie_clear_err_interrupts(struct xilinx_pcie *pcie)
{
struct device *dev = port->dev;
unsigned long val = pcie_read(port, XILINX_PCIE_REG_RPEFR);
struct device *dev = pcie->dev;
unsigned long val = pcie_read(pcie, XILINX_PCIE_REG_RPEFR);
if (val & XILINX_PCIE_RPEFR_ERR_VALID) {
dev_dbg(dev, "Requester ID %lu\n",
val & XILINX_PCIE_RPEFR_REQ_ID);
pcie_write(port, XILINX_PCIE_RPEFR_ALL_MASK,
pcie_write(pcie, XILINX_PCIE_RPEFR_ALL_MASK,
XILINX_PCIE_REG_RPEFR);
}
}
@ -152,11 +152,11 @@ static void xilinx_pcie_clear_err_interrupts(struct xilinx_pcie_port *port)
*/
static bool xilinx_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
{
struct xilinx_pcie_port *port = bus->sysdata;
struct xilinx_pcie *pcie = bus->sysdata;
/* Check if link is up when trying to access downstream ports */
/* Check if link is up when trying to access downstream pcie ports */
if (!pci_is_root_bus(bus)) {
if (!xilinx_pcie_link_up(port))
if (!xilinx_pcie_link_up(pcie))
return false;
} else if (devfn > 0) {
/* Only one device down on each root port */
@ -177,12 +177,12 @@ static bool xilinx_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
static void __iomem *xilinx_pcie_map_bus(struct pci_bus *bus,
unsigned int devfn, int where)
{
struct xilinx_pcie_port *port = bus->sysdata;
struct xilinx_pcie *pcie = bus->sysdata;
if (!xilinx_pcie_valid_device(bus, devfn))
return NULL;
return port->reg_base + PCIE_ECAM_OFFSET(bus->number, devfn, where);
return pcie->reg_base + PCIE_ECAM_OFFSET(bus->number, devfn, where);
}
/* PCIe operations */
@ -215,7 +215,7 @@ static int xilinx_msi_set_affinity(struct irq_data *d, const struct cpumask *mas
static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct xilinx_pcie_port *pcie = irq_data_get_irq_chip_data(data);
struct xilinx_pcie *pcie = irq_data_get_irq_chip_data(data);
phys_addr_t pa = ALIGN_DOWN(virt_to_phys(pcie), SZ_4K);
msg->address_lo = lower_32_bits(pa);
@ -232,14 +232,14 @@ static struct irq_chip xilinx_msi_bottom_chip = {
static int xilinx_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct xilinx_pcie_port *port = domain->host_data;
struct xilinx_pcie *pcie = domain->host_data;
int hwirq, i;
mutex_lock(&port->map_lock);
mutex_lock(&pcie->map_lock);
hwirq = bitmap_find_free_region(port->msi_map, XILINX_NUM_MSI_IRQS, order_base_2(nr_irqs));
hwirq = bitmap_find_free_region(pcie->msi_map, XILINX_NUM_MSI_IRQS, order_base_2(nr_irqs));
mutex_unlock(&port->map_lock);
mutex_unlock(&pcie->map_lock);
if (hwirq < 0)
return -ENOSPC;
@ -256,13 +256,13 @@ static void xilinx_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct xilinx_pcie_port *port = domain->host_data;
struct xilinx_pcie *pcie = domain->host_data;
mutex_lock(&port->map_lock);
mutex_lock(&pcie->map_lock);
bitmap_release_region(port->msi_map, d->hwirq, order_base_2(nr_irqs));
bitmap_release_region(pcie->msi_map, d->hwirq, order_base_2(nr_irqs));
mutex_unlock(&port->map_lock);
mutex_unlock(&pcie->map_lock);
}
static const struct irq_domain_ops xilinx_msi_domain_ops = {
@ -275,7 +275,7 @@ static struct msi_domain_info xilinx_msi_info = {
.chip = &xilinx_msi_top_chip,
};
static int xilinx_allocate_msi_domains(struct xilinx_pcie_port *pcie)
static int xilinx_allocate_msi_domains(struct xilinx_pcie *pcie)
{
struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
struct irq_domain *parent;
@ -298,7 +298,7 @@ static int xilinx_allocate_msi_domains(struct xilinx_pcie_port *pcie)
return 0;
}
static void xilinx_free_msi_domains(struct xilinx_pcie_port *pcie)
static void xilinx_free_msi_domains(struct xilinx_pcie *pcie)
{
struct irq_domain *parent = pcie->msi_domain->parent;
@ -342,13 +342,13 @@ static const struct irq_domain_ops intx_domain_ops = {
*/
static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
{
struct xilinx_pcie_port *port = (struct xilinx_pcie_port *)data;
struct device *dev = port->dev;
struct xilinx_pcie *pcie = (struct xilinx_pcie *)data;
struct device *dev = pcie->dev;
u32 val, mask, status;
/* Read interrupt decode and mask registers */
val = pcie_read(port, XILINX_PCIE_REG_IDR);
mask = pcie_read(port, XILINX_PCIE_REG_IMR);
val = pcie_read(pcie, XILINX_PCIE_REG_IDR);
mask = pcie_read(pcie, XILINX_PCIE_REG_IMR);
status = val & mask;
if (!status)
@ -371,23 +371,23 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
if (status & XILINX_PCIE_INTR_CORRECTABLE) {
dev_warn(dev, "Correctable error message\n");
xilinx_pcie_clear_err_interrupts(port);
xilinx_pcie_clear_err_interrupts(pcie);
}
if (status & XILINX_PCIE_INTR_NONFATAL) {
dev_warn(dev, "Non fatal error message\n");
xilinx_pcie_clear_err_interrupts(port);
xilinx_pcie_clear_err_interrupts(pcie);
}
if (status & XILINX_PCIE_INTR_FATAL) {
dev_warn(dev, "Fatal error message\n");
xilinx_pcie_clear_err_interrupts(port);
xilinx_pcie_clear_err_interrupts(pcie);
}
if (status & (XILINX_PCIE_INTR_INTX | XILINX_PCIE_INTR_MSI)) {
struct irq_domain *domain;
val = pcie_read(port, XILINX_PCIE_REG_RPIFR1);
val = pcie_read(pcie, XILINX_PCIE_REG_RPIFR1);
/* Check whether interrupt valid */
if (!(val & XILINX_PCIE_RPIFR1_INTR_VALID)) {
@ -397,17 +397,17 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
/* Decode the IRQ number */
if (val & XILINX_PCIE_RPIFR1_MSI_INTR) {
val = pcie_read(port, XILINX_PCIE_REG_RPIFR2) &
val = pcie_read(pcie, XILINX_PCIE_REG_RPIFR2) &
XILINX_PCIE_RPIFR2_MSG_DATA;
domain = port->msi_domain->parent;
domain = pcie->msi_domain->parent;
} else {
val = (val & XILINX_PCIE_RPIFR1_INTR_MASK) >>
XILINX_PCIE_RPIFR1_INTR_SHIFT;
domain = port->leg_domain;
domain = pcie->leg_domain;
}
/* Clear interrupt FIFO register 1 */
pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK,
pcie_write(pcie, XILINX_PCIE_RPIFR1_ALL_MASK,
XILINX_PCIE_REG_RPIFR1);
generic_handle_domain_irq(domain, val);
@ -442,20 +442,20 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
error:
/* Clear the Interrupt Decode register */
pcie_write(port, status, XILINX_PCIE_REG_IDR);
pcie_write(pcie, status, XILINX_PCIE_REG_IDR);
return IRQ_HANDLED;
}
/**
* xilinx_pcie_init_irq_domain - Initialize IRQ domain
* @port: PCIe port information
* @pcie: PCIe port information
*
* Return: '0' on success and error value on failure
*/
static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
static int xilinx_pcie_init_irq_domain(struct xilinx_pcie *pcie)
{
struct device *dev = port->dev;
struct device *dev = pcie->dev;
struct device_node *pcie_intc_node;
int ret;
@ -466,25 +466,25 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
return -ENODEV;
}
port->leg_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
pcie->leg_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
&intx_domain_ops,
port);
pcie);
of_node_put(pcie_intc_node);
if (!port->leg_domain) {
if (!pcie->leg_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");
return -ENODEV;
}
/* Setup MSI */
if (IS_ENABLED(CONFIG_PCI_MSI)) {
phys_addr_t pa = ALIGN_DOWN(virt_to_phys(port), SZ_4K);
phys_addr_t pa = ALIGN_DOWN(virt_to_phys(pcie), SZ_4K);
ret = xilinx_allocate_msi_domains(port);
ret = xilinx_allocate_msi_domains(pcie);
if (ret)
return ret;
pcie_write(port, upper_32_bits(pa), XILINX_PCIE_REG_MSIBASE1);
pcie_write(port, lower_32_bits(pa), XILINX_PCIE_REG_MSIBASE2);
pcie_write(pcie, upper_32_bits(pa), XILINX_PCIE_REG_MSIBASE1);
pcie_write(pcie, lower_32_bits(pa), XILINX_PCIE_REG_MSIBASE2);
}
return 0;
@ -492,44 +492,44 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
/**
* xilinx_pcie_init_port - Initialize hardware
* @port: PCIe port information
* @pcie: PCIe port information
*/
static void xilinx_pcie_init_port(struct xilinx_pcie_port *port)
static void xilinx_pcie_init_port(struct xilinx_pcie *pcie)
{
struct device *dev = port->dev;
struct device *dev = pcie->dev;
if (xilinx_pcie_link_up(port))
if (xilinx_pcie_link_up(pcie))
dev_info(dev, "PCIe Link is UP\n");
else
dev_info(dev, "PCIe Link is DOWN\n");
/* Disable all interrupts */
pcie_write(port, ~XILINX_PCIE_IDR_ALL_MASK,
pcie_write(pcie, ~XILINX_PCIE_IDR_ALL_MASK,
XILINX_PCIE_REG_IMR);
/* Clear pending interrupts */
pcie_write(port, pcie_read(port, XILINX_PCIE_REG_IDR) &
pcie_write(pcie, pcie_read(pcie, XILINX_PCIE_REG_IDR) &
XILINX_PCIE_IMR_ALL_MASK,
XILINX_PCIE_REG_IDR);
/* Enable all interrupts we handle */
pcie_write(port, XILINX_PCIE_IMR_ENABLE_MASK, XILINX_PCIE_REG_IMR);
pcie_write(pcie, XILINX_PCIE_IMR_ENABLE_MASK, XILINX_PCIE_REG_IMR);
/* Enable the Bridge enable bit */
pcie_write(port, pcie_read(port, XILINX_PCIE_REG_RPSC) |
pcie_write(pcie, pcie_read(pcie, XILINX_PCIE_REG_RPSC) |
XILINX_PCIE_REG_RPSC_BEN,
XILINX_PCIE_REG_RPSC);
}
/**
* xilinx_pcie_parse_dt - Parse Device tree
* @port: PCIe port information
* @pcie: PCIe port information
*
* Return: '0' on success and error value on failure
*/
static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port)
static int xilinx_pcie_parse_dt(struct xilinx_pcie *pcie)
{
struct device *dev = port->dev;
struct device *dev = pcie->dev;
struct device_node *node = dev->of_node;
struct resource regs;
unsigned int irq;
@ -541,14 +541,14 @@ static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port)
return err;
}
port->reg_base = devm_pci_remap_cfg_resource(dev, &regs);
if (IS_ERR(port->reg_base))
return PTR_ERR(port->reg_base);
pcie->reg_base = devm_pci_remap_cfg_resource(dev, &regs);
if (IS_ERR(pcie->reg_base))
return PTR_ERR(pcie->reg_base);
irq = irq_of_parse_and_map(node, 0);
err = devm_request_irq(dev, irq, xilinx_pcie_intr_handler,
IRQF_SHARED | IRQF_NO_THREAD,
"xilinx-pcie", port);
"xilinx-pcie", pcie);
if (err) {
dev_err(dev, "unable to request irq %d\n", irq);
return err;
@ -566,41 +566,41 @@ static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port)
static int xilinx_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct xilinx_pcie_port *port;
struct xilinx_pcie *pcie;
struct pci_host_bridge *bridge;
int err;
if (!dev->of_node)
return -ENODEV;
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port));
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
if (!bridge)
return -ENODEV;
port = pci_host_bridge_priv(bridge);
mutex_init(&port->map_lock);
port->dev = dev;
pcie = pci_host_bridge_priv(bridge);
mutex_init(&pcie->map_lock);
pcie->dev = dev;
err = xilinx_pcie_parse_dt(port);
err = xilinx_pcie_parse_dt(pcie);
if (err) {
dev_err(dev, "Parsing DT failed\n");
return err;
}
xilinx_pcie_init_port(port);
xilinx_pcie_init_port(pcie);
err = xilinx_pcie_init_irq_domain(port);
err = xilinx_pcie_init_irq_domain(pcie);
if (err) {
dev_err(dev, "Failed creating IRQ Domain\n");
return err;
}
bridge->sysdata = port;
bridge->sysdata = pcie;
bridge->ops = &xilinx_pcie_ops;
err = pci_host_probe(bridge);
if (err)
xilinx_free_msi_domains(port);
xilinx_free_msi_domains(pcie);
return err;
}


@ -501,6 +501,40 @@ static inline void vmd_acpi_begin(void) { }
static inline void vmd_acpi_end(void) { }
#endif /* CONFIG_ACPI */
static void vmd_domain_reset(struct vmd_dev *vmd)
{
u16 bus, max_buses = resource_size(&vmd->resources[0]);
u8 dev, functions, fn, hdr_type;
char __iomem *base;
for (bus = 0; bus < max_buses; bus++) {
for (dev = 0; dev < 32; dev++) {
base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
PCI_DEVFN(dev, 0), 0);
hdr_type = readb(base + PCI_HEADER_TYPE) &
PCI_HEADER_TYPE_MASK;
functions = (hdr_type & 0x80) ? 8 : 1;
for (fn = 0; fn < functions; fn++) {
base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
PCI_DEVFN(dev, fn), 0);
hdr_type = readb(base + PCI_HEADER_TYPE) &
PCI_HEADER_TYPE_MASK;
if (hdr_type != PCI_HEADER_TYPE_BRIDGE ||
(readw(base + PCI_CLASS_DEVICE) !=
PCI_CLASS_BRIDGE_PCI))
continue;
memset_io(base + PCI_IO_BASE, 0,
PCI_ROM_ADDRESS1 - PCI_IO_BASE);
}
}
}
}
static void vmd_attach_resources(struct vmd_dev *vmd)
{
vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[1];
@ -541,7 +575,7 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
int ret;
ret = pci_read_config_dword(dev, PCI_REG_VMLOCK, &vmlock);
if (ret || vmlock == ~0)
if (ret || PCI_POSSIBLE_ERROR(vmlock))
return -ENODEV;
if (MB2_SHADOW_EN(vmlock)) {
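
PCI_POSSIBLE_ERROR() is the new check these hunks switch to. Assuming the <linux/pci.h> definitions introduced alongside it, the helper is roughly:

	#define PCI_ERROR_RESPONSE	(~0ULL)
	#define PCI_POSSIBLE_ERROR(val)	((val) == ((typeof(val)) PCI_ERROR_RESPONSE))

The typeof() cast is what lets one macro replace the open-coded (u16) ~0, (u32) ~0 and 0xffffffff comparisons in the hunks that follow.
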
@ -661,6 +695,21 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
return 0;
}
/*
* Since VMD is an aperture to regular PCIe root ports, only allow it to
* control features that the OS is allowed to control on the physical PCI bus.
*/
static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
struct pci_host_bridge *vmd_bridge)
{
vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
vmd_bridge->native_aer = root_bridge->native_aer;
vmd_bridge->native_pme = root_bridge->native_pme;
vmd_bridge->native_ltr = root_bridge->native_ltr;
vmd_bridge->native_dpc = root_bridge->native_dpc;
}
static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
{
struct pci_sysdata *sd = &vmd->sysdata;
@ -798,6 +847,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
return -ENODEV;
}
vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
to_pci_host_bridge(vmd->bus->bridge));
vmd_attach_resources(vmd);
if (vmd->irq_domain)
dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
@ -805,6 +857,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
vmd_acpi_begin();
pci_scan_child_bus(vmd->bus);
vmd_domain_reset(vmd);
list_for_each_entry(child, &vmd->bus->children, node)
pci_reset_bus(child->self);
pci_assign_unassigned_bus_resources(vmd->bus);
/*
@ -953,6 +1008,10 @@ static const struct pci_device_id vmd_ids[] = {
.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
VMD_FEAT_HAS_BUS_RESTRICTIONS |
VMD_FEAT_OFFSET_FIRST_VECTOR,},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa77f),
.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
VMD_FEAT_HAS_BUS_RESTRICTIONS |
VMD_FEAT_OFFSET_FIRST_VECTOR,},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
VMD_FEAT_HAS_BUS_RESTRICTIONS |


@ -1262,7 +1262,7 @@ static void epf_ntb_db_mw_bar_cleanup(struct epf_ntb *ntb,
}
/**
* epf_ntb_configure_interrupt() - Configure MSI/MSI-X capaiblity
* epf_ntb_configure_interrupt() - Configure MSI/MSI-X capability
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
* @type: PRIMARY interface or SECONDARY interface
*


@ -334,7 +334,7 @@ int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u8 interrupts)
u8 encode_int;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
interrupts > 32)
interrupts < 1 || interrupts > 32)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))


@ -30,11 +30,6 @@ ibmphp:
or ibmphp should store a pointer to its bus in struct slot. Probably the
former.
* The functions get_max_adapter_speed() and get_bus_name() are commented out.
Can they be deleted? There are also forward declarations at the top of
ibmphp_core.c as well as pointers in ibmphp_hotplug_slot_ops, likewise
commented out.
* ibmphp_init_devno() takes a struct slot **, it could instead take a
struct slot *.


@ -2273,7 +2273,7 @@ static u32 configure_new_device(struct controller *ctrl, struct pci_func *func
while ((function < max_functions) && (!stop_it)) {
pci_bus_read_config_dword(ctrl->pci_bus, PCI_DEVFN(func->device, function), 0x00, &ID);
if (ID == 0xFFFFFFFF) {
if (PCI_POSSIBLE_ERROR(ID)) {
function++;
} else {
/* Setup slot structure. */
@ -2517,7 +2517,7 @@ static int configure_new_function(struct controller *ctrl, struct pci_func *func
pci_bus_read_config_dword(pci_bus, PCI_DEVFN(device, 0), 0x00, &ID);
pci_bus->number = func->bus;
if (ID != 0xFFFFFFFF) { /* device present */
if (!PCI_POSSIBLE_ERROR(ID)) { /* device present */
/* Setup slot structure. */
new_slot = cpqhp_slot_create(hold_bus_node->base);


@ -50,14 +50,6 @@ static int irqs[16]; /* PIC mode IRQs we're using so far (in case MPS
static int init_flag;
/*
static int get_max_adapter_speed_1 (struct hotplug_slot *, u8 *, u8);
static inline int get_max_adapter_speed (struct hotplug_slot *hs, u8 *value)
{
return get_max_adapter_speed_1 (hs, value, 1);
}
*/
static inline int get_cur_bus_info(struct slot **sl)
{
int rc = 1;
@ -401,69 +393,6 @@ static int get_max_bus_speed(struct slot *slot)
return rc;
}
/*
static int get_max_adapter_speed_1(struct hotplug_slot *hotplug_slot, u8 *value, u8 flag)
{
int rc = -ENODEV;
struct slot *pslot;
struct slot myslot;
debug("get_max_adapter_speed_1 - Entry hotplug_slot[%lx] pvalue[%lx]\n",
(ulong)hotplug_slot, (ulong) value);
if (flag)
ibmphp_lock_operations();
if (hotplug_slot && value) {
pslot = hotplug_slot->private;
if (pslot) {
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&(myslot.status));
if (!(SLOT_LATCH (myslot.status)) &&
(SLOT_PRESENT (myslot.status))) {
rc = ibmphp_hpc_readslot(pslot,
READ_EXTSLOTSTATUS,
&(myslot.ext_status));
if (!rc)
*value = SLOT_SPEED(myslot.ext_status);
} else
*value = MAX_ADAPTER_NONE;
}
}
if (flag)
ibmphp_unlock_operations();
debug("get_max_adapter_speed_1 - Exit rc[%d] value[%x]\n", rc, *value);
return rc;
}
static int get_bus_name(struct hotplug_slot *hotplug_slot, char *value)
{
int rc = -ENODEV;
struct slot *pslot = NULL;
debug("get_bus_name - Entry hotplug_slot[%lx]\n", (ulong)hotplug_slot);
ibmphp_lock_operations();
if (hotplug_slot) {
pslot = hotplug_slot->private;
if (pslot) {
rc = 0;
snprintf(value, 100, "Bus %x", pslot->bus);
}
} else
rc = -ENODEV;
ibmphp_unlock_operations();
debug("get_bus_name - Exit rc[%d] value[%x]\n", rc, *value);
return rc;
}
*/
/****************************************************************************
* This routine will initialize the ops data structure used in the validate
* function. It will also power off empty slots that are powered on since BIOS
@ -1231,9 +1160,6 @@ const struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
.get_attention_status = get_attention_status,
.get_latch_status = get_latch_status,
.get_adapter_status = get_adapter_present,
/* .get_max_adapter_speed = get_max_adapter_speed,
.get_bus_name_status = get_bus_name,
*/
};
static void ibmphp_unload(void)

View File

@ -75,6 +75,8 @@ extern int pciehp_poll_time;
* @reset_lock: prevents access to the Data Link Layer Link Active bit in the
* Link Status register and to the Presence Detect State bit in the Slot
* Status register during a slot reset which may cause them to flap
* @depth: Number of additional hotplug ports in the path to the root bus,
* used as lock subclass for @reset_lock
* @ist_running: flag to keep user request waiting while IRQ thread is running
* @request_result: result of last user request submitted to the IRQ thread
* @requester: wait queue to wake up on completion of user request,
@ -106,6 +108,7 @@ struct controller {
struct hotplug_slot hotplug_slot; /* hotplug core interface */
struct rw_semaphore reset_lock;
unsigned int depth;
unsigned int ist_running;
int request_result;
wait_queue_head_t requester;


@ -166,7 +166,7 @@ static void pciehp_check_presence(struct controller *ctrl)
{
int occupied;
down_read(&ctrl->reset_lock);
down_read_nested(&ctrl->reset_lock, ctrl->depth);
mutex_lock(&ctrl->state_lock);
occupied = pciehp_card_present_or_link_active(ctrl);


@ -89,7 +89,7 @@ static int pcie_poll_cmd(struct controller *ctrl, int timeout)
do {
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
if (slot_status == (u16) ~0) {
if (PCI_POSSIBLE_ERROR(slot_status)) {
ctrl_info(ctrl, "%s: no response from device\n",
__func__);
return 0;
@ -165,7 +165,7 @@ static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
pcie_wait_cmd(ctrl);
pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
if (slot_ctrl == (u16) ~0) {
if (PCI_POSSIBLE_ERROR(slot_ctrl)) {
ctrl_info(ctrl, "%s: no response from device\n", __func__);
goto out;
}
@ -236,7 +236,7 @@ int pciehp_check_link_active(struct controller *ctrl)
int ret;
ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
if (ret == PCIBIOS_DEVICE_NOT_FOUND || lnk_status == (u16)~0)
if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(lnk_status))
return -ENODEV;
ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
@ -443,7 +443,7 @@ int pciehp_card_present(struct controller *ctrl)
int ret;
ret = pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
if (ret == PCIBIOS_DEVICE_NOT_FOUND || slot_status == (u16)~0)
if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(slot_status))
return -ENODEV;
return !!(slot_status & PCI_EXP_SLTSTA_PDS);
@ -583,7 +583,7 @@ static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
* the corresponding link change may have been ignored above.
* Synthesize it to ensure that it is acted on.
*/
down_read(&ctrl->reset_lock);
down_read_nested(&ctrl->reset_lock, ctrl->depth);
if (!pciehp_check_link_active(ctrl))
pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
up_read(&ctrl->reset_lock);
@ -621,7 +621,7 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
read_status:
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &status);
if (status == (u16) ~0) {
if (PCI_POSSIBLE_ERROR(status)) {
ctrl_info(ctrl, "%s: no response from device\n", __func__);
if (parent)
pm_runtime_put(parent);
@ -642,6 +642,8 @@ read_status:
*/
if (ctrl->power_fault_detected)
status &= ~PCI_EXP_SLTSTA_PFD;
else if (status & PCI_EXP_SLTSTA_PFD)
ctrl->power_fault_detected = true;
events |= status;
if (!events) {
@ -651,7 +653,7 @@ read_status:
}
if (status) {
pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events);
pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, status);
/*
* In MSI mode, all event bits must be zero before the port
@ -725,8 +727,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
}
/* Check Power Fault Detected */
if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) {
ctrl->power_fault_detected = 1;
if (events & PCI_EXP_SLTSTA_PFD) {
ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
PCI_EXP_SLTCTL_ATTN_IND_ON);
@ -746,7 +747,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
* Disable requests have higher priority than Presence Detect Changed
* or Data Link Layer State Changed events.
*/
down_read(&ctrl->reset_lock);
down_read_nested(&ctrl->reset_lock, ctrl->depth);
if (events & DISABLE_SLOT)
pciehp_handle_disable_request(ctrl);
else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
@ -906,7 +907,7 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
if (probe)
return 0;
down_write(&ctrl->reset_lock);
down_write_nested(&ctrl->reset_lock, ctrl->depth);
if (!ATTN_BUTTN(ctrl)) {
ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
@ -962,6 +963,20 @@ static inline void dbg_ctrl(struct controller *ctrl)
#define FLAG(x, y) (((x) & (y)) ? '+' : '-')
static inline int pcie_hotplug_depth(struct pci_dev *dev)
{
struct pci_bus *bus = dev->bus;
int depth = 0;
while (bus->parent) {
bus = bus->parent;
if (bus->self && bus->self->is_hotplug_bridge)
depth++;
}
return depth;
}
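
The depth computed above is passed to down_read_nested()/down_write_nested() as a lockdep subclass. A hypothetical sketch of why that matters (the ctrl names are invented for illustration): when hotplug ports are nested, the reset_lock of a port closer to the root can be held while the child's is taken, and without distinct subclasses lockdep sees two acquisitions of the same lock class and reports a false deadlock:

	/* parent is closer to the root bus, so it has a smaller depth */
	down_read_nested(&parent_ctrl->reset_lock, parent_ctrl->depth);
	down_read_nested(&child_ctrl->reset_lock, child_ctrl->depth);
	/* ... handle the nested hotplug event ... */
	up_read(&child_ctrl->reset_lock);
	up_read(&parent_ctrl->reset_lock);
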
struct controller *pcie_init(struct pcie_device *dev)
{
struct controller *ctrl;
@ -975,6 +990,7 @@ struct controller *pcie_init(struct pcie_device *dev)
return NULL;
ctrl->pcie = dev;
ctrl->depth = pcie_hotplug_depth(dev->port);
pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
if (pdev->hotplug_user_indicators)


@ -247,7 +247,7 @@ void of_pci_check_probe_only(void)
else
pci_clear_flags(PCI_PROBE_ONLY);
pr_info("PROBE_ONLY %sabled\n", val ? "en" : "dis");
pr_info("PROBE_ONLY %s\n", val ? "enabled" : "disabled");
}
EXPORT_SYMBOL_GPL(of_pci_check_probe_only);


@ -710,7 +710,7 @@ void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size)
if (!ret)
goto out;
if (unlikely(!percpu_ref_tryget_live(ref))) {
if (unlikely(!percpu_ref_tryget_live_rcu(ref))) {
gen_pool_free(p2pdma->pool, (unsigned long) ret, size);
ret = NULL;
goto out;
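
percpu_ref_tryget_live_rcu() is the variant of percpu_ref_tryget_live() that omits the internal rcu_read_lock()/unlock() pair; it is only valid when the caller already runs inside an RCU read-side critical section, as this allocation path does. A minimal usage sketch (hypothetical ref, not the driver's actual surrounding code):

	rcu_read_lock();
	if (percpu_ref_tryget_live_rcu(ref)) {
		/* ... use the object the ref protects ... */
		percpu_ref_put(ref);
	}
	rcu_read_unlock();
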


@ -139,8 +139,13 @@ struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = {
.ro = GENMASK(7, 0),
},
/*
* If expansion ROM is unsupported then ROM Base Address register must
* be implemented as read-only register that return 0 when read, same
* as for unused Base Address registers.
*/
[PCI_ROM_ADDRESS1 / 4] = {
.rw = GENMASK(31, 11) | BIT(0),
.ro = ~0,
},
/*
@ -171,41 +176,55 @@ struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] =
[PCI_CAP_LIST_ID / 4] = {
/*
* Capability ID, Next Capability Pointer and
* Capabilities register are all read-only.
* bits [14:0] of Capabilities register are all read-only.
* Bit 15 of Capabilities register is reserved.
*/
.ro = ~0,
.ro = GENMASK(30, 0),
},
[PCI_EXP_DEVCAP / 4] = {
.ro = ~0,
/*
* Bits [31:29] and [17:16] are reserved.
* Bits [27:18] are reserved for non-upstream ports.
* Bits 28 and [14:6] are reserved for non-endpoint devices.
* Other bits are read-only.
*/
.ro = BIT(15) | GENMASK(5, 0),
},
[PCI_EXP_DEVCTL / 4] = {
/* Device control register is RW */
.rw = GENMASK(15, 0),
/*
* Device control register is RW, except bit 15 which is
* reserved for non-endpoints or non-PCIe-to-PCI/X bridges.
*/
.rw = GENMASK(14, 0),
/*
* Device status register has bits 6 and [3:0] W1C, [5:4] RO,
* the rest is reserved
* the rest is reserved. Also bit 6 is reserved for non-upstream
* ports.
*/
.w1c = (BIT(6) | GENMASK(3, 0)) << 16,
.w1c = GENMASK(3, 0) << 16,
.ro = GENMASK(5, 4) << 16,
},
[PCI_EXP_LNKCAP / 4] = {
/* All bits are RO, except bit 23 which is reserved */
.ro = lower_32_bits(~BIT(23)),
/*
* All bits are RO, except bit 23 which is reserved and
* bit 18 which is reserved for non-upstream ports.
*/
.ro = lower_32_bits(~(BIT(23) | PCI_EXP_LNKCAP_CLKPM)),
},
[PCI_EXP_LNKCTL / 4] = {
/*
* Link control has bits [15:14], [11:3] and [1:0] RW, the
* rest is reserved.
* rest is reserved. Bit 8 is reserved for non-upstream ports.
*
* Link status has bits [13:0] RO, and bits [15:14]
* W1C.
*/
.rw = GENMASK(15, 14) | GENMASK(11, 3) | GENMASK(1, 0),
.rw = GENMASK(15, 14) | GENMASK(11, 9) | GENMASK(7, 3) | GENMASK(1, 0),
.ro = GENMASK(13, 0) << 16,
.w1c = GENMASK(15, 14) << 16,
},
@ -251,6 +270,49 @@ struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] =
.ro = GENMASK(15, 0) | PCI_EXP_RTSTA_PENDING,
.w1c = PCI_EXP_RTSTA_PME,
},
[PCI_EXP_DEVCAP2 / 4] = {
/*
* Device capabilities 2 register has reserved bits [30:27].
* Also bits [26:24] are reserved for non-upstream ports.
*/
.ro = BIT(31) | GENMASK(23, 0),
},
[PCI_EXP_DEVCTL2 / 4] = {
/*
* Device control 2 register is RW. Bit 11 is reserved for
* non-upstream ports.
*
* Device status 2 register is reserved.
*/
.rw = GENMASK(15, 12) | GENMASK(10, 0),
},
[PCI_EXP_LNKCAP2 / 4] = {
/* Link capabilities 2 register has reserved bits [30:25] and bit 0. */
.ro = BIT(31) | GENMASK(24, 1),
},
[PCI_EXP_LNKCTL2 / 4] = {
/*
* Link control 2 register is RW.
*
* Link status 2 register has bits 5, 15 W1C;
* bits 10, 11 reserved and others are RO.
*/
.rw = GENMASK(15, 0),
.w1c = (BIT(15) | BIT(5)) << 16,
.ro = (GENMASK(14, 12) | GENMASK(9, 6) | GENMASK(4, 0)) << 16,
},
[PCI_EXP_SLTCAP2 / 4] = {
/* Slot capabilities 2 register is reserved. */
},
[PCI_EXP_SLTCTL2 / 4] = {
/* Both Slot control 2 and Slot status 2 registers are reserved. */
},
};
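
Each entry above is consumed by the emulation core on every config-space write. A simplified sketch of the update rule (condensed from pci_bridge_emul_conf_write(); the real code also masks by the byte enables of the access):

	u32 emul_apply_write(u32 old, u32 new,
			     const struct pci_bridge_reg_behavior *b)
	{
		u32 val;

		val = (old & ~b->rw) | (new & b->rw);	/* only RW bits take the new value */
		val &= ~(new & b->w1c);			/* W1C bits written as 1 are cleared */
		return val;
	}
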
/*
@ -265,7 +327,11 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
{
BUILD_BUG_ON(sizeof(bridge->conf) != PCI_BRIDGE_CONF_END);
bridge->conf.class_revision |= cpu_to_le32(PCI_CLASS_BRIDGE_PCI << 16);
/*
* class_revision: Class is the high 24 bits and revision is the low 8 bits of this member,
* while class for PCI Bridge Normal Decode has the 24-bit value: PCI_CLASS_BRIDGE_PCI << 8
*/
bridge->conf.class_revision |= cpu_to_le32((PCI_CLASS_BRIDGE_PCI << 8) << 8);
bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
bridge->conf.cache_line_size = 0x10;
bridge->conf.status = cpu_to_le16(PCI_STATUS_CAP_LIST);
@ -277,11 +343,9 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
if (bridge->has_pcie) {
bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
bridge->conf.status |= cpu_to_le16(PCI_STATUS_CAP_LIST);
bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
/* Set PCIe v2, root port, slot support */
bridge->pcie_conf.cap =
cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
PCI_EXP_FLAGS_SLOT);
bridge->pcie_conf.cap |= cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4);
bridge->pcie_cap_regs_behavior =
kmemdup(pcie_cap_regs_behavior,
sizeof(pcie_cap_regs_behavior),
@ -290,6 +354,27 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
kfree(bridge->pci_regs_behavior);
return -ENOMEM;
}
/* These bits are applicable only for PCI and reserved on PCIe */
bridge->pci_regs_behavior[PCI_CACHE_LINE_SIZE / 4].ro &=
~GENMASK(15, 8);
bridge->pci_regs_behavior[PCI_COMMAND / 4].ro &=
~((PCI_COMMAND_SPECIAL | PCI_COMMAND_INVALIDATE |
PCI_COMMAND_VGA_PALETTE | PCI_COMMAND_WAIT |
PCI_COMMAND_FAST_BACK) |
(PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
PCI_STATUS_DEVSEL_MASK) << 16);
bridge->pci_regs_behavior[PCI_PRIMARY_BUS / 4].ro &=
~GENMASK(31, 24);
bridge->pci_regs_behavior[PCI_IO_BASE / 4].ro &=
~((PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
PCI_STATUS_DEVSEL_MASK) << 16);
bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].rw &=
~((PCI_BRIDGE_CTL_MASTER_ABORT |
BIT(8) | BIT(9) | BIT(11)) << 16);
bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].ro &=
~((PCI_BRIDGE_CTL_FAST_BACK) << 16);
bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].w1c &=
~(BIT(10) << 16);
}
if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {


@ -1115,7 +1115,7 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
return -EIO;
pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
if (pmcsr == (u16) ~0) {
if (PCI_POSSIBLE_ERROR(pmcsr)) {
pci_err(dev, "can't change power state from %s to %s (config space inaccessible)\n",
pci_power_name(dev->current_state),
pci_power_name(state));
@ -1271,16 +1271,16 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
* After reset, the device should not silently discard config
* requests, but it may still indicate that it needs more time by
* responding to them with CRS completions. The Root Port will
* generally synthesize ~0 data to complete the read (except when
* CRS SV is enabled and the read was for the Vendor ID; in that
* case it synthesizes 0x0001 data).
* generally synthesize ~0 (PCI_ERROR_RESPONSE) data to complete
* the read (except when CRS SV is enabled and the read was for the
* Vendor ID; in that case it synthesizes 0x0001 data).
*
* Wait for the device to return a non-CRS completion. Read the
* Command register instead of Vendor ID so we don't have to
* contend with the CRS SV value.
*/
pci_read_config_dword(dev, PCI_COMMAND, &id);
while (id == ~0) {
while (PCI_POSSIBLE_ERROR(id)) {
if (delay > timeout) {
pci_warn(dev, "not ready %dms after %s; giving up\n",
delay - 1, reset_type);
@ -1556,7 +1556,7 @@ static void pci_save_ltr_state(struct pci_dev *dev)
{
int ltr;
struct pci_cap_saved_state *save_state;
u16 *cap;
u32 *cap;
if (!pci_is_pcie(dev))
return;
@ -1571,25 +1571,25 @@ static void pci_save_ltr_state(struct pci_dev *dev)
return;
}
cap = (u16 *)&save_state->cap.data[0];
pci_read_config_word(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, cap++);
pci_read_config_word(dev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, cap++);
/* Some broken devices only support dword access to LTR */
cap = &save_state->cap.data[0];
pci_read_config_dword(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, cap);
}
static void pci_restore_ltr_state(struct pci_dev *dev)
{
struct pci_cap_saved_state *save_state;
int ltr;
u16 *cap;
u32 *cap;
save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_LTR);
ltr = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_LTR);
if (!save_state || !ltr)
return;
cap = (u16 *)&save_state->cap.data[0];
pci_write_config_word(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, *cap++);
pci_write_config_word(dev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, *cap++);
/* Some broken devices only support dword access to LTR */
cap = &save_state->cap.data[0];
pci_write_config_dword(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, *cap);
}
/**


@ -41,11 +41,6 @@
#define ASPM_STATE_ALL (ASPM_STATE_L0S | ASPM_STATE_L1 | \
ASPM_STATE_L1SS)
struct aspm_latency {
u32 l0s; /* L0s latency (nsec) */
u32 l1; /* L1 latency (nsec) */
};
struct pcie_link_state {
struct pci_dev *pdev; /* Upstream component of the Link */
struct pci_dev *downstream; /* Downstream component, function 0 */
@ -65,15 +60,6 @@ struct pcie_link_state {
u32 clkpm_enabled:1; /* Current Clock PM state */
u32 clkpm_default:1; /* Default Clock PM state by BIOS */
u32 clkpm_disable:1; /* Clock PM disabled */
/* Exit latencies */
struct aspm_latency latency_up; /* Upstream direction exit latency */
struct aspm_latency latency_dw; /* Downstream direction exit latency */
/*
* Endpoint acceptable latencies. A pcie downstream port only
* has one slot under it, so at most there are 8 functions.
*/
struct aspm_latency acceptable[8];
};
static int aspm_disabled, aspm_force;
@ -105,6 +91,20 @@ static const char *policy_str[] = {
#define LINK_RETRAIN_TIMEOUT HZ
/*
* The L1 PM substate capability is only implemented in function 0 in a
* multi function device.
*/
static struct pci_dev *pci_function_0(struct pci_bus *linkbus)
{
struct pci_dev *child;
list_for_each_entry(child, &linkbus->devices, bus_list)
if (PCI_FUNC(child->devfn) == 0)
return child;
return NULL;
}
static int policy_to_aspm_state(struct pcie_link_state *link)
{
switch (aspm_policy) {
@ -378,8 +378,10 @@ static void encode_l12_threshold(u32 threshold_us, u32 *scale, u32 *value)
static void pcie_aspm_check_latency(struct pci_dev *endpoint)
{
u32 latency, l1_switch_latency = 0;
struct aspm_latency *acceptable;
u32 latency, encoding, lnkcap_up, lnkcap_dw;
u32 l1_switch_latency = 0, latency_up_l0s;
u32 latency_up_l1, latency_dw_l0s, latency_dw_l1;
u32 acceptable_l0s, acceptable_l1;
struct pcie_link_state *link;
/* Device not in D0 doesn't need latency check */
@ -388,17 +390,36 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
return;
link = endpoint->bus->self->link_state;
acceptable = &link->acceptable[PCI_FUNC(endpoint->devfn)];
/* Calculate endpoint L0s acceptable latency */
encoding = (endpoint->devcap & PCI_EXP_DEVCAP_L0S) >> 6;
acceptable_l0s = calc_l0s_acceptable(encoding);
/* Calculate endpoint L1 acceptable latency */
encoding = (endpoint->devcap & PCI_EXP_DEVCAP_L1) >> 9;
acceptable_l1 = calc_l1_acceptable(encoding);
while (link) {
struct pci_dev *dev = pci_function_0(link->pdev->subordinate);
/* Read direction exit latencies */
pcie_capability_read_dword(link->pdev, PCI_EXP_LNKCAP,
&lnkcap_up);
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP,
&lnkcap_dw);
latency_up_l0s = calc_l0s_latency(lnkcap_up);
latency_up_l1 = calc_l1_latency(lnkcap_up);
latency_dw_l0s = calc_l0s_latency(lnkcap_dw);
latency_dw_l1 = calc_l1_latency(lnkcap_dw);
/* Check upstream direction L0s latency */
if ((link->aspm_capable & ASPM_STATE_L0S_UP) &&
(link->latency_up.l0s > acceptable->l0s))
(latency_up_l0s > acceptable_l0s))
link->aspm_capable &= ~ASPM_STATE_L0S_UP;
/* Check downstream direction L0s latency */
if ((link->aspm_capable & ASPM_STATE_L0S_DW) &&
(link->latency_dw.l0s > acceptable->l0s))
(latency_dw_l0s > acceptable_l0s))
link->aspm_capable &= ~ASPM_STATE_L0S_DW;
/*
* Check L1 latency.
@ -413,9 +434,9 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
* L1 exit latencies advertised by a device include L1
* substate latencies (and hence do not do any check).
*/
latency = max_t(u32, link->latency_up.l1, link->latency_dw.l1);
latency = max_t(u32, latency_up_l1, latency_dw_l1);
if ((link->aspm_capable & ASPM_STATE_L1) &&
(latency + l1_switch_latency > acceptable->l1))
(latency + l1_switch_latency > acceptable_l1))
link->aspm_capable &= ~ASPM_STATE_L1;
l1_switch_latency += 1000;
@ -423,20 +444,6 @@ static void pcie_aspm_check_latency(struct pci_dev *endpoint)
}
}
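
The helpers used above decode the 3-bit acceptable-latency fields from the Device Capabilities register. Assuming the existing calc_*_acceptable() helpers in aspm.c, they are roughly:

	static u32 calc_l0s_acceptable(u32 encoding)
	{
		if (encoding == 0x7)
			return -1U;		/* "no limit" */
		return 64 << encoding;		/* 64ns, 128ns, ... 4us */
	}

	static u32 calc_l1_acceptable(u32 encoding)
	{
		if (encoding == 0x7)
			return -1U;		/* "no limit" */
		return 1000 << encoding;	/* 1us, 2us, ... 64us */
	}
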
/*
* The L1 PM substate capability is only implemented in function 0 in a
* multi function device.
*/
static struct pci_dev *pci_function_0(struct pci_bus *linkbus)
{
struct pci_dev *child;
list_for_each_entry(child, &linkbus->devices, bus_list)
if (PCI_FUNC(child->devfn) == 0)
return child;
return NULL;
}
static void pci_clear_and_set_dword(struct pci_dev *pdev, int pos,
u32 clear, u32 set)
{
@ -496,6 +503,7 @@ static void aspm_calc_l1ss_info(struct pcie_link_state *link,
encode_l12_threshold(l1_2_threshold, &scale, &value);
ctl1 |= t_common_mode << 8 | scale << 29 | value << 16;
/* Some broken devices only support dword access to L1 SS */
pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, &pctl1);
pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL2, &pctl2);
pci_read_config_dword(child, child->l1ss + PCI_L1SS_CTL1, &cctl1);
@ -593,8 +601,6 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
link->aspm_enabled |= ASPM_STATE_L0S_UP;
if (parent_lnkctl & PCI_EXP_LNKCTL_ASPM_L0S)
link->aspm_enabled |= ASPM_STATE_L0S_DW;
link->latency_up.l0s = calc_l0s_latency(parent_lnkcap);
link->latency_dw.l0s = calc_l0s_latency(child_lnkcap);
/* Setup L1 state */
if (parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPM_L1)
@ -602,8 +608,6 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
if (parent_lnkctl & child_lnkctl & PCI_EXP_LNKCTL_ASPM_L1)
link->aspm_enabled |= ASPM_STATE_L1;
link->latency_up.l1 = calc_l1_latency(parent_lnkcap);
link->latency_dw.l1 = calc_l1_latency(child_lnkcap);
/* Setup L1 substate */
pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CAP,
@ -660,22 +664,10 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
/* Get and check endpoint acceptable latencies */
list_for_each_entry(child, &linkbus->devices, bus_list) {
u32 reg32, encoding;
struct aspm_latency *acceptable =
&link->acceptable[PCI_FUNC(child->devfn)];
if (pci_pcie_type(child) != PCI_EXP_TYPE_ENDPOINT &&
pci_pcie_type(child) != PCI_EXP_TYPE_LEG_END)
continue;
pcie_capability_read_dword(child, PCI_EXP_DEVCAP, &reg32);
/* Calculate endpoint L0s acceptable latency */
encoding = (reg32 & PCI_EXP_DEVCAP_L0S) >> 6;
acceptable->l0s = calc_l0s_acceptable(encoding);
/* Calculate endpoint L1 acceptable latency */
encoding = (reg32 & PCI_EXP_DEVCAP_L1) >> 9;
acceptable->l1 = calc_l1_acceptable(encoding);
pcie_aspm_check_latency(child);
}
}


@ -79,7 +79,7 @@ static bool dpc_completed(struct pci_dev *pdev)
u16 status;
pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_STATUS, &status);
if ((status != 0xffff) && (status & PCI_EXP_DPC_STATUS_TRIGGER))
if ((!PCI_POSSIBLE_ERROR(status)) && (status & PCI_EXP_DPC_STATUS_TRIGGER))
return false;
if (test_bit(PCI_DPC_RECOVERING, &pdev->priv_flags))
@ -312,7 +312,7 @@ static irqreturn_t dpc_irq(int irq, void *context)
pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
if (!(status & PCI_EXP_DPC_STATUS_INTERRUPT) || status == (u16)(~0))
if (!(status & PCI_EXP_DPC_STATUS_INTERRUPT) || PCI_POSSIBLE_ERROR(status))
return IRQ_NONE;
pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,


@ -224,7 +224,7 @@ static void pcie_pme_work_fn(struct work_struct *work)
break;
pcie_capability_read_dword(port, PCI_EXP_RTSTA, &rtsta);
if (rtsta == (u32) ~0)
if (PCI_POSSIBLE_ERROR(rtsta))
break;
if (rtsta & PCI_EXP_RTSTA_PME) {
@ -274,7 +274,7 @@ static irqreturn_t pcie_pme_irq(int irq, void *context)
spin_lock_irqsave(&data->lock, flags);
pcie_capability_read_dword(port, PCI_EXP_RTSTA, &rtsta);
if (rtsta == (u32) ~0 || !(rtsta & PCI_EXP_RTSTA_PME)) {
if (PCI_POSSIBLE_ERROR(rtsta) || !(rtsta & PCI_EXP_RTSTA_PME)) {
spin_unlock_irqrestore(&data->lock, flags);
return IRQ_NONE;
}


@ -206,14 +206,14 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
* memory BAR or a ROM, bit 0 must be clear; if it's an io BAR, bit
* 1 must be clear.
*/
if (sz == 0xffffffff)
if (PCI_POSSIBLE_ERROR(sz))
sz = 0;
/*
* I don't know how l can have all bits set. Copied from old code.
* Maybe it fixes a bug on some ancient platform.
*/
if (l == 0xffffffff)
if (PCI_POSSIBLE_ERROR(l))
l = 0;
if (type == pci_bar_unknown) {
@ -898,8 +898,6 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
bridge->bus = bus;
/* Temporarily move resources off the list */
list_splice_init(&bridge->windows, &resources);
bus->sysdata = bridge->sysdata;
bus->ops = bridge->ops;
bus->number = bus->busn_res.start = bridge->busnr;
@ -925,6 +923,8 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
if (err)
goto free;
/* Temporarily move resources off the list */
list_splice_init(&bridge->windows, &resources);
err = device_add(&bridge->dev);
if (err) {
put_device(&bridge->dev);
@ -1579,20 +1579,12 @@ void set_pcie_hotplug_bridge(struct pci_dev *pdev)
static void set_pcie_thunderbolt(struct pci_dev *dev)
{
int vsec = 0;
u32 header;
u16 vsec;
while ((vsec = pci_find_next_ext_capability(dev, vsec,
PCI_EXT_CAP_ID_VNDR))) {
pci_read_config_dword(dev, vsec + PCI_VNDR_HEADER, &header);
/* Is the device part of a Thunderbolt controller? */
if (dev->vendor == PCI_VENDOR_ID_INTEL &&
PCI_VNDR_HEADER_ID(header) == PCI_VSEC_ID_INTEL_TBT) {
dev->is_thunderbolt = 1;
return;
}
}
/* Is the device part of a Thunderbolt controller? */
vsec = pci_find_vsec_capability(dev, PCI_VENDOR_ID_INTEL, PCI_VSEC_ID_INTEL_TBT);
if (vsec)
dev->is_thunderbolt = 1;
}
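
pci_find_vsec_capability() is the generic helper this hunk switches to; it walks the Vendor-Specific Extended Capabilities, matches both the vendor ID and the VSEC ID, and returns the capability offset or 0. Its prototype in <linux/pci.h> is:

	u16 pci_find_vsec_capability(struct pci_dev *dev, u16 vendor, int cap);
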
static void set_pcie_untrusted(struct pci_dev *dev)
@ -1683,7 +1675,7 @@ static int pci_cfg_space_size_ext(struct pci_dev *dev)
if (pci_read_config_dword(dev, pos, &status) != PCIBIOS_SUCCESSFUL)
return PCI_CFG_SPACE_SIZE;
if (status == 0xffffffff || pci_ext_cfg_is_aliased(dev))
if (PCI_POSSIBLE_ERROR(status) || pci_ext_cfg_is_aliased(dev))
return PCI_CFG_SPACE_SIZE;
return PCI_CFG_SPACE_EXP_SIZE;
@ -2373,8 +2365,8 @@ bool pci_bus_generic_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l))
return false;
/* Some broken boards return 0 or ~0 if a slot is empty: */
if (*l == 0xffffffff || *l == 0x00000000 ||
/* Some broken boards return 0 or ~0 (PCI_ERROR_RESPONSE) if a slot is empty: */
if (PCI_POSSIBLE_ERROR(*l) || *l == 0x00000000 ||
*l == 0x0000ffff || *l == 0xffff0000)
return false;


@ -980,8 +980,8 @@ static void quirk_via_ioapic(struct pci_dev *dev)
else
tmp = 0x1f; /* all known bits (4-0) routed to external APIC */
pci_info(dev, "%sbling VIA external APIC routing\n",
tmp == 0 ? "Disa" : "Ena");
pci_info(dev, "%s VIA external APIC routing\n",
tmp ? "Enabling" : "Disabling");
/* Offset 0x58: External APIC IRQ output control */
pci_write_config_byte(dev, 0x58, tmp);
@ -4103,6 +4103,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9120,
quirk_dma_func1_alias);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9123,
quirk_dma_func1_alias);
/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c136 */
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9125,
quirk_dma_func1_alias);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
quirk_dma_func1_alias);
/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
@@ -5683,6 +5686,15 @@ SWITCHTEC_QUIRK(0x4268);  /* PAX 68XG4 */
 SWITCHTEC_QUIRK(0x4252);  /* PAX 52XG4 */
 SWITCHTEC_QUIRK(0x4236);  /* PAX 36XG4 */
 SWITCHTEC_QUIRK(0x4228);  /* PAX 28XG4 */
+SWITCHTEC_QUIRK(0x4352);  /* PFXA 52XG4 */
+SWITCHTEC_QUIRK(0x4336);  /* PFXA 36XG4 */
+SWITCHTEC_QUIRK(0x4328);  /* PFXA 28XG4 */
+SWITCHTEC_QUIRK(0x4452);  /* PSXA 52XG4 */
+SWITCHTEC_QUIRK(0x4436);  /* PSXA 36XG4 */
+SWITCHTEC_QUIRK(0x4428);  /* PSXA 28XG4 */
+SWITCHTEC_QUIRK(0x4552);  /* PAXA 52XG4 */
+SWITCHTEC_QUIRK(0x4536);  /* PAXA 36XG4 */
+SWITCHTEC_QUIRK(0x4528);  /* PAXA 28XG4 */

 /*
  * The PLX NTB uses devfn proxy IDs to move TLPs between NT endpoints.
@@ -5857,3 +5869,13 @@ static void nvidia_ion_ahci_fixup(struct pci_dev *pdev)
 	pdev->dev_flags |= PCI_DEV_FLAGS_HAS_MSI_MASKING;
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0ab8, nvidia_ion_ahci_fixup);
+
+static void rom_bar_overlap_defect(struct pci_dev *dev)
+{
+	pci_info(dev, "working around ROM BAR overlap defect\n");
+	dev->rom_bar_overlap = 1;
+}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1533, rom_bar_overlap_defect);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1536, rom_bar_overlap_defect);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1537, rom_bar_overlap_defect);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1538, rom_bar_overlap_defect);

View File

@@ -75,12 +75,16 @@ static void pci_std_update_resource(struct pci_dev *dev, int resno)
 		 * as zero when disabled, so don't update ROM BARs unless
 		 * they're enabled.  See
 		 * https://lore.kernel.org/r/43147B3D.1030309@vc.cvut.cz/
+		 * But we must update ROM BAR for buggy devices where even a
+		 * disabled ROM can conflict with other BARs.
 		 */
-		if (!(res->flags & IORESOURCE_ROM_ENABLE))
+		if (!(res->flags & IORESOURCE_ROM_ENABLE) &&
+		    !dev->rom_bar_overlap)
 			return;

 		reg = dev->rom_base_reg;
-		new |= PCI_ROM_ADDRESS_ENABLE;
+		if (res->flags & IORESOURCE_ROM_ENABLE)
+			new |= PCI_ROM_ADDRESS_ENABLE;
 	} else
 		return;
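
Net effect for an affected I210 (rom_bar_overlap set) whose ROM resource is
disabled: the function now falls through to the config write with
PCI_ROM_ADDRESS_ENABLE clear, so the dormant ROM BAR is still moved out of
the way. In terms of this function's existing variables:

	/* disabled ROM on a rom_bar_overlap device: address written, enable clear */
	new = region.start | (res->flags & PCI_REGION_FLAG_MASK);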

View File

@@ -96,11 +96,12 @@ static struct attribute *pci_slot_default_attrs[] = {
 	&pci_slot_attr_cur_speed.attr,
 	NULL,
 };
+ATTRIBUTE_GROUPS(pci_slot_default);

 static struct kobj_type pci_slot_ktype = {
 	.sysfs_ops = &pci_slot_sysfs_ops,
 	.release = &pci_slot_release,
-	.default_attrs = pci_slot_default_attrs,
+	.default_groups = pci_slot_default_groups,
 };

 static char *make_slot_name(const char *name)
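
ATTRIBUTE_GROUPS(pci_slot_default) generates the pci_slot_default_groups
array that .default_groups points at; roughly (simplified from the macro in
<linux/sysfs.h>):

	static const struct attribute_group pci_slot_default_group = {
		.attrs = pci_slot_default_attrs,
	};
	static const struct attribute_group *pci_slot_default_groups[] = {
		&pci_slot_default_group,
		NULL,
	};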

View File

@@ -122,7 +122,7 @@ static void stuser_set_state(struct switchtec_user *stuser,
 {
 	/* requires the mrpc_mutex to already be held when called */

-	const char * const state_names[] = {
+	static const char * const state_names[] = {
 		[MRPC_IDLE] = "IDLE",
 		[MRPC_QUEUED] = "QUEUED",
 		[MRPC_RUNNING] = "RUNNING",
@@ -1779,6 +1779,15 @@ static const struct pci_device_id switchtec_pci_tbl[] = {
 	SWITCHTEC_PCI_DEVICE(0x4252, SWITCHTEC_GEN4),  //PAX 52XG4
 	SWITCHTEC_PCI_DEVICE(0x4236, SWITCHTEC_GEN4),  //PAX 36XG4
 	SWITCHTEC_PCI_DEVICE(0x4228, SWITCHTEC_GEN4),  //PAX 28XG4
+	SWITCHTEC_PCI_DEVICE(0x4352, SWITCHTEC_GEN4),  //PFXA 52XG4
+	SWITCHTEC_PCI_DEVICE(0x4336, SWITCHTEC_GEN4),  //PFXA 36XG4
+	SWITCHTEC_PCI_DEVICE(0x4328, SWITCHTEC_GEN4),  //PFXA 28XG4
+	SWITCHTEC_PCI_DEVICE(0x4452, SWITCHTEC_GEN4),  //PSXA 52XG4
+	SWITCHTEC_PCI_DEVICE(0x4436, SWITCHTEC_GEN4),  //PSXA 36XG4
+	SWITCHTEC_PCI_DEVICE(0x4428, SWITCHTEC_GEN4),  //PSXA 28XG4
+	SWITCHTEC_PCI_DEVICE(0x4552, SWITCHTEC_GEN4),  //PAXA 52XG4
+	SWITCHTEC_PCI_DEVICE(0x4536, SWITCHTEC_GEN4),  //PAXA 36XG4
+	SWITCHTEC_PCI_DEVICE(0x4528, SWITCHTEC_GEN4),  //PAXA 28XG4
 	{0}
 };
 MODULE_DEVICE_TABLE(pci, switchtec_pci_tbl);

View File

@@ -20,6 +20,7 @@
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/of_gpio.h>
+#include <linux/pci.h>
 #include <linux/regmap.h>

 #include <pcmcia/ss.h>
@@ -230,6 +231,7 @@ static int at91_cf_probe(struct platform_device *pdev)
 	struct at91_cf_socket *cf;
 	struct at91_cf_data *board;
 	struct resource *io;
+	struct resource realio;
 	int status;

 	board = devm_kzalloc(&pdev->dev, sizeof(*board), GFP_KERNEL);
@@ -307,7 +309,9 @@ static int at91_cf_probe(struct platform_device *pdev)
 	 * io_offset is set to 0x10000 to avoid the check in static_find_io().
 	 * */
 	cf->socket.io_offset = 0x10000;
-	status = pci_ioremap_io(0x10000, cf->phys_baseaddr + CF_IO_PHYS);
+	realio.start = cf->socket.io_offset;
+	realio.end = realio.start + SZ_64K - 1;
+	status = pci_remap_iospace(&realio, cf->phys_baseaddr + CF_IO_PHYS);
 	if (status)
 		goto fail0a;

View File

@@ -540,39 +540,6 @@ enum hv_interrupt_source {
 	HV_INTERRUPT_SOURCE_IOAPIC,
 };

-union hv_msi_address_register {
-	u32 as_uint32;
-	struct {
-		u32 reserved1:2;
-		u32 destination_mode:1;
-		u32 redirection_hint:1;
-		u32 reserved2:8;
-		u32 destination_id:8;
-		u32 msi_base:12;
-	};
-} __packed;
-
-union hv_msi_data_register {
-	u32 as_uint32;
-	struct {
-		u32 vector:8;
-		u32 delivery_mode:3;
-		u32 reserved1:3;
-		u32 level_assert:1;
-		u32 trigger_mode:1;
-		u32 reserved2:16;
-	};
-} __packed;
-
-/* HvRetargetDeviceInterrupt hypercall */
-union hv_msi_entry {
-	u64 as_uint64;
-	struct {
-		union hv_msi_address_register address;
-		union hv_msi_data_register data;
-	} __packed;
-};
-
 union hv_ioapic_rte {
 	u64 as_uint64;

View File

@@ -154,6 +154,15 @@ enum pci_interrupt_pin {
 /* The number of legacy PCI INTx interrupts */
 #define PCI_NUM_INTX	4

+/*
+ * Reading from a device that doesn't respond typically returns ~0.  A
+ * successful read from a device may also return ~0, so you need additional
+ * information to reliably identify errors.
+ */
+#define PCI_ERROR_RESPONSE		(~0ULL)
+#define PCI_SET_ERROR_RESPONSE(val)	(*(val) = ((typeof(*(val))) PCI_ERROR_RESPONSE))
+#define PCI_POSSIBLE_ERROR(val)		((val) == ((typeof(val)) PCI_ERROR_RESPONSE))
+
 /*
  * pci_power_t values must match the bits in the Capabilities PME_Support
  * and Control/Status PowerState fields in the Power Management capability.
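
PCI_SET_ERROR_RESPONSE() is the producer half of this pair: host bridge
config accessors fabricate ~0 on failure and callers test for it with
PCI_POSSIBLE_ERROR(). A hedged sketch of a producer (the example_* names
are hypothetical):

	static int example_config_read(struct pci_bus *bus, unsigned int devfn,
				       int where, int size, u32 *val)
	{
		void __iomem *addr = example_map_bus(bus, devfn, where); /* hypothetical */

		if (!addr) {
			PCI_SET_ERROR_RESPONSE(val);	/* caller sees ~0 */
			return PCIBIOS_DEVICE_NOT_FOUND;
		}

		*val = readl(addr);	/* size handling elided for brevity */
		return PCIBIOS_SUCCESSFUL;
	}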
@@ -456,6 +465,7 @@ struct pci_dev {
 	unsigned int	link_active_reporting:1;/* Device capable of reporting link active */
 	unsigned int	no_vf_scan:1;		/* Don't scan for VFs after IOV enablement */
 	unsigned int	no_command_memory:1;	/* No PCI_COMMAND_MEMORY */
+	unsigned int	rom_bar_overlap:1;	/* ROM BAR disable broken */
 	pci_dev_flags_t dev_flags;
 	atomic_t	enable_cnt;	/* pci_enable_device has been called */
@@ -1777,7 +1787,10 @@ static inline struct pci_dev *pci_get_class(unsigned int class,
 					    struct pci_dev *from)
 { return NULL; }

-#define pci_dev_present(ids)	(0)
+static inline int pci_dev_present(const struct pci_device_id *ids)
+{ return 0; }
+
 #define no_pci_devices()	(1)
 #define pci_dev_put(dev)	do { } while (0)
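
The old macro stub never evaluated its argument, so a !CONFIG_PCI build saw
any ID table passed to it as unused; the static inline consumes the table,
silencing -Wunused-const-variable without ifdeffery. A hypothetical caller
that previously tripped the warning:

	static const struct pci_device_id example_ids[] = {	/* hypothetical */
		{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1533) },
		{ }
	};

	static bool example_hw_present(void)
	{
		return pci_dev_present(example_ids);	/* real use of example_ids */
	}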

View File

@@ -2618,8 +2618,8 @@
 #define PCI_DEVICE_ID_INTEL_PXHD_0	0x0320
 #define PCI_DEVICE_ID_INTEL_PXHD_1	0x0321
 #define PCI_DEVICE_ID_INTEL_PXH_0	0x0329
-#define PCI_DEVICE_ID_INTEL_PXH_1	0x032A
-#define PCI_DEVICE_ID_INTEL_PXHV	0x032C
+#define PCI_DEVICE_ID_INTEL_PXH_1	0x032a
+#define PCI_DEVICE_ID_INTEL_PXHV	0x032c
 #define PCI_DEVICE_ID_INTEL_80332_0	0x0330
 #define PCI_DEVICE_ID_INTEL_80332_1	0x0332
 #define PCI_DEVICE_ID_INTEL_80333_0	0x0370
@@ -2637,14 +2637,14 @@
 #define PCI_DEVICE_ID_INTEL_MFD_SDIO2	0x0822
 #define PCI_DEVICE_ID_INTEL_MFD_EMMC0	0x0823
 #define PCI_DEVICE_ID_INTEL_MFD_EMMC1	0x0824
-#define PCI_DEVICE_ID_INTEL_MRST_SD2	0x084F
-#define PCI_DEVICE_ID_INTEL_QUARK_X1000_ILB	0x095E
+#define PCI_DEVICE_ID_INTEL_MRST_SD2	0x084f
+#define PCI_DEVICE_ID_INTEL_QUARK_X1000_ILB	0x095e
 #define PCI_DEVICE_ID_INTEL_I960	0x0960
 #define PCI_DEVICE_ID_INTEL_I960RM	0x0962
 #define PCI_DEVICE_ID_INTEL_CENTERTON_ILB	0x0c60
 #define PCI_DEVICE_ID_INTEL_8257X_SOL	0x1062
 #define PCI_DEVICE_ID_INTEL_82573E_SOL	0x1085
-#define PCI_DEVICE_ID_INTEL_82573L_SOL	0x108F
+#define PCI_DEVICE_ID_INTEL_82573L_SOL	0x108f
 #define PCI_DEVICE_ID_INTEL_82815_MC	0x1130
 #define PCI_DEVICE_ID_INTEL_82815_CGC	0x1132
 #define PCI_DEVICE_ID_INTEL_82092AA_0	0x1221
@@ -2738,12 +2738,6 @@
 #define PCI_DEVICE_ID_INTEL_82801EB_11	0x24db
 #define PCI_DEVICE_ID_INTEL_82801EB_12	0x24dc
 #define PCI_DEVICE_ID_INTEL_82801EB_13	0x24dd
-#define PCI_DEVICE_ID_INTEL_ESB_1	0x25a1
-#define PCI_DEVICE_ID_INTEL_ESB_2	0x25a2
-#define PCI_DEVICE_ID_INTEL_ESB_4	0x25a4
-#define PCI_DEVICE_ID_INTEL_ESB_5	0x25a6
-#define PCI_DEVICE_ID_INTEL_ESB_9	0x25ab
-#define PCI_DEVICE_ID_INTEL_ESB_10	0x25ac
 #define PCI_DEVICE_ID_INTEL_82820_HB	0x2500
 #define PCI_DEVICE_ID_INTEL_82820_UP_HB	0x2501
 #define PCI_DEVICE_ID_INTEL_82850_HB	0x2530
@@ -2758,14 +2752,15 @@
 #define PCI_DEVICE_ID_INTEL_82915G_IG	0x2582
 #define PCI_DEVICE_ID_INTEL_82915GM_HB	0x2590
 #define PCI_DEVICE_ID_INTEL_82915GM_IG	0x2592
-#define PCI_DEVICE_ID_INTEL_5000_ERR	0x25F0
-#define PCI_DEVICE_ID_INTEL_5000_FBD0	0x25F5
-#define PCI_DEVICE_ID_INTEL_5000_FBD1	0x25F6
-#define PCI_DEVICE_ID_INTEL_82945G_HB	0x2770
-#define PCI_DEVICE_ID_INTEL_82945G_IG	0x2772
-#define PCI_DEVICE_ID_INTEL_3000_HB	0x2778
-#define PCI_DEVICE_ID_INTEL_82945GM_HB	0x27A0
-#define PCI_DEVICE_ID_INTEL_82945GM_IG	0x27A2
+#define PCI_DEVICE_ID_INTEL_ESB_1	0x25a1
+#define PCI_DEVICE_ID_INTEL_ESB_2	0x25a2
+#define PCI_DEVICE_ID_INTEL_ESB_4	0x25a4
+#define PCI_DEVICE_ID_INTEL_ESB_5	0x25a6
+#define PCI_DEVICE_ID_INTEL_ESB_9	0x25ab
+#define PCI_DEVICE_ID_INTEL_ESB_10	0x25ac
+#define PCI_DEVICE_ID_INTEL_5000_ERR	0x25f0
+#define PCI_DEVICE_ID_INTEL_5000_FBD0	0x25f5
+#define PCI_DEVICE_ID_INTEL_5000_FBD1	0x25f6
 #define PCI_DEVICE_ID_INTEL_ICH6_0	0x2640
 #define PCI_DEVICE_ID_INTEL_ICH6_1	0x2641
 #define PCI_DEVICE_ID_INTEL_ICH6_2	0x2642
@@ -2777,6 +2772,11 @@
 #define PCI_DEVICE_ID_INTEL_ESB2_14	0x2698
 #define PCI_DEVICE_ID_INTEL_ESB2_17	0x269b
 #define PCI_DEVICE_ID_INTEL_ESB2_18	0x269e
+#define PCI_DEVICE_ID_INTEL_82945G_HB	0x2770
+#define PCI_DEVICE_ID_INTEL_82945G_IG	0x2772
+#define PCI_DEVICE_ID_INTEL_3000_HB	0x2778
+#define PCI_DEVICE_ID_INTEL_82945GM_HB	0x27a0
+#define PCI_DEVICE_ID_INTEL_82945GM_IG	0x27a2
 #define PCI_DEVICE_ID_INTEL_ICH7_0	0x27b8
 #define PCI_DEVICE_ID_INTEL_ICH7_1	0x27b9
 #define PCI_DEVICE_ID_INTEL_ICH7_30	0x27b0
@@ -2829,7 +2829,7 @@
 #define PCI_DEVICE_ID_INTEL_LYNNFIELD_QPI_PHY0	0x2c91
 #define PCI_DEVICE_ID_INTEL_LYNNFIELD_MCR	0x2c98
 #define PCI_DEVICE_ID_INTEL_LYNNFIELD_MC_TAD	0x2c99
-#define PCI_DEVICE_ID_INTEL_LYNNFIELD_MC_TEST	0x2c9C
+#define PCI_DEVICE_ID_INTEL_LYNNFIELD_MC_TEST	0x2c9c
 #define PCI_DEVICE_ID_INTEL_LYNNFIELD_MC_CH0_CTRL	0x2ca0
 #define PCI_DEVICE_ID_INTEL_LYNNFIELD_MC_CH0_ADDR	0x2ca1
 #define PCI_DEVICE_ID_INTEL_LYNNFIELD_MC_CH0_RANK	0x2ca2
@@ -2941,16 +2941,16 @@
 #define PCI_DEVICE_ID_INTEL_SBRIDGE_BR		0x3cf5	/* 13.6 */
 #define PCI_DEVICE_ID_INTEL_SBRIDGE_SAD1	0x3cf6	/* 12.7 */
 #define PCI_DEVICE_ID_INTEL_IOAT_SNB	0x402f
+#define PCI_DEVICE_ID_INTEL_5400_ERR	0x4030
+#define PCI_DEVICE_ID_INTEL_5400_FBD0	0x4035
+#define PCI_DEVICE_ID_INTEL_5400_FBD1	0x4036
+#define PCI_DEVICE_ID_INTEL_EP80579_0	0x5031
+#define PCI_DEVICE_ID_INTEL_EP80579_1	0x5032
 #define PCI_DEVICE_ID_INTEL_5100_16	0x65f0
 #define PCI_DEVICE_ID_INTEL_5100_19	0x65f3
 #define PCI_DEVICE_ID_INTEL_5100_21	0x65f5
 #define PCI_DEVICE_ID_INTEL_5100_22	0x65f6
-#define PCI_DEVICE_ID_INTEL_5400_ERR	0x4030
-#define PCI_DEVICE_ID_INTEL_5400_FBD0	0x4035
-#define PCI_DEVICE_ID_INTEL_5400_FBD1	0x4036
 #define PCI_DEVICE_ID_INTEL_IOAT_SCNB	0x65ff
-#define PCI_DEVICE_ID_INTEL_EP80579_0	0x5031
-#define PCI_DEVICE_ID_INTEL_EP80579_1	0x5032
 #define PCI_DEVICE_ID_INTEL_82371SB_0	0x7000
 #define PCI_DEVICE_ID_INTEL_82371SB_1	0x7010
 #define PCI_DEVICE_ID_INTEL_82371SB_2	0x7020

View File

@@ -301,23 +301,23 @@
 #define  PCI_SID_ESR_FIC	0x20	/* First In Chassis Flag */
 #define PCI_SID_CHASSIS_NR	3	/* Chassis Number */

-/* Message Signalled Interrupt registers */
+/* Message Signaled Interrupt registers */

-#define PCI_MSI_FLAGS		2	/* Message Control */
+#define PCI_MSI_FLAGS		0x02	/* Message Control */
 #define  PCI_MSI_FLAGS_ENABLE	0x0001	/* MSI feature enabled */
 #define  PCI_MSI_FLAGS_QMASK	0x000e	/* Maximum queue size available */
 #define  PCI_MSI_FLAGS_QSIZE	0x0070	/* Message queue size configured */
 #define  PCI_MSI_FLAGS_64BIT	0x0080	/* 64-bit addresses allowed */
 #define  PCI_MSI_FLAGS_MASKBIT	0x0100	/* Per-vector masking capable */
 #define PCI_MSI_RFU		3	/* Rest of capability flags */
-#define PCI_MSI_ADDRESS_LO	4	/* Lower 32 bits */
-#define PCI_MSI_ADDRESS_HI	8	/* Upper 32 bits (if PCI_MSI_FLAGS_64BIT set) */
-#define PCI_MSI_DATA_32		8	/* 16 bits of data for 32-bit devices */
-#define PCI_MSI_MASK_32		12	/* Mask bits register for 32-bit devices */
-#define PCI_MSI_PENDING_32	16	/* Pending intrs for 32-bit devices */
-#define PCI_MSI_DATA_64		12	/* 16 bits of data for 64-bit devices */
-#define PCI_MSI_MASK_64		16	/* Mask bits register for 64-bit devices */
-#define PCI_MSI_PENDING_64	20	/* Pending intrs for 64-bit devices */
+#define PCI_MSI_ADDRESS_LO	0x04	/* Lower 32 bits */
+#define PCI_MSI_ADDRESS_HI	0x08	/* Upper 32 bits (if PCI_MSI_FLAGS_64BIT set) */
+#define PCI_MSI_DATA_32		0x08	/* 16 bits of data for 32-bit devices */
+#define PCI_MSI_MASK_32		0x0c	/* Mask bits register for 32-bit devices */
+#define PCI_MSI_PENDING_32	0x10	/* Pending intrs for 32-bit devices */
+#define PCI_MSI_DATA_64		0x0c	/* 16 bits of data for 64-bit devices */
+#define PCI_MSI_MASK_64		0x10	/* Mask bits register for 64-bit devices */
+#define PCI_MSI_PENDING_64	0x14	/* Pending intrs for 64-bit devices */

 /* MSI-X registers (in MSI-X capability) */
-#define PCI_MSIX_FLAGS		2	/* Message Control */
+#define PCI_MSIX_FLAGS		0x02	/* Message Control */
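
These register offsets are relative to the MSI capability base, which is
found at runtime; a quick usage sketch (pdev is a hypothetical struct
pci_dev pointer):

	u16 flags;
	u8 msi = pci_find_capability(pdev, PCI_CAP_ID_MSI);

	if (msi)
		pci_read_config_word(pdev, msi + PCI_MSI_FLAGS, &flags);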
@@ -335,10 +335,10 @@

 /* MSI-X Table entry format (in memory mapped by a BAR) */
 #define PCI_MSIX_ENTRY_SIZE		16
-#define PCI_MSIX_ENTRY_LOWER_ADDR	0  /* Message Address */
-#define PCI_MSIX_ENTRY_UPPER_ADDR	4  /* Message Upper Address */
-#define PCI_MSIX_ENTRY_DATA		8  /* Message Data */
-#define PCI_MSIX_ENTRY_VECTOR_CTRL	12 /* Vector Control */
+#define PCI_MSIX_ENTRY_LOWER_ADDR	0x0  /* Message Address */
+#define PCI_MSIX_ENTRY_UPPER_ADDR	0x4  /* Message Upper Address */
+#define PCI_MSIX_ENTRY_DATA		0x8  /* Message Data */
+#define PCI_MSIX_ENTRY_VECTOR_CTRL	0xc  /* Vector Control */
 #define  PCI_MSIX_ENTRY_CTRL_MASKBIT	0x00000001

 /* CompactPCI Hotswap Register */
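
Each MSI-X table slot is PCI_MSIX_ENTRY_SIZE (16) bytes, so field F of
vector N lives at table_base + N * PCI_MSIX_ENTRY_SIZE + F. A sketch with
hypothetical variables (base, n, msg_addr, msg_data):

	void __iomem *entry = base + n * PCI_MSIX_ENTRY_SIZE;

	writel(lower_32_bits(msg_addr), entry + PCI_MSIX_ENTRY_LOWER_ADDR);
	writel(upper_32_bits(msg_addr), entry + PCI_MSIX_ENTRY_UPPER_ADDR);
	writel(msg_data, entry + PCI_MSIX_ENTRY_DATA);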
@@ -470,7 +470,7 @@

 /* PCI Express capability registers */

-#define PCI_EXP_FLAGS		2	/* Capabilities register */
+#define PCI_EXP_FLAGS		0x02	/* Capabilities register */
 #define  PCI_EXP_FLAGS_VERS	0x000f	/* Capability version */
 #define  PCI_EXP_FLAGS_TYPE	0x00f0	/* Device/Port type */
 #define   PCI_EXP_TYPE_ENDPOINT	0x0	/* Express Endpoint */
@@ -484,7 +484,7 @@
 #define   PCI_EXP_TYPE_RC_EC	0xa	/* Root Complex Event Collector */
 #define  PCI_EXP_FLAGS_SLOT	0x0100	/* Slot implemented */
 #define  PCI_EXP_FLAGS_IRQ	0x3e00	/* Interrupt message number */
-#define PCI_EXP_DEVCAP		4	/* Device capabilities */
+#define PCI_EXP_DEVCAP		0x04	/* Device capabilities */
 #define  PCI_EXP_DEVCAP_PAYLOAD	0x00000007 /* Max_Payload_Size */
 #define  PCI_EXP_DEVCAP_PHANTOM	0x00000018 /* Phantom functions */
 #define  PCI_EXP_DEVCAP_EXT_TAG	0x00000020 /* Extended tags */
@@ -497,7 +497,7 @@
 #define  PCI_EXP_DEVCAP_PWR_VAL	0x03fc0000 /* Slot Power Limit Value */
 #define  PCI_EXP_DEVCAP_PWR_SCL	0x0c000000 /* Slot Power Limit Scale */
 #define  PCI_EXP_DEVCAP_FLR	0x10000000 /* Function Level Reset */
-#define PCI_EXP_DEVCTL		8	/* Device Control */
+#define PCI_EXP_DEVCTL		0x08	/* Device Control */
 #define  PCI_EXP_DEVCTL_CERE	0x0001	/* Correctable Error Reporting En. */
 #define  PCI_EXP_DEVCTL_NFERE	0x0002	/* Non-Fatal Error Reporting Enable */
 #define  PCI_EXP_DEVCTL_FERE	0x0004	/* Fatal Error Reporting Enable */
@@ -522,7 +522,7 @@
 #define  PCI_EXP_DEVCTL_READRQ_2048B 0x4000 /* 2048 Bytes */
 #define  PCI_EXP_DEVCTL_READRQ_4096B 0x5000 /* 4096 Bytes */
 #define  PCI_EXP_DEVCTL_BCR_FLR 0x8000	/* Bridge Configuration Retry / FLR */
-#define PCI_EXP_DEVSTA		10	/* Device Status */
+#define PCI_EXP_DEVSTA		0x0a	/* Device Status */
 #define  PCI_EXP_DEVSTA_CED	0x0001	/* Correctable Error Detected */
 #define  PCI_EXP_DEVSTA_NFED	0x0002	/* Non-Fatal Error Detected */
 #define  PCI_EXP_DEVSTA_FED	0x0004	/* Fatal Error Detected */
@@ -530,7 +530,7 @@
 #define  PCI_EXP_DEVSTA_AUXPD	0x0010	/* AUX Power Detected */
 #define  PCI_EXP_DEVSTA_TRPND	0x0020	/* Transactions Pending */
 #define PCI_CAP_EXP_RC_ENDPOINT_SIZEOF_V1	12	/* v1 endpoints without link end here */
-#define PCI_EXP_LNKCAP		12	/* Link Capabilities */
+#define PCI_EXP_LNKCAP		0x0c	/* Link Capabilities */
 #define  PCI_EXP_LNKCAP_SLS	0x0000000f /* Supported Link Speeds */
 #define  PCI_EXP_LNKCAP_SLS_2_5GB 0x00000001 /* LNKCAP2 SLS Vector bit 0 */
 #define  PCI_EXP_LNKCAP_SLS_5_0GB 0x00000002 /* LNKCAP2 SLS Vector bit 1 */
@@ -549,7 +549,7 @@
 #define  PCI_EXP_LNKCAP_DLLLARC	0x00100000 /* Data Link Layer Link Active Reporting Capable */
 #define  PCI_EXP_LNKCAP_LBNC	0x00200000 /* Link Bandwidth Notification Capability */
 #define  PCI_EXP_LNKCAP_PN	0xff000000 /* Port Number */
-#define PCI_EXP_LNKCTL		16	/* Link Control */
+#define PCI_EXP_LNKCTL		0x10	/* Link Control */
 #define  PCI_EXP_LNKCTL_ASPMC	0x0003	/* ASPM Control */
 #define  PCI_EXP_LNKCTL_ASPM_L0S 0x0001	/* L0s Enable */
 #define  PCI_EXP_LNKCTL_ASPM_L1	0x0002	/* L1 Enable */
@@ -562,7 +562,7 @@
 #define  PCI_EXP_LNKCTL_HAWD	0x0200	/* Hardware Autonomous Width Disable */
 #define  PCI_EXP_LNKCTL_LBMIE	0x0400	/* Link Bandwidth Management Interrupt Enable */
 #define  PCI_EXP_LNKCTL_LABIE	0x0800	/* Link Autonomous Bandwidth Interrupt Enable */
-#define PCI_EXP_LNKSTA		18	/* Link Status */
+#define PCI_EXP_LNKSTA		0x12	/* Link Status */
 #define  PCI_EXP_LNKSTA_CLS	0x000f	/* Current Link Speed */
 #define  PCI_EXP_LNKSTA_CLS_2_5GB 0x0001 /* Current Link Speed 2.5GT/s */
 #define  PCI_EXP_LNKSTA_CLS_5_0GB 0x0002 /* Current Link Speed 5.0GT/s */
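
These offsets are normally consumed through the pcie_capability_*()
accessors rather than raw config reads; a hedged sketch of querying the
current link speed (pdev is a hypothetical device pointer):

	u16 lnksta;

	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
	if ((lnksta & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_8_0GB)
		pci_info(pdev, "link running at 8.0 GT/s\n");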
@@ -582,7 +582,7 @@
 #define  PCI_EXP_LNKSTA_LBMS	0x4000	/* Link Bandwidth Management Status */
 #define  PCI_EXP_LNKSTA_LABS	0x8000	/* Link Autonomous Bandwidth Status */
 #define PCI_CAP_EXP_ENDPOINT_SIZEOF_V1	20	/* v1 endpoints with link end here */
-#define PCI_EXP_SLTCAP		20	/* Slot Capabilities */
+#define PCI_EXP_SLTCAP		0x14	/* Slot Capabilities */
 #define  PCI_EXP_SLTCAP_ABP	0x00000001 /* Attention Button Present */
 #define  PCI_EXP_SLTCAP_PCP	0x00000002 /* Power Controller Present */
 #define  PCI_EXP_SLTCAP_MRLSP	0x00000004 /* MRL Sensor Present */
@@ -595,7 +595,7 @@
 #define  PCI_EXP_SLTCAP_EIP	0x00020000 /* Electromechanical Interlock Present */
 #define  PCI_EXP_SLTCAP_NCCS	0x00040000 /* No Command Completed Support */
 #define  PCI_EXP_SLTCAP_PSN	0xfff80000 /* Physical Slot Number */
-#define PCI_EXP_SLTCTL		24	/* Slot Control */
+#define PCI_EXP_SLTCTL		0x18	/* Slot Control */
 #define  PCI_EXP_SLTCTL_ABPE	0x0001	/* Attention Button Pressed Enable */
 #define  PCI_EXP_SLTCTL_PFDE	0x0002	/* Power Fault Detected Enable */
 #define  PCI_EXP_SLTCTL_MRLSCE	0x0004	/* MRL Sensor Changed Enable */
@@ -617,7 +617,7 @@
 #define  PCI_EXP_SLTCTL_EIC	0x0800	/* Electromechanical Interlock Control */
 #define  PCI_EXP_SLTCTL_DLLSCE	0x1000	/* Data Link Layer State Changed Enable */
 #define  PCI_EXP_SLTCTL_IBPD_DISABLE	0x4000 /* In-band PD disable */
-#define PCI_EXP_SLTSTA		26	/* Slot Status */
+#define PCI_EXP_SLTSTA		0x1a	/* Slot Status */
 #define  PCI_EXP_SLTSTA_ABP	0x0001	/* Attention Button Pressed */
 #define  PCI_EXP_SLTSTA_PFD	0x0002	/* Power Fault Detected */
 #define  PCI_EXP_SLTSTA_MRLSC	0x0004	/* MRL Sensor Changed */
@@ -627,15 +627,15 @@
 #define  PCI_EXP_SLTSTA_PDS	0x0040	/* Presence Detect State */
 #define  PCI_EXP_SLTSTA_EIS	0x0080	/* Electromechanical Interlock Status */
 #define  PCI_EXP_SLTSTA_DLLSC	0x0100	/* Data Link Layer State Changed */
-#define PCI_EXP_RTCTL		28	/* Root Control */
+#define PCI_EXP_RTCTL		0x1c	/* Root Control */
 #define  PCI_EXP_RTCTL_SECEE	0x0001	/* System Error on Correctable Error */
 #define  PCI_EXP_RTCTL_SENFEE	0x0002	/* System Error on Non-Fatal Error */
 #define  PCI_EXP_RTCTL_SEFEE	0x0004	/* System Error on Fatal Error */
 #define  PCI_EXP_RTCTL_PMEIE	0x0008	/* PME Interrupt Enable */
 #define  PCI_EXP_RTCTL_CRSSVE	0x0010	/* CRS Software Visibility Enable */
-#define PCI_EXP_RTCAP		30	/* Root Capabilities */
+#define PCI_EXP_RTCAP		0x1e	/* Root Capabilities */
 #define  PCI_EXP_RTCAP_CRSVIS	0x0001	/* CRS Software Visibility capability */
-#define PCI_EXP_RTSTA		32	/* Root Status */
+#define PCI_EXP_RTSTA		0x20	/* Root Status */
 #define  PCI_EXP_RTSTA_PME	0x00010000 /* PME status */
 #define  PCI_EXP_RTSTA_PENDING	0x00020000 /* PME pending */

 /*
@@ -646,7 +646,7 @@
  * Use pcie_capability_read_word() and similar interfaces to use them
  * safely.
  */
-#define PCI_EXP_DEVCAP2		36	/* Device Capabilities 2 */
+#define PCI_EXP_DEVCAP2		0x24	/* Device Capabilities 2 */
 #define  PCI_EXP_DEVCAP2_COMP_TMOUT_DIS	0x00000010 /* Completion Timeout Disable supported */
 #define  PCI_EXP_DEVCAP2_ARI		0x00000020 /* Alternative Routing-ID */
 #define  PCI_EXP_DEVCAP2_ATOMIC_ROUTE	0x00000040 /* Atomic Op routing */
@@ -658,7 +658,7 @@
 #define  PCI_EXP_DEVCAP2_OBFF_MSG	0x00040000 /* New message signaling */
 #define  PCI_EXP_DEVCAP2_OBFF_WAKE	0x00080000 /* Re-use WAKE# for OBFF */
 #define  PCI_EXP_DEVCAP2_EE_PREFIX	0x00200000 /* End-End TLP Prefix */
-#define PCI_EXP_DEVCTL2		40	/* Device Control 2 */
+#define PCI_EXP_DEVCTL2		0x28	/* Device Control 2 */
 #define  PCI_EXP_DEVCTL2_COMP_TIMEOUT	0x000f	/* Completion Timeout Value */
 #define  PCI_EXP_DEVCTL2_COMP_TMOUT_DIS	0x0010	/* Completion Timeout Disable */
 #define  PCI_EXP_DEVCTL2_ARI		0x0020	/* Alternative Routing-ID */
@@ -670,9 +670,9 @@
 #define  PCI_EXP_DEVCTL2_OBFF_MSGA_EN	0x2000	/* Enable OBFF Message type A */
 #define  PCI_EXP_DEVCTL2_OBFF_MSGB_EN	0x4000	/* Enable OBFF Message type B */
 #define  PCI_EXP_DEVCTL2_OBFF_WAKE_EN	0x6000	/* OBFF using WAKE# signaling */
-#define PCI_EXP_DEVSTA2		42	/* Device Status 2 */
-#define PCI_CAP_EXP_RC_ENDPOINT_SIZEOF_V2	44	/* v2 endpoints without link end here */
-#define PCI_EXP_LNKCAP2		44	/* Link Capabilities 2 */
+#define PCI_EXP_DEVSTA2		0x2a	/* Device Status 2 */
+#define PCI_CAP_EXP_RC_ENDPOINT_SIZEOF_V2	0x2c	/* end of v2 EPs w/o link */
+#define PCI_EXP_LNKCAP2		0x2c	/* Link Capabilities 2 */
 #define  PCI_EXP_LNKCAP2_SLS_2_5GB	0x00000002 /* Supported Speed 2.5GT/s */
 #define  PCI_EXP_LNKCAP2_SLS_5_0GB	0x00000004 /* Supported Speed 5GT/s */
 #define  PCI_EXP_LNKCAP2_SLS_8_0GB	0x00000008 /* Supported Speed 8GT/s */
@@ -680,7 +680,7 @@
 #define  PCI_EXP_LNKCAP2_SLS_32_0GB	0x00000020 /* Supported Speed 32GT/s */
 #define  PCI_EXP_LNKCAP2_SLS_64_0GB	0x00000040 /* Supported Speed 64GT/s */
 #define  PCI_EXP_LNKCAP2_CROSSLINK	0x00000100 /* Crosslink supported */
-#define PCI_EXP_LNKCTL2		48	/* Link Control 2 */
+#define PCI_EXP_LNKCTL2		0x30	/* Link Control 2 */
 #define  PCI_EXP_LNKCTL2_TLS		0x000f
 #define  PCI_EXP_LNKCTL2_TLS_2_5GT	0x0001 /* Supported Speed 2.5GT/s */
 #define  PCI_EXP_LNKCTL2_TLS_5_0GT	0x0002 /* Supported Speed 5GT/s */
@@ -691,12 +691,12 @@
 #define  PCI_EXP_LNKCTL2_ENTER_COMP	0x0010 /* Enter Compliance */
 #define  PCI_EXP_LNKCTL2_TX_MARGIN	0x0380 /* Transmit Margin */
 #define  PCI_EXP_LNKCTL2_HASD		0x0020 /* HW Autonomous Speed Disable */
-#define PCI_EXP_LNKSTA2		50	/* Link Status 2 */
-#define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	52	/* v2 endpoints with link end here */
-#define PCI_EXP_SLTCAP2		52	/* Slot Capabilities 2 */
+#define PCI_EXP_LNKSTA2			0x32	/* Link Status 2 */
+#define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	0x32	/* end of v2 EPs w/ link */
+#define PCI_EXP_SLTCAP2			0x34	/* Slot Capabilities 2 */
 #define  PCI_EXP_SLTCAP2_IBPD	0x00000001 /* In-band PD Disable Supported */
-#define PCI_EXP_SLTCTL2		56	/* Slot Control 2 */
-#define PCI_EXP_SLTSTA2		58	/* Slot Status 2 */
+#define PCI_EXP_SLTCTL2			0x38	/* Slot Control 2 */
+#define PCI_EXP_SLTSTA2			0x3a	/* Slot Status 2 */

 /* Extended Capabilities (PCI-X 2.0 and Express) */
 #define PCI_EXT_CAP_ID(header)		(header & 0x0000ffff)
@@ -742,7 +742,7 @@
 #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF	40

 /* Advanced Error Reporting */
-#define PCI_ERR_UNCOR_STATUS	4	/* Uncorrectable Error Status */
+#define PCI_ERR_UNCOR_STATUS	0x04	/* Uncorrectable Error Status */
 #define  PCI_ERR_UNC_UND	0x00000001	/* Undefined */
 #define  PCI_ERR_UNC_DLP	0x00000010	/* Data Link Protocol */
 #define  PCI_ERR_UNC_SURPDN	0x00000020	/* Surprise Down */
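
The AER registers live in extended config space at the offset returned by
pci_find_ext_capability(); a short sketch (pdev is a hypothetical device
pointer):

	u32 status;
	u16 aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);

	if (aer)
		pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &status);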
@@ -760,11 +760,11 @@
 #define  PCI_ERR_UNC_MCBTLP	0x00800000	/* MC blocked TLP */
 #define  PCI_ERR_UNC_ATOMEG	0x01000000	/* Atomic egress blocked */
 #define  PCI_ERR_UNC_TLPPRE	0x02000000	/* TLP prefix blocked */
-#define PCI_ERR_UNCOR_MASK	8	/* Uncorrectable Error Mask */
+#define PCI_ERR_UNCOR_MASK	0x08	/* Uncorrectable Error Mask */
 	/* Same bits as above */
-#define PCI_ERR_UNCOR_SEVER	12	/* Uncorrectable Error Severity */
+#define PCI_ERR_UNCOR_SEVER	0x0c	/* Uncorrectable Error Severity */
 	/* Same bits as above */
-#define PCI_ERR_COR_STATUS	16	/* Correctable Error Status */
+#define PCI_ERR_COR_STATUS	0x10	/* Correctable Error Status */
 #define  PCI_ERR_COR_RCVR	0x00000001	/* Receiver Error Status */
 #define  PCI_ERR_COR_BAD_TLP	0x00000040	/* Bad TLP Status */
 #define  PCI_ERR_COR_BAD_DLLP	0x00000080	/* Bad DLLP Status */
@@ -773,20 +773,20 @@
 #define  PCI_ERR_COR_ADV_NFAT	0x00002000	/* Advisory Non-Fatal */
 #define  PCI_ERR_COR_INTERNAL	0x00004000	/* Corrected Internal */
 #define  PCI_ERR_COR_LOG_OVER	0x00008000	/* Header Log Overflow */
-#define PCI_ERR_COR_MASK	20	/* Correctable Error Mask */
+#define PCI_ERR_COR_MASK	0x14	/* Correctable Error Mask */
 	/* Same bits as above */
-#define PCI_ERR_CAP		24	/* Advanced Error Capabilities */
-#define  PCI_ERR_CAP_FEP(x)	((x) & 31)	/* First Error Pointer */
+#define PCI_ERR_CAP		0x18	/* Advanced Error Capabilities & Ctrl*/
+#define  PCI_ERR_CAP_FEP(x)	((x) & 0x1f)	/* First Error Pointer */
 #define  PCI_ERR_CAP_ECRC_GENC	0x00000020	/* ECRC Generation Capable */
 #define  PCI_ERR_CAP_ECRC_GENE	0x00000040	/* ECRC Generation Enable */
 #define  PCI_ERR_CAP_ECRC_CHKC	0x00000080	/* ECRC Check Capable */
 #define  PCI_ERR_CAP_ECRC_CHKE	0x00000100	/* ECRC Check Enable */
-#define PCI_ERR_HEADER_LOG	28	/* Header Log Register (16 bytes) */
-#define PCI_ERR_ROOT_COMMAND	44	/* Root Error Command */
+#define PCI_ERR_HEADER_LOG	0x1c	/* Header Log Register (16 bytes) */
+#define PCI_ERR_ROOT_COMMAND	0x2c	/* Root Error Command */
 #define  PCI_ERR_ROOT_CMD_COR_EN	0x00000001 /* Correctable Err Reporting Enable */
 #define  PCI_ERR_ROOT_CMD_NONFATAL_EN	0x00000002 /* Non-Fatal Err Reporting Enable */
 #define  PCI_ERR_ROOT_CMD_FATAL_EN	0x00000004 /* Fatal Err Reporting Enable */
-#define PCI_ERR_ROOT_STATUS	48
+#define PCI_ERR_ROOT_STATUS	0x30
 #define  PCI_ERR_ROOT_COR_RCV	0x00000001	/* ERR_COR Received */
 #define  PCI_ERR_ROOT_MULTI_COR_RCV	0x00000002	/* Multiple ERR_COR */
 #define  PCI_ERR_ROOT_UNCOR_RCV	0x00000004	/* ERR_FATAL/NONFATAL */
@@ -795,52 +795,52 @@
 #define  PCI_ERR_ROOT_NONFATAL_RCV	0x00000020	/* Non-Fatal Received */
 #define  PCI_ERR_ROOT_FATAL_RCV		0x00000040	/* Fatal Received */
 #define  PCI_ERR_ROOT_AER_IRQ		0xf8000000	/* Advanced Error Interrupt Message Number */
-#define PCI_ERR_ROOT_ERR_SRC	52	/* Error Source Identification */
+#define PCI_ERR_ROOT_ERR_SRC	0x34	/* Error Source Identification */

 /* Virtual Channel */
-#define PCI_VC_PORT_CAP1	4
+#define PCI_VC_PORT_CAP1	0x04
 #define  PCI_VC_CAP1_EVCC	0x00000007	/* extended VC count */
 #define  PCI_VC_CAP1_LPEVCC	0x00000070	/* low prio extended VC count */
 #define  PCI_VC_CAP1_ARB_SIZE	0x00000c00
-#define PCI_VC_PORT_CAP2	8
+#define PCI_VC_PORT_CAP2	0x08
 #define  PCI_VC_CAP2_32_PHASE		0x00000002
 #define  PCI_VC_CAP2_64_PHASE		0x00000004
 #define  PCI_VC_CAP2_128_PHASE		0x00000008
 #define  PCI_VC_CAP2_ARB_OFF		0xff000000
-#define PCI_VC_PORT_CTRL	12
+#define PCI_VC_PORT_CTRL	0x0c
 #define  PCI_VC_PORT_CTRL_LOAD_TABLE	0x00000001
-#define PCI_VC_PORT_STATUS	14
+#define PCI_VC_PORT_STATUS	0x0e
 #define  PCI_VC_PORT_STATUS_TABLE	0x00000001
-#define PCI_VC_RES_CAP		16
+#define PCI_VC_RES_CAP		0x10
 #define  PCI_VC_RES_CAP_32_PHASE	0x00000002
 #define  PCI_VC_RES_CAP_64_PHASE	0x00000004
 #define  PCI_VC_RES_CAP_128_PHASE	0x00000008
 #define  PCI_VC_RES_CAP_128_PHASE_TB	0x00000010
 #define  PCI_VC_RES_CAP_256_PHASE	0x00000020
 #define  PCI_VC_RES_CAP_ARB_OFF	0xff000000
-#define PCI_VC_RES_CTRL		20
+#define PCI_VC_RES_CTRL		0x14
 #define  PCI_VC_RES_CTRL_LOAD_TABLE	0x00010000
 #define  PCI_VC_RES_CTRL_ARB_SELECT	0x000e0000
 #define  PCI_VC_RES_CTRL_ID		0x07000000
 #define  PCI_VC_RES_CTRL_ENABLE		0x80000000
-#define PCI_VC_RES_STATUS	26
+#define PCI_VC_RES_STATUS	0x1a
 #define  PCI_VC_RES_STATUS_TABLE	0x00000001
 #define  PCI_VC_RES_STATUS_NEGO		0x00000002
 #define PCI_CAP_VC_BASE_SIZEOF		0x10
-#define PCI_CAP_VC_PER_VC_SIZEOF	0x0C
+#define PCI_CAP_VC_PER_VC_SIZEOF	0x0c

 /* Power Budgeting */
-#define PCI_PWR_DSR		4	/* Data Select Register */
-#define PCI_PWR_DATA		8	/* Data Register */
+#define PCI_PWR_DSR		0x04	/* Data Select Register */
+#define PCI_PWR_DATA		0x08	/* Data Register */
 #define  PCI_PWR_DATA_BASE(x)	((x) & 0xff)	    /* Base Power */
 #define  PCI_PWR_DATA_SCALE(x)	(((x) >> 8) & 3)    /* Data Scale */
 #define  PCI_PWR_DATA_PM_SUB(x)	(((x) >> 10) & 7)   /* PM Sub State */
 #define  PCI_PWR_DATA_PM_STATE(x) (((x) >> 13) & 3) /* PM State */
 #define  PCI_PWR_DATA_TYPE(x)	(((x) >> 15) & 7)   /* Type */
 #define  PCI_PWR_DATA_RAIL(x)	(((x) >> 18) & 7)   /* Power Rail */
-#define PCI_PWR_CAP		12	/* Capability */
+#define PCI_PWR_CAP		0x0c	/* Capability */
 #define  PCI_PWR_CAP_BUDGET(x)	((x) & 1)	/* Included in system budget */
-#define PCI_EXT_CAP_PWR_SIZEOF	16
+#define PCI_EXT_CAP_PWR_SIZEOF	0x10

 /* Root Complex Event Collector Endpoint Association */
 #define PCI_RCEC_RCIEP_BITMAP	4	/* Associated Bitmap for RCiEPs */
@@ -964,7 +964,7 @@
 #define  PCI_SRIOV_VFM_MI	0x1	/* Dormant.MigrateIn */
 #define  PCI_SRIOV_VFM_MO	0x2	/* Active.MigrateOut */
 #define  PCI_SRIOV_VFM_AV	0x3	/* Active.Available */
-#define PCI_EXT_CAP_SRIOV_SIZEOF 64
+#define PCI_EXT_CAP_SRIOV_SIZEOF 0x40

 #define PCI_LTR_MAX_SNOOP_LAT	0x4
 #define PCI_LTR_MAX_NOSNOOP_LAT	0x6
@@ -1017,12 +1017,12 @@
 #define   PCI_TPH_LOC_NONE	0x000	/* no location */
 #define   PCI_TPH_LOC_CAP	0x200	/* in capability */
 #define   PCI_TPH_LOC_MSIX	0x400	/* in MSI-X */
-#define PCI_TPH_CAP_ST_MASK	0x07FF0000	/* st table mask */
-#define PCI_TPH_CAP_ST_SHIFT	16	/* st table shift */
-#define PCI_TPH_BASE_SIZEOF	12	/* size with no st table */
+#define PCI_TPH_CAP_ST_MASK	0x07FF0000	/* ST table mask */
+#define PCI_TPH_CAP_ST_SHIFT	16	/* ST table shift */
+#define PCI_TPH_BASE_SIZEOF	0xc	/* size with no ST table */

 /* Downstream Port Containment */
-#define PCI_EXP_DPC_CAP		4	/* DPC Capability */
+#define PCI_EXP_DPC_CAP		0x04	/* DPC Capability */
 #define  PCI_EXP_DPC_IRQ	0x001F	/* Interrupt Message Number */
 #define  PCI_EXP_DPC_CAP_RP_EXT	0x0020	/* Root Port Extensions */
 #define  PCI_EXP_DPC_CAP_POISONED_TLP	0x0040	/* Poisoned TLP Egress Blocking Supported */
@@ -1030,19 +1030,19 @@
 #define  PCI_EXP_DPC_RP_PIO_LOG_SIZE	0x0F00	/* RP PIO Log Size */
 #define  PCI_EXP_DPC_CAP_DL_ACTIVE	0x1000	/* ERR_COR signal on DL_Active supported */

-#define PCI_EXP_DPC_CTL			6	/* DPC control */
+#define PCI_EXP_DPC_CTL			0x06	/* DPC control */
 #define  PCI_EXP_DPC_CTL_EN_FATAL	0x0001	/* Enable trigger on ERR_FATAL message */
 #define  PCI_EXP_DPC_CTL_EN_NONFATAL	0x0002	/* Enable trigger on ERR_NONFATAL message */
 #define  PCI_EXP_DPC_CTL_INT_EN	0x0008	/* DPC Interrupt Enable */

-#define PCI_EXP_DPC_STATUS		8	/* DPC Status */
+#define PCI_EXP_DPC_STATUS		0x08	/* DPC Status */
 #define  PCI_EXP_DPC_STATUS_TRIGGER	    0x0001 /* Trigger Status */
 #define  PCI_EXP_DPC_STATUS_TRIGGER_RSN	    0x0006 /* Trigger Reason */
 #define  PCI_EXP_DPC_STATUS_INTERRUPT	    0x0008 /* Interrupt Status */
 #define  PCI_EXP_DPC_RP_BUSY		    0x0010 /* Root Port Busy */
 #define  PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT 0x0060 /* Trig Reason Extension */

-#define PCI_EXP_DPC_SOURCE_ID		10	/* DPC Source Identifier */
+#define PCI_EXP_DPC_SOURCE_ID		 0x0A	/* DPC Source Identifier */

 #define PCI_EXP_DPC_RP_PIO_STATUS	 0x0C	/* RP PIO Status */
 #define  PCI_EXP_DPC_RP_PIO_MASK	 0x10	/* RP PIO Mask */