Native PCI drivers for root complex devices were originally all in
drivers/pci/host/. Some of these devices can also be operated in endpoint
mode. Drivers for endpoint mode didn't seem to fit in the "host"
directory, so we put both the root complex and endpoint drivers in
per-device directories, e.g., drivers/pci/dwc/, drivers/pci/cadence/, etc.
These per-device directories contain trivial Kconfig and Makefiles and
clutter drivers/pci/. Make a new drivers/pci/controller/ directory and
collect all the device-specific drivers there.
No functional change intended.
Link: https://lkml.kernel.org/r/1520304202-232891-1-git-send-email-shawn.lin@rock-chips.com
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
[bhelgaas: changelog]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Allow VMD devices with PCI ID 8086:28c0 to bind to the VMD driver.
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
[lorenzo.pieralisi@arm.com: updated commit subject]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Depending on platform configuration, certain VMD devices may have an
additional configuration option which specifies the range of bus numbers
allowed in a VMD PCIe domain. We determine this requirement by checking
the value of two vendor-specific config registers in the VMD endpoint:
VMCAP[0] | VMCONFIG[9:8] | Bus Numbers
---------+---------------+------------
    0    |       *       |    0-255
    1    |      00       |    0-127
    1    |      01       |  128-255
    1    |      10       |    0-255
This feature is also recorded as a bit in driver_data so that future
conforming device IDs that support it can be enabled through sysfs
new_id.
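As a sketch of how the driver can evaluate these registers (the 0x40/0x44
offsets and the helper names below are illustrative assumptions, not values
stated in this changelog):
#include <linux/pci.h>

#define PCI_REG_VMCAP           0x40    /* assumed VMCAP offset */
#define BUS_RESTRICT_CAP(vmcap) ((vmcap) & 0x1)
#define PCI_REG_VMCONFIG        0x44    /* assumed VMCONFIG offset */
#define BUS_RESTRICT_CFG(vmcfg) (((vmcfg) >> 8) & 0x3)

/* Return the first bus number of the VMD PCIe domain per the table above. */
static unsigned int vmd_start_bus(struct pci_dev *dev)
{
        u32 vmcap, vmconfig;

        pci_read_config_dword(dev, PCI_REG_VMCAP, &vmcap);
        pci_read_config_dword(dev, PCI_REG_VMCONFIG, &vmconfig);

        if (BUS_RESTRICT_CAP(vmcap) && BUS_RESTRICT_CFG(vmconfig) == 0x1)
                return 128;     /* VMCONFIG[9:8] == 01: buses 128-255 */
        return 0;               /* all other combinations start at bus 0 */
}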
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
[lorenzo.pieralisi@arm.com: updated commit subject]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Certain VMD devices have registers within membar 2 which may shadow the
membar 1 and membar 2 addresses. These are intended for use in
virtualization, where a guest-assigned address wouldn't otherwise be
translated when it is assigned to the root port and child devices,
because the addresses are embedded in the assignment message.
These values will only reflect the membars when enabled in the BIOS, as
determined by a register in the VMD device.
This patch declares this option as a bit in the PCI ID driver_data, so
that future conforming device IDs can be enabled through sysfs new_id if
necessary.
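A sketch of the shadow-register path; the VMLOCK offset, the shadow offset
within membar 2, and the BAR indices (2 and 4) are assumptions for
illustration:
#include <linux/io.h>
#include <linux/pci.h>

#define PCI_REG_VMLOCK        0x70    /* assumed lock-register offset */
#define MB2_SHADOW_EN(vmlock) (!((vmlock) & 0x2))
#define MB2_SHADOW_OFFSET     0x2000  /* assumed shadow-register offset */

/* Derive guest-to-host offsets for membars 1/2 (BARs 2 and 4 here). */
static int vmd_get_membar_offsets(struct pci_dev *dev, u64 offset[2])
{
        void __iomem *membar2;
        u32 vmlock;

        offset[0] = offset[1] = 0;
        if (pci_read_config_dword(dev, PCI_REG_VMLOCK, &vmlock) ||
            vmlock == ~0)
                return -ENODEV;
        if (!MB2_SHADOW_EN(vmlock))     /* shadow not enabled by BIOS */
                return 0;

        membar2 = pci_iomap(dev, 4, 0);
        if (!membar2)
                return -ENOMEM;
        offset[0] = pci_resource_start(dev, 2) -
                        readq(membar2 + MB2_SHADOW_OFFSET);
        offset[1] = pci_resource_start(dev, 4) -
                        readq(membar2 + MB2_SHADOW_OFFSET + 8);
        pci_iounmap(dev, membar2);
        return 0;
}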
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
[lorenzo.pieralisi@arm.com: updated commit subject]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Add the Intel VMD device IDs to the PCI ID database and update the VMD
driver.
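The resulting ID table plausibly looks like the sketch below; the feature-bit
names mirror the two preceding patches, and raw device IDs stand in for the
pci_ids.h macro names, which aren't given here:
#include <linux/module.h>
#include <linux/pci.h>

#define VMD_FEAT_HAS_MEMBAR_SHADOW      (1 << 0)  /* illustrative bits */
#define VMD_FEAT_HAS_BUS_RESTRICTIONS   (1 << 1)

static const struct pci_device_id vmd_ids[] = {
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x201d), },
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x28c0),
          .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW |
                         VMD_FEAT_HAS_BUS_RESTRICTIONS, },
        { 0, }
};
MODULE_DEVICE_TABLE(pci, vmd_ids);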
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Add SPDX GPL-2.0 to all PCI files that specified the GPL version 2 license.
Remove the boilerplate GPL version 2 language, relying on the assertion in
b24413180f ("License cleanup: add SPDX GPL-2.0 license identifier to
files with no license") that the SPDX identifier may be used instead of the
full boilerplate text.
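In each affected .c file the net effect is a single identifier line at the
top in place of the boilerplate, e.g.:
// SPDX-License-Identifier: GPL-2.0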
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
VMD hardware has to share its vectors among child devices in its PCI
domain, so we should allocate as many as possible rather than just the
ones that can be affinitized.
pci_alloc_irq_vectors_affinity() limits the number of affinitized IRQs to
the number of present CPUs (see irq_calc_affinity_vectors()). But we'd
prefer to have more vectors, even if they aren't distributed across the
CPUs, so use pci_alloc_irq_vectors() instead.
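A minimal sketch of the allocation change (the helper name is illustrative;
pci_msix_vec_count() reports the device's MSI-X table size):
#include <linux/pci.h>

/* Request every MSI-X vector the endpoint offers; the affinity variant
 * would cap the count at the number of present CPUs. */
static int vmd_alloc_vectors(struct pci_dev *dev)
{
        int msix_count = pci_msix_vec_count(dev);

        if (msix_count < 0)
                return -ENODEV;
        return pci_alloc_irq_vectors(dev, 1, msix_count, PCI_IRQ_MSIX);
}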
Reported-by: Brad Goodman <Bradley.Goodman@dell.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
[bhelgaas: add irq_calc_affinity_vectors() reference to changelog]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Free up the IRQs we request on the suspend path and reallocate them on the
resume path.
Fixes this error:
CPU 111 disable failed: CPU has 9 vectors assigned and there are only 0 available.
Error taking CPU111 down: -34
Non-boot CPUs are not disabled
Enabling non-boot CPUs ...
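A sketch of the fix, assuming the driver's struct vmd_dev (with msix_count
and the per-vector irqs[] array) and its vmd_irq() handler:
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/pm.h>

static int vmd_suspend(struct device *dev)
{
        struct pci_dev *pdev = to_pci_dev(dev);
        struct vmd_dev *vmd = pci_get_drvdata(pdev);
        int i;

        for (i = 0; i < vmd->msix_count; i++)
                devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]);
        pci_save_state(pdev);
        return 0;
}

static int vmd_resume(struct device *dev)
{
        struct pci_dev *pdev = to_pci_dev(dev);
        struct vmd_dev *vmd = pci_get_drvdata(pdev);
        int err, i;

        for (i = 0; i < vmd->msix_count; i++) {
                err = devm_request_irq(dev, pci_irq_vector(pdev, i),
                                       vmd_irq, IRQF_NO_THREAD,
                                       "vmd", &vmd->irqs[i]);
                if (err)
                        return err;
        }
        pci_restore_state(pdev);
        return 0;
}
static SIMPLE_DEV_PM_OPS(vmd_dev_pm_ops, vmd_suspend, vmd_resume);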
Signed-off-by: Scott Bauer <scott.bauer@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Keith Busch <keith.busch@intel.com>
We don't want slower IRQ handlers impacting faster devices that happen to
be assigned the same VMD interrupt vector. The driver was trying to
separate such devices by checking whether the device was using MSI-X, but
really we just don't want endpoint devices to share with bridges. Most
bridges currently use MSI, so that criterion happened to work, but newer
ones may use MSI-X, so this patch explicitly checks the device type when
choosing a vector.
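A sketch of the new selection logic; vmd_next_irq() and the list types are
the driver's, assumed here:
#include <linux/msi.h>
#include <linux/pci.h>

static struct vmd_irq_list *vmd_irq_for_desc(struct vmd_dev *vmd,
                                             struct msi_desc *desc)
{
        /* Bridges share the first vector; endpoints rotate over the rest. */
        if (vmd->msix_count == 1 ||
            pci_is_bridge(msi_desc_to_pci_dev(desc)))
                return &vmd->irqs[0];
        return vmd_next_irq(vmd);
}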
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
The driver has a special purpose for the VMD device's first IRQ, so this
one shouldn't be considered for IRQ affinity.
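One way to express this with the irq_affinity API is to reserve vector 0 as
a pre-vector so affinity spreading starts at vector 1 (a sketch; the helper
name is illustrative):
#include <linux/interrupt.h>
#include <linux/pci.h>

static int vmd_alloc_vectors(struct pci_dev *dev, int msix_count)
{
        struct irq_affinity affd = {
                .pre_vectors = 1,  /* vector 0 is special; don't spread it */
        };

        return pci_alloc_irq_vectors_affinity(dev, 1, msix_count,
                                              PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
                                              &affd);
}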
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Recent __call_srcu() changes have exposed that we need to clean up SRCU
structures after pci_stop_root_bus() calls into vmd_msi_free().
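A sketch of the resulting teardown order (the driver's types are assumed):
#include <linux/irqdomain.h>
#include <linux/pci.h>
#include <linux/srcu.h>

static void vmd_remove(struct pci_dev *dev)
{
        struct vmd_dev *vmd = pci_get_drvdata(dev);
        int i;

        pci_stop_root_bus(vmd->bus);    /* may call into vmd_msi_free() */
        pci_remove_root_bus(vmd->bus);
        for (i = 0; i < vmd->msix_count; i++)
                cleanup_srcu_struct(&vmd->irqs[i].srcu); /* now safe */
        irq_domain_remove(vmd->irq_domain);
}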
Fixes: 3906b91844 ("PCI: vmd: Use SRCU as a local RCU to prevent delaying global RCU")
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Keith Busch <keith.busch@intel.com>
Cc: <stable@vger.kernel.org> # 4.11
VMD domains are allocated starting at 0x10000, not 0x1000 as the comment
said. Correct the comment and add a reference to the ACPI spec for _SEG.
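The corrected comment reads roughly as follows (paraphrased):
/*
 * VMD domains start at 0x10000 to avoid colliding with ACPI _SEG domains.
 * Per ACPI r6.0, sec 6.5.6, _SEG returns an integer whose lower 16 bits
 * are the PCI Segment Group (domain) number, i.e. 0x0000-0xffff.
 */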
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
As with commit 8ff0ef996c ("PCI: host: Mark PCIe/PCI (MSI) IRQ cascade
handlers as IRQF_NO_THREAD"), we should explicitly mark the PCIe/PCI (MSI)
IRQ cascade handlers in designware, qcom, and vmd as IRQF_NO_THREAD.
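For the vmd case this amounts to passing the flag when requesting the
cascade vectors (a sketch; vmd_irq() is the driver's demultiplexing
handler):
#include <linux/interrupt.h>
#include <linux/pci.h>

static int vmd_request_cascade_irq(struct pci_dev *dev, int i, void *irq_list)
{
        /* The cascade handler only demultiplexes to the child devices'
         * handlers, so it must not be force-threaded. */
        return devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
                                vmd_irq, IRQF_NO_THREAD, "vmd", irq_list);
}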
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Keith Busch <keith.busch@intel.com> # vmd
Acked-by: Jingoo Han <jingoohan1@gmail.com> # pcie-designware-plat.c
Use the fwnode to create a named domain so diagnosis works.
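A sketch of the change, assuming the driver's vmd_msi_domain_info and the
x86 vector domain as parent:
#include <linux/irqdomain.h>
#include <linux/msi.h>

static int vmd_create_irq_domain(struct vmd_dev *vmd)
{
        struct fwnode_handle *fn;

        /* Name the domain "VMD-MSI-<domain-nr>" so it can be identified. */
        fn = irq_domain_alloc_named_id_fwnode("VMD-MSI", vmd->sysdata.domain);
        if (!fn)
                return -ENODEV;

        vmd->irq_domain = pci_msi_create_irq_domain(fn, &vmd_msi_domain_info,
                                                    x86_vector_domain);
        irq_domain_free_fwnode(fn);
        return vmd->irq_domain ? 0 : -ENODEV;
}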
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-pci@vger.kernel.org
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235444.379861978@linutronix.de
Fix the following warnings:
drivers/pci/host/vmd.c:731:12: warning: ‘vmd_suspend’ defined but not used [-Wunused-function]
static int vmd_suspend(struct device *dev)
^
drivers/pci/host/vmd.c:739:12: warning: ‘vmd_resume’ defined but not used [-Wunused-function]
static int vmd_resume(struct device *dev)
^
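One conventional fix is to compile the handlers only when the PM-sleep
machinery that references them is configured in (marking them
__maybe_unused is the other common option; the bodies below are stubs):
#include <linux/device.h>
#include <linux/pm.h>

#ifdef CONFIG_PM_SLEEP
static int vmd_suspend(struct device *dev)
{
        /* ... free IRQs and save state as before ... */
        return 0;
}

static int vmd_resume(struct device *dev)
{
        /* ... re-request IRQs and restore state as before ... */
        return 0;
}
#endif
static SIMPLE_DEV_PM_OPS(vmd_dev_pm_ops, vmd_suspend, vmd_resume);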
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
Reviewed-by: Keith Busch <keith.busch@intel.com>
SRCU lets synchronize_srcu() depend on VMD-local RCU primitives,
preventing VMD's long delays from locking up RCU for the rest of the
system. VMD performs a synchronize when removing a device, but will hit
all IRQ lists if the device uses all VMD vectors. This patch does not
speed up VMD's own RCU synchronization, but it isolates the read-side
delays to the VMD subsystem. Additionally, using SRCU in VMD's ISR keeps
it isolated from any other RCU waiters in the rest of the system.
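For reference, the read side in the ISR then looks roughly like this
(driver types assumed):
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/rculist.h>
#include <linux/srcu.h>

static irqreturn_t vmd_irq(int irq, void *data)
{
        struct vmd_irq_list *irqs = data;
        struct vmd_irq *vmdirq;
        int idx;

        /* Readers take the per-list SRCU, so synchronize_srcu() on
         * teardown waits only on VMD's own readers, not on global RCU. */
        idx = srcu_read_lock(&irqs->srcu);
        list_for_each_entry_rcu(vmdirq, &irqs->irq_list, node)
                generic_handle_irq(vmdirq->virq);
        srcu_read_unlock(&irqs->srcu, idx);

        return IRQ_HANDLED;
}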
Tested using concurrent FIO and NVMe resets:
[global]
rw=read
bs=4k
direct=1
ioengine=libaio
iodepth=32
norandommap
timeout=300
runtime=1000000000
[nvme0]
cpus_allowed=0-63
numjobs=8
filename=/dev/nvme0n1
[nvme1]
cpus_allowed=0-63
numjobs=8
filename=/dev/nvme1n1
while (true) do
    for i in /sys/class/nvme/nvme*; do
        echo "Resetting ${i##*/}"
        echo 1 > $i/reset_controller;
        sleep 5
    done;
done
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
The driver core clears the driver data to NULL after device_release or on
probe failure, so there is no need to clear it to NULL manually in the
driver.
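So the remove path can simply drop the explicit clear (a sketch; the
teardown helper is illustrative):
#include <linux/pci.h>

static void vmd_remove(struct pci_dev *dev)
{
        struct vmd_dev *vmd = pci_get_drvdata(dev);

        vmd_teardown(vmd);
        /* No pci_set_drvdata(dev, NULL) here: the core clears it for us. */
}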
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
* pci/host-vmd:
x86/PCI: VMD: Move VMD driver to drivers/pci/host
x86/PCI: VMD: Synchronize with RCU freeing MSI IRQ descs
x86/PCI: VMD: Eliminate index member from IRQ list
x86/PCI: VMD: Eliminate vmd_vector member from list type
x86/PCI: VMD: Convert to use pci_alloc_irq_vectors() API
x86/PCI: VMD: Allocate IRQ lists with correct MSI-X count
PCI: Use positive flags in pci_alloc_irq_vectors()
PCI: Update "pci=resource_alignment" documentation
Conflicts:
drivers/pci/host/Kconfig
drivers/pci/host/Makefile
Move the driver source and Kconfig to the PCI host bridge drivers directory
and move the config option to a more appropriate sub-menu instead of
occupying the top-level location.
Update the Kconfig option with the X86_64 dependency that was implicit in
the previous location, and mention the module name used when the driver is
built as a loadable module.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Jon Derrick <jonathan.derrick@intel.com>