IOMMU Updates for Linux v5.20/v6.0:

Including:
 
 	- Most intrusive patch is small and changes the default
 	  allocation policy for DMA addresses. Before the change the
 	  allocator tried its best to find an address in the first 4GB.
 	  But that led to performance problems when that space got
 	  exhausted, and since most devices are capable of 64-bit DMA
 	  these days, we changed it to search in the full DMA-mask
 	  range from the beginning.  This change has the potential to
 	  uncover bugs elsewhere, in the kernel or the hardware. There
 	  is a Kconfig option and a command line option to restore the
 	  old behavior, but neither is enabled by default.
 
 	- Add Robin Murphy as reviewer of IOMMU code and maintainer for
 	  the dma-iommu and iova code
 
 	- Changing the IOVA magazine size from 1032 to 1024 bytes to save
 	  memory
 
 	- Some core code cleanups and dead-code removal
 
 	- Support for ACPI IORT RMR node
 
 	- Support for multiple PCI domains in the AMD-Vi driver
 
 	- ARM SMMU changes from Will Deacon:
 
 	  - Add even more Qualcomm device-tree compatible strings
 
 	  - Support dumping of IMP DEF Qualcomm registers on TLB sync
 	    timeout
 
 	  - Fix reference count leak on device tree node in Qualcomm
 	    driver
 
 	- Intel VT-d driver updates from Lu Baolu:
 
 	  - Make intel-iommu.h private
 
 	  - Optimize the use of two locks
 
 	  - Extend the driver to support large-scale platforms
 
 	  - Cleanup some dead code
 
 	- MediaTek IOMMU refactoring and support for TTBR up to 35 bits
 
 	- Basic support for Exynos SysMMU v7
 
 	- VirtIO IOMMU driver gets a map/unmap_pages() implementation
 
 	- Other smaller cleanups and fixes
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEr9jSbILcajRFYWYyK/BELZcBGuMFAmLs3DIACgkQK/BELZcB
 GuMizhAAguAnLLOkOLlR9/MhrTZfNXCUX+bfrEIevjFXMw4iPNfCCr4ydQ7EdVK6
 ZA/3Z89huYl0d0x/FELolnQi+HOeqYrfTDe4rB7TgNgwZnWa+fdHcyYkgBGyfPaV
 ilgjNcx8o//9o4NasyB6kU395jVmFxb735gMTTb+tcO9fr+/qIB6hxrHuCklxrNr
 C7wK6kkoDPi5n0QuXCSjXEx2Hk245pAWKPLwqxsUYzHGlLfl7ULOxw65BUBGvn/H
 uCsTfJFu7u+ErwQYf0qPuOwRBnRdsx9g5EAnfab8p074SoKWvbNnftIxgIRp8ZEM
 YgCbhYa1GOFI4r+XzqRzEbc0/vPSttims4Jqz0KxYs7pr5EoVifrWLJFjJdCdc2h
 Tio1gTvOq8HbH63kwYNKJhg4iSC6zVd37ihEhvfFO6LcgFl4iCfd2o9zK7oY40J4
 XoOxofVnJ2e3tzdhZ/n5quCXiudHixm6WuVa7QYKscF7Ud0tY1wWKuibdlMQTeNM
 68MvtlteKcfs1BrWzZyrFMrFeAfIY8LI82y6jdJuoNMU5LE9+5yelXBdJhnVygZ+
 Jglv1TIt6W/z1H5JgXtNVZ1wWgBm7rurOqNyfN8XCd8eP1z321CLfX8ujkhKrIWP
 ApG15cwvpnh1JX630+UFiEikTGU0fb2orMdPwYmwuu8DAsoLVHE=
 =hI2K
 -----END PGP SIGNATURE-----

Merge tag 'iommu-updates-v5.20-or-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:

 - The most intrusive patch is small and changes the default allocation
   policy for DMA addresses.

   Before the change the allocator tried its best to find an address in
   the first 4GB. But that led to performance problems when that space
   got exhausted, and since most devices are capable of 64-bit DMA
   these days, we changed it to search in the full DMA-mask range from
   the beginning (a sketch of the new fallback follows this list).

   This change has the potential to uncover bugs elsewhere, in the
   kernel or the hardware. There is a Kconfig option and a command line
   option to restore the old behavior, but neither is enabled by
   default.

 - Add Robin Murphy as reviewer of IOMMU code and maintainer for the
   dma-iommu and iova code

 - Changing the IOVA magazine size from 1032 to 1024 bytes to save
   memory (the sizing arithmetic is shown after this list)

 - Some core code cleanups and dead-code removal

 - Support for ACPI IORT RMR node

 - Support for multiple PCI domains in the AMD-Vi driver

 - ARM SMMU changes from Will Deacon:
      - Add even more Qualcomm device-tree compatible strings
      - Support dumping of IMP DEF Qualcomm registers on TLB sync
        timeout
      - Fix reference count leak on device tree node in Qualcomm driver

 - Intel VT-d driver updates from Lu Baolu:
      - Make intel-iommu.h private
      - Optimize the use of two locks
      - Extend the driver to support large-scale platforms
      - Cleanup some dead code

 - MediaTek IOMMU refactoring and support for TTBR up to 35 bits

 - Basic support for Exynos SysMMU v7

 - VirtIO IOMMU driver gets a map/unmap_pages() implementation

 - Other smaller cleanups and fixes
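
A minimal sketch of the new allocation fallback described in the first
bullet, loosely modelled on iommu_dma_alloc_iova() in
drivers/iommu/dma-iommu.c (the function name and the sac_preferred()
helper are illustrative assumptions, not the literal upstream code):

	static dma_addr_t dma_alloc_iova_sketch(struct iova_domain *iovad,
						size_t size, u64 dma_limit)
	{
		unsigned long shift = iova_shift(iovad);
		unsigned long iova_len = size >> shift;
		unsigned long iova = 0;

		/* Old default: always try the low 32-bit space first,
		 * so legacy PCI could use single address cycles. */
		if (dma_limit > DMA_BIT_MASK(32) && sac_preferred())
			iova = alloc_iova_fast(iovad, iova_len,
					       DMA_BIT_MASK(32) >> shift, false);

		/* New default: search the full DMA-mask range right away */
		if (!iova)
			iova = alloc_iova_fast(iovad, iova_len,
					       dma_limit >> shift, true);

		return (dma_addr_t)iova << shift;
	}

The IOVA magazine change is pure sizing arithmetic; a hedged
reconstruction, with field names following drivers/iommu/iova.c:

	/* With IOVA_MAG_SIZE = 128 the magazine was 8 + 128 * 8 = 1032
	 * bytes, which kmalloc rounds up to a 2048-byte slab; 127
	 * entries make it exactly 1024 bytes. */
	#define IOVA_MAG_SIZE 127

	struct iova_magazine {
		unsigned long size;
		unsigned long pfns[IOVA_MAG_SIZE];
	};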

* tag 'iommu-updates-v5.20-or-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (116 commits)
  iommu/amd: Fix compile warning in init code
  iommu/amd: Add support for AVIC when SNP is enabled
  iommu/amd: Simplify and Consolidate Virtual APIC (AVIC) Enablement
  ACPI/IORT: Fix build error implicit-function-declaration
  drivers: iommu: fix clang -wformat warning
  iommu/arm-smmu: qcom_iommu: Add of_node_put() when breaking out of loop
  iommu/arm-smmu-qcom: Add SM6375 SMMU compatible
  dt-bindings: arm-smmu: Add compatible for Qualcomm SM6375
  MAINTAINERS: Add Robin Murphy as IOMMU SUBSYTEM reviewer
  iommu/amd: Do not support IOMMUv2 APIs when SNP is enabled
  iommu/amd: Do not support IOMMU_DOMAIN_IDENTITY after SNP is enabled
  iommu/amd: Set translation valid bit only when IO page tables are in use
  iommu/amd: Introduce function to check and enable SNP
  iommu/amd: Globally detect SNP support
  iommu/amd: Process all IVHDs before enabling IOMMU features
  iommu/amd: Introduce global variable for storing common EFR and EFR2
  iommu/amd: Introduce Support for Extended Feature 2 Register
  iommu/amd: Change macro for IOMMU control register bit shift to decimal value
  iommu/exynos: Enable default VM instance on SysMMU v7
  iommu/exynos: Add SysMMU v7 register set
  ...
Merged by Linus Torvalds, 2022-08-06 10:42:38 -07:00, commit 1d239c1eb8
59 changed files with 2530 additions and 1493 deletions


@ -2274,23 +2274,39 @@
ivrs_ioapic [HW,X86-64]
Provide an override to the IOAPIC-ID<->DEVICE-ID
mapping provided in the IVRS ACPI table. For
example, to map IOAPIC-ID decimal 10 to
PCI device 00:14.0 write the parameter as:
mapping provided in the IVRS ACPI table.
By default, PCI segment is 0, and can be omitted.
For example:
* To map IOAPIC-ID decimal 10 to PCI device 00:14.0
write the parameter as:
ivrs_ioapic[10]=00:14.0
* To map IOAPIC-ID decimal 10 to PCI segment 0x1 and
PCI device 00:14.0 write the parameter as:
ivrs_ioapic[10]=0001:00:14.0
ivrs_hpet [HW,X86-64]
Provide an override to the HPET-ID<->DEVICE-ID
mapping provided in the IVRS ACPI table. For
example, to map HPET-ID decimal 0 to
PCI device 00:14.0 write the parameter as:
mapping provided in the IVRS ACPI table.
By default, PCI segment is 0, and can be omitted.
For example:
* To map HPET-ID decimal 0 to PCI device 00:14.0
write the parameter as:
ivrs_hpet[0]=00:14.0
* To map HPET-ID decimal 10 to PCI segment 0x1 and
PCI device 00:14.0 write the parameter as:
ivrs_hpet[10]=0001:00:14.0
ivrs_acpihid [HW,X86-64]
Provide an override to the ACPI-HID:UID<->DEVICE-ID
mapping provided in the IVRS ACPI table. For
example, to map UART-HID:UID AMD0020:0 to
PCI device 00:14.5 write the parameter as:
mapping provided in the IVRS ACPI table.
For example, to map UART-HID:UID AMD0020:0 to
PCI segment 0x1 and PCI device ID 00:14.5,
write the parameter as:
ivrs_acpihid[0001:00:14.5]=AMD0020:0
By default, PCI segment is 0, and can be omitted.
For example, for PCI device 00:14.5 write the parameter as:
ivrs_acpihid[00:14.5]=AMD0020:0
js= [HW,JOY] Analog joystick


@ -42,6 +42,7 @@ properties:
- qcom,sdx55-smmu-500
- qcom,sdx65-smmu-500
- qcom,sm6350-smmu-500
- qcom,sm6375-smmu-500
- qcom,sm8150-smmu-500
- qcom,sm8250-smmu-500
- qcom,sm8350-smmu-500


@ -101,6 +101,10 @@ properties:
items:
- const: bclk
mediatek,infracfg:
$ref: /schemas/types.yaml#/definitions/phandle
description: The phandle to the mediatek infracfg syscon
mediatek,larbs:
$ref: /schemas/types.yaml#/definitions/phandle-array
minItems: 1
@ -167,6 +171,18 @@ allOf:
required:
- power-domains
- if:
properties:
compatible:
contains:
enum:
- mediatek,mt2712-m4u
- mediatek,mt8173-m4u
then:
required:
- mediatek,infracfg
- if: # The IOMMUs don't have larbs.
not:
properties:
@ -191,6 +207,7 @@ examples:
interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
clocks = <&infracfg CLK_INFRA_M4U>;
clock-names = "bclk";
mediatek,infracfg = <&infracfg>;
mediatek,larbs = <&larb0>, <&larb1>, <&larb2>,
<&larb3>, <&larb4>, <&larb5>;
#iommu-cells = <1>;


@ -10233,7 +10233,6 @@ L: iommu@lists.linux.dev
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: drivers/iommu/intel/
F: include/linux/intel-iommu.h
F: include/linux/intel-svm.h
INTEL IOP-ADMA DMA DRIVER
@ -10605,9 +10604,20 @@ T: git git://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git
F: fs/iomap/
F: include/linux/iomap.h
IOMMU DRIVERS
IOMMU DMA-API LAYER
M: Robin Murphy <robin.murphy@arm.com>
L: iommu@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: drivers/iommu/dma-iommu.c
F: drivers/iommu/iova.c
F: include/linux/dma-iommu.h
F: include/linux/iova.h
IOMMU SUBSYSTEM
M: Joerg Roedel <joro@8bytes.org>
M: Will Deacon <will@kernel.org>
R: Robin Murphy <robin.murphy@arm.com>
L: iommu@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git


@ -6,7 +6,6 @@
* Copyright (c) 2006-2009, Intel Corporation
*/
#include <linux/intel-iommu.h>
#include <linux/init_task.h>
#include <linux/spinlock.h>
#include <linux/export.h>
@ -516,17 +515,3 @@ struct acpi_table_header *tboot_get_dmar_table(struct acpi_table_header *dmar_tb
return dmar_tbl;
}
int tboot_force_iommu(void)
{
if (!tboot_enabled())
return 0;
if (no_iommu || dmar_disabled)
pr_warn("Forcing Intel-IOMMU to enabled\n");
dmar_disabled = 0;
no_iommu = 0;
return 1;
}


@ -41,7 +41,6 @@
#include <linux/mman.h>
#include <linux/highmem.h>
#include <linux/iommu.h>
#include <linux/intel-iommu.h>
#include <linux/cpufreq.h>
#include <linux/user-return-notifier.h>
#include <linux/srcu.h>


@ -788,6 +788,294 @@ void acpi_configure_pmsi_domain(struct device *dev)
}
#ifdef CONFIG_IOMMU_API
static void iort_rmr_free(struct device *dev,
struct iommu_resv_region *region)
{
struct iommu_iort_rmr_data *rmr_data;
rmr_data = container_of(region, struct iommu_iort_rmr_data, rr);
kfree(rmr_data->sids);
kfree(rmr_data);
}
static struct iommu_iort_rmr_data *iort_rmr_alloc(
struct acpi_iort_rmr_desc *rmr_desc,
int prot, enum iommu_resv_type type,
u32 *sids, u32 num_sids)
{
struct iommu_iort_rmr_data *rmr_data;
struct iommu_resv_region *region;
u32 *sids_copy;
u64 addr = rmr_desc->base_address, size = rmr_desc->length;
rmr_data = kmalloc(sizeof(*rmr_data), GFP_KERNEL);
if (!rmr_data)
return NULL;
/* Create a copy of SIDs array to associate with this rmr_data */
sids_copy = kmemdup(sids, num_sids * sizeof(*sids), GFP_KERNEL);
if (!sids_copy) {
kfree(rmr_data);
return NULL;
}
rmr_data->sids = sids_copy;
rmr_data->num_sids = num_sids;
if (!IS_ALIGNED(addr, SZ_64K) || !IS_ALIGNED(size, SZ_64K)) {
/* PAGE align base addr and size */
addr &= PAGE_MASK;
size = PAGE_ALIGN(size + offset_in_page(rmr_desc->base_address));
pr_err(FW_BUG "RMR descriptor[0x%llx - 0x%llx] not aligned to 64K, continue with [0x%llx - 0x%llx]\n",
rmr_desc->base_address,
rmr_desc->base_address + rmr_desc->length - 1,
addr, addr + size - 1);
}
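/*
 * Worked example of the fixup above (assuming 4K pages): a descriptor
 * with base_address 0x40001234 and length 0x800 is not 64K aligned, so
 * addr becomes 0x40001000 and size becomes
 * PAGE_ALIGN(0x800 + 0x234) = 0x1000, i.e. the region used is
 * [0x40001000 - 0x40001fff].
 */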
region = &rmr_data->rr;
INIT_LIST_HEAD(&region->list);
region->start = addr;
region->length = size;
region->prot = prot;
region->type = type;
region->free = iort_rmr_free;
return rmr_data;
}
static void iort_rmr_desc_check_overlap(struct acpi_iort_rmr_desc *desc,
u32 count)
{
int i, j;
for (i = 0; i < count; i++) {
u64 end, start = desc[i].base_address, length = desc[i].length;
if (!length) {
pr_err(FW_BUG "RMR descriptor[0x%llx] with zero length, continue anyway\n",
start);
continue;
}
end = start + length - 1;
/* Check for address overlap */
for (j = i + 1; j < count; j++) {
u64 e_start = desc[j].base_address;
u64 e_end = e_start + desc[j].length - 1;
if (start <= e_end && end >= e_start)
pr_err(FW_BUG "RMR descriptor[0x%llx - 0x%llx] overlaps, continue anyway\n",
start, end);
}
}
}
/*
* Please note, we will keep the already allocated RMR reserve
* regions in case of a memory allocation failure.
*/
static void iort_get_rmrs(struct acpi_iort_node *node,
struct acpi_iort_node *smmu,
u32 *sids, u32 num_sids,
struct list_head *head)
{
struct acpi_iort_rmr *rmr = (struct acpi_iort_rmr *)node->node_data;
struct acpi_iort_rmr_desc *rmr_desc;
int i;
rmr_desc = ACPI_ADD_PTR(struct acpi_iort_rmr_desc, node,
rmr->rmr_offset);
iort_rmr_desc_check_overlap(rmr_desc, rmr->rmr_count);
for (i = 0; i < rmr->rmr_count; i++, rmr_desc++) {
struct iommu_iort_rmr_data *rmr_data;
enum iommu_resv_type type;
int prot = IOMMU_READ | IOMMU_WRITE;
if (rmr->flags & ACPI_IORT_RMR_REMAP_PERMITTED)
type = IOMMU_RESV_DIRECT_RELAXABLE;
else
type = IOMMU_RESV_DIRECT;
if (rmr->flags & ACPI_IORT_RMR_ACCESS_PRIVILEGE)
prot |= IOMMU_PRIV;
/* Attributes 0x00 - 0x03 represent device memory */
if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(rmr->flags) <=
ACPI_IORT_RMR_ATTR_DEVICE_GRE)
prot |= IOMMU_MMIO;
else if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(rmr->flags) ==
ACPI_IORT_RMR_ATTR_NORMAL_IWB_OWB)
prot |= IOMMU_CACHE;
rmr_data = iort_rmr_alloc(rmr_desc, prot, type,
sids, num_sids);
if (!rmr_data)
return;
list_add_tail(&rmr_data->rr.list, head);
}
}
static u32 *iort_rmr_alloc_sids(u32 *sids, u32 count, u32 id_start,
u32 new_count)
{
u32 *new_sids;
u32 total_count = count + new_count;
int i;
new_sids = krealloc_array(sids, count + new_count,
sizeof(*new_sids), GFP_KERNEL);
if (!new_sids)
return NULL;
for (i = count; i < total_count; i++)
new_sids[i] = id_start++;
return new_sids;
}
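/*
 * Illustration (hedged): starting from sids == NULL and count == 0,
 * iort_rmr_alloc_sids(NULL, 0, 0x800, 4) returns an array holding
 * 0x800, 0x801, 0x802, 0x803; a later call with the returned pointer
 * and count == 4 appends a further range via krealloc_array().
 */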
static bool iort_rmr_has_dev(struct device *dev, u32 id_start,
u32 id_count)
{
int i;
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
/*
* Make sure the kernel has preserved the boot firmware PCIe
* configuration. This is required to ensure that the RMR PCIe
* StreamIDs are still valid (Refer: ARM DEN 0049E.d Section 3.1.1.5).
*/
if (dev_is_pci(dev)) {
struct pci_dev *pdev = to_pci_dev(dev);
struct pci_host_bridge *host = pci_find_host_bridge(pdev->bus);
if (!host->preserve_config)
return false;
}
for (i = 0; i < fwspec->num_ids; i++) {
if (fwspec->ids[i] >= id_start &&
fwspec->ids[i] <= id_start + id_count)
return true;
}
return false;
}
static void iort_node_get_rmr_info(struct acpi_iort_node *node,
struct acpi_iort_node *iommu,
struct device *dev, struct list_head *head)
{
struct acpi_iort_node *smmu = NULL;
struct acpi_iort_rmr *rmr;
struct acpi_iort_id_mapping *map;
u32 *sids = NULL;
u32 num_sids = 0;
int i;
if (!node->mapping_offset || !node->mapping_count) {
pr_err(FW_BUG "Invalid ID mapping, skipping RMR node %p\n",
node);
return;
}
rmr = (struct acpi_iort_rmr *)node->node_data;
if (!rmr->rmr_offset || !rmr->rmr_count)
return;
map = ACPI_ADD_PTR(struct acpi_iort_id_mapping, node,
node->mapping_offset);
/*
* Go through the ID mappings and see if we have a match for SMMU
* and dev(if !NULL). If found, get the sids for the Node.
* Please note, id_count is equal to the number of IDs in the
* range minus one.
*/
for (i = 0; i < node->mapping_count; i++, map++) {
struct acpi_iort_node *parent;
if (!map->id_count)
continue;
parent = ACPI_ADD_PTR(struct acpi_iort_node, iort_table,
map->output_reference);
if (parent != iommu)
continue;
/* If dev is valid, check RMR node corresponds to the dev SID */
if (dev && !iort_rmr_has_dev(dev, map->output_base,
map->id_count))
continue;
/* Retrieve SIDs associated with the Node. */
sids = iort_rmr_alloc_sids(sids, num_sids, map->output_base,
map->id_count + 1);
if (!sids)
return;
num_sids += map->id_count + 1;
}
if (!sids)
return;
iort_get_rmrs(node, smmu, sids, num_sids, head);
kfree(sids);
}
static void iort_find_rmrs(struct acpi_iort_node *iommu, struct device *dev,
struct list_head *head)
{
struct acpi_table_iort *iort;
struct acpi_iort_node *iort_node, *iort_end;
int i;
/* Only supports ARM DEN 0049E.d onwards */
if (iort_table->revision < 5)
return;
iort = (struct acpi_table_iort *)iort_table;
iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort,
iort->node_offset);
iort_end = ACPI_ADD_PTR(struct acpi_iort_node, iort,
iort_table->length);
for (i = 0; i < iort->node_count; i++) {
if (WARN_TAINT(iort_node >= iort_end, TAINT_FIRMWARE_WORKAROUND,
"IORT node pointer overflows, bad table!\n"))
return;
if (iort_node->type == ACPI_IORT_NODE_RMR)
iort_node_get_rmr_info(iort_node, iommu, dev, head);
iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node,
iort_node->length);
}
}
/*
* Populate the RMR list associated with a given IOMMU and dev(if provided).
* If dev is NULL, the function populates all the RMRs associated with the
* given IOMMU.
*/
static void iort_iommu_rmr_get_resv_regions(struct fwnode_handle *iommu_fwnode,
struct device *dev,
struct list_head *head)
{
struct acpi_iort_node *iommu;
iommu = iort_get_iort_node(iommu_fwnode);
if (!iommu)
return;
iort_find_rmrs(iommu, dev, head);
}
static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev)
{
struct acpi_iort_node *iommu;
@ -806,27 +1094,22 @@ static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev)
return NULL;
}
/**
* iort_iommu_msi_get_resv_regions - Reserved region driver helper
* @dev: Device from iommu_get_resv_regions()
* @head: Reserved region list from iommu_get_resv_regions()
*
* Returns: Number of msi reserved regions on success (0 if platform
* doesn't require the reservation or no associated msi regions),
* appropriate error value otherwise. The ITS interrupt translation
* spaces (ITS_base + SZ_64K, SZ_64K) associated with the device
* are the msi reserved regions.
/*
* Retrieve platform specific HW MSI reserve regions.
* The ITS interrupt translation spaces (ITS_base + SZ_64K, SZ_64K)
* associated with the device are the HW MSI reserved regions.
*/
int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head)
static void iort_iommu_msi_get_resv_regions(struct device *dev,
struct list_head *head)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct acpi_iort_its_group *its;
struct acpi_iort_node *iommu_node, *its_node = NULL;
int i, resv = 0;
int i;
iommu_node = iort_get_msi_resv_iommu(dev);
if (!iommu_node)
return 0;
return;
/*
* Current logic to reserve ITS regions relies on HW topologies
@ -846,7 +1129,7 @@ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head)
}
if (!its_node)
return 0;
return;
/* Move to ITS specific data */
its = (struct acpi_iort_its_group *)its_node->node_data;
@ -860,16 +1143,53 @@ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head)
region = iommu_alloc_resv_region(base + SZ_64K, SZ_64K,
prot, IOMMU_RESV_MSI);
if (region) {
if (region)
list_add_tail(&region->list, head);
resv++;
}
}
}
return (resv == its->its_count) ? resv : -ENODEV;
}
/**
* iort_iommu_get_resv_regions - Generic helper to retrieve reserved regions.
* @dev: Device from iommu_get_resv_regions()
* @head: Reserved region list from iommu_get_resv_regions()
*/
void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
iort_iommu_msi_get_resv_regions(dev, head);
iort_iommu_rmr_get_resv_regions(fwspec->iommu_fwnode, dev, head);
}
/**
* iort_get_rmr_sids - Retrieve IORT RMR node reserved regions with
* associated StreamIDs information.
* @iommu_fwnode: fwnode associated with IOMMU
* @head: Reserved region list
*/
void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode,
struct list_head *head)
{
iort_iommu_rmr_get_resv_regions(iommu_fwnode, NULL, head);
}
EXPORT_SYMBOL_GPL(iort_get_rmr_sids);
/**
* iort_put_rmr_sids - Free memory allocated for RMR reserved regions.
* @iommu_fwnode: fwnode associated with IOMMU
* @head: Reserved region list
*/
void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode,
struct list_head *head)
{
struct iommu_resv_region *entry, *next;
list_for_each_entry_safe(entry, next, head, list)
entry->free(NULL, entry);
}
EXPORT_SYMBOL_GPL(iort_put_rmr_sids);
static inline bool iort_iommu_driver_enabled(u8 type)
{
switch (type) {
@ -1034,8 +1354,8 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in)
}
#else
int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head)
{ return 0; }
void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head)
{ }
int iort_iommu_configure_id(struct device *dev, const u32 *input_id)
{ return -ENODEV; }
#endif


@ -20,7 +20,7 @@
#include <linux/kernel.h>
#include <linux/pagemap.h>
#include <linux/agp_backend.h>
#include <linux/intel-iommu.h>
#include <linux/iommu.h>
#include <linux/delay.h>
#include <asm/smp.h>
#include "agp.h"
@ -573,18 +573,15 @@ static void intel_gtt_cleanup(void)
*/
static inline int needs_ilk_vtd_wa(void)
{
#ifdef CONFIG_INTEL_IOMMU
const unsigned short gpu_devid = intel_private.pcidev->device;
/* Query intel_iommu to see if we need the workaround. Presumably that
* was loaded first.
/*
* Query iommu subsystem to see if we need the workaround. Presumably
* that was loaded first.
*/
if ((gpu_devid == PCI_DEVICE_ID_INTEL_IRONLAKE_D_IG ||
gpu_devid == PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG) &&
intel_iommu_gfx_mapped)
return 1;
#endif
return 0;
return ((gpu_devid == PCI_DEVICE_ID_INTEL_IRONLAKE_D_IG ||
gpu_devid == PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG) &&
device_iommu_mapped(&intel_private.pcidev->dev));
}
static bool intel_gtt_can_wc(void)


@ -27,7 +27,6 @@
#include <acpi/video.h>
#include <linux/i2c.h>
#include <linux/input.h>
#include <linux/intel-iommu.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/dma-resv.h>


@ -6,7 +6,6 @@
#include <linux/dma-resv.h>
#include <linux/highmem.h>
#include <linux/intel-iommu.h>
#include <linux/sync_file.h>
#include <linux/uaccess.h>


@ -144,6 +144,32 @@ config IOMMU_DMA
select IRQ_MSI_IOMMU
select NEED_SG_DMA_LENGTH
config IOMMU_DMA_PCI_SAC
bool "Enable 64-bit legacy PCI optimisation by default"
depends on IOMMU_DMA
help
Enable by default an IOMMU optimisation for 64-bit legacy PCI devices,
wherein the DMA API layer will always first try to allocate a 32-bit
DMA address suitable for a single address cycle, before falling back
to allocating from the device's full usable address range. If your
system has 64-bit legacy PCI devices in 32-bit slots where using dual
address cycles reduces DMA throughput significantly, this may be
beneficial to overall performance.
If you have a modern PCI Express based system, this feature mostly just
represents extra overhead in the allocation path for no practical
benefit, and it should usually be preferable to say "n" here.
However, beware that this feature has also historically papered over
bugs where the IOMMU address width and/or device DMA mask is not set
correctly. If device DMA problems and IOMMU faults start occurring
after disabling this option, it is almost certainly indicative of a
latent driver or firmware/BIOS bug, which would previously have only
manifested with several gigabytes worth of concurrent DMA mappings.
If this option is not set, the feature can still be re-enabled at
boot time with the "iommu.forcedac=0" command-line argument.
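
# Example (derived from the help text above): the old SAC-first
# behaviour can be kept either at build time with
# CONFIG_IOMMU_DMA_PCI_SAC=y or, on a kernel built without it, at boot
# time on the kernel command line:
#
#	iommu.forcedac=0
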
# Shared Virtual Addressing
config IOMMU_SVA
bool
@ -363,6 +389,16 @@ config ARM_SMMU_QCOM
When running on a Qualcomm platform that has the custom variant
of the ARM SMMU, this needs to be built into the SMMU driver.
config ARM_SMMU_QCOM_DEBUG
bool "ARM SMMU QCOM implementation defined debug support"
depends on ARM_SMMU_QCOM
help
Support for implementation specific debug features in ARM SMMU
hardware found in QTI platforms.
Say Y here to enable debug for issues such as TLB sync timeouts,
which require implementation-defined register dumps.
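
# Example .config fragment (assumed dependency chain per the entries
# above) enabling the Qualcomm debug support:
#
#	CONFIG_ARM_SMMU=y
#	CONFIG_ARM_SMMU_QCOM=y
#	CONFIG_ARM_SMMU_QCOM_DEBUG=y
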
config ARM_SMMU_V3
tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
depends on ARM64


@ -13,12 +13,13 @@
extern irqreturn_t amd_iommu_int_thread(int irq, void *data);
extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
extern void amd_iommu_apply_erratum_63(u16 devid);
extern void amd_iommu_apply_erratum_63(struct amd_iommu *iommu, u16 devid);
extern void amd_iommu_restart_event_logging(struct amd_iommu *iommu);
extern int amd_iommu_init_devices(void);
extern void amd_iommu_uninit_devices(void);
extern void amd_iommu_init_notifier(void);
extern int amd_iommu_init_api(void);
extern void amd_iommu_set_rlookup_table(struct amd_iommu *iommu, u16 devid);
#ifdef CONFIG_AMD_IOMMU_DEBUGFS
void amd_iommu_debugfs_setup(struct amd_iommu *iommu);
@ -114,10 +115,17 @@ void amd_iommu_domain_clr_pt_root(struct protection_domain *domain)
amd_iommu_domain_set_pt_root(domain, 0);
}
static inline int get_pci_sbdf_id(struct pci_dev *pdev)
{
int seg = pci_domain_nr(pdev->bus);
u16 devid = pci_dev_id(pdev);
return PCI_SEG_DEVID_TO_SBDF(seg, devid);
}
extern bool translation_pre_enabled(struct amd_iommu *iommu);
extern bool amd_iommu_is_attach_deferred(struct device *dev);
extern int __init add_special_device(u8 type, u8 id, u16 *devid,
extern int __init add_special_device(u8 type, u8 id, u32 *devid,
bool cmd_line);
#ifdef CONFIG_DMI
@ -128,4 +136,10 @@ static inline void amd_iommu_apply_ivrs_quirks(void) { }
extern void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
u64 *root, int mode);
extern struct dev_table_entry *get_dev_table(struct amd_iommu *iommu);
extern u64 amd_iommu_efr;
extern u64 amd_iommu_efr2;
extern bool amd_iommu_snp_en;
#endif


@ -67,6 +67,7 @@
#define MMIO_INTCAPXT_EVT_OFFSET 0x0170
#define MMIO_INTCAPXT_PPR_OFFSET 0x0178
#define MMIO_INTCAPXT_GALOG_OFFSET 0x0180
#define MMIO_EXT_FEATURES2 0x01A0
#define MMIO_CMD_HEAD_OFFSET 0x2000
#define MMIO_CMD_TAIL_OFFSET 0x2008
#define MMIO_EVT_HEAD_OFFSET 0x2010
@ -102,6 +103,12 @@
#define FEATURE_GLXVAL_SHIFT 14
#define FEATURE_GLXVAL_MASK (0x03ULL << FEATURE_GLXVAL_SHIFT)
/* Extended Feature 2 Bits */
#define FEATURE_SNPAVICSUP_SHIFT 5
#define FEATURE_SNPAVICSUP_MASK (0x07ULL << FEATURE_SNPAVICSUP_SHIFT)
#define FEATURE_SNPAVICSUP_GAM(x) \
((x & FEATURE_SNPAVICSUP_MASK) >> FEATURE_SNPAVICSUP_SHIFT == 0x1)
/* Note:
* The current driver only supports 16-bit PASID.
* Currently, hardware only implements up to 16-bit PASID
@ -143,27 +150,28 @@
#define EVENT_FLAG_I 0x008
/* feature control bits */
#define CONTROL_IOMMU_EN 0x00ULL
#define CONTROL_HT_TUN_EN 0x01ULL
#define CONTROL_EVT_LOG_EN 0x02ULL
#define CONTROL_EVT_INT_EN 0x03ULL
#define CONTROL_COMWAIT_EN 0x04ULL
#define CONTROL_INV_TIMEOUT 0x05ULL
#define CONTROL_PASSPW_EN 0x08ULL
#define CONTROL_RESPASSPW_EN 0x09ULL
#define CONTROL_COHERENT_EN 0x0aULL
#define CONTROL_ISOC_EN 0x0bULL
#define CONTROL_CMDBUF_EN 0x0cULL
#define CONTROL_PPRLOG_EN 0x0dULL
#define CONTROL_PPRINT_EN 0x0eULL
#define CONTROL_PPR_EN 0x0fULL
#define CONTROL_GT_EN 0x10ULL
#define CONTROL_GA_EN 0x11ULL
#define CONTROL_GAM_EN 0x19ULL
#define CONTROL_GALOG_EN 0x1CULL
#define CONTROL_GAINT_EN 0x1DULL
#define CONTROL_XT_EN 0x32ULL
#define CONTROL_INTCAPXT_EN 0x33ULL
#define CONTROL_IOMMU_EN 0
#define CONTROL_HT_TUN_EN 1
#define CONTROL_EVT_LOG_EN 2
#define CONTROL_EVT_INT_EN 3
#define CONTROL_COMWAIT_EN 4
#define CONTROL_INV_TIMEOUT 5
#define CONTROL_PASSPW_EN 8
#define CONTROL_RESPASSPW_EN 9
#define CONTROL_COHERENT_EN 10
#define CONTROL_ISOC_EN 11
#define CONTROL_CMDBUF_EN 12
#define CONTROL_PPRLOG_EN 13
#define CONTROL_PPRINT_EN 14
#define CONTROL_PPR_EN 15
#define CONTROL_GT_EN 16
#define CONTROL_GA_EN 17
#define CONTROL_GAM_EN 25
#define CONTROL_GALOG_EN 28
#define CONTROL_GAINT_EN 29
#define CONTROL_XT_EN 50
#define CONTROL_INTCAPXT_EN 51
#define CONTROL_SNPAVIC_EN 61
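/*
 * Usage sketch (hedged; MMIO_CONTROL_OFFSET as in the AMD driver):
 * with the macros now holding plain bit numbers, enabling a feature
 * is a single-bit set, e.g.
 *
 *	ctrl = readq(iommu->mmio_base + MMIO_CONTROL_OFFSET);
 *	ctrl |= BIT_ULL(CONTROL_EVT_LOG_EN);	(sets bit 2)
 *	writeq(ctrl, iommu->mmio_base + MMIO_CONTROL_OFFSET);
 */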
#define CTRL_INV_TO_MASK (7 << CONTROL_INV_TIMEOUT)
#define CTRL_INV_TO_NONE 0
@ -445,8 +453,6 @@ struct irq_remap_table {
u32 *table;
};
extern struct irq_remap_table **irq_lookup_table;
/* Interrupt remapping feature used? */
extern bool amd_iommu_irq_remap;
@ -456,6 +462,16 @@ extern bool amdr_ivrs_remap_support;
/* kmem_cache to get tables with 128 byte alignment */
extern struct kmem_cache *amd_iommu_irq_cache;
#define PCI_SBDF_TO_SEGID(sbdf) (((sbdf) >> 16) & 0xffff)
#define PCI_SBDF_TO_DEVID(sbdf) ((sbdf) & 0xffff)
#define PCI_SEG_DEVID_TO_SBDF(seg, devid) ((((u32)(seg) & 0xffff) << 16) | \
((devid) & 0xffff))
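/*
 * Worked example: segment 0x0001 and device 00:14.0 (bus 0x00,
 * devfn 0xa0, hence devid 0x00a0) pack to sbdf 0x000100a0;
 * PCI_SBDF_TO_SEGID() and PCI_SBDF_TO_DEVID() recover 0x0001 and
 * 0x00a0 respectively.
 */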
/* Make iterating over all PCI segments easier */
#define for_each_pci_segment(pci_seg) \
list_for_each_entry((pci_seg), &amd_iommu_pci_seg_list, list)
#define for_each_pci_segment_safe(pci_seg, next) \
list_for_each_entry_safe((pci_seg), (next), &amd_iommu_pci_seg_list, list)
/*
* Make iterating over all IOMMUs easier
*/
@ -478,13 +494,14 @@ extern struct kmem_cache *amd_iommu_irq_cache;
struct amd_iommu_fault {
u64 address; /* IO virtual address of the fault*/
u32 pasid; /* Address space identifier */
u16 device_id; /* Originating PCI device id */
u32 sbdf; /* Originating PCI device id */
u16 tag; /* PPR tag */
u16 flags; /* Fault flags */
};
struct amd_iommu;
struct iommu_domain;
struct irq_domain;
struct amd_irte_ops;
@ -530,6 +547,75 @@ struct protection_domain {
unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
};
/*
* This structure contains information about one PCI segment in the system.
*/
struct amd_iommu_pci_seg {
/* List with all PCI segments in the system */
struct list_head list;
/* List of all available dev_data structures */
struct llist_head dev_data_list;
/* PCI segment number */
u16 id;
/* Largest PCI device id we expect translation requests for */
u16 last_bdf;
/* Size of the device table */
u32 dev_table_size;
/* Size of the alias table */
u32 alias_table_size;
/* Size of the rlookup table */
u32 rlookup_table_size;
/*
* device table virtual address
*
* Pointer to the per PCI segment device table.
* It is indexed by the PCI device id or the HT unit id and contains
* information about the domain the device belongs to as well as the
* page table root pointer.
*/
struct dev_table_entry *dev_table;
/*
* The rlookup iommu table is used to find the IOMMU which is
* responsible for a specific device. It is indexed by the PCI
* device id.
*/
struct amd_iommu **rlookup_table;
/*
* This table is used to find the irq remapping table for a given
* device id quickly.
*/
struct irq_remap_table **irq_lookup_table;
/*
* Pointer to a device table which the content of the old device table
* will be copied to. It is only used in the kdump kernel.
*/
struct dev_table_entry *old_dev_tbl_cpy;
/*
* The alias table is a driver specific data structure which contains the
* mappings of the PCI device ids to the actual requestor ids on the IOMMU.
* More than one device can share the same requestor id.
*/
u16 *alias_table;
/*
* A list of required unity mappings we find in ACPI. It is not locked
* because at runtime it is only read. It is created at ACPI table
* parsing time.
*/
struct list_head unity_map;
};
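/*
 * Sketch of typical use (hedged; setup_one_segment() is a stand-in):
 * every segment is visited through the iterator defined earlier, e.g.
 *
 *	struct amd_iommu_pci_seg *pci_seg;
 *
 *	for_each_pci_segment(pci_seg)
 *		setup_one_segment(pci_seg);
 */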
/*
* Structure where we save information about one hardware AMD IOMMU in the
* system.
@ -567,6 +653,9 @@ struct amd_iommu {
/* Extended features */
u64 features;
/* Extended features 2 */
u64 features2;
/* IOMMUv2 */
bool is_iommu_v2;
@ -581,7 +670,7 @@ struct amd_iommu {
u16 cap_ptr;
/* pci domain of this IOMMU */
u16 pci_seg;
struct amd_iommu_pci_seg *pci_seg;
/* start of exclusion range of that IOMMU */
u64 exclusion_start;
@ -666,8 +755,8 @@ struct acpihid_map_entry {
struct list_head list;
u8 uid[ACPIHID_UID_LEN];
u8 hid[ACPIHID_HID_LEN];
u16 devid;
u16 root_devid;
u32 devid;
u32 root_devid;
bool cmd_line;
struct iommu_group *group;
};
@ -675,7 +764,7 @@ struct acpihid_map_entry {
struct devid_map {
struct list_head list;
u8 id;
u16 devid;
u32 devid;
bool cmd_line;
};
@ -689,7 +778,7 @@ struct iommu_dev_data {
struct list_head list; /* For domain->dev_list */
struct llist_node dev_data_list; /* For global dev_data_list */
struct protection_domain *domain; /* Domain the device is bound to */
struct pci_dev *pdev;
struct device *dev;
u16 devid; /* PCI Device ID */
bool iommu_v2; /* Device can make use of IOMMUv2 */
struct {
@ -709,6 +798,12 @@ extern struct list_head ioapic_map;
extern struct list_head hpet_map;
extern struct list_head acpihid_map;
/*
* List with all PCI segments in the system. This list is not locked because
* it is only written at driver initialization time
*/
extern struct list_head amd_iommu_pci_seg_list;
/*
* List with all IOMMUs in the system. This list is not locked because it is
* only written and read at driver initialization or suspend time
@ -748,39 +843,13 @@ struct unity_map_entry {
int prot;
};
/*
* List of all unity mappings. It is not locked because at runtime it is only
* read. It is created at ACPI table parsing time.
*/
extern struct list_head amd_iommu_unity_map;
/*
* Data structures for device handling
*/
/*
* Device table used by hardware. Read and write accesses by software are
* locked with the amd_iommu_pd_table lock.
*/
extern struct dev_table_entry *amd_iommu_dev_table;
/*
* Alias table to find requestor ids to device ids. Not locked because only
* read on runtime.
*/
extern u16 *amd_iommu_alias_table;
/*
* Reverse lookup table to find the IOMMU which translates a specific device.
*/
extern struct amd_iommu **amd_iommu_rlookup_table;
/* size of the dma_ops aperture as power of 2 */
extern unsigned amd_iommu_aperture_order;
/* largest PCI device id we expect translation requests for */
extern u16 amd_iommu_last_bdf;
/* allocation bitmap for domain ids */
extern unsigned long *amd_iommu_pd_alloc_bitmap;
@ -913,6 +982,7 @@ struct irq_2_irte {
struct amd_ir_data {
u32 cached_ga_tag;
struct amd_iommu *iommu;
struct irq_2_irte irq_2_irte;
struct msi_msg msi_entry;
void *entry; /* Pointer to union irte or struct irte_ga */
@ -930,9 +1000,9 @@ struct amd_ir_data {
struct amd_irte_ops {
void (*prepare)(void *, u32, bool, u8, u32, int);
void (*activate)(void *, u16, u16);
void (*deactivate)(void *, u16, u16);
void (*set_affinity)(void *, u16, u16, u8, u32);
void (*activate)(struct amd_iommu *iommu, void *, u16, u16);
void (*deactivate)(struct amd_iommu *iommu, void *, u16, u16);
void (*set_affinity)(struct amd_iommu *iommu, void *, u16, u16, u8, u32);
void *(*get)(struct irq_remap_table *, int);
void (*set_allocated)(struct irq_remap_table *, int);
bool (*is_allocated)(struct irq_remap_table *, int);

(File diff suppressed because it is too large)


@ -258,7 +258,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
__npte = PM_LEVEL_PDE(level, iommu_virt_to_phys(page));
/* pte could have been changed somewhere. */
if (cmpxchg64(pte, __pte, __npte) != __pte)
if (!try_cmpxchg64(pte, &__pte, __npte))
free_page((unsigned long)page);
else if (IOMMU_PTE_PRESENT(__pte))
*updated = true;
@ -341,10 +341,8 @@ static void free_clear_pte(u64 *pte, u64 pteval, struct list_head *freelist)
u64 *pt;
int mode;
while (cmpxchg64(pte, pteval, 0) != pteval) {
while (!try_cmpxchg64(pte, &pteval, 0))
pr_warn("AMD-Vi: IOMMU pte changed since we read it\n");
pteval = *pte;
}
if (!IOMMU_PTE_PRESENT(pteval))
return;
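/*
 * Semantics relied on in the conversion above (generic kernel API):
 * try_cmpxchg64(ptr, &old, new) returns true on success and, on
 * failure, updates 'old' with the current value of *ptr, so the retry
 * loop no longer needs the explicit "pteval = *pte" re-read.
 */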

(File diff suppressed because it is too large)


@ -51,7 +51,7 @@ struct pasid_state {
struct device_state {
struct list_head list;
u16 devid;
u32 sbdf;
atomic_t count;
struct pci_dev *pdev;
struct pasid_state **states;
@ -83,35 +83,25 @@ static struct workqueue_struct *iommu_wq;
static void free_pasid_states(struct device_state *dev_state);
static u16 device_id(struct pci_dev *pdev)
{
u16 devid;
devid = pdev->bus->number;
devid = (devid << 8) | pdev->devfn;
return devid;
}
static struct device_state *__get_device_state(u16 devid)
static struct device_state *__get_device_state(u32 sbdf)
{
struct device_state *dev_state;
list_for_each_entry(dev_state, &state_list, list) {
if (dev_state->devid == devid)
if (dev_state->sbdf == sbdf)
return dev_state;
}
return NULL;
}
static struct device_state *get_device_state(u16 devid)
static struct device_state *get_device_state(u32 sbdf)
{
struct device_state *dev_state;
unsigned long flags;
spin_lock_irqsave(&state_lock, flags);
dev_state = __get_device_state(devid);
dev_state = __get_device_state(sbdf);
if (dev_state != NULL)
atomic_inc(&dev_state->count);
spin_unlock_irqrestore(&state_lock, flags);
@ -528,15 +518,16 @@ static int ppr_notifier(struct notifier_block *nb, unsigned long e, void *data)
unsigned long flags;
struct fault *fault;
bool finish;
u16 tag, devid;
u16 tag, devid, seg_id;
int ret;
iommu_fault = data;
tag = iommu_fault->tag & 0x1ff;
finish = (iommu_fault->tag >> 9) & 1;
devid = iommu_fault->device_id;
pdev = pci_get_domain_bus_and_slot(0, PCI_BUS_NUM(devid),
seg_id = PCI_SBDF_TO_SEGID(iommu_fault->sbdf);
devid = PCI_SBDF_TO_DEVID(iommu_fault->sbdf);
pdev = pci_get_domain_bus_and_slot(seg_id, PCI_BUS_NUM(devid),
devid & 0xff);
if (!pdev)
return -ENODEV;
@ -550,7 +541,7 @@ static int ppr_notifier(struct notifier_block *nb, unsigned long e, void *data)
goto out;
}
dev_state = get_device_state(iommu_fault->device_id);
dev_state = get_device_state(iommu_fault->sbdf);
if (dev_state == NULL)
goto out;
@ -609,7 +600,7 @@ int amd_iommu_bind_pasid(struct pci_dev *pdev, u32 pasid,
struct pasid_state *pasid_state;
struct device_state *dev_state;
struct mm_struct *mm;
u16 devid;
u32 sbdf;
int ret;
might_sleep();
@ -617,8 +608,8 @@ int amd_iommu_bind_pasid(struct pci_dev *pdev, u32 pasid,
if (!amd_iommu_v2_supported())
return -ENODEV;
devid = device_id(pdev);
dev_state = get_device_state(devid);
sbdf = get_pci_sbdf_id(pdev);
dev_state = get_device_state(sbdf);
if (dev_state == NULL)
return -EINVAL;
@ -692,15 +683,15 @@ void amd_iommu_unbind_pasid(struct pci_dev *pdev, u32 pasid)
{
struct pasid_state *pasid_state;
struct device_state *dev_state;
u16 devid;
u32 sbdf;
might_sleep();
if (!amd_iommu_v2_supported())
return;
devid = device_id(pdev);
dev_state = get_device_state(devid);
sbdf = get_pci_sbdf_id(pdev);
dev_state = get_device_state(sbdf);
if (dev_state == NULL)
return;
@ -742,7 +733,7 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
struct iommu_group *group;
unsigned long flags;
int ret, tmp;
u16 devid;
u32 sbdf;
might_sleep();
@ -759,7 +750,7 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
if (pasids <= 0 || pasids > (PASID_MASK + 1))
return -EINVAL;
devid = device_id(pdev);
sbdf = get_pci_sbdf_id(pdev);
dev_state = kzalloc(sizeof(*dev_state), GFP_KERNEL);
if (dev_state == NULL)
@ -768,7 +759,7 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
spin_lock_init(&dev_state->lock);
init_waitqueue_head(&dev_state->wq);
dev_state->pdev = pdev;
dev_state->devid = devid;
dev_state->sbdf = sbdf;
tmp = pasids;
for (dev_state->pasid_levels = 0; (tmp - 1) & ~0x1ff; tmp >>= 9)
@ -806,7 +797,7 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
spin_lock_irqsave(&state_lock, flags);
if (__get_device_state(devid) != NULL) {
if (__get_device_state(sbdf) != NULL) {
spin_unlock_irqrestore(&state_lock, flags);
ret = -EBUSY;
goto out_free_domain;
@ -838,16 +829,16 @@ void amd_iommu_free_device(struct pci_dev *pdev)
{
struct device_state *dev_state;
unsigned long flags;
u16 devid;
u32 sbdf;
if (!amd_iommu_v2_supported())
return;
devid = device_id(pdev);
sbdf = get_pci_sbdf_id(pdev);
spin_lock_irqsave(&state_lock, flags);
dev_state = __get_device_state(devid);
dev_state = __get_device_state(sbdf);
if (dev_state == NULL) {
spin_unlock_irqrestore(&state_lock, flags);
return;
@ -867,18 +858,18 @@ int amd_iommu_set_invalid_ppr_cb(struct pci_dev *pdev,
{
struct device_state *dev_state;
unsigned long flags;
u16 devid;
u32 sbdf;
int ret;
if (!amd_iommu_v2_supported())
return -ENODEV;
devid = device_id(pdev);
sbdf = get_pci_sbdf_id(pdev);
spin_lock_irqsave(&state_lock, flags);
ret = -EINVAL;
dev_state = __get_device_state(devid);
dev_state = __get_device_state(sbdf);
if (dev_state == NULL)
goto out_unlock;
@ -898,18 +889,18 @@ int amd_iommu_set_invalidate_ctx_cb(struct pci_dev *pdev,
{
struct device_state *dev_state;
unsigned long flags;
u16 devid;
u32 sbdf;
int ret;
if (!amd_iommu_v2_supported())
return -ENODEV;
devid = device_id(pdev);
sbdf = get_pci_sbdf_id(pdev);
spin_lock_irqsave(&state_lock, flags);
ret = -EINVAL;
dev_state = __get_device_state(devid);
dev_state = __get_device_state(sbdf);
if (dev_state == NULL)
goto out_unlock;


@ -15,7 +15,7 @@
struct ivrs_quirk_entry {
u8 id;
u16 devid;
u32 devid;
};
enum {
@ -49,7 +49,7 @@ static int __init ivrs_ioapic_quirk_cb(const struct dmi_system_id *d)
const struct ivrs_quirk_entry *i;
for (i = d->driver_data; i->id != 0 && i->devid != 0; i++)
add_special_device(IVHD_SPECIAL_IOAPIC, i->id, (u16 *)&i->devid, 0);
add_special_device(IVHD_SPECIAL_IOAPIC, i->id, (u32 *)&i->devid, 0);
return 0;
}


@ -564,9 +564,6 @@ static void apple_dart_release_device(struct device *dev)
{
struct apple_dart_master_cfg *cfg = dev_iommu_priv_get(dev);
if (!cfg)
return;
dev_iommu_priv_set(dev, NULL);
kfree(cfg);
}
@ -771,7 +768,6 @@ static const struct iommu_ops apple_dart_iommu_ops = {
.of_xlate = apple_dart_of_xlate,
.def_domain_type = apple_dart_def_domain_type,
.get_resv_regions = apple_dart_get_resv_regions,
.put_resv_regions = generic_iommu_put_resv_regions,
.pgsize_bitmap = -1UL, /* Restricted during dart probe */
.owner = THIS_MODULE,
.default_domain_ops = &(const struct iommu_domain_ops) {


@ -1380,12 +1380,21 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
}
static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent)
static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent, bool force)
{
unsigned int i;
u64 val = STRTAB_STE_0_V;
if (disable_bypass && !force)
val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
else
val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);
for (i = 0; i < nent; ++i) {
arm_smmu_write_strtab_ent(NULL, -1, strtab);
strtab[0] = cpu_to_le64(val);
strtab[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
STRTAB_STE_1_SHCFG_INCOMING));
strtab[2] = 0;
strtab += STRTAB_STE_DWORDS;
}
}
@ -1413,7 +1422,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
return -ENOMEM;
}
arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT, false);
arm_smmu_write_strtab_l1_desc(strtab, desc);
return 0;
}
@ -2537,6 +2546,19 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
return sid < limit;
}
static int arm_smmu_init_sid_strtab(struct arm_smmu_device *smmu, u32 sid)
{
/* Check the SIDs are in range of the SMMU and our stream table */
if (!arm_smmu_sid_in_range(smmu, sid))
return -ERANGE;
/* Ensure l2 strtab is initialised */
if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
return arm_smmu_init_l2_strtab(smmu, sid);
return 0;
}
static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
struct arm_smmu_master *master)
{
@ -2560,20 +2582,9 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
new_stream->id = sid;
new_stream->master = master;
/*
* Check the SIDs are in range of the SMMU and our stream table
*/
if (!arm_smmu_sid_in_range(smmu, sid)) {
ret = -ERANGE;
ret = arm_smmu_init_sid_strtab(smmu, sid);
if (ret)
break;
}
/* Ensure l2 strtab is initialised */
if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
ret = arm_smmu_init_l2_strtab(smmu, sid);
if (ret)
break;
}
/* Insert into SID tree */
new_node = &(smmu->streams.rb_node);
@ -2691,20 +2702,14 @@ err_free_master:
static void arm_smmu_release_device(struct device *dev)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct arm_smmu_master *master;
struct arm_smmu_master *master = dev_iommu_priv_get(dev);
if (!fwspec || fwspec->ops != &arm_smmu_ops)
return;
master = dev_iommu_priv_get(dev);
if (WARN_ON(arm_smmu_master_sva_enabled(master)))
iopf_queue_remove_device(master->smmu->evtq.iopf, dev);
arm_smmu_detach_dev(master);
arm_smmu_disable_pasid(master);
arm_smmu_remove_master(master);
kfree(master);
iommu_fwspec_free(dev);
}
static struct iommu_group *arm_smmu_device_group(struct device *dev)
@ -2760,58 +2765,27 @@ static void arm_smmu_get_resv_regions(struct device *dev,
iommu_dma_get_resv_regions(dev, head);
}
static bool arm_smmu_dev_has_feature(struct device *dev,
enum iommu_dev_features feat)
{
struct arm_smmu_master *master = dev_iommu_priv_get(dev);
if (!master)
return false;
switch (feat) {
case IOMMU_DEV_FEAT_IOPF:
return arm_smmu_master_iopf_supported(master);
case IOMMU_DEV_FEAT_SVA:
return arm_smmu_master_sva_supported(master);
default:
return false;
}
}
static bool arm_smmu_dev_feature_enabled(struct device *dev,
enum iommu_dev_features feat)
{
struct arm_smmu_master *master = dev_iommu_priv_get(dev);
if (!master)
return false;
switch (feat) {
case IOMMU_DEV_FEAT_IOPF:
return master->iopf_enabled;
case IOMMU_DEV_FEAT_SVA:
return arm_smmu_master_sva_enabled(master);
default:
return false;
}
}
static int arm_smmu_dev_enable_feature(struct device *dev,
enum iommu_dev_features feat)
{
struct arm_smmu_master *master = dev_iommu_priv_get(dev);
if (!arm_smmu_dev_has_feature(dev, feat))
if (!master)
return -ENODEV;
if (arm_smmu_dev_feature_enabled(dev, feat))
return -EBUSY;
switch (feat) {
case IOMMU_DEV_FEAT_IOPF:
if (!arm_smmu_master_iopf_supported(master))
return -EINVAL;
if (master->iopf_enabled)
return -EBUSY;
master->iopf_enabled = true;
return 0;
case IOMMU_DEV_FEAT_SVA:
if (!arm_smmu_master_sva_supported(master))
return -EINVAL;
if (arm_smmu_master_sva_enabled(master))
return -EBUSY;
return arm_smmu_master_enable_sva(master);
default:
return -EINVAL;
@ -2823,16 +2797,20 @@ static int arm_smmu_dev_disable_feature(struct device *dev,
{
struct arm_smmu_master *master = dev_iommu_priv_get(dev);
if (!arm_smmu_dev_feature_enabled(dev, feat))
if (!master)
return -EINVAL;
switch (feat) {
case IOMMU_DEV_FEAT_IOPF:
if (!master->iopf_enabled)
return -EINVAL;
if (master->sva_enabled)
return -EBUSY;
master->iopf_enabled = false;
return 0;
case IOMMU_DEV_FEAT_SVA:
if (!arm_smmu_master_sva_enabled(master))
return -EINVAL;
return arm_smmu_master_disable_sva(master);
default:
return -EINVAL;
@ -2847,9 +2825,6 @@ static struct iommu_ops arm_smmu_ops = {
.device_group = arm_smmu_device_group,
.of_xlate = arm_smmu_of_xlate,
.get_resv_regions = arm_smmu_get_resv_regions,
.put_resv_regions = generic_iommu_put_resv_regions,
.dev_has_feat = arm_smmu_dev_has_feature,
.dev_feat_enabled = arm_smmu_dev_feature_enabled,
.dev_enable_feat = arm_smmu_dev_enable_feature,
.dev_disable_feat = arm_smmu_dev_disable_feature,
.sva_bind = arm_smmu_sva_bind,
@ -3049,7 +3024,7 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits);
cfg->strtab_base_cfg = reg;
arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents);
arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents, false);
return 0;
}
@ -3743,6 +3718,36 @@ static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
return devm_ioremap_resource(dev, &res);
}
static void arm_smmu_rmr_install_bypass_ste(struct arm_smmu_device *smmu)
{
struct list_head rmr_list;
struct iommu_resv_region *e;
INIT_LIST_HEAD(&rmr_list);
iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list);
list_for_each_entry(e, &rmr_list, list) {
__le64 *step;
struct iommu_iort_rmr_data *rmr;
int ret, i;
rmr = container_of(e, struct iommu_iort_rmr_data, rr);
for (i = 0; i < rmr->num_sids; i++) {
ret = arm_smmu_init_sid_strtab(smmu, rmr->sids[i]);
if (ret) {
dev_err(smmu->dev, "RMR SID(0x%x) bypass failed\n",
rmr->sids[i]);
continue;
}
step = arm_smmu_get_step_for_sid(smmu, rmr->sids[i]);
arm_smmu_init_bypass_stes(step, 1, true);
}
}
iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list);
}
static int arm_smmu_device_probe(struct platform_device *pdev)
{
int irq, ret;
@ -3826,6 +3831,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
/* Record our private device structure */
platform_set_drvdata(pdev, smmu);
/* Check for RMRs and install bypass STEs if any */
arm_smmu_rmr_install_bypass_ste(smmu);
/* Reset the device */
ret = arm_smmu_device_reset(smmu, bypass);
if (ret)


@ -3,3 +3,4 @@ obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
obj-$(CONFIG_ARM_SMMU) += arm_smmu.o
arm_smmu-objs += arm-smmu.o arm-smmu-impl.o arm-smmu-nvidia.o
arm_smmu-$(CONFIG_ARM_SMMU_QCOM) += arm-smmu-qcom.o
arm_smmu-$(CONFIG_ARM_SMMU_QCOM_DEBUG) += arm-smmu-qcom-debug.o


@ -0,0 +1,142 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/of_device.h>
#include <linux/qcom_scm.h>
#include <linux/ratelimit.h>
#include "arm-smmu.h"
#include "arm-smmu-qcom.h"
enum qcom_smmu_impl_reg_offset {
QCOM_SMMU_TBU_PWR_STATUS,
QCOM_SMMU_STATS_SYNC_INV_TBU_ACK,
QCOM_SMMU_MMU2QSS_AND_SAFE_WAIT_CNTR,
};
struct qcom_smmu_config {
const u32 *reg_offset;
};
void qcom_smmu_tlb_sync_debug(struct arm_smmu_device *smmu)
{
int ret;
u32 tbu_pwr_status, sync_inv_ack, sync_inv_progress;
struct qcom_smmu *qsmmu = container_of(smmu, struct qcom_smmu, smmu);
const struct qcom_smmu_config *cfg;
static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
if (__ratelimit(&rs)) {
dev_err(smmu->dev, "TLB sync timed out -- SMMU may be deadlocked\n");
cfg = qsmmu->cfg;
if (!cfg)
return;
ret = qcom_scm_io_readl(smmu->ioaddr + cfg->reg_offset[QCOM_SMMU_TBU_PWR_STATUS],
&tbu_pwr_status);
if (ret)
dev_err(smmu->dev,
"Failed to read TBU power status: %d\n", ret);
ret = qcom_scm_io_readl(smmu->ioaddr + cfg->reg_offset[QCOM_SMMU_STATS_SYNC_INV_TBU_ACK],
&sync_inv_ack);
if (ret)
dev_err(smmu->dev,
"Failed to read TBU sync/inv ack status: %d\n", ret);
ret = qcom_scm_io_readl(smmu->ioaddr + cfg->reg_offset[QCOM_SMMU_MMU2QSS_AND_SAFE_WAIT_CNTR],
&sync_inv_progress);
if (ret)
dev_err(smmu->dev,
"Failed to read TCU syn/inv progress: %d\n", ret);
dev_err(smmu->dev,
"TBU: power_status %#x sync_inv_ack %#x sync_inv_progress %#x\n",
tbu_pwr_status, sync_inv_ack, sync_inv_progress);
}
}
/* Implementation Defined Register Space 0 register offsets */
static const u32 qcom_smmu_impl0_reg_offset[] = {
[QCOM_SMMU_TBU_PWR_STATUS] = 0x2204,
[QCOM_SMMU_STATS_SYNC_INV_TBU_ACK] = 0x25dc,
[QCOM_SMMU_MMU2QSS_AND_SAFE_WAIT_CNTR] = 0x2670,
};
static const struct qcom_smmu_config qcm2290_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sc7180_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sc7280_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sc8180x_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sc8280xp_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sm6125_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sm6350_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sm8150_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sm8250_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sm8350_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct qcom_smmu_config sm8450_smmu_cfg = {
.reg_offset = qcom_smmu_impl0_reg_offset,
};
static const struct of_device_id __maybe_unused qcom_smmu_impl_debug_match[] = {
{ .compatible = "qcom,msm8998-smmu-v2" },
{ .compatible = "qcom,qcm2290-smmu-500", .data = &qcm2290_smmu_cfg },
{ .compatible = "qcom,sc7180-smmu-500", .data = &sc7180_smmu_cfg },
{ .compatible = "qcom,sc7280-smmu-500", .data = &sc7280_smmu_cfg},
{ .compatible = "qcom,sc8180x-smmu-500", .data = &sc8180x_smmu_cfg },
{ .compatible = "qcom,sc8280xp-smmu-500", .data = &sc8280xp_smmu_cfg },
{ .compatible = "qcom,sdm630-smmu-v2" },
{ .compatible = "qcom,sdm845-smmu-500" },
{ .compatible = "qcom,sm6125-smmu-500", .data = &sm6125_smmu_cfg},
{ .compatible = "qcom,sm6350-smmu-500", .data = &sm6350_smmu_cfg},
{ .compatible = "qcom,sm8150-smmu-500", .data = &sm8150_smmu_cfg },
{ .compatible = "qcom,sm8250-smmu-500", .data = &sm8250_smmu_cfg },
{ .compatible = "qcom,sm8350-smmu-500", .data = &sm8350_smmu_cfg },
{ .compatible = "qcom,sm8450-smmu-500", .data = &sm8450_smmu_cfg },
{ }
};
const void *qcom_smmu_impl_data(struct arm_smmu_device *smmu)
{
const struct of_device_id *match;
const struct device_node *np = smmu->dev->of_node;
match = of_match_node(qcom_smmu_impl_debug_match, np);
if (!match)
return NULL;
return match->data;
}


@ -5,23 +5,40 @@
#include <linux/acpi.h>
#include <linux/adreno-smmu-priv.h>
#include <linux/delay.h>
#include <linux/of_device.h>
#include <linux/qcom_scm.h>
#include "arm-smmu.h"
#include "arm-smmu-qcom.h"
struct qcom_smmu {
struct arm_smmu_device smmu;
bool bypass_quirk;
u8 bypass_cbndx;
u32 stall_enabled;
};
#define QCOM_DUMMY_VAL -1
static struct qcom_smmu *to_qcom_smmu(struct arm_smmu_device *smmu)
{
return container_of(smmu, struct qcom_smmu, smmu);
}
static void qcom_smmu_tlb_sync(struct arm_smmu_device *smmu, int page,
int sync, int status)
{
unsigned int spin_cnt, delay;
u32 reg;
arm_smmu_writel(smmu, page, sync, QCOM_DUMMY_VAL);
for (delay = 1; delay < TLB_LOOP_TIMEOUT; delay *= 2) {
for (spin_cnt = TLB_SPIN_COUNT; spin_cnt > 0; spin_cnt--) {
reg = arm_smmu_readl(smmu, page, status);
if (!(reg & ARM_SMMU_sTLBGSTATUS_GSACTIVE))
return;
cpu_relax();
}
udelay(delay);
}
qcom_smmu_tlb_sync_debug(smmu);
}
static void qcom_adreno_smmu_write_sctlr(struct arm_smmu_device *smmu, int idx,
u32 reg)
{
@ -233,6 +250,7 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
{ .compatible = "qcom,sc7280-mdss" },
{ .compatible = "qcom,sc7280-mss-pil" },
{ .compatible = "qcom,sc8180x-mdss" },
{ .compatible = "qcom,sm8250-mdss" },
{ .compatible = "qcom,sdm845-mdss" },
{ .compatible = "qcom,sdm845-mss-pil" },
{ }
@ -374,6 +392,7 @@ static const struct arm_smmu_impl qcom_smmu_impl = {
.def_domain_type = qcom_smmu_def_domain_type,
.reset = qcom_smmu500_reset,
.write_s2cr = qcom_smmu_write_s2cr,
.tlb_sync = qcom_smmu_tlb_sync,
};
static const struct arm_smmu_impl qcom_adreno_smmu_impl = {
@ -382,6 +401,7 @@ static const struct arm_smmu_impl qcom_adreno_smmu_impl = {
.reset = qcom_smmu500_reset,
.alloc_context_bank = qcom_adreno_smmu_alloc_context_bank,
.write_sctlr = qcom_adreno_smmu_write_sctlr,
.tlb_sync = qcom_smmu_tlb_sync,
};
static struct arm_smmu_device *qcom_smmu_create(struct arm_smmu_device *smmu,
@ -398,6 +418,7 @@ static struct arm_smmu_device *qcom_smmu_create(struct arm_smmu_device *smmu,
return ERR_PTR(-ENOMEM);
qsmmu->smmu.impl = impl;
qsmmu->cfg = qcom_smmu_impl_data(smmu);
return &qsmmu->smmu;
}
@ -413,6 +434,7 @@ static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
{ .compatible = "qcom,sdm845-smmu-500" },
{ .compatible = "qcom,sm6125-smmu-500" },
{ .compatible = "qcom,sm6350-smmu-500" },
{ .compatible = "qcom,sm6375-smmu-500" },
{ .compatible = "qcom,sm8150-smmu-500" },
{ .compatible = "qcom,sm8250-smmu-500" },
{ .compatible = "qcom,sm8350-smmu-500" },


@ -0,0 +1,28 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ARM_SMMU_QCOM_H
#define _ARM_SMMU_QCOM_H
struct qcom_smmu {
struct arm_smmu_device smmu;
const struct qcom_smmu_config *cfg;
bool bypass_quirk;
u8 bypass_cbndx;
u32 stall_enabled;
};
#ifdef CONFIG_ARM_SMMU_QCOM_DEBUG
void qcom_smmu_tlb_sync_debug(struct arm_smmu_device *smmu);
const void *qcom_smmu_impl_data(struct arm_smmu_device *smmu);
#else
static inline void qcom_smmu_tlb_sync_debug(struct arm_smmu_device *smmu) { }
static inline const void *qcom_smmu_impl_data(struct arm_smmu_device *smmu)
{
return NULL;
}
#endif
#endif /* _ARM_SMMU_QCOM_H */


@ -1432,27 +1432,19 @@ out_free:
static void arm_smmu_release_device(struct device *dev)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct arm_smmu_master_cfg *cfg;
struct arm_smmu_device *smmu;
struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);
int ret;
if (!fwspec || fwspec->ops != &arm_smmu_ops)
return;
cfg = dev_iommu_priv_get(dev);
smmu = cfg->smmu;
ret = arm_smmu_rpm_get(smmu);
ret = arm_smmu_rpm_get(cfg->smmu);
if (ret < 0)
return;
arm_smmu_master_free_smes(cfg, fwspec);
arm_smmu_rpm_put(smmu);
arm_smmu_rpm_put(cfg->smmu);
dev_iommu_priv_set(dev, NULL);
kfree(cfg);
iommu_fwspec_free(dev);
}
static void arm_smmu_probe_finalize(struct device *dev)
@ -1592,7 +1584,6 @@ static struct iommu_ops arm_smmu_ops = {
.device_group = arm_smmu_device_group,
.of_xlate = arm_smmu_of_xlate,
.get_resv_regions = arm_smmu_get_resv_regions,
.put_resv_regions = generic_iommu_put_resv_regions,
.def_domain_type = arm_smmu_def_domain_type,
.pgsize_bitmap = -1UL, /* Restricted during device attach */
.owner = THIS_MODULE,
@ -2071,10 +2062,57 @@ err_reset_platform_ops: __maybe_unused;
return err;
}
static void arm_smmu_rmr_install_bypass_smr(struct arm_smmu_device *smmu)
{
struct list_head rmr_list;
struct iommu_resv_region *e;
int idx, cnt = 0;
u32 reg;
INIT_LIST_HEAD(&rmr_list);
iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list);
/*
* Rather than trying to look at existing mappings that
* are set up by the firmware and then invalidate the ones
* that do not have matching RMR entries, just disable the
* SMMU until it gets enabled again in the reset routine.
*/
reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0);
reg |= ARM_SMMU_sCR0_CLIENTPD;
arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, reg);
list_for_each_entry(e, &rmr_list, list) {
struct iommu_iort_rmr_data *rmr;
int i;
rmr = container_of(e, struct iommu_iort_rmr_data, rr);
for (i = 0; i < rmr->num_sids; i++) {
idx = arm_smmu_find_sme(smmu, rmr->sids[i], ~0);
if (idx < 0)
continue;
if (smmu->s2crs[idx].count == 0) {
smmu->smrs[idx].id = rmr->sids[i];
smmu->smrs[idx].mask = 0;
smmu->smrs[idx].valid = true;
}
smmu->s2crs[idx].count++;
smmu->s2crs[idx].type = S2CR_TYPE_BYPASS;
smmu->s2crs[idx].privcfg = S2CR_PRIVCFG_DEFAULT;
cnt++;
}
}
dev_notice(smmu->dev, "\tpreserved %d boot mapping%s\n", cnt,
cnt == 1 ? "" : "s");
iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list);
}
static int arm_smmu_device_probe(struct platform_device *pdev)
{
struct resource *res;
resource_size_t ioaddr;
struct arm_smmu_device *smmu;
struct device *dev = &pdev->dev;
int num_irqs, i, err;
@ -2098,7 +2136,8 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
smmu->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
if (IS_ERR(smmu->base))
return PTR_ERR(smmu->base);
ioaddr = res->start;
smmu->ioaddr = res->start;
/*
* The resource size should effectively match the value of SMMU_TOP;
* stash that temporarily until we know PAGESIZE to validate it with.
@ -2178,7 +2217,7 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
}
err = iommu_device_sysfs_add(&smmu->iommu, smmu->dev, NULL,
"smmu.%pa", &ioaddr);
"smmu.%pa", &smmu->ioaddr);
if (err) {
dev_err(dev, "Failed to register iommu in sysfs\n");
return err;
@ -2191,6 +2230,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
}
platform_set_drvdata(pdev, smmu);
/* Check for RMRs and install bypass SMRs if any */
arm_smmu_rmr_install_bypass_smr(smmu);
arm_smmu_device_reset(smmu);
arm_smmu_test_smr_masks(smmu);


@ -278,6 +278,7 @@ struct arm_smmu_device {
struct device *dev;
void __iomem *base;
phys_addr_t ioaddr;
unsigned int numpage;
unsigned int pgshift;


@ -532,16 +532,6 @@ static struct iommu_device *qcom_iommu_probe_device(struct device *dev)
return &qcom_iommu->iommu;
}
static void qcom_iommu_release_device(struct device *dev)
{
struct qcom_iommu_dev *qcom_iommu = to_iommu(dev);
if (!qcom_iommu)
return;
iommu_fwspec_free(dev);
}
static int qcom_iommu_of_xlate(struct device *dev, struct of_phandle_args *args)
{
struct qcom_iommu_dev *qcom_iommu;
@ -591,7 +581,6 @@ static const struct iommu_ops qcom_iommu_ops = {
.capable = qcom_iommu_capable,
.domain_alloc = qcom_iommu_domain_alloc,
.probe_device = qcom_iommu_probe_device,
.release_device = qcom_iommu_release_device,
.device_group = generic_device_group,
.of_xlate = qcom_iommu_of_xlate,
.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
@ -750,9 +739,12 @@ static bool qcom_iommu_has_secure_context(struct qcom_iommu_dev *qcom_iommu)
{
struct device_node *child;
for_each_child_of_node(qcom_iommu->dev->of_node, child)
if (of_device_is_compatible(child, "qcom,msm-iommu-v1-sec"))
for_each_child_of_node(qcom_iommu->dev->of_node, child) {
if (of_device_is_compatible(child, "qcom,msm-iommu-v1-sec")) {
of_node_put(child);
return true;
}
}
return false;
}
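
for_each_child_of_node() holds a reference on the current child and drops it when the loop advances, so returning from inside the loop leaks that reference unless the caller drops it first; that is exactly what the added of_node_put() does. The general shape of the fix, as a standalone sketch (helper name hypothetical):

	#include <linux/of.h>

	/* True if @parent has a child compatible with @compat, releasing
	 * the iterator's reference on the early exit. */
	static bool example_has_compatible_child(struct device_node *parent,
						 const char *compat)
	{
		struct device_node *child;

		for_each_child_of_node(parent, child) {
			if (of_device_is_compatible(child, compat)) {
				of_node_put(child); /* balance the implicit get */
				return true;
			}
		}
		return false;
	}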


@ -64,10 +64,11 @@ struct iommu_dma_cookie {
/* Domain for flush queue callback; NULL if flush queue not in use */
struct iommu_domain *fq_domain;
struct mutex mutex;
};
static DEFINE_STATIC_KEY_FALSE(iommu_deferred_attach_enabled);
bool iommu_dma_forcedac __read_mostly;
bool iommu_dma_forcedac __read_mostly = !IS_ENABLED(CONFIG_IOMMU_DMA_PCI_SAC);
static int __init iommu_dma_forcedac_setup(char *str)
{
@ -310,6 +311,7 @@ int iommu_get_dma_cookie(struct iommu_domain *domain)
if (!domain->iova_cookie)
return -ENOMEM;
mutex_init(&domain->iova_cookie->mutex);
return 0;
}
@ -385,7 +387,7 @@ void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
{
if (!is_of_node(dev_iommu_fwspec_get(dev)->iommu_fwnode))
iort_iommu_msi_get_resv_regions(dev, list);
iort_iommu_get_resv_regions(dev, list);
}
EXPORT_SYMBOL(iommu_dma_get_resv_regions);
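
The flipped iommu_dma_forcedac default earlier in this file still honors the iommu.forcedac= boot parameter; its parser, whose signature appears above, follows the usual kstrtobool early_param pattern. Sketched here from the visible signature, with the body and message treated as an assumption:

	static int __init iommu_dma_forcedac_setup(char *str)
	{
		int ret = kstrtobool(str, &iommu_dma_forcedac);

		if (!ret && iommu_dma_forcedac)
			pr_info("Forcing DAC for PCI devices\n");
		return ret;
	}
	early_param("iommu.forcedac", iommu_dma_forcedac_setup);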
@ -560,26 +562,33 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
}
/* start_pfn is always nonzero for an already-initialised domain */
mutex_lock(&cookie->mutex);
if (iovad->start_pfn) {
if (1UL << order != iovad->granule ||
base_pfn != iovad->start_pfn) {
pr_warn("Incompatible range for DMA domain\n");
return -EFAULT;
ret = -EFAULT;
goto done_unlock;
}
return 0;
ret = 0;
goto done_unlock;
}
init_iova_domain(iovad, 1UL << order, base_pfn);
ret = iova_domain_init_rcaches(iovad);
if (ret)
return ret;
goto done_unlock;
/* If the FQ fails we can simply fall back to strict mode */
if (domain->type == IOMMU_DOMAIN_DMA_FQ && iommu_dma_init_fq(domain))
domain->type = IOMMU_DOMAIN_DMA;
return iova_reserve_iommu_regions(dev, domain);
ret = iova_reserve_iommu_regions(dev, domain);
done_unlock:
mutex_unlock(&cookie->mutex);
return ret;
}
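
The cookie mutex turns iommu_dma_init_domain() into a check-or-initialise-once section with a single unlock site, the usual kernel idiom for keeping lock balance obvious. In the abstract (types and helpers hypothetical; the mutex is assumed to be mutex_init()ed at allocation, as the cookie hunk above does):

	#include <linux/mutex.h>

	struct example_cookie {
		struct mutex mutex;
		bool initialised;
	};

	/* Hypothetical one-time initialisation step. */
	static int example_do_init(struct example_cookie *cookie)
	{
		return 0;
	}

	static int example_init_once(struct example_cookie *cookie)
	{
		int ret = 0;

		mutex_lock(&cookie->mutex);
		if (cookie->initialised)
			goto done_unlock;	/* repeat call: nothing to do */
		ret = example_do_init(cookie);
		if (!ret)
			cookie->initialised = true;
	done_unlock:
		mutex_unlock(&cookie->mutex);
		return ret;
	}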
/**


@ -135,6 +135,11 @@ static u32 lv2ent_offset(sysmmu_iova_t iova)
#define CFG_SYSSEL (1 << 22) /* System MMU 3.2 only */
#define CFG_FLPDCACHE (1 << 20) /* System MMU 3.2+ only */
#define CTRL_VM_ENABLE BIT(0)
#define CTRL_VM_FAULT_MODE_STALL BIT(3)
#define CAPA0_CAPA1_EXIST BIT(11)
#define CAPA1_VCR_ENABLED BIT(14)
/* common registers */
#define REG_MMU_CTRL 0x000
#define REG_MMU_CFG 0x004
@ -148,29 +153,20 @@ static u32 lv2ent_offset(sysmmu_iova_t iova)
#define MAKE_MMU_VER(maj, min) ((((maj) & 0xF) << 7) | ((min) & 0x7F))
/* v1.x - v3.x registers */
#define REG_MMU_FLUSH 0x00C
#define REG_MMU_FLUSH_ENTRY 0x010
#define REG_PT_BASE_ADDR 0x014
#define REG_INT_STATUS 0x018
#define REG_INT_CLEAR 0x01C
#define REG_PAGE_FAULT_ADDR 0x024
#define REG_AW_FAULT_ADDR 0x028
#define REG_AR_FAULT_ADDR 0x02C
#define REG_DEFAULT_SLAVE_ADDR 0x030
/* v5.x registers */
#define REG_V5_PT_BASE_PFN 0x00C
#define REG_V5_MMU_FLUSH_ALL 0x010
#define REG_V5_MMU_FLUSH_ENTRY 0x014
#define REG_V5_MMU_FLUSH_RANGE 0x018
#define REG_V5_MMU_FLUSH_START 0x020
#define REG_V5_MMU_FLUSH_END 0x024
#define REG_V5_INT_STATUS 0x060
#define REG_V5_INT_CLEAR 0x064
#define REG_V5_FAULT_AR_VA 0x070
#define REG_V5_FAULT_AW_VA 0x080
/* v7.x registers */
#define REG_V7_CAPA0 0x870
#define REG_V7_CAPA1 0x874
#define REG_V7_CTRL_VM 0x8000
#define has_sysmmu(dev) (dev_iommu_priv_get(dev) != NULL)
static struct device *dma_dev;
@ -250,6 +246,21 @@ struct exynos_iommu_domain {
struct iommu_domain domain; /* generic domain data structure */
};
/*
* SysMMU version specific data. Contains offsets for the registers which can
* be found in different SysMMU variants, but have different offset values.
*/
struct sysmmu_variant {
u32 pt_base; /* page table base address (physical) */
u32 flush_all; /* invalidate all TLB entries */
u32 flush_entry; /* invalidate specific TLB entry */
u32 flush_range; /* invalidate TLB entries in specified range */
u32 flush_start; /* start address of range invalidation */
u32 flush_end; /* end address of range invalidation */
u32 int_status; /* interrupt status information */
u32 int_clear; /* clear the interrupt */
};
/*
* This structure holds all data of a single SYSMMU controller; this includes
* hw resources like registers and clocks, pointers and list nodes to connect
@ -274,6 +285,45 @@ struct sysmmu_drvdata {
unsigned int version; /* our version */
struct iommu_device iommu; /* IOMMU core handle */
const struct sysmmu_variant *variant; /* version specific data */
/* v7 fields */
bool has_vcr; /* virtual machine control register */
};
#define SYSMMU_REG(data, reg) ((data)->sfrbase + (data)->variant->reg)
/* SysMMU v1..v3 */
static const struct sysmmu_variant sysmmu_v1_variant = {
.flush_all = 0x0c,
.flush_entry = 0x10,
.pt_base = 0x14,
.int_status = 0x18,
.int_clear = 0x1c,
};
/* SysMMU v5 and v7 (non-VM capable) */
static const struct sysmmu_variant sysmmu_v5_variant = {
.pt_base = 0x0c,
.flush_all = 0x10,
.flush_entry = 0x14,
.flush_range = 0x18,
.flush_start = 0x20,
.flush_end = 0x24,
.int_status = 0x60,
.int_clear = 0x64,
};
/* SysMMU v7: VM capable register set */
static const struct sysmmu_variant sysmmu_v7_vm_variant = {
.pt_base = 0x800c,
.flush_all = 0x8010,
.flush_entry = 0x8014,
.flush_range = 0x8018,
.flush_start = 0x8020,
.flush_end = 0x8024,
.int_status = 0x60,
.int_clear = 0x64,
};
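
With these tables in place, every version-specific register access reduces to one SYSMMU_REG() lookup that adds the variant's per-register offset to the instance's register base, so call sites need no version branches. For example, a flush-all write is simply:

	/* One line serves the v1, v5 and v7-VM layouts alike: the variant
	 * table chosen at probe time supplies this instance's offset. */
	writel(0x1, SYSMMU_REG(data, flush_all));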
static struct exynos_iommu_domain *to_exynos_domain(struct iommu_domain *dom)
@ -304,10 +354,7 @@ static bool sysmmu_block(struct sysmmu_drvdata *data)
static void __sysmmu_tlb_invalidate(struct sysmmu_drvdata *data)
{
if (MMU_MAJ_VER(data->version) < 5)
writel(0x1, data->sfrbase + REG_MMU_FLUSH);
else
writel(0x1, data->sfrbase + REG_V5_MMU_FLUSH_ALL);
writel(0x1, SYSMMU_REG(data, flush_all));
}
static void __sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
@ -315,34 +362,30 @@ static void __sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
{
unsigned int i;
if (MMU_MAJ_VER(data->version) < 5) {
if (MMU_MAJ_VER(data->version) < 5 || num_inv == 1) {
for (i = 0; i < num_inv; i++) {
writel((iova & SPAGE_MASK) | 1,
data->sfrbase + REG_MMU_FLUSH_ENTRY);
SYSMMU_REG(data, flush_entry));
iova += SPAGE_SIZE;
}
} else {
if (num_inv == 1) {
writel((iova & SPAGE_MASK) | 1,
data->sfrbase + REG_V5_MMU_FLUSH_ENTRY);
} else {
writel((iova & SPAGE_MASK),
data->sfrbase + REG_V5_MMU_FLUSH_START);
writel((iova & SPAGE_MASK) + (num_inv - 1) * SPAGE_SIZE,
data->sfrbase + REG_V5_MMU_FLUSH_END);
writel(1, data->sfrbase + REG_V5_MMU_FLUSH_RANGE);
}
writel(iova & SPAGE_MASK, SYSMMU_REG(data, flush_start));
writel((iova & SPAGE_MASK) + (num_inv - 1) * SPAGE_SIZE,
SYSMMU_REG(data, flush_end));
writel(0x1, SYSMMU_REG(data, flush_range));
}
}
static void __sysmmu_set_ptbase(struct sysmmu_drvdata *data, phys_addr_t pgd)
{
if (MMU_MAJ_VER(data->version) < 5)
writel(pgd, data->sfrbase + REG_PT_BASE_ADDR);
else
writel(pgd >> PAGE_SHIFT,
data->sfrbase + REG_V5_PT_BASE_PFN);
u32 pt_base;
if (MMU_MAJ_VER(data->version) < 5)
pt_base = pgd;
else
pt_base = pgd >> SPAGE_ORDER;
writel(pt_base, SYSMMU_REG(data, pt_base));
__sysmmu_tlb_invalidate(data);
}
@ -362,6 +405,20 @@ static void __sysmmu_disable_clocks(struct sysmmu_drvdata *data)
clk_disable_unprepare(data->clk_master);
}
static bool __sysmmu_has_capa1(struct sysmmu_drvdata *data)
{
u32 capa0 = readl(data->sfrbase + REG_V7_CAPA0);
return capa0 & CAPA0_CAPA1_EXIST;
}
static void __sysmmu_get_vcr(struct sysmmu_drvdata *data)
{
u32 capa1 = readl(data->sfrbase + REG_V7_CAPA1);
data->has_vcr = capa1 & CAPA1_VCR_ENABLED;
}
static void __sysmmu_get_version(struct sysmmu_drvdata *data)
{
u32 ver;
@ -379,6 +436,19 @@ static void __sysmmu_get_version(struct sysmmu_drvdata *data)
dev_dbg(data->sysmmu, "hardware version: %d.%d\n",
MMU_MAJ_VER(data->version), MMU_MIN_VER(data->version));
if (MMU_MAJ_VER(data->version) < 5) {
data->variant = &sysmmu_v1_variant;
} else if (MMU_MAJ_VER(data->version) < 7) {
data->variant = &sysmmu_v5_variant;
} else {
if (__sysmmu_has_capa1(data))
__sysmmu_get_vcr(data);
if (data->has_vcr)
data->variant = &sysmmu_v7_vm_variant;
else
data->variant = &sysmmu_v5_variant;
}
__sysmmu_disable_clocks(data);
}
@ -406,19 +476,14 @@ static irqreturn_t exynos_sysmmu_irq(int irq, void *dev_id)
const struct sysmmu_fault_info *finfo;
unsigned int i, n, itype;
sysmmu_iova_t fault_addr;
unsigned short reg_status, reg_clear;
int ret = -ENOSYS;
WARN_ON(!data->active);
if (MMU_MAJ_VER(data->version) < 5) {
reg_status = REG_INT_STATUS;
reg_clear = REG_INT_CLEAR;
finfo = sysmmu_faults;
n = ARRAY_SIZE(sysmmu_faults);
} else {
reg_status = REG_V5_INT_STATUS;
reg_clear = REG_V5_INT_CLEAR;
finfo = sysmmu_v5_faults;
n = ARRAY_SIZE(sysmmu_v5_faults);
}
@ -427,7 +492,7 @@ static irqreturn_t exynos_sysmmu_irq(int irq, void *dev_id)
clk_enable(data->clk_master);
itype = __ffs(readl(data->sfrbase + reg_status));
itype = __ffs(readl(SYSMMU_REG(data, int_status)));
for (i = 0; i < n; i++, finfo++)
if (finfo->bit == itype)
break;
@ -444,7 +509,7 @@ static irqreturn_t exynos_sysmmu_irq(int irq, void *dev_id)
/* fault is not recovered by fault handler */
BUG_ON(ret != 0);
writel(1 << itype, data->sfrbase + reg_clear);
writel(1 << itype, SYSMMU_REG(data, int_clear));
sysmmu_unblock(data);
@ -486,6 +551,18 @@ static void __sysmmu_init_config(struct sysmmu_drvdata *data)
writel(cfg, data->sfrbase + REG_MMU_CFG);
}
static void __sysmmu_enable_vid(struct sysmmu_drvdata *data)
{
u32 ctrl;
if (MMU_MAJ_VER(data->version) < 7 || !data->has_vcr)
return;
ctrl = readl(data->sfrbase + REG_V7_CTRL_VM);
ctrl |= CTRL_VM_ENABLE | CTRL_VM_FAULT_MODE_STALL;
writel(ctrl, data->sfrbase + REG_V7_CTRL_VM);
}
static void __sysmmu_enable(struct sysmmu_drvdata *data)
{
unsigned long flags;
@ -496,6 +573,7 @@ static void __sysmmu_enable(struct sysmmu_drvdata *data)
writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL);
__sysmmu_init_config(data);
__sysmmu_set_ptbase(data, data->pgtable);
__sysmmu_enable_vid(data);
writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL);
data->active = true;
spin_unlock_irqrestore(&data->lock, flags);
@ -551,7 +629,7 @@ static void sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
* 64KB page can be one of 16 consecutive sets.
*/
if (MMU_MAJ_VER(data->version) == 2)
num_inv = min_t(unsigned int, size / PAGE_SIZE, 64);
num_inv = min_t(unsigned int, size / SPAGE_SIZE, 64);
if (sysmmu_block(data)) {
__sysmmu_tlb_invalidate_entry(data, iova, num_inv);
@ -623,6 +701,8 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
data->sysmmu = dev;
spin_lock_init(&data->lock);
__sysmmu_get_version(data);
ret = iommu_device_sysfs_add(&data->iommu, &pdev->dev, NULL,
dev_name(data->sysmmu));
if (ret)
@ -630,11 +710,10 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
ret = iommu_device_register(&data->iommu, &exynos_iommu_ops, dev);
if (ret)
return ret;
goto err_iommu_register;
platform_set_drvdata(pdev, data);
__sysmmu_get_version(data);
if (PG_ENT_SHIFT < 0) {
if (MMU_MAJ_VER(data->version) < 5) {
PG_ENT_SHIFT = SYSMMU_PG_ENT_SHIFT;
@ -647,6 +726,14 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
}
}
if (MMU_MAJ_VER(data->version) >= 5) {
ret = dma_set_mask(dev, DMA_BIT_MASK(36));
if (ret) {
dev_err(dev, "Unable to set DMA mask: %d\n", ret);
goto err_dma_set_mask;
}
}
/*
* use the first registered sysmmu device for performing
* dma mapping operations on iommu page tables (cpu cache flush)
@ -657,6 +744,12 @@ static int exynos_sysmmu_probe(struct platform_device *pdev)
pm_runtime_enable(dev);
return 0;
err_dma_set_mask:
iommu_device_unregister(&data->iommu);
err_iommu_register:
iommu_device_sysfs_remove(&data->iommu);
return ret;
}
static int __maybe_unused exynos_sysmmu_suspend(struct device *dev)
@ -1251,9 +1344,6 @@ static void exynos_iommu_release_device(struct device *dev)
struct exynos_iommu_owner *owner = dev_iommu_priv_get(dev);
struct sysmmu_drvdata *data;
if (!has_sysmmu(dev))
return;
if (owner->domain) {
struct iommu_group *group = iommu_group_get(dev);


@ -447,15 +447,10 @@ static struct iommu_device *fsl_pamu_probe_device(struct device *dev)
return &pamu_iommu;
}
static void fsl_pamu_release_device(struct device *dev)
{
}
static const struct iommu_ops fsl_pamu_ops = {
.capable = fsl_pamu_capable,
.domain_alloc = fsl_pamu_domain_alloc,
.probe_device = fsl_pamu_probe_device,
.release_device = fsl_pamu_release_device,
.device_group = fsl_pamu_device_group,
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = fsl_pamu_attach_device,


@ -10,7 +10,7 @@
#define pr_fmt(fmt) "DMAR: " fmt
#include <linux/intel-iommu.h>
#include "iommu.h"
#include "cap_audit.h"
static u64 intel_iommu_cap_sanity;


@ -10,11 +10,11 @@
#include <linux/debugfs.h>
#include <linux/dmar.h>
#include <linux/intel-iommu.h>
#include <linux/pci.h>
#include <asm/irq_remapping.h>
#include "iommu.h"
#include "pasid.h"
#include "perf.h"
@ -263,10 +263,9 @@ static void ctx_tbl_walk(struct seq_file *m, struct intel_iommu *iommu, u16 bus)
static void root_tbl_walk(struct seq_file *m, struct intel_iommu *iommu)
{
unsigned long flags;
u16 bus;
spin_lock_irqsave(&iommu->lock, flags);
spin_lock(&iommu->lock);
seq_printf(m, "IOMMU %s: Root Table Address: 0x%llx\n", iommu->name,
(u64)virt_to_phys(iommu->root_entry));
seq_puts(m, "B.D.F\tRoot_entry\t\t\t\tContext_entry\t\t\t\tPASID\tPASID_table_entry\n");
@ -278,8 +277,7 @@ static void root_tbl_walk(struct seq_file *m, struct intel_iommu *iommu)
*/
for (bus = 0; bus < 256; bus++)
ctx_tbl_walk(m, iommu, bus);
spin_unlock_irqrestore(&iommu->lock, flags);
spin_unlock(&iommu->lock);
}
static int dmar_translation_struct_show(struct seq_file *m, void *unused)
@ -342,13 +340,13 @@ static void pgtable_walk_level(struct seq_file *m, struct dma_pte *pde,
}
}
static int show_device_domain_translation(struct device *dev, void *data)
static int __show_device_domain_translation(struct device *dev, void *data)
{
struct device_domain_info *info = dev_iommu_priv_get(dev);
struct dmar_domain *domain = info->domain;
struct dmar_domain *domain;
struct seq_file *m = data;
u64 path[6] = { 0 };
domain = to_dmar_domain(iommu_get_domain_for_dev(dev));
if (!domain)
return 0;
@ -359,20 +357,39 @@ static int show_device_domain_translation(struct device *dev, void *data)
pgtable_walk_level(m, domain->pgd, domain->agaw + 2, 0, path);
seq_putc(m, '\n');
/* Don't iterate */
return 1;
}
static int show_device_domain_translation(struct device *dev, void *data)
{
struct iommu_group *group;
group = iommu_group_get(dev);
if (group) {
/*
* The group->mutex is held across the callback, which will
* block calls to iommu_attach/detach_group/device. Hence,
* the domain of the device will not change during traversal.
*
* All devices in an iommu group share a single domain, hence
* we only dump the domain of the first device. Even so, this
* code may still race with the iommu_unmap() interface. That
* could be solved by RCU-freeing the page
* table pages in the iommu_unmap() path.
*/
iommu_group_for_each_dev(group, data,
__show_device_domain_translation);
iommu_group_put(group);
}
return 0;
}
static int domain_translation_struct_show(struct seq_file *m, void *unused)
{
unsigned long flags;
int ret;
spin_lock_irqsave(&device_domain_lock, flags);
ret = bus_for_each_dev(&pci_bus_type, NULL, m,
show_device_domain_translation);
spin_unlock_irqrestore(&device_domain_lock, flags);
return ret;
return bus_for_each_dev(&pci_bus_type, NULL, m,
show_device_domain_translation);
}
DEFINE_SHOW_ATTRIBUTE(domain_translation_struct);


@ -19,7 +19,6 @@
#include <linux/pci.h>
#include <linux/dmar.h>
#include <linux/iova.h>
#include <linux/intel-iommu.h>
#include <linux/timer.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
@ -30,10 +29,11 @@
#include <linux/numa.h>
#include <linux/limits.h>
#include <asm/irq_remapping.h>
#include <trace/events/intel_iommu.h>
#include "iommu.h"
#include "../irq_remapping.h"
#include "perf.h"
#include "trace.h"
typedef int (*dmar_res_handler_t)(struct acpi_dmar_header *, void *);
struct dmar_res_callback {
@ -60,7 +60,7 @@ LIST_HEAD(dmar_drhd_units);
struct acpi_table_header * __initdata dmar_tbl;
static int dmar_dev_scope_status = 1;
static unsigned long dmar_seq_ids[BITS_TO_LONGS(DMAR_UNITS_SUPPORTED)];
static DEFINE_IDA(dmar_seq_ids);
static int alloc_iommu(struct dmar_drhd_unit *drhd);
static void free_iommu(struct intel_iommu *iommu);
@ -1023,28 +1023,6 @@ out:
return err;
}
static int dmar_alloc_seq_id(struct intel_iommu *iommu)
{
iommu->seq_id = find_first_zero_bit(dmar_seq_ids,
DMAR_UNITS_SUPPORTED);
if (iommu->seq_id >= DMAR_UNITS_SUPPORTED) {
iommu->seq_id = -1;
} else {
set_bit(iommu->seq_id, dmar_seq_ids);
sprintf(iommu->name, "dmar%d", iommu->seq_id);
}
return iommu->seq_id;
}
static void dmar_free_seq_id(struct intel_iommu *iommu)
{
if (iommu->seq_id >= 0) {
clear_bit(iommu->seq_id, dmar_seq_ids);
iommu->seq_id = -1;
}
}
static int alloc_iommu(struct dmar_drhd_unit *drhd)
{
struct intel_iommu *iommu;
@ -1062,11 +1040,14 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
if (!iommu)
return -ENOMEM;
if (dmar_alloc_seq_id(iommu) < 0) {
iommu->seq_id = ida_alloc_range(&dmar_seq_ids, 0,
DMAR_UNITS_SUPPORTED - 1, GFP_KERNEL);
if (iommu->seq_id < 0) {
pr_err("Failed to allocate seq_id\n");
err = -ENOSPC;
err = iommu->seq_id;
goto error;
}
sprintf(iommu->name, "dmar%d", iommu->seq_id);
err = map_iommu(iommu, drhd->reg_base_addr);
if (err) {
@ -1150,7 +1131,7 @@ err_sysfs:
err_unmap:
unmap_iommu(iommu);
error_free_seq_id:
dmar_free_seq_id(iommu);
ida_free(&dmar_seq_ids, iommu->seq_id);
error:
kfree(iommu);
return err;
@ -1183,7 +1164,7 @@ static void free_iommu(struct intel_iommu *iommu)
if (iommu->reg)
unmap_iommu(iommu);
dmar_free_seq_id(iommu);
ida_free(&dmar_seq_ids, iommu->seq_id);
kfree(iommu);
}
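
ida_alloc_range() returns the lowest free ID in the requested range or a negative errno, which is why the error path can now forward iommu->seq_id directly instead of translating a sentinel value into -ENOSPC. A self-contained sketch of the pattern (names hypothetical):

	#include <linux/idr.h>

	static DEFINE_IDA(example_seq_ids);

	/* Lowest free id in [0, max - 1], or a negative errno on failure. */
	static int example_alloc_seq_id(unsigned int max)
	{
		return ida_alloc_range(&example_seq_ids, 0, max - 1, GFP_KERNEL);
	}

	static void example_free_seq_id(int id)
	{
		ida_free(&example_seq_ids, id);
	}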


@ -17,7 +17,6 @@
#include <linux/dma-direct.h>
#include <linux/dma-iommu.h>
#include <linux/dmi.h>
#include <linux/intel-iommu.h>
#include <linux/intel-svm.h>
#include <linux/memory.h>
#include <linux/pci.h>
@ -26,6 +25,7 @@
#include <linux/syscore_ops.h>
#include <linux/tboot.h>
#include "iommu.h"
#include "../irq_remapping.h"
#include "../iommu-sva-lib.h"
#include "pasid.h"
@ -126,13 +126,8 @@ static inline unsigned long virt_to_dma_pfn(void *p)
return page_to_dma_pfn(virt_to_page(p));
}
/* global iommu list, set NULL for ignored DMAR units */
static struct intel_iommu **g_iommus;
static void __init check_tylersburg_isoch(void);
static int rwbf_quirk;
static inline struct device_domain_info *
dmar_search_domain_by_dev_info(int segment, int bus, int devfn);
/*
* set to 1 to panic kernel if can't successfully enable VT-d
@ -256,10 +251,6 @@ static inline void context_clear_entry(struct context_entry *context)
static struct dmar_domain *si_domain;
static int hw_pass_through = 1;
#define for_each_domain_iommu(idx, domain) \
for (idx = 0; idx < g_num_of_iommus; idx++) \
if (domain->iommu_refcnt[idx])
struct dmar_rmrr_unit {
struct list_head list; /* list of rmrr units */
struct acpi_dmar_header *hdr; /* ACPI header */
@ -293,12 +284,7 @@ static LIST_HEAD(dmar_satc_units);
#define for_each_rmrr_units(rmrr) \
list_for_each_entry(rmrr, &dmar_rmrr_units, list)
/* bitmap for indexing intel_iommus */
static int g_num_of_iommus;
static void domain_remove_dev_info(struct dmar_domain *domain);
static void dmar_remove_one_dev_info(struct device *dev);
static void __dmar_remove_one_dev_info(struct device_domain_info *info);
int dmar_disabled = !IS_ENABLED(CONFIG_INTEL_IOMMU_DEFAULT_ON);
int intel_iommu_sm = IS_ENABLED(CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON);
@ -314,12 +300,6 @@ static int iommu_skip_te_disable;
#define IDENTMAP_GFX 2
#define IDENTMAP_AZALIA 4
int intel_iommu_gfx_mapped;
EXPORT_SYMBOL_GPL(intel_iommu_gfx_mapped);
DEFINE_SPINLOCK(device_domain_lock);
static LIST_HEAD(device_domain_list);
const struct iommu_ops intel_iommu_ops;
static bool translation_pre_enabled(struct intel_iommu *iommu)
@ -455,24 +435,6 @@ int iommu_calculate_agaw(struct intel_iommu *iommu)
return __iommu_calculate_agaw(iommu, DEFAULT_DOMAIN_ADDRESS_WIDTH);
}
/* This function only returns a single iommu in a domain */
struct intel_iommu *domain_get_iommu(struct dmar_domain *domain)
{
int iommu_id;
/* si_domain and vm domain should not get here. */
if (WARN_ON(!iommu_is_dma_domain(&domain->domain)))
return NULL;
for_each_domain_iommu(iommu_id, domain)
break;
if (iommu_id < 0 || iommu_id >= g_num_of_iommus)
return NULL;
return g_iommus[iommu_id];
}
static inline bool iommu_paging_structure_coherency(struct intel_iommu *iommu)
{
return sm_supported(iommu) ?
@ -481,16 +443,16 @@ static inline bool iommu_paging_structure_coherency(struct intel_iommu *iommu)
static void domain_update_iommu_coherency(struct dmar_domain *domain)
{
struct iommu_domain_info *info;
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
bool found = false;
int i;
unsigned long i;
domain->iommu_coherency = true;
for_each_domain_iommu(i, domain) {
xa_for_each(&domain->iommu_array, i, info) {
found = true;
if (!iommu_paging_structure_coherency(g_iommus[i])) {
if (!iommu_paging_structure_coherency(info->iommu)) {
domain->iommu_coherency = false;
break;
}
@ -544,15 +506,8 @@ static int domain_update_device_node(struct dmar_domain *domain)
struct device_domain_info *info;
int nid = NUMA_NO_NODE;
assert_spin_locked(&device_domain_lock);
if (list_empty(&domain->devices))
return NUMA_NO_NODE;
spin_lock(&domain->lock);
list_for_each_entry(info, &domain->devices, link) {
if (!info->dev)
continue;
/*
* There could possibly be multiple device numa nodes as devices
* within the same domain may sit behind different IOMMUs. There
@ -563,6 +518,7 @@ static int domain_update_device_node(struct dmar_domain *domain)
if (nid != NUMA_NO_NODE)
break;
}
spin_unlock(&domain->lock);
return nid;
}
@ -804,26 +760,23 @@ static int device_context_mapped(struct intel_iommu *iommu, u8 bus, u8 devfn)
{
struct context_entry *context;
int ret = 0;
unsigned long flags;
spin_lock_irqsave(&iommu->lock, flags);
spin_lock(&iommu->lock);
context = iommu_context_addr(iommu, bus, devfn, 0);
if (context)
ret = context_present(context);
spin_unlock_irqrestore(&iommu->lock, flags);
spin_unlock(&iommu->lock);
return ret;
}
static void free_context_table(struct intel_iommu *iommu)
{
int i;
unsigned long flags;
struct context_entry *context;
int i;
if (!iommu->root_entry)
return;
spin_lock_irqsave(&iommu->lock, flags);
if (!iommu->root_entry) {
goto out;
}
for (i = 0; i < ROOT_ENTRY_NR; i++) {
context = iommu_context_addr(iommu, i, 0, 0);
if (context)
@ -835,12 +788,10 @@ static void free_context_table(struct intel_iommu *iommu)
context = iommu_context_addr(iommu, i, 0x80, 0);
if (context)
free_pgtable_page(context);
}
free_pgtable_page(iommu->root_entry);
iommu->root_entry = NULL;
out:
spin_unlock_irqrestore(&iommu->lock, flags);
}
#ifdef CONFIG_DMAR_DEBUG
@ -849,9 +800,14 @@ static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn, u8 bus, u
struct device_domain_info *info;
struct dma_pte *parent, *pte;
struct dmar_domain *domain;
struct pci_dev *pdev;
int offset, level;
info = dmar_search_domain_by_dev_info(iommu->segment, bus, devfn);
pdev = pci_get_domain_bus_and_slot(iommu->segment, bus, devfn);
if (!pdev)
return;
info = dev_iommu_priv_get(&pdev->dev);
if (!info || !info->domain) {
pr_info("device [%02x:%02x.%d] not probed\n",
bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
@ -1234,7 +1190,6 @@ static void domain_unmap(struct dmar_domain *domain, unsigned long start_pfn,
static int iommu_alloc_root_entry(struct intel_iommu *iommu)
{
struct root_entry *root;
unsigned long flags;
root = (struct root_entry *)alloc_pgtable_page(iommu->node);
if (!root) {
@ -1244,10 +1199,7 @@ static int iommu_alloc_root_entry(struct intel_iommu *iommu)
}
__iommu_flush_cache(iommu, root, ROOT_SIZE);
spin_lock_irqsave(&iommu->lock, flags);
iommu->root_entry = root;
spin_unlock_irqrestore(&iommu->lock, flags);
return 0;
}
@ -1389,23 +1341,23 @@ static void __iommu_flush_iotlb(struct intel_iommu *iommu, u16 did,
}
static struct device_domain_info *
iommu_support_dev_iotlb (struct dmar_domain *domain, struct intel_iommu *iommu,
u8 bus, u8 devfn)
iommu_support_dev_iotlb(struct dmar_domain *domain, struct intel_iommu *iommu,
u8 bus, u8 devfn)
{
struct device_domain_info *info;
assert_spin_locked(&device_domain_lock);
if (!iommu->qi)
return NULL;
list_for_each_entry(info, &domain->devices, link)
spin_lock(&domain->lock);
list_for_each_entry(info, &domain->devices, link) {
if (info->iommu == iommu && info->bus == bus &&
info->devfn == devfn) {
if (info->ats_supported && info->dev)
return info;
break;
spin_unlock(&domain->lock);
return info->ats_supported ? info : NULL;
}
}
spin_unlock(&domain->lock);
return NULL;
}
@ -1415,23 +1367,21 @@ static void domain_update_iotlb(struct dmar_domain *domain)
struct device_domain_info *info;
bool has_iotlb_device = false;
assert_spin_locked(&device_domain_lock);
list_for_each_entry(info, &domain->devices, link)
spin_lock(&domain->lock);
list_for_each_entry(info, &domain->devices, link) {
if (info->ats_enabled) {
has_iotlb_device = true;
break;
}
}
domain->has_iotlb_device = has_iotlb_device;
spin_unlock(&domain->lock);
}
static void iommu_enable_dev_iotlb(struct device_domain_info *info)
{
struct pci_dev *pdev;
assert_spin_locked(&device_domain_lock);
if (!info || !dev_is_pci(info->dev))
return;
@ -1477,8 +1427,6 @@ static void iommu_disable_dev_iotlb(struct device_domain_info *info)
{
struct pci_dev *pdev;
assert_spin_locked(&device_domain_lock);
if (!dev_is_pci(info->dev))
return;
@ -1518,17 +1466,15 @@ static void __iommu_flush_dev_iotlb(struct device_domain_info *info,
static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
u64 addr, unsigned mask)
{
unsigned long flags;
struct device_domain_info *info;
if (!domain->has_iotlb_device)
return;
spin_lock_irqsave(&device_domain_lock, flags);
spin_lock(&domain->lock);
list_for_each_entry(info, &domain->devices, link)
__iommu_flush_dev_iotlb(info, addr, mask);
spin_unlock_irqrestore(&device_domain_lock, flags);
spin_unlock(&domain->lock);
}
static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
@ -1539,7 +1485,7 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
unsigned int aligned_pages = __roundup_pow_of_two(pages);
unsigned int mask = ilog2(aligned_pages);
uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
u16 did = domain->iommu_did[iommu->seq_id];
u16 did = domain_id_iommu(domain, iommu);
BUG_ON(pages == 0);
@ -1609,11 +1555,12 @@ static inline void __mapping_notify_one(struct intel_iommu *iommu,
static void intel_flush_iotlb_all(struct iommu_domain *domain)
{
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
int idx;
struct iommu_domain_info *info;
unsigned long idx;
for_each_domain_iommu(idx, dmar_domain) {
struct intel_iommu *iommu = g_iommus[idx];
u16 did = dmar_domain->iommu_did[iommu->seq_id];
xa_for_each(&dmar_domain->iommu_array, idx, info) {
struct intel_iommu *iommu = info->iommu;
u16 did = domain_id_iommu(dmar_domain, iommu);
if (domain_use_first_level(dmar_domain))
qi_flush_piotlb(iommu, did, PASID_RID2PASID, 0, -1, 0);
@ -1719,23 +1666,16 @@ static int iommu_init_domains(struct intel_iommu *iommu)
static void disable_dmar_iommu(struct intel_iommu *iommu)
{
struct device_domain_info *info, *tmp;
unsigned long flags;
if (!iommu->domain_ids)
return;
spin_lock_irqsave(&device_domain_lock, flags);
list_for_each_entry_safe(info, tmp, &device_domain_list, global) {
if (info->iommu != iommu)
continue;
if (!info->dev || !info->domain)
continue;
__dmar_remove_one_dev_info(info);
}
spin_unlock_irqrestore(&device_domain_lock, flags);
/*
* All iommu domains must have been detached from the devices,
* hence there should be no domain IDs in use.
*/
if (WARN_ON(bitmap_weight(iommu->domain_ids, cap_ndoms(iommu->cap))
> NUM_RESERVED_DID))
return;
if (iommu->gcmd & DMA_GCMD_TE)
iommu_disable_translation(iommu);
@ -1748,8 +1688,6 @@ static void free_dmar_iommu(struct intel_iommu *iommu)
iommu->domain_ids = NULL;
}
g_iommus[iommu->seq_id] = NULL;
/* free context mapping */
free_context_table(iommu);
@ -1795,55 +1733,77 @@ static struct dmar_domain *alloc_domain(unsigned int type)
domain->flags |= DOMAIN_FLAG_USE_FIRST_LEVEL;
domain->has_iotlb_device = false;
INIT_LIST_HEAD(&domain->devices);
spin_lock_init(&domain->lock);
xa_init(&domain->iommu_array);
return domain;
}
/* Must be called with iommu->lock */
static int domain_attach_iommu(struct dmar_domain *domain,
struct intel_iommu *iommu)
{
struct iommu_domain_info *info, *curr;
unsigned long ndomains;
int num;
int num, ret = -ENOSPC;
assert_spin_locked(&device_domain_lock);
assert_spin_locked(&iommu->lock);
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
domain->iommu_refcnt[iommu->seq_id] += 1;
if (domain->iommu_refcnt[iommu->seq_id] == 1) {
ndomains = cap_ndoms(iommu->cap);
num = find_first_zero_bit(iommu->domain_ids, ndomains);
if (num >= ndomains) {
pr_err("%s: No free domain ids\n", iommu->name);
domain->iommu_refcnt[iommu->seq_id] -= 1;
return -ENOSPC;
}
set_bit(num, iommu->domain_ids);
domain->iommu_did[iommu->seq_id] = num;
domain->nid = iommu->node;
domain_update_iommu_cap(domain);
spin_lock(&iommu->lock);
curr = xa_load(&domain->iommu_array, iommu->seq_id);
if (curr) {
curr->refcnt++;
spin_unlock(&iommu->lock);
kfree(info);
return 0;
}
ndomains = cap_ndoms(iommu->cap);
num = find_first_zero_bit(iommu->domain_ids, ndomains);
if (num >= ndomains) {
pr_err("%s: No free domain ids\n", iommu->name);
goto err_unlock;
}
set_bit(num, iommu->domain_ids);
info->refcnt = 1;
info->did = num;
info->iommu = iommu;
curr = xa_cmpxchg(&domain->iommu_array, iommu->seq_id,
NULL, info, GFP_ATOMIC);
if (curr) {
ret = xa_err(curr) ? : -EBUSY;
goto err_clear;
}
domain_update_iommu_cap(domain);
spin_unlock(&iommu->lock);
return 0;
err_clear:
clear_bit(info->did, iommu->domain_ids);
err_unlock:
spin_unlock(&iommu->lock);
kfree(info);
return ret;
}
static void domain_detach_iommu(struct dmar_domain *domain,
struct intel_iommu *iommu)
{
int num;
struct iommu_domain_info *info;
assert_spin_locked(&device_domain_lock);
assert_spin_locked(&iommu->lock);
domain->iommu_refcnt[iommu->seq_id] -= 1;
if (domain->iommu_refcnt[iommu->seq_id] == 0) {
num = domain->iommu_did[iommu->seq_id];
clear_bit(num, iommu->domain_ids);
spin_lock(&iommu->lock);
info = xa_load(&domain->iommu_array, iommu->seq_id);
if (--info->refcnt == 0) {
clear_bit(info->did, iommu->domain_ids);
xa_erase(&domain->iommu_array, iommu->seq_id);
domain->nid = NUMA_NO_NODE;
domain_update_iommu_cap(domain);
domain->iommu_did[iommu->seq_id] = 0;
kfree(info);
}
spin_unlock(&iommu->lock);
}
static inline int guestwidth_to_adjustwidth(int gaw)
@ -1862,10 +1822,6 @@ static inline int guestwidth_to_adjustwidth(int gaw)
static void domain_exit(struct dmar_domain *domain)
{
/* Remove associated devices and clear attached or cached domains */
domain_remove_dev_info(domain);
if (domain->pgd) {
LIST_HEAD(freelist);
@ -1873,6 +1829,9 @@ static void domain_exit(struct dmar_domain *domain)
put_pages_list(&freelist);
}
if (WARN_ON(!list_empty(&domain->devices)))
return;
kfree(domain);
}
@ -1930,11 +1889,11 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
struct pasid_table *table,
u8 bus, u8 devfn)
{
u16 did = domain->iommu_did[iommu->seq_id];
struct device_domain_info *info =
iommu_support_dev_iotlb(domain, iommu, bus, devfn);
u16 did = domain_id_iommu(domain, iommu);
int translation = CONTEXT_TT_MULTI_LEVEL;
struct device_domain_info *info = NULL;
struct context_entry *context;
unsigned long flags;
int ret;
WARN_ON(did == 0);
@ -1947,9 +1906,7 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
BUG_ON(!domain->pgd);
spin_lock_irqsave(&device_domain_lock, flags);
spin_lock(&iommu->lock);
ret = -ENOMEM;
context = iommu_context_addr(iommu, bus, devfn, 1);
if (!context)
@ -2000,7 +1957,6 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
* Setup the Device-TLB enable bit and Page request
* Enable bit:
*/
info = iommu_support_dev_iotlb(domain, iommu, bus, devfn);
if (info && info->ats_supported)
context_set_sm_dte(context);
if (info && info->pri_supported)
@ -2023,7 +1979,6 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
goto out_unlock;
}
info = iommu_support_dev_iotlb(domain, iommu, bus, devfn);
if (info && info->ats_supported)
translation = CONTEXT_TT_DEV_IOTLB;
else
@ -2069,7 +2024,6 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
out_unlock:
spin_unlock(&iommu->lock);
spin_unlock_irqrestore(&device_domain_lock, flags);
return ret;
}
@ -2186,8 +2140,9 @@ static void switch_to_super_page(struct dmar_domain *domain,
unsigned long end_pfn, int level)
{
unsigned long lvl_pages = lvl_to_nr_pages(level);
struct iommu_domain_info *info;
struct dma_pte *pte = NULL;
int i;
unsigned long i;
while (start_pfn <= end_pfn) {
if (!pte)
@ -2198,8 +2153,8 @@ static void switch_to_super_page(struct dmar_domain *domain,
start_pfn + lvl_pages - 1,
level + 1);
for_each_domain_iommu(i, domain)
iommu_flush_iotlb_psi(g_iommus[i], domain,
xa_for_each(&domain->iommu_array, i, info)
iommu_flush_iotlb_psi(info->iommu, domain,
start_pfn, lvl_pages,
0, 0);
}
@ -2313,16 +2268,15 @@ static void domain_context_clear_one(struct device_domain_info *info, u8 bus, u8
{
struct intel_iommu *iommu = info->iommu;
struct context_entry *context;
unsigned long flags;
u16 did_old;
if (!iommu)
return;
spin_lock_irqsave(&iommu->lock, flags);
spin_lock(&iommu->lock);
context = iommu_context_addr(iommu, bus, devfn, 0);
if (!context) {
spin_unlock_irqrestore(&iommu->lock, flags);
spin_unlock(&iommu->lock);
return;
}
@ -2330,14 +2284,14 @@ static void domain_context_clear_one(struct device_domain_info *info, u8 bus, u8
if (hw_pass_through && domain_type_is_si(info->domain))
did_old = FLPT_DEFAULT_DID;
else
did_old = info->domain->iommu_did[iommu->seq_id];
did_old = domain_id_iommu(info->domain, iommu);
} else {
did_old = context_domain_id(context);
}
context_clear_entry(context);
__iommu_flush_cache(iommu, context, sizeof(*context));
spin_unlock_irqrestore(&iommu->lock, flags);
spin_unlock(&iommu->lock);
iommu->flush.flush_context(iommu,
did_old,
(((u16)bus) << 8) | devfn,
@ -2356,30 +2310,6 @@ static void domain_context_clear_one(struct device_domain_info *info, u8 bus, u8
__iommu_flush_dev_iotlb(info, 0, MAX_AGAW_PFN_WIDTH);
}
static void domain_remove_dev_info(struct dmar_domain *domain)
{
struct device_domain_info *info, *tmp;
unsigned long flags;
spin_lock_irqsave(&device_domain_lock, flags);
list_for_each_entry_safe(info, tmp, &domain->devices, link)
__dmar_remove_one_dev_info(info);
spin_unlock_irqrestore(&device_domain_lock, flags);
}
static inline struct device_domain_info *
dmar_search_domain_by_dev_info(int segment, int bus, int devfn)
{
struct device_domain_info *info;
list_for_each_entry(info, &device_domain_list, global)
if (info->segment == segment && info->bus == bus &&
info->devfn == devfn)
return info;
return NULL;
}
static int domain_setup_first_level(struct intel_iommu *iommu,
struct dmar_domain *domain,
struct device *dev,
@ -2412,7 +2342,7 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
flags |= PASID_FLAG_PAGE_SNOOP;
return intel_pasid_setup_first_level(iommu, dev, (pgd_t *)pgd, pasid,
domain->iommu_did[iommu->seq_id],
domain_id_iommu(domain, iommu),
flags);
}
@ -2499,7 +2429,6 @@ static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
{
struct device_domain_info *info = dev_iommu_priv_get(dev);
struct intel_iommu *iommu;
unsigned long flags;
u8 bus, devfn;
int ret;
@ -2507,17 +2436,13 @@ static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
if (!iommu)
return -ENODEV;
spin_lock_irqsave(&device_domain_lock, flags);
info->domain = domain;
spin_lock(&iommu->lock);
ret = domain_attach_iommu(domain, iommu);
spin_unlock(&iommu->lock);
if (ret) {
spin_unlock_irqrestore(&device_domain_lock, flags);
if (ret)
return ret;
}
info->domain = domain;
spin_lock(&domain->lock);
list_add(&info->link, &domain->devices);
spin_unlock_irqrestore(&device_domain_lock, flags);
spin_unlock(&domain->lock);
/* PASID table is mandatory for a PCI device in scalable mode. */
if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) {
@ -2529,7 +2454,6 @@ static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
}
/* Setup the PASID entry for requests without PASID: */
spin_lock_irqsave(&iommu->lock, flags);
if (hw_pass_through && domain_type_is_si(domain))
ret = intel_pasid_setup_pass_through(iommu, domain,
dev, PASID_RID2PASID);
@ -2539,7 +2463,6 @@ static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
else
ret = intel_pasid_setup_second_level(iommu, domain,
dev, PASID_RID2PASID);
spin_unlock_irqrestore(&iommu->lock, flags);
if (ret) {
dev_err(dev, "Setup RID2PASID failed\n");
dmar_remove_one_dev_info(dev);
@ -2807,7 +2730,6 @@ static int copy_translation_tables(struct intel_iommu *iommu)
struct root_entry *old_rt;
phys_addr_t old_rt_phys;
int ctxt_table_entries;
unsigned long flags;
u64 rtaddr_reg;
int bus, ret;
bool new_ext, ext;
@ -2850,7 +2772,7 @@ static int copy_translation_tables(struct intel_iommu *iommu)
}
}
spin_lock_irqsave(&iommu->lock, flags);
spin_lock(&iommu->lock);
/* Context tables are copied, now write them to the root_entry table */
for (bus = 0; bus < 256; bus++) {
@ -2869,7 +2791,7 @@ static int copy_translation_tables(struct intel_iommu *iommu)
iommu->root_entry[bus].hi = val;
}
spin_unlock_irqrestore(&iommu->lock, flags);
spin_unlock(&iommu->lock);
kfree(ctxt_tbls);
@ -2968,36 +2890,6 @@ static int __init init_dmars(void)
struct intel_iommu *iommu;
int ret;
/*
* for each drhd
* allocate root
* initialize and program root entry to not present
* endfor
*/
for_each_drhd_unit(drhd) {
/*
* lock not needed as this is only incremented in the single
* threaded kernel __init code path; all other accesses are
* read only
*/
if (g_num_of_iommus < DMAR_UNITS_SUPPORTED) {
g_num_of_iommus++;
continue;
}
pr_err_once("Exceeded %d IOMMUs\n", DMAR_UNITS_SUPPORTED);
}
/* Preallocate enough resources for IOMMU hot-addition */
if (g_num_of_iommus < DMAR_UNITS_SUPPORTED)
g_num_of_iommus = DMAR_UNITS_SUPPORTED;
g_iommus = kcalloc(g_num_of_iommus, sizeof(struct intel_iommu *),
GFP_KERNEL);
if (!g_iommus) {
ret = -ENOMEM;
goto error;
}
ret = intel_cap_audit(CAP_AUDIT_STATIC_DMAR, NULL);
if (ret)
goto free_iommu;
@ -3020,8 +2912,6 @@ static int __init init_dmars(void)
intel_pasid_max_id);
}
g_iommus[iommu->seq_id] = iommu;
intel_iommu_init_qi(iommu);
ret = iommu_init_domains(iommu);
@ -3147,9 +3037,6 @@ free_iommu:
free_dmar_iommu(iommu);
}
kfree(g_iommus);
error:
return ret;
}
@ -3530,9 +3417,6 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
int sp, ret;
struct intel_iommu *iommu = dmaru->iommu;
if (g_iommus[iommu->seq_id])
return 0;
ret = intel_cap_audit(CAP_AUDIT_HOTPLUG_DMAR, iommu);
if (ret)
goto out;
@ -3556,7 +3440,6 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
if (iommu->gcmd & DMA_GCMD_TE)
iommu_disable_translation(iommu);
g_iommus[iommu->seq_id] = iommu;
ret = iommu_init_domains(iommu);
if (ret == 0)
ret = iommu_alloc_root_entry(iommu);
@ -4022,6 +3905,20 @@ static int __init probe_acpi_namespace_devices(void)
return 0;
}
static __init int tboot_force_iommu(void)
{
if (!tboot_enabled())
return 0;
if (no_iommu || dmar_disabled)
pr_warn("Forcing Intel-IOMMU to enabled\n");
dmar_disabled = 0;
no_iommu = 0;
return 1;
}
int __init intel_iommu_init(void)
{
int ret = -ENODEV;
@ -4093,9 +3990,6 @@ int __init intel_iommu_init(void)
if (list_empty(&dmar_satc_units))
pr_info("No SATC found\n");
if (dmar_map_gfx)
intel_iommu_gfx_mapped = 1;
init_no_remapping_devices();
ret = init_dmars();
@ -4181,21 +4075,13 @@ static void domain_context_clear(struct device_domain_info *info)
&domain_context_clear_one_cb, info);
}
static void __dmar_remove_one_dev_info(struct device_domain_info *info)
static void dmar_remove_one_dev_info(struct device *dev)
{
struct dmar_domain *domain;
struct intel_iommu *iommu;
unsigned long flags;
struct device_domain_info *info = dev_iommu_priv_get(dev);
struct dmar_domain *domain = info->domain;
struct intel_iommu *iommu = info->iommu;
assert_spin_locked(&device_domain_lock);
if (WARN_ON(!info))
return;
iommu = info->iommu;
domain = info->domain;
if (info->dev && !dev_is_real_dma_subdevice(info->dev)) {
if (!dev_is_real_dma_subdevice(info->dev)) {
if (dev_is_pci(info->dev) && sm_supported(iommu))
intel_pasid_tear_down_entry(iommu, info->dev,
PASID_RID2PASID, false);
@ -4205,23 +4091,12 @@ static void __dmar_remove_one_dev_info(struct device_domain_info *info)
intel_pasid_free_table(info->dev);
}
spin_lock(&domain->lock);
list_del(&info->link);
spin_unlock(&domain->lock);
spin_lock_irqsave(&iommu->lock, flags);
domain_detach_iommu(domain, iommu);
spin_unlock_irqrestore(&iommu->lock, flags);
}
static void dmar_remove_one_dev_info(struct device *dev)
{
struct device_domain_info *info;
unsigned long flags;
spin_lock_irqsave(&device_domain_lock, flags);
info = dev_iommu_priv_get(dev);
if (info)
__dmar_remove_one_dev_info(info);
spin_unlock_irqrestore(&device_domain_lock, flags);
info->domain = NULL;
}
static int md_domain_init(struct dmar_domain *domain, int guest_width)
@ -4466,15 +4341,16 @@ static void intel_iommu_tlb_sync(struct iommu_domain *domain,
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
unsigned long iova_pfn = IOVA_PFN(gather->start);
size_t size = gather->end - gather->start;
struct iommu_domain_info *info;
unsigned long start_pfn;
unsigned long nrpages;
int iommu_id;
unsigned long i;
nrpages = aligned_nrpages(gather->start, size);
start_pfn = mm_to_dma_pfn(iova_pfn);
for_each_domain_iommu(iommu_id, dmar_domain)
iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain,
xa_for_each(&dmar_domain->iommu_array, i, info)
iommu_flush_iotlb_psi(info->iommu, dmar_domain,
start_pfn, nrpages,
list_empty(&gather->freelist), 0);
@ -4503,7 +4379,7 @@ static bool domain_support_force_snooping(struct dmar_domain *domain)
struct device_domain_info *info;
bool support = true;
assert_spin_locked(&device_domain_lock);
assert_spin_locked(&domain->lock);
list_for_each_entry(info, &domain->devices, link) {
if (!ecap_sc_support(info->iommu->ecap)) {
support = false;
@ -4518,8 +4394,7 @@ static void domain_set_force_snooping(struct dmar_domain *domain)
{
struct device_domain_info *info;
assert_spin_locked(&device_domain_lock);
assert_spin_locked(&domain->lock);
/*
* Second level page table supports per-PTE snoop control. The
* iommu_map() interface will handle this by setting SNP bit.
@ -4537,20 +4412,19 @@ static void domain_set_force_snooping(struct dmar_domain *domain)
static bool intel_iommu_enforce_cache_coherency(struct iommu_domain *domain)
{
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
unsigned long flags;
if (dmar_domain->force_snooping)
return true;
spin_lock_irqsave(&device_domain_lock, flags);
spin_lock(&dmar_domain->lock);
if (!domain_support_force_snooping(dmar_domain)) {
spin_unlock_irqrestore(&device_domain_lock, flags);
spin_unlock(&dmar_domain->lock);
return false;
}
domain_set_force_snooping(dmar_domain);
dmar_domain->force_snooping = true;
spin_unlock_irqrestore(&device_domain_lock, flags);
spin_unlock(&dmar_domain->lock);
return true;
}
@ -4572,7 +4446,6 @@ static struct iommu_device *intel_iommu_probe_device(struct device *dev)
struct pci_dev *pdev = dev_is_pci(dev) ? to_pci_dev(dev) : NULL;
struct device_domain_info *info;
struct intel_iommu *iommu;
unsigned long flags;
u8 bus, devfn;
iommu = device_to_iommu(dev, &bus, &devfn);
@ -4615,10 +4488,7 @@ static struct iommu_device *intel_iommu_probe_device(struct device *dev)
}
}
spin_lock_irqsave(&device_domain_lock, flags);
list_add(&info->global, &device_domain_list);
dev_iommu_priv_set(dev, info);
spin_unlock_irqrestore(&device_domain_lock, flags);
return &iommu->iommu;
}
@ -4626,15 +4496,9 @@ static struct iommu_device *intel_iommu_probe_device(struct device *dev)
static void intel_iommu_release_device(struct device *dev)
{
struct device_domain_info *info = dev_iommu_priv_get(dev);
unsigned long flags;
dmar_remove_one_dev_info(dev);
spin_lock_irqsave(&device_domain_lock, flags);
dev_iommu_priv_set(dev, NULL);
list_del(&info->global);
spin_unlock_irqrestore(&device_domain_lock, flags);
kfree(info);
set_dma_ops(dev, NULL);
}
@ -4707,7 +4571,6 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev)
struct device_domain_info *info = dev_iommu_priv_get(dev);
struct context_entry *context;
struct dmar_domain *domain;
unsigned long flags;
u64 ctx_lo;
int ret;
@ -4715,9 +4578,7 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev)
if (!domain)
return -EINVAL;
spin_lock_irqsave(&device_domain_lock, flags);
spin_lock(&iommu->lock);
ret = -EINVAL;
if (!info->pasid_supported)
goto out;
@ -4733,7 +4594,7 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev)
context[0].lo = ctx_lo;
wmb();
iommu->flush.flush_context(iommu,
domain->iommu_did[iommu->seq_id],
domain_id_iommu(domain, iommu),
PCI_DEVID(info->bus, info->devfn),
DMA_CCMD_MASK_NOBIT,
DMA_CCMD_DEVICE_INVL);
@ -4747,7 +4608,6 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev)
out:
spin_unlock(&iommu->lock);
spin_unlock_irqrestore(&device_domain_lock, flags);
return ret;
}
@ -4871,13 +4731,11 @@ static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
unsigned long pages = aligned_nrpages(iova, size);
unsigned long pfn = iova >> VTD_PAGE_SHIFT;
struct intel_iommu *iommu;
int iommu_id;
struct iommu_domain_info *info;
unsigned long i;
for_each_domain_iommu(iommu_id, dmar_domain) {
iommu = g_iommus[iommu_id];
__mapping_notify_one(iommu, dmar_domain, pfn, pages);
}
xa_for_each(&dmar_domain->iommu_array, i, info)
__mapping_notify_one(info->iommu, dmar_domain, pfn, pages);
}
const struct iommu_ops intel_iommu_ops = {
@ -4887,7 +4745,6 @@ const struct iommu_ops intel_iommu_ops = {
.probe_finalize = intel_iommu_probe_finalize,
.release_device = intel_iommu_release_device,
.get_resv_regions = intel_iommu_get_resv_regions,
.put_resv_regions = generic_iommu_put_resv_regions,
.device_group = intel_iommu_device_group,
.dev_enable_feat = intel_iommu_dev_enable_feat,
.dev_disable_feat = intel_iommu_dev_disable_feat,


@ -21,6 +21,7 @@
#include <linux/dmar.h>
#include <linux/ioasid.h>
#include <linux/bitfield.h>
#include <linux/xarray.h>
#include <asm/cacheflush.h>
#include <asm/iommu.h>
@ -480,7 +481,6 @@ enum {
#define VTD_FLAG_SVM_CAPABLE (1 << 2)
extern int intel_iommu_sm;
extern spinlock_t device_domain_lock;
#define sm_supported(iommu) (intel_iommu_sm && ecap_smts((iommu)->ecap))
#define pasid_supported(iommu) (sm_supported(iommu) && \
@ -525,25 +525,25 @@ struct context_entry {
*/
#define DOMAIN_FLAG_USE_FIRST_LEVEL BIT(1)
struct dmar_domain {
int nid; /* node id */
unsigned int iommu_refcnt[DMAR_UNITS_SUPPORTED];
/* Refcount of devices per iommu */
u16 iommu_did[DMAR_UNITS_SUPPORTED];
/* Domain ids per IOMMU. Use u16 since
struct iommu_domain_info {
struct intel_iommu *iommu;
unsigned int refcnt; /* Refcount of devices per iommu */
u16 did; /* Domain ids per IOMMU. Use u16 since
* domain ids are 16 bit wide according
* to VT-d spec, section 9.3 */
};
struct dmar_domain {
int nid; /* node id */
struct xarray iommu_array; /* Attached IOMMU array */
u8 has_iotlb_device: 1;
u8 iommu_coherency: 1; /* indicate coherency of iommu access */
u8 force_snooping : 1; /* Create IOPTEs with snoop control */
u8 set_pte_snp:1;
spinlock_t lock; /* Protect device tracking lists */
struct list_head devices; /* all devices' list */
struct iova_domain iovad; /* iova's that belong to this domain */
struct dma_pte *pgd; /* virtual address */
int gaw; /* max guest address width */
@ -611,7 +611,6 @@ struct intel_iommu {
/* PCI domain-device relationship */
struct device_domain_info {
struct list_head link; /* link to domain siblings */
struct list_head global; /* link to global list */
u32 segment; /* PCI segment number */
u8 bus; /* PCI bus number */
u8 devfn; /* PCI devfn number */
@ -642,6 +641,16 @@ static inline struct dmar_domain *to_dmar_domain(struct iommu_domain *dom)
return container_of(dom, struct dmar_domain, domain);
}
/* Retrieve the domain ID which has been allocated to the domain */
static inline u16
domain_id_iommu(struct dmar_domain *domain, struct intel_iommu *iommu)
{
struct iommu_domain_info *info =
xa_load(&domain->iommu_array, iommu->seq_id);
return info->did;
}
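
Callers that previously walked the global g_iommus[] array by index now iterate the domain's own xarray; a minimal walker sketch (the pr_debug() output is illustrative):

	static void example_dump_domain_iommus(struct dmar_domain *domain)
	{
		struct iommu_domain_info *info;
		unsigned long i;

		xa_for_each(&domain->iommu_array, i, info)
			pr_debug("%s: did %u, refcnt %u\n",
				 info->iommu->name, info->did, info->refcnt);
	}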
/*
* 0: readable
* 1: writable
@ -727,7 +736,6 @@ extern int dmar_ir_support(void);
void *alloc_pgtable_page(int node);
void free_pgtable_page(void *vaddr);
struct intel_iommu *domain_get_iommu(struct dmar_domain *domain);
void iommu_flush_write_buffer(struct intel_iommu *iommu);
int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev);
struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn);
@ -787,7 +795,6 @@ extern int iommu_calculate_agaw(struct intel_iommu *iommu);
extern int iommu_calculate_max_sagaw(struct intel_iommu *iommu);
extern int dmar_disabled;
extern int intel_iommu_enabled;
extern int intel_iommu_gfx_mapped;
#else
static inline int iommu_calculate_agaw(struct intel_iommu *iommu)
{


@ -10,7 +10,6 @@
#include <linux/hpet.h>
#include <linux/pci.h>
#include <linux/irq.h>
#include <linux/intel-iommu.h>
#include <linux/acpi.h>
#include <linux/irqdomain.h>
#include <linux/crash_dump.h>
@ -21,6 +20,7 @@
#include <asm/irq_remapping.h>
#include <asm/pci-direct.h>
#include "iommu.h"
#include "../irq_remapping.h"
#include "cap_audit.h"


@ -12,13 +12,13 @@
#include <linux/bitops.h>
#include <linux/cpufeature.h>
#include <linux/dmar.h>
#include <linux/intel-iommu.h>
#include <linux/iommu.h>
#include <linux/memory.h>
#include <linux/pci.h>
#include <linux/pci-ats.h>
#include <linux/spinlock.h>
#include "iommu.h"
#include "pasid.h"
/*
@ -450,17 +450,17 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
struct pasid_entry *pte;
u16 did, pgtt;
spin_lock(&iommu->lock);
pte = intel_pasid_get_entry(dev, pasid);
if (WARN_ON(!pte))
return;
if (!pasid_pte_is_present(pte))
if (WARN_ON(!pte) || !pasid_pte_is_present(pte)) {
spin_unlock(&iommu->lock);
return;
}
did = pasid_get_domain_id(pte);
pgtt = pasid_pte_get_pgtt(pte);
intel_pasid_clear_entry(dev, pasid, fault_ignore);
spin_unlock(&iommu->lock);
if (!ecap_coherent(iommu->ecap))
clflush_cache_range(pte, sizeof(*pte));
@ -496,22 +496,6 @@ static void pasid_flush_caches(struct intel_iommu *iommu,
}
}
static inline int pasid_enable_wpe(struct pasid_entry *pte)
{
#ifdef CONFIG_X86
unsigned long cr0 = read_cr0();
/* CR0.WP is normally set but just to be sure */
if (unlikely(!(cr0 & X86_CR0_WP))) {
pr_err_ratelimited("No CPU write protect!\n");
return -EINVAL;
}
#endif
pasid_set_wpe(pte);
return 0;
};
/*
* Set up the scalable mode pasid table entry for first only
* translation type.
@ -528,39 +512,52 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu,
return -EINVAL;
}
pte = intel_pasid_get_entry(dev, pasid);
if (WARN_ON(!pte))
return -EINVAL;
if (flags & PASID_FLAG_SUPERVISOR_MODE) {
#ifdef CONFIG_X86
unsigned long cr0 = read_cr0();
/* Caller must ensure PASID entry is not in use. */
if (pasid_pte_is_present(pte))
/* CR0.WP is normally set but just to be sure */
if (unlikely(!(cr0 & X86_CR0_WP))) {
pr_err("No CPU write protect!\n");
return -EINVAL;
}
#endif
if (!ecap_srs(iommu->ecap)) {
pr_err("No supervisor request support on %s\n",
iommu->name);
return -EINVAL;
}
}
if ((flags & PASID_FLAG_FL5LP) && !cap_5lp_support(iommu->cap)) {
pr_err("No 5-level paging support for first-level on %s\n",
iommu->name);
return -EINVAL;
}
spin_lock(&iommu->lock);
pte = intel_pasid_get_entry(dev, pasid);
if (!pte) {
spin_unlock(&iommu->lock);
return -ENODEV;
}
if (pasid_pte_is_present(pte)) {
spin_unlock(&iommu->lock);
return -EBUSY;
}
pasid_clear_entry(pte);
/* Setup the first level page table pointer: */
pasid_set_flptr(pte, (u64)__pa(pgd));
if (flags & PASID_FLAG_SUPERVISOR_MODE) {
if (!ecap_srs(iommu->ecap)) {
pr_err("No supervisor request support on %s\n",
iommu->name);
return -EINVAL;
}
pasid_set_sre(pte);
if (pasid_enable_wpe(pte))
return -EINVAL;
pasid_set_wpe(pte);
}
if (flags & PASID_FLAG_FL5LP) {
if (cap_5lp_support(iommu->cap)) {
pasid_set_flpm(pte, 1);
} else {
pr_err("No 5-level paging support for first-level\n");
pasid_clear_entry(pte);
return -EINVAL;
}
}
if (flags & PASID_FLAG_FL5LP)
pasid_set_flpm(pte, 1);
if (flags & PASID_FLAG_PAGE_SNOOP)
pasid_set_pgsnp(pte);
@ -572,6 +569,8 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu,
/* Setup Present and PASID Granular Transfer Type: */
pasid_set_translation_type(pte, PASID_ENTRY_PGTT_FL_ONLY);
pasid_set_present(pte);
spin_unlock(&iommu->lock);
pasid_flush_caches(iommu, pte, pasid, did);
return 0;
@ -627,17 +626,19 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
}
pgd_val = virt_to_phys(pgd);
did = domain->iommu_did[iommu->seq_id];
did = domain_id_iommu(domain, iommu);
spin_lock(&iommu->lock);
pte = intel_pasid_get_entry(dev, pasid);
if (!pte) {
dev_err(dev, "Failed to get pasid entry of PASID %d\n", pasid);
spin_unlock(&iommu->lock);
return -ENODEV;
}
/* Caller must ensure PASID entry is not in use. */
if (pasid_pte_is_present(pte))
if (pasid_pte_is_present(pte)) {
spin_unlock(&iommu->lock);
return -EBUSY;
}
pasid_clear_entry(pte);
pasid_set_domain_id(pte, did);
@@ -654,6 +655,8 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
if (pasid != PASID_RID2PASID)
pasid_set_sre(pte);
pasid_set_present(pte);
spin_unlock(&iommu->lock);
pasid_flush_caches(iommu, pte, pasid, did);
return 0;
@@ -669,15 +672,17 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
u16 did = FLPT_DEFAULT_DID;
struct pasid_entry *pte;
spin_lock(&iommu->lock);
pte = intel_pasid_get_entry(dev, pasid);
if (!pte) {
dev_err(dev, "Failed to get pasid entry of PASID %d\n", pasid);
spin_unlock(&iommu->lock);
return -ENODEV;
}
/* Caller must ensure PASID entry is not in use. */
if (pasid_pte_is_present(pte))
if (pasid_pte_is_present(pte)) {
spin_unlock(&iommu->lock);
return -EBUSY;
}
pasid_clear_entry(pte);
pasid_set_domain_id(pte, did);
@@ -692,6 +697,8 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
*/
pasid_set_sre(pte);
pasid_set_present(pte);
spin_unlock(&iommu->lock);
pasid_flush_caches(iommu, pte, pasid, did);
return 0;

diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
@@ -39,6 +39,7 @@
* only and pass-through transfer modes.
*/
#define FLPT_DEFAULT_DID 1
#define NUM_RESERVED_DID 2
/*
* The SUPERVISOR_MODE flag indicates a first level translation which

diff --git a/drivers/iommu/intel/perf.c b/drivers/iommu/intel/perf.c
@@ -9,8 +9,8 @@
*/
#include <linux/spinlock.h>
#include <linux/intel-iommu.h>
#include "iommu.h"
#include "perf.h"
static DEFINE_SPINLOCK(latency_lock);

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
@@ -5,7 +5,6 @@
* Authors: David Woodhouse <dwmw2@infradead.org>
*/
#include <linux/intel-iommu.h>
#include <linux/mmu_notifier.h>
#include <linux/sched.h>
#include <linux/sched/mm.h>
@@ -21,11 +20,12 @@
#include <linux/ioasid.h>
#include <asm/page.h>
#include <asm/fpu/api.h>
#include <trace/events/intel_iommu.h>
#include "iommu.h"
#include "pasid.h"
#include "perf.h"
#include "../iommu-sva-lib.h"
#include "trace.h"
static irqreturn_t prq_event_thread(int irq, void *d);
static void intel_svm_drain_prq(struct device *dev, u32 pasid);
@@ -328,9 +328,9 @@ static struct iommu_sva *intel_svm_bind_mm(struct intel_iommu *iommu,
unsigned int flags)
{
struct device_domain_info *info = dev_iommu_priv_get(dev);
unsigned long iflags, sflags;
struct intel_svm_dev *sdev;
struct intel_svm *svm;
unsigned long sflags;
int ret = 0;
svm = pasid_private_find(mm->pasid);
@@ -394,11 +394,8 @@ static struct iommu_sva *intel_svm_bind_mm(struct intel_iommu *iommu,
sflags = (flags & SVM_FLAG_SUPERVISOR_MODE) ?
PASID_FLAG_SUPERVISOR_MODE : 0;
sflags |= cpu_feature_enabled(X86_FEATURE_LA57) ? PASID_FLAG_FL5LP : 0;
spin_lock_irqsave(&iommu->lock, iflags);
ret = intel_pasid_setup_first_level(iommu, dev, mm->pgd, mm->pasid,
FLPT_DEFAULT_DID, sflags);
spin_unlock_irqrestore(&iommu->lock, iflags);
if (ret)
goto free_sdev;
@@ -544,7 +541,7 @@ static void intel_svm_drain_prq(struct device *dev, u32 pasid)
domain = info->domain;
pdev = to_pci_dev(dev);
sid = PCI_DEVID(info->bus, info->devfn);
did = domain->iommu_did[iommu->seq_id];
did = domain_id_iommu(domain, iommu);
qdep = pci_ats_queue_depth(pdev);
/*

diff --git a/drivers/iommu/intel/trace.c b/drivers/iommu/intel/trace.c
@@ -11,4 +11,4 @@
#include <linux/types.h>
#define CREATE_TRACE_POINTS
#include <trace/events/intel_iommu.h>
#include "trace.h"

diff --git a/include/trace/events/intel_iommu.h b/drivers/iommu/intel/trace.h
@@ -13,7 +13,8 @@
#define _TRACE_INTEL_IOMMU_H
#include <linux/tracepoint.h>
#include <linux/intel-iommu.h>
#include "iommu.h"
#define MSG_MAX 256
@@ -91,4 +92,8 @@ TRACE_EVENT(prq_report,
#endif /* _TRACE_INTEL_IOMMU_H */
/* This part must be outside protection */
#undef TRACE_INCLUDE_PATH
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_PATH ../../drivers/iommu/intel/
#define TRACE_INCLUDE_FILE trace
#include <trace/define_trace.h>

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -182,14 +182,8 @@ static bool arm_v7s_is_mtk_enabled(struct io_pgtable_cfg *cfg)
(cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_EXT);
}
static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
struct io_pgtable_cfg *cfg)
static arm_v7s_iopte to_mtk_iopte(phys_addr_t paddr, arm_v7s_iopte pte)
{
arm_v7s_iopte pte = paddr & ARM_V7S_LVL_MASK(lvl);
if (!arm_v7s_is_mtk_enabled(cfg))
return pte;
if (paddr & BIT_ULL(32))
pte |= ARM_V7S_ATTR_MTK_PA_BIT32;
if (paddr & BIT_ULL(33))
@@ -199,6 +193,17 @@ static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
return pte;
}
static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
struct io_pgtable_cfg *cfg)
{
arm_v7s_iopte pte = paddr & ARM_V7S_LVL_MASK(lvl);
if (arm_v7s_is_mtk_enabled(cfg))
return to_mtk_iopte(paddr, pte);
return pte;
}
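A worked example of the encoding to_mtk_iopte() implements (per the
io-pgtable.h documentation, PA bit 32 is carried in PTE bit 9, and PA
bits 33 and 34 in PTE bits 4 and 5); the value below is illustrative:
/*
 * For a 35-bit PA such as 0x5_F000_0000:
 *   bits [31:0] -> the usual PTE address field (0xF000_0000)
 *   bit  32 = 1 -> PTE bit 9 set (ARM_V7S_ATTR_MTK_PA_BIT32)
 *   bit  33 = 0 -> PTE bit 4 clear
 *   bit  34 = 1 -> PTE bit 5 set
 */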
static phys_addr_t iopte_to_paddr(arm_v7s_iopte pte, int lvl,
struct io_pgtable_cfg *cfg)
{
@@ -240,10 +245,17 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
dma_addr_t dma;
size_t size = ARM_V7S_TABLE_SIZE(lvl, cfg);
void *table = NULL;
gfp_t gfp_l1;
/*
* ARM_MTK_TTBR_EXT extends the translation table base to support
* larger physical addresses.
*/
gfp_l1 = cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
GFP_KERNEL : ARM_V7S_TABLE_GFP_DMA;
if (lvl == 1)
table = (void *)__get_free_pages(
__GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
table = (void *)__get_free_pages(gfp_l1 | __GFP_ZERO, get_order(size));
else if (lvl == 2)
table = kmem_cache_zalloc(data->l2_tables, gfp);
@@ -251,7 +263,8 @@
return NULL;
phys = virt_to_phys(table);
if (phys != (arm_v7s_iopte)phys) {
if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
phys >= (1ULL << cfg->oas) : phys != (arm_v7s_iopte)phys) {
/* Doesn't fit in PTE */
dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
goto out_free;
@@ -457,9 +470,14 @@ static arm_v7s_iopte arm_v7s_install_table(arm_v7s_iopte *table,
arm_v7s_iopte curr,
struct io_pgtable_cfg *cfg)
{
phys_addr_t phys = virt_to_phys(table);
arm_v7s_iopte old, new;
new = virt_to_phys(table) | ARM_V7S_PTE_TYPE_TABLE;
new = phys | ARM_V7S_PTE_TYPE_TABLE;
if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT)
new = to_mtk_iopte(phys, new);
if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
new |= ARM_V7S_ATTR_NS_TABLE;
@@ -779,6 +797,8 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
void *cookie)
{
struct arm_v7s_io_pgtable *data;
slab_flags_t slab_flag;
phys_addr_t paddr;
if (cfg->ias > (arm_v7s_is_mtk_enabled(cfg) ? 34 : ARM_V7S_ADDR_BITS))
return NULL;
@@ -788,7 +808,8 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
IO_PGTABLE_QUIRK_NO_PERMS |
IO_PGTABLE_QUIRK_ARM_MTK_EXT))
IO_PGTABLE_QUIRK_ARM_MTK_EXT |
IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT))
return NULL;
/* If ARM_MTK_4GB is enabled, the NO_PERMS is also expected. */
@@ -796,15 +817,27 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
!(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS))
return NULL;
if ((cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT) &&
!arm_v7s_is_mtk_enabled(cfg))
return NULL;
data = kmalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return NULL;
spin_lock_init(&data->split_lock);
/*
* ARM_MTK_TTBR_EXT extends the translation table base to support
* larger physical addresses.
*/
slab_flag = cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
0 : ARM_V7S_TABLE_SLAB_FLAGS;
data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
ARM_V7S_TABLE_SIZE(2, cfg),
ARM_V7S_TABLE_SIZE(2, cfg),
ARM_V7S_TABLE_SLAB_FLAGS, NULL);
slab_flag, NULL);
if (!data->l2_tables)
goto out_free_data;
@@ -850,12 +883,16 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
wmb();
/* TTBR */
cfg->arm_v7s_cfg.ttbr = virt_to_phys(data->pgd) | ARM_V7S_TTBR_S |
(cfg->coherent_walk ? (ARM_V7S_TTBR_NOS |
ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA)) :
(ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_NC) |
ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_NC)));
paddr = virt_to_phys(data->pgd);
if (arm_v7s_is_mtk_enabled(cfg))
cfg->arm_v7s_cfg.ttbr = paddr | upper_32_bits(paddr);
else
cfg->arm_v7s_cfg.ttbr = paddr | ARM_V7S_TTBR_S |
(cfg->coherent_walk ? (ARM_V7S_TTBR_NOS |
ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA)) :
(ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_NC) |
ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_NC)));
return &data->iop;
out_free_data:

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
@@ -259,7 +259,8 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
return 0;
out_release:
ops->release_device(dev);
if (ops->release_device)
ops->release_device(dev);
out_module_put:
module_put(ops->owner);
@@ -272,7 +273,7 @@ err_free:
int iommu_probe_device(struct device *dev)
{
const struct iommu_ops *ops = dev->bus->iommu_ops;
const struct iommu_ops *ops;
struct iommu_group *group;
int ret;
@@ -313,6 +314,7 @@ int iommu_probe_device(struct device *dev)
mutex_unlock(&group->mutex);
iommu_group_put(group);
ops = dev_iommu_ops(dev);
if (ops->probe_finalize)
ops->probe_finalize(dev);
@@ -336,7 +338,8 @@ void iommu_release_device(struct device *dev)
iommu_device_unlink(dev->iommu->iommu_dev, dev);
ops = dev_iommu_ops(dev);
ops->release_device(dev);
if (ops->release_device)
ops->release_device(dev);
iommu_group_remove_device(dev);
module_put(ops->owner);
@@ -600,7 +603,7 @@ static void iommu_group_release(struct kobject *kobj)
if (group->iommu_data_release)
group->iommu_data_release(group->iommu_data);
ida_simple_remove(&iommu_group_ida, group->id);
ida_free(&iommu_group_ida, group->id);
if (group->default_domain)
iommu_domain_free(group->default_domain);
@@ -641,7 +644,7 @@ struct iommu_group *iommu_group_alloc(void)
INIT_LIST_HEAD(&group->devices);
INIT_LIST_HEAD(&group->entry);
ret = ida_simple_get(&iommu_group_ida, 0, 0, GFP_KERNEL);
ret = ida_alloc(&iommu_group_ida, GFP_KERNEL);
if (ret < 0) {
kfree(group);
return ERR_PTR(ret);
@@ -651,7 +654,7 @@ struct iommu_group *iommu_group_alloc(void)
ret = kobject_init_and_add(&group->kobj, &iommu_group_ktype,
NULL, "%d", group->id);
if (ret) {
ida_simple_remove(&iommu_group_ida, group->id);
ida_free(&iommu_group_ida, group->id);
kobject_put(&group->kobj);
return ERR_PTR(ret);
}
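The IDA conversion above is mechanical: ida_simple_get() and
ida_simple_remove() are deprecated wrappers, and a 'max' argument of 0
to ida_simple_get() already meant "no upper bound", which is what
ida_alloc() does by default. A minimal before/after sketch:
	/* old, deprecated wrappers */
	id = ida_simple_get(&iommu_group_ida, 0, 0, GFP_KERNEL);
	ida_simple_remove(&iommu_group_ida, id);
	/* new API, same semantics: IDs come from [0, INT_MAX] */
	id = ida_alloc(&iommu_group_ida, GFP_KERNEL);
	ida_free(&iommu_group_ida, id);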
@@ -2576,32 +2579,25 @@ void iommu_get_resv_regions(struct device *dev, struct list_head *list)
ops->get_resv_regions(dev, list);
}
void iommu_put_resv_regions(struct device *dev, struct list_head *list)
{
const struct iommu_ops *ops = dev_iommu_ops(dev);
if (ops->put_resv_regions)
ops->put_resv_regions(dev, list);
}
/**
* generic_iommu_put_resv_regions - Reserved region driver helper
* iommu_put_resv_regions - release reserved regions
* @dev: device for which to free reserved regions
* @list: reserved region list for device
*
* IOMMU drivers can use this to implement their .put_resv_regions() callback
* for simple reservations. Memory allocated for each reserved region will be
* freed. If an IOMMU driver allocates additional resources per region, it is
* going to have to implement a custom callback.
* This releases a reserved region list acquired by iommu_get_resv_regions().
*/
void generic_iommu_put_resv_regions(struct device *dev, struct list_head *list)
void iommu_put_resv_regions(struct device *dev, struct list_head *list)
{
struct iommu_resv_region *entry, *next;
list_for_each_entry_safe(entry, next, list, list)
kfree(entry);
list_for_each_entry_safe(entry, next, list, list) {
if (entry->free)
entry->free(dev, entry);
else
kfree(entry);
}
}
EXPORT_SYMBOL(generic_iommu_put_resv_regions);
EXPORT_SYMBOL(iommu_put_resv_regions);
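The per-region free callback exists so that a region can be embedded in
a larger allocation and still be released correctly by
iommu_put_resv_regions(). A minimal, hypothetical driver-side sketch
(the my_* names are illustrative; region fields and error handling are
omitted):
struct my_resv_data {
	struct iommu_resv_region rr;	/* embedded, for container_of() */
	u32 hw_token;			/* extra per-region driver state */
};
static void my_resv_free(struct device *dev, struct iommu_resv_region *region)
{
	kfree(container_of(region, struct my_resv_data, rr));
}
static void my_get_resv_regions(struct device *dev, struct list_head *head)
{
	struct my_resv_data *data = kzalloc(sizeof(*data), GFP_KERNEL);
	if (!data)
		return;
	/* fill in data->rr.start, .length, .prot and .type here */
	data->rr.free = my_resv_free;	/* invoked by iommu_put_resv_regions() */
	list_add_tail(&data->rr.list, head);
}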
struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start,
size_t length, int prot,
@@ -2751,19 +2747,6 @@ int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat)
}
EXPORT_SYMBOL_GPL(iommu_dev_disable_feature);
bool iommu_dev_feature_enabled(struct device *dev, enum iommu_dev_features feat)
{
if (dev->iommu && dev->iommu->iommu_dev) {
const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
if (ops->dev_feat_enabled)
return ops->dev_feat_enabled(dev, feat);
}
return false;
}
EXPORT_SYMBOL_GPL(iommu_dev_feature_enabled);
/**
* iommu_sva_bind_device() - Bind a process address space to a device
* @dev: the device

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
@@ -614,7 +614,12 @@ EXPORT_SYMBOL_GPL(reserve_iova);
* dynamic size tuning described in the paper.
*/
#define IOVA_MAG_SIZE 128
/*
* As kmalloc's buffer sizes are fixed to powers of 2, 127 is chosen so
* that 'struct iova_magazine' is exactly 1024 bytes and no memory is
* wasted on allocator round-up.
*/
#define IOVA_MAG_SIZE 127
#define MAX_GLOBAL_MAGS 32 /* magazines per bin */
struct iova_magazine {
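The arithmetic behind the comment, as a standalone user-space check;
this assumes the magazine layout of one unsigned long counter followed
by IOVA_MAG_SIZE pfn slots. On an LP64 build, 128 slots made the struct
(1 + 128) * 8 = 1032 bytes, which kmalloc rounds up to its 2048-byte
bucket; 127 slots give exactly 1024:
#include <assert.h>
#include <stdio.h>
#define IOVA_MAG_SIZE 127
struct iova_magazine {
	unsigned long size;
	unsigned long pfns[IOVA_MAG_SIZE];
};
int main(void)
{
	/* (1 + 127) * 8 == 1024: a perfect fit for the kmalloc-1024 bucket */
	printf("sizeof(struct iova_magazine) = %zu\n",
	       sizeof(struct iova_magazine));
	assert(sizeof(struct iova_magazine) == 1024);
	return 0;
}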

diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
@@ -394,10 +394,6 @@ static struct iommu_device *msm_iommu_probe_device(struct device *dev)
return &iommu->iommu;
}
static void msm_iommu_release_device(struct device *dev)
{
}
static int msm_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
{
int ret = 0;
@@ -603,7 +599,7 @@ static int insert_iommu_master(struct device *dev,
for (sid = 0; sid < master->num_mids; sid++)
if (master->mids[sid] == spec->args[0]) {
dev_warn(dev, "Stream ID 0x%hx repeated; ignoring\n",
dev_warn(dev, "Stream ID 0x%x repeated; ignoring\n",
sid);
return 0;
}
@@ -677,7 +673,6 @@ fail:
static struct iommu_ops msm_iommu_ops = {
.domain_alloc = msm_iommu_domain_alloc,
.probe_device = msm_iommu_probe_device,
.release_device = msm_iommu_release_device,
.device_group = generic_device_group,
.pgsize_bitmap = MSM_IOMMU_PGSIZES,
.of_xlate = qcom_iommu_of_xlate,

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
@@ -34,7 +34,6 @@
#include <dt-bindings/memory/mtk-memory-port.h>
#define REG_MMU_PT_BASE_ADDR 0x000
#define MMU_PT_ADDR_MASK GENMASK(31, 7)
#define REG_MMU_INVALIDATE 0x020
#define F_ALL_INVLD 0x2
@@ -138,6 +137,7 @@
/* PM and clock always on. e.g. infra iommu */
#define PM_CLK_AO BIT(15)
#define IFA_IOMMU_PCIE_SUPPORT BIT(16)
#define PGTABLE_PA_35_EN BIT(17)
#define MTK_IOMMU_HAS_FLAG_MASK(pdata, _x, mask) \
((((pdata)->flags) & (mask)) == (_x))
@@ -596,6 +596,9 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom,
.iommu_dev = data->dev,
};
if (MTK_IOMMU_HAS_FLAG(data->plat_data, PGTABLE_PA_35_EN))
dom->cfg.quirks |= IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT;
if (MTK_IOMMU_HAS_FLAG(data->plat_data, HAS_4GB_MODE))
dom->cfg.oas = data->enable_4GB ? 33 : 32;
else
@@ -684,8 +687,7 @@ static int mtk_iommu_attach_device(struct iommu_domain *domain,
goto err_unlock;
}
bank->m4u_dom = dom;
writel(dom->cfg.arm_v7s_cfg.ttbr & MMU_PT_ADDR_MASK,
bank->base + REG_MMU_PT_BASE_ADDR);
writel(dom->cfg.arm_v7s_cfg.ttbr, bank->base + REG_MMU_PT_BASE_ADDR);
pm_runtime_put(m4udev);
}
@@ -819,17 +821,12 @@ static void mtk_iommu_release_device(struct device *dev)
struct device *larbdev;
unsigned int larbid;
if (!fwspec || fwspec->ops != &mtk_iommu_ops)
return;
data = dev_iommu_priv_get(dev);
if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_MM)) {
larbid = MTK_M4U_TO_LARB(fwspec->ids[0]);
larbdev = data->larb_imu[larbid].dev;
device_link_remove(dev, larbdev);
}
iommu_fwspec_free(dev);
}
static int mtk_iommu_get_group_id(struct device *dev, const struct mtk_iommu_plat_data *plat_data)
@@ -933,7 +930,6 @@ static const struct iommu_ops mtk_iommu_ops = {
.device_group = mtk_iommu_device_group,
.of_xlate = mtk_iommu_of_xlate,
.get_resv_regions = mtk_iommu_get_resv_regions,
.put_resv_regions = generic_iommu_put_resv_regions,
.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
.owner = THIS_MODULE,
.default_domain_ops = &(const struct iommu_domain_ops) {
@@ -1140,22 +1136,32 @@ static int mtk_iommu_probe(struct platform_device *pdev)
data->protect_base = ALIGN(virt_to_phys(protect), MTK_PROTECT_PA_ALIGN);
if (MTK_IOMMU_HAS_FLAG(data->plat_data, HAS_4GB_MODE)) {
switch (data->plat_data->m4u_plat) {
case M4U_MT2712:
p = "mediatek,mt2712-infracfg";
break;
case M4U_MT8173:
p = "mediatek,mt8173-infracfg";
break;
default:
p = NULL;
infracfg = syscon_regmap_lookup_by_phandle(dev->of_node, "mediatek,infracfg");
if (IS_ERR(infracfg)) {
/*
* Legacy devicetrees will not specify a phandle to
* mediatek,infracfg: in that case, we use the older
* way to retrieve a syscon to infra.
*
* This is for backward compatibility only, hence no
* new compatibles shall be added to this lookup.
*/
switch (data->plat_data->m4u_plat) {
case M4U_MT2712:
p = "mediatek,mt2712-infracfg";
break;
case M4U_MT8173:
p = "mediatek,mt8173-infracfg";
break;
default:
p = NULL;
}
infracfg = syscon_regmap_lookup_by_compatible(p);
if (IS_ERR(infracfg))
return PTR_ERR(infracfg);
}
infracfg = syscon_regmap_lookup_by_compatible(p);
if (IS_ERR(infracfg))
return PTR_ERR(infracfg);
ret = regmap_read(infracfg, REG_INFRA_MISC, &val);
if (ret)
return ret;
@@ -1204,18 +1210,16 @@ static int mtk_iommu_probe(struct platform_device *pdev)
if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_MM)) {
ret = mtk_iommu_mm_dts_parse(dev, &match, data);
if (ret) {
dev_err(dev, "mm dts parse fail(%d).", ret);
dev_err_probe(dev, ret, "mm dts parse fail\n");
goto out_runtime_disable;
}
} else if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_INFRA) &&
data->plat_data->pericfg_comp_str) {
infracfg = syscon_regmap_lookup_by_compatible(data->plat_data->pericfg_comp_str);
if (IS_ERR(infracfg)) {
ret = PTR_ERR(infracfg);
} else if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_INFRA)) {
p = data->plat_data->pericfg_comp_str;
data->pericfg = syscon_regmap_lookup_by_compatible(p);
if (IS_ERR(data->pericfg)) {
ret = PTR_ERR(data->pericfg);
goto out_runtime_disable;
}
data->pericfg = infracfg;
}
platform_set_drvdata(pdev, data);
@@ -1366,8 +1370,7 @@ static int __maybe_unused mtk_iommu_runtime_resume(struct device *dev)
writel_relaxed(reg->int_control[i], base + REG_MMU_INT_CONTROL0);
writel_relaxed(reg->int_main_control[i], base + REG_MMU_INT_MAIN_CONTROL);
writel_relaxed(reg->ivrp_paddr[i], base + REG_MMU_IVRP_PADDR);
writel(m4u_dom->cfg.arm_v7s_cfg.ttbr & MMU_PT_ADDR_MASK,
base + REG_MMU_PT_BASE_ADDR);
writel(m4u_dom->cfg.arm_v7s_cfg.ttbr, base + REG_MMU_PT_BASE_ADDR);
} while (++i < data->plat_data->banks_num);
/*
@@ -1401,7 +1404,7 @@ static const struct mtk_iommu_plat_data mt2712_data = {
static const struct mtk_iommu_plat_data mt6779_data = {
.m4u_plat = M4U_MT6779,
.flags = HAS_SUB_COMM_2BITS | OUT_ORDER_WR_EN | WR_THROT_EN |
MTK_IOMMU_TYPE_MM,
MTK_IOMMU_TYPE_MM | PGTABLE_PA_35_EN,
.inv_sel_reg = REG_MMU_INV_SEL_GEN2,
.banks_num = 1,
.banks_enable = {true},

diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
@@ -532,15 +532,10 @@ static void mtk_iommu_v1_release_device(struct device *dev)
struct device *larbdev;
unsigned int larbid;
if (!fwspec || fwspec->ops != &mtk_iommu_v1_ops)
return;
data = dev_iommu_priv_get(dev);
larbid = mt2701_m4u_to_larb(fwspec->ids[0]);
larbdev = data->larb_imu[larbid].dev;
device_link_remove(dev, larbdev);
iommu_fwspec_free(dev);
}
static int mtk_iommu_v1_hw_init(const struct mtk_iommu_v1_data *data)

diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
@@ -383,16 +383,6 @@ static struct iommu_device *sprd_iommu_probe_device(struct device *dev)
return &sdev->iommu;
}
static void sprd_iommu_release_device(struct device *dev)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
if (!fwspec || fwspec->ops != &sprd_iommu_ops)
return;
iommu_fwspec_free(dev);
}
static struct iommu_group *sprd_iommu_device_group(struct device *dev)
{
struct sprd_iommu_device *sdev = dev_iommu_priv_get(dev);
@@ -417,7 +407,6 @@ static int sprd_iommu_of_xlate(struct device *dev, struct of_phandle_args *args)
static const struct iommu_ops sprd_iommu_ops = {
.domain_alloc = sprd_iommu_domain_alloc,
.probe_device = sprd_iommu_probe_device,
.release_device = sprd_iommu_release_device,
.device_group = sprd_iommu_device_group,
.of_xlate = sprd_iommu_of_xlate,
.pgsize_bitmap = ~0UL << SPRD_IOMMU_PAGE_SHIFT,

diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
@@ -738,8 +738,6 @@ static struct iommu_device *sun50i_iommu_probe_device(struct device *dev)
return &iommu->iommu;
}
static void sun50i_iommu_release_device(struct device *dev) {}
static struct iommu_group *sun50i_iommu_device_group(struct device *dev)
{
struct sun50i_iommu *iommu = sun50i_iommu_from_dev(dev);
@@ -764,7 +762,6 @@ static const struct iommu_ops sun50i_iommu_ops = {
.domain_alloc = sun50i_iommu_domain_alloc,
.of_xlate = sun50i_iommu_of_xlate,
.probe_device = sun50i_iommu_probe_device,
.release_device = sun50i_iommu_release_device,
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = sun50i_iommu_attach_device,
.detach_dev = sun50i_iommu_detach_device,

diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
@@ -246,10 +246,6 @@ static struct iommu_device *gart_iommu_probe_device(struct device *dev)
return &gart_handle->iommu;
}
static void gart_iommu_release_device(struct device *dev)
{
}
static int gart_iommu_of_xlate(struct device *dev,
struct of_phandle_args *args)
{
@@ -273,7 +269,6 @@ static void gart_iommu_sync(struct iommu_domain *domain,
static const struct iommu_ops gart_iommu_ops = {
.domain_alloc = gart_iommu_domain_alloc,
.probe_device = gart_iommu_probe_device,
.release_device = gart_iommu_release_device,
.device_group = generic_device_group,
.pgsize_bitmap = GART_IOMMU_PGSIZES,
.of_xlate = gart_iommu_of_xlate,

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
@@ -864,8 +864,6 @@ static struct iommu_device *tegra_smmu_probe_device(struct device *dev)
return &smmu->iommu;
}
static void tegra_smmu_release_device(struct device *dev) {}
static const struct tegra_smmu_group_soc *
tegra_smmu_find_group(struct tegra_smmu *smmu, unsigned int swgroup)
{
@@ -966,7 +964,6 @@ static int tegra_smmu_of_xlate(struct device *dev,
static const struct iommu_ops tegra_smmu_ops = {
.domain_alloc = tegra_smmu_domain_alloc,
.probe_device = tegra_smmu_probe_device,
.release_device = tegra_smmu_release_device,
.device_group = tegra_smmu_device_group,
.of_xlate = tegra_smmu_of_xlate,
.pgsize_bitmap = SZ_4K,

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
@@ -788,11 +788,13 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
return 0;
}
static int viommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t pgsize, size_t pgcount,
int prot, gfp_t gfp, size_t *mapped)
{
int ret;
u32 flags;
size_t size = pgsize * pgcount;
u64 end = iova + size - 1;
struct virtio_iommu_req_map map;
struct viommu_domain *vdomain = to_viommu_domain(domain);
@@ -823,17 +825,21 @@ static int viommu_map(struct iommu_domain *domain, unsigned long iova,
ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
if (ret)
viommu_del_mappings(vdomain, iova, end);
else if (mapped)
*mapped = size;
return ret;
}
static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
size_t size, struct iommu_iotlb_gather *gather)
static size_t viommu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
size_t pgsize, size_t pgcount,
struct iommu_iotlb_gather *gather)
{
int ret = 0;
size_t unmapped;
struct virtio_iommu_req_unmap unmap;
struct viommu_domain *vdomain = to_viommu_domain(domain);
size_t size = pgsize * pgcount;
unmapped = viommu_del_mappings(vdomain, iova, iova + size - 1);
if (unmapped < size)
@@ -964,7 +970,7 @@ static struct iommu_device *viommu_probe_device(struct device *dev)
return &viommu->iommu;
err_free_dev:
generic_iommu_put_resv_regions(dev, &vdev->resv_regions);
iommu_put_resv_regions(dev, &vdev->resv_regions);
kfree(vdev);
return ERR_PTR(ret);
@@ -981,15 +987,9 @@ static void viommu_probe_finalize(struct device *dev)
static void viommu_release_device(struct device *dev)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct viommu_endpoint *vdev;
struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
if (!fwspec || fwspec->ops != &viommu_ops)
return;
vdev = dev_iommu_priv_get(dev);
generic_iommu_put_resv_regions(dev, &vdev->resv_regions);
iommu_put_resv_regions(dev, &vdev->resv_regions);
kfree(vdev);
}
@@ -1013,13 +1013,12 @@ static struct iommu_ops viommu_ops = {
.release_device = viommu_release_device,
.device_group = viommu_device_group,
.get_resv_regions = viommu_get_resv_regions,
.put_resv_regions = generic_iommu_put_resv_regions,
.of_xlate = viommu_of_xlate,
.owner = THIS_MODULE,
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = viommu_attach_dev,
.map = viommu_map,
.unmap = viommu_unmap,
.map_pages = viommu_map_pages,
.unmap_pages = viommu_unmap_pages,
.iova_to_phys = viommu_iova_to_phys,
.iotlb_sync = viommu_iotlb_sync,
.free = viommu_domain_free,
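For a backend like virtio-iommu that maps one contiguous range per
request, the map_pages() conversion is mostly a calling-convention
change: the core hands over a run of pgcount pages of size pgsize, and
the driver reports progress through *mapped so that partial mappings
can be unwound on error. A minimal sketch of that contract
(sketch_send_map_request() is a hypothetical stand-in for a driver's
real request path):
/* hypothetical: sends one MAP request covering [iova, iova + size - 1] */
static int sketch_send_map_request(struct iommu_domain *domain,
				   unsigned long iova, phys_addr_t paddr,
				   size_t size, int prot);
static int sketch_map_pages(struct iommu_domain *domain, unsigned long iova,
			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
			    int prot, gfp_t gfp, size_t *mapped)
{
	size_t size = pgsize * pgcount;	/* one request covers the whole run */
	int ret;
	ret = sketch_send_map_request(domain, iova, paddr, size, prot);
	if (!ret && mapped)
		*mapped = size;		/* all-or-nothing backend */
	return ret;
}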

diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h
@@ -33,10 +33,14 @@ struct irq_domain *iort_get_device_domain(struct device *dev, u32 id,
enum irq_domain_bus_token bus_token);
void acpi_configure_pmsi_domain(struct device *dev);
int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id);
void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode,
struct list_head *head);
void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode,
struct list_head *head);
/* IOMMU interface */
int iort_dma_get_ranges(struct device *dev, u64 *size);
int iort_iommu_configure_id(struct device *dev, const u32 *id_in);
int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head);
void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head);
phys_addr_t acpi_iort_dma_get_max_cpu_address(void);
#else
static inline void acpi_iort_init(void) { }
@@ -46,14 +50,18 @@ static inline struct irq_domain *iort_get_device_domain(
struct device *dev, u32 id, enum irq_domain_bus_token bus_token)
{ return NULL; }
static inline void acpi_configure_pmsi_domain(struct device *dev) { }
static inline
void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { }
static inline
void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { }
/* IOMMU interface */
static inline int iort_dma_get_ranges(struct device *dev, u64 *size)
{ return -ENODEV; }
static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in)
{ return -ENODEV; }
static inline
int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head)
{ return 0; }
void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head)
{ }
static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void)
{ return PHYS_ADDR_MAX; }

diff --git a/include/linux/amd-iommu.h b/include/linux/amd-iommu.h
@@ -206,4 +206,8 @@ int amd_iommu_pc_get_reg(struct amd_iommu *iommu, u8 bank, u8 cntr, u8 fxn,
u64 *value);
struct amd_iommu *get_amd_iommu(unsigned int idx);
#ifdef CONFIG_AMD_MEM_ENCRYPT
int amd_iommu_snp_enable(void);
#endif
#endif /* _ASM_X86_AMD_IOMMU_H */

diff --git a/include/linux/dmar.h b/include/linux/dmar.h
@@ -18,11 +18,7 @@
struct acpi_dmar_header;
#ifdef CONFIG_X86
# define DMAR_UNITS_SUPPORTED MAX_IO_APICS
#else
# define DMAR_UNITS_SUPPORTED 64
#endif
#define DMAR_UNITS_SUPPORTED 1024
/* DMAR Flags */
#define DMAR_INTR_REMAP 0x1

diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
@@ -74,17 +74,22 @@ struct io_pgtable_cfg {
* to support up to 35 bits PA where the bit32, bit33 and bit34 are
* encoded in the bit9, bit4 and bit5 of the PTE respectively.
*
* IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT: (ARM v7s format) MediaTek IOMMUs
* extend the translation table base to support up to 35 bits of PA;
* the encoding format is the same as IO_PGTABLE_QUIRK_ARM_MTK_EXT.
*
* IO_PGTABLE_QUIRK_ARM_TTBR1: (ARM LPAE format) Configure the table
* for use in the upper half of a split address space.
*
* IO_PGTABLE_QUIRK_ARM_OUTER_WBWA: Override the outer-cacheability
* attributes set in the TCR for a non-coherent page-table walker.
*/
#define IO_PGTABLE_QUIRK_ARM_NS BIT(0)
#define IO_PGTABLE_QUIRK_NO_PERMS BIT(1)
#define IO_PGTABLE_QUIRK_ARM_MTK_EXT BIT(3)
#define IO_PGTABLE_QUIRK_ARM_TTBR1 BIT(5)
#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA BIT(6)
#define IO_PGTABLE_QUIRK_ARM_NS BIT(0)
#define IO_PGTABLE_QUIRK_NO_PERMS BIT(1)
#define IO_PGTABLE_QUIRK_ARM_MTK_EXT BIT(3)
#define IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT BIT(4)
#define IO_PGTABLE_QUIRK_ARM_TTBR1 BIT(5)
#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA BIT(6)
unsigned long quirks;
unsigned long pgsize_bitmap;
unsigned int ias;
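A sketch of how a driver opts in when building its io_pgtable config;
the field values are illustrative, and the alloc-time checks only
accept ARM_MTK_TTBR_EXT together with IO_PGTABLE_QUIRK_ARM_MTK_EXT
(which in turn requires IO_PGTABLE_QUIRK_NO_PERMS):
struct io_pgtable_cfg cfg = {
	.quirks		= IO_PGTABLE_QUIRK_NO_PERMS |
			  IO_PGTABLE_QUIRK_ARM_MTK_EXT |
			  IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT,
	.pgsize_bitmap	= SZ_4K | SZ_64K | SZ_1M | SZ_16M,	/* illustrative */
	.ias		= 34,					/* illustrative */
	.oas		= 35,		/* the PA width TTBR_EXT targets */
};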

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
@@ -135,6 +135,7 @@ enum iommu_resv_type {
* @length: Length of the region in bytes
* @prot: IOMMU Protection flags (READ/WRITE/...)
* @type: Type of the reserved region
* @free: Callback to free associated memory allocations
*/
struct iommu_resv_region {
struct list_head list;
@@ -142,6 +143,15 @@ struct iommu_resv_region {
size_t length;
int prot;
enum iommu_resv_type type;
void (*free)(struct device *dev, struct iommu_resv_region *region);
};
struct iommu_iort_rmr_data {
struct iommu_resv_region rr;
/* Stream IDs associated with IORT RMR entry */
const u32 *sids;
u32 num_sids;
};
/**
@@ -154,8 +164,7 @@ struct iommu_resv_region {
* supported, this feature must be enabled before and
* disabled after %IOMMU_DEV_FEAT_SVA.
*
* Device drivers query whether a feature is supported using
* iommu_dev_has_feature(), and enable it using iommu_dev_enable_feature().
* Device drivers enable a feature using iommu_dev_enable_feature().
*/
enum iommu_dev_features {
IOMMU_DEV_FEAT_SVA,
@@ -200,13 +209,11 @@ struct iommu_iotlb_gather {
* group and attached to the groups domain
* @device_group: find iommu group for a particular device
* @get_resv_regions: Request list of reserved regions for a device
* @put_resv_regions: Free list of reserved regions for a device
* @of_xlate: add OF master IDs to iommu grouping
* @is_attach_deferred: Check if domain attach should be deferred from iommu
* driver init to device driver init (default no)
* @dev_has/enable/disable_feat: per device entries to check/enable/disable
* iommu specific features.
* @dev_feat_enabled: check enabled feature
* @sva_bind: Bind process address space to device
* @sva_unbind: Unbind process address space from device
* @sva_get_pasid: Get PASID associated to a SVA handle
@@ -232,14 +239,11 @@ struct iommu_ops {
/* Request/Free a list of reserved regions for a device */
void (*get_resv_regions)(struct device *dev, struct list_head *list);
void (*put_resv_regions)(struct device *dev, struct list_head *list);
int (*of_xlate)(struct device *dev, struct of_phandle_args *args);
bool (*is_attach_deferred)(struct device *dev);
/* Per device IOMMU features */
bool (*dev_has_feat)(struct device *dev, enum iommu_dev_features f);
bool (*dev_feat_enabled)(struct device *dev, enum iommu_dev_features f);
int (*dev_enable_feat)(struct device *dev, enum iommu_dev_features f);
int (*dev_disable_feat)(struct device *dev, enum iommu_dev_features f);
@@ -448,8 +452,6 @@ extern void iommu_set_fault_handler(struct iommu_domain *domain,
extern void iommu_get_resv_regions(struct device *dev, struct list_head *list);
extern void iommu_put_resv_regions(struct device *dev, struct list_head *list);
extern void generic_iommu_put_resv_regions(struct device *dev,
struct list_head *list);
extern void iommu_set_default_passthrough(bool cmd_line);
extern void iommu_set_default_translated(bool cmd_line);
extern bool iommu_default_passthrough(void);
@@ -662,7 +664,6 @@ void iommu_release_device(struct device *dev);
int iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features f);
int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features f);
bool iommu_dev_feature_enabled(struct device *dev, enum iommu_dev_features f);
struct iommu_sva *iommu_sva_bind_device(struct device *dev,
struct mm_struct *mm,
@@ -989,12 +990,6 @@ const struct iommu_ops *iommu_ops_from_fwnode(struct fwnode_handle *fwnode)
return NULL;
}
static inline bool
iommu_dev_feature_enabled(struct device *dev, enum iommu_dev_features feat)
{
return false;
}
static inline int
iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat)
{

diff --git a/include/linux/tboot.h b/include/linux/tboot.h
@@ -126,7 +126,6 @@ extern void tboot_probe(void);
extern void tboot_shutdown(u32 shutdown_type);
extern struct acpi_table_header *tboot_get_dmar_table(
struct acpi_table_header *dmar_tbl);
extern int tboot_force_iommu(void);
#else
@@ -136,7 +135,6 @@ extern int tboot_force_iommu(void);
#define tboot_sleep(sleep_state, pm1a_control, pm1b_control) \
do { } while (0)
#define tboot_get_dmar_table(dmar_tbl) (dmar_tbl)
#define tboot_force_iommu() 0
#endif /* !CONFIG_INTEL_TXT */