IOMMU Updates for Linux v5.19

Including:
 
 	- Intel VT-d driver updates
 	  - Domain force snooping improvement.
 	  - Cleanups, no intentional functional changes.
 
 	- ARM SMMU driver updates
 	  - Add new Qualcomm device-tree compatible strings
 	  - Add new Nvidia device-tree compatible string for Tegra234
 	  - Fix UAF in SMMUv3 shared virtual addressing code
 	  - Force identity-mapped domains for users of ye olde SMMU
 	    legacy binding
 	  - Minor cleanups
 
 	- Patches to fix a BUG_ON in the vfio_iommu_group_notifier
 	  - Groundwork for upcoming iommufd framework
 	  - Introduction of DMA ownership so that an entire IOMMU group
 	    is either controlled by the kernel or by user-space
 
 	- MT8195 and MT8186 support in the Mediatek IOMMU driver
 
 	- Patches to make forcing of cache-coherent DMA more coherent
 	  between IOMMU drivers
 
 	- Fixes for thunderbolt device DMA protection
 
 	- Various smaller fixes and cleanups
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEr9jSbILcajRFYWYyK/BELZcBGuMFAmKWCbUACgkQK/BELZcB
 GuPHmRAAuoH9iK/jrC3SgrqpBfH2iRN7ovIX8dFvgbQWX27lhXF4gvj2/nYdIvPK
 75j/LmdibuzV3Iez4kjbGKNG1AikwK3dKIH21a84f3ctnoamQyL6nMfCVBFaVD/D
 kvPpTHyjbGPNf6KZyWQdkJ5DXD1aoG1DKkBnslH5pTNPqGuNqbcnRTg0YxiJFLBv
 5w2B6jL06XRzunh+Sp1Dbj+po8ROjLRCEU+tdrndO8W/Dyp6+ZNNuxL9/3BM9zMj
 py0M4piFtGnhmJSdym1eeHm7r1YRjkZw+MN+e8NcrcSihmDutEWo7nRRxA5uVaa+
 3O2DNERqCvQUYxfNRUOKwzV8v51GYQHEPhvOe/MLgaEQDmDmlF2dHNGm93eCMdrv
 m1cT011oU7pa4qHomwLyTJxSsR7FzJ37igq/WeY++MBhl+frqfzEQPVxF+W7GLb8
 QvT/+woCPzLVpJbE7s0FUD4nbPd8c1dAz4+HO1DajxILIOTq1bnPIorSjgXODRjq
 yzsiP1rAg0L0PsL7pXn3cPMzNCE//xtOsRsAGmaVv6wBoMLyWVFCU/wjPEdjrSWA
 nXpAuCL84uxCEl/KLYMsg9UhjT6ko7CuKdsybIG9zNIiUau43uSqgTen0xCpYt0i
 m//O/X3tPyxmoLKRW+XVehGOrBZW+qrQny6hk/Zex+6UJQqVMTA=
 =W0hj
 -----END PGP SIGNATURE-----

Merge tag 'iommu-updates-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:

 - Intel VT-d driver updates:
     - Domain force snooping improvement.
     - Cleanups, no intentional functional changes.

 - ARM SMMU driver updates:
     - Add new Qualcomm device-tree compatible strings
     - Add new Nvidia device-tree compatible string for Tegra234
     - Fix UAF in SMMUv3 shared virtual addressing code
     - Force identity-mapped domains for users of ye olde SMMU legacy
       binding
     - Minor cleanups

 - Fix a BUG_ON in the vfio_iommu_group_notifier:
     - Groundwork for upcoming iommufd framework
     - Introduction of DMA ownership so that an entire IOMMU group is
       either controlled by the kernel or by user-space (a usage sketch
       of the new interface follows this list)

 - MT8195 and MT8186 support in the Mediatek IOMMU driver

 - Make forcing of cache-coherent DMA more coherent between IOMMU
   drivers

 - Fixes for thunderbolt device DMA protection

 - Various smaller fixes and cleanups
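
A minimal usage sketch of the new DMA-ownership interface (implemented in the
iommu core hunks further down). my_open()/my_release() are hypothetical
stand-ins for a VFIO-style caller; only the iommu_group_*_dma_owner() calls
come from this series:

#include <linux/iommu.h>

static int my_open(struct iommu_group *group, void *owner_cookie)
{
	int ret;

	/*
	 * Detach the group from the kernel default domain and mark it
	 * user-owned. Fails with -EPERM/-EBUSY while a kernel driver
	 * still does DMA through the default domain.
	 */
	ret = iommu_group_claim_dma_owner(group, owner_cookie);
	if (ret)
		return ret;

	/* ... attach a user-managed domain and hand the group to user-space ... */
	return 0;
}

static void my_release(struct iommu_group *group)
{
	/* Put the group back under kernel control (default/blocking domain). */
	iommu_group_release_dma_owner(group);
}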

* tag 'iommu-updates-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (88 commits)
  iommu/amd: Increase timeout waiting for GA log enablement
  iommu/s390: Tolerate repeat attach_dev calls
  iommu/vt-d: Remove hard coding PGSNP bit in PASID entries
  iommu/vt-d: Remove domain_update_iommu_snooping()
  iommu/vt-d: Check domain force_snooping against attached devices
  iommu/vt-d: Block force-snoop domain attaching if no SC support
  iommu/vt-d: Size Page Request Queue to avoid overflow condition
  iommu/vt-d: Fold dmar_insert_one_dev_info() into its caller
  iommu/vt-d: Change return type of dmar_insert_one_dev_info()
  iommu/vt-d: Remove unneeded validity check on dev
  iommu/dma: Explicitly sort PCI DMA windows
  iommu/dma: Fix iova map result check bug
  iommu/mediatek: Fix NULL pointer dereference when printing dev_name
  iommu: iommu_group_claim_dma_owner() must always assign a domain
  iommu/arm-smmu: Force identity domains for legacy binding
  iommu/arm-smmu: Support Tegra234 SMMU
  dt-bindings: arm-smmu: Add compatible for Tegra234 SOC
  dt-bindings: arm-smmu: Document nvidia,memory-controller property
  iommu/arm-smmu-qcom: Add SC8280XP support
  dt-bindings: arm-smmu: Add compatible for Qualcomm SC8280XP
  ...
Linus Torvalds 2022-05-31 09:56:54 -07:00
commit e1cbc3b96a
53 changed files with 2316 additions and 1035 deletions


@ -37,8 +37,10 @@ properties:
- qcom,sc7180-smmu-500
- qcom,sc7280-smmu-500
- qcom,sc8180x-smmu-500
- qcom,sc8280xp-smmu-500
- qcom,sdm845-smmu-500
- qcom,sdx55-smmu-500
- qcom,sdx65-smmu-500
- qcom,sm6350-smmu-500
- qcom,sm8150-smmu-500
- qcom,sm8250-smmu-500
@ -62,8 +64,9 @@ properties:
for improved performance.
items:
- enum:
- nvidia,tegra194-smmu
- nvidia,tegra186-smmu
- nvidia,tegra194-smmu
- nvidia,tegra234-smmu
- const: nvidia,smmu-500
- items:
- const: arm,mmu-500
@ -157,6 +160,17 @@ properties:
power-domains:
maxItems: 1
nvidia,memory-controller:
description: |
A phandle to the memory controller on NVIDIA Tegra186 and later SoCs.
The memory controller needs to be programmed with a mapping of memory
client IDs to ARM SMMU stream IDs.
If this property is absent, the mapping programmed by early firmware
will be used and it is not guaranteed that IOMMU translations will be
enabled for any given device.
$ref: /schemas/types.yaml#/definitions/phandle
required:
- compatible
- reg
@ -172,13 +186,20 @@ allOf:
compatible:
contains:
enum:
- nvidia,tegra194-smmu
- nvidia,tegra186-smmu
- nvidia,tegra194-smmu
- nvidia,tegra234-smmu
then:
properties:
reg:
minItems: 1
maxItems: 2
# The reference to the memory controller is required to ensure that the
# memory client to stream ID mapping can be done synchronously with the
# IOMMU attachment.
required:
- nvidia,memory-controller
else:
properties:
reg:


@ -76,7 +76,11 @@ properties:
- mediatek,mt8167-m4u # generation two
- mediatek,mt8173-m4u # generation two
- mediatek,mt8183-m4u # generation two
- mediatek,mt8186-iommu-mm # generation two
- mediatek,mt8192-m4u # generation two
- mediatek,mt8195-iommu-vdo # generation two
- mediatek,mt8195-iommu-vpp # generation two
- mediatek,mt8195-iommu-infra # generation two
- description: mt7623 generation one
items:
@ -119,7 +123,9 @@ properties:
dt-binding/memory/mt8167-larb-port.h for mt8167,
dt-binding/memory/mt8173-larb-port.h for mt8173,
dt-binding/memory/mt8183-larb-port.h for mt8183,
dt-binding/memory/mt8186-memory-port.h for mt8186,
dt-binding/memory/mt8192-larb-port.h for mt8192.
dt-binding/memory/mt8195-memory-port.h for mt8195.
power-domains:
maxItems: 1
@ -128,7 +134,6 @@ required:
- compatible
- reg
- interrupts
- mediatek,larbs
- '#iommu-cells'
allOf:
@ -140,7 +145,10 @@ allOf:
- mediatek,mt2701-m4u
- mediatek,mt2712-m4u
- mediatek,mt8173-m4u
- mediatek,mt8186-iommu-mm
- mediatek,mt8192-m4u
- mediatek,mt8195-iommu-vdo
- mediatek,mt8195-iommu-vpp
then:
required:
@ -150,12 +158,26 @@ allOf:
properties:
compatible:
enum:
- mediatek,mt8186-iommu-mm
- mediatek,mt8192-m4u
- mediatek,mt8195-iommu-vdo
- mediatek,mt8195-iommu-vpp
then:
required:
- power-domains
- if: # The IOMMUs don't have larbs.
not:
properties:
compatible:
contains:
const: mediatek,mt8195-iommu-infra
then:
required:
- mediatek,larbs
additionalProperties: false
examples:
@ -173,13 +195,3 @@ examples:
<&larb3>, <&larb4>, <&larb5>;
#iommu-cells = <1>;
};
- |
#include <dt-bindings/memory/mt8173-larb-port.h>
/* Example for a client device */
display {
compatible = "mediatek,mt8173-disp";
iommus = <&iommu M4U_PORT_DISP_OVL0>,
<&iommu M4U_PORT_DISP_RDMA0>;
};


@ -86,16 +86,6 @@ examples:
- |
#include <dt-bindings/clock/exynos5250.h>
gsc_0: scaler@13e00000 {
compatible = "samsung,exynos5-gsc";
reg = <0x13e00000 0x1000>;
interrupts = <0 85 0>;
power-domains = <&pd_gsc>;
clocks = <&clock CLK_GSCL0>;
clock-names = "gscl";
iommus = <&sysmmu_gsc0>;
};
sysmmu_gsc0: iommu@13e80000 {
compatible = "samsung,exynos-sysmmu";
reg = <0x13E80000 0x1000>;


@ -1375,14 +1375,6 @@ L: linux-input@vger.kernel.org
S: Odd fixes
F: drivers/input/mouse/bcm5974.c
APPLE DART IOMMU DRIVER
M: Sven Peter <sven@svenpeter.dev>
R: Alyssa Rosenzweig <alyssa@rosenzweig.io>
L: iommu@lists.linux-foundation.org
S: Maintained
F: Documentation/devicetree/bindings/iommu/apple,dart.yaml
F: drivers/iommu/apple-dart.c
APPLE PCIE CONTROLLER DRIVER
M: Alyssa Rosenzweig <alyssa@rosenzweig.io>
M: Marc Zyngier <maz@kernel.org>
@ -1834,6 +1826,7 @@ F: Documentation/devicetree/bindings/arm/apple/*
F: Documentation/devicetree/bindings/clock/apple,nco.yaml
F: Documentation/devicetree/bindings/i2c/apple,i2c.yaml
F: Documentation/devicetree/bindings/interrupt-controller/apple,*
F: Documentation/devicetree/bindings/iommu/apple,dart.yaml
F: Documentation/devicetree/bindings/iommu/apple,sart.yaml
F: Documentation/devicetree/bindings/mailbox/apple,mailbox.yaml
F: Documentation/devicetree/bindings/nvme/apple,nvme-ans.yaml
@ -1845,6 +1838,7 @@ F: arch/arm64/boot/dts/apple/
F: drivers/clk/clk-apple-nco.c
F: drivers/i2c/busses/i2c-pasemi-core.c
F: drivers/i2c/busses/i2c-pasemi-platform.c
F: drivers/iommu/apple-dart.c
F: drivers/irqchip/irq-apple-aic.c
F: drivers/mailbox/apple-mailbox.c
F: drivers/nvme/host/apple.c


@ -20,6 +20,10 @@
#include <linux/platform_device.h>
#include <linux/reset.h>
#include <linux/of_irq.h>
#include <linux/of_device.h>
#include <linux/acpi.h>
#include <linux/iommu.h>
#include <linux/dma-map-ops.h>
#define to_amba_driver(d) container_of(d, struct amba_driver, drv)
@ -273,6 +277,36 @@ static void amba_shutdown(struct device *dev)
drv->shutdown(to_amba_device(dev));
}
static int amba_dma_configure(struct device *dev)
{
struct amba_driver *drv = to_amba_driver(dev->driver);
enum dev_dma_attr attr;
int ret = 0;
if (dev->of_node) {
ret = of_dma_configure(dev, dev->of_node, true);
} else if (has_acpi_companion(dev)) {
attr = acpi_get_dma_attr(to_acpi_device_node(dev->fwnode));
ret = acpi_dma_configure(dev, attr);
}
if (!ret && !drv->driver_managed_dma) {
ret = iommu_device_use_default_domain(dev);
if (ret)
arch_teardown_dma_ops(dev);
}
return ret;
}
static void amba_dma_cleanup(struct device *dev)
{
struct amba_driver *drv = to_amba_driver(dev->driver);
if (!drv->driver_managed_dma)
iommu_device_unuse_default_domain(dev);
}
#ifdef CONFIG_PM
/*
* Hooks to provide runtime PM of the pclk (bus clock). It is safe to
@ -341,7 +375,8 @@ struct bus_type amba_bustype = {
.probe = amba_probe,
.remove = amba_remove,
.shutdown = amba_shutdown,
.dma_configure = platform_dma_configure,
.dma_configure = amba_dma_configure,
.dma_cleanup = amba_dma_cleanup,
.pm = &amba_pm,
};
EXPORT_SYMBOL_GPL(amba_bustype);
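
For context, a sketch of the driver side of the driver_managed_dma flag tested
in amba_dma_configure()/amba_dma_cleanup() above. The driver below is
hypothetical; a real user would be a VFIO-style driver that claims the IOMMU
group itself instead of using the kernel DMA API:

#include <linux/amba/bus.h>

static int my_vfio_amba_probe(struct amba_device *adev, const struct amba_id *id)
{
	/* Hypothetical: claim DMA ownership of the group, set up user DMA. */
	return 0;
}

static void my_vfio_amba_remove(struct amba_device *adev)
{
	/* Hypothetical teardown. */
}

static struct amba_driver my_vfio_amba_driver = {
	.drv = {
		.name = "my-vfio-amba",
	},
	/* .id_table omitted in this sketch */
	.probe	= my_vfio_amba_probe,
	.remove	= my_vfio_amba_remove,
	/*
	 * This driver manages DMA itself, so amba_dma_configure() must not
	 * pin the device to the IOMMU default domain at bind time.
	 */
	.driver_managed_dma = true,
};
module_amba_driver(my_vfio_amba_driver);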


@ -671,6 +671,8 @@ sysfs_failed:
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
if (dev->bus && dev->bus->dma_cleanup)
dev->bus->dma_cleanup(dev);
pinctrl_bind_failed:
device_links_no_driver(dev);
device_unbind_cleanup(dev);
@ -1199,6 +1201,9 @@ static void __device_release_driver(struct device *dev, struct device *parent)
device_remove(dev);
if (dev->bus && dev->bus->dma_cleanup)
dev->bus->dma_cleanup(dev);
device_links_driver_cleanup(dev);
device_unbind_cleanup(dev);


@ -30,6 +30,8 @@
#include <linux/property.h>
#include <linux/kmemleak.h>
#include <linux/types.h>
#include <linux/iommu.h>
#include <linux/dma-map-ops.h>
#include "base.h"
#include "power/power.h"
@ -1454,9 +1456,9 @@ static void platform_shutdown(struct device *_dev)
drv->shutdown(dev);
}
int platform_dma_configure(struct device *dev)
static int platform_dma_configure(struct device *dev)
{
struct platform_driver *drv = to_platform_driver(dev->driver);
enum dev_dma_attr attr;
int ret = 0;
@ -1467,9 +1469,23 @@ int platform_dma_configure(struct device *dev)
ret = acpi_dma_configure(dev, attr);
}
if (!ret && !drv->driver_managed_dma) {
ret = iommu_device_use_default_domain(dev);
if (ret)
arch_teardown_dma_ops(dev);
}
return ret;
}
static void platform_dma_cleanup(struct device *dev)
{
struct platform_driver *drv = to_platform_driver(dev->driver);
if (!drv->driver_managed_dma)
iommu_device_unuse_default_domain(dev);
}
static const struct dev_pm_ops platform_dev_pm_ops = {
SET_RUNTIME_PM_OPS(pm_generic_runtime_suspend, pm_generic_runtime_resume, NULL)
USE_PLATFORM_PM_SLEEP_OPS
@ -1484,6 +1500,7 @@ struct bus_type platform_bus_type = {
.remove = platform_remove,
.shutdown = platform_shutdown,
.dma_configure = platform_dma_configure,
.dma_cleanup = platform_dma_cleanup,
.pm = &platform_dev_pm_ops,
};
EXPORT_SYMBOL_GPL(platform_bus_type);


@ -21,6 +21,7 @@
#include <linux/dma-mapping.h>
#include <linux/acpi.h>
#include <linux/iommu.h>
#include <linux/dma-map-ops.h>
#include "fsl-mc-private.h"
@ -140,15 +141,33 @@ static int fsl_mc_dma_configure(struct device *dev)
{
struct device *dma_dev = dev;
struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver);
u32 input_id = mc_dev->icid;
int ret;
while (dev_is_fsl_mc(dma_dev))
dma_dev = dma_dev->parent;
if (dev_of_node(dma_dev))
return of_dma_configure_id(dev, dma_dev->of_node, 0, &input_id);
ret = of_dma_configure_id(dev, dma_dev->of_node, 0, &input_id);
else
ret = acpi_dma_configure_id(dev, DEV_DMA_COHERENT, &input_id);
return acpi_dma_configure_id(dev, DEV_DMA_COHERENT, &input_id);
if (!ret && !mc_drv->driver_managed_dma) {
ret = iommu_device_use_default_domain(dev);
if (ret)
arch_teardown_dma_ops(dev);
}
return ret;
}
static void fsl_mc_dma_cleanup(struct device *dev)
{
struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver);
if (!mc_drv->driver_managed_dma)
iommu_device_unuse_default_domain(dev);
}
static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
@ -312,6 +331,7 @@ struct bus_type fsl_mc_bus_type = {
.match = fsl_mc_bus_match,
.uevent = fsl_mc_bus_uevent,
.dma_configure = fsl_mc_dma_configure,
.dma_cleanup = fsl_mc_dma_cleanup,
.dev_groups = fsl_mc_dev_groups,
.bus_groups = fsl_mc_bus_groups,
};


@ -407,6 +407,7 @@
/* IOMMU IVINFO */
#define IOMMU_IVINFO_OFFSET 36
#define IOMMU_IVINFO_EFRSUP BIT(0)
#define IOMMU_IVINFO_DMA_REMAP BIT(1)
/* IOMMU Feature Reporting Field (for IVHD type 10h */
#define IOMMU_FEAT_GASUP_SHIFT 6
@ -449,6 +450,9 @@ extern struct irq_remap_table **irq_lookup_table;
/* Interrupt remapping feature used? */
extern bool amd_iommu_irq_remap;
/* IVRS indicates that pre-boot remapping was enabled */
extern bool amdr_ivrs_remap_support;
/* kmem_cache to get tables with 128 byte alignement */
extern struct kmem_cache *amd_iommu_irq_cache;


@ -83,7 +83,7 @@
#define ACPI_DEVFLAG_LINT1 0x80
#define ACPI_DEVFLAG_ATSDIS 0x10000000
#define LOOP_TIMEOUT 100000
#define LOOP_TIMEOUT 2000000
/*
* ACPI table definitions
*
@ -181,6 +181,7 @@ u32 amd_iommu_max_pasid __read_mostly = ~0;
bool amd_iommu_v2_present __read_mostly;
static bool amd_iommu_pc_present __read_mostly;
bool amdr_ivrs_remap_support __read_mostly;
bool amd_iommu_force_isolation __read_mostly;
@ -325,6 +326,8 @@ static void __init early_iommu_features_init(struct amd_iommu *iommu,
{
if (amd_iommu_ivinfo & IOMMU_IVINFO_EFRSUP)
iommu->features = h->efr_reg;
if (amd_iommu_ivinfo & IOMMU_IVINFO_DMA_REMAP)
amdr_ivrs_remap_support = true;
}
/* Access to l1 and l2 indexed register spaces */
@ -1985,8 +1988,7 @@ static int __init amd_iommu_init_pci(void)
for_each_iommu(iommu)
iommu_flush_all_caches(iommu);
if (!ret)
print_iommu_info();
print_iommu_info();
out:
return ret;


@ -1838,20 +1838,10 @@ void amd_iommu_domain_update(struct protection_domain *domain)
amd_iommu_domain_flush_complete(domain);
}
static void __init amd_iommu_init_dma_ops(void)
{
if (iommu_default_passthrough() || sme_me_mask)
x86_swiotlb_enable = true;
else
x86_swiotlb_enable = false;
}
int __init amd_iommu_init_api(void)
{
int err;
amd_iommu_init_dma_ops();
err = bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
if (err)
return err;
@ -2165,6 +2155,8 @@ static bool amd_iommu_capable(enum iommu_cap cap)
return (irq_remapping_enabled == 1);
case IOMMU_CAP_NOEXEC:
return false;
case IOMMU_CAP_PRE_BOOT_PROTECTION:
return amdr_ivrs_remap_support;
default:
break;
}
@ -2274,6 +2266,12 @@ static int amd_iommu_def_domain_type(struct device *dev)
return 0;
}
static bool amd_iommu_enforce_cache_coherency(struct iommu_domain *domain)
{
/* IOMMU_PTE_FC is always set */
return true;
}
const struct iommu_ops amd_iommu_ops = {
.capable = amd_iommu_capable,
.domain_alloc = amd_iommu_domain_alloc,
@ -2296,6 +2294,7 @@ const struct iommu_ops amd_iommu_ops = {
.flush_iotlb_all = amd_iommu_flush_iotlb_all,
.iotlb_sync = amd_iommu_iotlb_sync,
.free = amd_iommu_domain_free,
.enforce_cache_coherency = amd_iommu_enforce_cache_coherency,
}
};


@ -956,6 +956,7 @@ static void __exit amd_iommu_v2_exit(void)
{
struct device_state *dev_state, *next;
unsigned long flags;
LIST_HEAD(freelist);
if (!amd_iommu_v2_supported())
return;
@ -975,11 +976,20 @@ static void __exit amd_iommu_v2_exit(void)
put_device_state(dev_state);
list_del(&dev_state->list);
free_device_state(dev_state);
list_add_tail(&dev_state->list, &freelist);
}
spin_unlock_irqrestore(&state_lock, flags);
/*
* Since free_device_state waits on the count to be zero,
* we need to free dev_state outside the spinlock.
*/
list_for_each_entry_safe(dev_state, next, &freelist, list) {
list_del(&dev_state->list);
free_device_state(dev_state);
}
destroy_workqueue(iommu_wq);
}


@ -6,6 +6,7 @@
#include <linux/mm.h>
#include <linux/mmu_context.h>
#include <linux/mmu_notifier.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>
#include "arm-smmu-v3.h"
@ -96,9 +97,14 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
struct arm_smmu_ctx_desc *cd;
struct arm_smmu_ctx_desc *ret = NULL;
/* Don't free the mm until we release the ASID */
mmgrab(mm);
asid = arm64_mm_context_get(mm);
if (!asid)
return ERR_PTR(-ESRCH);
if (!asid) {
err = -ESRCH;
goto out_drop_mm;
}
cd = kzalloc(sizeof(*cd), GFP_KERNEL);
if (!cd) {
@ -165,6 +171,8 @@ out_free_cd:
kfree(cd);
out_put_context:
arm64_mm_context_put(mm);
out_drop_mm:
mmdrop(mm);
return err < 0 ? ERR_PTR(err) : ret;
}
@ -173,6 +181,7 @@ static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd)
if (arm_smmu_free_asid(cd)) {
/* Unpin ASID */
arm64_mm_context_put(cd->mm);
mmdrop(cd->mm);
kfree(cd);
}
}
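
The UAF fix above pins the mm for as long as the shared context descriptor
references it. A minimal sketch of the mmgrab()/mmdrop() pairing it relies on
(struct my_cd and the helpers are hypothetical):

#include <linux/sched/mm.h>

struct my_cd {
	struct mm_struct *mm;
};

static void my_cd_bind(struct my_cd *cd, struct mm_struct *mm)
{
	/* Pin the mm_struct itself (mm_count); paired with mmdrop() below. */
	mmgrab(mm);
	cd->mm = mm;
}

static void my_cd_unbind(struct my_cd *cd)
{
	/* Dropping the last reference may free the mm_struct. */
	mmdrop(cd->mm);
	cd->mm = NULL;
}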


@ -3770,6 +3770,8 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
/* Base address */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -EINVAL;
if (resource_size(res) < arm_smmu_resource_size(smmu)) {
dev_err(dev, "MMIO region too small (%pr)\n", res);
return -EINVAL;


@ -211,7 +211,8 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu)
if (of_property_read_bool(np, "calxeda,smmu-secure-config-access"))
smmu->impl = &calxeda_impl;
if (of_device_is_compatible(np, "nvidia,tegra194-smmu") ||
if (of_device_is_compatible(np, "nvidia,tegra234-smmu") ||
of_device_is_compatible(np, "nvidia,tegra194-smmu") ||
of_device_is_compatible(np, "nvidia,tegra186-smmu"))
return nvidia_smmu_impl_init(smmu);


@ -408,6 +408,7 @@ static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
{ .compatible = "qcom,sc7180-smmu-500" },
{ .compatible = "qcom,sc7280-smmu-500" },
{ .compatible = "qcom,sc8180x-smmu-500" },
{ .compatible = "qcom,sc8280xp-smmu-500" },
{ .compatible = "qcom,sdm630-smmu-v2" },
{ .compatible = "qcom,sdm845-smmu-500" },
{ .compatible = "qcom,sm6125-smmu-500" },


@ -1574,6 +1574,9 @@ static int arm_smmu_def_domain_type(struct device *dev)
struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);
const struct arm_smmu_impl *impl = cfg->smmu->impl;
if (using_legacy_binding)
return IOMMU_DOMAIN_IDENTITY;
if (impl && impl->def_domain_type)
return impl->def_domain_type(dev);
@ -2092,11 +2095,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
if (err)
return err;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ioaddr = res->start;
smmu->base = devm_ioremap_resource(dev, res);
smmu->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
if (IS_ERR(smmu->base))
return PTR_ERR(smmu->base);
ioaddr = res->start;
/*
* The resource size should effectively match the value of SMMU_TOP;
* stash that temporarily until we know PAGESIZE to validate it with.


@ -20,6 +20,7 @@
#include <linux/iommu.h>
#include <linux/iova.h>
#include <linux/irq.h>
#include <linux/list_sort.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/pci.h>
@ -414,6 +415,15 @@ static int cookie_init_hw_msi_region(struct iommu_dma_cookie *cookie,
return 0;
}
static int iommu_dma_ranges_sort(void *priv, const struct list_head *a,
const struct list_head *b)
{
struct resource_entry *res_a = list_entry(a, typeof(*res_a), node);
struct resource_entry *res_b = list_entry(b, typeof(*res_b), node);
return res_a->res->start > res_b->res->start;
}
static int iova_reserve_pci_windows(struct pci_dev *dev,
struct iova_domain *iovad)
{
@ -432,6 +442,7 @@ static int iova_reserve_pci_windows(struct pci_dev *dev,
}
/* Get reserved DMA windows from host bridge */
list_sort(NULL, &bridge->dma_ranges, iommu_dma_ranges_sort);
resource_list_for_each_entry(window, &bridge->dma_ranges) {
end = window->res->start - window->offset;
resv_iova:
@ -440,7 +451,7 @@ resv_iova:
hi = iova_pfn(iovad, end);
reserve_iova(iovad, lo, hi);
} else if (end < start) {
/* dma_ranges list should be sorted */
/* DMA ranges should be non-overlapping */
dev_err(&dev->dev,
"Failed to reserve IOVA [%pa-%pa]\n",
&start, &end);
@ -776,6 +787,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
struct page **pages;
dma_addr_t iova;
ssize_t ret;
if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
iommu_deferred_attach(dev, domain))
@ -813,8 +825,8 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
arch_dma_prep_coherent(sg_page(sg), sg->length);
}
if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
< size)
ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
if (ret < 0 || ret < size)
goto out_free_sg;
sgt->sgl->dma_address = iova;
@ -971,6 +983,11 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
void *padding_start;
size_t padding_size, aligned_size;
if (!is_swiotlb_active(dev)) {
dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
return DMA_MAPPING_ERROR;
}
aligned_size = iova_align(iovad, size);
phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
iova_mask(iovad), dir, attrs);
@ -1209,7 +1226,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
* implementation - it knows better than we do.
*/
ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
if (ret < iova_len)
if (ret < 0 || ret < iova_len)
goto out_free_iova;
return __finalise_sg(dev, sg, nents, iova);


@ -11,6 +11,9 @@
#include <linux/fsl/guts.h>
#include <linux/interrupt.h>
#include <linux/genalloc.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
#include <asm/mpc85xx.h>


@ -9,6 +9,7 @@
#include "fsl_pamu_domain.h"
#include <linux/platform_device.h>
#include <sysdev/fsl_pci.h>
/*


@ -533,33 +533,6 @@ static void domain_update_iommu_coherency(struct dmar_domain *domain)
rcu_read_unlock();
}
static bool domain_update_iommu_snooping(struct intel_iommu *skip)
{
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
bool ret = true;
rcu_read_lock();
for_each_active_iommu(iommu, drhd) {
if (iommu != skip) {
/*
* If the hardware is operating in the scalable mode,
* the snooping control is always supported since we
* always set PASID-table-entry.PGSNP bit if the domain
* is managed outside (UNMANAGED).
*/
if (!sm_supported(iommu) &&
!ecap_sc_support(iommu->ecap)) {
ret = false;
break;
}
}
}
rcu_read_unlock();
return ret;
}
static int domain_update_iommu_superpage(struct dmar_domain *domain,
struct intel_iommu *skip)
{
@ -641,7 +614,6 @@ static unsigned long domain_super_pgsize_bitmap(struct dmar_domain *domain)
static void domain_update_iommu_cap(struct dmar_domain *domain)
{
domain_update_iommu_coherency(domain);
domain->iommu_snooping = domain_update_iommu_snooping(NULL);
domain->iommu_superpage = domain_update_iommu_superpage(domain, NULL);
/*
@ -2460,7 +2432,7 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
if (level == 5)
flags |= PASID_FLAG_FL5LP;
if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
if (domain->force_snooping)
flags |= PASID_FLAG_PAGE_SNOOP;
return intel_pasid_setup_first_level(iommu, dev, (pgd_t *)pgd, pasid,
@ -2474,64 +2446,6 @@ static bool dev_is_real_dma_subdevice(struct device *dev)
pci_real_dma_dev(to_pci_dev(dev)) != to_pci_dev(dev);
}
static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
int bus, int devfn,
struct device *dev,
struct dmar_domain *domain)
{
struct device_domain_info *info = dev_iommu_priv_get(dev);
unsigned long flags;
int ret;
spin_lock_irqsave(&device_domain_lock, flags);
info->domain = domain;
spin_lock(&iommu->lock);
ret = domain_attach_iommu(domain, iommu);
spin_unlock(&iommu->lock);
if (ret) {
spin_unlock_irqrestore(&device_domain_lock, flags);
return NULL;
}
list_add(&info->link, &domain->devices);
spin_unlock_irqrestore(&device_domain_lock, flags);
/* PASID table is mandatory for a PCI device in scalable mode. */
if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) {
ret = intel_pasid_alloc_table(dev);
if (ret) {
dev_err(dev, "PASID table allocation failed\n");
dmar_remove_one_dev_info(dev);
return NULL;
}
/* Setup the PASID entry for requests without PASID: */
spin_lock_irqsave(&iommu->lock, flags);
if (hw_pass_through && domain_type_is_si(domain))
ret = intel_pasid_setup_pass_through(iommu, domain,
dev, PASID_RID2PASID);
else if (domain_use_first_level(domain))
ret = domain_setup_first_level(iommu, domain, dev,
PASID_RID2PASID);
else
ret = intel_pasid_setup_second_level(iommu, domain,
dev, PASID_RID2PASID);
spin_unlock_irqrestore(&iommu->lock, flags);
if (ret) {
dev_err(dev, "Setup RID2PASID failed\n");
dmar_remove_one_dev_info(dev);
return NULL;
}
}
if (dev && domain_context_mapping(domain, dev)) {
dev_err(dev, "Domain context map failed\n");
dmar_remove_one_dev_info(dev);
return NULL;
}
return domain;
}
static int iommu_domain_identity_map(struct dmar_domain *domain,
unsigned long first_vpfn,
unsigned long last_vpfn)
@ -2607,17 +2521,62 @@ static int __init si_domain_init(int hw)
static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
{
struct dmar_domain *ndomain;
struct device_domain_info *info = dev_iommu_priv_get(dev);
struct intel_iommu *iommu;
unsigned long flags;
u8 bus, devfn;
int ret;
iommu = device_to_iommu(dev, &bus, &devfn);
if (!iommu)
return -ENODEV;
ndomain = dmar_insert_one_dev_info(iommu, bus, devfn, dev, domain);
if (ndomain != domain)
return -EBUSY;
spin_lock_irqsave(&device_domain_lock, flags);
info->domain = domain;
spin_lock(&iommu->lock);
ret = domain_attach_iommu(domain, iommu);
spin_unlock(&iommu->lock);
if (ret) {
spin_unlock_irqrestore(&device_domain_lock, flags);
return ret;
}
list_add(&info->link, &domain->devices);
spin_unlock_irqrestore(&device_domain_lock, flags);
/* PASID table is mandatory for a PCI device in scalable mode. */
if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) {
ret = intel_pasid_alloc_table(dev);
if (ret) {
dev_err(dev, "PASID table allocation failed\n");
dmar_remove_one_dev_info(dev);
return ret;
}
/* Setup the PASID entry for requests without PASID: */
spin_lock_irqsave(&iommu->lock, flags);
if (hw_pass_through && domain_type_is_si(domain))
ret = intel_pasid_setup_pass_through(iommu, domain,
dev, PASID_RID2PASID);
else if (domain_use_first_level(domain))
ret = domain_setup_first_level(iommu, domain, dev,
PASID_RID2PASID);
else
ret = intel_pasid_setup_second_level(iommu, domain,
dev, PASID_RID2PASID);
spin_unlock_irqrestore(&iommu->lock, flags);
if (ret) {
dev_err(dev, "Setup RID2PASID failed\n");
dmar_remove_one_dev_info(dev);
return ret;
}
}
ret = domain_context_mapping(domain, dev);
if (ret) {
dev_err(dev, "Domain context map failed\n");
dmar_remove_one_dev_info(dev);
return ret;
}
return 0;
}
@ -3607,12 +3566,7 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
iommu->name);
return -ENXIO;
}
if (!ecap_sc_support(iommu->ecap) &&
domain_update_iommu_snooping(iommu)) {
pr_warn("%s: Doesn't support snooping.\n",
iommu->name);
return -ENXIO;
}
sp = domain_update_iommu_superpage(NULL, iommu) - 1;
if (sp >= 0 && !(cap_super_page_val(iommu->cap) & (1 << sp))) {
pr_warn("%s: Doesn't support large page.\n",
@ -4304,7 +4258,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
domain->agaw = width_to_agaw(adjust_width);
domain->iommu_coherency = false;
domain->iommu_snooping = false;
domain->iommu_superpage = 0;
domain->max_addr = 0;
@ -4369,6 +4322,9 @@ static int prepare_domain_attach_device(struct iommu_domain *domain,
if (!iommu)
return -ENODEV;
if (dmar_domain->force_snooping && !ecap_sc_support(iommu->ecap))
return -EOPNOTSUPP;
/* check if this iommu agaw is sufficient for max mapped address */
addr_width = agaw_to_width(iommu->agaw);
if (addr_width > cap_mgaw(iommu->cap))
@ -4443,7 +4399,7 @@ static int intel_iommu_map(struct iommu_domain *domain,
prot |= DMA_PTE_READ;
if (iommu_prot & IOMMU_WRITE)
prot |= DMA_PTE_WRITE;
if ((iommu_prot & IOMMU_CACHE) && dmar_domain->iommu_snooping)
if (dmar_domain->set_pte_snp)
prot |= DMA_PTE_SNP;
max_addr = iova + size;
@ -4566,12 +4522,71 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
return phys;
}
static bool domain_support_force_snooping(struct dmar_domain *domain)
{
struct device_domain_info *info;
bool support = true;
assert_spin_locked(&device_domain_lock);
list_for_each_entry(info, &domain->devices, link) {
if (!ecap_sc_support(info->iommu->ecap)) {
support = false;
break;
}
}
return support;
}
static void domain_set_force_snooping(struct dmar_domain *domain)
{
struct device_domain_info *info;
assert_spin_locked(&device_domain_lock);
/*
* Second level page table supports per-PTE snoop control. The
* iommu_map() interface will handle this by setting SNP bit.
*/
if (!domain_use_first_level(domain)) {
domain->set_pte_snp = true;
return;
}
list_for_each_entry(info, &domain->devices, link)
intel_pasid_setup_page_snoop_control(info->iommu, info->dev,
PASID_RID2PASID);
}
static bool intel_iommu_enforce_cache_coherency(struct iommu_domain *domain)
{
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
unsigned long flags;
if (dmar_domain->force_snooping)
return true;
spin_lock_irqsave(&device_domain_lock, flags);
if (!domain_support_force_snooping(dmar_domain)) {
spin_unlock_irqrestore(&device_domain_lock, flags);
return false;
}
domain_set_force_snooping(dmar_domain);
dmar_domain->force_snooping = true;
spin_unlock_irqrestore(&device_domain_lock, flags);
return true;
}
static bool intel_iommu_capable(enum iommu_cap cap)
{
if (cap == IOMMU_CAP_CACHE_COHERENCY)
return domain_update_iommu_snooping(NULL);
return true;
if (cap == IOMMU_CAP_INTR_REMAP)
return irq_remapping_enabled == 1;
if (cap == IOMMU_CAP_PRE_BOOT_PROTECTION)
return dmar_platform_optin();
return false;
}
@ -4919,6 +4934,7 @@ const struct iommu_ops intel_iommu_ops = {
.iotlb_sync = intel_iommu_tlb_sync,
.iova_to_phys = intel_iommu_iova_to_phys,
.free = intel_iommu_domain_free,
.enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
}
};
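
A sketch of the expected caller side of the new enforce_cache_coherency()
domain op wired up above. iommu_enforce_cache_coherency() is assumed to be the
generic wrapper in the iommu core headers (its hunk is not visible here);
my_attach_group() is hypothetical:

#include <linux/iommu.h>

static int my_attach_group(struct iommu_domain *domain,
			   struct iommu_group *group, bool *snoop_enforced)
{
	int ret;

	ret = iommu_attach_group(domain, group);
	if (ret)
		return ret;

	/*
	 * Call this after the devices are attached: the Intel implementation
	 * walks the attached devices, checks snoop-control support and then
	 * latches domain->force_snooping for all future attachments.
	 */
	*snoop_enforced = iommu_enforce_cache_coherency(domain);
	return 0;
}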


@ -710,9 +710,6 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
pasid_set_fault_enable(pte);
pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
pasid_set_pgsnp(pte);
/*
* Since it is a second level only translation setup, we should
* set SRE bit as well (addresses are expected to be GPAs).
@ -762,3 +759,45 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
return 0;
}
/*
* Set the page snoop control for a pasid entry which has been set up.
*/
void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
struct device *dev, u32 pasid)
{
struct pasid_entry *pte;
u16 did;
spin_lock(&iommu->lock);
pte = intel_pasid_get_entry(dev, pasid);
if (WARN_ON(!pte || !pasid_pte_is_present(pte))) {
spin_unlock(&iommu->lock);
return;
}
pasid_set_pgsnp(pte);
did = pasid_get_domain_id(pte);
spin_unlock(&iommu->lock);
if (!ecap_coherent(iommu->ecap))
clflush_cache_range(pte, sizeof(*pte));
/*
* VT-d spec 3.4 table23 states guides for cache invalidation:
*
* - PASID-selective-within-Domain PASID-cache invalidation
* - PASID-selective PASID-based IOTLB invalidation
* - If (pasid is RID_PASID)
* - Global Device-TLB invalidation to affected functions
* Else
* - PASID-based Device-TLB invalidation (with S=1 and
* Addr[63:12]=0x7FFFFFFF_FFFFF) to affected functions
*/
pasid_cache_invalidation_with_pasid(iommu, did, pasid);
qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
/* Device IOTLB doesn't need to be flushed in caching mode. */
if (!cap_caching_mode(iommu->cap))
devtlb_invalidation_with_pasid(iommu, dev, pasid);
}


@ -123,4 +123,6 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
bool fault_ignore);
int vcmd_alloc_pasid(struct intel_iommu *iommu, u32 *pasid);
void vcmd_free_pasid(struct intel_iommu *iommu, u32 pasid);
void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
struct device *dev, u32 pasid);
#endif /* __INTEL_PASID_H */


@ -18,7 +18,6 @@
#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/idr.h>
#include <linux/notifier.h>
#include <linux/err.h>
#include <linux/pci.h>
#include <linux/bitops.h>
@ -40,14 +39,16 @@ struct iommu_group {
struct kobject *devices_kobj;
struct list_head devices;
struct mutex mutex;
struct blocking_notifier_head notifier;
void *iommu_data;
void (*iommu_data_release)(void *iommu_data);
char *name;
int id;
struct iommu_domain *default_domain;
struct iommu_domain *blocking_domain;
struct iommu_domain *domain;
struct list_head entry;
unsigned int owner_cnt;
void *owner;
};
struct group_device {
@ -82,8 +83,8 @@ static int __iommu_attach_device(struct iommu_domain *domain,
struct device *dev);
static int __iommu_attach_group(struct iommu_domain *domain,
struct iommu_group *group);
static void __iommu_detach_group(struct iommu_domain *domain,
struct iommu_group *group);
static int __iommu_group_set_domain(struct iommu_group *group,
struct iommu_domain *new_domain);
static int iommu_create_device_direct_mappings(struct iommu_group *group,
struct device *dev);
static struct iommu_group *iommu_group_get_for_dev(struct device *dev);
@ -294,7 +295,11 @@ int iommu_probe_device(struct device *dev)
mutex_lock(&group->mutex);
iommu_alloc_default_domain(group, dev);
if (group->default_domain) {
/*
* If device joined an existing group which has been claimed, don't
* attach the default domain.
*/
if (group->default_domain && !group->owner) {
ret = __iommu_attach_device(group->default_domain, dev);
if (ret) {
mutex_unlock(&group->mutex);
@ -599,6 +604,8 @@ static void iommu_group_release(struct kobject *kobj)
if (group->default_domain)
iommu_domain_free(group->default_domain);
if (group->blocking_domain)
iommu_domain_free(group->blocking_domain);
kfree(group->name);
kfree(group);
@ -633,7 +640,6 @@ struct iommu_group *iommu_group_alloc(void)
mutex_init(&group->mutex);
INIT_LIST_HEAD(&group->devices);
INIT_LIST_HEAD(&group->entry);
BLOCKING_INIT_NOTIFIER_HEAD(&group->notifier);
ret = ida_simple_get(&iommu_group_ida, 0, 0, GFP_KERNEL);
if (ret < 0) {
@ -906,10 +912,6 @@ rename:
if (ret)
goto err_put_group;
/* Notify any listeners about change to group. */
blocking_notifier_call_chain(&group->notifier,
IOMMU_GROUP_NOTIFY_ADD_DEVICE, dev);
trace_add_device_to_group(group->id, dev);
dev_info(dev, "Adding to iommu group %d\n", group->id);
@ -951,10 +953,6 @@ void iommu_group_remove_device(struct device *dev)
dev_info(dev, "Removing from iommu group %d\n", group->id);
/* Pre-notify listeners that a device is being removed. */
blocking_notifier_call_chain(&group->notifier,
IOMMU_GROUP_NOTIFY_DEL_DEVICE, dev);
mutex_lock(&group->mutex);
list_for_each_entry(tmp_device, &group->devices, list) {
if (tmp_device->dev == dev) {
@ -1076,36 +1074,6 @@ void iommu_group_put(struct iommu_group *group)
}
EXPORT_SYMBOL_GPL(iommu_group_put);
/**
* iommu_group_register_notifier - Register a notifier for group changes
* @group: the group to watch
* @nb: notifier block to signal
*
* This function allows iommu group users to track changes in a group.
* See include/linux/iommu.h for actions sent via this notifier. Caller
* should hold a reference to the group throughout notifier registration.
*/
int iommu_group_register_notifier(struct iommu_group *group,
struct notifier_block *nb)
{
return blocking_notifier_chain_register(&group->notifier, nb);
}
EXPORT_SYMBOL_GPL(iommu_group_register_notifier);
/**
* iommu_group_unregister_notifier - Unregister a notifier
* @group: the group to watch
* @nb: notifier block to signal
*
* Unregister a previously registered group notifier block.
*/
int iommu_group_unregister_notifier(struct iommu_group *group,
struct notifier_block *nb)
{
return blocking_notifier_chain_unregister(&group->notifier, nb);
}
EXPORT_SYMBOL_GPL(iommu_group_unregister_notifier);
/**
* iommu_register_device_fault_handler() - Register a device fault handler
* @dev: the device
@ -1651,14 +1619,8 @@ static int remove_iommu_group(struct device *dev, void *data)
static int iommu_bus_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
unsigned long group_action = 0;
struct device *dev = data;
struct iommu_group *group;
/*
* ADD/DEL call into iommu driver ops if provided, which may
* result in ADD/DEL notifiers to group->notifier
*/
if (action == BUS_NOTIFY_ADD_DEVICE) {
int ret;
@ -1669,34 +1631,6 @@ static int iommu_bus_notifier(struct notifier_block *nb,
return NOTIFY_OK;
}
/*
* Remaining BUS_NOTIFYs get filtered and republished to the
* group, if anyone is listening
*/
group = iommu_group_get(dev);
if (!group)
return 0;
switch (action) {
case BUS_NOTIFY_BIND_DRIVER:
group_action = IOMMU_GROUP_NOTIFY_BIND_DRIVER;
break;
case BUS_NOTIFY_BOUND_DRIVER:
group_action = IOMMU_GROUP_NOTIFY_BOUND_DRIVER;
break;
case BUS_NOTIFY_UNBIND_DRIVER:
group_action = IOMMU_GROUP_NOTIFY_UNBIND_DRIVER;
break;
case BUS_NOTIFY_UNBOUND_DRIVER:
group_action = IOMMU_GROUP_NOTIFY_UNBOUND_DRIVER;
break;
}
if (group_action)
blocking_notifier_call_chain(&group->notifier,
group_action, dev);
iommu_group_put(group);
return 0;
}
@ -1913,6 +1847,29 @@ bool iommu_present(struct bus_type *bus)
}
EXPORT_SYMBOL_GPL(iommu_present);
/**
* device_iommu_capable() - check for a general IOMMU capability
* @dev: device to which the capability would be relevant, if available
* @cap: IOMMU capability
*
* Return: true if an IOMMU is present and supports the given capability
* for the given device, otherwise false.
*/
bool device_iommu_capable(struct device *dev, enum iommu_cap cap)
{
const struct iommu_ops *ops;
if (!dev->iommu || !dev->iommu->iommu_dev)
return false;
ops = dev_iommu_ops(dev);
if (!ops->capable)
return false;
return ops->capable(cap);
}
EXPORT_SYMBOL_GPL(device_iommu_capable);
bool iommu_capable(struct bus_type *bus, enum iommu_cap cap)
{
if (!bus->iommu_ops || !bus->iommu_ops->capable)
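
A short sketch of the intended use of device_iommu_capable() above, e.g. for
the thunderbolt DMA-protection checks mentioned in the pull description;
my_dev_has_preboot_protection() is hypothetical:

#include <linux/iommu.h>

static bool my_dev_has_preboot_protection(struct device *dev)
{
	/*
	 * True when the platform/firmware enabled DMA remapping for this
	 * device before the kernel took over (IOMMU_CAP_PRE_BOOT_PROTECTION,
	 * also added in this series).
	 */
	return device_iommu_capable(dev, IOMMU_CAP_PRE_BOOT_PROTECTION);
}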
@ -1983,6 +1940,24 @@ void iommu_domain_free(struct iommu_domain *domain)
}
EXPORT_SYMBOL_GPL(iommu_domain_free);
/*
* Put the group's domain back to the appropriate core-owned domain - either the
* standard kernel-mode DMA configuration or an all-DMA-blocked domain.
*/
static void __iommu_group_set_core_domain(struct iommu_group *group)
{
struct iommu_domain *new_domain;
int ret;
if (group->owner)
new_domain = group->blocking_domain;
else
new_domain = group->default_domain;
ret = __iommu_group_set_domain(group, new_domain);
WARN(ret, "iommu driver failed to attach the default/blocking domain");
}
static int __iommu_attach_device(struct iommu_domain *domain,
struct device *dev)
{
@ -2039,9 +2014,6 @@ static void __iommu_detach_device(struct iommu_domain *domain,
if (iommu_is_attach_deferred(dev))
return;
if (unlikely(domain->ops->detach_dev == NULL))
return;
domain->ops->detach_dev(domain, dev);
trace_detach_device_from_domain(dev);
}
@ -2055,12 +2027,10 @@ void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
return;
mutex_lock(&group->mutex);
if (iommu_group_device_count(group) != 1) {
WARN_ON(1);
if (WARN_ON(domain != group->domain) ||
WARN_ON(iommu_group_device_count(group) != 1))
goto out_unlock;
}
__iommu_detach_group(domain, group);
__iommu_group_set_core_domain(group);
out_unlock:
mutex_unlock(&group->mutex);
@ -2116,7 +2086,8 @@ static int __iommu_attach_group(struct iommu_domain *domain,
{
int ret;
if (group->default_domain && group->domain != group->default_domain)
if (group->domain && group->domain != group->default_domain &&
group->domain != group->blocking_domain)
return -EBUSY;
ret = __iommu_group_for_each_dev(group, domain,
@ -2148,34 +2119,49 @@ static int iommu_group_do_detach_device(struct device *dev, void *data)
return 0;
}
static void __iommu_detach_group(struct iommu_domain *domain,
struct iommu_group *group)
static int __iommu_group_set_domain(struct iommu_group *group,
struct iommu_domain *new_domain)
{
int ret;
if (!group->default_domain) {
__iommu_group_for_each_dev(group, domain,
if (group->domain == new_domain)
return 0;
/*
* New drivers should support default domains and so the detach_dev() op
* will never be called. Otherwise the NULL domain represents some
* platform specific behavior.
*/
if (!new_domain) {
if (WARN_ON(!group->domain->ops->detach_dev))
return -EINVAL;
__iommu_group_for_each_dev(group, group->domain,
iommu_group_do_detach_device);
group->domain = NULL;
return;
return 0;
}
if (group->domain == group->default_domain)
return;
/* Detach by re-attaching to the default domain */
ret = __iommu_group_for_each_dev(group, group->default_domain,
/*
* Changing the domain is done by calling attach_dev() on the new
* domain. This switch does not have to be atomic and DMA can be
* discarded during the transition. DMA must only be able to access
* either new_domain or group->domain, never something else.
*
* Note that this is called in error unwind paths, attaching to a
* domain that has already been attached cannot fail.
*/
ret = __iommu_group_for_each_dev(group, new_domain,
iommu_group_do_attach_device);
if (ret != 0)
WARN_ON(1);
else
group->domain = group->default_domain;
if (ret)
return ret;
group->domain = new_domain;
return 0;
}
void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
{
mutex_lock(&group->mutex);
__iommu_detach_group(domain, group);
__iommu_group_set_core_domain(group);
mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_detach_group);
@ -3102,3 +3088,167 @@ out:
return ret;
}
/**
* iommu_device_use_default_domain() - Device driver wants to handle device
* DMA through the kernel DMA API.
* @dev: The device.
*
* The device driver about to bind @dev wants to do DMA through the kernel
* DMA API. Return 0 if it is allowed, otherwise an error.
*/
int iommu_device_use_default_domain(struct device *dev)
{
struct iommu_group *group = iommu_group_get(dev);
int ret = 0;
if (!group)
return 0;
mutex_lock(&group->mutex);
if (group->owner_cnt) {
if (group->domain != group->default_domain ||
group->owner) {
ret = -EBUSY;
goto unlock_out;
}
}
group->owner_cnt++;
unlock_out:
mutex_unlock(&group->mutex);
iommu_group_put(group);
return ret;
}
/**
* iommu_device_unuse_default_domain() - Device driver stops handling device
* DMA through the kernel DMA API.
* @dev: The device.
*
* The device driver doesn't want to do DMA through kernel DMA API anymore.
* It must be called after iommu_device_use_default_domain().
*/
void iommu_device_unuse_default_domain(struct device *dev)
{
struct iommu_group *group = iommu_group_get(dev);
if (!group)
return;
mutex_lock(&group->mutex);
if (!WARN_ON(!group->owner_cnt))
group->owner_cnt--;
mutex_unlock(&group->mutex);
iommu_group_put(group);
}
static int __iommu_group_alloc_blocking_domain(struct iommu_group *group)
{
struct group_device *dev =
list_first_entry(&group->devices, struct group_device, list);
if (group->blocking_domain)
return 0;
group->blocking_domain =
__iommu_domain_alloc(dev->dev->bus, IOMMU_DOMAIN_BLOCKED);
if (!group->blocking_domain) {
/*
* For drivers that do not yet understand IOMMU_DOMAIN_BLOCKED
* create an empty domain instead.
*/
group->blocking_domain = __iommu_domain_alloc(
dev->dev->bus, IOMMU_DOMAIN_UNMANAGED);
if (!group->blocking_domain)
return -EINVAL;
}
return 0;
}
/**
* iommu_group_claim_dma_owner() - Set DMA ownership of a group
* @group: The group.
* @owner: Caller specified pointer. Used for exclusive ownership.
*
* This is to support backward compatibility for vfio which manages
* the dma ownership in iommu_group level. New invocations on this
* interface should be prohibited.
*/
int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner)
{
int ret = 0;
mutex_lock(&group->mutex);
if (group->owner_cnt) {
ret = -EPERM;
goto unlock_out;
} else {
if (group->domain && group->domain != group->default_domain) {
ret = -EBUSY;
goto unlock_out;
}
ret = __iommu_group_alloc_blocking_domain(group);
if (ret)
goto unlock_out;
ret = __iommu_group_set_domain(group, group->blocking_domain);
if (ret)
goto unlock_out;
group->owner = owner;
}
group->owner_cnt++;
unlock_out:
mutex_unlock(&group->mutex);
return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_claim_dma_owner);
/**
* iommu_group_release_dma_owner() - Release DMA ownership of a group
* @group: The group.
*
* Release the DMA ownership claimed by iommu_group_claim_dma_owner().
*/
void iommu_group_release_dma_owner(struct iommu_group *group)
{
int ret;
mutex_lock(&group->mutex);
if (WARN_ON(!group->owner_cnt || !group->owner))
goto unlock_out;
group->owner_cnt = 0;
group->owner = NULL;
ret = __iommu_group_set_domain(group, group->default_domain);
WARN(ret, "iommu driver failed to attach the default domain");
unlock_out:
mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_group_release_dma_owner);
/**
* iommu_group_dma_owner_claimed() - Query group dma ownership status
* @group: The group.
*
* This provides status query on a given group. It is racy and only for
* non-binding status reporting.
*/
bool iommu_group_dma_owner_claimed(struct iommu_group *group)
{
unsigned int user;
mutex_lock(&group->mutex);
user = group->owner_cnt;
mutex_unlock(&group->mutex);
return user;
}
EXPORT_SYMBOL_GPL(iommu_group_dma_owner_claimed);


@ -583,7 +583,7 @@ static void print_ctx_regs(void __iomem *base, int ctx)
GET_SCTLR(base, ctx), GET_ACTLR(base, ctx));
}
static void insert_iommu_master(struct device *dev,
static int insert_iommu_master(struct device *dev,
struct msm_iommu_dev **iommu,
struct of_phandle_args *spec)
{
@ -592,6 +592,10 @@ static void insert_iommu_master(struct device *dev,
if (list_empty(&(*iommu)->ctx_list)) {
master = kzalloc(sizeof(*master), GFP_ATOMIC);
if (!master) {
dev_err(dev, "Failed to allocate iommu_master\n");
return -ENOMEM;
}
master->of_node = dev->of_node;
list_add(&master->list, &(*iommu)->ctx_list);
dev_iommu_priv_set(dev, master);
@ -601,30 +605,34 @@ static void insert_iommu_master(struct device *dev,
if (master->mids[sid] == spec->args[0]) {
dev_warn(dev, "Stream ID 0x%hx repeated; ignoring\n",
sid);
return;
return 0;
}
master->mids[master->num_mids++] = spec->args[0];
return 0;
}
static int qcom_iommu_of_xlate(struct device *dev,
struct of_phandle_args *spec)
{
struct msm_iommu_dev *iommu;
struct msm_iommu_dev *iommu = NULL, *iter;
unsigned long flags;
int ret = 0;
spin_lock_irqsave(&msm_iommu_lock, flags);
list_for_each_entry(iommu, &qcom_iommu_devices, dev_node)
if (iommu->dev->of_node == spec->np)
list_for_each_entry(iter, &qcom_iommu_devices, dev_node) {
if (iter->dev->of_node == spec->np) {
iommu = iter;
break;
}
}
if (!iommu || iommu->dev->of_node != spec->np) {
if (!iommu) {
ret = -ENODEV;
goto fail;
}
insert_iommu_master(dev, &iommu, spec);
ret = insert_iommu_master(dev, &iommu, spec);
fail:
spin_unlock_irqrestore(&msm_iommu_lock, flags);

(File diff suppressed because it is too large.)


@ -1,101 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015-2016 MediaTek Inc.
* Author: Honghui Zhang <honghui.zhang@mediatek.com>
*/
#ifndef _MTK_IOMMU_H_
#define _MTK_IOMMU_H_
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/dma-mapping.h>
#include <soc/mediatek/smi.h>
#include <dt-bindings/memory/mtk-memory-port.h>
#define MTK_LARB_COM_MAX 8
#define MTK_LARB_SUBCOM_MAX 4
#define MTK_IOMMU_GROUP_MAX 8
struct mtk_iommu_suspend_reg {
union {
u32 standard_axi_mode;/* v1 */
u32 misc_ctrl;/* v2 */
};
u32 dcm_dis;
u32 ctrl_reg;
u32 int_control0;
u32 int_main_control;
u32 ivrp_paddr;
u32 vld_pa_rng;
u32 wr_len_ctrl;
};
enum mtk_iommu_plat {
M4U_MT2701,
M4U_MT2712,
M4U_MT6779,
M4U_MT8167,
M4U_MT8173,
M4U_MT8183,
M4U_MT8192,
};
struct mtk_iommu_iova_region;
struct mtk_iommu_plat_data {
enum mtk_iommu_plat m4u_plat;
u32 flags;
u32 inv_sel_reg;
unsigned int iova_region_nr;
const struct mtk_iommu_iova_region *iova_region;
unsigned char larbid_remap[MTK_LARB_COM_MAX][MTK_LARB_SUBCOM_MAX];
};
struct mtk_iommu_domain;
struct mtk_iommu_data {
void __iomem *base;
int irq;
struct device *dev;
struct clk *bclk;
phys_addr_t protect_base; /* protect memory base */
struct mtk_iommu_suspend_reg reg;
struct mtk_iommu_domain *m4u_dom;
struct iommu_group *m4u_group[MTK_IOMMU_GROUP_MAX];
bool enable_4GB;
spinlock_t tlb_lock; /* lock for tlb range flush */
struct iommu_device iommu;
const struct mtk_iommu_plat_data *plat_data;
struct device *smicomm_dev;
struct dma_iommu_mapping *mapping; /* For mtk_iommu_v1.c */
struct list_head list;
struct mtk_smi_larb_iommu larb_imu[MTK_LARB_NR_MAX];
};
static inline int mtk_iommu_bind(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
return component_bind_all(dev, &data->larb_imu);
}
static inline void mtk_iommu_unbind(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
component_unbind_all(dev, &data->larb_imu);
}
#endif


@ -7,7 +7,6 @@
*
* Based on driver/iommu/mtk_iommu.c
*/
#include <linux/memblock.h>
#include <linux/bug.h>
#include <linux/clk.h>
#include <linux/component.h>
@ -28,10 +27,9 @@
#include <linux/spinlock.h>
#include <asm/barrier.h>
#include <asm/dma-iommu.h>
#include <linux/init.h>
#include <dt-bindings/memory/mtk-memory-port.h>
#include <dt-bindings/memory/mt2701-larb-port.h>
#include <soc/mediatek/smi.h>
#include "mtk_iommu.h"
#define REG_MMU_PT_BASE_ADDR 0x000
@ -80,6 +78,7 @@
/* MTK generation one iommu HW only support 4K size mapping */
#define MT2701_IOMMU_PAGE_SHIFT 12
#define MT2701_IOMMU_PAGE_SIZE (1UL << MT2701_IOMMU_PAGE_SHIFT)
#define MT2701_LARB_NR_MAX 3
/*
* MTK m4u support 4GB iova address space, and only support 4K page
@ -87,17 +86,53 @@
*/
#define M2701_IOMMU_PGT_SIZE SZ_4M
struct mtk_iommu_domain {
struct mtk_iommu_v1_suspend_reg {
u32 standard_axi_mode;
u32 dcm_dis;
u32 ctrl_reg;
u32 int_control0;
};
struct mtk_iommu_v1_data {
void __iomem *base;
int irq;
struct device *dev;
struct clk *bclk;
phys_addr_t protect_base; /* protect memory base */
struct mtk_iommu_v1_domain *m4u_dom;
struct iommu_device iommu;
struct dma_iommu_mapping *mapping;
struct mtk_smi_larb_iommu larb_imu[MTK_LARB_NR_MAX];
struct mtk_iommu_v1_suspend_reg reg;
};
struct mtk_iommu_v1_domain {
spinlock_t pgtlock; /* lock for page table */
struct iommu_domain domain;
u32 *pgt_va;
dma_addr_t pgt_pa;
struct mtk_iommu_data *data;
struct mtk_iommu_v1_data *data;
};
static struct mtk_iommu_domain *to_mtk_domain(struct iommu_domain *dom)
static int mtk_iommu_v1_bind(struct device *dev)
{
return container_of(dom, struct mtk_iommu_domain, domain);
struct mtk_iommu_v1_data *data = dev_get_drvdata(dev);
return component_bind_all(dev, &data->larb_imu);
}
static void mtk_iommu_v1_unbind(struct device *dev)
{
struct mtk_iommu_v1_data *data = dev_get_drvdata(dev);
component_unbind_all(dev, &data->larb_imu);
}
static struct mtk_iommu_v1_domain *to_mtk_domain(struct iommu_domain *dom)
{
return container_of(dom, struct mtk_iommu_v1_domain, domain);
}
static const int mt2701_m4u_in_larb[] = {
@ -123,7 +158,7 @@ static inline int mt2701_m4u_to_port(int id)
return id - mt2701_m4u_in_larb[larb];
}
static void mtk_iommu_tlb_flush_all(struct mtk_iommu_data *data)
static void mtk_iommu_v1_tlb_flush_all(struct mtk_iommu_v1_data *data)
{
writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0,
data->base + REG_MMU_INV_SEL);
@ -131,8 +166,8 @@ static void mtk_iommu_tlb_flush_all(struct mtk_iommu_data *data)
wmb(); /* Make sure the tlb flush all done */
}
static void mtk_iommu_tlb_flush_range(struct mtk_iommu_data *data,
unsigned long iova, size_t size)
static void mtk_iommu_v1_tlb_flush_range(struct mtk_iommu_v1_data *data,
unsigned long iova, size_t size)
{
int ret;
u32 tmp;
@ -150,16 +185,16 @@ static void mtk_iommu_tlb_flush_range(struct mtk_iommu_data *data,
if (ret) {
dev_warn(data->dev,
"Partial TLB flush timed out, falling back to full flush\n");
mtk_iommu_tlb_flush_all(data);
mtk_iommu_v1_tlb_flush_all(data);
}
/* Clear the CPE status */
writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
}
static irqreturn_t mtk_iommu_isr(int irq, void *dev_id)
static irqreturn_t mtk_iommu_v1_isr(int irq, void *dev_id)
{
struct mtk_iommu_data *data = dev_id;
struct mtk_iommu_domain *dom = data->m4u_dom;
struct mtk_iommu_v1_data *data = dev_id;
struct mtk_iommu_v1_domain *dom = data->m4u_dom;
u32 int_state, regval, fault_iova, fault_pa;
unsigned int fault_larb, fault_port;
@ -189,13 +224,13 @@ static irqreturn_t mtk_iommu_isr(int irq, void *dev_id)
regval |= F_INT_CLR_BIT;
writel_relaxed(regval, data->base + REG_MMU_INT_CONTROL);
mtk_iommu_tlb_flush_all(data);
mtk_iommu_v1_tlb_flush_all(data);
return IRQ_HANDLED;
}
static void mtk_iommu_config(struct mtk_iommu_data *data,
struct device *dev, bool enable)
static void mtk_iommu_v1_config(struct mtk_iommu_v1_data *data,
struct device *dev, bool enable)
{
struct mtk_smi_larb_iommu *larb_mmu;
unsigned int larbid, portid;
@ -217,9 +252,9 @@ static void mtk_iommu_config(struct mtk_iommu_data *data,
}
}
static int mtk_iommu_domain_finalise(struct mtk_iommu_data *data)
static int mtk_iommu_v1_domain_finalise(struct mtk_iommu_v1_data *data)
{
struct mtk_iommu_domain *dom = data->m4u_dom;
struct mtk_iommu_v1_domain *dom = data->m4u_dom;
spin_lock_init(&dom->pgtlock);
@ -235,9 +270,9 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_data *data)
return 0;
}
static struct iommu_domain *mtk_iommu_domain_alloc(unsigned type)
static struct iommu_domain *mtk_iommu_v1_domain_alloc(unsigned type)
{
struct mtk_iommu_domain *dom;
struct mtk_iommu_v1_domain *dom;
if (type != IOMMU_DOMAIN_UNMANAGED)
return NULL;
@ -249,21 +284,20 @@ static struct iommu_domain *mtk_iommu_domain_alloc(unsigned type)
return &dom->domain;
}
static void mtk_iommu_domain_free(struct iommu_domain *domain)
static void mtk_iommu_v1_domain_free(struct iommu_domain *domain)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_data *data = dom->data;
struct mtk_iommu_v1_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_v1_data *data = dom->data;
dma_free_coherent(data->dev, M2701_IOMMU_PGT_SIZE,
dom->pgt_va, dom->pgt_pa);
kfree(to_mtk_domain(domain));
}
static int mtk_iommu_attach_device(struct iommu_domain *domain,
struct device *dev)
static int mtk_iommu_v1_attach_device(struct iommu_domain *domain, struct device *dev)
{
struct mtk_iommu_data *data = dev_iommu_priv_get(dev);
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_v1_data *data = dev_iommu_priv_get(dev);
struct mtk_iommu_v1_domain *dom = to_mtk_domain(domain);
struct dma_iommu_mapping *mtk_mapping;
int ret;
@ -274,29 +308,28 @@ static int mtk_iommu_attach_device(struct iommu_domain *domain,
if (!data->m4u_dom) {
data->m4u_dom = dom;
ret = mtk_iommu_domain_finalise(data);
ret = mtk_iommu_v1_domain_finalise(data);
if (ret) {
data->m4u_dom = NULL;
return ret;
}
}
mtk_iommu_config(data, dev, true);
mtk_iommu_v1_config(data, dev, true);
return 0;
}
static void mtk_iommu_detach_device(struct iommu_domain *domain,
struct device *dev)
static void mtk_iommu_v1_detach_device(struct iommu_domain *domain, struct device *dev)
{
struct mtk_iommu_data *data = dev_iommu_priv_get(dev);
struct mtk_iommu_v1_data *data = dev_iommu_priv_get(dev);
mtk_iommu_config(data, dev, false);
mtk_iommu_v1_config(data, dev, false);
}
static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
static int mtk_iommu_v1_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_v1_domain *dom = to_mtk_domain(domain);
unsigned int page_num = size >> MT2701_IOMMU_PAGE_SHIFT;
unsigned long flags;
unsigned int i;
@@ -317,16 +350,15 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
spin_unlock_irqrestore(&dom->pgtlock, flags);
mtk_iommu_tlb_flush_range(dom->data, iova, size);
mtk_iommu_v1_tlb_flush_range(dom->data, iova, size);
return map_size == size ? 0 : -EEXIST;
}
static size_t mtk_iommu_unmap(struct iommu_domain *domain,
unsigned long iova, size_t size,
struct iommu_iotlb_gather *gather)
static size_t mtk_iommu_v1_unmap(struct iommu_domain *domain, unsigned long iova,
size_t size, struct iommu_iotlb_gather *gather)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_v1_domain *dom = to_mtk_domain(domain);
unsigned long flags;
u32 *pgt_base_iova = dom->pgt_va + (iova >> MT2701_IOMMU_PAGE_SHIFT);
unsigned int page_num = size >> MT2701_IOMMU_PAGE_SHIFT;
@@ -335,15 +367,14 @@ static size_t mtk_iommu_unmap(struct iommu_domain *domain,
memset(pgt_base_iova, 0, page_num * sizeof(u32));
spin_unlock_irqrestore(&dom->pgtlock, flags);
mtk_iommu_tlb_flush_range(dom->data, iova, size);
mtk_iommu_v1_tlb_flush_range(dom->data, iova, size);
return size;
}
static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
static phys_addr_t mtk_iommu_v1_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_v1_domain *dom = to_mtk_domain(domain);
unsigned long flags;
phys_addr_t pa;
@@ -355,17 +386,16 @@ static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
return pa;
}
static const struct iommu_ops mtk_iommu_ops;
static const struct iommu_ops mtk_iommu_v1_ops;
/*
* The MTK generation one IOMMU HW only supports a single IOMMU domain, and all
* of its clients share the same iova address space.
*/
static int mtk_iommu_create_mapping(struct device *dev,
struct of_phandle_args *args)
static int mtk_iommu_v1_create_mapping(struct device *dev, struct of_phandle_args *args)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct mtk_iommu_data *data;
struct mtk_iommu_v1_data *data;
struct platform_device *m4updev;
struct dma_iommu_mapping *mtk_mapping;
int ret;
@@ -377,11 +407,11 @@ static int mtk_iommu_create_mapping(struct device *dev,
}
if (!fwspec) {
ret = iommu_fwspec_init(dev, &args->np->fwnode, &mtk_iommu_ops);
ret = iommu_fwspec_init(dev, &args->np->fwnode, &mtk_iommu_v1_ops);
if (ret)
return ret;
fwspec = dev_iommu_fwspec_get(dev);
} else if (dev_iommu_fwspec_get(dev)->ops != &mtk_iommu_ops) {
} else if (dev_iommu_fwspec_get(dev)->ops != &mtk_iommu_v1_ops) {
return -EINVAL;
}
@@ -413,16 +443,16 @@ static int mtk_iommu_create_mapping(struct device *dev,
return 0;
}
static int mtk_iommu_def_domain_type(struct device *dev)
static int mtk_iommu_v1_def_domain_type(struct device *dev)
{
return IOMMU_DOMAIN_UNMANAGED;
}
static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
static struct iommu_device *mtk_iommu_v1_probe_device(struct device *dev)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct of_phandle_args iommu_spec;
struct mtk_iommu_data *data;
struct mtk_iommu_v1_data *data;
int err, idx = 0, larbid, larbidx;
struct device_link *link;
struct device *larbdev;
@@ -440,7 +470,7 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
"#iommu-cells",
idx, &iommu_spec)) {
err = mtk_iommu_create_mapping(dev, &iommu_spec);
err = mtk_iommu_v1_create_mapping(dev, &iommu_spec);
of_node_put(iommu_spec.np);
if (err)
return ERR_PTR(err);
@@ -450,13 +480,16 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
idx++;
}
if (!fwspec || fwspec->ops != &mtk_iommu_ops)
if (!fwspec || fwspec->ops != &mtk_iommu_v1_ops)
return ERR_PTR(-ENODEV); /* Not an iommu client device */
data = dev_iommu_priv_get(dev);
/* Link the consumer device with the smi-larb device(supplier) */
larbid = mt2701_m4u_to_larb(fwspec->ids[0]);
if (larbid >= MT2701_LARB_NR_MAX)
return ERR_PTR(-EINVAL);
for (idx = 1; idx < fwspec->num_ids; idx++) {
larbidx = mt2701_m4u_to_larb(fwspec->ids[idx]);
if (larbid != larbidx) {
@@ -467,6 +500,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
}
larbdev = data->larb_imu[larbid].dev;
if (!larbdev)
return ERR_PTR(-EINVAL);
link = device_link_add(dev, larbdev,
DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
if (!link)
@@ -475,10 +511,10 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
return &data->iommu;
}
static void mtk_iommu_probe_finalize(struct device *dev)
static void mtk_iommu_v1_probe_finalize(struct device *dev)
{
struct dma_iommu_mapping *mtk_mapping;
struct mtk_iommu_data *data;
struct mtk_iommu_v1_data *data;
int err;
data = dev_iommu_priv_get(dev);
@@ -489,14 +525,14 @@ static void mtk_iommu_probe_finalize(struct device *dev)
dev_err(dev, "Can't create IOMMU mapping - DMA-OPS will not work\n");
}
static void mtk_iommu_release_device(struct device *dev)
static void mtk_iommu_v1_release_device(struct device *dev)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct mtk_iommu_data *data;
struct mtk_iommu_v1_data *data;
struct device *larbdev;
unsigned int larbid;
if (!fwspec || fwspec->ops != &mtk_iommu_ops)
if (!fwspec || fwspec->ops != &mtk_iommu_v1_ops)
return;
data = dev_iommu_priv_get(dev);
@@ -507,7 +543,7 @@ static void mtk_iommu_release_device(struct device *dev)
iommu_fwspec_free(dev);
}
static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
static int mtk_iommu_v1_hw_init(const struct mtk_iommu_v1_data *data)
{
u32 regval;
int ret;
@@ -537,7 +573,7 @@ static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
writel_relaxed(F_MMU_DCM_ON, data->base + REG_MMU_DCM);
if (devm_request_irq(data->dev, data->irq, mtk_iommu_isr, 0,
if (devm_request_irq(data->dev, data->irq, mtk_iommu_v1_isr, 0,
dev_name(data->dev), (void *)data)) {
writel_relaxed(0, data->base + REG_MMU_PT_BASE_ADDR);
clk_disable_unprepare(data->bclk);
@@ -548,39 +584,39 @@ static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
return 0;
}
static const struct iommu_ops mtk_iommu_ops = {
.domain_alloc = mtk_iommu_domain_alloc,
.probe_device = mtk_iommu_probe_device,
.probe_finalize = mtk_iommu_probe_finalize,
.release_device = mtk_iommu_release_device,
.def_domain_type = mtk_iommu_def_domain_type,
static const struct iommu_ops mtk_iommu_v1_ops = {
.domain_alloc = mtk_iommu_v1_domain_alloc,
.probe_device = mtk_iommu_v1_probe_device,
.probe_finalize = mtk_iommu_v1_probe_finalize,
.release_device = mtk_iommu_v1_release_device,
.def_domain_type = mtk_iommu_v1_def_domain_type,
.device_group = generic_device_group,
.pgsize_bitmap = ~0UL << MT2701_IOMMU_PAGE_SHIFT,
.owner = THIS_MODULE,
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = mtk_iommu_attach_device,
.detach_dev = mtk_iommu_detach_device,
.map = mtk_iommu_map,
.unmap = mtk_iommu_unmap,
.iova_to_phys = mtk_iommu_iova_to_phys,
.free = mtk_iommu_domain_free,
.attach_dev = mtk_iommu_v1_attach_device,
.detach_dev = mtk_iommu_v1_detach_device,
.map = mtk_iommu_v1_map,
.unmap = mtk_iommu_v1_unmap,
.iova_to_phys = mtk_iommu_v1_iova_to_phys,
.free = mtk_iommu_v1_domain_free,
}
};
static const struct of_device_id mtk_iommu_of_ids[] = {
static const struct of_device_id mtk_iommu_v1_of_ids[] = {
{ .compatible = "mediatek,mt2701-m4u", },
{}
};
static const struct component_master_ops mtk_iommu_com_ops = {
.bind = mtk_iommu_bind,
.unbind = mtk_iommu_unbind,
static const struct component_master_ops mtk_iommu_v1_com_ops = {
.bind = mtk_iommu_v1_bind,
.unbind = mtk_iommu_v1_unbind,
};
static int mtk_iommu_probe(struct platform_device *pdev)
static int mtk_iommu_v1_probe(struct platform_device *pdev)
{
struct mtk_iommu_data *data;
struct device *dev = &pdev->dev;
struct mtk_iommu_v1_data *data;
struct resource *res;
struct component_match *match = NULL;
void *protect;
@@ -647,7 +683,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, data);
ret = mtk_iommu_hw_init(data);
ret = mtk_iommu_v1_hw_init(data);
if (ret)
return ret;
@@ -656,17 +692,17 @@ static int mtk_iommu_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = iommu_device_register(&data->iommu, &mtk_iommu_ops, dev);
ret = iommu_device_register(&data->iommu, &mtk_iommu_v1_ops, dev);
if (ret)
goto out_sysfs_remove;
if (!iommu_present(&platform_bus_type)) {
ret = bus_set_iommu(&platform_bus_type, &mtk_iommu_ops);
ret = bus_set_iommu(&platform_bus_type, &mtk_iommu_v1_ops);
if (ret)
goto out_dev_unreg;
}
ret = component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
ret = component_master_add_with_match(dev, &mtk_iommu_v1_com_ops, match);
if (ret)
goto out_bus_set_null;
return ret;
@@ -680,9 +716,9 @@ out_sysfs_remove:
return ret;
}
static int mtk_iommu_remove(struct platform_device *pdev)
static int mtk_iommu_v1_remove(struct platform_device *pdev)
{
struct mtk_iommu_data *data = platform_get_drvdata(pdev);
struct mtk_iommu_v1_data *data = platform_get_drvdata(pdev);
iommu_device_sysfs_remove(&data->iommu);
iommu_device_unregister(&data->iommu);
@@ -692,14 +728,14 @@ static int mtk_iommu_remove(struct platform_device *pdev)
clk_disable_unprepare(data->bclk);
devm_free_irq(&pdev->dev, data->irq, data);
component_master_del(&pdev->dev, &mtk_iommu_com_ops);
component_master_del(&pdev->dev, &mtk_iommu_v1_com_ops);
return 0;
}
static int __maybe_unused mtk_iommu_suspend(struct device *dev)
static int __maybe_unused mtk_iommu_v1_suspend(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
struct mtk_iommu_suspend_reg *reg = &data->reg;
struct mtk_iommu_v1_data *data = dev_get_drvdata(dev);
struct mtk_iommu_v1_suspend_reg *reg = &data->reg;
void __iomem *base = data->base;
reg->standard_axi_mode = readl_relaxed(base +
@@ -710,10 +746,10 @@ static int __maybe_unused mtk_iommu_suspend(struct device *dev)
return 0;
}
static int __maybe_unused mtk_iommu_resume(struct device *dev)
static int __maybe_unused mtk_iommu_v1_resume(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
struct mtk_iommu_suspend_reg *reg = &data->reg;
struct mtk_iommu_v1_data *data = dev_get_drvdata(dev);
struct mtk_iommu_v1_suspend_reg *reg = &data->reg;
void __iomem *base = data->base;
writel_relaxed(data->m4u_dom->pgt_pa, base + REG_MMU_PT_BASE_ADDR);
@@ -726,20 +762,20 @@ static int __maybe_unused mtk_iommu_resume(struct device *dev)
return 0;
}
static const struct dev_pm_ops mtk_iommu_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(mtk_iommu_suspend, mtk_iommu_resume)
static const struct dev_pm_ops mtk_iommu_v1_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(mtk_iommu_v1_suspend, mtk_iommu_v1_resume)
};
static struct platform_driver mtk_iommu_driver = {
.probe = mtk_iommu_probe,
.remove = mtk_iommu_remove,
static struct platform_driver mtk_iommu_v1_driver = {
.probe = mtk_iommu_v1_probe,
.remove = mtk_iommu_v1_remove,
.driver = {
.name = "mtk-iommu-v1",
.of_match_table = mtk_iommu_of_ids,
.pm = &mtk_iommu_pm_ops,
.of_match_table = mtk_iommu_v1_of_ids,
.pm = &mtk_iommu_v1_pm_ops,
}
};
module_platform_driver(mtk_iommu_driver);
module_platform_driver(mtk_iommu_v1_driver);
MODULE_DESCRIPTION("IOMMU API for MediaTek M4U v1 implementations");
MODULE_LICENSE("GPL v2");

@@ -99,7 +99,7 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
if (!domain_device)
return -ENOMEM;
if (zdev->dma_table) {
if (zdev->dma_table && !zdev->s390_domain) {
cc = zpci_dma_exit_device(zdev);
if (cc) {
rc = -EIO;
@@ -107,6 +107,9 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
}
}
if (zdev->s390_domain)
zpci_unregister_ioat(zdev, 0);
zdev->dma_table = s390_domain->dma_table;
cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
virt_to_phys(zdev->dma_table));
@@ -136,7 +139,13 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
return 0;
out_restore:
zpci_dma_init_device(zdev);
if (!zdev->s390_domain) {
zpci_dma_init_device(zdev);
} else {
zdev->dma_table = zdev->s390_domain->dma_table;
zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
virt_to_phys(zdev->dma_table));
}
out_free:
kfree(domain_device);
@@ -167,7 +176,7 @@ static void s390_iommu_detach_device(struct iommu_domain *domain,
}
spin_unlock_irqrestore(&s390_domain->list_lock, flags);
if (found) {
if (found && (zdev->s390_domain == s390_domain)) {
zdev->s390_domain = NULL;
zpci_unregister_ioat(zdev, 0);
zpci_dma_init_device(zdev);

@@ -369,7 +369,6 @@ static int devm_of_pci_get_host_bridge_resources(struct device *dev,
dev_dbg(dev, "Parsing dma-ranges property...\n");
for_each_of_pci_range(&parser, &range) {
struct resource_entry *entry;
/*
* If we failed translation or got a zero-sized region
* then skip this range
@@ -393,12 +392,7 @@ static int devm_of_pci_get_host_bridge_resources(struct device *dev,
goto failed;
}
/* Keep the resource list sorted */
resource_list_for_each_entry(entry, ib_resources)
if (entry->res->start > res->start)
break;
pci_add_resource_offset(&entry->node, res,
pci_add_resource_offset(ib_resources, res,
res->start - range.pci_addr);
}

@@ -20,6 +20,7 @@
#include <linux/of_device.h>
#include <linux/acpi.h>
#include <linux/dma-map-ops.h>
#include <linux/iommu.h>
#include "pci.h"
#include "pcie/portdrv.h"
@@ -1620,6 +1621,7 @@ static int pci_bus_num_vf(struct device *dev)
*/
static int pci_dma_configure(struct device *dev)
{
struct pci_driver *driver = to_pci_driver(dev->driver);
struct device *bridge;
int ret = 0;
@@ -1635,9 +1637,24 @@ static int pci_dma_configure(struct device *dev)
}
pci_put_host_bridge_device(bridge);
if (!ret && !driver->driver_managed_dma) {
ret = iommu_device_use_default_domain(dev);
if (ret)
arch_teardown_dma_ops(dev);
}
return ret;
}
static void pci_dma_cleanup(struct device *dev)
{
struct pci_driver *driver = to_pci_driver(dev->driver);
if (!driver->driver_managed_dma)
iommu_device_unuse_default_domain(dev);
}
struct bus_type pci_bus_type = {
.name = "pci",
.match = pci_bus_match,
@@ -1651,6 +1668,7 @@ struct bus_type pci_bus_type = {
.pm = PCI_PM_OPS_PTR,
.num_vf = pci_bus_num_vf,
.dma_configure = pci_dma_configure,
.dma_cleanup = pci_dma_cleanup,
};
EXPORT_SYMBOL(pci_bus_type);

@@ -36,6 +36,7 @@ static struct pci_driver stub_driver = {
.name = "pci-stub",
.id_table = NULL, /* only dynamic id's */
.probe = pci_stub_probe,
.driver_managed_dma = true,
};
static int __init pci_stub_init(void)

@@ -202,6 +202,8 @@ static struct pci_driver pcie_portdriver = {
.err_handler = &pcie_portdrv_err_handler,
.driver_managed_dma = true,
.driver.pm = PCIE_PORTDRV_PM_OPS,
};

@@ -7,9 +7,7 @@
*/
#include <linux/device.h>
#include <linux/dmar.h>
#include <linux/idr.h>
#include <linux/iommu.h>
#include <linux/module.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
@@ -257,13 +255,9 @@ static ssize_t iommu_dma_protection_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
/*
* Kernel DMA protection is a feature where Thunderbolt security is
* handled natively using IOMMU. It is enabled when IOMMU is
* enabled and ACPI DMAR table has DMAR_PLATFORM_OPT_IN set.
*/
return sprintf(buf, "%d\n",
iommu_present(&pci_bus_type) && dmar_platform_optin());
struct tb *tb = container_of(dev, struct tb, dev);
return sysfs_emit(buf, "%d\n", tb->nhi->iommu_dma_protection);
}
static DEVICE_ATTR_RO(iommu_dma_protection);

@@ -15,9 +15,11 @@
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/iommu.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/property.h>
#include <linux/string_helpers.h>
#include "nhi.h"
#include "nhi_regs.h"
@@ -1103,6 +1105,47 @@ static void nhi_check_quirks(struct tb_nhi *nhi)
nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
}
static int nhi_check_iommu_pdev(struct pci_dev *pdev, void *data)
{
if (!pdev->external_facing ||
!device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
return 0;
*(bool *)data = true;
return 1; /* Stop walking */
}
static void nhi_check_iommu(struct tb_nhi *nhi)
{
struct pci_bus *bus = nhi->pdev->bus;
bool port_ok = false;
/*
* Ideally what we'd do here is grab every PCI device that
* represents a tunnelling adapter for this NHI and check their
* status directly, but unfortunately USB4 seems to make it
* obnoxiously difficult to reliably make any correlation.
*
* So for now we'll have to bodge it... Hoping that the system
* is at least sane enough that an adapter is in the same PCI
* segment as its NHI, if we can find *something* on that segment
* which meets the requirements for Kernel DMA Protection, we'll
* take that to imply that firmware is aware and has (hopefully)
* done the right thing in general. We need to know that the PCI
* layer has seen the ExternalFacingPort property which will then
* inform the IOMMU layer to enforce the complete "untrusted DMA"
* flow, but also that the IOMMU driver itself can be trusted not
* to have been subverted by a pre-boot DMA attack.
*/
while (bus->parent)
bus = bus->parent;
pci_walk_bus(bus, nhi_check_iommu_pdev, &port_ok);
nhi->iommu_dma_protection = port_ok;
dev_dbg(&nhi->pdev->dev, "IOMMU DMA protection is %s\n",
str_enabled_disabled(port_ok));
}
static int nhi_init_msi(struct tb_nhi *nhi)
{
struct pci_dev *pdev = nhi->pdev;
@@ -1220,6 +1263,7 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return -ENOMEM;
nhi_check_quirks(nhi);
nhi_check_iommu(nhi);
res = nhi_init_msi(nhi);
if (res) {

@@ -588,6 +588,7 @@ static struct fsl_mc_driver vfio_fsl_mc_driver = {
.name = "vfio-fsl-mc",
.owner = THIS_MODULE,
},
.driver_managed_dma = true,
};
static int __init vfio_fsl_mc_driver_init(void)

@@ -194,6 +194,7 @@ static struct pci_driver vfio_pci_driver = {
.remove = vfio_pci_remove,
.sriov_configure = vfio_pci_sriov_configure,
.err_handler = &vfio_pci_core_err_handlers,
.driver_managed_dma = true,
};
static void __init vfio_pci_fill_ids(void)

@@ -95,6 +95,7 @@ static struct amba_driver vfio_amba_driver = {
.name = "vfio-amba",
.owner = THIS_MODULE,
},
.driver_managed_dma = true,
};
module_amba_driver(vfio_amba_driver);

@@ -76,6 +76,7 @@ static struct platform_driver vfio_platform_driver = {
.driver = {
.name = "vfio-platform",
},
.driver_managed_dma = true,
};
module_platform_driver(vfio_platform_driver);

@@ -62,11 +62,6 @@ struct vfio_container {
bool noiommu;
};
struct vfio_unbound_dev {
struct device *dev;
struct list_head unbound_next;
};
struct vfio_group {
struct device dev;
struct cdev cdev;
@@ -76,11 +71,8 @@ struct vfio_group {
struct vfio_container *container;
struct list_head device_list;
struct mutex device_lock;
struct notifier_block nb;
struct list_head vfio_next;
struct list_head container_next;
struct list_head unbound_list;
struct mutex unbound_lock;
atomic_t opened;
wait_queue_head_t container_q;
enum vfio_group_type type;
@@ -281,8 +273,6 @@ void vfio_unregister_iommu_driver(const struct vfio_iommu_driver_ops *ops)
}
EXPORT_SYMBOL_GPL(vfio_unregister_iommu_driver);
static int vfio_iommu_group_notifier(struct notifier_block *nb,
unsigned long action, void *data);
static void vfio_group_get(struct vfio_group *group);
/*
@@ -340,16 +330,8 @@ vfio_group_get_from_iommu(struct iommu_group *iommu_group)
static void vfio_group_release(struct device *dev)
{
struct vfio_group *group = container_of(dev, struct vfio_group, dev);
struct vfio_unbound_dev *unbound, *tmp;
list_for_each_entry_safe(unbound, tmp,
&group->unbound_list, unbound_next) {
list_del(&unbound->unbound_next);
kfree(unbound);
}
mutex_destroy(&group->device_lock);
mutex_destroy(&group->unbound_lock);
iommu_group_put(group->iommu_group);
ida_free(&vfio.group_ida, MINOR(group->dev.devt));
kfree(group);
@@ -381,8 +363,6 @@ static struct vfio_group *vfio_group_alloc(struct iommu_group *iommu_group,
refcount_set(&group->users, 1);
INIT_LIST_HEAD(&group->device_list);
mutex_init(&group->device_lock);
INIT_LIST_HEAD(&group->unbound_list);
mutex_init(&group->unbound_lock);
init_waitqueue_head(&group->container_q);
group->iommu_group = iommu_group;
/* put in vfio_group_release() */
@@ -412,13 +392,6 @@ static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group,
goto err_put;
}
group->nb.notifier_call = vfio_iommu_group_notifier;
err = iommu_group_register_notifier(iommu_group, &group->nb);
if (err) {
ret = ERR_PTR(err);
goto err_put;
}
mutex_lock(&vfio.group_lock);
/* Did we race creating this group? */
@@ -439,7 +412,6 @@ static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group,
err_unlock:
mutex_unlock(&vfio.group_lock);
iommu_group_unregister_notifier(group->iommu_group, &group->nb);
err_put:
put_device(&group->dev);
return ret;
@@ -464,7 +436,6 @@ static void vfio_group_put(struct vfio_group *group)
cdev_device_del(&group->cdev, &group->dev);
mutex_unlock(&vfio.group_lock);
iommu_group_unregister_notifier(group->iommu_group, &group->nb);
put_device(&group->dev);
}
@@ -520,175 +491,6 @@ static struct vfio_device *vfio_group_get_device(struct vfio_group *group,
return NULL;
}
/*
* Some drivers, like pci-stub, are only used to prevent other drivers from
* claiming a device and are therefore perfectly legitimate for a user owned
* group. The pci-stub driver has no dependencies on DMA or the IOVA mapping
* of the device, but it does prevent the user from having direct access to
* the device, which is useful in some circumstances.
*
* We also assume that we can include PCI interconnect devices, ie. bridges.
* IOMMU grouping on PCI necessitates that if we lack isolation on a bridge
* then all of the downstream devices will be part of the same IOMMU group as
* the bridge. Thus, if placing the bridge into the user owned IOVA space
* breaks anything, it only does so for user owned devices downstream. Note
* that error notification via MSI can be affected for platforms that handle
* MSI within the same IOVA space as DMA.
*/
static const char * const vfio_driver_allowed[] = { "pci-stub" };
static bool vfio_dev_driver_allowed(struct device *dev,
struct device_driver *drv)
{
if (dev_is_pci(dev)) {
struct pci_dev *pdev = to_pci_dev(dev);
if (pdev->hdr_type != PCI_HEADER_TYPE_NORMAL)
return true;
}
return match_string(vfio_driver_allowed,
ARRAY_SIZE(vfio_driver_allowed),
drv->name) >= 0;
}
/*
* A vfio group is viable for use by userspace if all devices are in
* one of the following states:
* - driver-less
* - bound to a vfio driver
* - bound to an otherwise allowed driver
* - a PCI interconnect device
*
* We use two methods to determine whether a device is bound to a vfio
* driver. The first is to test whether the device exists in the vfio
* group. The second is to test if the device exists on the group
* unbound_list, indicating it's in the middle of transitioning from
* a vfio driver to driver-less.
*/
static int vfio_dev_viable(struct device *dev, void *data)
{
struct vfio_group *group = data;
struct vfio_device *device;
struct device_driver *drv = READ_ONCE(dev->driver);
struct vfio_unbound_dev *unbound;
int ret = -EINVAL;
mutex_lock(&group->unbound_lock);
list_for_each_entry(unbound, &group->unbound_list, unbound_next) {
if (dev == unbound->dev) {
ret = 0;
break;
}
}
mutex_unlock(&group->unbound_lock);
if (!ret || !drv || vfio_dev_driver_allowed(dev, drv))
return 0;
device = vfio_group_get_device(group, dev);
if (device) {
vfio_device_put(device);
return 0;
}
return ret;
}
/*
* Async device support
*/
static int vfio_group_nb_add_dev(struct vfio_group *group, struct device *dev)
{
struct vfio_device *device;
/* Do we already know about it? We shouldn't */
device = vfio_group_get_device(group, dev);
if (WARN_ON_ONCE(device)) {
vfio_device_put(device);
return 0;
}
/* Nothing to do for idle groups */
if (!atomic_read(&group->container_users))
return 0;
/* TODO Prevent device auto probing */
dev_WARN(dev, "Device added to live group %d!\n",
iommu_group_id(group->iommu_group));
return 0;
}
static int vfio_group_nb_verify(struct vfio_group *group, struct device *dev)
{
/* We don't care what happens when the group isn't in use */
if (!atomic_read(&group->container_users))
return 0;
return vfio_dev_viable(dev, group);
}
static int vfio_iommu_group_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
struct vfio_group *group = container_of(nb, struct vfio_group, nb);
struct device *dev = data;
struct vfio_unbound_dev *unbound;
switch (action) {
case IOMMU_GROUP_NOTIFY_ADD_DEVICE:
vfio_group_nb_add_dev(group, dev);
break;
case IOMMU_GROUP_NOTIFY_DEL_DEVICE:
/*
* Nothing to do here. If the device is in use, then the
* vfio sub-driver should block the remove callback until
* it is unused. If the device is unused or attached to a
* stub driver, then it should be released and we don't
* care that it will be going away.
*/
break;
case IOMMU_GROUP_NOTIFY_BIND_DRIVER:
dev_dbg(dev, "%s: group %d binding to driver\n", __func__,
iommu_group_id(group->iommu_group));
break;
case IOMMU_GROUP_NOTIFY_BOUND_DRIVER:
dev_dbg(dev, "%s: group %d bound to driver %s\n", __func__,
iommu_group_id(group->iommu_group), dev->driver->name);
BUG_ON(vfio_group_nb_verify(group, dev));
break;
case IOMMU_GROUP_NOTIFY_UNBIND_DRIVER:
dev_dbg(dev, "%s: group %d unbinding from driver %s\n",
__func__, iommu_group_id(group->iommu_group),
dev->driver->name);
break;
case IOMMU_GROUP_NOTIFY_UNBOUND_DRIVER:
dev_dbg(dev, "%s: group %d unbound from driver\n", __func__,
iommu_group_id(group->iommu_group));
/*
* XXX An unbound device in a live group is ok, but we'd
* really like to avoid the above BUG_ON by preventing other
* drivers from binding to it. Once that occurs, we have to
* stop the system to maintain isolation. At a minimum, we'd
* want a toggle to disable driver auto probe for this device.
*/
mutex_lock(&group->unbound_lock);
list_for_each_entry(unbound,
&group->unbound_list, unbound_next) {
if (dev == unbound->dev) {
list_del(&unbound->unbound_next);
kfree(unbound);
break;
}
}
mutex_unlock(&group->unbound_lock);
break;
}
return NOTIFY_OK;
}
/*
* VFIO driver API
*/
@@ -815,6 +617,13 @@ static int __vfio_register_dev(struct vfio_device *device,
int vfio_register_group_dev(struct vfio_device *device)
{
/*
* VFIO always sets IOMMU_CACHE because we offer no way for userspace to
* restore cache coherency.
*/
if (!iommu_capable(device->dev->bus, IOMMU_CAP_CACHE_COHERENCY))
return -EINVAL;
return __vfio_register_dev(device,
vfio_group_find_or_alloc(device->dev));
}
@@ -889,29 +698,10 @@ static struct vfio_device *vfio_device_get_from_name(struct vfio_group *group,
void vfio_unregister_group_dev(struct vfio_device *device)
{
struct vfio_group *group = device->group;
struct vfio_unbound_dev *unbound;
unsigned int i = 0;
bool interrupted = false;
long rc;
/*
* When the device is removed from the group, the group suddenly
* becomes non-viable; the device has a driver (until the unbind
* completes), but it's not present in the group. This is bad news
* for any external users that need to re-acquire a group reference
* in order to match and release their existing reference. To
* solve this, we track such devices on the unbound_list to bridge
* the gap until they're fully unbound.
*/
unbound = kzalloc(sizeof(*unbound), GFP_KERNEL);
if (unbound) {
unbound->dev = device->dev;
mutex_lock(&group->unbound_lock);
list_add(&unbound->unbound_next, &group->unbound_list);
mutex_unlock(&group->unbound_lock);
}
WARN_ON(!unbound);
vfio_device_put(device);
rc = try_wait_for_completion(&device->comp);
while (rc <= 0) {
@@ -1198,6 +988,8 @@ static void __vfio_group_unset_container(struct vfio_group *group)
driver->ops->detach_group(container->iommu_data,
group->iommu_group);
iommu_group_release_dma_owner(group->iommu_group);
group->container = NULL;
wake_up(&group->container_q);
list_del(&group->container_next);
@@ -1282,13 +1074,19 @@ static int vfio_group_set_container(struct vfio_group *group, int container_fd)
goto unlock_out;
}
ret = iommu_group_claim_dma_owner(group->iommu_group, f.file);
if (ret)
goto unlock_out;
driver = container->iommu_driver;
if (driver) {
ret = driver->ops->attach_group(container->iommu_data,
group->iommu_group,
group->type);
if (ret)
if (ret) {
iommu_group_release_dma_owner(group->iommu_group);
goto unlock_out;
}
}
group->container = container;
@@ -1305,12 +1103,6 @@ unlock_out:
return ret;
}
static bool vfio_group_viable(struct vfio_group *group)
{
return (iommu_group_for_each_dev(group->iommu_group,
group, vfio_dev_viable) == 0);
}
static int vfio_group_add_container_user(struct vfio_group *group)
{
if (!atomic_inc_not_zero(&group->container_users))
@@ -1320,7 +1112,7 @@ static int vfio_group_add_container_user(struct vfio_group *group)
atomic_dec(&group->container_users);
return -EPERM;
}
if (!group->container->iommu_driver || !vfio_group_viable(group)) {
if (!group->container->iommu_driver) {
atomic_dec(&group->container_users);
return -EINVAL;
}
@@ -1338,7 +1130,7 @@ static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
int ret = 0;
if (0 == atomic_read(&group->container_users) ||
!group->container->iommu_driver || !vfio_group_viable(group))
!group->container->iommu_driver)
return -EINVAL;
if (group->type == VFIO_NO_IOMMU && !capable(CAP_SYS_RAWIO))
@@ -1430,11 +1222,11 @@ static long vfio_group_fops_unl_ioctl(struct file *filep,
status.flags = 0;
if (vfio_group_viable(group))
status.flags |= VFIO_GROUP_FLAGS_VIABLE;
if (group->container)
status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET;
status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET |
VFIO_GROUP_FLAGS_VIABLE;
else if (!iommu_group_dma_owner_claimed(group->iommu_group))
status.flags |= VFIO_GROUP_FLAGS_VIABLE;
if (copy_to_user((void __user *)arg, &status, minsz))
return -EFAULT;

@@ -84,8 +84,8 @@ struct vfio_domain {
struct iommu_domain *domain;
struct list_head next;
struct list_head group_list;
int prot; /* IOMMU_CACHE */
bool fgsp; /* Fine-grained super pages */
bool fgsp : 1; /* Fine-grained super pages */
bool enforce_cache_coherency : 1;
};
struct vfio_dma {
@@ -1461,7 +1461,7 @@ static int vfio_iommu_map(struct vfio_iommu *iommu, dma_addr_t iova,
list_for_each_entry(d, &iommu->domain_list, next) {
ret = iommu_map(d->domain, iova, (phys_addr_t)pfn << PAGE_SHIFT,
npage << PAGE_SHIFT, prot | d->prot);
npage << PAGE_SHIFT, prot | IOMMU_CACHE);
if (ret)
goto unwind;
@@ -1771,7 +1771,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
}
ret = iommu_map(domain->domain, iova, phys,
size, dma->prot | domain->prot);
size, dma->prot | IOMMU_CACHE);
if (ret) {
if (!dma->iommu_mapped) {
vfio_unpin_pages_remote(dma, iova,
@@ -1859,7 +1859,7 @@ static void vfio_test_domain_fgsp(struct vfio_domain *domain)
return;
ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
IOMMU_READ | IOMMU_WRITE | domain->prot);
IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
if (!ret) {
size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
@@ -2267,8 +2267,15 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
goto out_detach;
}
if (iommu_capable(bus, IOMMU_CAP_CACHE_COHERENCY))
domain->prot |= IOMMU_CACHE;
/*
* If the IOMMU can block non-coherent operations (ie PCIe TLPs with
* no-snoop set) then VFIO always turns this feature on because on Intel
* platforms it optimizes KVM to disable wbinvd emulation.
*/
if (domain->domain->ops->enforce_cache_coherency)
domain->enforce_cache_coherency =
domain->domain->ops->enforce_cache_coherency(
domain->domain);
/*
* Try to match an existing compatible domain. We don't want to
@@ -2279,7 +2286,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
*/
list_for_each_entry(d, &iommu->domain_list, next) {
if (d->domain->ops == domain->domain->ops &&
d->prot == domain->prot) {
d->enforce_cache_coherency ==
domain->enforce_cache_coherency) {
iommu_detach_group(domain->domain, group->iommu_group);
if (!iommu_attach_group(d->domain,
group->iommu_group)) {
@@ -2611,14 +2619,14 @@ static void vfio_iommu_type1_release(void *iommu_data)
kfree(iommu);
}
static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
static int vfio_domains_have_enforce_cache_coherency(struct vfio_iommu *iommu)
{
struct vfio_domain *domain;
int ret = 1;
mutex_lock(&iommu->lock);
list_for_each_entry(domain, &iommu->domain_list, next) {
if (!(domain->prot & IOMMU_CACHE)) {
if (!(domain->enforce_cache_coherency)) {
ret = 0;
break;
}
@@ -2641,7 +2649,7 @@ static int vfio_iommu_type1_check_extension(struct vfio_iommu *iommu,
case VFIO_DMA_CC_IOMMU:
if (!iommu)
return 0;
return vfio_domains_have_iommu_cache(iommu);
return vfio_domains_have_enforce_cache_coherency(iommu);
default:
return 0;
}
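The hunks above make VFIO key its no-snoop/wbinvd decision off the new enforce_cache_coherency() domain op instead of the old IOMMU_CACHE-based d->prot tracking. As context, here is a minimal, purely illustrative sketch of how an IOMMU driver might back that op; the callback shape (a bool result, taking the iommu_domain) follows the call site above, while the my_* names and fields are hypothetical and not part of this series:

#include <linux/iommu.h>

/* Hypothetical driver-side state; for illustration only. */
struct my_domain {
	struct iommu_domain domain;
	bool hw_has_snoop_control;	/* HW can override no-snoop TLPs */
	bool force_snooping;		/* set once coherency is enforced */
};

static bool my_iommu_enforce_cache_coherency(struct iommu_domain *domain)
{
	struct my_domain *dom = container_of(domain, struct my_domain, domain);

	/* Without snoop control the domain cannot guarantee coherency. */
	if (!dom->hw_has_snoop_control)
		return false;	/* VFIO then reports VFIO_DMA_CC_IOMMU as 0 */

	/* e.g. force a snoop bit in all page-table entries from now on. */
	dom->force_snooping = true;
	return true;
}

The attach path above then only reuses an existing vfio_domain when the two enforce_cache_coherency results match, which is what replaces the old d->prot comparison.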

@@ -0,0 +1,217 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2022 MediaTek Inc.
*
* Author: Anan Sun <anan.sun@mediatek.com>
* Author: Yong Wu <yong.wu@mediatek.com>
*/
#ifndef _DT_BINDINGS_MEMORY_MT8186_LARB_PORT_H_
#define _DT_BINDINGS_MEMORY_MT8186_LARB_PORT_H_
#include <dt-bindings/memory/mtk-memory-port.h>
/*
* The MM IOMMU supports a 16GB dma address space, which we split into four
* ranges: 0 ~ 4G; 4G ~ 8G; 8G ~ 12G; 12G ~ 16G. The masters may be placed in
* any of these regions, BUT:
* a) all the ports inside one larb must stay within a single range, and
* b) the iova of any master can NOT cross the 4G/8G/12G boundary.
*
* This is the suggested mapping in this SoC:
*
* modules dma-address-region larbs-ports
* disp 0 ~ 4G larb0/1/2
* vcodec 4G ~ 8G larb4/7
* cam/mdp 8G ~ 12G the other larbs.
* N/A 12G ~ 16G
* CCU0 0x24000_0000 ~ 0x243ff_ffff larb13: port 9/10
* CCU1 0x24400_0000 ~ 0x247ff_ffff larb14: port 4/5
*/
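The port IDs below are plain integers built from a (larb, port) pair. A small stand-alone sketch of how one of them decodes back, assuming the MTK_M4U_ID()/MTK_M4U_TO_LARB()/MTK_M4U_TO_PORT() packing from dt-bindings/memory/mtk-memory-port.h (larb in the bits above bit 4, port in the low five bits); that header is not part of this diff, so treat the encoding as an assumption:

#include <stdio.h>

/* Assumed packing, mirroring mtk-memory-port.h (not shown in this series). */
#define MTK_M4U_ID(larb, port)	(((larb) << 5) | (port))
#define MTK_M4U_TO_LARB(id)	(((id) >> 5) & 0x1f)
#define MTK_M4U_TO_PORT(id)	((id) & 0x1f)

/* One of the CCU ports defined below, which belongs to the CCU0 range. */
#define IOMMU_PORT_L13_CAM_CCUI	MTK_M4U_ID(13, 9)

int main(void)
{
	unsigned int id = IOMMU_PORT_L13_CAM_CCUI;

	/* Prints "larb 13, port 9". */
	printf("larb %u, port %u\n", MTK_M4U_TO_LARB(id), MTK_M4U_TO_PORT(id));
	return 0;
}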
/* MM IOMMU ports */
/* LARB 0 -- MMSYS */
#define IOMMU_PORT_L0_DISP_POSTMASK0 MTK_M4U_ID(0, 0)
#define IOMMU_PORT_L0_REVERSED MTK_M4U_ID(0, 1)
#define IOMMU_PORT_L0_OVL_RDMA0 MTK_M4U_ID(0, 2)
#define IOMMU_PORT_L0_DISP_FAKE0 MTK_M4U_ID(0, 3)
/* LARB 1 -- MMSYS */
#define IOMMU_PORT_L1_DISP_RDMA1 MTK_M4U_ID(1, 0)
#define IOMMU_PORT_L1_OVL_2L_RDMA0 MTK_M4U_ID(1, 1)
#define IOMMU_PORT_L1_DISP_RDMA0 MTK_M4U_ID(1, 2)
#define IOMMU_PORT_L1_DISP_WDMA0 MTK_M4U_ID(1, 3)
#define IOMMU_PORT_L1_DISP_FAKE1 MTK_M4U_ID(1, 4)
/* LARB 2 -- MMSYS */
#define IOMMU_PORT_L2_MDP_RDMA0 MTK_M4U_ID(2, 0)
#define IOMMU_PORT_L2_MDP_RDMA1 MTK_M4U_ID(2, 1)
#define IOMMU_PORT_L2_MDP_WROT0 MTK_M4U_ID(2, 2)
#define IOMMU_PORT_L2_MDP_WROT1 MTK_M4U_ID(2, 3)
#define IOMMU_PORT_L2_DISP_FAKE0 MTK_M4U_ID(2, 4)
/* LARB 4 -- VDEC */
#define IOMMU_PORT_L4_HW_VDEC_MC_EXT MTK_M4U_ID(4, 0)
#define IOMMU_PORT_L4_HW_VDEC_UFO_EXT MTK_M4U_ID(4, 1)
#define IOMMU_PORT_L4_HW_VDEC_PP_EXT MTK_M4U_ID(4, 2)
#define IOMMU_PORT_L4_HW_VDEC_PRED_RD_EXT MTK_M4U_ID(4, 3)
#define IOMMU_PORT_L4_HW_VDEC_PRED_WR_EXT MTK_M4U_ID(4, 4)
#define IOMMU_PORT_L4_HW_VDEC_PPWRAP_EXT MTK_M4U_ID(4, 5)
#define IOMMU_PORT_L4_HW_VDEC_TILE_EXT MTK_M4U_ID(4, 6)
#define IOMMU_PORT_L4_HW_VDEC_VLD_EXT MTK_M4U_ID(4, 7)
#define IOMMU_PORT_L4_HW_VDEC_VLD2_EXT MTK_M4U_ID(4, 8)
#define IOMMU_PORT_L4_HW_VDEC_AVC_MV_EXT MTK_M4U_ID(4, 9)
#define IOMMU_PORT_L4_HW_VDEC_UFO_ENC_EXT MTK_M4U_ID(4, 10)
#define IOMMU_PORT_L4_HW_VDEC_RG_CTRL_DMA_EXT MTK_M4U_ID(4, 11)
#define IOMMU_PORT_L4_HW_MINI_MDP_R0_EXT MTK_M4U_ID(4, 12)
#define IOMMU_PORT_L4_HW_MINI_MDP_W0_EXT MTK_M4U_ID(4, 13)
/* LARB 7 -- VENC */
#define IOMMU_PORT_L7_VENC_RCPU MTK_M4U_ID(7, 0)
#define IOMMU_PORT_L7_VENC_REC MTK_M4U_ID(7, 1)
#define IOMMU_PORT_L7_VENC_BSDMA MTK_M4U_ID(7, 2)
#define IOMMU_PORT_L7_VENC_SV_COMV MTK_M4U_ID(7, 3)
#define IOMMU_PORT_L7_VENC_RD_COMV MTK_M4U_ID(7, 4)
#define IOMMU_PORT_L7_VENC_CUR_LUMA MTK_M4U_ID(7, 5)
#define IOMMU_PORT_L7_VENC_CUR_CHROMA MTK_M4U_ID(7, 6)
#define IOMMU_PORT_L7_VENC_REF_LUMA MTK_M4U_ID(7, 7)
#define IOMMU_PORT_L7_VENC_REF_CHROMA MTK_M4U_ID(7, 8)
#define IOMMU_PORT_L7_JPGENC_Y_RDMA MTK_M4U_ID(7, 9)
#define IOMMU_PORT_L7_JPGENC_C_RDMA MTK_M4U_ID(7, 10)
#define IOMMU_PORT_L7_JPGENC_Q_TABLE MTK_M4U_ID(7, 11)
#define IOMMU_PORT_L7_JPGENC_BSDMA MTK_M4U_ID(7, 12)
/* LARB 8 -- WPE */
#define IOMMU_PORT_L8_WPE_RDMA_0 MTK_M4U_ID(8, 0)
#define IOMMU_PORT_L8_WPE_RDMA_1 MTK_M4U_ID(8, 1)
#define IOMMU_PORT_L8_WPE_WDMA_0 MTK_M4U_ID(8, 2)
/* LARB 9 -- IMG-1 */
#define IOMMU_PORT_L9_IMG_IMGI_D1 MTK_M4U_ID(9, 0)
#define IOMMU_PORT_L9_IMG_IMGBI_D1 MTK_M4U_ID(9, 1)
#define IOMMU_PORT_L9_IMG_DMGI_D1 MTK_M4U_ID(9, 2)
#define IOMMU_PORT_L9_IMG_DEPI_D1 MTK_M4U_ID(9, 3)
#define IOMMU_PORT_L9_IMG_LCE_D1 MTK_M4U_ID(9, 4)
#define IOMMU_PORT_L9_IMG_SMTI_D1 MTK_M4U_ID(9, 5)
#define IOMMU_PORT_L9_IMG_SMTO_D2 MTK_M4U_ID(9, 6)
#define IOMMU_PORT_L9_IMG_SMTO_D1 MTK_M4U_ID(9, 7)
#define IOMMU_PORT_L9_IMG_CRZO_D1 MTK_M4U_ID(9, 8)
#define IOMMU_PORT_L9_IMG_IMG3O_D1 MTK_M4U_ID(9, 9)
#define IOMMU_PORT_L9_IMG_VIPI_D1 MTK_M4U_ID(9, 10)
#define IOMMU_PORT_L9_IMG_SMTI_D5 MTK_M4U_ID(9, 11)
#define IOMMU_PORT_L9_IMG_TIMGO_D1 MTK_M4U_ID(9, 12)
#define IOMMU_PORT_L9_IMG_UFBC_W0 MTK_M4U_ID(9, 13)
#define IOMMU_PORT_L9_IMG_UFBC_R0 MTK_M4U_ID(9, 14)
#define IOMMU_PORT_L9_IMG_WPE_RDMA1 MTK_M4U_ID(9, 15)
#define IOMMU_PORT_L9_IMG_WPE_RDMA0 MTK_M4U_ID(9, 16)
#define IOMMU_PORT_L9_IMG_WPE_WDMA MTK_M4U_ID(9, 17)
#define IOMMU_PORT_L9_IMG_MFB_RDMA0 MTK_M4U_ID(9, 18)
#define IOMMU_PORT_L9_IMG_MFB_RDMA1 MTK_M4U_ID(9, 19)
#define IOMMU_PORT_L9_IMG_MFB_RDMA2 MTK_M4U_ID(9, 20)
#define IOMMU_PORT_L9_IMG_MFB_RDMA3 MTK_M4U_ID(9, 21)
#define IOMMU_PORT_L9_IMG_MFB_RDMA4 MTK_M4U_ID(9, 22)
#define IOMMU_PORT_L9_IMG_MFB_RDMA5 MTK_M4U_ID(9, 23)
#define IOMMU_PORT_L9_IMG_MFB_WDMA0 MTK_M4U_ID(9, 24)
#define IOMMU_PORT_L9_IMG_MFB_WDMA1 MTK_M4U_ID(9, 25)
#define IOMMU_PORT_L9_IMG_RESERVE6 MTK_M4U_ID(9, 26)
#define IOMMU_PORT_L9_IMG_RESERVE7 MTK_M4U_ID(9, 27)
#define IOMMU_PORT_L9_IMG_RESERVE8 MTK_M4U_ID(9, 28)
/* LARB 11 -- IMG-2 */
#define IOMMU_PORT_L11_IMG_IMGI_D1 MTK_M4U_ID(11, 0)
#define IOMMU_PORT_L11_IMG_IMGBI_D1 MTK_M4U_ID(11, 1)
#define IOMMU_PORT_L11_IMG_DMGI_D1 MTK_M4U_ID(11, 2)
#define IOMMU_PORT_L11_IMG_DEPI_D1 MTK_M4U_ID(11, 3)
#define IOMMU_PORT_L11_IMG_LCE_D1 MTK_M4U_ID(11, 4)
#define IOMMU_PORT_L11_IMG_SMTI_D1 MTK_M4U_ID(11, 5)
#define IOMMU_PORT_L11_IMG_SMTO_D2 MTK_M4U_ID(11, 6)
#define IOMMU_PORT_L11_IMG_SMTO_D1 MTK_M4U_ID(11, 7)
#define IOMMU_PORT_L11_IMG_CRZO_D1 MTK_M4U_ID(11, 8)
#define IOMMU_PORT_L11_IMG_IMG3O_D1 MTK_M4U_ID(11, 9)
#define IOMMU_PORT_L11_IMG_VIPI_D1 MTK_M4U_ID(11, 10)
#define IOMMU_PORT_L11_IMG_SMTI_D5 MTK_M4U_ID(11, 11)
#define IOMMU_PORT_L11_IMG_TIMGO_D1 MTK_M4U_ID(11, 12)
#define IOMMU_PORT_L11_IMG_UFBC_W0 MTK_M4U_ID(11, 13)
#define IOMMU_PORT_L11_IMG_UFBC_R0 MTK_M4U_ID(11, 14)
#define IOMMU_PORT_L11_IMG_WPE_RDMA1 MTK_M4U_ID(11, 15)
#define IOMMU_PORT_L11_IMG_WPE_RDMA0 MTK_M4U_ID(11, 16)
#define IOMMU_PORT_L11_IMG_WPE_WDMA MTK_M4U_ID(11, 17)
#define IOMMU_PORT_L11_IMG_MFB_RDMA0 MTK_M4U_ID(11, 18)
#define IOMMU_PORT_L11_IMG_MFB_RDMA1 MTK_M4U_ID(11, 19)
#define IOMMU_PORT_L11_IMG_MFB_RDMA2 MTK_M4U_ID(11, 20)
#define IOMMU_PORT_L11_IMG_MFB_RDMA3 MTK_M4U_ID(11, 21)
#define IOMMU_PORT_L11_IMG_MFB_RDMA4 MTK_M4U_ID(11, 22)
#define IOMMU_PORT_L11_IMG_MFB_RDMA5 MTK_M4U_ID(11, 23)
#define IOMMU_PORT_L11_IMG_MFB_WDMA0 MTK_M4U_ID(11, 24)
#define IOMMU_PORT_L11_IMG_MFB_WDMA1 MTK_M4U_ID(11, 25)
#define IOMMU_PORT_L11_IMG_RESERVE6 MTK_M4U_ID(11, 26)
#define IOMMU_PORT_L11_IMG_RESERVE7 MTK_M4U_ID(11, 27)
#define IOMMU_PORT_L11_IMG_RESERVE8 MTK_M4U_ID(11, 28)
/* LARB 13 -- CAM */
#define IOMMU_PORT_L13_CAM_MRAWI MTK_M4U_ID(13, 0)
#define IOMMU_PORT_L13_CAM_MRAWO_0 MTK_M4U_ID(13, 1)
#define IOMMU_PORT_L13_CAM_MRAWO_1 MTK_M4U_ID(13, 2)
#define IOMMU_PORT_L13_CAM_CAMSV_4 MTK_M4U_ID(13, 6)
#define IOMMU_PORT_L13_CAM_CAMSV_5 MTK_M4U_ID(13, 7)
#define IOMMU_PORT_L13_CAM_CAMSV_6 MTK_M4U_ID(13, 8)
#define IOMMU_PORT_L13_CAM_CCUI MTK_M4U_ID(13, 9)
#define IOMMU_PORT_L13_CAM_CCUO MTK_M4U_ID(13, 10)
#define IOMMU_PORT_L13_CAM_FAKE MTK_M4U_ID(13, 11)
/* LARB 14 -- CAM */
#define IOMMU_PORT_L14_CAM_CCUI MTK_M4U_ID(14, 4)
#define IOMMU_PORT_L14_CAM_CCUO MTK_M4U_ID(14, 5)
/* LARB 16 -- RAW-A */
#define IOMMU_PORT_L16_CAM_IMGO_R1_A MTK_M4U_ID(16, 0)
#define IOMMU_PORT_L16_CAM_RRZO_R1_A MTK_M4U_ID(16, 1)
#define IOMMU_PORT_L16_CAM_CQI_R1_A MTK_M4U_ID(16, 2)
#define IOMMU_PORT_L16_CAM_BPCI_R1_A MTK_M4U_ID(16, 3)
#define IOMMU_PORT_L16_CAM_YUVO_R1_A MTK_M4U_ID(16, 4)
#define IOMMU_PORT_L16_CAM_UFDI_R2_A MTK_M4U_ID(16, 5)
#define IOMMU_PORT_L16_CAM_RAWI_R2_A MTK_M4U_ID(16, 6)
#define IOMMU_PORT_L16_CAM_RAWI_R3_A MTK_M4U_ID(16, 7)
#define IOMMU_PORT_L16_CAM_AAO_R1_A MTK_M4U_ID(16, 8)
#define IOMMU_PORT_L16_CAM_AFO_R1_A MTK_M4U_ID(16, 9)
#define IOMMU_PORT_L16_CAM_FLKO_R1_A MTK_M4U_ID(16, 10)
#define IOMMU_PORT_L16_CAM_LCESO_R1_A MTK_M4U_ID(16, 11)
#define IOMMU_PORT_L16_CAM_CRZO_R1_A MTK_M4U_ID(16, 12)
#define IOMMU_PORT_L16_CAM_LTMSO_R1_A MTK_M4U_ID(16, 13)
#define IOMMU_PORT_L16_CAM_RSSO_R1_A MTK_M4U_ID(16, 14)
#define IOMMU_PORT_L16_CAM_AAHO_R1_A MTK_M4U_ID(16, 15)
#define IOMMU_PORT_L16_CAM_LSCI_R1_A MTK_M4U_ID(16, 16)
/* LARB 17 -- RAW-B */
#define IOMMU_PORT_L17_CAM_IMGO_R1_B MTK_M4U_ID(17, 0)
#define IOMMU_PORT_L17_CAM_RRZO_R1_B MTK_M4U_ID(17, 1)
#define IOMMU_PORT_L17_CAM_CQI_R1_B MTK_M4U_ID(17, 2)
#define IOMMU_PORT_L17_CAM_BPCI_R1_B MTK_M4U_ID(17, 3)
#define IOMMU_PORT_L17_CAM_YUVO_R1_B MTK_M4U_ID(17, 4)
#define IOMMU_PORT_L17_CAM_UFDI_R2_B MTK_M4U_ID(17, 5)
#define IOMMU_PORT_L17_CAM_RAWI_R2_B MTK_M4U_ID(17, 6)
#define IOMMU_PORT_L17_CAM_RAWI_R3_B MTK_M4U_ID(17, 7)
#define IOMMU_PORT_L17_CAM_AAO_R1_B MTK_M4U_ID(17, 8)
#define IOMMU_PORT_L17_CAM_AFO_R1_B MTK_M4U_ID(17, 9)
#define IOMMU_PORT_L17_CAM_FLKO_R1_B MTK_M4U_ID(17, 10)
#define IOMMU_PORT_L17_CAM_LCESO_R1_B MTK_M4U_ID(17, 11)
#define IOMMU_PORT_L17_CAM_CRZO_R1_B MTK_M4U_ID(17, 12)
#define IOMMU_PORT_L17_CAM_LTMSO_R1_B MTK_M4U_ID(17, 13)
#define IOMMU_PORT_L17_CAM_RSSO_R1_B MTK_M4U_ID(17, 14)
#define IOMMU_PORT_L17_CAM_AAHO_R1_B MTK_M4U_ID(17, 15)
#define IOMMU_PORT_L17_CAM_LSCI_R1_B MTK_M4U_ID(17, 16)
/* LARB 19 -- IPE */
#define IOMMU_PORT_L19_IPE_DVS_RDMA MTK_M4U_ID(19, 0)
#define IOMMU_PORT_L19_IPE_DVS_WDMA MTK_M4U_ID(19, 1)
#define IOMMU_PORT_L19_IPE_DVP_RDMA MTK_M4U_ID(19, 2)
#define IOMMU_PORT_L19_IPE_DVP_WDMA MTK_M4U_ID(19, 3)
/* LARB 20 -- IPE */
#define IOMMU_PORT_L20_IPE_FDVT_RDA MTK_M4U_ID(20, 0)
#define IOMMU_PORT_L20_IPE_FDVT_RDB MTK_M4U_ID(20, 1)
#define IOMMU_PORT_L20_IPE_FDVT_WRA MTK_M4U_ID(20, 2)
#define IOMMU_PORT_L20_IPE_FDVT_WRB MTK_M4U_ID(20, 3)
#define IOMMU_PORT_L20_IPE_RSC_RDMA0 MTK_M4U_ID(20, 4)
#define IOMMU_PORT_L20_IPE_RSC_WDMA MTK_M4U_ID(20, 5)
#endif

@@ -0,0 +1,408 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2022 MediaTek Inc.
* Author: Yong Wu <yong.wu@mediatek.com>
*/
#ifndef _DT_BINDINGS_MEMORY_MT8195_LARB_PORT_H_
#define _DT_BINDINGS_MEMORY_MT8195_LARB_PORT_H_
#include <dt-bindings/memory/mtk-memory-port.h>
/*
* The MM IOMMU supports a 16GB dma address space, which we split into four
* ranges: 0 ~ 4G; 4G ~ 8G; 8G ~ 12G; 12G ~ 16G. The masters may be placed in
* any of these regions, BUT:
* a) all the ports inside one larb must stay within a single range, and
* b) the iova of any master can NOT cross the 4G/8G/12G boundary.
*
* This is the suggested mapping in this SoC:
*
* modules dma-address-region larbs-ports
* disp 0 ~ 4G larb0/1/2/3
* vcodec 4G ~ 8G larb19/20/21/22/23/24
* cam/mdp 8G ~ 12G the other larbs.
* N/A 12G ~ 16G
* CCU0 0x24000_0000 ~ 0x243ff_ffff larb18: port 0/1
* CCU1 0x24400_0000 ~ 0x247ff_ffff larb18: port 2/3
*
* This SoC has two IOMMU HWs; this is the detailed connection information:
* iommu-vdo: larb0/2/5/7/9/10/11/13/17/19/21/24/25/28
* iommu-vpp: larb1/3/4/6/8/12/14/16/18/20/22/23/26/27
*/
/* MM IOMMU ports */
/* larb0 */
#define M4U_PORT_L0_DISP_RDMA0 MTK_M4U_ID(0, 0)
#define M4U_PORT_L0_DISP_WDMA0 MTK_M4U_ID(0, 1)
#define M4U_PORT_L0_DISP_OVL0_RDMA0 MTK_M4U_ID(0, 2)
#define M4U_PORT_L0_DISP_OVL0_RDMA1 MTK_M4U_ID(0, 3)
#define M4U_PORT_L0_DISP_OVL0_HDR MTK_M4U_ID(0, 4)
#define M4U_PORT_L0_DISP_FAKE0 MTK_M4U_ID(0, 5)
/* larb1 */
#define M4U_PORT_L1_DISP_RDMA0 MTK_M4U_ID(1, 0)
#define M4U_PORT_L1_DISP_WDMA0 MTK_M4U_ID(1, 1)
#define M4U_PORT_L1_DISP_OVL0_RDMA0 MTK_M4U_ID(1, 2)
#define M4U_PORT_L1_DISP_OVL0_RDMA1 MTK_M4U_ID(1, 3)
#define M4U_PORT_L1_DISP_OVL0_HDR MTK_M4U_ID(1, 4)
#define M4U_PORT_L1_DISP_FAKE0 MTK_M4U_ID(1, 5)
/* larb2 */
#define M4U_PORT_L2_MDP_RDMA0 MTK_M4U_ID(2, 0)
#define M4U_PORT_L2_MDP_RDMA2 MTK_M4U_ID(2, 1)
#define M4U_PORT_L2_MDP_RDMA4 MTK_M4U_ID(2, 2)
#define M4U_PORT_L2_MDP_RDMA6 MTK_M4U_ID(2, 3)
#define M4U_PORT_L2_DISP_FAKE1 MTK_M4U_ID(2, 4)
/* larb3 */
#define M4U_PORT_L3_MDP_RDMA1 MTK_M4U_ID(3, 0)
#define M4U_PORT_L3_MDP_RDMA3 MTK_M4U_ID(3, 1)
#define M4U_PORT_L3_MDP_RDMA5 MTK_M4U_ID(3, 2)
#define M4U_PORT_L3_MDP_RDMA7 MTK_M4U_ID(3, 3)
#define M4U_PORT_L3_HDR_DS MTK_M4U_ID(3, 4)
#define M4U_PORT_L3_HDR_ADL MTK_M4U_ID(3, 5)
#define M4U_PORT_L3_DISP_FAKE1 MTK_M4U_ID(3, 6)
/* larb4 */
#define M4U_PORT_L4_MDP_RDMA MTK_M4U_ID(4, 0)
#define M4U_PORT_L4_MDP_FG MTK_M4U_ID(4, 1)
#define M4U_PORT_L4_MDP_OVL MTK_M4U_ID(4, 2)
#define M4U_PORT_L4_MDP_WROT MTK_M4U_ID(4, 3)
#define M4U_PORT_L4_FAKE MTK_M4U_ID(4, 4)
/* larb5 */
#define M4U_PORT_L5_SVPP1_MDP_RDMA MTK_M4U_ID(5, 0)
#define M4U_PORT_L5_SVPP1_MDP_FG MTK_M4U_ID(5, 1)
#define M4U_PORT_L5_SVPP1_MDP_OVL MTK_M4U_ID(5, 2)
#define M4U_PORT_L5_SVPP1_MDP_WROT MTK_M4U_ID(5, 3)
#define M4U_PORT_L5_SVPP2_MDP_RDMA MTK_M4U_ID(5, 4)
#define M4U_PORT_L5_SVPP2_MDP_FG MTK_M4U_ID(5, 5)
#define M4U_PORT_L5_SVPP2_MDP_WROT MTK_M4U_ID(5, 6)
#define M4U_PORT_L5_FAKE MTK_M4U_ID(5, 7)
/* larb6 */
#define M4U_PORT_L6_SVPP3_MDP_RDMA MTK_M4U_ID(6, 0)
#define M4U_PORT_L6_SVPP3_MDP_FG MTK_M4U_ID(6, 1)
#define M4U_PORT_L6_SVPP3_MDP_WROT MTK_M4U_ID(6, 2)
#define M4U_PORT_L6_FAKE MTK_M4U_ID(6, 3)
/* larb7 */
#define M4U_PORT_L7_IMG_WPE_RDMA0 MTK_M4U_ID(7, 0)
#define M4U_PORT_L7_IMG_WPE_RDMA1 MTK_M4U_ID(7, 1)
#define M4U_PORT_L7_IMG_WPE_WDMA0 MTK_M4U_ID(7, 2)
/* larb8 */
#define M4U_PORT_L8_IMG_WPE_RDMA0 MTK_M4U_ID(8, 0)
#define M4U_PORT_L8_IMG_WPE_RDMA1 MTK_M4U_ID(8, 1)
#define M4U_PORT_L8_IMG_WPE_WDMA0 MTK_M4U_ID(8, 2)
/* larb9 */
#define M4U_PORT_L9_IMG_IMGI_T1_A MTK_M4U_ID(9, 0)
#define M4U_PORT_L9_IMG_IMGBI_T1_A MTK_M4U_ID(9, 1)
#define M4U_PORT_L9_IMG_IMGCI_T1_A MTK_M4U_ID(9, 2)
#define M4U_PORT_L9_IMG_SMTI_T1_A MTK_M4U_ID(9, 3)
#define M4U_PORT_L9_IMG_TNCSTI_T1_A MTK_M4U_ID(9, 4)
#define M4U_PORT_L9_IMG_TNCSTI_T4_A MTK_M4U_ID(9, 5)
#define M4U_PORT_L9_IMG_YUVO_T1_A MTK_M4U_ID(9, 6)
#define M4U_PORT_L9_IMG_TIMGO_T1_A MTK_M4U_ID(9, 7)
#define M4U_PORT_L9_IMG_YUVO_T2_A MTK_M4U_ID(9, 8)
#define M4U_PORT_L9_IMG_IMGI_T1_B MTK_M4U_ID(9, 9)
#define M4U_PORT_L9_IMG_IMGBI_T1_B MTK_M4U_ID(9, 10)
#define M4U_PORT_L9_IMG_IMGCI_T1_B MTK_M4U_ID(9, 11)
#define M4U_PORT_L9_IMG_YUVO_T5_A MTK_M4U_ID(9, 12)
#define M4U_PORT_L9_IMG_SMTI_T1_B MTK_M4U_ID(9, 13)
#define M4U_PORT_L9_IMG_TNCSO_T1_A MTK_M4U_ID(9, 14)
#define M4U_PORT_L9_IMG_SMTO_T1_A MTK_M4U_ID(9, 15)
#define M4U_PORT_L9_IMG_TNCSTO_T1_A MTK_M4U_ID(9, 16)
#define M4U_PORT_L9_IMG_YUVO_T2_B MTK_M4U_ID(9, 17)
#define M4U_PORT_L9_IMG_YUVO_T5_B MTK_M4U_ID(9, 18)
#define M4U_PORT_L9_IMG_SMTO_T1_B MTK_M4U_ID(9, 19)
/* larb10 */
#define M4U_PORT_L10_IMG_IMGI_D1_A MTK_M4U_ID(10, 0)
#define M4U_PORT_L10_IMG_IMGCI_D1_A MTK_M4U_ID(10, 1)
#define M4U_PORT_L10_IMG_DEPI_D1_A MTK_M4U_ID(10, 2)
#define M4U_PORT_L10_IMG_DMGI_D1_A MTK_M4U_ID(10, 3)
#define M4U_PORT_L10_IMG_VIPI_D1_A MTK_M4U_ID(10, 4)
#define M4U_PORT_L10_IMG_TNRWI_D1_A MTK_M4U_ID(10, 5)
#define M4U_PORT_L10_IMG_RECI_D1_A MTK_M4U_ID(10, 6)
#define M4U_PORT_L10_IMG_SMTI_D1_A MTK_M4U_ID(10, 7)
#define M4U_PORT_L10_IMG_SMTI_D6_A MTK_M4U_ID(10, 8)
#define M4U_PORT_L10_IMG_PIMGI_P1_A MTK_M4U_ID(10, 9)
#define M4U_PORT_L10_IMG_PIMGBI_P1_A MTK_M4U_ID(10, 10)
#define M4U_PORT_L10_IMG_PIMGCI_P1_A MTK_M4U_ID(10, 11)
#define M4U_PORT_L10_IMG_PIMGI_P1_B MTK_M4U_ID(10, 12)
#define M4U_PORT_L10_IMG_PIMGBI_P1_B MTK_M4U_ID(10, 13)
#define M4U_PORT_L10_IMG_PIMGCI_P1_B MTK_M4U_ID(10, 14)
#define M4U_PORT_L10_IMG_IMG3O_D1_A MTK_M4U_ID(10, 15)
#define M4U_PORT_L10_IMG_IMG4O_D1_A MTK_M4U_ID(10, 16)
#define M4U_PORT_L10_IMG_IMG3CO_D1_A MTK_M4U_ID(10, 17)
#define M4U_PORT_L10_IMG_FEO_D1_A MTK_M4U_ID(10, 18)
#define M4U_PORT_L10_IMG_IMG2O_D1_A MTK_M4U_ID(10, 19)
#define M4U_PORT_L10_IMG_TNRWO_D1_A MTK_M4U_ID(10, 20)
#define M4U_PORT_L10_IMG_SMTO_D1_A MTK_M4U_ID(10, 21)
#define M4U_PORT_L10_IMG_WROT_P1_A MTK_M4U_ID(10, 22)
#define M4U_PORT_L10_IMG_WROT_P1_B MTK_M4U_ID(10, 23)
/* larb11 */
#define M4U_PORT_L11_IMG_WPE_EIS_RDMA0_A MTK_M4U_ID(11, 0)
#define M4U_PORT_L11_IMG_WPE_EIS_RDMA1_A MTK_M4U_ID(11, 1)
#define M4U_PORT_L11_IMG_WPE_EIS_WDMA0_A MTK_M4U_ID(11, 2)
#define M4U_PORT_L11_IMG_WPE_TNR_RDMA0_A MTK_M4U_ID(11, 3)
#define M4U_PORT_L11_IMG_WPE_TNR_RDMA1_A MTK_M4U_ID(11, 4)
#define M4U_PORT_L11_IMG_WPE_TNR_WDMA0_A MTK_M4U_ID(11, 5)
#define M4U_PORT_L11_IMG_WPE_EIS_CQ0_A MTK_M4U_ID(11, 6)
#define M4U_PORT_L11_IMG_WPE_EIS_CQ1_A MTK_M4U_ID(11, 7)
#define M4U_PORT_L11_IMG_WPE_TNR_CQ0_A MTK_M4U_ID(11, 8)
#define M4U_PORT_L11_IMG_WPE_TNR_CQ1_A MTK_M4U_ID(11, 9)
/* larb12 */
#define M4U_PORT_L12_IMG_FDVT_RDA MTK_M4U_ID(12, 0)
#define M4U_PORT_L12_IMG_FDVT_RDB MTK_M4U_ID(12, 1)
#define M4U_PORT_L12_IMG_FDVT_WRA MTK_M4U_ID(12, 2)
#define M4U_PORT_L12_IMG_FDVT_WRB MTK_M4U_ID(12, 3)
#define M4U_PORT_L12_IMG_ME_RDMA MTK_M4U_ID(12, 4)
#define M4U_PORT_L12_IMG_ME_WDMA MTK_M4U_ID(12, 5)
#define M4U_PORT_L12_IMG_DVS_RDMA MTK_M4U_ID(12, 6)
#define M4U_PORT_L12_IMG_DVS_WDMA MTK_M4U_ID(12, 7)
#define M4U_PORT_L12_IMG_DVP_RDMA MTK_M4U_ID(12, 8)
#define M4U_PORT_L12_IMG_DVP_WDMA MTK_M4U_ID(12, 9)
/* larb13 */
#define M4U_PORT_L13_CAM_CAMSV_CQI_E1 MTK_M4U_ID(13, 0)
#define M4U_PORT_L13_CAM_CAMSV_CQI_E2 MTK_M4U_ID(13, 1)
#define M4U_PORT_L13_CAM_GCAMSV_A_IMGO_0 MTK_M4U_ID(13, 2)
#define M4U_PORT_L13_CAM_SCAMSV_A_IMGO_0 MTK_M4U_ID(13, 3)
#define M4U_PORT_L13_CAM_GCAMSV_B_IMGO_0 MTK_M4U_ID(13, 4)
#define M4U_PORT_L13_CAM_GCAMSV_B_IMGO_1 MTK_M4U_ID(13, 5)
#define M4U_PORT_L13_CAM_GCAMSV_A_UFEO_0 MTK_M4U_ID(13, 6)
#define M4U_PORT_L13_CAM_GCAMSV_B_UFEO_0 MTK_M4U_ID(13, 7)
#define M4U_PORT_L13_CAM_PDAI_0 MTK_M4U_ID(13, 8)
#define M4U_PORT_L13_CAM_FAKE MTK_M4U_ID(13, 9)
/* larb14 */
#define M4U_PORT_L14_CAM_GCAMSV_A_IMGO_1 MTK_M4U_ID(14, 0)
#define M4U_PORT_L14_CAM_SCAMSV_A_IMGO_1 MTK_M4U_ID(14, 1)
#define M4U_PORT_L14_CAM_GCAMSV_B_IMGO_0 MTK_M4U_ID(14, 2)
#define M4U_PORT_L14_CAM_GCAMSV_B_IMGO_1 MTK_M4U_ID(14, 3)
#define M4U_PORT_L14_CAM_SCAMSV_B_IMGO_0 MTK_M4U_ID(14, 4)
#define M4U_PORT_L14_CAM_SCAMSV_B_IMGO_1 MTK_M4U_ID(14, 5)
#define M4U_PORT_L14_CAM_IPUI MTK_M4U_ID(14, 6)
#define M4U_PORT_L14_CAM_IPU2I MTK_M4U_ID(14, 7)
#define M4U_PORT_L14_CAM_IPUO MTK_M4U_ID(14, 8)
#define M4U_PORT_L14_CAM_IPU2O MTK_M4U_ID(14, 9)
#define M4U_PORT_L14_CAM_IPU3O MTK_M4U_ID(14, 10)
#define M4U_PORT_L14_CAM_GCAMSV_A_UFEO_1 MTK_M4U_ID(14, 11)
#define M4U_PORT_L14_CAM_GCAMSV_B_UFEO_1 MTK_M4U_ID(14, 12)
#define M4U_PORT_L14_CAM_PDAI_1 MTK_M4U_ID(14, 13)
#define M4U_PORT_L14_CAM_PDAO MTK_M4U_ID(14, 14)
/* larb15: null */
/* larb16 */
#define M4U_PORT_L16_CAM_IMGO_R1 MTK_M4U_ID(16, 0)
#define M4U_PORT_L16_CAM_CQI_R1 MTK_M4U_ID(16, 1)
#define M4U_PORT_L16_CAM_CQI_R2 MTK_M4U_ID(16, 2)
#define M4U_PORT_L16_CAM_BPCI_R1 MTK_M4U_ID(16, 3)
#define M4U_PORT_L16_CAM_LSCI_R1 MTK_M4U_ID(16, 4)
#define M4U_PORT_L16_CAM_RAWI_R2 MTK_M4U_ID(16, 5)
#define M4U_PORT_L16_CAM_RAWI_R3 MTK_M4U_ID(16, 6)
#define M4U_PORT_L16_CAM_UFDI_R2 MTK_M4U_ID(16, 7)
#define M4U_PORT_L16_CAM_UFDI_R3 MTK_M4U_ID(16, 8)
#define M4U_PORT_L16_CAM_RAWI_R4 MTK_M4U_ID(16, 9)
#define M4U_PORT_L16_CAM_RAWI_R5 MTK_M4U_ID(16, 10)
#define M4U_PORT_L16_CAM_AAI_R1 MTK_M4U_ID(16, 11)
#define M4U_PORT_L16_CAM_FHO_R1 MTK_M4U_ID(16, 12)
#define M4U_PORT_L16_CAM_AAO_R1 MTK_M4U_ID(16, 13)
#define M4U_PORT_L16_CAM_TSFSO_R1 MTK_M4U_ID(16, 14)
#define M4U_PORT_L16_CAM_FLKO_R1 MTK_M4U_ID(16, 15)
/* larb17 */
#define M4U_PORT_L17_CAM_YUVO_R1 MTK_M4U_ID(17, 0)
#define M4U_PORT_L17_CAM_YUVO_R3 MTK_M4U_ID(17, 1)
#define M4U_PORT_L17_CAM_YUVCO_R1 MTK_M4U_ID(17, 2)
#define M4U_PORT_L17_CAM_YUVO_R2 MTK_M4U_ID(17, 3)
#define M4U_PORT_L17_CAM_RZH1N2TO_R1 MTK_M4U_ID(17, 4)
#define M4U_PORT_L17_CAM_DRZS4NO_R1 MTK_M4U_ID(17, 5)
#define M4U_PORT_L17_CAM_TNCSO_R1 MTK_M4U_ID(17, 6)
/* larb18 */
#define M4U_PORT_L18_CAM_CCUI MTK_M4U_ID(18, 0)
#define M4U_PORT_L18_CAM_CCUO MTK_M4U_ID(18, 1)
#define M4U_PORT_L18_CAM_CCUI2 MTK_M4U_ID(18, 2)
#define M4U_PORT_L18_CAM_CCUO2 MTK_M4U_ID(18, 3)
/* larb19 */
#define M4U_PORT_L19_VENC_RCPU MTK_M4U_ID(19, 0)
#define M4U_PORT_L19_VENC_REC MTK_M4U_ID(19, 1)
#define M4U_PORT_L19_VENC_BSDMA MTK_M4U_ID(19, 2)
#define M4U_PORT_L19_VENC_SV_COMV MTK_M4U_ID(19, 3)
#define M4U_PORT_L19_VENC_RD_COMV MTK_M4U_ID(19, 4)
#define M4U_PORT_L19_VENC_NBM_RDMA MTK_M4U_ID(19, 5)
#define M4U_PORT_L19_VENC_NBM_RDMA_LITE MTK_M4U_ID(19, 6)
#define M4U_PORT_L19_JPGENC_Y_RDMA MTK_M4U_ID(19, 7)
#define M4U_PORT_L19_JPGENC_C_RDMA MTK_M4U_ID(19, 8)
#define M4U_PORT_L19_JPGENC_Q_TABLE MTK_M4U_ID(19, 9)
#define M4U_PORT_L19_VENC_SUB_W_LUMA MTK_M4U_ID(19, 10)
#define M4U_PORT_L19_VENC_FCS_NBM_RDMA MTK_M4U_ID(19, 11)
#define M4U_PORT_L19_JPGENC_BSDMA MTK_M4U_ID(19, 12)
#define M4U_PORT_L19_JPGDEC_WDMA0 MTK_M4U_ID(19, 13)
#define M4U_PORT_L19_JPGDEC_BSDMA0 MTK_M4U_ID(19, 14)
#define M4U_PORT_L19_VENC_NBM_WDMA MTK_M4U_ID(19, 15)
#define M4U_PORT_L19_VENC_NBM_WDMA_LITE MTK_M4U_ID(19, 16)
#define M4U_PORT_L19_VENC_FCS_NBM_WDMA MTK_M4U_ID(19, 17)
#define M4U_PORT_L19_JPGDEC_WDMA1 MTK_M4U_ID(19, 18)
#define M4U_PORT_L19_JPGDEC_BSDMA1 MTK_M4U_ID(19, 19)
#define M4U_PORT_L19_JPGDEC_BUFF_OFFSET1 MTK_M4U_ID(19, 20)
#define M4U_PORT_L19_JPGDEC_BUFF_OFFSET0 MTK_M4U_ID(19, 21)
#define M4U_PORT_L19_VENC_CUR_LUMA MTK_M4U_ID(19, 22)
#define M4U_PORT_L19_VENC_CUR_CHROMA MTK_M4U_ID(19, 23)
#define M4U_PORT_L19_VENC_REF_LUMA MTK_M4U_ID(19, 24)
#define M4U_PORT_L19_VENC_REF_CHROMA MTK_M4U_ID(19, 25)
#define M4U_PORT_L19_VENC_SUB_R_CHROMA MTK_M4U_ID(19, 26)
/* larb20 */
#define M4U_PORT_L20_VENC_RCPU MTK_M4U_ID(20, 0)
#define M4U_PORT_L20_VENC_REC MTK_M4U_ID(20, 1)
#define M4U_PORT_L20_VENC_BSDMA MTK_M4U_ID(20, 2)
#define M4U_PORT_L20_VENC_SV_COMV MTK_M4U_ID(20, 3)
#define M4U_PORT_L20_VENC_RD_COMV MTK_M4U_ID(20, 4)
#define M4U_PORT_L20_VENC_NBM_RDMA MTK_M4U_ID(20, 5)
#define M4U_PORT_L20_VENC_NBM_RDMA_LITE MTK_M4U_ID(20, 6)
#define M4U_PORT_L20_JPGENC_Y_RDMA MTK_M4U_ID(20, 7)
#define M4U_PORT_L20_JPGENC_C_RDMA MTK_M4U_ID(20, 8)
#define M4U_PORT_L20_JPGENC_Q_TABLE MTK_M4U_ID(20, 9)
#define M4U_PORT_L20_VENC_SUB_W_LUMA MTK_M4U_ID(20, 10)
#define M4U_PORT_L20_VENC_FCS_NBM_RDMA MTK_M4U_ID(20, 11)
#define M4U_PORT_L20_JPGENC_BSDMA MTK_M4U_ID(20, 12)
#define M4U_PORT_L20_JPGDEC_WDMA0 MTK_M4U_ID(20, 13)
#define M4U_PORT_L20_JPGDEC_BSDMA0 MTK_M4U_ID(20, 14)
#define M4U_PORT_L20_VENC_NBM_WDMA MTK_M4U_ID(20, 15)
#define M4U_PORT_L20_VENC_NBM_WDMA_LITE MTK_M4U_ID(20, 16)
#define M4U_PORT_L20_VENC_FCS_NBM_WDMA MTK_M4U_ID(20, 17)
#define M4U_PORT_L20_JPGDEC_WDMA1 MTK_M4U_ID(20, 18)
#define M4U_PORT_L20_JPGDEC_BSDMA1 MTK_M4U_ID(20, 19)
#define M4U_PORT_L20_JPGDEC_BUFF_OFFSET1 MTK_M4U_ID(20, 20)
#define M4U_PORT_L20_JPGDEC_BUFF_OFFSET0 MTK_M4U_ID(20, 21)
#define M4U_PORT_L20_VENC_CUR_LUMA MTK_M4U_ID(20, 22)
#define M4U_PORT_L20_VENC_CUR_CHROMA MTK_M4U_ID(20, 23)
#define M4U_PORT_L20_VENC_REF_LUMA MTK_M4U_ID(20, 24)
#define M4U_PORT_L20_VENC_REF_CHROMA MTK_M4U_ID(20, 25)
#define M4U_PORT_L20_VENC_SUB_R_CHROMA MTK_M4U_ID(20, 26)
/* larb21 */
#define M4U_PORT_L21_VDEC_MC_EXT MTK_M4U_ID(21, 0)
#define M4U_PORT_L21_VDEC_UFO_EXT MTK_M4U_ID(21, 1)
#define M4U_PORT_L21_VDEC_PP_EXT MTK_M4U_ID(21, 2)
#define M4U_PORT_L21_VDEC_PRED_RD_EXT MTK_M4U_ID(21, 3)
#define M4U_PORT_L21_VDEC_PRED_WR_EXT MTK_M4U_ID(21, 4)
#define M4U_PORT_L21_VDEC_PPWRAP_EXT MTK_M4U_ID(21, 5)
#define M4U_PORT_L21_VDEC_TILE_EXT MTK_M4U_ID(21, 6)
#define M4U_PORT_L21_VDEC_VLD_EXT MTK_M4U_ID(21, 7)
#define M4U_PORT_L21_VDEC_VLD2_EXT MTK_M4U_ID(21, 8)
#define M4U_PORT_L21_VDEC_AVC_MV_EXT MTK_M4U_ID(21, 9)
/* larb22 */
#define M4U_PORT_L22_VDEC_MC_EXT MTK_M4U_ID(22, 0)
#define M4U_PORT_L22_VDEC_UFO_EXT MTK_M4U_ID(22, 1)
#define M4U_PORT_L22_VDEC_PP_EXT MTK_M4U_ID(22, 2)
#define M4U_PORT_L22_VDEC_PRED_RD_EXT MTK_M4U_ID(22, 3)
#define M4U_PORT_L22_VDEC_PRED_WR_EXT MTK_M4U_ID(22, 4)
#define M4U_PORT_L22_VDEC_PPWRAP_EXT MTK_M4U_ID(22, 5)
#define M4U_PORT_L22_VDEC_TILE_EXT MTK_M4U_ID(22, 6)
#define M4U_PORT_L22_VDEC_VLD_EXT MTK_M4U_ID(22, 7)
#define M4U_PORT_L22_VDEC_VLD2_EXT MTK_M4U_ID(22, 8)
#define M4U_PORT_L22_VDEC_AVC_MV_EXT MTK_M4U_ID(22, 9)
/* larb23 */
#define M4U_PORT_L23_VDEC_UFO_ENC_EXT MTK_M4U_ID(23, 0)
#define M4U_PORT_L23_VDEC_RDMA_EXT MTK_M4U_ID(23, 1)
/* larb24 */
#define M4U_PORT_L24_VDEC_LAT0_VLD_EXT MTK_M4U_ID(24, 0)
#define M4U_PORT_L24_VDEC_LAT0_VLD2_EXT MTK_M4U_ID(24, 1)
#define M4U_PORT_L24_VDEC_LAT0_AVC_MC_EXT MTK_M4U_ID(24, 2)
#define M4U_PORT_L24_VDEC_LAT0_PRED_RD_EXT MTK_M4U_ID(24, 3)
#define M4U_PORT_L24_VDEC_LAT0_TILE_EXT MTK_M4U_ID(24, 4)
#define M4U_PORT_L24_VDEC_LAT0_WDMA_EXT MTK_M4U_ID(24, 5)
#define M4U_PORT_L24_VDEC_LAT1_VLD_EXT MTK_M4U_ID(24, 6)
#define M4U_PORT_L24_VDEC_LAT1_VLD2_EXT MTK_M4U_ID(24, 7)
#define M4U_PORT_L24_VDEC_LAT1_AVC_MC_EXT MTK_M4U_ID(24, 8)
#define M4U_PORT_L24_VDEC_LAT1_PRED_RD_EXT MTK_M4U_ID(24, 9)
#define M4U_PORT_L24_VDEC_LAT1_TILE_EXT MTK_M4U_ID(24, 10)
#define M4U_PORT_L24_VDEC_LAT1_WDMA_EXT MTK_M4U_ID(24, 11)
/* larb25 */
#define M4U_PORT_L25_CAM_MRAW0_LSCI_M1 MTK_M4U_ID(25, 0)
#define M4U_PORT_L25_CAM_MRAW0_CQI_M1 MTK_M4U_ID(25, 1)
#define M4U_PORT_L25_CAM_MRAW0_CQI_M2 MTK_M4U_ID(25, 2)
#define M4U_PORT_L25_CAM_MRAW0_IMGO_M1 MTK_M4U_ID(25, 3)
#define M4U_PORT_L25_CAM_MRAW0_IMGBO_M1 MTK_M4U_ID(25, 4)
#define M4U_PORT_L25_CAM_MRAW2_LSCI_M1 MTK_M4U_ID(25, 5)
#define M4U_PORT_L25_CAM_MRAW2_CQI_M1 MTK_M4U_ID(25, 6)
#define M4U_PORT_L25_CAM_MRAW2_CQI_M2 MTK_M4U_ID(25, 7)
#define M4U_PORT_L25_CAM_MRAW2_IMGO_M1 MTK_M4U_ID(25, 8)
#define M4U_PORT_L25_CAM_MRAW2_IMGBO_M1 MTK_M4U_ID(25, 9)
#define M4U_PORT_L25_CAM_MRAW0_AFO_M1 MTK_M4U_ID(25, 10)
#define M4U_PORT_L25_CAM_MRAW2_AFO_M1 MTK_M4U_ID(25, 11)
/* larb26 */
#define M4U_PORT_L26_CAM_MRAW1_LSCI_M1 MTK_M4U_ID(26, 0)
#define M4U_PORT_L26_CAM_MRAW1_CQI_M1 MTK_M4U_ID(26, 1)
#define M4U_PORT_L26_CAM_MRAW1_CQI_M2 MTK_M4U_ID(26, 2)
#define M4U_PORT_L26_CAM_MRAW1_IMGO_M1 MTK_M4U_ID(26, 3)
#define M4U_PORT_L26_CAM_MRAW1_IMGBO_M1 MTK_M4U_ID(26, 4)
#define M4U_PORT_L26_CAM_MRAW3_LSCI_M1 MTK_M4U_ID(26, 5)
#define M4U_PORT_L26_CAM_MRAW3_CQI_M1 MTK_M4U_ID(26, 6)
#define M4U_PORT_L26_CAM_MRAW3_CQI_M2 MTK_M4U_ID(26, 7)
#define M4U_PORT_L26_CAM_MRAW3_IMGO_M1 MTK_M4U_ID(26, 8)
#define M4U_PORT_L26_CAM_MRAW3_IMGBO_M1 MTK_M4U_ID(26, 9)
#define M4U_PORT_L26_CAM_MRAW1_AFO_M1 MTK_M4U_ID(26, 10)
#define M4U_PORT_L26_CAM_MRAW3_AFO_M1 MTK_M4U_ID(26, 11)
/* larb27 */
#define M4U_PORT_L27_CAM_IMGO_R1 MTK_M4U_ID(27, 0)
#define M4U_PORT_L27_CAM_CQI_R1 MTK_M4U_ID(27, 1)
#define M4U_PORT_L27_CAM_CQI_R2 MTK_M4U_ID(27, 2)
#define M4U_PORT_L27_CAM_BPCI_R1 MTK_M4U_ID(27, 3)
#define M4U_PORT_L27_CAM_LSCI_R1 MTK_M4U_ID(27, 4)
#define M4U_PORT_L27_CAM_RAWI_R2 MTK_M4U_ID(27, 5)
#define M4U_PORT_L27_CAM_RAWI_R3 MTK_M4U_ID(27, 6)
#define M4U_PORT_L27_CAM_UFDI_R2 MTK_M4U_ID(27, 7)
#define M4U_PORT_L27_CAM_UFDI_R3 MTK_M4U_ID(27, 8)
#define M4U_PORT_L27_CAM_RAWI_R4 MTK_M4U_ID(27, 9)
#define M4U_PORT_L27_CAM_RAWI_R5 MTK_M4U_ID(27, 10)
#define M4U_PORT_L27_CAM_AAI_R1 MTK_M4U_ID(27, 11)
#define M4U_PORT_L27_CAM_FHO_R1 MTK_M4U_ID(27, 12)
#define M4U_PORT_L27_CAM_AAO_R1 MTK_M4U_ID(27, 13)
#define M4U_PORT_L27_CAM_TSFSO_R1 MTK_M4U_ID(27, 14)
#define M4U_PORT_L27_CAM_FLKO_R1 MTK_M4U_ID(27, 15)
/* larb28 */
#define M4U_PORT_L28_CAM_YUVO_R1 MTK_M4U_ID(28, 0)
#define M4U_PORT_L28_CAM_YUVO_R3 MTK_M4U_ID(28, 1)
#define M4U_PORT_L28_CAM_YUVCO_R1 MTK_M4U_ID(28, 2)
#define M4U_PORT_L28_CAM_YUVO_R2 MTK_M4U_ID(28, 3)
#define M4U_PORT_L28_CAM_RZH1N2TO_R1 MTK_M4U_ID(28, 4)
#define M4U_PORT_L28_CAM_DRZS4NO_R1 MTK_M4U_ID(28, 5)
#define M4U_PORT_L28_CAM_TNCSO_R1 MTK_M4U_ID(28, 6)
/* Infra iommu ports */
/* PCIe1: read: BIT16; write BIT17. */
#define IOMMU_PORT_INFRA_PCIE1 MTK_IFAIOMMU_PERI_ID(16)
/* PCIe0: read: BIT18; write BIT19. */
#define IOMMU_PORT_INFRA_PCIE0 MTK_IFAIOMMU_PERI_ID(18)
#define IOMMU_PORT_INFRA_SSUSB_P3_R MTK_IFAIOMMU_PERI_ID(20)
#define IOMMU_PORT_INFRA_SSUSB_P3_W MTK_IFAIOMMU_PERI_ID(21)
#define IOMMU_PORT_INFRA_SSUSB_P2_R MTK_IFAIOMMU_PERI_ID(22)
#define IOMMU_PORT_INFRA_SSUSB_P2_W MTK_IFAIOMMU_PERI_ID(23)
#define IOMMU_PORT_INFRA_SSUSB_P1_1_R MTK_IFAIOMMU_PERI_ID(24)
#define IOMMU_PORT_INFRA_SSUSB_P1_1_W MTK_IFAIOMMU_PERI_ID(25)
#define IOMMU_PORT_INFRA_SSUSB_P1_0_R MTK_IFAIOMMU_PERI_ID(26)
#define IOMMU_PORT_INFRA_SSUSB_P1_0_W MTK_IFAIOMMU_PERI_ID(27)
#define IOMMU_PORT_INFRA_SSUSB2_R MTK_IFAIOMMU_PERI_ID(28)
#define IOMMU_PORT_INFRA_SSUSB2_W MTK_IFAIOMMU_PERI_ID(29)
#define IOMMU_PORT_INFRA_SSUSB_R MTK_IFAIOMMU_PERI_ID(30)
#define IOMMU_PORT_INFRA_SSUSB_W MTK_IFAIOMMU_PERI_ID(31)
#endif


@@ -12,4 +12,6 @@
#define MTK_M4U_TO_LARB(id) (((id) >> 5) & 0x1f)
#define MTK_M4U_TO_PORT(id) ((id) & 0x1f)
#define MTK_IFAIOMMU_PERI_ID(port) MTK_M4U_ID(0, port)
#endif
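
The helpers above imply a simple 5-bit split between larb number and port index. A minimal sketch of the encoding for reference; the MTK_M4U_ID() definition itself sits outside this hunk, so its exact form below is inferred from the TO_LARB/TO_PORT masks:

/* Inferred from MTK_M4U_TO_LARB()/MTK_M4U_TO_PORT(): bits [9:5] = larb, bits [4:0] = port. */
#define MTK_M4U_ID(larb, port)	(((larb) << 5) | (port))

/* Example: M4U_PORT_L19_JPGDEC_WDMA1 = MTK_M4U_ID(19, 18) = (19 << 5) | 18 = 626.  */
/* The infra/peripheral ports above reuse "larb 0", so IOMMU_PORT_INFRA_PCIE0 = 18. */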


@@ -79,6 +79,14 @@ struct amba_driver {
void (*remove)(struct amba_device *);
void (*shutdown)(struct amba_device *);
const struct amba_id *id_table;
/*
* For most device drivers, no need to care about this flag as long as
* all DMAs are handled through the kernel DMA API. For some special
* ones, for example VFIO drivers, they know how to manage the DMA
* themselves and set this flag so that the IOMMU layer will allow them
* to setup and manage their own I/O address space.
*/
bool driver_managed_dma;
};
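
The new flag is consumed by the bus code (see the bus_type hunk below) and set only by the few drivers that do their own IOMMU mappings. A purely illustrative sketch of such an opt-out driver; everything except the driver_managed_dma field itself (names, ID values, callbacks) is hypothetical:

static const struct amba_id my_user_dma_ids[] = {
	{ .id = 0x000a0001, .mask = 0x000fffff },	/* hypothetical peripheral ID */
	{ },
};

static int my_user_dma_probe(struct amba_device *adev, const struct amba_id *id)
{
	/* This driver maps DMA through the IOMMU API itself (VFIO-style),
	 * so it never uses the kernel DMA API for this device. */
	return 0;
}

static void my_user_dma_remove(struct amba_device *adev)
{
}

static struct amba_driver my_user_dma_driver = {
	.drv = {
		.name = "my-user-dma",			/* hypothetical */
	},
	.probe			= my_user_dma_probe,
	.remove			= my_user_dma_remove,
	.id_table		= my_user_dma_ids,
	.driver_managed_dma	= true,	/* opt out of the default-domain model */
};
module_amba_driver(my_user_dma_driver);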
/*


@@ -59,6 +59,8 @@ struct fwnode_handle;
* bus supports.
* @dma_configure: Called to setup DMA configuration on a device on
* this bus.
* @dma_cleanup: Called to cleanup DMA configuration on a device on
* this bus.
* @pm: Power management operations of this bus, callback the specific
* device driver's pm-ops.
* @iommu_ops: IOMMU specific operations for this bus, used to attach IOMMU
@@ -103,6 +105,7 @@ struct bus_type {
int (*num_vf)(struct device *dev);
int (*dma_configure)(struct device *dev);
void (*dma_cleanup)(struct device *dev);
const struct dev_pm_ops *pm;
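
Every bus converted in this series pairs the two callbacks the same way: take a reference on the group's default domain when a normal driver binds, skip that when the driver sets driver_managed_dma, and drop the reference again on unbind. A rough sketch of the pattern; the bus name and to_mybus_driver() are placeholders, only the two iommu_device_*_default_domain() helpers and the flag come from the series:

/* Illustrative bus glue for the new hooks; error handling trimmed. */
static int mybus_dma_configure(struct device *dev)
{
	struct mybus_driver *drv = to_mybus_driver(dev->driver);	/* placeholder */
	int ret = 0;

	/* ...existing OF/ACPI DMA setup for the device happens here... */

	if (!drv->driver_managed_dma)
		ret = iommu_device_use_default_domain(dev);

	return ret;
}

static void mybus_dma_cleanup(struct device *dev)
{
	struct mybus_driver *drv = to_mybus_driver(dev->driver);	/* placeholder */

	if (!drv->driver_managed_dma)
		iommu_device_unuse_default_domain(dev);
}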


@@ -32,6 +32,13 @@ struct fsl_mc_io;
* @shutdown: Function called at shutdown time to quiesce the device
* @suspend: Function called when a device is stopped
* @resume: Function called when a device is resumed
* @driver_managed_dma: Device driver doesn't use kernel DMA API for DMA.
* For most device drivers, no need to care about this flag
* as long as all DMAs are handled through the kernel DMA API.
* For some special ones, for example VFIO drivers, they know
* how to manage the DMA themselves and set this flag so that
* the IOMMU layer will allow them to setup and manage their
* own I/O address space.
*
* Generic DPAA device driver object for device drivers that are registered
* with a DPRC bus. This structure is to be embedded in each device-specific
@@ -45,6 +52,7 @@ struct fsl_mc_driver {
void (*shutdown)(struct fsl_mc_device *dev);
int (*suspend)(struct fsl_mc_device *dev, pm_message_t state);
int (*resume)(struct fsl_mc_device *dev);
bool driver_managed_dma;
};
#define to_fsl_mc_driver(_drv) \


@@ -539,7 +539,8 @@ struct dmar_domain {
u8 has_iotlb_device: 1;
u8 iommu_coherency: 1; /* indicate coherency of iommu access */
-u8 iommu_snooping: 1; /* indicate snooping control feature */
+u8 force_snooping : 1; /* Create IOPTEs with snoop control */
+u8 set_pte_snp:1;
struct list_head devices; /* all devices' list */
struct iova_domain iovad; /* iova's that belong to this domain */


@@ -9,7 +9,7 @@
#define __INTEL_SVM_H__
/* Page Request Queue depth */
-#define PRQ_ORDER 2
+#define PRQ_ORDER 4
#define PRQ_RING_MASK ((0x1000 << PRQ_ORDER) - 0x20)
#define PRQ_DEPTH ((0x1000 << PRQ_ORDER) >> 5)
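
For reference, the arithmetic behind the resize; the assertions are not part of the patch, just a way to double-check the derived macros:

#include <linux/build_bug.h>

/* PRQ_ORDER 4 => ring of 0x1000 << 4 = 64 KiB (was 16 KiB with order 2).
 * Each page request descriptor is 32 bytes (0x20), hence the >> 5. */
static_assert((0x1000 << PRQ_ORDER) == 64 * 1024);
static_assert(PRQ_DEPTH == 2048);			/* was 512 entries */
static_assert(PRQ_RING_MASK == 64 * 1024 - 0x20);	/* offset of the last slot */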


@@ -103,10 +103,11 @@ static inline bool iommu_is_dma_domain(struct iommu_domain *domain)
}
enum iommu_cap {
-IOMMU_CAP_CACHE_COHERENCY, /* IOMMU can enforce cache coherent DMA
- transactions */
+IOMMU_CAP_CACHE_COHERENCY, /* IOMMU_CACHE is supported */
IOMMU_CAP_INTR_REMAP, /* IOMMU supports interrupt isolation */
IOMMU_CAP_NOEXEC, /* IOMMU_NOEXEC flag */
IOMMU_CAP_PRE_BOOT_PROTECTION, /* Firmware says it used the IOMMU for
DMA protection and we should too */
};
/* These are the possible reserved region types */
@@ -272,6 +273,9 @@ struct iommu_ops {
* @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
* queue
* @iova_to_phys: translate iova to physical address
* @enforce_cache_coherency: Prevent any kind of DMA from bypassing IOMMU_CACHE,
* including no-snoop TLPs on PCIe or other platform
* specific mechanisms.
* @enable_nesting: Enable nesting
* @set_pgtable_quirks: Set io page table quirks (IO_PGTABLE_QUIRK_*)
* @free: Release the domain after use.
@@ -300,6 +304,7 @@ struct iommu_domain_ops {
phys_addr_t (*iova_to_phys)(struct iommu_domain *domain,
dma_addr_t iova);
bool (*enforce_cache_coherency)(struct iommu_domain *domain);
int (*enable_nesting)(struct iommu_domain *domain);
int (*set_pgtable_quirks)(struct iommu_domain *domain,
unsigned long quirks);
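
A sketch of how a user-DMA framework is expected to consume the new capability and domain op: probe the device capability first, then ask the domain to lock the behaviour in before mapping anything. The helper name is hypothetical, and the op is invoked directly here only for illustration (in-tree callers go through a small wrapper):

/* Hypothetical helper: fail unless DMA through this domain is forced coherent. */
static int my_require_coherent_dma(struct device *dev, struct iommu_domain *domain)
{
	if (!device_iommu_capable(dev, IOMMU_CAP_CACHE_COHERENCY))
		return -EINVAL;

	/* Once this succeeds, no-snoop traffic can no longer bypass IOMMU_CACHE,
	 * and incompatible devices will be refused at attach time. */
	if (!domain->ops->enforce_cache_coherency ||
	    !domain->ops->enforce_cache_coherency(domain))
		return -EINVAL;

	return 0;
}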
@@ -407,16 +412,10 @@ static inline const struct iommu_ops *dev_iommu_ops(struct device *dev)
return dev->iommu->iommu_dev->ops;
}
-#define IOMMU_GROUP_NOTIFY_ADD_DEVICE 1 /* Device added */
-#define IOMMU_GROUP_NOTIFY_DEL_DEVICE 2 /* Pre Device removed */
-#define IOMMU_GROUP_NOTIFY_BIND_DRIVER 3 /* Pre Driver bind */
-#define IOMMU_GROUP_NOTIFY_BOUND_DRIVER 4 /* Post Driver bind */
-#define IOMMU_GROUP_NOTIFY_UNBIND_DRIVER 5 /* Pre Driver unbind */
-#define IOMMU_GROUP_NOTIFY_UNBOUND_DRIVER 6 /* Post Driver unbind */
extern int bus_set_iommu(struct bus_type *bus, const struct iommu_ops *ops);
extern int bus_iommu_probe(struct bus_type *bus);
extern bool iommu_present(struct bus_type *bus);
extern bool device_iommu_capable(struct device *dev, enum iommu_cap cap);
extern bool iommu_capable(struct bus_type *bus, enum iommu_cap cap);
extern struct iommu_domain *iommu_domain_alloc(struct bus_type *bus);
extern struct iommu_group *iommu_group_get_by_id(int id);
@@ -478,10 +477,6 @@ extern int iommu_group_for_each_dev(struct iommu_group *group, void *data,
extern struct iommu_group *iommu_group_get(struct device *dev);
extern struct iommu_group *iommu_group_ref_get(struct iommu_group *group);
extern void iommu_group_put(struct iommu_group *group);
-extern int iommu_group_register_notifier(struct iommu_group *group,
- struct notifier_block *nb);
-extern int iommu_group_unregister_notifier(struct iommu_group *group,
- struct notifier_block *nb);
extern int iommu_register_device_fault_handler(struct device *dev,
iommu_dev_fault_handler_t handler,
void *data);
@@ -675,6 +670,13 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev,
void iommu_sva_unbind_device(struct iommu_sva *handle);
u32 iommu_sva_get_pasid(struct iommu_sva *handle);
int iommu_device_use_default_domain(struct device *dev);
void iommu_device_unuse_default_domain(struct device *dev);
int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner);
void iommu_group_release_dma_owner(struct iommu_group *group);
bool iommu_group_dma_owner_claimed(struct iommu_group *group);
#else /* CONFIG_IOMMU_API */
struct iommu_ops {};
@@ -689,6 +691,11 @@ static inline bool iommu_present(struct bus_type *bus)
return false;
}
static inline bool device_iommu_capable(struct device *dev, enum iommu_cap cap)
{
return false;
}
static inline bool iommu_capable(struct bus_type *bus, enum iommu_cap cap)
{
return false;
@@ -871,18 +878,6 @@ static inline void iommu_group_put(struct iommu_group *group)
{
}
-static inline int iommu_group_register_notifier(struct iommu_group *group,
- struct notifier_block *nb)
-{
- return -ENODEV;
-}
-static inline int iommu_group_unregister_notifier(struct iommu_group *group,
- struct notifier_block *nb)
-{
- return 0;
-}
static inline
int iommu_register_device_fault_handler(struct device *dev,
iommu_dev_fault_handler_t handler,
@@ -1031,6 +1026,30 @@ static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev)
{
return NULL;
}
static inline int iommu_device_use_default_domain(struct device *dev)
{
return 0;
}
static inline void iommu_device_unuse_default_domain(struct device *dev)
{
}
static inline int
iommu_group_claim_dma_owner(struct iommu_group *group, void *owner)
{
return -ENODEV;
}
static inline void iommu_group_release_dma_owner(struct iommu_group *group)
{
}
static inline bool iommu_group_dma_owner_claimed(struct iommu_group *group)
{
return false;
}
#endif /* CONFIG_IOMMU_API */
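
The ownership interface added above is what lets the vfio_iommu_group_notifier go away: instead of watching driver binds, a user-space DMA framework claims the whole group up front, and conflicting kernel-driver binds are refused for as long as the claim is held. A minimal usage sketch; the function names and the owner cookie are illustrative:

/* Claim exclusive user-space DMA ownership of the device's group. */
static struct iommu_group *my_claim_group(struct device *dev, void *owner)
{
	struct iommu_group *group = iommu_group_get(dev);
	int ret;

	if (!group)
		return ERR_PTR(-ENODEV);

	/* Fails if another device in the group is bound to a driver that
	 * does kernel-DMA-API DMA (i.e. one without driver_managed_dma). */
	ret = iommu_group_claim_dma_owner(group, owner);
	if (ret) {
		iommu_group_put(group);
		return ERR_PTR(ret);
	}
	return group;		/* keep the reference while we own the group */
}

static void my_release_group(struct iommu_group *group)
{
	iommu_group_release_dma_owner(group);
	iommu_group_put(group);
}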
/**


@@ -891,6 +891,13 @@ struct module;
* created once it is bound to the driver.
* @driver: Driver model structure.
* @dynids: List of dynamically added device IDs.
* @driver_managed_dma: Device driver doesn't use kernel DMA API for DMA.
* For most device drivers, no need to care about this flag
* as long as all DMAs are handled through the kernel DMA API.
* For some special ones, for example VFIO drivers, they know
* how to manage the DMA themselves and set this flag so that
* the IOMMU layer will allow them to setup and manage their
* own I/O address space.
*/
struct pci_driver {
struct list_head node;
@@ -909,6 +916,7 @@ struct pci_driver {
const struct attribute_group **dev_groups;
struct device_driver driver;
struct pci_dynids dynids;
bool driver_managed_dma;
};
static inline struct pci_driver *to_pci_driver(struct device_driver *drv)


@@ -210,6 +210,14 @@ struct platform_driver {
struct device_driver driver;
const struct platform_device_id *id_table;
bool prevent_deferred_probe;
/*
* For most device drivers, no need to care about this flag as long as
* all DMAs are handled through the kernel DMA API. For some special
* ones, for example VFIO drivers, they know how to manage the DMA
* themselves and set this flag so that the IOMMU layer will allow them
* to setup and manage their own I/O address space.
*/
bool driver_managed_dma;
};
#define to_platform_driver(drv) (container_of((drv), struct platform_driver, \
@@ -328,8 +336,6 @@ extern int platform_pm_restore(struct device *dev);
#define platform_pm_restore NULL
#endif
-extern int platform_dma_configure(struct device *dev);
#ifdef CONFIG_PM_SLEEP
#define USE_PLATFORM_PM_SLEEP_OPS \
.suspend = platform_pm_suspend, \


@@ -465,6 +465,7 @@ static inline struct tb_xdomain *tb_service_parent(struct tb_service *svc)
* @msix_ida: Used to allocate MSI-X vectors for rings
* @going_away: The host controller device is about to disappear so when
* this flag is set, avoid touching the hardware anymore.
* @iommu_dma_protection: An IOMMU will isolate external-facing ports.
* @interrupt_work: Work scheduled to handle ring interrupt when no
* MSI-X is used.
* @hop_count: Number of rings (end point hops) supported by NHI.
@@ -479,6 +480,7 @@ struct tb_nhi {
struct tb_ring **rx_rings;
struct ida msix_ida;
bool going_away;
bool iommu_dma_protection;
struct work_struct interrupt_work;
u32 hop_count;
unsigned long quirks;