For a5xx the GPU is 64-bit, so we need to change the iova to 64 bits
everywhere. On the display side the iova is still 32-bit, so it can
ignore the upper bits. (Although all of the armv8 devices have an IOMMU
that can map a 64-bit PA to a 32-bit iova.)
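A rough sketch of the shape of the change (struct and field names here
are illustrative, not the exact ones the patch touches):

    #include <linux/types.h>

    /* Before: a 32-bit iova silently truncates a5xx GPU addresses. */
    struct example_gem_object_old {
        uint32_t iova;
    };

    /* After: carry the full 64-bit address everywhere; 32-bit-only
     * consumers (e.g. the display path) just ignore the upper bits. */
    struct example_gem_object {
        uint64_t iova;
    };

    /* Signatures that pass the iova around are widened to match. */
    int example_gem_get_iova(struct example_gem_object *obj, uint64_t *iova);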
Signed-off-by: Rob Clark <robdclark@gmail.com>
The msm_iommu_map/unmap funcs have debug prints to show the list of
VA:PA mappings. Use the correct variable to print the VAs.
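The shape of the bug, as a minimal sketch (a hypothetical map loop, not
the driver's exact code; `da` is the running device VA):

    #include <linux/scatterlist.h>
    #include <linux/printk.h>

    static void sketch_map_debug(struct sg_table *sgt, unsigned int iova)
    {
        struct scatterlist *sg;
        unsigned int da = iova;
        int i;

        for_each_sg(sgt->sgl, sg, sgt->nents, i) {
            phys_addr_t pa = sg_phys(sg);

            /* before: printed the loop-invariant base `iova` for every
             * entry; after: print `da`, the VA this entry actually
             * maps to. */
            pr_debug("map[%d]: %08x %pap(%x)\n", i, da, &pa, sg->length);
            da += sg->length;
        }
    }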
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
The u32 type used to pass the physical addresses to iommu_map can't
accommodate 64-bit addresses. Move to dma_addr_t to ensure truncated
addresses aren't handed to the IOMMU driver.
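A minimal sketch of the type change (the wrapper function is
hypothetical):

    #include <linux/dma-mapping.h>
    #include <linux/iommu.h>

    /* Before, truncates physical addresses above 4G:
     *   static int sketch_map_one(..., u32 da, u32 pa, size_t len);
     * After: dma_addr_t/phys_addr_t carry the full width on LPAE and
     * 64-bit systems. */
    static int sketch_map_one(struct iommu_domain *domain, dma_addr_t da,
            phys_addr_t pa, size_t len)
    {
        return iommu_map(domain, da, pa, len, IOMMU_READ | IOMMU_WRITE);
    }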
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Avoid casts from pointers to fixed-size integers to prevent compiler
warnings. Print virtual memory addresses using %p instead. Also turn a
couple of %d/%x specifiers into %zu/%zd/%zx to avoid further warnings
due to mismatched format strings.
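An illustrative before/after (the message itself is made up):

    #include <linux/printk.h>

    static void sketch_print(void *vaddr, size_t size)
    {
        /* before, warns on 64-bit due to the cast and the mismatched
         * specifiers:
         *   printk("buf: %x size: %d\n", (u32)vaddr, (int)size);
         */

        /* after: %p for pointers, %zu for size_t */
        printk(KERN_DEBUG "buf: %p size: %zu\n", vaddr, size);
    }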
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
87e956e9 changed the fault handler to return -ENOSYS, which causes the
iommu driver to print out a huge splat. That wouldn't be quite so bad
if nothing ever faulted, but it seems that some EXA composite
operations generate quite a lot of (seemingly harmless) faults. That is
probably a userspace problem, but the huge increase in verbosity from
the iommu fault dumps makes things kind of unusable.
We probably should log *some* message (not conditional on drm.debug),
but ratelimit it.
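A minimal sketch of that approach, using the fault-handler hook that
iommu_set_fault_handler() provides (the message text is illustrative):

    #include <linux/iommu.h>
    #include <linux/device.h>
    #include <linux/printk.h>

    /* Log unconditionally but ratelimited, and return 0 ("handled")
     * so the iommu driver doesn't dump its own splat. */
    static int sketch_fault_handler(struct iommu_domain *domain,
            struct device *dev, unsigned long iova, int flags, void *arg)
    {
        pr_warn_ratelimited("%s: unhandled fault: iova=%08lx flags=%x\n",
                dev_name(dev), iova, flags);
        return 0;
    }

    /* registered with:
     *   iommu_set_fault_handler(domain, sketch_fault_handler, NULL);
     */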
Signed-off-by: Rob Clark <robdclark@gmail.com>
The downstream kernel's IOMMU had a non-standard way of dealing with
multiple devices and multiple ports/contexts. We don't need that on the
upstream kernel, so rip out the crazy.
Note that we have to move the pinning of the ringbuffer to after the
IOMMU is attached (see the sketch below). No idea how that managed to
work properly on the downstream kernel.
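The ordering constraint, sketched (a fragment; types and helper names
approximate the driver's, not exact):

    /* Attach the GPU's iommu context first... */
    ret = mmu->funcs->attach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports));
    if (ret)
        return ret;

    /* ...only then pin the ringbuffer BO; pinning earlier would map
     * it into a context that isn't attached yet. */
    ret = msm_gem_get_iova(gpu->rb->bo, gpu->id, &gpu->rb_iova);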
For now, I am leaving the IOMMU port name stuff in place, to simplify
things for folks trying to backport the latest drm/msm to device
kernels. Once we no longer have to care about pre-DT kernels, we can
drop this and instead backport the upstream IOMMU driver.
Signed-off-by: Rob Clark <robdclark@gmail.com>
If probe fails after the IOMMU is attached, we need to detach it in
order to clean up properly. Before this change, IOMMU faults would
occur if the probe failed (e.g. with -EPROBE_DEFER).
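A sketch of the unwind (the helpers are hypothetical; the point is the
goto-based detach on the error path):

    #include <linux/platform_device.h>

    static int sketch_probe(struct platform_device *pdev)
    {
        int ret;

        ret = sketch_attach_iommu(pdev);    /* hypothetical helper */
        if (ret)
            return ret;

        ret = sketch_init_rest(pdev);       /* may return -EPROBE_DEFER */
        if (ret)
            goto fail_detach;

        return 0;

    fail_detach:
        sketch_detach_iommu(pdev);          /* hypothetical helper */
        return ret;
    }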
Signed-off-by: Stephane Viau <sviau@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Use the definition from mm.h.
Cc: David Airlie <airlied@linux.ie>
Cc: Rob Clark <robdclark@gmail.com>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Add support for adreno 330. Not too much is different: just a few
changes in initial configuration, plus setting the OCMEM base (sketched
below). Userspace support is already in upstream mesa.
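The a330-specific bit looks roughly like this (a sketch; the register
and helper names follow the driver's conventions but may not be exact):

    /* a330 places GMEM in OCMEM, so program its base during hw init */
    if (adreno_is_a330(adreno_gpu))
        gpu_write(gpu, REG_A3XX_RB_GMEM_BASE_ADDR,
                (unsigned int)(a3xx_gpu->ocmem_base >> 14));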
Note that the existing DT code simply uses the bindings from the
downstream android kernel, to simplify porting of this driver to
existing devices. These do not constitute any committed/stable DT ABI.
Proper DT bindings will be added in a subsequent patch, at which point
I will try (as best as possible) to support either the upstream
bindings or what is found in the downstream android kernel, so that
existing device DT files can be used.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Add a VRAM carveout that is used for systems which do not have an
IOMMU. The VRAM carveout uses CMA. The arch code must set up a CMA pool
for the device (preferably in highmem; a 256MB-512MB VRAM pool in
lowmem is not cool). The user can configure the VRAM pool size using
the msm.vram module param.
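A sketch of how such a carveout can be set up (the param parsing and
bookkeeping are illustrative; sub-allocation of individual buffers,
e.g. via drm_mm, is omitted):

    #include <linux/dma-mapping.h>
    #include <linux/module.h>

    static char *vram = "16m";  /* overridden with msm.vram= */
    module_param(vram, charp, 0);

    /* dma_alloc_coherent() is backed by the device's CMA region when
     * one has been declared for it, so carving out the whole pool up
     * front is a single allocation. */
    static void *sketch_vram_init(struct device *dev, size_t size,
            dma_addr_t *paddr)
    {
        return dma_alloc_coherent(dev, size, paddr, GFP_KERNEL);
    }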
Technically, the abstraction of IOMMU behind msm_mmu is not strictly
needed, but it simplifies the GEM code a bit, and will be useful later
when I add support for a2xx devices with GPUMMU, so I decided to keep
this part.
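The abstraction is just a small ops vtable; a simplified sketch of its
shape (mirroring msm_mmu, names approximate):

    #include <linux/types.h>
    #include <linux/scatterlist.h>

    struct sketch_mmu;

    /* GEM code talks to this interface; the backend can be the real
     * IOMMU, a future GPUMMU, or nothing at all (VRAM carveout). */
    struct sketch_mmu_funcs {
        int (*attach)(struct sketch_mmu *mmu, const char **names, int cnt);
        int (*map)(struct sketch_mmu *mmu, uint32_t iova,
                struct sg_table *sgt, unsigned int len, int prot);
        int (*unmap)(struct sketch_mmu *mmu, uint32_t iova,
                struct sg_table *sgt, unsigned int len);
        void (*destroy)(struct sketch_mmu *mmu);
    };

    struct sketch_mmu {
        const struct sketch_mmu_funcs *funcs;
        struct device *dev;
    };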
It appears to be possible to configure the GPU to restrict access to
addresses within the VRAM pool, but this is not done yet. So for now
the GPU will refuse to load if there is no MMU of any sort. Once
address-based limits are supported, and tested to confirm that we
aren't giving the GPU access to arbitrary memory, this restriction can
be lifted.
Signed-off-by: Rob Clark <robdclark@gmail.com>