Commit graph

40 commits

Author SHA1 Message Date
Matthew Auld
068b1bd092 drm/i915: stop setting cache_dirty on discrete
Should not be needed. Even with non-coherent display, we should be using
device local-memory there, and not system memory.

v2: also add a warning in i915_gem_clflush_object

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20211027161813.3094681-4-matthew.auld@intel.com
2021-11-02 09:44:10 +00:00
Matthew Auld
2ea6ec7643 drm/i915: move cpu_write_needs_clflush
Move it next to its partner in crime: gpu_write_needs_clflush. For
better readability let's keep gpu vs cpu at least in the same file.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211027161813.3094681-3-matthew.auld@intel.com
2021-11-02 09:44:10 +00:00
Matthew Auld
81340cf3bd drm/i915/uapi: reject set_domain for discrete
The CPU domain should be static for discrete, and on DG1 we don't need
any flushing since everything is already coherent, so really all this
does is an object wait, for which we have an ioctl. Longer term the
desired caching should be an immutable creation time property for the
BO, which can be set with something like gem_create_ext.

One other user is iris + userptr, which uses set_domain to probe all
the pages to check if the GUP succeeds; however, we now have a PROBE
flag for this purpose.
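
As a rough sketch of the rejection, placed at the top of
i915_gem_set_domain_ioctl() (the exact errno used here is an assumption):

	struct drm_i915_private *i915 = to_i915(dev);

	if (IS_DGFX(i915))	/* CPU domain is static on discrete */
		return -EOPNOTSUPP;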

v2: add some more kernel doc, also add the implicit rules with caching

Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20210715101536.2606307-5-matthew.auld@intel.com
2021-07-23 11:55:32 +01:00
Matthew Auld
e7737b67ab drm/i915/uapi: reject caching ioctls for discrete
It's a no-op on DG1, and in the future, when we need to support other
devices which let us control the coherency, it should be an immutable
creation time property for the BO. This will likely be controlled
through a new gem_create_ext extension.

v2: add some kernel doc for the discrete changes, and document the
    implicit rules

Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20210715101536.2606307-2-matthew.auld@intel.com
2021-07-20 11:24:37 +01:00
Thomas Hellström
e11b7b6e57 drm/i915/display: Migrate objects to LMEM if possible for display
Objects intended to be used as display framebuffers must reside in
LMEM for discrete. If they happen to not do that, migrate them to
LMEM before pinning.
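
A hedged sketch of the pin-time fixup, assuming the
i915_gem_object_migrate() and i915_gem_object_is_lmem() helpers
(error handling trimmed):

	err = i915_gem_object_lock(obj, &ww);
	if (!err && !i915_gem_object_is_lmem(obj))
		err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM);
	if (!err)
		err = i915_gem_object_pin_pages(obj); /* pin only once it is in LMEM */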

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210629151203.209465-4-thomas.hellstrom@linux.intel.com
2021-06-30 11:32:53 +01:00
Maarten Lankhorst
ec701249aa drm/i915: Take obj lock around set_domain ioctl
We need to lock the object to move it to the correct domain,
add the missing lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: Pick version from an older patch series.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210128162612.927917-29-maarten.lankhorst@linux.intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-29-maarten.lankhorst@linux.intel.com
2021-03-24 17:27:20 +01:00
Maarten Lankhorst
1b321026e2 drm/i915: Pass ww ctx to intel_pin_to_display_plane
Instead of taking the lock multiple times, lock the object once
and perform the ww dance around attach_phys and pin_pages.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-23-maarten.lankhorst@linux.intel.com
2021-03-24 17:27:20 +01:00
Maarten Lankhorst
ed29c26911 drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.
Instead of doing what we do currently, which will never work with
PROVE_LOCKING, do the same as AMD does, and something similar to
relocation slowpath. When all locks are dropped, we acquire the
pages for pinning. When the locks are taken, we transfer those
pages in .get_pages() to the bo. As a final check before installing
the fences, we ensure that the mmu notifier was not called; if it is,
we return -EAGAIN to userspace to signal it has to start over.
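
A sketch of that final validity check, using the generic
mmu_interval_notifier API (the i915 field names here are assumptions):

	unsigned long seq;

	seq = mmu_interval_read_begin(&obj->userptr.notifier);
	/* pin_user_pages_fast() with all object locks dropped ... */
	/* ... retake the locks and transfer the pages in .get_pages() ... */
	if (mmu_interval_read_retry(&obj->userptr.notifier, seq))
		return -EAGAIN;	/* invalidated meanwhile: userspace must retry */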

Changes since v1:
- Unbinding is done in submit_init only. submit_begin() removed.
- MMU_NOTFIER -> MMU_NOTIFIER
Changes since v2:
- Make i915->mm.notifier a spinlock.
Changes since v3:
- Add WARN_ON if there are any page references left, should have been 0.
- Return 0 on success in submit_init(), bug from spinlock conversion.
- Release pvec outside of notifier_lock (Thomas).
Changes since v4:
- Mention why we're clearing eb->[i + 1].vma in the code. (Thomas)
- Actually check all invalidations in eb_move_to_gpu. (Thomas)
- Do not wait when process is exiting to fix gem_ctx_persistence.userptr.
Changes since v5:
- Clarify why check on PF_EXITING is (temporarily) required.
Changes since v6:
- Ensure userptr validity is checked in set_domain through a special path.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
[danvet: s/kfree/kvfree/ in i915_gem_object_userptr_drop_ref in the
previous review round, but which got lost. The other open questions
around page refcount are imo better discussed in a separate series,
with amdgpu folks involved].
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-17-maarten.lankhorst@linux.intel.com
2021-03-24 17:27:19 +01:00
Maarten Lankhorst
02b64a4a0c drm/i915: Reject more ioctls for userptr, v2.
There are a couple of ioctls related to tiling and cache placement
that make no sense for userptr; reject those (sketched after the list):
- i915_gem_set_tiling_ioctl()
    Tiling should always be linear for userptr. Changing placement will
    fail with -ENXIO.
- i915_gem_set_caching_ioctl()
    Userptr memory should always be cached. Changing caching mode will
    fail with -ENXIO.
- i915_gem_set_domain_ioctl()
    Still temporarily allowed to work as intended, it's used to check
    userptr validity. With the reworked userptr code, it will keep
    working for this usecase.
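
A sketch of the rejection, assuming the i915_gem_object_is_userptr()
helper introduced by this series (placed in set_tiling/set_caching):

	if (i915_gem_object_is_userptr(obj))
		return -ENXIO;	/* tiling stays linear, caching stays cached */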

This and the previous changes have been tested against beignet using
its own unit tests, and against intel-video-compute using piglit's
opencl tests.

Changes since v1:
- set_domain was apparently used in iris for checking userptr validity,
  keep it working as intended.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-14-maarten.lankhorst@linux.intel.com
2021-03-24 17:27:19 +01:00
Chris Wilson
14ca83eece drm/i915/gem: Drop lru bumping on display unpinning
Simplify the frontbuffer unpin by removing the lock requirement. The LRU
bumping was primarily to protect the GTT from being evicted and from
frontbuffers being eagerly shrunk. Now we protect frontbuffers from the
shrinker, and we avoid accidentally evicting from the GTT, so there is
no longer any benefit from bumping the LRU, and we can save time by not
doing it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210119214336.1463-6-chris@chris-wilson.co.uk
2021-01-20 21:05:12 +00:00
Chris Wilson
e956996cb0 drm/i915/gem: Protect used framebuffers from casual eviction
In the shrinker, we protect framebuffers from light reclaim as we
typically expect framebuffers to be reused in the near future (and with
low latency requirements). We can apply the same logic to the GGTT
eviction and defer framebuffers to the second pass, which is only used if the
caller is desperate enough to wait for space to become available.
In most cases, the caller will use a smaller partial vma instead of
trying to force the object into the GGTT if doing so will cause other
users to be evicted.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210119214336.1463-5-chris@chris-wilson.co.uk
2021-01-20 21:05:12 +00:00
Chris Wilson
d60d3374d3 drm/i915/gem: Make i915_gem_object_flush_write_domain() static
flush_write_domain() is only used within the GEM domain management code,
so move it to i915_gem_domain.c and drop the export.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210119144912.12653-5-chris@chris-wilson.co.uk
2021-01-19 20:47:25 +00:00
Chris Wilson
59dd13ad31 drm/i915/gem: Flush coherency domains on first set-domain-ioctl
Avoid skipping what appears to be a no-op set-domain-ioctl if the cache
coherency state is inconsistent with our target domain. This also has
the utility of using the population of the pages to validate the backing
store.

The danger in skipping the first set-domain is leaving the cache
inconsistent and submitting stale data, or worse leaving the clean data
in the cache and not flushing it to the GPU. The impact should be small
as it requires a no-op set-domain as the very first ioctl in a
particular sequence not found in typical userspace.

Reported-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Fixes: 754a254427 ("drm/i915: Skip object locking around a no-op set-domain ioctl")
Testcase: igt/gem_mmap_offset/blt-coherency
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: <stable@vger.kernel.org> # v5.2+
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201019203825.10966-1-chris@chris-wilson.co.uk
(cherry picked from commit 44c2200afc)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2020-11-03 19:14:03 -05:00
Maarten Lankhorst
c1793ba86a drm/i915: Add ww locking to pin_to_display_plane, v2.
Use ww locking for pin_to_display_plane for all the pinning and locking.
With the locking removed from set_cache_level, we need to fix
i915_gem_set_caching_ioctl to take the object reservation lock.

As this is a single lock, we don't need to use the ww dance.

Changes since v1:
- Do not use ww locking in i915_gem_set_caching_ioctl (Thomas).

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200819140904.1708856-24-maarten.lankhorst@linux.intel.com
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2020-09-07 14:32:22 +03:00
Maarten Lankhorst
1af343cdc1 drm/i915: Remove locking from i915_gem_object_prepare_read/write
Execbuffer submission will perform its own WW locking, and we
cannot rely on the implicit lock there.

This also makes it clear that the GVT code will get a lockdep splat when
multiple batchbuffer shadows need to be performed in the same instance;
fix that up.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200819140904.1708856-7-maarten.lankhorst@linux.intel.com
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2020-09-07 14:29:53 +03:00
Maarten Lankhorst
80f0b679d6 drm/i915: Add an implementation for i915_gem_ww_ctx locking, v2.
i915_gem_ww_ctx is used to lock all gem bo's for pinning and memory
eviction. We don't use it yet, but let's start adding the definition
first.

To use it, we have to pass a non-NULL ww to gem_object_lock, and not
unlock directly; the unlock is done in i915_gem_ww_ctx_fini.
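
For illustration, a minimal sketch of that dance using the helper names
from the i915 tree (treat it as a sketch under those assumptions, not the
patch itself):

/* Assumes driver context and an already-referenced obj; error handling trimmed. */
static int example_ww_locked_op(struct drm_i915_gem_object *obj)
{
	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);	/* true: interruptible waits */
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err) {
		/* ... pin pages, evict or bind while the object is locked ... */
	}
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);	/* drops every lock taken through this ww */
	return err;
}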

Changes since v1:
- Change ww_ctx and obj order in locking functions (Joonas Lahtinen)

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200819140904.1708856-6-maarten.lankhorst@linux.intel.com
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2020-09-07 14:29:44 +03:00
Chris Wilson
a95f3ac21d drm/i915/gem: Remove object_is_locked assertion from unpin_from_display_plane
Since moving the obj->vma.list to a spin_lock, and the vm->bound_list to
its vm->mutex, along with tracking shrinkable status under its own
spinlock, we no longer require the object to be locked by the caller.

This is fortunate, as it appears we can be called without the lock held along an
error path in flipping:

<4> [139.942851] WARN_ON(debug_locks && !lock_is_held(&(&((obj)->base.resv)->lock.base)->dep_map))
<4> [139.943242] WARNING: CPU: 0 PID: 1203 at drivers/gpu/drm/i915/gem/i915_gem_domain.c:405 i915_gem_object_unpin_from_display_plane+0x70/0x130 [i915]
<4> [139.943263] Modules linked in: snd_hda_intel i915 vgem snd_hda_codec_realtek snd_hda_codec_generic coretemp snd_intel_dspcfg snd_hda_codec snd_hwdep snd_hda_core r8169 lpc_ich snd_pcm realtek prime_numbers [last unloaded: i915]
<4> [139.943347] CPU: 0 PID: 1203 Comm: kms_flip Tainted: G     U            5.6.0-gd0fda5c2cf3f1-drmtip_474+ #1
<4> [139.943363] Hardware name:  /D510MO, BIOS MOPNV10J.86A.0311.2010.0802.2346 08/02/2010
<4> [139.943589] RIP: 0010:i915_gem_object_unpin_from_display_plane+0x70/0x130 [i915]
<4> [139.943589] Code: 85 28 01 00 00 be ff ff ff ff 48 8d 78 60 e8 d7 9b f0 e2 85 c0 75 b9 48 c7 c6 50 b9 38 c0 48 c7 c7 e9 48 3c c0 e8 20 d4 e9 e2 <0f> 0b eb a2 48 c7 c1 08 bb 38 c0 ba 0a 01 00 00 48 c7 c6 88 a3 35
<4> [139.943589] RSP: 0018:ffffb774c0603b48 EFLAGS: 00010282
<4> [139.943589] RAX: 0000000000000000 RBX: ffff9a142fa36e80 RCX: 0000000000000006
<4> [139.943589] RDX: 000000000000160d RSI: ffff9a142c1a88f8 RDI: ffffffffa434a64d
<4> [139.943589] RBP: ffff9a1410a513c0 R08: ffff9a142c1a88f8 R09: 0000000000000000
<4> [139.943589] R10: 0000000000000000 R11: 0000000000000000 R12: ffff9a1436ee94b8
<4> [139.943589] R13: 0000000000000001 R14: 00000000ffffffff R15: ffff9a1410960000
<4> [139.943589] FS:  00007fc73a744e40(0000) GS:ffff9a143da00000(0000) knlGS:0000000000000000
<4> [139.943589] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4> [139.943589] CR2: 00007fc73997e098 CR3: 000000002f5fe000 CR4: 00000000000006f0
<4> [139.943589] Call Trace:
<4> [139.943589]  intel_pin_and_fence_fb_obj+0x1c9/0x1f0 [i915]
<4> [139.943589]  intel_plane_pin_fb+0x3f/0xd0 [i915]
<4> [139.943589]  intel_prepare_plane_fb+0x13b/0x5c0 [i915]
<4> [139.943589]  drm_atomic_helper_prepare_planes+0x85/0x110
<4> [139.943589]  intel_atomic_commit+0xda/0x390 [i915]
<4> [139.943589]  drm_atomic_helper_page_flip+0x9c/0xd0
<4> [139.943589]  ? drm_event_reserve_init+0x46/0x60
<4> [139.943589]  drm_mode_page_flip_ioctl+0x587/0x5d0

This completes the symmetry lost in commit 8b1c78e06e ("drm/i915: Avoid
calling i915_gem_object_unbind holding object lock").

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1743
Fixes: 8b1c78e06e ("drm/i915: Avoid calling i915_gem_object_unbind holding object lock")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Andi Shyti <andi.shyti@intel.com>
Cc: <stable@vger.kernel.org> # v5.6+
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200420125356.26614-1-chris@chris-wilson.co.uk
2020-04-20 16:23:24 +01:00
Chris Wilson
9da0ea0963 drm/i915/gem: Drop cached obj->bind_count
We cached the number of vma bound to the object in order to speed up
shrinker decisions. This has been superseded by being more proactive in
removing objects we cannot shrink from the shrinker lists, and so we can
drop the clumsy attempt at atomically counting the bind count and
comparing it to the number of pinned mappings of the object. This will
only get clumsier with asynchronous binding and unbinding.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200401223924.16667-1-chris@chris-wilson.co.uk
2020-04-02 01:17:39 +01:00
Chris Wilson
da42104f58 drm/i915: Hold reference to intel_frontbuffer as we track activity
Since obj->frontbuffer is no longer protected by struct_mutex, it may be
removed while we are processing the execbuf. Mark the intel_frontbuffer
as rcu protected, and so acquire a reference to the struct as we track
activity upon it.
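
A sketch of taking that reference under RCU (the field and kref names
follow the i915 frontbuffer code but should be read as illustrative):

	struct intel_frontbuffer *front;

	rcu_read_lock();
	front = rcu_dereference(obj->frontbuffer);
	if (front && !kref_get_unless_zero(&front->ref))
		front = NULL;	/* lost the race against removal */
	rcu_read_unlock();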

Closes: https://gitlab.freedesktop.org/drm/intel/issues/827
Fixes: 8e7cb1799b ("drm/i915: Extract intel_frontbuffer active tracking")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: <stable@vger.kernel.org> # v5.4+
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218104043.3539458-1-chris@chris-wilson.co.uk
2019-12-18 12:09:57 +00:00
Chris Wilson
7a90f89904 drm/i915/gem: Serialise object before changing cache-level
Wait for the object to be idle before changing its cache-level and
unbinding. This was dropped as supposedly superfluous from commit
8b1c78e06e ("drm/i915: Avoid calling i915_gem_object_unbind holding
object lock"), but it turns out to prevent some cache dirt escaping.
Smells like papering over a race...

Closes: https://gitlab.freedesktop.org/drm/intel/issues/820
Fixes: 8b1c78e06e ("drm/i915: Avoid calling i915_gem_object_unbind holding object lock")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213223140.1830738-1-chris@chris-wilson.co.uk
2019-12-14 11:03:49 +00:00
Chris Wilson
16c46fd505 drm/i915/gem: Avoid rcu_barrier() from shrinker paths
As i915_gem_object_unbind() waits on an rcu_barrier() to flush vm
releases (and destruction of their bound vma), we have to be careful not
to invoke that barrier from beneath the shrinker:

<4> [430.222671] WARNING: possible circular locking dependency detected
<4> [430.222673] 5.4.0-rc8-CI-CI_DRM_7508+ #1 Tainted: G     U
<4> [430.222675] ------------------------------------------------------
<4> [430.222677] gem_pwrite/2317 is trying to acquire lock:
<4> [430.222678] ffffffff82248218 (rcu_state.barrier_mutex){+.+.}, at: rcu_barrier+0x23/0x190
<4> [430.222685]
but task is already holding lock:
<4> [430.222687] ffffffff82263a40 (fs_reclaim){+.+.}, at: fs_reclaim_acquire.part.117+0x0/0x30
<4> [430.222691]
which lock already depends on the new lock.

<4> [430.222693]
the existing dependency chain (in reverse order) is:
<4> [430.222695]
-> #2 (fs_reclaim){+.+.}:
<4> [430.222698]        fs_reclaim_acquire.part.117+0x24/0x30
<4> [430.222702]        kmem_cache_alloc_trace+0x2a/0x2c0
<4> [430.222705]        intel_cpuc_prepare+0x37/0x1a0
<4> [430.222709]        cpuhp_invoke_callback+0x9b/0x9d0
<4> [430.222712]        _cpu_up+0xa2/0x140
<4> [430.222714]        do_cpu_up+0x61/0xa0
<4> [430.222718]        smp_init+0x57/0x96
<4> [430.222722]        kernel_init_freeable+0xac/0x1c7
<4> [430.222725]        kernel_init+0x5/0x100
<4> [430.222728]        ret_from_fork+0x24/0x50
<4> [430.222729]
-> #1 (cpu_hotplug_lock.rw_sem){++++}:
<4> [430.222733]        cpus_read_lock+0x34/0xd0
<4> [430.222734]        rcu_barrier+0xaa/0x190
<4> [430.222736]        kernel_init+0x21/0x100
<4> [430.222737]        ret_from_fork+0x24/0x50
<4> [430.222739]
-> #0 (rcu_state.barrier_mutex){+.+.}:
<4> [430.222742]        __lock_acquire+0x1328/0x15d0
<4> [430.222743]        lock_acquire+0xa7/0x1c0
<4> [430.222746]        __mutex_lock+0x9a/0x9d0
<4> [430.222747]        rcu_barrier+0x23/0x190
<4> [430.222850]        i915_gem_object_unbind+0x264/0x3d0 [i915]
<4> [430.222882]        i915_gem_shrink+0x297/0x5f0 [i915]
<4> [430.222912]        i915_gem_shrink_all+0x38/0x60 [i915]
<4> [430.222934]        i915_drop_caches_set+0x1f0/0x240 [i915]
<4> [430.222938]        simple_attr_write+0xb0/0xd0
<4> [430.222941]        full_proxy_write+0x51/0x80
<4> [430.222943]        vfs_write+0xb9/0x1d0
<4> [430.222944]        ksys_write+0x9f/0xe0
<4> [430.222946]        do_syscall_64+0x4f/0x210
<4> [430.222948]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [430.222950]
other info that might help us debug this:

<4> [430.222952] Chain exists of:
  rcu_state.barrier_mutex --> cpu_hotplug_lock.rw_sem --> fs_reclaim

<4> [430.222955]  Possible unsafe locking scenario:

<4> [430.222957]        CPU0                    CPU1
<4> [430.222958]        ----                    ----
<4> [430.222960]   lock(fs_reclaim);
<4> [430.222961]                                lock(cpu_hotplug_lock.rw_sem);
<4> [430.222963]                                lock(fs_reclaim);
<4> [430.222964]   lock(rcu_state.barrier_mutex);
<4> [430.222966]
 *** DEADLOCK ***

<4> [430.222968] 3 locks held by gem_pwrite/2317:
<4> [430.222969]  #0: ffff88849e2d9408 (sb_writers#14){.+.+}, at: vfs_write+0x1a4/0x1d0
<4> [430.222973]  #1: ffff888496976db0 (&attr->mutex){+.+.}, at: simple_attr_write+0x36/0xd0
<4> [430.222976]  #2: ffffffff82263a40 (fs_reclaim){+.+.}, at: fs_reclaim_acquire.part.117+0x0/0x30
<4> [430.222980]
stack backtrace:
<4> [430.222982] CPU: 1 PID: 2317 Comm: gem_pwrite Tainted: G     U            5.4.0-rc8-CI-CI_DRM_7508+ #1
<4> [430.222985] Hardware name: Intel Corporation Tiger Lake Client Platform/TigerLake U DDR4 SODIMM RVP, BIOS TGLSFWI1.R00.2321.A08.1909162051 09/16/2019
<4> [430.222989] Call Trace:
<4> [430.222992]  dump_stack+0x71/0x9b
<4> [430.222995]  check_noncircular+0x19b/0x1c0
<4> [430.222998]  ? __lock_acquire+0x1328/0x15d0
<4> [430.222999]  __lock_acquire+0x1328/0x15d0
<4> [430.223001]  ? mark_held_locks+0x49/0x70
<4> [430.223003]  lock_acquire+0xa7/0x1c0
<4> [430.223005]  ? rcu_barrier+0x23/0x190
<4> [430.223008]  __mutex_lock+0x9a/0x9d0
<4> [430.223009]  ? rcu_barrier+0x23/0x190
<4> [430.223011]  ? rcu_barrier+0x23/0x190
<4> [430.223013]  ? find_held_lock+0x2d/0x90
<4> [430.223045]  ? i915_gem_object_unbind+0x24a/0x3d0 [i915]
<4> [430.223048]  ? rcu_barrier+0x23/0x190
<4> [430.223049]  rcu_barrier+0x23/0x190
<4> [430.223081]  i915_gem_object_unbind+0x264/0x3d0 [i915]
<4> [430.223119]  i915_gem_shrink+0x297/0x5f0 [i915]

Closes: https://gitlab.freedesktop.org/drm/intel/issues/743
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191208161252.3015727-1-chris@chris-wilson.co.uk
2019-12-09 10:49:38 +00:00
Chris Wilson
8b1c78e06e drm/i915: Avoid calling i915_gem_object_unbind holding object lock
In the extreme case, we may wish to wait on an rcu-barrier to reap stale
vm to purge the last of the object bindings. However, we are not allowed
to use rcu_barrier() beneath the dma_resv (i.e. object) lock and do not
take lightly the prospect of unlocking a mutex deep in the bowels of the
routine. i915_gem_object_unbind() itself does not need the object lock,
and it turns out the callers do not need the unbind as part of a
locked sequence around set-cache-level, so rearrange the code to avoid
taking the object lock in the callers.

<4> [186.816311] ======================================================
<4> [186.816313] WARNING: possible circular locking dependency detected
<4> [186.816316] 5.4.0-rc8-CI-CI_DRM_7486+ #1 Tainted: G     U
<4> [186.816318] ------------------------------------------------------
<4> [186.816320] perf_pmu/1321 is trying to acquire lock:
<4> [186.816322] ffff88849487c4d8 (&mm->mmap_sem#2){++++}, at: __might_fault+0x39/0x90
<4> [186.816331]
but task is already holding lock:
<4> [186.816333] ffffe8ffffa05008 (&cpuctx_mutex){+.+.}, at: perf_event_ctx_lock_nested+0xa9/0x1b0
<4> [186.816339]
which lock already depends on the new lock.

<4> [186.816341]
the existing dependency chain (in reverse order) is:
<4> [186.816343]
-> #6 (&cpuctx_mutex){+.+.}:
<4> [186.816349]        __mutex_lock+0x9a/0x9d0
<4> [186.816352]        perf_event_init_cpu+0xa4/0x140
<4> [186.816357]        perf_event_init+0x19d/0x1cd
<4> [186.816362]        start_kernel+0x372/0x4f4
<4> [186.816365]        secondary_startup_64+0xa4/0xb0
<4> [186.816381]
-> #5 (pmus_lock){+.+.}:
<4> [186.816385]        __mutex_lock+0x9a/0x9d0
<4> [186.816387]        perf_event_init_cpu+0x6b/0x140
<4> [186.816404]        cpuhp_invoke_callback+0x9b/0x9d0
<4> [186.816406]        _cpu_up+0xa2/0x140
<4> [186.816409]        do_cpu_up+0x61/0xa0
<4> [186.816411]        smp_init+0x57/0x96
<4> [186.816413]        kernel_init_freeable+0xac/0x1c7
<4> [186.816416]        kernel_init+0x5/0x100
<4> [186.816419]        ret_from_fork+0x24/0x50
<4> [186.816421]
-> #4 (cpu_hotplug_lock.rw_sem){++++}:
<4> [186.816424]        cpus_read_lock+0x34/0xd0
<4> [186.816427]        rcu_barrier+0xaa/0x190
<4> [186.816429]        kernel_init+0x21/0x100
<4> [186.816431]        ret_from_fork+0x24/0x50
<4> [186.816433]
-> #3 (rcu_state.barrier_mutex){+.+.}:
<4> [186.816436]        __mutex_lock+0x9a/0x9d0
<4> [186.816438]        rcu_barrier+0x23/0x190
<4> [186.816502]        i915_gem_object_unbind+0x3a6/0x400 [i915]
<4> [186.816537]        i915_gem_object_set_cache_level+0x32/0x90 [i915]
<4> [186.816571]        i915_gem_object_pin_to_display_plane+0x5d/0x160 [i915]
<4> [186.816612]        intel_pin_and_fence_fb_obj+0x9e/0x200 [i915]
<4> [186.816679]        intel_plane_pin_fb+0x3f/0xd0 [i915]
<4> [186.816717]        intel_prepare_plane_fb+0x130/0x520 [i915]
<4> [186.816722]        drm_atomic_helper_prepare_planes+0x85/0x110
<4> [186.816761]        intel_atomic_commit+0xc6/0x350 [i915]
<4> [186.816764]        drm_atomic_helper_update_plane+0xed/0x110
<4> [186.816768]        setplane_internal+0x97/0x190
<4> [186.816770]        drm_mode_setplane+0xcd/0x190
<4> [186.816773]        drm_ioctl_kernel+0xa7/0xf0
<4> [186.816775]        drm_ioctl+0x2e1/0x390
<4> [186.816778]        do_vfs_ioctl+0xa0/0x6f0
<4> [186.816780]        ksys_ioctl+0x35/0x60
<4> [186.816782]        __x64_sys_ioctl+0x11/0x20
<4> [186.816785]        do_syscall_64+0x4f/0x210
<4> [186.816787]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [186.816789]
-> #2 (reservation_ww_class_mutex){+.+.}:
<4> [186.816793]        __ww_mutex_lock.constprop.15+0xc3/0x1090
<4> [186.816795]        ww_mutex_lock+0x39/0x70
<4> [186.816798]        dma_resv_lockdep+0x10e/0x1f7
<4> [186.816800]        do_one_initcall+0x58/0x2ff
<4> [186.816802]        kernel_init_freeable+0x137/0x1c7
<4> [186.816804]        kernel_init+0x5/0x100
<4> [186.816806]        ret_from_fork+0x24/0x50
<4> [186.816808]
-> #1 (reservation_ww_class_acquire){+.+.}:
<4> [186.816811]        dma_resv_lockdep+0xec/0x1f7
<4> [186.816813]        do_one_initcall+0x58/0x2ff
<4> [186.816815]        kernel_init_freeable+0x137/0x1c7
<4> [186.816817]        kernel_init+0x5/0x100
<4> [186.816819]        ret_from_fork+0x24/0x50
<4> [186.816820]
-> #0 (&mm->mmap_sem#2){++++}:
<4> [186.816824]        __lock_acquire+0x1328/0x15d0
<4> [186.816826]        lock_acquire+0xa7/0x1c0
<4> [186.816828]        __might_fault+0x63/0x90
<4> [186.816831]        _copy_to_user+0x1e/0x80
<4> [186.816834]        perf_read+0x200/0x2b0
<4> [186.816836]        vfs_read+0x96/0x160
<4> [186.816838]        ksys_read+0x9f/0xe0
<4> [186.816839]        do_syscall_64+0x4f/0x210
<4> [186.816841]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [186.816843]
other info that might help us debug this:

<4> [186.816846] Chain exists of:
  &mm->mmap_sem#2 --> pmus_lock --> &cpuctx_mutex

<4> [186.816849]  Possible unsafe locking scenario:

<4> [186.816851]        CPU0                    CPU1
<4> [186.816853]        ----                    ----
<4> [186.816854]   lock(&cpuctx_mutex);
<4> [186.816856]                                lock(pmus_lock);
<4> [186.816858]                                lock(&cpuctx_mutex);
<4> [186.816860]   lock(&mm->mmap_sem#2);
<4> [186.816861]
 *** DEADLOCK ***

Closes: https://gitlab.freedesktop.org/drm/intel/issues/728
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191206105527.1130413-5-chris@chris-wilson.co.uk
2019-12-07 19:27:36 +00:00
Chris Wilson
780ccb5a5a drm/i915/gem: Hold the obj->vma.lock while walking the vma.list
Remember to take the lock before walking the obj->vma.list so that the
nodes do not change beneath us! E.g.,

i915_gem_object_bump_inactive_ggtt:387 GEM_BUG_ON(vma->vm != &i915->ggtt.vm)

Closes: https://gitlab.freedesktop.org/drm/intel/issues/691
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191204164527.3872783-1-chris@chris-wilson.co.uk
2019-12-04 17:30:44 +00:00
Abdiel Janulgue
cc662126b4 drm/i915: Introduce DRM_I915_GEM_MMAP_OFFSET
This is really just an alias of mmap_gtt. The 'mmap offset' nomenclature
comes from the value returned by this ioctl, which is the offset into the
device fd that userspace uses with mmap(2).

mmap_gtt was our initial mmap_offset implementation; this extends
our CPU mmap support to allow additional fault handlers that depend on
the object's backing pages.

Note that we multiplex mmap_gtt and mmap_offset through the same ioctl,
and use the zero extending behaviour of drm to differentiate between
them, when we inspect the flags.

To support multiple mmap types on an object we need to support multiple
mmap_offsets for an object (each offset in the global device address
space corresponding to a unique instance of the object for a file + mmap
type). As we drop the simplified drm core idea of a single mmap_offset,
we need to provide replacement hooks for the dumb mmap interface as
well.
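
For reference, a hedged userspace-side sketch of the new ioctl (assumes
an open i915 fd, a valid GEM handle and a libdrm-style include path;
error handling omitted):

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/i915_drm.h>

static void *map_bo_wc(int fd, uint32_t handle, size_t size)
{
	struct drm_i915_gem_mmap_offset arg = {
		.handle = handle,
		.flags = I915_MMAP_OFFSET_WC,	/* selects the WC fault handler */
	};

	ioctl(fd, DRM_IOCTL_I915_GEM_MMAP_OFFSET, &arg);
	/* arg.offset is the fake offset into the device fd for mmap(2) */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, arg.offset);
}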

Link: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1675
Testcase: igt/gem_mmap_offset
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191204120032.3682839-1-chris@chris-wilson.co.uk
2019-12-04 15:11:44 +00:00
Chris Wilson
7d0aa0db43 drm/i915/gem: Unbind all current vma on changing cache-level
Avoid dangerous race handling of destroyed vma by unbinding all vma
instead. Unfortunately, this stops us from trying to be clever and only
doing the minimal change required, so on first use of scanout we may
encounter an annoying stall as it transitions to a new cache level.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112413
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191202174310.2630302-1-chris@chris-wilson.co.uk
2019-12-02 21:57:22 +00:00
Chris Wilson
42d70253c2 drm/i915/gem: Track ggtt writes from userspace on the bound vma
When userspace writes into the GTT itself, it is supposed to call
set-domain to let the kernel keep track and so manage the CPU/GPU
caches. As we track writes on the individual i915_vma, we should also be
sure to mark them as dirty.
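
A sketch of the set-domain(GTT, write) path implied above (helper names
are taken from the i915 tree; this is a sketch, not the patch itself):

	spin_lock(&obj->vma.lock);
	for_each_ggtt_vma(vma, obj) {
		if (i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND))
			i915_vma_set_ggtt_write(vma);	/* flush this write later */
	}
	spin_unlock(&obj->vma.lock);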

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191119112515.2766748-1-chris@chris-wilson.co.uk
2019-11-19 15:12:46 +00:00
Ramalingam C
9e678fc9ba drm/i915: FB backing gem obj should reside in LMEM
If local memory is supported by the hardware, we want framebuffer
backing gem objects to come from local memory.

If the backing obj is not from LMEM, pin_to_display fails.

v2:
  memory regions are correctly assigned to obj->memory_regions [tvrtko]
  migration failure is reported as debug log [Tvrtko]
v3:
  Migration is dropped. only error is reported [Daniel]
  mem region check is move to pin_to_display [Chris]
v4:
  s/dev_priv/i915 [chris]
v5:
  i915_gem_object_is_lmem is used for detecting the obj mem type. [Matt]
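
Per the v5 note above, a minimal sketch of the pin_to_display check
(the errno chosen here is an assumption):

	/* Frame buffers must already live in LMEM on such hardware */
	if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj))
		return ERR_PTR(-EINVAL);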

cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191105144414.30470-1-ramalingam.c@intel.com
2019-11-07 10:55:40 +00:00
Chris Wilson
2850748ef8 drm/i915: Pull i915_vma_pin under the vm->mutex
Replace the struct_mutex requirement for pinning the i915_vma with the
local vm->mutex instead. Note that the vm->mutex is tainted by the
shrinker (we require unbinding from inside fs-reclaim) and so we cannot
allocate while holding that mutex. Instead we have to preallocate
workers to do allocate and apply the PTE updates after we have we
reserved their slot in the drm_mm (using fences to order the PTE writes
with the GPU work and with later unbind).

In adding the asynchronous vma binding, one subtle requirement is to
avoid coupling the binding fence into the backing object->resv. That is
the asynchronous binding only applies to the vma timeline itself and not
to the pages as that is a more global timeline (the binding of one vma
does not need to be ordered with another vma, nor does the implicit GEM
fencing depend on a vma, only on writes to the backing store). Keeping
the vma binding distinct from the backing store timelines is verified by
a number of async gem_exec_fence and gem_exec_schedule tests. The way we
do this is quite simple, we keep the fence for the vma binding separate
and only wait on it as required, and never add it to the obj->resv
itself.

Another consequence in reducing the locking around the vma is the
destruction of the vma is no longer globally serialised by struct_mutex.
A natural solution would be to add a kref to i915_vma, but that requires
decoupling the reference cycles, possibly by introducing a new
i915_mm_pages object that is owned by both obj->mm and vma->pages.
However, we have not taken that route due to the overshadowing lmem/ttm
discussions, and instead play a series of complicated games with
trylocks to (hopefully) ensure that only one destruction path is called!

v2: Add some commentary, and some helpers to reduce patch churn.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
2019-10-04 15:39:02 +01:00
Chris Wilson
99013b1010 drm/i915: Make shrink/unshrink be atomic
Add an atomic counter and always take the spinlock around the pin/unpin
events, so that we can perform the list manipulation concurrently.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190910212204.17190-1-chris@chris-wilson.co.uk
2019-09-11 08:14:23 +01:00
Matthew Auld
33dd889923 drm/i915: cleanup cache-coloring
Try to tidy up the cache-coloring such that we rid the code of any
mm.color_adjust assumptions; this should hopefully make it more obvious
in the code when we need to actually use the cache-level as the color,
and as a bonus should make adding a different color-scheme simpler.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190909124052.22900-3-matthew.auld@intel.com
2019-09-09 21:00:20 +01:00
Chris Wilson
5a90606df7 drm/i915: Replace obj->pin_global with obj->frontbuffer
obj->pin_global was originally used as a means to keep the shrinker off
the active scanout, but we use the vma->pin_count itself for that and
the obj->frontbuffer to delay shrinking active framebuffers. The other
role that obj->pin_global gained was for spotting display objects inside
GEM and working harder to keep those coherent, for which we can again
simply inspect obj->frontbuffer directly.

Coming up next, we will want to manipulate the pin_global counter
outside of the principle locks, so would need to make pin_global atomic.
However, since obj->frontbuffer is already managed atomically, it makes
sense to use that as the primary key for display objects instead of having
pin_global.

Ville pointed out the principle difference is that obj->frontbuffer is
set for as long as an intel_framebuffer is attached to an object, but
obj->pin_global was only raised for as long as the object was active. In
practice, this means that we consider the object as being on the scanout
for longer than is strictly required, causing us to be more proactive in
flushing -- though it should be true that we would have flushed
eventually when the back became the front, except that on the flip path
that flush is async but when hit from another ioctl it will be
synchronous.

v2: i915_gem_object_is_framebuffer()
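
The v2 helper reduces to inspecting the atomically-managed pointer,
roughly:

static inline bool
i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
{
	return READ_ONCE(obj->frontbuffer);
}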

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190902040303.14195-5-chris@chris-wilson.co.uk
2019-09-03 05:39:37 +01:00
Chris Wilson
1f7fd484ff drm/i915: Replace i915_vma_put_fence()
Avoid calling i915_vma_put_fence() by using our alternate paths that
bind a secondary vma avoiding the original fenced vma. For the few
instances where we need to release the fence (i.e. on binding when the
GGTT range becomes invalid), replace the put_fence with a revoke_fence.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190822061557.18402-1-chris@chris-wilson.co.uk
2019-08-22 08:53:42 +01:00
Chris Wilson
8e7cb1799b drm/i915: Extract intel_frontbuffer active tracking
Move the active tracking for the frontbuffer operations out of the
i915_gem_object and into its own first class (refcounted) object. In the
process of detangling, we switch from low level request tracking to the
easier i915_active -- with the plan that this avoids any potential
atomic callbacks as the frontbuffer tracking wishes to sleep as it
flushes.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190816074635.26062-1-chris@chris-wilson.co.uk
2019-08-16 09:51:11 +01:00
Jani Nikula
df0566a641 drm/i915: move modesetting core code under display/
Now that we have a new subdirectory for display code, continue by moving
modesetting core code.

display/intel_frontbuffer.h sticks out like a sore thumb; otherwise this
is, again, a surprisingly clean operation.

v2:
- don't move intel_sideband.[ch] (Ville)
- use tabs for Makefile file lists and sort them

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190613084416.6794-3-jani.nikula@intel.com
2019-06-17 11:48:32 +03:00
Chris Wilson
ecab9be174 drm/i915: Combine unbound/bound list tracking for objects
With async binding, we don't want to manage a bound/unbound list as we
may end up running before we even acquire the pages. All that is
required is keeping track of shrinkable objects, so reduce it to the
minimum list.

Fixes: 6951e5893b ("drm/i915: Move GEM object domain management from struct_mutex to local")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190612105720.30310-1-chris@chris-wilson.co.uk
2019-06-12 13:36:43 +01:00
Chris Wilson
a8cff4c828 drm/i915: Promote i915->mm.obj_lock to be irqsafe
The intent is to be able to update the mm.lists from inside an irqsoff
section (e.g. from a softirq rcu workqueue), ergo we need to make the
i915->mm.obj_lock irqsafe.

v2: can_discard_pages() ensures we are shrinkable
v3: Beware shadowing of 'flags'

Fixes: 3b4fa9640c ("drm/i915: Track the purgeable objects on a separate eviction list")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110869
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190610145430.17717-1-chris@chris-wilson.co.uk
2019-06-10 20:43:08 +01:00
Chris Wilson
d82b4b2621 drm/i915: Report all objects with allocated pages to the shrinker
Currently, we try to report to the shrinker the precise number of
objects (pages) that are available to be reaped at this moment. This
requires searching all objects with allocated pages to see if they
fulfill the search criteria, and this count is performed quite
frequently. (The shrinker tries to free ~128 pages on each invocation,
before which we count all the objects; counting takes longer than
unbinding the objects!) If we take the pragmatic view that with
sufficient desire, all objects are eventually reapable (they become
inactive, or no longer used as framebuffer etc), we can simply return
the count of pinned pages maintained during get_pages/put_pages rather
than walk the lists every time.

The downside is that we may (slightly) over-report the number of
objects/pages we could shrink and so penalize ourselves by shrinking
more than required. This is mitigated by keeping the order in which we
shrink objects such that we avoid penalizing active and frequently used
objects, and if memory is so tight that we need to free them we would
need to anyway.

v2: Only expose shrinkable objects to the shrinker; a small reduction in
not considering stolen and foreign objects.
v3: Restore the tracking from a "backup" copy from before the gem/ split

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190530203500.26272-2-chris@chris-wilson.co.uk
2019-05-31 21:23:51 +01:00
Chris Wilson
3b4fa9640c drm/i915: Track the purgeable objects on a separate eviction list
Currently the purgeable objects, I915_MADV_DONTNEED, are mixed in the
normal bound/unbound lists. Every shrinker pass starts with an attempt
to purge from this set of unneeded objects, which entails us doing a
walk over both lists looking for any candidates. If there are none (and
since we are shrinking, we can reasonably assume that the lists are
full!), this becomes a very slow, futile walk.

If we separate out the purgeable objects into their own list, this search then
becomes its own phase that is preferentially handled during shrinking.
Instead the cost becomes that we then need to filter the purgeable list
if we want to distinguish between bound and unbound objects.
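
A sketch of the madvise-time list move this implies (the list and lock
names are taken from the i915 shrinker code, but treat them as
assumptions):

	spin_lock(&i915->mm.obj_lock);
	list_move_tail(&obj->mm.link,
		       obj->mm.madv == I915_MADV_DONTNEED ?
		       &i915->mm.purge_list : &i915->mm.shrink_list);
	spin_unlock(&i915->mm.obj_lock);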

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190530203500.26272-1-chris@chris-wilson.co.uk
2019-05-31 21:23:51 +01:00
Chris Wilson
6951e5893b drm/i915: Move GEM object domain management from struct_mutex to local
Use the per-object local lock to control the cache domain of the
individual GEM objects, not struct_mutex. This is a huge leap forward
for us in terms of object-level synchronisation; execbuffers are
coordinated using the ww_mutex and pread/pwrite is finally fully
serialised again.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190528092956.14910-10-chris@chris-wilson.co.uk
2019-05-28 12:45:29 +01:00
Chris Wilson
f0e4a06397 drm/i915: Move GEM domain management to its own file
Continuing the decluttering of i915_gem.c, that of the read/write
domains, perhaps the biggest of GEM's follies?

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190528092956.14910-7-chris@chris-wilson.co.uk
2019-05-28 12:45:29 +01:00