Merge tag 'drm-misc-next-2023-12-07' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 6.8:

UAPI Changes:
  - Remove Userspace Mode-Setting ioctls
  - v3d: New uapi to handle jobs involving the CPU

Cross-subsystem Changes:

Core Changes:
  - atomic: Add support for FB-less planes, which got reverted a bit
    later for lack of IGT tests and userspace code; dump private objects
    state in drm_state_dump.
  - dma-buf: Add fence deadline support
  - encoder: Create per-encoder debugfs directory, move the bridge chain
    file to that directory

Driver Changes:
  - Include drm_auth.h in drivers that use it but don't include it, Drop
    drm_plane_helper.h from drivers that include it but don't use it
  - imagination: Plenty of small fixes
  - panfrost: Improve interrupt handling at poweroff
  - qaic: Convert to persistent DRM devices
  - tidss: Support for the AM62A7, a few probe improvements, some cleanups
  - v3d: Support for jobs involving the CPU

  - bridge:
    - Create transparent aux-bridge for DP/USB-C
    - lt8912b: Add suspend/resume support and power regulator support

  - panel:
    - himax-hx8394: Drop prepare, unprepare and shutdown logic, Support
      panel rotation
    - New panels: BOE BP101WX1-100, Powkiddy X55, Ampire AM8001280G,
      Evervision VGG644804, SDC ATNA45AF01

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maxime Ripard <mripard@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/yu5heqaufyeo4nlowzieu4s5unwqrqyx4jixbfjmzdon677rpk@t53vceua2dao

commit a60501d7c2
Dave Airlie <airlied@redhat.com>, 2023-12-08 16:26:20 +10:00

258 changed files with 6456 additions and 10514 deletions

@ -93,8 +93,15 @@ commands (does not impact QAIC).
uAPI
====
QAIC creates an accel device per physical PCIe device. This accel device exists
for as long as the PCIe device is known to Linux.
The PCIe device may not be in a state to accept requests from userspace at
all times. QAIC will trigger KOBJ_ONLINE/OFFLINE uevents to advertise when the
device can accept requests (ONLINE) and when the device is no longer accepting
requests (OFFLINE) because of a reset or other state transition.
QAIC defines a number of driver-specific IOCTLs as part of the userspace API.
This section describes those APIs.
DRM_IOCTL_QAIC_MANAGE
This IOCTL allows userspace to send an NNC request to the QSM. The call will


@ -153,6 +153,8 @@ NOTE: Some pages, such as DAX pages, cannot be pinned with longterm pins. That's
because DAX pages do not have a separate page cache, and so "pinning" implies
locking down file system blocks, which is not (yet) supported in that way.
.. _mmu-notifier-registration-case:
CASE 3: MMU notifier registration, with or without page faulting hardware
-------------------------------------------------------------------------
Device drivers can pin pages via get_user_pages*(), and register for mmu


@ -55,6 +55,27 @@ properties:
- port@0
- port@1
vcchdmipll-supply:
description: A 1.8V supply that powers the HDMI PLL.
vcchdmitx-supply:
description: A 1.8V supply that powers the HDMI TX part.
vcclvdspll-supply:
description: A 1.8V supply that powers the LVDS PLL.
vcclvdstx-supply:
description: A 1.8V supply that powers the LVDS TX part.
vccmipirx-supply:
description: A 1.8V supply that powers the MIPI RX part.
vccsysclk-supply:
description: A 1.8V supply that powers the SYSCLK.
vdd-supply:
description: A 1.8V supply that powers the digital part.
required:
- compatible
- reg


@ -23,6 +23,7 @@ properties:
items:
- enum:
- hannstar,hsd060bhw4
- powkiddy,x55-panel
- const: himax,hx8394
reg: true
@ -31,6 +32,8 @@ properties:
backlight: true
rotation: true
port: true
vcc-supply:


@ -16,6 +16,7 @@ properties:
compatible:
items:
- enum:
- ampire,am8001280g
- bananapi,lhr050h41
- feixin,k101-im2byl02
- tdo,tl050hdv35


@ -73,6 +73,8 @@ properties:
- auo,t215hvn01
# Shanghai AVIC Optoelectronics 7" 1024x600 color TFT-LCD panel
- avic,tm070ddh03
# BOE BP101WX1-100 10.1" WXGA (1280x800) LVDS panel
- boe,bp101wx1-100
# BOE EV121WXM-N10-1850 12.1" WXGA (1280x800) TFT LCD panel
- boe,ev121wxm-n10-1850
# BOE HV070WSA-100 7.01" WSVGA TFT LCD panel
@ -144,6 +146,8 @@ properties:
- edt,etmv570g2dhu
# E Ink VB3300-KCA
- eink,vb3300-kca
# Evervision Electronics Co. Ltd. VGG644804 5.7" VGA TFT LCD Panel
- evervision,vgg644804
# Evervision Electronics Co. Ltd. VGG804821 5.0" WVGA TFT LCD Panel
- evervision,vgg804821
# Foxlink Group 5" WVGA TFT LCD panel


@ -23,6 +23,7 @@ properties:
compatible:
enum:
- ti,am625-dss
- ti,am62a7-dss
- ti,am65x-dss
reg:
@ -87,6 +88,7 @@ properties:
For AM65x DSS, the OLDI output port node from video port 1.
For AM625 DSS, the internal DPI output port node from video
port 1.
For AM62A7 DSS, the port is tied off inside the SoC.
port@1:
$ref: /schemas/graph.yaml#/properties/port
@ -108,6 +110,18 @@ properties:
Input memory (from main memory to dispc) bandwidth limit in
bytes per second
allOf:
- if:
properties:
compatible:
contains:
const: ti,am62a7-dss
then:
properties:
ports:
properties:
port@0: false
required:
- compatible
- reg


@ -29,6 +29,7 @@ properties:
- allwinner,sun50i-a64-mali
- rockchip,rk3036-mali
- rockchip,rk3066-mali
- rockchip,rk3128-mali
- rockchip,rk3188-mali
- rockchip,rk3228-mali
- samsung,exynos4210-mali


@ -548,6 +548,8 @@ Plane Composition Properties
.. kernel-doc:: drivers/gpu/drm/drm_blend.c
:doc: overview
.. _damage_tracking_properties:
Damage Tracking Properties
--------------------------
@ -579,6 +581,12 @@ Variable Refresh Properties
.. kernel-doc:: drivers/gpu/drm/drm_connector.c
:doc: Variable refresh properties
Cursor Hotspot Properties
---------------------------
.. kernel-doc:: drivers/gpu/drm/drm_plane.c
:doc: hotspot properties
Existing KMS Properties
-----------------------


@ -466,6 +466,8 @@ DRM MM Range Allocator Function References
.. kernel-doc:: drivers/gpu/drm/drm_mm.c
:export:
.. _drm_gpuvm:
DRM GPUVM
=========
@ -481,6 +483,8 @@ Split and Merge
.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Split and Merge
.. _drm_gpuvm_locking:
Locking
-------


@ -0,0 +1,582 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
===============
VM_BIND locking
===============
This document attempts to describe what's needed to get VM_BIND locking right,
including the userptr mmu_notifier locking. It also discusses some
optimizations that get rid of the loop over all userptr mappings and
external / shared object mappings that is needed in the simplest
implementation. In addition, there is a section describing the VM_BIND locking
required for implementing recoverable pagefaults.
The DRM GPUVM set of helpers
============================
There is a set of helpers for drivers implementing VM_BIND, and this
set of helpers implements much, but not all, of the locking described
in this document. In particular, it is currently lacking a userptr
implementation. This document does not intend to describe the DRM GPUVM
implementation in detail, but it is covered in :ref:`its own
documentation <drm_gpuvm>`. It is highly recommended for any driver
implementing VM_BIND to use the DRM GPUVM helpers and to extend them if
common functionality is missing.
Nomenclature
============

* ``gpu_vm``: Abstraction of a virtual GPU address space with
  meta-data. Typically one per client (DRM file-private), or one per
  execution context.

* ``gpu_vma``: Abstraction of a GPU address range within a gpu_vm with
  associated meta-data. The backing storage of a gpu_vma can either be
  a GEM object or anonymous / page-cache pages that are also mapped into
  the CPU address space of the process.

* ``gpu_vm_bo``: Abstracts the association of a GEM object and
  a VM. The GEM object maintains a list of gpu_vm_bos, where each gpu_vm_bo
  maintains a list of gpu_vmas.

* ``userptr gpu_vma or just userptr``: A gpu_vma whose backing store
  is anonymous or page-cache pages as described above.

* ``revalidating``: Revalidating a gpu_vma means making the latest version
  of the backing store resident and making sure the gpu_vma's
  page-table entries point to that backing store.

* ``dma_fence``: A struct dma_fence that is similar to a struct completion
  and which tracks GPU activity. When the GPU activity is finished,
  the dma_fence signals. Please refer to the ``DMA Fences`` section of
  the :doc:`dma-buf doc </driver-api/dma-buf>`.

* ``dma_resv``: A struct dma_resv (a.k.a. reservation object) that is used
  to track GPU activity in the form of multiple dma_fences on a
  gpu_vm or a GEM object. The dma_resv contains an array / list
  of dma_fences and a lock that needs to be held when adding
  additional dma_fences to the dma_resv. The lock is of a type that
  allows deadlock-safe locking of multiple dma_resvs in arbitrary
  order. Please refer to the ``Reservation Objects`` section of the
  :doc:`dma-buf doc </driver-api/dma-buf>`.

* ``exec function``: An exec function is a function that revalidates all
  affected gpu_vmas, submits a GPU command batch and registers the
  dma_fence representing the GPU command's activity with all affected
  dma_resvs. For completeness, although not covered by this document,
  it's worth mentioning that an exec function may also be the
  revalidation worker that is used by some drivers in compute /
  long-running mode.

* ``local object``: A GEM object which is only mapped within a
  single VM. Local GEM objects share the gpu_vm's dma_resv.

* ``external object``: a.k.a. shared object: A GEM object which may be shared
  by multiple gpu_vms and whose backing storage may be shared with
  other drivers.
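
To make the ownership relations concrete, the following is a minimal,
hypothetical C sketch of the nomenclature above; the struct and field
names are illustrative only and not the DRM GPUVM API:

.. code-block:: C

  struct gpu_vm {
          struct rw_semaphore lock;      // Protects the gpu_vma tree.
          struct dma_resv *resv;         // Shared by all local GEM objects.
          struct list_head evict_list;   // gpu_vm_bos needing revalidation.
          struct list_head extobj_list;  // gpu_vm_bos of external objects.
  };

  struct gpu_vm_bo {
          struct gpu_vm *vm;             // Refcounted pointer to the gpu_vm.
          struct gem_object *obj;        // Refcounted pointer to the GEM object.
          struct list_head vma_list;     // gpu_vmas of this vm / object pair.
          struct list_head obj_entry;    // Entry in obj's list of gpu_vm_bos.
  };

  struct gpu_vma {
          struct gpu_vm_bo *vm_bo;       // Refcounted; NULL for userptr gpu_vmas.
          u64 va_start, va_range;        // The mapped GPU address range.
  };
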
Locks and locking order
=======================
One of the benefits of VM_BIND is that local GEM objects share the gpu_vm's
dma_resv object and hence the dma_resv lock. So, even with a huge
number of local GEM objects, only one lock is needed to make the exec
sequence atomic.
The following locks and locking orders are used:

* The ``gpu_vm->lock`` (optionally an rwsem). Protects the gpu_vm's
  data structure keeping track of gpu_vmas. It can also protect the
  gpu_vm's list of userptr gpu_vmas. With a CPU mm analogy this would
  correspond to the mmap_lock. An rwsem allows several readers to walk
  the VM tree concurrently, but the benefit of that concurrency most
  likely varies from driver to driver.

* The ``userptr_seqlock``. This lock is taken in read mode for each
  userptr gpu_vma on the gpu_vm's userptr list, and in write mode during mmu
  notifier invalidation. This is not a real seqlock but described in
  ``mm/mmu_notifier.c`` as a "Collision-retry read-side/write-side
  'lock' a lot like a seqcount. However this allows multiple
  write-sides to hold it at once...". The read side critical section
  is enclosed by ``mmu_interval_read_begin() /
  mmu_interval_read_retry()`` with ``mmu_interval_read_begin()``
  sleeping if the write side is held.
  The write side is held by the core mm while calling mmu interval
  invalidation notifiers.

* The ``gpu_vm->resv`` lock. Protects the gpu_vm's list of gpu_vmas needing
  rebinding, as well as the residency state of all the gpu_vm's local
  GEM objects.
  Furthermore, it typically protects the gpu_vm's lists of evicted and
  external GEM objects.

* The ``gpu_vm->userptr_notifier_lock``. This is an rwsem that is
  taken in read mode during exec and in write mode during an mmu notifier
  invalidation. The userptr notifier lock is per gpu_vm.

* The ``gem_object->gpuva_lock``. This lock protects the GEM object's
  list of gpu_vm_bos. This is usually the same lock as the GEM
  object's dma_resv, but some drivers protect this list differently;
  see below.

* The ``gpu_vm list spinlocks``. Some implementations need these to be
  able to update the gpu_vm evicted and external object
  lists. For those implementations, the spinlocks are grabbed when the
  lists are manipulated. However, to avoid locking order violations
  with the dma_resv locks, a special scheme is needed when iterating
  over the lists.
.. _gpu_vma lifetime:
Protection and lifetime of gpu_vm_bos and gpu_vmas
==================================================
The GEM object's list of gpu_vm_bos and the gpu_vm_bo's list of gpu_vmas
are protected by the ``gem_object->gpuva_lock``, which is typically the
same as the GEM object's dma_resv, but if the driver
needs to access these lists from within a dma_fence signalling
critical section, it can instead choose to protect them with a
separate lock, which can be locked from within the dma_fence signalling
critical section. Such drivers then need to pay additional attention
to which locks need to be taken from within the loop when iterating
over the gpu_vm_bo and gpu_vma lists to avoid locking-order violations.
The DRM GPUVM set of helpers provides lockdep asserts that this lock is
held in relevant situations, and also provides a means of making itself
aware of which lock is actually used: :c:func:`drm_gem_gpuva_set_lock`.
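
As a sketch, a driver that protects these lists with a lock other than
the dma_resv might inform the helpers as follows; the driver structure
and init function are hypothetical:

.. code-block:: C

  #include <linux/mutex.h>
  #include <drm/drm_gem.h>

  struct my_gem_object {
          struct drm_gem_object base;
          struct mutex gpuva_list_lock;  // Protects base's list of gpu_vm_bos.
  };

  static void my_gem_object_init(struct my_gem_object *bo)
  {
          mutex_init(&bo->gpuva_list_lock);
          // Let the DRM GPUVM lockdep asserts know which lock we use.
          drm_gem_gpuva_set_lock(&bo->base, &bo->gpuva_list_lock);
  }
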
Each gpu_vm_bo holds a reference counted pointer to the underlying GEM
object, and each gpu_vma holds a reference counted pointer to the
gpu_vm_bo. When iterating over the GEM object's list of gpu_vm_bos and
over the gpu_vm_bo's list of gpu_vmas, the ``gem_object->gpuva_lock`` must
not be dropped; otherwise, gpu_vmas attached to a gpu_vm_bo may
disappear without notice, since those are not reference-counted. A
driver may implement its own scheme to allow this at the expense of
additional complexity, but this is outside the scope of this document.
In the DRM GPUVM implementation, each gpu_vm_bo and each gpu_vma
holds a reference count on the gpu_vm itself. Due to this, and to avoid circular
reference counting, cleanup of the gpu_vm's gpu_vmas must not be done from the
gpu_vm's destructor. Drivers typically implement a gpu_vm close
function for this cleanup. The gpu_vm close function will abort gpu
execution using this VM, unmap all gpu_vmas and release page-table memory.
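
In the simplified pseudocode style of the examples below, and with
hypothetical helper names, such a close function could look like:

.. code-block:: C

  void gpu_vm_close(struct gpu_vm *gpu_vm)
  {
          down_write(&gpu_vm->lock);

          // Stop and wait for GPU execution using this VM.
          abort_gpu_execution(gpu_vm);

          dma_resv_lock(gpu_vm->resv);
          // Unlinking also takes the gem_object->gpuva_lock, see below.
          for_each_gpu_vma_of_gpu_vm(gpu_vm, &gpu_vma)
                  unmap_and_unlink_gpu_vma(&gpu_vma);
          dma_resv_unlock(gpu_vm->resv);

          release_page_table_memory(gpu_vm);
          up_write(&gpu_vm->lock);
  }
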
Revalidation and eviction of local objects
==========================================
Note that in all the code examples given below we use simplified
pseudo-code. In particular, the dma_resv deadlock avoidance algorithm
as well as reserving memory for dma_resv fences is left out.
Revalidation
____________
With VM_BIND, all local objects need to be resident when the gpu is
executing using the gpu_vm, and the objects need to have valid
gpu_vmas set up pointing to them. Typically, each gpu command buffer
submission is therefore preceded with a re-validation section:
.. code-block:: C

  dma_resv_lock(gpu_vm->resv);

  // Validation section starts here.
  for_each_gpu_vm_bo_on_evict_list(&gpu_vm->evict_list, &gpu_vm_bo) {
          validate_gem_bo(&gpu_vm_bo->gem_bo);

          // The following list iteration needs the Gem object's
          // dma_resv to be held (it protects the gpu_vm_bo's list of
          // gpu_vmas, but since local gem objects share the gpu_vm's
          // dma_resv, it is already held at this point.)
          for_each_gpu_vma_of_gpu_vm_bo(&gpu_vm_bo, &gpu_vma)
                  move_gpu_vma_to_rebind_list(&gpu_vma, &gpu_vm->rebind_list);
  }

  for_each_gpu_vma_on_rebind_list(&gpu_vm->rebind_list, &gpu_vma) {
          rebind_gpu_vma(&gpu_vma);
          remove_gpu_vma_from_rebind_list(&gpu_vma);
  }
  // Validation section ends here, and job submission starts.

  add_dependencies(&gpu_job, &gpu_vm->resv);
  job_dma_fence = gpu_submit(&gpu_job);

  add_dma_fence(job_dma_fence, &gpu_vm->resv);
  dma_resv_unlock(gpu_vm->resv);

The reason for having a separate gpu_vm rebind list is that there
might be userptr gpu_vmas that are not mapping a buffer object that
also need rebinding.
Eviction
________
Eviction of one of these local objects will then look similar to the
following:
.. code-block:: C

  obj = get_object_from_lru();

  dma_resv_lock(obj->resv);
  for_each_gpu_vm_bo_of_obj(obj, &gpu_vm_bo)
          add_gpu_vm_bo_to_evict_list(&gpu_vm_bo, &gpu_vm->evict_list);

  add_dependencies(&eviction_job, &obj->resv);
  job_dma_fence = gpu_submit(&eviction_job);
  add_dma_fence(&obj->resv, job_dma_fence);

  dma_resv_unlock(&obj->resv);
  put_object(obj);

Note that since the object is local to the gpu_vm, it will share the gpu_vm's
dma_resv lock such that ``obj->resv == gpu_vm->resv``.
The gpu_vm_bos marked for eviction are put on the gpu_vm's evict list,
which is protected by ``gpu_vm->resv``. During eviction all local
objects have their dma_resv locked and, due to the above equality, the
gpu_vm's dma_resv protecting the gpu_vm's evict list is locked as well.
With VM_BIND, gpu_vmas don't need to be unbound before eviction,
since the driver must ensure that the eviction blit or copy will wait
for GPU idle or depend on all previous GPU activity. Furthermore, any
subsequent attempt by the GPU to access freed memory through the
gpu_vma will be preceded by a new exec function, with a revalidation
section which will make sure all gpu_vmas are rebound. The eviction
code holding the object's dma_resv while revalidating will ensure a
new exec function may not race with the eviction.
A driver can be implemented in such a way that, on each exec function,
only a subset of vmas are selected for rebind. In this case, all vmas that are
*not* selected for rebind must be unbound before the exec
function workload is submitted.
Locking with external buffer objects
====================================
Since external buffer objects may be shared by multiple gpu_vms, they
can't share their reservation object with a single gpu_vm. Instead
they need to have a reservation object of their own. The external
objects bound to a gpu_vm using one or many gpu_vmas are therefore put on a
per-gpu_vm list which is protected by the gpu_vm's dma_resv lock or
one of the :ref:`gpu_vm list spinlocks <Spinlock iteration>`. Once
the gpu_vm's reservation object is locked, it is safe to traverse the
external object list and lock the dma_resvs of all external
objects. However, if instead a list spinlock is used, a more elaborate
iteration scheme needs to be used.
At eviction time, the gpu_vm_bos of *all* the gpu_vms an external
object is bound to need to be put on their gpu_vm's evict list.
However, when evicting an external object, the dma_resvs of the
gpu_vms the object is bound to are typically not held. Only
the object's private dma_resv can be guaranteed to be held. If there
is a ww_acquire context at hand at eviction time we could grab those
dma_resvs but that could cause expensive ww_mutex rollbacks. A simple
option is to just mark the gpu_vm_bos of the evicted gem object with
an ``evicted`` bool that is inspected before the next time the
corresponding gpu_vm evicted list needs to be traversed, for example when
traversing the list of external objects and locking them. At that time,
both the gpu_vm's dma_resv and the object's dma_resv are held, and a
gpu_vm_bo marked evicted can then be added to the gpu_vm's list of
evicted gpu_vm_bos. The ``evicted`` bool is formally protected by the
object's dma_resv.
The exec function then becomes:

.. code-block:: C

  dma_resv_lock(gpu_vm->resv);

  // External object list is protected by the gpu_vm->resv lock.
  for_each_gpu_vm_bo_on_extobj_list(gpu_vm, &gpu_vm_bo) {
          dma_resv_lock(gpu_vm_bo.gem_obj->resv);
          if (gpu_vm_bo_marked_evicted(&gpu_vm_bo))
                  add_gpu_vm_bo_to_evict_list(&gpu_vm_bo, &gpu_vm->evict_list);
  }

  for_each_gpu_vm_bo_on_evict_list(&gpu_vm->evict_list, &gpu_vm_bo) {
          validate_gem_bo(&gpu_vm_bo->gem_bo);

          for_each_gpu_vma_of_gpu_vm_bo(&gpu_vm_bo, &gpu_vma)
                  move_gpu_vma_to_rebind_list(&gpu_vma, &gpu_vm->rebind_list);
  }

  for_each_gpu_vma_on_rebind_list(&gpu_vm->rebind_list, &gpu_vma) {
          rebind_gpu_vma(&gpu_vma);
          remove_gpu_vma_from_rebind_list(&gpu_vma);
  }

  add_dependencies(&gpu_job, &gpu_vm->resv);
  job_dma_fence = gpu_submit(&gpu_job);

  add_dma_fence(job_dma_fence, &gpu_vm->resv);
  for_each_external_obj(gpu_vm, &obj)
          add_dma_fence(job_dma_fence, &obj->resv);
  dma_resv_unlock_all_resv_locks();

And the corresponding shared-object aware eviction would look like:
.. code-block:: C

  obj = get_object_from_lru();

  dma_resv_lock(obj->resv);
  for_each_gpu_vm_bo_of_obj(obj, &gpu_vm_bo)
          if (object_is_vm_local(obj))
                  add_gpu_vm_bo_to_evict_list(&gpu_vm_bo, &gpu_vm->evict_list);
          else
                  mark_gpu_vm_bo_evicted(&gpu_vm_bo);

  add_dependencies(&eviction_job, &obj->resv);
  job_dma_fence = gpu_submit(&eviction_job);
  add_dma_fence(&obj->resv, job_dma_fence);

  dma_resv_unlock(&obj->resv);
  put_object(obj);

.. _Spinlock iteration:
Accessing the gpu_vm's lists without the dma_resv lock held
===========================================================
Some drivers will hold the gpu_vm's dma_resv lock when accessing the
gpu_vm's evict list and external objects lists. However, there are
drivers that need to access these lists without the dma_resv lock
held, for example due to asynchronous state updates from within the
dma_fence signalling critical path. In such cases, a spinlock can be
used to protect manipulation of the lists. However, since higher level
sleeping locks need to be taken for each list item while iterating
over the lists, the items already iterated over need to be
temporarily moved to a private list and the spinlock released
while processing each item:
.. code-block:: C

  struct list_head still_in_list;

  INIT_LIST_HEAD(&still_in_list);

  spin_lock(&gpu_vm->list_lock);
  do {
          struct list_head *entry = list_first_entry_or_null(&gpu_vm->list, head);

          if (!entry)
                  break;

          list_move_tail(&entry->head, &still_in_list);
          list_entry_get_unless_zero(entry);
          spin_unlock(&gpu_vm->list_lock);

          process(entry);

          spin_lock(&gpu_vm->list_lock);
          list_entry_put(entry);
  } while (true);

  list_splice_tail(&still_in_list, &gpu_vm->list);
  spin_unlock(&gpu_vm->list_lock);

Due to the additional locking and atomic operations, drivers that *can*
avoid accessing the gpu_vm's lists outside of the dma_resv lock
might want to avoid this iteration scheme as well, particularly if the
driver anticipates a large number of list items. For lists where the
anticipated number of list items is small, where list iteration doesn't
happen very often or where there is a significant additional cost
associated with each iteration, the atomic operation overhead
associated with this type of iteration is most likely negligible. Note that
if this scheme is used, it is necessary to make sure this list
iteration is protected by an outer level lock or semaphore, since list
items are temporarily pulled off the list while iterating. It is
also worth mentioning that the local list ``still_in_list`` should
also be considered protected by the ``gpu_vm->list_lock``; it is
thus possible for items to be removed from the local list
concurrently with list iteration.
Please refer to the :ref:`DRM GPUVM locking section
<drm_gpuvm_locking>` and its internal
:c:func:`get_next_vm_bo_from_list` function.
userptr gpu_vmas
================
A userptr gpu_vma is a gpu_vma that, instead of mapping a buffer object to a
GPU virtual address range, directly maps a CPU mm range of anonymous-
or file page-cache pages.
A very simple approach would be to just pin the pages using
pin_user_pages() at bind time and unpin them at unbind time, but this
creates a Denial-Of-Service vector, since a single user-space process
would be able to pin down all of system memory, which is not
desirable. (For special use-cases, and assuming proper accounting, pinning
might still be a desirable feature, though.) What we need to do in the
general case is to obtain a reference to the desired pages, make sure
we are notified using an MMU notifier just before the CPU mm unmaps the
pages, dirty them if they are not mapped read-only to the GPU, and
then drop the reference.
When we are notified by the MMU notifier that the CPU mm is about to drop the
pages, we need to stop GPU access to the pages by waiting for VM idle
in the MMU notifier and make sure that before the next time the GPU
tries to access whatever is now present in the CPU mm range, we unmap
the old pages from the GPU page tables and repeat the process of
obtaining new page references. (See the :ref:`notifier example
<Invalidation example>` below.) Note that when the core mm decides to
launder pages, we get such an unmap MMU notification and can mark the
pages dirty again before the next GPU access. We also get similar MMU
notifications for NUMA accounting, which the GPU driver doesn't really
need to care about, but so far it has proven difficult to exclude
certain notifications.
Using an MMU notifier for device DMA (and other methods) is described in
:ref:`the pin_user_pages() documentation <mmu-notifier-registration-case>`.
Now, the method of obtaining struct page references using
get_user_pages() unfortunately can't be used under a dma_resv lock
since that would violate the locking order of the dma_resv lock vs the
mmap_lock that is grabbed when resolving a CPU pagefault. This means
the gpu_vm's list of userptr gpu_vmas needs to be protected by an
outer lock, which in our example below is the ``gpu_vm->lock``.
The MMU interval seqlock for a userptr gpu_vma is used in the following
way:
.. code-block:: C

  // Exclusive locking mode here is strictly needed only if there are
  // invalidated userptr gpu_vmas present, to avoid concurrent userptr
  // revalidations of the same userptr gpu_vma.
  down_write(&gpu_vm->lock);

  retry:

  // Note: mmu_interval_read_begin() blocks until there is no
  // invalidation notifier running anymore.
  seq = mmu_interval_read_begin(&gpu_vma->userptr_interval);
  if (seq != gpu_vma->saved_seq) {
          obtain_new_page_pointers(&gpu_vma);
          dma_resv_lock(&gpu_vm->resv);
          add_gpu_vma_to_revalidate_list(&gpu_vma, &gpu_vm);
          dma_resv_unlock(&gpu_vm->resv);
          gpu_vma->saved_seq = seq;
  }

  // The usual revalidation goes here.

  // Final userptr sequence validation may not happen before the
  // submission dma_fence is added to the gpu_vm's resv, from the point
  // of view of the MMU invalidation notifier. Hence the
  // userptr_notifier_lock that will make them appear atomic.

  add_dependencies(&gpu_job, &gpu_vm->resv);
  down_read(&gpu_vm->userptr_notifier_lock);
  if (mmu_interval_read_retry(&gpu_vma->userptr_interval, gpu_vma->saved_seq)) {
          up_read(&gpu_vm->userptr_notifier_lock);
          goto retry;
  }

  job_dma_fence = gpu_submit(&gpu_job);

  add_dma_fence(job_dma_fence, &gpu_vm->resv);

  for_each_external_obj(gpu_vm, &obj)
          add_dma_fence(job_dma_fence, &obj->resv);

  dma_resv_unlock_all_resv_locks();
  up_read(&gpu_vm->userptr_notifier_lock);
  up_write(&gpu_vm->lock);

The code between ``mmu_interval_read_begin()`` and the
``mmu_interval_read_retry()`` marks the read side critical section of
what we call the ``userptr_seqlock``. In reality, the gpu_vm's userptr
gpu_vma list is looped through, and the check is done for *all* of its
userptr gpu_vmas, although we only show a single one here.
The userptr gpu_vma MMU invalidation notifier might be called from
reclaim context and, again, to avoid locking order violations, we can't
take any dma_resv lock nor the gpu_vm->lock from within it.
.. _Invalidation example:
.. code-block:: C

  bool gpu_vma_userptr_invalidate(userptr_interval, cur_seq)
  {
          // Make sure the exec function either sees the new sequence
          // and backs off or we wait for the dma-fence:

          down_write(&gpu_vm->userptr_notifier_lock);
          mmu_interval_set_seq(userptr_interval, cur_seq);
          up_write(&gpu_vm->userptr_notifier_lock);

          // At this point, the exec function can't succeed in
          // submitting a new job, because cur_seq is an invalid
          // sequence number and will always cause a retry. When all
          // invalidation callbacks are done, the mmu notifier core will
          // flip the sequence number to a valid one. However, we need to
          // stop gpu access to the old pages here.

          dma_resv_wait_timeout(&gpu_vm->resv, DMA_RESV_USAGE_BOOKKEEP,
                                false, MAX_SCHEDULE_TIMEOUT);
          return true;
  }

When this invalidation notifier returns, the GPU can no longer be
accessing the old pages of the userptr gpu_vma and needs to redo the
page-binding before a new GPU submission can succeed.
Efficient userptr gpu_vma exec_function iteration
_________________________________________________
If the gpu_vm's list of userptr gpu_vmas becomes large, it's
inefficient to iterate through the complete lists of userptrs on each
exec function to check whether each userptr gpu_vma's saved
sequence number is stale. A solution to this is to put all
*invalidated* userptr gpu_vmas on a separate gpu_vm list and
only check the gpu_vmas present on this list on each exec
function. This list will then lend itself very well to the spinlock
locking scheme that is
:ref:`described in the spinlock iteration section <Spinlock iteration>`, since
in the mmu notifier, where we add the invalidated gpu_vmas to the
list, it's not possible to take any outer locks like the
``gpu_vm->lock`` or the ``gpu_vm->resv`` lock. Note that the
``gpu_vm->lock`` still needs to be taken while iterating to ensure the list is
complete, as also mentioned in that section.
If using an invalidated userptr list like this, the retry check in the
exec function trivially becomes a check for invalidated list empty.
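
In pseudocode mirroring the exec example above, and assuming a
hypothetical ``gpu_vm->invalidated_userptr_list``, the final check
before submission then reduces to:

.. code-block:: C

  down_read(&gpu_vm->userptr_notifier_lock);
  if (!list_empty(&gpu_vm->invalidated_userptr_list)) {
          up_read(&gpu_vm->userptr_notifier_lock);
          goto retry;
  }
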
Locking at bind and unbind time
===============================
At bind time, assuming a GEM object backed gpu_vma, each
gpu_vma needs to be associated with a gpu_vm_bo and that
gpu_vm_bo in turn needs to be added to the GEM object's
gpu_vm_bo list, and possibly to the gpu_vm's external object
list. This is referred to as *linking* the gpu_vma, and typically
requires that the ``gpu_vm->lock`` and the ``gem_object->gpuva_lock``
are held. When unlinking a gpu_vma the same locks should be held,
which ensures that when iterating over ``gpu_vmas``, either under
the ``gpu_vm->resv`` or the GEM object's dma_resv, the gpu_vmas
stay alive as long as the lock under which we iterate is not released. For
userptr gpu_vmas it's similarly required that during vma destroy, the
outer ``gpu_vm->lock`` is held, since otherwise when iterating over
the invalidated userptr list as described in the previous section,
there is nothing keeping those userptr gpu_vmas alive.
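
A bind-time linking sketch in the same pseudocode style, with
hypothetical helper names, could look like:

.. code-block:: C

  // Called at bind time with the gpu_vm->lock held in write mode.
  void link_gpu_vma(struct gpu_vma *gpu_vma, struct gpu_vm_bo *gpu_vm_bo)
  {
          struct gem_object *obj = gpu_vm_bo->obj;

          gem_object_gpuva_lock(obj);  // Typically the object's dma_resv.
          add_gpu_vma_to_gpu_vm_bo(&gpu_vma, &gpu_vm_bo);
          if (gpu_vm_bo_is_new(&gpu_vm_bo))
                  add_gpu_vm_bo_to_obj_list(&gpu_vm_bo, obj);
          gem_object_gpuva_unlock(obj);

          // External objects are also tracked per gpu_vm; this list is
          // protected by the gpu_vm->resv or a gpu_vm list spinlock.
          if (object_is_external(obj))
                  add_gpu_vm_bo_to_extobj_list(&gpu_vm_bo, gpu_vm_bo->vm);
  }
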
Locking for recoverable page-fault page-table updates
=====================================================
There are two important things we need to ensure with locking for
recoverable page-faults:

* At the time we return pages back to the system / allocator for
  reuse, there should be no remaining GPU mappings and any GPU TLB
  must have been flushed.

* The unmapping and mapping of a gpu_vma must not race.
Since the unmapping (or zapping) of GPU ptes typically takes place
where it is hard or even impossible to take any outer level locks, we
must either introduce a new lock that is held both at mapping and
unmapping time, or look at the locks we do hold at unmapping time and
make sure that they are also held at mapping time. For userptr
gpu_vmas, the ``userptr_seqlock`` is held in write mode in the mmu
invalidation notifier where zapping happens. Hence, if the
``userptr_seqlock`` as well as the ``gpu_vm->userptr_notifier_lock``
are held in read mode during mapping, it will not race with the
zapping. For GEM object backed gpu_vmas, zapping will take place under
the GEM object's dma_resv, and ensuring that the dma_resv is also held
when populating the page-tables for any gpu_vma pointing to the GEM
object will similarly ensure we are race-free.
If any part of the mapping is performed asynchronously
under a dma-fence with these locks released, the zapping will need to
wait for that dma-fence to signal under the relevant lock before
starting to modify the page-table.
Since modifying the
page-table structure in a way that frees up page-table memory
might also require outer level locks, the zapping of GPU ptes
typically focuses only on zeroing page-table or page-directory entries
and flushing TLB, whereas freeing of page-table memory is deferred to
unbind or rebind time.
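
Putting the above together for a userptr gpu_vma, a recoverable
page-fault handler sketch, again with hypothetical helpers, might be:

.. code-block:: C

  int handle_gpu_pagefault(struct gpu_vm *gpu_vm, struct gpu_vma *gpu_vma)
  {
          unsigned long seq;
          int err;

  retry:
          seq = mmu_interval_read_begin(&gpu_vma->userptr_interval);
          obtain_new_page_pointers(&gpu_vma);

          down_read(&gpu_vm->userptr_notifier_lock);
          if (mmu_interval_read_retry(&gpu_vma->userptr_interval, seq)) {
                  up_read(&gpu_vm->userptr_notifier_lock);
                  goto retry;
          }

          // Mapping can't race with zapping here: the invalidation
          // notifier holds the userptr_notifier_lock in write mode and
          // the userptr_seqlock write side while zapping.
          err = map_gpu_vma_page_tables(&gpu_vma);
          up_read(&gpu_vm->userptr_notifier_lock);

          return err;
  }
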


@ -3,7 +3,7 @@ drm/imagination PowerVR Graphics Driver
=======================================
.. kernel-doc:: drivers/gpu/drm/imagination/pvr_drv.c
:doc: PowerVR Graphics Driver
:doc: PowerVR (Series 6 and later) and IMG Graphics Driver
Contents
========


@ -45,9 +45,6 @@ DEV_QUERY
drm_pvr_heap
drm_pvr_dev_query_heap_info
.. kernel-doc:: include/uapi/drm/pvr_drm.h
:doc: Flags for DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
.. kernel-doc:: include/uapi/drm/pvr_drm.h
:identifiers: drm_pvr_static_data_area_usage
drm_pvr_static_data_area
@ -121,7 +118,7 @@ CREATE_FREE_LIST and DESTROY_FREE_LIST
:identifiers: drm_pvr_ioctl_destroy_free_list_args
CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET
--------------------------------------
--------------------------------------------
.. kernel-doc:: include/uapi/drm/pvr_drm.h
:doc: PowerVR IOCTL CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET interfaces


@ -7,3 +7,4 @@ Misc DRM driver uAPI- and feature implementation guidelines
.. toctree::
drm-vm-bind-async
drm-vm-bind-locking


@ -123,10 +123,15 @@ Documentation should include:
* O(1) complexity under VM_BIND.
The document is now included in the drm documentation :doc:`here </gpu/drm-vm-bind-async>`.
Some parts of userptr like mmu_notifiers should become GPUVA or DRM helpers when
the second driver supporting VM_BIND+userptr appears. Details to be defined when
the time comes.
The DRM GPUVM helpers do not yet include the userptr parts, but discussions
about implementing them are ongoing.
Long running compute: minimal data structure/scaffolding
--------------------------------------------------------
The generic scheduler code needs to include the handling of endless compute


@ -337,8 +337,8 @@ connector register/unregister fixes
Level: Intermediate
Remove load/unload callbacks from all non-DRIVER_LEGACY drivers
---------------------------------------------------------------
Remove load/unload callbacks
----------------------------
The load/unload callbacks in struct &drm_driver are very much midlayers, plus
for historical reasons they get the ordering wrong (and we can't fix that)
@ -347,8 +347,7 @@ between setting up the &drm_driver structure and calling drm_dev_register().
- Rework drivers to no longer use the load/unload callbacks, directly coding the
load/unload sequence into the driver's probe function.
- Once all non-DRIVER_LEGACY drivers are converted, disallow the load/unload
callbacks for all modern drivers.
- Once all drivers are converted, remove the load/unload callbacks.
Contact: Daniel Vetter
@ -782,6 +781,29 @@ Contact: Hans de Goede
Level: Advanced
Buffer age or other damage accumulation algorithm for buffer damage
===================================================================
Drivers that do per-buffer uploads need buffer damage handling (rather than
frame damage like drivers that do per-plane or per-CRTC uploads), but there is
no support to get the buffer age or any other damage accumulation algorithm.
For this reason, the damage helpers just fall back to a full plane update if the
framebuffer attached to a plane has changed since the last page-flip. Drivers
set &drm_plane_state.ignore_damage_clips to true as an indication to the
drm_atomic_helper_damage_iter_init() and drm_atomic_helper_damage_iter_next()
helpers that the damage clips should be ignored.
This should be improved to get damage tracking properly working on drivers that
do per-buffer uploads.
More information about damage tracking and references to learning materials can
be found in :ref:`damage_tracking_properties`.
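
For orientation, the following is a hedged sketch of how an atomic
driver consumes the damage iterator today; ``my_upload_rect`` and the
surrounding driver callback are hypothetical:

.. code-block:: C

  #include <drm/drm_damage_helper.h>

  static void my_plane_atomic_update(struct drm_plane *plane,
                                     struct drm_plane_state *old_state)
  {
          struct drm_plane_state *new_state = plane->state;
          struct drm_atomic_helper_damage_iter iter;
          struct drm_rect clip;

          // The iterator falls back to a single full-plane clip when the
          // framebuffer changed, unless ignore_damage_clips is set.
          drm_atomic_helper_damage_iter_init(&iter, old_state, new_state);
          drm_atomic_for_each_plane_damage(&iter, &clip)
                  my_upload_rect(new_state->fb, &clip);
  }
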
Contact: Javier Martinez Canillas <javierm@redhat.com>
Level: Advanced
Outside DRM
===========


@ -10397,6 +10397,7 @@ M: Frank Binns <frank.binns@imgtec.com>
M: Donald Robson <donald.robson@imgtec.com>
M: Matt Coster <matt.coster@imgtec.com>
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/gpu/img,powervr.yaml
F: Documentation/gpu/imagination/
F: drivers/gpu/drm/imagination/


@ -11,6 +11,7 @@
#include <linux/idr.h>
#include <drm/drm_accel.h>
#include <drm/drm_auth.h>
#include <drm/drm_debugfs.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>


@ -469,7 +469,7 @@ static void mhi_status_cb(struct mhi_controller *mhi_cntrl, enum mhi_callback re
pci_err(qdev->pdev, "Fatal error received from device. Attempting to recover\n");
/* this event occurs in non-atomic context */
if (reason == MHI_CB_SYS_ERROR)
qaic_dev_reset_clean_local_state(qdev, true);
qaic_dev_reset_clean_local_state(qdev);
}
static int mhi_reset_and_async_power_up(struct mhi_controller *mhi_cntrl)


@ -31,6 +31,15 @@
#define to_drm(qddev) (&(qddev)->drm)
#define to_accel_kdev(qddev) (to_drm(qddev)->accel->kdev) /* Return Linux device of accel node */
enum __packed dev_states {
/* Device is offline or will be very soon */
QAIC_OFFLINE,
/* Device is booting, not clear if it's in a usable state */
QAIC_BOOT,
/* Device is fully operational */
QAIC_ONLINE,
};
extern bool datapath_polling;
struct qaic_user {
@ -121,8 +130,8 @@ struct qaic_device {
struct workqueue_struct *cntl_wq;
/* Synchronizes all the users of device during cleanup */
struct srcu_struct dev_lock;
/* true: Device under reset; false: Device not under reset */
bool in_reset;
/* Track the state of the device during resets */
enum dev_states dev_state;
/* true: single MSI is used to operate device */
bool single_msi;
/*
@ -274,7 +283,7 @@ void wakeup_dbc(struct qaic_device *qdev, u32 dbc_id);
void release_dbc(struct qaic_device *qdev, u32 dbc_id);
void wake_all_cntl(struct qaic_device *qdev);
void qaic_dev_reset_clean_local_state(struct qaic_device *qdev, bool exit_reset);
void qaic_dev_reset_clean_local_state(struct qaic_device *qdev);
struct drm_gem_object *qaic_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf);


@ -1022,7 +1022,8 @@ static void *msg_xfer(struct qaic_device *qdev, struct wrapper_list *wrappers, u
int xfer_count = 0;
int retry_count;
if (qdev->in_reset) {
/* Allow QAIC_BOOT state since we need to check control protocol version */
if (qdev->dev_state == QAIC_OFFLINE) {
mutex_unlock(&qdev->cntl_mutex);
return ERR_PTR(-ENODEV);
}
@ -1306,7 +1307,7 @@ int qaic_manage_ioctl(struct drm_device *dev, void *data, struct drm_file *file_
qdev = usr->qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id);
srcu_read_unlock(&usr->qddev_lock, usr_rcu_id);
return -ENODEV;


@ -690,7 +690,7 @@ int qaic_create_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *fi
qdev = usr->qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
ret = -ENODEV;
goto unlock_dev_srcu;
}
@ -749,7 +749,7 @@ int qaic_mmap_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file
qdev = usr->qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
ret = -ENODEV;
goto unlock_dev_srcu;
}
@ -970,7 +970,7 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_fi
qdev = usr->qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
ret = -ENODEV;
goto unlock_dev_srcu;
}
@ -1341,7 +1341,7 @@ static int __qaic_execute_bo_ioctl(struct drm_device *dev, void *data, struct dr
qdev = usr->qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
ret = -ENODEV;
goto unlock_dev_srcu;
}
@ -1497,7 +1497,7 @@ void irq_polling_work(struct work_struct *work)
rcu_id = srcu_read_lock(&dbc->ch_lock);
while (1) {
if (dbc->qdev->in_reset) {
if (dbc->qdev->dev_state != QAIC_ONLINE) {
srcu_read_unlock(&dbc->ch_lock, rcu_id);
return;
}
@ -1687,7 +1687,7 @@ int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file
qdev = usr->qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
ret = -ENODEV;
goto unlock_dev_srcu;
}
@ -1756,7 +1756,7 @@ int qaic_perf_stats_bo_ioctl(struct drm_device *dev, void *data, struct drm_file
qdev = usr->qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
ret = -ENODEV;
goto unlock_dev_srcu;
}
@ -1847,7 +1847,7 @@ int qaic_detach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_fi
qdev = usr->qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
ret = -ENODEV;
goto unlock_dev_srcu;
}


@ -8,6 +8,7 @@
#include <linux/idr.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/kobject.h>
#include <linux/kref.h>
#include <linux/mhi.h>
#include <linux/module.h>
@ -43,9 +44,6 @@ MODULE_PARM_DESC(datapath_polling, "Operate the datapath in polling mode");
static bool link_up;
static DEFINE_IDA(qaic_usrs);
static int qaic_create_drm_device(struct qaic_device *qdev, s32 partition_id);
static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id);
static void free_usr(struct kref *kref)
{
struct qaic_user *usr = container_of(kref, struct qaic_user, ref_count);
@ -64,7 +62,7 @@ static int qaic_open(struct drm_device *dev, struct drm_file *file)
int ret;
rcu_id = srcu_read_lock(&qdev->dev_lock);
if (qdev->in_reset) {
if (qdev->dev_state != QAIC_ONLINE) {
ret = -ENODEV;
goto dev_unlock;
}
@ -121,7 +119,7 @@ static void qaic_postclose(struct drm_device *dev, struct drm_file *file)
if (qddev) {
qdev = qddev->qdev;
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
if (!qdev->in_reset) {
if (qdev->dev_state == QAIC_ONLINE) {
qaic_release_usr(qdev, usr);
for (i = 0; i < qdev->num_dbc; ++i)
if (qdev->dbc[i].usr && qdev->dbc[i].usr->handle == usr->handle)
@ -183,13 +181,6 @@ static int qaic_create_drm_device(struct qaic_device *qdev, s32 partition_id)
qddev->partition_id = partition_id;
/*
* drm_dev_unregister() sets the driver data to NULL and
* drm_dev_register() does not update the driver data. During a SOC
* reset drm dev is unregistered and registered again leaving the
* driver data to NULL.
*/
dev_set_drvdata(to_accel_kdev(qddev), drm->accel);
ret = drm_dev_register(drm, 0);
if (ret)
pci_dbg(qdev->pdev, "drm_dev_register failed %d\n", ret);
@ -203,7 +194,6 @@ static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id)
struct drm_device *drm = to_drm(qddev);
struct qaic_user *usr;
drm_dev_get(drm);
drm_dev_unregister(drm);
qddev->partition_id = 0;
/*
@ -232,7 +222,6 @@ static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id)
mutex_lock(&qddev->users_mutex);
}
mutex_unlock(&qddev->users_mutex);
drm_dev_put(drm);
}
static int qaic_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id *id)
@ -254,8 +243,6 @@ static int qaic_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id
qdev = pci_get_drvdata(to_pci_dev(mhi_dev->mhi_cntrl->cntrl_dev));
qdev->in_reset = false;
dev_set_drvdata(&mhi_dev->dev, qdev);
qdev->cntl_ch = mhi_dev;
@ -265,6 +252,7 @@ static int qaic_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id
return ret;
}
qdev->dev_state = QAIC_BOOT;
ret = get_cntl_version(qdev, NULL, &major, &minor);
if (ret || major != CNTL_MAJOR || minor > CNTL_MINOR) {
pci_err(qdev->pdev, "%s: Control protocol version (%d.%d) not supported. Supported version is (%d.%d). Ret: %d\n",
@ -272,8 +260,8 @@ static int qaic_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id
ret = -EINVAL;
goto close_control;
}
ret = qaic_create_drm_device(qdev, QAIC_NO_PARTITION);
qdev->dev_state = QAIC_ONLINE;
kobject_uevent(&(to_accel_kdev(qdev->qddev))->kobj, KOBJ_ONLINE);
return ret;
@ -291,7 +279,8 @@ static void qaic_notify_reset(struct qaic_device *qdev)
{
int i;
qdev->in_reset = true;
kobject_uevent(&(to_accel_kdev(qdev->qddev))->kobj, KOBJ_OFFLINE);
qdev->dev_state = QAIC_OFFLINE;
/* wake up any waiters to avoid waiting for timeouts at sync */
wake_all_cntl(qdev);
for (i = 0; i < qdev->num_dbc; ++i)
@ -299,21 +288,15 @@ static void qaic_notify_reset(struct qaic_device *qdev)
synchronize_srcu(&qdev->dev_lock);
}
void qaic_dev_reset_clean_local_state(struct qaic_device *qdev, bool exit_reset)
void qaic_dev_reset_clean_local_state(struct qaic_device *qdev)
{
int i;
qaic_notify_reset(qdev);
/* remove drmdevs to prevent new users from coming in */
qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION);
/* start tearing things down */
for (i = 0; i < qdev->num_dbc; ++i)
release_dbc(qdev, i);
if (exit_reset)
qdev->in_reset = false;
}
static void cleanup_qdev(struct qaic_device *qdev)
@ -338,6 +321,7 @@ static struct qaic_device *create_qdev(struct pci_dev *pdev, const struct pci_de
if (!qdev)
return NULL;
qdev->dev_state = QAIC_OFFLINE;
if (id->device == PCI_DEV_AIC100) {
qdev->num_dbc = 16;
qdev->dbc = devm_kcalloc(&pdev->dev, qdev->num_dbc, sizeof(*qdev->dbc), GFP_KERNEL);
@ -499,15 +483,21 @@ static int qaic_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto cleanup_qdev;
}
ret = qaic_create_drm_device(qdev, QAIC_NO_PARTITION);
if (ret)
goto cleanup_qdev;
qdev->mhi_cntrl = qaic_mhi_register_controller(pdev, qdev->bar_0, mhi_irq,
qdev->single_msi);
if (IS_ERR(qdev->mhi_cntrl)) {
ret = PTR_ERR(qdev->mhi_cntrl);
goto cleanup_qdev;
goto cleanup_drm_dev;
}
return 0;
cleanup_drm_dev:
qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION);
cleanup_qdev:
cleanup_qdev(qdev);
return ret;
@ -520,7 +510,8 @@ static void qaic_pci_remove(struct pci_dev *pdev)
if (!qdev)
return;
qaic_dev_reset_clean_local_state(qdev, false);
qaic_dev_reset_clean_local_state(qdev);
qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION);
qaic_mhi_free_controller(qdev->mhi_cntrl, link_up);
cleanup_qdev(qdev);
}
@ -543,14 +534,13 @@ static void qaic_pci_reset_prepare(struct pci_dev *pdev)
qaic_notify_reset(qdev);
qaic_mhi_start_reset(qdev->mhi_cntrl);
qaic_dev_reset_clean_local_state(qdev, false);
qaic_dev_reset_clean_local_state(qdev);
}
static void qaic_pci_reset_done(struct pci_dev *pdev)
{
struct qaic_device *qdev = pci_get_drvdata(pdev);
qdev->in_reset = false;
qaic_mhi_reset_done(qdev->mhi_cntrl);
}


@ -112,10 +112,7 @@ config CFAG12864B
depends on X86
depends on FB
depends on KS0108
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT
select FB_SYS_FOPS
select FB_SYSMEM_HELPERS
default n
help
If you have a Crystalfontz 128x64 2-color LCD, cfag12864b Series,
@ -170,10 +167,7 @@ config IMG_ASCII_LCD
config HT16K33
tristate "Holtek Ht16K33 LED controller with keyscan"
depends on FB && I2C && INPUT
select FB_SYS_FOPS
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT
select FB_SYSMEM_HELPERS
select INPUT_MATRIXKMAP
select FB_BACKLIGHT
select NEW_LEDS


@ -51,16 +51,15 @@ static int cfag12864bfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
struct page *pages = virt_to_page(cfag12864b_buffer);
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
return vm_map_pages_zero(vma, &pages, 1);
}
static const struct fb_ops cfag12864bfb_ops = {
.owner = THIS_MODULE,
.fb_read = fb_sys_read,
.fb_write = fb_sys_write,
.fb_fillrect = sys_fillrect,
.fb_copyarea = sys_copyarea,
.fb_imageblit = sys_imageblit,
__FB_DEFAULT_SYSMEM_OPS_RDWR,
__FB_DEFAULT_SYSMEM_OPS_DRAW,
.fb_mmap = cfag12864bfb_mmap,
};
@ -72,6 +71,7 @@ static int cfag12864bfb_probe(struct platform_device *device)
if (!info)
goto none;
info->flags = FBINFO_VIRTFB;
info->screen_buffer = cfag12864b_buffer;
info->screen_size = CFAG12864B_SIZE;
info->fbops = &cfag12864bfb_ops;


@ -351,17 +351,16 @@ static int ht16k33_mmap(struct fb_info *info, struct vm_area_struct *vma)
struct ht16k33_priv *priv = info->par;
struct page *pages = virt_to_page(priv->fbdev.buffer);
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
return vm_map_pages_zero(vma, &pages, 1);
}
static const struct fb_ops ht16k33_fb_ops = {
.owner = THIS_MODULE,
.fb_read = fb_sys_read,
.fb_write = fb_sys_write,
__FB_DEFAULT_SYSMEM_OPS_RDWR,
.fb_blank = ht16k33_blank,
.fb_fillrect = sys_fillrect,
.fb_copyarea = sys_copyarea,
.fb_imageblit = sys_imageblit,
__FB_DEFAULT_SYSMEM_OPS_DRAW,
.fb_mmap = ht16k33_mmap,
};
@ -640,6 +639,7 @@ static int ht16k33_fbdev_probe(struct device *dev, struct ht16k33_priv *priv,
INIT_DELAYED_WORK(&priv->work, ht16k33_fb_update);
fbdev->info->fbops = &ht16k33_fb_ops;
fbdev->info->flags |= FBINFO_VIRTFB;
fbdev->info->screen_buffer = fbdev->buffer;
fbdev->info->screen_size = HT16K33_FB_SIZE;
fbdev->info->fix = ht16k33_fb_fix;


@ -1,12 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
agpgart-y := backend.o generic.o isoch.o
ifeq ($(CONFIG_DRM_LEGACY),y)
agpgart-$(CONFIG_COMPAT) += compat_ioctl.o
agpgart-y += frontend.o
endif
obj-$(CONFIG_AGP) += agpgart.o
obj-$(CONFIG_AGP_ALI) += ali-agp.o
obj-$(CONFIG_AGP_ATI) += ati-agp.o


@ -185,15 +185,6 @@ void agp_put_bridge(struct agp_bridge_data *bridge);
int agp_add_bridge(struct agp_bridge_data *bridge);
void agp_remove_bridge(struct agp_bridge_data *bridge);
/* Frontend routines. */
#if IS_ENABLED(CONFIG_DRM_LEGACY)
int agp_frontend_initialize(void);
void agp_frontend_cleanup(void);
#else
static inline int agp_frontend_initialize(void) { return 0; }
static inline void agp_frontend_cleanup(void) {}
#endif
/* Generic routines. */
void agp_generic_enable(struct agp_bridge_data *bridge, u32 mode);
int agp_generic_create_gatt_table(struct agp_bridge_data *bridge);


@ -293,13 +293,6 @@ int agp_add_bridge(struct agp_bridge_data *bridge)
}
if (list_empty(&agp_bridges)) {
error = agp_frontend_initialize();
if (error) {
dev_info(&bridge->dev->dev,
"agp_frontend_initialize() failed\n");
goto frontend_err;
}
dev_info(&bridge->dev->dev, "AGP aperture is %dM @ 0x%lx\n",
bridge->driver->fetch_size(), bridge->gart_bus_addr);
@ -308,8 +301,6 @@ int agp_add_bridge(struct agp_bridge_data *bridge)
list_add(&bridge->list, &agp_bridges);
return 0;
frontend_err:
agp_backend_cleanup(bridge);
err_out:
module_put(bridge->driver->owner);
err_put_bridge:
@ -323,8 +314,6 @@ void agp_remove_bridge(struct agp_bridge_data *bridge)
{
agp_backend_cleanup(bridge);
list_del(&bridge->list);
if (list_empty(&agp_bridges))
agp_frontend_cleanup();
module_put(bridge->driver->owner);
}
EXPORT_SYMBOL_GPL(agp_remove_bridge);


@ -1,291 +0,0 @@
/*
* AGPGART driver frontend compatibility ioctls
* Copyright (C) 2004 Silicon Graphics, Inc.
* Copyright (C) 2002-2003 Dave Jones
* Copyright (C) 1999 Jeff Hartmann
* Copyright (C) 1999 Precision Insight, Inc.
* Copyright (C) 1999 Xi Graphics, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
* OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* JEFF HARTMANN, OR ANY OTHER CONTRIBUTORS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/fs.h>
#include <linux/agpgart.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include "agp.h"
#include "compat_ioctl.h"
static int compat_agpioc_info_wrap(struct agp_file_private *priv, void __user *arg)
{
struct agp_info32 userinfo;
struct agp_kern_info kerninfo;
agp_copy_info(agp_bridge, &kerninfo);
userinfo.version.major = kerninfo.version.major;
userinfo.version.minor = kerninfo.version.minor;
userinfo.bridge_id = kerninfo.device->vendor |
(kerninfo.device->device << 16);
userinfo.agp_mode = kerninfo.mode;
userinfo.aper_base = (compat_long_t)kerninfo.aper_base;
userinfo.aper_size = kerninfo.aper_size;
userinfo.pg_total = userinfo.pg_system = kerninfo.max_memory;
userinfo.pg_used = kerninfo.current_memory;
if (copy_to_user(arg, &userinfo, sizeof(userinfo)))
return -EFAULT;
return 0;
}
static int compat_agpioc_reserve_wrap(struct agp_file_private *priv, void __user *arg)
{
struct agp_region32 ureserve;
struct agp_region kreserve;
struct agp_client *client;
struct agp_file_private *client_priv;
DBG("");
if (copy_from_user(&ureserve, arg, sizeof(ureserve)))
return -EFAULT;
if ((unsigned) ureserve.seg_count >= ~0U/sizeof(struct agp_segment32))
return -EFAULT;
kreserve.pid = ureserve.pid;
kreserve.seg_count = ureserve.seg_count;
client = agp_find_client_by_pid(kreserve.pid);
if (kreserve.seg_count == 0) {
/* remove a client */
client_priv = agp_find_private(kreserve.pid);
if (client_priv != NULL) {
set_bit(AGP_FF_IS_CLIENT, &client_priv->access_flags);
set_bit(AGP_FF_IS_VALID, &client_priv->access_flags);
}
if (client == NULL) {
/* client is already removed */
return 0;
}
return agp_remove_client(kreserve.pid);
} else {
struct agp_segment32 *usegment;
struct agp_segment *ksegment;
int seg;
if (ureserve.seg_count >= 16384)
return -EINVAL;
usegment = kmalloc_array(ureserve.seg_count,
sizeof(*usegment),
GFP_KERNEL);
if (!usegment)
return -ENOMEM;
ksegment = kmalloc_array(kreserve.seg_count,
sizeof(*ksegment),
GFP_KERNEL);
if (!ksegment) {
kfree(usegment);
return -ENOMEM;
}
if (copy_from_user(usegment, (void __user *) ureserve.seg_list,
sizeof(*usegment) * ureserve.seg_count)) {
kfree(usegment);
kfree(ksegment);
return -EFAULT;
}
for (seg = 0; seg < ureserve.seg_count; seg++) {
ksegment[seg].pg_start = usegment[seg].pg_start;
ksegment[seg].pg_count = usegment[seg].pg_count;
ksegment[seg].prot = usegment[seg].prot;
}
kfree(usegment);
kreserve.seg_list = ksegment;
if (client == NULL) {
/* Create the client and add the segment */
client = agp_create_client(kreserve.pid);
if (client == NULL) {
kfree(ksegment);
return -ENOMEM;
}
client_priv = agp_find_private(kreserve.pid);
if (client_priv != NULL) {
set_bit(AGP_FF_IS_CLIENT, &client_priv->access_flags);
set_bit(AGP_FF_IS_VALID, &client_priv->access_flags);
}
}
return agp_create_segment(client, &kreserve);
}
/* Will never really happen */
return -EINVAL;
}
static int compat_agpioc_allocate_wrap(struct agp_file_private *priv, void __user *arg)
{
struct agp_memory *memory;
struct agp_allocate32 alloc;
DBG("");
if (copy_from_user(&alloc, arg, sizeof(alloc)))
return -EFAULT;
memory = agp_allocate_memory_wrap(alloc.pg_count, alloc.type);
if (memory == NULL)
return -ENOMEM;
alloc.key = memory->key;
alloc.physical = memory->physical;
if (copy_to_user(arg, &alloc, sizeof(alloc))) {
agp_free_memory_wrap(memory);
return -EFAULT;
}
return 0;
}
static int compat_agpioc_bind_wrap(struct agp_file_private *priv, void __user *arg)
{
struct agp_bind32 bind_info;
struct agp_memory *memory;
DBG("");
if (copy_from_user(&bind_info, arg, sizeof(bind_info)))
return -EFAULT;
memory = agp_find_mem_by_key(bind_info.key);
if (memory == NULL)
return -EINVAL;
return agp_bind_memory(memory, bind_info.pg_start);
}
static int compat_agpioc_unbind_wrap(struct agp_file_private *priv, void __user *arg)
{
struct agp_memory *memory;
struct agp_unbind32 unbind;
DBG("");
if (copy_from_user(&unbind, arg, sizeof(unbind)))
return -EFAULT;
memory = agp_find_mem_by_key(unbind.key);
if (memory == NULL)
return -EINVAL;
return agp_unbind_memory(memory);
}
long compat_agp_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
struct agp_file_private *curr_priv = file->private_data;
int ret_val = -ENOTTY;
mutex_lock(&(agp_fe.agp_mutex));
if ((agp_fe.current_controller == NULL) &&
(cmd != AGPIOC_ACQUIRE32)) {
ret_val = -EINVAL;
goto ioctl_out;
}
if ((agp_fe.backend_acquired != true) &&
(cmd != AGPIOC_ACQUIRE32)) {
ret_val = -EBUSY;
goto ioctl_out;
}
if (cmd != AGPIOC_ACQUIRE32) {
if (!(test_bit(AGP_FF_IS_CONTROLLER, &curr_priv->access_flags))) {
ret_val = -EPERM;
goto ioctl_out;
}
/* Use the original pid of the controller,
* in case it's threaded */
if (agp_fe.current_controller->pid != curr_priv->my_pid) {
ret_val = -EBUSY;
goto ioctl_out;
}
}
switch (cmd) {
case AGPIOC_INFO32:
ret_val = compat_agpioc_info_wrap(curr_priv, (void __user *) arg);
break;
case AGPIOC_ACQUIRE32:
ret_val = agpioc_acquire_wrap(curr_priv);
break;
case AGPIOC_RELEASE32:
ret_val = agpioc_release_wrap(curr_priv);
break;
case AGPIOC_SETUP32:
ret_val = agpioc_setup_wrap(curr_priv, (void __user *) arg);
break;
case AGPIOC_RESERVE32:
ret_val = compat_agpioc_reserve_wrap(curr_priv, (void __user *) arg);
break;
case AGPIOC_PROTECT32:
ret_val = agpioc_protect_wrap(curr_priv);
break;
case AGPIOC_ALLOCATE32:
ret_val = compat_agpioc_allocate_wrap(curr_priv, (void __user *) arg);
break;
case AGPIOC_DEALLOCATE32:
ret_val = agpioc_deallocate_wrap(curr_priv, (int) arg);
break;
case AGPIOC_BIND32:
ret_val = compat_agpioc_bind_wrap(curr_priv, (void __user *) arg);
break;
case AGPIOC_UNBIND32:
ret_val = compat_agpioc_unbind_wrap(curr_priv, (void __user *) arg);
break;
case AGPIOC_CHIPSET_FLUSH32:
break;
}
ioctl_out:
DBG("ioctl returns %d\n", ret_val);
mutex_unlock(&(agp_fe.agp_mutex));
return ret_val;
}

View File

@ -1,106 +0,0 @@
/*
* Copyright (C) 1999 Jeff Hartmann
* Copyright (C) 1999 Precision Insight, Inc.
* Copyright (C) 1999 Xi Graphics, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
* OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* JEFF HARTMANN, OR ANY OTHER CONTRIBUTORS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _AGP_COMPAT_IOCTL_H
#define _AGP_COMPAT_IOCTL_H
#include <linux/compat.h>
#include <linux/agpgart.h>
#define AGPIOC_INFO32 _IOR (AGPIOC_BASE, 0, compat_uptr_t)
#define AGPIOC_ACQUIRE32 _IO (AGPIOC_BASE, 1)
#define AGPIOC_RELEASE32 _IO (AGPIOC_BASE, 2)
#define AGPIOC_SETUP32 _IOW (AGPIOC_BASE, 3, compat_uptr_t)
#define AGPIOC_RESERVE32 _IOW (AGPIOC_BASE, 4, compat_uptr_t)
#define AGPIOC_PROTECT32 _IOW (AGPIOC_BASE, 5, compat_uptr_t)
#define AGPIOC_ALLOCATE32 _IOWR(AGPIOC_BASE, 6, compat_uptr_t)
#define AGPIOC_DEALLOCATE32 _IOW (AGPIOC_BASE, 7, compat_int_t)
#define AGPIOC_BIND32 _IOW (AGPIOC_BASE, 8, compat_uptr_t)
#define AGPIOC_UNBIND32 _IOW (AGPIOC_BASE, 9, compat_uptr_t)
#define AGPIOC_CHIPSET_FLUSH32 _IO (AGPIOC_BASE, 10)
struct agp_info32 {
struct agp_version version; /* version of the driver */
u32 bridge_id; /* bridge vendor/device */
u32 agp_mode; /* mode info of bridge */
compat_long_t aper_base; /* base of aperture */
compat_size_t aper_size; /* size of aperture */
compat_size_t pg_total; /* max pages (swap + system) */
compat_size_t pg_system; /* max pages (system) */
compat_size_t pg_used; /* current pages used */
};
/*
* The "prot" down below needs still a "sleep" flag somehow ...
*/
struct agp_segment32 {
compat_off_t pg_start; /* starting page to populate */
compat_size_t pg_count; /* number of pages */
compat_int_t prot; /* prot flags for mmap */
};
struct agp_region32 {
compat_pid_t pid; /* pid of process */
compat_size_t seg_count; /* number of segments */
struct agp_segment32 *seg_list;
};
struct agp_allocate32 {
compat_int_t key; /* tag of allocation */
compat_size_t pg_count; /* number of pages */
u32 type; /* 0 == normal, other devspec */
u32 physical; /* device specific (some devices
* need a phys address of the
* actual page behind the gatt
* table) */
};
struct agp_bind32 {
compat_int_t key; /* tag of allocation */
compat_off_t pg_start; /* starting page to populate */
};
struct agp_unbind32 {
compat_int_t key; /* tag of allocation */
u32 priority; /* priority for paging out */
};
extern struct agp_front_data agp_fe;
int agpioc_acquire_wrap(struct agp_file_private *priv);
int agpioc_release_wrap(struct agp_file_private *priv);
int agpioc_protect_wrap(struct agp_file_private *priv);
int agpioc_setup_wrap(struct agp_file_private *priv, void __user *arg);
int agpioc_deallocate_wrap(struct agp_file_private *priv, int arg);
struct agp_file_private *agp_find_private(pid_t pid);
struct agp_client *agp_create_client(pid_t id);
int agp_remove_client(pid_t id);
int agp_create_segment(struct agp_client *client, struct agp_region *region);
void agp_free_memory_wrap(struct agp_memory *memory);
struct agp_memory *agp_allocate_memory_wrap(size_t pg_count, u32 type);
struct agp_memory *agp_find_mem_by_key(int key);
struct agp_client *agp_find_client_by_pid(pid_t id);
#endif /* _AGP_COMPAT_IOCTL_H */

File diff suppressed because it is too large

View File

@ -934,7 +934,8 @@ EXPORT_SYMBOL(dma_fence_wait_any_timeout);
* the GPU's devfreq to reduce frequency, when in fact the opposite is what is
* needed.
*
* To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline.
* To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline
* (or indirectly via userspace facing ioctls like &sync_set_deadline).
* The deadline hint provides a way for the waiting driver, or userspace, to
* convey an appropriate sense of urgency to the signaling driver.
*
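A minimal userspace sketch of the new deadline path, assuming headers from a kernel that carries this series (struct sync_set_deadline and SYNC_IOC_SET_DEADLINE in linux/sync_file.h); fence_fd is a hypothetical sync_file fd obtained elsewhere, e.g. exported from a DRM syncobj. The handler rejects a nonzero pad field, so the struct must be zero-initialized:

#include <stdint.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

/* Hint the signaling driver that this fence is needed ~16ms from now. */
static int set_fence_deadline_16ms(int fence_fd)
{
	struct sync_set_deadline args = {0};	/* pad must stay zero */
	struct timespec ts;

	/* Deadlines use the CLOCK_MONOTONIC timebase, same as vblank. */
	clock_gettime(CLOCK_MONOTONIC, &ts);
	args.deadline_ns = (uint64_t)ts.tv_sec * 1000000000ull +
			   (uint64_t)ts.tv_nsec + 16000000ull;

	return ioctl(fence_fd, SYNC_IOC_SET_DEADLINE, &args);
}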

View File

@ -52,12 +52,33 @@ struct sw_sync_create_fence_data {
__s32 fence; /* fd of new fence */
};
/**
* struct sw_sync_get_deadline - get the deadline hint of a sw_sync fence
* @deadline_ns: absolute time of the deadline
* @pad: must be zero
* @fence_fd: the sw_sync fence fd (in)
*
* Return the earliest deadline set on the fence. The timebase for the
* deadline is CLOCK_MONOTONIC (same as vblank). If there is no deadline
* set on the fence, this ioctl will return -ENOENT.
*/
struct sw_sync_get_deadline {
__u64 deadline_ns;
__u32 pad;
__s32 fence_fd;
};
#define SW_SYNC_IOC_MAGIC 'W'
#define SW_SYNC_IOC_CREATE_FENCE _IOWR(SW_SYNC_IOC_MAGIC, 0,\
struct sw_sync_create_fence_data)
#define SW_SYNC_IOC_INC _IOW(SW_SYNC_IOC_MAGIC, 1, __u32)
#define SW_SYNC_GET_DEADLINE _IOWR(SW_SYNC_IOC_MAGIC, 2, \
struct sw_sync_get_deadline)
#define SW_SYNC_HAS_DEADLINE_BIT DMA_FENCE_FLAG_USER_BITS
static const struct dma_fence_ops timeline_fence_ops;
@ -171,6 +192,22 @@ static void timeline_fence_timeline_value_str(struct dma_fence *fence,
snprintf(str, size, "%d", parent->value);
}
static void timeline_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
{
struct sync_pt *pt = dma_fence_to_sync_pt(fence);
unsigned long flags;
spin_lock_irqsave(fence->lock, flags);
if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
if (ktime_before(deadline, pt->deadline))
pt->deadline = deadline;
} else {
pt->deadline = deadline;
__set_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags);
}
spin_unlock_irqrestore(fence->lock, flags);
}
static const struct dma_fence_ops timeline_fence_ops = {
.get_driver_name = timeline_fence_get_driver_name,
.get_timeline_name = timeline_fence_get_timeline_name,
@ -179,6 +216,7 @@ static const struct dma_fence_ops timeline_fence_ops = {
.release = timeline_fence_release,
.fence_value_str = timeline_fence_value_str,
.timeline_value_str = timeline_fence_timeline_value_str,
.set_deadline = timeline_fence_set_deadline,
};
/**
@ -387,6 +425,47 @@ static long sw_sync_ioctl_inc(struct sync_timeline *obj, unsigned long arg)
return 0;
}
static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long arg)
{
struct sw_sync_get_deadline data;
struct dma_fence *fence;
unsigned long flags;
struct sync_pt *pt;
int ret = 0;
if (copy_from_user(&data, (void __user *)arg, sizeof(data)))
return -EFAULT;
if (data.deadline_ns || data.pad)
return -EINVAL;
fence = sync_file_get_fence(data.fence_fd);
if (!fence)
return -EINVAL;
pt = dma_fence_to_sync_pt(fence);
if (!pt)
return -EINVAL;
spin_lock_irqsave(fence->lock, flags);
if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
data.deadline_ns = ktime_to_ns(pt->deadline);
} else {
ret = -ENOENT;
}
spin_unlock_irqrestore(fence->lock, flags);
dma_fence_put(fence);
if (ret)
return ret;
if (copy_to_user((void __user *)arg, &data, sizeof(data)))
return -EFAULT;
return 0;
}
static long sw_sync_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
@ -399,6 +478,9 @@ static long sw_sync_ioctl(struct file *file, unsigned int cmd,
case SW_SYNC_IOC_INC:
return sw_sync_ioctl_inc(obj, arg);
case SW_SYNC_GET_DEADLINE:
return sw_sync_ioctl_get_deadline(obj, arg);
default:
return -ENOTTY;
}
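A matching test-side sketch, with the struct and ioctl number restated locally in case the installed uapi headers predate this series: the ioctl is issued on the sw_sync timeline fd from debugfs, and every input field other than fence_fd must be zero or the handler above returns -EINVAL:

#include <sys/ioctl.h>
#include <linux/types.h>

struct sw_sync_get_deadline {
	__u64	deadline_ns;	/* out: earliest deadline, CLOCK_MONOTONIC */
	__u32	pad;		/* in: must be zero */
	__s32	fence_fd;	/* in: the sw_sync fence fd */
};

#define SW_SYNC_IOC_MAGIC	'W'
#define SW_SYNC_GET_DEADLINE	_IOWR(SW_SYNC_IOC_MAGIC, 2, \
				      struct sw_sync_get_deadline)

/* On success fills *out_ns; if no deadline was set, the ioctl fails
 * with errno set to ENOENT. */
static int read_fence_deadline(int timeline_fd, int fence_fd, __u64 *out_ns)
{
	struct sw_sync_get_deadline data = { .fence_fd = fence_fd };
	int ret = ioctl(timeline_fd, SW_SYNC_GET_DEADLINE, &data);

	if (ret == 0)
		*out_ns = data.deadline_ns;
	return ret;
}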

View File

@ -55,11 +55,13 @@ static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence)
* @base: base fence object
* @link: link on the sync timeline's list
* @node: node in the sync timeline's tree
* @deadline: the earliest fence deadline hint
*/
struct sync_pt {
struct dma_fence base;
struct list_head link;
struct rb_node node;
ktime_t deadline;
};
extern const struct file_operations sw_sync_debugfs_fops;

View File

@ -347,6 +347,22 @@ out:
return ret;
}
static int sync_file_ioctl_set_deadline(struct sync_file *sync_file,
unsigned long arg)
{
struct sync_set_deadline ts;
if (copy_from_user(&ts, (void __user *)arg, sizeof(ts)))
return -EFAULT;
if (ts.pad)
return -EINVAL;
dma_fence_set_deadline(sync_file->fence, ns_to_ktime(ts.deadline_ns));
return 0;
}
static long sync_file_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
@ -359,6 +375,9 @@ static long sync_file_ioctl(struct file *file, unsigned int cmd,
case SYNC_IOC_FILE_INFO:
return sync_file_ioctl_fence_info(sync_file, arg);
case SYNC_IOC_SET_DEADLINE:
return sync_file_ioctl_set_deadline(sync_file, arg);
default:
return -ENOTTY;
}

View File

@ -74,12 +74,13 @@ config DRM_KUNIT_TEST_HELPERS
config DRM_KUNIT_TEST
tristate "KUnit tests for DRM" if !KUNIT_ALL_TESTS
depends on DRM && KUNIT
depends on DRM && KUNIT && MMU
select DRM_BUDDY
select DRM_DISPLAY_DP_HELPER
select DRM_DISPLAY_HELPER
select DRM_EXEC
select DRM_EXPORT_FOR_TESTS if m
select DRM_GEM_SHMEM_HELPER
select DRM_KMS_HELPER
select DRM_KUNIT_TEST_HELPERS
select DRM_LIB_RANDOM
@ -409,27 +410,6 @@ config DRM_HYPERV
If M is selected the module will be called hyperv_drm.
# Keep legacy drivers last
menuconfig DRM_LEGACY
bool "Enable legacy drivers (DANGEROUS)"
depends on DRM && MMU
help
Enable legacy DRI1 drivers. Those drivers expose unsafe and dangerous
APIs to user-space, which can be used to circumvent access
restrictions and other security measures. For backwards compatibility
those drivers are still available, but their use is highly
inadvisable and might harm your system.
You are recommended to use the safe modeset-only drivers instead, and
perform 3D emulation in user-space.
Unless you have strong reasons to go rogue, say "N".
if DRM_LEGACY
# leave here to list legacy drivers
endif # DRM_LEGACY
config DRM_EXPORT_FOR_TESTS
bool

View File

@ -47,18 +47,6 @@ drm-y := \
drm_vblank_work.o \
drm_vma_manager.o \
drm_writeback.o
drm-$(CONFIG_DRM_LEGACY) += \
drm_agpsupport.o \
drm_bufs.o \
drm_context.o \
drm_dma.o \
drm_hashtab.o \
drm_irq.o \
drm_legacy_misc.o \
drm_lock.o \
drm_memory.o \
drm_scatter.o \
drm_vm.o
drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_PANEL) += drm_panel.o

View File

@ -73,10 +73,10 @@ amdgpu_ctx_to_drm_sched_prio(int32_t ctx_prio)
return DRM_SCHED_PRIORITY_NORMAL;
case AMDGPU_CTX_PRIORITY_VERY_LOW:
return DRM_SCHED_PRIORITY_MIN;
return DRM_SCHED_PRIORITY_LOW;
case AMDGPU_CTX_PRIORITY_LOW:
return DRM_SCHED_PRIORITY_MIN;
return DRM_SCHED_PRIORITY_LOW;
case AMDGPU_CTX_PRIORITY_NORMAL:
return DRM_SCHED_PRIORITY_NORMAL;

View File

@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
int i;
/* Signal all jobs not yet scheduled */
for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
struct drm_sched_rq *rq = sched->sched_rq[i];
spin_lock(&rq->lock);
list_for_each_entry(s_entity, &rq->entities, list) {

View File

@ -92,7 +92,6 @@
#include <drm/drm_vblank.h>
#include <drm/drm_audio_component.h>
#include <drm/drm_gem_atomic_helper.h>
#include <drm/drm_plane_helper.h>
#include <acpi/video.h>

View File

@ -7,8 +7,9 @@
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/property.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
@ -1012,26 +1013,17 @@ armada_lcd_bind(struct device *dev, struct device *master, void *data)
int irq = platform_get_irq(pdev, 0);
const struct armada_variant *variant;
struct device_node *port = NULL;
struct device_node *np, *parent = dev->of_node;
if (irq < 0)
return irq;
if (!dev->of_node) {
const struct platform_device_id *id;
id = platform_get_device_id(pdev);
if (!id)
return -ENXIO;
variant = (const struct armada_variant *)id->driver_data;
} else {
const struct of_device_id *match;
struct device_node *np, *parent = dev->of_node;
match = of_match_device(dev->driver->of_match_table, dev);
if (!match)
return -ENXIO;
variant = device_get_match_data(dev);
if (!variant)
return -ENXIO;
if (parent) {
np = of_get_child_by_name(parent, "ports");
if (np)
parent = np;
@ -1041,8 +1033,6 @@ armada_lcd_bind(struct device *dev, struct device *master, void *data)
dev_err(dev, "no port node found in %pOF\n", parent);
return -ENXIO;
}
variant = match->data;
}
return armada_drm_crtc_create(drm, dev, res, irq, variant, port);

View File

@ -6,10 +6,10 @@
#include <linux/irq.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/mod_devicetable.h>
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
#include <linux/property.h>
#include <linux/regmap.h>
#include <linux/reset.h>
@ -143,7 +143,6 @@ static int aspeed_gfx_load(struct drm_device *drm)
struct aspeed_gfx *priv = to_aspeed_gfx(drm);
struct device_node *np = pdev->dev.of_node;
const struct aspeed_gfx_config *config;
const struct of_device_id *match;
struct resource *res;
int ret;
@ -152,10 +151,9 @@ static int aspeed_gfx_load(struct drm_device *drm)
if (IS_ERR(priv->base))
return PTR_ERR(priv->base);
match = of_match_device(aspeed_gfx_match, &pdev->dev);
if (!match)
config = device_get_match_data(&pdev->dev);
if (!config)
return -EINVAL;
config = match->data;
priv->dac_reg = config->dac_reg;
priv->int_clr_reg = config->int_clear_reg;

View File

@ -89,11 +89,194 @@ static const struct pci_device_id ast_pciidlist[] = {
MODULE_DEVICE_TABLE(pci, ast_pciidlist);
static bool ast_is_vga_enabled(void __iomem *ioregs)
{
u8 vgaer = __ast_read8(ioregs, AST_IO_VGAER);
return vgaer & AST_IO_VGAER_VGA_ENABLE;
}
static void ast_enable_vga(void __iomem *ioregs)
{
__ast_write8(ioregs, AST_IO_VGAER, AST_IO_VGAER_VGA_ENABLE);
__ast_write8(ioregs, AST_IO_VGAMR_W, AST_IO_VGAMR_IOSEL);
}
/*
* Run this function as part of the HW device cleanup; not
* when the DRM device gets released.
*/
static void ast_enable_mmio_release(void *data)
{
void __iomem *ioregs = (void __force __iomem *)data;
/* enable standard VGA decode */
__ast_write8_i(ioregs, AST_IO_VGACRI, 0xa1, AST_IO_VGACRA1_MMIO_ENABLED);
}
static int ast_enable_mmio(struct device *dev, void __iomem *ioregs)
{
void *data = (void __force *)ioregs;
__ast_write8_i(ioregs, AST_IO_VGACRI, 0xa1,
AST_IO_VGACRA1_MMIO_ENABLED |
AST_IO_VGACRA1_VGAIO_DISABLED);
return devm_add_action_or_reset(dev, ast_enable_mmio_release, data);
}
static void ast_open_key(void __iomem *ioregs)
{
__ast_write8_i(ioregs, AST_IO_VGACRI, 0x80, AST_IO_VGACR80_PASSWORD);
}
static int ast_detect_chip(struct pci_dev *pdev,
void __iomem *regs, void __iomem *ioregs,
enum ast_chip *chip_out,
enum ast_config_mode *config_mode_out)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
enum ast_config_mode config_mode = ast_use_defaults;
uint32_t scu_rev = 0xffffffff;
enum ast_chip chip;
u32 data;
u8 vgacrd0, vgacrd1;
/*
* Find configuration mode and read SCU revision
*/
/* Check if we have device-tree properties */
if (np && !of_property_read_u32(np, "aspeed,scu-revision-id", &data)) {
/* We do, disable P2A access */
config_mode = ast_use_dt;
scu_rev = data;
} else if (pdev->device == PCI_CHIP_AST2000) { // Not all families have a P2A bridge
/*
* The BMC will set SCU 0x40 D[12] to 1 if the P2A bridge
* is disabled. We force using P2A if the VGA-only mode bit
* D[7] is set
*/
vgacrd0 = __ast_read8_i(ioregs, AST_IO_VGACRI, 0xd0);
vgacrd1 = __ast_read8_i(ioregs, AST_IO_VGACRI, 0xd1);
if (!(vgacrd0 & 0x80) || !(vgacrd1 & 0x10)) {
/*
* We have a P2A bridge and it is enabled.
*/
/* Patch AST2500/AST2510 */
if ((pdev->revision & 0xf0) == 0x40) {
if (!(vgacrd0 & AST_VRAM_INIT_STATUS_MASK))
ast_patch_ahb_2500(regs);
}
/* Double check that it's actually working */
data = __ast_read32(regs, 0xf004);
if ((data != 0xffffffff) && (data != 0x00)) {
config_mode = ast_use_p2a;
/* Read SCU7c (silicon revision register) */
__ast_write32(regs, 0xf004, 0x1e6e0000);
__ast_write32(regs, 0xf000, 0x1);
scu_rev = __ast_read32(regs, 0x1207c);
}
}
}
switch (config_mode) {
case ast_use_defaults:
dev_info(dev, "Using default configuration\n");
break;
case ast_use_dt:
dev_info(dev, "Using device-tree for configuration\n");
break;
case ast_use_p2a:
dev_info(dev, "Using P2A bridge for configuration\n");
break;
}
/*
* Identify chipset
*/
if (pdev->revision >= 0x50) {
chip = AST2600;
dev_info(dev, "AST 2600 detected\n");
} else if (pdev->revision >= 0x40) {
switch (scu_rev & 0x300) {
case 0x0100:
chip = AST2510;
dev_info(dev, "AST 2510 detected\n");
break;
default:
chip = AST2500;
dev_info(dev, "AST 2500 detected\n");
break;
}
} else if (pdev->revision >= 0x30) {
switch (scu_rev & 0x300) {
case 0x0100:
chip = AST1400;
dev_info(dev, "AST 1400 detected\n");
break;
default:
chip = AST2400;
dev_info(dev, "AST 2400 detected\n");
break;
}
} else if (pdev->revision >= 0x20) {
switch (scu_rev & 0x300) {
case 0x0000:
chip = AST1300;
dev_info(dev, "AST 1300 detected\n");
break;
default:
chip = AST2300;
dev_info(dev, "AST 2300 detected\n");
break;
}
} else if (pdev->revision >= 0x10) {
switch (scu_rev & 0x0300) {
case 0x0200:
chip = AST1100;
dev_info(dev, "AST 1100 detected\n");
break;
case 0x0100:
chip = AST2200;
dev_info(dev, "AST 2200 detected\n");
break;
case 0x0000:
chip = AST2150;
dev_info(dev, "AST 2150 detected\n");
break;
default:
chip = AST2100;
dev_info(dev, "AST 2100 detected\n");
break;
}
} else {
chip = AST2000;
dev_info(dev, "AST 2000 detected\n");
}
*chip_out = chip;
*config_mode_out = config_mode;
return 0;
}
static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct ast_device *ast;
struct drm_device *dev;
struct device *dev = &pdev->dev;
int ret;
void __iomem *regs;
void __iomem *ioregs;
enum ast_config_mode config_mode;
enum ast_chip chip;
struct drm_device *drm;
bool need_post = false;
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &ast_driver);
if (ret)
@ -103,16 +286,80 @@ static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
return ret;
ast = ast_device_create(&ast_driver, pdev, ent->driver_data);
if (IS_ERR(ast))
return PTR_ERR(ast);
dev = &ast->base;
regs = pcim_iomap(pdev, 1, 0);
if (!regs)
return -EIO;
ret = drm_dev_register(dev, ent->driver_data);
if (pdev->revision >= 0x40) {
/*
* On AST2500 and later models, MMIO is enabled by
* default. Use it to stay compatible with ARM platforms.
*/
resource_size_t len = pci_resource_len(pdev, 1);
if (len < AST_IO_MM_OFFSET)
return -EIO;
if ((len - AST_IO_MM_OFFSET) < AST_IO_MM_LENGTH)
return -EIO;
ioregs = regs + AST_IO_MM_OFFSET;
} else if (pci_resource_flags(pdev, 2) & IORESOURCE_IO) {
/*
* Map I/O registers if we have a PCI BAR for I/O.
*/
resource_size_t len = pci_resource_len(pdev, 2);
if (len < AST_IO_MM_LENGTH)
return -EIO;
ioregs = pcim_iomap(pdev, 2, 0);
if (!ioregs)
return -EIO;
} else {
/*
* Anything else is best effort.
*/
resource_size_t len = pci_resource_len(pdev, 1);
if (len < AST_IO_MM_OFFSET)
return -EIO;
if ((len - AST_IO_MM_OFFSET) < AST_IO_MM_LENGTH)
return -EIO;
ioregs = regs + AST_IO_MM_OFFSET;
dev_info(dev, "Platform has no I/O space, using MMIO\n");
}
if (!ast_is_vga_enabled(ioregs)) {
dev_info(dev, "VGA not enabled on entry, requesting chip POST\n");
need_post = true;
}
/*
* If VGA isn't enabled, we need to enable now or subsequent
* access to the scratch registers will fail.
*/
if (need_post)
ast_enable_vga(ioregs);
/* Enable extended register access */
ast_open_key(ioregs);
ret = ast_enable_mmio(dev, ioregs);
if (ret)
return ret;
drm_fbdev_generic_setup(dev, 32);
ret = ast_detect_chip(pdev, regs, ioregs, &chip, &config_mode);
if (ret)
return ret;
drm = ast_device_create(pdev, &ast_driver, chip, config_mode, regs, ioregs, need_post);
if (IS_ERR(drm))
return PTR_ERR(drm);
pci_set_drvdata(pdev, drm);
ret = drm_dev_register(drm, ent->driver_data);
if (ret)
return ret;
drm_fbdev_generic_setup(drm, 32);
return 0;
}

View File

@ -98,6 +98,12 @@ enum ast_tx_chip {
#define AST_TX_DP501_BIT BIT(AST_TX_DP501)
#define AST_TX_ASTDP_BIT BIT(AST_TX_ASTDP)
enum ast_config_mode {
ast_use_p2a,
ast_use_dt,
ast_use_defaults
};
#define AST_DRAM_512Mx16 0
#define AST_DRAM_1Gx16 1
#define AST_DRAM_512Mx32 2
@ -192,12 +198,13 @@ to_ast_bmc_connector(struct drm_connector *connector)
struct ast_device {
struct drm_device base;
struct mutex ioregs_lock; /* Protects access to I/O registers in ioregs */
void __iomem *regs;
void __iomem *ioregs;
void __iomem *dp501_fw_buf;
enum ast_config_mode config_mode;
enum ast_chip chip;
uint32_t dram_bus_width;
uint32_t dram_type;
uint32_t mclk;
@ -207,6 +214,8 @@ struct ast_device {
unsigned long vram_size;
unsigned long vram_fb_available;
struct mutex modeset_lock; /* Protects access to modeset I/O registers in ioregs */
struct ast_plane primary_plane;
struct ast_plane cursor_plane;
struct drm_crtc crtc;
@ -234,11 +243,6 @@ struct ast_device {
} output;
bool support_wide_screen;
enum {
ast_use_p2a,
ast_use_dt,
ast_use_defaults
} config_mode;
unsigned long tx_chip_types; /* bitfield of enum ast_chip_type */
u8 *dp501_fw_addr;
@ -250,9 +254,13 @@ static inline struct ast_device *to_ast_device(struct drm_device *dev)
return container_of(dev, struct ast_device, base);
}
struct ast_device *ast_device_create(const struct drm_driver *drv,
struct pci_dev *pdev,
unsigned long flags);
struct drm_device *ast_device_create(struct pci_dev *pdev,
const struct drm_driver *drv,
enum ast_chip chip,
enum ast_config_mode config_mode,
void __iomem *regs,
void __iomem *ioregs,
bool need_post);
static inline unsigned long __ast_gen(struct ast_device *ast)
{
@ -272,55 +280,94 @@ static inline bool __ast_gen_is_eq(struct ast_device *ast, unsigned long gen)
#define IS_AST_GEN6(__ast) __ast_gen_is_eq(__ast, 6)
#define IS_AST_GEN7(__ast) __ast_gen_is_eq(__ast, 7)
static inline u8 __ast_read8(const void __iomem *addr, u32 reg)
{
return ioread8(addr + reg);
}
static inline u32 __ast_read32(const void __iomem *addr, u32 reg)
{
return ioread32(addr + reg);
}
static inline void __ast_write8(void __iomem *addr, u32 reg, u8 val)
{
iowrite8(val, addr + reg);
}
static inline void __ast_write32(void __iomem *addr, u32 reg, u32 val)
{
iowrite32(val, addr + reg);
}
static inline u8 __ast_read8_i(void __iomem *addr, u32 reg, u8 index)
{
__ast_write8(addr, reg, index);
return __ast_read8(addr, reg + 1);
}
static inline u8 __ast_read8_i_masked(void __iomem *addr, u32 reg, u8 index, u8 read_mask)
{
u8 val = __ast_read8_i(addr, reg, index);
return val & read_mask;
}
static inline void __ast_write8_i(void __iomem *addr, u32 reg, u8 index, u8 val)
{
__ast_write8(addr, reg, index);
__ast_write8(addr, reg + 1, val);
}
static inline void __ast_write8_i_masked(void __iomem *addr, u32 reg, u8 index, u8 read_mask,
u8 val)
{
u8 tmp = __ast_read8_i_masked(addr, reg, index, read_mask);
tmp |= val;
__ast_write8_i(addr, reg, index, tmp);
}
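/*
 * Illustrative note (not part of the patch): the _i helpers model VGA's
 * indexed register pairs, where the index byte is written to the base
 * port and the data register sits at base + 1. For example,
 *
 *	u8 v = __ast_read8_i(ioregs, AST_IO_VGACRI, 0xa1);
 *
 * writes 0xa1 to the CRTC index port (0x54) and reads the value back
 * from the data port (0x55).
 */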
static inline u32 ast_read32(struct ast_device *ast, u32 reg)
{
return ioread32(ast->regs + reg);
return __ast_read32(ast->regs, reg);
}
static inline void ast_write32(struct ast_device *ast, u32 reg, u32 val)
{
iowrite32(val, ast->regs + reg);
__ast_write32(ast->regs, reg, val);
}
static inline u8 ast_io_read8(struct ast_device *ast, u32 reg)
{
return ioread8(ast->ioregs + reg);
return __ast_read8(ast->ioregs, reg);
}
static inline void ast_io_write8(struct ast_device *ast, u32 reg, u8 val)
{
iowrite8(val, ast->ioregs + reg);
__ast_write8(ast->ioregs, reg, val);
}
static inline u8 ast_get_index_reg(struct ast_device *ast, u32 base, u8 index)
{
ast_io_write8(ast, base, index);
++base;
return ast_io_read8(ast, base);
return __ast_read8_i(ast->ioregs, base, index);
}
static inline u8 ast_get_index_reg_mask(struct ast_device *ast, u32 base, u8 index,
u8 preserve_mask)
{
u8 val = ast_get_index_reg(ast, base, index);
return val & preserve_mask;
return __ast_read8_i_masked(ast->ioregs, base, index, preserve_mask);
}
static inline void ast_set_index_reg(struct ast_device *ast, u32 base, u8 index, u8 val)
{
ast_io_write8(ast, base, index);
++base;
ast_io_write8(ast, base, val);
__ast_write8_i(ast->ioregs, base, index, val);
}
static inline void ast_set_index_reg_mask(struct ast_device *ast, u32 base, u8 index,
u8 preserve_mask, u8 val)
{
u8 tmp = ast_get_index_reg_mask(ast, base, index, preserve_mask);
tmp |= val;
ast_set_index_reg(ast, base, index, tmp);
__ast_write8_i_masked(ast->ioregs, base, index, preserve_mask, val);
}
#define AST_VIDMEM_SIZE_8M 0x00800000
@ -442,7 +489,7 @@ int ast_mm_init(struct ast_device *ast);
void ast_post_gpu(struct drm_device *dev);
u32 ast_mindwm(struct ast_device *ast, u32 r);
void ast_moutdwm(struct ast_device *ast, u32 r, u32 v);
void ast_patch_ahb_2500(struct ast_device *ast);
void ast_patch_ahb_2500(void __iomem *regs);
/* ast dp501 */
void ast_set_dp501_video_output(struct drm_device *dev, u8 mode);
bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size);

View File

@ -35,180 +35,6 @@
#include "ast_drv.h"
static bool ast_is_vga_enabled(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
u8 ch;
ch = ast_io_read8(ast, AST_IO_VGAER);
return !!(ch & 0x01);
}
static void ast_enable_vga(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
ast_io_write8(ast, AST_IO_VGAER, 0x01);
ast_io_write8(ast, AST_IO_VGAMR_W, 0x01);
}
/*
* Run this function as part of the HW device cleanup; not
* when the DRM device gets released.
*/
static void ast_enable_mmio_release(void *data)
{
struct ast_device *ast = data;
/* enable standard VGA decode */
ast_set_index_reg(ast, AST_IO_VGACRI, 0xa1, 0x04);
}
static int ast_enable_mmio(struct ast_device *ast)
{
struct drm_device *dev = &ast->base;
ast_set_index_reg(ast, AST_IO_VGACRI, 0xa1, 0x06);
return devm_add_action_or_reset(dev->dev, ast_enable_mmio_release, ast);
}
static void ast_open_key(struct ast_device *ast)
{
ast_set_index_reg(ast, AST_IO_VGACRI, 0x80, 0xA8);
}
static int ast_device_config_init(struct ast_device *ast)
{
struct drm_device *dev = &ast->base;
struct pci_dev *pdev = to_pci_dev(dev->dev);
struct device_node *np = dev->dev->of_node;
uint32_t scu_rev = 0xffffffff;
u32 data;
u8 jregd0, jregd1;
/*
* Find configuration mode and read SCU revision
*/
ast->config_mode = ast_use_defaults;
/* Check if we have device-tree properties */
if (np && !of_property_read_u32(np, "aspeed,scu-revision-id", &data)) {
/* We do, disable P2A access */
ast->config_mode = ast_use_dt;
scu_rev = data;
} else if (pdev->device == PCI_CHIP_AST2000) { // Not all families have a P2A bridge
/*
* The BMC will set SCU 0x40 D[12] to 1 if the P2A bridge
* is disabled. We force using P2A if the VGA-only mode bit
* D[7] is set
*/
jregd0 = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd0, 0xff);
jregd1 = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, 0xff);
if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
/*
* We have a P2A bridge and it is enabled.
*/
/* Patch AST2500/AST2510 */
if ((pdev->revision & 0xf0) == 0x40) {
if (!(jregd0 & AST_VRAM_INIT_STATUS_MASK))
ast_patch_ahb_2500(ast);
}
/* Double check that it's actually working */
data = ast_read32(ast, 0xf004);
if ((data != 0xffffffff) && (data != 0x00)) {
ast->config_mode = ast_use_p2a;
/* Read SCU7c (silicon revision register) */
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
scu_rev = ast_read32(ast, 0x1207c);
}
}
}
switch (ast->config_mode) {
case ast_use_defaults:
drm_info(dev, "Using default configuration\n");
break;
case ast_use_dt:
drm_info(dev, "Using device-tree for configuration\n");
break;
case ast_use_p2a:
drm_info(dev, "Using P2A bridge for configuration\n");
break;
}
/*
* Identify chipset
*/
if (pdev->revision >= 0x50) {
ast->chip = AST2600;
drm_info(dev, "AST 2600 detected\n");
} else if (pdev->revision >= 0x40) {
switch (scu_rev & 0x300) {
case 0x0100:
ast->chip = AST2510;
drm_info(dev, "AST 2510 detected\n");
break;
default:
ast->chip = AST2500;
drm_info(dev, "AST 2500 detected\n");
}
} else if (pdev->revision >= 0x30) {
switch (scu_rev & 0x300) {
case 0x0100:
ast->chip = AST1400;
drm_info(dev, "AST 1400 detected\n");
break;
default:
ast->chip = AST2400;
drm_info(dev, "AST 2400 detected\n");
}
} else if (pdev->revision >= 0x20) {
switch (scu_rev & 0x300) {
case 0x0000:
ast->chip = AST1300;
drm_info(dev, "AST 1300 detected\n");
break;
default:
ast->chip = AST2300;
drm_info(dev, "AST 2300 detected\n");
break;
}
} else if (pdev->revision >= 0x10) {
switch (scu_rev & 0x0300) {
case 0x0200:
ast->chip = AST1100;
drm_info(dev, "AST 1100 detected\n");
break;
case 0x0100:
ast->chip = AST2200;
drm_info(dev, "AST 2200 detected\n");
break;
case 0x0000:
ast->chip = AST2150;
drm_info(dev, "AST 2150 detected\n");
break;
default:
ast->chip = AST2100;
drm_info(dev, "AST 2100 detected\n");
break;
}
} else {
ast->chip = AST2000;
drm_info(dev, "AST 2000 detected\n");
}
return 0;
}
static void ast_detect_widescreen(struct ast_device *ast)
{
u8 jreg;
@ -424,69 +250,27 @@ static int ast_get_dram_info(struct drm_device *dev)
return 0;
}
struct ast_device *ast_device_create(const struct drm_driver *drv,
struct pci_dev *pdev,
unsigned long flags)
struct drm_device *ast_device_create(struct pci_dev *pdev,
const struct drm_driver *drv,
enum ast_chip chip,
enum ast_config_mode config_mode,
void __iomem *regs,
void __iomem *ioregs,
bool need_post)
{
struct drm_device *dev;
struct ast_device *ast;
bool need_post = false;
int ret = 0;
int ret;
ast = devm_drm_dev_alloc(&pdev->dev, drv, struct ast_device, base);
if (IS_ERR(ast))
return ast;
return ERR_CAST(ast);
dev = &ast->base;
pci_set_drvdata(pdev, dev);
ret = drmm_mutex_init(dev, &ast->ioregs_lock);
if (ret)
return ERR_PTR(ret);
ast->regs = pcim_iomap(pdev, 1, 0);
if (!ast->regs)
return ERR_PTR(-EIO);
/*
* After AST2500, MMIO is enabled by default, and it should be used
* to stay compatible with Arm.
*/
if (pdev->revision >= 0x40) {
ast->ioregs = ast->regs + AST_IO_MM_OFFSET;
} else if (!(pci_resource_flags(pdev, 2) & IORESOURCE_IO)) {
drm_info(dev, "platform has no IO space, trying MMIO\n");
ast->ioregs = ast->regs + AST_IO_MM_OFFSET;
}
/* "map" IO regs if the above hasn't done so already */
if (!ast->ioregs) {
ast->ioregs = pcim_iomap(pdev, 2, 0);
if (!ast->ioregs)
return ERR_PTR(-EIO);
}
if (!ast_is_vga_enabled(dev)) {
drm_info(dev, "VGA not enabled on entry, requesting chip POST\n");
need_post = true;
}
/*
* If VGA isn't enabled, we need to enable now or subsequent
* access to the scratch registers will fail.
*/
if (need_post)
ast_enable_vga(dev);
/* Enable extended register access */
ast_open_key(ast);
ret = ast_enable_mmio(ast);
if (ret)
return ERR_PTR(ret);
ret = ast_device_config_init(ast);
if (ret)
return ERR_PTR(ret);
ast->chip = chip;
ast->config_mode = config_mode;
ast->regs = regs;
ast->ioregs = ioregs;
ast_detect_widescreen(ast);
ast_detect_tx_chip(ast, need_post);
@ -517,5 +301,5 @@ struct ast_device *ast_device_create(const struct drm_driver *drv,
if (ret)
return ERR_PTR(ret);
return ast;
return dev;
}

View File

@ -1358,13 +1358,13 @@ static int ast_vga_connector_helper_get_modes(struct drm_connector *connector)
* Protect access to I/O registers from concurrent modesetting
* by acquiring the I/O-register lock.
*/
mutex_lock(&ast->ioregs_lock);
mutex_lock(&ast->modeset_lock);
edid = drm_get_edid(connector, &ast_vga_connector->i2c->adapter);
if (!edid)
goto err_mutex_unlock;
mutex_unlock(&ast->ioregs_lock);
mutex_unlock(&ast->modeset_lock);
count = drm_add_edid_modes(connector, edid);
kfree(edid);
@ -1372,7 +1372,7 @@ static int ast_vga_connector_helper_get_modes(struct drm_connector *connector)
return count;
err_mutex_unlock:
mutex_unlock(&ast->ioregs_lock);
mutex_unlock(&ast->modeset_lock);
err_drm_connector_update_edid_property:
drm_connector_update_edid_property(connector, NULL);
return 0;
@ -1464,13 +1464,13 @@ static int ast_sil164_connector_helper_get_modes(struct drm_connector *connector
* Protect access to I/O registers from concurrent modesetting
* by acquiring the I/O-register lock.
*/
mutex_lock(&ast->ioregs_lock);
mutex_lock(&ast->modeset_lock);
edid = drm_get_edid(connector, &ast_sil164_connector->i2c->adapter);
if (!edid)
goto err_mutex_unlock;
mutex_unlock(&ast->ioregs_lock);
mutex_unlock(&ast->modeset_lock);
count = drm_add_edid_modes(connector, edid);
kfree(edid);
@ -1478,7 +1478,7 @@ static int ast_sil164_connector_helper_get_modes(struct drm_connector *connector
return count;
err_mutex_unlock:
mutex_unlock(&ast->ioregs_lock);
mutex_unlock(&ast->modeset_lock);
err_drm_connector_update_edid_property:
drm_connector_update_edid_property(connector, NULL);
return 0;
@ -1670,13 +1670,13 @@ static int ast_astdp_connector_helper_get_modes(struct drm_connector *connector)
* Protect access to I/O registers from concurrent modesetting
* by acquiring the I/O-register lock.
*/
mutex_lock(&ast->ioregs_lock);
mutex_lock(&ast->modeset_lock);
succ = ast_astdp_read_edid(connector->dev, edid);
if (succ < 0)
goto err_mutex_unlock;
mutex_unlock(&ast->ioregs_lock);
mutex_unlock(&ast->modeset_lock);
drm_connector_update_edid_property(connector, edid);
count = drm_add_edid_modes(connector, edid);
@ -1685,7 +1685,7 @@ static int ast_astdp_connector_helper_get_modes(struct drm_connector *connector)
return count;
err_mutex_unlock:
mutex_unlock(&ast->ioregs_lock);
mutex_unlock(&ast->modeset_lock);
kfree(edid);
err_drm_connector_update_edid_property:
drm_connector_update_edid_property(connector, NULL);
@ -1870,9 +1870,9 @@ static void ast_mode_config_helper_atomic_commit_tail(struct drm_atomic_state *s
* display modes. Protect access to I/O registers by acquiring
* the I/O-register lock. Released in atomic_flush().
*/
mutex_lock(&ast->ioregs_lock);
mutex_lock(&ast->modeset_lock);
drm_atomic_helper_commit_tail_rpm(state);
mutex_unlock(&ast->ioregs_lock);
mutex_unlock(&ast->modeset_lock);
}
static const struct drm_mode_config_helper_funcs ast_mode_config_helper_funcs = {
@ -1910,6 +1910,10 @@ int ast_mode_config_init(struct ast_device *ast)
struct drm_connector *physical_connector = NULL;
int ret;
ret = drmm_mutex_init(dev, &ast->modeset_lock);
if (ret)
return ret;
ret = drmm_mode_config_init(dev);
if (ret)
return ret;

View File

@ -77,28 +77,42 @@ ast_set_def_ext_reg(struct drm_device *dev)
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xb6, 0xff, reg);
}
u32 ast_mindwm(struct ast_device *ast, u32 r)
static u32 __ast_mindwm(void __iomem *regs, u32 r)
{
uint32_t data;
u32 data;
ast_write32(ast, 0xf004, r & 0xffff0000);
ast_write32(ast, 0xf000, 0x1);
__ast_write32(regs, 0xf004, r & 0xffff0000);
__ast_write32(regs, 0xf000, 0x1);
do {
data = ast_read32(ast, 0xf004) & 0xffff0000;
data = __ast_read32(regs, 0xf004) & 0xffff0000;
} while (data != (r & 0xffff0000));
return ast_read32(ast, 0x10000 + (r & 0x0000ffff));
return __ast_read32(regs, 0x10000 + (r & 0x0000ffff));
}
static void __ast_moutdwm(void __iomem *regs, u32 r, u32 v)
{
u32 data;
__ast_write32(regs, 0xf004, r & 0xffff0000);
__ast_write32(regs, 0xf000, 0x1);
do {
data = __ast_read32(regs, 0xf004) & 0xffff0000;
} while (data != (r & 0xffff0000));
__ast_write32(regs, 0x10000 + (r & 0x0000ffff), v);
}
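/*
 * Illustrative note (not part of the patch): the P2A window protocol is
 * to program the upper 16 address bits into 0xf004, arm the window via
 * 0xf000, and poll 0xf004 until the address sticks; the low 16 bits are
 * then accessed through the aperture at 0x10000. For example, reading
 * the SCU silicon-revision register at 0x1e6e207c:
 *
 *	u32 scu7c = __ast_mindwm(regs, 0x1e6e207c);
 */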
u32 ast_mindwm(struct ast_device *ast, u32 r)
{
return __ast_mindwm(ast->regs, r);
}
void ast_moutdwm(struct ast_device *ast, u32 r, u32 v)
{
uint32_t data;
ast_write32(ast, 0xf004, r & 0xffff0000);
ast_write32(ast, 0xf000, 0x1);
do {
data = ast_read32(ast, 0xf004) & 0xffff0000;
} while (data != (r & 0xffff0000));
ast_write32(ast, 0x10000 + (r & 0x0000ffff), v);
__ast_moutdwm(ast->regs, r, v);
}
/*
@ -1987,17 +2001,18 @@ static bool ast_dram_init_2500(struct ast_device *ast)
return true;
}
void ast_patch_ahb_2500(struct ast_device *ast)
void ast_patch_ahb_2500(void __iomem *regs)
{
u32 data;
u32 data;
/* Clear bus lock condition */
ast_moutdwm(ast, 0x1e600000, 0xAEED1A03);
ast_moutdwm(ast, 0x1e600084, 0x00010000);
ast_moutdwm(ast, 0x1e600088, 0x00000000);
ast_moutdwm(ast, 0x1e6e2000, 0x1688A8A8);
data = ast_mindwm(ast, 0x1e6e2070);
if (data & 0x08000000) { /* check fast reset */
__ast_moutdwm(regs, 0x1e600000, 0xAEED1A03);
__ast_moutdwm(regs, 0x1e600084, 0x00010000);
__ast_moutdwm(regs, 0x1e600088, 0x00000000);
__ast_moutdwm(regs, 0x1e6e2000, 0x1688A8A8);
data = __ast_mindwm(regs, 0x1e6e2070);
if (data & 0x08000000) { /* check fast reset */
/*
* If "Fast restet" is enabled for ARM-ICE debugger,
* then WDT needs to enable, that
@ -2009,16 +2024,18 @@ void ast_patch_ahb_2500(struct ast_device *ast)
* [1]:= 1:WDT will be cleared and disabled after timeout occurs
* [0]:= 1:WDT enable
*/
ast_moutdwm(ast, 0x1E785004, 0x00000010);
ast_moutdwm(ast, 0x1E785008, 0x00004755);
ast_moutdwm(ast, 0x1E78500c, 0x00000033);
__ast_moutdwm(regs, 0x1E785004, 0x00000010);
__ast_moutdwm(regs, 0x1E785008, 0x00004755);
__ast_moutdwm(regs, 0x1E78500c, 0x00000033);
udelay(1000);
}
do {
ast_moutdwm(ast, 0x1e6e2000, 0x1688A8A8);
data = ast_mindwm(ast, 0x1e6e2000);
} while (data != 1);
ast_moutdwm(ast, 0x1e6e207c, 0x08000000); /* clear fast reset */
__ast_moutdwm(regs, 0x1e6e2000, 0x1688A8A8);
data = __ast_mindwm(regs, 0x1e6e2000);
} while (data != 1);
__ast_moutdwm(regs, 0x1e6e207c, 0x08000000); /* clear fast reset */
}
void ast_post_chip_2500(struct drm_device *dev)
@ -2030,7 +2047,7 @@ void ast_post_chip_2500(struct drm_device *dev)
reg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd0, 0xff);
if ((reg & AST_VRAM_INIT_STATUS_MASK) == 0) {/* vga only */
/* Clear bus lock condition */
ast_patch_ahb_2500(ast);
ast_patch_ahb_2500(ast->regs);
/* Disable watchdog */
ast_moutdwm(ast, 0x1E78502C, 0x00000000);

View File

@ -10,10 +10,17 @@
*/
#define AST_IO_MM_OFFSET (0x380)
#define AST_IO_MM_LENGTH (128)
#define AST_IO_VGAARI_W (0x40)
#define AST_IO_VGAMR_W (0x42)
#define AST_IO_VGAMR_R (0x4c)
#define AST_IO_VGAMR_IOSEL BIT(0)
#define AST_IO_VGAER (0x43)
#define AST_IO_VGAER_VGA_ENABLE BIT(0)
#define AST_IO_VGASRI (0x44)
#define AST_IO_VGADRR (0x47)
#define AST_IO_VGADWR (0x48)
@ -21,14 +28,15 @@
#define AST_IO_VGAGRI (0x4E)
#define AST_IO_VGACRI (0x54)
#define AST_IO_VGACR80_PASSWORD (0xa8)
#define AST_IO_VGACRA1_VGAIO_DISABLED BIT(1)
#define AST_IO_VGACRA1_MMIO_ENABLED BIT(2)
#define AST_IO_VGACRCB_HWC_16BPP BIT(0) /* set: ARGB4444, cleared: 2bpp palette */
#define AST_IO_VGACRCB_HWC_ENABLED BIT(1)
#define AST_IO_VGAIR1_R (0x5A)
#define AST_IO_VGAIR1_VREFRESH BIT(3)
#define AST_IO_VGAMR_R (0x4C)
/*
* Display Transmitter Type
*/

View File

@ -12,6 +12,23 @@ config DRM_PANEL_BRIDGE
help
DRM bridge wrapper of DRM panels
config DRM_AUX_BRIDGE
tristate
depends on DRM_BRIDGE && OF
select AUXILIARY_BUS
select DRM_PANEL_BRIDGE
help
Simple transparent bridge that is used by several non-DRM drivers to
build bridge chains.
config DRM_AUX_HPD_BRIDGE
tristate
depends on DRM_BRIDGE && OF
select AUXILIARY_BUS
help
Simple bridge that terminates the bridge chain and provides HPD
support.
menu "Display Interface Bridges"
depends on DRM && DRM_BRIDGE

View File

@ -1,4 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DRM_AUX_BRIDGE) += aux-bridge.o
obj-$(CONFIG_DRM_AUX_HPD_BRIDGE) += aux-hpd-bridge.o
obj-$(CONFIG_DRM_CHIPONE_ICN6211) += chipone-icn6211.o
obj-$(CONFIG_DRM_CHRONTEL_CH7033) += chrontel-ch7033.o
obj-$(CONFIG_DRM_CROS_EC_ANX7688) += cros-ec-anx7688.o

View File

@ -1298,10 +1298,32 @@ static void anx7625_config(struct anx7625_data *ctx)
XTAL_FRQ_SEL, XTAL_FRQ_27M);
}
static int anx7625_hpd_timer_config(struct anx7625_data *ctx)
{
int ret;
/* Set irq detect window to 2ms */
ret = anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT0_7, HPD_TIME & 0xFF);
ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT8_15,
(HPD_TIME >> 8) & 0xFF);
ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT16_23,
(HPD_TIME >> 16) & 0xFF);
return ret;
}
static int anx7625_read_hpd_gpio_config_status(struct anx7625_data *ctx)
{
return anx7625_reg_read(ctx, ctx->i2c.rx_p0_client, GPIO_CTRL_2);
}
static void anx7625_disable_pd_protocol(struct anx7625_data *ctx)
{
struct device *dev = ctx->dev;
int ret;
int ret, val;
/* Reset main ocm */
ret = anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 0x88, 0x40);
@ -1315,6 +1337,19 @@ static void anx7625_disable_pd_protocol(struct anx7625_data *ctx)
DRM_DEV_DEBUG_DRIVER(dev, "disable PD feature fail.\n");
else
DRM_DEV_DEBUG_DRIVER(dev, "disable PD feature succeeded.\n");
/*
* Make sure the HPD GPIO has been configured after OCM release before
* setting the HPD detect window register. Poll the status register for
* at most 40ms, then configure the HPD IRQ detect window register
*/
readx_poll_timeout(anx7625_read_hpd_gpio_config_status,
ctx, val,
((val & HPD_SOURCE) || (val < 0)),
2000, 2000 * 20);
/* Set HPD irq detect window to 2ms */
anx7625_hpd_timer_config(ctx);
}
static int anx7625_ocm_loading_check(struct anx7625_data *ctx)
@ -1437,20 +1472,6 @@ static void anx7625_start_dp_work(struct anx7625_data *ctx)
static int anx7625_read_hpd_status_p0(struct anx7625_data *ctx)
{
int ret;
/* Set irq detect window to 2ms */
ret = anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT0_7, HPD_TIME & 0xFF);
ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT8_15,
(HPD_TIME >> 8) & 0xFF);
ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT16_23,
(HPD_TIME >> 16) & 0xFF);
if (ret < 0)
return ret;
return anx7625_reg_read(ctx, ctx->i2c.rx_p0_client, SYSTEM_STSTUS);
}
@ -1464,9 +1485,6 @@ static int _anx7625_hpd_polling(struct anx7625_data *ctx,
if (ctx->pdata.intp_irq)
return 0;
/* Delay 200ms for FW HPD de-bounce */
msleep(200);
ret = readx_poll_timeout(anx7625_read_hpd_status_p0,
ctx, val,
((val & HPD_STATUS) || (val < 0)),

View File

@ -259,6 +259,10 @@
#define AP_MIPI_RX_EN BIT(5) /* 1: MIPI RX input in 0: no RX in */
#define AP_DISABLE_PD BIT(6)
#define AP_DISABLE_DISPLAY BIT(7)
#define GPIO_CTRL_2 0x49
#define HPD_SOURCE BIT(6)
/***************************************************************/
/* Register definition of device address 0x84 */
#define MIPI_PHY_CONTROL_3 0x03

View File

@ -0,0 +1,140 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright (C) 2023 Linaro Ltd.
*
* Author: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
*/
#include <linux/auxiliary_bus.h>
#include <linux/module.h>
#include <drm/drm_bridge.h>
#include <drm/bridge/aux-bridge.h>
static DEFINE_IDA(drm_aux_bridge_ida);
static void drm_aux_bridge_release(struct device *dev)
{
struct auxiliary_device *adev = to_auxiliary_dev(dev);
ida_free(&drm_aux_bridge_ida, adev->id);
kfree(adev);
}
static void drm_aux_bridge_unregister_adev(void *_adev)
{
struct auxiliary_device *adev = _adev;
auxiliary_device_delete(adev);
auxiliary_device_uninit(adev);
}
/**
* drm_aux_bridge_register - Create a simple bridge device to link the chain
* @parent: device instance providing this bridge
*
* Creates a simple DRM bridge that doesn't implement any drm_bridge
* operations. Such bridges merely fill a place in the bridge chain linking
* surrounding DRM bridges.
*
* Return: zero on success, negative error code on failure
*/
int drm_aux_bridge_register(struct device *parent)
{
struct auxiliary_device *adev;
int ret;
adev = kzalloc(sizeof(*adev), GFP_KERNEL);
if (!adev)
return -ENOMEM;
ret = ida_alloc(&drm_aux_bridge_ida, GFP_KERNEL);
if (ret < 0) {
kfree(adev);
return ret;
}
adev->id = ret;
adev->name = "aux_bridge";
adev->dev.parent = parent;
adev->dev.of_node = parent->of_node;
adev->dev.release = drm_aux_bridge_release;
ret = auxiliary_device_init(adev);
if (ret) {
ida_free(&drm_aux_bridge_ida, adev->id);
kfree(adev);
return ret;
}
ret = auxiliary_device_add(adev);
if (ret) {
auxiliary_device_uninit(adev);
return ret;
}
return devm_add_action_or_reset(parent, drm_aux_bridge_unregister_adev, adev);
}
EXPORT_SYMBOL_GPL(drm_aux_bridge_register);
struct drm_aux_bridge_data {
struct drm_bridge bridge;
struct drm_bridge *next_bridge;
struct device *dev;
};
static int drm_aux_bridge_attach(struct drm_bridge *bridge,
enum drm_bridge_attach_flags flags)
{
struct drm_aux_bridge_data *data;
if (!(flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR))
return -EINVAL;
data = container_of(bridge, struct drm_aux_bridge_data, bridge);
return drm_bridge_attach(bridge->encoder, data->next_bridge, bridge,
DRM_BRIDGE_ATTACH_NO_CONNECTOR);
}
static const struct drm_bridge_funcs drm_aux_bridge_funcs = {
.attach = drm_aux_bridge_attach,
};
static int drm_aux_bridge_probe(struct auxiliary_device *auxdev,
const struct auxiliary_device_id *id)
{
struct drm_aux_bridge_data *data;
data = devm_kzalloc(&auxdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->dev = &auxdev->dev;
data->next_bridge = devm_drm_of_get_bridge(&auxdev->dev, auxdev->dev.of_node, 0, 0);
if (IS_ERR(data->next_bridge))
return dev_err_probe(&auxdev->dev, PTR_ERR(data->next_bridge),
"failed to acquire drm_bridge\n");
data->bridge.funcs = &drm_aux_bridge_funcs;
data->bridge.of_node = data->dev->of_node;
return devm_drm_bridge_add(data->dev, &data->bridge);
}
static const struct auxiliary_device_id drm_aux_bridge_table[] = {
{ .name = KBUILD_MODNAME ".aux_bridge" },
{},
};
MODULE_DEVICE_TABLE(auxiliary, drm_aux_bridge_table);
static struct auxiliary_driver drm_aux_bridge_drv = {
.name = "aux_bridge",
.id_table = drm_aux_bridge_table,
.probe = drm_aux_bridge_probe,
};
module_auxiliary_driver(drm_aux_bridge_drv);
MODULE_AUTHOR("Dmitry Baryshkov <dmitry.baryshkov@linaro.org>");
MODULE_DESCRIPTION("DRM transparent bridge");
MODULE_LICENSE("GPL");
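As a usage sketch (the consumer below is hypothetical, not part of this patch): a non-DRM driver that only needs to occupy a link in the bridge chain calls the register helper from probe and lets the devm action above handle teardown:

#include <linux/platform_device.h>
#include <drm/bridge/aux-bridge.h>

static int example_probe(struct platform_device *pdev)
{
	/* Spawns the "aux_bridge" auxiliary device; its probe resolves
	 * the next bridge from this device's OF graph and inserts a
	 * transparent drm_bridge in front of it. */
	return drm_aux_bridge_register(&pdev->dev);
}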

View File

@ -0,0 +1,163 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright (C) 2023 Linaro Ltd.
*
* Author: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
*/
#include <linux/auxiliary_bus.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <drm/drm_bridge.h>
#include <drm/bridge/aux-bridge.h>
static DEFINE_IDA(drm_aux_hpd_bridge_ida);
struct drm_aux_hpd_bridge_data {
struct drm_bridge bridge;
struct device *dev;
};
static void drm_aux_hpd_bridge_release(struct device *dev)
{
struct auxiliary_device *adev = to_auxiliary_dev(dev);
ida_free(&drm_aux_hpd_bridge_ida, adev->id);
of_node_put(adev->dev.platform_data);
kfree(adev);
}
static void drm_aux_hpd_bridge_unregister_adev(void *_adev)
{
struct auxiliary_device *adev = _adev;
auxiliary_device_delete(adev);
auxiliary_device_uninit(adev);
}
/**
* drm_dp_hpd_bridge_register - Create a simple HPD DisplayPort bridge
* @parent: device instance providing this bridge
* @np: device node pointer corresponding to this bridge instance
*
* Creates a simple DRM bridge with the type set to
* DRM_MODE_CONNECTOR_DisplayPort, which terminates the bridge chain and
* can send HPD events.
*
* Return: device instance that will handle created bridge or an error code
* encoded into the pointer.
*/
struct device *drm_dp_hpd_bridge_register(struct device *parent,
struct device_node *np)
{
struct auxiliary_device *adev;
int ret;
adev = kzalloc(sizeof(*adev), GFP_KERNEL);
if (!adev)
return ERR_PTR(-ENOMEM);
ret = ida_alloc(&drm_aux_hpd_bridge_ida, GFP_KERNEL);
if (ret < 0) {
kfree(adev);
return ERR_PTR(ret);
}
adev->id = ret;
adev->name = "dp_hpd_bridge";
adev->dev.parent = parent;
adev->dev.of_node = parent->of_node;
adev->dev.release = drm_aux_hpd_bridge_release;
adev->dev.platform_data = np;
ret = auxiliary_device_init(adev);
if (ret) {
ida_free(&drm_aux_hpd_bridge_ida, adev->id);
kfree(adev);
return ERR_PTR(ret);
}
ret = auxiliary_device_add(adev);
if (ret) {
auxiliary_device_uninit(adev);
return ERR_PTR(ret);
}
ret = devm_add_action_or_reset(parent, drm_aux_hpd_bridge_unregister_adev, adev);
if (ret)
return ERR_PTR(ret);
return &adev->dev;
}
EXPORT_SYMBOL_GPL(drm_dp_hpd_bridge_register);
/**
* drm_aux_hpd_bridge_notify - notify hot plug detection events
* @dev: device created for the HPD bridge
* @status: output connection status
*
* A wrapper around drm_bridge_hpd_notify() that is used to report hot plug
* detection events for bridges created via drm_dp_hpd_bridge_register().
*
* This function shall be called in a context that can sleep.
*/
void drm_aux_hpd_bridge_notify(struct device *dev, enum drm_connector_status status)
{
struct auxiliary_device *adev = to_auxiliary_dev(dev);
struct drm_aux_hpd_bridge_data *data = auxiliary_get_drvdata(adev);
if (!data)
return;
drm_bridge_hpd_notify(&data->bridge, status);
}
EXPORT_SYMBOL_GPL(drm_aux_hpd_bridge_notify);
static int drm_aux_hpd_bridge_attach(struct drm_bridge *bridge,
enum drm_bridge_attach_flags flags)
{
return flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR ? 0 : -EINVAL;
}
static const struct drm_bridge_funcs drm_aux_hpd_bridge_funcs = {
.attach = drm_aux_hpd_bridge_attach,
};
static int drm_aux_hpd_bridge_probe(struct auxiliary_device *auxdev,
const struct auxiliary_device_id *id)
{
struct drm_aux_hpd_bridge_data *data;
data = devm_kzalloc(&auxdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->dev = &auxdev->dev;
data->bridge.funcs = &drm_aux_hpd_bridge_funcs;
data->bridge.of_node = dev_get_platdata(data->dev);
data->bridge.ops = DRM_BRIDGE_OP_HPD;
data->bridge.type = id->driver_data;
auxiliary_set_drvdata(auxdev, data);
return devm_drm_bridge_add(data->dev, &data->bridge);
}
static const struct auxiliary_device_id drm_aux_hpd_bridge_table[] = {
{ .name = KBUILD_MODNAME ".dp_hpd_bridge", .driver_data = DRM_MODE_CONNECTOR_DisplayPort, },
{},
};
MODULE_DEVICE_TABLE(auxiliary, drm_aux_hpd_bridge_table);
static struct auxiliary_driver drm_aux_hpd_bridge_drv = {
.name = "aux_hpd_bridge",
.id_table = drm_aux_hpd_bridge_table,
.probe = drm_aux_hpd_bridge_probe,
};
module_auxiliary_driver(drm_aux_hpd_bridge_drv);
MODULE_AUTHOR("Dmitry Baryshkov <dmitry.baryshkov@linaro.org>");
MODULE_DESCRIPTION("DRM HPD bridge");
MODULE_LICENSE("GPL");
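And a hedged sketch of the intended consumer (the altmode driver below is hypothetical): register the HPD-terminating bridge once, then forward connection changes from the controller's event path; per the kernel-doc above, the notify call must come from a sleepable context:

#include <linux/err.h>
#include <linux/of.h>
#include <drm/drm_connector.h>
#include <drm/bridge/aux-bridge.h>

struct example_altmode {
	struct device *hpd_bridge;	/* handle from the register call */
};

static int example_altmode_init(struct example_altmode *alt,
				struct device *parent,
				struct device_node *dp_node)
{
	alt->hpd_bridge = drm_dp_hpd_bridge_register(parent, dp_node);
	return PTR_ERR_OR_ZERO(alt->hpd_bridge);
}

static void example_altmode_plug(struct example_altmode *alt, bool connected)
{
	drm_aux_hpd_bridge_notify(alt->hpd_bridge,
				  connected ? connector_status_connected
					    : connector_status_disconnected);
}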

View File

@ -403,7 +403,8 @@ static int _cdns_mhdp_hdcp_disable(struct cdns_mhdp_device *mhdp)
static int _cdns_mhdp_hdcp_enable(struct cdns_mhdp_device *mhdp, u8 content_type)
{
int ret, tries = 3;
int ret = -EINVAL;
int tries = 3;
u32 i;
for (i = 0; i < tries; i++) {

View File

@ -226,8 +226,8 @@ dphy_pll_get_configure_from_opts(struct imx93_dsi *dsi,
unsigned long fout;
unsigned long best_fout = 0;
unsigned int fvco_div;
unsigned int min_n, max_n, n, best_n;
unsigned long m, best_m;
unsigned int min_n, max_n, n, best_n = UINT_MAX;
unsigned long m, best_m = 0;
unsigned long min_delta = ULONG_MAX;
unsigned long delta;
u64 tmp;

View File

@ -43,6 +43,8 @@ struct lt8912 {
struct videomode mode;
struct regulator_bulk_data supplies[7];
u8 data_lanes;
bool is_power_on;
};
@ -257,6 +259,12 @@ static int lt8912_free_i2c(struct lt8912 *lt)
static int lt8912_hard_power_on(struct lt8912 *lt)
{
int ret;
ret = regulator_bulk_enable(ARRAY_SIZE(lt->supplies), lt->supplies);
if (ret)
return ret;
gpiod_set_value_cansleep(lt->gp_reset, 0);
msleep(20);
@ -267,6 +275,9 @@ static void lt8912_hard_power_off(struct lt8912 *lt)
{
gpiod_set_value_cansleep(lt->gp_reset, 1);
msleep(20);
regulator_bulk_disable(ARRAY_SIZE(lt->supplies), lt->supplies);
lt->is_power_on = false;
}
@ -634,6 +645,48 @@ static const struct drm_bridge_funcs lt8912_bridge_funcs = {
.get_edid = lt8912_bridge_get_edid,
};
static int lt8912_bridge_resume(struct device *dev)
{
struct lt8912 *lt = dev_get_drvdata(dev);
int ret;
ret = lt8912_hard_power_on(lt);
if (ret)
return ret;
ret = lt8912_soft_power_on(lt);
if (ret)
return ret;
return lt8912_video_on(lt);
}
static int lt8912_bridge_suspend(struct device *dev)
{
struct lt8912 *lt = dev_get_drvdata(dev);
lt8912_hard_power_off(lt);
return 0;
}
static DEFINE_SIMPLE_DEV_PM_OPS(lt8912_bridge_pm_ops, lt8912_bridge_suspend, lt8912_bridge_resume);
static int lt8912_get_regulators(struct lt8912 *lt)
{
unsigned int i;
const char * const supply_names[] = {
"vdd", "vccmipirx", "vccsysclk", "vcclvdstx",
"vcchdmitx", "vcclvdspll", "vcchdmipll"
};
for (i = 0; i < ARRAY_SIZE(lt->supplies); i++)
lt->supplies[i].supply = supply_names[i];
return devm_regulator_bulk_get(lt->dev, ARRAY_SIZE(lt->supplies),
lt->supplies);
}
static int lt8912_parse_dt(struct lt8912 *lt)
{
struct gpio_desc *gp_reset;
@ -685,6 +738,10 @@ static int lt8912_parse_dt(struct lt8912 *lt)
goto err_free_host_node;
}
ret = lt8912_get_regulators(lt);
if (ret)
goto err_free_host_node;
of_node_put(port_node);
return 0;
@ -770,6 +827,7 @@ static struct i2c_driver lt8912_i2c_driver = {
.driver = {
.name = "lt8912",
.of_match_table = lt8912_dt_match,
.pm = pm_sleep_ptr(&lt8912_bridge_pm_ops),
},
.probe = lt8912_probe,
.remove = lt8912_remove,

View File

@ -54,13 +54,13 @@ static int ptn3460_read_bytes(struct ptn3460_bridge *ptn_bridge, char addr,
int ret;
ret = i2c_master_send(ptn_bridge->client, &addr, 1);
if (ret <= 0) {
if (ret < 0) {
DRM_ERROR("Failed to send i2c command, ret=%d\n", ret);
return ret;
}
ret = i2c_master_recv(ptn_bridge->client, buf, len);
if (ret <= 0) {
if (ret < 0) {
DRM_ERROR("Failed to recv i2c data, ret=%d\n", ret);
return ret;
}
@ -78,7 +78,7 @@ static int ptn3460_write_byte(struct ptn3460_bridge *ptn_bridge, char addr,
buf[1] = val;
ret = i2c_master_send(ptn_bridge->client, buf, ARRAY_SIZE(buf));
if (ret <= 0) {
if (ret < 0) {
DRM_ERROR("Failed to send i2c command, ret=%d\n", ret);
return ret;
}

View File

@ -2273,7 +2273,7 @@ static int tc_probe(struct i2c_client *client)
} else {
if (tc->hpd_pin < 0 || tc->hpd_pin > 1) {
dev_err(dev, "failed to parse HPD number\n");
return ret;
return -EINVAL;
}
}

View File

@ -1413,11 +1413,9 @@ static int ti_sn_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
int ret;
if (!pdata->pwm_enabled) {
ret = pm_runtime_get_sync(pdata->dev);
if (ret < 0) {
pm_runtime_put_sync(pdata->dev);
ret = pm_runtime_resume_and_get(pdata->dev);
if (ret < 0)
return ret;
}
}
if (state->enabled) {

View File

@ -1,451 +0,0 @@
/*
* \file drm_agpsupport.c
* DRM support for AGP/GART backend
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*/
/*
* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/slab.h>
#if IS_ENABLED(CONFIG_AGP)
#include <asm/agp.h>
#endif
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_print.h>
#include "drm_legacy.h"
#if IS_ENABLED(CONFIG_AGP)
/*
* Get AGP information.
*
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device has been initialized and acquired and fills in the
* drm_agp_info structure with the information in drm_agp_head::agp_info.
*/
int drm_legacy_agp_info(struct drm_device *dev, struct drm_agp_info *info)
{
struct agp_kern_info *kern;
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
kern = &dev->agp->agp_info;
info->agp_version_major = kern->version.major;
info->agp_version_minor = kern->version.minor;
info->mode = kern->mode;
info->aperture_base = kern->aper_base;
info->aperture_size = kern->aper_size * 1024 * 1024;
info->memory_allowed = kern->max_memory << PAGE_SHIFT;
info->memory_used = kern->current_memory << PAGE_SHIFT;
info->id_vendor = kern->device->vendor;
info->id_device = kern->device->device;
return 0;
}
EXPORT_SYMBOL(drm_legacy_agp_info);
int drm_legacy_agp_info_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_agp_info *info = data;
int err;
err = drm_legacy_agp_info(dev, info);
if (err)
return err;
return 0;
}
/*
* Acquire the AGP device.
*
* \param dev DRM device that is to acquire AGP.
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device hasn't been acquired before and calls
* \c agp_backend_acquire.
*/
int drm_legacy_agp_acquire(struct drm_device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev->dev);
if (!dev->agp)
return -ENODEV;
if (dev->agp->acquired)
return -EBUSY;
dev->agp->bridge = agp_backend_acquire(pdev);
if (!dev->agp->bridge)
return -ENODEV;
dev->agp->acquired = 1;
return 0;
}
EXPORT_SYMBOL(drm_legacy_agp_acquire);
/*
* Acquire the AGP device (ioctl).
*
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device hasn't been acquired before and calls
* \c agp_backend_acquire.
*/
int drm_legacy_agp_acquire_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
return drm_legacy_agp_acquire((struct drm_device *)file_priv->minor->dev);
}
/*
* Release the AGP device.
*
* \param dev DRM device that is to release AGP.
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device has been acquired and calls \c agp_backend_release.
*/
int drm_legacy_agp_release(struct drm_device *dev)
{
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
agp_backend_release(dev->agp->bridge);
dev->agp->acquired = 0;
return 0;
}
EXPORT_SYMBOL(drm_legacy_agp_release);
int drm_legacy_agp_release_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
return drm_legacy_agp_release(dev);
}
/*
* Enable the AGP bus.
*
* \param dev DRM device that has previously acquired AGP.
* \param mode Requested AGP mode.
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device has been acquired but not enabled, and calls
* \c agp_enable.
*/
int drm_legacy_agp_enable(struct drm_device *dev, struct drm_agp_mode mode)
{
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
dev->agp->mode = mode.mode;
agp_enable(dev->agp->bridge, mode.mode);
dev->agp->enabled = 1;
return 0;
}
EXPORT_SYMBOL(drm_legacy_agp_enable);
int drm_legacy_agp_enable_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_agp_mode *mode = data;
return drm_legacy_agp_enable(dev, *mode);
}
/*
* Allocate AGP memory.
*
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device is present and has been acquired, allocates the
* memory via agp_allocate_memory() and creates a drm_agp_mem entry for it.
*/
int drm_legacy_agp_alloc(struct drm_device *dev, struct drm_agp_buffer *request)
{
struct drm_agp_mem *entry;
struct agp_memory *memory;
unsigned long pages;
u32 type;
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
entry = kzalloc(sizeof(*entry), GFP_KERNEL);
if (!entry)
return -ENOMEM;
pages = DIV_ROUND_UP(request->size, PAGE_SIZE);
type = (u32) request->type;
memory = agp_allocate_memory(dev->agp->bridge, pages, type);
if (!memory) {
kfree(entry);
return -ENOMEM;
}
entry->handle = (unsigned long)memory->key + 1;
entry->memory = memory;
entry->bound = 0;
entry->pages = pages;
list_add(&entry->head, &dev->agp->memory);
request->handle = entry->handle;
request->physical = memory->physical;
return 0;
}
EXPORT_SYMBOL(drm_legacy_agp_alloc);
int drm_legacy_agp_alloc_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_agp_buffer *request = data;
return drm_legacy_agp_alloc(dev, request);
}
/*
* Search for the AGP memory entry associated with a handle.
*
* \param dev DRM device structure.
* \param handle AGP memory handle.
* \return pointer to the drm_agp_mem structure associated with \p handle.
*
* Walks through drm_agp_head::memory until finding a matching handle.
*/
static struct drm_agp_mem *drm_legacy_agp_lookup_entry(struct drm_device *dev,
unsigned long handle)
{
struct drm_agp_mem *entry;
list_for_each_entry(entry, &dev->agp->memory, head) {
if (entry->handle == handle)
return entry;
}
return NULL;
}
/*
* Unbind AGP memory from the GATT (ioctl).
*
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device is present and acquired, looks-up the AGP memory
* entry and passes it to the unbind_agp() function.
*/
int drm_legacy_agp_unbind(struct drm_device *dev, struct drm_agp_binding *request)
{
struct drm_agp_mem *entry;
int ret;
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
entry = drm_legacy_agp_lookup_entry(dev, request->handle);
if (!entry || !entry->bound)
return -EINVAL;
ret = agp_unbind_memory(entry->memory);
if (ret == 0)
entry->bound = 0;
return ret;
}
EXPORT_SYMBOL(drm_legacy_agp_unbind);
int drm_legacy_agp_unbind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_agp_binding *request = data;
return drm_legacy_agp_unbind(dev, request);
}
/*
* Bind AGP memory into the GATT (ioctl)
*
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device is present and has been acquired and that no memory
* is currently bound into the GATT. Looks-up the AGP memory entry and passes
* it to bind_agp() function.
*/
int drm_legacy_agp_bind(struct drm_device *dev, struct drm_agp_binding *request)
{
struct drm_agp_mem *entry;
int retcode;
int page;
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
entry = drm_legacy_agp_lookup_entry(dev, request->handle);
if (!entry || entry->bound)
return -EINVAL;
page = DIV_ROUND_UP(request->offset, PAGE_SIZE);
retcode = agp_bind_memory(entry->memory, page);
if (retcode)
return retcode;
entry->bound = dev->agp->base + (page << PAGE_SHIFT);
DRM_DEBUG("base = 0x%lx entry->bound = 0x%lx\n",
dev->agp->base, entry->bound);
return 0;
}
EXPORT_SYMBOL(drm_legacy_agp_bind);
int drm_legacy_agp_bind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_agp_binding *request = data;
return drm_legacy_agp_bind(dev, request);
}
/*
* Free AGP memory (ioctl).
*
* \return zero on success or a negative number on failure.
*
* Verifies the AGP device is present and has been acquired and looks up the
* AGP memory entry. If the memory is currently bound, unbind it via
* unbind_agp(). Frees it via free_agp() as well as the entry itself
* and unlinks from the doubly linked list it's inserted in.
*/
int drm_legacy_agp_free(struct drm_device *dev, struct drm_agp_buffer *request)
{
struct drm_agp_mem *entry;
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
entry = drm_legacy_agp_lookup_entry(dev, request->handle);
if (!entry)
return -EINVAL;
if (entry->bound)
agp_unbind_memory(entry->memory);
list_del(&entry->head);
agp_free_memory(entry->memory);
kfree(entry);
return 0;
}
EXPORT_SYMBOL(drm_legacy_agp_free);
int drm_legacy_agp_free_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_agp_buffer *request = data;
return drm_legacy_agp_free(dev, request);
}
/*
* Initialize the AGP resources.
*
* \return pointer to a drm_agp_head structure.
*
* Gets the drm_agp_t structure which is made available by the agpgart module
* via the inter_module_* functions. Creates and initializes a drm_agp_head
* structure.
*
* Note that final cleanup of the kmalloced structure is directly done in
* drm_pci_agp_destroy.
*/
struct drm_agp_head *drm_legacy_agp_init(struct drm_device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev->dev);
struct drm_agp_head *head = NULL;
head = kzalloc(sizeof(*head), GFP_KERNEL);
if (!head)
return NULL;
head->bridge = agp_find_bridge(pdev);
if (!head->bridge) {
head->bridge = agp_backend_acquire(pdev);
if (!head->bridge) {
kfree(head);
return NULL;
}
agp_copy_info(head->bridge, &head->agp_info);
agp_backend_release(head->bridge);
} else {
agp_copy_info(head->bridge, &head->agp_info);
}
if (head->agp_info.chipset == NOT_SUPPORTED) {
kfree(head);
return NULL;
}
INIT_LIST_HEAD(&head->memory);
head->cant_use_aperture = head->agp_info.cant_use_aperture;
head->page_mask = head->agp_info.page_mask;
head->base = head->agp_info.aper_base;
return head;
}
/* Only exported for i810.ko */
EXPORT_SYMBOL(drm_legacy_agp_init);
/**
* drm_legacy_agp_clear - Clear AGP resource list
* @dev: DRM device
*
* Iterate over all AGP resources and remove them. But keep the AGP head
* intact so it can still be used. It is safe to call this if AGP is disabled or
* was already removed.
*
* Cleanup is only done for drivers who have DRIVER_LEGACY set.
*/
void drm_legacy_agp_clear(struct drm_device *dev)
{
struct drm_agp_mem *entry, *tempe;
if (!dev->agp)
return;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return;
list_for_each_entry_safe(entry, tempe, &dev->agp->memory, head) {
if (entry->bound)
agp_unbind_memory(entry->memory);
agp_free_memory(entry->memory);
kfree(entry);
}
INIT_LIST_HEAD(&dev->agp->memory);
if (dev->agp->acquired)
drm_legacy_agp_release(dev);
dev->agp->acquired = 0;
dev->agp->enabled = 0;
}
#endif

View File

@ -1773,6 +1773,7 @@ static void __drm_state_dump(struct drm_device *dev, struct drm_printer *p,
struct drm_crtc *crtc;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_private_obj *obj;
if (!drm_drv_uses_atomic_modeset(dev))
return;
@ -1801,6 +1802,14 @@ static void __drm_state_dump(struct drm_device *dev, struct drm_printer *p,
if (take_locks)
drm_modeset_unlock(&dev->mode_config.connection_mutex);
drm_connector_list_iter_end(&conn_iter);
list_for_each_entry(obj, &config->privobj_list, head) {
if (take_locks)
drm_modeset_lock(&obj->lock, NULL);
drm_atomic_private_obj_print_state(p, obj->state);
if (take_locks)
drm_modeset_unlock(&obj->lock);
}
}
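
The loop added to __drm_state_dump() above prints every &drm_private_obj registered on the device. For the dump to show driver-specific detail, the object's state funcs can implement the optional print hook that this series routes through drm_atomic_private_obj_print_state(); a hedged sketch (names and fields hypothetical, hook signature assumed):

#include <drm/drm_atomic.h>
#include <drm/drm_print.h>

/* Mandatory duplicate/destroy hooks, bodies omitted in this sketch. */
static struct drm_private_state *
example_duplicate_state(struct drm_private_obj *obj);
static void example_destroy_state(struct drm_private_obj *obj,
				  struct drm_private_state *state);

static void example_print_state(struct drm_printer *p,
				const struct drm_private_state *state)
{
	/* Decode whatever the driver keeps in its private state. */
	drm_printf(p, "\texample-private-obj\n");
}

static const struct drm_private_state_funcs example_funcs = {
	.atomic_duplicate_state = example_duplicate_state,
	.atomic_destroy_state = example_destroy_state,
	.atomic_print_state = example_print_state,
};
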
/**

View File

@ -275,6 +275,20 @@ void __drm_atomic_helper_plane_state_reset(struct drm_plane_state *plane_state,
plane_state->normalized_zpos = val;
}
}
if (plane->hotspot_x_property) {
if (!drm_object_property_get_default_value(&plane->base,
plane->hotspot_x_property,
&val))
plane_state->hotspot_x = val;
}
if (plane->hotspot_y_property) {
if (!drm_object_property_get_default_value(&plane->base,
plane->hotspot_y_property,
&val))
plane_state->hotspot_y = val;
}
}
EXPORT_SYMBOL(__drm_atomic_helper_plane_state_reset);
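
These defaults back the new cursor-plane hotspot properties wired into the uAPI below. A hedged userspace sketch of setting them with libdrm during an atomic commit; the property IDs must be discovered at runtime via drmModeObjectGetProperties(), and the property names "HOTSPOT_X"/"HOTSPOT_Y" are assumed from this series:

#include <stdint.h>
#include <xf86drmMode.h>

/* hotspot_x_id/hotspot_y_id: IDs of the cursor plane's "HOTSPOT_X" and
 * "HOTSPOT_Y" properties, looked up once at startup (lookup not shown). */
static int set_cursor_hotspot(drmModeAtomicReq *req, uint32_t cursor_plane_id,
			      uint32_t hotspot_x_id, uint32_t hotspot_y_id,
			      uint64_t hx, uint64_t hy)
{
	if (drmModeAtomicAddProperty(req, cursor_plane_id, hotspot_x_id, hx) < 0)
		return -1;
	if (drmModeAtomicAddProperty(req, cursor_plane_id, hotspot_y_id, hy) < 0)
		return -1;
	return 0;
}
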

View File

@ -593,6 +593,22 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane,
} else if (plane->funcs->atomic_set_property) {
return plane->funcs->atomic_set_property(plane, state,
property, val);
} else if (property == plane->hotspot_x_property) {
if (plane->type != DRM_PLANE_TYPE_CURSOR) {
drm_dbg_atomic(plane->dev,
"[PLANE:%d:%s] is not a cursor plane: 0x%llx\n",
plane->base.id, plane->name, val);
return -EINVAL;
}
state->hotspot_x = val;
} else if (property == plane->hotspot_y_property) {
if (plane->type != DRM_PLANE_TYPE_CURSOR) {
drm_dbg_atomic(plane->dev,
"[PLANE:%d:%s] is not a cursor plane: 0x%llx\n",
plane->base.id, plane->name, val);
return -EINVAL;
}
state->hotspot_y = val;
} else {
drm_dbg_atomic(plane->dev,
"[PLANE:%d:%s] unknown property [PROP:%d:%s]\n",
@ -653,6 +669,10 @@ drm_atomic_plane_get_property(struct drm_plane *plane,
*val = state->scaling_filter;
} else if (plane->funcs->atomic_get_property) {
return plane->funcs->atomic_get_property(plane, state, property, val);
} else if (property == plane->hotspot_x_property) {
*val = state->hotspot_x;
} else if (property == plane->hotspot_y_property) {
*val = state->hotspot_y;
} else {
drm_dbg_atomic(dev,
"[PLANE:%d:%s] unknown property [PROP:%d:%s]\n",
@ -1006,13 +1026,28 @@ out:
return ret;
}
static int drm_atomic_check_prop_changes(int ret, uint64_t old_val, uint64_t prop_value,
struct drm_property *prop)
{
if (ret != 0 || old_val != prop_value) {
drm_dbg_atomic(prop->dev,
"[PROP:%d:%s] No prop can be changed during async flip\n",
prop->base.id, prop->name);
return -EINVAL;
}
return 0;
}
int drm_atomic_set_property(struct drm_atomic_state *state,
struct drm_file *file_priv,
struct drm_mode_object *obj,
struct drm_property *prop,
uint64_t prop_value)
u64 prop_value,
bool async_flip)
{
struct drm_mode_object *ref;
u64 old_val;
int ret;
if (!drm_property_change_valid_get(prop, prop_value, &ref))
@ -1029,6 +1064,13 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
break;
}
if (async_flip) {
ret = drm_atomic_connector_get_property(connector, connector_state,
prop, &old_val);
ret = drm_atomic_check_prop_changes(ret, old_val, prop_value, prop);
break;
}
ret = drm_atomic_connector_set_property(connector,
connector_state, file_priv,
prop, prop_value);
@ -1044,6 +1086,13 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
break;
}
if (async_flip) {
ret = drm_atomic_crtc_get_property(crtc, crtc_state,
prop, &old_val);
ret = drm_atomic_check_prop_changes(ret, old_val, prop_value, prop);
break;
}
ret = drm_atomic_crtc_set_property(crtc,
crtc_state, prop, prop_value);
break;
@ -1051,6 +1100,7 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
case DRM_MODE_OBJECT_PLANE: {
struct drm_plane *plane = obj_to_plane(obj);
struct drm_plane_state *plane_state;
struct drm_mode_config *config = &plane->dev->mode_config;
plane_state = drm_atomic_get_plane_state(state, plane);
if (IS_ERR(plane_state)) {
@ -1058,6 +1108,21 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
break;
}
if (async_flip && prop != config->prop_fb_id) {
ret = drm_atomic_plane_get_property(plane, plane_state,
prop, &old_val);
ret = drm_atomic_check_prop_changes(ret, old_val, prop_value, prop);
break;
}
if (async_flip && plane_state->plane->type != DRM_PLANE_TYPE_PRIMARY) {
drm_dbg_atomic(prop->dev,
"[OBJECT:%d] Only primary planes can be changed during async flip\n",
obj->id);
ret = -EINVAL;
break;
}
ret = drm_atomic_plane_set_property(plane,
plane_state, file_priv,
prop, prop_value);
@ -1323,6 +1388,18 @@ static void complete_signaling(struct drm_device *dev,
kfree(fence_state);
}
static void
set_async_flip(struct drm_atomic_state *state)
{
struct drm_crtc *crtc;
struct drm_crtc_state *crtc_state;
int i;
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
crtc_state->async_flip = true;
}
}
int drm_mode_atomic_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv)
{
@ -1337,6 +1414,7 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
struct drm_out_fence_state *fence_state;
int ret = 0;
unsigned int i, j, num_fences;
bool async_flip = false;
/* disallow for drivers not supporting atomic: */
if (!drm_core_check_feature(dev, DRIVER_ATOMIC))
@ -1363,9 +1441,13 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
}
if (arg->flags & DRM_MODE_PAGE_FLIP_ASYNC) {
drm_dbg_atomic(dev,
"commit failed: invalid flag DRM_MODE_PAGE_FLIP_ASYNC\n");
return -EINVAL;
if (!dev->mode_config.async_page_flip) {
drm_dbg_atomic(dev,
"commit failed: DRM_MODE_PAGE_FLIP_ASYNC not supported\n");
return -EINVAL;
}
async_flip = true;
}
/* can't test and expect an event at the same time. */
@ -1450,8 +1532,8 @@ retry:
goto out;
}
ret = drm_atomic_set_property(state, file_priv,
obj, prop, prop_value);
ret = drm_atomic_set_property(state, file_priv, obj,
prop, prop_value, async_flip);
if (ret) {
drm_mode_object_put(obj);
goto out;
@ -1468,6 +1550,9 @@ retry:
if (ret)
goto out;
if (arg->flags & DRM_MODE_PAGE_FLIP_ASYNC)
set_async_flip(state);
if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY) {
ret = drm_atomic_check_only(state);
} else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) {
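
Net effect of the ioctl changes above: DRM_MODE_PAGE_FLIP_ASYNC is now accepted on atomic commits when the driver advertises async_page_flip, and the per-property checks earlier in the file reject anything but an FB_ID change on a primary plane. A hedged libdrm sketch of such a commit (fd, plane and property IDs assumed to be set up elsewhere):

#include <errno.h>
#include <stdint.h>
#include <xf86drmMode.h>

static int async_atomic_flip(int fd, uint32_t plane_id, uint32_t fb_prop_id,
			     uint32_t new_fb_id)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	if (!req)
		return -ENOMEM;
	/* Only the primary plane's FB_ID may change; any other property
	 * in the request makes the commit fail with -EINVAL. */
	drmModeAtomicAddProperty(req, plane_id, fb_prop_id, new_fb_id);
	ret = drmModeAtomicCommit(fd, req, DRM_MODE_PAGE_FLIP_ASYNC, NULL);
	drmModeAtomicFree(req);
	return ret;
}
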

View File

@ -37,13 +37,12 @@
#include <drm/drm_print.h>
#include "drm_internal.h"
#include "drm_legacy.h"
/**
* DOC: master and authentication
*
* &struct drm_master is used to track groups of clients with open
* primary/legacy device nodes. For every &struct drm_file which has had at
* primary device nodes. For every &struct drm_file which has at
* least once successfully become the device master (either through the
* SET_MASTER IOCTL, or implicitly through opening the primary device node when
* no one else is the current master at that time) there exists one &drm_master.
@ -139,7 +138,6 @@ struct drm_master *drm_master_create(struct drm_device *dev)
return NULL;
kref_init(&master->refcount);
drm_master_legacy_init(master);
idr_init_base(&master->magic_map, 1);
master->dev = dev;
@ -365,8 +363,6 @@ void drm_master_release(struct drm_file *file_priv)
if (!drm_is_current_master_locked(file_priv))
goto out;
drm_legacy_lock_master_cleanup(dev, master);
if (dev->master == file_priv->master)
drm_drop_master(dev, file_priv);
out:
@ -429,8 +425,6 @@ static void drm_master_destroy(struct kref *kref)
if (drm_core_check_feature(dev, DRIVER_MODESET))
drm_lease_destroy(master);
drm_legacy_master_rmmaps(dev, master);
idr_destroy(&master->magic_map);
idr_destroy(&master->leases);
idr_destroy(&master->lessee_idr);

View File

@ -1347,50 +1347,6 @@ struct drm_bridge *of_drm_find_bridge(struct device_node *np)
EXPORT_SYMBOL(of_drm_find_bridge);
#endif
#ifdef CONFIG_DEBUG_FS
static int drm_bridge_chains_info(struct seq_file *m, void *data)
{
struct drm_debugfs_entry *entry = m->private;
struct drm_device *dev = entry->dev;
struct drm_printer p = drm_seq_file_printer(m);
struct drm_mode_config *config = &dev->mode_config;
struct drm_encoder *encoder;
unsigned int bridge_idx = 0;
list_for_each_entry(encoder, &config->encoder_list, head) {
struct drm_bridge *bridge;
drm_printf(&p, "encoder[%u]\n", encoder->base.id);
drm_for_each_bridge_in_chain(encoder, bridge) {
drm_printf(&p, "\tbridge[%u] type: %u, ops: %#x",
bridge_idx, bridge->type, bridge->ops);
#ifdef CONFIG_OF
if (bridge->of_node)
drm_printf(&p, ", OF: %pOFfc", bridge->of_node);
#endif
drm_printf(&p, "\n");
bridge_idx++;
}
}
return 0;
}
static const struct drm_debugfs_info drm_bridge_debugfs_list[] = {
{ "bridge_chains", drm_bridge_chains_info, 0 },
};
void drm_bridge_debugfs_init(struct drm_device *dev)
{
drm_debugfs_add_files(dev, drm_bridge_debugfs_list,
ARRAY_SIZE(drm_bridge_debugfs_list));
}
#endif
MODULE_AUTHOR("Ajay Kumar <ajaykumar.rs@samsung.com>");
MODULE_DESCRIPTION("DRM bridge infrastructure");
MODULE_LICENSE("GPL and additional rights");

View File

@ -198,12 +198,6 @@ static void drm_bridge_connector_destroy(struct drm_connector *connector)
struct drm_bridge_connector *bridge_connector =
to_drm_bridge_connector(connector);
if (bridge_connector->bridge_hpd) {
struct drm_bridge *hpd = bridge_connector->bridge_hpd;
drm_bridge_hpd_disable(hpd);
}
drm_connector_unregister(connector);
drm_connector_cleanup(connector);

File diff suppressed because it is too large

View File

@ -1,513 +0,0 @@
/*
* Legacy: Generic DRM Contexts
*
* Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Author: Rickard E. (Rik) Faith <faith@valinux.com>
* Author: Gareth Hughes <gareth@valinux.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_print.h>
#include "drm_legacy.h"
struct drm_ctx_list {
struct list_head head;
drm_context_t handle;
struct drm_file *tag;
};
/******************************************************************/
/** \name Context bitmap support */
/*@{*/
/*
* Free a handle from the context bitmap.
*
* \param dev DRM device.
* \param ctx_handle context handle.
*
* Clears the bit specified by \p ctx_handle in drm_device::ctx_bitmap and the entry
* in drm_device::ctx_idr, while holding the drm_device::struct_mutex
* lock.
*/
void drm_legacy_ctxbitmap_free(struct drm_device * dev, int ctx_handle)
{
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return;
mutex_lock(&dev->struct_mutex);
idr_remove(&dev->ctx_idr, ctx_handle);
mutex_unlock(&dev->struct_mutex);
}
/*
* Context bitmap allocation.
*
* \param dev DRM device.
* \return (non-negative) context handle on success or a negative number on failure.
*
* Allocate a new idr from drm_device::ctx_idr while holding the
* drm_device::struct_mutex lock.
*/
static int drm_legacy_ctxbitmap_next(struct drm_device * dev)
{
int ret;
mutex_lock(&dev->struct_mutex);
ret = idr_alloc(&dev->ctx_idr, NULL, DRM_RESERVED_CONTEXTS, 0,
GFP_KERNEL);
mutex_unlock(&dev->struct_mutex);
return ret;
}
/*
* Context bitmap initialization.
*
* \param dev DRM device.
*
* Initialise the drm_device::ctx_idr
*/
void drm_legacy_ctxbitmap_init(struct drm_device * dev)
{
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return;
idr_init(&dev->ctx_idr);
}
/*
* Context bitmap cleanup.
*
* \param dev DRM device.
*
* Free all idr members using drm_ctx_sarea_free helper function
* while holding the drm_device::struct_mutex lock.
*/
void drm_legacy_ctxbitmap_cleanup(struct drm_device * dev)
{
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return;
mutex_lock(&dev->struct_mutex);
idr_destroy(&dev->ctx_idr);
mutex_unlock(&dev->struct_mutex);
}
/**
* drm_legacy_ctxbitmap_flush() - Flush all contexts owned by a file
* @dev: DRM device to operate on
* @file: Open file to flush contexts for
*
* This iterates over all contexts on @dev and drops them if they're owned by
* @file. Note that after this call returns, new contexts might be added if
* the file is still alive.
*/
void drm_legacy_ctxbitmap_flush(struct drm_device *dev, struct drm_file *file)
{
struct drm_ctx_list *pos, *tmp;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return;
mutex_lock(&dev->ctxlist_mutex);
list_for_each_entry_safe(pos, tmp, &dev->ctxlist, head) {
if (pos->tag == file &&
pos->handle != DRM_KERNEL_CONTEXT) {
if (dev->driver->context_dtor)
dev->driver->context_dtor(dev, pos->handle);
drm_legacy_ctxbitmap_free(dev, pos->handle);
list_del(&pos->head);
kfree(pos);
}
}
mutex_unlock(&dev->ctxlist_mutex);
}
/*@}*/
/******************************************************************/
/** \name Per Context SAREA Support */
/*@{*/
/*
* Get per-context SAREA.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument pointing to a drm_ctx_priv_map structure.
* \return zero on success or a negative number on failure.
*
* Gets the map from drm_device::ctx_idr with the handle specified and
* returns its handle.
*/
int drm_legacy_getsareactx(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_ctx_priv_map *request = data;
struct drm_local_map *map;
struct drm_map_list *_entry;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
mutex_lock(&dev->struct_mutex);
map = idr_find(&dev->ctx_idr, request->ctx_id);
if (!map) {
mutex_unlock(&dev->struct_mutex);
return -EINVAL;
}
request->handle = NULL;
list_for_each_entry(_entry, &dev->maplist, head) {
if (_entry->map == map) {
request->handle =
(void *)(unsigned long)_entry->user_token;
break;
}
}
mutex_unlock(&dev->struct_mutex);
if (request->handle == NULL)
return -EINVAL;
return 0;
}
/*
* Set per-context SAREA.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument pointing to a drm_ctx_priv_map structure.
* \return zero on success or a negative number on failure.
*
* Searches the mapping specified in \p arg and update the entry in
* drm_device::ctx_idr with it.
*/
int drm_legacy_setsareactx(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_ctx_priv_map *request = data;
struct drm_local_map *map = NULL;
struct drm_map_list *r_list = NULL;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
mutex_lock(&dev->struct_mutex);
list_for_each_entry(r_list, &dev->maplist, head) {
if (r_list->map
&& r_list->user_token == (unsigned long) request->handle)
goto found;
}
bad:
mutex_unlock(&dev->struct_mutex);
return -EINVAL;
found:
map = r_list->map;
if (!map)
goto bad;
if (IS_ERR(idr_replace(&dev->ctx_idr, map, request->ctx_id)))
goto bad;
mutex_unlock(&dev->struct_mutex);
return 0;
}
/*@}*/
/******************************************************************/
/** \name The actual DRM context handling routines */
/*@{*/
/*
* Switch context.
*
* \param dev DRM device.
* \param old old context handle.
* \param new new context handle.
* \return zero on success or a negative number on failure.
*
* Attempt to set drm_device::context_flag.
*/
static int drm_context_switch(struct drm_device * dev, int old, int new)
{
if (test_and_set_bit(0, &dev->context_flag)) {
DRM_ERROR("Reentering -- FIXME\n");
return -EBUSY;
}
DRM_DEBUG("Context switch from %d to %d\n", old, new);
if (new == dev->last_context) {
clear_bit(0, &dev->context_flag);
return 0;
}
return 0;
}
/*
* Complete context switch.
*
* \param dev DRM device.
* \param new new context handle.
* \return zero on success or a negative number on failure.
*
* Updates drm_device::last_context and drm_device::last_switch. Verifies the
* hardware lock is held, clears the drm_device::context_flag and wakes up
* drm_device::context_wait.
*/
static int drm_context_switch_complete(struct drm_device *dev,
struct drm_file *file_priv, int new)
{
dev->last_context = new; /* PRE/POST: This is the _only_ writer. */
if (!_DRM_LOCK_IS_HELD(file_priv->master->lock.hw_lock->lock)) {
DRM_ERROR("Lock isn't held after context switch\n");
}
/* If a context switch is ever initiated
when the kernel holds the lock, release
that lock here.
*/
clear_bit(0, &dev->context_flag);
return 0;
}
/*
* Reserve contexts.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument pointing to a drm_ctx_res structure.
* \return zero on success or a negative number on failure.
*/
int drm_legacy_resctx(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_ctx_res *res = data;
struct drm_ctx ctx;
int i;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
if (res->count >= DRM_RESERVED_CONTEXTS) {
memset(&ctx, 0, sizeof(ctx));
for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
ctx.handle = i;
if (copy_to_user(&res->contexts[i], &ctx, sizeof(ctx)))
return -EFAULT;
}
}
res->count = DRM_RESERVED_CONTEXTS;
return 0;
}
/*
* Add context.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument pointing to a drm_ctx structure.
* \return zero on success or a negative number on failure.
*
* Get a new handle for the context and copy to userspace.
*/
int drm_legacy_addctx(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_ctx_list *ctx_entry;
struct drm_ctx *ctx = data;
int tmp_handle;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
tmp_handle = drm_legacy_ctxbitmap_next(dev);
if (tmp_handle == DRM_KERNEL_CONTEXT) {
/* Skip kernel's context and get a new one. */
tmp_handle = drm_legacy_ctxbitmap_next(dev);
}
DRM_DEBUG("%d\n", tmp_handle);
if (tmp_handle < 0) {
DRM_DEBUG("Not enough free contexts.\n");
/* Should this return -EBUSY instead? */
return tmp_handle;
}
ctx->handle = tmp_handle;
ctx_entry = kmalloc(sizeof(*ctx_entry), GFP_KERNEL);
if (!ctx_entry) {
DRM_DEBUG("out of memory\n");
return -ENOMEM;
}
INIT_LIST_HEAD(&ctx_entry->head);
ctx_entry->handle = ctx->handle;
ctx_entry->tag = file_priv;
mutex_lock(&dev->ctxlist_mutex);
list_add(&ctx_entry->head, &dev->ctxlist);
mutex_unlock(&dev->ctxlist_mutex);
return 0;
}
/*
* Get context.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument pointing to a drm_ctx structure.
* \return zero on success or a negative number on failure.
*/
int drm_legacy_getctx(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_ctx *ctx = data;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
/* This is 0, because we don't handle any context flags */
ctx->flags = 0;
return 0;
}
/*
* Switch context.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument pointing to a drm_ctx structure.
* \return zero on success or a negative number on failure.
*
* Calls context_switch().
*/
int drm_legacy_switchctx(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_ctx *ctx = data;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
DRM_DEBUG("%d\n", ctx->handle);
return drm_context_switch(dev, dev->last_context, ctx->handle);
}
/*
* New context.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument pointing to a drm_ctx structure.
* \return zero on success or a negative number on failure.
*
* Calls context_switch_complete().
*/
int drm_legacy_newctx(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_ctx *ctx = data;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
DRM_DEBUG("%d\n", ctx->handle);
drm_context_switch_complete(dev, file_priv, ctx->handle);
return 0;
}
/*
* Remove context.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument pointing to a drm_ctx structure.
* \return zero on success or a negative number on failure.
*
* If not the special kernel context, calls ctxbitmap_free() to free the specified context.
*/
int drm_legacy_rmctx(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_ctx *ctx = data;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
DRM_DEBUG("%d\n", ctx->handle);
if (ctx->handle != DRM_KERNEL_CONTEXT) {
if (dev->driver->context_dtor)
dev->driver->context_dtor(dev, ctx->handle);
drm_legacy_ctxbitmap_free(dev, ctx->handle);
}
mutex_lock(&dev->ctxlist_mutex);
if (!list_empty(&dev->ctxlist)) {
struct drm_ctx_list *pos, *n;
list_for_each_entry_safe(pos, n, &dev->ctxlist, head) {
if (pos->handle == ctx->handle) {
list_del(&pos->head);
kfree(pos);
}
}
}
mutex_unlock(&dev->ctxlist_mutex);
return 0;
}
/*@}*/

View File

@ -439,11 +439,8 @@ EXPORT_SYMBOL(drm_crtc_helper_set_mode);
* @state: atomic state object
*
* Provides a default CRTC-state check handler for CRTCs that only have
* one primary plane attached to it.
*
* This is often the case for the CRTC of simple framebuffers. See also
* drm_plane_helper_atomic_check() for the respective plane-state check
* helper function.
* one primary plane attached to it. This is often the case for the CRTC
* of simple framebuffers.
*
* RETURNS:
* Zero on success, or an errno code otherwise.

View File

@ -253,7 +253,7 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
struct drm_file *file_priv,
struct drm_mode_object *obj,
struct drm_property *prop,
uint64_t prop_value);
u64 prop_value, bool async_flip);
int drm_atomic_get_property(struct drm_mode_object *obj,
struct drm_property *property, uint64_t *val);

View File

@ -241,7 +241,8 @@ drm_atomic_helper_damage_iter_init(struct drm_atomic_helper_damage_iter *iter,
iter->plane_src.x2 = (src.x2 >> 16) + !!(src.x2 & 0xFFFF);
iter->plane_src.y2 = (src.y2 >> 16) + !!(src.y2 & 0xFFFF);
if (!iter->clips || !drm_rect_equals(&state->src, &old_state->src)) {
if (!iter->clips || state->ignore_damage_clips ||
!drm_rect_equals(&state->src, &old_state->src)) {
iter->clips = NULL;
iter->num_clips = 0;
iter->full_update = true;
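
state->ignore_damage_clips above is the new opt-out for planes that cannot honour partial updates: when set, the damage iterator falls back to a single full-plane rectangle. A minimal driver-side sketch, assuming it is set from the plane's atomic_check (callback name hypothetical):

#include <drm/drm_atomic.h>
#include <drm/drm_plane.h>

static int example_plane_atomic_check(struct drm_plane *plane,
				      struct drm_atomic_state *state)
{
	struct drm_plane_state *new_state =
		drm_atomic_get_new_plane_state(state, plane);

	/* This hardware always rewrites the whole plane, so ask the
	 * damage helper to ignore userspace's damage clips. */
	new_state->ignore_damage_clips = true;
	return 0;
}
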

View File

@ -314,10 +314,8 @@ void drm_debugfs_dev_register(struct drm_device *dev)
drm_framebuffer_debugfs_init(dev);
drm_client_debugfs_init(dev);
}
if (drm_drv_uses_atomic_modeset(dev)) {
if (drm_drv_uses_atomic_modeset(dev))
drm_atomic_debugfs_init(dev);
drm_bridge_debugfs_init(dev);
}
}
int drm_debugfs_register(struct drm_minor *minor, int minor_id,
@ -589,4 +587,65 @@ void drm_debugfs_crtc_remove(struct drm_crtc *crtc)
crtc->debugfs_entry = NULL;
}
static int bridges_show(struct seq_file *m, void *data)
{
struct drm_encoder *encoder = m->private;
struct drm_printer p = drm_seq_file_printer(m);
struct drm_bridge *bridge;
unsigned int idx = 0;
drm_for_each_bridge_in_chain(encoder, bridge) {
drm_printf(&p, "bridge[%d]: %ps\n", idx++, bridge->funcs);
drm_printf(&p, "\ttype: [%d] %s\n",
bridge->type,
drm_get_connector_type_name(bridge->type));
#ifdef CONFIG_OF
if (bridge->of_node)
drm_printf(&p, "\tOF: %pOFfc\n", bridge->of_node);
#endif
drm_printf(&p, "\tops: [0x%x]", bridge->ops);
if (bridge->ops & DRM_BRIDGE_OP_DETECT)
drm_puts(&p, " detect");
if (bridge->ops & DRM_BRIDGE_OP_EDID)
drm_puts(&p, " edid");
if (bridge->ops & DRM_BRIDGE_OP_HPD)
drm_puts(&p, " hpd");
if (bridge->ops & DRM_BRIDGE_OP_MODES)
drm_puts(&p, " modes");
drm_puts(&p, "\n");
}
return 0;
}
DEFINE_SHOW_ATTRIBUTE(bridges);
void drm_debugfs_encoder_add(struct drm_encoder *encoder)
{
struct drm_minor *minor = encoder->dev->primary;
struct dentry *root;
char *name;
name = kasprintf(GFP_KERNEL, "encoder-%d", encoder->index);
if (!name)
return;
root = debugfs_create_dir(name, minor->debugfs_root);
kfree(name);
encoder->debugfs_entry = root;
/* bridges list */
debugfs_create_file("bridges", 0444, root, encoder,
&bridges_fops);
if (encoder->funcs->debugfs_init)
encoder->funcs->debugfs_init(encoder, root);
}
void drm_debugfs_encoder_remove(struct drm_encoder *encoder)
{
debugfs_remove_recursive(encoder->debugfs_entry);
encoder->debugfs_entry = NULL;
}
#endif /* CONFIG_DEBUG_FS */
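
Beyond the generated bridges file, drm_debugfs_encoder_add() above invokes the encoder's own debugfs_init hook so drivers can hang extra files off the per-encoder directory. A hedged sketch (driver names hypothetical, hook signature taken from the call site above):

#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <drm/drm_encoder.h>

static int example_status_show(struct seq_file *m, void *data)
{
	struct drm_encoder *encoder = m->private;

	seq_printf(m, "encoder-%d: ok\n", encoder->index);
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(example_status);

static void example_encoder_debugfs_init(struct drm_encoder *encoder,
					 struct dentry *root)
{
	debugfs_create_file("example_status", 0444, root, encoder,
			    &example_status_fops);
}

static const struct drm_encoder_funcs example_encoder_funcs = {
	.destroy = drm_encoder_cleanup,
	.debugfs_init = example_encoder_debugfs_init,
};
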

View File

@ -1,178 +0,0 @@
/*
* \file drm_dma.c
* DMA IOCTL and function support
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*/
/*
* Created: Fri Mar 19 14:30:16 1999 by faith@valinux.com
*
* Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/export.h>
#include <linux/pci.h>
#include <drm/drm_drv.h>
#include <drm/drm_print.h>
#include "drm_legacy.h"
/**
* drm_legacy_dma_setup() - Initialize the DMA data.
*
* @dev: DRM device.
* Return: zero on success or a negative value on failure.
*
* Allocate and initialize a drm_device_dma structure.
*/
int drm_legacy_dma_setup(struct drm_device *dev)
{
int i;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA) ||
!drm_core_check_feature(dev, DRIVER_LEGACY))
return 0;
dev->buf_use = 0;
atomic_set(&dev->buf_alloc, 0);
dev->dma = kzalloc(sizeof(*dev->dma), GFP_KERNEL);
if (!dev->dma)
return -ENOMEM;
for (i = 0; i <= DRM_MAX_ORDER; i++)
memset(&dev->dma->bufs[i], 0, sizeof(dev->dma->bufs[0]));
return 0;
}
/**
* drm_legacy_dma_takedown() - Cleanup the DMA resources.
*
* @dev: DRM device.
*
* Free all pages associated with DMA buffers, the buffers and pages lists, and
* finally the drm_device::dma structure itself.
*/
void drm_legacy_dma_takedown(struct drm_device *dev)
{
struct drm_device_dma *dma = dev->dma;
drm_dma_handle_t *dmah;
int i, j;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA) ||
!drm_core_check_feature(dev, DRIVER_LEGACY))
return;
if (!dma)
return;
/* Clear dma buffers */
for (i = 0; i <= DRM_MAX_ORDER; i++) {
if (dma->bufs[i].seg_count) {
DRM_DEBUG("order %d: buf_count = %d,"
" seg_count = %d\n",
i,
dma->bufs[i].buf_count,
dma->bufs[i].seg_count);
for (j = 0; j < dma->bufs[i].seg_count; j++) {
if (dma->bufs[i].seglist[j]) {
dmah = dma->bufs[i].seglist[j];
dma_free_coherent(dev->dev,
dmah->size,
dmah->vaddr,
dmah->busaddr);
kfree(dmah);
}
}
kfree(dma->bufs[i].seglist);
}
if (dma->bufs[i].buf_count) {
for (j = 0; j < dma->bufs[i].buf_count; j++) {
kfree(dma->bufs[i].buflist[j].dev_private);
}
kfree(dma->bufs[i].buflist);
}
}
kfree(dma->buflist);
kfree(dma->pagelist);
kfree(dev->dma);
dev->dma = NULL;
}
/**
* drm_legacy_free_buffer() - Free a buffer.
*
* @dev: DRM device.
* @buf: buffer to free.
*
* Resets the fields of \p buf.
*/
void drm_legacy_free_buffer(struct drm_device *dev, struct drm_buf * buf)
{
if (!buf)
return;
buf->waiting = 0;
buf->pending = 0;
buf->file_priv = NULL;
buf->used = 0;
}
/**
* drm_legacy_reclaim_buffers() - Reclaim the buffers.
*
* @dev: DRM device.
* @file_priv: DRM file private.
*
* Frees each buffer associated with \p file_priv not already on the hardware.
*/
void drm_legacy_reclaim_buffers(struct drm_device *dev,
struct drm_file *file_priv)
{
struct drm_device_dma *dma = dev->dma;
int i;
if (!dma)
return;
for (i = 0; i < dma->buf_count; i++) {
if (dma->buflist[i]->file_priv == file_priv) {
switch (dma->buflist[i]->list) {
case DRM_LIST_NONE:
drm_legacy_free_buffer(dev, dma->buflist[i]);
break;
case DRM_LIST_WAIT:
dma->buflist[i]->list = DRM_LIST_RECLAIM;
break;
default:
/* Buffer already on hardware. */
break;
}
}
}
}

View File

@ -48,7 +48,6 @@
#include "drm_crtc_internal.h"
#include "drm_internal.h"
#include "drm_legacy.h"
MODULE_AUTHOR("Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl");
MODULE_DESCRIPTION("DRM shared core routines");
@ -585,8 +584,6 @@ static void drm_fs_inode_free(struct inode *inode)
static void drm_dev_init_release(struct drm_device *dev, void *res)
{
drm_legacy_ctxbitmap_cleanup(dev);
drm_legacy_remove_map_hash(dev);
drm_fs_inode_free(dev->anon_inode);
put_device(dev->dev);
@ -597,7 +594,6 @@ static void drm_dev_init_release(struct drm_device *dev, void *res)
mutex_destroy(&dev->clientlist_mutex);
mutex_destroy(&dev->filelist_mutex);
mutex_destroy(&dev->struct_mutex);
drm_legacy_destroy_members(dev);
}
static int drm_dev_init(struct drm_device *dev,
@ -632,7 +628,6 @@ static int drm_dev_init(struct drm_device *dev,
return -EINVAL;
}
drm_legacy_init_members(dev);
INIT_LIST_HEAD(&dev->filelist);
INIT_LIST_HEAD(&dev->filelist_internal);
INIT_LIST_HEAD(&dev->clientlist);
@ -673,12 +668,6 @@ static int drm_dev_init(struct drm_device *dev,
goto err;
}
ret = drm_legacy_create_map_hash(dev);
if (ret)
goto err;
drm_legacy_ctxbitmap_init(dev);
if (drm_core_check_feature(dev, DRIVER_GEM)) {
ret = drm_gem_init(dev);
if (ret) {
@ -949,8 +938,11 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
goto err_minors;
}
if (drm_core_check_feature(dev, DRIVER_MODESET))
drm_modeset_register_all(dev);
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = drm_modeset_register_all(dev);
if (ret)
goto err_unload;
}
DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n",
driver->name, driver->major, driver->minor,
@ -960,6 +952,9 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
goto out_unlock;
err_unload:
if (dev->driver->unload)
dev->driver->unload(dev);
err_minors:
remove_compat_control_link(dev);
drm_minor_unregister(dev, DRM_MINOR_ACCEL);
@ -990,9 +985,6 @@ EXPORT_SYMBOL(drm_dev_register);
*/
void drm_dev_unregister(struct drm_device *dev)
{
if (drm_core_check_feature(dev, DRIVER_LEGACY))
drm_lastclose(dev);
dev->registered = false;
drm_client_dev_unregister(dev);
@ -1003,9 +995,6 @@ void drm_dev_unregister(struct drm_device *dev)
if (dev->driver->unload)
dev->driver->unload(dev);
drm_legacy_pci_agp_destroy(dev);
drm_legacy_rmmaps(dev);
remove_compat_control_link(dev);
drm_minor_unregister(dev, DRM_MINOR_ACCEL);
drm_minor_unregister(dev, DRM_MINOR_PRIMARY);

View File

@ -30,6 +30,7 @@
#include <drm/drm_print.h>
#include "drm_crtc_internal.h"
#include "drm_internal.h"
/**
* DOC: overview
@ -74,6 +75,8 @@ int drm_encoder_register_all(struct drm_device *dev)
int ret = 0;
drm_for_each_encoder(encoder, dev) {
drm_debugfs_encoder_add(encoder);
if (encoder->funcs && encoder->funcs->late_register)
ret = encoder->funcs->late_register(encoder);
if (ret)
@ -90,6 +93,7 @@ void drm_encoder_unregister_all(struct drm_device *dev)
drm_for_each_encoder(encoder, dev) {
if (encoder->funcs && encoder->funcs->early_unregister)
encoder->funcs->early_unregister(encoder);
drm_debugfs_encoder_remove(encoder);
}
}

View File

@ -47,21 +47,12 @@
#include "drm_crtc_internal.h"
#include "drm_internal.h"
#include "drm_legacy.h"
/* from BKL pushdown */
DEFINE_MUTEX(drm_global_mutex);
bool drm_dev_needs_global_mutex(struct drm_device *dev)
{
/*
* Legacy drivers rely on all kinds of BKL locking semantics, don't
* bother. They also still need BKL locking for their ioctls, so better
* safe than sorry.
*/
if (drm_core_check_feature(dev, DRIVER_LEGACY))
return true;
/*
* The deprecated ->load callback must be called after the driver is
* already registered. This means such drivers rely on the BKL to make
@ -107,9 +98,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
* drm_send_event() as the main starting points.
*
* The memory mapping implementation will vary depending on how the driver
* manages memory. Legacy drivers will use the deprecated drm_legacy_mmap()
* function, modern drivers should use one of the provided memory-manager
* specific implementations. For GEM-based drivers this is drm_gem_mmap().
* manages memory. For GEM-based drivers this is drm_gem_mmap().
*
* No other file operations are supported by the DRM userspace API. Overall the
* following is an example &file_operations structure::
@ -254,18 +243,6 @@ void drm_file_free(struct drm_file *file)
(long)old_encode_dev(file->minor->kdev->devt),
atomic_read(&dev->open_count));
#ifdef CONFIG_DRM_LEGACY
if (drm_core_check_feature(dev, DRIVER_LEGACY) &&
dev->driver->preclose)
dev->driver->preclose(dev, file);
#endif
if (drm_core_check_feature(dev, DRIVER_LEGACY))
drm_legacy_lock_release(dev, file->filp);
if (drm_core_check_feature(dev, DRIVER_HAVE_DMA))
drm_legacy_reclaim_buffers(dev, file);
drm_events_release(file);
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
@ -279,8 +256,6 @@ void drm_file_free(struct drm_file *file)
if (drm_core_check_feature(dev, DRIVER_GEM))
drm_gem_release(dev, file);
drm_legacy_ctxbitmap_flush(dev, file);
if (drm_is_primary_client(file))
drm_master_release(file);
@ -367,29 +342,6 @@ int drm_open_helper(struct file *filp, struct drm_minor *minor)
list_add(&priv->lhead, &dev->filelist);
mutex_unlock(&dev->filelist_mutex);
#ifdef CONFIG_DRM_LEGACY
#ifdef __alpha__
/*
* Default the hose
*/
if (!dev->hose) {
struct pci_dev *pci_dev;
pci_dev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, NULL);
if (pci_dev) {
dev->hose = pci_dev->sysdata;
pci_dev_put(pci_dev);
}
if (!dev->hose) {
struct pci_bus *b = list_entry(pci_root_buses.next,
struct pci_bus, node);
if (b)
dev->hose = b->sysdata;
}
}
#endif
#endif
return 0;
}
@ -411,7 +363,6 @@ int drm_open(struct inode *inode, struct file *filp)
struct drm_device *dev;
struct drm_minor *minor;
int retcode;
int need_setup = 0;
minor = drm_minor_acquire(iminor(inode));
if (IS_ERR(minor))
@ -421,8 +372,7 @@ int drm_open(struct inode *inode, struct file *filp)
if (drm_dev_needs_global_mutex(dev))
mutex_lock(&drm_global_mutex);
if (!atomic_fetch_inc(&dev->open_count))
need_setup = 1;
atomic_fetch_inc(&dev->open_count);
/* share address_space across all char-devs of a single device */
filp->f_mapping = dev->anon_inode->i_mapping;
@ -430,13 +380,6 @@ int drm_open(struct inode *inode, struct file *filp)
retcode = drm_open_helper(filp, minor);
if (retcode)
goto err_undo;
if (need_setup) {
retcode = drm_legacy_setup(dev);
if (retcode) {
drm_close_helper(filp);
goto err_undo;
}
}
if (drm_dev_needs_global_mutex(dev))
mutex_unlock(&drm_global_mutex);
@ -460,9 +403,6 @@ void drm_lastclose(struct drm_device * dev)
dev->driver->lastclose(dev);
drm_dbg_core(dev, "driver lastclose completed\n");
if (drm_core_check_feature(dev, DRIVER_LEGACY))
drm_legacy_dev_reinit(dev);
drm_client_dev_restore(dev);
}
@ -958,7 +898,7 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
{
struct drm_gem_object *obj;
struct drm_memory_stats status = {};
enum drm_gem_object_status supported_status;
enum drm_gem_object_status supported_status = 0;
int id;
spin_lock(&file->table_lock);
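
The supported_status = 0 initializer above matters because drm_show_memory_stats() only ORs bits into it for objects whose funcs implement the optional status hook; with none present, the old code read the variable uninitialized further down (outside this hunk). A hedged sketch of such a hook:

#include <drm/drm_gem.h>

static enum drm_gem_object_status
example_gem_status(struct drm_gem_object *obj)
{
	/* Resident and not purgeable; drivers may also report
	 * DRM_GEM_OBJECT_PURGEABLE when the backing store can be dropped. */
	return DRM_GEM_OBJECT_RESIDENT;
}

static const struct drm_gem_object_funcs example_gem_funcs = {
	/* .free, .mmap, ... omitted in this sketch */
	.status = example_gem_status,
};
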

View File

@ -583,7 +583,7 @@ int drm_mode_getfb2_ioctl(struct drm_device *dev,
struct drm_mode_fb_cmd2 *r = data;
struct drm_framebuffer *fb;
unsigned int i;
int ret;
int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;

View File

@ -1085,6 +1085,37 @@ drm_gpuvm_put(struct drm_gpuvm *gpuvm)
}
EXPORT_SYMBOL_GPL(drm_gpuvm_put);
static int
exec_prepare_obj(struct drm_exec *exec, struct drm_gem_object *obj,
unsigned int num_fences)
{
return num_fences ? drm_exec_prepare_obj(exec, obj, num_fences) :
drm_exec_lock_obj(exec, obj);
}
/**
* drm_gpuvm_prepare_vm() - prepare the GPUVM's common dma-resv
* @gpuvm: the &drm_gpuvm
* @exec: the &drm_exec context
* @num_fences: the amount of &dma_fences to reserve
*
* Calls drm_exec_prepare_obj() for the GPUVM's dummy &drm_gem_object; if
* @num_fences is zero drm_exec_lock_obj() is called instead.
*
* Using this function directly, it is the driver's responsibility to call
* drm_exec_init() and drm_exec_fini() accordingly.
*
* Returns: 0 on success, negative error code on failure.
*/
int
drm_gpuvm_prepare_vm(struct drm_gpuvm *gpuvm,
struct drm_exec *exec,
unsigned int num_fences)
{
return exec_prepare_obj(exec, gpuvm->r_obj, num_fences);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_vm);
static int
__drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm,
struct drm_exec *exec,
@ -1095,7 +1126,7 @@ __drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm,
int ret = 0;
for_each_vm_bo_in_list(gpuvm, extobj, &extobjs, vm_bo) {
ret = drm_exec_prepare_obj(exec, vm_bo->obj, num_fences);
ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
if (ret)
break;
}
@ -1116,7 +1147,7 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
drm_gpuvm_resv_assert_held(gpuvm);
list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) {
ret = drm_exec_prepare_obj(exec, vm_bo->obj, num_fences);
ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
if (ret)
break;
@ -1134,7 +1165,8 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
* @num_fences: the amount of &dma_fences to reserve
*
* Calls drm_exec_prepare_obj() for all &drm_gem_objects the given
* &drm_gpuvm contains mappings of.
* &drm_gpuvm contains mappings of; if @num_fences is zero drm_exec_lock_obj()
* is called instead.
*
* Using this function directly, it is the driver's responsibility to call
* drm_exec_init() and drm_exec_fini() accordingly.
@ -1171,7 +1203,8 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_objects);
* @num_fences: the amount of &dma_fences to reserve
*
* Calls drm_exec_prepare_obj() for all &drm_gem_objects mapped between @addr
* and @addr + @range.
* and @addr + @range; if @num_fences is zero drm_exec_lock_obj() is called
* instead.
*
* Returns: 0 on success, negative error code on failure.
*/
@ -1186,7 +1219,7 @@ drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
drm_gpuvm_for_each_va_range(va, gpuvm, addr, end) {
struct drm_gem_object *obj = va->gem.obj;
ret = drm_exec_prepare_obj(exec, obj, num_fences);
ret = exec_prepare_obj(exec, obj, num_fences);
if (ret)
return ret;
}
@ -1502,14 +1535,18 @@ drm_gpuvm_bo_destroy(struct kref *kref)
* hold the dma-resv or driver specific GEM gpuva lock.
*
* This function may only be called from non-atomic context.
*
* Returns: true if vm_bo was destroyed, false otherwise.
*/
void
bool
drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo)
{
might_sleep();
if (vm_bo)
kref_put(&vm_bo->kref, drm_gpuvm_bo_destroy);
return !!kref_put(&vm_bo->kref, drm_gpuvm_bo_destroy);
return false;
}
EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put);
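
With exec_prepare_obj() above, every drm_gpuvm_prepare_*() helper degrades to a pure lock when @num_fences is zero. A hedged usage sketch inside a drm_exec loop; the drm_exec_init() signature has changed across kernel versions, so treat that call as an assumption. On success the objects stay locked so the caller can add fences and then call drm_exec_fini():

#include <drm/drm_exec.h>
#include <drm/drm_gpuvm.h>

static int example_lock_vm(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
			   unsigned int num_fences)
{
	int ret;

	drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
	drm_exec_until_all_locked(exec) {
		/* num_fences == 0 only locks, no fence slots reserved */
		ret = drm_gpuvm_prepare_vm(gpuvm, exec, num_fences);
		drm_exec_retry_on_contention(exec);
		if (ret)
			goto err_fini;

		ret = drm_gpuvm_prepare_objects(gpuvm, exec, num_fences);
		drm_exec_retry_on_contention(exec);
		if (ret)
			goto err_fini;
	}
	return 0;

err_fini:
	drm_exec_fini(exec);
	return ret;
}
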

View File

@ -1,203 +0,0 @@
/**************************************************************************
*
* Copyright 2006 Tungsten Graphics, Inc., Bismarck, ND. USA.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sub license, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial portions
* of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*
**************************************************************************/
/*
* Simple open hash tab implementation.
*
* Authors:
* Thomas Hellström <thomas-at-tungstengraphics-dot-com>
*/
#include <linux/hash.h>
#include <linux/mm.h>
#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <drm/drm_print.h>
#include "drm_legacy.h"
int drm_ht_create(struct drm_open_hash *ht, unsigned int order)
{
unsigned int size = 1 << order;
ht->order = order;
ht->table = NULL;
if (size <= PAGE_SIZE / sizeof(*ht->table))
ht->table = kcalloc(size, sizeof(*ht->table), GFP_KERNEL);
else
ht->table = vzalloc(array_size(size, sizeof(*ht->table)));
if (!ht->table) {
DRM_ERROR("Out of memory for hash table\n");
return -ENOMEM;
}
return 0;
}
void drm_ht_verbose_list(struct drm_open_hash *ht, unsigned long key)
{
struct drm_hash_item *entry;
struct hlist_head *h_list;
unsigned int hashed_key;
int count = 0;
hashed_key = hash_long(key, ht->order);
DRM_DEBUG("Key is 0x%08lx, Hashed key is 0x%08x\n", key, hashed_key);
h_list = &ht->table[hashed_key];
hlist_for_each_entry(entry, h_list, head)
DRM_DEBUG("count %d, key: 0x%08lx\n", count++, entry->key);
}
static struct hlist_node *drm_ht_find_key(struct drm_open_hash *ht,
unsigned long key)
{
struct drm_hash_item *entry;
struct hlist_head *h_list;
unsigned int hashed_key;
hashed_key = hash_long(key, ht->order);
h_list = &ht->table[hashed_key];
hlist_for_each_entry(entry, h_list, head) {
if (entry->key == key)
return &entry->head;
if (entry->key > key)
break;
}
return NULL;
}
static struct hlist_node *drm_ht_find_key_rcu(struct drm_open_hash *ht,
unsigned long key)
{
struct drm_hash_item *entry;
struct hlist_head *h_list;
unsigned int hashed_key;
hashed_key = hash_long(key, ht->order);
h_list = &ht->table[hashed_key];
hlist_for_each_entry_rcu(entry, h_list, head) {
if (entry->key == key)
return &entry->head;
if (entry->key > key)
break;
}
return NULL;
}
int drm_ht_insert_item(struct drm_open_hash *ht, struct drm_hash_item *item)
{
struct drm_hash_item *entry;
struct hlist_head *h_list;
struct hlist_node *parent;
unsigned int hashed_key;
unsigned long key = item->key;
hashed_key = hash_long(key, ht->order);
h_list = &ht->table[hashed_key];
parent = NULL;
hlist_for_each_entry(entry, h_list, head) {
if (entry->key == key)
return -EINVAL;
if (entry->key > key)
break;
parent = &entry->head;
}
if (parent) {
hlist_add_behind_rcu(&item->head, parent);
} else {
hlist_add_head_rcu(&item->head, h_list);
}
return 0;
}
/*
* Just insert an item and return any "bits" bit key that hasn't been
* used before.
*/
int drm_ht_just_insert_please(struct drm_open_hash *ht, struct drm_hash_item *item,
unsigned long seed, int bits, int shift,
unsigned long add)
{
int ret;
unsigned long mask = (1UL << bits) - 1;
unsigned long first, unshifted_key;
unshifted_key = hash_long(seed, bits);
first = unshifted_key;
do {
item->key = (unshifted_key << shift) + add;
ret = drm_ht_insert_item(ht, item);
if (ret)
unshifted_key = (unshifted_key + 1) & mask;
} while(ret && (unshifted_key != first));
if (ret) {
DRM_ERROR("Available key bit space exhausted\n");
return -EINVAL;
}
return 0;
}
int drm_ht_find_item(struct drm_open_hash *ht, unsigned long key,
struct drm_hash_item **item)
{
struct hlist_node *list;
list = drm_ht_find_key_rcu(ht, key);
if (!list)
return -EINVAL;
*item = hlist_entry(list, struct drm_hash_item, head);
return 0;
}
int drm_ht_remove_key(struct drm_open_hash *ht, unsigned long key)
{
struct hlist_node *list;
list = drm_ht_find_key(ht, key);
if (list) {
hlist_del_init_rcu(list);
return 0;
}
return -EINVAL;
}
int drm_ht_remove_item(struct drm_open_hash *ht, struct drm_hash_item *item)
{
hlist_del_init_rcu(&item->head);
return 0;
}
void drm_ht_remove(struct drm_open_hash *ht)
{
if (ht->table) {
kvfree(ht->table);
ht->table = NULL;
}
}

View File

@ -117,17 +117,10 @@ void drm_handle_vblank_works(struct drm_vblank_crtc *vblank);
/* IOCTLS */
int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp);
int drm_legacy_modeset_ctl_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
/* drm_irq.c */
/* IOCTLS */
#if IS_ENABLED(CONFIG_DRM_LEGACY)
int drm_legacy_irq_control(struct drm_device *dev, void *data,
struct drm_file *file_priv);
#endif
int drm_crtc_get_sequence_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp);
@ -194,6 +187,8 @@ void drm_debugfs_connector_remove(struct drm_connector *connector);
void drm_debugfs_crtc_add(struct drm_crtc *crtc);
void drm_debugfs_crtc_remove(struct drm_crtc *crtc);
void drm_debugfs_crtc_crc_add(struct drm_crtc *crtc);
void drm_debugfs_encoder_add(struct drm_encoder *encoder);
void drm_debugfs_encoder_remove(struct drm_encoder *encoder);
#else
static inline void drm_debugfs_dev_fini(struct drm_device *dev)
{
@ -231,6 +226,14 @@ static inline void drm_debugfs_crtc_crc_add(struct drm_crtc *crtc)
{
}
static inline void drm_debugfs_encoder_add(struct drm_encoder *encoder)
{
}
static inline void drm_debugfs_encoder_remove(struct drm_encoder *encoder)
{
}
#endif
drm_ioctl_t drm_version;

View File

@ -31,12 +31,12 @@
#include <linux/ratelimit.h>
#include <linux/export.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <drm/drm_print.h>
#include "drm_crtc_internal.h"
#include "drm_internal.h"
#include "drm_legacy.h"
#define DRM_IOCTL_VERSION32 DRM_IOWR(0x00, drm_version32_t)
#define DRM_IOCTL_GET_UNIQUE32 DRM_IOWR(0x01, drm_unique32_t)
@ -163,92 +163,6 @@ static int compat_drm_setunique(struct file *file, unsigned int cmd,
return -EINVAL;
}
#if IS_ENABLED(CONFIG_DRM_LEGACY)
typedef struct drm_map32 {
u32 offset; /* Requested physical address (0 for SAREA) */
u32 size; /* Requested physical size (bytes) */
enum drm_map_type type; /* Type of memory to map */
enum drm_map_flags flags; /* Flags */
u32 handle; /* User-space: "Handle" to pass to mmap() */
int mtrr; /* MTRR slot used */
} drm_map32_t;
static int compat_drm_getmap(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_map32_t __user *argp = (void __user *)arg;
drm_map32_t m32;
struct drm_map map;
int err;
if (copy_from_user(&m32, argp, sizeof(m32)))
return -EFAULT;
map.offset = m32.offset;
err = drm_ioctl_kernel(file, drm_legacy_getmap_ioctl, &map, 0);
if (err)
return err;
m32.offset = map.offset;
m32.size = map.size;
m32.type = map.type;
m32.flags = map.flags;
m32.handle = ptr_to_compat((void __user *)map.handle);
m32.mtrr = map.mtrr;
if (copy_to_user(argp, &m32, sizeof(m32)))
return -EFAULT;
return 0;
}
static int compat_drm_addmap(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_map32_t __user *argp = (void __user *)arg;
drm_map32_t m32;
struct drm_map map;
int err;
if (copy_from_user(&m32, argp, sizeof(m32)))
return -EFAULT;
map.offset = m32.offset;
map.size = m32.size;
map.type = m32.type;
map.flags = m32.flags;
err = drm_ioctl_kernel(file, drm_legacy_addmap_ioctl, &map,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
if (err)
return err;
m32.offset = map.offset;
m32.mtrr = map.mtrr;
m32.handle = ptr_to_compat((void __user *)map.handle);
if (map.handle != compat_ptr(m32.handle))
pr_err_ratelimited("compat_drm_addmap truncated handle %p for type %d offset %x\n",
map.handle, m32.type, m32.offset);
if (copy_to_user(argp, &m32, sizeof(m32)))
return -EFAULT;
return 0;
}
static int compat_drm_rmmap(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_map32_t __user *argp = (void __user *)arg;
struct drm_map map;
u32 handle;
if (get_user(handle, &argp->handle))
return -EFAULT;
map.handle = compat_ptr(handle);
return drm_ioctl_kernel(file, drm_legacy_rmmap_ioctl, &map, DRM_AUTH);
}
#endif
typedef struct drm_client32 {
int idx; /* Which client desired? */
int auth; /* Is client authenticated? */
@ -308,501 +222,6 @@ static int compat_drm_getstats(struct file *file, unsigned int cmd,
return 0;
}
#if IS_ENABLED(CONFIG_DRM_LEGACY)
typedef struct drm_buf_desc32 {
int count; /* Number of buffers of this size */
int size; /* Size in bytes */
int low_mark; /* Low water mark */
int high_mark; /* High water mark */
int flags;
u32 agp_start; /* Start address in the AGP aperture */
} drm_buf_desc32_t;
static int compat_drm_addbufs(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_buf_desc32_t __user *argp = (void __user *)arg;
drm_buf_desc32_t desc32;
struct drm_buf_desc desc;
int err;
if (copy_from_user(&desc32, argp, sizeof(drm_buf_desc32_t)))
return -EFAULT;
desc = (struct drm_buf_desc){
desc32.count, desc32.size, desc32.low_mark, desc32.high_mark,
desc32.flags, desc32.agp_start
};
err = drm_ioctl_kernel(file, drm_legacy_addbufs, &desc,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
if (err)
return err;
desc32 = (drm_buf_desc32_t){
desc.count, desc.size, desc.low_mark, desc.high_mark,
desc.flags, desc.agp_start
};
if (copy_to_user(argp, &desc32, sizeof(drm_buf_desc32_t)))
return -EFAULT;
return 0;
}
static int compat_drm_markbufs(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_buf_desc32_t b32;
drm_buf_desc32_t __user *argp = (void __user *)arg;
struct drm_buf_desc buf;
if (copy_from_user(&b32, argp, sizeof(b32)))
return -EFAULT;
buf.size = b32.size;
buf.low_mark = b32.low_mark;
buf.high_mark = b32.high_mark;
return drm_ioctl_kernel(file, drm_legacy_markbufs, &buf,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
typedef struct drm_buf_info32 {
int count; /**< Entries in list */
u32 list;
} drm_buf_info32_t;
static int copy_one_buf32(void *data, int count, struct drm_buf_entry *from)
{
drm_buf_info32_t *request = data;
drm_buf_desc32_t __user *to = compat_ptr(request->list);
drm_buf_desc32_t v = {.count = from->buf_count,
.size = from->buf_size,
.low_mark = from->low_mark,
.high_mark = from->high_mark};
if (copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags)))
return -EFAULT;
return 0;
}
static int drm_legacy_infobufs32(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
drm_buf_info32_t *request = data;
return __drm_legacy_infobufs(dev, data, &request->count, copy_one_buf32);
}
static int compat_drm_infobufs(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_buf_info32_t req32;
drm_buf_info32_t __user *argp = (void __user *)arg;
int err;
if (copy_from_user(&req32, argp, sizeof(req32)))
return -EFAULT;
if (req32.count < 0)
req32.count = 0;
err = drm_ioctl_kernel(file, drm_legacy_infobufs32, &req32, DRM_AUTH);
if (err)
return err;
if (put_user(req32.count, &argp->count))
return -EFAULT;
return 0;
}
typedef struct drm_buf_pub32 {
int idx; /**< Index into the master buffer list */
int total; /**< Buffer size */
int used; /**< Amount of buffer in use (for DMA) */
u32 address; /**< Address of buffer */
} drm_buf_pub32_t;
typedef struct drm_buf_map32 {
int count; /**< Length of the buffer list */
u32 virtual; /**< Mmap'd area in user-virtual */
u32 list; /**< Buffer information */
} drm_buf_map32_t;
static int map_one_buf32(void *data, int idx, unsigned long virtual,
struct drm_buf *buf)
{
drm_buf_map32_t *request = data;
drm_buf_pub32_t __user *to = compat_ptr(request->list) + idx;
drm_buf_pub32_t v;
v.idx = buf->idx;
v.total = buf->total;
v.used = 0;
v.address = virtual + buf->offset;
if (copy_to_user(to, &v, sizeof(v)))
return -EFAULT;
return 0;
}
static int drm_legacy_mapbufs32(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
drm_buf_map32_t *request = data;
void __user *v;
int err = __drm_legacy_mapbufs(dev, data, &request->count,
&v, map_one_buf32,
file_priv);
request->virtual = ptr_to_compat(v);
return err;
}
static int compat_drm_mapbufs(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_buf_map32_t __user *argp = (void __user *)arg;
drm_buf_map32_t req32;
int err;
if (copy_from_user(&req32, argp, sizeof(req32)))
return -EFAULT;
if (req32.count < 0)
return -EINVAL;
err = drm_ioctl_kernel(file, drm_legacy_mapbufs32, &req32, DRM_AUTH);
if (err)
return err;
if (put_user(req32.count, &argp->count)
|| put_user(req32.virtual, &argp->virtual))
return -EFAULT;
return 0;
}
typedef struct drm_buf_free32 {
int count;
u32 list;
} drm_buf_free32_t;
static int compat_drm_freebufs(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_buf_free32_t req32;
struct drm_buf_free request;
drm_buf_free32_t __user *argp = (void __user *)arg;
if (copy_from_user(&req32, argp, sizeof(req32)))
return -EFAULT;
request.count = req32.count;
request.list = compat_ptr(req32.list);
return drm_ioctl_kernel(file, drm_legacy_freebufs, &request, DRM_AUTH);
}
typedef struct drm_ctx_priv_map32 {
unsigned int ctx_id; /**< Context requesting private mapping */
u32 handle; /**< Handle of map */
} drm_ctx_priv_map32_t;
static int compat_drm_setsareactx(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_ctx_priv_map32_t req32;
struct drm_ctx_priv_map request;
drm_ctx_priv_map32_t __user *argp = (void __user *)arg;
if (copy_from_user(&req32, argp, sizeof(req32)))
return -EFAULT;
request.ctx_id = req32.ctx_id;
request.handle = compat_ptr(req32.handle);
return drm_ioctl_kernel(file, drm_legacy_setsareactx, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
static int compat_drm_getsareactx(struct file *file, unsigned int cmd,
unsigned long arg)
{
struct drm_ctx_priv_map req;
drm_ctx_priv_map32_t req32;
drm_ctx_priv_map32_t __user *argp = (void __user *)arg;
int err;
if (copy_from_user(&req32, argp, sizeof(req32)))
return -EFAULT;
req.ctx_id = req32.ctx_id;
err = drm_ioctl_kernel(file, drm_legacy_getsareactx, &req, DRM_AUTH);
if (err)
return err;
req32.handle = ptr_to_compat((void __user *)req.handle);
if (copy_to_user(argp, &req32, sizeof(req32)))
return -EFAULT;
return 0;
}
typedef struct drm_ctx_res32 {
int count;
u32 contexts;
} drm_ctx_res32_t;
static int compat_drm_resctx(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_ctx_res32_t __user *argp = (void __user *)arg;
drm_ctx_res32_t res32;
struct drm_ctx_res res;
int err;
if (copy_from_user(&res32, argp, sizeof(res32)))
return -EFAULT;
res.count = res32.count;
res.contexts = compat_ptr(res32.contexts);
err = drm_ioctl_kernel(file, drm_legacy_resctx, &res, DRM_AUTH);
if (err)
return err;
res32.count = res.count;
if (copy_to_user(argp, &res32, sizeof(res32)))
return -EFAULT;
return 0;
}
typedef struct drm_dma32 {
int context; /**< Context handle */
int send_count; /**< Number of buffers to send */
u32 send_indices; /**< List of handles to buffers */
u32 send_sizes; /**< Lengths of data to send */
enum drm_dma_flags flags; /**< Flags */
int request_count; /**< Number of buffers requested */
int request_size; /**< Desired size for buffers */
u32 request_indices; /**< Buffer information */
u32 request_sizes;
int granted_count; /**< Number of buffers granted */
} drm_dma32_t;
static int compat_drm_dma(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_dma32_t d32;
drm_dma32_t __user *argp = (void __user *)arg;
struct drm_dma d;
int err;
if (copy_from_user(&d32, argp, sizeof(d32)))
return -EFAULT;
d.context = d32.context;
d.send_count = d32.send_count;
d.send_indices = compat_ptr(d32.send_indices);
d.send_sizes = compat_ptr(d32.send_sizes);
d.flags = d32.flags;
d.request_count = d32.request_count;
d.request_indices = compat_ptr(d32.request_indices);
d.request_sizes = compat_ptr(d32.request_sizes);
err = drm_ioctl_kernel(file, drm_legacy_dma_ioctl, &d, DRM_AUTH);
if (err)
return err;
if (put_user(d.request_size, &argp->request_size)
|| put_user(d.granted_count, &argp->granted_count))
return -EFAULT;
return 0;
}
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
#if IS_ENABLED(CONFIG_AGP)
typedef struct drm_agp_mode32 {
u32 mode; /**< AGP mode */
} drm_agp_mode32_t;
static int compat_drm_agp_enable(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_agp_mode32_t __user *argp = (void __user *)arg;
struct drm_agp_mode mode;
if (get_user(mode.mode, &argp->mode))
return -EFAULT;
return drm_ioctl_kernel(file, drm_legacy_agp_enable_ioctl, &mode,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
typedef struct drm_agp_info32 {
int agp_version_major;
int agp_version_minor;
u32 mode;
u32 aperture_base; /* physical address */
u32 aperture_size; /* bytes */
u32 memory_allowed; /* bytes */
u32 memory_used;
/* PCI information */
unsigned short id_vendor;
unsigned short id_device;
} drm_agp_info32_t;
static int compat_drm_agp_info(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_agp_info32_t __user *argp = (void __user *)arg;
drm_agp_info32_t i32;
struct drm_agp_info info;
int err;
err = drm_ioctl_kernel(file, drm_legacy_agp_info_ioctl, &info, DRM_AUTH);
if (err)
return err;
i32.agp_version_major = info.agp_version_major;
i32.agp_version_minor = info.agp_version_minor;
i32.mode = info.mode;
i32.aperture_base = info.aperture_base;
i32.aperture_size = info.aperture_size;
i32.memory_allowed = info.memory_allowed;
i32.memory_used = info.memory_used;
i32.id_vendor = info.id_vendor;
i32.id_device = info.id_device;
if (copy_to_user(argp, &i32, sizeof(i32)))
return -EFAULT;
return 0;
}
typedef struct drm_agp_buffer32 {
u32 size; /**< In bytes -- will round to page boundary */
u32 handle; /**< Used for binding / unbinding */
u32 type; /**< Type of memory to allocate */
u32 physical; /**< Physical used by i810 */
} drm_agp_buffer32_t;
static int compat_drm_agp_alloc(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_agp_buffer32_t __user *argp = (void __user *)arg;
drm_agp_buffer32_t req32;
struct drm_agp_buffer request;
int err;
if (copy_from_user(&req32, argp, sizeof(req32)))
return -EFAULT;
request.size = req32.size;
request.type = req32.type;
err = drm_ioctl_kernel(file, drm_legacy_agp_alloc_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
if (err)
return err;
req32.handle = request.handle;
req32.physical = request.physical;
if (copy_to_user(argp, &req32, sizeof(req32))) {
drm_ioctl_kernel(file, drm_legacy_agp_free_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
return -EFAULT;
}
return 0;
}
static int compat_drm_agp_free(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_agp_buffer32_t __user *argp = (void __user *)arg;
struct drm_agp_buffer request;
if (get_user(request.handle, &argp->handle))
return -EFAULT;
return drm_ioctl_kernel(file, drm_legacy_agp_free_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
typedef struct drm_agp_binding32 {
u32 handle; /**< From drm_agp_buffer */
u32 offset; /**< In bytes -- will round to page boundary */
} drm_agp_binding32_t;
static int compat_drm_agp_bind(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_agp_binding32_t __user *argp = (void __user *)arg;
drm_agp_binding32_t req32;
struct drm_agp_binding request;
if (copy_from_user(&req32, argp, sizeof(req32)))
return -EFAULT;
request.handle = req32.handle;
request.offset = req32.offset;
return drm_ioctl_kernel(file, drm_legacy_agp_bind_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
static int compat_drm_agp_unbind(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_agp_binding32_t __user *argp = (void __user *)arg;
struct drm_agp_binding request;
if (get_user(request.handle, &argp->handle))
return -EFAULT;
return drm_ioctl_kernel(file, drm_legacy_agp_unbind_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
#endif /* CONFIG_AGP */
typedef struct drm_scatter_gather32 {
u32 size; /**< In bytes -- will round to page boundary */
u32 handle; /**< Used for mapping / unmapping */
} drm_scatter_gather32_t;
static int compat_drm_sg_alloc(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_scatter_gather32_t __user *argp = (void __user *)arg;
struct drm_scatter_gather request;
int err;
if (get_user(request.size, &argp->size))
return -EFAULT;
err = drm_ioctl_kernel(file, drm_legacy_sg_alloc, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
if (err)
return err;
/* XXX not sure about the handle conversion here... */
if (put_user(request.handle >> PAGE_SHIFT, &argp->handle))
return -EFAULT;
return 0;
}
static int compat_drm_sg_free(struct file *file, unsigned int cmd,
unsigned long arg)
{
drm_scatter_gather32_t __user *argp = (void __user *)arg;
struct drm_scatter_gather request;
unsigned long x;
if (get_user(x, &argp->handle))
return -EFAULT;
request.handle = x << PAGE_SHIFT;
return drm_ioctl_kernel(file, drm_legacy_sg_free, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
#endif
#if defined(CONFIG_X86)
typedef struct drm_update_draw32 {
drm_drawable_t handle;
@ -854,7 +273,7 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
req.request.type = req32.request.type;
req.request.sequence = req32.request.sequence;
req.request.signal = req32.request.signal;
err = drm_ioctl_kernel(file, drm_wait_vblank_ioctl, &req, DRM_UNLOCKED);
err = drm_ioctl_kernel(file, drm_wait_vblank_ioctl, &req, 0);
req32.reply.type = req.reply.type;
req32.reply.sequence = req.reply.sequence;
@ -914,37 +333,9 @@ static struct {
#define DRM_IOCTL32_DEF(n, f) [DRM_IOCTL_NR(n##32)] = {.fn = f, .name = #n}
DRM_IOCTL32_DEF(DRM_IOCTL_VERSION, compat_drm_version),
DRM_IOCTL32_DEF(DRM_IOCTL_GET_UNIQUE, compat_drm_getunique),
#if IS_ENABLED(CONFIG_DRM_LEGACY)
DRM_IOCTL32_DEF(DRM_IOCTL_GET_MAP, compat_drm_getmap),
#endif
DRM_IOCTL32_DEF(DRM_IOCTL_GET_CLIENT, compat_drm_getclient),
DRM_IOCTL32_DEF(DRM_IOCTL_GET_STATS, compat_drm_getstats),
DRM_IOCTL32_DEF(DRM_IOCTL_SET_UNIQUE, compat_drm_setunique),
#if IS_ENABLED(CONFIG_DRM_LEGACY)
DRM_IOCTL32_DEF(DRM_IOCTL_ADD_MAP, compat_drm_addmap),
DRM_IOCTL32_DEF(DRM_IOCTL_ADD_BUFS, compat_drm_addbufs),
DRM_IOCTL32_DEF(DRM_IOCTL_MARK_BUFS, compat_drm_markbufs),
DRM_IOCTL32_DEF(DRM_IOCTL_INFO_BUFS, compat_drm_infobufs),
DRM_IOCTL32_DEF(DRM_IOCTL_MAP_BUFS, compat_drm_mapbufs),
DRM_IOCTL32_DEF(DRM_IOCTL_FREE_BUFS, compat_drm_freebufs),
DRM_IOCTL32_DEF(DRM_IOCTL_RM_MAP, compat_drm_rmmap),
DRM_IOCTL32_DEF(DRM_IOCTL_SET_SAREA_CTX, compat_drm_setsareactx),
DRM_IOCTL32_DEF(DRM_IOCTL_GET_SAREA_CTX, compat_drm_getsareactx),
DRM_IOCTL32_DEF(DRM_IOCTL_RES_CTX, compat_drm_resctx),
DRM_IOCTL32_DEF(DRM_IOCTL_DMA, compat_drm_dma),
#if IS_ENABLED(CONFIG_AGP)
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_ENABLE, compat_drm_agp_enable),
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_INFO, compat_drm_agp_info),
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_ALLOC, compat_drm_agp_alloc),
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_FREE, compat_drm_agp_free),
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_BIND, compat_drm_agp_bind),
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_UNBIND, compat_drm_agp_unbind),
#endif
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
DRM_IOCTL32_DEF(DRM_IOCTL_SG_ALLOC, compat_drm_sg_alloc),
DRM_IOCTL32_DEF(DRM_IOCTL_SG_FREE, compat_drm_sg_free),
#endif
#if defined(CONFIG_X86)
DRM_IOCTL32_DEF(DRM_IOCTL_UPDATE_DRAW, compat_drm_update_draw),
#endif

View File

@ -42,7 +42,6 @@
#include "drm_crtc_internal.h"
#include "drm_internal.h"
#include "drm_legacy.h"
/**
* DOC: getunique and setversion story
@ -301,6 +300,10 @@ static int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_
case DRM_CAP_CRTC_IN_VBLANK_EVENT:
req->value = 1;
break;
case DRM_CAP_ATOMIC_ASYNC_PAGE_FLIP:
req->value = drm_core_check_feature(dev, DRIVER_ATOMIC) &&
dev->mode_config.async_page_flip;
break;
default:
return -EINVAL;
}
@ -361,6 +364,15 @@ drm_setclientcap(struct drm_device *dev, void *data, struct drm_file *file_priv)
return -EINVAL;
file_priv->writeback_connectors = req->value;
break;
case DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT:
if (!drm_core_check_feature(dev, DRIVER_CURSOR_HOTSPOT))
return -EOPNOTSUPP;
if (!file_priv->atomic)
return -EINVAL;
if (req->value > 1)
return -EINVAL;
file_priv->supports_virtualized_cursor_plane = req->value;
break;
default:
return -EINVAL;
}
@ -559,21 +571,11 @@ static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
.name = #ioctl \
}
#if IS_ENABLED(CONFIG_DRM_LEGACY)
#define DRM_LEGACY_IOCTL_DEF(ioctl, _func, _flags) DRM_IOCTL_DEF(ioctl, _func, _flags)
#else
#define DRM_LEGACY_IOCTL_DEF(ioctl, _func, _flags) DRM_IOCTL_DEF(ioctl, drm_invalid_op, _flags)
#endif
/* Ioctl table */
static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0),
DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, 0),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_legacy_irq_by_busid,
DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_legacy_getmap_ioctl, 0),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, 0),
DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, 0),
@ -586,63 +588,15 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_UNBLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, drm_authmagic, DRM_MASTER),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_ADD_MAP, drm_legacy_addmap_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_RM_MAP, drm_legacy_rmmap_ioctl, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_SET_SAREA_CTX, drm_legacy_setsareactx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_GET_SAREA_CTX, drm_legacy_getsareactx, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, 0),
DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, 0),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_ADD_CTX, drm_legacy_addctx, DRM_AUTH|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_RM_CTX, drm_legacy_rmctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_MOD_CTX, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_GET_CTX, drm_legacy_getctx, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_SWITCH_CTX, drm_legacy_switchctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_NEW_CTX, drm_legacy_newctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_RES_CTX, drm_legacy_resctx, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_ADD_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_RM_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_LOCK, drm_legacy_lock, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_UNLOCK, drm_legacy_unlock, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_FINISH, drm_noop, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_ADD_BUFS, drm_legacy_addbufs, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_MARK_BUFS, drm_legacy_markbufs, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_INFO_BUFS, drm_legacy_infobufs, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_MAP_BUFS, drm_legacy_mapbufs, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_legacy_freebufs, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_DMA, drm_legacy_dma_ioctl, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_legacy_irq_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
#if IS_ENABLED(CONFIG_AGP)
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_legacy_agp_acquire_ioctl,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_RELEASE, drm_legacy_agp_release_ioctl,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_ENABLE, drm_legacy_agp_enable_ioctl,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_INFO, drm_legacy_agp_info_ioctl, DRM_AUTH),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_ALLOC, drm_legacy_agp_alloc_ioctl,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_FREE, drm_legacy_agp_free_ioctl,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_BIND, drm_legacy_agp_bind_ioctl,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_legacy_agp_unbind_ioctl,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
#endif
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_legacy_sg_alloc, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_legacy_sg_free, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_legacy_modeset_ctl_ioctl, 0),
DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank_ioctl, 0),
DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
@ -775,7 +729,7 @@ long drm_ioctl_kernel(struct file *file, drm_ioctl_t *func, void *kdata,
{
struct drm_file *file_priv = file->private_data;
struct drm_device *dev = file_priv->minor->dev;
int retcode;
int ret;
/* Update drm_file owner if fd was passed along. */
drm_file_update_pid(file_priv);
@ -783,20 +737,11 @@ long drm_ioctl_kernel(struct file *file, drm_ioctl_t *func, void *kdata,
if (drm_dev_is_unplugged(dev))
return -ENODEV;
retcode = drm_ioctl_permit(flags, file_priv);
if (unlikely(retcode))
return retcode;
ret = drm_ioctl_permit(flags, file_priv);
if (unlikely(ret))
return ret;
/* Enforce sane locking for modern driver ioctls. */
if (likely(!drm_core_check_feature(dev, DRIVER_LEGACY)) ||
(flags & DRM_UNLOCKED))
retcode = func(dev, kdata, file_priv);
else {
mutex_lock(&drm_global_mutex);
retcode = func(dev, kdata, file_priv);
mutex_unlock(&drm_global_mutex);
}
return retcode;
return func(dev, kdata, file_priv);
}
EXPORT_SYMBOL(drm_ioctl_kernel);

View File

@ -1,204 +0,0 @@
/*
* drm_irq.c IRQ and vblank support
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/*
* Created: Fri Mar 19 14:30:16 1999 by faith@valinux.com
*
* Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/export.h>
#include <linux/interrupt.h> /* For task queue support */
#include <linux/pci.h>
#include <linux/vgaarb.h>
#include <drm/drm.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_legacy.h>
#include <drm/drm_print.h>
#include <drm/drm_vblank.h>
#include "drm_internal.h"
static int drm_legacy_irq_install(struct drm_device *dev, int irq)
{
int ret;
unsigned long sh_flags = 0;
if (irq == 0)
return -EINVAL;
if (dev->irq_enabled)
return -EBUSY;
dev->irq_enabled = true;
DRM_DEBUG("irq=%d\n", irq);
/* Before installing handler */
if (dev->driver->irq_preinstall)
dev->driver->irq_preinstall(dev);
/* PCI devices require shared interrupts. */
if (dev_is_pci(dev->dev))
sh_flags = IRQF_SHARED;
ret = request_irq(irq, dev->driver->irq_handler,
sh_flags, dev->driver->name, dev);
if (ret < 0) {
dev->irq_enabled = false;
return ret;
}
/* After installing handler */
if (dev->driver->irq_postinstall)
ret = dev->driver->irq_postinstall(dev);
if (ret < 0) {
dev->irq_enabled = false;
if (drm_core_check_feature(dev, DRIVER_LEGACY))
vga_client_unregister(to_pci_dev(dev->dev));
free_irq(irq, dev);
} else {
dev->irq = irq;
}
return ret;
}
int drm_legacy_irq_uninstall(struct drm_device *dev)
{
unsigned long irqflags;
bool irq_enabled;
int i;
irq_enabled = dev->irq_enabled;
dev->irq_enabled = false;
/*
* Wake up any waiters so they don't hang. This is just to paper over
* issues for UMS drivers which aren't in full control of their
* vblank/irq handling. KMS drivers must ensure that vblanks are all
* disabled when uninstalling the irq handler.
*/
if (drm_dev_has_vblank(dev)) {
spin_lock_irqsave(&dev->vbl_lock, irqflags);
for (i = 0; i < dev->num_crtcs; i++) {
struct drm_vblank_crtc *vblank = &dev->vblank[i];
if (!vblank->enabled)
continue;
WARN_ON(drm_core_check_feature(dev, DRIVER_MODESET));
drm_vblank_disable_and_save(dev, i);
wake_up(&vblank->queue);
}
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
}
if (!irq_enabled)
return -EINVAL;
DRM_DEBUG("irq=%d\n", dev->irq);
if (drm_core_check_feature(dev, DRIVER_LEGACY))
vga_client_unregister(to_pci_dev(dev->dev));
if (dev->driver->irq_uninstall)
dev->driver->irq_uninstall(dev);
free_irq(dev->irq, dev);
return 0;
}
EXPORT_SYMBOL(drm_legacy_irq_uninstall);
int drm_legacy_irq_control(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_control *ctl = data;
int ret = 0, irq;
struct pci_dev *pdev;
/* If we don't have an IRQ we fall back for compatibility reasons -
* this used to be a separate function in drm_dma.h.
*/
if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
return 0;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return 0;
/* UMS was only ever supported on PCI devices. */
if (WARN_ON(!dev_is_pci(dev->dev)))
return -EINVAL;
switch (ctl->func) {
case DRM_INST_HANDLER:
pdev = to_pci_dev(dev->dev);
irq = pdev->irq;
if (dev->if_version < DRM_IF_VERSION(1, 2) &&
ctl->irq != irq)
return -EINVAL;
mutex_lock(&dev->struct_mutex);
ret = drm_legacy_irq_install(dev, irq);
mutex_unlock(&dev->struct_mutex);
return ret;
case DRM_UNINST_HANDLER:
mutex_lock(&dev->struct_mutex);
ret = drm_legacy_irq_uninstall(dev);
mutex_unlock(&dev->struct_mutex);
return ret;
default:
return -EINVAL;
}
}

View File

@ -1,290 +0,0 @@
#ifndef __DRM_LEGACY_H__
#define __DRM_LEGACY_H__
/*
* Copyright (c) 2014 David Herrmann <dh.herrmann@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/*
* This file contains legacy interfaces that modern drm drivers
* should no longer be using. They cannot be removed as legacy
* drivers use them, and removing them would be an API break.
*/
#include <linux/list.h>
#include <drm/drm.h>
#include <drm/drm_device.h>
#include <drm/drm_legacy.h>
struct agp_memory;
struct drm_buf_desc;
struct drm_device;
struct drm_file;
struct drm_hash_item;
struct drm_open_hash;
/*
* Hash-table Support
*/
#define drm_hash_entry(_ptr, _type, _member) container_of(_ptr, _type, _member)
/* drm_hashtab.c */
#if IS_ENABLED(CONFIG_DRM_LEGACY)
int drm_ht_create(struct drm_open_hash *ht, unsigned int order);
int drm_ht_insert_item(struct drm_open_hash *ht, struct drm_hash_item *item);
int drm_ht_just_insert_please(struct drm_open_hash *ht, struct drm_hash_item *item,
unsigned long seed, int bits, int shift,
unsigned long add);
int drm_ht_find_item(struct drm_open_hash *ht, unsigned long key, struct drm_hash_item **item);
void drm_ht_verbose_list(struct drm_open_hash *ht, unsigned long key);
int drm_ht_remove_key(struct drm_open_hash *ht, unsigned long key);
int drm_ht_remove_item(struct drm_open_hash *ht, struct drm_hash_item *item);
void drm_ht_remove(struct drm_open_hash *ht);
#endif
/*
* RCU-safe interface
*
* The user of this API needs to make sure that two or more instances of the
* hash table manipulation functions are never run simultaneously.
* The lookup function drm_ht_find_item_rcu may, however, run simultaneously
* with any of the manipulation functions as long as it's called from within
* an RCU read-locked section.
*/
#define drm_ht_insert_item_rcu drm_ht_insert_item
#define drm_ht_just_insert_please_rcu drm_ht_just_insert_please
#define drm_ht_remove_key_rcu drm_ht_remove_key
#define drm_ht_remove_item_rcu drm_ht_remove_item
#define drm_ht_find_item_rcu drm_ht_find_item
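/*
 * A minimal sketch of the read-side contract above (an illustration, not
 * part of this header): lookups may run concurrently with a single writer
 * as long as they stay inside an RCU read-locked section. Assumes
 * <linux/rcupdate.h>.
 */
static inline bool example_key_present_rcu(struct drm_open_hash *ht,
					   unsigned long key)
{
	struct drm_hash_item *item;
	bool present;

	rcu_read_lock();
	/* The found item may only be dereferenced inside this section. */
	present = drm_ht_find_item_rcu(ht, key, &item) == 0;
	rcu_read_unlock();

	return present;
}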
/*
* Generic DRM Contexts
*/
#define DRM_KERNEL_CONTEXT 0
#define DRM_RESERVED_CONTEXTS 1
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_legacy_ctxbitmap_init(struct drm_device *dev);
void drm_legacy_ctxbitmap_cleanup(struct drm_device *dev);
void drm_legacy_ctxbitmap_flush(struct drm_device *dev, struct drm_file *file);
#else
static inline void drm_legacy_ctxbitmap_init(struct drm_device *dev) {}
static inline void drm_legacy_ctxbitmap_cleanup(struct drm_device *dev) {}
static inline void drm_legacy_ctxbitmap_flush(struct drm_device *dev, struct drm_file *file) {}
#endif
void drm_legacy_ctxbitmap_free(struct drm_device *dev, int ctx_handle);
#if IS_ENABLED(CONFIG_DRM_LEGACY)
int drm_legacy_resctx(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_addctx(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_getctx(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_switchctx(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_newctx(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_rmctx(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_setsareactx(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_getsareactx(struct drm_device *d, void *v, struct drm_file *f);
#endif
/*
* Generic Buffer Management
*/
#define DRM_MAP_HASH_OFFSET 0x10000000
#if IS_ENABLED(CONFIG_DRM_LEGACY)
static inline int drm_legacy_create_map_hash(struct drm_device *dev)
{
return drm_ht_create(&dev->map_hash, 12);
}
static inline void drm_legacy_remove_map_hash(struct drm_device *dev)
{
drm_ht_remove(&dev->map_hash);
}
#else
static inline int drm_legacy_create_map_hash(struct drm_device *dev)
{
return 0;
}
static inline void drm_legacy_remove_map_hash(struct drm_device *dev) {}
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
int drm_legacy_getmap_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_addmap_ioctl(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_rmmap_ioctl(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_addbufs(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_infobufs(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_markbufs(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_freebufs(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_mapbufs(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_dma_ioctl(struct drm_device *d, void *v, struct drm_file *f);
#endif
int __drm_legacy_infobufs(struct drm_device *, void *, int *,
int (*)(void *, int, struct drm_buf_entry *));
int __drm_legacy_mapbufs(struct drm_device *, void *, int *,
void __user **,
int (*)(void *, int, unsigned long, struct drm_buf *),
struct drm_file *);
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_legacy_master_rmmaps(struct drm_device *dev,
struct drm_master *master);
void drm_legacy_rmmaps(struct drm_device *dev);
#else
static inline void drm_legacy_master_rmmaps(struct drm_device *dev,
struct drm_master *master) {}
static inline void drm_legacy_rmmaps(struct drm_device *dev) {}
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_legacy_vma_flush(struct drm_device *d);
#else
static inline void drm_legacy_vma_flush(struct drm_device *d)
{
/* do nothing */
}
#endif
/*
* AGP Support
*/
struct drm_agp_mem {
unsigned long handle;
struct agp_memory *memory;
unsigned long bound;
int pages;
struct list_head head;
};
/* drm_agpsupport.c */
#if IS_ENABLED(CONFIG_DRM_LEGACY) && IS_ENABLED(CONFIG_AGP)
void drm_legacy_agp_clear(struct drm_device *dev);
int drm_legacy_agp_acquire_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_release_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_enable_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_info_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_alloc_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_free_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_unbind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_bind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
#else
static inline void drm_legacy_agp_clear(struct drm_device *dev) {}
#endif
/* drm_lock.c */
#if IS_ENABLED(CONFIG_DRM_LEGACY)
int drm_legacy_lock(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_unlock(struct drm_device *d, void *v, struct drm_file *f);
void drm_legacy_lock_release(struct drm_device *dev, struct file *filp);
#else
static inline void drm_legacy_lock_release(struct drm_device *dev, struct file *filp) {}
#endif
/* DMA support */
#if IS_ENABLED(CONFIG_DRM_LEGACY)
int drm_legacy_dma_setup(struct drm_device *dev);
void drm_legacy_dma_takedown(struct drm_device *dev);
#else
static inline int drm_legacy_dma_setup(struct drm_device *dev)
{
return 0;
}
#endif
void drm_legacy_free_buffer(struct drm_device *dev,
struct drm_buf * buf);
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_legacy_reclaim_buffers(struct drm_device *dev,
struct drm_file *filp);
#else
static inline void drm_legacy_reclaim_buffers(struct drm_device *dev,
struct drm_file *filp) {}
#endif
/* Scatter Gather Support */
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_legacy_sg_cleanup(struct drm_device *dev);
int drm_legacy_sg_alloc(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_sg_free(struct drm_device *dev, void *data,
struct drm_file *file_priv);
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_legacy_init_members(struct drm_device *dev);
void drm_legacy_destroy_members(struct drm_device *dev);
void drm_legacy_dev_reinit(struct drm_device *dev);
int drm_legacy_setup(struct drm_device * dev);
#else
static inline void drm_legacy_init_members(struct drm_device *dev) {}
static inline void drm_legacy_destroy_members(struct drm_device *dev) {}
static inline void drm_legacy_dev_reinit(struct drm_device *dev) {}
static inline int drm_legacy_setup(struct drm_device * dev) { return 0; }
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_legacy_lock_master_cleanup(struct drm_device *dev, struct drm_master *master);
#else
static inline void drm_legacy_lock_master_cleanup(struct drm_device *dev, struct drm_master *master) {}
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_master_legacy_init(struct drm_master *master);
#else
static inline void drm_master_legacy_init(struct drm_master *master) {}
#endif
/* drm_pci.c */
#if IS_ENABLED(CONFIG_DRM_LEGACY) && IS_ENABLED(CONFIG_PCI)
int drm_legacy_irq_by_busid(struct drm_device *dev, void *data, struct drm_file *file_priv);
void drm_legacy_pci_agp_destroy(struct drm_device *dev);
#else
static inline int drm_legacy_irq_by_busid(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
return -EINVAL;
}
static inline void drm_legacy_pci_agp_destroy(struct drm_device *dev) {}
#endif
#endif /* __DRM_LEGACY_H__ */

View File

@ -1,105 +0,0 @@
/*
* \file drm_legacy_misc.c
* Misc legacy support functions.
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*/
/*
* Created: Tue Feb 2 08:37:54 1999 by faith@valinux.com
*
* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_print.h>
#include "drm_internal.h"
#include "drm_legacy.h"
void drm_legacy_init_members(struct drm_device *dev)
{
INIT_LIST_HEAD(&dev->ctxlist);
INIT_LIST_HEAD(&dev->vmalist);
INIT_LIST_HEAD(&dev->maplist);
spin_lock_init(&dev->buf_lock);
mutex_init(&dev->ctxlist_mutex);
}
void drm_legacy_destroy_members(struct drm_device *dev)
{
mutex_destroy(&dev->ctxlist_mutex);
}
int drm_legacy_setup(struct drm_device * dev)
{
int ret;
if (dev->driver->firstopen &&
drm_core_check_feature(dev, DRIVER_LEGACY)) {
ret = dev->driver->firstopen(dev);
if (ret != 0)
return ret;
}
ret = drm_legacy_dma_setup(dev);
if (ret < 0)
return ret;
DRM_DEBUG("\n");
return 0;
}
void drm_legacy_dev_reinit(struct drm_device *dev)
{
if (dev->irq_enabled)
drm_legacy_irq_uninstall(dev);
mutex_lock(&dev->struct_mutex);
drm_legacy_agp_clear(dev);
drm_legacy_sg_cleanup(dev);
drm_legacy_vma_flush(dev);
drm_legacy_dma_takedown(dev);
mutex_unlock(&dev->struct_mutex);
dev->sigdata.lock = NULL;
dev->context_flag = 0;
dev->last_context = 0;
dev->if_version = 0;
DRM_DEBUG("lastclose completed\n");
}
void drm_master_legacy_init(struct drm_master *master)
{
spin_lock_init(&master->lock.spinlock);
init_waitqueue_head(&master->lock.lock_queue);
}

View File

@ -1,373 +0,0 @@
/*
* \file drm_lock.c
* IOCTLs for locking
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*/
/*
* Created: Tue Feb 2 08:37:54 1999 by faith@valinux.com
*
* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/export.h>
#include <linux/sched/signal.h>
#include <drm/drm.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_print.h>
#include "drm_internal.h"
#include "drm_legacy.h"
static int drm_lock_take(struct drm_lock_data *lock_data, unsigned int context);
/*
* Take the heavyweight lock.
*
* \param lock lock pointer.
* \param context locking context.
* \return one if the lock is held, or zero otherwise.
*
* Attempt to mark the lock as held by the given context, via the \p cmpxchg instruction.
*/
static
int drm_lock_take(struct drm_lock_data *lock_data,
unsigned int context)
{
unsigned int old, new, prev;
volatile unsigned int *lock = &lock_data->hw_lock->lock;
spin_lock_bh(&lock_data->spinlock);
do {
old = *lock;
if (old & _DRM_LOCK_HELD)
new = old | _DRM_LOCK_CONT;
else {
new = context | _DRM_LOCK_HELD |
((lock_data->user_waiters + lock_data->kernel_waiters > 1) ?
_DRM_LOCK_CONT : 0);
}
prev = cmpxchg(lock, old, new);
} while (prev != old);
spin_unlock_bh(&lock_data->spinlock);
if (_DRM_LOCKING_CONTEXT(old) == context) {
if (old & _DRM_LOCK_HELD) {
if (context != DRM_KERNEL_CONTEXT) {
DRM_ERROR("%d holds heavyweight lock\n",
context);
}
return 0;
}
}
if ((_DRM_LOCKING_CONTEXT(new)) == context && (new & _DRM_LOCK_HELD)) {
/* Have lock */
return 1;
}
return 0;
}
/*
* This takes a lock forcibly and hands it to context. Should ONLY be used
* inside *_unlock to give the lock to the kernel before calling *_dma_schedule.
*
* \param dev DRM device.
* \param lock lock pointer.
* \param context locking context.
* \return always one.
*
* Resets the lock file pointer.
* Marks the lock as held by the given context, via the \p cmpxchg instruction.
*/
static int drm_lock_transfer(struct drm_lock_data *lock_data,
unsigned int context)
{
unsigned int old, new, prev;
volatile unsigned int *lock = &lock_data->hw_lock->lock;
lock_data->file_priv = NULL;
do {
old = *lock;
new = context | _DRM_LOCK_HELD;
prev = cmpxchg(lock, old, new);
} while (prev != old);
return 1;
}
static int drm_legacy_lock_free(struct drm_lock_data *lock_data,
unsigned int context)
{
unsigned int old, new, prev;
volatile unsigned int *lock = &lock_data->hw_lock->lock;
spin_lock_bh(&lock_data->spinlock);
if (lock_data->kernel_waiters != 0) {
drm_lock_transfer(lock_data, 0);
lock_data->idle_has_lock = 1;
spin_unlock_bh(&lock_data->spinlock);
return 1;
}
spin_unlock_bh(&lock_data->spinlock);
do {
old = *lock;
new = _DRM_LOCKING_CONTEXT(old);
prev = cmpxchg(lock, old, new);
} while (prev != old);
if (_DRM_LOCK_IS_HELD(old) && _DRM_LOCKING_CONTEXT(old) != context) {
DRM_ERROR("%d freed heavyweight lock held by %d\n",
context, _DRM_LOCKING_CONTEXT(old));
return 1;
}
wake_up_interruptible(&lock_data->lock_queue);
return 0;
}
/*
* Lock ioctl.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument, pointing to a drm_lock structure.
* \return zero on success or negative number on failure.
*
* Add the current task to the lock wait queue, and attempt to take the lock.
*/
int drm_legacy_lock(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
DECLARE_WAITQUEUE(entry, current);
struct drm_lock *lock = data;
struct drm_master *master = file_priv->master;
int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
++file_priv->lock_count;
if (lock->context == DRM_KERNEL_CONTEXT) {
DRM_ERROR("Process %d using kernel context %d\n",
task_pid_nr(current), lock->context);
return -EINVAL;
}
DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
lock->context, task_pid_nr(current),
master->lock.hw_lock ? master->lock.hw_lock->lock : -1,
lock->flags);
add_wait_queue(&master->lock.lock_queue, &entry);
spin_lock_bh(&master->lock.spinlock);
master->lock.user_waiters++;
spin_unlock_bh(&master->lock.spinlock);
for (;;) {
__set_current_state(TASK_INTERRUPTIBLE);
if (!master->lock.hw_lock) {
/* Device has been unregistered */
send_sig(SIGTERM, current, 0);
ret = -EINTR;
break;
}
if (drm_lock_take(&master->lock, lock->context)) {
master->lock.file_priv = file_priv;
master->lock.lock_time = jiffies;
break; /* Got lock */
}
/* Contention */
mutex_unlock(&drm_global_mutex);
schedule();
mutex_lock(&drm_global_mutex);
if (signal_pending(current)) {
ret = -EINTR;
break;
}
}
spin_lock_bh(&master->lock.spinlock);
master->lock.user_waiters--;
spin_unlock_bh(&master->lock.spinlock);
__set_current_state(TASK_RUNNING);
remove_wait_queue(&master->lock.lock_queue, &entry);
DRM_DEBUG("%d %s\n", lock->context,
ret ? "interrupted" : "has lock");
if (ret) return ret;
/* don't set the block all signals on the master process for now
* really probably not the correct answer but lets us debug xkb
* xserver for now */
if (!drm_is_current_master(file_priv)) {
dev->sigdata.context = lock->context;
dev->sigdata.lock = master->lock.hw_lock;
}
if (dev->driver->dma_quiescent && (lock->flags & _DRM_LOCK_QUIESCENT))
{
if (dev->driver->dma_quiescent(dev)) {
DRM_DEBUG("%d waiting for DMA quiescent\n",
lock->context);
return -EBUSY;
}
}
return 0;
}
/*
* Unlock ioctl.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument, pointing to a drm_lock structure.
* \return zero on success or negative number on failure.
*
* Transfer and free the lock.
*/
int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_priv)
{
struct drm_lock *lock = data;
struct drm_master *master = file_priv->master;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
if (lock->context == DRM_KERNEL_CONTEXT) {
DRM_ERROR("Process %d using kernel context %d\n",
task_pid_nr(current), lock->context);
return -EINVAL;
}
if (drm_legacy_lock_free(&master->lock, lock->context)) {
/* FIXME: Should really bail out here. */
}
return 0;
}
/*
* This function returns immediately. It takes the hw lock with the kernel
* context if the lock is free; otherwise it is given the highest priority to
* acquire the lock when and if it is eventually released.
*
* This guarantees that the kernel will _eventually_ have the lock _unless_ it is held
* by a blocked process. (In the latter case an explicit wait for the hardware lock would cause
* a deadlock, which is why the "idlelock" was invented).
*
* This should be sufficient to wait for GPU idle without
* having to worry about starvation.
*/
void drm_legacy_idlelock_take(struct drm_lock_data *lock_data)
{
int ret;
spin_lock_bh(&lock_data->spinlock);
lock_data->kernel_waiters++;
if (!lock_data->idle_has_lock) {
spin_unlock_bh(&lock_data->spinlock);
ret = drm_lock_take(lock_data, DRM_KERNEL_CONTEXT);
spin_lock_bh(&lock_data->spinlock);
if (ret == 1)
lock_data->idle_has_lock = 1;
}
spin_unlock_bh(&lock_data->spinlock);
}
EXPORT_SYMBOL(drm_legacy_idlelock_take);
void drm_legacy_idlelock_release(struct drm_lock_data *lock_data)
{
unsigned int old, prev;
volatile unsigned int *lock = &lock_data->hw_lock->lock;
spin_lock_bh(&lock_data->spinlock);
if (--lock_data->kernel_waiters == 0) {
if (lock_data->idle_has_lock) {
do {
old = *lock;
prev = cmpxchg(lock, old, DRM_KERNEL_CONTEXT);
} while (prev != old);
wake_up_interruptible(&lock_data->lock_queue);
lock_data->idle_has_lock = 0;
}
}
spin_unlock_bh(&lock_data->spinlock);
}
EXPORT_SYMBOL(drm_legacy_idlelock_release);
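/*
 * A minimal sketch (an assumed caller, not taken from this patch) of the
 * idlelock pattern described above: bracket a wait for engine idle so the
 * kernel eventually gets the heavyweight lock without deadlocking on a
 * blocked holder.
 */
static void example_wait_for_gpu_idle(struct drm_device *dev,
				      struct drm_master *master)
{
	drm_legacy_idlelock_take(&master->lock);

	/* Driver-specific wait; dma_quiescent is an optional hook. */
	if (dev->driver->dma_quiescent)
		dev->driver->dma_quiescent(dev);

	drm_legacy_idlelock_release(&master->lock);
}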
static int drm_legacy_i_have_hw_lock(struct drm_device *dev,
struct drm_file *file_priv)
{
struct drm_master *master = file_priv->master;
return (file_priv->lock_count && master->lock.hw_lock &&
_DRM_LOCK_IS_HELD(master->lock.hw_lock->lock) &&
master->lock.file_priv == file_priv);
}
void drm_legacy_lock_release(struct drm_device *dev, struct file *filp)
{
struct drm_file *file_priv = filp->private_data;
/* if the master has gone away we can't do anything with the lock */
if (!dev->master)
return;
if (drm_legacy_i_have_hw_lock(dev, file_priv)) {
DRM_DEBUG("File %p released, freeing lock for context %d\n",
filp, _DRM_LOCKING_CONTEXT(file_priv->master->lock.hw_lock->lock));
drm_legacy_lock_free(&file_priv->master->lock,
_DRM_LOCKING_CONTEXT(file_priv->master->lock.hw_lock->lock));
}
}
void drm_legacy_lock_master_cleanup(struct drm_device *dev, struct drm_master *master)
{
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return;
/*
* Since the master is disappearing, so is the
* possibility to lock.
*/
mutex_lock(&dev->struct_mutex);
if (master->lock.hw_lock) {
if (dev->sigdata.lock == master->lock.hw_lock)
dev->sigdata.lock = NULL;
master->lock.hw_lock = NULL;
master->lock.file_priv = NULL;
wake_up_interruptible_all(&master->lock.lock_queue);
}
mutex_unlock(&dev->struct_mutex);
}

View File

@ -1,138 +0,0 @@
/*
* \file drm_memory.c
* Memory management wrappers for DRM
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*/
/*
* Created: Thu Feb 4 14:00:34 1999 by faith@valinux.com
*
* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/export.h>
#include <linux/highmem.h>
#include <linux/pci.h>
#include <linux/vmalloc.h>
#include <drm/drm_cache.h>
#include <drm/drm_device.h>
#include "drm_legacy.h"
#if IS_ENABLED(CONFIG_AGP)
#ifdef HAVE_PAGE_AGP
# include <asm/agp.h>
#else
# ifdef __powerpc__
# define PAGE_AGP pgprot_noncached_wc(PAGE_KERNEL)
# else
# define PAGE_AGP PAGE_KERNEL
# endif
#endif
static void *agp_remap(unsigned long offset, unsigned long size,
struct drm_device *dev)
{
unsigned long i, num_pages =
PAGE_ALIGN(size) / PAGE_SIZE;
struct drm_agp_mem *agpmem;
struct page **page_map;
struct page **phys_page_map;
void *addr;
size = PAGE_ALIGN(size);
#ifdef __alpha__
offset -= dev->hose->mem_space->start;
#endif
list_for_each_entry(agpmem, &dev->agp->memory, head)
if (agpmem->bound <= offset
&& (agpmem->bound + (agpmem->pages << PAGE_SHIFT)) >=
(offset + size))
break;
if (&agpmem->head == &dev->agp->memory)
return NULL;
/*
* OK, we're mapping AGP space on a chipset/platform on which memory accesses by
* the CPU do not get remapped by the GART. We fix this by using the kernel's
* page-table instead (that's probably faster anyhow...).
*/
/* note: use vmalloc() because num_pages could be large... */
page_map = vmalloc(array_size(num_pages, sizeof(struct page *)));
if (!page_map)
return NULL;
phys_page_map = (agpmem->memory->pages + (offset - agpmem->bound) / PAGE_SIZE);
for (i = 0; i < num_pages; ++i)
page_map[i] = phys_page_map[i];
addr = vmap(page_map, num_pages, VM_IOREMAP, PAGE_AGP);
vfree(page_map);
return addr;
}
#else /* CONFIG_AGP */
static inline void *agp_remap(unsigned long offset, unsigned long size,
struct drm_device *dev)
{
return NULL;
}
#endif /* CONFIG_AGP */
void drm_legacy_ioremap(struct drm_local_map *map, struct drm_device *dev)
{
if (dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
map->handle = agp_remap(map->offset, map->size, dev);
else
map->handle = ioremap(map->offset, map->size);
}
EXPORT_SYMBOL(drm_legacy_ioremap);
void drm_legacy_ioremap_wc(struct drm_local_map *map, struct drm_device *dev)
{
if (dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
map->handle = agp_remap(map->offset, map->size, dev);
else
map->handle = ioremap_wc(map->offset, map->size);
}
EXPORT_SYMBOL(drm_legacy_ioremap_wc);
void drm_legacy_ioremapfree(struct drm_local_map *map, struct drm_device *dev)
{
if (!map->handle || !map->size)
return;
if (dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
vunmap(map->handle);
else
iounmap(map->handle);
}
EXPORT_SYMBOL(drm_legacy_ioremapfree);

View File

@ -347,7 +347,8 @@ static int mipi_dsi_remove_device_fn(struct device *dev, void *priv)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
mipi_dsi_detach(dsi);
if (dsi->attached)
mipi_dsi_detach(dsi);
mipi_dsi_device_unregister(dsi);
return 0;
@ -370,11 +371,18 @@ EXPORT_SYMBOL(mipi_dsi_host_unregister);
int mipi_dsi_attach(struct mipi_dsi_device *dsi)
{
const struct mipi_dsi_host_ops *ops = dsi->host->ops;
int ret;
if (!ops || !ops->attach)
return -ENOSYS;
return ops->attach(dsi->host, dsi);
ret = ops->attach(dsi->host, dsi);
if (ret)
return ret;
dsi->attached = true;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_attach);
@@ -386,9 +394,14 @@ int mipi_dsi_detach(struct mipi_dsi_device *dsi)
{
const struct mipi_dsi_host_ops *ops = dsi->host->ops;
if (WARN_ON(!dsi->attached))
return -EINVAL;
if (!ops || !ops->detach)
return -ENOSYS;
dsi->attached = false;
return ops->detach(dsi->host, dsi);
}
EXPORT_SYMBOL(mipi_dsi_detach);
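The attach-state tracking above exists so teardown stays balanced. A minimal sketch of the pairing from a DSI peripheral driver's perspective (driver and function names are hypothetical; only mipi_dsi_attach()/mipi_dsi_detach() and the new dsi->attached flag come from this diff):

#include <drm/drm_mipi_dsi.h>

static int sketch_panel_probe(struct mipi_dsi_device *dsi)
{
        /* On success this now also sets dsi->attached, so a later host
         * unregister will not detach the same device a second time. */
        return mipi_dsi_attach(dsi);
}

static void sketch_panel_remove(struct mipi_dsi_device *dsi)
{
        /* Clears dsi->attached; an unbalanced call now triggers a WARN
         * and returns -EINVAL instead of calling into the host ops. */
        mipi_dsi_detach(dsi);
}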

View File

@@ -538,7 +538,7 @@ retry:
obj_to_connector(obj),
prop_value);
} else {
ret = drm_atomic_set_property(state, file_priv, obj, prop, prop_value);
ret = drm_atomic_set_property(state, file_priv, obj, prop, prop_value, false);
if (ret)
goto out;
ret = drm_atomic_commit(state);

View File

@@ -29,18 +29,12 @@
#include <linux/pci.h>
#include <linux/slab.h>
#include <drm/drm_auth.h>
#include <drm/drm.h>
#include <drm/drm_drv.h>
#include <drm/drm_print.h>
#include "drm_internal.h"
#include "drm_legacy.h"
#ifdef CONFIG_DRM_LEGACY
/* List of devices hanging off drivers with stealth attach. */
static LIST_HEAD(legacy_dev_list);
static DEFINE_MUTEX(legacy_dev_list_lock);
#endif
static int drm_get_pci_domain(struct drm_device *dev)
{
@@ -71,199 +65,3 @@ int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master)
master->unique_len = strlen(master->unique);
return 0;
}
#ifdef CONFIG_DRM_LEGACY
static int drm_legacy_pci_irq_by_busid(struct drm_device *dev, struct drm_irq_busid *p)
{
struct pci_dev *pdev = to_pci_dev(dev->dev);
if ((p->busnum >> 8) != drm_get_pci_domain(dev) ||
(p->busnum & 0xff) != pdev->bus->number ||
p->devnum != PCI_SLOT(pdev->devfn) || p->funcnum != PCI_FUNC(pdev->devfn))
return -EINVAL;
p->irq = pdev->irq;
DRM_DEBUG("%d:%d:%d => IRQ %d\n", p->busnum, p->devnum, p->funcnum,
p->irq);
return 0;
}
/**
* drm_legacy_irq_by_busid - Get interrupt from bus ID
* @dev: DRM device
* @data: IOCTL parameter pointing to a drm_irq_busid structure
* @file_priv: DRM file private.
*
* Finds the PCI device with the specified bus id and gets its IRQ number.
* This IOCTL is deprecated, and will now return EINVAL for any busid not equal
* to that of the device this DRM instance is attached to.
*
* Return: 0 on success or a negative error code on failure.
*/
int drm_legacy_irq_by_busid(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_irq_busid *p = data;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
/* UMS was only ever supported on PCI devices. */
if (WARN_ON(!dev_is_pci(dev->dev)))
return -EINVAL;
if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
return -EOPNOTSUPP;
return drm_legacy_pci_irq_by_busid(dev, p);
}
void drm_legacy_pci_agp_destroy(struct drm_device *dev)
{
if (dev->agp) {
arch_phys_wc_del(dev->agp->agp_mtrr);
drm_legacy_agp_clear(dev);
kfree(dev->agp);
dev->agp = NULL;
}
}
static void drm_legacy_pci_agp_init(struct drm_device *dev)
{
if (drm_core_check_feature(dev, DRIVER_USE_AGP)) {
if (pci_find_capability(to_pci_dev(dev->dev), PCI_CAP_ID_AGP))
dev->agp = drm_legacy_agp_init(dev);
if (dev->agp) {
dev->agp->agp_mtrr = arch_phys_wc_add(
dev->agp->agp_info.aper_base,
dev->agp->agp_info.aper_size *
1024 * 1024);
}
}
}
static int drm_legacy_get_pci_dev(struct pci_dev *pdev,
const struct pci_device_id *ent,
const struct drm_driver *driver)
{
struct drm_device *dev;
int ret;
DRM_DEBUG("\n");
dev = drm_dev_alloc(driver, &pdev->dev);
if (IS_ERR(dev))
return PTR_ERR(dev);
ret = pci_enable_device(pdev);
if (ret)
goto err_free;
#ifdef __alpha__
dev->hose = pdev->sysdata;
#endif
drm_legacy_pci_agp_init(dev);
ret = drm_dev_register(dev, ent->driver_data);
if (ret)
goto err_agp;
if (drm_core_check_feature(dev, DRIVER_LEGACY)) {
mutex_lock(&legacy_dev_list_lock);
list_add_tail(&dev->legacy_dev_list, &legacy_dev_list);
mutex_unlock(&legacy_dev_list_lock);
}
return 0;
err_agp:
drm_legacy_pci_agp_destroy(dev);
pci_disable_device(pdev);
err_free:
drm_dev_put(dev);
return ret;
}
/**
* drm_legacy_pci_init - shadow-attach a legacy DRM PCI driver
* @driver: DRM device driver
* @pdriver: PCI device driver
*
* This is only used by legacy dri1 drivers and deprecated.
*
* Return: 0 on success or a negative error code on failure.
*/
int drm_legacy_pci_init(const struct drm_driver *driver,
struct pci_driver *pdriver)
{
struct pci_dev *pdev = NULL;
const struct pci_device_id *pid;
int i;
DRM_DEBUG("\n");
if (WARN_ON(!(driver->driver_features & DRIVER_LEGACY)))
return -EINVAL;
/* If not using KMS, fall back to stealth mode manual scanning. */
for (i = 0; pdriver->id_table[i].vendor != 0; i++) {
pid = &pdriver->id_table[i];
/* Loop around setting up a DRM device for each PCI device
* matching our ID and device class. If we had the internal
* function that pci_get_subsys and pci_get_class used, we'd
* be able to just pass pid in instead of doing a two-stage
* thing.
*/
pdev = NULL;
while ((pdev =
pci_get_subsys(pid->vendor, pid->device, pid->subvendor,
pid->subdevice, pdev)) != NULL) {
if ((pdev->class & pid->class_mask) != pid->class)
continue;
/* stealth mode requires a manual probe */
pci_dev_get(pdev);
drm_legacy_get_pci_dev(pdev, pid, driver);
}
}
return 0;
}
EXPORT_SYMBOL(drm_legacy_pci_init);
/**
* drm_legacy_pci_exit - unregister shadow-attach legacy DRM driver
* @driver: DRM device driver
* @pdriver: PCI device driver
*
* Unregister a DRM driver shadow-attached through drm_legacy_pci_init(). This
* is deprecated and only used by dri1 drivers.
*/
void drm_legacy_pci_exit(const struct drm_driver *driver,
struct pci_driver *pdriver)
{
struct drm_device *dev, *tmp;
DRM_DEBUG("\n");
if (!(driver->driver_features & DRIVER_LEGACY)) {
WARN_ON(1);
} else {
mutex_lock(&legacy_dev_list_lock);
list_for_each_entry_safe(dev, tmp, &legacy_dev_list,
legacy_dev_list) {
if (dev->driver == driver) {
list_del(&dev->legacy_dev_list);
drm_put_dev(dev);
}
}
mutex_unlock(&legacy_dev_list_lock);
}
DRM_INFO("Module unloaded\n");
}
EXPORT_SYMBOL(drm_legacy_pci_exit);
#endif

View File

@@ -230,6 +230,103 @@ static int create_in_format_blob(struct drm_device *dev, struct drm_plane *plane
return 0;
}
/**
* DOC: hotspot properties
*
* HOTSPOT_X: property to set mouse hotspot x offset.
* HOTSPOT_Y: property to set mouse hotspot y offset.
*
* When the plane is being used as a cursor image to display a mouse pointer,
* the "hotspot" is the offset within the cursor image where mouse events
* are expected to go.
*
* Positive values move the hotspot from the top-left corner of the cursor
* plane towards the right and bottom.
*
* Most display drivers do not need this information because the
* hotspot is not actually connected to anything visible on screen.
* However, this is necessary for display drivers like the para-virtualized
* drivers (e.g. qxl, vbox, virtio, vmwgfx) that are attached to a user console
* with a mouse pointer. Since these consoles are often being remoted over a
* network, they would otherwise have to wait to display the pointer movement to
* the user until a full network round-trip has occurred. New mouse events have
* to be sent from the user's console, over the network to the virtual input
* devices, forwarded to the desktop for processing, and then the cursor plane's
* position can be updated and sent back to the user's console over the network.
* Instead, with the hotspot information, the console can anticipate the new
* location, and draw the mouse cursor there before the confirmation comes in.
* To do that correctly, the user's console must be able to predict how the
* desktop will process mouse events, which normally requires the desktop's
* mouse topology information, i.e. where each CRTC sits in the mouse coordinate
* space. This is typically sent to the para-virtualized drivers using some
* driver-specific method, and the driver then forwards it to the console by
* way of the virtual display device or hypervisor.
*
* The assumption is generally made that there is only one cursor plane being
* used this way at a time, and that the desktop is feeding all mouse devices
* into the same global pointer. Para-virtualized drivers that require this
* should only be exposing a single cursor plane, or find some other way
* to coordinate with a userspace desktop that supports multiple pointers.
* If the hotspot properties are set, the cursor plane is assumed to be
* used only for displaying a mouse cursor image, and the position of the combined
* cursor plane + offset can therefore be used for coordinating with input from a
* mouse device.
*
* The cursor will then be drawn either at the location of the plane in the CRTC
* console, or as a free-floating cursor plane on the user's console
* corresponding to their desktop mouse position.
*
* DRM clients which would like to work correctly on drivers which expose
* hotspot properties should advertise DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT.
* Setting this property on drivers which do not special case
* cursor planes will return EOPNOTSUPP, which can be used by userspace to
* gauge requirements of the hardware/drivers they're running on. Advertising
* DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT implies that the userspace client will be
* correctly setting the hotspot properties.
*/
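To make the opt-in contract above concrete, here is a hedged userspace sketch (not part of this diff) using libdrm's drmSetClientCap() and atomic helpers; plane_id and the two property IDs are assumed to have been discovered beforehand via drmModeObjectGetProperties():

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int sketch_set_hotspot(int fd, uint32_t plane_id,
                              uint32_t hotspot_x_prop, uint32_t hotspot_y_prop,
                              int32_t hot_x, int32_t hot_y)
{
        drmModeAtomicReq *req;
        int ret;

        /* Promise the kernel we will keep HOTSPOT_X/Y correct; without
         * this cap, virtualized drivers hide their cursor planes from
         * atomic clients (see drm_mode_getplane_res() below). */
        ret = drmSetClientCap(fd, DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT, 1);
        if (ret)
                return ret;
        ret = drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);
        if (ret)
                return ret;

        req = drmModeAtomicAlloc();
        if (!req)
                return -ENOMEM;
        drmModeAtomicAddProperty(req, plane_id, hotspot_x_prop, hot_x);
        drmModeAtomicAddProperty(req, plane_id, hotspot_y_prop, hot_y);
        ret = drmModeAtomicCommit(fd, req, 0, NULL);
        drmModeAtomicFree(req);
        return ret;
}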
/**
* drm_plane_create_hotspot_properties - creates the mouse hotspot
* properties and attaches them to the given cursor plane
*
* @plane: drm cursor plane
*
* This function enables the mouse hotspot property on a given
* cursor plane. Look at the documentation for hotspot properties
* to get a better understanding of what they're used for.
*
* RETURNS:
* Zero for success or -errno
*/
static int drm_plane_create_hotspot_properties(struct drm_plane *plane)
{
struct drm_property *prop_x;
struct drm_property *prop_y;
drm_WARN_ON(plane->dev,
!drm_core_check_feature(plane->dev,
DRIVER_CURSOR_HOTSPOT));
prop_x = drm_property_create_signed_range(plane->dev, 0, "HOTSPOT_X",
INT_MIN, INT_MAX);
if (IS_ERR(prop_x))
return PTR_ERR(prop_x);
prop_y = drm_property_create_signed_range(plane->dev, 0, "HOTSPOT_Y",
INT_MIN, INT_MAX);
if (IS_ERR(prop_y)) {
drm_property_destroy(plane->dev, prop_x);
return PTR_ERR(prop_y);
}
drm_object_attach_property(&plane->base, prop_x, 0);
drm_object_attach_property(&plane->base, prop_y, 0);
plane->hotspot_x_property = prop_x;
plane->hotspot_y_property = prop_y;
return 0;
}
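On the driver side, a sketch (hypothetical driver; only DRIVER_CURSOR_HOTSPOT is from this series) of the feature flag that makes __drm_universal_plane_init() below attach these properties to cursor planes:

#include <drm/drm_drv.h>

static const struct drm_driver sketch_virt_driver = {
        /* DRIVER_CURSOR_HOTSPOT: this driver consumes hotspot info, so
         * cursor planes get HOTSPOT_X/Y attached, and such planes are
         * hidden from atomic clients that do not advertise
         * DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT. */
        .driver_features = DRIVER_MODESET | DRIVER_ATOMIC | DRIVER_GEM |
                           DRIVER_CURSOR_HOTSPOT,
        .name = "sketch-virt",
        .desc = "hypothetical para-virtualized KMS driver",
};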
__printf(9, 0)
static int __drm_universal_plane_init(struct drm_device *dev,
struct drm_plane *plane,
@@ -348,6 +445,10 @@ static int __drm_universal_plane_init(struct drm_device *dev,
drm_object_attach_property(&plane->base, config->prop_src_w, 0);
drm_object_attach_property(&plane->base, config->prop_src_h, 0);
}
if (drm_core_check_feature(dev, DRIVER_CURSOR_HOTSPOT) &&
type == DRM_PLANE_TYPE_CURSOR) {
drm_plane_create_hotspot_properties(plane);
}
if (format_modifier_count)
create_in_format_blob(dev, plane);
@@ -678,6 +779,19 @@ int drm_mode_getplane_res(struct drm_device *dev, void *data,
!file_priv->universal_planes)
continue;
/*
* If we're running on a virtualized driver then,
* unless userspace advertises support for the
* virtualized cursor plane, disable cursor planes
* because they'll be broken due to missing cursor
* hotspot info.
*/
if (plane->type == DRM_PLANE_TYPE_CURSOR &&
drm_core_check_feature(dev, DRIVER_CURSOR_HOTSPOT) &&
file_priv->atomic &&
!file_priv->supports_virtualized_cursor_plane)
continue;
if (drm_lease_held(file_priv, plane->base.id)) {
if (count < plane_resp->count_planes &&
put_user(plane->base.id, plane_ptr + count))
@@ -1052,8 +1166,10 @@ static int drm_mode_cursor_universal(struct drm_crtc *crtc,
return PTR_ERR(fb);
}
fb->hot_x = req->hot_x;
fb->hot_y = req->hot_y;
if (plane->hotspot_x_property && plane->state)
plane->state->hotspot_x = req->hot_x;
if (plane->hotspot_y_property && plane->state)
plane->state->hotspot_y = req->hot_y;
} else {
fb = NULL;
}
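For completeness, legacy (non-atomic) clients already deliver the hotspot through the long-standing CURSOR2 ioctl, which is the req->hot_x/hot_y path routed into plane state above; a small libdrm sketch with illustrative values:

#include <stdint.h>
#include <xf86drmMode.h>

static int sketch_legacy_cursor(int fd, uint32_t crtc_id, uint32_t bo_handle)
{
        /* 64x64 cursor image whose click point sits at (8, 4). */
        return drmModeSetCursor2(fd, crtc_id, bo_handle, 64, 64, 8, 4);
}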
@@ -1442,6 +1558,36 @@ out:
* Drivers implementing damage can use drm_atomic_helper_damage_iter_init() and
* drm_atomic_helper_damage_iter_next() helper iterator function to get damage
* rectangles clipped to &drm_plane_state.src.
*
* Note that there are two types of damage handling: frame damage and buffer
* damage; the type of damage handling implemented depends on a driver's upload
* target. Drivers implementing a per-plane or per-CRTC upload target need to
* handle frame damage, while drivers implementing a per-buffer upload target
* need to handle buffer damage.
*
* The existing damage helpers only support the frame damage type; there is no
* buffer age support or similar damage accumulation algorithm implemented yet.
*
* Only drivers handling frame damage can use the mentioned damage helpers to
* iterate over the damaged regions. Drivers that handle buffer damage must set
* &drm_plane_state.ignore_damage_clips for drm_atomic_helper_damage_iter_init()
* to know that damage clips should be ignored and return &drm_plane_state.src
* as the damage rectangle, to force a full plane update.
*
* Drivers with a per-buffer upload target could compare the &drm_plane_state.fb
* of the old and new plane states to determine if the framebuffer attached to a
* plane has changed or not since the last plane update. If &drm_plane_state.fb
* has changed, then &drm_plane_state.ignore_damage_clips must be set to true.
*
* That is because drivers with a per-buffer upload target expect the backing
* storage buffer to not change for a given plane. If the upload buffer changes
* between page flips, the new upload buffer has to be updated as a whole. This
* can be improved in the future if support for buffer damage is added to the
* DRM damage helpers, similarly to how user-space already handles this case, as
* explained in the following documents:
*
* https://registry.khronos.org/EGL/extensions/KHR/EGL_KHR_swap_buffers_with_damage.txt
* https://emersion.fr/blog/2019/intro-to-damage-tracking/
*/
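As a minimal illustration of the rule above (a sketch, not from this patch set; only &drm_plane_state.ignore_damage_clips is new here), a driver with a per-buffer upload target could do the framebuffer comparison in its plane .atomic_check hook:

#include <drm/drm_atomic.h>

static int sketch_plane_atomic_check(struct drm_plane *plane,
                                     struct drm_atomic_state *state)
{
        struct drm_plane_state *old_state =
                drm_atomic_get_old_plane_state(state, plane);
        struct drm_plane_state *new_state =
                drm_atomic_get_new_plane_state(state, plane);

        /* Per-buffer upload target: damage clips are only meaningful
         * while the backing buffer stays the same. After a buffer
         * change, ignore the clips so the damage iterator returns the
         * full plane rectangle and the new buffer is uploaded whole. */
        if (old_state->fb != new_state->fb)
                new_state->ignore_damage_clips = true;

        return 0;
}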
/**

View File

@@ -279,35 +279,3 @@ void drm_plane_helper_destroy(struct drm_plane *plane)
kfree(plane);
}
EXPORT_SYMBOL(drm_plane_helper_destroy);
/**
* drm_plane_helper_atomic_check() - Helper to check plane atomic-state
* @plane: plane to check
* @state: atomic state object
*
* Provides a default plane-state check handler for planes whose atomic-state
* scale and positioning are not expected to change since the plane is always
* a fullscreen scanout buffer.
*
* This is often the case for the primary plane of simple framebuffers. See
* also drm_crtc_helper_atomic_check() for the respective CRTC-state check
* helper function.
*
* RETURNS:
* Zero on success, or an errno code otherwise.
*/
int drm_plane_helper_atomic_check(struct drm_plane *plane, struct drm_atomic_state *state)
{
struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, plane);
struct drm_crtc *new_crtc = new_plane_state->crtc;
struct drm_crtc_state *new_crtc_state = NULL;
if (new_crtc)
new_crtc_state = drm_atomic_get_new_crtc_state(state, new_crtc);
return drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state,
DRM_PLANE_NO_SCALING,
DRM_PLANE_NO_SCALING,
false, false);
}
EXPORT_SYMBOL(drm_plane_helper_atomic_check);

View File

@@ -1,220 +0,0 @@
/*
* \file drm_scatter.c
* IOCTLs to manage scatter/gather memory
*
* \author Gareth Hughes <gareth@valinux.com>
*/
/*
* Created: Mon Dec 18 23:20:54 2000 by gareth@valinux.com
*
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <drm/drm.h>
#include <drm/drm_drv.h>
#include <drm/drm_print.h>
#include "drm_legacy.h"
#define DEBUG_SCATTER 0
static void drm_sg_cleanup(struct drm_sg_mem * entry)
{
struct page *page;
int i;
for (i = 0; i < entry->pages; i++) {
page = entry->pagelist[i];
if (page)
ClearPageReserved(page);
}
vfree(entry->virtual);
kfree(entry->busaddr);
kfree(entry->pagelist);
kfree(entry);
}
void drm_legacy_sg_cleanup(struct drm_device *dev)
{
if (drm_core_check_feature(dev, DRIVER_SG) && dev->sg &&
drm_core_check_feature(dev, DRIVER_LEGACY)) {
drm_sg_cleanup(dev->sg);
dev->sg = NULL;
}
}
#ifdef _LP64
# define ScatterHandle(x) (unsigned int)((x >> 32) + (x & ((1L << 32) - 1)))
#else
# define ScatterHandle(x) (unsigned int)(x)
#endif
int drm_legacy_sg_alloc(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_scatter_gather *request = data;
struct drm_sg_mem *entry;
unsigned long pages, i, j;
DRM_DEBUG("\n");
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
if (!drm_core_check_feature(dev, DRIVER_SG))
return -EOPNOTSUPP;
if (request->size > SIZE_MAX - PAGE_SIZE)
return -EINVAL;
if (dev->sg)
return -EINVAL;
entry = kzalloc(sizeof(*entry), GFP_KERNEL);
if (!entry)
return -ENOMEM;
pages = (request->size + PAGE_SIZE - 1) / PAGE_SIZE;
DRM_DEBUG("size=%ld pages=%ld\n", request->size, pages);
entry->pages = pages;
entry->pagelist = kcalloc(pages, sizeof(*entry->pagelist), GFP_KERNEL);
if (!entry->pagelist) {
kfree(entry);
return -ENOMEM;
}
entry->busaddr = kcalloc(pages, sizeof(*entry->busaddr), GFP_KERNEL);
if (!entry->busaddr) {
kfree(entry->pagelist);
kfree(entry);
return -ENOMEM;
}
entry->virtual = vmalloc_32(pages << PAGE_SHIFT);
if (!entry->virtual) {
kfree(entry->busaddr);
kfree(entry->pagelist);
kfree(entry);
return -ENOMEM;
}
/* This also forces the mapping of COW pages, so our page list
* will be valid. Please don't remove it...
*/
memset(entry->virtual, 0, pages << PAGE_SHIFT);
entry->handle = ScatterHandle((unsigned long)entry->virtual);
DRM_DEBUG("handle = %08lx\n", entry->handle);
DRM_DEBUG("virtual = %p\n", entry->virtual);
for (i = (unsigned long)entry->virtual, j = 0; j < pages;
i += PAGE_SIZE, j++) {
entry->pagelist[j] = vmalloc_to_page((void *)i);
if (!entry->pagelist[j])
goto failed;
SetPageReserved(entry->pagelist[j]);
}
request->handle = entry->handle;
dev->sg = entry;
#if DEBUG_SCATTER
/* Verify that each page points to its virtual address, and vice
* versa.
*/
{
int error = 0;
for (i = 0; i < pages; i++) {
unsigned long *tmp;
tmp = page_address(entry->pagelist[i]);
for (j = 0;
j < PAGE_SIZE / sizeof(unsigned long);
j++, tmp++) {
*tmp = 0xcafebabe;
}
tmp = (unsigned long *)((u8 *) entry->virtual +
(PAGE_SIZE * i));
for (j = 0;
j < PAGE_SIZE / sizeof(unsigned long);
j++, tmp++) {
if (*tmp != 0xcafebabe && error == 0) {
error = 1;
DRM_ERROR("Scatter allocation error, "
"pagelist does not match "
"virtual mapping\n");
}
}
tmp = page_address(entry->pagelist[i]);
for (j = 0;
j < PAGE_SIZE / sizeof(unsigned long);
j++, tmp++) {
*tmp = 0;
}
}
if (error == 0)
DRM_ERROR("Scatter allocation matches pagelist\n");
}
#endif
return 0;
failed:
drm_sg_cleanup(entry);
return -ENOMEM;
}
int drm_legacy_sg_free(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_scatter_gather *request = data;
struct drm_sg_mem *entry;
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return -EOPNOTSUPP;
if (!drm_core_check_feature(dev, DRIVER_SG))
return -EOPNOTSUPP;
entry = dev->sg;
dev->sg = NULL;
if (!entry || entry->handle != request->handle)
return -EINVAL;
DRM_DEBUG("virtual = %p\n", entry->virtual);
drm_sg_cleanup(entry);
return 0;
}

View File

@@ -126,6 +126,11 @@
* synchronize between the two.
* This requirement is inherited from the Vulkan fence API.
*
* If &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE is set, the ioctl will also set
* a fence deadline hint on the backing fences before waiting, to provide the
* fence signaler with an appropriate sense of urgency. The deadline is
* specified as an absolute &CLOCK_MONOTONIC value in units of ns.
*
* Similarly, &DRM_IOCTL_SYNCOBJ_TIMELINE_WAIT takes an array of syncobj
* handles as well as an array of u64 points and does a host-side wait on all
* of the syncobj fences at the given points simultaneously.
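A hedged userspace sketch of the new deadline hint (struct drm_syncobj_wait's deadline_nsec field and DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE are the additions from this series; the surrounding ioctl usage is standard):

#include <stdint.h>
#include <sys/ioctl.h>
#include <time.h>
#include <drm/drm.h>

/* Wait for one syncobj, hinting the signaler that we need the fence
 * within roughly 16ms (one 60Hz frame) from now. */
static int sketch_wait_with_deadline(int fd, uint32_t handle)
{
        struct timespec now;
        struct drm_syncobj_wait args = {0};

        clock_gettime(CLOCK_MONOTONIC, &now);

        args.handles = (uintptr_t)&handle;
        args.count_handles = 1;
        args.timeout_nsec = INT64_MAX; /* block until signaled */
        args.flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
                     DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
        args.deadline_nsec = (uint64_t)now.tv_sec * 1000000000ull +
                             now.tv_nsec + 16000000ull;

        return ioctl(fd, DRM_IOCTL_SYNCOBJ_WAIT, &args);
}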
@@ -1027,7 +1032,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
uint32_t count,
uint32_t flags,
signed long timeout,
uint32_t *idx)
uint32_t *idx,
ktime_t *deadline)
{
struct syncobj_wait_entry *entries;
struct dma_fence *fence;
@@ -1108,6 +1114,15 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
}
if (deadline) {
for (i = 0; i < count; ++i) {
fence = entries[i].fence;
if (!fence)
continue;
dma_fence_set_deadline(fence, *deadline);
}
}
do {
set_current_state(TASK_INTERRUPTIBLE);
@@ -1206,7 +1221,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
struct drm_file *file_private,
struct drm_syncobj_wait *wait,
struct drm_syncobj_timeline_wait *timeline_wait,
struct drm_syncobj **syncobjs, bool timeline)
struct drm_syncobj **syncobjs, bool timeline,
ktime_t *deadline)
{
signed long timeout = 0;
uint32_t first = ~0;
@@ -1217,7 +1233,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
NULL,
wait->count_handles,
wait->flags,
timeout, &first);
timeout, &first,
deadline);
if (timeout < 0)
return timeout;
wait->first_signaled = first;
@@ -1227,7 +1244,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
u64_to_user_ptr(timeline_wait->points),
timeline_wait->count_handles,
timeline_wait->flags,
timeout, &first);
timeout, &first,
deadline);
if (timeout < 0)
return timeout;
timeline_wait->first_signaled = first;
@@ -1298,17 +1316,22 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
{
struct drm_syncobj_wait *args = data;
struct drm_syncobj **syncobjs;
unsigned int possible_flags;
ktime_t t, *tp = NULL;
int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
return -EOPNOTSUPP;
if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
if (args->flags & ~possible_flags)
return -EINVAL;
if (args->count_handles == 0)
return -EINVAL;
return 0;
ret = drm_syncobj_array_find(file_private,
u64_to_user_ptr(args->handles),
@@ -1317,8 +1340,13 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
if (ret < 0)
return ret;
if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
t = ns_to_ktime(args->deadline_nsec);
tp = &t;
}
ret = drm_syncobj_array_wait(dev, file_private,
args, NULL, syncobjs, false);
args, NULL, syncobjs, false, tp);
drm_syncobj_array_free(syncobjs, args->count_handles);
@@ -1331,18 +1359,23 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
{
struct drm_syncobj_timeline_wait *args = data;
struct drm_syncobj **syncobjs;
unsigned int possible_flags;
ktime_t t, *tp = NULL;
int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
return -EOPNOTSUPP;
if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE |
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
if (args->flags & ~possible_flags)
return -EINVAL;
if (args->count_handles == 0)
return -EINVAL;
return 0;
ret = drm_syncobj_array_find(file_private,
u64_to_user_ptr(args->handles),
@@ -1351,8 +1384,13 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
if (ret < 0)
return ret;
if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
t = ns_to_ktime(args->deadline_nsec);
tp = &t;
}
ret = drm_syncobj_array_wait(dev, file_private,
NULL, args, syncobjs, true);
NULL, args, syncobjs, true, tp);
drm_syncobj_array_free(syncobjs, args->count_handles);

View File

@@ -210,11 +210,6 @@ static u32 __get_vblank_counter(struct drm_device *dev, unsigned int pipe)
if (crtc->funcs->get_vblank_counter)
return crtc->funcs->get_vblank_counter(crtc);
}
#ifdef CONFIG_DRM_LEGACY
else if (dev->driver->get_vblank_counter) {
return dev->driver->get_vblank_counter(dev, pipe);
}
#endif
return drm_vblank_no_hw_counter(dev, pipe);
}
@@ -433,11 +428,6 @@ static void __disable_vblank(struct drm_device *dev, unsigned int pipe)
if (crtc->funcs->disable_vblank)
crtc->funcs->disable_vblank(crtc);
}
#ifdef CONFIG_DRM_LEGACY
else {
dev->driver->disable_vblank(dev, pipe);
}
#endif
}
/*
@@ -1151,11 +1141,6 @@ static int __enable_vblank(struct drm_device *dev, unsigned int pipe)
if (crtc->funcs->enable_vblank)
return crtc->funcs->enable_vblank(crtc);
}
#ifdef CONFIG_DRM_LEGACY
else if (dev->driver->enable_vblank) {
return dev->driver->enable_vblank(dev, pipe);
}
#endif
return -EINVAL;
}
@@ -1574,88 +1559,6 @@ void drm_crtc_vblank_restore(struct drm_crtc *crtc)
}
EXPORT_SYMBOL(drm_crtc_vblank_restore);
static void drm_legacy_vblank_pre_modeset(struct drm_device *dev,
unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
/* vblank is not initialized (IRQ not installed ?), or has been freed */
if (!drm_dev_has_vblank(dev))
return;
if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
return;
/*
* To avoid all the problems that might happen if interrupts
* were enabled/disabled around or between these calls, we just
* have the kernel take a reference on the CRTC (just once though
* to avoid corrupting the count if multiple, mismatched calls occur),
* so that interrupts remain enabled in the interim.
*/
if (!vblank->inmodeset) {
vblank->inmodeset = 0x1;
if (drm_vblank_get(dev, pipe) == 0)
vblank->inmodeset |= 0x2;
}
}
static void drm_legacy_vblank_post_modeset(struct drm_device *dev,
unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
/* vblank is not initialized (IRQ not installed ?), or has been freed */
if (!drm_dev_has_vblank(dev))
return;
if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
return;
if (vblank->inmodeset) {
spin_lock_irq(&dev->vbl_lock);
drm_reset_vblank_timestamp(dev, pipe);
spin_unlock_irq(&dev->vbl_lock);
if (vblank->inmodeset & 0x2)
drm_vblank_put(dev, pipe);
vblank->inmodeset = 0;
}
}
int drm_legacy_modeset_ctl_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_modeset_ctl *modeset = data;
unsigned int pipe;
/* If drm_vblank_init() hasn't been called yet, just no-op */
if (!drm_dev_has_vblank(dev))
return 0;
/* KMS drivers handle this internally */
if (!drm_core_check_feature(dev, DRIVER_LEGACY))
return 0;
pipe = modeset->crtc;
if (pipe >= dev->num_crtcs)
return -EINVAL;
switch (modeset->cmd) {
case _DRM_PRE_MODESET:
drm_legacy_vblank_pre_modeset(dev, pipe);
break;
case _DRM_POST_MODESET:
drm_legacy_vblank_post_modeset(dev, pipe);
break;
default:
return -EINVAL;
}
return 0;
}
static int drm_queue_vblank_event(struct drm_device *dev, unsigned int pipe,
u64 req_seq,
union drm_wait_vblank *vblwait,
@@ -1780,10 +1683,6 @@ static void drm_wait_vblank_reply(struct drm_device *dev, unsigned int pipe,
static bool drm_wait_vblank_supported(struct drm_device *dev)
{
#if IS_ENABLED(CONFIG_DRM_LEGACY)
if (unlikely(drm_core_check_feature(dev, DRIVER_LEGACY)))
return dev->irq_enabled;
#endif
return drm_dev_has_vblank(dev);
}

View File

@@ -1,665 +0,0 @@
/*
* \file drm_vm.c
* Memory mapping for DRM
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*/
/*
* Created: Mon Jan 4 08:58:31 1999 by faith@valinux.com
*
* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/export.h>
#include <linux/pci.h>
#include <linux/seq_file.h>
#include <linux/vmalloc.h>
#include <linux/pgtable.h>
#if defined(__ia64__)
#include <linux/efi.h>
#include <linux/slab.h>
#endif
#include <linux/mem_encrypt.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_print.h>
#include "drm_internal.h"
#include "drm_legacy.h"
struct drm_vma_entry {
struct list_head head;
struct vm_area_struct *vma;
pid_t pid;
};
static void drm_vm_open(struct vm_area_struct *vma);
static void drm_vm_close(struct vm_area_struct *vma);
static pgprot_t drm_io_prot(struct drm_local_map *map,
struct vm_area_struct *vma)
{
pgprot_t tmp = vm_get_page_prot(vma->vm_flags);
#if defined(__i386__) || defined(__x86_64__) || defined(__powerpc__) || \
defined(__mips__) || defined(__loongarch__)
if (map->type == _DRM_REGISTERS && !(map->flags & _DRM_WRITE_COMBINING))
tmp = pgprot_noncached(tmp);
else
tmp = pgprot_writecombine(tmp);
#elif defined(__ia64__)
if (efi_range_is_wc(vma->vm_start, vma->vm_end -
vma->vm_start))
tmp = pgprot_writecombine(tmp);
else
tmp = pgprot_noncached(tmp);
#elif defined(__sparc__) || defined(__arm__)
tmp = pgprot_noncached(tmp);
#endif
return tmp;
}
static pgprot_t drm_dma_prot(uint32_t map_type, struct vm_area_struct *vma)
{
pgprot_t tmp = vm_get_page_prot(vma->vm_flags);
#if defined(__powerpc__) && defined(CONFIG_NOT_COHERENT_CACHE)
tmp = pgprot_noncached_wc(tmp);
#endif
return tmp;
}
/*
* \c fault method for AGP virtual memory.
*
* \param vma virtual memory area.
* \param address access address.
* \return pointer to the page structure.
*
* Find the right map and if it's AGP memory find the real physical page to
* map, get the page, increment the use count and return it.
*/
#if IS_ENABLED(CONFIG_AGP)
static vm_fault_t drm_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_file *priv = vma->vm_file->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_local_map *map = NULL;
struct drm_map_list *r_list;
struct drm_hash_item *hash;
/*
* Find the right map
*/
if (!dev->agp)
goto vm_fault_error;
if (!dev->agp || !dev->agp->cant_use_aperture)
goto vm_fault_error;
if (drm_ht_find_item(&dev->map_hash, vma->vm_pgoff, &hash))
goto vm_fault_error;
r_list = drm_hash_entry(hash, struct drm_map_list, hash);
map = r_list->map;
if (map && map->type == _DRM_AGP) {
/*
* Using vm_pgoff as a selector forces us to use this unusual
* addressing scheme.
*/
resource_size_t offset = vmf->address - vma->vm_start;
resource_size_t baddr = map->offset + offset;
struct drm_agp_mem *agpmem;
struct page *page;
#ifdef __alpha__
/*
* Adjust to a bus-relative address
*/
baddr -= dev->hose->mem_space->start;
#endif
/*
* It's AGP memory - find the real physical page to map
*/
list_for_each_entry(agpmem, &dev->agp->memory, head) {
if (agpmem->bound <= baddr &&
agpmem->bound + agpmem->pages * PAGE_SIZE > baddr)
break;
}
if (&agpmem->head == &dev->agp->memory)
goto vm_fault_error;
/*
* Get the page, inc the use count, and return it
*/
offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
page = agpmem->memory->pages[offset];
get_page(page);
vmf->page = page;
DRM_DEBUG
("baddr = 0x%llx page = 0x%p, offset = 0x%llx, count=%d\n",
(unsigned long long)baddr,
agpmem->memory->pages[offset],
(unsigned long long)offset,
page_count(page));
return 0;
}
vm_fault_error:
return VM_FAULT_SIGBUS; /* Disallow mremap */
}
#else
static vm_fault_t drm_vm_fault(struct vm_fault *vmf)
{
return VM_FAULT_SIGBUS;
}
#endif
/*
* \c nopage method for shared virtual memory.
*
* \param vma virtual memory area.
* \param address access address.
* \return pointer to the page structure.
*
* Get the mapping, find the real physical page to map, get the page, and
* return it.
*/
static vm_fault_t drm_vm_shm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_local_map *map = vma->vm_private_data;
unsigned long offset;
unsigned long i;
struct page *page;
if (!map)
return VM_FAULT_SIGBUS; /* Nothing allocated */
offset = vmf->address - vma->vm_start;
i = (unsigned long)map->handle + offset;
page = vmalloc_to_page((void *)i);
if (!page)
return VM_FAULT_SIGBUS;
get_page(page);
vmf->page = page;
DRM_DEBUG("shm_fault 0x%lx\n", offset);
return 0;
}
/*
* \c close method for shared virtual memory.
*
* \param vma virtual memory area.
*
* Deletes map information if we are the last
* person to close a mapping and it's not in the global maplist.
*/
static void drm_vm_shm_close(struct vm_area_struct *vma)
{
struct drm_file *priv = vma->vm_file->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_vma_entry *pt, *temp;
struct drm_local_map *map;
struct drm_map_list *r_list;
int found_maps = 0;
DRM_DEBUG("0x%08lx,0x%08lx\n",
vma->vm_start, vma->vm_end - vma->vm_start);
map = vma->vm_private_data;
mutex_lock(&dev->struct_mutex);
list_for_each_entry_safe(pt, temp, &dev->vmalist, head) {
if (pt->vma->vm_private_data == map)
found_maps++;
if (pt->vma == vma) {
list_del(&pt->head);
kfree(pt);
}
}
/* We were the only map that was found */
if (found_maps == 1 && map->flags & _DRM_REMOVABLE) {
/* Check to see if we are in the maplist, if we are not, then
* we delete this mappings information.
*/
found_maps = 0;
list_for_each_entry(r_list, &dev->maplist, head) {
if (r_list->map == map)
found_maps++;
}
if (!found_maps) {
switch (map->type) {
case _DRM_REGISTERS:
case _DRM_FRAME_BUFFER:
arch_phys_wc_del(map->mtrr);
iounmap(map->handle);
break;
case _DRM_SHM:
vfree(map->handle);
break;
case _DRM_AGP:
case _DRM_SCATTER_GATHER:
break;
case _DRM_CONSISTENT:
dma_free_coherent(dev->dev,
map->size,
map->handle,
map->offset);
break;
}
kfree(map);
}
}
mutex_unlock(&dev->struct_mutex);
}
/*
* \c fault method for DMA virtual memory.
*
* \param address access address.
* \return pointer to the page structure.
*
* Determine the page number from the page offset and get it from drm_device_dma::pagelist.
*/
static vm_fault_t drm_vm_dma_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_file *priv = vma->vm_file->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_device_dma *dma = dev->dma;
unsigned long offset;
unsigned long page_nr;
struct page *page;
if (!dma)
return VM_FAULT_SIGBUS; /* Error */
if (!dma->pagelist)
return VM_FAULT_SIGBUS; /* Nothing allocated */
offset = vmf->address - vma->vm_start;
/* vm_[pg]off[set] should be 0 */
page_nr = offset >> PAGE_SHIFT; /* page_nr could just be vmf->pgoff */
page = virt_to_page((void *)dma->pagelist[page_nr]);
get_page(page);
vmf->page = page;
DRM_DEBUG("dma_fault 0x%lx (page %lu)\n", offset, page_nr);
return 0;
}
/*
* \c fault method for scatter-gather virtual memory.
*
* \param address access address.
* \return pointer to the page structure.
*
* Determine the map offset from the page offset and get it from drm_sg_mem::pagelist.
*/
static vm_fault_t drm_vm_sg_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_local_map *map = vma->vm_private_data;
struct drm_file *priv = vma->vm_file->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_sg_mem *entry = dev->sg;
unsigned long offset;
unsigned long map_offset;
unsigned long page_offset;
struct page *page;
if (!entry)
return VM_FAULT_SIGBUS; /* Error */
if (!entry->pagelist)
return VM_FAULT_SIGBUS; /* Nothing allocated */
offset = vmf->address - vma->vm_start;
map_offset = map->offset - (unsigned long)dev->sg->virtual;
page_offset = (offset >> PAGE_SHIFT) + (map_offset >> PAGE_SHIFT);
page = entry->pagelist[page_offset];
get_page(page);
vmf->page = page;
return 0;
}
/** AGP virtual memory operations */
static const struct vm_operations_struct drm_vm_ops = {
.fault = drm_vm_fault,
.open = drm_vm_open,
.close = drm_vm_close,
};
/** Shared virtual memory operations */
static const struct vm_operations_struct drm_vm_shm_ops = {
.fault = drm_vm_shm_fault,
.open = drm_vm_open,
.close = drm_vm_shm_close,
};
/** DMA virtual memory operations */
static const struct vm_operations_struct drm_vm_dma_ops = {
.fault = drm_vm_dma_fault,
.open = drm_vm_open,
.close = drm_vm_close,
};
/** Scatter-gather virtual memory operations */
static const struct vm_operations_struct drm_vm_sg_ops = {
.fault = drm_vm_sg_fault,
.open = drm_vm_open,
.close = drm_vm_close,
};
static void drm_vm_open_locked(struct drm_device *dev,
struct vm_area_struct *vma)
{
struct drm_vma_entry *vma_entry;
DRM_DEBUG("0x%08lx,0x%08lx\n",
vma->vm_start, vma->vm_end - vma->vm_start);
vma_entry = kmalloc(sizeof(*vma_entry), GFP_KERNEL);
if (vma_entry) {
vma_entry->vma = vma;
vma_entry->pid = current->pid;
list_add(&vma_entry->head, &dev->vmalist);
}
}
static void drm_vm_open(struct vm_area_struct *vma)
{
struct drm_file *priv = vma->vm_file->private_data;
struct drm_device *dev = priv->minor->dev;
mutex_lock(&dev->struct_mutex);
drm_vm_open_locked(dev, vma);
mutex_unlock(&dev->struct_mutex);
}
static void drm_vm_close_locked(struct drm_device *dev,
struct vm_area_struct *vma)
{
struct drm_vma_entry *pt, *temp;
DRM_DEBUG("0x%08lx,0x%08lx\n",
vma->vm_start, vma->vm_end - vma->vm_start);
list_for_each_entry_safe(pt, temp, &dev->vmalist, head) {
if (pt->vma == vma) {
list_del(&pt->head);
kfree(pt);
break;
}
}
}
/*
* \c close method for all virtual memory types.
*
* \param vma virtual memory area.
*
* Search the \p vma private data entry in drm_device::vmalist, unlink it, and
* free it.
*/
static void drm_vm_close(struct vm_area_struct *vma)
{
struct drm_file *priv = vma->vm_file->private_data;
struct drm_device *dev = priv->minor->dev;
mutex_lock(&dev->struct_mutex);
drm_vm_close_locked(dev, vma);
mutex_unlock(&dev->struct_mutex);
}
/*
* mmap DMA memory.
*
* \param file_priv DRM file private.
* \param vma virtual memory area.
* \return zero on success or a negative number on failure.
*
* Sets the virtual memory area operations structure to vm_dma_ops, the file
* pointer, and calls vm_open().
*/
static int drm_mmap_dma(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *priv = filp->private_data;
struct drm_device *dev;
struct drm_device_dma *dma;
unsigned long length = vma->vm_end - vma->vm_start;
dev = priv->minor->dev;
dma = dev->dma;
DRM_DEBUG("start = 0x%lx, end = 0x%lx, page offset = 0x%lx\n",
vma->vm_start, vma->vm_end, vma->vm_pgoff);
/* Length must match exact page count */
if (!dma || (length >> PAGE_SHIFT) != dma->page_count) {
return -EINVAL;
}
if (!capable(CAP_SYS_ADMIN) &&
(dma->flags & _DRM_DMA_USE_PCI_RO)) {
vm_flags_clear(vma, VM_WRITE | VM_MAYWRITE);
#if defined(__i386__) || defined(__x86_64__)
pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
#else
/* Ye gads this is ugly. With more thought
we could move this up higher and use
`protection_map' instead. */
vma->vm_page_prot =
__pgprot(pte_val
(pte_wrprotect
(__pte(pgprot_val(vma->vm_page_prot)))));
#endif
}
vma->vm_ops = &drm_vm_dma_ops;
vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
drm_vm_open_locked(dev, vma);
return 0;
}
static resource_size_t drm_core_get_reg_ofs(struct drm_device *dev)
{
#ifdef __alpha__
return dev->hose->dense_mem_base;
#else
return 0;
#endif
}
/*
* mmap DMA memory.
*
* \param file_priv DRM file private.
* \param vma virtual memory area.
* \return zero on success or a negative number on failure.
*
* If the virtual memory area has no offset associated with it then it's a DMA
* area, so calls mmap_dma(). Otherwise searches the map in drm_device::maplist,
* checks that the restricted flag is not set, sets the virtual memory operations
* according to the mapping type and remaps the pages. Finally sets the file
* pointer and calls vm_open().
*/
static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_local_map *map = NULL;
resource_size_t offset = 0;
struct drm_hash_item *hash;
DRM_DEBUG("start = 0x%lx, end = 0x%lx, page offset = 0x%lx\n",
vma->vm_start, vma->vm_end, vma->vm_pgoff);
if (!priv->authenticated)
return -EACCES;
/* We check for "dma". On Apple's UniNorth, it's valid to have
* the AGP mapped at physical address 0
* --BenH.
*/
if (!vma->vm_pgoff
#if IS_ENABLED(CONFIG_AGP)
&& (!dev->agp
|| dev->agp->agp_info.device->vendor != PCI_VENDOR_ID_APPLE)
#endif
)
return drm_mmap_dma(filp, vma);
if (drm_ht_find_item(&dev->map_hash, vma->vm_pgoff, &hash)) {
DRM_ERROR("Could not find map\n");
return -EINVAL;
}
map = drm_hash_entry(hash, struct drm_map_list, hash)->map;
if (!map || ((map->flags & _DRM_RESTRICTED) && !capable(CAP_SYS_ADMIN)))
return -EPERM;
/* Check for valid size. */
if (map->size < vma->vm_end - vma->vm_start)
return -EINVAL;
if (!capable(CAP_SYS_ADMIN) && (map->flags & _DRM_READ_ONLY)) {
vm_flags_clear(vma, VM_WRITE | VM_MAYWRITE);
#if defined(__i386__) || defined(__x86_64__)
pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
#else
/* Ye gads this is ugly. With more thought
we could move this up higher and use
`protection_map' instead. */
vma->vm_page_prot =
__pgprot(pte_val
(pte_wrprotect
(__pte(pgprot_val(vma->vm_page_prot)))));
#endif
}
switch (map->type) {
#if !defined(__arm__)
case _DRM_AGP:
if (dev->agp && dev->agp->cant_use_aperture) {
/*
* On some platforms we can't talk to bus dma address from the CPU, so for
* memory of type DRM_AGP, we'll deal with sorting out the real physical
* pages and mappings in fault()
*/
#if defined(__powerpc__)
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
#endif
vma->vm_ops = &drm_vm_ops;
break;
}
fallthrough; /* to _DRM_FRAME_BUFFER... */
#endif
case _DRM_FRAME_BUFFER:
case _DRM_REGISTERS:
offset = drm_core_get_reg_ofs(dev);
vma->vm_page_prot = drm_io_prot(map, vma);
if (io_remap_pfn_range(vma, vma->vm_start,
(map->offset + offset) >> PAGE_SHIFT,
vma->vm_end - vma->vm_start,
vma->vm_page_prot))
return -EAGAIN;
DRM_DEBUG(" Type = %d; start = 0x%lx, end = 0x%lx,"
" offset = 0x%llx\n",
map->type,
vma->vm_start, vma->vm_end, (unsigned long long)(map->offset + offset));
vma->vm_ops = &drm_vm_ops;
break;
case _DRM_CONSISTENT:
/* Consistent memory is really like shared memory. But
* it's allocated in a different way, so avoid fault */
if (remap_pfn_range(vma, vma->vm_start,
page_to_pfn(virt_to_page(map->handle)),
vma->vm_end - vma->vm_start, vma->vm_page_prot))
return -EAGAIN;
vma->vm_page_prot = drm_dma_prot(map->type, vma);
fallthrough; /* to _DRM_SHM */
case _DRM_SHM:
vma->vm_ops = &drm_vm_shm_ops;
vma->vm_private_data = (void *)map;
break;
case _DRM_SCATTER_GATHER:
vma->vm_ops = &drm_vm_sg_ops;
vma->vm_private_data = (void *)map;
vma->vm_page_prot = drm_dma_prot(map->type, vma);
break;
default:
return -EINVAL; /* This should never happen. */
}
vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
drm_vm_open_locked(dev, vma);
return 0;
}
int drm_legacy_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
int ret;
if (drm_dev_is_unplugged(dev))
return -ENODEV;
mutex_lock(&dev->struct_mutex);
ret = drm_mmap_locked(filp, vma);
mutex_unlock(&dev->struct_mutex);
return ret;
}
EXPORT_SYMBOL(drm_legacy_mmap);
#if IS_ENABLED(CONFIG_DRM_LEGACY)
void drm_legacy_vma_flush(struct drm_device *dev)
{
struct drm_vma_entry *vma, *vma_temp;
/* Clear vma list (only needed for legacy drivers) */
list_for_each_entry_safe(vma, vma_temp, &dev->vmalist, head) {
list_del(&vma->head);
kfree(vma);
}
}
#endif

View File

@@ -11,9 +11,10 @@
#include <linux/component.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/of_device.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/property.h>
#include <linux/regmap.h>
#include <drm/drm_fourcc.h>
@@ -103,7 +104,7 @@ struct gsc_context {
unsigned int num_formats;
void __iomem *regs;
const char **clk_names;
const char *const *clk_names;
struct clk *clocks[GSC_MAX_CLOCKS];
int num_clocks;
struct gsc_scaler sc;
@@ -1217,7 +1218,7 @@ static const unsigned int gsc_tiled_formats[] = {
static int gsc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct gsc_driverdata *driver_data;
const struct gsc_driverdata *driver_data;
struct exynos_drm_ipp_formats *formats;
struct gsc_context *ctx;
int num_formats, ret, i, j;
@@ -1226,7 +1227,7 @@ static int gsc_probe(struct platform_device *pdev)
if (!ctx)
return -ENOMEM;
driver_data = (struct gsc_driverdata *)of_device_get_match_data(dev);
driver_data = device_get_match_data(dev);
ctx->dev = dev;
ctx->num_clocks = driver_data->num_clocks;
ctx->clk_names = driver_data->clk_names;
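The pattern this conversion follows, as a sketch (gsc_driverdata is the driver's own type; device_get_match_data() is the generic API from linux/property.h that replaces the OF-specific call and drops the cast):

#include <linux/platform_device.h>
#include <linux/property.h>

static int sketch_probe(struct platform_device *pdev)
{
        /* Works for OF and ACPI fwnodes alike, and returns the match
         * data as const, so no cast away from const is needed. */
        const struct gsc_driverdata *data =
                device_get_match_data(&pdev->dev);

        if (!data)
                return -ENODEV;
        return 0;
}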

View File

@@ -9,6 +9,7 @@
#include <linux/sync_file.h>
#include <linux/uaccess.h>
#include <drm/drm_auth.h>
#include <drm/drm_syncobj.h>
#include "display/intel_frontbuffer.h"
