Merge tag 'drm-misc-next-2021-07-16' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v5.15:

UAPI Changes:

Cross-subsystem Changes:
- udmabuf: Add support for mapping hugepages
- Add dma-buf stats to sysfs.
- Assorted fixes to fbdev/omap2.
- dma-buf: Document DMA_BUF_IOCTL_SYNC
- Improve dma-buf non-dynamic exporter expectations.
- Add module parameters for dma-buf size and list limit.
- Add HDMI codec support to vc4, to replace vc4's own codec.
- Document dma-buf implicit fencing rules.
- Rework dma_resv_test_signaled test_all handling.

Core Changes:
- Extract i915's eDP backlight code into DRM helpers.
- Assorted docbook updates.
- Rework drm_dp_aux documentation.
- Add support for the DP aux bus.
- Shrink dma-fence-chain slightly.
- Add alloc/free helpers for dma-fence-chain.
- Assorted fixes to TTM, drm/of, and bridge.
- drm_gem_plane_helper_prepare/cleanup_fb is now the default for gem drivers.
- Small fix for scheduler completion.
- Remove use of drm_device.irq_enabled.
- Print the driver name to dmesg when registering framebuffer.
- Export drm/gem's shadow plane handling, and use it in vkms.
- Assorted small fixes.

Driver Changes:
- Add eDP backlight to nouveau.
- Assorted fixes and cleanups to nouveau, panfrost, vmwgfx, anx7625,
  amdgpu, gma500, radeon, mgag200, vgem, vc4, vkms, omapdrm.
- Add support for Samsung DB7430, Samsung ATNA33XC20, EDT ETMV570G2DHU,
  EDT ETM0350G0DH6, Innolux EJ030NA panels.
- Fix some simple panels missing bus_format and connector types.
- Add mks-guest-stats instrumentation support to vmwgfx.
- Merge i915-ttm topic branch.
- Make s6e63m0 panel use MIPI DBI helpers.
- Add detect() support for AST.
- Use interrupts for hotplug on vc4.
- vmwgfx is now maintained through drm-misc-next, as sroland has stepped down as maintainer for now.
- vmwgfx now uses copies of vmware's internal device headers.
- Slowly convert ti-sn65dsi83 over to atomic.
- Rework amdgpu dma-resv handling.
- Fix virtio fencing for planes.
- Ensure amdgpu can always evict to SYSTEM.
- Many drivers fixed for implicit fencing rules.
- Set default prepare/cleanup fb for tiny, vram and simple helpers too.
- Rework panfrost gpu reset and related serialization.
- Update VKMS todo list.
- Make bochs a tiny gpu driver, and use vram helper.
- Use Linux IRQ interfaces instead of drm_irq in some drivers.
- Add support for Raspberry Pi Pico to GUD.

Signed-off-by: Dave Airlie <airlied@redhat.com>

# gpg: Signature made Fri 16 Jul 2021 21:06:04 AEST
# gpg:                using RSA key B97BD6A80CAC4981091AE547FE558C72A67013C3
# gpg: Good signature from "Maarten Lankhorst <maarten.lankhorst@linux.intel.com>" [expired]
# gpg:                 aka "Maarten Lankhorst <maarten@debian.org>" [expired]
# gpg:                 aka "Maarten Lankhorst <maarten.lankhorst@canonical.com>" [expired]
# gpg: Note: This key has expired!
# Primary key fingerprint: B97B D6A8 0CAC 4981 091A  E547 FE55 8C72 A670 13C3
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/444811c3-cbec-e9d5-9a6b-9632eda7962a@linux.intel.com
commit 588b3eee52
Author: Dave Airlie <airlied@redhat.com>
Date:   2021-07-21 11:58:26 +10:00

279 changed files with 16152 additions and 14030 deletions

@ -0,0 +1,52 @@
What: /sys/kernel/dmabuf/buffers
Date: May 2021
KernelVersion: v5.13
Contact: Hridya Valsaraju <hridya@google.com>
Description: The /sys/kernel/dmabuf/buffers directory contains a
snapshot of the internal state of every DMA-BUF.
/sys/kernel/dmabuf/buffers/<inode_number> will contain the
statistics for the DMA-BUF with the unique inode number
<inode_number>
Users: kernel memory tuning/debugging tools
What: /sys/kernel/dmabuf/buffers/<inode_number>/exporter_name
Date: May 2021
KernelVersion: v5.13
Contact: Hridya Valsaraju <hridya@google.com>
Description: This file is read-only and contains the name of the exporter of
the DMA-BUF.
What: /sys/kernel/dmabuf/buffers/<inode_number>/size
Date: May 2021
KernelVersion: v5.13
Contact: Hridya Valsaraju <hridya@google.com>
Description: This file is read-only and specifies the size of the DMA-BUF in
bytes.
What: /sys/kernel/dmabuf/buffers/<inode_number>/attachments
Date: May 2021
KernelVersion: v5.13
Contact: Hridya Valsaraju <hridya@google.com>
Description: This directory will contain subdirectories representing every
attachment of the DMA-BUF.
What: /sys/kernel/dmabuf/buffers/<inode_number>/attachments/<attachment_uid>
Date: May 2021
KernelVersion: v5.13
Contact: Hridya Valsaraju <hridya@google.com>
Description: This directory will contain information on the attached device
and the number of current distinct device mappings.
What: /sys/kernel/dmabuf/buffers/<inode_number>/attachments/<attachment_uid>/device
Date: May 2021
KernelVersion: v5.13
Contact: Hridya Valsaraju <hridya@google.com>
Description: This file is read-only and is a symlink to the attached device's
sysfs entry.
What: /sys/kernel/dmabuf/buffers/<inode_number>/attachments/<attachment_uid>/map_counter
Date: May 2021
KernelVersion: v5.13
Contact: Hridya Valsaraju <hridya@google.com>
Description: This file is read-only and contains a map_counter indicating the
number of distinct device mappings of the attachment.
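
For illustration, a minimal userspace sketch of reading these stats files
(the inode number 4133 is hypothetical; only the sysfs layout documented
above is taken from the ABI file):

#include <stdio.h>

/* Read exporter_name and size for one (hypothetical) DMA-BUF entry. */
int main(void)
{
	char name[64];
	unsigned long size;
	FILE *f;

	f = fopen("/sys/kernel/dmabuf/buffers/4133/exporter_name", "r");
	if (!f || !fgets(name, sizeof(name), f))
		return 1;
	fclose(f);

	f = fopen("/sys/kernel/dmabuf/buffers/4133/size", "r");
	if (!f || fscanf(f, "%lu", &size) != 1)
		return 1;
	fclose(f);

	printf("exporter: %s", name);	/* name keeps its trailing newline */
	printf("size: %lu bytes\n", size);
	return 0;
}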

@ -70,6 +70,9 @@ properties:
const: 1
description: See ../../pwm/pwm.yaml for description of the cell formats.
aux-bus:
$ref: /schemas/display/dp-aux-bus.yaml#
ports:
$ref: /schemas/graph.yaml#/properties/ports
@ -150,7 +153,6 @@ properties:
required:
- compatible
- reg
- enable-gpios
- vccio-supply
- vpll-supply
- vcca-supply
@ -201,11 +203,26 @@ examples:
port@1 {
reg = <1>;
endpoint {
sn65dsi86_out: endpoint {
remote-endpoint = <&panel_in_edp>;
};
};
};
aux-bus {
panel {
compatible = "boe,nv133fhm-n62";
power-supply = <&pp3300_dx_edp>;
backlight = <&backlight>;
hpd-gpios = <&sn65dsi86_bridge 2 GPIO_ACTIVE_HIGH>;
port {
panel_in_edp: endpoint {
remote-endpoint = <&sn65dsi86_out>;
};
};
};
};
};
};
- |

@ -0,0 +1,37 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/dp-aux-bus.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: DisplayPort AUX bus
maintainers:
- Douglas Anderson <dianders@chromium.org>
description:
DisplayPort controllers provide a control channel to the sinks that
are hooked up to them. This is the DP AUX bus. Over the DP AUX bus
we can query properties about a sink and also configure it. In
particular, DP sinks support DDC over DP AUX which allows tunneling
a standard I2C DDC connection over the AUX channel.
To model this relationship, DP sinks should be placed as children
of the DP controller under the "aux-bus" node.
At the moment, this binding only handles the eDP case. It is
possible it will be extended in the future to handle the DP case.
For DP, presumably a connector would be listed under the DP AUX
bus instead of a panel.
properties:
$nodename:
const: "aux-bus"
panel:
$ref: panel/panel-common.yaml#
additionalProperties: false
required:
- panel

@ -0,0 +1,62 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/innolux,ej030na.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Innolux EJ030NA 3.0" (320x480 pixels) 24-bit TFT LCD panel
description: |
The panel must obey the rules for a SPI slave device as specified in
spi/spi-controller.yaml
maintainers:
- Paul Cercueil <paul@crapouillou.net>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: innolux,ej030na
backlight: true
port: true
power-supply: true
reg: true
reset-gpios: true
required:
- compatible
- reg
- power-supply
- reset-gpios
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
spi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "innolux,ej030na";
reg = <0>;
spi-max-frequency = <10000000>;
reset-gpios = <&gpe 4 GPIO_ACTIVE_LOW>;
power-supply = <&lcd_power>;
backlight = <&backlight>;
port {
panel_input: endpoint {
remote-endpoint = <&panel_output>;
};
};
};
};

@ -110,6 +110,9 @@ properties:
# Emerging Display Technology Corp. 5.7" VGA TFT LCD panel
- edt,et057090dhu
- edt,et070080dh6
# Emerging Display Technology Corp. 3.5" WVGA TFT LCD panel with
# capacitive multitouch
- edt,etm0350g0dh6
# Emerging Display Technology Corp. 480x272 TFT Display with capacitive touch
- edt,etm043080dh6gp
# Emerging Display Technology Corp. 480x272 TFT Display
@ -128,6 +131,9 @@ properties:
# Emerging Display Technology Corp. WVGA TFT Display with capacitive touch
- edt,etm0700g0dh6
- edt,etm0700g0edh6
# Emerging Display Technology Corp. 5.7" VGA TFT LCD panel with
# capacitive touch
- edt,etmv570g2dhu
# Evervision Electronics Co. Ltd. VGG804821 5.0" WVGA TFT LCD Panel
- evervision,vgg804821
# Foxlink Group 5" WVGA TFT LCD panel
@ -242,6 +248,8 @@ properties:
- rocktech,rk101ii01d-ct
# Rocktech Display Ltd. RK070ER9427 800(RGB)x480 TFT LCD panel
- rocktech,rk070er9427
# Samsung 13.3" FHD (1920x1080 pixels) eDP AMOLED panel
- samsung,atna33xc20
# Samsung 12.2" (2560x1600 pixels) TFT LCD panel
- samsung,lsn122dl01-c01
# Samsung Electronics 10.1" WSVGA TFT LCD panel
@ -298,6 +306,8 @@ properties:
enable-gpios: true
port: true
power-supply: true
no-hpd: true
hpd-gpios: true
additionalProperties: false

@ -33,8 +33,11 @@ properties:
backlight: true
spi-cpha: true
spi-cpol: true
spi-max-frequency:
$ref: /schemas/types.yaml#/definitions/uint32
description: inherited as a SPI client node, the datasheet specifies
maximum 300 ns minimum cycle which gives around 3 MHz max frequency
maximum: 3000000
@ -44,6 +47,9 @@ properties:
required:
- compatible
- reg
- spi-cpha
- spi-cpol
- port
additionalProperties: false
@ -52,15 +58,23 @@ examples:
#include <dt-bindings/gpio/gpio.h>
spi {
compatible = "spi-gpio";
sck-gpios = <&gpio 0 GPIO_ACTIVE_HIGH>;
miso-gpios = <&gpio 1 GPIO_ACTIVE_HIGH>;
mosi-gpios = <&gpio 2 GPIO_ACTIVE_HIGH>;
cs-gpios = <&gpio 3 GPIO_ACTIVE_HIGH>;
num-chipselects = <1>;
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "samsung,lms397kf04";
spi-max-frequency = <3000000>;
spi-cpha;
spi-cpol;
reg = <0>;
vci-supply = <&lcd_3v0_reg>;
vccio-supply = <&lcd_1v8_reg>;
reset-gpios = <&gpio 1 GPIO_ACTIVE_LOW>;
reset-gpios = <&gpio 4 GPIO_ACTIVE_LOW>;
backlight = <&ktd259>;
port {

@ -88,6 +88,9 @@ consider though:
- The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for
details.
- The DMA buffer FD also supports a few dma-buf-specific ioctls, see
`DMA Buffer ioctls`_ below for details.
Basic Operation and Device DMA Access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -106,6 +109,16 @@ Implicit Fence Poll Support
.. kernel-doc:: drivers/dma-buf/dma-buf.c
:doc: implicit fence polling
DMA-BUF statistics
~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf-sysfs-stats.c
:doc: overview
DMA Buffer ioctls
~~~~~~~~~~~~~~~~~
.. kernel-doc:: include/uapi/linux/dma-buf.h
Kernel Functions and Structures Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
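
Since the DMA Buffer ioctls section now documents DMA_BUF_IOCTL_SYNC, here
is a hedged sketch of the begin/end CPU-access bracketing it describes (fd
and map are assumed to be an existing dma-buf file descriptor and its
mmap()ed mapping; the function name is made up):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Bracket a CPU write to a mapped dma-buf with sync start/end ioctls. */
static int cpu_clear_buffer(int fd, void *map, size_t len)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
	};

	if (ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync))
		return -1;

	memset(map, 0, len);	/* CPU access happens between the two calls */

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	return ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
}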

@ -457,6 +457,19 @@ Userspace API Structures
.. kernel-doc:: include/uapi/drm/drm_mode.h
:doc: overview
.. _crtc_index:
CRTC index
----------
CRTCs have both an object ID and an index, and they are not the same thing.
The index is used in cases where a densely packed identifier for a CRTC is
needed, for instance a bitmask of CRTCs. The member possible_crtcs of struct
drm_mode_get_plane is an example.
DRM_IOCTL_MODE_GETRESOURCES populates a structure with an array of CRTC IDs,
and the CRTC index is its position in this array.
.. kernel-doc:: include/uapi/drm/drm.h
:internal:
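
To make the index vs. object-ID distinction concrete, a hedged libdrm
sketch that decodes a plane's possible_crtcs bitmask (the function name is
illustrative; drmModeGetResources() and its fields are the real libdrm API):

#include <stdio.h>
#include <stdint.h>
#include <xf86drmMode.h>

/* For every bit set in possible_crtcs, the bit position is the CRTC index
 * and resources->crtcs[index] is the corresponding CRTC object ID. */
static void print_possible_crtcs(int fd, uint32_t possible_crtcs)
{
	drmModeRes *res = drmModeGetResources(fd);
	int i;

	if (!res)
		return;
	for (i = 0; i < res->count_crtcs; i++)
		if (possible_crtcs & (1u << i))
			printf("index %d -> CRTC ID %u\n", i, res->crtcs[i]);
	drmModeFreeResources(res);
}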

@ -98,9 +98,17 @@ with VKMS maintainers.
IGT better support
------------------
- Investigate: (1) test cases on kms_plane that are failing due to timeout on
capturing CRC; (2) when running kms_flip test cases in sequence, some
successful individual test cases are failing randomly.
Debugging:
- kms_plane: some test cases are failing due to timeout on capturing CRC;
- kms_flip: when running test cases in sequence, some successful individual
test cases are failing randomly; when individually, some successful test
cases display in the log the following error::
[drm:vkms_prepare_fb [vkms]] ERROR vmap failed: -4
Virtual hardware (vblank-less) mode:
- VKMS already has support for vblanks simulated via hrtimers, which can be
tested with kms_flip test; in some way, we can say that VKMS already mimics
@ -116,7 +124,17 @@ Add Plane Features
There's lots of plane features we could add support for:
- Real overlay planes, not just cursor.
- Multiple overlay planes. [Good to get started]
- Clearing primary plane: clear primary plane before plane composition (at the
start) for correctness of pixel blend ops. It also guarantees alpha channel
is cleared in the target buffer for stable crc. [Good to get started]
- ARGB format on primary plane: blend the primary plane into background with
translucent alpha.
- Support when the primary plane isn't exactly matching the output size: blend
the primary plane into the black background.
- Full alpha blending on all planes.
@ -129,13 +147,8 @@ There's lots of plane features we could add support for:
cursor api).
For all of these, we also want to review the igt test coverage and make sure
all relevant igt testcases work on vkms.
Prime Buffer Sharing
--------------------
- Syzbot report - WARNING in vkms_gem_free_object:
https://syzkaller.appspot.com/bug?extid=e7ad70d406e74d8fc9d0
all relevant igt testcases work on vkms. They are good options for an
internship project.
Runtime Configuration
---------------------
@ -153,7 +166,7 @@ module. Use/Test-cases:
the refresh rate.
The currently proposed solution is to expose vkms configuration through
configfs. All existing module options should be supported through configfs
configfs. All existing module options should be supported through configfs
too.
Writeback support
@ -162,6 +175,7 @@ Writeback support
- The writeback and CRC capture operations share the use of composer_enabled
boolean to ensure vblanks. Probably, when these operations work together,
composer_enabled needs to refcount the composer state to work properly.
[Good to get started]
- Add support for cloned writeback outputs and related test cases using a
cloned output in the IGT kms_writeback.

@ -5770,7 +5770,7 @@ M: Gerd Hoffmann <kraxel@redhat.com>
L: virtualization@lists.linux-foundation.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/bochs/
F: drivers/gpu/drm/tiny/bochs.c
DRM DRIVER FOR BOE HIMAX8279D PANELS
M: Jerry Han <hanxu5@huaqin.corp-partner.google.com>
@ -5955,6 +5955,13 @@ S: Maintained
F: Documentation/devicetree/bindings/display/panel/raydium,rm67191.yaml
F: drivers/gpu/drm/panel/panel-raydium-rm67191.c
DRM DRIVER FOR SAMSUNG DB7430 PANELS
M: Linus Walleij <linus.walleij@linaro.org>
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/panel/samsung,lms397kf04.yaml
F: drivers/gpu/drm/panel/panel-samsung-db7430.c
DRM DRIVER FOR SITRONIX ST7703 PANELS
M: Guido Günther <agx@sigxcpu.org>
R: Purism Kernel Team <kernel@puri.sm>
@ -6053,11 +6060,10 @@ F: drivers/gpu/drm/vboxvideo/
DRM DRIVER FOR VMWARE VIRTUAL GPU
M: "VMware Graphics" <linux-graphics-maintainer@vmware.com>
M: Roland Scheidegger <sroland@vmware.com>
M: Zack Rusin <zackr@vmware.com>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://people.freedesktop.org/~sroland/linux
T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/vmwgfx/
F: include/uapi/drm/vmwgfx_drm.h

@ -255,21 +255,6 @@ max98357a: audio-codec-0 {
#sound-dai-cells = <0>;
};
panel: panel {
/* Compatible will be filled in per-board */
power-supply = <&pp3300_dx_edp>;
backlight = <&backlight>;
hpd-gpios = <&sn65dsi86_bridge 2 GPIO_ACTIVE_HIGH>;
ports {
port {
panel_in_edp: endpoint {
remote-endpoint = <&sn65dsi86_out>;
};
};
};
};
pwmleds {
compatible = "pwm-leds";
keyboard_backlight: keyboard-backlight {
@ -666,6 +651,21 @@ sn65dsi86_out: endpoint {
};
};
};
aux-bus {
panel: panel {
/* Compatible will be filled in per-board */
power-supply = <&pp3300_dx_edp>;
backlight = <&backlight>;
hpd-gpios = <&sn65dsi86_bridge 2 GPIO_ACTIVE_HIGH>;
port {
panel_in_edp: endpoint {
remote-endpoint = <&sn65dsi86_out>;
};
};
};
};
};
};

@ -72,6 +72,17 @@ menuconfig DMABUF_HEAPS
allows userspace to allocate dma-bufs that can be shared
between drivers.
menuconfig DMABUF_SYSFS_STATS
bool "DMA-BUF sysfs statistics"
select DMA_SHARED_BUFFER
help
Choose this option to enable DMA-BUF sysfs statistics
in location /sys/kernel/dmabuf/buffers.
/sys/kernel/dmabuf/buffers/<inode_number> will contain
statistics for the DMA-BUF with the unique inode number
<inode_number>.
source "drivers/dma-buf/heaps/Kconfig"
endmenu

@ -6,6 +6,7 @@ obj-$(CONFIG_DMABUF_HEAPS) += heaps/
obj-$(CONFIG_SYNC_FILE) += sync_file.o
obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o
obj-$(CONFIG_UDMABUF) += udmabuf.o
obj-$(CONFIG_DMABUF_SYSFS_STATS) += dma-buf-sysfs-stats.o
dmabuf_selftests-y := \
selftest.o \

@ -0,0 +1,337 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* DMA-BUF sysfs statistics.
*
* Copyright (C) 2021 Google LLC.
*/
#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/kobject.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
#include "dma-buf-sysfs-stats.h"
#define to_dma_buf_entry_from_kobj(x) container_of(x, struct dma_buf_sysfs_entry, kobj)
/**
* DOC: overview
*
* ``/sys/kernel/debug/dma_buf/bufinfo`` provides an overview of every DMA-BUF
* in the system. However, since debugfs is not safe to be mounted in
* production, procfs and sysfs can be used to gather DMA-BUF statistics on
* production systems.
*
* The ``/proc/<pid>/fdinfo/<fd>`` files in procfs can be used to gather
* information about DMA-BUF fds. Detailed documentation about the interface
* is present in Documentation/filesystems/proc.rst.
*
* Unfortunately, the existing procfs interfaces can only provide information
* about the DMA-BUFs for which processes hold fds or have the buffers mmapped
* into their address space. This necessitated the creation of the DMA-BUF sysfs
* statistics interface to provide per-buffer information on production systems.
*
* The interface at ``/sys/kernel/dmabuf/buffers`` exposes information about
* every DMA-BUF when ``CONFIG_DMABUF_SYSFS_STATS`` is enabled.
*
* The following stats are exposed by the interface:
*
* * ``/sys/kernel/dmabuf/buffers/<inode_number>/exporter_name``
* * ``/sys/kernel/dmabuf/buffers/<inode_number>/size``
* * ``/sys/kernel/dmabuf/buffers/<inode_number>/attachments/<attach_uid>/device``
* * ``/sys/kernel/dmabuf/buffers/<inode_number>/attachments/<attach_uid>/map_counter``
*
* The information in the interface can also be used to derive per-exporter and
* per-device usage statistics. The data from the interface can be gathered
* on error conditions or other important events to provide a snapshot of
* DMA-BUF usage. It can also be collected periodically by telemetry to monitor
* various metrics.
*
* Detailed documentation about the interface is present in
* Documentation/ABI/testing/sysfs-kernel-dmabuf-buffers.
*/
struct dma_buf_stats_attribute {
struct attribute attr;
ssize_t (*show)(struct dma_buf *dmabuf,
struct dma_buf_stats_attribute *attr, char *buf);
};
#define to_dma_buf_stats_attr(x) container_of(x, struct dma_buf_stats_attribute, attr)
static ssize_t dma_buf_stats_attribute_show(struct kobject *kobj,
struct attribute *attr,
char *buf)
{
struct dma_buf_stats_attribute *attribute;
struct dma_buf_sysfs_entry *sysfs_entry;
struct dma_buf *dmabuf;
attribute = to_dma_buf_stats_attr(attr);
sysfs_entry = to_dma_buf_entry_from_kobj(kobj);
dmabuf = sysfs_entry->dmabuf;
if (!dmabuf || !attribute->show)
return -EIO;
return attribute->show(dmabuf, attribute, buf);
}
static const struct sysfs_ops dma_buf_stats_sysfs_ops = {
.show = dma_buf_stats_attribute_show,
};
static ssize_t exporter_name_show(struct dma_buf *dmabuf,
struct dma_buf_stats_attribute *attr,
char *buf)
{
return sysfs_emit(buf, "%s\n", dmabuf->exp_name);
}
static ssize_t size_show(struct dma_buf *dmabuf,
struct dma_buf_stats_attribute *attr,
char *buf)
{
return sysfs_emit(buf, "%zu\n", dmabuf->size);
}
static struct dma_buf_stats_attribute exporter_name_attribute =
__ATTR_RO(exporter_name);
static struct dma_buf_stats_attribute size_attribute = __ATTR_RO(size);
static struct attribute *dma_buf_stats_default_attrs[] = {
&exporter_name_attribute.attr,
&size_attribute.attr,
NULL,
};
ATTRIBUTE_GROUPS(dma_buf_stats_default);
static void dma_buf_sysfs_release(struct kobject *kobj)
{
struct dma_buf_sysfs_entry *sysfs_entry;
sysfs_entry = to_dma_buf_entry_from_kobj(kobj);
kfree(sysfs_entry);
}
static struct kobj_type dma_buf_ktype = {
.sysfs_ops = &dma_buf_stats_sysfs_ops,
.release = dma_buf_sysfs_release,
.default_groups = dma_buf_stats_default_groups,
};
#define to_dma_buf_attach_entry_from_kobj(x) container_of(x, struct dma_buf_attach_sysfs_entry, kobj)
struct dma_buf_attach_stats_attribute {
struct attribute attr;
ssize_t (*show)(struct dma_buf_attach_sysfs_entry *sysfs_entry,
struct dma_buf_attach_stats_attribute *attr, char *buf);
};
#define to_dma_buf_attach_stats_attr(x) container_of(x, struct dma_buf_attach_stats_attribute, attr)
static ssize_t dma_buf_attach_stats_attribute_show(struct kobject *kobj,
struct attribute *attr,
char *buf)
{
struct dma_buf_attach_stats_attribute *attribute;
struct dma_buf_attach_sysfs_entry *sysfs_entry;
attribute = to_dma_buf_attach_stats_attr(attr);
sysfs_entry = to_dma_buf_attach_entry_from_kobj(kobj);
if (!attribute->show)
return -EIO;
return attribute->show(sysfs_entry, attribute, buf);
}
static const struct sysfs_ops dma_buf_attach_stats_sysfs_ops = {
.show = dma_buf_attach_stats_attribute_show,
};
static ssize_t map_counter_show(struct dma_buf_attach_sysfs_entry *sysfs_entry,
struct dma_buf_attach_stats_attribute *attr,
char *buf)
{
return sysfs_emit(buf, "%u\n", sysfs_entry->map_counter);
}
static struct dma_buf_attach_stats_attribute map_counter_attribute =
__ATTR_RO(map_counter);
static struct attribute *dma_buf_attach_stats_default_attrs[] = {
&map_counter_attribute.attr,
NULL,
};
ATTRIBUTE_GROUPS(dma_buf_attach_stats_default);
static void dma_buf_attach_sysfs_release(struct kobject *kobj)
{
struct dma_buf_attach_sysfs_entry *sysfs_entry;
sysfs_entry = to_dma_buf_attach_entry_from_kobj(kobj);
kfree(sysfs_entry);
}
static struct kobj_type dma_buf_attach_ktype = {
.sysfs_ops = &dma_buf_attach_stats_sysfs_ops,
.release = dma_buf_attach_sysfs_release,
.default_groups = dma_buf_attach_stats_default_groups,
};
void dma_buf_attach_stats_teardown(struct dma_buf_attachment *attach)
{
struct dma_buf_attach_sysfs_entry *sysfs_entry;
sysfs_entry = attach->sysfs_entry;
if (!sysfs_entry)
return;
sysfs_delete_link(&sysfs_entry->kobj, &attach->dev->kobj, "device");
kobject_del(&sysfs_entry->kobj);
kobject_put(&sysfs_entry->kobj);
}
int dma_buf_attach_stats_setup(struct dma_buf_attachment *attach,
unsigned int uid)
{
struct dma_buf_attach_sysfs_entry *sysfs_entry;
int ret;
struct dma_buf *dmabuf;
if (!attach)
return -EINVAL;
dmabuf = attach->dmabuf;
sysfs_entry = kzalloc(sizeof(struct dma_buf_attach_sysfs_entry),
GFP_KERNEL);
if (!sysfs_entry)
return -ENOMEM;
sysfs_entry->kobj.kset = dmabuf->sysfs_entry->attach_stats_kset;
attach->sysfs_entry = sysfs_entry;
ret = kobject_init_and_add(&sysfs_entry->kobj, &dma_buf_attach_ktype,
NULL, "%u", uid);
if (ret)
goto kobj_err;
ret = sysfs_create_link(&sysfs_entry->kobj, &attach->dev->kobj,
"device");
if (ret)
goto link_err;
return 0;
link_err:
kobject_del(&sysfs_entry->kobj);
kobj_err:
kobject_put(&sysfs_entry->kobj);
attach->sysfs_entry = NULL;
return ret;
}
void dma_buf_stats_teardown(struct dma_buf *dmabuf)
{
struct dma_buf_sysfs_entry *sysfs_entry;
sysfs_entry = dmabuf->sysfs_entry;
if (!sysfs_entry)
return;
kset_unregister(sysfs_entry->attach_stats_kset);
kobject_del(&sysfs_entry->kobj);
kobject_put(&sysfs_entry->kobj);
}
/* Statistics files do not need to send uevents. */
static int dmabuf_sysfs_uevent_filter(struct kset *kset, struct kobject *kobj)
{
return 0;
}
static const struct kset_uevent_ops dmabuf_sysfs_no_uevent_ops = {
.filter = dmabuf_sysfs_uevent_filter,
};
static struct kset *dma_buf_stats_kset;
static struct kset *dma_buf_per_buffer_stats_kset;
int dma_buf_init_sysfs_statistics(void)
{
dma_buf_stats_kset = kset_create_and_add("dmabuf",
&dmabuf_sysfs_no_uevent_ops,
kernel_kobj);
if (!dma_buf_stats_kset)
return -ENOMEM;
dma_buf_per_buffer_stats_kset = kset_create_and_add("buffers",
&dmabuf_sysfs_no_uevent_ops,
&dma_buf_stats_kset->kobj);
if (!dma_buf_per_buffer_stats_kset) {
kset_unregister(dma_buf_stats_kset);
return -ENOMEM;
}
return 0;
}
void dma_buf_uninit_sysfs_statistics(void)
{
kset_unregister(dma_buf_per_buffer_stats_kset);
kset_unregister(dma_buf_stats_kset);
}
int dma_buf_stats_setup(struct dma_buf *dmabuf)
{
struct dma_buf_sysfs_entry *sysfs_entry;
int ret;
struct kset *attach_stats_kset;
if (!dmabuf || !dmabuf->file)
return -EINVAL;
if (!dmabuf->exp_name) {
pr_err("exporter name must not be empty if stats needed\n");
return -EINVAL;
}
sysfs_entry = kzalloc(sizeof(struct dma_buf_sysfs_entry), GFP_KERNEL);
if (!sysfs_entry)
return -ENOMEM;
sysfs_entry->kobj.kset = dma_buf_per_buffer_stats_kset;
sysfs_entry->dmabuf = dmabuf;
dmabuf->sysfs_entry = sysfs_entry;
/* create the directory for buffer stats */
ret = kobject_init_and_add(&sysfs_entry->kobj, &dma_buf_ktype, NULL,
"%lu", file_inode(dmabuf->file)->i_ino);
if (ret)
goto err_sysfs_dmabuf;
/* create the directory for attachment stats */
attach_stats_kset = kset_create_and_add("attachments",
&dmabuf_sysfs_no_uevent_ops,
&sysfs_entry->kobj);
if (!attach_stats_kset) {
ret = -ENOMEM;
goto err_sysfs_attach;
}
sysfs_entry->attach_stats_kset = attach_stats_kset;
return 0;
err_sysfs_attach:
kobject_del(&sysfs_entry->kobj);
err_sysfs_dmabuf:
kobject_put(&sysfs_entry->kobj);
dmabuf->sysfs_entry = NULL;
return ret;
}

@ -0,0 +1,62 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* DMA-BUF sysfs statistics.
*
* Copyright (C) 2021 Google LLC.
*/
#ifndef _DMA_BUF_SYSFS_STATS_H
#define _DMA_BUF_SYSFS_STATS_H
#ifdef CONFIG_DMABUF_SYSFS_STATS
int dma_buf_init_sysfs_statistics(void);
void dma_buf_uninit_sysfs_statistics(void);
int dma_buf_stats_setup(struct dma_buf *dmabuf);
int dma_buf_attach_stats_setup(struct dma_buf_attachment *attach,
unsigned int uid);
static inline void dma_buf_update_attachment_map_count(struct dma_buf_attachment *attach,
int delta)
{
struct dma_buf_attach_sysfs_entry *entry = attach->sysfs_entry;
entry->map_counter += delta;
}
void dma_buf_stats_teardown(struct dma_buf *dmabuf);
void dma_buf_attach_stats_teardown(struct dma_buf_attachment *attach);
static inline unsigned int dma_buf_update_attach_uid(struct dma_buf *dmabuf)
{
struct dma_buf_sysfs_entry *entry = dmabuf->sysfs_entry;
return entry->attachment_uid++;
}
#else
static inline int dma_buf_init_sysfs_statistics(void)
{
return 0;
}
static inline void dma_buf_uninit_sysfs_statistics(void) {}
static inline int dma_buf_stats_setup(struct dma_buf *dmabuf)
{
return 0;
}
static inline int dma_buf_attach_stats_setup(struct dma_buf_attachment *attach,
unsigned int uid)
{
return 0;
}
static inline void dma_buf_stats_teardown(struct dma_buf *dmabuf) {}
static inline void dma_buf_attach_stats_teardown(struct dma_buf_attachment *attach) {}
static inline void dma_buf_update_attachment_map_count(struct dma_buf_attachment *attach,
int delta) {}
static inline unsigned int dma_buf_update_attach_uid(struct dma_buf *dmabuf)
{
return 0;
}
#endif
#endif // _DMA_BUF_SYSFS_STATS_H

@ -29,6 +29,8 @@
#include <uapi/linux/dma-buf.h>
#include <uapi/linux/magic.h>
#include "dma-buf-sysfs-stats.h"
static inline int is_dma_buf_file(struct file *);
struct dma_buf_list {
@ -79,6 +81,7 @@ static void dma_buf_release(struct dentry *dentry)
if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
dma_resv_fini(dmabuf->resv);
dma_buf_stats_teardown(dmabuf);
module_put(dmabuf->owner);
kfree(dmabuf->name);
kfree(dmabuf);
@ -580,6 +583,10 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
file->f_mode |= FMODE_LSEEK;
dmabuf->file = file;
ret = dma_buf_stats_setup(dmabuf);
if (ret)
goto err_sysfs;
mutex_init(&dmabuf->lock);
INIT_LIST_HEAD(&dmabuf->attachments);
@ -589,6 +596,14 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
return dmabuf;
err_sysfs:
/*
* Set file->f_path.dentry->d_fsdata to NULL so that when
* dma_buf_release() gets invoked by dentry_ops, it exits
* early before calling the release() dma_buf op.
*/
file->f_path.dentry->d_fsdata = NULL;
fput(file);
err_dmabuf:
kfree(dmabuf);
err_module:
@ -723,6 +738,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
{
struct dma_buf_attachment *attach;
int ret;
unsigned int attach_uid;
if (WARN_ON(!dmabuf || !dev))
return ERR_PTR(-EINVAL);
@ -748,8 +764,13 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
}
dma_resv_lock(dmabuf->resv, NULL);
list_add(&attach->node, &dmabuf->attachments);
attach_uid = dma_buf_update_attach_uid(dmabuf);
dma_resv_unlock(dmabuf->resv);
ret = dma_buf_attach_stats_setup(attach, attach_uid);
if (ret)
goto err_sysfs;
/* When either the importer or the exporter can't handle dynamic
* mappings we cache the mapping here to avoid issues with the
* reservation object lock.
@ -776,6 +797,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
dma_resv_unlock(attach->dmabuf->resv);
attach->sgt = sgt;
attach->dir = DMA_BIDIRECTIONAL;
dma_buf_update_attachment_map_count(attach, 1 /* delta */);
}
return attach;
@ -792,6 +814,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
if (dma_buf_is_dynamic(attach->dmabuf))
dma_resv_unlock(attach->dmabuf->resv);
err_sysfs:
dma_buf_detach(dmabuf, attach);
return ERR_PTR(ret);
}
@ -841,6 +864,7 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
dma_resv_lock(attach->dmabuf->resv, NULL);
__unmap_dma_buf(attach, attach->sgt, attach->dir);
dma_buf_update_attachment_map_count(attach, -1 /* delta */);
if (dma_buf_is_dynamic(attach->dmabuf)) {
dmabuf->ops->unpin(attach);
@ -854,6 +878,7 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
if (dmabuf->ops->detach)
dmabuf->ops->detach(dmabuf, attach);
dma_buf_attach_stats_teardown(attach);
kfree(attach);
}
EXPORT_SYMBOL_GPL(dma_buf_detach);
@ -926,6 +951,9 @@ EXPORT_SYMBOL_GPL(dma_buf_unpin);
* the underlying backing storage is pinned for as long as a mapping exists,
* therefore users/importers should not hold onto a mapping for undue amounts of
* time.
*
* Important: Dynamic importers must wait for the exclusive fence of the struct
* dma_resv attached to the DMA-BUF first.
*/
struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
enum dma_data_direction direction)
@ -993,6 +1021,9 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
}
#endif /* CONFIG_DMA_API_DEBUG */
if (!IS_ERR(sg_table))
dma_buf_update_attachment_map_count(attach, 1 /* delta */);
return sg_table;
}
EXPORT_SYMBOL_GPL(dma_buf_map_attachment);
@ -1030,6 +1061,8 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
if (dma_buf_is_dynamic(attach->dmabuf) &&
!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
dma_buf_unpin(attach);
dma_buf_update_attachment_map_count(attach, -1 /* delta */);
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
@ -1469,6 +1502,12 @@ static inline void dma_buf_uninit_debugfs(void)
static int __init dma_buf_init(void)
{
int ret;
ret = dma_buf_init_sysfs_statistics();
if (ret)
return ret;
dma_buf_mnt = kern_mount(&dma_buf_fs_type);
if (IS_ERR(dma_buf_mnt))
return PTR_ERR(dma_buf_mnt);
@ -1484,5 +1523,6 @@ static void __exit dma_buf_deinit(void)
{
dma_buf_uninit_debugfs();
kern_unmount(dma_buf_mnt);
dma_buf_uninit_sysfs_statistics();
}
__exitcall(dma_buf_deinit);

@ -137,6 +137,7 @@ static void dma_fence_chain_cb(struct dma_fence *f, struct dma_fence_cb *cb)
struct dma_fence_chain *chain;
chain = container_of(cb, typeof(*chain), cb);
init_irq_work(&chain->work, dma_fence_chain_irq_work);
irq_work_queue(&chain->work);
dma_fence_put(f);
}
@ -239,7 +240,6 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
rcu_assign_pointer(chain->prev, prev);
chain->fence = fence;
chain->prev_seqno = 0;
init_irq_work(&chain->work, dma_fence_chain_irq_work);
/* Try to reuse the context of the previous chain node. */
if (prev_chain && __dma_fence_is_later(seqno, prev->seqno, prev->ops)) {

@ -615,25 +615,21 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
*/
bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
{
unsigned int seq, shared_count;
struct dma_fence *fence;
unsigned int seq;
int ret;
rcu_read_lock();
retry:
ret = true;
shared_count = 0;
seq = read_seqcount_begin(&obj->seq);
if (test_all) {
struct dma_resv_list *fobj = dma_resv_shared_list(obj);
unsigned int i;
if (fobj)
shared_count = fobj->shared_count;
unsigned int i, shared_count;
shared_count = fobj ? fobj->shared_count : 0;
for (i = 0; i < shared_count; ++i) {
struct dma_fence *fence;
fence = rcu_dereference(fobj->shared[i]);
ret = dma_resv_test_signaled_single(fence);
if (ret < 0)
@ -641,23 +637,18 @@ bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
else if (!ret)
break;
}
}
if (read_seqcount_retry(&obj->seq, seq))
fence = dma_resv_excl_fence(obj);
if (ret && fence) {
ret = dma_resv_test_signaled_single(fence);
if (ret < 0)
goto retry;
}
if (!shared_count) {
struct dma_fence *fence_excl = dma_resv_excl_fence(obj);
if (fence_excl) {
ret = dma_resv_test_signaled_single(fence_excl);
if (ret < 0)
goto retry;
if (read_seqcount_retry(&obj->seq, seq))
goto retry;
}
}
if (read_seqcount_retry(&obj->seq, seq))
goto retry;
rcu_read_unlock();
return ret;

@ -58,28 +58,20 @@ static struct dma_fence *mock_fence(void)
return &f->base;
}
static inline struct mock_chain {
struct dma_fence_chain base;
} *to_mock_chain(struct dma_fence *f) {
return container_of(f, struct mock_chain, base.base);
}
static struct dma_fence *mock_chain(struct dma_fence *prev,
struct dma_fence *fence,
u64 seqno)
{
struct mock_chain *f;
struct dma_fence_chain *f;
f = kmalloc(sizeof(*f), GFP_KERNEL);
f = dma_fence_chain_alloc();
if (!f)
return NULL;
dma_fence_chain_init(&f->base,
dma_fence_get(prev),
dma_fence_get(fence),
dma_fence_chain_init(f, dma_fence_get(prev), dma_fence_get(fence),
seqno);
return &f->base.base;
return &f->base;
}
static int sanitycheck(void *arg)
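
The conversion above illustrates the new helpers added in this pull; a
hedged sketch of the intended allocate/init pattern (the function name is
made up, error handling trimmed):

#include <linux/dma-fence-chain.h>

/* Append a fence to a timeline chain: dma_fence_chain_alloc() replaces
 * open-coded kmalloc(), dma_fence_chain_init() consumes the references,
 * and an unused node would be released with dma_fence_chain_free(). */
static struct dma_fence *append_to_chain(struct dma_fence *prev,
					 struct dma_fence *fence, u64 seqno)
{
	struct dma_fence_chain *chain = dma_fence_chain_alloc();

	if (!chain)
		return NULL;
	dma_fence_chain_init(chain, dma_fence_get(prev),
			     dma_fence_get(fence), seqno);
	return &chain->base;
}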

@ -11,9 +11,15 @@
#include <linux/shmem_fs.h>
#include <linux/slab.h>
#include <linux/udmabuf.h>
#include <linux/hugetlb.h>
static const u32 list_limit = 1024; /* udmabuf_create_list->count limit */
static const size_t size_limit_mb = 64; /* total dmabuf size, in megabytes */
static int list_limit = 1024;
module_param(list_limit, int, 0644);
MODULE_PARM_DESC(list_limit, "udmabuf_create_list->count limit. Default is 1024.");
static int size_limit_mb = 64;
module_param(size_limit_mb, int, 0644);
MODULE_PARM_DESC(size_limit_mb, "Max size of a dmabuf, in megabytes. Default is 64.");
struct udmabuf {
pgoff_t pagecount;
@ -160,10 +166,13 @@ static long udmabuf_create(struct miscdevice *device,
{
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct file *memfd = NULL;
struct address_space *mapping = NULL;
struct udmabuf *ubuf;
struct dma_buf *buf;
pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
struct page *page;
struct page *page, *hpage = NULL;
pgoff_t subpgoff, maxsubpgs;
struct hstate *hpstate;
int seals, ret = -EINVAL;
u32 i, flags;
@ -194,7 +203,8 @@ static long udmabuf_create(struct miscdevice *device,
memfd = fget(list[i].memfd);
if (!memfd)
goto err;
if (!shmem_mapping(file_inode(memfd)->i_mapping))
mapping = file_inode(memfd)->i_mapping;
if (!shmem_mapping(mapping) && !is_file_hugepages(memfd))
goto err;
seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
if (seals == -EINVAL)
@ -205,17 +215,48 @@ static long udmabuf_create(struct miscdevice *device,
goto err;
pgoff = list[i].offset >> PAGE_SHIFT;
pgcnt = list[i].size >> PAGE_SHIFT;
if (is_file_hugepages(memfd)) {
hpstate = hstate_file(memfd);
pgoff = list[i].offset >> huge_page_shift(hpstate);
subpgoff = (list[i].offset &
~huge_page_mask(hpstate)) >> PAGE_SHIFT;
maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
}
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
page = shmem_read_mapping_page(
file_inode(memfd)->i_mapping, pgoff + pgidx);
if (IS_ERR(page)) {
ret = PTR_ERR(page);
goto err;
if (is_file_hugepages(memfd)) {
if (!hpage) {
hpage = find_get_page_flags(mapping, pgoff,
FGP_ACCESSED);
if (IS_ERR(hpage)) {
ret = PTR_ERR(hpage);
goto err;
}
}
page = hpage + subpgoff;
get_page(page);
subpgoff++;
if (subpgoff == maxsubpgs) {
put_page(hpage);
hpage = NULL;
subpgoff = 0;
pgoff++;
}
} else {
page = shmem_read_mapping_page(mapping,
pgoff + pgidx);
if (IS_ERR(page)) {
ret = PTR_ERR(page);
goto err;
}
}
ubuf->pages[pgbuf++] = page;
}
fput(memfd);
memfd = NULL;
if (hpage) {
put_page(hpage);
hpage = NULL;
}
}
exp_info.ops = &udmabuf_ops;

@ -35,6 +35,11 @@ config DRM_MIPI_DSI
bool
depends on DRM
config DRM_DP_AUX_BUS
tristate
depends on DRM
depends on OF
config DRM_DP_AUX_CHARDEV
bool "DRM DP AUX Interface"
depends on DRM
@ -317,8 +322,6 @@ source "drivers/gpu/drm/tilcdc/Kconfig"
source "drivers/gpu/drm/qxl/Kconfig"
source "drivers/gpu/drm/bochs/Kconfig"
source "drivers/gpu/drm/virtio/Kconfig"
source "drivers/gpu/drm/msm/Kconfig"

@ -33,6 +33,8 @@ drm-$(CONFIG_PCI) += drm_pci.o
drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
obj-$(CONFIG_DRM_DP_AUX_BUS) += drm_dp_aux_bus.o
drm_vram_helper-y := drm_gem_vram_helper.o
obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
@ -96,7 +98,6 @@ obj-y += omapdrm/
obj-$(CONFIG_DRM_SUN4I) += sun4i/
obj-y += tilcdc/
obj-$(CONFIG_DRM_QXL) += qxl/
obj-$(CONFIG_DRM_BOCHS) += bochs/
obj-$(CONFIG_DRM_VIRTIO_GPU) += virtio/
obj-$(CONFIG_DRM_MSM) += msm/
obj-$(CONFIG_DRM_TEGRA) += tegra/

@ -34,6 +34,7 @@ struct amdgpu_fpriv;
struct amdgpu_bo_list_entry {
struct ttm_validate_buffer tv;
struct amdgpu_bo_va *bo_va;
struct dma_fence_chain *chain;
uint32_t priority;
struct page **user_pages;
bool user_invalidated;

@ -572,6 +572,20 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
goto out;
}
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
e->bo_va = amdgpu_vm_bo_find(vm, bo);
if (bo->tbo.base.dma_buf && !amdgpu_bo_explicit_sync(bo)) {
e->chain = dma_fence_chain_alloc();
if (!e->chain) {
r = -ENOMEM;
goto error_validate;
}
}
}
amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
&p->bytes_moved_vis_threshold);
p->bytes_moved = 0;
@ -599,15 +613,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
gws = p->bo_list->gws_obj;
oa = p->bo_list->oa_obj;
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
/* Make sure we use the exclusive slot for shared BOs */
if (bo->prime_shared_count)
e->tv.num_shared = 0;
e->bo_va = amdgpu_vm_bo_find(vm, bo);
}
if (gds) {
p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT;
p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT;
@ -629,8 +634,13 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
}
error_validate:
if (r)
if (r) {
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
dma_fence_chain_free(e->chain);
e->chain = NULL;
}
ttm_eu_backoff_reservation(&p->ticket, &p->validated);
}
out:
return r;
}
@ -670,9 +680,17 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
{
unsigned i;
if (error && backoff)
if (error && backoff) {
struct amdgpu_bo_list_entry *e;
amdgpu_bo_list_for_each_entry(e, parser->bo_list) {
dma_fence_chain_free(e->chain);
e->chain = NULL;
}
ttm_eu_backoff_reservation(&parser->ticket,
&parser->validated);
}
for (i = 0; i < parser->num_post_deps; i++) {
drm_syncobj_put(parser->post_deps[i].syncobj);
@ -1109,7 +1127,7 @@ static int amdgpu_cs_process_syncobj_timeline_out_dep(struct amdgpu_cs_parser *p
dep->chain = NULL;
if (syncobj_deps[i].point) {
dep->chain = kmalloc(sizeof(*dep->chain), GFP_KERNEL);
dep->chain = dma_fence_chain_alloc();
if (!dep->chain)
return -ENOMEM;
}
@ -1117,7 +1135,7 @@ static int amdgpu_cs_process_syncobj_timeline_out_dep(struct amdgpu_cs_parser *p
dep->syncobj = drm_syncobj_find(p->filp,
syncobj_deps[i].handle);
if (!dep->syncobj) {
kfree(dep->chain);
dma_fence_chain_free(dep->chain);
return -EINVAL;
}
dep->point = syncobj_deps[i].point;
@ -1245,6 +1263,28 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
struct dma_resv *resv = e->tv.bo->base.resv;
struct dma_fence_chain *chain = e->chain;
if (!chain)
continue;
/*
* Work around dma_resv shortcomings by wrapping up the
* submission in a dma_fence_chain and add it as exclusive
* fence, but first add the submission as shared fence to make
* sure that shared fences never signal before the exclusive
* one.
*/
dma_fence_chain_init(chain, dma_resv_excl_fence(resv),
dma_fence_get(p->fence), 1);
dma_resv_add_shared_fence(resv, p->fence);
rcu_assign_pointer(resv->fence_excl, &chain->base);
e->chain = NULL;
}
ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
mutex_unlock(&p->adev->notifier_lock);

@ -42,48 +42,6 @@
#include <linux/pci-p2pdma.h>
#include <linux/pm_runtime.h>
static int
__dma_resv_make_exclusive(struct dma_resv *obj)
{
struct dma_fence **fences;
unsigned int count;
int r;
if (!dma_resv_shared_list(obj)) /* no shared fences to convert */
return 0;
r = dma_resv_get_fences(obj, NULL, &count, &fences);
if (r)
return r;
if (count == 0) {
/* Now that was unexpected. */
} else if (count == 1) {
dma_resv_add_excl_fence(obj, fences[0]);
dma_fence_put(fences[0]);
kfree(fences);
} else {
struct dma_fence_array *array;
array = dma_fence_array_create(count, fences,
dma_fence_context_alloc(1), 0,
false);
if (!array)
goto err_fences_put;
dma_resv_add_excl_fence(obj, &array->base);
dma_fence_put(&array->base);
}
return 0;
err_fences_put:
while (count--)
dma_fence_put(fences[count]);
kfree(fences);
return -ENOMEM;
}
/**
* amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation
*
@ -110,24 +68,6 @@ static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,
if (r < 0)
goto out;
r = amdgpu_bo_reserve(bo, false);
if (unlikely(r != 0))
goto out;
/*
* We only create shared fences for internal use, but importers
* of the dmabuf rely on exclusive fences for implicitly
* tracking write hazards. As any of the current fences may
* correspond to a write, we need to convert all existing
* fences on the reservation object into a single exclusive
* fence.
*/
r = __dma_resv_make_exclusive(bo->tbo.base.resv);
if (r)
goto out;
bo->prime_shared_count++;
amdgpu_bo_unreserve(bo);
return 0;
out:
@ -150,9 +90,6 @@ static void amdgpu_dma_buf_detach(struct dma_buf *dmabuf,
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
if (attach->dev->driver != adev->dev->driver && bo->prime_shared_count)
bo->prime_shared_count--;
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
}
@ -418,8 +355,6 @@ amdgpu_dma_buf_create_obj(struct drm_device *dev, struct dma_buf *dma_buf)
bo = gem_to_amdgpu_bo(gobj);
bo->allowed_domains = AMDGPU_GEM_DOMAIN_GTT;
bo->preferred_domains = AMDGPU_GEM_DOMAIN_GTT;
if (dma_buf->ops != &amdgpu_dmabuf_ops)
bo->prime_shared_count = 1;
dma_resv_unlock(resv);
return gobj;

@ -1281,7 +1281,7 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
#endif
/* Get rid of things like offb */
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, "amdgpudrmfb");
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &amdgpu_kms_driver);
if (ret)
return ret;

@ -490,7 +490,7 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
num_hw_submission, amdgpu_job_hang_limit,
timeout, sched_score, ring->name);
timeout, NULL, sched_score, ring->name);
if (r) {
DRM_ERROR("Failed to create scheduler on ring %s.\n",
ring->name);

@ -829,7 +829,8 @@ int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
break;
}
case AMDGPU_GEM_OP_SET_PLACEMENT:
if (robj->prime_shared_count && (args->value & AMDGPU_GEM_DOMAIN_VRAM)) {
if (robj->tbo.base.import_attach &&
args->value & AMDGPU_GEM_DOMAIN_VRAM) {
r = -EINVAL;
amdgpu_bo_unreserve(robj);
break;

@ -132,14 +132,11 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
struct amdgpu_gtt_node *node;
int r;
spin_lock(&mgr->lock);
if (tbo->resource && tbo->resource->mem_type != TTM_PL_TT &&
atomic64_read(&mgr->available) < num_pages) {
spin_unlock(&mgr->lock);
if (!(place->flags & TTM_PL_FLAG_TEMPORARY) &&
atomic64_add_return(num_pages, &mgr->used) > man->size) {
atomic64_sub(num_pages, &mgr->used);
return -ENOSPC;
}
atomic64_sub(num_pages, &mgr->available);
spin_unlock(&mgr->lock);
node = kzalloc(struct_size(node, base.mm_nodes, 1), GFP_KERNEL);
if (!node) {
@ -175,7 +172,8 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
kfree(node);
err_out:
atomic64_add(num_pages, &mgr->available);
if (!(place->flags & TTM_PL_FLAG_TEMPORARY))
atomic64_sub(num_pages, &mgr->used);
return r;
}
@ -198,7 +196,9 @@ static void amdgpu_gtt_mgr_del(struct ttm_resource_manager *man,
if (drm_mm_node_allocated(&node->base.mm_nodes[0]))
drm_mm_remove_node(&node->base.mm_nodes[0]);
spin_unlock(&mgr->lock);
atomic64_add(res->num_pages, &mgr->available);
if (!(res->placement & TTM_PL_FLAG_TEMPORARY))
atomic64_sub(res->num_pages, &mgr->used);
kfree(node);
}
@ -213,9 +213,8 @@ static void amdgpu_gtt_mgr_del(struct ttm_resource_manager *man,
uint64_t amdgpu_gtt_mgr_usage(struct ttm_resource_manager *man)
{
struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
s64 result = man->size - atomic64_read(&mgr->available);
return (result > 0 ? result : 0) * PAGE_SIZE;
return atomic64_read(&mgr->used) * PAGE_SIZE;
}
/**
@ -265,9 +264,8 @@ static void amdgpu_gtt_mgr_debug(struct ttm_resource_manager *man,
drm_mm_print(&mgr->mm, printer);
spin_unlock(&mgr->lock);
drm_printf(printer, "man size:%llu pages, gtt available:%lld pages, usage:%lluMB\n",
man->size, (u64)atomic64_read(&mgr->available),
amdgpu_gtt_mgr_usage(man) >> 20);
drm_printf(printer, "man size:%llu pages, gtt used:%llu pages\n",
man->size, atomic64_read(&mgr->used));
}
static const struct ttm_resource_manager_func amdgpu_gtt_mgr_func = {
@ -299,7 +297,7 @@ int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
drm_mm_init(&mgr->mm, start, size);
spin_lock_init(&mgr->lock);
atomic64_set(&mgr->available, gtt_size >> PAGE_SHIFT);
atomic64_set(&mgr->used, 0);
ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_TT, &mgr->manager);
ttm_resource_manager_set_used(man, true);

@ -617,7 +617,7 @@ void amdgpu_irq_gpu_reset_resume_helper(struct amdgpu_device *adev)
int amdgpu_irq_get(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
unsigned type)
{
if (!adev_to_drm(adev)->irq_enabled)
if (!adev->irq.installed)
return -ENOENT;
if (type >= src->num_types)
@ -647,7 +647,7 @@ int amdgpu_irq_get(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
int amdgpu_irq_put(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
unsigned type)
{
if (!adev_to_drm(adev)->irq_enabled)
if (!adev->irq.installed)
return -ENOENT;
if (type >= src->num_types)
@ -678,7 +678,7 @@ int amdgpu_irq_put(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
bool amdgpu_irq_enabled(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
unsigned type)
{
if (!adev_to_drm(adev)->irq_enabled)
if (!adev->irq.installed)
return false;
if (type >= src->num_types)

@ -196,7 +196,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
c++;
}
BUG_ON(c >= AMDGPU_BO_MAX_PLACEMENTS);
BUG_ON(c > AMDGPU_BO_MAX_PLACEMENTS);
placement->num_placement = c;
placement->placement = places;
@ -913,7 +913,7 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
return -EINVAL;
/* A shared bo cannot be migrated to VRAM */
if (bo->prime_shared_count || bo->tbo.base.import_attach) {
if (bo->tbo.base.import_attach) {
if (domain & AMDGPU_GEM_DOMAIN_GTT)
domain = AMDGPU_GEM_DOMAIN_GTT;
else

@ -100,7 +100,6 @@ struct amdgpu_bo {
struct ttm_buffer_object tbo;
struct ttm_bo_kmap_obj kmap;
u64 flags;
unsigned prime_shared_count;
/* per VM structure for page tables and with virtual addresses */
struct amdgpu_vm_bo_base *vm_bo;
/* Constant after initialization */

@ -28,6 +28,8 @@
* Christian König <christian.koenig@amd.com>
*/
#include <linux/dma-fence-chain.h>
#include "amdgpu.h"
#include "amdgpu_trace.h"
#include "amdgpu_amdkfd.h"
@ -186,6 +188,55 @@ int amdgpu_sync_vm_fence(struct amdgpu_sync *sync, struct dma_fence *fence)
return amdgpu_sync_fence(sync, fence);
}
/* Determine based on the owner and mode if we should sync to a fence or not */
static bool amdgpu_sync_test_fence(struct amdgpu_device *adev,
enum amdgpu_sync_mode mode,
void *owner, struct dma_fence *f)
{
void *fence_owner = amdgpu_sync_get_owner(f);
/* Always sync to moves, no matter what */
if (fence_owner == AMDGPU_FENCE_OWNER_UNDEFINED)
return true;
/* We only want to trigger KFD eviction fences on
* evict or move jobs. Skip KFD fences otherwise.
*/
if (fence_owner == AMDGPU_FENCE_OWNER_KFD &&
owner != AMDGPU_FENCE_OWNER_UNDEFINED)
return false;
/* Never sync to VM updates either. */
if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
owner != AMDGPU_FENCE_OWNER_UNDEFINED)
return false;
/* Ignore fences depending on the sync mode */
switch (mode) {
case AMDGPU_SYNC_ALWAYS:
return true;
case AMDGPU_SYNC_NE_OWNER:
if (amdgpu_sync_same_dev(adev, f) &&
fence_owner == owner)
return false;
break;
case AMDGPU_SYNC_EQ_OWNER:
if (amdgpu_sync_same_dev(adev, f) &&
fence_owner != owner)
return false;
break;
case AMDGPU_SYNC_EXPLICIT:
return false;
}
WARN(debug_evictions && fence_owner == AMDGPU_FENCE_OWNER_KFD,
"Adding eviction fence to sync obj");
return true;
}
/**
* amdgpu_sync_resv - sync to a reservation object
*
@ -211,67 +262,34 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, struct amdgpu_sync *sync,
/* always sync to the exclusive fence */
f = dma_resv_excl_fence(resv);
r = amdgpu_sync_fence(sync, f);
dma_fence_chain_for_each(f, f) {
struct dma_fence_chain *chain = to_dma_fence_chain(f);
if (amdgpu_sync_test_fence(adev, mode, owner, chain ?
chain->fence : f)) {
r = amdgpu_sync_fence(sync, f);
dma_fence_put(f);
if (r)
return r;
break;
}
}
flist = dma_resv_shared_list(resv);
if (!flist || r)
return r;
if (!flist)
return 0;
for (i = 0; i < flist->shared_count; ++i) {
void *fence_owner;
f = rcu_dereference_protected(flist->shared[i],
dma_resv_held(resv));
fence_owner = amdgpu_sync_get_owner(f);
/* Always sync to moves, no matter what */
if (fence_owner == AMDGPU_FENCE_OWNER_UNDEFINED) {
if (amdgpu_sync_test_fence(adev, mode, owner, f)) {
r = amdgpu_sync_fence(sync, f);
if (r)
break;
return r;
}
/* We only want to trigger KFD eviction fences on
* evict or move jobs. Skip KFD fences otherwise.
*/
if (fence_owner == AMDGPU_FENCE_OWNER_KFD &&
owner != AMDGPU_FENCE_OWNER_UNDEFINED)
continue;
/* Never sync to VM updates either. */
if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
owner != AMDGPU_FENCE_OWNER_UNDEFINED)
continue;
/* Ignore fences depending on the sync mode */
switch (mode) {
case AMDGPU_SYNC_ALWAYS:
break;
case AMDGPU_SYNC_NE_OWNER:
if (amdgpu_sync_same_dev(adev, f) &&
fence_owner == owner)
continue;
break;
case AMDGPU_SYNC_EQ_OWNER:
if (amdgpu_sync_same_dev(adev, f) &&
fence_owner != owner)
continue;
break;
case AMDGPU_SYNC_EXPLICIT:
continue;
}
WARN(debug_evictions && fence_owner == AMDGPU_FENCE_OWNER_KFD,
"Adding eviction fence to sync obj");
r = amdgpu_sync_fence(sync, f);
if (r)
break;
}
return r;
return 0;
}
/**

@ -149,14 +149,16 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
* BOs to be evicted from VRAM
*/
amdgpu_bo_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_VRAM |
AMDGPU_GEM_DOMAIN_GTT);
AMDGPU_GEM_DOMAIN_GTT |
AMDGPU_GEM_DOMAIN_CPU);
abo->placements[0].fpfn = adev->gmc.visible_vram_size >> PAGE_SHIFT;
abo->placements[0].lpfn = 0;
abo->placement.busy_placement = &abo->placements[1];
abo->placement.num_busy_placement = 1;
} else {
/* Move to GTT memory */
amdgpu_bo_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_GTT);
amdgpu_bo_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_GTT |
AMDGPU_GEM_DOMAIN_CPU);
}
break;
case TTM_PL_TT:
@ -521,7 +523,7 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
hop->fpfn = 0;
hop->lpfn = 0;
hop->mem_type = TTM_PL_TT;
hop->flags = 0;
hop->flags = TTM_PL_FLAG_TEMPORARY;
return -EMULTIHOP;
}

@ -52,7 +52,7 @@ struct amdgpu_gtt_mgr {
struct ttm_resource_manager manager;
struct drm_mm mm;
spinlock_t lock;
atomic64_t available;
atomic64_t used;
};
struct amdgpu_preempt_mgr {

@ -13,7 +13,6 @@
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_irq.h>
#include <drm/drm_managed.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_vblank.h>
@ -301,8 +300,6 @@ struct komeda_kms_dev *komeda_kms_attach(struct komeda_dev *mdev)
if (err)
goto free_component_binding;
drm->irq_enabled = true;
drm_kms_helper_poll_init(drm);
err = drm_dev_register(drm, 0);
@@ -313,7 +310,6 @@ struct komeda_kms_dev *komeda_kms_attach(struct komeda_dev *mdev)
free_interrupts:
drm_kms_helper_poll_fini(drm);
drm->irq_enabled = false;
free_component_binding:
component_unbind_all(mdev->dev, drm);
cleanup_mode_config:
@@ -331,7 +327,6 @@ void komeda_kms_detach(struct komeda_kms_dev *kms)
drm_dev_unregister(drm);
drm_kms_helper_poll_fini(drm);
drm_atomic_helper_shutdown(drm);
drm->irq_enabled = false;
component_unbind_all(mdev->dev, drm);
drm_mode_config_cleanup(drm);
komeda_kms_cleanup_private_objs(kms);


@@ -847,8 +847,6 @@ static int malidp_bind(struct device *dev)
if (ret < 0)
goto irq_init_fail;
drm->irq_enabled = true;
ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
if (ret < 0) {
DRM_ERROR("failed to initialise vblank\n");
@@ -874,7 +872,6 @@ static int malidp_bind(struct device *dev)
vblank_fail:
malidp_se_irq_fini(hwdev);
malidp_de_irq_fini(hwdev);
drm->irq_enabled = false;
irq_init_fail:
drm_atomic_helper_shutdown(drm);
component_unbind_all(dev, drm);
@@ -909,7 +906,6 @@ static void malidp_unbind(struct device *dev)
drm_atomic_helper_shutdown(drm);
malidp_se_irq_fini(hwdev);
malidp_de_irq_fini(hwdev);
drm->irq_enabled = false;
component_unbind_all(dev, drm);
of_node_put(malidp->crtc.port);
malidp->crtc.port = NULL;


@@ -95,7 +95,7 @@ static int armada_drm_bind(struct device *dev)
}
/* Remove early framebuffers */
ret = drm_aperture_remove_framebuffers(false, "armada-drm-fb");
ret = drm_aperture_remove_framebuffers(false, &armada_drm_driver);
if (ret) {
dev_err(dev, "[" DRM_NAME ":%s] can't kick out simple-fb: %d\n",
__func__, ret);
@@ -130,8 +130,6 @@ static int armada_drm_bind(struct device *dev)
if (ret)
goto err_comp;
priv->drm.irq_enabled = true;
drm_mode_config_reset(&priv->drm);
ret = armada_fbdev_init(&priv->drm);


@@ -247,8 +247,6 @@ static void armada_drm_overlay_plane_atomic_disable(struct drm_plane *plane,
}
static const struct drm_plane_helper_funcs armada_overlay_plane_helper_funcs = {
.prepare_fb = armada_drm_plane_prepare_fb,
.cleanup_fb = armada_drm_plane_cleanup_fb,
.atomic_check = armada_drm_plane_atomic_check,
.atomic_update = armada_drm_overlay_plane_atomic_update,
.atomic_disable = armada_drm_overlay_plane_atomic_disable,


@@ -78,33 +78,6 @@ void armada_drm_plane_calc(struct drm_plane_state *state, u32 addrs[2][3],
}
}
int armada_drm_plane_prepare_fb(struct drm_plane *plane,
struct drm_plane_state *state)
{
DRM_DEBUG_KMS("[PLANE:%d:%s] [FB:%d]\n",
plane->base.id, plane->name,
state->fb ? state->fb->base.id : 0);
/*
* Take a reference on the new framebuffer - we want to
* hold on to it while the hardware is displaying it.
*/
if (state->fb)
drm_framebuffer_get(state->fb);
return 0;
}
void armada_drm_plane_cleanup_fb(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
DRM_DEBUG_KMS("[PLANE:%d:%s] [FB:%d]\n",
plane->base.id, plane->name,
old_state->fb ? old_state->fb->base.id : 0);
if (old_state->fb)
drm_framebuffer_put(old_state->fb);
}
int armada_drm_plane_atomic_check(struct drm_plane *plane,
struct drm_atomic_state *state)
{
@@ -282,8 +255,6 @@ static void armada_drm_primary_plane_atomic_disable(struct drm_plane *plane,
}
static const struct drm_plane_helper_funcs armada_primary_plane_helper_funcs = {
.prepare_fb = armada_drm_plane_prepare_fb,
.cleanup_fb = armada_drm_plane_cleanup_fb,
.atomic_check = armada_drm_plane_atomic_check,
.atomic_update = armada_drm_primary_plane_atomic_update,
.atomic_disable = armada_drm_primary_plane_atomic_disable,


@@ -21,8 +21,6 @@ struct armada_plane_state {
void armada_drm_plane_calc(struct drm_plane_state *state, u32 addrs[2][3],
u16 pitches[3], bool interlaced);
int armada_drm_plane_prepare_fb(struct drm_plane *plane,
struct drm_plane_state *state);
void armada_drm_plane_cleanup_fb(struct drm_plane *plane,
struct drm_plane_state *old_state);
int armada_drm_plane_atomic_check(struct drm_plane *plane,


@@ -220,7 +220,6 @@ static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
.enable = aspeed_gfx_pipe_enable,
.disable = aspeed_gfx_pipe_disable,
.update = aspeed_gfx_pipe_update,
.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
.enable_vblank = aspeed_gfx_enable_vblank,
.disable_vblank = aspeed_gfx_disable_vblank,
};


@@ -100,7 +100,7 @@ static int ast_remove_conflicting_framebuffers(struct pci_dev *pdev)
primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
#endif
return drm_aperture_remove_conflicting_framebuffers(base, size, primary, "astdrmfb");
return drm_aperture_remove_conflicting_framebuffers(base, size, primary, &ast_driver);
}
static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)


@@ -612,8 +612,7 @@ ast_primary_plane_helper_atomic_disable(struct drm_plane *plane,
}
static const struct drm_plane_helper_funcs ast_primary_plane_helper_funcs = {
.prepare_fb = drm_gem_vram_plane_helper_prepare_fb,
.cleanup_fb = drm_gem_vram_plane_helper_cleanup_fb,
DRM_GEM_VRAM_PLANE_HELPER_FUNCS,
.atomic_check = ast_primary_plane_helper_atomic_check,
.atomic_update = ast_primary_plane_helper_atomic_update,
.atomic_disable = ast_primary_plane_helper_atomic_disable,
@@ -1293,6 +1292,18 @@ static enum drm_mode_status ast_mode_valid(struct drm_connector *connector,
return flags;
}
static enum drm_connector_status ast_connector_detect(struct drm_connector *connector,
bool force)
{
int r;
r = ast_get_modes(connector);
if (r < 0)
return connector_status_disconnected;
return connector_status_connected;
}
static void ast_connector_destroy(struct drm_connector *connector)
{
struct ast_connector *ast_connector = to_ast_connector(connector);
@@ -1307,6 +1318,7 @@ static const struct drm_connector_helper_funcs ast_connector_helper_funcs = {
static const struct drm_connector_funcs ast_connector_funcs = {
.reset = drm_atomic_helper_connector_reset,
.detect = ast_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = ast_connector_destroy,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
@@ -1334,7 +1346,8 @@ static int ast_connector_init(struct drm_device *dev)
connector->interlace_allowed = 0;
connector->doublescan_allowed = 0;
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
connector->polled = DRM_CONNECTOR_POLL_CONNECT |
DRM_CONNECTOR_POLL_DISCONNECT;
drm_connector_attach_encoder(connector, encoder);
@@ -1403,6 +1416,8 @@ int ast_mode_config_init(struct ast_private *ast)
drm_mode_config_reset(dev);
drm_kms_helper_poll_init(dev);
return 0;
}


@@ -1,11 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
config DRM_BOCHS
tristate "DRM Support for bochs dispi vga interface (qemu stdvga)"
depends on DRM && PCI && MMU
select DRM_KMS_HELPER
select DRM_VRAM_HELPER
select DRM_TTM
select DRM_TTM_HELPER
help
Choose this option for qemu.
If M is selected the module will be called bochs-drm.


@@ -1,4 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
bochs-drm-y := bochs_drv.o bochs_mm.o bochs_kms.o bochs_hw.o
obj-$(CONFIG_DRM_BOCHS) += bochs-drm.o


@@ -1,98 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/io.h>
#include <linux/console.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_simple_kms_helper.h>
/* ---------------------------------------------------------------------- */
#define VBE_DISPI_IOPORT_INDEX 0x01CE
#define VBE_DISPI_IOPORT_DATA 0x01CF
#define VBE_DISPI_INDEX_ID 0x0
#define VBE_DISPI_INDEX_XRES 0x1
#define VBE_DISPI_INDEX_YRES 0x2
#define VBE_DISPI_INDEX_BPP 0x3
#define VBE_DISPI_INDEX_ENABLE 0x4
#define VBE_DISPI_INDEX_BANK 0x5
#define VBE_DISPI_INDEX_VIRT_WIDTH 0x6
#define VBE_DISPI_INDEX_VIRT_HEIGHT 0x7
#define VBE_DISPI_INDEX_X_OFFSET 0x8
#define VBE_DISPI_INDEX_Y_OFFSET 0x9
#define VBE_DISPI_INDEX_VIDEO_MEMORY_64K 0xa
#define VBE_DISPI_ID0 0xB0C0
#define VBE_DISPI_ID1 0xB0C1
#define VBE_DISPI_ID2 0xB0C2
#define VBE_DISPI_ID3 0xB0C3
#define VBE_DISPI_ID4 0xB0C4
#define VBE_DISPI_ID5 0xB0C5
#define VBE_DISPI_DISABLED 0x00
#define VBE_DISPI_ENABLED 0x01
#define VBE_DISPI_GETCAPS 0x02
#define VBE_DISPI_8BIT_DAC 0x20
#define VBE_DISPI_LFB_ENABLED 0x40
#define VBE_DISPI_NOCLEARMEM 0x80
/* ---------------------------------------------------------------------- */
enum bochs_types {
BOCHS_QEMU_STDVGA,
BOCHS_UNKNOWN,
};
struct bochs_device {
/* hw */
void __iomem *mmio;
int ioports;
void __iomem *fb_map;
unsigned long fb_base;
unsigned long fb_size;
unsigned long qext_size;
/* mode */
u16 xres;
u16 yres;
u16 yres_virtual;
u32 stride;
u32 bpp;
struct edid *edid;
/* drm */
struct drm_device *dev;
struct drm_simple_display_pipe pipe;
struct drm_connector connector;
};
/* ---------------------------------------------------------------------- */
/* bochs_hw.c */
int bochs_hw_init(struct drm_device *dev);
void bochs_hw_fini(struct drm_device *dev);
void bochs_hw_blank(struct bochs_device *bochs, bool blank);
void bochs_hw_setmode(struct bochs_device *bochs,
struct drm_display_mode *mode);
void bochs_hw_setformat(struct bochs_device *bochs,
const struct drm_format_info *format);
void bochs_hw_setbase(struct bochs_device *bochs,
int x, int y, int stride, u64 addr);
int bochs_hw_load_edid(struct bochs_device *bochs);
/* bochs_mm.c */
int bochs_mm_init(struct bochs_device *bochs);
void bochs_mm_fini(struct bochs_device *bochs);
/* bochs_kms.c */
int bochs_kms_init(struct bochs_device *bochs);
/* bochs_fbdev.c */
extern const struct drm_mode_config_funcs bochs_mode_funcs;


@@ -1,205 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
*/
#include <linux/module.h>
#include <linux/pci.h>
#include <drm/drm_drv.h>
#include <drm/drm_aperture.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_managed.h>
#include "bochs.h"
static int bochs_modeset = -1;
module_param_named(modeset, bochs_modeset, int, 0444);
MODULE_PARM_DESC(modeset, "enable/disable kernel modesetting");
/* ---------------------------------------------------------------------- */
/* drm interface */
static void bochs_unload(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
bochs_mm_fini(bochs);
}
static int bochs_load(struct drm_device *dev)
{
struct bochs_device *bochs;
int ret;
bochs = drmm_kzalloc(dev, sizeof(*bochs), GFP_KERNEL);
if (bochs == NULL)
return -ENOMEM;
dev->dev_private = bochs;
bochs->dev = dev;
ret = bochs_hw_init(dev);
if (ret)
goto err;
ret = bochs_mm_init(bochs);
if (ret)
goto err;
ret = bochs_kms_init(bochs);
if (ret)
goto err;
return 0;
err:
bochs_unload(dev);
return ret;
}
DEFINE_DRM_GEM_FOPS(bochs_fops);
static const struct drm_driver bochs_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
.fops = &bochs_fops,
.name = "bochs-drm",
.desc = "bochs dispi vga interface (qemu stdvga)",
.date = "20130925",
.major = 1,
.minor = 0,
DRM_GEM_VRAM_DRIVER,
.release = bochs_unload,
};
/* ---------------------------------------------------------------------- */
/* pm interface */
#ifdef CONFIG_PM_SLEEP
static int bochs_pm_suspend(struct device *dev)
{
struct drm_device *drm_dev = dev_get_drvdata(dev);
return drm_mode_config_helper_suspend(drm_dev);
}
static int bochs_pm_resume(struct device *dev)
{
struct drm_device *drm_dev = dev_get_drvdata(dev);
return drm_mode_config_helper_resume(drm_dev);
}
#endif
static const struct dev_pm_ops bochs_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(bochs_pm_suspend,
bochs_pm_resume)
};
/* ---------------------------------------------------------------------- */
/* pci interface */
static int bochs_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
struct drm_device *dev;
unsigned long fbsize;
int ret;
fbsize = pci_resource_len(pdev, 0);
if (fbsize < 4 * 1024 * 1024) {
DRM_ERROR("less than 4 MB video memory, ignoring device\n");
return -ENOMEM;
}
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, "bochsdrmfb");
if (ret)
return ret;
dev = drm_dev_alloc(&bochs_driver, &pdev->dev);
if (IS_ERR(dev))
return PTR_ERR(dev);
ret = pci_enable_device(pdev);
if (ret)
goto err_free_dev;
pci_set_drvdata(pdev, dev);
ret = bochs_load(dev);
if (ret)
goto err_free_dev;
ret = drm_dev_register(dev, 0);
if (ret)
goto err_unload;
drm_fbdev_generic_setup(dev, 32);
return ret;
err_unload:
bochs_unload(dev);
err_free_dev:
drm_dev_put(dev);
return ret;
}
static void bochs_pci_remove(struct pci_dev *pdev)
{
struct drm_device *dev = pci_get_drvdata(pdev);
drm_dev_unplug(dev);
drm_atomic_helper_shutdown(dev);
bochs_hw_fini(dev);
drm_dev_put(dev);
}
static const struct pci_device_id bochs_pci_tbl[] = {
{
.vendor = 0x1234,
.device = 0x1111,
.subvendor = PCI_SUBVENDOR_ID_REDHAT_QUMRANET,
.subdevice = PCI_SUBDEVICE_ID_QEMU,
.driver_data = BOCHS_QEMU_STDVGA,
},
{
.vendor = 0x1234,
.device = 0x1111,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.driver_data = BOCHS_UNKNOWN,
},
{ /* end of list */ }
};
static struct pci_driver bochs_pci_driver = {
.name = "bochs-drm",
.id_table = bochs_pci_tbl,
.probe = bochs_pci_probe,
.remove = bochs_pci_remove,
.driver.pm = &bochs_pm_ops,
};
/* ---------------------------------------------------------------------- */
/* module init/exit */
static int __init bochs_init(void)
{
if (vgacon_text_force() && bochs_modeset == -1)
return -EINVAL;
if (bochs_modeset == 0)
return -EINVAL;
return pci_register_driver(&bochs_pci_driver);
}
static void __exit bochs_exit(void)
{
pci_unregister_driver(&bochs_pci_driver);
}
module_init(bochs_init);
module_exit(bochs_exit);
MODULE_DEVICE_TABLE(pci, bochs_pci_tbl);
MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>");
MODULE_LICENSE("GPL");


@@ -1,323 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
*/
#include <linux/pci.h>
#include <drm/drm_drv.h>
#include <drm/drm_fourcc.h>
#include <video/vga.h>
#include "bochs.h"
/* ---------------------------------------------------------------------- */
static void bochs_vga_writeb(struct bochs_device *bochs, u16 ioport, u8 val)
{
if (WARN_ON(ioport < 0x3c0 || ioport > 0x3df))
return;
if (bochs->mmio) {
int offset = ioport - 0x3c0 + 0x400;
writeb(val, bochs->mmio + offset);
} else {
outb(val, ioport);
}
}
static u8 bochs_vga_readb(struct bochs_device *bochs, u16 ioport)
{
if (WARN_ON(ioport < 0x3c0 || ioport > 0x3df))
return 0xff;
if (bochs->mmio) {
int offset = ioport - 0x3c0 + 0x400;
return readb(bochs->mmio + offset);
} else {
return inb(ioport);
}
}
static u16 bochs_dispi_read(struct bochs_device *bochs, u16 reg)
{
u16 ret = 0;
if (bochs->mmio) {
int offset = 0x500 + (reg << 1);
ret = readw(bochs->mmio + offset);
} else {
outw(reg, VBE_DISPI_IOPORT_INDEX);
ret = inw(VBE_DISPI_IOPORT_DATA);
}
return ret;
}
static void bochs_dispi_write(struct bochs_device *bochs, u16 reg, u16 val)
{
if (bochs->mmio) {
int offset = 0x500 + (reg << 1);
writew(val, bochs->mmio + offset);
} else {
outw(reg, VBE_DISPI_IOPORT_INDEX);
outw(val, VBE_DISPI_IOPORT_DATA);
}
}
static void bochs_hw_set_big_endian(struct bochs_device *bochs)
{
if (bochs->qext_size < 8)
return;
writel(0xbebebebe, bochs->mmio + 0x604);
}
static void bochs_hw_set_little_endian(struct bochs_device *bochs)
{
if (bochs->qext_size < 8)
return;
writel(0x1e1e1e1e, bochs->mmio + 0x604);
}
#ifdef __BIG_ENDIAN
#define bochs_hw_set_native_endian(_b) bochs_hw_set_big_endian(_b)
#else
#define bochs_hw_set_native_endian(_b) bochs_hw_set_little_endian(_b)
#endif
static int bochs_get_edid_block(void *data, u8 *buf,
unsigned int block, size_t len)
{
struct bochs_device *bochs = data;
size_t i, start = block * EDID_LENGTH;
if (start + len > 0x400 /* vga register offset */)
return -1;
for (i = 0; i < len; i++) {
buf[i] = readb(bochs->mmio + start + i);
}
return 0;
}
int bochs_hw_load_edid(struct bochs_device *bochs)
{
u8 header[8];
if (!bochs->mmio)
return -1;
/* check the header to detect whether EDID support is enabled in qemu */
bochs_get_edid_block(bochs, header, 0, ARRAY_SIZE(header));
if (drm_edid_header_is_valid(header) != 8)
return -1;
kfree(bochs->edid);
bochs->edid = drm_do_get_edid(&bochs->connector,
bochs_get_edid_block, bochs);
if (bochs->edid == NULL)
return -1;
return 0;
}
int bochs_hw_init(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
struct pci_dev *pdev = to_pci_dev(dev->dev);
unsigned long addr, size, mem, ioaddr, iosize;
u16 id;
if (pdev->resource[2].flags & IORESOURCE_MEM) {
/* mmio bar with vga and bochs registers present */
if (pci_request_region(pdev, 2, "bochs-drm") != 0) {
DRM_ERROR("Cannot request mmio region\n");
return -EBUSY;
}
ioaddr = pci_resource_start(pdev, 2);
iosize = pci_resource_len(pdev, 2);
bochs->mmio = ioremap(ioaddr, iosize);
if (bochs->mmio == NULL) {
DRM_ERROR("Cannot map mmio region\n");
return -ENOMEM;
}
} else {
ioaddr = VBE_DISPI_IOPORT_INDEX;
iosize = 2;
if (!request_region(ioaddr, iosize, "bochs-drm")) {
DRM_ERROR("Cannot request ioports\n");
return -EBUSY;
}
bochs->ioports = 1;
}
id = bochs_dispi_read(bochs, VBE_DISPI_INDEX_ID);
mem = bochs_dispi_read(bochs, VBE_DISPI_INDEX_VIDEO_MEMORY_64K)
* 64 * 1024;
if ((id & 0xfff0) != VBE_DISPI_ID0) {
DRM_ERROR("ID mismatch\n");
return -ENODEV;
}
if ((pdev->resource[0].flags & IORESOURCE_MEM) == 0)
return -ENODEV;
addr = pci_resource_start(pdev, 0);
size = pci_resource_len(pdev, 0);
if (addr == 0)
return -ENODEV;
if (size != mem) {
DRM_ERROR("Size mismatch: pci=%ld, bochs=%ld\n",
size, mem);
size = min(size, mem);
}
if (pci_request_region(pdev, 0, "bochs-drm") != 0)
DRM_WARN("Cannot request framebuffer, boot fb still active?\n");
bochs->fb_map = ioremap(addr, size);
if (bochs->fb_map == NULL) {
DRM_ERROR("Cannot map framebuffer\n");
return -ENOMEM;
}
bochs->fb_base = addr;
bochs->fb_size = size;
DRM_INFO("Found bochs VGA, ID 0x%x.\n", id);
DRM_INFO("Framebuffer size %ld kB @ 0x%lx, %s @ 0x%lx.\n",
size / 1024, addr,
bochs->ioports ? "ioports" : "mmio",
ioaddr);
if (bochs->mmio && pdev->revision >= 2) {
bochs->qext_size = readl(bochs->mmio + 0x600);
if (bochs->qext_size < 4 || bochs->qext_size > iosize) {
bochs->qext_size = 0;
goto noext;
}
DRM_DEBUG("Found qemu ext regs, size %ld\n",
bochs->qext_size);
bochs_hw_set_native_endian(bochs);
}
noext:
return 0;
}
void bochs_hw_fini(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
/* TODO: shoot down existing vram mappings */
if (bochs->mmio)
iounmap(bochs->mmio);
if (bochs->ioports)
release_region(VBE_DISPI_IOPORT_INDEX, 2);
if (bochs->fb_map)
iounmap(bochs->fb_map);
pci_release_regions(to_pci_dev(dev->dev));
kfree(bochs->edid);
}
void bochs_hw_blank(struct bochs_device *bochs, bool blank)
{
DRM_DEBUG_DRIVER("hw_blank %d\n", blank);
/* discard ar_flip_flop */
(void)bochs_vga_readb(bochs, VGA_IS1_RC);
/* blank or unblank; we need only update index and set 0x20 */
bochs_vga_writeb(bochs, VGA_ATT_W, blank ? 0 : 0x20);
}
void bochs_hw_setmode(struct bochs_device *bochs,
struct drm_display_mode *mode)
{
int idx;
if (!drm_dev_enter(bochs->dev, &idx))
return;
bochs->xres = mode->hdisplay;
bochs->yres = mode->vdisplay;
bochs->bpp = 32;
bochs->stride = mode->hdisplay * (bochs->bpp / 8);
bochs->yres_virtual = bochs->fb_size / bochs->stride;
DRM_DEBUG_DRIVER("%dx%d @ %d bpp, vy %d\n",
bochs->xres, bochs->yres, bochs->bpp,
bochs->yres_virtual);
bochs_hw_blank(bochs, false);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_ENABLE, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_BPP, bochs->bpp);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_XRES, bochs->xres);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_YRES, bochs->yres);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_BANK, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_VIRT_WIDTH, bochs->xres);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_VIRT_HEIGHT,
bochs->yres_virtual);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_X_OFFSET, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_Y_OFFSET, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_ENABLE,
VBE_DISPI_ENABLED | VBE_DISPI_LFB_ENABLED);
drm_dev_exit(idx);
}
void bochs_hw_setformat(struct bochs_device *bochs,
const struct drm_format_info *format)
{
int idx;
if (!drm_dev_enter(bochs->dev, &idx))
return;
DRM_DEBUG_DRIVER("format %c%c%c%c\n",
(format->format >> 0) & 0xff,
(format->format >> 8) & 0xff,
(format->format >> 16) & 0xff,
(format->format >> 24) & 0xff);
switch (format->format) {
case DRM_FORMAT_XRGB8888:
bochs_hw_set_little_endian(bochs);
break;
case DRM_FORMAT_BGRX8888:
bochs_hw_set_big_endian(bochs);
break;
default:
/* should not happen */
DRM_ERROR("%s: Huh? Got framebuffer format 0x%x",
__func__, format->format);
break;
}
drm_dev_exit(idx);
}
void bochs_hw_setbase(struct bochs_device *bochs,
int x, int y, int stride, u64 addr)
{
unsigned long offset;
unsigned int vx, vy, vwidth, idx;
if (!drm_dev_enter(bochs->dev, &idx))
return;
bochs->stride = stride;
offset = (unsigned long)addr +
y * bochs->stride +
x * (bochs->bpp / 8);
vy = offset / bochs->stride;
vx = (offset % bochs->stride) * 8 / bochs->bpp;
vwidth = stride * 8 / bochs->bpp;
DRM_DEBUG_DRIVER("x %d, y %d, addr %llx -> offset %lx, vx %d, vy %d\n",
x, y, addr, offset, vx, vy);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_VIRT_WIDTH, vwidth);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_X_OFFSET, vx);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_Y_OFFSET, vy);
drm_dev_exit(idx);
}


@@ -1,178 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
*/
#include <linux/moduleparam.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_probe_helper.h>
#include "bochs.h"
static int defx = 1024;
static int defy = 768;
module_param(defx, int, 0444);
module_param(defy, int, 0444);
MODULE_PARM_DESC(defx, "default x resolution");
MODULE_PARM_DESC(defy, "default y resolution");
/* ---------------------------------------------------------------------- */
static const uint32_t bochs_formats[] = {
DRM_FORMAT_XRGB8888,
DRM_FORMAT_BGRX8888,
};
static void bochs_plane_update(struct bochs_device *bochs,
struct drm_plane_state *state)
{
struct drm_gem_vram_object *gbo;
s64 gpu_addr;
if (!state->fb || !bochs->stride)
return;
gbo = drm_gem_vram_of_gem(state->fb->obj[0]);
gpu_addr = drm_gem_vram_offset(gbo);
if (WARN_ON_ONCE(gpu_addr < 0))
return; /* Bug: we didn't pin the BO to VRAM in prepare_fb. */
bochs_hw_setbase(bochs,
state->crtc_x,
state->crtc_y,
state->fb->pitches[0],
state->fb->offsets[0] + gpu_addr);
bochs_hw_setformat(bochs, state->fb->format);
}
static void bochs_pipe_enable(struct drm_simple_display_pipe *pipe,
struct drm_crtc_state *crtc_state,
struct drm_plane_state *plane_state)
{
struct bochs_device *bochs = pipe->crtc.dev->dev_private;
bochs_hw_setmode(bochs, &crtc_state->mode);
bochs_plane_update(bochs, plane_state);
}
static void bochs_pipe_disable(struct drm_simple_display_pipe *pipe)
{
struct bochs_device *bochs = pipe->crtc.dev->dev_private;
bochs_hw_blank(bochs, true);
}
static void bochs_pipe_update(struct drm_simple_display_pipe *pipe,
struct drm_plane_state *old_state)
{
struct bochs_device *bochs = pipe->crtc.dev->dev_private;
bochs_plane_update(bochs, pipe->plane.state);
}
static const struct drm_simple_display_pipe_funcs bochs_pipe_funcs = {
.enable = bochs_pipe_enable,
.disable = bochs_pipe_disable,
.update = bochs_pipe_update,
.prepare_fb = drm_gem_vram_simple_display_pipe_prepare_fb,
.cleanup_fb = drm_gem_vram_simple_display_pipe_cleanup_fb,
};
static int bochs_connector_get_modes(struct drm_connector *connector)
{
struct bochs_device *bochs =
container_of(connector, struct bochs_device, connector);
int count = 0;
if (bochs->edid)
count = drm_add_edid_modes(connector, bochs->edid);
if (!count) {
count = drm_add_modes_noedid(connector, 8192, 8192);
drm_set_preferred_mode(connector, defx, defy);
}
return count;
}
static const struct drm_connector_helper_funcs bochs_connector_connector_helper_funcs = {
.get_modes = bochs_connector_get_modes,
};
static const struct drm_connector_funcs bochs_connector_connector_funcs = {
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = drm_connector_cleanup,
.reset = drm_atomic_helper_connector_reset,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static void bochs_connector_init(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
struct drm_connector *connector = &bochs->connector;
drm_connector_init(dev, connector, &bochs_connector_connector_funcs,
DRM_MODE_CONNECTOR_VIRTUAL);
drm_connector_helper_add(connector,
&bochs_connector_connector_helper_funcs);
bochs_hw_load_edid(bochs);
if (bochs->edid) {
DRM_INFO("Found EDID data blob.\n");
drm_connector_attach_edid_property(connector);
drm_connector_update_edid_property(connector, bochs->edid);
}
}
static struct drm_framebuffer *
bochs_gem_fb_create(struct drm_device *dev, struct drm_file *file,
const struct drm_mode_fb_cmd2 *mode_cmd)
{
if (mode_cmd->pixel_format != DRM_FORMAT_XRGB8888 &&
mode_cmd->pixel_format != DRM_FORMAT_BGRX8888)
return ERR_PTR(-EINVAL);
return drm_gem_fb_create(dev, file, mode_cmd);
}
const struct drm_mode_config_funcs bochs_mode_funcs = {
.fb_create = bochs_gem_fb_create,
.mode_valid = drm_vram_helper_mode_valid,
.atomic_check = drm_atomic_helper_check,
.atomic_commit = drm_atomic_helper_commit,
};
int bochs_kms_init(struct bochs_device *bochs)
{
int ret;
ret = drmm_mode_config_init(bochs->dev);
if (ret)
return ret;
bochs->dev->mode_config.max_width = 8192;
bochs->dev->mode_config.max_height = 8192;
bochs->dev->mode_config.fb_base = bochs->fb_base;
bochs->dev->mode_config.preferred_depth = 24;
bochs->dev->mode_config.prefer_shadow = 0;
bochs->dev->mode_config.prefer_shadow_fbdev = 1;
bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
bochs->dev->mode_config.funcs = &bochs_mode_funcs;
bochs_connector_init(bochs->dev);
drm_simple_display_pipe_init(bochs->dev,
&bochs->pipe,
&bochs_pipe_funcs,
bochs_formats,
ARRAY_SIZE(bochs_formats),
NULL,
&bochs->connector);
drm_mode_config_reset(bochs->dev);
return 0;
}


@@ -1,24 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
*/
#include "bochs.h"
/* ---------------------------------------------------------------------- */
int bochs_mm_init(struct bochs_device *bochs)
{
struct drm_vram_mm *vmm;
vmm = drm_vram_helper_alloc_mm(bochs->dev, bochs->fb_base,
bochs->fb_size);
return PTR_ERR_OR_ZERO(vmm);
}
void bochs_mm_fini(struct bochs_device *bochs)
{
if (!bochs->dev->vram_mm)
return;
drm_vram_helper_release_mm(bochs->dev);
}


@@ -303,6 +303,7 @@ config DRM_TI_SN65DSI86
select DRM_PANEL
select DRM_MIPI_DSI
select AUXILIARY_BUS
select DRM_DP_AUX_BUS
help
Texas Instruments SN65DSI86 DSI to eDP Bridge driver


@@ -1730,7 +1730,6 @@ static int __maybe_unused anx7625_suspend(struct device *dev)
if (!pm_runtime_enabled(dev) || !pm_runtime_suspended(dev)) {
anx7625_runtime_pm_suspend(dev);
disable_irq(ctx->pdata.intp_irq);
flush_workqueue(ctx->workqueue);
}
return 0;
@@ -1790,7 +1789,8 @@ static int anx7625_i2c_probe(struct i2c_client *client,
platform->pdata.intp_irq = client->irq;
if (platform->pdata.intp_irq) {
INIT_WORK(&platform->work, anx7625_work_func);
platform->workqueue = create_workqueue("anx7625_work");
platform->workqueue = alloc_workqueue("anx7625_work",
WQ_FREEZABLE | WQ_MEM_RECLAIM, 1);
if (!platform->workqueue) {
DRM_DEV_ERROR(dev, "fail to create work queue\n");
ret = -ENOMEM;
@@ -1874,6 +1874,7 @@ static const struct of_device_id anx_match_table[] = {
{.compatible = "analogix,anx7625",},
{},
};
MODULE_DEVICE_TABLE(of, anx_match_table);
static struct i2c_driver anx7625_driver = {
.driver = {


@@ -48,12 +48,6 @@ enum transfer_direction {
#define NWL_DSI_ENDPOINT_LCDIF 0
#define NWL_DSI_ENDPOINT_DCSS 1
struct nwl_dsi_plat_clk_config {
const char *id;
struct clk *clk;
bool present;
};
struct nwl_dsi_transfer {
const struct mipi_dsi_msg *msg;
struct mipi_dsi_packet packet;


@@ -137,7 +137,6 @@ enum sn65dsi83_model {
struct sn65dsi83 {
struct drm_bridge bridge;
struct drm_display_mode mode;
struct device *dev;
struct regmap *regmap;
struct device_node *host_node;
@@ -147,8 +146,6 @@ struct sn65dsi83 {
int dsi_lanes;
bool lvds_dual_link;
bool lvds_dual_link_even_odd_swap;
bool lvds_format_24bpp;
bool lvds_format_jeida;
};
static const struct regmap_range sn65dsi83_readable_ranges[] = {
@@ -291,7 +288,8 @@ static int sn65dsi83_attach(struct drm_bridge *bridge,
return ret;
}
static void sn65dsi83_pre_enable(struct drm_bridge *bridge)
static void sn65dsi83_atomic_pre_enable(struct drm_bridge *bridge,
struct drm_bridge_state *old_bridge_state)
{
struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
@@ -306,7 +304,8 @@ static void sn65dsi83_pre_enable(struct drm_bridge *bridge)
usleep_range(1000, 1100);
}
static u8 sn65dsi83_get_lvds_range(struct sn65dsi83 *ctx)
static u8 sn65dsi83_get_lvds_range(struct sn65dsi83 *ctx,
const struct drm_display_mode *mode)
{
/*
* The encoding of the LVDS_CLK_RANGE is as follows:
@@ -322,7 +321,7 @@ * the clock to 25..154 MHz, the range calculation can be simplified
* the clock to 25..154 MHz, the range calculation can be simplified
* as follows:
*/
int mode_clock = ctx->mode.clock;
int mode_clock = mode->clock;
if (ctx->lvds_dual_link)
mode_clock /= 2;
@@ -330,7 +329,8 @@ static u8 sn65dsi83_get_dsi_range(struct sn65dsi83 *ctx)
return (mode_clock - 12500) / 25000;
}
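/*
 * Worked example (illustrative): a 1080p mode with a 148500 kHz pixel
 * clock on a dual-link panel is halved to 74250 kHz, so the register
 * value is (74250 - 12500) / 25000 = 2, i.e. the 62.5..87.5 MHz range.
 */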
static u8 sn65dsi83_get_dsi_range(struct sn65dsi83 *ctx)
static u8 sn65dsi83_get_dsi_range(struct sn65dsi83 *ctx,
const struct drm_display_mode *mode)
{
/*
* The encoding of the CHA_DSI_CLK_RANGE is as follows:
@@ -346,7 +346,7 @@ static u8 sn65dsi83_get_dsi_range(struct sn65dsi83 *ctx)
* DSI_CLK = mode clock * bpp / dsi_data_lanes / 2
* the 2 is there because the bus is DDR.
*/
return DIV_ROUND_UP(clamp((unsigned int)ctx->mode.clock *
return DIV_ROUND_UP(clamp((unsigned int)mode->clock *
mipi_dsi_pixel_format_to_bpp(ctx->dsi->format) /
ctx->dsi_lanes / 2, 40000U, 500000U), 5000U);
}
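/*
 * Worked example (illustrative): a 148500 kHz pixel clock with RGB888
 * (24 bpp) over 4 DSI lanes gives a DSI clock of
 * 148500 * 24 / 4 / 2 = 445500 kHz, within the 40..500 MHz clamp, so
 * the register value is DIV_ROUND_UP(445500, 5000) = 90.
 */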
@@ -364,23 +364,73 @@ static u8 sn65dsi83_get_dsi_div(struct sn65dsi83 *ctx)
return dsi_div - 1;
}
static void sn65dsi83_enable(struct drm_bridge *bridge)
static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
struct drm_bridge_state *old_bridge_state)
{
struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
struct drm_atomic_state *state = old_bridge_state->base.state;
const struct drm_bridge_state *bridge_state;
const struct drm_crtc_state *crtc_state;
const struct drm_display_mode *mode;
struct drm_connector *connector;
struct drm_crtc *crtc;
bool lvds_format_24bpp;
bool lvds_format_jeida;
unsigned int pval;
__le16 le16val;
u16 val;
int ret;
/* Get the LVDS format from the bridge state. */
bridge_state = drm_atomic_get_new_bridge_state(state, bridge);
switch (bridge_state->output_bus_cfg.format) {
case MEDIA_BUS_FMT_RGB666_1X7X3_SPWG:
lvds_format_24bpp = false;
lvds_format_jeida = true;
break;
case MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA:
lvds_format_24bpp = true;
lvds_format_jeida = true;
break;
case MEDIA_BUS_FMT_RGB888_1X7X4_SPWG:
lvds_format_24bpp = true;
lvds_format_jeida = false;
break;
default:
/*
* Some bridges still don't set the correct
* LVDS bus pixel format, use SPWG24 default
* format until those are fixed.
*/
lvds_format_24bpp = true;
lvds_format_jeida = false;
dev_warn(ctx->dev,
"Unsupported LVDS bus format 0x%04x, please check output bridge driver. Falling back to SPWG24.\n",
bridge_state->output_bus_cfg.format);
break;
}
/*
* Retrieve the CRTC adjusted mode. This requires a little dance to go
* from the bridge to the encoder, to the connector and to the CRTC.
*/
connector = drm_atomic_get_new_connector_for_encoder(state,
bridge->encoder);
crtc = drm_atomic_get_new_connector_state(state, connector)->crtc;
crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
mode = &crtc_state->adjusted_mode;
/* Clear reset, disable PLL */
regmap_write(ctx->regmap, REG_RC_RESET, 0x00);
regmap_write(ctx->regmap, REG_RC_PLL_EN, 0x00);
/* Reference clock derived from DSI link clock. */
regmap_write(ctx->regmap, REG_RC_LVDS_PLL,
REG_RC_LVDS_PLL_LVDS_CLK_RANGE(sn65dsi83_get_lvds_range(ctx)) |
REG_RC_LVDS_PLL_LVDS_CLK_RANGE(sn65dsi83_get_lvds_range(ctx, mode)) |
REG_RC_LVDS_PLL_HS_CLK_SRC_DPHY);
regmap_write(ctx->regmap, REG_DSI_CLK,
REG_DSI_CLK_CHA_DSI_CLK_RANGE(sn65dsi83_get_dsi_range(ctx)));
REG_DSI_CLK_CHA_DSI_CLK_RANGE(sn65dsi83_get_dsi_range(ctx, mode)));
regmap_write(ctx->regmap, REG_RC_DSI_CLK,
REG_RC_DSI_CLK_DSI_CLK_DIVIDER(sn65dsi83_get_dsi_div(ctx)));
@@ -394,20 +444,20 @@ static void sn65dsi83_enable(struct drm_bridge *bridge)
regmap_write(ctx->regmap, REG_DSI_EQ, 0x00);
/* Set up sync signal polarity. */
val = (ctx->mode.flags & DRM_MODE_FLAG_NHSYNC ?
val = (mode->flags & DRM_MODE_FLAG_NHSYNC ?
REG_LVDS_FMT_HS_NEG_POLARITY : 0) |
(ctx->mode.flags & DRM_MODE_FLAG_NVSYNC ?
(mode->flags & DRM_MODE_FLAG_NVSYNC ?
REG_LVDS_FMT_VS_NEG_POLARITY : 0);
/* Set up bits-per-pixel, 18bpp or 24bpp. */
if (ctx->lvds_format_24bpp) {
if (lvds_format_24bpp) {
val |= REG_LVDS_FMT_CHA_24BPP_MODE;
if (ctx->lvds_dual_link)
val |= REG_LVDS_FMT_CHB_24BPP_MODE;
}
/* Set up LVDS format, JEIDA/Format 1 or SPWG/Format 2 */
if (ctx->lvds_format_jeida) {
if (lvds_format_jeida) {
val |= REG_LVDS_FMT_CHA_24BPP_FORMAT1;
if (ctx->lvds_dual_link)
val |= REG_LVDS_FMT_CHB_24BPP_FORMAT1;
@@ -426,29 +476,29 @@ static void sn65dsi83_enable(struct drm_bridge *bridge)
REG_LVDS_LANE_CHB_LVDS_TERM);
regmap_write(ctx->regmap, REG_LVDS_CM, 0x00);
val = cpu_to_le16(ctx->mode.hdisplay);
le16val = cpu_to_le16(mode->hdisplay);
regmap_bulk_write(ctx->regmap, REG_VID_CHA_ACTIVE_LINE_LENGTH_LOW,
&val, 2);
val = cpu_to_le16(ctx->mode.vdisplay);
&le16val, 2);
le16val = cpu_to_le16(mode->vdisplay);
regmap_bulk_write(ctx->regmap, REG_VID_CHA_VERTICAL_DISPLAY_SIZE_LOW,
&val, 2);
&le16val, 2);
/* 32 + 1 pixel clock to ensure proper operation */
val = cpu_to_le16(32 + 1);
regmap_bulk_write(ctx->regmap, REG_VID_CHA_SYNC_DELAY_LOW, &val, 2);
val = cpu_to_le16(ctx->mode.hsync_end - ctx->mode.hsync_start);
le16val = cpu_to_le16(32 + 1);
regmap_bulk_write(ctx->regmap, REG_VID_CHA_SYNC_DELAY_LOW, &le16val, 2);
le16val = cpu_to_le16(mode->hsync_end - mode->hsync_start);
regmap_bulk_write(ctx->regmap, REG_VID_CHA_HSYNC_PULSE_WIDTH_LOW,
&val, 2);
val = cpu_to_le16(ctx->mode.vsync_end - ctx->mode.vsync_start);
&le16val, 2);
le16val = cpu_to_le16(mode->vsync_end - mode->vsync_start);
regmap_bulk_write(ctx->regmap, REG_VID_CHA_VSYNC_PULSE_WIDTH_LOW,
&val, 2);
&le16val, 2);
regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_BACK_PORCH,
ctx->mode.htotal - ctx->mode.hsync_end);
mode->htotal - mode->hsync_end);
regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_BACK_PORCH,
ctx->mode.vtotal - ctx->mode.vsync_end);
mode->vtotal - mode->vsync_end);
regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_FRONT_PORCH,
ctx->mode.hsync_start - ctx->mode.hdisplay);
mode->hsync_start - mode->hdisplay);
regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_FRONT_PORCH,
ctx->mode.vsync_start - ctx->mode.vdisplay);
mode->vsync_start - mode->vdisplay);
regmap_write(ctx->regmap, REG_VID_CHA_TEST_PATTERN, 0x00);
/* Enable PLL */
@@ -472,7 +522,8 @@ static void sn65dsi83_enable(struct drm_bridge *bridge)
regmap_write(ctx->regmap, REG_IRQ_STAT, pval);
}
static void sn65dsi83_disable(struct drm_bridge *bridge)
static void sn65dsi83_atomic_disable(struct drm_bridge *bridge,
struct drm_bridge_state *old_bridge_state)
{
struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
@@ -481,7 +532,8 @@ static void sn65dsi83_disable(struct drm_bridge *bridge)
regmap_write(ctx->regmap, REG_RC_PLL_EN, 0x00);
}
static void sn65dsi83_post_disable(struct drm_bridge *bridge)
static void sn65dsi83_atomic_post_disable(struct drm_bridge *bridge,
struct drm_bridge_state *old_bridge_state)
{
struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
@@ -503,70 +555,44 @@ sn65dsi83_mode_valid(struct drm_bridge *bridge,
return MODE_OK;
}
static void sn65dsi83_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adj)
#define MAX_INPUT_SEL_FORMATS 1
static u32 *
sn65dsi83_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state,
u32 output_fmt,
unsigned int *num_input_fmts)
{
struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
u32 *input_fmts;
ctx->mode = *adj;
}
*num_input_fmts = 0;
static bool sn65dsi83_mode_fixup(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
struct drm_display_mode *adj)
{
struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
u32 input_bus_format = MEDIA_BUS_FMT_RGB888_1X24;
struct drm_encoder *encoder = bridge->encoder;
struct drm_device *ddev = encoder->dev;
struct drm_connector *connector;
input_fmts = kcalloc(MAX_INPUT_SEL_FORMATS, sizeof(*input_fmts),
GFP_KERNEL);
if (!input_fmts)
return NULL;
/* The DSI format is always RGB888_1X24 */
list_for_each_entry(connector, &ddev->mode_config.connector_list, head) {
switch (connector->display_info.bus_formats[0]) {
case MEDIA_BUS_FMT_RGB666_1X7X3_SPWG:
ctx->lvds_format_24bpp = false;
ctx->lvds_format_jeida = true;
break;
case MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA:
ctx->lvds_format_24bpp = true;
ctx->lvds_format_jeida = true;
break;
case MEDIA_BUS_FMT_RGB888_1X7X4_SPWG:
ctx->lvds_format_24bpp = true;
ctx->lvds_format_jeida = false;
break;
default:
/*
* Some bridges still don't set the correct
* LVDS bus pixel format, use SPWG24 default
* format until those are fixed.
*/
ctx->lvds_format_24bpp = true;
ctx->lvds_format_jeida = false;
dev_warn(ctx->dev,
"Unsupported LVDS bus format 0x%04x, please check output bridge driver. Falling back to SPWG24.\n",
connector->display_info.bus_formats[0]);
break;
}
/* This is the DSI-end bus format */
input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24;
*num_input_fmts = 1;
drm_display_info_set_bus_formats(&connector->display_info,
&input_bus_format, 1);
}
return true;
return input_fmts;
}
static const struct drm_bridge_funcs sn65dsi83_funcs = {
.attach = sn65dsi83_attach,
.pre_enable = sn65dsi83_pre_enable,
.enable = sn65dsi83_enable,
.disable = sn65dsi83_disable,
.post_disable = sn65dsi83_post_disable,
.mode_valid = sn65dsi83_mode_valid,
.mode_set = sn65dsi83_mode_set,
.mode_fixup = sn65dsi83_mode_fixup,
.attach = sn65dsi83_attach,
.atomic_pre_enable = sn65dsi83_atomic_pre_enable,
.atomic_enable = sn65dsi83_atomic_enable,
.atomic_disable = sn65dsi83_atomic_disable,
.atomic_post_disable = sn65dsi83_atomic_post_disable,
.mode_valid = sn65dsi83_mode_valid,
.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
.atomic_reset = drm_atomic_helper_bridge_reset,
.atomic_get_input_bus_fmts = sn65dsi83_atomic_get_input_bus_fmts,
};
static int sn65dsi83_parse_dt(struct sn65dsi83 *ctx, enum sn65dsi83_model model)

(File diff suppressed because it is too large.)


@@ -33,6 +33,10 @@
*
* .. code-block:: c
*
* static const struct drm_driver example_driver = {
* ...
* };
*
* static int remove_conflicting_framebuffers(struct pci_dev *pdev)
* {
* bool primary = false;
@@ -46,7 +50,7 @@
* #endif
*
* return drm_aperture_remove_conflicting_framebuffers(base, size, primary,
* "example driver");
* &example_driver);
* }
*
* static int probe(struct pci_dev *pdev)
@@ -274,7 +278,7 @@ static void drm_aperture_detach_drivers(resource_size_t base, resource_size_t si
* @base: the aperture's base address in physical memory
* @size: aperture size in bytes
* @primary: also kick vga16fb if present
* @name: requesting driver name
* @req_driver: requesting DRM driver
*
* This function removes graphics device drivers which use memory range described by
* @base and @size.
@@ -283,7 +287,7 @@ static void drm_aperture_detach_drivers(resource_size_t base, resource_size_t si
* 0 on success, or a negative errno code otherwise
*/
int drm_aperture_remove_conflicting_framebuffers(resource_size_t base, resource_size_t size,
bool primary, const char *name)
bool primary, const struct drm_driver *req_driver)
{
#if IS_REACHABLE(CONFIG_FB)
struct apertures_struct *a;
@@ -296,7 +300,7 @@ int drm_aperture_remove_conflicting_framebuffers(resource_size_t base, resource_
a->ranges[0].base = base;
a->ranges[0].size = size;
ret = remove_conflicting_framebuffers(a, name, primary);
ret = remove_conflicting_framebuffers(a, req_driver->name, primary);
kfree(a);
if (ret)
@@ -312,7 +316,7 @@ EXPORT_SYMBOL(drm_aperture_remove_conflicting_framebuffers);
/**
* drm_aperture_remove_conflicting_pci_framebuffers - remove existing framebuffers for PCI devices
* @pdev: PCI device
* @name: requesting driver name
* @req_driver: requesting DRM driver
*
* This function removes graphics device drivers using memory range configured
* for any of @pdev's memory bars. The function assumes that PCI device with
@@ -321,7 +325,8 @@ EXPORT_SYMBOL(drm_aperture_remove_conflicting_framebuffers);
* Returns:
* 0 on success, or a negative errno code otherwise
*/
int drm_aperture_remove_conflicting_pci_framebuffers(struct pci_dev *pdev, const char *name)
int drm_aperture_remove_conflicting_pci_framebuffers(struct pci_dev *pdev,
const struct drm_driver *req_driver)
{
resource_size_t base, size;
int bar, ret = 0;
@@ -339,7 +344,7 @@ int drm_aperture_remove_conflicting_pci_framebuffers(struct pci_dev *pdev, const
* otherwise the vga fbdev driver falls over.
*/
#if IS_REACHABLE(CONFIG_FB)
ret = remove_conflicting_pci_framebuffers(pdev, name);
ret = remove_conflicting_pci_framebuffers(pdev, req_driver->name);
#endif
if (ret == 0)
ret = vga_remove_vgacon(pdev);
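/*
 * A minimal sketch (names are hypothetical) of the new call pattern from
 * a PCI probe path, passing the driver's struct drm_driver instead of a
 * name string:
 */
#include <linux/pci.h>
#include <drm/drm_aperture.h>
#include <drm/drm_drv.h>

static const struct drm_driver example_driver = {
	.name = "example",
};

static int example_pci_probe(struct pci_dev *pdev,
			     const struct pci_device_id *ent)
{
	int ret;

	/* Kick out firmware/fbdev drivers sitting on any of our BARs. */
	ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev,
							       &example_driver);
	if (ret)
		return ret;

	return 0;
}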


@@ -35,6 +35,7 @@
#include <drm/drm_damage_helper.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_atomic_helper.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_print.h>
#include <drm/drm_self_refresh_helper.h>
@@ -2405,6 +2406,15 @@ int drm_atomic_helper_prepare_planes(struct drm_device *dev,
ret = funcs->prepare_fb(plane, new_plane_state);
if (ret)
goto fail;
} else {
WARN_ON_ONCE(funcs->cleanup_fb);
if (!drm_core_check_feature(dev, DRIVER_GEM))
continue;
ret = drm_gem_plane_helper_prepare_fb(plane, new_plane_state);
if (ret)
goto fail;
}
}
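/*
 * With this fallback in place, a GEM driver that only needs the default
 * implicit-fencing setup can leave both hooks unset. A hypothetical
 * plane helper table (stub callbacks included for completeness):
 */
#include <drm/drm_modeset_helper_vtables.h>

static int example_plane_atomic_check(struct drm_plane *plane,
				      struct drm_atomic_state *state)
{
	return 0; /* accept everything; a real driver validates here */
}

static void example_plane_atomic_update(struct drm_plane *plane,
					struct drm_atomic_state *state)
{
	/* program the hardware from the new plane state here */
}

static const struct drm_plane_helper_funcs example_plane_helper_funcs = {
	/*
	 * No .prepare_fb/.cleanup_fb: the atomic helper now falls back to
	 * drm_gem_plane_helper_prepare_fb() for GEM drivers.
	 */
	.atomic_check = example_plane_atomic_check,
	.atomic_update = example_plane_atomic_update,
};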


@@ -46,10 +46,10 @@
* it reached a given hardware component (a CRC sampling "source").
*
* Userspace can control generation of CRCs in a given CRTC by writing to the
* file dri/0/crtc-N/crc/control in debugfs, with N being the index of the CRTC.
* Accepted values are source names (which are driver-specific) and the "auto"
* keyword, which will let the driver select a default source of frame CRCs
* for this CRTC.
* file dri/0/crtc-N/crc/control in debugfs, with N being the :ref:`index of
* the CRTC<crtc_index>`. Accepted values are source names (which are
* driver-specific) and the "auto" keyword, which will let the driver select a
* default source of frame CRCs for this CRTC.
*
* Once frame CRC generation is enabled, userspace can capture them by reading
* the dri/0/crtc-N/crc/data file. Each line in that file contains the frame


@@ -0,0 +1,326 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright 2021 Google Inc.
*
* The DP AUX bus is used for devices that are connected over a DisplayPort
* AUX bus. The devices on the far side of the bus are referred to as
* endpoints in this code.
*
* Commonly there is only one device connected to the DP AUX bus: a panel.
* Though historically panels (even DP panels) have been modeled as simple
* platform devices, putting them under the DP AUX bus allows the panel driver
* to perform transactions on that bus.
*/
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
#include <drm/drm_dp_aux_bus.h>
#include <drm/drm_dp_helper.h>
/**
* dp_aux_ep_match() - The match function for the dp_aux_bus.
* @dev: The device to match.
* @drv: The driver to try to match against.
*
* At the moment, we just match on device tree.
*
* Return: True if this driver matches this device; false otherwise.
*/
static int dp_aux_ep_match(struct device *dev, struct device_driver *drv)
{
return !!of_match_device(drv->of_match_table, dev);
}
/**
* dp_aux_ep_probe() - The probe function for the dp_aux_bus.
* @dev: The device to probe.
*
* Calls through to the endpoint driver probe.
*
* Return: 0 if no error or negative error code.
*/
static int dp_aux_ep_probe(struct device *dev)
{
struct dp_aux_ep_driver *aux_ep_drv = to_dp_aux_ep_drv(dev->driver);
struct dp_aux_ep_device *aux_ep = to_dp_aux_ep_dev(dev);
int ret;
ret = dev_pm_domain_attach(dev, true);
if (ret)
return dev_err_probe(dev, ret, "Failed to attach to PM Domain\n");
ret = aux_ep_drv->probe(aux_ep);
if (ret)
dev_pm_domain_detach(dev, true);
return ret;
}
/**
* dp_aux_ep_remove() - The remove function for the dp_aux_bus.
* @dev: The device to remove.
*
* Calls through to the endpoint driver remove.
*
* Return: 0 if no error or negative error code.
*/
static int dp_aux_ep_remove(struct device *dev)
{
struct dp_aux_ep_driver *aux_ep_drv = to_dp_aux_ep_drv(dev->driver);
struct dp_aux_ep_device *aux_ep = to_dp_aux_ep_dev(dev);
if (aux_ep_drv->remove)
aux_ep_drv->remove(aux_ep);
dev_pm_domain_detach(dev, true);
return 0;
}
/**
* dp_aux_ep_shutdown() - The shutdown function for the dp_aux_bus.
* @dev: The device to shutdown.
*
* Calls through to the endpoint driver shutdown.
*/
static void dp_aux_ep_shutdown(struct device *dev)
{
struct dp_aux_ep_driver *aux_ep_drv;
if (!dev->driver)
return;
aux_ep_drv = to_dp_aux_ep_drv(dev->driver);
if (aux_ep_drv->shutdown)
aux_ep_drv->shutdown(to_dp_aux_ep_dev(dev));
}
static struct bus_type dp_aux_bus_type = {
.name = "dp-aux",
.match = dp_aux_ep_match,
.probe = dp_aux_ep_probe,
.remove = dp_aux_ep_remove,
.shutdown = dp_aux_ep_shutdown,
};
static ssize_t modalias_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return of_device_modalias(dev, buf, PAGE_SIZE);
}
static DEVICE_ATTR_RO(modalias);
static struct attribute *dp_aux_ep_dev_attrs[] = {
&dev_attr_modalias.attr,
NULL,
};
ATTRIBUTE_GROUPS(dp_aux_ep_dev);
/**
* dp_aux_ep_dev_release() - Free memory for the dp_aux_ep device
* @dev: The device to free.
*/
static void dp_aux_ep_dev_release(struct device *dev)
{
kfree(to_dp_aux_ep_dev(dev));
}
static struct device_type dp_aux_device_type_type = {
.groups = dp_aux_ep_dev_groups,
.uevent = of_device_uevent_modalias,
.release = dp_aux_ep_dev_release,
};
/**
* of_dp_aux_ep_destroy() - Destroy a DP AUX endpoint device
* @dev: The device to destroy.
* @data: Not used
*
* This is just used as a callback by of_dp_aux_depopulate_ep_devices() and
* is called for _all_ of the child devices of the device providing the AUX bus.
* We'll only act on those that are of type "dp_aux_bus_type".
*
* This function is effectively an inverse of what's in the loop
* in of_dp_aux_populate_ep_devices().
*
* Return: 0 if no error or negative error code.
*/
static int of_dp_aux_ep_destroy(struct device *dev, void *data)
{
struct device_node *np = dev->of_node;
if (dev->bus != &dp_aux_bus_type)
return 0;
if (!of_node_check_flag(np, OF_POPULATED))
return 0;
of_node_clear_flag(np, OF_POPULATED);
of_node_put(np);
device_unregister(dev);
return 0;
}
/**
* of_dp_aux_depopulate_ep_devices() - Undo of_dp_aux_populate_ep_devices
* @aux: The AUX channel whose devices we want to depopulate
*
* This will destroy all devices that were created
* by of_dp_aux_populate_ep_devices().
*/
void of_dp_aux_depopulate_ep_devices(struct drm_dp_aux *aux)
{
device_for_each_child_reverse(aux->dev, NULL, of_dp_aux_ep_destroy);
}
EXPORT_SYMBOL_GPL(of_dp_aux_depopulate_ep_devices);
/**
* of_dp_aux_populate_ep_devices() - Populate the endpoint devices on the DP AUX
* @aux: The AUX channel whose devices we want to populate. It is required that
* drm_dp_aux_init() has already been called for this AUX channel.
*
* This will populate all the devices under the "aux-bus" node of the device
* providing the AUX channel (AKA aux->dev).
*
* When this function finishes, it is _possible_ (but not guaranteed) that
* our sub-devices will have finished probing. Note that if a sub-device
* returns -EPROBE_DEFER, we will not return any error code ourselves even
* though that sub-device has _not_ actually probed successfully yet. There
* may be other cases (maybe added in the future?) where sub-devices won't
* have been probed yet when this function returns, so it's best not to
* rely on that.
*
* If this function succeeds you should later make sure you call
* of_dp_aux_depopulate_ep_devices() to undo it, or just use the devm version
* of this function.
*
* Return: 0 if no error or negative error code.
*/
int of_dp_aux_populate_ep_devices(struct drm_dp_aux *aux)
{
struct device_node *bus, *np;
struct dp_aux_ep_device *aux_ep;
int ret;
/* drm_dp_aux_init() should have been called already; warn if not */
WARN_ON_ONCE(!aux->ddc.algo);
if (!aux->dev->of_node)
return 0;
bus = of_get_child_by_name(aux->dev->of_node, "aux-bus");
if (!bus)
return 0;
for_each_available_child_of_node(bus, np) {
if (of_node_test_and_set_flag(np, OF_POPULATED))
continue;
aux_ep = kzalloc(sizeof(*aux_ep), GFP_KERNEL);
if (!aux_ep)
continue;
aux_ep->aux = aux;
aux_ep->dev.parent = aux->dev;
aux_ep->dev.bus = &dp_aux_bus_type;
aux_ep->dev.type = &dp_aux_device_type_type;
aux_ep->dev.of_node = of_node_get(np);
dev_set_name(&aux_ep->dev, "aux-%s", dev_name(aux->dev));
ret = device_register(&aux_ep->dev);
if (ret) {
dev_err(aux->dev, "Failed to create AUX EP for %pOF: %d\n", np, ret);
of_node_clear_flag(np, OF_POPULATED);
of_node_put(np);
/*
* As per docs of device_register(), call this instead
* of kfree() directly for error cases.
*/
put_device(&aux_ep->dev);
/*
* Following in the footsteps of of_i2c_register_devices(),
* we won't fail the whole function here--we'll just
* continue registering any other devices we find.
*/
}
}
of_node_put(bus);
return 0;
}
static void of_dp_aux_depopulate_ep_devices_void(void *data)
{
of_dp_aux_depopulate_ep_devices(data);
}
/**
* devm_of_dp_aux_populate_ep_devices() - devm wrapper for of_dp_aux_populate_ep_devices()
* @aux: The AUX channel whose devices we want to populate
*
* Handles freeing w/ devm on the device "aux->dev".
*
* Return: 0 if no error or negative error code.
*/
int devm_of_dp_aux_populate_ep_devices(struct drm_dp_aux *aux)
{
int ret;
ret = of_dp_aux_populate_ep_devices(aux);
if (ret)
return ret;
return devm_add_action_or_reset(aux->dev,
of_dp_aux_depopulate_ep_devices_void,
aux);
}
EXPORT_SYMBOL_GPL(devm_of_dp_aux_populate_ep_devices);
int __dp_aux_dp_driver_register(struct dp_aux_ep_driver *drv, struct module *owner)
{
drv->driver.owner = owner;
drv->driver.bus = &dp_aux_bus_type;
return driver_register(&drv->driver);
}
EXPORT_SYMBOL_GPL(__dp_aux_dp_driver_register);
void dp_aux_dp_driver_unregister(struct dp_aux_ep_driver *drv)
{
driver_unregister(&drv->driver);
}
EXPORT_SYMBOL_GPL(dp_aux_dp_driver_unregister);
static int __init dp_aux_bus_init(void)
{
int ret;
ret = bus_register(&dp_aux_bus_type);
if (ret)
return ret;
return 0;
}
static void __exit dp_aux_bus_exit(void)
{
bus_unregister(&dp_aux_bus_type);
}
subsys_initcall(dp_aux_bus_init);
module_exit(dp_aux_bus_exit);
MODULE_AUTHOR("Douglas Anderson <dianders@chromium.org>");
MODULE_DESCRIPTION("DRM DisplayPort AUX bus");
MODULE_LICENSE("GPL v2");
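/*
 * A hedged usage sketch (driver name and compatible string are made up):
 * an endpoint driver on this bus registers like any other bus driver and
 * gets the AUX channel handed to it at probe time via aux_ep->aux.
 */
#include <linux/device.h>
#include <linux/module.h>
#include <linux/of.h>
#include <drm/drm_dp_aux_bus.h>

static int example_panel_probe(struct dp_aux_ep_device *aux_ep)
{
	/* DPCD transfers over aux_ep->aux work right from probe. */
	dev_info(&aux_ep->dev, "probed on AUX channel %s\n",
		 aux_ep->aux->name);
	return 0;
}

static const struct of_device_id example_panel_match[] = {
	{ .compatible = "example,edp-panel" },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, example_panel_match);

static struct dp_aux_ep_driver example_panel_driver = {
	.probe = example_panel_probe,
	.driver = {
		.name = "example-panel",
		.of_match_table = example_panel_match,
	},
};

static int __init example_panel_init(void)
{
	return dp_aux_dp_driver_register(&example_panel_driver);
}
module_init(example_panel_init);

static void __exit example_panel_exit(void)
{
	dp_aux_dp_driver_unregister(&example_panel_driver);
}
module_exit(example_panel_exit);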


@@ -33,9 +33,17 @@
#include <drm/drm_print.h>
#include <drm/drm_vblank.h>
#include <drm/drm_dp_mst_helper.h>
#include <drm/drm_panel.h>
#include "drm_crtc_helper_internal.h"
struct dp_aux_backlight {
struct backlight_device *base;
struct drm_dp_aux *aux;
struct drm_edp_backlight_info info;
bool enabled;
};
/**
* DOC: dp helpers
*
@@ -3115,3 +3123,457 @@ int drm_dp_pcon_convert_rgb_to_ycbcr(struct drm_dp_aux *aux, u8 color_spc)
return 0;
}
EXPORT_SYMBOL(drm_dp_pcon_convert_rgb_to_ycbcr);
/**
* drm_edp_backlight_set_level() - Set the backlight level of an eDP panel via AUX
* @aux: The DP AUX channel to use
* @bl: Backlight capability info from drm_edp_backlight_init()
* @level: The brightness level to set
*
* Sets the brightness level of an eDP panel's backlight. Note that the panel's backlight must
* already have been enabled by the driver by calling drm_edp_backlight_enable().
*
* Returns: %0 on success, negative error code on failure
*/
int drm_edp_backlight_set_level(struct drm_dp_aux *aux, const struct drm_edp_backlight_info *bl,
u16 level)
{
int ret;
u8 buf[2] = { 0 };
if (bl->lsb_reg_used) {
buf[0] = (level & 0xff00) >> 8;
buf[1] = (level & 0x00ff);
} else {
buf[0] = level;
}
ret = drm_dp_dpcd_write(aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB, buf, sizeof(buf));
if (ret != sizeof(buf)) {
drm_err(aux->drm_dev,
"%s: Failed to write aux backlight level: %d\n",
aux->name, ret);
return ret < 0 ? ret : -EIO;
}
return 0;
}
EXPORT_SYMBOL(drm_edp_backlight_set_level);
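/*
 * Worked example (illustrative): with bl->lsb_reg_used set and
 * level = 0x1234, buf becomes { 0x12, 0x34 }, so the MSB and LSB
 * brightness registers are updated in a single two-byte AUX write
 * starting at DP_EDP_BACKLIGHT_BRIGHTNESS_MSB.
 */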
static int
drm_edp_backlight_set_enable(struct drm_dp_aux *aux, const struct drm_edp_backlight_info *bl,
bool enable)
{
int ret;
u8 buf;
/* The panel uses something other than DPCD for enabling its backlight */
if (!bl->aux_enable)
return 0;
ret = drm_dp_dpcd_readb(aux, DP_EDP_DISPLAY_CONTROL_REGISTER, &buf);
if (ret != 1) {
drm_err(aux->drm_dev, "%s: Failed to read eDP display control register: %d\n",
aux->name, ret);
return ret < 0 ? ret : -EIO;
}
if (enable)
buf |= DP_EDP_BACKLIGHT_ENABLE;
else
buf &= ~DP_EDP_BACKLIGHT_ENABLE;
ret = drm_dp_dpcd_writeb(aux, DP_EDP_DISPLAY_CONTROL_REGISTER, buf);
if (ret != 1) {
drm_err(aux->drm_dev, "%s: Failed to write eDP display control register: %d\n",
aux->name, ret);
return ret < 0 ? ret : -EIO;
}
return 0;
}
/**
* drm_edp_backlight_enable() - Enable an eDP panel's backlight using DPCD
* @aux: The DP AUX channel to use
* @bl: Backlight capability info from drm_edp_backlight_init()
* @level: The initial backlight level to set via AUX, if there is one
*
* This function handles enabling DPCD backlight controls on a panel over AUX, while additionally
* restoring any important backlight state such as the given backlight level, the brightness byte
* count, backlight frequency, etc.
*
* Note that certain panels, while supporting brightness level controls over DPCD, may not support
* having their backlights enabled via the standard %DP_EDP_DISPLAY_CONTROL_REGISTER. On such panels
* &drm_edp_backlight_info.aux_enable will be set to %false; this function will then skip
* programming the %DP_EDP_DISPLAY_CONTROL_REGISTER, and the driver must perform the required
* implementation-specific step for enabling the backlight after calling this function.
*
* Returns: %0 on success, negative error code on failure.
*/
int drm_edp_backlight_enable(struct drm_dp_aux *aux, const struct drm_edp_backlight_info *bl,
const u16 level)
{
int ret;
u8 dpcd_buf, new_dpcd_buf;
ret = drm_dp_dpcd_readb(aux, DP_EDP_BACKLIGHT_MODE_SET_REGISTER, &dpcd_buf);
if (ret != 1) {
drm_dbg_kms(aux->drm_dev,
"%s: Failed to read backlight mode: %d\n", aux->name, ret);
return ret < 0 ? ret : -EIO;
}
new_dpcd_buf = dpcd_buf;
if ((dpcd_buf & DP_EDP_BACKLIGHT_CONTROL_MODE_MASK) != DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD) {
new_dpcd_buf &= ~DP_EDP_BACKLIGHT_CONTROL_MODE_MASK;
new_dpcd_buf |= DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD;
ret = drm_dp_dpcd_writeb(aux, DP_EDP_PWMGEN_BIT_COUNT, bl->pwmgen_bit_count);
if (ret != 1)
drm_dbg_kms(aux->drm_dev, "%s: Failed to write aux pwmgen bit count: %d\n",
aux->name, ret);
}
if (bl->pwm_freq_pre_divider) {
ret = drm_dp_dpcd_writeb(aux, DP_EDP_BACKLIGHT_FREQ_SET, bl->pwm_freq_pre_divider);
if (ret != 1)
drm_dbg_kms(aux->drm_dev,
"%s: Failed to write aux backlight frequency: %d\n",
aux->name, ret);
else
new_dpcd_buf |= DP_EDP_BACKLIGHT_FREQ_AUX_SET_ENABLE;
}
if (new_dpcd_buf != dpcd_buf) {
ret = drm_dp_dpcd_writeb(aux, DP_EDP_BACKLIGHT_MODE_SET_REGISTER, new_dpcd_buf);
if (ret != 1) {
drm_dbg_kms(aux->drm_dev, "%s: Failed to write aux backlight mode: %d\n",
aux->name, ret);
return ret < 0 ? ret : -EIO;
}
}
ret = drm_edp_backlight_set_level(aux, bl, level);
if (ret < 0)
return ret;
ret = drm_edp_backlight_set_enable(aux, bl, true);
if (ret < 0)
return ret;
return 0;
}
EXPORT_SYMBOL(drm_edp_backlight_enable);
/**
* drm_edp_backlight_disable() - Disable an eDP backlight using DPCD, if supported
* @aux: The DP AUX channel to use
* @bl: Backlight capability info from drm_edp_backlight_init()
*
* This function handles disabling DPCD backlight controls on a panel over AUX. Note that some
* panels have backlights that are enabled/disabled by other means, despite having their brightness
* values controlled through DPCD. On such panels &drm_edp_backlight_info.aux_enable will be set to
* %false; this function then becomes a no-op (and we will skip updating
* %DP_EDP_DISPLAY_CONTROL_REGISTER), and the driver must take care to perform its own
* implementation-specific step for disabling the backlight.
*
* Returns: %0 on success or no-op, negative error code on failure.
*/
int drm_edp_backlight_disable(struct drm_dp_aux *aux, const struct drm_edp_backlight_info *bl)
{
int ret;
ret = drm_edp_backlight_set_enable(aux, bl, false);
if (ret < 0)
return ret;
return 0;
}
EXPORT_SYMBOL(drm_edp_backlight_disable);
static inline int
drm_edp_backlight_probe_max(struct drm_dp_aux *aux, struct drm_edp_backlight_info *bl,
u16 driver_pwm_freq_hz, const u8 edp_dpcd[EDP_DISPLAY_CTL_CAP_SIZE])
{
int fxp, fxp_min, fxp_max, fxp_actual, f = 1;
int ret;
u8 pn, pn_min, pn_max;
ret = drm_dp_dpcd_readb(aux, DP_EDP_PWMGEN_BIT_COUNT, &pn);
if (ret != 1) {
drm_dbg_kms(aux->drm_dev, "%s: Failed to read pwmgen bit count cap: %d\n",
aux->name, ret);
return -ENODEV;
}
pn &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
bl->max = (1 << pn) - 1;
if (!driver_pwm_freq_hz)
return 0;
/*
* Set PWM Frequency divider to match desired frequency provided by the driver.
* The PWM Frequency is calculated as 27Mhz / (F x P).
* - Where F = PWM Frequency Pre-Divider value programmed by field 7:0 of the
* EDP_BACKLIGHT_FREQ_SET register (DPCD Address 00728h)
* - Where P = 2^Pn, where Pn is the value programmed by field 4:0 of the
* EDP_PWMGEN_BIT_COUNT register (DPCD Address 00724h)
*/
/* Find desired value of (F x P)
* Note that if F x P is out of the supported range, the maximum or minimum value will be
* applied automatically, so there is no need to check for that.
*/
fxp = DIV_ROUND_CLOSEST(1000 * DP_EDP_BACKLIGHT_FREQ_BASE_KHZ, driver_pwm_freq_hz);
/* Use the highest possible value of Pn for more granularity of brightness adjustment while
* satisfying the conditions below.
* - Pn is in the range of Pn_min and Pn_max
* - F is in the range of 1 and 255
* - FxP is within 25% of desired value.
* Note: 25% is arbitrary value and may need some tweak.
*/
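/*
 * Worked example (illustrative numbers): for a requested 27 kHz PWM,
 * fxp = 27,000,000 / 27,000 = 1000. If the panel reports pn_min = 4 and
 * pn_max = 8, the loop below tries pn = 8 first: f = clamp(1000 / 256,
 * 1, 255) = 4, so fxp_actual = 4 << 8 = 1024, which lies within the
 * [750, 1250] window. The panel then runs at 27 MHz / 1024 ~= 26.4 kHz.
 */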
ret = drm_dp_dpcd_readb(aux, DP_EDP_PWMGEN_BIT_COUNT_CAP_MIN, &pn_min);
if (ret != 1) {
drm_dbg_kms(aux->drm_dev, "%s: Failed to read pwmgen bit count cap min: %d\n",
aux->name, ret);
return 0;
}
ret = drm_dp_dpcd_readb(aux, DP_EDP_PWMGEN_BIT_COUNT_CAP_MAX, &pn_max);
if (ret != 1) {
drm_dbg_kms(aux->drm_dev, "%s: Failed to read pwmgen bit count cap max: %d\n",
aux->name, ret);
return 0;
}
pn_min &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
pn_max &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
/* Ensure frequency is within 25% of desired value */
fxp_min = DIV_ROUND_CLOSEST(fxp * 3, 4);
fxp_max = DIV_ROUND_CLOSEST(fxp * 5, 4);
if (fxp_min < (1 << pn_min) || (255 << pn_max) < fxp_max) {
drm_dbg_kms(aux->drm_dev,
"%s: Driver defined backlight frequency (%d) out of range\n",
aux->name, driver_pwm_freq_hz);
return 0;
}
for (pn = pn_max; pn >= pn_min; pn--) {
f = clamp(DIV_ROUND_CLOSEST(fxp, 1 << pn), 1, 255);
fxp_actual = f << pn;
if (fxp_min <= fxp_actual && fxp_actual <= fxp_max)
break;
}
ret = drm_dp_dpcd_writeb(aux, DP_EDP_PWMGEN_BIT_COUNT, pn);
if (ret != 1) {
drm_dbg_kms(aux->drm_dev, "%s: Failed to write aux pwmgen bit count: %d\n",
aux->name, ret);
return 0;
}
bl->pwmgen_bit_count = pn;
bl->max = (1 << pn) - 1;
if (edp_dpcd[2] & DP_EDP_BACKLIGHT_FREQ_AUX_SET_CAP) {
bl->pwm_freq_pre_divider = f;
drm_dbg_kms(aux->drm_dev, "%s: Using backlight frequency from driver (%dHz)\n",
aux->name, driver_pwm_freq_hz);
}
return 0;
}
static inline int
drm_edp_backlight_probe_level(struct drm_dp_aux *aux, struct drm_edp_backlight_info *bl,
u8 *current_mode)
{
int ret;
u8 buf[2];
u8 mode_reg;
ret = drm_dp_dpcd_readb(aux, DP_EDP_BACKLIGHT_MODE_SET_REGISTER, &mode_reg);
if (ret != 1) {
drm_dbg_kms(aux->drm_dev, "%s: Failed to read backlight mode: %d\n",
aux->name, ret);
return ret < 0 ? ret : -EIO;
}
*current_mode = (mode_reg & DP_EDP_BACKLIGHT_CONTROL_MODE_MASK);
if (*current_mode == DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD) {
int size = 1 + bl->lsb_reg_used;
ret = drm_dp_dpcd_read(aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB, buf, size);
if (ret != size) {
drm_dbg_kms(aux->drm_dev, "%s: Failed to read backlight level: %d\n",
aux->name, ret);
return ret < 0 ? ret : -EIO;
}
if (bl->lsb_reg_used)
return (buf[0] << 8) | buf[1];
else
return buf[0];
}
/*
* If we're not in DPCD control mode yet, the programmed brightness value is meaningless and
* the driver should assume max brightness
*/
return bl->max;
}
/**
* drm_edp_backlight_init() - Probe a display panel's TCON using the standard VESA eDP backlight
* interface.
* @aux: The DP aux device to use for probing
* @bl: The &drm_edp_backlight_info struct to fill out with information on the backlight
* @driver_pwm_freq_hz: Optional PWM frequency from the driver in Hz
* @edp_dpcd: A cached copy of the eDP DPCD
* @current_level: Where to store the probed brightness level
* @current_mode: Where to store the currently set backlight control mode
*
* Initializes a &drm_edp_backlight_info struct by probing @aux for its backlight capabilities,
* along with the current and maximum supported brightness levels.
*
* If @driver_pwm_freq_hz is non-zero, this will be used as the backlight frequency. Otherwise, the
* default frequency from the panel is used.
*
* Returns: %0 on success, negative error code on failure.
*/
int
drm_edp_backlight_init(struct drm_dp_aux *aux, struct drm_edp_backlight_info *bl,
u16 driver_pwm_freq_hz, const u8 edp_dpcd[EDP_DISPLAY_CTL_CAP_SIZE],
u16 *current_level, u8 *current_mode)
{
int ret;
if (edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP)
bl->aux_enable = true;
if (edp_dpcd[2] & DP_EDP_BACKLIGHT_BRIGHTNESS_BYTE_COUNT)
bl->lsb_reg_used = true;
ret = drm_edp_backlight_probe_max(aux, bl, driver_pwm_freq_hz, edp_dpcd);
if (ret < 0)
return ret;
ret = drm_edp_backlight_probe_level(aux, bl, current_mode);
if (ret < 0)
return ret;
*current_level = ret;
drm_dbg_kms(aux->drm_dev,
"%s: Found backlight level=%d/%d pwm_freq_pre_divider=%d mode=%x\n",
aux->name, *current_level, bl->max, bl->pwm_freq_pre_divider, *current_mode);
drm_dbg_kms(aux->drm_dev,
"%s: Backlight caps: pwmgen_bit_count=%d lsb_reg_used=%d aux_enable=%d\n",
aux->name, bl->pwmgen_bit_count, bl->lsb_reg_used, bl->aux_enable);
return 0;
}
EXPORT_SYMBOL(drm_edp_backlight_init);
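/*
 * Example (sketch, not part of this file): typical probe-time usage from a
 * driver, assuming edp_dpcd was previously cached from DP_EDP_DPCD_REV with
 * drm_dp_dpcd_read(). The xyz_* names and struct members are hypothetical.
 */
static int xyz_panel_backlight_probe(struct xyz_panel *xyz)
{
	u16 current_level;
	u8 current_mode;
	int ret;

	if (!drm_edp_backlight_supported(xyz->edp_dpcd))
		return -ENODEV;

	/* Passing 0 for the PWM frequency keeps the panel's default. */
	ret = drm_edp_backlight_init(xyz->aux, &xyz->bl, 0, xyz->edp_dpcd,
				     &current_level, &current_mode);
	if (ret < 0)
		return ret;

	xyz->level = current_level;
	xyz->dpcd_mode = current_mode == DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD;
	return 0;
}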
#if IS_BUILTIN(CONFIG_BACKLIGHT_CLASS_DEVICE) || \
(IS_MODULE(CONFIG_DRM_KMS_HELPER) && IS_MODULE(CONFIG_BACKLIGHT_CLASS_DEVICE))
static int dp_aux_backlight_update_status(struct backlight_device *bd)
{
struct dp_aux_backlight *bl = bl_get_data(bd);
u16 brightness = backlight_get_brightness(bd);
int ret = 0;
if (!backlight_is_blank(bd)) {
if (!bl->enabled) {
drm_edp_backlight_enable(bl->aux, &bl->info, brightness);
bl->enabled = true;
return 0;
}
ret = drm_edp_backlight_set_level(bl->aux, &bl->info, brightness);
} else {
if (bl->enabled) {
drm_edp_backlight_disable(bl->aux, &bl->info);
bl->enabled = false;
}
}
return ret;
}
static const struct backlight_ops dp_aux_bl_ops = {
.update_status = dp_aux_backlight_update_status,
};
/**
* drm_panel_dp_aux_backlight - create and use DP AUX backlight
* @panel: DRM panel
* @aux: The DP AUX channel to use
*
* Use this function to create and handle the backlight if your panel
* supports backlight control over the DP AUX channel using DPCD
* registers as per VESA's standard backlight control interface.
*
* When the panel is enabled, the backlight will be enabled after a
* successful call to &drm_panel_funcs.enable().
*
* When the panel is disabled, the backlight will be disabled before the
* call to &drm_panel_funcs.disable().
*
* A typical implementation for a panel driver supporting backlight
* control over DP AUX will call this function at probe time.
* Backlight will then be handled transparently without requiring
* any intervention from the driver.
*
* drm_panel_dp_aux_backlight() must be called after the call to drm_panel_init().
*
* Return: 0 on success or a negative error code on failure.
*/
int drm_panel_dp_aux_backlight(struct drm_panel *panel, struct drm_dp_aux *aux)
{
struct dp_aux_backlight *bl;
struct backlight_properties props = { 0 };
u16 current_level;
u8 current_mode;
u8 edp_dpcd[EDP_DISPLAY_CTL_CAP_SIZE];
int ret;
if (!panel || !panel->dev || !aux)
return -EINVAL;
ret = drm_dp_dpcd_read(aux, DP_EDP_DPCD_REV, edp_dpcd,
EDP_DISPLAY_CTL_CAP_SIZE);
if (ret < 0)
return ret;
if (!drm_edp_backlight_supported(edp_dpcd)) {
DRM_DEV_INFO(panel->dev, "DP AUX backlight is not supported\n");
return 0;
}
bl = devm_kzalloc(panel->dev, sizeof(*bl), GFP_KERNEL);
if (!bl)
return -ENOMEM;
bl->aux = aux;
ret = drm_edp_backlight_init(aux, &bl->info, 0, edp_dpcd,
&current_level, &current_mode);
if (ret < 0)
return ret;
props.type = BACKLIGHT_RAW;
props.brightness = current_level;
props.max_brightness = bl->info.max;
bl->base = devm_backlight_device_register(panel->dev, "dp_aux_backlight",
panel->dev, bl,
&dp_aux_bl_ops, &props);
if (IS_ERR(bl->base))
return PTR_ERR(bl->base);
backlight_disable(bl->base);
panel->backlight = bl->base;
return 0;
}
EXPORT_SYMBOL(drm_panel_dp_aux_backlight);
#endif
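/*
 * Usage note (sketch): a panel driver would typically call
 * drm_panel_dp_aux_backlight() right after drm_panel_init() in probe and
 * let the helper manage the backlight from then on, e.g.:
 *
 *	drm_panel_init(&ctx->panel, dev, &xyz_panel_funcs, DRM_MODE_CONNECTOR_eDP);
 *	ret = drm_panel_dp_aux_backlight(&ctx->panel, &ctx->aux);
 *	if (ret)
 *		return ret;
 *	drm_panel_add(&ctx->panel);
 *
 * Here ctx and xyz_panel_funcs are hypothetical driver-local names.
 */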


@ -1148,15 +1148,6 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return -EACCES;
}
if (node->readonly) {
if (vma->vm_flags & VM_WRITE) {
drm_gem_object_put(obj);
return -EINVAL;
}
vma->vm_flags &= ~VM_MAYWRITE;
}
ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT,
vma);
@ -1311,6 +1302,9 @@ EXPORT_SYMBOL(drm_gem_unlock_reservations);
* @fence_array: array of dma_fence * for the job to block on.
* @fence: the dma_fence to add to the list of dependencies.
*
* This functions consumes the reference for @fence both on success and error
* cases.
*
* Returns:
* 0 on success, or an error on failing to expand the array.
*/


@ -135,6 +135,9 @@
* GEM based framebuffer drivers which have their buffers always pinned in
* memory.
*
* This function is the default implementation for GEM drivers of
* &drm_plane_helper_funcs.prepare_fb if no callback is provided.
*
* See drm_atomic_set_fence_for_plane() for a discussion of implicit and
* explicit fencing in atomic modeset updates.
*/
@ -179,6 +182,27 @@ EXPORT_SYMBOL(drm_gem_simple_display_pipe_prepare_fb);
* Shadow-buffered Planes
*/
/**
* __drm_gem_duplicate_shadow_plane_state - duplicates shadow-buffered plane state
* @plane: the plane
* @new_shadow_plane_state: the new shadow-buffered plane state
*
* This function duplicates shadow-buffered plane state. This is helpful for drivers
* that subclass struct drm_shadow_plane_state.
*
* The function does not duplicate existing mappings of the shadow buffers.
* Mappings are maintained during the atomic commit by the plane's prepare_fb
* and cleanup_fb helpers. See drm_gem_prepare_shadow_fb() and drm_gem_cleanup_shadow_fb()
* for corresponding helpers.
*/
void
__drm_gem_duplicate_shadow_plane_state(struct drm_plane *plane,
struct drm_shadow_plane_state *new_shadow_plane_state)
{
__drm_atomic_helper_plane_duplicate_state(plane, &new_shadow_plane_state->base);
}
EXPORT_SYMBOL(__drm_gem_duplicate_shadow_plane_state);
/**
* drm_gem_duplicate_shadow_plane_state - duplicates shadow-buffered plane state
* @plane: the plane
@ -208,12 +232,25 @@ drm_gem_duplicate_shadow_plane_state(struct drm_plane *plane)
new_shadow_plane_state = kzalloc(sizeof(*new_shadow_plane_state), GFP_KERNEL);
if (!new_shadow_plane_state)
return NULL;
__drm_atomic_helper_plane_duplicate_state(plane, &new_shadow_plane_state->base);
__drm_gem_duplicate_shadow_plane_state(plane, new_shadow_plane_state);
return &new_shadow_plane_state->base;
}
EXPORT_SYMBOL(drm_gem_duplicate_shadow_plane_state);
/**
* __drm_gem_destroy_shadow_plane_state - cleans up shadow-buffered plane state
* @shadow_plane_state: the shadow-buffered plane state
*
* This function cleans up shadow-buffered plane state. Helpful for drivers that
* subclass struct drm_shadow_plane_state.
*/
void __drm_gem_destroy_shadow_plane_state(struct drm_shadow_plane_state *shadow_plane_state)
{
__drm_atomic_helper_plane_destroy_state(&shadow_plane_state->base);
}
EXPORT_SYMBOL(__drm_gem_destroy_shadow_plane_state);
/**
* drm_gem_destroy_shadow_plane_state - deletes shadow-buffered plane state
* @plane: the plane
@ -229,11 +266,26 @@ void drm_gem_destroy_shadow_plane_state(struct drm_plane *plane,
struct drm_shadow_plane_state *shadow_plane_state =
to_drm_shadow_plane_state(plane_state);
__drm_atomic_helper_plane_destroy_state(&shadow_plane_state->base);
__drm_gem_destroy_shadow_plane_state(shadow_plane_state);
kfree(shadow_plane_state);
}
EXPORT_SYMBOL(drm_gem_destroy_shadow_plane_state);
/**
* __drm_gem_reset_shadow_plane - resets a shadow-buffered plane
* @plane: the plane
* @shadow_plane_state: the shadow-buffered plane state
*
* This function resets state for shadow-buffered planes. Helpful
* for drivers that subclass struct drm_shadow_plane_state.
*/
void __drm_gem_reset_shadow_plane(struct drm_plane *plane,
struct drm_shadow_plane_state *shadow_plane_state)
{
__drm_atomic_helper_plane_reset(plane, &shadow_plane_state->base);
}
EXPORT_SYMBOL(__drm_gem_reset_shadow_plane);
/**
* drm_gem_reset_shadow_plane - resets a shadow-buffered plane
* @plane: the plane
@ -255,7 +307,7 @@ void drm_gem_reset_shadow_plane(struct drm_plane *plane)
shadow_plane_state = kzalloc(sizeof(*shadow_plane_state), GFP_KERNEL);
if (!shadow_plane_state)
return;
__drm_atomic_helper_plane_reset(plane, &shadow_plane_state->base);
__drm_gem_reset_shadow_plane(plane, shadow_plane_state);
}
EXPORT_SYMBOL(drm_gem_reset_shadow_plane);


@ -505,13 +505,13 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
if (!args->pitch || !args->size) {
args->pitch = min_pitch;
args->size = args->pitch * args->height;
args->size = PAGE_ALIGN(args->pitch * args->height);
} else {
/* ensure sane minimum values */
if (args->pitch < min_pitch)
args->pitch = min_pitch;
if (args->size < args->pitch * args->height)
args->size = args->pitch * args->height;
args->size = PAGE_ALIGN(args->pitch * args->height);
}
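/*
 * Example (illustrative): a 5x5 XRGB8888 dumb buffer has min_pitch =
 * 5 * 4 = 20 bytes and a raw size of 100 bytes; PAGE_ALIGN() rounds the
 * size up to a full page (4096 bytes with 4K pages), which is what the
 * shmem backend allocates anyway.
 */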
shmem = drm_gem_shmem_create_with_handle(file, dev, args->size, &args->handle);


@ -1012,9 +1012,8 @@ static void drm_vram_mm_cleanup(struct drm_vram_mm *vmm)
* Helpers for integration with struct drm_device
*/
/* deprecated; use drmm_vram_mm_init() */
struct drm_vram_mm *drm_vram_helper_alloc_mm(
struct drm_device *dev, uint64_t vram_base, size_t vram_size)
static struct drm_vram_mm *drm_vram_helper_alloc_mm(struct drm_device *dev, uint64_t vram_base,
size_t vram_size)
{
int ret;
@ -1036,9 +1035,8 @@ struct drm_vram_mm *drm_vram_helper_alloc_mm(
dev->vram_mm = NULL;
return ERR_PTR(ret);
}
EXPORT_SYMBOL(drm_vram_helper_alloc_mm);
void drm_vram_helper_release_mm(struct drm_device *dev)
static void drm_vram_helper_release_mm(struct drm_device *dev)
{
if (!dev->vram_mm)
return;
@ -1047,7 +1045,6 @@ void drm_vram_helper_release_mm(struct drm_device *dev)
kfree(dev->vram_mm);
dev->vram_mm = NULL;
}
EXPORT_SYMBOL(drm_vram_helper_release_mm);
static void drm_vram_mm_release(struct drm_device *dev, void *ptr)
{


@ -74,10 +74,8 @@
* only supports devices with a single interrupt on the main device stored in
&drm_device.dev and set as the device parameter in drm_dev_alloc().
*
* These IRQ helpers are strictly optional. Drivers which roll their own only
* need to set &drm_device.irq_enabled to signal the DRM core that vblank
* interrupts are working. Since these helpers don't automatically clean up the
* requested interrupt like e.g. devm_request_irq() they're not really
* These IRQ helpers are strictly optional. Since these helpers don't automatically
* clean up the requested interrupt like e.g. devm_request_irq() they're not really
* recommended.
*/
@ -91,9 +89,7 @@
* and after the installation.
*
* This is the simplified helper interface provided for drivers with no special
* needs. Drivers which need to install interrupt handlers for multiple
* interrupts must instead set &drm_device.irq_enabled to signal the DRM core
* that vblank interrupts are available.
* needs.
*
* @irq must match the interrupt number that would be passed to request_irq(),
* if called directly instead of using this helper function.
@ -156,8 +152,7 @@ EXPORT_SYMBOL(drm_irq_install);
*
* Calls the driver's &drm_driver.irq_uninstall function and unregisters the IRQ
* handler. This should only be called by drivers which used drm_irq_install()
* to set up their interrupt handler. Other drivers must only reset
* &drm_device.irq_enabled to false.
* to set up their interrupt handler.
*
* Note that for kernel modesetting drivers it is a bug if this function fails.
* The sanity checks are only to catch buggy user modesetting drivers which call


@ -928,6 +928,59 @@ static int mipi_dbi_spi1_transfer(struct mipi_dbi *dbi, int dc,
return 0;
}
static int mipi_dbi_typec1_command_read(struct mipi_dbi *dbi, u8 *cmd,
u8 *data, size_t len)
{
struct spi_device *spi = dbi->spi;
u32 speed_hz = min_t(u32, MIPI_DBI_MAX_SPI_READ_SPEED,
spi->max_speed_hz / 2);
struct spi_transfer tr[2] = {
{
.speed_hz = speed_hz,
.bits_per_word = 9,
.tx_buf = dbi->tx_buf9,
.len = 2,
}, {
.speed_hz = speed_hz,
.bits_per_word = 8,
.len = len,
.rx_buf = data,
},
};
struct spi_message m;
u16 *dst16;
int ret;
if (!len)
return -EINVAL;
if (!spi_is_bpw_supported(spi, 9)) {
/*
* FIXME: implement something like mipi_dbi_spi1e_transfer() but
* for reads using emulation.
*/
dev_err(&spi->dev,
"reading on host not supporting 9 bpw not yet implemented\n");
return -EOPNOTSUPP;
}
/*
* Turn the 8bit command into a 16bit version of the command in the
* buffer. Only 9 bits of this will be used when executing the actual
* transfer.
*/
dst16 = dbi->tx_buf9;
dst16[0] = *cmd;
spi_message_init_with_transfers(&m, tr, ARRAY_SIZE(tr));
ret = spi_sync(spi, &m);
if (!ret)
MIPI_DBI_DEBUG_COMMAND(*cmd, data, len);
return ret;
}
static int mipi_dbi_typec1_command(struct mipi_dbi *dbi, u8 *cmd,
u8 *parameters, size_t num)
{
@ -935,7 +988,7 @@ static int mipi_dbi_typec1_command(struct mipi_dbi *dbi, u8 *cmd,
int ret;
if (mipi_dbi_command_is_read(dbi, *cmd))
return -EOPNOTSUPP;
return mipi_dbi_typec1_command_read(dbi, cmd, parameters, num);
MIPI_DBI_DEBUG_COMMAND(*cmd, parameters, num);


@ -315,7 +315,7 @@ static int drm_of_lvds_get_remote_pixels_type(
remote_port = of_graph_get_remote_port(endpoint);
if (!remote_port) {
of_node_put(remote_port);
of_node_put(endpoint);
return -EPIPE;
}
@ -331,8 +331,10 @@ static int drm_of_lvds_get_remote_pixels_type(
* configurations by passing the endpoints explicitly to
* drm_of_lvds_get_dual_link_pixel_order().
*/
if (!current_pt || pixels_type != current_pt)
if (!current_pt || pixels_type != current_pt) {
of_node_put(endpoint);
return -EINVAL;
}
}
return pixels_type;


@ -9,6 +9,8 @@
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_atomic_helper.h>
#include <drm/drm_managed.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_probe_helper.h>
@ -225,8 +227,14 @@ static int drm_simple_kms_plane_prepare_fb(struct drm_plane *plane,
struct drm_simple_display_pipe *pipe;
pipe = container_of(plane, struct drm_simple_display_pipe, plane);
if (!pipe->funcs || !pipe->funcs->prepare_fb)
return 0;
if (!pipe->funcs || !pipe->funcs->prepare_fb) {
if (WARN_ON_ONCE(!drm_core_check_feature(plane->dev, DRIVER_GEM)))
return 0;
WARN_ON_ONCE(pipe->funcs && pipe->funcs->cleanup_fb);
return drm_gem_simple_display_pipe_prepare_fb(pipe, state);
}
return pipe->funcs->prepare_fb(pipe, state);
}


@ -861,7 +861,7 @@ static int drm_syncobj_transfer_to_timeline(struct drm_file *file_private,
&fence);
if (ret)
goto err;
chain = kzalloc(sizeof(struct dma_fence_chain), GFP_KERNEL);
chain = dma_fence_chain_alloc();
if (!chain) {
ret = -ENOMEM;
goto err1;
@ -1402,10 +1402,10 @@ drm_syncobj_timeline_signal_ioctl(struct drm_device *dev, void *data,
goto err_points;
}
for (i = 0; i < args->count_handles; i++) {
chains[i] = kzalloc(sizeof(struct dma_fence_chain), GFP_KERNEL);
chains[i] = dma_fence_chain_alloc();
if (!chains[i]) {
for (j = 0; j < i; j++)
kfree(chains[j]);
dma_fence_chain_free(chains[j]);
ret = -ENOMEM;
goto err_chains;
}
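/*
 * The new helpers pair up like this (sketch): dma_fence_chain_alloc()
 * replaces the open-coded kzalloc(sizeof(struct dma_fence_chain), ...),
 * and an unused chain must now be released with dma_fence_chain_free()
 * rather than kfree(), e.g.:
 *
 *	struct dma_fence_chain *chain = dma_fence_chain_alloc();
 *
 *	if (!chain)
 *		return -ENOMEM;
 *	...
 *	dma_fence_chain_free(chain);
 */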


@ -1737,6 +1737,15 @@ static void drm_wait_vblank_reply(struct drm_device *dev, unsigned int pipe,
reply->tval_usec = ts.tv_nsec / 1000;
}
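/*
 * Vblank waits used to be gated on dev->irq_enabled. Only legacy (UMS)
 * drivers still signal vblank support that way; for everything else the
 * check below keys off drm_dev_has_vblank() instead.
 */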
static bool drm_wait_vblank_supported(struct drm_device *dev)
{
if (IS_ENABLED(CONFIG_DRM_LEGACY)) {
if (unlikely(drm_core_check_feature(dev, DRIVER_LEGACY)))
return dev->irq_enabled;
}
return drm_dev_has_vblank(dev);
}
int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
@ -1748,7 +1757,7 @@ int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
unsigned int pipe_index;
unsigned int flags, pipe, high_pipe;
if (!dev->irq_enabled)
if (!drm_wait_vblank_supported(dev))
return -EOPNOTSUPP;
if (vblwait->request.type & _DRM_VBLANK_SIGNAL)
@ -2023,7 +2032,7 @@ int drm_crtc_get_sequence_ioctl(struct drm_device *dev, void *data,
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EOPNOTSUPP;
if (!dev->irq_enabled)
if (!drm_dev_has_vblank(dev))
return -EOPNOTSUPP;
crtc = drm_crtc_find(dev, file_priv, get_seq->crtc_id);
@ -2082,7 +2091,7 @@ int drm_crtc_queue_sequence_ioctl(struct drm_device *dev, void *data,
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EOPNOTSUPP;
if (!dev->irq_enabled)
if (!drm_dev_has_vblank(dev))
return -EOPNOTSUPP;
crtc = drm_crtc_find(dev, file_priv, queue_seq->crtc_id);


@ -190,7 +190,8 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
msecs_to_jiffies(500), NULL, dev_name(gpu->dev));
msecs_to_jiffies(500), NULL, NULL,
dev_name(gpu->dev));
if (ret)
return ret;


@ -300,16 +300,6 @@ static int exynos_drm_bind(struct device *dev)
drm_mode_config_reset(drm);
/*
* enable drm irq mode.
* - with irq_enabled = true, we can use the vblank feature.
*
* P.S. note that we wouldn't use drm irq handler but
* just specific driver own one instead because
* drm framework supports only one irq handler.
*/
drm->irq_enabled = true;
/* init kms poll for handling hpd */
drm_kms_helper_poll_init(drm);


@ -113,11 +113,11 @@ static void oaktrail_lvds_mode_set(struct drm_encoder *encoder,
/* Find the connector we're trying to set up */
list_for_each_entry(connector, &mode_config->connector_list, head) {
if (!connector->encoder || connector->encoder->crtc != crtc)
continue;
if (connector->encoder && connector->encoder->crtc == crtc)
break;
}
if (!connector) {
if (list_entry_is_head(connector, &mode_config->connector_list, head)) {
DRM_ERROR("Couldn't find connector when setting mode");
gma_power_end(dev);
return;


@ -364,7 +364,6 @@ static void gud_debugfs_init(struct drm_minor *minor)
static const struct drm_simple_display_pipe_funcs gud_pipe_funcs = {
.check = gud_pipe_check,
.update = gud_pipe_update,
.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
};
static const struct drm_mode_config_funcs gud_mode_config_funcs = {
@ -394,14 +393,42 @@ static const struct drm_driver gud_drm_driver = {
.minor = 0,
};
static void gud_free_buffers_and_mutex(struct drm_device *drm, void *unused)
static int gud_alloc_bulk_buffer(struct gud_device *gdrm)
{
struct gud_device *gdrm = to_gud_device(drm);
unsigned int i, num_pages;
struct page **pages;
void *ptr;
int ret;
gdrm->bulk_buf = vmalloc_32(gdrm->bulk_len);
if (!gdrm->bulk_buf)
return -ENOMEM;
num_pages = DIV_ROUND_UP(gdrm->bulk_len, PAGE_SIZE);
pages = kmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
if (!pages)
return -ENOMEM;
for (i = 0, ptr = gdrm->bulk_buf; i < num_pages; i++, ptr += PAGE_SIZE)
pages[i] = vmalloc_to_page(ptr);
ret = sg_alloc_table_from_pages(&gdrm->bulk_sgt, pages, num_pages,
0, gdrm->bulk_len, GFP_KERNEL);
kfree(pages);
return ret;
}
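/*
 * Design note: vmalloc_32() keeps the pages 32-bit DMA addressable for
 * USB while avoiding one huge physically contiguous allocation; the
 * scatter-gather table built above then lets the USB core stream
 * directly out of those pages.
 */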
static void gud_free_buffers_and_mutex(void *data)
{
struct gud_device *gdrm = data;
vfree(gdrm->compress_buf);
kfree(gdrm->bulk_buf);
gdrm->compress_buf = NULL;
sg_free_table(&gdrm->bulk_sgt);
vfree(gdrm->bulk_buf);
gdrm->bulk_buf = NULL;
mutex_destroy(&gdrm->ctrl_lock);
mutex_destroy(&gdrm->damage_lock);
}
static int gud_probe(struct usb_interface *intf, const struct usb_device_id *id)
@ -455,7 +482,7 @@ static int gud_probe(struct usb_interface *intf, const struct usb_device_id *id)
INIT_WORK(&gdrm->work, gud_flush_work);
gud_clear_damage(gdrm);
ret = drmm_add_action_or_reset(drm, gud_free_buffers_and_mutex, NULL);
ret = devm_add_action(dev, gud_free_buffers_and_mutex, gdrm);
if (ret)
return ret;
@ -536,24 +563,17 @@ static int gud_probe(struct usb_interface *intf, const struct usb_device_id *id)
if (desc.max_buffer_size)
max_buffer_size = le32_to_cpu(desc.max_buffer_size);
retry:
/*
* Use plain kmalloc here since devm_kmalloc() places struct devres at the beginning
* of the buffer it allocates. This wastes a lot of memory when allocating big buffers.
* Asking for 2M would actually allocate 4M. This would also prevent getting the biggest
* possible buffer potentially leading to split transfers.
*/
gdrm->bulk_buf = kmalloc(max_buffer_size, GFP_KERNEL | __GFP_NOWARN);
if (!gdrm->bulk_buf) {
max_buffer_size = roundup_pow_of_two(max_buffer_size) / 2;
if (max_buffer_size < SZ_512K)
return -ENOMEM;
goto retry;
}
/* Prevent a misbehaving device from allocating loads of RAM. 4096x4096@XRGB8888 = 64 MB */
if (max_buffer_size > SZ_64M)
max_buffer_size = SZ_64M;
gdrm->bulk_pipe = usb_sndbulkpipe(interface_to_usbdev(intf), usb_endpoint_num(bulk_out));
gdrm->bulk_len = max_buffer_size;
ret = gud_alloc_bulk_buffer(gdrm);
if (ret)
return ret;
if (gdrm->compression & GUD_COMPRESSION_LZ4) {
gdrm->lz4_comp_mem = devm_kmalloc(dev, LZ4_MEM_COMPRESS, GFP_KERNEL);
if (!gdrm->lz4_comp_mem)
@ -640,6 +660,7 @@ static int gud_resume(struct usb_interface *intf)
static const struct usb_device_id gud_id_table[] = {
{ USB_DEVICE_INTERFACE_CLASS(0x1d50, 0x614d, USB_CLASS_VENDOR_SPEC) },
{ USB_DEVICE_INTERFACE_CLASS(0x16d0, 0x10a9, USB_CLASS_VENDOR_SPEC) },
{ }
};


@ -5,6 +5,7 @@
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/scatterlist.h>
#include <linux/usb.h>
#include <linux/workqueue.h>
#include <uapi/drm/drm_fourcc.h>
@ -26,6 +27,7 @@ struct gud_device {
unsigned int bulk_pipe;
void *bulk_buf;
size_t bulk_len;
struct sg_table bulk_sgt;
u8 compression;
void *lz4_comp_mem;


@ -23,6 +23,19 @@
#include "gud_internal.h"
/*
* Some userspace rendering loops run all displays in the same loop.
* This means that a fast display will have to wait for a slow one.
* For this reason gud does its flushing asynchronously by default.
* The downside is that in e.g. a single-display setup userspace thinks
* the display is insanely fast, since the driver reports back immediately
* that the flush/pageflip is done. This wastes CPU and power.
* Such users might want to set this module parameter to false.
*/
static bool gud_async_flush = true;
module_param_named(async_flush, gud_async_flush, bool, 0644);
MODULE_PARM_DESC(async_flush, "Enable asynchronous flushing [default=true]");
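/*
 * E.g. (sketch) synchronous flushing can be requested at load time with
 * "modprobe gud async_flush=0", or toggled at runtime through
 * /sys/module/gud/parameters/async_flush.
 */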
/*
* FIXME: The driver is probably broken on Big Endian machines.
* See discussion:
@ -220,13 +233,51 @@ static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
return ret;
}
struct gud_usb_bulk_context {
struct timer_list timer;
struct usb_sg_request sgr;
};
static void gud_usb_bulk_timeout(struct timer_list *t)
{
struct gud_usb_bulk_context *ctx = from_timer(ctx, t, timer);
usb_sg_cancel(&ctx->sgr);
}
static int gud_usb_bulk(struct gud_device *gdrm, size_t len)
{
struct gud_usb_bulk_context ctx;
int ret;
ret = usb_sg_init(&ctx.sgr, gud_to_usb_device(gdrm), gdrm->bulk_pipe, 0,
gdrm->bulk_sgt.sgl, gdrm->bulk_sgt.nents, len, GFP_KERNEL);
if (ret)
return ret;
timer_setup_on_stack(&ctx.timer, gud_usb_bulk_timeout, 0);
mod_timer(&ctx.timer, jiffies + msecs_to_jiffies(3000));
usb_sg_wait(&ctx.sgr);
if (!del_timer_sync(&ctx.timer))
ret = -ETIMEDOUT;
else if (ctx.sgr.status < 0)
ret = ctx.sgr.status;
else if (ctx.sgr.bytes != len)
ret = -EIO;
destroy_timer_on_stack(&ctx.timer);
return ret;
}
static int gud_flush_rect(struct gud_device *gdrm, struct drm_framebuffer *fb,
const struct drm_format_info *format, struct drm_rect *rect)
{
struct usb_device *usb = gud_to_usb_device(gdrm);
struct gud_set_buffer_req req;
int ret, actual_length;
size_t len, trlen;
int ret;
drm_dbg(&gdrm->drm, "Flushing [FB:%d] " DRM_RECT_FMT "\n", fb->base.id, DRM_RECT_ARG(rect));
@ -255,10 +306,7 @@ static int gud_flush_rect(struct gud_device *gdrm, struct drm_framebuffer *fb,
return ret;
}
ret = usb_bulk_msg(usb, gdrm->bulk_pipe, gdrm->bulk_buf, trlen,
&actual_length, msecs_to_jiffies(3000));
if (!ret && trlen != actual_length)
ret = -EIO;
ret = gud_usb_bulk(gdrm, trlen);
if (ret)
gdrm->stats_num_errors++;
@ -543,6 +591,8 @@ void gud_pipe_update(struct drm_simple_display_pipe *pipe,
if (gdrm->flags & GUD_DISPLAY_FLAG_FULL_UPDATE)
drm_rect_init(&damage, 0, 0, fb->width, fb->height);
gud_fb_queue_damage(gdrm, fb, &damage);
if (!gud_async_flush)
flush_work(&gdrm->work);
}
if (!crtc->state->enable)


@ -152,8 +152,7 @@ static const struct drm_plane_funcs hibmc_plane_funcs = {
};
static const struct drm_plane_helper_funcs hibmc_plane_helper_funcs = {
.prepare_fb = drm_gem_vram_plane_helper_prepare_fb,
.cleanup_fb = drm_gem_vram_plane_helper_cleanup_fb,
DRM_GEM_VRAM_PLANE_HELPER_FUNCS,
.atomic_check = hibmc_plane_atomic_check,
.atomic_update = hibmc_plane_atomic_update,
};


@ -19,7 +19,6 @@
#include <drm/drm_drv.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_irq.h>
#include <drm/drm_managed.h>
#include <drm/drm_vblank.h>
@ -28,7 +27,7 @@
DEFINE_DRM_GEM_FOPS(hibmc_fops);
static irqreturn_t hibmc_drm_interrupt(int irq, void *arg)
static irqreturn_t hibmc_interrupt(int irq, void *arg)
{
struct drm_device *dev = (struct drm_device *)arg;
struct hibmc_drm_private *priv = to_hibmc_drm_private(dev);
@ -63,7 +62,6 @@ static const struct drm_driver hibmc_driver = {
.dumb_create = hibmc_dumb_create,
.dumb_map_offset = drm_gem_ttm_dumb_map_offset,
.gem_prime_mmap = drm_gem_prime_mmap,
.irq_handler = hibmc_drm_interrupt,
};
static int __maybe_unused hibmc_pm_suspend(struct device *dev)
@ -251,10 +249,12 @@ static int hibmc_hw_init(struct hibmc_drm_private *priv)
static int hibmc_unload(struct drm_device *dev)
{
struct hibmc_drm_private *priv = to_hibmc_drm_private(dev);
struct pci_dev *pdev = to_pci_dev(dev->dev);
drm_atomic_helper_shutdown(dev);
if (dev->irq_enabled)
drm_irq_uninstall(dev);
free_irq(pdev->irq, dev);
pci_disable_msi(to_pci_dev(dev->dev));
@ -291,7 +291,9 @@ static int hibmc_load(struct drm_device *dev)
if (ret) {
drm_warn(dev, "enabling MSI failed: %d\n", ret);
} else {
ret = drm_irq_install(dev, pdev->irq);
/* PCI devices require shared interrupts. */
ret = request_irq(pdev->irq, hibmc_interrupt, IRQF_SHARED,
dev->driver->name, dev);
if (ret)
drm_warn(dev, "install irq failed: %d\n", ret);
}
@ -314,7 +316,7 @@ static int hibmc_pci_probe(struct pci_dev *pdev,
struct drm_device *dev;
int ret;
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, "hibmcdrmfb");
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &hibmc_driver);
if (ret)
return ret;


@ -185,8 +185,6 @@ static int kirin_drm_kms_init(struct drm_device *dev,
DRM_ERROR("failed to initialize vblank.\n");
goto err_unbind_all;
}
/* with irq_enabled = true, we can use the vblank feature. */
dev->irq_enabled = true;
/* reset all the states of crtc/plane/encoder/connector */
drm_mode_config_reset(dev);


@ -82,7 +82,7 @@ static int hyperv_setup_gen1(struct hyperv_drm_device *hv)
return -ENODEV;
}
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, "hypervdrmfb");
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &hyperv_driver);
if (ret) {
drm_err(dev, "Not able to remove boot fb\n");
return ret;
@ -127,7 +127,7 @@ static int hyperv_setup_gen2(struct hyperv_drm_device *hv,
drm_aperture_remove_conflicting_framebuffers(screen_info.lfb_base,
screen_info.lfb_size,
false,
"hypervdrmfb");
&hyperv_driver);
hv->fb_size = (unsigned long)hv->mmio_megabytes * 1024 * 1024;


@ -155,6 +155,7 @@ gem-y += \
gem/i915_gem_stolen.o \
gem/i915_gem_throttle.o \
gem/i915_gem_tiling.o \
gem/i915_gem_ttm.o \
gem/i915_gem_userptr.o \
gem/i915_gem_wait.o \
gem/i915_gemfs.o


@ -11778,7 +11778,7 @@ intel_user_framebuffer_create(struct drm_device *dev,
/* object is backed with LMEM for discrete */
i915 = to_i915(obj->base.dev);
if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj)) {
if (HAS_LMEM(i915) && !i915_gem_object_validates_to_lmem(obj)) {
/* object is "remote", not in local memory */
i915_gem_object_put(obj);
return ERR_PTR(-EREMOTE);


@ -314,7 +314,7 @@ struct intel_panel {
/* DPCD backlight */
union {
struct {
u8 pwmgen_bit_count;
struct drm_edp_backlight_info info;
} vesa;
struct {
bool sdr_uses_aux;


@ -107,7 +107,7 @@ intel_dp_aux_supports_hdr_backlight(struct intel_connector *connector)
u8 tcon_cap[4];
ret = drm_dp_dpcd_read(aux, INTEL_EDP_HDR_TCON_CAP0, tcon_cap, sizeof(tcon_cap));
if (ret < 0)
if (ret != sizeof(tcon_cap))
return false;
if (!(tcon_cap[1] & INTEL_EDP_HDR_TCON_BRIGHTNESS_NITS_CAP))
@ -137,7 +137,7 @@ intel_dp_aux_hdr_get_backlight(struct intel_connector *connector, enum pipe pipe
u8 tmp;
u8 buf[2] = { 0 };
if (drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &tmp) < 0) {
if (drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &tmp) != 1) {
drm_err(&i915->drm, "Failed to read current backlight mode from DPCD\n");
return 0;
}
@ -153,7 +153,8 @@ intel_dp_aux_hdr_get_backlight(struct intel_connector *connector, enum pipe pipe
return panel->backlight.max;
}
if (drm_dp_dpcd_read(&intel_dp->aux, INTEL_EDP_BRIGHTNESS_NITS_LSB, buf, sizeof(buf)) < 0) {
if (drm_dp_dpcd_read(&intel_dp->aux, INTEL_EDP_BRIGHTNESS_NITS_LSB, buf,
sizeof(buf)) != sizeof(buf)) {
drm_err(&i915->drm, "Failed to read brightness from DPCD\n");
return 0;
}
@ -172,7 +173,8 @@ intel_dp_aux_hdr_set_aux_backlight(const struct drm_connector_state *conn_state,
buf[0] = level & 0xFF;
buf[1] = (level & 0xFF00) >> 8;
if (drm_dp_dpcd_write(&intel_dp->aux, INTEL_EDP_BRIGHTNESS_NITS_LSB, buf, 4) < 0)
if (drm_dp_dpcd_write(&intel_dp->aux, INTEL_EDP_BRIGHTNESS_NITS_LSB, buf,
sizeof(buf)) != sizeof(buf))
drm_err(dev, "Failed to write brightness level to DPCD\n");
}
@ -203,7 +205,7 @@ intel_dp_aux_hdr_enable_backlight(const struct intel_crtc_state *crtc_state,
u8 old_ctrl, ctrl;
ret = drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &old_ctrl);
if (ret < 0) {
if (ret != 1) {
drm_err(&i915->drm, "Failed to read current backlight control mode: %d\n", ret);
return;
}
@ -221,7 +223,7 @@ intel_dp_aux_hdr_enable_backlight(const struct intel_crtc_state *crtc_state,
}
if (ctrl != old_ctrl)
if (drm_dp_dpcd_writeb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, ctrl) < 0)
if (drm_dp_dpcd_writeb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, ctrl) != 1)
drm_err(&i915->drm, "Failed to configure DPCD brightness controls\n");
}
@ -268,153 +270,19 @@ intel_dp_aux_hdr_setup_backlight(struct intel_connector *connector, enum pipe pi
}
/* VESA backlight callbacks */
static void set_vesa_backlight_enable(struct intel_dp *intel_dp, bool enable)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
u8 reg_val = 0;
/* Early return when display use other mechanism to enable backlight. */
if (!(intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP))
return;
if (drm_dp_dpcd_readb(&intel_dp->aux, DP_EDP_DISPLAY_CONTROL_REGISTER,
&reg_val) < 0) {
drm_dbg_kms(&i915->drm, "Failed to read DPCD register 0x%x\n",
DP_EDP_DISPLAY_CONTROL_REGISTER);
return;
}
if (enable)
reg_val |= DP_EDP_BACKLIGHT_ENABLE;
else
reg_val &= ~(DP_EDP_BACKLIGHT_ENABLE);
if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_EDP_DISPLAY_CONTROL_REGISTER,
reg_val) != 1) {
drm_dbg_kms(&i915->drm, "Failed to %s aux backlight\n",
enabledisable(enable));
}
}
static bool intel_dp_aux_vesa_backlight_dpcd_mode(struct intel_connector *connector)
{
struct intel_dp *intel_dp = intel_attached_dp(connector);
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
u8 mode_reg;
if (drm_dp_dpcd_readb(&intel_dp->aux,
DP_EDP_BACKLIGHT_MODE_SET_REGISTER,
&mode_reg) != 1) {
drm_dbg_kms(&i915->drm,
"Failed to read the DPCD register 0x%x\n",
DP_EDP_BACKLIGHT_MODE_SET_REGISTER);
return false;
}
return (mode_reg & DP_EDP_BACKLIGHT_CONTROL_MODE_MASK) ==
DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD;
}
/*
* Read the current backlight value from DPCD register(s) based
* on if 8-bit(MSB) or 16-bit(MSB and LSB) values are supported
*/
static u32 intel_dp_aux_vesa_get_backlight(struct intel_connector *connector, enum pipe unused)
{
struct intel_dp *intel_dp = intel_attached_dp(connector);
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
u8 read_val[2] = { 0x0 };
u16 level = 0;
/*
* If we're not in DPCD control mode yet, the programmed brightness
* value is meaningless and we should assume max brightness
*/
if (!intel_dp_aux_vesa_backlight_dpcd_mode(connector))
return connector->panel.backlight.max;
if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB,
&read_val, sizeof(read_val)) < 0) {
drm_dbg_kms(&i915->drm, "Failed to read DPCD register 0x%x\n",
DP_EDP_BACKLIGHT_BRIGHTNESS_MSB);
return 0;
}
level = read_val[0];
if (intel_dp->edp_dpcd[2] & DP_EDP_BACKLIGHT_BRIGHTNESS_BYTE_COUNT)
level = (read_val[0] << 8 | read_val[1]);
return level;
return connector->panel.backlight.level;
}
/*
* Sends the current backlight level over the aux channel, checking if its using
* 8-bit or 16 bit value (MSB and LSB)
*/
static void
intel_dp_aux_vesa_set_backlight(const struct drm_connector_state *conn_state,
u32 level)
intel_dp_aux_vesa_set_backlight(const struct drm_connector_state *conn_state, u32 level)
{
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct intel_dp *intel_dp = intel_attached_dp(connector);
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
u8 vals[2] = { 0x0 };
struct intel_panel *panel = &connector->panel;
struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
vals[0] = level;
/* Write the MSB and/or LSB */
if (intel_dp->edp_dpcd[2] & DP_EDP_BACKLIGHT_BRIGHTNESS_BYTE_COUNT) {
vals[0] = (level & 0xFF00) >> 8;
vals[1] = (level & 0xFF);
}
if (drm_dp_dpcd_write(&intel_dp->aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB,
vals, sizeof(vals)) < 0) {
drm_dbg_kms(&i915->drm,
"Failed to write aux backlight level\n");
return;
}
}
/*
* Set PWM Frequency divider to match desired frequency in vbt.
* The PWM Frequency is calculated as 27Mhz / (F x P).
* - Where F = PWM Frequency Pre-Divider value programmed by field 7:0 of the
* EDP_BACKLIGHT_FREQ_SET register (DPCD Address 00728h)
* - Where P = 2^Pn, where Pn is the value programmed by field 4:0 of the
* EDP_PWMGEN_BIT_COUNT register (DPCD Address 00724h)
*/
static bool intel_dp_aux_vesa_set_pwm_freq(struct intel_connector *connector)
{
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_attached_dp(connector);
const u8 pn = connector->panel.backlight.edp.vesa.pwmgen_bit_count;
int freq, fxp, f, fxp_actual, fxp_min, fxp_max;
freq = dev_priv->vbt.backlight.pwm_freq_hz;
if (!freq) {
drm_dbg_kms(&dev_priv->drm,
"Use panel default backlight frequency\n");
return false;
}
fxp = DIV_ROUND_CLOSEST(KHz(DP_EDP_BACKLIGHT_FREQ_BASE_KHZ), freq);
f = clamp(DIV_ROUND_CLOSEST(fxp, 1 << pn), 1, 255);
fxp_actual = f << pn;
/* Ensure frequency is within 25% of desired value */
fxp_min = DIV_ROUND_CLOSEST(fxp * 3, 4);
fxp_max = DIV_ROUND_CLOSEST(fxp * 5, 4);
if (fxp_min > fxp_actual || fxp_actual > fxp_max) {
drm_dbg_kms(&dev_priv->drm, "Actual frequency out of range\n");
return false;
}
if (drm_dp_dpcd_writeb(&intel_dp->aux,
DP_EDP_BACKLIGHT_FREQ_SET, (u8) f) < 0) {
drm_dbg_kms(&dev_priv->drm,
"Failed to write aux backlight freq\n");
return false;
}
return true;
drm_edp_backlight_set_level(&intel_dp->aux, &panel->backlight.edp.vesa.info, level);
}
static void
@ -422,159 +290,46 @@ intel_dp_aux_vesa_enable_backlight(const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state, u32 level)
{
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct intel_dp *intel_dp = intel_attached_dp(connector);
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_panel *panel = &connector->panel;
u8 dpcd_buf, new_dpcd_buf, edp_backlight_mode;
u8 pwmgen_bit_count = panel->backlight.edp.vesa.pwmgen_bit_count;
struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
if (drm_dp_dpcd_readb(&intel_dp->aux,
DP_EDP_BACKLIGHT_MODE_SET_REGISTER, &dpcd_buf) != 1) {
drm_dbg_kms(&i915->drm, "Failed to read DPCD register 0x%x\n",
DP_EDP_BACKLIGHT_MODE_SET_REGISTER);
return;
}
new_dpcd_buf = dpcd_buf;
edp_backlight_mode = dpcd_buf & DP_EDP_BACKLIGHT_CONTROL_MODE_MASK;
switch (edp_backlight_mode) {
case DP_EDP_BACKLIGHT_CONTROL_MODE_PWM:
case DP_EDP_BACKLIGHT_CONTROL_MODE_PRESET:
case DP_EDP_BACKLIGHT_CONTROL_MODE_PRODUCT:
new_dpcd_buf &= ~DP_EDP_BACKLIGHT_CONTROL_MODE_MASK;
new_dpcd_buf |= DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD;
if (drm_dp_dpcd_writeb(&intel_dp->aux,
DP_EDP_PWMGEN_BIT_COUNT,
pwmgen_bit_count) < 0)
drm_dbg_kms(&i915->drm,
"Failed to write aux pwmgen bit count\n");
break;
/* Do nothing when it is already DPCD mode */
case DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD:
default:
break;
}
if (intel_dp->edp_dpcd[2] & DP_EDP_BACKLIGHT_FREQ_AUX_SET_CAP)
if (intel_dp_aux_vesa_set_pwm_freq(connector))
new_dpcd_buf |= DP_EDP_BACKLIGHT_FREQ_AUX_SET_ENABLE;
if (new_dpcd_buf != dpcd_buf) {
if (drm_dp_dpcd_writeb(&intel_dp->aux,
DP_EDP_BACKLIGHT_MODE_SET_REGISTER, new_dpcd_buf) < 0) {
drm_dbg_kms(&i915->drm,
"Failed to write aux backlight mode\n");
}
}
intel_dp_aux_vesa_set_backlight(conn_state, level);
set_vesa_backlight_enable(intel_dp, true);
drm_edp_backlight_enable(&intel_dp->aux, &panel->backlight.edp.vesa.info, level);
}
static void intel_dp_aux_vesa_disable_backlight(const struct drm_connector_state *old_conn_state,
u32 level)
{
set_vesa_backlight_enable(enc_to_intel_dp(to_intel_encoder(old_conn_state->best_encoder)),
false);
struct intel_connector *connector = to_intel_connector(old_conn_state->connector);
struct intel_panel *panel = &connector->panel;
struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
drm_edp_backlight_disable(&intel_dp->aux, &panel->backlight.edp.vesa.info);
}
static u32 intel_dp_aux_vesa_calc_max_backlight(struct intel_connector *connector)
static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector, enum pipe pipe)
{
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_attached_dp(connector);
struct intel_panel *panel = &connector->panel;
u32 max_backlight = 0;
int freq, fxp, fxp_min, fxp_max, fxp_actual, f = 1;
u8 pn, pn_min, pn_max;
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
u16 current_level;
u8 current_mode;
int ret;
if (drm_dp_dpcd_readb(&intel_dp->aux, DP_EDP_PWMGEN_BIT_COUNT, &pn) == 1) {
pn &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
max_backlight = (1 << pn) - 1;
}
/* Find desired value of (F x P)
* Note that, if F x P is out of supported range, the maximum value or
* minimum value will be applied automatically. So no need to check that.
*/
freq = i915->vbt.backlight.pwm_freq_hz;
drm_dbg_kms(&i915->drm, "VBT defined backlight frequency %u Hz\n",
freq);
if (!freq) {
drm_dbg_kms(&i915->drm,
"Use panel default backlight frequency\n");
return max_backlight;
}
fxp = DIV_ROUND_CLOSEST(KHz(DP_EDP_BACKLIGHT_FREQ_BASE_KHZ), freq);
/* Use highest possible value of Pn for more granularity of brightness
* adjustment while satisfying the conditions below.
* - Pn is in the range of Pn_min and Pn_max
* - F is in the range of 1 and 255
* - FxP is within 25% of desired value.
* Note: 25% is arbitrary value and may need some tweak.
*/
if (drm_dp_dpcd_readb(&intel_dp->aux,
DP_EDP_PWMGEN_BIT_COUNT_CAP_MIN, &pn_min) != 1) {
drm_dbg_kms(&i915->drm,
"Failed to read pwmgen bit count cap min\n");
return max_backlight;
}
if (drm_dp_dpcd_readb(&intel_dp->aux,
DP_EDP_PWMGEN_BIT_COUNT_CAP_MAX, &pn_max) != 1) {
drm_dbg_kms(&i915->drm,
"Failed to read pwmgen bit count cap max\n");
return max_backlight;
}
pn_min &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
pn_max &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
fxp_min = DIV_ROUND_CLOSEST(fxp * 3, 4);
fxp_max = DIV_ROUND_CLOSEST(fxp * 5, 4);
if (fxp_min < (1 << pn_min) || (255 << pn_max) < fxp_max) {
drm_dbg_kms(&i915->drm,
"VBT defined backlight frequency out of range\n");
return max_backlight;
}
for (pn = pn_max; pn >= pn_min; pn--) {
f = clamp(DIV_ROUND_CLOSEST(fxp, 1 << pn), 1, 255);
fxp_actual = f << pn;
if (fxp_min <= fxp_actual && fxp_actual <= fxp_max)
break;
}
drm_dbg_kms(&i915->drm, "Using eDP pwmgen bit count of %d\n", pn);
if (drm_dp_dpcd_writeb(&intel_dp->aux,
DP_EDP_PWMGEN_BIT_COUNT, pn) < 0) {
drm_dbg_kms(&i915->drm,
"Failed to write aux pwmgen bit count\n");
return max_backlight;
}
panel->backlight.edp.vesa.pwmgen_bit_count = pn;
max_backlight = (1 << pn) - 1;
return max_backlight;
}
static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector,
enum pipe pipe)
{
struct intel_panel *panel = &connector->panel;
panel->backlight.max = intel_dp_aux_vesa_calc_max_backlight(connector);
if (!panel->backlight.max)
return -ENODEV;
ret = drm_edp_backlight_init(&intel_dp->aux, &panel->backlight.edp.vesa.info,
i915->vbt.backlight.pwm_freq_hz, intel_dp->edp_dpcd,
&current_level, &current_mode);
if (ret < 0)
return ret;
panel->backlight.max = panel->backlight.edp.vesa.info.max;
panel->backlight.min = 0;
panel->backlight.level = intel_dp_aux_vesa_get_backlight(connector, pipe);
panel->backlight.enabled = intel_dp_aux_vesa_backlight_dpcd_mode(connector) &&
panel->backlight.level != 0;
if (current_mode == DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD) {
panel->backlight.level = current_level;
panel->backlight.enabled = panel->backlight.level != 0;
} else {
panel->backlight.level = panel->backlight.max;
panel->backlight.enabled = false;
}
return 0;
}
@ -585,16 +340,12 @@ intel_dp_aux_supports_vesa_backlight(struct intel_connector *connector)
struct intel_dp *intel_dp = intel_attached_dp(connector);
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
/* Check the eDP Display control capabilities registers to determine if
* the panel can support backlight control over the aux channel.
*
* TODO: We currently only support AUX only backlight configurations, not backlights which
/* TODO: We currently only support AUX only backlight configurations, not backlights which
* require a mix of PWM and AUX controls to work. In the mean time, these machines typically
* work just fine using normal PWM controls anyway.
*/
if (intel_dp->edp_dpcd[1] & DP_EDP_TCON_BACKLIGHT_ADJUSTMENT_CAP &&
(intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP) &&
(intel_dp->edp_dpcd[2] & DP_EDP_BACKLIGHT_BRIGHTNESS_AUX_SET_CAP)) {
if ((intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP) &&
drm_edp_backlight_supported(intel_dp->edp_dpcd)) {
drm_dbg_kms(&i915->drm, "AUX Backlight Control Supported!\n");
return true;
}


@ -85,13 +85,10 @@ i915_gem_setup(struct drm_i915_gem_object *obj, u64 size)
return -E2BIG;
/*
* For now resort to CPU based clearing for device local-memory, in the
* near future this will use the blitter engine for accelerated, GPU
* based clearing.
* I915_BO_ALLOC_USER will make sure the object is cleared before
* any user access.
*/
flags = 0;
if (mr->type == INTEL_MEMORY_LOCAL)
flags = I915_BO_ALLOC_CPU_CLEAR;
flags = I915_BO_ALLOC_USER;
ret = mr->ops->init_object(mr, obj, size, flags);
if (ret)


@ -2983,7 +2983,7 @@ __free_fence_array(struct eb_fence *fences, unsigned int n)
while (n--) {
drm_syncobj_put(ptr_mask_bits(fences[n].syncobj, 2));
dma_fence_put(fences[n].dma_fence);
kfree(fences[n].chain_fence);
dma_fence_chain_free(fences[n].chain_fence);
}
kvfree(fences);
}
@ -3097,9 +3097,7 @@ add_timeline_fence_array(struct i915_execbuffer *eb,
return -EINVAL;
}
f->chain_fence =
kmalloc(sizeof(*f->chain_fence),
GFP_KERNEL);
f->chain_fence = dma_fence_chain_alloc();
if (!f->chain_fence) {
drm_syncobj_put(syncobj);
dma_fence_put(fence);


@ -4,74 +4,10 @@
*/
#include "intel_memory_region.h"
#include "intel_region_ttm.h"
#include "gem/i915_gem_region.h"
#include "gem/i915_gem_lmem.h"
#include "i915_drv.h"
static void lmem_put_pages(struct drm_i915_gem_object *obj,
struct sg_table *pages)
{
intel_region_ttm_node_free(obj->mm.region, obj->mm.st_mm_node);
obj->mm.dirty = false;
sg_free_table(pages);
kfree(pages);
}
static int lmem_get_pages(struct drm_i915_gem_object *obj)
{
unsigned int flags;
struct sg_table *pages;
flags = I915_ALLOC_MIN_PAGE_SIZE;
if (obj->flags & I915_BO_ALLOC_CONTIGUOUS)
flags |= I915_ALLOC_CONTIGUOUS;
obj->mm.st_mm_node = intel_region_ttm_node_alloc(obj->mm.region,
obj->base.size,
flags);
if (IS_ERR(obj->mm.st_mm_node))
return PTR_ERR(obj->mm.st_mm_node);
/* Range manager is always contiguous */
if (obj->mm.region->is_range_manager)
obj->flags |= I915_BO_ALLOC_CONTIGUOUS;
pages = intel_region_ttm_node_to_st(obj->mm.region, obj->mm.st_mm_node);
if (IS_ERR(pages)) {
intel_region_ttm_node_free(obj->mm.region, obj->mm.st_mm_node);
return PTR_ERR(pages);
}
__i915_gem_object_set_pages(obj, pages, i915_sg_dma_sizes(pages->sgl));
if (obj->flags & I915_BO_ALLOC_CPU_CLEAR) {
void __iomem *vaddr =
i915_gem_object_lmem_io_map(obj, 0, obj->base.size);
if (!vaddr) {
struct sg_table *pages =
__i915_gem_object_unset_pages(obj);
if (!IS_ERR_OR_NULL(pages))
lmem_put_pages(obj, pages);
}
memset_io(vaddr, 0, obj->base.size);
io_mapping_unmap(vaddr);
}
return 0;
}
const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = {
.name = "i915_gem_object_lmem",
.flags = I915_GEM_OBJECT_HAS_IOMEM,
.get_pages = lmem_get_pages,
.put_pages = lmem_put_pages,
.release = i915_gem_object_release_memory_region,
};
void __iomem *
i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
unsigned long n,
@ -87,10 +23,50 @@ i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
return io_mapping_map_wc(&obj->mm.region->iomap, offset, size);
}
/**
* i915_gem_object_validates_to_lmem - Whether the object is resident in
* lmem when pages are present.
* @obj: The object to check.
*
* Migratable objects' residency may change from under us if the object is
* not pinned or locked. This function is intended to be used to check whether
* the object can only reside in lmem when pages are present.
*
* Return: Whether the object is always resident in lmem when pages are
* present.
*/
bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj)
{
struct intel_memory_region *mr = READ_ONCE(obj->mm.region);
return !i915_gem_object_migratable(obj) &&
mr && (mr->type == INTEL_MEMORY_LOCAL ||
mr->type == INTEL_MEMORY_STOLEN_LOCAL);
}
/**
* i915_gem_object_is_lmem - Whether the object is resident in
* lmem
* @obj: The object to check.
*
* Even if an object is allowed to migrate and change memory region,
* this function checks whether it will always be present in lmem when
* valid *or* if that's not the case, whether it's currently resident in lmem.
* For migratable and evictable objects, the latter only makes sense when
* the object is locked.
*
* Return: Whether the object is migratable but resident in lmem, or not
* migratable and will be present in lmem when valid.
*/
bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj)
{
struct intel_memory_region *mr = obj->mm.region;
struct intel_memory_region *mr = READ_ONCE(obj->mm.region);
#ifdef CONFIG_LOCKDEP
if (i915_gem_object_migratable(obj) &&
i915_gem_object_evictable(obj))
assert_object_held(obj);
#endif
return mr && (mr->type == INTEL_MEMORY_LOCAL ||
mr->type == INTEL_MEMORY_STOLEN_LOCAL);
}
@ -103,23 +79,3 @@ i915_gem_object_create_lmem(struct drm_i915_private *i915,
return i915_gem_object_create_region(i915->mm.regions[INTEL_REGION_LMEM],
size, flags);
}
int __i915_gem_lmem_object_init(struct intel_memory_region *mem,
struct drm_i915_gem_object *obj,
resource_size_t size,
unsigned int flags)
{
static struct lock_class_key lock_class;
struct drm_i915_private *i915 = mem->i915;
drm_gem_private_object_init(&i915->drm, &obj->base, size);
i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class, flags);
obj->read_domains = I915_GEM_DOMAIN_WC | I915_GEM_DOMAIN_GTT;
i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
i915_gem_object_init_memory_region(obj, mem);
return 0;
}


@ -26,9 +26,4 @@ i915_gem_object_create_lmem(struct drm_i915_private *i915,
resource_size_t size,
unsigned int flags);
int __i915_gem_lmem_object_init(struct intel_memory_region *mem,
struct drm_i915_gem_object *obj,
resource_size_t size,
unsigned int flags);
#endif /* !__I915_GEM_LMEM_H */


@ -19,6 +19,7 @@
#include "i915_gem_mman.h"
#include "i915_trace.h"
#include "i915_user_extensions.h"
#include "i915_gem_ttm.h"
#include "i915_vma.h"
static inline bool
@ -624,6 +625,8 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
struct i915_mmap_offset *mmo;
int err;
GEM_BUG_ON(obj->ops->mmap_offset || obj->ops->mmap_ops);
mmo = lookup_mmo(obj, mmap_type);
if (mmo)
goto out;
@ -666,40 +669,47 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
}
static int
__assign_mmap_offset(struct drm_file *file,
u32 handle,
__assign_mmap_offset(struct drm_i915_gem_object *obj,
enum i915_mmap_type mmap_type,
u64 *offset)
u64 *offset, struct drm_file *file)
{
struct i915_mmap_offset *mmo;
if (i915_gem_object_never_mmap(obj))
return -ENODEV;
if (obj->ops->mmap_offset) {
*offset = obj->ops->mmap_offset(obj);
return 0;
}
if (mmap_type != I915_MMAP_TYPE_GTT &&
!i915_gem_object_has_struct_page(obj) &&
!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
return -ENODEV;
mmo = mmap_offset_attach(obj, mmap_type, file);
if (IS_ERR(mmo))
return PTR_ERR(mmo);
*offset = drm_vma_node_offset_addr(&mmo->vma_node);
return 0;
}
static int
__assign_mmap_offset_handle(struct drm_file *file,
u32 handle,
enum i915_mmap_type mmap_type,
u64 *offset)
{
struct drm_i915_gem_object *obj;
struct i915_mmap_offset *mmo;
int err;
obj = i915_gem_object_lookup(file, handle);
if (!obj)
return -ENOENT;
if (i915_gem_object_never_mmap(obj)) {
err = -ENODEV;
goto out;
}
if (mmap_type != I915_MMAP_TYPE_GTT &&
!i915_gem_object_has_struct_page(obj) &&
!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) {
err = -ENODEV;
goto out;
}
mmo = mmap_offset_attach(obj, mmap_type, file);
if (IS_ERR(mmo)) {
err = PTR_ERR(mmo);
goto out;
}
*offset = drm_vma_node_offset_addr(&mmo->vma_node);
err = 0;
out:
err = __assign_mmap_offset(obj, mmap_type, offset, file);
i915_gem_object_put(obj);
return err;
}
@ -719,7 +729,7 @@ i915_gem_dumb_mmap_offset(struct drm_file *file,
else
mmap_type = I915_MMAP_TYPE_GTT;
return __assign_mmap_offset(file, handle, mmap_type, offset);
return __assign_mmap_offset_handle(file, handle, mmap_type, offset);
}
/**
@ -787,7 +797,7 @@ i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
return -EINVAL;
}
return __assign_mmap_offset(file, args->handle, type, &args->offset);
return __assign_mmap_offset_handle(file, args->handle, type, &args->offset);
}
static void vm_open(struct vm_area_struct *vma)
@ -891,8 +901,18 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
* destroyed and will be invalid when the vma manager lock
* is released.
*/
mmo = container_of(node, struct i915_mmap_offset, vma_node);
obj = i915_gem_object_get_rcu(mmo->obj);
if (!node->driver_private) {
mmo = container_of(node, struct i915_mmap_offset, vma_node);
obj = i915_gem_object_get_rcu(mmo->obj);
GEM_BUG_ON(obj && obj->ops->mmap_ops);
} else {
obj = i915_gem_object_get_rcu
(container_of(node, struct drm_i915_gem_object,
base.vma_node));
GEM_BUG_ON(obj && !obj->ops->mmap_ops);
}
}
drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
rcu_read_unlock();
@ -914,7 +934,9 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
}
vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_private_data = mmo;
if (i915_gem_object_has_iomem(obj))
vma->vm_flags |= VM_IO;
/*
* We keep the ref on mmo->obj, not vm_file, but we require
@ -928,6 +950,15 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
/* Drop the initial creation reference, the vma is now holding one. */
fput(anon);
if (obj->ops->mmap_ops) {
vma->vm_page_prot = pgprot_decrypted(vm_get_page_prot(vma->vm_flags));
vma->vm_ops = obj->ops->mmap_ops;
vma->vm_private_data = node->driver_private;
return 0;
}
vma->vm_private_data = mmo;
switch (mmo->mmap_type) {
case I915_MMAP_TYPE_WC:
vma->vm_page_prot =


@ -172,7 +172,7 @@ static void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *f
}
}
static void __i915_gem_free_object_rcu(struct rcu_head *head)
void __i915_gem_free_object_rcu(struct rcu_head *head)
{
struct drm_i915_gem_object *obj =
container_of(head, typeof(*obj), rcu);
@ -208,59 +208,69 @@ static void __i915_gem_object_free_mmaps(struct drm_i915_gem_object *obj)
}
}
void __i915_gem_free_object(struct drm_i915_gem_object *obj)
{
trace_i915_gem_object_destroy(obj);
if (!list_empty(&obj->vma.list)) {
struct i915_vma *vma;
/*
* Note that the vma keeps an object reference while
* it is active, so it *should* not sleep while we
* destroy it. Our debug code errs on the side of insisting it *might*.
* For the moment, play along.
*/
spin_lock(&obj->vma.lock);
while ((vma = list_first_entry_or_null(&obj->vma.list,
struct i915_vma,
obj_link))) {
GEM_BUG_ON(vma->obj != obj);
spin_unlock(&obj->vma.lock);
__i915_vma_put(vma);
spin_lock(&obj->vma.lock);
}
spin_unlock(&obj->vma.lock);
}
__i915_gem_object_free_mmaps(obj);
GEM_BUG_ON(!list_empty(&obj->lut_list));
atomic_set(&obj->mm.pages_pin_count, 0);
__i915_gem_object_put_pages(obj);
GEM_BUG_ON(i915_gem_object_has_pages(obj));
bitmap_free(obj->bit_17);
if (obj->base.import_attach)
drm_prime_gem_destroy(&obj->base, NULL);
drm_gem_free_mmap_offset(&obj->base);
if (obj->ops->release)
obj->ops->release(obj);
if (obj->mm.n_placements > 1)
kfree(obj->mm.placements);
if (obj->shares_resv_from)
i915_vm_resv_put(obj->shares_resv_from);
}
static void __i915_gem_free_objects(struct drm_i915_private *i915,
struct llist_node *freed)
{
struct drm_i915_gem_object *obj, *on;
llist_for_each_entry_safe(obj, on, freed, freed) {
trace_i915_gem_object_destroy(obj);
if (!list_empty(&obj->vma.list)) {
struct i915_vma *vma;
/*
* Note that the vma keeps an object reference while
* it is active, so it *should* not sleep while we
* destroy it. Our debug code errs on the side of insisting it *might*.
* For the moment, play along.
*/
spin_lock(&obj->vma.lock);
while ((vma = list_first_entry_or_null(&obj->vma.list,
struct i915_vma,
obj_link))) {
GEM_BUG_ON(vma->obj != obj);
spin_unlock(&obj->vma.lock);
__i915_vma_put(vma);
spin_lock(&obj->vma.lock);
}
spin_unlock(&obj->vma.lock);
might_sleep();
if (obj->ops->delayed_free) {
obj->ops->delayed_free(obj);
continue;
}
__i915_gem_object_free_mmaps(obj);
GEM_BUG_ON(!list_empty(&obj->lut_list));
atomic_set(&obj->mm.pages_pin_count, 0);
__i915_gem_object_put_pages(obj);
GEM_BUG_ON(i915_gem_object_has_pages(obj));
bitmap_free(obj->bit_17);
if (obj->base.import_attach)
drm_prime_gem_destroy(&obj->base, NULL);
drm_gem_free_mmap_offset(&obj->base);
if (obj->ops->release)
obj->ops->release(obj);
if (obj->mm.n_placements > 1)
kfree(obj->mm.placements);
if (obj->shares_resv_from)
i915_vm_resv_put(obj->shares_resv_from);
__i915_gem_free_object(obj);
/* But keep the pointer alive for RCU-protected lookups */
call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
@@ -318,6 +328,7 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
* worker and performing frees directly from subsequent allocations for
* crude but effective memory throttling.
*/
if (llist_add(&obj->freed, &i915->mm.free_list))
queue_work(i915->wq, &i915->mm.free_work);
}
@@ -410,6 +421,60 @@ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset,
return 0;
}
/**
* i915_gem_object_evictable - Whether object is likely evictable after unbind.
* @obj: The object to check
*
* This function checks whether the object is likely evictable after unbind.
* If the object is not locked when checking, the result is only advisory.
* If the object is locked when checking, and the function returns true,
* then an eviction should indeed be possible. But since unlocked vma
* unpinning and unbinding is currently possible, the object can actually
* become evictable even if this function returns false.
*
* Return: true if the object may be evictable. False otherwise.
*/
bool i915_gem_object_evictable(struct drm_i915_gem_object *obj)
{
struct i915_vma *vma;
int pin_count = atomic_read(&obj->mm.pages_pin_count);
if (!pin_count)
return true;
spin_lock(&obj->vma.lock);
list_for_each_entry(vma, &obj->vma.list, obj_link) {
if (i915_vma_is_pinned(vma)) {
spin_unlock(&obj->vma.lock);
return false;
}
if (atomic_read(&vma->pages_count))
pin_count--;
}
spin_unlock(&obj->vma.lock);
GEM_WARN_ON(pin_count < 0);
return pin_count == 0;
}
/**
* i915_gem_object_migratable - Whether the object is migratable out of the
* current region.
* @obj: Pointer to the object.
*
* Return: Whether the object is allowed to be resident in other
* regions than the current while pages are present.
*/
bool i915_gem_object_migratable(struct drm_i915_gem_object *obj)
{
struct intel_memory_region *mr = READ_ONCE(obj->mm.region);
if (!mr)
return false;
return obj->mm.n_placements > 1;
}
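As a usage illustration only (not part of this diff): the two helpers above could be combined by a caller deciding whether an object can leave its current region. A minimal sketch, assuming a locked object pointer obj; the function name is hypothetical:

static bool example_can_leave_region(struct drm_i915_gem_object *obj)
{
	/* The object must be allowed to reside outside obj->mm.region... */
	if (!i915_gem_object_migratable(obj))
		return false;

	/* ...and must look unpinnable right now. Advisory unless locked. */
	return i915_gem_object_evictable(obj);
}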
void i915_gem_init__objects(struct drm_i915_private *i915)
{
INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);


@@ -200,6 +200,9 @@ static inline bool i915_gem_object_trylock(struct drm_i915_gem_object *obj)
static inline void i915_gem_object_unlock(struct drm_i915_gem_object *obj)
{
if (obj->ops->adjust_lru)
obj->ops->adjust_lru(obj);
dma_resv_unlock(obj->base.resv);
}
@@ -339,14 +342,14 @@ struct scatterlist *
__i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
struct i915_gem_object_page_iter *iter,
unsigned int n,
unsigned int *offset, bool allow_alloc);
unsigned int *offset, bool allow_alloc, bool dma);
static inline struct scatterlist *
i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
unsigned int n,
unsigned int *offset, bool allow_alloc)
{
return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, allow_alloc);
return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, allow_alloc, false);
}
static inline struct scatterlist *
@@ -354,7 +357,7 @@ i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj,
unsigned int n,
unsigned int *offset, bool allow_alloc)
{
return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, allow_alloc);
return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, allow_alloc, true);
}
struct page *
@@ -587,6 +590,16 @@ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset,
bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj);
void __i915_gem_free_object_rcu(struct rcu_head *head);
void __i915_gem_free_object(struct drm_i915_gem_object *obj);
bool i915_gem_object_evictable(struct drm_i915_gem_object *obj);
bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
bool i915_gem_object_validates_to_lmem(struct drm_i915_gem_object *obj);
#ifdef CONFIG_MMU_NOTIFIER
static inline bool
i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)


@@ -61,10 +61,26 @@ struct drm_i915_gem_object_ops {
const struct drm_i915_gem_pread *arg);
int (*pwrite)(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pwrite *arg);
u64 (*mmap_offset)(struct drm_i915_gem_object *obj);
int (*dmabuf_export)(struct drm_i915_gem_object *obj);
/**
* adjust_lru - notify that the madvise value was updated
* @obj: The gem object
*
* The madvise value may have been updated, or the object may have been
* recently referenced, so act accordingly (perhaps by changing an LRU list, etc.).
*/
void (*adjust_lru)(struct drm_i915_gem_object *obj);
/**
* delayed_free - Override the default delayed free implementation
*/
void (*delayed_free)(struct drm_i915_gem_object *obj);
void (*release)(struct drm_i915_gem_object *obj);
const struct vm_operations_struct *mmap_ops;
const char *name; /* friendly name for debug, e.g. lockdep classes */
};
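To make the new optional hooks concrete, here is a hedged sketch of a backend ops table wiring them up. All example_* names are hypothetical; the real user introduced by this series is i915_gem_ttm_obj_ops in the new TTM backend further down.

/* Hypothetical backend, for illustration only. */
static void example_adjust_lru(struct drm_i915_gem_object *obj);
static void example_delayed_free(struct drm_i915_gem_object *obj);
static u64 example_mmap_offset(struct drm_i915_gem_object *obj);
static const struct vm_operations_struct example_vm_ops;

static const struct drm_i915_gem_object_ops example_obj_ops = {
	.name = "example_backend",
	.mmap_offset = example_mmap_offset,	/* backend-provided mmap offset */
	.adjust_lru = example_adjust_lru,	/* LRU bookkeeping on madvise/reference */
	.delayed_free = example_delayed_free,	/* overrides the default delayed free */
	.mmap_ops = &example_vm_ops,		/* vm_ops installed by i915_gem_mmap() */
};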
@@ -187,12 +203,14 @@ struct drm_i915_gem_object {
#define I915_BO_ALLOC_VOLATILE BIT(1)
#define I915_BO_ALLOC_STRUCT_PAGE BIT(2)
#define I915_BO_ALLOC_CPU_CLEAR BIT(3)
#define I915_BO_ALLOC_USER BIT(4)
#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
I915_BO_ALLOC_VOLATILE | \
I915_BO_ALLOC_STRUCT_PAGE | \
I915_BO_ALLOC_CPU_CLEAR)
#define I915_BO_READONLY BIT(4)
#define I915_TILING_QUIRK_BIT 5 /* unknown swizzling; do not release! */
I915_BO_ALLOC_CPU_CLEAR | \
I915_BO_ALLOC_USER)
#define I915_BO_READONLY BIT(5)
#define I915_TILING_QUIRK_BIT 6 /* unknown swizzling; do not release! */
/*
* Is the object to be mapped as read-only to the GPU
@@ -310,6 +328,12 @@ struct drm_i915_gem_object {
bool dirty:1;
} mm;
struct {
struct sg_table *cached_io_st;
struct i915_gem_object_page_iter get_io_page;
bool created:1;
} ttm;
/** Record of address bit 17 of each page at last unbind. */
unsigned long *bit_17;


@@ -467,9 +467,8 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
struct i915_gem_object_page_iter *iter,
unsigned int n,
unsigned int *offset,
bool allow_alloc)
bool allow_alloc, bool dma)
{
const bool dma = iter == &obj->mm.get_dma_page;
struct scatterlist *sg;
unsigned int idx, count;


@@ -18,11 +18,7 @@ void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
mutex_lock(&mem->objects.lock);
if (obj->flags & I915_BO_ALLOC_VOLATILE)
list_add(&obj->mm.region_link, &mem->objects.purgeable);
else
list_add(&obj->mm.region_link, &mem->objects.list);
list_add(&obj->mm.region_link, &mem->objects.list);
mutex_unlock(&mem->objects.lock);
}


@@ -0,0 +1,647 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2021 Intel Corporation
*/
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>
#include "i915_drv.h"
#include "intel_memory_region.h"
#include "intel_region_ttm.h"
#include "gem/i915_gem_object.h"
#include "gem/i915_gem_region.h"
#include "gem/i915_gem_ttm.h"
#include "gem/i915_gem_mman.h"
#define I915_PL_LMEM0 TTM_PL_PRIV
#define I915_PL_SYSTEM TTM_PL_SYSTEM
#define I915_PL_STOLEN TTM_PL_VRAM
#define I915_PL_GGTT TTM_PL_TT
#define I915_TTM_PRIO_PURGE 0
#define I915_TTM_PRIO_NO_PAGES 1
#define I915_TTM_PRIO_HAS_PAGES 2
/**
* struct i915_ttm_tt - TTM page vector with additional private information
* @ttm: The base TTM page vector.
* @dev: The struct device used for dma mapping and unmapping.
* @cached_st: The cached scatter-gather table.
*
* Note that DMA may be going on right up to the point where the page-
* vector is unpopulated in delayed destroy. Hence keep the
* scatter-gather table mapped and cached up to that point. This is
* different from the cached gem object io scatter-gather table which
* doesn't have an associated dma mapping.
*/
struct i915_ttm_tt {
struct ttm_tt ttm;
struct device *dev;
struct sg_table *cached_st;
};
static const struct ttm_place lmem0_sys_placement_flags[] = {
{
.fpfn = 0,
.lpfn = 0,
.mem_type = I915_PL_LMEM0,
.flags = 0,
}, {
.fpfn = 0,
.lpfn = 0,
.mem_type = I915_PL_SYSTEM,
.flags = 0,
}
};
static struct ttm_placement i915_lmem0_placement = {
.num_placement = 1,
.placement = &lmem0_sys_placement_flags[0],
.num_busy_placement = 1,
.busy_placement = &lmem0_sys_placement_flags[0],
};
static struct ttm_placement i915_sys_placement = {
.num_placement = 1,
.placement = &lmem0_sys_placement_flags[1],
.num_busy_placement = 1,
.busy_placement = &lmem0_sys_placement_flags[1],
};
static void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
uint32_t page_flags)
{
struct ttm_resource_manager *man =
ttm_manager_type(bo->bdev, bo->resource->mem_type);
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
struct i915_ttm_tt *i915_tt;
int ret;
i915_tt = kzalloc(sizeof(*i915_tt), GFP_KERNEL);
if (!i915_tt)
return NULL;
if (obj->flags & I915_BO_ALLOC_CPU_CLEAR &&
man->use_tt)
page_flags |= TTM_PAGE_FLAG_ZERO_ALLOC;
ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, ttm_write_combined);
if (ret) {
kfree(i915_tt);
return NULL;
}
i915_tt->dev = obj->base.dev->dev;
return &i915_tt->ttm;
}
static void i915_ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm)
{
struct i915_ttm_tt *i915_tt = container_of(ttm, typeof(*i915_tt), ttm);
if (i915_tt->cached_st) {
dma_unmap_sgtable(i915_tt->dev, i915_tt->cached_st,
DMA_BIDIRECTIONAL, 0);
sg_free_table(i915_tt->cached_st);
kfree(i915_tt->cached_st);
i915_tt->cached_st = NULL;
}
ttm_pool_free(&bdev->pool, ttm);
}
static void i915_ttm_tt_destroy(struct ttm_device *bdev, struct ttm_tt *ttm)
{
struct i915_ttm_tt *i915_tt = container_of(ttm, typeof(*i915_tt), ttm);
ttm_tt_destroy_common(bdev, ttm);
kfree(i915_tt);
}
static bool i915_ttm_eviction_valuable(struct ttm_buffer_object *bo,
const struct ttm_place *place)
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
/* Will do for now. Our pinned objects are still on TTM's LRU lists */
if (!i915_gem_object_evictable(obj))
return false;
/* This isn't valid with a buddy allocator */
return ttm_bo_eviction_valuable(bo, place);
}
static void i915_ttm_evict_flags(struct ttm_buffer_object *bo,
struct ttm_placement *placement)
{
*placement = i915_sys_placement;
}
static int i915_ttm_move_notify(struct ttm_buffer_object *bo)
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
int ret;
ret = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
if (ret)
return ret;
ret = __i915_gem_object_put_pages(obj);
if (ret)
return ret;
return 0;
}
static void i915_ttm_free_cached_io_st(struct drm_i915_gem_object *obj)
{
struct radix_tree_iter iter;
void __rcu **slot;
if (!obj->ttm.cached_io_st)
return;
rcu_read_lock();
radix_tree_for_each_slot(slot, &obj->ttm.get_io_page.radix, &iter, 0)
radix_tree_delete(&obj->ttm.get_io_page.radix, iter.index);
rcu_read_unlock();
sg_free_table(obj->ttm.cached_io_st);
kfree(obj->ttm.cached_io_st);
obj->ttm.cached_io_st = NULL;
}
static void i915_ttm_purge(struct drm_i915_gem_object *obj)
{
struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false,
};
struct ttm_placement place = {};
int ret;
if (obj->mm.madv == __I915_MADV_PURGED)
return;
/* TTM's purge interface. Note that we might be reentering. */
ret = ttm_bo_validate(bo, &place, &ctx);
if (!ret) {
i915_ttm_free_cached_io_st(obj);
obj->mm.madv = __I915_MADV_PURGED;
}
}
static void i915_ttm_swap_notify(struct ttm_buffer_object *bo)
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
int ret = i915_ttm_move_notify(bo);
GEM_WARN_ON(ret);
GEM_WARN_ON(obj->ttm.cached_io_st);
if (!ret && obj->mm.madv != I915_MADV_WILLNEED)
i915_ttm_purge(obj);
}
static void i915_ttm_delete_mem_notify(struct ttm_buffer_object *bo)
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
if (likely(obj)) {
/* This releases all gem object bindings to the backend. */
__i915_gem_free_object(obj);
}
}
static struct intel_memory_region *
i915_ttm_region(struct ttm_device *bdev, int ttm_mem_type)
{
struct drm_i915_private *i915 = container_of(bdev, typeof(*i915), bdev);
/* There's some room for optimization here... */
GEM_BUG_ON(ttm_mem_type != I915_PL_SYSTEM &&
ttm_mem_type < I915_PL_LMEM0);
if (ttm_mem_type == I915_PL_SYSTEM)
return intel_memory_region_lookup(i915, INTEL_MEMORY_SYSTEM,
0);
return intel_memory_region_lookup(i915, INTEL_MEMORY_LOCAL,
ttm_mem_type - I915_PL_LMEM0);
}
static struct sg_table *i915_ttm_tt_get_st(struct ttm_tt *ttm)
{
struct i915_ttm_tt *i915_tt = container_of(ttm, typeof(*i915_tt), ttm);
struct scatterlist *sg;
struct sg_table *st;
int ret;
if (i915_tt->cached_st)
return i915_tt->cached_st;
st = kzalloc(sizeof(*st), GFP_KERNEL);
if (!st)
return ERR_PTR(-ENOMEM);
sg = __sg_alloc_table_from_pages
(st, ttm->pages, ttm->num_pages, 0,
(unsigned long)ttm->num_pages << PAGE_SHIFT,
i915_sg_segment_size(), NULL, 0, GFP_KERNEL);
if (IS_ERR(sg)) {
kfree(st);
return ERR_CAST(sg);
}
ret = dma_map_sgtable(i915_tt->dev, st, DMA_BIDIRECTIONAL, 0);
if (ret) {
sg_free_table(st);
kfree(st);
return ERR_PTR(ret);
}
i915_tt->cached_st = st;
return st;
}
static struct sg_table *
i915_ttm_resource_get_st(struct drm_i915_gem_object *obj,
struct ttm_resource *res)
{
struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
struct ttm_resource_manager *man =
ttm_manager_type(bo->bdev, res->mem_type);
if (man->use_tt)
return i915_ttm_tt_get_st(bo->ttm);
return intel_region_ttm_node_to_st(obj->mm.region, res);
}
static int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
struct ttm_operation_ctx *ctx,
struct ttm_resource *dst_mem,
struct ttm_place *hop)
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
struct ttm_resource_manager *dst_man =
ttm_manager_type(bo->bdev, dst_mem->mem_type);
struct ttm_resource_manager *src_man =
ttm_manager_type(bo->bdev, bo->resource->mem_type);
struct intel_memory_region *dst_reg, *src_reg;
union {
struct ttm_kmap_iter_tt tt;
struct ttm_kmap_iter_iomap io;
} _dst_iter, _src_iter;
struct ttm_kmap_iter *dst_iter, *src_iter;
struct sg_table *dst_st;
int ret;
dst_reg = i915_ttm_region(bo->bdev, dst_mem->mem_type);
src_reg = i915_ttm_region(bo->bdev, bo->resource->mem_type);
GEM_BUG_ON(!dst_reg || !src_reg);
/* Sync for now. We could do the actual copy async. */
ret = ttm_bo_wait_ctx(bo, ctx);
if (ret)
return ret;
ret = i915_ttm_move_notify(bo);
if (ret)
return ret;
if (obj->mm.madv != I915_MADV_WILLNEED) {
i915_ttm_purge(obj);
ttm_resource_free(bo, &dst_mem);
return 0;
}
/* Populate ttm with pages if needed. Typically system memory. */
if (bo->ttm && (dst_man->use_tt ||
(bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED))) {
ret = ttm_tt_populate(bo->bdev, bo->ttm, ctx);
if (ret)
return ret;
}
dst_st = i915_ttm_resource_get_st(obj, dst_mem);
if (IS_ERR(dst_st))
return PTR_ERR(dst_st);
/* If we start mapping GGTT, we can no longer use man::use_tt here. */
dst_iter = dst_man->use_tt ?
ttm_kmap_iter_tt_init(&_dst_iter.tt, bo->ttm) :
ttm_kmap_iter_iomap_init(&_dst_iter.io, &dst_reg->iomap,
dst_st, dst_reg->region.start);
src_iter = src_man->use_tt ?
ttm_kmap_iter_tt_init(&_src_iter.tt, bo->ttm) :
ttm_kmap_iter_iomap_init(&_src_iter.io, &src_reg->iomap,
obj->ttm.cached_io_st,
src_reg->region.start);
ttm_move_memcpy(bo, dst_mem->num_pages, dst_iter, src_iter);
ttm_bo_move_sync_cleanup(bo, dst_mem);
i915_ttm_free_cached_io_st(obj);
if (!dst_man->use_tt) {
obj->ttm.cached_io_st = dst_st;
obj->ttm.get_io_page.sg_pos = dst_st->sgl;
obj->ttm.get_io_page.sg_idx = 0;
}
return 0;
}
static int i915_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *mem)
{
if (mem->mem_type < I915_PL_LMEM0)
return 0;
mem->bus.caching = ttm_write_combined;
mem->bus.is_iomem = true;
return 0;
}
static unsigned long i915_ttm_io_mem_pfn(struct ttm_buffer_object *bo,
unsigned long page_offset)
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
unsigned long base = obj->mm.region->iomap.base - obj->mm.region->region.start;
struct scatterlist *sg;
unsigned int ofs;
GEM_WARN_ON(bo->ttm);
sg = __i915_gem_object_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs, true, true);
return ((base + sg_dma_address(sg)) >> PAGE_SHIFT) + ofs;
}
static struct ttm_device_funcs i915_ttm_bo_driver = {
.ttm_tt_create = i915_ttm_tt_create,
.ttm_tt_unpopulate = i915_ttm_tt_unpopulate,
.ttm_tt_destroy = i915_ttm_tt_destroy,
.eviction_valuable = i915_ttm_eviction_valuable,
.evict_flags = i915_ttm_evict_flags,
.move = i915_ttm_move,
.swap_notify = i915_ttm_swap_notify,
.delete_mem_notify = i915_ttm_delete_mem_notify,
.io_mem_reserve = i915_ttm_io_mem_reserve,
.io_mem_pfn = i915_ttm_io_mem_pfn,
};
/**
* i915_ttm_driver - Return a pointer to the TTM device funcs
*
* Return: Pointer to statically allocated TTM device funcs.
*/
struct ttm_device_funcs *i915_ttm_driver(void)
{
return &i915_ttm_bo_driver;
}
static int i915_ttm_get_pages(struct drm_i915_gem_object *obj)
{
struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false,
};
struct sg_table *st;
int ret;
/* Move to the requested placement. */
ret = ttm_bo_validate(bo, &i915_lmem0_placement, &ctx);
if (ret)
return ret == -ENOSPC ? -ENXIO : ret;
/* Object either has a page vector or is an iomem object */
st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
if (IS_ERR(st))
return PTR_ERR(st);
__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
i915_ttm_adjust_lru(obj);
return ret;
}
static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
struct sg_table *st)
{
/*
* We're currently not called from a shrinker, so put_pages()
* typically means the object is about to destroyed, or called
* from move_notify(). So just avoid doing much for now.
* If the object is not destroyed next, The TTM eviction logic
* and shrinkers will move it out if needed.
*/
i915_ttm_adjust_lru(obj);
}
static void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
{
struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
/*
* Don't manipulate the TTM LRUs while in TTM bo destruction.
* We're called through i915_ttm_delete_mem_notify().
*/
if (!kref_read(&bo->kref))
return;
/*
* Put on the correct LRU list depending on the MADV status
*/
spin_lock(&bo->bdev->lru_lock);
if (obj->mm.madv != I915_MADV_WILLNEED) {
bo->priority = I915_TTM_PRIO_PURGE;
} else if (!i915_gem_object_has_pages(obj)) {
if (bo->priority < I915_TTM_PRIO_HAS_PAGES)
bo->priority = I915_TTM_PRIO_HAS_PAGES;
} else {
if (bo->priority > I915_TTM_PRIO_NO_PAGES)
bo->priority = I915_TTM_PRIO_NO_PAGES;
}
ttm_bo_move_to_lru_tail(bo, bo->resource, NULL);
spin_unlock(&bo->bdev->lru_lock);
}
/*
* TTM-backed gem object destruction requires some clarification.
* Basically we have two possibilities here. We can either rely on the
* i915 delayed destruction and put the TTM object when the object
* is idle. This would be detected by TTM which would bypass the
* TTM delayed destroy handling. The other approach is to put the TTM
* object early and rely on the TTM destroyed handling, and then free
* the leftover parts of the GEM object once TTM's destroyed list handling is
* complete. For now, we rely on the latter for two reasons:
* a) TTM can evict an object even when it's on the delayed destroy list,
* which in theory allows for complete eviction.
* b) There is work going on in TTM to allow freeing an object even when
* it's not idle, and using the TTM destroyed list handling could help us
* benefit from that.
*/
static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
{
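	/*
	 * Fully initialized bos take TTM's destroyed-list path through the
	 * final ttm_bo_put(); partially initialized objects (!obj->ttm.created)
	 * never completed ttm_bo_init() and are freed directly.
	 */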
if (obj->ttm.created) {
ttm_bo_put(i915_gem_to_ttm(obj));
} else {
__i915_gem_free_object(obj);
call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
}
}
static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
{
struct vm_area_struct *area = vmf->vma;
struct drm_i915_gem_object *obj =
i915_ttm_to_gem(area->vm_private_data);
/* Sanity check that we allow writing into this object */
if (unlikely(i915_gem_object_is_readonly(obj) &&
area->vm_flags & VM_WRITE))
return VM_FAULT_SIGBUS;
return ttm_bo_vm_fault(vmf);
}
static int
vm_access_ttm(struct vm_area_struct *area, unsigned long addr,
void *buf, int len, int write)
{
struct drm_i915_gem_object *obj =
i915_ttm_to_gem(area->vm_private_data);
if (i915_gem_object_is_readonly(obj) && write)
return -EACCES;
return ttm_bo_vm_access(area, addr, buf, len, write);
}
static void ttm_vm_open(struct vm_area_struct *vma)
{
struct drm_i915_gem_object *obj =
i915_ttm_to_gem(vma->vm_private_data);
GEM_BUG_ON(!obj);
i915_gem_object_get(obj);
}
static void ttm_vm_close(struct vm_area_struct *vma)
{
struct drm_i915_gem_object *obj =
i915_ttm_to_gem(vma->vm_private_data);
GEM_BUG_ON(!obj);
i915_gem_object_put(obj);
}
static const struct vm_operations_struct vm_ops_ttm = {
.fault = vm_fault_ttm,
.access = vm_access_ttm,
.open = ttm_vm_open,
.close = ttm_vm_close,
};
static u64 i915_ttm_mmap_offset(struct drm_i915_gem_object *obj)
{
/* The ttm_bo must be allocated with I915_BO_ALLOC_USER */
GEM_BUG_ON(!drm_mm_node_allocated(&obj->base.vma_node.vm_node));
return drm_vma_node_offset_addr(&obj->base.vma_node);
}
const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
.name = "i915_gem_object_ttm",
.flags = I915_GEM_OBJECT_HAS_IOMEM,
.get_pages = i915_ttm_get_pages,
.put_pages = i915_ttm_put_pages,
.truncate = i915_ttm_purge,
.adjust_lru = i915_ttm_adjust_lru,
.delayed_free = i915_ttm_delayed_free,
.mmap_offset = i915_ttm_mmap_offset,
.mmap_ops = &vm_ops_ttm,
};
void i915_ttm_bo_destroy(struct ttm_buffer_object *bo)
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
i915_gem_object_release_memory_region(obj);
mutex_destroy(&obj->ttm.get_io_page.lock);
if (obj->ttm.created)
call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
}
/**
* __i915_gem_ttm_object_init - Initialize a ttm-backed i915 gem object
* @mem: The initial memory region for the object.
* @obj: The gem object.
* @size: Object size in bytes.
* @flags: gem object flags.
*
* Return: 0 on success, negative error code on failure.
*/
int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
struct drm_i915_gem_object *obj,
resource_size_t size,
unsigned int flags)
{
static struct lock_class_key lock_class;
struct drm_i915_private *i915 = mem->i915;
enum ttm_bo_type bo_type;
size_t alignment = 0;
int ret;
/* Adjust alignment to GPU- and CPU huge page sizes. */
if (mem->is_range_manager) {
if (size >= SZ_1G)
alignment = SZ_1G >> PAGE_SHIFT;
else if (size >= SZ_2M)
alignment = SZ_2M >> PAGE_SHIFT;
else if (size >= SZ_64K)
alignment = SZ_64K >> PAGE_SHIFT;
}
drm_gem_private_object_init(&i915->drm, &obj->base, size);
i915_gem_object_init(obj, &i915_gem_ttm_obj_ops, &lock_class, flags);
i915_gem_object_init_memory_region(obj, mem);
i915_gem_object_make_unshrinkable(obj);
obj->read_domains = I915_GEM_DOMAIN_WC | I915_GEM_DOMAIN_GTT;
i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
INIT_RADIX_TREE(&obj->ttm.get_io_page.radix, GFP_KERNEL | __GFP_NOWARN);
mutex_init(&obj->ttm.get_io_page.lock);
bo_type = (obj->flags & I915_BO_ALLOC_USER) ? ttm_bo_type_device :
ttm_bo_type_kernel;
/*
* If this function fails, it will call the destructor, but
* our caller still owns the object. So no freeing in the
* destructor until obj->ttm.created is true.
* Similarly, in delayed_free, we can't call ttm_bo_put()
* until successful initialization.
*/
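	/*
	 * vma_node.driver_private is what lets i915_gem_mmap() recognize
	 * TTM-backed objects and route the mapping through obj->ops->mmap_ops.
	 */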
obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
ret = ttm_bo_init(&i915->bdev, i915_gem_to_ttm(obj), size,
bo_type, &i915_sys_placement, alignment,
true, NULL, NULL, i915_ttm_bo_destroy);
if (!ret)
obj->ttm.created = true;
/* i915 wants -ENXIO when out of memory region space. */
return (ret == -ENOSPC) ? -ENXIO : ret;
}
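For orientation, a minimal sketch of how a memory-region backend hands object initialization to this function; the example_init_object name is invented, everything else is from this diff. The signature matches the .init_object hook that intel_region_lmem_ops is switched to further down.

/* Sketch: a region backend delegating to the TTM object backend. */
static int example_init_object(struct intel_memory_region *mem,
			       struct drm_i915_gem_object *obj,
			       resource_size_t size,
			       unsigned int flags)
{
	/* -ENXIO from the TTM backend means the region is out of space. */
	return __i915_gem_ttm_object_init(mem, obj, size, flags);
}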


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2021 Intel Corporation
*/
#ifndef _I915_GEM_TTM_H_
#define _I915_GEM_TTM_H_
#include "gem/i915_gem_object_types.h"
/**
* i915_gem_to_ttm - Convert a struct drm_i915_gem_object to a
* struct ttm_buffer_object.
* @obj: Pointer to the gem object.
*
* Return: Pointer to the embedded struct ttm_buffer_object.
*/
static inline struct ttm_buffer_object *
i915_gem_to_ttm(struct drm_i915_gem_object *obj)
{
return &obj->__do_not_access;
}
/*
* i915 ttm gem object destructor. Internal use only.
*/
void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
/**
* i915_ttm_to_gem - Convert a struct ttm_buffer_object to an embedding
* struct drm_i915_gem_object.
*
* Return: Pointer to the embedding struct drm_i915_gem_object, or NULL
* if the object was not an i915 ttm object.
*/
static inline struct drm_i915_gem_object *
i915_ttm_to_gem(struct ttm_buffer_object *bo)
{
if (GEM_WARN_ON(bo->destroy != i915_ttm_bo_destroy))
return NULL;
return container_of(bo, struct drm_i915_gem_object, __do_not_access);
}
int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
struct drm_i915_gem_object *obj,
resource_size_t size,
unsigned int flags);
#endif
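A quick illustration of the two converters above; a sketch only, assuming obj is a TTM-backed i915 object:

	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);

	/* i915_ttm_to_gem() warns and returns NULL for foreign bos. */
	GEM_BUG_ON(i915_ttm_to_gem(bo) != obj);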


@@ -578,16 +578,17 @@ static bool assert_mmap_offset(struct drm_i915_private *i915,
int expected)
{
struct drm_i915_gem_object *obj;
struct i915_mmap_offset *mmo;
u64 offset;
int ret;
obj = i915_gem_object_create_internal(i915, size);
if (IS_ERR(obj))
return false;
return expected && expected == PTR_ERR(obj);
mmo = mmap_offset_attach(obj, I915_MMAP_OFFSET_GTT, NULL);
ret = __assign_mmap_offset(obj, I915_MMAP_TYPE_GTT, &offset, NULL);
i915_gem_object_put(obj);
return PTR_ERR_OR_ZERO(mmo) == expected;
return ret == expected;
}
static void disable_retire_worker(struct drm_i915_private *i915)
@@ -622,8 +623,8 @@ static int igt_mmap_offset_exhaustion(void *arg)
struct drm_mm *mm = &i915->drm.vma_offset_manager->vm_addr_space_mm;
struct drm_i915_gem_object *obj;
struct drm_mm_node *hole, *next;
struct i915_mmap_offset *mmo;
int loop, err = 0;
u64 offset;
/* Disable background reaper */
disable_retire_worker(i915);
@@ -684,13 +685,13 @@ static int igt_mmap_offset_exhaustion(void *arg)
obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
pr_err("Unable to create object for reclaimed hole\n");
goto out;
}
mmo = mmap_offset_attach(obj, I915_MMAP_OFFSET_GTT, NULL);
if (IS_ERR(mmo)) {
err = __assign_mmap_offset(obj, I915_MMAP_TYPE_GTT, &offset, NULL);
if (err) {
pr_err("Unable to insert object into reclaimed hole\n");
err = PTR_ERR(mmo);
goto err_obj;
}
@@ -865,10 +866,10 @@ static int __igt_mmap(struct drm_i915_private *i915,
struct drm_i915_gem_object *obj,
enum i915_mmap_type type)
{
struct i915_mmap_offset *mmo;
struct vm_area_struct *area;
unsigned long addr;
int err, i;
u64 offset;
if (!can_mmap(obj, type))
return 0;
@@ -879,11 +880,11 @@ static int __igt_mmap(struct drm_i915_private *i915,
if (err)
return err;
mmo = mmap_offset_attach(obj, type, NULL);
if (IS_ERR(mmo))
return PTR_ERR(mmo);
err = __assign_mmap_offset(obj, type, &offset, NULL);
if (err)
return err;
addr = igt_mmap_node(i915, &mmo->vma_node, 0, PROT_WRITE, MAP_SHARED);
addr = igt_mmap_offset(i915, offset, obj->base.size, PROT_WRITE, MAP_SHARED);
if (IS_ERR_VALUE(addr))
return addr;
@@ -897,13 +898,6 @@ static int __igt_mmap(struct drm_i915_private *i915,
goto out_unmap;
}
if (area->vm_private_data != mmo) {
pr_err("%s: vm_area_struct did not point back to our mmap_offset object!\n",
obj->mm.region->name);
err = -EINVAL;
goto out_unmap;
}
for (i = 0; i < obj->base.size / sizeof(u32); i++) {
u32 __user *ux = u64_to_user_ptr((u64)(addr + i * sizeof(*ux)));
u32 x;
@@ -961,7 +955,7 @@ static int igt_mmap(void *arg)
struct drm_i915_gem_object *obj;
int err;
obj = i915_gem_object_create_region(mr, sizes[i], 0);
obj = i915_gem_object_create_region(mr, sizes[i], I915_BO_ALLOC_USER);
if (obj == ERR_PTR(-ENODEV))
continue;
@@ -1004,12 +998,12 @@ static int __igt_mmap_access(struct drm_i915_private *i915,
struct drm_i915_gem_object *obj,
enum i915_mmap_type type)
{
struct i915_mmap_offset *mmo;
unsigned long __user *ptr;
unsigned long A, B;
unsigned long x, y;
unsigned long addr;
int err;
u64 offset;
memset(&A, 0xAA, sizeof(A));
memset(&B, 0xBB, sizeof(B));
@@ -1017,11 +1011,11 @@ static int __igt_mmap_access(struct drm_i915_private *i915,
if (!can_mmap(obj, type) || !can_access(obj))
return 0;
mmo = mmap_offset_attach(obj, type, NULL);
if (IS_ERR(mmo))
return PTR_ERR(mmo);
err = __assign_mmap_offset(obj, type, &offset, NULL);
if (err)
return err;
addr = igt_mmap_node(i915, &mmo->vma_node, 0, PROT_WRITE, MAP_SHARED);
addr = igt_mmap_offset(i915, offset, obj->base.size, PROT_WRITE, MAP_SHARED);
if (IS_ERR_VALUE(addr))
return addr;
ptr = (unsigned long __user *)addr;
@@ -1081,7 +1075,7 @@ static int igt_mmap_access(void *arg)
struct drm_i915_gem_object *obj;
int err;
obj = i915_gem_object_create_region(mr, PAGE_SIZE, 0);
obj = i915_gem_object_create_region(mr, PAGE_SIZE, I915_BO_ALLOC_USER);
if (obj == ERR_PTR(-ENODEV))
continue;
@@ -1111,11 +1105,11 @@ static int __igt_mmap_gpu(struct drm_i915_private *i915,
enum i915_mmap_type type)
{
struct intel_engine_cs *engine;
struct i915_mmap_offset *mmo;
unsigned long addr;
u32 __user *ux;
u32 bbe;
int err;
u64 offset;
/*
* Verify that the mmap access into the backing store aligns with
@@ -1132,11 +1126,11 @@ static int __igt_mmap_gpu(struct drm_i915_private *i915,
if (err)
return err;
mmo = mmap_offset_attach(obj, type, NULL);
if (IS_ERR(mmo))
return PTR_ERR(mmo);
err = __assign_mmap_offset(obj, type, &offset, NULL);
if (err)
return err;
addr = igt_mmap_node(i915, &mmo->vma_node, 0, PROT_WRITE, MAP_SHARED);
addr = igt_mmap_offset(i915, offset, obj->base.size, PROT_WRITE, MAP_SHARED);
if (IS_ERR_VALUE(addr))
return addr;
@@ -1226,7 +1220,7 @@ static int igt_mmap_gpu(void *arg)
struct drm_i915_gem_object *obj;
int err;
obj = i915_gem_object_create_region(mr, PAGE_SIZE, 0);
obj = i915_gem_object_create_region(mr, PAGE_SIZE, I915_BO_ALLOC_USER);
if (obj == ERR_PTR(-ENODEV))
continue;
@@ -1303,18 +1297,18 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
struct drm_i915_gem_object *obj,
enum i915_mmap_type type)
{
struct i915_mmap_offset *mmo;
unsigned long addr;
int err;
u64 offset;
if (!can_mmap(obj, type))
return 0;
mmo = mmap_offset_attach(obj, type, NULL);
if (IS_ERR(mmo))
return PTR_ERR(mmo);
err = __assign_mmap_offset(obj, type, &offset, NULL);
if (err)
return err;
addr = igt_mmap_node(i915, &mmo->vma_node, 0, PROT_WRITE, MAP_SHARED);
addr = igt_mmap_offset(i915, offset, obj->base.size, PROT_WRITE, MAP_SHARED);
if (IS_ERR_VALUE(addr))
return addr;
@@ -1350,10 +1344,20 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
}
}
err = check_absent(addr, obj->base.size);
if (err) {
pr_err("%s: was not absent\n", obj->mm.region->name);
goto out_unmap;
if (!obj->ops->mmap_ops) {
err = check_absent(addr, obj->base.size);
if (err) {
pr_err("%s: was not absent\n", obj->mm.region->name);
goto out_unmap;
}
} else {
/* ttm allows access to evicted regions by design */
err = check_present(addr, obj->base.size);
if (err) {
pr_err("%s: was not present\n", obj->mm.region->name);
goto out_unmap;
}
}
out_unmap:
@@ -1371,7 +1375,7 @@ static int igt_mmap_revoke(void *arg)
struct drm_i915_gem_object *obj;
int err;
obj = i915_gem_object_create_region(mr, PAGE_SIZE, 0);
obj = i915_gem_object_create_region(mr, PAGE_SIZE, I915_BO_ALLOC_USER);
if (obj == ERR_PTR(-ENODEV))
continue;


@@ -9,6 +9,7 @@
#include "intel_region_ttm.h"
#include "gem/i915_gem_lmem.h"
#include "gem/i915_gem_region.h"
#include "gem/i915_gem_ttm.h"
#include "intel_region_lmem.h"
static int init_fake_lmem_bar(struct intel_memory_region *mem)
@@ -107,7 +108,7 @@ region_lmem_init(struct intel_memory_region *mem)
static const struct intel_memory_region_ops intel_region_lmem_ops = {
.init = region_lmem_init,
.release = region_lmem_release,
.init_object = __i915_gem_lmem_object_init,
.init_object = __i915_gem_ttm_object_init,
};
struct intel_memory_region *


@@ -561,7 +561,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
if (ret)
goto err_perf;
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, "inteldrmfb");
ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, dev_priv->drm.driver);
if (ret)
goto err_ggtt;

Some files were not shown because too many files have changed in this diff.