Merge tag 'drm-misc-next-2023-07-13' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.6:

UAPI Changes:

 * fbdev:
   * Make fbdev userspace interfaces optional; this leaves only the
     framebuffer console active

 * prime:
   * Support dma-buf self-import for all drivers automatically: improves
     support for many userspace compositors
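
   Self-import means a compositor re-imports a dma-buf that the same
   DRM device exported earlier. A rough sketch of the fast path, along
   the lines of the check in drm_prime.c (illustrative, not the exact
   patch):

     /* inside the PRIME import helper, roughly: */
     if (dma_buf->ops == &drm_gem_prime_dmabuf_ops) {
             struct drm_gem_object *obj = dma_buf->priv;

             if (obj->dev == dev) {
                     /* Same device: no new mapping, just a reference. */
                     drm_gem_object_get(obj);
                     return obj;
             }
     }
     /* Otherwise: the regular attach-and-map import path. */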

Cross-subsystem Changes:

 * backlight:
   * Fix interaction with fbdev in several drivers

 * base: Convert struct platform_driver.remove to return void; part of a
   larger, tree-wide effort

 * dma-buf: Acquire reservation lock for mmap() in exporters; part
   of an ongoing effort to simplify locking around dma-bufs
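
   Exporters now take the reservation lock in their own mmap callback
   instead of the dma-buf core wrapping each call. A minimal sketch of
   such an exporter; my_exporter_mmap, struct my_buf and its pfn field
   are made up for illustration:

     static int my_exporter_mmap(struct dma_buf *dmabuf,
                                 struct vm_area_struct *vma)
     {
             struct my_buf *buf = dmabuf->priv;
             int ret;

             dma_resv_lock(dmabuf->resv, NULL);
             ret = remap_pfn_range(vma, vma->vm_start, buf->pfn,
                                   vma->vm_end - vma->vm_start,
                                   vma->vm_page_prot);
             dma_resv_unlock(dmabuf->resv);

             return ret;
     }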

 * fbdev:
   * Use Linux device instead of fbdev device in many places
   * Use deferred-I/O helper macros in various drivers

 * i2c: Convert struct i2c_driver from .probe_new to .probe; part of a
   larger, tree-wide effort

 * video:
   * Avoid including <linux/screen_info.h>

Core Changes:

 * atomic:
   * Improve logging

 * prime:
   * Remove struct drm_driver.gem_prime_mmap plus driver updates: all
     drivers had implemented this callback with drm_gem_prime_mmap(),
     which the core now calls directly
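
   Since every driver pointed the callback at the same helper, the
   field could go away. Sketch of the driver-side change (my_driver is
   made up; cf. the ivpu hunk further below, which deletes exactly
   this line):

     static const struct drm_driver my_driver = {
             .gem_prime_import = my_gem_prime_import,
             /* .gem_prime_mmap = drm_gem_prime_mmap,  <- removed */
     };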

 * gem:
   * Support execution contexts: provides locking over multiple GEM
     objects
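
   drm_exec wraps ww_mutex locking of an arbitrary set of GEM objects
   and transparently backs off and retries on deadlock. A usage sketch
   along the lines of the newly added helpers (obj stands in for a
   caller-provided GEM object):

     struct drm_exec exec;
     int ret;

     drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
     drm_exec_until_all_locked(&exec) {
             /* Lock the object and reserve one dma-fence slot. */
             ret = drm_exec_prepare_obj(&exec, obj, 1);
             /* On contention: drop all locks and restart the loop. */
             drm_exec_retry_on_contention(&exec);
             if (ret)
                     break;
     }
     /* On success all objects stay locked: submit work, add fences. */
     drm_exec_fini(&exec);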

 * ttm:
   * Support init_on_free
   * Swapout fixes

Driver Changes:

 * accel:
   * ivpu: MMU updates; Support debugfs

 * ast:
   * Improve device-model detection
   * Cleanups

 * bridge:
   * dw-hdmi: Improve support for YUV420 bus format
   * dw-mipi-dsi: Fix enable/disable of DSI controller
   * lt9611uxc: Use MODULE_FIRMWARE()
   * ps8640: Remove broken EDID code
   * samsung-dsim: Fix command transfer
   * tc358764: Handle HS/VS polarity; Use BIT() macro; Various cleanups
   * Cleanups

 * ingenic:
   * Kconfig REGMAP fixes

 * loongson:
   * Support display controller

 * mgag200:
   * Minor fixes

 * mxsfb:
   * Support disabling overlay planes

 * nouveau:
   * Improve VRAM detection
   * Various fixes and cleanups

 * panel:
   * panel-edp: Support AUO B116XAB01.4
   * Support Visionox R66451 plus DT bindings
   * Cleanups

 * ssd130x:
   * Support per-controller default resolution plus DT bindings
   * Reduce memory-allocation overhead
   * Cleanups

 * tidss:
   * Support TI AM625 plus DT bindings
   * Implement new connector model plus driver updates

 * vkms:
   * Improve write-back support
   * Documentation fixes

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20230713090830.GA23281@linux-uq9g
Daniel Vetter 2023-07-17 15:37:56 +02:00
commit 6c7f27441d
370 changed files with 11492 additions and 3333 deletions

@ -0,0 +1,59 @@
# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/visionox,r66451.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Visionox R66451 AMOLED DSI Panel
maintainers:
- Jessica Zhang <quic_jesszhan@quicinc.com>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: visionox,r66451
reg:
maxItems: 1
description: DSI virtual channel
vddio-supply: true
vdd-supply: true
port: true
reset-gpios: true
additionalProperties: false
required:
- compatible
- reg
- vddio-supply
- vdd-supply
- reset-gpios
- port
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "visionox,r66451";
reg = <0>;
vddio-supply = <&vreg_l12c_1p8>;
vdd-supply = <&vreg_l13c_3p0>;
reset-gpios = <&tlmm 24 GPIO_ACTIVE_LOW>;
port {
panel0_in: endpoint {
remote-endpoint = <&dsi0_out>;
};
};
};
};
...

@ -49,15 +49,15 @@ properties:
solomon,height:
$ref: /schemas/types.yaml#/definitions/uint32
default: 16
description:
Height in pixel of the screen driven by the controller
Height in pixel of the screen driven by the controller.
The default value is controller-dependent.
solomon,width:
$ref: /schemas/types.yaml#/definitions/uint32
default: 96
description:
Width in pixel of the screen driven by the controller
Width in pixel of the screen driven by the controller.
The default value is controller-dependent.
solomon,page-offset:
$ref: /schemas/types.yaml#/definitions/uint32
@ -157,6 +157,10 @@ allOf:
const: sinowealth,sh1106
then:
properties:
width:
default: 132
height:
default: 64
solomon,dclk-div:
default: 1
solomon,dclk-frq:
@ -171,6 +175,10 @@ allOf:
- solomon,ssd1305
then:
properties:
width:
default: 132
height:
default: 64
solomon,dclk-div:
default: 1
solomon,dclk-frq:
@ -185,6 +193,10 @@ allOf:
- solomon,ssd1306
then:
properties:
width:
default: 128
height:
default: 64
solomon,dclk-div:
default: 1
solomon,dclk-frq:
@ -199,6 +211,10 @@ allOf:
- solomon,ssd1307
then:
properties:
width:
default: 128
height:
default: 39
solomon,dclk-div:
default: 2
solomon,dclk-frq:
@ -215,6 +231,10 @@ allOf:
- solomon,ssd1309
then:
properties:
width:
default: 128
height:
default: 64
solomon,dclk-div:
default: 1
solomon,dclk-frq:

@ -12,14 +12,18 @@ maintainers:
- Tomi Valkeinen <tomi.valkeinen@ti.com>
description: |
The AM65x TI Keystone Display SubSystem with two output ports and
two video planes. The first video port supports OLDI and the second
supports DPI format. The fist plane is full video plane with all
features and the second is a "lite plane" without scaling support.
The AM625 and AM65x TI Keystone Display SubSystem with two output
ports and two video planes. In AM65x DSS, the first video port
supports 1 OLDI TX and in AM625 DSS, the first video port output is
internally routed to 2 OLDI TXes. The second video port supports DPI
format. The first plane is full video plane with all features and the
second is a "lite plane" without scaling support.
properties:
compatible:
const: ti,am65x-dss
enum:
- ti,am625-dss
- ti,am65x-dss
reg:
description:
@ -80,7 +84,9 @@ properties:
port@0:
$ref: /schemas/graph.yaml#/properties/port
description:
The DSS OLDI output port node form video port 1
For AM65x DSS, the OLDI output port node from video port 1.
For AM625 DSS, the internal DPI output port node from video
port 1.
port@1:
$ref: /schemas/graph.yaml#/properties/port

@ -493,6 +493,18 @@ DRM Sync Objects
.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
:export:
DRM Execution context
=====================
.. kernel-doc:: drivers/gpu/drm/drm_exec.c
:doc: Overview
.. kernel-doc:: include/drm/drm_exec.h
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_exec.c
:export:
GPU Scheduler
=============

@ -319,15 +319,6 @@ Contact: Daniel Vetter, Noralf Tronnes
Level: Advanced
struct drm_gem_object_funcs
---------------------------
GEM objects can now have a function table instead of having the callbacks on the
DRM driver struct. This is now the preferred way. Callbacks in drivers have been
converted, except for struct drm_driver.gem_prime_mmap.
Level: Intermediate
connector register/unregister fixes
-----------------------------------
@ -452,6 +443,19 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>
Level: Starter
Remove driver dependencies on FB_DEVICE
---------------------------------------
A number of fbdev drivers provide attributes via sysfs and therefore depend
on CONFIG_FB_DEVICE to be selected. Review each driver and attempt to make
any dependencies on CONFIG_FB_DEVICE optional. At the minimum, the respective
code in the driver could be conditionalized via ifdef CONFIG_FB_DEVICE. Not
all drivers might be able to drop CONFIG_FB_DEVICE.
Contact: Thomas Zimmermann <tzimmermann@suse.de>
Level: Starter
Core refactorings
=================

@ -6150,10 +6150,9 @@ F: kernel/dma/
DMA-BUF HEAPS FRAMEWORK
M: Sumit Semwal <sumit.semwal@linaro.org>
R: Benjamin Gaignard <benjamin.gaignard@collabora.com>
R: Liam Mark <lmark@codeaurora.org>
R: Laura Abbott <labbott@redhat.com>
R: Brian Starkey <Brian.Starkey@arm.com>
R: John Stultz <jstultz@google.com>
R: T.J. Mercier <tjmercier@google.com>
L: linux-media@vger.kernel.org
L: dri-devel@lists.freedesktop.org
L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
@ -6395,6 +6394,7 @@ F: drivers/gpu/drm/aspeed/
DRM DRIVER FOR AST SERVER GRAPHICS CHIPS
M: Dave Airlie <airlied@redhat.com>
R: Thomas Zimmermann <tzimmermann@suse.de>
R: Jocelyn Falempe <jfalempe@redhat.com>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
@ -6546,6 +6546,7 @@ F: drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c
DRM DRIVER FOR MGA G200 GRAPHICS CHIPS
M: Dave Airlie <airlied@redhat.com>
R: Thomas Zimmermann <tzimmermann@suse.de>
R: Jocelyn Falempe <jfalempe@redhat.com>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
@ -6945,6 +6946,13 @@ T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/lima/
F: include/uapi/drm/lima_drm.h
DRM DRIVERS FOR LOONGSON
M: Sui Jingfeng <suijingfeng@loongson.cn>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/loongson/
DRM DRIVERS FOR MEDIATEK
M: Chun-Kuang Hu <chunkuang.hu@kernel.org>
M: Philipp Zabel <p.zabel@pengutronix.de>
@ -7014,7 +7022,7 @@ F: drivers/gpu/drm/stm
DRM DRIVERS FOR TI KEYSTONE
M: Jyri Sarha <jyri.sarha@iki.fi>
M: Tomi Valkeinen <tomba@kernel.org>
M: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
@ -7025,16 +7033,18 @@ F: drivers/gpu/drm/tidss/
DRM DRIVERS FOR TI LCDC
M: Jyri Sarha <jyri.sarha@iki.fi>
R: Tomi Valkeinen <tomba@kernel.org>
M: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/tilcdc/
F: drivers/gpu/drm/tilcdc/
DRM DRIVERS FOR TI OMAP
M: Tomi Valkeinen <tomba@kernel.org>
M: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/ti/
F: drivers/gpu/drm/omapdrm/

@ -5,6 +5,8 @@
#include <linux/efi.h>
#include <linux/memblock.h>
#include <linux/screen_info.h>
#include <asm/efi.h>
#include <asm/mach/map.h>
#include <asm/mmu_context.h>

@ -9,6 +9,7 @@
#include <linux/efi.h>
#include <linux/init.h>
#include <linux/screen_info.h>
#include <asm/efi.h>
#include <asm/stacktrace.h>

@ -18,6 +18,7 @@
#include <linux/kobject.h>
#include <linux/memblock.h>
#include <linux/reboot.h>
#include <linux/screen_info.h>
#include <linux/uaccess.h>
#include <asm/early_ioremap.h>

@ -386,7 +386,7 @@ static struct property_entry gpio_backlight_props[] = {
};
static struct gpio_backlight_platform_data gpio_backlight_data = {
.fbdev = &lcdc_device.dev,
.dev = &lcdc_device.dev,
};
static const struct platform_device_info gpio_backlight_device_info = {

@ -202,7 +202,7 @@ static struct platform_device kfr2r09_sh_lcdc_device = {
};
static struct lv5207lp_platform_data kfr2r09_backlight_data = {
.fbdev = &kfr2r09_sh_lcdc_device.dev,
.dev = &kfr2r09_sh_lcdc_device.dev,
.def_value = 13,
.max_value = 13,
};

@ -2,8 +2,10 @@
# Copyright (C) 2023 Intel Corporation
intel_vpu-y := \
ivpu_debugfs.o \
ivpu_drv.o \
ivpu_fw.o \
ivpu_fw_log.o \
ivpu_gem.o \
ivpu_hw_mtl.o \
ivpu_ipc.o \
@ -13,4 +15,4 @@ intel_vpu-y := \
ivpu_mmu_context.o \
ivpu_pm.o
obj-$(CONFIG_DRM_ACCEL_IVPU) += intel_vpu.o
obj-$(CONFIG_DRM_ACCEL_IVPU) += intel_vpu.o

@ -0,0 +1,294 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020-2023 Intel Corporation
*/
#include <drm/drm_debugfs.h>
#include <drm/drm_file.h>
#include <drm/drm_print.h>
#include <uapi/drm/ivpu_accel.h>
#include "ivpu_debugfs.h"
#include "ivpu_drv.h"
#include "ivpu_fw.h"
#include "ivpu_fw_log.h"
#include "ivpu_gem.h"
#include "ivpu_jsm_msg.h"
#include "ivpu_pm.h"
static int bo_list_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct drm_printer p = drm_seq_file_printer(s);
ivpu_bo_list(node->minor->dev, &p);
return 0;
}
static int fw_name_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
seq_printf(s, "%s\n", vdev->fw->name);
return 0;
}
static int fw_trace_capability_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
u64 trace_hw_component_mask;
u32 trace_destination_mask;
int ret;
ret = ivpu_jsm_trace_get_capability(vdev, &trace_destination_mask,
&trace_hw_component_mask);
if (!ret) {
seq_printf(s,
"trace_destination_mask: %#18x\n"
"trace_hw_component_mask: %#18llx\n",
trace_destination_mask, trace_hw_component_mask);
}
return 0;
}
static int fw_trace_config_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
/**
* WA: VPU_JSM_MSG_TRACE_GET_CONFIG command is not working yet,
* so we use values from vdev->fw instead of calling ivpu_jsm_trace_get_config()
*/
u32 trace_level = vdev->fw->trace_level;
u32 trace_destination_mask = vdev->fw->trace_destination_mask;
u64 trace_hw_component_mask = vdev->fw->trace_hw_component_mask;
seq_printf(s,
"trace_level: %#18x\n"
"trace_destination_mask: %#18x\n"
"trace_hw_component_mask: %#18llx\n",
trace_level, trace_destination_mask, trace_hw_component_mask);
return 0;
}
static int last_bootmode_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
seq_printf(s, "%s\n", (vdev->pm->is_warmboot) ? "warmboot" : "coldboot");
return 0;
}
static int reset_counter_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
seq_printf(s, "%d\n", atomic_read(&vdev->pm->reset_counter));
return 0;
}
static int reset_pending_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
seq_printf(s, "%d\n", atomic_read(&vdev->pm->in_reset));
return 0;
}
static const struct drm_info_list vdev_debugfs_list[] = {
{"bo_list", bo_list_show, 0},
{"fw_name", fw_name_show, 0},
{"fw_trace_capability", fw_trace_capability_show, 0},
{"fw_trace_config", fw_trace_config_show, 0},
{"last_bootmode", last_bootmode_show, 0},
{"reset_counter", reset_counter_show, 0},
{"reset_pending", reset_pending_show, 0},
};
static int fw_log_show(struct seq_file *s, void *v)
{
struct ivpu_device *vdev = s->private;
struct drm_printer p = drm_seq_file_printer(s);
ivpu_fw_log_print(vdev, true, &p);
return 0;
}
static int fw_log_fops_open(struct inode *inode, struct file *file)
{
return single_open(file, fw_log_show, inode->i_private);
}
static ssize_t
fw_log_fops_write(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
{
struct seq_file *s = file->private_data;
struct ivpu_device *vdev = s->private;
if (!size)
return -EINVAL;
ivpu_fw_log_clear(vdev);
return size;
}
static const struct file_operations fw_log_fops = {
.owner = THIS_MODULE,
.open = fw_log_fops_open,
.write = fw_log_fops_write,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static ssize_t
fw_trace_destination_mask_fops_write(struct file *file, const char __user *user_buf,
size_t size, loff_t *pos)
{
struct ivpu_device *vdev = file->private_data;
struct ivpu_fw_info *fw = vdev->fw;
u32 trace_destination_mask;
int ret;
ret = kstrtou32_from_user(user_buf, size, 0, &trace_destination_mask);
if (ret < 0)
return ret;
fw->trace_destination_mask = trace_destination_mask;
ivpu_jsm_trace_set_config(vdev, fw->trace_level, trace_destination_mask,
fw->trace_hw_component_mask);
return size;
}
static const struct file_operations fw_trace_destination_mask_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.write = fw_trace_destination_mask_fops_write,
};
static ssize_t
fw_trace_hw_comp_mask_fops_write(struct file *file, const char __user *user_buf,
size_t size, loff_t *pos)
{
struct ivpu_device *vdev = file->private_data;
struct ivpu_fw_info *fw = vdev->fw;
u64 trace_hw_component_mask;
int ret;
ret = kstrtou64_from_user(user_buf, size, 0, &trace_hw_component_mask);
if (ret < 0)
return ret;
fw->trace_hw_component_mask = trace_hw_component_mask;
ivpu_jsm_trace_set_config(vdev, fw->trace_level, fw->trace_destination_mask,
trace_hw_component_mask);
return size;
}
static const struct file_operations fw_trace_hw_comp_mask_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.write = fw_trace_hw_comp_mask_fops_write,
};
static ssize_t
fw_trace_level_fops_write(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
{
struct ivpu_device *vdev = file->private_data;
struct ivpu_fw_info *fw = vdev->fw;
u32 trace_level;
int ret;
ret = kstrtou32_from_user(user_buf, size, 0, &trace_level);
if (ret < 0)
return ret;
fw->trace_level = trace_level;
ivpu_jsm_trace_set_config(vdev, trace_level, fw->trace_destination_mask,
fw->trace_hw_component_mask);
return size;
}
static const struct file_operations fw_trace_level_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.write = fw_trace_level_fops_write,
};
static ssize_t
ivpu_reset_engine_fn(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
{
struct ivpu_device *vdev = file->private_data;
if (!size)
return -EINVAL;
if (ivpu_jsm_reset_engine(vdev, DRM_IVPU_ENGINE_COMPUTE))
return -ENODEV;
if (ivpu_jsm_reset_engine(vdev, DRM_IVPU_ENGINE_COPY))
return -ENODEV;
return size;
}
static ssize_t
ivpu_force_recovery_fn(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
{
struct ivpu_device *vdev = file->private_data;
if (!size)
return -EINVAL;
ivpu_pm_schedule_recovery(vdev);
return size;
}
static const struct file_operations ivpu_force_recovery_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.write = ivpu_force_recovery_fn,
};
static const struct file_operations ivpu_reset_engine_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.write = ivpu_reset_engine_fn,
};
void ivpu_debugfs_init(struct drm_minor *minor)
{
struct ivpu_device *vdev = to_ivpu_device(minor->dev);
drm_debugfs_create_files(vdev_debugfs_list, ARRAY_SIZE(vdev_debugfs_list),
minor->debugfs_root, minor);
debugfs_create_file("force_recovery", 0200, minor->debugfs_root, vdev,
&ivpu_force_recovery_fops);
debugfs_create_file("fw_log", 0644, minor->debugfs_root, vdev,
&fw_log_fops);
debugfs_create_file("fw_trace_destination_mask", 0200, minor->debugfs_root, vdev,
&fw_trace_destination_mask_fops);
debugfs_create_file("fw_trace_hw_comp_mask", 0200, minor->debugfs_root, vdev,
&fw_trace_hw_comp_mask_fops);
debugfs_create_file("fw_trace_level", 0200, minor->debugfs_root, vdev,
&fw_trace_level_fops);
debugfs_create_file("reset_engine", 0200, minor->debugfs_root, vdev,
&ivpu_reset_engine_fops);
}

@ -0,0 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020-2023 Intel Corporation
*/
#ifndef __IVPU_DEBUGFS_H__
#define __IVPU_DEBUGFS_H__
struct drm_minor;
void ivpu_debugfs_init(struct drm_minor *minor);
#endif /* __IVPU_DEBUGFS_H__ */

@ -14,6 +14,7 @@
#include <drm/drm_prime.h>
#include "vpu_boot_api.h"
#include "ivpu_debugfs.h"
#include "ivpu_drv.h"
#include "ivpu_fw.h"
#include "ivpu_gem.h"
@ -50,6 +51,10 @@ u8 ivpu_pll_max_ratio = U8_MAX;
module_param_named(pll_max_ratio, ivpu_pll_max_ratio, byte, 0644);
MODULE_PARM_DESC(pll_max_ratio, "Maximum PLL ratio used to set VPU frequency");
bool ivpu_disable_mmu_cont_pages;
module_param_named(disable_mmu_cont_pages, ivpu_disable_mmu_cont_pages, bool, 0644);
MODULE_PARM_DESC(disable_mmu_cont_pages, "Disable MMU contiguous pages optimization");
struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv)
{
struct ivpu_device *vdev = file_priv->vdev;
@ -369,10 +374,11 @@ static const struct drm_driver driver = {
.open = ivpu_open,
.postclose = ivpu_postclose,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = ivpu_gem_prime_import,
.gem_prime_mmap = drm_gem_prime_mmap,
#if defined(CONFIG_DEBUG_FS)
.debugfs_init = ivpu_debugfs_init,
#endif
.ioctls = ivpu_drm_ioctls,
.num_ioctls = ARRAY_SIZE(ivpu_drm_ioctls),
@ -427,7 +433,7 @@ static int ivpu_pci_init(struct ivpu_device *vdev)
return PTR_ERR(vdev->regb);
}
ret = dma_set_mask_and_coherent(vdev->drm.dev, DMA_BIT_MASK(38));
ret = dma_set_mask_and_coherent(vdev->drm.dev, DMA_BIT_MASK(vdev->hw->dma_bits));
if (ret) {
ivpu_err(vdev, "Failed to set DMA mask: %d\n", ret);
return ret;
@ -477,6 +483,8 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
return -ENOMEM;
vdev->hw->ops = &ivpu_hw_mtl_ops;
vdev->hw->dma_bits = 38;
vdev->platform = IVPU_PLATFORM_INVALID;
vdev->context_xa_limit.min = IVPU_USER_CONTEXT_MIN_SSID;
vdev->context_xa_limit.max = IVPU_USER_CONTEXT_MAX_SSID;

@ -132,6 +132,7 @@ struct ivpu_file_priv {
extern int ivpu_dbg_mask;
extern u8 ivpu_pll_min_ratio;
extern u8 ivpu_pll_max_ratio;
extern bool ivpu_disable_mmu_cont_pages;
#define IVPU_TEST_MODE_DISABLED 0
#define IVPU_TEST_MODE_FW_TEST 1

@ -11,6 +11,7 @@
#include "vpu_boot_api.h"
#include "ivpu_drv.h"
#include "ivpu_fw.h"
#include "ivpu_fw_log.h"
#include "ivpu_gem.h"
#include "ivpu_hw.h"
#include "ivpu_ipc.h"
@ -51,13 +52,19 @@ static int ivpu_fw_request(struct ivpu_device *vdev)
int ret = -ENOENT;
int i;
if (ivpu_firmware)
return request_firmware(&vdev->fw->file, ivpu_firmware, vdev->drm.dev);
if (ivpu_firmware) {
ret = request_firmware(&vdev->fw->file, ivpu_firmware, vdev->drm.dev);
if (!ret)
vdev->fw->name = ivpu_firmware;
return ret;
}
for (i = 0; i < ARRAY_SIZE(fw_names); i++) {
ret = firmware_request_nowarn(&vdev->fw->file, fw_names[i], vdev->drm.dev);
if (!ret)
if (!ret) {
vdev->fw->name = fw_names[i];
return 0;
}
}
ivpu_err(vdev, "Failed to request firmware: %d\n", ret);
@ -142,7 +149,9 @@ static int ivpu_fw_parse(struct ivpu_device *vdev)
}
ivpu_dbg(vdev, FW_BOOT, "Header version: 0x%x, format 0x%x\n",
fw_hdr->header_version, fw_hdr->image_format);
ivpu_dbg(vdev, FW_BOOT, "FW version: %s\n", (char *)fw_hdr + VPU_FW_HEADER_SIZE);
ivpu_info(vdev, "Firmware: %s, version: %s", fw->name,
(const char *)fw_hdr + VPU_FW_HEADER_SIZE);
if (IVPU_FW_CHECK_API(vdev, fw_hdr, BOOT, 3))
return -EINVAL;
@ -158,6 +167,10 @@ static int ivpu_fw_parse(struct ivpu_device *vdev)
fw->cold_boot_entry_point = fw_hdr->entry_point;
fw->entry_point = fw->cold_boot_entry_point;
fw->trace_level = min_t(u32, ivpu_log_level, IVPU_FW_LOG_FATAL);
fw->trace_destination_mask = VPU_TRACE_DESTINATION_VERBOSE_TRACING;
fw->trace_hw_component_mask = -1;
ivpu_dbg(vdev, FW_BOOT, "Size: file %lu image %u runtime %u shavenn %u\n",
fw->file->size, fw->image_size, fw->runtime_size, fw->shave_nn_size);
ivpu_dbg(vdev, FW_BOOT, "Address: runtime 0x%llx, load 0x%llx, entry point 0x%llx\n",
@ -189,6 +202,7 @@ static int ivpu_fw_update_global_range(struct ivpu_device *vdev)
static int ivpu_fw_mem_init(struct ivpu_device *vdev)
{
struct ivpu_fw_info *fw = vdev->fw;
int log_verb_size;
int ret;
ret = ivpu_fw_update_global_range(vdev);
@ -201,17 +215,45 @@ static int ivpu_fw_mem_init(struct ivpu_device *vdev)
return -ENOMEM;
}
fw->mem_log_crit = ivpu_bo_alloc_internal(vdev, 0, IVPU_FW_CRITICAL_BUFFER_SIZE,
DRM_IVPU_BO_CACHED);
if (!fw->mem_log_crit) {
ivpu_err(vdev, "Failed to allocate critical log buffer\n");
ret = -ENOMEM;
goto err_free_fw_mem;
}
if (ivpu_log_level <= IVPU_FW_LOG_INFO)
log_verb_size = IVPU_FW_VERBOSE_BUFFER_LARGE_SIZE;
else
log_verb_size = IVPU_FW_VERBOSE_BUFFER_SMALL_SIZE;
fw->mem_log_verb = ivpu_bo_alloc_internal(vdev, 0, log_verb_size, DRM_IVPU_BO_CACHED);
if (!fw->mem_log_verb) {
ivpu_err(vdev, "Failed to allocate verbose log buffer\n");
ret = -ENOMEM;
goto err_free_log_crit;
}
if (fw->shave_nn_size) {
fw->mem_shave_nn = ivpu_bo_alloc_internal(vdev, vdev->hw->ranges.global_high.start,
fw->shave_nn_size, DRM_IVPU_BO_UNCACHED);
if (!fw->mem_shave_nn) {
ivpu_err(vdev, "Failed to allocate shavenn buffer\n");
ivpu_bo_free_internal(fw->mem);
return -ENOMEM;
ret = -ENOMEM;
goto err_free_log_verb;
}
}
return 0;
err_free_log_verb:
ivpu_bo_free_internal(fw->mem_log_verb);
err_free_log_crit:
ivpu_bo_free_internal(fw->mem_log_crit);
err_free_fw_mem:
ivpu_bo_free_internal(fw->mem);
return ret;
}
static void ivpu_fw_mem_fini(struct ivpu_device *vdev)
@ -223,7 +265,12 @@ static void ivpu_fw_mem_fini(struct ivpu_device *vdev)
fw->mem_shave_nn = NULL;
}
ivpu_bo_free_internal(fw->mem_log_verb);
ivpu_bo_free_internal(fw->mem_log_crit);
ivpu_bo_free_internal(fw->mem);
fw->mem_log_verb = NULL;
fw->mem_log_crit = NULL;
fw->mem = NULL;
}
@ -424,6 +471,15 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
boot_params->pn_freq_pll_ratio = vdev->hw->pll.pn_ratio;
boot_params->max_freq_pll_ratio = vdev->hw->pll.max_ratio;
boot_params->default_trace_level = vdev->fw->trace_level;
boot_params->tracing_buff_message_format_mask = BIT(VPU_TRACING_FORMAT_STRING);
boot_params->trace_destination_mask = vdev->fw->trace_destination_mask;
boot_params->trace_hw_component_mask = vdev->fw->trace_hw_component_mask;
boot_params->crit_tracing_buff_addr = vdev->fw->mem_log_crit->vpu_addr;
boot_params->crit_tracing_buff_size = vdev->fw->mem_log_crit->base.size;
boot_params->verbose_tracing_buff_addr = vdev->fw->mem_log_verb->vpu_addr;
boot_params->verbose_tracing_buff_size = vdev->fw->mem_log_verb->base.size;
boot_params->punit_telemetry_sram_base = ivpu_hw_reg_telemetry_offset_get(vdev);
boot_params->punit_telemetry_sram_size = ivpu_hw_reg_telemetry_size_get(vdev);
boot_params->vpu_telemetry_enable = ivpu_hw_reg_telemetry_enable_get(vdev);

@ -12,6 +12,7 @@ struct vpu_boot_params;
struct ivpu_fw_info {
const struct firmware *file;
const char *name;
struct ivpu_bo *mem;
struct ivpu_bo *mem_shave_nn;
struct ivpu_bo *mem_log_crit;
@ -23,6 +24,9 @@ struct ivpu_fw_info {
u32 shave_nn_size;
u64 entry_point; /* Cold or warm boot entry point for next boot */
u64 cold_boot_entry_point;
u32 trace_level;
u32 trace_destination_mask;
u64 trace_hw_component_mask;
};
int ivpu_fw_init(struct ivpu_device *vdev);

@ -0,0 +1,142 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020-2023 Intel Corporation
*/
#include <linux/ctype.h>
#include <linux/highmem.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/moduleparam.h>
#include "vpu_boot_api.h"
#include "ivpu_drv.h"
#include "ivpu_fw.h"
#include "ivpu_fw_log.h"
#include "ivpu_gem.h"
#define IVPU_FW_LOG_LINE_LENGTH 256
unsigned int ivpu_log_level = IVPU_FW_LOG_ERROR;
module_param(ivpu_log_level, uint, 0444);
MODULE_PARM_DESC(ivpu_log_level,
"VPU firmware default trace level: debug=" __stringify(IVPU_FW_LOG_DEBUG)
" info=" __stringify(IVPU_FW_LOG_INFO)
" warn=" __stringify(IVPU_FW_LOG_WARN)
" error=" __stringify(IVPU_FW_LOG_ERROR)
" fatal=" __stringify(IVPU_FW_LOG_FATAL));
static int fw_log_ptr(struct ivpu_device *vdev, struct ivpu_bo *bo, u32 *offset,
struct vpu_tracing_buffer_header **log_header)
{
struct vpu_tracing_buffer_header *log;
if ((*offset + sizeof(*log)) > bo->base.size)
return -EINVAL;
log = bo->kvaddr + *offset;
if (log->vpu_canary_start != VPU_TRACING_BUFFER_CANARY)
return -EINVAL;
if (log->header_size < sizeof(*log) || log->header_size > 1024) {
ivpu_dbg(vdev, FW_BOOT, "Invalid header size 0x%x\n", log->header_size);
return -EINVAL;
}
if ((char *)log + log->size > (char *)bo->kvaddr + bo->base.size) {
ivpu_dbg(vdev, FW_BOOT, "Invalid log size 0x%x\n", log->size);
return -EINVAL;
}
*log_header = log;
*offset += log->size;
ivpu_dbg(vdev, FW_BOOT,
"FW log name \"%s\", write offset 0x%x size 0x%x, wrap count %d, hdr version %d size %d format %d, alignment %d",
log->name, log->write_index, log->size, log->wrap_count, log->header_version,
log->header_size, log->format, log->alignment);
return 0;
}
static void buffer_print(char *buffer, u32 size, struct drm_printer *p)
{
char line[IVPU_FW_LOG_LINE_LENGTH];
u32 index = 0;
if (!size || !buffer)
return;
while (size--) {
if (*buffer == '\n' || *buffer == 0) {
line[index] = 0;
if (index != 0)
drm_printf(p, "%s\n", line);
index = 0;
buffer++;
continue;
}
if (index == IVPU_FW_LOG_LINE_LENGTH - 1) {
line[index] = 0;
index = 0;
drm_printf(p, "%s\n", line);
}
if (*buffer != '\r' && (isprint(*buffer) || iscntrl(*buffer)))
line[index++] = *buffer;
buffer++;
}
line[index] = 0;
if (index != 0)
drm_printf(p, "%s\n", line);
}
static void fw_log_print_buffer(struct ivpu_device *vdev, struct vpu_tracing_buffer_header *log,
const char *prefix, bool only_new_msgs, struct drm_printer *p)
{
char *log_buffer = (void *)log + log->header_size;
u32 log_size = log->size - log->header_size;
u32 log_start = log->read_index;
u32 log_end = log->write_index;
if (!(log->write_index || log->wrap_count) ||
(log->write_index == log->read_index && only_new_msgs)) {
drm_printf(p, "==== %s \"%s\" log empty ====\n", prefix, log->name);
return;
}
drm_printf(p, "==== %s \"%s\" log start ====\n", prefix, log->name);
if (log->write_index > log->read_index) {
buffer_print(log_buffer + log_start, log_end - log_start, p);
} else {
buffer_print(log_buffer + log_end, log_size - log_end, p);
buffer_print(log_buffer, log_end, p);
}
drm_printf(p, "\x1b[0m");
drm_printf(p, "==== %s \"%s\" log end ====\n", prefix, log->name);
}
void ivpu_fw_log_print(struct ivpu_device *vdev, bool only_new_msgs, struct drm_printer *p)
{
struct vpu_tracing_buffer_header *log_header;
u32 next = 0;
while (fw_log_ptr(vdev, vdev->fw->mem_log_crit, &next, &log_header) == 0)
fw_log_print_buffer(vdev, log_header, "VPU critical", only_new_msgs, p);
next = 0;
while (fw_log_ptr(vdev, vdev->fw->mem_log_verb, &next, &log_header) == 0)
fw_log_print_buffer(vdev, log_header, "VPU verbose", only_new_msgs, p);
}
void ivpu_fw_log_clear(struct ivpu_device *vdev)
{
struct vpu_tracing_buffer_header *log_header;
u32 next = 0;
while (fw_log_ptr(vdev, vdev->fw->mem_log_crit, &next, &log_header) == 0)
log_header->read_index = log_header->write_index;
next = 0;
while (fw_log_ptr(vdev, vdev->fw->mem_log_verb, &next, &log_header) == 0)
log_header->read_index = log_header->write_index;
}

@ -0,0 +1,38 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2020-2023 Intel Corporation
*/
#ifndef __IVPU_FW_LOG_H__
#define __IVPU_FW_LOG_H__
#include <linux/types.h>
#include <drm/drm_print.h>
#include "ivpu_drv.h"
#define IVPU_FW_LOG_DEFAULT 0
#define IVPU_FW_LOG_DEBUG 1
#define IVPU_FW_LOG_INFO 2
#define IVPU_FW_LOG_WARN 3
#define IVPU_FW_LOG_ERROR 4
#define IVPU_FW_LOG_FATAL 5
extern unsigned int ivpu_log_level;
#define IVPU_FW_VERBOSE_BUFFER_SMALL_SIZE SZ_1M
#define IVPU_FW_VERBOSE_BUFFER_LARGE_SIZE SZ_8M
#define IVPU_FW_CRITICAL_BUFFER_SIZE SZ_512K
void ivpu_fw_log_print(struct ivpu_device *vdev, bool only_new_msgs, struct drm_printer *p);
void ivpu_fw_log_clear(struct ivpu_device *vdev);
static inline void ivpu_fw_log_dump(struct ivpu_device *vdev)
{
struct drm_printer p = drm_info_printer(vdev->drm.dev);
ivpu_fw_log_print(vdev, false, &p);
}
#endif /* __IVPU_FW_LOG_H__ */

@ -57,6 +57,7 @@ struct ivpu_hw_info {
u32 tile_fuse;
u32 sku;
u16 config;
int dma_bits;
};
extern const struct ivpu_hw_ops ivpu_hw_mtl_ops;

@ -551,21 +551,10 @@ static void ivpu_boot_tbu_mmu_enable(struct ivpu_device *vdev)
{
u32 val = REGV_RD32(MTL_VPU_HOST_IF_TBU_MMUSSIDV);
if (ivpu_is_fpga(vdev)) {
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val);
} else {
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU1_AWMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU1_ARMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU3_AWMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU3_ARMMUSSIDV, val);
}
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val);
val = REG_SET_FLD(MTL_VPU_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val);
REGV_WR32(MTL_VPU_HOST_IF_TBU_MMUSSIDV, val);
}

@ -289,15 +289,13 @@ ivpu_create_job(struct ivpu_file_priv *file_priv, u32 engine_idx, u32 bo_count)
{
struct ivpu_device *vdev = file_priv->vdev;
struct ivpu_job *job;
size_t buf_size;
int ret;
ret = ivpu_rpm_get(vdev);
if (ret < 0)
return NULL;
buf_size = sizeof(*job) + bo_count * sizeof(struct ivpu_bo *);
job = kzalloc(buf_size, GFP_KERNEL);
job = kzalloc(struct_size(job, bos, bo_count), GFP_KERNEL);
if (!job)
goto err_rpm_put;

@ -143,6 +143,16 @@
#define IVPU_MMU_CD_0_ASET BIT(47)
#define IVPU_MMU_CD_0_ASID GENMASK_ULL(63, 48)
#define IVPU_MMU_T0SZ_48BIT 16
#define IVPU_MMU_T0SZ_38BIT 26
#define IVPU_MMU_IPS_48BIT 5
#define IVPU_MMU_IPS_44BIT 4
#define IVPU_MMU_IPS_42BIT 3
#define IVPU_MMU_IPS_40BIT 2
#define IVPU_MMU_IPS_36BIT 1
#define IVPU_MMU_IPS_32BIT 0
#define IVPU_MMU_CD_1_TTB0_MASK GENMASK_ULL(51, 4)
#define IVPU_MMU_STE_0_S1CDMAX GENMASK_ULL(63, 59)
@ -617,12 +627,12 @@ static int ivpu_mmu_cd_add(struct ivpu_device *vdev, u32 ssid, u64 cd_dma)
entry = cdtab->base + (ssid * IVPU_MMU_CDTAB_ENT_SIZE);
if (cd_dma != 0) {
cd[0] = FIELD_PREP(IVPU_MMU_CD_0_TCR_T0SZ, 26) |
cd[0] = FIELD_PREP(IVPU_MMU_CD_0_TCR_T0SZ, IVPU_MMU_T0SZ_48BIT) |
FIELD_PREP(IVPU_MMU_CD_0_TCR_TG0, 0) |
FIELD_PREP(IVPU_MMU_CD_0_TCR_IRGN0, 0) |
FIELD_PREP(IVPU_MMU_CD_0_TCR_ORGN0, 0) |
FIELD_PREP(IVPU_MMU_CD_0_TCR_SH0, 0) |
FIELD_PREP(IVPU_MMU_CD_0_TCR_IPS, 3) |
FIELD_PREP(IVPU_MMU_CD_0_TCR_IPS, IVPU_MMU_IPS_48BIT) |
FIELD_PREP(IVPU_MMU_CD_0_ASID, ssid) |
IVPU_MMU_CD_0_TCR_EPD1 |
IVPU_MMU_CD_0_AA64 |

@ -11,10 +11,12 @@
#include "ivpu_mmu.h"
#include "ivpu_mmu_context.h"
#define IVPU_MMU_PGD_INDEX_MASK GENMASK(38, 30)
#define IVPU_MMU_PGD_INDEX_MASK GENMASK(47, 39)
#define IVPU_MMU_PUD_INDEX_MASK GENMASK(38, 30)
#define IVPU_MMU_PMD_INDEX_MASK GENMASK(29, 21)
#define IVPU_MMU_PTE_INDEX_MASK GENMASK(20, 12)
#define IVPU_MMU_ENTRY_FLAGS_MASK GENMASK(11, 0)
#define IVPU_MMU_ENTRY_FLAGS_MASK (BIT(52) | GENMASK(11, 0))
#define IVPU_MMU_ENTRY_FLAG_CONT BIT(52)
#define IVPU_MMU_ENTRY_FLAG_NG BIT(11)
#define IVPU_MMU_ENTRY_FLAG_AF BIT(10)
#define IVPU_MMU_ENTRY_FLAG_USER BIT(6)
@ -22,10 +24,13 @@
#define IVPU_MMU_ENTRY_FLAG_TYPE_PAGE BIT(1)
#define IVPU_MMU_ENTRY_FLAG_VALID BIT(0)
#define IVPU_MMU_PAGE_SIZE SZ_4K
#define IVPU_MMU_PTE_MAP_SIZE (IVPU_MMU_PGTABLE_ENTRIES * IVPU_MMU_PAGE_SIZE)
#define IVPU_MMU_PMD_MAP_SIZE (IVPU_MMU_PGTABLE_ENTRIES * IVPU_MMU_PTE_MAP_SIZE)
#define IVPU_MMU_PGTABLE_SIZE (IVPU_MMU_PGTABLE_ENTRIES * sizeof(u64))
#define IVPU_MMU_PAGE_SIZE SZ_4K
#define IVPU_MMU_CONT_PAGES_SIZE (IVPU_MMU_PAGE_SIZE * 16)
#define IVPU_MMU_PTE_MAP_SIZE (IVPU_MMU_PGTABLE_ENTRIES * IVPU_MMU_PAGE_SIZE)
#define IVPU_MMU_PMD_MAP_SIZE (IVPU_MMU_PGTABLE_ENTRIES * IVPU_MMU_PTE_MAP_SIZE)
#define IVPU_MMU_PUD_MAP_SIZE (IVPU_MMU_PGTABLE_ENTRIES * IVPU_MMU_PMD_MAP_SIZE)
#define IVPU_MMU_PGD_MAP_SIZE (IVPU_MMU_PGTABLE_ENTRIES * IVPU_MMU_PUD_MAP_SIZE)
#define IVPU_MMU_PGTABLE_SIZE (IVPU_MMU_PGTABLE_ENTRIES * sizeof(u64))
#define IVPU_MMU_DUMMY_ADDRESS 0xdeadb000
#define IVPU_MMU_ENTRY_VALID (IVPU_MMU_ENTRY_FLAG_TYPE_PAGE | IVPU_MMU_ENTRY_FLAG_VALID)
@ -36,167 +41,268 @@
static int ivpu_mmu_pgtable_init(struct ivpu_device *vdev, struct ivpu_mmu_pgtable *pgtable)
{
dma_addr_t pgd_dma;
u64 *pgd;
pgd = dma_alloc_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, &pgd_dma, GFP_KERNEL);
if (!pgd)
pgtable->pgd_dma_ptr = dma_alloc_coherent(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, &pgd_dma,
GFP_KERNEL);
if (!pgtable->pgd_dma_ptr)
return -ENOMEM;
pgtable->pgd = pgd;
pgtable->pgd_dma = pgd_dma;
return 0;
}
static void ivpu_mmu_pgtable_free(struct ivpu_device *vdev, struct ivpu_mmu_pgtable *pgtable)
static void ivpu_mmu_pgtable_free(struct ivpu_device *vdev, u64 *cpu_addr, dma_addr_t dma_addr)
{
int pgd_index, pmd_index;
if (cpu_addr)
dma_free_coherent(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, cpu_addr,
dma_addr & ~IVPU_MMU_ENTRY_FLAGS_MASK);
}
for (pgd_index = 0; pgd_index < IVPU_MMU_PGTABLE_ENTRIES; ++pgd_index) {
u64 **pmd_entries = pgtable->pgd_cpu_entries[pgd_index];
u64 *pmd = pgtable->pgd_entries[pgd_index];
static void ivpu_mmu_pgtables_free(struct ivpu_device *vdev, struct ivpu_mmu_pgtable *pgtable)
{
int pgd_idx, pud_idx, pmd_idx;
dma_addr_t pud_dma, pmd_dma, pte_dma;
u64 *pud_dma_ptr, *pmd_dma_ptr, *pte_dma_ptr;
if (!pmd_entries)
for (pgd_idx = 0; pgd_idx < IVPU_MMU_PGTABLE_ENTRIES; ++pgd_idx) {
pud_dma_ptr = pgtable->pud_ptrs[pgd_idx];
pud_dma = pgtable->pgd_dma_ptr[pgd_idx];
if (!pud_dma_ptr)
continue;
for (pmd_index = 0; pmd_index < IVPU_MMU_PGTABLE_ENTRIES; ++pmd_index) {
if (pmd_entries[pmd_index])
dma_free_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE,
pmd_entries[pmd_index],
pmd[pmd_index] & ~IVPU_MMU_ENTRY_FLAGS_MASK);
for (pud_idx = 0; pud_idx < IVPU_MMU_PGTABLE_ENTRIES; ++pud_idx) {
pmd_dma_ptr = pgtable->pmd_ptrs[pgd_idx][pud_idx];
pmd_dma = pgtable->pud_ptrs[pgd_idx][pud_idx];
if (!pmd_dma_ptr)
continue;
for (pmd_idx = 0; pmd_idx < IVPU_MMU_PGTABLE_ENTRIES; ++pmd_idx) {
pte_dma_ptr = pgtable->pte_ptrs[pgd_idx][pud_idx][pmd_idx];
pte_dma = pgtable->pmd_ptrs[pgd_idx][pud_idx][pmd_idx];
ivpu_mmu_pgtable_free(vdev, pte_dma_ptr, pte_dma);
}
kfree(pgtable->pte_ptrs[pgd_idx][pud_idx]);
ivpu_mmu_pgtable_free(vdev, pmd_dma_ptr, pmd_dma);
}
kfree(pmd_entries);
dma_free_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, pgtable->pgd_entries[pgd_index],
pgtable->pgd[pgd_index] & ~IVPU_MMU_ENTRY_FLAGS_MASK);
kfree(pgtable->pmd_ptrs[pgd_idx]);
kfree(pgtable->pte_ptrs[pgd_idx]);
ivpu_mmu_pgtable_free(vdev, pud_dma_ptr, pud_dma);
}
dma_free_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, pgtable->pgd,
pgtable->pgd_dma & ~IVPU_MMU_ENTRY_FLAGS_MASK);
ivpu_mmu_pgtable_free(vdev, pgtable->pgd_dma_ptr, pgtable->pgd_dma);
}
static u64*
ivpu_mmu_ensure_pmd(struct ivpu_device *vdev, struct ivpu_mmu_pgtable *pgtable, u64 pgd_index)
ivpu_mmu_ensure_pud(struct ivpu_device *vdev, struct ivpu_mmu_pgtable *pgtable, int pgd_idx)
{
u64 **pmd_entries;
dma_addr_t pmd_dma;
u64 *pmd;
u64 *pud_dma_ptr = pgtable->pud_ptrs[pgd_idx];
dma_addr_t pud_dma;
if (pgtable->pgd_entries[pgd_index])
return pgtable->pgd_entries[pgd_index];
if (pud_dma_ptr)
return pud_dma_ptr;
pmd = dma_alloc_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, &pmd_dma, GFP_KERNEL);
if (!pmd)
pud_dma_ptr = dma_alloc_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, &pud_dma, GFP_KERNEL);
if (!pud_dma_ptr)
return NULL;
pmd_entries = kzalloc(IVPU_MMU_PGTABLE_SIZE, GFP_KERNEL);
if (!pmd_entries)
goto err_free_pgd;
drm_WARN_ON(&vdev->drm, pgtable->pmd_ptrs[pgd_idx]);
pgtable->pmd_ptrs[pgd_idx] = kzalloc(IVPU_MMU_PGTABLE_SIZE, GFP_KERNEL);
if (!pgtable->pmd_ptrs[pgd_idx])
goto err_free_pud_dma_ptr;
pgtable->pgd_entries[pgd_index] = pmd;
pgtable->pgd_cpu_entries[pgd_index] = pmd_entries;
pgtable->pgd[pgd_index] = pmd_dma | IVPU_MMU_ENTRY_VALID;
drm_WARN_ON(&vdev->drm, pgtable->pte_ptrs[pgd_idx]);
pgtable->pte_ptrs[pgd_idx] = kzalloc(IVPU_MMU_PGTABLE_SIZE, GFP_KERNEL);
if (!pgtable->pte_ptrs[pgd_idx])
goto err_free_pmd_ptrs;
return pmd;
pgtable->pud_ptrs[pgd_idx] = pud_dma_ptr;
pgtable->pgd_dma_ptr[pgd_idx] = pud_dma | IVPU_MMU_ENTRY_VALID;
err_free_pgd:
dma_free_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, pmd, pmd_dma);
return pud_dma_ptr;
err_free_pmd_ptrs:
kfree(pgtable->pmd_ptrs[pgd_idx]);
err_free_pud_dma_ptr:
ivpu_mmu_pgtable_free(vdev, pud_dma_ptr, pud_dma);
return NULL;
}
static u64*
ivpu_mmu_ensure_pmd(struct ivpu_device *vdev, struct ivpu_mmu_pgtable *pgtable, int pgd_idx,
int pud_idx)
{
u64 *pmd_dma_ptr = pgtable->pmd_ptrs[pgd_idx][pud_idx];
dma_addr_t pmd_dma;
if (pmd_dma_ptr)
return pmd_dma_ptr;
pmd_dma_ptr = dma_alloc_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, &pmd_dma, GFP_KERNEL);
if (!pmd_dma_ptr)
return NULL;
drm_WARN_ON(&vdev->drm, pgtable->pte_ptrs[pgd_idx][pud_idx]);
pgtable->pte_ptrs[pgd_idx][pud_idx] = kzalloc(IVPU_MMU_PGTABLE_SIZE, GFP_KERNEL);
if (!pgtable->pte_ptrs[pgd_idx][pud_idx])
goto err_free_pmd_dma_ptr;
pgtable->pmd_ptrs[pgd_idx][pud_idx] = pmd_dma_ptr;
pgtable->pud_ptrs[pgd_idx][pud_idx] = pmd_dma | IVPU_MMU_ENTRY_VALID;
return pmd_dma_ptr;
err_free_pmd_dma_ptr:
ivpu_mmu_pgtable_free(vdev, pmd_dma_ptr, pmd_dma);
return NULL;
}
static u64*
ivpu_mmu_ensure_pte(struct ivpu_device *vdev, struct ivpu_mmu_pgtable *pgtable,
int pgd_index, int pmd_index)
int pgd_idx, int pud_idx, int pmd_idx)
{
u64 *pte_dma_ptr = pgtable->pte_ptrs[pgd_idx][pud_idx][pmd_idx];
dma_addr_t pte_dma;
u64 *pte;
if (pgtable->pgd_cpu_entries[pgd_index][pmd_index])
return pgtable->pgd_cpu_entries[pgd_index][pmd_index];
if (pte_dma_ptr)
return pte_dma_ptr;
pte = dma_alloc_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, &pte_dma, GFP_KERNEL);
if (!pte)
pte_dma_ptr = dma_alloc_wc(vdev->drm.dev, IVPU_MMU_PGTABLE_SIZE, &pte_dma, GFP_KERNEL);
if (!pte_dma_ptr)
return NULL;
pgtable->pgd_cpu_entries[pgd_index][pmd_index] = pte;
pgtable->pgd_entries[pgd_index][pmd_index] = pte_dma | IVPU_MMU_ENTRY_VALID;
pgtable->pte_ptrs[pgd_idx][pud_idx][pmd_idx] = pte_dma_ptr;
pgtable->pmd_ptrs[pgd_idx][pud_idx][pmd_idx] = pte_dma | IVPU_MMU_ENTRY_VALID;
return pte;
return pte_dma_ptr;
}
static int
ivpu_mmu_context_map_page(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
u64 vpu_addr, dma_addr_t dma_addr, int prot)
u64 vpu_addr, dma_addr_t dma_addr, u64 prot)
{
u64 *pte;
int pgd_index = FIELD_GET(IVPU_MMU_PGD_INDEX_MASK, vpu_addr);
int pmd_index = FIELD_GET(IVPU_MMU_PMD_INDEX_MASK, vpu_addr);
int pte_index = FIELD_GET(IVPU_MMU_PTE_INDEX_MASK, vpu_addr);
int pgd_idx = FIELD_GET(IVPU_MMU_PGD_INDEX_MASK, vpu_addr);
int pud_idx = FIELD_GET(IVPU_MMU_PUD_INDEX_MASK, vpu_addr);
int pmd_idx = FIELD_GET(IVPU_MMU_PMD_INDEX_MASK, vpu_addr);
int pte_idx = FIELD_GET(IVPU_MMU_PTE_INDEX_MASK, vpu_addr);
/* Allocate PMD - second level page table if needed */
if (!ivpu_mmu_ensure_pmd(vdev, &ctx->pgtable, pgd_index))
/* Allocate PUD - second level page table if needed */
if (!ivpu_mmu_ensure_pud(vdev, &ctx->pgtable, pgd_idx))
return -ENOMEM;
/* Allocate PTE - third level page table if needed */
pte = ivpu_mmu_ensure_pte(vdev, &ctx->pgtable, pgd_index, pmd_index);
/* Allocate PMD - third level page table if needed */
if (!ivpu_mmu_ensure_pmd(vdev, &ctx->pgtable, pgd_idx, pud_idx))
return -ENOMEM;
/* Allocate PTE - fourth level page table if needed */
pte = ivpu_mmu_ensure_pte(vdev, &ctx->pgtable, pgd_idx, pud_idx, pmd_idx);
if (!pte)
return -ENOMEM;
/* Update PTE - third level page table with DMA address */
pte[pte_index] = dma_addr | prot;
/* Update PTE */
pte[pte_idx] = dma_addr | prot;
return 0;
}
static void ivpu_mmu_context_unmap_page(struct ivpu_mmu_context *ctx, u64 vpu_addr)
{
int pgd_index = FIELD_GET(IVPU_MMU_PGD_INDEX_MASK, vpu_addr);
int pmd_index = FIELD_GET(IVPU_MMU_PMD_INDEX_MASK, vpu_addr);
int pte_index = FIELD_GET(IVPU_MMU_PTE_INDEX_MASK, vpu_addr);
/* Update PTE with dummy physical address and clear flags */
ctx->pgtable.pgd_cpu_entries[pgd_index][pmd_index][pte_index] = IVPU_MMU_ENTRY_INVALID;
}
static void
ivpu_mmu_context_flush_page_tables(struct ivpu_mmu_context *ctx, u64 vpu_addr, size_t size)
{
u64 end_addr = vpu_addr + size;
u64 *pgd = ctx->pgtable.pgd;
/* Align to PMD entry (2 MB) */
vpu_addr &= ~(IVPU_MMU_PTE_MAP_SIZE - 1);
while (vpu_addr < end_addr) {
int pgd_index = FIELD_GET(IVPU_MMU_PGD_INDEX_MASK, vpu_addr);
u64 pmd_end = (pgd_index + 1) * (u64)IVPU_MMU_PMD_MAP_SIZE;
u64 *pmd = ctx->pgtable.pgd_entries[pgd_index];
while (vpu_addr < end_addr && vpu_addr < pmd_end) {
int pmd_index = FIELD_GET(IVPU_MMU_PMD_INDEX_MASK, vpu_addr);
u64 *pte = ctx->pgtable.pgd_cpu_entries[pgd_index][pmd_index];
clflush_cache_range(pte, IVPU_MMU_PGTABLE_SIZE);
vpu_addr += IVPU_MMU_PTE_MAP_SIZE;
}
clflush_cache_range(pmd, IVPU_MMU_PGTABLE_SIZE);
}
clflush_cache_range(pgd, IVPU_MMU_PGTABLE_SIZE);
}
static int
ivpu_mmu_context_map_pages(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
u64 vpu_addr, dma_addr_t dma_addr, size_t size, int prot)
ivpu_mmu_context_map_cont_64k(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, u64 vpu_addr,
dma_addr_t dma_addr, u64 prot)
{
size_t size = IVPU_MMU_CONT_PAGES_SIZE;
drm_WARN_ON(&vdev->drm, !IS_ALIGNED(vpu_addr, size));
drm_WARN_ON(&vdev->drm, !IS_ALIGNED(dma_addr, size));
prot |= IVPU_MMU_ENTRY_FLAG_CONT;
while (size) {
int ret = ivpu_mmu_context_map_page(vdev, ctx, vpu_addr, dma_addr, prot);
if (ret)
return ret;
size -= IVPU_MMU_PAGE_SIZE;
vpu_addr += IVPU_MMU_PAGE_SIZE;
dma_addr += IVPU_MMU_PAGE_SIZE;
size -= IVPU_MMU_PAGE_SIZE;
}
return 0;
}
static void ivpu_mmu_context_unmap_page(struct ivpu_mmu_context *ctx, u64 vpu_addr)
{
int pgd_idx = FIELD_GET(IVPU_MMU_PGD_INDEX_MASK, vpu_addr);
int pud_idx = FIELD_GET(IVPU_MMU_PUD_INDEX_MASK, vpu_addr);
int pmd_idx = FIELD_GET(IVPU_MMU_PMD_INDEX_MASK, vpu_addr);
int pte_idx = FIELD_GET(IVPU_MMU_PTE_INDEX_MASK, vpu_addr);
/* Update PTE with dummy physical address and clear flags */
ctx->pgtable.pte_ptrs[pgd_idx][pud_idx][pmd_idx][pte_idx] = IVPU_MMU_ENTRY_INVALID;
}
static void
ivpu_mmu_context_flush_page_tables(struct ivpu_mmu_context *ctx, u64 vpu_addr, size_t size)
{
struct ivpu_mmu_pgtable *pgtable = &ctx->pgtable;
u64 end_addr = vpu_addr + size;
/* Align to PMD entry (2 MB) */
vpu_addr &= ~(IVPU_MMU_PTE_MAP_SIZE - 1);
while (vpu_addr < end_addr) {
int pgd_idx = FIELD_GET(IVPU_MMU_PGD_INDEX_MASK, vpu_addr);
u64 pud_end = (pgd_idx + 1) * (u64)IVPU_MMU_PUD_MAP_SIZE;
while (vpu_addr < end_addr && vpu_addr < pud_end) {
int pud_idx = FIELD_GET(IVPU_MMU_PUD_INDEX_MASK, vpu_addr);
u64 pmd_end = (pud_idx + 1) * (u64)IVPU_MMU_PMD_MAP_SIZE;
while (vpu_addr < end_addr && vpu_addr < pmd_end) {
int pmd_idx = FIELD_GET(IVPU_MMU_PMD_INDEX_MASK, vpu_addr);
clflush_cache_range(pgtable->pte_ptrs[pgd_idx][pud_idx][pmd_idx],
IVPU_MMU_PGTABLE_SIZE);
vpu_addr += IVPU_MMU_PTE_MAP_SIZE;
}
clflush_cache_range(pgtable->pmd_ptrs[pgd_idx][pud_idx],
IVPU_MMU_PGTABLE_SIZE);
}
clflush_cache_range(pgtable->pud_ptrs[pgd_idx], IVPU_MMU_PGTABLE_SIZE);
}
clflush_cache_range(pgtable->pgd_dma_ptr, IVPU_MMU_PGTABLE_SIZE);
}
static int
ivpu_mmu_context_map_pages(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
u64 vpu_addr, dma_addr_t dma_addr, size_t size, u64 prot)
{
int map_size;
int ret;
while (size) {
if (!ivpu_disable_mmu_cont_pages && size >= IVPU_MMU_CONT_PAGES_SIZE &&
IS_ALIGNED(vpu_addr | dma_addr, IVPU_MMU_CONT_PAGES_SIZE)) {
ret = ivpu_mmu_context_map_cont_64k(vdev, ctx, vpu_addr, dma_addr, prot);
map_size = IVPU_MMU_CONT_PAGES_SIZE;
} else {
ret = ivpu_mmu_context_map_page(vdev, ctx, vpu_addr, dma_addr, prot);
map_size = IVPU_MMU_PAGE_SIZE;
}
if (ret)
return ret;
vpu_addr += map_size;
dma_addr += map_size;
size -= map_size;
}
return 0;
@ -216,8 +322,8 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
u64 vpu_addr, struct sg_table *sgt, bool llc_coherent)
{
struct scatterlist *sg;
int prot;
int ret;
u64 prot;
u64 i;
if (!IS_ALIGNED(vpu_addr, IVPU_MMU_PAGE_SIZE))
@ -237,7 +343,7 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
mutex_lock(&ctx->lock);
for_each_sgtable_dma_sg(sgt, sg, i) {
u64 dma_addr = sg_dma_address(sg) - sg->offset;
dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset;
size_t size = sg_dma_len(sg) + sg->offset;
ret = ivpu_mmu_context_map_pages(vdev, ctx, vpu_addr, dma_addr, size, prot);
@ -293,8 +399,14 @@ ivpu_mmu_context_insert_node_locked(struct ivpu_mmu_context *ctx,
{
lockdep_assert_held(&ctx->lock);
return drm_mm_insert_node_in_range(&ctx->mm, node, size, IVPU_MMU_PAGE_SIZE,
0, range->start, range->end, DRM_MM_INSERT_BEST);
if (!ivpu_disable_mmu_cont_pages && size >= IVPU_MMU_CONT_PAGES_SIZE) {
if (!drm_mm_insert_node_in_range(&ctx->mm, node, size, IVPU_MMU_CONT_PAGES_SIZE, 0,
range->start, range->end, DRM_MM_INSERT_BEST))
return 0;
}
return drm_mm_insert_node_in_range(&ctx->mm, node, size, IVPU_MMU_PAGE_SIZE, 0,
range->start, range->end, DRM_MM_INSERT_BEST);
}
void
@ -334,11 +446,15 @@ ivpu_mmu_context_init(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, u3
static void ivpu_mmu_context_fini(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx)
{
drm_WARN_ON(&vdev->drm, !ctx->pgtable.pgd);
if (drm_WARN_ON(&vdev->drm, !ctx->pgtable.pgd_dma_ptr))
return;
mutex_destroy(&ctx->lock);
ivpu_mmu_pgtable_free(vdev, &ctx->pgtable);
ivpu_mmu_pgtables_free(vdev, &ctx->pgtable);
drm_mm_takedown(&ctx->mm);
ctx->pgtable.pgd_dma_ptr = NULL;
ctx->pgtable.pgd_dma = 0;
}
int ivpu_mmu_global_context_init(struct ivpu_device *vdev)

@ -12,12 +12,13 @@ struct ivpu_device;
struct ivpu_file_priv;
struct ivpu_addr_range;
#define IVPU_MMU_PGTABLE_ENTRIES 512
#define IVPU_MMU_PGTABLE_ENTRIES 512ull
struct ivpu_mmu_pgtable {
u64 **pgd_cpu_entries[IVPU_MMU_PGTABLE_ENTRIES];
u64 *pgd_entries[IVPU_MMU_PGTABLE_ENTRIES];
u64 *pgd;
u64 ***pte_ptrs[IVPU_MMU_PGTABLE_ENTRIES];
u64 **pmd_ptrs[IVPU_MMU_PGTABLE_ENTRIES];
u64 *pud_ptrs[IVPU_MMU_PGTABLE_ENTRIES];
u64 *pgd_dma_ptr;
dma_addr_t pgd_dma;
};

@ -259,6 +259,7 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
pm_runtime_get_sync(vdev->drm.dev);
ivpu_dbg(vdev, PM, "Pre-reset..\n");
atomic_inc(&vdev->pm->reset_counter);
atomic_set(&vdev->pm->in_reset, 1);
ivpu_shutdown(vdev);
ivpu_pm_prepare_cold_boot(vdev);

@ -14,6 +14,7 @@ struct ivpu_pm_info {
struct ivpu_device *vdev;
struct work_struct recovery_work;
atomic_t in_reset;
atomic_t reset_counter;
bool is_warmboot;
u32 suspend_reschedule_counter;
};

@ -165,7 +165,6 @@ static const struct drm_driver qaic_accel_driver = {
.ioctls = qaic_drm_ioctls,
.num_ioctls = ARRAY_SIZE(qaic_drm_ioctls),
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = qaic_gem_prime_import,
};

@ -131,7 +131,6 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
int ret;
if (!is_dma_buf_file(file))
return -EINVAL;
@ -147,11 +146,7 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
dma_resv_lock(dmabuf->resv, NULL);
ret = dmabuf->ops->mmap(dmabuf, vma);
dma_resv_unlock(dmabuf->resv);
return ret;
return dmabuf->ops->mmap(dmabuf, vma);
}
static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@ -850,6 +845,7 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
* - &dma_buf_ops.release()
* - &dma_buf_ops.begin_cpu_access()
* - &dma_buf_ops.end_cpu_access()
* - &dma_buf_ops.mmap()
*
* 2. These &dma_buf_ops callbacks are invoked with locked dma-buf
* reservation and exporter can't take the lock:
@ -858,7 +854,6 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
* - &dma_buf_ops.unpin()
* - &dma_buf_ops.map_dma_buf()
* - &dma_buf_ops.unmap_dma_buf()
* - &dma_buf_ops.mmap()
* - &dma_buf_ops.vmap()
* - &dma_buf_ops.vunmap()
*
@ -1463,8 +1458,6 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
int ret;
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@ -1485,11 +1478,7 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
dma_resv_lock(dmabuf->resv, NULL);
ret = dmabuf->ops->mmap(dmabuf, vma);
dma_resv_unlock(dmabuf->resv);
return ret;
return dmabuf->ops->mmap(dmabuf, vma);
}
EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);

@ -13,7 +13,6 @@
#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/dma-map-ops.h>
#include <linux/dma-resv.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/io.h>
@ -183,8 +182,6 @@ static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
struct cma_heap_buffer *buffer = dmabuf->priv;
dma_resv_assert_held(dmabuf->resv);
if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
return -EINVAL;

@ -13,7 +13,6 @@
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/dma-heap.h>
#include <linux/dma-resv.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/mm.h>
@ -201,8 +200,6 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
struct sg_page_iter piter;
int ret;
dma_resv_assert_held(dmabuf->resv);
for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
struct page *page = sg_page_iter_page(&piter);

@ -51,8 +51,6 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
{
struct udmabuf *ubuf = buf->priv;
dma_resv_assert_held(buf->resv);
if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
return -EINVAL;

@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/efi.h>
#include <linux/screen_info.h>
#include <asm/efi.h>
#include "efistub.h"

@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/efi.h>
#include <linux/screen_info.h>
#include <asm/efi.h>
#include "efistub.h"

@ -80,6 +80,7 @@ config DRM_KUNIT_TEST
select DRM_BUDDY
select DRM_EXPORT_FOR_TESTS if m
select DRM_KUNIT_TEST_HELPERS
select DRM_EXEC
default KUNIT_ALL_TESTS
help
This builds unit tests for DRM. This option is not useful for
@ -194,6 +195,12 @@ config DRM_TTM
GPU memory types. Will be enabled automatically if a device driver
uses it.
config DRM_EXEC
tristate
depends on DRM
help
Execution context for command submissions
config DRM_BUDDY
tristate
depends on DRM
@ -323,6 +330,8 @@ source "drivers/gpu/drm/v3d/Kconfig"
source "drivers/gpu/drm/vc4/Kconfig"
source "drivers/gpu/drm/loongson/Kconfig"
source "drivers/gpu/drm/etnaviv/Kconfig"
source "drivers/gpu/drm/hisilicon/Kconfig"


@ -78,6 +78,8 @@ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
#
# Memory-management helpers
#
obj-$(CONFIG_DRM_EXEC) += drm_exec.o
obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
@ -194,3 +196,4 @@ obj-y += gud/
obj-$(CONFIG_DRM_HYPERV) += hyperv/
obj-y += solomon/
obj-$(CONFIG_DRM_SPRD) += sprd/
obj-$(CONFIG_DRM_LOONGSON) += loongson/


@ -21,6 +21,7 @@ config DRM_AMDGPU
select INTERVAL_TREE
select DRM_BUDDY
select DRM_SUBALLOC_HELPER
select DRM_EXEC
# amdgpu depends on ACPI_VIDEO when ACPI is enabled, for select to work
# ACPI_VIDEO's dependencies must also be selected.
select INPUT if ACPI


@ -53,7 +53,6 @@
#include <drm/ttm/ttm_bo.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_execbuf_util.h>
#include <drm/amdgpu_drm.h>
#include <drm/drm_gem.h>


@ -25,6 +25,7 @@
#ifndef AMDGPU_AMDKFD_H_INCLUDED
#define AMDGPU_AMDKFD_H_INCLUDED
#include <linux/list.h>
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/kthread.h>
@ -32,7 +33,6 @@
#include <linux/mmu_notifier.h>
#include <linux/memremap.h>
#include <kgd_kfd_interface.h>
#include <drm/ttm/ttm_execbuf_util.h>
#include "amdgpu_sync.h"
#include "amdgpu_vm.h"
#include "amdgpu_xcp.h"
@ -71,8 +71,7 @@ struct kgd_mem {
struct hmm_range *range;
struct list_head attachments;
/* protected by amdkfd_process_info.lock */
struct ttm_validate_buffer validate_list;
struct ttm_validate_buffer resv_list;
struct list_head validate_list;
uint32_t domain;
unsigned int mapped_to_gpu_memory;
uint64_t va;
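Because drm_exec tracks the locked objects itself, kgd_mem no longer needs ttm_validate_buffer wrappers; membership on the KFD lists is all that is left, via a plain list_head. The resulting bookkeeping, as used throughout the file below:

    list_add_tail(&mem->validate_list, &process_info->kfd_bo_list);
    ...
    list_for_each_entry(mem, &process_info->kfd_bo_list, validate_list)
            /* mem->bo is reached directly, no wrapper dereference */;
    ...
    list_del(&mem->validate_list);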


@ -27,6 +27,8 @@
#include <linux/sched/task.h>
#include <drm/ttm/ttm_tt.h>
#include <drm/drm_exec.h>
#include "amdgpu_object.h"
#include "amdgpu_gem.h"
#include "amdgpu_vm.h"
@ -964,28 +966,20 @@ static void add_kgd_mem_to_kfd_bo_list(struct kgd_mem *mem,
struct amdkfd_process_info *process_info,
bool userptr)
{
struct ttm_validate_buffer *entry = &mem->validate_list;
struct amdgpu_bo *bo = mem->bo;
INIT_LIST_HEAD(&entry->head);
entry->num_shared = 1;
entry->bo = &bo->tbo;
mutex_lock(&process_info->lock);
if (userptr)
list_add_tail(&entry->head, &process_info->userptr_valid_list);
list_add_tail(&mem->validate_list,
&process_info->userptr_valid_list);
else
list_add_tail(&entry->head, &process_info->kfd_bo_list);
list_add_tail(&mem->validate_list, &process_info->kfd_bo_list);
mutex_unlock(&process_info->lock);
}
static void remove_kgd_mem_from_kfd_bo_list(struct kgd_mem *mem,
struct amdkfd_process_info *process_info)
{
struct ttm_validate_buffer *bo_list_entry;
bo_list_entry = &mem->validate_list;
mutex_lock(&process_info->lock);
list_del(&bo_list_entry->head);
list_del(&mem->validate_list);
mutex_unlock(&process_info->lock);
}
@ -1072,13 +1066,12 @@ static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr,
* object can track VM updates.
*/
struct bo_vm_reservation_context {
struct amdgpu_bo_list_entry kfd_bo; /* BO list entry for the KFD BO */
unsigned int n_vms; /* Number of VMs reserved */
struct amdgpu_bo_list_entry *vm_pd; /* Array of VM BO list entries */
struct ww_acquire_ctx ticket; /* Reservation ticket */
struct list_head list, duplicates; /* BO lists */
struct amdgpu_sync *sync; /* Pointer to sync object */
bool reserved; /* Whether BOs are reserved */
/* DRM execution context for the reservation */
struct drm_exec exec;
/* Number of VMs reserved */
unsigned int n_vms;
/* Pointer to sync object */
struct amdgpu_sync *sync;
};
enum bo_vm_match {
@ -1102,35 +1095,26 @@ static int reserve_bo_and_vm(struct kgd_mem *mem,
WARN_ON(!vm);
ctx->reserved = false;
ctx->n_vms = 1;
ctx->sync = &mem->sync;
drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
drm_exec_until_all_locked(&ctx->exec) {
ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
drm_exec_retry_on_contention(&ctx->exec);
if (unlikely(ret))
goto error;
INIT_LIST_HEAD(&ctx->list);
INIT_LIST_HEAD(&ctx->duplicates);
ctx->vm_pd = kcalloc(ctx->n_vms, sizeof(*ctx->vm_pd), GFP_KERNEL);
if (!ctx->vm_pd)
return -ENOMEM;
ctx->kfd_bo.priority = 0;
ctx->kfd_bo.tv.bo = &bo->tbo;
ctx->kfd_bo.tv.num_shared = 1;
list_add(&ctx->kfd_bo.tv.head, &ctx->list);
amdgpu_vm_get_pd_bo(vm, &ctx->list, &ctx->vm_pd[0]);
ret = ttm_eu_reserve_buffers(&ctx->ticket, &ctx->list,
false, &ctx->duplicates);
if (ret) {
pr_err("Failed to reserve buffers in ttm.\n");
kfree(ctx->vm_pd);
ctx->vm_pd = NULL;
return ret;
ret = drm_exec_lock_obj(&ctx->exec, &bo->tbo.base);
drm_exec_retry_on_contention(&ctx->exec);
if (unlikely(ret))
goto error;
}
ctx->reserved = true;
return 0;
error:
pr_err("Failed to reserve buffers in ttm.\n");
drm_exec_fini(&ctx->exec);
return ret;
}
/**
@ -1147,63 +1131,39 @@ static int reserve_bo_and_cond_vms(struct kgd_mem *mem,
struct amdgpu_vm *vm, enum bo_vm_match map_type,
struct bo_vm_reservation_context *ctx)
{
struct amdgpu_bo *bo = mem->bo;
struct kfd_mem_attachment *entry;
unsigned int i;
struct amdgpu_bo *bo = mem->bo;
int ret;
ctx->reserved = false;
ctx->n_vms = 0;
ctx->vm_pd = NULL;
ctx->sync = &mem->sync;
drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
drm_exec_until_all_locked(&ctx->exec) {
ctx->n_vms = 0;
list_for_each_entry(entry, &mem->attachments, list) {
if ((vm && vm != entry->bo_va->base.vm) ||
(entry->is_mapped != map_type
&& map_type != BO_VM_ALL))
continue;
INIT_LIST_HEAD(&ctx->list);
INIT_LIST_HEAD(&ctx->duplicates);
ret = amdgpu_vm_lock_pd(entry->bo_va->base.vm,
&ctx->exec, 2);
drm_exec_retry_on_contention(&ctx->exec);
if (unlikely(ret))
goto error;
++ctx->n_vms;
}
list_for_each_entry(entry, &mem->attachments, list) {
if ((vm && vm != entry->bo_va->base.vm) ||
(entry->is_mapped != map_type
&& map_type != BO_VM_ALL))
continue;
ctx->n_vms++;
ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1);
drm_exec_retry_on_contention(&ctx->exec);
if (unlikely(ret))
goto error;
}
if (ctx->n_vms != 0) {
ctx->vm_pd = kcalloc(ctx->n_vms, sizeof(*ctx->vm_pd),
GFP_KERNEL);
if (!ctx->vm_pd)
return -ENOMEM;
}
ctx->kfd_bo.priority = 0;
ctx->kfd_bo.tv.bo = &bo->tbo;
ctx->kfd_bo.tv.num_shared = 1;
list_add(&ctx->kfd_bo.tv.head, &ctx->list);
i = 0;
list_for_each_entry(entry, &mem->attachments, list) {
if ((vm && vm != entry->bo_va->base.vm) ||
(entry->is_mapped != map_type
&& map_type != BO_VM_ALL))
continue;
amdgpu_vm_get_pd_bo(entry->bo_va->base.vm, &ctx->list,
&ctx->vm_pd[i]);
i++;
}
ret = ttm_eu_reserve_buffers(&ctx->ticket, &ctx->list,
false, &ctx->duplicates);
if (ret) {
pr_err("Failed to reserve buffers in ttm.\n");
kfree(ctx->vm_pd);
ctx->vm_pd = NULL;
return ret;
}
ctx->reserved = true;
return 0;
error:
pr_err("Failed to reserve buffers in ttm.\n");
drm_exec_fini(&ctx->exec);
return ret;
}
/**
@ -1224,15 +1184,8 @@ static int unreserve_bo_and_vms(struct bo_vm_reservation_context *ctx,
if (wait)
ret = amdgpu_sync_wait(ctx->sync, intr);
if (ctx->reserved)
ttm_eu_backoff_reservation(&ctx->ticket, &ctx->list);
kfree(ctx->vm_pd);
drm_exec_fini(&ctx->exec);
ctx->sync = NULL;
ctx->reserved = false;
ctx->vm_pd = NULL;
return ret;
}
@ -1854,7 +1807,6 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
bool use_release_notifier = (mem->bo->kfd_bo == mem);
struct kfd_mem_attachment *entry, *tmp;
struct bo_vm_reservation_context ctx;
struct ttm_validate_buffer *bo_list_entry;
unsigned int mapped_to_gpu_memory;
int ret;
bool is_imported = false;
@ -1882,9 +1834,8 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
}
/* Make sure restore workers don't access the BO any more */
bo_list_entry = &mem->validate_list;
mutex_lock(&process_info->lock);
list_del(&bo_list_entry->head);
list_del(&mem->validate_list);
mutex_unlock(&process_info->lock);
/* Cleanup user pages and MMU notifiers */
@ -2451,14 +2402,14 @@ static int update_invalid_user_pages(struct amdkfd_process_info *process_info,
/* Move all invalidated BOs to the userptr_inval_list */
list_for_each_entry_safe(mem, tmp_mem,
&process_info->userptr_valid_list,
validate_list.head)
validate_list)
if (mem->invalid)
list_move_tail(&mem->validate_list.head,
list_move_tail(&mem->validate_list,
&process_info->userptr_inval_list);
/* Go through userptr_inval_list and update any invalid user_pages */
list_for_each_entry(mem, &process_info->userptr_inval_list,
validate_list.head) {
validate_list) {
invalid = mem->invalid;
if (!invalid)
/* BO hasn't been invalidated since the last
@ -2538,51 +2489,42 @@ static int update_invalid_user_pages(struct amdkfd_process_info *process_info,
*/
static int validate_invalid_user_pages(struct amdkfd_process_info *process_info)
{
struct amdgpu_bo_list_entry *pd_bo_list_entries;
struct list_head resv_list, duplicates;
struct ww_acquire_ctx ticket;
struct ttm_operation_ctx ctx = { false, false };
struct amdgpu_sync sync;
struct drm_exec exec;
struct amdgpu_vm *peer_vm;
struct kgd_mem *mem, *tmp_mem;
struct amdgpu_bo *bo;
struct ttm_operation_ctx ctx = { false, false };
int i, ret;
pd_bo_list_entries = kcalloc(process_info->n_vms,
sizeof(struct amdgpu_bo_list_entry),
GFP_KERNEL);
if (!pd_bo_list_entries) {
pr_err("%s: Failed to allocate PD BO list entries\n", __func__);
ret = -ENOMEM;
goto out_no_mem;
}
INIT_LIST_HEAD(&resv_list);
INIT_LIST_HEAD(&duplicates);
/* Get all the page directory BOs that need to be reserved */
i = 0;
list_for_each_entry(peer_vm, &process_info->vm_list_head,
vm_list_node)
amdgpu_vm_get_pd_bo(peer_vm, &resv_list,
&pd_bo_list_entries[i++]);
/* Add the userptr_inval_list entries to resv_list */
list_for_each_entry(mem, &process_info->userptr_inval_list,
validate_list.head) {
list_add_tail(&mem->resv_list.head, &resv_list);
mem->resv_list.bo = mem->validate_list.bo;
mem->resv_list.num_shared = mem->validate_list.num_shared;
}
/* Reserve all BOs and page tables for validation */
ret = ttm_eu_reserve_buffers(&ticket, &resv_list, false, &duplicates);
WARN(!list_empty(&duplicates), "Duplicates should be empty");
if (ret)
goto out_free;
int ret;
amdgpu_sync_create(&sync);
drm_exec_init(&exec, 0);
/* Reserve all BOs and page tables for validation */
drm_exec_until_all_locked(&exec) {
/* Reserve all the page directories */
list_for_each_entry(peer_vm, &process_info->vm_list_head,
vm_list_node) {
ret = amdgpu_vm_lock_pd(peer_vm, &exec, 2);
drm_exec_retry_on_contention(&exec);
if (unlikely(ret))
goto unreserve_out;
}
/* Reserve the userptr_inval_list entries */
list_for_each_entry(mem, &process_info->userptr_inval_list,
validate_list) {
struct drm_gem_object *gobj;
gobj = &mem->bo->tbo.base;
ret = drm_exec_prepare_obj(&exec, gobj, 1);
drm_exec_retry_on_contention(&exec);
if (unlikely(ret))
goto unreserve_out;
}
}
ret = process_validate_vms(process_info);
if (ret)
goto unreserve_out;
@ -2590,7 +2532,7 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info)
/* Validate BOs and update GPUVM page tables */
list_for_each_entry_safe(mem, tmp_mem,
&process_info->userptr_inval_list,
validate_list.head) {
validate_list) {
struct kfd_mem_attachment *attachment;
bo = mem->bo;
@ -2632,12 +2574,9 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info)
ret = process_update_pds(process_info, &sync);
unreserve_out:
ttm_eu_backoff_reservation(&ticket, &resv_list);
drm_exec_fini(&exec);
amdgpu_sync_wait(&sync, false);
amdgpu_sync_free(&sync);
out_free:
kfree(pd_bo_list_entries);
out_no_mem:
return ret;
}
@ -2653,7 +2592,7 @@ static int confirm_valid_user_pages_locked(struct amdkfd_process_info *process_i
list_for_each_entry_safe(mem, tmp_mem,
&process_info->userptr_inval_list,
validate_list.head) {
validate_list) {
bool valid;
/* keep mem without hmm range at userptr_inval_list */
@ -2677,7 +2616,7 @@ static int confirm_valid_user_pages_locked(struct amdkfd_process_info *process_i
continue;
}
list_move_tail(&mem->validate_list.head,
list_move_tail(&mem->validate_list,
&process_info->userptr_valid_list);
}
@ -2787,50 +2726,44 @@ static void amdgpu_amdkfd_restore_userptr_worker(struct work_struct *work)
*/
int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef)
{
struct amdgpu_bo_list_entry *pd_bo_list;
struct amdkfd_process_info *process_info = info;
struct amdgpu_vm *peer_vm;
struct kgd_mem *mem;
struct bo_vm_reservation_context ctx;
struct amdgpu_amdkfd_fence *new_fence;
int ret = 0, i;
struct list_head duplicate_save;
struct amdgpu_sync sync_obj;
unsigned long failed_size = 0;
unsigned long total_size = 0;
struct drm_exec exec;
int ret;
INIT_LIST_HEAD(&duplicate_save);
INIT_LIST_HEAD(&ctx.list);
INIT_LIST_HEAD(&ctx.duplicates);
pd_bo_list = kcalloc(process_info->n_vms,
sizeof(struct amdgpu_bo_list_entry),
GFP_KERNEL);
if (!pd_bo_list)
return -ENOMEM;
i = 0;
mutex_lock(&process_info->lock);
list_for_each_entry(peer_vm, &process_info->vm_list_head,
vm_list_node)
amdgpu_vm_get_pd_bo(peer_vm, &ctx.list, &pd_bo_list[i++]);
/* Reserve all BOs and page tables/directory. Add all BOs from
* kfd_bo_list to ctx.list
*/
list_for_each_entry(mem, &process_info->kfd_bo_list,
validate_list.head) {
drm_exec_init(&exec, 0);
drm_exec_until_all_locked(&exec) {
list_for_each_entry(peer_vm, &process_info->vm_list_head,
vm_list_node) {
ret = amdgpu_vm_lock_pd(peer_vm, &exec, 2);
drm_exec_retry_on_contention(&exec);
if (unlikely(ret))
goto ttm_reserve_fail;
}
list_add_tail(&mem->resv_list.head, &ctx.list);
mem->resv_list.bo = mem->validate_list.bo;
mem->resv_list.num_shared = mem->validate_list.num_shared;
}
/* Reserve all BOs and page tables/directory. Add all BOs from
* kfd_bo_list to the drm_exec context
*/
list_for_each_entry(mem, &process_info->kfd_bo_list,
validate_list) {
struct drm_gem_object *gobj;
ret = ttm_eu_reserve_buffers(&ctx.ticket, &ctx.list,
false, &duplicate_save);
if (ret) {
pr_debug("Memory eviction: TTM Reserve Failed. Try again\n");
goto ttm_reserve_fail;
gobj = &mem->bo->tbo.base;
ret = drm_exec_prepare_obj(&exec, gobj, 1);
drm_exec_retry_on_contention(&exec);
if (unlikely(ret))
goto ttm_reserve_fail;
}
}
amdgpu_sync_create(&sync_obj);
@ -2848,7 +2781,7 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef)
/* Validate BOs and map them to GPUVM (update VM page tables). */
list_for_each_entry(mem, &process_info->kfd_bo_list,
validate_list.head) {
validate_list) {
struct amdgpu_bo *bo = mem->bo;
uint32_t domain = mem->domain;
@ -2924,8 +2857,7 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef)
*ef = dma_fence_get(&new_fence->base);
/* Attach new eviction fence to all BOs except pinned ones */
list_for_each_entry(mem, &process_info->kfd_bo_list,
validate_list.head) {
list_for_each_entry(mem, &process_info->kfd_bo_list, validate_list) {
if (mem->bo->tbo.pin_count)
continue;
@ -2944,11 +2876,10 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef)
}
validate_map_fail:
ttm_eu_backoff_reservation(&ctx.ticket, &ctx.list);
amdgpu_sync_free(&sync_obj);
ttm_reserve_fail:
drm_exec_fini(&exec);
mutex_unlock(&process_info->lock);
kfree(pd_bo_list);
return ret;
}
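One pattern recurs through these KFD paths: the ttm_eu_reserve_buffers()/ttm_eu_backoff_reservation() pair and the kcalloc'd PD entry arrays are replaced by a single drm_exec lifetime, with drm_exec_fini() as the one unlock point for success and error alike. Condensed to a fragment (vm and bo stand for the objects locked above):

    drm_exec_init(&exec, 0);
    drm_exec_until_all_locked(&exec) {
            ret = amdgpu_vm_lock_pd(vm, &exec, 2);
            drm_exec_retry_on_contention(&exec);
            if (unlikely(ret))
                    goto out;
    }
    /* ... validate and map under the locks ... */
    out:
            drm_exec_fini(&exec);   /* single unlock point */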


@ -28,6 +28,7 @@
* Christian König <deathsimple@vodafone.de>
*/
#include <linux/sort.h>
#include <linux/uaccess.h>
#include "amdgpu.h"
@ -50,15 +51,22 @@ static void amdgpu_bo_list_free(struct kref *ref)
refcount);
struct amdgpu_bo_list_entry *e;
amdgpu_bo_list_for_each_entry(e, list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
amdgpu_bo_unref(&bo);
}
amdgpu_bo_list_for_each_entry(e, list)
amdgpu_bo_unref(&e->bo);
call_rcu(&list->rhead, amdgpu_bo_list_free_rcu);
}
static int amdgpu_bo_list_entry_cmp(const void *_a, const void *_b)
{
const struct amdgpu_bo_list_entry *a = _a, *b = _b;
if (a->priority > b->priority)
return 1;
if (a->priority < b->priority)
return -1;
return 0;
}
int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
struct drm_amdgpu_bo_list_entry *info,
size_t num_entries, struct amdgpu_bo_list **result)
@ -118,7 +126,7 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
entry->priority = min(info[i].bo_priority,
AMDGPU_BO_LIST_MAX_PRIORITY);
entry->tv.bo = &bo->tbo;
entry->bo = bo;
if (bo->preferred_domains == AMDGPU_GEM_DOMAIN_GDS)
list->gds_obj = bo;
@ -133,6 +141,8 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
list->first_userptr = first_userptr;
list->num_entries = num_entries;
sort(array, last_entry, sizeof(struct amdgpu_bo_list_entry),
amdgpu_bo_list_entry_cmp, NULL);
trace_amdgpu_cs_bo_status(list->num_entries, total_size);
@ -141,16 +151,10 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
return 0;
error_free:
for (i = 0; i < last_entry; ++i) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
amdgpu_bo_unref(&bo);
}
for (i = first_userptr; i < num_entries; ++i) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
amdgpu_bo_unref(&bo);
}
for (i = 0; i < last_entry; ++i)
amdgpu_bo_unref(&array[i].bo);
for (i = first_userptr; i < num_entries; ++i)
amdgpu_bo_unref(&array[i].bo);
kvfree(list);
return r;
@ -182,41 +186,6 @@ int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id,
return -ENOENT;
}
void amdgpu_bo_list_get_list(struct amdgpu_bo_list *list,
struct list_head *validated)
{
/* This is based on the bucket sort with O(n) time complexity.
* An item with priority "i" is added to bucket[i]. The lists are then
* concatenated in descending order.
*/
struct list_head bucket[AMDGPU_BO_LIST_NUM_BUCKETS];
struct amdgpu_bo_list_entry *e;
unsigned i;
for (i = 0; i < AMDGPU_BO_LIST_NUM_BUCKETS; i++)
INIT_LIST_HEAD(&bucket[i]);
/* Since buffers which appear sooner in the relocation list are
* likely to be used more often than buffers which appear later
* in the list, the sort mustn't change the ordering of buffers
* with the same priority, i.e. it must be stable.
*/
amdgpu_bo_list_for_each_entry(e, list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
unsigned priority = e->priority;
if (!bo->parent)
list_add_tail(&e->tv.head, &bucket[priority]);
e->user_pages = NULL;
e->range = NULL;
}
/* Connect the sorted buckets in the output list. */
for (i = 0; i < AMDGPU_BO_LIST_NUM_BUCKETS; i++)
list_splice(&bucket[i], validated);
}
void amdgpu_bo_list_put(struct amdgpu_bo_list *list)
{
kref_put(&list->refcount, amdgpu_bo_list_free);


@ -23,7 +23,6 @@
#ifndef __AMDGPU_BO_LIST_H__
#define __AMDGPU_BO_LIST_H__
#include <drm/ttm/ttm_execbuf_util.h>
#include <drm/amdgpu_drm.h>
struct hmm_range;
@ -36,7 +35,7 @@ struct amdgpu_bo_va;
struct amdgpu_fpriv;
struct amdgpu_bo_list_entry {
struct ttm_validate_buffer tv;
struct amdgpu_bo *bo;
struct amdgpu_bo_va *bo_va;
uint32_t priority;
struct page **user_pages;
@ -60,8 +59,6 @@ struct amdgpu_bo_list {
int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id,
struct amdgpu_bo_list **result);
void amdgpu_bo_list_get_list(struct amdgpu_bo_list *list,
struct list_head *validated);
void amdgpu_bo_list_put(struct amdgpu_bo_list *list);
int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in,
struct drm_amdgpu_bo_list_entry **info_param);


@ -65,6 +65,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p,
}
amdgpu_sync_create(&p->sync);
drm_exec_init(&p->exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
return 0;
}
@ -125,7 +126,6 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p,
uint32_t *offset)
{
struct drm_gem_object *gobj;
struct amdgpu_bo *bo;
unsigned long size;
int r;
@ -133,18 +133,16 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p,
if (gobj == NULL)
return -EINVAL;
bo = amdgpu_bo_ref(gem_to_amdgpu_bo(gobj));
p->uf_entry.priority = 0;
p->uf_entry.tv.bo = &bo->tbo;
p->uf_bo = amdgpu_bo_ref(gem_to_amdgpu_bo(gobj));
drm_gem_object_put(gobj);
size = amdgpu_bo_size(bo);
size = amdgpu_bo_size(p->uf_bo);
if (size != PAGE_SIZE || (data->offset + 8) > size) {
r = -EINVAL;
goto error_unref;
}
if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm)) {
if (amdgpu_ttm_tt_get_usermm(p->uf_bo->tbo.ttm)) {
r = -EINVAL;
goto error_unref;
}
@ -154,7 +152,7 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p,
return 0;
error_unref:
amdgpu_bo_unref(&bo);
amdgpu_bo_unref(&p->uf_bo);
return r;
}
@ -311,7 +309,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
goto free_all_kdata;
}
if (p->uf_entry.tv.bo)
if (p->uf_bo)
p->gang_leader->uf_addr = uf_offset;
kvfree(chunk_array);
@ -356,7 +354,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p,
ib = &job->ibs[job->num_ibs++];
/* MM engine doesn't support user fences */
if (p->uf_entry.tv.bo && ring->funcs->no_user_fence)
if (p->uf_bo && ring->funcs->no_user_fence)
return -EINVAL;
if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
@ -841,55 +839,18 @@ static int amdgpu_cs_bo_validate(void *param, struct amdgpu_bo *bo)
return r;
}
static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p,
struct list_head *validated)
{
struct ttm_operation_ctx ctx = { true, false };
struct amdgpu_bo_list_entry *lobj;
int r;
list_for_each_entry(lobj, validated, tv.head) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(lobj->tv.bo);
struct mm_struct *usermm;
usermm = amdgpu_ttm_tt_get_usermm(bo->tbo.ttm);
if (usermm && usermm != current->mm)
return -EPERM;
if (amdgpu_ttm_tt_is_userptr(bo->tbo.ttm) &&
lobj->user_invalidated && lobj->user_pages) {
amdgpu_bo_placement_from_domain(bo,
AMDGPU_GEM_DOMAIN_CPU);
r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
if (r)
return r;
amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm,
lobj->user_pages);
}
r = amdgpu_cs_bo_validate(p, bo);
if (r)
return r;
kvfree(lobj->user_pages);
lobj->user_pages = NULL;
}
return 0;
}
static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
union drm_amdgpu_cs *cs)
{
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
struct ttm_operation_ctx ctx = { true, false };
struct amdgpu_vm *vm = &fpriv->vm;
struct amdgpu_bo_list_entry *e;
struct list_head duplicates;
struct drm_gem_object *obj;
unsigned long index;
unsigned int i;
int r;
INIT_LIST_HEAD(&p->validated);
/* p->bo_list could already be assigned if AMDGPU_CHUNK_ID_BO_HANDLES is present */
if (cs->in.bo_list_handle) {
if (p->bo_list)
@ -909,29 +870,13 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
mutex_lock(&p->bo_list->bo_list_mutex);
/* One for TTM and one for each CS job */
amdgpu_bo_list_for_each_entry(e, p->bo_list)
e->tv.num_shared = 1 + p->gang_size;
p->uf_entry.tv.num_shared = 1 + p->gang_size;
amdgpu_bo_list_get_list(p->bo_list, &p->validated);
INIT_LIST_HEAD(&duplicates);
amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);
/* Two for VM updates, one for TTM and one for each CS job */
p->vm_pd.tv.num_shared = 3 + p->gang_size;
if (p->uf_entry.tv.bo && !ttm_to_amdgpu_bo(p->uf_entry.tv.bo)->parent)
list_add(&p->uf_entry.tv.head, &p->validated);
/* Get userptr backing pages. If pages are updated after registered
* in amdgpu_gem_userptr_ioctl(), amdgpu_cs_list_validate() will do
* amdgpu_ttm_backend_bind() to flush and invalidate new pages
*/
amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
bool userpage_invalidated = false;
struct amdgpu_bo *bo = e->bo;
int i;
e->user_pages = kvmalloc_array(bo->tbo.ttm->num_pages,
@ -959,18 +904,56 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
e->user_invalidated = userpage_invalidated;
}
r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true,
&duplicates);
if (unlikely(r != 0)) {
if (r != -ERESTARTSYS)
DRM_ERROR("ttm_eu_reserve_buffers failed.\n");
goto out_free_user_pages;
drm_exec_until_all_locked(&p->exec) {
r = amdgpu_vm_lock_pd(&fpriv->vm, &p->exec, 1 + p->gang_size);
drm_exec_retry_on_contention(&p->exec);
if (unlikely(r))
goto out_free_user_pages;
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
/* One fence for TTM and one for each CS job */
r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base,
1 + p->gang_size);
drm_exec_retry_on_contention(&p->exec);
if (unlikely(r))
goto out_free_user_pages;
e->bo_va = amdgpu_vm_bo_find(vm, e->bo);
}
if (p->uf_bo) {
r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base,
1 + p->gang_size);
drm_exec_retry_on_contention(&p->exec);
if (unlikely(r))
goto out_free_user_pages;
}
}
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
struct mm_struct *usermm;
e->bo_va = amdgpu_vm_bo_find(vm, bo);
usermm = amdgpu_ttm_tt_get_usermm(e->bo->tbo.ttm);
if (usermm && usermm != current->mm) {
r = -EPERM;
goto out_free_user_pages;
}
if (amdgpu_ttm_tt_is_userptr(e->bo->tbo.ttm) &&
e->user_invalidated && e->user_pages) {
amdgpu_bo_placement_from_domain(e->bo,
AMDGPU_GEM_DOMAIN_CPU);
r = ttm_bo_validate(&e->bo->tbo, &e->bo->placement,
&ctx);
if (r)
goto out_free_user_pages;
amdgpu_ttm_tt_set_user_pages(e->bo->tbo.ttm,
e->user_pages);
}
kvfree(e->user_pages);
e->user_pages = NULL;
}
amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
@ -982,25 +965,21 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
amdgpu_cs_bo_validate, p);
if (r) {
DRM_ERROR("amdgpu_vm_validate_pt_bos() failed.\n");
goto error_validate;
goto out_free_user_pages;
}
r = amdgpu_cs_list_validate(p, &duplicates);
if (r)
goto error_validate;
drm_exec_for_each_locked_object(&p->exec, index, obj) {
r = amdgpu_cs_bo_validate(p, gem_to_amdgpu_bo(obj));
if (unlikely(r))
goto out_free_user_pages;
}
r = amdgpu_cs_list_validate(p, &p->validated);
if (r)
goto error_validate;
if (p->uf_bo) {
r = amdgpu_ttm_alloc_gart(&p->uf_bo->tbo);
if (unlikely(r))
goto out_free_user_pages;
if (p->uf_entry.tv.bo) {
struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo);
r = amdgpu_ttm_alloc_gart(&uf->tbo);
if (r)
goto error_validate;
p->gang_leader->uf_addr += amdgpu_bo_gpu_offset(uf);
p->gang_leader->uf_addr += amdgpu_bo_gpu_offset(p->uf_bo);
}
amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
@ -1012,12 +991,9 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
p->bo_list->oa_obj);
return 0;
error_validate:
ttm_eu_backoff_reservation(&p->ticket, &p->validated);
out_free_user_pages:
amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
struct amdgpu_bo *bo = e->bo;
if (!e->user_pages)
continue;
@ -1123,7 +1099,6 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
struct amdgpu_vm *vm = &fpriv->vm;
struct amdgpu_bo_list_entry *e;
struct amdgpu_bo_va *bo_va;
struct amdgpu_bo *bo;
unsigned int i;
int r;
@ -1152,11 +1127,6 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
}
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
/* ignore duplicates */
bo = ttm_to_amdgpu_bo(e->tv.bo);
if (!bo)
continue;
bo_va = e->bo_va;
if (bo_va == NULL)
continue;
@ -1194,7 +1164,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
if (amdgpu_vm_debug) {
/* Invalidate all BOs to test for userspace bugs */
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
struct amdgpu_bo *bo = e->bo;
/* ignore duplicates */
if (!bo)
@ -1211,8 +1181,9 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
{
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
struct drm_gpu_scheduler *sched;
struct amdgpu_bo_list_entry *e;
struct drm_gem_object *obj;
struct dma_fence *fence;
unsigned long index;
unsigned int i;
int r;
@ -1223,8 +1194,9 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
return r;
}
list_for_each_entry(e, &p->validated, tv.head) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
drm_exec_for_each_locked_object(&p->exec, index, obj) {
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
struct dma_resv *resv = bo->tbo.base.resv;
enum amdgpu_sync_mode sync_mode;
@ -1288,6 +1260,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
struct amdgpu_job *leader = p->gang_leader;
struct amdgpu_bo_list_entry *e;
struct drm_gem_object *gobj;
unsigned long index;
unsigned int i;
uint64_t seq;
int r;
@ -1326,9 +1300,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
*/
r = 0;
amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
r |= !amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm, e->range);
r |= !amdgpu_ttm_tt_get_user_pages_done(e->bo->tbo.ttm,
e->range);
e->range = NULL;
}
if (r) {
@ -1338,20 +1311,22 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
}
p->fence = dma_fence_get(&leader->base.s_fence->finished);
list_for_each_entry(e, &p->validated, tv.head) {
drm_exec_for_each_locked_object(&p->exec, index, gobj) {
ttm_bo_move_to_lru_tail_unlocked(&gem_to_amdgpu_bo(gobj)->tbo);
/* Everybody except for the gang leader uses READ */
for (i = 0; i < p->gang_size; ++i) {
if (p->jobs[i] == leader)
continue;
dma_resv_add_fence(e->tv.bo->base.resv,
dma_resv_add_fence(gobj->resv,
&p->jobs[i]->base.s_fence->finished,
DMA_RESV_USAGE_READ);
}
/* The gang leader is remembered as writer */
e->tv.num_shared = 0;
/* The gang leader is remembered as writer */
dma_resv_add_fence(gobj->resv, p->fence, DMA_RESV_USAGE_WRITE);
}
seq = amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_leader_idx],
@ -1367,7 +1342,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
cs->out.handle = seq;
leader->uf_sequence = seq;
amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket);
amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->exec.ticket);
for (i = 0; i < p->gang_size; ++i) {
amdgpu_job_free_resources(p->jobs[i]);
trace_amdgpu_cs_ioctl(p->jobs[i]);
@ -1376,7 +1351,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
}
amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
mutex_unlock(&p->adev->notifier_lock);
mutex_unlock(&p->bo_list->bo_list_mutex);
@ -1389,6 +1363,8 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
unsigned int i;
amdgpu_sync_free(&parser->sync);
drm_exec_fini(&parser->exec);
for (i = 0; i < parser->num_post_deps; i++) {
drm_syncobj_put(parser->post_deps[i].syncobj);
kfree(parser->post_deps[i].chain);
@ -1409,11 +1385,7 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser)
if (parser->jobs[i])
amdgpu_job_free(parser->jobs[i]);
}
if (parser->uf_entry.tv.bo) {
struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo);
amdgpu_bo_unref(&uf);
}
amdgpu_bo_unref(&parser->uf_bo);
}
int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
@ -1474,7 +1446,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
return 0;
error_backoff:
ttm_eu_backoff_reservation(&parser.ticket, &parser.validated);
mutex_unlock(&parser.bo_list->bo_list_mutex);
error_fini:
@ -1809,7 +1780,7 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
*map = mapping;
/* Double check that the BO is reserved by this CS */
if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->ticket)
if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->exec.ticket)
return -EINVAL;
if (!((*bo)->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)) {
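With the validated list gone from the CS path, implicit-sync bookkeeping changes shape as well: instead of tv.num_shared steering ttm_eu_fence_buffer_objects(), fences are attached explicitly per object and usage. A fragment modeled on the submit hunk above (read_job and write_fence are placeholders):

    drm_exec_for_each_locked_object(&p->exec, index, gobj) {
            /* readers are added with DMA_RESV_USAGE_READ ... */
            dma_resv_add_fence(gobj->resv,
                               &read_job->base.s_fence->finished,
                               DMA_RESV_USAGE_READ);
            /* ... and the writer is recorded explicitly. */
            dma_resv_add_fence(gobj->resv, write_fence,
                               DMA_RESV_USAGE_WRITE);
    }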


@ -24,6 +24,7 @@
#define __AMDGPU_CS_H__
#include <linux/ww_mutex.h>
#include <drm/drm_exec.h>
#include "amdgpu_job.h"
#include "amdgpu_bo_list.h"
@ -62,11 +63,9 @@ struct amdgpu_cs_parser {
struct amdgpu_job *gang_leader;
/* buffer objects */
struct ww_acquire_ctx ticket;
struct drm_exec exec;
struct amdgpu_bo_list *bo_list;
struct amdgpu_mn *mn;
struct amdgpu_bo_list_entry vm_pd;
struct list_head validated;
struct dma_fence *fence;
uint64_t bytes_moved_threshold;
uint64_t bytes_moved_vis_threshold;
@ -74,7 +73,7 @@ struct amdgpu_cs_parser {
uint64_t bytes_moved_vis;
/* user fence */
struct amdgpu_bo_list_entry uf_entry;
struct amdgpu_bo *uf_bo;
unsigned num_post_deps;
struct amdgpu_cs_post_dep *post_deps;


@ -22,6 +22,8 @@
* Author: Monk.liu@amd.com
*/
#include <drm/drm_exec.h>
#include "amdgpu.h"
uint64_t amdgpu_csa_vaddr(struct amdgpu_device *adev)
@ -65,31 +67,25 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_bo *bo, struct amdgpu_bo_va **bo_va,
uint64_t csa_addr, uint32_t size)
{
struct ww_acquire_ctx ticket;
struct list_head list;
struct amdgpu_bo_list_entry pd;
struct ttm_validate_buffer csa_tv;
struct drm_exec exec;
int r;
INIT_LIST_HEAD(&list);
INIT_LIST_HEAD(&csa_tv.head);
csa_tv.bo = &bo->tbo;
csa_tv.num_shared = 1;
list_add(&csa_tv.head, &list);
amdgpu_vm_get_pd_bo(vm, &list, &pd);
r = ttm_eu_reserve_buffers(&ticket, &list, true, NULL);
if (r) {
DRM_ERROR("failed to reserve CSA,PD BOs: err=%d\n", r);
return r;
drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
drm_exec_until_all_locked(&exec) {
r = amdgpu_vm_lock_pd(vm, &exec, 0);
if (likely(!r))
r = drm_exec_lock_obj(&exec, &bo->tbo.base);
drm_exec_retry_on_contention(&exec);
if (unlikely(r)) {
DRM_ERROR("failed to reserve CSA,PD BOs: err=%d\n", r);
goto error;
}
}
*bo_va = amdgpu_vm_bo_add(adev, vm, bo);
if (!*bo_va) {
ttm_eu_backoff_reservation(&ticket, &list);
DRM_ERROR("failed to create bo_va for static CSA\n");
return -ENOMEM;
r = -ENOMEM;
goto error;
}
r = amdgpu_vm_bo_map(adev, *bo_va, csa_addr, 0, size,
@ -99,48 +95,42 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
if (r) {
DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
amdgpu_vm_bo_del(adev, *bo_va);
ttm_eu_backoff_reservation(&ticket, &list);
return r;
goto error;
}
ttm_eu_backoff_reservation(&ticket, &list);
return 0;
error:
drm_exec_fini(&exec);
return r;
}
int amdgpu_unmap_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_bo *bo, struct amdgpu_bo_va *bo_va,
uint64_t csa_addr)
{
struct ww_acquire_ctx ticket;
struct list_head list;
struct amdgpu_bo_list_entry pd;
struct ttm_validate_buffer csa_tv;
struct drm_exec exec;
int r;
INIT_LIST_HEAD(&list);
INIT_LIST_HEAD(&csa_tv.head);
csa_tv.bo = &bo->tbo;
csa_tv.num_shared = 1;
list_add(&csa_tv.head, &list);
amdgpu_vm_get_pd_bo(vm, &list, &pd);
r = ttm_eu_reserve_buffers(&ticket, &list, true, NULL);
if (r) {
DRM_ERROR("failed to reserve CSA,PD BOs: err=%d\n", r);
return r;
drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
drm_exec_until_all_locked(&exec) {
r = amdgpu_vm_lock_pd(vm, &exec, 0);
if (likely(!r))
r = drm_exec_lock_obj(&exec, &bo->tbo.base);
drm_exec_retry_on_contention(&exec);
if (unlikely(r)) {
DRM_ERROR("failed to reserve CSA,PD BOs: err=%d\n", r);
goto error;
}
}
r = amdgpu_vm_bo_unmap(adev, bo_va, csa_addr);
if (r) {
DRM_ERROR("failed to do bo_unmap on static CSA, err=%d\n", r);
ttm_eu_backoff_reservation(&ticket, &list);
return r;
goto error;
}
amdgpu_vm_bo_del(adev, bo_va);
ttm_eu_backoff_reservation(&ticket, &list);
return 0;
error:
drm_exec_fini(&exec);
return r;
}


@ -2850,10 +2850,7 @@ static const struct drm_driver amdgpu_kms_driver = {
.show_fdinfo = amdgpu_show_fdinfo,
#endif
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = amdgpu_gem_prime_import,
.gem_prime_mmap = drm_gem_prime_mmap,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
@ -2877,10 +2874,7 @@ const struct drm_driver amdgpu_partition_driver = {
.fops = &amdgpu_driver_kms_fops,
.release = &amdgpu_driver_release_kms,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = amdgpu_gem_prime_import,
.gem_prime_mmap = drm_gem_prime_mmap,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
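These hunks are the driver-side fallout of the core changes noted at the top: PRIME handle/fd conversion and mmap now have core defaults, so the explicit hooks can simply be deleted. A driver wanting only default PRIME behavior ends up with (sketch; my_driver and my_gem_prime_import are hypothetical):

    static const struct drm_driver my_driver = {
            .driver_features = DRIVER_GEM | DRIVER_MODESET,
            /* No .prime_handle_to_fd, .prime_fd_to_handle or
             * .gem_prime_mmap: the core falls back to the
             * drm_gem_prime_*() helpers by itself. */
            .gem_prime_import = my_gem_prime_import, /* only if non-default */
    };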


@ -33,6 +33,7 @@
#include <drm/amdgpu_drm.h>
#include <drm/drm_drv.h>
#include <drm/drm_exec.h>
#include <drm/drm_gem_ttm_helper.h>
#include <drm/ttm/ttm_tt.h>
@ -198,29 +199,24 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
struct amdgpu_fpriv *fpriv = file_priv->driver_priv;
struct amdgpu_vm *vm = &fpriv->vm;
struct amdgpu_bo_list_entry vm_pd;
struct list_head list, duplicates;
struct dma_fence *fence = NULL;
struct ttm_validate_buffer tv;
struct ww_acquire_ctx ticket;
struct amdgpu_bo_va *bo_va;
struct drm_exec exec;
long r;
INIT_LIST_HEAD(&list);
INIT_LIST_HEAD(&duplicates);
drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES);
drm_exec_until_all_locked(&exec) {
r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 1);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto out_unlock;
tv.bo = &bo->tbo;
tv.num_shared = 2;
list_add(&tv.head, &list);
amdgpu_vm_get_pd_bo(vm, &list, &vm_pd);
r = ttm_eu_reserve_buffers(&ticket, &list, false, &duplicates);
if (r) {
dev_err(adev->dev, "leaking bo va because "
"we fail to reserve bo (%ld)\n", r);
return;
r = amdgpu_vm_lock_pd(vm, &exec, 0);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto out_unlock;
}
bo_va = amdgpu_vm_bo_find(vm, bo);
if (!bo_va || --bo_va->ref_count)
goto out_unlock;
@ -230,6 +226,9 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
goto out_unlock;
r = amdgpu_vm_clear_freed(adev, vm, &fence);
if (unlikely(r < 0))
dev_err(adev->dev, "failed to clear page "
"tables on GEM object close (%ld)\n", r);
if (r || !fence)
goto out_unlock;
@ -237,10 +236,9 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
dma_fence_put(fence);
out_unlock:
if (unlikely(r < 0))
dev_err(adev->dev, "failed to clear page "
"tables on GEM object close (%ld)\n", r);
ttm_eu_backoff_reservation(&ticket, &list);
if (r)
dev_err(adev->dev, "leaking bo va (%ld)\n", r);
drm_exec_fini(&exec);
}
static int amdgpu_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
@ -675,10 +673,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
struct amdgpu_fpriv *fpriv = filp->driver_priv;
struct amdgpu_bo *abo;
struct amdgpu_bo_va *bo_va;
struct amdgpu_bo_list_entry vm_pd;
struct ttm_validate_buffer tv;
struct ww_acquire_ctx ticket;
struct list_head list, duplicates;
struct drm_exec exec;
uint64_t va_flags;
uint64_t vm_size;
int r = 0;
@ -728,36 +723,38 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
return -EINVAL;
}
INIT_LIST_HEAD(&list);
INIT_LIST_HEAD(&duplicates);
if ((args->operation != AMDGPU_VA_OP_CLEAR) &&
!(args->flags & AMDGPU_VM_PAGE_PRT)) {
gobj = drm_gem_object_lookup(filp, args->handle);
if (gobj == NULL)
return -ENOENT;
abo = gem_to_amdgpu_bo(gobj);
tv.bo = &abo->tbo;
if (abo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID)
tv.num_shared = 1;
else
tv.num_shared = 0;
list_add(&tv.head, &list);
} else {
gobj = NULL;
abo = NULL;
}
amdgpu_vm_get_pd_bo(&fpriv->vm, &list, &vm_pd);
drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT |
DRM_EXEC_IGNORE_DUPLICATES);
drm_exec_until_all_locked(&exec) {
if (gobj) {
r = drm_exec_lock_obj(&exec, gobj);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto error;
}
r = ttm_eu_reserve_buffers(&ticket, &list, true, &duplicates);
if (r)
goto error_unref;
r = amdgpu_vm_lock_pd(&fpriv->vm, &exec, 2);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto error;
}
if (abo) {
bo_va = amdgpu_vm_bo_find(&fpriv->vm, abo);
if (!bo_va) {
r = -ENOENT;
goto error_backoff;
goto error;
}
} else if (args->operation != AMDGPU_VA_OP_CLEAR) {
bo_va = fpriv->prt_va;
@ -794,10 +791,8 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
amdgpu_gem_va_update_vm(adev, &fpriv->vm, bo_va,
args->operation);
error_backoff:
ttm_eu_backoff_reservation(&ticket, &list);
error_unref:
error:
drm_exec_fini(&exec);
drm_gem_object_put(gobj);
return r;
}
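The DRM_EXEC_IGNORE_DUPLICATES flag used here replaces the old duplicates list: per-VM BOs share the root PD's reservation object, so the drm_exec_lock_obj() and amdgpu_vm_lock_pd() calls above can hit the same ww_mutex. The flag turns the resulting -EALREADY into success (a plausible reading of the flag's role here, not spelled out in the patch itself):

    drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT |
                         DRM_EXEC_IGNORE_DUPLICATES);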


@ -22,6 +22,7 @@
*/
#include <linux/firmware.h>
#include <drm/drm_exec.h>
#include "amdgpu_mes.h"
#include "amdgpu.h"
@ -1168,34 +1169,31 @@ int amdgpu_mes_ctx_map_meta_data(struct amdgpu_device *adev,
struct amdgpu_mes_ctx_data *ctx_data)
{
struct amdgpu_bo_va *bo_va;
struct ww_acquire_ctx ticket;
struct list_head list;
struct amdgpu_bo_list_entry pd;
struct ttm_validate_buffer csa_tv;
struct amdgpu_sync sync;
struct drm_exec exec;
int r;
amdgpu_sync_create(&sync);
INIT_LIST_HEAD(&list);
INIT_LIST_HEAD(&csa_tv.head);
csa_tv.bo = &ctx_data->meta_data_obj->tbo;
csa_tv.num_shared = 1;
drm_exec_init(&exec, 0);
drm_exec_until_all_locked(&exec) {
r = drm_exec_lock_obj(&exec,
&ctx_data->meta_data_obj->tbo.base);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto error_fini_exec;
list_add(&csa_tv.head, &list);
amdgpu_vm_get_pd_bo(vm, &list, &pd);
r = ttm_eu_reserve_buffers(&ticket, &list, true, NULL);
if (r) {
DRM_ERROR("failed to reserve meta data BO: err=%d\n", r);
return r;
r = amdgpu_vm_lock_pd(vm, &exec, 0);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto error_fini_exec;
}
bo_va = amdgpu_vm_bo_add(adev, vm, ctx_data->meta_data_obj);
if (!bo_va) {
ttm_eu_backoff_reservation(&ticket, &list);
DRM_ERROR("failed to create bo_va for meta data BO\n");
return -ENOMEM;
r = -ENOMEM;
goto error_fini_exec;
}
r = amdgpu_vm_bo_map(adev, bo_va, ctx_data->meta_data_gpu_addr, 0,
@ -1205,33 +1203,35 @@ int amdgpu_mes_ctx_map_meta_data(struct amdgpu_device *adev,
if (r) {
DRM_ERROR("failed to do bo_map on meta data, err=%d\n", r);
goto error;
goto error_del_bo_va;
}
r = amdgpu_vm_bo_update(adev, bo_va, false);
if (r) {
DRM_ERROR("failed to do vm_bo_update on meta data\n");
goto error;
goto error_del_bo_va;
}
amdgpu_sync_fence(&sync, bo_va->last_pt_update);
r = amdgpu_vm_update_pdes(adev, vm, false);
if (r) {
DRM_ERROR("failed to update pdes on meta data\n");
goto error;
goto error_del_bo_va;
}
amdgpu_sync_fence(&sync, vm->last_update);
amdgpu_sync_wait(&sync, false);
ttm_eu_backoff_reservation(&ticket, &list);
drm_exec_fini(&exec);
amdgpu_sync_free(&sync);
ctx_data->meta_data_va = bo_va;
return 0;
error:
error_del_bo_va:
amdgpu_vm_bo_del(adev, bo_va);
ttm_eu_backoff_reservation(&ticket, &list);
error_fini_exec:
drm_exec_fini(&exec);
amdgpu_sync_free(&sync);
return r;
}
@ -1242,34 +1242,30 @@ int amdgpu_mes_ctx_unmap_meta_data(struct amdgpu_device *adev,
struct amdgpu_bo_va *bo_va = ctx_data->meta_data_va;
struct amdgpu_bo *bo = ctx_data->meta_data_obj;
struct amdgpu_vm *vm = bo_va->base.vm;
struct amdgpu_bo_list_entry vm_pd;
struct list_head list, duplicates;
struct dma_fence *fence = NULL;
struct ttm_validate_buffer tv;
struct ww_acquire_ctx ticket;
long r = 0;
struct dma_fence *fence;
struct drm_exec exec;
long r;
INIT_LIST_HEAD(&list);
INIT_LIST_HEAD(&duplicates);
drm_exec_init(&exec, 0);
drm_exec_until_all_locked(&exec) {
r = drm_exec_lock_obj(&exec,
&ctx_data->meta_data_obj->tbo.base);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto out_unlock;
tv.bo = &bo->tbo;
tv.num_shared = 2;
list_add(&tv.head, &list);
amdgpu_vm_get_pd_bo(vm, &list, &vm_pd);
r = ttm_eu_reserve_buffers(&ticket, &list, false, &duplicates);
if (r) {
dev_err(adev->dev, "leaking bo va because "
"we fail to reserve bo (%ld)\n", r);
return r;
r = amdgpu_vm_lock_pd(vm, &exec, 0);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto out_unlock;
}
amdgpu_vm_bo_del(adev, bo_va);
if (!amdgpu_vm_ready(vm))
goto out_unlock;
r = dma_resv_get_singleton(bo->tbo.base.resv, DMA_RESV_USAGE_BOOKKEEP, &fence);
r = dma_resv_get_singleton(bo->tbo.base.resv, DMA_RESV_USAGE_BOOKKEEP,
&fence);
if (r)
goto out_unlock;
if (fence) {
@ -1288,7 +1284,7 @@ int amdgpu_mes_ctx_unmap_meta_data(struct amdgpu_device *adev,
out_unlock:
if (unlikely(r < 0))
dev_err(adev->dev, "failed to clear page tables (%ld)\n", r);
ttm_eu_backoff_reservation(&ticket, &list);
drm_exec_fini(&exec);
return r;
}


@ -34,6 +34,7 @@
#include <drm/amdgpu_drm.h>
#include <drm/drm_drv.h>
#include <drm/ttm/ttm_tt.h>
#include <drm/drm_exec.h>
#include "amdgpu.h"
#include "amdgpu_trace.h"
#include "amdgpu_amdkfd.h"
@ -339,25 +340,20 @@ void amdgpu_vm_bo_base_init(struct amdgpu_vm_bo_base *base,
}
/**
* amdgpu_vm_get_pd_bo - add the VM PD to a validation list
* amdgpu_vm_lock_pd - lock PD in drm_exec
*
* @vm: vm providing the BOs
* @validated: head of validation list
* @entry: entry to add
* @exec: drm execution context
* @num_fences: number of extra fences to reserve
*
* Add the page directory to the list of BOs to
* validate for command submission.
* Lock the VM root PD in the DRM execution context.
*/
void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
struct list_head *validated,
struct amdgpu_bo_list_entry *entry)
int amdgpu_vm_lock_pd(struct amdgpu_vm *vm, struct drm_exec *exec,
unsigned int num_fences)
{
entry->priority = 0;
entry->tv.bo = &vm->root.bo->tbo;
/* Two for VM updates, one for TTM and one for the CS job */
entry->tv.num_shared = 4;
entry->user_pages = NULL;
list_add(&entry->tv.head, validated);
/* We need at least two fences for the VM PD/PT updates */
return drm_exec_prepare_obj(exec, &vm->root.bo->tbo.base,
2 + num_fences);
}
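Note the fence math: amdgpu_vm_lock_pd() always adds the two slots needed for the VM PD/PT updates on top of the caller's count, so callers pass only what they themselves need. The CS path, for instance, asks for one slot for TTM plus one per gang job:

    /* the 2 extra slots for VM updates come from amdgpu_vm_lock_pd() */
    r = amdgpu_vm_lock_pd(&fpriv->vm, &p->exec, 1 + p->gang_size);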
/**


@ -36,6 +36,8 @@
#include "amdgpu_ring.h"
#include "amdgpu_ids.h"
struct drm_exec;
struct amdgpu_bo_va;
struct amdgpu_job;
struct amdgpu_bo_list_entry;
@ -396,9 +398,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm);
int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm);
void amdgpu_vm_release_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm);
void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm);
void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
struct list_head *validated,
struct amdgpu_bo_list_entry *entry);
int amdgpu_vm_lock_pd(struct amdgpu_vm *vm, struct drm_exec *exec,
unsigned int num_fences);
bool amdgpu_vm_ready(struct amdgpu_vm *vm);
uint64_t amdgpu_vm_generation(struct amdgpu_device *adev, struct amdgpu_vm *vm);
int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,


@ -24,6 +24,8 @@
#include <linux/types.h>
#include <linux/sched/task.h>
#include <drm/ttm/ttm_tt.h>
#include <drm/drm_exec.h>
#include "amdgpu_sync.h"
#include "amdgpu_object.h"
#include "amdgpu_vm.h"
@ -1455,37 +1457,34 @@ struct svm_validate_context {
struct svm_range *prange;
bool intr;
DECLARE_BITMAP(bitmap, MAX_GPU_INSTANCE);
struct ttm_validate_buffer tv[MAX_GPU_INSTANCE];
struct list_head validate_list;
struct ww_acquire_ctx ticket;
struct drm_exec exec;
};
static int svm_range_reserve_bos(struct svm_validate_context *ctx)
static int svm_range_reserve_bos(struct svm_validate_context *ctx, bool intr)
{
struct kfd_process_device *pdd;
struct amdgpu_vm *vm;
uint32_t gpuidx;
int r;
INIT_LIST_HEAD(&ctx->validate_list);
for_each_set_bit(gpuidx, ctx->bitmap, MAX_GPU_INSTANCE) {
pdd = kfd_process_device_from_gpuidx(ctx->process, gpuidx);
if (!pdd) {
pr_debug("failed to find device idx %d\n", gpuidx);
return -EINVAL;
drm_exec_init(&ctx->exec, intr ? DRM_EXEC_INTERRUPTIBLE_WAIT : 0);
drm_exec_until_all_locked(&ctx->exec) {
for_each_set_bit(gpuidx, ctx->bitmap, MAX_GPU_INSTANCE) {
pdd = kfd_process_device_from_gpuidx(ctx->process, gpuidx);
if (!pdd) {
pr_debug("failed to find device idx %d\n", gpuidx);
r = -EINVAL;
goto unreserve_out;
}
vm = drm_priv_to_vm(pdd->drm_priv);
r = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
drm_exec_retry_on_contention(&ctx->exec);
if (unlikely(r)) {
pr_debug("failed %d to reserve bo\n", r);
goto unreserve_out;
}
}
vm = drm_priv_to_vm(pdd->drm_priv);
ctx->tv[gpuidx].bo = &vm->root.bo->tbo;
ctx->tv[gpuidx].num_shared = 4;
list_add(&ctx->tv[gpuidx].head, &ctx->validate_list);
}
r = ttm_eu_reserve_buffers(&ctx->ticket, &ctx->validate_list,
ctx->intr, NULL);
if (r) {
pr_debug("failed %d to reserve bo\n", r);
return r;
}
for_each_set_bit(gpuidx, ctx->bitmap, MAX_GPU_INSTANCE) {
@ -1508,13 +1507,13 @@ static int svm_range_reserve_bos(struct svm_validate_context *ctx)
return 0;
unreserve_out:
ttm_eu_backoff_reservation(&ctx->ticket, &ctx->validate_list);
drm_exec_fini(&ctx->exec);
return r;
}
static void svm_range_unreserve_bos(struct svm_validate_context *ctx)
{
ttm_eu_backoff_reservation(&ctx->ticket, &ctx->validate_list);
drm_exec_fini(&ctx->exec);
}
static void *kfd_svm_page_owner(struct kfd_process *p, int32_t gpuidx)
@ -1613,7 +1612,7 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
goto free_ctx;
}
svm_range_reserve_bos(ctx);
svm_range_reserve_bos(ctx, intr);
p = container_of(prange->svms, struct kfd_process, svms);
owner = kfd_svm_page_owner(p, find_first_bit(ctx->bitmap,


@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
menu "ARM devices"
depends on DRM
config DRM_HDLCD
tristate "ARM HDLCD"


@ -131,10 +131,9 @@ static int komeda_platform_probe(struct platform_device *pdev)
return component_master_add_with_match(dev, &komeda_master_ops, match);
}
static int komeda_platform_remove(struct platform_device *pdev)
static void komeda_platform_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &komeda_master_ops);
return 0;
}
static const struct of_device_id komeda_of_match[] = {
@ -189,7 +188,7 @@ static const struct dev_pm_ops komeda_pm_ops = {
static struct platform_driver komeda_platform_driver = {
.probe = komeda_platform_probe,
.remove = komeda_platform_remove,
.remove_new = komeda_platform_remove,
.driver = {
.name = "komeda",
.of_match_table = komeda_of_match,
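The same mechanical conversion follows for hdlcd, mali-dp and aspeed below: the remove callback loses its (always zero) return value and is registered through .remove_new, the void-returning variant that coexists with .remove during the tree-wide transition. In general form (sketch; my_* names are hypothetical):

    static void my_remove(struct platform_device *pdev)
    {
            /* tear down; there is nothing useful to return */
    }

    static struct platform_driver my_driver = {
            .probe      = my_probe,
            .remove_new = my_remove,
    };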


@ -367,10 +367,9 @@ static int hdlcd_probe(struct platform_device *pdev)
match);
}
static int hdlcd_remove(struct platform_device *pdev)
static void hdlcd_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &hdlcd_master_ops);
return 0;
}
static const struct of_device_id hdlcd_of_match[] = {
@ -399,7 +398,7 @@ static SIMPLE_DEV_PM_OPS(hdlcd_pm_ops, hdlcd_pm_suspend, hdlcd_pm_resume);
static struct platform_driver hdlcd_platform_driver = {
.probe = hdlcd_probe,
.remove = hdlcd_remove,
.remove_new = hdlcd_remove,
.driver = {
.name = "hdlcd",
.pm = &hdlcd_pm_ops,


@ -935,10 +935,9 @@ static int malidp_platform_probe(struct platform_device *pdev)
match);
}
static int malidp_platform_remove(struct platform_device *pdev)
static void malidp_platform_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &malidp_master_ops);
return 0;
}
static int __maybe_unused malidp_pm_suspend(struct device *dev)
@ -981,7 +980,7 @@ static const struct dev_pm_ops malidp_pm_ops = {
static struct platform_driver malidp_platform_driver = {
.probe = malidp_platform_probe,
.remove = malidp_platform_remove,
.remove_new = malidp_platform_remove,
.driver = {
.name = "mali-dp",
.pm = &malidp_pm_ops,


@ -37,8 +37,6 @@ static const struct drm_ioctl_desc armada_ioctls[] = {
DEFINE_DRM_GEM_FOPS(armada_drm_fops);
static const struct drm_driver armada_drm_driver = {
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = armada_gem_prime_import,
.dumb_create = armada_gem_dumb_create,
.major = 1,


@ -351,20 +351,18 @@ static int aspeed_gfx_probe(struct platform_device *pdev)
return ret;
}
static int aspeed_gfx_remove(struct platform_device *pdev)
static void aspeed_gfx_remove(struct platform_device *pdev)
{
struct drm_device *drm = platform_get_drvdata(pdev);
sysfs_remove_group(&pdev->dev.kobj, &aspeed_sysfs_attr_group);
drm_dev_unregister(drm);
aspeed_gfx_unload(drm);
return 0;
}
static struct platform_driver aspeed_gfx_platform_driver = {
.probe = aspeed_gfx_probe,
.remove = aspeed_gfx_remove,
.remove_new = aspeed_gfx_remove,
.driver = {
.name = "aspeed_gfx",
.of_match_table = aspeed_gfx_match,


@ -350,7 +350,7 @@ static bool ast_init_dvo(struct drm_device *dev)
data |= 0x00000500;
ast_write32(ast, 0x12008, data);
if (ast->chip == AST2300) {
if (IS_AST_GEN4(ast)) {
data = ast_read32(ast, 0x12084);
/* multi-pins for DVO single-edge */
data |= 0xfffe0000;
@ -366,7 +366,7 @@ static bool ast_init_dvo(struct drm_device *dev)
data &= 0xffffffcf;
data |= 0x00000020;
ast_write32(ast, 0x12090, data);
} else { /* AST2400 */
} else { /* AST GEN5+ */
data = ast_read32(ast, 0x12088);
/* multi-pins for DVO single-edge */
data |= 0x30000000;
@ -437,7 +437,7 @@ void ast_init_3rdtx(struct drm_device *dev)
struct ast_device *ast = to_ast_device(dev);
u8 jreg;
if (ast->chip == AST2300 || ast->chip == AST2400) {
if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast)) {
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
switch (jreg & 0x0e) {
case 0x04:


@ -52,19 +52,38 @@
#define PCI_CHIP_AST2000 0x2000
#define PCI_CHIP_AST2100 0x2010
#define __AST_CHIP(__gen, __index) ((__gen) << 16 | (__index))
enum ast_chip {
AST2000,
AST2100,
AST1100,
AST2200,
AST2150,
AST2300,
AST2400,
AST2500,
AST2600,
/* 1st gen */
AST1000 = __AST_CHIP(1, 0), // unused
AST2000 = __AST_CHIP(1, 1),
/* 2nd gen */
AST1100 = __AST_CHIP(2, 0),
AST2100 = __AST_CHIP(2, 1),
AST2050 = __AST_CHIP(2, 2), // unused
/* 3rd gen */
AST2200 = __AST_CHIP(3, 0),
AST2150 = __AST_CHIP(3, 1),
/* 4th gen */
AST2300 = __AST_CHIP(4, 0),
AST1300 = __AST_CHIP(4, 1),
AST1050 = __AST_CHIP(4, 2), // unused
/* 5th gen */
AST2400 = __AST_CHIP(5, 0),
AST1400 = __AST_CHIP(5, 1),
AST1250 = __AST_CHIP(5, 2), // unused
/* 6th gen */
AST2500 = __AST_CHIP(6, 0),
AST2510 = __AST_CHIP(6, 1),
AST2520 = __AST_CHIP(6, 2), // unused
/* 7th gen */
AST2600 = __AST_CHIP(7, 0),
AST2620 = __AST_CHIP(7, 1), // unused
};
#define __AST_CHIP_GEN(__chip) (((unsigned long)(__chip)) >> 16)
enum ast_tx_chip {
AST_TX_NONE,
AST_TX_SIL164,
@ -166,7 +185,6 @@ struct ast_device {
void __iomem *dp501_fw_buf;
enum ast_chip chip;
bool vga2_clone;
uint32_t dram_bus_width;
uint32_t dram_type;
uint32_t mclk;
@ -219,6 +237,24 @@ struct ast_device *ast_device_create(const struct drm_driver *drv,
struct pci_dev *pdev,
unsigned long flags);
static inline unsigned long __ast_gen(struct ast_device *ast)
{
return __AST_CHIP_GEN(ast->chip);
}
#define AST_GEN(__ast) __ast_gen(__ast)
static inline bool __ast_gen_is_eq(struct ast_device *ast, unsigned long gen)
{
return __ast_gen(ast) == gen;
}
#define IS_AST_GEN1(__ast) __ast_gen_is_eq(__ast, 1)
#define IS_AST_GEN2(__ast) __ast_gen_is_eq(__ast, 2)
#define IS_AST_GEN3(__ast) __ast_gen_is_eq(__ast, 3)
#define IS_AST_GEN4(__ast) __ast_gen_is_eq(__ast, 4)
#define IS_AST_GEN5(__ast) __ast_gen_is_eq(__ast, 5)
#define IS_AST_GEN6(__ast) __ast_gen_is_eq(__ast, 6)
#define IS_AST_GEN7(__ast) __ast_gen_is_eq(__ast, 7)
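Since the generation now lives in the upper 16 bits of the enum value, one check covers a whole family of chips; for example (sketch, program_gen6_dram() is hypothetical):

    /* AST2500 == __AST_CHIP(6, 0) and AST2510 == __AST_CHIP(6, 1),
     * so a single generation test handles both: */
    if (IS_AST_GEN6(ast))
            program_gen6_dram(ast);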
#define AST_IO_AR_PORT_WRITE (0x40)
#define AST_IO_MISC_PORT_WRITE (0x42)
#define AST_IO_VGA_ENABLE_PORT (0x43)
@ -258,26 +294,35 @@ static inline void ast_io_write8(struct ast_device *ast, u32 reg, u8 val)
iowrite8(val, ast->ioregs + reg);
}
static inline void ast_set_index_reg(struct ast_device *ast,
uint32_t base, uint8_t index,
uint8_t val)
static inline u8 ast_get_index_reg(struct ast_device *ast, u32 base, u8 index)
{
ast_io_write8(ast, base, index);
++base;
return ast_io_read8(ast, base);
}
static inline u8 ast_get_index_reg_mask(struct ast_device *ast, u32 base, u8 index,
u8 preserve_mask)
{
u8 val = ast_get_index_reg(ast, base, index);
return val & preserve_mask;
}
static inline void ast_set_index_reg(struct ast_device *ast, u32 base, u8 index, u8 val)
{
ast_io_write8(ast, base, index);
++base;
ast_io_write8(ast, base, val);
}
void ast_set_index_reg_mask(struct ast_device *ast,
uint32_t base, uint8_t index,
uint8_t mask, uint8_t val);
uint8_t ast_get_index_reg(struct ast_device *ast,
uint32_t base, uint8_t index);
uint8_t ast_get_index_reg_mask(struct ast_device *ast,
uint32_t base, uint8_t index, uint8_t mask);
static inline void ast_open_key(struct ast_device *ast)
static inline void ast_set_index_reg_mask(struct ast_device *ast, u32 base, u8 index,
u8 preserve_mask, u8 val)
{
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x80, 0xA8);
u8 tmp = ast_get_index_reg_mask(ast, base, index, preserve_mask);
tmp |= val;
ast_set_index_reg(ast, base, index, tmp);
}
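ast_set_index_reg_mask() is thus a read-modify-write: it keeps only the bits selected by preserve_mask and ORs val into the result before writing it back. For example:

    /* Set bit 2 of CRTC index 0xd0 while preserving all other bits. */
    ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xfb, 0x04);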
#define AST_VIDMEM_SIZE_8M 0x00800000
@ -458,9 +503,6 @@ int ast_mode_config_init(struct ast_device *ast);
int ast_mm_init(struct ast_device *ast);
/* ast post */
void ast_enable_vga(struct drm_device *dev);
void ast_enable_mmio(struct drm_device *dev);
bool ast_is_vga_enabled(struct drm_device *dev);
void ast_post_gpu(struct drm_device *dev);
u32 ast_mindwm(struct ast_device *ast, u32 r);
void ast_moutdwm(struct ast_device *ast, u32 r, u32 v);


@ -35,131 +35,170 @@
#include "ast_drv.h"
void ast_set_index_reg_mask(struct ast_device *ast,
uint32_t base, uint8_t index,
uint8_t mask, uint8_t val)
static int ast_init_pci_config(struct pci_dev *pdev)
{
u8 tmp;
ast_io_write8(ast, base, index);
tmp = (ast_io_read8(ast, base + 1) & mask) | val;
ast_set_index_reg(ast, base, index, tmp);
int err;
u16 pcis04;
err = pci_read_config_word(pdev, PCI_COMMAND, &pcis04);
if (err)
goto out;
pcis04 |= PCI_COMMAND_MEMORY | PCI_COMMAND_IO;
err = pci_write_config_word(pdev, PCI_COMMAND, pcis04);
out:
return pcibios_err_to_errno(err);
}
uint8_t ast_get_index_reg(struct ast_device *ast,
uint32_t base, uint8_t index)
static bool ast_is_vga_enabled(struct drm_device *dev)
{
uint8_t ret;
ast_io_write8(ast, base, index);
ret = ast_io_read8(ast, base + 1);
return ret;
}
uint8_t ast_get_index_reg_mask(struct ast_device *ast,
uint32_t base, uint8_t index, uint8_t mask)
{
uint8_t ret;
ast_io_write8(ast, base, index);
ret = ast_io_read8(ast, base + 1) & mask;
return ret;
}
static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
{
struct device_node *np = dev->dev->of_node;
struct ast_device *ast = to_ast_device(dev);
u8 ch;
ch = ast_io_read8(ast, AST_IO_VGA_ENABLE_PORT);
return !!(ch & 0x01);
}
static void ast_enable_vga(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
ast_io_write8(ast, AST_IO_VGA_ENABLE_PORT, 0x01);
ast_io_write8(ast, AST_IO_MISC_PORT_WRITE, 0x01);
}
/*
* Run this function as part of the HW device cleanup; not
* when the DRM device gets released.
*/
static void ast_enable_mmio_release(void *data)
{
struct ast_device *ast = data;
/* enable standard VGA decode */
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x04);
}
static int ast_enable_mmio(struct ast_device *ast)
{
struct drm_device *dev = &ast->base;
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x06);
return devm_add_action_or_reset(dev->dev, ast_enable_mmio_release, ast);
}
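The devm action registered above runs when the underlying PCI device is unbound, not when the DRM device is released by its last user. A minimal sketch of that pattern, with hypothetical names and any bound struct device:

#include <linux/device.h>

static void example_release(void *data)
{
	/* runs at device unbind (or immediately if registration fails) */
}

static int example_init(struct device *dev)
{
	/* on allocation failure the action fires at once and -ENOMEM is returned */
	return devm_add_action_or_reset(dev, example_release, NULL);
}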
static void ast_open_key(struct ast_device *ast)
{
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x80, 0xA8);
}
static int ast_device_config_init(struct ast_device *ast)
{
struct drm_device *dev = &ast->base;
struct pci_dev *pdev = to_pci_dev(dev->dev);
uint32_t data, jregd0, jregd1;
/* Defaults */
ast->config_mode = ast_use_defaults;
*scu_rev = 0xffffffff;
/* Check if we have device-tree properties */
if (np && !of_property_read_u32(np, "aspeed,scu-revision-id",
scu_rev)) {
/* We do, disable P2A access */
ast->config_mode = ast_use_dt;
drm_info(dev, "Using device-tree for configuration\n");
return;
}
/* Not all families have a P2A bridge */
if (pdev->device != PCI_CHIP_AST2000)
return;
struct device_node *np = dev->dev->of_node;
uint32_t scu_rev = 0xffffffff;
u32 data;
u8 jregd0, jregd1;
/*
* The BMC will set SCU 0x40 D[12] to 1 if the P2 bridge
* is disabled. We force using P2A if VGA only mode bit
* is set D[7]
* Find configuration mode and read SCU revision
*/
jregd0 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
jregd1 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
/* Patch AST2500 */
if (((pdev->revision & 0xF0) == 0x40)
&& ((jregd0 & AST_VRAM_INIT_STATUS_MASK) == 0))
ast_patch_ahb_2500(ast);
/* Double check it's actually working */
data = ast_read32(ast, 0xf004);
if ((data != 0xFFFFFFFF) && (data != 0x00)) {
/* P2A works, grab silicon revision */
ast->config_mode = ast_use_p2a;
ast->config_mode = ast_use_defaults;
drm_info(dev, "Using P2A bridge for configuration\n");
/* Check if we have device-tree properties */
if (np && !of_property_read_u32(np, "aspeed,scu-revision-id", &data)) {
/* We do, disable P2A access */
ast->config_mode = ast_use_dt;
scu_rev = data;
} else if (pdev->device == PCI_CHIP_AST2000) { // Not all families have a P2A bridge
/*
* The BMC will set SCU 0x40 D[12] to 1 if the P2 bridge
* is disabled. We force using P2A if VGA only mode bit
* is set D[7]
*/
jregd0 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
jregd1 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
/* Read SCU7c (silicon revision register) */
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
*scu_rev = ast_read32(ast, 0x1207c);
return;
/*
* We have a P2A bridge and it is enabled.
*/
/* Patch AST2500/AST2510 */
if ((pdev->revision & 0xf0) == 0x40) {
if (!(jregd0 & AST_VRAM_INIT_STATUS_MASK))
ast_patch_ahb_2500(ast);
}
/* Double check that it's actually working */
data = ast_read32(ast, 0xf004);
if ((data != 0xffffffff) && (data != 0x00)) {
ast->config_mode = ast_use_p2a;
/* Read SCU7c (silicon revision register) */
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
scu_rev = ast_read32(ast, 0x1207c);
}
}
}
/* We have a P2A bridge but it's disabled */
drm_info(dev, "P2A bridge disabled, using default configuration\n");
}
static int ast_detect_chip(struct drm_device *dev, bool *need_post)
{
struct ast_device *ast = to_ast_device(dev);
struct pci_dev *pdev = to_pci_dev(dev->dev);
uint32_t jreg, scu_rev;
switch (ast->config_mode) {
case ast_use_defaults:
drm_info(dev, "Using default configuration\n");
break;
case ast_use_dt:
drm_info(dev, "Using device-tree for configuration\n");
break;
case ast_use_p2a:
drm_info(dev, "Using P2A bridge for configuration\n");
break;
}
/*
* If VGA isn't enabled, we need to enable now or subsequent
* access to the scratch registers will fail. We also inform
* our caller that it needs to POST the chip
* (Assumption: VGA not enabled -> need to POST)
* Identify chipset
*/
if (!ast_is_vga_enabled(dev)) {
ast_enable_vga(dev);
drm_info(dev, "VGA not enabled on entry, requesting chip POST\n");
*need_post = true;
} else
*need_post = false;
/* Enable extended register access */
ast_open_key(ast);
ast_enable_mmio(dev);
/* Find out whether P2A works or whether to use device-tree */
ast_detect_config_mode(dev, &scu_rev);
/* Identify chipset */
if (pdev->revision >= 0x50) {
ast->chip = AST2600;
drm_info(dev, "AST 2600 detected\n");
} else if (pdev->revision >= 0x40) {
ast->chip = AST2500;
drm_info(dev, "AST 2500 detected\n");
switch (scu_rev & 0x300) {
case 0x0100:
ast->chip = AST2510;
drm_info(dev, "AST 2510 detected\n");
break;
default:
ast->chip = AST2500;
drm_info(dev, "AST 2500 detected\n");
}
} else if (pdev->revision >= 0x30) {
ast->chip = AST2400;
drm_info(dev, "AST 2400 detected\n");
switch (scu_rev & 0x300) {
case 0x0100:
ast->chip = AST1400;
drm_info(dev, "AST 1400 detected\n");
break;
default:
ast->chip = AST2400;
drm_info(dev, "AST 2400 detected\n");
}
} else if (pdev->revision >= 0x20) {
ast->chip = AST2300;
drm_info(dev, "AST 2300 detected\n");
switch (scu_rev & 0x300) {
case 0x0000:
ast->chip = AST1300;
drm_info(dev, "AST 1300 detected\n");
break;
default:
ast->chip = AST2300;
drm_info(dev, "AST 2300 detected\n");
break;
}
} else if (pdev->revision >= 0x10) {
switch (scu_rev & 0x0300) {
case 0x0200:
@ -179,15 +218,21 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
drm_info(dev, "AST 2100 detected\n");
break;
}
ast->vga2_clone = false;
} else {
ast->chip = AST2000;
drm_info(dev, "AST 2000 detected\n");
}
return 0;
}
static void ast_detect_widescreen(struct ast_device *ast)
{
u8 jreg;
/* Check if we support wide screen */
switch (ast->chip) {
case AST2000:
switch (AST_GEN(ast)) {
case 1:
ast->support_wide_screen = false;
break;
default:
@ -198,20 +243,23 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
ast->support_wide_screen = true;
else {
ast->support_wide_screen = false;
if (ast->chip == AST2300 &&
(scu_rev & 0x300) == 0x0) /* ast1300 */
if (ast->chip == AST1300)
ast->support_wide_screen = true;
if (ast->chip == AST2400 &&
(scu_rev & 0x300) == 0x100) /* ast1400 */
if (ast->chip == AST1400)
ast->support_wide_screen = true;
if (ast->chip == AST2500 &&
scu_rev == 0x100) /* ast2510 */
if (ast->chip == AST2510)
ast->support_wide_screen = true;
if (ast->chip == AST2600) /* ast2600 */
if (IS_AST_GEN7(ast))
ast->support_wide_screen = true;
}
break;
}
}
static void ast_detect_tx_chip(struct ast_device *ast, bool need_post)
{
struct drm_device *dev = &ast->base;
u8 jreg;
/* Check 3rd Tx option (digital output afaik) */
ast->tx_chip_types |= AST_TX_NONE_BIT;
@ -224,15 +272,15 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
* is at power-on reset, otherwise we'll incorrectly "detect" a
* SIL164 when there is none.
*/
if (!*need_post) {
if (!need_post) {
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xff);
if (jreg & 0x80)
ast->tx_chip_types = AST_TX_SIL164_BIT;
}
if ((ast->chip == AST2300) || (ast->chip == AST2400) || (ast->chip == AST2500)) {
if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast) || IS_AST_GEN6(ast)) {
/*
* On AST2300 and 2400, look at the configuration set by the SoC in
* On AST GEN4+, look at the configuration set by the SoC in
* the SOC scratch register #1 bits 11:8 (interestingly marked
* as "reserved" in the spec)
*/
@ -254,7 +302,7 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
case 0x0c:
ast->tx_chip_types = AST_TX_DP501_BIT;
}
} else if (ast->chip == AST2600) {
} else if (IS_AST_GEN7(ast)) {
if (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, TX_TYPE_MASK) ==
ASTDP_DPMCU_TX) {
ast->tx_chip_types = AST_TX_ASTDP_BIT;
@ -271,8 +319,6 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
drm_info(dev, "Using DP501 DisplayPort transmitter\n");
if (ast->tx_chip_types & AST_TX_ASTDP_BIT)
drm_info(dev, "Using ASPEED DisplayPort transmitter\n");
return 0;
}
static int ast_get_dram_info(struct drm_device *dev)
@ -286,7 +332,7 @@ static int ast_get_dram_info(struct drm_device *dev)
case ast_use_dt:
/*
* If some properties are missing, use reasonable
* defaults for AST2400
* defaults for GEN5
*/
if (of_property_read_u32(np, "aspeed,mcr-configuration",
&mcr_cfg))
@ -309,7 +355,7 @@ static int ast_get_dram_info(struct drm_device *dev)
default:
ast->dram_bus_width = 16;
ast->dram_type = AST_DRAM_1Gx16;
if (ast->chip == AST2500)
if (IS_AST_GEN6(ast))
ast->mclk = 800;
else
ast->mclk = 396;
@ -321,7 +367,7 @@ static int ast_get_dram_info(struct drm_device *dev)
else
ast->dram_bus_width = 32;
if (ast->chip == AST2500) {
if (IS_AST_GEN6(ast)) {
switch (mcr_cfg & 0x03) {
case 0:
ast->dram_type = AST_DRAM_1Gx16;
@ -337,7 +383,7 @@ static int ast_get_dram_info(struct drm_device *dev)
ast->dram_type = AST_DRAM_8Gx16;
break;
}
} else if (ast->chip == AST2300 || ast->chip == AST2400) {
} else if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast)) {
switch (mcr_cfg & 0x03) {
case 0:
ast->dram_type = AST_DRAM_512Mx16;
@ -395,25 +441,13 @@ static int ast_get_dram_info(struct drm_device *dev)
return 0;
}
/*
* Run this function as part of the HW device cleanup; not
* when the DRM device gets released.
*/
static void ast_device_release(void *data)
{
struct ast_device *ast = data;
/* enable standard VGA decode */
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x04);
}
struct ast_device *ast_device_create(const struct drm_driver *drv,
struct pci_dev *pdev,
unsigned long flags)
{
struct drm_device *dev;
struct ast_device *ast;
bool need_post;
bool need_post = false;
int ret = 0;
ast = devm_drm_dev_alloc(&pdev->dev, drv, struct ast_device, base);
@ -449,7 +483,34 @@ struct ast_device *ast_device_create(const struct drm_driver *drv,
return ERR_PTR(-EIO);
}
ast_detect_chip(dev, &need_post);
ret = ast_init_pci_config(pdev);
if (ret)
return ERR_PTR(ret);
if (!ast_is_vga_enabled(dev)) {
drm_info(dev, "VGA not enabled on entry, requesting chip POST\n");
need_post = true;
}
/*
* If VGA isn't enabled, we need to enable now or subsequent
* access to the scratch registers will fail.
*/
if (need_post)
ast_enable_vga(dev);
/* Enable extended register access */
ast_open_key(ast);
ret = ast_enable_mmio(ast);
if (ret)
return ERR_PTR(ret);
ret = ast_device_config_init(ast);
if (ret)
return ERR_PTR(ret);
ast_detect_widescreen(ast);
ast_detect_tx_chip(ast, need_post);
ret = ast_get_dram_info(dev);
if (ret)
@ -477,9 +538,5 @@ struct ast_device *ast_device_create(const struct drm_driver *drv,
if (ret)
return ERR_PTR(ret);
ret = devm_add_action_or_reset(dev->dev, ast_device_release, ast);
if (ret)
return ERR_PTR(ret);
return ast;
}


@ -38,8 +38,6 @@ static u32 ast_get_vram_size(struct ast_device *ast)
u8 jreg;
u32 vram_size;
ast_open_key(ast);
vram_size = AST_VIDMEM_DEFAULT_SIZE;
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xaa, 0xff);
switch (jreg & 3) {


@ -342,7 +342,7 @@ static void ast_set_crtc_reg(struct ast_device *ast,
u8 jreg05 = 0, jreg07 = 0, jreg09 = 0, jregAC = 0, jregAD = 0, jregAE = 0;
u16 temp, precache = 0;
if ((ast->chip == AST2500 || ast->chip == AST2600) &&
if ((IS_AST_GEN6(ast) || IS_AST_GEN7(ast)) &&
(vbios_mode->enh_table->flags & AST2500PreCatchCRT))
precache = 40;
@ -384,7 +384,7 @@ static void ast_set_crtc_reg(struct ast_device *ast,
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xAD, 0x00, jregAD);
// Workaround for HSync Time non octave pixels (1920x1080@60Hz HSync 44 pixels);
if ((ast->chip == AST2600) && (mode->crtc_vdisplay == 1080))
if (IS_AST_GEN7(ast) && (mode->crtc_vdisplay == 1080))
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xFC, 0xFD, 0x02);
else
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xFC, 0xFD, 0x00);
@ -466,7 +466,7 @@ static void ast_set_dclk_reg(struct ast_device *ast,
{
const struct ast_vbios_dclk_info *clk_info;
if ((ast->chip == AST2500) || (ast->chip == AST2600))
if (IS_AST_GEN6(ast) || IS_AST_GEN7(ast))
clk_info = &dclk_table_ast2500[vbios_mode->enh_table->dclk_index];
else
clk_info = &dclk_table[vbios_mode->enh_table->dclk_index];
@ -510,17 +510,13 @@ static void ast_set_color_reg(struct ast_device *ast,
static void ast_set_crtthd_reg(struct ast_device *ast)
{
/* Set Threshold */
if (ast->chip == AST2600) {
if (IS_AST_GEN7(ast)) {
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa7, 0xe0);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa6, 0xa0);
} else if (ast->chip == AST2300 || ast->chip == AST2400 ||
ast->chip == AST2500) {
} else if (IS_AST_GEN6(ast) || IS_AST_GEN5(ast) || IS_AST_GEN4(ast)) {
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa7, 0x78);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa6, 0x60);
} else if (ast->chip == AST2100 ||
ast->chip == AST1100 ||
ast->chip == AST2200 ||
ast->chip == AST2150) {
} else if (IS_AST_GEN3(ast) || IS_AST_GEN2(ast)) {
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa7, 0x3f);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa6, 0x2f);
} else {
@ -1082,9 +1078,10 @@ ast_crtc_helper_mode_valid(struct drm_crtc *crtc, const struct drm_display_mode
if ((mode->hdisplay == 1152) && (mode->vdisplay == 864))
return MODE_OK;
if ((ast->chip == AST2100) || (ast->chip == AST2200) ||
(ast->chip == AST2300) || (ast->chip == AST2400) ||
(ast->chip == AST2500) || (ast->chip == AST2600)) {
if ((ast->chip == AST2100) || // GEN2, but not AST1100 (?)
(ast->chip == AST2200) || // GEN3, but not AST2150 (?)
IS_AST_GEN4(ast) || IS_AST_GEN5(ast) ||
IS_AST_GEN6(ast) || IS_AST_GEN7(ast)) {
if ((mode->hdisplay == 1920) && (mode->vdisplay == 1080))
return MODE_OK;
@ -1800,12 +1797,12 @@ int ast_mode_config_init(struct ast_device *ast)
dev->mode_config.min_height = 0;
dev->mode_config.preferred_depth = 24;
if (ast->chip == AST2100 ||
ast->chip == AST2200 ||
ast->chip == AST2300 ||
ast->chip == AST2400 ||
ast->chip == AST2500 ||
ast->chip == AST2600) {
if (ast->chip == AST2100 || // GEN2, but not AST1100 (?)
ast->chip == AST2200 || // GEN3, but not AST2150 (?)
IS_AST_GEN7(ast) ||
IS_AST_GEN6(ast) ||
IS_AST_GEN5(ast) ||
IS_AST_GEN4(ast)) {
dev->mode_config.max_width = 1920;
dev->mode_config.max_height = 2048;
} else {


@ -37,41 +37,13 @@
static void ast_post_chip_2300(struct drm_device *dev);
static void ast_post_chip_2500(struct drm_device *dev);
void ast_enable_vga(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
ast_io_write8(ast, AST_IO_VGA_ENABLE_PORT, 0x01);
ast_io_write8(ast, AST_IO_MISC_PORT_WRITE, 0x01);
}
void ast_enable_mmio(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x06);
}
bool ast_is_vga_enabled(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
u8 ch;
ch = ast_io_read8(ast, AST_IO_VGA_ENABLE_PORT);
return !!(ch & 0x01);
}
static const u8 extreginfo[] = { 0x0f, 0x04, 0x1c, 0xff };
static const u8 extreginfo_ast2300a0[] = { 0x0f, 0x04, 0x1c, 0xff };
static const u8 extreginfo_ast2300[] = { 0x0f, 0x04, 0x1f, 0xff };
static void
ast_set_def_ext_reg(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
struct pci_dev *pdev = to_pci_dev(dev->dev);
u8 i, index, reg;
const u8 *ext_reg_info;
@ -79,13 +51,9 @@ ast_set_def_ext_reg(struct drm_device *dev)
for (i = 0x81; i <= 0x9f; i++)
ast_set_index_reg(ast, AST_IO_CRTC_PORT, i, 0x00);
if (ast->chip == AST2300 || ast->chip == AST2400 ||
ast->chip == AST2500) {
if (pdev->revision >= 0x20)
ext_reg_info = extreginfo_ast2300;
else
ext_reg_info = extreginfo_ast2300a0;
} else
if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast) || IS_AST_GEN6(ast))
ext_reg_info = extreginfo_ast2300;
else
ext_reg_info = extreginfo;
index = 0xa0;
@ -104,8 +72,7 @@ ast_set_def_ext_reg(struct drm_device *dev)
/* Enable RAMDAC for A1 */
reg = 0x04;
if (ast->chip == AST2300 || ast->chip == AST2400 ||
ast->chip == AST2500)
if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast) || IS_AST_GEN6(ast))
reg |= 0x20;
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb6, 0xff, reg);
}
@ -281,7 +248,7 @@ static void ast_init_dram_reg(struct drm_device *dev)
j = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
if ((j & 0x80) == 0) { /* VGA only */
if (ast->chip == AST2000) {
if (IS_AST_GEN1(ast)) {
dram_reg_info = ast2000_dram_table_data;
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
@ -290,8 +257,8 @@ static void ast_init_dram_reg(struct drm_device *dev)
do {
;
} while (ast_read32(ast, 0x10100) != 0xa8);
} else {/* AST2100/1100 */
if (ast->chip == AST2100 || ast->chip == 2200)
} else { /* GEN2/GEN3 */
if (ast->chip == AST2100 || ast->chip == AST2200)
dram_reg_info = ast2100_dram_table_data;
else
dram_reg_info = ast1100_dram_table_data;
@ -313,7 +280,7 @@ static void ast_init_dram_reg(struct drm_device *dev)
if (dram_reg_info->index == 0xff00) {/* delay fn */
for (i = 0; i < 15; i++)
udelay(dram_reg_info->data);
} else if (dram_reg_info->index == 0x4 && ast->chip != AST2000) {
} else if (dram_reg_info->index == 0x4 && !IS_AST_GEN1(ast)) {
data = dram_reg_info->data;
if (ast->dram_type == AST_DRAM_1Gx16)
data = 0x00000d89;
@ -339,15 +306,13 @@ static void ast_init_dram_reg(struct drm_device *dev)
cbrdlli_ast2150(ast, 32); /* 32 bits */
}
switch (ast->chip) {
case AST2000:
switch (AST_GEN(ast)) {
case 1:
temp = ast_read32(ast, 0x10140);
ast_write32(ast, 0x10140, temp | 0x40);
break;
case AST1100:
case AST2100:
case AST2200:
case AST2150:
case 2:
case 3:
temp = ast_read32(ast, 0x1200c);
ast_write32(ast, 0x1200c, temp & 0xfffffffd);
temp = ast_read32(ast, 0x12040);
@ -367,25 +332,16 @@ static void ast_init_dram_reg(struct drm_device *dev)
void ast_post_gpu(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
struct pci_dev *pdev = to_pci_dev(dev->dev);
u32 reg;
pci_read_config_dword(pdev, 0x04, &reg);
reg |= 0x3;
pci_write_config_dword(pdev, 0x04, reg);
ast_enable_vga(dev);
ast_open_key(ast);
ast_enable_mmio(dev);
ast_set_def_ext_reg(dev);
if (ast->chip == AST2600) {
if (IS_AST_GEN7(ast)) {
if (ast->tx_chip_types & AST_TX_ASTDP_BIT)
ast_dp_launch(dev);
} else if (ast->config_mode == ast_use_p2a) {
if (ast->chip == AST2500)
if (IS_AST_GEN6(ast))
ast_post_chip_2500(dev);
else if (ast->chip == AST2300 || ast->chip == AST2400)
else if (IS_AST_GEN5(ast) || IS_AST_GEN4(ast))
ast_post_chip_2300(dev);
else
ast_init_dram_reg(dev);


@ -773,15 +773,13 @@ static int atmel_hlcdc_dc_drm_probe(struct platform_device *pdev)
return ret;
}
static int atmel_hlcdc_dc_drm_remove(struct platform_device *pdev)
static void atmel_hlcdc_dc_drm_remove(struct platform_device *pdev)
{
struct drm_device *ddev = platform_get_drvdata(pdev);
drm_dev_unregister(ddev);
atmel_hlcdc_dc_unload(ddev);
drm_dev_put(ddev);
return 0;
}
static int atmel_hlcdc_dc_drm_suspend(struct device *dev)
@ -826,7 +824,7 @@ static const struct of_device_id atmel_hlcdc_dc_of_match[] = {
static struct platform_driver atmel_hlcdc_dc_platform_driver = {
.probe = atmel_hlcdc_dc_drm_probe,
.remove = atmel_hlcdc_dc_drm_remove,
.remove_new = atmel_hlcdc_dc_drm_remove,
.driver = {
.name = "atmel-hlcdc-display-controller",
.pm = pm_sleep_ptr(&atmel_hlcdc_dc_drm_pm_ops),
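A minimal sketch of the same conversion in isolation (hypothetical driver, names are illustrative): .remove_new takes a void-returning callback, since the driver core cannot act on an error at remove time anyway.

#include <linux/module.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	return 0;
}

static void example_remove(struct platform_device *pdev)
{
	/* tear down only; no error can be reported to the driver core */
}

static struct platform_driver example_driver = {
	.probe = example_probe,
	.remove_new = example_remove,
	.driver = {
		.name = "example",
	},
};
module_platform_driver(example_driver);
MODULE_LICENSE("GPL");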


@ -54,6 +54,26 @@
#include "cdns-mhdp8546-hdcp.h"
#include "cdns-mhdp8546-j721e.h"
static void cdns_mhdp_bridge_hpd_enable(struct drm_bridge *bridge)
{
struct cdns_mhdp_device *mhdp = bridge_to_mhdp(bridge);
/* Enable SW event interrupts */
if (mhdp->bridge_attached)
writel(readl(mhdp->regs + CDNS_APB_INT_MASK) &
~CDNS_APB_INT_MASK_SW_EVENT_INT,
mhdp->regs + CDNS_APB_INT_MASK);
}
static void cdns_mhdp_bridge_hpd_disable(struct drm_bridge *bridge)
{
struct cdns_mhdp_device *mhdp = bridge_to_mhdp(bridge);
writel(readl(mhdp->regs + CDNS_APB_INT_MASK) |
CDNS_APB_INT_MASK_SW_EVENT_INT,
mhdp->regs + CDNS_APB_INT_MASK);
}
static int cdns_mhdp_mailbox_read(struct cdns_mhdp_device *mhdp)
{
int ret, empty;
@ -749,9 +769,7 @@ static int cdns_mhdp_fw_activate(const struct firmware *fw,
* MHDP_HW_STOPPED happens only due to driver removal when
* bridge should already be detached.
*/
if (mhdp->bridge_attached)
writel(~(u32)CDNS_APB_INT_MASK_SW_EVENT_INT,
mhdp->regs + CDNS_APB_INT_MASK);
cdns_mhdp_bridge_hpd_enable(&mhdp->bridge);
spin_unlock(&mhdp->start_lock);
@ -1740,8 +1758,7 @@ static int cdns_mhdp_attach(struct drm_bridge *bridge,
/* Enable SW event interrupts */
if (hw_ready)
writel(~(u32)CDNS_APB_INT_MASK_SW_EVENT_INT,
mhdp->regs + CDNS_APB_INT_MASK);
cdns_mhdp_bridge_hpd_enable(bridge);
return 0;
aux_unregister:
@ -2146,6 +2163,27 @@ cdns_mhdp_bridge_atomic_reset(struct drm_bridge *bridge)
return &cdns_mhdp_state->base;
}
static u32 *cdns_mhdp_get_input_bus_fmts(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state,
u32 output_fmt,
unsigned int *num_input_fmts)
{
u32 *input_fmts;
*num_input_fmts = 0;
input_fmts = kzalloc(sizeof(*input_fmts), GFP_KERNEL);
if (!input_fmts)
return NULL;
*num_input_fmts = 1;
input_fmts[0] = MEDIA_BUS_FMT_RGB121212_1X36;
return input_fmts;
}
static int cdns_mhdp_atomic_check(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state,
struct drm_crtc_state *crtc_state,
@ -2165,6 +2203,13 @@ static int cdns_mhdp_atomic_check(struct drm_bridge *bridge,
return -EINVAL;
}
/*
* There might be flags negotiation supported in future.
* Set the bus flags in atomic_check statically for now.
*/
if (mhdp->info)
bridge_state->input_bus_cfg.flags = *mhdp->info->input_bus_flags;
mutex_unlock(&mhdp->link_mutex);
return 0;
}
@ -2184,23 +2229,6 @@ static struct edid *cdns_mhdp_bridge_get_edid(struct drm_bridge *bridge,
return cdns_mhdp_get_edid(mhdp, connector);
}
static void cdns_mhdp_bridge_hpd_enable(struct drm_bridge *bridge)
{
struct cdns_mhdp_device *mhdp = bridge_to_mhdp(bridge);
/* Enable SW event interrupts */
if (mhdp->bridge_attached)
writel(~(u32)CDNS_APB_INT_MASK_SW_EVENT_INT,
mhdp->regs + CDNS_APB_INT_MASK);
}
static void cdns_mhdp_bridge_hpd_disable(struct drm_bridge *bridge)
{
struct cdns_mhdp_device *mhdp = bridge_to_mhdp(bridge);
writel(CDNS_APB_INT_MASK_SW_EVENT_INT, mhdp->regs + CDNS_APB_INT_MASK);
}
static const struct drm_bridge_funcs cdns_mhdp_bridge_funcs = {
.atomic_enable = cdns_mhdp_atomic_enable,
.atomic_disable = cdns_mhdp_atomic_disable,
@ -2210,6 +2238,7 @@ static const struct drm_bridge_funcs cdns_mhdp_bridge_funcs = {
.atomic_duplicate_state = cdns_mhdp_bridge_atomic_duplicate_state,
.atomic_destroy_state = cdns_mhdp_bridge_atomic_destroy_state,
.atomic_reset = cdns_mhdp_bridge_atomic_reset,
.atomic_get_input_bus_fmts = cdns_mhdp_get_input_bus_fmts,
.detect = cdns_mhdp_bridge_detect,
.get_edid = cdns_mhdp_bridge_get_edid,
.hpd_enable = cdns_mhdp_bridge_hpd_enable,
@ -2529,8 +2558,6 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
mhdp->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID |
DRM_BRIDGE_OP_HPD;
mhdp->bridge.type = DRM_MODE_CONNECTOR_DisplayPort;
if (mhdp->info)
mhdp->bridge.timings = mhdp->info->timings;
ret = phy_init(mhdp->phy);
if (ret) {
@ -2617,7 +2644,7 @@ static const struct of_device_id mhdp_ids[] = {
#ifdef CONFIG_DRM_CDNS_MHDP8546_J721E
{ .compatible = "ti,j721e-mhdp8546",
.data = &(const struct cdns_mhdp_platform_info) {
.timings = &mhdp_ti_j721e_bridge_timings,
.input_bus_flags = &mhdp_ti_j721e_bridge_input_bus_flags,
.ops = &mhdp_ti_j721e_ops,
},
},


@ -336,7 +336,7 @@ struct cdns_mhdp_bridge_state {
};
struct cdns_mhdp_platform_info {
const struct drm_bridge_timings *timings;
const u32 *input_bus_flags;
const struct mhdp_platform_ops *ops;
};


@ -71,8 +71,7 @@ const struct mhdp_platform_ops mhdp_ti_j721e_ops = {
.disable = cdns_mhdp_j721e_disable,
};
const struct drm_bridge_timings mhdp_ti_j721e_bridge_timings = {
.input_bus_flags = DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE |
DRM_BUS_FLAG_SYNC_SAMPLE_NEGEDGE |
DRM_BUS_FLAG_DE_HIGH,
};
const u32
mhdp_ti_j721e_bridge_input_bus_flags = DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE |
DRM_BUS_FLAG_SYNC_SAMPLE_NEGEDGE |
DRM_BUS_FLAG_DE_HIGH;


@ -14,6 +14,6 @@
struct mhdp_platform_ops;
extern const struct mhdp_platform_ops mhdp_ti_j721e_ops;
extern const struct drm_bridge_timings mhdp_ti_j721e_bridge_timings;
extern const u32 mhdp_ti_j721e_bridge_input_bus_flags;
#endif /* !CDNS_MHDP8546_J721E_H */


@ -28,6 +28,8 @@
#define EDID_BLOCK_SIZE 128
#define EDID_NUM_BLOCKS 2
#define FW_FILE "lt9611uxc_fw.bin"
struct lt9611uxc {
struct device *dev;
struct drm_bridge bridge;
@ -754,7 +756,7 @@ static int lt9611uxc_firmware_update(struct lt9611uxc *lt9611uxc)
REG_SEQ0(0x805a, 0x00),
};
ret = request_firmware(&fw, "lt9611uxc_fw.bin", lt9611uxc->dev);
ret = request_firmware(&fw, FW_FILE, lt9611uxc->dev);
if (ret < 0)
return ret;
@ -1019,3 +1021,5 @@ module_i2c_driver(lt9611uxc_driver);
MODULE_AUTHOR("Dmitry Baryshkov <dmitry.baryshkov@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_FIRMWARE(FW_FILE);
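The point of the FW_FILE macro is that the loader call and the module metadata share one string. A minimal sketch of the pattern, with hypothetical names:

#include <linux/firmware.h>
#include <linux/module.h>

#define EXAMPLE_FW_FILE "example_fw.bin"

static int example_load_fw(struct device *dev)
{
	const struct firmware *fw;
	int ret;

	ret = request_firmware(&fw, EXAMPLE_FW_FILE, dev);
	if (ret < 0)
		return ret;

	/* ... program the device from fw->data / fw->size ... */

	release_firmware(fw);
	return 0;
}

/* lets userspace tooling know which firmware file this module needs */
MODULE_FIRMWARE(EXAMPLE_FW_FILE);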


@ -105,7 +105,6 @@ struct ps8640 {
struct gpio_desc *gpio_reset;
struct gpio_desc *gpio_powerdown;
struct device_link *link;
struct edid *edid;
bool pre_enabled;
bool need_post_hpd_delay;
};
@ -155,23 +154,6 @@ static inline struct ps8640 *aux_to_ps8640(struct drm_dp_aux *aux)
return container_of(aux, struct ps8640, aux);
}
static bool ps8640_of_panel_on_aux_bus(struct device *dev)
{
struct device_node *bus, *panel;
bus = of_get_child_by_name(dev->of_node, "aux-bus");
if (!bus)
return false;
panel = of_get_child_by_name(bus, "panel");
of_node_put(bus);
if (!panel)
return false;
of_node_put(panel);
return true;
}
static int _ps8640_wait_hpd_asserted(struct ps8640 *ps_bridge, unsigned long wait_us)
{
struct regmap *map = ps_bridge->regmap[PAGE2_TOP_CNTL];
@ -539,50 +521,6 @@ static void ps8640_bridge_detach(struct drm_bridge *bridge)
device_link_del(ps_bridge->link);
}
static struct edid *ps8640_bridge_get_edid(struct drm_bridge *bridge,
struct drm_connector *connector)
{
struct ps8640 *ps_bridge = bridge_to_ps8640(bridge);
struct device *dev = &ps_bridge->page[PAGE0_DP_CNTL]->dev;
bool poweroff = !ps_bridge->pre_enabled;
if (!ps_bridge->edid) {
/*
* When we end calling get_edid() triggered by an ioctl, i.e
*
* drm_mode_getconnector (ioctl)
* -> drm_helper_probe_single_connector_modes
* -> drm_bridge_connector_get_modes
* -> ps8640_bridge_get_edid
*
* We need to make sure that what we need is enabled before
* reading EDID, for this chip, we need to do a full poweron,
* otherwise it will fail.
*/
if (poweroff)
drm_atomic_bridge_chain_pre_enable(bridge,
connector->state->state);
ps_bridge->edid = drm_get_edid(connector,
ps_bridge->page[PAGE0_DP_CNTL]->adapter);
/*
* If we call the get_edid() function without having enabled the
* chip before, return the chip to its original power state.
*/
if (poweroff)
drm_atomic_bridge_chain_post_disable(bridge,
connector->state->state);
}
if (!ps_bridge->edid) {
dev_err(dev, "Failed to get EDID\n");
return NULL;
}
return drm_edid_duplicate(ps_bridge->edid);
}
static void ps8640_runtime_disable(void *data)
{
pm_runtime_dont_use_autosuspend(data);
@ -592,7 +530,6 @@ static void ps8640_runtime_disable(void *data)
static const struct drm_bridge_funcs ps8640_bridge_funcs = {
.attach = ps8640_bridge_attach,
.detach = ps8640_bridge_detach,
.get_edid = ps8640_bridge_get_edid,
.atomic_post_disable = ps8640_atomic_post_disable,
.atomic_pre_enable = ps8640_atomic_pre_enable,
.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
@ -705,14 +642,6 @@ static int ps8640_probe(struct i2c_client *client)
ps_bridge->bridge.of_node = dev->of_node;
ps_bridge->bridge.type = DRM_MODE_CONNECTOR_eDP;
/*
* In the device tree, if panel is listed under aux-bus of the bridge
* node, panel driver should be able to retrieve EDID by itself using
* aux-bus. So let's not set DRM_BRIDGE_OP_EDID here.
*/
if (!ps8640_of_panel_on_aux_bus(&client->dev))
ps_bridge->bridge.ops = DRM_BRIDGE_OP_EDID;
/*
* Get MIPI DSI resources early. These can return -EPROBE_DEFER so
* we want to get them out of the way sooner.
@ -777,13 +706,6 @@ static int ps8640_probe(struct i2c_client *client)
return ret;
}
static void ps8640_remove(struct i2c_client *client)
{
struct ps8640 *ps_bridge = i2c_get_clientdata(client);
kfree(ps_bridge->edid);
}
static const struct of_device_id ps8640_match[] = {
{ .compatible = "parade,ps8640" },
{ }
@ -792,7 +714,6 @@ MODULE_DEVICE_TABLE(of, ps8640_match);
static struct i2c_driver ps8640_driver = {
.probe = ps8640_probe,
.remove = ps8640_remove,
.driver = {
.name = "ps8640",
.of_match_table = ps8640_match,


@ -1009,7 +1009,7 @@ static int samsung_dsim_wait_for_hdr_fifo(struct samsung_dsim *dsi)
do {
u32 reg = samsung_dsim_read(dsi, DSIM_FIFOCTRL_REG);
if (!(reg & DSIM_SFR_HEADER_FULL))
if (reg & DSIM_SFR_HEADER_EMPTY)
return 0;
if (!cond_resched())


@ -473,6 +473,41 @@ static struct edid *sii902x_bridge_get_edid(struct drm_bridge *bridge,
return sii902x_get_edid(sii902x, connector);
}
static u32 *sii902x_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state,
u32 output_fmt,
unsigned int *num_input_fmts)
{
u32 *input_fmts;
*num_input_fmts = 0;
input_fmts = kcalloc(1, sizeof(*input_fmts), GFP_KERNEL);
if (!input_fmts)
return NULL;
input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24;
*num_input_fmts = 1;
return input_fmts;
}
static int sii902x_bridge_atomic_check(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
/*
* There might be flags negotiation supported in future but
* set the bus flags in atomic_check statically for now.
*/
bridge_state->input_bus_cfg.flags = bridge->timings->input_bus_flags;
return 0;
}
static const struct drm_bridge_funcs sii902x_bridge_funcs = {
.attach = sii902x_bridge_attach,
.mode_set = sii902x_bridge_mode_set,
@ -480,6 +515,11 @@ static const struct drm_bridge_funcs sii902x_bridge_funcs = {
.enable = sii902x_bridge_enable,
.detect = sii902x_bridge_detect,
.get_edid = sii902x_bridge_get_edid,
.atomic_reset = drm_atomic_helper_bridge_reset,
.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
.atomic_get_input_bus_fmts = sii902x_bridge_atomic_get_input_bus_fmts,
.atomic_check = sii902x_bridge_atomic_check,
};
static int sii902x_mute(struct sii902x *sii902x, bool mute)


@ -49,20 +49,6 @@
#define HDMI14_MAX_TMDSCLK 340000000
enum hdmi_datamap {
RGB444_8B = 0x01,
RGB444_10B = 0x03,
RGB444_12B = 0x05,
RGB444_16B = 0x07,
YCbCr444_8B = 0x09,
YCbCr444_10B = 0x0B,
YCbCr444_12B = 0x0D,
YCbCr444_16B = 0x0F,
YCbCr422_8B = 0x16,
YCbCr422_10B = 0x14,
YCbCr422_12B = 0x12,
};
static const u16 csc_coeff_default[3][4] = {
{ 0x2000, 0x0000, 0x0000, 0x0000 },
{ 0x0000, 0x2000, 0x0000, 0x0000 },
@ -856,10 +842,10 @@ static void dw_hdmi_gp_audio_enable(struct dw_hdmi *hdmi)
if (pdata->enable_audio)
pdata->enable_audio(hdmi,
hdmi->channels,
hdmi->sample_width,
hdmi->sample_rate,
hdmi->sample_non_pcm);
hdmi->channels,
hdmi->sample_width,
hdmi->sample_rate,
hdmi->sample_non_pcm);
}
static void dw_hdmi_gp_audio_disable(struct dw_hdmi *hdmi)
@ -2710,9 +2696,10 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
/* Default 8bit fallback */
output_fmts[i++] = MEDIA_BUS_FMT_UYYVYY8_0_5X24;
*num_output_fmts = i;
return output_fmts;
if (drm_mode_is_420_only(info, mode)) {
*num_output_fmts = i;
return output_fmts;
}
}
/*
@ -3346,6 +3333,12 @@ static int dw_hdmi_parse_dt(struct dw_hdmi *hdmi)
return 0;
}
bool dw_hdmi_bus_fmt_is_420(struct dw_hdmi *hdmi)
{
return hdmi_bus_fmt_is_yuv420(hdmi->hdmi_data.enc_out_bus_format);
}
EXPORT_SYMBOL_GPL(dw_hdmi_bus_fmt_is_420);
struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
const struct dw_hdmi_plat_data *plat_data)
{


@ -265,6 +265,7 @@ struct dw_mipi_dsi {
struct dw_mipi_dsi *master; /* dual-dsi master ptr */
struct dw_mipi_dsi *slave; /* dual-dsi slave ptr */
struct drm_display_mode mode;
const struct dw_mipi_dsi_plat_data *plat_data;
};
@ -332,6 +333,7 @@ static int dw_mipi_dsi_host_attach(struct mipi_dsi_host *host,
if (IS_ERR(bridge))
return PTR_ERR(bridge);
bridge->pre_enable_prev_first = true;
dsi->panel_bridge = bridge;
drm_bridge_add(&dsi->bridge);
@ -859,15 +861,6 @@ static void dw_mipi_dsi_bridge_post_atomic_disable(struct drm_bridge *bridge,
*/
dw_mipi_dsi_set_mode(dsi, 0);
/*
* TODO Only way found to call panel-bridge post_disable &
* panel unprepare before the dsi "final" disable...
* This needs to be fixed in the drm_bridge framework and the API
* needs to be updated to manage our own call chains...
*/
if (dsi->panel_bridge->funcs->post_disable)
dsi->panel_bridge->funcs->post_disable(dsi->panel_bridge);
if (phy_ops->power_off)
phy_ops->power_off(dsi->plat_data->priv_data);
@ -942,15 +935,25 @@ static void dw_mipi_dsi_mode_set(struct dw_mipi_dsi *dsi,
phy_ops->power_on(dsi->plat_data->priv_data);
}
static void dw_mipi_dsi_bridge_atomic_pre_enable(struct drm_bridge *bridge,
struct drm_bridge_state *old_bridge_state)
{
struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
/* Power up the dsi ctl into a command mode */
dw_mipi_dsi_mode_set(dsi, &dsi->mode);
if (dsi->slave)
dw_mipi_dsi_mode_set(dsi->slave, &dsi->mode);
}
static void dw_mipi_dsi_bridge_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
dw_mipi_dsi_mode_set(dsi, adjusted_mode);
if (dsi->slave)
dw_mipi_dsi_mode_set(dsi->slave, adjusted_mode);
/* Store the display mode for later use in pre_enable callback */
drm_mode_copy(&dsi->mode, adjusted_mode);
}
static void dw_mipi_dsi_bridge_atomic_enable(struct drm_bridge *bridge,
@ -1004,6 +1007,7 @@ static const struct drm_bridge_funcs dw_mipi_dsi_bridge_funcs = {
.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
.atomic_reset = drm_atomic_helper_bridge_reset,
.atomic_pre_enable = dw_mipi_dsi_bridge_atomic_pre_enable,
.atomic_enable = dw_mipi_dsi_bridge_atomic_enable,
.atomic_post_disable = dw_mipi_dsi_bridge_post_atomic_disable,
.mode_set = dw_mipi_dsi_bridge_mode_set,


@ -41,8 +41,17 @@
#define DSI_LANEENABLE 0x0210 /* Enables each lane */
#define DSI_RX_START 1
/* LCDC/DPI Host Registers */
#define LCDCTRL 0x0420
/* LCDC/DPI Host Registers, based on guesswork that this matches TC358764 */
#define LCDCTRL 0x0420 /* Video Path Control */
#define LCDCTRL_MSF BIT(0) /* Magic square in RGB666 */
#define LCDCTRL_VTGEN BIT(4) /* Use chip clock for timing */
#define LCDCTRL_UNK6 BIT(6) /* Unknown */
#define LCDCTRL_EVTMODE BIT(5) /* Event mode */
#define LCDCTRL_RGB888 BIT(8) /* RGB888 mode */
#define LCDCTRL_HSPOL BIT(17) /* Polarity of HSYNC signal */
#define LCDCTRL_DEPOL BIT(18) /* Polarity of DE signal */
#define LCDCTRL_VSPOL BIT(19) /* Polarity of VSYNC signal */
#define LCDCTRL_VSDELAY(v) (((v) & 0xfff) << 20) /* VSYNC delay */
/* SPI Master Registers */
#define SPICMR 0x0450
@ -65,6 +74,7 @@ struct tc358762 {
struct regulator *regulator;
struct drm_bridge *panel_bridge;
struct gpio_desc *reset_gpio;
struct drm_display_mode mode;
bool pre_enabled;
int error;
};
@ -105,6 +115,8 @@ static inline struct tc358762 *bridge_to_tc358762(struct drm_bridge *bridge)
static int tc358762_init(struct tc358762 *ctx)
{
u32 lcdctrl;
tc358762_write(ctx, DSI_LANEENABLE,
LANEENABLE_L0EN | LANEENABLE_CLEN);
tc358762_write(ctx, PPI_D0S_CLRSIPOCOUNT, 5);
@ -114,7 +126,18 @@ static int tc358762_init(struct tc358762 *ctx)
tc358762_write(ctx, PPI_LPTXTIMECNT, LPX_PERIOD);
tc358762_write(ctx, SPICMR, 0x00);
tc358762_write(ctx, LCDCTRL, 0x00100150);
lcdctrl = LCDCTRL_VSDELAY(1) | LCDCTRL_RGB888 |
LCDCTRL_UNK6 | LCDCTRL_VTGEN;
if (ctx->mode.flags & DRM_MODE_FLAG_NHSYNC)
lcdctrl |= LCDCTRL_HSPOL;
if (ctx->mode.flags & DRM_MODE_FLAG_NVSYNC)
lcdctrl |= LCDCTRL_VSPOL;
tc358762_write(ctx, LCDCTRL, lcdctrl);
tc358762_write(ctx, SYSCTRL, 0x040f);
msleep(100);
@ -126,7 +149,7 @@ static int tc358762_init(struct tc358762 *ctx)
return tc358762_clear_error(ctx);
}
static void tc358762_post_disable(struct drm_bridge *bridge)
static void tc358762_post_disable(struct drm_bridge *bridge, struct drm_bridge_state *state)
{
struct tc358762 *ctx = bridge_to_tc358762(bridge);
int ret;
@ -148,7 +171,7 @@ static void tc358762_post_disable(struct drm_bridge *bridge)
dev_err(ctx->dev, "error disabling regulators (%d)\n", ret);
}
static void tc358762_pre_enable(struct drm_bridge *bridge)
static void tc358762_pre_enable(struct drm_bridge *bridge, struct drm_bridge_state *state)
{
struct tc358762 *ctx = bridge_to_tc358762(bridge);
int ret;
@ -162,11 +185,17 @@ static void tc358762_pre_enable(struct drm_bridge *bridge)
usleep_range(5000, 10000);
}
ctx->pre_enabled = true;
}
static void tc358762_enable(struct drm_bridge *bridge, struct drm_bridge_state *state)
{
struct tc358762 *ctx = bridge_to_tc358762(bridge);
int ret;
ret = tc358762_init(ctx);
if (ret < 0)
dev_err(ctx->dev, "error initializing bridge (%d)\n", ret);
ctx->pre_enabled = true;
}
static int tc358762_attach(struct drm_bridge *bridge,
@ -178,10 +207,24 @@ static int tc358762_attach(struct drm_bridge *bridge,
bridge, flags);
}
static void tc358762_bridge_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adj)
{
struct tc358762 *ctx = bridge_to_tc358762(bridge);
drm_mode_copy(&ctx->mode, mode);
}
static const struct drm_bridge_funcs tc358762_bridge_funcs = {
.post_disable = tc358762_post_disable,
.pre_enable = tc358762_pre_enable,
.atomic_post_disable = tc358762_post_disable,
.atomic_pre_enable = tc358762_pre_enable,
.atomic_enable = tc358762_enable,
.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
.atomic_reset = drm_atomic_helper_bridge_reset,
.attach = tc358762_attach,
.mode_set = tc358762_bridge_mode_set,
};
static int tc358762_parse_dt(struct tc358762 *ctx)
@ -231,7 +274,7 @@ static int tc358762_probe(struct mipi_dsi_device *dsi)
dsi->lanes = 1;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
MIPI_DSI_MODE_LPM;
MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_VIDEO_HSE;
ret = tc358762_parse_dt(ctx);
if (ret < 0)


@ -42,10 +42,10 @@
/* Video path registers */
#define VP_CTRL 0x0450 /* Video Path Control */
#define VP_CTRL_MSF(v) FLD_VAL(v, 0, 0) /* Magic square in RGB666 */
#define VP_CTRL_VTGEN(v) FLD_VAL(v, 4, 4) /* Use chip clock for timing */
#define VP_CTRL_EVTMODE(v) FLD_VAL(v, 5, 5) /* Event mode */
#define VP_CTRL_RGB888(v) FLD_VAL(v, 8, 8) /* RGB888 mode */
#define VP_CTRL_MSF BIT(0) /* Magic square in RGB666 */
#define VP_CTRL_VTGEN BIT(4) /* Use chip clock for timing */
#define VP_CTRL_EVTMODE BIT(5) /* Event mode */
#define VP_CTRL_RGB888 BIT(8) /* RGB888 mode */
#define VP_CTRL_VSDELAY(v) FLD_VAL(v, 31, 20) /* VSYNC delay */
#define VP_CTRL_HSPOL BIT(17) /* Polarity of HSYNC signal */
#define VP_CTRL_DEPOL BIT(18) /* Polarity of DE signal */
@ -176,7 +176,7 @@ static void tc358764_read(struct tc358764 *ctx, u16 addr, u32 *val)
if (ret >= 0)
le32_to_cpus(val);
dev_dbg(ctx->dev, "read: %d, addr: %d\n", addr, *val);
dev_dbg(ctx->dev, "read: addr=0x%04x data=0x%08x\n", addr, *val);
}
static void tc358764_write(struct tc358764 *ctx, u16 addr, u32 val)
@ -233,8 +233,8 @@ static int tc358764_init(struct tc358764 *ctx)
tc358764_write(ctx, DSI_STARTDSI, DSI_RX_START);
/* configure video path */
tc358764_write(ctx, VP_CTRL, VP_CTRL_VSDELAY(15) | VP_CTRL_RGB888(1) |
VP_CTRL_EVTMODE(1) | VP_CTRL_HSPOL | VP_CTRL_VSPOL);
tc358764_write(ctx, VP_CTRL, VP_CTRL_VSDELAY(15) | VP_CTRL_RGB888 |
VP_CTRL_EVTMODE | VP_CTRL_HSPOL | VP_CTRL_VSPOL);
/* reset PHY */
tc358764_write(ctx, LV_PHY0, LV_PHY0_RST(1) |


@ -2215,13 +2215,6 @@ static int tc_probe_bridge_endpoint(struct tc_data *tc)
return -EINVAL;
}
static void tc_clk_disable(void *data)
{
struct clk *refclk = data;
clk_disable_unprepare(refclk);
}
static int tc_probe(struct i2c_client *client)
{
struct device *dev = &client->dev;
@ -2238,20 +2231,10 @@ static int tc_probe(struct i2c_client *client)
if (ret)
return ret;
tc->refclk = devm_clk_get(dev, "ref");
if (IS_ERR(tc->refclk)) {
ret = PTR_ERR(tc->refclk);
dev_err(dev, "Failed to get refclk: %d\n", ret);
return ret;
}
ret = clk_prepare_enable(tc->refclk);
if (ret)
return ret;
ret = devm_add_action_or_reset(dev, tc_clk_disable, tc->refclk);
if (ret)
return ret;
tc->refclk = devm_clk_get_enabled(dev, "ref");
if (IS_ERR(tc->refclk))
return dev_err_probe(dev, PTR_ERR(tc->refclk),
"Failed to get and enable the ref clk\n");
/* tRSTW = 100 cycles , at 13 MHz that is ~7.69 us */
usleep_range(10, 15);
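A minimal sketch of the conversion above in isolation (hypothetical function name): devm_clk_get_enabled() folds the get, prepare_enable, and the unbind-time disable action into a single managed call.

#include <linux/clk.h>

static int example_get_refclk(struct device *dev, struct clk **out)
{
	struct clk *refclk;

	/*
	 * One call replaces devm_clk_get() + clk_prepare_enable() +
	 * a devm action to disable/unprepare at unbind.
	 */
	refclk = devm_clk_get_enabled(dev, "ref");
	if (IS_ERR(refclk))
		return dev_err_probe(dev, PTR_ERR(refclk),
				     "Failed to get and enable the ref clk\n");

	*out = refclk;
	return 0;
}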


@ -206,12 +206,55 @@ static enum drm_mode_status tfp410_mode_valid(struct drm_bridge *bridge,
return MODE_OK;
}
static u32 *tfp410_get_input_bus_fmts(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state,
u32 output_fmt,
unsigned int *num_input_fmts)
{
struct tfp410 *dvi = drm_bridge_to_tfp410(bridge);
u32 *input_fmts;
*num_input_fmts = 0;
input_fmts = kzalloc(sizeof(*input_fmts), GFP_KERNEL);
if (!input_fmts)
return NULL;
*num_input_fmts = 1;
input_fmts[0] = dvi->bus_format;
return input_fmts;
}
static int tfp410_atomic_check(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
struct tfp410 *dvi = drm_bridge_to_tfp410(bridge);
/*
* There might be flags negotiation supported in future.
* Set the bus flags in atomic_check statically for now.
*/
bridge_state->input_bus_cfg.flags = dvi->timings.input_bus_flags;
return 0;
}
static const struct drm_bridge_funcs tfp410_bridge_funcs = {
.attach = tfp410_attach,
.detach = tfp410_detach,
.enable = tfp410_enable,
.disable = tfp410_disable,
.mode_valid = tfp410_mode_valid,
.atomic_reset = drm_atomic_helper_bridge_reset,
.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
.atomic_get_input_bus_fmts = tfp410_get_input_bus_fmts,
.atomic_check = tfp410_atomic_check,
};
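The same .atomic_get_input_bus_fmts contract recurs in the cdns-mhdp8546, sii902x, and tfp410 hunks above. A minimal sketch of that contract (hypothetical callback, assuming a bridge with a single fixed input format): return a kmalloc'ed array that the caller frees, report its length through *num_input_fmts, and return NULL with *num_input_fmts set to 0 on failure.

#include <drm/drm_bridge.h>
#include <linux/media-bus-format.h>
#include <linux/slab.h>

static u32 *example_get_input_bus_fmts(struct drm_bridge *bridge,
				       struct drm_bridge_state *bridge_state,
				       struct drm_crtc_state *crtc_state,
				       struct drm_connector_state *conn_state,
				       u32 output_fmt,
				       unsigned int *num_input_fmts)
{
	u32 *input_fmts;

	input_fmts = kmalloc(sizeof(*input_fmts), GFP_KERNEL);
	if (!input_fmts) {
		*num_input_fmts = 0;
		return NULL;
	}

	input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24; /* single supported format */
	*num_input_fmts = 1;
	return input_fmts;
}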
static const struct drm_bridge_timings tfp410_default_timings = {


@ -415,7 +415,7 @@ void drm_hdcp_update_content_protection(struct drm_connector *connector,
return;
state->content_protection = val;
drm_sysfs_connector_status_event(connector,
dev->mode_config.content_protection_property);
drm_sysfs_connector_property_event(connector,
dev->mode_config.content_protection_property);
}
EXPORT_SYMBOL(drm_hdcp_update_content_protection);


@ -374,16 +374,25 @@ drm_atomic_replace_property_blob_from_id(struct drm_device *dev,
if (blob_id != 0) {
new_blob = drm_property_lookup_blob(dev, blob_id);
if (new_blob == NULL)
if (new_blob == NULL) {
drm_dbg_atomic(dev,
"cannot find blob ID %llu\n", blob_id);
return -EINVAL;
}
if (expected_size > 0 &&
new_blob->length != expected_size) {
drm_dbg_atomic(dev,
"[BLOB:%d] length %zu different from expected %zu\n",
new_blob->base.id, new_blob->length, expected_size);
drm_property_blob_put(new_blob);
return -EINVAL;
}
if (expected_elem_size > 0 &&
new_blob->length % expected_elem_size != 0) {
drm_dbg_atomic(dev,
"[BLOB:%d] length %zu not divisible by element size %zu\n",
new_blob->base.id, new_blob->length, expected_elem_size);
drm_property_blob_put(new_blob);
return -EINVAL;
}
@ -454,7 +463,7 @@ static int drm_atomic_crtc_set_property(struct drm_crtc *crtc,
return crtc->funcs->atomic_set_property(crtc, state, property, val);
} else {
drm_dbg_atomic(crtc->dev,
"[CRTC:%d:%s] unknown property [PROP:%d:%s]]\n",
"[CRTC:%d:%s] unknown property [PROP:%d:%s]\n",
crtc->base.id, crtc->name,
property->base.id, property->name);
return -EINVAL;
@ -489,8 +498,13 @@ drm_atomic_crtc_get_property(struct drm_crtc *crtc,
*val = state->scaling_filter;
else if (crtc->funcs->atomic_get_property)
return crtc->funcs->atomic_get_property(crtc, state, property, val);
else
else {
drm_dbg_atomic(dev,
"[CRTC:%d:%s] unknown property [PROP:%d:%s]\n",
crtc->base.id, crtc->name,
property->base.id, property->name);
return -EINVAL;
}
return 0;
}
@ -525,8 +539,12 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane,
} else if (property == config->prop_crtc_id) {
struct drm_crtc *crtc = drm_crtc_find(dev, file_priv, val);
if (val && !crtc)
if (val && !crtc) {
drm_dbg_atomic(dev,
"[PROP:%d:%s] cannot find CRTC with ID %llu\n",
property->base.id, property->name, val);
return -EACCES;
}
return drm_atomic_set_crtc_for_plane(state, crtc);
} else if (property == config->prop_crtc_x) {
state->crtc_x = U642I64(val);
@ -577,7 +595,7 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane,
property, val);
} else {
drm_dbg_atomic(plane->dev,
"[PLANE:%d:%s] unknown property [PROP:%d:%s]]\n",
"[PLANE:%d:%s] unknown property [PROP:%d:%s]\n",
plane->base.id, plane->name,
property->base.id, property->name);
return -EINVAL;
@ -636,6 +654,10 @@ drm_atomic_plane_get_property(struct drm_plane *plane,
} else if (plane->funcs->atomic_get_property) {
return plane->funcs->atomic_get_property(plane, state, property, val);
} else {
drm_dbg_atomic(dev,
"[PLANE:%d:%s] unknown property [PROP:%d:%s]\n",
plane->base.id, plane->name,
property->base.id, property->name);
return -EINVAL;
}
@ -677,14 +699,21 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
if (property == config->prop_crtc_id) {
struct drm_crtc *crtc = drm_crtc_find(dev, file_priv, val);
if (val && !crtc)
if (val && !crtc) {
drm_dbg_atomic(dev,
"[PROP:%d:%s] cannot find CRTC with ID %llu\n",
property->base.id, property->name, val);
return -EACCES;
}
return drm_atomic_set_crtc_for_connector(state, crtc);
} else if (property == config->dpms_property) {
/* setting DPMS property requires special handling, which
* is done in legacy setprop path for us. Disallow (for
* now?) atomic writes to DPMS property:
*/
drm_dbg_atomic(dev,
"legacy [PROP:%d:%s] can only be set via legacy uAPI\n",
property->base.id, property->name);
return -EINVAL;
} else if (property == config->tv_select_subconnector_property) {
state->tv.select_subconnector = val;
@ -774,7 +803,7 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
state, property, val);
} else {
drm_dbg_atomic(connector->dev,
"[CONNECTOR:%d:%s] unknown property [PROP:%d:%s]]\n",
"[CONNECTOR:%d:%s] unknown property [PROP:%d:%s]\n",
connector->base.id, connector->name,
property->base.id, property->name);
return -EINVAL;
@ -856,6 +885,10 @@ drm_atomic_connector_get_property(struct drm_connector *connector,
return connector->funcs->atomic_get_property(connector,
state, property, val);
} else {
drm_dbg_atomic(dev,
"[CONNECTOR:%d:%s] unknown property [PROP:%d:%s]\n",
connector->base.id, connector->name,
property->base.id, property->name);
return -EINVAL;
}
@ -894,6 +927,7 @@ int drm_atomic_get_property(struct drm_mode_object *obj,
break;
}
default:
drm_dbg_atomic(dev, "[OBJECT:%d] has no properties\n", obj->id);
ret = -EINVAL;
break;
}
@ -1030,6 +1064,7 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
break;
}
default:
drm_dbg_atomic(prop->dev, "[OBJECT:%d] has no properties\n", obj->id);
ret = -EINVAL;
break;
}
@ -1230,8 +1265,10 @@ static int prepare_signaling(struct drm_device *dev,
* Having this flag means user mode pends on event which will never
* reach due to lack of at least one CRTC for signaling
*/
if (c == 0 && (arg->flags & DRM_MODE_PAGE_FLIP_EVENT))
if (c == 0 && (arg->flags & DRM_MODE_PAGE_FLIP_EVENT)) {
drm_dbg_atomic(dev, "need at least one CRTC for DRM_MODE_PAGE_FLIP_EVENT");
return -EINVAL;
}
return 0;
}
@ -1364,11 +1401,13 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
obj = drm_mode_object_find(dev, file_priv, obj_id, DRM_MODE_OBJECT_ANY);
if (!obj) {
drm_dbg_atomic(dev, "cannot find object ID %d", obj_id);
ret = -ENOENT;
goto out;
}
if (!obj->properties) {
drm_dbg_atomic(dev, "[OBJECT:%d] has no properties", obj_id);
drm_mode_object_put(obj);
ret = -ENOENT;
goto out;
@ -1395,6 +1434,9 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
prop = drm_mode_obj_find_prop_id(obj, prop_id);
if (!prop) {
drm_dbg_atomic(dev,
"[OBJECT:%d] cannot find property ID %d",
obj_id, prop_id);
drm_mode_object_put(obj);
ret = -ENOENT;
goto out;


@ -125,7 +125,7 @@ static void drm_bridge_connector_hpd_cb(void *cb_data,
drm_bridge_connector_hpd_notify(connector, status);
drm_kms_helper_hotplug_event(dev);
drm_kms_helper_connector_hotplug_event(connector);
}
static void drm_bridge_connector_enable_hpd(struct drm_connector *connector)


@ -2730,10 +2730,10 @@ static int drm_connector_privacy_screen_notifier(
drm_connector_update_privacy_screen_properties(connector, true);
drm_modeset_unlock(&dev->mode_config.connection_mutex);
drm_sysfs_connector_status_event(connector,
connector->privacy_screen_sw_state_property);
drm_sysfs_connector_status_event(connector,
connector->privacy_screen_hw_state_property);
drm_sysfs_connector_property_event(connector,
connector->privacy_screen_sw_state_property);
drm_sysfs_connector_property_event(connector,
connector->privacy_screen_hw_state_property);
return NOTIFY_DONE;
}


@ -230,6 +230,7 @@ static const struct edid_quirk {
/* OSVR HDK and HDK2 VR Headsets */
EDID_QUIRK('S', 'V', 'R', 0x1019, EDID_QUIRK_NON_DESKTOP),
EDID_QUIRK('A', 'U', 'O', 0x1111, EDID_QUIRK_NON_DESKTOP),
};
/*
@ -3962,7 +3963,7 @@ static int drm_cvt_modes(struct drm_connector *connector,
struct drm_display_mode *newmode;
struct drm_device *dev = connector->dev;
const struct cvt_timing *cvt;
const int rates[] = { 60, 85, 75, 60, 50 };
static const int rates[] = { 60, 85, 75, 60, 50 };
const u8 empty[3] = { 0, 0, 0 };
for (i = 0; i < 4; i++) {

drivers/gpu/drm/drm_exec.c (new file, 333 lines)

@ -0,0 +1,333 @@
// SPDX-License-Identifier: GPL-2.0 OR MIT
#include <drm/drm_exec.h>
#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
/**
* DOC: Overview
*
* This component mainly abstracts the retry loop necessary for locking
* multiple GEM objects while preparing hardware operations (e.g. command
* submissions, page table updates, etc.).
*
* If a contention is detected while locking a GEM object the cleanup procedure
* unlocks all previously locked GEM objects and locks the contended one first
* before locking any further objects.
*
* After an object is locked, fence slots can optionally be reserved on the
* dma_resv object inside the GEM object.
*
* A typical usage pattern should look like this::
*
* struct drm_gem_object *obj;
* struct drm_exec exec;
* unsigned long index;
* int ret;
*
* drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
* drm_exec_until_all_locked(&exec) {
* ret = drm_exec_prepare_obj(&exec, boA, 1);
* drm_exec_retry_on_contention(&exec);
* if (ret)
* goto error;
*
* ret = drm_exec_prepare_obj(&exec, boB, 1);
* drm_exec_retry_on_contention(&exec);
* if (ret)
* goto error;
* }
*
* drm_exec_for_each_locked_object(&exec, index, obj) {
* dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_READ);
* ...
* }
* drm_exec_fini(&exec);
*
* See struct drm_exec for more details.
*/
/* Dummy value used to initially enter the retry loop */
#define DRM_EXEC_DUMMY ((void *)~0)
/* Unlock all objects and drop references */
static void drm_exec_unlock_all(struct drm_exec *exec)
{
struct drm_gem_object *obj;
unsigned long index;
drm_exec_for_each_locked_object(exec, index, obj) {
dma_resv_unlock(obj->resv);
drm_gem_object_put(obj);
}
drm_gem_object_put(exec->prelocked);
exec->prelocked = NULL;
}
/**
* drm_exec_init - initialize a drm_exec object
* @exec: the drm_exec object to initialize
* @flags: controls locking behavior, see DRM_EXEC_* defines
*
* Initialize the object and make sure that we can track locked objects.
*/
void drm_exec_init(struct drm_exec *exec, uint32_t flags)
{
exec->flags = flags;
exec->objects = kmalloc(PAGE_SIZE, GFP_KERNEL);
/* If allocation here fails, just delay that till the first use */
exec->max_objects = exec->objects ? PAGE_SIZE / sizeof(void *) : 0;
exec->num_objects = 0;
exec->contended = DRM_EXEC_DUMMY;
exec->prelocked = NULL;
}
EXPORT_SYMBOL(drm_exec_init);
/**
* drm_exec_fini - finalize a drm_exec object
* @exec: the drm_exec object to finalize
*
* Unlock all locked objects, drop the references to objects and free all memory
* used for tracking the state.
*/
void drm_exec_fini(struct drm_exec *exec)
{
drm_exec_unlock_all(exec);
kvfree(exec->objects);
if (exec->contended != DRM_EXEC_DUMMY) {
drm_gem_object_put(exec->contended);
ww_acquire_fini(&exec->ticket);
}
}
EXPORT_SYMBOL(drm_exec_fini);
/**
* drm_exec_cleanup - cleanup when contention is detected
* @exec: the drm_exec object to cleanup
*
* Cleanup the current state and return true if we should stay inside the retry
* loop, false if there wasn't any contention detected and we can keep the
* objects locked.
*/
bool drm_exec_cleanup(struct drm_exec *exec)
{
if (likely(!exec->contended)) {
ww_acquire_done(&exec->ticket);
return false;
}
if (likely(exec->contended == DRM_EXEC_DUMMY)) {
exec->contended = NULL;
ww_acquire_init(&exec->ticket, &reservation_ww_class);
return true;
}
drm_exec_unlock_all(exec);
exec->num_objects = 0;
return true;
}
EXPORT_SYMBOL(drm_exec_cleanup);
/* Track the locked object in the array */
static int drm_exec_obj_locked(struct drm_exec *exec,
struct drm_gem_object *obj)
{
if (unlikely(exec->num_objects == exec->max_objects)) {
size_t size = exec->max_objects * sizeof(void *);
void *tmp;
tmp = kvrealloc(exec->objects, size, size + PAGE_SIZE,
GFP_KERNEL);
if (!tmp)
return -ENOMEM;
exec->objects = tmp;
exec->max_objects += PAGE_SIZE / sizeof(void *);
}
drm_gem_object_get(obj);
exec->objects[exec->num_objects++] = obj;
return 0;
}
/* Make sure the contended object is locked first */
static int drm_exec_lock_contended(struct drm_exec *exec)
{
struct drm_gem_object *obj = exec->contended;
int ret;
if (likely(!obj))
return 0;
/* Always cleanup the contention so that error handling can kick in */
exec->contended = NULL;
if (exec->flags & DRM_EXEC_INTERRUPTIBLE_WAIT) {
ret = dma_resv_lock_slow_interruptible(obj->resv,
&exec->ticket);
if (unlikely(ret))
goto error_dropref;
} else {
dma_resv_lock_slow(obj->resv, &exec->ticket);
}
ret = drm_exec_obj_locked(exec, obj);
if (unlikely(ret))
goto error_unlock;
exec->prelocked = obj;
return 0;
error_unlock:
dma_resv_unlock(obj->resv);
error_dropref:
drm_gem_object_put(obj);
return ret;
}
/**
* drm_exec_lock_obj - lock a GEM object for use
* @exec: the drm_exec object with the state
* @obj: the GEM object to lock
*
* Lock a GEM object for use and grab a reference to it.
*
* Returns: -EDEADLK if a contention is detected, -EALREADY when object is
* already locked (can be suppressed by setting the DRM_EXEC_IGNORE_DUPLICATES
* flag), -ENOMEM when memory allocation failed and zero for success.
*/
int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj)
{
int ret;
ret = drm_exec_lock_contended(exec);
if (unlikely(ret))
return ret;
if (exec->prelocked == obj) {
drm_gem_object_put(exec->prelocked);
exec->prelocked = NULL;
return 0;
}
if (exec->flags & DRM_EXEC_INTERRUPTIBLE_WAIT)
ret = dma_resv_lock_interruptible(obj->resv, &exec->ticket);
else
ret = dma_resv_lock(obj->resv, &exec->ticket);
if (unlikely(ret == -EDEADLK)) {
drm_gem_object_get(obj);
exec->contended = obj;
return -EDEADLK;
}
if (unlikely(ret == -EALREADY) &&
exec->flags & DRM_EXEC_IGNORE_DUPLICATES)
return 0;
if (unlikely(ret))
return ret;
ret = drm_exec_obj_locked(exec, obj);
if (ret)
goto error_unlock;
return 0;
error_unlock:
dma_resv_unlock(obj->resv);
return ret;
}
EXPORT_SYMBOL(drm_exec_lock_obj);
/**
* drm_exec_unlock_obj - unlock a GEM object in this exec context
* @exec: the drm_exec object with the state
* @obj: the GEM object to unlock
*
* Unlock the GEM object and remove it from the collection of locked objects.
* Should only be used to unlock the most recently locked objects. It's not time
* efficient to unlock objects locked long ago.
*/
void drm_exec_unlock_obj(struct drm_exec *exec, struct drm_gem_object *obj)
{
unsigned int i;
for (i = exec->num_objects; i--;) {
if (exec->objects[i] == obj) {
dma_resv_unlock(obj->resv);
for (++i; i < exec->num_objects; ++i)
exec->objects[i - 1] = exec->objects[i];
--exec->num_objects;
drm_gem_object_put(obj);
return;
}
}
}
EXPORT_SYMBOL(drm_exec_unlock_obj);
/**
* drm_exec_prepare_obj - prepare a GEM object for use
* @exec: the drm_exec object with the state
* @obj: the GEM object to prepare
* @num_fences: how many fences to reserve
*
* Prepare a GEM object for use by locking it and reserving fence slots.
*
* Returns: -EDEADLK if a contention is detected, -EALREADY when object is
* already locked, -ENOMEM when memory allocation failed and zero for success.
*/
int drm_exec_prepare_obj(struct drm_exec *exec, struct drm_gem_object *obj,
unsigned int num_fences)
{
int ret;
ret = drm_exec_lock_obj(exec, obj);
if (ret)
return ret;
ret = dma_resv_reserve_fences(obj->resv, num_fences);
if (ret) {
drm_exec_unlock_obj(exec, obj);
return ret;
}
return 0;
}
EXPORT_SYMBOL(drm_exec_prepare_obj);
/**
* drm_exec_prepare_array - helper to prepare an array of objects
* @exec: the drm_exec object with the state
* @objects: array of GEM objects to prepare
* @num_objects: number of GEM objects in the array
* @num_fences: number of fences to reserve on each GEM object
*
* Prepares all GEM objects in an array and aborts on the first error.
* Reserves @num_fences on each GEM object after locking it.
*
* Returns: -EDEADLK on contention, -EALREADY when an object is already locked,
* -ENOMEM when memory allocation failed and zero for success.
*/
int drm_exec_prepare_array(struct drm_exec *exec,
struct drm_gem_object **objects,
unsigned int num_objects,
unsigned int num_fences)
{
int ret;
for (unsigned int i = 0; i < num_objects; ++i) {
ret = drm_exec_prepare_obj(exec, objects[i], num_fences);
if (unlikely(ret))
return ret;
}
return 0;
}
EXPORT_SYMBOL(drm_exec_prepare_array);
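
As a short sketch, a submission path that prepares a whole user-supplied array would wrap this call in the same retry loop (exec, objs and num_objs as set up by hypothetical surrounding code):

	drm_exec_until_all_locked(&exec) {
		ret = drm_exec_prepare_array(&exec, objs, num_objs, 1);
		drm_exec_retry_on_contention(&exec);
		if (ret)
			break;
	}
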
MODULE_DESCRIPTION("DRM execution context");
MODULE_LICENSE("Dual MIT/GPL");

drivers/gpu/drm/drm_fbdev_dma.c

@@ -54,12 +54,8 @@ static void drm_fbdev_dma_fb_destroy(struct fb_info *info)
static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
struct drm_fb_helper *fb_helper = info->par;
struct drm_device *dev = fb_helper->dev;
if (drm_WARN_ON_ONCE(dev, !fb_helper->dev->driver->gem_prime_mmap))
return -ENODEV;
return fb_helper->dev->driver->gem_prime_mmap(fb_helper->buffer->gem, vma);
return drm_gem_prime_mmap(fb_helper->buffer->gem, vma);
}
static const struct fb_ops drm_fbdev_dma_fb_ops = {

drivers/gpu/drm/drm_gem.c

@@ -1160,8 +1160,8 @@ int drm_gem_pin(struct drm_gem_object *obj)
{
if (obj->funcs->pin)
return obj->funcs->pin(obj);
else
return 0;
return 0;
}
void drm_gem_unpin(struct drm_gem_object *obj)

drivers/gpu/drm/drm_gem_shmem_helper.c

@@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
if (ret)
goto err_release;
mutex_init(&shmem->pages_lock);
mutex_init(&shmem->vmap_lock);
INIT_LIST_HEAD(&shmem->madv_list);
if (!private) {
@@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
drm_WARN_ON(obj->dev, shmem->vmap_use_count);
if (obj->import_attach) {
drm_prime_gem_destroy(obj, shmem->sgt);
} else {
dma_resv_lock(shmem->base.resv, NULL);
drm_WARN_ON(obj->dev, shmem->vmap_use_count);
if (shmem->sgt) {
dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
DMA_BIDIRECTIONAL, 0);
@@ -154,22 +154,24 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
}
if (shmem->pages)
drm_gem_shmem_put_pages(shmem);
drm_WARN_ON(obj->dev, shmem->pages_use_count);
dma_resv_unlock(shmem->base.resv);
}
drm_WARN_ON(obj->dev, shmem->pages_use_count);
drm_gem_object_release(obj);
mutex_destroy(&shmem->pages_lock);
mutex_destroy(&shmem->vmap_lock);
kfree(shmem);
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct page **pages;
dma_resv_assert_held(shmem->base.resv);
if (shmem->pages_use_count++ > 0)
return 0;
@@ -197,35 +199,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
}
/*
* drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
* drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
* @shmem: shmem GEM object
*
* This function makes sure that backing pages exist for the shmem GEM object
* and increases the use count.
*
* Returns:
* 0 on success or a negative error code on failure.
* This function decreases the use count and puts the backing pages when use drops to zero.
*/
int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
int ret;
drm_WARN_ON(obj->dev, obj->import_attach);
ret = mutex_lock_interruptible(&shmem->pages_lock);
if (ret)
return ret;
ret = drm_gem_shmem_get_pages_locked(shmem);
mutex_unlock(&shmem->pages_lock);
return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_get_pages);
static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
dma_resv_assert_held(shmem->base.resv);
if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
return;
@@ -243,21 +226,26 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
shmem->pages_mark_accessed_on_put);
shmem->pages = NULL;
}
/*
* drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
* @shmem: shmem GEM object
*
* This function decreases the use count and puts the backing pages when use drops to zero.
*/
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
{
mutex_lock(&shmem->pages_lock);
drm_gem_shmem_put_pages_locked(shmem);
mutex_unlock(&shmem->pages_lock);
}
EXPORT_SYMBOL(drm_gem_shmem_put_pages);
static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
{
int ret;
dma_resv_assert_held(shmem->base.resv);
ret = drm_gem_shmem_get_pages(shmem);
return ret;
}
static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
{
dma_resv_assert_held(shmem->base.resv);
drm_gem_shmem_put_pages(shmem);
}
/**
* drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
* @shmem: shmem GEM object
@@ -271,10 +259,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
int ret;
drm_WARN_ON(obj->dev, obj->import_attach);
return drm_gem_shmem_get_pages(shmem);
ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
if (ret)
return ret;
ret = drm_gem_shmem_pin_locked(shmem);
dma_resv_unlock(shmem->base.resv);
return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_pin);
@@ -291,12 +286,29 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
drm_WARN_ON(obj->dev, obj->import_attach);
drm_gem_shmem_put_pages(shmem);
dma_resv_lock(shmem->base.resv, NULL);
drm_gem_shmem_unpin_locked(shmem);
dma_resv_unlock(shmem->base.resv);
}
EXPORT_SYMBOL(drm_gem_shmem_unpin);
static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
struct iosys_map *map)
/*
* drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
* @map: Returns the kernel virtual address of the SHMEM GEM object's backing
* store.
*
* This function makes sure that a contiguous kernel virtual address mapping
* exists for the buffer backing the shmem GEM object. It hides the differences
* between dma-buf imported and natively allocated objects.
*
* Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
*
* Returns:
* 0 on success or a negative error code on failure.
*/
int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
struct iosys_map *map)
{
struct drm_gem_object *obj = &shmem->base;
int ret = 0;
@@ -312,6 +324,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
} else {
pgprot_t prot = PAGE_KERNEL;
dma_resv_assert_held(shmem->base.resv);
if (shmem->vmap_use_count++ > 0) {
iosys_map_set_vaddr(map, shmem->vaddr);
return 0;
@@ -346,58 +360,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
return ret;
}
/*
* drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
* @map: Returns the kernel virtual address of the SHMEM GEM object's backing
* store.
*
* This function makes sure that a contiguous kernel virtual address mapping
* exists for the buffer backing the shmem GEM object. It hides the differences
* between dma-buf imported and natively allocated objects.
*
* Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
*
* Returns:
* 0 on success or a negative error code on failure.
*/
int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
struct iosys_map *map)
{
int ret;
ret = mutex_lock_interruptible(&shmem->vmap_lock);
if (ret)
return ret;
ret = drm_gem_shmem_vmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_vmap);
static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
struct iosys_map *map)
{
struct drm_gem_object *obj = &shmem->base;
if (obj->import_attach) {
dma_buf_vunmap(obj->import_attach->dmabuf, map);
} else {
if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
return;
if (--shmem->vmap_use_count > 0)
return;
vunmap(shmem->vaddr);
drm_gem_shmem_put_pages(shmem);
}
shmem->vaddr = NULL;
}
/*
* drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
@@ -413,9 +377,24 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
struct iosys_map *map)
{
mutex_lock(&shmem->vmap_lock);
drm_gem_shmem_vunmap_locked(shmem, map);
mutex_unlock(&shmem->vmap_lock);
struct drm_gem_object *obj = &shmem->base;
if (obj->import_attach) {
dma_buf_vunmap(obj->import_attach->dmabuf, map);
} else {
dma_resv_assert_held(shmem->base.resv);
if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
return;
if (--shmem->vmap_use_count > 0)
return;
vunmap(shmem->vaddr);
drm_gem_shmem_put_pages(shmem);
}
shmem->vaddr = NULL;
}
EXPORT_SYMBOL(drm_gem_shmem_vunmap);
@@ -447,24 +426,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
*/
int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
{
mutex_lock(&shmem->pages_lock);
dma_resv_assert_held(shmem->base.resv);
if (shmem->madv >= 0)
shmem->madv = madv;
madv = shmem->madv;
mutex_unlock(&shmem->pages_lock);
return (madv >= 0);
}
EXPORT_SYMBOL(drm_gem_shmem_madvise);
void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct drm_device *dev = obj->dev;
dma_resv_assert_held(shmem->base.resv);
drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
@@ -472,7 +451,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
kfree(shmem->sgt);
shmem->sgt = NULL;
drm_gem_shmem_put_pages_locked(shmem);
drm_gem_shmem_put_pages(shmem);
shmem->madv = -1;
@@ -488,17 +467,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
}
EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
{
if (!mutex_trylock(&shmem->pages_lock))
return false;
drm_gem_shmem_purge_locked(shmem);
mutex_unlock(&shmem->pages_lock);
return true;
}
EXPORT_SYMBOL(drm_gem_shmem_purge);
/**
@@ -551,7 +519,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
/* We don't use vmf->pgoff since that has the fake offset */
page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
mutex_lock(&shmem->pages_lock);
dma_resv_lock(shmem->base.resv, NULL);
if (page_offset >= num_pages ||
drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
@@ -563,7 +531,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
}
mutex_unlock(&shmem->pages_lock);
dma_resv_unlock(shmem->base.resv);
return ret;
}
@@ -575,7 +543,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
drm_WARN_ON(obj->dev, obj->import_attach);
mutex_lock(&shmem->pages_lock);
dma_resv_lock(shmem->base.resv, NULL);
/*
* We should have already pinned the pages when the buffer was first
@@ -585,7 +553,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
shmem->pages_use_count++;
mutex_unlock(&shmem->pages_lock);
dma_resv_unlock(shmem->base.resv);
drm_gem_vm_open(vma);
}
@@ -595,7 +563,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
struct drm_gem_object *obj = vma->vm_private_data;
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
dma_resv_lock(shmem->base.resv, NULL);
drm_gem_shmem_put_pages(shmem);
dma_resv_unlock(shmem->base.resv);
drm_gem_vm_close(vma);
}
@@ -633,7 +604,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
return ret;
}
dma_resv_lock(shmem->base.resv, NULL);
ret = drm_gem_shmem_get_pages(shmem);
dma_resv_unlock(shmem->base.resv);
if (ret)
return ret;
@@ -699,7 +673,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
drm_WARN_ON(obj->dev, obj->import_attach);
ret = drm_gem_shmem_get_pages_locked(shmem);
ret = drm_gem_shmem_get_pages(shmem);
if (ret)
return ERR_PTR(ret);
@@ -721,7 +695,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
sg_free_table(sgt);
kfree(sgt);
err_put_pages:
drm_gem_shmem_put_pages_locked(shmem);
drm_gem_shmem_put_pages(shmem);
return ERR_PTR(ret);
}
@@ -746,11 +720,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
int ret;
struct sg_table *sgt;
ret = mutex_lock_interruptible(&shmem->pages_lock);
ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
if (ret)
return ERR_PTR(ret);
sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);
mutex_unlock(&shmem->pages_lock);
dma_resv_unlock(shmem->base.resv);
return sgt;
}
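
With this conversion the shmem helpers no longer take their own mutexes; callers are expected to hold the GEM object's reservation lock. A hedged driver-side sketch of a vmap/vunmap cycle under the new locking scheme:

	struct iosys_map map;
	int ret;

	dma_resv_lock(shmem->base.resv, NULL);
	ret = drm_gem_shmem_vmap(shmem, &map);
	if (!ret) {
		/* ... access the buffer through map.vaddr ... */
		drm_gem_shmem_vunmap(shmem, &map);
	}
	dma_resv_unlock(shmem->base.resv);
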

drivers/gpu/drm/drm_ioctl.c

@@ -245,8 +245,7 @@ static int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_
req->value = 1;
return 0;
case DRM_CAP_PRIME:
req->value |= dev->driver->prime_fd_to_handle ? DRM_PRIME_CAP_IMPORT : 0;
req->value |= dev->driver->prime_handle_to_fd ? DRM_PRIME_CAP_EXPORT : 0;
req->value = DRM_PRIME_CAP_IMPORT | DRM_PRIME_CAP_EXPORT;
return 0;
case DRM_CAP_SYNCOBJ:
req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ);
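
Userspace queries this capability through DRM_IOCTL_GET_CAP; a sketch using libdrm's drmGetCap(), where fd is an open DRM device file descriptor:

	#include <stdbool.h>
	#include <stdint.h>
	#include <xf86drm.h>

	static bool supports_prime(int fd)
	{
		uint64_t value = 0;

		/* With this change the kernel reports both bits unconditionally. */
		if (drmGetCap(fd, DRM_CAP_PRIME, &value))
			return false;
		return (value & DRM_PRIME_CAP_IMPORT) &&
		       (value & DRM_PRIME_CAP_EXPORT);
	}
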

drivers/gpu/drm/drm_mode_object.c

@@ -147,8 +147,10 @@ struct drm_mode_object *__drm_mode_object_find(struct drm_device *dev,
obj = NULL;
if (obj && drm_mode_object_lease_required(obj->type) &&
!_drm_lease_held(file_priv, obj->id))
!_drm_lease_held(file_priv, obj->id)) {
drm_dbg_kms(dev, "[OBJECT:%d] not included in lease", id);
obj = NULL;
}
if (obj && obj->free_cb) {
if (!kref_get_unless_zero(&obj->refcount))

drivers/gpu/drm/drm_prime.c

@@ -51,15 +51,10 @@ MODULE_IMPORT_NS(DMA_BUF);
* between applications, they can't be guessed like the globally unique GEM
* names.
*
* Drivers that support the PRIME API implement the
* &drm_driver.prime_handle_to_fd and &drm_driver.prime_fd_to_handle operations.
* GEM based drivers must use drm_gem_prime_handle_to_fd() and
* drm_gem_prime_fd_to_handle() to implement these. For GEM based drivers the
* actual driver interfaces are provided through the &drm_gem_object_funcs.export
* and &drm_driver.gem_prime_import hooks.
*
* &dma_buf_ops implementations for GEM drivers are all individually exported
* for drivers which need to overwrite or reimplement some of them.
* Drivers that support the PRIME API implement the &drm_gem_object_funcs.export
* and &drm_driver.gem_prime_import hooks. &dma_buf_ops implementations for
* drivers are all individually exported for drivers which need to overwrite
* or reimplement some of them.
*
* Reference Counting for GEM Drivers
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -283,7 +278,7 @@ void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
}
EXPORT_SYMBOL(drm_gem_dmabuf_release);
/**
/*
* drm_gem_prime_fd_to_handle - PRIME import function for GEM drivers
* @dev: drm_device to import into
* @file_priv: drm file-private structure
@@ -297,9 +292,9 @@ EXPORT_SYMBOL(drm_gem_dmabuf_release);
*
* Returns 0 on success or a negative error code on failure.
*/
int drm_gem_prime_fd_to_handle(struct drm_device *dev,
struct drm_file *file_priv, int prime_fd,
uint32_t *handle)
static int drm_gem_prime_fd_to_handle(struct drm_device *dev,
struct drm_file *file_priv, int prime_fd,
uint32_t *handle)
{
struct dma_buf *dma_buf;
struct drm_gem_object *obj;
@@ -365,18 +360,18 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
dma_buf_put(dma_buf);
return ret;
}
EXPORT_SYMBOL(drm_gem_prime_fd_to_handle);
int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_prime_handle *args = data;
if (!dev->driver->prime_fd_to_handle)
return -ENOSYS;
if (dev->driver->prime_fd_to_handle) {
return dev->driver->prime_fd_to_handle(dev, file_priv, args->fd,
&args->handle);
}
return dev->driver->prime_fd_to_handle(dev, file_priv,
args->fd, &args->handle);
return drm_gem_prime_fd_to_handle(dev, file_priv, args->fd, &args->handle);
}
static struct dma_buf *export_and_register_object(struct drm_device *dev,
@@ -413,7 +408,7 @@ static struct dma_buf *export_and_register_object(struct drm_device *dev,
return dmabuf;
}
/**
/*
* drm_gem_prime_handle_to_fd - PRIME export function for GEM drivers
* @dev: dev to export the buffer from
* @file_priv: drm file-private structure
@@ -426,10 +421,10 @@ static struct dma_buf *export_and_register_object(struct drm_device *dev,
* The actual exporting from GEM object to a dma-buf is done through the
* &drm_gem_object_funcs.export callback.
*/
int drm_gem_prime_handle_to_fd(struct drm_device *dev,
struct drm_file *file_priv, uint32_t handle,
uint32_t flags,
int *prime_fd)
static int drm_gem_prime_handle_to_fd(struct drm_device *dev,
struct drm_file *file_priv, uint32_t handle,
uint32_t flags,
int *prime_fd)
{
struct drm_gem_object *obj;
int ret = 0;
@@ -511,22 +506,23 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
return ret;
}
EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_prime_handle *args = data;
if (!dev->driver->prime_handle_to_fd)
return -ENOSYS;
/* check flags are valid */
if (args->flags & ~(DRM_CLOEXEC | DRM_RDWR))
return -EINVAL;
return dev->driver->prime_handle_to_fd(dev, file_priv,
args->handle, args->flags, &args->fd);
if (dev->driver->prime_handle_to_fd) {
return dev->driver->prime_handle_to_fd(dev, file_priv,
args->handle, args->flags,
&args->fd);
}
return drm_gem_prime_handle_to_fd(dev, file_priv, args->handle,
args->flags, &args->fd);
}
/**
@@ -715,8 +711,6 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
* the same codepath that is used for regular GEM buffer mapping on the DRM fd.
* The fake GEM offset is added to vma->vm_pgoff and &drm_driver->fops->mmap is
* called to set up the mapping.
*
* Drivers can use this as their &drm_driver.gem_prime_mmap callback.
*/
int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
{
@@ -772,25 +766,15 @@ EXPORT_SYMBOL(drm_gem_prime_mmap);
* @vma: virtual address range
*
* Provides memory mapping for the buffer. This can be used as the
* &dma_buf_ops.mmap callback. It just forwards to &drm_driver.gem_prime_mmap,
* which should be set to drm_gem_prime_mmap().
*
* FIXME: There's really no point to this wrapper, drivers which need anything
* else but drm_gem_prime_mmap can roll their own &dma_buf_ops.mmap callback.
* &dma_buf_ops.mmap callback. It just forwards to drm_gem_prime_mmap().
*
* Returns 0 on success or a negative error code on failure.
*/
int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
{
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
dma_resv_assert_held(dma_buf->resv);
if (!dev->driver->gem_prime_mmap)
return -ENOSYS;
return dev->driver->gem_prime_mmap(obj, vma);
return drm_gem_prime_mmap(obj, vma);
}
EXPORT_SYMBOL(drm_gem_dmabuf_mmap);
@@ -880,9 +864,9 @@ EXPORT_SYMBOL(drm_prime_get_contiguous_size);
* @obj: GEM object to export
* @flags: flags like DRM_CLOEXEC and DRM_RDWR
*
* This is the implementation of the &drm_gem_object_funcs.export functions for GEM drivers
* using the PRIME helpers. It is used as the default in
* drm_gem_prime_handle_to_fd().
* This is the implementation of the &drm_gem_object_funcs.export functions
* for GEM drivers using the PRIME helpers. It is used as the default for
* drivers that do not set their own.
*/
struct dma_buf *drm_gem_prime_export(struct drm_gem_object *obj,
int flags)
@@ -978,10 +962,9 @@ EXPORT_SYMBOL(drm_gem_prime_import_dev);
* @dev: drm_device to import into
* @dma_buf: dma-buf object to import
*
* This is the implementation of the gem_prime_import functions for GEM drivers
* using the PRIME helpers. Drivers can use this as their
* &drm_driver.gem_prime_import implementation. It is used as the default
* implementation in drm_gem_prime_fd_to_handle().
* This is the implementation of the gem_prime_import functions for GEM
* drivers using the PRIME helpers. It is the default for drivers that do
* not set their own &drm_driver.gem_prime_import.
*
* Drivers must arrange to call drm_prime_gem_destroy() from their
* &drm_gem_object_funcs.free hook when using this function.
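
In practice a GEM driver now gets working PRIME import and export without setting any PRIME hooks at all; a hedged sketch with hypothetical foo_* names:

	static const struct drm_driver foo_driver = {
		.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
		.fops = &foo_fops,
		/* Optional: only needed to specialize dma-buf import. */
		.gem_prime_import_sg_table = foo_gem_prime_import_sg_table,
	};

From userspace nothing changes: export still goes through the PRIME ioctls, e.g. libdrm's drmPrimeHandleToFD(fd, handle, DRM_CLOEXEC | DRM_RDWR, &prime_fd).
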

drivers/gpu/drm/drm_sysfs.c

@@ -487,17 +487,17 @@ void drm_sysfs_connector_hotplug_event(struct drm_connector *connector)
EXPORT_SYMBOL(drm_sysfs_connector_hotplug_event);
/**
* drm_sysfs_connector_status_event - generate a DRM uevent for connector
* property status change
* @connector: connector on which property status changed
* @property: connector property whose status changed.
* drm_sysfs_connector_property_event - generate a DRM uevent for connector
* property change
* @connector: connector on which property changed
* @property: connector property which has changed.
*
* Send a uevent for the DRM device specified by @dev. Currently we
* Send a uevent for the specified DRM connector and property. Currently we
* set HOTPLUG=1 and connector id along with the attached property id
* related to the status change.
* related to the change.
*/
void drm_sysfs_connector_status_event(struct drm_connector *connector,
struct drm_property *property)
void drm_sysfs_connector_property_event(struct drm_connector *connector,
struct drm_property *property)
{
struct drm_device *dev = connector->dev;
char hotplug_str[] = "HOTPLUG=1", conn_id[21], prop_id[21];
@@ -511,11 +511,14 @@ void drm_sysfs_connector_status_event(struct drm_connector *connector,
snprintf(prop_id, ARRAY_SIZE(prop_id),
"PROPERTY=%u", property->base.id);
DRM_DEBUG("generating connector status event\n");
drm_dbg_kms(connector->dev,
"[CONNECTOR:%d:%s] generating connector property event for [PROP:%d:%s]\n",
connector->base.id, connector->name,
property->base.id, property->name);
kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE, envp);
}
EXPORT_SYMBOL(drm_sysfs_connector_status_event);
EXPORT_SYMBOL(drm_sysfs_connector_property_event);
struct device *drm_sysfs_minor_alloc(struct drm_minor *minor)
{

drivers/gpu/drm/etnaviv/etnaviv_drv.c

@@ -481,10 +481,7 @@ static const struct drm_driver etnaviv_drm_driver = {
.driver_features = DRIVER_GEM | DRIVER_RENDER,
.open = etnaviv_open,
.postclose = etnaviv_postclose,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import_sg_table = etnaviv_gem_prime_import_sg_table,
.gem_prime_mmap = drm_gem_prime_mmap,
#ifdef CONFIG_DEBUG_FS
.debugfs_init = etnaviv_debugfs_init,
#endif

drivers/gpu/drm/exynos/exynos_drm_drv.c

@@ -109,11 +109,8 @@ static const struct drm_driver exynos_drm_driver = {
.open = exynos_drm_open,
.postclose = exynos_drm_postclose,
.dumb_create = exynos_drm_gem_dumb_create,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = exynos_drm_gem_prime_import,
.gem_prime_import_sg_table = exynos_drm_gem_prime_import_sg_table,
.gem_prime_mmap = drm_gem_prime_mmap,
.ioctls = exynos_ioctls,
.num_ioctls = ARRAY_SIZE(exynos_ioctls),
.fops = &exynos_drm_driver_fops,

drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c

@@ -346,7 +346,7 @@ static int fsl_dcu_drm_probe(struct platform_device *pdev)
return ret;
}
static int fsl_dcu_drm_remove(struct platform_device *pdev)
static void fsl_dcu_drm_remove(struct platform_device *pdev)
{
struct fsl_dcu_drm_device *fsl_dev = platform_get_drvdata(pdev);
@@ -354,13 +354,11 @@ static int fsl_dcu_drm_remove(struct platform_device *pdev)
drm_dev_put(fsl_dev->drm);
clk_disable_unprepare(fsl_dev->clk);
clk_unregister(fsl_dev->pix_clk);
return 0;
}
static struct platform_driver fsl_dcu_drm_platform_driver = {
.probe = fsl_dcu_drm_probe,
.remove = fsl_dcu_drm_remove,
.remove_new = fsl_dcu_drm_remove,
.driver = {
.name = "fsl-dcu",
.pm = &fsl_dcu_drm_pm_ops,
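
The same tree-wide conversion pattern repeats in the drivers below; a minimal sketch of the void-returning callback with hypothetical foo_* names:

	static void foo_remove(struct platform_device *pdev)
	{
		struct drm_device *drm = platform_get_drvdata(pdev);

		drm_dev_unregister(drm);
		drm_atomic_helper_shutdown(drm);
	}

	static struct platform_driver foo_platform_driver = {
		.probe = foo_probe,
		.remove_new = foo_remove,
		.driver = {
			.name = "foo-drm",
		},
	};

Because .remove_new returns void, drivers can no longer return an error code that the driver core would only log and otherwise ignore.
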

drivers/gpu/drm/gud/gud_pipe.c

@@ -390,7 +390,7 @@ static int gud_fb_queue_damage(struct gud_device *gdrm, struct drm_framebuffer *
mutex_lock(&gdrm->damage_lock);
if (!gdrm->shadow_buf) {
gdrm->shadow_buf = vzalloc(fb->pitches[0] * fb->height);
gdrm->shadow_buf = vcalloc(fb->pitches[0], fb->height);
if (!gdrm->shadow_buf) {
mutex_unlock(&gdrm->damage_lock);
return -ENOMEM;
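
vcalloc() checks the element-count multiplication for overflow, unlike an open-coded product passed to vzalloc(); a minimal before/after sketch:

	/* Before: fb->pitches[0] * fb->height can wrap around unnoticed. */
	buf = vzalloc(fb->pitches[0] * fb->height);

	/* After: vcalloc() returns NULL if the product would overflow. */
	buf = vcalloc(fb->pitches[0], fb->height);
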

drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c

@@ -63,7 +63,6 @@ static const struct drm_driver hibmc_driver = {
.debugfs_init = drm_vram_mm_debugfs_init,
.dumb_create = hibmc_dumb_create,
.dumb_map_offset = drm_gem_ttm_dumb_map_offset,
.gem_prime_mmap = drm_gem_prime_mmap,
};
static int __maybe_unused hibmc_pm_suspend(struct device *dev)

drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c

@@ -874,14 +874,12 @@ static int dsi_probe(struct platform_device *pdev)
return 0;
}
static int dsi_remove(struct platform_device *pdev)
static void dsi_remove(struct platform_device *pdev)
{
struct dsi_data *data = platform_get_drvdata(pdev);
struct dw_dsi *dsi = &data->dsi;
mipi_dsi_host_unregister(&dsi->host);
return 0;
}
static const struct of_device_id dsi_of_match[] = {
@@ -892,7 +890,7 @@ MODULE_DEVICE_TABLE(of, dsi_of_match);
static struct platform_driver dsi_driver = {
.probe = dsi_probe,
.remove = dsi_remove,
.remove_new = dsi_remove,
.driver = {
.name = "dw-dsi",
.of_match_table = dsi_of_match,

drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.c

@@ -279,10 +279,9 @@ static int kirin_drm_platform_probe(struct platform_device *pdev)
return component_master_add_with_match(dev, &kirin_drm_ops, match);
}
static int kirin_drm_platform_remove(struct platform_device *pdev)
static void kirin_drm_platform_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &kirin_drm_ops);
return 0;
}
static const struct of_device_id kirin_drm_dt_ids[] = {
@@ -295,7 +294,7 @@ MODULE_DEVICE_TABLE(of, kirin_drm_dt_ids);
static struct platform_driver kirin_drm_platform_driver = {
.probe = kirin_drm_platform_probe,
.remove = kirin_drm_platform_remove,
.remove_new = kirin_drm_platform_remove,
.driver = {
.name = "kirin-drm",
.of_match_table = kirin_drm_dt_ids,

drivers/gpu/drm/hyperv/hyperv_drm_drv.c

@@ -7,6 +7,7 @@
#include <linux/hyperv.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/screen_info.h>
#include <drm/drm_aperture.h>
#include <drm/drm_atomic_helper.h>

Some files were not shown because too many files have changed in this diff.