Merge tag 'drm-xe-next-2023-12-21-pr1-1' of https://gitlab.freedesktop.org/drm/xe/kernel into drm-next

Introduce a new DRM driver for Intel GPUs

Xe is a new driver for Intel GPUs that supports both integrated and
discrete platforms. The experimental support starts with Tiger Lake.
i915 will continue to be the main production driver for the platforms
up to Meteor Lake and Alchemist. The goal is then to make this Intel
Xe driver the primary driver for Lunar Lake and newer platforms.

It uses most, if not all, of the key drm concepts, in particular: TTM,
drm-scheduler, drm-exec, drm-gpuvm/gpuva and others.

Signed-off-by: Dave Airlie <airlied@redhat.com>

[airlied: add an extra X86 check, fix a typo, fix drm_exec_init interface
change].

From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/ZYSwLgXZUZ57qGPQ@intel.com
commit d219702902
Dave Airlie 2023-12-22 07:55:59 +10:00
352 changed files with 61425 additions and 1 deletion


@ -0,0 +1,70 @@
What:           /sys/devices/.../hwmon/hwmon<i>/power1_max
Date:           September 2023
KernelVersion:  6.5
Contact:        intel-xe@lists.freedesktop.org
Description:    RW. Card reactive sustained (PL1) power limit in microwatts.

                The power controller will throttle the operating frequency
                if the power averaged over a window (typically seconds)
                exceeds this limit. A read value of 0 means that the PL1
                power limit is disabled, writing 0 disables the
                limit. Writing values > 0 and <= TDP will enable the power limit.

                Only supported for particular Intel xe graphics platforms.

What:           /sys/devices/.../hwmon/hwmon<i>/power1_rated_max
Date:           September 2023
KernelVersion:  6.5
Contact:        intel-xe@lists.freedesktop.org
Description:    RO. Card default power limit (default TDP setting).

                Only supported for particular Intel xe graphics platforms.

What:           /sys/devices/.../hwmon/hwmon<i>/power1_crit
Date:           September 2023
KernelVersion:  6.5
Contact:        intel-xe@lists.freedesktop.org
Description:    RW. Card reactive critical (I1) power limit in microwatts.

                Card reactive critical (I1) power limit in microwatts is exposed
                for client products. The power controller will throttle the
                operating frequency if the power averaged over a window exceeds
                this limit.

                Only supported for particular Intel xe graphics platforms.

What:           /sys/devices/.../hwmon/hwmon<i>/curr1_crit
Date:           September 2023
KernelVersion:  6.5
Contact:        intel-xe@lists.freedesktop.org
Description:    RW. Card reactive critical (I1) power limit in milliamperes.

                Card reactive critical (I1) power limit in milliamperes is
                exposed for server products. The power controller will throttle
                the operating frequency if the power averaged over a window
                exceeds this limit.

What:           /sys/devices/.../hwmon/hwmon<i>/in0_input
Date:           September 2023
KernelVersion:  6.5
Contact:        intel-xe@lists.freedesktop.org
Description:    RO. Current Voltage in millivolt.

                Only supported for particular Intel xe graphics platforms.

What:           /sys/devices/.../hwmon/hwmon<i>/energy1_input
Date:           September 2023
KernelVersion:  6.5
Contact:        intel-xe@lists.freedesktop.org
Description:    RO. Energy input of device in microjoules.

                Only supported for particular Intel xe graphics platforms.

What:           /sys/devices/.../hwmon/hwmon<i>/power1_max_interval
Date:           October 2023
KernelVersion:  6.6
Contact:        intel-xe@lists.freedesktop.org
Description:    RW. Sustained power limit interval (Tau in PL1/Tau) in
                milliseconds over which sustained power is averaged.

                Only supported for particular Intel xe graphics platforms.
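
As a quick illustration of how these attributes are consumed, the sketch below reads and then updates the PL1 limit from user space. It is a minimal example in C, assuming the device maps to hwmon2 (the hwmon<i> index varies per system) and that the caller has the privileges needed to write the limit; only the attribute names and units come from the ABI above.

#include <stdio.h>

int main(void)
{
        /* The hwmon index is system specific; "hwmon2" is only an example. */
        const char *path = "/sys/class/hwmon/hwmon2/power1_max";
        long long uw;
        FILE *f = fopen(path, "r+");

        if (!f)
                return 1;

        if (fscanf(f, "%lld", &uw) == 1)
                printf("PL1 limit: %lld uW\n", uw); /* 0 means disabled */

        rewind(f);
        fprintf(f, "%lld", 25000000LL); /* request a 25 W sustained limit */
        fclose(f);
        return 0;
}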


@ -17,3 +17,8 @@ VM_BIND / EXEC uAPI
   :doc: Overview

.. kernel-doc:: include/uapi/drm/nouveau_drm.h

drm/xe uAPI
===========

.. kernel-doc:: include/uapi/drm/xe_drm.h


@ -18,6 +18,7 @@ GPU Driver Documentation
vkms
bridge/dw-hdmi
xen-front
xe/index
afbc
komeda-kms
panfrost


@ -0,0 +1,25 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=======================
drm/xe Intel GFX Driver
=======================

The drm/xe driver supports some future GFX cards with rendering, display,
compute and media. Support for currently available platforms like TGL, ADL,
DG2, etc. is provided to prototype the driver.

.. toctree::
   :titlesonly:

   xe_mm
   xe_map
   xe_migrate
   xe_cs
   xe_pm
   xe_pcode
   xe_gt_mcr
   xe_wa
   xe_rtp
   xe_firmware
   xe_tile
   xe_debugging


@ -0,0 +1,8 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

==================
Command submission
==================

.. kernel-doc:: drivers/gpu/drm/xe/xe_exec.c
   :doc: Execbuf (User GPU command submission)


@ -0,0 +1,7 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=========
Debugging
=========

.. kernel-doc:: drivers/gpu/drm/xe/xe_assert.h


@ -0,0 +1,37 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

========
Firmware
========

Firmware Layout
===============

.. kernel-doc:: drivers/gpu/drm/xe/xe_uc_fw_abi.h
   :doc: CSS-based Firmware Layout

.. kernel-doc:: drivers/gpu/drm/xe/xe_uc_fw_abi.h
   :doc: GSC-based Firmware Layout

Write Once Protected Content Memory (WOPCM) Layout
==================================================

.. kernel-doc:: drivers/gpu/drm/xe/xe_wopcm.c
   :doc: Write Once Protected Content Memory (WOPCM) Layout

GuC CTB Blob
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_guc_ct.c
   :doc: GuC CTB Blob

GuC Power Conservation (PC)
===========================

.. kernel-doc:: drivers/gpu/drm/xe/xe_guc_pc.c
   :doc: GuC Power Conservation (PC)

Internal API
============

TODO


@ -0,0 +1,13 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

==============================================
GT Multicast/Replicated (MCR) Register Support
==============================================

.. kernel-doc:: drivers/gpu/drm/xe/xe_gt_mcr.c
   :doc: GT Multicast/Replicated (MCR) Register Support

Internal API
============

TODO


@ -0,0 +1,8 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=========
Map Layer
=========

.. kernel-doc:: drivers/gpu/drm/xe/xe_map.h
   :doc: Map layer


@ -0,0 +1,8 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=============
Migrate Layer
=============

.. kernel-doc:: drivers/gpu/drm/xe/xe_migrate_doc.h
   :doc: Migrate Layer


@ -0,0 +1,14 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=================
Memory Management
=================

.. kernel-doc:: drivers/gpu/drm/xe/xe_bo_doc.h
   :doc: Buffer Objects (BO)

Pagetable building
==================

.. kernel-doc:: drivers/gpu/drm/xe/xe_pt.c
   :doc: Pagetable building


@ -0,0 +1,14 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=====
Pcode
=====

.. kernel-doc:: drivers/gpu/drm/xe/xe_pcode.c
   :doc: PCODE

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_pcode.c
   :internal:


@ -0,0 +1,14 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

========================
Runtime Power Management
========================

.. kernel-doc:: drivers/gpu/drm/xe/xe_pm.c
   :doc: Xe Power Management

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_pm.c
   :internal:


@ -0,0 +1,20 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=========================
Register Table Processing
=========================

.. kernel-doc:: drivers/gpu/drm/xe/xe_rtp.c
   :doc: Register Table Processing

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_rtp_types.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/xe/xe_rtp.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/xe/xe_rtp.c
   :internal:


@ -0,0 +1,14 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

==================
Multi-tile Devices
==================

.. kernel-doc:: drivers/gpu/drm/xe/xe_tile.c
   :doc: Multi-tile Design

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_tile.c
   :internal:


@ -0,0 +1,14 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

====================
Hardware workarounds
====================

.. kernel-doc:: drivers/gpu/drm/xe/xe_wa.c
   :doc: Hardware workarounds

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_wa.c
   :internal:


@ -10599,7 +10599,17 @@ L: linux-kernel@vger.kernel.org
S: Supported
F: arch/x86/include/asm/intel-family.h
INTEL DRM DRIVERS (excluding Poulsbo, Moorestown and derivative chipsets)
INTEL DRM DISPLAY FOR XE AND I915 DRIVERS
M: Jani Nikula <jani.nikula@linux.intel.com>
M: Rodrigo Vivi <rodrigo.vivi@intel.com>
L: intel-gfx@lists.freedesktop.org
L: intel-xe@lists.freedesktop.org
S: Supported
F: drivers/gpu/drm/i915/display/
F: drivers/gpu/drm/xe/display/
F: drivers/gpu/drm/xe/compat-i915-headers
INTEL DRM I915 DRIVER (Meteor Lake, DG2 and older excluding Poulsbo, Moorestown and derivative)
M: Jani Nikula <jani.nikula@linux.intel.com>
M: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
M: Rodrigo Vivi <rodrigo.vivi@intel.com>
@ -10618,6 +10628,23 @@ F: drivers/gpu/drm/i915/
F: include/drm/i915*
F: include/uapi/drm/i915_drm.h
INTEL DRM XE DRIVER (Lunar Lake and newer)
M: Lucas De Marchi <lucas.demarchi@intel.com>
M: Oded Gabbay <ogabbay@kernel.org>
M: Thomas Hellström <thomas.hellstrom@linux.intel.com>
L: intel-xe@lists.freedesktop.org
S: Supported
W: https://drm.pages.freedesktop.org/intel-docs/
Q: http://patchwork.freedesktop.org/project/intel-xe/
B: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues
C: irc://irc.oftc.net/xe
T: git https://gitlab.freedesktop.org/drm/xe/kernel.git
F: Documentation/ABI/testing/sysfs-driver-intel-xe-hwmon
F: Documentation/gpu/xe/
F: drivers/gpu/drm/xe/
F: include/drm/xe*
F: include/uapi/drm/xe_drm.h
INTEL ETHERNET DRIVERS
M: Jesse Brandeburg <jesse.brandeburg@intel.com>
M: Tony Nguyen <anthony.l.nguyen@intel.com>


@ -276,6 +276,8 @@ source "drivers/gpu/drm/nouveau/Kconfig"
source "drivers/gpu/drm/i915/Kconfig"
source "drivers/gpu/drm/xe/Kconfig"
source "drivers/gpu/drm/kmb/Kconfig"
config DRM_VGEM


@ -134,6 +134,7 @@ obj-$(CONFIG_DRM_RADEON)+= radeon/
obj-$(CONFIG_DRM_AMDGPU)+= amd/amdgpu/
obj-$(CONFIG_DRM_AMDGPU)+= amd/amdxcp/
obj-$(CONFIG_DRM_I915) += i915/
obj-$(CONFIG_DRM_XE) += xe/
obj-$(CONFIG_DRM_KMB_DISPLAY) += kmb/
obj-$(CONFIG_DRM_MGAG200) += mgag200/
obj-$(CONFIG_DRM_V3D) += v3d/

drivers/gpu/drm/xe/.gitignore (new file, 4 additions)

@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
*.hdrtest
/generated
/xe_gen_wa_oob


@ -0,0 +1,13 @@
# xe dependencies
CONFIG_KUNIT=y
CONFIG_PCI=y
CONFIG_PCI_IOV=y
CONFIG_DEBUG_FS=y
CONFIG_DRM=y
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_KMS_HELPER=y
CONFIG_DRM_XE=y
CONFIG_DRM_XE_DISPLAY=n
CONFIG_EXPERT=y
CONFIG_FB=y
CONFIG_DRM_XE_KUNIT_TEST=y


@ -0,0 +1,96 @@
# SPDX-License-Identifier: GPL-2.0-only
config DRM_XE
tristate "Intel Xe Graphics"
depends on DRM && PCI && MMU && (m || (y && KUNIT=y))
select INTERVAL_TREE
# we need shmfs for the swappable backing store, and in particular
# the shmem_readpage() which depends upon tmpfs
select SHMEM
select TMPFS
select DRM_BUDDY
select DRM_EXEC
select DRM_KMS_HELPER
select DRM_PANEL
select DRM_SUBALLOC_HELPER
select DRM_DISPLAY_DP_HELPER
select DRM_DISPLAY_HDCP_HELPER
select DRM_DISPLAY_HDMI_HELPER
select DRM_DISPLAY_HELPER
select DRM_MIPI_DSI
select RELAY
select IRQ_WORK
# xe depends on ACPI_VIDEO when ACPI is enabled
# but for select to work, need to select ACPI_VIDEO's dependencies, ick
select BACKLIGHT_CLASS_DEVICE if ACPI
select INPUT if ACPI
select ACPI_VIDEO if X86 && ACPI
select ACPI_BUTTON if ACPI
select ACPI_WMI if X86 && ACPI
select SYNC_FILE
select IOSF_MBI
select CRC32
select SND_HDA_I915 if SND_HDA_CORE
select CEC_CORE if CEC_NOTIFIER
select VMAP_PFN
select DRM_TTM
select DRM_TTM_HELPER
select DRM_EXEC
select DRM_GPUVM
select DRM_SCHED
select MMU_NOTIFIER
select WANT_DEV_COREDUMP
select AUXILIARY_BUS
help
Experimental driver for Intel Xe series GPUs
If "M" is selected, the module will be called xe.
config DRM_XE_DISPLAY
bool "Enable display support"
depends on DRM_XE && EXPERT && DRM_XE=m
select FB_IOMEM_HELPERS
select I2C
select I2C_ALGOBIT
default y
help
Disable this option only if you want to compile out display support.
config DRM_XE_FORCE_PROBE
string "Force probe xe for selected Intel hardware IDs"
depends on DRM_XE
help
This is the default value for the xe.force_probe module
parameter. Using the module parameter overrides this option.
Force probe the xe driver for Intel graphics devices that are
recognized but not properly supported by this kernel version. It is
recommended to upgrade to a kernel version with proper support as soon
as it is available.
It can also be used to block the probe of recognized and fully
supported devices.
Use "" to disable force probe. If in doubt, use this.
Use "<pci-id>[,<pci-id>,...]" to force probe the xe for listed
devices. For example, "4500" or "4500,4571".
Use "*" to force probe the driver for all known devices.
Use "!" right before the ID to block the probe of the device. For
example, "4500,!4571" forces the probe of 4500 and blocks the probe of
4571.
Use "!*" to block the probe of the driver for all known devices.
menu "drm/Xe Debugging"
depends on DRM_XE
depends on EXPERT
source "drivers/gpu/drm/xe/Kconfig.debug"
endmenu
menu "drm/xe Profile Guided Optimisation"
visible if EXPERT
depends on DRM_XE
source "drivers/gpu/drm/xe/Kconfig.profile"
endmenu


@ -0,0 +1,107 @@
# SPDX-License-Identifier: GPL-2.0-only
config DRM_XE_WERROR
bool "Force GCC to throw an error instead of a warning when compiling"
# As this may inadvertently break the build, only allow the user
# to shoot oneself in the foot iff they aim really hard
depends on EXPERT
# We use the dependency on !COMPILE_TEST to not be enabled in
# allmodconfig or allyesconfig configurations
depends on !COMPILE_TEST
default n
help
Add -Werror to the build flags for (and only for) xe.ko.
Do not enable this unless you are writing code for the xe.ko module.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_DEBUG
bool "Enable additional driver debugging"
depends on DRM_XE
depends on EXPERT
depends on !COMPILE_TEST
default n
help
Choose this option to turn on extra driver debugging that may affect
performance but will catch some internal issues.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_DEBUG_VM
bool "Enable extra VM debugging info"
default n
help
Enable extra VM debugging info.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_DEBUG_SRIOV
bool "Enable extra SR-IOV debugging"
default n
help
Enable extra SR-IOV debugging info.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_DEBUG_MEM
bool "Enable passing SYS/VRAM addresses to user space"
default n
help
Pass object location through uapi. Intended for extended
testing and development only.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_SIMPLE_ERROR_CAPTURE
bool "Enable simple error capture to dmesg on job timeout"
default n
help
Choose this option when debugging an unexpected job timeout.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_KUNIT_TEST
tristate "KUnit tests for the drm xe driver" if !KUNIT_ALL_TESTS
depends on DRM_XE && KUNIT && DEBUG_FS
default KUNIT_ALL_TESTS
select DRM_EXPORT_FOR_TESTS if m
select DRM_KUNIT_TEST_HELPERS
help
Choose this option to allow the driver to perform selftests under
the KUnit framework.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_LARGE_GUC_BUFFER
bool "Enable larger guc log buffer"
default n
help
Choose this option when debugging guc issues.
Buffer should be large enough for complex issues.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_USERPTR_INVAL_INJECT
bool "Inject userptr invalidation -EINVAL errors"
default n
help
Choose this option when debugging error paths that
are hit during checks for userptr invalidations.
Recommended for driver developers only.
If in doubt, say "N".


@ -0,0 +1,54 @@
config DRM_XE_JOB_TIMEOUT_MAX
int "Default max job timeout (ms)"
default 10000 # milliseconds
help
Configures the default max job timeout after which a job will
be forcefully taken away from the scheduler.
config DRM_XE_JOB_TIMEOUT_MIN
int "Default min job timeout (ms)"
default 1 # milliseconds
help
Configures the default min job timeout after which a job will
be forcefully taken away from the scheduler.
config DRM_XE_TIMESLICE_MAX
int "Default max timeslice duration (us)"
default 10000000 # microseconds
help
Configures the default max timeslice duration used by GuC
scheduling between multiple contexts.
config DRM_XE_TIMESLICE_MIN
int "Default min timeslice duration (us)"
default 1 # microseconds
help
Configures the default min timeslice duration used by GuC
scheduling between multiple contexts.
config DRM_XE_PREEMPT_TIMEOUT
int "Preempt timeout (us, jiffy granularity)"
default 640000 # microseconds
help
How long to wait (in microseconds) for a preemption event to occur
when submitting a new context. If the current context does not hit
an arbitration point and yield to HW before the timer expires, the
HW will be reset to allow the more important context to execute.
config DRM_XE_PREEMPT_TIMEOUT_MAX
int "Default max preempt timeout (us)"
default 10000000 # microseconds
help
Configures the default max preempt timeout after which the context
will be forcefully preempted and the higher-priority context will
run.
config DRM_XE_PREEMPT_TIMEOUT_MIN
int "Default min preempt timeout (us)"
default 1 # microseconds
help
Configures the default min preempt timeout after which the context
will be forcefully preempted and the higher-priority context will
run.
config DRM_XE_ENABLE_SCHEDTIMEOUT_LIMIT
bool "Default configuration of limitation on scheduler timeout"
default y
help
Configures whether the scheduler-timeout limits are enforced. When
enabled, the MIN and MAX values above apply even for elevated
users. The limits are applied by default.

drivers/gpu/drm/xe/Makefile (new file, 305 additions)

@ -0,0 +1,305 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
# Unconditionally enable W=1 warnings locally
# --- begin copy-paste W=1 warnings from scripts/Makefile.extrawarn
subdir-ccflags-y += -Wextra -Wunused -Wno-unused-parameter
subdir-ccflags-y += -Wmissing-declarations
subdir-ccflags-y += $(call cc-option, -Wrestrict)
subdir-ccflags-y += -Wmissing-format-attribute
subdir-ccflags-y += -Wmissing-prototypes
subdir-ccflags-y += -Wold-style-definition
subdir-ccflags-y += -Wmissing-include-dirs
subdir-ccflags-y += $(call cc-option, -Wunused-but-set-variable)
subdir-ccflags-y += $(call cc-option, -Wunused-const-variable)
subdir-ccflags-y += $(call cc-option, -Wpacked-not-aligned)
subdir-ccflags-y += $(call cc-option, -Wformat-overflow)
subdir-ccflags-y += $(call cc-option, -Wformat-truncation)
subdir-ccflags-y += $(call cc-option, -Wstringop-overflow)
subdir-ccflags-y += $(call cc-option, -Wstringop-truncation)
# The following turn off the warnings enabled by -Wextra
ifeq ($(findstring 2, $(KBUILD_EXTRA_WARN)),)
subdir-ccflags-y += -Wno-missing-field-initializers
subdir-ccflags-y += -Wno-type-limits
subdir-ccflags-y += -Wno-shift-negative-value
endif
ifeq ($(findstring 3, $(KBUILD_EXTRA_WARN)),)
subdir-ccflags-y += -Wno-sign-compare
endif
# --- end copy-paste
# Enable -Werror in CI and development
subdir-ccflags-$(CONFIG_DRM_XE_WERROR) += -Werror
subdir-ccflags-y += -I$(obj) -I$(srctree)/$(src)
# generated sources
hostprogs := xe_gen_wa_oob
generated_oob := $(obj)/generated/xe_wa_oob.c $(obj)/generated/xe_wa_oob.h
quiet_cmd_wa_oob = GEN $(notdir $(generated_oob))
cmd_wa_oob = mkdir -p $(@D); $^ $(generated_oob)
$(generated_oob) &: $(obj)/xe_gen_wa_oob $(srctree)/$(src)/xe_wa_oob.rules
$(call cmd,wa_oob)
uses_generated_oob := \
$(obj)/xe_gsc.o \
$(obj)/xe_guc.o \
$(obj)/xe_migrate.o \
$(obj)/xe_ring_ops.o \
$(obj)/xe_vm.o \
$(obj)/xe_wa.o \
$(obj)/xe_ttm_stolen_mgr.o
$(uses_generated_oob): $(generated_oob)
# Please keep these build lists sorted!
# core driver code
xe-y += xe_bb.o \
xe_bo.o \
xe_bo_evict.o \
xe_debugfs.o \
xe_devcoredump.o \
xe_device.o \
xe_device_sysfs.o \
xe_dma_buf.o \
xe_drm_client.o \
xe_exec.o \
xe_execlist.o \
xe_exec_queue.o \
xe_force_wake.o \
xe_ggtt.o \
xe_gpu_scheduler.o \
xe_gsc.o \
xe_gsc_submit.o \
xe_gt.o \
xe_gt_ccs_mode.o \
xe_gt_clock.o \
xe_gt_debugfs.o \
xe_gt_freq.o \
xe_gt_idle.o \
xe_gt_mcr.o \
xe_gt_pagefault.o \
xe_gt_sysfs.o \
xe_gt_throttle_sysfs.o \
xe_gt_tlb_invalidation.o \
xe_gt_topology.o \
xe_guc.o \
xe_guc_ads.o \
xe_guc_ct.o \
xe_guc_debugfs.o \
xe_guc_hwconfig.o \
xe_guc_log.o \
xe_guc_pc.o \
xe_guc_submit.o \
xe_heci_gsc.o \
xe_hw_engine.o \
xe_hw_engine_class_sysfs.o \
xe_hw_fence.o \
xe_huc.o \
xe_huc_debugfs.o \
xe_irq.o \
xe_lrc.o \
xe_migrate.o \
xe_mmio.o \
xe_mocs.o \
xe_module.o \
xe_pat.o \
xe_pci.o \
xe_pcode.o \
xe_pm.o \
xe_preempt_fence.o \
xe_pt.o \
xe_pt_walk.o \
xe_query.o \
xe_range_fence.o \
xe_reg_sr.o \
xe_reg_whitelist.o \
xe_rtp.o \
xe_ring_ops.o \
xe_sa.o \
xe_sched_job.o \
xe_step.o \
xe_sync.o \
xe_tile.o \
xe_tile_sysfs.o \
xe_trace.o \
xe_ttm_sys_mgr.o \
xe_ttm_stolen_mgr.o \
xe_ttm_vram_mgr.o \
xe_tuning.o \
xe_uc.o \
xe_uc_debugfs.o \
xe_uc_fw.o \
xe_vm.o \
xe_wait_user_fence.o \
xe_wa.o \
xe_wopcm.o
# graphics hardware monitoring (HWMON) support
xe-$(CONFIG_HWMON) += xe_hwmon.o
# graphics virtualization (SR-IOV) support
xe-y += xe_sriov.o
xe-$(CONFIG_PCI_IOV) += \
xe_lmtt.o \
xe_lmtt_2l.o \
xe_lmtt_ml.o
# i915 Display compat #defines and #includes
subdir-ccflags-$(CONFIG_DRM_XE_DISPLAY) += \
-I$(srctree)/$(src)/display/ext \
-I$(srctree)/$(src)/compat-i915-headers \
-I$(srctree)/drivers/gpu/drm/xe/display/ \
-I$(srctree)/drivers/gpu/drm/i915/display/ \
-Ddrm_i915_gem_object=xe_bo \
-Ddrm_i915_private=xe_device
CFLAGS_i915-display/intel_fbdev.o = $(call cc-disable-warning, override-init)
CFLAGS_i915-display/intel_display_device.o = $(call cc-disable-warning, override-init)
# Rule to build SOC code shared with i915
$(obj)/i915-soc/%.o: $(srctree)/drivers/gpu/drm/i915/soc/%.c FORCE
$(call cmd,force_checksrc)
$(call if_changed_rule,cc_o_c)
# Rule to build display code shared with i915
$(obj)/i915-display/%.o: $(srctree)/drivers/gpu/drm/i915/display/%.c FORCE
$(call cmd,force_checksrc)
$(call if_changed_rule,cc_o_c)
# Display code specific to xe
xe-$(CONFIG_DRM_XE_DISPLAY) += \
xe_display.o \
display/xe_fb_pin.o \
display/xe_hdcp_gsc.o \
display/xe_plane_initial.o \
display/xe_display_rps.o \
display/xe_display_misc.o \
display/xe_dsb_buffer.o \
display/intel_fbdev_fb.o \
display/intel_fb_bo.o \
display/ext/i915_irq.o \
display/ext/i915_utils.o
# SOC code shared with i915
xe-$(CONFIG_DRM_XE_DISPLAY) += \
i915-soc/intel_dram.o \
i915-soc/intel_pch.o
# Display code shared with i915
xe-$(CONFIG_DRM_XE_DISPLAY) += \
i915-display/icl_dsi.o \
i915-display/intel_atomic.o \
i915-display/intel_atomic_plane.o \
i915-display/intel_audio.o \
i915-display/intel_backlight.o \
i915-display/intel_bios.o \
i915-display/intel_bw.o \
i915-display/intel_cdclk.o \
i915-display/intel_color.o \
i915-display/intel_combo_phy.o \
i915-display/intel_connector.o \
i915-display/intel_crtc.o \
i915-display/intel_crtc_state_dump.o \
i915-display/intel_cursor.o \
i915-display/intel_cx0_phy.o \
i915-display/intel_ddi.o \
i915-display/intel_ddi_buf_trans.o \
i915-display/intel_display.o \
i915-display/intel_display_debugfs.o \
i915-display/intel_display_debugfs_params.o \
i915-display/intel_display_device.o \
i915-display/intel_display_driver.o \
i915-display/intel_display_irq.o \
i915-display/intel_display_params.o \
i915-display/intel_display_power.o \
i915-display/intel_display_power_map.o \
i915-display/intel_display_power_well.o \
i915-display/intel_display_trace.o \
i915-display/intel_display_wa.o \
i915-display/intel_dkl_phy.o \
i915-display/intel_dmc.o \
i915-display/intel_dp.o \
i915-display/intel_dp_aux.o \
i915-display/intel_dp_aux_backlight.o \
i915-display/intel_dp_hdcp.o \
i915-display/intel_dp_link_training.o \
i915-display/intel_dp_mst.o \
i915-display/intel_dpll.o \
i915-display/intel_dpll_mgr.o \
i915-display/intel_dpt_common.o \
i915-display/intel_drrs.o \
i915-display/intel_dsb.o \
i915-display/intel_dsi.o \
i915-display/intel_dsi_dcs_backlight.o \
i915-display/intel_dsi_vbt.o \
i915-display/intel_fb.o \
i915-display/intel_fbc.o \
i915-display/intel_fdi.o \
i915-display/intel_fifo_underrun.o \
i915-display/intel_frontbuffer.o \
i915-display/intel_global_state.o \
i915-display/intel_gmbus.o \
i915-display/intel_hdcp.o \
i915-display/intel_hdmi.o \
i915-display/intel_hotplug.o \
i915-display/intel_hotplug_irq.o \
i915-display/intel_hti.o \
i915-display/intel_link_bw.o \
i915-display/intel_lspcon.o \
i915-display/intel_modeset_lock.o \
i915-display/intel_modeset_setup.o \
i915-display/intel_modeset_verify.o \
i915-display/intel_panel.o \
i915-display/intel_pipe_crc.o \
i915-display/intel_pmdemand.o \
i915-display/intel_pps.o \
i915-display/intel_psr.o \
i915-display/intel_qp_tables.o \
i915-display/intel_quirks.o \
i915-display/intel_snps_phy.o \
i915-display/intel_tc.o \
i915-display/intel_vblank.o \
i915-display/intel_vdsc.o \
i915-display/intel_vga.o \
i915-display/intel_vrr.o \
i915-display/intel_wm.o \
i915-display/skl_scaler.o \
i915-display/skl_universal_plane.o \
i915-display/skl_watermark.o
ifeq ($(CONFIG_ACPI),y)
xe-$(CONFIG_DRM_XE_DISPLAY) += \
i915-display/intel_acpi.o \
i915-display/intel_opregion.o
endif
ifeq ($(CONFIG_DRM_FBDEV_EMULATION),y)
xe-$(CONFIG_DRM_XE_DISPLAY) += i915-display/intel_fbdev.o
endif
obj-$(CONFIG_DRM_XE) += xe.o
obj-$(CONFIG_DRM_XE_KUNIT_TEST) += tests/
# header test
hdrtest_find_args := -not -path xe_rtp_helpers.h
ifneq ($(CONFIG_DRM_XE_DISPLAY),y)
hdrtest_find_args += -not -path display/\* -not -path compat-i915-headers/\* -not -path xe_display.h
endif
always-$(CONFIG_DRM_XE_WERROR) += \
$(patsubst %.h,%.hdrtest, $(shell cd $(srctree)/$(src) && find * -name '*.h' $(hdrtest_find_args)))
quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@)
cmd_hdrtest = $(CC) -DHDRTEST $(filter-out $(CFLAGS_GCOV), $(c_flags)) -S -o /dev/null -x c /dev/null -include $<; touch $@
$(obj)/%.hdrtest: $(src)/%.h FORCE
$(call if_changed_dep,hdrtest)


@ -0,0 +1,46 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _ABI_GSC_COMMAND_HEADER_ABI_H
#define _ABI_GSC_COMMAND_HEADER_ABI_H
#include <linux/types.h>
struct intel_gsc_mtl_header {
u32 validity_marker;
#define GSC_HECI_VALIDITY_MARKER 0xA578875A
u8 heci_client_id;
u8 reserved1;
u16 header_version;
#define MTL_GSC_HEADER_VERSION 1
/* FW allows host to decide host_session handle as it sees fit. */
u64 host_session_handle;
/* handle generated by FW for messages that need to be re-submitted */
u64 gsc_message_handle;
u32 message_size; /* lower 20 bits only, upper 12 are reserved */
/*
* Flags mask:
* Bit 0: Pending
* Bit 1: Session Cleanup;
* Bits 2-15: Flags
* Bits 16-31: Extension Size
* According to the internal spec, flags are either input or output;
* we distinguish them using OUTFLAG or INFLAG
*/
u32 flags;
#define GSC_OUTFLAG_MSG_PENDING BIT(0)
#define GSC_INFLAG_MSG_CLEANUP BIT(1)
u32 status;
} __packed;
#endif
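
A minimal sketch of how a driver might fill this header before submitting a GSC command. The helper name and the assumption that message_size covers header plus payload are illustrative, not taken from the patch:

/* Hypothetical helper: prepare an intel_gsc_mtl_header for submission. */
static void gsc_mtl_header_init(struct intel_gsc_mtl_header *header,
                                u8 heci_client_id, u64 host_session_handle,
                                u32 message_size)
{
        memset(header, 0, sizeof(*header));
        header->validity_marker = GSC_HECI_VALIDITY_MARKER;
        header->heci_client_id = heci_client_id;
        header->header_version = MTL_GSC_HEADER_VERSION;
        header->host_session_handle = host_session_handle;
        /* Lower 20 bits only; assumed to cover header + payload. */
        header->message_size = message_size;
}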


@ -0,0 +1,39 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _ABI_GSC_MKHI_COMMANDS_ABI_H
#define _ABI_GSC_MKHI_COMMANDS_ABI_H
#include <linux/types.h>
/* Heci client ID for MKHI commands */
#define HECI_MEADDRESS_MKHI 7
/* Generic MKHI header */
struct gsc_mkhi_header {
u8 group_id;
u8 command;
u8 reserved;
u8 result;
} __packed;
/* GFX_SRV commands */
#define MKHI_GROUP_ID_GFX_SRV 0x30
#define MKHI_GFX_SRV_GET_HOST_COMPATIBILITY_VERSION (0x42)
struct gsc_get_compatibility_version_in {
struct gsc_mkhi_header header;
} __packed;
struct gsc_get_compatibility_version_out {
struct gsc_mkhi_header header;
u16 proj_major;
u16 compat_major;
u16 compat_minor;
u16 reserved[5];
} __packed;
#endif
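
For example, a compatibility-version query is just the bare MKHI header with the GFX_SRV group and command filled in, answered with a gsc_get_compatibility_version_out. A hedged sketch; the submission path and variable names are assumptions:

struct gsc_get_compatibility_version_in in = {
        .header = {
                .group_id = MKHI_GROUP_ID_GFX_SRV,
                .command = MKHI_GFX_SRV_GET_HOST_COMPATIBILITY_VERSION,
        },
};
struct gsc_get_compatibility_version_out out = {};

/* Send &in and receive &out through the GSC submission path, then: */
if (out.header.result == 0)
        pr_info("GSC compat version: %u.%u\n",
                out.compat_major, out.compat_minor);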


@ -0,0 +1,59 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _ABI_GSC_PXP_COMMANDS_ABI_H
#define _ABI_GSC_PXP_COMMANDS_ABI_H
#include <linux/types.h>
/* Heci client ID for PXP commands */
#define HECI_MEADDRESS_PXP 17
#define PXP_APIVER(x, y) (((x) & 0xFFFF) << 16 | ((y) & 0xFFFF))
/*
* There are a lot of status codes for PXP, but we only define the cross-API
* common ones that we can actually handle in the kernel driver. Other failure
* codes are printed to the error log for debugging.
*/
enum pxp_status {
PXP_STATUS_SUCCESS = 0x0,
PXP_STATUS_ERROR_API_VERSION = 0x1002,
PXP_STATUS_NOT_READY = 0x100e,
PXP_STATUS_PLATFCONFIG_KF1_NOVERIF = 0x101a,
PXP_STATUS_PLATFCONFIG_KF1_BAD = 0x101f,
PXP_STATUS_OP_NOT_PERMITTED = 0x4013
};
/* Common PXP FW message header */
struct pxp_cmd_header {
u32 api_version;
u32 command_id;
union {
u32 status; /* out */
u32 stream_id; /* in */
#define PXP_CMDHDR_EXTDATA_SESSION_VALID GENMASK(0, 0)
#define PXP_CMDHDR_EXTDATA_APP_TYPE GENMASK(1, 1)
#define PXP_CMDHDR_EXTDATA_SESSION_ID GENMASK(17, 2)
};
/* Length of the message (excluding the header) */
u32 buffer_len;
} __packed;
#define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */
/* PXP-Input-Packet: HUC Auth-only */
struct pxp43_new_huc_auth_in {
struct pxp_cmd_header header;
u64 huc_base_address;
u32 huc_size;
} __packed;
/* PXP-Output-Packet: HUC Load and Authentication or Auth-only */
struct pxp43_huc_auth_out {
struct pxp_cmd_header header;
} __packed;
#endif
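
Putting the pieces together, a HuC authentication request is a pxp_cmd_header followed by the firmware location. In the sketch below, the API version 4.3 and the huc_addr/huc_size inputs are illustrative assumptions:

struct pxp43_new_huc_auth_in msg = {
        .header = {
                .api_version = PXP_APIVER(4, 3), /* assumed API level */
                .command_id = PXP43_CMDID_NEW_HUC_AUTH,
                /* buffer_len excludes the header itself */
                .buffer_len = sizeof(msg) - sizeof(msg.header),
        },
        .huc_base_address = huc_addr, /* hypothetical GGTT address */
        .huc_size = huc_size,         /* hypothetical blob size */
};

/* On completion, the status field of pxp43_huc_auth_out reports success. */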


@ -0,0 +1,219 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2014-2021 Intel Corporation
*/
#ifndef _ABI_GUC_ACTIONS_ABI_H
#define _ABI_GUC_ACTIONS_ABI_H
/**
* DOC: HOST2GUC_SELF_CFG
*
* This message is used by Host KMD to set up the `GuC Self Config KLVs`_.
*
* This message must be sent as `MMIO HXG Message`_.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | DATA0 = MBZ |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | ACTION = _`GUC_ACTION_HOST2GUC_SELF_CFG` = 0x0508 |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:16 | **KLV_KEY** - KLV key, see `GuC Self Config KLVs`_ |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | **KLV_LEN** - KLV length |
* | | | |
* | | | - 32 bit KLV = 1 |
* | | | - 64 bit KLV = 2 |
* +---+-------+--------------------------------------------------------------+
* | 2 | 31:0 | **VALUE32** - Bits 31-0 of the KLV value |
* +---+-------+--------------------------------------------------------------+
* | 3 | 31:0 | **VALUE64** - Bits 63-32 of the KLV value (**KLV_LEN** = 2) |
* +---+-------+--------------------------------------------------------------+
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_GUC_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_ |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | DATA0 = **NUM** - 1 if KLV was parsed, 0 if not recognized |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_ACTION_HOST2GUC_SELF_CFG 0x0508
#define HOST2GUC_SELF_CFG_REQUEST_MSG_LEN (GUC_HXG_REQUEST_MSG_MIN_LEN + 3u)
#define HOST2GUC_SELF_CFG_REQUEST_MSG_0_MBZ GUC_HXG_REQUEST_MSG_0_DATA0
#define HOST2GUC_SELF_CFG_REQUEST_MSG_1_KLV_KEY (0xffff << 16)
#define HOST2GUC_SELF_CFG_REQUEST_MSG_1_KLV_LEN (0xffff << 0)
#define HOST2GUC_SELF_CFG_REQUEST_MSG_2_VALUE32 GUC_HXG_REQUEST_MSG_n_DATAn
#define HOST2GUC_SELF_CFG_REQUEST_MSG_3_VALUE64 GUC_HXG_REQUEST_MSG_n_DATAn
#define HOST2GUC_SELF_CFG_RESPONSE_MSG_LEN GUC_HXG_RESPONSE_MSG_MIN_LEN
#define HOST2GUC_SELF_CFG_RESPONSE_MSG_0_NUM GUC_HXG_RESPONSE_MSG_0_DATA0
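
/*
 * Illustrative only (not part of the patch): packing a HOST2GUC_SELF_CFG
 * request per the table above, using FIELD_PREP from <linux/bitfield.h>
 * and the HXG message fields defined in guc_messages_abi.h. 'key', 'len'
 * and 'value' below are hypothetical inputs.
 */
u32 request[HOST2GUC_SELF_CFG_REQUEST_MSG_LEN] = {
        FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
        FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
        FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION, GUC_ACTION_HOST2GUC_SELF_CFG),
        FIELD_PREP(HOST2GUC_SELF_CFG_REQUEST_MSG_1_KLV_KEY, key) |
        FIELD_PREP(HOST2GUC_SELF_CFG_REQUEST_MSG_1_KLV_LEN, len),
        lower_32_bits(value),   /* VALUE32 */
        upper_32_bits(value),   /* VALUE64, used when KLV_LEN == 2 */
};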
/**
* DOC: HOST2GUC_CONTROL_CTB
*
* This H2G action allows the VF Host to enable or disable H2G and G2H `CT Buffer`_.
*
* This message must be sent as `MMIO HXG Message`_.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | DATA0 = MBZ |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | ACTION = _`GUC_ACTION_HOST2GUC_CONTROL_CTB` = 0x4509 |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | **CONTROL** - control `CTB based communication`_ |
* | | | |
* | | | - _`GUC_CTB_CONTROL_DISABLE` = 0 |
* | | | - _`GUC_CTB_CONTROL_ENABLE` = 1 |
* +---+-------+--------------------------------------------------------------+
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_GUC_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_ |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | DATA0 = MBZ |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_ACTION_HOST2GUC_CONTROL_CTB 0x4509
#define HOST2GUC_CONTROL_CTB_REQUEST_MSG_LEN (GUC_HXG_REQUEST_MSG_MIN_LEN + 1u)
#define HOST2GUC_CONTROL_CTB_REQUEST_MSG_0_MBZ GUC_HXG_REQUEST_MSG_0_DATA0
#define HOST2GUC_CONTROL_CTB_REQUEST_MSG_1_CONTROL GUC_HXG_REQUEST_MSG_n_DATAn
#define GUC_CTB_CONTROL_DISABLE 0u
#define GUC_CTB_CONTROL_ENABLE 1u
#define HOST2GUC_CONTROL_CTB_RESPONSE_MSG_LEN GUC_HXG_RESPONSE_MSG_MIN_LEN
#define HOST2GUC_CONTROL_CTB_RESPONSE_MSG_0_MBZ GUC_HXG_RESPONSE_MSG_0_DATA0
/* legacy definitions */
enum xe_guc_action {
XE_GUC_ACTION_DEFAULT = 0x0,
XE_GUC_ACTION_REQUEST_PREEMPTION = 0x2,
XE_GUC_ACTION_REQUEST_ENGINE_RESET = 0x3,
XE_GUC_ACTION_ALLOCATE_DOORBELL = 0x10,
XE_GUC_ACTION_DEALLOCATE_DOORBELL = 0x20,
XE_GUC_ACTION_LOG_BUFFER_FILE_FLUSH_COMPLETE = 0x30,
XE_GUC_ACTION_UK_LOG_ENABLE_LOGGING = 0x40,
XE_GUC_ACTION_FORCE_LOG_BUFFER_FLUSH = 0x302,
XE_GUC_ACTION_ENTER_S_STATE = 0x501,
XE_GUC_ACTION_EXIT_S_STATE = 0x502,
XE_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE = 0x506,
XE_GUC_ACTION_SCHED_CONTEXT = 0x1000,
XE_GUC_ACTION_SCHED_CONTEXT_MODE_SET = 0x1001,
XE_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002,
XE_GUC_ACTION_SCHED_ENGINE_MODE_SET = 0x1003,
XE_GUC_ACTION_SCHED_ENGINE_MODE_DONE = 0x1004,
XE_GUC_ACTION_SET_CONTEXT_PRIORITY = 0x1005,
XE_GUC_ACTION_SET_CONTEXT_EXECUTION_QUANTUM = 0x1006,
XE_GUC_ACTION_SET_CONTEXT_PREEMPTION_TIMEOUT = 0x1007,
XE_GUC_ACTION_CONTEXT_RESET_NOTIFICATION = 0x1008,
XE_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION = 0x1009,
XE_GUC_ACTION_HOST2GUC_UPDATE_CONTEXT_POLICIES = 0x100B,
XE_GUC_ACTION_SETUP_PC_GUCRC = 0x3004,
XE_GUC_ACTION_AUTHENTICATE_HUC = 0x4000,
XE_GUC_ACTION_GET_HWCONFIG = 0x4100,
XE_GUC_ACTION_REGISTER_CONTEXT = 0x4502,
XE_GUC_ACTION_DEREGISTER_CONTEXT = 0x4503,
XE_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER = 0x4505,
XE_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER = 0x4506,
XE_GUC_ACTION_DEREGISTER_CONTEXT_DONE = 0x4600,
XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC = 0x4601,
XE_GUC_ACTION_CLIENT_SOFT_RESET = 0x5507,
XE_GUC_ACTION_SET_ENG_UTIL_BUFF = 0x550A,
XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR = 0x6000,
XE_GUC_ACTION_REPORT_PAGE_FAULT_REQ_DESC = 0x6002,
XE_GUC_ACTION_PAGE_FAULT_RES_DESC = 0x6003,
XE_GUC_ACTION_ACCESS_COUNTER_NOTIFY = 0x6004,
XE_GUC_ACTION_TLB_INVALIDATION = 0x7000,
XE_GUC_ACTION_TLB_INVALIDATION_DONE = 0x7001,
XE_GUC_ACTION_TLB_INVALIDATION_ALL = 0x7002,
XE_GUC_ACTION_STATE_CAPTURE_NOTIFICATION = 0x8002,
XE_GUC_ACTION_NOTIFY_FLUSH_LOG_BUFFER_TO_FILE = 0x8003,
XE_GUC_ACTION_NOTIFY_CRASH_DUMP_POSTED = 0x8004,
XE_GUC_ACTION_NOTIFY_EXCEPTION = 0x8005,
XE_GUC_ACTION_LIMIT
};
enum xe_guc_rc_options {
XE_GUCRC_HOST_CONTROL,
XE_GUCRC_FIRMWARE_CONTROL,
};
enum xe_guc_preempt_options {
XE_GUC_PREEMPT_OPTION_DROP_WORK_Q = 0x4,
XE_GUC_PREEMPT_OPTION_DROP_SUBMIT_Q = 0x8,
};
enum xe_guc_report_status {
XE_GUC_REPORT_STATUS_UNKNOWN = 0x0,
XE_GUC_REPORT_STATUS_ACKED = 0x1,
XE_GUC_REPORT_STATUS_ERROR = 0x2,
XE_GUC_REPORT_STATUS_COMPLETE = 0x4,
};
enum xe_guc_sleep_state_status {
XE_GUC_SLEEP_STATE_SUCCESS = 0x1,
XE_GUC_SLEEP_STATE_PREEMPT_TO_IDLE_FAILED = 0x2,
XE_GUC_SLEEP_STATE_ENGINE_RESET_FAILED = 0x3
#define XE_GUC_SLEEP_STATE_INVALID_MASK 0x80000000
};
#define GUC_LOG_CONTROL_LOGGING_ENABLED (1 << 0)
#define GUC_LOG_CONTROL_VERBOSITY_SHIFT 4
#define GUC_LOG_CONTROL_VERBOSITY_MASK (0xF << GUC_LOG_CONTROL_VERBOSITY_SHIFT)
#define GUC_LOG_CONTROL_DEFAULT_LOGGING (1 << 8)
#define XE_GUC_TLB_INVAL_TYPE_SHIFT 0
#define XE_GUC_TLB_INVAL_MODE_SHIFT 8
/* Flush PPC or SMRO caches along with TLB invalidation request */
#define XE_GUC_TLB_INVAL_FLUSH_CACHE (1 << 31)
enum xe_guc_tlb_invalidation_type {
XE_GUC_TLB_INVAL_FULL = 0x0,
XE_GUC_TLB_INVAL_PAGE_SELECTIVE = 0x1,
XE_GUC_TLB_INVAL_PAGE_SELECTIVE_CTX = 0x2,
XE_GUC_TLB_INVAL_GUC = 0x3,
};
/*
* 0: Heavy mode of Invalidation:
* The pipeline of the engine(s) for which the invalidation is targeted is
* blocked, and all the in-flight transactions are guaranteed to be Globally
* Observed before completing the TLB invalidation
* 1: Lite mode of Invalidation:
* TLBs of the targeted engine(s) are immediately invalidated.
* In-flight transactions are NOT guaranteed to be Globally Observed before
* completing TLB invalidation.
* Lite Invalidation Mode is to be used only when
* it can be guaranteed (by SW) that the address translations remain invariant
* for the in-flight transactions across the TLB invalidation. In other words,
* this mode can be used when the TLB invalidation is intended to clear out the
* stale cached translations that are no longer in use. Lite Invalidation Mode
* is much faster than the Heavy Invalidation Mode, as it does not wait for the
* in-flight transactions to be Globally Observed.
*/
enum xe_guc_tlb_inval_mode {
XE_GUC_TLB_INVAL_MODE_HEAVY = 0x0,
XE_GUC_TLB_INVAL_MODE_LITE = 0x1,
};
#endif


@ -0,0 +1,249 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2021 Intel Corporation
*/
#ifndef _GUC_ACTIONS_SLPC_ABI_H_
#define _GUC_ACTIONS_SLPC_ABI_H_
#include <linux/types.h>
/**
* DOC: SLPC SHARED DATA STRUCTURE
*
* +----+------+--------------------------------------------------------------+
* | CL | Bytes| Description |
* +====+======+==============================================================+
* | 1 | 0-3 | SHARED DATA SIZE |
* | +------+--------------------------------------------------------------+
* | | 4-7 | GLOBAL STATE |
* | +------+--------------------------------------------------------------+
* | | 8-11 | DISPLAY DATA ADDRESS |
* | +------+--------------------------------------------------------------+
* | | 12:63| PADDING |
* +----+------+--------------------------------------------------------------+
* | 2 | 0:63 | PADDING(PLATFORM INFO) |
* +----+------+--------------------------------------------------------------+
* | 3 | 0-3 | TASK STATE DATA |
* + +------+--------------------------------------------------------------+
* | | 4:63 | PADDING |
* +----+------+--------------------------------------------------------------+
* |4-21|0:1087| OVERRIDE PARAMS AND BIT FIELDS |
* +----+------+--------------------------------------------------------------+
* | | | PADDING + EXTRA RESERVED PAGE |
* +----+------+--------------------------------------------------------------+
*/
/*
* SLPC exposes certain parameters for global configuration by the host.
* These are referred to as override parameters, because in most cases
* the host will not need to modify the default values used by SLPC.
* SLPC remembers the default values which allows the host to easily restore
* them by simply unsetting the override. The host can set or unset override
* parameters during SLPC (re-)initialization using the SLPC Reset event.
* The host can also set or unset override parameters on the fly using the
* Parameter Set and Parameter Unset events
*/
#define SLPC_MAX_OVERRIDE_PARAMETERS 256
#define SLPC_OVERRIDE_BITFIELD_SIZE \
(SLPC_MAX_OVERRIDE_PARAMETERS / 32)
#define SLPC_PAGE_SIZE_BYTES 4096
#define SLPC_CACHELINE_SIZE_BYTES 64
#define SLPC_SHARED_DATA_SIZE_BYTE_HEADER SLPC_CACHELINE_SIZE_BYTES
#define SLPC_SHARED_DATA_SIZE_BYTE_PLATFORM_INFO SLPC_CACHELINE_SIZE_BYTES
#define SLPC_SHARED_DATA_SIZE_BYTE_TASK_STATE SLPC_CACHELINE_SIZE_BYTES
#define SLPC_SHARED_DATA_MODE_DEFN_TABLE_SIZE SLPC_PAGE_SIZE_BYTES
#define SLPC_SHARED_DATA_SIZE_BYTE_MAX (2 * SLPC_PAGE_SIZE_BYTES)
/*
* Cacheline size aligned (Total size needed for
* SLPM_KMD_MAX_OVERRIDE_PARAMETERS=256 is 1088 bytes)
*/
#define SLPC_OVERRIDE_PARAMS_TOTAL_BYTES (((((SLPC_MAX_OVERRIDE_PARAMETERS * 4) \
+ ((SLPC_MAX_OVERRIDE_PARAMETERS / 32) * 4)) \
+ (SLPC_CACHELINE_SIZE_BYTES - 1)) / SLPC_CACHELINE_SIZE_BYTES) * \
SLPC_CACHELINE_SIZE_BYTES)
#define SLPC_SHARED_DATA_SIZE_BYTE_OTHER (SLPC_SHARED_DATA_SIZE_BYTE_MAX - \
(SLPC_SHARED_DATA_SIZE_BYTE_HEADER \
+ SLPC_SHARED_DATA_SIZE_BYTE_PLATFORM_INFO \
+ SLPC_SHARED_DATA_SIZE_BYTE_TASK_STATE \
+ SLPC_OVERRIDE_PARAMS_TOTAL_BYTES \
+ SLPC_SHARED_DATA_MODE_DEFN_TABLE_SIZE))
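
/*
 * Illustrative sanity check (not part of the patch): with 256 override
 * parameters the values take 256 * 4 = 1024 bytes and the bitfield
 * (256 / 32) * 4 = 32 bytes; 1024 + 32 = 1056 rounded up to a 64-byte
 * cacheline gives 17 * 64 = 1088 bytes, matching the comment above.
 * static_assert comes from <linux/build_bug.h>.
 */
static_assert(SLPC_OVERRIDE_PARAMS_TOTAL_BYTES == 1088);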
enum slpc_task_enable {
SLPC_PARAM_TASK_DEFAULT = 0,
SLPC_PARAM_TASK_ENABLED,
SLPC_PARAM_TASK_DISABLED,
SLPC_PARAM_TASK_UNKNOWN
};
enum slpc_global_state {
SLPC_GLOBAL_STATE_NOT_RUNNING = 0,
SLPC_GLOBAL_STATE_INITIALIZING = 1,
SLPC_GLOBAL_STATE_RESETTING = 2,
SLPC_GLOBAL_STATE_RUNNING = 3,
SLPC_GLOBAL_STATE_SHUTTING_DOWN = 4,
SLPC_GLOBAL_STATE_ERROR = 5
};
enum slpc_param_id {
SLPC_PARAM_TASK_ENABLE_GTPERF = 0,
SLPC_PARAM_TASK_DISABLE_GTPERF = 1,
SLPC_PARAM_TASK_ENABLE_BALANCER = 2,
SLPC_PARAM_TASK_DISABLE_BALANCER = 3,
SLPC_PARAM_TASK_ENABLE_DCC = 4,
SLPC_PARAM_TASK_DISABLE_DCC = 5,
SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ = 6,
SLPC_PARAM_GLOBAL_MAX_GT_UNSLICE_FREQ_MHZ = 7,
SLPC_PARAM_GLOBAL_MIN_GT_SLICE_FREQ_MHZ = 8,
SLPC_PARAM_GLOBAL_MAX_GT_SLICE_FREQ_MHZ = 9,
SLPC_PARAM_GTPERF_THRESHOLD_MAX_FPS = 10,
SLPC_PARAM_GLOBAL_DISABLE_GT_FREQ_MANAGEMENT = 11,
SLPC_PARAM_GTPERF_ENABLE_FRAMERATE_STALLING = 12,
SLPC_PARAM_GLOBAL_DISABLE_RC6_MODE_CHANGE = 13,
SLPC_PARAM_GLOBAL_OC_UNSLICE_FREQ_MHZ = 14,
SLPC_PARAM_GLOBAL_OC_SLICE_FREQ_MHZ = 15,
SLPC_PARAM_GLOBAL_ENABLE_IA_GT_BALANCING = 16,
SLPC_PARAM_GLOBAL_ENABLE_ADAPTIVE_BURST_TURBO = 17,
SLPC_PARAM_GLOBAL_ENABLE_EVAL_MODE = 18,
SLPC_PARAM_GLOBAL_ENABLE_BALANCER_IN_NON_GAMING_MODE = 19,
SLPC_PARAM_GLOBAL_RT_MODE_TURBO_FREQ_DELTA_MHZ = 20,
SLPC_PARAM_PWRGATE_RC_MODE = 21,
SLPC_PARAM_EDR_MODE_COMPUTE_TIMEOUT_MS = 22,
SLPC_PARAM_EDR_QOS_FREQ_MHZ = 23,
SLPC_PARAM_MEDIA_FF_RATIO_MODE = 24,
SLPC_PARAM_ENABLE_IA_FREQ_LIMITING = 25,
SLPC_PARAM_STRATEGIES = 26,
SLPC_PARAM_POWER_PROFILE = 27,
SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY = 28,
SLPC_MAX_PARAM = 32,
};
enum slpc_media_ratio_mode {
SLPC_MEDIA_RATIO_MODE_DYNAMIC_CONTROL = 0,
SLPC_MEDIA_RATIO_MODE_FIXED_ONE_TO_ONE = 1,
SLPC_MEDIA_RATIO_MODE_FIXED_ONE_TO_TWO = 2,
};
enum slpc_gucrc_mode {
SLPC_GUCRC_MODE_HW = 0,
SLPC_GUCRC_MODE_GUCRC_NO_RC6 = 1,
SLPC_GUCRC_MODE_GUCRC_STATIC_TIMEOUT = 2,
SLPC_GUCRC_MODE_GUCRC_DYNAMIC_HYSTERESIS = 3,
SLPC_GUCRC_MODE_MAX,
};
enum slpc_event_id {
SLPC_EVENT_RESET = 0,
SLPC_EVENT_SHUTDOWN = 1,
SLPC_EVENT_PLATFORM_INFO_CHANGE = 2,
SLPC_EVENT_DISPLAY_MODE_CHANGE = 3,
SLPC_EVENT_FLIP_COMPLETE = 4,
SLPC_EVENT_QUERY_TASK_STATE = 5,
SLPC_EVENT_PARAMETER_SET = 6,
SLPC_EVENT_PARAMETER_UNSET = 7,
};
struct slpc_task_state_data {
union {
u32 task_status_padding;
struct {
u32 status;
#define SLPC_GTPERF_TASK_ENABLED REG_BIT(0)
#define SLPC_DCC_TASK_ENABLED REG_BIT(11)
#define SLPC_IN_DCC REG_BIT(12)
#define SLPC_BALANCER_ENABLED REG_BIT(15)
#define SLPC_IBC_TASK_ENABLED REG_BIT(16)
#define SLPC_BALANCER_IA_LMT_ENABLED REG_BIT(17)
#define SLPC_BALANCER_IA_LMT_ACTIVE REG_BIT(18)
};
};
union {
u32 freq_padding;
struct {
#define SLPC_MAX_UNSLICE_FREQ_MASK REG_GENMASK(7, 0)
#define SLPC_MIN_UNSLICE_FREQ_MASK REG_GENMASK(15, 8)
#define SLPC_MAX_SLICE_FREQ_MASK REG_GENMASK(23, 16)
#define SLPC_MIN_SLICE_FREQ_MASK REG_GENMASK(31, 24)
u32 freq;
};
};
} __packed;
struct slpc_shared_data_header {
/* Total size in bytes of this shared buffer. */
u32 size;
u32 global_state;
u32 display_data_addr;
} __packed;
struct slpc_override_params {
u32 bits[SLPC_OVERRIDE_BITFIELD_SIZE];
u32 values[SLPC_MAX_OVERRIDE_PARAMETERS];
} __packed;
struct slpc_shared_data {
struct slpc_shared_data_header header;
u8 shared_data_header_pad[SLPC_SHARED_DATA_SIZE_BYTE_HEADER -
sizeof(struct slpc_shared_data_header)];
u8 platform_info_pad[SLPC_SHARED_DATA_SIZE_BYTE_PLATFORM_INFO];
struct slpc_task_state_data task_state_data;
u8 task_state_data_pad[SLPC_SHARED_DATA_SIZE_BYTE_TASK_STATE -
sizeof(struct slpc_task_state_data)];
struct slpc_override_params override_params;
u8 override_params_pad[SLPC_OVERRIDE_PARAMS_TOTAL_BYTES -
sizeof(struct slpc_override_params)];
u8 shared_data_pad[SLPC_SHARED_DATA_SIZE_BYTE_OTHER];
/* PAGE 2 (4096 bytes), mode based parameter will be removed soon */
u8 reserved_mode_definition[4096];
} __packed;
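
/*
 * Illustrative check (not part of the patch): the padding fields above are
 * sized so the layout fills exactly the two pages reserved for the SLPC
 * shared data.
 */
static_assert(sizeof(struct slpc_shared_data) == SLPC_SHARED_DATA_SIZE_BYTE_MAX);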
/**
* DOC: SLPC H2G MESSAGE FORMAT
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN = GUC_HXG_ORIGIN_HOST_ |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | DATA0 = MBZ |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | ACTION = _`GUC_ACTION_HOST2GUC_PC_SLPM_REQUEST` = 0x3003 |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:8 | **EVENT_ID** |
* + +-------+--------------------------------------------------------------+
* | | 7:0 | **EVENT_ARGC** - number of data arguments |
* +---+-------+--------------------------------------------------------------+
* | 2 | 31:0 | **EVENT_DATA1** |
* +---+-------+--------------------------------------------------------------+
* |...| 31:0 | ... |
* +---+-------+--------------------------------------------------------------+
* |2+n| 31:0 | **EVENT_DATAn** |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_ACTION_HOST2GUC_PC_SLPC_REQUEST 0x3003
#define HOST2GUC_PC_SLPC_REQUEST_MSG_MIN_LEN \
(GUC_HXG_REQUEST_MSG_MIN_LEN + 1u)
#define HOST2GUC_PC_SLPC_EVENT_MAX_INPUT_ARGS 9
#define HOST2GUC_PC_SLPC_REQUEST_MSG_MAX_LEN \
(HOST2GUC_PC_SLPC_REQUEST_MSG_MIN_LEN + \
HOST2GUC_PC_SLPC_EVENT_MAX_INPUT_ARGS)
#define HOST2GUC_PC_SLPC_REQUEST_MSG_0_MBZ GUC_HXG_REQUEST_MSG_0_DATA0
#define HOST2GUC_PC_SLPC_REQUEST_MSG_1_EVENT_ID (0xff << 8)
#define HOST2GUC_PC_SLPC_REQUEST_MSG_1_EVENT_ARGC (0xff << 0)
#define HOST2GUC_PC_SLPC_REQUEST_MSG_N_EVENT_DATA_N GUC_HXG_REQUEST_MSG_n_DATAn
#endif


@ -0,0 +1,127 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2014-2021 Intel Corporation
*/
#ifndef _ABI_GUC_COMMUNICATION_CTB_ABI_H
#define _ABI_GUC_COMMUNICATION_CTB_ABI_H
#include <linux/types.h>
#include <linux/build_bug.h>
#include "guc_messages_abi.h"
/**
* DOC: CT Buffer
*
* Circular buffer used to send `CTB Message`_
*/
/**
* DOC: CTB Descriptor
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:0 | **HEAD** - offset (in dwords) to the last dword that was |
* | | | read from the `CT Buffer`_. |
* | | | It can only be updated by the receiver. |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | **TAIL** - offset (in dwords) to the last dword that was |
* | | | written to the `CT Buffer`_. |
* | | | It can only be updated by the sender. |
* +---+-------+--------------------------------------------------------------+
* | 2 | 31:0 | **STATUS** - status of the CTB |
* | | | |
* | | | - _`GUC_CTB_STATUS_NO_ERROR` = 0 (normal operation) |
* | | | - _`GUC_CTB_STATUS_OVERFLOW` = 1 (head/tail too large) |
* | | | - _`GUC_CTB_STATUS_UNDERFLOW` = 2 (truncated message) |
* | | | - _`GUC_CTB_STATUS_MISMATCH` = 4 (head/tail modified) |
* +---+-------+--------------------------------------------------------------+
* |...| | RESERVED = MBZ |
* +---+-------+--------------------------------------------------------------+
* | 15| 31:0 | RESERVED = MBZ |
* +---+-------+--------------------------------------------------------------+
*/
struct guc_ct_buffer_desc {
u32 head;
u32 tail;
u32 status;
#define GUC_CTB_STATUS_NO_ERROR 0
#define GUC_CTB_STATUS_OVERFLOW (1 << 0)
#define GUC_CTB_STATUS_UNDERFLOW (1 << 1)
#define GUC_CTB_STATUS_MISMATCH (1 << 2)
u32 reserved[13];
} __packed;
static_assert(sizeof(struct guc_ct_buffer_desc) == 64);
/**
* DOC: CTB Message
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:16 | **FENCE** - message identifier |
* | +-------+--------------------------------------------------------------+
* | | 15:12 | **FORMAT** - format of the CTB message |
* | | | - _`GUC_CTB_FORMAT_HXG` = 0 - see `CTB HXG Message`_ |
* | +-------+--------------------------------------------------------------+
* | | 11:8 | **RESERVED** |
* | +-------+--------------------------------------------------------------+
* | | 7:0 | **NUM_DWORDS** - length of the CTB message (w/o header) |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | optional (depends on FORMAT) |
* +---+-------+ |
* |...| | |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_CTB_HDR_LEN 1u
#define GUC_CTB_MSG_MIN_LEN GUC_CTB_HDR_LEN
#define GUC_CTB_MSG_MAX_LEN 256u
#define GUC_CTB_MSG_0_FENCE (0xffff << 16)
#define GUC_CTB_MSG_0_FORMAT (0xf << 12)
#define GUC_CTB_FORMAT_HXG 0u
#define GUC_CTB_MSG_0_RESERVED (0xf << 8)
#define GUC_CTB_MSG_0_NUM_DWORDS (0xff << 0)
/**
* DOC: CTB HXG Message
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:16 | FENCE |
* | +-------+--------------------------------------------------------------+
* | | 15:12 | FORMAT = GUC_CTB_FORMAT_HXG_ |
* | +-------+--------------------------------------------------------------+
* | | 11:8 | RESERVED = MBZ |
* | +-------+--------------------------------------------------------------+
* | | 7:0 | NUM_DWORDS = length (in dwords) of the embedded HXG message |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | |
* +---+-------+ |
* |...| | [Embedded `HXG Message`_] |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_CTB_HXG_MSG_MIN_LEN (GUC_CTB_MSG_MIN_LEN + GUC_HXG_MSG_MIN_LEN)
#define GUC_CTB_HXG_MSG_MAX_LEN GUC_CTB_MSG_MAX_LEN
/**
* DOC: CTB based communication
*
* The CTB (command transport buffer) communication between Host and GuC
* is based on a u32 data stream written to the shared buffer. One buffer can
* be used to transmit data only in one direction (one-directional channel).
*
* The current status of each buffer is maintained in the `CTB Descriptor`_.
* Each message in the data stream is encoded as `CTB HXG Message`_.
*/
#endif
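
A hedged sketch of decoding a received CTB message header according to the tables above, using FIELD_GET from <linux/bitfield.h>; msg points into a hypothetical receive buffer and process_hxg() is a made-up consumer:

u32 header = msg[0];
u32 fence = FIELD_GET(GUC_CTB_MSG_0_FENCE, header);
u32 format = FIELD_GET(GUC_CTB_MSG_0_FORMAT, header);
u32 num_dwords = FIELD_GET(GUC_CTB_MSG_0_NUM_DWORDS, header);

if (format == GUC_CTB_FORMAT_HXG) {
        /* dwords 1..num_dwords carry the embedded HXG message */
        process_hxg(msg + GUC_CTB_HDR_LEN, num_dwords);
}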


@ -0,0 +1,49 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2014-2021 Intel Corporation
*/
#ifndef _ABI_GUC_COMMUNICATION_MMIO_ABI_H
#define _ABI_GUC_COMMUNICATION_MMIO_ABI_H
/**
* DOC: GuC MMIO based communication
*
* The MMIO based communication between Host and GuC relies on special
* hardware registers whose format can be defined by software
* (so-called scratch registers).
*
* Each MMIO based message, both Host to GuC (H2G) and GuC to Host (G2H)
* messages, whose maximum length depends on the number of available scratch
* registers, is written directly into those scratch registers.
*
* For Gen9+, there are 16 software scratch registers 0xC180-0xC1B8,
* but no H2G command takes more than 4 parameters and the GuC firmware
* itself uses a 4-element array to store the H2G message.
*
* For Gen11+, there are 4 additional registers 0x190240-0x19024C which,
* despite the lower count, are preferred over the legacy ones.
*
* The MMIO based communication is mainly used during driver initialization
* phase to setup the `CTB based communication`_ that will be used afterwards.
*/
#define GUC_MAX_MMIO_MSG_LEN 4
/**
* DOC: MMIO HXG Message
*
* Format of the MMIO messages follows definitions of `HXG Message`_.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:0 | |
* +---+-------+ |
* |...| | [Embedded `HXG Message`_] |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#endif


@ -0,0 +1,37 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2014-2021 Intel Corporation
*/
#ifndef _ABI_GUC_ERRORS_ABI_H
#define _ABI_GUC_ERRORS_ABI_H
enum xe_guc_response_status {
XE_GUC_RESPONSE_STATUS_SUCCESS = 0x0,
XE_GUC_RESPONSE_STATUS_GENERIC_FAIL = 0xF000,
};
enum xe_guc_load_status {
XE_GUC_LOAD_STATUS_DEFAULT = 0x00,
XE_GUC_LOAD_STATUS_START = 0x01,
XE_GUC_LOAD_STATUS_ERROR_DEVID_BUILD_MISMATCH = 0x02,
XE_GUC_LOAD_STATUS_GUC_PREPROD_BUILD_MISMATCH = 0x03,
XE_GUC_LOAD_STATUS_ERROR_DEVID_INVALID_GUCTYPE = 0x04,
XE_GUC_LOAD_STATUS_GDT_DONE = 0x10,
XE_GUC_LOAD_STATUS_IDT_DONE = 0x20,
XE_GUC_LOAD_STATUS_LAPIC_DONE = 0x30,
XE_GUC_LOAD_STATUS_GUCINT_DONE = 0x40,
XE_GUC_LOAD_STATUS_DPC_READY = 0x50,
XE_GUC_LOAD_STATUS_DPC_ERROR = 0x60,
XE_GUC_LOAD_STATUS_EXCEPTION = 0x70,
XE_GUC_LOAD_STATUS_INIT_DATA_INVALID = 0x71,
XE_GUC_LOAD_STATUS_PXP_TEARDOWN_CTRL_ENABLED = 0x72,
XE_GUC_LOAD_STATUS_INVALID_INIT_DATA_RANGE_START,
XE_GUC_LOAD_STATUS_MPU_DATA_INVALID = 0x73,
XE_GUC_LOAD_STATUS_INIT_MMIO_SAVE_RESTORE_INVALID = 0x74,
XE_GUC_LOAD_STATUS_INVALID_INIT_DATA_RANGE_END,
XE_GUC_LOAD_STATUS_READY = 0xF0,
};
#endif


@ -0,0 +1,322 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2021 Intel Corporation
*/
#ifndef _ABI_GUC_KLVS_ABI_H
#define _ABI_GUC_KLVS_ABI_H
#include <linux/types.h>
/**
* DOC: GuC KLV
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:16 | **KEY** - KLV key identifier |
* | | | - `GuC Self Config KLVs`_ |
* | | | - `GuC VGT Policy KLVs`_ |
* | | | - `GuC VF Configuration KLVs`_ |
* | | | |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | **LEN** - length of VALUE (in 32bit dwords) |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | **VALUE** - actual value of the KLV (format depends on KEY) |
* +---+-------+ |
* |...| | |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_KLV_LEN_MIN 1u
#define GUC_KLV_0_KEY (0xffff << 16)
#define GUC_KLV_0_LEN (0xffff << 0)
#define GUC_KLV_n_VALUE (0xffffffff << 0)
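/*
 * Illustrative sketch, not part of the ABI: walking a KLV stream using the
 * masks above and counting well-formed entries. FIELD_GET() comes from
 * <linux/bitfield.h>, which is assumed to be available.
 */
static inline u32 guc_klv_count_example(const u32 *klvs, u32 num_dwords)
{
	u32 count = 0;

	while (num_dwords >= GUC_KLV_LEN_MIN) {
		u32 len = FIELD_GET(GUC_KLV_0_LEN, klvs[0]);

		if (1 + len > num_dwords)
			break;	/* truncated entry, stop walking */

		count++;
		klvs += 1 + len;
		num_dwords -= 1 + len;
	}

	return count;
}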
/**
* DOC: GuC Self Config KLVs
*
* `GuC KLV`_ keys available for use with HOST2GUC_SELF_CFG_.
*
* _`GUC_KLV_SELF_CFG_MEMIRQ_STATUS_ADDR` : 0x0900
* Refers to 64 bit Global Gfx address (in bytes) of memory based interrupts
* status vector for use by the GuC.
*
* _`GUC_KLV_SELF_CFG_MEMIRQ_SOURCE_ADDR` : 0x0901
* Refers to 64 bit Global Gfx address (in bytes) of memory based interrupts
* source vector for use by the GuC.
*
* _`GUC_KLV_SELF_CFG_H2G_CTB_ADDR` : 0x0902
* Refers to 64 bit Global Gfx address of H2G `CT Buffer`_.
* Should be above WOPCM address but below APIC base address for native mode.
*
 * _`GUC_KLV_SELF_CFG_H2G_CTB_DESCRIPTOR_ADDR` : 0x0903
* Refers to 64 bit Global Gfx address of H2G `CTB Descriptor`_.
* Should be above WOPCM address but below APIC base address for native mode.
*
 * _`GUC_KLV_SELF_CFG_H2G_CTB_SIZE` : 0x0904
* Refers to size of H2G `CT Buffer`_ in bytes.
* Should be a multiple of 4K.
*
 * _`GUC_KLV_SELF_CFG_G2H_CTB_ADDR` : 0x0905
* Refers to 64 bit Global Gfx address of G2H `CT Buffer`_.
* Should be above WOPCM address but below APIC base address for native mode.
*
 * _`GUC_KLV_SELF_CFG_G2H_CTB_DESCRIPTOR_ADDR` : 0x0906
* Refers to 64 bit Global Gfx address of G2H `CTB Descriptor`_.
* Should be above WOPCM address but below APIC base address for native mode.
*
 * _`GUC_KLV_SELF_CFG_G2H_CTB_SIZE` : 0x0907
* Refers to size of G2H `CT Buffer`_ in bytes.
* Should be a multiple of 4K.
*/
#define GUC_KLV_SELF_CFG_MEMIRQ_STATUS_ADDR_KEY 0x0900
#define GUC_KLV_SELF_CFG_MEMIRQ_STATUS_ADDR_LEN 2u
#define GUC_KLV_SELF_CFG_MEMIRQ_SOURCE_ADDR_KEY 0x0901
#define GUC_KLV_SELF_CFG_MEMIRQ_SOURCE_ADDR_LEN 2u
#define GUC_KLV_SELF_CFG_H2G_CTB_ADDR_KEY 0x0902
#define GUC_KLV_SELF_CFG_H2G_CTB_ADDR_LEN 2u
#define GUC_KLV_SELF_CFG_H2G_CTB_DESCRIPTOR_ADDR_KEY 0x0903
#define GUC_KLV_SELF_CFG_H2G_CTB_DESCRIPTOR_ADDR_LEN 2u
#define GUC_KLV_SELF_CFG_H2G_CTB_SIZE_KEY 0x0904
#define GUC_KLV_SELF_CFG_H2G_CTB_SIZE_LEN 1u
#define GUC_KLV_SELF_CFG_G2H_CTB_ADDR_KEY 0x0905
#define GUC_KLV_SELF_CFG_G2H_CTB_ADDR_LEN 2u
#define GUC_KLV_SELF_CFG_G2H_CTB_DESCRIPTOR_ADDR_KEY 0x0906
#define GUC_KLV_SELF_CFG_G2H_CTB_DESCRIPTOR_ADDR_LEN 2u
#define GUC_KLV_SELF_CFG_G2H_CTB_SIZE_KEY 0x0907
#define GUC_KLV_SELF_CFG_G2H_CTB_SIZE_LEN 1u
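/*
 * Illustrative sketch, not part of the ABI: emitting the H2G CTB size KLV
 * into a caller-provided buffer, following the `GuC KLV`_ layout above.
 * FIELD_PREP() from <linux/bitfield.h> is assumed; the return value is the
 * number of dwords written.
 */
static inline u32 guc_klv_self_cfg_h2g_ctb_size_example(u32 *dst, u32 size)
{
	dst[0] = FIELD_PREP(GUC_KLV_0_KEY, GUC_KLV_SELF_CFG_H2G_CTB_SIZE_KEY) |
		 FIELD_PREP(GUC_KLV_0_LEN, GUC_KLV_SELF_CFG_H2G_CTB_SIZE_LEN);
	dst[1] = size;	/* per the DOC above, a multiple of 4K */

	return 1 + GUC_KLV_SELF_CFG_H2G_CTB_SIZE_LEN;
}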
/*
* Per context scheduling policy update keys.
*/
enum {
GUC_CONTEXT_POLICIES_KLV_ID_EXECUTION_QUANTUM = 0x2001,
GUC_CONTEXT_POLICIES_KLV_ID_PREEMPTION_TIMEOUT = 0x2002,
GUC_CONTEXT_POLICIES_KLV_ID_SCHEDULING_PRIORITY = 0x2003,
GUC_CONTEXT_POLICIES_KLV_ID_PREEMPT_TO_IDLE_ON_QUANTUM_EXPIRY = 0x2004,
GUC_CONTEXT_POLICIES_KLV_ID_SLPM_GT_FREQUENCY = 0x2005,
GUC_CONTEXT_POLICIES_KLV_NUM_IDS = 5,
};
/**
* DOC: GuC VGT Policy KLVs
*
* `GuC KLV`_ keys available for use with PF2GUC_UPDATE_VGT_POLICY.
*
* _`GUC_KLV_VGT_POLICY_SCHED_IF_IDLE` : 0x8001
 * This config sets whether strict scheduling is enabled, whereby any VF
 * that doesn't have work to submit is still allocated a fixed execution
 * time-slice, to ensure active VFs' execution is always consistent even
 * during other VF reprovisioning / rebooting events. Changing this KLV
* impacts all VFs and takes effect on the next VF-Switch event.
*
* :0: don't schedule idle (default)
* :1: schedule if idle
*
* _`GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD` : 0x8002
* This config sets the sample period for tracking adverse event counters.
 * A sample period is the period in milliseconds during which events are counted.
* This is applicable for all the VFs.
*
* :0: adverse events are not counted (default)
* :n: sample period in milliseconds
*
* _`GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH` : 0x8D00
 * This config controls whether the utilized HW engines are reset after a
 * VF switch (i.e. to clean up stale HW register state left behind by the
 * previous VF).
*
* :0: don't reset (default)
* :1: reset
*/
#define GUC_KLV_VGT_POLICY_SCHED_IF_IDLE_KEY 0x8001
#define GUC_KLV_VGT_POLICY_SCHED_IF_IDLE_LEN 1u
#define GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD_KEY 0x8002
#define GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD_LEN 1u
#define GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH_KEY 0x8D00
#define GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH_LEN 1u
/**
* DOC: GuC VF Configuration KLVs
*
* `GuC KLV`_ keys available for use with PF2GUC_UPDATE_VF_CFG.
*
* _`GUC_KLV_VF_CFG_GGTT_START` : 0x0001
 * A 4K aligned start GGTT address/offset assigned to the VF.
* Value is 64 bits.
*
* _`GUC_KLV_VF_CFG_GGTT_SIZE` : 0x0002
* A 4K aligned size of GGTT assigned to VF.
* Value is 64 bits.
*
* _`GUC_KLV_VF_CFG_LMEM_SIZE` : 0x0003
* A 2M aligned size of local memory assigned to VF.
* Value is 64 bits.
*
* _`GUC_KLV_VF_CFG_NUM_CONTEXTS` : 0x0004
* Refers to the number of contexts allocated to this VF.
*
* :0: no contexts (default)
* :1-65535: number of contexts (Gen12)
*
* _`GUC_KLV_VF_CFG_TILE_MASK` : 0x0005
 * For multi-tiled products, this field contains the bitwise OR of the
 * tiles assigned to the VF: bit 0 set means the VF has access to Tile-0,
 * bit 31 set means the VF has access to Tile-31, and so on.
 * At least one tile will always be allocated.
 * If all bits are zero, the VF KMD should treat this as a fatal error.
 * For single-tile products, this KLV config is ignored.
*
* _`GUC_KLV_VF_CFG_NUM_DOORBELLS` : 0x0006
* Refers to the number of doorbells allocated to this VF.
*
* :0: no doorbells (default)
* :1-255: number of doorbells (Gen12)
*
* _`GUC_KLV_VF_CFG_EXEC_QUANTUM` : 0x8A01
 * This config sets the VF's execution quantum in milliseconds.
 * GuC will attempt to obey the maximum value as much as HW is capable
 * of, but this will never be perfectly exact (accumulated nanosecond
 * granularity) since the GPU's clock runs off a different crystal
 * from the CPU's clock. Changing this KLV on a VF that is currently
 * running a context won't take effect until a new context is scheduled in.
 * That said, when the PF changes this value from 0xFFFFFFFF to
 * something else, it might never take effect if the VF is running an
 * infinitely long compute or shader kernel. In such a scenario, the
 * PF would need to trigger a VM PAUSE and then change the KLV to force
 * it to take effect. Such cases typically arise on a 1PF+1VF
 * virtualization config enabled for heavier workloads like AI/ML.
*
* :0: infinite exec quantum (default)
*
* _`GUC_KLV_VF_CFG_PREEMPT_TIMEOUT` : 0x8A02
 * This config sets the VF's preemption timeout in microseconds.
 * GuC will attempt to obey the minimum and maximum values as much as
 * HW is capable of, but this will never be perfectly exact (accumulated
 * nanosecond granularity) since the GPU's clock runs off a
 * different crystal from the CPU's clock. Changing this KLV on a VF
 * that is currently running a context won't take effect until a new
 * context is scheduled in.
 * That said, when the PF changes this value from 0xFFFFFFFF to
 * something else, it might never take effect if the VF is running an
 * infinitely long compute or shader kernel.
 * In this case, the PF would need to trigger a VM PAUSE and then change
 * the KLV to force it to take effect. Such cases typically arise
 * on a 1PF+1VF virtualization config enabled for heavier workloads like
 * AI/ML.
*
* :0: no preemption timeout (default)
*
* _`GUC_KLV_VF_CFG_THRESHOLD_CAT_ERR` : 0x8A03
 * This config sets the threshold for CAT errors caused by the VF.
*
* :0: adverse events or error will not be reported (default)
* :n: event occurrence count per sampling interval
*
* _`GUC_KLV_VF_CFG_THRESHOLD_ENGINE_RESET` : 0x8A04
 * This config sets the threshold for engine resets caused by the VF.
*
* :0: adverse events or error will not be reported (default)
* :n: event occurrence count per sampling interval
*
* _`GUC_KLV_VF_CFG_THRESHOLD_PAGE_FAULT` : 0x8A05
 * This config sets the threshold for page fault errors caused by the VF.
*
* :0: adverse events or error will not be reported (default)
* :n: event occurrence count per sampling interval
*
* _`GUC_KLV_VF_CFG_THRESHOLD_H2G_STORM` : 0x8A06
 * This config sets the threshold for H2G interrupts triggered by the VF.
*
* :0: adverse events or error will not be reported (default)
* :n: time (us) per sampling interval
*
* _`GUC_KLV_VF_CFG_THRESHOLD_IRQ_STORM` : 0x8A07
 * This config sets the threshold for GT interrupts triggered by the VF's
* workloads.
*
* :0: adverse events or error will not be reported (default)
* :n: time (us) per sampling interval
*
* _`GUC_KLV_VF_CFG_THRESHOLD_DOORBELL_STORM` : 0x8A08
 * This config sets the threshold for doorbell rings triggered by the VF.
*
* :0: adverse events or error will not be reported (default)
* :n: time (us) per sampling interval
*
* _`GUC_KLV_VF_CFG_BEGIN_DOORBELL_ID` : 0x8A0A
 * Refers to the start index of the doorbells assigned to this VF.
*
* :0: (default)
* :1-255: number of doorbells (Gen12)
*
* _`GUC_KLV_VF_CFG_BEGIN_CONTEXT_ID` : 0x8A0B
 * Refers to the start index in the context array allocated to this VF's use.
*
* :0: (default)
* :1-65535: number of contexts (Gen12)
*/
#define GUC_KLV_VF_CFG_GGTT_START_KEY 0x0001
#define GUC_KLV_VF_CFG_GGTT_START_LEN 2u
#define GUC_KLV_VF_CFG_GGTT_SIZE_KEY 0x0002
#define GUC_KLV_VF_CFG_GGTT_SIZE_LEN 2u
#define GUC_KLV_VF_CFG_LMEM_SIZE_KEY 0x0003
#define GUC_KLV_VF_CFG_LMEM_SIZE_LEN 2u
#define GUC_KLV_VF_CFG_NUM_CONTEXTS_KEY 0x0004
#define GUC_KLV_VF_CFG_NUM_CONTEXTS_LEN 1u
#define GUC_KLV_VF_CFG_TILE_MASK_KEY 0x0005
#define GUC_KLV_VF_CFG_TILE_MASK_LEN 1u
#define GUC_KLV_VF_CFG_NUM_DOORBELLS_KEY 0x0006
#define GUC_KLV_VF_CFG_NUM_DOORBELLS_LEN 1u
#define GUC_KLV_VF_CFG_EXEC_QUANTUM_KEY 0x8a01
#define GUC_KLV_VF_CFG_EXEC_QUANTUM_LEN 1u
#define GUC_KLV_VF_CFG_PREEMPT_TIMEOUT_KEY 0x8a02
#define GUC_KLV_VF_CFG_PREEMPT_TIMEOUT_LEN 1u
#define GUC_KLV_VF_CFG_THRESHOLD_CAT_ERR_KEY 0x8a03
#define GUC_KLV_VF_CFG_THRESHOLD_CAT_ERR_LEN 1u
#define GUC_KLV_VF_CFG_THRESHOLD_ENGINE_RESET_KEY 0x8a04
#define GUC_KLV_VF_CFG_THRESHOLD_ENGINE_RESET_LEN 1u
#define GUC_KLV_VF_CFG_THRESHOLD_PAGE_FAULT_KEY 0x8a05
#define GUC_KLV_VF_CFG_THRESHOLD_PAGE_FAULT_LEN 1u
#define GUC_KLV_VF_CFG_THRESHOLD_H2G_STORM_KEY 0x8a06
#define GUC_KLV_VF_CFG_THRESHOLD_H2G_STORM_LEN 1u
#define GUC_KLV_VF_CFG_THRESHOLD_IRQ_STORM_KEY 0x8a07
#define GUC_KLV_VF_CFG_THRESHOLD_IRQ_STORM_LEN 1u
#define GUC_KLV_VF_CFG_THRESHOLD_DOORBELL_STORM_KEY 0x8a08
#define GUC_KLV_VF_CFG_THRESHOLD_DOORBELL_STORM_LEN 1u
#define GUC_KLV_VF_CFG_BEGIN_DOORBELL_ID_KEY 0x8a0a
#define GUC_KLV_VF_CFG_BEGIN_DOORBELL_ID_LEN 1u
#define GUC_KLV_VF_CFG_BEGIN_CONTEXT_ID_KEY 0x8a0b
#define GUC_KLV_VF_CFG_BEGIN_CONTEXT_ID_LEN 1u
#endif


@ -0,0 +1,234 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2014-2021 Intel Corporation
*/
#ifndef _ABI_GUC_MESSAGES_ABI_H
#define _ABI_GUC_MESSAGES_ABI_H
/**
* DOC: HXG Message
*
* All messages exchanged with GuC are defined using 32 bit dwords.
* First dword is treated as a message header. Remaining dwords are optional.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | | | |
* | 0 | 31 | **ORIGIN** - originator of the message |
* | | | - _`GUC_HXG_ORIGIN_HOST` = 0 |
* | | | - _`GUC_HXG_ORIGIN_GUC` = 1 |
* | | | |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | **TYPE** - message type |
* | | | - _`GUC_HXG_TYPE_REQUEST` = 0 |
* | | | - _`GUC_HXG_TYPE_EVENT` = 1 |
* | | | - _`GUC_HXG_TYPE_NO_RESPONSE_BUSY` = 3 |
* | | | - _`GUC_HXG_TYPE_NO_RESPONSE_RETRY` = 5 |
* | | | - _`GUC_HXG_TYPE_RESPONSE_FAILURE` = 6 |
* | | | - _`GUC_HXG_TYPE_RESPONSE_SUCCESS` = 7 |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | **AUX** - auxiliary data (depends on TYPE) |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | |
* +---+-------+ |
* |...| | **PAYLOAD** - optional payload (depends on TYPE) |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_HXG_MSG_MIN_LEN 1u
#define GUC_HXG_MSG_0_ORIGIN (0x1 << 31)
#define GUC_HXG_ORIGIN_HOST 0u
#define GUC_HXG_ORIGIN_GUC 1u
#define GUC_HXG_MSG_0_TYPE (0x7 << 28)
#define GUC_HXG_TYPE_REQUEST 0u
#define GUC_HXG_TYPE_EVENT 1u
#define GUC_HXG_TYPE_NO_RESPONSE_BUSY 3u
#define GUC_HXG_TYPE_NO_RESPONSE_RETRY 5u
#define GUC_HXG_TYPE_RESPONSE_FAILURE 6u
#define GUC_HXG_TYPE_RESPONSE_SUCCESS 7u
#define GUC_HXG_MSG_0_AUX (0xfffffff << 0)
#define GUC_HXG_MSG_n_PAYLOAD (0xffffffff << 0)
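/*
 * Illustrative sketch, not part of the ABI: decoding the common header
 * fields of the first HXG dword. FIELD_GET() from <linux/bitfield.h> is
 * assumed to be available.
 */
static inline bool guc_hxg_is_host_request_example(u32 hxg0)
{
	return FIELD_GET(GUC_HXG_MSG_0_ORIGIN, hxg0) == GUC_HXG_ORIGIN_HOST &&
	       FIELD_GET(GUC_HXG_MSG_0_TYPE, hxg0) == GUC_HXG_TYPE_REQUEST;
}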
/**
* DOC: HXG Request
*
* The `HXG Request`_ message should be used to initiate synchronous activity
* for which confirmation or return data is expected.
*
 * The recipient of this message shall use an `HXG Response`_, `HXG Failure`_,
 * or `HXG Retry`_ message as the definite reply, and may use an `HXG Busy`_
 * message as an intermediate reply.
*
* Format of @DATA0 and all @DATAn fields depends on the @ACTION code.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | **DATA0** - request data (depends on ACTION) |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | **ACTION** - requested action code |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | |
* +---+-------+ |
* |...| | **DATAn** - optional data (depends on ACTION) |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_HXG_REQUEST_MSG_MIN_LEN GUC_HXG_MSG_MIN_LEN
#define GUC_HXG_REQUEST_MSG_0_DATA0 (0xfff << 16)
#define GUC_HXG_REQUEST_MSG_0_ACTION (0xffff << 0)
#define GUC_HXG_REQUEST_MSG_n_DATAn GUC_HXG_MSG_n_PAYLOAD
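/*
 * Illustrative sketch, not part of the ABI: packing the first dword of a
 * host-originated HXG request from an action code and its DATA0 bits,
 * mirroring the layout documented above. FIELD_PREP() from
 * <linux/bitfield.h> is assumed to be available.
 */
static inline u32 guc_hxg_request_msg0_example(u32 action, u32 data0)
{
	return FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
	       FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
	       FIELD_PREP(GUC_HXG_REQUEST_MSG_0_DATA0, data0) |
	       FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION, action);
}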
/**
* DOC: HXG Event
*
 * The `HXG Event`_ message should be used to initiate asynchronous activity
 * that does not involve immediate confirmation or return data.
*
* Format of @DATA0 and all @DATAn fields depends on the @ACTION code.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_EVENT_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | **DATA0** - event data (depends on ACTION) |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | **ACTION** - event action code |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | |
* +---+-------+ |
* |...| | **DATAn** - optional event data (depends on ACTION) |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_HXG_EVENT_MSG_MIN_LEN GUC_HXG_MSG_MIN_LEN
#define GUC_HXG_EVENT_MSG_0_DATA0 (0xfff << 16)
#define GUC_HXG_EVENT_MSG_0_ACTION (0xffff << 0)
#define GUC_HXG_EVENT_MSG_n_DATAn GUC_HXG_MSG_n_PAYLOAD
/**
* DOC: HXG Busy
*
 * The `HXG Busy`_ message may be used to acknowledge reception of the `HXG Request`_
 * message if the recipient expects that its processing will take longer than the
 * default timeout.
*
* The @COUNTER field may be used as a progress indicator.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_NO_RESPONSE_BUSY_ |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | **COUNTER** - progress indicator |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_HXG_BUSY_MSG_LEN GUC_HXG_MSG_MIN_LEN
#define GUC_HXG_BUSY_MSG_0_COUNTER GUC_HXG_MSG_0_AUX
/**
* DOC: HXG Retry
*
 * The `HXG Retry`_ message should be used by the recipient to indicate that
 * the `HXG Request`_ message was dropped and should be resent.
*
* The @REASON field may be used to provide additional information.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_NO_RESPONSE_RETRY_ |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | **REASON** - reason for retry |
* | | | - _`GUC_HXG_RETRY_REASON_UNSPECIFIED` = 0 |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_HXG_RETRY_MSG_LEN GUC_HXG_MSG_MIN_LEN
#define GUC_HXG_RETRY_MSG_0_REASON GUC_HXG_MSG_0_AUX
#define GUC_HXG_RETRY_REASON_UNSPECIFIED 0u
/**
* DOC: HXG Failure
*
* The `HXG Failure`_ message shall be used as a reply to the `HXG Request`_
* message that could not be processed due to an error.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_FAILURE_ |
* | +-------+--------------------------------------------------------------+
* | | 27:16 | **HINT** - additional error hint |
* | +-------+--------------------------------------------------------------+
* | | 15:0 | **ERROR** - error/result code |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_HXG_FAILURE_MSG_LEN GUC_HXG_MSG_MIN_LEN
#define GUC_HXG_FAILURE_MSG_0_HINT (0xfff << 16)
#define GUC_HXG_FAILURE_MSG_0_ERROR (0xffff << 0)
/**
* DOC: HXG Response
*
* The `HXG Response`_ message shall be used as a reply to the `HXG Request`_
* message that was successfully processed without an error.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31 | ORIGIN |
* | +-------+--------------------------------------------------------------+
* | | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_ |
* | +-------+--------------------------------------------------------------+
* | | 27:0 | **DATA0** - data (depends on ACTION from `HXG Request`_) |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | |
* +---+-------+ |
* |...| | **DATAn** - data (depends on ACTION from `HXG Request`_) |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_HXG_RESPONSE_MSG_MIN_LEN GUC_HXG_MSG_MIN_LEN
#define GUC_HXG_RESPONSE_MSG_0_DATA0 GUC_HXG_MSG_0_AUX
#define GUC_HXG_RESPONSE_MSG_n_DATAn GUC_HXG_MSG_n_PAYLOAD
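/*
 * Illustrative sketch, not part of the ABI: classifying a reply dword as
 * success data or an error, following the response/failure layouts above.
 * FIELD_GET() from <linux/bitfield.h> and <linux/errno.h> are assumed to
 * be available; busy/retry handling is deliberately left out.
 */
static inline int guc_hxg_reply_status_example(u32 hxg0, u32 *data0)
{
	switch (FIELD_GET(GUC_HXG_MSG_0_TYPE, hxg0)) {
	case GUC_HXG_TYPE_RESPONSE_SUCCESS:
		*data0 = FIELD_GET(GUC_HXG_RESPONSE_MSG_0_DATA0, hxg0);
		return 0;
	case GUC_HXG_TYPE_RESPONSE_FAILURE:
		return -EPROTO;	/* the ERROR field holds the GuC error code */
	default:
		return -EAGAIN;	/* busy or retry, caller should resample */
	}
}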
/* deprecated */
#define INTEL_GUC_MSG_TYPE_SHIFT 28
#define INTEL_GUC_MSG_TYPE_MASK (0xF << INTEL_GUC_MSG_TYPE_SHIFT)
#define INTEL_GUC_MSG_DATA_SHIFT 16
#define INTEL_GUC_MSG_DATA_MASK (0xFFF << INTEL_GUC_MSG_DATA_SHIFT)
#define INTEL_GUC_MSG_CODE_SHIFT 0
#define INTEL_GUC_MSG_CODE_MASK (0xFFFF << INTEL_GUC_MSG_CODE_SHIFT)
enum intel_guc_msg_type {
INTEL_GUC_MSG_TYPE_REQUEST = 0x0,
INTEL_GUC_MSG_TYPE_RESPONSE = 0xF,
};
#endif


@ -0,0 +1 @@
/* Empty */


@ -0,0 +1,17 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _I915_GEM_MMAN_H_
#define _I915_GEM_MMAN_H_
#include "xe_bo_types.h"
#include <drm/drm_prime.h>
static inline int i915_gem_fb_mmap(struct xe_bo *bo, struct vm_area_struct *vma)
{
return drm_gem_prime_mmap(&bo->ttm.base, vma);
}
#endif


@ -0,0 +1,65 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _I915_GEM_OBJECT_H_
#define _I915_GEM_OBJECT_H_
#include <linux/types.h>
#include "xe_bo.h"
#define i915_gem_object_is_shmem(obj) ((obj)->flags & XE_BO_CREATE_SYSTEM_BIT)
static inline dma_addr_t i915_gem_object_get_dma_address(const struct xe_bo *bo, pgoff_t n)
{
/* Should never be called */
WARN_ON(1);
return n;
}
static inline bool i915_gem_object_is_tiled(const struct xe_bo *bo)
{
/* legacy tiling is unused */
return false;
}
static inline bool i915_gem_object_is_userptr(const struct xe_bo *bo)
{
	/* userptr is not used by the xe display compat layer */
return false;
}
static inline int i915_gem_object_read_from_page(struct xe_bo *bo,
u32 ofs, u64 *ptr, u32 size)
{
struct ttm_bo_kmap_obj map;
void *virtual;
bool is_iomem;
int ret;
XE_WARN_ON(size != 8);
ret = xe_bo_lock(bo, true);
if (ret)
return ret;
ret = ttm_bo_kmap(&bo->ttm, ofs >> PAGE_SHIFT, 1, &map);
if (ret)
goto out_unlock;
ofs &= ~PAGE_MASK;
virtual = ttm_kmap_obj_virtual(&map, &is_iomem);
if (is_iomem)
*ptr = readq((void __iomem *)(virtual + ofs));
else
*ptr = *(u64 *)(virtual + ofs);
ttm_bo_kunmap(&map);
out_unlock:
xe_bo_unlock(bo);
return ret;
}
#endif


@ -0,0 +1,12 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _I915_GEM_OBJECT_FRONTBUFFER_H_
#define _I915_GEM_OBJECT_FRONTBUFFER_H_
#define i915_gem_object_get_frontbuffer(obj) NULL
#define i915_gem_object_set_frontbuffer(obj, front) (front)
#endif


@ -0,0 +1,11 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __INTEL_RPS_H__
#define __INTEL_RPS_H__
#define gen5_rps_irq_handler(x) ({})
#endif /* __INTEL_RPS_H__ */


@ -0,0 +1,22 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _I915_ACTIVE_H_
#define _I915_ACTIVE_H_
#include "i915_active_types.h"
static inline void i915_active_init(struct i915_active *ref,
int (*active)(struct i915_active *ref),
void (*retire)(struct i915_active *ref),
unsigned long flags)
{
(void) active;
(void) retire;
}
#define i915_active_fini(active) do { } while (0)
#endif


@ -0,0 +1,13 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#ifndef _I915_ACTIVE_TYPES_H_
#define _I915_ACTIVE_TYPES_H_
struct i915_active {};
#define I915_ACTIVE_RETIRE_SLEEPS 0
#endif /* _I915_ACTIVE_TYPES_H_ */


@ -0,0 +1,19 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __I915_CONFIG_H__
#define __I915_CONFIG_H__
#include <linux/sched.h>
struct drm_i915_private;
static inline unsigned long
i915_fence_timeout(const struct drm_i915_private *i915)
{
return MAX_SCHEDULE_TIMEOUT;
}
#endif /* __I915_CONFIG_H__ */


@ -0,0 +1,14 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __I915_DEBUGFS_H__
#define __I915_DEBUGFS_H__
struct drm_i915_gem_object;
struct seq_file;
static inline void i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) {}
#endif /* __I915_DEBUGFS_H__ */


@ -0,0 +1,233 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_I915_DRV_H_
#define _XE_I915_DRV_H_
/*
* "Adaptation header" to allow i915 display to also build for xe driver.
* TODO: refactor i915 and xe so this can cease to exist
*/
#include <drm/drm_drv.h>
#include "gem/i915_gem_object.h"
#include "soc/intel_pch.h"
#include "xe_device.h"
#include "xe_bo.h"
#include "xe_pm.h"
#include "xe_step.h"
#include "i915_gem.h"
#include "i915_gem_stolen.h"
#include "i915_gpu_error.h"
#include "i915_reg_defs.h"
#include "i915_utils.h"
#include "intel_gt_types.h"
#include "intel_step.h"
#include "intel_uc_fw.h"
#include "intel_uncore.h"
#include "intel_runtime_pm.h"
#include <linux/pm_runtime.h>
static inline struct drm_i915_private *to_i915(const struct drm_device *dev)
{
return container_of(dev, struct drm_i915_private, drm);
}
static inline struct drm_i915_private *kdev_to_i915(struct device *kdev)
{
return dev_get_drvdata(kdev);
}
#define INTEL_JASPERLAKE 0
#define INTEL_ELKHARTLAKE 0
#define IS_PLATFORM(xe, x) ((xe)->info.platform == x)
#define INTEL_INFO(dev_priv) (&((dev_priv)->info))
#define INTEL_DEVID(dev_priv) ((dev_priv)->info.devid)
#define IS_I830(dev_priv) (dev_priv && 0)
#define IS_I845G(dev_priv) (dev_priv && 0)
#define IS_I85X(dev_priv) (dev_priv && 0)
#define IS_I865G(dev_priv) (dev_priv && 0)
#define IS_I915G(dev_priv) (dev_priv && 0)
#define IS_I915GM(dev_priv) (dev_priv && 0)
#define IS_I945G(dev_priv) (dev_priv && 0)
#define IS_I945GM(dev_priv) (dev_priv && 0)
#define IS_I965G(dev_priv) (dev_priv && 0)
#define IS_I965GM(dev_priv) (dev_priv && 0)
#define IS_G45(dev_priv) (dev_priv && 0)
#define IS_GM45(dev_priv) (dev_priv && 0)
#define IS_G4X(dev_priv) (dev_priv && 0)
#define IS_PINEVIEW(dev_priv) (dev_priv && 0)
#define IS_G33(dev_priv) (dev_priv && 0)
#define IS_IRONLAKE(dev_priv) (dev_priv && 0)
#define IS_IRONLAKE_M(dev_priv) (dev_priv && 0)
#define IS_SANDYBRIDGE(dev_priv) (dev_priv && 0)
#define IS_IVYBRIDGE(dev_priv) (dev_priv && 0)
#define IS_IVB_GT1(dev_priv) (dev_priv && 0)
#define IS_VALLEYVIEW(dev_priv) (dev_priv && 0)
#define IS_CHERRYVIEW(dev_priv) (dev_priv && 0)
#define IS_HASWELL(dev_priv) (dev_priv && 0)
#define IS_BROADWELL(dev_priv) (dev_priv && 0)
#define IS_SKYLAKE(dev_priv) (dev_priv && 0)
#define IS_BROXTON(dev_priv) (dev_priv && 0)
#define IS_KABYLAKE(dev_priv) (dev_priv && 0)
#define IS_GEMINILAKE(dev_priv) (dev_priv && 0)
#define IS_COFFEELAKE(dev_priv) (dev_priv && 0)
#define IS_COMETLAKE(dev_priv) (dev_priv && 0)
#define IS_ICELAKE(dev_priv) (dev_priv && 0)
#define IS_JASPERLAKE(dev_priv) (dev_priv && 0)
#define IS_ELKHARTLAKE(dev_priv) (dev_priv && 0)
#define IS_TIGERLAKE(dev_priv) IS_PLATFORM(dev_priv, XE_TIGERLAKE)
#define IS_ROCKETLAKE(dev_priv) IS_PLATFORM(dev_priv, XE_ROCKETLAKE)
#define IS_DG1(dev_priv) IS_PLATFORM(dev_priv, XE_DG1)
#define IS_ALDERLAKE_S(dev_priv) IS_PLATFORM(dev_priv, XE_ALDERLAKE_S)
#define IS_ALDERLAKE_P(dev_priv) IS_PLATFORM(dev_priv, XE_ALDERLAKE_P)
#define IS_XEHPSDV(dev_priv) (dev_priv && 0)
#define IS_DG2(dev_priv) IS_PLATFORM(dev_priv, XE_DG2)
#define IS_PONTEVECCHIO(dev_priv) IS_PLATFORM(dev_priv, XE_PVC)
#define IS_METEORLAKE(dev_priv) IS_PLATFORM(dev_priv, XE_METEORLAKE)
#define IS_LUNARLAKE(dev_priv) IS_PLATFORM(dev_priv, XE_LUNARLAKE)
#define IS_HASWELL_ULT(dev_priv) (dev_priv && 0)
#define IS_BROADWELL_ULT(dev_priv) (dev_priv && 0)
#define IS_BROADWELL_ULX(dev_priv) (dev_priv && 0)
#define IP_VER(ver, rel) ((ver) << 8 | (rel))
#define INTEL_DISPLAY_ENABLED(xe) (HAS_DISPLAY((xe)) && !intel_opregion_headless_sku((xe)))
#define IS_GRAPHICS_VER(xe, first, last) \
	((xe)->info.graphics_verx100 >= ((first) * 100) && \
	 (xe)->info.graphics_verx100 <= ((last) * 100 + 99))
#define IS_MOBILE(xe) (xe && 0)
#define HAS_LLC(xe) (!IS_DGFX((xe)))
#define HAS_GMD_ID(xe) (GRAPHICS_VERx100(xe) >= 1270)
/* Workarounds not handled yet */
#define IS_DISPLAY_STEP(xe, first, last) ({u8 __step = (xe)->info.step.display; first <= __step && __step <= last; })
#define IS_GRAPHICS_STEP(xe, first, last) ({u8 __step = (xe)->info.step.graphics; first <= __step && __step <= last; })
#define IS_LP(xe) (0)
#define IS_GEN9_LP(xe) (0)
#define IS_GEN9_BC(xe) (0)
#define IS_TIGERLAKE_UY(xe) (xe && 0)
#define IS_COMETLAKE_ULX(xe) (xe && 0)
#define IS_COFFEELAKE_ULX(xe) (xe && 0)
#define IS_KABYLAKE_ULX(xe) (xe && 0)
#define IS_SKYLAKE_ULX(xe) (xe && 0)
#define IS_HASWELL_ULX(xe) (xe && 0)
#define IS_COMETLAKE_ULT(xe) (xe && 0)
#define IS_COFFEELAKE_ULT(xe) (xe && 0)
#define IS_KABYLAKE_ULT(xe) (xe && 0)
#define IS_SKYLAKE_ULT(xe) (xe && 0)
#define IS_DG1_GRAPHICS_STEP(xe, first, last) (IS_DG1(xe) && IS_GRAPHICS_STEP(xe, first, last))
#define IS_DG2_GRAPHICS_STEP(xe, variant, first, last) \
((xe)->info.subplatform == XE_SUBPLATFORM_DG2_ ## variant && \
IS_GRAPHICS_STEP(xe, first, last))
#define IS_XEHPSDV_GRAPHICS_STEP(xe, first, last) (IS_XEHPSDV(xe) && IS_GRAPHICS_STEP(xe, first, last))
/* XXX: No basedie stepping support yet */
#define IS_PVC_BD_STEP(xe, first, last) (!WARN_ON(1) && IS_PONTEVECCHIO(xe))
#define IS_TIGERLAKE_DISPLAY_STEP(xe, first, last) (IS_TIGERLAKE(xe) && IS_DISPLAY_STEP(xe, first, last))
#define IS_ROCKETLAKE_DISPLAY_STEP(xe, first, last) (IS_ROCKETLAKE(xe) && IS_DISPLAY_STEP(xe, first, last))
#define IS_DG1_DISPLAY_STEP(xe, first, last) (IS_DG1(xe) && IS_DISPLAY_STEP(xe, first, last))
#define IS_DG2_DISPLAY_STEP(xe, first, last) (IS_DG2(xe) && IS_DISPLAY_STEP(xe, first, last))
#define IS_ADLP_DISPLAY_STEP(xe, first, last) (IS_ALDERLAKE_P(xe) && IS_DISPLAY_STEP(xe, first, last))
#define IS_ADLS_DISPLAY_STEP(xe, first, last) (IS_ALDERLAKE_S(xe) && IS_DISPLAY_STEP(xe, first, last))
#define IS_JSL_EHL_DISPLAY_STEP(xe, first, last) (IS_JSL_EHL(xe) && IS_DISPLAY_STEP(xe, first, last))
#define IS_MTL_DISPLAY_STEP(xe, first, last) (IS_METEORLAKE(xe) && IS_DISPLAY_STEP(xe, first, last))
/* FIXME: Add subplatform here */
#define IS_MTL_GRAPHICS_STEP(xe, sub, first, last) (IS_METEORLAKE(xe) && IS_DISPLAY_STEP(xe, first, last))
#define IS_DG2_G10(xe) ((xe)->info.subplatform == XE_SUBPLATFORM_DG2_G10)
#define IS_DG2_G11(xe) ((xe)->info.subplatform == XE_SUBPLATFORM_DG2_G11)
#define IS_DG2_G12(xe) ((xe)->info.subplatform == XE_SUBPLATFORM_DG2_G12)
#define IS_RAPTORLAKE_U(xe) ((xe)->info.subplatform == XE_SUBPLATFORM_ALDERLAKE_P_RPLU)
#define IS_ICL_WITH_PORT_F(xe) (xe && 0)
#define HAS_FLAT_CCS(xe) (xe_device_has_flat_ccs(xe))
#define to_intel_bo(x) gem_to_xe_bo((x))
#define mkwrite_device_info(xe) (INTEL_INFO(xe))
#define HAS_128_BYTE_Y_TILING(xe) (xe || 1)
#define intel_has_gpu_reset(a) (a && 0)
#include "intel_wakeref.h"
static inline bool intel_runtime_pm_get(struct xe_runtime_pm *pm)
{
struct xe_device *xe = container_of(pm, struct xe_device, runtime_pm);
if (xe_pm_runtime_get(xe) < 0) {
xe_pm_runtime_put(xe);
return false;
}
return true;
}
static inline bool intel_runtime_pm_get_if_in_use(struct xe_runtime_pm *pm)
{
struct xe_device *xe = container_of(pm, struct xe_device, runtime_pm);
return xe_pm_runtime_get_if_active(xe);
}
static inline void intel_runtime_pm_put_unchecked(struct xe_runtime_pm *pm)
{
struct xe_device *xe = container_of(pm, struct xe_device, runtime_pm);
xe_pm_runtime_put(xe);
}
static inline void intel_runtime_pm_put(struct xe_runtime_pm *pm, bool wakeref)
{
if (wakeref)
intel_runtime_pm_put_unchecked(pm);
}
#define intel_runtime_pm_get_raw intel_runtime_pm_get
#define intel_runtime_pm_put_raw intel_runtime_pm_put
#define assert_rpm_wakelock_held(x) do { } while (0)
#define assert_rpm_raw_wakeref_held(x) do { } while (0)
#define intel_uncore_forcewake_get(x, y) do { } while (0)
#define intel_uncore_forcewake_put(x, y) do { } while (0)
#define intel_uncore_arm_unclaimed_mmio_detection(x) do { } while (0)
#define I915_PRIORITY_DISPLAY 0
struct i915_sched_attr {
int priority;
};
#define i915_gem_fence_wait_priority(fence, attr) do { (void) attr; } while (0)
#define with_intel_runtime_pm(rpm, wf) \
for ((wf) = intel_runtime_pm_get(rpm); (wf); \
intel_runtime_pm_put((rpm), (wf)), (wf) = 0)
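/*
 * Usage sketch (illustrative only): the for-loop form above scopes the
 * wakeref to the statement body and drops it again on exit. The callee
 * name is hypothetical.
 *
 *	intel_wakeref_t wf;
 *
 *	with_intel_runtime_pm(&xe->runtime_pm, wf)
 *		touch_hw_example(xe);
 */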
#define pdev_to_i915 pdev_to_xe_device
#define RUNTIME_INFO(xe) (&(xe)->info.i915_runtime)
#define FORCEWAKE_ALL XE_FORCEWAKE_ALL
#define HPD_STORM_DEFAULT_THRESHOLD 50
#ifdef CONFIG_ARM64
/*
* arm64 indirectly includes linux/rtc.h,
 * which defines an irq_lock, so include it
* here before #define-ing it
*/
#include <linux/rtc.h>
#endif
#define irq_lock irq.lock
#endif


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/i915_fixed.h"


@ -0,0 +1,9 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __I915_GEM_H__
#define __I915_GEM_H__
#define GEM_BUG_ON
#endif


@ -0,0 +1,79 @@
#ifndef _I915_GEM_STOLEN_H_
#define _I915_GEM_STOLEN_H_
#include "xe_ttm_stolen_mgr.h"
#include "xe_res_cursor.h"
struct xe_bo;
struct i915_stolen_fb {
struct xe_bo *bo;
};
static inline int i915_gem_stolen_insert_node_in_range(struct xe_device *xe,
struct i915_stolen_fb *fb,
u32 size, u32 align,
u32 start, u32 end)
{
struct xe_bo *bo;
int err;
u32 flags = XE_BO_CREATE_PINNED_BIT | XE_BO_CREATE_STOLEN_BIT;
bo = xe_bo_create_locked_range(xe, xe_device_get_root_tile(xe),
NULL, size, start, end,
ttm_bo_type_kernel, flags);
if (IS_ERR(bo)) {
err = PTR_ERR(bo);
bo = NULL;
return err;
}
err = xe_bo_pin(bo);
xe_bo_unlock_vm_held(bo);
if (err) {
		xe_bo_put(bo);
bo = NULL;
}
fb->bo = bo;
return err;
}
static inline int i915_gem_stolen_insert_node(struct xe_device *xe,
struct i915_stolen_fb *fb,
u32 size, u32 align)
{
/* Not used on xe */
BUG_ON(1);
return -ENODEV;
}
static inline void i915_gem_stolen_remove_node(struct xe_device *xe,
struct i915_stolen_fb *fb)
{
xe_bo_unpin_map_no_vm(fb->bo);
fb->bo = NULL;
}
#define i915_gem_stolen_initialized(xe) (!!ttm_manager_type(&(xe)->ttm, XE_PL_STOLEN))
#define i915_gem_stolen_node_allocated(fb) (!!((fb)->bo))
static inline u32 i915_gem_stolen_node_offset(struct i915_stolen_fb *fb)
{
struct xe_res_cursor res;
xe_res_first(fb->bo->ttm.resource, 0, 4096, &res);
return res.start;
}
/* Used for < gen4. These are not supported by Xe */
#define i915_gem_stolen_area_address(xe) (!WARN_ON(1))
/* Used for gen9 specific WA. Gen9 is not supported by Xe */
#define i915_gem_stolen_area_size(xe) (!WARN_ON(1))
#define i915_gem_stolen_node_address(xe, fb) (xe_ttm_stolen_gpu_offset(xe) + \
i915_gem_stolen_node_offset(fb))
#define i915_gem_stolen_node_size(fb) ((u64)((fb)->bo->ttm.base.size))
#endif


@ -0,0 +1,17 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _I915_GPU_ERROR_H_
#define _I915_GPU_ERROR_H_
struct drm_i915_error_state_buf;
__printf(2, 3)
static inline void
i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...)
{
}
#endif


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/i915_irq.h"


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/i915_reg.h"


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/i915_reg_defs.h"


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#define trace_i915_reg_rw(a...) do { } while (0)


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/i915_utils.h"


@ -0,0 +1,44 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _I915_VGPU_H_
#define _I915_VGPU_H_
#include <linux/types.h>
struct drm_i915_private;
struct i915_ggtt;
static inline void intel_vgpu_detect(struct drm_i915_private *i915)
{
}
static inline bool intel_vgpu_active(struct drm_i915_private *i915)
{
return false;
}
static inline void intel_vgpu_register(struct drm_i915_private *i915)
{
}
static inline bool intel_vgpu_has_full_ppgtt(struct drm_i915_private *i915)
{
return false;
}
static inline bool intel_vgpu_has_hwsp_emulation(struct drm_i915_private *i915)
{
return false;
}
static inline bool intel_vgpu_has_huge_gtt(struct drm_i915_private *i915)
{
return false;
}
static inline int intel_vgt_balloon(struct i915_ggtt *ggtt)
{
return 0;
}
static inline void intel_vgt_deballoon(struct i915_ggtt *ggtt)
{
}
#endif /* _I915_VGPU_H_ */


@ -0,0 +1,34 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef I915_VMA_H
#define I915_VMA_H
#include <uapi/drm/i915_drm.h>
#include <drm/drm_mm.h>
/* We don't want these from i915_drm.h in case of Xe */
#undef I915_TILING_X
#undef I915_TILING_Y
#define I915_TILING_X 0
#define I915_TILING_Y 0
struct xe_bo;
struct i915_vma {
struct xe_bo *bo, *dpt;
struct drm_mm_node node;
};
#define i915_ggtt_clear_scanout(bo) do { } while (0)
#define i915_vma_fence_id(vma) -1
static inline u32 i915_ggtt_offset(const struct i915_vma *vma)
{
return vma->node.start;
}
#endif


@ -0,0 +1,74 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include <linux/types.h>
#include <linux/build_bug.h>
/* XXX: Figure out how to handle this vma mapping in xe */
struct intel_remapped_plane_info {
/* in gtt pages */
u32 offset:31;
u32 linear:1;
union {
/* in gtt pages for !linear */
struct {
u16 width;
u16 height;
u16 src_stride;
u16 dst_stride;
};
/* in gtt pages for linear */
u32 size;
};
} __packed;
struct intel_remapped_info {
struct intel_remapped_plane_info plane[4];
/* in gtt pages */
u32 plane_alignment;
} __packed;
struct intel_rotation_info {
struct intel_remapped_plane_info plane[2];
} __packed;
enum i915_gtt_view_type {
I915_GTT_VIEW_NORMAL = 0,
I915_GTT_VIEW_ROTATED = sizeof(struct intel_rotation_info),
I915_GTT_VIEW_REMAPPED = sizeof(struct intel_remapped_info),
};
static inline void assert_i915_gem_gtt_types(void)
{
BUILD_BUG_ON(sizeof(struct intel_rotation_info) != 2 * sizeof(u32) + 8 * sizeof(u16));
BUILD_BUG_ON(sizeof(struct intel_remapped_info) != 5 * sizeof(u32) + 16 * sizeof(u16));
/* Check that rotation/remapped shares offsets for simplicity */
BUILD_BUG_ON(offsetof(struct intel_remapped_info, plane[0]) !=
offsetof(struct intel_rotation_info, plane[0]));
BUILD_BUG_ON(offsetofend(struct intel_remapped_info, plane[1]) !=
offsetofend(struct intel_rotation_info, plane[1]));
/* As we encode the size of each branch inside the union into its type,
* we have to be careful that each branch has a unique size.
*/
switch ((enum i915_gtt_view_type)0) {
case I915_GTT_VIEW_NORMAL:
case I915_GTT_VIEW_ROTATED:
case I915_GTT_VIEW_REMAPPED:
/* gcc complains if these are identical cases */
break;
}
}
struct i915_gtt_view {
enum i915_gtt_view_type type;
union {
/* Members need to contain no holes/padding */
struct intel_rotation_info rotated;
struct intel_remapped_info remapped;
};
};


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/intel_clock_gating.h"


@ -0,0 +1,11 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __INTEL_GT_TYPES__
#define __INTEL_GT_TYPES__
#define intel_gt_support_legacy_fencing(gt) 0
#endif


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/intel_mchbar_regs.h"


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/intel_pci_config.h"


@ -0,0 +1,42 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __INTEL_PCODE_H__
#define __INTEL_PCODE_H__
#include "intel_uncore.h"
#include "xe_pcode.h"
static inline int
snb_pcode_write_timeout(struct intel_uncore *uncore, u32 mbox, u32 val,
int fast_timeout_us, int slow_timeout_ms)
{
return xe_pcode_write_timeout(__compat_uncore_to_gt(uncore), mbox, val,
slow_timeout_ms ?: 1);
}
static inline int
snb_pcode_write(struct intel_uncore *uncore, u32 mbox, u32 val)
{
return xe_pcode_write(__compat_uncore_to_gt(uncore), mbox, val);
}
static inline int
snb_pcode_read(struct intel_uncore *uncore, u32 mbox, u32 *val, u32 *val1)
{
return xe_pcode_read(__compat_uncore_to_gt(uncore), mbox, val, val1);
}
static inline int
skl_pcode_request(struct intel_uncore *uncore, u32 mbox,
u32 request, u32 reply_mask, u32 reply,
int timeout_base_ms)
{
return xe_pcode_request(__compat_uncore_to_gt(uncore), mbox, request, reply_mask, reply,
timeout_base_ms);
}
#endif /* __INTEL_PCODE_H__ */


@ -0,0 +1,16 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "intel_wakeref.h"
#define intel_runtime_pm xe_runtime_pm
static inline void disable_rpm_wakeref_asserts(void *rpm)
{
}
static inline void enable_rpm_wakeref_asserts(void *rpm)
{
}


@ -0,0 +1,20 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __INTEL_STEP_H__
#define __INTEL_STEP_H__
#include "xe_device_types.h"
#include "xe_step.h"
#define intel_display_step_name xe_display_step_name
static inline
const char *xe_display_step_name(struct xe_device *xe)
{
return xe_step_name(xe->info.step.display);
}
#endif /* __INTEL_STEP_H__ */


@ -0,0 +1,11 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _INTEL_UC_FW_H_
#define _INTEL_UC_FW_H_
#define INTEL_UC_FIRMWARE_URL "https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git"
#endif


@ -0,0 +1,175 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __INTEL_UNCORE_H__
#define __INTEL_UNCORE_H__
#include "xe_device.h"
#include "xe_device_types.h"
#include "xe_mmio.h"
static inline struct xe_gt *__compat_uncore_to_gt(struct intel_uncore *uncore)
{
struct xe_device *xe = container_of(uncore, struct xe_device, uncore);
return xe_root_mmio_gt(xe);
}
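/*
 * Each compat accessor below converts the i915_reg_t to an xe_reg by raw
 * offset and routes the access through the root GT's MMIO helpers.
 */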
static inline u32 intel_uncore_read(struct intel_uncore *uncore,
i915_reg_t i915_reg)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_read32(__compat_uncore_to_gt(uncore), reg);
}
static inline u32 intel_uncore_read8(struct intel_uncore *uncore,
i915_reg_t i915_reg)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_read8(__compat_uncore_to_gt(uncore), reg);
}
static inline u32 intel_uncore_read16(struct intel_uncore *uncore,
i915_reg_t i915_reg)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_read16(__compat_uncore_to_gt(uncore), reg);
}
static inline u64
intel_uncore_read64_2x32(struct intel_uncore *uncore,
i915_reg_t i915_lower_reg, i915_reg_t i915_upper_reg)
{
struct xe_reg lower_reg = XE_REG(i915_mmio_reg_offset(i915_lower_reg));
struct xe_reg upper_reg = XE_REG(i915_mmio_reg_offset(i915_upper_reg));
u32 upper, lower, old_upper;
int loop = 0;
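	/*
	 * Read upper, then lower, then upper again: if the upper half
	 * changed in between, the 64-bit value rolled over mid-read and
	 * the pair must be sampled again.
	 */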
upper = xe_mmio_read32(__compat_uncore_to_gt(uncore), upper_reg);
do {
old_upper = upper;
lower = xe_mmio_read32(__compat_uncore_to_gt(uncore), lower_reg);
upper = xe_mmio_read32(__compat_uncore_to_gt(uncore), upper_reg);
} while (upper != old_upper && loop++ < 2);
return (u64)upper << 32 | lower;
}
static inline void intel_uncore_posting_read(struct intel_uncore *uncore,
i915_reg_t i915_reg)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
xe_mmio_read32(__compat_uncore_to_gt(uncore), reg);
}
static inline void intel_uncore_write(struct intel_uncore *uncore,
i915_reg_t i915_reg, u32 val)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
xe_mmio_write32(__compat_uncore_to_gt(uncore), reg, val);
}
static inline u32 intel_uncore_rmw(struct intel_uncore *uncore,
i915_reg_t i915_reg, u32 clear, u32 set)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_rmw32(__compat_uncore_to_gt(uncore), reg, clear, set);
}
static inline int intel_wait_for_register(struct intel_uncore *uncore,
i915_reg_t i915_reg, u32 mask,
u32 value, unsigned int timeout)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_wait32(__compat_uncore_to_gt(uncore), reg, mask, value,
timeout * USEC_PER_MSEC, NULL, false);
}
static inline int intel_wait_for_register_fw(struct intel_uncore *uncore,
i915_reg_t i915_reg, u32 mask,
u32 value, unsigned int timeout)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_wait32(__compat_uncore_to_gt(uncore), reg, mask, value,
timeout * USEC_PER_MSEC, NULL, false);
}
static inline int
__intel_wait_for_register(struct intel_uncore *uncore, i915_reg_t i915_reg,
u32 mask, u32 value, unsigned int fast_timeout_us,
unsigned int slow_timeout_ms, u32 *out_value)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_wait32(__compat_uncore_to_gt(uncore), reg, mask, value,
fast_timeout_us + 1000 * slow_timeout_ms,
out_value, false);
}
static inline u32 intel_uncore_read_fw(struct intel_uncore *uncore,
i915_reg_t i915_reg)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_read32(__compat_uncore_to_gt(uncore), reg);
}
static inline void intel_uncore_write_fw(struct intel_uncore *uncore,
i915_reg_t i915_reg, u32 val)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
xe_mmio_write32(__compat_uncore_to_gt(uncore), reg, val);
}
static inline u32 intel_uncore_read_notrace(struct intel_uncore *uncore,
i915_reg_t i915_reg)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
return xe_mmio_read32(__compat_uncore_to_gt(uncore), reg);
}
static inline void intel_uncore_write_notrace(struct intel_uncore *uncore,
i915_reg_t i915_reg, u32 val)
{
struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
xe_mmio_write32(__compat_uncore_to_gt(uncore), reg, val);
}
static inline void __iomem *intel_uncore_regs(struct intel_uncore *uncore)
{
struct xe_device *xe = container_of(uncore, struct xe_device, uncore);
return xe_device_get_root_tile(xe)->mmio.regs;
}
/*
* The raw_reg_{read,write} macros are intended as a micro-optimization for
* interrupt handlers so that the pointer indirection on uncore->regs can
* be computed once (and presumably cached in a register) instead of generating
* extra load instructions for each MMIO access.
*
* Given that these macros are only intended for non-GSI interrupt registers
* (and the goal is to avoid extra instructions generated by the compiler),
* these macros do not account for uncore->gsi_offset. Any caller that needs
* to use these macros on a GSI register is responsible for adding the
* appropriate GSI offset to the 'base' parameter.
*/
#define raw_reg_read(base, reg) \
readl(base + i915_mmio_reg_offset(reg))
#define raw_reg_write(base, reg, value) \
writel(value, base + i915_mmio_reg_offset(reg))
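/*
 * Usage sketch (illustrative only): hoist the register base once, then use
 * the raw accessors on the hot path. The function shape is hypothetical.
 */
static inline u32 raw_reg_toggle_example(void __iomem *base, i915_reg_t reg,
					 u32 bits)
{
	u32 val = raw_reg_read(base, reg) ^ bits;

	raw_reg_write(base, reg, val);
	return val;
}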
#endif /* __INTEL_UNCORE_H__ */


@ -0,0 +1,8 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include <linux/types.h>
typedef bool intel_wakeref_t;


@ -0,0 +1,28 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __INTEL_PXP_H__
#define __INTEL_PXP_H__
#include <linux/errno.h>
#include <linux/types.h>
struct drm_i915_gem_object;
struct intel_pxp;
static inline int intel_pxp_key_check(struct intel_pxp *pxp,
struct drm_i915_gem_object *obj,
bool assign)
{
return -ENODEV;
}
static inline bool
i915_gem_object_is_protected(const struct drm_i915_gem_object *obj)
{
return false;
}
#endif


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../../i915/soc/intel_dram.h"


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../../i915/soc/intel_gmch.h"


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../../i915/soc/intel_pch.h"


@ -0,0 +1,132 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2013-2021 Intel Corporation
*/
#ifndef _VLV_SIDEBAND_H_
#define _VLV_SIDEBAND_H_
#include <linux/types.h>
#include "vlv_sideband_reg.h"
enum pipe;
struct drm_i915_private;
enum {
VLV_IOSF_SB_BUNIT,
VLV_IOSF_SB_CCK,
VLV_IOSF_SB_CCU,
VLV_IOSF_SB_DPIO,
VLV_IOSF_SB_FLISDSI,
VLV_IOSF_SB_GPIO,
VLV_IOSF_SB_NC,
VLV_IOSF_SB_PUNIT,
};
static inline void vlv_iosf_sb_get(struct drm_i915_private *i915, unsigned long ports)
{
}
static inline u32 vlv_iosf_sb_read(struct drm_i915_private *i915, u8 port, u32 reg)
{
return 0;
}
static inline void vlv_iosf_sb_write(struct drm_i915_private *i915,
u8 port, u32 reg, u32 val)
{
}
static inline void vlv_iosf_sb_put(struct drm_i915_private *i915, unsigned long ports)
{
}
static inline void vlv_bunit_get(struct drm_i915_private *i915)
{
}
static inline u32 vlv_bunit_read(struct drm_i915_private *i915, u32 reg)
{
return 0;
}
static inline void vlv_bunit_write(struct drm_i915_private *i915, u32 reg, u32 val)
{
}
static inline void vlv_bunit_put(struct drm_i915_private *i915)
{
}
static inline void vlv_cck_get(struct drm_i915_private *i915)
{
}
static inline u32 vlv_cck_read(struct drm_i915_private *i915, u32 reg)
{
return 0;
}
static inline void vlv_cck_write(struct drm_i915_private *i915, u32 reg, u32 val)
{
}
static inline void vlv_cck_put(struct drm_i915_private *i915)
{
}
static inline void vlv_ccu_get(struct drm_i915_private *i915)
{
}
static inline u32 vlv_ccu_read(struct drm_i915_private *i915, u32 reg)
{
return 0;
}
static inline void vlv_ccu_write(struct drm_i915_private *i915, u32 reg, u32 val)
{
}
static inline void vlv_ccu_put(struct drm_i915_private *i915)
{
}
static inline void vlv_dpio_get(struct drm_i915_private *i915)
{
}
static inline u32 vlv_dpio_read(struct drm_i915_private *i915, int pipe, int reg)
{
return 0;
}
static inline void vlv_dpio_write(struct drm_i915_private *i915,
int pipe, int reg, u32 val)
{
}
static inline void vlv_dpio_put(struct drm_i915_private *i915)
{
}
static inline void vlv_flisdsi_get(struct drm_i915_private *i915)
{
}
static inline u32 vlv_flisdsi_read(struct drm_i915_private *i915, u32 reg)
{
return 0;
}
static inline void vlv_flisdsi_write(struct drm_i915_private *i915, u32 reg, u32 val)
{
}
static inline void vlv_flisdsi_put(struct drm_i915_private *i915)
{
}
static inline void vlv_nc_get(struct drm_i915_private *i915)
{
}
static inline u32 vlv_nc_read(struct drm_i915_private *i915, u8 addr)
{
return 0;
}
static inline void vlv_nc_put(struct drm_i915_private *i915)
{
}
static inline void vlv_punit_get(struct drm_i915_private *i915)
{
}
static inline u32 vlv_punit_read(struct drm_i915_private *i915, u32 addr)
{
return 0;
}
static inline int vlv_punit_write(struct drm_i915_private *i915, u32 addr, u32 val)
{
return 0;
}
static inline void vlv_punit_put(struct drm_i915_private *i915)
{
}
#endif /* _VLV_SIDEBAND_H_ */


@ -0,0 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "../../i915/vlv_sideband_reg.h"


@ -0,0 +1,77 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2023 Intel Corporation
*/
#include "i915_drv.h"
#include "i915_irq.h"
#include "i915_reg.h"
#include "intel_uncore.h"
void gen3_irq_reset(struct intel_uncore *uncore, i915_reg_t imr,
i915_reg_t iir, i915_reg_t ier)
{
intel_uncore_write(uncore, imr, 0xffffffff);
intel_uncore_posting_read(uncore, imr);
intel_uncore_write(uncore, ier, 0);
/* IIR can theoretically queue up two events. Be paranoid. */
intel_uncore_write(uncore, iir, 0xffffffff);
intel_uncore_posting_read(uncore, iir);
intel_uncore_write(uncore, iir, 0xffffffff);
intel_uncore_posting_read(uncore, iir);
}
/*
* We should clear IMR at preinstall/uninstall, and just check at postinstall.
*/
void gen3_assert_iir_is_zero(struct intel_uncore *uncore, i915_reg_t reg)
{
struct xe_device *xe = container_of(uncore, struct xe_device, uncore);
u32 val = intel_uncore_read(uncore, reg);
if (val == 0)
return;
drm_WARN(&xe->drm, 1,
"Interrupt register 0x%x is not zero: 0x%08x\n",
i915_mmio_reg_offset(reg), val);
intel_uncore_write(uncore, reg, 0xffffffff);
intel_uncore_posting_read(uncore, reg);
intel_uncore_write(uncore, reg, 0xffffffff);
intel_uncore_posting_read(uncore, reg);
}
void gen3_irq_init(struct intel_uncore *uncore,
i915_reg_t imr, u32 imr_val,
i915_reg_t ier, u32 ier_val,
i915_reg_t iir)
{
gen3_assert_iir_is_zero(uncore, iir);
intel_uncore_write(uncore, ier, ier_val);
intel_uncore_write(uncore, imr, imr_val);
intel_uncore_posting_read(uncore, imr);
}
bool intel_irqs_enabled(struct xe_device *xe)
{
/*
	 * XXX: i915 handles irq.enabled racily, since it doesn't lock its
	 * transitions. Because of that, irq.enabled is sometimes read
	 * without the irq.lock held.
	 * However, the most critical cases, like vblank and page flips,
	 * do use the locks properly.
	 * We cannot take the lock in here or run any kind of assert because
	 * of the i915 inconsistency.
	 * But at this point the xe irq is better protected against races,
	 * although the full solution would be protecting the i915 side.
*/
return xe->irq.enabled;
}
void intel_synchronize_irq(struct xe_device *xe)
{
synchronize_irq(to_pci_dev(xe->drm.dev)->irq);
}


@ -0,0 +1,26 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2023 Intel Corporation
*/
#include "i915_drv.h"
bool i915_vtd_active(struct drm_i915_private *i915)
{
if (device_iommu_mapped(i915->drm.dev))
return true;
	/* Running as a guest, we assume the host is enforcing VT-d */
return i915_run_as_guest();
}
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG)
/* i915 specific, just put here for shutting it up */
int __i915_inject_probe_error(struct drm_i915_private *i915, int err,
const char *func, int line)
{
return 0;
}
#endif


@ -0,0 +1,74 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2021 Intel Corporation
*/
#include <drm/drm_modeset_helper.h>
#include "i915_drv.h"
#include "intel_display_types.h"
#include "intel_fb_bo.h"
void intel_fb_bo_framebuffer_fini(struct xe_bo *bo)
{
if (bo->flags & XE_BO_CREATE_PINNED_BIT) {
/* Unpin our kernel fb first */
xe_bo_lock(bo, false);
xe_bo_unpin(bo);
xe_bo_unlock(bo);
}
xe_bo_put(bo);
}
int intel_fb_bo_framebuffer_init(struct intel_framebuffer *intel_fb,
struct xe_bo *bo,
struct drm_mode_fb_cmd2 *mode_cmd)
{
struct drm_i915_private *i915 = to_i915(bo->ttm.base.dev);
int ret;
xe_bo_get(bo);
ret = ttm_bo_reserve(&bo->ttm, true, false, NULL);
if (ret)
return ret;
if (!(bo->flags & XE_BO_SCANOUT_BIT)) {
/*
		 * XE_BO_SCANOUT_BIT should ideally be set at creation, or is
		 * automatically set when creating an FB. We cannot change the
		 * caching mode while the object is VM_BINDed, so we can only
		 * set coherency with the display when it is unbound.
*/
if (XE_IOCTL_DBG(i915, !list_empty(&bo->ttm.base.gpuva.list))) {
ttm_bo_unreserve(&bo->ttm);
return -EINVAL;
}
bo->flags |= XE_BO_SCANOUT_BIT;
}
ttm_bo_unreserve(&bo->ttm);
return ret;
}
struct xe_bo *intel_fb_bo_lookup_valid_bo(struct drm_i915_private *i915,
struct drm_file *filp,
const struct drm_mode_fb_cmd2 *mode_cmd)
{
struct drm_i915_gem_object *bo;
struct drm_gem_object *gem = drm_gem_object_lookup(filp, mode_cmd->handles[0]);
if (!gem)
return ERR_PTR(-ENOENT);
bo = gem_to_xe_bo(gem);
/* Require vram placement or dma-buf import */
if (IS_DGFX(i915) &&
!xe_bo_can_migrate(gem_to_xe_bo(gem), XE_PL_VRAM0) &&
bo->ttm.type != ttm_bo_type_sg) {
drm_gem_object_put(gem);
return ERR_PTR(-EREMOTE);
}
return bo;
}


@ -0,0 +1,24 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2021 Intel Corporation
*/
#ifndef __INTEL_FB_BO_H__
#define __INTEL_FB_BO_H__
struct drm_file;
struct drm_mode_fb_cmd2;
struct drm_i915_private;
struct intel_framebuffer;
struct xe_bo;
void intel_fb_bo_framebuffer_fini(struct xe_bo *bo);
int intel_fb_bo_framebuffer_init(struct intel_framebuffer *intel_fb,
struct xe_bo *bo,
struct drm_mode_fb_cmd2 *mode_cmd);
struct xe_bo *intel_fb_bo_lookup_valid_bo(struct drm_i915_private *i915,
struct drm_file *filp,
const struct drm_mode_fb_cmd2 *mode_cmd);
#endif


@ -0,0 +1,104 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#include "intel_fbdev_fb.h"
#include <drm/drm_fb_helper.h>
#include "xe_gt.h"
#include "xe_ttm_stolen_mgr.h"
#include "i915_drv.h"
#include "intel_display_types.h"
struct drm_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
struct drm_framebuffer *fb;
struct drm_device *dev = helper->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_mode_fb_cmd2 mode_cmd = {};
struct drm_i915_gem_object *obj;
int size;
/* we don't do packed 24bpp */
if (sizes->surface_bpp == 24)
sizes->surface_bpp = 32;
mode_cmd.width = sizes->surface_width;
mode_cmd.height = sizes->surface_height;
mode_cmd.pitches[0] = ALIGN(mode_cmd.width *
DIV_ROUND_UP(sizes->surface_bpp, 8), XE_PAGE_SIZE);
mode_cmd.pixel_format = drm_mode_legacy_fb_format(sizes->surface_bpp,
sizes->surface_depth);
size = mode_cmd.pitches[0] * mode_cmd.height;
size = PAGE_ALIGN(size);
obj = ERR_PTR(-ENODEV);
if (!IS_DGFX(dev_priv)) {
obj = xe_bo_create_pin_map(dev_priv, xe_device_get_root_tile(dev_priv),
NULL, size,
ttm_bo_type_kernel, XE_BO_SCANOUT_BIT |
XE_BO_CREATE_STOLEN_BIT |
XE_BO_CREATE_PINNED_BIT);
if (!IS_ERR(obj))
drm_info(&dev_priv->drm, "Allocated fbdev into stolen\n");
else
			drm_info(&dev_priv->drm, "Allocating fbdev into stolen failed: %li\n", PTR_ERR(obj));
}
if (IS_ERR(obj)) {
obj = xe_bo_create_pin_map(dev_priv, xe_device_get_root_tile(dev_priv), NULL, size,
ttm_bo_type_kernel, XE_BO_SCANOUT_BIT |
XE_BO_CREATE_VRAM_IF_DGFX(xe_device_get_root_tile(dev_priv)) |
XE_BO_CREATE_PINNED_BIT);
}
if (IS_ERR(obj)) {
drm_err(&dev_priv->drm, "failed to allocate framebuffer (%pe)\n", obj);
fb = ERR_PTR(-ENOMEM);
goto err;
}
fb = intel_framebuffer_create(obj, &mode_cmd);
if (IS_ERR(fb)) {
xe_bo_unpin_map_no_vm(obj);
goto err;
}
drm_gem_object_put(intel_bo_to_drm_bo(obj));
return fb;
err:
return fb;
}
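/*
* Worked example of the sizing math above, assuming a 4 KiB XE_PAGE_SIZE:
* a 1920x1080 surface at 32 bpp gives
* pitches[0] = ALIGN(1920 * 4, 4096) = 8192 and
* size = PAGE_ALIGN(8192 * 1080) = 8847360, which is already page aligned.
*/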
int intel_fbdev_fb_fill_info(struct drm_i915_private *i915, struct fb_info *info,
struct drm_i915_gem_object *obj, struct i915_vma *vma)
{
struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
if (!(obj->flags & XE_BO_CREATE_SYSTEM_BIT)) {
if (obj->flags & XE_BO_CREATE_STOLEN_BIT)
info->fix.smem_start = xe_ttm_stolen_io_offset(obj, 0);
else
info->fix.smem_start =
pci_resource_start(pdev, 2) +
xe_bo_addr(obj, 0, XE_PAGE_SIZE);
info->fix.smem_len = obj->ttm.base.size;
} else {
/* XXX: Pure fiction, as the BO may not be physically accessible.. */
info->fix.smem_start = 0;
info->fix.smem_len = obj->ttm.base.size;
}
XE_WARN_ON(iosys_map_is_null(&obj->vmap));
info->screen_base = obj->vmap.vaddr_iomem;
info->screen_size = intel_bo_to_drm_bo(obj)->size;
return 0;
}

@@ -0,0 +1,21 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef __INTEL_FBDEV_FB_H__
#define __INTEL_FBDEV_FB_H__
struct drm_fb_helper;
struct drm_fb_helper_surface_size;
struct drm_i915_gem_object;
struct drm_i915_private;
struct fb_info;
struct i915_vma;
struct drm_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes);
int intel_fbdev_fb_fill_info(struct drm_i915_private *i915, struct fb_info *info,
struct drm_i915_gem_object *obj, struct i915_vma *vma);
#endif

@@ -0,0 +1,16 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2023 Intel Corporation
*/
#include "intel_display_types.h"
struct pci_dev;
unsigned int intel_gmch_vga_set_decode(struct pci_dev *pdev, bool enable_decode);
unsigned int intel_gmch_vga_set_decode(struct pci_dev *pdev, bool enable_decode)
{
/* ToDo: Implement the actual handling of vga decode */
return 0;
}

@@ -0,0 +1,17 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2023 Intel Corporation
*/
#include "intel_display_rps.h"
void intel_display_rps_boost_after_vblank(struct drm_crtc *crtc,
struct dma_fence *fence)
{
}
void intel_display_rps_mark_interactive(struct drm_i915_private *i915,
struct intel_atomic_state *state,
bool interactive)
{
}

@@ -0,0 +1,71 @@
// SPDX-License-Identifier: MIT
/*
* Copyright 2023, Intel Corporation.
*/
#include "i915_drv.h"
#include "i915_vma.h"
#include "intel_display_types.h"
#include "intel_dsb_buffer.h"
#include "xe_bo.h"
#include "xe_gt.h"
u32 intel_dsb_buffer_ggtt_offset(struct intel_dsb_buffer *dsb_buf)
{
return xe_bo_ggtt_addr(dsb_buf->vma->bo);
}
void intel_dsb_buffer_write(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val)
{
iosys_map_wr(&dsb_buf->vma->bo->vmap, idx * 4, u32, val);
}
u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx)
{
return iosys_map_rd(&dsb_buf->vma->bo->vmap, idx * 4, u32);
}
void intel_dsb_buffer_memset(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val, size_t size)
{
WARN_ON(idx > (dsb_buf->buf_size - size) / sizeof(*dsb_buf->cmd_buf));
iosys_map_memset(&dsb_buf->vma->bo->vmap, idx * 4, val, size);
}
bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *dsb_buf, size_t size)
{
struct drm_i915_private *i915 = to_i915(crtc->base.dev);
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
vma = kzalloc(sizeof(*vma), GFP_KERNEL);
if (!vma)
return false;
obj = xe_bo_create_pin_map(i915, xe_device_get_root_tile(i915),
NULL, PAGE_ALIGN(size),
ttm_bo_type_kernel,
XE_BO_CREATE_VRAM_IF_DGFX(xe_device_get_root_tile(i915)) |
XE_BO_CREATE_GGTT_BIT);
if (IS_ERR(obj)) {
kfree(vma);
return false;
}
vma->bo = obj;
dsb_buf->vma = vma;
dsb_buf->buf_size = size;
return true;
}
void intel_dsb_buffer_cleanup(struct intel_dsb_buffer *dsb_buf)
{
xe_bo_unpin_map_no_vm(dsb_buf->vma->bo);
kfree(dsb_buf->vma);
}
void intel_dsb_buffer_flush_map(struct intel_dsb_buffer *dsb_buf)
{
/* TODO: add xe specific flush_map() for dsb buffer object. */
}

@@ -0,0 +1,384 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2021 Intel Corporation
*/
#include "i915_drv.h"
#include "intel_display_types.h"
#include "intel_dpt.h"
#include "intel_fb.h"
#include "intel_fb_pin.h"
#include "xe_ggtt.h"
#include "xe_gt.h"
#include <drm/ttm/ttm_bo.h>
static void
write_dpt_rotated(struct xe_bo *bo, struct iosys_map *map, u32 *dpt_ofs, u32 bo_ofs,
u32 width, u32 height, u32 src_stride, u32 dst_stride)
{
struct xe_device *xe = xe_bo_device(bo);
struct xe_ggtt *ggtt = xe_device_get_root_tile(xe)->mem.ggtt;
u32 column, row;
/* TODO: Maybe rewrite so we can traverse the bo addresses sequentially,
* by writing dpt/ggtt in a different order?
*/
for (column = 0; column < width; column++) {
u32 src_idx = src_stride * (height - 1) + column + bo_ofs;
for (row = 0; row < height; row++) {
u64 pte = ggtt->pt_ops->pte_encode_bo(bo, src_idx * XE_PAGE_SIZE,
xe->pat.idx[XE_CACHE_WB]);
iosys_map_wr(map, *dpt_ofs, u64, pte);
*dpt_ofs += 8;
src_idx -= src_stride;
}
/* The DE ignores the PTEs for the padding tiles */
*dpt_ofs += (dst_stride - height) * 8;
}
/* Align to next page */
*dpt_ofs = ALIGN(*dpt_ofs, 4096);
}
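/*
* Worked example of the rotated walk: for a 2x2 view with src_stride = 2
* and bo_ofs = 0, PTEs are written for bo pages in the order 2, 0, 3, 1 -
* each display column is filled from the bottom source row upwards.
*/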
static void
write_dpt_remapped(struct xe_bo *bo, struct iosys_map *map, u32 *dpt_ofs,
u32 bo_ofs, u32 width, u32 height, u32 src_stride,
u32 dst_stride)
{
struct xe_device *xe = xe_bo_device(bo);
struct xe_ggtt *ggtt = xe_device_get_root_tile(xe)->mem.ggtt;
u64 (*pte_encode_bo)(struct xe_bo *bo, u64 bo_offset, u16 pat_index)
= ggtt->pt_ops->pte_encode_bo;
u32 column, row;
for (row = 0; row < height; row++) {
u32 src_idx = src_stride * row + bo_ofs;
for (column = 0; column < width; column++) {
iosys_map_wr(map, *dpt_ofs, u64,
pte_encode_bo(bo, src_idx * XE_PAGE_SIZE,
xe->pat.idx[XE_CACHE_WB]));
*dpt_ofs += 8;
src_idx++;
}
/* The DE ignores the PTEs for the padding tiles */
*dpt_ofs += (dst_stride - width) * 8;
}
/* Align to next page */
*dpt_ofs = ALIGN(*dpt_ofs, 4096);
}
static int __xe_pin_fb_vma_dpt(struct intel_framebuffer *fb,
const struct i915_gtt_view *view,
struct i915_vma *vma)
{
struct xe_device *xe = to_xe_device(fb->base.dev);
struct xe_tile *tile0 = xe_device_get_root_tile(xe);
struct xe_ggtt *ggtt = tile0->mem.ggtt;
struct xe_bo *bo = intel_fb_obj(&fb->base), *dpt;
u32 dpt_size, size = bo->ttm.base.size;
if (view->type == I915_GTT_VIEW_NORMAL)
dpt_size = ALIGN(size / XE_PAGE_SIZE * 8, XE_PAGE_SIZE);
else if (view->type == I915_GTT_VIEW_REMAPPED)
dpt_size = ALIGN(intel_remapped_info_size(&fb->remapped_view.gtt.remapped) * 8,
XE_PAGE_SIZE);
else
/* display uses 4K tiles instead of bytes here, convert to entries.. */
dpt_size = ALIGN(intel_rotation_info_size(&view->rotated) * 8,
XE_PAGE_SIZE);
if (IS_DGFX(xe))
dpt = xe_bo_create_pin_map(xe, tile0, NULL, dpt_size,
ttm_bo_type_kernel,
XE_BO_CREATE_VRAM0_BIT |
XE_BO_CREATE_GGTT_BIT);
else
dpt = xe_bo_create_pin_map(xe, tile0, NULL, dpt_size,
ttm_bo_type_kernel,
XE_BO_CREATE_STOLEN_BIT |
XE_BO_CREATE_GGTT_BIT);
if (IS_ERR(dpt))
dpt = xe_bo_create_pin_map(xe, tile0, NULL, dpt_size,
ttm_bo_type_kernel,
XE_BO_CREATE_SYSTEM_BIT |
XE_BO_CREATE_GGTT_BIT);
if (IS_ERR(dpt))
return PTR_ERR(dpt);
if (view->type == I915_GTT_VIEW_NORMAL) {
u32 x;
for (x = 0; x < size / XE_PAGE_SIZE; x++) {
u64 pte = ggtt->pt_ops->pte_encode_bo(bo, x * XE_PAGE_SIZE,
xe->pat.idx[XE_CACHE_WB]);
iosys_map_wr(&dpt->vmap, x * 8, u64, pte);
}
} else if (view->type == I915_GTT_VIEW_REMAPPED) {
const struct intel_remapped_info *remap_info = &view->remapped;
u32 i, dpt_ofs = 0;
for (i = 0; i < ARRAY_SIZE(remap_info->plane); i++)
write_dpt_remapped(bo, &dpt->vmap, &dpt_ofs,
remap_info->plane[i].offset,
remap_info->plane[i].width,
remap_info->plane[i].height,
remap_info->plane[i].src_stride,
remap_info->plane[i].dst_stride);
} else {
const struct intel_rotation_info *rot_info = &view->rotated;
u32 i, dpt_ofs = 0;
for (i = 0; i < ARRAY_SIZE(rot_info->plane); i++)
write_dpt_rotated(bo, &dpt->vmap, &dpt_ofs,
rot_info->plane[i].offset,
rot_info->plane[i].width,
rot_info->plane[i].height,
rot_info->plane[i].src_stride,
rot_info->plane[i].dst_stride);
}
vma->dpt = dpt;
vma->node = dpt->ggtt_node;
return 0;
}
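/*
* Worked example, assuming a 4 KiB XE_PAGE_SIZE: a 16 MiB framebuffer in
* the I915_GTT_VIEW_NORMAL case needs 16 MiB / 4 KiB = 4096 PTEs of
* 8 bytes each, so dpt_size = ALIGN(32768, XE_PAGE_SIZE) = 32 KiB.
*/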
static void
write_ggtt_rotated(struct xe_bo *bo, struct xe_ggtt *ggtt, u32 *ggtt_ofs, u32 bo_ofs,
u32 width, u32 height, u32 src_stride, u32 dst_stride)
{
struct xe_device *xe = xe_bo_device(bo);
u32 column, row;
for (column = 0; column < width; column++) {
u32 src_idx = src_stride * (height - 1) + column + bo_ofs;
for (row = 0; row < height; row++) {
u64 pte = ggtt->pt_ops->pte_encode_bo(bo, src_idx * XE_PAGE_SIZE,
xe->pat.idx[XE_CACHE_WB]);
xe_ggtt_set_pte(ggtt, *ggtt_ofs, pte);
*ggtt_ofs += XE_PAGE_SIZE;
src_idx -= src_stride;
}
/* The DE ignores the PTEs for the padding tiles */
*ggtt_ofs += (dst_stride - height) * XE_PAGE_SIZE;
}
}
static int __xe_pin_fb_vma_ggtt(struct intel_framebuffer *fb,
const struct i915_gtt_view *view,
struct i915_vma *vma)
{
struct xe_bo *bo = intel_fb_obj(&fb->base);
struct xe_device *xe = to_xe_device(fb->base.dev);
struct xe_ggtt *ggtt = xe_device_get_root_tile(xe)->mem.ggtt;
u32 align;
int ret;
/* TODO: Consider sharing framebuffer mapping?
* embed i915_vma inside intel_framebuffer
*/
xe_device_mem_access_get(tile_to_xe(ggtt->tile));
ret = mutex_lock_interruptible(&ggtt->lock);
if (ret)
goto out;
align = XE_PAGE_SIZE;
if (xe_bo_is_vram(bo) && ggtt->flags & XE_GGTT_FLAGS_64K)
align = max_t(u32, align, SZ_64K);
if (bo->ggtt_node.size && view->type == I915_GTT_VIEW_NORMAL) {
vma->node = bo->ggtt_node;
} else if (view->type == I915_GTT_VIEW_NORMAL) {
u32 x, size = bo->ttm.base.size;
ret = xe_ggtt_insert_special_node_locked(ggtt, &vma->node, size,
align, 0);
if (ret)
goto out_unlock;
for (x = 0; x < size; x += XE_PAGE_SIZE) {
u64 pte = ggtt->pt_ops->pte_encode_bo(bo, x,
xe->pat.idx[XE_CACHE_WB]);
xe_ggtt_set_pte(ggtt, vma->node.start + x, pte);
}
} else {
u32 i, ggtt_ofs;
const struct intel_rotation_info *rot_info = &view->rotated;
/* display seems to use tiles instead of bytes here, so convert it back.. */
u32 size = intel_rotation_info_size(rot_info) * XE_PAGE_SIZE;
ret = xe_ggtt_insert_special_node_locked(ggtt, &vma->node, size,
align, 0);
if (ret)
goto out_unlock;
ggtt_ofs = vma->node.start;
for (i = 0; i < ARRAY_SIZE(rot_info->plane); i++)
write_ggtt_rotated(bo, ggtt, &ggtt_ofs,
rot_info->plane[i].offset,
rot_info->plane[i].width,
rot_info->plane[i].height,
rot_info->plane[i].src_stride,
rot_info->plane[i].dst_stride);
}
xe_ggtt_invalidate(ggtt);
out_unlock:
mutex_unlock(&ggtt->lock);
out:
xe_device_mem_access_put(tile_to_xe(ggtt->tile));
return ret;
}
static struct i915_vma *__xe_pin_fb_vma(struct intel_framebuffer *fb,
const struct i915_gtt_view *view)
{
struct drm_device *dev = fb->base.dev;
struct xe_device *xe = to_xe_device(dev);
struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
struct xe_bo *bo = intel_fb_obj(&fb->base);
int ret;
if (!vma)
return ERR_PTR(-ENODEV);
if (IS_DGFX(to_xe_device(bo->ttm.base.dev)) &&
intel_fb_rc_ccs_cc_plane(&fb->base) >= 0 &&
!(bo->flags & XE_BO_NEEDS_CPU_ACCESS)) {
struct xe_tile *tile = xe_device_get_root_tile(xe);
/*
* If we need to be able to access the clear-color value stored in
* the buffer, then we require that such buffers are also CPU
* accessible. This is important on small-bar systems where
* only some subset of VRAM is CPU accessible.
*/
if (tile->mem.vram.io_size < tile->mem.vram.usable_size) {
ret = -EINVAL;
goto err;
}
}
/*
* Pin the framebuffer, we can't use xe_bo_(un)pin functions as the
* assumptions are incorrect for framebuffers
*/
ret = ttm_bo_reserve(&bo->ttm, false, false, NULL);
if (ret)
goto err;
if (IS_DGFX(xe))
ret = xe_bo_migrate(bo, XE_PL_VRAM0);
else
ret = xe_bo_validate(bo, NULL, true);
if (!ret)
ttm_bo_pin(&bo->ttm);
ttm_bo_unreserve(&bo->ttm);
if (ret)
goto err;
vma->bo = bo;
if (intel_fb_uses_dpt(&fb->base))
ret = __xe_pin_fb_vma_dpt(fb, view, vma);
else
ret = __xe_pin_fb_vma_ggtt(fb, view, vma);
if (ret)
goto err_unpin;
return vma;
err_unpin:
ttm_bo_reserve(&bo->ttm, false, false, NULL);
ttm_bo_unpin(&bo->ttm);
ttm_bo_unreserve(&bo->ttm);
err:
kfree(vma);
return ERR_PTR(ret);
}
static void __xe_unpin_fb_vma(struct i915_vma *vma)
{
struct xe_device *xe = to_xe_device(vma->bo->ttm.base.dev);
struct xe_ggtt *ggtt = xe_device_get_root_tile(xe)->mem.ggtt;
if (vma->dpt)
xe_bo_unpin_map_no_vm(vma->dpt);
else if (!drm_mm_node_allocated(&vma->bo->ggtt_node) ||
vma->bo->ggtt_node.start != vma->node.start)
xe_ggtt_remove_node(ggtt, &vma->node);
ttm_bo_reserve(&vma->bo->ttm, false, false, NULL);
ttm_bo_unpin(&vma->bo->ttm);
ttm_bo_unreserve(&vma->bo->ttm);
kfree(vma);
}
struct i915_vma *
intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
bool phys_cursor,
const struct i915_gtt_view *view,
bool uses_fence,
unsigned long *out_flags)
{
*out_flags = 0;
return __xe_pin_fb_vma(to_intel_framebuffer(fb), view);
}
void intel_unpin_fb_vma(struct i915_vma *vma, unsigned long flags)
{
__xe_unpin_fb_vma(vma);
}
int intel_plane_pin_fb(struct intel_plane_state *plane_state)
{
struct drm_framebuffer *fb = plane_state->hw.fb;
struct xe_bo *bo = intel_fb_obj(fb);
struct i915_vma *vma;
/* We reject creating !SCANOUT fb's, so this is weird.. */
drm_WARN_ON(bo->ttm.base.dev, !(bo->flags & XE_BO_SCANOUT_BIT));
vma = __xe_pin_fb_vma(to_intel_framebuffer(fb), &plane_state->view.gtt);
if (IS_ERR(vma))
return PTR_ERR(vma);
plane_state->ggtt_vma = vma;
return 0;
}
void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state)
{
__xe_unpin_fb_vma(old_plane_state->ggtt_vma);
old_plane_state->ggtt_vma = NULL;
}
/*
* For Xe, introduce a dummy intel_dpt_create() which just returns NULL and
* an intel_dpt_destroy() which does nothing.
*/
struct i915_address_space *intel_dpt_create(struct intel_framebuffer *fb)
{
return NULL;
}
void intel_dpt_destroy(struct i915_address_space *vm)
{
return;
}

@@ -0,0 +1,34 @@
// SPDX-License-Identifier: MIT
/*
* Copyright 2023, Intel Corporation.
*/
#include "i915_drv.h"
#include "intel_hdcp_gsc.h"
bool intel_hdcp_gsc_cs_required(struct drm_i915_private *i915)
{
return true;
}
bool intel_hdcp_gsc_check_status(struct drm_i915_private *i915)
{
return false;
}
int intel_hdcp_gsc_init(struct drm_i915_private *i915)
{
drm_info(&i915->drm, "HDCP support not yet implemented\n");
return -ENODEV;
}
void intel_hdcp_gsc_fini(struct drm_i915_private *i915)
{
}
ssize_t intel_hdcp_gsc_msg_send(struct drm_i915_private *i915, u8 *msg_in,
size_t msg_in_len, u8 *msg_out,
size_t msg_out_len)
{
return -ENODEV;
}

@@ -0,0 +1,291 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2021 Intel Corporation
*/
/* for ioread64 */
#include <linux/io-64-nonatomic-lo-hi.h>
#include "xe_ggtt.h"
#include "i915_drv.h"
#include "intel_atomic_plane.h"
#include "intel_display.h"
#include "intel_display_types.h"
#include "intel_fb.h"
#include "intel_fb_pin.h"
#include "intel_frontbuffer.h"
#include "intel_plane_initial.h"
static bool
intel_reuse_initial_plane_obj(struct drm_i915_private *i915,
const struct intel_initial_plane_config *plane_config,
struct drm_framebuffer **fb)
{
struct intel_crtc *crtc;
for_each_intel_crtc(&i915->drm, crtc) {
struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);
struct intel_plane *plane =
to_intel_plane(crtc->base.primary);
struct intel_plane_state *plane_state =
to_intel_plane_state(plane->base.state);
if (!crtc_state->uapi.active)
continue;
if (!plane_state->ggtt_vma)
continue;
if (intel_plane_ggtt_offset(plane_state) == plane_config->base) {
*fb = plane_state->hw.fb;
return true;
}
}
return false;
}
static struct xe_bo *
initial_plane_bo(struct xe_device *xe,
struct intel_initial_plane_config *plane_config)
{
struct xe_tile *tile0 = xe_device_get_root_tile(xe);
struct xe_bo *bo;
resource_size_t phys_base;
u32 base, size, flags;
u64 page_size = xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K ? SZ_64K : SZ_4K;
if (plane_config->size == 0)
return NULL;
flags = XE_BO_CREATE_PINNED_BIT | XE_BO_SCANOUT_BIT | XE_BO_CREATE_GGTT_BIT;
base = round_down(plane_config->base, page_size);
if (IS_DGFX(xe)) {
u64 __iomem *gte = tile0->mem.ggtt->gsm;
u64 pte;
gte += base / XE_PAGE_SIZE;
pte = ioread64(gte);
if (!(pte & XE_GGTT_PTE_DM)) {
drm_err(&xe->drm,
"Initial plane programming missing DM bit\n");
return NULL;
}
phys_base = pte & ~(page_size - 1);
flags |= XE_BO_CREATE_VRAM0_BIT;
/*
* We don't currently expect this to ever be placed in the
* stolen portion.
*/
if (phys_base >= tile0->mem.vram.usable_size) {
drm_err(&xe->drm,
"Initial plane programming using invalid range, phys_base=%pa\n",
&phys_base);
return NULL;
}
drm_dbg(&xe->drm,
"Using phys_base=%pa, based on initial plane programming\n",
&phys_base);
} else {
struct ttm_resource_manager *stolen = ttm_manager_type(&xe->ttm, XE_PL_STOLEN);
if (!stolen)
return NULL;
phys_base = base;
flags |= XE_BO_CREATE_STOLEN_BIT;
/*
* If the FB is too big, just don't use it since fbdev is not very
* important and we should probably use that space with FBC or other
* features.
*/
if (IS_ENABLED(CONFIG_FRAMEBUFFER_CONSOLE) &&
plane_config->size * 2 >> PAGE_SHIFT >= stolen->size)
return NULL;
}
size = round_up(plane_config->base + plane_config->size,
page_size);
size -= base;
bo = xe_bo_create_pin_map_at(xe, tile0, NULL, size, phys_base,
ttm_bo_type_kernel, flags);
if (IS_ERR(bo)) {
drm_dbg(&xe->drm,
"Failed to create bo phys_base=%pa size %u with flags %x: %li\n",
&phys_base, size, flags, PTR_ERR(bo));
return NULL;
}
return bo;
}
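/*
* Worked example of the rounding above, assuming page_size = SZ_64K: for
* plane_config->base = 0x123000 and size = 0x5000, base rounds down to
* 0x120000 and size rounds up to 0x10000, so the pinned range covers the
* whole BIOS fb even when it straddles 64K pages.
*/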
static bool
intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
struct intel_initial_plane_config *plane_config)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_mode_fb_cmd2 mode_cmd = { 0 };
struct drm_framebuffer *fb = &plane_config->fb->base;
struct xe_bo *bo;
switch (fb->modifier) {
case DRM_FORMAT_MOD_LINEAR:
case I915_FORMAT_MOD_X_TILED:
case I915_FORMAT_MOD_Y_TILED:
case I915_FORMAT_MOD_4_TILED:
break;
default:
drm_dbg(&dev_priv->drm,
"Unsupported modifier for initial FB: 0x%llx\n",
fb->modifier);
return false;
}
mode_cmd.pixel_format = fb->format->format;
mode_cmd.width = fb->width;
mode_cmd.height = fb->height;
mode_cmd.pitches[0] = fb->pitches[0];
mode_cmd.modifier[0] = fb->modifier;
mode_cmd.flags = DRM_MODE_FB_MODIFIERS;
bo = initial_plane_bo(dev_priv, plane_config);
if (!bo)
return false;
if (intel_framebuffer_init(to_intel_framebuffer(fb),
bo, &mode_cmd)) {
drm_dbg_kms(&dev_priv->drm, "intel fb init failed\n");
goto err_bo;
}
/* Reference handed over to fb */
xe_bo_put(bo);
return true;
err_bo:
xe_bo_unpin_map_no_vm(bo);
return false;
}
static void
intel_find_initial_plane_obj(struct intel_crtc *crtc,
struct intel_initial_plane_config *plane_config)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_plane *plane =
to_intel_plane(crtc->base.primary);
struct intel_plane_state *plane_state =
to_intel_plane_state(plane->base.state);
struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);
struct drm_framebuffer *fb;
struct i915_vma *vma;
/*
* TODO:
* Disable planes if get_initial_plane_config() failed.
* Make sure things work if the surface base is not page aligned.
*/
if (!plane_config->fb)
return;
if (intel_alloc_initial_plane_obj(crtc, plane_config))
fb = &plane_config->fb->base;
else if (!intel_reuse_initial_plane_obj(dev_priv, plane_config, &fb))
goto nofb;
plane_state->uapi.rotation = plane_config->rotation;
intel_fb_fill_view(to_intel_framebuffer(fb),
plane_state->uapi.rotation, &plane_state->view);
vma = intel_pin_and_fence_fb_obj(fb, false, &plane_state->view.gtt,
false, &plane_state->flags);
if (IS_ERR(vma))
goto nofb;
plane_state->ggtt_vma = vma;
plane_state->uapi.src_x = 0;
plane_state->uapi.src_y = 0;
plane_state->uapi.src_w = fb->width << 16;
plane_state->uapi.src_h = fb->height << 16;
plane_state->uapi.crtc_x = 0;
plane_state->uapi.crtc_y = 0;
plane_state->uapi.crtc_w = fb->width;
plane_state->uapi.crtc_h = fb->height;
plane_state->uapi.fb = fb;
drm_framebuffer_get(fb);
plane_state->uapi.crtc = &crtc->base;
intel_plane_copy_uapi_to_hw_state(plane_state, plane_state, crtc);
atomic_or(plane->frontbuffer_bit, &to_intel_frontbuffer(fb)->bits);
plane_config->vma = vma;
/*
* Flip to the newly created mapping ASAP, so we can re-use the
* first part of GGTT for WOPCM, prevent flickering, and prevent
* the lookup of sysmem scratch pages.
*/
plane->check_plane(crtc_state, plane_state);
plane->async_flip(plane, crtc_state, plane_state, true);
return;
nofb:
/*
* We've failed to reconstruct the BIOS FB. Current display state
* indicates that the primary plane is visible, but has a NULL FB,
* which will lead to problems later if we don't fix it up. The
* simplest solution is to just disable the primary plane now and
* pretend the BIOS never had it enabled.
*/
intel_plane_disable_noatomic(crtc, plane);
}
static void plane_config_fini(struct intel_initial_plane_config *plane_config)
{
if (plane_config->fb) {
struct drm_framebuffer *fb = &plane_config->fb->base;
/* We may only have the stub and not a full framebuffer */
if (drm_framebuffer_read_refcount(fb))
drm_framebuffer_put(fb);
else
kfree(fb);
}
}
void intel_crtc_initial_plane_config(struct intel_crtc *crtc)
{
struct xe_device *xe = to_xe_device(crtc->base.dev);
struct intel_initial_plane_config plane_config = {};
/*
* Note that reserving the BIOS fb up front prevents us
* from stuffing other stolen allocations like the ring
* on top. This prevents some ugliness at boot time, and
* can even allow for smooth boot transitions if the BIOS
* fb is large enough for the active pipe configuration.
*/
xe->display.funcs.display->get_initial_plane_config(crtc, &plane_config);
/*
* If the fb is shared between multiple heads, we'll
* just get the first one.
*/
intel_find_initial_plane_obj(crtc, &plane_config);
plane_config_fini(&plane_config);
}

@@ -0,0 +1,160 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GFXPIPE_COMMANDS_H_
#define _XE_GFXPIPE_COMMANDS_H_
#include "instructions/xe_instr_defs.h"
#define GFXPIPE_PIPELINE REG_GENMASK(28, 27)
#define PIPELINE_COMMON REG_FIELD_PREP(GFXPIPE_PIPELINE, 0x0)
#define PIPELINE_SINGLE_DW REG_FIELD_PREP(GFXPIPE_PIPELINE, 0x1)
#define PIPELINE_COMPUTE REG_FIELD_PREP(GFXPIPE_PIPELINE, 0x2)
#define PIPELINE_3D REG_FIELD_PREP(GFXPIPE_PIPELINE, 0x3)
#define GFXPIPE_OPCODE REG_GENMASK(26, 24)
#define GFXPIPE_SUBOPCODE REG_GENMASK(23, 16)
#define GFXPIPE_MATCH_MASK (XE_INSTR_CMD_TYPE | \
GFXPIPE_PIPELINE | \
GFXPIPE_OPCODE | \
GFXPIPE_SUBOPCODE)
#define GFXPIPE_COMMON_CMD(opcode, subopcode) \
(XE_INSTR_GFXPIPE | PIPELINE_COMMON | \
REG_FIELD_PREP(GFXPIPE_OPCODE, opcode) | \
REG_FIELD_PREP(GFXPIPE_SUBOPCODE, subopcode))
#define GFXPIPE_SINGLE_DW_CMD(opcode, subopcode) \
(XE_INSTR_GFXPIPE | PIPELINE_SINGLE_DW | \
REG_FIELD_PREP(GFXPIPE_OPCODE, opcode) | \
REG_FIELD_PREP(GFXPIPE_SUBOPCODE, subopcode))
#define GFXPIPE_3D_CMD(opcode, subopcode) \
(XE_INSTR_GFXPIPE | PIPELINE_3D | \
REG_FIELD_PREP(GFXPIPE_OPCODE, opcode) | \
REG_FIELD_PREP(GFXPIPE_SUBOPCODE, subopcode))
#define GFXPIPE_COMPUTE_CMD(opcode, subopcode) \
(XE_INSTR_GFXPIPE | PIPELINE_COMPUTE | \
REG_FIELD_PREP(GFXPIPE_OPCODE, opcode) | \
REG_FIELD_PREP(GFXPIPE_SUBOPCODE, subopcode))
#define STATE_BASE_ADDRESS GFXPIPE_COMMON_CMD(0x1, 0x1)
#define STATE_SIP GFXPIPE_COMMON_CMD(0x1, 0x2)
#define GPGPU_CSR_BASE_ADDRESS GFXPIPE_COMMON_CMD(0x1, 0x4)
#define STATE_COMPUTE_MODE GFXPIPE_COMMON_CMD(0x1, 0x5)
#define CMD_3DSTATE_BTD GFXPIPE_COMMON_CMD(0x1, 0x6)
#define CMD_3DSTATE_VF_STATISTICS GFXPIPE_SINGLE_DW_CMD(0x0, 0xB)
#define PIPELINE_SELECT GFXPIPE_SINGLE_DW_CMD(0x1, 0x4)
#define CMD_3DSTATE_DRAWING_RECTANGLE_FAST GFXPIPE_3D_CMD(0x0, 0x0)
#define CMD_3DSTATE_CLEAR_PARAMS GFXPIPE_3D_CMD(0x0, 0x4)
#define CMD_3DSTATE_DEPTH_BUFFER GFXPIPE_3D_CMD(0x0, 0x5)
#define CMD_3DSTATE_STENCIL_BUFFER GFXPIPE_3D_CMD(0x0, 0x6)
#define CMD_3DSTATE_HIER_DEPTH_BUFFER GFXPIPE_3D_CMD(0x0, 0x7)
#define CMD_3DSTATE_VERTEX_BUFFERS GFXPIPE_3D_CMD(0x0, 0x8)
#define CMD_3DSTATE_VERTEX_ELEMENTS GFXPIPE_3D_CMD(0x0, 0x9)
#define CMD_3DSTATE_INDEX_BUFFER GFXPIPE_3D_CMD(0x0, 0xA)
#define CMD_3DSTATE_VF GFXPIPE_3D_CMD(0x0, 0xC)
#define CMD_3DSTATE_MULTISAMPLE GFXPIPE_3D_CMD(0x0, 0xD)
#define CMD_3DSTATE_CC_STATE_POINTERS GFXPIPE_3D_CMD(0x0, 0xE)
#define CMD_3DSTATE_SCISSOR_STATE_POINTERS GFXPIPE_3D_CMD(0x0, 0xF)
#define CMD_3DSTATE_VS GFXPIPE_3D_CMD(0x0, 0x10)
#define CMD_3DSTATE_GS GFXPIPE_3D_CMD(0x0, 0x11)
#define CMD_3DSTATE_CLIP GFXPIPE_3D_CMD(0x0, 0x12)
#define CMD_3DSTATE_SF GFXPIPE_3D_CMD(0x0, 0x13)
#define CMD_3DSTATE_WM GFXPIPE_3D_CMD(0x0, 0x14)
#define CMD_3DSTATE_CONSTANT_VS GFXPIPE_3D_CMD(0x0, 0x15)
#define CMD_3DSTATE_CONSTANT_GS GFXPIPE_3D_CMD(0x0, 0x16)
#define CMD_3DSTATE_SAMPLE_MASK GFXPIPE_3D_CMD(0x0, 0x18)
#define CMD_3DSTATE_CONSTANT_HS GFXPIPE_3D_CMD(0x0, 0x19)
#define CMD_3DSTATE_CONSTANT_DS GFXPIPE_3D_CMD(0x0, 0x1A)
#define CMD_3DSTATE_HS GFXPIPE_3D_CMD(0x0, 0x1B)
#define CMD_3DSTATE_TE GFXPIPE_3D_CMD(0x0, 0x1C)
#define CMD_3DSTATE_DS GFXPIPE_3D_CMD(0x0, 0x1D)
#define CMD_3DSTATE_STREAMOUT GFXPIPE_3D_CMD(0x0, 0x1E)
#define CMD_3DSTATE_SBE GFXPIPE_3D_CMD(0x0, 0x1F)
#define CMD_3DSTATE_PS GFXPIPE_3D_CMD(0x0, 0x20)
#define CMD_3DSTATE_VIEWPORT_STATE_POINTERS_SF_CLIP GFXPIPE_3D_CMD(0x0, 0x21)
#define CMD_3DSTATE_CPS_POINTERS GFXPIPE_3D_CMD(0x0, 0x22)
#define CMD_3DSTATE_VIEWPORT_STATE_POINTERS_CC GFXPIPE_3D_CMD(0x0, 0x23)
#define CMD_3DSTATE_BLEND_STATE_POINTERS GFXPIPE_3D_CMD(0x0, 0x24)
#define CMD_3DSTATE_BINDING_TABLE_POINTERS_VS GFXPIPE_3D_CMD(0x0, 0x26)
#define CMD_3DSTATE_BINDING_TABLE_POINTERS_HS GFXPIPE_3D_CMD(0x0, 0x27)
#define CMD_3DSTATE_BINDING_TABLE_POINTERS_DS GFXPIPE_3D_CMD(0x0, 0x28)
#define CMD_3DSTATE_BINDING_TABLE_POINTERS_GS GFXPIPE_3D_CMD(0x0, 0x29)
#define CMD_3DSTATE_BINDING_TABLE_POINTERS_PS GFXPIPE_3D_CMD(0x0, 0x2A)
#define CMD_3DSTATE_SAMPLER_STATE_POINTERS_VS GFXPIPE_3D_CMD(0x0, 0x2B)
#define CMD_3DSTATE_SAMPLER_STATE_POINTERS_HS GFXPIPE_3D_CMD(0x0, 0x2C)
#define CMD_3DSTATE_SAMPLER_STATE_POINTERS_DS GFXPIPE_3D_CMD(0x0, 0x2D)
#define CMD_3DSTATE_SAMPLER_STATE_POINTERS_GS GFXPIPE_3D_CMD(0x0, 0x2E)
#define CMD_3DSTATE_SAMPLER_STATE_POINTERS_PS GFXPIPE_3D_CMD(0x0, 0x2F)
#define CMD_3DSTATE_VF_INSTANCING GFXPIPE_3D_CMD(0x0, 0x49)
#define CMD_3DSTATE_VF_SGVS GFXPIPE_3D_CMD(0x0, 0x4A)
#define CMD_3DSTATE_VF_TOPOLOGY GFXPIPE_3D_CMD(0x0, 0x4B)
#define CMD_3DSTATE_WM_CHROMAKEY GFXPIPE_3D_CMD(0x0, 0x4C)
#define CMD_3DSTATE_PS_BLEND GFXPIPE_3D_CMD(0x0, 0x4D)
#define CMD_3DSTATE_WM_DEPTH_STENCIL GFXPIPE_3D_CMD(0x0, 0x4E)
#define CMD_3DSTATE_PS_EXTRA GFXPIPE_3D_CMD(0x0, 0x4F)
#define CMD_3DSTATE_RASTER GFXPIPE_3D_CMD(0x0, 0x50)
#define CMD_3DSTATE_SBE_SWIZ GFXPIPE_3D_CMD(0x0, 0x51)
#define CMD_3DSTATE_WM_HZ_OP GFXPIPE_3D_CMD(0x0, 0x52)
#define CMD_3DSTATE_VF_COMPONENT_PACKING GFXPIPE_3D_CMD(0x0, 0x55)
#define CMD_3DSTATE_VF_SGVS_2 GFXPIPE_3D_CMD(0x0, 0x56)
#define CMD_3DSTATE_VFG GFXPIPE_3D_CMD(0x0, 0x57)
#define CMD_3DSTATE_URB_ALLOC_VS GFXPIPE_3D_CMD(0x0, 0x58)
#define CMD_3DSTATE_URB_ALLOC_HS GFXPIPE_3D_CMD(0x0, 0x59)
#define CMD_3DSTATE_URB_ALLOC_DS GFXPIPE_3D_CMD(0x0, 0x5A)
#define CMD_3DSTATE_URB_ALLOC_GS GFXPIPE_3D_CMD(0x0, 0x5B)
#define CMD_3DSTATE_SO_BUFFER_INDEX_0 GFXPIPE_3D_CMD(0x0, 0x60)
#define CMD_3DSTATE_SO_BUFFER_INDEX_1 GFXPIPE_3D_CMD(0x0, 0x61)
#define CMD_3DSTATE_SO_BUFFER_INDEX_2 GFXPIPE_3D_CMD(0x0, 0x62)
#define CMD_3DSTATE_SO_BUFFER_INDEX_3 GFXPIPE_3D_CMD(0x0, 0x63)
#define CMD_3DSTATE_PRIMITIVE_REPLICATION GFXPIPE_3D_CMD(0x0, 0x6C)
#define CMD_3DSTATE_TBIMR_TILE_PASS_INFO GFXPIPE_3D_CMD(0x0, 0x6E)
#define CMD_3DSTATE_AMFS GFXPIPE_3D_CMD(0x0, 0x6F)
#define CMD_3DSTATE_DEPTH_BOUNDS GFXPIPE_3D_CMD(0x0, 0x71)
#define CMD_3DSTATE_AMFS_TEXTURE_POINTERS GFXPIPE_3D_CMD(0x0, 0x72)
#define CMD_3DSTATE_CONSTANT_TS_POINTER GFXPIPE_3D_CMD(0x0, 0x73)
#define CMD_3DSTATE_MESH_CONTROL GFXPIPE_3D_CMD(0x0, 0x77)
#define CMD_3DSTATE_MESH_DISTRIB GFXPIPE_3D_CMD(0x0, 0x78)
#define CMD_3DSTATE_TASK_REDISTRIB GFXPIPE_3D_CMD(0x0, 0x79)
#define CMD_3DSTATE_MESH_SHADER GFXPIPE_3D_CMD(0x0, 0x7A)
#define CMD_3DSTATE_MESH_SHADER_DATA GFXPIPE_3D_CMD(0x0, 0x7B)
#define CMD_3DSTATE_TASK_CONTROL GFXPIPE_3D_CMD(0x0, 0x7C)
#define CMD_3DSTATE_TASK_SHADER GFXPIPE_3D_CMD(0x0, 0x7D)
#define CMD_3DSTATE_TASK_SHADER_DATA GFXPIPE_3D_CMD(0x0, 0x7E)
#define CMD_3DSTATE_URB_ALLOC_MESH GFXPIPE_3D_CMD(0x0, 0x7F)
#define CMD_3DSTATE_URB_ALLOC_TASK GFXPIPE_3D_CMD(0x0, 0x80)
#define CMD_3DSTATE_CLIP_MESH GFXPIPE_3D_CMD(0x0, 0x81)
#define CMD_3DSTATE_SBE_MESH GFXPIPE_3D_CMD(0x0, 0x82)
#define CMD_3DSTATE_CPSIZE_CONTROL_BUFFER GFXPIPE_3D_CMD(0x0, 0x83)
#define CMD_3DSTATE_DRAWING_RECTANGLE GFXPIPE_3D_CMD(0x1, 0x0)
#define CMD_3DSTATE_CHROMA_KEY GFXPIPE_3D_CMD(0x1, 0x4)
#define CMD_3DSTATE_POLY_STIPPLE_OFFSET GFXPIPE_3D_CMD(0x1, 0x6)
#define CMD_3DSTATE_POLY_STIPPLE_PATTERN GFXPIPE_3D_CMD(0x1, 0x7)
#define CMD_3DSTATE_LINE_STIPPLE GFXPIPE_3D_CMD(0x1, 0x8)
#define CMD_3DSTATE_AA_LINE_PARAMETERS GFXPIPE_3D_CMD(0x1, 0xA)
#define CMD_3DSTATE_MONOFILTER_SIZE GFXPIPE_3D_CMD(0x1, 0x11)
#define CMD_3DSTATE_PUSH_CONSTANT_ALLOC_VS GFXPIPE_3D_CMD(0x1, 0x12)
#define CMD_3DSTATE_PUSH_CONSTANT_ALLOC_HS GFXPIPE_3D_CMD(0x1, 0x13)
#define CMD_3DSTATE_PUSH_CONSTANT_ALLOC_DS GFXPIPE_3D_CMD(0x1, 0x14)
#define CMD_3DSTATE_PUSH_CONSTANT_ALLOC_GS GFXPIPE_3D_CMD(0x1, 0x15)
#define CMD_3DSTATE_PUSH_CONSTANT_ALLOC_PS GFXPIPE_3D_CMD(0x1, 0x16)
#define CMD_3DSTATE_SO_DECL_LIST GFXPIPE_3D_CMD(0x1, 0x17)
#define CMD_3DSTATE_SO_DECL_LIST_DW_LEN REG_GENMASK(8, 0)
#define CMD_3DSTATE_SO_BUFFER GFXPIPE_3D_CMD(0x1, 0x18)
#define CMD_3DSTATE_BINDING_TABLE_POOL_ALLOC GFXPIPE_3D_CMD(0x1, 0x19)
#define CMD_3DSTATE_SAMPLE_PATTERN GFXPIPE_3D_CMD(0x1, 0x1C)
#define CMD_3DSTATE_3D_MODE GFXPIPE_3D_CMD(0x1, 0x1E)
#define CMD_3DSTATE_SUBSLICE_HASH_TABLE GFXPIPE_3D_CMD(0x1, 0x1F)
#define CMD_3DSTATE_SLICE_TABLE_STATE_POINTERS GFXPIPE_3D_CMD(0x1, 0x20)
#define CMD_3DSTATE_PTBR_TILE_PASS_INFO GFXPIPE_3D_CMD(0x1, 0x22)
#endif
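Since GFXPIPE_MATCH_MASK collects the command type, pipeline, opcode and sub-opcode fields, classifying a decoded header dword is a single mask-and-compare; a minimal sketch (hypothetical helper, not part of the patch):

/* true if the instruction header dword is a 3DSTATE_DEPTH_BUFFER */
static bool is_3dstate_depth_buffer(u32 header)
{
return (header & GFXPIPE_MATCH_MASK) == CMD_3DSTATE_DEPTH_BUFFER;
}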

@@ -0,0 +1,36 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GSC_COMMANDS_H_
#define _XE_GSC_COMMANDS_H_
#include "instructions/xe_instr_defs.h"
/*
* All GSCCS-specific commands have fixed length, so we can include it in the
* defines. Note that the generic GSC command header structure includes an
* optional data field in bits 9-21, but there are no commands that actually use
* it; some of the commands are instead defined as having an extended length
* field spanning bits 0-15, even if the extra bits are not required because the
* longest GSCCS command is only 8 dwords. To handle this, the defines below use
* a single field for both data and len. If we ever get a command that does
* actually have data and this approach doesn't work for it, we can re-work it
* at that point.
*/
#define GSC_OPCODE REG_GENMASK(28, 22)
#define GSC_CMD_DATA_AND_LEN REG_GENMASK(21, 0)
#define __GSC_INSTR(op, dl) \
(XE_INSTR_GSC | \
REG_FIELD_PREP(GSC_OPCODE, op) | \
REG_FIELD_PREP(GSC_CMD_DATA_AND_LEN, dl))
#define GSC_HECI_CMD_PKT __GSC_INSTR(0, 6)
#define GSC_FW_LOAD __GSC_INSTR(1, 2)
#define GSC_FW_LOAD_LIMIT_VALID REG_BIT(31)
#endif
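Expanding one of these by hand as a sanity check: with XE_INSTR_GSC placing command type 0x2 in bits 31:29, GSC_FW_LOAD = __GSC_INSTR(1, 2) = (0x2 << 29) | (0x1 << 22) | 0x2 = 0x40400002.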

@@ -0,0 +1,33 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_INSTR_DEFS_H_
#define _XE_INSTR_DEFS_H_
#include "regs/xe_reg_defs.h"
/*
* The first dword of any GPU instruction is the "instruction header." Bits
* 31:29 identify the general type of the command and determine how exact
* opcodes and sub-opcodes will be encoded in the remaining bits.
*/
#define XE_INSTR_CMD_TYPE GENMASK(31, 29)
#define XE_INSTR_MI REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x0)
#define XE_INSTR_GSC REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x2)
#define XE_INSTR_GFXPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
/*
* Most (but not all) instructions have a "length" field in the instruction
* header. The value expected is the total number of dwords for the
* instruction, minus two.
*
* Some instructions have length fields longer or shorter than 8 bits, but
* those are rare. This definition can be used for the common case where
* the length field is from 7:0.
*/
#define XE_INSTR_LEN_MASK GENMASK(7, 0)
#define XE_INSTR_NUM_DW(x) REG_FIELD_PREP(XE_INSTR_LEN_MASK, (x) - 2)
#endif
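A worked check of the length convention: a 5-dword instruction is programmed with XE_INSTR_NUM_DW(5) = REG_FIELD_PREP(XE_INSTR_LEN_MASK, 3), i.e. 0x3 in bits 7:0 of the header.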

@@ -0,0 +1,61 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_MI_COMMANDS_H_
#define _XE_MI_COMMANDS_H_
#include "instructions/xe_instr_defs.h"
/*
* MI (Memory Interface) commands are supported by all GT engines. They
* provide general memory operations and command streamer control. MI commands
* have a command type of 0x0 (MI_COMMAND) in bits 31:29 of the instruction
* header dword and a specific MI opcode in bits 28:23.
*/
#define MI_OPCODE REG_GENMASK(28, 23)
#define MI_SUBOPCODE REG_GENMASK(22, 17) /* used with MI_EXPANSION */
#define __MI_INSTR(opcode) \
(XE_INSTR_MI | REG_FIELD_PREP(MI_OPCODE, opcode))
#define MI_NOOP __MI_INSTR(0x0)
#define MI_USER_INTERRUPT __MI_INSTR(0x2)
#define MI_ARB_CHECK __MI_INSTR(0x5)
#define MI_ARB_ON_OFF __MI_INSTR(0x8)
#define MI_ARB_ENABLE REG_BIT(0)
#define MI_ARB_DISABLE 0x0
#define MI_BATCH_BUFFER_END __MI_INSTR(0xA)
#define MI_TOPOLOGY_FILTER __MI_INSTR(0xD)
#define MI_FORCE_WAKEUP __MI_INSTR(0x1D)
#define MI_STORE_DATA_IMM __MI_INSTR(0x20)
#define MI_SDI_GGTT REG_BIT(22)
#define MI_SDI_LEN_DW GENMASK(9, 0)
#define MI_SDI_NUM_DW(x) REG_FIELD_PREP(MI_SDI_LEN_DW, (x) + 3 - 2)
#define MI_SDI_NUM_QW(x) (REG_FIELD_PREP(MI_SDI_LEN_DW, 2 * (x) + 3 - 2) | \
REG_BIT(21))
#define MI_LOAD_REGISTER_IMM __MI_INSTR(0x22)
#define MI_LRI_LRM_CS_MMIO REG_BIT(19)
#define MI_LRI_MMIO_REMAP_EN REG_BIT(17)
#define MI_LRI_NUM_REGS(x) XE_INSTR_NUM_DW(2 * (x) + 1)
#define MI_LRI_FORCE_POSTED REG_BIT(12)
#define MI_FLUSH_DW __MI_INSTR(0x26)
#define MI_FLUSH_DW_STORE_INDEX REG_BIT(21)
#define MI_INVALIDATE_TLB REG_BIT(18)
#define MI_FLUSH_DW_CCS REG_BIT(16)
#define MI_FLUSH_DW_OP_STOREDW REG_BIT(14)
#define MI_FLUSH_DW_LEN_DW REG_GENMASK(5, 0)
#define MI_FLUSH_IMM_DW REG_FIELD_PREP(MI_FLUSH_DW_LEN_DW, 4 - 2)
#define MI_FLUSH_IMM_QW REG_FIELD_PREP(MI_FLUSH_DW_LEN_DW, 5 - 2)
#define MI_FLUSH_DW_USE_GTT REG_BIT(2)
#define MI_BATCH_BUFFER_START __MI_INSTR(0x31)
#endif
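As an illustration of how these defines compose into a ring emission (a minimal sketch; emit_store_qword() is a hypothetical helper, not part of the patch):

/* MI_STORE_DATA_IMM of one qword to a GGTT address: 5 dwords total,
* so MI_SDI_NUM_QW(1) encodes a length field of 5 - 2 = 3.
*/
static u32 *emit_store_qword(u32 *cs, u64 ggtt_addr, u64 value)
{
*cs++ = MI_STORE_DATA_IMM | MI_SDI_GGTT | MI_SDI_NUM_QW(1);
*cs++ = lower_32_bits(ggtt_addr);
*cs++ = upper_32_bits(ggtt_addr);
*cs++ = lower_32_bits(value);
*cs++ = upper_32_bits(value);
return cs;
}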

@@ -0,0 +1,184 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_ENGINE_REGS_H_
#define _XE_ENGINE_REGS_H_
#include <asm/page.h>
#include "regs/xe_reg_defs.h"
/*
* These *_BASE values represent the MMIO offset where each hardware engine's
* registers start. The other definitions in this header are parameterized
* macros that will take one of these values as a parameter.
*/
#define RENDER_RING_BASE 0x02000
#define BSD_RING_BASE 0x1c0000
#define BSD2_RING_BASE 0x1c4000
#define BSD3_RING_BASE 0x1d0000
#define BSD4_RING_BASE 0x1d4000
#define XEHP_BSD5_RING_BASE 0x1e0000
#define XEHP_BSD6_RING_BASE 0x1e4000
#define XEHP_BSD7_RING_BASE 0x1f0000
#define XEHP_BSD8_RING_BASE 0x1f4000
#define VEBOX_RING_BASE 0x1c8000
#define VEBOX2_RING_BASE 0x1d8000
#define XEHP_VEBOX3_RING_BASE 0x1e8000
#define XEHP_VEBOX4_RING_BASE 0x1f8000
#define COMPUTE0_RING_BASE 0x1a000
#define COMPUTE1_RING_BASE 0x1c000
#define COMPUTE2_RING_BASE 0x1e000
#define COMPUTE3_RING_BASE 0x26000
#define BLT_RING_BASE 0x22000
#define XEHPC_BCS1_RING_BASE 0x3e0000
#define XEHPC_BCS2_RING_BASE 0x3e2000
#define XEHPC_BCS3_RING_BASE 0x3e4000
#define XEHPC_BCS4_RING_BASE 0x3e6000
#define XEHPC_BCS5_RING_BASE 0x3e8000
#define XEHPC_BCS6_RING_BASE 0x3ea000
#define XEHPC_BCS7_RING_BASE 0x3ec000
#define XEHPC_BCS8_RING_BASE 0x3ee000
#define GSCCS_RING_BASE 0x11a000
#define RING_TAIL(base) XE_REG((base) + 0x30)
#define RING_HEAD(base) XE_REG((base) + 0x34)
#define HEAD_ADDR 0x001FFFFC
#define RING_START(base) XE_REG((base) + 0x38)
#define RING_CTL(base) XE_REG((base) + 0x3c)
#define RING_CTL_SIZE(size) ((size) - PAGE_SIZE) /* in bytes -> pages */
#define RING_PSMI_CTL(base) XE_REG((base) + 0x50, XE_REG_OPTION_MASKED)
#define RC_SEMA_IDLE_MSG_DISABLE REG_BIT(12)
#define WAIT_FOR_EVENT_POWER_DOWN_DISABLE REG_BIT(7)
#define IDLE_MSG_DISABLE REG_BIT(0)
#define RING_PWRCTX_MAXCNT(base) XE_REG((base) + 0x54)
#define IDLE_WAIT_TIME REG_GENMASK(19, 0)
#define RING_ACTHD_UDW(base) XE_REG((base) + 0x5c)
#define RING_DMA_FADD_UDW(base) XE_REG((base) + 0x60)
#define RING_IPEHR(base) XE_REG((base) + 0x68)
#define RING_ACTHD(base) XE_REG((base) + 0x74)
#define RING_DMA_FADD(base) XE_REG((base) + 0x78)
#define RING_HWS_PGA(base) XE_REG((base) + 0x80)
#define RING_HWSTAM(base) XE_REG((base) + 0x98)
#define RING_MI_MODE(base) XE_REG((base) + 0x9c)
#define RING_NOPID(base) XE_REG((base) + 0x94)
#define FF_THREAD_MODE(base) XE_REG((base) + 0xa0)
#define FF_TESSELATION_DOP_GATE_DISABLE BIT(19)
#define RING_IMR(base) XE_REG((base) + 0xa8)
#define RING_EIR(base) XE_REG((base) + 0xb0)
#define RING_EMR(base) XE_REG((base) + 0xb4)
#define RING_ESR(base) XE_REG((base) + 0xb8)
#define RING_CMD_CCTL(base) XE_REG((base) + 0xc4, XE_REG_OPTION_MASKED)
/*
* CMD_CCTL read/write fields take a MOCS value and _not_ a table index.
* The lsb of each can be considered a separate enabling bit for encryption.
* 6:0 == default MOCS value for reads => 6:1 == table index for reads.
* 13:7 == default MOCS value for writes => 13:8 == table index for writes.
* 15:14 == Reserved => 31:30 are set to 0.
*/
#define CMD_CCTL_WRITE_OVERRIDE_MASK REG_GENMASK(13, 8)
#define CMD_CCTL_READ_OVERRIDE_MASK REG_GENMASK(6, 1)
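/* e.g. MOCS table index 3 for reads and 5 for writes would be programmed as
* REG_FIELD_PREP(CMD_CCTL_READ_OVERRIDE_MASK, 3) |
* REG_FIELD_PREP(CMD_CCTL_WRITE_OVERRIDE_MASK, 5)
*/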
#define CSFE_CHICKEN1(base) XE_REG((base) + 0xd4, XE_REG_OPTION_MASKED)
#define GHWSP_CSB_REPORT_DIS REG_BIT(15)
#define PPHWSP_CSB_AND_TIMESTAMP_REPORT_DIS REG_BIT(14)
#define FF_SLICE_CS_CHICKEN1(base) XE_REG((base) + 0xe0, XE_REG_OPTION_MASKED)
#define FFSC_PERCTX_PREEMPT_CTRL REG_BIT(14)
#define FF_SLICE_CS_CHICKEN2(base) XE_REG((base) + 0xe4, XE_REG_OPTION_MASKED)
#define PERF_FIX_BALANCING_CFE_DISABLE REG_BIT(15)
#define CS_DEBUG_MODE1(base) XE_REG((base) + 0xec, XE_REG_OPTION_MASKED)
#define FF_DOP_CLOCK_GATE_DISABLE REG_BIT(1)
#define REPLAY_MODE_GRANULARITY REG_BIT(0)
#define RING_BBADDR(base) XE_REG((base) + 0x140)
#define RING_BBADDR_UDW(base) XE_REG((base) + 0x168)
#define BCS_SWCTRL(base) XE_REG((base) + 0x200, XE_REG_OPTION_MASKED)
#define BCS_SWCTRL_DISABLE_256B REG_BIT(2)
/* Handling MOCS value in BLIT_CCTL like it was done in CMD_CCTL */
#define BLIT_CCTL(base) XE_REG((base) + 0x204)
#define BLIT_CCTL_DST_MOCS_MASK REG_GENMASK(14, 9)
#define BLIT_CCTL_SRC_MOCS_MASK REG_GENMASK(6, 1)
#define RING_EXECLIST_STATUS_LO(base) XE_REG((base) + 0x234)
#define RING_EXECLIST_STATUS_HI(base) XE_REG((base) + 0x234 + 4)
#define RING_CONTEXT_CONTROL(base) XE_REG((base) + 0x244)
#define CTX_CTRL_INHIBIT_SYN_CTX_SWITCH REG_BIT(3)
#define CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT REG_BIT(0)
#define RING_MODE(base) XE_REG((base) + 0x29c)
#define GFX_DISABLE_LEGACY_MODE REG_BIT(3)
#define RING_TIMESTAMP(base) XE_REG((base) + 0x358)
#define RING_TIMESTAMP_UDW(base) XE_REG((base) + 0x358 + 4)
#define RING_VALID_MASK 0x00000001
#define RING_VALID 0x00000001
#define STOP_RING REG_BIT(8)
#define TAIL_ADDR 0x001FFFF8
#define RING_CTX_TIMESTAMP(base) XE_REG((base) + 0x3a8)
#define RING_FORCE_TO_NONPRIV(base, i) XE_REG(((base) + 0x4d0) + (i) * 4)
#define RING_FORCE_TO_NONPRIV_DENY REG_BIT(30)
#define RING_FORCE_TO_NONPRIV_ACCESS_MASK REG_GENMASK(29, 28)
#define RING_FORCE_TO_NONPRIV_ACCESS_RW REG_FIELD_PREP(RING_FORCE_TO_NONPRIV_ACCESS_MASK, 0)
#define RING_FORCE_TO_NONPRIV_ACCESS_RD REG_FIELD_PREP(RING_FORCE_TO_NONPRIV_ACCESS_MASK, 1)
#define RING_FORCE_TO_NONPRIV_ACCESS_WR REG_FIELD_PREP(RING_FORCE_TO_NONPRIV_ACCESS_MASK, 2)
#define RING_FORCE_TO_NONPRIV_ACCESS_INVALID REG_FIELD_PREP(RING_FORCE_TO_NONPRIV_ACCESS_MASK, 3)
#define RING_FORCE_TO_NONPRIV_ADDRESS_MASK REG_GENMASK(25, 2)
#define RING_FORCE_TO_NONPRIV_RANGE_MASK REG_GENMASK(1, 0)
#define RING_FORCE_TO_NONPRIV_RANGE_1 REG_FIELD_PREP(RING_FORCE_TO_NONPRIV_RANGE_MASK, 0)
#define RING_FORCE_TO_NONPRIV_RANGE_4 REG_FIELD_PREP(RING_FORCE_TO_NONPRIV_RANGE_MASK, 1)
#define RING_FORCE_TO_NONPRIV_RANGE_16 REG_FIELD_PREP(RING_FORCE_TO_NONPRIV_RANGE_MASK, 2)
#define RING_FORCE_TO_NONPRIV_RANGE_64 REG_FIELD_PREP(RING_FORCE_TO_NONPRIV_RANGE_MASK, 3)
#define RING_FORCE_TO_NONPRIV_MASK_VALID (RING_FORCE_TO_NONPRIV_RANGE_MASK | \
RING_FORCE_TO_NONPRIV_ACCESS_MASK | \
RING_FORCE_TO_NONPRIV_DENY)
#define RING_MAX_NONPRIV_SLOTS 12
#define RING_EXECLIST_SQ_CONTENTS_LO(base) XE_REG((base) + 0x510)
#define RING_EXECLIST_SQ_CONTENTS_HI(base) XE_REG((base) + 0x510 + 4)
#define RING_EXECLIST_CONTROL(base) XE_REG((base) + 0x550)
#define EL_CTRL_LOAD REG_BIT(0)
#define CS_CHICKEN1(base) XE_REG((base) + 0x580, XE_REG_OPTION_MASKED)
#define PREEMPT_GPGPU_LEVEL(hi, lo) (((hi) << 2) | ((lo) << 1))
#define PREEMPT_GPGPU_MID_THREAD_LEVEL PREEMPT_GPGPU_LEVEL(0, 0)
#define PREEMPT_GPGPU_THREAD_GROUP_LEVEL PREEMPT_GPGPU_LEVEL(0, 1)
#define PREEMPT_GPGPU_COMMAND_LEVEL PREEMPT_GPGPU_LEVEL(1, 0)
#define PREEMPT_GPGPU_LEVEL_MASK PREEMPT_GPGPU_LEVEL(1, 1)
#define PREEMPT_3D_OBJECT_LEVEL REG_BIT(0)
#define VDBOX_CGCTL3F08(base) XE_REG((base) + 0x3f08)
#define CG3DDISHRS_CLKGATE_DIS REG_BIT(5)
#define VDBOX_CGCTL3F10(base) XE_REG((base) + 0x3f10)
#define IECPUNIT_CLKGATE_DIS REG_BIT(22)
#define VDBOX_CGCTL3F18(base) XE_REG((base) + 0x3f18)
#define ALNUNIT_CLKGATE_DIS REG_BIT(13)
#define VDBOX_CGCTL3F1C(base) XE_REG((base) + 0x3f1c)
#define MFXPIPE_CLKGATE_DIS REG_BIT(3)
#endif

@@ -0,0 +1,70 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GPU_COMMANDS_H_
#define _XE_GPU_COMMANDS_H_
#include "regs/xe_reg_defs.h"
#define XY_CTRL_SURF_COPY_BLT ((2 << 29) | (0x48 << 22) | 3)
#define SRC_ACCESS_TYPE_SHIFT 21
#define DST_ACCESS_TYPE_SHIFT 20
#define CCS_SIZE_MASK GENMASK(17, 8)
#define XE2_CCS_SIZE_MASK GENMASK(18, 9)
#define XY_CTRL_SURF_MOCS_MASK GENMASK(31, 26)
#define XE2_XY_CTRL_SURF_MOCS_INDEX_MASK GENMASK(31, 28)
#define NUM_CCS_BYTES_PER_BLOCK 256
#define NUM_BYTES_PER_CCS_BYTE(_xe) (GRAPHICS_VER(_xe) >= 20 ? 512 : 256)
#define XY_FAST_COLOR_BLT_CMD (2 << 29 | 0x44 << 22)
#define XY_FAST_COLOR_BLT_DEPTH_32 (2 << 19)
#define XY_FAST_COLOR_BLT_DW 16
#define XY_FAST_COLOR_BLT_MOCS_MASK GENMASK(27, 22)
#define XE2_XY_FAST_COLOR_BLT_MOCS_INDEX_MASK GENMASK(27, 24)
#define XY_FAST_COLOR_BLT_MEM_TYPE_SHIFT 31
#define XY_FAST_COPY_BLT_CMD (2 << 29 | 0x42 << 22)
#define XY_FAST_COPY_BLT_DEPTH_32 (3<<24)
#define XY_FAST_COPY_BLT_D1_SRC_TILE4 REG_BIT(31)
#define XY_FAST_COPY_BLT_D1_DST_TILE4 REG_BIT(30)
#define XE2_XY_FAST_COPY_BLT_MOCS_INDEX_MASK GENMASK(23, 20)
#define PVC_MEM_SET_CMD (2 << 29 | 0x5b << 22)
#define PVC_MEM_SET_CMD_LEN_DW 7
#define PVC_MEM_SET_MATRIX REG_BIT(17)
#define PVC_MEM_SET_DATA_FIELD GENMASK(31, 24)
/* Bspec lists field as [6:0], but index alone is from [6:1] */
#define PVC_MEM_SET_MOCS_INDEX_MASK GENMASK(6, 1)
#define XE2_MEM_SET_MOCS_INDEX_MASK GENMASK(6, 3)
#define GFX_OP_PIPE_CONTROL(len) ((0x3<<29)|(0x3<<27)|(0x2<<24)|((len)-2))
#define PIPE_CONTROL0_HDC_PIPELINE_FLUSH BIT(9) /* gen12 */
#define PIPE_CONTROL_COMMAND_CACHE_INVALIDATE (1<<29)
#define PIPE_CONTROL_TILE_CACHE_FLUSH (1<<28)
#define PIPE_CONTROL_AMFS_FLUSH (1<<25)
#define PIPE_CONTROL_GLOBAL_GTT_IVB (1<<24)
#define PIPE_CONTROL_LRI_POST_SYNC BIT(23)
#define PIPE_CONTROL_STORE_DATA_INDEX (1<<21)
#define PIPE_CONTROL_CS_STALL (1<<20)
#define PIPE_CONTROL_GLOBAL_SNAPSHOT_RESET (1<<19)
#define PIPE_CONTROL_TLB_INVALIDATE BIT(18)
#define PIPE_CONTROL_PSD_SYNC (1<<17)
#define PIPE_CONTROL_QW_WRITE (1<<14)
#define PIPE_CONTROL_DEPTH_STALL (1<<13)
#define PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH (1<<12)
#define PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE (1<<11)
#define PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE (1<<10)
#define PIPE_CONTROL_INDIRECT_STATE_DISABLE (1<<9)
#define PIPE_CONTROL_FLUSH_ENABLE (1<<7)
#define PIPE_CONTROL_DC_FLUSH_ENABLE (1<<5)
#define PIPE_CONTROL_VF_CACHE_INVALIDATE (1<<4)
#define PIPE_CONTROL_CONST_CACHE_INVALIDATE (1<<3)
#define PIPE_CONTROL_STATE_CACHE_INVALIDATE (1<<2)
#define PIPE_CONTROL_STALL_AT_SCOREBOARD (1<<1)
#define PIPE_CONTROL_DEPTH_CACHE_FLUSH (1<<0)
#endif
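A minimal sketch of a PIPE_CONTROL emission using the defines above (hypothetical helper, not part of the patch; the flag choice is illustrative, and the post-sync qword write lands at addr):

static u32 *emit_pipe_control_qw_write(u32 *cs, u64 addr)
{
*cs++ = GFX_OP_PIPE_CONTROL(6); /* 6 dwords -> length field of 4 */
*cs++ = PIPE_CONTROL_CS_STALL | PIPE_CONTROL_QW_WRITE;
*cs++ = lower_32_bits(addr);
*cs++ = upper_32_bits(addr);
*cs++ = 0; /* immediate data, low dword */
*cs++ = 0; /* immediate data, high dword */
return cs;
}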

@@ -0,0 +1,41 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GSC_REGS_H_
#define _XE_GSC_REGS_H_
#include <linux/compiler.h>
#include <linux/types.h>
#include "regs/xe_reg_defs.h"
/* Definitions of GSC H/W registers, bits, etc */
#define MTL_GSC_HECI1_BASE 0x00116000
#define MTL_GSC_HECI2_BASE 0x00117000
#define HECI_H_CSR(base) XE_REG((base) + 0x4)
#define HECI_H_CSR_IE REG_BIT(0)
#define HECI_H_CSR_IS REG_BIT(1)
#define HECI_H_CSR_IG REG_BIT(2)
#define HECI_H_CSR_RDY REG_BIT(3)
#define HECI_H_CSR_RST REG_BIT(4)
/*
* The FWSTS register values are FW defined and can be different between
* HECI1 and HECI2
*/
#define HECI_FWSTS1(base) XE_REG((base) + 0xc40)
#define HECI1_FWSTS1_CURRENT_STATE REG_GENMASK(3, 0)
#define HECI1_FWSTS1_CURRENT_STATE_RESET 0
#define HECI1_FWSTS1_PROXY_STATE_NORMAL 5
#define HECI1_FWSTS1_INIT_COMPLETE REG_BIT(9)
#define HECI_FWSTS5(base) XE_REG((base) + 0xc68)
#define HECI1_FWSTS5_HUC_AUTH_DONE REG_BIT(19)
#define HECI_H_GS1(base) XE_REG((base) + 0xc4c)
#define HECI_H_GS1_ER_PREP REG_BIT(0)
#endif

@@ -0,0 +1,478 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_GT_REGS_H_
#define _XE_GT_REGS_H_
#include "regs/xe_reg_defs.h"
/*
* The GSI register range [0x0 - 0x40000) is replicated at a higher offset
* for the media GT. xe_mmio and xe_gt_mcr functions will automatically
* translate offsets by MEDIA_GT_GSI_OFFSET when operating on the media GT.
*/
#define MEDIA_GT_GSI_OFFSET 0x380000
#define MEDIA_GT_GSI_LENGTH 0x40000
/* MTL workpoint reg to get core C state and actual freq of 3D, SAMedia */
#define MTL_MIRROR_TARGET_WP1 XE_REG(0xc60)
#define MTL_CAGF_MASK REG_GENMASK(8, 0)
#define MTL_CC_MASK REG_GENMASK(12, 9)
/* RPM unit config (Gen8+) */
#define RPM_CONFIG0 XE_REG(0xd00)
#define RPM_CONFIG0_CRYSTAL_CLOCK_FREQ_MASK REG_GENMASK(5, 3)
#define RPM_CONFIG0_CRYSTAL_CLOCK_FREQ_24_MHZ 0
#define RPM_CONFIG0_CRYSTAL_CLOCK_FREQ_19_2_MHZ 1
#define RPM_CONFIG0_CRYSTAL_CLOCK_FREQ_38_4_MHZ 2
#define RPM_CONFIG0_CRYSTAL_CLOCK_FREQ_25_MHZ 3
#define RPM_CONFIG0_CTC_SHIFT_PARAMETER_MASK REG_GENMASK(2, 1)
#define FORCEWAKE_ACK_MEDIA_VDBOX(n) XE_REG(0xd50 + (n) * 4)
#define FORCEWAKE_ACK_MEDIA_VEBOX(n) XE_REG(0xd70 + (n) * 4)
#define FORCEWAKE_ACK_RENDER XE_REG(0xd84)
#define GMD_ID XE_REG(0xd8c)
#define GMD_ID_ARCH_MASK REG_GENMASK(31, 22)
#define GMD_ID_RELEASE_MASK REG_GENMASK(21, 14)
#define GMD_ID_REVID REG_GENMASK(5, 0)
#define FORCEWAKE_ACK_GSC XE_REG(0xdf8)
#define FORCEWAKE_ACK_GT_MTL XE_REG(0xdfc)
#define MCFG_MCR_SELECTOR XE_REG(0xfd0)
#define MTL_MCR_SELECTOR XE_REG(0xfd4)
#define SF_MCR_SELECTOR XE_REG(0xfd8)
#define MCR_SELECTOR XE_REG(0xfdc)
#define GAM_MCR_SELECTOR XE_REG(0xfe0)
#define MCR_MULTICAST REG_BIT(31)
#define MCR_SLICE_MASK REG_GENMASK(30, 27)
#define MCR_SLICE(slice) REG_FIELD_PREP(MCR_SLICE_MASK, slice)
#define MCR_SUBSLICE_MASK REG_GENMASK(26, 24)
#define MCR_SUBSLICE(subslice) REG_FIELD_PREP(MCR_SUBSLICE_MASK, subslice)
#define MTL_MCR_GROUPID REG_GENMASK(11, 8)
#define MTL_MCR_INSTANCEID REG_GENMASK(3, 0)
#define PS_INVOCATION_COUNT XE_REG(0x2348)
#define XELP_GLOBAL_MOCS(i) XE_REG(0x4000 + (i) * 4)
#define XEHP_GLOBAL_MOCS(i) XE_REG_MCR(0x4000 + (i) * 4)
#define CCS_AUX_INV XE_REG(0x4208)
#define VD0_AUX_INV XE_REG(0x4218)
#define VE0_AUX_INV XE_REG(0x4238)
#define VE1_AUX_INV XE_REG(0x42b8)
#define AUX_INV REG_BIT(0)
#define XEHP_TILE_ADDR_RANGE(_idx) XE_REG_MCR(0x4900 + (_idx) * 4)
#define XEHP_FLAT_CCS_BASE_ADDR XE_REG_MCR(0x4910)
#define WM_CHICKEN3 XE_REG_MCR(0x5588, XE_REG_OPTION_MASKED)
#define HIZ_PLANE_COMPRESSION_DIS REG_BIT(10)
#define CHICKEN_RASTER_2 XE_REG_MCR(0x6208, XE_REG_OPTION_MASKED)
#define TBIMR_FAST_CLIP REG_BIT(5)
#define FF_MODE XE_REG_MCR(0x6210)
#define DIS_TE_AUTOSTRIP REG_BIT(31)
#define DIS_MESH_PARTIAL_AUTOSTRIP REG_BIT(16)
#define DIS_MESH_AUTOSTRIP REG_BIT(15)
#define VFLSKPD XE_REG_MCR(0x62a8, XE_REG_OPTION_MASKED)
#define DIS_PARTIAL_AUTOSTRIP REG_BIT(9)
#define DIS_AUTOSTRIP REG_BIT(6)
#define DIS_OVER_FETCH_CACHE REG_BIT(1)
#define DIS_MULT_MISS_RD_SQUASH REG_BIT(0)
#define FF_MODE2 XE_REG(0x6604)
#define XEHP_FF_MODE2 XE_REG_MCR(0x6604)
#define FF_MODE2_GS_TIMER_MASK REG_GENMASK(31, 24)
#define FF_MODE2_GS_TIMER_224 REG_FIELD_PREP(FF_MODE2_GS_TIMER_MASK, 224)
#define FF_MODE2_TDS_TIMER_MASK REG_GENMASK(23, 16)
#define FF_MODE2_TDS_TIMER_128 REG_FIELD_PREP(FF_MODE2_TDS_TIMER_MASK, 4)
#define CACHE_MODE_1 XE_REG(0x7004, XE_REG_OPTION_MASKED)
#define MSAA_OPTIMIZATION_REDUC_DISABLE REG_BIT(11)
#define COMMON_SLICE_CHICKEN1 XE_REG(0x7010)
#define HIZ_CHICKEN XE_REG(0x7018, XE_REG_OPTION_MASKED)
#define DG1_HZ_READ_SUPPRESSION_OPTIMIZATION_DISABLE REG_BIT(14)
#define HZ_DEPTH_TEST_LE_GE_OPT_DISABLE REG_BIT(13)
#define XEHP_PSS_MODE2 XE_REG_MCR(0x703c, XE_REG_OPTION_MASKED)
#define SCOREBOARD_STALL_FLUSH_CONTROL REG_BIT(5)
#define XEHP_PSS_CHICKEN XE_REG_MCR(0x7044, XE_REG_OPTION_MASKED)
#define FLSH_IGNORES_PSD REG_BIT(10)
#define FD_END_COLLECT REG_BIT(5)
#define COMMON_SLICE_CHICKEN4 XE_REG(0x7300, XE_REG_OPTION_MASKED)
#define DISABLE_TDC_LOAD_BALANCING_CALC REG_BIT(6)
#define COMMON_SLICE_CHICKEN3 XE_REG(0x7304, XE_REG_OPTION_MASKED)
#define XEHP_COMMON_SLICE_CHICKEN3 XE_REG_MCR(0x7304, XE_REG_OPTION_MASKED)
#define DG1_FLOAT_POINT_BLEND_OPT_STRICT_MODE_EN REG_BIT(12)
#define XEHP_DUAL_SIMD8_SEQ_MERGE_DISABLE REG_BIT(12)
#define BLEND_EMB_FIX_DISABLE_IN_RCC REG_BIT(11)
#define DISABLE_CPS_AWARE_COLOR_PIPE REG_BIT(9)
#define XEHP_SLICE_COMMON_ECO_CHICKEN1 XE_REG_MCR(0x731c, XE_REG_OPTION_MASKED)
#define MSC_MSAA_REODER_BUF_BYPASS_DISABLE REG_BIT(14)
#define VF_PREEMPTION XE_REG(0x83a4, XE_REG_OPTION_MASKED)
#define PREEMPTION_VERTEX_COUNT REG_GENMASK(15, 0)
#define VF_SCRATCHPAD XE_REG(0x83a8, XE_REG_OPTION_MASKED)
#define XE2_VFG_TED_CREDIT_INTERFACE_DISABLE REG_BIT(13)
#define VFG_PREEMPTION_CHICKEN XE_REG(0x83b4, XE_REG_OPTION_MASKED)
#define POLYGON_TRIFAN_LINELOOP_DISABLE REG_BIT(4)
#define SQCNT1 XE_REG_MCR(0x8718)
#define XELPMP_SQCNT1 XE_REG(0x8718)
#define ENFORCE_RAR REG_BIT(23)
#define XEHP_SQCM XE_REG_MCR(0x8724)
#define EN_32B_ACCESS REG_BIT(30)
#define XE2_FLAT_CCS_BASE_RANGE_LOWER XE_REG_MCR(0x8800)
#define XE2_FLAT_CCS_ENABLE REG_BIT(0)
#define GSCPSMI_BASE XE_REG(0x880c)
/* Fuse readout registers for GT */
#define XEHP_FUSE4 XE_REG(0x9114)
#define CCS_EN_MASK REG_GENMASK(19, 16)
#define GT_L3_EXC_MASK REG_GENMASK(6, 4)
#define MIRROR_FUSE3 XE_REG(0x9118)
#define XE2_NODE_ENABLE_MASK REG_GENMASK(31, 16)
#define L3BANK_PAIR_COUNT 4
#define L3BANK_MASK REG_GENMASK(3, 0)
/* on Xe_HP the same fuses indicate mslices instead of L3 banks */
#define MAX_MSLICES 4
#define MEML3_EN_MASK REG_GENMASK(3, 0)
#define XELP_EU_ENABLE XE_REG(0x9134) /* "_DISABLE" on Xe_LP */
#define XELP_EU_MASK REG_GENMASK(7, 0)
#define XELP_GT_GEOMETRY_DSS_ENABLE XE_REG(0x913c)
#define GT_VEBOX_VDBOX_DISABLE XE_REG(0x9140)
#define GT_VEBOX_DISABLE_MASK REG_GENMASK(19, 16)
#define GT_VDBOX_DISABLE_MASK REG_GENMASK(7, 0)
#define XEHP_GT_COMPUTE_DSS_ENABLE XE_REG(0x9144)
#define XEHPC_GT_COMPUTE_DSS_ENABLE_EXT XE_REG(0x9148)
#define XE2_GT_COMPUTE_DSS_2 XE_REG(0x914c)
#define XE2_GT_GEOMETRY_DSS_1 XE_REG(0x9150)
#define XE2_GT_GEOMETRY_DSS_2 XE_REG(0x9154)
#define GDRST XE_REG(0x941c)
#define GRDOM_GUC REG_BIT(3)
#define GRDOM_FULL REG_BIT(0)
#define MISCCPCTL XE_REG(0x9424)
#define DOP_CLOCK_GATE_RENDER_ENABLE REG_BIT(1)
#define UNSLCGCTL9430 XE_REG(0x9430)
#define MSQDUNIT_CLKGATE_DIS REG_BIT(3)
#define UNSLICE_UNIT_LEVEL_CLKGATE XE_REG(0x9434)
#define VFUNIT_CLKGATE_DIS REG_BIT(20)
#define TSGUNIT_CLKGATE_DIS REG_BIT(17) /* XEHPSDV */
#define CG3DDISCFEG_CLKGATE_DIS REG_BIT(17) /* DG2 */
#define GAMEDIA_CLKGATE_DIS REG_BIT(11)
#define HSUNIT_CLKGATE_DIS REG_BIT(8)
#define VSUNIT_CLKGATE_DIS REG_BIT(3)
#define UNSLCGCTL9440 XE_REG(0x9440)
#define GAMTLBOACS_CLKGATE_DIS REG_BIT(28)
#define GAMTLBVDBOX5_CLKGATE_DIS REG_BIT(27)
#define GAMTLBVDBOX6_CLKGATE_DIS REG_BIT(26)
#define GAMTLBVDBOX3_CLKGATE_DIS REG_BIT(24)
#define GAMTLBVDBOX4_CLKGATE_DIS REG_BIT(23)
#define GAMTLBVDBOX7_CLKGATE_DIS REG_BIT(22)
#define GAMTLBVDBOX2_CLKGATE_DIS REG_BIT(21)
#define GAMTLBVDBOX0_CLKGATE_DIS REG_BIT(17)
#define GAMTLBKCR_CLKGATE_DIS REG_BIT(16)
#define GAMTLBGUC_CLKGATE_DIS REG_BIT(15)
#define GAMTLBBLT_CLKGATE_DIS REG_BIT(14)
#define GAMTLBVDBOX1_CLKGATE_DIS REG_BIT(6)
#define UNSLCGCTL9444 XE_REG(0x9444)
#define GAMTLBGFXA0_CLKGATE_DIS REG_BIT(30)
#define GAMTLBGFXA1_CLKGATE_DIS REG_BIT(29)
#define GAMTLBCOMPA0_CLKGATE_DIS REG_BIT(28)
#define GAMTLBCOMPA1_CLKGATE_DIS REG_BIT(27)
#define GAMTLBCOMPB0_CLKGATE_DIS REG_BIT(26)
#define GAMTLBCOMPB1_CLKGATE_DIS REG_BIT(25)
#define GAMTLBCOMPC0_CLKGATE_DIS REG_BIT(24)
#define GAMTLBCOMPC1_CLKGATE_DIS REG_BIT(23)
#define GAMTLBCOMPD0_CLKGATE_DIS REG_BIT(22)
#define GAMTLBCOMPD1_CLKGATE_DIS REG_BIT(21)
#define GAMTLBMERT_CLKGATE_DIS REG_BIT(20)
#define GAMTLBVEBOX3_CLKGATE_DIS REG_BIT(19)
#define GAMTLBVEBOX2_CLKGATE_DIS REG_BIT(18)
#define GAMTLBVEBOX1_CLKGATE_DIS REG_BIT(17)
#define GAMTLBVEBOX0_CLKGATE_DIS REG_BIT(16)
#define LTCDD_CLKGATE_DIS REG_BIT(10)
#define XEHP_SLICE_UNIT_LEVEL_CLKGATE XE_REG_MCR(0x94d4)
#define L3_CR2X_CLKGATE_DIS REG_BIT(17)
#define L3_CLKGATE_DIS REG_BIT(16)
#define NODEDSS_CLKGATE_DIS REG_BIT(12)
#define MSCUNIT_CLKGATE_DIS REG_BIT(10)
#define RCCUNIT_CLKGATE_DIS REG_BIT(7)
#define SARBUNIT_CLKGATE_DIS REG_BIT(5)
#define SBEUNIT_CLKGATE_DIS REG_BIT(4)
#define UNSLICE_UNIT_LEVEL_CLKGATE2 XE_REG(0x94e4)
#define VSUNIT_CLKGATE2_DIS REG_BIT(19)
#define SUBSLICE_UNIT_LEVEL_CLKGATE XE_REG_MCR(0x9524)
#define DSS_ROUTER_CLKGATE_DIS REG_BIT(28)
#define GWUNIT_CLKGATE_DIS REG_BIT(16)
#define SUBSLICE_UNIT_LEVEL_CLKGATE2 XE_REG_MCR(0x9528)
#define CPSSUNIT_CLKGATE_DIS REG_BIT(9)
#define SSMCGCTL9530 XE_REG_MCR(0x9530)
#define RTFUNIT_CLKGATE_DIS REG_BIT(18)
#define DFR_RATIO_EN_AND_CHICKEN XE_REG_MCR(0x9550)
#define DFR_DISABLE REG_BIT(9)
#define RPNSWREQ XE_REG(0xa008)
#define REQ_RATIO_MASK REG_GENMASK(31, 23)
#define RP_CONTROL XE_REG(0xa024)
#define RPSWCTL_MASK REG_GENMASK(10, 9)
#define RPSWCTL_ENABLE REG_FIELD_PREP(RPSWCTL_MASK, 2)
#define RPSWCTL_DISABLE REG_FIELD_PREP(RPSWCTL_MASK, 0)
#define RC_CONTROL XE_REG(0xa090)
#define RC_CTL_HW_ENABLE REG_BIT(31)
#define RC_CTL_TO_MODE REG_BIT(28)
#define RC_CTL_RC6_ENABLE REG_BIT(18)
#define RC_STATE XE_REG(0xa094)
#define RC_IDLE_HYSTERSIS XE_REG(0xa0ac)
#define PMINTRMSK XE_REG(0xa168)
#define PMINTR_DISABLE_REDIRECT_TO_GUC REG_BIT(31)
#define ARAT_EXPIRED_INTRMSK REG_BIT(9)
#define FORCEWAKE_GT XE_REG(0xa188)
#define PG_ENABLE XE_REG(0xa210)
#define CTC_MODE XE_REG(0xa26c)
#define CTC_SHIFT_PARAMETER_MASK REG_GENMASK(2, 1)
#define CTC_SOURCE_DIVIDE_LOGIC REG_BIT(0)
#define FORCEWAKE_RENDER XE_REG(0xa278)
#define FORCEWAKE_MEDIA_VDBOX(n) XE_REG(0xa540 + (n) * 4)
#define FORCEWAKE_MEDIA_VEBOX(n) XE_REG(0xa560 + (n) * 4)
#define FORCEWAKE_GSC XE_REG(0xa618)
#define XEHPC_LNCFMISCCFGREG0 XE_REG_MCR(0xb01c, XE_REG_OPTION_MASKED)
#define XEHPC_OVRLSCCC REG_BIT(0)
/* L3 Cache Control */
#define XELP_LNCFCMOCS(i) XE_REG(0xb020 + (i) * 4)
#define XEHP_LNCFCMOCS(i) XE_REG_MCR(0xb020 + (i) * 4)
#define LNCFCMOCS_REG_COUNT 32
#define XEHP_L3NODEARBCFG XE_REG_MCR(0xb0b4)
#define XEHP_LNESPARE REG_BIT(19)
#define XEHP_L3SQCREG5 XE_REG_MCR(0xb158)
#define L3_PWM_TIMER_INIT_VAL_MASK REG_GENMASK(9, 0)
#define XEHP_L3SCQREG7 XE_REG_MCR(0xb188)
#define BLEND_FILL_CACHING_OPT_DIS REG_BIT(3)
#define XEHPC_L3CLOS_MASK(i) XE_REG_MCR(0xb194 + (i) * 8)
#define XE2LPM_L3SQCREG5 XE_REG_MCR(0xb658)
#define XEHP_MERT_MOD_CTRL XE_REG_MCR(0xcf28)
#define RENDER_MOD_CTRL XE_REG_MCR(0xcf2c)
#define COMP_MOD_CTRL XE_REG_MCR(0xcf30)
#define XEHP_VDBX_MOD_CTRL XE_REG_MCR(0xcf34)
#define XELPMP_VDBX_MOD_CTRL XE_REG(0xcf34)
#define XEHP_VEBX_MOD_CTRL XE_REG_MCR(0xcf38)
#define XELPMP_VEBX_MOD_CTRL XE_REG(0xcf38)
#define FORCE_MISS_FTLB REG_BIT(3)
#define XEHP_GAMSTLB_CTRL XE_REG_MCR(0xcf4c)
#define CONTROL_BLOCK_CLKGATE_DIS REG_BIT(12)
#define EGRESS_BLOCK_CLKGATE_DIS REG_BIT(11)
#define TAG_BLOCK_CLKGATE_DIS REG_BIT(7)
#define XEHP_GAMCNTRL_CTRL XE_REG_MCR(0xcf54)
#define INVALIDATION_BROADCAST_MODE_DIS REG_BIT(12)
#define GLOBAL_INVALIDATION_MODE REG_BIT(2)
#define HALF_SLICE_CHICKEN5 XE_REG_MCR(0xe188, XE_REG_OPTION_MASKED)
#define DISABLE_SAMPLE_G_PERFORMANCE REG_BIT(0)
#define SAMPLER_MODE XE_REG_MCR(0xe18c, XE_REG_OPTION_MASKED)
#define ENABLE_SMALLPL REG_BIT(15)
#define SC_DISABLE_POWER_OPTIMIZATION_EBB REG_BIT(9)
#define SAMPLER_ENABLE_HEADLESS_MSG REG_BIT(5)
#define INDIRECT_STATE_BASE_ADDR_OVERRIDE REG_BIT(0)
#define HALF_SLICE_CHICKEN7 XE_REG_MCR(0xe194, XE_REG_OPTION_MASKED)
#define DG2_DISABLE_ROUND_ENABLE_ALLOW_FOR_SSLA REG_BIT(15)
#define CACHE_MODE_SS XE_REG_MCR(0xe420, XE_REG_OPTION_MASKED)
#define DISABLE_ECC REG_BIT(5)
#define ENABLE_PREFETCH_INTO_IC REG_BIT(3)
#define ROW_CHICKEN4 XE_REG_MCR(0xe48c, XE_REG_OPTION_MASKED)
#define DISABLE_GRF_CLEAR REG_BIT(13)
#define XEHP_DIS_BBL_SYSPIPE REG_BIT(11)
#define DISABLE_TDL_PUSH REG_BIT(9)
#define DIS_PICK_2ND_EU REG_BIT(7)
#define DISABLE_HDR_PAST_PAYLOAD_HOLD_FIX REG_BIT(4)
#define THREAD_EX_ARB_MODE REG_GENMASK(3, 2)
#define THREAD_EX_ARB_MODE_RR_AFTER_DEP REG_FIELD_PREP(THREAD_EX_ARB_MODE, 0x2)
#define ROW_CHICKEN3 XE_REG_MCR(0xe49c, XE_REG_OPTION_MASKED)
#define DIS_FIX_EOT1_FLUSH REG_BIT(9)
#define ROW_CHICKEN XE_REG_MCR(0xe4f0, XE_REG_OPTION_MASKED)
#define UGM_BACKUP_MODE REG_BIT(13)
#define MDQ_ARBITRATION_MODE REG_BIT(12)
#define EARLY_EOT_DIS REG_BIT(1)
#define ROW_CHICKEN2 XE_REG_MCR(0xe4f4, XE_REG_OPTION_MASKED)
#define DISABLE_READ_SUPPRESSION REG_BIT(15)
#define DISABLE_EARLY_READ REG_BIT(14)
#define ENABLE_LARGE_GRF_MODE REG_BIT(12)
#define PUSH_CONST_DEREF_HOLD_DIS REG_BIT(8)
#define DISABLE_DOP_GATING REG_BIT(0)
#define RT_CTRL XE_REG_MCR(0xe530)
#define DIS_NULL_QUERY REG_BIT(10)
#define XEHP_HDC_CHICKEN0 XE_REG_MCR(0xe5f0, XE_REG_OPTION_MASKED)
#define LSC_L1_FLUSH_CTL_3D_DATAPORT_FLUSH_EVENTS_MASK REG_GENMASK(13, 11)
#define DIS_ATOMIC_CHAINING_TYPED_WRITES REG_BIT(3)
#define LSC_CHICKEN_BIT_0 XE_REG_MCR(0xe7c8)
#define DISABLE_D8_D16_COASLESCE REG_BIT(30)
#define TGM_WRITE_EOM_FORCE REG_BIT(17)
#define FORCE_1_SUB_MESSAGE_PER_FRAGMENT REG_BIT(15)
#define SEQUENTIAL_ACCESS_UPGRADE_DISABLE REG_BIT(13)
#define LSC_CHICKEN_BIT_0_UDW XE_REG_MCR(0xe7c8 + 4)
#define UGM_FRAGMENT_THRESHOLD_TO_3 REG_BIT(58 - 32)
#define DIS_CHAIN_2XSIMD8 REG_BIT(55 - 32)
#define XE2_ALLOC_DPA_STARVE_FIX_DIS REG_BIT(47 - 32)
#define ENABLE_SMP_LD_RENDER_SURFACE_CONTROL REG_BIT(44 - 32)
#define FORCE_SLM_FENCE_SCOPE_TO_TILE REG_BIT(42 - 32)
#define FORCE_UGM_FENCE_SCOPE_TO_TILE REG_BIT(41 - 32)
#define MAXREQS_PER_BANK REG_GENMASK(39 - 32, 37 - 32)
#define DISABLE_128B_EVICTION_COMMAND_UDW REG_BIT(36 - 32)
#define SARB_CHICKEN1 XE_REG_MCR(0xe90c)
#define COMP_CKN_IN REG_GENMASK(30, 29)
#define RCU_MODE XE_REG(0x14800, XE_REG_OPTION_MASKED)
#define RCU_MODE_FIXED_SLICE_CCS_MODE REG_BIT(1)
#define RCU_MODE_CCS_ENABLE REG_BIT(0)
/*
* Total of 4 cslices, where each cslice is in the form:
* [0-3] CCS ID
* [4-6] RSVD
* [7] Disabled
*/
#define CCS_MODE XE_REG(0x14804)
#define CCS_MODE_CSLICE_0_3_MASK REG_GENMASK(11, 0) /* 3 bits per cslice */
#define CCS_MODE_CSLICE_MASK 0x7 /* CCS0-3 + rsvd */
#define CCS_MODE_CSLICE_WIDTH ilog2(CCS_MODE_CSLICE_MASK + 1)
#define CCS_MODE_CSLICE(cslice, ccs) \
((ccs) << ((cslice) * CCS_MODE_CSLICE_WIDTH))
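/*
 * Illustrative sketch (helper name is hypothetical): composing a CCS_MODE
 * value with the macros above. Each cslice carries a 3-bit CCS ID field, so
 * a full assignment is built by OR-ing one CCS_MODE_CSLICE() term per cslice.
 */
static inline u32 ccs_mode_assign_example(void)
{
	/* cslice0 -> CCS0, cslice1 -> CCS1; cslices 2-3 left at 0 */
	return CCS_MODE_CSLICE(0, 0) | CCS_MODE_CSLICE(1, 1);
}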
#define FORCEWAKE_ACK_GT XE_REG(0x130044)
#define FORCEWAKE_KERNEL BIT(0)
#define FORCEWAKE_USER BIT(1)
#define FORCEWAKE_KERNEL_FALLBACK BIT(15)
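/*
 * Illustrative sketch (assumption, following the i915 convention): forcewake
 * request registers are written masked-style, with the upper 16 bits selecting
 * which of the lower bits to update; software then polls FORCEWAKE_ACK_GT
 * until FORCEWAKE_KERNEL is reflected back.
 */
static inline u32 forcewake_kernel_request_example(void)
{
	return (FORCEWAKE_KERNEL << 16) | FORCEWAKE_KERNEL;	/* 0x00010001 */
}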
#define MTL_MEDIA_PERF_LIMIT_REASONS XE_REG(0x138030)
#define MTL_MEDIA_MC6 XE_REG(0x138048)
#define GT_CORE_STATUS XE_REG(0x138060)
#define RCN_MASK REG_GENMASK(2, 0)
#define GT_C0 0
#define GT_C6 3
#define GT_GFX_RC6_LOCKED XE_REG(0x138104)
#define GT_GFX_RC6 XE_REG(0x138108)
#define GT0_PERF_LIMIT_REASONS XE_REG(0x1381a8)
#define GT0_PERF_LIMIT_REASONS_MASK 0xde3
#define PROCHOT_MASK REG_BIT(0)
#define THERMAL_LIMIT_MASK REG_BIT(1)
#define RATL_MASK REG_BIT(5)
#define VR_THERMALERT_MASK REG_BIT(6)
#define VR_TDC_MASK REG_BIT(7)
#define POWER_LIMIT_4_MASK REG_BIT(8)
#define POWER_LIMIT_1_MASK REG_BIT(10)
#define POWER_LIMIT_2_MASK REG_BIT(11)
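/*
 * Illustrative sketch (hypothetical helper): decoding a raw
 * GT0_PERF_LIMIT_REASONS value. Note that GT0_PERF_LIMIT_REASONS_MASK (0xde3)
 * is exactly the OR of the individual reason bits defined above.
 */
static inline bool gt_thermally_throttled_example(u32 reasons)
{
	return reasons & (PROCHOT_MASK | THERMAL_LIMIT_MASK |
			  RATL_MASK | VR_THERMALERT_MASK);
}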
#define GT_PERF_STATUS XE_REG(0x1381b4)
#define VOLTAGE_MASK REG_GENMASK(10, 0)
#define GT_INTR_DW(x) XE_REG(0x190018 + ((x) * 4))
#define RENDER_COPY_INTR_ENABLE XE_REG(0x190030)
#define VCS_VECS_INTR_ENABLE XE_REG(0x190034)
#define GUC_SG_INTR_ENABLE XE_REG(0x190038)
#define ENGINE1_MASK REG_GENMASK(31, 16)
#define ENGINE0_MASK REG_GENMASK(15, 0)
#define GPM_WGBOXPERF_INTR_ENABLE XE_REG(0x19003c)
#define GUNIT_GSC_INTR_ENABLE XE_REG(0x190044)
#define CCS_RSVD_INTR_ENABLE XE_REG(0x190048)
#define INTR_IDENTITY_REG(x) XE_REG(0x190060 + ((x) * 4))
#define INTR_DATA_VALID REG_BIT(31)
#define INTR_ENGINE_INSTANCE(x) REG_FIELD_GET(GENMASK(25, 20), x)
#define INTR_ENGINE_CLASS(x) REG_FIELD_GET(GENMASK(18, 16), x)
#define INTR_ENGINE_INTR(x) REG_FIELD_GET(GENMASK(15, 0), x)
#define OTHER_GUC_INSTANCE 0
#define OTHER_GSC_INSTANCE 6
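/*
 * Illustrative sketch (hypothetical helper): decoding an interrupt identity
 * dword with the field getters above. Class and instance are only meaningful
 * once INTR_DATA_VALID is set.
 */
static inline bool intr_identity_decode_example(u32 ident, u32 *engine_class,
						u32 *engine_inst)
{
	if (!(ident & INTR_DATA_VALID))
		return false;

	*engine_class = INTR_ENGINE_CLASS(ident);
	*engine_inst = INTR_ENGINE_INSTANCE(ident);
	return true;
}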
#define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) * 4))
#define RCS0_RSVD_INTR_MASK XE_REG(0x190090)
#define BCS_RSVD_INTR_MASK XE_REG(0x1900a0)
#define VCS0_VCS1_INTR_MASK XE_REG(0x1900a8)
#define VCS2_VCS3_INTR_MASK XE_REG(0x1900ac)
#define VECS0_VECS1_INTR_MASK XE_REG(0x1900d0)
#define GUC_SG_INTR_MASK XE_REG(0x1900e8)
#define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec)
#define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4)
#define CCS0_CCS1_INTR_MASK XE_REG(0x190100)
#define CCS2_CCS3_INTR_MASK XE_REG(0x190104)
#define XEHPC_BCS1_BCS2_INTR_MASK XE_REG(0x190110)
#define XEHPC_BCS3_BCS4_INTR_MASK XE_REG(0x190114)
#define XEHPC_BCS5_BCS6_INTR_MASK XE_REG(0x190118)
#define XEHPC_BCS7_BCS8_INTR_MASK XE_REG(0x19011c)
#define GT_WAIT_SEMAPHORE_INTERRUPT REG_BIT(11)
#define GT_CONTEXT_SWITCH_INTERRUPT REG_BIT(8)
#define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT REG_BIT(4)
#define GT_CS_MASTER_ERROR_INTERRUPT REG_BIT(3)
#define GT_RENDER_USER_INTERRUPT REG_BIT(0)
#define PVC_GT0_PACKAGE_ENERGY_STATUS XE_REG(0x281004)
#define PVC_GT0_PACKAGE_RAPL_LIMIT XE_REG(0x281008)
#define PVC_GT0_PACKAGE_POWER_SKU_UNIT XE_REG(0x281068)
#define PVC_GT0_PLATFORM_ENERGY_STATUS XE_REG(0x28106c)
#define PVC_GT0_PACKAGE_POWER_SKU XE_REG(0x281080)
#endif


@@ -0,0 +1,143 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _XE_GUC_REGS_H_
#define _XE_GUC_REGS_H_
#include <linux/compiler.h>
#include <linux/types.h>
#include "regs/xe_reg_defs.h"
/* Definitions of GuC H/W registers, bits, etc. */
#define DIST_DBS_POPULATED XE_REG(0xd08)
#define DOORBELLS_PER_SQIDI_MASK REG_GENMASK(23, 16)
#define SQIDIS_DOORBELL_EXIST_MASK REG_GENMASK(15, 0)
#define DRBREGL(x) XE_REG(0x1000 + (x) * 8)
#define DRB_VALID REG_BIT(0)
#define DRBREGU(x) XE_REG(0x1000 + (x) * 8 + 4)
#define GTCR XE_REG(0x4274)
#define GTCR_INVALIDATE REG_BIT(0)
#define GUC_ARAT_C6DIS XE_REG(0xa178)
#define GUC_STATUS XE_REG(0xc000)
#define GS_AUTH_STATUS_MASK REG_GENMASK(31, 30)
#define GS_AUTH_STATUS_BAD REG_FIELD_PREP(GS_AUTH_STATUS_MASK, 0x1)
#define GS_AUTH_STATUS_GOOD REG_FIELD_PREP(GS_AUTH_STATUS_MASK, 0x2)
#define GS_MIA_MASK REG_GENMASK(18, 16)
#define GS_MIA_CORE_STATE REG_FIELD_PREP(GS_MIA_MASK, 0x1)
#define GS_MIA_HALT_REQUESTED REG_FIELD_PREP(GS_MIA_MASK, 0x2)
#define GS_MIA_ISR_ENTRY REG_FIELD_PREP(GS_MIA_MASK, 0x4)
#define GS_UKERNEL_MASK REG_GENMASK(15, 8)
#define GS_BOOTROM_MASK REG_GENMASK(7, 1)
#define GS_BOOTROM_RSA_FAILED REG_FIELD_PREP(GS_BOOTROM_MASK, 0x50)
#define GS_BOOTROM_JUMP_PASSED REG_FIELD_PREP(GS_BOOTROM_MASK, 0x76)
#define GS_MIA_IN_RESET REG_BIT(0)
#define GUC_WOPCM_SIZE XE_REG(0xc050)
#define GUC_WOPCM_SIZE_MASK REG_GENMASK(31, 12)
#define GUC_WOPCM_SIZE_LOCKED REG_BIT(0)
#define GUC_SHIM_CONTROL XE_REG(0xc064)
#define GUC_MOCS_INDEX_MASK REG_GENMASK(27, 24)
#define GUC_SHIM_WC_ENABLE REG_BIT(21)
#define GUC_ENABLE_MIA_CLOCK_GATING REG_BIT(15)
#define GUC_ENABLE_READ_CACHE_FOR_WOPCM_DATA REG_BIT(10)
#define GUC_ENABLE_READ_CACHE_FOR_SRAM_DATA REG_BIT(9)
#define GUC_MSGCH_ENABLE REG_BIT(4)
#define GUC_ENABLE_MIA_CACHING REG_BIT(2)
#define GUC_ENABLE_READ_CACHE_LOGIC REG_BIT(1)
#define GUC_DISABLE_SRAM_INIT_TO_ZEROES REG_BIT(0)
#define SOFT_SCRATCH(n) XE_REG(0xc180 + (n) * 4)
#define SOFT_SCRATCH_COUNT 16
#define HUC_KERNEL_LOAD_INFO XE_REG(0xc1dc)
#define HUC_LOAD_SUCCESSFUL REG_BIT(0)
#define UOS_RSA_SCRATCH(i) XE_REG(0xc200 + (i) * 4)
#define UOS_RSA_SCRATCH_COUNT 64
#define DMA_ADDR_0_LOW XE_REG(0xc300)
#define DMA_ADDR_0_HIGH XE_REG(0xc304)
#define DMA_ADDR_1_LOW XE_REG(0xc308)
#define DMA_ADDR_1_HIGH XE_REG(0xc30c)
#define DMA_ADDR_SPACE_MASK REG_GENMASK(20, 16)
#define DMA_ADDRESS_SPACE_WOPCM REG_FIELD_PREP(DMA_ADDR_SPACE_MASK, 7)
#define DMA_ADDRESS_SPACE_GGTT REG_FIELD_PREP(DMA_ADDR_SPACE_MASK, 8)
#define DMA_COPY_SIZE XE_REG(0xc310)
#define DMA_CTRL XE_REG(0xc314)
#define HUC_UKERNEL REG_BIT(9)
#define UOS_MOVE REG_BIT(4)
#define START_DMA REG_BIT(0)
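/*
 * Illustrative note on the firmware-load DMA flow implied by the registers
 * above (the sequence is a sketch, not lifted from this commit): program the
 * source address in DMA_ADDR_0_{LOW,HIGH}, tag the target address space via
 * DMA_ADDR_SPACE_MASK, set DMA_COPY_SIZE, then start the copy by writing
 * DMA_CTRL with START_DMA (plus UOS_MOVE for a GuC image, or HUC_UKERNEL for
 * a HuC image).
 */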
#define DMA_GUC_WOPCM_OFFSET XE_REG(0xc340)
#define GUC_WOPCM_OFFSET_SHIFT 14
#define GUC_WOPCM_OFFSET_MASK REG_GENMASK(31, GUC_WOPCM_OFFSET_SHIFT)
#define HUC_LOADING_AGENT_GUC REG_BIT(1)
#define GUC_WOPCM_OFFSET_VALID REG_BIT(0)
#define GUC_MAX_IDLE_COUNT XE_REG(0xc3e4)
#define GUC_SEND_INTERRUPT XE_REG(0xc4c8)
#define GUC_SEND_TRIGGER REG_BIT(0)
#define GUC_BCS_RCS_IER XE_REG(0xc550)
#define GUC_VCS2_VCS1_IER XE_REG(0xc554)
#define GUC_WD_VECS_IER XE_REG(0xc558)
#define GUC_PM_P24C_IER XE_REG(0xc55c)
#define GUC_TLB_INV_CR XE_REG(0xcee8)
#define GUC_TLB_INV_CR_INVALIDATE REG_BIT(0)
#define HUC_STATUS2 XE_REG(0xd3b0)
#define HUC_FW_VERIFIED REG_BIT(7)
#define GT_PM_CONFIG XE_REG(0x13816c)
#define GT_DOORBELL_ENABLE REG_BIT(0)
#define GUC_HOST_INTERRUPT XE_REG(0x1901f0)
#define VF_SW_FLAG(n) XE_REG(0x190240 + (n) * 4)
#define VF_SW_FLAG_COUNT 4
#define MED_GUC_HOST_INTERRUPT XE_REG(0x190304)
#define MED_VF_SW_FLAG(n) XE_REG(0x190310 + (n) * 4)
#define MED_VF_SW_FLAG_COUNT 4
/* GuC Interrupt Vector */
#define GUC_INTR_GUC2HOST REG_BIT(15)
#define GUC_INTR_EXEC_ERROR REG_BIT(14)
#define GUC_INTR_DISPLAY_EVENT REG_BIT(13)
#define GUC_INTR_SEM_SIG REG_BIT(12)
#define GUC_INTR_IOMMU2GUC REG_BIT(11)
#define GUC_INTR_DOORBELL_RANG REG_BIT(10)
#define GUC_INTR_DMA_DONE REG_BIT(9)
#define GUC_INTR_FATAL_ERROR REG_BIT(8)
#define GUC_INTR_NOTIF_ERROR REG_BIT(7)
#define GUC_INTR_SW_INT_6 REG_BIT(6)
#define GUC_INTR_SW_INT_5 REG_BIT(5)
#define GUC_INTR_SW_INT_4 REG_BIT(4)
#define GUC_INTR_SW_INT_3 REG_BIT(3)
#define GUC_INTR_SW_INT_2 REG_BIT(2)
#define GUC_INTR_SW_INT_1 REG_BIT(1)
#define GUC_INTR_SW_INT_0 REG_BIT(0)
#define GUC_NUM_DOORBELLS 256
/* format of the HW-monitored doorbell cacheline */
struct guc_doorbell_info {
u32 db_status;
#define GUC_DOORBELL_DISABLED 0
#define GUC_DOORBELL_ENABLED 1
u32 cookie;
u32 reserved[14];
} __packed;
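/*
 * Illustrative check (assumes <linux/build_bug.h> is pulled in): db_status,
 * cookie and 14 reserved dwords add up to 16 * sizeof(u32) == 64 bytes,
 * i.e. exactly one doorbell cacheline as the comment above requires.
 */
static_assert(sizeof(struct guc_doorbell_info) == 64);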
#endif


@@ -0,0 +1,17 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_LRC_LAYOUT_H_
#define _XE_LRC_LAYOUT_H_
#define CTX_CONTEXT_CONTROL (0x02 + 1)
#define CTX_RING_HEAD (0x04 + 1)
#define CTX_RING_TAIL (0x06 + 1)
#define CTX_RING_START (0x08 + 1)
#define CTX_RING_CTL (0x0a + 1)
#define CTX_PDP0_UDW (0x30 + 1)
#define CTX_PDP0_LDW (0x32 + 1)
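/*
 * Illustrative note: these offsets index dwords in the logical ring context
 * image, where state is stored as (register, value) pairs; each define points
 * at the value slot, hence the "+ 1". E.g., with a hypothetical dword pointer
 * to the mapped context state: u32 tail = regs[CTX_RING_TAIL];
 */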
#endif


@@ -0,0 +1,44 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_MCHBAR_REGS_H_
#define _XE_MCHBAR_REGS_H_
#include "regs/xe_reg_defs.h"
/*
* MCHBAR mirror.
*
* This mirrors the MCHBAR MMIO space whose location is determined by
 * device 0 function 0's PCI config register 0x44 or 0x48 and matches it in
* every way.
*/
#define MCHBAR_MIRROR_BASE_SNB 0x140000
#define PCU_CR_PACKAGE_POWER_SKU XE_REG(MCHBAR_MIRROR_BASE_SNB + 0x5930)
#define PKG_TDP GENMASK_ULL(14, 0)
#define PKG_MIN_PWR GENMASK_ULL(30, 16)
#define PKG_MAX_PWR GENMASK_ULL(46, 32)
#define PKG_MAX_WIN GENMASK_ULL(54, 48)
#define PKG_MAX_WIN_X GENMASK_ULL(54, 53)
#define PKG_MAX_WIN_Y GENMASK_ULL(52, 48)
#define PCU_CR_PACKAGE_POWER_SKU_UNIT XE_REG(MCHBAR_MIRROR_BASE_SNB + 0x5938)
#define PKG_PWR_UNIT REG_GENMASK(3, 0)
#define PKG_ENERGY_UNIT REG_GENMASK(12, 8)
#define PKG_TIME_UNIT REG_GENMASK(19, 16)
#define PCU_CR_PACKAGE_ENERGY_STATUS XE_REG(MCHBAR_MIRROR_BASE_SNB + 0x593c)
#define PCU_CR_PACKAGE_RAPL_LIMIT XE_REG(MCHBAR_MIRROR_BASE_SNB + 0x59a0)
#define PKG_PWR_LIM_1 REG_GENMASK(14, 0)
#define PKG_PWR_LIM_1_EN REG_BIT(15)
#define PKG_PWR_LIM_1_TIME REG_GENMASK(23, 17)
#define PKG_PWR_LIM_1_TIME_X REG_GENMASK(23, 22)
#define PKG_PWR_LIM_1_TIME_Y REG_GENMASK(21, 17)
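/*
 * Illustrative sketch (assumes the standard Intel RAPL encoding): the PL1
 * time window is stored as a 2-bit mantissa X and a 5-bit exponent Y, giving
 * window = (1 + X/4) * 2^Y time units, where the time unit comes from
 * PKG_TIME_UNIT in PCU_CR_PACKAGE_POWER_SKU_UNIT. The hypothetical helper
 * below returns quarter time units to stay in integer math.
 */
static inline u32 pkg_pwr_lim_1_time_qunits_example(u32 rapl_limit)
{
	u32 x = REG_FIELD_GET(PKG_PWR_LIM_1_TIME_X, rapl_limit);
	u32 y = REG_FIELD_GET(PKG_PWR_LIM_1_TIME_Y, rapl_limit);

	return (4 + x) << y;
}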
#endif /* _XE_MCHBAR_REGS_H_ */


@@ -0,0 +1,120 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_REG_DEFS_H_
#define _XE_REG_DEFS_H_
#include "compat-i915-headers/i915_reg_defs.h"
/**
* struct xe_reg - Register definition
*
 * Register definition to be used by the individual registers. Although the same
* definition is used for xe_reg and xe_reg_mcr, they use different internal
* APIs for accesses.
*/
struct xe_reg {
union {
struct {
/** @addr: address */
u32 addr:28;
/**
		 * @masked: register is "masked", with the upper 16 bits used
* to identify the bits that are updated on the lower
* bits
*/
u32 masked:1;
/**
* @mcr: register is multicast/replicated in the
* hardware and needs special handling. Any register
		 * with this set should also use the struct xe_reg_mcr type.
* It's only here so the few places that deal with MCR
* registers specially (xe_sr.c) and tests using the raw
* value can inspect it.
*/
u32 mcr:1;
/**
* @ext: access MMIO extension space for current register.
*/
u32 ext:1;
};
/** @raw: Raw value with both address and options */
u32 raw;
};
};
/**
* struct xe_reg_mcr - MCR register definition
*
 * An MCR register is laid out like a regular register, but uses a separate
 * type, since the internal API for accessing it is different: it is never
 * correct to use regular MMIO access.
*/
struct xe_reg_mcr {
/** @__reg: The register */
struct xe_reg __reg;
};
/**
* XE_REG_OPTION_MASKED - Register is "masked", with upper 16 bits marking the
* written bits on the lower 16 bits.
*
* It only applies to registers explicitly marked in bspec with
* "Access: Masked". Registers with this option can have write operations to
* specific lower bits by setting the corresponding upper bits. Other bits will
* not be affected. This allows register writes without needing a RMW cycle and
* without caching in software the register value.
*
* Example: a write with value 0x00010001 will set bit 0 and all other bits
* retain their previous values.
*
 * To be used with XE_REG(), XE_REG_MCR() and XE_REG_INITIALIZER().
*/
#define XE_REG_OPTION_MASKED .masked = 1
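/*
 * Illustrative sketch (hypothetical helper): building a masked-register write
 * that sets @bits while leaving the other lower 16 bits untouched, exactly as
 * in the 0x00010001 example above.
 */
static inline u32 masked_bits_enable_example(u16 bits)
{
	return ((u32)bits << 16) | bits;
}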
/**
 * XE_REG_INITIALIZER - Initializer for struct xe_reg.
* @r_: Register offset
* @...: Additional options like access mode. See struct xe_reg for available
* options.
*
 * The register offset is mandatory; additional options may be passed as
 * arguments. Usually ``XE_REG()`` should be preferred since it creates an
 * object of the right type. However, when initializing static const storage,
 * where a compound literal is not allowed, this can be used instead.
*/
#define XE_REG_INITIALIZER(r_, ...) { .addr = r_, __VA_ARGS__ }
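/*
 * Illustrative usage (hypothetical register offset): in static const storage,
 * where the compound-literal form is not allowed, the initializer is used
 * directly.
 */
static const struct xe_reg example_masked_reg =
	XE_REG_INITIALIZER(0x1234, .masked = 1);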
/**
* XE_REG - Create a struct xe_reg from offset and additional flags
* @r_: Register offset
* @...: Additional options like access mode. See struct xe_reg for available
* options.
*/
#define XE_REG(r_, ...) ((const struct xe_reg)XE_REG_INITIALIZER(r_, ##__VA_ARGS__))
/**
* XE_REG_EXT - Create a struct xe_reg from extension offset and additional
* flags
* @r_: Register extension offset
* @...: Additional options like access mode. See struct xe_reg for available
* options.
*/
#define XE_REG_EXT(r_, ...) \
((const struct xe_reg)XE_REG_INITIALIZER(r_, ##__VA_ARGS__, .ext = 1))
/**
* XE_REG_MCR - Create a struct xe_reg_mcr from offset and additional flags
* @r_: Register offset
* @...: Additional options like access mode. See struct xe_reg for available
* options.
*/
#define XE_REG_MCR(r_, ...) ((const struct xe_reg_mcr){ \
.__reg = XE_REG_INITIALIZER(r_, ##__VA_ARGS__, .mcr = 1) \
})
#endif


@@ -0,0 +1,68 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _XE_REGS_H_
#define _XE_REGS_H_
#include "regs/xe_reg_defs.h"
#define TIMESTAMP_OVERRIDE XE_REG(0x44074)
#define TIMESTAMP_OVERRIDE_US_COUNTER_DENOMINATOR_MASK REG_GENMASK(15, 12)
#define TIMESTAMP_OVERRIDE_US_COUNTER_DIVIDER_MASK REG_GENMASK(9, 0)
#define PCU_IRQ_OFFSET 0x444e0
#define GU_MISC_IRQ_OFFSET 0x444f0
#define GU_MISC_GSE REG_BIT(27)
#define SOFTWARE_FLAGS_SPR33 XE_REG(0x4f084)
#define GU_CNTL_PROTECTED XE_REG(0x10100C)
#define DRIVERINT_FLR_DIS REG_BIT(31)
#define GU_CNTL XE_REG(0x101010)
#define LMEM_INIT REG_BIT(7)
#define DRIVERFLR REG_BIT(31)
#define GU_DEBUG XE_REG(0x101018)
#define DRIVERFLR_STATUS REG_BIT(31)
#define XEHP_CLOCK_GATE_DIS XE_REG(0x101014)
#define SGSI_SIDECLK_DIS REG_BIT(17)
#define GGC XE_REG(0x108040)
#define GMS_MASK REG_GENMASK(15, 8)
#define GGMS_MASK REG_GENMASK(7, 6)
#define DSMBASE XE_REG(0x1080C0)
#define BDSM_MASK REG_GENMASK64(63, 20)
#define GSMBASE XE_REG(0x108100)
#define STOLEN_RESERVED XE_REG(0x1082c0)
#define WOPCM_SIZE_MASK REG_GENMASK64(9, 7)
#define MTL_RP_STATE_CAP XE_REG(0x138000)
#define MTL_GT_RPE_FREQUENCY XE_REG(0x13800c)
#define MTL_MEDIAP_STATE_CAP XE_REG(0x138020)
#define MTL_RPN_CAP_MASK REG_GENMASK(24, 16)
#define MTL_RP0_CAP_MASK REG_GENMASK(8, 0)
#define MTL_MPE_FREQUENCY XE_REG(0x13802c)
#define MTL_RPE_MASK REG_GENMASK(8, 0)
#define DG1_MSTR_TILE_INTR XE_REG(0x190008)
#define DG1_MSTR_IRQ REG_BIT(31)
#define DG1_MSTR_TILE(t) REG_BIT(t)
#define GFX_MSTR_IRQ XE_REG(0x190010)
#define MASTER_IRQ REG_BIT(31)
#define GU_MISC_IRQ REG_BIT(29)
#define DISPLAY_IRQ REG_BIT(16)
#define GT_DW_IRQ(x) REG_BIT(x)
#define PVC_RP_STATE_CAP XE_REG(0x281014)
#endif


@@ -0,0 +1,17 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _REGS_XE_SRIOV_REGS_H_
#define _REGS_XE_SRIOV_REGS_H_
#include "regs/xe_reg_defs.h"
#define XE2_LMEM_CFG XE_REG(0x48b0)
#define LMEM_CFG XE_REG(0xcf58)
#define LMEM_EN REG_BIT(31)
#define LMTT_DIR_PTR REG_GENMASK(30, 0) /* in multiples of 64KB */
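/*
 * Illustrative sketch (hypothetical helper): LMTT_DIR_PTR stores the
 * directory pointer in 64KB granules, so the offset is the field value
 * shifted up by 16.
 */
static inline u64 lmtt_dir_offset_example(u32 lmem_cfg)
{
	return (u64)REG_FIELD_GET(LMTT_DIR_PTR, lmem_cfg) << 16;
}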
#endif


@@ -0,0 +1,10 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DRM_XE_KUNIT_TEST) += \
xe_bo_test.o \
xe_dma_buf_test.o \
xe_migrate_test.o \
xe_mocs_test.o \
xe_pci_test.o \
xe_rtp_test.o \
xe_wa_test.o
