.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

===================================
Compute Express Link Memory Devices
===================================

A Compute Express Link Memory Device is a CXL component that implements the
CXL.mem protocol. It contains some amount of volatile memory, persistent memory,
or both. It is enumerated as a PCI device for configuration and passing
messages over an MMIO mailbox. Its contribution to the System Physical
Address space is handled via HDM (Host Managed Device Memory) decoders
that optionally define a device's contribution to an interleaved address
range across multiple devices underneath a host-bridge or interleaved
across host-bridges.

CXL Bus: Theory of Operation
============================

Similar to how a RAID driver takes disk objects and assembles them into a new
logical device, the CXL subsystem is tasked to take PCIe and ACPI objects and
assemble them into a CXL.mem decode topology. The need for runtime configuration
of the CXL.mem topology is also similar to RAID in that different environments
with the same hardware configuration may decide to assemble the topology in
contrasting ways. One may choose performance (RAID0), striping memory across
multiple Host Bridges and endpoints, while another may opt for fault tolerance
and disable any striping in the CXL.mem topology.

Platform firmware enumerates a menu of interleave options at the "CXL root port"
(Linux term for the top of the CXL decode topology). From there, PCIe topology
dictates which endpoints can participate in which Host Bridge decode regimes.
Each PCIe Switch in the path between the root and an endpoint introduces a point
at which the interleave can be split. For example, platform firmware may specify
that a given range only decodes to one Host Bridge, but that Host Bridge may in
turn interleave cycles across multiple Root Ports. An intervening Switch between
a port and an endpoint may interleave cycles across multiple Downstream Switch
Ports, etc.

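The arithmetic behind such a split can be sketched directly (hypothetical
numbers, not the output of any CXL tool): with an interleave granularity of
'gran' bytes across 'ways' targets, consecutive 'gran'-sized chunks of the
range are serviced round-robin, so the target index for a given address is::

  # Hypothetical example: 4-way interleave at 1KiB granularity.
  # The target that services an address is (address / granularity) % ways.
  spa=$((0x8050000400)); gran=1024; ways=4
  echo "target index: $(( (spa / gran) % ways ))"

This prints "target index: 1"; each level of HDM decode applies the same style
of modulo selection to route a cycle further down the topology.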
Here is a sample listing of a CXL topology defined by 'cxl_test'. The 'cxl_test'
module generates an emulated CXL topology of 2 Host Bridges, each with 2 Root
Ports. Each of those Root Ports is connected to a 2-way switch with endpoints
connected to those downstream ports, for a total of 8 endpoints::

  # cxl list -BEMPu -b cxl_test
  {
    "bus":"root3",
    "provider":"cxl_test",
    "ports:root3":[
      {
        "port":"port5",
        "host":"cxl_host_bridge.1",
        "ports:port5":[
          {
            "port":"port8",
            "host":"cxl_switch_uport.1",
            "endpoints:port8":[
              {
                "endpoint":"endpoint9",
                "host":"mem2",
                "memdev":{
                  "memdev":"mem2",
                  "pmem_size":"256.00 MiB (268.44 MB)",
                  "ram_size":"256.00 MiB (268.44 MB)",
                  "serial":"0x1",
                  "numa_node":1,
                  "host":"cxl_mem.1"
                }
              },
              {
                "endpoint":"endpoint15",
                "host":"mem6",
                "memdev":{
                  "memdev":"mem6",
                  "pmem_size":"256.00 MiB (268.44 MB)",
                  "ram_size":"256.00 MiB (268.44 MB)",
                  "serial":"0x5",
                  "numa_node":1,
                  "host":"cxl_mem.5"
                }
              }
            ]
          },
          {
            "port":"port12",
            "host":"cxl_switch_uport.3",
            "endpoints:port12":[
              {
                "endpoint":"endpoint17",
                "host":"mem8",
                "memdev":{
                  "memdev":"mem8",
                  "pmem_size":"256.00 MiB (268.44 MB)",
                  "ram_size":"256.00 MiB (268.44 MB)",
                  "serial":"0x7",
                  "numa_node":1,
                  "host":"cxl_mem.7"
                }
              },
              {
                "endpoint":"endpoint13",
                "host":"mem4",
                "memdev":{
                  "memdev":"mem4",
                  "pmem_size":"256.00 MiB (268.44 MB)",
                  "ram_size":"256.00 MiB (268.44 MB)",
                  "serial":"0x3",
                  "numa_node":1,
                  "host":"cxl_mem.3"
                }
              }
            ]
          }
        ]
      },
      {
        "port":"port4",
        "host":"cxl_host_bridge.0",
        "ports:port4":[
          {
            "port":"port6",
            "host":"cxl_switch_uport.0",
            "endpoints:port6":[
              {
                "endpoint":"endpoint7",
                "host":"mem1",
                "memdev":{
                  "memdev":"mem1",
                  "pmem_size":"256.00 MiB (268.44 MB)",
                  "ram_size":"256.00 MiB (268.44 MB)",
                  "serial":"0",
                  "numa_node":0,
                  "host":"cxl_mem.0"
                }
              },
              {
                "endpoint":"endpoint14",
                "host":"mem5",
                "memdev":{
                  "memdev":"mem5",
                  "pmem_size":"256.00 MiB (268.44 MB)",
                  "ram_size":"256.00 MiB (268.44 MB)",
                  "serial":"0x4",
                  "numa_node":0,
                  "host":"cxl_mem.4"
                }
              }
            ]
          },
          {
            "port":"port10",
            "host":"cxl_switch_uport.2",
            "endpoints:port10":[
              {
                "endpoint":"endpoint16",
                "host":"mem7",
                "memdev":{
                  "memdev":"mem7",
                  "pmem_size":"256.00 MiB (268.44 MB)",
                  "ram_size":"256.00 MiB (268.44 MB)",
                  "serial":"0x6",
                  "numa_node":0,
                  "host":"cxl_mem.6"
                }
              },
              {
                "endpoint":"endpoint11",
                "host":"mem3",
                "memdev":{
                  "memdev":"mem3",
                  "pmem_size":"256.00 MiB (268.44 MB)",
                  "ram_size":"256.00 MiB (268.44 MB)",
                  "serial":"0x2",
                  "numa_node":0,
                  "host":"cxl_mem.2"
                }
              }
            ]
          }
        ]
      }
    ]
  }

In that listing each "root", "port", and "endpoint" object corresponds to a
kernel 'struct cxl_port' object. A 'cxl_port' is a device that can decode
CXL.mem to its descendants. So "root" claims non-PCIe-enumerable platform decode
ranges and decodes them to "ports", "ports" decode to "endpoints", and
"endpoints" represent the decode from SPA (System Physical Address) to DPA
(Device Physical Address).

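The endpoint's SPA to DPA translation can be sketched with similar arithmetic
(hypothetical numbers, not the kernel's actual decode path): a device in a
'ways'-way interleave owns every 'ways'-th chunk of 'gran' bytes and packs the
chunks it owns contiguously into its DPA space::

  # Hypothetical: offset 0x1400 into a 4-way, 1KiB-granularity range.
  spa_offset=$((0x1400)); gran=1024; ways=4
  chunk=$(( spa_offset / gran ))    # chunk index 5 within the range
  dpa_offset=$(( (chunk / ways) * gran + spa_offset % gran ))
  echo "dpa offset: $dpa_offset"

This prints "dpa offset: 1024": chunk 5 is the second chunk owned by the device
at interleave position 1, so it lands one granule into that device's DPA space.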
Continuing the RAID analogy, disks have both topology metadata and on-device
metadata that determine RAID set assembly. CXL Port topology and CXL Port link
status is metadata for CXL.mem set assembly. The CXL Port topology is enumerated
by the arrival of a CXL.mem device. I.e. unless and until the PCIe core attaches
the cxl_pci driver to a CXL Memory Expander there is no role for CXL Port
objects. Conversely, for hot-unplug / removal scenarios, there is no need for
the Linux PCI core to tear down switch-level CXL resources because the endpoint
->remove() event cleans up the port data that was established to support that
Memory Expander.

The port metadata and potential decode schemes that a given memory device may
participate in can be determined via a command like::

  # cxl list -BDMu -d root -m mem3
  {
    "bus":"root3",
    "provider":"cxl_test",
    "decoders:root3":[
      {
        "decoder":"decoder3.1",
        "resource":"0x8030000000",
        "size":"512.00 MiB (536.87 MB)",
        "volatile_capable":true,
        "nr_targets":2
      },
      {
        "decoder":"decoder3.3",
        "resource":"0x8060000000",
        "size":"512.00 MiB (536.87 MB)",
        "pmem_capable":true,
        "nr_targets":2
      },
      {
        "decoder":"decoder3.0",
        "resource":"0x8020000000",
        "size":"256.00 MiB (268.44 MB)",
        "volatile_capable":true,
        "nr_targets":1
      },
      {
        "decoder":"decoder3.2",
        "resource":"0x8050000000",
        "size":"256.00 MiB (268.44 MB)",
        "pmem_capable":true,
        "nr_targets":1
      }
    ],
    "memdevs:root3":[
      {
        "memdev":"mem3",
        "pmem_size":"256.00 MiB (268.44 MB)",
        "ram_size":"256.00 MiB (268.44 MB)",
        "serial":"0x2",
        "numa_node":0,
        "host":"cxl_mem.2"
      }
    ]
  }

...which queries the CXL topology to ask "given CXL Memory Expander with a
kernel device name of 'mem3', which platform level decode ranges may this device
participate in?". A given expander can participate in multiple CXL.mem
interleave sets simultaneously depending on how many decoder resources it has.
In this example mem3 can participate in one or more of a PMEM interleave that
spans two Host Bridges, a PMEM interleave that targets a single Host Bridge, a
volatile memory interleave that spans 2 Host Bridges, and a volatile memory
interleave that only targets a single Host Bridge.

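Because 'cxl list' emits JSON, listings like the one above lend themselves to
scripting. A minimal sketch (assuming only standard shell utilities; a real
JSON processor is the better tool for anything more involved) that pulls the
decoder names out of a captured fragment::

  # Extract "decoder" values from captured 'cxl list' JSON.
  json='{"decoder":"decoder3.1"},{"decoder":"decoder3.3"}'
  printf '%s\n' "$json" | grep -o '"decoder":"[^"]*"' | cut -d'"' -f4

This prints the two decoder names, one per line.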
Conversely, the memory devices that can participate in a given platform level
decode scheme can be determined via a command like the following::

  # cxl list -MDu -d 3.2
  [
    {
      "memdevs":[
        {
          "memdev":"mem1",
          "pmem_size":"256.00 MiB (268.44 MB)",
          "ram_size":"256.00 MiB (268.44 MB)",
          "serial":"0",
          "numa_node":0,
          "host":"cxl_mem.0"
        },
        {
          "memdev":"mem5",
          "pmem_size":"256.00 MiB (268.44 MB)",
          "ram_size":"256.00 MiB (268.44 MB)",
          "serial":"0x4",
          "numa_node":0,
          "host":"cxl_mem.4"
        },
        {
          "memdev":"mem7",
          "pmem_size":"256.00 MiB (268.44 MB)",
          "ram_size":"256.00 MiB (268.44 MB)",
          "serial":"0x6",
          "numa_node":0,
          "host":"cxl_mem.6"
        },
        {
          "memdev":"mem3",
          "pmem_size":"256.00 MiB (268.44 MB)",
          "ram_size":"256.00 MiB (268.44 MB)",
          "serial":"0x2",
          "numa_node":0,
          "host":"cxl_mem.2"
        }
      ]
    },
    {
      "root decoders":[
        {
          "decoder":"decoder3.2",
          "resource":"0x8050000000",
          "size":"256.00 MiB (268.44 MB)",
          "pmem_capable":true,
          "nr_targets":1
        }
      ]
    }
  ]

...where the naming scheme for decoders is "decoder<port_id>.<instance_id>".
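
That convention can be taken apart with plain shell parameter expansion, for
example when scripting against these names (illustrative only)::

  # Split "decoder<port_id>.<instance_id>" into its two ids.
  name=decoder3.2
  ids=${name#decoder}       # strip the "decoder" prefix -> "3.2"
  port_id=${ids%%.*}        # "3"
  instance_id=${ids#*.}     # "2"
  echo "port $port_id, instance $instance_id"

This prints "port 3, instance 2", matching the 'decoder3.2' used above.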

Driver Infrastructure
=====================

This section covers the driver infrastructure for a CXL memory device.

CXL Memory Device
-----------------

.. kernel-doc:: drivers/cxl/pci.c
   :doc: cxl pci

.. kernel-doc:: drivers/cxl/pci.c
   :internal:

.. kernel-doc:: drivers/cxl/mem.c
   :doc: cxl mem

CXL Port
--------

.. kernel-doc:: drivers/cxl/port.c
   :doc: cxl port

CXL Core
--------

.. kernel-doc:: drivers/cxl/cxl.h
   :doc: cxl objects

.. kernel-doc:: drivers/cxl/cxl.h
   :internal:

.. kernel-doc:: drivers/cxl/core/port.c
   :doc: cxl core

.. kernel-doc:: drivers/cxl/core/port.c
   :identifiers:

.. kernel-doc:: drivers/cxl/core/pci.c
   :doc: cxl core pci

.. kernel-doc:: drivers/cxl/core/pci.c
   :identifiers:

.. kernel-doc:: drivers/cxl/core/pmem.c
   :doc: cxl pmem

.. kernel-doc:: drivers/cxl/core/regs.c
   :doc: cxl registers

.. kernel-doc:: drivers/cxl/core/mbox.c
   :doc: cxl mbox

CXL Regions
-----------

.. kernel-doc:: drivers/cxl/core/region.c
   :doc: cxl core region

.. kernel-doc:: drivers/cxl/core/region.c
   :identifiers:

External Interfaces
===================

CXL IOCTL Interface
-------------------

.. kernel-doc:: include/uapi/linux/cxl_mem.h
   :doc: UAPI

.. kernel-doc:: include/uapi/linux/cxl_mem.h
   :internal: