.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>
===================================
Compute Express Link Memory Devices
===================================

A Compute Express Link Memory Device is a CXL component that implements the
CXL.mem protocol. It contains some amount of volatile memory, persistent memory,
or both. It is enumerated as a PCI device for configuration and passing
messages over an MMIO mailbox. Its contribution to the System Physical
Address space is handled via HDM (Host Managed Device Memory) decoders
that optionally define a device's contribution to an interleaved address
range across multiple devices underneath a host-bridge or interleaved
across host-bridges.
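
For orientation, here is a minimal sketch of how such a device appears to
userspace once the cxl_pci driver has claimed it, assuming a single expander
registered as the hypothetical memdev "mem0" (output trimmed)::

  # ls /sys/bus/cxl/devices/
  mem0
  # cxl list -m mem0 -u
  {
    "memdev":"mem0",
    "ram_size":"256.00 MiB (268.44 MB)",
    "serial":"0x1",
    "host":"0000:35:00.0"
  }
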
CXL Bus: Theory of Operation
============================

Similar to how a RAID driver takes disk objects and assembles them into a new
logical device, the CXL subsystem is tasked to take PCIe and ACPI objects and
assemble them into a CXL.mem decode topology. The need for runtime configuration
of the CXL.mem topology is also similar to RAID in that different environments
with the same hardware configuration may decide to assemble the topology in
contrasting ways. One may choose performance (RAID0) striping memory across
multiple Host Bridges and endpoints while another may opt for fault tolerance
and disable any striping in the CXL.mem topology.

Platform firmware enumerates a menu of interleave options at the "CXL root port"
(Linux term for the top of the CXL decode topology). From there, PCIe topology
dictates which endpoints can participate in which Host Bridge decode regimes.
Each PCIe Switch in the path between the root and an endpoint introduces a point
at which the interleave can be split. For example, platform firmware may state
that a given range only decodes to one Host Bridge, but that Host Bridge may in
turn interleave cycles across multiple Root Ports. An intervening Switch between
a port and an endpoint may interleave cycles across multiple Downstream Switch
Ports, etc.
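
The "menu" published by platform firmware shows up in Linux as a set of root
decoder objects. For example, the platform-level interleave options of the
'cxl_test' topology described below can be inspected with a command like::

  # cxl list -Du -d root -b cxl_test

...where each returned decoder reports the host physical address range it
covers, whether it is volatile_capable and/or pmem_capable, and how many Host
Bridge targets (nr_targets) it can interleave across.
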

Here is a sample listing of a CXL topology defined by 'cxl_test'. The 'cxl_test'
module generates an emulated CXL topology of 2 Host Bridges each with 2 Root
Ports. Each of those Root Ports is connected to a 2-way switch with endpoints
connected to its downstream ports, for a total of 8 endpoints::

# cxl list -BEMPu -b cxl_test
{
"bus":"root3",
"provider":"cxl_test",
"ports:root3":[
{
"port":"port5",
"host":"cxl_host_bridge.1",
"ports:port5":[
{
"port":"port8",
"host":"cxl_switch_uport.1",
"endpoints:port8":[
{
"endpoint":"endpoint9",
"host":"mem2",
"memdev":{
"memdev":"mem2",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x1",
"numa_node":1,
"host":"cxl_mem.1"
}
},
{
"endpoint":"endpoint15",
"host":"mem6",
"memdev":{
"memdev":"mem6",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x5",
"numa_node":1,
"host":"cxl_mem.5"
}
}
]
},
{
"port":"port12",
"host":"cxl_switch_uport.3",
"endpoints:port12":[
{
"endpoint":"endpoint17",
"host":"mem8",
"memdev":{
"memdev":"mem8",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x7",
"numa_node":1,
"host":"cxl_mem.7"
}
},
{
"endpoint":"endpoint13",
"host":"mem4",
"memdev":{
"memdev":"mem4",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x3",
"numa_node":1,
"host":"cxl_mem.3"
}
}
]
}
]
},
{
"port":"port4",
"host":"cxl_host_bridge.0",
"ports:port4":[
{
"port":"port6",
"host":"cxl_switch_uport.0",
"endpoints:port6":[
{
"endpoint":"endpoint7",
"host":"mem1",
"memdev":{
"memdev":"mem1",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0",
"numa_node":0,
"host":"cxl_mem.0"
}
},
{
"endpoint":"endpoint14",
"host":"mem5",
"memdev":{
"memdev":"mem5",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x4",
"numa_node":0,
"host":"cxl_mem.4"
}
}
]
},
{
"port":"port10",
"host":"cxl_switch_uport.2",
"endpoints:port10":[
{
"endpoint":"endpoint16",
"host":"mem7",
"memdev":{
"memdev":"mem7",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x6",
"numa_node":0,
"host":"cxl_mem.6"
}
},
{
"endpoint":"endpoint11",
"host":"mem3",
"memdev":{
"memdev":"mem3",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x2",
"numa_node":0,
"host":"cxl_mem.2"
}
}
]
}
]
}
]
}
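
The listing above is emulated entirely in software, so it can be recreated on
any machine where the cxl_test mocking modules (see tools/testing/cxl/ in the
kernel source) are available, roughly::

  # modprobe cxl_test
  # cxl list -BEMPu -b cxl_test
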

In that listing each "root", "port", and "endpoint" object corresponds to a
kernel 'struct cxl_port' object. A 'cxl_port' is a device that can decode
CXL.mem to its descendants. So "root" claims non-PCIe enumerable platform decode
ranges and decodes them to "ports", "ports" decode to "endpoints", and
"endpoints" represent the decode from SPA (System Physical Address) to DPA
(Device Physical Address).
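
Each of those objects is registered on the CXL bus and is therefore visible in
sysfs. For the emulated topology above, a truncated listing might look like::

  # ls /sys/bus/cxl/devices/
  decoder3.0  endpoint9  mem2  port5  root3  ...

...where "root3", "port5", and "endpoint9" are all 'struct cxl_port' instances,
and each decoder is a child of the port that published it.
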

Continuing the RAID analogy, disks have both topology metadata and on-device
metadata that determine RAID set assembly. CXL Port topology and CXL Port link
status is metadata for CXL.mem set assembly. The CXL Port topology is enumerated
by the arrival of a CXL.mem device. I.e. unless and until the PCIe core attaches
the cxl_pci driver to a CXL Memory Expander there is no role for CXL Port
objects. Conversely for hot-unplug / removal scenarios, there is no need for
the Linux PCI core to tear down switch-level CXL resources because the endpoint
->remove() event cleans up the port data that was established to support that
Memory Expander.
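
Note that some of that teardown runs asynchronously (endpoint unregistration is
deferred to a workqueue), so tooling that wants to observe the post-removal
state can first synchronize with any outstanding port/endpoint teardown work::

  # echo 1 > /sys/bus/cxl/flush
  # cxl list -BEMPu
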

The port metadata and potential decode schemes that a given memory device may
participate in can be determined via a command like::

# cxl list -BDMu -d root -m mem3
{
"bus":"root3",
"provider":"cxl_test",
"decoders:root3":[
{
"decoder":"decoder3.1",
"resource":"0x8030000000",
"size":"512.00 MiB (536.87 MB)",
"volatile_capable":true,
"nr_targets":2
},
{
"decoder":"decoder3.3",
"resource":"0x8060000000",
"size":"512.00 MiB (536.87 MB)",
"pmem_capable":true,
"nr_targets":2
},
{
"decoder":"decoder3.0",
"resource":"0x8020000000",
"size":"256.00 MiB (268.44 MB)",
"volatile_capable":true,
"nr_targets":1
},
{
"decoder":"decoder3.2",
"resource":"0x8050000000",
"size":"256.00 MiB (268.44 MB)",
"pmem_capable":true,
"nr_targets":1
}
],
"memdevs:root3":[
{
"memdev":"mem3",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x2",
"numa_node":0,
"host":"cxl_mem.2"
}
]
}

...which queries the CXL topology to ask "given the CXL Memory Expander with a
kernel device name of 'mem3', which platform level decode ranges may this device
participate in?". A given expander can participate in multiple CXL.mem
interleave sets simultaneously depending on how many decoder resources it has.
In this example mem3 can participate in one or more of a PMEM interleave that
spans two Host Bridges, a PMEM interleave that targets a single Host Bridge, a
Volatile memory interleave that spans 2 Host Bridges, and a Volatile memory
interleave that only targets a single Host Bridge.

Conversely, the memory devices that can participate in a given platform level
decode scheme can be determined via a command like the following::

# cxl list -MDu -d 3.2
[
{
"memdevs":[
{
"memdev":"mem1",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0",
"numa_node":0,
"host":"cxl_mem.0"
},
{
"memdev":"mem5",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x4",
"numa_node":0,
"host":"cxl_mem.4"
},
{
"memdev":"mem7",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x6",
"numa_node":0,
"host":"cxl_mem.6"
},
{
"memdev":"mem3",
"pmem_size":"256.00 MiB (268.44 MB)",
"ram_size":"256.00 MiB (268.44 MB)",
"serial":"0x2",
"numa_node":0,
"host":"cxl_mem.2"
}
]
},
{
"root decoders":[
{
"decoder":"decoder3.2",
"resource":"0x8050000000",
"size":"256.00 MiB (268.44 MB)",
"pmem_capable":true,
"nr_targets":1
}
]
}
]

...where the naming scheme for decoders is "decoder<port_id>.<instance_id>".
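
The "3" in "decoder3.2" identifies the port that published the decoder, in this
case "root3". That relationship is also visible in sysfs, where each decoder is
a child device of its port, e.g.::

  # ls -d /sys/bus/cxl/devices/root3/decoder3.*
  /sys/bus/cxl/devices/root3/decoder3.0  /sys/bus/cxl/devices/root3/decoder3.2
  /sys/bus/cxl/devices/root3/decoder3.1  /sys/bus/cxl/devices/root3/decoder3.3
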

Driver Infrastructure
=====================

This section covers the driver infrastructure for a CXL memory device.
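
The infrastructure below is spread across a shared core (cxl_core) and a set of
endpoint, port, and platform drivers. As a rough sketch (module auto-loading
normally takes care of this), the pieces can be loaded by hand on a modular
kernel::

  # modprobe cxl_acpi   # platform firmware (ACPI0017) root enumeration
  # modprobe cxl_pci    # PCIe endpoint function: register and mailbox discovery
  # modprobe cxl_port   # switch and Host Bridge port objects
  # modprobe cxl_mem    # connects endpoints to the CXL.mem decode topology
  # modprobe cxl_pmem   # persistent memory (nvdimm bridge) support
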
CXL Memory Device
-----------------

.. kernel-doc:: drivers/cxl/pci.c
   :doc: cxl pci

.. kernel-doc:: drivers/cxl/pci.c
   :internal:

.. kernel-doc:: drivers/cxl/mem.c
   :doc: cxl mem

CXL Port
--------

.. kernel-doc:: drivers/cxl/port.c
   :doc: cxl port

CXL Core
--------

.. kernel-doc:: drivers/cxl/cxl.h
   :doc: cxl objects

.. kernel-doc:: drivers/cxl/cxl.h
   :internal:

.. kernel-doc:: drivers/cxl/core/port.c
   :doc: cxl core

.. kernel-doc:: drivers/cxl/core/port.c
   :identifiers:

.. kernel-doc:: drivers/cxl/core/pci.c
   :doc: cxl core pci

.. kernel-doc:: drivers/cxl/core/pci.c
   :identifiers:

.. kernel-doc:: drivers/cxl/core/pmem.c
   :doc: cxl pmem

.. kernel-doc:: drivers/cxl/core/regs.c
   :doc: cxl registers

.. kernel-doc:: drivers/cxl/core/mbox.c
   :doc: cxl mbox

CXL Regions
-----------

.. kernel-doc:: drivers/cxl/core/region.c
   :doc: cxl core region

.. kernel-doc:: drivers/cxl/core/region.c
   :identifiers:
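
As a rough sketch of the flow these interfaces implement, a new PMEM region can
be created from a root decoder's capacity via sysfs, assuming a pmem-capable
root decoder named "decoder0.0"::

  # region=$(cat /sys/bus/cxl/devices/decoder0.0/create_pmem_region)
  # echo $region > /sys/bus/cxl/devices/decoder0.0/create_pmem_region
  # ls -d /sys/bus/cxl/devices/$region
  # echo $region > /sys/bus/cxl/devices/decoder0.0/delete_region
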

External Interfaces
===================

CXL IOCTL Interface
-------------------

.. kernel-doc:: include/uapi/linux/cxl_mem.h
   :doc: UAPI

.. kernel-doc:: include/uapi/linux/cxl_mem.h
   :internal: