Merge 5.18-rc2 into staging-next

We need the staging fix in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman 2022-04-11 08:43:42 +02:00
commit 908662dc82
378 changed files with 4498 additions and 3420 deletions


@ -391,6 +391,10 @@ Uwe Kleine-König <ukleinek@strlen.de>
Uwe Kleine-König <ukl@pengutronix.de>
Uwe Kleine-König <Uwe.Kleine-Koenig@digi.com>
Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
Vasily Averin <vasily.averin@linux.dev> <vvs@virtuozzo.com>
Vasily Averin <vasily.averin@linux.dev> <vvs@openvz.org>
Vasily Averin <vasily.averin@linux.dev> <vvs@parallels.com>
Vasily Averin <vasily.averin@linux.dev> <vvs@sw.ru>
Vinod Koul <vkoul@kernel.org> <vinod.koul@intel.com>
Vinod Koul <vkoul@kernel.org> <vinod.koul@linux.intel.com>
Vinod Koul <vkoul@kernel.org> <vkoul@infradead.org>


@ -41,13 +41,18 @@ or ``VFAT_FS``. To run ``FAT_KUNIT_TEST``, the ``.kunitconfig`` has:
CONFIG_MSDOS_FS=y
CONFIG_FAT_KUNIT_TEST=y
1. A good starting point for the ``.kunitconfig``, is the KUnit default
config. Run the command:
1. A good starting point for the ``.kunitconfig`` is the KUnit default config.
You can generate it by running:
.. code-block:: bash
cd $PATH_TO_LINUX_REPO
cp tools/testing/kunit/configs/default.config .kunitconfig
tools/testing/kunit/kunit.py config
cat .kunit/.kunitconfig
.. note ::
``.kunitconfig`` lives in the ``--build_dir`` used by kunit.py, which is
``.kunit`` by default.
.. note ::
You may want to remove CONFIG_KUNIT_ALL_TESTS from the ``.kunitconfig`` as


@ -51,7 +51,6 @@ properties:
Video port for MIPI DPI output (panel or connector).
required:
- port@0
- port@1
required:


@ -39,7 +39,6 @@ properties:
Video port for MIPI DPI output (panel or connector).
required:
- port@0
- port@1
required:


@ -83,6 +83,8 @@ properties:
required:
- compatible
- reg
- width-mm
- height-mm
- panel-timing
unevaluatedProperties: false


@ -106,6 +106,12 @@ properties:
phy-mode:
$ref: "#/properties/phy-connection-type"
pcs-handle:
$ref: /schemas/types.yaml#/definitions/phandle
description:
Specifies a reference to a node representing a PCS PHY device on a MDIO
bus to link with an external PHY (phy-handle) if exists.
phy-handle:
$ref: /schemas/types.yaml#/definitions/phandle
description:


@ -45,20 +45,3 @@ Optional properties:
In fiber mode, auto-negotiation is disabled and the PHY can only work in
100base-fx (full and half duplex) modes.
- lan8814,ignore-ts: If present the PHY will not support timestamping.
This option acts as check whether Timestamping is supported by
hardware or not. LAN8814 phy support hardware tmestamping.
- lan8814,latency_rx_10: Configures Latency value of phy in ingress at 10 Mbps.
- lan8814,latency_tx_10: Configures Latency value of phy in egress at 10 Mbps.
- lan8814,latency_rx_100: Configures Latency value of phy in ingress at 100 Mbps.
- lan8814,latency_tx_100: Configures Latency value of phy in egress at 100 Mbps.
- lan8814,latency_rx_1000: Configures Latency value of phy in ingress at 1000 Mbps.
- lan8814,latency_tx_1000: Configures Latency value of phy in egress at 1000 Mbps.


@ -26,7 +26,8 @@ Required properties:
specified, the TX/RX DMA interrupts should be on that node
instead, and only the Ethernet core interrupt is optionally
specified here.
- phy-handle : Should point to the external phy device.
- phy-handle : Should point to the external phy device if exists. Pointing
this to the PCS/PMA PHY is deprecated and should be avoided.
See ethernet.txt file in the same directory.
- xlnx,rxmem : Set to allocated memory buffer for Rx/Tx in the hardware
@ -68,6 +69,11 @@ Optional properties:
required through the core's MDIO interface (i.e. always,
unless the PHY is accessed through a different bus).
- pcs-handle: Phandle to the internal PCS/PMA PHY in SGMII or 1000Base-X
modes, where "pcs-handle" should be used to point
to the PCS/PMA PHY, and "phy-handle" should point to an
external PHY if exists.
Example:
axi_ethernet_eth: ethernet@40c00000 {
compatible = "xlnx,axi-ethernet-1.00.a";


@ -185,6 +185,12 @@ DMA Fence Chain
.. kernel-doc:: include/linux/dma-fence-chain.h
:internal:
DMA Fence unwrap
~~~~~~~~~~~~~~~~
.. kernel-doc:: include/linux/dma-fence-unwrap.h
:internal:
DMA Fence uABI/Sync File
~~~~~~~~~~~~~~~~~~~~~~~~


@ -10,21 +10,21 @@ in joining the effort.
Design principles
=================
The Distributed Switch Architecture is a subsystem which was primarily designed
to support Marvell Ethernet switches (MV88E6xxx, a.k.a Linkstreet product line)
using Linux, but has since evolved to support other vendors as well.
The Distributed Switch Architecture subsystem was primarily designed to
support Marvell Ethernet switches (MV88E6xxx, a.k.a. Link Street product
line) using Linux, but has since evolved to support other vendors as well.
The original philosophy behind this design was to be able to use unmodified
Linux tools such as bridge, iproute2, ifconfig to work transparently whether
they configured/queried a switch port network device or a regular network
device.
An Ethernet switch is typically comprised of multiple front-panel ports, and one
or more CPU or management port. The DSA subsystem currently relies on the
An Ethernet switch typically comprises multiple front-panel ports and one
or more CPU or management ports. The DSA subsystem currently relies on the
presence of a management port connected to an Ethernet controller capable of
receiving Ethernet frames from the switch. This is a very common setup for all
kinds of Ethernet switches found in Small Home and Office products: routers,
gateways, or even top-of-the rack switches. This host Ethernet controller will
gateways, or even top-of-rack switches. This host Ethernet controller will
be later referred to as "master" and "cpu" in DSA terminology and code.
The D in DSA stands for Distributed, because the subsystem has been designed
@ -33,14 +33,14 @@ using upstream and downstream Ethernet links between switches. These specific
ports are referred to as "dsa" ports in DSA terminology and code. A collection
of multiple switches connected to each other is called a "switch tree".
For each front-panel port, DSA will create specialized network devices which are
For each front-panel port, DSA creates specialized network devices which are
used as controlling and data-flowing endpoints for use by the Linux networking
stack. These specialized network interfaces are referred to as "slave" network
interfaces in DSA terminology and code.
The ideal case for using DSA is when an Ethernet switch supports a "switch tag"
which is a hardware feature making the switch insert a specific tag for each
Ethernet frames it received to/from specific ports to help the management
Ethernet frame it receives to/from specific ports to help the management
interface figure out:
- what port is this frame coming from
@ -125,7 +125,7 @@ other switches from the same fabric, and in this case, the outermost switch
ports must decapsulate the packet.
Note that in certain cases, it might be the case that the tagging format used
by a leaf switch (not connected directly to the CPU) to not be the same as what
by a leaf switch (not connected directly to the CPU) is not the same as what
the network stack sees. This can be seen with Marvell switch trees, where the
CPU port can be configured to use either the DSA or the Ethertype DSA (EDSA)
format, but the DSA links are configured to use the shorter (without Ethertype)
@ -270,21 +270,21 @@ These interfaces are specialized in order to:
to/from specific switch ports
- query the switch for ethtool operations: statistics, link state,
Wake-on-LAN, register dumps...
- external/internal PHY management: link, auto-negotiation etc.
- manage external/internal PHY: link, auto-negotiation, etc.
These slave network devices have custom net_device_ops and ethtool_ops function
pointers which allow DSA to introduce a level of layering between the networking
stack/ethtool, and the switch driver implementation.
stack/ethtool and the switch driver implementation.
Upon frame transmission from these slave network devices, DSA will look up which
switch tagging protocol is currently registered with these network devices, and
switch tagging protocol is currently registered with these network devices and
invoke a specific transmit routine which takes care of adding the relevant
switch tag in the Ethernet frames.
These frames are then queued for transmission using the master network device
``ndo_start_xmit()`` function, since they contain the appropriate switch tag, the
``ndo_start_xmit()`` function. Since they contain the appropriate switch tag, the
Ethernet switch will be able to process these incoming frames from the
management interface and delivers these frames to the physical switch port.
management interface and deliver them to the physical switch port.
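
A rough standalone sketch of what such a transmit routine does (the 4-byte tag format below is invented for illustration, not one of the real DSA tag protocols):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Hypothetical tag inserted between the source MAC and the EtherType,
 * carrying the destination switch port: the general shape of a DSA
 * tagging protocol's transmit hook. */
#define TAG_LEN 4

static size_t tag_xmit(uint8_t *frame, size_t len, uint8_t port)
{
	/* Make room after the 12 bytes of destination + source MAC. */
	memmove(frame + 12 + TAG_LEN, frame + 12, len - 12);
	frame[12] = 0xD0;	/* made-up tag marker */
	frame[13] = port;	/* egress port for the switch to use */
	frame[14] = 0;
	frame[15] = 0;
	return len + TAG_LEN;
}

int main(void)
{
	uint8_t frame[64] = {
		0xff, 0xff, 0xff, 0xff, 0xff, 0xff,	/* dst MAC */
		0x02, 0x00, 0x00, 0x00, 0x00, 0x01,	/* src MAC */
		0x08, 0x00,				/* EtherType (IPv4) */
	};
	size_t len = tag_xmit(frame, 14, 2);
	printf("tagged frame: %zu bytes, port byte %d\n", len, frame[13]);
	return 0;
}
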
Graphical representation
------------------------
@ -330,9 +330,9 @@ MDIO reads/writes towards specific PHY addresses. In most MDIO-connected
switches, these functions would utilize direct or indirect PHY addressing mode
to return standard MII registers from the switch builtin PHYs, allowing the PHY
library and/or to return link status, link partner pages, auto-negotiation
results etc..
results, etc.
For Ethernet switches which have both external and internal MDIO busses, the
For Ethernet switches which have both external and internal MDIO buses, the
slave MII bus can be utilized to mux/demux MDIO reads and writes towards either
internal or external MDIO devices this switch might be connected to: internal
PHYs, external PHYs, or even external switches.
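
A minimal sketch of the read-side contract described here, with a made-up port split and register value (a real driver routes the access to the internal or external MDIO bus as appropriate, and returns 0xffff for addresses it does not serve):

#include <stdio.h>
#include <stdint.h>

#define NUM_INTERNAL_PHYS 4
#define MII_BMSR 0x01	/* standard MII basic status register */

/* Ports below NUM_INTERNAL_PHYS have built-in PHYs; anything else
 * reads back as 0xffff. */
static uint16_t example_phy_read(int port, int regnum)
{
	if (port >= NUM_INTERNAL_PHYS)
		return 0xffff;
	if (regnum == MII_BMSR)
		return 0x796d;	/* pretend: link up, autoneg complete */
	return 0x0000;
}

int main(void)
{
	printf("port 0 BMSR = 0x%04x\n", (unsigned)example_phy_read(0, MII_BMSR));
	printf("port 5 BMSR = 0x%04x\n", (unsigned)example_phy_read(5, MII_BMSR));
	return 0;
}
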
@ -349,7 +349,7 @@ DSA data structures are defined in ``include/net/dsa.h`` as well as
table indication (when cascading switches)
- ``dsa_platform_data``: platform device configuration data which can reference
a collection of dsa_chip_data structure if multiples switches are cascaded,
a collection of dsa_chip_data structures if multiple switches are cascaded,
the master network device this switch tree is attached to needs to be
referenced
@ -426,7 +426,7 @@ logic basically looks like this:
"phy-handle" property, if found, this PHY device is created and registered
using ``of_phy_connect()``
- if Device Tree is used, and the PHY device is "fixed", that is, conforms to
- if Device Tree is used and the PHY device is "fixed", that is, conforms to
the definition of a non-MDIO managed PHY as defined in
``Documentation/devicetree/bindings/net/fixed-link.txt``, the PHY is registered
and connected transparently using the special fixed MDIO bus driver
@ -481,7 +481,7 @@ Device Tree
DSA features a standardized binding which is documented in
``Documentation/devicetree/bindings/net/dsa/dsa.txt``. PHY/MDIO library helper
functions such as ``of_get_phy_mode()``, ``of_phy_connect()`` are also used to query
per-port PHY specific details: interface connection, MDIO bus location etc..
per-port PHY specific details: interface connection, MDIO bus location, etc.
Driver development
==================
@ -509,7 +509,7 @@ Switch configuration
- ``setup``: setup function for the switch, this function is responsible for setting
up the ``dsa_switch_ops`` private structure with all it needs: register maps,
interrupts, mutexes, locks etc.. This function is also expected to properly
interrupts, mutexes, locks, etc. This function is also expected to properly
configure the switch to separate all network interfaces from each other, that
is, they should be isolated by the switch hardware itself, typically by creating
a Port-based VLAN ID for each port and allowing only the CPU port and the
@ -526,13 +526,13 @@ PHY devices and link management
- ``get_phy_flags``: Some switches are interfaced to various kinds of Ethernet PHYs,
if the PHY library PHY driver needs to know about information it cannot obtain
on its own (e.g.: coming from switch memory mapped registers), this function
should return a 32-bits bitmask of "flags", that is private between the switch
should return a 32-bit bitmask of "flags" that is private between the switch
driver and the Ethernet PHY driver in ``drivers/net/phy/\*``.
- ``phy_read``: Function invoked by the DSA slave MDIO bus when attempting to read
the switch port MDIO registers. If unavailable, return 0xffff for each read.
For builtin switch Ethernet PHYs, this function should allow reading the link
status, auto-negotiation results, link partner pages etc..
status, auto-negotiation results, link partner pages, etc.
- ``phy_write``: Function invoked by the DSA slave MDIO bus when attempting to write
to the switch port MDIO registers. If unavailable return a negative error
@ -554,7 +554,7 @@ Ethtool operations
------------------
- ``get_strings``: ethtool function used to query the driver's strings, will
typically return statistics strings, private flags strings etc.
typically return statistics strings, private flags strings, etc.
- ``get_ethtool_stats``: ethtool function used to query per-port statistics and
return their values. DSA overlays slave network devices general statistics:
@ -564,7 +564,7 @@ Ethtool operations
- ``get_sset_count``: ethtool function used to query the number of statistics items
- ``get_wol``: ethtool function used to obtain Wake-on-LAN settings per-port, this
function may, for certain implementations also query the master network device
function may for certain implementations also query the master network device
Wake-on-LAN settings if this interface needs to participate in Wake-on-LAN
- ``set_wol``: ethtool function used to configure Wake-on-LAN settings per-port,
@ -607,14 +607,14 @@ Power management
in a fully active state
- ``port_enable``: function invoked by the DSA slave network device ndo_open
function when a port is administratively brought up, this function should be
fully enabling a given switch port. DSA takes care of marking the port with
function when a port is administratively brought up, this function should
fully enable a given switch port. DSA takes care of marking the port with
``BR_STATE_BLOCKING`` if the port is a bridge member, or ``BR_STATE_FORWARDING`` if it
was not, and propagating these changes down to the hardware
- ``port_disable``: function invoked by the DSA slave network device ndo_close
function when a port is administratively brought down, this function should be
fully disabling a given switch port. DSA takes care of marking the port with
function when a port is administratively brought down, this function should
fully disable a given switch port. DSA takes care of marking the port with
``BR_STATE_DISABLED`` and propagating changes to the hardware if this port is
disabled while being a bridge member
@ -622,12 +622,12 @@ Bridge layer
------------
- ``port_bridge_join``: bridge layer function invoked when a given switch port is
added to a bridge, this function should be doing the necessary at the switch
level to permit the joining port from being added to the relevant logical
added to a bridge, this function should do what's necessary at the switch
level to permit the joining port to be added to the relevant logical
domain for it to ingress/egress traffic with other members of the bridge.
- ``port_bridge_leave``: bridge layer function invoked when a given switch port is
removed from a bridge, this function should be doing the necessary at the
removed from a bridge, this function should do what's necessary at the
switch level to deny the leaving port from ingress/egress traffic from the
remaining bridge members. When the port leaves the bridge, it should be aged
out at the switch hardware for the switch to (re) learn MAC addresses behind
@ -663,7 +663,7 @@ Bridge layer
point for drivers that need to configure the hardware for enabling this
feature.
- ``port_bridge_tx_fwd_unoffload``: bridge layer function invoken when a driver
- ``port_bridge_tx_fwd_unoffload``: bridge layer function invoked when a driver
leaves a bridge port which had the TX forwarding offload feature enabled.
Bridge VLAN filtering


@ -4791,6 +4791,7 @@ F: .clang-format
CLANG/LLVM BUILD SUPPORT
M: Nathan Chancellor <nathan@kernel.org>
M: Nick Desaulniers <ndesaulniers@google.com>
R: Tom Rix <trix@redhat.com>
L: llvm@lists.linux.dev
S: Supported
W: https://clangbuiltlinux.github.io/
@ -5715,7 +5716,7 @@ W: http://lanana.org/docs/device-list/index.html
DEVICE RESOURCE MANAGEMENT HELPERS
M: Hans de Goede <hdegoede@redhat.com>
R: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
R: Matti Vaittinen <mazziesaccount@gmail.com>
S: Maintained
F: include/linux/devm-helpers.h
@ -8675,7 +8676,6 @@ F: include/linux/cciss*.h
F: include/uapi/linux/cciss*.h
HFI1 DRIVER
M: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
M: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
L: linux-rdma@vger.kernel.org
S: Supported
@ -9598,6 +9598,7 @@ F: drivers/iio/pressure/dps310.c
INFINIBAND SUBSYSTEM
M: Jason Gunthorpe <jgg@nvidia.com>
M: Leon Romanovsky <leonro@nvidia.com>
L: linux-rdma@vger.kernel.org
S: Supported
W: https://github.com/linux-rdma/rdma-core
@ -11208,7 +11209,7 @@ F: scripts/spdxcheck.py
LINEAR RANGES HELPERS
M: Mark Brown <broonie@kernel.org>
R: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
R: Matti Vaittinen <mazziesaccount@gmail.com>
F: lib/linear_ranges.c
F: lib/test_linear_ranges.c
F: include/linux/linear_range.h
@ -14656,7 +14657,6 @@ F: drivers/rtc/rtc-optee.c
OPA-VNIC DRIVER
M: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
M: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
L: linux-rdma@vger.kernel.org
S: Supported
F: drivers/infiniband/ulp/opa_vnic
@ -16098,7 +16098,6 @@ F: include/uapi/linux/qemu_fw_cfg.h
QIB DRIVER
M: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
M: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
L: linux-rdma@vger.kernel.org
S: Supported
F: drivers/infiniband/hw/qib/
@ -16616,7 +16615,6 @@ F: drivers/net/ethernet/rdc/r6040.c
RDMAVT - RDMA verbs software
M: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
M: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
L: linux-rdma@vger.kernel.org
S: Supported
F: drivers/infiniband/sw/rdmavt
@ -17011,8 +17009,7 @@ S: Odd Fixes
F: drivers/tty/serial/rp2.*
ROHM BD99954 CHARGER IC
R: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
L: linux-power@fi.rohmeurope.com
R: Matti Vaittinen <mazziesaccount@gmail.com>
S: Supported
F: drivers/power/supply/bd99954-charger.c
F: drivers/power/supply/bd99954-charger.h
@ -17035,8 +17032,7 @@ F: drivers/regulator/bd9571mwv-regulator.c
F: include/linux/mfd/bd9571mwv.h
ROHM POWER MANAGEMENT IC DEVICE DRIVERS
R: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
L: linux-power@fi.rohmeurope.com
R: Matti Vaittinen <mazziesaccount@gmail.com>
S: Supported
F: drivers/clk/clk-bd718x7.c
F: drivers/gpio/gpio-bd71815.c
@ -21118,7 +21114,7 @@ F: include/linux/regulator/
K: regulator_get_optional
VOLTAGE AND CURRENT REGULATOR IRQ HELPERS
R: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
R: Matti Vaittinen <mazziesaccount@gmail.com>
F: drivers/regulator/irq_helpers.c
VRF


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 18
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Superb Owl
# *DOCUMENTATION*


@ -75,6 +75,7 @@
#define ARM_CPU_PART_CORTEX_A77 0xD0D
#define ARM_CPU_PART_NEOVERSE_V1 0xD40
#define ARM_CPU_PART_CORTEX_A78 0xD41
#define ARM_CPU_PART_CORTEX_A78AE 0xD42
#define ARM_CPU_PART_CORTEX_X1 0xD44
#define ARM_CPU_PART_CORTEX_A510 0xD46
#define ARM_CPU_PART_CORTEX_A710 0xD47
@ -130,6 +131,7 @@
#define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
#define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
#define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
#define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
#define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
#define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)


@ -107,7 +107,7 @@
isb // Make sure SRE is now set
mrs_s x0, SYS_ICC_SRE_EL2 // Read SRE back,
tbz x0, #0, .Lskip_gicv3_\@ // and check that it sticks
msr_s SYS_ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults
msr_s SYS_ICH_HCR_EL2, xzr // Reset ICH_HCR_EL2 to defaults
.Lskip_gicv3_\@:
.endm


@ -42,7 +42,7 @@ bool alternative_is_applied(u16 cpufeature)
/*
* Check if the target PC is within an alternative block.
*/
static bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
static __always_inline bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
{
unsigned long replptr = (unsigned long)ALT_REPL_PTR(alt);
return !(pc >= replptr && pc <= (replptr + alt->alt_len));
@ -50,7 +50,7 @@ static bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
#define align_down(x, a) ((unsigned long)(x) & ~(((unsigned long)(a)) - 1))
static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnptr)
static __always_inline u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnptr)
{
u32 insn;
@ -95,7 +95,7 @@ static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnp
return insn;
}
static void patch_alternative(struct alt_instr *alt,
static noinstr void patch_alternative(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst)
{
__le32 *replptr;


@ -8,16 +8,9 @@
#include <asm/cpufeature.h>
#include <asm/mte.h>
#ifndef VMA_ITERATOR
#define VMA_ITERATOR(name, mm, addr) \
struct mm_struct *name = mm
#define for_each_vma(vmi, vma) \
for (vma = vmi->mmap; vma; vma = vma->vm_next)
#endif
#define for_each_mte_vma(vmi, vma) \
#define for_each_mte_vma(tsk, vma) \
if (system_supports_mte()) \
for_each_vma(vmi, vma) \
for (vma = tsk->mm->mmap; vma; vma = vma->vm_next) \
if (vma->vm_flags & VM_MTE)
static unsigned long mte_vma_tag_dump_size(struct vm_area_struct *vma)
@ -32,10 +25,11 @@ static unsigned long mte_vma_tag_dump_size(struct vm_area_struct *vma)
static int mte_dump_tag_range(struct coredump_params *cprm,
unsigned long start, unsigned long end)
{
int ret = 1;
unsigned long addr;
void *tags = NULL;
for (addr = start; addr < end; addr += PAGE_SIZE) {
char tags[MTE_PAGE_TAG_STORAGE];
struct page *page = get_dump_page(addr);
/*
@ -59,22 +53,36 @@ static int mte_dump_tag_range(struct coredump_params *cprm,
continue;
}
if (!tags) {
tags = mte_allocate_tag_storage();
if (!tags) {
put_page(page);
ret = 0;
break;
}
}
mte_save_page_tags(page_address(page), tags);
put_page(page);
if (!dump_emit(cprm, tags, MTE_PAGE_TAG_STORAGE))
return 0;
if (!dump_emit(cprm, tags, MTE_PAGE_TAG_STORAGE)) {
mte_free_tag_storage(tags);
ret = 0;
break;
}
}
return 1;
if (tags)
mte_free_tag_storage(tags);
return ret;
}
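
The reworked function follows a common pattern: allocate the scratch buffer lazily, reuse it across iterations, and release it on every exit path. A simplified userspace sketch of that shape (details invented):

#include <stdio.h>
#include <stdlib.h>

#define TAG_STORAGE 128

/* Same shape as the fix above: allocate at most once, reuse per page,
 * free on all paths. */
static int dump_range(const char *pages[], int n)
{
	char *tags = NULL;
	int ret = 1;

	for (int i = 0; i < n; i++) {
		if (!pages[i])
			continue;	/* nothing mapped here */
		if (!tags) {
			tags = malloc(TAG_STORAGE);
			if (!tags)
				return 0;	/* tags is NULL, nothing leaks */
		}
		snprintf(tags, TAG_STORAGE, "tags(%s)", pages[i]);
		printf("%s\n", tags);
	}
	free(tags);	/* free(NULL) is a no-op */
	return ret;
}

int main(void)
{
	const char *pages[] = { "p0", NULL, "p2" };
	return dump_range(pages, 3) ? 0 : 1;
}
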
Elf_Half elf_core_extra_phdrs(void)
{
struct vm_area_struct *vma;
int vma_count = 0;
VMA_ITERATOR(vmi, current->mm, 0);
for_each_mte_vma(vmi, vma)
for_each_mte_vma(current, vma)
vma_count++;
return vma_count;
@ -83,9 +91,8 @@ Elf_Half elf_core_extra_phdrs(void)
int elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset)
{
struct vm_area_struct *vma;
VMA_ITERATOR(vmi, current->mm, 0);
for_each_mte_vma(vmi, vma) {
for_each_mte_vma(current, vma) {
struct elf_phdr phdr;
phdr.p_type = PT_ARM_MEMTAG_MTE;
@ -109,9 +116,8 @@ size_t elf_core_extra_data_size(void)
{
struct vm_area_struct *vma;
size_t data_size = 0;
VMA_ITERATOR(vmi, current->mm, 0);
for_each_mte_vma(vmi, vma)
for_each_mte_vma(current, vma)
data_size += mte_vma_tag_dump_size(vma);
return data_size;
@ -120,9 +126,8 @@ size_t elf_core_extra_data_size(void)
int elf_core_write_extra_data(struct coredump_params *cprm)
{
struct vm_area_struct *vma;
VMA_ITERATOR(vmi, current->mm, 0);
for_each_mte_vma(vmi, vma) {
for_each_mte_vma(current, vma) {
if (vma->vm_flags & VM_DONTDUMP)
continue;


@ -701,7 +701,7 @@ NOKPROBE_SYMBOL(breakpoint_handler);
* addresses. There is no straight-forward way, short of disassembling the
* offending instruction, to map that address back to the watchpoint. This
* function computes the distance of the memory access from the watchpoint as a
* heuristic for the likelyhood that a given access triggered the watchpoint.
* heuristic for the likelihood that a given access triggered the watchpoint.
*
* See Section D2.10.5 "Determining the memory location that caused a Watchpoint
* exception" of ARMv8 Architecture Reference Manual for details.


@ -220,7 +220,7 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
* increasing the section's alignment so that the
* resulting address of this instruction is guaranteed
* to equal the offset in that particular bit (as well
* as all less signficant bits). This ensures that the
* as all less significant bits). This ensures that the
* address modulo 4 KB != 0xfff8 or 0xfffc (which would
* have all ones in bits [11:3])
*/


@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
int i, ret = 0;
struct aarch64_insn_patch *pp = arg;
/* The first CPU becomes master */
if (atomic_inc_return(&pp->cpu_count) == 1) {
/* The last CPU becomes master */
if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
pp->new_insns[i]);


@ -853,6 +853,7 @@ u8 spectre_bhb_loop_affected(int scope)
if (scope == SCOPE_LOCAL_CPU) {
static const struct midr_range spectre_bhb_k32_list[] = {
MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),


@ -234,6 +234,7 @@ asmlinkage notrace void secondary_start_kernel(void)
* Log the CPU info before it is marked online and might get read.
*/
cpuinfo_store_cpu();
store_cpu_topology(cpu);
/*
* Enable GIC and timers.
@ -242,7 +243,6 @@ asmlinkage notrace void secondary_start_kernel(void)
ipi_setup(cpu);
store_cpu_topology(cpu);
numa_add_cpu(cpu);
/*


@ -140,7 +140,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
/*
* Restore pstate flags. OS lock and mdscr have been already
* restored, so from this point onwards, debugging is fully
* renabled if it was enabled when core started shutdown.
* reenabled if it was enabled when core started shutdown.
*/
local_daif_restore(flags);


@ -73,7 +73,7 @@ EXPORT_SYMBOL(memstart_addr);
* In this scheme a comparatively quicker boot is observed.
*
* If ZONE_DMA configs are defined, crash kernel memory reservation
* is delayed until DMA zone memory range size initilazation performed in
* is delayed until DMA zone memory range size initialization performed in
* zone_sizes_init(). The defer is necessary to steer clear of DMA zone
* memory range to avoid overlap allocation. So crash kernel memory boundaries
* are not known when mapping all bank memory ranges, which otherwise means
@ -81,7 +81,7 @@ EXPORT_SYMBOL(memstart_addr);
* so page-granularity mappings are created for the entire memory range.
* Hence a slightly slower boot is observed.
*
* Note: Page-granularity mapppings are necessary for crash kernel memory
* Note: Page-granularity mappings are necessary for crash kernel memory
* range for shrinking its size via /sys/kernel/kexec_crash_size interface.
*/
#if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)


@ -16,18 +16,6 @@
#include <asm/ppc-opcode.h>
#include <asm/pte-walk.h>
#ifdef CONFIG_PPC_PSERIES
static inline bool kvmhv_on_pseries(void)
{
return !cpu_has_feature(CPU_FTR_HVMODE);
}
#else
static inline bool kvmhv_on_pseries(void)
{
return false;
}
#endif
/*
* Structure for a nested guest, that is, for a guest that is managed by
* one of our guests.


@ -586,6 +586,18 @@ static inline bool kvm_hv_mode_active(void) { return false; }
#endif
#ifdef CONFIG_PPC_PSERIES
static inline bool kvmhv_on_pseries(void)
{
return !cpu_has_feature(CPU_FTR_HVMODE);
}
#else
static inline bool kvmhv_on_pseries(void)
{
return false;
}
#endif
#ifdef CONFIG_KVM_XICS
static inline int kvmppc_xics_enabled(struct kvm_vcpu *vcpu)
{


@ -132,7 +132,11 @@ static inline bool pfn_valid(unsigned long pfn)
#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
#define virt_addr_valid(vaddr) ({ \
unsigned long _addr = (unsigned long)vaddr; \
_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory && \
pfn_valid(virt_to_pfn(_addr)); \
})
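
A userspace model of the strengthened check (memory-layout constants invented for illustration): only addresses inside the linear map even reach pfn_valid(), so user or vmalloc pointers are rejected up front.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_OFFSET	0xc0000000UL	/* start of the linear map */
#define HIGH_MEMORY	0xf0000000UL	/* end of directly mapped RAM */

static bool pfn_valid(unsigned long pfn)
{
	return pfn < 0x30000;	/* pretend 768 MiB of RAM */
}

static bool virt_addr_valid(unsigned long addr)
{
	return addr >= PAGE_OFFSET && addr < HIGH_MEMORY &&
	       pfn_valid((addr - PAGE_OFFSET) >> PAGE_SHIFT);
}

int main(void)
{
	printf("%d\n", virt_addr_valid(0xc0001000UL));	/* 1: linear map */
	printf("%d\n", virt_addr_valid(0x10000000UL));	/* 0: user/vmalloc range */
	return 0;
}
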
/*
* On Book-E parts we need __va to parse the device tree and we can't


@ -28,11 +28,13 @@ void setup_panic(void);
#define ARCH_PANIC_TIMEOUT 180
#ifdef CONFIG_PPC_PSERIES
extern bool pseries_reloc_on_exception(void);
extern bool pseries_enable_reloc_on_exc(void);
extern void pseries_disable_reloc_on_exc(void);
extern void pseries_big_endian_exceptions(void);
void __init pseries_little_endian_exceptions(void);
#else
static inline bool pseries_reloc_on_exception(void) { return false; }
static inline bool pseries_enable_reloc_on_exc(void) { return false; }
static inline void pseries_disable_reloc_on_exc(void) {}
static inline void pseries_big_endian_exceptions(void) {}


@ -24,5 +24,6 @@
#define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) __PPC_SCT(name, "b " #func)
#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) __PPC_SCT(name, "blr")
#define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) __PPC_SCT(name, "b .+20")
#endif /* _ASM_POWERPC_STATIC_CALL_H */


@ -809,6 +809,10 @@ __start_interrupts:
* - MSR_EE|MSR_RI is clear (no reentrant exceptions)
* - Standard kernel environment is set up (stack, paca, etc)
*
* KVM:
* These interrupts do not elevate HV 0->1, so HV is not involved. PR KVM
* ensures that FSCR[SCV] is disabled whenever it has to force AIL off.
*
* Call convention:
*
* syscall register convention is in Documentation/powerpc/syscall64-abi.rst


@ -196,6 +196,34 @@ static void __init configure_exceptions(void)
/* Under a PAPR hypervisor, we need hypercalls */
if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
/*
* - PR KVM does not support AIL mode interrupts in the host
* while a PR guest is running.
*
* - SCV system call interrupt vectors are only implemented for
* AIL mode interrupts.
*
* - On pseries, AIL mode can only be enabled and disabled
* system-wide so when a PR VM is created on a pseries host,
* all CPUs of the host are set to AIL=0 mode.
*
* - Therefore host CPUs must not execute scv while a PR VM
* exists.
*
* - SCV support can not be disabled dynamically because the
* feature is advertised to host userspace. Disabling the
* facility and emulating it would be possible but is not
* implemented.
*
* - So SCV support is blanket disabled if PR KVM could possibly
* run. That is, PR support compiled in, booting on pseries
* with hash MMU.
*/
if (IS_ENABLED(CONFIG_KVM_BOOK3S_PR_POSSIBLE) && !radix_enabled()) {
init_task.thread.fscr &= ~FSCR_SCV;
cur_cpu_spec->cpu_user_features2 &= ~PPC_FEATURE2_SCV;
}
/* Enable AIL if possible */
if (!pseries_enable_reloc_on_exc()) {
init_task.thread.fscr &= ~FSCR_SCV;


@ -112,12 +112,21 @@ config KVM_BOOK3S_64_PR
guest in user mode (problem state) and emulating all
privileged instructions and registers.
This is only available for hash MMU mode and only supports
guests that use hash MMU mode.
This is not as fast as using hypervisor mode, but works on
machines where hypervisor mode is not available or not usable,
and can emulate processors that are different from the host
processor, including emulating 32-bit processors on a 64-bit
host.
Selecting this option will cause the SCV facility to be
disabled when the kernel is booted on the pseries platform in
hash MMU mode (regardless of PR VMs running). When any PR VMs
are running, "AIL" mode is disabled which may slow interrupts
and system calls on the host.
config KVM_BOOK3S_HV_EXIT_TIMING
bool "Detailed timing for hypervisor real-mode code"
depends on KVM_BOOK3S_HV_POSSIBLE && DEBUG_FS


@ -414,10 +414,16 @@ END_FTR_SECTION_IFSET(CPU_FTR_DAWR1)
*/
ld r10,HSTATE_SCRATCH0(r13)
cmpwi r10,BOOK3S_INTERRUPT_MACHINE_CHECK
beq machine_check_common
beq .Lcall_machine_check_common
cmpwi r10,BOOK3S_INTERRUPT_SYSTEM_RESET
beq system_reset_common
beq .Lcall_system_reset_common
b .
.Lcall_machine_check_common:
b machine_check_common
.Lcall_system_reset_common:
b system_reset_common
#endif


@ -225,6 +225,13 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu)
int cpu;
struct rcuwait *waitp;
/*
* rcuwait_wake_up contains smp_mb() which orders prior stores that
* create pending work vs below loads of cpu fields. The other side
* is the barrier in vcpu run that orders setting the cpu fields vs
* testing for pending work.
*/
waitp = kvm_arch_vcpu_get_wait(vcpu);
if (rcuwait_wake_up(waitp))
++vcpu->stat.generic.halt_wakeup;
@ -1089,7 +1096,7 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
break;
}
tvcpu->arch.prodded = 1;
smp_mb();
smp_mb(); /* This orders prodded store vs ceded load */
if (tvcpu->arch.ceded)
kvmppc_fast_vcpu_kick_hv(tvcpu);
break;
@ -3766,6 +3773,14 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
pvc = core_info.vc[sub];
pvc->pcpu = pcpu + thr;
for_each_runnable_thread(i, vcpu, pvc) {
/*
* XXX: is kvmppc_start_thread called too late here?
* It updates vcpu->cpu and vcpu->arch.thread_cpu
* which are used by kvmppc_fast_vcpu_kick_hv(), but
* kick is called after new exceptions become available
* and exceptions are checked earlier than here, by
* kvmppc_core_prepare_to_enter.
*/
kvmppc_start_thread(vcpu, pvc);
kvmppc_create_dtl_entry(vcpu, pvc);
trace_kvm_guest_enter(vcpu);
@ -4487,6 +4502,21 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
if (need_resched() || !kvm->arch.mmu_ready)
goto out;
vcpu->cpu = pcpu;
vcpu->arch.thread_cpu = pcpu;
vc->pcpu = pcpu;
local_paca->kvm_hstate.kvm_vcpu = vcpu;
local_paca->kvm_hstate.ptid = 0;
local_paca->kvm_hstate.fake_suspend = 0;
/*
* Orders set cpu/thread_cpu vs testing for pending interrupts and
* doorbells below. The other side is when these fields are set vs
* kvmppc_fast_vcpu_kick_hv reading the cpu/thread_cpu fields to
* kick a vCPU to notice the pending interrupt.
*/
smp_mb();
if (!nested) {
kvmppc_core_prepare_to_enter(vcpu);
if (test_bit(BOOK3S_IRQPRIO_EXTERNAL,
@ -4506,13 +4536,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
tb = mftb();
vcpu->cpu = pcpu;
vcpu->arch.thread_cpu = pcpu;
vc->pcpu = pcpu;
local_paca->kvm_hstate.kvm_vcpu = vcpu;
local_paca->kvm_hstate.ptid = 0;
local_paca->kvm_hstate.fake_suspend = 0;
__kvmppc_create_dtl_entry(vcpu, pcpu, tb + vc->tb_offset, 0);
trace_kvm_guest_enter(vcpu);
@ -4614,6 +4637,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
run->exit_reason = KVM_EXIT_INTR;
vcpu->arch.ret = -EINTR;
out:
vcpu->cpu = -1;
vcpu->arch.thread_cpu = -1;
powerpc_local_irq_pmu_restore(flags);
preempt_enable();
goto done;


@ -137,12 +137,15 @@ static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, int cpu)
svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max;
svcpu->in_use = 0;
svcpu_put(svcpu);
#endif
/* Disable AIL if supported */
if (cpu_has_feature(CPU_FTR_HVMODE) &&
cpu_has_feature(CPU_FTR_ARCH_207S))
mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~LPCR_AIL);
if (cpu_has_feature(CPU_FTR_HVMODE)) {
if (cpu_has_feature(CPU_FTR_ARCH_207S))
mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~LPCR_AIL);
if (cpu_has_feature(CPU_FTR_ARCH_300) && (current->thread.fscr & FSCR_SCV))
mtspr(SPRN_FSCR, mfspr(SPRN_FSCR) & ~FSCR_SCV);
}
#endif
vcpu->cpu = smp_processor_id();
#ifdef CONFIG_PPC_BOOK3S_32
@ -165,6 +168,14 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb));
to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max;
svcpu_put(svcpu);
/* Enable AIL if supported */
if (cpu_has_feature(CPU_FTR_HVMODE)) {
if (cpu_has_feature(CPU_FTR_ARCH_207S))
mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) | LPCR_AIL_3);
if (cpu_has_feature(CPU_FTR_ARCH_300) && (current->thread.fscr & FSCR_SCV))
mtspr(SPRN_FSCR, mfspr(SPRN_FSCR) | FSCR_SCV);
}
#endif
if (kvmppc_is_split_real(vcpu))
@ -174,11 +185,6 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);
kvmppc_save_tm_pr(vcpu);
/* Enable AIL if supported */
if (cpu_has_feature(CPU_FTR_HVMODE) &&
cpu_has_feature(CPU_FTR_ARCH_207S))
mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) | LPCR_AIL_3);
vcpu->cpu = -1;
}
@ -1037,6 +1043,8 @@ static int kvmppc_handle_fac(struct kvm_vcpu *vcpu, ulong fac)
void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr)
{
if (fscr & FSCR_SCV)
fscr &= ~FSCR_SCV; /* SCV must not be enabled */
if ((vcpu->arch.fscr & FSCR_TAR) && !(fscr & FSCR_TAR)) {
/* TAR got dropped, drop it in shadow too */
kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);


@ -281,6 +281,22 @@ static int kvmppc_h_pr_logical_ci_store(struct kvm_vcpu *vcpu)
return EMULATE_DONE;
}
static int kvmppc_h_pr_set_mode(struct kvm_vcpu *vcpu)
{
unsigned long mflags = kvmppc_get_gpr(vcpu, 4);
unsigned long resource = kvmppc_get_gpr(vcpu, 5);
if (resource == H_SET_MODE_RESOURCE_ADDR_TRANS_MODE) {
/* KVM PR does not provide AIL!=0 to guests */
if (mflags == 0)
kvmppc_set_gpr(vcpu, 3, H_SUCCESS);
else
kvmppc_set_gpr(vcpu, 3, H_UNSUPPORTED_FLAG_START - 63);
return EMULATE_DONE;
}
return EMULATE_FAIL;
}
#ifdef CONFIG_SPAPR_TCE_IOMMU
static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
{
@ -384,6 +400,8 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd)
return kvmppc_h_pr_logical_ci_load(vcpu);
case H_LOGICAL_CI_STORE:
return kvmppc_h_pr_logical_ci_store(vcpu);
case H_SET_MODE:
return kvmppc_h_pr_set_mode(vcpu);
case H_XIRR:
case H_CPPR:
case H_EOI:
@ -421,6 +439,7 @@ int kvmppc_hcall_impl_pr(unsigned long cmd)
case H_CEDE:
case H_LOGICAL_CI_LOAD:
case H_LOGICAL_CI_STORE:
case H_SET_MODE:
#ifdef CONFIG_KVM_XICS
case H_XIRR:
case H_CPPR:
@ -447,6 +466,7 @@ static unsigned int default_hcall_list[] = {
H_BULK_REMOVE,
H_PUT_TCE,
H_CEDE,
H_SET_MODE,
#ifdef CONFIG_KVM_XICS
H_XIRR,
H_CPPR,


@ -705,6 +705,23 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = 1;
break;
#endif
case KVM_CAP_PPC_AIL_MODE_3:
r = 0;
/*
* KVM PR, POWER7, and some POWER9s don't support AIL=3 mode.
* The POWER9s can support it if the guest runs in hash mode,
* but QEMU doesn't necessarily query the capability in time.
*/
if (hv_enabled) {
if (kvmhv_on_pseries()) {
if (pseries_reloc_on_exception())
r = 1;
} else if (cpu_has_feature(CPU_FTR_ARCH_207S) &&
!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) {
r = 1;
}
}
break;
default:
r = 0;
break;


@ -255,7 +255,7 @@ void __init mem_init(void)
#endif
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
set_max_mapnr(max_low_pfn);
set_max_mapnr(max_pfn);
kasan_late_init();


@ -1436,7 +1436,7 @@ int find_and_online_cpu_nid(int cpu)
if (new_nid < 0 || !node_possible(new_nid))
new_nid = first_online_node;
if (NODE_DATA(new_nid) == NULL) {
if (!node_online(new_nid)) {
#ifdef CONFIG_MEMORY_HOTPLUG
/*
* Need to ensure that NODE_DATA is initialized for a node from


@ -353,6 +353,14 @@ static void pseries_lpar_idle(void)
pseries_idle_epilog();
}
static bool pseries_reloc_on_exception_enabled;
bool pseries_reloc_on_exception(void)
{
return pseries_reloc_on_exception_enabled;
}
EXPORT_SYMBOL_GPL(pseries_reloc_on_exception);
/*
* Enable relocation on during exceptions. This has partition wide scope and
* may take a while to complete, if it takes longer than one second we will
@ -377,6 +385,7 @@ bool pseries_enable_reloc_on_exc(void)
" on exceptions: %ld\n", rc);
return false;
}
pseries_reloc_on_exception_enabled = true;
return true;
}
@ -404,7 +413,9 @@ void pseries_disable_reloc_on_exc(void)
break;
mdelay(get_longbusy_msecs(rc));
}
if (rc != H_SUCCESS)
if (rc == H_SUCCESS)
pseries_reloc_on_exception_enabled = false;
else
pr_warn("Warning: Failed to disable relocation on exceptions: %ld\n",
rc);
}


@ -99,6 +99,7 @@ static struct attribute *vas_def_capab_attrs[] = {
&nr_used_credits_attribute.attr,
NULL,
};
ATTRIBUTE_GROUPS(vas_def_capab);
static struct attribute *vas_qos_capab_attrs[] = {
&nr_total_credits_attribute.attr,
@ -106,6 +107,7 @@ static struct attribute *vas_qos_capab_attrs[] = {
&update_total_credits_attribute.attr,
NULL,
};
ATTRIBUTE_GROUPS(vas_qos_capab);
static ssize_t vas_type_show(struct kobject *kobj, struct attribute *attr,
char *buf)
@ -154,13 +156,13 @@ static const struct sysfs_ops vas_sysfs_ops = {
static struct kobj_type vas_def_attr_type = {
.release = vas_type_release,
.sysfs_ops = &vas_sysfs_ops,
.default_attrs = vas_def_capab_attrs,
.default_groups = vas_def_capab_groups,
};
static struct kobj_type vas_qos_attr_type = {
.release = vas_type_release,
.sysfs_ops = &vas_sysfs_ops,
.default_attrs = vas_qos_capab_attrs,
.default_groups = vas_qos_capab_groups,
};
static char *vas_caps_kobj_name(struct vas_caps_entry *centry,


@ -302,7 +302,7 @@ static struct extra_reg intel_spr_extra_regs[] __read_mostly = {
INTEL_UEVENT_EXTRA_REG(0x012a, MSR_OFFCORE_RSP_0, 0x3fffffffffull, RSP_0),
INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0x3fffffffffull, RSP_1),
INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff17, FE),
INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff1f, FE),
INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0x7, FE),
INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE),
EVENT_EXTRA_END
@ -5536,7 +5536,11 @@ static void intel_pmu_check_event_constraints(struct event_constraint *event_con
/* Disabled fixed counters which are not in CPUID */
c->idxmsk64 &= intel_ctrl;
if (c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES)
/*
* Don't extend the pseudo-encoding to the
* generic counters
*/
if (!use_fixed_pseudo_encoding(c->code))
c->idxmsk64 |= (1ULL << num_counters) - 1;
}
c->idxmsk64 &=
@ -6212,6 +6216,7 @@ __init int intel_pmu_init(void)
case INTEL_FAM6_ALDERLAKE:
case INTEL_FAM6_ALDERLAKE_L:
case INTEL_FAM6_RAPTORLAKE:
/*
* Alder Lake has 2 types of CPU, core and atom.
*


@ -40,7 +40,7 @@
* Model specific counters:
* MSR_CORE_C1_RES: CORE C1 Residency Counter
* perf code: 0x00
* Available model: SLM,AMT,GLM,CNL,ICX,TNT,ADL
* Available model: SLM,AMT,GLM,CNL,ICX,TNT,ADL,RPL
* Scope: Core (each processor core has a MSR)
* MSR_CORE_C3_RESIDENCY: CORE C3 Residency Counter
* perf code: 0x01
@ -51,49 +51,50 @@
* perf code: 0x02
* Available model: SLM,AMT,NHM,WSM,SNB,IVB,HSW,BDW,
* SKL,KNL,GLM,CNL,KBL,CML,ICL,ICX,
* TGL,TNT,RKL,ADL
* TGL,TNT,RKL,ADL,RPL
* Scope: Core
* MSR_CORE_C7_RESIDENCY: CORE C7 Residency Counter
* perf code: 0x03
* Available model: SNB,IVB,HSW,BDW,SKL,CNL,KBL,CML,
* ICL,TGL,RKL,ADL
* ICL,TGL,RKL,ADL,RPL
* Scope: Core
* MSR_PKG_C2_RESIDENCY: Package C2 Residency Counter.
* perf code: 0x00
* Available model: SNB,IVB,HSW,BDW,SKL,KNL,GLM,CNL,
* KBL,CML,ICL,ICX,TGL,TNT,RKL,ADL
* KBL,CML,ICL,ICX,TGL,TNT,RKL,ADL,
* RPL
* Scope: Package (physical package)
* MSR_PKG_C3_RESIDENCY: Package C3 Residency Counter.
* perf code: 0x01
* Available model: NHM,WSM,SNB,IVB,HSW,BDW,SKL,KNL,
* GLM,CNL,KBL,CML,ICL,TGL,TNT,RKL,
* ADL
* ADL,RPL
* Scope: Package (physical package)
* MSR_PKG_C6_RESIDENCY: Package C6 Residency Counter.
* perf code: 0x02
* Available model: SLM,AMT,NHM,WSM,SNB,IVB,HSW,BDW,
* SKL,KNL,GLM,CNL,KBL,CML,ICL,ICX,
* TGL,TNT,RKL,ADL
* TGL,TNT,RKL,ADL,RPL
* Scope: Package (physical package)
* MSR_PKG_C7_RESIDENCY: Package C7 Residency Counter.
* perf code: 0x03
* Available model: NHM,WSM,SNB,IVB,HSW,BDW,SKL,CNL,
* KBL,CML,ICL,TGL,RKL,ADL
* KBL,CML,ICL,TGL,RKL,ADL,RPL
* Scope: Package (physical package)
* MSR_PKG_C8_RESIDENCY: Package C8 Residency Counter.
* perf code: 0x04
* Available model: HSW ULT,KBL,CNL,CML,ICL,TGL,RKL,
* ADL
* ADL,RPL
* Scope: Package (physical package)
* MSR_PKG_C9_RESIDENCY: Package C9 Residency Counter.
* perf code: 0x05
* Available model: HSW ULT,KBL,CNL,CML,ICL,TGL,RKL,
* ADL
* ADL,RPL
* Scope: Package (physical package)
* MSR_PKG_C10_RESIDENCY: Package C10 Residency Counter.
* perf code: 0x06
* Available model: HSW ULT,KBL,GLM,CNL,CML,ICL,TGL,
* TNT,RKL,ADL
* TNT,RKL,ADL,RPL
* Scope: Package (physical package)
*
*/
@ -680,6 +681,7 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
X86_MATCH_INTEL_FAM6_MODEL(ROCKETLAKE, &icl_cstates),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &adl_cstates),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &adl_cstates),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, &adl_cstates),
{ },
};
MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);


@ -1828,6 +1828,7 @@ static const struct x86_cpu_id intel_uncore_match[] __initconst = {
X86_MATCH_INTEL_FAM6_MODEL(ROCKETLAKE, &rkl_uncore_init),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &adl_uncore_init),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &adl_uncore_init),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, &adl_uncore_init),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &spr_uncore_init),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D, &snr_uncore_init),
{},


@ -79,6 +79,10 @@
#define PCI_DEVICE_ID_INTEL_ADL_14_IMC 0x4650
#define PCI_DEVICE_ID_INTEL_ADL_15_IMC 0x4668
#define PCI_DEVICE_ID_INTEL_ADL_16_IMC 0x4670
#define PCI_DEVICE_ID_INTEL_RPL_1_IMC 0xA700
#define PCI_DEVICE_ID_INTEL_RPL_2_IMC 0xA702
#define PCI_DEVICE_ID_INTEL_RPL_3_IMC 0xA706
#define PCI_DEVICE_ID_INTEL_RPL_4_IMC 0xA709
/* SNB event control */
#define SNB_UNC_CTL_EV_SEL_MASK 0x000000ff
@ -1406,6 +1410,22 @@ static const struct pci_device_id tgl_uncore_pci_ids[] = {
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ADL_16_IMC),
.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
},
{ /* IMC */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_RPL_1_IMC),
.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
},
{ /* IMC */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_RPL_2_IMC),
.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
},
{ /* IMC */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_RPL_3_IMC),
.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
},
{ /* IMC */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_RPL_4_IMC),
.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
},
{ /* end: all zeroes */ }
};


@ -103,6 +103,7 @@ static bool test_intel(int idx, void *data)
case INTEL_FAM6_ROCKETLAKE:
case INTEL_FAM6_ALDERLAKE:
case INTEL_FAM6_ALDERLAKE_L:
case INTEL_FAM6_RAPTORLAKE:
if (idx == PERF_MSR_SMI || idx == PERF_MSR_PPERF)
return true;
break;


@ -154,24 +154,24 @@
# define DEFINE_EXTABLE_TYPE_REG \
".macro extable_type_reg type:req reg:req\n" \
".set found, 0\n" \
".set regnr, 0\n" \
".set .Lfound, 0\n" \
".set .Lregnr, 0\n" \
".irp rs,rax,rcx,rdx,rbx,rsp,rbp,rsi,rdi,r8,r9,r10,r11,r12,r13,r14,r15\n" \
".ifc \\reg, %%\\rs\n" \
".set found, found+1\n" \
".long \\type + (regnr << 8)\n" \
".set .Lfound, .Lfound+1\n" \
".long \\type + (.Lregnr << 8)\n" \
".endif\n" \
".set regnr, regnr+1\n" \
".set .Lregnr, .Lregnr+1\n" \
".endr\n" \
".set regnr, 0\n" \
".set .Lregnr, 0\n" \
".irp rs,eax,ecx,edx,ebx,esp,ebp,esi,edi,r8d,r9d,r10d,r11d,r12d,r13d,r14d,r15d\n" \
".ifc \\reg, %%\\rs\n" \
".set found, found+1\n" \
".long \\type + (regnr << 8)\n" \
".set .Lfound, .Lfound+1\n" \
".long \\type + (.Lregnr << 8)\n" \
".endif\n" \
".set regnr, regnr+1\n" \
".set .Lregnr, .Lregnr+1\n" \
".endr\n" \
".if (found != 1)\n" \
".if (.Lfound != 1)\n" \
".error \"extable_type_reg: bad register argument\"\n" \
".endif\n" \
".endm\n"


@ -78,9 +78,9 @@ do { \
*/
#define __WARN_FLAGS(flags) \
do { \
__auto_type f = BUGFLAG_WARNING|(flags); \
__auto_type __flags = BUGFLAG_WARNING|(flags); \
instrumentation_begin(); \
_BUG_FLAGS(ASM_UD2, f, ASM_REACHABLE); \
_BUG_FLAGS(ASM_UD2, __flags, ASM_REACHABLE); \
instrumentation_end(); \
} while (0)
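
The rename matters because a short macro-local name can capture a variable of the same name appearing in the macro argument. A minimal standalone illustration of that hazard (not kernel code):

#include <stdio.h>

/* The macro's local 'f' shadows the caller's 'f', so the argument
 * expansion binds to the wrong variable. */
#define SHADOWING(x) do { int f = 1; printf("%d\n", (x) + f); } while (0)

int main(void)
{
	int f = 41;
	SHADOWING(f);	/* intended 42, but prints 2: (x) becomes the inner f */
	return 0;
}
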


@ -12,14 +12,17 @@ int pci_msi_prepare(struct irq_domain *domain, struct device *dev, int nvec,
/* Structs and defines for the X86 specific MSI message format */
typedef struct x86_msi_data {
u32 vector : 8,
delivery_mode : 3,
dest_mode_logical : 1,
reserved : 2,
active_low : 1,
is_level : 1;
u32 dmar_subhandle;
union {
struct {
u32 vector : 8,
delivery_mode : 3,
dest_mode_logical : 1,
reserved : 2,
active_low : 1,
is_level : 1;
};
u32 dmar_subhandle;
};
} __attribute__ ((packed)) arch_msi_msg_data_t;
#define arch_msi_msg_data x86_msi_data
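
The union lets the bitfield view and dmar_subhandle alias the same 32-bit word; the old layout placed them back to back, doubling the structure's size. A standalone compile-time check of that assumption (C11 with GCC extensions, stdint types standing in for the kernel's u32):

#include <assert.h>
#include <stdint.h>

typedef struct x86_msi_data {
	union {
		struct {
			uint32_t vector			:  8,
				 delivery_mode		:  3,
				 dest_mode_logical	:  1,
				 reserved		:  2,
				 active_low		:  1,
				 is_level		:  1;
		};
		uint32_t dmar_subhandle;
	};
} __attribute__ ((packed)) arch_msi_msg_data_t;

/* With the union, the whole MSI data payload is exactly one 32-bit word. */
static_assert(sizeof(arch_msi_msg_data_t) == sizeof(uint32_t),
	      "MSI data must be a single u32");

int main(void) { return 0; }
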


@ -38,9 +38,9 @@
#define arch_raw_cpu_ptr(ptr) \
({ \
unsigned long tcp_ptr__; \
asm volatile("add " __percpu_arg(1) ", %0" \
: "=r" (tcp_ptr__) \
: "m" (this_cpu_off), "0" (ptr)); \
asm ("add " __percpu_arg(1) ", %0" \
: "=r" (tcp_ptr__) \
: "m" (this_cpu_off), "0" (ptr)); \
(typeof(*(ptr)) __kernel __force *)tcp_ptr__; \
})
#else


@ -241,6 +241,11 @@ struct x86_pmu_capability {
#define INTEL_PMC_IDX_FIXED_SLOTS (INTEL_PMC_IDX_FIXED + 3)
#define INTEL_PMC_MSK_FIXED_SLOTS (1ULL << INTEL_PMC_IDX_FIXED_SLOTS)
static inline bool use_fixed_pseudo_encoding(u64 code)
{
return !(code & 0xff);
}
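
The helper classifies an event code as a fixed-counter pseudo-encoding when its low (event-select) byte is zero. A standalone sketch; the sample codes below are assumed to be the ref-cycles pseudo-encoding and an ordinary event select:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool use_fixed_pseudo_encoding(uint64_t code)
{
	return !(code & 0xff);	/* event-select byte of zero */
}

int main(void)
{
	/* 0x0300: ref-cycles pseudo-encoding (umask 0x03, event 0x00). */
	printf("0x0300 -> %d\n", use_fixed_pseudo_encoding(0x0300));	/* 1 */
	/* 0x003c: a regular event select, handled by generic counters. */
	printf("0x003c -> %d\n", use_fixed_pseudo_encoding(0x003c));	/* 0 */
	return 0;
}
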
/*
* We model BTS tracing as another fixed-mode PMC.
*


@ -38,6 +38,8 @@
#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \
__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; int3; nop; nop; nop")
#define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) \
ARCH_DEFINE_STATIC_CALL_TRAMP(name, __static_call_return0)
#define ARCH_ADD_TRAMP_KEY(name) \
asm(".pushsection .static_call_tramp_key, \"a\" \n" \


@ -12,10 +12,9 @@ enum insn_type {
};
/*
* data16 data16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
* The REX.W cancels the effect of any data16.
* cs cs cs xorl %eax, %eax - a single 5 byte instruction that clears %[er]ax
*/
static const u8 xor5rax[] = { 0x66, 0x66, 0x48, 0x31, 0xc0 };
static const u8 xor5rax[] = { 0x2e, 0x2e, 0x2e, 0x31, 0xc0 };
static const u8 retinsn[] = { RET_INSN_OPCODE, 0xcc, 0xcc, 0xcc, 0xcc };


@ -855,13 +855,11 @@ static void flush_tlb_func(void *info)
nr_invalidate);
}
static bool tlb_is_not_lazy(int cpu)
static bool tlb_is_not_lazy(int cpu, void *data)
{
return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
}
static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
EXPORT_PER_CPU_SYMBOL(cpu_tlbstate_shared);
@ -890,36 +888,11 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
* up on the new contents of what used to be page tables, while
* doing a speculative memory access.
*/
if (info->freed_tables) {
if (info->freed_tables)
on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
} else {
/*
* Although we could have used on_each_cpu_cond_mask(),
* open-coding it has performance advantages, as it eliminates
* the need for indirect calls or retpolines. In addition, it
* allows to use a designated cpumask for evaluating the
* condition, instead of allocating one.
*
* This code works under the assumption that there are no nested
* TLB flushes, an assumption that is already made in
* flush_tlb_mm_range().
*
* cond_cpumask is logically a stack-local variable, but it is
* more efficient to have it off the stack and not to allocate
* it on demand. Preemption is disabled and this code is
* non-reentrant.
*/
struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
int cpu;
cpumask_clear(cond_cpumask);
for_each_cpu(cpu, cpumask) {
if (tlb_is_not_lazy(cpu))
__cpumask_set_cpu(cpu, cond_cpumask);
}
on_each_cpu_mask(cond_cpumask, flush_tlb_func, (void *)info, true);
}
else
on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
(void *)info, 1, cpumask);
}
void flush_tlb_multi(const struct cpumask *cpumask,


@ -412,6 +412,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
EMIT_LFENCE();
EMIT2(0xFF, 0xE0 + reg);
} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
OPTIMIZER_HIDE_VAR(reg);
emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
} else
#endif


@ -40,7 +40,8 @@ static void msr_save_context(struct saved_context *ctxt)
struct saved_msr *end = msr + ctxt->saved_msrs.num;
while (msr < end) {
msr->valid = !rdmsrl_safe(msr->info.msr_no, &msr->info.reg.q);
if (msr->valid)
rdmsrl(msr->info.msr_no, msr->info.reg.q);
msr++;
}
}
@ -424,8 +425,10 @@ static int msr_build_context(const u32 *msr_id, const int num)
}
for (i = saved_msrs->num, j = 0; i < total_num; i++, j++) {
u64 dummy;
msr_array[i].info.msr_no = msr_id[j];
msr_array[i].valid = false;
msr_array[i].valid = !rdmsrl_safe(msr_id[j], &dummy);
msr_array[i].info.reg.q = 0;
}
saved_msrs->num = total_num;
@ -500,10 +503,24 @@ static int pm_cpu_check(const struct x86_cpu_id *c)
return ret;
}
static void pm_save_spec_msr(void)
{
u32 spec_msr_id[] = {
MSR_IA32_SPEC_CTRL,
MSR_IA32_TSX_CTRL,
MSR_TSX_FORCE_ABORT,
MSR_IA32_MCU_OPT_CTRL,
MSR_AMD64_LS_CFG,
};
msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
}
static int pm_check_save_msr(void)
{
dmi_check_system(msr_save_dmi_table);
pm_cpu_check(msr_save_cpu_table);
pm_save_spec_msr();
return 0;
}


@ -570,8 +570,7 @@ static int acpi_idle_play_dead(struct cpuidle_device *dev, int index)
{
struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
if (cx->type == ACPI_STATE_C3)
ACPI_FLUSH_CPU_CACHE();
ACPI_FLUSH_CPU_CACHE();
while (1) {


@ -588,19 +588,6 @@ static struct acpi_device *handle_to_device(acpi_handle handle,
return adev;
}
int acpi_bus_get_device(acpi_handle handle, struct acpi_device **device)
{
if (!device)
return -EINVAL;
*device = handle_to_device(handle, NULL);
if (!*device)
return -ENODEV;
return 0;
}
EXPORT_SYMBOL(acpi_bus_get_device);
/**
* acpi_fetch_acpi_dev - Retrieve ACPI device object.
* @handle: ACPI handle associated with the requested ACPI device object.


@ -115,14 +115,16 @@ config SATA_AHCI
If unsure, say N.
config SATA_LPM_POLICY
config SATA_MOBILE_LPM_POLICY
int "Default SATA Link Power Management policy for low power chipsets"
range 0 4
default 0
depends on SATA_AHCI
help
Select the Default SATA Link Power Management (LPM) policy to use
for chipsets / "South Bridges" designated as supporting low power.
for chipsets / "South Bridges" supporting low-power modes. Such
chipsets are typically found on most laptops but desktops and
servers now also widely use chipsets supporting low power modes.
The value set has the following meanings:
0 => Keep firmware settings


@ -1595,7 +1595,7 @@ static int ahci_init_msi(struct pci_dev *pdev, unsigned int n_ports,
static void ahci_update_initial_lpm_policy(struct ata_port *ap,
struct ahci_host_priv *hpriv)
{
int policy = CONFIG_SATA_LPM_POLICY;
int policy = CONFIG_SATA_MOBILE_LPM_POLICY;
/* Ignore processing for chipsets that don't use policy */


@ -236,7 +236,7 @@ enum {
AHCI_HFLAG_NO_WRITE_TO_RO = (1 << 24), /* don't write to read
only registers */
AHCI_HFLAG_USE_LPM_POLICY = (1 << 25), /* chipset that should use
SATA_LPM_POLICY
SATA_MOBILE_LPM_POLICY
as default lpm_policy */
AHCI_HFLAG_SUSPEND_PHYS = (1 << 26), /* handle PHYs during
suspend/resume */


@ -4014,6 +4014,9 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
ATA_HORKAGE_ZERO_AFTER_TRIM, },
{ "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
ATA_HORKAGE_ZERO_AFTER_TRIM, },
{ "Samsung SSD 840 EVO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
ATA_HORKAGE_NO_DMA_LOG |
ATA_HORKAGE_ZERO_AFTER_TRIM, },
{ "Samsung SSD 840*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
ATA_HORKAGE_ZERO_AFTER_TRIM, },
{ "Samsung SSD 850*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |


@ -1634,7 +1634,7 @@ EXPORT_SYMBOL_GPL(ata_sff_interrupt);
void ata_sff_lost_interrupt(struct ata_port *ap)
{
u8 status;
u8 status = 0;
struct ata_queued_cmd *qc;
/* Only one outstanding command per SFF channel */


@ -137,7 +137,11 @@ struct sata_dwc_device {
#endif
};
#define SATA_DWC_QCMD_MAX 32
/*
* Allow one extra special slot for commands and DMA management
* to account for libata internal commands.
*/
#define SATA_DWC_QCMD_MAX (ATA_MAX_QUEUE + 1)
struct sata_dwc_device_port {
struct sata_dwc_device *hsdev;


@ -1638,22 +1638,22 @@ struct sib_info {
};
void drbd_bcast_event(struct drbd_device *device, const struct sib_info *sib);
extern void notify_resource_state(struct sk_buff *,
extern int notify_resource_state(struct sk_buff *,
unsigned int,
struct drbd_resource *,
struct resource_info *,
enum drbd_notification_type);
extern void notify_device_state(struct sk_buff *,
extern int notify_device_state(struct sk_buff *,
unsigned int,
struct drbd_device *,
struct device_info *,
enum drbd_notification_type);
extern void notify_connection_state(struct sk_buff *,
extern int notify_connection_state(struct sk_buff *,
unsigned int,
struct drbd_connection *,
struct connection_info *,
enum drbd_notification_type);
extern void notify_peer_device_state(struct sk_buff *,
extern int notify_peer_device_state(struct sk_buff *,
unsigned int,
struct drbd_peer_device *,
struct peer_device_info *,

View file

@ -2719,6 +2719,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
sprintf(disk->disk_name, "drbd%d", minor);
disk->private_data = device;
blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
blk_queue_write_cache(disk->queue, true, true);
/* Setting the max_hw_sectors to an odd value of 8kibyte here
This triggers a max_bio_size message upon first attach or connect */
@ -2773,12 +2774,12 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
if (init_submitter(device)) {
err = ERR_NOMEM;
goto out_idr_remove_vol;
goto out_idr_remove_from_resource;
}
err = add_disk(disk);
if (err)
goto out_idr_remove_vol;
goto out_idr_remove_from_resource;
/* inherit the connection state */
device->state.conn = first_connection(resource)->cstate;
@ -2792,8 +2793,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
drbd_debugfs_device_add(device);
return NO_ERROR;
out_idr_remove_vol:
idr_remove(&connection->peer_devices, vnr);
out_idr_remove_from_resource:
for_each_connection(connection, resource) {
peer_device = idr_remove(&connection->peer_devices, vnr);

View file

@ -4549,7 +4549,7 @@ static int nla_put_notification_header(struct sk_buff *msg,
return drbd_notification_header_to_skb(msg, &nh, true);
}
void notify_resource_state(struct sk_buff *skb,
int notify_resource_state(struct sk_buff *skb,
unsigned int seq,
struct drbd_resource *resource,
struct resource_info *resource_info,
@ -4591,16 +4591,17 @@ void notify_resource_state(struct sk_buff *skb,
if (err && err != -ESRCH)
goto failed;
}
return;
return 0;
nla_put_failure:
nlmsg_free(skb);
failed:
drbd_err(resource, "Error %d while broadcasting event. Event seq:%u\n",
err, seq);
return err;
}
void notify_device_state(struct sk_buff *skb,
int notify_device_state(struct sk_buff *skb,
unsigned int seq,
struct drbd_device *device,
struct device_info *device_info,
@ -4640,16 +4641,17 @@ void notify_device_state(struct sk_buff *skb,
if (err && err != -ESRCH)
goto failed;
}
return;
return 0;
nla_put_failure:
nlmsg_free(skb);
failed:
drbd_err(device, "Error %d while broadcasting event. Event seq:%u\n",
err, seq);
return err;
}
void notify_connection_state(struct sk_buff *skb,
int notify_connection_state(struct sk_buff *skb,
unsigned int seq,
struct drbd_connection *connection,
struct connection_info *connection_info,
@ -4689,16 +4691,17 @@ void notify_connection_state(struct sk_buff *skb,
if (err && err != -ESRCH)
goto failed;
}
return;
return 0;
nla_put_failure:
nlmsg_free(skb);
failed:
drbd_err(connection, "Error %d while broadcasting event. Event seq:%u\n",
err, seq);
return err;
}
void notify_peer_device_state(struct sk_buff *skb,
int notify_peer_device_state(struct sk_buff *skb,
unsigned int seq,
struct drbd_peer_device *peer_device,
struct peer_device_info *peer_device_info,
@ -4739,13 +4742,14 @@ void notify_peer_device_state(struct sk_buff *skb,
if (err && err != -ESRCH)
goto failed;
}
return;
return 0;
nla_put_failure:
nlmsg_free(skb);
failed:
drbd_err(peer_device, "Error %d while broadcasting event. Event seq:%u\n",
err, seq);
return err;
}
void notify_helper(enum drbd_notification_type type,
@ -4796,7 +4800,7 @@ void notify_helper(enum drbd_notification_type type,
err, seq);
}
static void notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
static int notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
{
struct drbd_genlmsghdr *dh;
int err;
@ -4810,11 +4814,12 @@ static void notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
if (nla_put_notification_header(skb, NOTIFY_EXISTS))
goto nla_put_failure;
genlmsg_end(skb, dh);
return;
return 0;
nla_put_failure:
nlmsg_free(skb);
pr_err("Error %d sending event. Event seq:%u\n", err, seq);
return err;
}
static void free_state_changes(struct list_head *list)
@ -4841,6 +4846,7 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
unsigned int seq = cb->args[2];
unsigned int n;
enum drbd_notification_type flags = 0;
int err = 0;
/* There is no need for taking notification_mutex here: it doesn't
matter if the initial state events mix with later state change
@ -4849,32 +4855,32 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
cb->args[5]--;
if (cb->args[5] == 1) {
notify_initial_state_done(skb, seq);
err = notify_initial_state_done(skb, seq);
goto out;
}
n = cb->args[4]++;
if (cb->args[4] < cb->args[3])
flags |= NOTIFY_CONTINUES;
if (n < 1) {
notify_resource_state_change(skb, seq, state_change->resource,
err = notify_resource_state_change(skb, seq, state_change->resource,
NOTIFY_EXISTS | flags);
goto next;
}
n--;
if (n < state_change->n_connections) {
notify_connection_state_change(skb, seq, &state_change->connections[n],
err = notify_connection_state_change(skb, seq, &state_change->connections[n],
NOTIFY_EXISTS | flags);
goto next;
}
n -= state_change->n_connections;
if (n < state_change->n_devices) {
notify_device_state_change(skb, seq, &state_change->devices[n],
err = notify_device_state_change(skb, seq, &state_change->devices[n],
NOTIFY_EXISTS | flags);
goto next;
}
n -= state_change->n_devices;
if (n < state_change->n_devices * state_change->n_connections) {
notify_peer_device_state_change(skb, seq, &state_change->peer_devices[n],
err = notify_peer_device_state_change(skb, seq, &state_change->peer_devices[n],
NOTIFY_EXISTS | flags);
goto next;
}
@ -4889,7 +4895,10 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
cb->args[4] = 0;
}
out:
return skb->len;
if (err)
return err;
else
return skb->len;
}
int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)

View file

@ -1537,7 +1537,7 @@ int drbd_bitmap_io_from_worker(struct drbd_device *device,
return rv;
}
void notify_resource_state_change(struct sk_buff *skb,
int notify_resource_state_change(struct sk_buff *skb,
unsigned int seq,
struct drbd_resource_state_change *resource_state_change,
enum drbd_notification_type type)
@ -1550,10 +1550,10 @@ void notify_resource_state_change(struct sk_buff *skb,
.res_susp_fen = resource_state_change->susp_fen[NEW],
};
notify_resource_state(skb, seq, resource, &resource_info, type);
return notify_resource_state(skb, seq, resource, &resource_info, type);
}
void notify_connection_state_change(struct sk_buff *skb,
int notify_connection_state_change(struct sk_buff *skb,
unsigned int seq,
struct drbd_connection_state_change *connection_state_change,
enum drbd_notification_type type)
@ -1564,10 +1564,10 @@ void notify_connection_state_change(struct sk_buff *skb,
.conn_role = connection_state_change->peer_role[NEW],
};
notify_connection_state(skb, seq, connection, &connection_info, type);
return notify_connection_state(skb, seq, connection, &connection_info, type);
}
void notify_device_state_change(struct sk_buff *skb,
int notify_device_state_change(struct sk_buff *skb,
unsigned int seq,
struct drbd_device_state_change *device_state_change,
enum drbd_notification_type type)
@ -1577,10 +1577,10 @@ void notify_device_state_change(struct sk_buff *skb,
.dev_disk_state = device_state_change->disk_state[NEW],
};
notify_device_state(skb, seq, device, &device_info, type);
return notify_device_state(skb, seq, device, &device_info, type);
}
void notify_peer_device_state_change(struct sk_buff *skb,
int notify_peer_device_state_change(struct sk_buff *skb,
unsigned int seq,
struct drbd_peer_device_state_change *p,
enum drbd_notification_type type)
@ -1594,7 +1594,7 @@ void notify_peer_device_state_change(struct sk_buff *skb,
.peer_resync_susp_dependency = p->resync_susp_dependency[NEW],
};
notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
return notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
}
static void broadcast_state_change(struct drbd_state_change *state_change)
@ -1602,7 +1602,7 @@ static void broadcast_state_change(struct drbd_state_change *state_change)
struct drbd_resource_state_change *resource_state_change = &state_change->resource[0];
bool resource_state_has_changed;
unsigned int n_device, n_connection, n_peer_device, n_peer_devices;
void (*last_func)(struct sk_buff *, unsigned int, void *,
int (*last_func)(struct sk_buff *, unsigned int, void *,
enum drbd_notification_type) = NULL;
void *last_arg = NULL;

View file

@ -44,19 +44,19 @@ extern struct drbd_state_change *remember_old_state(struct drbd_resource *, gfp_
extern void copy_old_to_new_state_change(struct drbd_state_change *);
extern void forget_state_change(struct drbd_state_change *);
extern void notify_resource_state_change(struct sk_buff *,
extern int notify_resource_state_change(struct sk_buff *,
unsigned int,
struct drbd_resource_state_change *,
enum drbd_notification_type type);
extern void notify_connection_state_change(struct sk_buff *,
extern int notify_connection_state_change(struct sk_buff *,
unsigned int,
struct drbd_connection_state_change *,
enum drbd_notification_type type);
extern void notify_device_state_change(struct sk_buff *,
extern int notify_device_state_change(struct sk_buff *,
unsigned int,
struct drbd_device_state_change *,
enum drbd_notification_type type);
extern void notify_peer_device_state_change(struct sk_buff *,
extern int notify_peer_device_state_change(struct sk_buff *,
unsigned int,
struct drbd_peer_device_state_change *,
enum drbd_notification_type type);

View file

@ -1365,7 +1365,6 @@ static int cdrom_slot_status(struct cdrom_device_info *cdi, int slot)
*/
int cdrom_number_of_slots(struct cdrom_device_info *cdi)
{
int status;
int nslots = 1;
struct cdrom_changer_info *info;
@ -1377,7 +1376,7 @@ int cdrom_number_of_slots(struct cdrom_device_info *cdi)
if (!info)
return -ENOMEM;
if ((status = cdrom_read_mech_status(cdi, info)) == 0)
if (cdrom_read_mech_status(cdi, info) == 0)
nslots = info->hdr.nslots;
kfree(info);

View file

@ -437,11 +437,8 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
* This shouldn't be set by functions like add_device_randomness(),
* where we can't trust the buffer passed to it is guaranteed to be
* unpredictable (so it might not have any entropy at all).
*
* Returns the number of bytes processed from input, which is bounded
* by CRNG_INIT_CNT_THRESH if account is true.
*/
static size_t crng_pre_init_inject(const void *input, size_t len, bool account)
static void crng_pre_init_inject(const void *input, size_t len, bool account)
{
static int crng_init_cnt = 0;
struct blake2s_state hash;
@ -452,18 +449,15 @@ static size_t crng_pre_init_inject(const void *input, size_t len, bool account)
spin_lock_irqsave(&base_crng.lock, flags);
if (crng_init != 0) {
spin_unlock_irqrestore(&base_crng.lock, flags);
return 0;
return;
}
if (account)
len = min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
blake2s_update(&hash, input, len);
blake2s_final(&hash, base_crng.key);
if (account) {
crng_init_cnt += len;
crng_init_cnt += min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
++base_crng.generation;
crng_init = 1;
@ -474,8 +468,6 @@ static size_t crng_pre_init_inject(const void *input, size_t len, bool account)
if (crng_init == 1)
pr_notice("fast init done\n");
return len;
}
static void _get_random_bytes(void *buf, size_t nbytes)
@ -531,7 +523,6 @@ EXPORT_SYMBOL(get_random_bytes);
static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes)
{
bool large_request = nbytes > 256;
ssize_t ret = 0;
size_t len;
u32 chacha_state[CHACHA_STATE_WORDS];
@ -540,22 +531,23 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes)
if (!nbytes)
return 0;
len = min_t(size_t, 32, nbytes);
crng_make_state(chacha_state, output, len);
if (copy_to_user(buf, output, len))
return -EFAULT;
nbytes -= len;
buf += len;
ret += len;
while (nbytes) {
if (large_request && need_resched()) {
if (signal_pending(current))
break;
schedule();
}
/*
* Immediately overwrite the ChaCha key at index 4 with random
* bytes, in case userspace causes copy_to_user() below to sleep
* forever, so that we still retain forward secrecy in that case.
*/
crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);
/*
* However, if we're doing a read of len <= 32, we don't need to
* use chacha_state after, so we can simply return those bytes to
* the user directly.
*/
if (nbytes <= CHACHA_KEY_SIZE) {
ret = copy_to_user(buf, &chacha_state[4], nbytes) ? -EFAULT : nbytes;
goto out_zero_chacha;
}
do {
chacha20_block(chacha_state, output);
if (unlikely(chacha_state[12] == 0))
++chacha_state[13];
@ -569,10 +561,18 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes)
nbytes -= len;
buf += len;
ret += len;
}
memzero_explicit(chacha_state, sizeof(chacha_state));
BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0);
if (!(ret % PAGE_SIZE) && nbytes) {
if (signal_pending(current))
break;
cond_resched();
}
} while (nbytes);
memzero_explicit(output, sizeof(output));
out_zero_chacha:
memzero_explicit(chacha_state, sizeof(chacha_state));
return ret;
}
@ -1141,12 +1141,9 @@ void add_hwgenerator_randomness(const void *buffer, size_t count,
size_t entropy)
{
if (unlikely(crng_init == 0 && entropy < POOL_MIN_BITS)) {
size_t ret = crng_pre_init_inject(buffer, count, true);
mix_pool_bytes(buffer, ret);
count -= ret;
buffer += ret;
if (!count || crng_init == 0)
return;
crng_pre_init_inject(buffer, count, true);
mix_pool_bytes(buffer, count);
return;
}
/*
@ -1545,6 +1542,13 @@ static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes,
{
static int maxwarn = 10;
/*
* Opportunistically attempt to initialize the RNG on platforms that
* have fast cycle counters, but don't (for now) require it to succeed.
*/
if (!crng_ready())
try_to_generate_entropy();
if (!crng_ready() && maxwarn > 0) {
maxwarn--;
if (__ratelimit(&urandom_warning))

View file

@ -436,7 +436,6 @@ static int wait_for_media_ready(struct cxl_dev_state *cxlds)
for (i = mbox_ready_timeout; i; i--) {
u32 temp;
int rc;
rc = pci_read_config_dword(
pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &temp);

View file

@ -12,6 +12,7 @@ dmabuf_selftests-y := \
selftest.o \
st-dma-fence.o \
st-dma-fence-chain.o \
st-dma-fence-unwrap.o \
st-dma-resv.o
obj-$(CONFIG_DMABUF_SELFTESTS) += dmabuf_selftests.o
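The new object registers the unwrap selftest alongside the existing dma-buf ones. Assuming the usual dma-buf selftest flow (the subtests run from the module's init and report through the kernel log), exercising it might look like:

	# sketch, assuming CONFIG_DMABUF_SELFTESTS=m
	modprobe dmabuf_selftests
	dmesg | grep dma_fence_unwrap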

View file

@ -159,6 +159,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
struct dma_fence_array *array;
size_t size = sizeof(*array);
WARN_ON(!num_fences || !fences);
/* Allocate the callback structures behind the array. */
size += num_fences * sizeof(struct dma_fence_array_cb);
array = kzalloc(size, GFP_KERNEL);
@ -219,3 +221,33 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context)
return true;
}
EXPORT_SYMBOL(dma_fence_match_context);
struct dma_fence *dma_fence_array_first(struct dma_fence *head)
{
struct dma_fence_array *array;
if (!head)
return NULL;
array = to_dma_fence_array(head);
if (!array)
return head;
if (!array->num_fences)
return NULL;
return array->fences[0];
}
EXPORT_SYMBOL(dma_fence_array_first);
struct dma_fence *dma_fence_array_next(struct dma_fence *head,
unsigned int index)
{
struct dma_fence_array *array = to_dma_fence_array(head);
if (!array || index >= array->num_fences)
return NULL;
return array->fences[index];
}
EXPORT_SYMBOL(dma_fence_array_next);
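A hypothetical caller sketch (not part of this commit, assumes linux/dma-fence-array.h) showing how the two new helpers compose: dma_fence_array_first() falls back to the fence itself for non-arrays, so one loop covers plain fences and arrays alike.

static void example_walk_fences(struct dma_fence *head)
{
	struct dma_fence *fence;
	unsigned int index;

	/* index 0 comes from _first(), all later entries from _next() */
	for (index = 0, fence = dma_fence_array_first(head); fence;
	     ++index, fence = dma_fence_array_next(head, index))
		pr_info("fence %llu#%llu\n", fence->context, fence->seqno);
}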

View file

@ -12,4 +12,5 @@
selftest(sanitycheck, __sanitycheck__) /* keep first (igt selfcheck) */
selftest(dma_fence, dma_fence)
selftest(dma_fence_chain, dma_fence_chain)
selftest(dma_fence_unwrap, dma_fence_unwrap)
selftest(dma_resv, dma_resv)

View file

@ -0,0 +1,261 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (C) 2022 Advanced Micro Devices, Inc.
*/
#include <linux/dma-fence-unwrap.h>
#if 0
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/mm.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/random.h>
#endif
#include "selftest.h"
#define CHAIN_SZ (4 << 10)
static inline struct mock_fence {
struct dma_fence base;
spinlock_t lock;
} *to_mock_fence(struct dma_fence *f) {
return container_of(f, struct mock_fence, base);
}
static const char *mock_name(struct dma_fence *f)
{
return "mock";
}
static const struct dma_fence_ops mock_ops = {
.get_driver_name = mock_name,
.get_timeline_name = mock_name,
};
static struct dma_fence *mock_fence(void)
{
struct mock_fence *f;
f = kmalloc(sizeof(*f), GFP_KERNEL);
if (!f)
return NULL;
spin_lock_init(&f->lock);
dma_fence_init(&f->base, &mock_ops, &f->lock, 0, 0);
return &f->base;
}
static struct dma_fence *mock_array(unsigned int num_fences, ...)
{
struct dma_fence_array *array;
struct dma_fence **fences;
va_list valist;
int i;
fences = kcalloc(num_fences, sizeof(*fences), GFP_KERNEL);
if (!fences)
return NULL;
va_start(valist, num_fences);
for (i = 0; i < num_fences; ++i)
fences[i] = va_arg(valist, typeof(*fences));
va_end(valist);
array = dma_fence_array_create(num_fences, fences,
dma_fence_context_alloc(1),
1, false);
if (!array)
goto cleanup;
return &array->base;
cleanup:
for (i = 0; i < num_fences; ++i)
dma_fence_put(fences[i]);
kfree(fences);
return NULL;
}
static struct dma_fence *mock_chain(struct dma_fence *prev,
struct dma_fence *fence)
{
struct dma_fence_chain *f;
f = dma_fence_chain_alloc();
if (!f) {
dma_fence_put(prev);
dma_fence_put(fence);
return NULL;
}
dma_fence_chain_init(f, prev, fence, 1);
return &f->base;
}
static int sanitycheck(void *arg)
{
struct dma_fence *f, *chain, *array;
int err = 0;
f = mock_fence();
if (!f)
return -ENOMEM;
array = mock_array(1, f);
if (!array)
return -ENOMEM;
chain = mock_chain(NULL, array);
if (!chain)
return -ENOMEM;
dma_fence_signal(f);
dma_fence_put(chain);
return err;
}
static int unwrap_array(void *arg)
{
struct dma_fence *fence, *f1, *f2, *array;
struct dma_fence_unwrap iter;
int err = 0;
f1 = mock_fence();
if (!f1)
return -ENOMEM;
f2 = mock_fence();
if (!f2) {
dma_fence_put(f1);
return -ENOMEM;
}
array = mock_array(2, f1, f2);
if (!array)
return -ENOMEM;
dma_fence_unwrap_for_each(fence, &iter, array) {
if (fence == f1) {
f1 = NULL;
} else if (fence == f2) {
f2 = NULL;
} else {
pr_err("Unexpected fence!\n");
err = -EINVAL;
}
}
if (f1 || f2) {
pr_err("Not all fences seen!\n");
err = -EINVAL;
}
dma_fence_signal(f1);
dma_fence_signal(f2);
dma_fence_put(array);
return err;
}
static int unwrap_chain(void *arg)
{
struct dma_fence *fence, *f1, *f2, *chain;
struct dma_fence_unwrap iter;
int err = 0;
f1 = mock_fence();
if (!f1)
return -ENOMEM;
f2 = mock_fence();
if (!f2) {
dma_fence_put(f1);
return -ENOMEM;
}
chain = mock_chain(f1, f2);
if (!chain)
return -ENOMEM;
dma_fence_unwrap_for_each(fence, &iter, chain) {
if (fence == f1) {
f1 = NULL;
} else if (fence == f2) {
f2 = NULL;
} else {
pr_err("Unexpected fence!\n");
err = -EINVAL;
}
}
if (f1 || f2) {
pr_err("Not all fences seen!\n");
err = -EINVAL;
}
dma_fence_signal(f1);
dma_fence_signal(f2);
dma_fence_put(chain);
return err;
}
static int unwrap_chain_array(void *arg)
{
struct dma_fence *fence, *f1, *f2, *array, *chain;
struct dma_fence_unwrap iter;
int err = 0;
f1 = mock_fence();
if (!f1)
return -ENOMEM;
f2 = mock_fence();
if (!f2) {
dma_fence_put(f1);
return -ENOMEM;
}
array = mock_array(2, f1, f2);
if (!array)
return -ENOMEM;
chain = mock_chain(NULL, array);
if (!chain)
return -ENOMEM;
dma_fence_unwrap_for_each(fence, &iter, chain) {
if (fence == f1) {
f1 = NULL;
} else if (fence == f2) {
f2 = NULL;
} else {
pr_err("Unexpected fence!\n");
err = -EINVAL;
}
}
if (f1 || f2) {
pr_err("Not all fences seen!\n");
err = -EINVAL;
}
dma_fence_signal(f1);
dma_fence_signal(f2);
dma_fence_put(chain);
return err;
}
int dma_fence_unwrap(void)
{
static const struct subtest tests[] = {
SUBTEST(sanitycheck),
SUBTEST(unwrap_array),
SUBTEST(unwrap_chain),
SUBTEST(unwrap_chain_array),
};
return subtests(tests, NULL);
}

View file

@ -5,6 +5,7 @@
* Copyright (C) 2012 Google, Inc.
*/
#include <linux/dma-fence-unwrap.h>
#include <linux/export.h>
#include <linux/file.h>
#include <linux/fs.h>
@ -172,20 +173,6 @@ static int sync_file_set_fence(struct sync_file *sync_file,
return 0;
}
static struct dma_fence **get_fences(struct sync_file *sync_file,
int *num_fences)
{
if (dma_fence_is_array(sync_file->fence)) {
struct dma_fence_array *array = to_dma_fence_array(sync_file->fence);
*num_fences = array->num_fences;
return array->fences;
}
*num_fences = 1;
return &sync_file->fence;
}
static void add_fence(struct dma_fence **fences,
int *i, struct dma_fence *fence)
{
@ -210,86 +197,97 @@ static void add_fence(struct dma_fence **fences,
static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
struct sync_file *b)
{
struct dma_fence *a_fence, *b_fence, **fences;
struct dma_fence_unwrap a_iter, b_iter;
unsigned int index, num_fences;
struct sync_file *sync_file;
struct dma_fence **fences = NULL, **nfences, **a_fences, **b_fences;
int i = 0, i_a, i_b, num_fences, a_num_fences, b_num_fences;
sync_file = sync_file_alloc();
if (!sync_file)
return NULL;
a_fences = get_fences(a, &a_num_fences);
b_fences = get_fences(b, &b_num_fences);
if (a_num_fences > INT_MAX - b_num_fences)
goto err;
num_fences = 0;
dma_fence_unwrap_for_each(a_fence, &a_iter, a->fence)
++num_fences;
dma_fence_unwrap_for_each(b_fence, &b_iter, b->fence)
++num_fences;
num_fences = a_num_fences + b_num_fences;
if (num_fences > INT_MAX)
goto err_free_sync_file;
fences = kcalloc(num_fences, sizeof(*fences), GFP_KERNEL);
if (!fences)
goto err;
goto err_free_sync_file;
/*
* Assume sync_file a and b are both ordered and have no
* duplicates with the same context.
* We can't guarantee that fences in both a and b are ordered, but it is
* still quite likely.
*
* If a sync_file can only be created with sync_file_merge
* and sync_file_create, this is a reasonable assumption.
* So attempt to order the fences as we pass over them and merge fences
* with the same context.
*/
for (i_a = i_b = 0; i_a < a_num_fences && i_b < b_num_fences; ) {
struct dma_fence *pt_a = a_fences[i_a];
struct dma_fence *pt_b = b_fences[i_b];
if (pt_a->context < pt_b->context) {
add_fence(fences, &i, pt_a);
index = 0;
for (a_fence = dma_fence_unwrap_first(a->fence, &a_iter),
b_fence = dma_fence_unwrap_first(b->fence, &b_iter);
a_fence || b_fence; ) {
i_a++;
} else if (pt_a->context > pt_b->context) {
add_fence(fences, &i, pt_b);
if (!b_fence) {
add_fence(fences, &index, a_fence);
a_fence = dma_fence_unwrap_next(&a_iter);
} else if (!a_fence) {
add_fence(fences, &index, b_fence);
b_fence = dma_fence_unwrap_next(&b_iter);
} else if (a_fence->context < b_fence->context) {
add_fence(fences, &index, a_fence);
a_fence = dma_fence_unwrap_next(&a_iter);
} else if (b_fence->context < a_fence->context) {
add_fence(fences, &index, b_fence);
b_fence = dma_fence_unwrap_next(&b_iter);
} else if (__dma_fence_is_later(a_fence->seqno, b_fence->seqno,
a_fence->ops)) {
add_fence(fences, &index, a_fence);
a_fence = dma_fence_unwrap_next(&a_iter);
b_fence = dma_fence_unwrap_next(&b_iter);
i_b++;
} else {
if (__dma_fence_is_later(pt_a->seqno, pt_b->seqno,
pt_a->ops))
add_fence(fences, &i, pt_a);
else
add_fence(fences, &i, pt_b);
i_a++;
i_b++;
add_fence(fences, &index, b_fence);
a_fence = dma_fence_unwrap_next(&a_iter);
b_fence = dma_fence_unwrap_next(&b_iter);
}
}
for (; i_a < a_num_fences; i_a++)
add_fence(fences, &i, a_fences[i_a]);
if (index == 0)
fences[index++] = dma_fence_get_stub();
for (; i_b < b_num_fences; i_b++)
add_fence(fences, &i, b_fences[i_b]);
if (num_fences > index) {
struct dma_fence **tmp;
if (i == 0)
fences[i++] = dma_fence_get(a_fences[0]);
if (num_fences > i) {
nfences = krealloc_array(fences, i, sizeof(*fences), GFP_KERNEL);
if (!nfences)
goto err;
fences = nfences;
/* Keep going even when reducing the size failed */
tmp = krealloc_array(fences, index, sizeof(*fences),
GFP_KERNEL);
if (tmp)
fences = tmp;
}
if (sync_file_set_fence(sync_file, fences, i) < 0)
goto err;
if (sync_file_set_fence(sync_file, fences, index) < 0)
goto err_put_fences;
strlcpy(sync_file->user_name, name, sizeof(sync_file->user_name));
return sync_file;
err:
while (i)
dma_fence_put(fences[--i]);
err_put_fences:
while (index)
dma_fence_put(fences[--index]);
kfree(fences);
err_free_sync_file:
fput(sync_file->file);
return NULL;
}
static int sync_file_release(struct inode *inode, struct file *file)
@ -398,11 +396,13 @@ static int sync_fill_fence_info(struct dma_fence *fence,
static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
unsigned long arg)
{
struct sync_file_info info;
struct sync_fence_info *fence_info = NULL;
struct dma_fence **fences;
struct dma_fence_unwrap iter;
struct sync_file_info info;
unsigned int num_fences;
struct dma_fence *fence;
int ret;
__u32 size;
int num_fences, ret, i;
if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
return -EFAULT;
@ -410,7 +410,9 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
if (info.flags || info.pad)
return -EINVAL;
fences = get_fences(sync_file, &num_fences);
num_fences = 0;
dma_fence_unwrap_for_each(fence, &iter, sync_file->fence)
++num_fences;
/*
* Passing num_fences = 0 means that userspace doesn't want to
@ -433,8 +435,11 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
if (!fence_info)
return -ENOMEM;
for (i = 0; i < num_fences; i++) {
int status = sync_fill_fence_info(fences[i], &fence_info[i]);
num_fences = 0;
dma_fence_unwrap_for_each(fence, &iter, sync_file->fence) {
int status;
status = sync_fill_fence_info(fence, &fence_info[num_fences++]);
info.status = info.status <= 0 ? info.status : status;
}
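For reference, a minimal sketch (hypothetical helper, not part of this commit) of the iterator contract the rework relies on: dma_fence_unwrap_for_each() yields every leaf fence behind any mix of dma_fence_chain and dma_fence_array containers, which is why both the merge path and this ioctl can simply walk and count.

static unsigned int example_count_fences(struct dma_fence *head)
{
	struct dma_fence_unwrap iter;
	struct dma_fence *fence;
	unsigned int count = 0;

	dma_fence_unwrap_for_each(fence, &iter, head)
		++count;
	return count;
}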

View file

@ -1404,6 +1404,16 @@ static int gpiochip_to_irq(struct gpio_chip *gc, unsigned int offset)
{
struct irq_domain *domain = gc->irq.domain;
#ifdef CONFIG_GPIOLIB_IRQCHIP
/*
* Avoid a race condition with other code, which tries to look up
* an IRQ before the irqchip has been properly registered,
* i.e. while gpiochip is still being brought up.
*/
if (!gc->irq.initialized)
return -EPROBE_DEFER;
#endif
if (!gpiochip_irqchip_irq_valid(gc, offset))
return -ENXIO;
@ -1593,6 +1603,15 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
acpi_gpiochip_request_interrupts(gc);
/*
* Using barrier() here to prevent compiler from reordering
* gc->irq.initialized before initialization of above
* GPIO chip irq members.
*/
barrier();
gc->irq.initialized = true;
return 0;
}

View file

@ -119,6 +119,7 @@
#define CONNECTOR_OBJECT_ID_eDP 0x14
#define CONNECTOR_OBJECT_ID_MXM 0x15
#define CONNECTOR_OBJECT_ID_LVDS_eDP 0x16
#define CONNECTOR_OBJECT_ID_USBC 0x17
/* deleted */

View file

@ -5733,7 +5733,7 @@ void amdgpu_device_flush_hdp(struct amdgpu_device *adev,
struct amdgpu_ring *ring)
{
#ifdef CONFIG_X86_64
if (adev->flags & AMD_IS_APU)
if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev))
return;
#endif
if (adev->gmc.xgmi.connected_to_cpu)
@ -5749,7 +5749,7 @@ void amdgpu_device_invalidate_hdp(struct amdgpu_device *adev,
struct amdgpu_ring *ring)
{
#ifdef CONFIG_X86_64
if (adev->flags & AMD_IS_APU)
if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev))
return;
#endif
if (adev->gmc.xgmi.connected_to_cpu)

View file

@ -680,7 +680,7 @@ MODULE_PARM_DESC(sched_policy,
* Maximum number of processes that HWS can schedule concurrently. The maximum is the
* number of VMIDs assigned to the HWS, which is also the default.
*/
int hws_max_conc_proc = 8;
int hws_max_conc_proc = -1;
module_param(hws_max_conc_proc, int, 0444);
MODULE_PARM_DESC(hws_max_conc_proc,
"Max # processes HWS can execute concurrently when sched_policy=0 (0 = no concurrency, #VMIDs for KFD = Maximum(default))");

View file

@ -266,7 +266,7 @@ static int amdgpu_gfx_kiq_acquire(struct amdgpu_device *adev,
* adev->gfx.mec.num_pipe_per_mec
* adev->gfx.mec.num_queue_per_pipe;
while (queue_bit-- >= 0) {
while (--queue_bit >= 0) {
if (test_bit(queue_bit, adev->gfx.mec.queue_bitmap))
continue;
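The one-character change above matters at the lower bound: the post-decrement form runs the body one extra time with the index already at -1, so test_bit() is handed an out-of-range bit. A standalone illustration (plain userspace C, only a demonstration of the two loop shapes):

#include <stdio.h>

int main(void)
{
	int i;

	i = 4;
	while (i-- >= 0)	/* body sees 3, 2, 1, 0, then -1 */
		printf("post-decrement: %d\n", i);

	i = 4;
	while (--i >= 0)	/* body sees 3, 2, 1, 0 */
		printf("pre-decrement:  %d\n", i);
	return 0;
}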

View file

@ -561,9 +561,15 @@ void amdgpu_gmc_noretry_set(struct amdgpu_device *adev)
switch (adev->ip_versions[GC_HWIP][0]) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 3, 0):
case IP_VERSION(9, 4, 0):
case IP_VERSION(9, 4, 1):
case IP_VERSION(9, 4, 2):
case IP_VERSION(10, 3, 3):
case IP_VERSION(10, 3, 4):
case IP_VERSION(10, 3, 5):
case IP_VERSION(10, 3, 6):
case IP_VERSION(10, 3, 7):
/*
* noretry = 0 will cause kfd page fault tests fail
* for some ASICs, so set default to 1 for these ASICs.

View file

@ -1284,6 +1284,7 @@ void amdgpu_bo_get_memory(struct amdgpu_bo *bo, uint64_t *vram_mem,
*/
void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
struct dma_fence *fence = NULL;
struct amdgpu_bo *abo;
int r;
@ -1303,7 +1304,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
amdgpu_amdkfd_remove_fence_on_pt_pd_bos(abo);
if (bo->resource->mem_type != TTM_PL_VRAM ||
!(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
!(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE) ||
adev->in_suspend || adev->shutdown)
return;
if (WARN_ON_ONCE(!dma_resv_trylock(bo->base.resv)))

View file

@ -300,8 +300,8 @@ void amdgpu_ring_generic_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib);
void amdgpu_ring_commit(struct amdgpu_ring *ring);
void amdgpu_ring_undo(struct amdgpu_ring *ring);
int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
unsigned int ring_size, struct amdgpu_irq_src *irq_src,
unsigned int irq_type, unsigned int prio,
unsigned int max_dw, struct amdgpu_irq_src *irq_src,
unsigned int irq_type, unsigned int hw_prio,
atomic_t *sched_score);
void amdgpu_ring_fini(struct amdgpu_ring *ring);
void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,

View file

@ -159,6 +159,7 @@
#define AMDGPU_VCN_MULTI_QUEUE_FLAG (1 << 8)
#define AMDGPU_VCN_SW_RING_FLAG (1 << 9)
#define AMDGPU_VCN_FW_LOGGING_FLAG (1 << 10)
#define AMDGPU_VCN_SMU_VERSION_INFO_FLAG (1 << 11)
#define AMDGPU_VCN_IB_FLAG_DECODE_BUFFER 0x00000001
#define AMDGPU_VCN_CMD_FLAG_MSG_BUFFER 0x00000001
@ -279,6 +280,11 @@ struct amdgpu_fw_shared_fw_logging {
uint32_t size;
};
struct amdgpu_fw_shared_smu_interface_info {
uint8_t smu_interface_type;
uint8_t padding[3];
};
struct amdgpu_fw_shared {
uint32_t present_flag_0;
uint8_t pad[44];
@ -287,6 +293,7 @@ struct amdgpu_fw_shared {
struct amdgpu_fw_shared_multi_queue multi_queue;
struct amdgpu_fw_shared_sw_ring sw_ring;
struct amdgpu_fw_shared_fw_logging fw_log;
struct amdgpu_fw_shared_smu_interface_info smu_interface_info;
};
struct amdgpu_vcn_fwlog {

View file

@ -3293,7 +3293,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_3[] =
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000280),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x00800000),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1807ff, 0x00000242),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff1ffff, 0x00000500),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL_Vangogh, 0x1ff1ffff, 0x00000500),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0x000000ff, 0x000000e4),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_0, 0x77777777, 0x32103210),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x32103210),
@ -3429,7 +3429,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_6[] =
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000280),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x00800000),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1807ff, 0x00000042),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff1ffff, 0x00000500),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL_Vangogh, 0x1ff1ffff, 0x00000500),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0x000000ff, 0x00000044),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_0, 0x77777777, 0x32103210),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x32103210),
@ -3454,7 +3454,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_7[] = {
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000280),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x00800000),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1807ff, 0x00000041),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff1ffff, 0x00000500),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL_Vangogh, 0x1ff1ffff, 0x00000500),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0x000000ff, 0x000000e4),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_0, 0x77777777, 0x32103210),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x32103210),
@ -7689,6 +7689,7 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
switch (adev->ip_versions[GC_HWIP][0]) {
case IP_VERSION(10, 3, 1):
case IP_VERSION(10, 3, 3):
case IP_VERSION(10, 3, 7):
preempt_disable();
clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Vangogh);
clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Vangogh);

View file

@ -814,7 +814,7 @@ static int gmc_v10_0_mc_init(struct amdgpu_device *adev)
adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
#ifdef CONFIG_X86_64
if (adev->flags & AMD_IS_APU) {
if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) {
adev->gmc.aper_base = adev->gfxhub.funcs->get_mc_fb_offset(adev);
adev->gmc.aper_size = adev->gmc.real_vram_size;
}

View file

@ -381,8 +381,9 @@ static int gmc_v7_0_mc_init(struct amdgpu_device *adev)
adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
#ifdef CONFIG_X86_64
if (adev->flags & AMD_IS_APU &&
adev->gmc.real_vram_size > adev->gmc.aper_size) {
if ((adev->flags & AMD_IS_APU) &&
adev->gmc.real_vram_size > adev->gmc.aper_size &&
!amdgpu_passthrough(adev)) {
adev->gmc.aper_base = ((u64)RREG32(mmMC_VM_FB_OFFSET)) << 22;
adev->gmc.aper_size = adev->gmc.real_vram_size;
}

View file

@ -581,7 +581,7 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
#ifdef CONFIG_X86_64
if (adev->flags & AMD_IS_APU) {
if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) {
adev->gmc.aper_base = ((u64)RREG32(mmMC_VM_FB_OFFSET)) << 22;
adev->gmc.aper_size = adev->gmc.real_vram_size;
}

View file

@ -1456,7 +1456,7 @@ static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
*/
/* check whether both host-gpu and gpu-gpu xgmi links exist */
if ((adev->flags & AMD_IS_APU) ||
if (((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) ||
(adev->gmc.xgmi.supported &&
adev->gmc.xgmi.connected_to_cpu)) {
adev->gmc.aper_base =
@ -1721,7 +1721,7 @@ static int gmc_v9_0_sw_fini(void *handle)
amdgpu_gem_force_release(adev);
amdgpu_vm_manager_fini(adev);
amdgpu_gart_table_vram_free(adev);
amdgpu_bo_unref(&adev->gmc.pdb0_bo);
amdgpu_bo_free_kernel(&adev->gmc.pdb0_bo, NULL, &adev->gmc.ptr_pdb0);
amdgpu_bo_fini(adev);
return 0;

View file

@ -24,6 +24,7 @@
#include <linux/firmware.h>
#include "amdgpu.h"
#include "amdgpu_cs.h"
#include "amdgpu_vcn.h"
#include "amdgpu_pm.h"
#include "soc15.h"
@ -1900,6 +1901,75 @@ static const struct amd_ip_funcs vcn_v1_0_ip_funcs = {
.set_powergating_state = vcn_v1_0_set_powergating_state,
};
/*
* It is a hardware issue that VCN can't handle a GTT TMZ buffer on
* CHIP_RAVEN series ASIC. Move such a GTT TMZ buffer to VRAM domain
* before command submission as a workaround.
*/
static int vcn_v1_0_validate_bo(struct amdgpu_cs_parser *parser,
struct amdgpu_job *job,
uint64_t addr)
{
struct ttm_operation_ctx ctx = { false, false };
struct amdgpu_fpriv *fpriv = parser->filp->driver_priv;
struct amdgpu_vm *vm = &fpriv->vm;
struct amdgpu_bo_va_mapping *mapping;
struct amdgpu_bo *bo;
int r;
addr &= AMDGPU_GMC_HOLE_MASK;
if (addr & 0x7) {
DRM_ERROR("VCN messages must be 8 byte aligned!\n");
return -EINVAL;
}
mapping = amdgpu_vm_bo_lookup_mapping(vm, addr/AMDGPU_GPU_PAGE_SIZE);
if (!mapping || !mapping->bo_va || !mapping->bo_va->base.bo)
return -EINVAL;
bo = mapping->bo_va->base.bo;
if (!(bo->flags & AMDGPU_GEM_CREATE_ENCRYPTED))
return 0;
amdgpu_bo_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_VRAM);
r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
if (r) {
DRM_ERROR("Failed to validate the VCN message BO (%d)!\n", r);
return r;
}
return r;
}
static int vcn_v1_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
struct amdgpu_job *job,
struct amdgpu_ib *ib)
{
uint32_t msg_lo = 0, msg_hi = 0;
int i, r;
if (!(ib->flags & AMDGPU_IB_FLAGS_SECURE))
return 0;
for (i = 0; i < ib->length_dw; i += 2) {
uint32_t reg = amdgpu_ib_get_value(ib, i);
uint32_t val = amdgpu_ib_get_value(ib, i + 1);
if (reg == PACKET0(p->adev->vcn.internal.data0, 0)) {
msg_lo = val;
} else if (reg == PACKET0(p->adev->vcn.internal.data1, 0)) {
msg_hi = val;
} else if (reg == PACKET0(p->adev->vcn.internal.cmd, 0)) {
r = vcn_v1_0_validate_bo(p, job,
((u64)msg_hi) << 32 | msg_lo);
if (r)
return r;
}
}
return 0;
}
static const struct amdgpu_ring_funcs vcn_v1_0_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_DEC,
.align_mask = 0xf,
@ -1910,6 +1980,7 @@ static const struct amdgpu_ring_funcs vcn_v1_0_dec_ring_vm_funcs = {
.get_rptr = vcn_v1_0_dec_ring_get_rptr,
.get_wptr = vcn_v1_0_dec_ring_get_wptr,
.set_wptr = vcn_v1_0_dec_ring_set_wptr,
.patch_cs_in_place = vcn_v1_0_ring_patch_cs_in_place,
.emit_frame_size =
6 + 6 + /* hdp invalidate / flush */
SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +

View file

@ -219,6 +219,11 @@ static int vcn_v3_0_sw_init(void *handle)
cpu_to_le32(AMDGPU_VCN_MULTI_QUEUE_FLAG) |
cpu_to_le32(AMDGPU_VCN_FW_SHARED_FLAG_0_RB);
fw_shared->sw_ring.is_enabled = cpu_to_le32(DEC_SW_RING_ENABLED);
fw_shared->present_flag_0 |= AMDGPU_VCN_SMU_VERSION_INFO_FLAG;
if (adev->ip_versions[UVD_HWIP][0] == IP_VERSION(3, 1, 2))
fw_shared->smu_interface_info.smu_interface_type = 2;
else if (adev->ip_versions[UVD_HWIP][0] == IP_VERSION(3, 1, 1))
fw_shared->smu_interface_info.smu_interface_type = 1;
if (amdgpu_vcnfw_log)
amdgpu_vcn_fwlog_init(&adev->vcn.inst[i]);
@ -575,8 +580,8 @@ static void vcn_v3_0_mc_resume_dpg_mode(struct amdgpu_device *adev, int inst_idx
AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_fw_shared)), 0, indirect);
/* VCN global tiling registers */
WREG32_SOC15_DPG_MODE(0, SOC15_DPG_MODE_OFFSET(
UVD, 0, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
WREG32_SOC15_DPG_MODE(inst_idx, SOC15_DPG_MODE_OFFSET(
UVD, inst_idx, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
}
static void vcn_v3_0_disable_static_power_gating(struct amdgpu_device *adev, int inst)
@ -1480,8 +1485,11 @@ static int vcn_v3_0_start_sriov(struct amdgpu_device *adev)
static int vcn_v3_0_stop_dpg_mode(struct amdgpu_device *adev, int inst_idx)
{
struct dpg_pause_state state = {.fw_based = VCN_DPG_STATE__UNPAUSE};
uint32_t tmp;
vcn_v3_0_pause_dpg_mode(adev, inst_idx, &state);
/* Wait for power status to be 1 */
SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);

View file

@ -483,15 +483,10 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
}
/* Verify module parameters regarding mapped process number*/
if ((hws_max_conc_proc < 0)
|| (hws_max_conc_proc > kfd->vm_info.vmid_num_kfd)) {
dev_err(kfd_device,
"hws_max_conc_proc %d must be between 0 and %d, use %d instead\n",
hws_max_conc_proc, kfd->vm_info.vmid_num_kfd,
kfd->vm_info.vmid_num_kfd);
if (hws_max_conc_proc >= 0)
kfd->max_proc_per_quantum = min((u32)hws_max_conc_proc, kfd->vm_info.vmid_num_kfd);
else
kfd->max_proc_per_quantum = kfd->vm_info.vmid_num_kfd;
} else
kfd->max_proc_per_quantum = hws_max_conc_proc;
/* calculate max size of mqds needed for queues */
size = max_num_of_queues_per_device *
@ -536,7 +531,8 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
goto kfd_doorbell_error;
}
kfd->hive_id = kfd->adev->gmc.xgmi.hive_id;
if (amdgpu_use_xgmi_p2p)
kfd->hive_id = kfd->adev->gmc.xgmi.hive_id;
kfd->noretry = kfd->adev->gmc.noretry;

View file

@ -749,6 +749,8 @@ static struct kfd_event_waiter *alloc_event_waiters(uint32_t num_events)
event_waiters = kmalloc_array(num_events,
sizeof(struct kfd_event_waiter),
GFP_KERNEL);
if (!event_waiters)
return NULL;
for (i = 0; (event_waiters) && (i < num_events) ; i++) {
init_wait(&event_waiters[i].wait);

View file

@ -247,15 +247,6 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
return ret;
}
ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
O_RDWR);
if (ret < 0) {
kfifo_free(&client->fifo);
kfree(client);
return ret;
}
*fd = ret;
init_waitqueue_head(&client->wait_queue);
spin_lock_init(&client->lock);
client->events = 0;
@ -265,5 +256,20 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
list_add_rcu(&client->list, &dev->smi_clients);
spin_unlock(&dev->smi_lock);
ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
O_RDWR);
if (ret < 0) {
spin_lock(&dev->smi_lock);
list_del_rcu(&client->list);
spin_unlock(&dev->smi_lock);
synchronize_rcu();
kfifo_free(&client->fifo);
kfree(client);
return ret;
}
*fd = ret;
return 0;
}

View file

@ -2714,7 +2714,8 @@ static int dm_resume(void *handle)
* this is the case when traversing through already created
* MST connectors, should be skipped
*/
if (aconnector->mst_port)
if (aconnector->dc_link &&
aconnector->dc_link->type == dc_connection_mst_branch)
continue;
mutex_lock(&aconnector->hpd_lock);
@ -3972,7 +3973,7 @@ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *cap
max - min);
}
static int amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
int bl_idx,
u32 user_brightness)
{
@ -4003,7 +4004,8 @@ static int amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
DRM_DEBUG("DM: Failed to update backlight on eDP[%d]\n", bl_idx);
}
return rc ? 0 : 1;
if (rc)
dm->actual_brightness[bl_idx] = user_brightness;
}
static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
@ -9947,7 +9949,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
/* restore the backlight level */
for (i = 0; i < dm->num_of_edps; i++) {
if (dm->backlight_dev[i] &&
(amdgpu_dm_backlight_get_level(dm, i) != dm->brightness[i]))
(dm->actual_brightness[i] != dm->brightness[i]))
amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
}
#endif

View file

@ -540,6 +540,12 @@ struct amdgpu_display_manager {
* cached backlight values.
*/
u32 brightness[AMDGPU_DM_MAX_NUM_EDP];
/**
* @actual_brightness:
*
* last successfully applied backlight values.
*/
u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
};
enum dsc_clock_force_state {

View file

@ -436,57 +436,84 @@ static void dcn315_clk_mgr_helper_populate_bw_params(
struct integrated_info *bios_info,
const DpmClocks_315_t *clock_table)
{
int i, j;
int i;
struct clk_bw_params *bw_params = clk_mgr->base.bw_params;
uint32_t max_dispclk = 0, max_dppclk = 0;
uint32_t max_dispclk, max_dppclk, max_pstate, max_socclk, max_fclk = 0, min_pstate = 0;
struct clk_limit_table_entry def_max = bw_params->clk_table.entries[bw_params->clk_table.num_entries - 1];
j = -1;
max_dispclk = find_max_clk_value(clock_table->DispClocks, clock_table->NumDispClkLevelsEnabled);
max_dppclk = find_max_clk_value(clock_table->DppClocks, clock_table->NumDispClkLevelsEnabled);
max_socclk = find_max_clk_value(clock_table->SocClocks, clock_table->NumSocClkLevelsEnabled);
ASSERT(NUM_DF_PSTATE_LEVELS <= MAX_NUM_DPM_LVL);
/* Find lowest DPM, FCLK is filled in reverse order*/
for (i = NUM_DF_PSTATE_LEVELS - 1; i >= 0; i--) {
if (clock_table->DfPstateTable[i].FClk != 0) {
j = i;
break;
/* Find highest fclk pstate */
for (i = 0; i < clock_table->NumDfPstatesEnabled; i++) {
if (clock_table->DfPstateTable[i].FClk > max_fclk) {
max_fclk = clock_table->DfPstateTable[i].FClk;
max_pstate = i;
}
}
if (j == -1) {
/* clock table is all 0s, just use our own hardcode */
ASSERT(0);
return;
}
/* For 315 we want to base clock table on dcfclk, need at least one entry regardless of pmfw table */
for (i = 0; i < clock_table->NumDcfClkLevelsEnabled; i++) {
int j;
uint32_t min_fclk = clock_table->DfPstateTable[0].FClk;
bw_params->clk_table.num_entries = j + 1;
for (j = 1; j < clock_table->NumDfPstatesEnabled; j++) {
if (clock_table->DfPstateTable[j].Voltage <= clock_table->SocVoltage[i]
&& clock_table->DfPstateTable[j].FClk < min_fclk) {
min_fclk = clock_table->DfPstateTable[j].FClk;
min_pstate = j;
}
}
/* dispclk and dppclk can be max at any voltage, same number of levels for both */
if (clock_table->NumDispClkLevelsEnabled <= NUM_DISPCLK_DPM_LEVELS &&
clock_table->NumDispClkLevelsEnabled <= NUM_DPPCLK_DPM_LEVELS) {
max_dispclk = find_max_clk_value(clock_table->DispClocks, clock_table->NumDispClkLevelsEnabled);
max_dppclk = find_max_clk_value(clock_table->DppClocks, clock_table->NumDispClkLevelsEnabled);
} else {
ASSERT(0);
}
for (i = 0; i < bw_params->clk_table.num_entries; i++, j--) {
int temp;
bw_params->clk_table.entries[i].fclk_mhz = clock_table->DfPstateTable[j].FClk;
bw_params->clk_table.entries[i].memclk_mhz = clock_table->DfPstateTable[j].MemClk;
bw_params->clk_table.entries[i].voltage = clock_table->DfPstateTable[j].Voltage;
bw_params->clk_table.entries[i].wck_ratio = 1;
temp = find_clk_for_voltage(clock_table, clock_table->DcfClocks, clock_table->DfPstateTable[j].Voltage);
if (temp)
bw_params->clk_table.entries[i].dcfclk_mhz = temp;
temp = find_clk_for_voltage(clock_table, clock_table->SocClocks, clock_table->DfPstateTable[j].Voltage);
if (temp)
bw_params->clk_table.entries[i].socclk_mhz = temp;
bw_params->clk_table.entries[i].fclk_mhz = min_fclk;
bw_params->clk_table.entries[i].memclk_mhz = clock_table->DfPstateTable[min_pstate].MemClk;
bw_params->clk_table.entries[i].voltage = clock_table->DfPstateTable[min_pstate].Voltage;
bw_params->clk_table.entries[i].dcfclk_mhz = clock_table->DcfClocks[i];
bw_params->clk_table.entries[i].socclk_mhz = clock_table->SocClocks[i];
bw_params->clk_table.entries[i].dispclk_mhz = max_dispclk;
bw_params->clk_table.entries[i].dppclk_mhz = max_dppclk;
}
bw_params->clk_table.entries[i].wck_ratio = 1;
};
/* Make sure to include at least one entry and highest pstate */
if (max_pstate != min_pstate) {
bw_params->clk_table.entries[i].fclk_mhz = max_fclk;
bw_params->clk_table.entries[i].memclk_mhz = clock_table->DfPstateTable[max_pstate].MemClk;
bw_params->clk_table.entries[i].voltage = clock_table->DfPstateTable[max_pstate].Voltage;
bw_params->clk_table.entries[i].dcfclk_mhz = find_clk_for_voltage(
clock_table, clock_table->DcfClocks, clock_table->DfPstateTable[max_pstate].Voltage);
bw_params->clk_table.entries[i].socclk_mhz = find_clk_for_voltage(
clock_table, clock_table->SocClocks, clock_table->DfPstateTable[max_pstate].Voltage);
bw_params->clk_table.entries[i].dispclk_mhz = max_dispclk;
bw_params->clk_table.entries[i].dppclk_mhz = max_dppclk;
bw_params->clk_table.entries[i].wck_ratio = 1;
i++;
}
bw_params->clk_table.num_entries = i;
/* Include highest socclk */
if (bw_params->clk_table.entries[i-1].socclk_mhz < max_socclk)
bw_params->clk_table.entries[i-1].socclk_mhz = max_socclk;
/* Set any 0 clocks to max default setting. Not an issue for
* power since we aren't doing switching in such case anyway
*/
for (i = 0; i < bw_params->clk_table.num_entries; i++) {
if (!bw_params->clk_table.entries[i].fclk_mhz) {
bw_params->clk_table.entries[i].fclk_mhz = def_max.fclk_mhz;
bw_params->clk_table.entries[i].memclk_mhz = def_max.memclk_mhz;
bw_params->clk_table.entries[i].voltage = def_max.voltage;
}
if (!bw_params->clk_table.entries[i].dcfclk_mhz)
bw_params->clk_table.entries[i].dcfclk_mhz = def_max.dcfclk_mhz;
if (!bw_params->clk_table.entries[i].socclk_mhz)
bw_params->clk_table.entries[i].socclk_mhz = def_max.socclk_mhz;
if (!bw_params->clk_table.entries[i].dispclk_mhz)
bw_params->clk_table.entries[i].dispclk_mhz = def_max.dispclk_mhz;
if (!bw_params->clk_table.entries[i].dppclk_mhz)
bw_params->clk_table.entries[i].dppclk_mhz = def_max.dppclk_mhz;
}
bw_params->vram_type = bios_info->memory_type;
bw_params->num_channels = bios_info->ma_channel_number;

View file

@ -80,8 +80,8 @@ static const struct IP_BASE NBIO_BASE = { { { { 0x00000000, 0x00000014, 0x00000D
#define VBIOSSMC_MSG_SetDppclkFreq 0x06 ///< Set DPP clock frequency in MHZ
#define VBIOSSMC_MSG_SetHardMinDcfclkByFreq 0x07 ///< Set DCF clock frequency hard min in MHZ
#define VBIOSSMC_MSG_SetMinDeepSleepDcfclk 0x08 ///< Set DCF clock minimum frequency in deep sleep in MHZ
#define VBIOSSMC_MSG_SetPhyclkVoltageByFreq 0x09 ///< Set display phy clock frequency in MHZ in case VMIN does not support phy frequency
#define VBIOSSMC_MSG_GetFclkFrequency 0x0A ///< Get FCLK frequency, return frequency in MHZ
#define VBIOSSMC_MSG_GetDtbclkFreq 0x09 ///< Get display dtb clock frequency in MHZ in case VMIN does not support phy frequency
#define VBIOSSMC_MSG_SetDtbClk 0x0A ///< Set dtb clock frequency, return frequency in MHZ
#define VBIOSSMC_MSG_SetDisplayCount 0x0B ///< Inform PMFW of number of display connected
#define VBIOSSMC_MSG_EnableTmdp48MHzRefclkPwrDown 0x0C ///< To ask PMFW turn off TMDP 48MHz refclk during display off to save power
#define VBIOSSMC_MSG_UpdatePmeRestore 0x0D ///< To ask PMFW to write into Azalia for PME wake up event
@ -324,15 +324,26 @@ int dcn315_smu_get_dpref_clk(struct clk_mgr_internal *clk_mgr)
return (dprefclk_get_mhz * 1000);
}
int dcn315_smu_get_smu_fclk(struct clk_mgr_internal *clk_mgr)
int dcn315_smu_get_dtbclk(struct clk_mgr_internal *clk_mgr)
{
int fclk_get_mhz = -1;
if (clk_mgr->smu_present) {
fclk_get_mhz = dcn315_smu_send_msg_with_param(
clk_mgr,
VBIOSSMC_MSG_GetFclkFrequency,
VBIOSSMC_MSG_GetDtbclkFreq,
0);
}
return (fclk_get_mhz * 1000);
}
void dcn315_smu_set_dtbclk(struct clk_mgr_internal *clk_mgr, bool enable)
{
if (!clk_mgr->smu_present)
return;
dcn315_smu_send_msg_with_param(
clk_mgr,
VBIOSSMC_MSG_SetDtbClk,
enable);
}

View file

@ -37,6 +37,7 @@
#define NUM_SOC_VOLTAGE_LEVELS 4
#define NUM_DF_PSTATE_LEVELS 4
typedef struct {
uint16_t MinClock; // This is either DCFCLK or SOCCLK (in MHz)
uint16_t MaxClock; // This is either DCFCLK or SOCCLK (in MHz)
@ -124,5 +125,6 @@ void dcn315_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr);
void dcn315_smu_request_voltage_via_phyclk(struct clk_mgr_internal *clk_mgr, int requested_phyclk_khz);
void dcn315_smu_enable_pme_wa(struct clk_mgr_internal *clk_mgr);
int dcn315_smu_get_dpref_clk(struct clk_mgr_internal *clk_mgr);
int dcn315_smu_get_smu_fclk(struct clk_mgr_internal *clk_mgr);
int dcn315_smu_get_dtbclk(struct clk_mgr_internal *clk_mgr);
void dcn315_smu_set_dtbclk(struct clk_mgr_internal *clk_mgr, bool enable);
#endif /* DAL_DC_315_SMU_H_ */

Some files were not shown because too many files have changed in this diff.