Merge tag 'kvm-riscv-6.1-1' of https://github.com/kvm-riscv/linux into HEAD

KVM/riscv changes for 6.1

- Improved instruction encoding infrastructure for
  instructions not yet supported by binutils
- Svinval support for both KVM Host and KVM Guest
- Zihintpause support for KVM Guest
- Zicbom support for KVM Guest
- Record number of signal exits as a VCPU stat
- Use generic guest entry infrastructure
Merged by Paolo Bonzini on 2022-10-03 15:33:43 -04:00 in commit e18d6152ff; 290 changed files with 2619 additions and 2105 deletions.
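
As a hedged illustration of the "instruction encoding infrastructure" item above: the series adds an <asm/insn-def.h> that composes R-type instruction words from field shifts when the assembler lacks .insn or the named mnemonics. The standalone helper below is a sketch only (the function name and program framing are not part of the series); it reproduces the same field layout and, for hfence.gvma a0, a1, yields the 0x62b50073 word that the old hand-coded ".word" sequences in KVM used.

#include <stdint.h>
#include <stdio.h>

/* R-type layout used by insn-def.h: funct7 | rs2 | rs1 | funct3 | rd | opcode */
static uint32_t rv_insn_r(uint32_t opcode, uint32_t funct3, uint32_t funct7,
                          uint32_t rd, uint32_t rs1, uint32_t rs2)
{
        return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) |
               (funct3 << 12) | (rd << 7) | (opcode << 0);
}

int main(void)
{
        /* hfence.gvma a0, a1: opcode 0x73 (SYSTEM), funct3 0, funct7 0x31,
         * rd = x0, rs1 = x10 (a0), rs2 = x11 (a1) -> prints 0x62b50073. */
        printf("0x%08x\n", rv_insn_r(0x73, 0, 0x31, 0, 10, 11));
        return 0;
}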


@ -34,8 +34,8 @@ Example:
Use specific request line passing from dma
For example, MMC request line is 5
sdhci: sdhci@98e00000 {
compatible = "moxa,moxart-sdhci";
mmc: mmc@98e00000 {
compatible = "moxa,moxart-mmc";
reg = <0x98e00000 0x5C>;
interrupts = <5 0>;
clocks = <&clk_apb>;


@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: i.MX8M DDR Controller
maintainers:
- Leonard Crestez <leonard.crestez@nxp.com>
- Peng Fan <peng.fan@nxp.com>
description:
The DDRC block is integrated in i.MX8M for interfacing with DDR based


@ -40,6 +40,7 @@ properties:
patternProperties:
'^opp-?[0-9]+$':
type: object
additionalProperties: false
properties:
opp-hz: true


@ -19,6 +19,7 @@ properties:
patternProperties:
'^opp-?[0-9]+$':
type: object
additionalProperties: false
properties:
opp-level: true


@ -148,7 +148,7 @@ You can do plain I2C transactions by using read(2) and write(2) calls.
You do not need to pass the address byte; instead, set it through
ioctl I2C_SLAVE before you try to access the device.
You can do SMBus level transactions (see documentation file smbus-protocol
You can do SMBus level transactions (see documentation file smbus-protocol.rst
for details) through the following functions::
__s32 i2c_smbus_write_quick(int file, __u8 value);
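
As a hedged sketch of the userspace flow described in the hunk above (bus number, chip address and register value are invented for the example, and the i2c_smbus_* helpers are assumed to come from i2c-tools' libi2c):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>      /* I2C_SLAVE */
#include <i2c/smbus.h>          /* i2c_smbus_* helpers (libi2c) */

int main(void)
{
        int file = open("/dev/i2c-1", O_RDWR);

        if (file < 0)
                return 1;
        /* Set the slave address once; plain read()/write() and SMBus calls
         * on this fd then need no explicit address byte. */
        if (ioctl(file, I2C_SLAVE, 0x50) < 0)
                return 1;
        /* SMBus-level transaction: write 0xab to register 0x00. */
        i2c_smbus_write_byte_data(file, 0x00, 0xab);
        close(file);
        return 0;
}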


@ -32,9 +32,9 @@ User manual
===========
I2C slave backends behave like standard I2C clients. So, you can instantiate
them as described in the document 'instantiating-devices'. The only difference
is that i2c slave backends have their own address space. So, you have to add
0x1000 to the address you would originally request. An example for
them as described in the document instantiating-devices.rst. The only
difference is that i2c slave backends have their own address space. So, you
have to add 0x1000 to the address you would originally request. An example for
instantiating the slave-eeprom driver from userspace at the 7 bit address 0x64
on bus 1::
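
The literal example announced above is not part of this hunk. As an illustrative sketch only (it assumes the slave-24c02 backend name registered by the slave-eeprom driver), writing the device description to the bus's new_device node from C would look like:

#include <stdio.h>

int main(void)
{
        /* Instantiate the slave-eeprom backend at 7-bit address 0x64 on bus 1;
         * 0x1000 is added because slave backends use their own address space. */
        FILE *f = fopen("/sys/bus/i2c/devices/i2c-1/new_device", "w");

        if (!f)
                return 1;
        fprintf(f, "slave-24c02 0x1064\n");
        fclose(f);
        return 0;
}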


@ -364,7 +364,7 @@ stop condition is issued between transaction. The i2c_msg structure
contains for each message the client address, the number of bytes of the
message and the message data itself.
You can read the file ``i2c-protocol`` for more information about the
You can read the file i2c-protocol.rst for more information about the
actual I2C protocol.
@ -414,7 +414,7 @@ transactions return 0 on success; the 'read' transactions return the read
value, except for block transactions, which return the number of values
read. The block buffers need not be longer than 32 bytes.
You can read the file ``smbus-protocol`` for more information about the
You can read the file smbus-protocol.rst for more information about the
actual SMBus protocol.


@ -47,7 +47,6 @@ allow_join_initial_addr_port - BOOLEAN
Default: 1
pm_type - INTEGER
Set the default path manager type to use for each new MPTCP
socket. In-kernel path management will control subflow
connections and address advertisements according to


@ -70,15 +70,6 @@ nf_conntrack_generic_timeout - INTEGER (seconds)
Default for generic timeout. This refers to layer 4 unknown/unsupported
protocols.
nf_conntrack_helper - BOOLEAN
- 0 - disabled (default)
- not 0 - enabled
Enable automatic conntrack helper assignment.
If disabled it is required to set up iptables rules to assign
helpers to connections. See the CT target description in the
iptables-extensions(8) man page for further information.
nf_conntrack_icmp_timeout - INTEGER (seconds)
default 30


@ -671,7 +671,8 @@ F: fs/afs/
F: include/trace/events/afs.h
AGPGART DRIVER
M: David Airlie <airlied@linux.ie>
M: David Airlie <airlied@redhat.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm
F: drivers/char/agp/
@ -1010,7 +1011,6 @@ F: drivers/spi/spi-amd.c
AMD MP2 I2C DRIVER
M: Elie Morisse <syniurge@gmail.com>
M: Nehal Shah <nehal-bakulchandra.shah@amd.com>
M: Shyam Sundar S K <shyam-sundar.s-k@amd.com>
L: linux-i2c@vger.kernel.org
S: Maintained
@ -5245,6 +5245,7 @@ F: block/blk-throttle.c
F: include/linux/blk-cgroup.h
CONTROL GROUP - CPUSET
M: Waiman Long <longman@redhat.com>
M: Zefan Li <lizefan.x@bytedance.com>
L: cgroups@vger.kernel.org
S: Maintained
@ -6753,7 +6754,7 @@ F: Documentation/devicetree/bindings/display/panel/samsung,lms380kf01.yaml
F: drivers/gpu/drm/panel/panel-widechips-ws2401.c
DRM DRIVERS
M: David Airlie <airlied@linux.ie>
M: David Airlie <airlied@gmail.com>
M: Daniel Vetter <daniel@ffwll.ch>
L: dri-devel@lists.freedesktop.org
S: Maintained
@ -8652,8 +8653,8 @@ F: drivers/input/touchscreen/goodix*
GOOGLE ETHERNET DRIVERS
M: Jeroen de Borst <jeroendb@google.com>
R: Catherine Sullivan <csully@google.com>
R: David Awogbemila <awogbemila@google.com>
M: Catherine Sullivan <csully@google.com>
R: Shailend Chand <shailend@google.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/device_drivers/ethernet/google/gve.rst
@ -16858,6 +16859,7 @@ F: drivers/net/ethernet/qualcomm/emac/
QUALCOMM ETHQOS ETHERNET DRIVER
M: Vinod Koul <vkoul@kernel.org>
R: Bhupesh Sharma <bhupesh.sharma@linaro.org>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/net/qcom,ethqos.txt
@ -19960,6 +19962,7 @@ S: Supported
F: drivers/net/team/
F: include/linux/if_team.h
F: include/uapi/linux/if_team.h
F: tools/testing/selftests/net/team/
TECHNOLOGIC SYSTEMS TS-5500 PLATFORM SUPPORT
M: "Savoir-faire Linux Inc." <kernel@savoirfairelinux.com>
@ -21566,7 +21569,7 @@ F: drivers/gpio/gpio-virtio.c
F: include/uapi/linux/virtio_gpio.h
VIRTIO GPU DRIVER
M: David Airlie <airlied@linux.ie>
M: David Airlie <airlied@redhat.com>
M: Gerd Hoffmann <kraxel@redhat.com>
R: Gurchetan Singh <gurchetansingh@chromium.org>
R: Chia-I Wu <olvaffe@gmail.com>


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*


@ -541,13 +541,13 @@
phy0: ethernet-phy@1 {
reg = <1>;
interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
phy1: ethernet-phy@2 {
reg = <2>;
interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
};


@ -79,7 +79,7 @@
clocks = <&ref12>;
};
&sdhci {
&mmc {
status = "okay";
};


@ -93,8 +93,8 @@
clock-names = "PCLK";
};
sdhci: sdhci@98e00000 {
compatible = "moxa,moxart-sdhci";
mmc: mmc@98e00000 {
compatible = "moxa,moxart-mmc";
reg = <0x98e00000 0x5C>;
interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk_apb>;


@ -152,11 +152,11 @@
* CPLD_reset is RESET_SOFT in schematic
*/
gpio-line-names =
"CPLD_D[1]", "CPLD_int", "CPLD_reset", "",
"", "CPLD_D[0]", "", "",
"", "", "", "CPLD_D[2]",
"CPLD_D[3]", "CPLD_D[4]", "CPLD_D[5]", "CPLD_D[6]",
"CPLD_D[7]", "", "", "",
"CPLD_D[6]", "CPLD_int", "CPLD_reset", "",
"", "CPLD_D[7]", "", "",
"", "", "", "CPLD_D[5]",
"CPLD_D[4]", "CPLD_D[3]", "CPLD_D[2]", "CPLD_D[1]",
"CPLD_D[0]", "", "", "",
"", "", "", "",
"", "", "", "KBD_intK",
"", "", "", "";


@ -5,7 +5,6 @@
/dts-v1/;
#include <dt-bindings/phy/phy-imx8-pcie.h>
#include "imx8mm-tqma8mqml.dtsi"
#include "mba8mx.dtsi"


@ -3,6 +3,7 @@
* Copyright 2020-2021 TQ-Systems GmbH
*/
#include <dt-bindings/phy/phy-imx8-pcie.h>
#include "imx8mm.dtsi"
/ {


@ -367,8 +367,8 @@
nxp,dvs-standby-voltage = <850000>;
regulator-always-on;
regulator-boot-on;
regulator-max-microvolt = <950000>;
regulator-min-microvolt = <850000>;
regulator-max-microvolt = <1050000>;
regulator-min-microvolt = <805000>;
regulator-name = "On-module +VDD_ARM (BUCK2)";
regulator-ramp-delay = <3125>;
};
@ -376,8 +376,8 @@
reg_vdd_dram: BUCK3 {
regulator-always-on;
regulator-boot-on;
regulator-max-microvolt = <950000>;
regulator-min-microvolt = <850000>;
regulator-max-microvolt = <1000000>;
regulator-min-microvolt = <805000>;
regulator-name = "On-module +VDD_GPU_VPU_DDR (BUCK3)";
};
@ -416,7 +416,7 @@
reg_vdd_snvs: LDO2 {
regulator-always-on;
regulator-boot-on;
regulator-max-microvolt = <900000>;
regulator-max-microvolt = <800000>;
regulator-min-microvolt = <800000>;
regulator-name = "On-module +V0.8_SNVS (LDO2)";
};


@ -672,7 +672,6 @@
<&clk IMX8MN_CLK_GPU_SHADER>,
<&clk IMX8MN_CLK_GPU_BUS_ROOT>,
<&clk IMX8MN_CLK_GPU_AHB>;
resets = <&src IMX8MQ_RESET_GPU_RESET>;
};
pgc_dispmix: power-domain@3 {


@ -57,13 +57,13 @@
switch-1 {
label = "S12";
linux,code = <BTN_0>;
gpios = <&gpio5 26 GPIO_ACTIVE_LOW>;
gpios = <&gpio5 27 GPIO_ACTIVE_LOW>;
};
switch-2 {
label = "S13";
linux,code = <BTN_1>;
gpios = <&gpio5 27 GPIO_ACTIVE_LOW>;
gpios = <&gpio5 26 GPIO_ACTIVE_LOW>;
};
};
@ -394,6 +394,8 @@
&pcf85063 {
/* RTC_EVENT# is connected on MBa8MPxL */
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pcf85063>;
interrupt-parent = <&gpio4>;
interrupts = <28 IRQ_TYPE_EDGE_FALLING>;
};
@ -630,6 +632,10 @@
fsl,pins = <MX8MP_IOMUXC_SAI5_RXC__GPIO3_IO20 0x10>; /* Power enable */
};
pinctrl_pcf85063: pcf85063grp {
fsl,pins = <MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x80>;
};
/* LVDS Backlight */
pinctrl_pwm2: pwm2grp {
fsl,pins = <MX8MP_IOMUXC_SAI5_RXD0__PWM2_OUT 0x14>;


@ -123,8 +123,7 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_reg_can>;
regulator-name = "can2_stby";
gpio = <&gpio3 19 GPIO_ACTIVE_HIGH>;
enable-active-high;
gpio = <&gpio3 19 GPIO_ACTIVE_LOW>;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
@ -484,35 +483,40 @@
lan1: port@0 {
reg = <0>;
label = "lan1";
phy-mode = "internal";
local-mac-address = [00 00 00 00 00 00];
};
lan2: port@1 {
reg = <1>;
label = "lan2";
phy-mode = "internal";
local-mac-address = [00 00 00 00 00 00];
};
lan3: port@2 {
reg = <2>;
label = "lan3";
phy-mode = "internal";
local-mac-address = [00 00 00 00 00 00];
};
lan4: port@3 {
reg = <3>;
label = "lan4";
phy-mode = "internal";
local-mac-address = [00 00 00 00 00 00];
};
lan5: port@4 {
reg = <4>;
label = "lan5";
phy-mode = "internal";
local-mac-address = [00 00 00 00 00 00];
};
port@6 {
reg = <6>;
port@5 {
reg = <5>;
label = "cpu";
ethernet = <&fec>;
phy-mode = "rgmii-id";


@ -172,6 +172,7 @@
compatible = "fsl,imx8ulp-pcc3";
reg = <0x292d0000 0x10000>;
#clock-cells = <1>;
#reset-cells = <1>;
};
tpm5: tpm@29340000 {
@ -270,6 +271,7 @@
compatible = "fsl,imx8ulp-pcc4";
reg = <0x29800000 0x10000>;
#clock-cells = <1>;
#reset-cells = <1>;
};
lpi2c6: i2c@29840000 {
@ -414,6 +416,7 @@
compatible = "fsl,imx8ulp-pcc5";
reg = <0x2da70000 0x10000>;
#clock-cells = <1>;
#reset-cells = <1>;
};
};


@ -2,8 +2,8 @@
/*
* Copyright (c) 2020 Fuzhou Rockchip Electronics Co., Ltd
* Copyright (c) 2020 Engicam srl
* Copyright (c) 2020 Amarula Solutons
* Copyright (c) 2020 Amarula Solutons(India)
* Copyright (c) 2020 Amarula Solutions
* Copyright (c) 2020 Amarula Solutions(India)
*/
#include <dt-bindings/gpio/gpio.h>


@ -88,3 +88,8 @@
};
};
};
&wlan_host_wake_l {
/* Kevin has an external pull up, but Bob does not. */
rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_up>;
};


@ -244,6 +244,14 @@
&edp {
status = "okay";
/*
* eDP PHY/clk don't sync reliably at anything other than 24 MHz. Only
* set this here, because rk3399-gru.dtsi ensures we can generate this
* off GPLL=600MHz, whereas some other RK3399 boards may not.
*/
assigned-clocks = <&cru PCLK_EDP>;
assigned-clock-rates = <24000000>;
ports {
edp_out: port@1 {
reg = <1>;
@ -578,6 +586,7 @@ ap_i2c_tp: &i2c5 {
};
wlan_host_wake_l: wlan-host-wake-l {
/* Kevin has an external pull up, but Bob does not */
rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_none>;
};
};


@ -62,7 +62,6 @@
vcc5v0_host: vcc5v0-host-regulator {
compatible = "regulator-fixed";
gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
enable-active-low;
pinctrl-names = "default";
pinctrl-0 = <&vcc5v0_host_en>;
regulator-name = "vcc5v0_host";


@ -189,7 +189,6 @@
vcc3v3_sd: vcc3v3_sd {
compatible = "regulator-fixed";
enable-active-low;
gpio = <&gpio0 RK_PA5 GPIO_ACTIVE_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&vcc_sd_h>;


@ -506,7 +506,7 @@
disable-wp;
pinctrl-names = "default";
pinctrl-0 = <&sdmmc0_bus4 &sdmmc0_clk &sdmmc0_cmd &sdmmc0_det>;
sd-uhs-sdr104;
sd-uhs-sdr50;
vmmc-supply = <&vcc3v3_sd>;
vqmmc-supply = <&vccio_sd>;
status = "okay";


@ -678,7 +678,7 @@
};
&usb_host0_xhci {
extcon = <&usb2phy0>;
dr_mode = "host";
status = "okay";
};


@ -656,7 +656,7 @@
};
&usb2phy0_otg {
vbus-supply = <&vcc5v0_usb_otg>;
phy-supply = <&vcc5v0_usb_otg>;
status = "okay";
};


@ -581,7 +581,7 @@
};
&usb2phy0_otg {
vbus-supply = <&vcc5v0_usb_otg>;
phy-supply = <&vcc5v0_usb_otg>;
status = "okay";
};


@ -48,6 +48,7 @@ CONFIG_ARCH_KEEMBAY=y
CONFIG_ARCH_MEDIATEK=y
CONFIG_ARCH_MESON=y
CONFIG_ARCH_MVEBU=y
CONFIG_ARCH_NXP=y
CONFIG_ARCH_MXC=y
CONFIG_ARCH_NPCM=y
CONFIG_ARCH_QCOM=y


@ -237,7 +237,7 @@ static void amu_fie_setup(const struct cpumask *cpus)
for_each_cpu(cpu, cpus) {
if (!freq_counters_valid(cpu) ||
freq_inv_set_max_ratio(cpu,
cpufreq_get_hw_max_freq(cpu) * 1000,
cpufreq_get_hw_max_freq(cpu) * 1000ULL,
arch_timer_get_rate()))
return;
}


@ -331,12 +331,6 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
}
BUG_ON(p4d_bad(p4d));
/*
* No need for locking during early boot. And it doesn't work as
* expected with KASLR enabled.
*/
if (system_state != SYSTEM_BOOTING)
mutex_lock(&fixmap_lock);
pudp = pud_set_fixmap_offset(p4dp, addr);
do {
pud_t old_pud = READ_ONCE(*pudp);
@ -368,15 +362,13 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
} while (pudp++, addr = next, addr != end);
pud_clear_fixmap();
if (system_state != SYSTEM_BOOTING)
mutex_unlock(&fixmap_lock);
}
static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int),
int flags)
static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int),
int flags)
{
unsigned long addr, end, next;
pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
@ -400,8 +392,20 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
} while (pgdp++, addr = next, addr != end);
}
static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int),
int flags)
{
mutex_lock(&fixmap_lock);
__create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
pgtable_alloc, flags);
mutex_unlock(&fixmap_lock);
}
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
extern __alias(__create_pgd_mapping)
extern __alias(__create_pgd_mapping_locked)
void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
phys_addr_t size, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int), int flags);


@ -50,6 +50,7 @@ struct clk *clk_get_io(void)
{
return &cpu_clk_generic[2];
}
EXPORT_SYMBOL_GPL(clk_get_io);
struct clk *clk_get_ppe(void)
{


@ -98,7 +98,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
if (plat_dat->bus_id) {
__raw_writel(__raw_readl(LS1X_MUX_CTRL0) | GMAC1_USE_UART1 |
GMAC1_USE_UART0, LS1X_MUX_CTRL0);
switch (plat_dat->interface) {
switch (plat_dat->phy_interface) {
case PHY_INTERFACE_MODE_RGMII:
val &= ~(GMAC1_USE_TXCLK | GMAC1_USE_PWM23);
break;
@ -107,12 +107,12 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
break;
default:
pr_err("unsupported mii mode %d\n",
plat_dat->interface);
plat_dat->phy_interface);
return -ENOTSUPP;
}
val &= ~GMAC1_SHUT;
} else {
switch (plat_dat->interface) {
switch (plat_dat->phy_interface) {
case PHY_INTERFACE_MODE_RGMII:
val &= ~(GMAC0_USE_TXCLK | GMAC0_USE_PWM01);
break;
@ -121,7 +121,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
break;
default:
pr_err("unsupported mii mode %d\n",
plat_dat->interface);
plat_dat->phy_interface);
return -ENOTSUPP;
}
val &= ~GMAC0_SHUT;
@ -131,7 +131,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
plat_dat = dev_get_platdata(&pdev->dev);
val &= ~PHY_INTF_SELI;
if (plat_dat->interface == PHY_INTERFACE_MODE_RMII)
if (plat_dat->phy_interface == PHY_INTERFACE_MODE_RMII)
val |= 0x4 << PHY_INTF_SELI_SHIFT;
__raw_writel(val, LS1X_MUX_CTRL1);
@ -146,9 +146,9 @@ static struct plat_stmmacenet_data ls1x_eth0_pdata = {
.bus_id = 0,
.phy_addr = -1,
#if defined(CONFIG_LOONGSON1_LS1B)
.interface = PHY_INTERFACE_MODE_MII,
.phy_interface = PHY_INTERFACE_MODE_MII,
#elif defined(CONFIG_LOONGSON1_LS1C)
.interface = PHY_INTERFACE_MODE_RMII,
.phy_interface = PHY_INTERFACE_MODE_RMII,
#endif
.mdio_bus_data = &ls1x_mdio_bus_data,
.dma_cfg = &ls1x_eth_dma_cfg,
@ -186,7 +186,7 @@ struct platform_device ls1x_eth0_pdev = {
static struct plat_stmmacenet_data ls1x_eth1_pdata = {
.bus_id = 1,
.phy_addr = -1,
.interface = PHY_INTERFACE_MODE_MII,
.phy_interface = PHY_INTERFACE_MODE_MII,
.mdio_bus_data = &ls1x_mdio_bus_data,
.dma_cfg = &ls1x_eth_dma_cfg,
.has_gmac = 1,


@ -103,6 +103,7 @@ config RISCV
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select HAVE_POSIX_CPU_TIMERS_TASK_WORK
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_STACKPROTECTOR
@ -227,6 +228,9 @@ config RISCV_DMA_NONCOHERENT
select ARCH_HAS_SETUP_DMA_OPS
select DMA_DIRECT_REMAP
config AS_HAS_INSN
def_bool $(as-instr,.insn r 51$(comma) 0$(comma) 0$(comma) t0$(comma) t0$(comma) zero)
source "arch/riscv/Kconfig.socs"
source "arch/riscv/Kconfig.erratas"
@ -386,6 +390,7 @@ config RISCV_ISA_C
config RISCV_ISA_SVPBMT
bool "SVPBMT extension support"
depends on 64BIT && MMU
depends on !XIP_KERNEL
select RISCV_ALTERNATIVE
default y
help


@ -46,7 +46,7 @@ config ERRATA_THEAD
config ERRATA_THEAD_PBMT
bool "Apply T-Head memory type errata"
depends on ERRATA_THEAD && 64BIT
depends on ERRATA_THEAD && 64BIT && MMU
select RISCV_ALTERNATIVE_EARLY
default y
help
@ -57,7 +57,7 @@ config ERRATA_THEAD_PBMT
config ERRATA_THEAD_CMO
bool "Apply T-Head cache management errata"
depends on ERRATA_THEAD
depends on ERRATA_THEAD && MMU
select RISCV_DMA_NONCOHERENT
default y
help


@ -37,6 +37,7 @@ static bool errata_probe_cmo(unsigned int stage,
if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
return false;
riscv_cbom_block_size = L1_CACHE_BYTES;
riscv_noncoherent_supported();
return true;
#else


@ -42,6 +42,11 @@ void flush_icache_mm(struct mm_struct *mm, bool local);
#endif /* CONFIG_SMP */
/*
* The T-Head CMO errata internally probe the CBOM block size, but otherwise
* don't depend on Zicbom.
*/
extern unsigned int riscv_cbom_block_size;
#ifdef CONFIG_RISCV_ISA_ZICBOM
void riscv_init_cbom_blocksize(void);
#else


@ -3,6 +3,11 @@
#define __ASM_GPR_NUM_H
#ifdef __ASSEMBLY__
.irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
.equ .L__gpr_num_x\num, \num
.endr
.equ .L__gpr_num_zero, 0
.equ .L__gpr_num_ra, 1
.equ .L__gpr_num_sp, 2
@ -39,6 +44,9 @@
#else /* __ASSEMBLY__ */
#define __DEFINE_ASM_GPR_NUMS \
" .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31\n" \
" .equ .L__gpr_num_x\\num, \\num\n" \
" .endr\n" \
" .equ .L__gpr_num_zero, 0\n" \
" .equ .L__gpr_num_ra, 1\n" \
" .equ .L__gpr_num_sp, 2\n" \


@ -58,6 +58,7 @@ enum riscv_isa_ext_id {
RISCV_ISA_EXT_ZICBOM,
RISCV_ISA_EXT_ZIHINTPAUSE,
RISCV_ISA_EXT_SSTC,
RISCV_ISA_EXT_SVINVAL,
RISCV_ISA_EXT_ID_MAX = RISCV_ISA_EXT_MAX,
};
@ -69,6 +70,7 @@ enum riscv_isa_ext_id {
enum riscv_isa_ext_key {
RISCV_ISA_EXT_KEY_FPU, /* For 'F' and 'D' */
RISCV_ISA_EXT_KEY_ZIHINTPAUSE,
RISCV_ISA_EXT_KEY_SVINVAL,
RISCV_ISA_EXT_KEY_MAX,
};
@ -90,6 +92,8 @@ static __always_inline int riscv_isa_ext2key(int num)
return RISCV_ISA_EXT_KEY_FPU;
case RISCV_ISA_EXT_ZIHINTPAUSE:
return RISCV_ISA_EXT_KEY_ZIHINTPAUSE;
case RISCV_ISA_EXT_SVINVAL:
return RISCV_ISA_EXT_KEY_SVINVAL;
default:
return -EINVAL;
}


@ -0,0 +1,137 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __ASM_INSN_DEF_H
#define __ASM_INSN_DEF_H
#include <asm/asm.h>
#define INSN_R_FUNC7_SHIFT 25
#define INSN_R_RS2_SHIFT 20
#define INSN_R_RS1_SHIFT 15
#define INSN_R_FUNC3_SHIFT 12
#define INSN_R_RD_SHIFT 7
#define INSN_R_OPCODE_SHIFT 0
#ifdef __ASSEMBLY__
#ifdef CONFIG_AS_HAS_INSN
.macro insn_r, opcode, func3, func7, rd, rs1, rs2
.insn r \opcode, \func3, \func7, \rd, \rs1, \rs2
.endm
#else
#include <asm/gpr-num.h>
.macro insn_r, opcode, func3, func7, rd, rs1, rs2
.4byte ((\opcode << INSN_R_OPCODE_SHIFT) | \
(\func3 << INSN_R_FUNC3_SHIFT) | \
(\func7 << INSN_R_FUNC7_SHIFT) | \
(.L__gpr_num_\rd << INSN_R_RD_SHIFT) | \
(.L__gpr_num_\rs1 << INSN_R_RS1_SHIFT) | \
(.L__gpr_num_\rs2 << INSN_R_RS2_SHIFT))
.endm
#endif
#define __INSN_R(...) insn_r __VA_ARGS__
#else /* ! __ASSEMBLY__ */
#ifdef CONFIG_AS_HAS_INSN
#define __INSN_R(opcode, func3, func7, rd, rs1, rs2) \
".insn r " opcode ", " func3 ", " func7 ", " rd ", " rs1 ", " rs2 "\n"
#else
#include <linux/stringify.h>
#include <asm/gpr-num.h>
#define DEFINE_INSN_R \
__DEFINE_ASM_GPR_NUMS \
" .macro insn_r, opcode, func3, func7, rd, rs1, rs2\n" \
" .4byte ((\\opcode << " __stringify(INSN_R_OPCODE_SHIFT) ") |" \
" (\\func3 << " __stringify(INSN_R_FUNC3_SHIFT) ") |" \
" (\\func7 << " __stringify(INSN_R_FUNC7_SHIFT) ") |" \
" (.L__gpr_num_\\rd << " __stringify(INSN_R_RD_SHIFT) ") |" \
" (.L__gpr_num_\\rs1 << " __stringify(INSN_R_RS1_SHIFT) ") |" \
" (.L__gpr_num_\\rs2 << " __stringify(INSN_R_RS2_SHIFT) "))\n" \
" .endm\n"
#define UNDEFINE_INSN_R \
" .purgem insn_r\n"
#define __INSN_R(opcode, func3, func7, rd, rs1, rs2) \
DEFINE_INSN_R \
"insn_r " opcode ", " func3 ", " func7 ", " rd ", " rs1 ", " rs2 "\n" \
UNDEFINE_INSN_R
#endif
#endif /* ! __ASSEMBLY__ */
#define INSN_R(opcode, func3, func7, rd, rs1, rs2) \
__INSN_R(RV_##opcode, RV_##func3, RV_##func7, \
RV_##rd, RV_##rs1, RV_##rs2)
#define RV_OPCODE(v) __ASM_STR(v)
#define RV_FUNC3(v) __ASM_STR(v)
#define RV_FUNC7(v) __ASM_STR(v)
#define RV_RD(v) __ASM_STR(v)
#define RV_RS1(v) __ASM_STR(v)
#define RV_RS2(v) __ASM_STR(v)
#define __RV_REG(v) __ASM_STR(x ## v)
#define RV___RD(v) __RV_REG(v)
#define RV___RS1(v) __RV_REG(v)
#define RV___RS2(v) __RV_REG(v)
#define RV_OPCODE_SYSTEM RV_OPCODE(115)
#define HFENCE_VVMA(vaddr, asid) \
INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(17), \
__RD(0), RS1(vaddr), RS2(asid))
#define HFENCE_GVMA(gaddr, vmid) \
INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(49), \
__RD(0), RS1(gaddr), RS2(vmid))
#define HLVX_HU(dest, addr) \
INSN_R(OPCODE_SYSTEM, FUNC3(4), FUNC7(50), \
RD(dest), RS1(addr), __RS2(3))
#define HLV_W(dest, addr) \
INSN_R(OPCODE_SYSTEM, FUNC3(4), FUNC7(52), \
RD(dest), RS1(addr), __RS2(0))
#ifdef CONFIG_64BIT
#define HLV_D(dest, addr) \
INSN_R(OPCODE_SYSTEM, FUNC3(4), FUNC7(54), \
RD(dest), RS1(addr), __RS2(0))
#else
#define HLV_D(dest, addr) \
__ASM_STR(.error "hlv.d requires 64-bit support")
#endif
#define SINVAL_VMA(vaddr, asid) \
INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(11), \
__RD(0), RS1(vaddr), RS2(asid))
#define SFENCE_W_INVAL() \
INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(12), \
__RD(0), __RS1(0), __RS2(0))
#define SFENCE_INVAL_IR() \
INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(12), \
__RD(0), __RS1(0), __RS2(1))
#define HINVAL_VVMA(vaddr, asid) \
INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(19), \
__RD(0), RS1(vaddr), RS2(asid))
#define HINVAL_GVMA(gaddr, vmid) \
INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(51), \
__RD(0), RS1(gaddr), RS2(vmid))
#endif /* __ASM_INSN_DEF_H */


@ -67,6 +67,7 @@ struct kvm_vcpu_stat {
u64 mmio_exit_kernel;
u64 csr_exit_user;
u64 csr_exit_kernel;
u64 signal_exits;
u64 exits;
};


@ -11,8 +11,8 @@
#define KVM_SBI_IMPID 3
#define KVM_SBI_VERSION_MAJOR 0
#define KVM_SBI_VERSION_MINOR 3
#define KVM_SBI_VERSION_MAJOR 1
#define KVM_SBI_VERSION_MINOR 0
struct kvm_vcpu_sbi_extension {
unsigned long extid_start;


@ -48,6 +48,7 @@ struct kvm_sregs {
/* CONFIG registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
struct kvm_riscv_config {
unsigned long isa;
unsigned long zicbom_block_size;
};
/* CORE registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
@ -98,6 +99,9 @@ enum KVM_RISCV_ISA_EXT_ID {
KVM_RISCV_ISA_EXT_M,
KVM_RISCV_ISA_EXT_SVPBMT,
KVM_RISCV_ISA_EXT_SSTC,
KVM_RISCV_ISA_EXT_SVINVAL,
KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
KVM_RISCV_ISA_EXT_ZICBOM,
KVM_RISCV_ISA_EXT_MAX,
};


@ -96,6 +96,7 @@ static struct riscv_isa_ext_data isa_ext_arr[] = {
__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
__RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX),
};


@ -204,6 +204,7 @@ void __init riscv_fill_hwcap(void)
SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM);
SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE);
SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL);
}
#undef SET_ISA_EXT_MAP
}


@ -296,8 +296,8 @@ void __init setup_arch(char **cmdline_p)
setup_smp();
#endif
riscv_fill_hwcap();
riscv_init_cbom_blocksize();
riscv_fill_hwcap();
apply_boot_alternatives();
}


@ -124,6 +124,8 @@ SYSCALL_DEFINE0(rt_sigreturn)
if (restore_altstack(&frame->uc.uc_stack))
goto badframe;
regs->cause = -1UL;
return regs->a0;
badframe:


@ -24,6 +24,7 @@ config KVM
select PREEMPT_NOTIFIERS
select KVM_MMIO
select KVM_GENERIC_DIRTYLOG_READ_PROTECT
select KVM_XFER_TO_GUEST_WORK
select HAVE_KVM_VCPU_ASYNC_IOCTL
select HAVE_KVM_EVENTFD
select SRCU


@ -122,7 +122,7 @@ void kvm_arch_exit(void)
{
}
static int riscv_kvm_init(void)
static int __init riscv_kvm_init(void)
{
return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
}


@ -12,22 +12,11 @@
#include <linux/kvm_host.h>
#include <asm/cacheflush.h>
#include <asm/csr.h>
#include <asm/hwcap.h>
#include <asm/insn-def.h>
/*
* Instruction encoding of hfence.gvma is:
* HFENCE.GVMA rs1, rs2
* HFENCE.GVMA zero, rs2
* HFENCE.GVMA rs1
* HFENCE.GVMA
*
* rs1!=zero and rs2!=zero ==> HFENCE.GVMA rs1, rs2
* rs1==zero and rs2!=zero ==> HFENCE.GVMA zero, rs2
* rs1!=zero and rs2==zero ==> HFENCE.GVMA rs1
* rs1==zero and rs2==zero ==> HFENCE.GVMA
*
* Instruction encoding of HFENCE.GVMA is:
* 0110001 rs2(5) rs1(5) 000 00000 1110011
*/
#define has_svinval() \
static_branch_unlikely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_SVINVAL])
void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
gpa_t gpa, gpa_t gpsz,
@ -40,32 +29,22 @@ void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
return;
}
for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order)) {
/*
* rs1 = a0 (GPA >> 2)
* rs2 = a1 (VMID)
* HFENCE.GVMA a0, a1
* 0110001 01011 01010 000 00000 1110011
*/
asm volatile ("srli a0, %0, 2\n"
"add a1, %1, zero\n"
".word 0x62b50073\n"
:: "r" (pos), "r" (vmid)
: "a0", "a1", "memory");
if (has_svinval()) {
asm volatile (SFENCE_W_INVAL() ::: "memory");
for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order))
asm volatile (HINVAL_GVMA(%0, %1)
: : "r" (pos >> 2), "r" (vmid) : "memory");
asm volatile (SFENCE_INVAL_IR() ::: "memory");
} else {
for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order))
asm volatile (HFENCE_GVMA(%0, %1)
: : "r" (pos >> 2), "r" (vmid) : "memory");
}
}
void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid)
{
/*
* rs1 = zero
* rs2 = a0 (VMID)
* HFENCE.GVMA zero, a0
* 0110001 01010 00000 000 00000 1110011
*/
asm volatile ("add a0, %0, zero\n"
".word 0x62a00073\n"
:: "r" (vmid) : "a0", "memory");
asm volatile(HFENCE_GVMA(zero, %0) : : "r" (vmid) : "memory");
}
void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
@ -78,46 +57,24 @@ void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
return;
}
for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order)) {
/*
* rs1 = a0 (GPA >> 2)
* rs2 = zero
* HFENCE.GVMA a0
* 0110001 00000 01010 000 00000 1110011
*/
asm volatile ("srli a0, %0, 2\n"
".word 0x62050073\n"
:: "r" (pos) : "a0", "memory");
if (has_svinval()) {
asm volatile (SFENCE_W_INVAL() ::: "memory");
for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order))
asm volatile(HINVAL_GVMA(%0, zero)
: : "r" (pos >> 2) : "memory");
asm volatile (SFENCE_INVAL_IR() ::: "memory");
} else {
for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order))
asm volatile(HFENCE_GVMA(%0, zero)
: : "r" (pos >> 2) : "memory");
}
}
void kvm_riscv_local_hfence_gvma_all(void)
{
/*
* rs1 = zero
* rs2 = zero
* HFENCE.GVMA
* 0110001 00000 00000 000 00000 1110011
*/
asm volatile (".word 0x62000073" ::: "memory");
asm volatile(HFENCE_GVMA(zero, zero) : : : "memory");
}
/*
* Instruction encoding of hfence.gvma is:
* HFENCE.VVMA rs1, rs2
* HFENCE.VVMA zero, rs2
* HFENCE.VVMA rs1
* HFENCE.VVMA
*
* rs1!=zero and rs2!=zero ==> HFENCE.VVMA rs1, rs2
* rs1==zero and rs2!=zero ==> HFENCE.VVMA zero, rs2
* rs1!=zero and rs2==zero ==> HFENCE.VVMA rs1
* rs1==zero and rs2==zero ==> HFENCE.VVMA
*
* Instruction encoding of HFENCE.VVMA is:
* 0010001 rs2(5) rs1(5) 000 00000 1110011
*/
void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
unsigned long asid,
unsigned long gva,
@ -133,18 +90,16 @@ void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
for (pos = gva; pos < (gva + gvsz); pos += BIT(order)) {
/*
* rs1 = a0 (GVA)
* rs2 = a1 (ASID)
* HFENCE.VVMA a0, a1
* 0010001 01011 01010 000 00000 1110011
*/
asm volatile ("add a0, %0, zero\n"
"add a1, %1, zero\n"
".word 0x22b50073\n"
:: "r" (pos), "r" (asid)
: "a0", "a1", "memory");
if (has_svinval()) {
asm volatile (SFENCE_W_INVAL() ::: "memory");
for (pos = gva; pos < (gva + gvsz); pos += BIT(order))
asm volatile(HINVAL_VVMA(%0, %1)
: : "r" (pos), "r" (asid) : "memory");
asm volatile (SFENCE_INVAL_IR() ::: "memory");
} else {
for (pos = gva; pos < (gva + gvsz); pos += BIT(order))
asm volatile(HFENCE_VVMA(%0, %1)
: : "r" (pos), "r" (asid) : "memory");
}
csr_write(CSR_HGATP, hgatp);
@ -157,15 +112,7 @@ void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
/*
* rs1 = zero
* rs2 = a0 (ASID)
* HFENCE.VVMA zero, a0
* 0010001 01010 00000 000 00000 1110011
*/
asm volatile ("add a0, %0, zero\n"
".word 0x22a00073\n"
:: "r" (asid) : "a0", "memory");
asm volatile(HFENCE_VVMA(zero, %0) : : "r" (asid) : "memory");
csr_write(CSR_HGATP, hgatp);
}
@ -183,16 +130,16 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
for (pos = gva; pos < (gva + gvsz); pos += BIT(order)) {
/*
* rs1 = a0 (GVA)
* rs2 = zero
* HFENCE.VVMA a0
* 0010001 00000 01010 000 00000 1110011
*/
asm volatile ("add a0, %0, zero\n"
".word 0x22050073\n"
:: "r" (pos) : "a0", "memory");
if (has_svinval()) {
asm volatile (SFENCE_W_INVAL() ::: "memory");
for (pos = gva; pos < (gva + gvsz); pos += BIT(order))
asm volatile(HINVAL_VVMA(%0, zero)
: : "r" (pos) : "memory");
asm volatile (SFENCE_INVAL_IR() ::: "memory");
} else {
for (pos = gva; pos < (gva + gvsz); pos += BIT(order))
asm volatile(HFENCE_VVMA(%0, zero)
: : "r" (pos) : "memory");
}
csr_write(CSR_HGATP, hgatp);
@ -204,13 +151,7 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
/*
* rs1 = zero
* rs2 = zero
* HFENCE.VVMA
* 0010001 00000 00000 000 00000 1110011
*/
asm volatile (".word 0x22000073" ::: "memory");
asm volatile(HFENCE_VVMA(zero, zero) : : : "memory");
csr_write(CSR_HGATP, hgatp);
}


@ -7,6 +7,7 @@
*/
#include <linux/bitops.h>
#include <linux/entry-kvm.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/kdebug.h>
@ -18,6 +19,7 @@
#include <linux/fs.h>
#include <linux/kvm_host.h>
#include <asm/csr.h>
#include <asm/cacheflush.h>
#include <asm/hwcap.h>
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
@ -28,6 +30,7 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
STATS_DESC_COUNTER(VCPU, mmio_exit_kernel),
STATS_DESC_COUNTER(VCPU, csr_exit_user),
STATS_DESC_COUNTER(VCPU, csr_exit_kernel),
STATS_DESC_COUNTER(VCPU, signal_exits),
STATS_DESC_COUNTER(VCPU, exits)
};
@ -42,17 +45,23 @@ const struct kvm_stats_header kvm_vcpu_stats_header = {
#define KVM_RISCV_BASE_ISA_MASK GENMASK(25, 0)
#define KVM_ISA_EXT_ARR(ext) [KVM_RISCV_ISA_EXT_##ext] = RISCV_ISA_EXT_##ext
/* Mapping between KVM ISA Extension ID & Host ISA extension ID */
static const unsigned long kvm_isa_ext_arr[] = {
RISCV_ISA_EXT_a,
RISCV_ISA_EXT_c,
RISCV_ISA_EXT_d,
RISCV_ISA_EXT_f,
RISCV_ISA_EXT_h,
RISCV_ISA_EXT_i,
RISCV_ISA_EXT_m,
RISCV_ISA_EXT_SVPBMT,
RISCV_ISA_EXT_SSTC,
[KVM_RISCV_ISA_EXT_A] = RISCV_ISA_EXT_a,
[KVM_RISCV_ISA_EXT_C] = RISCV_ISA_EXT_c,
[KVM_RISCV_ISA_EXT_D] = RISCV_ISA_EXT_d,
[KVM_RISCV_ISA_EXT_F] = RISCV_ISA_EXT_f,
[KVM_RISCV_ISA_EXT_H] = RISCV_ISA_EXT_h,
[KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
[KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
KVM_ISA_EXT_ARR(SSTC),
KVM_ISA_EXT_ARR(SVINVAL),
KVM_ISA_EXT_ARR(SVPBMT),
KVM_ISA_EXT_ARR(ZIHINTPAUSE),
KVM_ISA_EXT_ARR(ZICBOM),
};
static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
@ -87,6 +96,8 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
case KVM_RISCV_ISA_EXT_I:
case KVM_RISCV_ISA_EXT_M:
case KVM_RISCV_ISA_EXT_SSTC:
case KVM_RISCV_ISA_EXT_SVINVAL:
case KVM_RISCV_ISA_EXT_ZIHINTPAUSE:
return false;
default:
break;
@ -254,6 +265,11 @@ static int kvm_riscv_vcpu_get_reg_config(struct kvm_vcpu *vcpu,
case KVM_REG_RISCV_CONFIG_REG(isa):
reg_val = vcpu->arch.isa[0] & KVM_RISCV_BASE_ISA_MASK;
break;
case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size):
if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOM))
return -EINVAL;
reg_val = riscv_cbom_block_size;
break;
default:
return -EINVAL;
}
@ -311,6 +327,8 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
return -EOPNOTSUPP;
}
break;
case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size):
return -EOPNOTSUPP;
default:
return -EINVAL;
}
@ -784,11 +802,15 @@ static void kvm_riscv_vcpu_update_config(const unsigned long *isa)
{
u64 henvcfg = 0;
if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SVPBMT))
if (riscv_isa_extension_available(isa, SVPBMT))
henvcfg |= ENVCFG_PBMTE;
if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SSTC))
if (riscv_isa_extension_available(isa, SSTC))
henvcfg |= ENVCFG_STCE;
if (riscv_isa_extension_available(isa, ZICBOM))
henvcfg |= (ENVCFG_CBIE | ENVCFG_CBCFE);
csr_write(CSR_HENVCFG, henvcfg);
#ifdef CONFIG_32BIT
csr_write(CSR_HENVCFGH, henvcfg >> 32);
@ -958,7 +980,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
run->exit_reason = KVM_EXIT_UNKNOWN;
while (ret > 0) {
/* Check conditions before entering the guest */
cond_resched();
ret = xfer_to_guest_mode_handle_work(vcpu);
if (!ret)
ret = 1;
kvm_riscv_gstage_vmid_update(vcpu);
@ -966,15 +990,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
local_irq_disable();
/*
* Exit if we have a signal pending so that we can deliver
* the signal to user space.
*/
if (signal_pending(current)) {
ret = -EINTR;
run->exit_reason = KVM_EXIT_INTR;
}
/*
* Ensure we set mode to IN_GUEST_MODE after we disable
* interrupts and before the final VCPU requests check.
@ -997,7 +1012,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
if (ret <= 0 ||
kvm_riscv_gstage_vmid_ver_changed(&vcpu->kvm->arch.vmid) ||
kvm_request_pending(vcpu)) {
kvm_request_pending(vcpu) ||
xfer_to_guest_mode_work_pending()) {
vcpu->mode = OUTSIDE_GUEST_MODE;
local_irq_enable();
kvm_vcpu_srcu_read_lock(vcpu);


@ -8,6 +8,7 @@
#include <linux/kvm_host.h>
#include <asm/csr.h>
#include <asm/insn-def.h>
static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
struct kvm_cpu_trap *trap)
@ -62,11 +63,7 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
{
register unsigned long taddr asm("a0") = (unsigned long)trap;
register unsigned long ttmp asm("a1");
register unsigned long val asm("t0");
register unsigned long tmp asm("t1");
register unsigned long addr asm("t2") = guest_addr;
unsigned long flags;
unsigned long old_stvec, old_hstatus;
unsigned long flags, val, tmp, old_stvec, old_hstatus;
local_irq_save(flags);
@ -82,29 +79,19 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
".option push\n"
".option norvc\n"
"add %[ttmp], %[taddr], 0\n"
/*
* HLVX.HU %[val], (%[addr])
* HLVX.HU t0, (t2)
* 0110010 00011 00111 100 00101 1110011
*/
".word 0x6433c2f3\n"
HLVX_HU(%[val], %[addr])
"andi %[tmp], %[val], 3\n"
"addi %[tmp], %[tmp], -3\n"
"bne %[tmp], zero, 2f\n"
"addi %[addr], %[addr], 2\n"
/*
* HLVX.HU %[tmp], (%[addr])
* HLVX.HU t1, (t2)
* 0110010 00011 00111 100 00110 1110011
*/
".word 0x6433c373\n"
HLVX_HU(%[tmp], %[addr])
"sll %[tmp], %[tmp], 16\n"
"add %[val], %[val], %[tmp]\n"
"2:\n"
".option pop"
: [val] "=&r" (val), [tmp] "=&r" (tmp),
[taddr] "+&r" (taddr), [ttmp] "+&r" (ttmp),
[addr] "+&r" (addr) : : "memory");
[addr] "+&r" (guest_addr) : : "memory");
if (trap->scause == EXC_LOAD_PAGE_FAULT)
trap->scause = EXC_INST_PAGE_FAULT;
@ -121,24 +108,14 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
".option norvc\n"
"add %[ttmp], %[taddr], 0\n"
#ifdef CONFIG_64BIT
/*
* HLV.D %[val], (%[addr])
* HLV.D t0, (t2)
* 0110110 00000 00111 100 00101 1110011
*/
".word 0x6c03c2f3\n"
HLV_D(%[val], %[addr])
#else
/*
* HLV.W %[val], (%[addr])
* HLV.W t0, (t2)
* 0110100 00000 00111 100 00101 1110011
*/
".word 0x6803c2f3\n"
HLV_W(%[val], %[addr])
#endif
".option pop"
: [val] "=&r" (val),
[taddr] "+&r" (taddr), [ttmp] "+&r" (ttmp)
: [addr] "r" (addr) : "memory");
: [addr] "r" (guest_addr) : "memory");
}
csr_write(CSR_STVEC, old_stvec);


@ -12,7 +12,9 @@
#include <linux/of_device.h>
#include <asm/cacheflush.h>
static unsigned int riscv_cbom_block_size = L1_CACHE_BYTES;
unsigned int riscv_cbom_block_size;
EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
static bool noncoherent_supported;
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
@ -79,38 +81,41 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
void riscv_init_cbom_blocksize(void)
{
struct device_node *node;
unsigned long cbom_hartid;
u32 val, probed_block_size;
int ret;
u32 val;
probed_block_size = 0;
for_each_of_cpu_node(node) {
unsigned long hartid;
int cbom_hartid;
ret = riscv_of_processor_hartid(node, &hartid);
if (ret)
continue;
if (hartid < 0)
continue;
/* set block-size for cbom extension if available */
ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
if (ret)
continue;
if (!riscv_cbom_block_size) {
riscv_cbom_block_size = val;
if (!probed_block_size) {
probed_block_size = val;
cbom_hartid = hartid;
} else {
if (riscv_cbom_block_size != val)
pr_warn("cbom-block-size mismatched between harts %d and %lu\n",
if (probed_block_size != val)
pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
cbom_hartid, hartid);
}
}
if (probed_block_size)
riscv_cbom_block_size = probed_block_size;
}
#endif
void riscv_noncoherent_supported(void)
{
WARN(!riscv_cbom_block_size,
"Non-coherent DMA support enabled without a block size\n");
noncoherent_supported = true;
}


@ -132,10 +132,18 @@ export LDS_ELF_FORMAT := $(ELF_FORMAT)
# The wrappers will select whether using "malloc" or the kernel allocator.
LINK_WRAPS = -Wl,--wrap,malloc -Wl,--wrap,free -Wl,--wrap,calloc
# Avoid binutils 2.39+ warnings by marking the stack non-executable and
# ignorning warnings for the kallsyms sections.
LDFLAGS_EXECSTACK = -z noexecstack
ifeq ($(CONFIG_LD_IS_BFD),y)
LDFLAGS_EXECSTACK += $(call ld-option,--no-warn-rwx-segments)
endif
LD_FLAGS_CMDLINE = $(foreach opt,$(KBUILD_LDFLAGS),-Wl,$(opt))
# Used by link-vmlinux.sh which has special support for um link
export CFLAGS_vmlinux := $(LINK-y) $(LINK_WRAPS) $(LD_FLAGS_CMDLINE)
export LDFLAGS_vmlinux := $(LDFLAGS_EXECSTACK)
# When cleaning we don't include .config, so we don't include
# TT or skas makefiles and don't clean skas_ptregs.h.


@ -48,7 +48,8 @@ void show_stack(struct task_struct *task, unsigned long *stack,
break;
if (i && ((i % STACKSLOTS_PER_LINE) == 0))
pr_cont("\n");
pr_cont(" %08lx", *stack++);
pr_cont(" %08lx", READ_ONCE_NOCHECK(*stack));
stack++;
}
printk("%sCall Trace:\n", loglvl);


@ -33,7 +33,7 @@
#include "um_arch.h"
#define DEFAULT_COMMAND_LINE_ROOT "root=98:0"
#define DEFAULT_COMMAND_LINE_CONSOLE "console=tty"
#define DEFAULT_COMMAND_LINE_CONSOLE "console=tty0"
/* Changed in add_arg and setup_arch, which run before SMP is started */
static char __initdata command_line[COMMAND_LINE_SIZE] = { 0 };


@ -6,10 +6,9 @@
#include <asm/unistd.h>
#include <sysdep/ptrace.h>
typedef long syscall_handler_t(struct pt_regs);
typedef long syscall_handler_t(struct syscall_args);
extern syscall_handler_t *sys_call_table[];
#define EXECUTE_SYSCALL(syscall, regs) \
((long (*)(struct syscall_args)) \
(*sys_call_table[syscall]))(SYSCALL_ARGS(&regs->regs))
((*sys_call_table[syscall]))(SYSCALL_ARGS(&regs->regs))


@ -65,9 +65,6 @@ static int get_free_idx(struct task_struct* task)
struct thread_struct *t = &task->thread;
int idx;
if (!t->arch.tls_array)
return GDT_ENTRY_TLS_MIN;
for (idx = 0; idx < GDT_ENTRY_TLS_ENTRIES; idx++)
if (!t->arch.tls_array[idx].present)
return idx + GDT_ENTRY_TLS_MIN;
@ -240,9 +237,6 @@ static int get_tls_entry(struct task_struct *task, struct user_desc *info,
{
struct thread_struct *t = &task->thread;
if (!t->arch.tls_array)
goto clear;
if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
return -EINVAL;


@ -65,7 +65,7 @@ quiet_cmd_vdso = VDSO $@
-Wl,-T,$(filter %.lds,$^) $(filter %.o,$^) && \
sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@'
VDSO_LDFLAGS = -fPIC -shared -Wl,--hash-style=sysv
VDSO_LDFLAGS = -fPIC -shared -Wl,--hash-style=sysv -z noexecstack
GCOV_PROFILE := n
#


@ -602,7 +602,6 @@ void del_gendisk(struct gendisk *disk)
* Prevent new I/O from crossing bio_queue_enter().
*/
blk_queue_start_drain(q);
blk_mq_freeze_queue_wait(q);
if (!(disk->flags & GENHD_FL_HIDDEN)) {
sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
@ -626,6 +625,8 @@ void del_gendisk(struct gendisk *disk)
pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
device_del(disk_to_dev(disk));
blk_mq_freeze_queue_wait(q);
blk_throtl_cancel_bios(disk->queue);
blk_sync_queue(q);


@ -43,7 +43,7 @@ config SYSTEM_TRUSTED_KEYRING
bool "Provide system-wide ring of trusted keys"
depends on KEYS
depends on ASYMMETRIC_KEY_TYPE
depends on X509_CERTIFICATE_PARSER
depends on X509_CERTIFICATE_PARSER = y
help
Provide a system keyring to which trusted keys can be added. Keys in
the keyring are considered to be trusted. Keys may be added at will


@ -1625,7 +1625,7 @@ static int __init fw_devlink_setup(char *arg)
}
early_param("fw_devlink", fw_devlink_setup);
static bool fw_devlink_strict = true;
static bool fw_devlink_strict;
static int __init fw_devlink_strict_setup(char *arg)
{
return strtobool(arg, &fw_devlink_strict);


@ -449,6 +449,9 @@ static int quad8_events_configure(struct counter_device *counter)
return -EINVAL;
}
/* Enable IRQ line */
irq_enabled |= BIT(event_node->channel);
/* Skip configuration if it is the same as previously set */
if (priv->irq_trigger[event_node->channel] == next_irq_trigger)
continue;
@ -462,9 +465,6 @@ static int quad8_events_configure(struct counter_device *counter)
priv->irq_trigger[event_node->channel] << 3;
iowrite8(QUAD8_CTR_IOR | ior_cfg,
&priv->reg->channel[event_node->channel].control);
/* Enable IRQ line */
irq_enabled |= BIT(event_node->channel);
}
iowrite8(irq_enabled, &priv->reg->index_interrupt);


@ -15,6 +15,7 @@ void hmem_register_device(int target_nid, struct resource *r)
.start = r->start,
.end = r->end,
.flags = IORESOURCE_MEM,
.desc = IORES_DESC_SOFT_RESERVED,
};
struct platform_device *pdev;
struct memregion_info info;


@ -31,14 +31,14 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property)
}
pdev = of_find_device_by_node(udma_node);
if (np != udma_node)
of_node_put(udma_node);
if (!pdev) {
pr_debug("UDMA device not found\n");
return ERR_PTR(-EPROBE_DEFER);
}
if (np != udma_node)
of_node_put(udma_node);
ud = platform_get_drvdata(pdev);
if (!ud) {
pr_debug("UDMA has not been probed\n");


@ -3040,9 +3040,10 @@ static int xilinx_dma_probe(struct platform_device *pdev)
/* Request and map I/O memory */
xdev->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(xdev->regs))
return PTR_ERR(xdev->regs);
if (IS_ERR(xdev->regs)) {
err = PTR_ERR(xdev->regs);
goto disable_clks;
}
/* Retrieve the DMA engine properties from the device tree */
xdev->max_buffer_len = GENMASK(XILINX_DMA_MAX_TRANS_LEN_MAX - 1, 0);
xdev->s2mm_chan_id = xdev->dma_config->max_channels / 2;
@ -3070,7 +3071,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
if (err < 0) {
dev_err(xdev->dev,
"missing xlnx,num-fstores property\n");
return err;
goto disable_clks;
}
err = of_property_read_u32(node, "xlnx,flush-fsync",
@ -3090,7 +3091,11 @@ static int xilinx_dma_probe(struct platform_device *pdev)
xdev->ext_addr = false;
/* Set the dma mask bits */
dma_set_mask_and_coherent(xdev->dev, DMA_BIT_MASK(addr_width));
err = dma_set_mask_and_coherent(xdev->dev, DMA_BIT_MASK(addr_width));
if (err < 0) {
dev_err(xdev->dev, "DMA mask error %d\n", err);
goto disable_clks;
}
/* Initialize the DMA engine */
xdev->common.dev = &pdev->dev;
@ -3137,7 +3142,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
for_each_child_of_node(node, child) {
err = xilinx_dma_child_probe(xdev, child);
if (err < 0)
goto disable_clks;
goto error;
}
if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
@ -3172,12 +3177,12 @@ static int xilinx_dma_probe(struct platform_device *pdev)
return 0;
disable_clks:
xdma_disable_allclks(xdev);
error:
for (i = 0; i < xdev->dma_config->max_channels; i++)
if (xdev->chan[i])
xilinx_dma_chan_remove(xdev->chan[i]);
disable_clks:
xdma_disable_allclks(xdev);
return err;
}


@ -849,7 +849,7 @@ static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy(
zynqmp_dma_desc_config_eod(chan, desc);
async_tx_ack(&first->async_tx);
first->async_tx.flags = flags;
first->async_tx.flags = (enum dma_ctrl_flags)flags;
return &first->async_tx;
}


@ -450,9 +450,13 @@ static int scmi_clock_count_get(const struct scmi_protocol_handle *ph)
static const struct scmi_clock_info *
scmi_clock_info_get(const struct scmi_protocol_handle *ph, u32 clk_id)
{
struct scmi_clock_info *clk;
struct clock_info *ci = ph->get_priv(ph);
struct scmi_clock_info *clk = ci->clk + clk_id;
if (clk_id >= ci->num_clocks)
return NULL;
clk = ci->clk + clk_id;
if (!clk->name[0])
return NULL;


@ -106,6 +106,7 @@ enum scmi_optee_pta_cmd {
* @channel_id: OP-TEE channel ID used for this transport
* @tee_session: TEE session identifier
* @caps: OP-TEE SCMI channel capabilities
* @rx_len: Response size
* @mu: Mutex protection on channel access
* @cinfo: SCMI channel information
* @shmem: Virtual base address of the shared memory


@ -166,9 +166,13 @@ static int scmi_domain_reset(const struct scmi_protocol_handle *ph, u32 domain,
struct scmi_xfer *t;
struct scmi_msg_reset_domain_reset *dom;
struct scmi_reset_info *pi = ph->get_priv(ph);
struct reset_dom_info *rdom = pi->dom_info + domain;
struct reset_dom_info *rdom;
if (rdom->async_reset)
if (domain >= pi->num_domains)
return -EINVAL;
rdom = pi->dom_info + domain;
if (rdom->async_reset && flags & AUTONOMOUS_RESET)
flags |= ASYNCHRONOUS_RESET;
ret = ph->xops->xfer_get_init(ph, RESET, sizeof(*dom), 0, &t);
@ -180,7 +184,7 @@ static int scmi_domain_reset(const struct scmi_protocol_handle *ph, u32 domain,
dom->flags = cpu_to_le32(flags);
dom->reset_state = cpu_to_le32(state);
if (rdom->async_reset)
if (flags & ASYNCHRONOUS_RESET)
ret = ph->xops->do_xfer_with_response(ph, t);
else
ret = ph->xops->do_xfer(ph, t);


@ -138,9 +138,28 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
scmi_pd_data->domains = domains;
scmi_pd_data->num_domains = num_domains;
dev_set_drvdata(dev, scmi_pd_data);
return of_genpd_add_provider_onecell(np, scmi_pd_data);
}
static void scmi_pm_domain_remove(struct scmi_device *sdev)
{
int i;
struct genpd_onecell_data *scmi_pd_data;
struct device *dev = &sdev->dev;
struct device_node *np = dev->of_node;
of_genpd_del_provider(np);
scmi_pd_data = dev_get_drvdata(dev);
for (i = 0; i < scmi_pd_data->num_domains; i++) {
if (!scmi_pd_data->domains[i])
continue;
pm_genpd_remove(scmi_pd_data->domains[i]);
}
}
static const struct scmi_device_id scmi_id_table[] = {
{ SCMI_PROTOCOL_POWER, "genpd" },
{ },
@ -150,6 +169,7 @@ MODULE_DEVICE_TABLE(scmi, scmi_id_table);
static struct scmi_driver scmi_power_domain_driver = {
.name = "scmi-power-domain",
.probe = scmi_pm_domain_probe,
.remove = scmi_pm_domain_remove,
.id_table = scmi_id_table,
};
module_scmi_driver(scmi_power_domain_driver);


@ -762,6 +762,10 @@ static int scmi_sensor_config_get(const struct scmi_protocol_handle *ph,
{
int ret;
struct scmi_xfer *t;
struct sensors_info *si = ph->get_priv(ph);
if (sensor_id >= si->num_sensors)
return -EINVAL;
ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_GET,
sizeof(__le32), sizeof(__le32), &t);
@ -771,7 +775,6 @@ static int scmi_sensor_config_get(const struct scmi_protocol_handle *ph,
put_unaligned_le32(sensor_id, t->tx.buf);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
struct sensors_info *si = ph->get_priv(ph);
struct scmi_sensor_info *s = si->sensors + sensor_id;
*sensor_config = get_unaligned_le64(t->rx.buf);
@ -788,6 +791,10 @@ static int scmi_sensor_config_set(const struct scmi_protocol_handle *ph,
int ret;
struct scmi_xfer *t;
struct scmi_msg_sensor_config_set *msg;
struct sensors_info *si = ph->get_priv(ph);
if (sensor_id >= si->num_sensors)
return -EINVAL;
ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_SET,
sizeof(*msg), 0, &t);
@ -800,7 +807,6 @@ static int scmi_sensor_config_set(const struct scmi_protocol_handle *ph,
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
struct sensors_info *si = ph->get_priv(ph);
struct scmi_sensor_info *s = si->sensors + sensor_id;
s->sensor_config = sensor_config;
@ -831,8 +837,11 @@ static int scmi_sensor_reading_get(const struct scmi_protocol_handle *ph,
int ret;
struct scmi_xfer *t;
struct scmi_msg_sensor_reading_get *sensor;
struct scmi_sensor_info *s;
struct sensors_info *si = ph->get_priv(ph);
struct scmi_sensor_info *s = si->sensors + sensor_id;
if (sensor_id >= si->num_sensors)
return -EINVAL;
ret = ph->xops->xfer_get_init(ph, SENSOR_READING_GET,
sizeof(*sensor), 0, &t);
@ -841,6 +850,7 @@ static int scmi_sensor_reading_get(const struct scmi_protocol_handle *ph,
sensor = t->tx.buf;
sensor->id = cpu_to_le32(sensor_id);
s = si->sensors + sensor_id;
if (s->async) {
sensor->flags = cpu_to_le32(SENSOR_READ_ASYNC);
ret = ph->xops->do_xfer_with_response(ph, t);
@ -895,9 +905,13 @@ scmi_sensor_reading_get_timestamped(const struct scmi_protocol_handle *ph,
int ret;
struct scmi_xfer *t;
struct scmi_msg_sensor_reading_get *sensor;
struct scmi_sensor_info *s;
struct sensors_info *si = ph->get_priv(ph);
struct scmi_sensor_info *s = si->sensors + sensor_id;
if (sensor_id >= si->num_sensors)
return -EINVAL;
s = si->sensors + sensor_id;
if (!count || !readings ||
(!s->num_axis && count > 1) || (s->num_axis && count > s->num_axis))
return -EINVAL;
@ -948,6 +962,9 @@ scmi_sensor_info_get(const struct scmi_protocol_handle *ph, u32 sensor_id)
{
struct sensors_info *si = ph->get_priv(ph);
if (sensor_id >= si->num_sensors)
return NULL;
return si->sensors + sensor_id;
}


@ -48,6 +48,9 @@ static int efibc_reboot_notifier_call(struct notifier_block *notifier,
return NOTIFY_DONE;
wdata = kmalloc(MAX_DATA_LEN * sizeof(efi_char16_t), GFP_KERNEL);
if (!wdata)
return NOTIFY_DONE;
for (l = 0; l < MAX_DATA_LEN - 1 && str[l] != '\0'; l++)
wdata[l] = str[l];
wdata[l] = L'\0';


@ -14,7 +14,7 @@
/* SHIM variables */
static const efi_guid_t shim_guid = EFI_SHIM_LOCK_GUID;
static const efi_char16_t shim_MokSBState_name[] = L"MokSBState";
static const efi_char16_t shim_MokSBState_name[] = L"MokSBStateRT";
static efi_status_t get_var(efi_char16_t *name, efi_guid_t *vendor, u32 *attr,
unsigned long *data_size, void *data)
@ -43,8 +43,8 @@ enum efi_secureboot_mode efi_get_secureboot(void)
/*
* See if a user has put the shim into insecure mode. If so, and if the
* variable doesn't have the runtime attribute set, we might as well
* honor that.
* variable doesn't have the non-volatile attribute set, we might as
* well honor that.
*/
size = sizeof(moksbstate);
status = get_efi_var(shim_MokSBState_name, &shim_guid,
@ -53,7 +53,7 @@ enum efi_secureboot_mode efi_get_secureboot(void)
/* If it fails, we don't care why. Default to secure */
if (status != EFI_SUCCESS)
goto secure_boot_enabled;
if (!(attr & EFI_VARIABLE_RUNTIME_ACCESS) && moksbstate == 1)
if (!(attr & EFI_VARIABLE_NON_VOLATILE) && moksbstate == 1)
return efi_secureboot_mode_disabled;
secure_boot_enabled:


@ -516,6 +516,13 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
hdr->ramdisk_image = 0;
hdr->ramdisk_size = 0;
/*
* Disregard any setup data that was provided by the bootloader:
* setup_data could be pointing anywhere, and we have no way of
* authenticating or validating the payload.
*/
hdr->setup_data = 0;
efi_stub_entry(handle, sys_table_arg, boot_params);
/* not reached */


@ -148,10 +148,6 @@ static ssize_t flash_count_show(struct device *dev,
stride = regmap_get_reg_stride(sec->m10bmc->regmap);
num_bits = FLASH_COUNT_SIZE * 8;
flash_buf = kmalloc(FLASH_COUNT_SIZE, GFP_KERNEL);
if (!flash_buf)
return -ENOMEM;
if (FLASH_COUNT_SIZE % stride) {
dev_err(sec->dev,
"FLASH_COUNT_SIZE (0x%x) not aligned to stride (0x%x)\n",
@ -160,6 +156,10 @@ static ssize_t flash_count_show(struct device *dev,
return -EINVAL;
}
flash_buf = kmalloc(FLASH_COUNT_SIZE, GFP_KERNEL);
if (!flash_buf)
return -ENOMEM;
ret = regmap_bulk_read(sec->m10bmc->regmap, STAGING_FLASH_COUNT,
flash_buf, FLASH_COUNT_SIZE / stride);
if (ret) {


@ -41,14 +41,12 @@
* struct ftgpio_gpio - Gemini GPIO state container
* @dev: containing device for this instance
* @gc: gpiochip for this instance
* @irq: irqchip for this instance
* @base: remapped I/O-memory base
* @clk: silicon clock
*/
struct ftgpio_gpio {
struct device *dev;
struct gpio_chip gc;
struct irq_chip irq;
void __iomem *base;
struct clk *clk;
};
@ -70,6 +68,7 @@ static void ftgpio_gpio_mask_irq(struct irq_data *d)
val = readl(g->base + GPIO_INT_EN);
val &= ~BIT(irqd_to_hwirq(d));
writel(val, g->base + GPIO_INT_EN);
gpiochip_disable_irq(gc, irqd_to_hwirq(d));
}
static void ftgpio_gpio_unmask_irq(struct irq_data *d)
@ -78,6 +77,7 @@ static void ftgpio_gpio_unmask_irq(struct irq_data *d)
struct ftgpio_gpio *g = gpiochip_get_data(gc);
u32 val;
gpiochip_enable_irq(gc, irqd_to_hwirq(d));
val = readl(g->base + GPIO_INT_EN);
val |= BIT(irqd_to_hwirq(d));
writel(val, g->base + GPIO_INT_EN);
@ -221,6 +221,16 @@ static int ftgpio_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
return 0;
}
static const struct irq_chip ftgpio_irq_chip = {
.name = "FTGPIO010",
.irq_ack = ftgpio_gpio_ack_irq,
.irq_mask = ftgpio_gpio_mask_irq,
.irq_unmask = ftgpio_gpio_unmask_irq,
.irq_set_type = ftgpio_gpio_set_irq_type,
.flags = IRQCHIP_IMMUTABLE,
GPIOCHIP_IRQ_RESOURCE_HELPERS,
};
static int ftgpio_gpio_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -277,14 +287,8 @@ static int ftgpio_gpio_probe(struct platform_device *pdev)
if (!IS_ERR(g->clk))
g->gc.set_config = ftgpio_gpio_set_config;
g->irq.name = "FTGPIO010";
g->irq.irq_ack = ftgpio_gpio_ack_irq;
g->irq.irq_mask = ftgpio_gpio_mask_irq;
g->irq.irq_unmask = ftgpio_gpio_unmask_irq;
g->irq.irq_set_type = ftgpio_gpio_set_irq_type;
girq = &g->gc.irq;
girq->chip = &g->irq;
gpio_irq_chip_set_chip(girq, &ftgpio_irq_chip);
girq->parent_handler = ftgpio_gpio_irq_handler;
girq->num_parents = 1;
girq->parents = devm_kcalloc(dev, 1, sizeof(*girq->parents),

View File
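The ftgpio change above replaces a writable, per-instance struct irq_chip embedded in the driver state with one shared const table plus gpiochip_enable_irq()/gpiochip_disable_irq() calls in the mask and unmask paths (the IRQCHIP_IMMUTABLE pattern). Outside the kernel the core idea is simply to share one const ops table instead of copying it into every object; a minimal sketch:

    #include <stdio.h>

    struct ops {
        void (*mask)(int line);
        void (*unmask)(int line);
    };

    static void my_mask(int line)   { printf("mask %d\n", line); }
    static void my_unmask(int line) { printf("unmask %d\n", line); }

    /* One immutable table shared by every instance; nothing can patch it
     * at runtime, and each instance only keeps a pointer to it. */
    static const struct ops my_ops = {
        .mask   = my_mask,
        .unmask = my_unmask,
    };

    struct device_instance {
        const struct ops *ops;  /* instead of a mutable embedded copy */
        int line;
    };

    int main(void)
    {
        struct device_instance a = { .ops = &my_ops, .line = 3 };

        a.ops->mask(a.line);
        a.ops->unmask(a.line);
        return 0;
    }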

@ -533,8 +533,10 @@ static int __init gpio_mockup_register_chip(int idx)
}
fwnode = fwnode_create_software_node(properties, NULL);
if (IS_ERR(fwnode))
if (IS_ERR(fwnode)) {
kfree_strarray(line_names, ngpio);
return PTR_ERR(fwnode);
}
pdevinfo.name = "gpio-mockup";
pdevinfo.id = idx;
@ -597,9 +599,9 @@ static int __init gpio_mockup_init(void)
static void __exit gpio_mockup_exit(void)
{
gpio_mockup_unregister_pdevs();
debugfs_remove_recursive(gpio_mockup_dbg_dir);
platform_driver_unregister(&gpio_mockup_driver);
gpio_mockup_unregister_pdevs();
}
module_init(gpio_mockup_init);

View File
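Two fixes sit in the gpio-mockup hunk: the fwnode error path now frees line_names, and module exit now unregisters the platform devices before removing debugfs and the driver, so teardown mirrors setup in reverse. A small sketch of that reverse-order teardown idea, with hypothetical step names:

    #include <stdio.h>

    /* Hypothetical bring-up steps, listed in the order they run. */
    static void debugfs_init(void)     { puts("debugfs up"); }
    static void driver_register(void)  { puts("driver up"); }
    static void devices_register(void) { puts("devices up"); }

    static void devices_unregister(void) { puts("devices down"); }
    static void driver_unregister(void)  { puts("driver down"); }
    static void debugfs_remove(void)     { puts("debugfs down"); }

    int main(void)
    {
        debugfs_init();
        driver_register();
        devices_register();

        /* ... normal operation ... */

        /* Tear down strictly in reverse: the devices may still be using
         * the driver and the debugfs directory created before them. */
        devices_unregister();
        driver_unregister();
        debugfs_remove();
        return 0;
    }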

@ -307,6 +307,8 @@ static int tqmx86_gpio_probe(struct platform_device *pdev)
girq->default_type = IRQ_TYPE_NONE;
girq->handler = handle_simple_irq;
girq->init_valid_mask = tqmx86_init_irq_valid_mask;
irq_domain_set_pm_device(girq->domain, dev);
}
ret = devm_gpiochip_add_data(dev, chip, gpio);
@ -315,8 +317,6 @@ static int tqmx86_gpio_probe(struct platform_device *pdev)
goto out_pm_dis;
}
irq_domain_set_pm_device(girq->domain, dev);
dev_info(dev, "GPIO functionality initialized with %d pins\n",
chip->ngpio);

View File

@ -1986,7 +1986,6 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
ret = -ENODEV;
goto out_free_le;
}
le->irq = irq;
if (eflags & GPIOEVENT_REQUEST_RISING_EDGE)
irqflags |= test_bit(FLAG_ACTIVE_LOW, &desc->flags) ?
@ -2000,7 +1999,7 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
init_waitqueue_head(&le->wait);
/* Request a thread to read the events */
ret = request_threaded_irq(le->irq,
ret = request_threaded_irq(irq,
lineevent_irq_handler,
lineevent_irq_thread,
irqflags,
@ -2009,6 +2008,8 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
if (ret)
goto out_free_le;
le->irq = irq;
fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
if (fd < 0) {
ret = fd;

View File
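In the lineevent hunk, le->irq is now assigned only after request_threaded_irq() succeeds, so the shared error path never tries to release an IRQ the code does not own. A generic sketch of publishing a handle into the object only once acquisition succeeded, with made-up acquire/release helpers standing in for the IRQ calls:

    #include <stdio.h>
    #include <stdlib.h>

    struct event {
        int irq;    /* 0 means "nothing to release" */
    };

    /* Hypothetical acquire/release pair. */
    static int acquire_irq(int nr)  { return nr > 0 ? 0 : -1; }
    static void release_irq(int nr) { printf("released irq %d\n", nr); }

    static struct event *event_create(int irq)
    {
        struct event *ev = calloc(1, sizeof(*ev));

        if (!ev)
            return NULL;

        if (acquire_irq(irq) < 0)
            goto out_free;  /* ev->irq still 0: nothing extra to undo */

        ev->irq = irq;      /* publish only after the resource is ours */
        return ev;

    out_free:
        free(ev);
        return NULL;
    }

    static void event_destroy(struct event *ev)
    {
        if (ev->irq)
            release_irq(ev->irq);
        free(ev);
    }

    int main(void)
    {
        struct event *ev = event_create(5);

        if (ev)
            event_destroy(ev);
        return 0;
    }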

@ -39,6 +39,7 @@
#include <linux/pm_runtime.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_damage_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_edid.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_fb_helper.h>
@ -497,6 +498,11 @@ bool amdgpu_display_ddc_probe(struct amdgpu_connector *amdgpu_connector,
static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
.destroy = drm_gem_fb_destroy,
.create_handle = drm_gem_fb_create_handle,
};
static const struct drm_framebuffer_funcs amdgpu_fb_funcs_atomic = {
.destroy = drm_gem_fb_destroy,
.create_handle = drm_gem_fb_create_handle,
.dirty = drm_atomic_helper_dirtyfb,
};
@ -1102,7 +1108,10 @@ static int amdgpu_display_gem_fb_verify_and_init(struct drm_device *dev,
if (ret)
goto err;
ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
if (drm_drv_uses_atomic_modeset(dev))
ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs_atomic);
else
ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
if (ret)
goto err;

View File

@ -181,6 +181,9 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
for (i = 0; i < AMDGPU_MES_MAX_SDMA_PIPES; i++) {
if (adev->ip_versions[SDMA0_HWIP][0] < IP_VERSION(6, 0, 0))
adev->mes.sdma_hqd_mask[i] = i ? 0 : 0x3fc;
/* zero sdma_hqd_mask for non-existent engine */
else if (adev->sdma.num_instances == 1)
adev->mes.sdma_hqd_mask[i] = i ? 0 : 0xfc;
else
adev->mes.sdma_hqd_mask[i] = 0xfc;
}

View File
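The MES hunk above zeroes the hardware-queue mask for SDMA pipes that have no backing engine when only one SDMA instance exists. Sketching just the mask-table construction on its own (the mask values and the two-pipe limit are taken from the hunk; everything else is illustrative):

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_SDMA_PIPES 2    /* assumed value of AMDGPU_MES_MAX_SDMA_PIPES */

    /* Build per-pipe queue masks; a pipe whose SDMA engine does not
     * exist gets an empty mask so no queues are ever placed on it. */
    static void build_sdma_hqd_masks(uint32_t mask[MAX_SDMA_PIPES],
                                     int num_sdma_instances)
    {
        for (int i = 0; i < MAX_SDMA_PIPES; i++) {
            if (num_sdma_instances == 1)
                mask[i] = i ? 0 : 0xfc;  /* only pipe 0 is real */
            else
                mask[i] = 0xfc;
        }
    }

    int main(void)
    {
        uint32_t mask[MAX_SDMA_PIPES];

        build_sdma_hqd_masks(mask, 1);
        for (int i = 0; i < MAX_SDMA_PIPES; i++)
            printf("pipe %d mask 0x%x\n", i, mask[i]);
        return 0;
    }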

@ -2484,8 +2484,7 @@ bool amdgpu_vm_handle_fault(struct amdgpu_device *adev, u32 pasid,
/* Intentionally setting invalid PTE flag
* combination to force a no-retry-fault
*/
flags = AMDGPU_PTE_EXECUTABLE | AMDGPU_PDE_PTE |
AMDGPU_PTE_TF;
flags = AMDGPU_PTE_SNOOPED | AMDGPU_PTE_PRT;
value = 0;
} else if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_NEVER) {
/* Redirect the access to the dummy page */

View File

@ -1103,10 +1103,13 @@ static void gmc_v9_0_get_vm_pde(struct amdgpu_device *adev, int level,
*flags |= AMDGPU_PDE_BFS(0x9);
} else if (level == AMDGPU_VM_PDB0) {
if (*flags & AMDGPU_PDE_PTE)
if (*flags & AMDGPU_PDE_PTE) {
*flags &= ~AMDGPU_PDE_PTE;
else
if (!(*flags & AMDGPU_PTE_VALID))
*addr |= 1 << PAGE_SHIFT;
} else {
*flags |= AMDGPU_PTE_TF;
}
}
}

View File

@ -4759,7 +4759,7 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
plane_info->visible = true;
plane_info->stereo_format = PLANE_STEREO_FORMAT_NONE;
plane_info->layer_index = 0;
plane_info->layer_index = plane_state->normalized_zpos;
ret = fill_plane_color_attributes(plane_state, plane_info->format,
&plane_info->color_space);
@ -4827,7 +4827,7 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
dc_plane_state->global_alpha = plane_info.global_alpha;
dc_plane_state->global_alpha_value = plane_info.global_alpha_value;
dc_plane_state->dcc = plane_info.dcc;
dc_plane_state->layer_index = plane_info.layer_index; // Always returns 0
dc_plane_state->layer_index = plane_info.layer_index;
dc_plane_state->flip_int_enabled = true;
/*
@ -9485,6 +9485,14 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
}
}
/*
* DC consults the zpos (layer_index in DC terminology) to determine the
* hw plane on which to enable the hw cursor (see
* `dcn10_can_pipe_disable_cursor`). By now, all modified planes are in
* atomic state, so call drm helper to normalize zpos.
*/
drm_atomic_normalize_zpos(dev, state);
/* Remove exiting planes if they are modified */
for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
ret = dm_update_plane_state(dc, state, plane,

View File
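The amdgpu_dm change above stops hard-coding layer_index to 0 and instead consumes the zpos normalized by drm_atomic_normalize_zpos(), so DC can pick the right hardware plane for the cursor. Normalization itself maps arbitrary zpos values to dense 0..n-1 indices in sort order; the sketch below is a simplification (the real helper works per CRTC and breaks ties by plane id):

    #include <stdio.h>
    #include <stdlib.h>

    struct plane {
        const char *name;
        int zpos;               /* user-provided, possibly sparse */
        int normalized_zpos;    /* dense 0..n-1, filled in below */
    };

    static int cmp_zpos(const void *a, const void *b)
    {
        const struct plane *pa = a, *pb = b;

        return (pa->zpos > pb->zpos) - (pa->zpos < pb->zpos);
    }

    /* Assign consecutive indices in increasing zpos order. */
    static void normalize_zpos(struct plane *planes, size_t n)
    {
        qsort(planes, n, sizeof(*planes), cmp_zpos);
        for (size_t i = 0; i < n; i++)
            planes[i].normalized_zpos = (int)i;
    }

    int main(void)
    {
        struct plane planes[] = {
            { "cursor",  255, 0 },
            { "primary",   0, 0 },
            { "overlay",  17, 0 },
        };

        normalize_zpos(planes, 3);
        for (size_t i = 0; i < 3; i++)
            printf("%s -> %d\n", planes[i].name, planes[i].normalized_zpos);
        return 0;
    }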

@ -99,7 +99,7 @@ static int dcn31_get_active_display_cnt_wa(
return display_count;
}
static void dcn31_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable)
static void dcn31_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
int i;
@ -110,9 +110,10 @@ static void dcn31_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable)
if (pipe->top_pipe || pipe->prev_odm_pipe)
continue;
if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))) {
if (disable)
if (disable) {
pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
else
reset_sync_context_for_pipe(dc, context, i);
} else
pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
}
}
@ -211,11 +212,11 @@ void dcn31_update_clocks(struct clk_mgr *clk_mgr_base,
}
if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
dcn31_disable_otg_wa(clk_mgr_base, true);
dcn31_disable_otg_wa(clk_mgr_base, context, true);
clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
dcn31_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
dcn31_disable_otg_wa(clk_mgr_base, false);
dcn31_disable_otg_wa(clk_mgr_base, context, false);
update_dispclk = true;
}

View File

@ -119,7 +119,7 @@ static int dcn314_get_active_display_cnt_wa(
return display_count;
}
static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable)
static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
int i;
@ -129,11 +129,11 @@ static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable)
if (pipe->top_pipe || pipe->prev_odm_pipe)
continue;
if (pipe->stream && (pipe->stream->dpms_off || pipe->plane_state == NULL ||
dc_is_virtual_signal(pipe->stream->signal))) {
if (disable)
if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))) {
if (disable) {
pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
else
reset_sync_context_for_pipe(dc, context, i);
} else
pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
}
}
@ -233,11 +233,11 @@ void dcn314_update_clocks(struct clk_mgr *clk_mgr_base,
}
if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
dcn314_disable_otg_wa(clk_mgr_base, true);
dcn314_disable_otg_wa(clk_mgr_base, context, true);
clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
dcn314_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
dcn314_disable_otg_wa(clk_mgr_base, false);
dcn314_disable_otg_wa(clk_mgr_base, context, false);
update_dispclk = true;
}

View File

@ -46,6 +46,9 @@
#define TO_CLK_MGR_DCN315(clk_mgr)\
container_of(clk_mgr, struct clk_mgr_dcn315, base)
#define UNSUPPORTED_DCFCLK 10000000
#define MIN_DPP_DISP_CLK 100000
static int dcn315_get_active_display_cnt_wa(
struct dc *dc,
struct dc_state *context)
@ -79,7 +82,7 @@ static int dcn315_get_active_display_cnt_wa(
return display_count;
}
static void dcn315_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable)
static void dcn315_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
int i;
@ -91,9 +94,10 @@ static void dcn315_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable)
continue;
if (pipe->stream && (pipe->stream->dpms_off || pipe->plane_state == NULL ||
dc_is_virtual_signal(pipe->stream->signal))) {
if (disable)
if (disable) {
pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
else
reset_sync_context_for_pipe(dc, context, i);
} else
pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
}
}
@ -146,6 +150,9 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
}
}
/* Lock pstate by requesting unsupported dcfclk if change is unsupported */
if (!new_clocks->p_state_change_support)
new_clocks->dcfclk_khz = UNSUPPORTED_DCFCLK;
if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz)) {
clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
dcn315_smu_set_hard_min_dcfclk(clk_mgr, clk_mgr_base->clks.dcfclk_khz);
@ -159,10 +166,10 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
// workaround: Limit dppclk to 100Mhz to avoid lower eDP panel switch to plus 4K monitor underflow.
if (!IS_DIAG_DC(dc->ctx->dce_environment)) {
if (new_clocks->dppclk_khz < 100000)
new_clocks->dppclk_khz = 100000;
if (new_clocks->dispclk_khz < 100000)
new_clocks->dispclk_khz = 100000;
if (new_clocks->dppclk_khz < MIN_DPP_DISP_CLK)
new_clocks->dppclk_khz = MIN_DPP_DISP_CLK;
if (new_clocks->dispclk_khz < MIN_DPP_DISP_CLK)
new_clocks->dispclk_khz = MIN_DPP_DISP_CLK;
}
if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr->base.clks.dppclk_khz)) {
@ -175,12 +182,12 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
/* No need to apply the w/a if we haven't taken over from bios yet */
if (clk_mgr_base->clks.dispclk_khz)
dcn315_disable_otg_wa(clk_mgr_base, true);
dcn315_disable_otg_wa(clk_mgr_base, context, true);
clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
dcn315_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
if (clk_mgr_base->clks.dispclk_khz)
dcn315_disable_otg_wa(clk_mgr_base, false);
dcn315_disable_otg_wa(clk_mgr_base, context, false);
update_dispclk = true;
}
@ -275,7 +282,7 @@ static struct wm_table ddr5_wm_table = {
{
.wm_inst = WM_A,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 64.0,
.pstate_latency_us = 129.0,
.sr_exit_time_us = 11.5,
.sr_enter_plus_exit_time_us = 14.5,
.valid = true,
@ -283,7 +290,7 @@ static struct wm_table ddr5_wm_table = {
{
.wm_inst = WM_B,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 64.0,
.pstate_latency_us = 129.0,
.sr_exit_time_us = 11.5,
.sr_enter_plus_exit_time_us = 14.5,
.valid = true,
@ -291,7 +298,7 @@ static struct wm_table ddr5_wm_table = {
{
.wm_inst = WM_C,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 64.0,
.pstate_latency_us = 129.0,
.sr_exit_time_us = 11.5,
.sr_enter_plus_exit_time_us = 14.5,
.valid = true,
@ -299,7 +306,7 @@ static struct wm_table ddr5_wm_table = {
{
.wm_inst = WM_D,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 64.0,
.pstate_latency_us = 129.0,
.sr_exit_time_us = 11.5,
.sr_enter_plus_exit_time_us = 14.5,
.valid = true,
@ -556,8 +563,7 @@ static void dcn315_clk_mgr_helper_populate_bw_params(
ASSERT(bw_params->clk_table.entries[i-1].dcfclk_mhz);
bw_params->vram_type = bios_info->memory_type;
bw_params->num_channels = bios_info->ma_channel_number;
if (!bw_params->num_channels)
bw_params->num_channels = 2;
bw_params->dram_channel_width_bytes = bios_info->memory_type == 0x22 ? 8 : 4;
for (i = 0; i < WM_SET_COUNT; i++) {
bw_params->wm_table.entries[i].wm_inst = i;

View File

@ -112,7 +112,7 @@ static int dcn316_get_active_display_cnt_wa(
return display_count;
}
static void dcn316_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable)
static void dcn316_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
int i;
@ -124,9 +124,10 @@ static void dcn316_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable)
continue;
if (pipe->stream && (pipe->stream->dpms_off || pipe->plane_state == NULL ||
dc_is_virtual_signal(pipe->stream->signal))) {
if (disable)
if (disable) {
pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
else
reset_sync_context_for_pipe(dc, context, i);
} else
pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
}
}
@ -221,11 +222,11 @@ static void dcn316_update_clocks(struct clk_mgr *clk_mgr_base,
}
if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
dcn316_disable_otg_wa(clk_mgr_base, true);
dcn316_disable_otg_wa(clk_mgr_base, context, true);
clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
dcn316_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
dcn316_disable_otg_wa(clk_mgr_base, false);
dcn316_disable_otg_wa(clk_mgr_base, context, false);
update_dispclk = true;
}

View File

@ -2758,8 +2758,14 @@ bool perform_link_training_with_retries(
skip_video_pattern);
/* Transmit idle pattern once training successful. */
if (status == LINK_TRAINING_SUCCESS && !is_link_bw_low)
if (status == LINK_TRAINING_SUCCESS && !is_link_bw_low) {
dp_set_hw_test_pattern(link, &pipe_ctx->link_res, DP_TEST_PATTERN_VIDEO_MODE, NULL, 0);
/* Update verified link settings to current one
* Because DPIA LT might fallback to lower link setting.
*/
link->verified_link_cap.link_rate = link->cur_link_settings.link_rate;
link->verified_link_cap.lane_count = link->cur_link_settings.lane_count;
}
} else {
status = dc_link_dp_perform_link_training(link,
&pipe_ctx->link_res,
@ -5121,6 +5127,14 @@ bool dp_retrieve_lttpr_cap(struct dc_link *link)
lttpr_dpcd_data[DP_PHY_REPEATER_128B132B_RATES -
DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
/* If this chip cap is set, at least one retimer must exist in the chain
* Override count to 1 if we receive a known bad count (0 or an invalid value) */
if (link->chip_caps & EXT_DISPLAY_PATH_CAPS__DP_FIXED_VS_EN &&
(dp_convert_to_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt) == 0)) {
ASSERT(0);
link->dpcd_caps.lttpr_caps.phy_repeater_cnt = 0x80;
}
/* Attempt to train in LTTPR transparent mode if repeater count exceeds 8. */
is_lttpr_present = (link->dpcd_caps.lttpr_caps.max_lane_count > 0 &&
link->dpcd_caps.lttpr_caps.max_lane_count <= 4 &&

View File
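The LTTPR hunk overrides a repeater count of 0 with 0x80 when the board's capability flag says a retimer must be present. The DPCD PHY_REPEATER_CNT field is one-hot (0x80 means one repeater, 0x40 two, and so on), which is why 0x80 is the "force a count of one" value. A hedged sketch of a dp_convert_to_count()-style decoder, written from that encoding rather than copied from the driver:

    #include <stdio.h>
    #include <stdint.h>

    /* One-hot DPCD repeater count: 0x80 -> 1, 0x40 -> 2, ... 0x01 -> 8.
     * Anything else (including 0) is treated as "no valid count". */
    static int lttpr_count_from_dpcd(uint8_t raw)
    {
        switch (raw) {
        case 0x80: return 1;
        case 0x40: return 2;
        case 0x20: return 3;
        case 0x10: return 4;
        case 0x08: return 5;
        case 0x04: return 6;
        case 0x02: return 7;
        case 0x01: return 8;
        default:   return 0;
        }
    }

    int main(void)
    {
        uint8_t raw = 0x00;    /* bogus value reported by the sink */

        /* Board says a retimer must exist: force a count of one. */
        if (lttpr_count_from_dpcd(raw) == 0)
            raw = 0x80;

        printf("repeaters: %d\n", lttpr_count_from_dpcd(raw));
        return 0;
    }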

@ -3584,6 +3584,23 @@ void check_syncd_pipes_for_disabled_master_pipe(struct dc *dc,
}
}
void reset_sync_context_for_pipe(const struct dc *dc,
struct dc_state *context,
uint8_t pipe_idx)
{
int i;
struct pipe_ctx *pipe_ctx_reset;
/* reset the otg sync context for the pipe and its slave pipes if any */
for (i = 0; i < dc->res_pool->pipe_count; i++) {
pipe_ctx_reset = &context->res_ctx.pipe_ctx[i];
if (((GET_PIPE_SYNCD_FROM_PIPE(pipe_ctx_reset) == pipe_idx) &&
IS_PIPE_SYNCD_VALID(pipe_ctx_reset)) || (i == pipe_idx))
SET_PIPE_SYNCD_TO_PIPE(pipe_ctx_reset, i);
}
}
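reset_sync_context_for_pipe(), added above, walks every pipe and points the OTG sync ID of the target pipe, and of any pipe synced to it, back at itself. A simplified array-based sketch of that detach-from-master walk, with the syncd bookkeeping macros replaced by a plain array:

    #include <stdio.h>

    #define PIPE_COUNT 4

    /* syncd[i] == i means pipe i is its own sync master. */
    static void reset_sync_context_for_pipe(int syncd[PIPE_COUNT], int pipe_idx)
    {
        for (int i = 0; i < PIPE_COUNT; i++) {
            if (syncd[i] == pipe_idx || i == pipe_idx)
                syncd[i] = i;   /* detach the pipe and its slaves */
        }
    }

    int main(void)
    {
        int syncd[PIPE_COUNT] = { 0, 0, 2, 2 };  /* pipes 0,1 synced to 0; 2,3 to 2 */

        reset_sync_context_for_pipe(syncd, 0);
        for (int i = 0; i < PIPE_COUNT; i++)
            printf("pipe %d -> master %d\n", i, syncd[i]);
        return 0;
    }
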
uint8_t resource_transmitter_to_phy_idx(const struct dc *dc, enum transmitter transmitter)
{
/* TODO - get transmitter to phy idx mapping from DMUB */

View File

@ -2164,7 +2164,8 @@ static void dce110_setup_audio_dto(
continue;
if (pipe_ctx->stream->signal != SIGNAL_TYPE_HDMI_TYPE_A)
continue;
if (pipe_ctx->stream_res.audio != NULL) {
if (pipe_ctx->stream_res.audio != NULL &&
pipe_ctx->stream_res.audio->enabled == false) {
struct audio_output audio_output;
build_audio_output(context, pipe_ctx, &audio_output);
@ -2204,7 +2205,8 @@ static void dce110_setup_audio_dto(
if (!dc_is_dp_signal(pipe_ctx->stream->signal))
continue;
if (pipe_ctx->stream_res.audio != NULL) {
if (pipe_ctx->stream_res.audio != NULL &&
pipe_ctx->stream_res.audio->enabled == false) {
struct audio_output audio_output;
build_audio_output(context, pipe_ctx, &audio_output);

View File

@ -445,226 +445,6 @@
type DSCRM_DSC_FORWARD_EN; \
type DSCRM_DSC_OPP_PIPE_SOURCE
#define DSC_REG_LIST_DCN314(id) \
SRI(DSC_TOP_CONTROL, DSC_TOP, id),\
SRI(DSC_DEBUG_CONTROL, DSC_TOP, id),\
SRI(DSCC_CONFIG0, DSCC, id),\
SRI(DSCC_CONFIG1, DSCC, id),\
SRI(DSCC_STATUS, DSCC, id),\
SRI(DSCC_INTERRUPT_CONTROL_STATUS, DSCC, id),\
SRI(DSCC_PPS_CONFIG0, DSCC, id),\
SRI(DSCC_PPS_CONFIG1, DSCC, id),\
SRI(DSCC_PPS_CONFIG2, DSCC, id),\
SRI(DSCC_PPS_CONFIG3, DSCC, id),\
SRI(DSCC_PPS_CONFIG4, DSCC, id),\
SRI(DSCC_PPS_CONFIG5, DSCC, id),\
SRI(DSCC_PPS_CONFIG6, DSCC, id),\
SRI(DSCC_PPS_CONFIG7, DSCC, id),\
SRI(DSCC_PPS_CONFIG8, DSCC, id),\
SRI(DSCC_PPS_CONFIG9, DSCC, id),\
SRI(DSCC_PPS_CONFIG10, DSCC, id),\
SRI(DSCC_PPS_CONFIG11, DSCC, id),\
SRI(DSCC_PPS_CONFIG12, DSCC, id),\
SRI(DSCC_PPS_CONFIG13, DSCC, id),\
SRI(DSCC_PPS_CONFIG14, DSCC, id),\
SRI(DSCC_PPS_CONFIG15, DSCC, id),\
SRI(DSCC_PPS_CONFIG16, DSCC, id),\
SRI(DSCC_PPS_CONFIG17, DSCC, id),\
SRI(DSCC_PPS_CONFIG18, DSCC, id),\
SRI(DSCC_PPS_CONFIG19, DSCC, id),\
SRI(DSCC_PPS_CONFIG20, DSCC, id),\
SRI(DSCC_PPS_CONFIG21, DSCC, id),\
SRI(DSCC_PPS_CONFIG22, DSCC, id),\
SRI(DSCC_MEM_POWER_CONTROL, DSCC, id),\
SRI(DSCC_R_Y_SQUARED_ERROR_LOWER, DSCC, id),\
SRI(DSCC_R_Y_SQUARED_ERROR_UPPER, DSCC, id),\
SRI(DSCC_G_CB_SQUARED_ERROR_LOWER, DSCC, id),\
SRI(DSCC_G_CB_SQUARED_ERROR_UPPER, DSCC, id),\
SRI(DSCC_B_CR_SQUARED_ERROR_LOWER, DSCC, id),\
SRI(DSCC_B_CR_SQUARED_ERROR_UPPER, DSCC, id),\
SRI(DSCC_MAX_ABS_ERROR0, DSCC, id),\
SRI(DSCC_MAX_ABS_ERROR1, DSCC, id),\
SRI(DSCC_RATE_BUFFER0_MAX_FULLNESS_LEVEL, DSCC, id),\
SRI(DSCC_RATE_BUFFER1_MAX_FULLNESS_LEVEL, DSCC, id),\
SRI(DSCC_RATE_BUFFER2_MAX_FULLNESS_LEVEL, DSCC, id),\
SRI(DSCC_RATE_BUFFER3_MAX_FULLNESS_LEVEL, DSCC, id),\
SRI(DSCC_RATE_CONTROL_BUFFER0_MAX_FULLNESS_LEVEL, DSCC, id),\
SRI(DSCC_RATE_CONTROL_BUFFER1_MAX_FULLNESS_LEVEL, DSCC, id),\
SRI(DSCC_RATE_CONTROL_BUFFER2_MAX_FULLNESS_LEVEL, DSCC, id),\
SRI(DSCC_RATE_CONTROL_BUFFER3_MAX_FULLNESS_LEVEL, DSCC, id),\
SRI(DSCCIF_CONFIG0, DSCCIF, id),\
SRI(DSCCIF_CONFIG1, DSCCIF, id),\
SRI(DSCRM_DSC_FORWARD_CONFIG, DSCRM, id)
#define DSC_REG_LIST_SH_MASK_DCN314(mask_sh)\
DSC_SF(DSC_TOP0_DSC_TOP_CONTROL, DSC_CLOCK_EN, mask_sh), \
DSC_SF(DSC_TOP0_DSC_TOP_CONTROL, DSC_DISPCLK_R_GATE_DIS, mask_sh), \
DSC_SF(DSC_TOP0_DSC_TOP_CONTROL, DSC_DSCCLK_R_GATE_DIS, mask_sh), \
DSC_SF(DSC_TOP0_DSC_DEBUG_CONTROL, DSC_DBG_EN, mask_sh), \
DSC_SF(DSC_TOP0_DSC_DEBUG_CONTROL, DSC_TEST_CLOCK_MUX_SEL, mask_sh), \
DSC_SF(DSCC0_DSCC_CONFIG0, NUMBER_OF_SLICES_PER_LINE, mask_sh), \
DSC_SF(DSCC0_DSCC_CONFIG0, ALTERNATE_ICH_ENCODING_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_CONFIG0, NUMBER_OF_SLICES_IN_VERTICAL_DIRECTION, mask_sh), \
DSC_SF(DSCC0_DSCC_CONFIG1, DSCC_RATE_CONTROL_BUFFER_MODEL_SIZE, mask_sh), \
/*DSC_SF(DSCC0_DSCC_CONFIG1, DSCC_DISABLE_ICH, mask_sh),*/ \
DSC_SF(DSCC0_DSCC_STATUS, DSCC_DOUBLE_BUFFER_REG_UPDATE_PENDING, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER0_OVERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER1_OVERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER2_OVERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER3_OVERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER0_UNDERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER1_UNDERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER2_UNDERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER3_UNDERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL0_OVERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL1_OVERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL2_OVERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL3_OVERFLOW_OCCURRED, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER0_OVERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER1_OVERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER2_OVERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER3_OVERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER0_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER1_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER2_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER3_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL0_OVERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL1_OVERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL2_OVERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL3_OVERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG0, DSC_VERSION_MINOR, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG0, DSC_VERSION_MAJOR, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG0, PPS_IDENTIFIER, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG0, LINEBUF_DEPTH, mask_sh), \
DSC2_SF(DSCC0, DSCC_PPS_CONFIG0__BITS_PER_COMPONENT, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG1, BITS_PER_PIXEL, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG1, VBR_ENABLE, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG1, SIMPLE_422, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG1, CONVERT_RGB, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG1, BLOCK_PRED_ENABLE, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG1, NATIVE_422, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG1, NATIVE_420, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG1, CHUNK_SIZE, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG2, PIC_WIDTH, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG2, PIC_HEIGHT, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG3, SLICE_WIDTH, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG3, SLICE_HEIGHT, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG4, INITIAL_XMIT_DELAY, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG4, INITIAL_DEC_DELAY, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG5, INITIAL_SCALE_VALUE, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG5, SCALE_INCREMENT_INTERVAL, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG6, SCALE_DECREMENT_INTERVAL, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG6, FIRST_LINE_BPG_OFFSET, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG6, SECOND_LINE_BPG_OFFSET, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG7, NFL_BPG_OFFSET, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG7, SLICE_BPG_OFFSET, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG8, NSL_BPG_OFFSET, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG8, SECOND_LINE_OFFSET_ADJ, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG9, INITIAL_OFFSET, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG9, FINAL_OFFSET, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG10, FLATNESS_MIN_QP, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG10, FLATNESS_MAX_QP, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG10, RC_MODEL_SIZE, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_EDGE_FACTOR, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_QUANT_INCR_LIMIT0, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_QUANT_INCR_LIMIT1, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_TGT_OFFSET_LO, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_TGT_OFFSET_HI, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG12, RC_BUF_THRESH0, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG12, RC_BUF_THRESH1, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG12, RC_BUF_THRESH2, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG12, RC_BUF_THRESH3, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG13, RC_BUF_THRESH4, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG13, RC_BUF_THRESH5, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG13, RC_BUF_THRESH6, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG13, RC_BUF_THRESH7, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG14, RC_BUF_THRESH8, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG14, RC_BUF_THRESH9, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG14, RC_BUF_THRESH10, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG14, RC_BUF_THRESH11, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RC_BUF_THRESH12, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RC_BUF_THRESH13, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RANGE_MIN_QP0, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RANGE_MAX_QP0, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RANGE_BPG_OFFSET0, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_MIN_QP1, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_MAX_QP1, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_BPG_OFFSET1, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_MIN_QP2, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_MAX_QP2, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_BPG_OFFSET2, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_MIN_QP3, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_MAX_QP3, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_BPG_OFFSET3, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_MIN_QP4, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_MAX_QP4, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_BPG_OFFSET4, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_MIN_QP5, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_MAX_QP5, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_BPG_OFFSET5, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_MIN_QP6, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_MAX_QP6, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_BPG_OFFSET6, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_MIN_QP7, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_MAX_QP7, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_BPG_OFFSET7, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_MIN_QP8, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_MAX_QP8, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_BPG_OFFSET8, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_MIN_QP9, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_MAX_QP9, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_BPG_OFFSET9, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_MIN_QP10, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_MAX_QP10, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_BPG_OFFSET10, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_MIN_QP11, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_MAX_QP11, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_BPG_OFFSET11, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_MIN_QP12, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_MAX_QP12, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_BPG_OFFSET12, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_MIN_QP13, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_MAX_QP13, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_BPG_OFFSET13, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_MIN_QP14, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_MAX_QP14, mask_sh), \
DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_BPG_OFFSET14, mask_sh), \
DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_DEFAULT_MEM_LOW_POWER_STATE, mask_sh), \
DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_MEM_PWR_FORCE, mask_sh), \
DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_MEM_PWR_DIS, mask_sh), \
DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_MEM_PWR_STATE, mask_sh), \
DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_NATIVE_422_MEM_PWR_FORCE, mask_sh), \
DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_NATIVE_422_MEM_PWR_DIS, mask_sh), \
DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_NATIVE_422_MEM_PWR_STATE, mask_sh), \
DSC_SF(DSCC0_DSCC_R_Y_SQUARED_ERROR_LOWER, DSCC_R_Y_SQUARED_ERROR_LOWER, mask_sh), \
DSC_SF(DSCC0_DSCC_R_Y_SQUARED_ERROR_UPPER, DSCC_R_Y_SQUARED_ERROR_UPPER, mask_sh), \
DSC_SF(DSCC0_DSCC_G_CB_SQUARED_ERROR_LOWER, DSCC_G_CB_SQUARED_ERROR_LOWER, mask_sh), \
DSC_SF(DSCC0_DSCC_G_CB_SQUARED_ERROR_UPPER, DSCC_G_CB_SQUARED_ERROR_UPPER, mask_sh), \
DSC_SF(DSCC0_DSCC_B_CR_SQUARED_ERROR_LOWER, DSCC_B_CR_SQUARED_ERROR_LOWER, mask_sh), \
DSC_SF(DSCC0_DSCC_B_CR_SQUARED_ERROR_UPPER, DSCC_B_CR_SQUARED_ERROR_UPPER, mask_sh), \
DSC_SF(DSCC0_DSCC_MAX_ABS_ERROR0, DSCC_R_Y_MAX_ABS_ERROR, mask_sh), \
DSC_SF(DSCC0_DSCC_MAX_ABS_ERROR0, DSCC_G_CB_MAX_ABS_ERROR, mask_sh), \
DSC_SF(DSCC0_DSCC_MAX_ABS_ERROR1, DSCC_B_CR_MAX_ABS_ERROR, mask_sh), \
DSC_SF(DSCC0_DSCC_RATE_BUFFER0_MAX_FULLNESS_LEVEL, DSCC_RATE_BUFFER0_MAX_FULLNESS_LEVEL, mask_sh), \
DSC_SF(DSCC0_DSCC_RATE_BUFFER1_MAX_FULLNESS_LEVEL, DSCC_RATE_BUFFER1_MAX_FULLNESS_LEVEL, mask_sh), \
DSC_SF(DSCC0_DSCC_RATE_BUFFER2_MAX_FULLNESS_LEVEL, DSCC_RATE_BUFFER2_MAX_FULLNESS_LEVEL, mask_sh), \
DSC_SF(DSCC0_DSCC_RATE_BUFFER3_MAX_FULLNESS_LEVEL, DSCC_RATE_BUFFER3_MAX_FULLNESS_LEVEL, mask_sh), \
DSC_SF(DSCC0_DSCC_RATE_CONTROL_BUFFER0_MAX_FULLNESS_LEVEL, DSCC_RATE_CONTROL_BUFFER0_MAX_FULLNESS_LEVEL, mask_sh), \
DSC_SF(DSCC0_DSCC_RATE_CONTROL_BUFFER1_MAX_FULLNESS_LEVEL, DSCC_RATE_CONTROL_BUFFER1_MAX_FULLNESS_LEVEL, mask_sh), \
DSC_SF(DSCC0_DSCC_RATE_CONTROL_BUFFER2_MAX_FULLNESS_LEVEL, DSCC_RATE_CONTROL_BUFFER2_MAX_FULLNESS_LEVEL, mask_sh), \
DSC_SF(DSCC0_DSCC_RATE_CONTROL_BUFFER3_MAX_FULLNESS_LEVEL, DSCC_RATE_CONTROL_BUFFER3_MAX_FULLNESS_LEVEL, mask_sh), \
DSC_SF(DSCCIF0_DSCCIF_CONFIG0, INPUT_INTERFACE_UNDERFLOW_RECOVERY_EN, mask_sh), \
DSC_SF(DSCCIF0_DSCCIF_CONFIG0, INPUT_INTERFACE_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \
DSC_SF(DSCCIF0_DSCCIF_CONFIG0, INPUT_INTERFACE_UNDERFLOW_OCCURRED_STATUS, mask_sh), \
DSC_SF(DSCCIF0_DSCCIF_CONFIG0, INPUT_PIXEL_FORMAT, mask_sh), \
DSC2_SF(DSCCIF0, DSCCIF_CONFIG0__BITS_PER_COMPONENT, mask_sh), \
DSC_SF(DSCCIF0_DSCCIF_CONFIG0, DOUBLE_BUFFER_REG_UPDATE_PENDING, mask_sh), \
DSC_SF(DSCCIF0_DSCCIF_CONFIG1, PIC_WIDTH, mask_sh), \
DSC_SF(DSCCIF0_DSCCIF_CONFIG1, PIC_HEIGHT, mask_sh), \
DSC_SF(DSCRM0_DSCRM_DSC_FORWARD_CONFIG, DSCRM_DSC_FORWARD_EN, mask_sh), \
DSC_SF(DSCRM0_DSCRM_DSC_FORWARD_CONFIG, DSCRM_DSC_OPP_PIPE_SOURCE, mask_sh)
struct dcn20_dsc_registers {
uint32_t DSC_TOP_CONTROL;
uint32_t DSC_DEBUG_CONTROL;

View File

@ -1565,6 +1565,7 @@ static void dcn20_update_dchubp_dpp(
/* Any updates are handled in dc interface, just need
* to apply existing for plane enable / opp change */
if (pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed
|| pipe_ctx->update_flags.bits.plane_changed
|| pipe_ctx->stream->update_flags.bits.gamut_remap
|| pipe_ctx->stream->update_flags.bits.out_csc) {
/* dpp/cm gamut remap*/

View File

@ -343,7 +343,6 @@ unsigned int dcn314_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsig
{
struct dc_stream_state *stream = pipe_ctx->stream;
unsigned int odm_combine_factor = 0;
struct dc *dc = pipe_ctx->stream->ctx->dc;
bool two_pix_per_container = false;
two_pix_per_container = optc2_is_two_pixels_per_containter(&stream->timing);
@ -364,7 +363,7 @@ unsigned int dcn314_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsig
} else {
*k1_div = PIXEL_RATE_DIV_BY_1;
*k2_div = PIXEL_RATE_DIV_BY_4;
if ((odm_combine_factor == 2) || dc->debug.enable_dp_dig_pixel_rate_div_policy)
if (odm_combine_factor == 2)
*k2_div = PIXEL_RATE_DIV_BY_2;
}
}
@ -384,21 +383,10 @@ void dcn314_set_pixels_per_cycle(struct pipe_ctx *pipe_ctx)
return;
odm_combine_factor = get_odm_config(pipe_ctx, NULL);
if (optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing) || odm_combine_factor > 1
|| dcn314_is_dp_dig_pixel_rate_div_policy(pipe_ctx))
if (optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing) || odm_combine_factor > 1)
pix_per_cycle = 2;
if (pipe_ctx->stream_res.stream_enc->funcs->set_input_mode)
pipe_ctx->stream_res.stream_enc->funcs->set_input_mode(pipe_ctx->stream_res.stream_enc,
pix_per_cycle);
}
bool dcn314_is_dp_dig_pixel_rate_div_policy(struct pipe_ctx *pipe_ctx)
{
struct dc *dc = pipe_ctx->stream->ctx->dc;
if (dc_is_dp_signal(pipe_ctx->stream->signal) && !is_dp_128b_132b_signal(pipe_ctx) &&
dc->debug.enable_dp_dig_pixel_rate_div_policy)
return true;
return false;
}

View File

@ -41,6 +41,4 @@ unsigned int dcn314_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsig
void dcn314_set_pixels_per_cycle(struct pipe_ctx *pipe_ctx);
bool dcn314_is_dp_dig_pixel_rate_div_policy(struct pipe_ctx *pipe_ctx);
#endif /* __DC_HWSS_DCN314_H__ */

View File

@ -146,7 +146,6 @@ static const struct hwseq_private_funcs dcn314_private_funcs = {
.setup_hpo_hw_control = dcn31_setup_hpo_hw_control,
.calculate_dccg_k1_k2_values = dcn314_calculate_dccg_k1_k2_values,
.set_pixels_per_cycle = dcn314_set_pixels_per_cycle,
.is_dp_dig_pixel_rate_div_policy = dcn314_is_dp_dig_pixel_rate_div_policy,
};
void dcn314_hw_sequencer_construct(struct dc *dc)

Some files were not shown because too many files have changed in this diff.