Merge tag 'asoc-fix-v6.8-rc2-2' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus

ASoC: Fixes for v6.8

This pull request adds Richard Fitzgerald's series with extensive fixes
for the CS35L56; he said:

    These patches fix various things that were undocumented, unknown or
    uncertain when the original driver code was written. And also a few
    things that were just bugs.
Takashi Iwai 2024-02-01 19:40:42 +01:00
commit d4ea2bd1bb
412 changed files with 4325 additions and 2095 deletions


@ -10,6 +10,7 @@ What: /sys/devices/platform/silicom-platform/power_cycle
Date: November 2023
KernelVersion: 6.7
Contact: Henry Shi <henrys@silicom-usa.com>
Description:
This file allows the user to power cycle the platform.
Default value is 0; when set to 1, it powers down
the platform, waits 5 seconds, then powers on the
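As a usage illustration for this ABI entry: a minimal userspace sketch (not part of the patch), assuming the sysfs path shown above and root privileges; the 5-second off period is handled by the platform itself, not by the caller.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/devices/platform/silicom-platform/power_cycle";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Writing "1" powers the platform down, waits 5 s, powers it back on. */
	if (write(fd, "1", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}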


@ -101,8 +101,8 @@ External References
email threads
-------------
* `Initial discussion on the New subsystem for acceleration devices <https://lkml.org/lkml/2022/7/31/83>`_ - Oded Gabbay (2022)
* `patch-set to add the new subsystem <https://lkml.org/lkml/2022/10/22/544>`_ - Oded Gabbay (2022)
* `Initial discussion on the New subsystem for acceleration devices <https://lore.kernel.org/lkml/CAFCwf11=9qpNAepL7NL+YAV_QO=Wv6pnWPhKHKAepK3fNn+2Dg@mail.gmail.com/>`_ - Oded Gabbay (2022)
* `patch-set to add the new subsystem <https://lore.kernel.org/lkml/20221022214622.18042-1-ogabbay@kernel.org/>`_ - Oded Gabbay (2022)
Conference talks
----------------


@ -218,8 +218,3 @@ bytes respectively. Such letter suffixes can also be entirely omitted:
.. include:: kernel-parameters.txt
:literal:
Todo
----
Add more DRM drivers.


@ -243,13 +243,9 @@ To reduce its OS jitter, do any of the following:
3. Do any of the following needed to avoid jitter that your
application cannot tolerate:
a. Build your kernel with CONFIG_SLUB=y rather than
CONFIG_SLAB=y, thus avoiding the slab allocator's periodic
use of each CPU's workqueues to run its cache_reap()
function.
b. Avoid using oprofile, thus avoiding OS jitter from
a. Avoid using oprofile, thus avoiding OS jitter from
wq_sync_buffer().
c. Limit your CPU frequency so that a CPU-frequency
b. Limit your CPU frequency so that a CPU-frequency
governor is not required, possibly enlisting the aid of
special heatsinks or other cooling technologies. If done
correctly, and if your CPU architecture permits, you should
@ -259,7 +255,7 @@ To reduce its OS jitter, do any of the following:
WARNING: Please check your CPU specifications to
make sure that this is safe on your particular system.
d. As of v3.18, Christoph Lameter's on-demand vmstat workers
c. As of v3.18, Christoph Lameter's on-demand vmstat workers
commit prevents OS jitter due to vmstat_update() on
CONFIG_SMP=y systems. Before v3.18, it is not possible
to entirely get rid of the OS jitter, but you can
@ -274,7 +270,7 @@ To reduce its OS jitter, do any of the following:
(based on an earlier one from Gilad Ben-Yossef) that
reduces or even eliminates vmstat overhead for some
workloads at https://lore.kernel.org/r/00000140e9dfd6bd-40db3d4f-c1be-434f-8132-7820f81bb586-000000@email.amazonses.com.
e. If running on high-end powerpc servers, build with
d. If running on high-end powerpc servers, build with
CONFIG_PPC_RTAS_DAEMON=n. This prevents the RTAS
daemon from running on each CPU every second or so.
(This will require editing Kconfig files and will defeat
@ -282,12 +278,12 @@ To reduce its OS jitter, do any of the following:
due to the rtas_event_scan() function.
WARNING: Please check your CPU specifications to
make sure that this is safe on your particular system.
f. If running on Cell Processor, build your kernel with
e. If running on Cell Processor, build your kernel with
CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
spu_gov_work().
WARNING: Please check your CPU specifications to
make sure that this is safe on your particular system.
g. If running on PowerMAC, build your kernel with
f. If running on PowerMAC, build your kernel with
CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
avoiding OS jitter from rackmeter_do_timer().


@ -85,7 +85,7 @@ allOf:
clocks:
minItems: 6
maxItems: 6
regs:
reg:
minItems: 2
maxItems: 2
@ -99,7 +99,7 @@ allOf:
clocks:
minItems: 4
maxItems: 4
regs:
reg:
minItems: 2
maxItems: 2
@ -116,7 +116,7 @@ allOf:
clocks:
minItems: 3
maxItems: 3
regs:
reg:
minItems: 1
maxItems: 1


@ -17,7 +17,7 @@ properties:
compatible:
items:
- enum:
- ti,k3-j721s2-wave521c
- ti,j721s2-wave521c
- const: cnm,wave521c
reg:
@ -53,7 +53,7 @@ additionalProperties: false
examples:
- |
vpu: video-codec@12345678 {
compatible = "ti,k3-j721s2-wave521c", "cnm,wave521c";
compatible = "ti,j721s2-wave521c", "cnm,wave521c";
reg = <0x12345678 0x1000>;
clocks = <&clks 42>;
interrupts = <42>;


@ -145,7 +145,9 @@ filesystem, an overlay filesystem needs to record in the upper filesystem
that files have been removed. This is done using whiteouts and opaque
directories (non-directories are always opaque).
A whiteout is created as a character device with 0/0 device number.
A whiteout is created as a character device with 0/0 device number or
as a zero-size regular file with the xattr "trusted.overlay.whiteout".
When a whiteout is found in the upper level of a merged directory, any
matching name in the lower level is ignored, and the whiteout itself
is also hidden.
@ -154,6 +156,13 @@ A directory is made opaque by setting the xattr "trusted.overlay.opaque"
to "y". Where the upper filesystem contains an opaque directory, any
directory in the lower filesystem with the same name is ignored.
An opaque directory should not contain any whiteouts, because they do not
serve any purpose. A merge directory containing regular files with the xattr
"trusted.overlay.whiteout" should be additionally marked by setting the xattr
"trusted.overlay.opaque" to "x" on the merge directory itself.
This is needed to avoid the overhead of checking the "trusted.overlay.whiteout"
on all entries during readdir in the common case.
readdir
-------
@ -534,8 +543,9 @@ A lower dir with a regular whiteout will always be handled by the overlayfs
mount, so to support storing an effective whiteout file in an overlayfs mount an
alternative form of whiteout is supported. This form is a regular, zero-size
file with the "overlay.whiteout" xattr set, inside a directory with the
"overlay.whiteouts" xattr set. Such whiteouts are never created by overlayfs,
but can be used by userspace tools (like containers) that generate lower layers.
"overlay.opaque" xattr set to "x" (see `whiteouts and opaque directories`_).
These alternative whiteouts are never created by overlayfs, but can be used by
userspace tools (like containers) that generate lower layers.
These alternative whiteouts can be escaped using the standard xattr escape
mechanism in order to properly nest to any depth.
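A hedged sketch of how a userspace layer-authoring tool (the kind of consumer mentioned above) might create such a whiteout: a zero-size regular file carrying the whiteout xattr, with the containing directory marked opaque with "x" so the common readdir path need not check every entry. Paths, mode bits, and the empty xattr value are illustrative assumptions; trusted.* xattrs require CAP_SYS_ADMIN.

#include <fcntl.h>
#include <stdio.h>
#include <sys/xattr.h>
#include <unistd.h>

static int make_whiteout(const char *dir, const char *name)
{
	char path[4096];
	int fd;

	snprintf(path, sizeof(path), "%s/%s", dir, name);
	fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0000); /* zero-size file */
	if (fd < 0)
		return -1;
	close(fd);
	if (setxattr(path, "trusted.overlay.whiteout", "", 0, 0) < 0)
		return -1;
	/* Mark the directory so readdir can skip per-entry xattr checks. */
	return setxattr(dir, "trusted.overlay.opaque", "x", 1, 0);
}

int main(void)
{
	if (make_whiteout("upper/dir", "deleted-name") < 0)
		perror("make_whiteout");
	return 0;
}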


@ -12,5 +12,7 @@
<script type="text/javascript"> <!--
var sbar = document.getElementsByClassName("sphinxsidebar")[0];
let currents = document.getElementsByClassName("current")
sbar.scrollTop = currents[currents.length - 1].offsetTop;
if (currents.length) {
sbar.scrollTop = currents[currents.length - 1].offsetTop;
}
--> </script>


@ -3168,10 +3168,10 @@ F: drivers/hwmon/asus-ec-sensors.c
ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
M: Corentin Chary <corentin.chary@gmail.com>
L: acpi4asus-user@lists.sourceforge.net
M: Luke D. Jones <luke@ljones.dev>
L: platform-driver-x86@vger.kernel.org
S: Maintained
W: http://acpi4asus.sf.net
W: https://asus-linux.org/
F: drivers/platform/x86/asus*.c
F: drivers/platform/x86/eeepc*.c
@ -4547,7 +4547,7 @@ F: drivers/net/ieee802154/ca8210.c
CACHEFILES: FS-CACHE BACKEND FOR CACHING ON MOUNTED FILESYSTEMS
M: David Howells <dhowells@redhat.com>
L: linux-cachefs@redhat.com (moderated for non-subscribers)
L: netfs@lists.linux.dev
S: Supported
F: Documentation/filesystems/caching/cachefiles.rst
F: fs/cachefiles/
@ -5958,7 +5958,6 @@ S: Maintained
F: drivers/platform/x86/dell/dell-wmi-descriptor.c
DELL WMI HARDWARE PRIVACY SUPPORT
M: Perry Yuan <Perry.Yuan@dell.com>
L: Dell.Client.Kernel@dell.com
L: platform-driver-x86@vger.kernel.org
S: Maintained
@ -7955,12 +7954,13 @@ L: rust-for-linux@vger.kernel.org
S: Maintained
F: rust/kernel/net/phy.rs
EXEC & BINFMT API
EXEC & BINFMT API, ELF
R: Eric Biederman <ebiederm@xmission.com>
R: Kees Cook <keescook@chromium.org>
L: linux-mm@kvack.org
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/execve
F: Documentation/userspace-api/ELF.rst
F: fs/*binfmt_*.c
F: fs/exec.c
F: include/linux/binfmts.h
@ -8223,7 +8223,8 @@ F: include/linux/iomap.h
FILESYSTEMS [NETFS LIBRARY]
M: David Howells <dhowells@redhat.com>
L: linux-cachefs@redhat.com (moderated for non-subscribers)
R: Jeff Layton <jlayton@kernel.org>
L: netfs@lists.linux.dev
L: linux-fsdevel@vger.kernel.org
S: Supported
F: Documentation/filesystems/caching/
@ -20549,6 +20550,7 @@ F: Documentation/translations/sp_SP/
SPARC + UltraSPARC (sparc/sparc64)
M: "David S. Miller" <davem@davemloft.net>
M: Andreas Larsson <andreas@gaisler.com>
L: sparclinux@vger.kernel.org
S: Maintained
Q: http://patchwork.ozlabs.org/project/sparclinux/list/


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 8
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*
@ -986,6 +986,10 @@ NOSTDINC_FLAGS += -nostdinc
# perform bounds checking.
KBUILD_CFLAGS += $(call cc-option, -fstrict-flex-arrays=3)
#Currently, disable -Wstringop-overflow for GCC 11, globally.
KBUILD_CFLAGS-$(CONFIG_CC_NO_STRINGOP_OVERFLOW) += $(call cc-option, -Wno-stringop-overflow)
KBUILD_CFLAGS-$(CONFIG_CC_STRINGOP_OVERFLOW) += $(call cc-option, -Wstringop-overflow)
# disable invalid "can't wrap" optimizations for signed / pointers
KBUILD_CFLAGS += -fno-strict-overflow
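As background for the -fno-strict-overflow line in this hunk, a standalone C illustration (not kernel code) of the "can't wrap" optimizations it disables: signed overflow is undefined in standard C, so without the flag a compiler is allowed to fold the check below to false.

#include <limits.h>
#include <stdio.h>

/* A naive post-overflow test: well-defined wrapping with
 * -fno-strict-overflow, but UB in standard C, so it may be
 * optimized away entirely under strict-overflow assumptions. */
static int wraps_after_increment(int a)
{
	return a + 1 < a;
}

int main(void)
{
	printf("%d\n", wraps_after_increment(INT_MAX));
	return 0;
}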


@ -45,8 +45,8 @@
num-chipselects = <1>;
cs-gpios = <&gpio0 ASPEED_GPIO(Z, 0) GPIO_ACTIVE_LOW>;
tpmdev@0 {
compatible = "tcg,tpm_tis-spi";
tpm@0 {
compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
spi-max-frequency = <33000000>;
reg = <0>;
};


@ -80,8 +80,8 @@
gpio-miso = <&gpio ASPEED_GPIO(R, 5) GPIO_ACTIVE_HIGH>;
num-chipselects = <1>;
tpmdev@0 {
compatible = "tcg,tpm_tis-spi";
tpm@0 {
compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
spi-max-frequency = <33000000>;
reg = <0>;
};


@ -456,7 +456,7 @@
status = "okay";
tpm: tpm@2e {
compatible = "tcg,tpm-tis-i2c";
compatible = "nuvoton,npct75x", "tcg,tpm-tis-i2c";
reg = <0x2e>;
};
};


@ -35,8 +35,8 @@
gpio-mosi = <&gpio0 ASPEED_GPIO(X, 4) GPIO_ACTIVE_HIGH>;
gpio-miso = <&gpio0 ASPEED_GPIO(X, 5) GPIO_ACTIVE_HIGH>;
tpmdev@0 {
compatible = "tcg,tpm_tis-spi";
tpm@0 {
compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
spi-max-frequency = <33000000>;
reg = <0>;
};


@ -116,7 +116,7 @@
tpm_tis: tpm@1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_tpm>;
compatible = "tcg,tpm_tis-spi";
compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
reg = <1>;
spi-max-frequency = <20000000>;
interrupt-parent = <&gpio5>;


@ -130,7 +130,7 @@
* TCG specification - Section 6.4.1 Clocking:
* TPM shall support a SPI clock frequency range of 10-24 MHz.
*/
st33htph: tpm-tis@0 {
st33htph: tpm@0 {
compatible = "st,st33htpm-spi", "tcg,tpm_tis-spi";
reg = <0>;
spi-max-frequency = <24000000>;


@ -434,6 +434,7 @@
};
&fimd {
samsung,invert-vclk;
status = "okay";
};


@ -217,7 +217,7 @@
pinctrl-names = "default";
pinctrl-0 = <&spi1_pins>;
tpm_spi_tis@0 {
tpm@0 {
compatible = "tcg,tpm_tis-spi";
reg = <0>;
spi-max-frequency = <500000>;


@ -289,7 +289,7 @@
#clock-cells = <1>;
clocks = <&cmu_top CLK_DOUT_CMU_MISC_BUS>,
<&cmu_top CLK_DOUT_CMU_MISC_SSS>;
clock-names = "dout_cmu_misc_bus", "dout_cmu_misc_sss";
clock-names = "bus", "sss";
};
watchdog_cl0: watchdog@10060000 {


@ -120,7 +120,7 @@
};
tpm: tpm@1 {
compatible = "tcg,tpm_tis-spi";
compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&gpio2>;
pinctrl-names = "default";


@ -89,7 +89,7 @@
status = "okay";
tpm@1 {
compatible = "tcg,tpm_tis-spi";
compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
reg = <0x1>;
spi-max-frequency = <36000000>;
};


@ -109,7 +109,7 @@
status = "okay";
tpm@1 {
compatible = "tcg,tpm_tis-spi";
compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
reg = <0x1>;
spi-max-frequency = <36000000>;
};


@ -234,7 +234,7 @@
status = "okay";
tpm: tpm@0 {
compatible = "infineon,slb9670";
compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
reg = <0>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_tpm>;


@ -103,7 +103,7 @@
status = "okay";
tpm@1 {
compatible = "tcg,tpm_tis-spi";
compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
reg = <0x1>;
spi-max-frequency = <36000000>;
};


@ -115,7 +115,7 @@
status = "okay";
tpm@1 {
compatible = "tcg,tpm_tis-spi";
compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
reg = <0x1>;
spi-max-frequency = <36000000>;
};


@ -196,7 +196,7 @@
status = "okay";
tpm@0 {
compatible = "tcg,tpm_tis-spi";
compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
reg = <0x0>;
spi-max-frequency = <36000000>;
};


@ -65,7 +65,7 @@
status = "okay";
tpm@0 {
compatible = "infineon,slb9670";
compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
reg = <0>;
spi-max-frequency = <43000000>;
};


@ -888,7 +888,7 @@
status = "okay";
cs-gpios = <&pio 86 GPIO_ACTIVE_LOW>;
cr50@0 {
tpm@0 {
compatible = "google,cr50";
reg = <0>;
spi-max-frequency = <1000000>;


@ -1402,7 +1402,7 @@
pinctrl-names = "default";
pinctrl-0 = <&spi5_pins>;
cr50@0 {
tpm@0 {
compatible = "google,cr50";
reg = <0>;
interrupts-extended = <&pio 171 IRQ_TYPE_EDGE_RISING>;


@ -70,7 +70,7 @@
&spi0 {
status = "okay";
cr50@0 {
tpm@0 {
compatible = "google,cr50";
reg = <0>;
interrupt-parent = <&gpio0>;


@ -706,7 +706,7 @@ camera: &i2c7 {
&spi2 {
status = "okay";
cr50@0 {
tpm@0 {
compatible = "google,cr50";
reg = <0>;
interrupt-parent = <&gpio1>;


@ -60,7 +60,7 @@ int kvm_own_lsx(struct kvm_vcpu *vcpu);
void kvm_save_lsx(struct loongarch_fpu *fpu);
void kvm_restore_lsx(struct loongarch_fpu *fpu);
#else
static inline int kvm_own_lsx(struct kvm_vcpu *vcpu) { }
static inline int kvm_own_lsx(struct kvm_vcpu *vcpu) { return -EINVAL; }
static inline void kvm_save_lsx(struct loongarch_fpu *fpu) { }
static inline void kvm_restore_lsx(struct loongarch_fpu *fpu) { }
#endif
@ -70,7 +70,7 @@ int kvm_own_lasx(struct kvm_vcpu *vcpu);
void kvm_save_lasx(struct loongarch_fpu *fpu);
void kvm_restore_lasx(struct loongarch_fpu *fpu);
#else
static inline int kvm_own_lasx(struct kvm_vcpu *vcpu) { }
static inline int kvm_own_lasx(struct kvm_vcpu *vcpu) { return -EINVAL; }
static inline void kvm_save_lasx(struct loongarch_fpu *fpu) { }
static inline void kvm_restore_lasx(struct loongarch_fpu *fpu) { }
#endif
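The fix above matters because these stubs are declared to return int and callers branch on the result; with the old empty body, the value read was undefined. A hedged sketch of the caller shape (names and error handling assumed, not the real KVM code):

#include <errno.h>
#include <stdio.h>

struct kvm_vcpu;

/* Feature compiled out: owning the LSX unit must fail cleanly. */
static inline int kvm_own_lsx(struct kvm_vcpu *vcpu) { return -EINVAL; }

static int handle_lsx_exception(struct kvm_vcpu *vcpu)
{
	/* With the old "{ }" stub, the value tested here was garbage. */
	if (kvm_own_lsx(vcpu))
		return -EINVAL; /* e.g. report a fault instead of continuing */
	return 0;
}

int main(void)
{
	printf("%d\n", handle_lsx_exception(NULL));
	return 0;
}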


@ -509,7 +509,6 @@ asmlinkage void start_secondary(void)
sync_counter();
cpu = raw_smp_processor_id();
set_my_cpu_offset(per_cpu_offset(cpu));
rcutree_report_cpu_starting(cpu);
cpu_probe();
constant_clockevent_init();


@ -675,7 +675,7 @@ static bool fault_supports_huge_mapping(struct kvm_memory_slot *memslot,
*
* There are several ways to safely use this helper:
*
* - Check mmu_invalidate_retry_hva() after grabbing the mapping level, before
* - Check mmu_invalidate_retry_gfn() after grabbing the mapping level, before
* consuming it. In this case, mmu_lock doesn't need to be held during the
* lookup, but it does need to be held while checking the MMU notifier.
*
@ -855,7 +855,7 @@ retry:
/* Check if an invalidation has taken place since we got pfn */
spin_lock(&kvm->mmu_lock);
if (mmu_invalidate_retry_hva(kvm, mmu_seq, hva)) {
if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
/*
* This can happen when mappings are changed asynchronously, but
* also synchronously if a COW is triggered by
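The comment and call-site above follow KVM's invalidation-sequence retry pattern: do the slow pfn lookup without mmu_lock, then take the lock and recheck the sequence before consuming the result. A hedged, self-contained sketch of that pattern (all names are toy stand-ins, locking indicated in comments only):

#include <stdbool.h>
#include <stdio.h>

struct toy_kvm { unsigned long mmu_invalidate_seq; };

/* Returns true if an invalidation touching this gfn ran since 'seq'. */
static bool toy_invalidate_retry_gfn(struct toy_kvm *kvm, unsigned long seq,
				     unsigned long gfn)
{
	(void)gfn; /* a real check also tests the invalidation range */
	return kvm->mmu_invalidate_seq != seq;
}

static unsigned long toy_map_gfn(struct toy_kvm *kvm, unsigned long gfn)
{
	unsigned long seq, pfn;

retry:
	seq = kvm->mmu_invalidate_seq;
	pfn = gfn + 0x1000;		/* slow lookup, mmu_lock NOT held */
	/* spin_lock(&kvm->mmu_lock); */
	if (toy_invalidate_retry_gfn(kvm, seq, gfn)) {
		/* spin_unlock(&kvm->mmu_lock); */
		goto retry;		/* mapping changed underneath us */
	}
	/* ...install pfn while still holding mmu_lock, then unlock... */
	return pfn;
}

int main(void)
{
	struct toy_kvm kvm = { .mmu_invalidate_seq = 0 };

	printf("pfn=%#lx\n", toy_map_gfn(&kvm, 42));
	return 0;
}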


@ -284,12 +284,16 @@ static void setup_tlb_handler(int cpu)
set_handler(EXCCODE_TLBNR * VECSIZE, handle_tlb_protect, VECSIZE);
set_handler(EXCCODE_TLBNX * VECSIZE, handle_tlb_protect, VECSIZE);
set_handler(EXCCODE_TLBPE * VECSIZE, handle_tlb_protect, VECSIZE);
}
} else {
int vec_sz __maybe_unused;
void *addr __maybe_unused;
struct page *page __maybe_unused;
/* Avoid lockdep warning */
rcutree_report_cpu_starting(cpu);
#ifdef CONFIG_NUMA
else {
void *addr;
struct page *page;
const int vec_sz = sizeof(exception_handlers);
vec_sz = sizeof(exception_handlers);
if (pcpu_handlers[cpu])
return;
@ -305,8 +309,8 @@ static void setup_tlb_handler(int cpu)
csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_EENTRY);
csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_MERRENTRY);
csr_write64(pcpu_handlers[cpu] + 80*VECSIZE, LOONGARCH_CSR_TLBRENTRY);
}
#endif
}
}
void tlb_init(int cpu)


@ -40,6 +40,7 @@
#include <linux/string.h>
#include <asm/bootinfo.h>
#include <prom.h>
int prom_argc;
char **prom_argv;


@ -30,13 +30,11 @@
#include <linux/mm.h>
#include <linux/dma-map-ops.h> /* for dma_default_coherent */
#include <asm/bootinfo.h>
#include <asm/mipsregs.h>
#include <au1000.h>
extern void __init board_setup(void);
extern void __init alchemy_set_lpj(void);
static bool alchemy_dma_coherent(void)
{
switch (alchemy_get_cputype()) {


@ -702,7 +702,7 @@ static struct ssb_sprom bcm63xx_sprom = {
.boardflags_hi = 0x0000,
};
int bcm63xx_get_fallback_sprom(struct ssb_bus *bus, struct ssb_sprom *out)
static int bcm63xx_get_fallback_sprom(struct ssb_bus *bus, struct ssb_sprom *out)
{
if (bus->bustype == SSB_BUSTYPE_PCI) {
memcpy(out, &bcm63xx_sprom, sizeof(struct ssb_sprom));


@ -26,7 +26,7 @@ static struct platform_device bcm63xx_rng_device = {
.resource = rng_resources,
};
int __init bcm63xx_rng_register(void)
static int __init bcm63xx_rng_register(void)
{
if (!BCMCPU_IS_6368())
return -ENODEV;


@ -10,6 +10,7 @@
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <bcm63xx_cpu.h>
#include <bcm63xx_dev_uart.h>
static struct resource uart0_resources[] = {
{


@ -34,7 +34,7 @@ static struct platform_device bcm63xx_wdt_device = {
},
};
int __init bcm63xx_wdt_register(void)
static int __init bcm63xx_wdt_register(void)
{
wdt_resources[0].start = bcm63xx_regset_address(RSET_WDT);
wdt_resources[0].end = wdt_resources[0].start;


@ -72,7 +72,7 @@ static inline int enable_irq_for_cpu(int cpu, struct irq_data *d,
*/
#define BUILD_IPIC_INTERNAL(width) \
void __dispatch_internal_##width(int cpu) \
static void __dispatch_internal_##width(int cpu) \
{ \
u32 pending[width / 32]; \
unsigned int src, tgt; \


@ -159,7 +159,7 @@ void __init plat_mem_setup(void)
board_setup();
}
int __init bcm63xx_register_devices(void)
static int __init bcm63xx_register_devices(void)
{
/* register gpiochip */
bcm63xx_gpio_init();


@ -178,7 +178,7 @@ int bcm63xx_timer_set(int id, int monotonic, unsigned int countdown_us)
EXPORT_SYMBOL(bcm63xx_timer_set);
int bcm63xx_timer_init(void)
static int bcm63xx_timer_init(void)
{
int ret, irq;
u32 reg;


@ -23,9 +23,6 @@
#include <cobalt.h>
extern void cobalt_machine_restart(char *command);
extern void cobalt_machine_halt(void);
const char *get_system_type(void)
{
switch (cobalt_board_id) {


@ -37,7 +37,7 @@ static unsigned int nr_prom_mem __initdata;
*/
#define ARC_PAGE_SHIFT 12
struct linux_mdesc * __init ArcGetMemoryDescriptor(struct linux_mdesc *Current)
static struct linux_mdesc * __init ArcGetMemoryDescriptor(struct linux_mdesc *Current)
{
return (struct linux_mdesc *) ARC_CALL1(get_mdesc, Current);
}


@ -597,6 +597,9 @@
#include <asm/cpu.h>
void alchemy_set_lpj(void);
void board_setup(void);
/* helpers to access the SYS_* registers */
static inline unsigned long alchemy_rdsys(int regofs)
{


@ -19,4 +19,7 @@ extern int cobalt_board_id;
#define COBALT_BRD_ID_QUBE2 0x5
#define COBALT_BRD_ID_RAQ2 0x6
void cobalt_machine_halt(void);
void cobalt_machine_restart(char *command);
#endif /* __ASM_COBALT_H */


@ -11,6 +11,7 @@
#include <asm/cpu-features.h>
#include <asm/cpu-info.h>
#include <asm/fpu.h>
#ifdef CONFIG_MIPS_FP_SUPPORT
@ -309,6 +310,11 @@ void mips_set_personality_nan(struct arch_elf_state *state)
struct cpuinfo_mips *c = &boot_cpu_data;
struct task_struct *t = current;
/* Do this early so t->thread.fpu.fcr31 won't be clobbered in case
* we are preempted before the lose_fpu(0) in start_thread.
*/
lose_fpu(0);
t->thread.fpu.fcr31 = c->fpu_csr31;
switch (state->nan_2008) {
case 0:


@ -2007,7 +2007,13 @@ unsigned long vi_handlers[64];
void reserve_exception_space(phys_addr_t addr, unsigned long size)
{
memblock_reserve(addr, size);
/*
* reserving exception space on CPUs other than CPU0
* is too late, since memblock is unavailable by the
* time the APs come up
*/
if (smp_processor_id() == 0)
memblock_reserve(addr, size);
}
void __init *set_except_vector(int n, void *addr)


@ -108,10 +108,9 @@ void __init prom_init(void)
prom_init_cmdline();
#if defined(CONFIG_MIPS_MT_SMP)
if (cpu_has_mipsmt) {
lantiq_smp_ops = vsmp_smp_ops;
lantiq_smp_ops = vsmp_smp_ops;
if (cpu_has_mipsmt)
lantiq_smp_ops.init_secondary = lantiq_init_secondary;
register_smp_ops(&lantiq_smp_ops);
}
register_smp_ops(&lantiq_smp_ops);
#endif
}


@ -103,6 +103,9 @@ void __init szmem(unsigned int node)
if (loongson_sysconf.vgabios_addr)
memblock_reserve(virt_to_phys((void *)loongson_sysconf.vgabios_addr),
SZ_256K);
/* set nid for reserved memory */
memblock_set_node((u64)node << 44, (u64)(node + 1) << 44,
&memblock.reserved, node);
}
#ifndef CONFIG_NUMA


@ -132,6 +132,8 @@ static void __init node_mem_init(unsigned int node)
/* Reserve pfn range 0~node[0]->node_start_pfn */
memblock_reserve(0, PAGE_SIZE * start_pfn);
/* set nid for reserved memory on node 0 */
memblock_set_node(0, 1ULL << 44, &memblock.reserved, 0);
}
}


@ -5,7 +5,7 @@
obj-y := ip27-berr.o ip27-irq.o ip27-init.o ip27-klconfig.o \
ip27-klnuma.o ip27-memory.o ip27-nmi.o ip27-reset.o ip27-timer.o \
ip27-hubio.o ip27-xtalk.o
ip27-xtalk.o
obj-$(CONFIG_EARLY_PRINTK) += ip27-console.o
obj-$(CONFIG_SMP) += ip27-smp.o


@ -22,6 +22,8 @@
#include <asm/traps.h>
#include <linux/uaccess.h>
#include "ip27-common.h"
static void dump_hub_information(unsigned long errst0, unsigned long errst1)
{
static char *err_type[2][8] = {
@ -57,7 +59,7 @@ static void dump_hub_information(unsigned long errst0, unsigned long errst1)
[st0.pi_stat0_fmt.s0_err_type] ? : "invalid");
}
int ip27_be_handler(struct pt_regs *regs, int is_fixup)
static int ip27_be_handler(struct pt_regs *regs, int is_fixup)
{
unsigned long errst0, errst1;
int data = regs->cp0_cause & 4;


@ -10,6 +10,7 @@ extern void hub_rt_clock_event_init(void);
extern void hub_rtc_init(nasid_t nasid);
extern void install_cpu_nmi_handler(int slice);
extern void install_ipi(void);
extern void ip27_be_init(void);
extern void ip27_reboot_setup(void);
extern const struct plat_smp_ops ip27_smp_ops;
extern unsigned long node_getfirstfree(nasid_t nasid);
@ -17,4 +18,5 @@ extern void per_cpu_init(void);
extern void replicate_kernel_text(void);
extern void setup_replication_mask(void);
#endif /* __IP27_COMMON_H */


@ -1,185 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc.
* Copyright (C) 2004 Christoph Hellwig.
*
* Support functions for the HUB ASIC - mostly PIO mapping related.
*/
#include <linux/bitops.h>
#include <linux/string.h>
#include <linux/mmzone.h>
#include <asm/sn/addrs.h>
#include <asm/sn/arch.h>
#include <asm/sn/agent.h>
#include <asm/sn/io.h>
#include <asm/xtalk/xtalk.h>
static int force_fire_and_forget = 1;
/**
* hub_pio_map - establish a HUB PIO mapping
*
* @nasid: nasid to perform PIO mapping on
* @widget: widget ID to perform PIO mapping for
* @xtalk_addr: xtalk_address that needs to be mapped
* @size: size of the PIO mapping
*
**/
unsigned long hub_pio_map(nasid_t nasid, xwidgetnum_t widget,
unsigned long xtalk_addr, size_t size)
{
unsigned i;
/* use small-window mapping if possible */
if ((xtalk_addr % SWIN_SIZE) + size <= SWIN_SIZE)
return NODE_SWIN_BASE(nasid, widget) + (xtalk_addr % SWIN_SIZE);
if ((xtalk_addr % BWIN_SIZE) + size > BWIN_SIZE) {
printk(KERN_WARNING "PIO mapping at hub %d widget %d addr 0x%lx"
" too big (%ld)\n",
nasid, widget, xtalk_addr, size);
return 0;
}
xtalk_addr &= ~(BWIN_SIZE-1);
for (i = 0; i < HUB_NUM_BIG_WINDOW; i++) {
if (test_and_set_bit(i, hub_data(nasid)->h_bigwin_used))
continue;
/*
* The code below does a PIO write to setup an ITTE entry.
*
* We need to prevent other CPUs from seeing our updated
* memory shadow of the ITTE (in the piomap) until the ITTE
* entry is actually set up; otherwise, another CPU might
* attempt a PIO prematurely.
*
* Also, the only way we can know that an entry has been
* received by the hub and can be used by future PIO reads/
* writes is by reading back the ITTE entry after writing it.
*
* For these two reasons, we PIO read back the ITTE entry
* after we write it.
*/
IIO_ITTE_PUT(nasid, i, HUB_PIO_MAP_TO_MEM, widget, xtalk_addr);
__raw_readq(IIO_ITTE_GET(nasid, i));
return NODE_BWIN_BASE(nasid, widget) + (xtalk_addr % BWIN_SIZE);
}
printk(KERN_WARNING "unable to establish PIO mapping for at"
" hub %d widget %d addr 0x%lx\n",
nasid, widget, xtalk_addr);
return 0;
}
/*
* hub_setup_prb(nasid, prbnum, credits, conveyor)
*
* Put a PRB into fire-and-forget mode if conveyor isn't set. Otherwise,
* put it into conveyor belt mode with the specified number of credits.
*/
static void hub_setup_prb(nasid_t nasid, int prbnum, int credits)
{
union iprb_u prb;
int prb_offset;
/*
* Get the current register value.
*/
prb_offset = IIO_IOPRB(prbnum);
prb.iprb_regval = REMOTE_HUB_L(nasid, prb_offset);
/*
* Clear out some fields.
*/
prb.iprb_ovflow = 1;
prb.iprb_bnakctr = 0;
prb.iprb_anakctr = 0;
/*
* Enable or disable fire-and-forget mode.
*/
prb.iprb_ff = force_fire_and_forget ? 1 : 0;
/*
* Set the appropriate number of PIO credits for the widget.
*/
prb.iprb_xtalkctr = credits;
/*
* Store the new value to the register.
*/
REMOTE_HUB_S(nasid, prb_offset, prb.iprb_regval);
}
/**
* hub_set_piomode - set pio mode for a given hub
*
* @nasid: physical node ID for the hub in question
*
* Put the hub into either "PIO conveyor belt" mode or "fire-and-forget" mode.
* To do this, we have to make absolutely sure that no PIOs are in progress
* so we turn off access to all widgets for the duration of the function.
*
* XXX - This code should really check what kind of widget we're talking
* to. Bridges can only handle three requests, but XG will do more.
* How many can crossbow handle to widget 0? We're assuming 1.
*
* XXX - There is a bug in the crossbow that link reset PIOs do not
* return write responses. The easiest solution to this problem is to
* leave widget 0 (xbow) in fire-and-forget mode at all times. This
* only affects pio's to xbow registers, which should be rare.
**/
static void hub_set_piomode(nasid_t nasid)
{
u64 ii_iowa;
union hubii_wcr_u ii_wcr;
unsigned i;
ii_iowa = REMOTE_HUB_L(nasid, IIO_OUTWIDGET_ACCESS);
REMOTE_HUB_S(nasid, IIO_OUTWIDGET_ACCESS, 0);
ii_wcr.wcr_reg_value = REMOTE_HUB_L(nasid, IIO_WCR);
if (ii_wcr.iwcr_dir_con) {
/*
* Assume a bridge here.
*/
hub_setup_prb(nasid, 0, 3);
} else {
/*
* Assume a crossbow here.
*/
hub_setup_prb(nasid, 0, 1);
}
/*
* XXX - Here's where we should take the widget type into
* account when assigning credits.
*/
for (i = HUB_WIDGET_ID_MIN; i <= HUB_WIDGET_ID_MAX; i++)
hub_setup_prb(nasid, i, 3);
REMOTE_HUB_S(nasid, IIO_OUTWIDGET_ACCESS, ii_iowa);
}
/*
* hub_pio_init - PIO-related hub initialization
*
* @hub: hubinfo structure for our hub
*/
void hub_pio_init(nasid_t nasid)
{
unsigned i;
/* initialize big window piomaps for this hub */
bitmap_zero(hub_data(nasid)->h_bigwin_used, HUB_NUM_BIG_WINDOW);
for (i = 0; i < HUB_NUM_BIG_WINDOW; i++)
IIO_ITTE_DISABLE(nasid, i);
hub_set_piomode(nasid);
}


@ -23,6 +23,8 @@
#include <asm/sn/intr.h>
#include <asm/sn/irq_alloc.h>
#include "ip27-common.h"
struct hub_irq_data {
u64 *irq_mask[2];
cpuid_t cpu;


@ -23,6 +23,7 @@
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
#include <asm/sgialib.h>
#include <asm/sn/arch.h>
#include <asm/sn/agent.h>


@ -11,6 +11,8 @@
#include <asm/sn/arch.h>
#include <asm/sn/agent.h>
#include "ip27-common.h"
#if 0
#define NODE_NUM_CPUS(n) CNODE_NUM_CPUS(n)
#else
@ -23,16 +25,7 @@
typedef unsigned long machreg_t;
static arch_spinlock_t nmi_lock = __ARCH_SPIN_LOCK_UNLOCKED;
/*
* Let's see what else we need to do here. Set up sp, gp?
*/
void nmi_dump(void)
{
void cont_nmi_dump(void);
cont_nmi_dump();
}
static void nmi_dump(void);
void install_cpu_nmi_handler(int slice)
{
@ -53,7 +46,7 @@ void install_cpu_nmi_handler(int slice)
* into the eframe format for the node under consideration.
*/
void nmi_cpu_eframe_save(nasid_t nasid, int slice)
static void nmi_cpu_eframe_save(nasid_t nasid, int slice)
{
struct reg_struct *nr;
int i;
@ -129,7 +122,7 @@ void nmi_cpu_eframe_save(nasid_t nasid, int slice)
pr_emerg("\n");
}
void nmi_dump_hub_irq(nasid_t nasid, int slice)
static void nmi_dump_hub_irq(nasid_t nasid, int slice)
{
u64 mask0, mask1, pend0, pend1;
@ -153,7 +146,7 @@ void nmi_dump_hub_irq(nasid_t nasid, int slice)
* Copy the cpu registers which have been saved in the IP27prom format
* into the eframe format for the node under consideration.
*/
void nmi_node_eframe_save(nasid_t nasid)
static void nmi_node_eframe_save(nasid_t nasid)
{
int slice;
@ -170,8 +163,7 @@ void nmi_node_eframe_save(nasid_t nasid)
/*
* Save the nmi cpu registers for all cpus in the system.
*/
void
nmi_eframes_save(void)
static void nmi_eframes_save(void)
{
nasid_t nasid;
@ -179,8 +171,7 @@ nmi_eframes_save(void)
nmi_node_eframe_save(nasid);
}
void
cont_nmi_dump(void)
static void nmi_dump(void)
{
#ifndef REAL_NMI_SIGNAL
static atomic_t nmied_cpus = ATOMIC_INIT(0);


@ -3,6 +3,7 @@
#include <linux/io.h>
#include <asm/sn/ioc3.h>
#include <asm/setup.h>
static inline struct ioc3_uartregs *console_uart(void)
{


@ -14,6 +14,7 @@
#include <linux/percpu.h>
#include <linux/memblock.h>
#include <asm/bootinfo.h>
#include <asm/smp-ops.h>
#include <asm/sgialib.h>
#include <asm/time.h>


@ -18,6 +18,8 @@
#include <asm/ip32/crime.h>
#include <asm/ip32/mace.h>
#include "ip32-common.h"
struct sgi_crime __iomem *crime;
struct sgi_mace __iomem *mace;
@ -39,7 +41,7 @@ void __init crime_init(void)
id, rev, field, (unsigned long) CRIME_BASE);
}
irqreturn_t crime_memerr_intr(unsigned int irq, void *dev_id)
irqreturn_t crime_memerr_intr(int irq, void *dev_id)
{
unsigned long stat, addr;
int fatal = 0;
@ -90,7 +92,7 @@ irqreturn_t crime_memerr_intr(unsigned int irq, void *dev_id)
return IRQ_HANDLED;
}
irqreturn_t crime_cpuerr_intr(unsigned int irq, void *dev_id)
irqreturn_t crime_cpuerr_intr(int irq, void *dev_id)
{
unsigned long stat = crime->cpu_error_stat & CRIME_CPU_ERROR_MASK;
unsigned long addr = crime->cpu_error_addr & CRIME_CPU_ERROR_ADDR_MASK;


@ -18,6 +18,8 @@
#include <asm/ptrace.h>
#include <asm/tlbdebug.h>
#include "ip32-common.h"
static int ip32_be_handler(struct pt_regs *regs, int is_fixup)
{
int data = regs->cp0_cause & 4;


@ -0,0 +1,15 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __IP32_COMMON_H
#define __IP32_COMMON_H
#include <linux/init.h>
#include <linux/interrupt.h>
void __init crime_init(void);
irqreturn_t crime_memerr_intr(int irq, void *dev_id);
irqreturn_t crime_cpuerr_intr(int irq, void *dev_id);
void __init ip32_be_init(void);
void ip32_prepare_poweroff(void);
#endif /* __IP32_COMMON_H */


@ -28,6 +28,8 @@
#include <asm/ip32/mace.h>
#include <asm/ip32/ip32_ints.h>
#include "ip32-common.h"
/* issue a PIO read to make sure no PIO writes are pending */
static inline void flush_crime_bus(void)
{
@ -107,10 +109,6 @@ static inline void flush_mace_bus(void)
* is quite different anyway.
*/
/* Some initial interrupts to set up */
extern irqreturn_t crime_memerr_intr(int irq, void *dev_id);
extern irqreturn_t crime_cpuerr_intr(int irq, void *dev_id);
/*
* This is for pure CRIME interrupts - ie not MACE. The advantage?
* We get to split the register in half and do faster lookups.


@ -15,6 +15,7 @@
#include <asm/ip32/crime.h>
#include <asm/bootinfo.h>
#include <asm/page.h>
#include <asm/sgialib.h>
extern void crime_init(void);


@ -29,6 +29,8 @@
#include <asm/ip32/crime.h>
#include <asm/ip32/ip32_ints.h>
#include "ip32-common.h"
#define POWERDOWN_TIMEOUT 120
/*
* Blink frequency during reboot grace period and when panicked.


@ -26,8 +26,7 @@
#include <asm/ip32/mace.h>
#include <asm/ip32/ip32_ints.h>
extern void ip32_be_init(void);
extern void crime_init(void);
#include "ip32-common.h"
#ifdef CONFIG_SGI_O2MACE_ETH
/*


@ -93,144 +93,160 @@
<&cpu63_intc 3>;
};
clint_mtimer0: timer@70ac000000 {
clint_mtimer0: timer@70ac004000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac000000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac004000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu0_intc 7>,
<&cpu1_intc 7>,
<&cpu2_intc 7>,
<&cpu3_intc 7>;
};
clint_mtimer1: timer@70ac010000 {
clint_mtimer1: timer@70ac014000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac010000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac014000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu4_intc 7>,
<&cpu5_intc 7>,
<&cpu6_intc 7>,
<&cpu7_intc 7>;
};
clint_mtimer2: timer@70ac020000 {
clint_mtimer2: timer@70ac024000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac020000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac024000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu8_intc 7>,
<&cpu9_intc 7>,
<&cpu10_intc 7>,
<&cpu11_intc 7>;
};
clint_mtimer3: timer@70ac030000 {
clint_mtimer3: timer@70ac034000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac030000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac034000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu12_intc 7>,
<&cpu13_intc 7>,
<&cpu14_intc 7>,
<&cpu15_intc 7>;
};
clint_mtimer4: timer@70ac040000 {
clint_mtimer4: timer@70ac044000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac040000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac044000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu16_intc 7>,
<&cpu17_intc 7>,
<&cpu18_intc 7>,
<&cpu19_intc 7>;
};
clint_mtimer5: timer@70ac050000 {
clint_mtimer5: timer@70ac054000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac050000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac054000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu20_intc 7>,
<&cpu21_intc 7>,
<&cpu22_intc 7>,
<&cpu23_intc 7>;
};
clint_mtimer6: timer@70ac060000 {
clint_mtimer6: timer@70ac064000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac060000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac064000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu24_intc 7>,
<&cpu25_intc 7>,
<&cpu26_intc 7>,
<&cpu27_intc 7>;
};
clint_mtimer7: timer@70ac070000 {
clint_mtimer7: timer@70ac074000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac070000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac074000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu28_intc 7>,
<&cpu29_intc 7>,
<&cpu30_intc 7>,
<&cpu31_intc 7>;
};
clint_mtimer8: timer@70ac080000 {
clint_mtimer8: timer@70ac084000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac080000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac084000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu32_intc 7>,
<&cpu33_intc 7>,
<&cpu34_intc 7>,
<&cpu35_intc 7>;
};
clint_mtimer9: timer@70ac090000 {
clint_mtimer9: timer@70ac094000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac090000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac094000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu36_intc 7>,
<&cpu37_intc 7>,
<&cpu38_intc 7>,
<&cpu39_intc 7>;
};
clint_mtimer10: timer@70ac0a0000 {
clint_mtimer10: timer@70ac0a4000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac0a0000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac0a4000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu40_intc 7>,
<&cpu41_intc 7>,
<&cpu42_intc 7>,
<&cpu43_intc 7>;
};
clint_mtimer11: timer@70ac0b0000 {
clint_mtimer11: timer@70ac0b4000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac0b0000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac0b4000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu44_intc 7>,
<&cpu45_intc 7>,
<&cpu46_intc 7>,
<&cpu47_intc 7>;
};
clint_mtimer12: timer@70ac0c0000 {
clint_mtimer12: timer@70ac0c4000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac0c0000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac0c4000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu48_intc 7>,
<&cpu49_intc 7>,
<&cpu50_intc 7>,
<&cpu51_intc 7>;
};
clint_mtimer13: timer@70ac0d0000 {
clint_mtimer13: timer@70ac0d4000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac0d0000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac0d4000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu52_intc 7>,
<&cpu53_intc 7>,
<&cpu54_intc 7>,
<&cpu55_intc 7>;
};
clint_mtimer14: timer@70ac0e0000 {
clint_mtimer14: timer@70ac0e4000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac0e0000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac0e4000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu56_intc 7>,
<&cpu57_intc 7>,
<&cpu58_intc 7>,
<&cpu59_intc 7>;
};
clint_mtimer15: timer@70ac0f0000 {
clint_mtimer15: timer@70ac0f4000 {
compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
reg = <0x00000070 0xac0f0000 0x00000000 0x00007ff8>;
reg = <0x00000070 0xac0f4000 0x00000000 0x0000c000>;
reg-names = "mtimecmp";
interrupts-extended = <&cpu60_intc 7>,
<&cpu61_intc 7>,
<&cpu62_intc 7>,


@ -795,6 +795,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
bool is_struct_ops = flags & BPF_TRAMP_F_INDIRECT;
void *orig_call = func_addr;
bool save_ret;
u32 insn;
@ -878,7 +879,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
stack_size = round_up(stack_size, 16);
if (func_addr) {
if (!is_struct_ops) {
/* For the trampoline called from function entry,
* the frame of traced function and the frame of
* trampoline need to be considered.
@ -998,7 +999,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
emit_ld(RV_REG_S1, -sreg_off, RV_REG_FP, ctx);
if (func_addr) {
if (!is_struct_ops) {
/* trampoline called from function entry */
emit_ld(RV_REG_T0, stack_size - 8, RV_REG_SP, ctx);
emit_ld(RV_REG_FP, stack_size - 16, RV_REG_SP, ctx);


@ -81,10 +81,8 @@
#define X86_FEATURE_K6_MTRR ( 3*32+ 1) /* AMD K6 nonstandard MTRRs */
#define X86_FEATURE_CYRIX_ARR ( 3*32+ 2) /* Cyrix ARRs (= MTRRs) */
#define X86_FEATURE_CENTAUR_MCR ( 3*32+ 3) /* Centaur MCRs (= MTRRs) */
/* CPU types for specific tunings: */
#define X86_FEATURE_K8 ( 3*32+ 4) /* "" Opteron, Athlon64 */
/* FREE, was #define X86_FEATURE_K7 ( 3*32+ 5) "" Athlon */
#define X86_FEATURE_ZEN5 ( 3*32+ 5) /* "" CPU based on Zen5 microarchitecture */
#define X86_FEATURE_P3 ( 3*32+ 6) /* "" P3 */
#define X86_FEATURE_P4 ( 3*32+ 7) /* "" P4 */
#define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* TSC ticks at a constant rate */


@ -162,6 +162,8 @@
#define INTEL_FAM6_ATOM_CRESTMONT_X 0xAF /* Sierra Forest */
#define INTEL_FAM6_ATOM_CRESTMONT 0xB6 /* Grand Ridge */
#define INTEL_FAM6_ATOM_DARKMONT_X 0xDD /* Clearwater Forest */
/* Xeon Phi */
#define INTEL_FAM6_XEON_PHI_KNL 0x57 /* Knights Landing */


@ -58,12 +58,29 @@ extern long __ia32_sys_ni_syscall(const struct pt_regs *regs);
,,regs->di,,regs->si,,regs->dx \
,,regs->r10,,regs->r8,,regs->r9) \
/* SYSCALL_PT_ARGS is adapted from s390x */
#define SYSCALL_PT_ARG6(m, t1, t2, t3, t4, t5, t6) \
SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5), m(t6, (regs->bp))
#define SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5) \
SYSCALL_PT_ARG4(m, t1, t2, t3, t4), m(t5, (regs->di))
#define SYSCALL_PT_ARG4(m, t1, t2, t3, t4) \
SYSCALL_PT_ARG3(m, t1, t2, t3), m(t4, (regs->si))
#define SYSCALL_PT_ARG3(m, t1, t2, t3) \
SYSCALL_PT_ARG2(m, t1, t2), m(t3, (regs->dx))
#define SYSCALL_PT_ARG2(m, t1, t2) \
SYSCALL_PT_ARG1(m, t1), m(t2, (regs->cx))
#define SYSCALL_PT_ARG1(m, t1) m(t1, (regs->bx))
#define SYSCALL_PT_ARGS(x, ...) SYSCALL_PT_ARG##x(__VA_ARGS__)
#define __SC_COMPAT_CAST(t, a) \
(__typeof(__builtin_choose_expr(__TYPE_IS_L(t), 0, 0U))) \
(unsigned int)a
/* Mapping of registers to parameters for syscalls on i386 */
#define SC_IA32_REGS_TO_ARGS(x, ...) \
__MAP(x,__SC_ARGS \
,,(unsigned int)regs->bx,,(unsigned int)regs->cx \
,,(unsigned int)regs->dx,,(unsigned int)regs->si \
,,(unsigned int)regs->di,,(unsigned int)regs->bp)
SYSCALL_PT_ARGS(x, __SC_COMPAT_CAST, \
__MAP(x, __SC_TYPE, __VA_ARGS__)) \
#define __SYS_STUB0(abi, name) \
long __##abi##_##name(const struct pt_regs *regs); \
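A hedged demonstration of what the SYSCALL_PT_ARG* chain above expands to for a three-argument i386 syscall, where arguments arrive in ebx, ecx, edx in order. The pt_regs layout and cast macro are simplified stand-ins; only the recursive macro shape mirrors the kernel.

#include <stdio.h>

struct pt_regs { unsigned long bx, cx, dx, si, di, bp; };

#define CAST32(t, reg) (t)(unsigned int)(reg) /* stand-in for __SC_COMPAT_CAST */

#define SYSCALL_PT_ARG3(m, t1, t2, t3) \
	SYSCALL_PT_ARG2(m, t1, t2), m(t3, (regs->dx))
#define SYSCALL_PT_ARG2(m, t1, t2) \
	SYSCALL_PT_ARG1(m, t1), m(t2, (regs->cx))
#define SYSCALL_PT_ARG1(m, t1) m(t1, (regs->bx))
#define SYSCALL_PT_ARGS(x, ...) SYSCALL_PT_ARG##x(__VA_ARGS__)

static long sys_demo(int fd, unsigned long buf, unsigned long count)
{
	printf("fd=%d buf=%#lx count=%lu\n", fd, buf, count);
	return 0;
}

int main(void)
{
	struct pt_regs regs_store = { .bx = 3, .cx = 0xbeef, .dx = 128 };
	struct pt_regs *regs = &regs_store;

	/* Expands to: sys_demo(CAST32(int, (regs->bx)),
	 *                      CAST32(unsigned long, (regs->cx)),
	 *                      CAST32(unsigned long, (regs->dx))) */
	sys_demo(SYSCALL_PT_ARGS(3, CAST32, int, unsigned long, unsigned long));
	return 0;
}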


@ -403,7 +403,7 @@ noinstr void BUG_func(void)
{
BUG();
}
EXPORT_SYMBOL_GPL(BUG_func);
EXPORT_SYMBOL(BUG_func);
#define CALL_RIP_REL_OPCODE 0xff
#define CALL_RIP_REL_MODRM 0x15


@ -538,7 +538,7 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
/* Figure out Zen generations: */
switch (c->x86) {
case 0x17: {
case 0x17:
switch (c->x86_model) {
case 0x00 ... 0x2f:
case 0x50 ... 0x5f:
@ -554,8 +554,8 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
goto warn;
}
break;
}
case 0x19: {
case 0x19:
switch (c->x86_model) {
case 0x00 ... 0x0f:
case 0x20 ... 0x5f:
@ -569,7 +569,20 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
goto warn;
}
break;
}
case 0x1a:
switch (c->x86_model) {
case 0x00 ... 0x0f:
case 0x20 ... 0x2f:
case 0x40 ... 0x4f:
case 0x70 ... 0x7f:
setup_force_cpu_cap(X86_FEATURE_ZEN5);
break;
default:
goto warn;
}
break;
default:
break;
}
@ -1039,6 +1052,11 @@ static void init_amd_zen4(struct cpuinfo_x86 *c)
msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT);
}
static void init_amd_zen5(struct cpuinfo_x86 *c)
{
init_amd_zen_common();
}
static void init_amd(struct cpuinfo_x86 *c)
{
u64 vm_cr;
@ -1084,6 +1102,8 @@ static void init_amd(struct cpuinfo_x86 *c)
init_amd_zen3(c);
else if (boot_cpu_has(X86_FEATURE_ZEN4))
init_amd_zen4(c);
else if (boot_cpu_has(X86_FEATURE_ZEN5))
init_amd_zen5(c);
/*
* Enable workaround for FXSAVE leak on CPUs


@ -205,12 +205,19 @@ static int bio_copy_user_iov(struct request *rq, struct rq_map_data *map_data,
/*
* success
*/
if ((iov_iter_rw(iter) == WRITE &&
(!map_data || !map_data->null_mapped)) ||
(map_data && map_data->from_user)) {
if (iov_iter_rw(iter) == WRITE &&
(!map_data || !map_data->null_mapped)) {
ret = bio_copy_from_iter(bio, iter);
if (ret)
goto cleanup;
} else if (map_data && map_data->from_user) {
struct iov_iter iter2 = *iter;
/* This is the copy-in part of SG_DXFER_TO_FROM_DEV. */
iter2.data_source = ITER_SOURCE;
ret = bio_copy_from_iter(bio, &iter2);
if (ret)
goto cleanup;
} else {
if (bmd->is_our_pages)
zero_fill_bio(bio);
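The iter2 copy in this hunk exists because, per the comment, SG_DXFER_TO_FROM_DEV uses the user buffer as both source and destination: even though the request is a device-to-user read, the buffer contents must first seed the bounce pages. A toy sketch of those semantics only (plain memcpy stand-ins, no real block I/O or iov_iter):

#include <stdio.h>
#include <string.h>

#define BUF_SZ 8

static void device_transfer(char *bounce)	/* stand-in for the I/O */
{
	memcpy(bounce, "DEVDATA", BUF_SZ);
}

int main(void)
{
	char user_buf[BUF_SZ] = "seedval";
	char bounce[BUF_SZ];

	memcpy(bounce, user_buf, BUF_SZ);  /* copy-in (from_user), even on a read */
	device_transfer(bounce);
	memcpy(user_buf, bounce, BUF_SZ);  /* normal read-side copy-out */
	printf("%s\n", user_buf);
	return 0;
}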


@ -20,8 +20,6 @@ static int blkpg_do_ioctl(struct block_device *bdev,
struct blkpg_partition p;
sector_t start, length;
if (disk->flags & GENHD_FL_NO_PART)
return -EINVAL;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (copy_from_user(&p, upart, sizeof(struct blkpg_partition)))


@ -439,6 +439,11 @@ int bdev_add_partition(struct gendisk *disk, int partno, sector_t start,
goto out;
}
if (disk->flags & GENHD_FL_NO_PART) {
ret = -EINVAL;
goto out;
}
if (partition_overlaps(disk, start, length, -1)) {
ret = -EBUSY;
goto out;


@ -102,7 +102,7 @@ static int reset_pending_show(struct seq_file *s, void *v)
{
struct ivpu_device *vdev = seq_to_ivpu(s);
seq_printf(s, "%d\n", atomic_read(&vdev->pm->in_reset));
seq_printf(s, "%d\n", atomic_read(&vdev->pm->reset_pending));
return 0;
}
@ -130,7 +130,9 @@ dvfs_mode_fops_write(struct file *file, const char __user *user_buf, size_t size
fw->dvfs_mode = dvfs_mode;
ivpu_pm_schedule_recovery(vdev);
ret = pci_try_reset_function(to_pci_dev(vdev->drm.dev));
if (ret)
return ret;
return size;
}
@ -190,7 +192,10 @@ fw_profiling_freq_fops_write(struct file *file, const char __user *user_buf,
return ret;
ivpu_hw_profiling_freq_drive(vdev, enable);
ivpu_pm_schedule_recovery(vdev);
ret = pci_try_reset_function(to_pci_dev(vdev->drm.dev));
if (ret)
return ret;
return size;
}
@ -301,11 +306,18 @@ static ssize_t
ivpu_force_recovery_fn(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
{
struct ivpu_device *vdev = file->private_data;
int ret;
if (!size)
return -EINVAL;
ivpu_pm_schedule_recovery(vdev);
ret = ivpu_rpm_get(vdev);
if (ret)
return ret;
ivpu_pm_trigger_recovery(vdev, "debugfs");
flush_work(&vdev->pm->recovery_work);
ivpu_rpm_put(vdev);
return size;
}


@ -6,6 +6,7 @@
#include <linux/firmware.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <drm/drm_accel.h>
#include <drm/drm_file.h>
@ -17,6 +18,7 @@
#include "ivpu_debugfs.h"
#include "ivpu_drv.h"
#include "ivpu_fw.h"
#include "ivpu_fw_log.h"
#include "ivpu_gem.h"
#include "ivpu_hw.h"
#include "ivpu_ipc.h"
@ -65,22 +67,20 @@ struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv)
return file_priv;
}
struct ivpu_file_priv *ivpu_file_priv_get_by_ctx_id(struct ivpu_device *vdev, unsigned long id)
static void file_priv_unbind(struct ivpu_device *vdev, struct ivpu_file_priv *file_priv)
{
struct ivpu_file_priv *file_priv;
mutex_lock(&file_priv->lock);
if (file_priv->bound) {
ivpu_dbg(vdev, FILE, "file_priv unbind: ctx %u\n", file_priv->ctx.id);
xa_lock_irq(&vdev->context_xa);
file_priv = xa_load(&vdev->context_xa, id);
/* file_priv may still be in context_xa during file_priv_release() */
if (file_priv && !kref_get_unless_zero(&file_priv->ref))
file_priv = NULL;
xa_unlock_irq(&vdev->context_xa);
if (file_priv)
ivpu_dbg(vdev, KREF, "file_priv get by id: ctx %u refcount %u\n",
file_priv->ctx.id, kref_read(&file_priv->ref));
return file_priv;
ivpu_cmdq_release_all_locked(file_priv);
ivpu_jsm_context_release(vdev, file_priv->ctx.id);
ivpu_bo_unbind_all_bos_from_context(vdev, &file_priv->ctx);
ivpu_mmu_user_context_fini(vdev, &file_priv->ctx);
file_priv->bound = false;
drm_WARN_ON(&vdev->drm, !xa_erase_irq(&vdev->context_xa, file_priv->ctx.id));
}
mutex_unlock(&file_priv->lock);
}
static void file_priv_release(struct kref *ref)
@ -88,13 +88,15 @@ static void file_priv_release(struct kref *ref)
struct ivpu_file_priv *file_priv = container_of(ref, struct ivpu_file_priv, ref);
struct ivpu_device *vdev = file_priv->vdev;
ivpu_dbg(vdev, FILE, "file_priv release: ctx %u\n", file_priv->ctx.id);
ivpu_dbg(vdev, FILE, "file_priv release: ctx %u bound %d\n",
file_priv->ctx.id, (bool)file_priv->bound);
pm_runtime_get_sync(vdev->drm.dev);
mutex_lock(&vdev->context_list_lock);
file_priv_unbind(vdev, file_priv);
mutex_unlock(&vdev->context_list_lock);
pm_runtime_put_autosuspend(vdev->drm.dev);
ivpu_cmdq_release_all(file_priv);
ivpu_jsm_context_release(vdev, file_priv->ctx.id);
ivpu_bo_remove_all_bos_from_context(vdev, &file_priv->ctx);
ivpu_mmu_user_context_fini(vdev, &file_priv->ctx);
drm_WARN_ON(&vdev->drm, xa_erase_irq(&vdev->context_xa, file_priv->ctx.id) != file_priv);
mutex_destroy(&file_priv->lock);
kfree(file_priv);
}
@ -176,9 +178,6 @@ static int ivpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_f
case DRM_IVPU_PARAM_CONTEXT_BASE_ADDRESS:
args->value = vdev->hw->ranges.user.start;
break;
case DRM_IVPU_PARAM_CONTEXT_PRIORITY:
args->value = file_priv->priority;
break;
case DRM_IVPU_PARAM_CONTEXT_ID:
args->value = file_priv->ctx.id;
break;
@ -218,17 +217,10 @@ static int ivpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_f
static int ivpu_set_param_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
{
struct ivpu_file_priv *file_priv = file->driver_priv;
struct drm_ivpu_param *args = data;
int ret = 0;
switch (args->param) {
case DRM_IVPU_PARAM_CONTEXT_PRIORITY:
if (args->value <= DRM_IVPU_CONTEXT_PRIORITY_REALTIME)
file_priv->priority = args->value;
else
ret = -EINVAL;
break;
default:
ret = -EINVAL;
}
@ -241,50 +233,53 @@ static int ivpu_open(struct drm_device *dev, struct drm_file *file)
struct ivpu_device *vdev = to_ivpu_device(dev);
struct ivpu_file_priv *file_priv;
u32 ctx_id;
void *old;
int ret;
int idx, ret;
ret = xa_alloc_irq(&vdev->context_xa, &ctx_id, NULL, vdev->context_xa_limit, GFP_KERNEL);
if (ret) {
ivpu_err(vdev, "Failed to allocate context id: %d\n", ret);
return ret;
}
if (!drm_dev_enter(dev, &idx))
return -ENODEV;
file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
if (!file_priv) {
ret = -ENOMEM;
goto err_xa_erase;
goto err_dev_exit;
}
file_priv->vdev = vdev;
file_priv->priority = DRM_IVPU_CONTEXT_PRIORITY_NORMAL;
file_priv->bound = true;
kref_init(&file_priv->ref);
mutex_init(&file_priv->lock);
mutex_lock(&vdev->context_list_lock);
ret = xa_alloc_irq(&vdev->context_xa, &ctx_id, file_priv,
vdev->context_xa_limit, GFP_KERNEL);
if (ret) {
ivpu_err(vdev, "Failed to allocate context id: %d\n", ret);
goto err_unlock;
}
ret = ivpu_mmu_user_context_init(vdev, &file_priv->ctx, ctx_id);
if (ret)
goto err_mutex_destroy;
goto err_xa_erase;
old = xa_store_irq(&vdev->context_xa, ctx_id, file_priv, GFP_KERNEL);
if (xa_is_err(old)) {
ret = xa_err(old);
ivpu_err(vdev, "Failed to store context %u: %d\n", ctx_id, ret);
goto err_ctx_fini;
}
mutex_unlock(&vdev->context_list_lock);
drm_dev_exit(idx);
file->driver_priv = file_priv;
ivpu_dbg(vdev, FILE, "file_priv create: ctx %u process %s pid %d\n",
ctx_id, current->comm, task_pid_nr(current));
file->driver_priv = file_priv;
return 0;
err_ctx_fini:
ivpu_mmu_user_context_fini(vdev, &file_priv->ctx);
err_mutex_destroy:
mutex_destroy(&file_priv->lock);
kfree(file_priv);
err_xa_erase:
xa_erase_irq(&vdev->context_xa, ctx_id);
err_unlock:
mutex_unlock(&vdev->context_list_lock);
mutex_destroy(&file_priv->lock);
kfree(file_priv);
err_dev_exit:
drm_dev_exit(idx);
return ret;
}
@ -340,8 +335,6 @@ static int ivpu_wait_for_ready(struct ivpu_device *vdev)
if (!ret)
ivpu_dbg(vdev, PM, "VPU ready message received successfully\n");
else
ivpu_hw_diagnose_failure(vdev);
return ret;
}
@ -369,6 +362,9 @@ int ivpu_boot(struct ivpu_device *vdev)
ret = ivpu_wait_for_ready(vdev);
if (ret) {
ivpu_err(vdev, "Failed to boot the firmware: %d\n", ret);
ivpu_hw_diagnose_failure(vdev);
ivpu_mmu_evtq_dump(vdev);
ivpu_fw_log_dump(vdev);
return ret;
}
@ -540,6 +536,10 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
lockdep_set_class(&vdev->submitted_jobs_xa.xa_lock, &submitted_jobs_xa_lock_class_key);
INIT_LIST_HEAD(&vdev->bo_list);
ret = drmm_mutex_init(&vdev->drm, &vdev->context_list_lock);
if (ret)
goto err_xa_destroy;
ret = drmm_mutex_init(&vdev->drm, &vdev->bo_list_lock);
if (ret)
goto err_xa_destroy;
@ -611,14 +611,30 @@ err_xa_destroy:
return ret;
}
static void ivpu_bo_unbind_all_user_contexts(struct ivpu_device *vdev)
{
struct ivpu_file_priv *file_priv;
unsigned long ctx_id;
mutex_lock(&vdev->context_list_lock);
xa_for_each(&vdev->context_xa, ctx_id, file_priv)
file_priv_unbind(vdev, file_priv);
mutex_unlock(&vdev->context_list_lock);
}
static void ivpu_dev_fini(struct ivpu_device *vdev)
{
ivpu_pm_disable(vdev);
ivpu_shutdown(vdev);
if (IVPU_WA(d3hot_after_power_off))
pci_set_power_state(to_pci_dev(vdev->drm.dev), PCI_D3hot);
ivpu_jobs_abort_all(vdev);
ivpu_job_done_consumer_fini(vdev);
ivpu_pm_cancel_recovery(vdev);
ivpu_bo_unbind_all_user_contexts(vdev);
ivpu_ipc_fini(vdev);
ivpu_fw_fini(vdev);


@ -56,6 +56,7 @@
#define IVPU_DBG_JSM BIT(10)
#define IVPU_DBG_KREF BIT(11)
#define IVPU_DBG_RPM BIT(12)
#define IVPU_DBG_MMU_MAP BIT(13)
#define ivpu_err(vdev, fmt, ...) \
drm_err(&(vdev)->drm, "%s(): " fmt, __func__, ##__VA_ARGS__)
@ -114,6 +115,7 @@ struct ivpu_device {
struct ivpu_mmu_context gctx;
struct ivpu_mmu_context rctx;
struct mutex context_list_lock; /* Protects user context addition/removal */
struct xarray context_xa;
struct xa_limit context_xa_limit;
@ -145,8 +147,8 @@ struct ivpu_file_priv {
struct mutex lock; /* Protects cmdq */
struct ivpu_cmdq *cmdq[IVPU_NUM_ENGINES];
struct ivpu_mmu_context ctx;
u32 priority;
bool has_mmu_faults;
bool bound;
};
extern int ivpu_dbg_mask;
@ -162,7 +164,6 @@ extern bool ivpu_disable_mmu_cont_pages;
extern int ivpu_test_mode;
struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv);
struct ivpu_file_priv *ivpu_file_priv_get_by_ctx_id(struct ivpu_device *vdev, unsigned long id);
void ivpu_file_priv_put(struct ivpu_file_priv **link);
int ivpu_boot(struct ivpu_device *vdev);


@ -24,14 +24,11 @@ static const struct drm_gem_object_funcs ivpu_gem_funcs;
static inline void ivpu_dbg_bo(struct ivpu_device *vdev, struct ivpu_bo *bo, const char *action)
{
if (bo->ctx)
ivpu_dbg(vdev, BO, "%6s: size %zu has_pages %d dma_mapped %d handle %u ctx %d vpu_addr 0x%llx mmu_mapped %d\n",
action, ivpu_bo_size(bo), (bool)bo->base.pages, (bool)bo->base.sgt,
bo->handle, bo->ctx->id, bo->vpu_addr, bo->mmu_mapped);
else
ivpu_dbg(vdev, BO, "%6s: size %zu has_pages %d dma_mapped %d handle %u (not added to context)\n",
action, ivpu_bo_size(bo), (bool)bo->base.pages, (bool)bo->base.sgt,
bo->handle);
ivpu_dbg(vdev, BO,
"%6s: bo %8p vpu_addr %9llx size %8zu ctx %d has_pages %d dma_mapped %d mmu_mapped %d wc %d imported %d\n",
action, bo, bo->vpu_addr, ivpu_bo_size(bo), bo->ctx ? bo->ctx->id : 0,
(bool)bo->base.pages, (bool)bo->base.sgt, bo->mmu_mapped, bo->base.map_wc,
(bool)bo->base.base.import_attach);
}
/*
@ -49,12 +46,7 @@ int __must_check ivpu_bo_pin(struct ivpu_bo *bo)
mutex_lock(&bo->lock);
ivpu_dbg_bo(vdev, bo, "pin");
if (!bo->ctx) {
ivpu_err(vdev, "vpu_addr not allocated for BO %d\n", bo->handle);
ret = -EINVAL;
goto unlock;
}
drm_WARN_ON(&vdev->drm, !bo->ctx);
if (!bo->mmu_mapped) {
struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
@ -85,7 +77,10 @@ ivpu_bo_alloc_vpu_addr(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx,
const struct ivpu_addr_range *range)
{
struct ivpu_device *vdev = ivpu_bo_to_vdev(bo);
int ret;
int idx, ret;
if (!drm_dev_enter(&vdev->drm, &idx))
return -ENODEV;
mutex_lock(&bo->lock);
@ -101,6 +96,8 @@ ivpu_bo_alloc_vpu_addr(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx,
mutex_unlock(&bo->lock);
drm_dev_exit(idx);
return ret;
}
@ -108,11 +105,7 @@ static void ivpu_bo_unbind_locked(struct ivpu_bo *bo)
{
struct ivpu_device *vdev = ivpu_bo_to_vdev(bo);
lockdep_assert_held(&bo->lock);
ivpu_dbg_bo(vdev, bo, "unbind");
/* TODO: dma_unmap */
lockdep_assert(lockdep_is_held(&bo->lock) || !kref_read(&bo->base.base.refcount));
if (bo->mmu_mapped) {
drm_WARN_ON(&vdev->drm, !bo->ctx);
@ -124,19 +117,23 @@ static void ivpu_bo_unbind_locked(struct ivpu_bo *bo)
if (bo->ctx) {
ivpu_mmu_context_remove_node(bo->ctx, &bo->mm_node);
bo->vpu_addr = 0;
bo->ctx = NULL;
}
if (bo->base.base.import_attach)
return;
dma_resv_lock(bo->base.base.resv, NULL);
if (bo->base.sgt) {
dma_unmap_sgtable(vdev->drm.dev, bo->base.sgt, DMA_BIDIRECTIONAL, 0);
sg_free_table(bo->base.sgt);
kfree(bo->base.sgt);
bo->base.sgt = NULL;
}
dma_resv_unlock(bo->base.base.resv);
}
static void ivpu_bo_unbind(struct ivpu_bo *bo)
{
mutex_lock(&bo->lock);
ivpu_bo_unbind_locked(bo);
mutex_unlock(&bo->lock);
}
void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx)
void ivpu_bo_unbind_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx)
{
struct ivpu_bo *bo;
@ -146,8 +143,10 @@ void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_m
mutex_lock(&vdev->bo_list_lock);
list_for_each_entry(bo, &vdev->bo_list, bo_list_node) {
mutex_lock(&bo->lock);
if (bo->ctx == ctx)
if (bo->ctx == ctx) {
ivpu_dbg_bo(vdev, bo, "unbind");
ivpu_bo_unbind_locked(bo);
}
mutex_unlock(&bo->lock);
}
mutex_unlock(&vdev->bo_list_lock);
@ -199,9 +198,6 @@ ivpu_bo_create(struct ivpu_device *vdev, u64 size, u32 flags)
list_add_tail(&bo->bo_list_node, &vdev->bo_list);
mutex_unlock(&vdev->bo_list_lock);
ivpu_dbg(vdev, BO, "create: vpu_addr 0x%llx size %zu flags 0x%x\n",
bo->vpu_addr, bo->base.base.size, flags);
return bo;
}
@ -212,6 +208,12 @@ static int ivpu_bo_open(struct drm_gem_object *obj, struct drm_file *file)
struct ivpu_bo *bo = to_ivpu_bo(obj);
struct ivpu_addr_range *range;
if (bo->ctx) {
ivpu_warn(vdev, "Can't add BO to ctx %u: already in ctx %u\n",
file_priv->ctx.id, bo->ctx->id);
return -EALREADY;
}
if (bo->flags & DRM_IVPU_BO_SHAVE_MEM)
range = &vdev->hw->ranges.shave;
else if (bo->flags & DRM_IVPU_BO_DMA_MEM)
@ -227,62 +229,24 @@ static void ivpu_bo_free(struct drm_gem_object *obj)
struct ivpu_device *vdev = to_ivpu_device(obj->dev);
struct ivpu_bo *bo = to_ivpu_bo(obj);
ivpu_dbg_bo(vdev, bo, "free");
mutex_lock(&vdev->bo_list_lock);
list_del(&bo->bo_list_node);
mutex_unlock(&vdev->bo_list_lock);
drm_WARN_ON(&vdev->drm, !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ));
ivpu_dbg_bo(vdev, bo, "free");
ivpu_bo_unbind(bo);
ivpu_bo_unbind_locked(bo);
mutex_destroy(&bo->lock);
drm_WARN_ON(obj->dev, bo->base.pages_use_count > 1);
drm_gem_shmem_free(&bo->base);
}
static const struct dma_buf_ops ivpu_bo_dmabuf_ops = {
.cache_sgt_mapping = true,
.attach = drm_gem_map_attach,
.detach = drm_gem_map_detach,
.map_dma_buf = drm_gem_map_dma_buf,
.unmap_dma_buf = drm_gem_unmap_dma_buf,
.release = drm_gem_dmabuf_release,
.mmap = drm_gem_dmabuf_mmap,
.vmap = drm_gem_dmabuf_vmap,
.vunmap = drm_gem_dmabuf_vunmap,
};
static struct dma_buf *ivpu_bo_export(struct drm_gem_object *obj, int flags)
{
struct drm_device *dev = obj->dev;
struct dma_buf_export_info exp_info = {
.exp_name = KBUILD_MODNAME,
.owner = dev->driver->fops->owner,
.ops = &ivpu_bo_dmabuf_ops,
.size = obj->size,
.flags = flags,
.priv = obj,
.resv = obj->resv,
};
void *sgt;
/*
* Make sure that pages are allocated and dma-mapped before exporting the bo.
* DMA-mapping is required if the bo will be imported to the same device.
*/
sgt = drm_gem_shmem_get_pages_sgt(to_drm_gem_shmem_obj(obj));
if (IS_ERR(sgt))
return sgt;
return drm_gem_dmabuf_export(dev, &exp_info);
}
static const struct drm_gem_object_funcs ivpu_gem_funcs = {
.free = ivpu_bo_free,
.open = ivpu_bo_open,
.export = ivpu_bo_export,
.print_info = drm_gem_shmem_object_print_info,
.pin = drm_gem_shmem_object_pin,
.unpin = drm_gem_shmem_object_unpin,
@ -315,11 +279,9 @@ int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *fi
return PTR_ERR(bo);
}
ret = drm_gem_handle_create(file, &bo->base.base, &bo->handle);
if (!ret) {
ret = drm_gem_handle_create(file, &bo->base.base, &args->handle);
if (!ret)
args->vpu_addr = bo->vpu_addr;
args->handle = bo->handle;
}
drm_gem_object_put(&bo->base.base);
@ -361,7 +323,9 @@ ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 fla
if (ret)
goto err_put;
dma_resv_lock(bo->base.base.resv, NULL);
ret = drm_gem_shmem_vmap(&bo->base, &map);
dma_resv_unlock(bo->base.base.resv);
if (ret)
goto err_put;
@ -376,7 +340,10 @@ void ivpu_bo_free_internal(struct ivpu_bo *bo)
{
struct iosys_map map = IOSYS_MAP_INIT_VADDR(bo->base.vaddr);
dma_resv_lock(bo->base.base.resv, NULL);
drm_gem_shmem_vunmap(&bo->base, &map);
dma_resv_unlock(bo->base.base.resv);
drm_gem_object_put(&bo->base.base);
}
@ -432,19 +399,11 @@ int ivpu_bo_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file
static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p)
{
unsigned long dma_refcount = 0;
mutex_lock(&bo->lock);
if (bo->base.base.dma_buf && bo->base.base.dma_buf->file)
dma_refcount = atomic_long_read(&bo->base.base.dma_buf->file->f_count);
drm_printf(p, "%-3u %-6d 0x%-12llx %-10lu 0x%-8x %-4u %-8lu",
bo->ctx->id, bo->handle, bo->vpu_addr, bo->base.base.size,
bo->flags, kref_read(&bo->base.base.refcount), dma_refcount);
if (bo->base.base.import_attach)
drm_printf(p, " imported");
drm_printf(p, "%-9p %-3u 0x%-12llx %-10lu 0x%-8x %-4u",
bo, bo->ctx->id, bo->vpu_addr, bo->base.base.size,
bo->flags, kref_read(&bo->base.base.refcount));
if (bo->base.pages)
drm_printf(p, " has_pages");
@ -452,6 +411,9 @@ static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p)
if (bo->mmu_mapped)
drm_printf(p, " mmu_mapped");
if (bo->base.base.import_attach)
drm_printf(p, " imported");
drm_printf(p, "\n");
mutex_unlock(&bo->lock);
@ -462,8 +424,8 @@ void ivpu_bo_list(struct drm_device *dev, struct drm_printer *p)
struct ivpu_device *vdev = to_ivpu_device(dev);
struct ivpu_bo *bo;
drm_printf(p, "%-3s %-6s %-14s %-10s %-10s %-4s %-8s %s\n",
"ctx", "handle", "vpu_addr", "size", "flags", "refs", "dma_refs", "attribs");
drm_printf(p, "%-9s %-3s %-14s %-10s %-10s %-4s %s\n",
"bo", "ctx", "vpu_addr", "size", "flags", "refs", "attribs");
mutex_lock(&vdev->bo_list_lock);
list_for_each_entry(bo, &vdev->bo_list, bo_list_node)

View File

@ -19,14 +19,13 @@ struct ivpu_bo {
struct mutex lock; /* Protects: ctx, mmu_mapped, vpu_addr */
u64 vpu_addr;
u32 handle;
u32 flags;
u32 job_status; /* Valid only for command buffer */
bool mmu_mapped;
};
int ivpu_bo_pin(struct ivpu_bo *bo);
void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx);
void ivpu_bo_unbind_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx);
struct drm_gem_object *ivpu_gem_create_object(struct drm_device *dev, size_t size);
struct ivpu_bo *ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 flags);

View File

@ -875,24 +875,18 @@ static void ivpu_hw_37xx_irq_disable(struct ivpu_device *vdev)
static void ivpu_hw_37xx_irq_wdt_nce_handler(struct ivpu_device *vdev)
{
ivpu_err_ratelimited(vdev, "WDT NCE irq\n");
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "WDT NCE IRQ");
}
static void ivpu_hw_37xx_irq_wdt_mss_handler(struct ivpu_device *vdev)
{
ivpu_err_ratelimited(vdev, "WDT MSS irq\n");
ivpu_hw_wdt_disable(vdev);
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "WDT MSS IRQ");
}
static void ivpu_hw_37xx_irq_noc_firewall_handler(struct ivpu_device *vdev)
{
ivpu_err_ratelimited(vdev, "NOC Firewall irq\n");
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ");
}
/* Handler for IRQs from VPU core (irqV) */
@ -970,7 +964,7 @@ static bool ivpu_hw_37xx_irqb_handler(struct ivpu_device *vdev, int irq)
REGB_WR32(VPU_37XX_BUTTRESS_INTERRUPT_STAT, status);
if (schedule_recovery)
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "Buttress IRQ");
return true;
}

View File

@ -746,7 +746,7 @@ static int ivpu_hw_40xx_info_init(struct ivpu_device *vdev)
return 0;
}
static int ivpu_hw_40xx_reset(struct ivpu_device *vdev)
static int ivpu_hw_40xx_ip_reset(struct ivpu_device *vdev)
{
int ret;
u32 val;
@ -768,6 +768,23 @@ static int ivpu_hw_40xx_reset(struct ivpu_device *vdev)
return ret;
}
static int ivpu_hw_40xx_reset(struct ivpu_device *vdev)
{
int ret = 0;
if (ivpu_hw_40xx_ip_reset(vdev)) {
ivpu_err(vdev, "Failed to reset VPU IP\n");
ret = -EIO;
}
if (ivpu_pll_disable(vdev)) {
ivpu_err(vdev, "Failed to disable PLL\n");
ret = -EIO;
}
return ret;
}
static int ivpu_hw_40xx_d0i3_enable(struct ivpu_device *vdev)
{
int ret;
@ -913,7 +930,7 @@ static int ivpu_hw_40xx_power_down(struct ivpu_device *vdev)
ivpu_hw_40xx_save_d0i3_entry_timestamp(vdev);
if (!ivpu_hw_40xx_is_idle(vdev) && ivpu_hw_40xx_reset(vdev))
if (!ivpu_hw_40xx_is_idle(vdev) && ivpu_hw_40xx_ip_reset(vdev))
ivpu_warn(vdev, "Failed to reset the VPU\n");
if (ivpu_pll_disable(vdev)) {
@ -1032,18 +1049,18 @@ static void ivpu_hw_40xx_irq_disable(struct ivpu_device *vdev)
static void ivpu_hw_40xx_irq_wdt_nce_handler(struct ivpu_device *vdev)
{
/* TODO: For LNN hang consider engine reset instead of full recovery */
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "WDT NCE IRQ");
}
static void ivpu_hw_40xx_irq_wdt_mss_handler(struct ivpu_device *vdev)
{
ivpu_hw_wdt_disable(vdev);
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "WDT MSS IRQ");
}
static void ivpu_hw_40xx_irq_noc_firewall_handler(struct ivpu_device *vdev)
{
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ");
}
/* Handler for IRQs from VPU core (irqV) */
@ -1137,7 +1154,7 @@ static bool ivpu_hw_40xx_irqb_handler(struct ivpu_device *vdev, int irq)
REGB_WR32(VPU_40XX_BUTTRESS_INTERRUPT_STAT, status);
if (schedule_recovery)
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "Buttress IRQ");
return true;
}
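
Splitting ivpu_hw_40xx_reset() into an IP reset plus a PLL disable lets the power-down path run only the IP reset, while the full reset executes both steps unconditionally and accumulates a single error code. A sketch of that accumulate-and-continue shape, with stub steps and -5 standing in for -EIO:

#include <stdio.h>

static int ip_reset_stub(void)    { return 0; }  /* succeeds */
static int pll_disable_stub(void) { return -1; } /* fails */

static int hw_reset_stub(void)
{
	int ret = 0;

	if (ip_reset_stub())
		ret = -5;	/* -EIO stand-in; keep going regardless */
	if (pll_disable_stub())
		ret = -5;
	return ret;
}

int main(void)
{
	printf("reset returned %d\n", hw_reset_stub()); /* -5 */
	return 0;
}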

View File

@ -343,10 +343,8 @@ int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *r
hb_ret = ivpu_ipc_send_receive_internal(vdev, &hb_req, VPU_JSM_MSG_QUERY_ENGINE_HB_DONE,
&hb_resp, VPU_IPC_CHAN_ASYNC_CMD,
vdev->timeout.jsm);
if (hb_ret == -ETIMEDOUT) {
ivpu_hw_diagnose_failure(vdev);
ivpu_pm_schedule_recovery(vdev);
}
if (hb_ret == -ETIMEDOUT)
ivpu_pm_trigger_recovery(vdev, "IPC timeout");
return ret;
}

View File

@ -112,22 +112,20 @@ static void ivpu_cmdq_release_locked(struct ivpu_file_priv *file_priv, u16 engin
}
}
void ivpu_cmdq_release_all(struct ivpu_file_priv *file_priv)
void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv)
{
int i;
mutex_lock(&file_priv->lock);
lockdep_assert_held(&file_priv->lock);
for (i = 0; i < IVPU_NUM_ENGINES; i++)
ivpu_cmdq_release_locked(file_priv, i);
mutex_unlock(&file_priv->lock);
}
/*
* Mark the doorbell as unregistered and reset job queue pointers.
* This function needs to be called when the VPU hardware is restarted
* and FW looses job queue state. The next time job queue is used it
* and FW loses job queue state. The next time job queue is used it
* will be registered again.
*/
static void ivpu_cmdq_reset_locked(struct ivpu_file_priv *file_priv, u16 engine)
@ -161,15 +159,13 @@ void ivpu_cmdq_reset_all_contexts(struct ivpu_device *vdev)
struct ivpu_file_priv *file_priv;
unsigned long ctx_id;
xa_for_each(&vdev->context_xa, ctx_id, file_priv) {
file_priv = ivpu_file_priv_get_by_ctx_id(vdev, ctx_id);
if (!file_priv)
continue;
mutex_lock(&vdev->context_list_lock);
xa_for_each(&vdev->context_xa, ctx_id, file_priv)
ivpu_cmdq_reset_all(file_priv);
ivpu_file_priv_put(&file_priv);
}
mutex_unlock(&vdev->context_list_lock);
}
static int ivpu_cmdq_push_job(struct ivpu_cmdq *cmdq, struct ivpu_job *job)
@ -243,60 +239,32 @@ static struct dma_fence *ivpu_fence_create(struct ivpu_device *vdev)
return &fence->base;
}
static void job_get(struct ivpu_job *job, struct ivpu_job **link)
static void ivpu_job_destroy(struct ivpu_job *job)
{
struct ivpu_device *vdev = job->vdev;
kref_get(&job->ref);
*link = job;
ivpu_dbg(vdev, KREF, "Job get: id %u refcount %u\n", job->job_id, kref_read(&job->ref));
}
static void job_release(struct kref *ref)
{
struct ivpu_job *job = container_of(ref, struct ivpu_job, ref);
struct ivpu_device *vdev = job->vdev;
u32 i;
ivpu_dbg(vdev, JOB, "Job destroyed: id %3u ctx %2d engine %d",
job->job_id, job->file_priv->ctx.id, job->engine_idx);
for (i = 0; i < job->bo_count; i++)
if (job->bos[i])
drm_gem_object_put(&job->bos[i]->base.base);
dma_fence_put(job->done_fence);
ivpu_file_priv_put(&job->file_priv);
ivpu_dbg(vdev, KREF, "Job released: id %u\n", job->job_id);
kfree(job);
/* Allow the VPU to get suspended, must be called after ivpu_file_priv_put() */
ivpu_rpm_put(vdev);
}
static void job_put(struct ivpu_job *job)
{
struct ivpu_device *vdev = job->vdev;
ivpu_dbg(vdev, KREF, "Job put: id %u refcount %u\n", job->job_id, kref_read(&job->ref));
kref_put(&job->ref, job_release);
}
static struct ivpu_job *
ivpu_create_job(struct ivpu_file_priv *file_priv, u32 engine_idx, u32 bo_count)
ivpu_job_create(struct ivpu_file_priv *file_priv, u32 engine_idx, u32 bo_count)
{
struct ivpu_device *vdev = file_priv->vdev;
struct ivpu_job *job;
int ret;
ret = ivpu_rpm_get(vdev);
if (ret < 0)
return NULL;
job = kzalloc(struct_size(job, bos, bo_count), GFP_KERNEL);
if (!job)
goto err_rpm_put;
kref_init(&job->ref);
return NULL;
job->vdev = vdev;
job->engine_idx = engine_idx;
@ -310,17 +278,14 @@ ivpu_create_job(struct ivpu_file_priv *file_priv, u32 engine_idx, u32 bo_count)
job->file_priv = ivpu_file_priv_get(file_priv);
ivpu_dbg(vdev, JOB, "Job created: ctx %2d engine %d", file_priv->ctx.id, job->engine_idx);
return job;
err_free_job:
kfree(job);
err_rpm_put:
ivpu_rpm_put(vdev);
return NULL;
}
static int ivpu_job_done(struct ivpu_device *vdev, u32 job_id, u32 job_status)
static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32 job_status)
{
struct ivpu_job *job;
@ -337,9 +302,10 @@ static int ivpu_job_done(struct ivpu_device *vdev, u32 job_id, u32 job_status)
ivpu_dbg(vdev, JOB, "Job complete: id %3u ctx %2d engine %d status 0x%x\n",
job->job_id, job->file_priv->ctx.id, job->engine_idx, job_status);
ivpu_job_destroy(job);
ivpu_stop_job_timeout_detection(vdev);
job_put(job);
ivpu_rpm_put(vdev);
return 0;
}
@ -349,10 +315,10 @@ void ivpu_jobs_abort_all(struct ivpu_device *vdev)
unsigned long id;
xa_for_each(&vdev->submitted_jobs_xa, id, job)
ivpu_job_done(vdev, id, VPU_JSM_STATUS_ABORTED);
ivpu_job_signal_and_destroy(vdev, id, VPU_JSM_STATUS_ABORTED);
}
static int ivpu_direct_job_submission(struct ivpu_job *job)
static int ivpu_job_submit(struct ivpu_job *job)
{
struct ivpu_file_priv *file_priv = job->file_priv;
struct ivpu_device *vdev = job->vdev;
@ -360,53 +326,65 @@ static int ivpu_direct_job_submission(struct ivpu_job *job)
struct ivpu_cmdq *cmdq;
int ret;
ret = ivpu_rpm_get(vdev);
if (ret < 0)
return ret;
mutex_lock(&file_priv->lock);
cmdq = ivpu_cmdq_acquire(job->file_priv, job->engine_idx);
if (!cmdq) {
ivpu_warn(vdev, "Failed get job queue, ctx %d engine %d\n",
file_priv->ctx.id, job->engine_idx);
ivpu_warn_ratelimited(vdev, "Failed get job queue, ctx %d engine %d\n",
file_priv->ctx.id, job->engine_idx);
ret = -EINVAL;
goto err_unlock;
goto err_unlock_file_priv;
}
job_id_range.min = FIELD_PREP(JOB_ID_CONTEXT_MASK, (file_priv->ctx.id - 1));
job_id_range.max = job_id_range.min | JOB_ID_JOB_MASK;
job_get(job, &job);
ret = xa_alloc(&vdev->submitted_jobs_xa, &job->job_id, job, job_id_range, GFP_KERNEL);
xa_lock(&vdev->submitted_jobs_xa);
ret = __xa_alloc(&vdev->submitted_jobs_xa, &job->job_id, job, job_id_range, GFP_KERNEL);
if (ret) {
ivpu_warn_ratelimited(vdev, "Failed to allocate job id: %d\n", ret);
goto err_job_put;
ivpu_dbg(vdev, JOB, "Too many active jobs in ctx %d\n",
file_priv->ctx.id);
ret = -EBUSY;
goto err_unlock_submitted_jobs_xa;
}
ret = ivpu_cmdq_push_job(cmdq, job);
if (ret)
goto err_xa_erase;
goto err_erase_xa;
ivpu_start_job_timeout_detection(vdev);
ivpu_dbg(vdev, JOB, "Job submitted: id %3u addr 0x%llx ctx %2d engine %d next %d\n",
job->job_id, job->cmd_buf_vpu_addr, file_priv->ctx.id,
job->engine_idx, cmdq->jobq->header.tail);
if (ivpu_test_mode & IVPU_TEST_MODE_NULL_HW) {
ivpu_job_done(vdev, job->job_id, VPU_JSM_STATUS_SUCCESS);
if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_HW)) {
cmdq->jobq->header.head = cmdq->jobq->header.tail;
wmb(); /* Flush WC buffer for jobq header */
} else {
ivpu_cmdq_ring_db(vdev, cmdq);
}
ivpu_dbg(vdev, JOB, "Job submitted: id %3u ctx %2d engine %d addr 0x%llx next %d\n",
job->job_id, file_priv->ctx.id, job->engine_idx,
job->cmd_buf_vpu_addr, cmdq->jobq->header.tail);
xa_unlock(&vdev->submitted_jobs_xa);
mutex_unlock(&file_priv->lock);
if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_HW))
ivpu_job_signal_and_destroy(vdev, job->job_id, VPU_JSM_STATUS_SUCCESS);
return 0;
err_xa_erase:
xa_erase(&vdev->submitted_jobs_xa, job->job_id);
err_job_put:
job_put(job);
err_unlock:
err_erase_xa:
__xa_erase(&vdev->submitted_jobs_xa, job->job_id);
err_unlock_submitted_jobs_xa:
xa_unlock(&vdev->submitted_jobs_xa);
err_unlock_file_priv:
mutex_unlock(&file_priv->lock);
ivpu_rpm_put(vdev);
return ret;
}
@ -488,6 +466,9 @@ int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
if (params->engine > DRM_IVPU_ENGINE_COPY)
return -EINVAL;
if (params->priority > DRM_IVPU_JOB_PRIORITY_REALTIME)
return -EINVAL;
if (params->buffer_count == 0 || params->buffer_count > JOB_MAX_BUFFER_COUNT)
return -EINVAL;
@ -509,44 +490,49 @@ int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
params->buffer_count * sizeof(u32));
if (ret) {
ret = -EFAULT;
goto free_handles;
goto err_free_handles;
}
if (!drm_dev_enter(&vdev->drm, &idx)) {
ret = -ENODEV;
goto free_handles;
goto err_free_handles;
}
ivpu_dbg(vdev, JOB, "Submit ioctl: ctx %u buf_count %u\n",
file_priv->ctx.id, params->buffer_count);
job = ivpu_create_job(file_priv, params->engine, params->buffer_count);
job = ivpu_job_create(file_priv, params->engine, params->buffer_count);
if (!job) {
ivpu_err(vdev, "Failed to create job\n");
ret = -ENOMEM;
goto dev_exit;
goto err_exit_dev;
}
ret = ivpu_job_prepare_bos_for_submit(file, job, buf_handles, params->buffer_count,
params->commands_offset);
if (ret) {
ivpu_err(vdev, "Failed to prepare job, ret %d\n", ret);
goto job_put;
ivpu_err(vdev, "Failed to prepare job: %d\n", ret);
goto err_destroy_job;
}
ret = ivpu_direct_job_submission(job);
if (ret) {
dma_fence_signal(job->done_fence);
ivpu_err(vdev, "Failed to submit job to the HW, ret %d\n", ret);
}
down_read(&vdev->pm->reset_lock);
ret = ivpu_job_submit(job);
up_read(&vdev->pm->reset_lock);
if (ret)
goto err_signal_fence;
job_put:
job_put(job);
dev_exit:
drm_dev_exit(idx);
free_handles:
kfree(buf_handles);
return ret;
err_signal_fence:
dma_fence_signal(job->done_fence);
err_destroy_job:
ivpu_job_destroy(job);
err_exit_dev:
drm_dev_exit(idx);
err_free_handles:
kfree(buf_handles);
return ret;
}
@ -568,7 +554,7 @@ ivpu_job_done_callback(struct ivpu_device *vdev, struct ivpu_ipc_hdr *ipc_hdr,
}
payload = (struct vpu_ipc_msg_payload_job_done *)&jsm_msg->payload;
ret = ivpu_job_done(vdev, payload->job_id, payload->job_status);
ret = ivpu_job_signal_and_destroy(vdev, payload->job_id, payload->job_status);
if (!ret && !xa_empty(&vdev->submitted_jobs_xa))
ivpu_start_job_timeout_detection(vdev);
}
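
Job IDs are carved into per-context ranges before __xa_alloc(), so exhausting a range now reports -EBUSY for that context only. A sketch of the range computation, assuming an illustrative 16-bit split; the real JOB_ID_* masks live in the driver headers and may differ:

#include <stdint.h>
#include <stdio.h>

/* Assumed layout, for illustration only: context id in the high bits,
 * per-context job slot in the low bits. */
#define JOB_ID_CONTEXT_SHIFT 16
#define JOB_ID_JOB_MASK      0x0000ffffu

int main(void)
{
	uint32_t ctx_id = 3;
	uint32_t min = (ctx_id - 1) << JOB_ID_CONTEXT_SHIFT;
	uint32_t max = min | JOB_ID_JOB_MASK;

	printf("ctx %u -> job ids [0x%08x, 0x%08x]\n", ctx_id, min, max);
	return 0;
}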

View File

@ -43,7 +43,6 @@ struct ivpu_cmdq {
will update the job status
*/
struct ivpu_job {
struct kref ref;
struct ivpu_device *vdev;
struct ivpu_file_priv *file_priv;
struct dma_fence *done_fence;
@ -56,7 +55,7 @@ struct ivpu_job {
int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
void ivpu_cmdq_release_all(struct ivpu_file_priv *file_priv);
void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv);
void ivpu_cmdq_reset_all_contexts(struct ivpu_device *vdev);
void ivpu_job_done_consumer_init(struct ivpu_device *vdev);

View File

@ -7,6 +7,7 @@
#include <linux/highmem.h>
#include "ivpu_drv.h"
#include "ivpu_hw.h"
#include "ivpu_hw_reg_io.h"
#include "ivpu_mmu.h"
#include "ivpu_mmu_context.h"
@ -518,6 +519,7 @@ static int ivpu_mmu_cmdq_sync(struct ivpu_device *vdev)
ivpu_err(vdev, "Timed out waiting for MMU consumer: %d, error: %s\n", ret,
ivpu_mmu_cmdq_err_to_str(err));
ivpu_hw_diagnose_failure(vdev);
}
return ret;
@ -885,7 +887,6 @@ static u32 *ivpu_mmu_get_event(struct ivpu_device *vdev)
void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev)
{
bool schedule_recovery = false;
u32 *event;
u32 ssid;
@ -895,14 +896,21 @@ void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev)
ivpu_mmu_dump_event(vdev, event);
ssid = FIELD_GET(IVPU_MMU_EVT_SSID_MASK, event[0]);
if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID)
schedule_recovery = true;
else
ivpu_mmu_user_context_mark_invalid(vdev, ssid);
}
if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID) {
ivpu_pm_trigger_recovery(vdev, "MMU event");
return;
}
if (schedule_recovery)
ivpu_pm_schedule_recovery(vdev);
ivpu_mmu_user_context_mark_invalid(vdev, ssid);
}
}
void ivpu_mmu_evtq_dump(struct ivpu_device *vdev)
{
u32 *event;
while ((event = ivpu_mmu_get_event(vdev)) != NULL)
ivpu_mmu_dump_event(vdev, event);
}
void ivpu_mmu_irq_gerr_handler(struct ivpu_device *vdev)

View File

@ -46,5 +46,6 @@ int ivpu_mmu_invalidate_tlb(struct ivpu_device *vdev, u16 ssid);
void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev);
void ivpu_mmu_irq_gerr_handler(struct ivpu_device *vdev);
void ivpu_mmu_evtq_dump(struct ivpu_device *vdev);
#endif /* __IVPU_MMU_H__ */

View File

@ -355,6 +355,9 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset;
size_t size = sg_dma_len(sg) + sg->offset;
ivpu_dbg(vdev, MMU_MAP, "Map ctx: %u dma_addr: 0x%llx vpu_addr: 0x%llx size: %lu\n",
ctx->id, dma_addr, vpu_addr, size);
ret = ivpu_mmu_context_map_pages(vdev, ctx, vpu_addr, dma_addr, size, prot);
if (ret) {
ivpu_err(vdev, "Failed to map context pages\n");
@ -366,6 +369,7 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
/* Ensure page table modifications are flushed from wc buffers to memory */
wmb();
mutex_unlock(&ctx->lock);
ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id);
@ -388,14 +392,19 @@ ivpu_mmu_context_unmap_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ct
mutex_lock(&ctx->lock);
for_each_sgtable_dma_sg(sgt, sg, i) {
dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset;
size_t size = sg_dma_len(sg) + sg->offset;
ivpu_dbg(vdev, MMU_MAP, "Unmap ctx: %u dma_addr: 0x%llx vpu_addr: 0x%llx size: %lu\n",
ctx->id, dma_addr, vpu_addr, size);
ivpu_mmu_context_unmap_pages(ctx, vpu_addr, size);
vpu_addr += size;
}
/* Ensure page table modifications are flushed from wc buffers to memory */
wmb();
mutex_unlock(&ctx->lock);
ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id);

View File

@ -13,6 +13,7 @@
#include "ivpu_drv.h"
#include "ivpu_hw.h"
#include "ivpu_fw.h"
#include "ivpu_fw_log.h"
#include "ivpu_ipc.h"
#include "ivpu_job.h"
#include "ivpu_jsm_msg.h"
@ -111,6 +112,14 @@ static void ivpu_pm_recovery_work(struct work_struct *work)
char *evt[2] = {"IVPU_PM_EVENT=IVPU_RECOVER", NULL};
int ret;
ivpu_err(vdev, "Recovering the VPU (reset #%d)\n", atomic_read(&vdev->pm->reset_counter));
ret = pm_runtime_resume_and_get(vdev->drm.dev);
if (ret)
ivpu_err(vdev, "Failed to resume VPU: %d\n", ret);
ivpu_fw_log_dump(vdev);
retry:
ret = pci_try_reset_function(to_pci_dev(vdev->drm.dev));
if (ret == -EAGAIN && !drm_dev_is_unplugged(&vdev->drm)) {
@ -122,11 +131,13 @@ retry:
ivpu_err(vdev, "Failed to reset VPU: %d\n", ret);
kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt);
pm_runtime_mark_last_busy(vdev->drm.dev);
pm_runtime_put_autosuspend(vdev->drm.dev);
}
void ivpu_pm_schedule_recovery(struct ivpu_device *vdev)
void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason)
{
struct ivpu_pm_info *pm = vdev->pm;
ivpu_err(vdev, "Recovery triggered by %s\n", reason);
if (ivpu_disable_recovery) {
ivpu_err(vdev, "Recovery not available when disable_recovery param is set\n");
@ -138,10 +149,11 @@ void ivpu_pm_schedule_recovery(struct ivpu_device *vdev)
return;
}
/* Schedule recovery if it's not in progress */
if (atomic_cmpxchg(&pm->in_reset, 0, 1) == 0) {
ivpu_hw_irq_disable(vdev);
queue_work(system_long_wq, &pm->recovery_work);
/* Trigger recovery if it's not in progress */
if (atomic_cmpxchg(&vdev->pm->reset_pending, 0, 1) == 0) {
ivpu_hw_diagnose_failure(vdev);
ivpu_hw_irq_disable(vdev); /* Disable IRQ early to protect from IRQ storm */
queue_work(system_long_wq, &vdev->pm->recovery_work);
}
}
@ -149,12 +161,8 @@ static void ivpu_job_timeout_work(struct work_struct *work)
{
struct ivpu_pm_info *pm = container_of(work, struct ivpu_pm_info, job_timeout_work.work);
struct ivpu_device *vdev = pm->vdev;
unsigned long timeout_ms = ivpu_tdr_timeout_ms ? ivpu_tdr_timeout_ms : vdev->timeout.tdr;
ivpu_err(vdev, "TDR detected, timeout %lu ms", timeout_ms);
ivpu_hw_diagnose_failure(vdev);
ivpu_pm_schedule_recovery(vdev);
ivpu_pm_trigger_recovery(vdev, "TDR");
}
void ivpu_start_job_timeout_detection(struct ivpu_device *vdev)
@ -227,6 +235,9 @@ int ivpu_pm_runtime_suspend_cb(struct device *dev)
bool hw_is_idle = true;
int ret;
drm_WARN_ON(&vdev->drm, !xa_empty(&vdev->submitted_jobs_xa));
drm_WARN_ON(&vdev->drm, work_pending(&vdev->pm->recovery_work));
ivpu_dbg(vdev, PM, "Runtime suspend..\n");
if (!ivpu_hw_is_idle(vdev) && vdev->pm->suspend_reschedule_counter) {
@ -247,7 +258,8 @@ int ivpu_pm_runtime_suspend_cb(struct device *dev)
ivpu_err(vdev, "Failed to set suspend VPU: %d\n", ret);
if (!hw_is_idle) {
ivpu_warn(vdev, "VPU failed to enter idle, force suspended.\n");
ivpu_err(vdev, "VPU failed to enter idle, force suspended.\n");
ivpu_fw_log_dump(vdev);
ivpu_pm_prepare_cold_boot(vdev);
} else {
ivpu_pm_prepare_warm_boot(vdev);
@ -308,11 +320,12 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
{
struct ivpu_device *vdev = pci_get_drvdata(pdev);
pm_runtime_get_sync(vdev->drm.dev);
ivpu_dbg(vdev, PM, "Pre-reset..\n");
atomic_inc(&vdev->pm->reset_counter);
atomic_set(&vdev->pm->in_reset, 1);
atomic_set(&vdev->pm->reset_pending, 1);
pm_runtime_get_sync(vdev->drm.dev);
down_write(&vdev->pm->reset_lock);
ivpu_prepare_for_reset(vdev);
ivpu_hw_reset(vdev);
ivpu_pm_prepare_cold_boot(vdev);
@ -329,9 +342,11 @@ void ivpu_pm_reset_done_cb(struct pci_dev *pdev)
ret = ivpu_resume(vdev);
if (ret)
ivpu_err(vdev, "Failed to set RESUME state: %d\n", ret);
atomic_set(&vdev->pm->in_reset, 0);
up_write(&vdev->pm->reset_lock);
atomic_set(&vdev->pm->reset_pending, 0);
ivpu_dbg(vdev, PM, "Post-reset done.\n");
pm_runtime_mark_last_busy(vdev->drm.dev);
pm_runtime_put_autosuspend(vdev->drm.dev);
}
@ -344,7 +359,10 @@ void ivpu_pm_init(struct ivpu_device *vdev)
pm->vdev = vdev;
pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT;
atomic_set(&pm->in_reset, 0);
init_rwsem(&pm->reset_lock);
atomic_set(&pm->reset_pending, 0);
atomic_set(&pm->reset_counter, 0);
INIT_WORK(&pm->recovery_work, ivpu_pm_recovery_work);
INIT_DELAYED_WORK(&pm->job_timeout_work, ivpu_job_timeout_work);
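
ivpu_pm_trigger_recovery() coalesces concurrent triggers through an atomic 0-to-1 compare-exchange on reset_pending, so diagnostics run and the recovery work is queued exactly once per reset cycle. A minimal sketch of that single-trigger gate in portable C11 atomics; the names are illustrative, not the driver's:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int reset_pending;

static void trigger_recovery(const char *reason)
{
	int expected = 0;

	/* Only the first caller flips 0 -> 1 and queues the work;
	 * concurrent callers see 1 and return. */
	if (atomic_compare_exchange_strong(&reset_pending, &expected, 1))
		printf("recovery queued: %s\n", reason);
	else
		printf("recovery already pending, ignoring: %s\n", reason);
}

int main(void)
{
	trigger_recovery("WDT NCE IRQ");
	trigger_recovery("IPC timeout"); /* coalesced with the first */
	return 0;
}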

View File

@ -6,6 +6,7 @@
#ifndef __IVPU_PM_H__
#define __IVPU_PM_H__
#include <linux/rwsem.h>
#include <linux/types.h>
struct ivpu_device;
@ -14,8 +15,9 @@ struct ivpu_pm_info {
struct ivpu_device *vdev;
struct delayed_work job_timeout_work;
struct work_struct recovery_work;
atomic_t in_reset;
struct rw_semaphore reset_lock;
atomic_t reset_counter;
atomic_t reset_pending;
bool is_warmboot;
u32 suspend_reschedule_counter;
};
@ -37,7 +39,7 @@ int __must_check ivpu_rpm_get(struct ivpu_device *vdev);
int __must_check ivpu_rpm_get_if_active(struct ivpu_device *vdev);
void ivpu_rpm_put(struct ivpu_device *vdev);
void ivpu_pm_schedule_recovery(struct ivpu_device *vdev);
void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason);
void ivpu_start_job_timeout_detection(struct ivpu_device *vdev);
void ivpu_stop_job_timeout_detection(struct ivpu_device *vdev);

View File

@ -48,6 +48,7 @@ enum {
enum board_ids {
/* board IDs by feature in alphabetical order */
board_ahci,
board_ahci_43bit_dma,
board_ahci_ign_iferr,
board_ahci_low_power,
board_ahci_no_debounce_delay,
@ -128,6 +129,13 @@ static const struct ata_port_info ahci_port_info[] = {
.udma_mask = ATA_UDMA6,
.port_ops = &ahci_ops,
},
[board_ahci_43bit_dma] = {
AHCI_HFLAGS (AHCI_HFLAG_43BIT_ONLY),
.flags = AHCI_FLAG_COMMON,
.pio_mask = ATA_PIO4,
.udma_mask = ATA_UDMA6,
.port_ops = &ahci_ops,
},
[board_ahci_ign_iferr] = {
AHCI_HFLAGS (AHCI_HFLAG_IGN_IRQ_IF_ERR),
.flags = AHCI_FLAG_COMMON,
@ -597,11 +605,11 @@ static const struct pci_device_id ahci_pci_tbl[] = {
{ PCI_VDEVICE(PROMISE, 0x3f20), board_ahci }, /* PDC42819 */
{ PCI_VDEVICE(PROMISE, 0x3781), board_ahci }, /* FastTrak TX8660 ahci-mode */
/* Asmedia */
/* ASMedia */
{ PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci }, /* ASM1060 */
{ PCI_VDEVICE(ASMEDIA, 0x0602), board_ahci }, /* ASM1060 */
{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci }, /* ASM1061 */
{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1062 */
{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci_43bit_dma }, /* ASM1061 */
{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci_43bit_dma }, /* ASM1061/1062 */
{ PCI_VDEVICE(ASMEDIA, 0x0621), board_ahci }, /* ASM1061R */
{ PCI_VDEVICE(ASMEDIA, 0x0622), board_ahci }, /* ASM1062R */
{ PCI_VDEVICE(ASMEDIA, 0x0624), board_ahci }, /* ASM1062+JMB575 */
@ -663,6 +671,11 @@ MODULE_PARM_DESC(mobile_lpm_policy, "Default LPM policy for mobile chipsets");
static void ahci_pci_save_initial_config(struct pci_dev *pdev,
struct ahci_host_priv *hpriv)
{
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == 0x1166) {
dev_info(&pdev->dev, "ASM1166 has only six ports\n");
hpriv->saved_port_map = 0x3f;
}
if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) {
dev_info(&pdev->dev, "JMB361 has only one port\n");
hpriv->saved_port_map = 1;
@ -949,11 +962,20 @@ static int ahci_pci_device_resume(struct device *dev)
#endif /* CONFIG_PM */
static int ahci_configure_dma_masks(struct pci_dev *pdev, int using_dac)
static int ahci_configure_dma_masks(struct pci_dev *pdev,
struct ahci_host_priv *hpriv)
{
const int dma_bits = using_dac ? 64 : 32;
int dma_bits;
int rc;
if (hpriv->cap & HOST_CAP_64) {
dma_bits = 64;
if (hpriv->flags & AHCI_HFLAG_43BIT_ONLY)
dma_bits = 43;
} else {
dma_bits = 32;
}
/*
* If the device fixup already set the dma_mask to some non-standard
* value, don't extend it here. This happens on STA2X11, for example.
@ -1926,7 +1948,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
ahci_gtf_filter_workaround(host);
/* initialize adapter */
rc = ahci_configure_dma_masks(pdev, hpriv->cap & HOST_CAP_64);
rc = ahci_configure_dma_masks(pdev, hpriv);
if (rc)
return rc;
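
ahci_configure_dma_masks() now derives the mask width from the host capabilities instead of a caller-passed flag: 64-bit when HOST_CAP_64 is set, narrowed to 43 bits for the ASM1061 parts flagged AHCI_HFLAG_43BIT_ONLY, otherwise 32. A sketch of that decision tree; HOST_CAP_64 matches its ahci.h value and the DMA_BIT_MASK stand-in mirrors the kernel macro:

#include <stdint.h>
#include <stdio.h>

#define HOST_CAP_64           (1u << 31)
#define AHCI_HFLAG_43BIT_ONLY (1u << 29)

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

static int pick_dma_bits(uint32_t cap, uint32_t hflags)
{
	if (cap & HOST_CAP_64)
		return (hflags & AHCI_HFLAG_43BIT_ONLY) ? 43 : 64;
	return 32;
}

int main(void)
{
	printf("ASM1061: %d-bit mask 0x%llx\n",
	       pick_dma_bits(HOST_CAP_64, AHCI_HFLAG_43BIT_ONLY),
	       (unsigned long long)DMA_BIT_MASK(43));
	return 0;
}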

View File

@ -247,6 +247,7 @@ enum {
AHCI_HFLAG_SUSPEND_PHYS = BIT(26), /* handle PHYs during
suspend/resume */
AHCI_HFLAG_NO_SXS = BIT(28), /* SXS not supported */
AHCI_HFLAG_43BIT_ONLY = BIT(29), /* 43bit DMA addr limit */
/* ap->flags bits */

View File

@ -784,7 +784,7 @@ bool sata_lpm_ignore_phy_events(struct ata_link *link)
EXPORT_SYMBOL_GPL(sata_lpm_ignore_phy_events);
static const char *ata_lpm_policy_names[] = {
[ATA_LPM_UNKNOWN] = "max_performance",
[ATA_LPM_UNKNOWN] = "keep_firmware_settings",
[ATA_LPM_MAX_POWER] = "max_performance",
[ATA_LPM_MED_POWER] = "medium_power",
[ATA_LPM_MED_POWER_WITH_DIPM] = "med_power_with_dipm",
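
After this fix the ATA_LPM_UNKNOWN slot reports keep_firmware_settings instead of aliasing max_performance, which belongs to ATA_LPM_MAX_POWER. A sketch of the resulting designated-initializer table; the enum here is a stand-in for the real enum ata_lpm_policy:

#include <stdio.h>

enum { ATA_LPM_UNKNOWN, ATA_LPM_MAX_POWER, ATA_LPM_MED_POWER,
       ATA_LPM_MED_POWER_WITH_DIPM };

static const char *ata_lpm_policy_names[] = {
	[ATA_LPM_UNKNOWN]		= "keep_firmware_settings",
	[ATA_LPM_MAX_POWER]		= "max_performance",
	[ATA_LPM_MED_POWER]		= "medium_power",
	[ATA_LPM_MED_POWER_WITH_DIPM]	= "med_power_with_dipm",
};

int main(void)
{
	printf("%s\n", ata_lpm_policy_names[ATA_LPM_UNKNOWN]);
	return 0;
}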

View File

@ -333,6 +333,7 @@ aoeblk_gdalloc(void *vp)
struct gendisk *gd;
mempool_t *mp;
struct blk_mq_tag_set *set;
sector_t ssize;
ulong flags;
int late = 0;
int err;
@ -396,7 +397,7 @@ aoeblk_gdalloc(void *vp)
gd->minors = AOE_PARTITIONS;
gd->fops = &aoe_bdops;
gd->private_data = d;
set_capacity(gd, d->ssize);
ssize = d->ssize;
snprintf(gd->disk_name, sizeof gd->disk_name, "etherd/e%ld.%d",
d->aoemajor, d->aoeminor);
@ -405,6 +406,8 @@ aoeblk_gdalloc(void *vp)
spin_unlock_irqrestore(&d->lock, flags);
set_capacity(gd, ssize);
err = device_add_disk(NULL, gd, aoe_attr_groups);
if (err)
goto out_disk_cleanup;
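
The fix snapshots d->ssize while d->lock is held and defers set_capacity() until after the unlock, since set_capacity() may block and must not run under a spinlock. A sketch of that snapshot-then-act shape, using a pthread spinlock as a stand-in:

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t d_lock;
static unsigned long d_ssize = 2048;

static void set_capacity_stub(unsigned long sectors)
{
	/* May sleep in the kernel; must not run under a spinlock. */
	printf("capacity set to %lu sectors\n", sectors);
}

int main(void)
{
	unsigned long ssize;

	pthread_spin_init(&d_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_lock(&d_lock);
	ssize = d_ssize;		/* snapshot while protected */
	pthread_spin_unlock(&d_lock);

	set_capacity_stub(ssize);	/* blocking work outside the lock */
	return 0;
}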

View File

@ -3452,14 +3452,15 @@ static bool rbd_lock_add_request(struct rbd_img_request *img_req)
static void rbd_lock_del_request(struct rbd_img_request *img_req)
{
struct rbd_device *rbd_dev = img_req->rbd_dev;
bool need_wakeup;
bool need_wakeup = false;
lockdep_assert_held(&rbd_dev->lock_rwsem);
spin_lock(&rbd_dev->lock_lists_lock);
rbd_assert(!list_empty(&img_req->lock_item));
list_del_init(&img_req->lock_item);
need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
list_empty(&rbd_dev->running_list));
if (!list_empty(&img_req->lock_item)) {
list_del_init(&img_req->lock_item);
need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
list_empty(&rbd_dev->running_list));
}
spin_unlock(&rbd_dev->lock_lists_lock);
if (need_wakeup)
complete(&rbd_dev->releasing_wait);
@ -3842,14 +3843,19 @@ static void wake_lock_waiters(struct rbd_device *rbd_dev, int result)
return;
}
list_for_each_entry(img_req, &rbd_dev->acquiring_list, lock_item) {
while (!list_empty(&rbd_dev->acquiring_list)) {
img_req = list_first_entry(&rbd_dev->acquiring_list,
struct rbd_img_request, lock_item);
mutex_lock(&img_req->state_mutex);
rbd_assert(img_req->state == RBD_IMG_EXCLUSIVE_LOCK);
if (!result)
list_move_tail(&img_req->lock_item,
&rbd_dev->running_list);
else
list_del_init(&img_req->lock_item);
rbd_img_schedule(img_req, result);
mutex_unlock(&img_req->state_mutex);
}
list_splice_tail_init(&rbd_dev->acquiring_list, &rbd_dev->running_list);
}
static bool locker_equal(const struct ceph_locker *lhs,
@ -5326,7 +5332,7 @@ static void rbd_dev_release(struct device *dev)
if (need_put) {
destroy_workqueue(rbd_dev->task_wq);
ida_simple_remove(&rbd_dev_id_ida, rbd_dev->dev_id);
ida_free(&rbd_dev_id_ida, rbd_dev->dev_id);
}
rbd_dev_free(rbd_dev);
@ -5402,9 +5408,9 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
return NULL;
/* get an id and fill in device name */
rbd_dev->dev_id = ida_simple_get(&rbd_dev_id_ida, 0,
minor_to_rbd_dev_id(1 << MINORBITS),
GFP_KERNEL);
rbd_dev->dev_id = ida_alloc_max(&rbd_dev_id_ida,
minor_to_rbd_dev_id(1 << MINORBITS) - 1,
GFP_KERNEL);
if (rbd_dev->dev_id < 0)
goto fail_rbd_dev;
@ -5425,7 +5431,7 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
return rbd_dev;
fail_dev_id:
ida_simple_remove(&rbd_dev_id_ida, rbd_dev->dev_id);
ida_free(&rbd_dev_id_ida, rbd_dev->dev_id);
fail_rbd_dev:
rbd_dev_free(rbd_dev);
return NULL;
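
The ida_simple_get() to ida_alloc_max() conversion needs the explicit "- 1" because ida_simple_get() treats its end argument as exclusive while ida_alloc_max() treats max as inclusive. A trivial sketch of the equivalent ranges; the limit value is a stand-in for minor_to_rbd_dev_id(1 << MINORBITS):

#include <stdio.h>

int main(void)
{
	int limit = 1 << 8;	/* stand-in limit */
	int old_end = limit;	/* ida_simple_get(): 'end' is exclusive */
	int new_max = limit - 1; /* ida_alloc_max(): 'max' is inclusive */

	printf("old ids: [0, %d)  new ids: [0, %d]  same set\n",
	       old_end, new_max);
	return 0;
}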

View File

@ -1232,14 +1232,13 @@ static void amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq);
min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq);
WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf);
WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf);
max_perf = clamp_t(unsigned long, max_perf, cpudata->min_limit_perf,
cpudata->max_limit_perf);
min_perf = clamp_t(unsigned long, min_perf, cpudata->min_limit_perf,
cpudata->max_limit_perf);
WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf);
WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf);
value = READ_ONCE(cpudata->cppc_req_cached);
if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
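
The reorder publishes max_limit_perf and min_limit_perf before the clamps so clamp_t() reads the freshly computed limits rather than the previous update's values. A sketch of why the ordering matters, with plain stores standing in for WRITE_ONCE():

#include <stdio.h>

static unsigned long max_limit_perf = 166, min_limit_perf = 40; /* stale */

static unsigned long clamp_ul(unsigned long v, unsigned long lo, unsigned long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned long max_perf = 255;

	/* Fixed order: publish the new limits first... */
	max_limit_perf = 200;
	min_limit_perf = 50;

	/* ...then clamp against them; with the old order this clamp
	 * would still see the stale 166/40 pair. */
	max_perf = clamp_ul(max_perf, min_limit_perf, max_limit_perf);
	printf("max_perf clamped to %lu\n", max_perf); /* 200, not 166 */
	return 0;
}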

Some files were not shown because too many files have changed in this diff.