Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Lots of simple overlapping additions.

With a build fix from Stephen Rothwell.

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2021-10-22 11:41:16 +01:00
commit bdfa75ad70
364 changed files with 3812 additions and 2436 deletions


@ -33,6 +33,8 @@ Al Viro <viro@zenIV.linux.org.uk>
Andi Kleen <ak@linux.intel.com> <ak@suse.de>
Andi Shyti <andi@etezian.org> <andi.shyti@samsung.com>
Andreas Herrmann <aherrman@de.ibm.com>
Andrej Shadura <andrew.shadura@collabora.co.uk>
Andrej Shadura <andrew@shadura.me> <andrew@beldisplaytech.com>
Andrew Morton <akpm@linux-foundation.org>
Andrew Murray <amurray@thegoodpenguin.co.uk> <amurray@embedded-bits.co.uk>
Andrew Murray <amurray@thegoodpenguin.co.uk> <andrew.murray@arm.com>


@ -171,7 +171,7 @@ examples:
cs-gpios = <&gpio0 13 0>,
<&gpio0 14 0>;
rx-sample-delay-ns = <3>;
spi-flash@1 {
flash@1 {
compatible = "spi-nand";
reg = <1>;
rx-sample-delay-ns = <7>;


@ -4,103 +4,112 @@
NTFS3
=====
Summary and Features
====================
NTFS3 is fully functional NTFS Read-Write driver. The driver works with
NTFS versions up to 3.1, normal/compressed/sparse files
and journal replaying. File system type to use on mount is 'ntfs3'.
NTFS3 is a fully functional NTFS read-write driver. The driver works with NTFS
versions up to 3.1. The file system type to use on mount is *ntfs3*.
- This driver implements NTFS read/write support for normal, sparse and
compressed files.
- Supports native journal replaying;
- Supports extended attributes
Predefined extended attributes:
- 'system.ntfs_security' gets/sets security
descriptor (SECURITY_DESCRIPTOR_RELATIVE)
- 'system.ntfs_attrib' gets/sets ntfs file/dir attributes.
Note: applied to empty files, this allows to switch type between
sparse(0x200), compressed(0x800) and normal;
- Supports native journal replaying.
- Supports NFS export of mounted NTFS volumes.
- Supports extended attributes. Predefined extended attributes:
- *system.ntfs_security* gets/sets the security
descriptor (SECURITY_DESCRIPTOR_RELATIVE).
- *system.ntfs_attrib* gets/sets ntfs file/dir attributes.
Note: Applied to empty files, this allows switching the type between
sparse(0x200), compressed(0x800) and normal.
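
As a rough illustration of the attribute interface above, here is a minimal
userspace sketch (not part of the patch) that reads the attribute word back
through the *system.ntfs_attrib* xattr; it assumes the value is exposed as a
raw 32-bit word, and the file path is whatever the caller supplies:

#include <stdio.h>
#include <stdint.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
	uint32_t attrib = 0;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file-on-ntfs3>\n", argv[0]);
		return 1;
	}
	/* Predefined xattr documented above; 0x200 sparse, 0x800 compressed. */
	if (getxattr(argv[1], "system.ntfs_attrib", &attrib, sizeof(attrib)) < 0) {
		perror("getxattr");
		return 1;
	}
	printf("attrib=0x%x%s%s\n", (unsigned int)attrib,
	       (attrib & 0x200) ? " (sparse)" : "",
	       (attrib & 0x800) ? " (compressed)" : "");
	return 0;
}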
Mount Options
=============
The list below describes mount options supported by NTFS3 driver in addition to
generic ones.
generic ones. Every mount option can also be given with a **no** prefix to
negate it; unless an entry below says otherwise, the default is the variant
without **no**. An example of passing these options follows the table.
===============================================================================
.. flat-table::
:widths: 1 5
:fill-cells:
nls=name This option informs the driver how to interpret path
strings and translate them to Unicode and back. If
this option is not set, the default codepage will be
used (CONFIG_NLS_DEFAULT).
Examples:
'nls=utf8'
* - iocharset=name
- This option informs the driver how to interpret path strings and
translate them to Unicode and back. If this option is not set, the
default codepage will be used (CONFIG_NLS_DEFAULT).
uid=
gid=
umask= Controls the default permissions for files/directories created
after the NTFS volume is mounted.
Example: iocharset=utf8
fmask=
dmask= Instead of specifying umask which applies both to
files and directories, fmask applies only to files and
dmask only to directories.
* - uid=
- :rspan:`1`
* - gid=
nohidden Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN)
attribute will not be shown under Linux.
* - umask=
- Controls the default permissions for files/directories created after
the NTFS volume is mounted.
sys_immutable Files with the Windows-specific SYSTEM
(FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system
immutable files.
* - dmask=
- :rspan:`1` Instead of specifying umask which applies both to files and
directories, fmask applies only to files and dmask only to directories.
* - fmask=
discard Enable support of the TRIM command for improved performance
on delete operations, which is recommended for use with the
solid-state drives (SSD).
* - noacsrules
- "No access rules" mount option sets access rights for files/folders to
777 and owner/group to root. This mount option absorbs all other
permissions.
force Forces the driver to mount partitions even if 'dirty' flag
(volume dirty) is set. Not recommended for use.
- Permissions change for files/folders will be reported as successful,
but they will remain 777.
sparse Create new files as "sparse".
- Owner/group change will be reported as successful, but they will stay
as root.
showmeta Use this parameter to show all meta-files (System Files) on
a mounted NTFS partition.
By default, all meta-files are hidden.
* - nohidden
- Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) attribute
will not be shown under Linux.
prealloc Preallocate space for files excessively when file size is
increasing on writes. Decreases fragmentation in case of
parallel write operations to different files.
* - sys_immutable
- Files with the Windows-specific SYSTEM (FILE_ATTRIBUTE_SYSTEM) attribute
will be marked as system immutable files.
no_acs_rules "No access rules" mount option sets access rights for
files/folders to 777 and owner/group to root. This mount
option absorbs all other permissions:
- permissions change for files/folders will be reported
as successful, but they will remain 777;
- owner/group change will be reported as successful, but
they will stay as root
* - discard
- Enable support of the TRIM command for improved performance on delete
operations, which is recommended for use with solid-state drives (SSD).
acl Support POSIX ACLs (Access Control Lists). Effective if
supported by Kernel. Not to be confused with NTFS ACLs.
The option specified as acl enables support for POSIX ACLs.
* - force
- Forces the driver to mount partitions even if the volume is marked dirty.
Not recommended for use.
noatime All files and directories will not update their last access
time attribute if a partition is mounted with this parameter.
This option can speed up file system operation.
* - sparse
- Create new files as sparse.
===============================================================================
* - showmeta
- Use this parameter to show all meta-files (System Files) on a mounted
NTFS partition. By default, all meta-files are hidden.
ToDo list
* - prealloc
- Preallocate space for files excessively when file size is increasing on
writes. Decreases fragmentation in case of parallel write operations to
different files.
* - acl
- Support POSIX ACLs (Access Control Lists). Effective if supported by the
kernel. Not to be confused with NTFS ACLs.
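
To make the options concrete, here is a minimal, hypothetical sketch of
mounting an ntfs3 volume from C through mount(2) with a few of the options
above; the device and mount point are placeholders:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Placeholder device and mount point; adjust for the target system. */
	const char *dev = "/dev/sdb1";
	const char *dir = "/mnt/ntfs";

	/* The data string carries the same options as "mount -o ...". */
	if (mount(dev, dir, "ntfs3", 0,
		  "iocharset=utf8,umask=022,discard,prealloc") != 0) {
		perror("mount");
		return 1;
	}
	printf("mounted %s on %s as ntfs3\n", dev, dir);
	return 0;
}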
Todo list
=========
- Full journaling support (currently journal replaying is supported) over JBD.
- Full journaling support over JBD. Currently only journal replaying is
supported, which is not necessarily as effective as JBD would be.
References
==========
https://www.paragon-software.com/home/ntfs-linux-professional/
- Commercial version of the NTFS driver for Linux.
- Commercial version of the NTFS driver for Linux.
https://www.paragon-software.com/home/ntfs-linux-professional/
almaz.alexandrovich@paragon-software.com
- Direct e-mail address for feedback and requests on the NTFS3 implementation.
- Direct e-mail address for feedback and requests on the NTFS3 implementation.
almaz.alexandrovich@paragon-software.com


@ -30,10 +30,11 @@ The ``ice`` driver reports the following versions
PHY, link, etc.
* - ``fw.mgmt.api``
- running
- 1.5
- 2-digit version number of the API exported over the AdminQ by the
management firmware. Used by the driver to identify what commands
are supported.
- 1.5.1
- 3-digit version number (major.minor.patch) of the API exported over
the AdminQ by the management firmware. Used by the driver to
identify what commands are supported. Historical versions of the
kernel only displayed a 2-digit version number (major.minor).
* - ``fw.mgmt.build``
- running
- 0x305d955f
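
Because the reported string grew from two fields to three, consumers have to
accept both forms. A small sketch of such handling (the function and values
are illustrative, not part of the driver or the devlink API):

#include <stdio.h>

/* Parse a fw.mgmt.api string that may be either major.minor or
 * major.minor.patch; older kernels report only two fields.
 */
static void parse_api_version(const char *s)
{
	unsigned int major = 0, minor = 0, patch = 0;
	int n = sscanf(s, "%u.%u.%u", &major, &minor, &patch);

	if (n < 2) {
		fprintf(stderr, "unrecognized version: %s\n", s);
		return;
	}
	printf("api %u.%u.%u (%d fields)\n", major, minor, patch, n);
}

int main(void)
{
	parse_api_version("1.5");	/* historical two-field format */
	parse_api_version("1.5.1");	/* current three-field format */
	return 0;
}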


@ -59,11 +59,11 @@ specified with a ``sockaddr`` type, with a single-byte endpoint address:
};
struct sockaddr_mctp {
unsigned short int smctp_family;
int smctp_network;
struct mctp_addr smctp_addr;
__u8 smctp_type;
__u8 smctp_tag;
__kernel_sa_family_t smctp_family;
unsigned int smctp_network;
struct mctp_addr smctp_addr;
__u8 smctp_type;
__u8 smctp_tag;
};
#define MCTP_NET_ANY 0x0
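
For context, a minimal sketch of how userspace might fill the updated
structure and bind a local endpoint. It assumes AF_MCTP and the MCTP_*
constants are available from the system headers (<sys/socket.h> and
<linux/mctp.h>), as in kernels shipping this ABI; the message type value
is a placeholder:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/mctp.h>

int main(void)
{
	struct sockaddr_mctp addr;
	int sd = socket(AF_MCTP, SOCK_DGRAM, 0);

	if (sd < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.smctp_family = AF_MCTP;
	addr.smctp_network = MCTP_NET_ANY;	/* any locally attached network */
	addr.smctp_addr.s_addr = MCTP_ADDR_ANY;	/* listen on all local EIDs */
	addr.smctp_type = 1;			/* placeholder message type */
	/* smctp_tag stays zero for a listening socket. */

	if (bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}
	return 0;
}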


@ -18,7 +18,7 @@ types can be added after the security issue of corresponding device driver
is clarified or fixed in the future.
Create/Destroy VDUSE devices
------------------------
----------------------------
VDUSE devices are created as follows:


@ -7349,10 +7349,11 @@ F: include/uapi/linux/fpga-dfl.h
FPGA MANAGER FRAMEWORK
M: Moritz Fischer <mdf@kernel.org>
M: Wu Hao <hao.wu@intel.com>
M: Xu Yilun <yilun.xu@intel.com>
R: Tom Rix <trix@redhat.com>
L: linux-fpga@vger.kernel.org
S: Maintained
W: http://www.rocketboards.org
Q: http://patchwork.kernel.org/project/linux-fpga/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git
F: Documentation/devicetree/bindings/fpga/
@ -10285,7 +10286,6 @@ KERNEL VIRTUAL MACHINE for s390 (KVM/s390)
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Janosch Frank <frankja@linux.ibm.com>
R: David Hildenbrand <david@redhat.com>
R: Cornelia Huck <cohuck@redhat.com>
R: Claudio Imbrenda <imbrenda@linux.ibm.com>
L: kvm@vger.kernel.org
S: Supported
@ -16308,6 +16308,7 @@ S390
M: Heiko Carstens <hca@linux.ibm.com>
M: Vasily Gorbik <gor@linux.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
R: Alexander Gordeev <agordeev@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
@ -16386,7 +16387,6 @@ F: drivers/s390/crypto/vfio_ap_ops.c
F: drivers/s390/crypto/vfio_ap_private.h
S390 VFIO-CCW DRIVER
M: Cornelia Huck <cohuck@redhat.com>
M: Eric Farman <farman@linux.ibm.com>
M: Matthew Rosato <mjrosato@linux.ibm.com>
R: Halil Pasic <pasic@linux.ibm.com>
@ -17993,7 +17993,7 @@ F: net/switchdev/
SY8106A REGULATOR DRIVER
M: Icenowy Zheng <icenowy@aosc.io>
S: Maintained
F: Documentation/devicetree/bindings/regulator/sy8106a-regulator.txt
F: Documentation/devicetree/bindings/regulator/silergy,sy8106a.yaml
F: drivers/regulator/sy8106a-regulator.c
SYNC FILE FRAMEWORK


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION = -rc6
NAME = Opossums on Parade
# *DOCUMENTATION*


@ -26,11 +26,6 @@ extern char empty_zero_page[PAGE_SIZE];
extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
/* Macro to mark a page protection as uncacheable */
#define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE))
extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
/* to cope with aliasing VIPT cache */
#define HAVE_ARCH_UNMAPPED_AREA


@ -40,8 +40,8 @@
regulator-always-on;
regulator-settling-time-us = <5000>;
gpios = <&expgpio 4 GPIO_ACTIVE_HIGH>;
states = <1800000 0x1
3300000 0x0>;
states = <1800000 0x1>,
<3300000 0x0>;
status = "okay";
};
@ -217,15 +217,16 @@
};
&pcie0 {
pci@1,0 {
pci@0,0 {
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
ranges;
reg = <0 0 0 0 0>;
usb@1,0 {
reg = <0x10000 0 0 0 0>;
usb@0,0 {
reg = <0 0 0 0 0>;
resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
};
};


@ -300,6 +300,14 @@
status = "disabled";
};
vec: vec@7ec13000 {
compatible = "brcm,bcm2711-vec";
reg = <0x7ec13000 0x1000>;
clocks = <&clocks BCM2835_CLOCK_VEC>;
interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
dvp: clock@7ef00000 {
compatible = "brcm,brcm2711-dvp";
reg = <0x7ef00000 0x10>;
@ -532,8 +540,8 @@
compatible = "brcm,genet-mdio-v5";
reg = <0xe14 0x8>;
reg-names = "mdio";
#address-cells = <0x0>;
#size-cells = <0x1>;
#address-cells = <0x1>;
#size-cells = <0x0>;
};
};
};


@ -106,6 +106,14 @@
status = "okay";
};
vec: vec@7e806000 {
compatible = "brcm,bcm2835-vec";
reg = <0x7e806000 0x1000>;
clocks = <&clocks BCM2835_CLOCK_VEC>;
interrupts = <2 27>;
status = "disabled";
};
pixelvalve@7e807000 {
compatible = "brcm,bcm2835-pixelvalve2";
reg = <0x7e807000 0x100>;


@ -464,14 +464,6 @@
status = "disabled";
};
vec: vec@7e806000 {
compatible = "brcm,bcm2835-vec";
reg = <0x7e806000 0x1000>;
clocks = <&clocks BCM2835_CLOCK_VEC>;
interrupts = <2 27>;
status = "disabled";
};
usb: usb@7e980000 {
compatible = "brcm,bcm2835-usb";
reg = <0x7e980000 0x10000>;


@ -197,7 +197,6 @@ CONFIG_PCI_EPF_TEST=m
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_OMAP_OCP2SCP=y
CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y


@ -46,7 +46,6 @@ CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_DMA_CMA=y
CONFIG_CMA_SIZE_MBYTES=64
CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y


@ -40,7 +40,6 @@ CONFIG_PCI_RCAR_GEN2=y
CONFIG_PCIE_RCAR_HOST=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_CFI=y


@ -9,6 +9,7 @@
#include <linux/iopoll.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/smp.h>
#include <asm/smp_plat.h>
@ -81,11 +82,6 @@ static const struct reset_control_ops imx_src_ops = {
.reset = imx_src_reset_module,
};
static struct reset_controller_dev imx_reset_controller = {
.ops = &imx_src_ops,
.nr_resets = ARRAY_SIZE(sw_reset_bits),
};
static void imx_gpcv2_set_m_core_pgc(bool enable, u32 offset)
{
writel_relaxed(enable, gpc_base + offset);
@ -177,10 +173,6 @@ void __init imx_src_init(void)
src_base = of_iomap(np, 0);
WARN_ON(!src_base);
imx_reset_controller.of_node = np;
if (IS_ENABLED(CONFIG_RESET_CONTROLLER))
reset_controller_register(&imx_reset_controller);
/*
* force warm reset sources to generate cold reset
* for a more reliable restart
@ -214,3 +206,33 @@ void __init imx7_src_init(void)
if (!gpc_base)
return;
}
static const struct of_device_id imx_src_dt_ids[] = {
{ .compatible = "fsl,imx51-src" },
{ /* sentinel */ }
};
static int imx_src_probe(struct platform_device *pdev)
{
struct reset_controller_dev *rcdev;
rcdev = devm_kzalloc(&pdev->dev, sizeof(*rcdev), GFP_KERNEL);
if (!rcdev)
return -ENOMEM;
rcdev->ops = &imx_src_ops;
rcdev->dev = &pdev->dev;
rcdev->of_node = pdev->dev.of_node;
rcdev->nr_resets = ARRAY_SIZE(sw_reset_bits);
return devm_reset_controller_register(&pdev->dev, rcdev);
}
static struct platform_driver imx_src_driver = {
.driver = {
.name = "imx-src",
.of_match_table = imx_src_dt_ids,
},
.probe = imx_src_probe,
};
builtin_platform_driver(imx_src_driver);


@ -112,7 +112,6 @@ config ARCH_OMAP2PLUS
select PM_GENERIC_DOMAINS
select PM_GENERIC_DOMAINS_OF
select RESET_CONTROLLER
select SIMPLE_PM_BUS
select SOC_BUS
select TI_SYSC
select OMAP_IRQCHIP


@ -245,7 +245,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_FW_LOADER_USER_HELPER=y
CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
CONFIG_HISILICON_LPC=y
CONFIG_SIMPLE_PM_BUS=y
CONFIG_FSL_MC_BUS=y
CONFIG_TEGRA_ACONNECT=m
CONFIG_GNSS=m


@ -24,6 +24,7 @@ struct hyp_pool {
/* Allocation */
void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
void hyp_split_page(struct hyp_page *page);
void hyp_get_page(struct hyp_pool *pool, void *addr);
void hyp_put_page(struct hyp_pool *pool, void *addr);


@ -35,7 +35,18 @@ const u8 pkvm_hyp_id = 1;
static void *host_s2_zalloc_pages_exact(size_t size)
{
return hyp_alloc_pages(&host_s2_pool, get_order(size));
void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
hyp_split_page(hyp_virt_to_page(addr));
/*
* The size of concatenated PGDs is always a power of two of PAGE_SIZE,
* so there should be no need to free any of the tail pages to make the
* allocation exact.
*/
WARN_ON(size != (PAGE_SIZE << get_order(size)));
return addr;
}
static void *host_s2_zalloc_page(void *pool)


@ -152,6 +152,7 @@ static inline void hyp_page_ref_inc(struct hyp_page *p)
static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
{
BUG_ON(!p->refcount);
p->refcount--;
return (p->refcount == 0);
}
@ -193,6 +194,20 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
hyp_spin_unlock(&pool->lock);
}
void hyp_split_page(struct hyp_page *p)
{
unsigned short order = p->order;
unsigned int i;
p->order = 0;
for (i = 1; i < (1 << order); i++) {
struct hyp_page *tail = p + i;
tail->order = 0;
hyp_set_page_refcounted(tail);
}
}
void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
{
unsigned short i = order;


@ -1529,8 +1529,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
* when updating the PG_mte_tagged page flag, see
* sanitise_mte_tags for more details.
*/
if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED)
return -EINVAL;
if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
ret = -EINVAL;
break;
}
if (vma->vm_flags & VM_PFNMAP) {
/* IO region dirty page logging not allowed */


@ -8,7 +8,7 @@ config CSKY
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
select COMMON_CLK
select CLKSRC_MMIO
@ -241,6 +241,7 @@ endchoice
menuconfig HAVE_TCM
bool "Tightly-Coupled/Sram Memory"
depends on !COMPILE_TEST
help
The implementation is not only used by TCM (Tightly-Coupled Memory)
but also by SRAM on the SoC bus. It follows the existing Linux TCM


@ -74,7 +74,6 @@ static __always_inline unsigned long __fls(unsigned long x)
* bug fix, why only could use atomic!!!!
*/
#include <asm-generic/bitops/non-atomic.h>
#define __clear_bit(nr, vaddr) clear_bit(nr, vaddr)
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic.h>


@ -99,7 +99,8 @@ static int gpr_set(struct task_struct *target,
if (ret)
return ret;
regs.sr = task_pt_regs(target)->sr;
/* BIT(0) of regs.sr is Condition Code/Carry bit */
regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));
#ifdef CONFIG_CPU_HAS_HILO
regs.dcsr = task_pt_regs(target)->dcsr;
#endif


@ -52,10 +52,14 @@ static long restore_sigcontext(struct pt_regs *regs,
struct sigcontext __user *sc)
{
int err = 0;
unsigned long sr = regs->sr;
/* sc_pt_regs is structured the same as the start of pt_regs */
err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs));
/* BIT(0) of regs->sr is Condition Code/Carry bit */
regs->sr = (sr & ~1) | (regs->sr & 1);
/* Restore the floating-point state. */
err |= restore_fpu_state(sc);


@ -9,7 +9,7 @@
static inline unsigned long arch_local_save_flags(void)
{
return RDCTL(CTL_STATUS);
return RDCTL(CTL_FSTATUS);
}
/*
@ -18,7 +18,7 @@ static inline unsigned long arch_local_save_flags(void)
*/
static inline void arch_local_irq_restore(unsigned long flags)
{
WRCTL(CTL_STATUS, flags);
WRCTL(CTL_FSTATUS, flags);
}
static inline void arch_local_irq_disable(void)


@ -11,7 +11,7 @@
#endif
/* control register numbers */
#define CTL_STATUS 0
#define CTL_FSTATUS 0
#define CTL_ESTATUS 1
#define CTL_BSTATUS 2
#define CTL_IENABLE 3


@ -126,14 +126,16 @@ _GLOBAL(idle_return_gpr_loss)
/*
* This is the sequence required to execute idle instructions, as
* specified in ISA v2.07 (and earlier). MSR[IR] and MSR[DR] must be 0.
*
* The 0(r1) slot is used to save r2 in isa206, so use that here.
* We have to store a GPR somewhere, ptesync, then reload it, and create
* a false dependency on the result of the load. It doesn't matter which
* GPR we store, or where we store it. We have already stored r2 to the
* stack at -8(r1) in isa206_idle_insn_mayloss, so use that.
*/
#define IDLE_STATE_ENTER_SEQ_NORET(IDLE_INST) \
/* Magic NAP/SLEEP/WINKLE mode enter sequence */ \
std r2,0(r1); \
std r2,-8(r1); \
ptesync; \
ld r2,0(r1); \
ld r2,-8(r1); \
236: cmpd cr0,r2,r2; \
bne 236b; \
IDLE_INST; \


@ -1730,8 +1730,6 @@ void __cpu_die(unsigned int cpu)
void arch_cpu_idle_dead(void)
{
sched_preempt_enable_no_resched();
/*
* Disable on the down path. This will be re-enabled by
* start_secondary() via start_secondary_resume() below


@ -255,13 +255,16 @@ kvm_novcpu_exit:
* r3 contains the SRR1 wakeup value, SRR1 is trashed.
*/
_GLOBAL(idle_kvm_start_guest)
ld r4,PACAEMERGSP(r13)
mfcr r5
mflr r0
std r1,0(r4)
std r5,8(r4)
std r0,16(r4)
subi r1,r4,STACK_FRAME_OVERHEAD
std r5, 8(r1) // Save CR in caller's frame
std r0, 16(r1) // Save LR in caller's frame
// Create frame on emergency stack
ld r4, PACAEMERGSP(r13)
stdu r1, -SWITCH_FRAME_SIZE(r4)
// Switch to new frame on emergency stack
mr r1, r4
std r3, 32(r1) // Save SRR1 wakeup value
SAVE_NVGPRS(r1)
/*
@ -313,6 +316,10 @@ kvm_unsplit_wakeup:
kvm_secondary_got_guest:
// About to go to guest, clear saved SRR1
li r0, 0
std r0, 32(r1)
/* Set HSTATE_DSCR(r13) to something sensible */
ld r6, PACA_DSCR_DEFAULT(r13)
std r6, HSTATE_DSCR(r13)
@ -392,13 +399,12 @@ kvm_no_guest:
mfspr r4, SPRN_LPCR
rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
mtspr SPRN_LPCR, r4
/* set up r3 for return */
mfspr r3,SPRN_SRR1
// Return SRR1 wakeup value, or 0 if we went into the guest
ld r3, 32(r1)
REST_NVGPRS(r1)
addi r1, r1, STACK_FRAME_OVERHEAD
ld r0, 16(r1)
ld r5, 8(r1)
ld r1, 0(r1)
ld r1, 0(r1) // Switch back to caller stack
ld r0, 16(r1) // Reload LR
ld r5, 8(r1) // Reload CR
mtlr r0
mtcr r5
blr


@ -945,7 +945,8 @@ static int xive_get_irqchip_state(struct irq_data *data,
* interrupt to be inactive in that case.
*/
*state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
(xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
(xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) &&
!irqd_irq_disabled(data)));
return 0;
default:
return -EINVAL;


@ -894,6 +894,11 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
/**
* guest_translate_address - translate guest logical into guest absolute address
* @vcpu: virtual cpu
* @gva: Guest virtual address
* @ar: Access register
* @gpa: Guest physical address
* @mode: Translation access mode
*
* Parameter semantics are the same as the ones from guest_translate.
* The memory contents at the guest address are not changed.
@ -934,6 +939,11 @@ int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
/**
* check_gva_range - test a range of guest virtual addresses for accessibility
* @vcpu: virtual cpu
* @gva: Guest virtual address
* @ar: Access register
* @length: Length of test range
* @mode: Translation access mode
*/
int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
unsigned long length, enum gacc_mode mode)
@ -956,6 +966,7 @@ int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
/**
* kvm_s390_check_low_addr_prot_real - check for low-address protection
* @vcpu: virtual cpu
* @gra: Guest real address
*
* Checks whether an address is subject to low-address protection and set
@ -979,6 +990,7 @@ int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra)
* @pgt: pointer to the beginning of the page table for the given address if
* successful (return value 0), or to the first invalid DAT entry in
* case of exceptions (return value > 0)
* @dat_protection: referenced memory is write protected
* @fake: pgt references contiguous guest memory block, not a pgtable
*/
static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,


@ -269,6 +269,7 @@ static int handle_prog(struct kvm_vcpu *vcpu)
/**
* handle_external_interrupt - used for external interruption interceptions
* @vcpu: virtual cpu
*
* This interception only occurs if the CPUSTAT_EXT_INT bit was set, or if
* the new PSW does not have external interrupts disabled. In the first case,
@ -315,7 +316,8 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu)
}
/**
* Handle MOVE PAGE partial execution interception.
* handle_mvpg_pei - Handle MOVE PAGE partial execution interception.
* @vcpu: virtual cpu
*
* This interception can only happen for guests with DAT disabled and
* addresses that are currently not mapped in the host. Thus we try to


@ -259,14 +259,13 @@ EXPORT_SYMBOL(strcmp);
#ifdef __HAVE_ARCH_STRRCHR
char *strrchr(const char *s, int c)
{
size_t len = __strend(s) - s;
ssize_t len = __strend(s) - s;
if (len)
do {
if (s[len] == (char) c)
return (char *) s + len;
} while (--len > 0);
return NULL;
do {
if (s[len] == (char)c)
return (char *)s + len;
} while (--len >= 0);
return NULL;
}
EXPORT_SYMBOL(strrchr);
#endif


@ -1525,7 +1525,6 @@ config AMD_MEM_ENCRYPT
config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
bool "Activate AMD Secure Memory Encryption (SME) by default"
default y
depends on AMD_MEM_ENCRYPT
help
Say yes to have system memory encrypted by default if running on


@ -68,6 +68,7 @@ static bool test_intel(int idx, void *data)
case INTEL_FAM6_BROADWELL_D:
case INTEL_FAM6_BROADWELL_G:
case INTEL_FAM6_BROADWELL_X:
case INTEL_FAM6_SAPPHIRERAPIDS_X:
case INTEL_FAM6_ATOM_SILVERMONT:
case INTEL_FAM6_ATOM_SILVERMONT_D:


@ -385,7 +385,7 @@ static int __fpu_restore_sig(void __user *buf, void __user *buf_fx,
return -EINVAL;
} else {
/* Mask invalid bits out for historical reasons (broken hardware). */
fpu->state.fxsave.mxcsr &= ~mxcsr_feature_mask;
fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
}
/* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */


@ -2321,13 +2321,14 @@ EXPORT_SYMBOL_GPL(kvm_apic_update_apicv);
void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
{
struct kvm_lapic *apic = vcpu->arch.apic;
u64 msr_val;
int i;
if (!init_event) {
vcpu->arch.apic_base = APIC_DEFAULT_PHYS_BASE |
MSR_IA32_APICBASE_ENABLE;
msr_val = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE;
if (kvm_vcpu_is_reset_bsp(vcpu))
vcpu->arch.apic_base |= MSR_IA32_APICBASE_BSP;
msr_val |= MSR_IA32_APICBASE_BSP;
kvm_lapic_set_base(vcpu, msr_val);
}
if (!apic)
@ -2336,11 +2337,9 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
/* Stop the timer in case it's a reset to an active apic */
hrtimer_cancel(&apic->lapic_timer.timer);
if (!init_event) {
apic->base_address = APIC_DEFAULT_PHYS_BASE;
/* The xAPIC ID is set at RESET even if the APIC was already enabled. */
if (!init_event)
kvm_apic_set_xapic_id(apic, vcpu->vcpu_id);
}
kvm_apic_set_version(apic->vcpu);
for (i = 0; i < KVM_APIC_LVT_NUM; i++)
@ -2481,6 +2480,11 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu, int timer_advance_ns)
lapic_timer_advance_dynamic = false;
}
/*
* Stuff the APIC ENABLE bit in lieu of temporarily incrementing
* apic_hw_disabled; the full RESET value is set by kvm_lapic_reset().
*/
vcpu->arch.apic_base = MSR_IA32_APICBASE_ENABLE;
static_branch_inc(&apic_sw_disabled.key); /* sw disabled at reset */
kvm_iodevice_init(&apic->dev, &apic_mmio_ops);
@ -2942,5 +2946,7 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
void kvm_lapic_exit(void)
{
static_key_deferred_flush(&apic_hw_disabled);
WARN_ON(static_branch_unlikely(&apic_hw_disabled.key));
static_key_deferred_flush(&apic_sw_disabled);
WARN_ON(static_branch_unlikely(&apic_sw_disabled.key));
}


@ -618,7 +618,12 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
vmsa.address = __sme_pa(svm->vmsa);
vmsa.len = PAGE_SIZE;
return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
if (ret)
return ret;
vcpu->arch.guest_state_protected = true;
return 0;
}
static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
@ -2583,7 +2588,7 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
return -EINVAL;
return kvm_sev_es_string_io(&svm->vcpu, size, port,
svm->ghcb_sa, svm->ghcb_sa_len, in);
svm->ghcb_sa, svm->ghcb_sa_len / size, in);
}
void sev_es_init_vmcb(struct vcpu_svm *svm)


@ -191,7 +191,7 @@ struct vcpu_svm {
/* SEV-ES scratch area support */
void *ghcb_sa;
u64 ghcb_sa_len;
u32 ghcb_sa_len;
bool ghcb_sa_sync;
bool ghcb_sa_free;


@ -5562,9 +5562,13 @@ static int handle_encls(struct kvm_vcpu *vcpu)
static int handle_bus_lock_vmexit(struct kvm_vcpu *vcpu)
{
vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
vcpu->run->flags |= KVM_RUN_X86_BUS_LOCK;
return 0;
/*
* Hardware may or may not set the BUS_LOCK_DETECTED flag on BUS_LOCK
* VM-Exits. Unconditionally set the flag here and leave the handling to
* vmx_handle_exit().
*/
to_vmx(vcpu)->exit_reason.bus_lock_detected = true;
return 1;
}
/*
@ -6051,9 +6055,8 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
int ret = __vmx_handle_exit(vcpu, exit_fastpath);
/*
* Even when current exit reason is handled by KVM internally, we
* still need to exit to user space when bus lock detected to inform
* that there is a bus lock in guest.
* Exit to user space when bus lock detected to inform that there is
* a bus lock in guest.
*/
if (to_vmx(vcpu)->exit_reason.bus_lock_detected) {
if (ret > 0)


@ -11392,7 +11392,8 @@ static int memslot_rmap_alloc(struct kvm_memory_slot *slot,
int level = i + 1;
int lpages = __kvm_mmu_slot_lpages(slot, npages, level);
WARN_ON(slot->arch.rmap[i]);
if (slot->arch.rmap[i])
continue;
slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
if (!slot->arch.rmap[i]) {


@ -666,6 +666,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
bfqg_and_blkg_put(bfqq_group(bfqq));
if (entity->parent &&
entity->parent->last_bfqq_created == bfqq)
entity->parent->last_bfqq_created = NULL;
else if (bfqd->last_bfqq_created == bfqq)
bfqd->last_bfqq_created = NULL;
entity->parent = bfqg->my_entity;
entity->sched_data = &bfqg->sched_data;
/* pin down bfqg and its associated blkg */


@ -49,7 +49,6 @@
#include "blk-mq.h"
#include "blk-mq-sched.h"
#include "blk-pm.h"
#include "blk-rq-qos.h"
struct dentry *blk_debugfs_root;
@ -337,23 +336,25 @@ void blk_put_queue(struct request_queue *q)
}
EXPORT_SYMBOL(blk_put_queue);
void blk_set_queue_dying(struct request_queue *q)
void blk_queue_start_drain(struct request_queue *q)
{
blk_queue_flag_set(QUEUE_FLAG_DYING, q);
/*
* When queue DYING flag is set, we need to block new req
* entering queue, so we call blk_freeze_queue_start() to
* prevent I/O from crossing blk_queue_enter().
*/
blk_freeze_queue_start(q);
if (queue_is_mq(q))
blk_mq_wake_waiters(q);
/* Make blk_queue_enter() reexamine the DYING flag. */
wake_up_all(&q->mq_freeze_wq);
}
void blk_set_queue_dying(struct request_queue *q)
{
blk_queue_flag_set(QUEUE_FLAG_DYING, q);
blk_queue_start_drain(q);
}
EXPORT_SYMBOL_GPL(blk_set_queue_dying);
/**
@ -385,13 +386,8 @@ void blk_cleanup_queue(struct request_queue *q)
*/
blk_freeze_queue(q);
rq_qos_exit(q);
blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
/* for synchronous bio-based driver finish in-flight integrity i/o */
blk_flush_integrity();
blk_sync_queue(q);
if (queue_is_mq(q))
blk_mq_exit_queue(q);
@ -416,6 +412,30 @@ void blk_cleanup_queue(struct request_queue *q)
}
EXPORT_SYMBOL(blk_cleanup_queue);
static bool blk_try_enter_queue(struct request_queue *q, bool pm)
{
rcu_read_lock();
if (!percpu_ref_tryget_live(&q->q_usage_counter))
goto fail;
/*
* The code that increments the pm_only counter must ensure that the
* counter is globally visible before the queue is unfrozen.
*/
if (blk_queue_pm_only(q) &&
(!pm || queue_rpm_status(q) == RPM_SUSPENDED))
goto fail_put;
rcu_read_unlock();
return true;
fail_put:
percpu_ref_put(&q->q_usage_counter);
fail:
rcu_read_unlock();
return false;
}
/**
* blk_queue_enter() - try to increase q->q_usage_counter
* @q: request queue pointer
@ -425,40 +445,18 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
{
const bool pm = flags & BLK_MQ_REQ_PM;
while (true) {
bool success = false;
rcu_read_lock();
if (percpu_ref_tryget_live(&q->q_usage_counter)) {
/*
* The code that increments the pm_only counter is
* responsible for ensuring that that counter is
* globally visible before the queue is unfrozen.
*/
if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) ||
!blk_queue_pm_only(q)) {
success = true;
} else {
percpu_ref_put(&q->q_usage_counter);
}
}
rcu_read_unlock();
if (success)
return 0;
while (!blk_try_enter_queue(q, pm)) {
if (flags & BLK_MQ_REQ_NOWAIT)
return -EBUSY;
/*
* read pair of barrier in blk_freeze_queue_start(),
* we need to order reading __PERCPU_REF_DEAD flag of
* .q_usage_counter and reading .mq_freeze_depth or
* queue dying flag, otherwise the following wait may
* never return if the two reads are reordered.
* read pair of barrier in blk_freeze_queue_start(), we need to
* order reading __PERCPU_REF_DEAD flag of .q_usage_counter and
* reading .mq_freeze_depth or queue dying flag, otherwise the
* following wait may never return if the two reads are
* reordered.
*/
smp_rmb();
wait_event(q->mq_freeze_wq,
(!q->mq_freeze_depth &&
blk_pm_resume_queue(pm, q)) ||
@ -466,23 +464,43 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
if (blk_queue_dying(q))
return -ENODEV;
}
return 0;
}
static inline int bio_queue_enter(struct bio *bio)
{
struct request_queue *q = bio->bi_bdev->bd_disk->queue;
bool nowait = bio->bi_opf & REQ_NOWAIT;
int ret;
struct gendisk *disk = bio->bi_bdev->bd_disk;
struct request_queue *q = disk->queue;
ret = blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0);
if (unlikely(ret)) {
if (nowait && !blk_queue_dying(q))
while (!blk_try_enter_queue(q, false)) {
if (bio->bi_opf & REQ_NOWAIT) {
if (test_bit(GD_DEAD, &disk->state))
goto dead;
bio_wouldblock_error(bio);
else
bio_io_error(bio);
return -EBUSY;
}
/*
* read pair of barrier in blk_freeze_queue_start(), we need to
* order reading __PERCPU_REF_DEAD flag of .q_usage_counter and
* reading .mq_freeze_depth or queue dying flag, otherwise the
* following wait may never return if the two reads are
* reordered.
*/
smp_rmb();
wait_event(q->mq_freeze_wq,
(!q->mq_freeze_depth &&
blk_pm_resume_queue(false, q)) ||
test_bit(GD_DEAD, &disk->state));
if (test_bit(GD_DEAD, &disk->state))
goto dead;
}
return ret;
return 0;
dead:
bio_io_error(bio);
return -ENODEV;
}
void blk_queue_exit(struct request_queue *q)
@ -899,11 +917,18 @@ static blk_qc_t __submit_bio(struct bio *bio)
struct gendisk *disk = bio->bi_bdev->bd_disk;
blk_qc_t ret = BLK_QC_T_NONE;
if (blk_crypto_bio_prep(&bio)) {
if (!disk->fops->submit_bio)
return blk_mq_submit_bio(bio);
if (unlikely(bio_queue_enter(bio) != 0))
return BLK_QC_T_NONE;
if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
goto queue_exit;
if (disk->fops->submit_bio) {
ret = disk->fops->submit_bio(bio);
goto queue_exit;
}
return blk_mq_submit_bio(bio);
queue_exit:
blk_queue_exit(disk->queue);
return ret;
}
@ -941,9 +966,6 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio)
struct request_queue *q = bio->bi_bdev->bd_disk->queue;
struct bio_list lower, same;
if (unlikely(bio_queue_enter(bio) != 0))
continue;
/*
* Create a fresh bio_list for all subordinate requests.
*/
@ -979,23 +1001,12 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio)
static blk_qc_t __submit_bio_noacct_mq(struct bio *bio)
{
struct bio_list bio_list[2] = { };
blk_qc_t ret = BLK_QC_T_NONE;
blk_qc_t ret;
current->bio_list = bio_list;
do {
struct gendisk *disk = bio->bi_bdev->bd_disk;
if (unlikely(bio_queue_enter(bio) != 0))
continue;
if (!blk_crypto_bio_prep(&bio)) {
blk_queue_exit(disk->queue);
ret = BLK_QC_T_NONE;
continue;
}
ret = blk_mq_submit_bio(bio);
ret = __submit_bio(bio);
} while ((bio = bio_list_pop(&bio_list[0])));
current->bio_list = NULL;
@ -1013,9 +1024,6 @@ static blk_qc_t __submit_bio_noacct_mq(struct bio *bio)
*/
blk_qc_t submit_bio_noacct(struct bio *bio)
{
if (!submit_bio_checks(bio))
return BLK_QC_T_NONE;
/*
* We only want one ->submit_bio to be active at a time, else stack
* usage with stacked devices could be a problem. Use current->bio_list


@ -188,9 +188,11 @@ void blk_mq_freeze_queue(struct request_queue *q)
}
EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
void blk_mq_unfreeze_queue(struct request_queue *q)
void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
{
mutex_lock(&q->mq_freeze_lock);
if (force_atomic)
q->q_usage_counter.data->force_atomic = true;
q->mq_freeze_depth--;
WARN_ON_ONCE(q->mq_freeze_depth < 0);
if (!q->mq_freeze_depth) {
@ -199,6 +201,11 @@ void blk_mq_unfreeze_queue(struct request_queue *q)
}
mutex_unlock(&q->mq_freeze_lock);
}
void blk_mq_unfreeze_queue(struct request_queue *q)
{
__blk_mq_unfreeze_queue(q, false);
}
EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
/*


@ -51,6 +51,8 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
void blk_free_flush_queue(struct blk_flush_queue *q);
void blk_freeze_queue(struct request_queue *q);
void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
void blk_queue_start_drain(struct request_queue *q);
#define BIO_INLINE_VECS 4
struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,


@ -26,6 +26,7 @@
#include <linux/badblocks.h>
#include "blk.h"
#include "blk-rq-qos.h"
static struct kobject *block_depr;
@ -559,6 +560,8 @@ EXPORT_SYMBOL(device_add_disk);
*/
void del_gendisk(struct gendisk *disk)
{
struct request_queue *q = disk->queue;
might_sleep();
if (WARN_ON_ONCE(!disk_live(disk) && !(disk->flags & GENHD_FL_HIDDEN)))
@ -575,8 +578,27 @@ void del_gendisk(struct gendisk *disk)
fsync_bdev(disk->part0);
__invalidate_device(disk->part0, true);
/*
* Fail any new I/O.
*/
set_bit(GD_DEAD, &disk->state);
set_capacity(disk, 0);
/*
* Prevent new I/O from crossing bio_queue_enter().
*/
blk_queue_start_drain(q);
blk_mq_freeze_queue_wait(q);
rq_qos_exit(q);
blk_sync_queue(q);
blk_flush_integrity();
/*
* Allow using passthrough request again after the queue is torn down.
*/
blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q);
__blk_mq_unfreeze_queue(q, true);
if (!(disk->flags & GENHD_FL_HIDDEN)) {
sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
@ -1056,6 +1078,7 @@ static void disk_release(struct device *dev)
struct gendisk *disk = dev_to_disk(dev);
might_sleep();
WARN_ON_ONCE(disk_live(disk));
disk_release_events(disk);
kfree(disk->random);


@ -151,6 +151,7 @@ struct kyber_ctx_queue {
struct kyber_queue_data {
struct request_queue *q;
dev_t dev;
/*
* Each scheduling domain has a limited number of in-flight requests
@ -257,7 +258,7 @@ static int calculate_percentile(struct kyber_queue_data *kqd,
}
memset(buckets, 0, sizeof(kqd->latency_buckets[sched_domain][type]));
trace_kyber_latency(kqd->q, kyber_domain_names[sched_domain],
trace_kyber_latency(kqd->dev, kyber_domain_names[sched_domain],
kyber_latency_type_names[type], percentile,
bucket + 1, 1 << KYBER_LATENCY_SHIFT, samples);
@ -270,7 +271,7 @@ static void kyber_resize_domain(struct kyber_queue_data *kqd,
depth = clamp(depth, 1U, kyber_depth[sched_domain]);
if (depth != kqd->domain_tokens[sched_domain].sb.depth) {
sbitmap_queue_resize(&kqd->domain_tokens[sched_domain], depth);
trace_kyber_adjust(kqd->q, kyber_domain_names[sched_domain],
trace_kyber_adjust(kqd->dev, kyber_domain_names[sched_domain],
depth);
}
}
@ -366,6 +367,7 @@ static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q)
goto err;
kqd->q = q;
kqd->dev = disk_devt(q->disk);
kqd->cpu_latency = alloc_percpu_gfp(struct kyber_cpu_latency,
GFP_KERNEL | __GFP_ZERO);
@ -774,7 +776,7 @@ kyber_dispatch_cur_domain(struct kyber_queue_data *kqd,
list_del_init(&rq->queuelist);
return rq;
} else {
trace_kyber_throttled(kqd->q,
trace_kyber_throttled(kqd->dev,
kyber_domain_names[khd->cur_domain]);
}
} else if (sbitmap_any_bit_set(&khd->kcq_map[khd->cur_domain])) {
@ -787,7 +789,7 @@ kyber_dispatch_cur_domain(struct kyber_queue_data *kqd,
list_del_init(&rq->queuelist);
return rq;
} else {
trace_kyber_throttled(kqd->q,
trace_kyber_throttled(kqd->dev,
kyber_domain_names[khd->cur_domain]);
}
}


@ -21,6 +21,7 @@
#include <linux/earlycpio.h>
#include <linux/initrd.h>
#include <linux/security.h>
#include <linux/kmemleak.h>
#include "internal.h"
#ifdef CONFIG_ACPI_CUSTOM_DSDT
@ -601,6 +602,8 @@ void __init acpi_table_upgrade(void)
*/
arch_reserve_mem_area(acpi_tables_addr, all_tables_size);
kmemleak_ignore_phys(acpi_tables_addr);
/*
* early_ioremap only can remap 256k one time. If we map all
* tables one time, we will hit the limit. Need to map chunks


@ -371,7 +371,7 @@ static int lps0_device_attach(struct acpi_device *adev,
return 0;
if (acpi_s2idle_vendor_amd()) {
/* AMD0004, AMDI0005:
/* AMD0004, AMD0005, AMDI0005:
* - Should use rev_id 0x0
* - function mask > 0x3: Should use AMD method, but has off by one bug
* - function mask = 0x3: Should use Microsoft method
@ -390,6 +390,7 @@ static int lps0_device_attach(struct acpi_device *adev,
ACPI_LPS0_DSM_UUID_MICROSOFT, 0,
&lps0_dsm_guid_microsoft);
if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") ||
!strcmp(hid, "AMD0005") ||
!strcmp(hid, "AMDI0005"))) {
lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1;
acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",


@ -440,10 +440,7 @@ struct ahci_host_priv *ahci_platform_get_resources(struct platform_device *pdev,
hpriv->phy_regulator = devm_regulator_get(dev, "phy");
if (IS_ERR(hpriv->phy_regulator)) {
rc = PTR_ERR(hpriv->phy_regulator);
if (rc == -EPROBE_DEFER)
goto err_out;
rc = 0;
hpriv->phy_regulator = NULL;
goto err_out;
}
if (flags & AHCI_PLATFORM_GET_RESETS) {


@ -352,7 +352,8 @@ static unsigned int pdc_data_xfer_vlb(struct ata_queued_cmd *qc,
iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
__le32 pad;
__le32 pad = 0;
if (rw == READ) {
pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
memcpy(buf + buflen - slop, &pad, slop);
@ -742,7 +743,8 @@ static unsigned int vlb32_data_xfer(struct ata_queued_cmd *qc,
ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
__le32 pad;
__le32 pad = 0;
if (rw == WRITE) {
memcpy(&pad, buf + buflen - slop, slop);
iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);


@ -687,7 +687,8 @@ struct device_link *device_link_add(struct device *consumer,
{
struct device_link *link;
if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS ||
if (!consumer || !supplier || consumer == supplier ||
flags & ~DL_ADD_VALID_FLAGS ||
(flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
(flags & DL_FLAG_SYNC_STATE_ONLY &&
(flags & ~DL_FLAG_INFERRED) != DL_FLAG_SYNC_STATE_ONLY) ||


@ -373,10 +373,22 @@ static int brd_alloc(int i)
struct gendisk *disk;
char buf[DISK_NAME_LEN];
mutex_lock(&brd_devices_mutex);
list_for_each_entry(brd, &brd_devices, brd_list) {
if (brd->brd_number == i) {
mutex_unlock(&brd_devices_mutex);
return -EEXIST;
}
}
brd = kzalloc(sizeof(*brd), GFP_KERNEL);
if (!brd)
if (!brd) {
mutex_unlock(&brd_devices_mutex);
return -ENOMEM;
}
brd->brd_number = i;
list_add_tail(&brd->brd_list, &brd_devices);
mutex_unlock(&brd_devices_mutex);
spin_lock_init(&brd->brd_lock);
INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC);
@ -411,37 +423,30 @@ static int brd_alloc(int i)
blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, disk->queue);
add_disk(disk);
list_add_tail(&brd->brd_list, &brd_devices);
return 0;
out_free_dev:
mutex_lock(&brd_devices_mutex);
list_del(&brd->brd_list);
mutex_unlock(&brd_devices_mutex);
kfree(brd);
return -ENOMEM;
}
static void brd_probe(dev_t dev)
{
int i = MINOR(dev) / max_part;
struct brd_device *brd;
mutex_lock(&brd_devices_mutex);
list_for_each_entry(brd, &brd_devices, brd_list) {
if (brd->brd_number == i)
goto out_unlock;
}
brd_alloc(i);
out_unlock:
mutex_unlock(&brd_devices_mutex);
brd_alloc(MINOR(dev) / max_part);
}
static void brd_del_one(struct brd_device *brd)
{
list_del(&brd->brd_list);
del_gendisk(brd->brd_disk);
blk_cleanup_disk(brd->brd_disk);
brd_free_pages(brd);
mutex_lock(&brd_devices_mutex);
list_del(&brd->brd_list);
mutex_unlock(&brd_devices_mutex);
kfree(brd);
}
@ -491,25 +496,21 @@ static int __init brd_init(void)
brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
mutex_lock(&brd_devices_mutex);
for (i = 0; i < rd_nr; i++) {
err = brd_alloc(i);
if (err)
goto out_free;
}
mutex_unlock(&brd_devices_mutex);
pr_info("brd: module loaded\n");
return 0;
out_free:
unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
debugfs_remove_recursive(brd_debugfs_dir);
list_for_each_entry_safe(brd, next, &brd_devices, brd_list)
brd_del_one(brd);
mutex_unlock(&brd_devices_mutex);
unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
pr_info("brd: module NOT loaded !!!\n");
return err;
@ -519,13 +520,12 @@ static void __exit brd_exit(void)
{
struct brd_device *brd, *next;
unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
debugfs_remove_recursive(brd_debugfs_dir);
list_for_each_entry_safe(brd, next, &brd_devices, brd_list)
brd_del_one(brd);
unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
pr_info("brd: module unloaded\n");
}


@ -71,8 +71,10 @@ static int rnbd_clt_parse_map_options(const char *buf, size_t max_path_cnt,
int opt_mask = 0;
int token;
int ret = -EINVAL;
int i, dest_port, nr_poll_queues;
int nr_poll_queues = 0;
int dest_port = 0;
int p_cnt = 0;
int i;
options = kstrdup(buf, GFP_KERNEL);
if (!options)


@ -689,28 +689,6 @@ static const struct blk_mq_ops virtio_mq_ops = {
static unsigned int virtblk_queue_depth;
module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
static int virtblk_validate(struct virtio_device *vdev)
{
u32 blk_size;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
if (!virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE))
return 0;
blk_size = virtio_cread32(vdev,
offsetof(struct virtio_blk_config, blk_size));
if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE)
__virtio_clear_bit(vdev, VIRTIO_BLK_F_BLK_SIZE);
return 0;
}
static int virtblk_probe(struct virtio_device *vdev)
{
struct virtio_blk *vblk;
@ -722,6 +700,12 @@ static int virtblk_probe(struct virtio_device *vdev)
u8 physical_block_exp, alignment_offset;
unsigned int queue_depth;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS),
GFP_KERNEL);
if (err < 0)
@ -836,14 +820,6 @@ static int virtblk_probe(struct virtio_device *vdev)
else
blk_size = queue_logical_block_size(q);
if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) {
dev_err(&vdev->dev,
"block size is changed unexpectedly, now is %u\n",
blk_size);
err = -EINVAL;
goto out_cleanup_disk;
}
/* Use topology information if available */
err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
struct virtio_blk_config, physical_block_exp,
@ -1009,7 +985,6 @@ static struct virtio_driver virtio_blk = {
.driver.name = KBUILD_MODNAME,
.driver.owner = THIS_MODULE,
.id_table = id_table,
.validate = virtblk_validate,
.probe = virtblk_probe,
.remove = virtblk_remove,
.config_changed = virtblk_config_changed,


@ -152,18 +152,6 @@ config QCOM_EBI2
Interface 2, which can be used to connect things like NAND Flash,
SRAM, ethernet adapters, FPGAs and LCD displays.
config SIMPLE_PM_BUS
tristate "Simple Power-Managed Bus Driver"
depends on OF && PM
help
Driver for transparent busses that don't need a real driver, but
where the bus controller is part of a PM domain, or under the control
of a functional clock, and thus relies on runtime PM for managing
this PM domain and/or clock.
An example of such a bus controller is the Renesas Bus State
Controller (BSC, sometimes called "LBSC within Bus Bridge", or
"External Bus Interface") as found on several Renesas ARM SoCs.
config SUN50I_DE2_BUS
bool "Allwinner A64 DE2 Bus Driver"
default ARM64


@ -27,7 +27,7 @@ obj-$(CONFIG_OMAP_OCP2SCP) += omap-ocp2scp.o
obj-$(CONFIG_QCOM_EBI2) += qcom-ebi2.o
obj-$(CONFIG_SUN50I_DE2_BUS) += sun50i-de2.o
obj-$(CONFIG_SUNXI_RSB) += sunxi-rsb.o
obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o
obj-$(CONFIG_OF) += simple-pm-bus.o
obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o
obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o
obj-$(CONFIG_TI_PWMSS) += ti-pwmss.o


@ -13,11 +13,36 @@
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
static int simple_pm_bus_probe(struct platform_device *pdev)
{
const struct of_dev_auxdata *lookup = dev_get_platdata(&pdev->dev);
struct device_node *np = pdev->dev.of_node;
const struct device *dev = &pdev->dev;
const struct of_dev_auxdata *lookup = dev_get_platdata(dev);
struct device_node *np = dev->of_node;
const struct of_device_id *match;
/*
* Allow user to use driver_override to bind this driver to a
* transparent bus device which has a different compatible string
* that's not listed in simple_pm_bus_of_match. We don't want to do any
* of the simple-pm-bus tasks for these devices, so return early.
*/
if (pdev->driver_override)
return 0;
match = of_match_device(dev->driver->of_match_table, dev);
/*
* These are transparent bus devices (not simple-pm-bus matches) that
* have their child nodes populated automatically. So, we don't need to
* do anything more. We only match with the device if this driver is
* the most specific match because we don't want to incorrectly bind to
* a device that has a more specific driver.
*/
if (match && match->data) {
if (of_property_match_string(np, "compatible", match->compatible) == 0)
return 0;
else
return -ENODEV;
}
dev_dbg(&pdev->dev, "%s\n", __func__);
@ -31,14 +56,25 @@ static int simple_pm_bus_probe(struct platform_device *pdev)
static int simple_pm_bus_remove(struct platform_device *pdev)
{
const void *data = of_device_get_match_data(&pdev->dev);
if (pdev->driver_override || data)
return 0;
dev_dbg(&pdev->dev, "%s\n", __func__);
pm_runtime_disable(&pdev->dev);
return 0;
}
#define ONLY_BUS ((void *) 1) /* Match if the device is only a bus. */
static const struct of_device_id simple_pm_bus_of_match[] = {
{ .compatible = "simple-pm-bus", },
{ .compatible = "simple-bus", .data = ONLY_BUS },
{ .compatible = "simple-mfd", .data = ONLY_BUS },
{ .compatible = "isa", .data = ONLY_BUS },
{ .compatible = "arm,amba-bus", .data = ONLY_BUS },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);


@ -564,6 +564,7 @@ config SM_GCC_6125
config SM_GCC_6350
tristate "SM6350 Global Clock Controller"
select QCOM_GDSC
help
Support for the global clock controller on SM6350 devices.
Say Y if you want to use peripheral devices such as UART,


@ -3242,7 +3242,7 @@ static struct gdsc hlos1_vote_turing_mmu_tbu1_gdsc = {
};
static struct gdsc hlos1_vote_turing_mmu_tbu0_gdsc = {
.gdscr = 0x7d060,
.gdscr = 0x7d07c,
.pd = {
.name = "hlos1_vote_turing_mmu_tbu0",
},


@ -186,6 +186,8 @@ static struct rzg2l_reset r9a07g044_resets[] = {
static const unsigned int r9a07g044_crit_mod_clks[] __initconst = {
MOD_CLK_BASE + R9A07G044_GIC600_GICCLK,
MOD_CLK_BASE + R9A07G044_IA55_CLK,
MOD_CLK_BASE + R9A07G044_DMAC_ACLK,
};
const struct rzg2l_cpg_info r9a07g044_cpg_info = {


@ -391,7 +391,7 @@ static int rzg2l_mod_clock_is_enabled(struct clk_hw *hw)
value = readl(priv->base + CLK_MON_R(clock->off));
return !(value & bitmask);
return value & bitmask;
}
static const struct clk_ops rzg2l_mod_clock_ops = {


@ -165,13 +165,6 @@ static const struct clk_parent_data mpu_mux[] = {
.name = "boot_clk", },
};
static const struct clk_parent_data s2f_usr0_mux[] = {
{ .fw_name = "f2s-free-clk",
.name = "f2s-free-clk", },
{ .fw_name = "boot_clk",
.name = "boot_clk", },
};
static const struct clk_parent_data emac_mux[] = {
{ .fw_name = "emaca_free_clk",
.name = "emaca_free_clk", },
@ -312,8 +305,6 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
4, 0x44, 28, 1, 0, 0, 0},
{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
5, 0, 0, 0, 0x30, 1, 0},
{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24,
6, 0, 0, 0, 0, 0, 0},
{ AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
0, 0, 0, 0, 0x94, 26, 0},
{ AGILEX_EMAC1_CLK, "emac1_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,


@ -178,7 +178,7 @@ static void axp_mc_check(struct mem_ctl_info *mci)
"details unavailable (multiple errors)");
if (cnt_dbe)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
cnt_sbe, /* error count */
cnt_dbe, /* error count */
0, 0, 0, /* pfn, offset, syndrome */
-1, -1, -1, /* top, mid, low layer */
mci->ctl_name,


@ -49,6 +49,13 @@ static int ffa_device_probe(struct device *dev)
return ffa_drv->probe(ffa_dev);
}
static void ffa_device_remove(struct device *dev)
{
struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver);
ffa_drv->remove(to_ffa_dev(dev));
}
static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
@ -86,6 +93,7 @@ struct bus_type ffa_bus_type = {
.name = "arm_ffa",
.match = ffa_device_match,
.probe = ffa_device_probe,
.remove = ffa_device_remove,
.uevent = ffa_device_uevent,
.dev_groups = ffa_device_attributes_groups,
};
@ -127,7 +135,7 @@ static void ffa_release_device(struct device *dev)
static int __ffa_devices_unregister(struct device *dev, void *data)
{
ffa_release_device(dev);
device_unregister(dev);
return 0;
}


@ -25,8 +25,6 @@
#include <acpi/ghes.h>
#include <ras/ras_event.h>
static char rcd_decode_str[CPER_REC_LEN];
/*
* CPER record ID need to be unique even after reboot, because record
* ID is used as index for ERST storage, while CPER records from
@ -312,6 +310,7 @@ const char *cper_mem_err_unpack(struct trace_seq *p,
struct cper_mem_err_compact *cmem)
{
const char *ret = trace_seq_buffer_ptr(p);
char rcd_decode_str[CPER_REC_LEN];
if (cper_mem_err_location(cmem, rcd_decode_str))
trace_seq_printf(p, "%s", rcd_decode_str);
@ -326,6 +325,7 @@ static void cper_print_mem(const char *pfx, const struct cper_sec_mem_err *mem,
int len)
{
struct cper_mem_err_compact cmem;
char rcd_decode_str[CPER_REC_LEN];
/* Don't trust UEFI 2.1/2.2 structure with bad validation bits */
if (len == sizeof(struct cper_sec_mem_err_old) &&


@ -271,7 +271,7 @@ efi_status_t allocate_new_fdt_and_exit_boot(void *handle,
return status;
}
efi_info("Exiting boot services and installing virtual address map...\n");
efi_info("Exiting boot services...\n");
map.map = &memory_map;
status = efi_allocate_pages(MAX_FDT_SIZE, new_fdt_addr, ULONG_MAX);


@ -414,7 +414,7 @@ static void virt_efi_reset_system(int reset_type,
unsigned long data_size,
efi_char16_t *data)
{
if (down_interruptible(&efi_runtime_lock)) {
if (down_trylock(&efi_runtime_lock)) {
pr_warn("failed to invoke the reset_system() runtime service:\n"
"could not get exclusive access to the firmware\n");
return;


@ -192,12 +192,19 @@ static const struct of_device_id ice40_fpga_of_match[] = {
};
MODULE_DEVICE_TABLE(of, ice40_fpga_of_match);
static const struct spi_device_id ice40_fpga_spi_ids[] = {
{ .name = "ice40-fpga-mgr", },
{},
};
MODULE_DEVICE_TABLE(spi, ice40_fpga_spi_ids);
static struct spi_driver ice40_fpga_driver = {
.probe = ice40_fpga_probe,
.driver = {
.name = "ice40spi",
.of_match_table = of_match_ptr(ice40_fpga_of_match),
},
.id_table = ice40_fpga_spi_ids,
};
module_spi_driver(ice40_fpga_driver);


@ -174,6 +174,13 @@ static int gen_74x164_remove(struct spi_device *spi)
return 0;
}
static const struct spi_device_id gen_74x164_spi_ids[] = {
{ .name = "74hc595" },
{ .name = "74lvc594" },
{},
};
MODULE_DEVICE_TABLE(spi, gen_74x164_spi_ids);
static const struct of_device_id gen_74x164_dt_ids[] = {
{ .compatible = "fairchild,74hc595" },
{ .compatible = "nxp,74lvc594" },
@ -188,6 +195,7 @@ static struct spi_driver gen_74x164_driver = {
},
.probe = gen_74x164_probe,
.remove = gen_74x164_remove,
.id_table = gen_74x164_spi_ids,
};
module_spi_driver(gen_74x164_driver);


@ -476,10 +476,19 @@ static struct platform_device *gpio_mockup_pdevs[GPIO_MOCKUP_MAX_GC];
static void gpio_mockup_unregister_pdevs(void)
{
struct platform_device *pdev;
struct fwnode_handle *fwnode;
int i;
for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++)
platform_device_unregister(gpio_mockup_pdevs[i]);
for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++) {
pdev = gpio_mockup_pdevs[i];
if (!pdev)
continue;
fwnode = dev_fwnode(&pdev->dev);
platform_device_unregister(pdev);
fwnode_remove_software_node(fwnode);
}
}
static __init char **gpio_mockup_make_line_names(const char *label,
@ -508,6 +517,7 @@ static int __init gpio_mockup_register_chip(int idx)
struct property_entry properties[GPIO_MOCKUP_MAX_PROP];
struct platform_device_info pdevinfo;
struct platform_device *pdev;
struct fwnode_handle *fwnode;
char **line_names = NULL;
char chip_label[32];
int prop = 0, base;
@ -536,13 +546,18 @@ static int __init gpio_mockup_register_chip(int idx)
"gpio-line-names", line_names, ngpio);
}
fwnode = fwnode_create_software_node(properties, NULL);
if (IS_ERR(fwnode))
return PTR_ERR(fwnode);
pdevinfo.name = "gpio-mockup";
pdevinfo.id = idx;
pdevinfo.properties = properties;
pdevinfo.fwnode = fwnode;
pdev = platform_device_register_full(&pdevinfo);
kfree_strarray(line_names, ngpio);
if (IS_ERR(pdev)) {
fwnode_remove_software_node(fwnode);
pr_err("error registering device");
return PTR_ERR(pdev);
}

View File

@ -559,21 +559,21 @@ static int pca953x_gpio_set_pull_up_down(struct pca953x_chip *chip,
mutex_lock(&chip->i2c_lock);
/* Disable pull-up/pull-down */
ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
if (ret)
goto exit;
/* Configure pull-up/pull-down */
if (config == PIN_CONFIG_BIAS_PULL_UP)
ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, bit);
else if (config == PIN_CONFIG_BIAS_PULL_DOWN)
ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, 0);
else
ret = 0;
if (ret)
goto exit;
/* Enable pull-up/pull-down */
ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
/* Disable/Enable pull-up/pull-down */
if (config == PIN_CONFIG_BIAS_DISABLE)
ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
else
ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
exit:
mutex_unlock(&chip->i2c_lock);
@ -587,7 +587,9 @@ static int pca953x_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
switch (pinconf_to_config_param(config)) {
case PIN_CONFIG_BIAS_PULL_UP:
case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
case PIN_CONFIG_BIAS_PULL_DOWN:
case PIN_CONFIG_BIAS_DISABLE:
return pca953x_gpio_set_pull_up_down(chip, offset, config);
default:
return -ENOTSUPP;

View File

@ -1300,18 +1300,6 @@ static enum drm_mode_status ast_mode_valid(struct drm_connector *connector,
return flags;
}
static enum drm_connector_status ast_connector_detect(struct drm_connector
*connector, bool force)
{
int r;
r = ast_get_modes(connector);
if (r <= 0)
return connector_status_disconnected;
return connector_status_connected;
}
static void ast_connector_destroy(struct drm_connector *connector)
{
struct ast_connector *ast_connector = to_ast_connector(connector);
@ -1327,7 +1315,6 @@ static const struct drm_connector_helper_funcs ast_connector_helper_funcs = {
static const struct drm_connector_funcs ast_connector_funcs = {
.reset = drm_atomic_helper_connector_reset,
.detect = ast_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = ast_connector_destroy,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
@ -1355,8 +1342,7 @@ static int ast_connector_init(struct drm_device *dev)
connector->interlace_allowed = 0;
connector->doublescan_allowed = 0;
connector->polled = DRM_CONNECTOR_POLL_CONNECT |
DRM_CONNECTOR_POLL_DISCONNECT;
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
drm_connector_attach_encoder(connector, encoder);
@ -1425,8 +1411,6 @@ int ast_mode_config_init(struct ast_private *ast)
drm_mode_config_reset(dev);
drm_kms_helper_poll_init(dev);
return 0;
}

View File

@ -1834,11 +1834,20 @@ static void connector_bad_edid(struct drm_connector *connector,
u8 *edid, int num_blocks)
{
int i;
u8 num_of_ext = edid[0x7e];
u8 last_block;
/*
* 0x7e in the EDID is the number of extension blocks. The EDID
* is 1 (base block) + num_ext_blocks big. That means we can think
* of 0x7e in the EDID as the _index_ of the last block in the
* combined chunk of memory.
*/
last_block = edid[0x7e];
/* Calculate real checksum for the last edid extension block data */
connector->real_edid_checksum =
drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH);
if (last_block < num_blocks)
connector->real_edid_checksum =
drm_edid_block_checksum(edid + last_block * EDID_LENGTH);
if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
return;
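As a side note, a self-contained sketch of the bounds-checked lookup this hunk introduces (EDID_LENGTH is 128 bytes per block per the EDID spec; helper names here are illustrative, not the drm ones):

#include <stddef.h>
#include <stdint.h>

#define EDID_LENGTH 128  /* bytes per EDID block */

/* An EDID block is valid when its 128 bytes sum to 0 mod 256; the raw
 * byte sum is what gets reported as the "real" checksum. */
static uint8_t edid_block_checksum(const uint8_t *block)
{
	uint8_t csum = 0;
	size_t i;

	for (i = 0; i < EDID_LENGTH; i++)
		csum += block[i];
	return csum;
}

static int last_block_checksum(const uint8_t *edid, size_t num_blocks,
			       uint8_t *out)
{
	/* Byte 0x7e of the base block holds the extension-block count,
	 * i.e. the index of the last block in the combined buffer. */
	uint8_t last_block = edid[0x7e];

	/* Only dereference the block if it actually fits in the buffer;
	 * a corrupt base block could otherwise send us out of bounds. */
	if (last_block >= num_blocks)
		return -1;

	*out = edid_block_checksum(edid + (size_t)last_block * EDID_LENGTH);
	return 0;
}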

View File

@ -1506,6 +1506,7 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
{
struct drm_client_dev *client = &fb_helper->client;
struct drm_device *dev = fb_helper->dev;
struct drm_mode_config *config = &dev->mode_config;
int ret = 0;
int crtc_count = 0;
struct drm_connector_list_iter conn_iter;
@ -1663,6 +1664,11 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
/* Handle our overallocation */
sizes.surface_height *= drm_fbdev_overalloc;
sizes.surface_height /= 100;
if (sizes.surface_height > config->max_height) {
drm_dbg_kms(dev, "Fbdev over-allocation too large; clamping height to %d\n",
config->max_height);
sizes.surface_height = config->max_height;
}
/* push down into drivers */
ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
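A worked instance of the arithmetic this hunk clamps (numbers illustrative, not from the patch): drm_fbdev_overalloc is a percentage, so 200 doubles the surface height for fbdev double-buffering, and the new check keeps the result within the driver's mode-config limit:

unsigned int surface_height = 1080;  /* height of the chosen mode */
unsigned int overalloc = 200;        /* drm_fbdev_overalloc, in percent */
unsigned int max_height = 2048;      /* hypothetical config->max_height */

surface_height = surface_height * overalloc / 100;  /* 1080 -> 2160 */
if (surface_height > max_height)
	surface_height = max_height;  /* clamped to 2048, fb_probe stays legal */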

View File

@ -46,6 +46,7 @@ int hyperv_mode_config_init(struct hyperv_drm_device *hv);
int hyperv_update_vram_location(struct hv_device *hdev, phys_addr_t vram_pp);
int hyperv_update_situation(struct hv_device *hdev, u8 active, u32 bpp,
u32 w, u32 h, u32 pitch);
int hyperv_hide_hw_ptr(struct hv_device *hdev);
int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect);
int hyperv_connect_vsp(struct hv_device *hdev);

View File

@ -101,6 +101,7 @@ static void hyperv_pipe_enable(struct drm_simple_display_pipe *pipe,
struct hyperv_drm_device *hv = to_hv(pipe->crtc.dev);
struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
hyperv_hide_hw_ptr(hv->hdev);
hyperv_update_situation(hv->hdev, 1, hv->screen_depth,
crtc_state->mode.hdisplay,
crtc_state->mode.vdisplay,

View File

@ -299,6 +299,55 @@ int hyperv_update_situation(struct hv_device *hdev, u8 active, u32 bpp,
return 0;
}
/*
* Hyper-V supports a hardware cursor feature. It's not used by Linux VM,
* but the Hyper-V host still draws a point as an extra mouse pointer,
* which is unwanted, especially when Xorg is running.
*
* The hyperv_fb driver uses synthvid_send_ptr() to hide the unwanted
* pointer, by setting msg.ptr_pos.is_visible = 1 and setting the
* msg.ptr_shape.data. Note: setting msg.ptr_pos.is_visible to 0 doesn't
* work in tests.
*
* Copy synthvid_send_ptr() to hyperv_drm and rename it to
* hyperv_hide_hw_ptr(). Note: hyperv_hide_hw_ptr() is also called in the
* handler of the SYNTHVID_FEATURE_CHANGE event, otherwise the host still
* draws an extra unwanted mouse pointer after the VM Connection window is
* closed and reopened.
*/
int hyperv_hide_hw_ptr(struct hv_device *hdev)
{
struct synthvid_msg msg;
memset(&msg, 0, sizeof(struct synthvid_msg));
msg.vid_hdr.type = SYNTHVID_POINTER_POSITION;
msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_pointer_position);
msg.ptr_pos.is_visible = 1;
msg.ptr_pos.video_output = 0;
msg.ptr_pos.image_x = 0;
msg.ptr_pos.image_y = 0;
hyperv_sendpacket(hdev, &msg);
memset(&msg, 0, sizeof(struct synthvid_msg));
msg.vid_hdr.type = SYNTHVID_POINTER_SHAPE;
msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_pointer_shape);
msg.ptr_shape.part_idx = SYNTHVID_CURSOR_COMPLETE;
msg.ptr_shape.is_argb = 1;
msg.ptr_shape.width = 1;
msg.ptr_shape.height = 1;
msg.ptr_shape.hot_x = 0;
msg.ptr_shape.hot_y = 0;
msg.ptr_shape.data[0] = 0;
msg.ptr_shape.data[1] = 1;
msg.ptr_shape.data[2] = 1;
msg.ptr_shape.data[3] = 1;
hyperv_sendpacket(hdev, &msg);
return 0;
}
int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect)
{
struct hyperv_drm_device *hv = hv_get_drvdata(hdev);
@ -392,8 +441,11 @@ static void hyperv_receive_sub(struct hv_device *hdev)
return;
}
if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE)
if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) {
hv->dirt_needed = msg->feature_chg.is_dirt_needed;
if (hv->dirt_needed)
hyperv_hide_hw_ptr(hv->hdev);
}
}
static void hyperv_receive(void *ctx)

View File

@ -186,13 +186,16 @@ void intel_dsm_get_bios_data_funcs_supported(struct drm_i915_private *i915)
{
struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
acpi_handle dhandle;
union acpi_object *obj;
dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
return;
acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID,
INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL);
obj = acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID,
INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL);
if (obj)
ACPI_FREE(obj);
}
/*

View File

@ -937,6 +937,10 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx,
unsigned int n;
e = alloc_engines(num_engines);
if (!e)
return ERR_PTR(-ENOMEM);
e->num_engines = num_engines;
for (n = 0; n < num_engines; n++) {
struct intel_context *ce;
int ret;
@ -970,7 +974,6 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx,
goto free_engines;
}
}
e->num_engines = num_engines;
return e;

View File

@ -421,6 +421,7 @@ void intel_context_fini(struct intel_context *ce)
mutex_destroy(&ce->pin_mutex);
i915_active_fini(&ce->active);
i915_sw_fence_fini(&ce->guc_blocked);
}
void i915_context_module_exit(void)

View File

@ -66,7 +66,8 @@ static const struct drm_crtc_funcs kmb_crtc_funcs = {
.disable_vblank = kmb_crtc_disable_vblank,
};
static void kmb_crtc_set_mode(struct drm_crtc *crtc)
static void kmb_crtc_set_mode(struct drm_crtc *crtc,
struct drm_atomic_state *old_state)
{
struct drm_device *dev = crtc->dev;
struct drm_display_mode *m = &crtc->state->adjusted_mode;
@ -75,7 +76,7 @@ static void kmb_crtc_set_mode(struct drm_crtc *crtc)
unsigned int val = 0;
/* Initialize mipi */
kmb_dsi_mode_set(kmb->kmb_dsi, m, kmb->sys_clk_mhz);
kmb_dsi_mode_set(kmb->kmb_dsi, m, kmb->sys_clk_mhz, old_state);
drm_info(dev,
"vfp= %d vbp= %d vsync_len=%d hfp=%d hbp=%d hsync_len=%d\n",
m->crtc_vsync_start - m->crtc_vdisplay,
@ -138,7 +139,7 @@ static void kmb_crtc_atomic_enable(struct drm_crtc *crtc,
struct kmb_drm_private *kmb = crtc_to_kmb_priv(crtc);
clk_prepare_enable(kmb->kmb_clk.clk_lcd);
kmb_crtc_set_mode(crtc);
kmb_crtc_set_mode(crtc, state);
drm_crtc_vblank_on(crtc);
}
@ -185,11 +186,45 @@ static void kmb_crtc_atomic_flush(struct drm_crtc *crtc,
spin_unlock_irq(&crtc->dev->event_lock);
}
static enum drm_mode_status
kmb_crtc_mode_valid(struct drm_crtc *crtc,
const struct drm_display_mode *mode)
{
int refresh;
struct drm_device *dev = crtc->dev;
int vfp = mode->vsync_start - mode->vdisplay;
if (mode->vdisplay < KMB_CRTC_MAX_HEIGHT) {
drm_dbg(dev, "height = %d less than %d",
mode->vdisplay, KMB_CRTC_MAX_HEIGHT);
return MODE_BAD_VVALUE;
}
if (mode->hdisplay < KMB_CRTC_MAX_WIDTH) {
drm_dbg(dev, "width = %d less than %d",
mode->hdisplay, KMB_CRTC_MAX_WIDTH);
return MODE_BAD_HVALUE;
}
refresh = drm_mode_vrefresh(mode);
if (refresh < KMB_MIN_VREFRESH || refresh > KMB_MAX_VREFRESH) {
drm_dbg(dev, "refresh = %d less than %d or greater than %d",
refresh, KMB_MIN_VREFRESH, KMB_MAX_VREFRESH);
return MODE_BAD;
}
if (vfp < KMB_CRTC_MIN_VFP) {
drm_dbg(dev, "vfp = %d less than %d", vfp, KMB_CRTC_MIN_VFP);
return MODE_BAD;
}
return MODE_OK;
}
static const struct drm_crtc_helper_funcs kmb_crtc_helper_funcs = {
.atomic_begin = kmb_crtc_atomic_begin,
.atomic_enable = kmb_crtc_atomic_enable,
.atomic_disable = kmb_crtc_atomic_disable,
.atomic_flush = kmb_crtc_atomic_flush,
.mode_valid = kmb_crtc_mode_valid,
};
int kmb_setup_crtc(struct drm_device *drm)

View File

@ -380,7 +380,7 @@ static irqreturn_t handle_lcd_irq(struct drm_device *dev)
if (val & LAYER3_DMA_FIFO_UNDERFLOW)
drm_dbg(&kmb->drm,
"LAYER3:GL1 DMA UNDERFLOW val = 0x%lx", val);
if (val & LAYER3_DMA_FIFO_UNDERFLOW)
if (val & LAYER3_DMA_FIFO_OVERFLOW)
drm_dbg(&kmb->drm,
"LAYER3:GL1 DMA OVERFLOW val = 0x%lx", val);
}

View File

@ -20,11 +20,18 @@
#define DRIVER_MAJOR 1
#define DRIVER_MINOR 1
/* Platform definitions */
#define KMB_CRTC_MIN_VFP 4
#define KMB_CRTC_MAX_WIDTH 1920 /* max width in pixels */
#define KMB_CRTC_MAX_HEIGHT 1080 /* max height in pixels */
#define KMB_CRTC_MIN_WIDTH 1920
#define KMB_CRTC_MIN_HEIGHT 1080
#define KMB_FB_MAX_WIDTH 1920
#define KMB_FB_MAX_HEIGHT 1080
#define KMB_FB_MIN_WIDTH 1
#define KMB_FB_MIN_HEIGHT 1
#define KMB_MIN_VREFRESH 59 /* vertical refresh in Hz */
#define KMB_MAX_VREFRESH 60 /* vertical refresh in Hz */
#define KMB_LCD_DEFAULT_CLK 200000000
#define KMB_SYS_CLK_MHZ 500
@ -50,6 +57,7 @@ struct kmb_drm_private {
spinlock_t irq_lock;
int irq_lcd;
int sys_clk_mhz;
struct disp_cfg init_disp_cfg[KMB_MAX_PLANES];
struct layer_status plane_status[KMB_MAX_PLANES];
int kmb_under_flow;
int kmb_flush_done;

View File

@ -482,6 +482,10 @@ static u32 mipi_tx_fg_section_cfg(struct kmb_dsi *kmb_dsi,
return 0;
}
#define CLK_DIFF_LOW 50
#define CLK_DIFF_HI 60
#define SYSCLK_500 500
static void mipi_tx_fg_cfg_regs(struct kmb_dsi *kmb_dsi, u8 frame_gen,
struct mipi_tx_frame_timing_cfg *fg_cfg)
{
@ -492,7 +496,12 @@ static void mipi_tx_fg_cfg_regs(struct kmb_dsi *kmb_dsi, u8 frame_gen,
/* 500 Mhz system clock minus 50 to account for the difference in
* MIPI clock speed in RTL tests
*/
sysclk = kmb_dsi->sys_clk_mhz - 50;
if (kmb_dsi->sys_clk_mhz == SYSCLK_500) {
sysclk = kmb_dsi->sys_clk_mhz - CLK_DIFF_LOW;
} else {
/* 700 MHz clk */
sysclk = kmb_dsi->sys_clk_mhz - CLK_DIFF_HI;
}
/* PPL-Pixel Packing Layer, LLP-Low Level Protocol
* Frame generator timing parameters are clocked on the system clock,
@ -1322,7 +1331,8 @@ static u32 mipi_tx_init_dphy(struct kmb_dsi *kmb_dsi,
return 0;
}
static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi)
static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi,
struct drm_atomic_state *old_state)
{
struct regmap *msscam;
@ -1331,7 +1341,7 @@ static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi)
dev_dbg(kmb_dsi->dev, "failed to get msscam syscon");
return;
}
drm_atomic_bridge_chain_enable(adv_bridge, old_state);
/* DISABLE MIPI->CIF CONNECTION */
regmap_write(msscam, MSS_MIPI_CIF_CFG, 0);
@ -1342,7 +1352,7 @@ static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi)
}
int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode,
int sys_clk_mhz)
int sys_clk_mhz, struct drm_atomic_state *old_state)
{
u64 data_rate;
@ -1384,18 +1394,13 @@ int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode,
mipi_tx_init_cfg.lane_rate_mbps = data_rate;
}
kmb_write_mipi(kmb_dsi, DPHY_ENABLE, 0);
kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL0, 0);
kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL1, 0);
kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL2, 0);
/* Initialize mipi controller */
mipi_tx_init_cntrl(kmb_dsi, &mipi_tx_init_cfg);
/* Dphy initialization */
mipi_tx_init_dphy(kmb_dsi, &mipi_tx_init_cfg);
connect_lcd_to_mipi(kmb_dsi);
connect_lcd_to_mipi(kmb_dsi, old_state);
dev_info(kmb_dsi->dev, "mipi hw initialized");
return 0;

View File

@ -380,7 +380,7 @@ int kmb_dsi_host_bridge_init(struct device *dev);
struct kmb_dsi *kmb_dsi_init(struct platform_device *pdev);
void kmb_dsi_host_unregister(struct kmb_dsi *kmb_dsi);
int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode,
int sys_clk_mhz);
int sys_clk_mhz, struct drm_atomic_state *old_state);
int kmb_dsi_map_mmio(struct kmb_dsi *kmb_dsi);
int kmb_dsi_clk_init(struct kmb_dsi *kmb_dsi);
int kmb_dsi_encoder_init(struct drm_device *dev, struct kmb_dsi *kmb_dsi);

View File

@ -67,8 +67,21 @@ static const u32 kmb_formats_v[] = {
static unsigned int check_pixel_format(struct drm_plane *plane, u32 format)
{
struct kmb_drm_private *kmb;
struct kmb_plane *kmb_plane = to_kmb_plane(plane);
int i;
int plane_id = kmb_plane->id;
struct disp_cfg init_disp_cfg;
kmb = to_kmb(plane->dev);
init_disp_cfg = kmb->init_disp_cfg[plane_id];
/* Due to HW limitations, changing pixel format after initial
* plane configuration is not supported.
*/
if (init_disp_cfg.format && init_disp_cfg.format != format) {
drm_dbg(&kmb->drm, "Cannot change format after initial plane configuration");
return -EINVAL;
}
for (i = 0; i < plane->format_count; i++) {
if (plane->format_types[i] == format)
return 0;
@ -81,11 +94,17 @@ static int kmb_plane_atomic_check(struct drm_plane *plane,
{
struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
plane);
struct kmb_drm_private *kmb;
struct kmb_plane *kmb_plane = to_kmb_plane(plane);
int plane_id = kmb_plane->id;
struct disp_cfg init_disp_cfg;
struct drm_framebuffer *fb;
int ret;
struct drm_crtc_state *crtc_state;
bool can_position;
kmb = to_kmb(plane->dev);
init_disp_cfg = kmb->init_disp_cfg[plane_id];
fb = new_plane_state->fb;
if (!fb || !new_plane_state->crtc)
return 0;
@ -99,6 +118,16 @@ static int kmb_plane_atomic_check(struct drm_plane *plane,
new_plane_state->crtc_w < KMB_FB_MIN_WIDTH ||
new_plane_state->crtc_h < KMB_FB_MIN_HEIGHT)
return -EINVAL;
/* Due to HW limitations, changing plane height or width after
* initial plane configuration is not supported.
*/
if ((init_disp_cfg.width && init_disp_cfg.height) &&
(init_disp_cfg.width != fb->width ||
init_disp_cfg.height != fb->height)) {
drm_dbg(&kmb->drm, "Cannot change plane height or width after initial configuration");
return -EINVAL;
}
can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY);
crtc_state =
drm_atomic_get_existing_crtc_state(state,
@ -335,6 +364,7 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
unsigned char plane_id;
int num_planes;
static dma_addr_t addr[MAX_SUB_PLANES];
struct disp_cfg *init_disp_cfg;
if (!plane || !new_plane_state || !old_plane_state)
return;
@ -357,7 +387,8 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
}
spin_unlock_irq(&kmb->irq_lock);
src_w = (new_plane_state->src_w >> 16);
init_disp_cfg = &kmb->init_disp_cfg[plane_id];
src_w = new_plane_state->src_w >> 16;
src_h = new_plane_state->src_h >> 16;
crtc_x = new_plane_state->crtc_x;
crtc_y = new_plane_state->crtc_y;
@ -500,6 +531,16 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
/* Enable DMA */
kmb_write_lcd(kmb, LCD_LAYERn_DMA_CFG(plane_id), dma_cfg);
/* Save initial display config */
if (!init_disp_cfg->width ||
!init_disp_cfg->height ||
!init_disp_cfg->format) {
init_disp_cfg->width = width;
init_disp_cfg->height = height;
init_disp_cfg->format = fb->format->format;
}
drm_dbg(&kmb->drm, "dma_cfg=0x%x LCD_DMA_CFG=0x%x\n", dma_cfg,
kmb_read_lcd(kmb, LCD_LAYERn_DMA_CFG(plane_id)));

View File

@ -63,6 +63,12 @@ struct layer_status {
u32 ctrl;
};
struct disp_cfg {
unsigned int width;
unsigned int height;
unsigned int format;
};
struct kmb_plane *kmb_plane_init(struct drm_device *drm);
void kmb_plane_destroy(struct drm_plane *plane);
#endif /* __KMB_PLANE_H__ */

View File

@ -4,8 +4,6 @@
*/
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/mailbox_controller.h>
#include <linux/pm_runtime.h>
#include <linux/soc/mediatek/mtk-cmdq.h>
#include <linux/soc/mediatek/mtk-mmsys.h>
@ -52,11 +50,8 @@ struct mtk_drm_crtc {
bool pending_async_planes;
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
struct mbox_client cmdq_cl;
struct mbox_chan *cmdq_chan;
struct cmdq_pkt cmdq_handle;
struct cmdq_client *cmdq_client;
u32 cmdq_event;
u32 cmdq_vblank_cnt;
#endif
struct device *mmsys_dev;
@ -227,79 +222,9 @@ struct mtk_ddp_comp *mtk_drm_ddp_comp_for_plane(struct drm_crtc *crtc,
}
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
static int mtk_drm_cmdq_pkt_create(struct mbox_chan *chan, struct cmdq_pkt *pkt,
size_t size)
static void ddp_cmdq_cb(struct cmdq_cb_data data)
{
struct device *dev;
dma_addr_t dma_addr;
pkt->va_base = kzalloc(size, GFP_KERNEL);
if (!pkt->va_base) {
kfree(pkt);
return -ENOMEM;
}
pkt->buf_size = size;
dev = chan->mbox->dev;
dma_addr = dma_map_single(dev, pkt->va_base, pkt->buf_size,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma_addr)) {
dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size);
kfree(pkt->va_base);
kfree(pkt);
return -ENOMEM;
}
pkt->pa_base = dma_addr;
return 0;
}
static void mtk_drm_cmdq_pkt_destroy(struct mbox_chan *chan, struct cmdq_pkt *pkt)
{
dma_unmap_single(chan->mbox->dev, pkt->pa_base, pkt->buf_size,
DMA_TO_DEVICE);
kfree(pkt->va_base);
kfree(pkt);
}
static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
{
struct mtk_drm_crtc *mtk_crtc = container_of(cl, struct mtk_drm_crtc, cmdq_cl);
struct cmdq_cb_data *data = mssg;
struct mtk_crtc_state *state;
unsigned int i;
state = to_mtk_crtc_state(mtk_crtc->base.state);
state->pending_config = false;
if (mtk_crtc->pending_planes) {
for (i = 0; i < mtk_crtc->layer_nr; i++) {
struct drm_plane *plane = &mtk_crtc->planes[i];
struct mtk_plane_state *plane_state;
plane_state = to_mtk_plane_state(plane->state);
plane_state->pending.config = false;
}
mtk_crtc->pending_planes = false;
}
if (mtk_crtc->pending_async_planes) {
for (i = 0; i < mtk_crtc->layer_nr; i++) {
struct drm_plane *plane = &mtk_crtc->planes[i];
struct mtk_plane_state *plane_state;
plane_state = to_mtk_plane_state(plane->state);
plane_state->pending.async_config = false;
}
mtk_crtc->pending_async_planes = false;
}
mtk_crtc->cmdq_vblank_cnt = 0;
mtk_drm_cmdq_pkt_destroy(mtk_crtc->cmdq_chan, data->pkt);
cmdq_pkt_destroy(data.data);
}
#endif
@ -453,8 +378,7 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
state->pending_vrefresh, 0,
cmdq_handle);
if (!cmdq_handle)
state->pending_config = false;
state->pending_config = false;
}
if (mtk_crtc->pending_planes) {
@ -474,12 +398,9 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
mtk_ddp_comp_layer_config(comp, local_layer,
plane_state,
cmdq_handle);
if (!cmdq_handle)
plane_state->pending.config = false;
plane_state->pending.config = false;
}
if (!cmdq_handle)
mtk_crtc->pending_planes = false;
mtk_crtc->pending_planes = false;
}
if (mtk_crtc->pending_async_planes) {
@ -499,12 +420,9 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
mtk_ddp_comp_layer_config(comp, local_layer,
plane_state,
cmdq_handle);
if (!cmdq_handle)
plane_state->pending.async_config = false;
plane_state->pending.async_config = false;
}
if (!cmdq_handle)
mtk_crtc->pending_async_planes = false;
mtk_crtc->pending_async_planes = false;
}
}
@ -512,7 +430,7 @@ static void mtk_drm_crtc_update_config(struct mtk_drm_crtc *mtk_crtc,
bool needs_vblank)
{
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
struct cmdq_pkt *cmdq_handle = &mtk_crtc->cmdq_handle;
struct cmdq_pkt *cmdq_handle;
#endif
struct drm_crtc *crtc = &mtk_crtc->base;
struct mtk_drm_private *priv = crtc->dev->dev_private;
@ -550,24 +468,14 @@ static void mtk_drm_crtc_update_config(struct mtk_drm_crtc *mtk_crtc,
mtk_mutex_release(mtk_crtc->mutex);
}
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
if (mtk_crtc->cmdq_chan) {
mbox_flush(mtk_crtc->cmdq_chan, 2000);
cmdq_handle->cmd_buf_size = 0;
if (mtk_crtc->cmdq_client) {
mbox_flush(mtk_crtc->cmdq_client->chan, 2000);
cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE);
cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
mtk_crtc_ddp_config(crtc, cmdq_handle);
cmdq_pkt_finalize(cmdq_handle);
dma_sync_single_for_device(mtk_crtc->cmdq_chan->mbox->dev,
cmdq_handle->pa_base,
cmdq_handle->cmd_buf_size,
DMA_TO_DEVICE);
/*
* The CMDQ command should execute in the next vblank;
* if it fails to execute within the next 2 vblanks, a timeout happens.
*/
mtk_crtc->cmdq_vblank_cnt = 2;
mbox_send_message(mtk_crtc->cmdq_chan, cmdq_handle);
mbox_client_txdone(mtk_crtc->cmdq_chan, 0);
cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle);
}
#endif
mtk_crtc->config_updating = false;
@ -581,15 +489,12 @@ static void mtk_crtc_ddp_irq(void *data)
struct mtk_drm_private *priv = crtc->dev->dev_private;
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
if (!priv->data->shadow_register && !mtk_crtc->cmdq_chan)
mtk_crtc_ddp_config(crtc, NULL);
else if (mtk_crtc->cmdq_vblank_cnt > 0 && --mtk_crtc->cmdq_vblank_cnt == 0)
DRM_ERROR("mtk_crtc %d CMDQ execute command timeout!\n",
drm_crtc_index(&mtk_crtc->base));
if (!priv->data->shadow_register && !mtk_crtc->cmdq_client)
#else
if (!priv->data->shadow_register)
mtk_crtc_ddp_config(crtc, NULL);
#endif
mtk_crtc_ddp_config(crtc, NULL);
mtk_drm_finish_page_flip(mtk_crtc);
}
@ -924,20 +829,16 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
mutex_init(&mtk_crtc->hw_lock);
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
mtk_crtc->cmdq_cl.dev = mtk_crtc->mmsys_dev;
mtk_crtc->cmdq_cl.tx_block = false;
mtk_crtc->cmdq_cl.knows_txdone = true;
mtk_crtc->cmdq_cl.rx_callback = ddp_cmdq_cb;
mtk_crtc->cmdq_chan =
mbox_request_channel(&mtk_crtc->cmdq_cl,
drm_crtc_index(&mtk_crtc->base));
if (IS_ERR(mtk_crtc->cmdq_chan)) {
mtk_crtc->cmdq_client =
cmdq_mbox_create(mtk_crtc->mmsys_dev,
drm_crtc_index(&mtk_crtc->base));
if (IS_ERR(mtk_crtc->cmdq_client)) {
dev_dbg(dev, "mtk_crtc %d failed to create mailbox client, writing register by CPU now\n",
drm_crtc_index(&mtk_crtc->base));
mtk_crtc->cmdq_chan = NULL;
mtk_crtc->cmdq_client = NULL;
}
if (mtk_crtc->cmdq_chan) {
if (mtk_crtc->cmdq_client) {
ret = of_property_read_u32_index(priv->mutex_node,
"mediatek,gce-events",
drm_crtc_index(&mtk_crtc->base),
@ -945,18 +846,8 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
if (ret) {
dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
drm_crtc_index(&mtk_crtc->base));
mbox_free_channel(mtk_crtc->cmdq_chan);
mtk_crtc->cmdq_chan = NULL;
} else {
ret = mtk_drm_cmdq_pkt_create(mtk_crtc->cmdq_chan,
&mtk_crtc->cmdq_handle,
PAGE_SIZE);
if (ret) {
dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n",
drm_crtc_index(&mtk_crtc->base));
mbox_free_channel(mtk_crtc->cmdq_chan);
mtk_crtc->cmdq_chan = NULL;
}
cmdq_mbox_destroy(mtk_crtc->cmdq_client);
mtk_crtc->cmdq_client = NULL;
}
}
#endif

View File

@ -571,13 +571,14 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
}
icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
ret = IS_ERR(icc_path);
if (ret)
if (IS_ERR(icc_path)) {
ret = PTR_ERR(icc_path);
goto fail;
}
ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
ret = IS_ERR(ocmem_icc_path);
if (ret) {
if (IS_ERR(ocmem_icc_path)) {
ret = PTR_ERR(ocmem_icc_path);
/* allow -ENODATA, ocmem icc is optional */
if (ret != -ENODATA)
goto fail;

View File

@ -699,13 +699,14 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
}
icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
ret = IS_ERR(icc_path);
if (ret)
if (IS_ERR(icc_path)) {
ret = PTR_ERR(icc_path);
goto fail;
}
ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
ret = IS_ERR(ocmem_icc_path);
if (ret) {
if (IS_ERR(ocmem_icc_path)) {
ret = PTR_ERR(ocmem_icc_path);
/* allow -ENODATA, ocmem icc is optional */
if (ret != -ENODATA)
goto fail;

View File

@ -296,6 +296,8 @@ int a6xx_gmu_set_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
u32 val;
int request, ack;
WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
return -EINVAL;
@ -337,6 +339,8 @@ void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
{
int bit;
WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
return;
@ -1482,6 +1486,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
if (!pdev)
return -ENODEV;
mutex_init(&gmu->lock);
gmu->dev = &pdev->dev;
of_dma_configure(gmu->dev, node, true);

View File

@ -44,6 +44,9 @@ struct a6xx_gmu_bo {
struct a6xx_gmu {
struct device *dev;
/* For serializing communication with the GMU: */
struct mutex lock;
struct msm_gem_address_space *aspace;
void * __iomem mmio;

View File

@ -106,7 +106,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
u32 asid;
u64 memptr = rbmemptr(ring, ttbr0);
if (ctx == a6xx_gpu->cur_ctx)
if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
return;
if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
@ -139,7 +139,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
OUT_PKT7(ring, CP_EVENT_WRITE, 1);
OUT_RING(ring, 0x31);
a6xx_gpu->cur_ctx = ctx;
a6xx_gpu->cur_ctx_seqno = ctx->seqno;
}
static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
@ -881,7 +881,7 @@ static int a6xx_zap_shader_init(struct msm_gpu *gpu)
A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \
A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR)
static int a6xx_hw_init(struct msm_gpu *gpu)
static int hw_init(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
@ -1081,7 +1081,7 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
/* Always come up on rb 0 */
a6xx_gpu->cur_ring = gpu->rb[0];
a6xx_gpu->cur_ctx = NULL;
a6xx_gpu->cur_ctx_seqno = 0;
/* Enable the SQE to start the CP engine */
gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
@ -1135,6 +1135,19 @@ out:
return ret;
}
static int a6xx_hw_init(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
int ret;
mutex_lock(&a6xx_gpu->gmu.lock);
ret = hw_init(gpu);
mutex_unlock(&a6xx_gpu->gmu.lock);
return ret;
}
static void a6xx_dump(struct msm_gpu *gpu)
{
DRM_DEV_INFO(&gpu->pdev->dev, "status: %08x\n",
@ -1509,7 +1522,9 @@ static int a6xx_pm_resume(struct msm_gpu *gpu)
trace_msm_gpu_resume(0);
mutex_lock(&a6xx_gpu->gmu.lock);
ret = a6xx_gmu_resume(a6xx_gpu);
mutex_unlock(&a6xx_gpu->gmu.lock);
if (ret)
return ret;
@ -1532,7 +1547,9 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
msm_devfreq_suspend(gpu);
mutex_lock(&a6xx_gpu->gmu.lock);
ret = a6xx_gmu_stop(a6xx_gpu);
mutex_unlock(&a6xx_gpu->gmu.lock);
if (ret)
return ret;
@ -1547,18 +1564,19 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
static DEFINE_MUTEX(perfcounter_oob);
mutex_lock(&perfcounter_oob);
mutex_lock(&a6xx_gpu->gmu.lock);
/* Force the GPU power on so we can read this register */
a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
mutex_unlock(&perfcounter_oob);
mutex_unlock(&a6xx_gpu->gmu.lock);
return 0;
}
@ -1622,6 +1640,16 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
return (unsigned long)busy_time;
}
void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
mutex_lock(&a6xx_gpu->gmu.lock);
a6xx_gmu_set_freq(gpu, opp);
mutex_unlock(&a6xx_gpu->gmu.lock);
}
static struct msm_gem_address_space *
a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
{
@ -1766,7 +1794,7 @@ static const struct adreno_gpu_funcs funcs = {
#endif
.gpu_busy = a6xx_gpu_busy,
.gpu_get_freq = a6xx_gmu_get_freq,
.gpu_set_freq = a6xx_gmu_set_freq,
.gpu_set_freq = a6xx_gpu_set_freq,
#if defined(CONFIG_DRM_MSM_GPU_STATE)
.gpu_state_get = a6xx_gpu_state_get,
.gpu_state_put = a6xx_gpu_state_put,
@ -1810,6 +1838,13 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
adreno_cmp_rev(ADRENO_REV(6, 3, 5, ANY_ID), info->rev)))
adreno_gpu->base.hw_apriv = true;
/*
* For now only clamp to idle freq for devices where this is known not
* to cause power supply issues:
*/
if (info && (info->revn == 618))
gpu->clamp_to_idle = true;
a6xx_llc_slices_init(pdev, a6xx_gpu);
ret = a6xx_set_supported_hw(&pdev->dev, config->rev);

View File

@ -19,7 +19,16 @@ struct a6xx_gpu {
uint64_t sqe_iova;
struct msm_ringbuffer *cur_ring;
struct msm_file_private *cur_ctx;
/**
* cur_ctx_seqno:
*
* The ctx->seqno value of the context with current pgtables
* installed. Tracked by seqno rather than pointer value to
* avoid dangling pointers, and cases where a ctx can be freed
* and a new one created with the same address.
*/
int cur_ctx_seqno;
struct a6xx_gmu gmu;

View File

@ -794,7 +794,7 @@ static const struct dpu_pingpong_cfg sm8150_pp[] = {
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
-1),
PP_BLK("pingpong_5", PINGPONG_5, 0x72800, MERGE_3D_2, sdm845_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
-1),
};

View File

@ -1125,6 +1125,20 @@ static void mdp5_crtc_reset(struct drm_crtc *crtc)
__drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
}
static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = mdp5_crtc_destroy,
.page_flip = drm_atomic_helper_page_flip,
.reset = mdp5_crtc_reset,
.atomic_duplicate_state = mdp5_crtc_duplicate_state,
.atomic_destroy_state = mdp5_crtc_destroy_state,
.atomic_print_state = mdp5_crtc_atomic_print_state,
.get_vblank_counter = mdp5_crtc_get_vblank_counter,
.enable_vblank = msm_crtc_enable_vblank,
.disable_vblank = msm_crtc_disable_vblank,
.get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
};
static const struct drm_crtc_funcs mdp5_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = mdp5_crtc_destroy,
@ -1313,6 +1327,8 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev,
mdp5_crtc->lm_cursor_enabled = cursor_plane ? false : true;
drm_crtc_init_with_planes(dev, crtc, plane, cursor_plane,
cursor_plane ?
&mdp5_crtc_no_lm_cursor_funcs :
&mdp5_crtc_funcs, NULL);
drm_flip_work_init(&mdp5_crtc->unref_cursor_work,

Some files were not shown because too many files have changed in this diff.