Merge tag 'irq-core-2021-04-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
 "The usual updates from the irq departement:

  Core changes:

   - Provide IRQF_NO_AUTOEN as a flag for request*_irq() so that drivers
     which either use a separate mechanism to prevent auto-enable at
     request time, or have a racy mechanism which disables the interrupt
     right after request, can be cleaned up (a usage sketch follows
     after this message).

   - Get rid of the last usage of irq_create_identity_mapping() and
     remove the interface.

   - An overhaul of tasklet_disable().

     Most usage sites of tasklet_disable() are in task context, usually
     in cleanup or teardown code paths. tasklet_disable() spin-waits for
     a tasklet which is currently executing. That's not only a problem
     for PREEMPT_RT, where it can lead to a live lock when the disabling
     task preempts the softirq thread. It's also problematic in the
     context of virtualization, when the vCPU which runs the tasklet is
     scheduled out and the disabling code has to spin-wait until it's
     scheduled back in.

     There are a few code paths which invoke tasklet_disable() from
     non-sleepable context. For these, a new disable variant which still
     spin-waits is provided, which allows switching tasklet_disable() to
     a sleep-wait mechanism (both variants are sketched after this
     message). For the atomic use cases this does not solve the live
     lock issue on PREEMPT_RT. That is mitigated by blocking on the RT
     specific softirq lock.

   - The PREEMPT_RT specific implementation of softirq processing and
     local_bh_disable()/local_bh_enable().

     On RT enabled kernels soft interrupt processing always happens in
     task context, and all interrupt handlers which are not explicitly
     marked to be invoked in hard interrupt context are forced into task
     context as well. This makes it possible to protect against softirq
     processing with a per-CPU lock, which in turn allows BH disabled
     regions to be made preemptible (see the sketch after this message).

     Most of the softirq handling code is still shared. The RT/non-RT
     specific differences are addressed with a set of inline functions
     which provide the context specific functionality. The
     local_bh_disable()/local_bh_enable() mechanisms are obviously
     separate.

   - The usual set of small improvements and cleanups

  Driver changes:

   - New drivers for Nuvoton WPCM450 and DT 79rc3243x interrupt
     controllers

   - Extended functionality for MStar, STM32 and SC7280 irq chips

   - Enhanced robustness for ARM GICv3/4.1 drivers

   - The usual set of cleanups and improvements all over the place"
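
A minimal sketch of the IRQF_NO_AUTOEN pattern from the first core
change above. The foo_* names are hypothetical; IRQF_NO_AUTOEN,
request_irq() and enable_irq() are the real kernel APIs involved.
Previously a driver had to call irq_set_status_flags(irq, IRQ_NOAUTOEN)
before request_irq(), or disable_irq() right after it, which is racy:

static irqreturn_t foo_interrupt(int irq, void *dev_id)
{
        struct foo_device *foo = dev_id;

        foo_hw_ack(foo);                /* hypothetical hardware ack */
        return IRQ_HANDLED;
}

static int foo_setup_irq(struct foo_device *foo)
{
        int ret;

        /* Request the interrupt, but leave it disabled. */
        ret = request_irq(foo->irq, foo_interrupt, IRQF_NO_AUTOEN,
                          "foo", foo);
        if (ret)
                return ret;

        /* Bring the hardware into a state where interrupts are safe. */
        foo_hw_init(foo);

        /* Enable the interrupt explicitly once the driver is ready. */
        enable_irq(foo->irq);
        return 0;
}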
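
A sketch of the two tasklet_disable() flavours from the overhaul above,
again with a hypothetical foo_* driver. tasklet_disable() can now sleep
while waiting for a running tasklet, so it must be called from task
context; tasklet_disable_in_atomic() keeps the old spin-wait for the few
remaining atomic-context call sites and is not meant for new code:

/* Task context, e.g. device teardown: sleeping is fine. */
static void foo_shutdown(struct foo_device *foo)
{
        tasklet_disable(&foo->rx_tasklet);      /* may sleep-wait */
        foo_hw_quiesce(foo);
        tasklet_enable(&foo->rx_tasklet);
}

/* Atomic context: must not sleep, so keep the spin-waiting variant. */
static void foo_sync_atomic(struct foo_device *foo)
{
        tasklet_disable_in_atomic(&foo->rx_tasklet);
        foo_hw_flush(foo);
        tasklet_enable(&foo->rx_tasklet);
}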
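
The BH protected section referenced in the PREEMPT_RT item looks the
same for users on both kernel flavours; only the mechanism underneath
differs (preempt count manipulation on !RT, a per-CPU softirq lock on
RT, where the section stays preemptible). A hypothetical example:

static void foo_count_packet(struct foo_stats *stats, unsigned int len)
{
        /*
         * Exclude the softirq path (e.g. network RX) on this CPU.
         * On !RT this disables softirq processing via the preempt
         * count; on PREEMPT_RT it takes the per-CPU softirq lock and
         * the section remains preemptible.
         */
        local_bh_disable();
        stats->packets++;
        stats->bytes += len;
        local_bh_enable();
}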

* tag 'irq-core-2021-04-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (53 commits)
  irqchip/xilinx: Expose Kconfig option for Zynq/ZynqMP
  irqchip/gic-v3: Do not enable irqs when handling spurious interrups
  dt-bindings: interrupt-controller: Add IDT 79RC3243x Interrupt Controller
  irqchip: Add support for IDT 79rc3243x interrupt controller
  irqdomain: Drop references to recusive irqdomain setup
  irqdomain: Get rid of irq_create_strict_mappings()
  irqchip/jcore-aic: Kill use of irq_create_strict_mappings()
  ARM: PXA: Kill use of irq_create_strict_mappings()
  irqchip/gic-v4.1: Disable vSGI upon (GIC CPUIF < v4.1) detection
  irqchip/tb10x: Use 'fallthrough' to eliminate a warning
  genirq: Reduce irqdebug cacheline bouncing
  kernel: Initialize cpumask before parsing
  irqchip/wpcm450: Drop COMPILE_TEST
  irqchip/irq-mst: Support polarity configuration
  irqchip: Add driver for WPCM450 interrupt controller
  dt-bindings: interrupt-controller: Add nuvoton, wpcm450-aic
  dt-bindings: qcom,pdc: Add compatible for sc7280
  irqchip/stm32: Add usart instances exti direct event support
  irqchip/gic-v3: Fix OF_BAD_ADDR error handling
  irqchip/sifive-plic: Mark two global variables __ro_after_init
  ...
Commit 91552ab8ff by Linus Torvalds, 2021-04-26 09:43:16 -07:00.
71 changed files with 1006 additions and 289 deletions.


@ -0,0 +1,48 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interrupt-controller/idt,32434-pic.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: IDT 79RC32434 Interrupt Controller Device Tree Bindings
maintainers:
- Thomas Bogendoerfer <tsbogend@alpha.franken.de>
allOf:
- $ref: /schemas/interrupt-controller.yaml#
properties:
"#interrupt-cells":
const: 1
compatible:
const: idt,32434-pic
reg:
maxItems: 1
interrupt-controller: true
required:
- "#interrupt-cells"
- compatible
- reg
- interrupt-controller
additionalProperties: false
examples:
- |
idtpic3: interrupt-controller@3800c {
compatible = "idt,32434-pic";
reg = <0x3800c 0x0c>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&cpuintc>;
interrupts = <3>;
};
...


@ -0,0 +1,39 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/interrupt-controller/nuvoton,wpcm450-aic.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Nuvoton WPCM450 Advanced Interrupt Controller bindings
maintainers:
- Jonathan Neuschäfer <j.neuschaefer@gmx.net>
properties:
'#interrupt-cells':
const: 2
compatible:
const: nuvoton,wpcm450-aic
interrupt-controller: true
reg:
maxItems: 1
additionalProperties: false
required:
- '#interrupt-cells'
- compatible
- reg
- interrupt-controller
examples:
- |
aic: interrupt-controller@b8002000 {
compatible = "nuvoton,wpcm450-aic";
reg = <0xb8002000 0x1000>;
interrupt-controller;
#interrupt-cells = <2>;
};


@ -19,6 +19,7 @@ Properties:
Value type: <string>
Definition: Should contain "qcom,<soc>-pdc" and "qcom,pdc"
- "qcom,sc7180-pdc": For SC7180
- "qcom,sc7280-pdc": For SC7280
- "qcom,sdm845-pdc": For SDM845
- "qcom,sdm8250-pdc": For SM8250
- "qcom,sdm8350-pdc": For SM8350


@ -147,22 +147,20 @@ static int cplds_probe(struct platform_device *pdev)
}
irq_set_irq_wake(fpga->irq, 1);
fpga->irqdomain = irq_domain_add_linear(pdev->dev.of_node,
CPLDS_NB_IRQ,
&cplds_irq_domain_ops, fpga);
if (base_irq)
fpga->irqdomain = irq_domain_add_legacy(pdev->dev.of_node,
CPLDS_NB_IRQ,
base_irq, 0,
&cplds_irq_domain_ops,
fpga);
else
fpga->irqdomain = irq_domain_add_linear(pdev->dev.of_node,
CPLDS_NB_IRQ,
&cplds_irq_domain_ops,
fpga);
if (!fpga->irqdomain)
return -ENODEV;
if (base_irq) {
ret = irq_create_strict_mappings(fpga->irqdomain, base_irq, 0,
CPLDS_NB_IRQ);
if (ret) {
dev_err(&pdev->dev, "couldn't create the irq mapping %d..%d\n",
base_irq, base_irq + CPLDS_NB_IRQ);
return ret;
}
}
return 0;
}


@ -86,7 +86,7 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
}
break;
case GICD_TYPER2:
if (kvm_vgic_global_state.has_gicv4_1)
if (kvm_vgic_global_state.has_gicv4_1 && gic_cpuif_has_vsgi())
value = GICD_TYPER2_nASSGIcap;
break;
case GICD_IIDR:
@ -119,7 +119,7 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
dist->enabled = val & GICD_CTLR_ENABLE_SS_G1;
/* Not a GICv4.1? No HW SGIs */
if (!kvm_vgic_global_state.has_gicv4_1)
if (!kvm_vgic_global_state.has_gicv4_1 || !gic_cpuif_has_vsgi())
val &= ~GICD_CTLR_nASSGIreq;
/* Dist stays enabled? nASSGIreq is RO */


@ -276,10 +276,6 @@ asmlinkage void plat_irq_dispatch(void)
}
#ifdef CONFIG_CPU_XLP
static const struct irq_domain_ops xlp_pic_irq_domain_ops = {
.xlate = irq_domain_xlate_onetwocell,
};
static int __init xlp_of_pic_init(struct device_node *node,
struct device_node *parent)
{
@ -324,7 +320,7 @@ static int __init xlp_of_pic_init(struct device_node *node,
xlp_pic_domain = irq_domain_add_legacy(node, n_picirqs,
nlm_irq_to_xirq(socid, PIC_IRQ_BASE), PIC_IRQ_BASE,
&xlp_pic_irq_domain_ops, NULL);
&irq_domain_simple_ops, NULL);
if (xlp_pic_domain == NULL) {
pr_err("PIC %pOFn: Creating legacy domain failed!\n", node);
return -EINVAL;


@ -2054,7 +2054,7 @@ static int eni_send(struct atm_vcc *vcc,struct sk_buff *skb)
}
submitted++;
ATM_SKB(skb)->vcc = vcc;
tasklet_disable(&ENI_DEV(vcc->dev)->task);
tasklet_disable_in_atomic(&ENI_DEV(vcc->dev)->task);
res = do_tx(skb);
tasklet_enable(&ENI_DEV(vcc->dev)->task);
if (res == enq_ok) return 0;


@ -2545,7 +2545,7 @@ static int ohci_cancel_packet(struct fw_card *card, struct fw_packet *packet)
struct driver_data *driver_data = packet->driver_data;
int ret = -ENOENT;
tasklet_disable(&ctx->tasklet);
tasklet_disable_in_atomic(&ctx->tasklet);
if (packet->ack != 0)
goto out;
@ -3465,7 +3465,7 @@ static int ohci_flush_iso_completions(struct fw_iso_context *base)
struct iso_context *ctx = container_of(base, struct iso_context, base);
int ret = 0;
tasklet_disable(&ctx->context.tasklet);
tasklet_disable_in_atomic(&ctx->context.tasklet);
if (!test_and_set_bit_lock(0, &ctx->flushing_completions)) {
context_tasklet((unsigned long)&ctx->context);


@ -105,7 +105,7 @@ static inline bool tasklet_is_locked(const struct tasklet_struct *t)
static inline void __tasklet_disable_sync_once(struct tasklet_struct *t)
{
if (!atomic_fetch_inc(&t->count))
tasklet_unlock_wait(t);
tasklet_unlock_spin_wait(t);
}
static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)


@ -279,8 +279,13 @@ config XTENSA_MX
select GENERIC_IRQ_EFFECTIVE_AFF_MASK
config XILINX_INTC
bool
bool "Xilinx Interrupt Controller IP"
depends on MICROBLAZE || ARCH_ZYNQ || ARCH_ZYNQMP
select IRQ_DOMAIN
help
Support for the Xilinx Interrupt Controller IP core.
This is used as a primary controller with MicroBlaze and can also
be used as a secondary chained controller on other platforms.
config IRQ_CROSSBAR
bool
@ -577,4 +582,15 @@ config MST_IRQ
help
Support MStar Interrupt Controller.
config WPCM450_AIC
bool "Nuvoton WPCM450 Advanced Interrupt Controller"
depends on ARCH_WPCM450
help
Support for the interrupt controller in the Nuvoton WPCM450 BMC SoC.
config IRQ_IDT3243X
bool
select GENERIC_IRQ_CHIP
select IRQ_DOMAIN
endmenu


@ -113,3 +113,5 @@ obj-$(CONFIG_LOONGSON_PCH_MSI) += irq-loongson-pch-msi.o
obj-$(CONFIG_MST_IRQ) += irq-mst-intc.o
obj-$(CONFIG_SL28CPLD_INTC) += irq-sl28cpld.o
obj-$(CONFIG_MACH_REALTEK_RTL) += irq-realtek-rtl.o
obj-$(CONFIG_WPCM450_AIC) += irq-wpcm450-aic.o
obj-$(CONFIG_IRQ_IDT3243X) += irq-idt3243x.o


@ -71,7 +71,7 @@ static void vic_init_hw(struct aspeed_vic *vic)
writel(0, vic->base + AVIC_INT_SELECT);
writel(0, vic->base + AVIC_INT_SELECT + 4);
/* Some interrupts have a programable high/low level trigger
/* Some interrupts have a programmable high/low level trigger
* (4 GPIO direct inputs), for now we assume this was configured
* by firmware. We read which ones are edge now.
*/
@ -203,7 +203,7 @@ static int __init avic_of_init(struct device_node *node,
}
vic->base = regs;
/* Initialize soures, all masked */
/* Initialize sources, all masked */
vic_init_hw(vic);
/* Ready to receive interrupts */


@ -309,7 +309,7 @@ static int __init bcm7120_l2_intc_probe(struct device_node *dn,
if (data->can_wake) {
/* This IRQ chip can wake the system, set all
* relevant child interupts in wake_enabled mask
* relevant child interrupts in wake_enabled mask
*/
gc->wake_enabled = 0xffffffff;
gc->wake_enabled &= ~gc->unused;


@ -176,7 +176,7 @@ gx_intc_init(struct device_node *node, struct device_node *parent)
writel(0x0, reg_base + GX_INTC_NEN63_32);
/*
* Initial mask reg with all unmasked, because we only use enalbe reg
* Initial mask reg with all unmasked, because we only use enable reg
*/
writel(0x0, reg_base + GX_INTC_NMASK31_00);
writel(0x0, reg_base + GX_INTC_NMASK63_32);


@ -371,7 +371,7 @@ static int __init gicv2m_init_one(struct fwnode_handle *fwnode,
* the MSI data is the absolute value within the range from
* spi_start to (spi_start + num_spis).
*
* Broadom NS2 GICv2m implementation has an erratum where the MSI data
* Broadcom NS2 GICv2m implementation has an erratum where the MSI data
* is 'spi_number - 32'
*
* Reading that register fails on the Graviton implementation


@ -1492,7 +1492,7 @@ static void its_vlpi_set_doorbell(struct irq_data *d, bool enable)
*
* Ideally, we'd issue a VMAPTI to set the doorbell to its LPI
* value or to 1023, depending on the enable bit. But that
* would be issueing a mapping for an /existing/ DevID+EventID
* would be issuing a mapping for an /existing/ DevID+EventID
* pair, which is UNPREDICTABLE. Instead, let's issue a VMOVI
* to the /same/ vPE, using this opportunity to adjust the
* doorbell. Mouahahahaha. We loves it, Precious.
@ -3122,7 +3122,7 @@ static void its_cpu_init_lpis(void)
/*
* It's possible for CPU to receive VLPIs before it is
* sheduled as a vPE, especially for the first CPU, and the
* scheduled as a vPE, especially for the first CPU, and the
* VLPI with INTID larger than 2^(IDbits+1) will be considered
* as out of range and dropped by GIC.
* So we initialize IDbits to known value to avoid VLPI drop.
@ -3616,7 +3616,7 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq,
/*
* If all interrupts have been freed, start mopping the
* floor. This is conditionned on the device not being shared.
* floor. This is conditioned on the device not being shared.
*/
if (!its_dev->shared &&
bitmap_empty(its_dev->event_map.lpi_map,
@ -4194,7 +4194,7 @@ static int its_sgi_set_affinity(struct irq_data *d,
{
/*
* There is no notion of affinity for virtual SGIs, at least
* not on the host (since they can only be targetting a vPE).
* not on the host (since they can only be targeting a vPE).
* Tell the kernel we've done whatever it asked for.
*/
irq_data_update_effective_affinity(d, mask_val);
@ -4239,7 +4239,7 @@ static int its_sgi_get_irqchip_state(struct irq_data *d,
/*
* Locking galore! We can race against two different events:
*
* - Concurent vPE affinity change: we must make sure it cannot
* - Concurrent vPE affinity change: we must make sure it cannot
* happen, or we'll talk to the wrong redistributor. This is
* identical to what happens with vLPIs.
*


@ -303,7 +303,7 @@ int __init mbi_init(struct fwnode_handle *fwnode, struct irq_domain *parent)
reg = of_get_property(np, "mbi-alias", NULL);
if (reg) {
mbi_phys_base = of_translate_address(np, reg);
if (mbi_phys_base == OF_BAD_ADDR) {
if (mbi_phys_base == (phys_addr_t)OF_BAD_ADDR) {
ret = -ENXIO;
goto err_free_mbi;
}


@ -648,6 +648,10 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
irqnr = gic_read_iar();
/* Check for special IDs first */
if ((irqnr >= 1020 && irqnr <= 1023))
return;
if (gic_supports_nmi() &&
unlikely(gic_read_rpr() == GICD_INT_NMI_PRI)) {
gic_handle_nmi(irqnr, regs);
@ -659,10 +663,6 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
gic_arch_enable_irqs();
}
/* Check for special IDs first */
if ((irqnr >= 1020 && irqnr <= 1023))
return;
if (static_branch_likely(&supports_deactivate_key))
gic_write_eoir(irqnr);
else
@ -1379,7 +1379,7 @@ static int gic_irq_domain_translate(struct irq_domain *d,
/*
* Make it clear that broken DTs are... broken.
* Partitionned PPIs are an unfortunate exception.
* Partitioned PPIs are an unfortunate exception.
*/
WARN_ON(*type == IRQ_TYPE_NONE &&
fwspec->param[0] != GIC_IRQ_TYPE_PARTITION);


@ -87,17 +87,40 @@ static struct irq_domain *gic_domain;
static const struct irq_domain_ops *vpe_domain_ops;
static const struct irq_domain_ops *sgi_domain_ops;
#ifdef CONFIG_ARM64
#include <asm/cpufeature.h>
bool gic_cpuif_has_vsgi(void)
{
unsigned long fld, reg = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64PFR0_GIC_SHIFT);
return fld >= 0x3;
}
#else
bool gic_cpuif_has_vsgi(void)
{
return false;
}
#endif
static bool has_v4_1(void)
{
return !!sgi_domain_ops;
}
static bool has_v4_1_sgi(void)
{
return has_v4_1() && gic_cpuif_has_vsgi();
}
static int its_alloc_vcpu_sgis(struct its_vpe *vpe, int idx)
{
char *name;
int sgi_base;
if (!has_v4_1())
if (!has_v4_1_sgi())
return 0;
name = kasprintf(GFP_KERNEL, "GICv4-sgi-%d", task_pid_nr(current));
@ -182,7 +205,7 @@ static void its_free_sgi_irqs(struct its_vm *vm)
{
int i;
if (!has_v4_1())
if (!has_v4_1_sgi())
return;
for (i = 0; i < vm->nr_vpes; i++) {


@ -1,9 +1,9 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Hisilicon HiP04 INTC
* HiSilicon HiP04 INTC
*
* Copyright (C) 2002-2014 ARM Limited.
* Copyright (c) 2013-2014 Hisilicon Ltd.
* Copyright (c) 2013-2014 HiSilicon Ltd.
* Copyright (c) 2013-2014 Linaro Ltd.
*
* Interrupt architecture for the HIP04 INTC:


@ -0,0 +1,124 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for IDT/Renesas 79RC3243x Interrupt Controller.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#define IDT_PIC_NR_IRQS 32
#define IDT_PIC_IRQ_PEND 0x00
#define IDT_PIC_IRQ_MASK 0x08
struct idt_pic_data {
void __iomem *base;
struct irq_domain *irq_domain;
struct irq_chip_generic *gc;
};
static void idt_irq_dispatch(struct irq_desc *desc)
{
struct idt_pic_data *idtpic = irq_desc_get_handler_data(desc);
struct irq_chip *host_chip = irq_desc_get_chip(desc);
u32 pending, hwirq, virq;
chained_irq_enter(host_chip, desc);
pending = irq_reg_readl(idtpic->gc, IDT_PIC_IRQ_PEND);
pending &= ~idtpic->gc->mask_cache;
while (pending) {
hwirq = __fls(pending);
virq = irq_linear_revmap(idtpic->irq_domain, hwirq);
if (virq)
generic_handle_irq(virq);
pending &= ~(1 << hwirq);
}
chained_irq_exit(host_chip, desc);
}
static int idt_pic_init(struct device_node *of_node, struct device_node *parent)
{
struct irq_domain *domain;
struct idt_pic_data *idtpic;
struct irq_chip_generic *gc;
struct irq_chip_type *ct;
unsigned int parent_irq;
int ret = 0;
idtpic = kzalloc(sizeof(*idtpic), GFP_KERNEL);
if (!idtpic) {
ret = -ENOMEM;
goto out_err;
}
parent_irq = irq_of_parse_and_map(of_node, 0);
if (!parent_irq) {
pr_err("Failed to map parent IRQ!\n");
ret = -EINVAL;
goto out_free;
}
idtpic->base = of_iomap(of_node, 0);
if (!idtpic->base) {
pr_err("Failed to map base address!\n");
ret = -ENOMEM;
goto out_unmap_irq;
}
domain = irq_domain_add_linear(of_node, IDT_PIC_NR_IRQS,
&irq_generic_chip_ops, NULL);
if (!domain) {
pr_err("Failed to add irqdomain!\n");
ret = -ENOMEM;
goto out_iounmap;
}
idtpic->irq_domain = domain;
ret = irq_alloc_domain_generic_chips(domain, 32, 1, "IDTPIC",
handle_level_irq, 0,
IRQ_NOPROBE | IRQ_LEVEL, 0);
if (ret)
goto out_domain_remove;
gc = irq_get_domain_generic_chip(domain, 0);
gc->reg_base = idtpic->base;
gc->private = idtpic;
ct = gc->chip_types;
ct->regs.mask = IDT_PIC_IRQ_MASK;
ct->chip.irq_mask = irq_gc_mask_set_bit;
ct->chip.irq_unmask = irq_gc_mask_clr_bit;
idtpic->gc = gc;
/* Mask interrupts. */
writel(0xffffffff, idtpic->base + IDT_PIC_IRQ_MASK);
gc->mask_cache = 0xffffffff;
irq_set_chained_handler_and_data(parent_irq,
idt_irq_dispatch, idtpic);
return 0;
out_domain_remove:
irq_domain_remove(domain);
out_iounmap:
iounmap(idtpic->base);
out_unmap_irq:
irq_dispose_mapping(parent_irq);
out_free:
kfree(idtpic);
out_err:
pr_err("Failed to initialize! (errno = %d)\n", ret);
return ret;
}
IRQCHIP_DECLARE(idt_pic, "idt,32434-pic", idt_pic_init);


@ -100,11 +100,11 @@ static int __init aic_irq_of_init(struct device_node *node,
jcore_aic.irq_unmask = noop;
jcore_aic.name = "AIC";
domain = irq_domain_add_linear(node, dom_sz, &jcore_aic_irqdomain_ops,
domain = irq_domain_add_legacy(node, dom_sz - min_irq, min_irq, min_irq,
&jcore_aic_irqdomain_ops,
&jcore_aic);
if (!domain)
return -ENOMEM;
irq_create_strict_mappings(domain, min_irq, min_irq, dom_sz - min_irq);
return 0;
}


@ -163,7 +163,7 @@ static void pch_pic_reset(struct pch_pic *priv)
int i;
for (i = 0; i < PIC_COUNT; i++) {
/* Write vectore ID */
/* Write vectored ID */
writeb(priv->ht_vec_base + i, priv->base + PCH_INT_HTVEC(i));
/* Hardcode route to HT0 Lo */
writeb(1, priv->base + PCH_INT_ROUTE(i));


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2015 Hisilicon Limited, All Rights Reserved.
* Copyright (C) 2015 HiSilicon Limited, All Rights Reserved.
* Author: Jun Ma <majun258@huawei.com>
* Author: Yun Wu <wuyun.wu@huawei.com>
*/
@ -390,4 +390,4 @@ module_platform_driver(mbigen_platform_driver);
MODULE_AUTHOR("Jun Ma <majun258@huawei.com>");
MODULE_AUTHOR("Yun Wu <wuyun.wu@huawei.com>");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hisilicon MBI Generator driver");
MODULE_DESCRIPTION("HiSilicon MBI Generator driver");


@ -227,7 +227,7 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
/*
* Get the hwirq number assigned to this channel through
* a pointer the channel_irq table. The added benifit of this
* a pointer the channel_irq table. The added benefit of this
* method is that we can also retrieve the channel index with
* it, using the table base.
*/


@ -13,15 +13,27 @@
#include <linux/of_irq.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/syscore_ops.h>
#define INTC_MASK 0x0
#define INTC_EOI 0x20
#define MST_INTC_MAX_IRQS 64
#define INTC_MASK 0x0
#define INTC_REV_POLARITY 0x10
#define INTC_EOI 0x20
#ifdef CONFIG_PM_SLEEP
static LIST_HEAD(mst_intc_list);
#endif
struct mst_intc_chip_data {
raw_spinlock_t lock;
unsigned int irq_start, nr_irqs;
void __iomem *base;
bool no_eoi;
#ifdef CONFIG_PM_SLEEP
struct list_head entry;
u16 saved_polarity_conf[DIV_ROUND_UP(MST_INTC_MAX_IRQS, 16)];
#endif
};
static void mst_set_irq(struct irq_data *d, u32 offset)
@ -78,6 +90,24 @@ static void mst_intc_eoi_irq(struct irq_data *d)
irq_chip_eoi_parent(d);
}
static int mst_irq_chip_set_type(struct irq_data *data, unsigned int type)
{
switch (type) {
case IRQ_TYPE_LEVEL_LOW:
case IRQ_TYPE_EDGE_FALLING:
mst_set_irq(data, INTC_REV_POLARITY);
break;
case IRQ_TYPE_LEVEL_HIGH:
case IRQ_TYPE_EDGE_RISING:
mst_clear_irq(data, INTC_REV_POLARITY);
break;
default:
return -EINVAL;
}
return irq_chip_set_type_parent(data, IRQ_TYPE_LEVEL_HIGH);
}
static struct irq_chip mst_intc_chip = {
.name = "mst-intc",
.irq_mask = mst_intc_mask_irq,
@ -87,13 +117,62 @@ static struct irq_chip mst_intc_chip = {
.irq_set_irqchip_state = irq_chip_set_parent_state,
.irq_set_affinity = irq_chip_set_affinity_parent,
.irq_set_vcpu_affinity = irq_chip_set_vcpu_affinity_parent,
.irq_set_type = irq_chip_set_type_parent,
.irq_set_type = mst_irq_chip_set_type,
.irq_retrigger = irq_chip_retrigger_hierarchy,
.flags = IRQCHIP_SET_TYPE_MASKED |
IRQCHIP_SKIP_SET_WAKE |
IRQCHIP_MASK_ON_SUSPEND,
};
#ifdef CONFIG_PM_SLEEP
static void mst_intc_polarity_save(struct mst_intc_chip_data *cd)
{
int i;
void __iomem *addr = cd->base + INTC_REV_POLARITY;
for (i = 0; i < DIV_ROUND_UP(cd->nr_irqs, 16); i++)
cd->saved_polarity_conf[i] = readw_relaxed(addr + i * 4);
}
static void mst_intc_polarity_restore(struct mst_intc_chip_data *cd)
{
int i;
void __iomem *addr = cd->base + INTC_REV_POLARITY;
for (i = 0; i < DIV_ROUND_UP(cd->nr_irqs, 16); i++)
writew_relaxed(cd->saved_polarity_conf[i], addr + i * 4);
}
static void mst_irq_resume(void)
{
struct mst_intc_chip_data *cd;
list_for_each_entry(cd, &mst_intc_list, entry)
mst_intc_polarity_restore(cd);
}
static int mst_irq_suspend(void)
{
struct mst_intc_chip_data *cd;
list_for_each_entry(cd, &mst_intc_list, entry)
mst_intc_polarity_save(cd);
return 0;
}
static struct syscore_ops mst_irq_syscore_ops = {
.suspend = mst_irq_suspend,
.resume = mst_irq_resume,
};
static int __init mst_irq_pm_init(void)
{
register_syscore_ops(&mst_irq_syscore_ops);
return 0;
}
late_initcall(mst_irq_pm_init);
#endif
static int mst_intc_domain_translate(struct irq_domain *d,
struct irq_fwspec *fwspec,
unsigned long *hwirq,
@ -145,6 +224,15 @@ static int mst_intc_domain_alloc(struct irq_domain *domain, unsigned int virq,
parent_fwspec = *fwspec;
parent_fwspec.fwnode = domain->parent->fwnode;
parent_fwspec.param[1] = cd->irq_start + hwirq;
/*
* mst-intc latch the interrupt request if it's edge triggered,
* so the output signal to parent GIC is always level sensitive.
* And if the irq signal is active low, configure it to active high
* to meet GIC SPI spec in mst_irq_chip_set_type via REV_POLARITY bit.
*/
parent_fwspec.param[2] = IRQ_TYPE_LEVEL_HIGH;
return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &parent_fwspec);
}
@ -193,6 +281,10 @@ static int __init mst_intc_of_init(struct device_node *dn,
return -ENOMEM;
}
#ifdef CONFIG_PM_SLEEP
INIT_LIST_HEAD(&cd->entry);
list_add_tail(&cd->entry, &mst_intc_list);
#endif
return 0;
}


@ -217,7 +217,7 @@ static void mtk_cirq_resume(void)
{
u32 value;
/* flush recored interrupts, will send signals to parent controller */
/* flush recorded interrupts, will send signals to parent controller */
value = readl_relaxed(cirq_data->base + CIRQ_CONTROL);
writel_relaxed(value | CIRQ_FLUSH, cirq_data->base + CIRQ_CONTROL);


@ -58,7 +58,7 @@ struct icoll_priv {
static struct icoll_priv icoll_priv;
static struct irq_domain *icoll_domain;
/* calculate bit offset depending on number of intterupt per register */
/* calculate bit offset depending on number of interrupt per register */
static u32 icoll_intr_bitshift(struct irq_data *d, u32 bit)
{
/*
@ -68,7 +68,7 @@ static u32 icoll_intr_bitshift(struct irq_data *d, u32 bit)
return bit << ((d->hwirq & 3) << 3);
}
/* calculate mem offset depending on number of intterupt per register */
/* calculate mem offset depending on number of interrupt per register */
static void __iomem *icoll_intr_reg(struct irq_data *d)
{
/* offset = hwirq / intr_per_reg * 0x10 */


@ -77,8 +77,8 @@ struct plic_handler {
void __iomem *enable_base;
struct plic_priv *priv;
};
static int plic_parent_irq;
static bool plic_cpuhp_setup_done;
static int plic_parent_irq __ro_after_init;
static bool plic_cpuhp_setup_done __ro_after_init;
static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
static inline void plic_toggle(struct plic_handler *handler,


@ -193,7 +193,14 @@ static const struct stm32_desc_irq stm32mp1_desc_irq[] = {
{ .exti = 23, .irq_parent = 72, .chip = &stm32_exti_h_chip_direct },
{ .exti = 24, .irq_parent = 95, .chip = &stm32_exti_h_chip_direct },
{ .exti = 25, .irq_parent = 107, .chip = &stm32_exti_h_chip_direct },
{ .exti = 26, .irq_parent = 37, .chip = &stm32_exti_h_chip_direct },
{ .exti = 27, .irq_parent = 38, .chip = &stm32_exti_h_chip_direct },
{ .exti = 28, .irq_parent = 39, .chip = &stm32_exti_h_chip_direct },
{ .exti = 29, .irq_parent = 71, .chip = &stm32_exti_h_chip_direct },
{ .exti = 30, .irq_parent = 52, .chip = &stm32_exti_h_chip_direct },
{ .exti = 31, .irq_parent = 53, .chip = &stm32_exti_h_chip_direct },
{ .exti = 32, .irq_parent = 82, .chip = &stm32_exti_h_chip_direct },
{ .exti = 33, .irq_parent = 83, .chip = &stm32_exti_h_chip_direct },
{ .exti = 47, .irq_parent = 93, .chip = &stm32_exti_h_chip_direct },
{ .exti = 48, .irq_parent = 138, .chip = &stm32_exti_h_chip_direct },
{ .exti = 50, .irq_parent = 139, .chip = &stm32_exti_h_chip_direct },


@ -189,7 +189,7 @@ static void __exception_irq_entry sun4i_handle_irq(struct pt_regs *regs)
* 3) spurious irq
* So if we immediately get a reading of 0, check the irq-pending reg
* to differentiate between 2 and 3. We only do this once to avoid
* the extra check in the common case of 1 hapening after having
* the extra check in the common case of 1 happening after having
* read the vector-reg once.
*/
hwirq = readl(irq_ic_data->irq_base + SUN4I_IRQ_VECTOR_REG) >> 2;


@ -60,6 +60,7 @@ static int tb10x_irq_set_type(struct irq_data *data, unsigned int flow_type)
break;
case IRQ_TYPE_NONE:
flow_type = IRQ_TYPE_LEVEL_LOW;
fallthrough;
case IRQ_TYPE_LEVEL_LOW:
mod ^= im;
pol ^= im;


@ -78,7 +78,7 @@ struct ti_sci_inta_vint_desc {
* struct ti_sci_inta_irq_domain - Structure representing a TISCI based
* Interrupt Aggregator IRQ domain.
* @sci: Pointer to TISCI handle
* @vint: TISCI resource pointer representing IA inerrupts.
* @vint: TISCI resource pointer representing IA interrupts.
* @global_event: TISCI resource pointer representing global events.
* @vint_list: List of the vints active in the system
* @vint_mutex: Mutex to protect vint_list


@ -163,7 +163,7 @@ static struct syscore_ops vic_syscore_ops = {
};
/**
* vic_pm_init - initicall to register VIC pm
* vic_pm_init - initcall to register VIC pm
*
* This is called via late_initcall() to register
* the resources for the VICs due to the early
@ -397,7 +397,7 @@ static void __init vic_clear_interrupts(void __iomem *base)
/*
* The PL190 cell from ARM has been modified by ST to handle 64 interrupts.
* The original cell has 32 interrupts, while the modified one has 64,
* replocating two blocks 0x00..0x1f in 0x20..0x3f. In that case
* replicating two blocks 0x00..0x1f in 0x20..0x3f. In that case
* the probe function is called twice, with base set to offset 000
* and 020 within the page. We call this "second block".
*/


@ -0,0 +1,161 @@
// SPDX-License-Identifier: GPL-2.0-only
// Copyright 2021 Jonathan Neuschäfer
#include <linux/irqchip.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/printk.h>
#include <asm/exception.h>
#define AIC_SCR(x) ((x)*4) /* Source control registers */
#define AIC_GEN 0x84 /* Interrupt group enable control register */
#define AIC_GRSR 0x88 /* Interrupt group raw status register */
#define AIC_IRSR 0x100 /* Interrupt raw status register */
#define AIC_IASR 0x104 /* Interrupt active status register */
#define AIC_ISR 0x108 /* Interrupt status register */
#define AIC_IPER 0x10c /* Interrupt priority encoding register */
#define AIC_ISNR 0x110 /* Interrupt source number register */
#define AIC_IMR 0x114 /* Interrupt mask register */
#define AIC_OISR 0x118 /* Output interrupt status register */
#define AIC_MECR 0x120 /* Mask enable command register */
#define AIC_MDCR 0x124 /* Mask disable command register */
#define AIC_SSCR 0x128 /* Source set command register */
#define AIC_SCCR 0x12c /* Source clear command register */
#define AIC_EOSCR 0x130 /* End of service command register */
#define AIC_SCR_SRCTYPE_LOW_LEVEL (0 << 6)
#define AIC_SCR_SRCTYPE_HIGH_LEVEL (1 << 6)
#define AIC_SCR_SRCTYPE_NEG_EDGE (2 << 6)
#define AIC_SCR_SRCTYPE_POS_EDGE (3 << 6)
#define AIC_SCR_PRIORITY(x) (x)
#define AIC_SCR_PRIORITY_MASK 0x7
#define AIC_NUM_IRQS 32
struct wpcm450_aic {
void __iomem *regs;
struct irq_domain *domain;
};
static struct wpcm450_aic *aic;
static void wpcm450_aic_init_hw(void)
{
int i;
/* Disable (mask) all interrupts */
writel(0xffffffff, aic->regs + AIC_MDCR);
/*
* Make sure the interrupt controller is ready to serve new interrupts.
* Reading from IPER indicates that the nIRQ signal may be deasserted,
* and writing to EOSCR indicates that interrupt handling has finished.
*/
readl(aic->regs + AIC_IPER);
writel(0, aic->regs + AIC_EOSCR);
/* Initialize trigger mode and priority of each interrupt source */
for (i = 0; i < AIC_NUM_IRQS; i++)
writel(AIC_SCR_SRCTYPE_HIGH_LEVEL | AIC_SCR_PRIORITY(7),
aic->regs + AIC_SCR(i));
}
static void __exception_irq_entry wpcm450_aic_handle_irq(struct pt_regs *regs)
{
int hwirq;
/* Determine the interrupt source */
/* Read IPER to signal that nIRQ can be de-asserted */
hwirq = readl(aic->regs + AIC_IPER) / 4;
handle_domain_irq(aic->domain, hwirq, regs);
}
static void wpcm450_aic_eoi(struct irq_data *d)
{
/* Signal end-of-service */
writel(0, aic->regs + AIC_EOSCR);
}
static void wpcm450_aic_mask(struct irq_data *d)
{
unsigned int mask = BIT(d->hwirq);
/* Disable (mask) the interrupt */
writel(mask, aic->regs + AIC_MDCR);
}
static void wpcm450_aic_unmask(struct irq_data *d)
{
unsigned int mask = BIT(d->hwirq);
/* Enable (unmask) the interrupt */
writel(mask, aic->regs + AIC_MECR);
}
static int wpcm450_aic_set_type(struct irq_data *d, unsigned int flow_type)
{
/*
* The hardware supports high/low level, as well as rising/falling edge
* modes, and the DT binding accommodates for that, but as long as
* other modes than high level mode are not used and can't be tested,
* they are rejected in this driver.
*/
if ((flow_type & IRQ_TYPE_SENSE_MASK) != IRQ_TYPE_LEVEL_HIGH)
return -EINVAL;
return 0;
}
static struct irq_chip wpcm450_aic_chip = {
.name = "wpcm450-aic",
.irq_eoi = wpcm450_aic_eoi,
.irq_mask = wpcm450_aic_mask,
.irq_unmask = wpcm450_aic_unmask,
.irq_set_type = wpcm450_aic_set_type,
};
static int wpcm450_aic_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hwirq)
{
if (hwirq >= AIC_NUM_IRQS)
return -EPERM;
irq_set_chip_and_handler(irq, &wpcm450_aic_chip, handle_fasteoi_irq);
irq_set_chip_data(irq, aic);
irq_set_probe(irq);
return 0;
}
static const struct irq_domain_ops wpcm450_aic_ops = {
.map = wpcm450_aic_map,
.xlate = irq_domain_xlate_twocell,
};
static int __init wpcm450_aic_of_init(struct device_node *node,
struct device_node *parent)
{
if (parent)
return -EINVAL;
aic = kzalloc(sizeof(*aic), GFP_KERNEL);
if (!aic)
return -ENOMEM;
aic->regs = of_iomap(node, 0);
if (!aic->regs) {
pr_err("Failed to map WPCM450 AIC registers\n");
return -ENOMEM;
}
wpcm450_aic_init_hw();
set_handle_irq(wpcm450_aic_handle_irq);
aic->domain = irq_domain_add_linear(node, AIC_NUM_IRQS, &wpcm450_aic_ops, aic);
return 0;
}
IRQCHIP_DECLARE(wpcm450_aic, "nuvoton,wpcm450-aic", wpcm450_aic_of_init);


@ -210,7 +210,7 @@ static int __init xilinx_intc_of_init(struct device_node *intc,
/*
* Disable all external interrupts until they are
* explicity requested.
* explicitly requested.
*/
xintc_write(irqc, IER, 0);


@ -963,7 +963,7 @@ static void tx_timeout(struct net_device *dev, unsigned int txqueue)
unsigned long flag;
netif_stop_queue(dev);
tasklet_disable(&np->tx_tasklet);
tasklet_disable_in_atomic(&np->tx_tasklet);
iowrite16(0, ioaddr + IntrEnable);
printk(KERN_WARNING "%s: Transmit timed out, TxStatus %2.2x "
"TxFrameId %2.2x,"


@ -1265,9 +1265,9 @@ jme_stop_shutdown_timer(struct jme_adapter *jme)
jwrite32f(jme, JME_APMC, apmc);
}
static void jme_link_change_tasklet(struct tasklet_struct *t)
static void jme_link_change_work(struct work_struct *work)
{
struct jme_adapter *jme = from_tasklet(jme, t, linkch_task);
struct jme_adapter *jme = container_of(work, struct jme_adapter, linkch_task);
struct net_device *netdev = jme->dev;
int rc;
@ -1510,7 +1510,7 @@ jme_intr_msi(struct jme_adapter *jme, u32 intrstat)
* all other events are ignored
*/
jwrite32(jme, JME_IEVE, intrstat);
tasklet_schedule(&jme->linkch_task);
schedule_work(&jme->linkch_task);
goto out_reenable;
}
@ -1832,7 +1832,6 @@ jme_open(struct net_device *netdev)
jme_clear_pm_disable_wol(jme);
JME_NAPI_ENABLE(jme);
tasklet_setup(&jme->linkch_task, jme_link_change_tasklet);
tasklet_setup(&jme->txclean_task, jme_tx_clean_tasklet);
tasklet_setup(&jme->rxclean_task, jme_rx_clean_tasklet);
tasklet_setup(&jme->rxempty_task, jme_rx_empty_tasklet);
@ -1920,7 +1919,7 @@ jme_close(struct net_device *netdev)
JME_NAPI_DISABLE(jme);
tasklet_kill(&jme->linkch_task);
cancel_work_sync(&jme->linkch_task);
tasklet_kill(&jme->txclean_task);
tasklet_kill(&jme->rxclean_task);
tasklet_kill(&jme->rxempty_task);
@ -3035,6 +3034,7 @@ jme_init_one(struct pci_dev *pdev,
atomic_set(&jme->rx_empty, 1);
tasklet_setup(&jme->pcc_task, jme_pcc_tasklet);
INIT_WORK(&jme->linkch_task, jme_link_change_work);
jme->dpi.cur = PCC_P1;
jme->reg_ghc = 0;


@ -411,7 +411,7 @@ struct jme_adapter {
struct tasklet_struct rxempty_task;
struct tasklet_struct rxclean_task;
struct tasklet_struct txclean_task;
struct tasklet_struct linkch_task;
struct work_struct linkch_task;
struct tasklet_struct pcc_task;
unsigned long flags;
u32 reg_txcs;


@ -251,7 +251,7 @@ void ath9k_beacon_ensure_primary_slot(struct ath_softc *sc)
int first_slot = ATH_BCBUF;
int slot;
tasklet_disable(&sc->bcon_tasklet);
tasklet_disable_in_atomic(&sc->bcon_tasklet);
/* Find first taken slot. */
for (slot = 0; slot < ATH_BCBUF; slot++) {


@ -1458,7 +1458,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
* Prevents hv_pci_onchannelcallback() from running concurrently
* in the tasklet.
*/
tasklet_disable(&channel->callback_event);
tasklet_disable_in_atomic(&channel->callback_event);
/*
* Since this function is called with IRQ locks held, can't


@ -179,6 +179,21 @@ static unsigned int __init save_reg(struct intc_desc_int *d,
return 0;
}
static bool __init intc_map(struct irq_domain *domain, int irq)
{
if (!irq_to_desc(irq) && irq_alloc_desc_at(irq, NUMA_NO_NODE) != irq) {
pr_err("uname to allocate IRQ %d\n", irq);
return false;
}
if (irq_domain_associate(domain, irq, irq)) {
pr_err("domain association failure\n");
return false;
}
return true;
}
int __init register_intc_controller(struct intc_desc *desc)
{
unsigned int i, k, smp;
@ -311,24 +326,12 @@ int __init register_intc_controller(struct intc_desc *desc)
for (i = 0; i < hw->nr_vectors; i++) {
struct intc_vect *vect = hw->vectors + i;
unsigned int irq = evt2irq(vect->vect);
int res;
if (!vect->enum_id)
continue;
res = irq_create_identity_mapping(d->domain, irq);
if (unlikely(res)) {
if (res == -EEXIST) {
res = irq_domain_associate(d->domain, irq, irq);
if (unlikely(res)) {
pr_err("domain association failure\n");
continue;
}
} else {
pr_err("can't identity map IRQ %d\n", irq);
continue;
}
}
if (!intc_map(d->domain, irq))
continue;
intc_irq_xlate_set(irq, vect->enum_id, d);
intc_register_irq(desc, d, vect->enum_id, irq);
@ -345,22 +348,8 @@ int __init register_intc_controller(struct intc_desc *desc)
* IRQ support, each vector still needs to have
* its own backing irq_desc.
*/
res = irq_create_identity_mapping(d->domain, irq2);
if (unlikely(res)) {
if (res == -EEXIST) {
res = irq_domain_associate(d->domain,
irq2, irq2);
if (unlikely(res)) {
pr_err("domain association "
"failure\n");
continue;
}
} else {
pr_err("can't identity map IRQ %d\n",
irq);
continue;
}
}
if (!intc_map(d->domain, irq2))
continue;
vect2->enum_id = 0;


@ -4,7 +4,7 @@
#include <linux/preempt.h>
#ifdef CONFIG_TRACE_IRQFLAGS
#if defined(CONFIG_PREEMPT_RT) || defined(CONFIG_TRACE_IRQFLAGS)
extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
#else
static __always_inline void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
@ -32,4 +32,10 @@ static inline void local_bh_enable(void)
__local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
}
#ifdef CONFIG_PREEMPT_RT
extern bool local_bh_blocked(void);
#else
static inline bool local_bh_blocked(void) { return false; }
#endif
#endif /* _LINUX_BH_H */


@ -6,6 +6,7 @@
#include <linux/preempt.h>
#include <linux/lockdep.h>
#include <linux/ftrace_irq.h>
#include <linux/sched.h>
#include <linux/vtime.h>
#include <asm/hardirq.h>


@ -61,6 +61,9 @@
* interrupt handler after suspending interrupts. For system
* wakeup devices users need to implement wakeup detection in
* their interrupt handlers.
* IRQF_NO_AUTOEN - Don't enable IRQ or NMI automatically when users request it.
* Users will enable it explicitly by enable_irq() or enable_nmi()
* later.
*/
#define IRQF_SHARED 0x00000080
#define IRQF_PROBE_SHARED 0x00000100
@ -74,6 +77,7 @@
#define IRQF_NO_THREAD 0x00010000
#define IRQF_EARLY_RESUME 0x00020000
#define IRQF_COND_SUSPEND 0x00040000
#define IRQF_NO_AUTOEN 0x00080000
#define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
@ -654,26 +658,21 @@ enum
TASKLET_STATE_RUN /* Tasklet is running (SMP only) */
};
#ifdef CONFIG_SMP
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
static inline int tasklet_trylock(struct tasklet_struct *t)
{
return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
}
static inline void tasklet_unlock(struct tasklet_struct *t)
{
smp_mb__before_atomic();
clear_bit(TASKLET_STATE_RUN, &(t)->state);
}
void tasklet_unlock(struct tasklet_struct *t);
void tasklet_unlock_wait(struct tasklet_struct *t);
void tasklet_unlock_spin_wait(struct tasklet_struct *t);
static inline void tasklet_unlock_wait(struct tasklet_struct *t)
{
while (test_bit(TASKLET_STATE_RUN, &(t)->state)) { barrier(); }
}
#else
#define tasklet_trylock(t) 1
#define tasklet_unlock_wait(t) do { } while (0)
#define tasklet_unlock(t) do { } while (0)
static inline int tasklet_trylock(struct tasklet_struct *t) { return 1; }
static inline void tasklet_unlock(struct tasklet_struct *t) { }
static inline void tasklet_unlock_wait(struct tasklet_struct *t) { }
static inline void tasklet_unlock_spin_wait(struct tasklet_struct *t) { }
#endif
extern void __tasklet_schedule(struct tasklet_struct *t);
@ -698,6 +697,17 @@ static inline void tasklet_disable_nosync(struct tasklet_struct *t)
smp_mb__after_atomic();
}
/*
* Do not use in new code. Disabling tasklets from atomic contexts is
* error prone and should be avoided.
*/
static inline void tasklet_disable_in_atomic(struct tasklet_struct *t)
{
tasklet_disable_nosync(t);
tasklet_unlock_spin_wait(t);
smp_mb();
}
static inline void tasklet_disable(struct tasklet_struct *t)
{
tasklet_disable_nosync(t);
@ -712,7 +722,6 @@ static inline void tasklet_enable(struct tasklet_struct *t)
}
extern void tasklet_kill(struct tasklet_struct *t);
extern void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu);
extern void tasklet_init(struct tasklet_struct *t,
void (*func)(unsigned long), unsigned long data);
extern void tasklet_setup(struct tasklet_struct *t,


@ -116,7 +116,7 @@ enum {
* IRQ_SET_MASK_NOCPY - OK, chip did update irq_common_data.affinity
* IRQ_SET_MASK_OK_DONE - Same as IRQ_SET_MASK_OK for core. Special code to
* support stacked irqchips, which indicates skipping
* all descendent irqchips.
* all descendant irqchips.
*/
enum {
IRQ_SET_MASK_OK = 0,
@ -302,7 +302,7 @@ static inline bool irqd_is_level_type(struct irq_data *d)
/*
* Must only be called of irqchip.irq_set_affinity() or low level
* hieararchy domain allocation functions.
* hierarchy domain allocation functions.
*/
static inline void irqd_set_single_target(struct irq_data *d)
{


@ -145,4 +145,6 @@ int its_init_v4(struct irq_domain *domain,
const struct irq_domain_ops *vpe_ops,
const struct irq_domain_ops *sgi_ops);
bool gic_cpuif_has_vsgi(void);
#endif


@ -32,7 +32,7 @@ struct pt_regs;
* @last_unhandled: aging timer for unhandled count
* @irqs_unhandled: stats field for spurious unhandled interrupts
* @threads_handled: stats field for deferred spurious detection of threaded handlers
* @threads_handled_last: comparator field for deferred spurious detection of theraded handlers
* @threads_handled_last: comparator field for deferred spurious detection of threaded handlers
* @lock: locking for SMP
* @affinity_hint: hint to user space for preferred irq affinity
* @affinity_notify: context for notification of affinity changes


@ -415,15 +415,6 @@ static inline unsigned int irq_linear_revmap(struct irq_domain *domain,
extern unsigned int irq_find_mapping(struct irq_domain *host,
irq_hw_number_t hwirq);
extern unsigned int irq_create_direct_mapping(struct irq_domain *host);
extern int irq_create_strict_mappings(struct irq_domain *domain,
unsigned int irq_base,
irq_hw_number_t hwirq_base, int count);
static inline int irq_create_identity_mapping(struct irq_domain *host,
irq_hw_number_t hwirq)
{
return irq_create_strict_mappings(host, hwirq, hwirq, 1);
}
extern const struct irq_domain_ops irq_domain_simple_ops;


@ -79,7 +79,11 @@
#define nmi_count() (preempt_count() & NMI_MASK)
#define hardirq_count() (preempt_count() & HARDIRQ_MASK)
#define softirq_count() (preempt_count() & SOFTIRQ_MASK)
#ifdef CONFIG_PREEMPT_RT
# define softirq_count() (current->softirq_disable_cnt & SOFTIRQ_MASK)
#else
# define softirq_count() (preempt_count() & SOFTIRQ_MASK)
#endif
#define irq_count() (nmi_count() | hardirq_count() | softirq_count())
/*


@ -334,7 +334,8 @@ static inline void rcu_preempt_sleep_check(void) { }
#define rcu_sleep_check() \
do { \
rcu_preempt_sleep_check(); \
RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map), \
if (!IS_ENABLED(CONFIG_PREEMPT_RT)) \
RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map), \
"Illegal context switch in RCU-bh read-side critical section"); \
RCU_LOCKDEP_WARN(lock_is_held(&rcu_sched_lock_map), \
"Illegal context switch in RCU-sched read-side critical section"); \


@ -1044,6 +1044,9 @@ struct task_struct {
int softirq_context;
int irq_config;
#endif
#ifdef CONFIG_PREEMPT_RT
int softirq_disable_cnt;
#endif
#ifdef CONFIG_LOCKDEP
# define MAX_LOCK_DEPTH 48UL


@ -761,7 +761,7 @@ EXPORT_SYMBOL_GPL(handle_fasteoi_nmi);
* handle_edge_irq - edge type IRQ handler
* @desc: the interrupt description structure for this irq
*
* Interrupt occures on the falling and/or rising edge of a hardware
* Interrupt occurs on the falling and/or rising edge of a hardware
* signal. The occurrence is latched into the irq controller hardware
* and must be acked in order to be reenabled. After the ack another
* interrupt can happen on the same source even before the first one
@ -808,7 +808,7 @@ void handle_edge_irq(struct irq_desc *desc)
/*
* When another irq arrived while we were handling
* one, we could have masked the irq.
* Renable it, if it was not disabled in meantime.
* Reenable it, if it was not disabled in meantime.
*/
if (unlikely(desc->istate & IRQS_PENDING)) {
if (!irqd_irq_disabled(&desc->irq_data) &&
@ -1419,7 +1419,7 @@ EXPORT_SYMBOL_GPL(irq_chip_eoi_parent);
* @dest: The affinity mask to set
* @force: Flag to enforce setting (disable online checks)
*
* Conditinal, as the underlying parent chip might not implement it.
* Conditional, as the underlying parent chip might not implement it.
*/
int irq_chip_set_affinity_parent(struct irq_data *data,
const struct cpumask *dest, bool force)
@ -1531,7 +1531,7 @@ EXPORT_SYMBOL_GPL(irq_chip_release_resources_parent);
#endif
/**
* irq_chip_compose_msi_msg - Componse msi message for a irq chip
* irq_chip_compose_msi_msg - Compose msi message for a irq chip
* @data: Pointer to interrupt specific data
* @msg: Pointer to the MSI message
*


@ -13,7 +13,7 @@
/*
* What should we do if we get a hw irq event on an illegal vector?
* Each architecture has to answer this themself.
* Each architecture has to answer this themselves.
*/
static void ack_bad(struct irq_data *data)
{


@ -107,7 +107,7 @@ free_descs:
* @irq: linux irq number to be destroyed
* @dest: cpumask of cpus which should have the IPI removed
*
* The IPIs allocated with irq_reserve_ipi() are retuerned to the system
* The IPIs allocated with irq_reserve_ipi() are returned to the system
* destroying all virqs associated with them.
*
* Return 0 on success or error code on failure.


@ -24,10 +24,6 @@ struct irq_sim_irq_ctx {
struct irq_sim_work_ctx *work_ctx;
};
struct irq_sim_devres {
struct irq_domain *domain;
};
static void irq_sim_irqmask(struct irq_data *data)
{
struct irq_sim_irq_ctx *irq_ctx = irq_data_get_irq_chip_data(data);
@ -216,11 +212,11 @@ void irq_domain_remove_sim(struct irq_domain *domain)
}
EXPORT_SYMBOL_GPL(irq_domain_remove_sim);
static void devm_irq_domain_release_sim(struct device *dev, void *res)
static void devm_irq_domain_remove_sim(void *data)
{
struct irq_sim_devres *this = res;
struct irq_domain *domain = data;
irq_domain_remove_sim(this->domain);
irq_domain_remove_sim(domain);
}
/**
@ -238,20 +234,17 @@ struct irq_domain *devm_irq_domain_create_sim(struct device *dev,
struct fwnode_handle *fwnode,
unsigned int num_irqs)
{
struct irq_sim_devres *dr;
struct irq_domain *domain;
int ret;
dr = devres_alloc(devm_irq_domain_release_sim,
sizeof(*dr), GFP_KERNEL);
if (!dr)
return ERR_PTR(-ENOMEM);
domain = irq_domain_create_sim(fwnode, num_irqs);
if (IS_ERR(domain))
return domain;
dr->domain = irq_domain_create_sim(fwnode, num_irqs);
if (IS_ERR(dr->domain)) {
devres_free(dr);
return dr->domain;
}
ret = devm_add_action_or_reset(dev, devm_irq_domain_remove_sim, domain);
if (ret)
return ERR_PTR(ret);
devres_add(dev, dr);
return dr->domain;
return domain;
}
EXPORT_SYMBOL_GPL(devm_irq_domain_create_sim);


@ -31,7 +31,7 @@ static int __init irq_affinity_setup(char *str)
cpulist_parse(str, irq_default_affinity);
/*
* Set at least the boot cpu. We don't want to end up with
* bugreports caused by random comandline masks
* bugreports caused by random commandline masks
*/
cpumask_set_cpu(smp_processor_id(), irq_default_affinity);
return 1;


@ -62,7 +62,7 @@ EXPORT_SYMBOL_GPL(irqchip_fwnode_ops);
* @name: Optional user provided domain name
* @pa: Optional user-provided physical address
*
* Allocate a struct irqchip_fwid, and return a poiner to the embedded
* Allocate a struct irqchip_fwid, and return a pointer to the embedded
* fwnode_handle (or NULL on failure).
*
* Note: The types IRQCHIP_FWNODE_NAMED and IRQCHIP_FWNODE_NAMED_ID are
@ -665,7 +665,7 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
/* Look for default domain if nececssary */
/* Look for default domain if necessary */
if (domain == NULL)
domain = irq_default_domain;
if (domain == NULL) {
@ -703,41 +703,6 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
}
EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
/**
* irq_create_strict_mappings() - Map a range of hw irqs to fixed linux irqs
* @domain: domain owning the interrupt range
* @irq_base: beginning of linux IRQ range
* @hwirq_base: beginning of hardware IRQ range
* @count: Number of interrupts to map
*
* This routine is used for allocating and mapping a range of hardware
* irqs to linux irqs where the linux irq numbers are at pre-defined
* locations. For use by controllers that already have static mappings
* to insert in to the domain.
*
* Non-linear users can use irq_create_identity_mapping() for IRQ-at-a-time
* domain insertion.
*
* 0 is returned upon success, while any failure to establish a static
* mapping is treated as an error.
*/
int irq_create_strict_mappings(struct irq_domain *domain, unsigned int irq_base,
irq_hw_number_t hwirq_base, int count)
{
struct device_node *of_node;
int ret;
of_node = irq_domain_get_of_node(domain);
ret = irq_alloc_descs(irq_base, irq_base, count,
of_node_to_nid(of_node));
if (unlikely(ret < 0))
return ret;
irq_domain_associate_many(domain, irq_base, hwirq_base, count);
return 0;
}
EXPORT_SYMBOL_GPL(irq_create_strict_mappings);
static int irq_domain_translate(struct irq_domain *d,
struct irq_fwspec *fwspec,
irq_hw_number_t *hwirq, unsigned int *type)
@ -906,7 +871,7 @@ unsigned int irq_find_mapping(struct irq_domain *domain,
{
struct irq_data *data;
/* Look for default domain if nececssary */
/* Look for default domain if necessary */
if (domain == NULL)
domain = irq_default_domain;
if (domain == NULL)
@ -1436,7 +1401,7 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
* The whole process to setup an IRQ has been split into two steps.
* The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
* descriptor and required hardware resources. The second step,
* irq_domain_activate_irq(), is to program hardwares with preallocated
* irq_domain_activate_irq(), is to program the hardware with preallocated
* resources. In this way, it's easier to rollback when failing to
* allocate resources.
*/
@ -1694,12 +1659,10 @@ void irq_domain_free_irqs(unsigned int virq, unsigned int nr_irqs)
/**
* irq_domain_alloc_irqs_parent - Allocate interrupts from parent domain
* @domain: Domain below which interrupts must be allocated
* @irq_base: Base IRQ number
* @nr_irqs: Number of IRQs to allocate
* @arg: Allocation data (arch/domain specific)
*
* Check whether the domain has been setup recursive. If not allocate
* through the parent domain.
*/
int irq_domain_alloc_irqs_parent(struct irq_domain *domain,
unsigned int irq_base, unsigned int nr_irqs,
@ -1715,11 +1678,9 @@ EXPORT_SYMBOL_GPL(irq_domain_alloc_irqs_parent);
/**
* irq_domain_free_irqs_parent - Free interrupts from parent domain
* @domain: Domain below which interrupts must be freed
* @irq_base: Base IRQ number
* @nr_irqs: Number of IRQs to free
*
* Check whether the domain has been setup recursive. If not free
* through the parent domain.
*/
void irq_domain_free_irqs_parent(struct irq_domain *domain,
unsigned int irq_base, unsigned int nr_irqs)


@ -179,7 +179,7 @@ bool irq_can_set_affinity_usr(unsigned int irq)
/**
* irq_set_thread_affinity - Notify irq threads to adjust affinity
* @desc: irq descriptor which has affitnity changed
* @desc: irq descriptor which has affinity changed
*
* We just set IRQTF_AFFINITY and delegate the affinity setting
* to the interrupt thread itself. We can not call
@ -326,7 +326,7 @@ static bool irq_set_affinity_deactivated(struct irq_data *data,
* If the interrupt is not yet activated, just store the affinity
* mask and do not call the chip driver at all. On activation the
* driver has to make sure anyway that the interrupt is in a
* useable state so startup works.
* usable state so startup works.
*/
if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) ||
irqd_is_activated(data) || !irqd_affinity_on_activate(data))
@ -1054,7 +1054,7 @@ again:
* to IRQS_INPROGRESS and the irq line is masked forever.
*
* This also serializes the state of shared oneshot handlers
* versus "desc->threads_onehsot |= action->thread_mask;" in
* versus "desc->threads_oneshot |= action->thread_mask;" in
* irq_wake_thread(). See the comment there which explains the
* serialization.
*/
@ -1157,7 +1157,7 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
/*
* Interrupts explicitly requested as threaded interrupts want to be
- * preemtible - many of them need to sleep and wait for slow busses to
+ * preemptible - many of them need to sleep and wait for slow busses to
* complete.
*/
static irqreturn_t irq_thread_fn(struct irq_desc *desc,
@@ -1697,7 +1697,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
irqd_set(&desc->irq_data, IRQD_NO_BALANCING);
}
- if (irq_settings_can_autoenable(desc)) {
+ if (!(new->flags & IRQF_NO_AUTOEN) &&
+     irq_settings_can_autoenable(desc)) {
irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
} else {
/*
@@ -1912,7 +1913,7 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
/* Last action releases resources */
if (!desc->action) {
/*
- * Reaquire bus lock as irq_release_resources() might
+ * Reacquire bus lock as irq_release_resources() might
* require it to deallocate resources over the slow bus.
*/
chip_bus_lock(desc);
@@ -2090,10 +2091,15 @@ int request_threaded_irq(unsigned int irq, irq_handler_t handler,
* which interrupt is which (messes up the interrupt freeing
* logic etc).
*
+ * Also shared interrupts do not go well with disabling auto enable.
+ * The sharing interrupt might request it while it's still disabled
+ * and then wait for interrupts forever.
+ *
* Also IRQF_COND_SUSPEND only makes sense for shared interrupts and
* it cannot be set along with IRQF_NO_SUSPEND.
*/
if (((irqflags & IRQF_SHARED) && !dev_id) ||
+ ((irqflags & IRQF_SHARED) && (irqflags & IRQF_NO_AUTOEN)) ||
(!(irqflags & IRQF_SHARED) && (irqflags & IRQF_COND_SUSPEND)) ||
((irqflags & IRQF_NO_SUSPEND) && (irqflags & IRQF_COND_SUSPEND)))
return -EINVAL;
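
For reference, the driver-side pattern the new flag enables looks roughly like this (handler and device names are hypothetical): the line stays masked after request_irq() and is unmasked exactly once, after setup, closing the window that the old request-then-disable_irq() dance left open.

	ret = request_irq(client->irq, demo_handler, IRQF_NO_AUTOEN,
			  "demo", demo);
	if (ret)
		return ret;

	/* ... complete device initialization ... */

	enable_irq(client->irq);	/* first unmask happens here */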
@@ -2249,7 +2255,8 @@ int request_nmi(unsigned int irq, irq_handler_t handler,
desc = irq_to_desc(irq);
- if (!desc || irq_settings_can_autoenable(desc) ||
+ if (!desc || (irq_settings_can_autoenable(desc) &&
+	       !(irqflags & IRQF_NO_AUTOEN)) ||
!irq_settings_can_request(desc) ||
WARN_ON(irq_settings_is_per_cpu_devid(desc)) ||
!irq_supports_nmi(desc))
@@ -2746,7 +2753,7 @@ int __irq_get_irqchip_state(struct irq_data *data, enum irqchip_irq_state which,
* irq_get_irqchip_state - returns the irqchip state of a interrupt.
* @irq: Interrupt line that is forwarded to a VM
* @which: One of IRQCHIP_STATE_* the caller wants to know about
- * @state: a pointer to a boolean where the state is to be storeed
+ * @state: a pointer to a boolean where the state is to be stored
*
* This call snapshots the internal irqchip state of an
* interrupt, returning into @state the bit corresponding to


@@ -337,15 +337,14 @@ void irq_matrix_assign(struct irq_matrix *m, unsigned int bit)
* irq_matrix_reserve - Reserve interrupts
* @m: Matrix pointer
*
- * This is merily a book keeping call. It increments the number of globally
+ * This is merely a book keeping call. It increments the number of globally
* reserved interrupt bits w/o actually allocating them. This allows to
* setup interrupt descriptors w/o assigning low level resources to it.
* The actual allocation happens when the interrupt gets activated.
*/
void irq_matrix_reserve(struct irq_matrix *m)
{
- if (m->global_reserved <= m->global_available &&
-     m->global_reserved + 1 > m->global_available)
+ if (m->global_reserved == m->global_available)
pr_warn("Interrupt reservation exceeds available resources\n");
m->global_reserved++;
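
The simplified condition is exact, not a behavioral change: with the unsigned counters involved, global_reserved <= global_available together with global_reserved + 1 > global_available can only both hold when the two are equal, i.e. when this reservation is the one that exhausts the available pool.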
@@ -356,7 +355,7 @@ void irq_matrix_reserve(struct irq_matrix *m)
* irq_matrix_remove_reserved - Remove interrupt reservation
* @m: Matrix pointer
*
- * This is merily a book keeping call. It decrements the number of globally
+ * This is merely a book keeping call. It decrements the number of globally
* reserved interrupt bits. This is used to undo irq_matrix_reserve() when the
* interrupt was never in use and a real vector allocated, which undid the
* reservation.
@@ -423,7 +422,9 @@ void irq_matrix_free(struct irq_matrix *m, unsigned int cpu,
if (WARN_ON_ONCE(bit < m->alloc_start || bit >= m->alloc_end))
return;
- clear_bit(bit, cm->alloc_map);
+ if (WARN_ON_ONCE(!test_and_clear_bit(bit, cm->alloc_map)))
+     return;
cm->allocated--;
if (managed)
cm->managed_allocated--;


@@ -7,7 +7,7 @@
/**
* irq_fixup_move_pending - Cleanup irq move pending from a dying CPU
- * @desc: Interrupt descpriptor to clean up
+ * @desc: Interrupt descriptor to clean up
* @force_clear: If set clear the move pending bit unconditionally.
* If not set, clear it only when the dying CPU is the
* last one in the pending mask.


@@ -5,7 +5,7 @@
*
* This file is licensed under GPLv2.
*
- * This file contains common code to support Message Signalled Interrupt for
+ * This file contains common code to support Message Signaled Interrupts for
* PCI compatible and non PCI compatible devices.
*/
#include <linux/types.h>


@@ -144,7 +144,7 @@ static ssize_t write_irq_affinity(int type, struct file *file,
if (!irq_can_set_affinity_usr(irq) || no_irq_affinity)
return -EIO;
- if (!alloc_cpumask_var(&new_value, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&new_value, GFP_KERNEL))
return -ENOMEM;
if (type)
@@ -238,7 +238,7 @@ static ssize_t default_affinity_write(struct file *file,
cpumask_var_t new_value;
int err;
- if (!alloc_cpumask_var(&new_value, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&new_value, GFP_KERNEL))
return -ENOMEM;
err = cpumask_parse_user(buffer, count, new_value);
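
All of these conversions follow one pattern; the zeroed allocation presumably guards against the parse helpers leaving high bits of an off-stack cpumask uninitialized. A condensed sketch of the resulting idiom (variable names as in the hunks above):

	cpumask_var_t new_value;
	int err;

	if (!zalloc_cpumask_var(&new_value, GFP_KERNEL))	/* zeroed, no stale bits */
		return -ENOMEM;

	err = cpumask_parse_user(buffer, count, new_value);
	if (!err) {
		/* ... consume new_value ... */
	}
	free_cpumask_var(new_value);
	return err ? err : count;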


@@ -128,7 +128,7 @@ int check_irq_resend(struct irq_desc *desc, bool inject)
if (!try_retrigger(desc))
err = irq_sw_resend(desc);
- /* If the retrigger was successfull, mark it with the REPLAY bit */
+ /* If the retrigger was successful, mark it with the REPLAY bit */
if (!err)
desc->istate |= IRQS_REPLAY;
return err;


@@ -403,6 +403,10 @@ void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret)
desc->irqs_unhandled -= ok;
}
+ if (likely(!desc->irqs_unhandled))
+     return;
+
+ /* Now getting into unhandled irq detection */
desc->irq_count++;
if (likely(desc->irq_count < 100000))
return;
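
This reordering reads like a fast-path optimization: when every handler behaved, note_interrupt() now returns before touching desc->irq_count, so well-behaved interrupts no longer dirty that cacheline merely to feed the spurious-interrupt heuristics.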


@@ -84,7 +84,7 @@ void irq_timings_disable(void)
* 2. Log interval
*
* We saw the irq timings allow to compute the interval of the
- * occurrences for a specific interrupt. We can reasonibly assume the
+ * occurrences for a specific interrupt. We can reasonably assume the
* longer is the interval, the higher is the error for the next event
* and we can consider storing those interval values into an array
* where each slot in the array correspond to an interval at the power
@@ -416,7 +416,7 @@ static u64 __irq_timings_next_event(struct irqt_stat *irqs, int irq, u64 now)
* Copy the content of the circular buffer into another buffer
* in order to linearize the buffer instead of dealing with
* wrapping indexes and shifted array which will be prone to
- * error and extremelly difficult to debug.
+ * error and extremely difficult to debug.
*/
for (i = 0; i < count; i++) {
int index = (start + i) & IRQ_TIMINGS_MASK;
@@ -485,7 +485,7 @@ static inline void irq_timings_store(int irq, struct irqt_stat *irqs, u64 ts)
/*
* The interrupt triggered more than one second apart, that
- * ends the sequence as predictible for our purpose. In this
+ * ends the sequence as predictable for our purpose. In this
* case, assume we have the beginning of a sequence and the
* timestamp is the first value. As it is impossible to
* predict anything at this point, return.
@@ -514,7 +514,7 @@ static inline void irq_timings_store(int irq, struct irqt_stat *irqs, u64 ts)
* If more than the array size interrupts happened during the
* last busy/idle cycle, the index wrapped up and we have to
* begin with the next element in the array which is the last one
- * in the sequence, otherwise it is a the index 0.
+ * in the sequence, otherwise it is at the index 0.
*
* - have an indication of the interrupts activity on this CPU
* (eg. irq/sec)


@@ -430,7 +430,7 @@ static ssize_t prof_cpu_mask_proc_write(struct file *file,
cpumask_var_t new_value;
int err;
- if (!alloc_cpumask_var(&new_value, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&new_value, GFP_KERNEL))
return -ENOMEM;
err = cpumask_parse_user(buffer, count, new_value);


@@ -60,7 +60,7 @@ void irqtime_account_irq(struct task_struct *curr, unsigned int offset)
cpu = smp_processor_id();
delta = sched_clock_cpu(cpu) - irqtime->irq_start_time;
irqtime->irq_start_time += delta;
- pc = preempt_count() - offset;
+ pc = irq_count() - offset;
/*
* We do not account for softirq time from ksoftirqd here.
@@ -421,7 +421,7 @@ void vtime_task_switch(struct task_struct *prev)
void vtime_account_irq(struct task_struct *tsk, unsigned int offset)
{
- unsigned int pc = preempt_count() - offset;
+ unsigned int pc = irq_count() - offset;
if (pc & HARDIRQ_OFFSET) {
vtime_account_hardirq(tsk);


@@ -13,6 +13,7 @@
#include <linux/kernel_stat.h>
#include <linux/interrupt.h>
#include <linux/init.h>
+ #include <linux/local_lock.h>
#include <linux/mm.h>
#include <linux/notifier.h>
#include <linux/percpu.h>
@@ -25,6 +26,7 @@
#include <linux/smpboot.h>
#include <linux/tick.h>
#include <linux/irq.h>
+ #include <linux/wait_bit.h>
#include <asm/softirq_stack.h>
@@ -102,20 +104,204 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
#endif
/*
- * preempt_count and SOFTIRQ_OFFSET usage:
- * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
- *   softirq processing.
- * - preempt_count is changed by SOFTIRQ_DISABLE_OFFSET (= 2 * SOFTIRQ_OFFSET)
+ * SOFTIRQ_OFFSET usage:
+ *
+ * On !RT kernels 'count' is the preempt counter, on RT kernels this applies
+ * to a per CPU counter and to task::softirqs_disabled_cnt.
+ *
+ * - count is changed by SOFTIRQ_OFFSET on entering or leaving softirq
+ *   processing.
+ *
+ * - count is changed by SOFTIRQ_DISABLE_OFFSET (= 2 * SOFTIRQ_OFFSET)
* on local_bh_disable or local_bh_enable.
*
* This lets us distinguish between whether we are currently processing
* softirq and whether we just have bh disabled.
*/
+ #ifdef CONFIG_PREEMPT_RT
- #ifdef CONFIG_TRACE_IRQFLAGS
/*
- * This is for softirq.c-internal use, where hardirqs are disabled
* RT accounts for BH disabled sections in task::softirqs_disabled_cnt and
* also in per CPU softirq_ctrl::cnt. This is necessary to allow tasks in a
* softirq disabled section to be preempted.
*
* The per task counter is used for softirq_count(), in_softirq() and
* in_serving_softirqs() because these counts are only valid when the task
* holding softirq_ctrl::lock is running.
*
* The per CPU counter prevents pointless wakeups of ksoftirqd in case that
* the task which is in a softirq disabled section is preempted or blocks.
*/
struct softirq_ctrl {
local_lock_t lock;
int cnt;
};
static DEFINE_PER_CPU(struct softirq_ctrl, softirq_ctrl) = {
.lock = INIT_LOCAL_LOCK(softirq_ctrl.lock),
};
/**
* local_bh_blocked() - Check for idle whether BH processing is blocked
*
* Returns false if the per CPU softirq::cnt is 0 otherwise true.
*
* This is invoked from the idle task to guard against false positive
* softirq pending warnings, which would happen when the task which holds
* softirq_ctrl::lock was the only running task on the CPU and blocks on
* some other lock.
*/
bool local_bh_blocked(void)
{
return __this_cpu_read(softirq_ctrl.cnt) != 0;
}
void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
{
unsigned long flags;
int newcnt;
WARN_ON_ONCE(in_hardirq());
/* First entry of a task into a BH disabled section? */
if (!current->softirq_disable_cnt) {
if (preemptible()) {
local_lock(&softirq_ctrl.lock);
/* Required to meet the RCU bottomhalf requirements. */
rcu_read_lock();
} else {
DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
}
}
/*
* Track the per CPU softirq disabled state. On RT this is per CPU
* state to allow preemption of bottom half disabled sections.
*/
newcnt = __this_cpu_add_return(softirq_ctrl.cnt, cnt);
/*
* Reflect the result in the task state to prevent recursion on the
* local lock and to make softirq_count() & al work.
*/
current->softirq_disable_cnt = newcnt;
if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && newcnt == cnt) {
raw_local_irq_save(flags);
lockdep_softirqs_off(ip);
raw_local_irq_restore(flags);
}
}
EXPORT_SYMBOL(__local_bh_disable_ip);
static void __local_bh_enable(unsigned int cnt, bool unlock)
{
unsigned long flags;
int newcnt;
DEBUG_LOCKS_WARN_ON(current->softirq_disable_cnt !=
this_cpu_read(softirq_ctrl.cnt));
if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && softirq_count() == cnt) {
raw_local_irq_save(flags);
lockdep_softirqs_on(_RET_IP_);
raw_local_irq_restore(flags);
}
newcnt = __this_cpu_sub_return(softirq_ctrl.cnt, cnt);
current->softirq_disable_cnt = newcnt;
if (!newcnt && unlock) {
rcu_read_unlock();
local_unlock(&softirq_ctrl.lock);
}
}
void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
{
bool preempt_on = preemptible();
unsigned long flags;
u32 pending;
int curcnt;
WARN_ON_ONCE(in_irq());
lockdep_assert_irqs_enabled();
local_irq_save(flags);
curcnt = __this_cpu_read(softirq_ctrl.cnt);
/*
* If this is not reenabling soft interrupts, no point in trying to
* run pending ones.
*/
if (curcnt != cnt)
goto out;
pending = local_softirq_pending();
if (!pending || ksoftirqd_running(pending))
goto out;
/*
* If this was called from non preemptible context, wake up the
* softirq daemon.
*/
if (!preempt_on) {
wakeup_softirqd();
goto out;
}
/*
* Adjust softirq count to SOFTIRQ_OFFSET which makes
* in_serving_softirq() become true.
*/
cnt = SOFTIRQ_OFFSET;
__local_bh_enable(cnt, false);
__do_softirq();
out:
__local_bh_enable(cnt, preempt_on);
local_irq_restore(flags);
}
EXPORT_SYMBOL(__local_bh_enable_ip);
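
The calling convention is unchanged; only the machinery behind it differs on RT. A minimal sketch of the usual pattern with the RT semantics noted in comments (an illustration, not code from this series):

	local_bh_disable();	/* RT: may block on softirq_ctrl::lock */

	/*
	 * Serialized against softirq handlers on this CPU, but on RT the
	 * section stays preemptible; only another BH user or the softirq
	 * thread would block on the per CPU lock.
	 */

	local_bh_enable();	/* RT: may run pending softirqs, then unlock */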
/*
* Invoked from ksoftirqd_run() outside of the interrupt disabled section
* to acquire the per CPU local lock for reentrancy protection.
*/
static inline void ksoftirqd_run_begin(void)
{
__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
local_irq_disable();
}
/* Counterpart to ksoftirqd_run_begin() */
static inline void ksoftirqd_run_end(void)
{
__local_bh_enable(SOFTIRQ_OFFSET, true);
WARN_ON_ONCE(in_interrupt());
local_irq_enable();
}
static inline void softirq_handle_begin(void) { }
static inline void softirq_handle_end(void) { }
static inline bool should_wake_ksoftirqd(void)
{
return !this_cpu_read(softirq_ctrl.cnt);
}
static inline void invoke_softirq(void)
{
if (should_wake_ksoftirqd())
wakeup_softirqd();
}
#else /* CONFIG_PREEMPT_RT */
/*
* This one is for softirq.c-internal use, where hardirqs are disabled
* legitimately:
*/
#ifdef CONFIG_TRACE_IRQFLAGS
void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
{
unsigned long flags;
@@ -206,6 +392,32 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
}
EXPORT_SYMBOL(__local_bh_enable_ip);
static inline void softirq_handle_begin(void)
{
__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
}
static inline void softirq_handle_end(void)
{
__local_bh_enable(SOFTIRQ_OFFSET);
WARN_ON_ONCE(in_interrupt());
}
static inline void ksoftirqd_run_begin(void)
{
local_irq_disable();
}
static inline void ksoftirqd_run_end(void)
{
local_irq_enable();
}
static inline bool should_wake_ksoftirqd(void)
{
return true;
}
static inline void invoke_softirq(void)
{
if (ksoftirqd_running(local_softirq_pending()))
@@ -250,6 +462,8 @@ asmlinkage __visible void do_softirq(void)
local_irq_restore(flags);
}
+ #endif /* !CONFIG_PREEMPT_RT */
/*
* We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
* but break the loop if need_resched() is set or after 2 ms.
@@ -318,7 +532,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
pending = local_softirq_pending();
- __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
+ softirq_handle_begin();
in_hardirq = lockdep_softirq_start();
account_softirq_enter(current);
@@ -354,8 +568,10 @@ restart:
pending >>= softirq_bit;
}
- if (__this_cpu_read(ksoftirqd) == current)
+ if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
+     __this_cpu_read(ksoftirqd) == current)
rcu_softirq_qs();
local_irq_disable();
pending = local_softirq_pending();
@@ -369,8 +585,7 @@ restart:
account_softirq_exit(current);
lockdep_softirq_end(in_hardirq);
- __local_bh_enable(SOFTIRQ_OFFSET);
- WARN_ON_ONCE(in_interrupt());
+ softirq_handle_end();
current_restore_flags(old_flags, PF_MEMALLOC);
}
@@ -465,7 +680,7 @@ inline void raise_softirq_irqoff(unsigned int nr)
* Otherwise we wake up ksoftirqd to make sure we
* schedule the softirq soon.
*/
- if (!in_interrupt())
+ if (!in_interrupt() && should_wake_ksoftirqd())
wakeup_softirqd();
}
@@ -531,6 +746,20 @@ void __tasklet_hi_schedule(struct tasklet_struct *t)
}
EXPORT_SYMBOL(__tasklet_hi_schedule);
static bool tasklet_clear_sched(struct tasklet_struct *t)
{
if (test_and_clear_bit(TASKLET_STATE_SCHED, &t->state)) {
wake_up_var(&t->state);
return true;
}
WARN_ONCE(1, "tasklet SCHED state not set: %s %pS\n",
t->use_callback ? "callback" : "func",
t->use_callback ? (void *)t->callback : (void *)t->func);
return false;
}
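
Note how tasklet_clear_sched() pairs every successful clear of TASKLET_STATE_SCHED with wake_up_var() on &t->state: that is what lets tasklet_kill() further down sleep in wait_var_event() instead of yield-spinning, and the WARN_ONCE() turns what used to be a BUG() in the action loop into something survivable.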
static void tasklet_action_common(struct softirq_action *a,
struct tasklet_head *tl_head,
unsigned int softirq_nr)
@@ -550,13 +779,12 @@ static void tasklet_action_common(struct softirq_action *a,
if (tasklet_trylock(t)) {
if (!atomic_read(&t->count)) {
- if (!test_and_clear_bit(TASKLET_STATE_SCHED,
-                         &t->state))
-     BUG();
- if (t->use_callback)
-     t->callback(t);
- else
-     t->func(t->data);
+ if (tasklet_clear_sched(t)) {
+     if (t->use_callback)
+         t->callback(t);
+     else
+         t->func(t->data);
+ }
tasklet_unlock(t);
continue;
}
@@ -606,21 +834,62 @@ void tasklet_init(struct tasklet_struct *t,
}
EXPORT_SYMBOL(tasklet_init);
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
/*
* Do not use in new code. Waiting for tasklets from atomic contexts is
* error prone and should be avoided.
*/
void tasklet_unlock_spin_wait(struct tasklet_struct *t)
{
while (test_bit(TASKLET_STATE_RUN, &(t)->state)) {
if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
/*
* Prevent a live lock when current preempted soft
* interrupt processing or prevents ksoftirqd from
* running. If the tasklet runs on a different CPU
* then this has no effect other than doing the BH
* disable/enable dance for nothing.
*/
local_bh_disable();
local_bh_enable();
} else {
cpu_relax();
}
}
}
EXPORT_SYMBOL(tasklet_unlock_spin_wait);
#endif
void tasklet_kill(struct tasklet_struct *t)
{
if (in_interrupt())
pr_notice("Attempt to kill tasklet from interrupt\n");
- while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
-     do {
-         yield();
-     } while (test_bit(TASKLET_STATE_SCHED, &t->state));
- }
+ while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
+     wait_var_event(&t->state, !test_bit(TASKLET_STATE_SCHED, &t->state));
tasklet_unlock_wait(t);
- clear_bit(TASKLET_STATE_SCHED, &t->state);
+ tasklet_clear_sched(t);
}
EXPORT_SYMBOL(tasklet_kill);
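
Taken together, a teardown path running in task context can now sleep instead of burning cycles. A hypothetical driver-side sketch (the device structure is illustrative); atomic-context callers are the ones that must stick to the spin-waiting helper above:

	static void demo_shutdown(struct demo_dev *dd)
	{
		tasklet_disable(&dd->tasklet);	/* sleeps via tasklet_unlock_wait() */

		/* ... quiesce the device so it stops scheduling the tasklet ... */

		tasklet_enable(&dd->tasklet);
		tasklet_kill(&dd->tasklet);	/* sleeps in wait_var_event() if scheduled */
	}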
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
void tasklet_unlock(struct tasklet_struct *t)
{
smp_mb__before_atomic();
clear_bit(TASKLET_STATE_RUN, &t->state);
smp_mb__after_atomic();
wake_up_var(&t->state);
}
EXPORT_SYMBOL_GPL(tasklet_unlock);
void tasklet_unlock_wait(struct tasklet_struct *t)
{
wait_var_event(&t->state, !test_bit(TASKLET_STATE_RUN, &t->state));
}
EXPORT_SYMBOL_GPL(tasklet_unlock_wait);
#endif
void __init softirq_init(void)
{
int cpu;
@@ -643,53 +912,21 @@ static int ksoftirqd_should_run(unsigned int cpu)
static void run_ksoftirqd(unsigned int cpu)
{
- local_irq_disable();
+ ksoftirqd_run_begin();
if (local_softirq_pending()) {
/*
* We can safely run softirq on inline stack, as we are not deep
* in the task stack here.
*/
__do_softirq();
- local_irq_enable();
+ ksoftirqd_run_end();
cond_resched();
return;
}
- local_irq_enable();
+ ksoftirqd_run_end();
}
#ifdef CONFIG_HOTPLUG_CPU
- /*
-  * tasklet_kill_immediate is called to remove a tasklet which can already be
-  * scheduled for execution on @cpu.
-  *
-  * Unlike tasklet_kill, this function removes the tasklet
-  * _immediately_, even if the tasklet is in TASKLET_STATE_SCHED state.
-  *
-  * When this function is called, @cpu must be in the CPU_DEAD state.
-  */
- void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu)
- {
-     struct tasklet_struct **i;
-     BUG_ON(cpu_online(cpu));
-     BUG_ON(test_bit(TASKLET_STATE_RUN, &t->state));
-     if (!test_bit(TASKLET_STATE_SCHED, &t->state))
-         return;
-     /* CPU is dead, so no lock needed. */
-     for (i = &per_cpu(tasklet_vec, cpu).head; *i; i = &(*i)->next) {
-         if (*i == t) {
-             *i = t->next;
-             /* If this was the tail element, move the tail ptr */
-             if (*i == NULL)
-                 per_cpu(tasklet_vec, cpu).tail = i;
-             return;
-         }
-     }
-     BUG();
- }
static int takeover_tasklets(unsigned int cpu)
{
/* CPU is dead, so no lock needed. */


@@ -973,7 +973,7 @@ static bool can_stop_idle_tick(int cpu, struct tick_sched *ts)
if (unlikely(local_softirq_pending())) {
static int ratelimit;
- if (ratelimit < 10 &&
+ if (ratelimit < 10 && !local_bh_blocked() &&
(local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
pr_warn("NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #%02x!!!\n",
(unsigned int) local_softirq_pending());


@@ -4832,7 +4832,7 @@ tracing_cpumask_write(struct file *filp, const char __user *ubuf,
cpumask_var_t tracing_cpumask_new;
int err;
- if (!alloc_cpumask_var(&tracing_cpumask_new, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&tracing_cpumask_new, GFP_KERNEL))
return -ENOMEM;
err = cpumask_parse_user(ubuf, count, tracing_cpumask_new);