Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:
 "It looks like a decent sized set of fixes, but a lot of these are one
  liner off-by-one and similar type changes:

   1) Fix netlink header pointer used to calculate the bad attribute
      offset reported to user. From Pablo Neira Ayuso.

   2) Don't double clear PHY interrupts when ->did_interrupt is set,
      from Heiner Kallweit.

   3) Add missing validation of various (devlink, nl802154, fib, etc.)
      attributes, from Jakub Kicinski.

   4) Missing *pos increments in various netfilter seq_next ops, from
      Vasily Averin (see the sketch just after the quoted message).

   5) Missing break in of_mdiobus_register() loop, from Dajun Jin.

   6) Don't double bump tx_dropped in veth driver, from Jiang Lidong.

   7) Work around FMAN erratum A050385, from Madalin Bucur.

   8) Make sure ARP header is pulled early enough in bonding driver,
      from Eric Dumazet.

   9) Do a cond_resched() during multicast processing of ipvlan and
      macvlan, from Mahesh Bandewar.

  10) Don't attach cgroups to unrelated sockets when in interrupt
      context, from Shakeel Butt.

  11) Fix tpacket ring state management when encountering unknown GSO
      types. From Willem de Bruijn.

  12) Fix MDIO bus PHY resume by checking mdio_bus_phy_may_suspend()
      only in the suspend context. From Heiner Kallweit"
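
The seq_file fixes in item 4 all come down to one contract: a ->next()
handler must advance *pos on every call, even when it has nothing more
to return, or the seq_file core retries from a stale position and
records get replayed. A minimal sketch of the corrected shape, using a
hypothetical get_entry() lookup helper rather than any of the actual
netfilter iterators:

#include <linux/seq_file.h>

static void *example_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	++*pos;                 /* advance even when iteration is finished */
	return get_entry(*pos); /* hypothetical: returns NULL past the end */
}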

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (112 commits)
  net: systemport: fix index check to avoid an array out of bounds access
  tc-testing: add ETS scheduler to tdc build configuration
  net: phy: fix MDIO bus PM PHY resuming
  net: hns3: clear port base VLAN when unload PF
  net: hns3: fix RMW issue for VLAN filter switch
  net: hns3: fix VF VLAN table entries inconsistent issue
  net: hns3: fix "tc qdisc del" failed issue
  taprio: Fix sending packets without dequeueing them
  net: mvmdio: avoid error message for optional IRQ
  net: dsa: mv88e6xxx: Add missing mask of ATU occupancy register
  net: memcg: fix lockdep splat in inet_csk_accept()
  s390/qeth: implement smarter resizing of the RX buffer pool
  s390/qeth: refactor buffer pool code
  s390/qeth: use page pointers to manage RX buffer pool
  seg6: fix SRv6 L2 tunnels to use IANA-assigned protocol number
  net: dsa: Don't instantiate phylink for CPU/DSA ports unless needed
  net/packet: tpacket_rcv: do not increment ring index on drop
  sxgbe: Fix off by one in samsung driver strncpy size arg
  net: caif: Add lockdep expression to RCU traversal primitive
  MAINTAINERS: remove Sathya Perla as Emulex NIC maintainer
  ...
commit 1b51f69461
Author: Linus Torvalds
Date:   2020-03-12 16:19:19 -07:00
110 changed files with 993 additions and 370 deletions


@ -110,6 +110,13 @@ PROPERTIES
Usage: required
Definition: See soc/fsl/qman.txt and soc/fsl/bman.txt
- fsl,erratum-a050385
Usage: optional
Value type: boolean
Definition: A boolean property. Indicates the presence of the
erratum A050385 which indicates that DMA transactions that are
split can result in a FMan lock.
=============================================================================
FMan MURAM Node


@ -40,9 +40,6 @@ example usage
# Delete a snapshot using:
$ devlink region del pci/0000:00:05.0/cr-space snapshot 1
# Trigger (request) a snapshot be taken:
$ devlink region trigger pci/0000:00:05.0/cr-space
# Dump a snapshot:
$ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30


@ -8,9 +8,9 @@ Overview
========
The net_failover driver provides an automated failover mechanism via APIs
to create and destroy a failover master netdev and mananges a primary and
to create and destroy a failover master netdev and manages a primary and
standby slave netdevs that get registered via the generic failover
infrastructrure.
infrastructure.
The failover netdev acts a master device and controls 2 slave devices. The
original paravirtual interface is registered as 'standby' slave netdev and
@ -29,7 +29,7 @@ virtio-net accelerated datapath: STANDBY mode
=============================================
net_failover enables hypervisor controlled accelerated datapath to virtio-net
enabled VMs in a transparent manner with no/minimal guest userspace chanages.
enabled VMs in a transparent manner with no/minimal guest userspace changes.
To support this, the hypervisor needs to enable VIRTIO_NET_F_STANDBY
feature on the virtio-net interface and assign the same MAC address to both


@ -159,7 +159,7 @@ Socket Interface
set SO_RDS_TRANSPORT on a socket for which the transport has
been previously attached explicitly (by SO_RDS_TRANSPORT) or
implicitly (via bind(2)) will return an error of EOPNOTSUPP.
An attempt to set SO_RDS_TRANSPPORT to RDS_TRANS_NONE will
An attempt to set SO_RDS_TRANSPORT to RDS_TRANS_NONE will
always return EINVAL.
RDMA for RDS


@ -4073,7 +4073,6 @@ F: drivers/scsi/snic/
CISCO VIC ETHERNET NIC DRIVER
M: Christian Benvenuti <benve@cisco.com>
M: Govindarajulu Varadarajan <_govind@gmx.com>
M: Parvi Kaustubhi <pkaustub@cisco.com>
S: Supported
F: drivers/net/ethernet/cisco/enic/
@ -4572,7 +4571,7 @@ F: drivers/infiniband/hw/cxgb4/
F: include/uapi/rdma/cxgb4-abi.h
CXGB4VF ETHERNET DRIVER (CXGB4VF)
M: Casey Leedom <leedom@chelsio.com>
M: Vishal Kulkarni <vishal@gmail.com>
L: netdev@vger.kernel.org
W: http://www.chelsio.com
S: Supported
@ -6198,7 +6197,6 @@ S: Supported
F: drivers/scsi/be2iscsi/
Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER (be2net)
M: Sathya Perla <sathya.perla@broadcom.com>
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
M: Somnath Kotur <somnath.kotur@broadcom.com>


@ -20,6 +20,8 @@
};
&fman0 {
fsl,erratum-a050385;
/* these aliases provide the FMan ports mapping */
enet0: ethernet@e0000 {
};


@ -91,7 +91,7 @@
#ifdef GENERAL_DEBUG
#define PRINTK(args...) printk(args)
#else
#define PRINTK(args...)
#define PRINTK(args...) do {} while (0)
#endif /* GENERAL_DEBUG */
#ifdef EXTRA_DEBUG


@ -50,11 +50,6 @@ struct arp_pkt {
};
#pragma pack()
static inline struct arp_pkt *arp_pkt(const struct sk_buff *skb)
{
return (struct arp_pkt *)skb_network_header(skb);
}
/* Forward declaration */
static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[],
bool strict_match);
@ -553,10 +548,11 @@ static void rlb_req_update_subnet_clients(struct bonding *bond, __be32 src_ip)
spin_unlock(&bond->mode_lock);
}
static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bond)
static struct slave *rlb_choose_channel(struct sk_buff *skb,
struct bonding *bond,
const struct arp_pkt *arp)
{
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
struct arp_pkt *arp = arp_pkt(skb);
struct slave *assigned_slave, *curr_active_slave;
struct rlb_client_info *client_info;
u32 hash_index = 0;
@ -653,8 +649,12 @@ static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bon
*/
static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
{
struct arp_pkt *arp = arp_pkt(skb);
struct slave *tx_slave = NULL;
struct arp_pkt *arp;
if (!pskb_network_may_pull(skb, sizeof(*arp)))
return NULL;
arp = (struct arp_pkt *)skb_network_header(skb);
/* Don't modify or load balance ARPs that do not originate locally
* (e.g.,arrive via a bridge).
@ -664,7 +664,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
if (arp->op_code == htons(ARPOP_REPLY)) {
/* the arp must be sent on the selected rx channel */
tx_slave = rlb_choose_channel(skb, bond);
tx_slave = rlb_choose_channel(skb, bond, arp);
if (tx_slave)
bond_hw_addr_copy(arp->mac_src, tx_slave->dev->dev_addr,
tx_slave->dev->addr_len);
@ -676,7 +676,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
* When the arp reply is received the entry will be updated
* with the correct unicast address of the client.
*/
tx_slave = rlb_choose_channel(skb, bond);
tx_slave = rlb_choose_channel(skb, bond, arp);
/* The ARP reply packets must be delayed so that
* they can cancel out the influence of the ARP request.


@ -883,6 +883,7 @@ static const struct nla_policy can_policy[IFLA_CAN_MAX + 1] = {
= { .len = sizeof(struct can_bittiming) },
[IFLA_CAN_DATA_BITTIMING_CONST]
= { .len = sizeof(struct can_bittiming_const) },
[IFLA_CAN_TERMINATION] = { .type = NLA_U16 },
};
static int can_validate(struct nlattr *tb[], struct nlattr *data[],


@ -2769,6 +2769,8 @@ static u64 mv88e6xxx_devlink_atu_bin_get(struct mv88e6xxx_chip *chip,
goto unlock;
}
occupancy &= MV88E6XXX_G2_ATU_STATS_MASK;
unlock:
mv88e6xxx_reg_unlock(chip);


@ -1099,6 +1099,13 @@ int mv88e6xxx_g2_irq_setup(struct mv88e6xxx_chip *chip)
{
int err, irq, virq;
chip->g2_irq.masked = ~0;
mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g2_int_mask(chip, ~chip->g2_irq.masked);
mv88e6xxx_reg_unlock(chip);
if (err)
return err;
chip->g2_irq.domain = irq_domain_add_simple(
chip->dev->of_node, 16, 0, &mv88e6xxx_g2_irq_domain_ops, chip);
if (!chip->g2_irq.domain)
@ -1108,7 +1115,6 @@ int mv88e6xxx_g2_irq_setup(struct mv88e6xxx_chip *chip)
irq_create_mapping(chip->g2_irq.domain, irq);
chip->g2_irq.chip = mv88e6xxx_g2_irq_chip;
chip->g2_irq.masked = ~0;
chip->device_irq = irq_find_mapping(chip->g1_irq.domain,
MV88E6XXX_G1_STS_IRQ_DEVICE);


@ -1741,7 +1741,8 @@ static void sja1105_teardown(struct dsa_switch *ds)
if (!dsa_is_user_port(ds, port))
continue;
kthread_destroy_worker(sp->xmit_worker);
if (sp->xmit_worker)
kthread_destroy_worker(sp->xmit_worker);
}
sja1105_tas_teardown(ds);


@ -2135,7 +2135,7 @@ static int bcm_sysport_rule_set(struct bcm_sysport_priv *priv,
return -ENOSPC;
index = find_first_zero_bit(priv->filters, RXCHK_BRCM_TAG_MAX);
if (index > RXCHK_BRCM_TAG_MAX)
if (index >= RXCHK_BRCM_TAG_MAX)
return -ENOSPC;
/* Location is the classification ID, and index is the position


@ -10982,13 +10982,13 @@ static int bnxt_change_mtu(struct net_device *dev, int new_mtu)
struct bnxt *bp = netdev_priv(dev);
if (netif_running(dev))
bnxt_close_nic(bp, false, false);
bnxt_close_nic(bp, true, false);
dev->mtu = new_mtu;
bnxt_set_ring_params(bp);
if (netif_running(dev))
return bnxt_open_nic(bp, false, false);
return bnxt_open_nic(bp, true, false);
return 0;
}


@ -2007,8 +2007,8 @@ int bnxt_flash_package_from_file(struct net_device *dev, const char *filename,
struct hwrm_nvm_install_update_output *resp = bp->hwrm_cmd_resp_addr;
struct hwrm_nvm_install_update_input install = {0};
const struct firmware *fw;
int rc, hwrm_err = 0;
u32 item_len;
int rc = 0;
u16 index;
bnxt_hwrm_fw_set_time(bp);
@ -2052,15 +2052,14 @@ int bnxt_flash_package_from_file(struct net_device *dev, const char *filename,
memcpy(kmem, fw->data, fw->size);
modify.host_src_addr = cpu_to_le64(dma_handle);
hwrm_err = hwrm_send_message(bp, &modify,
sizeof(modify),
FLASH_PACKAGE_TIMEOUT);
rc = hwrm_send_message(bp, &modify, sizeof(modify),
FLASH_PACKAGE_TIMEOUT);
dma_free_coherent(&bp->pdev->dev, fw->size, kmem,
dma_handle);
}
}
release_firmware(fw);
if (rc || hwrm_err)
if (rc)
goto err_exit;
if ((install_type & 0xffff) == 0)
@ -2069,20 +2068,19 @@ int bnxt_flash_package_from_file(struct net_device *dev, const char *filename,
install.install_type = cpu_to_le32(install_type);
mutex_lock(&bp->hwrm_cmd_lock);
hwrm_err = _hwrm_send_message(bp, &install, sizeof(install),
INSTALL_PACKAGE_TIMEOUT);
if (hwrm_err) {
rc = _hwrm_send_message(bp, &install, sizeof(install),
INSTALL_PACKAGE_TIMEOUT);
if (rc) {
u8 error_code = ((struct hwrm_err_output *)resp)->cmd_err;
if (resp->error_code && error_code ==
NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) {
install.flags |= cpu_to_le16(
NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG);
hwrm_err = _hwrm_send_message(bp, &install,
sizeof(install),
INSTALL_PACKAGE_TIMEOUT);
rc = _hwrm_send_message(bp, &install, sizeof(install),
INSTALL_PACKAGE_TIMEOUT);
}
if (hwrm_err)
if (rc)
goto flash_pkg_exit;
}
@ -2094,7 +2092,7 @@ int bnxt_flash_package_from_file(struct net_device *dev, const char *filename,
flash_pkg_exit:
mutex_unlock(&bp->hwrm_cmd_lock);
err_exit:
if (hwrm_err == -EACCES)
if (rc == -EACCES)
bnxt_print_admin_err(bp);
return rc;
}


@ -5381,12 +5381,11 @@ static inline bool is_x_10g_port(const struct link_config *lc)
static int cfg_queues(struct adapter *adap)
{
u32 avail_qsets, avail_eth_qsets, avail_uld_qsets;
u32 i, n10g = 0, qidx = 0, n1g = 0;
u32 ncpus = num_online_cpus();
u32 niqflint, neq, num_ulds;
struct sge *s = &adap->sge;
u32 i, n10g = 0, qidx = 0;
#ifndef CONFIG_CHELSIO_T4_DCB
int q10g = 0;
#endif
u32 q10g = 0, q1g;
/* Reduce memory usage in kdump environment, disable all offload. */
if (is_kdump_kernel() || (is_uld(adap) && t4_uld_mem_alloc(adap))) {
@ -5424,44 +5423,50 @@ static int cfg_queues(struct adapter *adap)
n10g += is_x_10g_port(&adap2pinfo(adap, i)->link_cfg);
avail_eth_qsets = min_t(u32, avail_qsets, MAX_ETH_QSETS);
/* We default to 1 queue per non-10G port and up to # of cores queues
* per 10G port.
*/
if (n10g)
q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g;
n1g = adap->params.nports - n10g;
#ifdef CONFIG_CHELSIO_T4_DCB
/* For Data Center Bridging support we need to be able to support up
* to 8 Traffic Priorities; each of which will be assigned to its
* own TX Queue in order to prevent Head-Of-Line Blocking.
*/
q1g = 8;
if (adap->params.nports * 8 > avail_eth_qsets) {
dev_err(adap->pdev_dev, "DCB avail_eth_qsets=%d < %d!\n",
avail_eth_qsets, adap->params.nports * 8);
return -ENOMEM;
}
for_each_port(adap, i) {
struct port_info *pi = adap2pinfo(adap, i);
if (adap->params.nports * ncpus < avail_eth_qsets)
q10g = max(8U, ncpus);
else
q10g = max(8U, q10g);
while ((q10g * n10g) > (avail_eth_qsets - n1g * q1g))
q10g--;
pi->first_qset = qidx;
pi->nqsets = is_kdump_kernel() ? 1 : 8;
qidx += pi->nqsets;
}
#else /* !CONFIG_CHELSIO_T4_DCB */
/* We default to 1 queue per non-10G port and up to # of cores queues
* per 10G port.
*/
if (n10g)
q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g;
if (q10g > netif_get_num_default_rss_queues())
q10g = netif_get_num_default_rss_queues();
if (is_kdump_kernel())
q1g = 1;
q10g = min(q10g, ncpus);
#endif /* !CONFIG_CHELSIO_T4_DCB */
if (is_kdump_kernel()) {
q10g = 1;
q1g = 1;
}
for_each_port(adap, i) {
struct port_info *pi = adap2pinfo(adap, i);
pi->first_qset = qidx;
pi->nqsets = is_x_10g_port(&pi->link_cfg) ? q10g : 1;
pi->nqsets = is_x_10g_port(&pi->link_cfg) ? q10g : q1g;
qidx += pi->nqsets;
}
#endif /* !CONFIG_CHELSIO_T4_DCB */
s->ethqsets = qidx;
s->max_ethqsets = qidx; /* MSI-X may lower it later */
@ -5473,7 +5478,7 @@ static int cfg_queues(struct adapter *adap)
* capped by the number of available cores.
*/
num_ulds = adap->num_uld + adap->num_ofld_uld;
i = min_t(u32, MAX_OFLD_QSETS, num_online_cpus());
i = min_t(u32, MAX_OFLD_QSETS, ncpus);
avail_uld_qsets = roundup(i, adap->params.nports);
if (avail_qsets < num_ulds * adap->params.nports) {
adap->params.offload = 0;


@ -1,4 +1,5 @@
/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
* Copyright 2020 NXP
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
@ -123,7 +124,22 @@ MODULE_PARM_DESC(tx_timeout, "The Tx timeout in ms");
#define FSL_QMAN_MAX_OAL 127
/* Default alignment for start of data in an Rx FD */
#ifdef CONFIG_DPAA_ERRATUM_A050385
/* aligning data start to 64 avoids DMA transaction splits, unless the buffer
* is crossing a 4k page boundary
*/
#define DPAA_FD_DATA_ALIGNMENT (fman_has_errata_a050385() ? 64 : 16)
/* aligning to 256 avoids DMA transaction splits caused by 4k page boundary
* crossings; also, all SG fragments except the last must have a size multiple
* of 256 to avoid DMA transaction splits
*/
#define DPAA_A050385_ALIGN 256
#define DPAA_FD_RX_DATA_ALIGNMENT (fman_has_errata_a050385() ? \
DPAA_A050385_ALIGN : 16)
#else
#define DPAA_FD_DATA_ALIGNMENT 16
#define DPAA_FD_RX_DATA_ALIGNMENT DPAA_FD_DATA_ALIGNMENT
#endif
/* The DPAA requires 256 bytes reserved and mapped for the SGT */
#define DPAA_SGT_SIZE 256
@ -158,8 +174,13 @@ MODULE_PARM_DESC(tx_timeout, "The Tx timeout in ms");
#define DPAA_PARSE_RESULTS_SIZE sizeof(struct fman_prs_result)
#define DPAA_TIME_STAMP_SIZE 8
#define DPAA_HASH_RESULTS_SIZE 8
#ifdef CONFIG_DPAA_ERRATUM_A050385
#define DPAA_RX_PRIV_DATA_SIZE (DPAA_A050385_ALIGN - (DPAA_PARSE_RESULTS_SIZE\
+ DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE))
#else
#define DPAA_RX_PRIV_DATA_SIZE (u16)(DPAA_TX_PRIV_DATA_SIZE + \
dpaa_rx_extra_headroom)
#endif
#define DPAA_ETH_PCD_RXQ_NUM 128
@ -180,7 +201,12 @@ static struct dpaa_bp *dpaa_bp_array[BM_MAX_NUM_OF_POOLS];
#define DPAA_BP_RAW_SIZE 4096
#ifdef CONFIG_DPAA_ERRATUM_A050385
#define dpaa_bp_size(raw_size) (SKB_WITH_OVERHEAD(raw_size) & \
~(DPAA_A050385_ALIGN - 1))
#else
#define dpaa_bp_size(raw_size) SKB_WITH_OVERHEAD(raw_size)
#endif
static int dpaa_max_frm;
@ -1192,7 +1218,7 @@ static int dpaa_eth_init_rx_port(struct fman_port *port, struct dpaa_bp *bp,
buf_prefix_content.pass_prs_result = true;
buf_prefix_content.pass_hash_result = true;
buf_prefix_content.pass_time_stamp = true;
buf_prefix_content.data_align = DPAA_FD_DATA_ALIGNMENT;
buf_prefix_content.data_align = DPAA_FD_RX_DATA_ALIGNMENT;
rx_p = &params.specific_params.rx_params;
rx_p->err_fqid = errq->fqid;
@ -1662,6 +1688,8 @@ static u8 rx_csum_offload(const struct dpaa_priv *priv, const struct qm_fd *fd)
return CHECKSUM_NONE;
}
#define PTR_IS_ALIGNED(x, a) (IS_ALIGNED((unsigned long)(x), (a)))
/* Build a linear skb around the received buffer.
* We are guaranteed there is enough room at the end of the data buffer to
* accommodate the shared info area of the skb.
@ -1733,8 +1761,7 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
sg_addr = qm_sg_addr(&sgt[i]);
sg_vaddr = phys_to_virt(sg_addr);
WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr,
SMP_CACHE_BYTES));
WARN_ON(!PTR_IS_ALIGNED(sg_vaddr, SMP_CACHE_BYTES));
dma_unmap_page(priv->rx_dma_dev, sg_addr,
DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);
@ -2022,6 +2049,75 @@ static inline int dpaa_xmit(struct dpaa_priv *priv,
return 0;
}
#ifdef CONFIG_DPAA_ERRATUM_A050385
int dpaa_a050385_wa(struct net_device *net_dev, struct sk_buff **s)
{
struct dpaa_priv *priv = netdev_priv(net_dev);
struct sk_buff *new_skb, *skb = *s;
unsigned char *start, i;
/* check linear buffer alignment */
if (!PTR_IS_ALIGNED(skb->data, DPAA_A050385_ALIGN))
goto workaround;
/* linear buffers just need to have an aligned start */
if (!skb_is_nonlinear(skb))
return 0;
/* linear data size for nonlinear skbs needs to be aligned */
if (!IS_ALIGNED(skb_headlen(skb), DPAA_A050385_ALIGN))
goto workaround;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
/* all fragments need to have aligned start addresses */
if (!IS_ALIGNED(skb_frag_off(frag), DPAA_A050385_ALIGN))
goto workaround;
/* all but last fragment need to have aligned sizes */
if (!IS_ALIGNED(skb_frag_size(frag), DPAA_A050385_ALIGN) &&
(i < skb_shinfo(skb)->nr_frags - 1))
goto workaround;
}
return 0;
workaround:
/* copy all the skb content into a new linear buffer */
new_skb = netdev_alloc_skb(net_dev, skb->len + DPAA_A050385_ALIGN - 1 +
priv->tx_headroom);
if (!new_skb)
return -ENOMEM;
/* NET_SKB_PAD bytes already reserved, adding up to tx_headroom */
skb_reserve(new_skb, priv->tx_headroom - NET_SKB_PAD);
/* Workaround for DPAA_A050385 requires data start to be aligned */
start = PTR_ALIGN(new_skb->data, DPAA_A050385_ALIGN);
if (start - new_skb->data != 0)
skb_reserve(new_skb, start - new_skb->data);
skb_put(new_skb, skb->len);
skb_copy_bits(skb, 0, new_skb->data, skb->len);
skb_copy_header(new_skb, skb);
new_skb->dev = skb->dev;
/* We move the headroom when we align it so we have to reset the
* network and transport header offsets relative to the new data
* pointer. The checksum offload relies on these offsets.
*/
skb_set_network_header(new_skb, skb_network_offset(skb));
skb_set_transport_header(new_skb, skb_transport_offset(skb));
/* TODO: does timestamping need the result in the old skb? */
dev_kfree_skb(skb);
*s = new_skb;
return 0;
}
#endif
static netdev_tx_t
dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
{
@ -2068,6 +2164,14 @@ dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
nonlinear = skb_is_nonlinear(skb);
}
#ifdef CONFIG_DPAA_ERRATUM_A050385
if (unlikely(fman_has_errata_a050385())) {
if (dpaa_a050385_wa(net_dev, &skb))
goto enomem;
nonlinear = skb_is_nonlinear(skb);
}
#endif
if (nonlinear) {
/* Just create a S/G fd based on the skb */
err = skb_to_sg_fd(priv, skb, &fd);
@ -2741,9 +2845,7 @@ static inline u16 dpaa_get_headroom(struct dpaa_buffer_layout *bl)
headroom = (u16)(bl->priv_data_size + DPAA_PARSE_RESULTS_SIZE +
DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE);
return DPAA_FD_DATA_ALIGNMENT ? ALIGN(headroom,
DPAA_FD_DATA_ALIGNMENT) :
headroom;
return ALIGN(headroom, DPAA_FD_DATA_ALIGNMENT);
}
static int dpaa_eth_probe(struct platform_device *pdev)


@ -2529,15 +2529,15 @@ fec_enet_set_coalesce(struct net_device *ndev, struct ethtool_coalesce *ec)
return -EINVAL;
}
cycle = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr);
cycle = fec_enet_us_to_itr_clock(ndev, ec->rx_coalesce_usecs);
if (cycle > 0xFFFF) {
dev_err(dev, "Rx coalesced usec exceed hardware limitation\n");
return -EINVAL;
}
cycle = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr);
cycle = fec_enet_us_to_itr_clock(ndev, ec->tx_coalesce_usecs);
if (cycle > 0xFFFF) {
dev_err(dev, "Rx coalesced usec exceed hardware limitation\n");
dev_err(dev, "Tx coalesced usec exceed hardware limitation\n");
return -EINVAL;
}


@ -8,3 +8,31 @@ config FSL_FMAN
help
Freescale Data-Path Acceleration Architecture Frame Manager
(FMan) support
config DPAA_ERRATUM_A050385
bool
depends on ARM64 && FSL_DPAA
default y
help
DPAA FMan erratum A050385 software workaround implementation:
align buffers, data start, SG fragment length to avoid FMan DMA
splits.
FMAN DMA read or writes under heavy traffic load may cause FMAN
internal resource leak thus stopping further packet processing.
The FMAN internal queue can overflow when FMAN splits single
read or write transactions into multiple smaller transactions
such that more than 17 AXI transactions are in flight from FMAN
to interconnect. When the FMAN internal queue overflows, it can
stall further packet processing. The issue can occur with any
one of the following three conditions:
1. FMAN AXI transaction crosses 4K address boundary (Errata
A010022)
2. FMAN DMA address for an AXI transaction is not 16 byte
aligned, i.e. the last 4 bits of an address are non-zero
3. Scatter Gather (SG) frames have more than one SG buffer in
the SG list and any one of the buffers, except the last
buffer in the SG list has data size that is not a multiple
of 16 bytes, i.e., other than 16, 32, 48, 64, etc.
With any one of the above three conditions present, there is
likelihood of stalled FMAN packet processing, especially under
stress with multiple ports injecting line-rate traffic.
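
As a rough stand-alone restatement of the three trigger conditions
above, here is a hypothetical a050385_may_split() predicate (plain C,
illustrative only, not the kernel implementation):

#include <stdbool.h>
#include <stdint.h>

/* Per the help text: a single FMan DMA transaction is at risk of
 * being split when any one of the three conditions holds.
 */
static bool a050385_may_split(uintptr_t addr, uint32_t len, bool last_sg)
{
	if (len == 0)
		return false;
	/* 1. the transaction crosses a 4K address boundary */
	if (addr / 4096 != (addr + len - 1) / 4096)
		return true;
	/* 2. the DMA address is not 16-byte aligned */
	if (addr % 16)
		return true;
	/* 3. a non-final SG fragment size is not a multiple of 16 */
	if (!last_sg && (len % 16))
		return true;
	return false;
}

This also suggests why the dpaa_eth hunk earlier aligns the data start
to 256 bytes and sizes all but the last SG fragment in multiples of
256: 4096 is a multiple of 256, so (assuming FMan bursts of at most
256 bytes) such a layout avoids condition 1 as well.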


@ -1,5 +1,6 @@
/*
* Copyright 2008-2015 Freescale Semiconductor Inc.
* Copyright 2020 NXP
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
@ -566,6 +567,10 @@ struct fman_cfg {
u32 qmi_def_tnums_thresh;
};
#ifdef CONFIG_DPAA_ERRATUM_A050385
static bool fman_has_err_a050385;
#endif
static irqreturn_t fman_exceptions(struct fman *fman,
enum fman_exceptions exception)
{
@ -2518,6 +2523,14 @@ struct fman *fman_bind(struct device *fm_dev)
}
EXPORT_SYMBOL(fman_bind);
#ifdef CONFIG_DPAA_ERRATUM_A050385
bool fman_has_errata_a050385(void)
{
return fman_has_err_a050385;
}
EXPORT_SYMBOL(fman_has_errata_a050385);
#endif
static irqreturn_t fman_err_irq(int irq, void *handle)
{
struct fman *fman = (struct fman *)handle;
@ -2845,6 +2858,11 @@ static struct fman *read_dts_node(struct platform_device *of_dev)
goto fman_free;
}
#ifdef CONFIG_DPAA_ERRATUM_A050385
fman_has_err_a050385 =
of_property_read_bool(fm_node, "fsl,erratum-a050385");
#endif
return fman;
fman_node_put:


@ -1,5 +1,6 @@
/*
* Copyright 2008-2015 Freescale Semiconductor Inc.
* Copyright 2020 NXP
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
@ -398,6 +399,10 @@ u16 fman_get_max_frm(void);
int fman_get_rx_extra_headroom(void);
#ifdef CONFIG_DPAA_ERRATUM_A050385
bool fman_has_errata_a050385(void);
#endif
struct fman *fman_bind(struct device *dev);
#endif /* __FM_H */


@ -46,6 +46,7 @@ enum HCLGE_MBX_OPCODE {
HCLGE_MBX_PUSH_VLAN_INFO, /* (PF -> VF) push port base vlan */
HCLGE_MBX_GET_MEDIA_TYPE, /* (VF -> PF) get media type */
HCLGE_MBX_PUSH_PROMISC_INFO, /* (PF -> VF) push vf promisc info */
HCLGE_MBX_VF_UNINIT, /* (VF -> PF) vf is uninitializing */
HCLGE_MBX_GET_VF_FLR_STATUS = 200, /* (M7 -> PF) get vf flr status */
HCLGE_MBX_PUSH_LINK_STATUS, /* (M7 -> PF) get port link status */


@ -1711,7 +1711,7 @@ static int hns3_setup_tc(struct net_device *netdev, void *type_data)
netif_dbg(h, drv, netdev, "setup tc: num_tc=%u\n", tc);
return (kinfo->dcb_ops && kinfo->dcb_ops->setup_tc) ?
kinfo->dcb_ops->setup_tc(h, tc, prio_tc) : -EOPNOTSUPP;
kinfo->dcb_ops->setup_tc(h, tc ? tc : 1, prio_tc) : -EOPNOTSUPP;
}
static int hns3_nic_setup_tc(struct net_device *dev, enum tc_setup_type type,


@ -2446,10 +2446,12 @@ static int hclge_cfg_mac_speed_dup_hw(struct hclge_dev *hdev, int speed,
int hclge_cfg_mac_speed_dup(struct hclge_dev *hdev, int speed, u8 duplex)
{
struct hclge_mac *mac = &hdev->hw.mac;
int ret;
duplex = hclge_check_speed_dup(duplex, speed);
if (hdev->hw.mac.speed == speed && hdev->hw.mac.duplex == duplex)
if (!mac->support_autoneg && mac->speed == speed &&
mac->duplex == duplex)
return 0;
ret = hclge_cfg_mac_speed_dup_hw(hdev, speed, duplex);
@ -7743,16 +7745,27 @@ static int hclge_set_vlan_filter_ctrl(struct hclge_dev *hdev, u8 vlan_type,
struct hclge_desc desc;
int ret;
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_VLAN_FILTER_CTRL, false);
/* read current vlan filter parameter */
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_VLAN_FILTER_CTRL, true);
req = (struct hclge_vlan_filter_ctrl_cmd *)desc.data;
req->vlan_type = vlan_type;
req->vlan_fe = filter_en ? fe_type : 0;
req->vf_id = vf_id;
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret) {
dev_err(&hdev->pdev->dev,
"failed to get vlan filter config, ret = %d.\n", ret);
return ret;
}
/* modify and write new config parameter */
hclge_cmd_reuse_desc(&desc, false);
req->vlan_fe = filter_en ?
(req->vlan_fe | fe_type) : (req->vlan_fe & ~fe_type);
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret)
dev_err(&hdev->pdev->dev, "set vlan filter fail, ret =%d.\n",
dev_err(&hdev->pdev->dev, "failed to set vlan filter, ret = %d.\n",
ret);
return ret;
@ -8270,6 +8283,7 @@ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list)
kfree(vlan);
}
}
clear_bit(vport->vport_id, hdev->vf_vlan_full);
}
void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
@ -8486,6 +8500,28 @@ static int hclge_set_vf_vlan_filter(struct hnae3_handle *handle, int vfid,
}
}
static void hclge_clear_vf_vlan(struct hclge_dev *hdev)
{
struct hclge_vlan_info *vlan_info;
struct hclge_vport *vport;
int ret;
int vf;
/* clear port base vlan for all vf */
for (vf = HCLGE_VF_VPORT_START_NUM; vf < hdev->num_alloc_vport; vf++) {
vport = &hdev->vport[vf];
vlan_info = &vport->port_base_vlan_cfg.vlan_info;
ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
vport->vport_id,
vlan_info->vlan_tag, true);
if (ret)
dev_err(&hdev->pdev->dev,
"failed to clear vf vlan for vf%d, ret = %d\n",
vf - HCLGE_VF_VPORT_START_NUM, ret);
}
}
int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
u16 vlan_id, bool is_kill)
{
@ -9895,6 +9931,7 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
struct hclge_mac *mac = &hdev->hw.mac;
hclge_reset_vf_rate(hdev);
hclge_clear_vf_vlan(hdev);
hclge_misc_affinity_teardown(hdev);
hclge_state_uninit(hdev);


@ -799,6 +799,7 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
hclge_get_link_mode(vport, req);
break;
case HCLGE_MBX_GET_VF_FLR_STATUS:
case HCLGE_MBX_VF_UNINIT:
hclge_rm_vport_all_mac_table(vport, true,
HCLGE_MAC_ADDR_UC);
hclge_rm_vport_all_mac_table(vport, true,


@ -2803,6 +2803,9 @@ static void hclgevf_uninit_hdev(struct hclgevf_dev *hdev)
{
hclgevf_state_uninit(hdev);
hclgevf_send_mbx_msg(hdev, HCLGE_MBX_VF_UNINIT, 0, NULL, 0,
false, NULL, 0);
if (test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) {
hclgevf_misc_irq_uninit(hdev);
hclgevf_uninit_msi(hdev);


@ -2142,6 +2142,8 @@ static void __ibmvnic_reset(struct work_struct *work)
{
struct ibmvnic_rwi *rwi;
struct ibmvnic_adapter *adapter;
bool saved_state = false;
unsigned long flags;
u32 reset_state;
int rc = 0;
@ -2153,17 +2155,25 @@ static void __ibmvnic_reset(struct work_struct *work)
return;
}
reset_state = adapter->state;
rwi = get_next_rwi(adapter);
while (rwi) {
spin_lock_irqsave(&adapter->state_lock, flags);
if (adapter->state == VNIC_REMOVING ||
adapter->state == VNIC_REMOVED) {
spin_unlock_irqrestore(&adapter->state_lock, flags);
kfree(rwi);
rc = EBUSY;
break;
}
if (!saved_state) {
reset_state = adapter->state;
adapter->state = VNIC_RESETTING;
saved_state = true;
}
spin_unlock_irqrestore(&adapter->state_lock, flags);
if (rwi->reset_reason == VNIC_RESET_CHANGE_PARAM) {
/* CHANGE_PARAM requestor holds rtnl_lock */
rc = do_change_param_reset(adapter, rwi, reset_state);
@ -5091,6 +5101,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
__ibmvnic_delayed_reset);
INIT_LIST_HEAD(&adapter->rwi_list);
spin_lock_init(&adapter->rwi_lock);
spin_lock_init(&adapter->state_lock);
mutex_init(&adapter->fw_lock);
init_completion(&adapter->init_done);
init_completion(&adapter->fw_done);
@ -5163,8 +5174,17 @@ static int ibmvnic_remove(struct vio_dev *dev)
{
struct net_device *netdev = dev_get_drvdata(&dev->dev);
struct ibmvnic_adapter *adapter = netdev_priv(netdev);
unsigned long flags;
spin_lock_irqsave(&adapter->state_lock, flags);
if (adapter->state == VNIC_RESETTING) {
spin_unlock_irqrestore(&adapter->state_lock, flags);
return -EBUSY;
}
adapter->state = VNIC_REMOVING;
spin_unlock_irqrestore(&adapter->state_lock, flags);
rtnl_lock();
unregister_netdevice(netdev);


@ -941,7 +941,8 @@ enum vnic_state {VNIC_PROBING = 1,
VNIC_CLOSING,
VNIC_CLOSED,
VNIC_REMOVING,
VNIC_REMOVED};
VNIC_REMOVED,
VNIC_RESETTING};
enum ibmvnic_reset_reason {VNIC_RESET_FAILOVER = 1,
VNIC_RESET_MOBILITY,
@ -1090,4 +1091,7 @@ struct ibmvnic_adapter {
struct ibmvnic_tunables desired;
struct ibmvnic_tunables fallback;
/* Used for serializatin of state field */
spinlock_t state_lock;
};


@ -347,7 +347,7 @@ static int orion_mdio_probe(struct platform_device *pdev)
}
dev->err_interrupt = platform_get_irq(pdev, 0);
dev->err_interrupt = platform_get_irq_optional(pdev, 0);
if (dev->err_interrupt > 0 &&
resource_size(r) < MVMDIO_ERR_INT_MASK + 4) {
dev_err(&pdev->dev,
@ -364,8 +364,8 @@ static int orion_mdio_probe(struct platform_device *pdev)
writel(MVMDIO_ERR_INT_SMI_DONE,
dev->regs + MVMDIO_ERR_INT_MASK);
} else if (dev->err_interrupt == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
} else if (dev->err_interrupt < 0) {
ret = dev->err_interrupt;
goto out_mdio;
}


@ -2176,24 +2176,29 @@ static int ocelot_init_timestamp(struct ocelot *ocelot)
return 0;
}
static void ocelot_port_set_mtu(struct ocelot *ocelot, int port, size_t mtu)
/* Configure the maximum SDU (L2 payload) on RX to the value specified in @sdu.
* The length of VLAN tags is accounted for automatically via DEV_MAC_TAGS_CFG.
*/
static void ocelot_port_set_maxlen(struct ocelot *ocelot, int port, size_t sdu)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
int maxlen = sdu + ETH_HLEN + ETH_FCS_LEN;
int atop_wm;
ocelot_port_writel(ocelot_port, mtu, DEV_MAC_MAXLEN_CFG);
ocelot_port_writel(ocelot_port, maxlen, DEV_MAC_MAXLEN_CFG);
/* Set Pause WM hysteresis
* 152 = 6 * mtu / OCELOT_BUFFER_CELL_SZ
* 101 = 4 * mtu / OCELOT_BUFFER_CELL_SZ
* 152 = 6 * maxlen / OCELOT_BUFFER_CELL_SZ
* 101 = 4 * maxlen / OCELOT_BUFFER_CELL_SZ
*/
ocelot_write_rix(ocelot, SYS_PAUSE_CFG_PAUSE_ENA |
SYS_PAUSE_CFG_PAUSE_STOP(101) |
SYS_PAUSE_CFG_PAUSE_START(152), SYS_PAUSE_CFG, port);
/* Tail dropping watermark */
atop_wm = (ocelot->shared_queue_sz - 9 * mtu) / OCELOT_BUFFER_CELL_SZ;
ocelot_write_rix(ocelot, ocelot_wm_enc(9 * mtu),
atop_wm = (ocelot->shared_queue_sz - 9 * maxlen) /
OCELOT_BUFFER_CELL_SZ;
ocelot_write_rix(ocelot, ocelot_wm_enc(9 * maxlen),
SYS_ATOP, port);
ocelot_write(ocelot, ocelot_wm_enc(atop_wm), SYS_ATOP_TOT_CFG);
}
@ -2222,9 +2227,10 @@ void ocelot_init_port(struct ocelot *ocelot, int port)
DEV_MAC_HDX_CFG);
/* Set Max Length and maximum tags allowed */
ocelot_port_set_mtu(ocelot, port, VLAN_ETH_FRAME_LEN);
ocelot_port_set_maxlen(ocelot, port, ETH_DATA_LEN);
ocelot_port_writel(ocelot_port, DEV_MAC_TAGS_CFG_TAG_ID(ETH_P_8021AD) |
DEV_MAC_TAGS_CFG_VLAN_AWR_ENA |
DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA |
DEV_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA,
DEV_MAC_TAGS_CFG);
@ -2310,18 +2316,18 @@ void ocelot_set_cpu_port(struct ocelot *ocelot, int cpu,
* Only one port can be an NPI at the same time.
*/
if (cpu < ocelot->num_phys_ports) {
int mtu = VLAN_ETH_FRAME_LEN + OCELOT_TAG_LEN;
int sdu = ETH_DATA_LEN + OCELOT_TAG_LEN;
ocelot_write(ocelot, QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK_M |
QSYS_EXT_CPU_CFG_EXT_CPU_PORT(cpu),
QSYS_EXT_CPU_CFG);
if (injection == OCELOT_TAG_PREFIX_SHORT)
mtu += OCELOT_SHORT_PREFIX_LEN;
sdu += OCELOT_SHORT_PREFIX_LEN;
else if (injection == OCELOT_TAG_PREFIX_LONG)
mtu += OCELOT_LONG_PREFIX_LEN;
sdu += OCELOT_LONG_PREFIX_LEN;
ocelot_port_set_mtu(ocelot, cpu, mtu);
ocelot_port_set_maxlen(ocelot, cpu, sdu);
}
/* CPU port Injection/Extraction configuration */


@ -1688,7 +1688,7 @@ static int ionic_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
if (!(is_zero_ether_addr(mac) || is_valid_ether_addr(mac)))
return -EINVAL;
down_read(&ionic->vf_op_lock);
down_write(&ionic->vf_op_lock);
if (vf >= pci_num_vf(ionic->pdev) || !ionic->vfs) {
ret = -EINVAL;
@ -1698,7 +1698,7 @@ static int ionic_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
ether_addr_copy(ionic->vfs[vf].macaddr, mac);
}
up_read(&ionic->vf_op_lock);
up_write(&ionic->vf_op_lock);
return ret;
}
@ -1719,7 +1719,7 @@ static int ionic_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
if (proto != htons(ETH_P_8021Q))
return -EPROTONOSUPPORT;
down_read(&ionic->vf_op_lock);
down_write(&ionic->vf_op_lock);
if (vf >= pci_num_vf(ionic->pdev) || !ionic->vfs) {
ret = -EINVAL;
@ -1730,7 +1730,7 @@ static int ionic_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
ionic->vfs[vf].vlanid = vlan;
}
up_read(&ionic->vf_op_lock);
up_write(&ionic->vf_op_lock);
return ret;
}


@ -2277,7 +2277,7 @@ static int __init sxgbe_cmdline_opt(char *str)
if (!str || !*str)
return -EINVAL;
while ((opt = strsep(&str, ",")) != NULL) {
if (!strncmp(opt, "eee_timer:", 6)) {
if (!strncmp(opt, "eee_timer:", 10)) {
if (kstrtoint(opt + 10, 0, &eee_timer))
goto err;
}


@ -2853,11 +2853,24 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
}
/* Transmit timestamps are only available for 8XXX series. They result
* in three events per packet. These occur in order, and are:
* - the normal completion event
* in up to three events per packet. These occur in order, and are:
* - the normal completion event (may be omitted)
* - the low part of the timestamp
* - the high part of the timestamp
*
* It's possible for multiple completion events to appear before the
* corresponding timestamps. So we can for example get:
* COMP N
* COMP N+1
* TS_LO N
* TS_HI N
* TS_LO N+1
* TS_HI N+1
*
* In addition it's also possible for the adjacent completions to be
* merged, so we may not see COMP N above. As such, the completion
* events are not very useful here.
*
* Each part of the timestamp is itself split across two 16 bit
* fields in the event.
*/
@ -2865,17 +2878,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
switch (tx_ev_type) {
case TX_TIMESTAMP_EVENT_TX_EV_COMPLETION:
/* In case of Queue flush or FLR, we might have received
* the previous TX completion event but not the Timestamp
* events.
*/
if (tx_queue->completed_desc_ptr != tx_queue->ptr_mask)
efx_xmit_done(tx_queue, tx_queue->completed_desc_ptr);
tx_ev_desc_ptr = EFX_QWORD_FIELD(*event,
ESF_DZ_TX_DESCR_INDX);
tx_queue->completed_desc_ptr =
tx_ev_desc_ptr & tx_queue->ptr_mask;
/* Ignore this event - see above. */
break;
case TX_TIMESTAMP_EVENT_TX_EV_TSTAMP_LO:
@ -2887,8 +2890,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
ts_part = efx_ef10_extract_event_ts(event);
tx_queue->completed_timestamp_major = ts_part;
efx_xmit_done(tx_queue, tx_queue->completed_desc_ptr);
tx_queue->completed_desc_ptr = tx_queue->ptr_mask;
efx_xmit_done_single(tx_queue);
break;
default:


@ -20,6 +20,7 @@ netdev_tx_t efx_hard_start_xmit(struct sk_buff *skb,
struct net_device *net_dev);
netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb);
void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index);
void efx_xmit_done_single(struct efx_tx_queue *tx_queue);
int efx_setup_tc(struct net_device *net_dev, enum tc_setup_type type,
void *type_data);
extern unsigned int efx_piobuf_size;


@ -583,6 +583,7 @@ struct efx_channel *efx_copy_channel(const struct efx_channel *old_channel)
if (tx_queue->channel)
tx_queue->channel = channel;
tx_queue->buffer = NULL;
tx_queue->cb_page = NULL;
memset(&tx_queue->txd, 0, sizeof(tx_queue->txd));
}


@ -208,8 +208,6 @@ struct efx_tx_buffer {
* avoid cache-line ping-pong between the xmit path and the
* completion path.
* @merge_events: Number of TX merged completion events
* @completed_desc_ptr: Most recent completed pointer - only used with
* timestamping.
* @completed_timestamp_major: Top part of the most recent tx timestamp.
* @completed_timestamp_minor: Low part of the most recent tx timestamp.
* @insert_count: Current insert pointer
@ -269,7 +267,6 @@ struct efx_tx_queue {
unsigned int merge_events;
unsigned int bytes_compl;
unsigned int pkts_compl;
unsigned int completed_desc_ptr;
u32 completed_timestamp_major;
u32 completed_timestamp_minor;


@ -535,6 +535,44 @@ netdev_tx_t efx_hard_start_xmit(struct sk_buff *skb,
return efx_enqueue_skb(tx_queue, skb);
}
void efx_xmit_done_single(struct efx_tx_queue *tx_queue)
{
unsigned int pkts_compl = 0, bytes_compl = 0;
unsigned int read_ptr;
bool finished = false;
read_ptr = tx_queue->read_count & tx_queue->ptr_mask;
while (!finished) {
struct efx_tx_buffer *buffer = &tx_queue->buffer[read_ptr];
if (!efx_tx_buffer_in_use(buffer)) {
struct efx_nic *efx = tx_queue->efx;
netif_err(efx, hw, efx->net_dev,
"TX queue %d spurious single TX completion\n",
tx_queue->queue);
efx_schedule_reset(efx, RESET_TYPE_TX_SKIP);
return;
}
/* Need to check the flag before dequeueing. */
if (buffer->flags & EFX_TX_BUF_SKB)
finished = true;
efx_dequeue_buffer(tx_queue, buffer, &pkts_compl, &bytes_compl);
++tx_queue->read_count;
read_ptr = tx_queue->read_count & tx_queue->ptr_mask;
}
tx_queue->pkts_compl += pkts_compl;
tx_queue->bytes_compl += bytes_compl;
EFX_WARN_ON_PARANOID(pkts_compl != 1);
efx_xmit_done_check_empty(tx_queue);
}
void efx_init_tx_queue_core_txq(struct efx_tx_queue *tx_queue)
{
struct efx_nic *efx = tx_queue->efx;


@ -80,7 +80,6 @@ void efx_init_tx_queue(struct efx_tx_queue *tx_queue)
tx_queue->xmit_more_available = false;
tx_queue->timestamping = (efx_ptp_use_mac_tx_timestamps(efx) &&
tx_queue->channel == efx_ptp_channel(efx));
tx_queue->completed_desc_ptr = tx_queue->ptr_mask;
tx_queue->completed_timestamp_major = 0;
tx_queue->completed_timestamp_minor = 0;
@ -210,10 +209,9 @@ static void efx_dequeue_buffers(struct efx_tx_queue *tx_queue,
while (read_ptr != stop_index) {
struct efx_tx_buffer *buffer = &tx_queue->buffer[read_ptr];
if (!(buffer->flags & EFX_TX_BUF_OPTION) &&
unlikely(buffer->len == 0)) {
if (!efx_tx_buffer_in_use(buffer)) {
netif_err(efx, tx_err, efx->net_dev,
"TX queue %d spurious TX completion id %x\n",
"TX queue %d spurious TX completion id %d\n",
tx_queue->queue, read_ptr);
efx_schedule_reset(efx, RESET_TYPE_TX_SKIP);
return;
@ -226,6 +224,19 @@ static void efx_dequeue_buffers(struct efx_tx_queue *tx_queue,
}
}
void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue)
{
if ((int)(tx_queue->read_count - tx_queue->old_write_count) >= 0) {
tx_queue->old_write_count = READ_ONCE(tx_queue->write_count);
if (tx_queue->read_count == tx_queue->old_write_count) {
/* Ensure that read_count is flushed. */
smp_mb();
tx_queue->empty_read_count =
tx_queue->read_count | EFX_EMPTY_COUNT_VALID;
}
}
}
void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index)
{
unsigned int fill_level, pkts_compl = 0, bytes_compl = 0;
@ -256,15 +267,7 @@ void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index)
netif_tx_wake_queue(tx_queue->core_txq);
}
/* Check whether the hardware queue is now empty */
if ((int)(tx_queue->read_count - tx_queue->old_write_count) >= 0) {
tx_queue->old_write_count = READ_ONCE(tx_queue->write_count);
if (tx_queue->read_count == tx_queue->old_write_count) {
smp_mb();
tx_queue->empty_read_count =
tx_queue->read_count | EFX_EMPTY_COUNT_VALID;
}
}
efx_xmit_done_check_empty(tx_queue);
}
/* Remove buffers put into a tx_queue for the current packet.


@ -21,6 +21,12 @@ void efx_dequeue_buffer(struct efx_tx_queue *tx_queue,
unsigned int *pkts_compl,
unsigned int *bytes_compl);
static inline bool efx_tx_buffer_in_use(struct efx_tx_buffer *buffer)
{
return buffer->len || (buffer->flags & EFX_TX_BUF_OPTION);
}
void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue);
void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index);
void efx_enqueue_unwind(struct efx_tx_queue *tx_queue,


@ -24,6 +24,7 @@
static void dwmac1000_core_init(struct mac_device_info *hw,
struct net_device *dev)
{
struct stmmac_priv *priv = netdev_priv(dev);
void __iomem *ioaddr = hw->pcsr;
u32 value = readl(ioaddr + GMAC_CONTROL);
int mtu = dev->mtu;
@ -35,7 +36,7 @@ static void dwmac1000_core_init(struct mac_device_info *hw,
* Broadcom tags can look like invalid LLC/SNAP packets and cause the
* hardware to truncate packets on reception.
*/
if (netdev_uses_dsa(dev))
if (netdev_uses_dsa(dev) || !priv->plat->enh_desc)
value &= ~GMAC_CONTROL_ACS;
if (mtu > 1500)


@ -293,6 +293,7 @@ void ipvlan_process_multicast(struct work_struct *work)
}
if (dev)
dev_put(dev);
cond_resched();
}
}
@ -498,19 +499,21 @@ static int ipvlan_process_outbound(struct sk_buff *skb)
struct ethhdr *ethh = eth_hdr(skb);
int ret = NET_XMIT_DROP;
/* In this mode we dont care about multicast and broadcast traffic */
if (is_multicast_ether_addr(ethh->h_dest)) {
pr_debug_ratelimited("Dropped {multi|broad}cast of type=[%x]\n",
ntohs(skb->protocol));
kfree_skb(skb);
goto out;
}
/* The ipvlan is a pseudo-L2 device, so the packets that we receive
* will have L2; which need to discarded and processed further
* in the net-ns of the main-device.
*/
if (skb_mac_header_was_set(skb)) {
/* In this mode we dont care about
* multicast and broadcast traffic */
if (is_multicast_ether_addr(ethh->h_dest)) {
pr_debug_ratelimited(
"Dropped {multi|broad}cast of type=[%x]\n",
ntohs(skb->protocol));
kfree_skb(skb);
goto out;
}
skb_pull(skb, sizeof(*ethh));
skb->mac_header = (typeof(skb->mac_header))~0U;
skb_reset_network_header(skb);


@ -164,7 +164,6 @@ static void ipvlan_uninit(struct net_device *dev)
static int ipvlan_open(struct net_device *dev)
{
struct ipvl_dev *ipvlan = netdev_priv(dev);
struct net_device *phy_dev = ipvlan->phy_dev;
struct ipvl_addr *addr;
if (ipvlan->port->mode == IPVLAN_MODE_L3 ||
@ -178,7 +177,7 @@ static int ipvlan_open(struct net_device *dev)
ipvlan_ht_addr_add(ipvlan, addr);
rcu_read_unlock();
return dev_uc_add(phy_dev, phy_dev->dev_addr);
return 0;
}
static int ipvlan_stop(struct net_device *dev)
@ -190,8 +189,6 @@ static int ipvlan_stop(struct net_device *dev)
dev_uc_unsync(phy_dev, dev);
dev_mc_unsync(phy_dev, dev);
dev_uc_del(phy_dev, phy_dev->dev_addr);
rcu_read_lock();
list_for_each_entry_rcu(addr, &ipvlan->addrs, anode)
ipvlan_ht_addr_del(addr);


@ -424,6 +424,11 @@ static struct macsec_eth_header *macsec_ethhdr(struct sk_buff *skb)
return (struct macsec_eth_header *)skb_mac_header(skb);
}
static sci_t dev_to_sci(struct net_device *dev, __be16 port)
{
return make_sci(dev->dev_addr, port);
}
static void __macsec_pn_wrapped(struct macsec_secy *secy,
struct macsec_tx_sa *tx_sa)
{
@ -3268,6 +3273,20 @@ static int macsec_set_mac_address(struct net_device *dev, void *p)
out:
ether_addr_copy(dev->dev_addr, addr->sa_data);
macsec->secy.sci = dev_to_sci(dev, MACSEC_PORT_ES);
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(macsec)) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(macsec, &ctx);
if (ops) {
ctx.secy = &macsec->secy;
macsec_offload(ops->mdo_upd_secy, &ctx);
}
}
return 0;
}
@ -3342,6 +3361,7 @@ static const struct device_type macsec_type = {
static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = {
[IFLA_MACSEC_SCI] = { .type = NLA_U64 },
[IFLA_MACSEC_PORT] = { .type = NLA_U16 },
[IFLA_MACSEC_ICV_LEN] = { .type = NLA_U8 },
[IFLA_MACSEC_CIPHER_SUITE] = { .type = NLA_U64 },
[IFLA_MACSEC_WINDOW] = { .type = NLA_U32 },
@ -3592,11 +3612,6 @@ static bool sci_exists(struct net_device *dev, sci_t sci)
return false;
}
static sci_t dev_to_sci(struct net_device *dev, __be16 port)
{
return make_sci(dev->dev_addr, port);
}
static int macsec_add_dev(struct net_device *dev, sci_t sci, u8 icv_len)
{
struct macsec_dev *macsec = macsec_priv(dev);


@ -334,6 +334,8 @@ static void macvlan_process_broadcast(struct work_struct *w)
if (src)
dev_put(src->dev);
consume_skb(skb);
cond_resched();
}
}


@ -73,6 +73,7 @@ static struct phy_driver bcm63xx_driver[] = {
/* same phy as above, with just a different OUI */
.phy_id = 0x002bdc00,
.phy_id_mask = 0xfffffc00,
.name = "Broadcom BCM63XX (2)",
/* PHY_BASIC_FEATURES */
.flags = PHY_IS_INTERNAL,
.config_init = bcm63xx_config_init,


@ -727,7 +727,8 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
phy_trigger_machine(phydev);
}
if (phy_clear_interrupt(phydev))
/* did_interrupt() may have cleared the interrupt already */
if (!phydev->drv->did_interrupt && phy_clear_interrupt(phydev))
goto phy_err;
return IRQ_HANDLED;


@ -286,6 +286,8 @@ static int mdio_bus_phy_suspend(struct device *dev)
if (!mdio_bus_phy_may_suspend(phydev))
return 0;
phydev->suspended_by_mdio_bus = 1;
return phy_suspend(phydev);
}
@ -294,9 +296,11 @@ static int mdio_bus_phy_resume(struct device *dev)
struct phy_device *phydev = to_phy_device(dev);
int ret;
if (!mdio_bus_phy_may_suspend(phydev))
if (!phydev->suspended_by_mdio_bus)
goto no_resume;
phydev->suspended_by_mdio_bus = 0;
ret = phy_resume(phydev);
if (ret < 0)
return ret;


@ -761,8 +761,14 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
config.interface = interface;
ret = phylink_validate(pl, supported, &config);
if (ret)
if (ret) {
phylink_warn(pl, "validation of %s with support %*pb and advertisement %*pb failed: %d\n",
phy_modes(config.interface),
__ETHTOOL_LINK_MODE_MASK_NBITS, phy->supported,
__ETHTOOL_LINK_MODE_MASK_NBITS, config.advertising,
ret);
return ret;
}
phy->phylink = pl;
phy->phy_link_change = phylink_phy_change;


@ -232,7 +232,7 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
struct cstate *cs = lcs->next;
unsigned long deltaS, deltaA;
short changes = 0;
int hlen;
int nlen, hlen;
unsigned char new_seq[16];
unsigned char *cp = new_seq;
struct iphdr *ip;
@ -248,6 +248,8 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
return isize;
ip = (struct iphdr *) icp;
if (ip->version != 4 || ip->ihl < 5)
return isize;
/* Bail if this packet isn't TCP, or is an IP fragment */
if (ip->protocol != IPPROTO_TCP || (ntohs(ip->frag_off) & 0x3fff)) {
@ -258,10 +260,14 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
comp->sls_o_tcp++;
return isize;
}
/* Extract TCP header */
nlen = ip->ihl * 4;
if (isize < nlen + sizeof(*th))
return isize;
th = (struct tcphdr *)(((unsigned char *)ip) + ip->ihl*4);
hlen = ip->ihl*4 + th->doff*4;
th = (struct tcphdr *)(icp + nlen);
if (th->doff < sizeof(struct tcphdr) / 4)
return isize;
hlen = nlen + th->doff * 4;
/* Bail if the TCP packet isn't `compressible' (i.e., ACK isn't set or
* some other control bit is set). Also uncompressible if


@ -2240,6 +2240,8 @@ team_nl_option_policy[TEAM_ATTR_OPTION_MAX + 1] = {
[TEAM_ATTR_OPTION_CHANGED] = { .type = NLA_FLAG },
[TEAM_ATTR_OPTION_TYPE] = { .type = NLA_U8 },
[TEAM_ATTR_OPTION_DATA] = { .type = NLA_BINARY },
[TEAM_ATTR_OPTION_PORT_IFINDEX] = { .type = NLA_U32 },
[TEAM_ATTR_OPTION_ARRAY_INDEX] = { .type = NLA_U32 },
};
static int team_nl_cmd_noop(struct sk_buff *skb, struct genl_info *info)


@ -3221,6 +3221,8 @@ static u16 r8153_phy_status(struct r8152 *tp, u16 desired)
}
msleep(20);
if (test_bit(RTL8152_UNPLUG, &tp->flags))
break;
}
return data;
@ -5402,7 +5404,10 @@ static void r8153_init(struct r8152 *tp)
if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_BOOT_CTRL) &
AUTOLOAD_DONE)
break;
msleep(20);
if (test_bit(RTL8152_UNPLUG, &tp->flags))
break;
}
data = r8153_phy_status(tp, 0);
@ -5539,7 +5544,10 @@ static void r8153b_init(struct r8152 *tp)
if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_BOOT_CTRL) &
AUTOLOAD_DONE)
break;
msleep(20);
if (test_bit(RTL8152_UNPLUG, &tp->flags))
break;
}
data = r8153_phy_status(tp, 0);


@ -328,7 +328,7 @@ static void veth_get_stats64(struct net_device *dev,
rcu_read_lock();
peer = rcu_dereference(priv->peer);
if (peer) {
tot->rx_dropped += veth_stats_tx(peer, &packets, &bytes);
veth_stats_tx(peer, &packets, &bytes);
tot->rx_bytes += bytes;
tot->rx_packets += packets;


@ -308,7 +308,8 @@ iwl_parse_nvm_sections(struct iwl_mvm *mvm)
}
/* PHY_SKU section is mandatory in B0 */
if (!mvm->nvm_sections[NVM_SECTION_TYPE_PHY_SKU].data) {
if (mvm->trans->cfg->nvm_type == IWL_NVM_EXT &&
!mvm->nvm_sections[NVM_SECTION_TYPE_PHY_SKU].data) {
IWL_ERR(mvm,
"Can't parse phy_sku in B0, empty sections\n");
return NULL;


@ -447,10 +447,13 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
struct page *page = virt_to_head_page(data);
int offset = data - page_address(page);
struct sk_buff *skb = q->rx_head;
struct skb_shared_info *shinfo = skb_shinfo(skb);
offset += q->buf_offset;
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset, len,
q->buf_size);
if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) {
offset += q->buf_offset;
skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len,
q->buf_size);
}
if (more)
return;


@ -306,6 +306,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
rc = of_mdiobus_register_phy(mdio, child, addr);
if (rc && rc != -ENODEV)
goto unregister;
break;
}
}
}


@ -369,7 +369,7 @@ enum qeth_qdio_info_states {
struct qeth_buffer_pool_entry {
struct list_head list;
struct list_head init_list;
void *elements[QDIO_MAX_ELEMENTS_PER_BUFFER];
struct page *elements[QDIO_MAX_ELEMENTS_PER_BUFFER];
};
struct qeth_qdio_buffer_pool {
@ -983,7 +983,7 @@ extern const struct attribute_group qeth_device_blkt_group;
extern const struct device_type qeth_generic_devtype;
const char *qeth_get_cardname_short(struct qeth_card *);
int qeth_realloc_buffer_pool(struct qeth_card *, int);
int qeth_resize_buffer_pool(struct qeth_card *card, unsigned int count);
int qeth_core_load_discipline(struct qeth_card *, enum qeth_discipline_id);
void qeth_core_free_discipline(struct qeth_card *);


@ -65,7 +65,6 @@ static struct lock_class_key qdio_out_skb_queue_key;
static void qeth_issue_next_read_cb(struct qeth_card *card,
struct qeth_cmd_buffer *iob,
unsigned int data_length);
static void qeth_free_buffer_pool(struct qeth_card *);
static int qeth_qdio_establish(struct qeth_card *);
static void qeth_free_qdio_queues(struct qeth_card *card);
static void qeth_notify_skbs(struct qeth_qdio_out_q *queue,
@ -212,49 +211,121 @@ void qeth_clear_working_pool_list(struct qeth_card *card)
}
EXPORT_SYMBOL_GPL(qeth_clear_working_pool_list);
static void qeth_free_pool_entry(struct qeth_buffer_pool_entry *entry)
{
unsigned int i;
for (i = 0; i < ARRAY_SIZE(entry->elements); i++) {
if (entry->elements[i])
__free_page(entry->elements[i]);
}
kfree(entry);
}
static void qeth_free_buffer_pool(struct qeth_card *card)
{
struct qeth_buffer_pool_entry *entry, *tmp;
list_for_each_entry_safe(entry, tmp, &card->qdio.init_pool.entry_list,
init_list) {
list_del(&entry->init_list);
qeth_free_pool_entry(entry);
}
}
static struct qeth_buffer_pool_entry *qeth_alloc_pool_entry(unsigned int pages)
{
struct qeth_buffer_pool_entry *entry;
unsigned int i;
entry = kzalloc(sizeof(*entry), GFP_KERNEL);
if (!entry)
return NULL;
for (i = 0; i < pages; i++) {
entry->elements[i] = alloc_page(GFP_KERNEL);
if (!entry->elements[i]) {
qeth_free_pool_entry(entry);
return NULL;
}
}
return entry;
}
static int qeth_alloc_buffer_pool(struct qeth_card *card)
{
struct qeth_buffer_pool_entry *pool_entry;
void *ptr;
int i, j;
unsigned int buf_elements = QETH_MAX_BUFFER_ELEMENTS(card);
unsigned int i;
QETH_CARD_TEXT(card, 5, "alocpool");
for (i = 0; i < card->qdio.init_pool.buf_count; ++i) {
pool_entry = kzalloc(sizeof(*pool_entry), GFP_KERNEL);
if (!pool_entry) {
struct qeth_buffer_pool_entry *entry;
entry = qeth_alloc_pool_entry(buf_elements);
if (!entry) {
qeth_free_buffer_pool(card);
return -ENOMEM;
}
for (j = 0; j < QETH_MAX_BUFFER_ELEMENTS(card); ++j) {
ptr = (void *) __get_free_page(GFP_KERNEL);
if (!ptr) {
while (j > 0)
free_page((unsigned long)
pool_entry->elements[--j]);
kfree(pool_entry);
qeth_free_buffer_pool(card);
return -ENOMEM;
}
pool_entry->elements[j] = ptr;
}
list_add(&pool_entry->init_list,
&card->qdio.init_pool.entry_list);
list_add(&entry->init_list, &card->qdio.init_pool.entry_list);
}
return 0;
}
int qeth_realloc_buffer_pool(struct qeth_card *card, int bufcnt)
int qeth_resize_buffer_pool(struct qeth_card *card, unsigned int count)
{
unsigned int buf_elements = QETH_MAX_BUFFER_ELEMENTS(card);
struct qeth_qdio_buffer_pool *pool = &card->qdio.init_pool;
struct qeth_buffer_pool_entry *entry, *tmp;
int delta = count - pool->buf_count;
LIST_HEAD(entries);
QETH_CARD_TEXT(card, 2, "realcbp");
/* TODO: steel/add buffers from/to a running card's buffer pool (?) */
qeth_clear_working_pool_list(card);
qeth_free_buffer_pool(card);
card->qdio.in_buf_pool.buf_count = bufcnt;
card->qdio.init_pool.buf_count = bufcnt;
return qeth_alloc_buffer_pool(card);
/* Defer until queue is allocated: */
if (!card->qdio.in_q)
goto out;
/* Remove entries from the pool: */
while (delta < 0) {
entry = list_first_entry(&pool->entry_list,
struct qeth_buffer_pool_entry,
init_list);
list_del(&entry->init_list);
qeth_free_pool_entry(entry);
delta++;
}
/* Allocate additional entries: */
while (delta > 0) {
entry = qeth_alloc_pool_entry(buf_elements);
if (!entry) {
list_for_each_entry_safe(entry, tmp, &entries,
init_list) {
list_del(&entry->init_list);
qeth_free_pool_entry(entry);
}
return -ENOMEM;
}
list_add(&entry->init_list, &entries);
delta--;
}
list_splice(&entries, &pool->entry_list);
out:
card->qdio.in_buf_pool.buf_count = count;
pool->buf_count = count;
return 0;
}
EXPORT_SYMBOL_GPL(qeth_realloc_buffer_pool);
EXPORT_SYMBOL_GPL(qeth_resize_buffer_pool);
static void qeth_free_qdio_queue(struct qeth_qdio_q *q)
{
@ -1170,19 +1241,6 @@ void qeth_drain_output_queues(struct qeth_card *card)
}
EXPORT_SYMBOL_GPL(qeth_drain_output_queues);
static void qeth_free_buffer_pool(struct qeth_card *card)
{
struct qeth_buffer_pool_entry *pool_entry, *tmp;
int i = 0;
list_for_each_entry_safe(pool_entry, tmp,
&card->qdio.init_pool.entry_list, init_list){
for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i)
free_page((unsigned long)pool_entry->elements[i]);
list_del(&pool_entry->init_list);
kfree(pool_entry);
}
}
static int qeth_osa_set_output_queues(struct qeth_card *card, bool single)
{
unsigned int count = single ? 1 : card->dev->num_tx_queues;
@ -1204,7 +1262,6 @@ static int qeth_osa_set_output_queues(struct qeth_card *card, bool single)
if (count == 1)
dev_info(&card->gdev->dev, "Priority Queueing not supported\n");
card->qdio.default_out_queue = single ? 0 : QETH_DEFAULT_QUEUE;
card->qdio.no_out_queues = count;
return 0;
}
@ -2393,7 +2450,6 @@ static void qeth_free_qdio_queues(struct qeth_card *card)
return;
qeth_free_cq(card);
cancel_delayed_work_sync(&card->buffer_reclaim_work);
for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
if (card->qdio.in_q->bufs[j].rx_skb)
dev_kfree_skb_any(card->qdio.in_q->bufs[j].rx_skb);
@ -2575,7 +2631,6 @@ static struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry(
struct list_head *plh;
struct qeth_buffer_pool_entry *entry;
int i, free;
struct page *page;
if (list_empty(&card->qdio.in_buf_pool.entry_list))
return NULL;
@ -2584,7 +2639,7 @@ static struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry(
entry = list_entry(plh, struct qeth_buffer_pool_entry, list);
free = 1;
for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
if (page_count(virt_to_page(entry->elements[i])) > 1) {
if (page_count(entry->elements[i]) > 1) {
free = 0;
break;
}
@ -2599,15 +2654,15 @@ static struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry(
entry = list_entry(card->qdio.in_buf_pool.entry_list.next,
struct qeth_buffer_pool_entry, list);
for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
if (page_count(virt_to_page(entry->elements[i])) > 1) {
page = alloc_page(GFP_ATOMIC);
if (!page) {
if (page_count(entry->elements[i]) > 1) {
struct page *page = alloc_page(GFP_ATOMIC);
if (!page)
return NULL;
} else {
free_page((unsigned long)entry->elements[i]);
entry->elements[i] = page_address(page);
QETH_CARD_STAT_INC(card, rx_sg_alloc_page);
}
__free_page(entry->elements[i]);
entry->elements[i] = page;
QETH_CARD_STAT_INC(card, rx_sg_alloc_page);
}
}
list_del_init(&entry->list);
@ -2625,12 +2680,12 @@ static int qeth_init_input_buffer(struct qeth_card *card,
ETH_HLEN +
sizeof(struct ipv6hdr));
if (!buf->rx_skb)
return 1;
return -ENOMEM;
}
pool_entry = qeth_find_free_buffer_pool_entry(card);
if (!pool_entry)
return 1;
return -ENOBUFS;
/*
* since the buffer is accessed only from the input_tasklet
@ -2643,7 +2698,7 @@ static int qeth_init_input_buffer(struct qeth_card *card,
for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
buf->buffer->element[i].length = PAGE_SIZE;
buf->buffer->element[i].addr =
virt_to_phys(pool_entry->elements[i]);
page_to_phys(pool_entry->elements[i]);
if (i == QETH_MAX_BUFFER_ELEMENTS(card) - 1)
buf->buffer->element[i].eflags = SBAL_EFLAGS_LAST_ENTRY;
else
@ -2675,10 +2730,15 @@ static int qeth_init_qdio_queues(struct qeth_card *card)
/* inbound queue */
qdio_reset_buffers(card->qdio.in_q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q);
memset(&card->rx, 0, sizeof(struct qeth_rx));
qeth_initialize_working_pool_list(card);
/* give only as many buffers to hardware as we have buffer pool entries */
for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; ++i)
qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]);
for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; i++) {
rc = qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]);
if (rc)
return rc;
}
card->qdio.in_q->next_buf_to_init =
card->qdio.in_buf_pool.buf_count - 1;
rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0,


@ -247,8 +247,8 @@ static ssize_t qeth_dev_bufcnt_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct qeth_card *card = dev_get_drvdata(dev);
unsigned int cnt;
char *tmp;
int cnt, old_cnt;
int rc = 0;
mutex_lock(&card->conf_mutex);
@ -257,13 +257,12 @@ static ssize_t qeth_dev_bufcnt_store(struct device *dev,
goto out;
}
old_cnt = card->qdio.in_buf_pool.buf_count;
cnt = simple_strtoul(buf, &tmp, 10);
cnt = (cnt < QETH_IN_BUF_COUNT_MIN) ? QETH_IN_BUF_COUNT_MIN :
((cnt > QETH_IN_BUF_COUNT_MAX) ? QETH_IN_BUF_COUNT_MAX : cnt);
if (old_cnt != cnt) {
rc = qeth_realloc_buffer_pool(card, cnt);
}
rc = qeth_resize_buffer_pool(card, cnt);
out:
mutex_unlock(&card->conf_mutex);
return rc ? rc : count;
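
The rewritten handler reduces to: parse an unsigned integer, clamp it to the supported range, and call the resize helper, replacing the old nested ternary. A hedged userspace sketch of the same parse-and-clamp shape (strtoul stands in for the kernel's kstrtouint, and the MIN/MAX values are invented for the example):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_COUNT_MIN 8         /* illustrative; not the real QETH limits */
#define BUF_COUNT_MAX 128

static int parse_bufcnt(const char *buf, unsigned int *out)
{
        char *end;
        unsigned long val;

        errno = 0;
        val = strtoul(buf, &end, 10);
        if (errno || end == buf)
                return -EINVAL;

        /* clamp() in the kernel; open-coded here */
        if (val < BUF_COUNT_MIN)
                val = BUF_COUNT_MIN;
        else if (val > BUF_COUNT_MAX)
                val = BUF_COUNT_MAX;

        *out = val;
        return 0;
}

int main(void)
{
        unsigned int cnt;

        if (!parse_bufcnt("500", &cnt))
                printf("clamped to %u\n", cnt); /* prints 128 */
        return 0;
}
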


@ -284,6 +284,7 @@ static void qeth_l2_stop_card(struct qeth_card *card)
if (card->state == CARD_STATE_SOFTSETUP) {
qeth_clear_ipacmd_list(card);
qeth_drain_output_queues(card);
cancel_delayed_work_sync(&card->buffer_reclaim_work);
card->state = CARD_STATE_DOWN;
}


@ -1178,6 +1178,7 @@ static void qeth_l3_stop_card(struct qeth_card *card)
qeth_l3_clear_ip_htable(card, 1);
qeth_clear_ipacmd_list(card);
qeth_drain_output_queues(card);
cancel_delayed_work_sync(&card->buffer_reclaim_work);
card->state = CARD_STATE_DOWN;
}


@ -206,12 +206,11 @@ static ssize_t qeth_l3_dev_sniffer_store(struct device *dev,
qdio_get_ssqd_desc(CARD_DDEV(card), &card->ssqd);
if (card->ssqd.qdioac2 & CHSC_AC2_SNIFFER_AVAILABLE) {
card->options.sniffer = i;
if (card->qdio.init_pool.buf_count !=
QETH_IN_BUF_COUNT_MAX)
qeth_realloc_buffer_pool(card,
QETH_IN_BUF_COUNT_MAX);
} else
qeth_resize_buffer_pool(card, QETH_IN_BUF_COUNT_MAX);
} else {
rc = -EPERM;
}
break;
default:
rc = -EINVAL;


@ -2,15 +2,10 @@
#ifndef _INET_DIAG_H_
#define _INET_DIAG_H_ 1
#include <net/netlink.h>
#include <uapi/linux/inet_diag.h>
struct net;
struct sock;
struct inet_hashinfo;
struct nlattr;
struct nlmsghdr;
struct sk_buff;
struct netlink_callback;
struct inet_diag_handler {
void (*dump)(struct sk_buff *skb,
@ -62,6 +57,17 @@ int inet_diag_bc_sk(const struct nlattr *_bc, struct sock *sk);
void inet_diag_msg_common_fill(struct inet_diag_msg *r, struct sock *sk);
static inline size_t inet_diag_msg_attrs_size(void)
{
return nla_total_size(1) /* INET_DIAG_SHUTDOWN */
+ nla_total_size(1) /* INET_DIAG_TOS */
#if IS_ENABLED(CONFIG_IPV6)
+ nla_total_size(1) /* INET_DIAG_TCLASS */
+ nla_total_size(1) /* INET_DIAG_SKV6ONLY */
#endif
+ nla_total_size(4) /* INET_DIAG_MARK */
+ nla_total_size(4); /* INET_DIAG_CLASS_ID */
}
int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb,
struct inet_diag_msg *r, int ext,
struct user_namespace *user_ns, bool net_admin);
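
inet_diag_msg_attrs_size() lets every caller budget its reply buffer with the same per-attribute sum. Each term is nla_total_size(payload), i.e. the 4-byte attribute header plus the payload, rounded up to 4-byte alignment; a small worked example (the macros mirror the netlink ABI's fixed alignment):

#include <stdio.h>

#define NLA_ALIGNTO     4
#define NLA_ALIGN(len)  (((len) + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1))
#define NLA_HDRLEN      ((int)NLA_ALIGN(4))     /* struct nlattr is 4 bytes */

static int nla_total_size(int payload)
{
        return NLA_ALIGN(NLA_HDRLEN + payload);
}

int main(void)
{
        /* One u8 attribute still costs 8 bytes on the wire... */
        printf("u8:  %d\n", nla_total_size(1)); /* 8 */
        /* ...and a u32 attribute also costs 8 bytes. */
        printf("u32: %d\n", nla_total_size(4)); /* 8 */
        return 0;
}
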


@ -357,6 +357,7 @@ struct macsec_ops;
* is_gigabit_capable: Set to true if PHY supports 1000Mbps
* has_fixups: Set to true if this phy has fixups/quirks.
* suspended: Set to true if this phy has been suspended successfully.
* suspended_by_mdio_bus: Set to true if this phy was suspended by MDIO bus.
* sysfs_links: Internal boolean tracking sysfs symbolic links setup/removal.
* loopback_enabled: Set true if this phy has been loopbacked successfully.
* state: state of the PHY for management purposes
@ -396,6 +397,7 @@ struct phy_device {
unsigned is_gigabit_capable:1;
unsigned has_fixups:1;
unsigned suspended:1;
unsigned suspended_by_mdio_bus:1;
unsigned sysfs_links:1;
unsigned loopback_enabled:1;
@ -557,6 +559,7 @@ struct phy_driver {
/*
* Checks if the PHY generated an interrupt.
* For multi-PHY devices with a shared PHY interrupt pin,
* the set interrupt bits have to be cleared.
*/
int (*did_interrupt)(struct phy_device *phydev);


@ -972,9 +972,9 @@ static inline int rhashtable_lookup_insert_key(
/**
* rhashtable_lookup_get_insert_key - lookup and insert object into hash table
* @ht: hash table
* @key: key
* @obj: pointer to hash head inside object
* @params: hash table parameters
* @data: pointer to element data already in hashes
*
* Just like rhashtable_lookup_insert_key(), but this function returns the
* object if it exists, NULL if it does not and the insertion was successful,


@ -108,6 +108,7 @@ struct fib_rule_notifier_info {
[FRA_OIFNAME] = { .type = NLA_STRING, .len = IFNAMSIZ - 1 }, \
[FRA_PRIORITY] = { .type = NLA_U32 }, \
[FRA_FWMARK] = { .type = NLA_U32 }, \
[FRA_TUN_ID] = { .type = NLA_U64 }, \
[FRA_FWMASK] = { .type = NLA_U32 }, \
[FRA_TABLE] = { .type = NLA_U32 }, \
[FRA_SUPPRESS_PREFIXLEN] = { .type = NLA_U32 }, \


@ -74,7 +74,7 @@
#define DEV_MAC_TAGS_CFG_TAG_ID_M GENMASK(31, 16)
#define DEV_MAC_TAGS_CFG_TAG_ID_X(x) (((x) & GENMASK(31, 16)) >> 16)
#define DEV_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA BIT(2)
#define DEV_MAC_TAGS_CFG_PB_ENA BIT(1)
#define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA BIT(1)
#define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA BIT(0)
#define DEV_MAC_ADV_CHK_CFG 0x2c


@ -74,6 +74,8 @@ enum {
#define IPPROTO_UDPLITE IPPROTO_UDPLITE
IPPROTO_MPLS = 137, /* MPLS in IP (RFC 4023) */
#define IPPROTO_MPLS IPPROTO_MPLS
IPPROTO_ETHERNET = 143, /* Ethernet-within-IPv6 Encapsulation */
#define IPPROTO_ETHERNET IPPROTO_ETHERNET
IPPROTO_RAW = 255, /* Raw IP packets */
#define IPPROTO_RAW IPPROTO_RAW
IPPROTO_MPTCP = 262, /* Multipath TCP connection */


@ -6271,6 +6271,10 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
return;
}
/* Don't associate the sock with unrelated interrupted task's cgroup. */
if (in_interrupt())
return;
rcu_read_lock();
while (true) {


@ -6682,19 +6682,9 @@ void mem_cgroup_sk_alloc(struct sock *sk)
if (!mem_cgroup_sockets_enabled)
return;
/*
* Socket cloning can throw us here with sk_memcg already
* filled. It won't however, necessarily happen from
* process context. So the test for root memcg given
* the current task's memcg won't help us in this case.
*
* Respecting the original socket's memcg is a better
* decision in this case.
*/
if (sk->sk_memcg) {
css_get(&sk->sk_memcg->css);
/* Do not associate the sock with unrelated interrupted task's memcg. */
if (in_interrupt())
return;
}
rcu_read_lock();
memcg = mem_cgroup_from_task(current);


@ -789,6 +789,10 @@ static void batadv_iv_ogm_schedule_buff(struct batadv_hard_iface *hard_iface)
lockdep_assert_held(&hard_iface->bat_iv.ogm_buff_mutex);
/* interface already disabled by batadv_iv_ogm_iface_disable */
if (!*ogm_buff)
return;
/* the interface gets activated here to avoid race conditions between
* the moment of activating the interface in
* hardif_activate_interface() where the originator mac is set and


@ -112,7 +112,8 @@ static struct caif_device_entry *caif_get(struct net_device *dev)
caif_device_list(dev_net(dev));
struct caif_device_entry *caifd;
list_for_each_entry_rcu(caifd, &caifdevs->list, list) {
list_for_each_entry_rcu(caifd, &caifdevs->list, list,
lockdep_rtnl_is_held()) {
if (caifd->netdev == dev)
return caifd;
}

View File

@ -3352,34 +3352,41 @@ devlink_param_value_get_from_info(const struct devlink_param *param,
struct genl_info *info,
union devlink_param_value *value)
{
struct nlattr *param_data;
int len;
if (param->type != DEVLINK_PARAM_TYPE_BOOL &&
!info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA])
param_data = info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA];
if (param->type != DEVLINK_PARAM_TYPE_BOOL && !param_data)
return -EINVAL;
switch (param->type) {
case DEVLINK_PARAM_TYPE_U8:
value->vu8 = nla_get_u8(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]);
if (nla_len(param_data) != sizeof(u8))
return -EINVAL;
value->vu8 = nla_get_u8(param_data);
break;
case DEVLINK_PARAM_TYPE_U16:
value->vu16 = nla_get_u16(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]);
if (nla_len(param_data) != sizeof(u16))
return -EINVAL;
value->vu16 = nla_get_u16(param_data);
break;
case DEVLINK_PARAM_TYPE_U32:
value->vu32 = nla_get_u32(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]);
if (nla_len(param_data) != sizeof(u32))
return -EINVAL;
value->vu32 = nla_get_u32(param_data);
break;
case DEVLINK_PARAM_TYPE_STRING:
len = strnlen(nla_data(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]),
nla_len(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]));
if (len == nla_len(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]) ||
len = strnlen(nla_data(param_data), nla_len(param_data));
if (len == nla_len(param_data) ||
len >= __DEVLINK_PARAM_MAX_STRING_VALUE)
return -EINVAL;
strcpy(value->vstr,
nla_data(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]));
strcpy(value->vstr, nla_data(param_data));
break;
case DEVLINK_PARAM_TYPE_BOOL:
value->vbool = info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA] ?
true : false;
if (param_data && nla_len(param_data))
return -EINVAL;
value->vbool = nla_get_flag(param_data);
break;
}
return 0;
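
Because DEVLINK_ATTR_PARAM_VALUE_DATA has no fixed type in the policy, the code above now checks nla_len() against the expected width before every nla_get_*(). The same check-before-read discipline on a raw length-prefixed blob, sketched in userspace (the TLV layout is a stand-in, not the netlink wire format):

#include <stdint.h>
#include <string.h>

/* Pull a u16 payload out of a length-prefixed blob; -1 if malformed. */
static int read_u16_tlv(const uint8_t *buf, size_t buf_len, uint16_t *out)
{
        uint16_t payload_len;

        if (buf_len < 2)
                return -1;
        memcpy(&payload_len, buf, 2);

        /* Reject anything that is not exactly sizeof(u16), as the
         * devlink code above now does with nla_len(). */
        if (payload_len != sizeof(*out) || buf_len < 2u + payload_len)
                return -1;

        memcpy(out, buf + 2, sizeof(*out));
        return 0;
}

int main(void)
{
        const uint8_t tlv[] = { 2, 0, 0x34, 0x12 };     /* len=2 (LE), payload */
        uint16_t val;

        return read_u16_tlv(tlv, sizeof(tlv), &val);    /* 0: well-formed */
}
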
@ -5951,6 +5958,8 @@ static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_PARAM_VALUE_CMODE] = { .type = NLA_U8 },
[DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING },
[DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .type = NLA_U32 },
[DEVLINK_ATTR_REGION_CHUNK_ADDR] = { .type = NLA_U64 },
[DEVLINK_ATTR_REGION_CHUNK_LEN] = { .type = NLA_U64 },
[DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING },
[DEVLINK_ATTR_HEALTH_REPORTER_GRACEFUL_PERIOD] = { .type = NLA_U64 },
[DEVLINK_ATTR_HEALTH_REPORTER_AUTO_RECOVER] = { .type = NLA_U8 },


@ -53,30 +53,60 @@ static void cgrp_css_free(struct cgroup_subsys_state *css)
kfree(css_cls_state(css));
}
/*
* To avoid stalling socket creation for tasks with a large number of threads
* and open sockets, release the file_lock every 1000 iterated descriptors.
* New sockets will already have been created with new classid.
*/
struct update_classid_context {
u32 classid;
unsigned int batch;
};
#define UPDATE_CLASSID_BATCH 1000
static int update_classid_sock(const void *v, struct file *file, unsigned n)
{
int err;
struct update_classid_context *ctx = (void *)v;
struct socket *sock = sock_from_file(file, &err);
if (sock) {
spin_lock(&cgroup_sk_update_lock);
sock_cgroup_set_classid(&sock->sk->sk_cgrp_data,
(unsigned long)v);
sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid);
spin_unlock(&cgroup_sk_update_lock);
}
if (--ctx->batch == 0) {
ctx->batch = UPDATE_CLASSID_BATCH;
return n + 1;
}
return 0;
}
static void update_classid_task(struct task_struct *p, u32 classid)
{
struct update_classid_context ctx = {
.classid = classid,
.batch = UPDATE_CLASSID_BATCH
};
unsigned int fd = 0;
do {
task_lock(p);
fd = iterate_fd(p->files, fd, update_classid_sock, &ctx);
task_unlock(p);
cond_resched();
} while (fd);
}
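
update_classid_sock() returns a non-zero descriptor number every UPDATE_CLASSID_BATCH entries, which stops iterate_fd(); update_classid_task() then drops the task lock, runs cond_resched(), and resumes from that descriptor. A small userspace analogue of resume-after-N batching, assuming sched_yield() as a stand-in for cond_resched():

#include <sched.h>
#include <stddef.h>

#define BATCH 1000

/* Visit items[0..n), yielding the CPU after every BATCH items. */
static void walk_batched(const int *items, size_t n, void (*visit)(int))
{
        size_t next = 0;

        while (next < n) {
                size_t end = next + BATCH;

                if (end > n)
                        end = n;
                while (next < end)
                        visit(items[next++]);   /* the per-fd callback */
                sched_yield();                  /* cond_resched() analogue */
        }
}

static void visit_noop(int v) { (void)v; }

int main(void)
{
        static int items[2500];

        walk_batched(items, 2500, visit_noop);  /* yields after each batch */
        return 0;
}
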
static void cgrp_attach(struct cgroup_taskset *tset)
{
struct cgroup_subsys_state *css;
struct task_struct *p;
cgroup_taskset_for_each(p, css, tset) {
task_lock(p);
iterate_fd(p->files, 0, update_classid_sock,
(void *)(unsigned long)css_cls_state(css)->classid);
task_unlock(p);
update_classid_task(p, css_cls_state(css)->classid);
}
}
@ -98,10 +128,7 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
css_task_iter_start(css, 0, &it);
while ((p = css_task_iter_next(&it))) {
task_lock(p);
iterate_fd(p->files, 0, update_classid_sock,
(void *)(unsigned long)cs->classid);
task_unlock(p);
update_classid_task(p, cs->classid);
cond_resched();
}
css_task_iter_end(&it);


@ -1830,7 +1830,10 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
atomic_set(&newsk->sk_zckey, 0);
sock_reset_flag(newsk, SOCK_DONE);
mem_cgroup_sk_alloc(newsk);
/* sk->sk_memcg will be populated at accept() time */
newsk->sk_memcg = NULL;
cgroup_sk_alloc(&newsk->sk_cgrp_data);
rcu_read_lock();


@ -117,7 +117,9 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
/* port.c */
int dsa_port_set_state(struct dsa_port *dp, u8 state,
struct switchdev_trans *trans);
int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy);
int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy);
void dsa_port_disable_rt(struct dsa_port *dp);
void dsa_port_disable(struct dsa_port *dp);
int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br);
void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br);


@ -63,7 +63,7 @@ static void dsa_port_set_state_now(struct dsa_port *dp, u8 state)
pr_err("DSA: failed to set STP state %u (%d)\n", state, err);
}
int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy)
int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy)
{
struct dsa_switch *ds = dp->ds;
int port = dp->index;
@ -78,14 +78,31 @@ int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy)
if (!dp->bridge_dev)
dsa_port_set_state_now(dp, BR_STATE_FORWARDING);
if (dp->pl)
phylink_start(dp->pl);
return 0;
}
void dsa_port_disable(struct dsa_port *dp)
int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy)
{
int err;
rtnl_lock();
err = dsa_port_enable_rt(dp, phy);
rtnl_unlock();
return err;
}
void dsa_port_disable_rt(struct dsa_port *dp)
{
struct dsa_switch *ds = dp->ds;
int port = dp->index;
if (dp->pl)
phylink_stop(dp->pl);
if (!dp->bridge_dev)
dsa_port_set_state_now(dp, BR_STATE_DISABLED);
@ -93,6 +110,13 @@ void dsa_port_disable(struct dsa_port *dp)
ds->ops->port_disable(ds, port);
}
void dsa_port_disable(struct dsa_port *dp)
{
rtnl_lock();
dsa_port_disable_rt(dp);
rtnl_unlock();
}
int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br)
{
struct dsa_notifier_bridge_info info = {
@ -614,10 +638,6 @@ static int dsa_port_phylink_register(struct dsa_port *dp)
goto err_phy_connect;
}
rtnl_lock();
phylink_start(dp->pl);
rtnl_unlock();
return 0;
err_phy_connect:
@ -628,9 +648,14 @@ err_phy_connect:
int dsa_port_link_register_of(struct dsa_port *dp)
{
struct dsa_switch *ds = dp->ds;
struct device_node *phy_np;
if (!ds->ops->adjust_link)
return dsa_port_phylink_register(dp);
if (!ds->ops->adjust_link) {
phy_np = of_parse_phandle(dp->dn, "phy-handle", 0);
if (of_phy_is_fixed_link(dp->dn) || phy_np)
return dsa_port_phylink_register(dp);
return 0;
}
dev_warn(ds->dev,
"Using legacy PHYLIB callbacks. Please migrate to PHYLINK!\n");
@ -645,11 +670,12 @@ void dsa_port_link_unregister_of(struct dsa_port *dp)
{
struct dsa_switch *ds = dp->ds;
if (!ds->ops->adjust_link) {
if (!ds->ops->adjust_link && dp->pl) {
rtnl_lock();
phylink_disconnect_phy(dp->pl);
rtnl_unlock();
phylink_destroy(dp->pl);
dp->pl = NULL;
return;
}


@ -88,12 +88,10 @@ static int dsa_slave_open(struct net_device *dev)
goto clear_allmulti;
}
err = dsa_port_enable(dp, dev->phydev);
err = dsa_port_enable_rt(dp, dev->phydev);
if (err)
goto clear_promisc;
phylink_start(dp->pl);
return 0;
clear_promisc:
@ -114,9 +112,7 @@ static int dsa_slave_close(struct net_device *dev)
struct net_device *master = dsa_slave_to_master(dev);
struct dsa_port *dp = dsa_slave_to_port(dev);
phylink_stop(dp->pl);
dsa_port_disable(dp);
dsa_port_disable_rt(dp);
dev_mc_unsync(master, dev);
dev_uc_unsync(master, dev);


@ -21,7 +21,13 @@ const struct nla_policy ieee802154_policy[IEEE802154_ATTR_MAX + 1] = {
[IEEE802154_ATTR_HW_ADDR] = { .type = NLA_HW_ADDR, },
[IEEE802154_ATTR_PAN_ID] = { .type = NLA_U16, },
[IEEE802154_ATTR_CHANNEL] = { .type = NLA_U8, },
[IEEE802154_ATTR_BCN_ORD] = { .type = NLA_U8, },
[IEEE802154_ATTR_SF_ORD] = { .type = NLA_U8, },
[IEEE802154_ATTR_PAN_COORD] = { .type = NLA_U8, },
[IEEE802154_ATTR_BAT_EXT] = { .type = NLA_U8, },
[IEEE802154_ATTR_COORD_REALIGN] = { .type = NLA_U8, },
[IEEE802154_ATTR_PAGE] = { .type = NLA_U8, },
[IEEE802154_ATTR_DEV_TYPE] = { .type = NLA_U8, },
[IEEE802154_ATTR_COORD_SHORT_ADDR] = { .type = NLA_U16, },
[IEEE802154_ATTR_COORD_HW_ADDR] = { .type = NLA_HW_ADDR, },
[IEEE802154_ATTR_COORD_PAN_ID] = { .type = NLA_U16, },


@ -56,7 +56,9 @@ int gre_del_protocol(const struct gre_protocol *proto, u8 version)
}
EXPORT_SYMBOL_GPL(gre_del_protocol);
/* Fills in tpi and returns header length to be pulled. */
/* Fills in tpi and returns header length to be pulled.
* Note that caller must use pskb_may_pull() before pulling GRE header.
*/
int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
bool *csum_err, __be16 proto, int nhs)
{
@ -110,8 +112,14 @@ int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
* - When dealing with WCCPv2, Skip extra 4 bytes in GRE header
*/
if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) {
u8 _val, *val;
val = skb_header_pointer(skb, nhs + hdr_len,
sizeof(_val), &_val);
if (!val)
return -EINVAL;
tpi->proto = proto;
if ((*(u8 *)options & 0xF0) != 0x40)
if ((*val & 0xF0) != 0x40)
hdr_len += 4;
}
tpi->hdr_len = hdr_len;
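
skb_header_pointer() either points into the linear skb data or copies the requested span into the caller's stack buffer, returning NULL when the packet is too short, so the WCCP version byte can no longer be read out of bounds. A sketch of the same bounded-read contract on a flat buffer (names are illustrative):

#include <stddef.h>
#include <string.h>

/* Return len bytes at offset, or NULL if out of bounds; mirrors the
 * skb_header_pointer() contract on a flat buffer. */
static const void *header_pointer(const unsigned char *pkt, size_t pkt_len,
                                  size_t offset, size_t len, void *stack_buf)
{
        if (len > pkt_len || offset > pkt_len - len)
                return NULL;    /* too short: caller must bail out */
        memcpy(stack_buf, pkt + offset, len);
        return stack_buf;
}

int main(void)
{
        unsigned char pkt[8] = { 0 };
        unsigned char byte;

        /* One byte past the end: NULL, as for a truncated GRE+WCCP frame. */
        return header_pointer(pkt, sizeof(pkt), 8, 1, &byte) ? 1 : 0;
}
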


@ -482,8 +482,28 @@ struct sock *inet_csk_accept(struct sock *sk, int flags, int *err, bool kern)
}
spin_unlock_bh(&queue->fastopenq.lock);
}
out:
release_sock(sk);
if (newsk && mem_cgroup_sockets_enabled) {
int amt;
/* atomically get the memory usage, set and charge the
* newsk->sk_memcg.
*/
lock_sock(newsk);
/* The socket has not been accepted yet, no need to look at
* newsk->sk_wmem_queued.
*/
amt = sk_mem_pages(newsk->sk_forward_alloc +
atomic_read(&newsk->sk_rmem_alloc));
mem_cgroup_sk_alloc(newsk);
if (newsk->sk_memcg && amt)
mem_cgroup_charge_skmem(newsk->sk_memcg, amt);
release_sock(newsk);
}
if (req)
reqsk_put(req);
return newsk;
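
The charge is converted from bytes to page-sized quanta by sk_mem_pages(), which is a plain round-up division by the page size. Worked out for a 4 KiB page (the byte counts are examples only):

#include <stdio.h>

#define PAGE_SIZE 4096u

/* Round a byte count up to whole pages, like sk_mem_pages(). */
static unsigned int mem_pages(unsigned int bytes)
{
        return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

int main(void)
{
        printf("%u\n", mem_pages(1));           /* 1 page */
        printf("%u\n", mem_pages(4096));        /* 1 page */
        printf("%u\n", mem_pages(4097));        /* 2 pages */
        return 0;
}
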


@ -100,13 +100,9 @@ static size_t inet_sk_attr_size(struct sock *sk,
aux = handler->idiag_get_aux_size(sk, net_admin);
return nla_total_size(sizeof(struct tcp_info))
+ nla_total_size(1) /* INET_DIAG_SHUTDOWN */
+ nla_total_size(1) /* INET_DIAG_TOS */
+ nla_total_size(1) /* INET_DIAG_TCLASS */
+ nla_total_size(4) /* INET_DIAG_MARK */
+ nla_total_size(4) /* INET_DIAG_CLASS_ID */
+ nla_total_size(sizeof(struct inet_diag_meminfo))
+ nla_total_size(sizeof(struct inet_diag_msg))
+ inet_diag_msg_attrs_size()
+ nla_total_size(sizeof(struct inet_diag_meminfo))
+ nla_total_size(SK_MEMINFO_VARS * sizeof(u32))
+ nla_total_size(TCP_CA_NAME_MAX)
+ nla_total_size(sizeof(struct tcpvegas_info))
@ -147,6 +143,24 @@ int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb,
if (net_admin && nla_put_u32(skb, INET_DIAG_MARK, sk->sk_mark))
goto errout;
if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) ||
ext & (1 << (INET_DIAG_TCLASS - 1))) {
u32 classid = 0;
#ifdef CONFIG_SOCK_CGROUP_DATA
classid = sock_cgroup_classid(&sk->sk_cgrp_data);
#endif
/* Fall back to socket priority if class id isn't set.
* Classful qdiscs use it as direct reference to class.
* For cgroup2 classid is always zero.
*/
if (!classid)
classid = sk->sk_priority;
if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid))
goto errout;
}
r->idiag_uid = from_kuid_munged(user_ns, sock_i_uid(sk));
r->idiag_inode = sock_i_ino(sk);
@ -284,24 +298,6 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
goto errout;
}
if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) ||
ext & (1 << (INET_DIAG_TCLASS - 1))) {
u32 classid = 0;
#ifdef CONFIG_SOCK_CGROUP_DATA
classid = sock_cgroup_classid(&sk->sk_cgrp_data);
#endif
/* Fall back to socket priority if class id isn't set.
* Classful qdiscs use it as direct reference to class.
* For cgroup2 classid is always zero.
*/
if (!classid)
classid = sk->sk_priority;
if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid))
goto errout;
}
out:
nlmsg_end(skb, nlh);
return 0;


@ -100,8 +100,9 @@ static int raw_diag_dump_one(struct sk_buff *in_skb,
if (IS_ERR(sk))
return PTR_ERR(sk);
rep = nlmsg_new(sizeof(struct inet_diag_msg) +
sizeof(struct inet_diag_meminfo) + 64,
rep = nlmsg_new(nla_total_size(sizeof(struct inet_diag_msg)) +
inet_diag_msg_attrs_size() +
nla_total_size(sizeof(struct inet_diag_meminfo)) + 64,
GFP_KERNEL);
if (!rep) {
sock_put(sk);


@ -64,8 +64,9 @@ static int udp_dump_one(struct udp_table *tbl, struct sk_buff *in_skb,
goto out;
err = -ENOMEM;
rep = nlmsg_new(sizeof(struct inet_diag_msg) +
sizeof(struct inet_diag_meminfo) + 64,
rep = nlmsg_new(nla_total_size(sizeof(struct inet_diag_msg)) +
inet_diag_msg_attrs_size() +
nla_total_size(sizeof(struct inet_diag_meminfo)) + 64,
GFP_KERNEL);
if (!rep)
goto out;


@ -1226,11 +1226,13 @@ check_cleanup_prefix_route(struct inet6_ifaddr *ifp, unsigned long *expires)
}
static void
cleanup_prefix_route(struct inet6_ifaddr *ifp, unsigned long expires, bool del_rt)
cleanup_prefix_route(struct inet6_ifaddr *ifp, unsigned long expires,
bool del_rt, bool del_peer)
{
struct fib6_info *f6i;
f6i = addrconf_get_prefix_route(&ifp->addr, ifp->prefix_len,
f6i = addrconf_get_prefix_route(del_peer ? &ifp->peer_addr : &ifp->addr,
ifp->prefix_len,
ifp->idev->dev, 0, RTF_DEFAULT, true);
if (f6i) {
if (del_rt)
@ -1293,7 +1295,7 @@ static void ipv6_del_addr(struct inet6_ifaddr *ifp)
if (action != CLEANUP_PREFIX_RT_NOP) {
cleanup_prefix_route(ifp, expires,
action == CLEANUP_PREFIX_RT_DEL);
action == CLEANUP_PREFIX_RT_DEL, false);
}
/* clean up prefsrc entries */
@ -3345,6 +3347,10 @@ static void addrconf_dev_config(struct net_device *dev)
(dev->type != ARPHRD_NONE) &&
(dev->type != ARPHRD_RAWIP)) {
/* Alas, we support only Ethernet autoconfiguration. */
idev = __in6_dev_get(dev);
if (!IS_ERR_OR_NULL(idev) && dev->flags & IFF_UP &&
dev->flags & IFF_MULTICAST)
ipv6_mc_up(idev);
return;
}
@ -4586,12 +4592,14 @@ inet6_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh,
}
static int modify_prefix_route(struct inet6_ifaddr *ifp,
unsigned long expires, u32 flags)
unsigned long expires, u32 flags,
bool modify_peer)
{
struct fib6_info *f6i;
u32 prio;
f6i = addrconf_get_prefix_route(&ifp->addr, ifp->prefix_len,
f6i = addrconf_get_prefix_route(modify_peer ? &ifp->peer_addr : &ifp->addr,
ifp->prefix_len,
ifp->idev->dev, 0, RTF_DEFAULT, true);
if (!f6i)
return -ENOENT;
@ -4602,7 +4610,8 @@ static int modify_prefix_route(struct inet6_ifaddr *ifp,
ip6_del_rt(dev_net(ifp->idev->dev), f6i);
/* add new one */
addrconf_prefix_route(&ifp->addr, ifp->prefix_len,
addrconf_prefix_route(modify_peer ? &ifp->peer_addr : &ifp->addr,
ifp->prefix_len,
ifp->rt_priority, ifp->idev->dev,
expires, flags, GFP_KERNEL);
} else {
@ -4624,6 +4633,7 @@ static int inet6_addr_modify(struct inet6_ifaddr *ifp, struct ifa6_config *cfg)
unsigned long timeout;
bool was_managetempaddr;
bool had_prefixroute;
bool new_peer = false;
ASSERT_RTNL();
@ -4655,6 +4665,13 @@ static int inet6_addr_modify(struct inet6_ifaddr *ifp, struct ifa6_config *cfg)
cfg->preferred_lft = timeout;
}
if (cfg->peer_pfx &&
memcmp(&ifp->peer_addr, cfg->peer_pfx, sizeof(struct in6_addr))) {
if (!ipv6_addr_any(&ifp->peer_addr))
cleanup_prefix_route(ifp, expires, true, true);
new_peer = true;
}
spin_lock_bh(&ifp->lock);
was_managetempaddr = ifp->flags & IFA_F_MANAGETEMPADDR;
had_prefixroute = ifp->flags & IFA_F_PERMANENT &&
@ -4670,6 +4687,9 @@ static int inet6_addr_modify(struct inet6_ifaddr *ifp, struct ifa6_config *cfg)
if (cfg->rt_priority && cfg->rt_priority != ifp->rt_priority)
ifp->rt_priority = cfg->rt_priority;
if (new_peer)
ifp->peer_addr = *cfg->peer_pfx;
spin_unlock_bh(&ifp->lock);
if (!(ifp->flags&IFA_F_TENTATIVE))
ipv6_ifa_notify(0, ifp);
@ -4678,7 +4698,7 @@ static int inet6_addr_modify(struct inet6_ifaddr *ifp, struct ifa6_config *cfg)
int rc = -ENOENT;
if (had_prefixroute)
rc = modify_prefix_route(ifp, expires, flags);
rc = modify_prefix_route(ifp, expires, flags, false);
/* prefix route could have been deleted; if so restore it */
if (rc == -ENOENT) {
@ -4686,6 +4706,15 @@ static int inet6_addr_modify(struct inet6_ifaddr *ifp, struct ifa6_config *cfg)
ifp->rt_priority, ifp->idev->dev,
expires, flags, GFP_KERNEL);
}
if (had_prefixroute && !ipv6_addr_any(&ifp->peer_addr))
rc = modify_prefix_route(ifp, expires, flags, true);
if (rc == -ENOENT && !ipv6_addr_any(&ifp->peer_addr)) {
addrconf_prefix_route(&ifp->peer_addr, ifp->prefix_len,
ifp->rt_priority, ifp->idev->dev,
expires, flags, GFP_KERNEL);
}
} else if (had_prefixroute) {
enum cleanup_prefix_rt_t action;
unsigned long rt_expires;
@ -4696,7 +4725,7 @@ static int inet6_addr_modify(struct inet6_ifaddr *ifp, struct ifa6_config *cfg)
if (action != CLEANUP_PREFIX_RT_NOP) {
cleanup_prefix_route(ifp, rt_expires,
action == CLEANUP_PREFIX_RT_DEL);
action == CLEANUP_PREFIX_RT_DEL, false);
}
}
@ -5983,9 +6012,9 @@ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
if (ifp->idev->cnf.forwarding)
addrconf_join_anycast(ifp);
if (!ipv6_addr_any(&ifp->peer_addr))
addrconf_prefix_route(&ifp->peer_addr, 128, 0,
ifp->idev->dev, 0, 0,
GFP_ATOMIC);
addrconf_prefix_route(&ifp->peer_addr, 128,
ifp->rt_priority, ifp->idev->dev,
0, 0, GFP_ATOMIC);
break;
case RTM_DELADDR:
if (ifp->idev->cnf.forwarding)


@ -268,7 +268,7 @@ static int seg6_do_srh(struct sk_buff *skb)
skb_mac_header_rebuild(skb);
skb_push(skb, skb->mac_len);
err = seg6_do_srh_encap(skb, tinfo->srh, NEXTHDR_NONE);
err = seg6_do_srh_encap(skb, tinfo->srh, IPPROTO_ETHERNET);
if (err)
return err;


@ -282,7 +282,7 @@ static int input_action_end_dx2(struct sk_buff *skb,
struct net_device *odev;
struct ethhdr *eth;
if (!decap_and_validate(skb, NEXTHDR_NONE))
if (!decap_and_validate(skb, IPPROTO_ETHERNET))
goto drop;
if (!pskb_may_pull(skb, ETH_HLEN))


@ -1152,7 +1152,8 @@ int mesh_nexthop_resolve(struct ieee80211_sub_if_data *sdata,
}
}
if (!(mpath->flags & MESH_PATH_RESOLVING))
if (!(mpath->flags & MESH_PATH_RESOLVING) &&
mesh_path_sel_is_hwmp(sdata))
mesh_queue_preq(mpath, PREQ_Q_F_START);
if (skb_queue_len(&mpath->frame_queue) >= MESH_FRAME_QUEUE_LEN)


@ -334,6 +334,8 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
struct mptcp_sock *msk;
unsigned int ack_size;
bool ret = false;
bool can_ack;
u64 ack_seq;
u8 tcp_fin;
if (skb) {
@ -360,9 +362,22 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
ret = true;
}
/* passive sockets msk will set the 'can_ack' after accept(), even
* if the first subflow may already have the remote key handy
*/
can_ack = true;
opts->ext_copy.use_ack = 0;
msk = mptcp_sk(subflow->conn);
if (!msk || !READ_ONCE(msk->can_ack)) {
if (likely(msk && READ_ONCE(msk->can_ack))) {
ack_seq = msk->ack_seq;
} else if (subflow->can_ack) {
mptcp_crypto_key_sha(subflow->remote_key, NULL, &ack_seq);
ack_seq++;
} else {
can_ack = false;
}
if (unlikely(!can_ack)) {
*size = ALIGN(dss_size, 4);
return ret;
}
@ -375,7 +390,7 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
dss_size += ack_size;
opts->ext_copy.data_ack = msk->ack_seq;
opts->ext_copy.data_ack = ack_seq;
opts->ext_copy.ack64 = 1;
opts->ext_copy.use_ack = 1;


@ -411,7 +411,7 @@ static void *ct_cpu_seq_next(struct seq_file *seq, void *v, loff_t *pos)
*pos = cpu + 1;
return per_cpu_ptr(net->ct.stat, cpu);
}
(*pos)++;
return NULL;
}


@ -267,7 +267,7 @@ static void *synproxy_cpu_seq_next(struct seq_file *seq, void *v, loff_t *pos)
*pos = cpu + 1;
return per_cpu_ptr(snet->stats, cpu);
}
(*pos)++;
return NULL;
}


@ -1405,6 +1405,11 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
lockdep_commit_lock_is_held(net));
if (nft_dump_stats(skb, stats))
goto nla_put_failure;
if ((chain->flags & NFT_CHAIN_HW_OFFLOAD) &&
nla_put_be32(skb, NFTA_CHAIN_FLAGS,
htonl(NFT_CHAIN_HW_OFFLOAD)))
goto nla_put_failure;
}
if (nla_put_be32(skb, NFTA_CHAIN_USE, htonl(chain->use)))
@ -6300,8 +6305,13 @@ static int nf_tables_newflowtable(struct net *net, struct sock *nlsk,
goto err4;
err = nft_register_flowtable_net_hooks(ctx.net, table, flowtable);
if (err < 0)
if (err < 0) {
list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
list_del_rcu(&hook->list);
kfree_rcu(hook, rcu);
}
goto err4;
}
err = nft_trans_flowtable_add(&ctx, NFT_MSG_NEWFLOWTABLE, flowtable);
if (err < 0)
@ -7378,13 +7388,8 @@ static void nf_tables_module_autoload(struct net *net)
list_splice_init(&net->nft.module_list, &module_list);
mutex_unlock(&net->nft.commit_mutex);
list_for_each_entry_safe(req, next, &module_list, list) {
if (req->done) {
list_del(&req->list);
kfree(req);
} else {
request_module("%s", req->module);
req->done = true;
}
request_module("%s", req->module);
req->done = true;
}
mutex_lock(&net->nft.commit_mutex);
list_splice(&module_list, &net->nft.module_list);
@ -8167,6 +8172,7 @@ static void __net_exit nf_tables_exit_net(struct net *net)
__nft_release_tables(net);
mutex_unlock(&net->nft.commit_mutex);
WARN_ON_ONCE(!list_empty(&net->nft.tables));
WARN_ON_ONCE(!list_empty(&net->nft.module_list));
}
static struct pernet_operations nf_tables_net_ops = {


@ -742,6 +742,8 @@ static const struct nla_policy nfnl_cthelper_policy[NFCTH_MAX+1] = {
[NFCTH_NAME] = { .type = NLA_NUL_STRING,
.len = NF_CT_HELPER_NAME_LEN-1 },
[NFCTH_QUEUE_NUM] = { .type = NLA_U32, },
[NFCTH_PRIV_DATA_LEN] = { .type = NLA_U32, },
[NFCTH_STATUS] = { .type = NLA_U32, },
};
static const struct nfnl_callback nfnl_cthelper_cb[NFNL_MSG_CTHELPER_MAX] = {


@ -89,6 +89,7 @@ static const struct nft_chain_type nft_chain_nat_inet = {
.name = "nat",
.type = NFT_CHAIN_T_NAT,
.family = NFPROTO_INET,
.owner = THIS_MODULE,
.hook_mask = (1 << NF_INET_PRE_ROUTING) |
(1 << NF_INET_LOCAL_IN) |
(1 << NF_INET_LOCAL_OUT) |


@ -129,6 +129,7 @@ static const struct nla_policy nft_payload_policy[NFTA_PAYLOAD_MAX + 1] = {
[NFTA_PAYLOAD_LEN] = { .type = NLA_U32 },
[NFTA_PAYLOAD_CSUM_TYPE] = { .type = NLA_U32 },
[NFTA_PAYLOAD_CSUM_OFFSET] = { .type = NLA_U32 },
[NFTA_PAYLOAD_CSUM_FLAGS] = { .type = NLA_U32 },
};
static int nft_payload_init(const struct nft_ctx *ctx,


@ -339,6 +339,8 @@ static const struct nla_policy nft_tunnel_key_policy[NFTA_TUNNEL_KEY_MAX + 1] =
[NFTA_TUNNEL_KEY_FLAGS] = { .type = NLA_U32, },
[NFTA_TUNNEL_KEY_TOS] = { .type = NLA_U8, },
[NFTA_TUNNEL_KEY_TTL] = { .type = NLA_U8, },
[NFTA_TUNNEL_KEY_SPORT] = { .type = NLA_U16, },
[NFTA_TUNNEL_KEY_DPORT] = { .type = NLA_U16, },
[NFTA_TUNNEL_KEY_OPTS] = { .type = NLA_NESTED, },
};


@ -1551,6 +1551,9 @@ static void *xt_mttg_seq_next(struct seq_file *seq, void *v, loff_t *ppos,
uint8_t nfproto = (unsigned long)PDE_DATA(file_inode(seq->file));
struct nf_mttg_trav *trav = seq->private;
if (ppos != NULL)
++(*ppos);
switch (trav->class) {
case MTTG_TRAV_INIT:
trav->class = MTTG_TRAV_NFP_UNSPEC;
@ -1576,9 +1579,6 @@ static void *xt_mttg_seq_next(struct seq_file *seq, void *v, loff_t *ppos,
default:
return NULL;
}
if (ppos != NULL)
++*ppos;
return trav;
}
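
The fix moves the *ppos increment ahead of the early returns so the seq_file position advances on every call, including the one that ends the traversal; otherwise a resumed read could replay a record. The invariant, reduced to a toy array-backed iterator (purely illustrative):

#include <stddef.h>
#include <stdio.h>

/* Return the next element or NULL at the end; the position advances
 * unconditionally, exactly once per call, as in the fixed code. */
static const int *toy_seq_next(const int *arr, size_t n, size_t *pos)
{
        ++*pos;                 /* bump before any early return */
        if (*pos >= n)
                return NULL;    /* position still moved past the end */
        return &arr[*pos];
}

int main(void)
{
        const int arr[] = { 10, 20, 30 };
        size_t pos = 0;         /* ->start() already returned arr[0] */
        const int *p;

        while ((p = toy_seq_next(arr, 3, &pos)))
                printf("%d\n", *p);     /* 20, 30; pos ends at 3 */
        return 0;
}
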


@ -492,12 +492,12 @@ static void *recent_seq_next(struct seq_file *seq, void *v, loff_t *pos)
const struct recent_entry *e = v;
const struct list_head *head = e->list.next;
(*pos)++;
while (head == &t->iphash[st->bucket]) {
if (++st->bucket >= ip_list_hash_size)
return NULL;
head = t->iphash[st->bucket].next;
}
(*pos)++;
return list_entry(head, struct recent_entry, list);
}


@ -2434,7 +2434,7 @@ void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err,
in_skb->len))
WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS,
(u8 *)extack->bad_attr -
in_skb->data));
(u8 *)nlh));
} else {
if (extack->cookie_len)
WARN_ON(nla_put(skb, NLMSGERR_ATTR_COOKIE,
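
The bad-attribute offset in NLMSGERR_ATTR_OFFS is now computed relative to the offending message header (nlh) rather than the start of the skb, which is the base userspace can actually resolve against its own buffer. Pointer-difference offsets in a contrived sketch:

#include <stdio.h>

struct hdr { int type; };

int main(void)
{
        char buf[64] = { 0 };
        struct hdr *msg = (struct hdr *)(buf + 16);     /* message not at buf[0] */
        char *bad_attr = buf + 24;                      /* offending attribute */

        /* Relative to the receive buffer: wrong once the message is offset. */
        printf("from skb data: %td\n", bad_attr - buf);                 /* 24 */
        /* Relative to the message header: what userspace can resolve. */
        printf("from nlh:      %td\n", bad_attr - (char *)msg);         /* 8 */
        return 0;
}
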


@ -181,13 +181,20 @@ exit:
void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
struct sk_buff *skb)
{
u8 gate = hdev->pipes[pipe].gate;
u8 status = NFC_HCI_ANY_OK;
struct hci_create_pipe_resp *create_info;
struct hci_delete_pipe_noti *delete_info;
struct hci_all_pipe_cleared_noti *cleared_info;
u8 gate;
pr_debug("from gate %x pipe %x cmd %x\n", gate, pipe, cmd);
pr_debug("from pipe %x cmd %x\n", pipe, cmd);
if (pipe >= NFC_HCI_MAX_PIPES) {
status = NFC_HCI_ANY_E_NOK;
goto exit;
}
gate = hdev->pipes[pipe].gate;
switch (cmd) {
case NFC_HCI_ADM_NOTIFY_PIPE_CREATED:
@ -375,8 +382,14 @@ void nfc_hci_event_received(struct nfc_hci_dev *hdev, u8 pipe, u8 event,
struct sk_buff *skb)
{
int r = 0;
u8 gate = hdev->pipes[pipe].gate;
u8 gate;
if (pipe >= NFC_HCI_MAX_PIPES) {
pr_err("Discarded event %x to invalid pipe %x\n", event, pipe);
goto exit;
}
gate = hdev->pipes[pipe].gate;
if (gate == NFC_HCI_INVALID_GATE) {
pr_err("Discarded event %x to unopened pipe %x\n", event, pipe);
goto exit;
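
Both HCI entry points now validate the pipe index before it is used to index hdev->pipes[], instead of dereferencing first and checking later. The rule, reduced to a sketch (MAX_PIPES and the gate table are placeholders):

#include <stdint.h>

#define MAX_PIPES 128   /* placeholder for NFC_HCI_MAX_PIPES */

static uint8_t gates[MAX_PIPES];

/* Look up the gate for a pipe; 0xff stands in for "invalid". */
static uint8_t pipe_to_gate(unsigned int pipe)
{
        if (pipe >= MAX_PIPES)  /* bounds check BEFORE the array access */
                return 0xff;
        return gates[pipe];
}

int main(void)
{
        return pipe_to_gate(MAX_PIPES) == 0xff ? 0 : 1; /* out of range */
}
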


@ -32,6 +32,7 @@ static const struct nla_policy nfc_genl_policy[NFC_ATTR_MAX + 1] = {
[NFC_ATTR_DEVICE_NAME] = { .type = NLA_STRING,
.len = NFC_DEVICE_NAME_MAXSIZE },
[NFC_ATTR_PROTOCOLS] = { .type = NLA_U32 },
[NFC_ATTR_TARGET_INDEX] = { .type = NLA_U32 },
[NFC_ATTR_COMM_MODE] = { .type = NLA_U8 },
[NFC_ATTR_RF_MODE] = { .type = NLA_U8 },
[NFC_ATTR_DEVICE_POWERED] = { .type = NLA_U8 },
@ -43,7 +44,10 @@ static const struct nla_policy nfc_genl_policy[NFC_ATTR_MAX + 1] = {
[NFC_ATTR_LLC_SDP] = { .type = NLA_NESTED },
[NFC_ATTR_FIRMWARE_NAME] = { .type = NLA_STRING,
.len = NFC_FIRMWARE_NAME_MAXSIZE },
[NFC_ATTR_SE_INDEX] = { .type = NLA_U32 },
[NFC_ATTR_SE_APDU] = { .type = NLA_BINARY },
[NFC_ATTR_VENDOR_ID] = { .type = NLA_U32 },
[NFC_ATTR_VENDOR_SUBCMD] = { .type = NLA_U32 },
[NFC_ATTR_VENDOR_DATA] = { .type = NLA_BINARY },
};
