Merge tag 'net-6.4-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from wireless, and netfilter.

  Selftests excluded - we have 58 patches and a diff of +442/-199, which
  isn't really small, but with the possible exception of the WiFi locking
  change these are old(ish) bugs.

  We have no known problems with v6.4.

  The selftest changes are rather large as the MPTCP folks try to apply
  Greg's guidance that selftests from torvalds/linux should be able to
  run against stable kernels.

  The last thing I should call out is the DCCP/UDP-lite deprecation
  notices. We are fairly sure those protocols are dead, but if we're
  wrong, bringing them back in won't be fun.

  Current release - regressions:

   - wifi:
      - cfg80211: fix double lock bug in reg_wdev_chan_valid()
      - iwlwifi: mvm: spin_lock_bh() to fix lockdep regression

  Current release - new code bugs:

   - handshake: remove fput() that causes use-after-free

  Previous releases - regressions:

   - sched: cls_u32: fix reference counter leak leading to overflow

   - sched: cls_api: fix lockup on flushing explicitly created chain

  Previous releases - always broken:

   - nf_tables: integrate pipapo into commit protocol

   - nf_tables: incorrect error path handling with NFT_MSG_NEWRULE, fix
     dangling pointer on failure

   - ping6: fix send to link-local addresses with VRF

   - sched: act_pedit: parse the L3 header for the L4 offset, as the skb
     may not have the offset saved

   - sched: act_ct: fix promotion of offloaded unreplied tuple

   - sched: refuse to destroy ingress and clsact Qdiscs if there are
     lockless change operations in flight

   - wifi: mac80211: fix a handful of bugs in multi-link operation

   - ipvlan: fix bound dev checking for IPv6 l3s mode

   - eth: enetc: correct the indexes of the highest and 2nd highest TCs

   - eth: ice: fix XDP memory leak when NIC is brought up and down

  Misc:

   - add deprecation notices for UDP-lite and DCCP

   - selftests: mptcp: skip tests not supported by old kernels

   - sctp: handle invalid error codes without calling BUG()"

* tag 'net-6.4-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (91 commits)
  dccp: Print deprecation notice.
  udplite: Print deprecation notice.
  octeon_ep: Add missing check for ioremap
  selftests/ptp: Fix timestamp printf format for PTP_SYS_OFFSET
  net: ethernet: stmicro: stmmac: fix possible memory leak in __stmmac_open
  net: tipc: resize nlattr array to correct size
  sfc: fix XDP queues mode with legacy IRQ
  net: macsec: fix double free of percpu stats
  net: lapbether: only support ethernet devices
  MAINTAINERS: add reviewers for SMC Sockets
  s390/ism: Fix trying to free already-freed IRQ by repeated ism_dev_exit()
  net: dsa: felix: fix taprio guard band overflow at 10Mbps with jumbo frames
  net/sched: cls_api: Fix lockup on flushing explicitly created chain
  ice: Fix ice module unload
  net/handshake: remove fput() that causes use-after-free
  selftests: forwarding: hw_stats_l3: Set addrgenmode in a separate step
  net/sched: qdisc_destroy() old ingress and clsact Qdiscs before grafting
  net/sched: Refactor qdisc_graft() for ingress and clsact Qdiscs
  net/sched: act_ct: Fix promotion of offloaded unreplied tuple
  wifi: iwlwifi: mvm: spin_lock_bh() to fix lockdep regression
  ...
Linus Torvalds, 2023-06-15 21:11:17 -07:00
commit 40f71e7cd3
76 changed files with 945 additions and 442 deletions

@ -19140,6 +19140,9 @@ SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
M: Karsten Graul <kgraul@linux.ibm.com>
M: Wenjia Zhang <wenjia@linux.ibm.com>
M: Jan Karcher <jaka@linux.ibm.com>
R: D. Wythe <alibuda@linux.alibaba.com>
R: Tony Lu <tonylu@linux.alibaba.com>
R: Wen Gu <guwen@linux.alibaba.com>
L: linux-s390@vger.kernel.org
S: Supported
F: net/smc/


@ -1263,7 +1263,7 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
/* Consider the standard Ethernet overhead of 8 octets preamble+SFD,
* 4 octets FCS, 12 octets IFG.
*/
needed_bit_time_ps = (maxlen + 24) * picos_per_byte;
needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte;
dev_dbg(ocelot->dev,
"port %d: max frame size %d needs %llu ps at speed %d\n",

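The (u64) cast above is the entire fix: with two 32-bit operands the guard-band arithmetic wraps once jumbo frames meet a 10 Mbps link. A minimal userspace sketch of that overflow (an illustration, not driver code; it presumes maxlen and picos_per_byte are both 32-bit, which is what the added cast implies):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t maxlen = 9000;            /* jumbo frame, in bytes */
	uint32_t picos_per_byte = 800000;  /* 8 bits / 10 Mbps = 800 ns per byte */

	/* 32-bit multiply wraps: 9024 * 800000 = 7,219,200,000 > UINT32_MAX */
	uint32_t wrapped = (maxlen + 24) * picos_per_byte;
	/* widening one operand first keeps the full value, as the fix does */
	uint64_t kept = (uint64_t)(maxlen + 24) * picos_per_byte;

	printf("wrapped=%u kept=%llu\n", wrapped, (unsigned long long)kept);
	return 0;
}

At 10 Mbps a single byte costs 800,000 ps, so any frame larger than roughly 5.3 kB already exceeds what a 32-bit product can hold.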

@ -14294,11 +14294,16 @@ static void bnx2x_io_resume(struct pci_dev *pdev)
bp->fw_seq = SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) &
DRV_MSG_SEQ_NUMBER_MASK;
if (netif_running(dev))
bnx2x_nic_load(bp, LOAD_NORMAL);
if (netif_running(dev)) {
if (bnx2x_nic_load(bp, LOAD_NORMAL)) {
netdev_err(bp->dev, "Error during driver initialization, try unloading/reloading the driver\n");
goto done;
}
}
netif_device_attach(dev);
done:
rtnl_unlock();
}


@ -181,8 +181,8 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
int bw_sum = 0;
u8 bw;
prio_top = netdev_get_prio_tc_map(ndev, tc_nums - 1);
prio_next = netdev_get_prio_tc_map(ndev, tc_nums - 2);
prio_top = tc_nums - 1;
prio_next = tc_nums - 2;
/* Support highest prio and second prio tc in cbs mode */
if (tc != prio_top && tc != prio_next)


@ -525,7 +525,7 @@ void iavf_set_ethtool_ops(struct net_device *netdev);
void iavf_update_stats(struct iavf_adapter *adapter);
void iavf_reset_interrupt_capability(struct iavf_adapter *adapter);
int iavf_init_interrupt_scheme(struct iavf_adapter *adapter);
void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask);
void iavf_irq_enable_queues(struct iavf_adapter *adapter);
void iavf_free_all_tx_resources(struct iavf_adapter *adapter);
void iavf_free_all_rx_resources(struct iavf_adapter *adapter);


@ -359,21 +359,18 @@ static void iavf_irq_disable(struct iavf_adapter *adapter)
}
/**
* iavf_irq_enable_queues - Enable interrupt for specified queues
* iavf_irq_enable_queues - Enable interrupt for all queues
* @adapter: board private structure
* @mask: bitmap of queues to enable
**/
void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask)
void iavf_irq_enable_queues(struct iavf_adapter *adapter)
{
struct iavf_hw *hw = &adapter->hw;
int i;
for (i = 1; i < adapter->num_msix_vectors; i++) {
if (mask & BIT(i - 1)) {
wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
}
wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
}
}
@ -387,7 +384,7 @@ void iavf_irq_enable(struct iavf_adapter *adapter, bool flush)
struct iavf_hw *hw = &adapter->hw;
iavf_misc_irq_enable(adapter);
iavf_irq_enable_queues(adapter, ~0);
iavf_irq_enable_queues(adapter);
if (flush)
iavf_flush(hw);


@ -40,7 +40,7 @@
#define IAVF_VFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTL01_INTENA_SHIFT)
#define IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3
#define IAVF_VFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT)
#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...63 */ /* Reset: VFR */
#define IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT 0
#define IAVF_VFINT_DYN_CTLN1_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT)
#define IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2


@ -96,12 +96,7 @@ static void ice_gnss_read(struct kthread_work *work)
int err = 0;
pf = gnss->back;
if (!pf) {
err = -EFAULT;
goto exit;
}
if (!test_bit(ICE_FLAG_GNSS, pf->flags))
if (!pf || !test_bit(ICE_FLAG_GNSS, pf->flags))
return;
hw = &pf->hw;
@ -159,7 +154,6 @@ free_buf:
free_page((unsigned long)buf);
requeue:
kthread_queue_delayed_work(gnss->kworker, &gnss->read_work, delay);
exit:
if (err)
dev_dbg(ice_pf_to_dev(pf), "GNSS failed to read err=%d\n", err);
}


@ -4802,9 +4802,13 @@ err_init_pf:
static void ice_deinit_dev(struct ice_pf *pf)
{
ice_free_irq_msix_misc(pf);
ice_clear_interrupt_scheme(pf);
ice_deinit_pf(pf);
ice_deinit_hw(&pf->hw);
/* Service task is already stopped, so call reset directly. */
ice_reset(&pf->hw, ICE_RESET_PFR);
pci_wait_for_pending_transaction(pf->pdev);
ice_clear_interrupt_scheme(pf);
}
static void ice_init_features(struct ice_pf *pf)
@ -5094,10 +5098,6 @@ int ice_load(struct ice_pf *pf)
struct ice_vsi *vsi;
int err;
err = ice_reset(&pf->hw, ICE_RESET_PFR);
if (err)
return err;
err = ice_init_dev(pf);
if (err)
return err;
@ -5354,12 +5354,6 @@ static void ice_remove(struct pci_dev *pdev)
ice_setup_mc_magic_wake(pf);
ice_set_wake(pf);
/* Issue a PFR as part of the prescribed driver unload flow. Do not
* do it via ice_schedule_reset() since there is no need to rebuild
* and the service task is already stopped.
*/
ice_reset(&pf->hw, ICE_RESET_PFR);
pci_wait_for_pending_transaction(pdev);
pci_disable_device(pdev);
}
@ -7056,6 +7050,10 @@ int ice_down(struct ice_vsi *vsi)
ice_for_each_txq(vsi, i)
ice_clean_tx_ring(vsi->tx_rings[i]);
if (ice_is_xdp_ena_vsi(vsi))
ice_for_each_xdp_txq(vsi, i)
ice_clean_tx_ring(vsi->xdp_rings[i]);
ice_for_each_rxq(vsi, i)
ice_clean_rx_ring(vsi->rx_rings[i]);


@ -822,6 +822,8 @@ static int igb_set_eeprom(struct net_device *netdev,
*/
ret_val = hw->nvm.ops.read(hw, last_word, 1,
&eeprom_buff[last_word - first_word]);
if (ret_val)
goto out;
}
/* Device's eeprom is always little-endian, word addressable */
@ -841,6 +843,7 @@ static int igb_set_eeprom(struct net_device *netdev,
hw->nvm.ops.update(hw);
igb_set_fw_version(adapter);
out:
kfree(eeprom_buff);
return ret_val;
}
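The added error check above restores the usual rule for a read-modify-write: if the read half fails, skip the write-back entirely but still release the scratch buffer. A generic sketch of that shape (plain C with a made-up read_word() helper, not the igb code):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* stand-in for a device word read; made to fail for odd offsets */
static int read_word(unsigned int off, unsigned short *out)
{
	if (off & 1)
		return -EIO;
	*out = (unsigned short)(0xA000 | off);
	return 0;
}

int main(void)
{
	unsigned short *buf = malloc(4 * sizeof(*buf));
	int ret;

	if (!buf)
		return 1;

	ret = read_word(3, &buf[3]);	/* fails here */
	if (ret)
		goto out;		/* never write back a half-read buffer */

	buf[3] |= 0x1;
	printf("would write back 0x%04x\n", buf[3]);
out:
	free(buf);			/* scratch buffer is freed on every path */
	return ret ? 1 : 0;
}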


@ -6947,6 +6947,7 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
struct e1000_hw *hw = &adapter->hw;
struct ptp_clock_event event;
struct timespec64 ts;
unsigned long flags;
if (pin < 0 || pin >= IGB_N_SDP)
return;
@ -6954,9 +6955,12 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
if (hw->mac.type == e1000_82580 ||
hw->mac.type == e1000_i354 ||
hw->mac.type == e1000_i350) {
s64 ns = rd32(auxstmpl);
u64 ns = rd32(auxstmpl);
ns += ((s64)(rd32(auxstmph) & 0xFF)) << 32;
ns += ((u64)(rd32(auxstmph) & 0xFF)) << 32;
spin_lock_irqsave(&adapter->tmreg_lock, flags);
ns = timecounter_cyc2time(&adapter->tc, ns);
spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
ts = ns_to_timespec64(ns);
} else {
ts.tv_nsec = rd32(auxstmpl);


@ -254,6 +254,13 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
/* reset BQL for queue */
netdev_tx_reset_queue(txring_txq(tx_ring));
/* Zero out the buffer ring */
memset(tx_ring->tx_buffer_info, 0,
sizeof(*tx_ring->tx_buffer_info) * tx_ring->count);
/* Zero out the descriptor ring */
memset(tx_ring->desc, 0, tx_ring->size);
/* reset next_to_use and next_to_clean */
tx_ring->next_to_use = 0;
tx_ring->next_to_clean = 0;
@ -267,7 +274,7 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
*/
void igc_free_tx_resources(struct igc_ring *tx_ring)
{
igc_clean_tx_ring(tx_ring);
igc_disable_tx_ring(tx_ring);
vfree(tx_ring->tx_buffer_info);
tx_ring->tx_buffer_info = NULL;
@ -6723,6 +6730,9 @@ static void igc_remove(struct pci_dev *pdev)
igc_ptp_stop(adapter);
pci_disable_ptm(pdev);
pci_clear_master(pdev);
set_bit(__IGC_DOWN, &adapter->state);
del_timer_sync(&adapter->watchdog_timer);


@ -981,6 +981,9 @@ int octep_device_setup(struct octep_device *oct)
oct->mmio[i].hw_addr =
ioremap(pci_resource_start(oct->pdev, i * 2),
pci_resource_len(oct->pdev, i * 2));
if (!oct->mmio[i].hw_addr)
goto unmap_prev;
oct->mmio[i].mapped = 1;
}
@ -1015,7 +1018,9 @@ int octep_device_setup(struct octep_device *oct)
return 0;
unsupported_dev:
for (i = 0; i < OCTEP_MMIO_REGIONS; i++)
i = OCTEP_MMIO_REGIONS;
unmap_prev:
while (i--)
iounmap(oct->mmio[i].hw_addr);
kfree(oct->conf);
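The new unmap_prev label is the usual probe-time unwind: release exactly the regions mapped before the failure, and all of them when a later step fails. A generic sketch of the while (i--) pattern, with malloc()/free() standing in for ioremap()/iounmap():

#include <stdio.h>
#include <stdlib.h>

#define NREGIONS 4

int main(void)
{
	void *region[NREGIONS];
	int i;

	for (i = 0; i < NREGIONS; i++) {
		region[i] = malloc(64);		/* stands in for ioremap() */
		if (!region[i])
			goto unmap_prev;	/* only regions 0..i-1 exist */
	}

	i = NREGIONS;	/* pretend a later setup step failed: unwind everything */
unmap_prev:
	while (i--)
		free(region[i]);		/* stands in for iounmap() */

	printf("unwound cleanly\n");
	return 0;
}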


@ -1878,7 +1878,8 @@ static int nix_check_txschq_alloc_req(struct rvu *rvu, int lvl, u16 pcifunc,
free_cnt = rvu_rsrc_free_count(&txsch->schq);
}
if (free_cnt < req_schq || req_schq > MAX_TXSCHQ_PER_FUNC)
if (free_cnt < req_schq || req->schq[lvl] > MAX_TXSCHQ_PER_FUNC ||
req->schq_contig[lvl] > MAX_TXSCHQ_PER_FUNC)
return NIX_AF_ERR_TLX_ALLOC_FAIL;
/* If contiguous queues are needed, check for availability */
@ -4080,10 +4081,6 @@ int rvu_mbox_handler_nix_set_rx_cfg(struct rvu *rvu, struct nix_rx_cfg *req,
static u64 rvu_get_lbk_link_credits(struct rvu *rvu, u16 lbk_max_frs)
{
/* CN10k supports 72KB FIFO size and max packet size of 64k */
if (rvu->hw->lbk_bufsize == 0x12000)
return (rvu->hw->lbk_bufsize - lbk_max_frs) / 16;
return 1600; /* 16 * max LBK datarate = 16 * 100Gbps */
}


@ -1164,10 +1164,8 @@ static u16 __rvu_npc_exact_cmd_rules_cnt_update(struct rvu *rvu, int drop_mcam_i
{
struct npc_exact_table *table;
u16 *cnt, old_cnt;
bool promisc;
table = rvu->hw->table;
promisc = table->promisc_mode[drop_mcam_idx];
cnt = &table->cnt_cmd_rules[drop_mcam_idx];
old_cnt = *cnt;
@ -1179,16 +1177,13 @@ static u16 __rvu_npc_exact_cmd_rules_cnt_update(struct rvu *rvu, int drop_mcam_i
*enable_or_disable_cam = false;
if (promisc)
goto done;
/* If all rules are deleted and not already in promisc mode; disable cam */
/* If all rules are deleted, disable cam */
if (!*cnt && val < 0) {
*enable_or_disable_cam = true;
goto done;
}
/* If rule got added and not already in promisc mode; enable cam */
/* If rule got added, enable cam */
if (!old_cnt && val > 0) {
*enable_or_disable_cam = true;
goto done;
@ -1443,7 +1438,6 @@ int rvu_npc_exact_promisc_disable(struct rvu *rvu, u16 pcifunc)
u32 drop_mcam_idx;
bool *promisc;
bool rc;
u32 cnt;
table = rvu->hw->table;
@ -1466,17 +1460,8 @@ int rvu_npc_exact_promisc_disable(struct rvu *rvu, u16 pcifunc)
return LMAC_AF_ERR_INVALID_PARAM;
}
*promisc = false;
cnt = __rvu_npc_exact_cmd_rules_cnt_update(rvu, drop_mcam_idx, 0, NULL);
mutex_unlock(&table->lock);
/* If no dmac filter entries configured, disable drop rule */
if (!cnt)
rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, false);
else
rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, !*promisc);
dev_dbg(rvu->dev, "%s: disabled promisc mode (cgx=%d lmac=%d, cnt=%d)\n",
__func__, cgx_id, lmac_id, cnt);
return 0;
}
@ -1494,7 +1479,6 @@ int rvu_npc_exact_promisc_enable(struct rvu *rvu, u16 pcifunc)
u32 drop_mcam_idx;
bool *promisc;
bool rc;
u32 cnt;
table = rvu->hw->table;
@ -1517,17 +1501,8 @@ int rvu_npc_exact_promisc_enable(struct rvu *rvu, u16 pcifunc)
return LMAC_AF_ERR_INVALID_PARAM;
}
*promisc = true;
cnt = __rvu_npc_exact_cmd_rules_cnt_update(rvu, drop_mcam_idx, 0, NULL);
mutex_unlock(&table->lock);
/* If no dmac filter entries configured, disable drop rule */
if (!cnt)
rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, false);
else
rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, !*promisc);
dev_dbg(rvu->dev, "%s: Enabled promisc mode (cgx=%d lmac=%d cnt=%d)\n",
__func__, cgx_id, lmac_id, cnt);
return 0;
}


@ -347,17 +347,6 @@ out:
return -ENOMEM;
}
static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv)
{
struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
gq->ring_size = TS_RING_SIZE;
gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev,
sizeof(struct rswitch_ts_desc) *
(gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL);
return !gq->ts_ring ? -ENOMEM : 0;
}
static void rswitch_desc_set_dptr(struct rswitch_desc *desc, dma_addr_t addr)
{
desc->dptrl = cpu_to_le32(lower_32_bits(addr));
@ -533,6 +522,28 @@ static void rswitch_gwca_linkfix_free(struct rswitch_private *priv)
gwca->linkfix_table = NULL;
}
static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv)
{
struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
struct rswitch_ts_desc *desc;
gq->ring_size = TS_RING_SIZE;
gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev,
sizeof(struct rswitch_ts_desc) *
(gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL);
if (!gq->ts_ring)
return -ENOMEM;
rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE);
desc = &gq->ts_ring[gq->ring_size];
desc->desc.die_dt = DT_LINKFIX;
rswitch_desc_set_dptr(&desc->desc, gq->ring_dma);
INIT_LIST_HEAD(&priv->gwca.ts_info_list);
return 0;
}
static struct rswitch_gwca_queue *rswitch_gwca_get(struct rswitch_private *priv)
{
struct rswitch_gwca_queue *gq;
@ -1780,9 +1791,6 @@ static int rswitch_init(struct rswitch_private *priv)
if (err < 0)
goto err_ts_queue_alloc;
rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE);
INIT_LIST_HEAD(&priv->gwca.ts_info_list);
for (i = 0; i < RSWITCH_NUM_PORTS; i++) {
err = rswitch_device_alloc(priv, i);
if (err < 0) {


@ -301,6 +301,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
efx->tx_channel_offset = 0;
efx->n_xdp_channels = 0;
efx->xdp_channel_offset = efx->n_channels;
efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
rc = pci_enable_msi(efx->pci_dev);
if (rc == 0) {
efx_get_channel(efx, 0)->irq = efx->pci_dev->irq;
@ -322,6 +323,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
efx->tx_channel_offset = efx_separate_tx_channels ? 1 : 0;
efx->n_xdp_channels = 0;
efx->xdp_channel_offset = efx->n_channels;
efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
efx->legacy_irq = efx->pci_dev->irq;
}


@ -302,6 +302,7 @@ int efx_siena_probe_interrupts(struct efx_nic *efx)
efx->tx_channel_offset = 0;
efx->n_xdp_channels = 0;
efx->xdp_channel_offset = efx->n_channels;
efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
rc = pci_enable_msi(efx->pci_dev);
if (rc == 0) {
efx_get_channel(efx, 0)->irq = efx->pci_dev->irq;
@ -323,6 +324,7 @@ int efx_siena_probe_interrupts(struct efx_nic *efx)
efx->tx_channel_offset = efx_siena_separate_tx_channels ? 1 : 0;
efx->n_xdp_channels = 0;
efx->xdp_channel_offset = efx->n_channels;
efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
efx->legacy_irq = efx->pci_dev->irq;
}


@ -3873,7 +3873,6 @@ irq_error:
stmmac_hw_teardown(dev);
init_error:
free_dma_desc_resources(priv, &priv->dma_conf);
phylink_disconnect_phy(priv->phylink);
init_phy_error:
pm_runtime_put(priv->device);
@ -3891,6 +3890,9 @@ static int stmmac_open(struct net_device *dev)
return PTR_ERR(dma_conf);
ret = __stmmac_open(dev, dma_conf);
if (ret)
free_dma_desc_resources(priv, dma_conf);
kfree(dma_conf);
return ret;
}
@ -5633,12 +5635,15 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
stmmac_release(dev);
ret = __stmmac_open(dev, dma_conf);
kfree(dma_conf);
if (ret) {
free_dma_desc_resources(priv, dma_conf);
kfree(dma_conf);
netdev_err(priv->dev, "failed reopening the interface after MTU change\n");
return ret;
}
kfree(dma_conf);
stmmac_set_rx_mode(dev);
}
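The stmmac change moves the freeing of the DMA descriptor resources out of __stmmac_open() and into the callers that allocated dma_conf, so the structure has one owner and is released exactly once on the error path. A rough sketch of that ownership rule (generic C with hypothetical names, not the driver code):

#include <stdio.h>
#include <stdlib.h>

struct dma_conf { int nrings; };

/* callee: uses the caller-provided config but never frees it */
static int open_with_conf(const struct dma_conf *conf)
{
	if (conf->nrings == 0)
		return -1;	/* report failure; ownership stays with the caller */
	printf("opened with %d rings\n", conf->nrings);
	return 0;
}

int main(void)
{
	struct dma_conf *conf = malloc(sizeof(*conf));
	int ret;

	if (!conf)
		return 1;
	conf->nrings = 0;	/* force the failure path */

	ret = open_with_conf(conf);
	if (ret)
		fprintf(stderr, "open failed, caller cleans up\n");

	free(conf);		/* one owner, one free, on every path */
	return ret ? 1 : 0;
}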


@ -2068,7 +2068,7 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
/* Initialize the Serdes PHY for the port */
ret = am65_cpsw_init_serdes_phy(dev, port_np, port);
if (ret)
return ret;
goto of_node_put;
port->slave.mac_only =
of_property_read_bool(port_np, "ti,mac-only");


@ -102,6 +102,10 @@ static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
skb->dev = addr->master->dev;
skb->skb_iif = skb->dev->ifindex;
#if IS_ENABLED(CONFIG_IPV6)
if (addr->atype == IPVL_IPV6)
IP6CB(skb)->iif = skb->dev->ifindex;
#endif
len = skb->len + ETH_HLEN;
ipvlan_count_rx(addr->master, len, true, false);
out:


@ -3997,17 +3997,15 @@ static int macsec_add_dev(struct net_device *dev, sci_t sci, u8 icv_len)
return -ENOMEM;
secy->tx_sc.stats = netdev_alloc_pcpu_stats(struct pcpu_tx_sc_stats);
if (!secy->tx_sc.stats) {
free_percpu(macsec->stats);
if (!secy->tx_sc.stats)
return -ENOMEM;
}
secy->tx_sc.md_dst = metadata_dst_alloc(0, METADATA_MACSEC, GFP_KERNEL);
if (!secy->tx_sc.md_dst) {
free_percpu(secy->tx_sc.stats);
free_percpu(macsec->stats);
if (!secy->tx_sc.md_dst)
/* macsec and secy percpu stats will be freed when unregistering
* net_device in macsec_free_netdev()
*/
return -ENOMEM;
}
if (sci == MACSEC_UNDEF_SCI)
sci = dev_to_sci(dev, MACSEC_PORT_ES);


@ -188,6 +188,7 @@ static int phylink_interface_max_speed(phy_interface_t interface)
case PHY_INTERFACE_MODE_RGMII_ID:
case PHY_INTERFACE_MODE_RGMII:
case PHY_INTERFACE_MODE_QSGMII:
case PHY_INTERFACE_MODE_QUSGMII:
case PHY_INTERFACE_MODE_SGMII:
case PHY_INTERFACE_MODE_GMII:
return SPEED_1000;
@ -204,7 +205,6 @@ static int phylink_interface_max_speed(phy_interface_t interface)
case PHY_INTERFACE_MODE_10GBASER:
case PHY_INTERFACE_MODE_10GKR:
case PHY_INTERFACE_MODE_USXGMII:
case PHY_INTERFACE_MODE_QUSGMII:
return SPEED_10000;
case PHY_INTERFACE_MODE_25GBASER:
@ -3298,6 +3298,41 @@ void phylink_decode_usxgmii_word(struct phylink_link_state *state,
}
EXPORT_SYMBOL_GPL(phylink_decode_usxgmii_word);
/**
* phylink_decode_usgmii_word() - decode the USGMII word from a MAC PCS
* @state: a pointer to a struct phylink_link_state.
* @lpa: a 16 bit value which stores the USGMII auto-negotiation word
*
* Helper for MAC PCS supporting the USGMII protocol and the auto-negotiation
* code word. Decode the USGMII code word and populate the corresponding fields
* (speed, duplex) into the phylink_link_state structure. The structure for this
* word is the same as the USXGMII word, except it only supports speeds up to
* 1Gbps.
*/
static void phylink_decode_usgmii_word(struct phylink_link_state *state,
uint16_t lpa)
{
switch (lpa & MDIO_USXGMII_SPD_MASK) {
case MDIO_USXGMII_10:
state->speed = SPEED_10;
break;
case MDIO_USXGMII_100:
state->speed = SPEED_100;
break;
case MDIO_USXGMII_1000:
state->speed = SPEED_1000;
break;
default:
state->link = false;
return;
}
if (lpa & MDIO_USXGMII_FULL_DUPLEX)
state->duplex = DUPLEX_FULL;
else
state->duplex = DUPLEX_HALF;
}
/**
* phylink_mii_c22_pcs_decode_state() - Decode MAC PCS state from MII registers
* @state: a pointer to a &struct phylink_link_state.
@ -3335,9 +3370,11 @@ void phylink_mii_c22_pcs_decode_state(struct phylink_link_state *state,
case PHY_INTERFACE_MODE_SGMII:
case PHY_INTERFACE_MODE_QSGMII:
case PHY_INTERFACE_MODE_QUSGMII:
phylink_decode_sgmii_word(state, lpa);
break;
case PHY_INTERFACE_MODE_QUSGMII:
phylink_decode_usgmii_word(state, lpa);
break;
default:
state->link = false;


@ -1220,7 +1220,9 @@ static const struct usb_device_id products[] = {
{QMI_FIXED_INTF(0x05c6, 0x9080, 8)},
{QMI_FIXED_INTF(0x05c6, 0x9083, 3)},
{QMI_FIXED_INTF(0x05c6, 0x9084, 4)},
{QMI_QUIRK_SET_DTR(0x05c6, 0x9091, 2)}, /* Compal RXM-G1 */
{QMI_FIXED_INTF(0x05c6, 0x90b2, 3)}, /* ublox R410M */
{QMI_QUIRK_SET_DTR(0x05c6, 0x90db, 2)}, /* Compal RXM-G1 */
{QMI_FIXED_INTF(0x05c6, 0x920d, 0)},
{QMI_FIXED_INTF(0x05c6, 0x920d, 5)},
{QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */


@ -384,6 +384,9 @@ static int lapbeth_new_device(struct net_device *dev)
ASSERT_RTNL();
if (dev->type != ARPHRD_ETHER)
return -EINVAL;
ndev = alloc_netdev(sizeof(*lapbeth), "lapb%d", NET_NAME_UNKNOWN,
lapbeth_setup);
if (!ndev)


@ -2692,7 +2692,7 @@ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
lq_sta = mvm_sta;
spin_lock(&lq_sta->pers.lock);
spin_lock_bh(&lq_sta->pers.lock);
iwl_mvm_hwrate_to_tx_rate_v1(lq_sta->last_rate_n_flags,
info->band, &info->control.rates[0]);
info->control.rates[0].count = 1;
@ -2707,7 +2707,7 @@ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
iwl_mvm_hwrate_to_tx_rate_v1(last_ucode_rate, info->band,
&txrc->reported_rate);
}
spin_unlock(&lq_sta->pers.lock);
spin_unlock_bh(&lq_sta->pers.lock);
}
static void *rs_drv_alloc_sta(void *mvm_rate, struct ieee80211_sta *sta,
@ -3264,11 +3264,11 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
/* If it's locked we are in middle of init flow
* just wait for next tx status to update the lq_sta data
*/
if (!spin_trylock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock))
if (!spin_trylock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock))
return;
__iwl_mvm_rs_tx_status(mvm, sta, tid, info, ndp);
spin_unlock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
spin_unlock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
}
#ifdef CONFIG_MAC80211_DEBUGFS
@ -4117,9 +4117,9 @@ void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm,
} else {
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
spin_lock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
spin_lock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
rs_drv_rate_init(mvm, sta, band);
spin_unlock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
spin_unlock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
}
}


@ -771,14 +771,6 @@ static int __init ism_init(void)
static void __exit ism_exit(void)
{
struct ism_dev *ism;
mutex_lock(&ism_dev_list.mutex);
list_for_each_entry(ism, &ism_dev_list.list, list) {
ism_dev_exit(ism);
}
mutex_unlock(&ism_dev_list.mutex);
pci_unregister_driver(&ism_driver);
debug_unregister(ism_debug_info);
}


@ -268,7 +268,7 @@ int flow_offload_route_init(struct flow_offload *flow,
int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow);
void flow_offload_refresh(struct nf_flowtable *flow_table,
struct flow_offload *flow);
struct flow_offload *flow, bool force);
struct flow_offload_tuple_rhash *flow_offload_lookup(struct nf_flowtable *flow_table,
struct flow_offload_tuple *tuple);


@ -462,7 +462,8 @@ struct nft_set_ops {
const struct nft_set *set,
const struct nft_set_elem *elem,
unsigned int flags);
void (*commit)(const struct nft_set *set);
void (*abort)(const struct nft_set *set);
u64 (*privsize)(const struct nlattr * const nla[],
const struct nft_set_desc *desc);
bool (*estimate)(const struct nft_set_desc *desc,
@ -557,6 +558,7 @@ struct nft_set {
u16 policy;
u16 udlen;
unsigned char *udata;
struct list_head pending_update;
/* runtime data below here */
const struct nft_set_ops *ops ____cacheline_aligned;
u16 flags:14,


@ -137,6 +137,13 @@ static inline void qdisc_refcount_inc(struct Qdisc *qdisc)
refcount_inc(&qdisc->refcnt);
}
static inline bool qdisc_refcount_dec_if_one(struct Qdisc *qdisc)
{
if (qdisc->flags & TCQ_F_BUILTIN)
return true;
return refcount_dec_if_one(&qdisc->refcnt);
}
/* Intended to be used by unlocked users, when concurrent qdisc release is
* possible.
*/
@ -652,6 +659,7 @@ void dev_deactivate_many(struct list_head *head);
struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue,
struct Qdisc *qdisc);
void qdisc_reset(struct Qdisc *qdisc);
void qdisc_destroy(struct Qdisc *qdisc);
void qdisc_put(struct Qdisc *qdisc);
void qdisc_put_unlocked(struct Qdisc *qdisc);
void qdisc_tree_reduce_backlog(struct Qdisc *qdisc, int n, int len);
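qdisc_refcount_dec_if_one() mirrors refcount_dec_if_one(): the qdisc may only be torn down when the caller holds the sole reference, so a lockless filter request still in flight (which took its own reference) makes the graft bail out with EBUSY instead of racing the destruction. A rough userspace sketch of that semantic, using C11 atomics rather than the kernel's refcount_t:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* succeed only if the count is exactly 1, i.e. nobody else holds a reference */
static bool dec_if_one(atomic_int *refs)
{
	int expected = 1;
	return atomic_compare_exchange_strong(refs, &expected, 0);
}

int main(void)
{
	atomic_int refs = 2;	/* an in-flight filter request holds the extra one */

	if (!dec_if_one(&refs))
		printf("busy (%d refs), refuse to destroy\n", atomic_load(&refs));

	atomic_store(&refs, 1);	/* the concurrent user is gone */
	if (dec_if_one(&refs))
		printf("last reference dropped, safe to destroy\n");
	return 0;
}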


@ -783,7 +783,7 @@ enum {
/* add new constants above here */
__ETHTOOL_A_STATS_GRP_CNT,
ETHTOOL_A_STATS_GRP_MAX = (__ETHTOOL_A_STATS_CNT - 1)
ETHTOOL_A_STATS_GRP_MAX = (__ETHTOOL_A_STATS_GRP_CNT - 1)
};
enum {
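The ETHTOOL_A_STATS_GRP_MAX fix above is a one-token change: the MAX was being derived from the counter of a different enum. A standalone sketch (made-up attribute names) of how the hidden __..._CNT / ..._MAX pair is meant to line up:

#include <stdio.h>

enum {
	STATS_GRP_UNSPEC,
	STATS_GRP_PAD,
	STATS_GRP_ID,

	/* add new constants above here */
	__STATS_GRP_CNT,
	STATS_GRP_MAX = (__STATS_GRP_CNT - 1)	/* must use the same counter */
};

int main(void)
{
	printf("attribute count %d, max attribute id %d\n",
	       __STATS_GRP_CNT, STATS_GRP_MAX);
	return 0;
}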


@ -191,6 +191,9 @@ int dccp_init_sock(struct sock *sk, const __u8 ctl_sock_initialized)
struct dccp_sock *dp = dccp_sk(sk);
struct inet_connection_sock *icsk = inet_csk(sk);
pr_warn_once("DCCP is deprecated and scheduled to be removed in 2025, "
"please contact the netdev mailing list\n");
icsk->icsk_rto = DCCP_TIMEOUT_INIT;
icsk->icsk_syn_retries = sysctl_dccp_request_retries;
sk->sk_state = DCCP_CLOSED;


@ -31,7 +31,6 @@ struct handshake_req {
struct list_head hr_list;
struct rhash_head hr_rhash;
unsigned long hr_flags;
struct file *hr_file;
const struct handshake_proto *hr_proto;
struct sock *hr_sk;
void (*hr_odestruct)(struct sock *sk);


@ -239,7 +239,6 @@ int handshake_req_submit(struct socket *sock, struct handshake_req *req,
}
req->hr_odestruct = req->hr_sk->sk_destruct;
req->hr_sk->sk_destruct = handshake_sk_destruct;
req->hr_file = sock->file;
ret = -EOPNOTSUPP;
net = sock_net(req->hr_sk);
@ -335,9 +334,6 @@ bool handshake_req_cancel(struct sock *sk)
return false;
}
/* Request accepted and waiting for DONE */
fput(req->hr_file);
out_true:
trace_handshake_cancel(net, req, sk);


@ -22,6 +22,8 @@ static int udplite_sk_init(struct sock *sk)
{
udp_init_sock(sk);
udp_sk(sk)->pcflag = UDPLITE_BIT;
pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, "
"please contact the netdev mailing list\n");
return 0;
}


@ -114,7 +114,8 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
addr_type = ipv6_addr_type(daddr);
if ((__ipv6_addr_needs_scope_id(addr_type) && !oif) ||
(addr_type & IPV6_ADDR_MAPPED) ||
(oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if))
(oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if &&
l3mdev_master_ifindex_by_index(sock_net(sk), oif) != sk->sk_bound_dev_if))
return -EINVAL;
ipcm6_init_sk(&ipc6, np);


@ -8,6 +8,8 @@
* Changes:
* Fixes:
*/
#define pr_fmt(fmt) "UDPLite6: " fmt
#include <linux/export.h>
#include <linux/proc_fs.h>
#include "udp_impl.h"
@ -16,6 +18,8 @@ static int udplitev6_sk_init(struct sock *sk)
{
udpv6_init_sock(sk);
udp_sk(sk)->pcflag = UDPLITE_BIT;
pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, "
"please contact the netdev mailing list\n");
return 0;
}


@ -4865,11 +4865,16 @@ static int ieee80211_add_intf_link(struct wiphy *wiphy,
unsigned int link_id)
{
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
int res;
if (wdev->use_4addr)
return -EOPNOTSUPP;
return ieee80211_vif_set_links(sdata, wdev->valid_links);
mutex_lock(&sdata->local->mtx);
res = ieee80211_vif_set_links(sdata, wdev->valid_links);
mutex_unlock(&sdata->local->mtx);
return res;
}
static void ieee80211_del_intf_link(struct wiphy *wiphy,
@ -4878,7 +4883,9 @@ static void ieee80211_del_intf_link(struct wiphy *wiphy,
{
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
mutex_lock(&sdata->local->mtx);
ieee80211_vif_set_links(sdata, wdev->valid_links);
mutex_unlock(&sdata->local->mtx);
}
static int sta_add_link_station(struct ieee80211_local *local,


@ -2312,7 +2312,7 @@ ieee802_11_parse_elems(const u8 *start, size_t len, bool action,
return ieee802_11_parse_elems_crc(start, len, action, 0, 0, bss);
}
void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos);
void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id);
extern const int ieee802_1d_to_ac[8];


@ -2,7 +2,7 @@
/*
* MLO link handling
*
* Copyright (C) 2022 Intel Corporation
* Copyright (C) 2022-2023 Intel Corporation
*/
#include <linux/slab.h>
#include <linux/kernel.h>
@ -409,6 +409,7 @@ static int _ieee80211_set_active_links(struct ieee80211_sub_if_data *sdata,
IEEE80211_CHANCTX_SHARED);
WARN_ON_ONCE(ret);
ieee80211_mgd_set_link_qos_params(link);
ieee80211_link_info_change_notify(sdata, link,
BSS_CHANGED_ERP_CTS_PROT |
BSS_CHANGED_ERP_PREAMBLE |
@ -423,7 +424,6 @@ static int _ieee80211_set_active_links(struct ieee80211_sub_if_data *sdata,
BSS_CHANGED_TWT |
BSS_CHANGED_HE_OBSS_PD |
BSS_CHANGED_HE_BSS_COLOR);
ieee80211_mgd_set_link_qos_params(link);
}
old_active = sdata->vif.active_links;


@ -1372,10 +1372,11 @@ static void ieee80211_assoc_add_ml_elem(struct ieee80211_sub_if_data *sdata,
ieee80211_add_non_inheritance_elem(skb, outer_present_elems,
link_present_elems);
ieee80211_fragment_element(skb, subelem_len);
ieee80211_fragment_element(skb, subelem_len,
IEEE80211_MLE_SUBELEM_FRAGMENT);
}
ieee80211_fragment_element(skb, ml_elem_len);
ieee80211_fragment_element(skb, ml_elem_len, WLAN_EID_FRAGMENT);
}
static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)


@ -4445,7 +4445,7 @@ static void ieee80211_mlo_multicast_tx(struct net_device *dev,
struct sk_buff *skb)
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
unsigned long links = sdata->vif.valid_links;
unsigned long links = sdata->vif.active_links;
unsigned int link;
u32 ctrl_flags = IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX;
@ -6040,7 +6040,7 @@ void __ieee80211_tx_skb_tid_band(struct ieee80211_sub_if_data *sdata,
rcu_read_unlock();
if (WARN_ON_ONCE(link == ARRAY_SIZE(sdata->vif.link_conf)))
link = ffs(sdata->vif.valid_links) - 1;
link = ffs(sdata->vif.active_links) - 1;
}
IEEE80211_SKB_CB(skb)->control.flags |=
@ -6076,7 +6076,7 @@ void ieee80211_tx_skb_tid(struct ieee80211_sub_if_data *sdata,
band = chanctx_conf->def.chan->band;
} else {
WARN_ON(link_id >= 0 &&
!(sdata->vif.valid_links & BIT(link_id)));
!(sdata->vif.active_links & BIT(link_id)));
/* MLD transmissions must not rely on the band */
band = 0;
}


@ -5049,7 +5049,7 @@ u8 *ieee80211_ie_build_eht_cap(u8 *pos,
return pos;
}
void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos)
void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id)
{
unsigned int elem_len;
@ -5069,7 +5069,7 @@ void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos)
memmove(len_pos + 255 + 3, len_pos + 255 + 1, elem_len);
/* place the fragment ID */
len_pos += 255 + 1;
*len_pos = WLAN_EID_FRAGMENT;
*len_pos = frag_id;
/* and point to fragment length to update later */
len_pos++;
}
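ieee80211_fragment_element() now takes the fragment ID as a parameter because sub-elements inside a multi-link element are fragmented with IEEE80211_MLE_SUBELEM_FRAGMENT, while top-level elements keep using WLAN_EID_FRAGMENT. A much-simplified standalone sketch of the fragmentation rule (it serializes fragments directly, whereas mac80211 fragments an already-built element in place; the element ID below is just an example):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* first piece keeps the element's own ID, later pieces carry frag_id */
static size_t emit_fragmented(uint8_t *out, uint8_t eid, uint8_t frag_id,
			      const uint8_t *data, size_t len)
{
	size_t off = 0, pos = 0;
	uint8_t id = eid;

	do {
		size_t chunk = (len - off > 255) ? 255 : len - off;

		out[pos++] = id;
		out[pos++] = (uint8_t)chunk;
		memcpy(&out[pos], &data[off], chunk);
		pos += chunk;
		off += chunk;
		id = frag_id;
	} while (off < len);

	return pos;
}

int main(void)
{
	uint8_t payload[600] = { 0 }, frame[700];
	size_t n = emit_fragmented(frame, 0xC8 /* example element ID */,
				   242 /* WLAN_EID_FRAGMENT */,
				   payload, sizeof(payload));

	printf("600-byte element serialized into %zu bytes\n", n);
	return 0;
}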


@ -317,12 +317,12 @@ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow)
EXPORT_SYMBOL_GPL(flow_offload_add);
void flow_offload_refresh(struct nf_flowtable *flow_table,
struct flow_offload *flow)
struct flow_offload *flow, bool force)
{
u32 timeout;
timeout = nf_flowtable_time_stamp + flow_offload_get_timeout(flow);
if (timeout - READ_ONCE(flow->timeout) > HZ)
if (force || timeout - READ_ONCE(flow->timeout) > HZ)
WRITE_ONCE(flow->timeout, timeout);
else
return;
@ -334,6 +334,12 @@ void flow_offload_refresh(struct nf_flowtable *flow_table,
}
EXPORT_SYMBOL_GPL(flow_offload_refresh);
static bool nf_flow_is_outdated(const struct flow_offload *flow)
{
return test_bit(IPS_SEEN_REPLY_BIT, &flow->ct->status) &&
!test_bit(NF_FLOW_HW_ESTABLISHED, &flow->flags);
}
static inline bool nf_flow_has_expired(const struct flow_offload *flow)
{
return nf_flow_timeout_delta(flow->timeout) <= 0;
@ -423,7 +429,8 @@ static void nf_flow_offload_gc_step(struct nf_flowtable *flow_table,
struct flow_offload *flow, void *data)
{
if (nf_flow_has_expired(flow) ||
nf_ct_is_dying(flow->ct))
nf_ct_is_dying(flow->ct) ||
nf_flow_is_outdated(flow))
flow_offload_teardown(flow);
if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) {


@ -384,7 +384,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
if (skb_try_make_writable(skb, thoff + hdrsize))
return NF_DROP;
flow_offload_refresh(flow_table, flow);
flow_offload_refresh(flow_table, flow, false);
nf_flow_encap_pop(skb, tuplehash);
thoff -= offset;
@ -650,7 +650,7 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
if (skb_try_make_writable(skb, thoff + hdrsize))
return NF_DROP;
flow_offload_refresh(flow_table, flow);
flow_offload_refresh(flow_table, flow, false);
nf_flow_encap_pop(skb, tuplehash);


@ -3844,7 +3844,8 @@ err_destroy_flow_rule:
if (flow)
nft_flow_rule_destroy(flow);
err_release_rule:
nf_tables_rule_release(&ctx, rule);
nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE);
nf_tables_rule_destroy(&ctx, rule);
err_release_expr:
for (i = 0; i < n; i++) {
if (expr_info[i].ops) {
@ -4919,6 +4920,7 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
set->num_exprs = num_exprs;
set->handle = nf_tables_alloc_handle(table);
INIT_LIST_HEAD(&set->pending_update);
err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set);
if (err < 0)
@ -9275,10 +9277,25 @@ static void nf_tables_commit_audit_log(struct list_head *adl, u32 generation)
}
}
static void nft_set_commit_update(struct list_head *set_update_list)
{
struct nft_set *set, *next;
list_for_each_entry_safe(set, next, set_update_list, pending_update) {
list_del_init(&set->pending_update);
if (!set->ops->commit)
continue;
set->ops->commit(set);
}
}
static int nf_tables_commit(struct net *net, struct sk_buff *skb)
{
struct nftables_pernet *nft_net = nft_pernet(net);
struct nft_trans *trans, *next;
LIST_HEAD(set_update_list);
struct nft_trans_elem *te;
struct nft_chain *chain;
struct nft_table *table;
@ -9453,6 +9470,11 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
nf_tables_setelem_notify(&trans->ctx, te->set,
&te->elem,
NFT_MSG_NEWSETELEM);
if (te->set->ops->commit &&
list_empty(&te->set->pending_update)) {
list_add_tail(&te->set->pending_update,
&set_update_list);
}
nft_trans_destroy(trans);
break;
case NFT_MSG_DELSETELEM:
@ -9467,6 +9489,11 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
atomic_dec(&te->set->nelems);
te->set->ndeact--;
}
if (te->set->ops->commit &&
list_empty(&te->set->pending_update)) {
list_add_tail(&te->set->pending_update,
&set_update_list);
}
break;
case NFT_MSG_NEWOBJ:
if (nft_trans_obj_update(trans)) {
@ -9529,6 +9556,8 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
}
}
nft_set_commit_update(&set_update_list);
nft_commit_notify(net, NETLINK_CB(skb).portid);
nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
nf_tables_commit_audit_log(&adl, nft_net->base_seq);
@ -9588,10 +9617,25 @@ static void nf_tables_abort_release(struct nft_trans *trans)
kfree(trans);
}
static void nft_set_abort_update(struct list_head *set_update_list)
{
struct nft_set *set, *next;
list_for_each_entry_safe(set, next, set_update_list, pending_update) {
list_del_init(&set->pending_update);
if (!set->ops->abort)
continue;
set->ops->abort(set);
}
}
static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
{
struct nftables_pernet *nft_net = nft_pernet(net);
struct nft_trans *trans, *next;
LIST_HEAD(set_update_list);
struct nft_trans_elem *te;
if (action == NFNL_ABORT_VALIDATE &&
@ -9701,6 +9745,12 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_setelem_remove(net, te->set, &te->elem);
if (!nft_setelem_is_catchall(te->set, &te->elem))
atomic_dec(&te->set->nelems);
if (te->set->ops->abort &&
list_empty(&te->set->pending_update)) {
list_add_tail(&te->set->pending_update,
&set_update_list);
}
break;
case NFT_MSG_DELSETELEM:
case NFT_MSG_DESTROYSETELEM:
@ -9711,6 +9761,11 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
if (!nft_setelem_is_catchall(te->set, &te->elem))
te->set->ndeact--;
if (te->set->ops->abort &&
list_empty(&te->set->pending_update)) {
list_add_tail(&te->set->pending_update,
&set_update_list);
}
nft_trans_destroy(trans);
break;
case NFT_MSG_NEWOBJ:
@ -9753,6 +9808,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
}
}
nft_set_abort_update(&set_update_list);
synchronize_rcu();
list_for_each_entry_safe_reverse(trans, next,


@ -533,7 +533,8 @@ ack:
* processed, this avoids that the same error is
* reported several times when replaying the batch.
*/
if (nfnl_err_add(&err_list, nlh, err, &extack) < 0) {
if (err == -ENOMEM ||
nfnl_err_add(&err_list, nlh, err, &extack) < 0) {
/* We failed to enqueue an error, reset the
* list of errors and send OOM to userspace
* pointing to the batch header.


@ -1600,17 +1600,10 @@ static void pipapo_free_fields(struct nft_pipapo_match *m)
}
}
/**
* pipapo_reclaim_match - RCU callback to free fields from old matching data
* @rcu: RCU head
*/
static void pipapo_reclaim_match(struct rcu_head *rcu)
static void pipapo_free_match(struct nft_pipapo_match *m)
{
struct nft_pipapo_match *m;
int i;
m = container_of(rcu, struct nft_pipapo_match, rcu);
for_each_possible_cpu(i)
kfree(*per_cpu_ptr(m->scratch, i));
@ -1625,7 +1618,19 @@ static void pipapo_reclaim_match(struct rcu_head *rcu)
}
/**
* pipapo_commit() - Replace lookup data with current working copy
* pipapo_reclaim_match - RCU callback to free fields from old matching data
* @rcu: RCU head
*/
static void pipapo_reclaim_match(struct rcu_head *rcu)
{
struct nft_pipapo_match *m;
m = container_of(rcu, struct nft_pipapo_match, rcu);
pipapo_free_match(m);
}
/**
* nft_pipapo_commit() - Replace lookup data with current working copy
* @set: nftables API set representation
*
* While at it, check if we should perform garbage collection on the working
@ -1635,7 +1640,7 @@ static void pipapo_reclaim_match(struct rcu_head *rcu)
* We also need to create a new working copy for subsequent insertions and
* deletions.
*/
static void pipapo_commit(const struct nft_set *set)
static void nft_pipapo_commit(const struct nft_set *set)
{
struct nft_pipapo *priv = nft_set_priv(set);
struct nft_pipapo_match *new_clone, *old;
@ -1660,6 +1665,26 @@ static void pipapo_commit(const struct nft_set *set)
priv->clone = new_clone;
}
static void nft_pipapo_abort(const struct nft_set *set)
{
struct nft_pipapo *priv = nft_set_priv(set);
struct nft_pipapo_match *new_clone, *m;
if (!priv->dirty)
return;
m = rcu_dereference(priv->match);
new_clone = pipapo_clone(m);
if (IS_ERR(new_clone))
return;
priv->dirty = false;
pipapo_free_match(priv->clone);
priv->clone = new_clone;
}
/**
* nft_pipapo_activate() - Mark element reference as active given key, commit
* @net: Network namespace
@ -1667,8 +1692,7 @@ static void pipapo_commit(const struct nft_set *set)
* @elem: nftables API element representation containing key data
*
* On insertion, elements are added to a copy of the matching data currently
* in use for lookups, and not directly inserted into current lookup data, so
* we'll take care of that by calling pipapo_commit() here. Both
* in use for lookups, and not directly inserted into current lookup data. Both
* nft_pipapo_insert() and nft_pipapo_activate() are called once for each
* element, hence we can't purpose either one as a real commit operation.
*/
@ -1684,8 +1708,6 @@ static void nft_pipapo_activate(const struct net *net,
nft_set_elem_change_active(net, set, &e->ext);
nft_set_elem_clear_busy(&e->ext);
pipapo_commit(set);
}
/**
@ -1931,7 +1953,6 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
if (i == m->field_count) {
priv->dirty = true;
pipapo_drop(m, rulemap);
pipapo_commit(set);
return;
}
@ -2230,6 +2251,8 @@ const struct nft_set_type nft_set_pipapo_type = {
.init = nft_pipapo_init,
.destroy = nft_pipapo_destroy,
.gc_init = nft_pipapo_gc_init,
.commit = nft_pipapo_commit,
.abort = nft_pipapo_abort,
.elemsize = offsetof(struct nft_pipapo_elem, ext),
},
};
@ -2252,6 +2275,8 @@ const struct nft_set_type nft_set_pipapo_avx2_type = {
.init = nft_pipapo_init,
.destroy = nft_pipapo_destroy,
.gc_init = nft_pipapo_gc_init,
.commit = nft_pipapo_commit,
.abort = nft_pipapo_abort,
.elemsize = offsetof(struct nft_pipapo_elem, ext),
},
};


@ -857,7 +857,8 @@ int netlbl_catmap_setlong(struct netlbl_lsm_catmap **catmap,
offset -= iter->startbit;
idx = offset / NETLBL_CATMAP_MAPSIZE;
iter->bitmap[idx] |= bitmap << (offset % NETLBL_CATMAP_MAPSIZE);
iter->bitmap[idx] |= (NETLBL_CATMAP_MAPTYPE)bitmap
<< (offset % NETLBL_CATMAP_MAPSIZE);
return 0;
}
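The added NETLBL_CATMAP_MAPTYPE cast matters on 32-bit builds, where the bitmap argument is a 32-bit unsigned long while the catmap words are 64-bit, so the shift has to happen in the wider type. A tiny sketch, with uint32_t standing in for the 32-bit unsigned long:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t bitmap = 0x5;		/* the 32-bit "unsigned long" argument */
	unsigned int shift = 40;	/* offset within a 64-bit catmap word */
	uint64_t map = 0;

	/* shifting the 32-bit value by 40 directly would be undefined and
	 * lose the bits; widen first, as the added cast does */
	map |= (uint64_t)bitmap << shift;

	printf("stored 0x%" PRIx64 "\n", map);
	return 0;
}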


@ -610,6 +610,7 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
struct flow_offload_tuple tuple = {};
enum ip_conntrack_info ctinfo;
struct tcphdr *tcph = NULL;
bool force_refresh = false;
struct flow_offload *flow;
struct nf_conn *ct;
u8 dir;
@ -647,6 +648,7 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
* established state, then don't refresh.
*/
return false;
force_refresh = true;
}
if (tcph && (unlikely(tcph->fin || tcph->rst))) {
@ -660,7 +662,12 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
else
ctinfo = IP_CT_ESTABLISHED_REPLY;
flow_offload_refresh(nf_ft, flow);
flow_offload_refresh(nf_ft, flow, force_refresh);
if (!test_bit(IPS_ASSURED_BIT, &ct->status)) {
/* Process this flow in SW to allow promoting to ASSURED */
return false;
}
nf_conntrack_get(&ct->ct_general);
nf_ct_set(skb, ct, ctinfo);
if (nf_ft->flags & NF_FLOWTABLE_COUNTER)


@ -13,7 +13,10 @@
#include <linux/rtnetlink.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/slab.h>
#include <net/ipv6.h>
#include <net/netlink.h>
#include <net/pkt_sched.h>
#include <linux/tc_act/tc_pedit.h>
@ -327,28 +330,58 @@ static bool offset_valid(struct sk_buff *skb, int offset)
return true;
}
static void pedit_skb_hdr_offset(struct sk_buff *skb,
static int pedit_l4_skb_offset(struct sk_buff *skb, int *hoffset, const int header_type)
{
const int noff = skb_network_offset(skb);
int ret = -EINVAL;
struct iphdr _iph;
switch (skb->protocol) {
case htons(ETH_P_IP): {
const struct iphdr *iph = skb_header_pointer(skb, noff, sizeof(_iph), &_iph);
if (!iph)
goto out;
*hoffset = noff + iph->ihl * 4;
ret = 0;
break;
}
case htons(ETH_P_IPV6):
ret = ipv6_find_hdr(skb, hoffset, header_type, NULL, NULL) == header_type ? 0 : -EINVAL;
break;
}
out:
return ret;
}
static int pedit_skb_hdr_offset(struct sk_buff *skb,
enum pedit_header_type htype, int *hoffset)
{
int ret = -EINVAL;
/* 'htype' is validated in the netlink parsing */
switch (htype) {
case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH:
if (skb_mac_header_was_set(skb))
if (skb_mac_header_was_set(skb)) {
*hoffset = skb_mac_offset(skb);
ret = 0;
}
break;
case TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK:
case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4:
case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6:
*hoffset = skb_network_offset(skb);
ret = 0;
break;
case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP:
ret = pedit_l4_skb_offset(skb, hoffset, IPPROTO_TCP);
break;
case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP:
if (skb_transport_header_was_set(skb))
*hoffset = skb_transport_offset(skb);
ret = pedit_l4_skb_offset(skb, hoffset, IPPROTO_UDP);
break;
default:
break;
}
return ret;
}
TC_INDIRECT_SCOPE int tcf_pedit_act(struct sk_buff *skb,
@ -384,6 +417,7 @@ TC_INDIRECT_SCOPE int tcf_pedit_act(struct sk_buff *skb,
int hoffset = 0;
u32 *ptr, hdata;
u32 val;
int rc;
if (tkey_ex) {
htype = tkey_ex->htype;
@ -392,7 +426,11 @@ TC_INDIRECT_SCOPE int tcf_pedit_act(struct sk_buff *skb,
tkey_ex++;
}
pedit_skb_hdr_offset(skb, htype, &hoffset);
rc = pedit_skb_hdr_offset(skb, htype, &hoffset);
if (rc) {
pr_info_ratelimited("tc action pedit unable to extract header offset for header type (0x%x)\n", htype);
goto bad;
}
if (tkey->offmask) {
u8 *d, _d;
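For IPv4 the new pedit_l4_skb_offset() derives the L4 offset straight from the IHL field instead of trusting a transport offset the skb may never have had set. A standalone sketch of just that calculation (the kernel helper additionally handles IPv6 via ipv6_find_hdr() and validates access with skb_header_pointer()):

#include <stdint.h>
#include <stdio.h>

/* offset of the L4 header relative to the start of the IPv4 header */
static int ipv4_l4_offset(const uint8_t *l3, size_t len)
{
	unsigned int ihl;

	if (len < 20)
		return -1;
	ihl = l3[0] & 0x0f;		/* header length in 32-bit words */
	if (ihl < 5 || ihl * 4 > len)
		return -1;
	return (int)(ihl * 4);
}

int main(void)
{
	/* 20-byte IPv4 header (version 4, ihl 5) followed by a dummy TCP header */
	uint8_t pkt[40] = { 0x45, 0x00, 0x00, 0x28 };

	printf("L4 header starts at offset %d\n",
	       ipv4_l4_offset(pkt, sizeof(pkt)));
	return 0;
}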


@ -657,8 +657,8 @@ static void __tcf_chain_put(struct tcf_chain *chain, bool by_act,
{
struct tcf_block *block = chain->block;
const struct tcf_proto_ops *tmplt_ops;
unsigned int refcnt, non_act_refcnt;
bool free_block = false;
unsigned int refcnt;
void *tmplt_priv;
mutex_lock(&block->lock);
@ -678,13 +678,15 @@ static void __tcf_chain_put(struct tcf_chain *chain, bool by_act,
* save these to temporary variables.
*/
refcnt = --chain->refcnt;
non_act_refcnt = refcnt - chain->action_refcnt;
tmplt_ops = chain->tmplt_ops;
tmplt_priv = chain->tmplt_priv;
/* The last dropped non-action reference will trigger notification. */
if (refcnt - chain->action_refcnt == 0 && !by_act) {
tc_chain_notify_delete(tmplt_ops, tmplt_priv, chain->index,
block, NULL, 0, 0, false);
if (non_act_refcnt == chain->explicitly_created && !by_act) {
if (non_act_refcnt == 0)
tc_chain_notify_delete(tmplt_ops, tmplt_priv,
chain->index, block, NULL, 0, 0,
false);
/* Last reference to chain, no need to lock. */
chain->flushing = false;
}


@ -718,13 +718,19 @@ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
struct nlattr *est, u32 flags, u32 fl_flags,
struct netlink_ext_ack *extack)
{
int err;
int err, ifindex = -1;
err = tcf_exts_validate_ex(net, tp, tb, est, &n->exts, flags,
fl_flags, extack);
if (err < 0)
return err;
if (tb[TCA_U32_INDEV]) {
ifindex = tcf_change_indev(net, tb[TCA_U32_INDEV], extack);
if (ifindex < 0)
return -EINVAL;
}
if (tb[TCA_U32_LINK]) {
u32 handle = nla_get_u32(tb[TCA_U32_LINK]);
struct tc_u_hnode *ht_down = NULL, *ht_old;
@ -759,13 +765,9 @@ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
tcf_bind_filter(tp, &n->res, base);
}
if (tb[TCA_U32_INDEV]) {
int ret;
ret = tcf_change_indev(net, tb[TCA_U32_INDEV], extack);
if (ret < 0)
return -EINVAL;
n->ifindex = ret;
}
if (ifindex >= 0)
n->ifindex = ifindex;
return 0;
}


@ -1079,17 +1079,29 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
if (parent == NULL) {
unsigned int i, num_q, ingress;
struct netdev_queue *dev_queue;
ingress = 0;
num_q = dev->num_tx_queues;
if ((q && q->flags & TCQ_F_INGRESS) ||
(new && new->flags & TCQ_F_INGRESS)) {
num_q = 1;
ingress = 1;
if (!dev_ingress_queue(dev)) {
dev_queue = dev_ingress_queue(dev);
if (!dev_queue) {
NL_SET_ERR_MSG(extack, "Device does not have an ingress queue");
return -ENOENT;
}
q = rtnl_dereference(dev_queue->qdisc_sleeping);
/* This is the counterpart of that qdisc_refcount_inc_nz() call in
* __tcf_qdisc_find() for filter requests.
*/
if (!qdisc_refcount_dec_if_one(q)) {
NL_SET_ERR_MSG(extack,
"Current ingress or clsact Qdisc has ongoing filter requests");
return -EBUSY;
}
}
if (dev->flags & IFF_UP)
@ -1100,18 +1112,26 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
if (new && new->ops->attach && !ingress)
goto skip;
for (i = 0; i < num_q; i++) {
struct netdev_queue *dev_queue = dev_ingress_queue(dev);
if (!ingress)
if (!ingress) {
for (i = 0; i < num_q; i++) {
dev_queue = netdev_get_tx_queue(dev, i);
old = dev_graft_qdisc(dev_queue, new);
old = dev_graft_qdisc(dev_queue, new);
if (new && i > 0)
qdisc_refcount_inc(new);
if (!ingress)
if (new && i > 0)
qdisc_refcount_inc(new);
qdisc_put(old);
}
} else {
old = dev_graft_qdisc(dev_queue, NULL);
/* {ingress,clsact}_destroy() @old before grafting @new to avoid
* unprotected concurrent accesses to net_device::miniq_{in,e}gress
* pointer(s) in mini_qdisc_pair_swap().
*/
qdisc_notify(net, skb, n, classid, old, new, extack);
qdisc_destroy(old);
dev_graft_qdisc(dev_queue, new);
}
skip:
@ -1125,8 +1145,6 @@ skip:
if (new && new->ops->attach)
new->ops->attach(new);
} else {
notify_and_destroy(net, skb, n, classid, old, new, extack);
}
if (dev->flags & IFF_UP)


@ -1046,7 +1046,7 @@ static void qdisc_free_cb(struct rcu_head *head)
qdisc_free(q);
}
static void qdisc_destroy(struct Qdisc *qdisc)
static void __qdisc_destroy(struct Qdisc *qdisc)
{
const struct Qdisc_ops *ops = qdisc->ops;
@ -1070,6 +1070,14 @@ static void qdisc_destroy(struct Qdisc *qdisc)
call_rcu(&qdisc->rcu, qdisc_free_cb);
}
void qdisc_destroy(struct Qdisc *qdisc)
{
if (qdisc->flags & TCQ_F_BUILTIN)
return;
__qdisc_destroy(qdisc);
}
void qdisc_put(struct Qdisc *qdisc)
{
if (!qdisc)
@ -1079,7 +1087,7 @@ void qdisc_put(struct Qdisc *qdisc)
!refcount_dec_and_test(&qdisc->refcnt))
return;
qdisc_destroy(qdisc);
__qdisc_destroy(qdisc);
}
EXPORT_SYMBOL(qdisc_put);
@ -1094,7 +1102,7 @@ void qdisc_put_unlocked(struct Qdisc *qdisc)
!refcount_dec_and_rtnl_lock(&qdisc->refcnt))
return;
qdisc_destroy(qdisc);
__qdisc_destroy(qdisc);
rtnl_unlock();
}
EXPORT_SYMBOL(qdisc_put_unlocked);


@ -797,6 +797,9 @@ static struct sk_buff *taprio_dequeue_tc_priority(struct Qdisc *sch,
taprio_next_tc_txq(dev, tc, &q->cur_txq[tc]);
if (q->cur_txq[tc] >= dev->num_tx_queues)
q->cur_txq[tc] = first_txq;
if (skb)
return skb;
} while (q->cur_txq[tc] != first_txq);


@ -1250,7 +1250,10 @@ static int sctp_side_effects(enum sctp_event_type event_type,
default:
pr_err("impossible disposition %d in state %d, event_type %d, event_id %d\n",
status, state, event_type, subtype.chunk);
BUG();
error = status;
if (error >= 0)
error = -EINVAL;
WARN_ON_ONCE(1);
break;
}


@ -4482,7 +4482,7 @@ enum sctp_disposition sctp_sf_eat_auth(struct net *net,
SCTP_AUTH_NEW_KEY, GFP_ATOMIC);
if (!ev)
return -ENOMEM;
return SCTP_DISPOSITION_NOMEM;
sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP,
SCTP_ULPEVENT(ev));


@ -1258,7 +1258,7 @@ int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info)
struct tipc_nl_msg msg;
struct tipc_media *media;
struct sk_buff *rep;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
if (!info->attrs[TIPC_NLA_MEDIA])
return -EINVAL;
@ -1307,7 +1307,7 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info)
int err;
char *name;
struct tipc_media *m;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
if (!info->attrs[TIPC_NLA_MEDIA])
return -EINVAL;


@ -2,7 +2,7 @@
/*
* Portions of this file
* Copyright(c) 2016-2017 Intel Deutschland GmbH
* Copyright (C) 2018, 2021-2022 Intel Corporation
* Copyright (C) 2018, 2021-2023 Intel Corporation
*/
#ifndef __CFG80211_RDEV_OPS
#define __CFG80211_RDEV_OPS
@ -1441,8 +1441,8 @@ rdev_del_intf_link(struct cfg80211_registered_device *rdev,
unsigned int link_id)
{
trace_rdev_del_intf_link(&rdev->wiphy, wdev, link_id);
if (rdev->ops->add_intf_link)
rdev->ops->add_intf_link(&rdev->wiphy, wdev, link_id);
if (rdev->ops->del_intf_link)
rdev->ops->del_intf_link(&rdev->wiphy, wdev, link_id);
trace_rdev_return_void(&rdev->wiphy);
}


@ -2404,11 +2404,8 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
case NL80211_IFTYPE_P2P_GO:
case NL80211_IFTYPE_ADHOC:
case NL80211_IFTYPE_MESH_POINT:
wiphy_lock(wiphy);
ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef,
iftype);
wiphy_unlock(wiphy);
if (!ret)
return ret;
break;
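The removed wiphy_lock()/wiphy_unlock() pair was a plain double lock: reg_wdev_chan_valid() already runs with the wiphy mutex held by its caller. A tiny pthread sketch (userspace analogue only, build with -pthread) of why re-acquiring a held, non-recursive lock cannot work:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
	pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	/* the "caller" (here: the same thread) already holds the lock */
	pthread_mutex_lock(&lock);

	/* a plain pthread_mutex_lock() here would never return, which is
	 * what the removed wiphy_lock() amounted to */
	if (pthread_mutex_trylock(&lock) == EBUSY)
		printf("lock already held; taking it again would deadlock\n");

	pthread_mutex_unlock(&lock);
	return 0;
}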


@ -5,7 +5,7 @@
* Copyright 2007-2009 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright 2017 Intel Deutschland GmbH
* Copyright (C) 2018-2022 Intel Corporation
* Copyright (C) 2018-2023 Intel Corporation
*/
#include <linux/export.h>
#include <linux/bitops.h>
@ -2558,6 +2558,13 @@ void cfg80211_remove_links(struct wireless_dev *wdev)
{
unsigned int link_id;
/*
* links are controlled by upper layers (userspace/cfg)
* only for AP mode, so only remove them here for AP
*/
if (wdev->iftype != NL80211_IFTYPE_AP)
return;
wdev_lock(wdev);
if (wdev->valid_links) {
for_each_valid_link(wdev, link_id)


@@ -84,8 +84,9 @@ h2_destroy()
 
 router_rp1_200_create()
 {
-	ip link add name $rp1.200 up \
-		link $rp1 addrgenmode eui64 type vlan id 200
+	ip link add name $rp1.200 link $rp1 type vlan id 200
+	ip link set dev $rp1.200 addrgenmode eui64
+	ip link set dev $rp1.200 up
 	ip address add dev $rp1.200 192.0.2.2/28
 	ip address add dev $rp1.200 2001:db8:1::2/64
 	ip stats set dev $rp1.200 l3_stats on
@@ -256,9 +257,11 @@ reapply_config()
 	router_rp1_200_destroy
 
-	ip link add name $rp1.200 link $rp1 addrgenmode none type vlan id 200
+	ip link add name $rp1.200 link $rp1 type vlan id 200
+	ip link set dev $rp1.200 addrgenmode none
 	ip stats set dev $rp1.200 l3_stats on
-	ip link set dev $rp1.200 up addrgenmode eui64
+	ip link set dev $rp1.200 addrgenmode eui64
+	ip link set dev $rp1.200 up
 	ip address add dev $rp1.200 192.0.2.2/28
 	ip address add dev $rp1.200 2001:db8:1::2/64
 }


@@ -1,3 +1,4 @@
+CONFIG_KALLSYMS=y
 CONFIG_MPTCP=y
 CONFIG_IPV6=y
 CONFIG_MPTCP_IPV6=y


@@ -55,16 +55,20 @@ __chk_nr()
 {
 	local command="$1"
 	local expected=$2
-	local msg nr
+	local msg="$3"
+	local skip="${4:-SKIP}"
+	local nr
 
-	shift 2
-	msg=$*
 	nr=$(eval $command)
 
 	printf "%-50s" "$msg"
 	if [ $nr != $expected ]; then
-		echo "[ fail ] expected $expected found $nr"
-		ret=$test_cnt
+		if [ $nr = "$skip" ] && ! mptcp_lib_expect_all_features; then
+			echo "[ skip ] Feature probably not supported"
+		else
+			echo "[ fail ] expected $expected found $nr"
+			ret=$test_cnt
+		fi
 	else
 		echo "[ ok ]"
 	fi
@@ -76,12 +80,12 @@ __chk_msk_nr()
 	local condition=$1
 	shift 1
 
-	__chk_nr "ss -inmHMN $ns | $condition" $*
+	__chk_nr "ss -inmHMN $ns | $condition" "$@"
 }
 
 chk_msk_nr()
 {
-	__chk_msk_nr "grep -c token:" $*
+	__chk_msk_nr "grep -c token:" "$@"
 }
 
 wait_msk_nr()
@@ -119,37 +123,26 @@ wait_msk_nr()
 chk_msk_fallback_nr()
 {
-	__chk_msk_nr "grep -c fallback" $*
+	__chk_msk_nr "grep -c fallback" "$@"
 }
 
 chk_msk_remote_key_nr()
 {
-	__chk_msk_nr "grep -c remote_key" $*
+	__chk_msk_nr "grep -c remote_key" "$@"
 }
 
 __chk_listen()
 {
 	local filter="$1"
 	local expected=$2
+	local msg="$3"
 
-	shift 2
-	msg=$*
-
-	nr=$(ss -N $ns -Ml "$filter" | grep -c LISTEN)
-	printf "%-50s" "$msg"
-
-	if [ $nr != $expected ]; then
-		echo "[ fail ] expected $expected found $nr"
-		ret=$test_cnt
-	else
-		echo "[ ok ]"
-	fi
+	__chk_nr "ss -N $ns -Ml '$filter' | grep -c LISTEN" "$expected" "$msg" 0
 }
 
 chk_msk_listen()
 {
 	lport=$1
-	local msg="check for listen socket"
 
 	# destination port search should always return empty list
 	__chk_listen "dport $lport" 0 "listen match for dport $lport"
@@ -167,10 +160,9 @@ chk_msk_listen()
 chk_msk_inuse()
 {
 	local expected=$1
+	local msg="$2"
 	local listen_nr
 
-	shift 1
-
 	listen_nr=$(ss -N "${ns}" -Ml | grep -c LISTEN)
 	expected=$((expected + listen_nr))
@@ -181,7 +173,7 @@ chk_msk_inuse()
 		sleep 0.1
 	done
 
-	__chk_nr get_msk_inuse $expected $*
+	__chk_nr get_msk_inuse $expected "$msg" 0
 }
 
 # $1: ns, $2: port


@@ -144,6 +144,7 @@ cleanup()
 }
 
 mptcp_lib_check_mptcp
+mptcp_lib_check_kallsyms
 
 ip -Version > /dev/null 2>&1
 if [ $? -ne 0 ];then
@@ -695,6 +696,15 @@ run_test_transparent()
 		return 0
 	fi
 
+	# IP(V6)_TRANSPARENT has been added after TOS support which came with
+	# the required infrastructure in MPTCP sockopt code. To support TOS, the
+	# following function has been exported (T). Not great but better than
+	# checking for a specific kernel version.
+	if ! mptcp_lib_kallsyms_has "T __ip_sock_set_tos$"; then
+		echo "INFO: ${msg} not supported by the kernel: SKIP"
+		return
+	fi
+
 	ip netns exec "$listener_ns" nft -f /dev/stdin <<"EOF"
 flush ruleset
 table inet mangle {
@@ -767,6 +777,11 @@ run_tests_peekmode()
 
 run_tests_mptfo()
 {
+	if ! mptcp_lib_kallsyms_has "mptcp_fastopen_"; then
+		echo "INFO: TFO not supported by the kernel: SKIP"
+		return
+	fi
+
 	echo "INFO: with MPTFO start"
 	ip netns exec "$ns1" sysctl -q net.ipv4.tcp_fastopen=2
 	ip netns exec "$ns2" sysctl -q net.ipv4.tcp_fastopen=1
@@ -787,6 +802,11 @@ run_tests_disconnect()
 	local old_cin=$cin
 	local old_sin=$sin
 
+	if ! mptcp_lib_kallsyms_has "mptcp_pm_data_reset$"; then
+		echo "INFO: Full disconnect not supported: SKIP"
+		return
+	fi
+
 	cat $cin $cin $cin > "$cin".disconnect
 
 	# force do_transfer to cope with the multiple tranmissions

(File diff suppressed because it is too large.)


@@ -38,3 +38,67 @@ mptcp_lib_check_mptcp() {
 		exit ${KSFT_SKIP}
 	fi
 }
+
+mptcp_lib_check_kallsyms() {
+	if ! mptcp_lib_has_file "/proc/kallsyms"; then
+		echo "SKIP: CONFIG_KALLSYMS is missing"
+		exit ${KSFT_SKIP}
+	fi
+}
+
+# Internal: use mptcp_lib_kallsyms_has() instead
+__mptcp_lib_kallsyms_has() {
+	local sym="${1}"
+
+	mptcp_lib_check_kallsyms
+
+	grep -q " ${sym}" /proc/kallsyms
+}
+
+# $1: part of a symbol to look at, add '$' at the end for full name
+mptcp_lib_kallsyms_has() {
+	local sym="${1}"
+
+	if __mptcp_lib_kallsyms_has "${sym}"; then
+		return 0
+	fi
+
+	mptcp_lib_fail_if_expected_feature "${sym} symbol not found"
+}
+
+# $1: part of a symbol to look at, add '$' at the end for full name
+mptcp_lib_kallsyms_doesnt_have() {
+	local sym="${1}"
+
+	if ! __mptcp_lib_kallsyms_has "${sym}"; then
+		return 0
+	fi
+
+	mptcp_lib_fail_if_expected_feature "${sym} symbol has been found"
+}
+
+# !!!AVOID USING THIS!!!
+# Features might not land in the expected version and features can be backported
+#
+# $1: kernel version, e.g. 6.3
+mptcp_lib_kversion_ge() {
+	local exp_maj="${1%.*}"
+	local exp_min="${1#*.}"
+	local v maj min
+
+	# If the kernel has backported features, set this env var to 1:
+	if [ "${SELFTESTS_MPTCP_LIB_NO_KVERSION_CHECK:-}" = "1" ]; then
+		return 0
+	fi
+
+	v=$(uname -r | cut -d'.' -f1,2)
+	maj=${v%.*}
+	min=${v#*.}
+
+	if [ "${maj}" -gt "${exp_maj}" ] ||
+	   { [ "${maj}" -eq "${exp_maj}" ] && [ "${min}" -ge "${exp_min}" ]; }; then
+		return 0
+	fi
+
+	mptcp_lib_fail_if_expected_feature "kernel version ${1} lower than ${v}"
+}
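The helpers above gate selftests on whether a symbol is present in /proc/kallsyms rather than on a kernel version, since features may be backported. For illustration only, a rough C equivalent of the same probe (the selftests themselves do this in shell with grep):

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if /proc/kallsyms lists a symbol with exactly this name --
     * roughly what mptcp_lib_kallsyms_has checks for with a trailing '$'. */
    static int kallsyms_has(const char *sym)
    {
            char line[512];
            FILE *f = fopen("/proc/kallsyms", "r");
            int found = 0;

            if (!f)
                    return 0; /* no CONFIG_KALLSYMS: treat as "not found" */

            while (!found && fgets(line, sizeof(line), f)) {
                    char *name = strrchr(line, ' ');

                    if (!name)
                            continue;
                    name++;                             /* skip "<addr> <type> " */
                    name[strcspn(name, "\t\n")] = '\0'; /* drop newline/module tag */
                    found = !strcmp(name, sym);
            }
            fclose(f);
            return found;
    }

    int main(void)
    {
            printf("mptcp_pm_data_reset: %s\n",
                   kallsyms_has("mptcp_pm_data_reset") ? "present" : "absent");
            return 0;
    }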


@@ -87,6 +87,10 @@ struct so_state {
 	uint64_t tcpi_rcv_delta;
 };
 
+#ifndef MIN
+#define MIN(a, b) ((a) < (b) ? (a) : (b))
+#endif
+
 static void die_perror(const char *msg)
 {
 	perror(msg);
@@ -349,13 +353,14 @@ static void do_getsockopt_tcp_info(struct so_state *s, int fd, size_t r, size_t
 		xerror("getsockopt MPTCP_TCPINFO (tries %d, %m)");
 
 	assert(olen <= sizeof(ti));
-	assert(ti.d.size_user == ti.d.size_kernel);
-	assert(ti.d.size_user == sizeof(struct tcp_info));
+	assert(ti.d.size_kernel > 0);
+	assert(ti.d.size_user ==
+	       MIN(ti.d.size_kernel, sizeof(struct tcp_info)));
 	assert(ti.d.num_subflows == 1);
 
 	assert(olen > (socklen_t)sizeof(struct mptcp_subflow_data));
 	olen -= sizeof(struct mptcp_subflow_data);
-	assert(olen == sizeof(struct tcp_info));
+	assert(olen == ti.d.size_user);
 
 	if (ti.ti[0].tcpi_bytes_sent == w &&
 	    ti.ti[0].tcpi_bytes_received == r)
@@ -401,13 +406,14 @@ static void do_getsockopt_subflow_addrs(int fd)
 		die_perror("getsockopt MPTCP_SUBFLOW_ADDRS");
 
 	assert(olen <= sizeof(addrs));
-	assert(addrs.d.size_user == addrs.d.size_kernel);
-	assert(addrs.d.size_user == sizeof(struct mptcp_subflow_addrs));
+	assert(addrs.d.size_kernel > 0);
+	assert(addrs.d.size_user ==
+	       MIN(addrs.d.size_kernel, sizeof(struct mptcp_subflow_addrs)));
 	assert(addrs.d.num_subflows == 1);
 
 	assert(olen > (socklen_t)sizeof(struct mptcp_subflow_data));
 	olen -= sizeof(struct mptcp_subflow_data);
-	assert(olen == sizeof(struct mptcp_subflow_addrs));
+	assert(olen == addrs.d.size_user);
 
 	llen = sizeof(local);
 	ret = getsockname(fd, (struct sockaddr *)&local, &llen);
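The mptcp_sockopt.c hunk relaxes the test's expectation: size_kernel is whatever structure size the running kernel reports, which on older kernels can be smaller than the structure userspace was built against, and the bytes actually copied are the minimum of the two. A tiny sketch of that contract with made-up sizes:

    #include <assert.h>
    #include <stdio.h>

    #ifndef MIN
    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #endif

    int main(void)
    {
            /* Size of the struct this test binary was compiled against... */
            unsigned int size_user_struct = 232;
            /* ...and the (possibly smaller) size an older kernel reports. */
            unsigned int size_kernel = 192;

            /* What getsockopt() actually copies out is bounded by both sides. */
            unsigned int size_user = MIN(size_kernel, size_user_struct);

            assert(size_kernel > 0);
            assert(size_user == MIN(size_kernel, size_user_struct));
            printf("copied %u bytes per subflow\n", size_user);
            return 0;
    }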


@@ -87,6 +87,7 @@ cleanup()
 }
 
 mptcp_lib_check_mptcp
+mptcp_lib_check_kallsyms
 
 ip -Version > /dev/null 2>&1
 if [ $? -ne 0 ];then
@@ -186,9 +187,14 @@ do_transfer()
 		local_addr="0.0.0.0"
 	fi
 
+	cmsg="TIMESTAMPNS"
+	if mptcp_lib_kallsyms_has "mptcp_ioctl$"; then
+		cmsg+=",TCPINQ"
+	fi
+
 	timeout ${timeout_test} \
 		ip netns exec ${listener_ns} \
-			$mptcp_connect -t ${timeout_poll} -l -M 1 -p $port -s ${srv_proto} -c TIMESTAMPNS,TCPINQ \
+			$mptcp_connect -t ${timeout_poll} -l -M 1 -p $port -s ${srv_proto} -c "${cmsg}" \
 				${local_addr} < "$sin" > "$sout" &
 
 	local spid=$!
@@ -196,7 +202,7 @@ do_transfer()
 	timeout ${timeout_test} \
 		ip netns exec ${connector_ns} \
-			$mptcp_connect -t ${timeout_poll} -M 2 -p $port -s ${cl_proto} -c TIMESTAMPNS,TCPINQ \
+			$mptcp_connect -t ${timeout_poll} -M 2 -p $port -s ${cl_proto} -c "${cmsg}" \
 			$connect_addr < "$cin" > "$cout" &
 
 	local cpid=$!
@@ -253,6 +259,11 @@ do_mptcp_sockopt_tests()
 {
 	local lret=0
 
+	if ! mptcp_lib_kallsyms_has "mptcp_diag_fill_info$"; then
+		echo "INFO: MPTCP sockopt not supported: SKIP"
+		return
+	fi
+
 	ip netns exec "$ns_sbox" ./mptcp_sockopt
 	lret=$?
@@ -307,6 +318,11 @@ do_tcpinq_tests()
 {
 	local lret=0
 
+	if ! mptcp_lib_kallsyms_has "mptcp_ioctl$"; then
+		echo "INFO: TCP_INQ not supported: SKIP"
+		return
+	fi
+
 	local args
 	for args in "-t tcp" "-r tcp"; do
 		do_tcpinq_test $args


@@ -73,8 +73,12 @@ check()
 }
 
 check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "defaults addr list"
-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
+
+default_limits="$(ip netns exec $ns1 ./pm_nl_ctl limits)"
+if mptcp_lib_expect_all_features; then
+	check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
 subflows 2" "defaults limits"
+fi
 
 ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1
 ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.2 flags subflow dev lo
@@ -121,12 +125,10 @@ ip netns exec $ns1 ./pm_nl_ctl flush
 check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "flush addrs"
 
 ip netns exec $ns1 ./pm_nl_ctl limits 9 1
-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
-subflows 2" "rcv addrs above hard limit"
+check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "rcv addrs above hard limit"
 
 ip netns exec $ns1 ./pm_nl_ctl limits 1 9
-check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0
-subflows 2" "subflows above hard limit"
+check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "subflows above hard limit"
 
 ip netns exec $ns1 ./pm_nl_ctl limits 8 8
 check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 8
@@ -176,14 +178,19 @@ subflow,backup 10.0.1.1" "set flags (backup)"
 ip netns exec $ns1 ./pm_nl_ctl set 10.0.1.1 flags nobackup
 check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
 subflow 10.0.1.1" " (nobackup)"
+
+# fullmesh support has been added later
 ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh
-check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+if ip netns exec $ns1 ./pm_nl_ctl dump | grep -q "fullmesh" ||
+   mptcp_lib_expect_all_features; then
+	check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
 subflow,fullmesh 10.0.1.1" " (fullmesh)"
-ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh
-check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+	ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh
+	check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
 subflow 10.0.1.1" " (nofullmesh)"
-ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh
-check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+	ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh
+	check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
 subflow,backup,fullmesh 10.0.1.1" " (backup,fullmesh)"
+fi
 
 exit $ret


@@ -4,11 +4,17 @@
 . "$(dirname "${0}")/mptcp_lib.sh"
 
 mptcp_lib_check_mptcp
+mptcp_lib_check_kallsyms
+
+if ! mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+	echo "userspace pm tests are not supported by the kernel: SKIP"
+	exit ${KSFT_SKIP}
+fi
 
 ip -Version > /dev/null 2>&1
 if [ $? -ne 0 ];then
 	echo "SKIP: Cannot not run test without ip tool"
-	exit 1
+	exit ${KSFT_SKIP}
 fi
 
 ANNOUNCED=6 # MPTCP_EVENT_ANNOUNCED
@@ -909,6 +915,11 @@ test_listener()
 {
 	print_title "Listener tests"
 
+	if ! mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then
+		stdbuf -o0 -e0 printf "LISTENER events \t[SKIP] Not supported\n"
+		return
+	fi
+
 	# Capture events on the network namespace running the client
 	:>$client_evts


@@ -502,11 +502,11 @@ int main(int argc, char *argv[])
 			interval = t2 - t1;
 			offset = (t2 + t1) / 2 - tp;
 
-			printf("system time: %lld.%u\n",
+			printf("system time: %lld.%09u\n",
 			       (pct+2*i)->sec, (pct+2*i)->nsec);
-			printf("phc time: %lld.%u\n",
+			printf("phc time: %lld.%09u\n",
 			       (pct+2*i+1)->sec, (pct+2*i+1)->nsec);
-			printf("system time: %lld.%u\n",
+			printf("system time: %lld.%09u\n",
 			       (pct+2*i+2)->sec, (pct+2*i+2)->nsec);
 		printf("system/phc clock time offset is %" PRId64 " ns\n"
 		       "system clock time delay is %" PRId64 " ns\n",


@@ -6,20 +6,18 @@ CONFIG_NF_CONNTRACK_MARK=y
 CONFIG_NF_CONNTRACK_ZONES=y
 CONFIG_NF_CONNTRACK_LABELS=y
 CONFIG_NF_NAT=m
+CONFIG_NETFILTER_XT_TARGET_LOG=m
 
 CONFIG_NET_SCHED=y
 
 #
 # Queueing/Scheduling
 #
-CONFIG_NET_SCH_ATM=m
 CONFIG_NET_SCH_CAKE=m
-CONFIG_NET_SCH_CBQ=m
 CONFIG_NET_SCH_CBS=m
 CONFIG_NET_SCH_CHOKE=m
 CONFIG_NET_SCH_CODEL=m
 CONFIG_NET_SCH_DRR=m
-CONFIG_NET_SCH_DSMARK=m
 CONFIG_NET_SCH_ETF=m
 CONFIG_NET_SCH_FQ=m
 CONFIG_NET_SCH_FQ_CODEL=m
@@ -57,8 +55,6 @@ CONFIG_NET_CLS_FLOW=m
 CONFIG_NET_CLS_FLOWER=m
 CONFIG_NET_CLS_MATCHALL=m
 CONFIG_NET_CLS_ROUTE4=m
-CONFIG_NET_CLS_RSVP=m
-CONFIG_NET_CLS_TCINDEX=m
 CONFIG_NET_EMATCH=y
 CONFIG_NET_EMATCH_STACK=32
 CONFIG_NET_EMATCH_CMP=m


@@ -58,10 +58,10 @@
         "setup": [
             "$IP link add dev $DUMMY type dummy || /bin/true"
         ],
-        "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 10",
+        "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 100",
         "expExitCode": "0",
         "verifyCmd": "$TC qdisc show dev $DUMMY",
-        "matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 10ms",
+        "matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 100ms",
         "matchCount": "1",
         "teardown": [
             "$TC qdisc del dev $DUMMY handle 1: root",


@@ -2,5 +2,6 @@
 # SPDX-License-Identifier: GPL-2.0
 
 modprobe netdevsim
+modprobe sch_teql
 ./tdc.py -c actions --nobuildebpf
 ./tdc.py -c qdisc