Merge tag 'net-6.1-rc8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf, can and wifi.

  Current release - new code bugs:

   - eth: mlx5e:
      - use kvfree() in mlx5e_accel_fs_tcp_create()
      - MACsec, fix RX data path 16 RX security channel limit
      - MACsec, fix memory leak when MACsec device is deleted
      - MACsec, fix update Rx secure channel active field
      - MACsec, fix add Rx security association (SA) rule memory leak

  Previous releases - regressions:

   - wifi: cfg80211: don't allow multi-BSSID in S1G

   - stmmac: set MAC's flow control register to reflect current settings

   - eth: mlx5:
      - E-switch, fix duplicate lag creation
      - fix use-after-free when reverting termination table

  Previous releases - always broken:

   - ipv4: fix route deletion when nexthop info is not specified

   - bpf: fix a local storage BPF map bug where the value's spin lock
     field can get initialized incorrectly

   - tipc: re-fetch skb cb after tipc_msg_validate

   - wifi: wilc1000: fix Information Element parsing

   - packet: do not set TP_STATUS_CSUM_VALID on CHECKSUM_COMPLETE

   - sctp: fix memory leak in sctp_stream_outq_migrate()

   - can: can327: fix potential skb leak when netdev is down

   - can: add a number of missing netdev frees on error paths

   - aquantia: do not purge addresses when setting the number of rings

   - wwan: iosm:
      - fix incorrect skb length leading to truncated packet
      - fix crash in peek throughput test due to skb UAF"

* tag 'net-6.1-rc8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (79 commits)
  net: ethernet: renesas: ravb: Fix promiscuous mode after system resumed
  MAINTAINERS: Update maintainer list for chelsio drivers
  ionic: update MAINTAINERS entry
  sctp: fix memory leak in sctp_stream_outq_migrate()
  packet: do not set TP_STATUS_CSUM_VALID on CHECKSUM_COMPLETE
  net/mlx5: Lag, Fix for loop when checking lag
  Revert "net/mlx5e: MACsec, remove replay window size limitation in offload path"
  net: marvell: prestera: Fix a NULL vs IS_ERR() check in some functions
  net: tun: Fix use-after-free in tun_detach()
  net: mdiobus: fix unbalanced node reference count
  net: hsr: Fix potential use-after-free
  tipc: re-fetch skb cb after tipc_msg_validate
  mptcp: fix sleep in atomic at close time
  mptcp: don't orphan ssk in mptcp_close()
  dsa: lan9303: Correct stat name
  ipv4: Fix route deletion when nexthop info is not specified
  net: wwan: iosm: fix incorrect skb length
  net: wwan: iosm: fix crash in peek throughput test
  net: wwan: iosm: fix dma_alloc_coherent incompatible pointer type
  net: wwan: iosm: fix kernel test robot reported error
  ...
Merged by Linus Torvalds, 2022-11-29 09:52:10 -08:00, commit 01f856ae6d
73 changed files with 478 additions and 263 deletions


@ -391,6 +391,7 @@ Sebastian Reichel <sre@kernel.org> <sebastian.reichel@collabora.co.uk>
Sebastian Reichel <sre@kernel.org> <sre@debian.org>
Sedat Dilek <sedat.dilek@gmail.com> <sedat.dilek@credativ.de>
Seth Forshee <sforshee@kernel.org> <seth.forshee@canonical.com>
+Shannon Nelson <shannon.nelson@amd.com> <snelson@pensando.io>
Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com>
Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com>
Shuah Khan <shuah@kernel.org> <shuah.khan@hp.com>


@ -5585,8 +5585,6 @@ F: drivers/scsi/cxgbi/cxgb3i
CXGB4 CRYPTO DRIVER (chcr)
M: Ayush Sawal <ayush.sawal@chelsio.com>
-M: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
-M: Rohit Maheshwari <rohitm@chelsio.com>
L: linux-crypto@vger.kernel.org
S: Supported
W: http://www.chelsio.com
@ -5594,8 +5592,6 @@ F: drivers/crypto/chelsio
CXGB4 INLINE CRYPTO DRIVER
M: Ayush Sawal <ayush.sawal@chelsio.com>
-M: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
-M: Rohit Maheshwari <rohitm@chelsio.com>
L: netdev@vger.kernel.org
S: Supported
W: http://www.chelsio.com
@ -16149,7 +16145,8 @@ F: include/linux/peci-cpu.h
F: include/linux/peci.h
PENSANDO ETHERNET DRIVERS
-M: Shannon Nelson <snelson@pensando.io>
+M: Shannon Nelson <shannon.nelson@amd.com>
+M: Brett Creeley <brett.creeley@amd.com>
M: drivers@pensando.io
L: netdev@vger.kernel.org
S: Supported
@ -17485,10 +17482,8 @@ S: Maintained
F: drivers/net/wireless/realtek/rtw89/
REDPINE WIRELESS DRIVER
-M: Amitkumar Karwar <amitkarwar@gmail.com>
-M: Siva Rebbagondla <siva8118@gmail.com>
L: linux-wireless@vger.kernel.org
-S: Maintained
+S: Orphan
F: drivers/net/wireless/rsi/
REGISTER MAP ABSTRACTION


@ -263,8 +263,10 @@ static void can327_feed_frame_to_netdev(struct can327 *elm, struct sk_buff *skb)
{
lockdep_assert_held(&elm->lock);
-if (!netif_running(elm->dev))
+if (!netif_running(elm->dev)) {
+kfree_skb(skb);
return;
+}
/* Queue for NAPI pickup.
* rx-offload will update stats and LEDs for us.


@ -264,22 +264,24 @@ static int cc770_isa_probe(struct platform_device *pdev)
if (err) {
dev_err(&pdev->dev,
"couldn't register device (err=%d)\n", err);
-goto exit_unmap;
+goto exit_free;
}
dev_info(&pdev->dev, "device registered (reg_base=0x%p, irq=%d)\n",
priv->reg_base, dev->irq);
return 0;
- exit_unmap:
+exit_free:
+free_cc770dev(dev);
+exit_unmap:
if (mem[idx])
iounmap(base);
- exit_release:
+exit_release:
if (mem[idx])
release_mem_region(mem[idx], iosize);
else
release_region(port[idx], iosize);
- exit:
+exit:
return err;
}


@ -1909,7 +1909,7 @@ int m_can_class_get_clocks(struct m_can_classdev *cdev)
cdev->hclk = devm_clk_get(cdev->dev, "hclk");
cdev->cclk = devm_clk_get(cdev->dev, "cclk");
-if (IS_ERR(cdev->cclk)) {
+if (IS_ERR(cdev->hclk) || IS_ERR(cdev->cclk)) {
dev_err(cdev->dev, "no clock found\n");
ret = -ENODEV;
}


@ -120,7 +120,7 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
ret = pci_alloc_irq_vectors(pci, 1, 1, PCI_IRQ_ALL_TYPES);
if (ret < 0)
-return ret;
+goto err_free_dev;
mcan_class->dev = &pci->dev;
mcan_class->net->irq = pci_irq_vector(pci, 0);
@ -132,7 +132,7 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
ret = m_can_class_register(mcan_class);
if (ret)
-goto err;
+goto err_free_irq;
/* Enable interrupt control at CAN wrapper IP */
writel(0x1, base + CTL_CSR_INT_CTL_OFFSET);
@ -144,8 +144,10 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
return 0;
-err:
+err_free_irq:
pci_free_irq_vectors(pci);
+err_free_dev:
+m_can_class_free_dev(mcan_class->net);
return ret;
}
@ -161,6 +163,7 @@ static void m_can_pci_remove(struct pci_dev *pci)
writel(0x0, priv->base + CTL_CSR_INT_CTL_OFFSET);
m_can_class_unregister(mcan_class);
+m_can_class_free_dev(mcan_class->net);
pci_free_irq_vectors(pci);
}


@ -202,22 +202,24 @@ static int sja1000_isa_probe(struct platform_device *pdev)
if (err) {
dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
DRV_NAME, err);
-goto exit_unmap;
+goto exit_free;
}
dev_info(&pdev->dev, "%s device registered (reg_base=0x%p, irq=%d)\n",
DRV_NAME, priv->reg_base, dev->irq);
return 0;
- exit_unmap:
+exit_free:
+free_sja1000dev(dev);
+exit_unmap:
if (mem[idx])
iounmap(base);
- exit_release:
+exit_release:
if (mem[idx])
release_mem_region(mem[idx], iosize);
else
release_region(port[idx], iosize);
- exit:
+exit:
return err;
}
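
The two ISA probe fixes above apply the same unwind idiom: error labels run in reverse order of acquisition, and the newly registered device gets its own label at the head of the chain. A minimal standalone sketch of the idiom, with hypothetical malloc-based stand-ins for the driver's real alloc/ioremap helpers:

#include <errno.h>
#include <stdlib.h>

/* Hypothetical stand-ins; the real code uses ioremap()/iounmap() and
 * alloc_sja1000dev()/free_sja1000dev(). */
static void *io_map(void)        { return malloc(64); }
static void io_unmap(void *p)    { free(p); }
static void *dev_alloc(void)     { return malloc(64); }
static void dev_free(void *p)    { free(p); }
static int dev_register(void *p) { (void)p; return -EIO; } /* force failure */

static int probe(void)
{
	void *base, *dev;
	int err;

	base = io_map();                 /* acquired first */
	if (!base)
		return -ENOMEM;

	dev = dev_alloc();               /* acquired second */
	if (!dev) {
		err = -ENOMEM;
		goto exit_unmap;
	}

	err = dev_register(dev);
	if (err)
		goto exit_free;          /* the bug was jumping straight to
					  * exit_unmap, leaking the device */
	return 0;

exit_free:                               /* unwind newest resource first */
	dev_free(dev);
exit_unmap:
	io_unmap(base);
	return err;
}

int main(void) { return probe() ? 0 : 1; }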


@ -2091,8 +2091,11 @@ static int es58x_init_netdev(struct es58x_device *es58x_dev, int channel_idx)
netdev->dev_port = channel_idx;
ret = register_candev(netdev);
-if (ret)
+if (ret) {
+es58x_dev->netdev[channel_idx] = NULL;
+free_candev(netdev);
return ret;
+}
netdev_queue_set_dql_min_limit(netdev_get_tx_queue(netdev, 0),
es58x_dev->param->dql_min_limit);


@ -47,6 +47,10 @@
#define MCBA_VER_REQ_USB 1
#define MCBA_VER_REQ_CAN 2
+/* Drive the CAN_RES signal LOW "0" to activate R24 and R25 */
+#define MCBA_VER_TERMINATION_ON 0
+#define MCBA_VER_TERMINATION_OFF 1
#define MCBA_SIDL_EXID_MASK 0x8
#define MCBA_DLC_MASK 0xf
#define MCBA_DLC_RTR_MASK 0x40
@ -463,7 +467,7 @@ static void mcba_usb_process_ka_usb(struct mcba_priv *priv,
priv->usb_ka_first_pass = false;
}
-if (msg->termination_state)
+if (msg->termination_state == MCBA_VER_TERMINATION_ON)
priv->can.termination = MCBA_TERMINATION_ENABLED;
else
priv->can.termination = MCBA_TERMINATION_DISABLED;
@ -785,9 +789,9 @@ static int mcba_set_termination(struct net_device *netdev, u16 term)
};
if (term == MCBA_TERMINATION_ENABLED)
-usb_msg.termination = 1;
+usb_msg.termination = MCBA_VER_TERMINATION_ON;
else
-usb_msg.termination = 0;
+usb_msg.termination = MCBA_VER_TERMINATION_OFF;
mcba_usb_xmit_cmd(priv, (struct mcba_usb_msg *)&usb_msg);


@ -961,7 +961,7 @@ static const struct lan9303_mib_desc lan9303_mib[] = {
{ .offset = LAN9303_MAC_TX_BRDCST_CNT_0, .name = "TxBroad", },
{ .offset = LAN9303_MAC_TX_PAUSE_CNT_0, .name = "TxPause", },
{ .offset = LAN9303_MAC_TX_MULCST_CNT_0, .name = "TxMulti", },
-{ .offset = LAN9303_MAC_RX_UNDSZE_CNT_0, .name = "TxUnderRun", },
+{ .offset = LAN9303_MAC_RX_UNDSZE_CNT_0, .name = "RxShort", },
{ .offset = LAN9303_MAC_TX_64_CNT_0, .name = "Tx64Byte", },
{ .offset = LAN9303_MAC_TX_127_CNT_0, .name = "Tx128Byte", },
{ .offset = LAN9303_MAC_TX_255_CNT_0, .name = "Tx256Byte", },


@ -13,6 +13,7 @@
#include "aq_ptp.h"
#include "aq_filters.h"
#include "aq_macsec.h"
#include "aq_main.h"
#include <linux/ptp_clock_kernel.h>
@ -858,7 +859,7 @@ static int aq_set_ringparam(struct net_device *ndev,
if (netif_running(ndev)) {
ndev_running = true;
-dev_close(ndev);
+aq_ndev_close(ndev);
}
cfg->rxds = max(ring->rx_pending, hw_caps->rxds_min);
@ -874,7 +875,7 @@ static int aq_set_ringparam(struct net_device *ndev,
goto err_exit;
if (ndev_running)
-err = dev_open(ndev, NULL);
+err = aq_ndev_open(ndev);
err_exit:
return err;


@ -58,7 +58,7 @@ struct net_device *aq_ndev_alloc(void)
return ndev;
}
-static int aq_ndev_open(struct net_device *ndev)
+int aq_ndev_open(struct net_device *ndev)
{
struct aq_nic_s *aq_nic = netdev_priv(ndev);
int err = 0;
@ -88,7 +88,7 @@ err_exit:
return err;
}
-static int aq_ndev_close(struct net_device *ndev)
+int aq_ndev_close(struct net_device *ndev)
{
struct aq_nic_s *aq_nic = netdev_priv(ndev);
int err = 0;


@ -16,5 +16,7 @@ DECLARE_STATIC_KEY_FALSE(aq_xdp_locking_key);
void aq_ndev_schedule_work(struct work_struct *work);
struct net_device *aq_ndev_alloc(void);
+int aq_ndev_open(struct net_device *ndev);
+int aq_ndev_close(struct net_device *ndev);
#endif /* AQ_MAIN_H */


@ -74,7 +74,7 @@
#include "fec.h"
static void set_multicast_list(struct net_device *ndev);
-static void fec_enet_itr_coal_init(struct net_device *ndev);
+static void fec_enet_itr_coal_set(struct net_device *ndev);
#define DRIVER_NAME "fec"
@ -1220,8 +1220,7 @@ fec_restart(struct net_device *ndev)
writel(0, fep->hwp + FEC_IMASK);
-/* Init the interrupt coalescing */
-fec_enet_itr_coal_init(ndev);
+fec_enet_itr_coal_set(ndev);
}
static int fec_enet_ipc_handle_init(struct fec_enet_private *fep)
@ -2856,19 +2855,6 @@ static int fec_enet_set_coalesce(struct net_device *ndev,
return 0;
}
-static void fec_enet_itr_coal_init(struct net_device *ndev)
-{
-struct ethtool_coalesce ec;
-ec.rx_coalesce_usecs = FEC_ITR_ICTT_DEFAULT;
-ec.rx_max_coalesced_frames = FEC_ITR_ICFT_DEFAULT;
-ec.tx_coalesce_usecs = FEC_ITR_ICTT_DEFAULT;
-ec.tx_max_coalesced_frames = FEC_ITR_ICFT_DEFAULT;
-fec_enet_set_coalesce(ndev, &ec, NULL, NULL);
-}
static int fec_enet_get_tunable(struct net_device *netdev,
const struct ethtool_tunable *tuna,
void *data)
@ -3623,6 +3609,10 @@ static int fec_enet_init(struct net_device *ndev)
fep->rx_align = 0x3;
fep->tx_align = 0x3;
#endif
+fep->rx_pkts_itr = FEC_ITR_ICFT_DEFAULT;
+fep->tx_pkts_itr = FEC_ITR_ICFT_DEFAULT;
+fep->rx_time_itr = FEC_ITR_ICTT_DEFAULT;
+fep->tx_time_itr = FEC_ITR_ICTT_DEFAULT;
/* Check mask of the streaming and coherent API */
ret = dma_set_mask_and_coherent(&fep->pdev->dev, DMA_BIT_MASK(32));


@ -1741,11 +1741,8 @@ static int e100_xmit_prepare(struct nic *nic, struct cb *cb,
dma_addr = dma_map_single(&nic->pdev->dev, skb->data, skb->len,
DMA_TO_DEVICE);
/* If we can't map the skb, have the upper layer try later */
-if (dma_mapping_error(&nic->pdev->dev, dma_addr)) {
-dev_kfree_skb_any(skb);
-skb = NULL;
+if (dma_mapping_error(&nic->pdev->dev, dma_addr))
return -ENOMEM;
-}
/*
* Use the last 4 bytes of the SKB payload packet as the CRC, used for


@ -32,6 +32,8 @@ struct workqueue_struct *fm10k_workqueue;
**/
static int __init fm10k_init_module(void)
{
+int ret;
pr_info("%s\n", fm10k_driver_string);
pr_info("%s\n", fm10k_copyright);
@ -43,7 +45,13 @@ static int __init fm10k_init_module(void)
fm10k_dbg_init();
-return fm10k_register_pci_driver();
+ret = fm10k_register_pci_driver();
+if (ret) {
+fm10k_dbg_exit();
+destroy_workqueue(fm10k_workqueue);
+}
+return ret;
}
module_init(fm10k_init_module);


@ -16644,6 +16644,8 @@ static struct pci_driver i40e_driver = {
**/
static int __init i40e_init_module(void)
{
+int err;
pr_info("%s: %s\n", i40e_driver_name, i40e_driver_string);
pr_info("%s: %s\n", i40e_driver_name, i40e_copyright);
@ -16661,7 +16663,14 @@ static int __init i40e_init_module(void)
}
i40e_dbg_init();
-return pci_register_driver(&i40e_driver);
+err = pci_register_driver(&i40e_driver);
+if (err) {
+destroy_workqueue(i40e_wq);
+i40e_dbg_exit();
+return err;
+}
+return 0;
}
module_init(i40e_init_module);


@ -5196,6 +5196,8 @@ static struct pci_driver iavf_driver = {
**/
static int __init iavf_init_module(void)
{
+int ret;
pr_info("iavf: %s\n", iavf_driver_string);
pr_info("%s\n", iavf_copyright);
@ -5206,7 +5208,12 @@ static int __init iavf_init_module(void)
pr_err("%s: Failed to create workqueue\n", iavf_driver_name);
return -ENOMEM;
}
-return pci_register_driver(&iavf_driver);
+ret = pci_register_driver(&iavf_driver);
+if (ret)
+destroy_workqueue(iavf_wq);
+return ret;
}
module_init(iavf_init_module);


@ -4869,6 +4869,8 @@ static struct pci_driver ixgbevf_driver = {
**/
static int __init ixgbevf_init_module(void)
{
+int err;
pr_info("%s\n", ixgbevf_driver_string);
pr_info("%s\n", ixgbevf_copyright);
ixgbevf_wq = create_singlethread_workqueue(ixgbevf_driver_name);
@ -4877,7 +4879,13 @@ static int __init ixgbevf_init_module(void)
return -ENOMEM;
}
-return pci_register_driver(&ixgbevf_driver);
+err = pci_register_driver(&ixgbevf_driver);
+if (err) {
+destroy_workqueue(ixgbevf_wq);
+return err;
+}
+return 0;
}
module_init(ixgbevf_init_module);
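
The four Intel driver hunks above (fm10k, i40e, iavf, ixgbevf) are the same fix repeated: module init created a workqueue and then returned pci_register_driver()'s result directly, so a registration failure leaked the workqueue. A generic sketch of the rollback shape, using plain C stand-ins rather than the kernel APIs:

#include <errno.h>

static int wq_alive; /* stands in for the driver's workqueue pointer */

static int create_wq(void)       { wq_alive = 1; return 0; }
static void destroy_wq(void)     { wq_alive = 0; }
static int register_driver(void) { return -ENODEV; } /* simulate failure */

static int init_module_sketch(void)
{
	int err = create_wq();

	if (err)
		return err;

	err = register_driver();
	if (err) {
		destroy_wq(); /* the missing cleanup these patches add */
		return err;
	}
	return 0;
}

int main(void)
{
	/* expect the failure path to run and leave no workqueue behind */
	return (init_module_sketch() && !wq_alive) ? 0 : 1;
}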


@ -884,7 +884,7 @@ static inline void otx2_dma_unmap_page(struct otx2_nic *pfvf,
static inline u16 otx2_get_smq_idx(struct otx2_nic *pfvf, u16 qidx)
{
#ifdef CONFIG_DCB
-if (pfvf->pfc_alloc_status[qidx])
+if (qidx < NIX_PF_PFC_PRIO_MAX && pfvf->pfc_alloc_status[qidx])
return pfvf->pfc_schq_list[NIX_TXSCH_LVL_SMQ][qidx];
#endif


@ -457,7 +457,7 @@ prestera_kern_neigh_cache_find(struct prestera_switch *sw,
n_cache =
rhashtable_lookup_fast(&sw->router->kern_neigh_cache_ht, key,
__prestera_kern_neigh_cache_ht_params);
-return IS_ERR(n_cache) ? NULL : n_cache;
+return n_cache;
}
static void


@ -330,7 +330,7 @@ prestera_nh_neigh_find(struct prestera_switch *sw,
nh_neigh = rhashtable_lookup_fast(&sw->router->nh_neigh_ht,
key, __prestera_nh_neigh_ht_params);
-return IS_ERR(nh_neigh) ? NULL : nh_neigh;
+return nh_neigh;
}
struct prestera_nh_neigh *
@ -484,7 +484,7 @@ __prestera_nexthop_group_find(struct prestera_switch *sw,
nh_grp = rhashtable_lookup_fast(&sw->router->nexthop_group_ht,
key, __prestera_nexthop_group_ht_params);
-return IS_ERR(nh_grp) ? NULL : nh_grp;
+return nh_grp;
}
static struct prestera_nexthop_group *


@ -1497,8 +1497,8 @@ static ssize_t outlen_write(struct file *filp, const char __user *buf,
return -EFAULT;
err = sscanf(outlen_str, "%d", &outlen);
-if (err < 0)
-return err;
+if (err != 1)
+return -EINVAL;
ptr = kzalloc(outlen, GFP_KERNEL);
if (!ptr)


@ -365,7 +365,7 @@ void mlx5e_accel_fs_tcp_destroy(struct mlx5e_flow_steering *fs)
for (i = 0; i < ACCEL_FS_TCP_NUM_TYPES; i++)
accel_fs_tcp_destroy_table(fs, i);
-kfree(accel_tcp);
+kvfree(accel_tcp);
mlx5e_fs_set_accel_tcp(fs, NULL);
}
@ -397,7 +397,7 @@ int mlx5e_accel_fs_tcp_create(struct mlx5e_flow_steering *fs)
err_destroy_tables:
while (--i >= 0)
accel_fs_tcp_destroy_table(fs, i);
-kfree(accel_tcp);
+kvfree(accel_tcp);
mlx5e_fs_set_accel_tcp(fs, NULL);
return err;
}
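
For context on the kvfree() change: kvzalloc() transparently falls back from kmalloc to vmalloc for large sizes, so the result must always be released with kvfree(); pairing it with kfree() is only safe by luck. A kernel-context sketch (not a standalone program; header placement of the kv* helpers varies by kernel version):

#include <linux/mm.h>
#include <linux/slab.h>

struct accel_tables {
	unsigned long entries[4096]; /* hypothetical, large enough to vmalloc */
};

static struct accel_tables *tables_create(void)
{
	return kvzalloc(sizeof(struct accel_tables), GFP_KERNEL);
}

static void tables_destroy(struct accel_tables *t)
{
	kvfree(t); /* kfree(t) here would be the same class of bug as above */
}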


@ -427,15 +427,15 @@ mlx5e_macsec_get_rx_sc_from_sc_list(const struct list_head *list, sci_t sci)
return NULL;
}
-static int mlx5e_macsec_update_rx_sa(struct mlx5e_macsec *macsec,
-struct mlx5e_macsec_sa *rx_sa,
-bool active)
+static int macsec_rx_sa_active_update(struct macsec_context *ctx,
+struct mlx5e_macsec_sa *rx_sa,
+bool active)
{
-struct mlx5_core_dev *mdev = macsec->mdev;
-struct mlx5_macsec_obj_attrs attrs = {};
+struct mlx5e_priv *priv = netdev_priv(ctx->netdev);
+struct mlx5e_macsec *macsec = priv->macsec;
int err = 0;
-if (rx_sa->active != active)
+if (rx_sa->active == active)
return 0;
rx_sa->active = active;
@ -444,13 +444,11 @@ static int mlx5e_macsec_update_rx_sa(struct mlx5e_macsec *macsec,
return 0;
}
-attrs.sci = cpu_to_be64((__force u64)rx_sa->sci);
-attrs.enc_key_id = rx_sa->enc_key_id;
-err = mlx5e_macsec_create_object(mdev, &attrs, false, &rx_sa->macsec_obj_id);
+err = mlx5e_macsec_init_sa(ctx, rx_sa, true, false);
if (err)
-return err;
+rx_sa->active = false;
-return 0;
+return err;
}
static bool mlx5e_macsec_secy_features_validate(struct macsec_context *ctx)
@ -476,6 +474,11 @@ static bool mlx5e_macsec_secy_features_validate(struct macsec_context *ctx)
return false;
}
+if (!ctx->secy->tx_sc.encrypt) {
+netdev_err(netdev, "MACsec offload: encrypt off isn't supported\n");
+return false;
+}
return true;
}
@ -620,6 +623,7 @@ static int mlx5e_macsec_upd_txsa(struct macsec_context *ctx)
if (tx_sa->active == ctx_tx_sa->active)
goto out;
+tx_sa->active = ctx_tx_sa->active;
if (tx_sa->assoc_num != tx_sc->encoding_sa)
goto out;
@ -635,8 +639,6 @@ static int mlx5e_macsec_upd_txsa(struct macsec_context *ctx)
mlx5e_macsec_cleanup_sa(macsec, tx_sa, true);
}
-tx_sa->active = ctx_tx_sa->active;
out:
mutex_unlock(&macsec->lock);
@ -736,9 +738,14 @@ static int mlx5e_macsec_add_rxsc(struct macsec_context *ctx)
sc_xarray_element->rx_sc = rx_sc;
err = xa_alloc(&macsec->sc_xarray, &sc_xarray_element->fs_id, sc_xarray_element,
-XA_LIMIT(1, USHRT_MAX), GFP_KERNEL);
+XA_LIMIT(1, MLX5_MACEC_RX_FS_ID_MAX), GFP_KERNEL);
-if (err)
+if (err) {
+if (err == -EBUSY)
+netdev_err(ctx->netdev,
+"MACsec offload: unable to create entry for RX SC (%d Rx SCs already allocated)\n",
+MLX5_MACEC_RX_FS_ID_MAX);
goto destroy_sc_xarray_elemenet;
+}
rx_sc->md_dst = metadata_dst_alloc(0, METADATA_MACSEC, GFP_KERNEL);
if (!rx_sc->md_dst) {
@ -798,16 +805,16 @@ static int mlx5e_macsec_upd_rxsc(struct macsec_context *ctx)
goto out;
}
-rx_sc->active = ctx_rx_sc->active;
+if (rx_sc->active == ctx_rx_sc->active)
+goto out;
+rx_sc->active = ctx_rx_sc->active;
for (i = 0; i < MACSEC_NUM_AN; ++i) {
rx_sa = rx_sc->rx_sa[i];
if (!rx_sa)
continue;
-err = mlx5e_macsec_update_rx_sa(macsec, rx_sa, rx_sa->active && ctx_rx_sc->active);
+err = macsec_rx_sa_active_update(ctx, rx_sa, rx_sa->active && ctx_rx_sc->active);
if (err)
goto out;
}
@ -818,16 +825,43 @@ out:
return err;
}
+static void macsec_del_rxsc_ctx(struct mlx5e_macsec *macsec, struct mlx5e_macsec_rx_sc *rx_sc)
+{
+struct mlx5e_macsec_sa *rx_sa;
+int i;
+for (i = 0; i < MACSEC_NUM_AN; ++i) {
+rx_sa = rx_sc->rx_sa[i];
+if (!rx_sa)
+continue;
+mlx5e_macsec_cleanup_sa(macsec, rx_sa, false);
+mlx5_destroy_encryption_key(macsec->mdev, rx_sa->enc_key_id);
+kfree(rx_sa);
+rx_sc->rx_sa[i] = NULL;
+}
+/* At this point the relevant MACsec offload Rx rule already removed at
+* mlx5e_macsec_cleanup_sa need to wait for datapath to finish current
+* Rx related data propagating using xa_erase which uses rcu to sync,
+* once fs_id is erased then this rx_sc is hidden from datapath.
+*/
+list_del_rcu(&rx_sc->rx_sc_list_element);
+xa_erase(&macsec->sc_xarray, rx_sc->sc_xarray_element->fs_id);
+metadata_dst_free(rx_sc->md_dst);
+kfree(rx_sc->sc_xarray_element);
+kfree_rcu(rx_sc);
+}
static int mlx5e_macsec_del_rxsc(struct macsec_context *ctx)
{
struct mlx5e_priv *priv = netdev_priv(ctx->netdev);
struct mlx5e_macsec_device *macsec_device;
struct mlx5e_macsec_rx_sc *rx_sc;
-struct mlx5e_macsec_sa *rx_sa;
struct mlx5e_macsec *macsec;
struct list_head *list;
int err = 0;
-int i;
mutex_lock(&priv->macsec->lock);
@ -849,31 +883,7 @@ static int mlx5e_macsec_del_rxsc(struct macsec_context *ctx)
goto out;
}
-for (i = 0; i < MACSEC_NUM_AN; ++i) {
-rx_sa = rx_sc->rx_sa[i];
-if (!rx_sa)
-continue;
-mlx5e_macsec_cleanup_sa(macsec, rx_sa, false);
-mlx5_destroy_encryption_key(macsec->mdev, rx_sa->enc_key_id);
-kfree(rx_sa);
-rx_sc->rx_sa[i] = NULL;
-}
-/*
-* At this point the relevant MACsec offload Rx rule already removed at
-* mlx5e_macsec_cleanup_sa need to wait for datapath to finish current
-* Rx related data propagating using xa_erase which uses rcu to sync,
-* once fs_id is erased then this rx_sc is hidden from datapath.
-*/
-list_del_rcu(&rx_sc->rx_sc_list_element);
-xa_erase(&macsec->sc_xarray, rx_sc->sc_xarray_element->fs_id);
-metadata_dst_free(rx_sc->md_dst);
-kfree(rx_sc->sc_xarray_element);
-kfree_rcu(rx_sc);
+macsec_del_rxsc_ctx(macsec, rx_sc);
out:
mutex_unlock(&macsec->lock);
@ -1015,7 +1025,7 @@ static int mlx5e_macsec_upd_rxsa(struct macsec_context *ctx)
goto out;
}
-err = mlx5e_macsec_update_rx_sa(macsec, rx_sa, ctx_rx_sa->active);
+err = macsec_rx_sa_active_update(ctx, rx_sa, ctx_rx_sa->active);
out:
mutex_unlock(&macsec->lock);
@ -1234,7 +1244,6 @@ static int mlx5e_macsec_del_secy(struct macsec_context *ctx)
struct mlx5e_priv *priv = netdev_priv(ctx->netdev);
struct mlx5e_macsec_device *macsec_device;
struct mlx5e_macsec_rx_sc *rx_sc, *tmp;
-struct mlx5e_macsec_sa *rx_sa;
struct mlx5e_macsec_sa *tx_sa;
struct mlx5e_macsec *macsec;
struct list_head *list;
@ -1263,28 +1272,15 @@ static int mlx5e_macsec_del_secy(struct macsec_context *ctx)
}
list = &macsec_device->macsec_rx_sc_list_head;
-list_for_each_entry_safe(rx_sc, tmp, list, rx_sc_list_element) {
-for (i = 0; i < MACSEC_NUM_AN; ++i) {
-rx_sa = rx_sc->rx_sa[i];
-if (!rx_sa)
-continue;
-mlx5e_macsec_cleanup_sa(macsec, rx_sa, false);
-mlx5_destroy_encryption_key(macsec->mdev, rx_sa->enc_key_id);
-kfree(rx_sa);
-rx_sc->rx_sa[i] = NULL;
-}
-list_del_rcu(&rx_sc->rx_sc_list_element);
-kfree_rcu(rx_sc);
-}
+list_for_each_entry_safe(rx_sc, tmp, list, rx_sc_list_element)
+macsec_del_rxsc_ctx(macsec, rx_sc);
kfree(macsec_device->dev_addr);
macsec_device->dev_addr = NULL;
list_del_rcu(&macsec_device->macsec_device_list_element);
--macsec->num_of_devices;
kfree(macsec_device);
out:
mutex_unlock(&macsec->lock);
@ -1748,7 +1744,7 @@ void mlx5e_macsec_offload_handle_rx_skb(struct net_device *netdev,
if (!macsec)
return;
-fs_id = MLX5_MACSEC_METADATA_HANDLE(macsec_meta_data);
+fs_id = MLX5_MACSEC_RX_METADAT_HANDLE(macsec_meta_data);
rcu_read_lock();
sc_xarray_element = xa_load(&macsec->sc_xarray, fs_id);


@ -10,9 +10,11 @@
#include <net/macsec.h>
#include <net/dst_metadata.h>
-/* Bit31 - 30: MACsec marker, Bit3-0: MACsec id */
+/* Bit31 - 30: MACsec marker, Bit15-0: MACsec id */
+#define MLX5_MACEC_RX_FS_ID_MAX USHRT_MAX /* Must be power of two */
+#define MLX5_MACSEC_RX_FS_ID_MASK MLX5_MACEC_RX_FS_ID_MAX
#define MLX5_MACSEC_METADATA_MARKER(metadata) ((((metadata) >> 30) & 0x3) == 0x1)
-#define MLX5_MACSEC_METADATA_HANDLE(metadata) ((metadata) & GENMASK(3, 0))
+#define MLX5_MACSEC_RX_METADAT_HANDLE(metadata) ((metadata) & MLX5_MACSEC_RX_FS_ID_MASK)
struct mlx5e_priv;
struct mlx5e_macsec;


@ -250,7 +250,7 @@ static int macsec_fs_tx_create(struct mlx5e_macsec_fs *macsec_fs)
struct mlx5_flow_handle *rule;
struct mlx5_flow_spec *spec;
u32 *flow_group_in;
-int err = 0;
+int err;
ns = mlx5_get_flow_namespace(macsec_fs->mdev, MLX5_FLOW_NAMESPACE_EGRESS_MACSEC);
if (!ns)
@ -261,8 +261,10 @@ static int macsec_fs_tx_create(struct mlx5e_macsec_fs *macsec_fs)
return -ENOMEM;
flow_group_in = kvzalloc(inlen, GFP_KERNEL);
-if (!flow_group_in)
+if (!flow_group_in) {
+err = -ENOMEM;
goto out_spec;
+}
tx_tables = &tx_fs->tables;
ft_crypto = &tx_tables->ft_crypto;
@ -898,7 +900,7 @@ static int macsec_fs_rx_create(struct mlx5e_macsec_fs *macsec_fs)
struct mlx5_flow_handle *rule;
struct mlx5_flow_spec *spec;
u32 *flow_group_in;
-int err = 0;
+int err;
ns = mlx5_get_flow_namespace(macsec_fs->mdev, MLX5_FLOW_NAMESPACE_KERNEL_RX_MACSEC);
if (!ns)
@ -909,8 +911,10 @@ static int macsec_fs_rx_create(struct mlx5e_macsec_fs *macsec_fs)
return -ENOMEM;
flow_group_in = kvzalloc(inlen, GFP_KERNEL);
-if (!flow_group_in)
+if (!flow_group_in) {
+err = -ENOMEM;
goto free_spec;
+}
rx_tables = &rx_fs->tables;
ft_crypto = &rx_tables->ft_crypto;
@ -1142,10 +1146,10 @@ macsec_fs_rx_add_rule(struct mlx5e_macsec_fs *macsec_fs,
ft_crypto = &rx_tables->ft_crypto;
/* Set bit[31 - 30] macsec marker - 0x01 */
-/* Set bit[3-0] fs id */
+/* Set bit[15-0] fs id */
MLX5_SET(set_action_in, action, action_type, MLX5_ACTION_TYPE_SET);
MLX5_SET(set_action_in, action, field, MLX5_ACTION_IN_FIELD_METADATA_REG_B);
-MLX5_SET(set_action_in, action, data, fs_id | BIT(30));
+MLX5_SET(set_action_in, action, data, MLX5_MACSEC_RX_METADAT_HANDLE(fs_id) | BIT(30));
MLX5_SET(set_action_in, action, offset, 0);
MLX5_SET(set_action_in, action, length, 32);
@ -1205,6 +1209,7 @@ macsec_fs_rx_add_rule(struct mlx5e_macsec_fs *macsec_fs,
rx_rule->rule[1] = rule;
}
+kvfree(spec);
return macsec_rule;
err:


@ -1362,6 +1362,9 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf)
devl_rate_nodes_destroy(devlink);
}
+/* Destroy legacy fdb when disabling sriov in legacy mode. */
+if (esw->mode == MLX5_ESWITCH_LEGACY)
+mlx5_eswitch_disable_locked(esw);
esw->esw_funcs.num_vfs = 0;


@ -736,6 +736,14 @@ void mlx5_eswitch_offloads_destroy_single_fdb(struct mlx5_eswitch *master_esw,
struct mlx5_eswitch *slave_esw);
int mlx5_eswitch_reload_reps(struct mlx5_eswitch *esw);
+static inline int mlx5_eswitch_num_vfs(struct mlx5_eswitch *esw)
+{
+if (mlx5_esw_allowed(esw))
+return esw->esw_funcs.num_vfs;
+return 0;
+}
#else /* CONFIG_MLX5_ESWITCH */
/* eswitch API stubs */
static inline int mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }


@ -3387,6 +3387,13 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
int err;
esw->mode = MLX5_ESWITCH_LEGACY;
+/* If changing from switchdev to legacy mode without sriov enabled,
+* no need to create legacy fdb.
+*/
+if (!mlx5_sriov_is_enabled(esw->dev))
+return 0;
err = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
if (err)
NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");


@ -312,6 +312,8 @@ revert_changes:
for (curr_dest = 0; curr_dest < num_vport_dests; curr_dest++) {
struct mlx5_termtbl_handle *tt = attr->dests[curr_dest].termtbl;
+attr->dests[curr_dest].termtbl = NULL;
/* search for the destination associated with the
* current term table
*/


@ -700,10 +700,13 @@ static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev)
return false;
#ifdef CONFIG_MLX5_ESWITCH
-dev = ldev->pf[MLX5_LAG_P1].dev;
-if ((mlx5_sriov_is_enabled(dev)) && !is_mdev_switchdev_mode(dev))
-return false;
+for (i = 0; i < ldev->ports; i++) {
+dev = ldev->pf[i].dev;
+if (mlx5_eswitch_num_vfs(dev->priv.eswitch) && !is_mdev_switchdev_mode(dev))
+return false;
+}
+dev = ldev->pf[MLX5_LAG_P1].dev;
mode = mlx5_eswitch_mode(dev);
for (i = 0; i < ldev->ports; i++)
if (mlx5_eswitch_mode(ldev->pf[i].dev) != mode)


@ -46,7 +46,7 @@ static int dr_table_set_miss_action_nic(struct mlx5dr_domain *dmn,
int mlx5dr_table_set_miss_action(struct mlx5dr_table *tbl,
struct mlx5dr_action *action)
{
-int ret;
+int ret = -EOPNOTSUPP;
if (action && action->action_type != DR_ACTION_TYP_FT)
return -EOPNOTSUPP;
@ -67,6 +67,9 @@ int mlx5dr_table_set_miss_action(struct mlx5dr_table *tbl,
goto out;
}
+if (ret)
+goto out;
/* Release old action */
if (tbl->miss_action)
refcount_dec(&tbl->miss_action->refcount);


@ -249,25 +249,26 @@ static void nixge_hw_dma_bd_release(struct net_device *ndev)
struct sk_buff *skb;
int i;
-for (i = 0; i < RX_BD_NUM; i++) {
-phys_addr = nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
-phys);
+if (priv->rx_bd_v) {
+for (i = 0; i < RX_BD_NUM; i++) {
+phys_addr = nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
+phys);
-dma_unmap_single(ndev->dev.parent, phys_addr,
-NIXGE_MAX_JUMBO_FRAME_SIZE,
-DMA_FROM_DEVICE);
+dma_unmap_single(ndev->dev.parent, phys_addr,
+NIXGE_MAX_JUMBO_FRAME_SIZE,
+DMA_FROM_DEVICE);
+skb = (struct sk_buff *)(uintptr_t)
+nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
+sw_id_offset);
+dev_kfree_skb(skb);
+}
-skb = (struct sk_buff *)(uintptr_t)
-nixge_hw_dma_bd_get_addr(&priv->rx_bd_v[i],
-sw_id_offset);
-dev_kfree_skb(skb);
-}
-if (priv->rx_bd_v)
dma_free_coherent(ndev->dev.parent,
sizeof(*priv->rx_bd_v) * RX_BD_NUM,
priv->rx_bd_v,
priv->rx_bd_p);
+}
if (priv->tx_skb)
devm_kfree(ndev->dev.parent, priv->tx_skb);


@ -767,34 +767,34 @@ static int qed_mcp_cancel_load_req(struct qed_hwfn *p_hwfn,
return rc;
}
-#define CONFIG_QEDE_BITMAP_IDX BIT(0)
-#define CONFIG_QED_SRIOV_BITMAP_IDX BIT(1)
-#define CONFIG_QEDR_BITMAP_IDX BIT(2)
-#define CONFIG_QEDF_BITMAP_IDX BIT(4)
-#define CONFIG_QEDI_BITMAP_IDX BIT(5)
-#define CONFIG_QED_LL2_BITMAP_IDX BIT(6)
+#define BITMAP_IDX_FOR_CONFIG_QEDE BIT(0)
+#define BITMAP_IDX_FOR_CONFIG_QED_SRIOV BIT(1)
+#define BITMAP_IDX_FOR_CONFIG_QEDR BIT(2)
+#define BITMAP_IDX_FOR_CONFIG_QEDF BIT(4)
+#define BITMAP_IDX_FOR_CONFIG_QEDI BIT(5)
+#define BITMAP_IDX_FOR_CONFIG_QED_LL2 BIT(6)
static u32 qed_get_config_bitmap(void)
{
u32 config_bitmap = 0x0;
if (IS_ENABLED(CONFIG_QEDE))
-config_bitmap |= CONFIG_QEDE_BITMAP_IDX;
+config_bitmap |= BITMAP_IDX_FOR_CONFIG_QEDE;
if (IS_ENABLED(CONFIG_QED_SRIOV))
-config_bitmap |= CONFIG_QED_SRIOV_BITMAP_IDX;
+config_bitmap |= BITMAP_IDX_FOR_CONFIG_QED_SRIOV;
if (IS_ENABLED(CONFIG_QED_RDMA))
-config_bitmap |= CONFIG_QEDR_BITMAP_IDX;
+config_bitmap |= BITMAP_IDX_FOR_CONFIG_QEDR;
if (IS_ENABLED(CONFIG_QED_FCOE))
-config_bitmap |= CONFIG_QEDF_BITMAP_IDX;
+config_bitmap |= BITMAP_IDX_FOR_CONFIG_QEDF;
if (IS_ENABLED(CONFIG_QED_ISCSI))
-config_bitmap |= CONFIG_QEDI_BITMAP_IDX;
+config_bitmap |= BITMAP_IDX_FOR_CONFIG_QEDI;
if (IS_ENABLED(CONFIG_QED_LL2))
-config_bitmap |= CONFIG_QED_LL2_BITMAP_IDX;
+config_bitmap |= BITMAP_IDX_FOR_CONFIG_QED_LL2;
return config_bitmap;
}


@ -2991,7 +2991,7 @@ static void qlcnic_83xx_recover_driver_lock(struct qlcnic_adapter *adapter)
QLCWRX(adapter->ahw, QLC_83XX_RECOVER_DRV_LOCK, val);
dev_info(&adapter->pdev->dev,
"%s: lock recovery initiated\n", __func__);
-msleep(QLC_83XX_DRV_LOCK_RECOVERY_DELAY);
+mdelay(QLC_83XX_DRV_LOCK_RECOVERY_DELAY);
val = QLCRDX(adapter->ahw, QLC_83XX_RECOVER_DRV_LOCK);
id = ((val >> 2) & 0xF);
if (id == adapter->portnum) {
@ -3027,7 +3027,7 @@ int qlcnic_83xx_lock_driver(struct qlcnic_adapter *adapter)
if (status)
break;
-msleep(QLC_83XX_DRV_LOCK_WAIT_DELAY);
+mdelay(QLC_83XX_DRV_LOCK_WAIT_DELAY);
i++;
if (i == 1)


@ -3020,6 +3020,7 @@ static int __maybe_unused ravb_resume(struct device *dev)
ret = ravb_open(ndev);
if (ret < 0)
return ret;
+ravb_set_rx_mode(ndev);
netif_device_attach(ndev);
}


@ -748,6 +748,8 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
if (fc & FLOW_RX) {
pr_debug("\tReceive Flow-Control ON\n");
flow |= GMAC_RX_FLOW_CTRL_RFE;
+} else {
+pr_debug("\tReceive Flow-Control OFF\n");
}
writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);


@ -1061,8 +1061,16 @@ static void stmmac_mac_link_up(struct phylink_config *config,
ctrl |= priv->hw->link.duplex;
/* Flow Control operation */
-if (tx_pause && rx_pause)
-stmmac_mac_flow_ctrl(priv, duplex);
+if (rx_pause && tx_pause)
+priv->flow_ctrl = FLOW_AUTO;
+else if (rx_pause && !tx_pause)
+priv->flow_ctrl = FLOW_RX;
+else if (!rx_pause && tx_pause)
+priv->flow_ctrl = FLOW_TX;
+else
+priv->flow_ctrl = FLOW_OFF;
+stmmac_mac_flow_ctrl(priv, duplex);
if (ctrl != old_ctrl)
writel(ctrl, priv->ioaddr + MAC_CTRL_REG);


@ -2082,7 +2082,7 @@ static void am65_cpsw_nuss_cleanup_ndev(struct am65_cpsw_common *common)
for (i = 0; i < common->port_num; i++) {
port = &common->ports[i];
-if (port->ndev)
+if (port->ndev && port->ndev->reg_state == NETREG_REGISTERED)
unregister_netdev(port->ndev);
}
}


@ -211,7 +211,7 @@ static __net_init int loopback_net_init(struct net *net)
int err;
err = -ENOMEM;
-dev = alloc_netdev(0, "lo", NET_NAME_UNKNOWN, loopback_setup);
+dev = alloc_netdev(0, "lo", NET_NAME_PREDICTABLE, loopback_setup);
if (!dev)
goto out;


@ -148,7 +148,7 @@ int fwnode_mdiobus_register_phy(struct mii_bus *bus,
/* Associate the fwnode with the device structure so it
* can be looked up later.
*/
-phy->mdio.dev.fwnode = child;
+phy->mdio.dev.fwnode = fwnode_handle_get(child);
/* All data is now stored in the phy struct, so register it */
rc = phy_device_register(phy);


@ -484,7 +484,14 @@ static int __init ntb_netdev_init_module(void)
rc = ntb_transport_register_client_dev(KBUILD_MODNAME);
if (rc)
return rc;
-return ntb_transport_register_client(&ntb_netdev_client);
+rc = ntb_transport_register_client(&ntb_netdev_client);
+if (rc) {
+ntb_transport_unregister_client_dev(KBUILD_MODNAME);
+return rc;
+}
+return 0;
}
module_init(ntb_netdev_init_module);


@ -217,6 +217,7 @@ static void phy_mdio_device_free(struct mdio_device *mdiodev)
static void phy_device_release(struct device *dev)
{
+fwnode_handle_put(dev->fwnode);
kfree(to_phy_device(dev));
}
@ -1520,6 +1521,7 @@ error:
error_module_put:
module_put(d->driver->owner);
+d->driver = NULL;
error_put_device:
put_device(d);
if (ndev_owner != bus->owner)


@ -1603,19 +1603,29 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
linkmode_copy(supported, phy->supported);
linkmode_copy(config.advertising, phy->advertising);
-/* Clause 45 PHYs switch their Serdes lane between several different
-* modes, normally 10GBASE-R, SGMII. Some use 2500BASE-X for 2.5G
-* speeds. We really need to know which interface modes the PHY and
-* MAC supports to properly work out which linkmodes can be supported.
+/* Check whether we would use rate matching for the proposed interface
+* mode.
*/
-if (phy->is_c45 &&
+config.rate_matching = phy_get_rate_matching(phy, interface);
+/* Clause 45 PHYs may switch their Serdes lane between, e.g. 10GBASE-R,
+* 5GBASE-R, 2500BASE-X and SGMII if they are not using rate matching.
+* For some interface modes (e.g. RXAUI, XAUI and USXGMII) switching
+* their Serdes is either unnecessary or not reasonable.
+*
+* For these which switch interface modes, we really need to know which
+* interface modes the PHY supports to properly work out which ethtool
+* linkmodes can be supported. For now, as a work-around, we validate
+* against all interface modes, which may lead to more ethtool link
+* modes being advertised than are actually supported.
+*/
+if (phy->is_c45 && config.rate_matching == RATE_MATCH_NONE &&
interface != PHY_INTERFACE_MODE_RXAUI &&
interface != PHY_INTERFACE_MODE_XAUI &&
interface != PHY_INTERFACE_MODE_USXGMII)
config.interface = PHY_INTERFACE_MODE_NA;
else
config.interface = interface;
-config.rate_matching = phy_get_rate_matching(phy, config.interface);
ret = phylink_validate(pl, supported, &config);
if (ret) {


@ -686,7 +686,6 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
if (tun)
xdp_rxq_info_unreg(&tfile->xdp_rxq);
ptr_ring_cleanup(&tfile->tx_ring, tun_ptr_free);
-sock_put(&tfile->sk);
}
}
@ -702,6 +701,9 @@ static void tun_detach(struct tun_file *tfile, bool clean)
if (dev)
netdev_state_change(dev);
rtnl_unlock();
+if (clean)
+sock_put(&tfile->sk);
}
static void tun_detach_all(struct net_device *dev)


@ -959,30 +959,51 @@ static inline void wilc_wfi_cfg_parse_ch_attr(u8 *buf, u32 len, u8 sta_ch)
return;
while (index + sizeof(*e) <= len) {
+u16 attr_size;
e = (struct wilc_attr_entry *)&buf[index];
-if (e->attr_type == IEEE80211_P2P_ATTR_CHANNEL_LIST)
+attr_size = le16_to_cpu(e->attr_len);
+if (index + sizeof(*e) + attr_size > len)
+return;
+if (e->attr_type == IEEE80211_P2P_ATTR_CHANNEL_LIST &&
+attr_size >= (sizeof(struct wilc_attr_ch_list) - sizeof(*e)))
ch_list_idx = index;
-else if (e->attr_type == IEEE80211_P2P_ATTR_OPER_CHANNEL)
+else if (e->attr_type == IEEE80211_P2P_ATTR_OPER_CHANNEL &&
+attr_size == (sizeof(struct wilc_attr_oper_ch) - sizeof(*e)))
op_ch_idx = index;
if (ch_list_idx && op_ch_idx)
break;
-index += le16_to_cpu(e->attr_len) + sizeof(*e);
+index += sizeof(*e) + attr_size;
}
if (ch_list_idx) {
-u16 attr_size;
-struct wilc_ch_list_elem *e;
-int i;
+u16 elem_size;
ch_list = (struct wilc_attr_ch_list *)&buf[ch_list_idx];
-attr_size = le16_to_cpu(ch_list->attr_len);
-for (i = 0; i < attr_size;) {
+/* the number of bytes following the final 'elem' member */
+elem_size = le16_to_cpu(ch_list->attr_len) -
+(sizeof(*ch_list) - sizeof(struct wilc_attr_entry));
+for (unsigned int i = 0; i < elem_size;) {
+struct wilc_ch_list_elem *e;
e = (struct wilc_ch_list_elem *)(ch_list->elem + i);
+i += sizeof(*e);
+if (i > elem_size)
+break;
+i += e->no_of_channels;
+if (i > elem_size)
+break;
if (e->op_class == WILC_WLAN_OPERATING_CLASS_2_4GHZ) {
memset(e->ch_list, sta_ch, e->no_of_channels);
break;
}
-i += e->no_of_channels;
}
}
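
The parsing fix above is an instance of defensive TLV walking: never trust an attribute's self-reported length until it has been checked against the real end of the buffer. A standalone sketch of the pattern with a hypothetical TLV framing (1-byte type, 2-byte little-endian length), not the wilc1000 structures themselves:

#include <endian.h> /* le16toh(); glibc */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct tlv_hdr {
	uint8_t type;
	uint16_t len; /* little-endian payload length */
} __attribute__((packed));

static const uint8_t *tlv_find(const uint8_t *buf, size_t len,
			       uint8_t type, uint16_t *out_len)
{
	size_t index = 0;

	while (index + sizeof(struct tlv_hdr) <= len) {
		struct tlv_hdr h;
		uint16_t attr_size;

		memcpy(&h, &buf[index], sizeof(h));
		attr_size = le16toh(h.len);

		/* validate before trusting, as the wilc1000 fix now does */
		if (index + sizeof(h) + attr_size > len)
			return NULL;

		if (h.type == type) {
			*out_len = attr_size;
			return &buf[index + sizeof(h)];
		}
		index += sizeof(h) + attr_size;
	}
	return NULL;
}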


@ -482,14 +482,25 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
rsn_ie = cfg80211_find_ie(WLAN_EID_RSN, ies->data, ies->len);
if (rsn_ie) {
+int rsn_ie_len = sizeof(struct element) + rsn_ie[1];
int offset = 8;
-param->mode_802_11i = 2;
-param->rsn_found = true;
/* extract RSN capabilities */
-offset += (rsn_ie[offset] * 4) + 2;
-offset += (rsn_ie[offset] * 4) + 2;
-memcpy(param->rsn_cap, &rsn_ie[offset], 2);
+if (offset < rsn_ie_len) {
+/* skip over pairwise suites */
+offset += (rsn_ie[offset] * 4) + 2;
+if (offset < rsn_ie_len) {
+/* skip over authentication suites */
+offset += (rsn_ie[offset] * 4) + 2;
+if (offset + 1 < rsn_ie_len) {
+param->mode_802_11i = 2;
+param->rsn_found = true;
+memcpy(param->rsn_cap, &rsn_ie[offset], 2);
+}
+}
+}
}
if (param->rsn_found) {


@ -365,7 +365,8 @@ static void ipc_mux_dl_cmd_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb)
/* Pass the DL packet to the netif layer. */
static int ipc_mux_net_receive(struct iosm_mux *ipc_mux, int if_id,
struct iosm_wwan *wwan, u32 offset,
-u8 service_class, struct sk_buff *skb)
+u8 service_class, struct sk_buff *skb,
+u32 pkt_len)
{
struct sk_buff *dest_skb = skb_clone(skb, GFP_ATOMIC);
@ -373,7 +374,7 @@ static int ipc_mux_net_receive(struct iosm_mux *ipc_mux, int if_id,
return -ENOMEM;
skb_pull(dest_skb, offset);
-skb_set_tail_pointer(dest_skb, dest_skb->len);
+skb_trim(dest_skb, pkt_len);
/* Pass the packet to the netif layer. */
dest_skb->priority = service_class;
@ -429,7 +430,7 @@ static void ipc_mux_dl_fcth_decode(struct iosm_mux *ipc_mux,
static void ipc_mux_dl_adgh_decode(struct iosm_mux *ipc_mux,
struct sk_buff *skb)
{
-u32 pad_len, packet_offset;
+u32 pad_len, packet_offset, adgh_len;
struct iosm_wwan *wwan;
struct mux_adgh *adgh;
u8 *block = skb->data;
@ -470,10 +471,12 @@ static void ipc_mux_dl_adgh_decode(struct iosm_mux *ipc_mux,
packet_offset = sizeof(*adgh) + pad_len;
if_id += ipc_mux->wwan_q_offset;
+adgh_len = le16_to_cpu(adgh->length);
/* Pass the packet to the netif layer */
rc = ipc_mux_net_receive(ipc_mux, if_id, wwan, packet_offset,
-adgh->service_class, skb);
+adgh->service_class, skb,
+adgh_len - packet_offset);
if (rc) {
dev_err(ipc_mux->dev, "mux adgh decoding error");
return;
@ -547,7 +550,7 @@ static int mux_dl_process_dg(struct iosm_mux *ipc_mux, struct mux_adbh *adbh,
int if_id, int nr_of_dg)
{
u32 dl_head_pad_len = ipc_mux->session[if_id].dl_head_pad_len;
-u32 packet_offset, i, rc;
+u32 packet_offset, i, rc, dg_len;
for (i = 0; i < nr_of_dg; i++, dg++) {
if (le32_to_cpu(dg->datagram_index)
@ -562,11 +565,12 @@ static int mux_dl_process_dg(struct iosm_mux *ipc_mux, struct mux_adbh *adbh,
packet_offset =
le32_to_cpu(dg->datagram_index) +
dl_head_pad_len;
+dg_len = le16_to_cpu(dg->datagram_length);
/* Pass the packet to the netif layer. */
rc = ipc_mux_net_receive(ipc_mux, if_id, ipc_mux->wwan,
packet_offset,
-dg->service_class,
-skb);
+dg->service_class, skb,
+dg_len - dl_head_pad_len);
if (rc)
goto dg_error;
}
@ -1207,10 +1211,9 @@ static int mux_ul_dg_update_tbl_index(struct iosm_mux *ipc_mux,
qlth_n_ql_size, ul_list);
ipc_mux_ul_adb_finish(ipc_mux);
if (ipc_mux_ul_adb_allocate(ipc_mux, adb, &ipc_mux->size_needed,
-IOSM_AGGR_MUX_SIG_ADBH)) {
-dev_kfree_skb(src_skb);
+IOSM_AGGR_MUX_SIG_ADBH))
return -ENOMEM;
-}
ipc_mux->size_needed = le32_to_cpu(adb->adbh->block_length);
ipc_mux->size_needed += offsetof(struct mux_adth, dg);
@ -1471,8 +1474,7 @@ void ipc_mux_ul_encoded_process(struct iosm_mux *ipc_mux, struct sk_buff *skb)
ipc_mux->ul_data_pend_bytes);
/* Reset the skb settings. */
-skb->tail = 0;
-skb->len = 0;
+skb_trim(skb, 0);
/* Add the consumed ADB to the free list. */
skb_queue_tail((&ipc_mux->ul_adb.free_list), skb);


@ -122,7 +122,7 @@ struct iosm_protocol {
struct iosm_imem *imem;
struct ipc_rsp *rsp_ring[IPC_MEM_MSG_ENTRIES];
struct device *dev;
-phys_addr_t phy_ap_shm;
+dma_addr_t phy_ap_shm;
u32 old_msg_tail;
};


@ -14,7 +14,7 @@
#define OCR_MODE_TEST 0x01
#define OCR_MODE_NORMAL 0x02
#define OCR_MODE_CLOCK 0x03
-#define OCR_MODE_MASK 0x07
+#define OCR_MODE_MASK 0x03
#define OCR_TX0_INVERT 0x04
#define OCR_TX0_PULLDOWN 0x08
#define OCR_TX0_PULLUP 0x10


@ -26,6 +26,8 @@ struct sctp_sched_ops {
int (*init)(struct sctp_stream *stream);
/* Init a stream */
int (*init_sid)(struct sctp_stream *stream, __u16 sid, gfp_t gfp);
+/* free a stream */
+void (*free_sid)(struct sctp_stream *stream, __u16 sid);
/* Frees the entire thing */
void (*free)(struct sctp_stream *stream);


@ -74,7 +74,7 @@ bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner,
gfp_flags | __GFP_NOWARN);
if (selem) {
if (value)
-memcpy(SDATA(selem)->data, value, smap->map.value_size);
+copy_map_value(&smap->map, SDATA(selem)->data, value);
return selem;
}
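
The one-line bpf fix swaps memcpy() for copy_map_value() because, in maps created with BPF_F_LOCK, the value embeds a kernel-owned spin lock that a raw copy would overwrite. A simplified standalone illustration of copying around a lock field (layout and helper are hypothetical, not the kernel's actual copy_map_value()):

#include <stddef.h>
#include <string.h>

struct map_value {
	int cnt;
	int lock; /* stands in for struct bpf_spin_lock */
	long payload;
};

static void copy_value_skip_lock(struct map_value *dst,
				 const struct map_value *src)
{
	size_t off = offsetof(struct map_value, lock);
	size_t end = off + sizeof(dst->lock);

	memcpy(dst, src, off);                      /* bytes before the lock */
	memcpy((char *)dst + end, (const char *)src + end,
	       sizeof(*dst) - end);                 /* bytes after the lock  */
	/* dst->lock is deliberately left untouched */
}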


@ -9030,7 +9030,7 @@ static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog,
PERF_RECORD_KSYMBOL_TYPE_BPF,
(u64)(unsigned long)subprog->bpf_func,
subprog->jited_len, unregister,
-prog->aux->ksym.name);
+subprog->aux->ksym.name);
}
}
}


@ -862,8 +862,10 @@ static int p9_socket_open(struct p9_client *client, struct socket *csocket)
struct file *file;
p = kzalloc(sizeof(struct p9_trans_fd), GFP_KERNEL);
-if (!p)
+if (!p) {
+sock_release(csocket);
return -ENOMEM;
+}
csocket->sk->sk_allocation = GFP_NOIO;
file = sock_alloc_file(csocket, 0, NULL);


@ -351,17 +351,18 @@ static void hsr_deliver_master(struct sk_buff *skb, struct net_device *dev,
struct hsr_node *node_src)
{
bool was_multicast_frame;
-int res;
+int res, recv_len;
was_multicast_frame = (skb->pkt_type == PACKET_MULTICAST);
hsr_addr_subst_source(node_src, skb);
skb_pull(skb, ETH_HLEN);
+recv_len = skb->len;
res = netif_rx(skb);
if (res == NET_RX_DROP) {
dev->stats.rx_dropped++;
} else {
dev->stats.rx_packets++;
-dev->stats.rx_bytes += skb->len;
+dev->stats.rx_bytes += recv_len;
if (was_multicast_frame)
dev->stats.multicast++;
}
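
The hsr hunk is a use-after-free fix with a pattern worth calling out: netif_rx() consumes the skb, so anything needed afterwards (here the byte count for rx_bytes) must be read before the handoff. A kernel-context sketch of the corrected shape, not the driver's literal code:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void deliver_to_master(struct net_device *dev, struct sk_buff *skb)
{
	int recv_len = skb->len; /* read while we still own the skb */

	if (netif_rx(skb) == NET_RX_DROP) {
		dev->stats.rx_dropped++;
	} else {
		dev->stats.rx_packets++;
		dev->stats.rx_bytes += recv_len; /* skb may already be freed */
	}
}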


@ -888,9 +888,11 @@ int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
return 1;
}
-/* cannot match on nexthop object attributes */
-if (fi->nh)
-return 1;
+if (fi->nh) {
+if (cfg->fc_oif || cfg->fc_gw_family || cfg->fc_mp)
+return 1;
+return 0;
+}
if (cfg->fc_oif || cfg->fc_gw_family) {
struct fib_nh *nh;


@ -452,6 +452,9 @@ static u32 ieee80211_get_rate_duration(struct ieee80211_hw *hw,
(status->encoding == RX_ENC_HE && streams > 8)))
return 0;
+if (idx >= MCS_GROUP_RATES)
+return 0;
duration = airtime_mcs_groups[group].duration[idx];
duration <<= airtime_mcs_groups[group].shift;
*overhead = 36 + (streams << 2);


@ -2354,12 +2354,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
goto out;
}
-/* if we are invoked by the msk cleanup code, the subflow is
-* already orphaned
-*/
-if (ssk->sk_socket)
-sock_orphan(ssk);
+sock_orphan(ssk);
subflow->disposable = 1;
/* if ssk hit tcp_done(), tcp_cleanup_ulp() cleared the related ops
@ -2940,7 +2935,11 @@ cleanup:
if (ssk == msk->first)
subflow->fail_tout = 0;
-sock_orphan(ssk);
+/* detach from the parent socket, but allow data_ready to
+* push incoming data into the mptcp stack, to properly ack it
+*/
+ssk->sk_socket = NULL;
+ssk->sk_wq = NULL;
unlock_sock_fast(ssk, slow);
}
sock_orphan(sk);


@ -1745,16 +1745,16 @@ void mptcp_subflow_queue_clean(struct sock *listener_ssk)
for (msk = head; msk; msk = next) {
struct sock *sk = (struct sock *)msk;
-bool slow, do_cancel_work;
+bool do_cancel_work;
sock_hold(sk);
-slow = lock_sock_fast_nested(sk);
+lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
next = msk->dl_next;
msk->first = NULL;
msk->dl_next = NULL;
do_cancel_work = __mptcp_close(sk, 0);
-unlock_sock_fast(sk, slow);
+release_sock(sk);
if (do_cancel_work)
mptcp_cancel_work(sk);
sock_put(sk);


@ -2293,8 +2293,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
if (skb->ip_summed == CHECKSUM_PARTIAL)
status |= TP_STATUS_CSUMNOTREADY;
else if (skb->pkt_type != PACKET_OUTGOING &&
-(skb->ip_summed == CHECKSUM_COMPLETE ||
-skb_csum_unnecessary(skb)))
+skb_csum_unnecessary(skb))
status |= TP_STATUS_CSUM_VALID;
if (snaplen > res)
@ -3520,8 +3519,7 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
if (skb->ip_summed == CHECKSUM_PARTIAL)
aux.tp_status |= TP_STATUS_CSUMNOTREADY;
else if (skb->pkt_type != PACKET_OUTGOING &&
-(skb->ip_summed == CHECKSUM_COMPLETE ||
-skb_csum_unnecessary(skb)))
+skb_csum_unnecessary(skb))
aux.tp_status |= TP_STATUS_CSUM_VALID;
aux.tp_len = origlen;
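
The af_packet change narrows TP_STATUS_CSUM_VALID to frames the stack actually verified (skb_csum_unnecessary()); CHECKSUM_COMPLETE only means the device handed over a raw checksum that nobody has validated yet. From the consumer side, a userspace sketch of reading the bits on a PACKET_RX_RING frame, using the uapi definitions as-is:

#include <linux/if_packet.h>
#include <stdio.h>

static void report_csum(const struct tpacket_hdr *hdr)
{
	if (hdr->tp_status & TP_STATUS_CSUMNOTREADY)
		puts("checksum not computed yet (tx offload path)");
	else if (hdr->tp_status & TP_STATUS_CSUM_VALID)
		puts("checksum verified by the stack");
	else
		puts("no checksum verdict for this frame");
}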


@ -52,6 +52,19 @@ static void sctp_stream_shrink_out(struct sctp_stream *stream, __u16 outcnt)
}
}
+static void sctp_stream_free_ext(struct sctp_stream *stream, __u16 sid)
+{
+struct sctp_sched_ops *sched;
+if (!SCTP_SO(stream, sid)->ext)
+return;
+sched = sctp_sched_ops_from_stream(stream);
+sched->free_sid(stream, sid);
+kfree(SCTP_SO(stream, sid)->ext);
+SCTP_SO(stream, sid)->ext = NULL;
+}
/* Migrates chunks from stream queues to new stream queues if needed,
* but not across associations. Also, removes those chunks to streams
* higher than the new max.
@ -70,16 +83,14 @@ static void sctp_stream_outq_migrate(struct sctp_stream *stream,
* sctp_stream_update will swap ->out pointers.
*/
for (i = 0; i < outcnt; i++) {
-kfree(SCTP_SO(new, i)->ext);
+sctp_stream_free_ext(new, i);
SCTP_SO(new, i)->ext = SCTP_SO(stream, i)->ext;
SCTP_SO(stream, i)->ext = NULL;
}
}
-for (i = outcnt; i < stream->outcnt; i++) {
-kfree(SCTP_SO(stream, i)->ext);
-SCTP_SO(stream, i)->ext = NULL;
-}
+for (i = outcnt; i < stream->outcnt; i++)
+sctp_stream_free_ext(stream, i);
}
static int sctp_stream_alloc_out(struct sctp_stream *stream, __u16 outcnt,
@ -174,9 +185,9 @@ void sctp_stream_free(struct sctp_stream *stream)
struct sctp_sched_ops *sched = sctp_sched_ops_from_stream(stream);
int i;
-sched->free(stream);
+sched->unsched_all(stream);
for (i = 0; i < stream->outcnt; i++)
-kfree(SCTP_SO(stream, i)->ext);
+sctp_stream_free_ext(stream, i);
genradix_free(&stream->out);
genradix_free(&stream->in);
}


@ -46,6 +46,10 @@ static int sctp_sched_fcfs_init_sid(struct sctp_stream *stream, __u16 sid,
return 0;
}
+static void sctp_sched_fcfs_free_sid(struct sctp_stream *stream, __u16 sid)
+{
+}
static void sctp_sched_fcfs_free(struct sctp_stream *stream)
{
}
@ -96,6 +100,7 @@ static struct sctp_sched_ops sctp_sched_fcfs = {
.get = sctp_sched_fcfs_get,
.init = sctp_sched_fcfs_init,
.init_sid = sctp_sched_fcfs_init_sid,
.free_sid = sctp_sched_fcfs_free_sid,
.free = sctp_sched_fcfs_free,
.enqueue = sctp_sched_fcfs_enqueue,
.dequeue = sctp_sched_fcfs_dequeue,


@ -204,6 +204,24 @@ static int sctp_sched_prio_init_sid(struct sctp_stream *stream, __u16 sid,
return sctp_sched_prio_set(stream, sid, 0, gfp);
}
+static void sctp_sched_prio_free_sid(struct sctp_stream *stream, __u16 sid)
+{
+struct sctp_stream_priorities *prio = SCTP_SO(stream, sid)->ext->prio_head;
+int i;
+if (!prio)
+return;
+SCTP_SO(stream, sid)->ext->prio_head = NULL;
+for (i = 0; i < stream->outcnt; i++) {
+if (SCTP_SO(stream, i)->ext &&
+SCTP_SO(stream, i)->ext->prio_head == prio)
+return;
+}
+kfree(prio);
+}
static void sctp_sched_prio_free(struct sctp_stream *stream)
{
struct sctp_stream_priorities *prio, *n;
@ -323,6 +341,7 @@ static struct sctp_sched_ops sctp_sched_prio = {
.get = sctp_sched_prio_get,
.init = sctp_sched_prio_init,
.init_sid = sctp_sched_prio_init_sid,
+.free_sid = sctp_sched_prio_free_sid,
.free = sctp_sched_prio_free,
.enqueue = sctp_sched_prio_enqueue,
.dequeue = sctp_sched_prio_dequeue,


@ -90,6 +90,10 @@ static int sctp_sched_rr_init_sid(struct sctp_stream *stream, __u16 sid,
return 0;
}
+static void sctp_sched_rr_free_sid(struct sctp_stream *stream, __u16 sid)
+{
+}
static void sctp_sched_rr_free(struct sctp_stream *stream)
{
sctp_sched_rr_unsched_all(stream);
@ -177,6 +181,7 @@ static struct sctp_sched_ops sctp_sched_rr = {
.get = sctp_sched_rr_get,
.init = sctp_sched_rr_init,
.init_sid = sctp_sched_rr_init_sid,
+.free_sid = sctp_sched_rr_free_sid,
.free = sctp_sched_rr_free,
.enqueue = sctp_sched_rr_enqueue,
.dequeue = sctp_sched_rr_dequeue,


@ -1971,6 +1971,9 @@ rcv:
/* Ok, everything's fine, try to synch own keys according to peers' */
tipc_crypto_key_synch(rx, *skb);
+/* Re-fetch skb cb as skb might be changed in tipc_msg_validate */
+skb_cb = TIPC_SKB_CB(*skb);
/* Mark skb decrypted */
skb_cb->decrypted = 1;
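
The tipc fix re-fetches the control-block pointer because tipc_msg_validate() may replace the skb, leaving the cached pointer dangling. The general rule: re-derive any pointer into a buffer after calling anything that can reallocate it. A standalone illustration, with realloc() standing in for the skb replacement:

#include <stdlib.h>

struct cb { int decrypted; };

static int validate(char **buf) /* may reallocate, like tipc_msg_validate() */
{
	char *n = realloc(*buf, 256);

	if (!n)
		return -1;
	*buf = n;
	return 0;
}

static int rcv(char **buf)
{
	struct cb *skb_cb = (struct cb *)*buf;

	if (validate(buf))
		return -1;

	skb_cb = (struct cb *)*buf; /* re-fetch: the old pointer may dangle */
	skb_cb->decrypted = 1;
	return 0;
}

int main(void)
{
	char *buf = calloc(1, 64);
	int ret = buf ? rcv(&buf) : -1;

	free(buf);
	return ret ? 1 : 0;
}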


@ -330,7 +330,8 @@ static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
* determine if they are the same ie.
*/
if (tmp_old[0] == WLAN_EID_VENDOR_SPECIFIC) {
-if (!memcmp(tmp_old + 2, tmp + 2, 5)) {
+if (tmp_old[1] >= 5 && tmp[1] >= 5 &&
+!memcmp(tmp_old + 2, tmp + 2, 5)) {
/* same vendor ie, copy from
* subelement
*/
@ -2526,10 +2527,15 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
const struct cfg80211_bss_ies *ies1, *ies2;
size_t ielen = len - offsetof(struct ieee80211_mgmt,
u.probe_resp.variable);
-struct cfg80211_non_tx_bss non_tx_data;
+struct cfg80211_non_tx_bss non_tx_data = {};
res = cfg80211_inform_single_bss_frame_data(wiphy, data, mgmt,
len, gfp);
+/* don't do any further MBSSID handling for S1G */
+if (ieee80211_is_s1g_beacon(mgmt->frame_control))
+return res;
if (!res || !wiphy->support_mbssid ||
!cfg80211_find_elem(WLAN_EID_MULTIPLE_BSSID, ie, ielen))
return res;


@ -11169,7 +11169,7 @@ static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf
}
*link = bpf_program__attach_raw_tracepoint(prog, tp_name);
-return libbpf_get_error(link);
+return libbpf_get_error(*link);
}
/* Common logic for all BPF program types that attach to a btf_id */


@ -234,7 +234,7 @@ static int probe_map_create(enum bpf_map_type map_type)
case BPF_MAP_TYPE_USER_RINGBUF:
key_size = 0;
value_size = 0;
-max_entries = 4096;
+max_entries = sysconf(_SC_PAGE_SIZE);
break;
case BPF_MAP_TYPE_STRUCT_OPS:
/* we'll get -ENOTSUPP for invalid BTF type ID for struct_ops */
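
The probe fix reflects a kernel-side constraint: a BPF_MAP_TYPE_RINGBUF's max_entries must be a power-of-two multiple of the page size, and pages are not 4096 bytes everywhere (e.g. 64K on some arm64/ppc64 configurations). A userspace sketch of creating one with libbpf's bpf_map_create():

#include <bpf/bpf.h>
#include <unistd.h>

static int create_ringbuf(void)
{
	/* key/value sizes must be 0 for ringbuf maps */
	return bpf_map_create(BPF_MAP_TYPE_RINGBUF, "rb", 0, 0,
			      sysconf(_SC_PAGE_SIZE), NULL);
}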


@ -77,6 +77,7 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
__u32 len = sizeof(info);
struct epoll_event *e;
struct ring *r;
+__u64 mmap_sz;
void *tmp;
int err;
@ -115,8 +116,7 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
r->mask = info.max_entries - 1;
/* Map writable consumer page */
-tmp = mmap(NULL, rb->page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
-map_fd, 0);
+tmp = mmap(NULL, rb->page_size, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0);
if (tmp == MAP_FAILED) {
err = -errno;
pr_warn("ringbuf: failed to mmap consumer page for map fd=%d: %d\n",
@ -129,8 +129,12 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
* data size to allow simple reading of samples that wrap around the
* end of a ring buffer. See kernel implementation for details.
* */
-tmp = mmap(NULL, rb->page_size + 2 * info.max_entries, PROT_READ,
-MAP_SHARED, map_fd, rb->page_size);
+mmap_sz = rb->page_size + 2 * (__u64)info.max_entries;
+if (mmap_sz != (__u64)(size_t)mmap_sz) {
+pr_warn("ringbuf: ring buffer size (%u) is too big\n", info.max_entries);
+return libbpf_err(-E2BIG);
+}
+tmp = mmap(NULL, (size_t)mmap_sz, PROT_READ, MAP_SHARED, map_fd, rb->page_size);
if (tmp == MAP_FAILED) {
err = -errno;
ringbuf_unmap_ring(rb, r);
@ -348,6 +352,7 @@ static int user_ringbuf_map(struct user_ring_buffer *rb, int map_fd)
{
struct bpf_map_info info;
__u32 len = sizeof(info);
+__u64 mmap_sz;
void *tmp;
struct epoll_event *rb_epoll;
int err;
@ -384,8 +389,13 @@ static int user_ringbuf_map(struct user_ring_buffer *rb, int map_fd)
* simple reading and writing of samples that wrap around the end of
* the buffer. See the kernel implementation for details.
*/
-tmp = mmap(NULL, rb->page_size + 2 * info.max_entries,
-PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, rb->page_size);
+mmap_sz = rb->page_size + 2 * (__u64)info.max_entries;
+if (mmap_sz != (__u64)(size_t)mmap_sz) {
+pr_warn("user ringbuf: ring buf size (%u) is too big\n", info.max_entries);
+return -E2BIG;
+}
+tmp = mmap(NULL, (size_t)mmap_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
+map_fd, rb->page_size);
if (tmp == MAP_FAILED) {
err = -errno;
pr_warn("user ringbuf: failed to mmap data pages for map fd=%d: %d\n",
@ -476,6 +486,10 @@ void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size)
__u64 cons_pos, prod_pos;
struct ringbuf_hdr *hdr;
+/* The top two bits are used as special flags */
+if (size & (BPF_RINGBUF_BUSY_BIT | BPF_RINGBUF_DISCARD_BIT))
+return errno = E2BIG, NULL;
/* Synchronizes with smp_store_release() in __bpf_user_ringbuf_peek() in
* the kernel.
*/
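
The two mmap changes above guard the same 32-bit truncation: rb->page_size + 2 * max_entries is computed in 64 bits, and on a 32-bit build it must survive the round trip through size_t before being handed to mmap(). The check in isolation, as a runnable sketch:

#include <stdint.h>
#include <stdio.h>

static int mmap_size_ok(uint64_t sz)
{
	return sz == (uint64_t)(size_t)sz; /* false if size_t truncates sz */
}

int main(void)
{
	printf("%d\n", mmap_size_ok(1ULL << 20)); /* 1 everywhere          */
	printf("%d\n", mmap_size_ok(1ULL << 33)); /* 0 on 32-bit userland  */
	return 0;
}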


@ -458,7 +458,7 @@ static void test_sk_storage_map_basic(void)
struct {
int cnt;
int lock;
-} value = { .cnt = 0xeB9f, .lock = 0, }, lookup_value;
+} value = { .cnt = 0xeB9f, .lock = 1, }, lookup_value;
struct bpf_map_create_opts bad_xattr;
int btf_fd, map_fd, sk_fd, err;
@ -483,38 +483,41 @@ static void test_sk_storage_map_basic(void)
"err:%d errno:%d\n", err, errno);
err = bpf_map_lookup_elem_flags(map_fd, &sk_fd, &lookup_value,
BPF_F_LOCK);
-CHECK(err || lookup_value.cnt != value.cnt,
+CHECK(err || lookup_value.lock || lookup_value.cnt != value.cnt,
"bpf_map_lookup_elem_flags(BPF_F_LOCK)",
-"err:%d errno:%d cnt:%x(%x)\n",
-err, errno, lookup_value.cnt, value.cnt);
+"err:%d errno:%d lock:%x cnt:%x(%x)\n",
+err, errno, lookup_value.lock, lookup_value.cnt, value.cnt);
/* Bump the cnt and update with BPF_EXIST | BPF_F_LOCK */
value.cnt += 1;
+value.lock = 2;
err = bpf_map_update_elem(map_fd, &sk_fd, &value,
BPF_EXIST | BPF_F_LOCK);
CHECK(err, "bpf_map_update_elem(BPF_EXIST|BPF_F_LOCK)",
"err:%d errno:%d\n", err, errno);
err = bpf_map_lookup_elem_flags(map_fd, &sk_fd, &lookup_value,
BPF_F_LOCK);
-CHECK(err || lookup_value.cnt != value.cnt,
+CHECK(err || lookup_value.lock || lookup_value.cnt != value.cnt,
"bpf_map_lookup_elem_flags(BPF_F_LOCK)",
-"err:%d errno:%d cnt:%x(%x)\n",
-err, errno, lookup_value.cnt, value.cnt);
+"err:%d errno:%d lock:%x cnt:%x(%x)\n",
+err, errno, lookup_value.lock, lookup_value.cnt, value.cnt);
/* Bump the cnt and update with BPF_EXIST */
value.cnt += 1;
+value.lock = 2;
err = bpf_map_update_elem(map_fd, &sk_fd, &value, BPF_EXIST);
CHECK(err, "bpf_map_update_elem(BPF_EXIST)",
"err:%d errno:%d\n", err, errno);
err = bpf_map_lookup_elem_flags(map_fd, &sk_fd, &lookup_value,
BPF_F_LOCK);
-CHECK(err || lookup_value.cnt != value.cnt,
+CHECK(err || lookup_value.lock || lookup_value.cnt != value.cnt,
"bpf_map_lookup_elem_flags(BPF_F_LOCK)",
-"err:%d errno:%d cnt:%x(%x)\n",
-err, errno, lookup_value.cnt, value.cnt);
+"err:%d errno:%d lock:%x cnt:%x(%x)\n",
+err, errno, lookup_value.lock, lookup_value.cnt, value.cnt);
/* Update with BPF_NOEXIST */
value.cnt += 1;
+value.lock = 2;
err = bpf_map_update_elem(map_fd, &sk_fd, &value,
BPF_NOEXIST | BPF_F_LOCK);
CHECK(!err || errno != EEXIST,
@ -526,22 +529,23 @@ static void test_sk_storage_map_basic(void)
value.cnt -= 1;
err = bpf_map_lookup_elem_flags(map_fd, &sk_fd, &lookup_value,
BPF_F_LOCK);
-CHECK(err || lookup_value.cnt != value.cnt,
+CHECK(err || lookup_value.lock || lookup_value.cnt != value.cnt,
"bpf_map_lookup_elem_flags(BPF_F_LOCK)",
-"err:%d errno:%d cnt:%x(%x)\n",
-err, errno, lookup_value.cnt, value.cnt);
+"err:%d errno:%d lock:%x cnt:%x(%x)\n",
+err, errno, lookup_value.lock, lookup_value.cnt, value.cnt);
/* Bump the cnt again and update with map_flags == 0 */
value.cnt += 1;
+value.lock = 2;
err = bpf_map_update_elem(map_fd, &sk_fd, &value, 0);
CHECK(err, "bpf_map_update_elem()", "err:%d errno:%d\n",
err, errno);
err = bpf_map_lookup_elem_flags(map_fd, &sk_fd, &lookup_value,
BPF_F_LOCK);
-CHECK(err || lookup_value.cnt != value.cnt,
+CHECK(err || lookup_value.lock || lookup_value.cnt != value.cnt,
"bpf_map_lookup_elem_flags(BPF_F_LOCK)",
-"err:%d errno:%d cnt:%x(%x)\n",
-err, errno, lookup_value.cnt, value.cnt);
+"err:%d errno:%d lock:%x cnt:%x(%x)\n",
+err, errno, lookup_value.lock, lookup_value.cnt, value.cnt);
/* Test delete elem */
err = bpf_map_delete_elem(map_fd, &sk_fd);


@ -358,10 +358,12 @@ static int get_syms(char ***symsp, size_t *cntp)
* We attach to almost all kernel functions and some of them
* will cause 'suspicious RCU usage' when fprobe is attached
* to them. Filter out the current culprits - arch_cpu_idle
-* and rcu_* functions.
+* default_idle and rcu_* functions.
*/
if (!strcmp(name, "arch_cpu_idle"))
continue;
if (!strcmp(name, "default_idle"))
continue;
if (!strncmp(name, "rcu_", 4))
continue;
if (!strcmp(name, "bpf_dispatcher_xdp_func"))
@ -400,7 +402,7 @@ error:
return err;
}
-static void test_bench_attach(void)
+void serial_test_kprobe_multi_bench_attach(void)
{
LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
struct kprobe_multi_empty *skel = NULL;
@ -468,6 +470,4 @@ void test_kprobe_multi_test(void)
test_attach_api_syms();
if (test__start_subtest("attach_api_fails"))
test_attach_api_fails();
if (test__start_subtest("bench_attach"))
test_bench_attach();
}


@ -1228,6 +1228,17 @@ ipv4_fcnal()
run_cmd "$IP ro add 172.16.101.0/24 nhid 21"
run_cmd "$IP ro del 172.16.101.0/24 nexthop via 172.16.1.7 dev veth1 nexthop via 172.16.1.8 dev veth1"
log_test $? 2 "Delete multipath route with only nh id based entry"
run_cmd "$IP nexthop add id 22 via 172.16.1.6 dev veth1"
run_cmd "$IP ro add 172.16.102.0/24 nhid 22"
run_cmd "$IP ro del 172.16.102.0/24 dev veth1"
log_test $? 2 "Delete route when specifying only nexthop device"
run_cmd "$IP ro del 172.16.102.0/24 via 172.16.1.6"
log_test $? 2 "Delete route when specifying only gateway"
run_cmd "$IP ro del 172.16.102.0/24"
log_test $? 0 "Delete route when not specifying nexthop attributes"
}
ipv4_grp_fcnal()