
Merge tag 'net-6.5-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from wifi, can and netfilter.

  Fixes to fixes:

   - nf_tables:
       - GC transaction race with abort path
       - defer gc run if previous batch is still pending

  Previous releases - regressions:

   - ipv4: fix data-races around inet->inet_id

   - phy: fix deadlocking in phy_error() invocation

   - mdio: fix C45 read/write protocol

   - ipvlan: fix a reference count leak warning in ipvlan_ns_exit()

   - ice: fix NULL pointer deref during VF reset

   - i40e: fix potential NULL pointer dereferencing of pf->vf in
     i40e_sync_vsi_filters()

   - tg3: use slab_build_skb() when needed

   - mtk_eth_soc: fix NULL pointer on hw reset

  Previous releases - always broken:

   - core: validate veth and vxcan peer ifindexes

   - sched: fix a qdisc modification with ambiguous command request

   - devlink: add missing unregister linecard notification

   - wifi: mac80211: limit reorder_buf_filtered to avoid UBSAN warning

   - batman:
      - do not get eth header before batadv_check_management_packet
      - fix batadv_v_ogm_aggr_send memory leak

   - bonding: fix macvlan over alb bond support

   - mlxsw: set time stamp fields also when its type is MIRROR_UTC"

* tag 'net-6.5-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (54 commits)
  selftests: bonding: add macvlan over bond testing
  selftest: bond: add new topo bond_topo_2d1c.sh
  bonding: fix macvlan over alb bond support
  rtnetlink: Reject negative ifindexes in RTM_NEWLINK
  netfilter: nf_tables: defer gc run if previous batch is still pending
  netfilter: nf_tables: fix out of memory error handling
  netfilter: nf_tables: use correct lock to protect gc_list
  netfilter: nf_tables: GC transaction race with abort path
  netfilter: nf_tables: flush pending destroy work before netlink notifier
  netfilter: nf_tables: validate all pending tables
  ibmveth: Use dcbf rather than dcbfl
  i40e: fix potential NULL pointer dereferencing of pf->vf i40e_sync_vsi_filters()
  net/sched: fix a qdisc modification with ambiguous command request
  igc: Fix the typo in the PTM Control macro
  batman-adv: Hold rtnl lock during MTU update via netlink
  igb: Avoid starting unnecessary workqueues
  can: raw: add missing refcount for memory leak fix
  can: isotp: fix support for transmission of SF without flow control
  bnx2x: new flag for track HW resource allocation
  sfc: allocate a big enough SKB for loopback selftest packet
  ...
Linus Torvalds 2023-08-24 08:23:13 -07:00
commit b5cc3833f1
79 changed files with 664 additions and 356 deletions

View File

@ -13,7 +13,7 @@ Description:
Specifies the duration of the LED blink in milliseconds.
Defaults to 50 ms.
With hw_control ON, the interval value MUST be set to the
When offloaded is true, the interval value MUST be set to the
default value and cannot be changed.
Trying to set any value in this specific mode will return
an EINVAL error.
@ -44,8 +44,8 @@ Description:
If set to 1, the LED will blink for the milliseconds specified
in interval to signal transmission.
With hw_control ON, the blink interval is controlled by hardware
and won't reflect the value set in interval.
When offloaded is true, the blink interval is controlled by
hardware and won't reflect the value set in interval.
What: /sys/class/leds/<led>/rx
Date: Dec 2017
@ -59,21 +59,21 @@ Description:
If set to 1, the LED will blink for the milliseconds specified
in interval to signal reception.
With hw_control ON, the blink interval is controlled by hardware
and won't reflect the value set in interval.
When offloaded is true, the blink interval is controlled by
hardware and won't reflect the value set in interval.
What: /sys/class/leds/<led>/hw_control
What: /sys/class/leds/<led>/offloaded
Date: Jun 2023
KernelVersion: 6.5
Contact: linux-leds@vger.kernel.org
Description:
Communicate whether the LED trigger modes are driven by hardware
or software fallback is used.
Communicate whether the LED trigger modes are offloaded to
hardware or whether software fallback is used.
If 0, the LED is using software fallback to blink.
If 1, the LED is using hardware control to blink and signal the
requested modes.
If 1, the LED blinking in requested mode is offloaded to
hardware.
What: /sys/class/leds/<led>/link_10
Date: Jun 2023
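
The attribute names above come straight from this documentation hunk; a minimal shell sketch of the renamed interface, assuming a netdev-triggered LED whose name (eth0_led) and bound interface (eth0) are only illustrative:

    led=/sys/class/leds/eth0_led          # hypothetical LED name
    echo netdev > "$led"/trigger          # attach the netdev trigger
    echo eth0   > "$led"/device_name      # hypothetical network interface
    echo 1      > "$led"/tx               # blink on transmit
    cat "$led"/offloaded                  # 1: blinking is offloaded to hardware
    # interval may only be changed while offloaded reads 0 (software fallback)
    [ "$(cat "$led"/offloaded)" = 0 ] && echo 100 > "$led"/interval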

View File

@ -14803,6 +14803,16 @@ F: net/netfilter/xt_CONNSECMARK.c
F: net/netfilter/xt_SECMARK.c
F: net/netlabel/
NETWORKING [MACSEC]
M: Sabrina Dubroca <sd@queasysnail.net>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/macsec.c
F: include/net/macsec.h
F: include/uapi/linux/if_macsec.h
K: macsec
K: \bmdo_
NETWORKING [MPTCP]
M: Matthieu Baerts <matthieu.baerts@tessares.net>
M: Mat Martineau <martineau@kernel.org>

View File

@ -406,15 +406,15 @@ static ssize_t interval_store(struct device *dev,
static DEVICE_ATTR_RW(interval);
static ssize_t hw_control_show(struct device *dev,
struct device_attribute *attr, char *buf)
static ssize_t offloaded_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct led_netdev_data *trigger_data = led_trigger_get_drvdata(dev);
return sprintf(buf, "%d\n", trigger_data->hw_control);
}
static DEVICE_ATTR_RO(hw_control);
static DEVICE_ATTR_RO(offloaded);
static struct attribute *netdev_trig_attrs[] = {
&dev_attr_device_name.attr,
@ -427,7 +427,7 @@ static struct attribute *netdev_trig_attrs[] = {
&dev_attr_rx.attr,
&dev_attr_tx.attr,
&dev_attr_interval.attr,
&dev_attr_hw_control.attr,
&dev_attr_offloaded.attr,
NULL
};
ATTRIBUTE_GROUPS(netdev_trig);

View File

@ -660,10 +660,10 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
return NULL;
arp = (struct arp_pkt *)skb_network_header(skb);
/* Don't modify or load balance ARPs that do not originate locally
* (e.g.,arrive via a bridge).
/* Don't modify or load balance ARPs that do not originate
* from the bond itself or a VLAN directly above the bond.
*/
if (!bond_slave_has_mac_rx(bond, arp->mac_src))
if (!bond_slave_has_mac_rcu(bond, arp->mac_src))
return NULL;
dev = ip_dev_find(dev_net(bond->dev), arp->ip_src);

View File

@ -192,12 +192,7 @@ static int vxcan_newlink(struct net *net, struct net_device *dev,
nla_peer = data[VXCAN_INFO_PEER];
ifmp = nla_data(nla_peer);
err = rtnl_nla_parse_ifla(peer_tb,
nla_data(nla_peer) +
sizeof(struct ifinfomsg),
nla_len(nla_peer) -
sizeof(struct ifinfomsg),
NULL);
err = rtnl_nla_parse_ifinfomsg(peer_tb, nla_peer, extack);
if (err < 0)
return err;

View File

@ -1006,6 +1006,10 @@ mt753x_trap_frames(struct mt7530_priv *priv)
mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
MT753X_BPDU_CPU_ONLY);
/* Trap 802.1X PAE frames to the CPU port(s) */
mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_PORT_FW_MASK,
MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY));
/* Trap LLDP frames with :0E MAC DA to the CPU port(s) */
mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_PORT_FW_MASK,
MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY));

View File

@ -66,6 +66,8 @@ enum mt753x_id {
/* Registers for BPDU and PAE frame control*/
#define MT753X_BPC 0x24
#define MT753X_BPDU_PORT_FW_MASK GENMASK(2, 0)
#define MT753X_PAE_PORT_FW_MASK GENMASK(18, 16)
#define MT753X_PAE_PORT_FW(x) FIELD_PREP(MT753X_PAE_PORT_FW_MASK, x)
/* Register for :03 and :0E MAC DA frame control */
#define MT753X_RGAC2 0x2c

View File

@ -1069,6 +1069,9 @@ static u64 vsc9959_tas_remaining_gate_len_ps(u64 gate_len_ns)
if (gate_len_ns == U64_MAX)
return U64_MAX;
if (gate_len_ns < VSC9959_TAS_MIN_GATE_LEN_NS)
return 0;
return (gate_len_ns - VSC9959_TAS_MIN_GATE_LEN_NS) * PSEC_PER_NSEC;
}

View File

@ -1448,7 +1448,7 @@ int bgmac_phy_connect_direct(struct bgmac *bgmac)
int err;
phy_dev = fixed_phy_register(PHY_POLL, &fphy_status, NULL);
if (!phy_dev || IS_ERR(phy_dev)) {
if (IS_ERR(phy_dev)) {
dev_err(bgmac->dev, "Failed to register fixed PHY device\n");
return -ENODEV;
}

View File

@ -1508,6 +1508,8 @@ struct bnx2x {
bool cnic_loaded;
struct cnic_eth_dev *(*cnic_probe)(struct net_device *);
bool nic_stopped;
/* Flag that indicates that we can start looking for FCoE L2 queue
* completions in the default status block.
*/

View File

@ -2715,6 +2715,7 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
bnx2x_add_all_napi(bp);
DP(NETIF_MSG_IFUP, "napi added\n");
bnx2x_napi_enable(bp);
bp->nic_stopped = false;
if (IS_PF(bp)) {
/* set pf load just before approaching the MCP */
@ -2960,6 +2961,7 @@ load_error2:
load_error1:
bnx2x_napi_disable(bp);
bnx2x_del_all_napi(bp);
bp->nic_stopped = true;
/* clear pf_load status, as it was already set */
if (IS_PF(bp))
@ -3095,14 +3097,17 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
if (!CHIP_IS_E1x(bp))
bnx2x_pf_disable(bp);
/* Disable HW interrupts, NAPI */
bnx2x_netif_stop(bp, 1);
/* Delete all NAPI objects */
bnx2x_del_all_napi(bp);
if (CNIC_LOADED(bp))
bnx2x_del_all_napi_cnic(bp);
/* Release IRQs */
bnx2x_free_irq(bp);
if (!bp->nic_stopped) {
/* Disable HW interrupts, NAPI */
bnx2x_netif_stop(bp, 1);
/* Delete all NAPI objects */
bnx2x_del_all_napi(bp);
if (CNIC_LOADED(bp))
bnx2x_del_all_napi_cnic(bp);
/* Release IRQs */
bnx2x_free_irq(bp);
bp->nic_stopped = true;
}
/* Report UNLOAD_DONE to MCP */
bnx2x_send_unload_done(bp, false);

View File

@ -9474,15 +9474,18 @@ unload_error:
}
}
/* Disable HW interrupts, NAPI */
bnx2x_netif_stop(bp, 1);
/* Delete all NAPI objects */
bnx2x_del_all_napi(bp);
if (CNIC_LOADED(bp))
bnx2x_del_all_napi_cnic(bp);
if (!bp->nic_stopped) {
/* Disable HW interrupts, NAPI */
bnx2x_netif_stop(bp, 1);
/* Delete all NAPI objects */
bnx2x_del_all_napi(bp);
if (CNIC_LOADED(bp))
bnx2x_del_all_napi_cnic(bp);
/* Release IRQs */
bnx2x_free_irq(bp);
/* Release IRQs */
bnx2x_free_irq(bp);
bp->nic_stopped = true;
}
/* Reset the chip, unless PCI function is offline. If we reach this
* point following a PCI error handling, it means device is really
@ -14238,13 +14241,16 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev)
}
bnx2x_drain_tx_queues(bp);
bnx2x_send_unload_req(bp, UNLOAD_RECOVERY);
bnx2x_netif_stop(bp, 1);
bnx2x_del_all_napi(bp);
if (!bp->nic_stopped) {
bnx2x_netif_stop(bp, 1);
bnx2x_del_all_napi(bp);
if (CNIC_LOADED(bp))
bnx2x_del_all_napi_cnic(bp);
if (CNIC_LOADED(bp))
bnx2x_del_all_napi_cnic(bp);
bnx2x_free_irq(bp);
bnx2x_free_irq(bp);
bp->nic_stopped = true;
}
/* Report UNLOAD_DONE to MCP */
bnx2x_send_unload_done(bp, true);

View File

@ -529,13 +529,16 @@ void bnx2x_vfpf_close_vf(struct bnx2x *bp)
bnx2x_vfpf_finalize(bp, &req->first_tlv);
free_irq:
/* Disable HW interrupts, NAPI */
bnx2x_netif_stop(bp, 0);
/* Delete all NAPI objects */
bnx2x_del_all_napi(bp);
if (!bp->nic_stopped) {
/* Disable HW interrupts, NAPI */
bnx2x_netif_stop(bp, 0);
/* Delete all NAPI objects */
bnx2x_del_all_napi(bp);
/* Release IRQs */
bnx2x_free_irq(bp);
/* Release IRQs */
bnx2x_free_irq(bp);
bp->nic_stopped = true;
}
}
static void bnx2x_leading_vfq_init(struct bnx2x *bp, struct bnx2x_virtf *vf,

View File

@ -617,7 +617,7 @@ static int bcmgenet_mii_pd_init(struct bcmgenet_priv *priv)
};
phydev = fixed_phy_register(PHY_POLL, &fphy_status, NULL);
if (!phydev || IS_ERR(phydev)) {
if (IS_ERR(phydev)) {
dev_err(kdev, "failed to register fixed PHY device\n");
return -ENODEV;
}

View File

@ -6881,7 +6881,10 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget)
ri->data = NULL;
skb = build_skb(data, frag_size);
if (frag_size)
skb = build_skb(data, frag_size);
else
skb = slab_build_skb(data);
if (!skb) {
tg3_frag_free(frag_size != 0, data);
goto drop_it_no_recycle;

View File

@ -1466,7 +1466,7 @@ static void make_established(struct sock *sk, u32 snd_isn, unsigned int opt)
tp->write_seq = snd_isn;
tp->snd_nxt = snd_isn;
tp->snd_una = snd_isn;
inet_sk(sk)->inet_id = get_random_u16();
atomic_set(&inet_sk(sk)->inet_id, get_random_u16());
assign_rxopt(sk, opt);
if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))

View File

@ -203,7 +203,7 @@ static inline void ibmveth_flush_buffer(void *addr, unsigned long length)
unsigned long offset;
for (offset = 0; offset < length; offset += SMP_CACHE_BYTES)
asm("dcbfl %0,%1" :: "b" (addr), "r" (offset));
asm("dcbf %0,%1,1" :: "b" (addr), "r" (offset));
}
/* replenish the buffers for a pool. note that we don't need to

View File

@ -2609,7 +2609,7 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
retval = i40e_correct_mac_vlan_filters
(vsi, &tmp_add_list, &tmp_del_list,
vlan_filters);
else
else if (pf->vf)
retval = i40e_correct_vf_mac_vlan_filters
(vsi, &tmp_add_list, &tmp_del_list,
vlan_filters, pf->vf[vsi->vf_id].trusted);
@ -2782,7 +2782,8 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
}
/* if the VF is not trusted do not do promisc */
if ((vsi->type == I40E_VSI_SRIOV) && !pf->vf[vsi->vf_id].trusted) {
if (vsi->type == I40E_VSI_SRIOV && pf->vf &&
!pf->vf[vsi->vf_id].trusted) {
clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state);
goto out;
}

View File

@ -435,7 +435,8 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
/* Receive Packet Data Buffer Size.
* The Packet Data Buffer Size is defined in 128 byte units.
*/
rlan_ctx.dbuf = ring->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
rlan_ctx.dbuf = DIV_ROUND_UP(ring->rx_buf_len,
BIT_ULL(ICE_RLAN_CTX_DBUF_S));
/* use 32 byte descriptors */
rlan_ctx.dsize = 1;

View File

@ -1131,7 +1131,7 @@ int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena)
if (!vf)
return -EINVAL;
ret = ice_check_vf_ready_for_reset(vf);
ret = ice_check_vf_ready_for_cfg(vf);
if (ret)
goto out_put_vf;
@ -1246,7 +1246,7 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
goto out_put_vf;
}
ret = ice_check_vf_ready_for_reset(vf);
ret = ice_check_vf_ready_for_cfg(vf);
if (ret)
goto out_put_vf;
@ -1300,7 +1300,7 @@ int ice_set_vf_trust(struct net_device *netdev, int vf_id, bool trusted)
return -EOPNOTSUPP;
}
ret = ice_check_vf_ready_for_reset(vf);
ret = ice_check_vf_ready_for_cfg(vf);
if (ret)
goto out_put_vf;
@ -1613,7 +1613,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
if (!vf)
return -EINVAL;
ret = ice_check_vf_ready_for_reset(vf);
ret = ice_check_vf_ready_for_cfg(vf);
if (ret)
goto out_put_vf;

View File

@ -185,25 +185,6 @@ int ice_check_vf_ready_for_cfg(struct ice_vf *vf)
return 0;
}
/**
* ice_check_vf_ready_for_reset - check if VF is ready to be reset
* @vf: VF to check if it's ready to be reset
*
* The purpose of this function is to ensure that the VF is not in reset,
* disabled, and is both initialized and active, thus enabling us to safely
* initialize another reset.
*/
int ice_check_vf_ready_for_reset(struct ice_vf *vf)
{
int ret;
ret = ice_check_vf_ready_for_cfg(vf);
if (!ret && !test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))
ret = -EAGAIN;
return ret;
}
/**
* ice_trigger_vf_reset - Reset a VF on HW
* @vf: pointer to the VF structure
@ -631,11 +612,17 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
return 0;
}
if (flags & ICE_VF_RESET_LOCK)
mutex_lock(&vf->cfg_lock);
else
lockdep_assert_held(&vf->cfg_lock);
if (ice_is_vf_disabled(vf)) {
vsi = ice_get_vf_vsi(vf);
if (!vsi) {
dev_dbg(dev, "VF is already removed\n");
return -EINVAL;
err = -EINVAL;
goto out_unlock;
}
ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);
@ -644,14 +631,9 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
dev_dbg(dev, "VF is already disabled, there is no need for resetting it, telling VM, all is fine %d\n",
vf->vf_id);
return 0;
goto out_unlock;
}
if (flags & ICE_VF_RESET_LOCK)
mutex_lock(&vf->cfg_lock);
else
lockdep_assert_held(&vf->cfg_lock);
/* Set VF disable bit state here, before triggering reset */
set_bit(ICE_VF_STATE_DIS, vf->vf_states);
ice_trigger_vf_reset(vf, flags & ICE_VF_RESET_VFLR, false);

View File

@ -215,7 +215,6 @@ u16 ice_get_num_vfs(struct ice_pf *pf);
struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf);
bool ice_is_vf_disabled(struct ice_vf *vf);
int ice_check_vf_ready_for_cfg(struct ice_vf *vf);
int ice_check_vf_ready_for_reset(struct ice_vf *vf);
void ice_set_vf_state_dis(struct ice_vf *vf);
bool ice_is_any_vf_in_unicast_promisc(struct ice_pf *pf);
void

View File

@ -3947,7 +3947,6 @@ error_handler:
ice_vc_notify_vf_link_state(vf);
break;
case VIRTCHNL_OP_RESET_VF:
clear_bit(ICE_VF_STATE_ACTIVE, vf->vf_states);
ops->reset_vf(vf);
break;
case VIRTCHNL_OP_ADD_ETH_ADDR:

View File

@ -1385,18 +1385,6 @@ void igb_ptp_init(struct igb_adapter *adapter)
return;
}
spin_lock_init(&adapter->tmreg_lock);
INIT_WORK(&adapter->ptp_tx_work, igb_ptp_tx_work);
if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK)
INIT_DELAYED_WORK(&adapter->ptp_overflow_work,
igb_ptp_overflow_check);
adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
igb_ptp_reset(adapter);
adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps,
&adapter->pdev->dev);
if (IS_ERR(adapter->ptp_clock)) {
@ -1406,6 +1394,18 @@ void igb_ptp_init(struct igb_adapter *adapter)
dev_info(&adapter->pdev->dev, "added PHC on %s\n",
adapter->netdev->name);
adapter->ptp_flags |= IGB_PTP_ENABLED;
spin_lock_init(&adapter->tmreg_lock);
INIT_WORK(&adapter->ptp_tx_work, igb_ptp_tx_work);
if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK)
INIT_DELAYED_WORK(&adapter->ptp_overflow_work,
igb_ptp_overflow_check);
adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
igb_ptp_reset(adapter);
}
}

View File

@ -546,7 +546,7 @@
#define IGC_PTM_CTRL_START_NOW BIT(29) /* Start PTM Now */
#define IGC_PTM_CTRL_EN BIT(30) /* Enable PTM */
#define IGC_PTM_CTRL_TRIG BIT(31) /* PTM Cycle trigger */
#define IGC_PTM_CTRL_SHRT_CYC(usec) (((usec) & 0x2f) << 2)
#define IGC_PTM_CTRL_SHRT_CYC(usec) (((usec) & 0x3f) << 2)
#define IGC_PTM_CTRL_PTM_TO(usec) (((usec) & 0xff) << 8)
#define IGC_PTM_SHORT_CYC_DEFAULT 10 /* Default Short/interrupted cycle interval */

View File

@ -4270,9 +4270,10 @@ rx_frscfg:
if (link < 0)
return NIX_AF_ERR_RX_LINK_INVALID;
nix_find_link_frs(rvu, req, pcifunc);
linkcfg:
nix_find_link_frs(rvu, req, pcifunc);
cfg = rvu_read64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link));
cfg = (cfg & ~(0xFFFFULL << 16)) | ((u64)req->maxlen << 16);
if (req->update_minlen)

View File

@ -221,9 +221,13 @@ void mtk_wed_fe_reset(void)
for (i = 0; i < ARRAY_SIZE(hw_list); i++) {
struct mtk_wed_hw *hw = hw_list[i];
struct mtk_wed_device *dev = hw->wed_dev;
struct mtk_wed_device *dev;
int err;
if (!hw)
break;
dev = hw->wed_dev;
if (!dev || !dev->wlan.reset)
continue;
@ -244,8 +248,12 @@ void mtk_wed_fe_reset_complete(void)
for (i = 0; i < ARRAY_SIZE(hw_list); i++) {
struct mtk_wed_hw *hw = hw_list[i];
struct mtk_wed_device *dev = hw->wed_dev;
struct mtk_wed_device *dev;
if (!hw)
break;
dev = hw->wed_dev;
if (!dev || !dev->wlan.reset_complete)
continue;

View File

@ -32,8 +32,8 @@ static const struct mlxsw_afk_element_info mlxsw_afk_element_infos[] = {
MLXSW_AFK_ELEMENT_INFO_U32(IP_TTL_, 0x18, 0, 8),
MLXSW_AFK_ELEMENT_INFO_U32(IP_ECN, 0x18, 9, 2),
MLXSW_AFK_ELEMENT_INFO_U32(IP_DSCP, 0x18, 11, 6),
MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_MSB, 0x18, 17, 3),
MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_LSB, 0x18, 20, 8),
MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_MSB, 0x18, 17, 4),
MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_LSB, 0x18, 21, 8),
MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_96_127, 0x20, 4),
MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_64_95, 0x24, 4),
MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_32_63, 0x28, 4),

View File

@ -517,11 +517,15 @@ static void mlxsw_pci_skb_cb_ts_set(struct mlxsw_pci *mlxsw_pci,
struct sk_buff *skb,
enum mlxsw_pci_cqe_v cqe_v, char *cqe)
{
u8 ts_type;
if (cqe_v != MLXSW_PCI_CQE_V2)
return;
if (mlxsw_pci_cqe2_time_stamp_type_get(cqe) !=
MLXSW_PCI_CQE_TIME_STAMP_TYPE_UTC)
ts_type = mlxsw_pci_cqe2_time_stamp_type_get(cqe);
if (ts_type != MLXSW_PCI_CQE_TIME_STAMP_TYPE_UTC &&
ts_type != MLXSW_PCI_CQE_TIME_STAMP_TYPE_MIRROR_UTC)
return;
mlxsw_skb_cb(skb)->cqe_ts.sec = mlxsw_pci_cqe2_time_stamp_sec_get(cqe);

View File

@ -97,14 +97,6 @@ MLXSW_ITEM32(reg, sspr, m, 0x00, 31, 1);
*/
MLXSW_ITEM32_LP(reg, sspr, 0x00, 16, 0x00, 12);
/* reg_sspr_sub_port
* Virtual port within the physical port.
* Should be set to 0 when virtual ports are not enabled on the port.
*
* Access: RW
*/
MLXSW_ITEM32(reg, sspr, sub_port, 0x00, 8, 8);
/* reg_sspr_system_port
* Unique identifier within the stacking domain that represents all the ports
* that are available in the system (external ports).
@ -120,7 +112,6 @@ static inline void mlxsw_reg_sspr_pack(char *payload, u16 local_port)
MLXSW_REG_ZERO(sspr, payload);
mlxsw_reg_sspr_m_set(payload, 1);
mlxsw_reg_sspr_local_port_set(payload, local_port);
mlxsw_reg_sspr_sub_port_set(payload, 0);
mlxsw_reg_sspr_system_port_set(payload, local_port);
}

View File

@ -193,7 +193,7 @@ mlxsw_sp2_mr_tcam_rule_parse(struct mlxsw_sp_acl_rule *rule,
key->vrid, GENMASK(7, 0));
mlxsw_sp_acl_rulei_keymask_u32(rulei,
MLXSW_AFK_ELEMENT_VIRT_ROUTER_MSB,
key->vrid >> 8, GENMASK(2, 0));
key->vrid >> 8, GENMASK(3, 0));
switch (key->proto) {
case MLXSW_SP_L3_PROTO_IPV4:
return mlxsw_sp2_mr_tcam_rule_parse4(rulei, key);

View File

@ -171,7 +171,7 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_2[] = {
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_4[] = {
MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_LSB, 0x04, 24, 8),
MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_MSB, 0x00, 0, 3),
MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER_MSB, 0x00, 0, 3, 0, true),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_0[] = {
@ -321,7 +321,7 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5b[] = {
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_4b[] = {
MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_LSB, 0x04, 13, 8),
MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER_MSB, 0x04, 21, 4, 0, true),
MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_MSB, 0x04, 21, 4),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2b[] = {

View File

@ -428,7 +428,7 @@ static int ef4_begin_loopback(struct ef4_tx_queue *tx_queue)
for (i = 0; i < state->packet_count; i++) {
/* Allocate an skb, holding an extra reference for
* transmit completion counting */
skb = alloc_skb(EF4_LOOPBACK_PAYLOAD_LEN, GFP_KERNEL);
skb = alloc_skb(sizeof(state->payload), GFP_KERNEL);
if (!skb)
return -ENOMEM;
state->skbs[i] = skb;

View File

@ -426,7 +426,7 @@ static int efx_begin_loopback(struct efx_tx_queue *tx_queue)
for (i = 0; i < state->packet_count; i++) {
/* Allocate an skb, holding an extra reference for
* transmit completion counting */
skb = alloc_skb(EFX_LOOPBACK_PAYLOAD_LEN, GFP_KERNEL);
skb = alloc_skb(sizeof(state->payload), GFP_KERNEL);
if (!skb)
return -ENOMEM;
state->skbs[i] = skb;

View File

@ -426,7 +426,7 @@ static int efx_begin_loopback(struct efx_tx_queue *tx_queue)
for (i = 0; i < state->packet_count; i++) {
/* Allocate an skb, holding an extra reference for
* transmit completion counting */
skb = alloc_skb(EFX_LOOPBACK_PAYLOAD_LEN, GFP_KERNEL);
skb = alloc_skb(sizeof(state->payload), GFP_KERNEL);
if (!skb)
return -ENOMEM;
state->skbs[i] = skb;

View File

@ -748,7 +748,8 @@ static int ipvlan_device_event(struct notifier_block *unused,
write_pnet(&port->pnet, newnet);
ipvlan_migrate_l3s_hook(oldnet, newnet);
if (port->mode == IPVLAN_MODE_L3S)
ipvlan_migrate_l3s_hook(oldnet, newnet);
break;
}
case NETDEV_UNREGISTER:

View File

@ -186,7 +186,7 @@ int mdiobb_read_c45(struct mii_bus *bus, int phy, int devad, int reg)
struct mdiobb_ctrl *ctrl = bus->priv;
mdiobb_cmd_addr(ctrl, phy, devad, reg);
mdiobb_cmd(ctrl, MDIO_C45_READ, phy, reg);
mdiobb_cmd(ctrl, MDIO_C45_READ, phy, devad);
return mdiobb_read_common(bus, phy);
}
@ -222,7 +222,7 @@ int mdiobb_write_c45(struct mii_bus *bus, int phy, int devad, int reg, u16 val)
struct mdiobb_ctrl *ctrl = bus->priv;
mdiobb_cmd_addr(ctrl, phy, devad, reg);
mdiobb_cmd(ctrl, MDIO_C45_WRITE, phy, reg);
mdiobb_cmd(ctrl, MDIO_C45_WRITE, phy, devad);
return mdiobb_write_common(bus, val);
}

View File

@ -1184,9 +1184,11 @@ void phy_stop_machine(struct phy_device *phydev)
static void phy_process_error(struct phy_device *phydev)
{
mutex_lock(&phydev->lock);
/* phydev->lock must be held for the state change to be safe */
if (!mutex_is_locked(&phydev->lock))
phydev_err(phydev, "PHY-device data unsafe context\n");
phydev->state = PHY_ERROR;
mutex_unlock(&phydev->lock);
phy_trigger_machine(phydev);
}
@ -1195,7 +1197,9 @@ static void phy_error_precise(struct phy_device *phydev,
const void *func, int err)
{
WARN(1, "%pS: returned: %d\n", func, err);
mutex_lock(&phydev->lock);
phy_process_error(phydev);
mutex_unlock(&phydev->lock);
}
/**
@ -1204,8 +1208,7 @@ static void phy_error_precise(struct phy_device *phydev,
*
* Moves the PHY to the ERROR state in response to a read
* or write error, and tells the controller the link is down.
* Must not be called from interrupt context, or while the
* phydev->lock is held.
* Must be called with phydev->lock held.
*/
void phy_error(struct phy_device *phydev)
{

View File

@ -258,6 +258,16 @@ void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id,
switch (id->base.extended_cc) {
case SFF8024_ECC_UNSPEC:
break;
case SFF8024_ECC_100G_25GAUI_C2M_AOC:
if (br_min <= 28000 && br_max >= 25000) {
/* 25GBASE-R, possibly with FEC */
__set_bit(PHY_INTERFACE_MODE_25GBASER, interfaces);
/* There is currently no link mode for 25000base
* with unspecified range, reuse SR.
*/
phylink_set(modes, 25000baseSR_Full);
}
break;
case SFF8024_ECC_100GBASE_SR4_25GBASE_SR:
phylink_set(modes, 100000baseSR4_Full);
phylink_set(modes, 25000baseSR_Full);

View File

@ -1861,10 +1861,7 @@ static int veth_newlink(struct net *src_net, struct net_device *dev,
nla_peer = data[VETH_INFO_PEER];
ifmp = nla_data(nla_peer);
err = rtnl_nla_parse_ifla(peer_tb,
nla_data(nla_peer) + sizeof(struct ifinfomsg),
nla_len(nla_peer) - sizeof(struct ifinfomsg),
NULL);
err = rtnl_nla_parse_ifinfomsg(peer_tb, nla_peer, extack);
if (err < 0)
return err;

View File

@ -66,6 +66,7 @@ config IWLMVM
tristate "Intel Wireless WiFi MVM Firmware support"
select WANT_DEV_COREDUMP
depends on MAC80211
depends on PTP_1588_CLOCK_OPTIONAL
help
This is the driver that supports the MVM firmware. The list
of the devices that use this firmware is available here:

View File

@ -722,23 +722,14 @@ static inline struct slave *bond_slave_has_mac(struct bonding *bond,
}
/* Caller must hold rcu_read_lock() for read */
static inline bool bond_slave_has_mac_rx(struct bonding *bond, const u8 *mac)
static inline bool bond_slave_has_mac_rcu(struct bonding *bond, const u8 *mac)
{
struct list_head *iter;
struct slave *tmp;
struct netdev_hw_addr *ha;
bond_for_each_slave_rcu(bond, tmp, iter)
if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr))
return true;
if (netdev_uc_empty(bond->dev))
return false;
netdev_for_each_uc_addr(ha, bond->dev)
if (ether_addr_equal_64bits(mac, ha->addr))
return true;
return false;
}

View File

@ -222,8 +222,8 @@ struct inet_sock {
__s16 uc_ttl;
__u16 cmsg_flags;
struct ip_options_rcu __rcu *inet_opt;
atomic_t inet_id;
__be16 inet_sport;
__u16 inet_id;
__u8 tos;
__u8 min_ttl;

View File

@ -538,8 +538,19 @@ static inline void ip_select_ident_segs(struct net *net, struct sk_buff *skb,
* generator as much as we can.
*/
if (sk && inet_sk(sk)->inet_daddr) {
iph->id = htons(inet_sk(sk)->inet_id);
inet_sk(sk)->inet_id += segs;
int val;
/* avoid atomic operations for TCP,
* as we hold socket lock at this point.
*/
if (sk_is_tcp(sk)) {
sock_owned_by_me(sk);
val = atomic_read(&inet_sk(sk)->inet_id);
atomic_set(&inet_sk(sk)->inet_id, val + segs);
} else {
val = atomic_add_return(segs, &inet_sk(sk)->inet_id);
}
iph->id = htons(val);
return;
}
if ((iph->frag_off & htons(IP_DF)) && !skb->ignore_df) {

View File

@ -6612,6 +6612,7 @@ void ieee80211_stop_rx_ba_session(struct ieee80211_vif *vif, u16 ba_rx_bitmap,
* marks frames marked in the bitmap as having been filtered. Afterwards, it
* checks if any frames in the window starting from @ssn can now be released
* (in case they were only waiting for frames that were filtered.)
* (Only work correctly if @max_rx_aggregation_subframes <= 64 frames)
*/
void ieee80211_mark_rx_ba_filtered_frames(struct ieee80211_sta *pubsta, u8 tid,
u16 ssn, u64 filtered,

View File

@ -587,6 +587,11 @@ static inline void *nft_set_priv(const struct nft_set *set)
return (void *)set->data;
}
static inline bool nft_set_gc_is_pending(const struct nft_set *s)
{
return refcount_read(&s->refs) != 1;
}
static inline struct nft_set *nft_set_container_of(const void *priv)
{
return (void *)priv - offsetof(struct nft_set, data);
@ -1729,6 +1734,7 @@ struct nftables_pernet {
u64 table_handle;
unsigned int base_seq;
unsigned int gc_seq;
u8 validate_state;
};
extern unsigned int nf_tables_net_id;

View File

@ -190,8 +190,8 @@ int rtnl_delete_link(struct net_device *dev, u32 portid, const struct nlmsghdr *
int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm,
u32 portid, const struct nlmsghdr *nlh);
int rtnl_nla_parse_ifla(struct nlattr **tb, const struct nlattr *head, int len,
struct netlink_ext_ack *exterr);
int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer,
struct netlink_ext_ack *exterr);
struct net *rtnl_get_net_ns_capable(struct sock *sk, int netnsid);
#define MODULE_ALIAS_RTNL_LINK(kind) MODULE_ALIAS("rtnl-link-" kind)

View File

@ -1323,6 +1323,7 @@ struct proto {
/*
* Pressure flag: try to collapse.
* Technical note: it is used by multiple contexts non atomically.
* Make sure to use READ_ONCE()/WRITE_ONCE() for all reads/writes.
* All the __sk_mem_schedule() is of this nature: accounting
* is strict, actions are advisory and have some latency.
*/
@ -1423,7 +1424,7 @@ static inline bool sk_has_memory_pressure(const struct sock *sk)
static inline bool sk_under_global_memory_pressure(const struct sock *sk)
{
return sk->sk_prot->memory_pressure &&
!!*sk->sk_prot->memory_pressure;
!!READ_ONCE(*sk->sk_prot->memory_pressure);
}
static inline bool sk_under_memory_pressure(const struct sock *sk)
@ -1435,7 +1436,7 @@ static inline bool sk_under_memory_pressure(const struct sock *sk)
mem_cgroup_under_socket_pressure(sk->sk_memcg))
return true;
return !!*sk->sk_prot->memory_pressure;
return !!READ_ONCE(*sk->sk_prot->memory_pressure);
}
static inline long
@ -1512,7 +1513,7 @@ proto_memory_pressure(struct proto *prot)
{
if (!prot->memory_pressure)
return false;
return !!*prot->memory_pressure;
return !!READ_ONCE(*prot->memory_pressure);
}

View File

@ -505,7 +505,7 @@ int batadv_v_elp_packet_recv(struct sk_buff *skb,
struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface);
struct batadv_elp_packet *elp_packet;
struct batadv_hard_iface *primary_if;
struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb);
struct ethhdr *ethhdr;
bool res;
int ret = NET_RX_DROP;
@ -513,6 +513,7 @@ int batadv_v_elp_packet_recv(struct sk_buff *skb,
if (!res)
goto free_skb;
ethhdr = eth_hdr(skb);
if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
goto free_skb;

View File

@ -123,8 +123,10 @@ static void batadv_v_ogm_send_to_if(struct sk_buff *skb,
{
struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
if (hard_iface->if_status != BATADV_IF_ACTIVE)
if (hard_iface->if_status != BATADV_IF_ACTIVE) {
kfree_skb(skb);
return;
}
batadv_inc_counter(bat_priv, BATADV_CNT_MGMT_TX);
batadv_add_counter(bat_priv, BATADV_CNT_MGMT_TX_BYTES,
@ -985,7 +987,7 @@ int batadv_v_ogm_packet_recv(struct sk_buff *skb,
{
struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface);
struct batadv_ogm2_packet *ogm_packet;
struct ethhdr *ethhdr = eth_hdr(skb);
struct ethhdr *ethhdr;
int ogm_offset;
u8 *packet_pos;
int ret = NET_RX_DROP;
@ -999,6 +1001,7 @@ int batadv_v_ogm_packet_recv(struct sk_buff *skb,
if (!batadv_check_management_packet(skb, if_incoming, BATADV_OGM2_HLEN))
goto free_skb;
ethhdr = eth_hdr(skb);
if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
goto free_skb;

View File

@ -630,7 +630,19 @@ out:
*/
void batadv_update_min_mtu(struct net_device *soft_iface)
{
soft_iface->mtu = batadv_hardif_min_mtu(soft_iface);
struct batadv_priv *bat_priv = netdev_priv(soft_iface);
int limit_mtu;
int mtu;
mtu = batadv_hardif_min_mtu(soft_iface);
if (bat_priv->mtu_set_by_user)
limit_mtu = bat_priv->mtu_set_by_user;
else
limit_mtu = ETH_DATA_LEN;
mtu = min(mtu, limit_mtu);
dev_set_mtu(soft_iface, mtu);
/* Check if the local translate table should be cleaned up to match a
* new (and smaller) MTU.

View File

@ -495,7 +495,10 @@ static int batadv_netlink_set_mesh(struct sk_buff *skb, struct genl_info *info)
attr = info->attrs[BATADV_ATTR_FRAGMENTATION_ENABLED];
atomic_set(&bat_priv->fragmentation, !!nla_get_u8(attr));
rtnl_lock();
batadv_update_min_mtu(bat_priv->soft_iface);
rtnl_unlock();
}
if (info->attrs[BATADV_ATTR_GW_BANDWIDTH_DOWN]) {

View File

@ -153,11 +153,14 @@ static int batadv_interface_set_mac_addr(struct net_device *dev, void *p)
static int batadv_interface_change_mtu(struct net_device *dev, int new_mtu)
{
struct batadv_priv *bat_priv = netdev_priv(dev);
/* check ranges */
if (new_mtu < 68 || new_mtu > batadv_hardif_min_mtu(dev))
return -EINVAL;
dev->mtu = new_mtu;
bat_priv->mtu_set_by_user = new_mtu;
return 0;
}
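
A rough sketch of the behaviour the batman-adv MTU hunks above establish, assuming iproute2 and a soft interface with the conventional name bat0 (device names illustrative); once the MTU has been set by hand it is remembered, so later hard-interface changes may lower it but no longer raise it back automatically:

    ip link add name bat0 type batadv
    ip link set dev eth0 master bat0        # first hard interface
    ip link set dev bat0 mtu 1400           # user-set MTU, recorded in mtu_set_by_user
    ip link set dev eth1 master bat0        # adding another hardif keeps bat0 capped at 1400
    ip -d link show dev bat0 | grep -o 'mtu [0-9]*'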

View File

@ -774,7 +774,6 @@ check_roaming:
if (roamed_back) {
batadv_tt_global_free(bat_priv, tt_global,
"Roaming canceled");
tt_global = NULL;
} else {
/* The global entry has to be marked as ROAMING and
* has to be kept for consistency purpose

View File

@ -1546,6 +1546,12 @@ struct batadv_priv {
/** @soft_iface: net device which holds this struct as private data */
struct net_device *soft_iface;
/**
* @mtu_set_by_user: MTU was set once by user
* protected by rtnl_lock
*/
int mtu_set_by_user;
/**
* @bat_counters: mesh internal traffic statistic counters (see
* batadv_counters)

View File

@ -188,12 +188,6 @@ static bool isotp_register_rxid(struct isotp_sock *so)
return (isotp_bc_flags(so) == 0);
}
static bool isotp_register_txecho(struct isotp_sock *so)
{
/* all modes but SF_BROADCAST register for tx echo skbs */
return (isotp_bc_flags(so) != CAN_ISOTP_SF_BROADCAST);
}
static enum hrtimer_restart isotp_rx_timer_handler(struct hrtimer *hrtimer)
{
struct isotp_sock *so = container_of(hrtimer, struct isotp_sock,
@ -1209,7 +1203,7 @@ static int isotp_release(struct socket *sock)
lock_sock(sk);
/* remove current filters & unregister */
if (so->bound && isotp_register_txecho(so)) {
if (so->bound) {
if (so->ifindex) {
struct net_device *dev;
@ -1332,14 +1326,12 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
can_rx_register(net, dev, rx_id, SINGLE_MASK(rx_id),
isotp_rcv, sk, "isotp", sk);
if (isotp_register_txecho(so)) {
/* no consecutive frame echo skb in flight */
so->cfecho = 0;
/* no consecutive frame echo skb in flight */
so->cfecho = 0;
/* register for echo skb's */
can_rx_register(net, dev, tx_id, SINGLE_MASK(tx_id),
isotp_rcv_echo, sk, "isotpe", sk);
}
/* register for echo skb's */
can_rx_register(net, dev, tx_id, SINGLE_MASK(tx_id),
isotp_rcv_echo, sk, "isotpe", sk);
dev_put(dev);
@ -1560,7 +1552,7 @@ static void isotp_notify(struct isotp_sock *so, unsigned long msg,
case NETDEV_UNREGISTER:
lock_sock(sk);
/* remove current filters & unregister */
if (so->bound && isotp_register_txecho(so)) {
if (so->bound) {
if (isotp_register_rxid(so))
can_rx_unregister(dev_net(dev), dev, so->rxid,
SINGLE_MASK(so->rxid),

View File

@ -85,6 +85,7 @@ struct raw_sock {
int bound;
int ifindex;
struct net_device *dev;
netdevice_tracker dev_tracker;
struct list_head notifier;
int loopback;
int recv_own_msgs;
@ -285,8 +286,10 @@ static void raw_notify(struct raw_sock *ro, unsigned long msg,
case NETDEV_UNREGISTER:
lock_sock(sk);
/* remove current filters & unregister */
if (ro->bound)
if (ro->bound) {
raw_disable_allfilters(dev_net(dev), dev, sk);
netdev_put(dev, &ro->dev_tracker);
}
if (ro->count > 1)
kfree(ro->filter);
@ -391,10 +394,12 @@ static int raw_release(struct socket *sock)
/* remove current filters & unregister */
if (ro->bound) {
if (ro->dev)
if (ro->dev) {
raw_disable_allfilters(dev_net(ro->dev), ro->dev, sk);
else
netdev_put(ro->dev, &ro->dev_tracker);
} else {
raw_disable_allfilters(sock_net(sk), NULL, sk);
}
}
if (ro->count > 1)
@ -445,10 +450,10 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len)
goto out;
}
if (dev->type != ARPHRD_CAN) {
dev_put(dev);
err = -ENODEV;
goto out;
goto out_put_dev;
}
if (!(dev->flags & IFF_UP))
notify_enetdown = 1;
@ -456,7 +461,9 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len)
/* filters set by default/setsockopt */
err = raw_enable_allfilters(sock_net(sk), dev, sk);
dev_put(dev);
if (err)
goto out_put_dev;
} else {
ifindex = 0;
@ -467,18 +474,28 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len)
if (!err) {
if (ro->bound) {
/* unregister old filters */
if (ro->dev)
if (ro->dev) {
raw_disable_allfilters(dev_net(ro->dev),
ro->dev, sk);
else
/* drop reference to old ro->dev */
netdev_put(ro->dev, &ro->dev_tracker);
} else {
raw_disable_allfilters(sock_net(sk), NULL, sk);
}
}
ro->ifindex = ifindex;
ro->bound = 1;
/* bind() ok -> hold a reference for new ro->dev */
ro->dev = dev;
if (ro->dev)
netdev_hold(ro->dev, &ro->dev_tracker, GFP_KERNEL);
}
out:
out_put_dev:
/* remove potential reference from dev_get_by_index() */
if (dev)
dev_put(dev);
out:
release_sock(sk);
rtnl_unlock();
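
A small sketch of the unbind path the new dev_tracker reference covers, assuming can-utils and a virtual CAN device (names illustrative); deleting the device while a CAN_RAW socket is still bound exercises the NETDEV_UNREGISTER branch above:

    ip link add dev vcan0 type vcan
    ip link set vcan0 up
    candump vcan0 &                 # binds a CAN_RAW socket to vcan0
    ip link del vcan0               # unregister: the bound socket must drop its tracked device reference
    kill %1 2>/dev/null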

View File

@ -2268,13 +2268,27 @@ out_err:
return err;
}
int rtnl_nla_parse_ifla(struct nlattr **tb, const struct nlattr *head, int len,
struct netlink_ext_ack *exterr)
int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer,
struct netlink_ext_ack *exterr)
{
return nla_parse_deprecated(tb, IFLA_MAX, head, len, ifla_policy,
const struct ifinfomsg *ifmp;
const struct nlattr *attrs;
size_t len;
ifmp = nla_data(nla_peer);
attrs = nla_data(nla_peer) + sizeof(struct ifinfomsg);
len = nla_len(nla_peer) - sizeof(struct ifinfomsg);
if (ifmp->ifi_index < 0) {
NL_SET_ERR_MSG_ATTR(exterr, nla_peer,
"ifindex can't be negative");
return -EINVAL;
}
return nla_parse_deprecated(tb, IFLA_MAX, attrs, len, ifla_policy,
exterr);
}
EXPORT_SYMBOL(rtnl_nla_parse_ifla);
EXPORT_SYMBOL(rtnl_nla_parse_ifinfomsg);
struct net *rtnl_link_get_net(struct net *src_net, struct nlattr *tb[])
{
@ -3547,6 +3561,9 @@ replay:
if (ifm->ifi_index > 0) {
link_specified = true;
dev = __dev_get_by_index(net, ifm->ifi_index);
} else if (ifm->ifi_index < 0) {
NL_SET_ERR_MSG(extack, "ifindex can't be negative");
return -EINVAL;
} else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME]) {
link_specified = true;
dev = rtnl_dev_get(net, tb);
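
For context, the peer attributes that rtnl_nla_parse_ifinfomsg() now validates are the ones carried by ordinary veth/vxcan creation requests; a sketch with iproute2 (the rejected negative-ifindex case itself is only reachable from a hand-crafted RTM_NEWLINK message, not from these commands):

    ip link add name veth0 type veth peer name veth1      # peer ifinfomsg parsed and checked
    ip link add dev vxcan0 type vxcan peer name vxcan1
    # a crafted RTM_NEWLINK carrying ifi_index < 0, in either the top-level
    # message or the peer attribute, now fails with EINVAL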

View File

@ -130,7 +130,7 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
inet->inet_daddr,
inet->inet_sport,
inet->inet_dport);
inet->inet_id = get_random_u16();
atomic_set(&inet->inet_id, get_random_u16());
err = dccp_connect(sk);
rt = NULL;
@ -432,7 +432,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt));
newinet->mc_index = inet_iif(skb);
newinet->mc_ttl = ip_hdr(skb)->ttl;
newinet->inet_id = get_random_u16();
atomic_set(&newinet->inet_id, get_random_u16());
if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL)
goto put_and_exit;

View File

@ -315,11 +315,15 @@ EXPORT_SYMBOL_GPL(dccp_disconnect);
__poll_t dccp_poll(struct file *file, struct socket *sock,
poll_table *wait)
{
__poll_t mask;
struct sock *sk = sock->sk;
__poll_t mask;
u8 shutdown;
int state;
sock_poll_wait(file, sock, wait);
if (sk->sk_state == DCCP_LISTEN)
state = inet_sk_state_load(sk);
if (state == DCCP_LISTEN)
return inet_csk_listen_poll(sk);
/* Socket is not locked. We are protected from async events
@ -328,20 +332,21 @@ __poll_t dccp_poll(struct file *file, struct socket *sock,
*/
mask = 0;
if (sk->sk_err)
if (READ_ONCE(sk->sk_err))
mask = EPOLLERR;
shutdown = READ_ONCE(sk->sk_shutdown);
if (sk->sk_shutdown == SHUTDOWN_MASK || sk->sk_state == DCCP_CLOSED)
if (shutdown == SHUTDOWN_MASK || state == DCCP_CLOSED)
mask |= EPOLLHUP;
if (sk->sk_shutdown & RCV_SHUTDOWN)
if (shutdown & RCV_SHUTDOWN)
mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
/* Connected? */
if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_RESPOND)) {
if ((1 << state) & ~(DCCPF_REQUESTING | DCCPF_RESPOND)) {
if (atomic_read(&sk->sk_rmem_alloc) > 0)
mask |= EPOLLIN | EPOLLRDNORM;
if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
if (!(shutdown & SEND_SHUTDOWN)) {
if (sk_stream_is_writeable(sk)) {
mask |= EPOLLOUT | EPOLLWRNORM;
} else { /* send SIGIO later */
@ -359,7 +364,6 @@ __poll_t dccp_poll(struct file *file, struct socket *sock,
}
return mask;
}
EXPORT_SYMBOL_GPL(dccp_poll);
int dccp_ioctl(struct sock *sk, int cmd, int *karg)

View File

@ -6704,6 +6704,7 @@ void devlink_notify_unregister(struct devlink *devlink)
struct devlink_param_item *param_item;
struct devlink_trap_item *trap_item;
struct devlink_port *devlink_port;
struct devlink_linecard *linecard;
struct devlink_rate *rate_node;
struct devlink_region *region;
unsigned long port_index;
@ -6732,6 +6733,8 @@ void devlink_notify_unregister(struct devlink *devlink)
xa_for_each(&devlink->ports, port_index, devlink_port)
devlink_port_notify(devlink_port, DEVLINK_CMD_PORT_DEL);
list_for_each_entry_reverse(linecard, &devlink->linecard_list, list)
devlink_linecard_notify(linecard, DEVLINK_CMD_LINECARD_DEL);
devlink_notify(devlink, DEVLINK_CMD_DEL);
}

View File

@ -340,7 +340,7 @@ lookup_protocol:
else
inet->pmtudisc = IP_PMTUDISC_WANT;
inet->inet_id = 0;
atomic_set(&inet->inet_id, 0);
sock_init_data(sock, sk);

View File

@ -73,7 +73,7 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
reuseport_has_conns_set(sk);
sk->sk_state = TCP_ESTABLISHED;
sk_set_txhash(sk);
inet->inet_id = get_random_u16();
atomic_set(&inet->inet_id, get_random_u16());
sk_dst_set(sk, &rt->dst);
err = 0;

View File

@ -312,7 +312,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
inet->inet_daddr));
}
inet->inet_id = get_random_u16();
atomic_set(&inet->inet_id, get_random_u16());
if (tcp_fastopen_defer_connect(sk, &err))
return err;
@ -1596,7 +1596,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
inet_csk(newsk)->icsk_ext_hdr_len = 0;
if (inet_opt)
inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
newinet->inet_id = get_random_u16();
atomic_set(&newinet->inet_id, get_random_u16());
/* Set ToS of the new socket based upon the value of incoming SYN.
* ECT bits are set later in tcp_init_transfer().

View File

@ -1083,7 +1083,8 @@ static inline bool ieee80211_rx_reorder_ready(struct tid_ampdu_rx *tid_agg_rx,
struct sk_buff *tail = skb_peek_tail(frames);
struct ieee80211_rx_status *status;
if (tid_agg_rx->reorder_buf_filtered & BIT_ULL(index))
if (tid_agg_rx->reorder_buf_filtered &&
tid_agg_rx->reorder_buf_filtered & BIT_ULL(index))
return true;
if (!tail)
@ -1124,7 +1125,8 @@ static void ieee80211_release_reorder_frame(struct ieee80211_sub_if_data *sdata,
}
no_frame:
tid_agg_rx->reorder_buf_filtered &= ~BIT_ULL(index);
if (tid_agg_rx->reorder_buf_filtered)
tid_agg_rx->reorder_buf_filtered &= ~BIT_ULL(index);
tid_agg_rx->head_seq_num = ieee80211_sn_inc(tid_agg_rx->head_seq_num);
}
@ -4264,6 +4266,7 @@ void ieee80211_mark_rx_ba_filtered_frames(struct ieee80211_sta *pubsta, u8 tid,
u16 ssn, u64 filtered,
u16 received_mpdus)
{
struct ieee80211_local *local;
struct sta_info *sta;
struct tid_ampdu_rx *tid_agg_rx;
struct sk_buff_head frames;
@ -4281,6 +4284,11 @@ void ieee80211_mark_rx_ba_filtered_frames(struct ieee80211_sta *pubsta, u8 tid,
sta = container_of(pubsta, struct sta_info, sta);
local = sta->sdata->local;
WARN_ONCE(local->hw.max_rx_aggregation_subframes > 64,
"RX BA marker can't support max_rx_aggregation_subframes %u > 64\n",
local->hw.max_rx_aggregation_subframes);
if (!ieee80211_rx_data_set_sta(&rx, sta, -1))
return;

View File

@ -1373,7 +1373,7 @@ static int nf_tables_newtable(struct sk_buff *skb, const struct nfnl_info *info,
if (table == NULL)
goto err_kzalloc;
table->validate_state = NFT_VALIDATE_SKIP;
table->validate_state = nft_net->validate_state;
table->name = nla_strdup(attr, GFP_KERNEL_ACCOUNT);
if (table->name == NULL)
goto err_strdup;
@ -9051,9 +9051,8 @@ static int nf_tables_validate(struct net *net)
return -EAGAIN;
nft_validate_state_update(table, NFT_VALIDATE_SKIP);
break;
}
break;
}
return 0;
@ -9457,9 +9456,9 @@ static void nft_trans_gc_work(struct work_struct *work)
struct nft_trans_gc *trans, *next;
LIST_HEAD(trans_gc_list);
spin_lock(&nf_tables_destroy_list_lock);
spin_lock(&nf_tables_gc_list_lock);
list_splice_init(&nf_tables_gc_list, &trans_gc_list);
spin_unlock(&nf_tables_destroy_list_lock);
spin_unlock(&nf_tables_gc_list_lock);
list_for_each_entry_safe(trans, next, &trans_gc_list, list) {
list_del(&trans->list);
@ -9799,8 +9798,10 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
}
/* 0. Validate ruleset, otherwise roll back for error reporting. */
if (nf_tables_validate(net) < 0)
if (nf_tables_validate(net) < 0) {
nft_net->validate_state = NFT_VALIDATE_DO;
return -EAGAIN;
}
err = nft_flow_rule_offload_commit(net);
if (err < 0)
@ -10059,6 +10060,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
nf_tables_commit_audit_log(&adl, nft_net->base_seq);
nft_gc_seq_end(nft_net, gc_seq);
nft_net->validate_state = NFT_VALIDATE_SKIP;
nf_tables_commit_release(net);
return 0;
@ -10335,8 +10337,12 @@ static int nf_tables_abort(struct net *net, struct sk_buff *skb,
enum nfnl_abort_action action)
{
struct nftables_pernet *nft_net = nft_pernet(net);
int ret = __nf_tables_abort(net, action);
unsigned int gc_seq;
int ret;
gc_seq = nft_gc_seq_begin(nft_net);
ret = __nf_tables_abort(net, action);
nft_gc_seq_end(nft_net, gc_seq);
mutex_unlock(&nft_net->commit_mutex);
return ret;
@ -11071,7 +11077,7 @@ static int nft_rcv_nl_event(struct notifier_block *this, unsigned long event,
gc_seq = nft_gc_seq_begin(nft_net);
if (!list_empty(&nf_tables_destroy_list))
rcu_barrier();
nf_tables_trans_destroy_flush_work();
again:
list_for_each_entry(table, &nft_net->tables, list) {
if (nft_table_has_owner(table) &&
@ -11115,6 +11121,7 @@ static int __net_init nf_tables_init_net(struct net *net)
mutex_init(&nft_net->commit_mutex);
nft_net->base_seq = 1;
nft_net->gc_seq = 0;
nft_net->validate_state = NFT_VALIDATE_SKIP;
return 0;
}

View File

@ -326,6 +326,9 @@ static void nft_rhash_gc(struct work_struct *work)
nft_net = nft_pernet(net);
gc_seq = READ_ONCE(nft_net->gc_seq);
if (nft_set_gc_is_pending(set))
goto done;
gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL);
if (!gc)
goto done;

View File

@ -902,12 +902,14 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
static int pipapo_insert(struct nft_pipapo_field *f, const uint8_t *k,
int mask_bits)
{
int rule = f->rules++, group, ret, bit_offset = 0;
int rule = f->rules, group, ret, bit_offset = 0;
ret = pipapo_resize(f, f->rules - 1, f->rules);
ret = pipapo_resize(f, f->rules, f->rules + 1);
if (ret)
return ret;
f->rules++;
for (group = 0; group < f->groups; group++) {
int i, v;
u8 mask;
@ -1052,7 +1054,9 @@ static int pipapo_expand(struct nft_pipapo_field *f,
step++;
if (step >= len) {
if (!masks) {
pipapo_insert(f, base, 0);
err = pipapo_insert(f, base, 0);
if (err < 0)
return err;
masks = 1;
}
goto out;
@ -1235,6 +1239,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
else
ret = pipapo_expand(f, start, end, f->groups * f->bb);
if (ret < 0)
return ret;
if (f->bsize > bsize_max)
bsize_max = f->bsize;

View File

@ -611,6 +611,9 @@ static void nft_rbtree_gc(struct work_struct *work)
nft_net = nft_pernet(net);
gc_seq = READ_ONCE(nft_net->gc_seq);
if (nft_set_gc_is_pending(set))
goto done;
gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL);
if (!gc)
goto done;

View File

@ -1547,10 +1547,28 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
return 0;
}
static bool req_create_or_replace(struct nlmsghdr *n)
{
return (n->nlmsg_flags & NLM_F_CREATE &&
n->nlmsg_flags & NLM_F_REPLACE);
}
static bool req_create_exclusive(struct nlmsghdr *n)
{
return (n->nlmsg_flags & NLM_F_CREATE &&
n->nlmsg_flags & NLM_F_EXCL);
}
static bool req_change(struct nlmsghdr *n)
{
return (!(n->nlmsg_flags & NLM_F_CREATE) &&
!(n->nlmsg_flags & NLM_F_REPLACE) &&
!(n->nlmsg_flags & NLM_F_EXCL));
}
/*
* Create/change qdisc.
*/
static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
struct netlink_ext_ack *extack)
{
@ -1644,27 +1662,35 @@ replay:
*
* We know, that some child q is already
* attached to this parent and have choice:
* either to change it or to create/graft new one.
* 1) change it or 2) create/graft new one.
* If the requested qdisc kind is different
* than the existing one, then we choose graft.
* If they are the same then this is "change"
* operation - just let it fallthrough..
*
* 1. We are allowed to create/graft only
* if CREATE and REPLACE flags are set.
* if the request is explicitly stating
* "please create if it doesn't exist".
*
* 2. If EXCL is set, requestor wanted to say,
* that qdisc tcm_handle is not expected
* 2. If the request is to exclusive create
* then the qdisc tcm_handle is not expected
* to exist, so that we choose create/graft too.
*
* 3. The last case is when no flags are set.
* This will happen when for example tc
* utility issues a "change" command.
* Alas, it is sort of hole in API, we
* cannot decide what to do unambiguously.
* For now we select create/graft, if
* user gave KIND, which does not match existing.
* For now we select create/graft.
*/
if ((n->nlmsg_flags & NLM_F_CREATE) &&
(n->nlmsg_flags & NLM_F_REPLACE) &&
((n->nlmsg_flags & NLM_F_EXCL) ||
(tca[TCA_KIND] &&
nla_strcmp(tca[TCA_KIND], q->ops->id))))
goto create_n_graft;
if (tca[TCA_KIND] &&
nla_strcmp(tca[TCA_KIND], q->ops->id)) {
if (req_create_or_replace(n) ||
req_create_exclusive(n))
goto create_n_graft;
else if (req_change(n))
goto create_n_graft2;
}
}
}
} else {
@ -1698,6 +1724,7 @@ create_n_graft:
NL_SET_ERR_MSG(extack, "Qdisc not found. To create specify NLM_F_CREATE flag");
return -ENOENT;
}
create_n_graft2:
if (clid == TC_H_INGRESS) {
if (dev_ingress_queue(dev)) {
q = qdisc_create(dev, dev_ingress_queue(dev),
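
The flag combinations the new helpers test map onto the familiar tc verbs; a rough sketch (device and qdisc kind are illustrative):

    tc qdisc add dev eth0 root fq_codel       # NLM_F_CREATE | NLM_F_EXCL: fail if one already exists
    tc qdisc replace dev eth0 root fq_codel   # NLM_F_CREATE | NLM_F_REPLACE: create or graft over the old one
    tc qdisc change dev eth0 root fq_codel    # none of CREATE/REPLACE/EXCL: the ambiguous "change" case above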

View File

@ -99,7 +99,7 @@ struct percpu_counter sctp_sockets_allocated;
static void sctp_enter_memory_pressure(struct sock *sk)
{
sctp_memory_pressure = 1;
WRITE_ONCE(sctp_memory_pressure, 1);
}
@ -9479,7 +9479,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
newinet->inet_rcv_saddr = inet->inet_rcv_saddr;
newinet->inet_dport = htons(asoc->peer.port);
newinet->pmtudisc = inet->pmtudisc;
newinet->inet_id = get_random_u16();
atomic_set(&newinet->inet_id, get_random_u16());
newinet->uc_ttl = inet->uc_ttl;
newinet->mc_loop = 1;

View File

@ -9,10 +9,12 @@ TEST_PROGS := \
mode-1-recovery-updelay.sh \
mode-2-recovery-updelay.sh \
bond_options.sh \
bond-eth-type-change.sh
bond-eth-type-change.sh \
bond_macvlan.sh
TEST_FILES := \
lag_lib.sh \
bond_topo_2d1c.sh \
bond_topo_3d1c.sh \
net_forwarding_lib.sh

View File

@ -57,8 +57,8 @@ ip link add name veth2-bond type veth peer name veth2-end
# add ports
ip link set fbond master fab-br0
ip link set veth1-bond down master fbond
ip link set veth2-bond down master fbond
ip link set veth1-bond master fbond
ip link set veth2-bond master fbond
# bring up
ip link set veth1-end up

View File

@ -0,0 +1,99 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# Test macvlan over balance-alb
lib_dir=$(dirname "$0")
source ${lib_dir}/bond_topo_2d1c.sh
m1_ns="m1-$(mktemp -u XXXXXX)"
m2_ns="m1-$(mktemp -u XXXXXX)"
m1_ip4="192.0.2.11"
m1_ip6="2001:db8::11"
m2_ip4="192.0.2.12"
m2_ip6="2001:db8::12"
cleanup()
{
ip -n ${m1_ns} link del macv0
ip netns del ${m1_ns}
ip -n ${m2_ns} link del macv0
ip netns del ${m2_ns}
client_destroy
server_destroy
gateway_destroy
}
check_connection()
{
local ns=${1}
local target=${2}
local message=${3:-"macvlan_over_bond"}
RET=0
ip netns exec ${ns} ping ${target} -c 4 -i 0.1 &>/dev/null
check_err $? "ping failed"
log_test "$mode: $message"
}
macvlan_over_bond()
{
local param="$1"
RET=0
# setup new bond mode
bond_reset "${param}"
ip -n ${s_ns} link add link bond0 name macv0 type macvlan mode bridge
ip -n ${s_ns} link set macv0 netns ${m1_ns}
ip -n ${m1_ns} link set dev macv0 up
ip -n ${m1_ns} addr add ${m1_ip4}/24 dev macv0
ip -n ${m1_ns} addr add ${m1_ip6}/24 dev macv0
ip -n ${s_ns} link add link bond0 name macv0 type macvlan mode bridge
ip -n ${s_ns} link set macv0 netns ${m2_ns}
ip -n ${m2_ns} link set dev macv0 up
ip -n ${m2_ns} addr add ${m2_ip4}/24 dev macv0
ip -n ${m2_ns} addr add ${m2_ip6}/24 dev macv0
sleep 2
check_connection "${c_ns}" "${s_ip4}" "IPv4: client->server"
check_connection "${c_ns}" "${s_ip6}" "IPv6: client->server"
check_connection "${c_ns}" "${m1_ip4}" "IPv4: client->macvlan_1"
check_connection "${c_ns}" "${m1_ip6}" "IPv6: client->macvlan_1"
check_connection "${c_ns}" "${m2_ip4}" "IPv4: client->macvlan_2"
check_connection "${c_ns}" "${m2_ip6}" "IPv6: client->macvlan_2"
check_connection "${m1_ns}" "${m2_ip4}" "IPv4: macvlan_1->macvlan_2"
check_connection "${m1_ns}" "${m2_ip6}" "IPv6: macvlan_1->macvlan_2"
sleep 5
check_connection "${s_ns}" "${c_ip4}" "IPv4: server->client"
check_connection "${s_ns}" "${c_ip6}" "IPv6: server->client"
check_connection "${m1_ns}" "${c_ip4}" "IPv4: macvlan_1->client"
check_connection "${m1_ns}" "${c_ip6}" "IPv6: macvlan_1->client"
check_connection "${m2_ns}" "${c_ip4}" "IPv4: macvlan_2->client"
check_connection "${m2_ns}" "${c_ip6}" "IPv6: macvlan_2->client"
check_connection "${m2_ns}" "${m1_ip4}" "IPv4: macvlan_2->macvlan_2"
check_connection "${m2_ns}" "${m1_ip6}" "IPv6: macvlan_2->macvlan_2"
ip -n ${c_ns} neigh flush dev eth0
}
trap cleanup EXIT
setup_prepare
ip netns add ${m1_ns}
ip netns add ${m2_ns}
modes="active-backup balance-tlb balance-alb"
for mode in $modes; do
macvlan_over_bond "mode $mode"
done
exit $EXIT_STATUS
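
The new test can be run on its own or through the kselftest harness; a sketch, assuming a kernel source tree with these selftests under tools/testing/selftests/drivers/net/bonding:

    cd tools/testing/selftests/drivers/net/bonding
    sudo ./bond_macvlan.sh
    # or via the kselftest runner:
    sudo make -C tools/testing/selftests TARGETS=drivers/net/bonding run_tests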

View File

@ -9,10 +9,7 @@ ALL_TESTS="
num_grat_arp
"
REQUIRE_MZ=no
NUM_NETIFS=0
lib_dir=$(dirname "$0")
source ${lib_dir}/net_forwarding_lib.sh
source ${lib_dir}/bond_topo_3d1c.sh
skip_prio()

View File

@ -0,0 +1,158 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# Topology for Bond mode 1,5,6 testing
#
# +-------------------------+
# | bond0 | Server
# | + | 192.0.2.1/24
# | eth0 | eth1 | 2001:db8::1/24
# | +---+---+ |
# | | | |
# +-------------------------+
# | |
# +-------------------------+
# | | | |
# | +---+-------+---+ | Gateway
# | | br0 | | 192.0.2.254/24
# | +-------+-------+ | 2001:db8::254/24
# | | |
# +-------------------------+
# |
# +-------------------------+
# | | | Client
# | + | 192.0.2.10/24
# | eth0 | 2001:db8::10/24
# +-------------------------+
REQUIRE_MZ=no
NUM_NETIFS=0
lib_dir=$(dirname "$0")
source ${lib_dir}/net_forwarding_lib.sh
s_ns="s-$(mktemp -u XXXXXX)"
c_ns="c-$(mktemp -u XXXXXX)"
g_ns="g-$(mktemp -u XXXXXX)"
s_ip4="192.0.2.1"
c_ip4="192.0.2.10"
g_ip4="192.0.2.254"
s_ip6="2001:db8::1"
c_ip6="2001:db8::10"
g_ip6="2001:db8::254"
gateway_create()
{
ip netns add ${g_ns}
ip -n ${g_ns} link add br0 type bridge
ip -n ${g_ns} link set br0 up
ip -n ${g_ns} addr add ${g_ip4}/24 dev br0
ip -n ${g_ns} addr add ${g_ip6}/24 dev br0
}
gateway_destroy()
{
ip -n ${g_ns} link del br0
ip netns del ${g_ns}
}
server_create()
{
ip netns add ${s_ns}
ip -n ${s_ns} link add bond0 type bond mode active-backup miimon 100
for i in $(seq 0 1); do
ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns}
ip -n ${g_ns} link set s${i} up
ip -n ${g_ns} link set s${i} master br0
ip -n ${s_ns} link set eth${i} master bond0
tc -n ${g_ns} qdisc add dev s${i} clsact
done
ip -n ${s_ns} link set bond0 up
ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
sleep 2
}
# Reset bond with new mode and options
bond_reset()
{
# Count the eth link number in real-time as this function
# maybe called from other topologies.
local link_num=$(ip -n ${s_ns} -br link show | grep -c "^eth")
local param="$1"
link_num=$((link_num -1))
ip -n ${s_ns} link set bond0 down
ip -n ${s_ns} link del bond0
ip -n ${s_ns} link add bond0 type bond $param
for i in $(seq 0 ${link_num}); do
ip -n ${s_ns} link set eth$i master bond0
done
ip -n ${s_ns} link set bond0 up
ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
sleep 2
}
server_destroy()
{
# Count the eth link number in real-time as this function
# maybe called from other topologies.
local link_num=$(ip -n ${s_ns} -br link show | grep -c "^eth")
link_num=$((link_num -1))
for i in $(seq 0 ${link_num}); do
ip -n ${s_ns} link del eth${i}
done
ip netns del ${s_ns}
}
client_create()
{
ip netns add ${c_ns}
ip -n ${c_ns} link add eth0 type veth peer name c0 netns ${g_ns}
ip -n ${g_ns} link set c0 up
ip -n ${g_ns} link set c0 master br0
ip -n ${c_ns} link set eth0 up
ip -n ${c_ns} addr add ${c_ip4}/24 dev eth0
ip -n ${c_ns} addr add ${c_ip6}/24 dev eth0
}
client_destroy()
{
ip -n ${c_ns} link del eth0
ip netns del ${c_ns}
}
setup_prepare()
{
gateway_create
server_create
client_create
}
cleanup()
{
pre_cleanup
client_destroy
server_destroy
gateway_destroy
}
bond_check_connection()
{
local msg=${1:-"check connection"}
sleep 2
ip netns exec ${s_ns} ping ${c_ip4} -c5 -i 0.1 &>/dev/null
check_err $? "${msg}: ping failed"
ip netns exec ${s_ns} ping6 ${c_ip6} -c5 -i 0.1 &>/dev/null
check_err $? "${msg}: ping6 failed"
}
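
A minimal sketch of how a test consumes this shared library (mirroring what bond_macvlan.sh above does; the balance-alb choice is just an example):

    #!/bin/bash
    # hypothetical test built on the 2-link/1-client topology helpers
    lib_dir=$(dirname "$0")
    source ${lib_dir}/bond_topo_2d1c.sh    # also pulls in net_forwarding_lib.sh
    trap cleanup EXIT
    setup_prepare                          # gateway + server (bond0) + client namespaces
    RET=0
    bond_reset "mode balance-alb"          # recreate bond0 with the options under test
    bond_check_connection "balance-alb"    # check_err marks RET on ping failures
    log_test "balance-alb: connectivity"   # records the result for the exit status
    exit $EXIT_STATUS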

View File

@ -25,121 +25,19 @@
# | eth0 | 2001:db8::10/24
# +-------------------------------------+
s_ns="s-$(mktemp -u XXXXXX)"
c_ns="c-$(mktemp -u XXXXXX)"
g_ns="g-$(mktemp -u XXXXXX)"
s_ip4="192.0.2.1"
c_ip4="192.0.2.10"
g_ip4="192.0.2.254"
s_ip6="2001:db8::1"
c_ip6="2001:db8::10"
g_ip6="2001:db8::254"
gateway_create()
{
ip netns add ${g_ns}
ip -n ${g_ns} link add br0 type bridge
ip -n ${g_ns} link set br0 up
ip -n ${g_ns} addr add ${g_ip4}/24 dev br0
ip -n ${g_ns} addr add ${g_ip6}/24 dev br0
}
gateway_destroy()
{
ip -n ${g_ns} link del br0
ip netns del ${g_ns}
}
server_create()
{
ip netns add ${s_ns}
ip -n ${s_ns} link add bond0 type bond mode active-backup miimon 100
for i in $(seq 0 2); do
ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns}
ip -n ${g_ns} link set s${i} up
ip -n ${g_ns} link set s${i} master br0
ip -n ${s_ns} link set eth${i} master bond0
tc -n ${g_ns} qdisc add dev s${i} clsact
done
ip -n ${s_ns} link set bond0 up
ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
sleep 2
}
# Reset bond with new mode and options
bond_reset()
{
local param="$1"
ip -n ${s_ns} link set bond0 down
ip -n ${s_ns} link del bond0
ip -n ${s_ns} link add bond0 type bond $param
for i in $(seq 0 2); do
ip -n ${s_ns} link set eth$i master bond0
done
ip -n ${s_ns} link set bond0 up
ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
sleep 2
}
server_destroy()
{
for i in $(seq 0 2); do
ip -n ${s_ns} link del eth${i}
done
ip netns del ${s_ns}
}
client_create()
{
ip netns add ${c_ns}
ip -n ${c_ns} link add eth0 type veth peer name c0 netns ${g_ns}
ip -n ${g_ns} link set c0 up
ip -n ${g_ns} link set c0 master br0
ip -n ${c_ns} link set eth0 up
ip -n ${c_ns} addr add ${c_ip4}/24 dev eth0
ip -n ${c_ns} addr add ${c_ip6}/24 dev eth0
}
client_destroy()
{
ip -n ${c_ns} link del eth0
ip netns del ${c_ns}
}
source bond_topo_2d1c.sh
setup_prepare()
{
gateway_create
server_create
client_create
}
cleanup()
{
pre_cleanup
client_destroy
server_destroy
gateway_destroy
}
bond_check_connection()
{
local msg=${1:-"check connection"}
sleep 2
ip netns exec ${s_ns} ping ${c_ip4} -c5 -i 0.1 &>/dev/null
check_err $? "${msg}: ping failed"
ip netns exec ${s_ns} ping6 ${c_ip6} -c5 -i 0.1 &>/dev/null
check_err $? "${msg}: ping6 failed"
# Add the extra device as we use 3 down links for bond0
local i=2
ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns}
ip -n ${g_ns} link set s${i} up
ip -n ${g_ns} link set s${i} master br0
ip -n ${s_ns} link set eth${i} master bond0
tc -n ${g_ns} qdisc add dev s${i} clsact
}

View File

@ -98,12 +98,12 @@ sb_occ_etc_check()
port_pool_test()
{
local exp_max_occ=288
local exp_max_occ=$(devlink_cell_size_get)
local max_occ
devlink sb occupancy clearmax $DEVLINK_DEV
$MZ $h1 -c 1 -p 160 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
$MZ $h1 -c 1 -p 10 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
-t ip -q
devlink sb occupancy snapshot $DEVLINK_DEV
@ -126,12 +126,12 @@ port_pool_test()
port_tc_ip_test()
{
local exp_max_occ=288
local exp_max_occ=$(devlink_cell_size_get)
local max_occ
devlink sb occupancy clearmax $DEVLINK_DEV
$MZ $h1 -c 1 -p 160 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
$MZ $h1 -c 1 -p 10 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
-t ip -q
devlink sb occupancy snapshot $DEVLINK_DEV
@ -154,16 +154,12 @@ port_tc_ip_test()
port_tc_arp_test()
{
local exp_max_occ=96
local exp_max_occ=$(devlink_cell_size_get)
local max_occ
if [[ $MLXSW_CHIP != "mlxsw_spectrum" ]]; then
exp_max_occ=144
fi
devlink sb occupancy clearmax $DEVLINK_DEV
$MZ $h1 -c 1 -p 160 -a $h1mac -A 192.0.1.1 -t arp -q
$MZ $h1 -c 1 -p 10 -a $h1mac -A 192.0.1.1 -t arp -q
devlink sb occupancy snapshot $DEVLINK_DEV

View File

@ -15,6 +15,7 @@ ip_local_port_range
ipsec
ipv6_flowlabel
ipv6_flowlabel_mgr
log.txt
msg_zerocopy
nettest
psock_fanout
@ -45,6 +46,7 @@ test_unix_oob
timestamping
tls
toeplitz
tools
tun
txring_overwrite
txtimestamp