Merge tag 'net-6.3-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf, and bluetooth.

  Not all that quiet given spring celebrations, but "current" fixes are
  thinning out, which is encouraging. One outstanding regression in the
  mlx5 driver when using old FW, not blocking but we're pushing for a
  fix.

  Current release - new code bugs:

   - eth: enetc: workaround for unresponsive pMAC after receiving
     express traffic

  Previous releases - regressions:

   - rtnetlink: restore RTM_NEW/DELLINK notification behavior, keep the
     pid/seq fields 0 for backward compatibility

  Previous releases - always broken:

   - sctp: fix a potential overflow in sctp_ifwdtsn_skip

   - mptcp:
      - use mptcp_schedule_work instead of open-coding it and make the
        worker check stricter, to avoid scheduling work on closed
        sockets
      - fix NULL pointer dereference on fastopen early fallback

   - skbuff: fix memory corruption due to a race between skb coalescing
     and clone release that confused page_pool reference counting

   - bonding: fix neighbor solicitation validation on backup slaves

   - bpf: tcp: use sock_gen_put instead of sock_put in bpf_iter_tcp

   - bpf: arm64: fix a BTI error when returning to a patched function

   - openvswitch: fix a race on port output leading to an infinite loop

   - sfp: initialize sfp->i2c_block_size at sfp allocation to avoid
     returning a different errno than expected

   - phy: nxp-c45-tja11xx: unregister PTP, purge queues on remove

   - Bluetooth: fix printing errors if LE Connection times out

   - Bluetooth: assorted use-after-free, deadlock, and data race fixes

   - eth: macb: fix memory corruption in extended buffer descriptor mode

  Misc:

   - adjust the XDP Rx flow hash API to also include the protocol layers
     over which the hash was computed"
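
For context on the Misc entry above: bpf_xdp_metadata_rx_hash() now takes a
third argument that reports the RSS hash type. The sketch below shows how an
XDP program might consume it; the program itself is illustrative only, but the
kfunc signature and the XDP_RSS_* flags match the declarations visible in the
diffs further down.

#include "vmlinux.h"            /* enum xdp_rss_hash_type, struct xdp_md */
#include <bpf/bpf_helpers.h>

extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
				    enum xdp_rss_hash_type *rss_type) __ksym;

SEC("xdp")
int rx_hash_sketch(struct xdp_md *ctx)
{
	enum xdp_rss_hash_type rss_type = 0;
	__u32 hash = 0;

	/* -EOPNOTSUPP: driver does not implement the kfunc;
	 * -ENODATA: no RX hash available for this frame.
	 */
	if (bpf_xdp_metadata_rx_hash(ctx, &hash, &rss_type))
		return XDP_PASS;

	/* XDP_RSS_L4 is set whenever the hash also covers an L4 header. */
	if (rss_type & XDP_RSS_L4)
		bpf_printk("L4-based hash 0x%x (type 0x%x)", hash, rss_type);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";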

* tag 'net-6.3-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (50 commits)
  selftests/bpf: Adjust bpf_xdp_metadata_rx_hash for new arg
  mlx4: bpf_xdp_metadata_rx_hash add xdp rss hash type
  veth: bpf_xdp_metadata_rx_hash add xdp rss hash type
  mlx5: bpf_xdp_metadata_rx_hash add xdp rss hash type
  xdp: rss hash types representation
  selftests/bpf: xdp_hw_metadata remove bpf_printk and add counters
  skbuff: Fix a race between coalescing and releasing SKBs
  net: macb: fix a memory corruption in extended buffer descriptor mode
  selftests: add the missing CONFIG_IP_SCTP in net config
  udp6: fix potential access to stale information
  selftests: openvswitch: adjust datapath NL message declaration
  selftests: mptcp: userspace pm: uniform verify events
  mptcp: fix NULL pointer dereference on fastopen early fallback
  mptcp: stricter state check in mptcp_worker
  mptcp: use mptcp_schedule_work instead of open-coding it
  net: enetc: workaround for unresponsive pMAC after receiving express traffic
  sctp: fix a potential overflow in sctp_ifwdtsn_skip
  net: qrtr: Fix an uninit variable access bug in qrtr_tx_resume()
  rtnetlink: Restore RTM_NEW/DELLINK notification behavior
  net: ti/cpsw: Add explicit platform_device.h and of_platform.h includes
  ...
Linus Torvalds 2023-04-13 15:33:04 -07:00
commit 829cca4d17
65 changed files with 990 additions and 517 deletions

View File

@ -340,6 +340,8 @@ tcp_app_win - INTEGER
Reserve max(window/2^tcp_app_win, mss) of window for application
buffer. Value 0 is special, it means that nothing is reserved.
Possible values are [0, 31], inclusive.
Default: 31
tcp_autocorking - BOOLEAN
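
For reference, the reservation rule documented in this hunk amounts to the
following (an illustrative C helper, not the kernel's actual implementation):

/* tcp_app_win == 0 reserves nothing; otherwise reserve
 * max(window / 2^tcp_app_win, mss) bytes of window for the application
 * buffer.
 */
static unsigned int app_win_reserve(unsigned int window, unsigned int mss,
				    unsigned int tcp_app_win)
{
	unsigned int frac;

	if (tcp_app_win == 0)
		return 0;

	frac = window >> tcp_app_win;	/* window / 2^tcp_app_win */
	return frac > mss ? frac : mss;
}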

View File

@ -281,4 +281,8 @@
/* DMB */
#define A64_DMB_ISH aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH)
/* ADR */
#define A64_ADR(Rd, offset) \
aarch64_insn_gen_adr(0, offset, Rd, AARCH64_INSN_ADR_TYPE_ADR)
#endif /* _BPF_JIT_H */

View File

@ -1900,7 +1900,8 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
restore_args(ctx, args_off, nargs);
/* call original func */
emit(A64_LDR64I(A64_R(10), A64_SP, retaddr_off), ctx);
emit(A64_BLR(A64_R(10)), ctx);
emit(A64_ADR(A64_LR, AARCH64_INSN_SIZE * 2), ctx);
emit(A64_RET(A64_R(10)), ctx);
/* store return value */
emit(A64_STR64I(A64_R(0), A64_SP, retval_off), ctx);
/* reserve a nop for bpf_tramp_image_put */

View File

@ -1022,6 +1022,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
emit_atomic(insn, ctx);
break;
/* Speculation barrier */
case BPF_ST | BPF_NOSPEC:
break;
default:
pr_err("bpf_jit: unknown opcode %02x\n", code);
return -EINVAL;

View File

@ -511,7 +511,7 @@ static const char *btbcm_get_board_name(struct device *dev)
len = strlen(tmp) + 1;
board_type = devm_kzalloc(dev, len, GFP_KERNEL);
strscpy(board_type, tmp, len);
for (i = 0; i < board_type[i]; i++) {
for (i = 0; i < len; i++) {
if (board_type[i] == '/')
board_type[i] = '-';
}

View File

@ -358,6 +358,7 @@ static void btsdio_remove(struct sdio_func *func)
if (!data)
return;
cancel_work_sync(&data->work);
hdev = data->hdev;
sdio_set_drvdata(func, NULL);

View File

@ -3269,7 +3269,8 @@ static int bond_na_rcv(const struct sk_buff *skb, struct bonding *bond,
combined = skb_header_pointer(skb, 0, sizeof(_combined), &_combined);
if (!combined || combined->ip6.nexthdr != NEXTHDR_ICMP ||
combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_ADVERTISEMENT)
(combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_SOLICITATION &&
combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_ADVERTISEMENT))
goto out;
saddr = &combined->ip6.saddr;
@ -3291,7 +3292,7 @@ static int bond_na_rcv(const struct sk_buff *skb, struct bonding *bond,
else if (curr_active_slave &&
time_after(slave_last_rx(bond, curr_active_slave),
curr_active_slave->last_link_up))
bond_validate_na(bond, slave, saddr, daddr);
bond_validate_na(bond, slave, daddr, saddr);
else if (curr_arp_slave &&
bond_time_in_interval(bond, slave_last_tx(curr_arp_slave), 1))
bond_validate_na(bond, slave, saddr, daddr);

View File

@ -1064,6 +1064,10 @@ static dma_addr_t macb_get_addr(struct macb *bp, struct macb_dma_desc *desc)
}
#endif
addr |= MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, desc->addr));
#ifdef CONFIG_MACB_USE_HWSTAMP
if (bp->hw_dma_cap & HW_DMA_CAP_PTP)
addr &= ~GEM_BIT(DMA_RXVALID);
#endif
return addr;
}

View File

@ -989,6 +989,20 @@ static int enetc_get_mm(struct net_device *ndev, struct ethtool_mm_state *state)
return 0;
}
/* FIXME: Workaround for the link partner's verification failing if ENETC
* priorly received too much express traffic. The documentation doesn't
* suggest this is needed.
*/
static void enetc_restart_emac_rx(struct enetc_si *si)
{
u32 val = enetc_port_rd(&si->hw, ENETC_PM0_CMD_CFG);
enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val & ~ENETC_PM0_RX_EN);
if (val & ENETC_PM0_RX_EN)
enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val);
}
static int enetc_set_mm(struct net_device *ndev, struct ethtool_mm_cfg *cfg,
struct netlink_ext_ack *extack)
{
@ -1040,6 +1054,8 @@ static int enetc_set_mm(struct net_device *ndev, struct ethtool_mm_cfg *cfg,
enetc_port_wr(hw, ENETC_MMCSR, val);
enetc_restart_emac_rx(priv->si);
mutex_unlock(&priv->mm_lock);
return 0;

View File

@ -59,8 +59,6 @@ enum iavf_vsi_state_t {
struct iavf_vsi {
struct iavf_adapter *back;
struct net_device *netdev;
unsigned long active_cvlans[BITS_TO_LONGS(VLAN_N_VID)];
unsigned long active_svlans[BITS_TO_LONGS(VLAN_N_VID)];
u16 seid;
u16 id;
DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__);
@ -158,15 +156,20 @@ struct iavf_vlan {
u16 tpid;
};
enum iavf_vlan_state_t {
IAVF_VLAN_INVALID,
IAVF_VLAN_ADD, /* filter needs to be added */
IAVF_VLAN_IS_NEW, /* filter is new, wait for PF answer */
IAVF_VLAN_ACTIVE, /* filter is accepted by PF */
IAVF_VLAN_DISABLE, /* filter needs to be deleted by PF, then marked INACTIVE */
IAVF_VLAN_INACTIVE, /* filter is inactive, we are in IFF_DOWN */
IAVF_VLAN_REMOVE, /* filter needs to be removed from list */
};
struct iavf_vlan_filter {
struct list_head list;
struct iavf_vlan vlan;
struct {
u8 is_new_vlan:1; /* filter is new, wait for PF answer */
u8 remove:1; /* filter needs to be removed */
u8 add:1; /* filter needs to be added */
u8 padding:5;
};
enum iavf_vlan_state_t state;
};
#define IAVF_MAX_TRAFFIC_CLASS 4
@ -258,6 +261,7 @@ struct iavf_adapter {
wait_queue_head_t vc_waitqueue;
struct iavf_q_vector *q_vectors;
struct list_head vlan_filter_list;
int num_vlan_filters;
struct list_head mac_filter_list;
struct mutex crit_lock;
struct mutex client_lock;

View File

@ -791,7 +791,8 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter,
f->vlan = vlan;
list_add_tail(&f->list, &adapter->vlan_filter_list);
f->add = true;
f->state = IAVF_VLAN_ADD;
adapter->num_vlan_filters++;
adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
}
@ -813,7 +814,7 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
f = iavf_find_vlan(adapter, vlan);
if (f) {
f->remove = true;
f->state = IAVF_VLAN_REMOVE;
adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER;
}
@ -828,14 +829,18 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
**/
static void iavf_restore_filters(struct iavf_adapter *adapter)
{
u16 vid;
struct iavf_vlan_filter *f;
/* re-add all VLAN filters */
for_each_set_bit(vid, adapter->vsi.active_cvlans, VLAN_N_VID)
iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021Q));
spin_lock_bh(&adapter->mac_vlan_list_lock);
for_each_set_bit(vid, adapter->vsi.active_svlans, VLAN_N_VID)
iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021AD));
list_for_each_entry(f, &adapter->vlan_filter_list, list) {
if (f->state == IAVF_VLAN_INACTIVE)
f->state = IAVF_VLAN_ADD;
}
spin_unlock_bh(&adapter->mac_vlan_list_lock);
adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
}
/**
@ -844,8 +849,7 @@ static void iavf_restore_filters(struct iavf_adapter *adapter)
*/
u16 iavf_get_num_vlans_added(struct iavf_adapter *adapter)
{
return bitmap_weight(adapter->vsi.active_cvlans, VLAN_N_VID) +
bitmap_weight(adapter->vsi.active_svlans, VLAN_N_VID);
return adapter->num_vlan_filters;
}
/**
@ -928,11 +932,6 @@ static int iavf_vlan_rx_kill_vid(struct net_device *netdev,
return 0;
iavf_del_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto)));
if (proto == cpu_to_be16(ETH_P_8021Q))
clear_bit(vid, adapter->vsi.active_cvlans);
else
clear_bit(vid, adapter->vsi.active_svlans);
return 0;
}
@ -1293,16 +1292,11 @@ static void iavf_clear_mac_vlan_filters(struct iavf_adapter *adapter)
}
}
/* remove all VLAN filters */
/* disable all VLAN filters */
list_for_each_entry_safe(vlf, vlftmp, &adapter->vlan_filter_list,
list) {
if (vlf->add) {
list_del(&vlf->list);
kfree(vlf);
} else {
vlf->remove = true;
}
}
list)
vlf->state = IAVF_VLAN_DISABLE;
spin_unlock_bh(&adapter->mac_vlan_list_lock);
}
@ -2914,6 +2908,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
list_del(&fv->list);
kfree(fv);
}
adapter->num_vlan_filters = 0;
spin_unlock_bh(&adapter->mac_vlan_list_lock);
@ -3131,9 +3126,6 @@ continue_reset:
adapter->aq_required |= IAVF_FLAG_AQ_ADD_CLOUD_FILTER;
iavf_misc_irq_enable(adapter);
bitmap_clear(adapter->vsi.active_cvlans, 0, VLAN_N_VID);
bitmap_clear(adapter->vsi.active_svlans, 0, VLAN_N_VID);
mod_delayed_work(adapter->wq, &adapter->watchdog_task, 2);
/* We were running when the reset started, so we need to restore some

View File

@ -642,16 +642,10 @@ static void iavf_vlan_add_reject(struct iavf_adapter *adapter)
spin_lock_bh(&adapter->mac_vlan_list_lock);
list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
if (f->is_new_vlan) {
if (f->vlan.tpid == ETH_P_8021Q)
clear_bit(f->vlan.vid,
adapter->vsi.active_cvlans);
else
clear_bit(f->vlan.vid,
adapter->vsi.active_svlans);
if (f->state == IAVF_VLAN_IS_NEW) {
list_del(&f->list);
kfree(f);
adapter->num_vlan_filters--;
}
}
spin_unlock_bh(&adapter->mac_vlan_list_lock);
@ -679,7 +673,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
spin_lock_bh(&adapter->mac_vlan_list_lock);
list_for_each_entry(f, &adapter->vlan_filter_list, list) {
if (f->add)
if (f->state == IAVF_VLAN_ADD)
count++;
}
if (!count || !VLAN_FILTERING_ALLOWED(adapter)) {
@ -710,11 +704,10 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
vvfl->vsi_id = adapter->vsi_res->vsi_id;
vvfl->num_elements = count;
list_for_each_entry(f, &adapter->vlan_filter_list, list) {
if (f->add) {
if (f->state == IAVF_VLAN_ADD) {
vvfl->vlan_id[i] = f->vlan.vid;
i++;
f->add = false;
f->is_new_vlan = true;
f->state = IAVF_VLAN_IS_NEW;
if (i == count)
break;
}
@ -760,7 +753,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
vvfl_v2->vport_id = adapter->vsi_res->vsi_id;
vvfl_v2->num_elements = count;
list_for_each_entry(f, &adapter->vlan_filter_list, list) {
if (f->add) {
if (f->state == IAVF_VLAN_ADD) {
struct virtchnl_vlan_supported_caps *filtering_support =
&adapter->vlan_v2_caps.filtering.filtering_support;
struct virtchnl_vlan *vlan;
@ -778,8 +771,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
vlan->tpid = f->vlan.tpid;
i++;
f->add = false;
f->is_new_vlan = true;
f->state = IAVF_VLAN_IS_NEW;
}
}
@ -822,10 +814,16 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
* filters marked for removal to enable bailing out before
* sending a virtchnl message
*/
if (f->remove && !VLAN_FILTERING_ALLOWED(adapter)) {
if (f->state == IAVF_VLAN_REMOVE &&
!VLAN_FILTERING_ALLOWED(adapter)) {
list_del(&f->list);
kfree(f);
} else if (f->remove) {
adapter->num_vlan_filters--;
} else if (f->state == IAVF_VLAN_DISABLE &&
!VLAN_FILTERING_ALLOWED(adapter)) {
f->state = IAVF_VLAN_INACTIVE;
} else if (f->state == IAVF_VLAN_REMOVE ||
f->state == IAVF_VLAN_DISABLE) {
count++;
}
}
@ -857,11 +855,18 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
vvfl->vsi_id = adapter->vsi_res->vsi_id;
vvfl->num_elements = count;
list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
if (f->remove) {
if (f->state == IAVF_VLAN_DISABLE) {
vvfl->vlan_id[i] = f->vlan.vid;
f->state = IAVF_VLAN_INACTIVE;
i++;
if (i == count)
break;
} else if (f->state == IAVF_VLAN_REMOVE) {
vvfl->vlan_id[i] = f->vlan.vid;
list_del(&f->list);
kfree(f);
adapter->num_vlan_filters--;
i++;
if (i == count)
break;
}
@ -901,7 +906,8 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
vvfl_v2->vport_id = adapter->vsi_res->vsi_id;
vvfl_v2->num_elements = count;
list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
if (f->remove) {
if (f->state == IAVF_VLAN_DISABLE ||
f->state == IAVF_VLAN_REMOVE) {
struct virtchnl_vlan_supported_caps *filtering_support =
&adapter->vlan_v2_caps.filtering.filtering_support;
struct virtchnl_vlan *vlan;
@ -915,8 +921,13 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
vlan->tci = f->vlan.vid;
vlan->tpid = f->vlan.tpid;
list_del(&f->list);
kfree(f);
if (f->state == IAVF_VLAN_DISABLE) {
f->state = IAVF_VLAN_INACTIVE;
} else {
list_del(&f->list);
kfree(f);
adapter->num_vlan_filters--;
}
i++;
if (i == count)
break;
@ -2192,7 +2203,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
list_for_each_entry(vlf,
&adapter->vlan_filter_list,
list)
vlf->add = true;
vlf->state = IAVF_VLAN_ADD;
adapter->aq_required |=
IAVF_FLAG_AQ_ADD_VLAN_FILTER;
@ -2260,7 +2271,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
list_for_each_entry(vlf,
&adapter->vlan_filter_list,
list)
vlf->add = true;
vlf->state = IAVF_VLAN_ADD;
aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
}
@ -2444,15 +2455,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
spin_lock_bh(&adapter->mac_vlan_list_lock);
list_for_each_entry(f, &adapter->vlan_filter_list, list) {
if (f->is_new_vlan) {
f->is_new_vlan = false;
if (f->vlan.tpid == ETH_P_8021Q)
set_bit(f->vlan.vid,
adapter->vsi.active_cvlans);
else
set_bit(f->vlan.vid,
adapter->vsi.active_svlans);
}
if (f->state == IAVF_VLAN_IS_NEW)
f->state = IAVF_VLAN_ACTIVE;
}
spin_unlock_bh(&adapter->mac_vlan_list_lock);
}

View File

@ -681,14 +681,32 @@ int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
return 0;
}
int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
enum xdp_rss_hash_type *rss_type)
{
struct mlx4_en_xdp_buff *_ctx = (void *)ctx;
struct mlx4_cqe *cqe = _ctx->cqe;
enum xdp_rss_hash_type xht = 0;
__be16 status;
if (unlikely(!(_ctx->dev->features & NETIF_F_RXHASH)))
return -ENODATA;
*hash = be32_to_cpu(_ctx->cqe->immed_rss_invalid);
*hash = be32_to_cpu(cqe->immed_rss_invalid);
status = cqe->status;
if (status & cpu_to_be16(MLX4_CQE_STATUS_TCP))
xht = XDP_RSS_L4_TCP;
if (status & cpu_to_be16(MLX4_CQE_STATUS_UDP))
xht = XDP_RSS_L4_UDP;
if (status & cpu_to_be16(MLX4_CQE_STATUS_IPV4 | MLX4_CQE_STATUS_IPV4F))
xht |= XDP_RSS_L3_IPV4;
if (status & cpu_to_be16(MLX4_CQE_STATUS_IPV6)) {
xht |= XDP_RSS_L3_IPV6;
if (cqe->ipv6_ext_mask)
xht |= XDP_RSS_L3_DYNHDR;
}
*rss_type = xht;
return 0;
}

View File

@ -798,7 +798,8 @@ int mlx4_en_netdev_event(struct notifier_block *this,
struct xdp_md;
int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp);
int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash);
int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
enum xdp_rss_hash_type *rss_type);
/*
* Functions for time stamping

View File

@ -34,6 +34,7 @@
#include <net/xdp_sock_drv.h>
#include "en/xdp.h"
#include "en/params.h"
#include <linux/bitfield.h>
int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)
{
@ -169,14 +170,72 @@ static int mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
return 0;
}
static int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
/* Mapping HW RSS Type bits CQE_RSS_HTYPE_IP + CQE_RSS_HTYPE_L4 into 4-bits*/
#define RSS_TYPE_MAX_TABLE 16 /* 4-bits max 16 entries */
#define RSS_L4 GENMASK(1, 0)
#define RSS_L3 GENMASK(3, 2) /* Same as CQE_RSS_HTYPE_IP */
/* Valid combinations of CQE_RSS_HTYPE_IP + CQE_RSS_HTYPE_L4 sorted numerical */
enum mlx5_rss_hash_type {
RSS_TYPE_NO_HASH = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IP_NONE) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)),
RSS_TYPE_L3_IPV4 = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)),
RSS_TYPE_L4_IPV4_TCP = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_TCP)),
RSS_TYPE_L4_IPV4_UDP = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_UDP)),
RSS_TYPE_L4_IPV4_IPSEC = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_IPSEC)),
RSS_TYPE_L3_IPV6 = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)),
RSS_TYPE_L4_IPV6_TCP = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_TCP)),
RSS_TYPE_L4_IPV6_UDP = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_UDP)),
RSS_TYPE_L4_IPV6_IPSEC = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) |
FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_IPSEC)),
};
/* Invalid combinations will simply return zero, allows no boundary checks */
static const enum xdp_rss_hash_type mlx5_xdp_rss_type[RSS_TYPE_MAX_TABLE] = {
[RSS_TYPE_NO_HASH] = XDP_RSS_TYPE_NONE,
[1] = XDP_RSS_TYPE_NONE, /* Implicit zero */
[2] = XDP_RSS_TYPE_NONE, /* Implicit zero */
[3] = XDP_RSS_TYPE_NONE, /* Implicit zero */
[RSS_TYPE_L3_IPV4] = XDP_RSS_TYPE_L3_IPV4,
[RSS_TYPE_L4_IPV4_TCP] = XDP_RSS_TYPE_L4_IPV4_TCP,
[RSS_TYPE_L4_IPV4_UDP] = XDP_RSS_TYPE_L4_IPV4_UDP,
[RSS_TYPE_L4_IPV4_IPSEC] = XDP_RSS_TYPE_L4_IPV4_IPSEC,
[RSS_TYPE_L3_IPV6] = XDP_RSS_TYPE_L3_IPV6,
[RSS_TYPE_L4_IPV6_TCP] = XDP_RSS_TYPE_L4_IPV6_TCP,
[RSS_TYPE_L4_IPV6_UDP] = XDP_RSS_TYPE_L4_IPV6_UDP,
[RSS_TYPE_L4_IPV6_IPSEC] = XDP_RSS_TYPE_L4_IPV6_IPSEC,
[12] = XDP_RSS_TYPE_NONE, /* Implicit zero */
[13] = XDP_RSS_TYPE_NONE, /* Implicit zero */
[14] = XDP_RSS_TYPE_NONE, /* Implicit zero */
[15] = XDP_RSS_TYPE_NONE, /* Implicit zero */
};
static int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
enum xdp_rss_hash_type *rss_type)
{
const struct mlx5e_xdp_buff *_ctx = (void *)ctx;
const struct mlx5_cqe64 *cqe = _ctx->cqe;
u32 hash_type, l4_type, ip_type, lookup;
if (unlikely(!(_ctx->xdp.rxq->dev->features & NETIF_F_RXHASH)))
return -ENODATA;
*hash = be32_to_cpu(_ctx->cqe->rss_hash_result);
*hash = be32_to_cpu(cqe->rss_hash_result);
hash_type = cqe->rss_hash_type;
BUILD_BUG_ON(CQE_RSS_HTYPE_IP != RSS_L3); /* same mask */
ip_type = hash_type & CQE_RSS_HTYPE_IP;
l4_type = FIELD_GET(CQE_RSS_HTYPE_L4, hash_type);
lookup = ip_type | l4_type;
*rss_type = mlx5_xdp_rss_type[lookup];
return 0;
}

View File

@ -628,7 +628,13 @@ int qlcnic_fw_create_ctx(struct qlcnic_adapter *dev)
int i, err, ring;
if (dev->flags & QLCNIC_NEED_FLR) {
pci_reset_function(dev->pdev);
err = pci_reset_function(dev->pdev);
if (err) {
dev_err(&dev->pdev->dev,
"Adapter reset failed (%d). Please reboot\n",
err);
return err;
}
dev->flags &= ~QLCNIC_NEED_FLR;
}

View File

@ -4522,7 +4522,7 @@ static int niu_alloc_channels(struct niu *np)
err = niu_rbr_fill(np, rp, GFP_KERNEL);
if (err)
return err;
goto out_err;
}
tx_rings = kcalloc(num_tx_rings, sizeof(struct tx_ring_info),

View File

@ -27,7 +27,7 @@
#include <linux/of.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/if_vlan.h>
#include <linux/kmemleak.h>
#include <linux/sys_soc.h>

View File

@ -7,6 +7,7 @@
#include <linux/io.h>
#include <linux/clk.h>
#include <linux/platform_device.h>
#include <linux/timer.h>
#include <linux/module.h>
#include <linux/irqreturn.h>
@ -23,7 +24,7 @@
#include <linux/of.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/if_vlan.h>
#include <linux/kmemleak.h>
#include <linux/sys_soc.h>

View File

@ -191,7 +191,7 @@
#define MAX_ID_PS 2260U
#define DEFAULT_ID_PS 2000U
#define PPM_TO_SUBNS_INC(ppb) div_u64(GENMASK(31, 0) * (ppb) * \
#define PPM_TO_SUBNS_INC(ppb) div_u64(GENMASK_ULL(31, 0) * (ppb) * \
PTP_CLK_PERIOD_100BT1, NSEC_PER_SEC)
#define NXP_C45_SKB_CB(skb) ((struct nxp_c45_skb_cb *)(skb)->cb)
@ -1337,6 +1337,17 @@ no_ptp_support:
return ret;
}
static void nxp_c45_remove(struct phy_device *phydev)
{
struct nxp_c45_phy *priv = phydev->priv;
if (priv->ptp_clock)
ptp_clock_unregister(priv->ptp_clock);
skb_queue_purge(&priv->tx_queue);
skb_queue_purge(&priv->rx_queue);
}
static struct phy_driver nxp_c45_driver[] = {
{
PHY_ID_MATCH_MODEL(PHY_ID_TJA_1103),
@ -1359,6 +1370,7 @@ static struct phy_driver nxp_c45_driver[] = {
.set_loopback = genphy_c45_loopback,
.get_sqi = nxp_c45_get_sqi,
.get_sqi_max = nxp_c45_get_sqi_max,
.remove = nxp_c45_remove,
},
};

View File

@ -210,6 +210,12 @@ static const enum gpiod_flags gpio_flags[] = {
#define SFP_PHY_ADDR 22
#define SFP_PHY_ADDR_ROLLBALL 17
/* SFP_EEPROM_BLOCK_SIZE is the size of data chunk to read the EEPROM
* at a time. Some SFP modules and also some Linux I2C drivers do not like
* reads longer than 16 bytes.
*/
#define SFP_EEPROM_BLOCK_SIZE 16
struct sff_data {
unsigned int gpios;
bool (*module_supported)(const struct sfp_eeprom_id *id);
@ -1929,11 +1935,7 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
u8 check;
int ret;
/* Some SFP modules and also some Linux I2C drivers do not like reads
* longer than 16 bytes, so read the EEPROM in chunks of 16 bytes at
* a time.
*/
sfp->i2c_block_size = 16;
sfp->i2c_block_size = SFP_EEPROM_BLOCK_SIZE;
ret = sfp_read(sfp, false, 0, &id.base, sizeof(id.base));
if (ret < 0) {
@ -2485,6 +2487,9 @@ static int sfp_module_eeprom(struct sfp *sfp, struct ethtool_eeprom *ee,
unsigned int first, last, len;
int ret;
if (!(sfp->state & SFP_F_PRESENT))
return -ENODEV;
if (ee->len == 0)
return -EINVAL;
@ -2517,6 +2522,9 @@ static int sfp_module_eeprom_by_page(struct sfp *sfp,
const struct ethtool_module_eeprom *page,
struct netlink_ext_ack *extack)
{
if (!(sfp->state & SFP_F_PRESENT))
return -ENODEV;
if (page->bank) {
NL_SET_ERR_MSG(extack, "Banks not supported");
return -EOPNOTSUPP;
@ -2621,6 +2629,7 @@ static struct sfp *sfp_alloc(struct device *dev)
return ERR_PTR(-ENOMEM);
sfp->dev = dev;
sfp->i2c_block_size = SFP_EEPROM_BLOCK_SIZE;
mutex_init(&sfp->sm_mutex);
mutex_init(&sfp->st_mutex);

View File

@ -1943,7 +1943,7 @@ static struct rx_agg *alloc_rx_agg(struct r8152 *tp, gfp_t mflags)
if (!rx_agg)
return NULL;
rx_agg->page = alloc_pages(mflags | __GFP_COMP, order);
rx_agg->page = alloc_pages(mflags | __GFP_COMP | __GFP_NOWARN, order);
if (!rx_agg->page)
goto free_rx;

View File

@ -1648,14 +1648,18 @@ static int veth_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
return 0;
}
static int veth_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
static int veth_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
enum xdp_rss_hash_type *rss_type)
{
struct veth_xdp_buff *_ctx = (void *)ctx;
struct sk_buff *skb = _ctx->skb;
if (!_ctx->skb)
if (!skb)
return -ENODATA;
*hash = skb_get_hash(_ctx->skb);
*hash = skb_get_hash(skb);
*rss_type = skb->l4_hash ? XDP_RSS_TYPE_L4_ANY : XDP_RSS_TYPE_NONE;
return 0;
}

View File

@ -295,7 +295,7 @@ static int ipc_pcie_probe(struct pci_dev *pci,
ret = dma_set_mask(ipc_pcie->dev, DMA_BIT_MASK(64));
if (ret) {
dev_err(ipc_pcie->dev, "Could not set PCI DMA mask: %d", ret);
return ret;
goto set_mask_fail;
}
ipc_pcie_config_aspm(ipc_pcie);
@ -323,6 +323,7 @@ static int ipc_pcie_probe(struct pci_dev *pci,
imem_init_fail:
ipc_pcie_resources_release(ipc_pcie);
resources_req_fail:
set_mask_fail:
pci_disable_device(pci);
pci_enable_fail:
kfree(ipc_pcie);

View File

@ -36,6 +36,7 @@
#include <linux/types.h>
#include <rdma/ib_verbs.h>
#include <linux/mlx5/mlx5_ifc.h>
#include <linux/bitfield.h>
#if defined(__LITTLE_ENDIAN)
#define MLX5_SET_HOST_ENDIANNESS 0
@ -980,14 +981,23 @@ enum {
};
enum {
CQE_RSS_HTYPE_IP = 0x3 << 2,
CQE_RSS_HTYPE_IP = GENMASK(3, 2),
/* cqe->rss_hash_type[3:2] - IP destination selected for hash
* (00 = none, 01 = IPv4, 10 = IPv6, 11 = Reserved)
*/
CQE_RSS_HTYPE_L4 = 0x3 << 6,
CQE_RSS_IP_NONE = 0x0,
CQE_RSS_IPV4 = 0x1,
CQE_RSS_IPV6 = 0x2,
CQE_RSS_RESERVED = 0x3,
CQE_RSS_HTYPE_L4 = GENMASK(7, 6),
/* cqe->rss_hash_type[7:6] - L4 destination selected for hash
* (00 = none, 01 = TCP. 10 = UDP, 11 = IPSEC.SPI
*/
CQE_RSS_L4_NONE = 0x0,
CQE_RSS_L4_TCP = 0x1,
CQE_RSS_L4_UDP = 0x2,
CQE_RSS_L4_IPSEC = 0x3,
};
enum {

View File

@ -1624,7 +1624,8 @@ struct net_device_ops {
struct xdp_metadata_ops {
int (*xmo_rx_timestamp)(const struct xdp_md *ctx, u64 *timestamp);
int (*xmo_rx_hash)(const struct xdp_md *ctx, u32 *hash);
int (*xmo_rx_hash)(const struct xdp_md *ctx, u32 *hash,
enum xdp_rss_hash_type *rss_type);
};
/**

View File

@ -25,7 +25,8 @@ void rtmsg_ifinfo_newnet(int type, struct net_device *dev, unsigned int change,
struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
unsigned change, u32 event,
gfp_t flags, int *new_nsid,
int new_ifindex, u32 portid, u32 seq);
int new_ifindex, u32 portid,
const struct nlmsghdr *nlh);
void rtmsg_ifinfo_send(struct sk_buff *skb, struct net_device *dev,
gfp_t flags, u32 portid, const struct nlmsghdr *nlh);

View File

@ -954,6 +954,7 @@ enum {
HCI_CONN_STK_ENCRYPT,
HCI_CONN_AUTH_INITIATOR,
HCI_CONN_DROP,
HCI_CONN_CANCEL,
HCI_CONN_PARAM_REMOVAL_PEND,
HCI_CONN_NEW_LINK_KEY,
HCI_CONN_SCANNING,

View File

@ -761,13 +761,17 @@ static inline int bond_get_targets_ip(__be32 *targets, __be32 ip)
#if IS_ENABLED(CONFIG_IPV6)
static inline int bond_get_targets_ip6(struct in6_addr *targets, struct in6_addr *ip)
{
struct in6_addr mcaddr;
int i;
for (i = 0; i < BOND_MAX_NS_TARGETS; i++)
if (ipv6_addr_equal(&targets[i], ip))
for (i = 0; i < BOND_MAX_NS_TARGETS; i++) {
addrconf_addr_solict_mult(&targets[i], &mcaddr);
if ((ipv6_addr_equal(&targets[i], ip)) ||
(ipv6_addr_equal(&mcaddr, ip)))
return i;
else if (ipv6_addr_any(&targets[i]))
break;
}
return -1;
}

View File

@ -8,6 +8,7 @@
#include <linux/skbuff.h> /* skb_shared_info */
#include <uapi/linux/netdev.h>
#include <linux/bitfield.h>
/**
* DOC: XDP RX-queue information
@ -425,6 +426,52 @@ XDP_METADATA_KFUNC_xxx
MAX_XDP_METADATA_KFUNC,
};
enum xdp_rss_hash_type {
/* First part: Individual bits for L3/L4 types */
XDP_RSS_L3_IPV4 = BIT(0),
XDP_RSS_L3_IPV6 = BIT(1),
/* The fixed (L3) IPv4 and IPv6 headers can both be followed by
* variable/dynamic headers, IPv4 called Options and IPv6 called
* Extension Headers. HW RSS type can contain this info.
*/
XDP_RSS_L3_DYNHDR = BIT(2),
/* When RSS hash covers L4 then drivers MUST set XDP_RSS_L4 bit in
* addition to the protocol specific bit. This ease interaction with
* SKBs and avoids reserving a fixed mask for future L4 protocol bits.
*/
XDP_RSS_L4 = BIT(3), /* L4 based hash, proto can be unknown */
XDP_RSS_L4_TCP = BIT(4),
XDP_RSS_L4_UDP = BIT(5),
XDP_RSS_L4_SCTP = BIT(6),
XDP_RSS_L4_IPSEC = BIT(7), /* L4 based hash include IPSEC SPI */
/* Second part: RSS hash type combinations used for driver HW mapping */
XDP_RSS_TYPE_NONE = 0,
XDP_RSS_TYPE_L2 = XDP_RSS_TYPE_NONE,
XDP_RSS_TYPE_L3_IPV4 = XDP_RSS_L3_IPV4,
XDP_RSS_TYPE_L3_IPV6 = XDP_RSS_L3_IPV6,
XDP_RSS_TYPE_L3_IPV4_OPT = XDP_RSS_L3_IPV4 | XDP_RSS_L3_DYNHDR,
XDP_RSS_TYPE_L3_IPV6_EX = XDP_RSS_L3_IPV6 | XDP_RSS_L3_DYNHDR,
XDP_RSS_TYPE_L4_ANY = XDP_RSS_L4,
XDP_RSS_TYPE_L4_IPV4_TCP = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_TCP,
XDP_RSS_TYPE_L4_IPV4_UDP = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_UDP,
XDP_RSS_TYPE_L4_IPV4_SCTP = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_SCTP,
XDP_RSS_TYPE_L4_IPV4_IPSEC = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_IPSEC,
XDP_RSS_TYPE_L4_IPV6_TCP = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_TCP,
XDP_RSS_TYPE_L4_IPV6_UDP = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_UDP,
XDP_RSS_TYPE_L4_IPV6_SCTP = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_SCTP,
XDP_RSS_TYPE_L4_IPV6_IPSEC = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_IPSEC,
XDP_RSS_TYPE_L4_IPV6_TCP_EX = XDP_RSS_TYPE_L4_IPV6_TCP | XDP_RSS_L3_DYNHDR,
XDP_RSS_TYPE_L4_IPV6_UDP_EX = XDP_RSS_TYPE_L4_IPV6_UDP | XDP_RSS_L3_DYNHDR,
XDP_RSS_TYPE_L4_IPV6_SCTP_EX = XDP_RSS_TYPE_L4_IPV6_SCTP | XDP_RSS_L3_DYNHDR,
};
#ifdef CONFIG_NET
u32 bpf_xdp_metadata_kfunc_id(int id);
bool bpf_dev_bound_kfunc_id(u32 btf_id);

View File

@ -68,7 +68,7 @@ static const struct sco_param esco_param_msbc[] = {
};
/* This function requires the caller holds hdev->lock */
static void hci_connect_le_scan_cleanup(struct hci_conn *conn)
static void hci_connect_le_scan_cleanup(struct hci_conn *conn, u8 status)
{
struct hci_conn_params *params;
struct hci_dev *hdev = conn->hdev;
@ -88,9 +88,28 @@ static void hci_connect_le_scan_cleanup(struct hci_conn *conn)
params = hci_pend_le_action_lookup(&hdev->pend_le_conns, bdaddr,
bdaddr_type);
if (!params || !params->explicit_connect)
if (!params)
return;
if (params->conn) {
hci_conn_drop(params->conn);
hci_conn_put(params->conn);
params->conn = NULL;
}
if (!params->explicit_connect)
return;
/* If the status indicates successful cancellation of
* the attempt (i.e. Unknown Connection Id) there's no point of
* notifying failure since we'll go back to keep trying to
* connect. The only exception is explicit connect requests
* where a timeout + cancel does indicate an actual failure.
*/
if (status && status != HCI_ERROR_UNKNOWN_CONN_ID)
mgmt_connect_failed(hdev, &conn->dst, conn->type,
conn->dst_type, status);
/* The connection attempt was doing scan for new RPA, and is
* in scan phase. If params are not associated with any other
* autoconnect action, remove them completely. If they are, just unmark
@ -178,7 +197,7 @@ static void le_scan_cleanup(struct work_struct *work)
rcu_read_unlock();
if (c == conn) {
hci_connect_le_scan_cleanup(conn);
hci_connect_le_scan_cleanup(conn, 0x00);
hci_conn_cleanup(conn);
}
@ -1049,6 +1068,17 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
return conn;
}
static bool hci_conn_unlink(struct hci_conn *conn)
{
if (!conn->link)
return false;
conn->link->link = NULL;
conn->link = NULL;
return true;
}
int hci_conn_del(struct hci_conn *conn)
{
struct hci_dev *hdev = conn->hdev;
@ -1060,15 +1090,16 @@ int hci_conn_del(struct hci_conn *conn)
cancel_delayed_work_sync(&conn->idle_work);
if (conn->type == ACL_LINK) {
struct hci_conn *sco = conn->link;
if (sco) {
sco->link = NULL;
struct hci_conn *link = conn->link;
if (link) {
hci_conn_unlink(conn);
/* Due to race, SCO connection might be not established
* yet at this point. Delete it now, otherwise it is
* possible for it to be stuck and can't be deleted.
*/
if (sco->handle == HCI_CONN_HANDLE_UNSET)
hci_conn_del(sco);
if (link->handle == HCI_CONN_HANDLE_UNSET)
hci_conn_del(link);
}
/* Unacked frames */
@ -1084,7 +1115,7 @@ int hci_conn_del(struct hci_conn *conn)
struct hci_conn *acl = conn->link;
if (acl) {
acl->link = NULL;
hci_conn_unlink(conn);
hci_conn_drop(acl);
}
@ -1179,31 +1210,8 @@ EXPORT_SYMBOL(hci_get_route);
static void hci_le_conn_failed(struct hci_conn *conn, u8 status)
{
struct hci_dev *hdev = conn->hdev;
struct hci_conn_params *params;
params = hci_pend_le_action_lookup(&hdev->pend_le_conns, &conn->dst,
conn->dst_type);
if (params && params->conn) {
hci_conn_drop(params->conn);
hci_conn_put(params->conn);
params->conn = NULL;
}
/* If the status indicates successful cancellation of
* the attempt (i.e. Unknown Connection Id) there's no point of
* notifying failure since we'll go back to keep trying to
* connect. The only exception is explicit connect requests
* where a timeout + cancel does indicate an actual failure.
*/
if (status != HCI_ERROR_UNKNOWN_CONN_ID ||
(params && params->explicit_connect))
mgmt_connect_failed(hdev, &conn->dst, conn->type,
conn->dst_type, status);
/* Since we may have temporarily stopped the background scanning in
* favor of connection establishment, we should restart it.
*/
hci_update_passive_scan(hdev);
hci_connect_le_scan_cleanup(conn, status);
/* Enable advertising in case this was a failed connection
* attempt as a peripheral.
@ -1237,15 +1245,15 @@ static void create_le_conn_complete(struct hci_dev *hdev, void *data, int err)
{
struct hci_conn *conn = data;
bt_dev_dbg(hdev, "err %d", err);
hci_dev_lock(hdev);
if (!err) {
hci_connect_le_scan_cleanup(conn);
hci_connect_le_scan_cleanup(conn, 0x00);
goto done;
}
bt_dev_err(hdev, "request failed to create LE connection: err %d", err);
/* Check if connection is still pending */
if (conn != hci_lookup_le_connect(hdev))
goto done;
@ -2438,6 +2446,12 @@ void hci_conn_hash_flush(struct hci_dev *hdev)
c->state = BT_CLOSED;
hci_disconn_cfm(c, HCI_ERROR_LOCAL_HOST_TERM);
/* Unlink before deleting otherwise it is possible that
* hci_conn_del removes the link which may cause the list to
* contain items already freed.
*/
hci_conn_unlink(c);
hci_conn_del(c);
}
}
@ -2775,6 +2789,9 @@ int hci_abort_conn(struct hci_conn *conn, u8 reason)
{
int r = 0;
if (test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
return 0;
switch (conn->state) {
case BT_CONNECTED:
case BT_CONFIG:

View File

@ -2881,16 +2881,6 @@ static void cs_le_create_conn(struct hci_dev *hdev, bdaddr_t *peer_addr,
conn->resp_addr_type = peer_addr_type;
bacpy(&conn->resp_addr, peer_addr);
/* We don't want the connection attempt to stick around
* indefinitely since LE doesn't have a page timeout concept
* like BR/EDR. Set a timer for any connection that doesn't use
* the accept list for connecting.
*/
if (filter_policy == HCI_LE_USE_PEER_ADDR)
queue_delayed_work(conn->hdev->workqueue,
&conn->le_conn_timeout,
conn->conn_timeout);
}
static void hci_cs_le_create_conn(struct hci_dev *hdev, u8 status)
@ -5902,6 +5892,12 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
if (status)
goto unlock;
/* Drop the connection if it has been aborted */
if (test_bit(HCI_CONN_CANCEL, &conn->flags)) {
hci_conn_drop(conn);
goto unlock;
}
if (conn->dst_type == ADDR_LE_DEV_PUBLIC)
addr_type = BDADDR_LE_PUBLIC;
else
@ -6995,7 +6991,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
bis->iso_qos.in.latency = le16_to_cpu(ev->interval) * 125 / 100;
bis->iso_qos.in.sdu = le16_to_cpu(ev->max_pdu);
hci_connect_cfm(bis, ev->status);
hci_iso_setup_path(bis);
}
hci_dev_unlock(hdev);

View File

@ -246,8 +246,9 @@ int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
skb = __hci_cmd_sync_sk(hdev, opcode, plen, param, event, timeout, sk);
if (IS_ERR(skb)) {
bt_dev_err(hdev, "Opcode 0x%4x failed: %ld", opcode,
PTR_ERR(skb));
if (!event)
bt_dev_err(hdev, "Opcode 0x%4x failed: %ld", opcode,
PTR_ERR(skb));
return PTR_ERR(skb);
}
@ -5126,8 +5127,11 @@ static int hci_le_connect_cancel_sync(struct hci_dev *hdev,
if (test_bit(HCI_CONN_SCANNING, &conn->flags))
return 0;
if (test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
return 0;
return __hci_cmd_sync_status(hdev, HCI_OP_LE_CREATE_CONN_CANCEL,
6, &conn->dst, HCI_CMD_TIMEOUT);
0, NULL, HCI_CMD_TIMEOUT);
}
static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn)
@ -6102,6 +6106,9 @@ int hci_le_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn)
conn->conn_timeout, NULL);
done:
if (err == -ETIMEDOUT)
hci_le_connect_cancel_sync(hdev, conn);
/* Re-enable advertising after the connection attempt is finished. */
hci_resume_advertising_sync(hdev);
return err;

View File

@ -433,7 +433,7 @@ static void hidp_set_timer(struct hidp_session *session)
static void hidp_del_timer(struct hidp_session *session)
{
if (session->idle_to > 0)
del_timer(&session->timer);
del_timer_sync(&session->timer);
}
static void hidp_process_report(struct hidp_session *session, int type,

View File

@ -4652,33 +4652,27 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn,
BT_DBG("scid 0x%4.4x dcid 0x%4.4x", scid, dcid);
mutex_lock(&conn->chan_lock);
chan = __l2cap_get_chan_by_scid(conn, dcid);
chan = l2cap_get_chan_by_scid(conn, dcid);
if (!chan) {
mutex_unlock(&conn->chan_lock);
cmd_reject_invalid_cid(conn, cmd->ident, dcid, scid);
return 0;
}
l2cap_chan_hold(chan);
l2cap_chan_lock(chan);
rsp.dcid = cpu_to_le16(chan->scid);
rsp.scid = cpu_to_le16(chan->dcid);
l2cap_send_cmd(conn, cmd->ident, L2CAP_DISCONN_RSP, sizeof(rsp), &rsp);
chan->ops->set_shutdown(chan);
mutex_lock(&conn->chan_lock);
l2cap_chan_del(chan, ECONNRESET);
mutex_unlock(&conn->chan_lock);
chan->ops->close(chan);
l2cap_chan_unlock(chan);
l2cap_chan_put(chan);
mutex_unlock(&conn->chan_lock);
return 0;
}
@ -4698,33 +4692,27 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
BT_DBG("dcid 0x%4.4x scid 0x%4.4x", dcid, scid);
mutex_lock(&conn->chan_lock);
chan = __l2cap_get_chan_by_scid(conn, scid);
chan = l2cap_get_chan_by_scid(conn, scid);
if (!chan) {
mutex_unlock(&conn->chan_lock);
return 0;
}
l2cap_chan_hold(chan);
l2cap_chan_lock(chan);
if (chan->state != BT_DISCONN) {
l2cap_chan_unlock(chan);
l2cap_chan_put(chan);
mutex_unlock(&conn->chan_lock);
return 0;
}
mutex_lock(&conn->chan_lock);
l2cap_chan_del(chan, 0);
mutex_unlock(&conn->chan_lock);
chan->ops->close(chan);
l2cap_chan_unlock(chan);
l2cap_chan_put(chan);
mutex_unlock(&conn->chan_lock);
return 0;
}

View File

@ -235,27 +235,41 @@ static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
return err;
}
static int sco_connect(struct hci_dev *hdev, struct sock *sk)
static int sco_connect(struct sock *sk)
{
struct sco_conn *conn;
struct hci_conn *hcon;
struct hci_dev *hdev;
int err, type;
BT_DBG("%pMR -> %pMR", &sco_pi(sk)->src, &sco_pi(sk)->dst);
hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src, BDADDR_BREDR);
if (!hdev)
return -EHOSTUNREACH;
hci_dev_lock(hdev);
if (lmp_esco_capable(hdev) && !disable_esco)
type = ESCO_LINK;
else
type = SCO_LINK;
if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT &&
(!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev)))
return -EOPNOTSUPP;
(!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) {
err = -EOPNOTSUPP;
goto unlock;
}
hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst,
sco_pi(sk)->setting, &sco_pi(sk)->codec);
if (IS_ERR(hcon))
return PTR_ERR(hcon);
if (IS_ERR(hcon)) {
err = PTR_ERR(hcon);
goto unlock;
}
hci_dev_unlock(hdev);
hci_dev_put(hdev);
conn = sco_conn_add(hcon);
if (!conn) {
@ -263,13 +277,15 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk)
return -ENOMEM;
}
/* Update source addr of the socket */
bacpy(&sco_pi(sk)->src, &hcon->src);
err = sco_chan_add(conn, sk, NULL);
if (err)
return err;
lock_sock(sk);
/* Update source addr of the socket */
bacpy(&sco_pi(sk)->src, &hcon->src);
if (hcon->state == BT_CONNECTED) {
sco_sock_clear_timer(sk);
sk->sk_state = BT_CONNECTED;
@ -278,6 +294,13 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk)
sco_sock_set_timer(sk, sk->sk_sndtimeo);
}
release_sock(sk);
return err;
unlock:
hci_dev_unlock(hdev);
hci_dev_put(hdev);
return err;
}
@ -565,7 +588,6 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
{
struct sockaddr_sco *sa = (struct sockaddr_sco *) addr;
struct sock *sk = sock->sk;
struct hci_dev *hdev;
int err;
BT_DBG("sk %p", sk);
@ -574,37 +596,26 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
addr->sa_family != AF_BLUETOOTH)
return -EINVAL;
lock_sock(sk);
if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) {
err = -EBADFD;
goto done;
}
if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND)
return -EBADFD;
if (sk->sk_type != SOCK_SEQPACKET) {
if (sk->sk_type != SOCK_SEQPACKET)
err = -EINVAL;
goto done;
}
hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
if (!hdev) {
err = -EHOSTUNREACH;
goto done;
}
hci_dev_lock(hdev);
lock_sock(sk);
/* Set destination address and psm */
bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
release_sock(sk);
err = sco_connect(hdev, sk);
hci_dev_unlock(hdev);
hci_dev_put(hdev);
err = sco_connect(sk);
if (err)
goto done;
return err;
lock_sock(sk);
err = bt_sock_wait_state(sk, BT_CONNECTED,
sock_sndtimeo(sk, flags & O_NONBLOCK));
done:
release_sock(sk);
return err;
}
@ -1129,6 +1140,8 @@ static int sco_sock_getsockopt(struct socket *sock, int level, int optname,
break;
}
release_sock(sk);
/* find total buffer size required to copy codec + caps */
hci_dev_lock(hdev);
list_for_each_entry(c, &hdev->local_codecs, list) {
@ -1146,15 +1159,13 @@ static int sco_sock_getsockopt(struct socket *sock, int level, int optname,
buf_len += sizeof(struct bt_codecs);
if (buf_len > len) {
hci_dev_put(hdev);
err = -ENOBUFS;
break;
return -ENOBUFS;
}
ptr = optval;
if (put_user(num_codecs, ptr)) {
hci_dev_put(hdev);
err = -EFAULT;
break;
return -EFAULT;
}
ptr += sizeof(num_codecs);
@ -1194,12 +1205,14 @@ static int sco_sock_getsockopt(struct socket *sock, int level, int optname,
ptr += len;
}
if (!err && put_user(buf_len, optlen))
err = -EFAULT;
hci_dev_unlock(hdev);
hci_dev_put(hdev);
lock_sock(sk);
if (!err && put_user(buf_len, optlen))
err = -EFAULT;
break;
default:

View File

@ -3199,6 +3199,7 @@ static u16 skb_tx_hash(const struct net_device *dev,
}
if (skb_rx_queue_recorded(skb)) {
DEBUG_NET_WARN_ON_ONCE(qcount == 0);
hash = skb_get_rx_queue(skb);
if (hash >= qoffset)
hash -= qoffset;
@ -10846,7 +10847,7 @@ void unregister_netdevice_many_notify(struct list_head *head,
dev->rtnl_link_state == RTNL_LINK_INITIALIZED)
skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, 0,
GFP_KERNEL, NULL, 0,
portid, nlmsg_seq(nlh));
portid, nlh);
/*
* Flush the unicast and multicast chains

View File

@ -3972,16 +3972,23 @@ static int rtnl_dump_all(struct sk_buff *skb, struct netlink_callback *cb)
struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
unsigned int change,
u32 event, gfp_t flags, int *new_nsid,
int new_ifindex, u32 portid, u32 seq)
int new_ifindex, u32 portid,
const struct nlmsghdr *nlh)
{
struct net *net = dev_net(dev);
struct sk_buff *skb;
int err = -ENOBUFS;
u32 seq = 0;
skb = nlmsg_new(if_nlmsg_size(dev, 0), flags);
if (skb == NULL)
goto errout;
if (nlmsg_report(nlh))
seq = nlmsg_seq(nlh);
else
portid = 0;
err = rtnl_fill_ifinfo(skb, dev, dev_net(dev),
type, portid, seq, change, 0, 0, event,
new_nsid, new_ifindex, -1, flags);
@ -4017,7 +4024,7 @@ static void rtmsg_ifinfo_event(int type, struct net_device *dev,
return;
skb = rtmsg_ifinfo_build_skb(type, dev, change, event, flags, new_nsid,
new_ifindex, portid, nlmsg_seq(nlh));
new_ifindex, portid, nlh);
if (skb)
rtmsg_ifinfo_send(skb, dev, flags, portid, nlh);
}

View File

@ -5599,18 +5599,18 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
if (skb_cloned(to))
return false;
/* In general, avoid mixing slab allocated and page_pool allocated
* pages within the same SKB. However when @to is not pp_recycle and
* @from is cloned, we can transition frag pages from page_pool to
* reference counted.
*
* On the other hand, don't allow coalescing two pp_recycle SKBs if
* @from is cloned, in case the SKB is using page_pool fragment
/* In general, avoid mixing page_pool and non-page_pool allocated
* pages within the same SKB. Additionally avoid dealing with clones
* with page_pool pages, in case the SKB is using page_pool fragment
* references (PP_FLAG_PAGE_FRAG). Since we only take full page
* references for cloned SKBs at the moment that would result in
* inconsistent reference counts.
* In theory we could take full references if @from is cloned and
* !@to->pp_recycle but its tricky (due to potential race with
* the clone disappearing) and rare, so not worth dealing with.
*/
if (to->pp_recycle != (from->pp_recycle && !skb_cloned(from)))
if (to->pp_recycle != from->pp_recycle ||
(from->pp_recycle && skb_cloned(from)))
return false;
if (len <= skb_tailroom(to)) {

View File

@ -734,13 +734,21 @@ __bpf_kfunc int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx, u64 *tim
* bpf_xdp_metadata_rx_hash - Read XDP frame RX hash.
* @ctx: XDP context pointer.
* @hash: Return value pointer.
* @rss_type: Return value pointer for RSS type.
*
* The RSS hash type (@rss_type) specifies what portion of packet headers NIC
* hardware used when calculating RSS hash value. The RSS type can be decoded
* via &enum xdp_rss_hash_type either matching on individual L3/L4 bits
* ``XDP_RSS_L*`` or by combined traditional *RSS Hashing Types*
* ``XDP_RSS_TYPE_L*``.
*
* Return:
* * Returns 0 on success or ``-errno`` on error.
* * ``-EOPNOTSUPP`` : means device driver doesn't implement kfunc
* * ``-ENODATA`` : means no RX-hash available for this frame
*/
__bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash)
__bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash,
enum xdp_rss_hash_type *rss_type)
{
return -EOPNOTSUPP;
}

View File

@ -25,6 +25,7 @@ static int ip_local_port_range_min[] = { 1, 1 };
static int ip_local_port_range_max[] = { 65535, 65535 };
static int tcp_adv_win_scale_min = -31;
static int tcp_adv_win_scale_max = 31;
static int tcp_app_win_max = 31;
static int tcp_min_snd_mss_min = TCP_MIN_SND_MSS;
static int tcp_min_snd_mss_max = 65535;
static int ip_privileged_port_min;
@ -1198,6 +1199,8 @@ static struct ctl_table ipv4_net_table[] = {
.maxlen = sizeof(u8),
.mode = 0644,
.proc_handler = proc_dou8vec_minmax,
.extra1 = SYSCTL_ZERO,
.extra2 = &tcp_app_win_max,
},
{
.procname = "tcp_adv_win_scale",

View File

@ -2780,7 +2780,7 @@ static int tcp_prog_seq_show(struct bpf_prog *prog, struct bpf_iter_meta *meta,
static void bpf_iter_tcp_put_batch(struct bpf_tcp_iter_state *iter)
{
while (iter->cur_sk < iter->end_sk)
sock_put(iter->batch[iter->cur_sk++]);
sock_gen_put(iter->batch[iter->cur_sk++]);
}
static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
@ -2941,7 +2941,7 @@ static void *bpf_iter_tcp_seq_next(struct seq_file *seq, void *v, loff_t *pos)
* st->bucket. See tcp_seek_last_pos().
*/
st->offset++;
sock_put(iter->batch[iter->cur_sk++]);
sock_gen_put(iter->batch[iter->cur_sk++]);
}
if (iter->cur_sk < iter->end_sk)

View File

@ -1395,9 +1395,11 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
msg->msg_name = &sin;
msg->msg_namelen = sizeof(sin);
do_udp_sendmsg:
if (ipv6_only_sock(sk))
return -ENETUNREACH;
return udp_sendmsg(sk, msg, len);
err = ipv6_only_sock(sk) ?
-ENETUNREACH : udp_sendmsg(sk, msg, len);
msg->msg_name = sin6;
msg->msg_namelen = addr_len;
return err;
}
}

View File

@ -9,11 +9,18 @@
void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subflow,
struct request_sock *req)
{
struct sock *ssk = subflow->tcp_sock;
struct sock *sk = subflow->conn;
struct sock *sk, *ssk;
struct sk_buff *skb;
struct tcp_sock *tp;
/* on early fallback the subflow context is deleted by
* subflow_syn_recv_sock()
*/
if (!subflow)
return;
ssk = subflow->tcp_sock;
sk = subflow->conn;
tp = tcp_sk(ssk);
subflow->is_mptfo = 1;

View File

@ -1192,9 +1192,8 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
*/
if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) {
if (mp_opt.data_fin && mp_opt.data_len == 1 &&
mptcp_update_rcv_data_fin(msk, mp_opt.data_seq, mp_opt.dsn64) &&
schedule_work(&msk->work))
sock_hold(subflow->conn);
mptcp_update_rcv_data_fin(msk, mp_opt.data_seq, mp_opt.dsn64))
mptcp_schedule_work((struct sock *)msk);
return true;
}

View File

@ -2626,7 +2626,7 @@ static void mptcp_worker(struct work_struct *work)
lock_sock(sk);
state = sk->sk_state;
if (unlikely(state == TCP_CLOSE))
if (unlikely((1 << state) & (TCPF_CLOSE | TCPF_LISTEN)))
goto unlock;
mptcp_check_data_fin_ack(sk);

View File

@ -408,9 +408,8 @@ void mptcp_subflow_reset(struct sock *ssk)
tcp_send_active_reset(ssk, GFP_ATOMIC);
tcp_done(ssk);
if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags) &&
schedule_work(&mptcp_sk(sk)->work))
return; /* worker will put sk for us */
if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags))
mptcp_schedule_work(sk);
sock_put(sk);
}
@ -1118,8 +1117,8 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
skb_ext_del(skb, SKB_EXT_MPTCP);
return MAPPING_OK;
} else {
if (updated && schedule_work(&msk->work))
sock_hold((struct sock *)msk);
if (updated)
mptcp_schedule_work((struct sock *)msk);
return MAPPING_DATA_FIN;
}
@ -1222,17 +1221,12 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
/* sched mptcp worker to remove the subflow if no more data is pending */
static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
{
struct sock *sk = (struct sock *)msk;
if (likely(ssk->sk_state != TCP_CLOSE))
return;
if (skb_queue_empty(&ssk->sk_receive_queue) &&
!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags)) {
sock_hold(sk);
if (!schedule_work(&msk->work))
sock_put(sk);
}
!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags))
mptcp_schedule_work((struct sock *)msk);
}
static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)

View File

@ -913,7 +913,7 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port,
{
struct vport *vport = ovs_vport_rcu(dp, out_port);
if (likely(vport)) {
if (likely(vport && netif_carrier_ok(vport->dev))) {
u16 mru = OVS_CB(skb)->mru;
u32 cutlen = OVS_CB(skb)->cutlen;

View File

@ -498,6 +498,11 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
if (!size || len != ALIGN(size, 4) + hdrlen)
goto err;
if ((cb->type == QRTR_TYPE_NEW_SERVER ||
cb->type == QRTR_TYPE_RESUME_TX) &&
size < sizeof(struct qrtr_ctrl_pkt))
goto err;
if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
cb->type != QRTR_TYPE_RESUME_TX)
goto err;
@ -510,9 +515,6 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
/* Remote node endpoint can bridge other distant nodes */
const struct qrtr_ctrl_pkt *pkt;
if (size < sizeof(*pkt))
goto err;
pkt = data + hdrlen;
qrtr_node_assign(node, le32_to_cpu(pkt->server.node));
}

View File

@ -1154,7 +1154,8 @@ static void sctp_generate_iftsn(struct sctp_outq *q, __u32 ctsn)
#define _sctp_walk_ifwdtsn(pos, chunk, end) \
for (pos = chunk->subh.ifwdtsn_hdr->skip; \
(void *)pos < (void *)chunk->subh.ifwdtsn_hdr->skip + (end); pos++)
(void *)pos <= (void *)chunk->subh.ifwdtsn_hdr->skip + (end) - \
sizeof(struct sctp_ifwdtsn_skip); pos++)
#define sctp_walk_ifwdtsn(pos, ch) \
_sctp_walk_ifwdtsn((pos), (ch), ntohs((ch)->chunk_hdr->length) - \

View File

@ -3270,6 +3270,17 @@ static int __smc_create(struct net *net, struct socket *sock, int protocol,
sk_common_release(sk);
goto out;
}
/* smc_clcsock_release() does not wait smc->clcsock->sk's
* destruction; its sk_state might not be TCP_CLOSE after
* smc->sk is close()d, and TCP timers can be fired later,
* which need net ref.
*/
sk = smc->clcsock->sk;
__netns_tracker_free(net, &sk->ns_tracker, false);
sk->sk_net_refcnt = 1;
get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
sock_inuse_add(net, 1);
} else {
smc->clcsock = clcsock;
}

View File

@ -167,8 +167,7 @@ void test_xdp_do_redirect(void)
if (!ASSERT_EQ(query_opts.feature_flags,
NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
NETDEV_XDP_ACT_NDO_XMIT_SG,
NETDEV_XDP_ACT_RX_SG,
"veth_src query_opts.feature_flags"))
goto out;
@ -176,11 +175,36 @@ void test_xdp_do_redirect(void)
if (!ASSERT_OK(err, "veth_dst bpf_xdp_query"))
goto out;
if (!ASSERT_EQ(query_opts.feature_flags,
NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_RX_SG,
"veth_dst query_opts.feature_flags"))
goto out;
/* Enable GRO */
SYS("ethtool -K veth_src gro on");
SYS("ethtool -K veth_dst gro on");
err = bpf_xdp_query(ifindex_src, XDP_FLAGS_DRV_MODE, &query_opts);
if (!ASSERT_OK(err, "veth_src bpf_xdp_query gro on"))
goto out;
if (!ASSERT_EQ(query_opts.feature_flags,
NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
NETDEV_XDP_ACT_NDO_XMIT_SG,
"veth_dst query_opts.feature_flags"))
"veth_src query_opts.feature_flags gro on"))
goto out;
err = bpf_xdp_query(ifindex_dst, XDP_FLAGS_DRV_MODE, &query_opts);
if (!ASSERT_OK(err, "veth_dst bpf_xdp_query gro on"))
goto out;
if (!ASSERT_EQ(query_opts.feature_flags,
NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
NETDEV_XDP_ACT_NDO_XMIT_SG,
"veth_dst query_opts.feature_flags gro on"))
goto out;
memcpy(skel->rodata->expect_dst, &pkt_udp.eth.h_dest, ETH_ALEN);

View File

@ -273,6 +273,8 @@ static int verify_xsk_metadata(struct xsk *xsk)
if (!ASSERT_NEQ(meta->rx_hash, 0, "rx_hash"))
return -1;
ASSERT_EQ(meta->rx_hash_type, 0, "rx_hash_type");
xsk_ring_cons__release(&xsk->rx, 1);
refill_rx(xsk, comp_addr);


@@ -12,10 +12,14 @@ struct {
__type(value, __u32);
} xsk SEC(".maps");
__u64 pkts_skip = 0;
__u64 pkts_fail = 0;
__u64 pkts_redir = 0;
extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
__u64 *timestamp) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
__u32 *hash) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
enum xdp_rss_hash_type *rss_type) __ksym;
SEC("xdp")
int rx(struct xdp_md *ctx)
@@ -26,7 +30,7 @@ int rx(struct xdp_md *ctx)
struct udphdr *udp = NULL;
struct iphdr *iph = NULL;
struct xdp_meta *meta;
int ret;
int err;
data = (void *)(long)ctx->data;
data_end = (void *)(long)ctx->data_end;
@@ -46,17 +50,20 @@ int rx(struct xdp_md *ctx)
udp = NULL;
}
if (!udp)
if (!udp) {
__sync_add_and_fetch(&pkts_skip, 1);
return XDP_PASS;
}
if (udp->dest != bpf_htons(9091))
/* Forwarding UDP:9091 to AF_XDP */
if (udp->dest != bpf_htons(9091)) {
__sync_add_and_fetch(&pkts_skip, 1);
return XDP_PASS;
}
bpf_printk("forwarding UDP:9091 to AF_XDP");
ret = bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct xdp_meta));
if (ret != 0) {
bpf_printk("bpf_xdp_adjust_meta returned %d", ret);
err = bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct xdp_meta));
if (err) {
__sync_add_and_fetch(&pkts_fail, 1);
return XDP_PASS;
}
@@ -65,20 +72,19 @@ int rx(struct xdp_md *ctx)
meta = data_meta;
if (meta + 1 > data) {
bpf_printk("bpf_xdp_adjust_meta doesn't appear to work");
__sync_add_and_fetch(&pkts_fail, 1);
return XDP_PASS;
}
if (!bpf_xdp_metadata_rx_timestamp(ctx, &meta->rx_timestamp))
bpf_printk("populated rx_timestamp with %llu", meta->rx_timestamp);
else
err = bpf_xdp_metadata_rx_timestamp(ctx, &meta->rx_timestamp);
if (err)
meta->rx_timestamp = 0; /* Used by AF_XDP as not avail signal */
if (!bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash))
bpf_printk("populated rx_hash with %u", meta->rx_hash);
else
meta->rx_hash = 0; /* Used by AF_XDP as not avail signal */
err = bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash, &meta->rx_hash_type);
if (err < 0)
meta->rx_hash_err = err; /* Used by AF_XDP as no hash signal */
__sync_add_and_fetch(&pkts_redir, 1);
return bpf_redirect_map(&xsk, ctx->rx_queue_index, XDP_PASS);
}


@@ -21,8 +21,8 @@ struct {
extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
__u64 *timestamp) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
__u32 *hash) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
enum xdp_rss_hash_type *rss_type) __ksym;
SEC("xdp")
int rx(struct xdp_md *ctx)
@@ -56,7 +56,7 @@ int rx(struct xdp_md *ctx)
if (timestamp == 0)
meta->rx_timestamp = 1;
bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash);
bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash, &meta->rx_hash_type);
return bpf_redirect_map(&xsk, ctx->rx_queue_index, XDP_PASS);
}


@@ -5,17 +5,18 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
__u32 *hash) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
enum xdp_rss_hash_type *rss_type) __ksym;
int called;
SEC("freplace/rx")
int freplace_rx(struct xdp_md *ctx)
{
enum xdp_rss_hash_type type = 0;
u32 hash = 0;
/* Call _any_ metadata function to make sure we don't crash. */
bpf_xdp_metadata_rx_hash(ctx, &hash);
bpf_xdp_metadata_rx_hash(ctx, &hash, &type);
called++;
return XDP_PASS;
}


@@ -141,7 +141,11 @@ static void verify_xdp_metadata(void *data)
meta = data - sizeof(*meta);
printf("rx_timestamp: %llu\n", meta->rx_timestamp);
printf("rx_hash: %u\n", meta->rx_hash);
if (meta->rx_hash_err < 0)
printf("No rx_hash err=%d\n", meta->rx_hash_err);
else
printf("rx_hash: 0x%X with RSS type:0x%X\n",
meta->rx_hash, meta->rx_hash_type);
}
static void verify_skb_metadata(int fd)
@@ -212,7 +216,9 @@ static int verify_metadata(struct xsk *rx_xsk, int rxq, int server_fd)
while (true) {
errno = 0;
ret = poll(fds, rxq + 1, 1000);
printf("poll: %d (%d)\n", ret, errno);
printf("poll: %d (%d) skip=%llu fail=%llu redir=%llu\n",
ret, errno, bpf_obj->bss->pkts_skip,
bpf_obj->bss->pkts_fail, bpf_obj->bss->pkts_redir);
if (ret < 0)
break;
if (ret == 0)


@@ -12,4 +12,8 @@
struct xdp_meta {
__u64 rx_timestamp;
__u32 rx_hash;
union {
__u32 rx_hash_type;
__s32 rx_hash_err;
};
};
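
With the union added above, the AF_XDP user-space side can distinguish "hash available plus RSS type" from "the hash kfunc returned an error". A compact sketch of reading that metadata area, which the BPF program places directly before the packet data (struct layout copied from the selftest hunk; frame handling omitted):

#include <stdio.h>
#include <linux/types.h>

struct xdp_meta {
        __u64 rx_timestamp;
        __u32 rx_hash;
        union {
                __u32 rx_hash_type;
                __s32 rx_hash_err;
        };
};

static void dump_meta(const void *pkt_data)
{
        const struct xdp_meta *meta =
                (const void *)((const char *)pkt_data - sizeof(*meta));

        if (meta->rx_hash_err < 0)
                printf("no rx_hash, err=%d\n", meta->rx_hash_err);
        else
                printf("rx_hash=0x%x rss_type=0x%x\n",
                       meta->rx_hash, meta->rx_hash_type);
}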


@@ -8,11 +8,12 @@ TEST_PROGS := \
dev_addr_lists.sh \
mode-1-recovery-updelay.sh \
mode-2-recovery-updelay.sh \
option_prio.sh \
bond_options.sh \
bond-eth-type-change.sh
TEST_FILES := \
lag_lib.sh \
bond_topo_3d1c.sh \
net_forwarding_lib.sh
include ../../../lib.mk


@@ -0,0 +1,264 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# Test bonding options with mode 1,5,6
ALL_TESTS="
prio
arp_validate
"
REQUIRE_MZ=no
NUM_NETIFS=0
lib_dir=$(dirname "$0")
source ${lib_dir}/net_forwarding_lib.sh
source ${lib_dir}/bond_topo_3d1c.sh
skip_prio()
{
local skip=1
# check if iproute supports the prio option
ip -n ${s_ns} link set eth0 type bond_slave prio 10
[[ $? -ne 0 ]] && skip=0
# check if the kernel supports the prio option
ip -n ${s_ns} -d link show eth0 | grep -q "prio 10"
[[ $? -ne 0 ]] && skip=0
return $skip
}
skip_ns()
{
local skip=1
# check if iproute supports the ns_ip6_target option
ip -n ${s_ns} link add bond1 type bond ns_ip6_target ${g_ip6}
[[ $? -ne 0 ]] && skip=0
# check if the kernel supports the ns_ip6_target option
ip -n ${s_ns} -d link show bond1 | grep -q "ns_ip6_target ${g_ip6}"
[[ $? -ne 0 ]] && skip=0
ip -n ${s_ns} link del bond1
return $skip
}
active_slave=""
check_active_slave()
{
local target_active_slave=$1
active_slave=$(cmd_jq "ip -n ${s_ns} -d -j link show bond0" ".[].linkinfo.info_data.active_slave")
test "$active_slave" = "$target_active_slave"
check_err $? "Current active slave is $active_slave but not $target_active_slave"
}
# Test bonding prio option
prio_test()
{
local param="$1"
RET=0
# create bond
bond_reset "${param}"
# check bonding member prio value
ip -n ${s_ns} link set eth0 type bond_slave prio 0
ip -n ${s_ns} link set eth1 type bond_slave prio 10
ip -n ${s_ns} link set eth2 type bond_slave prio 11
cmd_jq "ip -n ${s_ns} -d -j link show eth0" \
".[].linkinfo.info_slave_data | select (.prio == 0)" "-e" &> /dev/null
check_err $? "eth0 prio is not 0"
cmd_jq "ip -n ${s_ns} -d -j link show eth1" \
".[].linkinfo.info_slave_data | select (.prio == 10)" "-e" &> /dev/null
check_err $? "eth1 prio is not 10"
cmd_jq "ip -n ${s_ns} -d -j link show eth2" \
".[].linkinfo.info_slave_data | select (.prio == 11)" "-e" &> /dev/null
check_err $? "eth2 prio is not 11"
bond_check_connection "setup"
# active slave should be the primary slave
check_active_slave eth1
# active slave should be the higher prio slave
ip -n ${s_ns} link set $active_slave down
bond_check_connection "fail over"
check_active_slave eth2
# when only 1 slave is up
ip -n ${s_ns} link set $active_slave down
bond_check_connection "only 1 slave up"
check_active_slave eth0
# when a higher prio slave changes to up
ip -n ${s_ns} link set eth2 up
bond_check_connection "higher prio slave up"
case $primary_reselect in
"0")
check_active_slave "eth2"
;;
"1")
check_active_slave "eth0"
;;
"2")
check_active_slave "eth0"
;;
esac
local pre_active_slave=$active_slave
# when the primary slave changes to up
ip -n ${s_ns} link set eth1 up
bond_check_connection "primary slave up"
case $primary_reselect in
"0")
check_active_slave "eth1"
;;
"1")
check_active_slave "$pre_active_slave"
;;
"2")
check_active_slave "$pre_active_slave"
ip -n ${s_ns} link set $active_slave down
bond_check_connection "pre_active slave down"
check_active_slave "eth1"
;;
esac
# Test changing bond slave prio
if [[ "$primary_reselect" == "0" ]];then
ip -n ${s_ns} link set eth0 type bond_slave prio 1000000
ip -n ${s_ns} link set eth1 type bond_slave prio 0
ip -n ${s_ns} link set eth2 type bond_slave prio -50
ip -n ${s_ns} -d link show eth0 | grep -q 'prio 1000000'
check_err $? "eth0 prio is not 1000000"
ip -n ${s_ns} -d link show eth1 | grep -q 'prio 0'
check_err $? "eth1 prio is not 0"
ip -n ${s_ns} -d link show eth2 | grep -q 'prio -50'
check_err $? "eth3 prio is not -50"
check_active_slave "eth1"
ip -n ${s_ns} link set $active_slave down
bond_check_connection "change slave prio"
check_active_slave "eth0"
fi
}
prio_miimon()
{
local primary_reselect
local mode=$1
for primary_reselect in 0 1 2; do
prio_test "mode $mode miimon 100 primary eth1 primary_reselect $primary_reselect"
log_test "prio" "$mode miimon primary_reselect $primary_reselect"
done
}
prio_arp()
{
local primary_reselect
local mode=$1
for primary_reselect in 0 1 2; do
prio_test "mode active-backup arp_interval 100 arp_ip_target ${g_ip4} primary eth1 primary_reselect $primary_reselect"
log_test "prio" "$mode arp_ip_target primary_reselect $primary_reselect"
done
}
prio_ns()
{
local primary_reselect
local mode=$1
if skip_ns; then
log_test_skip "prio ns" "Current iproute or kernel doesn't support bond option 'ns_ip6_target'."
return 0
fi
for primary_reselect in 0 1 2; do
prio_test "mode active-backup arp_interval 100 ns_ip6_target ${g_ip6} primary eth1 primary_reselect $primary_reselect"
log_test "prio" "$mode ns_ip6_target primary_reselect $primary_reselect"
done
}
prio()
{
local mode modes="active-backup balance-tlb balance-alb"
if skip_prio; then
log_test_skip "prio" "Current iproute or kernel doesn't support bond option 'prio'."
return 0
fi
for mode in $modes; do
prio_miimon $mode
prio_arp $mode
prio_ns $mode
done
}
arp_validate_test()
{
local param="$1"
RET=0
# create bond
bond_reset "${param}"
bond_check_connection
[ $RET -ne 0 ] && log_test "arp_validate" "$retmsg"
# wait for a while to make sure the mii status is stable
sleep 5
for i in $(seq 0 2); do
mii_status=$(cmd_jq "ip -n ${s_ns} -j -d link show eth$i" ".[].linkinfo.info_slave_data.mii_status")
if [ ${mii_status} != "UP" ]; then
RET=1
log_test "arp_validate" "interface eth$i mii_status $mii_status"
fi
done
}
arp_validate_arp()
{
local mode=$1
local val
for val in $(seq 0 6); do
arp_validate_test "mode $mode arp_interval 100 arp_ip_target ${g_ip4} arp_validate $val"
log_test "arp_validate" "$mode arp_ip_target arp_validate $val"
done
}
arp_validate_ns()
{
local mode=$1
local val
if skip_ns; then
log_test_skip "arp_validate ns" "Current iproute or kernel doesn't support bond option 'ns_ip6_target'."
return 0
fi
for val in $(seq 0 6); do
arp_validate_test "mode $mode arp_interval 100 ns_ip6_target ${g_ip6} arp_validate $val"
log_test "arp_validate" "$mode ns_ip6_target arp_validate $val"
done
}
arp_validate()
{
arp_validate_arp "active-backup"
arp_validate_ns "active-backup"
}
trap cleanup EXIT
setup_prepare
setup_wait
tests_run
exit $EXIT_STATUS


@@ -0,0 +1,143 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# Topology for Bond mode 1,5,6 testing
#
# +-------------------------------------+
# | bond0 |
# | + | Server
# | eth0 | eth1 eth2 | 192.0.2.1/24
# | +-------------------+ | 2001:db8::1/24
# | | | | |
# +-------------------------------------+
# | | |
# +-------------------------------------+
# | | | | |
# | +---+---------+---------+---+ | Gateway
# | | br0 | | 192.0.2.254/24
# | +-------------+-------------+ | 2001:db8::254/24
# | | |
# +-------------------------------------+
# |
# +-------------------------------------+
# | | | Client
# | + | 192.0.2.10/24
# | eth0 | 2001:db8::10/24
# +-------------------------------------+
s_ns="s-$(mktemp -u XXXXXX)"
c_ns="c-$(mktemp -u XXXXXX)"
g_ns="g-$(mktemp -u XXXXXX)"
s_ip4="192.0.2.1"
c_ip4="192.0.2.10"
g_ip4="192.0.2.254"
s_ip6="2001:db8::1"
c_ip6="2001:db8::10"
g_ip6="2001:db8::254"
gateway_create()
{
ip netns add ${g_ns}
ip -n ${g_ns} link add br0 type bridge
ip -n ${g_ns} link set br0 up
ip -n ${g_ns} addr add ${g_ip4}/24 dev br0
ip -n ${g_ns} addr add ${g_ip6}/24 dev br0
}
gateway_destroy()
{
ip -n ${g_ns} link del br0
ip netns del ${g_ns}
}
server_create()
{
ip netns add ${s_ns}
ip -n ${s_ns} link add bond0 type bond mode active-backup miimon 100
for i in $(seq 0 2); do
ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns}
ip -n ${g_ns} link set s${i} up
ip -n ${g_ns} link set s${i} master br0
ip -n ${s_ns} link set eth${i} master bond0
done
ip -n ${s_ns} link set bond0 up
ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
sleep 2
}
# Reset bond with new mode and options
bond_reset()
{
local param="$1"
ip -n ${s_ns} link set bond0 down
ip -n ${s_ns} link del bond0
ip -n ${s_ns} link add bond0 type bond $param
for i in $(seq 0 2); do
ip -n ${s_ns} link set eth$i master bond0
done
ip -n ${s_ns} link set bond0 up
ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
sleep 2
}
server_destroy()
{
for i in $(seq 0 2); do
ip -n ${s_ns} link del eth${i}
done
ip netns del ${s_ns}
}
client_create()
{
ip netns add ${c_ns}
ip -n ${c_ns} link add eth0 type veth peer name c0 netns ${g_ns}
ip -n ${g_ns} link set c0 up
ip -n ${g_ns} link set c0 master br0
ip -n ${c_ns} link set eth0 up
ip -n ${c_ns} addr add ${c_ip4}/24 dev eth0
ip -n ${c_ns} addr add ${c_ip6}/24 dev eth0
}
client_destroy()
{
ip -n ${c_ns} link del eth0
ip netns del ${c_ns}
}
setup_prepare()
{
gateway_create
server_create
client_create
}
cleanup()
{
pre_cleanup
client_destroy
server_destroy
gateway_destroy
}
bond_check_connection()
{
local msg=${1:-"check connection"}
sleep 2
ip netns exec ${s_ns} ping ${c_ip4} -c5 -i 0.1 &>/dev/null
check_err $? "${msg}: ping failed"
ip netns exec ${s_ns} ping6 ${c_ip6} -c5 -i 0.1 &>/dev/null
check_err $? "${msg}: ping6 failed"
}


@@ -1,245 +0,0 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# Test bonding option prio
#
ALL_TESTS="
prio_arp_ip_target_test
prio_miimon_test
"
REQUIRE_MZ=no
REQUIRE_JQ=no
NUM_NETIFS=0
lib_dir=$(dirname "$0")
source "$lib_dir"/net_forwarding_lib.sh
destroy()
{
ip link del bond0 &>/dev/null
ip link del br0 &>/dev/null
ip link del veth0 &>/dev/null
ip link del veth1 &>/dev/null
ip link del veth2 &>/dev/null
ip netns del ns1 &>/dev/null
ip link del veth3 &>/dev/null
}
cleanup()
{
pre_cleanup
destroy
}
skip()
{
local skip=1
ip link add name bond0 type bond mode 1 miimon 100 &>/dev/null
ip link add name veth0 type veth peer name veth0_p
ip link set veth0 master bond0
# check if iproute support prio option
ip link set dev veth0 type bond_slave prio 10
[[ $? -ne 0 ]] && skip=0
# check if bonding support prio option
ip -d link show veth0 | grep -q "prio 10"
[[ $? -ne 0 ]] && skip=0
ip link del bond0 &>/dev/null
ip link del veth0
return $skip
}
active_slave=""
check_active_slave()
{
local target_active_slave=$1
active_slave="$(cat /sys/class/net/bond0/bonding/active_slave)"
test "$active_slave" = "$target_active_slave"
check_err $? "Current active slave is $active_slave but not $target_active_slave"
}
# Test bonding prio option with mode=$mode monitor=$monitor
# and primary_reselect=$primary_reselect
prio_test()
{
RET=0
local monitor=$1
local mode=$2
local primary_reselect=$3
local bond_ip4="192.169.1.2"
local peer_ip4="192.169.1.1"
local bond_ip6="2009:0a:0b::02"
local peer_ip6="2009:0a:0b::01"
# create veths
ip link add name veth0 type veth peer name veth0_p
ip link add name veth1 type veth peer name veth1_p
ip link add name veth2 type veth peer name veth2_p
# create bond
if [[ "$monitor" == "miimon" ]];then
ip link add name bond0 type bond mode $mode miimon 100 primary veth1 primary_reselect $primary_reselect
elif [[ "$monitor" == "arp_ip_target" ]];then
ip link add name bond0 type bond mode $mode arp_interval 1000 arp_ip_target $peer_ip4 primary veth1 primary_reselect $primary_reselect
elif [[ "$monitor" == "ns_ip6_target" ]];then
ip link add name bond0 type bond mode $mode arp_interval 1000 ns_ip6_target $peer_ip6 primary veth1 primary_reselect $primary_reselect
fi
ip link set bond0 up
ip link set veth0 master bond0
ip link set veth1 master bond0
ip link set veth2 master bond0
# check bonding member prio value
ip link set dev veth0 type bond_slave prio 0
ip link set dev veth1 type bond_slave prio 10
ip link set dev veth2 type bond_slave prio 11
ip -d link show veth0 | grep -q 'prio 0'
check_err $? "veth0 prio is not 0"
ip -d link show veth1 | grep -q 'prio 10'
check_err $? "veth0 prio is not 10"
ip -d link show veth2 | grep -q 'prio 11'
check_err $? "veth0 prio is not 11"
ip link set veth0 up
ip link set veth1 up
ip link set veth2 up
ip link set veth0_p up
ip link set veth1_p up
ip link set veth2_p up
# prepare ping target
ip link add name br0 type bridge
ip link set br0 up
ip link set veth0_p master br0
ip link set veth1_p master br0
ip link set veth2_p master br0
ip link add name veth3 type veth peer name veth3_p
ip netns add ns1
ip link set veth3_p master br0 up
ip link set veth3 netns ns1 up
ip netns exec ns1 ip addr add $peer_ip4/24 dev veth3
ip netns exec ns1 ip addr add $peer_ip6/64 dev veth3
ip addr add $bond_ip4/24 dev bond0
ip addr add $bond_ip6/64 dev bond0
sleep 5
ping $peer_ip4 -c5 -I bond0 &>/dev/null
check_err $? "ping failed 1."
ping6 $peer_ip6 -c5 -I bond0 &>/dev/null
check_err $? "ping6 failed 1."
# active salve should be the primary slave
check_active_slave veth1
# active slave should be the higher prio slave
ip link set $active_slave down
ping $peer_ip4 -c5 -I bond0 &>/dev/null
check_err $? "ping failed 2."
check_active_slave veth2
# when only 1 slave is up
ip link set $active_slave down
ping $peer_ip4 -c5 -I bond0 &>/dev/null
check_err $? "ping failed 3."
check_active_slave veth0
# when a higher prio slave change to up
ip link set veth2 up
ping $peer_ip4 -c5 -I bond0 &>/dev/null
check_err $? "ping failed 4."
case $primary_reselect in
"0")
check_active_slave "veth2"
;;
"1")
check_active_slave "veth0"
;;
"2")
check_active_slave "veth0"
;;
esac
local pre_active_slave=$active_slave
# when the primary slave change to up
ip link set veth1 up
ping $peer_ip4 -c5 -I bond0 &>/dev/null
check_err $? "ping failed 5."
case $primary_reselect in
"0")
check_active_slave "veth1"
;;
"1")
check_active_slave "$pre_active_slave"
;;
"2")
check_active_slave "$pre_active_slave"
ip link set $active_slave down
ping $peer_ip4 -c5 -I bond0 &>/dev/null
check_err $? "ping failed 6."
check_active_slave "veth1"
;;
esac
# Test changing bond salve prio
if [[ "$primary_reselect" == "0" ]];then
ip link set dev veth0 type bond_slave prio 1000000
ip link set dev veth1 type bond_slave prio 0
ip link set dev veth2 type bond_slave prio -50
ip -d link show veth0 | grep -q 'prio 1000000'
check_err $? "veth0 prio is not 1000000"
ip -d link show veth1 | grep -q 'prio 0'
check_err $? "veth1 prio is not 0"
ip -d link show veth2 | grep -q 'prio -50'
check_err $? "veth3 prio is not -50"
check_active_slave "veth1"
ip link set $active_slave down
ping $peer_ip4 -c5 -I bond0 &>/dev/null
check_err $? "ping failed 7."
check_active_slave "veth0"
fi
cleanup
log_test "prio_test" "Test bonding option 'prio' with mode=$mode monitor=$monitor and primary_reselect=$primary_reselect"
}
prio_miimon_test()
{
local mode
local primary_reselect
for mode in 1 5 6; do
for primary_reselect in 0 1 2; do
prio_test "miimon" $mode $primary_reselect
done
done
}
prio_arp_ip_target_test()
{
local primary_reselect
for primary_reselect in 0 1 2; do
prio_test "arp_ip_target" 1 $primary_reselect
done
}
if skip;then
log_test_skip "option_prio.sh" "Current iproute doesn't support 'prio'."
exit 0
fi
trap cleanup EXIT
tests_run
exit "$EXIT_STATUS"


@@ -48,3 +48,4 @@ CONFIG_BAREUDP=m
CONFIG_IPV6_IOAM6_LWTUNNEL=y
CONFIG_CRYPTO_SM4_GENERIC=y
CONFIG_AMT=m
CONFIG_IP_SCTP=m


@@ -913,6 +913,7 @@ test_listener()
$client4_port > /dev/null 2>&1 &
local listener_pid=$!
sleep 0.5
verify_listener_events $client_evts $LISTENER_CREATED $AF_INET 10.0.2.2 $client4_port
# ADD_ADDR from client to server machine reusing the subflow port
@@ -928,6 +929,7 @@ test_listener()
# Delete the listener from the client ns, if one was created
kill_wait $listener_pid
sleep 0.5
verify_listener_events $client_evts $LISTENER_CLOSED $AF_INET 10.0.2.2 $client4_port
}


@@ -62,7 +62,7 @@ class OvsDatapath(GenericNetlinkSocket):
nla_map = (
("OVS_DP_ATTR_UNSPEC", "none"),
("OVS_DP_ATTR_NAME", "asciiz"),
("OVS_DP_ATTR_UPCALL_PID", "uint32"),
("OVS_DP_ATTR_UPCALL_PID", "array(uint32)"),
("OVS_DP_ATTR_STATS", "dpstats"),
("OVS_DP_ATTR_MEGAFLOW_STATS", "megaflowstats"),
("OVS_DP_ATTR_USER_FEATURES", "uint32"),