Merge tag 'net-5.14-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from ipsec.

  Current release - regressions:

   - sched: taprio: fix init procedure to avoid inf loop when dumping

   - sctp: move the active_key update after sh_keys is added

  Current release - new code bugs:

   - sparx5: fix build with old GCC & bitmask on 32-bit targets

  Previous releases - regressions:

   - xfrm: redo the PREEMPT_RT RCU vs hash_resize_mutex deadlock fix

   - xfrm: fixes for the compat netlink attribute translator

   - phy: micrel: Fix detection of ksz87xx switch

  Previous releases - always broken:

   - gro: set inner transport header offset in tcp/udp GRO hook to avoid
     crashes when such packets reach GSO

   - vsock: handle VIRTIO_VSOCK_OP_CREDIT_REQUEST, as required by spec

   - dsa: sja1105: fix static FDB entries on SJA1105P/Q/R/S and SJA1110

   - bridge: validate the NUD_PERMANENT bit when adding an extern_learn
     FDB entry

   - usb: lan78xx: don't modify phy_device state concurrently

   - usb: pegasus: check for errors of IO routines"

* tag 'net-5.14-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (48 commits)
  net: vxge: fix use-after-free in vxge_device_unregister
  net: fec: fix use-after-free in fec_drv_remove
  net: pegasus: fix uninit-value in get_interrupt_interval
  net: ethernet: ti: am65-cpsw: fix crash in am65_cpsw_port_offload_fwd_mark_update()
  bnx2x: fix an error code in bnx2x_nic_load()
  net: wwan: iosm: fix recursive lock acquire in unregister
  net: wwan: iosm: correct data protocol mask bit
  net: wwan: iosm: endianness type correction
  net: wwan: iosm: fix lkp buildbot warning
  net: usb: lan78xx: don't modify phy_device state concurrently
  docs: networking: netdevsim rules
  net: usb: pegasus: Remove the changelog and DRIVER_VERSION.
  net: usb: pegasus: Check the return value of get_geristers() and friends;
  net/prestera: Fix devlink groups leakage in error flow
  net: sched: fix lockdep_set_class() typo error for sch->seqlock
  net: dsa: qca: ar9331: reorder MDIO write sequence
  VSOCK: handle VIRTIO_VSOCK_OP_CREDIT_REQUEST
  mptcp: drop unused rcu member in mptcp_pm_addr_entry
  net: ipv6: fix returned variable type in ip6_skb_dst_mtu
  nfp: update ethtool reporting of pauseframe control
  ...
Linus Torvalds 2021-08-05 12:26:00 -07:00
commit 902e7f373f
48 changed files with 585 additions and 199 deletions


@ -228,6 +228,23 @@ before posting to the mailing list. The patchwork build bot instance
gets overloaded very easily and netdev@vger really doesn't need more gets overloaded very easily and netdev@vger really doesn't need more
traffic if we can help it. traffic if we can help it.
netdevsim is great, can I extend it for my out-of-tree tests?
-------------------------------------------------------------
No, `netdevsim` is a test vehicle solely for upstream tests.
(Please add your tests under tools/testing/selftests/.)
We also give no guarantees that `netdevsim` won't change in the future
in a way which would break what would normally be considered uAPI.
Is netdevsim considered a "user" of an API?
-------------------------------------------
Linux kernel has a long standing rule that no API should be added unless
it has a real, in-tree user. Mock-ups and tests based on `netdevsim` are
strongly encouraged when adding new APIs, but `netdevsim` in itself
is **not** considered a use case/user.
Any other tips to help ensure my net/net-next patch gets OK'd? Any other tips to help ensure my net/net-next patch gets OK'd?
-------------------------------------------------------------- --------------------------------------------------------------
Attention to detail. Re-read your own work as if you were the Attention to detail. Re-read your own work as if you were the


@ -73,7 +73,9 @@ IF_OPER_LOWERLAYERDOWN (3):
state (f.e. VLAN). state (f.e. VLAN).
IF_OPER_TESTING (4): IF_OPER_TESTING (4):
Unused in current kernel. Interface is in testing mode, for example executing driver self-tests
or media (cable) test. It can't be used for normal traffic until tests
complete.
IF_OPER_DORMANT (5): IF_OPER_DORMANT (5):
Interface is L1 up, but waiting for an external event, f.e. for a Interface is L1 up, but waiting for an external event, f.e. for a
@ -111,7 +113,7 @@ it as lower layer.
Note that for certain kind of soft-devices, which are not managing any Note that for certain kind of soft-devices, which are not managing any
real hardware, it is possible to set this bit from userspace. One real hardware, it is possible to set this bit from userspace. One
should use TVL IFLA_CARRIER to do so. should use TLV IFLA_CARRIER to do so.
netif_carrier_ok() can be used to query that bit. netif_carrier_ok() can be used to query that bit.
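
For context on the netif_carrier_ok() note above, here is a minimal sketch (hypothetical helper, not part of this series) of how a driver typically pairs that query with the carrier update calls:

#include <linux/netdevice.h>

/* Hypothetical link handler: query the carrier bit described above and
 * toggle it when the lower-layer link state changes.
 */
static void example_update_carrier(struct net_device *dev, bool link_up)
{
        if (link_up && !netif_carrier_ok(dev))
                netif_carrier_on(dev);          /* L1 came back up */
        else if (!link_up && netif_carrier_ok(dev))
                netif_carrier_off(dev);         /* L1 went down */
}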


@ -682,7 +682,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info); struct image_info *img_info);
void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl); void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl);
int mhi_prepare_channel(struct mhi_controller *mhi_cntrl, int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan); struct mhi_chan *mhi_chan, unsigned int flags);
int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl, int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan); struct mhi_chan *mhi_chan);
void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl, void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,


@ -1430,7 +1430,7 @@ exit_unprepare_channel:
} }
int mhi_prepare_channel(struct mhi_controller *mhi_cntrl, int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan) struct mhi_chan *mhi_chan, unsigned int flags)
{ {
int ret = 0; int ret = 0;
struct device *dev = &mhi_chan->mhi_dev->dev; struct device *dev = &mhi_chan->mhi_dev->dev;
@ -1455,6 +1455,9 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
if (ret) if (ret)
goto error_pm_state; goto error_pm_state;
if (mhi_chan->dir == DMA_FROM_DEVICE)
mhi_chan->pre_alloc = !!(flags & MHI_CH_INBOUND_ALLOC_BUFS);
/* Pre-allocate buffer for xfer ring */ /* Pre-allocate buffer for xfer ring */
if (mhi_chan->pre_alloc) { if (mhi_chan->pre_alloc) {
int nr_el = get_nr_avail_ring_elements(mhi_cntrl, int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
@ -1610,7 +1613,7 @@ void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
} }
/* Move channel to start state */ /* Move channel to start state */
int mhi_prepare_for_transfer(struct mhi_device *mhi_dev) int mhi_prepare_for_transfer(struct mhi_device *mhi_dev, unsigned int flags)
{ {
int ret, dir; int ret, dir;
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl; struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
@ -1621,7 +1624,7 @@ int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
if (!mhi_chan) if (!mhi_chan)
continue; continue;
ret = mhi_prepare_channel(mhi_cntrl, mhi_chan); ret = mhi_prepare_channel(mhi_cntrl, mhi_chan, flags);
if (ret) if (ret)
goto error_open_chan; goto error_open_chan;
} }


@ -837,16 +837,24 @@ static int ar9331_mdio_write(void *ctx, u32 reg, u32 val)
return 0; return 0;
} }
ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg, val); /* In case of this switch we work with 32bit registers on top of 16bit
if (ret < 0) * bus. Some registers (for example access to forwarding database) have
goto error; * trigger bit on the first 16bit half of request, the result and
* configuration of request in the second half.
* To make it work properly, we should do the second part of transfer
* before the first one is done.
*/
ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg + 2, ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg + 2,
val >> 16); val >> 16);
if (ret < 0) if (ret < 0)
goto error; goto error;
ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg, val);
if (ret < 0)
goto error;
return 0; return 0;
error: error:
dev_err_ratelimited(&sbus->dev, "Bus error. Failed to write register.\n"); dev_err_ratelimited(&sbus->dev, "Bus error. Failed to write register.\n");
return ret; return ret;


@ -304,6 +304,15 @@ sja1105pqrs_common_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
hostcmd = SJA1105_HOSTCMD_INVALIDATE; hostcmd = SJA1105_HOSTCMD_INVALIDATE;
} }
sja1105_packing(p, &hostcmd, 25, 23, size, op); sja1105_packing(p, &hostcmd, 25, 23, size, op);
}
static void
sja1105pqrs_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
enum packing_op op)
{
int entry_size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
sja1105pqrs_common_l2_lookup_cmd_packing(buf, cmd, op, entry_size);
/* Hack - The hardware takes the 'index' field within /* Hack - The hardware takes the 'index' field within
* struct sja1105_l2_lookup_entry as the index on which this command * struct sja1105_l2_lookup_entry as the index on which this command
@ -313,26 +322,18 @@ sja1105pqrs_common_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
* such that our API doesn't need to ask for a full-blown entry * such that our API doesn't need to ask for a full-blown entry
* structure when e.g. a delete is requested. * structure when e.g. a delete is requested.
*/ */
sja1105_packing(buf, &cmd->index, 15, 6, sja1105_packing(buf, &cmd->index, 15, 6, entry_size, op);
SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY, op);
}
static void
sja1105pqrs_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
enum packing_op op)
{
int size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
return sja1105pqrs_common_l2_lookup_cmd_packing(buf, cmd, op, size);
} }
static void static void
sja1110_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd, sja1110_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
enum packing_op op) enum packing_op op)
{ {
int size = SJA1110_SIZE_L2_LOOKUP_ENTRY; int entry_size = SJA1110_SIZE_L2_LOOKUP_ENTRY;
return sja1105pqrs_common_l2_lookup_cmd_packing(buf, cmd, op, size); sja1105pqrs_common_l2_lookup_cmd_packing(buf, cmd, op, entry_size);
sja1105_packing(buf, &cmd->index, 10, 1, entry_size, op);
} }
/* The switch is so retarded that it makes our command/entry abstraction /* The switch is so retarded that it makes our command/entry abstraction


@ -1318,10 +1318,11 @@ static int sja1105et_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin,
int sja1105et_fdb_add(struct dsa_switch *ds, int port, int sja1105et_fdb_add(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid) const unsigned char *addr, u16 vid)
{ {
struct sja1105_l2_lookup_entry l2_lookup = {0}; struct sja1105_l2_lookup_entry l2_lookup = {0}, tmp;
struct sja1105_private *priv = ds->priv; struct sja1105_private *priv = ds->priv;
struct device *dev = ds->dev; struct device *dev = ds->dev;
int last_unused = -1; int last_unused = -1;
int start, end, i;
int bin, way, rc; int bin, way, rc;
bin = sja1105et_fdb_hash(priv, addr, vid); bin = sja1105et_fdb_hash(priv, addr, vid);
@ -1333,7 +1334,7 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
* mask? If yes, we need to do nothing. If not, we need * mask? If yes, we need to do nothing. If not, we need
* to rewrite the entry by adding this port to it. * to rewrite the entry by adding this port to it.
*/ */
if (l2_lookup.destports & BIT(port)) if ((l2_lookup.destports & BIT(port)) && l2_lookup.lockeds)
return 0; return 0;
l2_lookup.destports |= BIT(port); l2_lookup.destports |= BIT(port);
} else { } else {
@ -1364,6 +1365,7 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
index, NULL, false); index, NULL, false);
} }
} }
l2_lookup.lockeds = true;
l2_lookup.index = sja1105et_fdb_index(bin, way); l2_lookup.index = sja1105et_fdb_index(bin, way);
rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP, rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
@ -1372,6 +1374,29 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
if (rc < 0) if (rc < 0)
return rc; return rc;
/* Invalidate a dynamically learned entry if that exists */
start = sja1105et_fdb_index(bin, 0);
end = sja1105et_fdb_index(bin, way);
for (i = start; i < end; i++) {
rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
i, &tmp);
if (rc == -ENOENT)
continue;
if (rc)
return rc;
if (tmp.macaddr != ether_addr_to_u64(addr) || tmp.vlanid != vid)
continue;
rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
i, NULL, false);
if (rc)
return rc;
break;
}
return sja1105_static_fdb_change(priv, port, &l2_lookup, true); return sja1105_static_fdb_change(priv, port, &l2_lookup, true);
} }
@ -1413,32 +1438,30 @@ int sja1105et_fdb_del(struct dsa_switch *ds, int port,
int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port, int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid) const unsigned char *addr, u16 vid)
{ {
struct sja1105_l2_lookup_entry l2_lookup = {0}; struct sja1105_l2_lookup_entry l2_lookup = {0}, tmp;
struct sja1105_private *priv = ds->priv; struct sja1105_private *priv = ds->priv;
int rc, i; int rc, i;
/* Search for an existing entry in the FDB table */ /* Search for an existing entry in the FDB table */
l2_lookup.macaddr = ether_addr_to_u64(addr); l2_lookup.macaddr = ether_addr_to_u64(addr);
l2_lookup.vlanid = vid; l2_lookup.vlanid = vid;
l2_lookup.iotag = SJA1105_S_TAG;
l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0); l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
if (priv->vlan_state != SJA1105_VLAN_UNAWARE) { l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
} else {
l2_lookup.mask_vlanid = 0;
l2_lookup.mask_iotag = 0;
}
l2_lookup.destports = BIT(port); l2_lookup.destports = BIT(port);
tmp = l2_lookup;
rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP, rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
SJA1105_SEARCH, &l2_lookup); SJA1105_SEARCH, &tmp);
if (rc == 0) { if (rc == 0 && tmp.index != SJA1105_MAX_L2_LOOKUP_COUNT - 1) {
/* Found and this port is already in the entry's /* Found a static entry and this port is already in the entry's
* port mask => job done * port mask => job done
*/ */
if (l2_lookup.destports & BIT(port)) if ((tmp.destports & BIT(port)) && tmp.lockeds)
return 0; return 0;
l2_lookup = tmp;
/* l2_lookup.index is populated by the switch in case it /* l2_lookup.index is populated by the switch in case it
* found something. * found something.
*/ */
@ -1460,16 +1483,46 @@ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
dev_err(ds->dev, "FDB is full, cannot add entry.\n"); dev_err(ds->dev, "FDB is full, cannot add entry.\n");
return -EINVAL; return -EINVAL;
} }
l2_lookup.lockeds = true;
l2_lookup.index = i; l2_lookup.index = i;
skip_finding_an_index: skip_finding_an_index:
l2_lookup.lockeds = true;
rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP, rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
l2_lookup.index, &l2_lookup, l2_lookup.index, &l2_lookup,
true); true);
if (rc < 0) if (rc < 0)
return rc; return rc;
/* The switch learns dynamic entries and looks up the FDB left to
* right. It is possible that our addition was concurrent with the
* dynamic learning of the same address, so now that the static entry
* has been installed, we are certain that address learning for this
* particular address has been turned off, so the dynamic entry either
* is in the FDB at an index smaller than the static one, or isn't (it
* can also be at a larger index, but in that case it is inactive
* because the static FDB entry will match first, and the dynamic one
* will eventually age out). Search for a dynamically learned address
* prior to our static one and invalidate it.
*/
tmp = l2_lookup;
rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
SJA1105_SEARCH, &tmp);
if (rc < 0) {
dev_err(ds->dev,
"port %d failed to read back entry for %pM vid %d: %pe\n",
port, addr, vid, ERR_PTR(rc));
return rc;
}
if (tmp.index < l2_lookup.index) {
rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
tmp.index, NULL, false);
if (rc < 0)
return rc;
}
return sja1105_static_fdb_change(priv, port, &l2_lookup, true); return sja1105_static_fdb_change(priv, port, &l2_lookup, true);
} }
@ -1483,15 +1536,8 @@ int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port,
l2_lookup.macaddr = ether_addr_to_u64(addr); l2_lookup.macaddr = ether_addr_to_u64(addr);
l2_lookup.vlanid = vid; l2_lookup.vlanid = vid;
l2_lookup.iotag = SJA1105_S_TAG;
l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0); l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
if (priv->vlan_state != SJA1105_VLAN_UNAWARE) { l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
} else {
l2_lookup.mask_vlanid = 0;
l2_lookup.mask_iotag = 0;
}
l2_lookup.destports = BIT(port); l2_lookup.destports = BIT(port);
rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP, rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,


@ -2669,7 +2669,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
} }
/* Allocated memory for FW statistics */ /* Allocated memory for FW statistics */
if (bnx2x_alloc_fw_stats_mem(bp)) rc = bnx2x_alloc_fw_stats_mem(bp);
if (rc)
LOAD_ERROR_EXIT(bp, load_error0); LOAD_ERROR_EXIT(bp, load_error0);
/* request pf to initialize status blocks */ /* request pf to initialize status blocks */


@ -3843,13 +3843,13 @@ fec_drv_remove(struct platform_device *pdev)
if (of_phy_is_fixed_link(np)) if (of_phy_is_fixed_link(np))
of_phy_deregister_fixed_link(np); of_phy_deregister_fixed_link(np);
of_node_put(fep->phy_node); of_node_put(fep->phy_node);
free_netdev(ndev);
clk_disable_unprepare(fep->clk_ahb); clk_disable_unprepare(fep->clk_ahb);
clk_disable_unprepare(fep->clk_ipg); clk_disable_unprepare(fep->clk_ipg);
pm_runtime_put_noidle(&pdev->dev); pm_runtime_put_noidle(&pdev->dev);
pm_runtime_disable(&pdev->dev); pm_runtime_disable(&pdev->dev);
free_netdev(ndev);
return 0; return 0;
} }


@ -530,6 +530,8 @@ err_trap_register:
prestera_trap = &prestera_trap_items_arr[i]; prestera_trap = &prestera_trap_items_arr[i];
devlink_traps_unregister(devlink, &prestera_trap->trap, 1); devlink_traps_unregister(devlink, &prestera_trap->trap, 1);
} }
devlink_trap_groups_unregister(devlink, prestera_trap_groups_arr,
groups_count);
err_groups_register: err_groups_register:
kfree(trap_data->trap_items_arr); kfree(trap_data->trap_items_arr);
err_trap_items_alloc: err_trap_items_alloc:


@ -13,19 +13,26 @@
*/ */
#define VSTAX 73 #define VSTAX 73
static void ifh_encode_bitfield(void *ifh, u64 value, u32 pos, u32 width) #define ifh_encode_bitfield(ifh, value, pos, _width) \
({ \
u32 width = (_width); \
\
/* Max width is 5 bytes - 40 bits. In worst case this will
* spread over 6 bytes - 48 bits
*/ \
compiletime_assert(width <= 40, \
"Unsupported width, must be <= 40"); \
__ifh_encode_bitfield((ifh), (value), (pos), width); \
})
static void __ifh_encode_bitfield(void *ifh, u64 value, u32 pos, u32 width)
{ {
u8 *ifh_hdr = ifh; u8 *ifh_hdr = ifh;
/* Calculate the Start IFH byte position of this IFH bit position */ /* Calculate the Start IFH byte position of this IFH bit position */
u32 byte = (35 - (pos / 8)); u32 byte = (35 - (pos / 8));
/* Calculate the Start bit position in the Start IFH byte */ /* Calculate the Start bit position in the Start IFH byte */
u32 bit = (pos % 8); u32 bit = (pos % 8);
u64 encode = GENMASK(bit + width - 1, bit) & (value << bit); u64 encode = GENMASK_ULL(bit + width - 1, bit) & (value << bit);
/* Max width is 5 bytes - 40 bits. In worst case this will
* spread over 6 bytes - 48 bits
*/
compiletime_assert(width <= 40, "Unsupported width, must be <= 40");
/* The b0-b7 goes into the start IFH byte */ /* The b0-b7 goes into the start IFH byte */
if (encode & 0xFF) if (encode & 0xFF)
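
The GENMASK_ULL() switch above matters on 32-bit targets: GENMASK() produces an unsigned long mask, which cannot hold bit indices above 31, while bit + width - 1 can reach 46 here. A minimal sketch of the distinction (helper name is hypothetical, not from the patch):

#include <linux/bits.h>
#include <linux/types.h>

/* Return a 64-bit mask covering bits [bit, bit + width - 1]. GENMASK()
 * cannot represent bit indices above 31 on a 32-bit build, which is the
 * overflow the patch avoids.
 */
static u64 example_field_mask(u32 bit, u32 width)
{
        return GENMASK_ULL(bit + width - 1, bit);
}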


@ -819,7 +819,7 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
printk(version); printk(version);
#endif #endif
i = pci_enable_device(pdev); i = pcim_enable_device(pdev);
if (i) return i; if (i) return i;
/* natsemi has a non-standard PM control register /* natsemi has a non-standard PM control register
@ -852,7 +852,7 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
ioaddr = ioremap(iostart, iosize); ioaddr = ioremap(iostart, iosize);
if (!ioaddr) { if (!ioaddr) {
i = -ENOMEM; i = -ENOMEM;
goto err_ioremap; goto err_pci_request_regions;
} }
/* Work around the dropped serial bit. */ /* Work around the dropped serial bit. */
@ -974,9 +974,6 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
err_register_netdev: err_register_netdev:
iounmap(ioaddr); iounmap(ioaddr);
err_ioremap:
pci_release_regions(pdev);
err_pci_request_regions: err_pci_request_regions:
free_netdev(dev); free_netdev(dev);
return i; return i;
@ -3241,7 +3238,6 @@ static void natsemi_remove1(struct pci_dev *pdev)
NATSEMI_REMOVE_FILE(pdev, dspcfg_workaround); NATSEMI_REMOVE_FILE(pdev, dspcfg_workaround);
unregister_netdev (dev); unregister_netdev (dev);
pci_release_regions (pdev);
iounmap(ioaddr); iounmap(ioaddr);
free_netdev (dev); free_netdev (dev);
} }
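
The natsemi changes above lean on the managed ("devres") PCI helpers: after pcim_enable_device(), the device enable and any requested regions are released automatically on unbind, which is why the explicit pci_release_regions() calls could be dropped. A rough sketch of that pattern in a hypothetical driver (function names and BAR mask are illustrative):

#include <linux/bits.h>
#include <linux/pci.h>

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        int err;

        err = pcim_enable_device(pdev);         /* auto-disabled on unbind */
        if (err)
                return err;

        err = pcim_iomap_regions(pdev, BIT(0), "example");  /* BAR 0, auto-released */
        if (err)
                return err;

        /* ... rest of probe; no matching release/disable calls are needed */
        return 0;
}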


@ -3512,13 +3512,13 @@ static void vxge_device_unregister(struct __vxge_hw_device *hldev)
kfree(vdev->vpaths); kfree(vdev->vpaths);
/* we are safe to free it now */
free_netdev(dev);
vxge_debug_init(vdev->level_trace, "%s: ethernet device unregistered", vxge_debug_init(vdev->level_trace, "%s: ethernet device unregistered",
buf); buf);
vxge_debug_entryexit(vdev->level_trace, "%s: %s:%d Exiting...", buf, vxge_debug_entryexit(vdev->level_trace, "%s: %s:%d Exiting...", buf,
__func__, __LINE__); __func__, __LINE__);
/* we are safe to free it now */
free_netdev(dev);
} }
/* /*


@ -286,6 +286,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
/* Init to unknowns */ /* Init to unknowns */
ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE); ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
cmd->base.port = PORT_OTHER; cmd->base.port = PORT_OTHER;
cmd->base.speed = SPEED_UNKNOWN; cmd->base.speed = SPEED_UNKNOWN;
cmd->base.duplex = DUPLEX_UNKNOWN; cmd->base.duplex = DUPLEX_UNKNOWN;


@ -501,6 +501,7 @@ struct qede_fastpath {
#define QEDE_SP_HW_ERR 4 #define QEDE_SP_HW_ERR 4
#define QEDE_SP_ARFS_CONFIG 5 #define QEDE_SP_ARFS_CONFIG 5
#define QEDE_SP_AER 7 #define QEDE_SP_AER 7
#define QEDE_SP_DISABLE 8
#ifdef CONFIG_RFS_ACCEL #ifdef CONFIG_RFS_ACCEL
int qede_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb, int qede_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,


@ -1009,6 +1009,13 @@ static void qede_sp_task(struct work_struct *work)
struct qede_dev *edev = container_of(work, struct qede_dev, struct qede_dev *edev = container_of(work, struct qede_dev,
sp_task.work); sp_task.work);
/* Disable execution of this deferred work once
* qede removal is in progress, this stop any future
* scheduling of sp_task.
*/
if (test_bit(QEDE_SP_DISABLE, &edev->sp_flags))
return;
/* The locking scheme depends on the specific flag: /* The locking scheme depends on the specific flag:
* In case of QEDE_SP_RECOVERY, acquiring the RTNL lock is required to * In case of QEDE_SP_RECOVERY, acquiring the RTNL lock is required to
* ensure that ongoing flows are ended and new ones are not started. * ensure that ongoing flows are ended and new ones are not started.
@ -1300,6 +1307,7 @@ static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode)
qede_rdma_dev_remove(edev, (mode == QEDE_REMOVE_RECOVERY)); qede_rdma_dev_remove(edev, (mode == QEDE_REMOVE_RECOVERY));
if (mode != QEDE_REMOVE_RECOVERY) { if (mode != QEDE_REMOVE_RECOVERY) {
set_bit(QEDE_SP_DISABLE, &edev->sp_flags);
unregister_netdev(ndev); unregister_netdev(ndev);
cancel_delayed_work_sync(&edev->sp_task); cancel_delayed_work_sync(&edev->sp_task);


@ -2060,8 +2060,12 @@ static void am65_cpsw_port_offload_fwd_mark_update(struct am65_cpsw_common *comm
for (i = 1; i <= common->port_num; i++) { for (i = 1; i <= common->port_num; i++) {
struct am65_cpsw_port *port = am65_common_get_port(common, i); struct am65_cpsw_port *port = am65_common_get_port(common, i);
struct am65_cpsw_ndev_priv *priv = am65_ndev_to_priv(port->ndev); struct am65_cpsw_ndev_priv *priv;
if (!port->ndev)
continue;
priv = am65_ndev_to_priv(port->ndev);
priv->offload_fwd_mark = set_val; priv->offload_fwd_mark = set_val;
} }
} }


@ -335,7 +335,7 @@ static int mhi_net_newlink(void *ctxt, struct net_device *ndev, u32 if_id,
u64_stats_init(&mhi_netdev->stats.tx_syncp); u64_stats_init(&mhi_netdev->stats.tx_syncp);
/* Start MHI channels */ /* Start MHI channels */
err = mhi_prepare_for_transfer(mhi_dev); err = mhi_prepare_for_transfer(mhi_dev, 0);
if (err) if (err)
goto out_err; goto out_err;


@ -401,11 +401,11 @@ static int ksz8041_config_aneg(struct phy_device *phydev)
} }
static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev, static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
const u32 ksz_phy_id) const bool ksz_8051)
{ {
int ret; int ret;
if ((phydev->phy_id & MICREL_PHY_ID_MASK) != ksz_phy_id) if ((phydev->phy_id & MICREL_PHY_ID_MASK) != PHY_ID_KSZ8051)
return 0; return 0;
ret = phy_read(phydev, MII_BMSR); ret = phy_read(phydev, MII_BMSR);
@ -418,7 +418,7 @@ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
* the switch does not. * the switch does not.
*/ */
ret &= BMSR_ERCAP; ret &= BMSR_ERCAP;
if (ksz_phy_id == PHY_ID_KSZ8051) if (ksz_8051)
return ret; return ret;
else else
return !ret; return !ret;
@ -426,7 +426,7 @@ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
static int ksz8051_match_phy_device(struct phy_device *phydev) static int ksz8051_match_phy_device(struct phy_device *phydev)
{ {
return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ8051); return ksz8051_ksz8795_match_phy_device(phydev, true);
} }
static int ksz8081_config_init(struct phy_device *phydev) static int ksz8081_config_init(struct phy_device *phydev)
@ -535,7 +535,7 @@ static int ksz8061_config_init(struct phy_device *phydev)
static int ksz8795_match_phy_device(struct phy_device *phydev) static int ksz8795_match_phy_device(struct phy_device *phydev)
{ {
return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ87XX); return ksz8051_ksz8795_match_phy_device(phydev, false);
} }
static int ksz9021_load_values_from_of(struct phy_device *phydev, static int ksz9021_load_values_from_of(struct phy_device *phydev,


@ -1154,7 +1154,7 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
{ {
struct phy_device *phydev = dev->net->phydev; struct phy_device *phydev = dev->net->phydev;
struct ethtool_link_ksettings ecmd; struct ethtool_link_ksettings ecmd;
int ladv, radv, ret; int ladv, radv, ret, link;
u32 buf; u32 buf;
/* clear LAN78xx interrupt status */ /* clear LAN78xx interrupt status */
@ -1162,9 +1162,12 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
if (unlikely(ret < 0)) if (unlikely(ret < 0))
return -EIO; return -EIO;
mutex_lock(&phydev->lock);
phy_read_status(phydev); phy_read_status(phydev);
link = phydev->link;
mutex_unlock(&phydev->lock);
if (!phydev->link && dev->link_on) { if (!link && dev->link_on) {
dev->link_on = false; dev->link_on = false;
/* reset MAC */ /* reset MAC */
@ -1177,7 +1180,7 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
return -EIO; return -EIO;
del_timer(&dev->stat_monitor); del_timer(&dev->stat_monitor);
} else if (phydev->link && !dev->link_on) { } else if (link && !dev->link_on) {
dev->link_on = true; dev->link_on = true;
phy_ethtool_ksettings_get(phydev, &ecmd); phy_ethtool_ksettings_get(phydev, &ecmd);
@ -1466,9 +1469,14 @@ static int lan78xx_set_eee(struct net_device *net, struct ethtool_eee *edata)
static u32 lan78xx_get_link(struct net_device *net) static u32 lan78xx_get_link(struct net_device *net)
{ {
phy_read_status(net->phydev); u32 link;
return net->phydev->link; mutex_lock(&net->phydev->lock);
phy_read_status(net->phydev);
link = net->phydev->link;
mutex_unlock(&net->phydev->lock);
return link;
} }
static void lan78xx_get_drvinfo(struct net_device *net, static void lan78xx_get_drvinfo(struct net_device *net,


@ -1,31 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only // SPDX-License-Identifier: GPL-2.0-only
/* /*
* Copyright (c) 1999-2013 Petko Manolov (petkan@nucleusys.com) * Copyright (c) 1999-2021 Petko Manolov (petkan@nucleusys.com)
* *
* ChangeLog:
* .... Most of the time spent on reading sources & docs.
* v0.2.x First official release for the Linux kernel.
* v0.3.0 Beutified and structured, some bugs fixed.
* v0.3.x URBifying bulk requests and bugfixing. First relatively
* stable release. Still can touch device's registers only
* from top-halves.
* v0.4.0 Control messages remained unurbified are now URBs.
* Now we can touch the HW at any time.
* v0.4.9 Control urbs again use process context to wait. Argh...
* Some long standing bugs (enable_net_traffic) fixed.
* Also nasty trick about resubmiting control urb from
* interrupt context used. Please let me know how it
* behaves. Pegasus II support added since this version.
* TODO: suppressing HCD warnings spewage on disconnect.
* v0.4.13 Ethernet address is now set at probe(), not at open()
* time as this seems to break dhcpd.
* v0.5.0 branch to 2.5.x kernels
* v0.5.1 ethtool support added
* v0.5.5 rx socket buffers are in a pool and the their allocation
* is out of the interrupt routine.
* ...
* v0.9.3 simplified [get|set]_register(s), async update registers
* logic revisited, receive skb_pool removed.
*/ */
#include <linux/sched.h> #include <linux/sched.h>
@ -45,7 +21,6 @@
/* /*
* Version Information * Version Information
*/ */
#define DRIVER_VERSION "v0.9.3 (2013/04/25)"
#define DRIVER_AUTHOR "Petko Manolov <petkan@nucleusys.com>" #define DRIVER_AUTHOR "Petko Manolov <petkan@nucleusys.com>"
#define DRIVER_DESC "Pegasus/Pegasus II USB Ethernet driver" #define DRIVER_DESC "Pegasus/Pegasus II USB Ethernet driver"
@ -132,9 +107,15 @@ static int get_registers(pegasus_t *pegasus, __u16 indx, __u16 size, void *data)
static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size, static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size,
const void *data) const void *data)
{ {
return usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REGS, int ret;
ret = usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REGS,
PEGASUS_REQT_WRITE, 0, indx, data, size, PEGASUS_REQT_WRITE, 0, indx, data, size,
1000, GFP_NOIO); 1000, GFP_NOIO);
if (ret < 0)
netif_dbg(pegasus, drv, pegasus->net, "%s failed with %d\n", __func__, ret);
return ret;
} }
/* /*
@ -145,10 +126,15 @@ static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size,
static int set_register(pegasus_t *pegasus, __u16 indx, __u8 data) static int set_register(pegasus_t *pegasus, __u16 indx, __u8 data)
{ {
void *buf = &data; void *buf = &data;
int ret;
return usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REG, ret = usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REG,
PEGASUS_REQT_WRITE, data, indx, buf, 1, PEGASUS_REQT_WRITE, data, indx, buf, 1,
1000, GFP_NOIO); 1000, GFP_NOIO);
if (ret < 0)
netif_dbg(pegasus, drv, pegasus->net, "%s failed with %d\n", __func__, ret);
return ret;
} }
static int update_eth_regs_async(pegasus_t *pegasus) static int update_eth_regs_async(pegasus_t *pegasus)
@ -188,10 +174,9 @@ static int update_eth_regs_async(pegasus_t *pegasus)
static int __mii_op(pegasus_t *p, __u8 phy, __u8 indx, __u16 *regd, __u8 cmd) static int __mii_op(pegasus_t *p, __u8 phy, __u8 indx, __u16 *regd, __u8 cmd)
{ {
int i; int i, ret;
__u8 data[4] = { phy, 0, 0, indx };
__le16 regdi; __le16 regdi;
int ret = -ETIMEDOUT; __u8 data[4] = { phy, 0, 0, indx };
if (cmd & PHY_WRITE) { if (cmd & PHY_WRITE) {
__le16 *t = (__le16 *) & data[1]; __le16 *t = (__le16 *) & data[1];
@ -207,12 +192,15 @@ static int __mii_op(pegasus_t *p, __u8 phy, __u8 indx, __u16 *regd, __u8 cmd)
if (data[0] & PHY_DONE) if (data[0] & PHY_DONE)
break; break;
} }
if (i >= REG_TIMEOUT) if (i >= REG_TIMEOUT) {
ret = -ETIMEDOUT;
goto fail; goto fail;
}
if (cmd & PHY_READ) { if (cmd & PHY_READ) {
ret = get_registers(p, PhyData, 2, &regdi); ret = get_registers(p, PhyData, 2, &regdi);
if (ret < 0)
goto fail;
*regd = le16_to_cpu(regdi); *regd = le16_to_cpu(regdi);
return ret;
} }
return 0; return 0;
fail: fail:
@ -235,9 +223,13 @@ static int write_mii_word(pegasus_t *pegasus, __u8 phy, __u8 indx, __u16 *regd)
static int mdio_read(struct net_device *dev, int phy_id, int loc) static int mdio_read(struct net_device *dev, int phy_id, int loc)
{ {
pegasus_t *pegasus = netdev_priv(dev); pegasus_t *pegasus = netdev_priv(dev);
int ret;
u16 res; u16 res;
read_mii_word(pegasus, phy_id, loc, &res); ret = read_mii_word(pegasus, phy_id, loc, &res);
if (ret < 0)
return ret;
return (int)res; return (int)res;
} }
@ -251,10 +243,9 @@ static void mdio_write(struct net_device *dev, int phy_id, int loc, int val)
static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata) static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata)
{ {
int i; int ret, i;
__u8 tmp = 0;
__le16 retdatai; __le16 retdatai;
int ret; __u8 tmp = 0;
set_register(pegasus, EpromCtrl, 0); set_register(pegasus, EpromCtrl, 0);
set_register(pegasus, EpromOffset, index); set_register(pegasus, EpromOffset, index);
@ -262,21 +253,25 @@ static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata)
for (i = 0; i < REG_TIMEOUT; i++) { for (i = 0; i < REG_TIMEOUT; i++) {
ret = get_registers(pegasus, EpromCtrl, 1, &tmp); ret = get_registers(pegasus, EpromCtrl, 1, &tmp);
if (ret < 0)
goto fail;
if (tmp & EPROM_DONE) if (tmp & EPROM_DONE)
break; break;
if (ret == -ESHUTDOWN)
goto fail;
} }
if (i >= REG_TIMEOUT) if (i >= REG_TIMEOUT) {
ret = -ETIMEDOUT;
goto fail; goto fail;
}
ret = get_registers(pegasus, EpromData, 2, &retdatai); ret = get_registers(pegasus, EpromData, 2, &retdatai);
if (ret < 0)
goto fail;
*retdata = le16_to_cpu(retdatai); *retdata = le16_to_cpu(retdatai);
return ret; return ret;
fail: fail:
netif_warn(pegasus, drv, pegasus->net, "%s failed\n", __func__); netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
return -ETIMEDOUT; return ret;
} }
#ifdef PEGASUS_WRITE_EEPROM #ifdef PEGASUS_WRITE_EEPROM
@ -324,10 +319,10 @@ static int write_eprom_word(pegasus_t *pegasus, __u8 index, __u16 data)
return ret; return ret;
fail: fail:
netif_warn(pegasus, drv, pegasus->net, "%s failed\n", __func__); netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
return -ETIMEDOUT; return -ETIMEDOUT;
} }
#endif /* PEGASUS_WRITE_EEPROM */ #endif /* PEGASUS_WRITE_EEPROM */
static inline int get_node_id(pegasus_t *pegasus, u8 *id) static inline int get_node_id(pegasus_t *pegasus, u8 *id)
{ {
@ -367,19 +362,21 @@ static void set_ethernet_addr(pegasus_t *pegasus)
return; return;
err: err:
eth_hw_addr_random(pegasus->net); eth_hw_addr_random(pegasus->net);
dev_info(&pegasus->intf->dev, "software assigned MAC address.\n"); netif_dbg(pegasus, drv, pegasus->net, "software assigned MAC address.\n");
return; return;
} }
static inline int reset_mac(pegasus_t *pegasus) static inline int reset_mac(pegasus_t *pegasus)
{ {
int ret, i;
__u8 data = 0x8; __u8 data = 0x8;
int i;
set_register(pegasus, EthCtrl1, data); set_register(pegasus, EthCtrl1, data);
for (i = 0; i < REG_TIMEOUT; i++) { for (i = 0; i < REG_TIMEOUT; i++) {
get_registers(pegasus, EthCtrl1, 1, &data); ret = get_registers(pegasus, EthCtrl1, 1, &data);
if (ret < 0)
goto fail;
if (~data & 0x08) { if (~data & 0x08) {
if (loopback) if (loopback)
break; break;
@ -402,22 +399,29 @@ static inline int reset_mac(pegasus_t *pegasus)
} }
if (usb_dev_id[pegasus->dev_index].vendor == VENDOR_ELCON) { if (usb_dev_id[pegasus->dev_index].vendor == VENDOR_ELCON) {
__u16 auxmode; __u16 auxmode;
read_mii_word(pegasus, 3, 0x1b, &auxmode); ret = read_mii_word(pegasus, 3, 0x1b, &auxmode);
if (ret < 0)
goto fail;
auxmode |= 4; auxmode |= 4;
write_mii_word(pegasus, 3, 0x1b, &auxmode); write_mii_word(pegasus, 3, 0x1b, &auxmode);
} }
return 0; return 0;
fail:
netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
return ret;
} }
static int enable_net_traffic(struct net_device *dev, struct usb_device *usb) static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
{ {
__u16 linkpart;
__u8 data[4];
pegasus_t *pegasus = netdev_priv(dev); pegasus_t *pegasus = netdev_priv(dev);
int ret; int ret;
__u16 linkpart;
__u8 data[4];
read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart); ret = read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart);
if (ret < 0)
goto fail;
data[0] = 0xc8; /* TX & RX enable, append status, no CRC */ data[0] = 0xc8; /* TX & RX enable, append status, no CRC */
data[1] = 0; data[1] = 0;
if (linkpart & (ADVERTISE_100FULL | ADVERTISE_10FULL)) if (linkpart & (ADVERTISE_100FULL | ADVERTISE_10FULL))
@ -435,11 +439,16 @@ static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
usb_dev_id[pegasus->dev_index].vendor == VENDOR_LINKSYS2 || usb_dev_id[pegasus->dev_index].vendor == VENDOR_LINKSYS2 ||
usb_dev_id[pegasus->dev_index].vendor == VENDOR_DLINK) { usb_dev_id[pegasus->dev_index].vendor == VENDOR_DLINK) {
u16 auxmode; u16 auxmode;
read_mii_word(pegasus, 0, 0x1b, &auxmode); ret = read_mii_word(pegasus, 0, 0x1b, &auxmode);
if (ret < 0)
goto fail;
auxmode |= 4; auxmode |= 4;
write_mii_word(pegasus, 0, 0x1b, &auxmode); write_mii_word(pegasus, 0, 0x1b, &auxmode);
} }
return 0;
fail:
netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
return ret; return ret;
} }
@ -447,9 +456,9 @@ static void read_bulk_callback(struct urb *urb)
{ {
pegasus_t *pegasus = urb->context; pegasus_t *pegasus = urb->context;
struct net_device *net; struct net_device *net;
u8 *buf = urb->transfer_buffer;
int rx_status, count = urb->actual_length; int rx_status, count = urb->actual_length;
int status = urb->status; int status = urb->status;
u8 *buf = urb->transfer_buffer;
__u16 pkt_len; __u16 pkt_len;
if (!pegasus) if (!pegasus)
@ -735,12 +744,16 @@ static inline void disable_net_traffic(pegasus_t *pegasus)
set_registers(pegasus, EthCtrl0, sizeof(tmp), &tmp); set_registers(pegasus, EthCtrl0, sizeof(tmp), &tmp);
} }
static inline void get_interrupt_interval(pegasus_t *pegasus) static inline int get_interrupt_interval(pegasus_t *pegasus)
{ {
u16 data; u16 data;
u8 interval; u8 interval;
int ret;
ret = read_eprom_word(pegasus, 4, &data);
if (ret < 0)
return ret;
read_eprom_word(pegasus, 4, &data);
interval = data >> 8; interval = data >> 8;
if (pegasus->usb->speed != USB_SPEED_HIGH) { if (pegasus->usb->speed != USB_SPEED_HIGH) {
if (interval < 0x80) { if (interval < 0x80) {
@ -755,6 +768,8 @@ static inline void get_interrupt_interval(pegasus_t *pegasus)
} }
} }
pegasus->intr_interval = interval; pegasus->intr_interval = interval;
return 0;
} }
static void set_carrier(struct net_device *net) static void set_carrier(struct net_device *net)
@ -880,7 +895,6 @@ static void pegasus_get_drvinfo(struct net_device *dev,
pegasus_t *pegasus = netdev_priv(dev); pegasus_t *pegasus = netdev_priv(dev);
strlcpy(info->driver, driver_name, sizeof(info->driver)); strlcpy(info->driver, driver_name, sizeof(info->driver));
strlcpy(info->version, DRIVER_VERSION, sizeof(info->version));
usb_make_path(pegasus->usb, info->bus_info, sizeof(info->bus_info)); usb_make_path(pegasus->usb, info->bus_info, sizeof(info->bus_info));
} }
@ -998,8 +1012,7 @@ static int pegasus_ioctl(struct net_device *net, struct ifreq *rq, int cmd)
data[0] = pegasus->phy; data[0] = pegasus->phy;
fallthrough; fallthrough;
case SIOCDEVPRIVATE + 1: case SIOCDEVPRIVATE + 1:
read_mii_word(pegasus, data[0], data[1] & 0x1f, &data[3]); res = read_mii_word(pegasus, data[0], data[1] & 0x1f, &data[3]);
res = 0;
break; break;
case SIOCDEVPRIVATE + 2: case SIOCDEVPRIVATE + 2:
if (!capable(CAP_NET_ADMIN)) if (!capable(CAP_NET_ADMIN))
@ -1033,22 +1046,25 @@ static void pegasus_set_multicast(struct net_device *net)
static __u8 mii_phy_probe(pegasus_t *pegasus) static __u8 mii_phy_probe(pegasus_t *pegasus)
{ {
int i; int i, ret;
__u16 tmp; __u16 tmp;
for (i = 0; i < 32; i++) { for (i = 0; i < 32; i++) {
read_mii_word(pegasus, i, MII_BMSR, &tmp); ret = read_mii_word(pegasus, i, MII_BMSR, &tmp);
if (ret < 0)
goto fail;
if (tmp == 0 || tmp == 0xffff || (tmp & BMSR_MEDIA) == 0) if (tmp == 0 || tmp == 0xffff || (tmp & BMSR_MEDIA) == 0)
continue; continue;
else else
return i; return i;
} }
fail:
return 0xff; return 0xff;
} }
static inline void setup_pegasus_II(pegasus_t *pegasus) static inline void setup_pegasus_II(pegasus_t *pegasus)
{ {
int ret;
__u8 data = 0xa5; __u8 data = 0xa5;
set_register(pegasus, Reg1d, 0); set_register(pegasus, Reg1d, 0);
@ -1060,7 +1076,9 @@ static inline void setup_pegasus_II(pegasus_t *pegasus)
set_register(pegasus, Reg7b, 2); set_register(pegasus, Reg7b, 2);
set_register(pegasus, 0x83, data); set_register(pegasus, 0x83, data);
get_registers(pegasus, 0x83, 1, &data); ret = get_registers(pegasus, 0x83, 1, &data);
if (ret < 0)
goto fail;
if (data == 0xa5) if (data == 0xa5)
pegasus->chip = 0x8513; pegasus->chip = 0x8513;
@ -1075,6 +1093,10 @@ static inline void setup_pegasus_II(pegasus_t *pegasus)
set_register(pegasus, Reg81, 6); set_register(pegasus, Reg81, 6);
else else
set_register(pegasus, Reg81, 2); set_register(pegasus, Reg81, 2);
return;
fail:
netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
} }
static void check_carrier(struct work_struct *work) static void check_carrier(struct work_struct *work)
@ -1149,7 +1171,9 @@ static int pegasus_probe(struct usb_interface *intf,
| NETIF_MSG_PROBE | NETIF_MSG_LINK); | NETIF_MSG_PROBE | NETIF_MSG_LINK);
pegasus->features = usb_dev_id[dev_index].private; pegasus->features = usb_dev_id[dev_index].private;
get_interrupt_interval(pegasus); res = get_interrupt_interval(pegasus);
if (res)
goto out2;
if (reset_mac(pegasus)) { if (reset_mac(pegasus)) {
dev_err(&intf->dev, "can't reset MAC\n"); dev_err(&intf->dev, "can't reset MAC\n");
res = -EIO; res = -EIO;
@ -1296,7 +1320,7 @@ static void __init parse_id(char *id)
static int __init pegasus_init(void) static int __init pegasus_init(void)
{ {
pr_info("%s: %s, " DRIVER_DESC "\n", driver_name, DRIVER_VERSION); pr_info("%s: " DRIVER_DESC "\n", driver_name);
if (devid) if (devid)
parse_id(devid); parse_id(devid);
return usb_register(&pegasus_driver); return usb_register(&pegasus_driver);


@ -10,10 +10,10 @@
#define IOSM_CP_VERSION 0x0100UL #define IOSM_CP_VERSION 0x0100UL
/* DL dir Aggregation support mask */ /* DL dir Aggregation support mask */
#define DL_AGGR BIT(23) #define DL_AGGR BIT(9)
/* UL dir Aggregation support mask */ /* UL dir Aggregation support mask */
#define UL_AGGR BIT(22) #define UL_AGGR BIT(8)
/* UL flow credit support mask */ /* UL flow credit support mask */
#define UL_FLOW_CREDIT BIT(21) #define UL_FLOW_CREDIT BIT(21)


@ -320,7 +320,7 @@ static void ipc_mux_dl_fcth_decode(struct iosm_mux *ipc_mux,
return; return;
} }
ul_credits = fct->vfl.nr_of_bytes; ul_credits = le32_to_cpu(fct->vfl.nr_of_bytes);
dev_dbg(ipc_mux->dev, "Flow_Credit:: if_id[%d] Old: %d Grants: %d", dev_dbg(ipc_mux->dev, "Flow_Credit:: if_id[%d] Old: %d Grants: %d",
if_id, ipc_mux->session[if_id].ul_flow_credits, ul_credits); if_id, ipc_mux->session[if_id].ul_flow_credits, ul_credits);
@ -586,7 +586,7 @@ static bool ipc_mux_lite_send_qlt(struct iosm_mux *ipc_mux)
qlt->reserved[0] = 0; qlt->reserved[0] = 0;
qlt->reserved[1] = 0; qlt->reserved[1] = 0;
qlt->vfl.nr_of_bytes = session->ul_list.qlen; qlt->vfl.nr_of_bytes = cpu_to_le32(session->ul_list.qlen);
/* Add QLT to the transfer list. */ /* Add QLT to the transfer list. */
skb_queue_tail(&ipc_mux->channel->ul_list, skb_queue_tail(&ipc_mux->channel->ul_list,


@ -106,7 +106,7 @@ struct mux_lite_cmdh {
* @nr_of_bytes: Number of bytes available to transmit in the queue. * @nr_of_bytes: Number of bytes available to transmit in the queue.
*/ */
struct mux_lite_vfl { struct mux_lite_vfl {
u32 nr_of_bytes; __le32 nr_of_bytes;
}; };
/** /**
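
The __le32 conversion above follows the usual kernel endianness discipline: fields shared with the device are declared with little-endian types and converted once at the access point, as the le32_to_cpu()/cpu_to_le32() changes earlier in this series do. A small sketch with a hypothetical descriptor:

#include <linux/types.h>
#include <asm/byteorder.h>

/* Hypothetical wire descriptor; the __le32 annotation lets sparse flag any
 * access that misses the conversion.
 */
struct example_vfl {
        __le32 nr_of_bytes;
};

static inline u32 example_get_nr_of_bytes(const struct example_vfl *vfl)
{
        return le32_to_cpu(vfl->nr_of_bytes);
}

static inline void example_set_nr_of_bytes(struct example_vfl *vfl, u32 val)
{
        vfl->nr_of_bytes = cpu_to_le32(val);
}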


@ -412,8 +412,8 @@ struct sk_buff *ipc_protocol_dl_td_process(struct iosm_protocol *ipc_protocol,
} }
if (p_td->buffer.address != IPC_CB(skb)->mapping) { if (p_td->buffer.address != IPC_CB(skb)->mapping) {
dev_err(ipc_protocol->dev, "invalid buf=%p or skb=%p", dev_err(ipc_protocol->dev, "invalid buf=%llx or skb=%p",
(void *)p_td->buffer.address, skb->data); (unsigned long long)p_td->buffer.address, skb->data);
ipc_pcie_kfree_skb(ipc_protocol->pcie, skb); ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
skb = NULL; skb = NULL;
goto ret; goto ret;


@ -228,7 +228,7 @@ static void ipc_wwan_dellink(void *ctxt, struct net_device *dev,
RCU_INIT_POINTER(ipc_wwan->sub_netlist[if_id], NULL); RCU_INIT_POINTER(ipc_wwan->sub_netlist[if_id], NULL);
/* unregistering includes synchronize_net() */ /* unregistering includes synchronize_net() */
unregister_netdevice(dev); unregister_netdevice_queue(dev, head);
unlock: unlock:
mutex_unlock(&ipc_wwan->if_mutex); mutex_unlock(&ipc_wwan->if_mutex);


@ -110,7 +110,7 @@ static int mhi_wwan_ctrl_start(struct wwan_port *port)
int ret; int ret;
/* Start mhi device's channel(s) */ /* Start mhi device's channel(s) */
ret = mhi_prepare_for_transfer(mhiwwan->mhi_dev); ret = mhi_prepare_for_transfer(mhiwwan->mhi_dev, 0);
if (ret) if (ret)
return ret; return ret;


@ -719,8 +719,13 @@ void mhi_device_put(struct mhi_device *mhi_dev);
* host and device execution environments match and * host and device execution environments match and
* channels are in a DISABLED state. * channels are in a DISABLED state.
* @mhi_dev: Device associated with the channels * @mhi_dev: Device associated with the channels
* @flags: MHI channel flags
*/ */
int mhi_prepare_for_transfer(struct mhi_device *mhi_dev); int mhi_prepare_for_transfer(struct mhi_device *mhi_dev,
unsigned int flags);
/* Automatically allocate and queue inbound buffers */
#define MHI_CH_INBOUND_ALLOC_BUFS BIT(0)
/** /**
* mhi_unprepare_from_transfer - Reset UL and DL channels for data transfer. * mhi_unprepare_from_transfer - Reset UL and DL channels for data transfer.
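
The callers updated elsewhere in this series pass either 0 or the new MHI_CH_INBOUND_ALLOC_BUFS flag. A compact sketch of a hypothetical client using the extended prototype:

#include <linux/mhi.h>

/* Start a client's channels; let the MHI core pre-allocate and queue
 * inbound buffers only when asked to.
 */
static int example_client_start(struct mhi_device *mhi_dev, bool core_allocs_rx)
{
        unsigned int flags = core_allocs_rx ? MHI_CH_INBOUND_ALLOC_BUFS : 0;

        return mhi_prepare_for_transfer(mhi_dev, flags);
}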


@ -293,7 +293,7 @@ static inline bool flow_action_has_entries(const struct flow_action *action)
} }
/** /**
* flow_action_has_one_action() - check if exactly one action is present * flow_offload_has_one_action() - check if exactly one action is present
* @action: tc filter flow offload action * @action: tc filter flow offload action
* *
* Returns true if exactly one action is present. * Returns true if exactly one action is present.


@ -265,7 +265,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
static inline unsigned int ip6_skb_dst_mtu(struct sk_buff *skb) static inline unsigned int ip6_skb_dst_mtu(struct sk_buff *skb)
{ {
int mtu; unsigned int mtu;
struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ? struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ?
inet6_sk(skb->sk) : NULL; inet6_sk(skb->sk) : NULL;


@ -75,6 +75,7 @@ struct netns_xfrm {
#endif #endif
spinlock_t xfrm_state_lock; spinlock_t xfrm_state_lock;
seqcount_spinlock_t xfrm_state_hash_generation; seqcount_spinlock_t xfrm_state_hash_generation;
seqcount_spinlock_t xfrm_policy_hash_generation;
spinlock_t xfrm_policy_lock; spinlock_t xfrm_policy_lock;
struct mutex xfrm_cfg_mutex; struct mutex xfrm_cfg_mutex;


@ -337,6 +337,9 @@ int tcf_exts_dump_stats(struct sk_buff *skb, struct tcf_exts *exts);
/** /**
* struct tcf_pkt_info - packet information * struct tcf_pkt_info - packet information
*
* @ptr: start of the pkt data
* @nexthdr: offset of the next header
*/ */
struct tcf_pkt_info { struct tcf_pkt_info {
unsigned char * ptr; unsigned char * ptr;
@ -355,6 +358,7 @@ struct tcf_ematch_ops;
* @ops: the operations lookup table of the corresponding ematch module * @ops: the operations lookup table of the corresponding ematch module
* @datalen: length of the ematch specific configuration data * @datalen: length of the ematch specific configuration data
* @data: ematch specific data * @data: ematch specific data
* @net: the network namespace
*/ */
struct tcf_ematch { struct tcf_ematch {
struct tcf_ematch_ops * ops; struct tcf_ematch_ops * ops;


@ -166,7 +166,8 @@ static int br_switchdev_event(struct notifier_block *unused,
case SWITCHDEV_FDB_ADD_TO_BRIDGE: case SWITCHDEV_FDB_ADD_TO_BRIDGE:
fdb_info = ptr; fdb_info = ptr;
err = br_fdb_external_learn_add(br, p, fdb_info->addr, err = br_fdb_external_learn_add(br, p, fdb_info->addr,
fdb_info->vid, false); fdb_info->vid,
fdb_info->is_local, false);
if (err) { if (err) {
err = notifier_from_errno(err); err = notifier_from_errno(err);
break; break;


@ -1019,7 +1019,8 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source,
static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br, static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
struct net_bridge_port *p, const unsigned char *addr, struct net_bridge_port *p, const unsigned char *addr,
u16 nlh_flags, u16 vid, struct nlattr *nfea_tb[]) u16 nlh_flags, u16 vid, struct nlattr *nfea_tb[],
struct netlink_ext_ack *extack)
{ {
int err = 0; int err = 0;
@ -1038,7 +1039,15 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
rcu_read_unlock(); rcu_read_unlock();
local_bh_enable(); local_bh_enable();
} else if (ndm->ndm_flags & NTF_EXT_LEARNED) { } else if (ndm->ndm_flags & NTF_EXT_LEARNED) {
err = br_fdb_external_learn_add(br, p, addr, vid, true); if (!p && !(ndm->ndm_state & NUD_PERMANENT)) {
NL_SET_ERR_MSG_MOD(extack,
"FDB entry towards bridge must be permanent");
return -EINVAL;
}
err = br_fdb_external_learn_add(br, p, addr, vid,
ndm->ndm_state & NUD_PERMANENT,
true);
} else { } else {
spin_lock_bh(&br->hash_lock); spin_lock_bh(&br->hash_lock);
err = fdb_add_entry(br, p, addr, ndm, nlh_flags, vid, nfea_tb); err = fdb_add_entry(br, p, addr, ndm, nlh_flags, vid, nfea_tb);
@ -1110,9 +1119,11 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
} }
/* VID was specified, so use it. */ /* VID was specified, so use it. */
err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid, nfea_tb); err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid, nfea_tb,
extack);
} else { } else {
err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0, nfea_tb); err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0, nfea_tb,
extack);
if (err || !vg || !vg->num_vlans) if (err || !vg || !vg->num_vlans)
goto out; goto out;
@ -1124,7 +1135,7 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
if (!br_vlan_should_use(v)) if (!br_vlan_should_use(v))
continue; continue;
err = __br_fdb_add(ndm, br, p, addr, nlh_flags, v->vid, err = __br_fdb_add(ndm, br, p, addr, nlh_flags, v->vid,
nfea_tb); nfea_tb, extack);
if (err) if (err)
goto out; goto out;
} }
@ -1264,7 +1275,7 @@ void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p)
} }
int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
const unsigned char *addr, u16 vid, const unsigned char *addr, u16 vid, bool is_local,
bool swdev_notify) bool swdev_notify)
{ {
struct net_bridge_fdb_entry *fdb; struct net_bridge_fdb_entry *fdb;
@ -1281,6 +1292,10 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
if (swdev_notify) if (swdev_notify)
flags |= BIT(BR_FDB_ADDED_BY_USER); flags |= BIT(BR_FDB_ADDED_BY_USER);
if (is_local)
flags |= BIT(BR_FDB_LOCAL);
fdb = fdb_create(br, p, addr, vid, flags); fdb = fdb_create(br, p, addr, vid, flags);
if (!fdb) { if (!fdb) {
err = -ENOMEM; err = -ENOMEM;
@ -1307,6 +1322,9 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
if (swdev_notify) if (swdev_notify)
set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags); set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
if (is_local)
set_bit(BR_FDB_LOCAL, &fdb->flags);
if (modified) if (modified)
fdb_notify(br, fdb, RTM_NEWNEIGH, swdev_notify); fdb_notify(br, fdb, RTM_NEWNEIGH, swdev_notify);
} }


@ -711,7 +711,7 @@ int br_fdb_get(struct sk_buff *skb, struct nlattr *tb[], struct net_device *dev,
int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p); int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p);
void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p); void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p);
int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
const unsigned char *addr, u16 vid, const unsigned char *addr, u16 vid, bool is_local,
bool swdev_notify); bool swdev_notify);
int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p, int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p,
const unsigned char *addr, u16 vid, const unsigned char *addr, u16 vid,


@ -298,6 +298,9 @@ int tcp_gro_complete(struct sk_buff *skb)
if (th->cwr) if (th->cwr)
skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN; skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
if (skb->encapsulation)
skb->inner_transport_header = skb->transport_header;
return 0; return 0;
} }
EXPORT_SYMBOL(tcp_gro_complete); EXPORT_SYMBOL(tcp_gro_complete);


@ -624,6 +624,10 @@ static int udp_gro_complete_segment(struct sk_buff *skb)
skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count; skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_L4; skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_L4;
if (skb->encapsulation)
skb->inner_transport_header = skb->transport_header;
return 0; return 0;
} }


@ -27,7 +27,6 @@ struct mptcp_pm_addr_entry {
struct mptcp_addr_info addr; struct mptcp_addr_info addr;
u8 flags; u8 flags;
int ifindex; int ifindex;
struct rcu_head rcu;
struct socket *lsk; struct socket *lsk;
}; };


@@ -15,6 +15,7 @@ struct qrtr_mhi_dev {
     struct qrtr_endpoint ep;
     struct mhi_device *mhi_dev;
     struct device *dev;
+    struct completion ready;
 };
 /* From MHI to QRTR */
@@ -50,6 +51,10 @@ static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
     struct qrtr_mhi_dev *qdev = container_of(ep, struct qrtr_mhi_dev, ep);
     int rc;
+    rc = wait_for_completion_interruptible(&qdev->ready);
+    if (rc)
+        goto free_skb;
     if (skb->sk)
         sock_hold(skb->sk);
@@ -79,7 +84,7 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
     int rc;
     /* start channels */
-    rc = mhi_prepare_for_transfer(mhi_dev);
+    rc = mhi_prepare_for_transfer(mhi_dev, 0);
     if (rc)
         return rc;
@@ -96,6 +101,15 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
     if (rc)
         return rc;
+    /* start channels */
+    rc = mhi_prepare_for_transfer(mhi_dev, MHI_CH_INBOUND_ALLOC_BUFS);
+    if (rc) {
+        qrtr_endpoint_unregister(&qdev->ep);
+        dev_set_drvdata(&mhi_dev->dev, NULL);
+        return rc;
+    }
+    complete_all(&qdev->ready);
     dev_dbg(qdev->dev, "Qualcomm MHI QRTR driver probed\n");
     return 0;

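The qrtr/mhi change above gates transmit on a struct completion that probe signals only after the endpoint is registered and the channels are started. A self-contained sketch of that pattern with hypothetical names (my_dev, my_dev_send, my_dev_probe); only the completion API itself is the kernel one:

#include <linux/completion.h>
#include <linux/errno.h>

struct my_dev {
    struct completion ready;        /* signalled once TX is allowed */
};

/* Sender side: block (interruptibly) until probe says the path is usable. */
static int my_dev_send(struct my_dev *dev)
{
    int rc;

    rc = wait_for_completion_interruptible(&dev->ready);
    if (rc)
        return rc;              /* -ERESTARTSYS if a signal arrived first */

    /* ... safe to queue the transfer here ... */
    return 0;
}

/* Probe side: open the gate only after everything TX depends on exists. */
static int my_dev_probe(struct my_dev *dev)
{
    init_completion(&dev->ready);

    /* ... register the endpoint, start the channels ... */

    complete_all(&dev->ready);      /* wake current and future waiters */
    return 0;
}

Using complete_all() rather than complete() matters in this pattern: every sender that ever waits on the gate must be released, not just the first one.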

@@ -913,7 +913,7 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
     /* seqlock has the same scope of busylock, for NOLOCK qdisc */
     spin_lock_init(&sch->seqlock);
-    lockdep_set_class(&sch->busylock,
+    lockdep_set_class(&sch->seqlock,
                       dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
     seqcount_init(&sch->running);


@@ -1739,8 +1739,6 @@ static void taprio_attach(struct Qdisc *sch)
         if (FULL_OFFLOAD_IS_ENABLED(q->flags)) {
             qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
             old = dev_graft_qdisc(qdisc->dev_queue, qdisc);
-            if (ntx < dev->real_num_tx_queues)
-                qdisc_hash_add(qdisc, false);
         } else {
             old = dev_graft_qdisc(qdisc->dev_queue, sch);
             qdisc_refcount_inc(sch);


@@ -857,14 +857,18 @@ int sctp_auth_set_key(struct sctp_endpoint *ep,
     memcpy(key->data, &auth_key->sca_key[0], auth_key->sca_keylength);
     cur_key->key = key;
-    if (replace) {
-        list_del_init(&shkey->key_list);
-        sctp_auth_shkey_release(shkey);
-        if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
-            sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
+    if (!replace) {
+        list_add(&cur_key->key_list, sh_keys);
+        return 0;
     }
+    list_del_init(&shkey->key_list);
+    sctp_auth_shkey_release(shkey);
     list_add(&cur_key->key_list, sh_keys);
+    if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
+        sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
     return 0;
 }


@@ -1079,6 +1079,9 @@ virtio_transport_recv_connected(struct sock *sk,
         virtio_transport_recv_enqueue(vsk, pkt);
         sk->sk_data_ready(sk);
         return err;
+    case VIRTIO_VSOCK_OP_CREDIT_REQUEST:
+        virtio_transport_send_credit_update(vsk);
+        break;
     case VIRTIO_VSOCK_OP_CREDIT_UPDATE:
         sk->sk_write_space(sk);
         break;


@@ -298,8 +298,16 @@ static int xfrm_xlate64(struct sk_buff *dst, const struct nlmsghdr *nlh_src)
     len = nlmsg_attrlen(nlh_src, xfrm_msg_min[type]);
     nla_for_each_attr(nla, attrs, len, remaining) {
-        int err = xfrm_xlate64_attr(dst, nla);
+        int err;
+
+        switch (type) {
+        case XFRM_MSG_NEWSPDINFO:
+            err = xfrm_nla_cpy(dst, nla, nla_len(nla));
+            break;
+        default:
+            err = xfrm_xlate64_attr(dst, nla);
+            break;
+        }
         if (err)
             return err;
     }
@@ -341,7 +349,8 @@ static int xfrm_alloc_compat(struct sk_buff *skb, const struct nlmsghdr *nlh_src
 /* Calculates len of translated 64-bit message. */
 static size_t xfrm_user_rcv_calculate_len64(const struct nlmsghdr *src,
-                                            struct nlattr *attrs[XFRMA_MAX+1])
+                                            struct nlattr *attrs[XFRMA_MAX + 1],
+                                            int maxtype)
 {
     size_t len = nlmsg_len(src);
@@ -358,10 +367,20 @@ static size_t xfrm_user_rcv_calculate_len64(const struct nlmsghdr *src,
     case XFRM_MSG_POLEXPIRE:
         len += 8;
         break;
+    case XFRM_MSG_NEWSPDINFO:
+        /* attirbutes are xfrm_spdattr_type_t, not xfrm_attr_type_t */
+        return len;
     default:
         break;
     }
+
+    /* Unexpected for anything, but XFRM_MSG_NEWSPDINFO, please
+     * correct both 64=>32-bit and 32=>64-bit translators to copy
+     * new attributes.
+     */
+    if (WARN_ON_ONCE(maxtype))
+        return len;
+
     if (attrs[XFRMA_SA])
         len += 4;
     if (attrs[XFRMA_POLICY])
@@ -440,7 +459,8 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
 static int xfrm_xlate32(struct nlmsghdr *dst, const struct nlmsghdr *src,
                         struct nlattr *attrs[XFRMA_MAX+1],
-                        size_t size, u8 type, struct netlink_ext_ack *extack)
+                        size_t size, u8 type, int maxtype,
+                        struct netlink_ext_ack *extack)
 {
     size_t pos;
     int i;
@@ -520,6 +540,25 @@ static int xfrm_xlate32(struct nlmsghdr *dst, const struct nlmsghdr *src,
     }
     pos = dst->nlmsg_len;
+    if (maxtype) {
+        /* attirbutes are xfrm_spdattr_type_t, not xfrm_attr_type_t */
+        WARN_ON_ONCE(src->nlmsg_type != XFRM_MSG_NEWSPDINFO);
+
+        for (i = 1; i <= maxtype; i++) {
+            int err;
+
+            if (!attrs[i])
+                continue;
+
+            /* just copy - no need for translation */
+            err = xfrm_attr_cpy32(dst, &pos, attrs[i], size,
+                                  nla_len(attrs[i]), nla_len(attrs[i]));
+            if (err)
+                return err;
+        }
+
+        return 0;
+    }
+
     for (i = 1; i < XFRMA_MAX + 1; i++) {
         int err;
@@ -564,7 +603,7 @@ static struct nlmsghdr *xfrm_user_rcv_msg_compat(const struct nlmsghdr *h32,
     if (err < 0)
         return ERR_PTR(err);
-    len = xfrm_user_rcv_calculate_len64(h32, attrs);
+    len = xfrm_user_rcv_calculate_len64(h32, attrs, maxtype);
     /* The message doesn't need translation */
     if (len == nlmsg_len(h32))
         return NULL;
@@ -574,7 +613,7 @@
     if (!h64)
         return ERR_PTR(-ENOMEM);
-    err = xfrm_xlate32(h64, h32, attrs, len, type, extack);
+    err = xfrm_xlate32(h64, h32, attrs, len, type, maxtype, extack);
     if (err < 0) {
         kvfree(h64);
         return ERR_PTR(err);


@@ -241,7 +241,7 @@ static void ipcomp_free_tfms(struct crypto_comp * __percpu *tfms)
             break;
     }
-    WARN_ON(!pos);
+    WARN_ON(list_entry_is_head(pos, &ipcomp_tfms_list, list));
     if (--pos->users)
         return;


@@ -155,7 +155,6 @@ static struct xfrm_policy_afinfo const __rcu *xfrm_policy_afinfo[AF_INET6 + 1]
                         __read_mostly;
 static struct kmem_cache *xfrm_dst_cache __ro_after_init;
-static __read_mostly seqcount_mutex_t xfrm_policy_hash_generation;
 static struct rhashtable xfrm_policy_inexact_table;
 static const struct rhashtable_params xfrm_pol_inexact_params;
@@ -585,7 +584,7 @@ static void xfrm_bydst_resize(struct net *net, int dir)
         return;
     spin_lock_bh(&net->xfrm.xfrm_policy_lock);
-    write_seqcount_begin(&xfrm_policy_hash_generation);
+    write_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
     odst = rcu_dereference_protected(net->xfrm.policy_bydst[dir].table,
                                      lockdep_is_held(&net->xfrm.xfrm_policy_lock));
@@ -596,7 +595,7 @@
     rcu_assign_pointer(net->xfrm.policy_bydst[dir].table, ndst);
     net->xfrm.policy_bydst[dir].hmask = nhashmask;
-    write_seqcount_end(&xfrm_policy_hash_generation);
+    write_seqcount_end(&net->xfrm.xfrm_policy_hash_generation);
     spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
     synchronize_rcu();
@@ -1245,7 +1244,7 @@ static void xfrm_hash_rebuild(struct work_struct *work)
     } while (read_seqretry(&net->xfrm.policy_hthresh.lock, seq));
     spin_lock_bh(&net->xfrm.xfrm_policy_lock);
-    write_seqcount_begin(&xfrm_policy_hash_generation);
+    write_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
     /* make sure that we can insert the indirect policies again before
      * we start with destructive action.
@@ -1354,7 +1353,7 @@
 out_unlock:
     __xfrm_policy_inexact_flush(net);
-    write_seqcount_end(&xfrm_policy_hash_generation);
+    write_seqcount_end(&net->xfrm.xfrm_policy_hash_generation);
     spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
     mutex_unlock(&hash_resize_mutex);
@@ -2091,15 +2090,12 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
     if (unlikely(!daddr || !saddr))
         return NULL;
-retry:
-    sequence = read_seqcount_begin(&xfrm_policy_hash_generation);
     rcu_read_lock();
-    chain = policy_hash_direct(net, daddr, saddr, family, dir);
-    if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) {
-        rcu_read_unlock();
-        goto retry;
-    }
+retry:
+    do {
+        sequence = read_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
+        chain = policy_hash_direct(net, daddr, saddr, family, dir);
+    } while (read_seqcount_retry(&net->xfrm.xfrm_policy_hash_generation, sequence));
     ret = NULL;
     hlist_for_each_entry_rcu(pol, chain, bydst) {
@@ -2130,15 +2126,11 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
     }
 skip_inexact:
-    if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) {
-        rcu_read_unlock();
+    if (read_seqcount_retry(&net->xfrm.xfrm_policy_hash_generation, sequence))
         goto retry;
-    }
-    if (ret && !xfrm_pol_hold_rcu(ret)) {
-        rcu_read_unlock();
+    if (ret && !xfrm_pol_hold_rcu(ret))
         goto retry;
-    }
 fail:
     rcu_read_unlock();
@@ -4089,6 +4081,7 @@ static int __net_init xfrm_net_init(struct net *net)
     /* Initialize the per-net locks here */
     spin_lock_init(&net->xfrm.xfrm_state_lock);
     spin_lock_init(&net->xfrm.xfrm_policy_lock);
+    seqcount_spinlock_init(&net->xfrm.xfrm_policy_hash_generation, &net->xfrm.xfrm_policy_lock);
     mutex_init(&net->xfrm.xfrm_cfg_mutex);
     rv = xfrm_statistics_init(net);
@@ -4133,7 +4126,6 @@ void __init xfrm_init(void)
 {
     register_pernet_subsys(&xfrm_net_ops);
     xfrm_dev_init();
-    seqcount_mutex_init(&xfrm_policy_hash_generation, &hash_resize_mutex);
     xfrm_input_init();
 #ifdef CONFIG_XFRM_ESPINTCP

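The xfrm change above replaces a global seqcount_mutex_t with a per-netns seqcount_spinlock_t so the lookup retry loop no longer interacts with hash_resize_mutex from the RCU read side. For reference, a minimal sketch of the seqcount_spinlock_t read/write pattern it relies on; my_table and the three functions are hypothetical, only the seqlock/RCU primitives are the kernel ones:

#include <linux/seqlock.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>

struct my_table {
    spinlock_t          lock;   /* serializes writers (resize) */
    seqcount_spinlock_t gen;    /* bumped around each table swap */
};

static void my_table_init(struct my_table *t)
{
    spin_lock_init(&t->lock);
    /* associate the seqcount with its writer lock for lockdep */
    seqcount_spinlock_init(&t->gen, &t->lock);
}

static void my_table_resize(struct my_table *t)
{
    spin_lock_bh(&t->lock);
    write_seqcount_begin(&t->gen);
    /* ... swap in the new hash array ... */
    write_seqcount_end(&t->gen);
    spin_unlock_bh(&t->lock);
}

static void my_table_lookup(struct my_table *t)
{
    unsigned int seq;

    rcu_read_lock();
    do {
        seq = read_seqcount_begin(&t->gen);
        /* ... pick the hash chain for this key ... */
    } while (read_seqcount_retry(&t->gen, seq));
    /* ... walk the chosen chain under RCU ... */
    rcu_read_unlock();
}

The reader never takes the writer's lock; it simply redoes the chain selection if a resize raced with it, which is what lets the whole lookup stay inside a single RCU read-side critical section.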

@@ -2811,6 +2811,16 @@ static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
     err = link->doit(skb, nlh, attrs);
+    /* We need to free skb allocated in xfrm_alloc_compat() before
+     * returning from this function, because consume_skb() won't take
+     * care of frag_list since netlink destructor sets
+     * sbk->head to NULL. (see netlink_skb_destructor())
+     */
+    if (skb_has_frag_list(skb)) {
+        kfree_skb(skb_shinfo(skb)->frag_list);
+        skb_shinfo(skb)->frag_list = NULL;
+    }
+
 err:
     kvfree(nlh64);
     return err;


@@ -484,13 +484,16 @@ enum desc_type {
     MONITOR_ACQUIRE,
     EXPIRE_STATE,
     EXPIRE_POLICY,
+    SPDINFO_ATTRS,
 };
 const char *desc_name[] = {
     "create tunnel",
     "alloc spi",
     "monitor acquire",
     "expire state",
-    "expire policy"
+    "expire policy",
+    "spdinfo attributes",
+    ""
 };
 struct xfrm_desc {
     enum desc_type type;
@@ -1593,6 +1596,155 @@ out_close:
     return ret;
 }
+static int xfrm_spdinfo_set_thresh(int xfrm_sock, uint32_t *seq,
+        unsigned thresh4_l, unsigned thresh4_r,
+        unsigned thresh6_l, unsigned thresh6_r,
+        bool add_bad_attr)
+{
+    struct {
+        struct nlmsghdr nh;
+        union {
+            uint32_t unused;
+            int error;
+        };
+        char attrbuf[MAX_PAYLOAD];
+    } req;
+    struct xfrmu_spdhthresh thresh;
+
+    memset(&req, 0, sizeof(req));
+    req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(req.unused));
+    req.nh.nlmsg_type = XFRM_MSG_NEWSPDINFO;
+    req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
+    req.nh.nlmsg_seq = (*seq)++;
+
+    thresh.lbits = thresh4_l;
+    thresh.rbits = thresh4_r;
+    if (rtattr_pack(&req.nh, sizeof(req), XFRMA_SPD_IPV4_HTHRESH, &thresh, sizeof(thresh)))
+        return -1;
+
+    thresh.lbits = thresh6_l;
+    thresh.rbits = thresh6_r;
+    if (rtattr_pack(&req.nh, sizeof(req), XFRMA_SPD_IPV6_HTHRESH, &thresh, sizeof(thresh)))
+        return -1;
+
+    if (add_bad_attr) {
+        BUILD_BUG_ON(XFRMA_IF_ID <= XFRMA_SPD_MAX + 1);
+        if (rtattr_pack(&req.nh, sizeof(req), XFRMA_IF_ID, NULL, 0)) {
+            pr_err("adding attribute failed: no space");
+            return -1;
+        }
+    }
+
+    if (send(xfrm_sock, &req, req.nh.nlmsg_len, 0) < 0) {
+        pr_err("send()");
+        return -1;
+    }
+
+    if (recv(xfrm_sock, &req, sizeof(req), 0) < 0) {
+        pr_err("recv()");
+        return -1;
+    } else if (req.nh.nlmsg_type != NLMSG_ERROR) {
+        printk("expected NLMSG_ERROR, got %d", (int)req.nh.nlmsg_type);
+        return -1;
+    }
+
+    if (req.error) {
+        printk("NLMSG_ERROR: %d: %s", req.error, strerror(-req.error));
+        return -1;
+    }
+
+    return 0;
+}
+
+static int xfrm_spdinfo_attrs(int xfrm_sock, uint32_t *seq)
+{
+    struct {
+        struct nlmsghdr nh;
+        union {
+            uint32_t unused;
+            int error;
+        };
+        char attrbuf[MAX_PAYLOAD];
+    } req;
+
+    if (xfrm_spdinfo_set_thresh(xfrm_sock, seq, 32, 31, 120, 16, false)) {
+        pr_err("Can't set SPD HTHRESH");
+        return KSFT_FAIL;
+    }
+
+    memset(&req, 0, sizeof(req));
+    req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(req.unused));
+    req.nh.nlmsg_type = XFRM_MSG_GETSPDINFO;
+    req.nh.nlmsg_flags = NLM_F_REQUEST;
+    req.nh.nlmsg_seq = (*seq)++;
+
+    if (send(xfrm_sock, &req, req.nh.nlmsg_len, 0) < 0) {
+        pr_err("send()");
+        return KSFT_FAIL;
+    }
+
+    if (recv(xfrm_sock, &req, sizeof(req), 0) < 0) {
+        pr_err("recv()");
+        return KSFT_FAIL;
+    } else if (req.nh.nlmsg_type == XFRM_MSG_NEWSPDINFO) {
+        size_t len = NLMSG_PAYLOAD(&req.nh, sizeof(req.unused));
+        struct rtattr *attr = (void *)req.attrbuf;
+        int got_thresh = 0;
+
+        for (; RTA_OK(attr, len); attr = RTA_NEXT(attr, len)) {
+            if (attr->rta_type == XFRMA_SPD_IPV4_HTHRESH) {
+                struct xfrmu_spdhthresh *t = RTA_DATA(attr);
+
+                got_thresh++;
+                if (t->lbits != 32 || t->rbits != 31) {
+                    pr_err("thresh differ: %u, %u",
+                           t->lbits, t->rbits);
+                    return KSFT_FAIL;
+                }
+            }
+            if (attr->rta_type == XFRMA_SPD_IPV6_HTHRESH) {
+                struct xfrmu_spdhthresh *t = RTA_DATA(attr);
+
+                got_thresh++;
+                if (t->lbits != 120 || t->rbits != 16) {
+                    pr_err("thresh differ: %u, %u",
+                           t->lbits, t->rbits);
+                    return KSFT_FAIL;
+                }
+            }
+        }
+        if (got_thresh != 2) {
+            pr_err("only %d thresh returned by XFRM_MSG_GETSPDINFO", got_thresh);
+            return KSFT_FAIL;
+        }
+    } else if (req.nh.nlmsg_type != NLMSG_ERROR) {
+        printk("expected NLMSG_ERROR, got %d", (int)req.nh.nlmsg_type);
+        return KSFT_FAIL;
+    } else {
+        printk("NLMSG_ERROR: %d: %s", req.error, strerror(-req.error));
+        return -1;
+    }
+
+    /* Restore the default */
+    if (xfrm_spdinfo_set_thresh(xfrm_sock, seq, 32, 32, 128, 128, false)) {
+        pr_err("Can't restore SPD HTHRESH");
+        return KSFT_FAIL;
+    }
+
+    /*
+     * At this moment xfrm uses nlmsg_parse_deprecated(), which
+     * implies NL_VALIDATE_LIBERAL - ignoring attributes with
+     * (type > maxtype). nla_parse_depricated_strict() would enforce
+     * it. Or even stricter nla_parse().
+     * Right now it's not expected to fail, but to be ignored.
+     */
+    if (xfrm_spdinfo_set_thresh(xfrm_sock, seq, 32, 32, 128, 128, true))
+        return KSFT_PASS;
+
+    return KSFT_PASS;
+}
+
 static int child_serv(int xfrm_sock, uint32_t *seq,
         unsigned int nr, int cmd_fd, void *buf, struct xfrm_desc *desc)
 {
@@ -1717,6 +1869,9 @@ static int child_f(unsigned int nr, int test_desc_fd, int cmd_fd, void *buf)
         case EXPIRE_POLICY:
             ret = xfrm_expire_policy(xfrm_sock, &seq, nr, &desc);
             break;
+        case SPDINFO_ATTRS:
+            ret = xfrm_spdinfo_attrs(xfrm_sock, &seq);
+            break;
         default:
             printk("Unknown desc type %d", desc.type);
             exit(KSFT_FAIL);
@@ -1994,8 +2149,10 @@ static int write_proto_plan(int fd, int proto)
  * sizeof(xfrm_user_polexpire)  = 168  |  sizeof(xfrm_user_polexpire)  = 176
  *
  * Check the affected by the UABI difference structures.
+ * Also, check translation for xfrm_set_spdinfo: it has it's own attributes
+ * which needs to be correctly copied, but not translated.
  */
-const unsigned int compat_plan = 4;
+const unsigned int compat_plan = 5;
 static int write_compat_struct_tests(int test_desc_fd)
 {
     struct xfrm_desc desc = {};
@@ -2019,6 +2176,10 @@
     if (__write_desc(test_desc_fd, &desc))
         return -1;
+    desc.type = SPDINFO_ATTRS;
+    if (__write_desc(test_desc_fd, &desc))
+        return -1;
+
     return 0;
 }