Merge tag 'net-6.9-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from bluetooth.

  Current release - new code bugs:

   - netfilter: complete validation of user input

   - mlx5: disallow SRIOV switchdev mode when in multi-PF netdev

  Previous releases - regressions:

   - core: fix u64_stats_init() for lockdep when used repeatedly in one
     file

   - ipv6: fix race condition between ipv6_get_ifaddr and ipv6_del_addr

   - bluetooth: fix memory leak in hci_req_sync_complete()

   - batman-adv: avoid infinite loop trying to resize local TT

   - drv: geneve: fix header validation in geneve[6]_xmit_skb

   - drv: bnxt_en: fix possible memory leak in
     bnxt_rdma_aux_device_init()

   - drv: mlx5: offset comp irq index in name by one

   - drv: ena: avoid double-free clearing stale tx_info->xdpf value

   - drv: pds_core: fix pdsc_check_pci_health deadlock

  Previous releases - always broken:

   - xsk: validate user input for XDP_{UMEM|COMPLETION}_FILL_RING

   - bluetooth: fix setsockopt not validating user input

   - af_unix: clear stale u->oob_skb.

   - nfc: llcp: fix nfc_llcp_setsockopt() unsafe copies

   - drv: virtio_net: fix guest hangup on invalid RSS update

   - drv: mlx5e: Fix mlx5e_priv_init() cleanup flow

   - dsa: mt7530: trap link-local frames regardless of ST Port State"

* tag 'net-6.9-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (59 commits)
  net: ena: Set tx_info->xdpf value to NULL
  net: ena: Fix incorrect descriptor free behavior
  net: ena: Wrong missing IO completions check order
  net: ena: Fix potential sign extension issue
  af_unix: Fix garbage collector racing against connect()
  net: dsa: mt7530: trap link-local frames regardless of ST Port State
  Revert "s390/ism: fix receive message buffer allocation"
  net: sparx5: fix wrong config being used when reconfiguring PCS
  net/mlx5: fix possible stack overflows
  net/mlx5: Disallow SRIOV switchdev mode when in multi-PF netdev
  net/mlx5e: RSS, Block XOR hash with over 128 channels
  net/mlx5e: Do not produce metadata freelist entries in Tx port ts WQE xmit
  net/mlx5e: HTB, Fix inconsistencies with QoS SQs number
  net/mlx5e: Fix mlx5e_priv_init() cleanup flow
  net/mlx5e: RSS, Block changing channels number when RXFH is configured
  net/mlx5: Correctly compare pkt reformat ids
  net/mlx5: Properly link new fs rules into the tree
  net/mlx5: offset comp irq index in name by one
  net/mlx5: Register devlink first under devlink lock
  net/mlx5: E-switch, store eswitch pointer before registering devlink_param
  ...
This commit is contained in:
Linus Torvalds 2024-04-11 11:46:31 -07:00
commit 2ae9a8972c
69 changed files with 745 additions and 358 deletions

View File

@@ -2191,7 +2191,6 @@ N: mxs
 ARM/FREESCALE LAYERSCAPE ARM ARCHITECTURE
 M: Shawn Guo <shawnguo@kernel.org>
-M: Li Yang <leoyang.li@nxp.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
@@ -8524,7 +8523,6 @@ S: Maintained
 F: drivers/video/fbdev/fsl-diu-fb.*
 
 FREESCALE DMA DRIVER
-M: Li Yang <leoyang.li@nxp.com>
 M: Zhang Wei <zw@zh-kernel.org>
 L: linuxppc-dev@lists.ozlabs.org
 S: Maintained
@@ -8689,10 +8687,9 @@ F: drivers/soc/fsl/qe/tsa.h
 F: include/dt-bindings/soc/cpm1-fsl,tsa.h
 
 FREESCALE QUICC ENGINE UCC ETHERNET DRIVER
-M: Li Yang <leoyang.li@nxp.com>
 L: netdev@vger.kernel.org
 L: linuxppc-dev@lists.ozlabs.org
-S: Maintained
+S: Orphan
 F: drivers/net/ethernet/freescale/ucc_geth*
 
 FREESCALE QUICC ENGINE UCC HDLC DRIVER
@@ -8709,10 +8706,9 @@ S: Maintained
 F: drivers/tty/serial/ucc_uart.c
 
 FREESCALE SOC DRIVERS
-M: Li Yang <leoyang.li@nxp.com>
 L: linuxppc-dev@lists.ozlabs.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Maintained
+S: Orphan
 F: Documentation/devicetree/bindings/misc/fsl,dpaa2-console.yaml
 F: Documentation/devicetree/bindings/soc/fsl/
 F: drivers/soc/fsl/
@@ -8746,10 +8742,9 @@ F: Documentation/devicetree/bindings/sound/fsl,qmc-audio.yaml
 F: sound/soc/fsl/fsl_qmc_audio.c
 
 FREESCALE USB PERIPHERAL DRIVERS
-M: Li Yang <leoyang.li@nxp.com>
 L: linux-usb@vger.kernel.org
 L: linuxppc-dev@lists.ozlabs.org
-S: Maintained
+S: Orphan
 F: drivers/usb/gadget/udc/fsl*
 
 FREESCALE USB PHY DRIVER

View File

@@ -401,23 +401,23 @@ data_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
 }
 
 static int data_sock_setsockopt(struct socket *sock, int level, int optname,
-				sockptr_t optval, unsigned int len)
+				sockptr_t optval, unsigned int optlen)
 {
 	struct sock *sk = sock->sk;
 	int err = 0, opt = 0;
 
 	if (*debug & DEBUG_SOCKET)
 		printk(KERN_DEBUG "%s(%p, %d, %x, optval, %d)\n", __func__, sock,
-		       level, optname, len);
+		       level, optname, optlen);
 
 	lock_sock(sk);
 
 	switch (optname) {
 	case MISDN_TIME_STAMP:
-		if (copy_from_sockptr(&opt, optval, sizeof(int))) {
-			err = -EFAULT;
+		err = copy_safe_from_sockptr(&opt, sizeof(opt),
+					     optval, optlen);
+		if (err)
 			break;
-		}
+
 		if (opt)
 			_pms(sk)->cmask |= MISDN_TIME_STAMP;

View File

@@ -950,20 +950,173 @@ static void mt7530_setup_port5(struct dsa_switch *ds, phy_interface_t interface)
 	mutex_unlock(&priv->reg_mutex);
 }
 
-/* On page 205, section "8.6.3 Frame filtering" of the active standard, IEEE Std
- * 802.1Q-2022, it is stated that frames with 01:80:C2:00:00:00-0F as MAC DA
- * must only be propagated to C-VLAN and MAC Bridge components. That means
- * VLAN-aware and VLAN-unaware bridges. On the switch designs with CPU ports,
- * these frames are supposed to be processed by the CPU (software). So we make
- * the switch only forward them to the CPU port. And if received from a CPU
- * port, forward to a single port. The software is responsible of making the
- * switch conform to the latter by setting a single port as destination port on
- * the special tag.
+/* In Clause 5 of IEEE Std 802-2014, two sublayers of the data link layer (DLL)
+ * of the Open Systems Interconnection basic reference model (OSI/RM) are
+ * described; the medium access control (MAC) and logical link control (LLC)
+ * sublayers. The MAC sublayer is the one facing the physical layer.
  *
- * This switch intellectual property cannot conform to this part of the standard
- * fully. Whilst the REV_UN frame tag covers the remaining :04-0D and :0F MAC
- * DAs, it also includes :22-FF which the scope of propagation is not supposed
- * to be restricted for these MAC DAs.
+ * In 8.2 of IEEE Std 802.1Q-2022, the Bridge architecture is described. A
+ * Bridge component comprises a MAC Relay Entity for interconnecting the Ports
+ * of the Bridge, at least two Ports, and higher layer entities with at least a
+ * Spanning Tree Protocol Entity included.
+ *
+ * Each Bridge Port also functions as an end station and shall provide the MAC
+ * Service to an LLC Entity. Each instance of the MAC Service is provided to a
+ * distinct LLC Entity that supports protocol identification, multiplexing, and
+ * demultiplexing, for protocol data unit (PDU) transmission and reception by
+ * one or more higher layer entities.
+ *
+ * It is described in 8.13.9 of IEEE Std 802.1Q-2022 that in a Bridge, the LLC
+ * Entity associated with each Bridge Port is modeled as being directly
+ * connected to the attached Local Area Network (LAN).
+ *
+ * On the switch with CPU port architecture, CPU port functions as Management
+ * Port, and the Management Port functionality is provided by software which
+ * functions as an end station. Software is connected to an IEEE 802 LAN that is
+ * wholly contained within the system that incorporates the Bridge. Software
+ * provides access to the LLC Entity associated with each Bridge Port by the
+ * value of the source port field on the special tag on the frame received by
+ * software.
+ *
+ * We call frames that carry control information to determine the active
+ * topology and current extent of each Virtual Local Area Network (VLAN), i.e.,
+ * spanning tree or Shortest Path Bridging (SPB) and Multiple VLAN Registration
+ * Protocol Data Units (MVRPDUs), and frames from other link constrained
+ * protocols, such as Extensible Authentication Protocol over LAN (EAPOL) and
+ * Link Layer Discovery Protocol (LLDP), link-local frames. They are not
+ * forwarded by a Bridge. Permanently configured entries in the filtering
+ * database (FDB) ensure that such frames are discarded by the Forwarding
+ * Process. In 8.6.3 of IEEE Std 802.1Q-2022, this is described in detail:
+ *
+ * Each of the reserved MAC addresses specified in Table 8-1
+ * (01-80-C2-00-00-[00,01,02,03,04,05,06,07,08,09,0A,0B,0C,0D,0E,0F]) shall be
+ * permanently configured in the FDB in C-VLAN components and ERs.
+ *
+ * Each of the reserved MAC addresses specified in Table 8-2
+ * (01-80-C2-00-00-[01,02,03,04,05,06,07,08,09,0A,0E]) shall be permanently
+ * configured in the FDB in S-VLAN components.
+ *
+ * Each of the reserved MAC addresses specified in Table 8-3
+ * (01-80-C2-00-00-[01,02,04,0E]) shall be permanently configured in the FDB in
+ * TPMR components.
+ *
+ * The FDB entries for reserved MAC addresses shall specify filtering for all
+ * Bridge Ports and all VIDs. Management shall not provide the capability to
+ * modify or remove entries for reserved MAC addresses.
+ *
+ * The addresses in Table 8-1, Table 8-2, and Table 8-3 determine the scope of
+ * propagation of PDUs within a Bridged Network, as follows:
+ *
+ * The Nearest Bridge group address (01-80-C2-00-00-0E) is an address that no
+ * conformant Two-Port MAC Relay (TPMR) component, Service VLAN (S-VLAN)
+ * component, Customer VLAN (C-VLAN) component, or MAC Bridge can forward.
+ * PDUs transmitted using this destination address, or any other addresses
+ * that appear in Table 8-1, Table 8-2, and Table 8-3
+ * (01-80-C2-00-00-[00,01,02,03,04,05,06,07,08,09,0A,0B,0C,0D,0E,0F]), can
+ * therefore travel no further than those stations that can be reached via a
+ * single individual LAN from the originating station.
+ *
+ * The Nearest non-TPMR Bridge group address (01-80-C2-00-00-03), is an
+ * address that no conformant S-VLAN component, C-VLAN component, or MAC
+ * Bridge can forward; however, this address is relayed by a TPMR component.
+ * PDUs using this destination address, or any of the other addresses that
+ * appear in both Table 8-1 and Table 8-2 but not in Table 8-3
+ * (01-80-C2-00-00-[00,03,05,06,07,08,09,0A,0B,0C,0D,0F]), will be relayed by
+ * any TPMRs but will propagate no further than the nearest S-VLAN component,
+ * C-VLAN component, or MAC Bridge.
+ *
+ * The Nearest Customer Bridge group address (01-80-C2-00-00-00) is an address
+ * that no conformant C-VLAN component, MAC Bridge can forward; however, it is
+ * relayed by TPMR components and S-VLAN components. PDUs using this
+ * destination address, or any of the other addresses that appear in Table 8-1
+ * but not in either Table 8-2 or Table 8-3 (01-80-C2-00-00-[00,0B,0C,0D,0F]),
+ * will be relayed by TPMR components and S-VLAN components but will propagate
+ * no further than the nearest C-VLAN component or MAC Bridge.
+ *
+ * Because the LLC Entity associated with each Bridge Port is provided via CPU
+ * port, we must not filter these frames but forward them to CPU port.
+ *
+ * In a Bridge, the transmission Port is majorly decided by ingress and egress
+ * rules, FDB, and spanning tree Port State functions of the Forwarding Process.
+ * For link-local frames, only CPU port should be designated as destination port
+ * in the FDB, and the other functions of the Forwarding Process must not
+ * interfere with the decision of the transmission Port. We call this process
+ * trapping frames to CPU port.
+ *
+ * Therefore, on the switch with CPU port architecture, link-local frames must
+ * be trapped to CPU port, and certain link-local frames received by a Port of a
+ * Bridge comprising a TPMR component or an S-VLAN component must be excluded
+ * from it.
+ *
+ * A Bridge of the switch with CPU port architecture cannot comprise a Two-Port
+ * MAC Relay (TPMR) component as a TPMR component supports only a subset of the
+ * functionality of a MAC Bridge. A Bridge comprising two Ports (Management Port
+ * doesn't count) of this architecture will either function as a standard MAC
+ * Bridge or a standard VLAN Bridge.
+ *
+ * Therefore, a Bridge of this architecture can only comprise S-VLAN components,
+ * C-VLAN components, or MAC Bridge components. Since there's no TPMR component,
+ * we don't need to relay PDUs using the destination addresses specified on the
+ * Nearest non-TPMR section, and the proportion of the Nearest Customer Bridge
+ * section where they must be relayed by TPMR components.
+ *
+ * One option to trap link-local frames to CPU port is to add static FDB entries
+ * with CPU port designated as destination port. However, because that
+ * Independent VLAN Learning (IVL) is being used on every VID, each entry only
+ * applies to a single VLAN Identifier (VID). For a Bridge comprising a MAC
+ * Bridge component or a C-VLAN component, there would have to be 16 times 4096
+ * entries. This switch intellectual property can only hold a maximum of 2048
+ * entries. Using this option, there also isn't a mechanism to prevent
+ * link-local frames from being discarded when the spanning tree Port State of
+ * the reception Port is discarding.
+ *
+ * The remaining option is to utilise the BPC, RGAC1, RGAC2, RGAC3, and RGAC4
+ * registers. Whilst this applies to every VID, it doesn't contain all of the
+ * reserved MAC addresses without affecting the remaining Standard Group MAC
+ * Addresses. The REV_UN frame tag utilised using the RGAC4 register covers the
+ * remaining 01-80-C2-00-00-[04,05,06,07,08,09,0A,0B,0C,0D,0F] destination
+ * addresses. It also includes the 01-80-C2-00-00-22 to 01-80-C2-00-00-FF
+ * destination addresses which may be relayed by MAC Bridges or VLAN Bridges.
+ * The latter option provides better but not complete conformance.
+ *
+ * This switch intellectual property also does not provide a mechanism to trap
+ * link-local frames with specific destination addresses to CPU port by Bridge,
+ * to conform to the filtering rules for the distinct Bridge components.
+ *
+ * Therefore, regardless of the type of the Bridge component, link-local frames
+ * with these destination addresses will be trapped to CPU port:
+ *
+ * 01-80-C2-00-00-[00,01,02,03,0E]
+ *
+ * In a Bridge comprising a MAC Bridge component or a C-VLAN component:
+ *
+ * Link-local frames with these destination addresses won't be trapped to CPU
+ * port which won't conform to IEEE Std 802.1Q-2022:
+ *
+ * 01-80-C2-00-00-[04,05,06,07,08,09,0A,0B,0C,0D,0F]
+ *
+ * In a Bridge comprising an S-VLAN component:
+ *
+ * Link-local frames with these destination addresses will be trapped to CPU
+ * port which won't conform to IEEE Std 802.1Q-2022:
+ *
+ * 01-80-C2-00-00-00
+ *
+ * Link-local frames with these destination addresses won't be trapped to CPU
+ * port which won't conform to IEEE Std 802.1Q-2022:
+ *
+ * 01-80-C2-00-00-[04,05,06,07,08,09,0A]
+ *
+ * To trap link-local frames to CPU port as conformant as this switch
+ * intellectual property can allow, link-local frames are made to be regarded as
+ * Bridge Protocol Data Units (BPDUs). This is because this switch intellectual
+ * property only lets the frames regarded as BPDUs bypass the spanning tree Port
+ * State function of the Forwarding Process.
+ *
+ * The only remaining interference is the ingress rules. When the reception Port
+ * has no PVID assigned on software, VLAN-untagged frames won't be allowed in.
+ * There doesn't seem to be a mechanism on the switch intellectual property to
+ * have link-local frames bypass this function of the Forwarding Process.
  */
 static void
 mt753x_trap_frames(struct mt7530_priv *priv)
@@ -971,35 +1124,43 @@ mt753x_trap_frames(struct mt7530_priv *priv)
 	/* Trap 802.1X PAE frames and BPDUs to the CPU port(s) and egress them
 	 * VLAN-untagged.
 	 */
-	mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_EG_TAG_MASK |
-		   MT753X_PAE_PORT_FW_MASK | MT753X_BPDU_EG_TAG_MASK |
-		   MT753X_BPDU_PORT_FW_MASK,
-		   MT753X_PAE_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
-		   MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY) |
-		   MT753X_BPDU_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
-		   MT753X_BPDU_CPU_ONLY);
+	mt7530_rmw(priv, MT753X_BPC,
+		   MT753X_PAE_BPDU_FR | MT753X_PAE_EG_TAG_MASK |
+		   MT753X_PAE_PORT_FW_MASK | MT753X_BPDU_EG_TAG_MASK |
+		   MT753X_BPDU_PORT_FW_MASK,
+		   MT753X_PAE_BPDU_FR |
+		   MT753X_PAE_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY) |
+		   MT753X_BPDU_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_BPDU_CPU_ONLY);
 
 	/* Trap frames with :01 and :02 MAC DAs to the CPU port(s) and egress
 	 * them VLAN-untagged.
 	 */
-	mt7530_rmw(priv, MT753X_RGAC1, MT753X_R02_EG_TAG_MASK |
-		   MT753X_R02_PORT_FW_MASK | MT753X_R01_EG_TAG_MASK |
-		   MT753X_R01_PORT_FW_MASK,
-		   MT753X_R02_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
-		   MT753X_R02_PORT_FW(MT753X_BPDU_CPU_ONLY) |
-		   MT753X_R01_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
-		   MT753X_BPDU_CPU_ONLY);
+	mt7530_rmw(priv, MT753X_RGAC1,
+		   MT753X_R02_BPDU_FR | MT753X_R02_EG_TAG_MASK |
+		   MT753X_R02_PORT_FW_MASK | MT753X_R01_BPDU_FR |
+		   MT753X_R01_EG_TAG_MASK | MT753X_R01_PORT_FW_MASK,
+		   MT753X_R02_BPDU_FR |
+		   MT753X_R02_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_R02_PORT_FW(MT753X_BPDU_CPU_ONLY) |
+		   MT753X_R01_BPDU_FR |
+		   MT753X_R01_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_BPDU_CPU_ONLY);
 
 	/* Trap frames with :03 and :0E MAC DAs to the CPU port(s) and egress
 	 * them VLAN-untagged.
 	 */
-	mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_EG_TAG_MASK |
-		   MT753X_R0E_PORT_FW_MASK | MT753X_R03_EG_TAG_MASK |
-		   MT753X_R03_PORT_FW_MASK,
-		   MT753X_R0E_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
-		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY) |
-		   MT753X_R03_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
-		   MT753X_BPDU_CPU_ONLY);
+	mt7530_rmw(priv, MT753X_RGAC2,
+		   MT753X_R0E_BPDU_FR | MT753X_R0E_EG_TAG_MASK |
+		   MT753X_R0E_PORT_FW_MASK | MT753X_R03_BPDU_FR |
+		   MT753X_R03_EG_TAG_MASK | MT753X_R03_PORT_FW_MASK,
+		   MT753X_R0E_BPDU_FR |
+		   MT753X_R0E_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY) |
+		   MT753X_R03_BPDU_FR |
+		   MT753X_R03_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_BPDU_CPU_ONLY);
 }
 
 static void
@@ -2505,18 +2666,25 @@ mt7531_setup(struct dsa_switch *ds)
 	mt7530_rmw(priv, MT7531_GPIO_MODE0, MT7531_GPIO0_MASK,
 		   MT7531_GPIO0_INTERRUPT);
 
-	/* Enable PHY core PLL, since phy_device has not yet been created
-	 * provided for phy_[read,write]_mmd_indirect is called, we provide
-	 * our own mt7531_ind_mmd_phy_[read,write] to complete this
-	 * function.
+	/* Enable Energy-Efficient Ethernet (EEE) and PHY core PLL, since
+	 * phy_device has not yet been created provided for
+	 * phy_[read,write]_mmd_indirect is called, we provide our own
+	 * mt7531_ind_mmd_phy_[read,write] to complete this function.
 	 */
 	val = mt7531_ind_c45_phy_read(priv, MT753X_CTRL_PHY_ADDR,
 				      MDIO_MMD_VEND2, CORE_PLL_GROUP4);
-	val |= MT7531_PHY_PLL_BYPASS_MODE;
+	val |= MT7531_RG_SYSPLL_DMY2 | MT7531_PHY_PLL_BYPASS_MODE;
 	val &= ~MT7531_PHY_PLL_OFF;
 	mt7531_ind_c45_phy_write(priv, MT753X_CTRL_PHY_ADDR, MDIO_MMD_VEND2,
 				 CORE_PLL_GROUP4, val);
 
+	/* Disable EEE advertisement on the switch PHYs. */
+	for (i = MT753X_CTRL_PHY_ADDR;
+	     i < MT753X_CTRL_PHY_ADDR + MT7530_NUM_PHYS; i++) {
+		mt7531_ind_c45_phy_write(priv, i, MDIO_MMD_AN, MDIO_AN_EEE_ADV,
+					 0);
+	}
+
 	mt7531_setup_common(ds);
 
 	/* Setup VLAN ID 0 for VLAN-unaware bridges */
View File

@@ -65,6 +65,7 @@ enum mt753x_id {
 
 /* Registers for BPDU and PAE frame control*/
 #define MT753X_BPC 0x24
+#define MT753X_PAE_BPDU_FR BIT(25)
 #define MT753X_PAE_EG_TAG_MASK GENMASK(24, 22)
 #define MT753X_PAE_EG_TAG(x) FIELD_PREP(MT753X_PAE_EG_TAG_MASK, x)
 #define MT753X_PAE_PORT_FW_MASK GENMASK(18, 16)
@@ -75,20 +76,24 @@ enum mt753x_id {
 
 /* Register for :01 and :02 MAC DA frame control */
 #define MT753X_RGAC1 0x28
+#define MT753X_R02_BPDU_FR BIT(25)
 #define MT753X_R02_EG_TAG_MASK GENMASK(24, 22)
 #define MT753X_R02_EG_TAG(x) FIELD_PREP(MT753X_R02_EG_TAG_MASK, x)
 #define MT753X_R02_PORT_FW_MASK GENMASK(18, 16)
 #define MT753X_R02_PORT_FW(x) FIELD_PREP(MT753X_R02_PORT_FW_MASK, x)
+#define MT753X_R01_BPDU_FR BIT(9)
 #define MT753X_R01_EG_TAG_MASK GENMASK(8, 6)
 #define MT753X_R01_EG_TAG(x) FIELD_PREP(MT753X_R01_EG_TAG_MASK, x)
 #define MT753X_R01_PORT_FW_MASK GENMASK(2, 0)
 
 /* Register for :03 and :0E MAC DA frame control */
 #define MT753X_RGAC2 0x2c
+#define MT753X_R0E_BPDU_FR BIT(25)
 #define MT753X_R0E_EG_TAG_MASK GENMASK(24, 22)
 #define MT753X_R0E_EG_TAG(x) FIELD_PREP(MT753X_R0E_EG_TAG_MASK, x)
 #define MT753X_R0E_PORT_FW_MASK GENMASK(18, 16)
 #define MT753X_R0E_PORT_FW(x) FIELD_PREP(MT753X_R0E_PORT_FW_MASK, x)
+#define MT753X_R03_BPDU_FR BIT(9)
 #define MT753X_R03_EG_TAG_MASK GENMASK(8, 6)
 #define MT753X_R03_EG_TAG(x) FIELD_PREP(MT753X_R03_EG_TAG_MASK, x)
 #define MT753X_R03_PORT_FW_MASK GENMASK(2, 0)
@@ -616,6 +621,7 @@ enum mt7531_clk_skew {
 
 #define RG_SYSPLL_DDSFBK_EN BIT(12)
 #define RG_SYSPLL_BIAS_EN BIT(11)
 #define RG_SYSPLL_BIAS_LPF_EN BIT(10)
+#define MT7531_RG_SYSPLL_DMY2 BIT(6)
 #define MT7531_PHY_PLL_OFF BIT(5)
 #define MT7531_PHY_PLL_BYPASS_MODE BIT(4)

View File

@@ -351,7 +351,7 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
 			ENA_COM_BOUNCE_BUFFER_CNTRL_CNT;
 	io_sq->bounce_buf_ctrl.next_to_use = 0;
 
-	size = io_sq->bounce_buf_ctrl.buffer_size *
+	size = (size_t)io_sq->bounce_buf_ctrl.buffer_size *
 	       io_sq->bounce_buf_ctrl.buffers_num;
 
 	dev_node = dev_to_node(ena_dev->dmadev);

View File

@@ -718,8 +718,11 @@ void ena_unmap_tx_buff(struct ena_ring *tx_ring,
 static void ena_free_tx_bufs(struct ena_ring *tx_ring)
 {
 	bool print_once = true;
+	bool is_xdp_ring;
 	u32 i;
 
+	is_xdp_ring = ENA_IS_XDP_INDEX(tx_ring->adapter, tx_ring->qid);
+
 	for (i = 0; i < tx_ring->ring_size; i++) {
 		struct ena_tx_buffer *tx_info = &tx_ring->tx_buffer_info[i];
@@ -739,10 +742,15 @@ static void ena_free_tx_bufs(struct ena_ring *tx_ring)
 
 		ena_unmap_tx_buff(tx_ring, tx_info);
 
-		dev_kfree_skb_any(tx_info->skb);
+		if (is_xdp_ring)
+			xdp_return_frame(tx_info->xdpf);
+		else
+			dev_kfree_skb_any(tx_info->skb);
 	}
 
-	netdev_tx_reset_queue(netdev_get_tx_queue(tx_ring->netdev,
-						  tx_ring->qid));
+	if (!is_xdp_ring)
+		netdev_tx_reset_queue(netdev_get_tx_queue(tx_ring->netdev,
+							  tx_ring->qid));
 }
 
 static void ena_free_all_tx_bufs(struct ena_adapter *adapter)
@@ -3481,10 +3489,11 @@ static void check_for_missing_completions(struct ena_adapter *adapter)
 {
 	struct ena_ring *tx_ring;
 	struct ena_ring *rx_ring;
-	int i, budget, rc;
+	int qid, budget, rc;
 	int io_queue_count;
 
 	io_queue_count = adapter->xdp_num_queues + adapter->num_io_queues;
+
 	/* Make sure the driver doesn't turn the device in other process */
 	smp_rmb();
@@ -3497,27 +3506,29 @@ static void check_for_missing_completions(struct ena_adapter *adapter)
 	if (adapter->missing_tx_completion_to == ENA_HW_HINTS_NO_TIMEOUT)
 		return;
 
-	budget = ENA_MONITORED_TX_QUEUES;
+	budget = min_t(u32, io_queue_count, ENA_MONITORED_TX_QUEUES);
 
-	for (i = adapter->last_monitored_tx_qid; i < io_queue_count; i++) {
-		tx_ring = &adapter->tx_ring[i];
-		rx_ring = &adapter->rx_ring[i];
+	qid = adapter->last_monitored_tx_qid;
+
+	while (budget) {
+		qid = (qid + 1) % io_queue_count;
+
+		tx_ring = &adapter->tx_ring[qid];
+		rx_ring = &adapter->rx_ring[qid];
 
 		rc = check_missing_comp_in_tx_queue(adapter, tx_ring);
 		if (unlikely(rc))
 			return;
 
-		rc = !ENA_IS_XDP_INDEX(adapter, i) ?
+		rc = !ENA_IS_XDP_INDEX(adapter, qid) ?
 			check_for_rx_interrupt_queue(adapter, rx_ring) : 0;
 		if (unlikely(rc))
 			return;
 
 		budget--;
-		if (!budget)
-			break;
 	}
 
-	adapter->last_monitored_tx_qid = i % io_queue_count;
+	adapter->last_monitored_tx_qid = qid;
 }
 
 /* trigger napi schedule after 2 consecutive detections */

View File

@@ -89,7 +89,7 @@ int ena_xdp_xmit_frame(struct ena_ring *tx_ring,
 
 	rc = ena_xdp_tx_map_frame(tx_ring, tx_info, xdpf, &ena_tx_ctx);
 	if (unlikely(rc))
-		return rc;
+		goto err;
 
 	ena_tx_ctx.req_id = req_id;
@@ -112,7 +112,9 @@ int ena_xdp_xmit_frame(struct ena_ring *tx_ring,
 
 error_unmap_dma:
 	ena_unmap_tx_buff(tx_ring, tx_info);
+err:
 	tx_info->xdpf = NULL;
+
 	return rc;
 }

View File

@@ -593,6 +593,16 @@ err_out:
 	pdsc_teardown(pdsc, PDSC_TEARDOWN_RECOVERY);
 }
 
+void pdsc_pci_reset_thread(struct work_struct *work)
+{
+	struct pdsc *pdsc = container_of(work, struct pdsc, pci_reset_work);
+	struct pci_dev *pdev = pdsc->pdev;
+
+	pci_dev_get(pdev);
+	pci_reset_function(pdev);
+	pci_dev_put(pdev);
+}
+
 static void pdsc_check_pci_health(struct pdsc *pdsc)
 {
 	u8 fw_status;
@@ -607,7 +617,8 @@ static void pdsc_check_pci_health(struct pdsc *pdsc)
 	if (fw_status != PDS_RC_BAD_PCI)
 		return;
 
-	pci_reset_function(pdsc->pdev);
+	/* prevent deadlock between pdsc_reset_prepare and pdsc_health_thread */
+	queue_work(pdsc->wq, &pdsc->pci_reset_work);
 }
 
 void pdsc_health_thread(struct work_struct *work)

View File

@@ -197,6 +197,7 @@ struct pdsc {
 	struct pdsc_qcq notifyqcq;
 	u64 last_eid;
 	struct pdsc_viftype *viftype_status;
+	struct work_struct pci_reset_work;
 };
 
 /** enum pds_core_dbell_bits - bitwise composition of dbell values.
@@ -313,5 +314,6 @@ int pdsc_firmware_update(struct pdsc *pdsc, const struct firmware *fw,
 void pdsc_fw_down(struct pdsc *pdsc);
 void pdsc_fw_up(struct pdsc *pdsc);
+void pdsc_pci_reset_thread(struct work_struct *work);
 
 #endif /* _PDSC_H_ */

View File

@@ -229,6 +229,9 @@ int pdsc_devcmd_reset(struct pdsc *pdsc)
 		.reset.opcode = PDS_CORE_CMD_RESET,
 	};
 
+	if (!pdsc_is_fw_running(pdsc))
+		return 0;
+
 	return pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
 }


@@ -239,6 +239,7 @@ static int pdsc_init_pf(struct pdsc *pdsc)
 	snprintf(wq_name, sizeof(wq_name), "%s.%d", PDS_CORE_DRV_NAME, pdsc->uid);
 	pdsc->wq = create_singlethread_workqueue(wq_name);
 	INIT_WORK(&pdsc->health_work, pdsc_health_thread);
+	INIT_WORK(&pdsc->pci_reset_work, pdsc_pci_reset_thread);
 	timer_setup(&pdsc->wdtimer, pdsc_wdtimer_cb, 0);
 	pdsc->wdtimer_period = PDSC_WATCHDOG_SECS * HZ;


@@ -11758,6 +11758,8 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
 	/* VF-reps may need to be re-opened after the PF is re-opened */
 	if (BNXT_PF(bp))
 		bnxt_vf_reps_open(bp);
+	if (bp->ptp_cfg)
+		atomic_set(&bp->ptp_cfg->tx_avail, BNXT_MAX_TX_TS);
 	bnxt_ptp_init_rtc(bp, true);
 	bnxt_ptp_cfg_tstamp_filters(bp);
 	bnxt_cfg_usr_fltrs(bp);


@@ -210,6 +210,9 @@ void bnxt_ulp_start(struct bnxt *bp, int err)
 	if (err)
 		return;

+	if (edev->ulp_tbl->msix_requested)
+		bnxt_fill_msix_vecs(bp, edev->msix_entries);
+
 	if (aux_priv) {
 		struct auxiliary_device *adev;
@@ -392,12 +395,13 @@ void bnxt_rdma_aux_device_init(struct bnxt *bp)
 	if (!edev)
 		goto aux_dev_uninit;

+	aux_priv->edev = edev;
+
 	ulp = kzalloc(sizeof(*ulp), GFP_KERNEL);
 	if (!ulp)
 		goto aux_dev_uninit;

 	edev->ulp_tbl = ulp;
-	aux_priv->edev = edev;
 	bp->edev = edev;
 	bnxt_set_edev_info(edev, bp);


@@ -4819,19 +4819,19 @@ static int rvu_nix_block_init(struct rvu *rvu, struct nix_hw *nix_hw)
 		 */
 		rvu_write64(rvu, blkaddr, NIX_AF_CFG,
 			    rvu_read64(rvu, blkaddr, NIX_AF_CFG) | 0x40ULL);
-
-		/* Set chan/link to backpressure TL3 instead of TL2 */
-		rvu_write64(rvu, blkaddr, NIX_AF_PSE_CHANNEL_LEVEL, 0x01);
-
-		/* Disable SQ manager's sticky mode operation (set TM6 = 0)
-		 * This sticky mode is known to cause SQ stalls when multiple
-		 * SQs are mapped to same SMQ and transmitting pkts at a time.
-		 */
-		cfg = rvu_read64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS);
-		cfg &= ~BIT_ULL(15);
-		rvu_write64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS, cfg);
 	}

+	/* Set chan/link to backpressure TL3 instead of TL2 */
+	rvu_write64(rvu, blkaddr, NIX_AF_PSE_CHANNEL_LEVEL, 0x01);
+
+	/* Disable SQ manager's sticky mode operation (set TM6 = 0)
+	 * This sticky mode is known to cause SQ stalls when multiple
+	 * SQs are mapped to same SMQ and transmitting pkts at a time.
+	 */
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS);
+	cfg &= ~BIT_ULL(15);
+	rvu_write64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS, cfg);
+
 	ltdefs = rvu->kpu.lt_def;
 	/* Calibrate X2P bus to check if CGX/LBK links are fine */
 	err = nix_calibrate_x2p(rvu, blkaddr);


@@ -382,6 +382,7 @@ static void otx2_qos_read_txschq_cfg_tl(struct otx2_qos_node *parent,
 		otx2_qos_read_txschq_cfg_tl(node, cfg);
 		cnt = cfg->static_node_pos[node->level];
 		cfg->schq_contig_list[node->level][cnt] = node->schq;
+		cfg->schq_index_used[node->level][cnt] = true;
 		cfg->schq_contig[node->level]++;
 		cfg->static_node_pos[node->level]++;
 		otx2_qos_read_txschq_cfg_schq(node, cfg);


@@ -95,9 +95,15 @@ static inline void mlx5e_ptp_metadata_fifo_push(struct mlx5e_ptp_metadata_fifo *
 }

 static inline u8
+mlx5e_ptp_metadata_fifo_peek(struct mlx5e_ptp_metadata_fifo *fifo)
+{
+	return fifo->data[fifo->mask & fifo->cc];
+}
+
+static inline void
 mlx5e_ptp_metadata_fifo_pop(struct mlx5e_ptp_metadata_fifo *fifo)
 {
-	return fifo->data[fifo->mask & fifo->cc++];
+	fifo->cc++;
 }

 static inline void

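The peek/pop split above lets the transmit path read the next PTP metadata index while building the descriptor, but consume it only once the WQE is actually posted, so the error path never has to push an entry back. A minimal sketch of such a power-of-two ring, with illustrative names rather than the mlx5 types:

```c
#include <assert.h>
#include <stdint.h>

/* capacity must be a power of two; mask == capacity - 1 */
struct meta_fifo {
	uint8_t data[8];
	uint16_t mask;
	uint16_t pc;	/* producer counter */
	uint16_t cc;	/* consumer counter */
};

static void fifo_push(struct meta_fifo *f, uint8_t v)
{
	f->data[f->pc++ & f->mask] = v;
}

/* peek: read the head entry without consuming it */
static uint8_t fifo_peek(struct meta_fifo *f)
{
	return f->data[f->cc & f->mask];
}

/* pop: consume the entry previously observed via peek */
static void fifo_pop(struct meta_fifo *f)
{
	f->cc++;
}
```

Because peek has no side effect, a failed transmit simply never calls pop; the old consume-then-return-on-error dance, and the double-free style hazard it carried, disappears.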

@@ -83,24 +83,25 @@ int mlx5e_open_qos_sq(struct mlx5e_priv *priv, struct mlx5e_channels *chs,
 	txq_ix = mlx5e_qid_from_qos(chs, node_qid);

-	WARN_ON(node_qid > priv->htb_max_qos_sqs);
-	if (node_qid == priv->htb_max_qos_sqs) {
-		struct mlx5e_sq_stats *stats, **stats_list = NULL;
-
-		if (priv->htb_max_qos_sqs == 0) {
-			stats_list = kvcalloc(mlx5e_qos_max_leaf_nodes(priv->mdev),
-					      sizeof(*stats_list),
-					      GFP_KERNEL);
-			if (!stats_list)
-				return -ENOMEM;
-		}
-
-		stats = kzalloc(sizeof(*stats), GFP_KERNEL);
-		if (!stats) {
-			kvfree(stats_list);
+	WARN_ON(node_qid >= mlx5e_htb_cur_leaf_nodes(priv->htb));
+	if (!priv->htb_qos_sq_stats) {
+		struct mlx5e_sq_stats **stats_list;
+
+		stats_list = kvcalloc(mlx5e_qos_max_leaf_nodes(priv->mdev),
+				      sizeof(*stats_list), GFP_KERNEL);
+		if (!stats_list)
 			return -ENOMEM;
-		}

-		if (stats_list)
-			WRITE_ONCE(priv->htb_qos_sq_stats, stats_list);
+		WRITE_ONCE(priv->htb_qos_sq_stats, stats_list);
+	}
+
+	if (!priv->htb_qos_sq_stats[node_qid]) {
+		struct mlx5e_sq_stats *stats;
+
+		stats = kzalloc(sizeof(*stats), GFP_KERNEL);
+		if (!stats)
+			return -ENOMEM;
+
 		WRITE_ONCE(priv->htb_qos_sq_stats[node_qid], stats);
 		/* Order htb_max_qos_sqs increment after writing the array pointer.
 		 * Pairs with smp_load_acquire in en_stats.c.

@@ -179,6 +179,13 @@ u32 mlx5e_rqt_size(struct mlx5_core_dev *mdev, unsigned int num_channels)
 	return min_t(u32, rqt_size, max_cap_rqt_size);
 }

+#define MLX5E_MAX_RQT_SIZE_ALLOWED_WITH_XOR8_HASH 256
+
+unsigned int mlx5e_rqt_max_num_channels_allowed_for_xor8(void)
+{
+	return MLX5E_MAX_RQT_SIZE_ALLOWED_WITH_XOR8_HASH / MLX5E_UNIFORM_SPREAD_RQT_FACTOR;
+}
+
 void mlx5e_rqt_destroy(struct mlx5e_rqt *rqt)
 {
 	mlx5_core_destroy_rqt(rqt->mdev, rqt->rqtn);


@@ -38,6 +38,7 @@ static inline u32 mlx5e_rqt_get_rqtn(struct mlx5e_rqt *rqt)
 }

 u32 mlx5e_rqt_size(struct mlx5_core_dev *mdev, unsigned int num_channels);
+unsigned int mlx5e_rqt_max_num_channels_allowed_for_xor8(void);
 int mlx5e_rqt_redirect_direct(struct mlx5e_rqt *rqt, u32 rqn, u32 *vhca_id);
 int mlx5e_rqt_redirect_indir(struct mlx5e_rqt *rqt, u32 *rqns, u32 *vhca_ids,
 			     unsigned int num_rqns,


@@ -57,6 +57,7 @@ int mlx5e_selq_init(struct mlx5e_selq *selq, struct mutex *state_lock)
 void mlx5e_selq_cleanup(struct mlx5e_selq *selq)
 {
+	mutex_lock(selq->state_lock);
 	WARN_ON_ONCE(selq->is_prepared);

 	kvfree(selq->standby);
@@ -67,6 +68,7 @@ void mlx5e_selq_cleanup(struct mlx5e_selq *selq)
 	kvfree(selq->standby);
 	selq->standby = NULL;
+	mutex_unlock(selq->state_lock);
 }

 void mlx5e_selq_prepare_params(struct mlx5e_selq *selq, struct mlx5e_params *params)


@@ -451,6 +451,34 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,

 	mutex_lock(&priv->state_lock);

+	if (mlx5e_rx_res_get_current_hash(priv->rx_res).hfunc == ETH_RSS_HASH_XOR) {
+		unsigned int xor8_max_channels = mlx5e_rqt_max_num_channels_allowed_for_xor8();
+
+		if (count > xor8_max_channels) {
+			err = -EINVAL;
+			netdev_err(priv->netdev, "%s: Requested number of channels (%d) exceeds the maximum allowed by the XOR8 RSS hfunc (%d)\n",
+				   __func__, count, xor8_max_channels);
+			goto out;
+		}
+	}
+
+	/* If RXFH is configured, changing the channels number is allowed only if
+	 * it does not require resizing the RSS table. This is because the previous
+	 * configuration may no longer be compatible with the new RSS table.
+	 */
+	if (netif_is_rxfh_configured(priv->netdev)) {
+		int cur_rqt_size = mlx5e_rqt_size(priv->mdev, cur_params->num_channels);
+		int new_rqt_size = mlx5e_rqt_size(priv->mdev, count);
+
+		if (new_rqt_size != cur_rqt_size) {
+			err = -EINVAL;
+			netdev_err(priv->netdev,
+				   "%s: RXFH is configured, block changing channels number that affects RSS table size (new: %d, current: %d)\n",
+				   __func__, new_rqt_size, cur_rqt_size);
+			goto out;
+		}
+	}
+
 	/* Don't allow changing the number of channels if HTB offload is active,
 	 * because the numeration of the QoS SQs will change, while per-queue
 	 * qdiscs are attached.
@@ -1281,17 +1309,30 @@ int mlx5e_set_rxfh(struct net_device *dev, struct ethtool_rxfh_param *rxfh,
 	struct mlx5e_priv *priv = netdev_priv(dev);
 	u32 *rss_context = &rxfh->rss_context;
 	u8 hfunc = rxfh->hfunc;
+	unsigned int count;
 	int err;

 	mutex_lock(&priv->state_lock);

+	count = priv->channels.params.num_channels;
+
+	if (hfunc == ETH_RSS_HASH_XOR) {
+		unsigned int xor8_max_channels = mlx5e_rqt_max_num_channels_allowed_for_xor8();
+
+		if (count > xor8_max_channels) {
+			err = -EINVAL;
+			netdev_err(priv->netdev, "%s: Cannot set RSS hash function to XOR, current number of channels (%d) exceeds the maximum allowed for XOR8 RSS hfunc (%d)\n",
+				   __func__, count, xor8_max_channels);
+			goto unlock;
+		}
+	}
+
 	if (*rss_context && rxfh->rss_delete) {
 		err = mlx5e_rx_res_rss_destroy(priv->rx_res, *rss_context);
 		goto unlock;
 	}

 	if (*rss_context == ETH_RXFH_CONTEXT_ALLOC) {
-		unsigned int count = priv->channels.params.num_channels;
-
 		err = mlx5e_rx_res_rss_init(priv->rx_res, rss_context, count);
 		if (err)
 			goto unlock;

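The cap enforced above is just the largest RQT size the XOR8 hash supports (256, per the rqt.c hunk earlier) divided by the uniform-spread factor. A small sketch of the arithmetic and the ethtool-style check; the factor value below is an assumed placeholder for illustration, not taken from the diff:

```c
#include <assert.h>

#define MAX_RQT_SIZE_ALLOWED_WITH_XOR8_HASH 256
#define UNIFORM_SPREAD_RQT_FACTOR 2	/* assumed value, for illustration only */

static unsigned int max_channels_for_xor8(void)
{
	return MAX_RQT_SIZE_ALLOWED_WITH_XOR8_HASH / UNIFORM_SPREAD_RQT_FACTOR;
}

/* mirrors the ethtool check above: reject channel counts over the cap */
static int check_channels_for_xor8(unsigned int count)
{
	return count > max_channels_for_xor8() ? -22 /* -EINVAL */ : 0;
}
```

With a factor of 2 the cap works out to 128 channels; requests at or below the cap pass, anything above is rejected, exactly as in the set_channels and set_rxfh paths.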

@@ -5726,9 +5726,7 @@ void mlx5e_priv_cleanup(struct mlx5e_priv *priv)
 	kfree(priv->tx_rates);
 	kfree(priv->txq2sq);
 	destroy_workqueue(priv->wq);
-	mutex_lock(&priv->state_lock);
 	mlx5e_selq_cleanup(&priv->selq);
-	mutex_unlock(&priv->state_lock);
 	free_cpumask_var(priv->scratchpad.cpumask);

 	for (i = 0; i < priv->htb_max_qos_sqs; i++)


@@ -398,6 +398,8 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 		     (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))) {
 			u8 metadata_index = be32_to_cpu(eseg->flow_table_metadata);

+			mlx5e_ptp_metadata_fifo_pop(&sq->ptpsq->metadata_freelist);
+
 			mlx5e_skb_cb_hwtstamp_init(skb);
 			mlx5e_ptp_metadata_map_put(&sq->ptpsq->metadata_map, skb,
 						   metadata_index);
@@ -496,9 +498,6 @@ mlx5e_sq_xmit_wqe(struct mlx5e_txqsq *sq, struct sk_buff *skb,

 err_drop:
 	stats->dropped++;
-	if (unlikely(sq->ptpsq && (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)))
-		mlx5e_ptp_metadata_fifo_push(&sq->ptpsq->metadata_freelist,
-					     be32_to_cpu(eseg->flow_table_metadata));
 	dev_kfree_skb_any(skb);
 	mlx5e_tx_flush(sq);
 }
@@ -657,7 +656,7 @@ static void mlx5e_cqe_ts_id_eseg(struct mlx5e_ptpsq *ptpsq, struct sk_buff *skb,
 {
 	if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
 		eseg->flow_table_metadata =
			cpu_to_be32(mlx5e_ptp_metadata_fifo_peek(&ptpsq->metadata_freelist));
 }

 static void mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq,
static void mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq, static void mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq,


@@ -1868,6 +1868,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
 	if (err)
 		goto abort;

+	dev->priv.eswitch = esw;
 	err = esw_offloads_init(esw);
 	if (err)
 		goto reps_err;
@@ -1892,11 +1893,6 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
 		esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_BASIC;
 	else
 		esw->offloads.encap = DEVLINK_ESWITCH_ENCAP_MODE_NONE;
-	if (MLX5_ESWITCH_MANAGER(dev) &&
-	    mlx5_esw_vport_match_metadata_supported(esw))
-		esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
-
-	dev->priv.eswitch = esw;
 	BLOCKING_INIT_NOTIFIER_HEAD(&esw->n_head);

 	esw_info(dev,
@@ -1908,6 +1904,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
 reps_err:
 	mlx5_esw_vports_cleanup(esw);
+	dev->priv.eswitch = NULL;
 abort:
 	if (esw->work_queue)
 		destroy_workqueue(esw->work_queue);
@@ -1926,7 +1923,6 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)

 	esw_info(esw->dev, "cleanup\n");

-	esw->dev->priv.eswitch = NULL;
 	destroy_workqueue(esw->work_queue);
 	WARN_ON(refcount_read(&esw->qos.refcnt));
 	mutex_destroy(&esw->state_lock);
@@ -1937,6 +1933,7 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)
 	mutex_destroy(&esw->offloads.encap_tbl_lock);
 	mutex_destroy(&esw->offloads.decap_tbl_lock);
 	esw_offloads_cleanup(esw);
+	esw->dev->priv.eswitch = NULL;
 	mlx5_esw_vports_cleanup(esw);
 	debugfs_remove_recursive(esw->debugfs_root);
 	devl_params_unregister(priv_to_devlink(esw->dev), mlx5_eswitch_params,


@@ -43,6 +43,7 @@
 #include "rdma.h"
 #include "en.h"
 #include "fs_core.h"
+#include "lib/mlx5.h"
 #include "lib/devcom.h"
 #include "lib/eq.h"
 #include "lib/fs_chains.h"
@@ -2476,6 +2477,10 @@ int esw_offloads_init(struct mlx5_eswitch *esw)
 	if (err)
 		return err;

+	if (MLX5_ESWITCH_MANAGER(esw->dev) &&
+	    mlx5_esw_vport_match_metadata_supported(esw))
+		esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
+
 	err = devl_params_register(priv_to_devlink(esw->dev),
 				   esw_devlink_params,
 				   ARRAY_SIZE(esw_devlink_params));
@@ -3707,6 +3712,12 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
 	if (esw_mode_from_devlink(mode, &mlx5_mode))
 		return -EINVAL;

+	if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV && mlx5_get_sd(esw->dev)) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Can't change E-Switch mode to switchdev when multi-PF netdev (Socket Direct) is configured.");
+		return -EPERM;
+	}
+
 	mlx5_lag_disable_change(esw->dev);
 	err = mlx5_esw_try_lock(esw);
 	if (err < 0) {


@@ -1664,6 +1664,16 @@ static int create_auto_flow_group(struct mlx5_flow_table *ft,
 	return err;
 }

+static bool mlx5_pkt_reformat_cmp(struct mlx5_pkt_reformat *p1,
+				  struct mlx5_pkt_reformat *p2)
+{
+	return p1->owner == p2->owner &&
+	       (p1->owner == MLX5_FLOW_RESOURCE_OWNER_FW ?
+		p1->id == p2->id :
+		mlx5_fs_dr_action_get_pkt_reformat_id(p1) ==
+		mlx5_fs_dr_action_get_pkt_reformat_id(p2));
+}
+
 static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1,
 				struct mlx5_flow_destination *d2)
 {
@@ -1675,8 +1685,8 @@ static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1,
 		     ((d1->vport.flags & MLX5_FLOW_DEST_VPORT_VHCA_ID) ?
 		      (d1->vport.vhca_id == d2->vport.vhca_id) : true) &&
 		     ((d1->vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID) ?
-		      (d1->vport.pkt_reformat->id ==
-		       d2->vport.pkt_reformat->id) : true)) ||
+		      mlx5_pkt_reformat_cmp(d1->vport.pkt_reformat,
+					    d2->vport.pkt_reformat) : true)) ||
 		    (d1->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE &&
 		     d1->ft == d2->ft) ||
 		    (d1->type == MLX5_FLOW_DESTINATION_TYPE_TIR &&
@@ -1808,8 +1818,9 @@ static struct mlx5_flow_handle *add_rule_fg(struct mlx5_flow_group *fg,
 	}
 	trace_mlx5_fs_set_fte(fte, false);

+	/* Link newly added rules into the tree. */
 	for (i = 0; i < handle->num_rules; i++) {
-		if (refcount_read(&handle->rule[i]->node.refcount) == 1) {
+		if (!handle->rule[i]->node.parent) {
 			tree_add_node(&handle->rule[i]->node, &fte->node);
 			trace_mlx5_fs_add_rule(handle->rule[i]);
 		}


@@ -1480,6 +1480,14 @@ int mlx5_init_one_devl_locked(struct mlx5_core_dev *dev)
 	if (err)
 		goto err_register;

+	err = mlx5_crdump_enable(dev);
+	if (err)
+		mlx5_core_err(dev, "mlx5_crdump_enable failed with error code %d\n", err);
+
+	err = mlx5_hwmon_dev_register(dev);
+	if (err)
+		mlx5_core_err(dev, "mlx5_hwmon_dev_register failed with error code %d\n", err);
+
 	mutex_unlock(&dev->intf_state_mutex);
 	return 0;

@@ -1505,7 +1513,10 @@ int mlx5_init_one(struct mlx5_core_dev *dev)
 	int err;

 	devl_lock(devlink);
+	devl_register(devlink);
 	err = mlx5_init_one_devl_locked(dev);
+	if (err)
+		devl_unregister(devlink);
 	devl_unlock(devlink);
 	return err;
 }
@@ -1517,6 +1528,8 @@ void mlx5_uninit_one(struct mlx5_core_dev *dev)

 	devl_lock(devlink);
 	mutex_lock(&dev->intf_state_mutex);

+	mlx5_hwmon_dev_unregister(dev);
+	mlx5_crdump_disable(dev);
 	mlx5_unregister_device(dev);

 	if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
@@ -1534,6 +1547,7 @@ void mlx5_uninit_one(struct mlx5_core_dev *dev)
 	mlx5_function_teardown(dev, true);
 out:
 	mutex_unlock(&dev->intf_state_mutex);
+	devl_unregister(devlink);
 	devl_unlock(devlink);
 }

@@ -1680,16 +1694,20 @@ int mlx5_init_one_light(struct mlx5_core_dev *dev)
 	}

 	devl_lock(devlink);
+	devl_register(devlink);
 	err = mlx5_devlink_params_register(priv_to_devlink(dev));
-	devl_unlock(devlink);
 	if (err) {
 		mlx5_core_warn(dev, "mlx5_devlink_param_reg err = %d\n", err);
 		goto query_hca_caps_err;
 	}

+	devl_unlock(devlink);
 	return 0;

 query_hca_caps_err:
+	devl_unregister(devlink);
+	devl_unlock(devlink);
 	mlx5_function_disable(dev, true);
 out:
 	dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
@@ -1702,6 +1720,7 @@ void mlx5_uninit_one_light(struct mlx5_core_dev *dev)

 	devl_lock(devlink);
 	mlx5_devlink_params_unregister(priv_to_devlink(dev));
+	devl_unregister(devlink);
 	devl_unlock(devlink);
 	if (dev->state != MLX5_DEVICE_STATE_UP)
 		return;
@@ -1943,16 +1962,7 @@ static int probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto err_init_one;
 	}

-	err = mlx5_crdump_enable(dev);
-	if (err)
-		dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err);
-
-	err = mlx5_hwmon_dev_register(dev);
-	if (err)
-		mlx5_core_err(dev, "mlx5_hwmon_dev_register failed with error code %d\n", err);
-
 	pci_save_state(pdev);
-	devlink_register(devlink);
 	return 0;

 err_init_one:
@@ -1973,16 +1983,9 @@ static void remove_one(struct pci_dev *pdev)
 	struct devlink *devlink = priv_to_devlink(dev);

 	set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state);
-	/* mlx5_drain_fw_reset() and mlx5_drain_health_wq() are using
-	 * devlink notify APIs.
-	 * Hence, we must drain them before unregistering the devlink.
-	 */
 	mlx5_drain_fw_reset(dev);
 	mlx5_drain_health_wq(dev);
-	devlink_unregister(devlink);
 	mlx5_sriov_disable(pdev, false);
-	mlx5_hwmon_dev_unregister(dev);
-	mlx5_crdump_disable(dev);
 	mlx5_uninit_one(dev);
 	mlx5_pci_close(dev);
 	mlx5_mdev_uninit(dev);


@@ -19,6 +19,7 @@
 #define MLX5_IRQ_CTRL_SF_MAX 8
 /* min num of vectors for SFs to be enabled */
 #define MLX5_IRQ_VEC_COMP_BASE_SF 2
+#define MLX5_IRQ_VEC_COMP_BASE 1

 #define MLX5_EQ_SHARE_IRQ_MAX_COMP (8)
 #define MLX5_EQ_SHARE_IRQ_MAX_CTRL (UINT_MAX)
@@ -246,6 +247,7 @@ static void irq_set_name(struct mlx5_irq_pool *pool, char *name, int vecidx)
 		return;
 	}

+	vecidx -= MLX5_IRQ_VEC_COMP_BASE;
 	snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_comp%d", vecidx);
 }

@@ -585,7 +587,7 @@ struct mlx5_irq *mlx5_irq_request_vector(struct mlx5_core_dev *dev, u16 cpu,
 	struct mlx5_irq_table *table = mlx5_irq_table_get(dev);
 	struct mlx5_irq_pool *pool = table->pcif_pool;
 	struct irq_affinity_desc af_desc;
-	int offset = 1;
+	int offset = MLX5_IRQ_VEC_COMP_BASE;

 	if (!pool->xa_num_irqs.max)
 		offset = 0;


@@ -101,7 +101,6 @@ static void mlx5_sf_dev_remove(struct auxiliary_device *adev)
 	devlink = priv_to_devlink(mdev);
 	set_bit(MLX5_BREAK_FW_WAIT, &mdev->intf_state);
 	mlx5_drain_health_wq(mdev);
-	devlink_unregister(devlink);
 	if (mlx5_dev_is_lightweight(mdev))
 		mlx5_uninit_one_light(mdev);
 	else


@@ -205,12 +205,11 @@ dr_dump_hex_print(char hex[DR_HEX_SIZE], char *src, u32 size)
 }

 static int
-dr_dump_rule_action_mem(struct seq_file *file, const u64 rule_id,
+dr_dump_rule_action_mem(struct seq_file *file, char *buff, const u64 rule_id,
 			struct mlx5dr_rule_action_member *action_mem)
 {
 	struct mlx5dr_action *action = action_mem->action;
 	const u64 action_id = DR_DBG_PTR_TO_ID(action);
-	char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
 	u64 hit_tbl_ptr, miss_tbl_ptr;
 	u32 hit_tbl_id, miss_tbl_id;
 	int ret;
@@ -488,10 +487,9 @@ dr_dump_rule_action_mem(struct seq_file *file, const u64 rule_id,
 }

 static int
-dr_dump_rule_mem(struct seq_file *file, struct mlx5dr_ste *ste,
+dr_dump_rule_mem(struct seq_file *file, char *buff, struct mlx5dr_ste *ste,
 		 bool is_rx, const u64 rule_id, u8 format_ver)
 {
-	char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
 	char hw_ste_dump[DR_HEX_SIZE];
 	u32 mem_rec_type;
 	int ret;
@@ -522,7 +520,8 @@ dr_dump_rule_mem(struct seq_file *file, struct mlx5dr_ste *ste,
 }

 static int
-dr_dump_rule_rx_tx(struct seq_file *file, struct mlx5dr_rule_rx_tx *rule_rx_tx,
+dr_dump_rule_rx_tx(struct seq_file *file, char *buff,
+		   struct mlx5dr_rule_rx_tx *rule_rx_tx,
 		   bool is_rx, const u64 rule_id, u8 format_ver)
 {
 	struct mlx5dr_ste *ste_arr[DR_RULE_MAX_STES + DR_ACTION_MAX_STES];
@@ -533,7 +532,7 @@ dr_dump_rule_rx_tx(struct seq_file *file, struct mlx5dr_rule_rx_tx *rule_rx_tx,
 		return 0;

 	while (i--) {
-		ret = dr_dump_rule_mem(file, ste_arr[i], is_rx, rule_id,
+		ret = dr_dump_rule_mem(file, buff, ste_arr[i], is_rx, rule_id,
 				       format_ver);
 		if (ret < 0)
 			return ret;
@@ -542,7 +541,8 @@ dr_dump_rule_rx_tx(struct seq_file *file, struct mlx5dr_rule_rx_tx *rule_rx_tx,
 	return 0;
 }

-static int dr_dump_rule(struct seq_file *file, struct mlx5dr_rule *rule)
+static noinline_for_stack int
+dr_dump_rule(struct seq_file *file, struct mlx5dr_rule *rule)
 {
 	struct mlx5dr_rule_action_member *action_mem;
 	const u64 rule_id = DR_DBG_PTR_TO_ID(rule);
@@ -565,19 +565,19 @@ static int dr_dump_rule(struct seq_file *file, struct mlx5dr_rule *rule)
 		return ret;

 	if (rx->nic_matcher) {
-		ret = dr_dump_rule_rx_tx(file, rx, true, rule_id, format_ver);
+		ret = dr_dump_rule_rx_tx(file, buff, rx, true, rule_id, format_ver);
 		if (ret < 0)
 			return ret;
 	}

 	if (tx->nic_matcher) {
-		ret = dr_dump_rule_rx_tx(file, tx, false, rule_id, format_ver);
+		ret = dr_dump_rule_rx_tx(file, buff, tx, false, rule_id, format_ver);
 		if (ret < 0)
 			return ret;
 	}

 	list_for_each_entry(action_mem, &rule->rule_actions_list, list) {
-		ret = dr_dump_rule_action_mem(file, rule_id, action_mem);
+		ret = dr_dump_rule_action_mem(file, buff, rule_id, action_mem);
 		if (ret < 0)
 			return ret;
 	}
@@ -586,10 +586,10 @@ static int dr_dump_rule(struct seq_file *file, struct mlx5dr_rule *rule)
 }

 static int
-dr_dump_matcher_mask(struct seq_file *file, struct mlx5dr_match_param *mask,
+dr_dump_matcher_mask(struct seq_file *file, char *buff,
+		     struct mlx5dr_match_param *mask,
 		     u8 criteria, const u64 matcher_id)
 {
-	char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
 	char dump[DR_HEX_SIZE];
 	int ret;
@@ -681,10 +681,10 @@ dr_dump_matcher_mask(struct seq_file *file, struct mlx5dr_match_param *mask,
 }

 static int
-dr_dump_matcher_builder(struct seq_file *file, struct mlx5dr_ste_build *builder,
+dr_dump_matcher_builder(struct seq_file *file, char *buff,
+			struct mlx5dr_ste_build *builder,
 			u32 index, bool is_rx, const u64 matcher_id)
 {
-	char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
 	int ret;

 	ret = snprintf(buff, MLX5DR_DEBUG_DUMP_BUFF_LENGTH,
@@ -702,11 +702,10 @@ dr_dump_matcher_builder(struct seq_file *file, struct mlx5dr_ste_build *builder,
 }

 static int
-dr_dump_matcher_rx_tx(struct seq_file *file, bool is_rx,
+dr_dump_matcher_rx_tx(struct seq_file *file, char *buff, bool is_rx,
 		      struct mlx5dr_matcher_rx_tx *matcher_rx_tx,
 		      const u64 matcher_id)
 {
-	char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
 	enum dr_dump_rec_type rec_type;
 	u64 s_icm_addr, e_icm_addr;
 	int i, ret;
@@ -731,7 +730,7 @@ dr_dump_matcher_rx_tx(struct seq_file *file, bool is_rx,
 		return ret;

 	for (i = 0; i < matcher_rx_tx->num_of_builders; i++) {
-		ret = dr_dump_matcher_builder(file,
+		ret = dr_dump_matcher_builder(file, buff,
 					      &matcher_rx_tx->ste_builder[i],
 					      i, is_rx, matcher_id);
 		if (ret < 0)
@@ -741,7 +740,7 @@ dr_dump_matcher_rx_tx(struct seq_file *file, bool is_rx,
 	return 0;
 }

-static int
+static noinline_for_stack int
 dr_dump_matcher(struct seq_file *file, struct mlx5dr_matcher *matcher)
 {
 	struct mlx5dr_matcher_rx_tx *rx = &matcher->rx;
@@ -763,19 +762,19 @@ dr_dump_matcher(struct seq_file *file, struct mlx5dr_matcher *matcher)
 	if (ret)
 		return ret;

-	ret = dr_dump_matcher_mask(file, &matcher->mask,
+	ret = dr_dump_matcher_mask(file, buff, &matcher->mask,
 				   matcher->match_criteria, matcher_id);
 	if (ret < 0)
 		return ret;

 	if (rx->nic_tbl) {
-		ret = dr_dump_matcher_rx_tx(file, true, rx, matcher_id);
+		ret = dr_dump_matcher_rx_tx(file, buff, true, rx, matcher_id);
 		if (ret < 0)
 			return ret;
 	}

 	if (tx->nic_tbl) {
-		ret = dr_dump_matcher_rx_tx(file, false, tx, matcher_id);
+		ret = dr_dump_matcher_rx_tx(file, buff, false, tx, matcher_id);
 		if (ret < 0)
 			return ret;
 	}
@@ -803,11 +802,10 @@ dr_dump_matcher_all(struct seq_file *file, struct mlx5dr_matcher *matcher)
 }

 static int
-dr_dump_table_rx_tx(struct seq_file *file, bool is_rx,
+dr_dump_table_rx_tx(struct seq_file *file, char *buff, bool is_rx,
 		    struct mlx5dr_table_rx_tx *table_rx_tx,
 		    const u64 table_id)
 {
-	char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
 	enum dr_dump_rec_type rec_type;
 	u64 s_icm_addr;
 	int ret;
@@ -829,7 +827,8 @@ dr_dump_table_rx_tx(struct seq_file *file, bool is_rx,
 	return 0;
 }

-static int dr_dump_table(struct seq_file *file, struct mlx5dr_table *table)
+static noinline_for_stack int
+dr_dump_table(struct seq_file *file, struct mlx5dr_table *table)
 {
 	struct mlx5dr_table_rx_tx *rx = &table->rx;
 	struct mlx5dr_table_rx_tx *tx = &table->tx;
@@ -848,14 +847,14 @@ static int dr_dump_table(struct seq_file *file, struct mlx5dr_table *table)
 		return ret;

 	if (rx->nic_dmn) {
-		ret = dr_dump_table_rx_tx(file, true, rx,
+		ret = dr_dump_table_rx_tx(file, buff, true, rx,
 					  DR_DBG_PTR_TO_ID(table));
 		if (ret < 0)
 			return ret;
 	}

 	if (tx->nic_dmn) {
-		ret = dr_dump_table_rx_tx(file, false, tx,
+		ret = dr_dump_table_rx_tx(file, buff, false, tx,
 					  DR_DBG_PTR_TO_ID(table));
 		if (ret < 0)
 			return ret;
@@ -881,10 +880,10 @@ static int dr_dump_table_all(struct seq_file *file, struct mlx5dr_table *tbl)
 }

 static int
-dr_dump_send_ring(struct seq_file *file, struct mlx5dr_send_ring *ring,
+dr_dump_send_ring(struct seq_file *file, char *buff,
+		  struct mlx5dr_send_ring *ring,
 		  const u64 domain_id)
 {
-	char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
 	int ret;

 	ret = snprintf(buff, MLX5DR_DEBUG_DUMP_BUFF_LENGTH,
@@ -902,13 +901,13 @@ dr_dump_send_ring(struct seq_file *file, struct mlx5dr_send_ring *ring,
 	return 0;
 }

-static noinline_for_stack int
+static int
 dr_dump_domain_info_flex_parser(struct seq_file *file,
+				char *buff,
 				const char *flex_parser_name,
 				const u8 flex_parser_value,
 				const u64 domain_id)
 {
-	char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
 	int ret;

 	ret = snprintf(buff, MLX5DR_DEBUG_DUMP_BUFF_LENGTH,
@@ -925,11 +924,11 @@ dr_dump_domain_info_flex_parser(struct seq_file *file,
 	return 0;
 }

-static noinline_for_stack int
-dr_dump_domain_info_caps(struct seq_file *file, struct mlx5dr_cmd_caps *caps,
+static int
+dr_dump_domain_info_caps(struct seq_file *file, char *buff,
+			 struct mlx5dr_cmd_caps *caps,
 			 const u64 domain_id)
{ {
char buff[MLX5DR_DEBUG_DUMP_BUFF_LENGTH];
struct mlx5dr_cmd_vport_cap *vport_caps; struct mlx5dr_cmd_vport_cap *vport_caps;
unsigned long i, vports_num; unsigned long i, vports_num;
int ret; int ret;
@ -969,34 +968,35 @@ dr_dump_domain_info_caps(struct seq_file *file, struct mlx5dr_cmd_caps *caps,
} }
static int static int
dr_dump_domain_info(struct seq_file *file, struct mlx5dr_domain_info *info, dr_dump_domain_info(struct seq_file *file, char *buff,
struct mlx5dr_domain_info *info,
const u64 domain_id) const u64 domain_id)
{ {
int ret; int ret;
ret = dr_dump_domain_info_caps(file, &info->caps, domain_id); ret = dr_dump_domain_info_caps(file, buff, &info->caps, domain_id);
if (ret < 0) if (ret < 0)
return ret; return ret;
ret = dr_dump_domain_info_flex_parser(file, "icmp_dw0", ret = dr_dump_domain_info_flex_parser(file, buff, "icmp_dw0",
info->caps.flex_parser_id_icmp_dw0, info->caps.flex_parser_id_icmp_dw0,
domain_id); domain_id);
if (ret < 0) if (ret < 0)
return ret; return ret;
ret = dr_dump_domain_info_flex_parser(file, "icmp_dw1", ret = dr_dump_domain_info_flex_parser(file, buff, "icmp_dw1",
info->caps.flex_parser_id_icmp_dw1, info->caps.flex_parser_id_icmp_dw1,
domain_id); domain_id);
if (ret < 0) if (ret < 0)
return ret; return ret;
ret = dr_dump_domain_info_flex_parser(file, "icmpv6_dw0", ret = dr_dump_domain_info_flex_parser(file, buff, "icmpv6_dw0",
info->caps.flex_parser_id_icmpv6_dw0, info->caps.flex_parser_id_icmpv6_dw0,
domain_id); domain_id);
if (ret < 0) if (ret < 0)
return ret; return ret;
ret = dr_dump_domain_info_flex_parser(file, "icmpv6_dw1", ret = dr_dump_domain_info_flex_parser(file, buff, "icmpv6_dw1",
info->caps.flex_parser_id_icmpv6_dw1, info->caps.flex_parser_id_icmpv6_dw1,
domain_id); domain_id);
if (ret < 0) if (ret < 0)
@ -1032,12 +1032,12 @@ dr_dump_domain(struct seq_file *file, struct mlx5dr_domain *dmn)
if (ret) if (ret)
return ret; return ret;
ret = dr_dump_domain_info(file, &dmn->info, domain_id); ret = dr_dump_domain_info(file, buff, &dmn->info, domain_id);
if (ret < 0) if (ret < 0)
return ret; return ret;
if (dmn->info.supp_sw_steering) { if (dmn->info.supp_sw_steering) {
ret = dr_dump_send_ring(file, dmn->send_ring, domain_id); ret = dr_dump_send_ring(file, buff, dmn->send_ring, domain_id);
if (ret < 0) if (ret < 0)
return ret; return ret;
} }


@@ -368,7 +368,6 @@ union ks8851_tx_hdr {
  * @rdfifo: FIFO read callback
  * @wrfifo: FIFO write callback
  * @start_xmit: start_xmit() implementation callback
- * @rx_skb: rx_skb() implementation callback
  * @flush_tx_work: flush_tx_work() implementation callback
  *
  * The @statelock is used to protect information in the structure which may
@@ -423,8 +422,6 @@ struct ks8851_net {
 				    struct sk_buff *txp, bool irq);
 	netdev_tx_t		(*start_xmit)(struct sk_buff *skb,
 					      struct net_device *dev);
-	void			(*rx_skb)(struct ks8851_net *ks,
-					  struct sk_buff *skb);
 	void			(*flush_tx_work)(struct ks8851_net *ks);
 };


@@ -231,16 +231,6 @@ static void ks8851_dbg_dumpkkt(struct ks8851_net *ks, u8 *rxpkt)
 		   rxpkt[12], rxpkt[13], rxpkt[14], rxpkt[15]);
 }
 
-/**
- * ks8851_rx_skb - receive skbuff
- * @ks: The device state.
- * @skb: The skbuff
- */
-static void ks8851_rx_skb(struct ks8851_net *ks, struct sk_buff *skb)
-{
-	ks->rx_skb(ks, skb);
-}
-
 /**
  * ks8851_rx_pkts - receive packets from the host
  * @ks: The device information.
@@ -309,7 +299,7 @@ static void ks8851_rx_pkts(struct ks8851_net *ks)
 			ks8851_dbg_dumpkkt(ks, rxpkt);
 
 		skb->protocol = eth_type_trans(skb, ks->netdev);
-		ks8851_rx_skb(ks, skb);
+		__netif_rx(skb);
 
 		ks->netdev->stats.rx_packets++;
 		ks->netdev->stats.rx_bytes += rxlen;
@@ -340,6 +330,8 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
 	unsigned long flags;
 	unsigned int status;
 
+	local_bh_disable();
+
 	ks8851_lock(ks, &flags);
 
 	status = ks8851_rdreg16(ks, KS_ISR);
@@ -416,6 +408,8 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
 	if (status & IRQ_LCI)
 		mii_check_link(&ks->mii);
 
+	local_bh_enable();
+
 	return IRQ_HANDLED;
 }


@@ -210,16 +210,6 @@ static void ks8851_wrfifo_par(struct ks8851_net *ks, struct sk_buff *txp,
 	iowrite16_rep(ksp->hw_addr, txp->data, len / 2);
 }
 
-/**
- * ks8851_rx_skb_par - receive skbuff
- * @ks: The device state.
- * @skb: The skbuff
- */
-static void ks8851_rx_skb_par(struct ks8851_net *ks, struct sk_buff *skb)
-{
-	netif_rx(skb);
-}
-
 static unsigned int ks8851_rdreg16_par_txqcr(struct ks8851_net *ks)
 {
 	return ks8851_rdreg16_par(ks, KS_TXQCR);
@@ -298,7 +288,6 @@ static int ks8851_probe_par(struct platform_device *pdev)
 	ks->rdfifo = ks8851_rdfifo_par;
 	ks->wrfifo = ks8851_wrfifo_par;
 	ks->start_xmit = ks8851_start_xmit_par;
-	ks->rx_skb = ks8851_rx_skb_par;
 
 #define STD_IRQ (IRQ_LCI |	/* Link Change */	\
 		 IRQ_RXI |	/* RX done */		\


@@ -298,16 +298,6 @@ static unsigned int calc_txlen(unsigned int len)
 	return ALIGN(len + 4, 4);
 }
 
-/**
- * ks8851_rx_skb_spi - receive skbuff
- * @ks: The device state
- * @skb: The skbuff
- */
-static void ks8851_rx_skb_spi(struct ks8851_net *ks, struct sk_buff *skb)
-{
-	netif_rx(skb);
-}
-
 /**
  * ks8851_tx_work - process tx packet(s)
  * @work: The work strucutre what was scheduled.
@@ -435,7 +425,6 @@ static int ks8851_probe_spi(struct spi_device *spi)
 	ks->rdfifo = ks8851_rdfifo_spi;
 	ks->wrfifo = ks8851_wrfifo_spi;
 	ks->start_xmit = ks8851_start_xmit_spi;
-	ks->rx_skb = ks8851_rx_skb_spi;
 	ks->flush_tx_work = ks8851_flush_tx_work_spi;
 
 #define STD_IRQ (IRQ_LCI |	/* Link Change */	\


@@ -731,7 +731,7 @@ static int sparx5_port_pcs_low_set(struct sparx5 *sparx5,
 	bool sgmii = false, inband_aneg = false;
 	int err;
 
-	if (port->conf.inband) {
+	if (conf->inband) {
 		if (conf->portmode == PHY_INTERFACE_MODE_SGMII ||
 		    conf->portmode == PHY_INTERFACE_MODE_QSGMII)
 			inband_aneg = true; /* Cisco-SGMII in-band-aneg */
@@ -948,7 +948,7 @@ int sparx5_port_pcs_set(struct sparx5 *sparx5,
 	if (err)
 		return -EINVAL;
 
-	if (port->conf.inband) {
+	if (conf->inband) {
 		/* Enable/disable 1G counters in ASM */
 		spx5_rmw(ASM_PORT_CFG_CSC_STAT_DIS_SET(high_speed_dev),
 			 ASM_PORT_CFG_CSC_STAT_DIS,


@@ -73,6 +73,7 @@ enum mac_version {
 };
 
 struct rtl8169_private;
+struct r8169_led_classdev;
 
 void r8169_apply_firmware(struct rtl8169_private *tp);
 u16 rtl8168h_2_get_adc_bias_ioffset(struct rtl8169_private *tp);
@@ -84,7 +85,8 @@ void r8169_get_led_name(struct rtl8169_private *tp, int idx,
 			char *buf, int buf_len);
 int rtl8168_get_led_mode(struct rtl8169_private *tp);
 int rtl8168_led_mod_ctrl(struct rtl8169_private *tp, u16 mask, u16 val);
-void rtl8168_init_leds(struct net_device *ndev);
+struct r8169_led_classdev *rtl8168_init_leds(struct net_device *ndev);
 int rtl8125_get_led_mode(struct rtl8169_private *tp, int index);
 int rtl8125_set_led_mode(struct rtl8169_private *tp, int index, u16 mode);
-void rtl8125_init_leds(struct net_device *ndev);
+struct r8169_led_classdev *rtl8125_init_leds(struct net_device *ndev);
+void r8169_remove_leds(struct r8169_led_classdev *leds);


@@ -146,22 +146,22 @@ static void rtl8168_setup_ldev(struct r8169_led_classdev *ldev,
 	led_cdev->hw_control_get_device = r8169_led_hw_control_get_device;
 
 	/* ignore errors */
-	devm_led_classdev_register(&ndev->dev, led_cdev);
+	led_classdev_register(&ndev->dev, led_cdev);
 }
 
-void rtl8168_init_leds(struct net_device *ndev)
+struct r8169_led_classdev *rtl8168_init_leds(struct net_device *ndev)
 {
-	/* bind resource mgmt to netdev */
-	struct device *dev = &ndev->dev;
 	struct r8169_led_classdev *leds;
 	int i;
 
-	leds = devm_kcalloc(dev, RTL8168_NUM_LEDS, sizeof(*leds), GFP_KERNEL);
+	leds = kcalloc(RTL8168_NUM_LEDS + 1, sizeof(*leds), GFP_KERNEL);
 	if (!leds)
-		return;
+		return NULL;
 
 	for (i = 0; i < RTL8168_NUM_LEDS; i++)
 		rtl8168_setup_ldev(leds + i, ndev, i);
+
+	return leds;
 }
 
 static int rtl8125_led_hw_control_is_supported(struct led_classdev *led_cdev,
@@ -245,20 +245,31 @@ static void rtl8125_setup_led_ldev(struct r8169_led_classdev *ldev,
 	led_cdev->hw_control_get_device = r8169_led_hw_control_get_device;
 
 	/* ignore errors */
-	devm_led_classdev_register(&ndev->dev, led_cdev);
+	led_classdev_register(&ndev->dev, led_cdev);
 }
 
-void rtl8125_init_leds(struct net_device *ndev)
+struct r8169_led_classdev *rtl8125_init_leds(struct net_device *ndev)
 {
-	/* bind resource mgmt to netdev */
-	struct device *dev = &ndev->dev;
 	struct r8169_led_classdev *leds;
 	int i;
 
-	leds = devm_kcalloc(dev, RTL8125_NUM_LEDS, sizeof(*leds), GFP_KERNEL);
+	leds = kcalloc(RTL8125_NUM_LEDS + 1, sizeof(*leds), GFP_KERNEL);
 	if (!leds)
-		return;
+		return NULL;
 
 	for (i = 0; i < RTL8125_NUM_LEDS; i++)
 		rtl8125_setup_led_ldev(leds + i, ndev, i);
+
+	return leds;
+}
+
+void r8169_remove_leds(struct r8169_led_classdev *leds)
+{
+	if (!leds)
+		return;
+
+	for (struct r8169_led_classdev *l = leds; l->ndev; l++)
+		led_classdev_unregister(&l->led);
+
+	kfree(leds);
 }
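[ Editor's sketch, not part of the patch: the LED rework above allocates NUM_LEDS + 1 zeroed slots so the trailing all-zero slot acts as a terminator for r8169_remove_leds(). A minimal userspace analogue of that sentinel-array pattern, with illustrative names (`led_slot`, `alloc_leds`) rather than the driver's API: ]

```c
#include <stdlib.h>

/* Sentinel-array pattern: allocate n + 1 zeroed slots so iteration
 * can stop at the first slot whose back-pointer is still NULL,
 * without storing an explicit element count anywhere. */
struct led_slot { void *ndev; };

static struct led_slot *alloc_leds(int n, void *ndev)
{
	struct led_slot *leds = calloc(n + 1, sizeof(*leds));
	int i;

	if (!leds)
		return NULL;
	for (i = 0; i < n; i++)
		leds[i].ndev = ndev;	/* last slot stays NULL: sentinel */
	return leds;
}

static int count_leds(const struct led_slot *leds)
{
	int n = 0;

	while (leds && leds[n].ndev)	/* walk until the sentinel */
		n++;
	return n;
}
```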


@@ -647,6 +647,8 @@ struct rtl8169_private {
 	const char *fw_name;
 	struct rtl_fw *rtl_fw;
 
+	struct r8169_led_classdev *leds;
+
 	u32 ocp_base;
 };
 
@@ -5044,6 +5046,9 @@ static void rtl_remove_one(struct pci_dev *pdev)
 
 	cancel_work_sync(&tp->wk.work);
 
+	if (IS_ENABLED(CONFIG_R8169_LEDS))
+		r8169_remove_leds(tp->leds);
+
 	unregister_netdev(tp->dev);
 
 	if (tp->dash_type != RTL_DASH_NONE)
@@ -5501,9 +5506,9 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	if (IS_ENABLED(CONFIG_R8169_LEDS)) {
 		if (rtl_is_8125(tp))
-			rtl8125_init_leds(dev);
+			tp->leds = rtl8125_init_leds(dev);
 		else if (tp->mac_version > RTL_GIGA_MAC_VER_06)
-			rtl8168_init_leds(dev);
+			tp->leds = rtl8168_init_leds(dev);
 	}
 
 	netdev_info(dev, "%s, %pM, XID %03x, IRQ %d\n",


@@ -52,6 +52,7 @@ struct stmmac_counters {
 	unsigned int mmc_tx_excessdef;
 	unsigned int mmc_tx_pause_frame;
 	unsigned int mmc_tx_vlan_frame_g;
+	unsigned int mmc_tx_oversize_g;
 	unsigned int mmc_tx_lpi_usec;
 	unsigned int mmc_tx_lpi_tran;
@@ -80,6 +81,7 @@ struct stmmac_counters {
 	unsigned int mmc_rx_fifo_overflow;
 	unsigned int mmc_rx_vlan_frames_gb;
 	unsigned int mmc_rx_watchdog_error;
+	unsigned int mmc_rx_error;
 	unsigned int mmc_rx_lpi_usec;
 	unsigned int mmc_rx_lpi_tran;
 	unsigned int mmc_rx_discard_frames_gb;


@@ -53,6 +53,7 @@
 #define MMC_TX_EXCESSDEF		0x6c
 #define MMC_TX_PAUSE_FRAME		0x70
 #define MMC_TX_VLAN_FRAME_G		0x74
+#define MMC_TX_OVERSIZE_G		0x78
 
 /* MMC RX counter registers */
 #define MMC_RX_FRAMECOUNT_GB		0x80
@@ -79,6 +80,13 @@
 #define MMC_RX_FIFO_OVERFLOW		0xd4
 #define MMC_RX_VLAN_FRAMES_GB		0xd8
 #define MMC_RX_WATCHDOG_ERROR		0xdc
+#define MMC_RX_ERROR			0xe0
+
+#define MMC_TX_LPI_USEC			0xec
+#define MMC_TX_LPI_TRAN			0xf0
+#define MMC_RX_LPI_USEC			0xf4
+#define MMC_RX_LPI_TRAN			0xf8
+
 /* IPC*/
 #define MMC_RX_IPC_INTR_MASK		0x100
 #define MMC_RX_IPC_INTR			0x108
@@ -283,6 +291,9 @@ static void dwmac_mmc_read(void __iomem *mmcaddr, struct stmmac_counters *mmc)
 	mmc->mmc_tx_excessdef += readl(mmcaddr + MMC_TX_EXCESSDEF);
 	mmc->mmc_tx_pause_frame += readl(mmcaddr + MMC_TX_PAUSE_FRAME);
 	mmc->mmc_tx_vlan_frame_g += readl(mmcaddr + MMC_TX_VLAN_FRAME_G);
+	mmc->mmc_tx_oversize_g += readl(mmcaddr + MMC_TX_OVERSIZE_G);
+	mmc->mmc_tx_lpi_usec += readl(mmcaddr + MMC_TX_LPI_USEC);
+	mmc->mmc_tx_lpi_tran += readl(mmcaddr + MMC_TX_LPI_TRAN);
 
 	/* MMC RX counter registers */
 	mmc->mmc_rx_framecount_gb += readl(mmcaddr + MMC_RX_FRAMECOUNT_GB);
@@ -316,6 +327,10 @@ static void dwmac_mmc_read(void __iomem *mmcaddr, struct stmmac_counters *mmc)
 	mmc->mmc_rx_fifo_overflow += readl(mmcaddr + MMC_RX_FIFO_OVERFLOW);
 	mmc->mmc_rx_vlan_frames_gb += readl(mmcaddr + MMC_RX_VLAN_FRAMES_GB);
 	mmc->mmc_rx_watchdog_error += readl(mmcaddr + MMC_RX_WATCHDOG_ERROR);
+	mmc->mmc_rx_error += readl(mmcaddr + MMC_RX_ERROR);
+	mmc->mmc_rx_lpi_usec += readl(mmcaddr + MMC_RX_LPI_USEC);
+	mmc->mmc_rx_lpi_tran += readl(mmcaddr + MMC_RX_LPI_TRAN);
+
 	/* IPv4 */
 	mmc->mmc_rx_ipv4_gd += readl(mmcaddr + MMC_RX_IPV4_GD);
 	mmc->mmc_rx_ipv4_hderr += readl(mmcaddr + MMC_RX_IPV4_HDERR);


@@ -212,6 +212,7 @@ static const struct stmmac_stats stmmac_mmc[] = {
 	STMMAC_MMC_STAT(mmc_tx_excessdef),
 	STMMAC_MMC_STAT(mmc_tx_pause_frame),
 	STMMAC_MMC_STAT(mmc_tx_vlan_frame_g),
+	STMMAC_MMC_STAT(mmc_tx_oversize_g),
 	STMMAC_MMC_STAT(mmc_tx_lpi_usec),
 	STMMAC_MMC_STAT(mmc_tx_lpi_tran),
 	STMMAC_MMC_STAT(mmc_rx_framecount_gb),
@@ -238,6 +239,7 @@ static const struct stmmac_stats stmmac_mmc[] = {
 	STMMAC_MMC_STAT(mmc_rx_fifo_overflow),
 	STMMAC_MMC_STAT(mmc_rx_vlan_frames_gb),
 	STMMAC_MMC_STAT(mmc_rx_watchdog_error),
+	STMMAC_MMC_STAT(mmc_rx_error),
 	STMMAC_MMC_STAT(mmc_rx_lpi_usec),
 	STMMAC_MMC_STAT(mmc_rx_lpi_tran),
 	STMMAC_MMC_STAT(mmc_rx_discard_frames_gb),


@@ -822,7 +822,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	__be16 sport;
 	int err;
 
-	if (!pskb_inet_may_pull(skb))
+	if (!skb_vlan_inet_prepare(skb))
 		return -EINVAL;
 
 	if (!gs4)
@@ -929,7 +929,7 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	__be16 sport;
 	int err;
 
-	if (!pskb_inet_may_pull(skb))
+	if (!skb_vlan_inet_prepare(skb))
 		return -EINVAL;
 
 	if (!gs6)


@@ -3807,6 +3807,7 @@ static int virtnet_set_rxfh(struct net_device *dev,
 			    struct netlink_ext_ack *extack)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
+	bool update = false;
 	int i;
 
 	if (rxfh->hfunc != ETH_RSS_HASH_NO_CHANGE &&
@@ -3814,13 +3815,28 @@ static int virtnet_set_rxfh(struct net_device *dev,
 		return -EOPNOTSUPP;
 
 	if (rxfh->indir) {
+		if (!vi->has_rss)
+			return -EOPNOTSUPP;
+
 		for (i = 0; i < vi->rss_indir_table_size; ++i)
 			vi->ctrl->rss.indirection_table[i] = rxfh->indir[i];
+		update = true;
 	}
-	if (rxfh->key)
-		memcpy(vi->ctrl->rss.key, rxfh->key, vi->rss_key_size);
 
-	virtnet_commit_rss_command(vi);
+	if (rxfh->key) {
+		/* If either _F_HASH_REPORT or _F_RSS are negotiated, the
+		 * device provides hash calculation capabilities, that is,
+		 * hash_key is configured.
+		 */
+		if (!vi->has_rss && !vi->has_rss_hash_report)
+			return -EOPNOTSUPP;
+
+		memcpy(vi->ctrl->rss.key, rxfh->key, vi->rss_key_size);
+		update = true;
+	}
+
+	if (update)
+		virtnet_commit_rss_command(vi);
 
 	return 0;
 }
@@ -4729,13 +4745,15 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT))
 		vi->has_rss_hash_report = true;
 
-	if (virtio_has_feature(vdev, VIRTIO_NET_F_RSS))
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) {
 		vi->has_rss = true;
 
-	if (vi->has_rss || vi->has_rss_hash_report) {
 		vi->rss_indir_table_size =
 			virtio_cread16(vdev, offsetof(struct virtio_net_config,
 				rss_max_indirection_table_length));
+	}
+
+	if (vi->has_rss || vi->has_rss_hash_report) {
 		vi->rss_key_size =
 			virtio_cread8(vdev, offsetof(struct virtio_net_config, rss_max_key_size));


@@ -50,11 +50,36 @@ static inline int copy_from_sockptr_offset(void *dst, sockptr_t src,
 	return 0;
 }
 
+/* Deprecated.
+ * This is unsafe, unless caller checked user provided optlen.
+ * Prefer copy_safe_from_sockptr() instead.
+ */
 static inline int copy_from_sockptr(void *dst, sockptr_t src, size_t size)
 {
 	return copy_from_sockptr_offset(dst, src, 0, size);
 }
 
+/**
+ * copy_safe_from_sockptr: copy a struct from sockptr
+ * @dst:   Destination address, in kernel space. This buffer must be @ksize
+ *         bytes long.
+ * @ksize: Size of @dst struct.
+ * @optval: Source address. (in user or kernel space)
+ * @optlen: Size of @optval data.
+ *
+ * Returns:
+ *  * -EINVAL: @optlen < @ksize
+ *  * -EFAULT: access to userspace failed.
+ *  * 0 : @ksize bytes were copied
+ */
+static inline int copy_safe_from_sockptr(void *dst, size_t ksize,
+					 sockptr_t optval, unsigned int optlen)
+{
+	if (optlen < ksize)
+		return -EINVAL;
+	return copy_from_sockptr(dst, optval, ksize);
+}
+
 static inline int copy_struct_from_sockptr(void *dst, size_t ksize,
 					   sockptr_t src, size_t usize)
 {
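[ Editor's sketch, not part of the patch: the contract of the new helper above — reject a user buffer shorter than the kernel struct, then copy exactly @ksize bytes rather than the user-supplied length. A plain userspace analogue; `safe_copy_opt` is an illustrative name, not the kernel API: ]

```c
#include <errno.h>
#include <string.h>

/* Userspace sketch of the copy_safe_from_sockptr() contract:
 * refuse an option buffer shorter than the kernel struct, then
 * copy exactly ksize bytes -- never optlen -- into dst. */
static int safe_copy_opt(void *dst, size_t ksize,
			 const void *optval, size_t optlen)
{
	if (optlen < ksize)
		return -EINVAL;		/* user gave us too little data */
	memcpy(dst, optval, ksize);	/* ignore any trailing user bytes */
	return 0;
}
```

This is the shape the bluetooth and nfc setsockopt fixes in this pull converge on: the bound check happens once, in the helper, instead of being repeated (or forgotten) at every call site.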


@@ -135,10 +135,11 @@ static inline void u64_stats_inc(u64_stats_t *p)
 	p->v++;
 }
 
-static inline void u64_stats_init(struct u64_stats_sync *syncp)
-{
-	seqcount_init(&syncp->seq);
-}
+#define u64_stats_init(syncp)				\
+	do {						\
+		struct u64_stats_sync *__s = (syncp);	\
+		seqcount_init(&__s->seq);		\
+	} while (0)
 
 static inline void __u64_stats_update_begin(struct u64_stats_sync *syncp)
 {
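[ Editor's sketch, not part of the patch: per the summary above, this fixes u64_stats_init() for lockdep when used repeatedly in one file — seqcount_init() wants a distinct static lockdep key per initialization site, which an inline function collapses into one. The mechanism can be shown in userspace with an illustrative macro (`INIT_WITH_SITE_KEY` is a stand-in, not the kernel API): ]

```c
/* A macro body is expanded at each call site, so a `static` object
 * declared inside it is a distinct object per call site.  Wrapping
 * the same statements in an inline function would instead give every
 * caller the one static object inside that single function body. */
#define INIT_WITH_SITE_KEY(key_out)		\
	do {					\
		static int __site_key;		\
		*(key_out) = &__site_key;	\
	} while (0)

/* Two "files'/sites'" initializers: each gets its own key object. */
static int *site_a(void) { int *k; INIT_WITH_SITE_KEY(&k); return k; }
static int *site_b(void) { int *k; INIT_WITH_SITE_KEY(&k); return k; }
```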


@@ -438,6 +438,10 @@ static inline void in6_ifa_hold(struct inet6_ifaddr *ifp)
 	refcount_inc(&ifp->refcnt);
 }
 
+static inline bool in6_ifa_hold_safe(struct inet6_ifaddr *ifp)
+{
+	return refcount_inc_not_zero(&ifp->refcnt);
+}
+
 /*
  * compute link-local solicited-node multicast address


@@ -585,6 +585,15 @@ static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk,
 	return skb;
 }
 
+static inline int bt_copy_from_sockptr(void *dst, size_t dst_size,
+				       sockptr_t src, size_t src_size)
+{
+	if (dst_size > src_size)
+		return -EINVAL;
+
+	return copy_from_sockptr(dst, src, dst_size);
+}
+
 int bt_to_errno(u16 code);
 __u8 bt_status(int err);


@@ -361,6 +361,39 @@ static inline bool pskb_inet_may_pull(struct sk_buff *skb)
 	return pskb_network_may_pull(skb, nhlen);
 }
 
+/* Variant of pskb_inet_may_pull().
+ */
+static inline bool skb_vlan_inet_prepare(struct sk_buff *skb)
+{
+	int nhlen = 0, maclen = ETH_HLEN;
+	__be16 type = skb->protocol;
+
+	/* Essentially this is skb_protocol(skb, true)
+	 * And we get MAC len.
+	 */
+	if (eth_type_vlan(type))
+		type = __vlan_get_protocol(skb, type, &maclen);
+
+	switch (type) {
+#if IS_ENABLED(CONFIG_IPV6)
+	case htons(ETH_P_IPV6):
+		nhlen = sizeof(struct ipv6hdr);
+		break;
+#endif
+	case htons(ETH_P_IP):
+		nhlen = sizeof(struct iphdr);
+		break;
+	}
+	/* For ETH_P_IPV6/ETH_P_IP we make sure to pull
+	 * a base network header in skb->head.
+	 */
+	if (!pskb_may_pull(skb, maclen + nhlen))
+		return false;
+
+	skb_set_network_header(skb, maclen);
+
+	return true;
+}
+
 static inline int ip_encap_hlen(struct ip_tunnel_encap *e)
 {
 	const struct ip_tunnel_encap_ops *ops;
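[ Editor's sketch, not part of the patch: the geneve header-validation fix above boils down to length arithmetic — the MAC header (possibly grown by a VLAN tag) plus a full base IPv4/IPv6 header must be present before either is dereferenced. A userspace sketch of that math, using on-wire constants and an illustrative `min_pull_len` helper rather than the skb API: ]

```c
#include <stddef.h>
#include <stdint.h>

/* On-wire header sizes (same values the kernel constants expand to). */
enum { ETH_HLEN_ = 14, VLAN_HLEN_ = 4, IPV4_HLEN = 20, IPV6_HLEN = 40 };
enum { ETH_P_IP_ = 0x0800, ETH_P_IPV6_ = 0x86DD };

/* Minimum number of bytes skb_vlan_inet_prepare() must be able to
 * pull before the network header may be trusted. */
static size_t min_pull_len(uint16_t type, int vlan_tagged)
{
	size_t maclen = ETH_HLEN_ + (vlan_tagged ? VLAN_HLEN_ : 0);
	size_t nhlen = 0;

	if (type == ETH_P_IP_)
		nhlen = IPV4_HLEN;
	else if (type == ETH_P_IPV6_)
		nhlen = IPV6_HLEN;
	/* unknown ethertypes validate only the MAC header */
	return maclen + nhlen;
}
```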


@@ -594,13 +594,15 @@ static void test_ip_fast_csum(struct kunit *test)
 
 static void test_csum_ipv6_magic(struct kunit *test)
 {
-#if defined(CONFIG_NET)
 	const struct in6_addr *saddr;
 	const struct in6_addr *daddr;
 	unsigned int len;
 	unsigned char proto;
 	__wsum csum;
 
+	if (!IS_ENABLED(CONFIG_NET))
+		return;
+
 	const int daddr_offset = sizeof(struct in6_addr);
 	const int len_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr);
 	const int proto_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr) +
@@ -618,7 +620,6 @@ static void test_csum_ipv6_magic(struct kunit *test)
 		CHECK_EQ(to_sum16(expected_csum_ipv6_magic[i]),
 			 csum_ipv6_magic(saddr, daddr, len, proto, csum));
 	}
-#endif /* !CONFIG_NET */
 }
 
 static struct kunit_case __refdata checksum_test_cases[] = {


@@ -3948,7 +3948,7 @@ void batadv_tt_local_resize_to_mtu(struct net_device *soft_iface)
 
 	spin_lock_bh(&bat_priv->tt.commit_lock);
 
-	while (true) {
+	while (timeout) {
 		table_size = batadv_tt_local_table_transmit_size(bat_priv);
 		if (packet_size_max >= table_size)
 			break;
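[ Editor's sketch, not part of the patch: the batman-adv fix above turns an unbounded `while (true)` shrink loop into a bounded one, so a local TT that can never shrink below the packet size limit no longer spins forever. A userspace sketch of that bounded-retry shape; `resize_until_fits`, `halve`, and the sizes are illustrative, not the batman-adv API: ]

```c
/* Bounded-retry loop: try to shrink until the table fits, but give
 * up after `timeout` rounds instead of looping forever when the
 * shrink step can make no further progress. */
static int resize_until_fits(int table_size, int size_max, int timeout,
			     int (*shrink_once)(int))
{
	while (timeout) {
		if (table_size <= size_max)
			return table_size;	/* fits: done */
		table_size = shrink_once(table_size);
		timeout--;			/* bounded, not while (true) */
	}
	return table_size;	/* best effort after `timeout` rounds */
}

/* Sample shrink step for demonstration only. */
static int halve(int size)
{
	return size / 2;
}
```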


@@ -105,8 +105,10 @@ void hci_req_sync_complete(struct hci_dev *hdev, u8 result, u16 opcode,
 	if (hdev->req_status == HCI_REQ_PEND) {
 		hdev->req_result = result;
 		hdev->req_status = HCI_REQ_DONE;
-		if (skb)
+		if (skb) {
+			kfree_skb(hdev->req_skb);
 			hdev->req_skb = skb_get(skb);
+		}
 		wake_up_interruptible(&hdev->req_wait_q);
 	}
 }


@@ -1946,10 +1946,9 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
 
 	switch (optname) {
 	case HCI_DATA_DIR:
-		if (copy_from_sockptr(&opt, optval, sizeof(opt))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
+		if (err)
 			break;
-		}
 
 		if (opt)
 			hci_pi(sk)->cmsg_mask |= HCI_CMSG_DIR;
@@ -1958,10 +1957,9 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
 		break;
 
 	case HCI_TIME_STAMP:
-		if (copy_from_sockptr(&opt, optval, sizeof(opt))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
+		if (err)
 			break;
-		}
 
 		if (opt)
 			hci_pi(sk)->cmsg_mask |= HCI_CMSG_TSTAMP;
@@ -1979,11 +1977,9 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
 			uf.event_mask[1] = *((u32 *) f->event_mask + 1);
 		}
 
-		len = min_t(unsigned int, len, sizeof(uf));
-		if (copy_from_sockptr(&uf, optval, len)) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&uf, sizeof(uf), optval, len);
+		if (err)
 			break;
-		}
 
 		if (!capable(CAP_NET_RAW)) {
 			uf.type_mask &= hci_sec_filter.type_mask;
@@ -2042,10 +2038,9 @@ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
 			goto done;
 		}
 
-		if (copy_from_sockptr(&opt, optval, sizeof(opt))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
+		if (err)
 			break;
-		}
 
 		hci_pi(sk)->mtu = opt;
 		break;


@@ -2814,8 +2814,8 @@ static int hci_le_set_ext_scan_param_sync(struct hci_dev *hdev, u8 type,
 		if (qos->bcast.in.phy & BT_ISO_PHY_CODED) {
 			cp->scanning_phys |= LE_SCAN_PHY_CODED;
 			hci_le_scan_phy_params(phy, type,
-					       interval,
-					       window);
+					       interval * 3,
+					       window * 3);
 			num_phy++;
 			phy++;
 		}
@@ -2835,7 +2835,7 @@ static int hci_le_set_ext_scan_param_sync(struct hci_dev *hdev, u8 type,
 	if (scan_coded(hdev)) {
 		cp->scanning_phys |= LE_SCAN_PHY_CODED;
-		hci_le_scan_phy_params(phy, type, interval, window);
+		hci_le_scan_phy_params(phy, type, interval * 3, window * 3);
 		num_phy++;
 		phy++;
 	}


@@ -1451,8 +1451,8 @@ static bool check_ucast_qos(struct bt_iso_qos *qos)
 
 static bool check_bcast_qos(struct bt_iso_qos *qos)
 {
-	if (qos->bcast.sync_factor == 0x00)
-		return false;
+	if (!qos->bcast.sync_factor)
+		qos->bcast.sync_factor = 0x01;
 
 	if (qos->bcast.packing > 0x01)
 		return false;
@@ -1475,6 +1475,9 @@ static bool check_bcast_qos(struct bt_iso_qos *qos)
 	if (qos->bcast.skip > 0x01f3)
 		return false;
 
+	if (!qos->bcast.sync_timeout)
+		qos->bcast.sync_timeout = BT_ISO_SYNC_TIMEOUT;
+
 	if (qos->bcast.sync_timeout < 0x000a || qos->bcast.sync_timeout > 0x4000)
 		return false;
 
@@ -1484,6 +1487,9 @@ static bool check_bcast_qos(struct bt_iso_qos *qos)
 	if (qos->bcast.mse > 0x1f)
 		return false;
 
+	if (!qos->bcast.timeout)
+		qos->bcast.sync_timeout = BT_ISO_SYNC_TIMEOUT;
+
 	if (qos->bcast.timeout < 0x000a || qos->bcast.timeout > 0x4000)
 		return false;
 
@@ -1494,7 +1500,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
 			       sockptr_t optval, unsigned int optlen)
 {
 	struct sock *sk = sock->sk;
-	int len, err = 0;
+	int err = 0;
 	struct bt_iso_qos qos = default_qos;
 	u32 opt;
 
@@ -1509,10 +1515,9 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt)
 			set_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags);
@@ -1521,10 +1526,9 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
 		break;
 
 	case BT_PKT_STATUS:
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt)
 			set_bit(BT_SK_PKT_STATUS, &bt_sk(sk)->flags);
@@ -1539,17 +1543,9 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		len = min_t(unsigned int, sizeof(qos), optlen);
-
-		if (copy_from_sockptr(&qos, optval, len)) {
-			err = -EFAULT;
-			break;
-		}
-
-		if (len == sizeof(qos.ucast) && !check_ucast_qos(&qos)) {
-			err = -EINVAL;
+		err = bt_copy_from_sockptr(&qos, sizeof(qos), optval, optlen);
+		if (err)
 			break;
-		}
 
 		iso_pi(sk)->qos = qos;
 		iso_pi(sk)->qos_user_set = true;
@@ -1564,18 +1560,16 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
 		}
 
 		if (optlen > sizeof(iso_pi(sk)->base)) {
-			err = -EOVERFLOW;
+			err = -EINVAL;
 			break;
 		}
 
-		len = min_t(unsigned int, sizeof(iso_pi(sk)->base), optlen);
-
-		if (copy_from_sockptr(iso_pi(sk)->base, optval, len)) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(iso_pi(sk)->base, optlen, optval,
+					   optlen);
+		if (err)
 			break;
-		}
 
-		iso_pi(sk)->base_len = len;
+		iso_pi(sk)->base_len = optlen;
 
 		break;

@@ -4054,8 +4054,7 @@ static int l2cap_connect_req(struct l2cap_conn *conn,
 		return -EPROTO;
 
 	hci_dev_lock(hdev);
-	if (hci_dev_test_flag(hdev, HCI_MGMT) &&
-	    !test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &hcon->flags))
+	if (hci_dev_test_flag(hdev, HCI_MGMT))
 		mgmt_device_connected(hdev, hcon, NULL, 0);
 	hci_dev_unlock(hdev);

@@ -727,7 +727,7 @@ static int l2cap_sock_setsockopt_old(struct socket *sock, int optname,
 	struct sock *sk = sock->sk;
 	struct l2cap_chan *chan = l2cap_pi(sk)->chan;
 	struct l2cap_options opts;
-	int len, err = 0;
+	int err = 0;
 	u32 opt;
 
 	BT_DBG("sk %p", sk);
@@ -754,11 +754,9 @@ static int l2cap_sock_setsockopt_old(struct socket *sock, int optname,
 		opts.max_tx = chan->max_tx;
 		opts.txwin_size = chan->tx_win;
 
-		len = min_t(unsigned int, sizeof(opts), optlen);
-		if (copy_from_sockptr(&opts, optval, len)) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opts, sizeof(opts), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opts.txwin_size > L2CAP_DEFAULT_EXT_WINDOW) {
 			err = -EINVAL;
@@ -801,10 +799,9 @@ static int l2cap_sock_setsockopt_old(struct socket *sock, int optname,
 		break;
 
 	case L2CAP_LM:
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt & L2CAP_LM_FIPS) {
 			err = -EINVAL;
@@ -885,7 +882,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 	struct bt_security sec;
 	struct bt_power pwr;
 	struct l2cap_conn *conn;
-	int len, err = 0;
+	int err = 0;
 	u32 opt;
 	u16 mtu;
 	u8 mode;
@@ -911,11 +908,9 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 		sec.level = BT_SECURITY_LOW;
 
-		len = min_t(unsigned int, sizeof(sec), optlen);
-		if (copy_from_sockptr(&sec, optval, len)) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&sec, sizeof(sec), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (sec.level < BT_SECURITY_LOW ||
 		    sec.level > BT_SECURITY_FIPS) {
@@ -960,10 +955,9 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt) {
 			set_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags);
@@ -975,10 +969,9 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 		break;
 
 	case BT_FLUSHABLE:
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt > BT_FLUSHABLE_ON) {
 			err = -EINVAL;
@@ -1010,11 +1003,9 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 		pwr.force_active = BT_POWER_FORCE_ACTIVE_ON;
 
-		len = min_t(unsigned int, sizeof(pwr), optlen);
-		if (copy_from_sockptr(&pwr, optval, len)) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&pwr, sizeof(pwr), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (pwr.force_active)
 			set_bit(FLAG_FORCE_ACTIVE, &chan->flags);
@@ -1023,10 +1014,9 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 		break;
 
 	case BT_CHANNEL_POLICY:
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		err = -EOPNOTSUPP;
 		break;
@@ -1055,10 +1045,9 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(&mtu, optval, sizeof(u16))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&mtu, sizeof(mtu), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (chan->mode == L2CAP_MODE_EXT_FLOWCTL &&
 		    sk->sk_state == BT_CONNECTED)
@@ -1086,10 +1075,9 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(&mode, optval, sizeof(u8))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&mode, sizeof(mode), optval, optlen);
+		if (err)
 			break;
-		}
 
 		BT_DBG("mode %u", mode);

@@ -629,7 +629,7 @@ static int rfcomm_sock_setsockopt_old(struct socket *sock, int optname,
 	switch (optname) {
 	case RFCOMM_LM:
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
+		if (bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen)) {
 			err = -EFAULT;
 			break;
 		}
@@ -664,7 +664,6 @@ static int rfcomm_sock_setsockopt(struct socket *sock, int level, int optname,
 	struct sock *sk = sock->sk;
 	struct bt_security sec;
 	int err = 0;
-	size_t len;
 	u32 opt;
 
 	BT_DBG("sk %p", sk);
@@ -686,11 +685,9 @@ static int rfcomm_sock_setsockopt(struct socket *sock, int level, int optname,
 		sec.level = BT_SECURITY_LOW;
 
-		len = min_t(unsigned int, sizeof(sec), optlen);
-		if (copy_from_sockptr(&sec, optval, len)) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&sec, sizeof(sec), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (sec.level > BT_SECURITY_HIGH) {
 			err = -EINVAL;
@@ -706,10 +703,9 @@ static int rfcomm_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt)
			set_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags);

@@ -824,7 +824,7 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
 			       sockptr_t optval, unsigned int optlen)
 {
 	struct sock *sk = sock->sk;
-	int len, err = 0;
+	int err = 0;
 	struct bt_voice voice;
 	u32 opt;
 	struct bt_codecs *codecs;
@@ -843,10 +843,9 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt)
 			set_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags);
@@ -863,11 +862,10 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
 		voice.setting = sco_pi(sk)->setting;
 
-		len = min_t(unsigned int, sizeof(voice), optlen);
-		if (copy_from_sockptr(&voice, optval, len)) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&voice, sizeof(voice), optval,
+					   optlen);
+		if (err)
 			break;
-		}
 
 		/* Explicitly check for these values */
 		if (voice.setting != BT_VOICE_TRANSPARENT &&
@@ -890,10 +888,9 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
 		break;
 
 	case BT_PKT_STATUS:
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt)
 			set_bit(BT_SK_PKT_STATUS, &bt_sk(sk)->flags);
@@ -934,9 +931,9 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(buffer, optval, optlen)) {
+		err = bt_copy_from_sockptr(buffer, optlen, optval, optlen);
+		if (err) {
 			hci_dev_put(hdev);
-			err = -EFAULT;
 			break;
 		}

@@ -966,6 +966,8 @@ static int do_replace(struct net *net, sockptr_t arg, unsigned int len)
 		return -ENOMEM;
 	if (tmp.num_counters == 0)
 		return -EINVAL;
+	if ((u64)len < (u64)tmp.size + sizeof(tmp))
+		return -EINVAL;
 
 	tmp.name[sizeof(tmp.name)-1] = 0;
@@ -1266,6 +1268,8 @@ static int compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
 		return -ENOMEM;
 	if (tmp.num_counters == 0)
 		return -EINVAL;
+	if ((u64)len < (u64)tmp.size + sizeof(tmp))
+		return -EINVAL;
 
 	tmp.name[sizeof(tmp.name)-1] = 0;

@@ -1118,6 +1118,8 @@ do_replace(struct net *net, sockptr_t arg, unsigned int len)
 		return -ENOMEM;
 	if (tmp.num_counters == 0)
 		return -EINVAL;
+	if ((u64)len < (u64)tmp.size + sizeof(tmp))
+		return -EINVAL;
 
 	tmp.name[sizeof(tmp.name)-1] = 0;
@@ -1504,6 +1506,8 @@ compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
 		return -ENOMEM;
 	if (tmp.num_counters == 0)
 		return -EINVAL;
+	if ((u64)len < (u64)tmp.size + sizeof(tmp))
+		return -EINVAL;
 
 	tmp.name[sizeof(tmp.name)-1] = 0;

@@ -926,13 +926,11 @@ void ip_rt_send_redirect(struct sk_buff *skb)
 		icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST, gw);
 		peer->rate_last = jiffies;
 		++peer->n_redirects;
-#ifdef CONFIG_IP_ROUTE_VERBOSE
-		if (log_martians &&
+		if (IS_ENABLED(CONFIG_IP_ROUTE_VERBOSE) && log_martians &&
 		    peer->n_redirects == ip_rt_redirect_number)
 			net_warn_ratelimited("host %pI4/if%d ignores redirects for %pI4 to %pI4\n",
 					     &ip_hdr(skb)->saddr, inet_iif(skb),
 					     &ip_hdr(skb)->daddr, &gw);
-#endif
 	}
 out_put_peer:
 	inet_putpeer(peer);

@@ -2091,9 +2091,10 @@ struct inet6_ifaddr *ipv6_get_ifaddr(struct net *net, const struct in6_addr *addr,
 		if (ipv6_addr_equal(&ifp->addr, addr)) {
 			if (!dev || ifp->idev->dev == dev ||
 			    !(ifp->scope&(IFA_LINK|IFA_HOST) || strict)) {
-				result = ifp;
-				in6_ifa_hold(ifp);
-				break;
+				if (in6_ifa_hold_safe(ifp)) {
+					result = ifp;
+					break;
+				}
 			}
 		}
 	}

@@ -1385,7 +1385,10 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
 	     struct nl_info *info, struct netlink_ext_ack *extack)
 {
 	struct fib6_table *table = rt->fib6_table;
-	struct fib6_node *fn, *pn = NULL;
+	struct fib6_node *fn;
+#ifdef CONFIG_IPV6_SUBTREES
+	struct fib6_node *pn = NULL;
+#endif
 	int err = -ENOMEM;
 	int allow_create = 1;
 	int replace_required = 0;
@@ -1409,9 +1412,9 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
 		goto out;
 	}
 
+#ifdef CONFIG_IPV6_SUBTREES
 	pn = fn;
-#ifdef CONFIG_IPV6_SUBTREES
 	if (rt->fib6_src.plen) {
 		struct fib6_node *sn;

@@ -1135,6 +1135,8 @@ do_replace(struct net *net, sockptr_t arg, unsigned int len)
 		return -ENOMEM;
 	if (tmp.num_counters == 0)
 		return -EINVAL;
+	if ((u64)len < (u64)tmp.size + sizeof(tmp))
+		return -EINVAL;
 
 	tmp.name[sizeof(tmp.name)-1] = 0;
@@ -1513,6 +1515,8 @@ compat_do_replace(struct net *net, sockptr_t arg, unsigned int len)
 		return -ENOMEM;
 	if (tmp.num_counters == 0)
 		return -EINVAL;
+	if ((u64)len < (u64)tmp.size + sizeof(tmp))
+		return -EINVAL;
 
 	tmp.name[sizeof(tmp.name)-1] = 0;

@@ -252,10 +252,10 @@ static int nfc_llcp_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = copy_safe_from_sockptr(&opt, sizeof(opt),
+					     optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt > LLCP_MAX_RW) {
 			err = -EINVAL;
@@ -274,10 +274,10 @@ static int nfc_llcp_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
 
-		if (copy_from_sockptr(&opt, optval, sizeof(u32))) {
-			err = -EFAULT;
+		err = copy_safe_from_sockptr(&opt, sizeof(opt),
+					     optval, optlen);
+		if (err)
 			break;
-		}
 
 		if (opt > LLCP_MAX_MIUX) {
 			err = -EINVAL;

@@ -1380,8 +1380,9 @@ int ovs_ct_copy_action(struct net *net, const struct nlattr *attr,
 	if (ct_info.timeout[0]) {
 		if (nf_ct_set_timeout(net, ct_info.ct, family, key->ip.proto,
 				      ct_info.timeout))
-			pr_info_ratelimited("Failed to associated timeout "
-					    "policy `%s'\n", ct_info.timeout);
+			OVS_NLERR(log,
+				  "Failed to associated timeout policy '%s'",
+				  ct_info.timeout);
 		else
 			ct_info.nf_ct_timeout = rcu_dereference(
 				nf_ct_timeout_find(ct_info.ct)->timeout);

@@ -2665,7 +2665,9 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 			}
 		} else if (!(flags & MSG_PEEK)) {
 			skb_unlink(skb, &sk->sk_receive_queue);
-			consume_skb(skb);
+			WRITE_ONCE(u->oob_skb, NULL);
+			if (!WARN_ON_ONCE(skb_unref(skb)))
+				kfree_skb(skb);
 			skb = skb_peek(&sk->sk_receive_queue);
 		}
 	}

@@ -274,11 +274,22 @@ static void __unix_gc(struct work_struct *work)
 	 * receive queues.  Other, non candidate sockets _can_ be
 	 * added to queue, so we must make sure only to touch
 	 * candidates.
+	 *
+	 * Embryos, though never candidates themselves, affect which
+	 * candidates are reachable by the garbage collector. Before
+	 * being added to a listener's queue, an embryo may already
+	 * receive data carrying SCM_RIGHTS, potentially making the
+	 * passed socket a candidate that is not yet reachable by the
+	 * collector. It becomes reachable once the embryo is
+	 * enqueued. Therefore, we must ensure that no SCM-laden
+	 * embryo appears in a (candidate) listener's queue between
+	 * consecutive scan_children() calls.
 	 */
 	list_for_each_entry_safe(u, next, &gc_inflight_list, link) {
+		struct sock *sk = &u->sk;
 		long total_refs;
 
-		total_refs = file_count(u->sk.sk_socket->file);
+		total_refs = file_count(sk->sk_socket->file);
 
 		WARN_ON_ONCE(!u->inflight);
 		WARN_ON_ONCE(total_refs < u->inflight);
@@ -286,6 +297,11 @@ static void __unix_gc(struct work_struct *work)
 			list_move_tail(&u->link, &gc_candidates);
 			__set_bit(UNIX_GC_CANDIDATE, &u->gc_flags);
 			__set_bit(UNIX_GC_MAYBE_CYCLE, &u->gc_flags);
+
+			if (sk->sk_state == TCP_LISTEN) {
+				unix_state_lock(sk);
+				unix_state_unlock(sk);
+			}
 		}
 	}

@@ -1417,6 +1417,8 @@ static int xsk_setsockopt(struct socket *sock, int level, int optname,
 		struct xsk_queue **q;
 		int entries;
 
+		if (optlen < sizeof(entries))
+			return -EINVAL;
 		if (copy_from_sockptr(&entries, optval, sizeof(entries)))
 			return -EFAULT;