Merge tag 'net-6.6-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from Bluetooth, netfilter, BPF and WiFi.

  I didn't collect precise data, but it feels like we've got a lot of
  6.5 fixes here. The WiFi fixes are the most user-awaited ones.

  Current release - regressions:

   - Bluetooth: fix hci_link_tx_to RCU lock usage

  Current release - new code bugs:

   - bpf: mprog: fix maximum program check on mprog attachment

   - eth: ti: icssg-prueth: fix signedness bug in prueth_init_tx_chns()

  Previous releases - regressions:

   - ipv6: tcp: add a missing nf_reset_ct() in 3WHS handling

   - vringh: don't use vringh_kiov_advance() in vringh_iov_xfer(), it
     doesn't handle zero length like we expected

   - wifi:
      - cfg80211: fix cqm_config access race, fix crashes with brcmfmac
      - iwlwifi: mvm: handle PS changes in vif_cfg_changed
      - mac80211: fix mesh id corruption on 32 bit systems
      - mt76: mt76x02: fix MT76x0 external LNA gain handling

   - Bluetooth: fix handling of HCI_QUIRK_STRICT_DUPLICATE_FILTER

   - l2tp: fix handling of transhdrlen in __ip{,6}_append_data()

   - dsa: mv88e6xxx: avoid EEPROM timeout when EEPROM is absent

   - eth: stmmac: fix the incorrect parameter after refactoring

  Previous releases - always broken:

   - net: replace calls to sock->ops->connect() with kernel_connect(),
     prevent address rewrite in kernel_bind(); otherwise BPF hooks may
     modify the arguments behind the caller's back (see the sketch after
     the commit list below)

   - tcp: fix delayed ACKs when reads and writes align with MSS

   - bpf:
      - verifier: unconditionally reset backtrack_state masks on global
        func exit
      - s390: let arch_prepare_bpf_trampoline return program size, fix
        struct_ops offsets
      - sockmap: fix accounting of available bytes in presence of PEEKs
      - sockmap: reject sk_msg egress redirects to non-TCP sockets

   - ipv4/fib: send a netlink notification when deleting source address
     routes

   - ethtool: plca: fix width of reads when parsing netlink commands

   - netfilter: nft_payload: rebuild vlan header on h_proto access

   - Bluetooth: hci_codec: fix leaking memory of local_codecs

   - eth: intel: ice: always add legacy 32byte RXDID in supported_rxdids

   - eth: stmmac:
     - dwmac-stm32: fix resume on STM32 MCU
     - remove buggy and unneeded stmmac_poll_controller, depend on NAPI

   - ibmveth: always recompute TCP pseudo-header checksum, fix use of
     the driver with Open vSwitch

   - wifi:
      - rtw88: rtw8723d: fix MAC address offset in EEPROM
      - mt76: fix lock dependency problem for wed_lock
      - mwifiex: sanity check data reported by the device
      - iwlwifi: ensure ack flag is properly cleared
      - iwlwifi: mvm: fix a memory corruption due to bad pointer
        arithmetic
      - iwlwifi: mvm: fix incorrect usage of scan API

  Misc:

   - wifi: mac80211: work around Cisco AP 9115 VHT MPDU length"

* tag 'net-6.6-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (99 commits)
  MAINTAINERS: update Matthieu's email address
  mptcp: userspace pm allow creating id 0 subflow
  mptcp: fix delegated action races
  net: stmmac: remove unneeded stmmac_poll_controller
  net: lan743x: also select PHYLIB
  net: ethernet: mediatek: disable irq before schedule napi
  net: mana: Fix oversized sge0 for GSO packets
  net: mana: Fix the tso_bytes calculation
  net: mana: Fix TX CQE error handling
  netlink: annotate data-races around sk->sk_err
  sctp: update hb timer immediately after users change hb_interval
  sctp: update transport state when processing a dupcook packet
  tcp: fix delayed ACKs for MSS boundary condition
  tcp: fix quick-ack counting to count actual ACKs of new data
  page_pool: fix documentation typos
  tipc: fix a potential deadlock on &tx->lock
  net: stmmac: dwmac-stm32: fix resume on STM32 MCU
  ipv4: Set offload_failed flag in fibmatch results
  netfilter: nf_tables: nft_set_rbtree: fix spurious insertion failure
  netfilter: nf_tables: Deduplicate nft_register_obj audit logs
  ...
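The kernel_bind() half of the sockets fix is easiest to see in code. A
minimal sketch of the idea (simplified, not the verbatim patch; helper
and types as in net/socket.c): the address is copied to the kernel
stack first, so a BPF cgroup hook running inside ->bind() can only
rewrite the copy, never the caller's buffer.

    int kernel_bind(struct socket *sock, struct sockaddr *addr, int addrlen)
    {
            struct sockaddr_storage address;

            if (addrlen < 0 || (size_t)addrlen > sizeof(address))
                    return -EINVAL;

            memcpy(&address, addr, addrlen);

            /* BPF hooks may modify 'address'; 'addr' stays untouched */
            return sock->ops->bind(sock, (struct sockaddr *)&address,
                                   addrlen);
    }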
Linus Torvalds 2023-10-05 11:29:21 -07:00
commit f291209eca
112 changed files with 1358 additions and 600 deletions

View File

@ -377,6 +377,7 @@ Matthew Wilcox <willy@infradead.org> <willy@debian.org>
Matthew Wilcox <willy@infradead.org> <willy@linux.intel.com>
Matthew Wilcox <willy@infradead.org> <willy@parisc-linux.org>
Matthias Fuchs <socketcan@esd.eu> <matthias.fuchs@esd.eu>
Matthieu Baerts <matttbe@kernel.org> <matthieu.baerts@tessares.net>
Matthieu CASTET <castet.matthieu@free.fr>
Matti Vaittinen <mazziesaccount@gmail.com> <matti.vaittinen@fi.rohmeurope.com>
Matt Ranostay <matt.ranostay@konsulko.com> <matt@ranostay.consulting>

View File

@ -470,7 +470,6 @@ F: drivers/hwmon/adm1029.c
ADM8211 WIRELESS DRIVER
L: linux-wireless@vger.kernel.org
S: Orphan
W: https://wireless.wiki.kernel.org/
F: drivers/net/wireless/admtek/adm8211.*
ADP1653 FLASH CONTROLLER DRIVER
@ -9531,10 +9530,8 @@ F: Documentation/devicetree/bindings/iio/pressure/honeywell,mprls0025pa.yaml
F: drivers/iio/pressure/mprls0025pa.c
HOST AP DRIVER
M: Jouni Malinen <j@w1.fi>
L: linux-wireless@vger.kernel.org
S: Obsolete
W: http://w1.fi/hostap-driver.html
F: drivers/net/wireless/intersil/hostap/
HP BIOSCFG DRIVER
@ -14947,7 +14944,7 @@ K: macsec
K: \bmdo_
NETWORKING [MPTCP]
M: Matthieu Baerts <matthieu.baerts@tessares.net>
M: Matthieu Baerts <matttbe@kernel.org>
M: Mat Martineau <martineau@kernel.org>
L: netdev@vger.kernel.org
L: mptcp@lists.linux.dev
@ -17602,6 +17599,7 @@ M: Kalle Valo <kvalo@kernel.org>
M: Jeff Johnson <quic_jjohnson@quicinc.com>
L: ath12k@lists.infradead.org
S: Supported
W: https://wireless.wiki.kernel.org/en/users/Drivers/ath12k
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
F: drivers/net/wireless/ath/ath12k/
@ -18132,8 +18130,6 @@ REALTEK WIRELESS DRIVER (rtlwifi family)
M: Ping-Ke Shih <pkshih@realtek.com>
L: linux-wireless@vger.kernel.org
S: Maintained
W: https://wireless.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git
F: drivers/net/wireless/realtek/rtlwifi/
REALTEK WIRELESS DRIVER (rtw88)
@ -18661,7 +18657,6 @@ F: drivers/media/dvb-frontends/rtl2832_sdr*
RTL8180 WIRELESS DRIVER
L: linux-wireless@vger.kernel.org
S: Orphan
W: https://wireless.wiki.kernel.org/
F: drivers/net/wireless/realtek/rtl818x/rtl8180/
RTL8187 WIRELESS DRIVER
@ -18669,14 +18664,12 @@ M: Hin-Tak Leung <hintak.leung@gmail.com>
M: Larry Finger <Larry.Finger@lwfinger.net>
L: linux-wireless@vger.kernel.org
S: Maintained
W: https://wireless.wiki.kernel.org/
F: drivers/net/wireless/realtek/rtl818x/rtl8187/
RTL8XXXU WIRELESS DRIVER (rtl8xxxu)
M: Jes Sorensen <Jes.Sorensen@gmail.com>
L: linux-wireless@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jes/linux.git rtl8xxxu-devel
F: drivers/net/wireless/realtek/rtl8xxxu/
RTRS TRANSPORT DRIVERS
@ -21658,7 +21651,6 @@ L: linux-wireless@vger.kernel.org
S: Orphan
W: https://wireless.wiki.kernel.org/en/users/Drivers/wl12xx
W: https://wireless.wiki.kernel.org/en/users/Drivers/wl1251
T: git git://git.kernel.org/pub/scm/linux/kernel/git/luca/wl12xx.git
F: drivers/net/wireless/ti/
TIMEKEEPING, CLOCKSOURCE CORE, NTP, ALARMTIMER

View File

@ -2513,7 +2513,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
return -E2BIG;
}
return ret;
return tjit.common.prg;
}
bool bpf_jit_supports_subprog_tailcalls(void)

View File

@ -4419,6 +4419,7 @@ static int btusb_probe(struct usb_interface *intf,
if (id->driver_info & BTUSB_QCA_ROME) {
data->setup_on_usb = btusb_setup_qca;
hdev->shutdown = btusb_shutdown_qca;
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
hdev->cmd_timeout = btusb_qca_cmd_timeout;
set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);

View File

@ -558,6 +558,9 @@ int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn)
tx_chn->virq = k3_ringacc_get_ring_irq_num(tx_chn->ringtxcq);
}
if (!tx_chn->virq)
return -ENXIO;
return tx_chn->virq;
}
EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq);

View File

@ -2958,14 +2958,16 @@ static void mv88e6xxx_hardware_reset(struct mv88e6xxx_chip *chip)
* from the wrong location resulting in the switch booting
* to wrong mode and inoperable.
*/
mv88e6xxx_g1_wait_eeprom_done(chip);
if (chip->info->ops->get_eeprom)
mv88e6xxx_g2_eeprom_wait(chip);
gpiod_set_value_cansleep(gpiod, 1);
usleep_range(10000, 20000);
gpiod_set_value_cansleep(gpiod, 0);
usleep_range(10000, 20000);
mv88e6xxx_g1_wait_eeprom_done(chip);
if (chip->info->ops->get_eeprom)
mv88e6xxx_g2_eeprom_wait(chip);
}
}

View File

@ -75,37 +75,6 @@ static int mv88e6xxx_g1_wait_init_ready(struct mv88e6xxx_chip *chip)
return mv88e6xxx_g1_wait_bit(chip, MV88E6XXX_G1_STS, bit, 1);
}
void mv88e6xxx_g1_wait_eeprom_done(struct mv88e6xxx_chip *chip)
{
const unsigned long timeout = jiffies + 1 * HZ;
u16 val;
int err;
/* Wait up to 1 second for the switch to finish reading the
* EEPROM.
*/
while (time_before(jiffies, timeout)) {
err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_STS, &val);
if (err) {
dev_err(chip->dev, "Error reading status");
return;
}
/* If the switch is still resetting, it may not
* respond on the bus, and so MDIO read returns
* 0xffff. Differentiate between that, and waiting for
* the EEPROM to be done by bit 0 being set.
*/
if (val != 0xffff &&
val & BIT(MV88E6XXX_G1_STS_IRQ_EEPROM_DONE))
return;
usleep_range(1000, 2000);
}
dev_err(chip->dev, "Timeout waiting for EEPROM done");
}
/* Offset 0x01: Switch MAC Address Register Bytes 0 & 1
* Offset 0x02: Switch MAC Address Register Bytes 2 & 3
* Offset 0x03: Switch MAC Address Register Bytes 4 & 5

View File

@ -282,7 +282,6 @@ int mv88e6xxx_g1_set_switch_mac(struct mv88e6xxx_chip *chip, u8 *addr);
int mv88e6185_g1_reset(struct mv88e6xxx_chip *chip);
int mv88e6352_g1_reset(struct mv88e6xxx_chip *chip);
int mv88e6250_g1_reset(struct mv88e6xxx_chip *chip);
void mv88e6xxx_g1_wait_eeprom_done(struct mv88e6xxx_chip *chip);
int mv88e6185_g1_ppu_enable(struct mv88e6xxx_chip *chip);
int mv88e6185_g1_ppu_disable(struct mv88e6xxx_chip *chip);

View File

@ -340,7 +340,7 @@ int mv88e6xxx_g2_pot_clear(struct mv88e6xxx_chip *chip)
* Offset 0x15: EEPROM Addr (for 8-bit data access)
*/
static int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip)
int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip)
{
int bit = __bf_shf(MV88E6XXX_G2_EEPROM_CMD_BUSY);
int err;

View File

@ -365,6 +365,7 @@ int mv88e6xxx_g2_trunk_clear(struct mv88e6xxx_chip *chip);
int mv88e6xxx_g2_device_mapping_write(struct mv88e6xxx_chip *chip, int target,
int port);
int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip);
extern const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops;
extern const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops;

View File

@ -1303,24 +1303,23 @@ static void ibmveth_rx_csum_helper(struct sk_buff *skb,
* the user space for finding a flow. During this process, OVS computes
* checksum on the first packet when CHECKSUM_PARTIAL flag is set.
*
* So, re-compute TCP pseudo header checksum when configured for
* trunk mode.
* So, re-compute TCP pseudo header checksum.
*/
if (iph_proto == IPPROTO_TCP) {
struct tcphdr *tcph = (struct tcphdr *)(skb->data + iphlen);
if (tcph->check == 0x0000) {
/* Recompute TCP pseudo header checksum */
if (adapter->is_active_trunk) {
tcphdrlen = skb->len - iphlen;
if (skb_proto == ETH_P_IP)
tcph->check =
~csum_tcpudp_magic(iph->saddr,
iph->daddr, tcphdrlen, iph_proto, 0);
else if (skb_proto == ETH_P_IPV6)
tcph->check =
~csum_ipv6_magic(&iph6->saddr,
&iph6->daddr, tcphdrlen, iph_proto, 0);
}
tcphdrlen = skb->len - iphlen;
if (skb_proto == ETH_P_IP)
tcph->check =
~csum_tcpudp_magic(iph->saddr,
iph->daddr, tcphdrlen, iph_proto, 0);
else if (skb_proto == ETH_P_IPV6)
tcph->check =
~csum_ipv6_magic(&iph6->saddr,
&iph6->daddr, tcphdrlen, iph_proto, 0);
/* Setup SKB fields for checksum offload */
skb_partial_csum_set(skb, iphlen,
offsetof(struct tcphdr, check));

View File

@ -2617,12 +2617,14 @@ static int ice_vc_query_rxdid(struct ice_vf *vf)
goto err;
}
/* Read flexiflag registers to determine whether the
* corresponding RXDID is configured and supported or not.
* Since Legacy 16byte descriptor format is not supported,
* start from Legacy 32byte descriptor.
/* RXDIDs supported by DDP package can be read from the register
* to get the supported RXDID bitmap. But the legacy 32byte RXDID
* is not listed in DDP package, add it in the bitmap manually.
* Legacy 16byte descriptor is not supported.
*/
for (i = ICE_RXDID_LEGACY_1; i < ICE_FLEX_DESC_RXDID_MAX_NUM; i++) {
rxdid->supported_rxdids |= BIT(ICE_RXDID_LEGACY_1);
for (i = ICE_RXDID_FLEX_NIC; i < ICE_FLEX_DESC_RXDID_MAX_NUM; i++) {
regval = rd32(hw, GLFLXP_RXDID_FLAGS(i, 0));
if ((regval >> GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S)
& GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M)

View File

@ -2195,7 +2195,7 @@ struct rx_ring_info {
struct sk_buff *skb;
dma_addr_t data_addr;
DEFINE_DMA_UNMAP_LEN(data_size);
dma_addr_t frag_addr[ETH_JUMBO_MTU >> PAGE_SHIFT];
dma_addr_t frag_addr[ETH_JUMBO_MTU >> PAGE_SHIFT ?: 1];
};
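/* Illustrative note, not part of the patch: ETH_JUMBO_MTU is 9000, so
 * on a 64K-page system (PAGE_SHIFT = 16) ETH_JUMBO_MTU >> PAGE_SHIFT
 * evaluates to 0 and frag_addr[] silently became a zero-length array.
 * The GNU "x ?: 1" form keeps at least one element while preserving
 * the old sizing on 4K/8K-page systems.
 */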
enum flow_control {

View File

@ -3171,8 +3171,8 @@ static irqreturn_t mtk_handle_irq_rx(int irq, void *_eth)
eth->rx_events++;
if (likely(napi_schedule_prep(&eth->rx_napi))) {
__napi_schedule(&eth->rx_napi);
mtk_rx_irq_disable(eth, eth->soc->txrx.rx_irq_done_mask);
__napi_schedule(&eth->rx_napi);
}
return IRQ_HANDLED;
@ -3184,8 +3184,8 @@ static irqreturn_t mtk_handle_irq_tx(int irq, void *_eth)
eth->tx_events++;
if (likely(napi_schedule_prep(&eth->tx_napi))) {
__napi_schedule(&eth->tx_napi);
mtk_tx_irq_disable(eth, MTK_TX_DONE_INT);
__napi_schedule(&eth->tx_napi);
}
return IRQ_HANDLED;

View File

@ -46,6 +46,7 @@ config LAN743X
tristate "LAN743x support"
depends on PCI
depends on PTP_1588_CLOCK_OPTIONAL
select PHYLIB
select FIXED_PHY
select CRC16
select CRC32

View File

@ -91,63 +91,137 @@ static unsigned int mana_checksum_info(struct sk_buff *skb)
return 0;
}
static void mana_add_sge(struct mana_tx_package *tp, struct mana_skb_head *ash,
int sg_i, dma_addr_t da, int sge_len, u32 gpa_mkey)
{
ash->dma_handle[sg_i] = da;
ash->size[sg_i] = sge_len;
tp->wqe_req.sgl[sg_i].address = da;
tp->wqe_req.sgl[sg_i].mem_key = gpa_mkey;
tp->wqe_req.sgl[sg_i].size = sge_len;
}
static int mana_map_skb(struct sk_buff *skb, struct mana_port_context *apc,
struct mana_tx_package *tp)
struct mana_tx_package *tp, int gso_hs)
{
struct mana_skb_head *ash = (struct mana_skb_head *)skb->head;
int hsg = 1; /* num of SGEs of linear part */
struct gdma_dev *gd = apc->ac->gdma_dev;
int skb_hlen = skb_headlen(skb);
int sge0_len, sge1_len = 0;
struct gdma_context *gc;
struct device *dev;
skb_frag_t *frag;
dma_addr_t da;
int sg_i;
int i;
gc = gd->gdma_context;
dev = gc->dev;
da = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
if (gso_hs && gso_hs < skb_hlen) {
sge0_len = gso_hs;
sge1_len = skb_hlen - gso_hs;
} else {
sge0_len = skb_hlen;
}
da = dma_map_single(dev, skb->data, sge0_len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, da))
return -ENOMEM;
ash->dma_handle[0] = da;
ash->size[0] = skb_headlen(skb);
tp->wqe_req.sgl[0].address = ash->dma_handle[0];
tp->wqe_req.sgl[0].mem_key = gd->gpa_mkey;
tp->wqe_req.sgl[0].size = ash->size[0];
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
frag = &skb_shinfo(skb)->frags[i];
da = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
DMA_TO_DEVICE);
mana_add_sge(tp, ash, 0, da, sge0_len, gd->gpa_mkey);
if (sge1_len) {
sg_i = 1;
da = dma_map_single(dev, skb->data + sge0_len, sge1_len,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, da))
goto frag_err;
ash->dma_handle[i + 1] = da;
ash->size[i + 1] = skb_frag_size(frag);
mana_add_sge(tp, ash, sg_i, da, sge1_len, gd->gpa_mkey);
hsg = 2;
}
tp->wqe_req.sgl[i + 1].address = ash->dma_handle[i + 1];
tp->wqe_req.sgl[i + 1].mem_key = gd->gpa_mkey;
tp->wqe_req.sgl[i + 1].size = ash->size[i + 1];
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
sg_i = hsg + i;
frag = &skb_shinfo(skb)->frags[i];
da = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
DMA_TO_DEVICE);
if (dma_mapping_error(dev, da))
goto frag_err;
mana_add_sge(tp, ash, sg_i, da, skb_frag_size(frag),
gd->gpa_mkey);
}
return 0;
frag_err:
for (i = i - 1; i >= 0; i--)
dma_unmap_page(dev, ash->dma_handle[i + 1], ash->size[i + 1],
for (i = sg_i - 1; i >= hsg; i--)
dma_unmap_page(dev, ash->dma_handle[i], ash->size[i],
DMA_TO_DEVICE);
dma_unmap_single(dev, ash->dma_handle[0], ash->size[0], DMA_TO_DEVICE);
for (i = hsg - 1; i >= 0; i--)
dma_unmap_single(dev, ash->dma_handle[i], ash->size[i],
DMA_TO_DEVICE);
return -ENOMEM;
}
/* Handle the case when GSO SKB linear length is too large.
* MANA NIC requires GSO packets to put only the packet header to SGE0.
* So, we need 2 SGEs for the skb linear part which contains more than the
* header.
* Return a positive value for the number of SGEs, or a negative value
* for an error.
*/
static int mana_fix_skb_head(struct net_device *ndev, struct sk_buff *skb,
int gso_hs)
{
int num_sge = 1 + skb_shinfo(skb)->nr_frags;
int skb_hlen = skb_headlen(skb);
if (gso_hs < skb_hlen) {
num_sge++;
} else if (gso_hs > skb_hlen) {
if (net_ratelimit())
netdev_err(ndev,
"TX nonlinear head: hs:%d, skb_hlen:%d\n",
gso_hs, skb_hlen);
return -EINVAL;
}
return num_sge;
}
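/* Worked example for mana_fix_skb_head(), with hypothetical numbers:
 * a GSO skb with a 54-byte header (gso_hs), 200 bytes of linear data
 * (skb_hlen) and 3 page frags needs
 *   1 (header-only SGE0) + 1 (rest of the linear part) + 3 (frags) = 5
 * SGEs, because the NIC accepts only the packet header in SGE0.
 */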
/* Get the GSO packet's header size */
static int mana_get_gso_hs(struct sk_buff *skb)
{
int gso_hs;
if (skb->encapsulation) {
gso_hs = skb_inner_tcp_all_headers(skb);
} else {
if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
gso_hs = skb_transport_offset(skb) +
sizeof(struct udphdr);
} else {
gso_hs = skb_tcp_all_headers(skb);
}
}
return gso_hs;
}
netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
enum mana_tx_pkt_format pkt_fmt = MANA_SHORT_PKT_FMT;
struct mana_port_context *apc = netdev_priv(ndev);
int gso_hs = 0; /* zero for non-GSO pkts */
u16 txq_idx = skb_get_queue_mapping(skb);
struct gdma_dev *gd = apc->ac->gdma_dev;
bool ipv4 = false, ipv6 = false;
@ -159,7 +233,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
struct mana_txq *txq;
struct mana_cq *cq;
int err, len;
u16 ihs;
if (unlikely(!apc->port_is_up))
goto tx_drop;
@ -209,19 +282,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
pkg.wqe_req.client_data_unit = 0;
pkg.wqe_req.num_sge = 1 + skb_shinfo(skb)->nr_frags;
WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES);
if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) {
pkg.wqe_req.sgl = pkg.sgl_array;
} else {
pkg.sgl_ptr = kmalloc_array(pkg.wqe_req.num_sge,
sizeof(struct gdma_sge),
GFP_ATOMIC);
if (!pkg.sgl_ptr)
goto tx_drop_count;
pkg.wqe_req.sgl = pkg.sgl_ptr;
}
if (skb->protocol == htons(ETH_P_IP))
ipv4 = true;
@ -229,6 +289,26 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
ipv6 = true;
if (skb_is_gso(skb)) {
int num_sge;
gso_hs = mana_get_gso_hs(skb);
num_sge = mana_fix_skb_head(ndev, skb, gso_hs);
if (num_sge > 0)
pkg.wqe_req.num_sge = num_sge;
else
goto tx_drop_count;
u64_stats_update_begin(&tx_stats->syncp);
if (skb->encapsulation) {
tx_stats->tso_inner_packets++;
tx_stats->tso_inner_bytes += skb->len - gso_hs;
} else {
tx_stats->tso_packets++;
tx_stats->tso_bytes += skb->len - gso_hs;
}
u64_stats_update_end(&tx_stats->syncp);
pkg.tx_oob.s_oob.is_outer_ipv4 = ipv4;
pkg.tx_oob.s_oob.is_outer_ipv6 = ipv6;
@ -252,28 +332,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
&ipv6_hdr(skb)->daddr, 0,
IPPROTO_TCP, 0);
}
if (skb->encapsulation) {
ihs = skb_inner_tcp_all_headers(skb);
u64_stats_update_begin(&tx_stats->syncp);
tx_stats->tso_inner_packets++;
tx_stats->tso_inner_bytes += skb->len - ihs;
u64_stats_update_end(&tx_stats->syncp);
} else {
if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
} else {
ihs = skb_tcp_all_headers(skb);
if (ipv6_has_hopopt_jumbo(skb))
ihs -= sizeof(struct hop_jumbo_hdr);
}
u64_stats_update_begin(&tx_stats->syncp);
tx_stats->tso_packets++;
tx_stats->tso_bytes += skb->len - ihs;
u64_stats_update_end(&tx_stats->syncp);
}
} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
csum_type = mana_checksum_info(skb);
@ -296,11 +354,25 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
} else {
/* Can't do offload of this type of checksum */
if (skb_checksum_help(skb))
goto free_sgl_ptr;
goto tx_drop_count;
}
}
if (mana_map_skb(skb, apc, &pkg)) {
WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES);
if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) {
pkg.wqe_req.sgl = pkg.sgl_array;
} else {
pkg.sgl_ptr = kmalloc_array(pkg.wqe_req.num_sge,
sizeof(struct gdma_sge),
GFP_ATOMIC);
if (!pkg.sgl_ptr)
goto tx_drop_count;
pkg.wqe_req.sgl = pkg.sgl_ptr;
}
if (mana_map_skb(skb, apc, &pkg, gso_hs)) {
u64_stats_update_begin(&tx_stats->syncp);
tx_stats->mana_map_err++;
u64_stats_update_end(&tx_stats->syncp);
@ -1258,11 +1330,16 @@ static void mana_unmap_skb(struct sk_buff *skb, struct mana_port_context *apc)
struct mana_skb_head *ash = (struct mana_skb_head *)skb->head;
struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
struct device *dev = gc->dev;
int i;
int hsg, i;
dma_unmap_single(dev, ash->dma_handle[0], ash->size[0], DMA_TO_DEVICE);
/* Number of SGEs of linear part */
hsg = (skb_is_gso(skb) && skb_headlen(skb) > ash->size[0]) ? 2 : 1;
for (i = 1; i < skb_shinfo(skb)->nr_frags + 1; i++)
for (i = 0; i < hsg; i++)
dma_unmap_single(dev, ash->dma_handle[i], ash->size[i],
DMA_TO_DEVICE);
for (i = hsg; i < skb_shinfo(skb)->nr_frags + hsg; i++)
dma_unmap_page(dev, ash->dma_handle[i], ash->size[i],
DMA_TO_DEVICE);
}
@ -1317,19 +1394,23 @@ static void mana_poll_tx_cq(struct mana_cq *cq)
case CQE_TX_VPORT_IDX_OUT_OF_RANGE:
case CQE_TX_VPORT_DISABLED:
case CQE_TX_VLAN_TAGGING_VIOLATION:
WARN_ONCE(1, "TX: CQE error %d: ignored.\n",
cqe_oob->cqe_hdr.cqe_type);
if (net_ratelimit())
netdev_err(ndev, "TX: CQE error %d\n",
cqe_oob->cqe_hdr.cqe_type);
apc->eth_stats.tx_cqe_err++;
break;
default:
/* If the CQE type is unexpected, log an error, assert,
* and go through the error path.
/* If the CQE type is unknown, log an error,
* and still free the SKB, update tail, etc.
*/
WARN_ONCE(1, "TX: Unexpected CQE type %d: HW BUG?\n",
cqe_oob->cqe_hdr.cqe_type);
if (net_ratelimit())
netdev_err(ndev, "TX: unknown CQE type %d\n",
cqe_oob->cqe_hdr.cqe_type);
apc->eth_stats.tx_cqe_unknown_type++;
return;
break;
}
if (WARN_ON_ONCE(txq->gdma_txq_id != completions[i].wq_num))

View File

@ -110,9 +110,9 @@ struct qed_ll2_info {
enum core_tx_dest tx_dest;
u8 tx_stats_en;
bool main_func_queue;
struct qed_ll2_cbs cbs;
struct qed_ll2_rx_queue rx_queue;
struct qed_ll2_tx_queue tx_queue;
struct qed_ll2_cbs cbs;
};
extern const struct qed_ll2_ops qed_ll2_ops_pass;

View File

@ -4,6 +4,7 @@
* Copyright (C) 2022 Renesas Electronics Corporation
*/
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/etherdevice.h>
@ -1049,7 +1050,7 @@ static void rswitch_rmac_setting(struct rswitch_etha *etha, const u8 *mac)
static void rswitch_etha_enable_mii(struct rswitch_etha *etha)
{
rswitch_modify(etha->addr, MPIC, MPIC_PSMCS_MASK | MPIC_PSMHT_MASK,
MPIC_PSMCS(0x05) | MPIC_PSMHT(0x06));
MPIC_PSMCS(etha->psmcs) | MPIC_PSMHT(0x06));
rswitch_modify(etha->addr, MPSM, 0, MPSM_MFF_C45);
}
@ -1693,6 +1694,12 @@ static void rswitch_etha_init(struct rswitch_private *priv, int index)
etha->index = index;
etha->addr = priv->addr + RSWITCH_ETHA_OFFSET + index * RSWITCH_ETHA_SIZE;
etha->coma_addr = priv->addr;
/* MPIC.PSMCS = (clk [MHz] / (MDC frequency [MHz] * 2)) - 1.
* Calculating PSMCS value as MDC frequency = 2.5MHz. So, multiply
* both the numerator and the denominator by 10.
*/
etha->psmcs = clk_get_rate(priv->clk) / 100000 / (25 * 2) - 1;
}
static int rswitch_device_alloc(struct rswitch_private *priv, int index)
@ -1900,6 +1907,10 @@ static int renesas_eth_sw_probe(struct platform_device *pdev)
return -ENOMEM;
spin_lock_init(&priv->lock);
priv->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(priv->clk))
return PTR_ERR(priv->clk);
attr = soc_device_match(rswitch_soc_no_speed_change);
if (attr)
priv->etha_no_runtime_change = true;
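/* Worked example for the PSMCS formula above (clock rate assumed):
 * with a 320 MHz peripheral clock, clk_get_rate() returns 320000000:
 *   320000000 / 100000 = 3200   clock in units of 0.1 MHz
 *   3200 / (25 * 2)    = 64     MDC = 2.5 MHz, scaled by the same 10
 *   64 - 1             = 63     value programmed into MPIC.PSMCS
 */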

View File

@ -915,6 +915,7 @@ struct rswitch_etha {
bool external_phy;
struct mii_bus *mii;
phy_interface_t phy_interface;
u32 psmcs;
u8 mac_addr[MAX_ADDR_LEN];
int link;
int speed;
@ -1012,6 +1013,7 @@ struct rswitch_private {
struct rswitch_mfwd mfwd;
spinlock_t lock; /* lock interrupt registers' control */
struct clk *clk;
bool etha_no_runtime_change;
bool gwca_halt;

View File

@ -104,6 +104,7 @@ struct stm32_ops {
int (*parse_data)(struct stm32_dwmac *dwmac,
struct device *dev);
u32 syscfg_eth_mask;
bool clk_rx_enable_in_suspend;
};
static int stm32_dwmac_init(struct plat_stmmacenet_data *plat_dat)
@ -121,7 +122,8 @@ static int stm32_dwmac_init(struct plat_stmmacenet_data *plat_dat)
if (ret)
return ret;
if (!dwmac->dev->power.is_suspended) {
if (!dwmac->ops->clk_rx_enable_in_suspend ||
!dwmac->dev->power.is_suspended) {
ret = clk_prepare_enable(dwmac->clk_rx);
if (ret) {
clk_disable_unprepare(dwmac->clk_tx);
@ -513,7 +515,8 @@ static struct stm32_ops stm32mp1_dwmac_data = {
.suspend = stm32mp1_suspend,
.resume = stm32mp1_resume,
.parse_data = stm32mp1_parse_data,
.syscfg_eth_mask = SYSCFG_MP1_ETH_MASK
.syscfg_eth_mask = SYSCFG_MP1_ETH_MASK,
.clk_rx_enable_in_suspend = true
};
static const struct of_device_id stm32_dwmac_match[] = {

View File

@ -6002,33 +6002,6 @@ static irqreturn_t stmmac_msi_intr_rx(int irq, void *data)
return IRQ_HANDLED;
}
#ifdef CONFIG_NET_POLL_CONTROLLER
/* Polling receive - used by NETCONSOLE and other diagnostic tools
* to allow network I/O with interrupts disabled.
*/
static void stmmac_poll_controller(struct net_device *dev)
{
struct stmmac_priv *priv = netdev_priv(dev);
int i;
/* If adapter is down, do nothing */
if (test_bit(STMMAC_DOWN, &priv->state))
return;
if (priv->plat->flags & STMMAC_FLAG_MULTI_MSI_EN) {
for (i = 0; i < priv->plat->rx_queues_to_use; i++)
stmmac_msi_intr_rx(0, &priv->dma_conf.rx_queue[i]);
for (i = 0; i < priv->plat->tx_queues_to_use; i++)
stmmac_msi_intr_tx(0, &priv->dma_conf.tx_queue[i]);
} else {
disable_irq(dev->irq);
stmmac_interrupt(dev->irq, dev);
enable_irq(dev->irq);
}
}
#endif
/**
* stmmac_ioctl - Entry point for the Ioctl
* @dev: Device pointer.
@ -6989,9 +6962,6 @@ static const struct net_device_ops stmmac_netdev_ops = {
.ndo_get_stats64 = stmmac_get_stats64,
.ndo_setup_tc = stmmac_setup_tc,
.ndo_select_queue = stmmac_select_queue,
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = stmmac_poll_controller,
#endif
.ndo_set_mac_address = stmmac_set_mac_address,
.ndo_vlan_rx_add_vid = stmmac_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = stmmac_vlan_rx_kill_vid,

View File

@ -901,7 +901,7 @@ static int __maybe_unused stmmac_pltfr_resume(struct device *dev)
struct platform_device *pdev = to_platform_device(dev);
int ret;
ret = stmmac_pltfr_init(pdev, priv->plat->bsp_priv);
ret = stmmac_pltfr_init(pdev, priv->plat);
if (ret)
return ret;

View File

@ -1747,9 +1747,10 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cpsw_common *common)
}
tx_chn->irq = k3_udma_glue_tx_get_irq(tx_chn->tx_chn);
if (tx_chn->irq <= 0) {
if (tx_chn->irq < 0) {
dev_err(dev, "Failed to get tx dma irq %d\n",
tx_chn->irq);
ret = tx_chn->irq;
goto err;
}

View File

@ -316,12 +316,12 @@ static int prueth_init_tx_chns(struct prueth_emac *emac)
goto fail;
}
tx_chn->irq = k3_udma_glue_tx_get_irq(tx_chn->tx_chn);
if (tx_chn->irq <= 0) {
ret = -EINVAL;
ret = k3_udma_glue_tx_get_irq(tx_chn->tx_chn);
if (ret < 0) {
netdev_err(ndev, "failed to get tx irq\n");
goto fail;
}
tx_chn->irq = ret;
snprintf(tx_chn->name, sizeof(tx_chn->name), "%s-tx%d",
dev_name(dev), tx_chn->id);

View File

@ -90,7 +90,9 @@ static int __must_check __smsc75xx_read_reg(struct usbnet *dev, u32 index,
ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
| USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0, index, &buf, 4);
if (unlikely(ret < 0)) {
if (unlikely(ret < 4)) {
ret = ret < 0 ? ret : -ENODATA;
netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
index, ret);
return ret;
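/* Note on the tightened check: a USB control read can complete
 * "successfully" with fewer bytes than requested, so only ret == 4 is
 * a valid read of a 4-byte register; shorter non-negative results are
 * now mapped to -ENODATA instead of being treated as success.
 */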

View File

@ -34,6 +34,8 @@
#define TDM_PPPOHT_SLIC_MAXIN
#define RX_BD_ERRORS (R_CD_S | R_OV_S | R_CR_S | R_AB_S | R_NO_S | R_LG_S)
static int uhdlc_close(struct net_device *dev);
static struct ucc_tdm_info utdm_primary_info = {
.uf_info = {
.tsa = 0,
@ -708,6 +710,7 @@ static int uhdlc_open(struct net_device *dev)
hdlc_device *hdlc = dev_to_hdlc(dev);
struct ucc_hdlc_private *priv = hdlc->priv;
struct ucc_tdm *utdm = priv->utdm;
int rc = 0;
if (priv->hdlc_busy != 1) {
if (request_irq(priv->ut_info->uf_info.irq,
@ -731,10 +734,13 @@ static int uhdlc_open(struct net_device *dev)
napi_enable(&priv->napi);
netdev_reset_queue(dev);
netif_start_queue(dev);
hdlc_open(dev);
rc = hdlc_open(dev);
if (rc)
uhdlc_close(dev);
}
return 0;
return rc;
}
static void uhdlc_memclean(struct ucc_hdlc_private *priv)
@ -824,6 +830,8 @@ static int uhdlc_close(struct net_device *dev)
netdev_reset_queue(dev);
priv->hdlc_busy = 0;
hdlc_close(dev);
return 0;
}

View File

@ -442,7 +442,12 @@ struct brcmf_scan_params_v2_le {
* fixed parameter portion is assumed, otherwise
* ssid in the fixed portion is ignored
*/
__le16 channel_list[1]; /* list of chanspecs */
union {
__le16 padding; /* Reserve space for at least 1 entry for abort
* which uses an on stack brcmf_scan_params_v2_le
*/
DECLARE_FLEX_ARRAY(__le16, channel_list); /* chanspecs */
};
};
struct brcmf_scan_results {
@ -702,7 +707,7 @@ struct brcmf_sta_info_le {
struct brcmf_chanspec_list {
__le32 count; /* # of entries */
__le32 element[1]; /* variable length uint32 list */
__le32 element[]; /* variable length uint32 list */
};
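/* Why DECLARE_FLEX_ARRAY() is needed in the union above: C forbids a
 * bare flexible array member as the sole member of a struct or union,
 * so the macro (from <linux/stddef.h>) wraps it in an anonymous
 * struct. Simplified shape, for illustration only:
 *
 *   union {
 *           __le16 padding;   // keeps room for one on-stack entry
 *           DECLARE_FLEX_ARRAY(__le16, channel_list);
 *   };
 */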
/*

View File

@ -310,9 +310,9 @@ struct iwl_fw_ini_fifo_hdr {
struct iwl_fw_ini_error_dump_range {
__le32 range_data_size;
union {
__le32 internal_base_addr;
__le64 dram_base_addr;
__le32 page_num;
__le32 internal_base_addr __packed;
__le64 dram_base_addr __packed;
__le32 page_num __packed;
struct iwl_fw_ini_fifo_hdr fifo_hdr;
struct iwl_cmd_header fw_pkt_hdr;
};

View File

@ -802,7 +802,7 @@ out:
mvm->nvm_data->bands[0].n_channels = 1;
mvm->nvm_data->bands[0].n_bitrates = 1;
mvm->nvm_data->bands[0].bitrates =
(void *)((u8 *)mvm->nvm_data->channels + 1);
(void *)(mvm->nvm_data->channels + 1);
mvm->nvm_data->bands[0].bitrates->hw_value = 10;
}
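/* The corruption fixed above was plain pointer arithmetic: with
 * "struct ieee80211_channel *channels", channels + 1 advances by one
 * whole struct, while the old (u8 *)channels + 1 advanced a single
 * byte and left bands[0].bitrates aliasing the channel data.
 */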

View File

@ -731,73 +731,78 @@ static void iwl_mvm_mld_vif_cfg_changed_station(struct iwl_mvm *mvm,
mvmvif->associated = vif->cfg.assoc;
if (!(changes & BSS_CHANGED_ASSOC))
return;
if (changes & BSS_CHANGED_ASSOC) {
if (vif->cfg.assoc) {
/* clear statistics to get clean beacon counter */
iwl_mvm_request_statistics(mvm, true);
iwl_mvm_sf_update(mvm, vif, false);
iwl_mvm_power_vif_assoc(mvm, vif);
if (vif->cfg.assoc) {
/* clear statistics to get clean beacon counter */
iwl_mvm_request_statistics(mvm, true);
iwl_mvm_sf_update(mvm, vif, false);
iwl_mvm_power_vif_assoc(mvm, vif);
for_each_mvm_vif_valid_link(mvmvif, i) {
memset(&mvmvif->link[i]->beacon_stats, 0,
sizeof(mvmvif->link[i]->beacon_stats));
for_each_mvm_vif_valid_link(mvmvif, i) {
memset(&mvmvif->link[i]->beacon_stats, 0,
sizeof(mvmvif->link[i]->beacon_stats));
if (vif->p2p) {
iwl_mvm_update_smps(mvm, vif,
IWL_MVM_SMPS_REQ_PROT,
IEEE80211_SMPS_DYNAMIC, i);
}
if (vif->p2p) {
iwl_mvm_update_smps(mvm, vif,
IWL_MVM_SMPS_REQ_PROT,
IEEE80211_SMPS_DYNAMIC, i);
rcu_read_lock();
link_conf = rcu_dereference(vif->link_conf[i]);
if (link_conf && !link_conf->dtim_period)
protect = true;
rcu_read_unlock();
}
rcu_read_lock();
link_conf = rcu_dereference(vif->link_conf[i]);
if (link_conf && !link_conf->dtim_period)
protect = true;
rcu_read_unlock();
}
if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
protect) {
/* If we're not restarting and still haven't
* heard a beacon (dtim period unknown) then
* make sure we still have enough minimum time
* remaining in the time event, since the auth
* might actually have taken quite a while
* (especially for SAE) and so the remaining
* time could be small without us having heard
* a beacon yet.
*/
iwl_mvm_protect_assoc(mvm, vif, 0);
}
if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
protect) {
/* If we're not restarting and still haven't
* heard a beacon (dtim period unknown) then
* make sure we still have enough minimum time
* remaining in the time event, since the auth
* might actually have taken quite a while
* (especially for SAE) and so the remaining
* time could be small without us having heard
* a beacon yet.
iwl_mvm_sf_update(mvm, vif, false);
/* FIXME: need to decide about misbehaving AP handling */
iwl_mvm_power_vif_assoc(mvm, vif);
} else if (iwl_mvm_mld_vif_have_valid_ap_sta(mvmvif)) {
iwl_mvm_mei_host_disassociated(mvm);
/* If update fails - SF might be running in associated
* mode while disassociated - which is forbidden.
*/
iwl_mvm_protect_assoc(mvm, vif, 0);
ret = iwl_mvm_sf_update(mvm, vif, false);
WARN_ONCE(ret &&
!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
&mvm->status),
"Failed to update SF upon disassociation\n");
/* If we get an assert during the connection (after the
* station has been added, but before the vif is set
* to associated), mac80211 will re-add the station and
* then configure the vif. Since the vif is not
* associated, we would remove the station here and
* this would fail the recovery.
*/
iwl_mvm_mld_vif_delete_all_stas(mvm, vif);
}
iwl_mvm_sf_update(mvm, vif, false);
/* FIXME: need to decide about misbehaving AP handling */
iwl_mvm_power_vif_assoc(mvm, vif);
} else if (iwl_mvm_mld_vif_have_valid_ap_sta(mvmvif)) {
iwl_mvm_mei_host_disassociated(mvm);
/* If update fails - SF might be running in associated
* mode while disassociated - which is forbidden.
*/
ret = iwl_mvm_sf_update(mvm, vif, false);
WARN_ONCE(ret &&
!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
&mvm->status),
"Failed to update SF upon disassociation\n");
/* If we get an assert during the connection (after the
* station has been added, but before the vif is set
* to associated), mac80211 will re-add the station and
* then configure the vif. Since the vif is not
* associated, we would remove the station here and
* this would fail the recovery.
*/
iwl_mvm_mld_vif_delete_all_stas(mvm, vif);
iwl_mvm_bss_info_changed_station_assoc(mvm, vif, changes);
}
iwl_mvm_bss_info_changed_station_assoc(mvm, vif, changes);
if (changes & BSS_CHANGED_PS) {
ret = iwl_mvm_power_update_mac(mvm);
if (ret)
IWL_ERR(mvm, "failed to update power mode\n");
}
}
static void

View File

@ -2342,7 +2342,7 @@ iwl_mvm_scan_umac_fill_general_p_v12(struct iwl_mvm *mvm,
if (gen_flags & IWL_UMAC_SCAN_GEN_FLAGS_V2_FRAGMENTED_LMAC2)
gp->num_of_fragments[SCAN_HB_LMAC_IDX] = IWL_SCAN_NUM_OF_FRAGS;
if (version < 12) {
if (version < 16) {
gp->scan_start_mac_or_link_id = scan_vif->id;
} else {
struct iwl_mvm_vif_link_info *link_info;

View File

@ -1612,6 +1612,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
iwl_trans_free_tx_cmd(mvm->trans, info->driver_data[1]);
memset(&info->status, 0, sizeof(info->status));
info->flags &= ~(IEEE80211_TX_STAT_ACK | IEEE80211_TX_STAT_TX_FILTERED);
/* inform mac80211 about what happened with the frame */
switch (status & TX_STATUS_MSK) {
@ -1964,6 +1965,8 @@ static void iwl_mvm_tx_reclaim(struct iwl_mvm *mvm, int sta_id, int tid,
*/
if (!is_flush)
info->flags |= IEEE80211_TX_STAT_ACK;
else
info->flags &= ~IEEE80211_TX_STAT_ACK;
}
/*

View File

@ -918,9 +918,17 @@ void mwifiex_11n_rxba_sync_event(struct mwifiex_private *priv,
mwifiex_dbg_dump(priv->adapter, EVT_D, "RXBA_SYNC event:",
event_buf, len);
while (tlv_buf_left >= sizeof(*tlv_rxba)) {
while (tlv_buf_left > sizeof(*tlv_rxba)) {
tlv_type = le16_to_cpu(tlv_rxba->header.type);
tlv_len = le16_to_cpu(tlv_rxba->header.len);
if (size_add(sizeof(tlv_rxba->header), tlv_len) > tlv_buf_left) {
mwifiex_dbg(priv->adapter, WARN,
"TLV size (%zu) overflows event_buf buf_left=%d\n",
size_add(sizeof(tlv_rxba->header), tlv_len),
tlv_buf_left);
return;
}
if (tlv_type != TLV_TYPE_RXBA_SYNC) {
mwifiex_dbg(priv->adapter, ERROR,
"Wrong TLV id=0x%x\n", tlv_type);
@ -929,6 +937,14 @@ void mwifiex_11n_rxba_sync_event(struct mwifiex_private *priv,
tlv_seq_num = le16_to_cpu(tlv_rxba->seq_num);
tlv_bitmap_len = le16_to_cpu(tlv_rxba->bitmap_len);
if (size_add(sizeof(*tlv_rxba), tlv_bitmap_len) > tlv_buf_left) {
mwifiex_dbg(priv->adapter, WARN,
"TLV size (%zu) overflows event_buf buf_left=%d\n",
size_add(sizeof(*tlv_rxba), tlv_bitmap_len),
tlv_buf_left);
return;
}
mwifiex_dbg(priv->adapter, INFO,
"%pM tid=%d seq_num=%d bitmap_len=%d\n",
tlv_rxba->mac, tlv_rxba->tid, tlv_seq_num,
@ -965,8 +981,8 @@ void mwifiex_11n_rxba_sync_event(struct mwifiex_private *priv,
}
}
tlv_buf_left -= (sizeof(*tlv_rxba) + tlv_len);
tmp = (u8 *)tlv_rxba + tlv_len + sizeof(*tlv_rxba);
tlv_buf_left -= (sizeof(tlv_rxba->header) + tlv_len);
tmp = (u8 *)tlv_rxba + sizeof(tlv_rxba->header) + tlv_len;
tlv_rxba = (struct mwifiex_ie_types_rxba_sync *)tmp;
}
}
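/* Note on the new bounds checks: size_add() from <linux/overflow.h>
 * saturates at SIZE_MAX instead of wrapping, so a huge device-supplied
 * tlv_len cannot overflow the addition and slip past the comparison
 * against tlv_buf_left.
 */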

View File

@ -779,7 +779,7 @@ struct mwifiex_ie_types_rxba_sync {
u8 reserved;
__le16 seq_num;
__le16 bitmap_len;
u8 bitmap[1];
u8 bitmap[];
} __packed;
struct chan_band_param_set {

View File

@ -86,7 +86,8 @@ int mwifiex_process_rx_packet(struct mwifiex_private *priv,
rx_pkt_len = le16_to_cpu(local_rx_pd->rx_pkt_length);
rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_off;
if (sizeof(*rx_pkt_hdr) + rx_pkt_off > skb->len) {
if (sizeof(rx_pkt_hdr->eth803_hdr) + sizeof(rfc1042_header) +
rx_pkt_off > skb->len) {
mwifiex_dbg(priv->adapter, ERROR,
"wrong rx packet offset: len=%d, rx_pkt_off=%d\n",
skb->len, rx_pkt_off);
@ -95,12 +96,13 @@ int mwifiex_process_rx_packet(struct mwifiex_private *priv,
return -1;
}
if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
sizeof(bridge_tunnel_header))) ||
(!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
sizeof(rfc1042_header)) &&
ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_AARP &&
ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_IPX)) {
if (sizeof(*rx_pkt_hdr) + rx_pkt_off <= skb->len &&
((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
sizeof(bridge_tunnel_header))) ||
(!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
sizeof(rfc1042_header)) &&
ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_AARP &&
ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_IPX))) {
/*
* Replace the 803 header and rfc1042 header (llc/snap) with an
* EthernetII header, keep the src/dst and snap_type

View File

@ -93,13 +93,13 @@ __mt76_get_rxwi(struct mt76_dev *dev)
{
struct mt76_txwi_cache *t = NULL;
spin_lock(&dev->wed_lock);
spin_lock_bh(&dev->wed_lock);
if (!list_empty(&dev->rxwi_cache)) {
t = list_first_entry(&dev->rxwi_cache, struct mt76_txwi_cache,
list);
list_del(&t->list);
}
spin_unlock(&dev->wed_lock);
spin_unlock_bh(&dev->wed_lock);
return t;
}
@ -145,9 +145,9 @@ mt76_put_rxwi(struct mt76_dev *dev, struct mt76_txwi_cache *t)
if (!t)
return;
spin_lock(&dev->wed_lock);
spin_lock_bh(&dev->wed_lock);
list_add(&t->list, &dev->rxwi_cache);
spin_unlock(&dev->wed_lock);
spin_unlock_bh(&dev->wed_lock);
}
EXPORT_SYMBOL_GPL(mt76_put_rxwi);

View File

@ -131,15 +131,8 @@ u8 mt76x02_get_lna_gain(struct mt76x02_dev *dev,
s8 *lna_2g, s8 *lna_5g,
struct ieee80211_channel *chan)
{
u16 val;
u8 lna;
val = mt76x02_eeprom_get(dev, MT_EE_NIC_CONF_1);
if (val & MT_EE_NIC_CONF_1_LNA_EXT_2G)
*lna_2g = 0;
if (val & MT_EE_NIC_CONF_1_LNA_EXT_5G)
memset(lna_5g, 0, sizeof(s8) * 3);
if (chan->band == NL80211_BAND_2GHZ)
lna = *lna_2g;
else if (chan->hw_value <= 64)

View File

@ -256,7 +256,8 @@ void mt76x2_read_rx_gain(struct mt76x02_dev *dev)
struct ieee80211_channel *chan = dev->mphy.chandef.chan;
int channel = chan->hw_value;
s8 lna_5g[3], lna_2g;
u8 lna;
bool use_lna;
u8 lna = 0;
u16 val;
if (chan->band == NL80211_BAND_2GHZ)
@ -275,7 +276,15 @@ void mt76x2_read_rx_gain(struct mt76x02_dev *dev)
dev->cal.rx.mcu_gain |= (lna_5g[1] & 0xff) << 16;
dev->cal.rx.mcu_gain |= (lna_5g[2] & 0xff) << 24;
lna = mt76x02_get_lna_gain(dev, &lna_2g, lna_5g, chan);
val = mt76x02_eeprom_get(dev, MT_EE_NIC_CONF_1);
if (chan->band == NL80211_BAND_2GHZ)
use_lna = !(val & MT_EE_NIC_CONF_1_LNA_EXT_2G);
else
use_lna = !(val & MT_EE_NIC_CONF_1_LNA_EXT_5G);
if (use_lna)
lna = mt76x02_get_lna_gain(dev, &lna_2g, lna_5g, chan);
dev->cal.rx.lna_gain = mt76x02_sign_extend(lna, 8);
}
EXPORT_SYMBOL_GPL(mt76x2_read_rx_gain);

View File

@ -46,6 +46,7 @@ struct rtw8723du_efuse {
u8 vender_id[2]; /* 0x100 */
u8 product_id[2]; /* 0x102 */
u8 usb_option; /* 0x104 */
u8 res5[2]; /* 0x105 */
u8 mac_addr[ETH_ALEN]; /* 0x107 */
};

View File

@ -3998,7 +3998,6 @@ ptp_ocp_device_init(struct ptp_ocp *bp, struct pci_dev *pdev)
return 0;
out:
ptp_ocp_dev_release(&bp->dev);
put_device(&bp->dev);
return err;
}

View File

@ -123,8 +123,18 @@ static inline ssize_t vringh_iov_xfer(struct vringh *vrh,
done += partlen;
len -= partlen;
ptr += partlen;
iov->consumed += partlen;
iov->iov[iov->i].iov_len -= partlen;
iov->iov[iov->i].iov_base += partlen;
vringh_kiov_advance(iov, partlen);
if (!iov->iov[iov->i].iov_len) {
/* Fix up old iov element then increment. */
iov->iov[iov->i].iov_len = iov->consumed;
iov->iov[iov->i].iov_base -= iov->consumed;
iov->consumed = 0;
iov->i++;
}
}
return done;
}

View File

@ -1307,7 +1307,7 @@ static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
static inline struct bpf_trampoline *bpf_trampoline_get(u64 key,
struct bpf_attach_target_info *tgt_info)
{
return ERR_PTR(-EOPNOTSUPP);
return NULL;
}
static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
#define DEFINE_BPF_DISPATCHER(name)
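/* Returning NULL instead of ERR_PTR(-EOPNOTSUPP) matches how callers
 * test the result; sketch of the assumed calling pattern:
 *
 *   tr = bpf_trampoline_get(key, &tgt_info);
 *   if (!tr)
 *           return -EOPNOTSUPP;
 *
 * An ERR_PTR() would pass the !tr check and later be dereferenced.
 */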

View File

@ -9,6 +9,7 @@ struct ip_ct_sctp {
enum sctp_conntrack state;
__be32 vtag[IP_CT_DIR_MAX];
u8 init[IP_CT_DIR_MAX];
u8 last_dir;
u8 flags;
};

View File

@ -350,7 +350,7 @@ struct hci_dev {
struct list_head list;
struct mutex lock;
char name[8];
const char *name;
unsigned long flags;
__u16 id;
__u8 bus;

View File

@ -5941,6 +5941,7 @@ void wiphy_delayed_work_cancel(struct wiphy *wiphy,
* @event_lock: (private) lock for event list
* @owner_nlportid: (private) owner socket port ID
* @nl_owner_dead: (private) owner socket went away
* @cqm_rssi_work: (private) CQM RSSI reporting work
* @cqm_config: (private) nl80211 RSSI monitor state
* @pmsr_list: (private) peer measurement requests
* @pmsr_lock: (private) peer measurements requests/results lock
@ -6013,7 +6014,8 @@ struct wireless_dev {
} wext;
#endif
struct cfg80211_cqm_config *cqm_config;
struct wiphy_work cqm_rssi_work;
struct cfg80211_cqm_config __rcu *cqm_config;
struct list_head pmsr_list;
spinlock_t pmsr_lock;
@ -7231,7 +7233,7 @@ struct cfg80211_rx_assoc_resp {
int uapsd_queues;
const u8 *ap_mld_addr;
struct {
const u8 *addr;
u8 addr[ETH_ALEN] __aligned(2);
struct cfg80211_bss *bss;
u16 status;
} links[IEEE80211_MLD_MAX_NUM_LINKS];

View File

@ -154,6 +154,7 @@ struct fib_info {
int fib_nhs;
bool fib_nh_is_v6;
bool nh_updated;
bool pfsrc_removed;
struct nexthop *nh;
struct rcu_head rcu;
struct fib_nh fib_nh[];

View File

@ -103,9 +103,10 @@ struct mana_txq {
/* skb data and frags dma mappings */
struct mana_skb_head {
dma_addr_t dma_handle[MAX_SKB_FRAGS + 1];
/* GSO pkts may have 2 SGEs for the linear part*/
dma_addr_t dma_handle[MAX_SKB_FRAGS + 2];
u32 size[MAX_SKB_FRAGS + 1];
u32 size[MAX_SKB_FRAGS + 2];
};
#define MANA_HEADROOM sizeof(struct mana_skb_head)

View File

@ -539,7 +539,7 @@ static inline int neigh_output(struct neighbour *n, struct sk_buff *skb,
READ_ONCE(hh->hh_len))
return neigh_hh_output(hh, skb);
return n->output(n, skb);
return READ_ONCE(n->output)(n, skb);
}
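/* The READ_ONCE() above pairs with WRITE_ONCE() on the writer side in
 * net/core/neighbour.c (see the hunks further down): n->output is
 * flipped between neigh_blackhole, ops->output and
 * ops->connected_output while other CPUs call it locklessly, so both
 * sides of the data race are now annotated.
 */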
static inline struct neighbour *

View File

@ -16,13 +16,13 @@
* page_pool_alloc_pages() call. Drivers should use
* page_pool_dev_alloc_pages() replacing dev_alloc_pages().
*
* API keeps track of in-flight pages, in order to let API user know
* The API keeps track of in-flight pages, in order to let API users know
* when it is safe to free a page_pool object. Thus, API users
* must call page_pool_put_page() to free the page, or attach
* the page to a page_pool-aware objects like skbs marked with
* the page to a page_pool-aware object like skbs marked with
* skb_mark_for_recycle().
*
* API user must call page_pool_put_page() once on a page, as it
* API users must call page_pool_put_page() once on a page, as it
* will either recycle the page, or in case of refcnt > 1, it will
* release the DMA mapping and in-flight state accounting.
*/

View File

@ -348,12 +348,14 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, gfp_t gfp,
bool force_schedule);
static inline void tcp_dec_quickack_mode(struct sock *sk,
const unsigned int pkts)
static inline void tcp_dec_quickack_mode(struct sock *sk)
{
struct inet_connection_sock *icsk = inet_csk(sk);
if (icsk->icsk_ack.quick) {
/* How many ACKs S/ACKing new data have we sent? */
const unsigned int pkts = inet_csk_ack_scheduled(sk) ? 1 : 0;
if (pkts >= icsk->icsk_ack.quick) {
icsk->icsk_ack.quick = 0;
/* Leaving quickack mode we deflate ATO. */
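/* Intuition for the change above (caller behaviour as recalled, not
 * shown in this diff): the old signature let every transmitted segment
 * drain the quick-ack budget, so pure data segments that ACK nothing
 * new could exit quick-ack mode early and re-enable delayed ACKs. Now
 * the helper itself counts one "ACK of new data" only when an ACK was
 * actually scheduled, via inet_csk_ack_scheduled().
 */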

View File

@ -965,37 +965,31 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
return !ret ? NULL : ret + LLIST_NODE_SZ;
}
/* Most of the logic is taken from setup_kmalloc_cache_index_table() */
static __init int bpf_mem_cache_adjust_size(void)
{
unsigned int size, index;
unsigned int size;
/* Normally KMALLOC_MIN_SIZE is 8-bytes, but it can be
* up-to 256-bytes.
/* Adjusting the indexes in size_index() according to the object_size
* of underlying slab cache, so bpf_mem_alloc() will select a
* bpf_mem_cache with unit_size equal to the object_size of
* the underlying slab cache.
*
* The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
* 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
*/
size = KMALLOC_MIN_SIZE;
if (size <= 192)
index = size_index[(size - 1) / 8];
else
index = fls(size - 1) - 1;
for (size = 8; size < KMALLOC_MIN_SIZE && size <= 192; size += 8)
size_index[(size - 1) / 8] = index;
for (size = 192; size >= 8; size -= 8) {
unsigned int kmalloc_size, index;
/* The minimal alignment is 64-bytes, so disable 96-bytes cache and
* use 128-bytes cache instead.
*/
if (KMALLOC_MIN_SIZE >= 64) {
index = size_index[(128 - 1) / 8];
for (size = 64 + 8; size <= 96; size += 8)
size_index[(size - 1) / 8] = index;
}
kmalloc_size = kmalloc_size_roundup(size);
if (kmalloc_size == size)
continue;
/* The minimal alignment is 128-bytes, so disable 192-bytes cache and
* use 256-bytes cache instead.
*/
if (KMALLOC_MIN_SIZE >= 128) {
index = fls(256 - 1) - 1;
for (size = 128 + 8; size <= 192; size += 8)
if (kmalloc_size <= 192)
index = size_index[(kmalloc_size - 1) / 8];
else
index = fls(kmalloc_size - 1) - 1;
/* Only overwrite if necessary */
if (size_index[(size - 1) / 8] != index)
size_index[(size - 1) / 8] = index;
}

View File

@ -253,6 +253,9 @@ int bpf_mprog_attach(struct bpf_mprog_entry *entry,
goto out;
}
idx = tidx;
} else if (bpf_mprog_total(entry) == bpf_mprog_max()) {
ret = -ERANGE;
goto out;
}
if (flags & BPF_F_BEFORE) {
tidx = bpf_mprog_pos_before(entry, &rtuple);

View File

@ -4047,11 +4047,9 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
bitmap_from_u64(mask, bt_reg_mask(bt));
for_each_set_bit(i, mask, 32) {
reg = &st->frame[0]->regs[i];
if (reg->type != SCALAR_VALUE) {
bt_clear_reg(bt, i);
continue;
}
reg->precise = true;
bt_clear_reg(bt, i);
if (reg->type == SCALAR_VALUE)
reg->precise = true;
}
return 0;
}

View File

@ -2413,34 +2413,41 @@ int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type,
if (!test_bit(HCI_CONN_AUTH, &conn->flags))
goto auth;
/* An authenticated FIPS approved combination key has sufficient
* security for security level 4. */
if (conn->key_type == HCI_LK_AUTH_COMBINATION_P256 &&
sec_level == BT_SECURITY_FIPS)
goto encrypt;
/* An authenticated combination key has sufficient security for
security level 3. */
if ((conn->key_type == HCI_LK_AUTH_COMBINATION_P192 ||
conn->key_type == HCI_LK_AUTH_COMBINATION_P256) &&
sec_level == BT_SECURITY_HIGH)
goto encrypt;
/* An unauthenticated combination key has sufficient security for
security level 1 and 2. */
if ((conn->key_type == HCI_LK_UNAUTH_COMBINATION_P192 ||
conn->key_type == HCI_LK_UNAUTH_COMBINATION_P256) &&
(sec_level == BT_SECURITY_MEDIUM || sec_level == BT_SECURITY_LOW))
goto encrypt;
/* A combination key has always sufficient security for the security
levels 1 or 2. High security level requires the combination key
is generated using maximum PIN code length (16).
For pre 2.1 units. */
if (conn->key_type == HCI_LK_COMBINATION &&
(sec_level == BT_SECURITY_MEDIUM || sec_level == BT_SECURITY_LOW ||
conn->pin_length == 16))
goto encrypt;
switch (conn->key_type) {
case HCI_LK_AUTH_COMBINATION_P256:
/* An authenticated FIPS approved combination key has
* sufficient security for security level 4 or lower.
*/
if (sec_level <= BT_SECURITY_FIPS)
goto encrypt;
break;
case HCI_LK_AUTH_COMBINATION_P192:
/* An authenticated combination key has sufficient security for
* security level 3 or lower.
*/
if (sec_level <= BT_SECURITY_HIGH)
goto encrypt;
break;
case HCI_LK_UNAUTH_COMBINATION_P192:
case HCI_LK_UNAUTH_COMBINATION_P256:
/* An unauthenticated combination key has sufficient security
* for security level 2 or lower.
*/
if (sec_level <= BT_SECURITY_MEDIUM)
goto encrypt;
break;
case HCI_LK_COMBINATION:
/* A combination key has always sufficient security for the
* security levels 2 or lower. High security level requires the
* combination key is generated using maximum PIN code length
* (16). For pre 2.1 units.
*/
if (sec_level <= BT_SECURITY_MEDIUM || conn->pin_length == 16)
goto encrypt;
break;
default:
break;
}
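/* The comparisons above rely on the sec_level ordering in
 * include/net/bluetooth/bluetooth.h:
 *   BT_SECURITY_SDP < LOW < MEDIUM < HIGH < FIPS
 * so e.g. "sec_level <= BT_SECURITY_MEDIUM" covers security levels
 * 1 and 2, matching the prose in the old comments.
 */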
auth:
if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags))

View File

@ -2617,7 +2617,11 @@ int hci_register_dev(struct hci_dev *hdev)
if (id < 0)
return id;
snprintf(hdev->name, sizeof(hdev->name), "hci%d", id);
error = dev_set_name(&hdev->dev, "hci%u", id);
if (error)
return error;
hdev->name = dev_name(&hdev->dev);
hdev->id = id;
BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);
@ -2639,8 +2643,6 @@ int hci_register_dev(struct hci_dev *hdev)
if (!IS_ERR_OR_NULL(bt_debugfs))
hdev->debugfs = debugfs_create_dir(hdev->name, bt_debugfs);
dev_set_name(&hdev->dev, "%s", hdev->name);
error = device_add(&hdev->dev);
if (error < 0)
goto err_wqueue;
@ -2784,6 +2786,7 @@ void hci_release_dev(struct hci_dev *hdev)
hci_conn_params_clear_all(hdev);
hci_discovery_filter_clear(hdev);
hci_blocked_keys_clear(hdev);
hci_codec_list_clear(&hdev->local_codecs);
hci_dev_unlock(hdev);
ida_simple_remove(&hci_index_ida, hdev->id);
@ -3418,7 +3421,12 @@ static void hci_link_tx_to(struct hci_dev *hdev, __u8 type)
if (c->type == type && c->sent) {
bt_dev_err(hdev, "killing stalled connection %pMR",
&c->dst);
/* hci_disconnect might sleep, so, we have to release
* the RCU read lock before calling it.
*/
rcu_read_unlock();
hci_disconnect(c, HCI_ERROR_REMOTE_USER_TERM);
rcu_read_lock();
}
}

View File

@ -33,6 +33,7 @@
#include "hci_request.h"
#include "hci_debugfs.h"
#include "hci_codec.h"
#include "a2mp.h"
#include "amp.h"
#include "smp.h"

View File

@ -71,7 +71,5 @@ struct sk_buff *hci_prepare_cmd(struct hci_dev *hdev, u16 opcode, u32 plen,
void hci_req_add_le_scan_disable(struct hci_request *req, bool rpa_le_conn);
void hci_req_add_le_passive_scan(struct hci_request *req);
void hci_req_prepare_suspend(struct hci_dev *hdev, enum suspended_state next);
void hci_request_setup(struct hci_dev *hdev);
void hci_request_cancel_all(struct hci_dev *hdev);

View File

@ -413,11 +413,6 @@ static int hci_le_scan_restart_sync(struct hci_dev *hdev)
LE_SCAN_FILTER_DUP_ENABLE);
}
static int le_scan_restart_sync(struct hci_dev *hdev, void *data)
{
return hci_le_scan_restart_sync(hdev);
}
static void le_scan_restart(struct work_struct *work)
{
struct hci_dev *hdev = container_of(work, struct hci_dev,
@ -427,15 +422,15 @@ static void le_scan_restart(struct work_struct *work)
bt_dev_dbg(hdev, "");
hci_dev_lock(hdev);
status = hci_cmd_sync_queue(hdev, le_scan_restart_sync, NULL, NULL);
status = hci_le_scan_restart_sync(hdev);
if (status) {
bt_dev_err(hdev, "failed to restart LE scan: status %d",
status);
goto unlock;
return;
}
hci_dev_lock(hdev);
if (!test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks) ||
!hdev->discovery.scan_start)
goto unlock;
@ -5079,6 +5074,7 @@ int hci_dev_close_sync(struct hci_dev *hdev)
memset(hdev->eir, 0, sizeof(hdev->eir));
memset(hdev->dev_class, 0, sizeof(hdev->dev_class));
bacpy(&hdev->random_addr, BDADDR_ANY);
hci_codec_list_clear(&hdev->local_codecs);
hci_dev_put(hdev);
return err;

View File

@ -502,7 +502,7 @@ drop:
}
/* -------- Socket interface ---------- */
static struct sock *__iso_get_sock_listen_by_addr(bdaddr_t *ba)
static struct sock *__iso_get_sock_listen_by_addr(bdaddr_t *src, bdaddr_t *dst)
{
struct sock *sk;
@ -510,7 +510,10 @@ static struct sock *__iso_get_sock_listen_by_addr(bdaddr_t *ba)
if (sk->sk_state != BT_LISTEN)
continue;
if (!bacmp(&iso_pi(sk)->src, ba))
if (bacmp(&iso_pi(sk)->dst, dst))
continue;
if (!bacmp(&iso_pi(sk)->src, src))
return sk;
}
@ -952,7 +955,7 @@ static int iso_listen_cis(struct sock *sk)
write_lock(&iso_sk_list.lock);
if (__iso_get_sock_listen_by_addr(&iso_pi(sk)->src))
if (__iso_get_sock_listen_by_addr(&iso_pi(sk)->src, &iso_pi(sk)->dst))
err = -EADDRINUSE;
write_unlock(&iso_sk_list.lock);


@@ -294,7 +294,7 @@ int br_nf_pre_routing_finish_bridge(struct net *net, struct sock *sk, struct sk_
/* tell br_dev_xmit to continue with forwarding */
nf_bridge->bridged_dnat = 1;
/* FIXME Need to refragment */
ret = neigh->output(neigh, skb);
ret = READ_ONCE(neigh->output)(neigh, skb);
}
neigh_release(neigh);
return ret;


@@ -410,7 +410,7 @@ static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev,
*/
__skb_queue_purge(&n->arp_queue);
n->arp_queue_len_bytes = 0;
n->output = neigh_blackhole;
WRITE_ONCE(n->output, neigh_blackhole);
if (n->nud_state & NUD_VALID)
n->nud_state = NUD_NOARP;
else
@@ -920,7 +920,7 @@ static void neigh_suspect(struct neighbour *neigh)
{
neigh_dbg(2, "neigh %p is suspected\n", neigh);
neigh->output = neigh->ops->output;
WRITE_ONCE(neigh->output, neigh->ops->output);
}
/* Neighbour state is OK;
@@ -932,7 +932,7 @@ static void neigh_connect(struct neighbour *neigh)
{
neigh_dbg(2, "neigh %p is connected\n", neigh);
neigh->output = neigh->ops->connected_output;
WRITE_ONCE(neigh->output, neigh->ops->connected_output);
}
static void neigh_periodic_work(struct work_struct *work)
@@ -988,7 +988,9 @@ static void neigh_periodic_work(struct work_struct *work)
(state == NUD_FAILED ||
!time_in_range_open(jiffies, n->used,
n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) {
*np = n->next;
rcu_assign_pointer(*np,
rcu_dereference_protected(n->next,
lockdep_is_held(&tbl->lock)));
neigh_mark_dead(n);
write_unlock(&n->lock);
neigh_cleanup_and_release(n);
@@ -1447,7 +1449,7 @@ static int __neigh_update(struct neighbour *neigh, const u8 *lladdr,
if (n2)
n1 = n2;
}
n1->output(n1, skb);
READ_ONCE(n1->output)(n1, skb);
if (n2)
neigh_release(n2);
rcu_read_unlock();
@@ -3153,7 +3155,7 @@ int neigh_xmit(int index, struct net_device *dev,
rcu_read_unlock();
goto out_kfree_skb;
}
err = neigh->output(neigh, skb);
err = READ_ONCE(neigh->output)(neigh, skb);
rcu_read_unlock();
}
else if (index == NEIGH_LINK_TABLE) {
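All the neighbour hunks are one fix: n->output is flipped between neigh_blackhole, ops->output and ops->connected_output while readers call it locklessly, so the accesses get paired READ_ONCE()/WRITE_ONCE() annotations to prevent load/store tearing and KCSAN reports. The bare idiom, with hypothetical names:

    struct hook {
        int (*fn)(void *arg);    /* written and read without a common lock */
    };

    static void hook_update(struct hook *h, int (*new_fn)(void *))
    {
        WRITE_ONCE(h->fn, new_fn);            /* one untorn store */
    }

    static int hook_call(struct hook *h, void *arg)
    {
        int (*fn)(void *) = READ_ONCE(h->fn); /* load the pointer once */

        return fn(arg);
    }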


@@ -668,6 +668,8 @@ BPF_CALL_4(bpf_msg_redirect_map, struct sk_msg *, msg,
sk = __sock_map_lookup_elem(map, key);
if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
return SK_DROP;
if (!(flags & BPF_F_INGRESS) && !sk_is_tcp(sk))
return SK_DROP;
msg->flags = flags;
msg->sk_redir = sk;
@@ -1267,6 +1269,8 @@ BPF_CALL_4(bpf_msg_redirect_hash, struct sk_msg *, msg,
sk = __sock_hash_lookup_elem(map, key);
if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
return SK_DROP;
if (!(flags & BPF_F_INGRESS) && !sk_is_tcp(sk))
return SK_DROP;
msg->flags = flags;
msg->sk_redir = sk;
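These helpers back bpf_msg_redirect_map()/bpf_msg_redirect_hash() in sk_msg programs; with the fix, an egress redirect (no BPF_F_INGRESS) whose target is not a TCP socket fails fast with SK_DROP instead of corrupting state deeper in the stack. A minimal program exercising the path (map name, key and sizing are illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_SOCKMAP);
        __uint(max_entries, 16);
        __type(key, __u32);
        __type(value, __u64);
    } redir_map SEC(".maps");

    SEC("sk_msg")
    int msg_redirect(struct sk_msg_md *msg)
    {
        /* egress redirect: dropped by the kernel if slot 0 holds a
         * non-TCP (e.g. unix or vsock) socket
         */
        return bpf_msg_redirect_map(msg, &redir_map, 0, 0);
    }

    char _license[] SEC("license") = "GPL";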


@@ -21,16 +21,6 @@ struct plca_reply_data {
#define PLCA_REPDATA(__reply_base) \
container_of(__reply_base, struct plca_reply_data, base)
static void plca_update_sint(int *dst, const struct nlattr *attr,
bool *mod)
{
if (!attr)
return;
*dst = nla_get_u32(attr);
*mod = true;
}
// PLCA get configuration message ------------------------------------------- //
const struct nla_policy ethnl_plca_get_cfg_policy[] = {
@@ -38,6 +28,29 @@ const struct nla_policy ethnl_plca_get_cfg_policy[] = {
NLA_POLICY_NESTED(ethnl_header_policy),
};
static void plca_update_sint(int *dst, struct nlattr **tb, u32 attrid,
bool *mod)
{
const struct nlattr *attr = tb[attrid];
if (!attr ||
WARN_ON_ONCE(attrid >= ARRAY_SIZE(ethnl_plca_set_cfg_policy)))
return;
switch (ethnl_plca_set_cfg_policy[attrid].type) {
case NLA_U8:
*dst = nla_get_u8(attr);
break;
case NLA_U32:
*dst = nla_get_u32(attr);
break;
default:
WARN_ON_ONCE(1);
}
*mod = true;
}
static int plca_get_cfg_prepare_data(const struct ethnl_req_info *req_base,
struct ethnl_reply_data *reply_base,
const struct genl_info *info)
@@ -144,13 +157,13 @@ ethnl_set_plca(struct ethnl_req_info *req_info, struct genl_info *info)
return -EOPNOTSUPP;
memset(&plca_cfg, 0xff, sizeof(plca_cfg));
plca_update_sint(&plca_cfg.enabled, tb[ETHTOOL_A_PLCA_ENABLED], &mod);
plca_update_sint(&plca_cfg.node_id, tb[ETHTOOL_A_PLCA_NODE_ID], &mod);
plca_update_sint(&plca_cfg.node_cnt, tb[ETHTOOL_A_PLCA_NODE_CNT], &mod);
plca_update_sint(&plca_cfg.to_tmr, tb[ETHTOOL_A_PLCA_TO_TMR], &mod);
plca_update_sint(&plca_cfg.burst_cnt, tb[ETHTOOL_A_PLCA_BURST_CNT],
plca_update_sint(&plca_cfg.enabled, tb, ETHTOOL_A_PLCA_ENABLED, &mod);
plca_update_sint(&plca_cfg.node_id, tb, ETHTOOL_A_PLCA_NODE_ID, &mod);
plca_update_sint(&plca_cfg.node_cnt, tb, ETHTOOL_A_PLCA_NODE_CNT, &mod);
plca_update_sint(&plca_cfg.to_tmr, tb, ETHTOOL_A_PLCA_TO_TMR, &mod);
plca_update_sint(&plca_cfg.burst_cnt, tb, ETHTOOL_A_PLCA_BURST_CNT,
&mod);
plca_update_sint(&plca_cfg.burst_tmr, tb[ETHTOOL_A_PLCA_BURST_TMR],
plca_update_sint(&plca_cfg.burst_tmr, tb, ETHTOOL_A_PLCA_BURST_TMR,
&mod);
if (!mod)
return 0;
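The underlying bug: several PLCA attributes are NLA_U8 in the set policy, but the old helper read every one with nla_get_u32(), so three bytes of adjacent netlink message memory leaked into the value. The rewritten helper above dispatches on the policy type instead. Reduced to its essence (hypothetical helper, not the ethtool code):

    static void read_sint(int *dst, const struct nlattr *attr)
    {
        /* the policy declared this attribute NLA_U8: one byte of payload */
        *dst = nla_get_u8(attr);    /* correct width */
        /* the old code always did *dst = nla_get_u32(attr); - a 4-byte
         * read of a 1-byte payload, i.e. 3 bytes of neighbouring data
         */
    }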


@@ -1887,6 +1887,7 @@ int fib_sync_down_addr(struct net_device *dev, __be32 local)
continue;
if (fi->fib_prefsrc == local) {
fi->fib_flags |= RTNH_F_DEAD;
fi->pfsrc_removed = true;
ret++;
}
}


@@ -2027,6 +2027,7 @@ void fib_table_flush_external(struct fib_table *tb)
int fib_table_flush(struct net *net, struct fib_table *tb, bool flush_all)
{
struct trie *t = (struct trie *)tb->tb_data;
struct nl_info info = { .nl_net = net };
struct key_vector *pn = t->kv;
unsigned long cindex = 1;
struct hlist_node *tmp;
@@ -2089,6 +2090,9 @@ int fib_table_flush(struct net *net, struct fib_table *tb, bool flush_all)
fib_notify_alias_delete(net, n->key, &n->leaf, fa,
NULL);
if (fi->pfsrc_removed)
rtmsg_fib(RTM_DELROUTE, htonl(n->key), fa,
KEYLENGTH - fa->fa_slen, tb->tb_id, &info, 0);
hlist_del_rcu(&fa->fa_list);
fib_release_info(fa->fa_info);
alias_free_mem_rcu(fa);


@@ -3417,6 +3417,8 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
fa->fa_type == fri.type) {
fri.offload = READ_ONCE(fa->offload);
fri.trap = READ_ONCE(fa->trap);
fri.offload_failed =
READ_ONCE(fa->offload_failed);
break;
}
}


@@ -1621,16 +1621,13 @@ EXPORT_SYMBOL(tcp_read_sock);
int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
{
struct tcp_sock *tp = tcp_sk(sk);
u32 seq = tp->copied_seq;
struct sk_buff *skb;
int copied = 0;
u32 offset;
if (sk->sk_state == TCP_LISTEN)
return -ENOTCONN;
while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
u8 tcp_flags;
int used;
@@ -1643,13 +1640,10 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
copied = used;
break;
}
seq += used;
copied += used;
if (tcp_flags & TCPHDR_FIN) {
++seq;
if (tcp_flags & TCPHDR_FIN)
break;
}
}
return copied;
}


@@ -222,6 +222,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
int *addr_len)
{
struct tcp_sock *tcp = tcp_sk(sk);
int peek = flags & MSG_PEEK;
u32 seq = tcp->copied_seq;
struct sk_psock *psock;
int copied = 0;
@@ -311,7 +312,8 @@ msg_bytes_ready:
copied = -EAGAIN;
}
out:
WRITE_ONCE(tcp->copied_seq, seq);
if (!peek)
WRITE_ONCE(tcp->copied_seq, seq);
tcp_rcv_space_adjust(sk);
if (copied > 0)
__tcp_cleanup_rbuf(sk, copied);
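What the fix restores is plain MSG_PEEK semantics on sockmap sockets: a peek must not consume, so copied_seq may only advance for consuming reads. From user space the expectation looks like this (a sketch; assumes 'fd' is a connected TCP socket with a verdict program attached):

    #include <string.h>
    #include <sys/socket.h>

    static int peek_then_read(int fd)
    {
        char a[16], b[16];
        ssize_t n = recv(fd, a, sizeof(a), MSG_PEEK); /* must not consume */

        if (n <= 0)
            return -1;
        if (recv(fd, b, n, MSG_WAITALL) != n)         /* consuming read */
            return -1;
        return memcmp(a, b, n);                       /* expect 0 */
    }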


@@ -253,6 +253,19 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
if (unlikely(len > icsk->icsk_ack.rcv_mss +
MAX_TCP_OPTION_SPACE))
tcp_gro_dev_warn(sk, skb, len);
/* If the skb has a len of exactly 1*MSS and has the PSH bit
* set then it is likely the end of an application write. So
* more data may not be arriving soon, and yet the data sender
* may be waiting for an ACK if cwnd-bound or using TX zero
* copy. So we set ICSK_ACK_PUSHED here so that
* tcp_cleanup_rbuf() will send an ACK immediately if the app
* reads all of the data and is not ping-pong. If len > MSS
* then this logic does not matter (and does not hurt) because
* tcp_cleanup_rbuf() will always ACK immediately if the app
* reads data and there is more than an MSS of unACKed data.
*/
if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_PSH)
icsk->icsk_ack.pending |= ICSK_ACK_PUSHED;
} else {
/* Otherwise, we make more careful check taking into account,
* that SACKs block is variable.


@@ -177,8 +177,7 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
}
/* Account for an ACK we sent. */
static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
u32 rcv_nxt)
static inline void tcp_event_ack_sent(struct sock *sk, u32 rcv_nxt)
{
struct tcp_sock *tp = tcp_sk(sk);
@@ -192,7 +191,7 @@ static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
if (unlikely(rcv_nxt != tp->rcv_nxt))
return; /* Special ACK sent by DCTCP to reflect ECN */
tcp_dec_quickack_mode(sk, pkts);
tcp_dec_quickack_mode(sk);
inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK);
}
@@ -1387,7 +1386,7 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
sk, skb);
if (likely(tcb->tcp_flags & TCPHDR_ACK))
tcp_event_ack_sent(sk, tcp_skb_pcount(skb), rcv_nxt);
tcp_event_ack_sent(sk, rcv_nxt);
if (skb->len != tcp_header_size) {
tcp_event_data_sent(tp, sk);


@@ -1640,9 +1640,12 @@ process:
struct sock *nsk;
sk = req->rsk_listener;
drop_reason = tcp_inbound_md5_hash(sk, skb,
&hdr->saddr, &hdr->daddr,
AF_INET6, dif, sdif);
if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
drop_reason = SKB_DROP_REASON_XFRM_POLICY;
else
drop_reason = tcp_inbound_md5_hash(sk, skb,
&hdr->saddr, &hdr->daddr,
AF_INET6, dif, sdif);
if (drop_reason) {
sk_drops_add(sk, skb);
reqsk_put(req);
@@ -1689,6 +1692,7 @@ process:
}
goto discard_and_relse;
}
nf_reset_ct(skb);
if (nsk == sk) {
reqsk_put(req);
tcp_v6_restore_cb(skb);


@@ -507,7 +507,6 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
*/
if (len > INT_MAX - transhdrlen)
return -EMSGSIZE;
ulen = len + transhdrlen;
/* Mirror BSD error message compatibility */
if (msg->msg_flags & MSG_OOB)
@@ -628,6 +627,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
back_from_confirm:
lock_sock(sk);
ulen = len + (skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0);
err = ip6_append_data(sk, ip_generic_getfrag, msg,
ulen, transhdrlen, &ipc6,
&fl6, (struct rt6_info *)dst,
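The parentheses in the restored line are load-bearing: the conditional operator binds more loosely than '+', so without them the whole sum becomes the condition and ulen collapses to transhdrlen or 0. In isolation (illustrative values, not the l2tp code):

    int len = 100, transhdrlen = 8, queue_empty = 1;

    int ulen = len + (queue_empty ? transhdrlen : 0);  /* 108, as intended */
    int bogus = len + queue_empty ? transhdrlen : 0;   /* parses as
                                                        * (len + queue_empty) ? 8 : 0,
                                                        * i.e. 8 */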


@@ -566,6 +566,9 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
}
err = ieee80211_key_link(key, link, sta);
/* KRACK protection, shouldn't happen but just silently accept key */
if (err == -EALREADY)
err = 0;
out_unlock:
mutex_unlock(&local->sta_mtx);
@@ -1857,7 +1860,8 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
/* VHT can override some HT caps such as the A-MSDU max length */
if (params->vht_capa)
ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
params->vht_capa, link_sta);
params->vht_capa, NULL,
link_sta);
if (params->he_capa)
ieee80211_he_cap_ie_to_sta_he_cap(sdata, sband,


@@ -1072,7 +1072,7 @@ static void ieee80211_update_sta_info(struct ieee80211_sub_if_data *sdata,
&chandef);
memcpy(&cap_ie, elems->vht_cap_elem, sizeof(cap_ie));
ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
&cap_ie,
&cap_ie, NULL,
&sta->deflink);
if (memcmp(&cap, &sta->sta.deflink.vht_cap, sizeof(cap)))
rates_updated |= true;


@@ -676,7 +676,7 @@ struct ieee80211_if_mesh {
struct timer_list mesh_path_root_timer;
unsigned long wrkq_flags;
unsigned long mbss_changed;
unsigned long mbss_changed[64 / BITS_PER_LONG];
bool userspace_handles_dfs;
@@ -2141,6 +2141,7 @@ void
ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
struct ieee80211_supported_band *sband,
const struct ieee80211_vht_cap *vht_cap_ie,
const struct ieee80211_vht_cap *vht_cap_ie2,
struct link_sta_info *link_sta);
enum ieee80211_sta_rx_bandwidth
ieee80211_sta_cap_rx_bw(struct link_sta_info *link_sta);


@@ -802,6 +802,9 @@ static void ieee80211_key_destroy(struct ieee80211_key *key,
void ieee80211_key_free_unused(struct ieee80211_key *key)
{
if (!key)
return;
WARN_ON(key->sdata || key->local);
ieee80211_key_free_common(key);
}
@@ -854,7 +857,7 @@ int ieee80211_key_link(struct ieee80211_key *key,
* can cause warnings to appear.
*/
bool delay_tailroom = sdata->vif.type == NL80211_IFTYPE_STATION;
int ret = -EOPNOTSUPP;
int ret;
mutex_lock(&sdata->local->key_mtx);
@@ -868,8 +871,10 @@ int ieee80211_key_link(struct ieee80211_key *key,
* the same cipher. Enforce the assumption for pairwise keys.
*/
if ((alt_key && alt_key->conf.cipher != key->conf.cipher) ||
(old_key && old_key->conf.cipher != key->conf.cipher))
(old_key && old_key->conf.cipher != key->conf.cipher)) {
ret = -EOPNOTSUPP;
goto out;
}
} else if (sta) {
struct link_sta_info *link_sta = &sta->deflink;
int link_id = key->conf.link_id;
@@ -895,8 +900,10 @@ int ieee80211_key_link(struct ieee80211_key *key,
/* Non-pairwise keys must also not switch the cipher on rekey */
if (!pairwise) {
if (old_key && old_key->conf.cipher != key->conf.cipher)
if (old_key && old_key->conf.cipher != key->conf.cipher) {
ret = -EOPNOTSUPP;
goto out;
}
}
/*
@@ -904,9 +911,8 @@ int ieee80211_key_link(struct ieee80211_key *key,
* new version of the key to avoid nonce reuse or replay issues.
*/
if (ieee80211_key_identical(sdata, old_key, key)) {
ieee80211_key_free_unused(key);
ret = 0;
goto out;
ret = -EALREADY;
goto unlock;
}
key->local = sdata->local;
@@ -930,7 +936,11 @@ int ieee80211_key_link(struct ieee80211_key *key,
ieee80211_key_free(key, delay_tailroom);
}
key = NULL;
out:
ieee80211_key_free_unused(key);
unlock:
mutex_unlock(&sdata->local->key_mtx);
return ret;


@@ -1175,7 +1175,7 @@ void ieee80211_mbss_info_change_notify(struct ieee80211_sub_if_data *sdata,
/* if we race with running work, worst case this work becomes a noop */
for_each_set_bit(bit, &bits, sizeof(changed) * BITS_PER_BYTE)
set_bit(bit, &ifmsh->mbss_changed);
set_bit(bit, ifmsh->mbss_changed);
set_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags);
wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
}
@@ -1257,7 +1257,7 @@ void ieee80211_stop_mesh(struct ieee80211_sub_if_data *sdata)
/* clear any mesh work (for next join) we may have accrued */
ifmsh->wrkq_flags = 0;
ifmsh->mbss_changed = 0;
memset(ifmsh->mbss_changed, 0, sizeof(ifmsh->mbss_changed));
local->fif_other_bss--;
atomic_dec(&local->iff_allmultis);
@@ -1724,9 +1724,9 @@ static void mesh_bss_info_changed(struct ieee80211_sub_if_data *sdata)
u32 bit;
u64 changed = 0;
for_each_set_bit(bit, &ifmsh->mbss_changed,
for_each_set_bit(bit, ifmsh->mbss_changed,
sizeof(changed) * BITS_PER_BYTE) {
clear_bit(bit, &ifmsh->mbss_changed);
clear_bit(bit, ifmsh->mbss_changed);
changed |= BIT(bit);
}
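The mesh ID corruption happened because mbss_changed packed 64 change bits into one unsigned long, which is only 32 bits wide on 32-bit systems; set_bit() on bits 32..63 then scribbled over adjacent fields, including the mesh ID. Sizing the field as an array of longs keeps every bitop in bounds. A compact sketch of the layout (field names are illustrative):

    struct mesh_state {
        /* 64 bits of change flags: two words on 32-bit, one on 64-bit */
        unsigned long changed[64 / BITS_PER_LONG];
        u8 mesh_id[32];        /* previously corrupted by bit 32..63 writes */
    };

    set_bit(40, ms->changed);  /* lands in changed[1] on 32-bit, in bounds */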


@@ -451,7 +451,7 @@ static void mesh_sta_info_init(struct ieee80211_sub_if_data *sdata,
changed |= IEEE80211_RC_BW_CHANGED;
ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
elems->vht_cap_elem,
elems->vht_cap_elem, NULL,
&sta->deflink);
ieee80211_he_cap_ie_to_sta_he_cap(sdata, sband, elems->he_cap,


@@ -4202,10 +4202,33 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
elems->ht_cap_elem,
link_sta);
if (elems->vht_cap_elem && !(link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_VHT))
if (elems->vht_cap_elem &&
!(link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_VHT)) {
const struct ieee80211_vht_cap *bss_vht_cap = NULL;
const struct cfg80211_bss_ies *ies;
/*
* Cisco AP module 9115 with FW 17.3 has a bug and sends a
* too large maximum MPDU length in the association response
* (indicating 12k) that it cannot actually process ...
* Work around that.
*/
rcu_read_lock();
ies = rcu_dereference(cbss->ies);
if (ies) {
const struct element *elem;
elem = cfg80211_find_elem(WLAN_EID_VHT_CAPABILITY,
ies->data, ies->len);
if (elem && elem->datalen >= sizeof(*bss_vht_cap))
bss_vht_cap = (const void *)elem->data;
}
ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
elems->vht_cap_elem,
link_sta);
bss_vht_cap, link_sta);
rcu_read_unlock();
}
if (elems->he_operation && !(link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_HE) &&
elems->he_cap) {
@@ -5107,9 +5130,10 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
continue;
valid_links |= BIT(link_id);
if (assoc_data->link[link_id].disabled) {
if (assoc_data->link[link_id].disabled)
dormant_links |= BIT(link_id);
} else if (link_id != assoc_data->assoc_link_id) {
if (link_id != assoc_data->assoc_link_id) {
err = ieee80211_sta_allocate_link(sta, link_id);
if (err)
goto out_err;
@@ -5124,7 +5148,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
struct ieee80211_link_data *link;
struct link_sta_info *link_sta;
if (!cbss || assoc_data->link[link_id].disabled)
if (!cbss)
continue;
link = sdata_dereference(sdata->link[link_id], sdata);
@@ -5429,17 +5453,18 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) {
struct ieee80211_link_data *link;
link = sdata_dereference(sdata->link[link_id], sdata);
if (!link)
continue;
if (!assoc_data->link[link_id].bss)
continue;
resp.links[link_id].bss = assoc_data->link[link_id].bss;
resp.links[link_id].addr = link->conf->addr;
ether_addr_copy(resp.links[link_id].addr,
assoc_data->link[link_id].addr);
resp.links[link_id].status = assoc_data->link[link_id].status;
link = sdata_dereference(sdata->link[link_id], sdata);
if (!link)
continue;
/* get uapsd queues configuration - same for all links */
resp.uapsd_queues = 0;
for (ac = 0; ac < IEEE80211_NUM_ACS; ac++)


@@ -665,7 +665,8 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx)
}
if (unlikely(tx->key && tx->key->flags & KEY_FLAG_TAINTED &&
!ieee80211_is_deauth(hdr->frame_control)))
!ieee80211_is_deauth(hdr->frame_control)) &&
tx->skb->protocol != tx->sdata->control_port_protocol)
return TX_DROP;
if (!skip_hw && tx->key &&


@@ -4,7 +4,7 @@
*
* Portions of this file
* Copyright(c) 2015 - 2016 Intel Deutschland GmbH
* Copyright (C) 2018 - 2022 Intel Corporation
* Copyright (C) 2018 - 2023 Intel Corporation
*/
#include <linux/ieee80211.h>
@@ -116,12 +116,14 @@ void
ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
struct ieee80211_supported_band *sband,
const struct ieee80211_vht_cap *vht_cap_ie,
const struct ieee80211_vht_cap *vht_cap_ie2,
struct link_sta_info *link_sta)
{
struct ieee80211_sta_vht_cap *vht_cap = &link_sta->pub->vht_cap;
struct ieee80211_sta_vht_cap own_cap;
u32 cap_info, i;
bool have_80mhz;
u32 mpdu_len;
memset(vht_cap, 0, sizeof(*vht_cap));
@@ -317,11 +319,21 @@ ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
link_sta->pub->bandwidth = ieee80211_sta_cur_vht_bw(link_sta);
/*
* Work around the Cisco 9115 FW 17.3 bug by taking the min of
* both reported MPDU lengths.
*/
mpdu_len = vht_cap->cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK;
if (vht_cap_ie2)
mpdu_len = min_t(u32, mpdu_len,
le32_get_bits(vht_cap_ie2->vht_cap_info,
IEEE80211_VHT_CAP_MAX_MPDU_MASK));
/*
* FIXME - should the amsdu len be per link? store per link
* and maintain a minimum?
*/
switch (vht_cap->cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK) {
switch (mpdu_len) {
case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454:
link_sta->pub->agg.max_amsdu_len = IEEE80211_MAX_MPDU_LEN_VHT_11454;
break;


@@ -307,12 +307,6 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
goto create_err;
}
if (addr_l.id == 0) {
NL_SET_ERR_MSG_ATTR(info->extack, laddr, "missing local addr id");
err = -EINVAL;
goto create_err;
}
err = mptcp_pm_parse_addr(raddr, info, &addr_r);
if (err < 0) {
NL_SET_ERR_MSG_ATTR(info->extack, raddr, "error parsing remote addr");


@@ -3425,24 +3425,21 @@ static void schedule_3rdack_retransmission(struct sock *ssk)
sk_reset_timer(ssk, &icsk->icsk_delack_timer, timeout);
}
void mptcp_subflow_process_delegated(struct sock *ssk)
void mptcp_subflow_process_delegated(struct sock *ssk, long status)
{
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
struct sock *sk = subflow->conn;
if (test_bit(MPTCP_DELEGATE_SEND, &subflow->delegated_status)) {
if (status & BIT(MPTCP_DELEGATE_SEND)) {
mptcp_data_lock(sk);
if (!sock_owned_by_user(sk))
__mptcp_subflow_push_pending(sk, ssk, true);
else
__set_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->cb_flags);
mptcp_data_unlock(sk);
mptcp_subflow_delegated_done(subflow, MPTCP_DELEGATE_SEND);
}
if (test_bit(MPTCP_DELEGATE_ACK, &subflow->delegated_status)) {
if (status & BIT(MPTCP_DELEGATE_ACK))
schedule_3rdack_retransmission(ssk);
mptcp_subflow_delegated_done(subflow, MPTCP_DELEGATE_ACK);
}
}
static int mptcp_hash(struct sock *sk)
@@ -3968,14 +3965,17 @@ static int mptcp_napi_poll(struct napi_struct *napi, int budget)
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
bh_lock_sock_nested(ssk);
if (!sock_owned_by_user(ssk) &&
mptcp_subflow_has_delegated_action(subflow))
mptcp_subflow_process_delegated(ssk);
/* ... elsewhere tcp_release_cb_override already processed
* the action or will do at next release_sock().
* In both case must dequeue the subflow here - on the same
* CPU that scheduled it.
*/
if (!sock_owned_by_user(ssk)) {
mptcp_subflow_process_delegated(ssk, xchg(&subflow->delegated_status, 0));
} else {
/* tcp_release_cb_override already processed
* the action or will do at next release_sock().
* In both case must dequeue the subflow here - on the same
* CPU that scheduled it.
*/
smp_wmb();
clear_bit(MPTCP_DELEGATE_SCHEDULED, &subflow->delegated_status);
}
bh_unlock_sock(ssk);
sock_put(ssk);


@@ -444,9 +444,11 @@ struct mptcp_delegated_action {
DECLARE_PER_CPU(struct mptcp_delegated_action, mptcp_delegated_actions);
#define MPTCP_DELEGATE_SEND 0
#define MPTCP_DELEGATE_ACK 1
#define MPTCP_DELEGATE_SCHEDULED 0
#define MPTCP_DELEGATE_SEND 1
#define MPTCP_DELEGATE_ACK 2
#define MPTCP_DELEGATE_ACTIONS_MASK (~BIT(MPTCP_DELEGATE_SCHEDULED))
/* MPTCP subflow context */
struct mptcp_subflow_context {
struct list_head node;/* conn_list of subflows */
@@ -564,23 +566,24 @@ mptcp_subflow_get_mapped_dsn(const struct mptcp_subflow_context *subflow)
return subflow->map_seq + mptcp_subflow_get_map_offset(subflow);
}
void mptcp_subflow_process_delegated(struct sock *ssk);
void mptcp_subflow_process_delegated(struct sock *ssk, long actions);
static inline void mptcp_subflow_delegate(struct mptcp_subflow_context *subflow, int action)
{
long old, set_bits = BIT(MPTCP_DELEGATE_SCHEDULED) | BIT(action);
struct mptcp_delegated_action *delegated;
bool schedule;
/* the caller held the subflow bh socket lock */
lockdep_assert_in_softirq();
/* The implied barrier pairs with mptcp_subflow_delegated_done(), and
* ensures the below list check sees list updates done prior to status
* bit changes
/* The implied barrier pairs with tcp_release_cb_override()
* mptcp_napi_poll(), and ensures the below list check sees list
* updates done prior to delegated status bits changes
*/
if (!test_and_set_bit(action, &subflow->delegated_status)) {
/* still on delegated list from previous scheduling */
if (!list_empty(&subflow->delegated_node))
old = set_mask_bits(&subflow->delegated_status, 0, set_bits);
if (!(old & BIT(MPTCP_DELEGATE_SCHEDULED))) {
if (WARN_ON_ONCE(!list_empty(&subflow->delegated_node)))
return;
delegated = this_cpu_ptr(&mptcp_delegated_actions);
@@ -605,20 +608,6 @@ mptcp_subflow_delegated_next(struct mptcp_delegated_action *delegated)
return ret;
}
static inline bool mptcp_subflow_has_delegated_action(const struct mptcp_subflow_context *subflow)
{
return !!READ_ONCE(subflow->delegated_status);
}
static inline void mptcp_subflow_delegated_done(struct mptcp_subflow_context *subflow, int action)
{
/* pairs with mptcp_subflow_delegate, ensures delegate_node is updated before
* touching the status bit
*/
smp_wmb();
clear_bit(action, &subflow->delegated_status);
}
int mptcp_is_enabled(const struct net *net);
unsigned int mptcp_get_add_addr_timeout(const struct net *net);
int mptcp_is_checksum_enabled(const struct net *net);
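The rework folds scheduling state and pending actions into one atomic word: MPTCP_DELEGATE_SCHEDULED says "queued on a CPU's delegated list", the remaining bits say what is pending. Because set_mask_bits() returns the previous value, producer and consumer each get a race-free snapshot. A reduced sketch of the two halves (the list handling here is simplified and the wrapper shape is hypothetical):

    /* producer: record the action; queue only on the 0->1 edge of SCHEDULED */
    long old, set = BIT(MPTCP_DELEGATE_SCHEDULED) | BIT(action);

    old = set_mask_bits(&subflow->delegated_status, 0, set);
    if (!(old & BIT(MPTCP_DELEGATE_SCHEDULED)))
        list_add_tail(&subflow->delegated_node, &delegated->head);

    /* consumer: atomically claim all pending actions, leave SCHEDULED set */
    long status = set_mask_bits(&subflow->delegated_status,
                                MPTCP_DELEGATE_ACTIONS_MASK, 0);
    status &= MPTCP_DELEGATE_ACTIONS_MASK;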


@@ -1956,9 +1956,15 @@ static void subflow_ulp_clone(const struct request_sock *req,
static void tcp_release_cb_override(struct sock *ssk)
{
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
long status;
if (mptcp_subflow_has_delegated_action(subflow))
mptcp_subflow_process_delegated(ssk);
/* process and clear all the pending actions, but leave the subflow into
* the napi queue. To respect locking, only the same CPU that originated
* the action can touch the list. mptcp_napi_poll will take care of it.
*/
status = set_mask_bits(&subflow->delegated_status, MPTCP_DELEGATE_ACTIONS_MASK, 0);
if (status)
mptcp_subflow_process_delegated(ssk, status);
tcp_release_cb(ssk);
}


@@ -1439,7 +1439,7 @@ static int bind_mcastif_addr(struct socket *sock, struct net_device *dev)
sin.sin_addr.s_addr = addr;
sin.sin_port = 0;
return sock->ops->bind(sock, (struct sockaddr*)&sin, sizeof(sin));
return kernel_bind(sock, (struct sockaddr *)&sin, sizeof(sin));
}
static void get_mcast_sockaddr(union ipvs_sockaddr *sa, int *salen,
@@ -1505,8 +1505,8 @@ static int make_send_sock(struct netns_ipvs *ipvs, int id,
}
get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->mcfg, id);
result = sock->ops->connect(sock, (struct sockaddr *) &mcast_addr,
salen, 0);
result = kernel_connect(sock, (struct sockaddr *)&mcast_addr,
salen, 0);
if (result < 0) {
pr_err("Error connecting to the multicast addr\n");
goto error;
@@ -1546,7 +1546,7 @@ static int make_receive_sock(struct netns_ipvs *ipvs, int id,
get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->bcfg, id);
sock->sk->sk_bound_dev_if = dev->ifindex;
result = sock->ops->bind(sock, (struct sockaddr *)&mcast_addr, salen);
result = kernel_bind(sock, (struct sockaddr *)&mcast_addr, salen);
if (result < 0) {
pr_err("Error binding to the multicast addr\n");
goto error;


@@ -112,7 +112,7 @@ static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
/* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA},
/* error */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't have Stale cookie*/
/* cookie_echo */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL},/* 5.2.4 - Big TODO */
/* cookie_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
/* cookie_ack */ {sCL, sCL, sCW, sES, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
/* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL},
/* heartbeat */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
/* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
@@ -126,7 +126,7 @@ static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
/* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV},
/* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV},
/* error */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV},
/* cookie_echo */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */
/* cookie_echo */ {sIV, sCL, sCE, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */
/* cookie_ack */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV},
/* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV},
/* heartbeat */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
@@ -412,6 +412,9 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
/* (D) vtag must be same as init_vtag as found in INIT_ACK */
if (sh->vtag != ct->proto.sctp.vtag[dir])
goto out_unlock;
} else if (sch->type == SCTP_CID_COOKIE_ACK) {
ct->proto.sctp.init[dir] = 0;
ct->proto.sctp.init[!dir] = 0;
} else if (sch->type == SCTP_CID_HEARTBEAT) {
if (ct->proto.sctp.vtag[dir] == 0) {
pr_debug("Setting %d vtag %x for dir %d\n", sch->type, sh->vtag, dir);
@@ -461,16 +464,18 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
}
/* If it is an INIT or an INIT ACK note down the vtag */
if (sch->type == SCTP_CID_INIT ||
sch->type == SCTP_CID_INIT_ACK) {
struct sctp_inithdr _inithdr, *ih;
if (sch->type == SCTP_CID_INIT) {
struct sctp_inithdr _ih, *ih;
ih = skb_header_pointer(skb, offset + sizeof(_sch),
sizeof(_inithdr), &_inithdr);
if (ih == NULL)
ih = skb_header_pointer(skb, offset + sizeof(_sch), sizeof(*ih), &_ih);
if (!ih)
goto out_unlock;
pr_debug("Setting vtag %x for dir %d\n",
ih->init_tag, !dir);
if (ct->proto.sctp.init[dir] && ct->proto.sctp.init[!dir])
ct->proto.sctp.init[!dir] = 0;
ct->proto.sctp.init[dir] = 1;
pr_debug("Setting vtag %x for dir %d\n", ih->init_tag, !dir);
ct->proto.sctp.vtag[!dir] = ih->init_tag;
/* don't renew timeout on init retransmit so
@@ -481,6 +486,24 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
old_state == SCTP_CONNTRACK_CLOSED &&
nf_ct_is_confirmed(ct))
ignore = true;
} else if (sch->type == SCTP_CID_INIT_ACK) {
struct sctp_inithdr _ih, *ih;
__be32 vtag;
ih = skb_header_pointer(skb, offset + sizeof(_sch), sizeof(*ih), &_ih);
if (!ih)
goto out_unlock;
vtag = ct->proto.sctp.vtag[!dir];
if (!ct->proto.sctp.init[!dir] && vtag && vtag != ih->init_tag)
goto out_unlock;
/* collision */
if (ct->proto.sctp.init[dir] && ct->proto.sctp.init[!dir] &&
vtag != ih->init_tag)
goto out_unlock;
pr_debug("Setting vtag %x for dir %d\n", ih->init_tag, !dir);
ct->proto.sctp.vtag[!dir] = ih->init_tag;
}
ct->proto.sctp.state = new_state;


@@ -7871,24 +7871,14 @@ static int nf_tables_delobj(struct sk_buff *skb, const struct nfnl_info *info,
return nft_delobj(&ctx, obj);
}
void nft_obj_notify(struct net *net, const struct nft_table *table,
struct nft_object *obj, u32 portid, u32 seq, int event,
u16 flags, int family, int report, gfp_t gfp)
static void
__nft_obj_notify(struct net *net, const struct nft_table *table,
struct nft_object *obj, u32 portid, u32 seq, int event,
u16 flags, int family, int report, gfp_t gfp)
{
struct nftables_pernet *nft_net = nft_pernet(net);
struct sk_buff *skb;
int err;
char *buf = kasprintf(gfp, "%s:%u",
table->name, nft_net->base_seq);
audit_log_nfcfg(buf,
family,
obj->handle,
event == NFT_MSG_NEWOBJ ?
AUDIT_NFT_OP_OBJ_REGISTER :
AUDIT_NFT_OP_OBJ_UNREGISTER,
gfp);
kfree(buf);
if (!report &&
!nfnetlink_has_listeners(net, NFNLGRP_NFTABLES))
@@ -7911,13 +7901,35 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
err:
nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS);
}
void nft_obj_notify(struct net *net, const struct nft_table *table,
struct nft_object *obj, u32 portid, u32 seq, int event,
u16 flags, int family, int report, gfp_t gfp)
{
struct nftables_pernet *nft_net = nft_pernet(net);
char *buf = kasprintf(gfp, "%s:%u",
table->name, nft_net->base_seq);
audit_log_nfcfg(buf,
family,
obj->handle,
event == NFT_MSG_NEWOBJ ?
AUDIT_NFT_OP_OBJ_REGISTER :
AUDIT_NFT_OP_OBJ_UNREGISTER,
gfp);
kfree(buf);
__nft_obj_notify(net, table, obj, portid, seq, event,
flags, family, report, gfp);
}
EXPORT_SYMBOL_GPL(nft_obj_notify);
static void nf_tables_obj_notify(const struct nft_ctx *ctx,
struct nft_object *obj, int event)
{
nft_obj_notify(ctx->net, ctx->table, obj, ctx->portid, ctx->seq, event,
ctx->flags, ctx->family, ctx->report, GFP_KERNEL);
__nft_obj_notify(ctx->net, ctx->table, obj, ctx->portid,
ctx->seq, event, ctx->flags, ctx->family,
ctx->report, GFP_KERNEL);
}
/*


@@ -154,6 +154,17 @@ int nft_payload_inner_offset(const struct nft_pktinfo *pkt)
return pkt->inneroff;
}
static bool nft_payload_need_vlan_copy(const struct nft_payload *priv)
{
unsigned int len = priv->offset + priv->len;
/* data past ether src/dst requested, copy needed */
if (len > offsetof(struct ethhdr, h_proto))
return true;
return false;
}
void nft_payload_eval(const struct nft_expr *expr,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
@@ -172,7 +183,7 @@ void nft_payload_eval(const struct nft_expr *expr,
goto err;
if (skb_vlan_tag_present(skb) &&
priv->offset >= offsetof(struct ethhdr, h_proto)) {
nft_payload_need_vlan_copy(priv)) {
if (!nft_payload_copy_vlan(dest, skb,
priv->offset, priv->len))
goto err;


@@ -233,10 +233,9 @@ static void nft_rbtree_gc_remove(struct net *net, struct nft_set *set,
rb_erase(&rbe->node, &priv->root);
}
static int nft_rbtree_gc_elem(const struct nft_set *__set,
struct nft_rbtree *priv,
struct nft_rbtree_elem *rbe,
u8 genmask)
static const struct nft_rbtree_elem *
nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
struct nft_rbtree_elem *rbe, u8 genmask)
{
struct nft_set *set = (struct nft_set *)__set;
struct rb_node *prev = rb_prev(&rbe->node);
@@ -246,7 +245,7 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
gc = nft_trans_gc_alloc(set, 0, GFP_ATOMIC);
if (!gc)
return -ENOMEM;
return ERR_PTR(-ENOMEM);
/* search for end interval coming before this element.
* end intervals don't carry a timeout extension, they
@@ -261,6 +260,7 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
prev = rb_prev(prev);
}
rbe_prev = NULL;
if (prev) {
rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
nft_rbtree_gc_remove(net, set, priv, rbe_prev);
@@ -272,7 +272,7 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
*/
gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
if (WARN_ON_ONCE(!gc))
return -ENOMEM;
return ERR_PTR(-ENOMEM);
nft_trans_gc_elem_add(gc, rbe_prev);
}
@@ -280,13 +280,13 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set,
nft_rbtree_gc_remove(net, set, priv, rbe);
gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
if (WARN_ON_ONCE(!gc))
return -ENOMEM;
return ERR_PTR(-ENOMEM);
nft_trans_gc_elem_add(gc, rbe);
nft_trans_gc_queue_sync_done(gc);
return 0;
return rbe_prev;
}
static bool nft_rbtree_update_first(const struct nft_set *set,
@@ -314,7 +314,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
struct nft_rbtree *priv = nft_set_priv(set);
u8 cur_genmask = nft_genmask_cur(net);
u8 genmask = nft_genmask_next(net);
int d, err;
int d;
/* Descend the tree to search for an existing element greater than the
* key value to insert that is greater than the new element. This is the
@@ -363,9 +363,14 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
*/
if (nft_set_elem_expired(&rbe->ext) &&
nft_set_elem_active(&rbe->ext, cur_genmask)) {
err = nft_rbtree_gc_elem(set, priv, rbe, genmask);
if (err < 0)
return err;
const struct nft_rbtree_elem *removed_end;
removed_end = nft_rbtree_gc_elem(set, priv, rbe, genmask);
if (IS_ERR(removed_end))
return PTR_ERR(removed_end);
if (removed_end == rbe_le || removed_end == rbe_ge)
return -EAGAIN;
continue;
}
@@ -486,11 +491,18 @@ static int nft_rbtree_insert(const struct net *net, const struct nft_set *set,
struct nft_rbtree_elem *rbe = elem->priv;
int err;
write_lock_bh(&priv->lock);
write_seqcount_begin(&priv->count);
err = __nft_rbtree_insert(net, set, rbe, ext);
write_seqcount_end(&priv->count);
write_unlock_bh(&priv->lock);
do {
if (fatal_signal_pending(current))
return -EINTR;
cond_resched();
write_lock_bh(&priv->lock);
write_seqcount_begin(&priv->count);
err = __nft_rbtree_insert(net, set, rbe, ext);
write_seqcount_end(&priv->count);
write_unlock_bh(&priv->lock);
} while (err == -EAGAIN);
return err;
}


@@ -352,7 +352,7 @@ static void netlink_overrun(struct sock *sk)
if (!nlk_test_bit(RECV_NO_ENOBUFS, sk)) {
if (!test_and_set_bit(NETLINK_S_CONGESTED,
&nlk_sk(sk)->state)) {
sk->sk_err = ENOBUFS;
WRITE_ONCE(sk->sk_err, ENOBUFS);
sk_error_report(sk);
}
}
@@ -1605,7 +1605,7 @@ static int do_one_set_err(struct sock *sk, struct netlink_set_err_data *p)
goto out;
}
sk->sk_err = p->code;
WRITE_ONCE(sk->sk_err, p->code);
sk_error_report(sk);
out:
return ret;
@@ -1991,7 +1991,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) {
ret = netlink_dump(sk);
if (ret) {
sk->sk_err = -ret;
WRITE_ONCE(sk->sk_err, -ret);
sk_error_report(sk);
}
}
@@ -2511,7 +2511,7 @@ void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err,
err_bad_put:
nlmsg_free(skb);
err_skb:
NETLINK_CB(in_skb).sk->sk_err = ENOBUFS;
WRITE_ONCE(NETLINK_CB(in_skb).sk->sk_err, ENOBUFS);
sk_error_report(NETLINK_CB(in_skb).sk);
}
EXPORT_SYMBOL(netlink_ack);


@@ -1636,7 +1636,9 @@ int nfc_llcp_register_device(struct nfc_dev *ndev)
timer_setup(&local->sdreq_timer, nfc_llcp_sdreq_timer, 0);
INIT_WORK(&local->sdreq_timeout_work, nfc_llcp_sdreq_timeout_work);
spin_lock(&llcp_devices_lock);
list_add(&local->list, &llcp_devices);
spin_unlock(&llcp_devices_lock);
return 0;
}


@@ -145,7 +145,7 @@ int rds_tcp_conn_path_connect(struct rds_conn_path *cp)
addrlen = sizeof(sin);
}
ret = sock->ops->bind(sock, addr, addrlen);
ret = kernel_bind(sock, addr, addrlen);
if (ret) {
rdsdebug("bind failed with %d at address %pI6c\n",
ret, &conn->c_laddr);
@@ -173,7 +173,7 @@ int rds_tcp_conn_path_connect(struct rds_conn_path *cp)
* own the socket
*/
rds_tcp_set_callbacks(sock, cp);
ret = sock->ops->connect(sock, addr, addrlen, O_NONBLOCK);
ret = kernel_connect(sock, addr, addrlen, O_NONBLOCK);
rdsdebug("connect to address %pI6c returned %d\n", &conn->c_faddr, ret);
if (ret == -EINPROGRESS)


@@ -306,7 +306,7 @@ struct socket *rds_tcp_listen_init(struct net *net, bool isv6)
addr_len = sizeof(*sin);
}
ret = sock->ops->bind(sock, (struct sockaddr *)&ss, addr_len);
ret = kernel_bind(sock, (struct sockaddr *)&ss, addr_len);
if (ret < 0) {
rdsdebug("could not bind %s listener socket: %d\n",
isv6 ? "IPv6" : "IPv4", ret);


@@ -48,6 +48,7 @@ struct rfkill {
bool persistent;
bool polling_paused;
bool suspended;
bool need_sync;
const struct rfkill_ops *ops;
void *data;
@@ -368,6 +369,17 @@ static void rfkill_set_block(struct rfkill *rfkill, bool blocked)
rfkill_event(rfkill);
}
static void rfkill_sync(struct rfkill *rfkill)
{
lockdep_assert_held(&rfkill_global_mutex);
if (!rfkill->need_sync)
return;
rfkill_set_block(rfkill, rfkill_global_states[rfkill->type].cur);
rfkill->need_sync = false;
}
static void rfkill_update_global_state(enum rfkill_type type, bool blocked)
{
int i;
@@ -730,6 +742,10 @@ static ssize_t soft_show(struct device *dev, struct device_attribute *attr,
{
struct rfkill *rfkill = to_rfkill(dev);
mutex_lock(&rfkill_global_mutex);
rfkill_sync(rfkill);
mutex_unlock(&rfkill_global_mutex);
return sysfs_emit(buf, "%d\n", (rfkill->state & RFKILL_BLOCK_SW) ? 1 : 0);
}
@@ -751,6 +767,7 @@ static ssize_t soft_store(struct device *dev, struct device_attribute *attr,
return -EINVAL;
mutex_lock(&rfkill_global_mutex);
rfkill_sync(rfkill);
rfkill_set_block(rfkill, state);
mutex_unlock(&rfkill_global_mutex);
@@ -783,6 +800,10 @@ static ssize_t state_show(struct device *dev, struct device_attribute *attr,
{
struct rfkill *rfkill = to_rfkill(dev);
mutex_lock(&rfkill_global_mutex);
rfkill_sync(rfkill);
mutex_unlock(&rfkill_global_mutex);
return sysfs_emit(buf, "%d\n", user_state_from_blocked(rfkill->state));
}
@@ -805,6 +826,7 @@ static ssize_t state_store(struct device *dev, struct device_attribute *attr,
return -EINVAL;
mutex_lock(&rfkill_global_mutex);
rfkill_sync(rfkill);
rfkill_set_block(rfkill, state == RFKILL_USER_STATE_SOFT_BLOCKED);
mutex_unlock(&rfkill_global_mutex);
@@ -1032,14 +1054,10 @@ static void rfkill_uevent_work(struct work_struct *work)
static void rfkill_sync_work(struct work_struct *work)
{
struct rfkill *rfkill;
bool cur;
rfkill = container_of(work, struct rfkill, sync_work);
struct rfkill *rfkill = container_of(work, struct rfkill, sync_work);
mutex_lock(&rfkill_global_mutex);
cur = rfkill_global_states[rfkill->type].cur;
rfkill_set_block(rfkill, cur);
rfkill_sync(rfkill);
mutex_unlock(&rfkill_global_mutex);
}
@@ -1087,6 +1105,7 @@ int __must_check rfkill_register(struct rfkill *rfkill)
round_jiffies_relative(POLL_INTERVAL));
if (!rfkill->persistent || rfkill_epo_lock_active) {
rfkill->need_sync = true;
schedule_work(&rfkill->sync_work);
} else {
#ifdef CONFIG_RFKILL_INPUT
@@ -1171,6 +1190,7 @@ static int rfkill_fop_open(struct inode *inode, struct file *file)
ev = kzalloc(sizeof(*ev), GFP_KERNEL);
if (!ev)
goto free;
rfkill_sync(rfkill);
rfkill_fill_event(&ev->ev, rfkill, RFKILL_OP_ADD);
list_add_tail(&ev->list, &data->events);
}


@@ -1159,8 +1159,7 @@ int sctp_assoc_update(struct sctp_association *asoc,
/* Add any peer addresses from the new association. */
list_for_each_entry(trans, &new->peer.transport_addr_list,
transports)
if (!sctp_assoc_lookup_paddr(asoc, &trans->ipaddr) &&
!sctp_assoc_add_peer(asoc, &trans->ipaddr,
if (!sctp_assoc_add_peer(asoc, &trans->ipaddr,
GFP_ATOMIC, trans->state))
return -ENOMEM;


@@ -2450,6 +2450,7 @@ static int sctp_apply_peer_addr_params(struct sctp_paddrparams *params,
if (trans) {
trans->hbinterval =
msecs_to_jiffies(params->spp_hbinterval);
sctp_transport_reset_hb_timer(trans);
} else if (asoc) {
asoc->hbinterval =
msecs_to_jiffies(params->spp_hbinterval);


@@ -737,6 +737,14 @@ static inline int sock_sendmsg_nosec(struct socket *sock, struct msghdr *msg)
return ret;
}
static int __sock_sendmsg(struct socket *sock, struct msghdr *msg)
{
int err = security_socket_sendmsg(sock, msg,
msg_data_left(msg));
return err ?: sock_sendmsg_nosec(sock, msg);
}
/**
* sock_sendmsg - send a message through @sock
* @sock: socket
@@ -747,10 +755,19 @@ static inline int sock_sendmsg_nosec(struct socket *sock, struct msghdr *msg)
*/
int sock_sendmsg(struct socket *sock, struct msghdr *msg)
{
int err = security_socket_sendmsg(sock, msg,
msg_data_left(msg));
struct sockaddr_storage *save_addr = (struct sockaddr_storage *)msg->msg_name;
struct sockaddr_storage address;
int ret;
return err ?: sock_sendmsg_nosec(sock, msg);
if (msg->msg_name) {
memcpy(&address, msg->msg_name, msg->msg_namelen);
msg->msg_name = &address;
}
ret = __sock_sendmsg(sock, msg);
msg->msg_name = save_addr;
return ret;
}
EXPORT_SYMBOL(sock_sendmsg);
@@ -1138,7 +1155,7 @@ static ssize_t sock_write_iter(struct kiocb *iocb, struct iov_iter *from)
if (sock->type == SOCK_SEQPACKET)
msg.msg_flags |= MSG_EOR;
res = sock_sendmsg(sock, &msg);
res = __sock_sendmsg(sock, &msg);
*from = msg.msg_iter;
return res;
}
@@ -2174,7 +2191,7 @@ int __sys_sendto(int fd, void __user *buff, size_t len, unsigned int flags,
if (sock->file->f_flags & O_NONBLOCK)
flags |= MSG_DONTWAIT;
msg.msg_flags = flags;
err = sock_sendmsg(sock, &msg);
err = __sock_sendmsg(sock, &msg);
out_put:
fput_light(sock->file, fput_needed);
@@ -2538,7 +2555,7 @@ static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys,
err = sock_sendmsg_nosec(sock, msg_sys);
goto out_freectl;
}
err = sock_sendmsg(sock, msg_sys);
err = __sock_sendmsg(sock, msg_sys);
/*
* If this is sendmmsg() and sending to current destination address was
* successful, remember it.
@@ -3499,7 +3516,12 @@ static long compat_sock_ioctl(struct file *file, unsigned int cmd,
int kernel_bind(struct socket *sock, struct sockaddr *addr, int addrlen)
{
return READ_ONCE(sock->ops)->bind(sock, addr, addrlen);
struct sockaddr_storage address;
memcpy(&address, addr, addrlen);
return READ_ONCE(sock->ops)->bind(sock, (struct sockaddr *)&address,
addrlen);
}
EXPORT_SYMBOL(kernel_bind);
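Both the sock_sendmsg() and kernel_bind() changes defend against the same surprise: BPF hooks (cgroup/bind, cgroup/sendmsg and friends) are allowed to rewrite the sockaddr handed down to them, and in-kernel callers were passing pointers to their own long-lived storage. Copying through a stack sockaddr_storage keeps the caller's buffer intact. The pattern in isolation (a hypothetical wrapper, mirroring the hunk above):

    static int bind_with_private_copy(struct socket *sock,
                                      const struct sockaddr *addr, int addrlen)
    {
        struct sockaddr_storage address;
        int err;

        memcpy(&address, addr, addrlen);
        err = READ_ONCE(sock->ops)->bind(sock, (struct sockaddr *)&address,
                                         addrlen);
        /* 'addr' is untouched no matter what a hook did to the copy */
        return err;
    }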


@@ -1441,14 +1441,14 @@ static int tipc_crypto_key_revoke(struct net *net, u8 tx_key)
struct tipc_crypto *tx = tipc_net(net)->crypto_tx;
struct tipc_key key;
spin_lock(&tx->lock);
spin_lock_bh(&tx->lock);
key = tx->key;
WARN_ON(!key.active || tx_key != key.active);
/* Free the active key */
tipc_crypto_key_set_state(tx, key.passive, 0, key.pending);
tipc_crypto_key_detach(tx->aead[key.active], &tx->lock);
spin_unlock(&tx->lock);
spin_unlock_bh(&tx->lock);
pr_warn("%s: key is revoked\n", tx->name);
return -EKEYREVOKED;


@@ -1181,16 +1181,11 @@ void wiphy_rfkill_set_hw_state_reason(struct wiphy *wiphy, bool blocked,
}
EXPORT_SYMBOL(wiphy_rfkill_set_hw_state_reason);
void cfg80211_cqm_config_free(struct wireless_dev *wdev)
{
kfree(wdev->cqm_config);
wdev->cqm_config = NULL;
}
static void _cfg80211_unregister_wdev(struct wireless_dev *wdev,
bool unregister_netdev)
{
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
struct cfg80211_cqm_config *cqm_config;
unsigned int link_id;
ASSERT_RTNL();
@@ -1227,7 +1222,10 @@ static void _cfg80211_unregister_wdev(struct wireless_dev *wdev,
kfree_sensitive(wdev->wext.keys);
wdev->wext.keys = NULL;
#endif
cfg80211_cqm_config_free(wdev);
wiphy_work_cancel(wdev->wiphy, &wdev->cqm_rssi_work);
/* deleted from the list, so can't be found from nl80211 any more */
cqm_config = rcu_access_pointer(wdev->cqm_config);
kfree_rcu(cqm_config, rcu_head);
/*
* Ensure that all events have been processed and
@@ -1379,6 +1377,8 @@ void cfg80211_init_wdev(struct wireless_dev *wdev)
wdev->wext.connect.auth_type = NL80211_AUTHTYPE_AUTOMATIC;
#endif
wiphy_work_init(&wdev->cqm_rssi_work, cfg80211_cqm_rssi_notify_work);
if (wdev->wiphy->flags & WIPHY_FLAG_PS_ON_BY_DEFAULT)
wdev->ps = true;
else
