Merge tag 'net-6.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf and netfilter.

  Current release - regressions:

   - veth: fix packet segmentation in veth_convert_skb_to_xdp_buff

  Current release - new code bugs:

   - tcp: assorted fixes to the new Auth Option support

  Older releases - regressions:

   - tcp: fix mid stream window clamp

   - tls: fix incorrect splice handling

   - ipv4: ip_gre: handle skb_pull() failure in ipgre_xmit()

   - dsa: mv88e6xxx: restore USXGMII support for 6393X

   - arcnet: restore support for multiple Sohard Arcnet cards

  Older releases - always broken:

   - tcp: do not accept ACK of bytes we never sent

   - require admin privileges to receive packet traces via netlink

   - packet: move reference count in packet_sock to atomic_long_t

   - bpf:
      - fix incorrect branch offset comparison with cpu=v4
      - fix prog_array_map_poke_run map poke update

   - netfilter:
      - three fixes for crashes on bad admin commands
      - xt_owner: fix race accessing sk->sk_socket, TOCTOU null-deref
      - nf_tables: fix 'exist' matching on bigendian arches

   - leds: netdev: fix RTNL handling to prevent potential deadlock

   - eth: tg3: prevent races in error/reset handling

   - eth: r8169: fix rtl8125b PAUSE storm when suspended

   - eth: r8152: improve reset and surprise removal handling

   - eth: hns: fix race between changing features and sending

   - eth: nfp: fix sleep in atomic for bonding offload"

* tag 'net-6.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (62 commits)
  vsock/virtio: fix "comparison of distinct pointer types lacks a cast" warning
  net/smc: fix missing byte order conversion in CLC handshake
  net: dsa: microchip: provide a list of valid protocols for xmit handler
  drop_monitor: Require 'CAP_SYS_ADMIN' when joining "events" group
  psample: Require 'CAP_NET_ADMIN' when joining "packets" group
  bpf: sockmap, updating the sg structure should also update curr
  net: tls, update curr on splice as well
  nfp: flower: fix for take a mutex lock in soft irq context and rcu lock
  net: dsa: mv88e6xxx: Restore USXGMII support for 6393X
  tcp: do not accept ACK of bytes we never sent
  selftests/bpf: Add test for early update in prog_array_map_poke_run
  bpf: Fix prog_array_map_poke_run map poke update
  netfilter: xt_owner: Fix for unsafe access of sk->sk_socket
  netfilter: nf_tables: validate family when identifying table via handle
  netfilter: nf_tables: bail out on mismatching dynset and set expressions
  netfilter: nf_tables: fix 'exist' matching on bigendian arches
  netfilter: nft_set_pipapo: skip inactive elements during set walk
  netfilter: bpf: fix bad registration on nf_defrag
  leds: trigger: netdev: fix RTNL handling to prevent potential deadlock
  octeontx2-af: Update Tx link register range
  ...
commit 5e3f5b81de (Linus Torvalds, 2023-12-07 17:04:13 -08:00)
 88 files changed, 821 insertions(+), 356 deletions(-)


@ -99,7 +99,7 @@ also [6.1]::
when it is no longer considered permitted.
Linux TCP-AO will try its best to prevent you from removing a key that's
being used, considering it a key management failure. But sine keeping
being used, considering it a key management failure. But since keeping
an outdated key may become a security issue and as a peer may
unintentionally prevent the removal of an old key by always setting
it as RNextKeyID - a forced key removal mechanism is provided, where
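For context, the forced key removal described above is driven from userspace through the TCP_AO_DEL_KEY socket option added in this cycle. A minimal sketch — the struct tcp_ao_del fields that identify the key (peer address, sndid/rcvid) are deliberately elided here; see the uapi header for the real layout:

    struct tcp_ao_del del = { 0 };   /* peer address, sndid/rcvid, etc. elided */

    if (setsockopt(sk, IPPROTO_TCP, TCP_AO_DEL_KEY, &del, sizeof(del)) < 0)
        perror("TCP_AO_DEL_KEY");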


@ -15066,6 +15066,7 @@ F: lib/random32.c
F: net/
F: tools/net/
F: tools/testing/selftests/net/
X: net/9p/
X: net/bluetooth/
NETWORKING [IPSEC]


@ -3025,3 +3025,49 @@ void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp
#endif
WARN(1, "verification of programs using bpf_throw should have failed\n");
}
void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
struct bpf_prog *new, struct bpf_prog *old)
{
u8 *old_addr, *new_addr, *old_bypass_addr;
int ret;
old_bypass_addr = old ? NULL : poke->bypass_addr;
old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
/*
* On program loading or teardown, the program's kallsym entry
* might not be in place, so we use __bpf_arch_text_poke to skip
* the kallsyms check.
*/
if (new) {
ret = __bpf_arch_text_poke(poke->tailcall_target,
BPF_MOD_JUMP,
old_addr, new_addr);
BUG_ON(ret < 0);
if (!old) {
ret = __bpf_arch_text_poke(poke->tailcall_bypass,
BPF_MOD_JUMP,
poke->bypass_addr,
NULL);
BUG_ON(ret < 0);
}
} else {
ret = __bpf_arch_text_poke(poke->tailcall_bypass,
BPF_MOD_JUMP,
old_bypass_addr,
poke->bypass_addr);
BUG_ON(ret < 0);
/* let other CPUs finish the execution of program
* so that it will not possible to expose them
* to invalid nop, stack unwind, nop state
*/
if (!ret)
synchronize_rcu();
ret = __bpf_arch_text_poke(poke->tailcall_target,
BPF_MOD_JUMP,
old_addr, NULL);
BUG_ON(ret < 0);
}
}
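This is the strong half of a weak-symbol pair: kernel/bpf/arraymap.c (see the hunk near the end of this series) supplies a __weak default that only warns, and linking in the x86 JIT silently replaces it. The pattern in miniature, using a hypothetical arch_hook():

    /* common code: weak fallback, used only if no arch implementation is linked */
    void __weak arch_hook(void)
    {
        WARN_ON_ONCE(1);
    }

    /* arch code: a strong definition of the same symbol wins at link time */
    void arch_hook(void)
    {
        /* arch-specific text patching */
    }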


@ -226,6 +226,11 @@ static int set_device_name(struct led_netdev_data *trigger_data,
cancel_delayed_work_sync(&trigger_data->work);
/*
* Take RTNL lock before trigger_data lock to prevent potential
* deadlock with netdev notifier registration.
*/
rtnl_lock();
mutex_lock(&trigger_data->lock);
if (trigger_data->net_dev) {
@ -245,16 +250,14 @@ static int set_device_name(struct led_netdev_data *trigger_data,
trigger_data->carrier_link_up = false;
trigger_data->link_speed = SPEED_UNKNOWN;
trigger_data->duplex = DUPLEX_UNKNOWN;
if (trigger_data->net_dev != NULL) {
rtnl_lock();
if (trigger_data->net_dev)
get_device_state(trigger_data);
rtnl_unlock();
}
trigger_data->last_activity = 0;
set_baseline_state(trigger_data);
mutex_unlock(&trigger_data->lock);
rtnl_unlock();
return 0;
}
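The deadlock this prevents is a plain lock-order inversion: the netdev notifier path already takes RTNL and then the trigger mutex, so set_device_name() must acquire them in the same order. Schematically:

    /* notifier path:          rtnl_lock();                      mutex_lock(&trigger_data->lock); */
    /* set_device_name, old:   mutex_lock(&trigger_data->lock);  rtnl_lock();   <- ABBA           */
    /* set_device_name, fixed: rtnl_lock();                      mutex_lock(&trigger_data->lock); */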


@ -186,6 +186,8 @@ do { \
#define ARC_IS_5MBIT 1 /* card default speed is 5MBit */
#define ARC_CAN_10MBIT 2 /* card uses COM20022, supporting 10MBit,
but default is 2.5MBit. */
#define ARC_HAS_LED 4 /* card has software controlled LEDs */
#define ARC_HAS_ROTARY 8 /* card has rotary encoder */
/* information needed to define an encapsulation driver */
struct ArcProto {


@ -213,12 +213,13 @@ static int com20020pci_probe(struct pci_dev *pdev,
if (!strncmp(ci->name, "EAE PLX-PCI FB2", 15))
lp->backplane = 1;
/* Get the dev_id from the PLX rotary coder */
if (!strncmp(ci->name, "EAE PLX-PCI MA1", 15))
dev_id_mask = 0x3;
dev->dev_id = (inb(priv->misc + ci->rotary) >> 4) & dev_id_mask;
snprintf(dev->name, sizeof(dev->name), "arc%d-%d", dev->dev_id, i);
if (ci->flags & ARC_HAS_ROTARY) {
/* Get the dev_id from the PLX rotary coder */
if (!strncmp(ci->name, "EAE PLX-PCI MA1", 15))
dev_id_mask = 0x3;
dev->dev_id = (inb(priv->misc + ci->rotary) >> 4) & dev_id_mask;
snprintf(dev->name, sizeof(dev->name), "arc%d-%d", dev->dev_id, i);
}
if (arcnet_inb(ioaddr, COM20020_REG_R_STATUS) == 0xFF) {
pr_err("IO address %Xh is empty!\n", ioaddr);
@ -230,6 +231,10 @@ static int com20020pci_probe(struct pci_dev *pdev,
goto err_free_arcdev;
}
ret = com20020_found(dev, IRQF_SHARED);
if (ret)
goto err_free_arcdev;
card = devm_kzalloc(&pdev->dev, sizeof(struct com20020_dev),
GFP_KERNEL);
if (!card) {
@ -239,41 +244,39 @@ static int com20020pci_probe(struct pci_dev *pdev,
card->index = i;
card->pci_priv = priv;
card->tx_led.brightness_set = led_tx_set;
card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
GFP_KERNEL, "arc%d-%d-tx",
dev->dev_id, i);
card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
"pci:green:tx:%d-%d",
dev->dev_id, i);
card->tx_led.dev = &dev->dev;
card->recon_led.brightness_set = led_recon_set;
card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
GFP_KERNEL, "arc%d-%d-recon",
dev->dev_id, i);
card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
"pci:red:recon:%d-%d",
dev->dev_id, i);
card->recon_led.dev = &dev->dev;
if (ci->flags & ARC_HAS_LED) {
card->tx_led.brightness_set = led_tx_set;
card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
GFP_KERNEL, "arc%d-%d-tx",
dev->dev_id, i);
card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
"pci:green:tx:%d-%d",
dev->dev_id, i);
card->tx_led.dev = &dev->dev;
card->recon_led.brightness_set = led_recon_set;
card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
GFP_KERNEL, "arc%d-%d-recon",
dev->dev_id, i);
card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
"pci:red:recon:%d-%d",
dev->dev_id, i);
card->recon_led.dev = &dev->dev;
ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
if (ret)
goto err_free_arcdev;
ret = devm_led_classdev_register(&pdev->dev, &card->recon_led);
if (ret)
goto err_free_arcdev;
dev_set_drvdata(&dev->dev, card);
devm_arcnet_led_init(dev, dev->dev_id, i);
}
card->dev = dev;
ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
if (ret)
goto err_free_arcdev;
ret = devm_led_classdev_register(&pdev->dev, &card->recon_led);
if (ret)
goto err_free_arcdev;
dev_set_drvdata(&dev->dev, card);
ret = com20020_found(dev, IRQF_SHARED);
if (ret)
goto err_free_arcdev;
devm_arcnet_led_init(dev, dev->dev_id, i);
list_add(&card->list, &priv->list_dev);
continue;
@ -329,7 +332,7 @@ static struct com20020_pci_card_info card_info_5mbit = {
};
static struct com20020_pci_card_info card_info_sohard = {
.name = "PLX-PCI",
.name = "SOHARD SH ARC-PCI",
.devcount = 1,
/* SOHARD needs PCI base addr 4 */
.chan_map_tbl = {
@ -364,7 +367,7 @@ static struct com20020_pci_card_info card_info_eae_arc1 = {
},
},
.rotary = 0x0,
.flags = ARC_CAN_10MBIT,
.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
};
static struct com20020_pci_card_info card_info_eae_ma1 = {
@ -396,7 +399,7 @@ static struct com20020_pci_card_info card_info_eae_ma1 = {
},
},
.rotary = 0x0,
.flags = ARC_CAN_10MBIT,
.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
};
static struct com20020_pci_card_info card_info_eae_fb2 = {
@ -421,7 +424,7 @@ static struct com20020_pci_card_info card_info_eae_fb2 = {
},
},
.rotary = 0x0,
.flags = ARC_CAN_10MBIT,
.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
};
static const struct pci_device_id com20020pci_id_table[] = {


@ -2713,10 +2713,18 @@ static int ksz_connect_tag_protocol(struct dsa_switch *ds,
{
struct ksz_tagger_data *tagger_data;
tagger_data = ksz_tagger_data(ds);
tagger_data->xmit_work_fn = ksz_port_deferred_xmit;
return 0;
switch (proto) {
case DSA_TAG_PROTO_KSZ8795:
return 0;
case DSA_TAG_PROTO_KSZ9893:
case DSA_TAG_PROTO_KSZ9477:
case DSA_TAG_PROTO_LAN937X:
tagger_data = ksz_tagger_data(ds);
tagger_data->xmit_work_fn = ksz_port_deferred_xmit;
return 0;
default:
return -EPROTONOSUPPORT;
}
}
static int ksz_port_vlan_filtering(struct dsa_switch *ds, int port,


@ -465,6 +465,7 @@ mv88e639x_pcs_select(struct mv88e6xxx_chip *chip, int port,
case PHY_INTERFACE_MODE_10GBASER:
case PHY_INTERFACE_MODE_XAUI:
case PHY_INTERFACE_MODE_RXAUI:
case PHY_INTERFACE_MODE_USXGMII:
return &mpcs->xg_pcs;
default:
@ -873,7 +874,8 @@ static int mv88e6393x_xg_pcs_post_config(struct phylink_pcs *pcs,
struct mv88e639x_pcs *mpcs = xg_pcs_to_mv88e639x_pcs(pcs);
int err;
if (interface == PHY_INTERFACE_MODE_10GBASER) {
if (interface == PHY_INTERFACE_MODE_10GBASER ||
interface == PHY_INTERFACE_MODE_USXGMII) {
err = mv88e6393x_erratum_5_2(mpcs);
if (err)
return err;
@ -886,12 +888,37 @@ static int mv88e6393x_xg_pcs_post_config(struct phylink_pcs *pcs,
return mv88e639x_xg_pcs_enable(mpcs);
}
static void mv88e6393x_xg_pcs_get_state(struct phylink_pcs *pcs,
struct phylink_link_state *state)
{
struct mv88e639x_pcs *mpcs = xg_pcs_to_mv88e639x_pcs(pcs);
u16 status, lp_status;
int err;
if (state->interface != PHY_INTERFACE_MODE_USXGMII)
return mv88e639x_xg_pcs_get_state(pcs, state);
state->link = false;
err = mv88e639x_read(mpcs, MV88E6390_USXGMII_PHY_STATUS, &status);
err = err ? : mv88e639x_read(mpcs, MV88E6390_USXGMII_LP_STATUS, &lp_status);
if (err) {
dev_err(mpcs->mdio.dev.parent,
"can't read USXGMII status: %pe\n", ERR_PTR(err));
return;
}
state->link = !!(status & MDIO_USXGMII_LINK);
state->an_complete = state->link;
phylink_decode_usxgmii_word(state, lp_status);
}
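The `err = err ? : mv88e639x_read(...)` line uses the GNU "elvis" extension (`a ? : b` evaluates to `a` if nonzero, else `b`) to chain register reads while keeping the first failure; spelled out, it is equivalent to:

    err = mv88e639x_read(mpcs, MV88E6390_USXGMII_PHY_STATUS, &status);
    if (!err)
        err = mv88e639x_read(mpcs, MV88E6390_USXGMII_LP_STATUS, &lp_status);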
static const struct phylink_pcs_ops mv88e6393x_xg_pcs_ops = {
.pcs_enable = mv88e6393x_xg_pcs_enable,
.pcs_disable = mv88e6393x_xg_pcs_disable,
.pcs_pre_config = mv88e6393x_xg_pcs_pre_config,
.pcs_post_config = mv88e6393x_xg_pcs_post_config,
.pcs_get_state = mv88e639x_xg_pcs_get_state,
.pcs_get_state = mv88e6393x_xg_pcs_get_state,
.pcs_config = mv88e639x_xg_pcs_config,
};


@ -553,17 +553,17 @@ void aq_ptp_tx_hwtstamp(struct aq_nic_s *aq_nic, u64 timestamp)
/* aq_ptp_rx_hwtstamp - utility function which checks for RX time stamp
* @adapter: pointer to adapter struct
* @skb: particular skb to send timestamp with
* @shhwtstamps: particular skb_shared_hwtstamps to save timestamp
*
* if the timestamp is valid, we convert it into the timecounter ns
* value, then store that result into the hwtstamps structure which
* is passed up the network stack
*/
static void aq_ptp_rx_hwtstamp(struct aq_ptp_s *aq_ptp, struct sk_buff *skb,
static void aq_ptp_rx_hwtstamp(struct aq_ptp_s *aq_ptp, struct skb_shared_hwtstamps *shhwtstamps,
u64 timestamp)
{
timestamp -= atomic_read(&aq_ptp->offset_ingress);
aq_ptp_convert_to_hwtstamp(aq_ptp, skb_hwtstamps(skb), timestamp);
aq_ptp_convert_to_hwtstamp(aq_ptp, shhwtstamps, timestamp);
}
void aq_ptp_hwtstamp_config_get(struct aq_ptp_s *aq_ptp,
@ -639,7 +639,7 @@ bool aq_ptp_ring(struct aq_nic_s *aq_nic, struct aq_ring_s *ring)
&aq_ptp->ptp_rx == ring || &aq_ptp->hwts_rx == ring;
}
u16 aq_ptp_extract_ts(struct aq_nic_s *aq_nic, struct sk_buff *skb, u8 *p,
u16 aq_ptp_extract_ts(struct aq_nic_s *aq_nic, struct skb_shared_hwtstamps *shhwtstamps, u8 *p,
unsigned int len)
{
struct aq_ptp_s *aq_ptp = aq_nic->aq_ptp;
@ -648,7 +648,7 @@ u16 aq_ptp_extract_ts(struct aq_nic_s *aq_nic, struct sk_buff *skb, u8 *p,
p, len, &timestamp);
if (ret > 0)
aq_ptp_rx_hwtstamp(aq_ptp, skb, timestamp);
aq_ptp_rx_hwtstamp(aq_ptp, shhwtstamps, timestamp);
return ret;
}


@ -67,7 +67,7 @@ int aq_ptp_hwtstamp_config_set(struct aq_ptp_s *aq_ptp,
/* Return either ring is belong to PTP or not*/
bool aq_ptp_ring(struct aq_nic_s *aq_nic, struct aq_ring_s *ring);
u16 aq_ptp_extract_ts(struct aq_nic_s *aq_nic, struct sk_buff *skb, u8 *p,
u16 aq_ptp_extract_ts(struct aq_nic_s *aq_nic, struct skb_shared_hwtstamps *shhwtstamps, u8 *p,
unsigned int len);
struct ptp_clock *aq_ptp_get_ptp_clock(struct aq_ptp_s *aq_ptp);
@ -143,7 +143,7 @@ static inline bool aq_ptp_ring(struct aq_nic_s *aq_nic, struct aq_ring_s *ring)
}
static inline u16 aq_ptp_extract_ts(struct aq_nic_s *aq_nic,
struct sk_buff *skb, u8 *p,
struct skb_shared_hwtstamps *shhwtstamps, u8 *p,
unsigned int len)
{
return 0;


@ -647,7 +647,7 @@ static int __aq_ring_rx_clean(struct aq_ring_s *self, struct napi_struct *napi,
}
if (is_ptp_ring)
buff->len -=
aq_ptp_extract_ts(self->aq_nic, skb,
aq_ptp_extract_ts(self->aq_nic, skb_hwtstamps(skb),
aq_buf_vaddr(&buff->rxdata),
buff->len);
@ -742,6 +742,8 @@ static int __aq_ring_xdp_clean(struct aq_ring_s *rx_ring,
struct aq_ring_buff_s *buff = &rx_ring->buff_ring[rx_ring->sw_head];
bool is_ptp_ring = aq_ptp_ring(rx_ring->aq_nic, rx_ring);
struct aq_ring_buff_s *buff_ = NULL;
u16 ptp_hwtstamp_len = 0;
struct skb_shared_hwtstamps shhwtstamps;
struct sk_buff *skb = NULL;
unsigned int next_ = 0U;
struct xdp_buff xdp;
@ -810,11 +812,12 @@ static int __aq_ring_xdp_clean(struct aq_ring_s *rx_ring,
hard_start = page_address(buff->rxdata.page) +
buff->rxdata.pg_off - rx_ring->page_offset;
if (is_ptp_ring)
buff->len -=
aq_ptp_extract_ts(rx_ring->aq_nic, skb,
aq_buf_vaddr(&buff->rxdata),
buff->len);
if (is_ptp_ring) {
ptp_hwtstamp_len = aq_ptp_extract_ts(rx_ring->aq_nic, &shhwtstamps,
aq_buf_vaddr(&buff->rxdata),
buff->len);
buff->len -= ptp_hwtstamp_len;
}
xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
xdp_prepare_buff(&xdp, hard_start, rx_ring->page_offset,
@ -834,6 +837,9 @@ static int __aq_ring_xdp_clean(struct aq_ring_s *rx_ring,
if (IS_ERR(skb) || !skb)
continue;
if (ptp_hwtstamp_len > 0)
*skb_hwtstamps(skb) = shhwtstamps;
if (buff->is_vlan)
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
buff->vlan_rx_tag);
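The signature churn above has one purpose: in the XDP path the skb is only constructed after the XDP verdict, so the extracted timestamp can no longer be written straight into skb_hwtstamps(). It is parked in a stack-local struct instead and published once the skb exists; condensed from the hunk above:

    struct skb_shared_hwtstamps shhwtstamps;    /* stack-local parking spot */

    ptp_hwtstamp_len = aq_ptp_extract_ts(rx_ring->aq_nic, &shhwtstamps,
                                         aq_buf_vaddr(&buff->rxdata), buff->len);
    buff->len -= ptp_hwtstamp_len;       /* strip the HW trailer before XDP runs */
    /* ... run the XDP program, which may produce an skb ... */
    if (ptp_hwtstamp_len > 0)
        *skb_hwtstamps(skb) = shhwtstamps;   /* publish once the skb exists */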


@ -2075,6 +2075,7 @@ destroy_flow_table:
rhashtable_destroy(&tc_info->flow_table);
free_tc_info:
kfree(tc_info);
bp->tc_info = NULL;
return rc;
}


@ -6474,6 +6474,14 @@ static void tg3_dump_state(struct tg3 *tp)
int i;
u32 *regs;
/* If it is a PCI error, all registers will be 0xffff,
* we don't dump them out, just report the error and return
*/
if (tp->pdev->error_state != pci_channel_io_normal) {
netdev_err(tp->dev, "PCI channel ERROR!\n");
return;
}
regs = kzalloc(TG3_REG_BLK_SIZE, GFP_ATOMIC);
if (!regs)
return;
@ -11259,7 +11267,8 @@ static void tg3_reset_task(struct work_struct *work)
rtnl_lock();
tg3_full_lock(tp, 0);
if (tp->pcierr_recovery || !netif_running(tp->dev)) {
if (tp->pcierr_recovery || !netif_running(tp->dev) ||
tp->pdev->error_state != pci_channel_io_normal) {
tg3_flag_clear(tp, RESET_TASK_PENDING);
tg3_full_unlock(tp);
rtnl_unlock();


@ -66,6 +66,27 @@ static enum mac_mode hns_get_enet_interface(const struct hns_mac_cb *mac_cb)
}
}
static u32 hns_mac_link_anti_shake(struct mac_driver *mac_ctrl_drv)
{
#define HNS_MAC_LINK_WAIT_TIME 5
#define HNS_MAC_LINK_WAIT_CNT 40
u32 link_status = 0;
int i;
if (!mac_ctrl_drv->get_link_status)
return link_status;
for (i = 0; i < HNS_MAC_LINK_WAIT_CNT; i++) {
msleep(HNS_MAC_LINK_WAIT_TIME);
mac_ctrl_drv->get_link_status(mac_ctrl_drv, &link_status);
if (!link_status)
break;
}
return link_status;
}
void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status)
{
struct mac_driver *mac_ctrl_drv;
@ -83,6 +104,14 @@ void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status)
&sfp_prsnt);
if (!ret)
*link_status = *link_status && sfp_prsnt;
/* for FIBER port, it may have a fake link up.
* when the link status changes from down to up, we need to do
* anti-shake. the anti-shake time is base on tests.
* only FIBER port need to do this.
*/
if (*link_status && !mac_cb->link)
*link_status = hns_mac_link_anti_shake(mac_ctrl_drv);
}
mac_cb->link = *link_status;
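With the constants above, the debounce window is HNS_MAC_LINK_WAIT_CNT * HNS_MAC_LINK_WAIT_TIME = 40 * 5 ms = 200 ms: a down reading at any poll breaks out and reports link down, so only a link that stays up for the whole window is believed.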


@ -142,7 +142,8 @@ MODULE_DEVICE_TABLE(acpi, hns_enet_acpi_match);
static void fill_desc(struct hnae_ring *ring, void *priv,
int size, dma_addr_t dma, int frag_end,
int buf_num, enum hns_desc_type type, int mtu)
int buf_num, enum hns_desc_type type, int mtu,
bool is_gso)
{
struct hnae_desc *desc = &ring->desc[ring->next_to_use];
struct hnae_desc_cb *desc_cb = &ring->desc_cb[ring->next_to_use];
@ -275,6 +276,15 @@ static int hns_nic_maybe_stop_tso(
return 0;
}
static int hns_nic_maybe_stop_tx_v2(struct sk_buff **out_skb, int *bnum,
struct hnae_ring *ring)
{
if (skb_is_gso(*out_skb))
return hns_nic_maybe_stop_tso(out_skb, bnum, ring);
else
return hns_nic_maybe_stop_tx(out_skb, bnum, ring);
}
static void fill_tso_desc(struct hnae_ring *ring, void *priv,
int size, dma_addr_t dma, int frag_end,
int buf_num, enum hns_desc_type type, int mtu)
@ -300,6 +310,19 @@ static void fill_tso_desc(struct hnae_ring *ring, void *priv,
mtu);
}
static void fill_desc_v2(struct hnae_ring *ring, void *priv,
int size, dma_addr_t dma, int frag_end,
int buf_num, enum hns_desc_type type, int mtu,
bool is_gso)
{
if (is_gso)
fill_tso_desc(ring, priv, size, dma, frag_end, buf_num, type,
mtu);
else
fill_v2_desc(ring, priv, size, dma, frag_end, buf_num, type,
mtu);
}
netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
struct sk_buff *skb,
struct hns_nic_ring_data *ring_data)
@ -313,6 +336,7 @@ netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
int seg_num;
dma_addr_t dma;
int size, next_to_use;
bool is_gso;
int i;
switch (priv->ops.maybe_stop_tx(&skb, &buf_num, ring)) {
@ -339,8 +363,9 @@ netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
ring->stats.sw_err_cnt++;
goto out_err_tx_ok;
}
is_gso = skb_is_gso(skb);
priv->ops.fill_desc(ring, skb, size, dma, seg_num == 1 ? 1 : 0,
buf_num, DESC_TYPE_SKB, ndev->mtu);
buf_num, DESC_TYPE_SKB, ndev->mtu, is_gso);
/* fill the fragments */
for (i = 1; i < seg_num; i++) {
@ -354,7 +379,7 @@ netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
}
priv->ops.fill_desc(ring, skb_frag_page(frag), size, dma,
seg_num - 1 == i ? 1 : 0, buf_num,
DESC_TYPE_PAGE, ndev->mtu);
DESC_TYPE_PAGE, ndev->mtu, is_gso);
}
/*complete translate all packets*/
@ -1776,15 +1801,6 @@ static int hns_nic_set_features(struct net_device *netdev,
netdev_info(netdev, "enet v1 do not support tso!\n");
break;
default:
if (features & (NETIF_F_TSO | NETIF_F_TSO6)) {
priv->ops.fill_desc = fill_tso_desc;
priv->ops.maybe_stop_tx = hns_nic_maybe_stop_tso;
/* The chip only support 7*4096 */
netif_set_tso_max_size(netdev, 7 * 4096);
} else {
priv->ops.fill_desc = fill_v2_desc;
priv->ops.maybe_stop_tx = hns_nic_maybe_stop_tx;
}
break;
}
netdev->features = features;
@ -2159,16 +2175,9 @@ static void hns_nic_set_priv_ops(struct net_device *netdev)
priv->ops.maybe_stop_tx = hns_nic_maybe_stop_tx;
} else {
priv->ops.get_rxd_bnum = get_v2rx_desc_bnum;
if ((netdev->features & NETIF_F_TSO) ||
(netdev->features & NETIF_F_TSO6)) {
priv->ops.fill_desc = fill_tso_desc;
priv->ops.maybe_stop_tx = hns_nic_maybe_stop_tso;
/* This chip only support 7*4096 */
netif_set_tso_max_size(netdev, 7 * 4096);
} else {
priv->ops.fill_desc = fill_v2_desc;
priv->ops.maybe_stop_tx = hns_nic_maybe_stop_tx;
}
priv->ops.fill_desc = fill_desc_v2;
priv->ops.maybe_stop_tx = hns_nic_maybe_stop_tx_v2;
netif_set_tso_max_size(netdev, 7 * 4096);
/* enable tso when init
* control tso on/off through TSE bit in bd
*/


@ -44,7 +44,8 @@ struct hns_nic_ring_data {
struct hns_nic_ops {
void (*fill_desc)(struct hnae_ring *ring, void *priv,
int size, dma_addr_t dma, int frag_end,
int buf_num, enum hns_desc_type type, int mtu);
int buf_num, enum hns_desc_type type, int mtu,
bool is_gso);
int (*maybe_stop_tx)(struct sk_buff **out_skb,
int *bnum, struct hnae_ring *ring);
void (*get_rxd_bnum)(u32 bnum_flag, int *out_bnum);
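The race being closed: hns_nic_set_features() used to swap priv->ops.fill_desc and priv->ops.maybe_stop_tx at runtime, so a transmit in flight could fill the head descriptor with one layout and the fragments with another. An annotated interleaving, condensed:

    /* CPU0, xmit (old code)                   CPU1, ndo_set_features()           */
    priv->ops.fill_desc(ring, skb, ...);       /* head filled as TSO              */
                                               /* ops.fill_desc = fill_v2_desc;   */
    priv->ops.fill_desc(ring, frag, ...);      /* frag filled as non-TSO: corrupt */

The fix snapshots is_gso = skb_is_gso(skb) once per packet and dispatches inside fill_desc_v2(), a callback that never changes.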


@ -16224,7 +16224,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
I40E_PRTGL_SAH_MFS_MASK) >> I40E_PRTGL_SAH_MFS_SHIFT;
if (val < MAX_FRAME_SIZE_DEFAULT)
dev_warn(&pdev->dev, "MFS for port %x has been set below the default: %x\n",
i, val);
pf->hw.port, val);
/* Add a filter to drop all Flow control frames from any VSI from being
* transmitted. By doing so we stop a malicious VF from sending out


@ -827,18 +827,10 @@ static int __iavf_set_coalesce(struct net_device *netdev,
struct iavf_adapter *adapter = netdev_priv(netdev);
int i;
if (ec->rx_coalesce_usecs == 0) {
if (ec->use_adaptive_rx_coalesce)
netif_info(adapter, drv, netdev, "rx-usecs=0, need to disable adaptive-rx for a complete disable\n");
} else if ((ec->rx_coalesce_usecs < IAVF_MIN_ITR) ||
(ec->rx_coalesce_usecs > IAVF_MAX_ITR)) {
if (ec->rx_coalesce_usecs > IAVF_MAX_ITR) {
netif_info(adapter, drv, netdev, "Invalid value, rx-usecs range is 0-8160\n");
return -EINVAL;
} else if (ec->tx_coalesce_usecs == 0) {
if (ec->use_adaptive_tx_coalesce)
netif_info(adapter, drv, netdev, "tx-usecs=0, need to disable adaptive-tx for a complete disable\n");
} else if ((ec->tx_coalesce_usecs < IAVF_MIN_ITR) ||
(ec->tx_coalesce_usecs > IAVF_MAX_ITR)) {
} else if (ec->tx_coalesce_usecs > IAVF_MAX_ITR) {
netif_info(adapter, drv, netdev, "Invalid value, tx-usecs range is 0-8160\n");
return -EINVAL;
}


@ -15,7 +15,6 @@
*/
#define IAVF_ITR_DYNAMIC 0x8000 /* use top bit as a flag */
#define IAVF_ITR_MASK 0x1FFE /* mask for ITR register value */
#define IAVF_MIN_ITR 2 /* reg uses 2 usec resolution */
#define IAVF_ITR_100K 10 /* all values below must be even */
#define IAVF_ITR_50K 20
#define IAVF_ITR_20K 50


@ -374,16 +374,11 @@ static void ice_ena_vf_mappings(struct ice_vf *vf)
*/
int ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector)
{
struct ice_pf *pf;
if (!vf || !q_vector)
return -EINVAL;
pf = vf->pf;
/* always add one to account for the OICR being the first MSIX */
return pf->sriov_base_vector + pf->vfs.num_msix_per * vf->vf_id +
q_vector->v_idx + 1;
return vf->first_vector_idx + q_vector->v_idx + 1;
}
/**


@ -32,7 +32,6 @@ static void ice_port_vlan_on(struct ice_vsi *vsi)
/* setup outer VLAN ops */
vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan;
vlan_ops->clear_port_vlan = ice_vsi_clear_outer_port_vlan;
vlan_ops->clear_port_vlan = ice_vsi_clear_outer_port_vlan;
/* setup inner VLAN ops */
vlan_ops = &vsi->inner_vlan_ops;
@ -47,8 +46,13 @@ static void ice_port_vlan_on(struct ice_vsi *vsi)
vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan;
vlan_ops->clear_port_vlan = ice_vsi_clear_inner_port_vlan;
vlan_ops->clear_port_vlan = ice_vsi_clear_inner_port_vlan;
}
/* all Rx traffic should be in the domain of the assigned port VLAN,
* so prevent disabling Rx VLAN filtering
*/
vlan_ops->dis_rx_filtering = noop_vlan;
vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering;
}
@ -77,6 +81,8 @@ static void ice_port_vlan_off(struct ice_vsi *vsi)
vlan_ops->del_vlan = ice_vsi_del_vlan;
}
vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering;
if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags))
vlan_ops->ena_rx_filtering = noop_vlan;
else
@ -141,7 +147,6 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi)
&vsi->outer_vlan_ops : &vsi->inner_vlan_ops;
vlan_ops->add_vlan = ice_vsi_add_vlan;
vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering;
vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering;
vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering;
}


@ -1523,7 +1523,6 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
u16 num_q_vectors_mapped, vsi_id, vector_id;
struct virtchnl_irq_map_info *irqmap_info;
struct virtchnl_vector_map *map;
struct ice_pf *pf = vf->pf;
struct ice_vsi *vsi;
int i;
@ -1535,7 +1534,7 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
* there is actually at least a single VF queue vector mapped
*/
if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states) ||
pf->vfs.num_msix_per < num_q_vectors_mapped ||
vf->num_msix < num_q_vectors_mapped ||
!num_q_vectors_mapped) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
goto error_param;
@ -1557,7 +1556,7 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
/* vector_id is always 0-based for each VF, and can never be
* larger than or equal to the max allowed interrupts per VF
*/
if (!(vector_id < pf->vfs.num_msix_per) ||
if (!(vector_id < vf->num_msix) ||
!ice_vc_isvalid_vsi_id(vf, vsi_id) ||
(!vector_id && (map->rxq_map || map->txq_map))) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;


@ -1945,7 +1945,7 @@ struct mcs_hw_info {
u8 tcam_entries; /* RX/TX Tcam entries per mcs block */
u8 secy_entries; /* RX/TX SECY entries per mcs block */
u8 sc_entries; /* RX/TX SC CAM entries per mcs block */
u8 sa_entries; /* PN table entries = SA entries */
u16 sa_entries; /* PN table entries = SA entries */
u64 rsvd[16];
};
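The u8 -> u16 bump matters because this field carries the PN-table/SA entry count, which evidently can exceed 255 on these parts; an 8-bit field wraps silently:

    u8  n8  = 256;   /* stores 0 (value reduced mod 256) */
    u16 n16 = 256;   /* stores 256 */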


@ -117,7 +117,7 @@ void mcs_get_rx_secy_stats(struct mcs *mcs, struct mcs_secy_stats *stats, int id
reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYTAGGEDCTLX(id);
stats->pkt_tagged_ctl_cnt = mcs_reg_read(mcs, reg);
reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYUNTAGGEDORNOTAGX(id);
reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYUNTAGGEDX(id);
stats->pkt_untaged_cnt = mcs_reg_read(mcs, reg);
reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYCTLX(id);
@ -215,7 +215,7 @@ void mcs_get_sc_stats(struct mcs *mcs, struct mcs_sc_stats *stats,
reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCNOTVALIDX(id);
stats->pkt_notvalid_cnt = mcs_reg_read(mcs, reg);
reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCUNCHECKEDOROKX(id);
reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCUNCHECKEDX(id);
stats->pkt_unchecked_cnt = mcs_reg_read(mcs, reg);
if (mcs->hw->mcs_blks > 1) {
@ -1219,6 +1219,17 @@ struct mcs *mcs_get_pdata(int mcs_id)
return NULL;
}
bool is_mcs_bypass(int mcs_id)
{
struct mcs *mcs_dev;
list_for_each_entry(mcs_dev, &mcs_list, mcs_list) {
if (mcs_dev->mcs_id == mcs_id)
return mcs_dev->bypass;
}
return true;
}
void mcs_set_port_cfg(struct mcs *mcs, struct mcs_port_cfg_set_req *req)
{
u64 val = 0;
@ -1436,7 +1447,7 @@ static int mcs_x2p_calibration(struct mcs *mcs)
return err;
}
static void mcs_set_external_bypass(struct mcs *mcs, u8 bypass)
static void mcs_set_external_bypass(struct mcs *mcs, bool bypass)
{
u64 val;
@ -1447,6 +1458,7 @@ static void mcs_set_external_bypass(struct mcs *mcs, u8 bypass)
else
val &= ~BIT_ULL(6);
mcs_reg_write(mcs, MCSX_MIL_GLOBAL, val);
mcs->bypass = bypass;
}
static void mcs_global_cfg(struct mcs *mcs)


@ -149,6 +149,7 @@ struct mcs {
u16 num_vec;
void *rvu;
u16 *tx_sa_active;
bool bypass;
};
struct mcs_ops {
@ -206,6 +207,7 @@ void mcs_get_custom_tag_cfg(struct mcs *mcs, struct mcs_custom_tag_cfg_get_req *
int mcs_alloc_ctrlpktrule(struct rsrc_bmap *rsrc, u16 *pf_map, u16 offset, u16 pcifunc);
int mcs_free_ctrlpktrule(struct mcs *mcs, struct mcs_free_ctrl_pkt_rule_req *req);
int mcs_ctrlpktrule_write(struct mcs *mcs, struct mcs_ctrl_pkt_rule_write_req *req);
bool is_mcs_bypass(int mcs_id);
/* CN10K-B APIs */
void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs);


@ -810,14 +810,37 @@
offset = 0x9d8ull; \
offset; })
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCUNCHECKEDX(a) ({ \
u64 offset; \
\
offset = 0xee80ull; \
if (mcs->hw->mcs_blks > 1) \
offset = 0xe818ull; \
offset += (a) * 0x8ull; \
offset; })
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYUNTAGGEDX(a) ({ \
u64 offset; \
\
offset = 0xa680ull; \
if (mcs->hw->mcs_blks > 1) \
offset = 0xd018ull; \
offset += (a) * 0x8ull; \
offset; })
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCLATEORDELAYEDX(a) ({ \
u64 offset; \
\
offset = 0xf680ull; \
if (mcs->hw->mcs_blks > 1) \
offset = 0xe018ull; \
offset += (a) * 0x8ull; \
offset; })
#define MCSX_CSE_RX_MEM_SLAVE_INOCTETSSCDECRYPTEDX(a) (0xe680ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INOCTETSSCVALIDATEX(a) (0xde80ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYUNTAGGEDORNOTAGX(a) (0xa680ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYNOTAGX(a) (0xd218 + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYUNTAGGEDX(a) (0xd018ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCUNCHECKEDOROKX(a) (0xee80ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYCTLX(a) (0xb680ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCLATEORDELAYEDX(a) (0xf680ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSAINVALIDX(a) (0x12680ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSANOTUSINGSAERRORX(a) (0x15680ull + (a) * 0x8ull)
#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSANOTVALIDX(a) (0x13680ull + (a) * 0x8ull)


@ -2631,6 +2631,9 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
rvu_npc_free_mcam_entries(rvu, pcifunc, -1);
rvu_mac_reset(rvu, pcifunc);
if (rvu->mcs_blk_cnt)
rvu_mcs_flr_handler(rvu, pcifunc);
mutex_unlock(&rvu->flr_lock);
}


@ -345,6 +345,7 @@ struct nix_hw {
struct nix_txvlan txvlan;
struct nix_ipolicer *ipolicer;
u64 *tx_credits;
u8 cc_mcs_cnt;
};
/* RVU block's capabilities or functionality,


@ -1087,7 +1087,7 @@ static int rvu_npa_register_reporters(struct rvu_devlink *rvu_dl)
rvu_dl->devlink_wq = create_workqueue("rvu_devlink_wq");
if (!rvu_dl->devlink_wq)
goto err;
return -ENOMEM;
INIT_WORK(&rvu_reporters->intr_work, rvu_npa_intr_work);
INIT_WORK(&rvu_reporters->err_work, rvu_npa_err_work);
@ -1095,9 +1095,6 @@ static int rvu_npa_register_reporters(struct rvu_devlink *rvu_dl)
INIT_WORK(&rvu_reporters->ras_work, rvu_npa_ras_work);
return 0;
err:
rvu_npa_health_reporters_destroy(rvu_dl);
return -ENOMEM;
}
static int rvu_npa_health_reporters_create(struct rvu_devlink *rvu_dl)


@ -12,6 +12,7 @@
#include "rvu_reg.h"
#include "rvu.h"
#include "npc.h"
#include "mcs.h"
#include "cgx.h"
#include "lmac_common.h"
#include "rvu_npc_hash.h"
@ -4389,6 +4390,12 @@ static void nix_link_config(struct rvu *rvu, int blkaddr,
SDP_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS);
}
/* Get MCS external bypass status for CN10K-B */
if (mcs_get_blkcnt() == 1) {
/* Adjust for 2 credits when external bypass is disabled */
nix_hw->cc_mcs_cnt = is_mcs_bypass(0) ? 0 : 2;
}
/* Set credits for Tx links assuming max packet length allowed.
* This will be reconfigured based on MTU set for PF/VF.
*/
@ -4412,6 +4419,7 @@ static void nix_link_config(struct rvu *rvu, int blkaddr,
tx_credits = (lmac_fifo_len - lmac_max_frs) / 16;
/* Enable credits and set credit pkt count to max allowed */
cfg = (tx_credits << 12) | (0x1FF << 2) | BIT_ULL(1);
cfg |= FIELD_PREP(NIX_AF_LINKX_MCS_CNT_MASK, nix_hw->cc_mcs_cnt);
link = iter + slink;
nix_hw->tx_credits[link] = tx_credits;


@ -389,7 +389,13 @@ static u64 npc_get_default_entry_action(struct rvu *rvu, struct npc_mcam *mcam,
int bank, nixlf, index;
/* get ucast entry rule entry index */
nix_get_nixlf(rvu, pf_func, &nixlf, NULL);
if (nix_get_nixlf(rvu, pf_func, &nixlf, NULL)) {
dev_err(rvu->dev, "%s: nixlf not attached to pcifunc:0x%x\n",
__func__, pf_func);
/* Action 0 is drop */
return 0;
}
index = npc_get_nixlf_mcam_index(mcam, pf_func, nixlf,
NIXLF_UCAST_ENTRY);
bank = npc_get_bank(mcam, index);


@ -31,8 +31,8 @@ static struct hw_reg_map txsch_reg_map[NIX_TXSCH_LVL_CNT] = {
{NIX_TXSCH_LVL_TL4, 3, 0xFFFF, {{0x0B00, 0x0B08}, {0x0B10, 0x0B18},
{0x1200, 0x12E0} } },
{NIX_TXSCH_LVL_TL3, 4, 0xFFFF, {{0x1000, 0x10E0}, {0x1600, 0x1608},
{0x1610, 0x1618}, {0x1700, 0x17B0} } },
{NIX_TXSCH_LVL_TL2, 2, 0xFFFF, {{0x0E00, 0x0EE0}, {0x1700, 0x17B0} } },
{0x1610, 0x1618}, {0x1700, 0x17C8} } },
{NIX_TXSCH_LVL_TL2, 2, 0xFFFF, {{0x0E00, 0x0EE0}, {0x1700, 0x17C8} } },
{NIX_TXSCH_LVL_TL1, 1, 0xFFFF, {{0x0C00, 0x0D98} } },
};


@ -437,6 +437,7 @@
#define NIX_AF_LINKX_BASE_MASK GENMASK_ULL(11, 0)
#define NIX_AF_LINKX_RANGE_MASK GENMASK_ULL(19, 16)
#define NIX_AF_LINKX_MCS_CNT_MASK GENMASK_ULL(33, 32)
/* SSO */
#define SSO_AF_CONST (0x1000)


@ -334,9 +334,12 @@ static void otx2_get_pauseparam(struct net_device *netdev,
if (is_otx2_lbkvf(pfvf->pdev))
return;
mutex_lock(&pfvf->mbox.lock);
req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(&pfvf->mbox);
if (!req)
if (!req) {
mutex_unlock(&pfvf->mbox.lock);
return;
}
if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
rsp = (struct cgx_pause_frm_cfg *)
@ -344,6 +347,7 @@ static void otx2_get_pauseparam(struct net_device *netdev,
pause->rx_pause = rsp->rx_pause;
pause->tx_pause = rsp->tx_pause;
}
mutex_unlock(&pfvf->mbox.lock);
}
static int otx2_set_pauseparam(struct net_device *netdev,


@ -1688,6 +1688,14 @@ static void otx2_do_set_rx_mode(struct otx2_nic *pf)
mutex_unlock(&pf->mbox.lock);
}
static void otx2_set_irq_coalesce(struct otx2_nic *pfvf)
{
int cint;
for (cint = 0; cint < pfvf->hw.cint_cnt; cint++)
otx2_config_irq_coalescing(pfvf, cint);
}
static void otx2_dim_work(struct work_struct *w)
{
struct dim_cq_moder cur_moder;
@ -1703,6 +1711,7 @@ static void otx2_dim_work(struct work_struct *w)
CQ_TIMER_THRESH_MAX : cur_moder.usec;
pfvf->hw.cq_ecount_wait = (cur_moder.pkts > NAPI_POLL_WEIGHT) ?
NAPI_POLL_WEIGHT : cur_moder.pkts;
otx2_set_irq_coalesce(pfvf);
dim->state = DIM_START_MEASURE;
}


@ -512,11 +512,18 @@ static void otx2_adjust_adaptive_coalese(struct otx2_nic *pfvf, struct otx2_cq_p
{
struct dim_sample dim_sample;
u64 rx_frames, rx_bytes;
u64 tx_frames, tx_bytes;
rx_frames = OTX2_GET_RX_STATS(RX_BCAST) + OTX2_GET_RX_STATS(RX_MCAST) +
OTX2_GET_RX_STATS(RX_UCAST);
rx_bytes = OTX2_GET_RX_STATS(RX_OCTS);
dim_update_sample(pfvf->napi_events, rx_frames, rx_bytes, &dim_sample);
tx_bytes = OTX2_GET_TX_STATS(TX_OCTS);
tx_frames = OTX2_GET_TX_STATS(TX_UCAST);
dim_update_sample(pfvf->napi_events,
rx_frames + tx_frames,
rx_bytes + tx_bytes,
&dim_sample);
net_dim(&cq_poll->dim, dim_sample);
}
@ -558,16 +565,9 @@ int otx2_napi_handler(struct napi_struct *napi, int budget)
if (pfvf->flags & OTX2_FLAG_INTF_DOWN)
return workdone;
/* Check for adaptive interrupt coalesce */
if (workdone != 0 &&
((pfvf->flags & OTX2_FLAG_ADPTV_INT_COAL_ENABLED) ==
OTX2_FLAG_ADPTV_INT_COAL_ENABLED)) {
/* Adjust irq coalese using net_dim */
/* Adjust irq coalese using net_dim */
if (pfvf->flags & OTX2_FLAG_ADPTV_INT_COAL_ENABLED)
otx2_adjust_adaptive_coalese(pfvf, cq_poll);
/* Update irq coalescing */
for (i = 0; i < pfvf->hw.cint_cnt; i++)
otx2_config_irq_coalescing(pfvf, i);
}
if (unlikely(!filled_cnt)) {
struct refill_work *work;


@ -160,6 +160,18 @@ struct nfp_tun_mac_addr_offload {
u8 addr[ETH_ALEN];
};
/**
* struct nfp_neigh_update_work - update neighbour information to nfp
* @work: Work queue for writing neigh to the nfp
* @n: neighbour entry
* @app: Back pointer to app
*/
struct nfp_neigh_update_work {
struct work_struct work;
struct neighbour *n;
struct nfp_app *app;
};
enum nfp_flower_mac_offload_cmd {
NFP_TUNNEL_MAC_OFFLOAD_ADD = 0,
NFP_TUNNEL_MAC_OFFLOAD_DEL = 1,
@ -607,38 +619,30 @@ err:
nfp_flower_cmsg_warn(app, "Neighbour configuration failed.\n");
}
static int
nfp_tun_neigh_event_handler(struct notifier_block *nb, unsigned long event,
void *ptr)
static void
nfp_tun_release_neigh_update_work(struct nfp_neigh_update_work *update_work)
{
struct nfp_flower_priv *app_priv;
struct netevent_redirect *redir;
struct neighbour *n;
neigh_release(update_work->n);
kfree(update_work);
}
static void nfp_tun_neigh_update(struct work_struct *work)
{
struct nfp_neigh_update_work *update_work;
struct nfp_app *app;
struct neighbour *n;
bool neigh_invalid;
int err;
switch (event) {
case NETEVENT_REDIRECT:
redir = (struct netevent_redirect *)ptr;
n = redir->neigh;
break;
case NETEVENT_NEIGH_UPDATE:
n = (struct neighbour *)ptr;
break;
default:
return NOTIFY_DONE;
}
neigh_invalid = !(n->nud_state & NUD_VALID) || n->dead;
app_priv = container_of(nb, struct nfp_flower_priv, tun.neigh_nb);
app = app_priv->app;
update_work = container_of(work, struct nfp_neigh_update_work, work);
app = update_work->app;
n = update_work->n;
if (!nfp_flower_get_port_id_from_netdev(app, n->dev))
return NOTIFY_DONE;
goto out;
#if IS_ENABLED(CONFIG_INET)
neigh_invalid = !(n->nud_state & NUD_VALID) || n->dead;
if (n->tbl->family == AF_INET6) {
#if IS_ENABLED(CONFIG_IPV6)
struct flowi6 flow6 = {};
@ -655,13 +659,11 @@ nfp_tun_neigh_event_handler(struct notifier_block *nb, unsigned long event,
dst = ip6_dst_lookup_flow(dev_net(n->dev), NULL,
&flow6, NULL);
if (IS_ERR(dst))
return NOTIFY_DONE;
goto out;
dst_release(dst);
}
nfp_tun_write_neigh(n->dev, app, &flow6, n, true, false);
#else
return NOTIFY_DONE;
#endif /* CONFIG_IPV6 */
} else {
struct flowi4 flow4 = {};
@ -678,17 +680,71 @@ nfp_tun_neigh_event_handler(struct notifier_block *nb, unsigned long event,
rt = ip_route_output_key(dev_net(n->dev), &flow4);
err = PTR_ERR_OR_ZERO(rt);
if (err)
return NOTIFY_DONE;
goto out;
ip_rt_put(rt);
}
nfp_tun_write_neigh(n->dev, app, &flow4, n, false, false);
}
#else
return NOTIFY_DONE;
#endif /* CONFIG_INET */
out:
nfp_tun_release_neigh_update_work(update_work);
}
return NOTIFY_OK;
static struct nfp_neigh_update_work *
nfp_tun_alloc_neigh_update_work(struct nfp_app *app, struct neighbour *n)
{
struct nfp_neigh_update_work *update_work;
update_work = kzalloc(sizeof(*update_work), GFP_ATOMIC);
if (!update_work)
return NULL;
INIT_WORK(&update_work->work, nfp_tun_neigh_update);
neigh_hold(n);
update_work->n = n;
update_work->app = app;
return update_work;
}
static int
nfp_tun_neigh_event_handler(struct notifier_block *nb, unsigned long event,
void *ptr)
{
struct nfp_neigh_update_work *update_work;
struct nfp_flower_priv *app_priv;
struct netevent_redirect *redir;
struct neighbour *n;
struct nfp_app *app;
switch (event) {
case NETEVENT_REDIRECT:
redir = (struct netevent_redirect *)ptr;
n = redir->neigh;
break;
case NETEVENT_NEIGH_UPDATE:
n = (struct neighbour *)ptr;
break;
default:
return NOTIFY_DONE;
}
#if IS_ENABLED(CONFIG_IPV6)
if (n->tbl != ipv6_stub->nd_tbl && n->tbl != &arp_tbl)
#else
if (n->tbl != &arp_tbl)
#endif
return NOTIFY_DONE;
app_priv = container_of(nb, struct nfp_flower_priv, tun.neigh_nb);
app = app_priv->app;
update_work = nfp_tun_alloc_neigh_update_work(app, n);
if (!update_work)
return NOTIFY_DONE;
queue_work(system_highpri_wq, &update_work->work);
return NOTIFY_DONE;
}
void nfp_tunnel_request_route_v4(struct nfp_app *app, struct sk_buff *skb)
@ -706,6 +762,7 @@ void nfp_tunnel_request_route_v4(struct nfp_app *app, struct sk_buff *skb)
netdev = nfp_app_dev_get(app, be32_to_cpu(payload->ingress_port), NULL);
if (!netdev)
goto fail_rcu_unlock;
dev_hold(netdev);
flow.daddr = payload->ipv4_addr;
flow.flowi4_proto = IPPROTO_UDP;
@ -725,13 +782,16 @@ void nfp_tunnel_request_route_v4(struct nfp_app *app, struct sk_buff *skb)
ip_rt_put(rt);
if (!n)
goto fail_rcu_unlock;
rcu_read_unlock();
nfp_tun_write_neigh(n->dev, app, &flow, n, false, true);
neigh_release(n);
rcu_read_unlock();
dev_put(netdev);
return;
fail_rcu_unlock:
rcu_read_unlock();
dev_put(netdev);
nfp_flower_cmsg_warn(app, "Requested route not found.\n");
}
@ -749,6 +809,7 @@ void nfp_tunnel_request_route_v6(struct nfp_app *app, struct sk_buff *skb)
netdev = nfp_app_dev_get(app, be32_to_cpu(payload->ingress_port), NULL);
if (!netdev)
goto fail_rcu_unlock;
dev_hold(netdev);
flow.daddr = payload->ipv6_addr;
flow.flowi6_proto = IPPROTO_UDP;
@ -766,14 +827,16 @@ void nfp_tunnel_request_route_v6(struct nfp_app *app, struct sk_buff *skb)
dst_release(dst);
if (!n)
goto fail_rcu_unlock;
rcu_read_unlock();
nfp_tun_write_neigh(n->dev, app, &flow, n, true, true);
neigh_release(n);
rcu_read_unlock();
dev_put(netdev);
return;
fail_rcu_unlock:
rcu_read_unlock();
dev_put(netdev);
nfp_flower_cmsg_warn(app, "Requested IPv6 route not found.\n");
}
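Structurally this is the standard defer-from-atomic pattern: neighbour events fire in softirq context with RCU held, where the route lookups and mutex taking done here may sleep, so the notifier now only allocates (GFP_ATOMIC), pins the neighbour, and punts to a workqueue. Skeleton, condensed from the hunks above:

    static void do_update(struct work_struct *work)      /* process context */
    {
        struct nfp_neigh_update_work *w =
            container_of(work, struct nfp_neigh_update_work, work);
        /* ... route lookup, nfp_tun_write_neigh(), may sleep ... */
        neigh_release(w->n);
        kfree(w);
    }

    static int event_handler(struct notifier_block *nb, unsigned long ev, void *ptr)
    {
        struct neighbour *n = ptr;                       /* NETEVENT_NEIGH_UPDATE */
        struct nfp_neigh_update_work *w = kzalloc(sizeof(*w), GFP_ATOMIC);

        if (!w)
            return NOTIFY_DONE;
        INIT_WORK(&w->work, do_update);
        neigh_hold(n);                 /* keep the neighbour alive for the work */
        w->n = n;
        queue_work(system_highpri_wq, &w->work);
        return NOTIFY_DONE;
    }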


@ -223,7 +223,7 @@ struct ionic_desc_info {
void *cb_arg;
};
#define IONIC_QUEUE_NAME_MAX_SZ 32
#define IONIC_QUEUE_NAME_MAX_SZ 16
struct ionic_queue {
struct device *dev;


@ -49,24 +49,24 @@ static void ionic_lif_queue_identify(struct ionic_lif *lif);
static void ionic_dim_work(struct work_struct *work)
{
struct dim *dim = container_of(work, struct dim, work);
struct ionic_intr_info *intr;
struct dim_cq_moder cur_moder;
struct ionic_qcq *qcq;
struct ionic_lif *lif;
u32 new_coal;
cur_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
qcq = container_of(dim, struct ionic_qcq, dim);
new_coal = ionic_coal_usec_to_hw(qcq->q.lif->ionic, cur_moder.usec);
lif = qcq->q.lif;
new_coal = ionic_coal_usec_to_hw(lif->ionic, cur_moder.usec);
new_coal = new_coal ? new_coal : 1;
if (qcq->intr.dim_coal_hw != new_coal) {
unsigned int qi = qcq->cq.bound_q->index;
struct ionic_lif *lif = qcq->q.lif;
qcq->intr.dim_coal_hw = new_coal;
intr = &qcq->intr;
if (intr->dim_coal_hw != new_coal) {
intr->dim_coal_hw = new_coal;
ionic_intr_coal_init(lif->ionic->idev.intr_ctrl,
lif->rxqcqs[qi]->intr.index,
qcq->intr.dim_coal_hw);
intr->index, intr->dim_coal_hw);
}
dim->state = DIM_START_MEASURE;


@ -196,6 +196,7 @@ enum rtl_registers {
/* No threshold before first PCI xfer */
#define RX_FIFO_THRESH (7 << RXCFG_FIFO_SHIFT)
#define RX_EARLY_OFF (1 << 11)
#define RX_PAUSE_SLOT_ON (1 << 11) /* 8125b and later */
#define RXCFG_DMA_SHIFT 8
/* Unlimited maximum PCI burst. */
#define RX_DMA_BURST (7 << RXCFG_DMA_SHIFT)
@ -2306,9 +2307,13 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_53:
RTL_W32(tp, RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST | RX_EARLY_OFF);
break;
case RTL_GIGA_MAC_VER_61 ... RTL_GIGA_MAC_VER_63:
case RTL_GIGA_MAC_VER_61:
RTL_W32(tp, RxConfig, RX_FETCH_DFLT_8125 | RX_DMA_BURST);
break;
case RTL_GIGA_MAC_VER_63:
RTL_W32(tp, RxConfig, RX_FETCH_DFLT_8125 | RX_DMA_BURST |
RX_PAUSE_SLOT_ON);
break;
default:
RTL_W32(tp, RxConfig, RX128_INT_EN | RX_DMA_BURST);
break;


@ -710,28 +710,22 @@ void dwmac5_est_irq_status(void __iomem *ioaddr, struct net_device *dev,
}
}
void dwmac5_fpe_configure(void __iomem *ioaddr, u32 num_txq, u32 num_rxq,
void dwmac5_fpe_configure(void __iomem *ioaddr, struct stmmac_fpe_cfg *cfg,
u32 num_txq, u32 num_rxq,
bool enable)
{
u32 value;
if (!enable) {
value = readl(ioaddr + MAC_FPE_CTRL_STS);
value &= ~EFPE;
writel(value, ioaddr + MAC_FPE_CTRL_STS);
return;
if (enable) {
cfg->fpe_csr = EFPE;
value = readl(ioaddr + GMAC_RXQ_CTRL1);
value &= ~GMAC_RXQCTRL_FPRQ;
value |= (num_rxq - 1) << GMAC_RXQCTRL_FPRQ_SHIFT;
writel(value, ioaddr + GMAC_RXQ_CTRL1);
} else {
cfg->fpe_csr = 0;
}
value = readl(ioaddr + GMAC_RXQ_CTRL1);
value &= ~GMAC_RXQCTRL_FPRQ;
value |= (num_rxq - 1) << GMAC_RXQCTRL_FPRQ_SHIFT;
writel(value, ioaddr + GMAC_RXQ_CTRL1);
value = readl(ioaddr + MAC_FPE_CTRL_STS);
value |= EFPE;
writel(value, ioaddr + MAC_FPE_CTRL_STS);
writel(cfg->fpe_csr, ioaddr + MAC_FPE_CTRL_STS);
}
int dwmac5_fpe_irq_status(void __iomem *ioaddr, struct net_device *dev)
@ -741,6 +735,9 @@ int dwmac5_fpe_irq_status(void __iomem *ioaddr, struct net_device *dev)
status = FPE_EVENT_UNKNOWN;
/* Reads from the MAC_FPE_CTRL_STS register should only be performed
* here, since the status flags of MAC_FPE_CTRL_STS are "clear on read"
*/
value = readl(ioaddr + MAC_FPE_CTRL_STS);
if (value & TRSP) {
@ -766,19 +763,15 @@ int dwmac5_fpe_irq_status(void __iomem *ioaddr, struct net_device *dev)
return status;
}
void dwmac5_fpe_send_mpacket(void __iomem *ioaddr, enum stmmac_mpacket_type type)
void dwmac5_fpe_send_mpacket(void __iomem *ioaddr, struct stmmac_fpe_cfg *cfg,
enum stmmac_mpacket_type type)
{
u32 value;
u32 value = cfg->fpe_csr;
value = readl(ioaddr + MAC_FPE_CTRL_STS);
if (type == MPACKET_VERIFY) {
value &= ~SRSP;
if (type == MPACKET_VERIFY)
value |= SVER;
} else {
value &= ~SVER;
else if (type == MPACKET_RESPONSE)
value |= SRSP;
}
writel(value, ioaddr + MAC_FPE_CTRL_STS);
}
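The new fpe_csr shadow exists because, per the comment added to dwmac5_fpe_irq_status(), the MAC_FPE_CTRL_STS status flags are clear-on-read: the old read-modify-write in the send-mpacket path could consume TRSP/TVER events before the interrupt handler saw them. Before and after:

    /* before: RMW on a clear-on-read register eats pending events */
    value = readl(ioaddr + MAC_FPE_CTRL_STS);  /* side effect: clears status bits */
    value |= SVER;
    writel(value, ioaddr + MAC_FPE_CTRL_STS);

    /* after: compose the write from the driver-side shadow, never read here */
    value = cfg->fpe_csr;
    value |= SVER;
    writel(value, ioaddr + MAC_FPE_CTRL_STS);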


@ -153,9 +153,11 @@ int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg,
unsigned int ptp_rate);
void dwmac5_est_irq_status(void __iomem *ioaddr, struct net_device *dev,
struct stmmac_extra_stats *x, u32 txqcnt);
void dwmac5_fpe_configure(void __iomem *ioaddr, u32 num_txq, u32 num_rxq,
void dwmac5_fpe_configure(void __iomem *ioaddr, struct stmmac_fpe_cfg *cfg,
u32 num_txq, u32 num_rxq,
bool enable);
void dwmac5_fpe_send_mpacket(void __iomem *ioaddr,
struct stmmac_fpe_cfg *cfg,
enum stmmac_mpacket_type type);
int dwmac5_fpe_irq_status(void __iomem *ioaddr, struct net_device *dev);


@ -1484,7 +1484,8 @@ static int dwxgmac3_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg,
return 0;
}
static void dwxgmac3_fpe_configure(void __iomem *ioaddr, u32 num_txq,
static void dwxgmac3_fpe_configure(void __iomem *ioaddr, struct stmmac_fpe_cfg *cfg,
u32 num_txq,
u32 num_rxq, bool enable)
{
u32 value;


@ -412,9 +412,11 @@ struct stmmac_ops {
unsigned int ptp_rate);
void (*est_irq_status)(void __iomem *ioaddr, struct net_device *dev,
struct stmmac_extra_stats *x, u32 txqcnt);
void (*fpe_configure)(void __iomem *ioaddr, u32 num_txq, u32 num_rxq,
void (*fpe_configure)(void __iomem *ioaddr, struct stmmac_fpe_cfg *cfg,
u32 num_txq, u32 num_rxq,
bool enable);
void (*fpe_send_mpacket)(void __iomem *ioaddr,
struct stmmac_fpe_cfg *cfg,
enum stmmac_mpacket_type type);
int (*fpe_irq_status)(void __iomem *ioaddr, struct net_device *dev);
};


@ -964,7 +964,8 @@ static void stmmac_fpe_link_state_handle(struct stmmac_priv *priv, bool is_up)
bool *hs_enable = &fpe_cfg->hs_enable;
if (is_up && *hs_enable) {
stmmac_fpe_send_mpacket(priv, priv->ioaddr, MPACKET_VERIFY);
stmmac_fpe_send_mpacket(priv, priv->ioaddr, fpe_cfg,
MPACKET_VERIFY);
} else {
*lo_state = FPE_STATE_OFF;
*lp_state = FPE_STATE_OFF;
@ -5839,6 +5840,7 @@ static void stmmac_fpe_event_status(struct stmmac_priv *priv, int status)
/* If user has requested FPE enable, quickly response */
if (*hs_enable)
stmmac_fpe_send_mpacket(priv, priv->ioaddr,
fpe_cfg,
MPACKET_RESPONSE);
}
@ -7263,6 +7265,7 @@ static void stmmac_fpe_lp_task(struct work_struct *work)
if (*lo_state == FPE_STATE_ENTERING_ON &&
*lp_state == FPE_STATE_ENTERING_ON) {
stmmac_fpe_configure(priv, priv->ioaddr,
fpe_cfg,
priv->plat->tx_queues_to_use,
priv->plat->rx_queues_to_use,
*enable);
@ -7281,6 +7284,7 @@ static void stmmac_fpe_lp_task(struct work_struct *work)
netdev_info(priv->dev, SEND_VERIFY_MPAKCET_FMT,
*lo_state, *lp_state);
stmmac_fpe_send_mpacket(priv, priv->ioaddr,
fpe_cfg,
MPACKET_VERIFY);
}
/* Sleep then retry */
@ -7295,6 +7299,7 @@ void stmmac_fpe_handshake(struct stmmac_priv *priv, bool enable)
if (priv->plat->fpe_cfg->hs_enable != enable) {
if (enable) {
stmmac_fpe_send_mpacket(priv, priv->ioaddr,
priv->plat->fpe_cfg,
MPACKET_VERIFY);
} else {
priv->plat->fpe_cfg->lo_fpe_state = FPE_STATE_OFF;
@ -7755,6 +7760,7 @@ int stmmac_suspend(struct device *dev)
if (priv->dma_cap.fpesel) {
/* Disable FPE */
stmmac_fpe_configure(priv, priv->ioaddr,
priv->plat->fpe_cfg,
priv->plat->tx_queues_to_use,
priv->plat->rx_queues_to_use, false);


@ -1079,6 +1079,7 @@ disable:
priv->plat->fpe_cfg->enable = false;
stmmac_fpe_configure(priv, priv->ioaddr,
priv->plat->fpe_cfg,
priv->plat->tx_queues_to_use,
priv->plat->rx_queues_to_use,
false);


@ -3,5 +3,6 @@ config HYPERV_NET
tristate "Microsoft Hyper-V virtual network driver"
depends on HYPERV
select UCS2_STRING
select NLS
help
Select this option to enable the Hyper-V virtual network driver.


@ -3000,6 +3000,8 @@ static void rtl8152_nic_reset(struct r8152 *tp)
ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CR, CR_RST);
for (i = 0; i < 1000; i++) {
if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
break;
if (!(ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CR) & CR_RST))
break;
usleep_range(100, 400);
@ -3329,6 +3331,8 @@ static void rtl_disable(struct r8152 *tp)
rxdy_gated_en(tp, true);
for (i = 0; i < 1000; i++) {
if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
break;
ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL);
if ((ocp_data & FIFO_EMPTY) == FIFO_EMPTY)
break;
@ -3336,6 +3340,8 @@ static void rtl_disable(struct r8152 *tp)
}
for (i = 0; i < 1000; i++) {
if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
break;
if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_TCR0) & TCR0_TX_EMPTY)
break;
usleep_range(1000, 2000);
@ -5499,6 +5505,8 @@ static void wait_oob_link_list_ready(struct r8152 *tp)
int i;
for (i = 0; i < 1000; i++) {
if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
break;
ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL);
if (ocp_data & LINK_LIST_READY)
break;
@ -5513,6 +5521,8 @@ static void r8156b_wait_loading_flash(struct r8152 *tp)
int i;
for (i = 0; i < 100; i++) {
if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
break;
if (ocp_read_word(tp, MCU_TYPE_USB, USB_GPHY_CTRL) & GPHY_PATCH_DONE)
break;
usleep_range(1000, 2000);
@ -5635,6 +5645,8 @@ static int r8153_pre_firmware_1(struct r8152 *tp)
for (i = 0; i < 104; i++) {
u32 ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_WDT1_CTRL);
if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
return -ENODEV;
if (!(ocp_data & WTD1_EN))
break;
usleep_range(1000, 2000);
@ -5791,6 +5803,8 @@ static void r8153_aldps_en(struct r8152 *tp, bool enable)
data &= ~EN_ALDPS;
ocp_reg_write(tp, OCP_POWER_CFG, data);
for (i = 0; i < 20; i++) {
if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
return;
usleep_range(1000, 2000);
if (ocp_read_word(tp, MCU_TYPE_PLA, 0xe000) & 0x0100)
break;
@ -8397,6 +8411,8 @@ static int rtl8152_pre_reset(struct usb_interface *intf)
struct r8152 *tp = usb_get_intfdata(intf);
struct net_device *netdev;
rtnl_lock();
if (!tp || !test_bit(PROBED_WITH_NO_ERRORS, &tp->flags))
return 0;
@ -8428,20 +8444,17 @@ static int rtl8152_post_reset(struct usb_interface *intf)
struct sockaddr sa;
if (!tp || !test_bit(PROBED_WITH_NO_ERRORS, &tp->flags))
return 0;
goto exit;
rtl_set_accessible(tp);
/* reset the MAC address in case of policy change */
if (determine_ethernet_addr(tp, &sa) >= 0) {
rtnl_lock();
if (determine_ethernet_addr(tp, &sa) >= 0)
dev_set_mac_address (tp->netdev, &sa, NULL);
rtnl_unlock();
}
netdev = tp->netdev;
if (!netif_running(netdev))
return 0;
goto exit;
set_bit(WORK_ENABLE, &tp->flags);
if (netif_carrier_ok(netdev)) {
@ -8460,6 +8473,8 @@ static int rtl8152_post_reset(struct usb_interface *intf)
if (!list_empty(&tp->rx_done))
napi_schedule(&tp->napi);
exit:
rtnl_unlock();
return 0;
}
@ -10034,6 +10049,7 @@ static const struct usb_device_id rtl8152_table[] = {
{ USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff) },
{ USB_DEVICE(VENDOR_ID_TPLINK, 0x0601) },
{ USB_DEVICE(VENDOR_ID_DLINK, 0xb301) },
{ USB_DEVICE(VENDOR_ID_ASUS, 0x1976) },
{}
};
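All of these hunks add the same guard to register-polling loops: once RTL8152_INACCESSIBLE is set (surprise removal, failed reset), further reads over USB are pointless and would otherwise spin out the full timeout. The recurring shape, with a hypothetical completion test:

    for (i = 0; i < 1000; i++) {
        if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
            break;                        /* device gone: stop polling */
        if (condition_reached(tp))        /* hypothetical per-loop register test */
            break;
        usleep_range(1000, 2000);
    }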


@ -790,7 +790,8 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
skb_add_rx_frag(nskb, i, page, page_offset, size,
truesize);
if (skb_copy_bits(skb, off, page_address(page),
if (skb_copy_bits(skb, off,
page_address(page) + page_offset,
size)) {
consume_skb(nskb);
goto drop;


@ -3175,6 +3175,9 @@ enum bpf_text_poke_type {
int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
void *addr1, void *addr2);
void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
struct bpf_prog *new, struct bpf_prog *old);
void *bpf_arch_text_copy(void *dst, void *src, size_t len);
int bpf_arch_text_invalidate(void *dst, size_t len);


@ -175,6 +175,7 @@ struct stmmac_fpe_cfg {
bool hs_enable; /* FPE handshake enable */
enum stmmac_fpe_state lp_fpe_state; /* Link Partner FPE state */
enum stmmac_fpe_state lo_fpe_state; /* Local station FPE state */
u32 fpe_csr; /* MAC_FPE_CTRL_STS reg cache */
};
struct stmmac_safety_feature_cfg {


@ -169,7 +169,7 @@ struct tcp_request_sock {
#ifdef CONFIG_TCP_AO
u8 ao_keyid;
u8 ao_rcv_next;
u8 maclen;
bool used_tcp_ao;
#endif
};
@ -180,14 +180,10 @@ static inline struct tcp_request_sock *tcp_rsk(const struct request_sock *req)
static inline bool tcp_rsk_used_ao(const struct request_sock *req)
{
/* The real length of MAC is saved in the request socket,
* signing anything with zero-length makes no sense, so here is
* a little hack..
*/
#ifndef CONFIG_TCP_AO
return false;
#else
return tcp_rsk(req)->maclen != 0;
return tcp_rsk(req)->used_tcp_ao;
#endif
}


@ -30,6 +30,7 @@
#define VENDOR_ID_NVIDIA 0x0955
#define VENDOR_ID_TPLINK 0x2357
#define VENDOR_ID_DLINK 0x2001
#define VENDOR_ID_ASUS 0x0b05
#if IS_REACHABLE(CONFIG_USB_RTL8152)
extern u8 rtl8152_get_version(struct usb_interface *intf);


@ -12,10 +12,12 @@
* struct genl_multicast_group - generic netlink multicast group
* @name: name of the multicast group, names are per-family
* @flags: GENL_* flags (%GENL_ADMIN_PERM or %GENL_UNS_ADMIN_PERM)
* @cap_sys_admin: whether %CAP_SYS_ADMIN is required for binding
*/
struct genl_multicast_group {
char name[GENL_NAMSIZ];
u8 flags;
u8 cap_sys_admin:1;
};
struct genl_split_ops;

@@ -1514,17 +1514,22 @@ static inline int tcp_full_space(const struct sock *sk)
return tcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf));
}
static inline void tcp_adjust_rcv_ssthresh(struct sock *sk)
static inline void __tcp_adjust_rcv_ssthresh(struct sock *sk, u32 new_ssthresh)
{
int unused_mem = sk_unused_reserved_mem(sk);
struct tcp_sock *tp = tcp_sk(sk);
tp->rcv_ssthresh = min(tp->rcv_ssthresh, 4U * tp->advmss);
tp->rcv_ssthresh = min(tp->rcv_ssthresh, new_ssthresh);
if (unused_mem)
tp->rcv_ssthresh = max_t(u32, tp->rcv_ssthresh,
tcp_win_from_space(sk, unused_mem));
}
static inline void tcp_adjust_rcv_ssthresh(struct sock *sk)
{
__tcp_adjust_rcv_ssthresh(sk, 4U * tcp_sk(sk)->advmss);
}
void tcp_cleanup_rbuf(struct sock *sk, int copied);
void __tcp_cleanup_rbuf(struct sock *sk, int copied);

@@ -62,11 +62,17 @@ static inline int tcp_ao_maclen(const struct tcp_ao_key *key)
return key->maclen;
}
/* Use tcp_ao_len_aligned() for TCP header calculations */
static inline int tcp_ao_len(const struct tcp_ao_key *key)
{
return tcp_ao_maclen(key) + sizeof(struct tcp_ao_hdr);
}
static inline int tcp_ao_len_aligned(const struct tcp_ao_key *key)
{
return round_up(tcp_ao_len(key), 4);
}
static inline unsigned int tcp_ao_digest_size(struct tcp_ao_key *key)
{
return key->digest_size;
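
The new helper exists because the TCP data offset field counts 32-bit words, so header space arithmetic must use the padded length. A quick userspace check, with an assumed 13-byte MAC and 4-byte AO header:

#include <stdio.h>

#define ROUND_UP(x, a) (((x) + (a) - 1) / (a) * (a))

int main(void)
{
        int maclen = 13;                 /* illustrative truncated MAC */
        int ao_len = maclen + 4;         /* + sizeof(struct tcp_ao_hdr) */

        /* option-space math must use 20, not 17 */
        printf("raw %d, aligned %d\n", ao_len, ROUND_UP(ao_len, 4));
        return 0;
}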

@@ -1012,11 +1012,16 @@ static void prog_array_map_poke_untrack(struct bpf_map *map,
mutex_unlock(&aux->poke_mutex);
}
void __weak bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
struct bpf_prog *new, struct bpf_prog *old)
{
WARN_ON_ONCE(1);
}
static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
struct bpf_prog *old,
struct bpf_prog *new)
{
u8 *old_addr, *new_addr, *old_bypass_addr;
struct prog_poke_elem *elem;
struct bpf_array_aux *aux;
@@ -1025,7 +1030,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
list_for_each_entry(elem, &aux->poke_progs, list) {
struct bpf_jit_poke_descriptor *poke;
int i, ret;
int i;
for (i = 0; i < elem->aux->size_poke_tab; i++) {
poke = &elem->aux->poke_tab[i];
@@ -1044,21 +1049,10 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
* activated, so tail call updates can arrive from here
* while JIT is still finishing its final fixup for
* non-activated poke entries.
* 3) On program teardown, the program's kallsym entry gets
* removed out of RCU callback, but we can only untrack
* from sleepable context, therefore bpf_arch_text_poke()
* might not see that this is in BPF text section and
* bails out with -EINVAL. As these are unreachable since
* RCU grace period already passed, we simply skip them.
* 4) Also programs reaching refcount of zero while patching
* 3) Also programs reaching refcount of zero while patching
* is in progress is okay since we're protected under
* poke_mutex and untrack the programs before the JIT
* buffer is freed. When we're still in the middle of
* patching and suddenly kallsyms entry of the program
* gets evicted, we just skip the rest which is fine due
* to point 3).
* 5) Any other error happening below from bpf_arch_text_poke()
* is a unexpected bug.
* buffer is freed.
*/
if (!READ_ONCE(poke->tailcall_target_stable))
continue;
@@ -1068,39 +1062,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
poke->tail_call.key != key)
continue;
old_bypass_addr = old ? NULL : poke->bypass_addr;
old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
if (new) {
ret = bpf_arch_text_poke(poke->tailcall_target,
BPF_MOD_JUMP,
old_addr, new_addr);
BUG_ON(ret < 0 && ret != -EINVAL);
if (!old) {
ret = bpf_arch_text_poke(poke->tailcall_bypass,
BPF_MOD_JUMP,
poke->bypass_addr,
NULL);
BUG_ON(ret < 0 && ret != -EINVAL);
}
} else {
ret = bpf_arch_text_poke(poke->tailcall_bypass,
BPF_MOD_JUMP,
old_bypass_addr,
poke->bypass_addr);
BUG_ON(ret < 0 && ret != -EINVAL);
/* let other CPUs finish the execution of program
* so that it will not possible to expose them
* to invalid nop, stack unwind, nop state
*/
if (!ret)
synchronize_rcu();
ret = bpf_arch_text_poke(poke->tailcall_target,
BPF_MOD_JUMP,
old_addr, NULL);
BUG_ON(ret < 0 && ret != -EINVAL);
}
bpf_arch_poke_desc_update(poke, new, old);
}
}
}
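
The refactor above moves the arch-specific text-poke ordering behind bpf_arch_poke_desc_update(), leaving a __weak generic stub that only warns. A userspace sketch of the same weak-symbol pattern (GCC/clang on ELF; names are illustrative):

#include <stdio.h>

/* generic fallback; an arch object file providing a strong
 * poke_desc_update() would silently override this definition
 */
__attribute__((weak)) void poke_desc_update(void)
{
        fprintf(stderr, "WARN_ON_ONCE: no arch implementation\n");
}

int main(void)
{
        poke_desc_update();
        return 0;
}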

@@ -371,14 +371,18 @@ static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, s32 end_old,
static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old,
s32 end_new, s32 curr, const bool probe_pass)
{
const s32 off_min = S16_MIN, off_max = S16_MAX;
s64 off_min, off_max, off;
s32 delta = end_new - end_old;
s32 off;
if (insn->code == (BPF_JMP32 | BPF_JA))
if (insn->code == (BPF_JMP32 | BPF_JA)) {
off = insn->imm;
else
off_min = S32_MIN;
off_max = S32_MAX;
} else {
off = insn->off;
off_min = S16_MIN;
off_max = S16_MAX;
}
if (curr < pos && curr + off + 1 >= end_old)
off += delta;
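
This core.c change fixes branch-offset validation for cpu=v4: a BPF_JMP32 | BPF_JA ("gotol") keeps its target in the 32-bit imm field, everything else in the 16-bit off field. A sketch of the widened range check, with an assumed 40000-insn displacement:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        int64_t off = 40000;             /* fits in imm, not in off */
        int is_jmp32_ja = 1;             /* insn->code == BPF_JMP32|BPF_JA */
        int64_t min = is_jmp32_ja ? INT32_MIN : INT16_MIN;
        int64_t max = is_jmp32_ja ? INT32_MAX : INT16_MAX;

        /* always comparing against S16_MIN/S16_MAX is what used to misfire */
        printf("in range: %d\n", off >= min && off <= max);   /* 1 */
        return 0;
}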

@@ -183,7 +183,7 @@ out:
}
static const struct genl_multicast_group dropmon_mcgrps[] = {
{ .name = "events", },
{ .name = "events", .cap_sys_admin = 1 },
};
static void send_dm_alert(struct work_struct *work)
@@ -1619,11 +1619,13 @@ static const struct genl_small_ops dropmon_ops[] = {
.cmd = NET_DM_CMD_START,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = net_dm_cmd_trace,
.flags = GENL_ADMIN_PERM,
},
{
.cmd = NET_DM_CMD_STOP,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = net_dm_cmd_trace,
.flags = GENL_ADMIN_PERM,
},
{
.cmd = NET_DM_CMD_CONFIG_GET,

@@ -2602,6 +2602,22 @@ BPF_CALL_2(bpf_msg_cork_bytes, struct sk_msg *, msg, u32, bytes)
return 0;
}
static void sk_msg_reset_curr(struct sk_msg *msg)
{
u32 i = msg->sg.start;
u32 len = 0;
do {
len += sk_msg_elem(msg, i)->length;
sk_msg_iter_var_next(i);
if (len >= msg->sg.size)
break;
} while (i != msg->sg.end);
msg->sg.curr = i;
msg->sg.copybreak = 0;
}
static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
.func = bpf_msg_cork_bytes,
.gpl_only = false,
@@ -2721,6 +2737,7 @@ BPF_CALL_4(bpf_msg_pull_data, struct sk_msg *, msg, u32, start,
msg->sg.end - shift + NR_MSG_FRAG_IDS :
msg->sg.end - shift;
out:
sk_msg_reset_curr(msg);
msg->data = sg_virt(&msg->sg.data[first_sge]) + start - offset;
msg->data_end = msg->data + bytes;
return 0;
@@ -2857,6 +2874,7 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
msg->sg.data[new] = rsge;
}
sk_msg_reset_curr(msg);
sk_msg_compute_data_pointers(msg);
return 0;
}
@@ -3025,6 +3043,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
sk_mem_uncharge(msg->sk, len - pop);
msg->sg.size -= (len - pop);
sk_msg_reset_curr(msg);
sk_msg_compute_data_pointers(msg);
return 0;
}
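
sk_msg_reset_curr() walks the scatterlist ring from sg.start until the accumulated length covers sg.size, so the cursor never points at a stale slot after pull/push/pop rewrite the geometry. A userspace model of that walk, with invented slot sizes:

#include <stdio.h>

#define NR_SLOTS 8                         /* stand-in for the sk_msg ring size */

int main(void)
{
        unsigned int len_of[NR_SLOTS] = { 100, 200, 0, 0, 0, 0, 300, 400 };
        unsigned int start = 6, end = 2, size = 1000;  /* ring wraps at 8 */
        unsigned int i = start, len = 0;

        do {
                len += len_of[i];
                i = (i + 1) % NR_SLOTS;    /* sk_msg_iter_var_next() */
                if (len >= size)
                        break;
        } while (i != end);

        printf("curr slot: %u\n", i);      /* 2: first slot past the data */
        return 0;
}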

@@ -635,15 +635,18 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
}
if (dev->header_ops) {
int pull_len = tunnel->hlen + sizeof(struct iphdr);
if (skb_cow_head(skb, 0))
goto free_skb;
tnl_params = (const struct iphdr *)skb->data;
/* Pull skb since ip_tunnel_xmit() needs skb->data pointing
* to gre header.
*/
skb_pull(skb, tunnel->hlen + sizeof(struct iphdr));
if (!pskb_network_may_pull(skb, pull_len))
goto free_skb;
/* ip_tunnel_xmit() needs skb->data pointing to gre header. */
skb_pull(skb, pull_len);
skb_reset_mac_header(skb);
if (skb->ip_summed == CHECKSUM_PARTIAL &&
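
skb_pull() only advances the data pointer; it does not verify the bytes exist. The fix therefore checks pskb_network_may_pull() first and drops runt packets. A toy model of the corrected flow:

#include <stdio.h>

struct pkt { unsigned int len; unsigned int data_off; };

static int may_pull(struct pkt *p, unsigned int n)
{
        return p->len - p->data_off >= n;   /* enough bytes to consume? */
}

static int xmit(struct pkt *p, unsigned int pull_len)
{
        if (!may_pull(p, pull_len))
                return -1;                  /* the goto free_skb path */
        p->data_off += pull_len;            /* skb_pull() */
        return 0;
}

int main(void)
{
        struct pkt runt = { .len = 10, .data_off = 0 };

        printf("%d\n", xmit(&runt, 24));    /* -1: dropped, no bogus pull */
        return 0;
}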

@@ -3368,9 +3368,25 @@ int tcp_set_window_clamp(struct sock *sk, int val)
return -EINVAL;
tp->window_clamp = 0;
} else {
tp->window_clamp = val < SOCK_MIN_RCVBUF / 2 ?
SOCK_MIN_RCVBUF / 2 : val;
tp->rcv_ssthresh = min(tp->rcv_wnd, tp->window_clamp);
u32 new_rcv_ssthresh, old_window_clamp = tp->window_clamp;
u32 new_window_clamp = val < SOCK_MIN_RCVBUF / 2 ?
SOCK_MIN_RCVBUF / 2 : val;
if (new_window_clamp == old_window_clamp)
return 0;
tp->window_clamp = new_window_clamp;
if (new_window_clamp < old_window_clamp) {
/* need to apply the reserved mem provisioning only
* when shrinking the window clamp
*/
__tcp_adjust_rcv_ssthresh(sk, tp->window_clamp);
} else {
new_rcv_ssthresh = min(tp->rcv_wnd, tp->window_clamp);
tp->rcv_ssthresh = max(new_rcv_ssthresh,
tp->rcv_ssthresh);
}
}
return 0;
}
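
The rewritten tcp_set_window_clamp() treats shrink and grow asymmetrically: shrinking pulls rcv_ssthresh down with the clamp (via __tcp_adjust_rcv_ssthresh), growing may only raise it. A sketch of just that min/max logic, with arbitrary example values:

#include <stdio.h>

static unsigned int min_u(unsigned int a, unsigned int b) { return a < b ? a : b; }
static unsigned int max_u(unsigned int a, unsigned int b) { return a > b ? a : b; }

int main(void)
{
        unsigned int rcv_ssthresh = 64000, rcv_wnd = 48000;

        /* shrink: clamp falls to 32000, ssthresh follows it down */
        rcv_ssthresh = min_u(rcv_ssthresh, 32000);
        printf("after shrink: %u\n", rcv_ssthresh);      /* 32000 */

        /* grow: clamp rises to 90000, ssthresh may only move up */
        rcv_ssthresh = max_u(min_u(rcv_wnd, 90000), rcv_ssthresh);
        printf("after grow:   %u\n", rcv_ssthresh);      /* 48000 */
        return 0;
}
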
@@ -3594,6 +3610,10 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
break;
case TCP_AO_REPAIR:
if (!tcp_can_repair_sock(sk)) {
err = -EPERM;
break;
}
err = tcp_ao_set_repair(sk, optval, optlen);
break;
#ifdef CONFIG_TCP_AO
@@ -4293,6 +4313,8 @@ zerocopy_rcv_out:
}
#endif
case TCP_AO_REPAIR:
if (!tcp_can_repair_sock(sk))
return -EPERM;
return tcp_ao_get_repair(sk, optval, optlen);
case TCP_AO_GET_KEYS:
case TCP_AO_INFO: {

@@ -851,7 +851,7 @@ void tcp_ao_syncookie(struct sock *sk, const struct sk_buff *skb,
const struct tcp_ao_hdr *aoh;
struct tcp_ao_key *key;
treq->maclen = 0;
treq->used_tcp_ao = false;
if (tcp_parse_auth_options(th, NULL, &aoh) || !aoh)
return;
@@ -863,7 +863,7 @@ void tcp_ao_syncookie(struct sock *sk, const struct sk_buff *skb,
treq->ao_rcv_next = aoh->keyid;
treq->ao_keyid = aoh->rnext_keyid;
treq->maclen = tcp_ao_maclen(key);
treq->used_tcp_ao = true;
}
static enum skb_drop_reason
@@ -1100,7 +1100,7 @@ void tcp_ao_connect_init(struct sock *sk)
ao_info->current_key = key;
if (!ao_info->rnext_key)
ao_info->rnext_key = key;
tp->tcp_header_len += tcp_ao_len(key);
tp->tcp_header_len += tcp_ao_len_aligned(key);
ao_info->lisn = htonl(tp->write_seq);
ao_info->snd_sne = 0;
@@ -1346,7 +1346,7 @@ static int tcp_ao_parse_crypto(struct tcp_ao_add *cmd, struct tcp_ao_key *key)
syn_tcp_option_space -= TCPOLEN_MSS_ALIGNED;
syn_tcp_option_space -= TCPOLEN_TSTAMP_ALIGNED;
syn_tcp_option_space -= TCPOLEN_WSCALE_ALIGNED;
if (tcp_ao_len(key) > syn_tcp_option_space) {
if (tcp_ao_len_aligned(key) > syn_tcp_option_space) {
err = -EMSGSIZE;
goto err_kfree;
}
@@ -1608,6 +1608,15 @@ static int tcp_ao_add_cmd(struct sock *sk, unsigned short int family,
if (!dev || !l3index)
return -EINVAL;
if (!bound_dev_if || bound_dev_if != cmd.ifindex) {
/* tcp_ao_established_key() doesn't expect having
* non peer-matching key on an established TCP-AO
* connection.
*/
if (!((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)))
return -EINVAL;
}
/* It's still possible to bind after adding keys or even
* re-bind to a different dev (with CAP_NET_RAW).
* So, no reason to return error here, rather try to be

@@ -3871,8 +3871,12 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
* then we can probably ignore it.
*/
if (before(ack, prior_snd_una)) {
u32 max_window;
/* do not accept ACK for bytes we never sent. */
max_window = min_t(u64, tp->max_window, tp->bytes_acked);
/* RFC 5961 5.2 [Blind Data Injection Attack].[Mitigation] */
if (before(ack, prior_snd_una - tp->max_window)) {
if (before(ack, prior_snd_una - max_window)) {
if (!(flag & FLAG_NO_CHALLENGE_ACK))
tcp_send_challenge_ack(sk);
return -SKB_DROP_REASON_TCP_TOO_OLD_ACK;
@@ -7182,11 +7186,12 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
if (tcp_parse_auth_options(tcp_hdr(skb), NULL, &aoh))
goto drop_and_release; /* Invalid TCP options */
if (aoh) {
tcp_rsk(req)->maclen = aoh->length - sizeof(struct tcp_ao_hdr);
tcp_rsk(req)->used_tcp_ao = true;
tcp_rsk(req)->ao_rcv_next = aoh->keyid;
tcp_rsk(req)->ao_keyid = aoh->rnext_keyid;
} else {
tcp_rsk(req)->maclen = 0;
tcp_rsk(req)->used_tcp_ao = false;
}
#endif
tcp_rsk(req)->snt_isn = isn;
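
The tcp_ack() hunk tightens RFC 5961's lower bound: instead of accepting anything within max_window below snd_una, the acceptable range is clamped to bytes actually sent. A userspace sketch of the wraparound-safe check, with before() reimplemented for illustration:

#include <stdint.h>
#include <stdio.h>

static int before(uint32_t seq1, uint32_t seq2)
{
        return (int32_t)(seq1 - seq2) < 0;   /* wraparound-safe compare */
}

static int ack_too_old(uint32_t ack, uint32_t prior_snd_una,
                       uint32_t max_window, uint64_t bytes_acked)
{
        /* do not accept ACK for bytes we never sent */
        uint32_t win = max_window < bytes_acked ? max_window : bytes_acked;

        return before(ack, prior_snd_una - win);
}

int main(void)
{
        /* young flow: only 1000 bytes ever ACKed, so an ACK far below
         * snd_una must trigger a challenge ACK instead of being accepted
         */
        printf("%d\n", ack_too_old(100, 60100, 65535, 1000));    /* 1 */
        printf("%d\n", ack_too_old(59900, 60100, 65535, 1000));  /* 0 */
        return 0;
}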

@@ -690,7 +690,7 @@ static bool tcp_v4_ao_sign_reset(const struct sock *sk, struct sk_buff *skb,
reply_options[0] = htonl((TCPOPT_AO << 24) | (tcp_ao_len(key) << 16) |
(aoh->rnext_keyid << 8) | keyid);
arg->iov[0].iov_len += round_up(tcp_ao_len(key), 4);
arg->iov[0].iov_len += tcp_ao_len_aligned(key);
reply->doff = arg->iov[0].iov_len / 4;
if (tcp_ao_hash_hdr(AF_INET, (char *)&reply_options[1],
@@ -978,7 +978,7 @@ static void tcp_v4_send_ack(const struct sock *sk,
(tcp_ao_len(key->ao_key) << 16) |
(key->ao_key->sndid << 8) |
key->rcv_next);
arg.iov[0].iov_len += round_up(tcp_ao_len(key->ao_key), 4);
arg.iov[0].iov_len += tcp_ao_len_aligned(key->ao_key);
rep.th.doff = arg.iov[0].iov_len / 4;
tcp_ao_hash_hdr(AF_INET, (char *)&rep.opt[offset],

@@ -615,7 +615,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
ao_key = treq->af_specific->ao_lookup(sk, req,
tcp_rsk(req)->ao_keyid, -1);
if (ao_key)
newtp->tcp_header_len += tcp_ao_len(ao_key);
newtp->tcp_header_len += tcp_ao_len_aligned(ao_key);
#endif
if (skb->len >= TCP_MSS_DEFAULT + newtp->tcp_header_len)
newicsk->icsk_ack.last_seg_size = skb->len - newtp->tcp_header_len;

@@ -825,7 +825,7 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb,
timestamps = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps);
if (tcp_key_is_ao(key)) {
opts->options |= OPTION_AO;
remaining -= tcp_ao_len(key->ao_key);
remaining -= tcp_ao_len_aligned(key->ao_key);
}
}
@@ -915,7 +915,7 @@ static unsigned int tcp_synack_options(const struct sock *sk,
ireq->tstamp_ok &= !ireq->sack_ok;
} else if (tcp_key_is_ao(key)) {
opts->options |= OPTION_AO;
remaining -= tcp_ao_len(key->ao_key);
remaining -= tcp_ao_len_aligned(key->ao_key);
ireq->tstamp_ok &= !ireq->sack_ok;
}
@@ -982,7 +982,7 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb
size += TCPOLEN_MD5SIG_ALIGNED;
} else if (tcp_key_is_ao(key)) {
opts->options |= OPTION_AO;
size += tcp_ao_len(key->ao_key);
size += tcp_ao_len_aligned(key->ao_key);
}
if (likely(tp->rx_opt.tstamp_ok)) {
@@ -3720,7 +3720,6 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
if (tcp_rsk_used_ao(req)) {
#ifdef CONFIG_TCP_AO
struct tcp_ao_key *ao_key = NULL;
u8 maclen = tcp_rsk(req)->maclen;
u8 keyid = tcp_rsk(req)->ao_keyid;
ao_key = tcp_sk(sk)->af_specific->ao_lookup(sk, req_to_sk(req),
@@ -3730,13 +3729,11 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
* for another peer-matching key, but the peer has requested
* ao_keyid (RFC5925 RNextKeyID), so let's keep it simple here.
*/
if (unlikely(!ao_key || tcp_ao_maclen(ao_key) != maclen)) {
u8 key_maclen = ao_key ? tcp_ao_maclen(ao_key) : 0;
if (unlikely(!ao_key)) {
rcu_read_unlock();
kfree_skb(skb);
net_warn_ratelimited("TCP-AO: the keyid %u with maclen %u|%u from SYN packet is not present - not sending SYNACK\n",
keyid, maclen, key_maclen);
net_warn_ratelimited("TCP-AO: the keyid %u from SYN packet is not present - not sending SYNACK\n",
keyid);
return NULL;
}
key.ao_key = ao_key;

@@ -1511,13 +1511,9 @@ out:
if (!pn_leaf && !(pn->fn_flags & RTN_RTINFO)) {
pn_leaf = fib6_find_prefix(info->nl_net, table,
pn);
#if RT6_DEBUG >= 2
if (!pn_leaf) {
WARN_ON(!pn_leaf);
if (!pn_leaf)
pn_leaf =
info->nl_net->ipv6.fib6_null_entry;
}
#endif
fib6_info_hold(pn_leaf);
rcu_assign_pointer(pn->leaf, pn_leaf);
}

@@ -881,7 +881,7 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
if (tcp_key_is_md5(key))
tot_len += TCPOLEN_MD5SIG_ALIGNED;
if (tcp_key_is_ao(key))
tot_len += tcp_ao_len(key->ao_key);
tot_len += tcp_ao_len_aligned(key->ao_key);
#ifdef CONFIG_MPTCP
if (rst && !tcp_key_is_md5(key)) {

@@ -31,7 +31,7 @@ struct bpf_nf_link {
#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4) || IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
static const struct nf_defrag_hook *
get_proto_defrag_hook(struct bpf_nf_link *link,
const struct nf_defrag_hook __rcu *global_hook,
const struct nf_defrag_hook __rcu **ptr_global_hook,
const char *mod)
{
const struct nf_defrag_hook *hook;
@@ -39,7 +39,7 @@ get_proto_defrag_hook(struct bpf_nf_link *link,
/* RCU protects us from races against module unloading */
rcu_read_lock();
hook = rcu_dereference(global_hook);
hook = rcu_dereference(*ptr_global_hook);
if (!hook) {
rcu_read_unlock();
err = request_module(mod);
@@ -47,7 +47,7 @@ get_proto_defrag_hook(struct bpf_nf_link *link,
return ERR_PTR(err < 0 ? err : -EINVAL);
rcu_read_lock();
hook = rcu_dereference(global_hook);
hook = rcu_dereference(*ptr_global_hook);
}
if (hook && try_module_get(hook->owner)) {
@@ -78,7 +78,7 @@ static int bpf_nf_enable_defrag(struct bpf_nf_link *link)
switch (link->hook_ops.pf) {
#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4)
case NFPROTO_IPV4:
hook = get_proto_defrag_hook(link, nf_defrag_v4_hook, "nf_defrag_ipv4");
hook = get_proto_defrag_hook(link, &nf_defrag_v4_hook, "nf_defrag_ipv4");
if (IS_ERR(hook))
return PTR_ERR(hook);
@@ -87,7 +87,7 @@ static int bpf_nf_enable_defrag(struct bpf_nf_link *link)
#endif
#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
case NFPROTO_IPV6:
hook = get_proto_defrag_hook(link, nf_defrag_v6_hook, "nf_defrag_ipv6");
hook = get_proto_defrag_hook(link, &nf_defrag_v6_hook, "nf_defrag_ipv6");
if (IS_ERR(hook))
return PTR_ERR(hook);
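
The bpf_nf change passes the address of the global hook pointer, so the value is re-read after request_module() may have populated it; re-testing a stale local copy was the bug. A minimal userspace model:

#include <stdio.h>

static int *global_hook;                       /* set by "module load" */
static int the_hook = 42;
static void request_module_sketch(void) { global_hook = &the_hook; }

static int *get_hook(int **ptr_global_hook)
{
        int *hook = *ptr_global_hook;          /* first look: may be NULL */

        if (!hook) {
                request_module_sketch();
                hook = *ptr_global_hook;       /* re-read through pointer */
        }
        return hook;
}

int main(void)
{
        printf("%d\n", *get_hook(&global_hook));   /* 42 */
        return 0;
}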

@@ -803,7 +803,7 @@ static struct nft_table *nft_table_lookup(const struct net *net,
static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
const struct nlattr *nla,
u8 genmask, u32 nlpid)
int family, u8 genmask, u32 nlpid)
{
struct nftables_pernet *nft_net;
struct nft_table *table;
@@ -811,6 +811,7 @@ static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
nft_net = nft_pernet(net);
list_for_each_entry(table, &nft_net->tables, list) {
if (be64_to_cpu(nla_get_be64(nla)) == table->handle &&
table->family == family &&
nft_active_genmask(table, genmask)) {
if (nft_table_has_owner(table) &&
nlpid && table->nlpid != nlpid)
@@ -1544,7 +1545,7 @@ static int nf_tables_deltable(struct sk_buff *skb, const struct nfnl_info *info,
if (nla[NFTA_TABLE_HANDLE]) {
attr = nla[NFTA_TABLE_HANDLE];
table = nft_table_lookup_byhandle(net, attr, genmask,
table = nft_table_lookup_byhandle(net, attr, family, genmask,
NETLINK_CB(skb).portid);
} else {
attr = nla[NFTA_TABLE_NAME];

@@ -280,10 +280,15 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
priv->expr_array[i] = dynset_expr;
priv->num_exprs++;
if (set->num_exprs &&
dynset_expr->ops != set->exprs[i]->ops) {
err = -EOPNOTSUPP;
goto err_expr_free;
if (set->num_exprs) {
if (i >= set->num_exprs) {
err = -EINVAL;
goto err_expr_free;
}
if (dynset_expr->ops != set->exprs[i]->ops) {
err = -EOPNOTSUPP;
goto err_expr_free;
}
}
i++;
}
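
The dynset fix is a plain bounds check: an element may not carry more expressions than the set was created with, or set->exprs[i] is indexed out of bounds. A sketch, with the kernel errno value hardcoded:

#include <stdio.h>

static int check_exprs(unsigned int set_num_exprs, unsigned int elem_num_exprs)
{
        unsigned int i;

        for (i = 0; i < elem_num_exprs; i++) {
                if (set_num_exprs && i >= set_num_exprs)
                        return -22;         /* -EINVAL: would overrun exprs[] */
                /* ...ops compatibility check against set->exprs[i]... */
        }
        return 0;
}

int main(void)
{
        printf("%d\n", check_exprs(1, 2));  /* -22: rejected */
        printf("%d\n", check_exprs(2, 2));  /* 0: fits */
        return 0;
}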

@@ -214,7 +214,7 @@ static void nft_exthdr_tcp_eval(const struct nft_expr *expr,
offset = i + priv->offset;
if (priv->flags & NFT_EXTHDR_F_PRESENT) {
*dest = 1;
nft_reg_store8(dest, 1);
} else {
if (priv->len % NFT_REG32_SIZE)
dest[priv->len / NFT_REG32_SIZE] = 0;
@@ -461,7 +461,7 @@ static void nft_exthdr_dccp_eval(const struct nft_expr *expr,
type = bufp[0];
if (type == priv->type) {
*dest = 1;
nft_reg_store8(dest, 1);
return;
}

@@ -145,11 +145,15 @@ void nft_fib_store_result(void *reg, const struct nft_fib *priv,
switch (priv->result) {
case NFT_FIB_RESULT_OIF:
index = dev ? dev->ifindex : 0;
*dreg = (priv->flags & NFTA_FIB_F_PRESENT) ? !!index : index;
if (priv->flags & NFTA_FIB_F_PRESENT)
nft_reg_store8(dreg, !!index);
else
*dreg = index;
break;
case NFT_FIB_RESULT_OIFNAME:
if (priv->flags & NFTA_FIB_F_PRESENT)
*dreg = !!dev;
nft_reg_store8(dreg, !!dev);
else
strscpy_pad(reg, dev ? dev->name : "", IFNAMSIZ);
break;
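
These hunks replace 32-bit register stores with nft_reg_store8() because the 'exist' result is later read back as a single byte: on big-endian, byte 0 of a u32 holding 1 is 0. A userspace demonstration of the mismatch (prints 1/1 on little-endian, 0/1 on big-endian):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void reg_store8(uint32_t *reg, uint8_t val)
{
        *reg = 0;
        memcpy(reg, &val, 1);             /* byte 0 on every endianness */
}

int main(void)
{
        uint32_t reg = 1;                 /* the old "*dest = 1" */
        uint8_t as_u8;

        memcpy(&as_u8, &reg, 1);          /* how the match reads it back */
        printf("u32 store read as u8:  %u\n", as_u8);

        reg_store8(&reg, 1);
        memcpy(&as_u8, &reg, 1);
        printf("reg_store8 read as u8: %u\n", as_u8);   /* always 1 */
        return 0;
}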

@@ -2043,6 +2043,9 @@ static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set,
e = f->mt[r].e;
if (!nft_set_elem_active(&e->ext, iter->genmask))
goto cont;
iter->err = iter->fn(ctx, set, iter, &e->priv);
if (iter->err < 0)
goto out;

@@ -76,18 +76,23 @@ owner_mt(const struct sk_buff *skb, struct xt_action_param *par)
*/
return false;
filp = sk->sk_socket->file;
if (filp == NULL)
read_lock_bh(&sk->sk_callback_lock);
filp = sk->sk_socket ? sk->sk_socket->file : NULL;
if (filp == NULL) {
read_unlock_bh(&sk->sk_callback_lock);
return ((info->match ^ info->invert) &
(XT_OWNER_UID | XT_OWNER_GID)) == 0;
}
if (info->match & XT_OWNER_UID) {
kuid_t uid_min = make_kuid(net->user_ns, info->uid_min);
kuid_t uid_max = make_kuid(net->user_ns, info->uid_max);
if ((uid_gte(filp->f_cred->fsuid, uid_min) &&
uid_lte(filp->f_cred->fsuid, uid_max)) ^
!(info->invert & XT_OWNER_UID))
!(info->invert & XT_OWNER_UID)) {
read_unlock_bh(&sk->sk_callback_lock);
return false;
}
}
if (info->match & XT_OWNER_GID) {
@@ -112,10 +117,13 @@ owner_mt(const struct sk_buff *skb, struct xt_action_param *par)
}
}
if (match ^ !(info->invert & XT_OWNER_GID))
if (match ^ !(info->invert & XT_OWNER_GID)) {
read_unlock_bh(&sk->sk_callback_lock);
return false;
}
}
read_unlock_bh(&sk->sk_callback_lock);
return true;
}
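
The xt_owner fix is the classic TOCTOU shape: sk->sk_socket can be cleared concurrently, so it must be re-read and dereferenced under sk_callback_lock rather than tested once up front. A pthread sketch of the pattern (build with -lpthread; the cred comparison stands in for the real uid/gid range checks):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t cb_lock = PTHREAD_RWLOCK_INITIALIZER;
struct file { int cred; };
static struct file *socket_file;          /* a writer may NULL this anytime */

static int match_owner(void)
{
        struct file *filp;
        int ret;

        pthread_rwlock_rdlock(&cb_lock);
        filp = socket_file;               /* re-read under the lock */
        ret = filp ? filp->cred == 0 : 0; /* safe: writers take the wrlock */
        pthread_rwlock_unlock(&cb_lock);
        return ret;
}

int main(void)
{
        struct file f = { 0 };

        socket_file = &f;
        printf("%d\n", match_owner());    /* 1 */
        return 0;
}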

@@ -1691,6 +1691,9 @@ static int genl_bind(struct net *net, int group)
if ((grp->flags & GENL_UNS_ADMIN_PERM) &&
!ns_capable(net->user_ns, CAP_NET_ADMIN))
ret = -EPERM;
if (grp->cap_sys_admin &&
!ns_capable(net->user_ns, CAP_SYS_ADMIN))
ret = -EPERM;
break;
}
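
genl_bind() now also honours a per-group cap_sys_admin bit, so joining a packet-trace multicast group needs CAP_SYS_ADMIN on top of any GENL_*_ADMIN_PERM flag. Modeled below as two independent gates that must both pass:

#include <stdio.h>

struct mcgrp { unsigned int uns_admin_perm:1, cap_sys_admin:1; };

static int genl_bind_sketch(struct mcgrp g, int has_net_admin, int has_sys_admin)
{
        if (g.uns_admin_perm && !has_net_admin)
                return -1;                 /* -EPERM */
        if (g.cap_sys_admin && !has_sys_admin)
                return -1;                 /* -EPERM */
        return 0;
}

int main(void)
{
        struct mcgrp events = { .uns_admin_perm = 0, .cap_sys_admin = 1 };

        printf("%d\n", genl_bind_sketch(events, 1, 0));  /* -1: refused */
        printf("%d\n", genl_bind_sketch(events, 0, 1));  /* 0: allowed */
        return 0;
}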

@@ -4300,7 +4300,7 @@ static void packet_mm_open(struct vm_area_struct *vma)
struct sock *sk = sock->sk;
if (sk)
atomic_inc(&pkt_sk(sk)->mapped);
atomic_long_inc(&pkt_sk(sk)->mapped);
}
static void packet_mm_close(struct vm_area_struct *vma)
@@ -4310,7 +4310,7 @@ static void packet_mm_close(struct vm_area_struct *vma)
struct sock *sk = sock->sk;
if (sk)
atomic_dec(&pkt_sk(sk)->mapped);
atomic_long_dec(&pkt_sk(sk)->mapped);
}
static const struct vm_operations_struct packet_mmap_ops = {
@@ -4405,7 +4405,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
err = -EBUSY;
if (!closing) {
if (atomic_read(&po->mapped))
if (atomic_long_read(&po->mapped))
goto out;
if (packet_read_pending(rb))
goto out;
@@ -4508,7 +4508,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
err = -EBUSY;
mutex_lock(&po->pg_vec_lock);
if (closing || atomic_read(&po->mapped) == 0) {
if (closing || atomic_long_read(&po->mapped) == 0) {
err = 0;
spin_lock_bh(&rb_queue->lock);
swap(rb->pg_vec, pg_vec);
@@ -4526,9 +4526,9 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
po->prot_hook.func = (po->rx_ring.pg_vec) ?
tpacket_rcv : packet_rcv;
skb_queue_purge(rb_queue);
if (atomic_read(&po->mapped))
pr_err("packet_mmap: vma is busy: %d\n",
atomic_read(&po->mapped));
if (atomic_long_read(&po->mapped))
pr_err("packet_mmap: vma is busy: %ld\n",
atomic_long_read(&po->mapped));
}
mutex_unlock(&po->pg_vec_lock);
@@ -4606,7 +4606,7 @@ static int packet_mmap(struct file *file, struct socket *sock,
}
}
atomic_inc(&po->mapped);
atomic_long_inc(&po->mapped);
vma->vm_ops = &packet_mmap_ops;
err = 0;

@@ -122,7 +122,7 @@ struct packet_sock {
__be16 num;
struct packet_rollover *rollover;
struct packet_mclist *mclist;
atomic_t mapped;
atomic_long_t mapped;
enum tpacket_versions tp_version;
unsigned int tp_hdrlen;
unsigned int tp_reserve;
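
The conversion matters because 'mapped' counts vma opens: with a 32-bit counter, 2^32 opens wrap it back to zero and the ring can be torn down while still mapped. The arithmetic, in miniature:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t mapped32 = UINT32_MAX;   /* after 2^32 - 1 increments */
        uint64_t mapped64 = UINT32_MAX;

        mapped32++;                       /* wraps to 0: looks unmapped */
        mapped64++;                       /* keeps counting */
        printf("32-bit: %u  64-bit: %llu\n",
               mapped32, (unsigned long long)mapped64);
        return 0;
}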

@@ -31,7 +31,8 @@ enum psample_nl_multicast_groups {
static const struct genl_multicast_group psample_nl_mcgrps[] = {
[PSAMPLE_NL_MCGRP_CONFIG] = { .name = PSAMPLE_NL_MCGRP_CONFIG_NAME },
[PSAMPLE_NL_MCGRP_SAMPLE] = { .name = PSAMPLE_NL_MCGRP_SAMPLE_NAME },
[PSAMPLE_NL_MCGRP_SAMPLE] = { .name = PSAMPLE_NL_MCGRP_SAMPLE_NAME,
.flags = GENL_UNS_ADMIN_PERM },
};
static struct genl_family psample_nl_family __ro_after_init;

@@ -723,7 +723,7 @@ static void smcd_conn_save_peer_info(struct smc_sock *smc,
int bufsize = smc_uncompress_bufsize(clc->d0.dmbe_size);
smc->conn.peer_rmbe_idx = clc->d0.dmbe_idx;
smc->conn.peer_token = clc->d0.token;
smc->conn.peer_token = ntohll(clc->d0.token);
/* msg header takes up space in the buffer */
smc->conn.peer_rmbe_size = bufsize - sizeof(struct smcd_cdc_msg);
atomic_set(&smc->conn.peer_rmbe_space, smc->conn.peer_rmbe_size);
@@ -1415,7 +1415,7 @@ static int smc_connect_ism(struct smc_sock *smc,
if (rc)
return rc;
}
ini->ism_peer_gid[ini->ism_selected] = aclc->d0.gid;
ini->ism_peer_gid[ini->ism_selected] = ntohll(aclc->d0.gid);
/* there is only one lgr role for SMC-D; use server lock */
mutex_lock(&smc_server_lgr_pending);

@@ -1004,6 +1004,7 @@ static int smc_clc_send_confirm_accept(struct smc_sock *smc,
{
struct smc_connection *conn = &smc->conn;
struct smc_clc_first_contact_ext_v2x fce;
struct smcd_dev *smcd = conn->lgr->smcd;
struct smc_clc_msg_accept_confirm *clc;
struct smc_clc_fce_gid_ext gle;
struct smc_clc_msg_trail trl;
@@ -1021,17 +1022,15 @@ static int smc_clc_send_confirm_accept(struct smc_sock *smc,
memcpy(clc->hdr.eyecatcher, SMCD_EYECATCHER,
sizeof(SMCD_EYECATCHER));
clc->hdr.typev1 = SMC_TYPE_D;
clc->d0.gid =
conn->lgr->smcd->ops->get_local_gid(conn->lgr->smcd);
clc->d0.token = conn->rmb_desc->token;
clc->d0.gid = htonll(smcd->ops->get_local_gid(smcd));
clc->d0.token = htonll(conn->rmb_desc->token);
clc->d0.dmbe_size = conn->rmbe_size_comp;
clc->d0.dmbe_idx = 0;
memcpy(&clc->d0.linkid, conn->lgr->id, SMC_LGR_ID_SIZE);
if (version == SMC_V1) {
clc->hdr.length = htons(SMCD_CLC_ACCEPT_CONFIRM_LEN);
} else {
clc_v2->d1.chid =
htons(smc_ism_get_chid(conn->lgr->smcd));
clc_v2->d1.chid = htons(smc_ism_get_chid(smcd));
if (eid && eid[0])
memcpy(clc_v2->d1.eid, eid, SMC_MAX_EID_LEN);
len = SMCD_CLC_ACCEPT_CONFIRM_LEN_V2;

@@ -204,8 +204,8 @@ struct smcr_clc_msg_accept_confirm { /* SMCR accept/confirm */
} __packed;
struct smcd_clc_msg_accept_confirm_common { /* SMCD accept/confirm */
u64 gid; /* Sender GID */
u64 token; /* DMB token */
__be64 gid; /* Sender GID */
__be64 token; /* DMB token */
u8 dmbe_idx; /* DMBE index */
#if defined(__BIG_ENDIAN_BITFIELD)
u8 dmbe_size : 4, /* buf size (compressed) */
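
With the fields declared __be64, both peers byte-swap the GID and token consistently. A sketch of the 64-bit swap idiom (htonll/ntohll are not in POSIX; this builds one from htonl, as many implementations do):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t htonll_sketch(uint64_t v)
{
        return htonl(1) == 1 ? v          /* big-endian host: identity */
             : ((uint64_t)htonl(v & 0xffffffffu) << 32) | htonl(v >> 32);
}

int main(void)
{
        uint64_t token = 0x0102030405060708ULL;

        /* ntohll is the same operation, so a round trip is the identity;
         * the raw wire bytes differ from host order on little-endian
         */
        printf("%d\n", htonll_sketch(htonll_sketch(token)) == token);  /* 1 */
        return 0;
}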

@@ -952,6 +952,8 @@ static int tls_sw_sendmsg_splice(struct sock *sk, struct msghdr *msg,
}
sk_msg_page_add(msg_pl, page, part, off);
msg_pl->sg.copybreak = 0;
msg_pl->sg.curr = msg_pl->sg.end;
sk_mem_charge(sk, part);
*copied += part;
try_to_copy -= part;

@@ -59,8 +59,7 @@ static bool virtio_transport_can_zcopy(const struct virtio_transport *t_ops,
t_ops = virtio_transport_get_ops(info->vsk);
if (t_ops->can_msgzerocopy) {
int pages_in_iov = iov_iter_npages(iov_iter, MAX_SKB_FRAGS);
int pages_to_send = min(pages_in_iov, MAX_SKB_FRAGS);
int pages_to_send = iov_iter_npages(iov_iter, MAX_SKB_FRAGS);
/* +1 is for packet header. */
return t_ops->can_msgzerocopy(pages_to_send + 1);

@@ -947,7 +947,7 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
rcu_read_lock();
if (xsk_check_common(xs))
goto skip_tx;
goto out;
pool = xs->pool;
@ -959,12 +959,11 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
xsk_generic_xmit(sk);
}
skip_tx:
if (xs->rx && !xskq_prod_is_empty(xs->rx))
mask |= EPOLLIN | EPOLLRDNORM;
if (xs->tx && xsk_tx_writeable(xs))
mask |= EPOLLOUT | EPOLLWRNORM;
out:
rcu_read_unlock();
return mask;
}

@@ -1,6 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
#include <unistd.h>
#include <test_progs.h>
#include <network_helpers.h>
#include "tailcall_poke.skel.h"
/* test_tailcall_1 checks basic functionality by patching multiple locations
* in a single program for a single tail call slot with nop->jmp, jmp->nop
@@ -1105,6 +1108,85 @@ out:
bpf_object__close(tgt_obj);
}
#define JMP_TABLE "/sys/fs/bpf/jmp_table"
static int poke_thread_exit;
static void *poke_update(void *arg)
{
__u32 zero = 0, prog1_fd, prog2_fd, map_fd;
struct tailcall_poke *call = arg;
map_fd = bpf_map__fd(call->maps.jmp_table);
prog1_fd = bpf_program__fd(call->progs.call1);
prog2_fd = bpf_program__fd(call->progs.call2);
while (!poke_thread_exit) {
bpf_map_update_elem(map_fd, &zero, &prog1_fd, BPF_ANY);
bpf_map_update_elem(map_fd, &zero, &prog2_fd, BPF_ANY);
}
return NULL;
}
/*
* We are trying to hit prog array update during another program load
* that shares the same prog array map.
*
* For that we share the jmp_table map between two skeleton instances
* by pinning the jmp_table to same path. Then first skeleton instance
* periodically updates jmp_table in 'poke update' thread while we load
* the second skeleton instance in the main thread.
*/
static void test_tailcall_poke(void)
{
struct tailcall_poke *call, *test;
int err, cnt = 10;
pthread_t thread;
unlink(JMP_TABLE);
call = tailcall_poke__open_and_load();
if (!ASSERT_OK_PTR(call, "tailcall_poke__open"))
return;
err = bpf_map__pin(call->maps.jmp_table, JMP_TABLE);
if (!ASSERT_OK(err, "bpf_map__pin"))
goto out;
err = pthread_create(&thread, NULL, poke_update, call);
if (!ASSERT_OK(err, "new toggler"))
goto out;
while (cnt--) {
test = tailcall_poke__open();
if (!ASSERT_OK_PTR(test, "tailcall_poke__open"))
break;
err = bpf_map__set_pin_path(test->maps.jmp_table, JMP_TABLE);
if (!ASSERT_OK(err, "bpf_map__pin")) {
tailcall_poke__destroy(test);
break;
}
bpf_program__set_autoload(test->progs.test, true);
bpf_program__set_autoload(test->progs.call1, false);
bpf_program__set_autoload(test->progs.call2, false);
err = tailcall_poke__load(test);
tailcall_poke__destroy(test);
if (!ASSERT_OK(err, "tailcall_poke__load"))
break;
}
poke_thread_exit = 1;
ASSERT_OK(pthread_join(thread, NULL), "pthread_join");
out:
bpf_map__unpin(call->maps.jmp_table, JMP_TABLE);
tailcall_poke__destroy(call);
}
void test_tailcalls(void)
{
if (test__start_subtest("tailcall_1"))
@@ -1139,4 +1221,6 @@ void test_tailcalls(void)
test_tailcall_bpf2bpf_fentry_fexit();
if (test__start_subtest("tailcall_bpf2bpf_fentry_entry"))
test_tailcall_bpf2bpf_fentry_entry();
if (test__start_subtest("tailcall_poke"))
test_tailcall_poke();
}

@@ -0,0 +1,32 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
char _license[] SEC("license") = "GPL";
struct {
__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
__uint(max_entries, 1);
__uint(key_size, sizeof(__u32));
__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");
SEC("?fentry/bpf_fentry_test1")
int BPF_PROG(test, int a)
{
bpf_tail_call_static(ctx, &jmp_table, 0);
return 0;
}
SEC("fentry/bpf_fentry_test1")
int BPF_PROG(call1, int a)
{
return 0;
}
SEC("fentry/bpf_fentry_test1")
int BPF_PROG(call2, int a)
{
return 0;
}