Including fixes from netfilter.

Current release - regressions:
 
  - tcp: fix cleanup and leaks in tcp_read_skb() (the new way BPF
    socket maps get data out of the TCP stack)
 
  - tls: rx: react to strparser initialization errors
 
  - netfilter: nf_tables: fix scheduling-while-atomic splat
 
  - net: fix suspicious RCU usage in bpf_sk_reuseport_detach()
 
 Current release - new code bugs:
 
  - mlxsw: ptp: fix a couple of races, static checker warnings
    and error handling
 
 Previous releases - regressions:
 
  - netfilter:
    - nf_tables: fix possible module reference underflow in error path
    - make conntrack helpers deal with BIG TCP (skbs > 64kB)
    - nfnetlink: re-enable conntrack expectation events
 
  - net: fix potential refcount leak in ndisc_router_discovery()
 
 Previous releases - always broken:
 
  - sched: cls_route: disallow handle of 0
 
  - neigh: fix possible local DoS due to net iface start/stop loop
 
  - rtnetlink: fix module refcount leak in rtnetlink_rcv_msg
 
  - sched: fix adding qlen to qcpu->backlog in gnet_stats_add_queue_cpu
 
  - virtio_net: fix endian-ness for RSS
 
  - dsa: mv88e6060: prevent crash on an unused port
 
  - fec: fix timer capture timing in `fec_ptp_enable_pps()`
 
  - ocelot: stats: fix races, integer wrapping and reading incorrect
    registers (the change of register definitions here accounts for
    the bulk of the changed LoC in this PR)
 
 Signed-off-by: Jakub Kicinski <kuba@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE6jPA+I1ugmIBA4hXMUZtbf5SIrsFAmL+lGYACgkQMUZtbf5S
 IrunKw/+OfV68qJ2C+zg/qPgZg5XAD/v+3WuQo9Vsj4Z+dmxelyQkKqok61xLc6t
 eXr8v3/stDM1/zxHqCc0zJZMGhOug4RLS6kfVVwNbo6XaceTJlKcFTgM1bjQgLyT
 pMlet2JMhzpmWkMma2oztsG4zQaWSITCCjgLJByUmeO8+zKXDMojc1eew2bH8ueo
 KzZjIys+lHdEIo2uhGEU3OdhqnFn2zdVGVxcmtgtV3N9rIobnHiJdVwqLlTgnTvQ
 nU5ZoYUM4h1AG7gKSXsDbM0CPH3s4xavpkA3rMB1x4ahfxNd3y6WmpVt9qjE5wME
 8HbzutQ+x7Xf2XAQBBZma/KjmLW0GCHlQhRT+RHBryk21Yizb04HqXNMB1sPFZe6
 uDAvSZjZqPX+3aMznLTzz1T+F1TJygoeVNQ2tlxHkMuPrfS9g3T+jiohGnELF8+K
 /A3g7oCQin/qiMk35JXBuhGk4RqjyPsITOwAZ2OycHZWD/U5xd1OlkKPGUoUAg+m
 y+7XswZZJ/uBw+U+16AMMzg8vxCmoBHbgYGvnw0+96wpv4yVqTW26Wtzv01gjZPp
 wZuJkd+sHZLBNP5RkBC0PQj5rfcUj+4PUTXtW+57z+XM0HcmcqsXZHLXpMr4rS0b
 EnSsuDlfp9SWwfpMld75v/LA19a6opi6novjY4Nds3+t22ffEHY=
 =ednY
 -----END PGP SIGNATURE-----

Merge tag 'net-6.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter.

  Current release - regressions:

   - tcp: fix cleanup and leaks in tcp_read_skb() (the new way BPF
     socket maps get data out of the TCP stack)

   - tls: rx: react to strparser initialization errors

   - netfilter: nf_tables: fix scheduling-while-atomic splat

   - net: fix suspicious RCU usage in bpf_sk_reuseport_detach()

  Current release - new code bugs:

   - mlxsw: ptp: fix a couple of races, static checker warnings and
     error handling

  Previous releases - regressions:

   - netfilter:
      - nf_tables: fix possible module reference underflow in error path
      - make conntrack helpers deal with BIG TCP (skbs > 64kB)
      - nfnetlink: re-enable conntrack expectation events

   - net: fix potential refcount leak in ndisc_router_discovery()

  Previous releases - always broken:

   - sched: cls_route: disallow handle of 0

   - neigh: fix possible local DoS due to net iface start/stop loop

   - rtnetlink: fix module refcount leak in rtnetlink_rcv_msg

   - sched: fix adding qlen to qcpu->backlog in gnet_stats_add_queue_cpu

   - virtio_net: fix endian-ness for RSS

   - dsa: mv88e6060: prevent crash on an unused port

   - fec: fix timer capture timing in `fec_ptp_enable_pps()`

   - ocelot: stats: fix races, integer wrapping and reading incorrect
     registers (the change of register definitions here accounts for
     the bulk of the changed LoC in this PR)"

* tag 'net-6.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (77 commits)
  net: moxa: MAC address reading, generating, validity checking
  tcp: handle pure FIN case correctly
  tcp: refactor tcp_read_skb() a bit
  tcp: fix tcp_cleanup_rbuf() for tcp_read_skb()
  tcp: fix sock skb accounting in tcp_read_skb()
  igb: Add lock to avoid data race
  dt-bindings: Fix incorrect "the the" corrections
  net: genl: fix error path memory leak in policy dumping
  stmmac: intel: Add a missing clk_disable_unprepare() call in intel_eth_pci_remove()
  net: ethernet: mtk_eth_soc: fix possible NULL pointer dereference in mtk_xdp_run
  net/mlx5e: Allocate flow steering storage during uplink initialization
  net: mscc: ocelot: report ndo_get_stats64 from the wraparound-resistant ocelot->stats
  net: mscc: ocelot: keep ocelot_stat_layout by reg address, not offset
  net: mscc: ocelot: make struct ocelot_stat_layout array indexable
  net: mscc: ocelot: fix race between ndo_get_stats64 and ocelot_check_stats_work
  net: mscc: ocelot: turn stats_lock into a spinlock
  net: mscc: ocelot: fix address of SYS_COUNT_TX_AGING counter
  net: mscc: ocelot: fix incorrect ndo_get_stats64 packet counters
  net: dsa: felix: fix ethtool 256-511 and 512-1023 TX packet counters
  net: dsa: don't warn in dsa_port_set_state_now() when driver doesn't support it
  ...
Linus Torvalds 2022-08-18 19:37:15 -07:00
commit 4c2d0b039c
65 changed files with 2345 additions and 787 deletions


@ -14,7 +14,7 @@ MAC node:
- mac-address : The 6-byte MAC address. If present, it is the default
MAC address.
- internal-phy : phandle to the internal PHY node
- phy-handle : phandle the external PHY node
- phy-handle : phandle to the external PHY node
Internal PHY node:
- compatible : Should be "qcom,fsm9900-emac-sgmii" or "qcom,qdf2432-emac-sgmii".


@ -42,7 +42,7 @@ properties:
description:
Address ranges of the thermal registers. If more then one range is given
the first one must be the common registers followed by each sensor
according the datasheet.
according to the datasheet.
minItems: 1
maxItems: 4


@ -613,6 +613,9 @@ int ksz9477_fdb_dump(struct ksz_device *dev, int port,
goto exit;
}
if (!(ksz_data & ALU_VALID))
continue;
/* read ALU table */
ksz9477_read_table(dev, alu_table);


@ -118,6 +118,9 @@ static int mv88e6060_setup_port(struct mv88e6060_priv *priv, int p)
int addr = REG_PORT(p);
int ret;
if (dsa_is_unused_port(priv->ds, p))
return 0;
/* Do not force flow control, disable Ingress and Egress
* Header tagging, disable VLAN tunneling, and set the port
* state to Forwarding. Additionally, if this is the CPU


@ -274,27 +274,98 @@ static const u32 vsc9959_rew_regmap[] = {
static const u32 vsc9959_sys_regmap[] = {
REG(SYS_COUNT_RX_OCTETS, 0x000000),
REG(SYS_COUNT_RX_UNICAST, 0x000004),
REG(SYS_COUNT_RX_MULTICAST, 0x000008),
REG(SYS_COUNT_RX_BROADCAST, 0x00000c),
REG(SYS_COUNT_RX_SHORTS, 0x000010),
REG(SYS_COUNT_RX_FRAGMENTS, 0x000014),
REG(SYS_COUNT_RX_JABBERS, 0x000018),
REG(SYS_COUNT_RX_CRC_ALIGN_ERRS, 0x00001c),
REG(SYS_COUNT_RX_SYM_ERRS, 0x000020),
REG(SYS_COUNT_RX_64, 0x000024),
REG(SYS_COUNT_RX_65_127, 0x000028),
REG(SYS_COUNT_RX_128_255, 0x00002c),
REG(SYS_COUNT_RX_256_1023, 0x000030),
REG(SYS_COUNT_RX_1024_1526, 0x000034),
REG(SYS_COUNT_RX_1527_MAX, 0x000038),
REG(SYS_COUNT_RX_LONGS, 0x000044),
REG(SYS_COUNT_RX_256_511, 0x000030),
REG(SYS_COUNT_RX_512_1023, 0x000034),
REG(SYS_COUNT_RX_1024_1526, 0x000038),
REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
REG(SYS_COUNT_RX_PAUSE, 0x000040),
REG(SYS_COUNT_RX_CONTROL, 0x000044),
REG(SYS_COUNT_RX_LONGS, 0x000048),
REG(SYS_COUNT_RX_CLASSIFIED_DROPS, 0x00004c),
REG(SYS_COUNT_RX_RED_PRIO_0, 0x000050),
REG(SYS_COUNT_RX_RED_PRIO_1, 0x000054),
REG(SYS_COUNT_RX_RED_PRIO_2, 0x000058),
REG(SYS_COUNT_RX_RED_PRIO_3, 0x00005c),
REG(SYS_COUNT_RX_RED_PRIO_4, 0x000060),
REG(SYS_COUNT_RX_RED_PRIO_5, 0x000064),
REG(SYS_COUNT_RX_RED_PRIO_6, 0x000068),
REG(SYS_COUNT_RX_RED_PRIO_7, 0x00006c),
REG(SYS_COUNT_RX_YELLOW_PRIO_0, 0x000070),
REG(SYS_COUNT_RX_YELLOW_PRIO_1, 0x000074),
REG(SYS_COUNT_RX_YELLOW_PRIO_2, 0x000078),
REG(SYS_COUNT_RX_YELLOW_PRIO_3, 0x00007c),
REG(SYS_COUNT_RX_YELLOW_PRIO_4, 0x000080),
REG(SYS_COUNT_RX_YELLOW_PRIO_5, 0x000084),
REG(SYS_COUNT_RX_YELLOW_PRIO_6, 0x000088),
REG(SYS_COUNT_RX_YELLOW_PRIO_7, 0x00008c),
REG(SYS_COUNT_RX_GREEN_PRIO_0, 0x000090),
REG(SYS_COUNT_RX_GREEN_PRIO_1, 0x000094),
REG(SYS_COUNT_RX_GREEN_PRIO_2, 0x000098),
REG(SYS_COUNT_RX_GREEN_PRIO_3, 0x00009c),
REG(SYS_COUNT_RX_GREEN_PRIO_4, 0x0000a0),
REG(SYS_COUNT_RX_GREEN_PRIO_5, 0x0000a4),
REG(SYS_COUNT_RX_GREEN_PRIO_6, 0x0000a8),
REG(SYS_COUNT_RX_GREEN_PRIO_7, 0x0000ac),
REG(SYS_COUNT_TX_OCTETS, 0x000200),
REG(SYS_COUNT_TX_UNICAST, 0x000204),
REG(SYS_COUNT_TX_MULTICAST, 0x000208),
REG(SYS_COUNT_TX_BROADCAST, 0x00020c),
REG(SYS_COUNT_TX_COLLISION, 0x000210),
REG(SYS_COUNT_TX_DROPS, 0x000214),
REG(SYS_COUNT_TX_PAUSE, 0x000218),
REG(SYS_COUNT_TX_64, 0x00021c),
REG(SYS_COUNT_TX_65_127, 0x000220),
REG(SYS_COUNT_TX_128_511, 0x000224),
REG(SYS_COUNT_TX_512_1023, 0x000228),
REG(SYS_COUNT_TX_1024_1526, 0x00022c),
REG(SYS_COUNT_TX_1527_MAX, 0x000230),
REG(SYS_COUNT_TX_128_255, 0x000224),
REG(SYS_COUNT_TX_256_511, 0x000228),
REG(SYS_COUNT_TX_512_1023, 0x00022c),
REG(SYS_COUNT_TX_1024_1526, 0x000230),
REG(SYS_COUNT_TX_1527_MAX, 0x000234),
REG(SYS_COUNT_TX_YELLOW_PRIO_0, 0x000238),
REG(SYS_COUNT_TX_YELLOW_PRIO_1, 0x00023c),
REG(SYS_COUNT_TX_YELLOW_PRIO_2, 0x000240),
REG(SYS_COUNT_TX_YELLOW_PRIO_3, 0x000244),
REG(SYS_COUNT_TX_YELLOW_PRIO_4, 0x000248),
REG(SYS_COUNT_TX_YELLOW_PRIO_5, 0x00024c),
REG(SYS_COUNT_TX_YELLOW_PRIO_6, 0x000250),
REG(SYS_COUNT_TX_YELLOW_PRIO_7, 0x000254),
REG(SYS_COUNT_TX_GREEN_PRIO_0, 0x000258),
REG(SYS_COUNT_TX_GREEN_PRIO_1, 0x00025c),
REG(SYS_COUNT_TX_GREEN_PRIO_2, 0x000260),
REG(SYS_COUNT_TX_GREEN_PRIO_3, 0x000264),
REG(SYS_COUNT_TX_GREEN_PRIO_4, 0x000268),
REG(SYS_COUNT_TX_GREEN_PRIO_5, 0x00026c),
REG(SYS_COUNT_TX_GREEN_PRIO_6, 0x000270),
REG(SYS_COUNT_TX_GREEN_PRIO_7, 0x000274),
REG(SYS_COUNT_TX_AGING, 0x000278),
REG(SYS_COUNT_DROP_LOCAL, 0x000400),
REG(SYS_COUNT_DROP_TAIL, 0x000404),
REG(SYS_COUNT_DROP_YELLOW_PRIO_0, 0x000408),
REG(SYS_COUNT_DROP_YELLOW_PRIO_1, 0x00040c),
REG(SYS_COUNT_DROP_YELLOW_PRIO_2, 0x000410),
REG(SYS_COUNT_DROP_YELLOW_PRIO_3, 0x000414),
REG(SYS_COUNT_DROP_YELLOW_PRIO_4, 0x000418),
REG(SYS_COUNT_DROP_YELLOW_PRIO_5, 0x00041c),
REG(SYS_COUNT_DROP_YELLOW_PRIO_6, 0x000420),
REG(SYS_COUNT_DROP_YELLOW_PRIO_7, 0x000424),
REG(SYS_COUNT_DROP_GREEN_PRIO_0, 0x000428),
REG(SYS_COUNT_DROP_GREEN_PRIO_1, 0x00042c),
REG(SYS_COUNT_DROP_GREEN_PRIO_2, 0x000430),
REG(SYS_COUNT_DROP_GREEN_PRIO_3, 0x000434),
REG(SYS_COUNT_DROP_GREEN_PRIO_4, 0x000438),
REG(SYS_COUNT_DROP_GREEN_PRIO_5, 0x00043c),
REG(SYS_COUNT_DROP_GREEN_PRIO_6, 0x000440),
REG(SYS_COUNT_DROP_GREEN_PRIO_7, 0x000444),
REG(SYS_RESET_CFG, 0x000e00),
REG(SYS_SR_ETYPE_CFG, 0x000e04),
REG(SYS_VLAN_ETYPE_CFG, 0x000e08),
@ -547,100 +618,379 @@ static const struct reg_field vsc9959_regfields[REGFIELD_MAX] = {
[SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 7, 4),
};
static const struct ocelot_stat_layout vsc9959_stats_layout[] = {
{ .offset = 0x00, .name = "rx_octets", },
{ .offset = 0x01, .name = "rx_unicast", },
{ .offset = 0x02, .name = "rx_multicast", },
{ .offset = 0x03, .name = "rx_broadcast", },
{ .offset = 0x04, .name = "rx_shorts", },
{ .offset = 0x05, .name = "rx_fragments", },
{ .offset = 0x06, .name = "rx_jabbers", },
{ .offset = 0x07, .name = "rx_crc_align_errs", },
{ .offset = 0x08, .name = "rx_sym_errs", },
{ .offset = 0x09, .name = "rx_frames_below_65_octets", },
{ .offset = 0x0A, .name = "rx_frames_65_to_127_octets", },
{ .offset = 0x0B, .name = "rx_frames_128_to_255_octets", },
{ .offset = 0x0C, .name = "rx_frames_256_to_511_octets", },
{ .offset = 0x0D, .name = "rx_frames_512_to_1023_octets", },
{ .offset = 0x0E, .name = "rx_frames_1024_to_1526_octets", },
{ .offset = 0x0F, .name = "rx_frames_over_1526_octets", },
{ .offset = 0x10, .name = "rx_pause", },
{ .offset = 0x11, .name = "rx_control", },
{ .offset = 0x12, .name = "rx_longs", },
{ .offset = 0x13, .name = "rx_classified_drops", },
{ .offset = 0x14, .name = "rx_red_prio_0", },
{ .offset = 0x15, .name = "rx_red_prio_1", },
{ .offset = 0x16, .name = "rx_red_prio_2", },
{ .offset = 0x17, .name = "rx_red_prio_3", },
{ .offset = 0x18, .name = "rx_red_prio_4", },
{ .offset = 0x19, .name = "rx_red_prio_5", },
{ .offset = 0x1A, .name = "rx_red_prio_6", },
{ .offset = 0x1B, .name = "rx_red_prio_7", },
{ .offset = 0x1C, .name = "rx_yellow_prio_0", },
{ .offset = 0x1D, .name = "rx_yellow_prio_1", },
{ .offset = 0x1E, .name = "rx_yellow_prio_2", },
{ .offset = 0x1F, .name = "rx_yellow_prio_3", },
{ .offset = 0x20, .name = "rx_yellow_prio_4", },
{ .offset = 0x21, .name = "rx_yellow_prio_5", },
{ .offset = 0x22, .name = "rx_yellow_prio_6", },
{ .offset = 0x23, .name = "rx_yellow_prio_7", },
{ .offset = 0x24, .name = "rx_green_prio_0", },
{ .offset = 0x25, .name = "rx_green_prio_1", },
{ .offset = 0x26, .name = "rx_green_prio_2", },
{ .offset = 0x27, .name = "rx_green_prio_3", },
{ .offset = 0x28, .name = "rx_green_prio_4", },
{ .offset = 0x29, .name = "rx_green_prio_5", },
{ .offset = 0x2A, .name = "rx_green_prio_6", },
{ .offset = 0x2B, .name = "rx_green_prio_7", },
{ .offset = 0x80, .name = "tx_octets", },
{ .offset = 0x81, .name = "tx_unicast", },
{ .offset = 0x82, .name = "tx_multicast", },
{ .offset = 0x83, .name = "tx_broadcast", },
{ .offset = 0x84, .name = "tx_collision", },
{ .offset = 0x85, .name = "tx_drops", },
{ .offset = 0x86, .name = "tx_pause", },
{ .offset = 0x87, .name = "tx_frames_below_65_octets", },
{ .offset = 0x88, .name = "tx_frames_65_to_127_octets", },
{ .offset = 0x89, .name = "tx_frames_128_255_octets", },
{ .offset = 0x8B, .name = "tx_frames_256_511_octets", },
{ .offset = 0x8C, .name = "tx_frames_1024_1526_octets", },
{ .offset = 0x8D, .name = "tx_frames_over_1526_octets", },
{ .offset = 0x8E, .name = "tx_yellow_prio_0", },
{ .offset = 0x8F, .name = "tx_yellow_prio_1", },
{ .offset = 0x90, .name = "tx_yellow_prio_2", },
{ .offset = 0x91, .name = "tx_yellow_prio_3", },
{ .offset = 0x92, .name = "tx_yellow_prio_4", },
{ .offset = 0x93, .name = "tx_yellow_prio_5", },
{ .offset = 0x94, .name = "tx_yellow_prio_6", },
{ .offset = 0x95, .name = "tx_yellow_prio_7", },
{ .offset = 0x96, .name = "tx_green_prio_0", },
{ .offset = 0x97, .name = "tx_green_prio_1", },
{ .offset = 0x98, .name = "tx_green_prio_2", },
{ .offset = 0x99, .name = "tx_green_prio_3", },
{ .offset = 0x9A, .name = "tx_green_prio_4", },
{ .offset = 0x9B, .name = "tx_green_prio_5", },
{ .offset = 0x9C, .name = "tx_green_prio_6", },
{ .offset = 0x9D, .name = "tx_green_prio_7", },
{ .offset = 0x9E, .name = "tx_aged", },
{ .offset = 0x100, .name = "drop_local", },
{ .offset = 0x101, .name = "drop_tail", },
{ .offset = 0x102, .name = "drop_yellow_prio_0", },
{ .offset = 0x103, .name = "drop_yellow_prio_1", },
{ .offset = 0x104, .name = "drop_yellow_prio_2", },
{ .offset = 0x105, .name = "drop_yellow_prio_3", },
{ .offset = 0x106, .name = "drop_yellow_prio_4", },
{ .offset = 0x107, .name = "drop_yellow_prio_5", },
{ .offset = 0x108, .name = "drop_yellow_prio_6", },
{ .offset = 0x109, .name = "drop_yellow_prio_7", },
{ .offset = 0x10A, .name = "drop_green_prio_0", },
{ .offset = 0x10B, .name = "drop_green_prio_1", },
{ .offset = 0x10C, .name = "drop_green_prio_2", },
{ .offset = 0x10D, .name = "drop_green_prio_3", },
{ .offset = 0x10E, .name = "drop_green_prio_4", },
{ .offset = 0x10F, .name = "drop_green_prio_5", },
{ .offset = 0x110, .name = "drop_green_prio_6", },
{ .offset = 0x111, .name = "drop_green_prio_7", },
OCELOT_STAT_END
static const struct ocelot_stat_layout vsc9959_stats_layout[OCELOT_NUM_STATS] = {
[OCELOT_STAT_RX_OCTETS] = {
.name = "rx_octets",
.reg = SYS_COUNT_RX_OCTETS,
},
[OCELOT_STAT_RX_UNICAST] = {
.name = "rx_unicast",
.reg = SYS_COUNT_RX_UNICAST,
},
[OCELOT_STAT_RX_MULTICAST] = {
.name = "rx_multicast",
.reg = SYS_COUNT_RX_MULTICAST,
},
[OCELOT_STAT_RX_BROADCAST] = {
.name = "rx_broadcast",
.reg = SYS_COUNT_RX_BROADCAST,
},
[OCELOT_STAT_RX_SHORTS] = {
.name = "rx_shorts",
.reg = SYS_COUNT_RX_SHORTS,
},
[OCELOT_STAT_RX_FRAGMENTS] = {
.name = "rx_fragments",
.reg = SYS_COUNT_RX_FRAGMENTS,
},
[OCELOT_STAT_RX_JABBERS] = {
.name = "rx_jabbers",
.reg = SYS_COUNT_RX_JABBERS,
},
[OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
.name = "rx_crc_align_errs",
.reg = SYS_COUNT_RX_CRC_ALIGN_ERRS,
},
[OCELOT_STAT_RX_SYM_ERRS] = {
.name = "rx_sym_errs",
.reg = SYS_COUNT_RX_SYM_ERRS,
},
[OCELOT_STAT_RX_64] = {
.name = "rx_frames_below_65_octets",
.reg = SYS_COUNT_RX_64,
},
[OCELOT_STAT_RX_65_127] = {
.name = "rx_frames_65_to_127_octets",
.reg = SYS_COUNT_RX_65_127,
},
[OCELOT_STAT_RX_128_255] = {
.name = "rx_frames_128_to_255_octets",
.reg = SYS_COUNT_RX_128_255,
},
[OCELOT_STAT_RX_256_511] = {
.name = "rx_frames_256_to_511_octets",
.reg = SYS_COUNT_RX_256_511,
},
[OCELOT_STAT_RX_512_1023] = {
.name = "rx_frames_512_to_1023_octets",
.reg = SYS_COUNT_RX_512_1023,
},
[OCELOT_STAT_RX_1024_1526] = {
.name = "rx_frames_1024_to_1526_octets",
.reg = SYS_COUNT_RX_1024_1526,
},
[OCELOT_STAT_RX_1527_MAX] = {
.name = "rx_frames_over_1526_octets",
.reg = SYS_COUNT_RX_1527_MAX,
},
[OCELOT_STAT_RX_PAUSE] = {
.name = "rx_pause",
.reg = SYS_COUNT_RX_PAUSE,
},
[OCELOT_STAT_RX_CONTROL] = {
.name = "rx_control",
.reg = SYS_COUNT_RX_CONTROL,
},
[OCELOT_STAT_RX_LONGS] = {
.name = "rx_longs",
.reg = SYS_COUNT_RX_LONGS,
},
[OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
.name = "rx_classified_drops",
.reg = SYS_COUNT_RX_CLASSIFIED_DROPS,
},
[OCELOT_STAT_RX_RED_PRIO_0] = {
.name = "rx_red_prio_0",
.reg = SYS_COUNT_RX_RED_PRIO_0,
},
[OCELOT_STAT_RX_RED_PRIO_1] = {
.name = "rx_red_prio_1",
.reg = SYS_COUNT_RX_RED_PRIO_1,
},
[OCELOT_STAT_RX_RED_PRIO_2] = {
.name = "rx_red_prio_2",
.reg = SYS_COUNT_RX_RED_PRIO_2,
},
[OCELOT_STAT_RX_RED_PRIO_3] = {
.name = "rx_red_prio_3",
.reg = SYS_COUNT_RX_RED_PRIO_3,
},
[OCELOT_STAT_RX_RED_PRIO_4] = {
.name = "rx_red_prio_4",
.reg = SYS_COUNT_RX_RED_PRIO_4,
},
[OCELOT_STAT_RX_RED_PRIO_5] = {
.name = "rx_red_prio_5",
.reg = SYS_COUNT_RX_RED_PRIO_5,
},
[OCELOT_STAT_RX_RED_PRIO_6] = {
.name = "rx_red_prio_6",
.reg = SYS_COUNT_RX_RED_PRIO_6,
},
[OCELOT_STAT_RX_RED_PRIO_7] = {
.name = "rx_red_prio_7",
.reg = SYS_COUNT_RX_RED_PRIO_7,
},
[OCELOT_STAT_RX_YELLOW_PRIO_0] = {
.name = "rx_yellow_prio_0",
.reg = SYS_COUNT_RX_YELLOW_PRIO_0,
},
[OCELOT_STAT_RX_YELLOW_PRIO_1] = {
.name = "rx_yellow_prio_1",
.reg = SYS_COUNT_RX_YELLOW_PRIO_1,
},
[OCELOT_STAT_RX_YELLOW_PRIO_2] = {
.name = "rx_yellow_prio_2",
.reg = SYS_COUNT_RX_YELLOW_PRIO_2,
},
[OCELOT_STAT_RX_YELLOW_PRIO_3] = {
.name = "rx_yellow_prio_3",
.reg = SYS_COUNT_RX_YELLOW_PRIO_3,
},
[OCELOT_STAT_RX_YELLOW_PRIO_4] = {
.name = "rx_yellow_prio_4",
.reg = SYS_COUNT_RX_YELLOW_PRIO_4,
},
[OCELOT_STAT_RX_YELLOW_PRIO_5] = {
.name = "rx_yellow_prio_5",
.reg = SYS_COUNT_RX_YELLOW_PRIO_5,
},
[OCELOT_STAT_RX_YELLOW_PRIO_6] = {
.name = "rx_yellow_prio_6",
.reg = SYS_COUNT_RX_YELLOW_PRIO_6,
},
[OCELOT_STAT_RX_YELLOW_PRIO_7] = {
.name = "rx_yellow_prio_7",
.reg = SYS_COUNT_RX_YELLOW_PRIO_7,
},
[OCELOT_STAT_RX_GREEN_PRIO_0] = {
.name = "rx_green_prio_0",
.reg = SYS_COUNT_RX_GREEN_PRIO_0,
},
[OCELOT_STAT_RX_GREEN_PRIO_1] = {
.name = "rx_green_prio_1",
.reg = SYS_COUNT_RX_GREEN_PRIO_1,
},
[OCELOT_STAT_RX_GREEN_PRIO_2] = {
.name = "rx_green_prio_2",
.reg = SYS_COUNT_RX_GREEN_PRIO_2,
},
[OCELOT_STAT_RX_GREEN_PRIO_3] = {
.name = "rx_green_prio_3",
.reg = SYS_COUNT_RX_GREEN_PRIO_3,
},
[OCELOT_STAT_RX_GREEN_PRIO_4] = {
.name = "rx_green_prio_4",
.reg = SYS_COUNT_RX_GREEN_PRIO_4,
},
[OCELOT_STAT_RX_GREEN_PRIO_5] = {
.name = "rx_green_prio_5",
.reg = SYS_COUNT_RX_GREEN_PRIO_5,
},
[OCELOT_STAT_RX_GREEN_PRIO_6] = {
.name = "rx_green_prio_6",
.reg = SYS_COUNT_RX_GREEN_PRIO_6,
},
[OCELOT_STAT_RX_GREEN_PRIO_7] = {
.name = "rx_green_prio_7",
.reg = SYS_COUNT_RX_GREEN_PRIO_7,
},
[OCELOT_STAT_TX_OCTETS] = {
.name = "tx_octets",
.reg = SYS_COUNT_TX_OCTETS,
},
[OCELOT_STAT_TX_UNICAST] = {
.name = "tx_unicast",
.reg = SYS_COUNT_TX_UNICAST,
},
[OCELOT_STAT_TX_MULTICAST] = {
.name = "tx_multicast",
.reg = SYS_COUNT_TX_MULTICAST,
},
[OCELOT_STAT_TX_BROADCAST] = {
.name = "tx_broadcast",
.reg = SYS_COUNT_TX_BROADCAST,
},
[OCELOT_STAT_TX_COLLISION] = {
.name = "tx_collision",
.reg = SYS_COUNT_TX_COLLISION,
},
[OCELOT_STAT_TX_DROPS] = {
.name = "tx_drops",
.reg = SYS_COUNT_TX_DROPS,
},
[OCELOT_STAT_TX_PAUSE] = {
.name = "tx_pause",
.reg = SYS_COUNT_TX_PAUSE,
},
[OCELOT_STAT_TX_64] = {
.name = "tx_frames_below_65_octets",
.reg = SYS_COUNT_TX_64,
},
[OCELOT_STAT_TX_65_127] = {
.name = "tx_frames_65_to_127_octets",
.reg = SYS_COUNT_TX_65_127,
},
[OCELOT_STAT_TX_128_255] = {
.name = "tx_frames_128_255_octets",
.reg = SYS_COUNT_TX_128_255,
},
[OCELOT_STAT_TX_256_511] = {
.name = "tx_frames_256_511_octets",
.reg = SYS_COUNT_TX_256_511,
},
[OCELOT_STAT_TX_512_1023] = {
.name = "tx_frames_512_1023_octets",
.reg = SYS_COUNT_TX_512_1023,
},
[OCELOT_STAT_TX_1024_1526] = {
.name = "tx_frames_1024_1526_octets",
.reg = SYS_COUNT_TX_1024_1526,
},
[OCELOT_STAT_TX_1527_MAX] = {
.name = "tx_frames_over_1526_octets",
.reg = SYS_COUNT_TX_1527_MAX,
},
[OCELOT_STAT_TX_YELLOW_PRIO_0] = {
.name = "tx_yellow_prio_0",
.reg = SYS_COUNT_TX_YELLOW_PRIO_0,
},
[OCELOT_STAT_TX_YELLOW_PRIO_1] = {
.name = "tx_yellow_prio_1",
.reg = SYS_COUNT_TX_YELLOW_PRIO_1,
},
[OCELOT_STAT_TX_YELLOW_PRIO_2] = {
.name = "tx_yellow_prio_2",
.reg = SYS_COUNT_TX_YELLOW_PRIO_2,
},
[OCELOT_STAT_TX_YELLOW_PRIO_3] = {
.name = "tx_yellow_prio_3",
.reg = SYS_COUNT_TX_YELLOW_PRIO_3,
},
[OCELOT_STAT_TX_YELLOW_PRIO_4] = {
.name = "tx_yellow_prio_4",
.reg = SYS_COUNT_TX_YELLOW_PRIO_4,
},
[OCELOT_STAT_TX_YELLOW_PRIO_5] = {
.name = "tx_yellow_prio_5",
.reg = SYS_COUNT_TX_YELLOW_PRIO_5,
},
[OCELOT_STAT_TX_YELLOW_PRIO_6] = {
.name = "tx_yellow_prio_6",
.reg = SYS_COUNT_TX_YELLOW_PRIO_6,
},
[OCELOT_STAT_TX_YELLOW_PRIO_7] = {
.name = "tx_yellow_prio_7",
.reg = SYS_COUNT_TX_YELLOW_PRIO_7,
},
[OCELOT_STAT_TX_GREEN_PRIO_0] = {
.name = "tx_green_prio_0",
.reg = SYS_COUNT_TX_GREEN_PRIO_0,
},
[OCELOT_STAT_TX_GREEN_PRIO_1] = {
.name = "tx_green_prio_1",
.reg = SYS_COUNT_TX_GREEN_PRIO_1,
},
[OCELOT_STAT_TX_GREEN_PRIO_2] = {
.name = "tx_green_prio_2",
.reg = SYS_COUNT_TX_GREEN_PRIO_2,
},
[OCELOT_STAT_TX_GREEN_PRIO_3] = {
.name = "tx_green_prio_3",
.reg = SYS_COUNT_TX_GREEN_PRIO_3,
},
[OCELOT_STAT_TX_GREEN_PRIO_4] = {
.name = "tx_green_prio_4",
.reg = SYS_COUNT_TX_GREEN_PRIO_4,
},
[OCELOT_STAT_TX_GREEN_PRIO_5] = {
.name = "tx_green_prio_5",
.reg = SYS_COUNT_TX_GREEN_PRIO_5,
},
[OCELOT_STAT_TX_GREEN_PRIO_6] = {
.name = "tx_green_prio_6",
.reg = SYS_COUNT_TX_GREEN_PRIO_6,
},
[OCELOT_STAT_TX_GREEN_PRIO_7] = {
.name = "tx_green_prio_7",
.reg = SYS_COUNT_TX_GREEN_PRIO_7,
},
[OCELOT_STAT_TX_AGED] = {
.name = "tx_aged",
.reg = SYS_COUNT_TX_AGING,
},
[OCELOT_STAT_DROP_LOCAL] = {
.name = "drop_local",
.reg = SYS_COUNT_DROP_LOCAL,
},
[OCELOT_STAT_DROP_TAIL] = {
.name = "drop_tail",
.reg = SYS_COUNT_DROP_TAIL,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
.name = "drop_yellow_prio_0",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_0,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
.name = "drop_yellow_prio_1",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_1,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
.name = "drop_yellow_prio_2",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_2,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
.name = "drop_yellow_prio_3",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_3,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
.name = "drop_yellow_prio_4",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_4,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
.name = "drop_yellow_prio_5",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_5,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
.name = "drop_yellow_prio_6",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_6,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
.name = "drop_yellow_prio_7",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_7,
},
[OCELOT_STAT_DROP_GREEN_PRIO_0] = {
.name = "drop_green_prio_0",
.reg = SYS_COUNT_DROP_GREEN_PRIO_0,
},
[OCELOT_STAT_DROP_GREEN_PRIO_1] = {
.name = "drop_green_prio_1",
.reg = SYS_COUNT_DROP_GREEN_PRIO_1,
},
[OCELOT_STAT_DROP_GREEN_PRIO_2] = {
.name = "drop_green_prio_2",
.reg = SYS_COUNT_DROP_GREEN_PRIO_2,
},
[OCELOT_STAT_DROP_GREEN_PRIO_3] = {
.name = "drop_green_prio_3",
.reg = SYS_COUNT_DROP_GREEN_PRIO_3,
},
[OCELOT_STAT_DROP_GREEN_PRIO_4] = {
.name = "drop_green_prio_4",
.reg = SYS_COUNT_DROP_GREEN_PRIO_4,
},
[OCELOT_STAT_DROP_GREEN_PRIO_5] = {
.name = "drop_green_prio_5",
.reg = SYS_COUNT_DROP_GREEN_PRIO_5,
},
[OCELOT_STAT_DROP_GREEN_PRIO_6] = {
.name = "drop_green_prio_6",
.reg = SYS_COUNT_DROP_GREEN_PRIO_6,
},
[OCELOT_STAT_DROP_GREEN_PRIO_7] = {
.name = "drop_green_prio_7",
.reg = SYS_COUNT_DROP_GREEN_PRIO_7,
},
};
static const struct vcap_field vsc9959_vcap_es0_keys[] = {
@ -2166,7 +2516,7 @@ static void vsc9959_psfp_sgi_table_del(struct ocelot *ocelot,
static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index,
struct felix_stream_filter_counters *counters)
{
mutex_lock(&ocelot->stats_lock);
spin_lock(&ocelot->stats_lock);
ocelot_rmw(ocelot, SYS_STAT_CFG_STAT_VIEW(index),
SYS_STAT_CFG_STAT_VIEW_M,
@ -2183,7 +2533,7 @@ static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index,
SYS_STAT_CFG_STAT_CLEAR_SHOT(0x10),
SYS_STAT_CFG);
mutex_unlock(&ocelot->stats_lock);
spin_unlock(&ocelot->stats_lock);
}
static int vsc9959_psfp_filter_add(struct ocelot *ocelot, int port,
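The bulk of this file's change (mirrored for vsc9953 further down) is the stats rework: a sentinel-terminated list of raw counter word offsets becomes an array indexed by the shared OCELOT_STAT_* enum, with each entry naming the SYS_COUNT_* register it is read from, and the stats_lock around counter access becomes a spinlock (the mutex_lock/mutex_unlock -> spin_lock/spin_unlock change just above). A minimal stand-alone sketch of the enum-indexed descriptor-table idiom; the names and values below are illustrative, not the driver's own:

#include <stdio.h>

enum stat_id { STAT_RX_OCTETS, STAT_RX_UNICAST, STAT_TX_OCTETS, NUM_STATS };

struct stat_layout {
        const char *name;       /* ethtool string */
        unsigned int reg;       /* where to read this counter on this SoC */
};

/* Indexable by a fixed enum, so common code can look a counter up by ID
 * and skip holes (entries a given switch does not implement). */
static const struct stat_layout layout[NUM_STATS] = {
        [STAT_RX_OCTETS]  = { .name = "rx_octets",  .reg = 0x000000 },
        [STAT_RX_UNICAST] = { .name = "rx_unicast", .reg = 0x000004 },
        [STAT_TX_OCTETS]  = { .name = "tx_octets",  .reg = 0x000200 },
};

int main(void)
{
        for (int i = 0; i < NUM_STATS; i++) {
                if (!layout[i].name)
                        continue;       /* hole: counter not exposed */
                printf("%-12s -> reg 0x%06x\n", layout[i].name, layout[i].reg);
        }
        return 0;
}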


@ -270,27 +270,98 @@ static const u32 vsc9953_rew_regmap[] = {
static const u32 vsc9953_sys_regmap[] = {
REG(SYS_COUNT_RX_OCTETS, 0x000000),
REG(SYS_COUNT_RX_UNICAST, 0x000004),
REG(SYS_COUNT_RX_MULTICAST, 0x000008),
REG(SYS_COUNT_RX_BROADCAST, 0x00000c),
REG(SYS_COUNT_RX_SHORTS, 0x000010),
REG(SYS_COUNT_RX_FRAGMENTS, 0x000014),
REG(SYS_COUNT_RX_JABBERS, 0x000018),
REG(SYS_COUNT_RX_CRC_ALIGN_ERRS, 0x00001c),
REG(SYS_COUNT_RX_SYM_ERRS, 0x000020),
REG(SYS_COUNT_RX_64, 0x000024),
REG(SYS_COUNT_RX_65_127, 0x000028),
REG(SYS_COUNT_RX_128_255, 0x00002c),
REG(SYS_COUNT_RX_256_1023, 0x000030),
REG(SYS_COUNT_RX_1024_1526, 0x000034),
REG(SYS_COUNT_RX_1527_MAX, 0x000038),
REG(SYS_COUNT_RX_256_511, 0x000030),
REG(SYS_COUNT_RX_512_1023, 0x000034),
REG(SYS_COUNT_RX_1024_1526, 0x000038),
REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
REG(SYS_COUNT_RX_PAUSE, 0x000040),
REG(SYS_COUNT_RX_CONTROL, 0x000044),
REG(SYS_COUNT_RX_LONGS, 0x000048),
REG(SYS_COUNT_RX_CLASSIFIED_DROPS, 0x00004c),
REG(SYS_COUNT_RX_RED_PRIO_0, 0x000050),
REG(SYS_COUNT_RX_RED_PRIO_1, 0x000054),
REG(SYS_COUNT_RX_RED_PRIO_2, 0x000058),
REG(SYS_COUNT_RX_RED_PRIO_3, 0x00005c),
REG(SYS_COUNT_RX_RED_PRIO_4, 0x000060),
REG(SYS_COUNT_RX_RED_PRIO_5, 0x000064),
REG(SYS_COUNT_RX_RED_PRIO_6, 0x000068),
REG(SYS_COUNT_RX_RED_PRIO_7, 0x00006c),
REG(SYS_COUNT_RX_YELLOW_PRIO_0, 0x000070),
REG(SYS_COUNT_RX_YELLOW_PRIO_1, 0x000074),
REG(SYS_COUNT_RX_YELLOW_PRIO_2, 0x000078),
REG(SYS_COUNT_RX_YELLOW_PRIO_3, 0x00007c),
REG(SYS_COUNT_RX_YELLOW_PRIO_4, 0x000080),
REG(SYS_COUNT_RX_YELLOW_PRIO_5, 0x000084),
REG(SYS_COUNT_RX_YELLOW_PRIO_6, 0x000088),
REG(SYS_COUNT_RX_YELLOW_PRIO_7, 0x00008c),
REG(SYS_COUNT_RX_GREEN_PRIO_0, 0x000090),
REG(SYS_COUNT_RX_GREEN_PRIO_1, 0x000094),
REG(SYS_COUNT_RX_GREEN_PRIO_2, 0x000098),
REG(SYS_COUNT_RX_GREEN_PRIO_3, 0x00009c),
REG(SYS_COUNT_RX_GREEN_PRIO_4, 0x0000a0),
REG(SYS_COUNT_RX_GREEN_PRIO_5, 0x0000a4),
REG(SYS_COUNT_RX_GREEN_PRIO_6, 0x0000a8),
REG(SYS_COUNT_RX_GREEN_PRIO_7, 0x0000ac),
REG(SYS_COUNT_TX_OCTETS, 0x000100),
REG(SYS_COUNT_TX_UNICAST, 0x000104),
REG(SYS_COUNT_TX_MULTICAST, 0x000108),
REG(SYS_COUNT_TX_BROADCAST, 0x00010c),
REG(SYS_COUNT_TX_COLLISION, 0x000110),
REG(SYS_COUNT_TX_DROPS, 0x000114),
REG(SYS_COUNT_TX_PAUSE, 0x000118),
REG(SYS_COUNT_TX_64, 0x00011c),
REG(SYS_COUNT_TX_65_127, 0x000120),
REG(SYS_COUNT_TX_128_511, 0x000124),
REG(SYS_COUNT_TX_512_1023, 0x000128),
REG(SYS_COUNT_TX_1024_1526, 0x00012c),
REG(SYS_COUNT_TX_1527_MAX, 0x000130),
REG(SYS_COUNT_TX_128_255, 0x000124),
REG(SYS_COUNT_TX_256_511, 0x000128),
REG(SYS_COUNT_TX_512_1023, 0x00012c),
REG(SYS_COUNT_TX_1024_1526, 0x000130),
REG(SYS_COUNT_TX_1527_MAX, 0x000134),
REG(SYS_COUNT_TX_YELLOW_PRIO_0, 0x000138),
REG(SYS_COUNT_TX_YELLOW_PRIO_1, 0x00013c),
REG(SYS_COUNT_TX_YELLOW_PRIO_2, 0x000140),
REG(SYS_COUNT_TX_YELLOW_PRIO_3, 0x000144),
REG(SYS_COUNT_TX_YELLOW_PRIO_4, 0x000148),
REG(SYS_COUNT_TX_YELLOW_PRIO_5, 0x00014c),
REG(SYS_COUNT_TX_YELLOW_PRIO_6, 0x000150),
REG(SYS_COUNT_TX_YELLOW_PRIO_7, 0x000154),
REG(SYS_COUNT_TX_GREEN_PRIO_0, 0x000158),
REG(SYS_COUNT_TX_GREEN_PRIO_1, 0x00015c),
REG(SYS_COUNT_TX_GREEN_PRIO_2, 0x000160),
REG(SYS_COUNT_TX_GREEN_PRIO_3, 0x000164),
REG(SYS_COUNT_TX_GREEN_PRIO_4, 0x000168),
REG(SYS_COUNT_TX_GREEN_PRIO_5, 0x00016c),
REG(SYS_COUNT_TX_GREEN_PRIO_6, 0x000170),
REG(SYS_COUNT_TX_GREEN_PRIO_7, 0x000174),
REG(SYS_COUNT_TX_AGING, 0x000178),
REG(SYS_COUNT_DROP_LOCAL, 0x000200),
REG(SYS_COUNT_DROP_TAIL, 0x000204),
REG(SYS_COUNT_DROP_YELLOW_PRIO_0, 0x000208),
REG(SYS_COUNT_DROP_YELLOW_PRIO_1, 0x00020c),
REG(SYS_COUNT_DROP_YELLOW_PRIO_2, 0x000210),
REG(SYS_COUNT_DROP_YELLOW_PRIO_3, 0x000214),
REG(SYS_COUNT_DROP_YELLOW_PRIO_4, 0x000218),
REG(SYS_COUNT_DROP_YELLOW_PRIO_5, 0x00021c),
REG(SYS_COUNT_DROP_YELLOW_PRIO_6, 0x000220),
REG(SYS_COUNT_DROP_YELLOW_PRIO_7, 0x000224),
REG(SYS_COUNT_DROP_GREEN_PRIO_0, 0x000228),
REG(SYS_COUNT_DROP_GREEN_PRIO_1, 0x00022c),
REG(SYS_COUNT_DROP_GREEN_PRIO_2, 0x000230),
REG(SYS_COUNT_DROP_GREEN_PRIO_3, 0x000234),
REG(SYS_COUNT_DROP_GREEN_PRIO_4, 0x000238),
REG(SYS_COUNT_DROP_GREEN_PRIO_5, 0x00023c),
REG(SYS_COUNT_DROP_GREEN_PRIO_6, 0x000240),
REG(SYS_COUNT_DROP_GREEN_PRIO_7, 0x000244),
REG(SYS_RESET_CFG, 0x000318),
REG_RESERVED(SYS_SR_ETYPE_CFG),
REG(SYS_VLAN_ETYPE_CFG, 0x000320),
@ -543,101 +614,379 @@ static const struct reg_field vsc9953_regfields[REGFIELD_MAX] = {
[SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 11, 4),
};
static const struct ocelot_stat_layout vsc9953_stats_layout[] = {
{ .offset = 0x00, .name = "rx_octets", },
{ .offset = 0x01, .name = "rx_unicast", },
{ .offset = 0x02, .name = "rx_multicast", },
{ .offset = 0x03, .name = "rx_broadcast", },
{ .offset = 0x04, .name = "rx_shorts", },
{ .offset = 0x05, .name = "rx_fragments", },
{ .offset = 0x06, .name = "rx_jabbers", },
{ .offset = 0x07, .name = "rx_crc_align_errs", },
{ .offset = 0x08, .name = "rx_sym_errs", },
{ .offset = 0x09, .name = "rx_frames_below_65_octets", },
{ .offset = 0x0A, .name = "rx_frames_65_to_127_octets", },
{ .offset = 0x0B, .name = "rx_frames_128_to_255_octets", },
{ .offset = 0x0C, .name = "rx_frames_256_to_511_octets", },
{ .offset = 0x0D, .name = "rx_frames_512_to_1023_octets", },
{ .offset = 0x0E, .name = "rx_frames_1024_to_1526_octets", },
{ .offset = 0x0F, .name = "rx_frames_over_1526_octets", },
{ .offset = 0x10, .name = "rx_pause", },
{ .offset = 0x11, .name = "rx_control", },
{ .offset = 0x12, .name = "rx_longs", },
{ .offset = 0x13, .name = "rx_classified_drops", },
{ .offset = 0x14, .name = "rx_red_prio_0", },
{ .offset = 0x15, .name = "rx_red_prio_1", },
{ .offset = 0x16, .name = "rx_red_prio_2", },
{ .offset = 0x17, .name = "rx_red_prio_3", },
{ .offset = 0x18, .name = "rx_red_prio_4", },
{ .offset = 0x19, .name = "rx_red_prio_5", },
{ .offset = 0x1A, .name = "rx_red_prio_6", },
{ .offset = 0x1B, .name = "rx_red_prio_7", },
{ .offset = 0x1C, .name = "rx_yellow_prio_0", },
{ .offset = 0x1D, .name = "rx_yellow_prio_1", },
{ .offset = 0x1E, .name = "rx_yellow_prio_2", },
{ .offset = 0x1F, .name = "rx_yellow_prio_3", },
{ .offset = 0x20, .name = "rx_yellow_prio_4", },
{ .offset = 0x21, .name = "rx_yellow_prio_5", },
{ .offset = 0x22, .name = "rx_yellow_prio_6", },
{ .offset = 0x23, .name = "rx_yellow_prio_7", },
{ .offset = 0x24, .name = "rx_green_prio_0", },
{ .offset = 0x25, .name = "rx_green_prio_1", },
{ .offset = 0x26, .name = "rx_green_prio_2", },
{ .offset = 0x27, .name = "rx_green_prio_3", },
{ .offset = 0x28, .name = "rx_green_prio_4", },
{ .offset = 0x29, .name = "rx_green_prio_5", },
{ .offset = 0x2A, .name = "rx_green_prio_6", },
{ .offset = 0x2B, .name = "rx_green_prio_7", },
{ .offset = 0x40, .name = "tx_octets", },
{ .offset = 0x41, .name = "tx_unicast", },
{ .offset = 0x42, .name = "tx_multicast", },
{ .offset = 0x43, .name = "tx_broadcast", },
{ .offset = 0x44, .name = "tx_collision", },
{ .offset = 0x45, .name = "tx_drops", },
{ .offset = 0x46, .name = "tx_pause", },
{ .offset = 0x47, .name = "tx_frames_below_65_octets", },
{ .offset = 0x48, .name = "tx_frames_65_to_127_octets", },
{ .offset = 0x49, .name = "tx_frames_128_255_octets", },
{ .offset = 0x4A, .name = "tx_frames_256_511_octets", },
{ .offset = 0x4B, .name = "tx_frames_512_1023_octets", },
{ .offset = 0x4C, .name = "tx_frames_1024_1526_octets", },
{ .offset = 0x4D, .name = "tx_frames_over_1526_octets", },
{ .offset = 0x4E, .name = "tx_yellow_prio_0", },
{ .offset = 0x4F, .name = "tx_yellow_prio_1", },
{ .offset = 0x50, .name = "tx_yellow_prio_2", },
{ .offset = 0x51, .name = "tx_yellow_prio_3", },
{ .offset = 0x52, .name = "tx_yellow_prio_4", },
{ .offset = 0x53, .name = "tx_yellow_prio_5", },
{ .offset = 0x54, .name = "tx_yellow_prio_6", },
{ .offset = 0x55, .name = "tx_yellow_prio_7", },
{ .offset = 0x56, .name = "tx_green_prio_0", },
{ .offset = 0x57, .name = "tx_green_prio_1", },
{ .offset = 0x58, .name = "tx_green_prio_2", },
{ .offset = 0x59, .name = "tx_green_prio_3", },
{ .offset = 0x5A, .name = "tx_green_prio_4", },
{ .offset = 0x5B, .name = "tx_green_prio_5", },
{ .offset = 0x5C, .name = "tx_green_prio_6", },
{ .offset = 0x5D, .name = "tx_green_prio_7", },
{ .offset = 0x5E, .name = "tx_aged", },
{ .offset = 0x80, .name = "drop_local", },
{ .offset = 0x81, .name = "drop_tail", },
{ .offset = 0x82, .name = "drop_yellow_prio_0", },
{ .offset = 0x83, .name = "drop_yellow_prio_1", },
{ .offset = 0x84, .name = "drop_yellow_prio_2", },
{ .offset = 0x85, .name = "drop_yellow_prio_3", },
{ .offset = 0x86, .name = "drop_yellow_prio_4", },
{ .offset = 0x87, .name = "drop_yellow_prio_5", },
{ .offset = 0x88, .name = "drop_yellow_prio_6", },
{ .offset = 0x89, .name = "drop_yellow_prio_7", },
{ .offset = 0x8A, .name = "drop_green_prio_0", },
{ .offset = 0x8B, .name = "drop_green_prio_1", },
{ .offset = 0x8C, .name = "drop_green_prio_2", },
{ .offset = 0x8D, .name = "drop_green_prio_3", },
{ .offset = 0x8E, .name = "drop_green_prio_4", },
{ .offset = 0x8F, .name = "drop_green_prio_5", },
{ .offset = 0x90, .name = "drop_green_prio_6", },
{ .offset = 0x91, .name = "drop_green_prio_7", },
OCELOT_STAT_END
static const struct ocelot_stat_layout vsc9953_stats_layout[OCELOT_NUM_STATS] = {
[OCELOT_STAT_RX_OCTETS] = {
.name = "rx_octets",
.reg = SYS_COUNT_RX_OCTETS,
},
[OCELOT_STAT_RX_UNICAST] = {
.name = "rx_unicast",
.reg = SYS_COUNT_RX_UNICAST,
},
[OCELOT_STAT_RX_MULTICAST] = {
.name = "rx_multicast",
.reg = SYS_COUNT_RX_MULTICAST,
},
[OCELOT_STAT_RX_BROADCAST] = {
.name = "rx_broadcast",
.reg = SYS_COUNT_RX_BROADCAST,
},
[OCELOT_STAT_RX_SHORTS] = {
.name = "rx_shorts",
.reg = SYS_COUNT_RX_SHORTS,
},
[OCELOT_STAT_RX_FRAGMENTS] = {
.name = "rx_fragments",
.reg = SYS_COUNT_RX_FRAGMENTS,
},
[OCELOT_STAT_RX_JABBERS] = {
.name = "rx_jabbers",
.reg = SYS_COUNT_RX_JABBERS,
},
[OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
.name = "rx_crc_align_errs",
.reg = SYS_COUNT_RX_CRC_ALIGN_ERRS,
},
[OCELOT_STAT_RX_SYM_ERRS] = {
.name = "rx_sym_errs",
.reg = SYS_COUNT_RX_SYM_ERRS,
},
[OCELOT_STAT_RX_64] = {
.name = "rx_frames_below_65_octets",
.reg = SYS_COUNT_RX_64,
},
[OCELOT_STAT_RX_65_127] = {
.name = "rx_frames_65_to_127_octets",
.reg = SYS_COUNT_RX_65_127,
},
[OCELOT_STAT_RX_128_255] = {
.name = "rx_frames_128_to_255_octets",
.reg = SYS_COUNT_RX_128_255,
},
[OCELOT_STAT_RX_256_511] = {
.name = "rx_frames_256_to_511_octets",
.reg = SYS_COUNT_RX_256_511,
},
[OCELOT_STAT_RX_512_1023] = {
.name = "rx_frames_512_to_1023_octets",
.reg = SYS_COUNT_RX_512_1023,
},
[OCELOT_STAT_RX_1024_1526] = {
.name = "rx_frames_1024_to_1526_octets",
.reg = SYS_COUNT_RX_1024_1526,
},
[OCELOT_STAT_RX_1527_MAX] = {
.name = "rx_frames_over_1526_octets",
.reg = SYS_COUNT_RX_1527_MAX,
},
[OCELOT_STAT_RX_PAUSE] = {
.name = "rx_pause",
.reg = SYS_COUNT_RX_PAUSE,
},
[OCELOT_STAT_RX_CONTROL] = {
.name = "rx_control",
.reg = SYS_COUNT_RX_CONTROL,
},
[OCELOT_STAT_RX_LONGS] = {
.name = "rx_longs",
.reg = SYS_COUNT_RX_LONGS,
},
[OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
.name = "rx_classified_drops",
.reg = SYS_COUNT_RX_CLASSIFIED_DROPS,
},
[OCELOT_STAT_RX_RED_PRIO_0] = {
.name = "rx_red_prio_0",
.reg = SYS_COUNT_RX_RED_PRIO_0,
},
[OCELOT_STAT_RX_RED_PRIO_1] = {
.name = "rx_red_prio_1",
.reg = SYS_COUNT_RX_RED_PRIO_1,
},
[OCELOT_STAT_RX_RED_PRIO_2] = {
.name = "rx_red_prio_2",
.reg = SYS_COUNT_RX_RED_PRIO_2,
},
[OCELOT_STAT_RX_RED_PRIO_3] = {
.name = "rx_red_prio_3",
.reg = SYS_COUNT_RX_RED_PRIO_3,
},
[OCELOT_STAT_RX_RED_PRIO_4] = {
.name = "rx_red_prio_4",
.reg = SYS_COUNT_RX_RED_PRIO_4,
},
[OCELOT_STAT_RX_RED_PRIO_5] = {
.name = "rx_red_prio_5",
.reg = SYS_COUNT_RX_RED_PRIO_5,
},
[OCELOT_STAT_RX_RED_PRIO_6] = {
.name = "rx_red_prio_6",
.reg = SYS_COUNT_RX_RED_PRIO_6,
},
[OCELOT_STAT_RX_RED_PRIO_7] = {
.name = "rx_red_prio_7",
.reg = SYS_COUNT_RX_RED_PRIO_7,
},
[OCELOT_STAT_RX_YELLOW_PRIO_0] = {
.name = "rx_yellow_prio_0",
.reg = SYS_COUNT_RX_YELLOW_PRIO_0,
},
[OCELOT_STAT_RX_YELLOW_PRIO_1] = {
.name = "rx_yellow_prio_1",
.reg = SYS_COUNT_RX_YELLOW_PRIO_1,
},
[OCELOT_STAT_RX_YELLOW_PRIO_2] = {
.name = "rx_yellow_prio_2",
.reg = SYS_COUNT_RX_YELLOW_PRIO_2,
},
[OCELOT_STAT_RX_YELLOW_PRIO_3] = {
.name = "rx_yellow_prio_3",
.reg = SYS_COUNT_RX_YELLOW_PRIO_3,
},
[OCELOT_STAT_RX_YELLOW_PRIO_4] = {
.name = "rx_yellow_prio_4",
.reg = SYS_COUNT_RX_YELLOW_PRIO_4,
},
[OCELOT_STAT_RX_YELLOW_PRIO_5] = {
.name = "rx_yellow_prio_5",
.reg = SYS_COUNT_RX_YELLOW_PRIO_5,
},
[OCELOT_STAT_RX_YELLOW_PRIO_6] = {
.name = "rx_yellow_prio_6",
.reg = SYS_COUNT_RX_YELLOW_PRIO_6,
},
[OCELOT_STAT_RX_YELLOW_PRIO_7] = {
.name = "rx_yellow_prio_7",
.reg = SYS_COUNT_RX_YELLOW_PRIO_7,
},
[OCELOT_STAT_RX_GREEN_PRIO_0] = {
.name = "rx_green_prio_0",
.reg = SYS_COUNT_RX_GREEN_PRIO_0,
},
[OCELOT_STAT_RX_GREEN_PRIO_1] = {
.name = "rx_green_prio_1",
.reg = SYS_COUNT_RX_GREEN_PRIO_1,
},
[OCELOT_STAT_RX_GREEN_PRIO_2] = {
.name = "rx_green_prio_2",
.reg = SYS_COUNT_RX_GREEN_PRIO_2,
},
[OCELOT_STAT_RX_GREEN_PRIO_3] = {
.name = "rx_green_prio_3",
.reg = SYS_COUNT_RX_GREEN_PRIO_3,
},
[OCELOT_STAT_RX_GREEN_PRIO_4] = {
.name = "rx_green_prio_4",
.reg = SYS_COUNT_RX_GREEN_PRIO_4,
},
[OCELOT_STAT_RX_GREEN_PRIO_5] = {
.name = "rx_green_prio_5",
.reg = SYS_COUNT_RX_GREEN_PRIO_5,
},
[OCELOT_STAT_RX_GREEN_PRIO_6] = {
.name = "rx_green_prio_6",
.reg = SYS_COUNT_RX_GREEN_PRIO_6,
},
[OCELOT_STAT_RX_GREEN_PRIO_7] = {
.name = "rx_green_prio_7",
.reg = SYS_COUNT_RX_GREEN_PRIO_7,
},
[OCELOT_STAT_TX_OCTETS] = {
.name = "tx_octets",
.reg = SYS_COUNT_TX_OCTETS,
},
[OCELOT_STAT_TX_UNICAST] = {
.name = "tx_unicast",
.reg = SYS_COUNT_TX_UNICAST,
},
[OCELOT_STAT_TX_MULTICAST] = {
.name = "tx_multicast",
.reg = SYS_COUNT_TX_MULTICAST,
},
[OCELOT_STAT_TX_BROADCAST] = {
.name = "tx_broadcast",
.reg = SYS_COUNT_TX_BROADCAST,
},
[OCELOT_STAT_TX_COLLISION] = {
.name = "tx_collision",
.reg = SYS_COUNT_TX_COLLISION,
},
[OCELOT_STAT_TX_DROPS] = {
.name = "tx_drops",
.reg = SYS_COUNT_TX_DROPS,
},
[OCELOT_STAT_TX_PAUSE] = {
.name = "tx_pause",
.reg = SYS_COUNT_TX_PAUSE,
},
[OCELOT_STAT_TX_64] = {
.name = "tx_frames_below_65_octets",
.reg = SYS_COUNT_TX_64,
},
[OCELOT_STAT_TX_65_127] = {
.name = "tx_frames_65_to_127_octets",
.reg = SYS_COUNT_TX_65_127,
},
[OCELOT_STAT_TX_128_255] = {
.name = "tx_frames_128_255_octets",
.reg = SYS_COUNT_TX_128_255,
},
[OCELOT_STAT_TX_256_511] = {
.name = "tx_frames_256_511_octets",
.reg = SYS_COUNT_TX_256_511,
},
[OCELOT_STAT_TX_512_1023] = {
.name = "tx_frames_512_1023_octets",
.reg = SYS_COUNT_TX_512_1023,
},
[OCELOT_STAT_TX_1024_1526] = {
.name = "tx_frames_1024_1526_octets",
.reg = SYS_COUNT_TX_1024_1526,
},
[OCELOT_STAT_TX_1527_MAX] = {
.name = "tx_frames_over_1526_octets",
.reg = SYS_COUNT_TX_1527_MAX,
},
[OCELOT_STAT_TX_YELLOW_PRIO_0] = {
.name = "tx_yellow_prio_0",
.reg = SYS_COUNT_TX_YELLOW_PRIO_0,
},
[OCELOT_STAT_TX_YELLOW_PRIO_1] = {
.name = "tx_yellow_prio_1",
.reg = SYS_COUNT_TX_YELLOW_PRIO_1,
},
[OCELOT_STAT_TX_YELLOW_PRIO_2] = {
.name = "tx_yellow_prio_2",
.reg = SYS_COUNT_TX_YELLOW_PRIO_2,
},
[OCELOT_STAT_TX_YELLOW_PRIO_3] = {
.name = "tx_yellow_prio_3",
.reg = SYS_COUNT_TX_YELLOW_PRIO_3,
},
[OCELOT_STAT_TX_YELLOW_PRIO_4] = {
.name = "tx_yellow_prio_4",
.reg = SYS_COUNT_TX_YELLOW_PRIO_4,
},
[OCELOT_STAT_TX_YELLOW_PRIO_5] = {
.name = "tx_yellow_prio_5",
.reg = SYS_COUNT_TX_YELLOW_PRIO_5,
},
[OCELOT_STAT_TX_YELLOW_PRIO_6] = {
.name = "tx_yellow_prio_6",
.reg = SYS_COUNT_TX_YELLOW_PRIO_6,
},
[OCELOT_STAT_TX_YELLOW_PRIO_7] = {
.name = "tx_yellow_prio_7",
.reg = SYS_COUNT_TX_YELLOW_PRIO_7,
},
[OCELOT_STAT_TX_GREEN_PRIO_0] = {
.name = "tx_green_prio_0",
.reg = SYS_COUNT_TX_GREEN_PRIO_0,
},
[OCELOT_STAT_TX_GREEN_PRIO_1] = {
.name = "tx_green_prio_1",
.reg = SYS_COUNT_TX_GREEN_PRIO_1,
},
[OCELOT_STAT_TX_GREEN_PRIO_2] = {
.name = "tx_green_prio_2",
.reg = SYS_COUNT_TX_GREEN_PRIO_2,
},
[OCELOT_STAT_TX_GREEN_PRIO_3] = {
.name = "tx_green_prio_3",
.reg = SYS_COUNT_TX_GREEN_PRIO_3,
},
[OCELOT_STAT_TX_GREEN_PRIO_4] = {
.name = "tx_green_prio_4",
.reg = SYS_COUNT_TX_GREEN_PRIO_4,
},
[OCELOT_STAT_TX_GREEN_PRIO_5] = {
.name = "tx_green_prio_5",
.reg = SYS_COUNT_TX_GREEN_PRIO_5,
},
[OCELOT_STAT_TX_GREEN_PRIO_6] = {
.name = "tx_green_prio_6",
.reg = SYS_COUNT_TX_GREEN_PRIO_6,
},
[OCELOT_STAT_TX_GREEN_PRIO_7] = {
.name = "tx_green_prio_7",
.reg = SYS_COUNT_TX_GREEN_PRIO_7,
},
[OCELOT_STAT_TX_AGED] = {
.name = "tx_aged",
.reg = SYS_COUNT_TX_AGING,
},
[OCELOT_STAT_DROP_LOCAL] = {
.name = "drop_local",
.reg = SYS_COUNT_DROP_LOCAL,
},
[OCELOT_STAT_DROP_TAIL] = {
.name = "drop_tail",
.reg = SYS_COUNT_DROP_TAIL,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
.name = "drop_yellow_prio_0",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_0,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
.name = "drop_yellow_prio_1",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_1,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
.name = "drop_yellow_prio_2",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_2,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
.name = "drop_yellow_prio_3",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_3,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
.name = "drop_yellow_prio_4",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_4,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
.name = "drop_yellow_prio_5",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_5,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
.name = "drop_yellow_prio_6",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_6,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
.name = "drop_yellow_prio_7",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_7,
},
[OCELOT_STAT_DROP_GREEN_PRIO_0] = {
.name = "drop_green_prio_0",
.reg = SYS_COUNT_DROP_GREEN_PRIO_0,
},
[OCELOT_STAT_DROP_GREEN_PRIO_1] = {
.name = "drop_green_prio_1",
.reg = SYS_COUNT_DROP_GREEN_PRIO_1,
},
[OCELOT_STAT_DROP_GREEN_PRIO_2] = {
.name = "drop_green_prio_2",
.reg = SYS_COUNT_DROP_GREEN_PRIO_2,
},
[OCELOT_STAT_DROP_GREEN_PRIO_3] = {
.name = "drop_green_prio_3",
.reg = SYS_COUNT_DROP_GREEN_PRIO_3,
},
[OCELOT_STAT_DROP_GREEN_PRIO_4] = {
.name = "drop_green_prio_4",
.reg = SYS_COUNT_DROP_GREEN_PRIO_4,
},
[OCELOT_STAT_DROP_GREEN_PRIO_5] = {
.name = "drop_green_prio_5",
.reg = SYS_COUNT_DROP_GREEN_PRIO_5,
},
[OCELOT_STAT_DROP_GREEN_PRIO_6] = {
.name = "drop_green_prio_6",
.reg = SYS_COUNT_DROP_GREEN_PRIO_6,
},
[OCELOT_STAT_DROP_GREEN_PRIO_7] = {
.name = "drop_green_prio_7",
.reg = SYS_COUNT_DROP_GREEN_PRIO_7,
},
};
static const struct vcap_field vsc9953_vcap_es0_keys[] = {


@ -93,7 +93,7 @@ static int sja1105_setup_devlink_regions(struct dsa_switch *ds)
region = dsa_devlink_region_create(ds, ops, 1, size);
if (IS_ERR(region)) {
while (i-- >= 0)
while (--i >= 0)
dsa_devlink_region_destroy(priv->regions[i]);
return PTR_ERR(region);
}
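The one-character change above fixes a classic unwind off-by-one: with the post-decrement form the loop also runs once after i has reached 0, calling the destroy helper on regions[-1]. A stand-alone sketch of the corrected pattern (names are illustrative):

#include <stdio.h>

static void destroy_region(int idx)
{
        printf("destroy region %d\n", idx);
}

/* i is the index whose allocation failed; regions 0..i-1 exist.
 * "--i >= 0" visits i-1 down to 0; "i-- >= 0" would run one extra
 * time with i already at -1 and touch regions[-1]. */
static void unwind(int i)
{
        while (--i >= 0)
                destroy_region(i);
}

int main(void)
{
        unwind(3);      /* destroys regions 2, 1, 0 */
        return 0;
}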


@ -13844,7 +13844,7 @@ static void bnx2x_check_kr2_wa(struct link_params *params,
/* Once KR2 was disabled, wait 5 seconds before checking KR2 recovery
* Since some switches tend to reinit the AN process and clear the
* the advertised BP/NP after ~2 seconds causing the KR2 to be disabled
* advertised BP/NP after ~2 seconds causing the KR2 to be disabled
* and recovered many times
*/
if (vars->check_kr2_recovery_cnt > 0) {

View File

@ -243,7 +243,7 @@ static int cxgb_ulp_iscsi_ctl(struct adapter *adapter, unsigned int req,
/*
* on rx, the iscsi pdu has to be < rx page size and the
* the max rx data length programmed in TP
* max rx data length programmed in TP
*/
val = min(adapter->params.tp.rx_pg_size,
((t3_read_reg(adapter, A_TP_PARA_REG2)) >>

View File

@ -135,11 +135,7 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
* NSEC_PER_SEC - ts.tv_nsec. Add the remaining nanoseconds
* to current timer would be next second.
*/
tempval = readl(fep->hwp + FEC_ATIME_CTRL);
tempval |= FEC_T_CTRL_CAPTURE;
writel(tempval, fep->hwp + FEC_ATIME_CTRL);
tempval = readl(fep->hwp + FEC_ATIME);
tempval = fep->cc.read(&fep->cc);
/* Convert the ptp local counter to 1588 timestamp */
ns = timecounter_cyc2time(&fep->tc, tempval);
ts = ns_to_timespec64(ns);
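The hunk above drops the open-coded capture sequence (set FEC_T_CTRL_CAPTURE, then read FEC_ATIME) in favour of the driver's cyclecounter read callback, so the PPS path reuses the same capture procedure as the rest of the PTP code instead of duplicating it. For context, timecounter_cyc2time() then scales such raw cycle values to nanoseconds; a stand-alone sketch of that mult/shift conversion with made-up values:

#include <stdint.h>
#include <stdio.h>

/* Minimal cycles-to-ns scaling in the spirit of timecounter_cyc2time();
 * the field values below are invented for illustration only. */
struct cc {
        uint32_t mult;  /* ns = (cycles * mult) >> shift */
        uint32_t shift;
};

static uint64_t cyc2ns(const struct cc *cc, uint64_t cycles)
{
        return (cycles * cc->mult) >> cc->shift;
}

int main(void)
{
        /* 125 MHz counter: 8 ns per tick. */
        struct cc cc = { .mult = 8u << 20, .shift = 20 };

        printf("%llu ns\n", (unsigned long long)cyc2ns(&cc, 1000)); /* 8000 */
        return 0;
}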


@ -384,7 +384,9 @@ static void i40e_tx_timeout(struct net_device *netdev, unsigned int txqueue)
set_bit(__I40E_GLOBAL_RESET_REQUESTED, pf->state);
break;
default:
netdev_err(netdev, "tx_timeout recovery unsuccessful\n");
netdev_err(netdev, "tx_timeout recovery unsuccessful, device is in non-recoverable state.\n");
set_bit(__I40E_DOWN_REQUESTED, pf->state);
set_bit(__I40E_VSI_DOWN_REQUESTED, vsi->state);
break;
}


@ -3203,11 +3203,13 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
protocol = vlan_get_protocol(skb);
if (eth_p_mpls(protocol))
if (eth_p_mpls(protocol)) {
ip.hdr = skb_inner_network_header(skb);
else
l4.hdr = skb_checksum_start(skb);
} else {
ip.hdr = skb_network_header(skb);
l4.hdr = skb_checksum_start(skb);
l4.hdr = skb_transport_header(skb);
}
/* set the tx_flags to indicate the IP protocol type. this is
* required so that checksum header computation below is accurate.


@ -324,6 +324,7 @@ static enum iavf_status iavf_config_arq_regs(struct iavf_hw *hw)
static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
{
enum iavf_status ret_code = 0;
int i;
if (hw->aq.asq.count > 0) {
/* queue already initialized */
@ -354,12 +355,17 @@ static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
/* initialize base registers */
ret_code = iavf_config_asq_regs(hw);
if (ret_code)
goto init_adminq_free_rings;
goto init_free_asq_bufs;
/* success! */
hw->aq.asq.count = hw->aq.num_asq_entries;
goto init_adminq_exit;
init_free_asq_bufs:
for (i = 0; i < hw->aq.num_asq_entries; i++)
iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
init_adminq_free_rings:
iavf_free_adminq_asq(hw);
@ -383,6 +389,7 @@ init_adminq_exit:
static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
{
enum iavf_status ret_code = 0;
int i;
if (hw->aq.arq.count > 0) {
/* queue already initialized */
@ -413,12 +420,16 @@ static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
/* initialize base registers */
ret_code = iavf_config_arq_regs(hw);
if (ret_code)
goto init_adminq_free_rings;
goto init_free_arq_bufs;
/* success! */
hw->aq.arq.count = hw->aq.num_arq_entries;
goto init_adminq_exit;
init_free_arq_bufs:
for (i = 0; i < hw->aq.num_arq_entries; i++)
iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
init_adminq_free_rings:
iavf_free_adminq_arq(hw);
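The new init_free_asq_bufs/init_free_arq_bufs labels fix the admin-queue error path: once the per-entry DMA buffers exist, a later register-configuration failure must free them as well, not just the ring. A generic stand-alone sketch of the goto-unwind pattern (the failing step and all names are made up):

#include <stdio.h>
#include <stdlib.h>

#define NUM_ENTRIES 4

static int config_regs(void) { return -1; }     /* simulated failure */

static int init_queue(void)
{
        int **bufs, i, ret;

        bufs = calloc(NUM_ENTRIES, sizeof(*bufs));      /* the "ring" */
        if (!bufs)
                return -1;

        for (i = 0; i < NUM_ENTRIES; i++) {
                bufs[i] = malloc(64);                   /* per-entry buffer */
                if (!bufs[i]) {
                        ret = -1;
                        goto free_bufs;
                }
        }

        ret = config_regs();
        if (ret)
                goto free_bufs;         /* before the fix: only the ring was freed */

        return 0;       /* in real code the ring would be kept in the device struct */

free_bufs:
        for (i = 0; i < NUM_ENTRIES; i++)
                free(bufs[i]);          /* free(NULL) is a no-op */
        free(bufs);
        return ret;
}

int main(void)
{
        printf("init_queue() = %d\n", init_queue());
        return 0;
}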


@ -2367,7 +2367,7 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter)
err = iavf_get_vf_config(adapter);
if (err == -EALREADY) {
err = iavf_send_vf_config_msg(adapter);
goto err_alloc;
goto err;
} else if (err == -EINVAL) {
/* We only get -EINVAL if the device is in a very bad
* state or if we've been disabled for previous bad
@ -3086,12 +3086,15 @@ continue_reset:
return;
reset_err:
if (running) {
set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
iavf_free_traffic_irqs(adapter);
}
iavf_disable_vf(adapter);
mutex_unlock(&adapter->client_lock);
mutex_unlock(&adapter->crit_lock);
if (running)
iavf_change_state(adapter, __IAVF_RUNNING);
dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
iavf_close(netdev);
}
/**
@ -4085,8 +4088,17 @@ static int iavf_open(struct net_device *netdev)
return -EIO;
}
while (!mutex_trylock(&adapter->crit_lock))
while (!mutex_trylock(&adapter->crit_lock)) {
/* If we are in __IAVF_INIT_CONFIG_ADAPTER state the crit_lock
* is already taken and iavf_open is called from an upper
* device's notifier reacting on NETDEV_REGISTER event.
* We have to leave here to avoid dead lock.
*/
if (adapter->state == __IAVF_INIT_CONFIG_ADAPTER)
return -EBUSY;
usleep_range(500, 1000);
}
if (adapter->state != __IAVF_DOWN) {
err = -EBUSY;
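The iavf_open() hunk above turns a blind trylock-and-retry loop into one that gives up with -EBUSY while the adapter is in __IAVF_INIT_CONFIG_ADAPTER, since in that state the lock holder is the init path whose NETDEV_REGISTER notifier re-entered open, and waiting for it would deadlock. A user-space sketch of that pattern using pthreads (states and names are illustrative):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

enum state { STATE_DOWN, STATE_INIT_CONFIG };

static pthread_mutex_t crit_lock = PTHREAD_MUTEX_INITIALIZER;
static enum state adapter_state = STATE_INIT_CONFIG;

static int dev_open(void)
{
        while (pthread_mutex_trylock(&crit_lock) != 0) {
                /* The holder is the init path that re-entered us;
                 * spinning here would never make progress. */
                if (adapter_state == STATE_INIT_CONFIG)
                        return -EBUSY;
                usleep(500);
        }
        /* ... bring the interface up ... */
        pthread_mutex_unlock(&crit_lock);
        return 0;
}

int main(void)
{
        pthread_mutex_lock(&crit_lock); /* simulate the init path holding it */
        printf("open() = %d (-EBUSY is %d)\n", dev_open(), -EBUSY);
        pthread_mutex_unlock(&crit_lock);
        return 0;
}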


@ -62,7 +62,7 @@ ice_fltr_set_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi,
int result;
result = ice_set_vlan_vsi_promisc(hw, vsi->idx, promisc_mask, false);
if (result)
if (result && result != -EEXIST)
dev_err(ice_pf_to_dev(pf),
"Error setting promisc mode on VSI %i (rc=%d)\n",
vsi->vsi_num, result);
@ -86,7 +86,7 @@ ice_fltr_clear_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi,
int result;
result = ice_set_vlan_vsi_promisc(hw, vsi->idx, promisc_mask, true);
if (result)
if (result && result != -EEXIST)
dev_err(ice_pf_to_dev(pf),
"Error clearing promisc mode on VSI %i (rc=%d)\n",
vsi->vsi_num, result);
@ -109,7 +109,7 @@ ice_fltr_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
int result;
result = ice_clear_vsi_promisc(hw, vsi_handle, promisc_mask, vid);
if (result)
if (result && result != -EEXIST)
dev_err(ice_pf_to_dev(pf),
"Error clearing promisc mode on VSI %i for VID %u (rc=%d)\n",
ice_get_hw_vsi_num(hw, vsi_handle), vid, result);
@ -132,7 +132,7 @@ ice_fltr_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
int result;
result = ice_set_vsi_promisc(hw, vsi_handle, promisc_mask, vid);
if (result)
if (result && result != -EEXIST)
dev_err(ice_pf_to_dev(pf),
"Error setting promisc mode on VSI %i for VID %u (rc=%d)\n",
ice_get_hw_vsi_num(hw, vsi_handle), vid, result);
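The four hunks above make the promiscuous-mode helpers treat -EEXIST as success: requesting a state the VSI is already in is not worth an error message. A small stand-alone sketch of that idempotent-call convention; the helper below is made up:

#include <errno.h>
#include <stdio.h>

/* Pretend firmware call: succeeds the first time, then reports that
 * the rule already exists. */
static int set_promisc(int *enabled)
{
        if (*enabled)
                return -EEXIST;
        *enabled = 1;
        return 0;
}

int main(void)
{
        int enabled = 0;

        for (int i = 0; i < 2; i++) {
                int err = set_promisc(&enabled);

                if (err && err != -EEXIST)      /* only real failures matter */
                        fprintf(stderr, "error %d\n", err);
                else
                        printf("call %d: ok\n", i);
        }
        return 0;
}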


@ -3181,7 +3181,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
pf = vsi->back;
vtype = vsi->type;
if (WARN_ON(vtype == ICE_VSI_VF) && !vsi->vf)
if (WARN_ON(vtype == ICE_VSI_VF && !vsi->vf))
return -EINVAL;
ice_vsi_init_vlan_ops(vsi);
@ -4062,7 +4062,11 @@ int ice_vsi_del_vlan_zero(struct ice_vsi *vsi)
if (err && err != -EEXIST)
return err;
return 0;
/* when deleting the last VLAN filter, make sure to disable the VLAN
* promisc mode so the filter isn't left by accident
*/
return ice_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
ICE_MCAST_VLAN_PROMISC_BITS, 0);
}
/**


@ -267,8 +267,10 @@ static int ice_set_promisc(struct ice_vsi *vsi, u8 promisc_m)
status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx,
promisc_m, 0);
}
if (status && status != -EEXIST)
return status;
return status;
return 0;
}
/**
@ -3573,6 +3575,14 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid)
while (test_and_set_bit(ICE_CFG_BUSY, vsi->state))
usleep_range(1000, 2000);
ret = ice_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
ICE_MCAST_VLAN_PROMISC_BITS, vid);
if (ret) {
netdev_err(netdev, "Error clearing multicast promiscuous mode on VSI %i\n",
vsi->vsi_num);
vsi->current_netdev_flags |= IFF_ALLMULTI;
}
vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
/* Make sure VLAN delete is successful before updating VLAN


@ -4445,6 +4445,13 @@ ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
goto free_fltr_list;
list_for_each_entry(list_itr, &vsi_list_head, list_entry) {
/* Avoid enabling or disabling VLAN zero twice when in double
* VLAN mode
*/
if (ice_is_dvm_ena(hw) &&
list_itr->fltr_info.l_data.vlan.tpid == 0)
continue;
vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
if (rm_vlan_promisc)
status = ice_clear_vsi_promisc(hw, vsi_handle,
@ -4452,7 +4459,7 @@ ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
else
status = ice_set_vsi_promisc(hw, vsi_handle,
promisc_mask, vlan_id);
if (status)
if (status && status != -EEXIST)
break;
}


@ -571,8 +571,10 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
if (ice_is_vf_disabled(vf)) {
vsi = ice_get_vf_vsi(vf);
if (WARN_ON(!vsi))
if (!vsi) {
dev_dbg(dev, "VF is already removed\n");
return -EINVAL;
}
ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);
ice_vsi_stop_all_rx_rings(vsi);
dev_dbg(dev, "VF is already disabled, there is no need for resetting it, telling VM, all is fine %d\n",
@ -762,13 +764,16 @@ static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable)
static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi)
{
struct ice_vsi_vlan_ops *vlan_ops;
int err;
int err = 0;
vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
err = vlan_ops->ena_tx_filtering(vsi);
if (err)
return err;
/* Allow VF with VLAN 0 only to send all tagged traffic */
if (vsi->type != ICE_VSI_VF || ice_vsi_has_non_zero_vlans(vsi)) {
err = vlan_ops->ena_tx_filtering(vsi);
if (err)
return err;
}
return ice_cfg_mac_antispoof(vsi, true);
}


@ -2288,6 +2288,15 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
/* Enable VLAN filtering on first non-zero VLAN */
if (!vlan_promisc && vid && !ice_is_dvm_ena(&pf->hw)) {
if (vf->spoofchk) {
status = vsi->inner_vlan_ops.ena_tx_filtering(vsi);
if (status) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
dev_err(dev, "Enable VLAN anti-spoofing on VLAN ID: %d failed error-%d\n",
vid, status);
goto error_param;
}
}
if (vsi->inner_vlan_ops.ena_rx_filtering(vsi)) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n",
@ -2333,8 +2342,10 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
}
/* Disable VLAN filtering when only VLAN 0 is left */
if (!ice_vsi_has_non_zero_vlans(vsi))
if (!ice_vsi_has_non_zero_vlans(vsi)) {
vsi->inner_vlan_ops.dis_tx_filtering(vsi);
vsi->inner_vlan_ops.dis_rx_filtering(vsi);
}
if (vlan_promisc)
ice_vf_dis_vlan_promisc(vsi, &vlan);
@ -2838,6 +2849,13 @@ ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
if (vlan_promisc)
ice_vf_dis_vlan_promisc(vsi, &vlan);
/* Disable VLAN filtering when only VLAN 0 is left */
if (!ice_vsi_has_non_zero_vlans(vsi) && ice_is_dvm_ena(&vsi->back->hw)) {
err = vsi->outer_vlan_ops.dis_tx_filtering(vsi);
if (err)
return err;
}
}
vc_vlan = &vlan_fltr->inner;
@ -2853,8 +2871,17 @@ ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
/* no support for VLAN promiscuous on inner VLAN unless
* we are in Single VLAN Mode (SVM)
*/
if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc)
ice_vf_dis_vlan_promisc(vsi, &vlan);
if (!ice_is_dvm_ena(&vsi->back->hw)) {
if (vlan_promisc)
ice_vf_dis_vlan_promisc(vsi, &vlan);
/* Disable VLAN filtering when only VLAN 0 is left */
if (!ice_vsi_has_non_zero_vlans(vsi)) {
err = vsi->inner_vlan_ops.dis_tx_filtering(vsi);
if (err)
return err;
}
}
}
}
@ -2931,6 +2958,13 @@ ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
if (err)
return err;
}
/* Enable VLAN filtering on first non-zero VLAN */
if (vf->spoofchk && vlan.vid && ice_is_dvm_ena(&vsi->back->hw)) {
err = vsi->outer_vlan_ops.ena_tx_filtering(vsi);
if (err)
return err;
}
}
vc_vlan = &vlan_fltr->inner;
@ -2946,10 +2980,19 @@ ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
/* no support for VLAN promiscuous on inner VLAN unless
* we are in Single VLAN Mode (SVM)
*/
if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) {
err = ice_vf_ena_vlan_promisc(vsi, &vlan);
if (err)
return err;
if (!ice_is_dvm_ena(&vsi->back->hw)) {
if (vlan_promisc) {
err = ice_vf_ena_vlan_promisc(vsi, &vlan);
if (err)
return err;
}
/* Enable VLAN filtering on first non-zero VLAN */
if (vf->spoofchk && vlan.vid) {
err = vsi->inner_vlan_ops.ena_tx_filtering(vsi);
if (err)
return err;
}
}
}
}


@ -664,6 +664,8 @@ struct igb_adapter {
struct igb_mac_addr *mac_table;
struct vf_mac_filter vf_macs;
struct vf_mac_filter *vf_mac_list;
/* lock for VF resources */
spinlock_t vfs_lock;
};
/* flags controlling PTP/1588 function */


@ -3637,6 +3637,7 @@ static int igb_disable_sriov(struct pci_dev *pdev)
struct net_device *netdev = pci_get_drvdata(pdev);
struct igb_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
unsigned long flags;
/* reclaim resources allocated to VFs */
if (adapter->vf_data) {
@ -3649,12 +3650,13 @@ static int igb_disable_sriov(struct pci_dev *pdev)
pci_disable_sriov(pdev);
msleep(500);
}
spin_lock_irqsave(&adapter->vfs_lock, flags);
kfree(adapter->vf_mac_list);
adapter->vf_mac_list = NULL;
kfree(adapter->vf_data);
adapter->vf_data = NULL;
adapter->vfs_allocated_count = 0;
spin_unlock_irqrestore(&adapter->vfs_lock, flags);
wr32(E1000_IOVCTL, E1000_IOVCTL_REUSE_VFQ);
wrfl();
msleep(100);
@ -3814,7 +3816,9 @@ static void igb_remove(struct pci_dev *pdev)
igb_release_hw_control(adapter);
#ifdef CONFIG_PCI_IOV
rtnl_lock();
igb_disable_sriov(pdev);
rtnl_unlock();
#endif
unregister_netdev(netdev);
@ -3974,6 +3978,9 @@ static int igb_sw_init(struct igb_adapter *adapter)
spin_lock_init(&adapter->nfc_lock);
spin_lock_init(&adapter->stats64_lock);
/* init spinlock to avoid concurrency of VF resources */
spin_lock_init(&adapter->vfs_lock);
#ifdef CONFIG_PCI_IOV
switch (hw->mac.type) {
case e1000_82576:
@ -7958,8 +7965,10 @@ unlock:
static void igb_msg_task(struct igb_adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
unsigned long flags;
u32 vf;
spin_lock_irqsave(&adapter->vfs_lock, flags);
for (vf = 0; vf < adapter->vfs_allocated_count; vf++) {
/* process any reset requests */
if (!igb_check_for_rst(hw, vf))
@ -7973,6 +7982,7 @@ static void igb_msg_task(struct igb_adapter *adapter)
if (!igb_check_for_ack(hw, vf))
igb_rcv_ack_from_vf(adapter, vf);
}
spin_unlock_irqrestore(&adapter->vfs_lock, flags);
}
/**


@ -1732,7 +1732,7 @@ static u32 mtk_xdp_run(struct mtk_eth *eth, struct mtk_rx_ring *ring,
case XDP_TX: {
struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
if (mtk_xdp_submit_frame(eth, xdpf, dev, false)) {
if (!xdpf || mtk_xdp_submit_frame(eth, xdpf, dev, false)) {
count = &hw_stats->xdp_stats.rx_xdp_tx_errors;
act = XDP_DROP;
break;


@ -696,6 +696,13 @@ static int mlx5e_init_rep(struct mlx5_core_dev *mdev,
{
struct mlx5e_priv *priv = netdev_priv(netdev);
priv->fs = mlx5e_fs_init(priv->profile, mdev,
!test_bit(MLX5E_STATE_DESTROYING, &priv->state));
if (!priv->fs) {
netdev_err(priv->netdev, "FS allocation failed\n");
return -ENOMEM;
}
mlx5e_build_rep_params(netdev);
mlx5e_timestamp_init(priv);
@ -708,12 +715,21 @@ static int mlx5e_init_ul_rep(struct mlx5_core_dev *mdev,
struct mlx5e_priv *priv = netdev_priv(netdev);
int err;
priv->fs = mlx5e_fs_init(priv->profile, mdev,
!test_bit(MLX5E_STATE_DESTROYING, &priv->state));
if (!priv->fs) {
netdev_err(priv->netdev, "FS allocation failed\n");
return -ENOMEM;
}
err = mlx5e_ipsec_init(priv);
if (err)
mlx5_core_err(mdev, "Uplink rep IPsec initialization failed, %d\n", err);
mlx5e_vxlan_set_netdev_info(priv);
return mlx5e_init_rep(mdev, netdev);
mlx5e_build_rep_params(netdev);
mlx5e_timestamp_init(priv);
return 0;
}
static void mlx5e_cleanup_rep(struct mlx5e_priv *priv)
@ -836,13 +852,6 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
struct mlx5_core_dev *mdev = priv->mdev;
int err;
priv->fs = mlx5e_fs_init(priv->profile, mdev,
!test_bit(MLX5E_STATE_DESTROYING, &priv->state));
if (!priv->fs) {
netdev_err(priv->netdev, "FS allocation failed\n");
return -ENOMEM;
}
priv->rx_res = mlx5e_rx_res_alloc();
if (!priv->rx_res) {
err = -ENOMEM;


@ -1897,9 +1897,9 @@ static void mlxsw_sp_port_remove(struct mlxsw_sp *mlxsw_sp, u16 local_port)
cancel_delayed_work_sync(&mlxsw_sp_port->periodic_hw_stats.update_dw);
cancel_delayed_work_sync(&mlxsw_sp_port->ptp.shaper_dw);
mlxsw_sp_port_ptp_clear(mlxsw_sp_port);
mlxsw_core_port_clear(mlxsw_sp->core, local_port, mlxsw_sp);
unregister_netdev(mlxsw_sp_port->dev); /* This calls ndo_stop */
mlxsw_sp_port_ptp_clear(mlxsw_sp_port);
mlxsw_sp_port_vlan_classification_set(mlxsw_sp_port, true, true);
mlxsw_sp->ports[local_port] = NULL;
mlxsw_sp_port_vlan_flush(mlxsw_sp_port, true);


@ -46,6 +46,7 @@ struct mlxsw_sp2_ptp_state {
* enabled.
*/
struct hwtstamp_config config;
struct mutex lock; /* Protects 'config' and HW configuration. */
};
struct mlxsw_sp1_ptp_key {
@ -1374,6 +1375,7 @@ struct mlxsw_sp_ptp_state *mlxsw_sp2_ptp_init(struct mlxsw_sp *mlxsw_sp)
goto err_ptp_traps_set;
refcount_set(&ptp_state->ptp_port_enabled_ref, 0);
mutex_init(&ptp_state->lock);
return &ptp_state->common;
err_ptp_traps_set:
@ -1388,6 +1390,7 @@ void mlxsw_sp2_ptp_fini(struct mlxsw_sp_ptp_state *ptp_state_common)
ptp_state = mlxsw_sp2_ptp_state(mlxsw_sp);
mutex_destroy(&ptp_state->lock);
mlxsw_sp_ptp_traps_unset(mlxsw_sp);
kfree(ptp_state);
}
@ -1461,7 +1464,10 @@ int mlxsw_sp2_ptp_hwtstamp_get(struct mlxsw_sp_port *mlxsw_sp_port,
ptp_state = mlxsw_sp2_ptp_state(mlxsw_sp_port->mlxsw_sp);
mutex_lock(&ptp_state->lock);
*config = ptp_state->config;
mutex_unlock(&ptp_state->lock);
return 0;
}
@ -1523,6 +1529,9 @@ mlxsw_sp2_ptp_get_message_types(const struct hwtstamp_config *config,
return -EINVAL;
}
if ((ing_types && !egr_types) || (!ing_types && egr_types))
return -EINVAL;
*p_ing_types = ing_types;
*p_egr_types = egr_types;
return 0;
@ -1574,8 +1583,6 @@ static int mlxsw_sp2_ptp_configure_port(struct mlxsw_sp_port *mlxsw_sp_port,
struct mlxsw_sp2_ptp_state *ptp_state;
int err;
ASSERT_RTNL();
ptp_state = mlxsw_sp2_ptp_state(mlxsw_sp_port->mlxsw_sp);
if (refcount_inc_not_zero(&ptp_state->ptp_port_enabled_ref))
@ -1597,8 +1604,6 @@ static int mlxsw_sp2_ptp_deconfigure_port(struct mlxsw_sp_port *mlxsw_sp_port,
struct mlxsw_sp2_ptp_state *ptp_state;
int err;
ASSERT_RTNL();
ptp_state = mlxsw_sp2_ptp_state(mlxsw_sp_port->mlxsw_sp);
if (!refcount_dec_and_test(&ptp_state->ptp_port_enabled_ref))
@ -1618,16 +1623,20 @@ err_ptp_disable:
int mlxsw_sp2_ptp_hwtstamp_set(struct mlxsw_sp_port *mlxsw_sp_port,
struct hwtstamp_config *config)
{
struct mlxsw_sp2_ptp_state *ptp_state;
enum hwtstamp_rx_filters rx_filter;
struct hwtstamp_config new_config;
u16 new_ing_types, new_egr_types;
bool ptp_enabled;
int err;
ptp_state = mlxsw_sp2_ptp_state(mlxsw_sp_port->mlxsw_sp);
mutex_lock(&ptp_state->lock);
err = mlxsw_sp2_ptp_get_message_types(config, &new_ing_types,
&new_egr_types, &rx_filter);
if (err)
return err;
goto err_get_message_types;
new_config.flags = config->flags;
new_config.tx_type = config->tx_type;
@ -1640,11 +1649,11 @@ int mlxsw_sp2_ptp_hwtstamp_set(struct mlxsw_sp_port *mlxsw_sp_port,
err = mlxsw_sp2_ptp_configure_port(mlxsw_sp_port, new_ing_types,
new_egr_types, new_config);
if (err)
return err;
goto err_configure_port;
} else if (!new_ing_types && !new_egr_types && ptp_enabled) {
err = mlxsw_sp2_ptp_deconfigure_port(mlxsw_sp_port, new_config);
if (err)
return err;
goto err_deconfigure_port;
}
mlxsw_sp_port->ptp.ing_types = new_ing_types;
@ -1652,8 +1661,15 @@ int mlxsw_sp2_ptp_hwtstamp_set(struct mlxsw_sp_port *mlxsw_sp_port,
/* Notify the ioctl caller what we are actually timestamping. */
config->rx_filter = rx_filter;
mutex_unlock(&ptp_state->lock);
return 0;
err_deconfigure_port:
err_configure_port:
err_get_message_types:
mutex_unlock(&ptp_state->lock);
return err;
}
int mlxsw_sp2_ptp_get_ts_info(struct mlxsw_sp *mlxsw_sp,


@ -171,10 +171,11 @@ static inline void mlxsw_sp1_get_stats(struct mlxsw_sp_port *mlxsw_sp_port,
{
}
int mlxsw_sp_ptp_txhdr_construct(struct mlxsw_core *mlxsw_core,
struct mlxsw_sp_port *mlxsw_sp_port,
struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info)
static inline int
mlxsw_sp_ptp_txhdr_construct(struct mlxsw_core *mlxsw_core,
struct mlxsw_sp_port *mlxsw_sp_port,
struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info)
{
return -EOPNOTSUPP;
}
@ -231,10 +232,11 @@ static inline int mlxsw_sp2_ptp_get_ts_info(struct mlxsw_sp *mlxsw_sp,
return mlxsw_sp_ptp_get_ts_info_noptp(info);
}
int mlxsw_sp2_ptp_txhdr_construct(struct mlxsw_core *mlxsw_core,
struct mlxsw_sp_port *mlxsw_sp_port,
struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info)
static inline int
mlxsw_sp2_ptp_txhdr_construct(struct mlxsw_core *mlxsw_core,
struct mlxsw_sp_port *mlxsw_sp_port,
struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info)
{
return -EOPNOTSUPP;
}


@ -710,7 +710,7 @@ static void lan966x_cleanup_ports(struct lan966x *lan966x)
disable_irq(lan966x->xtr_irq);
lan966x->xtr_irq = -ENXIO;
if (lan966x->ana_irq) {
if (lan966x->ana_irq > 0) {
disable_irq(lan966x->ana_irq);
lan966x->ana_irq = -ENXIO;
}
@ -718,10 +718,10 @@ static void lan966x_cleanup_ports(struct lan966x *lan966x)
if (lan966x->fdma)
devm_free_irq(lan966x->dev, lan966x->fdma_irq, lan966x);
if (lan966x->ptp_irq)
if (lan966x->ptp_irq > 0)
devm_free_irq(lan966x->dev, lan966x->ptp_irq, lan966x);
if (lan966x->ptp_ext_irq)
if (lan966x->ptp_ext_irq > 0)
devm_free_irq(lan966x->dev, lan966x->ptp_ext_irq, lan966x);
}
@ -1049,7 +1049,7 @@ static int lan966x_probe(struct platform_device *pdev)
}
lan966x->ana_irq = platform_get_irq_byname(pdev, "ana");
if (lan966x->ana_irq) {
if (lan966x->ana_irq > 0) {
err = devm_request_threaded_irq(&pdev->dev, lan966x->ana_irq, NULL,
lan966x_ana_irq_handler, IRQF_ONESHOT,
"ana irq", lan966x);


@ -62,9 +62,6 @@ static int moxart_set_mac_address(struct net_device *ndev, void *addr)
{
struct sockaddr *address = addr;
if (!is_valid_ether_addr(address->sa_data))
return -EADDRNOTAVAIL;
eth_hw_addr_set(ndev, address->sa_data);
moxart_update_mac_address(ndev);
@ -77,7 +74,7 @@ static void moxart_mac_free_memory(struct net_device *ndev)
int i;
for (i = 0; i < RX_DESC_NUM; i++)
dma_unmap_single(&ndev->dev, priv->rx_mapping[i],
dma_unmap_single(&priv->pdev->dev, priv->rx_mapping[i],
priv->rx_buf_size, DMA_FROM_DEVICE);
if (priv->tx_desc_base)
@ -147,11 +144,11 @@ static void moxart_mac_setup_desc_ring(struct net_device *ndev)
desc + RX_REG_OFFSET_DESC1);
priv->rx_buf[i] = priv->rx_buf_base + priv->rx_buf_size * i;
priv->rx_mapping[i] = dma_map_single(&ndev->dev,
priv->rx_mapping[i] = dma_map_single(&priv->pdev->dev,
priv->rx_buf[i],
priv->rx_buf_size,
DMA_FROM_DEVICE);
if (dma_mapping_error(&ndev->dev, priv->rx_mapping[i]))
if (dma_mapping_error(&priv->pdev->dev, priv->rx_mapping[i]))
netdev_err(ndev, "DMA mapping error\n");
moxart_desc_write(priv->rx_mapping[i],
@ -172,9 +169,6 @@ static int moxart_mac_open(struct net_device *ndev)
{
struct moxart_mac_priv_t *priv = netdev_priv(ndev);
if (!is_valid_ether_addr(ndev->dev_addr))
return -EADDRNOTAVAIL;
napi_enable(&priv->napi);
moxart_mac_reset(ndev);
@ -240,7 +234,7 @@ static int moxart_rx_poll(struct napi_struct *napi, int budget)
if (len > RX_BUF_SIZE)
len = RX_BUF_SIZE;
dma_sync_single_for_cpu(&ndev->dev,
dma_sync_single_for_cpu(&priv->pdev->dev,
priv->rx_mapping[rx_head],
priv->rx_buf_size, DMA_FROM_DEVICE);
skb = netdev_alloc_skb_ip_align(ndev, len);
@ -294,7 +288,7 @@ static void moxart_tx_finished(struct net_device *ndev)
unsigned int tx_tail = priv->tx_tail;
while (tx_tail != tx_head) {
dma_unmap_single(&ndev->dev, priv->tx_mapping[tx_tail],
dma_unmap_single(&priv->pdev->dev, priv->tx_mapping[tx_tail],
priv->tx_len[tx_tail], DMA_TO_DEVICE);
ndev->stats.tx_packets++;
@ -358,9 +352,9 @@ static netdev_tx_t moxart_mac_start_xmit(struct sk_buff *skb,
len = skb->len > TX_BUF_SIZE ? TX_BUF_SIZE : skb->len;
priv->tx_mapping[tx_head] = dma_map_single(&ndev->dev, skb->data,
priv->tx_mapping[tx_head] = dma_map_single(&priv->pdev->dev, skb->data,
len, DMA_TO_DEVICE);
if (dma_mapping_error(&ndev->dev, priv->tx_mapping[tx_head])) {
if (dma_mapping_error(&priv->pdev->dev, priv->tx_mapping[tx_head])) {
netdev_err(ndev, "DMA mapping error\n");
goto out_unlock;
}
@ -379,7 +373,7 @@ static netdev_tx_t moxart_mac_start_xmit(struct sk_buff *skb,
len = ETH_ZLEN;
}
dma_sync_single_for_device(&ndev->dev, priv->tx_mapping[tx_head],
dma_sync_single_for_device(&priv->pdev->dev, priv->tx_mapping[tx_head],
priv->tx_buf_size, DMA_TO_DEVICE);
txdes1 = TX_DESC1_LTS | TX_DESC1_FTS | (len & TX_DESC1_BUF_SIZE_MASK);
@ -488,12 +482,19 @@ static int moxart_mac_probe(struct platform_device *pdev)
}
ndev->base_addr = res->start;
ret = platform_get_ethdev_address(p_dev, ndev);
if (ret == -EPROBE_DEFER)
goto init_fail;
if (ret)
eth_hw_addr_random(ndev);
moxart_update_mac_address(ndev);
spin_lock_init(&priv->txlock);
priv->tx_buf_size = TX_BUF_SIZE;
priv->rx_buf_size = RX_BUF_SIZE;
priv->tx_desc_base = dma_alloc_coherent(&pdev->dev, TX_REG_DESC_SIZE *
priv->tx_desc_base = dma_alloc_coherent(p_dev, TX_REG_DESC_SIZE *
TX_DESC_NUM, &priv->tx_base,
GFP_DMA | GFP_KERNEL);
if (!priv->tx_desc_base) {
@ -501,7 +502,7 @@ static int moxart_mac_probe(struct platform_device *pdev)
goto init_fail;
}
priv->rx_desc_base = dma_alloc_coherent(&pdev->dev, RX_REG_DESC_SIZE *
priv->rx_desc_base = dma_alloc_coherent(p_dev, RX_REG_DESC_SIZE *
RX_DESC_NUM, &priv->rx_base,
GFP_DMA | GFP_KERNEL);
if (!priv->rx_desc_base) {


@ -1860,16 +1860,20 @@ void ocelot_get_strings(struct ocelot *ocelot, int port, u32 sset, u8 *data)
if (sset != ETH_SS_STATS)
return;
for (i = 0; i < ocelot->num_stats; i++)
for (i = 0; i < OCELOT_NUM_STATS; i++) {
if (ocelot->stats_layout[i].name[0] == '\0')
continue;
memcpy(data + i * ETH_GSTRING_LEN, ocelot->stats_layout[i].name,
ETH_GSTRING_LEN);
}
}
EXPORT_SYMBOL(ocelot_get_strings);
/* Caller must hold &ocelot->stats_lock */
static int ocelot_port_update_stats(struct ocelot *ocelot, int port)
{
unsigned int idx = port * ocelot->num_stats;
unsigned int idx = port * OCELOT_NUM_STATS;
struct ocelot_stats_region *region;
int err, j;
@ -1877,9 +1881,8 @@ static int ocelot_port_update_stats(struct ocelot *ocelot, int port)
ocelot_write(ocelot, SYS_STAT_CFG_STAT_VIEW(port), SYS_STAT_CFG);
list_for_each_entry(region, &ocelot->stats_regions, node) {
err = ocelot_bulk_read_rix(ocelot, SYS_COUNT_RX_OCTETS,
region->offset, region->buf,
region->count);
err = ocelot_bulk_read(ocelot, region->base, region->buf,
region->count);
if (err)
return err;
@ -1906,13 +1909,13 @@ static void ocelot_check_stats_work(struct work_struct *work)
stats_work);
int i, err;
mutex_lock(&ocelot->stats_lock);
spin_lock(&ocelot->stats_lock);
for (i = 0; i < ocelot->num_phys_ports; i++) {
err = ocelot_port_update_stats(ocelot, i);
if (err)
break;
}
mutex_unlock(&ocelot->stats_lock);
spin_unlock(&ocelot->stats_lock);
if (err)
dev_err(ocelot->dev, "Error %d updating ethtool stats\n", err);
@ -1925,16 +1928,22 @@ void ocelot_get_ethtool_stats(struct ocelot *ocelot, int port, u64 *data)
{
int i, err;
mutex_lock(&ocelot->stats_lock);
spin_lock(&ocelot->stats_lock);
/* check and update now */
err = ocelot_port_update_stats(ocelot, port);
/* Copy all counters */
for (i = 0; i < ocelot->num_stats; i++)
*data++ = ocelot->stats[port * ocelot->num_stats + i];
/* Copy all supported counters */
for (i = 0; i < OCELOT_NUM_STATS; i++) {
int index = port * OCELOT_NUM_STATS + i;
mutex_unlock(&ocelot->stats_lock);
if (ocelot->stats_layout[i].name[0] == '\0')
continue;
*data++ = ocelot->stats[index];
}
spin_unlock(&ocelot->stats_lock);
if (err)
dev_err(ocelot->dev, "Error %d updating ethtool stats\n", err);
@ -1943,10 +1952,16 @@ EXPORT_SYMBOL(ocelot_get_ethtool_stats);
int ocelot_get_sset_count(struct ocelot *ocelot, int port, int sset)
{
int i, num_stats = 0;
if (sset != ETH_SS_STATS)
return -EOPNOTSUPP;
return ocelot->num_stats;
for (i = 0; i < OCELOT_NUM_STATS; i++)
if (ocelot->stats_layout[i].name[0] != '\0')
num_stats++;
return num_stats;
}
EXPORT_SYMBOL(ocelot_get_sset_count);
@ -1958,8 +1973,11 @@ static int ocelot_prepare_stats_regions(struct ocelot *ocelot)
INIT_LIST_HEAD(&ocelot->stats_regions);
for (i = 0; i < ocelot->num_stats; i++) {
if (region && ocelot->stats_layout[i].offset == last + 1) {
for (i = 0; i < OCELOT_NUM_STATS; i++) {
if (ocelot->stats_layout[i].name[0] == '\0')
continue;
if (region && ocelot->stats_layout[i].reg == last + 4) {
region->count++;
} else {
region = devm_kzalloc(ocelot->dev, sizeof(*region),
@ -1967,12 +1985,12 @@ static int ocelot_prepare_stats_regions(struct ocelot *ocelot)
if (!region)
return -ENOMEM;
region->offset = ocelot->stats_layout[i].offset;
region->base = ocelot->stats_layout[i].reg;
region->count = 1;
list_add_tail(&region->node, &ocelot->stats_regions);
}
last = ocelot->stats_layout[i].offset;
last = ocelot->stats_layout[i].reg;
}
list_for_each_entry(region, &ocelot->stats_regions, node) {
@ -3340,7 +3358,6 @@ static void ocelot_detect_features(struct ocelot *ocelot)
int ocelot_init(struct ocelot *ocelot)
{
const struct ocelot_stat_layout *stat;
char queue_name[32];
int i, ret;
u32 port;
@ -3353,17 +3370,13 @@ int ocelot_init(struct ocelot *ocelot)
}
}
ocelot->num_stats = 0;
for_each_stat(ocelot, stat)
ocelot->num_stats++;
ocelot->stats = devm_kcalloc(ocelot->dev,
ocelot->num_phys_ports * ocelot->num_stats,
ocelot->num_phys_ports * OCELOT_NUM_STATS,
sizeof(u64), GFP_KERNEL);
if (!ocelot->stats)
return -ENOMEM;
mutex_init(&ocelot->stats_lock);
spin_lock_init(&ocelot->stats_lock);
mutex_init(&ocelot->ptp_lock);
mutex_init(&ocelot->mact_lock);
mutex_init(&ocelot->fwd_domain_lock);
@ -3511,7 +3524,6 @@ void ocelot_deinit(struct ocelot *ocelot)
cancel_delayed_work(&ocelot->stats_work);
destroy_workqueue(ocelot->stats_queue);
destroy_workqueue(ocelot->owq);
mutex_destroy(&ocelot->stats_lock);
}
EXPORT_SYMBOL(ocelot_deinit);


@ -725,37 +725,42 @@ static void ocelot_get_stats64(struct net_device *dev,
struct ocelot_port_private *priv = netdev_priv(dev);
struct ocelot *ocelot = priv->port.ocelot;
int port = priv->port.index;
u64 *s;
/* Configure the port to read the stats from */
ocelot_write(ocelot, SYS_STAT_CFG_STAT_VIEW(port),
SYS_STAT_CFG);
spin_lock(&ocelot->stats_lock);
s = &ocelot->stats[port * OCELOT_NUM_STATS];
/* Get Rx stats */
stats->rx_bytes = ocelot_read(ocelot, SYS_COUNT_RX_OCTETS);
stats->rx_packets = ocelot_read(ocelot, SYS_COUNT_RX_SHORTS) +
ocelot_read(ocelot, SYS_COUNT_RX_FRAGMENTS) +
ocelot_read(ocelot, SYS_COUNT_RX_JABBERS) +
ocelot_read(ocelot, SYS_COUNT_RX_LONGS) +
ocelot_read(ocelot, SYS_COUNT_RX_64) +
ocelot_read(ocelot, SYS_COUNT_RX_65_127) +
ocelot_read(ocelot, SYS_COUNT_RX_128_255) +
ocelot_read(ocelot, SYS_COUNT_RX_256_1023) +
ocelot_read(ocelot, SYS_COUNT_RX_1024_1526) +
ocelot_read(ocelot, SYS_COUNT_RX_1527_MAX);
stats->multicast = ocelot_read(ocelot, SYS_COUNT_RX_MULTICAST);
stats->rx_bytes = s[OCELOT_STAT_RX_OCTETS];
stats->rx_packets = s[OCELOT_STAT_RX_SHORTS] +
s[OCELOT_STAT_RX_FRAGMENTS] +
s[OCELOT_STAT_RX_JABBERS] +
s[OCELOT_STAT_RX_LONGS] +
s[OCELOT_STAT_RX_64] +
s[OCELOT_STAT_RX_65_127] +
s[OCELOT_STAT_RX_128_255] +
s[OCELOT_STAT_RX_256_511] +
s[OCELOT_STAT_RX_512_1023] +
s[OCELOT_STAT_RX_1024_1526] +
s[OCELOT_STAT_RX_1527_MAX];
stats->multicast = s[OCELOT_STAT_RX_MULTICAST];
stats->rx_dropped = dev->stats.rx_dropped;
/* Get Tx stats */
stats->tx_bytes = ocelot_read(ocelot, SYS_COUNT_TX_OCTETS);
stats->tx_packets = ocelot_read(ocelot, SYS_COUNT_TX_64) +
ocelot_read(ocelot, SYS_COUNT_TX_65_127) +
ocelot_read(ocelot, SYS_COUNT_TX_128_511) +
ocelot_read(ocelot, SYS_COUNT_TX_512_1023) +
ocelot_read(ocelot, SYS_COUNT_TX_1024_1526) +
ocelot_read(ocelot, SYS_COUNT_TX_1527_MAX);
stats->tx_dropped = ocelot_read(ocelot, SYS_COUNT_TX_DROPS) +
ocelot_read(ocelot, SYS_COUNT_TX_AGING);
stats->collisions = ocelot_read(ocelot, SYS_COUNT_TX_COLLISION);
stats->tx_bytes = s[OCELOT_STAT_TX_OCTETS];
stats->tx_packets = s[OCELOT_STAT_TX_64] +
s[OCELOT_STAT_TX_65_127] +
s[OCELOT_STAT_TX_128_255] +
s[OCELOT_STAT_TX_256_511] +
s[OCELOT_STAT_TX_512_1023] +
s[OCELOT_STAT_TX_1024_1526] +
s[OCELOT_STAT_TX_1527_MAX];
stats->tx_dropped = s[OCELOT_STAT_TX_DROPS] +
s[OCELOT_STAT_TX_AGED];
stats->collisions = s[OCELOT_STAT_TX_COLLISION];
spin_unlock(&ocelot->stats_lock);
}
static int ocelot_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],


@ -96,101 +96,379 @@ static const struct reg_field ocelot_regfields[REGFIELD_MAX] = {
[SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 12, 4),
};
static const struct ocelot_stat_layout ocelot_stats_layout[] = {
{ .name = "rx_octets", .offset = 0x00, },
{ .name = "rx_unicast", .offset = 0x01, },
{ .name = "rx_multicast", .offset = 0x02, },
{ .name = "rx_broadcast", .offset = 0x03, },
{ .name = "rx_shorts", .offset = 0x04, },
{ .name = "rx_fragments", .offset = 0x05, },
{ .name = "rx_jabbers", .offset = 0x06, },
{ .name = "rx_crc_align_errs", .offset = 0x07, },
{ .name = "rx_sym_errs", .offset = 0x08, },
{ .name = "rx_frames_below_65_octets", .offset = 0x09, },
{ .name = "rx_frames_65_to_127_octets", .offset = 0x0A, },
{ .name = "rx_frames_128_to_255_octets", .offset = 0x0B, },
{ .name = "rx_frames_256_to_511_octets", .offset = 0x0C, },
{ .name = "rx_frames_512_to_1023_octets", .offset = 0x0D, },
{ .name = "rx_frames_1024_to_1526_octets", .offset = 0x0E, },
{ .name = "rx_frames_over_1526_octets", .offset = 0x0F, },
{ .name = "rx_pause", .offset = 0x10, },
{ .name = "rx_control", .offset = 0x11, },
{ .name = "rx_longs", .offset = 0x12, },
{ .name = "rx_classified_drops", .offset = 0x13, },
{ .name = "rx_red_prio_0", .offset = 0x14, },
{ .name = "rx_red_prio_1", .offset = 0x15, },
{ .name = "rx_red_prio_2", .offset = 0x16, },
{ .name = "rx_red_prio_3", .offset = 0x17, },
{ .name = "rx_red_prio_4", .offset = 0x18, },
{ .name = "rx_red_prio_5", .offset = 0x19, },
{ .name = "rx_red_prio_6", .offset = 0x1A, },
{ .name = "rx_red_prio_7", .offset = 0x1B, },
{ .name = "rx_yellow_prio_0", .offset = 0x1C, },
{ .name = "rx_yellow_prio_1", .offset = 0x1D, },
{ .name = "rx_yellow_prio_2", .offset = 0x1E, },
{ .name = "rx_yellow_prio_3", .offset = 0x1F, },
{ .name = "rx_yellow_prio_4", .offset = 0x20, },
{ .name = "rx_yellow_prio_5", .offset = 0x21, },
{ .name = "rx_yellow_prio_6", .offset = 0x22, },
{ .name = "rx_yellow_prio_7", .offset = 0x23, },
{ .name = "rx_green_prio_0", .offset = 0x24, },
{ .name = "rx_green_prio_1", .offset = 0x25, },
{ .name = "rx_green_prio_2", .offset = 0x26, },
{ .name = "rx_green_prio_3", .offset = 0x27, },
{ .name = "rx_green_prio_4", .offset = 0x28, },
{ .name = "rx_green_prio_5", .offset = 0x29, },
{ .name = "rx_green_prio_6", .offset = 0x2A, },
{ .name = "rx_green_prio_7", .offset = 0x2B, },
{ .name = "tx_octets", .offset = 0x40, },
{ .name = "tx_unicast", .offset = 0x41, },
{ .name = "tx_multicast", .offset = 0x42, },
{ .name = "tx_broadcast", .offset = 0x43, },
{ .name = "tx_collision", .offset = 0x44, },
{ .name = "tx_drops", .offset = 0x45, },
{ .name = "tx_pause", .offset = 0x46, },
{ .name = "tx_frames_below_65_octets", .offset = 0x47, },
{ .name = "tx_frames_65_to_127_octets", .offset = 0x48, },
{ .name = "tx_frames_128_255_octets", .offset = 0x49, },
{ .name = "tx_frames_256_511_octets", .offset = 0x4A, },
{ .name = "tx_frames_512_1023_octets", .offset = 0x4B, },
{ .name = "tx_frames_1024_1526_octets", .offset = 0x4C, },
{ .name = "tx_frames_over_1526_octets", .offset = 0x4D, },
{ .name = "tx_yellow_prio_0", .offset = 0x4E, },
{ .name = "tx_yellow_prio_1", .offset = 0x4F, },
{ .name = "tx_yellow_prio_2", .offset = 0x50, },
{ .name = "tx_yellow_prio_3", .offset = 0x51, },
{ .name = "tx_yellow_prio_4", .offset = 0x52, },
{ .name = "tx_yellow_prio_5", .offset = 0x53, },
{ .name = "tx_yellow_prio_6", .offset = 0x54, },
{ .name = "tx_yellow_prio_7", .offset = 0x55, },
{ .name = "tx_green_prio_0", .offset = 0x56, },
{ .name = "tx_green_prio_1", .offset = 0x57, },
{ .name = "tx_green_prio_2", .offset = 0x58, },
{ .name = "tx_green_prio_3", .offset = 0x59, },
{ .name = "tx_green_prio_4", .offset = 0x5A, },
{ .name = "tx_green_prio_5", .offset = 0x5B, },
{ .name = "tx_green_prio_6", .offset = 0x5C, },
{ .name = "tx_green_prio_7", .offset = 0x5D, },
{ .name = "tx_aged", .offset = 0x5E, },
{ .name = "drop_local", .offset = 0x80, },
{ .name = "drop_tail", .offset = 0x81, },
{ .name = "drop_yellow_prio_0", .offset = 0x82, },
{ .name = "drop_yellow_prio_1", .offset = 0x83, },
{ .name = "drop_yellow_prio_2", .offset = 0x84, },
{ .name = "drop_yellow_prio_3", .offset = 0x85, },
{ .name = "drop_yellow_prio_4", .offset = 0x86, },
{ .name = "drop_yellow_prio_5", .offset = 0x87, },
{ .name = "drop_yellow_prio_6", .offset = 0x88, },
{ .name = "drop_yellow_prio_7", .offset = 0x89, },
{ .name = "drop_green_prio_0", .offset = 0x8A, },
{ .name = "drop_green_prio_1", .offset = 0x8B, },
{ .name = "drop_green_prio_2", .offset = 0x8C, },
{ .name = "drop_green_prio_3", .offset = 0x8D, },
{ .name = "drop_green_prio_4", .offset = 0x8E, },
{ .name = "drop_green_prio_5", .offset = 0x8F, },
{ .name = "drop_green_prio_6", .offset = 0x90, },
{ .name = "drop_green_prio_7", .offset = 0x91, },
OCELOT_STAT_END
static const struct ocelot_stat_layout ocelot_stats_layout[OCELOT_NUM_STATS] = {
[OCELOT_STAT_RX_OCTETS] = {
.name = "rx_octets",
.reg = SYS_COUNT_RX_OCTETS,
},
[OCELOT_STAT_RX_UNICAST] = {
.name = "rx_unicast",
.reg = SYS_COUNT_RX_UNICAST,
},
[OCELOT_STAT_RX_MULTICAST] = {
.name = "rx_multicast",
.reg = SYS_COUNT_RX_MULTICAST,
},
[OCELOT_STAT_RX_BROADCAST] = {
.name = "rx_broadcast",
.reg = SYS_COUNT_RX_BROADCAST,
},
[OCELOT_STAT_RX_SHORTS] = {
.name = "rx_shorts",
.reg = SYS_COUNT_RX_SHORTS,
},
[OCELOT_STAT_RX_FRAGMENTS] = {
.name = "rx_fragments",
.reg = SYS_COUNT_RX_FRAGMENTS,
},
[OCELOT_STAT_RX_JABBERS] = {
.name = "rx_jabbers",
.reg = SYS_COUNT_RX_JABBERS,
},
[OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
.name = "rx_crc_align_errs",
.reg = SYS_COUNT_RX_CRC_ALIGN_ERRS,
},
[OCELOT_STAT_RX_SYM_ERRS] = {
.name = "rx_sym_errs",
.reg = SYS_COUNT_RX_SYM_ERRS,
},
[OCELOT_STAT_RX_64] = {
.name = "rx_frames_below_65_octets",
.reg = SYS_COUNT_RX_64,
},
[OCELOT_STAT_RX_65_127] = {
.name = "rx_frames_65_to_127_octets",
.reg = SYS_COUNT_RX_65_127,
},
[OCELOT_STAT_RX_128_255] = {
.name = "rx_frames_128_to_255_octets",
.reg = SYS_COUNT_RX_128_255,
},
[OCELOT_STAT_RX_256_511] = {
.name = "rx_frames_256_to_511_octets",
.reg = SYS_COUNT_RX_256_511,
},
[OCELOT_STAT_RX_512_1023] = {
.name = "rx_frames_512_to_1023_octets",
.reg = SYS_COUNT_RX_512_1023,
},
[OCELOT_STAT_RX_1024_1526] = {
.name = "rx_frames_1024_to_1526_octets",
.reg = SYS_COUNT_RX_1024_1526,
},
[OCELOT_STAT_RX_1527_MAX] = {
.name = "rx_frames_over_1526_octets",
.reg = SYS_COUNT_RX_1527_MAX,
},
[OCELOT_STAT_RX_PAUSE] = {
.name = "rx_pause",
.reg = SYS_COUNT_RX_PAUSE,
},
[OCELOT_STAT_RX_CONTROL] = {
.name = "rx_control",
.reg = SYS_COUNT_RX_CONTROL,
},
[OCELOT_STAT_RX_LONGS] = {
.name = "rx_longs",
.reg = SYS_COUNT_RX_LONGS,
},
[OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
.name = "rx_classified_drops",
.reg = SYS_COUNT_RX_CLASSIFIED_DROPS,
},
[OCELOT_STAT_RX_RED_PRIO_0] = {
.name = "rx_red_prio_0",
.reg = SYS_COUNT_RX_RED_PRIO_0,
},
[OCELOT_STAT_RX_RED_PRIO_1] = {
.name = "rx_red_prio_1",
.reg = SYS_COUNT_RX_RED_PRIO_1,
},
[OCELOT_STAT_RX_RED_PRIO_2] = {
.name = "rx_red_prio_2",
.reg = SYS_COUNT_RX_RED_PRIO_2,
},
[OCELOT_STAT_RX_RED_PRIO_3] = {
.name = "rx_red_prio_3",
.reg = SYS_COUNT_RX_RED_PRIO_3,
},
[OCELOT_STAT_RX_RED_PRIO_4] = {
.name = "rx_red_prio_4",
.reg = SYS_COUNT_RX_RED_PRIO_4,
},
[OCELOT_STAT_RX_RED_PRIO_5] = {
.name = "rx_red_prio_5",
.reg = SYS_COUNT_RX_RED_PRIO_5,
},
[OCELOT_STAT_RX_RED_PRIO_6] = {
.name = "rx_red_prio_6",
.reg = SYS_COUNT_RX_RED_PRIO_6,
},
[OCELOT_STAT_RX_RED_PRIO_7] = {
.name = "rx_red_prio_7",
.reg = SYS_COUNT_RX_RED_PRIO_7,
},
[OCELOT_STAT_RX_YELLOW_PRIO_0] = {
.name = "rx_yellow_prio_0",
.reg = SYS_COUNT_RX_YELLOW_PRIO_0,
},
[OCELOT_STAT_RX_YELLOW_PRIO_1] = {
.name = "rx_yellow_prio_1",
.reg = SYS_COUNT_RX_YELLOW_PRIO_1,
},
[OCELOT_STAT_RX_YELLOW_PRIO_2] = {
.name = "rx_yellow_prio_2",
.reg = SYS_COUNT_RX_YELLOW_PRIO_2,
},
[OCELOT_STAT_RX_YELLOW_PRIO_3] = {
.name = "rx_yellow_prio_3",
.reg = SYS_COUNT_RX_YELLOW_PRIO_3,
},
[OCELOT_STAT_RX_YELLOW_PRIO_4] = {
.name = "rx_yellow_prio_4",
.reg = SYS_COUNT_RX_YELLOW_PRIO_4,
},
[OCELOT_STAT_RX_YELLOW_PRIO_5] = {
.name = "rx_yellow_prio_5",
.reg = SYS_COUNT_RX_YELLOW_PRIO_5,
},
[OCELOT_STAT_RX_YELLOW_PRIO_6] = {
.name = "rx_yellow_prio_6",
.reg = SYS_COUNT_RX_YELLOW_PRIO_6,
},
[OCELOT_STAT_RX_YELLOW_PRIO_7] = {
.name = "rx_yellow_prio_7",
.reg = SYS_COUNT_RX_YELLOW_PRIO_7,
},
[OCELOT_STAT_RX_GREEN_PRIO_0] = {
.name = "rx_green_prio_0",
.reg = SYS_COUNT_RX_GREEN_PRIO_0,
},
[OCELOT_STAT_RX_GREEN_PRIO_1] = {
.name = "rx_green_prio_1",
.reg = SYS_COUNT_RX_GREEN_PRIO_1,
},
[OCELOT_STAT_RX_GREEN_PRIO_2] = {
.name = "rx_green_prio_2",
.reg = SYS_COUNT_RX_GREEN_PRIO_2,
},
[OCELOT_STAT_RX_GREEN_PRIO_3] = {
.name = "rx_green_prio_3",
.reg = SYS_COUNT_RX_GREEN_PRIO_3,
},
[OCELOT_STAT_RX_GREEN_PRIO_4] = {
.name = "rx_green_prio_4",
.reg = SYS_COUNT_RX_GREEN_PRIO_4,
},
[OCELOT_STAT_RX_GREEN_PRIO_5] = {
.name = "rx_green_prio_5",
.reg = SYS_COUNT_RX_GREEN_PRIO_5,
},
[OCELOT_STAT_RX_GREEN_PRIO_6] = {
.name = "rx_green_prio_6",
.reg = SYS_COUNT_RX_GREEN_PRIO_6,
},
[OCELOT_STAT_RX_GREEN_PRIO_7] = {
.name = "rx_green_prio_7",
.reg = SYS_COUNT_RX_GREEN_PRIO_7,
},
[OCELOT_STAT_TX_OCTETS] = {
.name = "tx_octets",
.reg = SYS_COUNT_TX_OCTETS,
},
[OCELOT_STAT_TX_UNICAST] = {
.name = "tx_unicast",
.reg = SYS_COUNT_TX_UNICAST,
},
[OCELOT_STAT_TX_MULTICAST] = {
.name = "tx_multicast",
.reg = SYS_COUNT_TX_MULTICAST,
},
[OCELOT_STAT_TX_BROADCAST] = {
.name = "tx_broadcast",
.reg = SYS_COUNT_TX_BROADCAST,
},
[OCELOT_STAT_TX_COLLISION] = {
.name = "tx_collision",
.reg = SYS_COUNT_TX_COLLISION,
},
[OCELOT_STAT_TX_DROPS] = {
.name = "tx_drops",
.reg = SYS_COUNT_TX_DROPS,
},
[OCELOT_STAT_TX_PAUSE] = {
.name = "tx_pause",
.reg = SYS_COUNT_TX_PAUSE,
},
[OCELOT_STAT_TX_64] = {
.name = "tx_frames_below_65_octets",
.reg = SYS_COUNT_TX_64,
},
[OCELOT_STAT_TX_65_127] = {
.name = "tx_frames_65_to_127_octets",
.reg = SYS_COUNT_TX_65_127,
},
[OCELOT_STAT_TX_128_255] = {
.name = "tx_frames_128_255_octets",
.reg = SYS_COUNT_TX_128_255,
},
[OCELOT_STAT_TX_256_511] = {
.name = "tx_frames_256_511_octets",
.reg = SYS_COUNT_TX_256_511,
},
[OCELOT_STAT_TX_512_1023] = {
.name = "tx_frames_512_1023_octets",
.reg = SYS_COUNT_TX_512_1023,
},
[OCELOT_STAT_TX_1024_1526] = {
.name = "tx_frames_1024_1526_octets",
.reg = SYS_COUNT_TX_1024_1526,
},
[OCELOT_STAT_TX_1527_MAX] = {
.name = "tx_frames_over_1526_octets",
.reg = SYS_COUNT_TX_1527_MAX,
},
[OCELOT_STAT_TX_YELLOW_PRIO_0] = {
.name = "tx_yellow_prio_0",
.reg = SYS_COUNT_TX_YELLOW_PRIO_0,
},
[OCELOT_STAT_TX_YELLOW_PRIO_1] = {
.name = "tx_yellow_prio_1",
.reg = SYS_COUNT_TX_YELLOW_PRIO_1,
},
[OCELOT_STAT_TX_YELLOW_PRIO_2] = {
.name = "tx_yellow_prio_2",
.reg = SYS_COUNT_TX_YELLOW_PRIO_2,
},
[OCELOT_STAT_TX_YELLOW_PRIO_3] = {
.name = "tx_yellow_prio_3",
.reg = SYS_COUNT_TX_YELLOW_PRIO_3,
},
[OCELOT_STAT_TX_YELLOW_PRIO_4] = {
.name = "tx_yellow_prio_4",
.reg = SYS_COUNT_TX_YELLOW_PRIO_4,
},
[OCELOT_STAT_TX_YELLOW_PRIO_5] = {
.name = "tx_yellow_prio_5",
.reg = SYS_COUNT_TX_YELLOW_PRIO_5,
},
[OCELOT_STAT_TX_YELLOW_PRIO_6] = {
.name = "tx_yellow_prio_6",
.reg = SYS_COUNT_TX_YELLOW_PRIO_6,
},
[OCELOT_STAT_TX_YELLOW_PRIO_7] = {
.name = "tx_yellow_prio_7",
.reg = SYS_COUNT_TX_YELLOW_PRIO_7,
},
[OCELOT_STAT_TX_GREEN_PRIO_0] = {
.name = "tx_green_prio_0",
.reg = SYS_COUNT_TX_GREEN_PRIO_0,
},
[OCELOT_STAT_TX_GREEN_PRIO_1] = {
.name = "tx_green_prio_1",
.reg = SYS_COUNT_TX_GREEN_PRIO_1,
},
[OCELOT_STAT_TX_GREEN_PRIO_2] = {
.name = "tx_green_prio_2",
.reg = SYS_COUNT_TX_GREEN_PRIO_2,
},
[OCELOT_STAT_TX_GREEN_PRIO_3] = {
.name = "tx_green_prio_3",
.reg = SYS_COUNT_TX_GREEN_PRIO_3,
},
[OCELOT_STAT_TX_GREEN_PRIO_4] = {
.name = "tx_green_prio_4",
.reg = SYS_COUNT_TX_GREEN_PRIO_4,
},
[OCELOT_STAT_TX_GREEN_PRIO_5] = {
.name = "tx_green_prio_5",
.reg = SYS_COUNT_TX_GREEN_PRIO_5,
},
[OCELOT_STAT_TX_GREEN_PRIO_6] = {
.name = "tx_green_prio_6",
.reg = SYS_COUNT_TX_GREEN_PRIO_6,
},
[OCELOT_STAT_TX_GREEN_PRIO_7] = {
.name = "tx_green_prio_7",
.reg = SYS_COUNT_TX_GREEN_PRIO_7,
},
[OCELOT_STAT_TX_AGED] = {
.name = "tx_aged",
.reg = SYS_COUNT_TX_AGING,
},
[OCELOT_STAT_DROP_LOCAL] = {
.name = "drop_local",
.reg = SYS_COUNT_DROP_LOCAL,
},
[OCELOT_STAT_DROP_TAIL] = {
.name = "drop_tail",
.reg = SYS_COUNT_DROP_TAIL,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
.name = "drop_yellow_prio_0",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_0,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
.name = "drop_yellow_prio_1",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_1,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
.name = "drop_yellow_prio_2",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_2,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
.name = "drop_yellow_prio_3",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_3,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
.name = "drop_yellow_prio_4",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_4,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
.name = "drop_yellow_prio_5",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_5,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
.name = "drop_yellow_prio_6",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_6,
},
[OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
.name = "drop_yellow_prio_7",
.reg = SYS_COUNT_DROP_YELLOW_PRIO_7,
},
[OCELOT_STAT_DROP_GREEN_PRIO_0] = {
.name = "drop_green_prio_0",
.reg = SYS_COUNT_DROP_GREEN_PRIO_0,
},
[OCELOT_STAT_DROP_GREEN_PRIO_1] = {
.name = "drop_green_prio_1",
.reg = SYS_COUNT_DROP_GREEN_PRIO_1,
},
[OCELOT_STAT_DROP_GREEN_PRIO_2] = {
.name = "drop_green_prio_2",
.reg = SYS_COUNT_DROP_GREEN_PRIO_2,
},
[OCELOT_STAT_DROP_GREEN_PRIO_3] = {
.name = "drop_green_prio_3",
.reg = SYS_COUNT_DROP_GREEN_PRIO_3,
},
[OCELOT_STAT_DROP_GREEN_PRIO_4] = {
.name = "drop_green_prio_4",
.reg = SYS_COUNT_DROP_GREEN_PRIO_4,
},
[OCELOT_STAT_DROP_GREEN_PRIO_5] = {
.name = "drop_green_prio_5",
.reg = SYS_COUNT_DROP_GREEN_PRIO_5,
},
[OCELOT_STAT_DROP_GREEN_PRIO_6] = {
.name = "drop_green_prio_6",
.reg = SYS_COUNT_DROP_GREEN_PRIO_6,
},
[OCELOT_STAT_DROP_GREEN_PRIO_7] = {
.name = "drop_green_prio_7",
.reg = SYS_COUNT_DROP_GREEN_PRIO_7,
},
};
static void ocelot_pll5_init(struct ocelot *ocelot)


@ -180,13 +180,38 @@ const u32 vsc7514_sys_regmap[] = {
REG(SYS_COUNT_RX_64, 0x000024),
REG(SYS_COUNT_RX_65_127, 0x000028),
REG(SYS_COUNT_RX_128_255, 0x00002c),
REG(SYS_COUNT_RX_256_1023, 0x000030),
REG(SYS_COUNT_RX_1024_1526, 0x000034),
REG(SYS_COUNT_RX_1527_MAX, 0x000038),
REG(SYS_COUNT_RX_PAUSE, 0x00003c),
REG(SYS_COUNT_RX_CONTROL, 0x000040),
REG(SYS_COUNT_RX_LONGS, 0x000044),
REG(SYS_COUNT_RX_CLASSIFIED_DROPS, 0x000048),
REG(SYS_COUNT_RX_256_511, 0x000030),
REG(SYS_COUNT_RX_512_1023, 0x000034),
REG(SYS_COUNT_RX_1024_1526, 0x000038),
REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
REG(SYS_COUNT_RX_PAUSE, 0x000040),
REG(SYS_COUNT_RX_CONTROL, 0x000044),
REG(SYS_COUNT_RX_LONGS, 0x000048),
REG(SYS_COUNT_RX_CLASSIFIED_DROPS, 0x00004c),
REG(SYS_COUNT_RX_RED_PRIO_0, 0x000050),
REG(SYS_COUNT_RX_RED_PRIO_1, 0x000054),
REG(SYS_COUNT_RX_RED_PRIO_2, 0x000058),
REG(SYS_COUNT_RX_RED_PRIO_3, 0x00005c),
REG(SYS_COUNT_RX_RED_PRIO_4, 0x000060),
REG(SYS_COUNT_RX_RED_PRIO_5, 0x000064),
REG(SYS_COUNT_RX_RED_PRIO_6, 0x000068),
REG(SYS_COUNT_RX_RED_PRIO_7, 0x00006c),
REG(SYS_COUNT_RX_YELLOW_PRIO_0, 0x000070),
REG(SYS_COUNT_RX_YELLOW_PRIO_1, 0x000074),
REG(SYS_COUNT_RX_YELLOW_PRIO_2, 0x000078),
REG(SYS_COUNT_RX_YELLOW_PRIO_3, 0x00007c),
REG(SYS_COUNT_RX_YELLOW_PRIO_4, 0x000080),
REG(SYS_COUNT_RX_YELLOW_PRIO_5, 0x000084),
REG(SYS_COUNT_RX_YELLOW_PRIO_6, 0x000088),
REG(SYS_COUNT_RX_YELLOW_PRIO_7, 0x00008c),
REG(SYS_COUNT_RX_GREEN_PRIO_0, 0x000090),
REG(SYS_COUNT_RX_GREEN_PRIO_1, 0x000094),
REG(SYS_COUNT_RX_GREEN_PRIO_2, 0x000098),
REG(SYS_COUNT_RX_GREEN_PRIO_3, 0x00009c),
REG(SYS_COUNT_RX_GREEN_PRIO_4, 0x0000a0),
REG(SYS_COUNT_RX_GREEN_PRIO_5, 0x0000a4),
REG(SYS_COUNT_RX_GREEN_PRIO_6, 0x0000a8),
REG(SYS_COUNT_RX_GREEN_PRIO_7, 0x0000ac),
REG(SYS_COUNT_TX_OCTETS, 0x000100),
REG(SYS_COUNT_TX_UNICAST, 0x000104),
REG(SYS_COUNT_TX_MULTICAST, 0x000108),
@ -196,11 +221,46 @@ const u32 vsc7514_sys_regmap[] = {
REG(SYS_COUNT_TX_PAUSE, 0x000118),
REG(SYS_COUNT_TX_64, 0x00011c),
REG(SYS_COUNT_TX_65_127, 0x000120),
REG(SYS_COUNT_TX_128_511, 0x000124),
REG(SYS_COUNT_TX_512_1023, 0x000128),
REG(SYS_COUNT_TX_1024_1526, 0x00012c),
REG(SYS_COUNT_TX_1527_MAX, 0x000130),
REG(SYS_COUNT_TX_AGING, 0x000170),
REG(SYS_COUNT_TX_128_255, 0x000124),
REG(SYS_COUNT_TX_256_511, 0x000128),
REG(SYS_COUNT_TX_512_1023, 0x00012c),
REG(SYS_COUNT_TX_1024_1526, 0x000130),
REG(SYS_COUNT_TX_1527_MAX, 0x000134),
REG(SYS_COUNT_TX_YELLOW_PRIO_0, 0x000138),
REG(SYS_COUNT_TX_YELLOW_PRIO_1, 0x00013c),
REG(SYS_COUNT_TX_YELLOW_PRIO_2, 0x000140),
REG(SYS_COUNT_TX_YELLOW_PRIO_3, 0x000144),
REG(SYS_COUNT_TX_YELLOW_PRIO_4, 0x000148),
REG(SYS_COUNT_TX_YELLOW_PRIO_5, 0x00014c),
REG(SYS_COUNT_TX_YELLOW_PRIO_6, 0x000150),
REG(SYS_COUNT_TX_YELLOW_PRIO_7, 0x000154),
REG(SYS_COUNT_TX_GREEN_PRIO_0, 0x000158),
REG(SYS_COUNT_TX_GREEN_PRIO_1, 0x00015c),
REG(SYS_COUNT_TX_GREEN_PRIO_2, 0x000160),
REG(SYS_COUNT_TX_GREEN_PRIO_3, 0x000164),
REG(SYS_COUNT_TX_GREEN_PRIO_4, 0x000168),
REG(SYS_COUNT_TX_GREEN_PRIO_5, 0x00016c),
REG(SYS_COUNT_TX_GREEN_PRIO_6, 0x000170),
REG(SYS_COUNT_TX_GREEN_PRIO_7, 0x000174),
REG(SYS_COUNT_TX_AGING, 0x000178),
REG(SYS_COUNT_DROP_LOCAL, 0x000200),
REG(SYS_COUNT_DROP_TAIL, 0x000204),
REG(SYS_COUNT_DROP_YELLOW_PRIO_0, 0x000208),
REG(SYS_COUNT_DROP_YELLOW_PRIO_1, 0x00020c),
REG(SYS_COUNT_DROP_YELLOW_PRIO_2, 0x000210),
REG(SYS_COUNT_DROP_YELLOW_PRIO_3, 0x000214),
REG(SYS_COUNT_DROP_YELLOW_PRIO_4, 0x000218),
REG(SYS_COUNT_DROP_YELLOW_PRIO_5, 0x00021c),
REG(SYS_COUNT_DROP_YELLOW_PRIO_6, 0x000220),
REG(SYS_COUNT_DROP_YELLOW_PRIO_7, 0x000214),
REG(SYS_COUNT_DROP_GREEN_PRIO_0, 0x000218),
REG(SYS_COUNT_DROP_GREEN_PRIO_1, 0x00021c),
REG(SYS_COUNT_DROP_GREEN_PRIO_2, 0x000220),
REG(SYS_COUNT_DROP_GREEN_PRIO_3, 0x000224),
REG(SYS_COUNT_DROP_GREEN_PRIO_4, 0x000228),
REG(SYS_COUNT_DROP_GREEN_PRIO_5, 0x00022c),
REG(SYS_COUNT_DROP_GREEN_PRIO_6, 0x000230),
REG(SYS_COUNT_DROP_GREEN_PRIO_7, 0x000234),
REG(SYS_RESET_CFG, 0x000508),
REG(SYS_CMID, 0x00050c),
REG(SYS_VLAN_ETYPE_CFG, 0x000510),


@ -1134,6 +1134,7 @@ static void intel_eth_pci_remove(struct pci_dev *pdev)
stmmac_dvr_remove(&pdev->dev);
clk_disable_unprepare(priv->plat->stmmac_clk);
clk_unregister_fixed_rate(priv->plat->stmmac_clk);
pcim_iounmap_regions(pdev, BIT(0));


@ -348,7 +348,7 @@ do { \
* This macro is invoked by the OS-specific before it left the
* function mac_drv_rx_complete. This macro calls mac_drv_fill_rxd
* if the number of used RxDs is equal or lower than the
* the given low water mark.
* given low water mark.
*
* para low_water low water mark of used RxD's
*


@ -48,7 +48,7 @@ struct ipa;
*
* The offset of registers related to resource types is computed by a macro
* that is supplied a parameter "rt". The "rt" represents a resource type,
* which is is a member of the ipa_resource_type_src enumerated type for
* which is a member of the ipa_resource_type_src enumerated type for
* source endpoint resources or the ipa_resource_type_dst enumerated type
* for destination endpoint resources.
*


@ -1211,7 +1211,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
if (!hdr_hash || !skb)
return;
switch ((int)hdr_hash->hash_report) {
switch (__le16_to_cpu(hdr_hash->hash_report)) {
case VIRTIO_NET_HASH_REPORT_TCPv4:
case VIRTIO_NET_HASH_REPORT_UDPv4:
case VIRTIO_NET_HASH_REPORT_TCPv6:
@ -1229,7 +1229,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
default:
rss_hash_type = PKT_HASH_TYPE_NONE;
}
skb_set_hash(skb, (unsigned int)hdr_hash->hash_value, rss_hash_type);
skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type);
}
static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,


@ -83,6 +83,7 @@ struct neigh_parms {
struct rcu_head rcu_head;
int reachable_time;
int qlen;
int data[NEIGH_VAR_DATA_MAX];
DECLARE_BITMAP(data_state, NEIGH_VAR_DATA_MAX);
};


@ -95,7 +95,7 @@ struct nf_ip_net {
struct netns_ct {
#ifdef CONFIG_NF_CONNTRACK_EVENTS
bool ctnetlink_has_listener;
u8 ctnetlink_has_listener;
bool ecache_dwork_pending;
#endif
u8 sysctl_log_invalid; /* Log invalid packets */


@ -577,6 +577,31 @@ static inline bool sk_user_data_is_nocopy(const struct sock *sk)
#define __sk_user_data(sk) ((*((void __rcu **)&(sk)->sk_user_data)))
/**
* __locked_read_sk_user_data_with_flags - return the pointer
* only if argument flags all has been set in sk_user_data. Otherwise
* return NULL
*
* @sk: socket
* @flags: flag bits
*
* The caller must be holding sk->sk_callback_lock.
*/
static inline void *
__locked_read_sk_user_data_with_flags(const struct sock *sk,
uintptr_t flags)
{
uintptr_t sk_user_data =
(uintptr_t)rcu_dereference_check(__sk_user_data(sk),
lockdep_is_held(&sk->sk_callback_lock));
WARN_ON_ONCE(flags & SK_USER_DATA_PTRMASK);
if ((sk_user_data & flags) == flags)
return (void *)(sk_user_data & SK_USER_DATA_PTRMASK);
return NULL;
}
/**
* __rcu_dereference_sk_user_data_with_flags - return the pointer
* only if argument flags all has been set in sk_user_data. Otherwise


@ -105,11 +105,6 @@
#define REG_RESERVED_ADDR 0xffffffff
#define REG_RESERVED(reg) REG(reg, REG_RESERVED_ADDR)
#define for_each_stat(ocelot, stat) \
for ((stat) = (ocelot)->stats_layout; \
((stat)->name[0] != '\0'); \
(stat)++)
enum ocelot_target {
ANA = 1,
QS,
@ -335,13 +330,38 @@ enum ocelot_reg {
SYS_COUNT_RX_64,
SYS_COUNT_RX_65_127,
SYS_COUNT_RX_128_255,
SYS_COUNT_RX_256_1023,
SYS_COUNT_RX_256_511,
SYS_COUNT_RX_512_1023,
SYS_COUNT_RX_1024_1526,
SYS_COUNT_RX_1527_MAX,
SYS_COUNT_RX_PAUSE,
SYS_COUNT_RX_CONTROL,
SYS_COUNT_RX_LONGS,
SYS_COUNT_RX_CLASSIFIED_DROPS,
SYS_COUNT_RX_RED_PRIO_0,
SYS_COUNT_RX_RED_PRIO_1,
SYS_COUNT_RX_RED_PRIO_2,
SYS_COUNT_RX_RED_PRIO_3,
SYS_COUNT_RX_RED_PRIO_4,
SYS_COUNT_RX_RED_PRIO_5,
SYS_COUNT_RX_RED_PRIO_6,
SYS_COUNT_RX_RED_PRIO_7,
SYS_COUNT_RX_YELLOW_PRIO_0,
SYS_COUNT_RX_YELLOW_PRIO_1,
SYS_COUNT_RX_YELLOW_PRIO_2,
SYS_COUNT_RX_YELLOW_PRIO_3,
SYS_COUNT_RX_YELLOW_PRIO_4,
SYS_COUNT_RX_YELLOW_PRIO_5,
SYS_COUNT_RX_YELLOW_PRIO_6,
SYS_COUNT_RX_YELLOW_PRIO_7,
SYS_COUNT_RX_GREEN_PRIO_0,
SYS_COUNT_RX_GREEN_PRIO_1,
SYS_COUNT_RX_GREEN_PRIO_2,
SYS_COUNT_RX_GREEN_PRIO_3,
SYS_COUNT_RX_GREEN_PRIO_4,
SYS_COUNT_RX_GREEN_PRIO_5,
SYS_COUNT_RX_GREEN_PRIO_6,
SYS_COUNT_RX_GREEN_PRIO_7,
SYS_COUNT_TX_OCTETS,
SYS_COUNT_TX_UNICAST,
SYS_COUNT_TX_MULTICAST,
@ -351,11 +371,46 @@ enum ocelot_reg {
SYS_COUNT_TX_PAUSE,
SYS_COUNT_TX_64,
SYS_COUNT_TX_65_127,
SYS_COUNT_TX_128_511,
SYS_COUNT_TX_128_255,
SYS_COUNT_TX_256_511,
SYS_COUNT_TX_512_1023,
SYS_COUNT_TX_1024_1526,
SYS_COUNT_TX_1527_MAX,
SYS_COUNT_TX_YELLOW_PRIO_0,
SYS_COUNT_TX_YELLOW_PRIO_1,
SYS_COUNT_TX_YELLOW_PRIO_2,
SYS_COUNT_TX_YELLOW_PRIO_3,
SYS_COUNT_TX_YELLOW_PRIO_4,
SYS_COUNT_TX_YELLOW_PRIO_5,
SYS_COUNT_TX_YELLOW_PRIO_6,
SYS_COUNT_TX_YELLOW_PRIO_7,
SYS_COUNT_TX_GREEN_PRIO_0,
SYS_COUNT_TX_GREEN_PRIO_1,
SYS_COUNT_TX_GREEN_PRIO_2,
SYS_COUNT_TX_GREEN_PRIO_3,
SYS_COUNT_TX_GREEN_PRIO_4,
SYS_COUNT_TX_GREEN_PRIO_5,
SYS_COUNT_TX_GREEN_PRIO_6,
SYS_COUNT_TX_GREEN_PRIO_7,
SYS_COUNT_TX_AGING,
SYS_COUNT_DROP_LOCAL,
SYS_COUNT_DROP_TAIL,
SYS_COUNT_DROP_YELLOW_PRIO_0,
SYS_COUNT_DROP_YELLOW_PRIO_1,
SYS_COUNT_DROP_YELLOW_PRIO_2,
SYS_COUNT_DROP_YELLOW_PRIO_3,
SYS_COUNT_DROP_YELLOW_PRIO_4,
SYS_COUNT_DROP_YELLOW_PRIO_5,
SYS_COUNT_DROP_YELLOW_PRIO_6,
SYS_COUNT_DROP_YELLOW_PRIO_7,
SYS_COUNT_DROP_GREEN_PRIO_0,
SYS_COUNT_DROP_GREEN_PRIO_1,
SYS_COUNT_DROP_GREEN_PRIO_2,
SYS_COUNT_DROP_GREEN_PRIO_3,
SYS_COUNT_DROP_GREEN_PRIO_4,
SYS_COUNT_DROP_GREEN_PRIO_5,
SYS_COUNT_DROP_GREEN_PRIO_6,
SYS_COUNT_DROP_GREEN_PRIO_7,
SYS_RESET_CFG,
SYS_SR_ETYPE_CFG,
SYS_VLAN_ETYPE_CFG,
@ -538,16 +593,111 @@ enum ocelot_ptp_pins {
TOD_ACC_PIN
};
enum ocelot_stat {
OCELOT_STAT_RX_OCTETS,
OCELOT_STAT_RX_UNICAST,
OCELOT_STAT_RX_MULTICAST,
OCELOT_STAT_RX_BROADCAST,
OCELOT_STAT_RX_SHORTS,
OCELOT_STAT_RX_FRAGMENTS,
OCELOT_STAT_RX_JABBERS,
OCELOT_STAT_RX_CRC_ALIGN_ERRS,
OCELOT_STAT_RX_SYM_ERRS,
OCELOT_STAT_RX_64,
OCELOT_STAT_RX_65_127,
OCELOT_STAT_RX_128_255,
OCELOT_STAT_RX_256_511,
OCELOT_STAT_RX_512_1023,
OCELOT_STAT_RX_1024_1526,
OCELOT_STAT_RX_1527_MAX,
OCELOT_STAT_RX_PAUSE,
OCELOT_STAT_RX_CONTROL,
OCELOT_STAT_RX_LONGS,
OCELOT_STAT_RX_CLASSIFIED_DROPS,
OCELOT_STAT_RX_RED_PRIO_0,
OCELOT_STAT_RX_RED_PRIO_1,
OCELOT_STAT_RX_RED_PRIO_2,
OCELOT_STAT_RX_RED_PRIO_3,
OCELOT_STAT_RX_RED_PRIO_4,
OCELOT_STAT_RX_RED_PRIO_5,
OCELOT_STAT_RX_RED_PRIO_6,
OCELOT_STAT_RX_RED_PRIO_7,
OCELOT_STAT_RX_YELLOW_PRIO_0,
OCELOT_STAT_RX_YELLOW_PRIO_1,
OCELOT_STAT_RX_YELLOW_PRIO_2,
OCELOT_STAT_RX_YELLOW_PRIO_3,
OCELOT_STAT_RX_YELLOW_PRIO_4,
OCELOT_STAT_RX_YELLOW_PRIO_5,
OCELOT_STAT_RX_YELLOW_PRIO_6,
OCELOT_STAT_RX_YELLOW_PRIO_7,
OCELOT_STAT_RX_GREEN_PRIO_0,
OCELOT_STAT_RX_GREEN_PRIO_1,
OCELOT_STAT_RX_GREEN_PRIO_2,
OCELOT_STAT_RX_GREEN_PRIO_3,
OCELOT_STAT_RX_GREEN_PRIO_4,
OCELOT_STAT_RX_GREEN_PRIO_5,
OCELOT_STAT_RX_GREEN_PRIO_6,
OCELOT_STAT_RX_GREEN_PRIO_7,
OCELOT_STAT_TX_OCTETS,
OCELOT_STAT_TX_UNICAST,
OCELOT_STAT_TX_MULTICAST,
OCELOT_STAT_TX_BROADCAST,
OCELOT_STAT_TX_COLLISION,
OCELOT_STAT_TX_DROPS,
OCELOT_STAT_TX_PAUSE,
OCELOT_STAT_TX_64,
OCELOT_STAT_TX_65_127,
OCELOT_STAT_TX_128_255,
OCELOT_STAT_TX_256_511,
OCELOT_STAT_TX_512_1023,
OCELOT_STAT_TX_1024_1526,
OCELOT_STAT_TX_1527_MAX,
OCELOT_STAT_TX_YELLOW_PRIO_0,
OCELOT_STAT_TX_YELLOW_PRIO_1,
OCELOT_STAT_TX_YELLOW_PRIO_2,
OCELOT_STAT_TX_YELLOW_PRIO_3,
OCELOT_STAT_TX_YELLOW_PRIO_4,
OCELOT_STAT_TX_YELLOW_PRIO_5,
OCELOT_STAT_TX_YELLOW_PRIO_6,
OCELOT_STAT_TX_YELLOW_PRIO_7,
OCELOT_STAT_TX_GREEN_PRIO_0,
OCELOT_STAT_TX_GREEN_PRIO_1,
OCELOT_STAT_TX_GREEN_PRIO_2,
OCELOT_STAT_TX_GREEN_PRIO_3,
OCELOT_STAT_TX_GREEN_PRIO_4,
OCELOT_STAT_TX_GREEN_PRIO_5,
OCELOT_STAT_TX_GREEN_PRIO_6,
OCELOT_STAT_TX_GREEN_PRIO_7,
OCELOT_STAT_TX_AGED,
OCELOT_STAT_DROP_LOCAL,
OCELOT_STAT_DROP_TAIL,
OCELOT_STAT_DROP_YELLOW_PRIO_0,
OCELOT_STAT_DROP_YELLOW_PRIO_1,
OCELOT_STAT_DROP_YELLOW_PRIO_2,
OCELOT_STAT_DROP_YELLOW_PRIO_3,
OCELOT_STAT_DROP_YELLOW_PRIO_4,
OCELOT_STAT_DROP_YELLOW_PRIO_5,
OCELOT_STAT_DROP_YELLOW_PRIO_6,
OCELOT_STAT_DROP_YELLOW_PRIO_7,
OCELOT_STAT_DROP_GREEN_PRIO_0,
OCELOT_STAT_DROP_GREEN_PRIO_1,
OCELOT_STAT_DROP_GREEN_PRIO_2,
OCELOT_STAT_DROP_GREEN_PRIO_3,
OCELOT_STAT_DROP_GREEN_PRIO_4,
OCELOT_STAT_DROP_GREEN_PRIO_5,
OCELOT_STAT_DROP_GREEN_PRIO_6,
OCELOT_STAT_DROP_GREEN_PRIO_7,
OCELOT_NUM_STATS,
};
struct ocelot_stat_layout {
u32 offset;
u32 reg;
char name[ETH_GSTRING_LEN];
};
#define OCELOT_STAT_END { .name = "" }
struct ocelot_stats_region {
struct list_head node;
u32 offset;
u32 base;
int count;
u32 *buf;
};
@ -707,7 +857,6 @@ struct ocelot {
const u32 *const *map;
const struct ocelot_stat_layout *stats_layout;
struct list_head stats_regions;
unsigned int num_stats;
u32 pool_size[OCELOT_SB_NUM][OCELOT_SB_POOL_NUM];
int packet_buffer_size;
@ -750,7 +899,7 @@ struct ocelot {
struct ocelot_psfp_list psfp;
/* Workqueue to check statistics for overflow with its lock */
struct mutex stats_lock;
spinlock_t stats_lock;
u64 *stats;
struct delayed_work stats_work;
struct workqueue_struct *stats_queue;
@ -786,8 +935,8 @@ struct ocelot_policer {
u32 burst; /* bytes */
};
#define ocelot_bulk_read_rix(ocelot, reg, ri, buf, count) \
__ocelot_bulk_read_ix(ocelot, reg, reg##_RSZ * (ri), buf, count)
#define ocelot_bulk_read(ocelot, reg, buf, count) \
__ocelot_bulk_read_ix(ocelot, reg, 0, buf, count)
#define ocelot_read_ix(ocelot, reg, gi, ri) \
__ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi) + reg##_RSZ * (ri))


@ -24,7 +24,7 @@ void bpf_sk_reuseport_detach(struct sock *sk)
struct sock __rcu **socks;
write_lock_bh(&sk->sk_callback_lock);
socks = __rcu_dereference_sk_user_data_with_flags(sk, SK_USER_DATA_BPF);
socks = __locked_read_sk_user_data_with_flags(sk, SK_USER_DATA_BPF);
if (socks) {
WRITE_ONCE(sk->sk_user_data, NULL);
/*


@ -345,7 +345,7 @@ static void gnet_stats_add_queue_cpu(struct gnet_stats_queue *qstats,
for_each_possible_cpu(i) {
const struct gnet_stats_queue *qcpu = per_cpu_ptr(q, i);
qstats->qlen += qcpu->backlog;
qstats->qlen += qcpu->qlen;
qstats->backlog += qcpu->backlog;
qstats->drops += qcpu->drops;
qstats->requeues += qcpu->requeues;


@ -307,14 +307,32 @@ static int neigh_del_timer(struct neighbour *n)
return 0;
}
static void pneigh_queue_purge(struct sk_buff_head *list)
static void pneigh_queue_purge(struct sk_buff_head *list, struct net *net)
{
unsigned long flags;
struct sk_buff *skb;
while ((skb = skb_dequeue(list)) != NULL) {
dev_put(skb->dev);
kfree_skb(skb);
spin_lock_irqsave(&list->lock, flags);
skb = skb_peek(list);
while (skb != NULL) {
struct sk_buff *skb_next = skb_peek_next(skb, list);
struct net_device *dev = skb->dev;
if (net == NULL || net_eq(dev_net(dev), net)) {
struct in_device *in_dev;
rcu_read_lock();
in_dev = __in_dev_get_rcu(dev);
if (in_dev)
in_dev->arp_parms->qlen--;
rcu_read_unlock();
__skb_unlink(skb, list);
dev_put(dev);
kfree_skb(skb);
}
skb = skb_next;
}
spin_unlock_irqrestore(&list->lock, flags);
}
static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev,
@ -385,9 +403,9 @@ static int __neigh_ifdown(struct neigh_table *tbl, struct net_device *dev,
write_lock_bh(&tbl->lock);
neigh_flush_dev(tbl, dev, skip_perm);
pneigh_ifdown_and_unlock(tbl, dev);
del_timer_sync(&tbl->proxy_timer);
pneigh_queue_purge(&tbl->proxy_queue);
pneigh_queue_purge(&tbl->proxy_queue, dev_net(dev));
if (skb_queue_empty_lockless(&tbl->proxy_queue))
del_timer_sync(&tbl->proxy_timer);
return 0;
}
@ -1597,8 +1615,15 @@ static void neigh_proxy_process(struct timer_list *t)
if (tdif <= 0) {
struct net_device *dev = skb->dev;
struct in_device *in_dev;
rcu_read_lock();
in_dev = __in_dev_get_rcu(dev);
if (in_dev)
in_dev->arp_parms->qlen--;
rcu_read_unlock();
__skb_unlink(skb, &tbl->proxy_queue);
if (tbl->proxy_redo && netif_running(dev)) {
rcu_read_lock();
tbl->proxy_redo(skb);
@ -1623,7 +1648,7 @@ void pneigh_enqueue(struct neigh_table *tbl, struct neigh_parms *p,
unsigned long sched_next = jiffies +
prandom_u32_max(NEIGH_VAR(p, PROXY_DELAY));
if (tbl->proxy_queue.qlen > NEIGH_VAR(p, PROXY_QLEN)) {
if (p->qlen > NEIGH_VAR(p, PROXY_QLEN)) {
kfree_skb(skb);
return;
}
@ -1639,6 +1664,7 @@ void pneigh_enqueue(struct neigh_table *tbl, struct neigh_parms *p,
skb_dst_drop(skb);
dev_hold(skb->dev);
__skb_queue_tail(&tbl->proxy_queue, skb);
p->qlen++;
mod_timer(&tbl->proxy_timer, sched_next);
spin_unlock(&tbl->proxy_queue.lock);
}
@ -1671,6 +1697,7 @@ struct neigh_parms *neigh_parms_alloc(struct net_device *dev,
refcount_set(&p->refcnt, 1);
p->reachable_time =
neigh_rand_reach_time(NEIGH_VAR(p, BASE_REACHABLE_TIME));
p->qlen = 0;
netdev_hold(dev, &p->dev_tracker, GFP_KERNEL);
p->dev = dev;
write_pnet(&p->net, net);
@ -1736,6 +1763,7 @@ void neigh_table_init(int index, struct neigh_table *tbl)
refcount_set(&tbl->parms.refcnt, 1);
tbl->parms.reachable_time =
neigh_rand_reach_time(NEIGH_VAR(&tbl->parms, BASE_REACHABLE_TIME));
tbl->parms.qlen = 0;
tbl->stats = alloc_percpu(struct neigh_statistics);
if (!tbl->stats)
@ -1787,7 +1815,7 @@ int neigh_table_clear(int index, struct neigh_table *tbl)
cancel_delayed_work_sync(&tbl->managed_work);
cancel_delayed_work_sync(&tbl->gc_work);
del_timer_sync(&tbl->proxy_timer);
pneigh_queue_purge(&tbl->proxy_queue);
pneigh_queue_purge(&tbl->proxy_queue, NULL);
neigh_ifdown(tbl, NULL);
if (atomic_read(&tbl->entries))
pr_crit("neighbour leakage\n");


@ -6070,6 +6070,7 @@ static int rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
if (kind == RTNL_KIND_DEL && (nlh->nlmsg_flags & NLM_F_BULK) &&
!(flags & RTNL_FLAG_BULK_DEL_SUPPORTED)) {
NL_SET_ERR_MSG(extack, "Bulk delete is not supported");
module_put(owner);
goto err_unlock;
}


@ -1194,8 +1194,9 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
ret = bpf_prog_run_pin_on_cpu(prog, skb);
ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb));
}
if (sk_psock_verdict_apply(psock, skb, ret) < 0)
len = 0;
ret = sk_psock_verdict_apply(psock, skb, ret);
if (ret < 0)
len = ret;
out:
rcu_read_unlock();
return len;


@ -145,11 +145,14 @@ int dsa_port_set_state(struct dsa_port *dp, u8 state, bool do_fast_age)
static void dsa_port_set_state_now(struct dsa_port *dp, u8 state,
bool do_fast_age)
{
struct dsa_switch *ds = dp->ds;
int err;
err = dsa_port_set_state(dp, state, do_fast_age);
if (err)
pr_err("DSA: failed to set STP state %u (%d)\n", state, err);
if (err && err != -EOPNOTSUPP) {
dev_err(ds->dev, "port %d failed to set STP state %u: %pe\n",
dp->index, state, ERR_PTR(err));
}
}
int dsa_port_set_mst_state(struct dsa_port *dp,


@ -1567,17 +1567,11 @@ static int tcp_peek_sndq(struct sock *sk, struct msghdr *msg, int len)
* calculation of whether or not we must ACK for the sake of
* a window update.
*/
void tcp_cleanup_rbuf(struct sock *sk, int copied)
static void __tcp_cleanup_rbuf(struct sock *sk, int copied)
{
struct tcp_sock *tp = tcp_sk(sk);
bool time_to_ack = false;
struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);
WARN(skb && !before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq),
"cleanup rbuf bug: copied %X seq %X rcvnxt %X\n",
tp->copied_seq, TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt);
if (inet_csk_ack_scheduled(sk)) {
const struct inet_connection_sock *icsk = inet_csk(sk);
@ -1623,6 +1617,17 @@ void tcp_cleanup_rbuf(struct sock *sk, int copied)
tcp_send_ack(sk);
}
void tcp_cleanup_rbuf(struct sock *sk, int copied)
{
struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);
struct tcp_sock *tp = tcp_sk(sk);
WARN(skb && !before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq),
"cleanup rbuf bug: copied %X seq %X rcvnxt %X\n",
tp->copied_seq, TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt);
__tcp_cleanup_rbuf(sk, copied);
}
static void tcp_eat_recv_skb(struct sock *sk, struct sk_buff *skb)
{
__skb_unlink(skb, &sk->sk_receive_queue);
@ -1756,34 +1761,26 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
if (sk->sk_state == TCP_LISTEN)
return -ENOTCONN;
while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
int used;
skb = tcp_recv_skb(sk, seq, &offset);
if (!skb)
return 0;
__skb_unlink(skb, &sk->sk_receive_queue);
used = recv_actor(sk, skb);
if (used <= 0) {
if (!copied)
copied = used;
break;
}
seq += used;
copied += used;
if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
consume_skb(skb);
__skb_unlink(skb, &sk->sk_receive_queue);
WARN_ON(!skb_set_owner_sk_safe(skb, sk));
copied = recv_actor(sk, skb);
if (copied >= 0) {
seq += copied;
if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
++seq;
break;
}
consume_skb(skb);
break;
}
consume_skb(skb);
WRITE_ONCE(tp->copied_seq, seq);
tcp_rcv_space_adjust(sk);
/* Clean up data we have read: This will do ACK frames. */
if (copied > 0)
tcp_cleanup_rbuf(sk, copied);
__tcp_cleanup_rbuf(sk, copied);
return copied;
}


@ -1517,7 +1517,7 @@ static void ip6_tnl_link_config(struct ip6_tnl *t)
* ip6_tnl_change() updates the tunnel parameters
**/
static int
static void
ip6_tnl_change(struct ip6_tnl *t, const struct __ip6_tnl_parm *p)
{
t->parms.laddr = p->laddr;
@ -1531,29 +1531,25 @@ ip6_tnl_change(struct ip6_tnl *t, const struct __ip6_tnl_parm *p)
t->parms.fwmark = p->fwmark;
dst_cache_reset(&t->dst_cache);
ip6_tnl_link_config(t);
return 0;
}
static int ip6_tnl_update(struct ip6_tnl *t, struct __ip6_tnl_parm *p)
static void ip6_tnl_update(struct ip6_tnl *t, struct __ip6_tnl_parm *p)
{
struct net *net = t->net;
struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
int err;
ip6_tnl_unlink(ip6n, t);
synchronize_net();
err = ip6_tnl_change(t, p);
ip6_tnl_change(t, p);
ip6_tnl_link(ip6n, t);
netdev_state_change(t->dev);
return err;
}
static int ip6_tnl0_update(struct ip6_tnl *t, struct __ip6_tnl_parm *p)
static void ip6_tnl0_update(struct ip6_tnl *t, struct __ip6_tnl_parm *p)
{
/* for default tnl0 device allow to change only the proto */
t->parms.proto = p->proto;
netdev_state_change(t->dev);
return 0;
}
static void
@ -1667,9 +1663,9 @@ ip6_tnl_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
} else
t = netdev_priv(dev);
if (dev == ip6n->fb_tnl_dev)
err = ip6_tnl0_update(t, &p1);
ip6_tnl0_update(t, &p1);
else
err = ip6_tnl_update(t, &p1);
ip6_tnl_update(t, &p1);
}
if (!IS_ERR(t)) {
err = 0;
@ -2091,7 +2087,8 @@ static int ip6_tnl_changelink(struct net_device *dev, struct nlattr *tb[],
} else
t = netdev_priv(dev);
return ip6_tnl_update(t, &p);
ip6_tnl_update(t, &p);
return 0;
}
static void ip6_tnl_dellink(struct net_device *dev, struct list_head *head)


@ -1378,6 +1378,9 @@ static void ndisc_router_discovery(struct sk_buff *skb)
if (!rt && lifetime) {
ND_PRINTK(3, info, "RA: adding default router\n");
if (neigh)
neigh_release(neigh);
rt = rt6_add_dflt_router(net, &ipv6_hdr(skb)->saddr,
skb->dev, pref, defrtr_usr_metric);
if (!rt) {


@ -144,7 +144,6 @@ config NF_CONNTRACK_ZONES
config NF_CONNTRACK_PROCFS
bool "Supply CT list in procfs (OBSOLETE)"
default y
depends on PROC_FS
help
This option enables for the list of known conntrack entries


@ -34,11 +34,6 @@ MODULE_DESCRIPTION("ftp connection tracking helper");
MODULE_ALIAS("ip_conntrack_ftp");
MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
/* This is slow, but it's simple. --RR */
static char *ftp_buffer;
static DEFINE_SPINLOCK(nf_ftp_lock);
#define MAX_PORTS 8
static u_int16_t ports[MAX_PORTS];
static unsigned int ports_c;
@ -398,6 +393,9 @@ static int help(struct sk_buff *skb,
return NF_ACCEPT;
}
if (unlikely(skb_linearize(skb)))
return NF_DROP;
th = skb_header_pointer(skb, protoff, sizeof(_tcph), &_tcph);
if (th == NULL)
return NF_ACCEPT;
@ -411,12 +409,8 @@ static int help(struct sk_buff *skb,
}
datalen = skb->len - dataoff;
spin_lock_bh(&nf_ftp_lock);
fb_ptr = skb_header_pointer(skb, dataoff, datalen, ftp_buffer);
if (!fb_ptr) {
spin_unlock_bh(&nf_ftp_lock);
return NF_ACCEPT;
}
spin_lock_bh(&ct->lock);
fb_ptr = skb->data + dataoff;
ends_in_nl = (fb_ptr[datalen - 1] == '\n');
seq = ntohl(th->seq) + datalen;
@ -544,7 +538,7 @@ out_update_nl:
if (ends_in_nl)
update_nl_seq(ct, seq, ct_ftp_info, dir, skb);
out:
spin_unlock_bh(&nf_ftp_lock);
spin_unlock_bh(&ct->lock);
return ret;
}
@ -571,7 +565,6 @@ static const struct nf_conntrack_expect_policy ftp_exp_policy = {
static void __exit nf_conntrack_ftp_fini(void)
{
nf_conntrack_helpers_unregister(ftp, ports_c * 2);
kfree(ftp_buffer);
}
static int __init nf_conntrack_ftp_init(void)
@ -580,10 +573,6 @@ static int __init nf_conntrack_ftp_init(void)
NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_ftp_master));
ftp_buffer = kmalloc(65536, GFP_KERNEL);
if (!ftp_buffer)
return -ENOMEM;
if (ports_c == 0)
ports[ports_c++] = FTP_PORT;
@ -603,7 +592,6 @@ static int __init nf_conntrack_ftp_init(void)
ret = nf_conntrack_helpers_register(ftp, ports_c * 2);
if (ret < 0) {
pr_err("failed to register helpers\n");
kfree(ftp_buffer);
return ret;
}
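
The FTP helper change above drops the shared 64 kB bounce buffer and its global nf_ftp_lock: the skb is linearized and the payload is parsed in place under the per-conntrack ct->lock, so skbs larger than 64 kB (BIG TCP) no longer overflow a fixed buffer. A minimal sketch of that access pattern, with an illustrative function name rather than the real helper:

#include <linux/skbuff.h>
#include <linux/netfilter.h>

/* Sketch: parse the TCP payload in place after linearizing the skb,
 * instead of copying it into a fixed-size static buffer.
 */
static int parse_payload_in_place(struct sk_buff *skb, unsigned int dataoff)
{
	unsigned char *fb_ptr;
	unsigned int datalen;

	if (unlikely(skb_linearize(skb)))	/* may fail under memory pressure */
		return NF_DROP;

	if (dataoff >= skb->len)
		return NF_ACCEPT;

	datalen = skb->len - dataoff;
	fb_ptr = skb->data + dataoff;		/* valid now that the skb is linear */

	/* ... scan fb_ptr[0..datalen - 1] for FTP commands ... */
	return NF_ACCEPT;
}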

@ -34,6 +34,8 @@
#include <net/netfilter/nf_conntrack_zones.h>
#include <linux/netfilter/nf_conntrack_h323.h>
#define H323_MAX_SIZE 65535
/* Parameters */
static unsigned int default_rrq_ttl __read_mostly = 300;
module_param(default_rrq_ttl, uint, 0600);
@ -86,6 +88,9 @@ static int get_tpkt_data(struct sk_buff *skb, unsigned int protoff,
if (tcpdatalen <= 0) /* No TCP data */
goto clear_out;
if (tcpdatalen > H323_MAX_SIZE)
tcpdatalen = H323_MAX_SIZE;
if (*data == NULL) { /* first TPKT */
/* Get first TPKT pointer */
tpkt = skb_header_pointer(skb, tcpdataoff, tcpdatalen,
@ -1169,6 +1174,9 @@ static unsigned char *get_udp_data(struct sk_buff *skb, unsigned int protoff,
if (dataoff >= skb->len)
return NULL;
*datalen = skb->len - dataoff;
if (*datalen > H323_MAX_SIZE)
*datalen = H323_MAX_SIZE;
return skb_header_pointer(skb, dataoff, *datalen, h323_buffer);
}
@ -1770,7 +1778,7 @@ static int __init nf_conntrack_h323_init(void)
NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_h323_master));
h323_buffer = kmalloc(65536, GFP_KERNEL);
h323_buffer = kmalloc(H323_MAX_SIZE + 1, GFP_KERNEL);
if (!h323_buffer)
return -ENOMEM;
ret = h323_helper_init();
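
The H.323 hunks cap the amount of payload copied out of the skb to H323_MAX_SIZE, so a BIG TCP skb larger than the helper's bounce buffer can no longer overrun it, and the buffer itself is now sized from the same constant. A standalone sketch of the clamp-before-copy pattern, essentially the get_udp_data() hunk shown above:

#include <linux/skbuff.h>

#define H323_MAX_SIZE 65535

/* Sketch: never ask skb_header_pointer() for more bytes than the
 * destination buffer can hold.
 */
static void *get_bounded_data(struct sk_buff *skb, unsigned int dataoff,
			      int *datalen, void *buffer)
{
	if (dataoff >= skb->len)
		return NULL;

	*datalen = skb->len - dataoff;
	if (*datalen > H323_MAX_SIZE)
		*datalen = H323_MAX_SIZE;

	return skb_header_pointer(skb, dataoff, *datalen, buffer);
}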

@ -39,6 +39,7 @@ unsigned int (*nf_nat_irc_hook)(struct sk_buff *skb,
EXPORT_SYMBOL_GPL(nf_nat_irc_hook);
#define HELPER_NAME "irc"
#define MAX_SEARCH_SIZE 4095
MODULE_AUTHOR("Harald Welte <laforge@netfilter.org>");
MODULE_DESCRIPTION("IRC (DCC) connection tracking helper");
@ -121,6 +122,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
int i, ret = NF_ACCEPT;
char *addr_beg_p, *addr_end_p;
typeof(nf_nat_irc_hook) nf_nat_irc;
unsigned int datalen;
/* If packet is coming from IRC server */
if (dir == IP_CT_DIR_REPLY)
@ -140,8 +142,12 @@ static int help(struct sk_buff *skb, unsigned int protoff,
if (dataoff >= skb->len)
return NF_ACCEPT;
datalen = skb->len - dataoff;
if (datalen > MAX_SEARCH_SIZE)
datalen = MAX_SEARCH_SIZE;
spin_lock_bh(&irc_buffer_lock);
ib_ptr = skb_header_pointer(skb, dataoff, skb->len - dataoff,
ib_ptr = skb_header_pointer(skb, dataoff, datalen,
irc_buffer);
if (!ib_ptr) {
spin_unlock_bh(&irc_buffer_lock);
@ -149,7 +155,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
}
data = ib_ptr;
data_limit = ib_ptr + skb->len - dataoff;
data_limit = ib_ptr + datalen;
/* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24
* 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */
@ -251,7 +257,7 @@ static int __init nf_conntrack_irc_init(void)
irc_exp_policy.max_expected = max_dcc_channels;
irc_exp_policy.timeout = dcc_timeout;
irc_buffer = kmalloc(65536, GFP_KERNEL);
irc_buffer = kmalloc(MAX_SEARCH_SIZE + 1, GFP_KERNEL);
if (!irc_buffer)
return -ENOMEM;

@ -34,10 +34,6 @@ MODULE_AUTHOR("Michal Schmidt <mschmidt@redhat.com>");
MODULE_DESCRIPTION("SANE connection tracking helper");
MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
static char *sane_buffer;
static DEFINE_SPINLOCK(nf_sane_lock);
#define MAX_PORTS 8
static u_int16_t ports[MAX_PORTS];
static unsigned int ports_c;
@ -67,14 +63,16 @@ static int help(struct sk_buff *skb,
unsigned int dataoff, datalen;
const struct tcphdr *th;
struct tcphdr _tcph;
void *sb_ptr;
int ret = NF_ACCEPT;
int dir = CTINFO2DIR(ctinfo);
struct nf_ct_sane_master *ct_sane_info = nfct_help_data(ct);
struct nf_conntrack_expect *exp;
struct nf_conntrack_tuple *tuple;
struct sane_request *req;
struct sane_reply_net_start *reply;
union {
struct sane_request req;
struct sane_reply_net_start repl;
} buf;
/* Until there's been traffic both ways, don't look in packets. */
if (ctinfo != IP_CT_ESTABLISHED &&
@ -92,59 +90,62 @@ static int help(struct sk_buff *skb,
return NF_ACCEPT;
datalen = skb->len - dataoff;
spin_lock_bh(&nf_sane_lock);
sb_ptr = skb_header_pointer(skb, dataoff, datalen, sane_buffer);
if (!sb_ptr) {
spin_unlock_bh(&nf_sane_lock);
return NF_ACCEPT;
}
if (dir == IP_CT_DIR_ORIGINAL) {
if (datalen != sizeof(struct sane_request))
goto out;
const struct sane_request *req;
if (datalen != sizeof(struct sane_request))
return NF_ACCEPT;
req = skb_header_pointer(skb, dataoff, datalen, &buf.req);
if (!req)
return NF_ACCEPT;
req = sb_ptr;
if (req->RPC_code != htonl(SANE_NET_START)) {
/* Not an interesting command */
ct_sane_info->state = SANE_STATE_NORMAL;
goto out;
WRITE_ONCE(ct_sane_info->state, SANE_STATE_NORMAL);
return NF_ACCEPT;
}
/* We're interested in the next reply */
ct_sane_info->state = SANE_STATE_START_REQUESTED;
goto out;
WRITE_ONCE(ct_sane_info->state, SANE_STATE_START_REQUESTED);
return NF_ACCEPT;
}
/* IP_CT_DIR_REPLY */
/* Is it a reply to an uninteresting command? */
if (ct_sane_info->state != SANE_STATE_START_REQUESTED)
goto out;
if (READ_ONCE(ct_sane_info->state) != SANE_STATE_START_REQUESTED)
return NF_ACCEPT;
/* It's a reply to SANE_NET_START. */
ct_sane_info->state = SANE_STATE_NORMAL;
WRITE_ONCE(ct_sane_info->state, SANE_STATE_NORMAL);
if (datalen < sizeof(struct sane_reply_net_start)) {
pr_debug("NET_START reply too short\n");
goto out;
return NF_ACCEPT;
}
reply = sb_ptr;
datalen = sizeof(struct sane_reply_net_start);
reply = skb_header_pointer(skb, dataoff, datalen, &buf.repl);
if (!reply)
return NF_ACCEPT;
if (reply->status != htonl(SANE_STATUS_SUCCESS)) {
/* saned refused the command */
pr_debug("unsuccessful SANE_STATUS = %u\n",
ntohl(reply->status));
goto out;
return NF_ACCEPT;
}
/* Invalid saned reply? Ignore it. */
if (reply->zero != 0)
goto out;
return NF_ACCEPT;
exp = nf_ct_expect_alloc(ct);
if (exp == NULL) {
nf_ct_helper_log(skb, ct, "cannot alloc expectation");
ret = NF_DROP;
goto out;
return NF_DROP;
}
tuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
@ -162,9 +163,6 @@ static int help(struct sk_buff *skb,
}
nf_ct_expect_put(exp);
out:
spin_unlock_bh(&nf_sane_lock);
return ret;
}
@ -178,7 +176,6 @@ static const struct nf_conntrack_expect_policy sane_exp_policy = {
static void __exit nf_conntrack_sane_fini(void)
{
nf_conntrack_helpers_unregister(sane, ports_c * 2);
kfree(sane_buffer);
}
static int __init nf_conntrack_sane_init(void)
@ -187,10 +184,6 @@ static int __init nf_conntrack_sane_init(void)
NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_sane_master));
sane_buffer = kmalloc(65536, GFP_KERNEL);
if (!sane_buffer)
return -ENOMEM;
if (ports_c == 0)
ports[ports_c++] = SANE_PORT;
@ -210,7 +203,6 @@ static int __init nf_conntrack_sane_init(void)
ret = nf_conntrack_helpers_register(sane, ports_c * 2);
if (ret < 0) {
pr_err("failed to register helpers\n");
kfree(sane_buffer);
return ret;
}
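
The SANE requests and replies of interest are small fixed-size structures, so the helper above no longer needs the shared sane_buffer or nf_sane_lock: it copies just those structures into an on-stack union via skb_header_pointer() and switches the per-connection state to READ_ONCE()/WRITE_ONCE(). A sketch of reading a fixed-size header this way; the struct layout here is illustrative only:

#include <linux/types.h>
#include <linux/skbuff.h>

struct sketch_request {			/* stand-in for struct sane_request */
	__be32 RPC_code;
	__be32 handle;
};

/* Sketch: copy at most sizeof(struct sketch_request) bytes into an
 * on-stack buffer; no shared bounce buffer means no global lock.
 */
static bool is_start_request(struct sk_buff *skb, unsigned int dataoff,
			     __be32 start_code)
{
	struct sketch_request _buf;
	const struct sketch_request *req;

	req = skb_header_pointer(skb, dataoff, sizeof(_buf), &_buf);
	if (!req)
		return false;

	return req->RPC_code == start_code;
}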

@ -889,7 +889,7 @@ static int nf_tables_dump_tables(struct sk_buff *skb,
rcu_read_lock();
nft_net = nft_pernet(net);
cb->seq = nft_net->base_seq;
cb->seq = READ_ONCE(nft_net->base_seq);
list_for_each_entry_rcu(table, &nft_net->tables, list) {
if (family != NFPROTO_UNSPEC && family != table->family)
@ -1705,7 +1705,7 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
rcu_read_lock();
nft_net = nft_pernet(net);
cb->seq = nft_net->base_seq;
cb->seq = READ_ONCE(nft_net->base_seq);
list_for_each_entry_rcu(table, &nft_net->tables, list) {
if (family != NFPROTO_UNSPEC && family != table->family)
@ -3149,7 +3149,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
rcu_read_lock();
nft_net = nft_pernet(net);
cb->seq = nft_net->base_seq;
cb->seq = READ_ONCE(nft_net->base_seq);
list_for_each_entry_rcu(table, &nft_net->tables, list) {
if (family != NFPROTO_UNSPEC && family != table->family)
@ -3907,7 +3907,7 @@ cont:
list_for_each_entry(i, &ctx->table->sets, list) {
int tmp;
if (!nft_is_active_next(ctx->net, set))
if (!nft_is_active_next(ctx->net, i))
continue;
if (!sscanf(i->name, name, &tmp))
continue;
@ -4133,7 +4133,7 @@ static int nf_tables_dump_sets(struct sk_buff *skb, struct netlink_callback *cb)
rcu_read_lock();
nft_net = nft_pernet(net);
cb->seq = nft_net->base_seq;
cb->seq = READ_ONCE(nft_net->base_seq);
list_for_each_entry_rcu(table, &nft_net->tables, list) {
if (ctx->family != NFPROTO_UNSPEC &&
@ -4451,6 +4451,11 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
err = nf_tables_set_desc_parse(&desc, nla[NFTA_SET_DESC]);
if (err < 0)
return err;
if (desc.field_count > 1 && !(flags & NFT_SET_CONCAT))
return -EINVAL;
} else if (flags & NFT_SET_CONCAT) {
return -EINVAL;
}
if (nla[NFTA_SET_EXPR] || nla[NFTA_SET_EXPRESSIONS])
@ -5061,6 +5066,8 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
rcu_read_lock();
nft_net = nft_pernet(net);
cb->seq = READ_ONCE(nft_net->base_seq);
list_for_each_entry_rcu(table, &nft_net->tables, list) {
if (dump_ctx->ctx.family != NFPROTO_UNSPEC &&
dump_ctx->ctx.family != table->family)
@ -5196,6 +5203,9 @@ static int nft_setelem_parse_flags(const struct nft_set *set,
if (!(set->flags & NFT_SET_INTERVAL) &&
*flags & NFT_SET_ELEM_INTERVAL_END)
return -EINVAL;
if ((*flags & (NFT_SET_ELEM_INTERVAL_END | NFT_SET_ELEM_CATCHALL)) ==
(NFT_SET_ELEM_INTERVAL_END | NFT_SET_ELEM_CATCHALL))
return -EINVAL;
return 0;
}
@ -5599,7 +5609,7 @@ int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
err = nft_expr_clone(expr, set->exprs[i]);
if (err < 0) {
nft_expr_destroy(ctx, expr);
kfree(expr);
goto err_expr;
}
expr_array[i] = expr;
@ -5842,6 +5852,24 @@ static void nft_setelem_remove(const struct net *net,
set->ops->remove(net, set, elem);
}
static bool nft_setelem_valid_key_end(const struct nft_set *set,
struct nlattr **nla, u32 flags)
{
if ((set->flags & (NFT_SET_CONCAT | NFT_SET_INTERVAL)) ==
(NFT_SET_CONCAT | NFT_SET_INTERVAL)) {
if (flags & NFT_SET_ELEM_INTERVAL_END)
return false;
if (!nla[NFTA_SET_ELEM_KEY_END] &&
!(flags & NFT_SET_ELEM_CATCHALL))
return false;
} else {
if (nla[NFTA_SET_ELEM_KEY_END])
return false;
}
return true;
}
static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
const struct nlattr *attr, u32 nlmsg_flags)
{
@ -5892,6 +5920,18 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
return -EINVAL;
}
if (set->flags & NFT_SET_OBJECT) {
if (!nla[NFTA_SET_ELEM_OBJREF] &&
!(flags & NFT_SET_ELEM_INTERVAL_END))
return -EINVAL;
} else {
if (nla[NFTA_SET_ELEM_OBJREF])
return -EINVAL;
}
if (!nft_setelem_valid_key_end(set, nla, flags))
return -EINVAL;
if ((flags & NFT_SET_ELEM_INTERVAL_END) &&
(nla[NFTA_SET_ELEM_DATA] ||
nla[NFTA_SET_ELEM_OBJREF] ||
@ -5899,6 +5939,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
nla[NFTA_SET_ELEM_EXPIRATION] ||
nla[NFTA_SET_ELEM_USERDATA] ||
nla[NFTA_SET_ELEM_EXPR] ||
nla[NFTA_SET_ELEM_KEY_END] ||
nla[NFTA_SET_ELEM_EXPRESSIONS]))
return -EINVAL;
@ -6029,10 +6070,6 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
}
if (nla[NFTA_SET_ELEM_OBJREF] != NULL) {
if (!(set->flags & NFT_SET_OBJECT)) {
err = -EINVAL;
goto err_parse_key_end;
}
obj = nft_obj_lookup(ctx->net, ctx->table,
nla[NFTA_SET_ELEM_OBJREF],
set->objtype, genmask);
@ -6325,6 +6362,9 @@ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
if (!nla[NFTA_SET_ELEM_KEY] && !(flags & NFT_SET_ELEM_CATCHALL))
return -EINVAL;
if (!nft_setelem_valid_key_end(set, nla, flags))
return -EINVAL;
nft_set_ext_prepare(&tmpl);
if (flags != 0) {
@ -6941,7 +6981,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
rcu_read_lock();
nft_net = nft_pernet(net);
cb->seq = nft_net->base_seq;
cb->seq = READ_ONCE(nft_net->base_seq);
list_for_each_entry_rcu(table, &nft_net->tables, list) {
if (family != NFPROTO_UNSPEC && family != table->family)
@ -7873,7 +7913,7 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb,
rcu_read_lock();
nft_net = nft_pernet(net);
cb->seq = nft_net->base_seq;
cb->seq = READ_ONCE(nft_net->base_seq);
list_for_each_entry_rcu(table, &nft_net->tables, list) {
if (family != NFPROTO_UNSPEC && family != table->family)
@ -8806,6 +8846,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
struct nft_trans_elem *te;
struct nft_chain *chain;
struct nft_table *table;
unsigned int base_seq;
LIST_HEAD(adl);
int err;
@ -8855,9 +8896,12 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
* Bump generation counter, invalidate any dump in progress.
* Cannot fail after this point.
*/
while (++nft_net->base_seq == 0)
base_seq = READ_ONCE(nft_net->base_seq);
while (++base_seq == 0)
;
WRITE_ONCE(nft_net->base_seq, base_seq);
/* step 3. Start new generation, rules_gen_X now in use. */
net->nft.gencursor = nft_gencursor_next(net);
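
The dump callbacks earlier in this file sample base_seq with READ_ONCE() while this commit path updates it with WRITE_ONCE(); paired this way the generation counter can be read from the RCU dump side without taking the commit mutex and without a data race. A sketch of the pairing, using a hypothetical per-netns structure in place of the real one:

#include <linux/compiler.h>

struct pernet_state {			/* stand-in for the per-netns nft state */
	unsigned int base_seq;
};

/* Commit side: bump the generation, skipping 0 which is reserved. */
static void bump_generation(struct pernet_state *s)
{
	unsigned int seq = READ_ONCE(s->base_seq);

	while (++seq == 0)
		;
	WRITE_ONCE(s->base_seq, seq);
}

/* Dump side: sample once; a later change invalidates the dump. */
static unsigned int sample_generation(const struct pernet_state *s)
{
	return READ_ONCE(s->base_seq);
}
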
@ -9419,13 +9463,9 @@ static int nf_tables_check_loops(const struct nft_ctx *ctx,
break;
}
}
cond_resched();
}
list_for_each_entry(set, &ctx->table->sets, list) {
cond_resched();
if (!nft_is_active_next(ctx->net, set))
continue;
if (!(set->flags & NFT_SET_MAP) ||

@ -44,6 +44,10 @@ MODULE_DESCRIPTION("Netfilter messages via netlink socket");
static unsigned int nfnetlink_pernet_id __read_mostly;
#ifdef CONFIG_NF_CONNTRACK_EVENTS
static DEFINE_SPINLOCK(nfnl_grp_active_lock);
#endif
struct nfnl_net {
struct sock *nfnl;
};
@ -654,6 +658,44 @@ static void nfnetlink_rcv(struct sk_buff *skb)
netlink_rcv_skb(skb, nfnetlink_rcv_msg);
}
static void nfnetlink_bind_event(struct net *net, unsigned int group)
{
#ifdef CONFIG_NF_CONNTRACK_EVENTS
int type, group_bit;
u8 v;
/* All NFNLGRP_CONNTRACK_* group bits fit into u8.
* The other groups are not relevant and can be ignored.
*/
if (group >= 8)
return;
type = nfnl_group2type[group];
switch (type) {
case NFNL_SUBSYS_CTNETLINK:
break;
case NFNL_SUBSYS_CTNETLINK_EXP:
break;
default:
return;
}
group_bit = (1 << group);
spin_lock(&nfnl_grp_active_lock);
v = READ_ONCE(net->ct.ctnetlink_has_listener);
if ((v & group_bit) == 0) {
v |= group_bit;
/* read concurrently without nfnl_grp_active_lock held. */
WRITE_ONCE(net->ct.ctnetlink_has_listener, v);
}
spin_unlock(&nfnl_grp_active_lock);
#endif
}
static int nfnetlink_bind(struct net *net, int group)
{
const struct nfnetlink_subsystem *ss;
@ -670,28 +712,45 @@ static int nfnetlink_bind(struct net *net, int group)
if (!ss)
request_module_nowait("nfnetlink-subsys-%d", type);
#ifdef CONFIG_NF_CONNTRACK_EVENTS
if (type == NFNL_SUBSYS_CTNETLINK) {
nfnl_lock(NFNL_SUBSYS_CTNETLINK);
WRITE_ONCE(net->ct.ctnetlink_has_listener, true);
nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
}
#endif
nfnetlink_bind_event(net, group);
return 0;
}
static void nfnetlink_unbind(struct net *net, int group)
{
#ifdef CONFIG_NF_CONNTRACK_EVENTS
int type, group_bit;
if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX)
return;
if (nfnl_group2type[group] == NFNL_SUBSYS_CTNETLINK) {
nfnl_lock(NFNL_SUBSYS_CTNETLINK);
if (!nfnetlink_has_listeners(net, group))
WRITE_ONCE(net->ct.ctnetlink_has_listener, false);
nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
type = nfnl_group2type[group];
switch (type) {
case NFNL_SUBSYS_CTNETLINK:
break;
case NFNL_SUBSYS_CTNETLINK_EXP:
break;
default:
return;
}
/* ctnetlink_has_listener is u8 */
if (group >= 8)
return;
group_bit = (1 << group);
spin_lock(&nfnl_grp_active_lock);
if (!nfnetlink_has_listeners(net, group)) {
u8 v = READ_ONCE(net->ct.ctnetlink_has_listener);
v &= ~group_bit;
/* read concurrently without nfnl_grp_active_lock held. */
WRITE_ONCE(net->ct.ctnetlink_has_listener, v);
}
spin_unlock(&nfnl_grp_active_lock);
#endif
}
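
Listener tracking for the conntrack and conntrack-expect groups is now a bit per group in the u8 ctnetlink_has_listener field, with writers serialized by nfnl_grp_active_lock and the event path reading the byte locklessly; the previous bool only covered NFNL_SUBSYS_CTNETLINK, which is why expectation events stopped being delivered. A sketch of the update pattern with illustrative names (the real unbind path additionally checks nfnetlink_has_listeners() before clearing a bit):

#include <linux/spinlock.h>
#include <linux/compiler.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(listener_lock);
static u8 listener_bits;		/* one bit per tracked netlink group (< 8) */

/* Sketch: writers take the lock; the event fast path only needs a
 * READ_ONCE() of listener_bits to decide whether to build a message.
 */
static void set_listener_bit(unsigned int group, bool active)
{
	u8 bit, v;

	if (group >= 8)			/* only the low groups fit in a u8 */
		return;
	bit = 1 << group;

	spin_lock(&listener_lock);
	v = READ_ONCE(listener_bits);
	v = active ? (v | bit) : (v & ~bit);
	WRITE_ONCE(listener_bits, v);
	spin_unlock(&listener_lock);
}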

@ -1174,13 +1174,17 @@ static int ctrl_dumppolicy_start(struct netlink_callback *cb)
op.policy,
op.maxattr);
if (err)
return err;
goto err_free_state;
}
}
if (!ctx->state)
return -ENODATA;
return 0;
err_free_state:
netlink_policy_dump_free(ctx->state);
return err;
}
static void *ctrl_dumppolicy_prep(struct sk_buff *skb,

@ -144,7 +144,7 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
err = add_policy(&state, policy, maxtype);
if (err)
return err;
goto err_try_undo;
for (policy_idx = 0;
policy_idx < state->n_alloc && state->policies[policy_idx].policy;
@ -164,7 +164,7 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
policy[type].nested_policy,
policy[type].len);
if (err)
return err;
goto err_try_undo;
break;
default:
break;
@ -174,6 +174,16 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
*pstate = state;
return 0;
err_try_undo:
/* Try to preserve reasonable unwind semantics - if we're starting from
* scratch clean up fully, otherwise record what we got and caller will.
*/
if (!*pstate)
netlink_policy_dump_free(state);
else
*pstate = state;
return err;
}
static bool

@ -78,11 +78,6 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
struct qrtr_mhi_dev *qdev;
int rc;
/* start channels */
rc = mhi_prepare_for_transfer_autoqueue(mhi_dev);
if (rc)
return rc;
qdev = devm_kzalloc(&mhi_dev->dev, sizeof(*qdev), GFP_KERNEL);
if (!qdev)
return -ENOMEM;
@ -96,6 +91,13 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
if (rc)
return rc;
/* start channels */
rc = mhi_prepare_for_transfer_autoqueue(mhi_dev);
if (rc) {
qrtr_endpoint_unregister(&qdev->ep);
return rc;
}
dev_dbg(qdev->dev, "Qualcomm MHI QRTR driver probed\n");
return 0;

@ -363,6 +363,7 @@ static int acquire_refill(struct rds_connection *conn)
static void release_refill(struct rds_connection *conn)
{
clear_bit(RDS_RECV_REFILL, &conn->c_flags);
smp_mb__after_atomic();
/* We don't use wait_on_bit()/wake_up_bit() because our waking is in a
* hot path and finding waiters is very rare. We don't want to walk
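
The rds hunk adds smp_mb__after_atomic() between clearing RDS_RECV_REFILL and the waiter check that follows in this function: clear_bit() is not a full barrier, so without it the store could be reordered past the check and a sleeping refiller could be missed. A sketch of the generic clear-then-wake pattern, with illustrative names:

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/wait.h>

/* Sketch: order the flag clear before looking for waiters, so a task
 * that saw the bit still set and went to sleep is guaranteed a wakeup.
 */
static void release_flag(unsigned long *flags, wait_queue_head_t *waitq)
{
	clear_bit(0, flags);
	smp_mb__after_atomic();		/* pairs with the sleeper's barrier */

	if (waitqueue_active(waitq))
		wake_up_all(waitq);
}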

@ -424,6 +424,11 @@ static int route4_set_parms(struct net *net, struct tcf_proto *tp,
return -EINVAL;
}
if (!nhandle) {
NL_SET_ERR_MSG(extack, "Replacing with handle of 0 is invalid");
return -EINVAL;
}
h1 = to_hash(nhandle);
b = rtnl_dereference(head->table[h1]);
if (!b) {
@ -477,6 +482,11 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
int err;
bool new = true;
if (!handle) {
NL_SET_ERR_MSG(extack, "Creating with handle of 0 is invalid");
return -EINVAL;
}
if (opt == NULL)
return handle ? -EINVAL : 0;

@ -291,8 +291,10 @@ static ssize_t rpc_sysfs_xprt_state_change(struct kobject *kobj,
int offline = 0, online = 0, remove = 0;
struct rpc_xprt_switch *xps = rpc_sysfs_xprt_kobj_get_xprt_switch(kobj);
if (!xprt)
return 0;
if (!xprt || !xps) {
count = 0;
goto out_put;
}
if (!strncmp(buf, "offline", 7))
offline = 1;

@ -2702,7 +2702,9 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
crypto_info->version != TLS_1_3_VERSION &&
!!(tfm->__crt_alg->cra_flags & CRYPTO_ALG_ASYNC);
tls_strp_init(&sw_ctx_rx->strp, sk);
rc = tls_strp_init(&sw_ctx_rx->strp, sk);
if (rc)
goto free_aead;
}
goto out;

@ -14,13 +14,17 @@
# nft_flowtable.sh -o8000 -l1500 -r2000
#
sfx=$(mktemp -u "XXXXXXXX")
ns1="ns1-$sfx"
ns2="ns2-$sfx"
nsr1="nsr1-$sfx"
nsr2="nsr2-$sfx"
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4
ret=0
ns1in=""
ns2in=""
nsin=""
ns1out=""
ns2out=""
@ -36,21 +40,19 @@ checktool (){
checktool "nft --version" "run test without nft tool"
checktool "ip -Version" "run test without ip tool"
checktool "which nc" "run test without nc (netcat)"
checktool "ip netns add nsr1" "create net namespace"
checktool "ip netns add $nsr1" "create net namespace $nsr1"
ip netns add ns1
ip netns add ns2
ip netns add nsr2
ip netns add $ns1
ip netns add $ns2
ip netns add $nsr2
cleanup() {
for i in 1 2; do
ip netns del ns$i
ip netns del nsr$i
done
ip netns del $ns1
ip netns del $ns2
ip netns del $nsr1
ip netns del $nsr2
rm -f "$ns1in" "$ns1out"
rm -f "$ns2in" "$ns2out"
rm -f "$nsin" "$ns1out" "$ns2out"
[ $log_netns -eq 0 ] && sysctl -q net.netfilter.nf_log_all_netns=$log_netns
}
@ -59,22 +61,21 @@ trap cleanup EXIT
sysctl -q net.netfilter.nf_log_all_netns=1
ip link add veth0 netns nsr1 type veth peer name eth0 netns ns1
ip link add veth1 netns nsr1 type veth peer name veth0 netns nsr2
ip link add veth0 netns $nsr1 type veth peer name eth0 netns $ns1
ip link add veth1 netns $nsr1 type veth peer name veth0 netns $nsr2
ip link add veth1 netns nsr2 type veth peer name eth0 netns ns2
ip link add veth1 netns $nsr2 type veth peer name eth0 netns $ns2
for dev in lo veth0 veth1; do
for i in 1 2; do
ip -net nsr$i link set $dev up
done
ip -net $nsr1 link set $dev up
ip -net $nsr2 link set $dev up
done
ip -net nsr1 addr add 10.0.1.1/24 dev veth0
ip -net nsr1 addr add dead:1::1/64 dev veth0
ip -net $nsr1 addr add 10.0.1.1/24 dev veth0
ip -net $nsr1 addr add dead:1::1/64 dev veth0
ip -net nsr2 addr add 10.0.2.1/24 dev veth1
ip -net nsr2 addr add dead:2::1/64 dev veth1
ip -net $nsr2 addr add 10.0.2.1/24 dev veth1
ip -net $nsr2 addr add dead:2::1/64 dev veth1
# set different MTUs so we need to push packets coming from ns1 (large MTU)
# to ns2 (smaller MTU) to stack either to perform fragmentation (ip_no_pmtu_disc=1),
@ -106,85 +107,76 @@ do
esac
done
if ! ip -net nsr1 link set veth0 mtu $omtu; then
if ! ip -net $nsr1 link set veth0 mtu $omtu; then
exit 1
fi
ip -net ns1 link set eth0 mtu $omtu
ip -net $ns1 link set eth0 mtu $omtu
if ! ip -net nsr2 link set veth1 mtu $rmtu; then
if ! ip -net $nsr2 link set veth1 mtu $rmtu; then
exit 1
fi
ip -net ns2 link set eth0 mtu $rmtu
ip -net $ns2 link set eth0 mtu $rmtu
# transfer-net between nsr1 and nsr2.
# these addresses are not used for connections.
ip -net nsr1 addr add 192.168.10.1/24 dev veth1
ip -net nsr1 addr add fee1:2::1/64 dev veth1
ip -net $nsr1 addr add 192.168.10.1/24 dev veth1
ip -net $nsr1 addr add fee1:2::1/64 dev veth1
ip -net nsr2 addr add 192.168.10.2/24 dev veth0
ip -net nsr2 addr add fee1:2::2/64 dev veth0
ip -net $nsr2 addr add 192.168.10.2/24 dev veth0
ip -net $nsr2 addr add fee1:2::2/64 dev veth0
for i in 1 2; do
ip netns exec nsr$i sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
ip netns exec nsr$i sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null
for i in 0 1; do
ip netns exec $nsr1 sysctl net.ipv4.conf.veth$i.forwarding=1 > /dev/null
ip netns exec $nsr2 sysctl net.ipv4.conf.veth$i.forwarding=1 > /dev/null
done
ip -net ns$i link set lo up
ip -net ns$i link set eth0 up
ip -net ns$i addr add 10.0.$i.99/24 dev eth0
ip -net ns$i route add default via 10.0.$i.1
ip -net ns$i addr add dead:$i::99/64 dev eth0
ip -net ns$i route add default via dead:$i::1
if ! ip netns exec ns$i sysctl net.ipv4.tcp_no_metrics_save=1 > /dev/null; then
for ns in $ns1 $ns2;do
ip -net $ns link set lo up
ip -net $ns link set eth0 up
if ! ip netns exec $ns sysctl net.ipv4.tcp_no_metrics_save=1 > /dev/null; then
echo "ERROR: Check Originator/Responder values (problem during address addition)"
exit 1
fi
# don't set ip DF bit for first two tests
ip netns exec ns$i sysctl net.ipv4.ip_no_pmtu_disc=1 > /dev/null
ip netns exec $ns sysctl net.ipv4.ip_no_pmtu_disc=1 > /dev/null
done
ip -net nsr1 route add default via 192.168.10.2
ip -net nsr2 route add default via 192.168.10.1
ip -net $ns1 addr add 10.0.1.99/24 dev eth0
ip -net $ns2 addr add 10.0.2.99/24 dev eth0
ip -net $ns1 route add default via 10.0.1.1
ip -net $ns2 route add default via 10.0.2.1
ip -net $ns1 addr add dead:1::99/64 dev eth0
ip -net $ns2 addr add dead:2::99/64 dev eth0
ip -net $ns1 route add default via dead:1::1
ip -net $ns2 route add default via dead:2::1
ip netns exec nsr1 nft -f - <<EOF
ip -net $nsr1 route add default via 192.168.10.2
ip -net $nsr2 route add default via 192.168.10.1
ip netns exec $nsr1 nft -f - <<EOF
table inet filter {
flowtable f1 {
hook ingress priority 0
devices = { veth0, veth1 }
}
counter routed_orig { }
counter routed_repl { }
chain forward {
type filter hook forward priority 0; policy drop;
# flow offloaded? Tag ct with mark 1, so we can detect when it fails.
meta oif "veth1" tcp dport 12345 flow offload @f1 counter
meta oif "veth1" tcp dport 12345 ct mark set 1 flow add @f1 counter name routed_orig accept
# use packet size to trigger 'should be offloaded by now'.
# otherwise, if 'flow offload' expression never offloads, the
# test will pass.
tcp dport 12345 meta length gt 200 ct mark set 1 counter
# this turns off flow offloading internally, so expect packets again
tcp flags fin,rst ct mark set 0 accept
# this allows large packets from responder, we need this as long
# as PMTUd is off.
# This rule is deleted for the last test, when we expect PMTUd
# to kick in and ensure all packets meet mtu requirements.
meta length gt $lmtu accept comment something-to-grep-for
# next line blocks connection w.o. working offload.
# we only do this for reverse dir, because we expect packets to
# enter slow path due to MTU mismatch of veth0 and veth1.
tcp sport 12345 ct mark 1 counter log prefix "mark failure " drop
# count packets supposedly offloaded as per direction.
ct mark 1 counter name ct direction map { original : routed_orig, reply : routed_repl } accept
ct state established,related accept
# for packets that we can't offload yet, i.e. SYN (any ct that is not confirmed)
meta length lt 200 oif "veth1" tcp dport 12345 counter accept
meta nfproto ipv4 meta l4proto icmp accept
meta nfproto ipv6 meta l4proto icmpv6 accept
}
@ -197,30 +189,30 @@ if [ $? -ne 0 ]; then
fi
# test basic connectivity
if ! ip netns exec ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then
echo "ERROR: ns1 cannot reach ns2" 1>&2
if ! ip netns exec $ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then
echo "ERROR: $ns1 cannot reach ns2" 1>&2
exit 1
fi
if ! ip netns exec ns2 ping -c 1 -q 10.0.1.99 > /dev/null; then
echo "ERROR: ns2 cannot reach ns1" 1>&2
if ! ip netns exec $ns2 ping -c 1 -q 10.0.1.99 > /dev/null; then
echo "ERROR: $ns2 cannot reach $ns1" 1>&2
exit 1
fi
if [ $ret -eq 0 ];then
echo "PASS: netns routing/connectivity: ns1 can reach ns2"
echo "PASS: netns routing/connectivity: $ns1 can reach $ns2"
fi
ns1in=$(mktemp)
nsin=$(mktemp)
ns1out=$(mktemp)
ns2in=$(mktemp)
ns2out=$(mktemp)
make_file()
{
name=$1
SIZE=$((RANDOM % (1024 * 8)))
SIZE=$((RANDOM % (1024 * 128)))
SIZE=$((SIZE + (1024 * 8)))
TSIZE=$((SIZE * 1024))
dd if=/dev/urandom of="$name" bs=1024 count=$SIZE 2> /dev/null
@ -231,6 +223,38 @@ make_file()
dd if=/dev/urandom conv=notrunc of="$name" bs=1 count=$SIZE 2> /dev/null
}
check_counters()
{
local what=$1
local ok=1
local orig=$(ip netns exec $nsr1 nft reset counter inet filter routed_orig | grep packets)
local repl=$(ip netns exec $nsr1 nft reset counter inet filter routed_repl | grep packets)
local orig_cnt=${orig#*bytes}
local repl_cnt=${repl#*bytes}
local fs=$(du -sb $nsin)
local max_orig=${fs%%/*}
local max_repl=$((max_orig/4))
if [ $orig_cnt -gt $max_orig ];then
echo "FAIL: $what: original counter $orig_cnt exceeds expected value $max_orig" 1>&2
ret=1
ok=0
fi
if [ $repl_cnt -gt $max_repl ];then
echo "FAIL: $what: reply counter $repl_cnt exceeds expected value $max_repl" 1>&2
ret=1
ok=0
fi
if [ $ok -eq 1 ]; then
echo "PASS: $what"
fi
}
check_transfer()
{
in=$1
@ -255,11 +279,11 @@ test_tcp_forwarding_ip()
local dstport=$4
local lret=0
ip netns exec $nsb nc -w 5 -l -p 12345 < "$ns2in" > "$ns2out" &
ip netns exec $nsb nc -w 5 -l -p 12345 < "$nsin" > "$ns2out" &
lpid=$!
sleep 1
ip netns exec $nsa nc -w 4 "$dstip" "$dstport" < "$ns1in" > "$ns1out" &
ip netns exec $nsa nc -w 4 "$dstip" "$dstport" < "$nsin" > "$ns1out" &
cpid=$!
sleep 3
@ -274,11 +298,11 @@ test_tcp_forwarding_ip()
wait
if ! check_transfer "$ns1in" "$ns2out" "ns1 -> ns2"; then
if ! check_transfer "$nsin" "$ns2out" "ns1 -> ns2"; then
lret=1
fi
if ! check_transfer "$ns2in" "$ns1out" "ns1 <- ns2"; then
if ! check_transfer "$nsin" "$ns1out" "ns1 <- ns2"; then
lret=1
fi
@ -295,41 +319,59 @@ test_tcp_forwarding()
test_tcp_forwarding_nat()
{
local lret
local pmtu
test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345
lret=$?
pmtu=$3
what=$4
if [ $lret -eq 0 ] ; then
if [ $pmtu -eq 1 ] ;then
check_counters "flow offload for ns1/ns2 with masquerade and pmtu discovery $what"
else
echo "PASS: flow offload for ns1/ns2 with masquerade $what"
fi
test_tcp_forwarding_ip "$1" "$2" 10.6.6.6 1666
lret=$?
if [ $pmtu -eq 1 ] ;then
check_counters "flow offload for ns1/ns2 with dnat and pmtu discovery $what"
elif [ $lret -eq 0 ] ; then
echo "PASS: flow offload for ns1/ns2 with dnat $what"
fi
fi
return $lret
}
make_file "$ns1in"
make_file "$ns2in"
make_file "$nsin"
# First test:
# No PMTU discovery, nsr1 is expected to fragment packets from ns1 to ns2 as needed.
if test_tcp_forwarding ns1 ns2; then
# Due to MTU mismatch in both directions, all packets (except small packets like pure
# acks) have to be handled by normal forwarding path. Therefore, packet counters
# are not checked.
if test_tcp_forwarding $ns1 $ns2; then
echo "PASS: flow offloaded for ns1/ns2"
else
echo "FAIL: flow offload for ns1/ns2:" 1>&2
ip netns exec nsr1 nft list ruleset
ip netns exec $nsr1 nft list ruleset
ret=1
fi
# delete default route, i.e. ns2 won't be able to reach ns1 and
# will depend on ns1 being masqueraded in nsr1.
# expect ns1 has nsr1 address.
ip -net ns2 route del default via 10.0.2.1
ip -net ns2 route del default via dead:2::1
ip -net ns2 route add 192.168.10.1 via 10.0.2.1
ip -net $ns2 route del default via 10.0.2.1
ip -net $ns2 route del default via dead:2::1
ip -net $ns2 route add 192.168.10.1 via 10.0.2.1
# Second test:
# Same, but with NAT enabled.
ip netns exec nsr1 nft -f - <<EOF
# Same, but with NAT enabled. Same as in first test: we expect normal forward path
# to handle most packets.
ip netns exec $nsr1 nft -f - <<EOF
table ip nat {
chain prerouting {
type nat hook prerouting priority 0; policy accept;
@ -343,47 +385,45 @@ table ip nat {
}
EOF
if test_tcp_forwarding_nat ns1 ns2; then
echo "PASS: flow offloaded for ns1/ns2 with NAT"
else
if ! test_tcp_forwarding_nat $ns1 $ns2 0 ""; then
echo "FAIL: flow offload for ns1/ns2 with NAT" 1>&2
ip netns exec nsr1 nft list ruleset
ip netns exec $nsr1 nft list ruleset
ret=1
fi
# Third test:
# Same as second test, but with PMTU discovery enabled.
handle=$(ip netns exec nsr1 nft -a list table inet filter | grep something-to-grep-for | cut -d \# -f 2)
# Same as second test, but with PMTU discovery enabled. This
# means that we expect the fastpath to handle packets as soon
# as the endpoints adjust the packet size.
ip netns exec $ns1 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
ip netns exec $ns2 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
if ! ip netns exec nsr1 nft delete rule inet filter forward $handle; then
echo "FAIL: Could not delete large-packet accept rule"
exit 1
fi
# reset counters.
# With pmtu in-place we'll also check that nft counters
# are lower than file size and packets were forwarded via flowtable layer.
# For earlier tests (large mtus), packets cannot be handled via flowtable
# (except pure acks and other small packets).
ip netns exec $nsr1 nft reset counters table inet filter >/dev/null
ip netns exec ns1 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
ip netns exec ns2 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
if test_tcp_forwarding_nat ns1 ns2; then
echo "PASS: flow offloaded for ns1/ns2 with NAT and pmtu discovery"
else
if ! test_tcp_forwarding_nat $ns1 $ns2 1 ""; then
echo "FAIL: flow offload for ns1/ns2 with NAT and pmtu discovery" 1>&2
ip netns exec nsr1 nft list ruleset
ip netns exec $nsr1 nft list ruleset
fi
# Another test:
# Add bridge interface br0 to Router1, with NAT enabled.
ip -net nsr1 link add name br0 type bridge
ip -net nsr1 addr flush dev veth0
ip -net nsr1 link set up dev veth0
ip -net nsr1 link set veth0 master br0
ip -net nsr1 addr add 10.0.1.1/24 dev br0
ip -net nsr1 addr add dead:1::1/64 dev br0
ip -net nsr1 link set up dev br0
ip -net $nsr1 link add name br0 type bridge
ip -net $nsr1 addr flush dev veth0
ip -net $nsr1 link set up dev veth0
ip -net $nsr1 link set veth0 master br0
ip -net $nsr1 addr add 10.0.1.1/24 dev br0
ip -net $nsr1 addr add dead:1::1/64 dev br0
ip -net $nsr1 link set up dev br0
ip netns exec nsr1 sysctl net.ipv4.conf.br0.forwarding=1 > /dev/null
ip netns exec $nsr1 sysctl net.ipv4.conf.br0.forwarding=1 > /dev/null
# br0 with NAT enabled.
ip netns exec nsr1 nft -f - <<EOF
ip netns exec $nsr1 nft -f - <<EOF
flush table ip nat
table ip nat {
chain prerouting {
@ -398,59 +438,56 @@ table ip nat {
}
EOF
if test_tcp_forwarding_nat ns1 ns2; then
echo "PASS: flow offloaded for ns1/ns2 with bridge NAT"
else
if ! test_tcp_forwarding_nat $ns1 $ns2 1 "on bridge"; then
echo "FAIL: flow offload for ns1/ns2 with bridge NAT" 1>&2
ip netns exec nsr1 nft list ruleset
ip netns exec $nsr1 nft list ruleset
ret=1
fi
# Another test:
# Add bridge interface br0 to Router1, with NAT and VLAN.
ip -net nsr1 link set veth0 nomaster
ip -net nsr1 link set down dev veth0
ip -net nsr1 link add link veth0 name veth0.10 type vlan id 10
ip -net nsr1 link set up dev veth0
ip -net nsr1 link set up dev veth0.10
ip -net nsr1 link set veth0.10 master br0
ip -net $nsr1 link set veth0 nomaster
ip -net $nsr1 link set down dev veth0
ip -net $nsr1 link add link veth0 name veth0.10 type vlan id 10
ip -net $nsr1 link set up dev veth0
ip -net $nsr1 link set up dev veth0.10
ip -net $nsr1 link set veth0.10 master br0
ip -net ns1 addr flush dev eth0
ip -net ns1 link add link eth0 name eth0.10 type vlan id 10
ip -net ns1 link set eth0 up
ip -net ns1 link set eth0.10 up
ip -net ns1 addr add 10.0.1.99/24 dev eth0.10
ip -net ns1 route add default via 10.0.1.1
ip -net ns1 addr add dead:1::99/64 dev eth0.10
ip -net $ns1 addr flush dev eth0
ip -net $ns1 link add link eth0 name eth0.10 type vlan id 10
ip -net $ns1 link set eth0 up
ip -net $ns1 link set eth0.10 up
ip -net $ns1 addr add 10.0.1.99/24 dev eth0.10
ip -net $ns1 route add default via 10.0.1.1
ip -net $ns1 addr add dead:1::99/64 dev eth0.10
if test_tcp_forwarding_nat ns1 ns2; then
echo "PASS: flow offloaded for ns1/ns2 with bridge NAT and VLAN"
else
if ! test_tcp_forwarding_nat $ns1 $ns2 1 "bridge and VLAN"; then
echo "FAIL: flow offload for ns1/ns2 with bridge NAT and VLAN" 1>&2
ip netns exec nsr1 nft list ruleset
ip netns exec $nsr1 nft list ruleset
ret=1
fi
# restore test topology (remove bridge and VLAN)
ip -net nsr1 link set veth0 nomaster
ip -net nsr1 link set veth0 down
ip -net nsr1 link set veth0.10 down
ip -net nsr1 link delete veth0.10 type vlan
ip -net nsr1 link delete br0 type bridge
ip -net ns1 addr flush dev eth0.10
ip -net ns1 link set eth0.10 down
ip -net ns1 link set eth0 down
ip -net ns1 link delete eth0.10 type vlan
ip -net $nsr1 link set veth0 nomaster
ip -net $nsr1 link set veth0 down
ip -net $nsr1 link set veth0.10 down
ip -net $nsr1 link delete veth0.10 type vlan
ip -net $nsr1 link delete br0 type bridge
ip -net $ns1 addr flush dev eth0.10
ip -net $ns1 link set eth0.10 down
ip -net $ns1 link set eth0 down
ip -net $ns1 link delete eth0.10 type vlan
# restore address in ns1 and nsr1
ip -net ns1 link set eth0 up
ip -net ns1 addr add 10.0.1.99/24 dev eth0
ip -net ns1 route add default via 10.0.1.1
ip -net ns1 addr add dead:1::99/64 dev eth0
ip -net ns1 route add default via dead:1::1
ip -net nsr1 addr add 10.0.1.1/24 dev veth0
ip -net nsr1 addr add dead:1::1/64 dev veth0
ip -net nsr1 link set up dev veth0
ip -net $ns1 link set eth0 up
ip -net $ns1 addr add 10.0.1.99/24 dev eth0
ip -net $ns1 route add default via 10.0.1.1
ip -net $ns1 addr add dead:1::99/64 dev eth0
ip -net $ns1 route add default via dead:1::1
ip -net $nsr1 addr add 10.0.1.1/24 dev veth0
ip -net $nsr1 addr add dead:1::1/64 dev veth0
ip -net $nsr1 link set up dev veth0
KEY_SHA="0x"$(ps -xaf | sha1sum | cut -d " " -f 1)
KEY_AES="0x"$(ps -xaf | md5sum | cut -d " " -f 1)
@ -480,23 +517,23 @@ do_esp() {
}
do_esp nsr1 192.168.10.1 192.168.10.2 10.0.1.0/24 10.0.2.0/24 $SPI1 $SPI2
do_esp $nsr1 192.168.10.1 192.168.10.2 10.0.1.0/24 10.0.2.0/24 $SPI1 $SPI2
do_esp nsr2 192.168.10.2 192.168.10.1 10.0.2.0/24 10.0.1.0/24 $SPI2 $SPI1
do_esp $nsr2 192.168.10.2 192.168.10.1 10.0.2.0/24 10.0.1.0/24 $SPI2 $SPI1
ip netns exec nsr1 nft delete table ip nat
ip netns exec $nsr1 nft delete table ip nat
# restore default routes
ip -net ns2 route del 192.168.10.1 via 10.0.2.1
ip -net ns2 route add default via 10.0.2.1
ip -net ns2 route add default via dead:2::1
ip -net $ns2 route del 192.168.10.1 via 10.0.2.1
ip -net $ns2 route add default via 10.0.2.1
ip -net $ns2 route add default via dead:2::1
if test_tcp_forwarding ns1 ns2; then
echo "PASS: ipsec tunnel mode for ns1/ns2"
if test_tcp_forwarding $ns1 $ns2; then
check_counters "ipsec tunnel mode for ns1/ns2"
else
echo "FAIL: ipsec tunnel mode for ns1/ns2"
ip netns exec nsr1 nft list ruleset 1>&2
ip netns exec nsr1 cat /proc/net/xfrm_stat 1>&2
ip netns exec $nsr1 nft list ruleset 1>&2
ip netns exec $nsr1 cat /proc/net/xfrm_stat 1>&2
fi
exit $ret