Merge branch 'bridge-tx-fwd'

Vladimir Oltean says:

====================
Allow TX forwarding for the software bridge data path to be offloaded to capable devices

On RX, switchdev drivers can mark packets for the software bridge as
"already forwarded in hardware" via skb->offload_fwd_mark. This
instructs the nbp_switchdev_allowed_egress() function to perform
software forwarding of that packet only towards the bridge ports that
are not in the same hardware domain as the port on which the packet was
received.

This series extends the concept to TX, in the sense that we can trust
the accelerator to:
(a) look up its FDB (which is more or less in sync with the software
    bridge FDB) to select the destination ports for a packet
(b) replicate the frame in hardware in case it is a multicast/broadcast,
    instead of the software bridge having to clone it and send the
    clones to each net device one at a time. This reduces the bandwidth
    needed between the CPU and the accelerator, as well as the CPU time
    spent.
A sketch of the driver-side opt-in follows.
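
For instance, in the DSA conversion from this series, the opt-in boils
down to passing the new tx_fwd_offload bool to
switchdev_bridge_port_offload() at bridge join time (condensed from the
net/dsa/port.c hunk below, error handling omitted):

  /* true if the driver implements .port_bridge_tx_fwd_offload
   * and a bridge number could be allocated for this bridge
   */
  tx_fwd_offload = dsa_port_bridge_tx_fwd_offload(dp, br);

  err = switchdev_bridge_port_offload(brport_dev, dev, dp,
                                      &dsa_slave_switchdev_notifier,
                                      &dsa_slave_switchdev_blocking_notifier,
                                      tx_fwd_offload, extack);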

On the bridge side, this is done by augmenting
nbp_switchdev_allowed_egress() to also exclude the bridge ports which
have the tx_fwd_offload capability, if the skb has already been
transmitted to a port in their hardware domain.
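
Concretely, the egress check now also consults a per-skb bit mask of
hardware domains that the TX offload has already covered (slightly
condensed from the net/bridge/br_switchdev.c changes below):

  bool nbp_switchdev_allowed_egress(const struct net_bridge_port *p,
                                    const struct sk_buff *skb)
  {
          struct br_input_skb_cb *cb = BR_INPUT_SKB_CB(skb);

          /* Suppress the software clone if this port's hardware
           * domain was already covered on RX or via the TX offload.
           */
          return !test_bit(p->hwdom, &cb->fwd_hwdoms) &&
                 (!skb->offload_fwd_mark || cb->src_hwdom != p->hwdom);
  }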

In reality, the software bridge still looks up the FDB/MDB for every
frame; what is suppressed are the skb clones towards ports covered by
the offload, and the switchdev accelerator is expected to look up its
own FDB/MDB again. The offload is intended for injecting "data plane
packets" into the hardware, as opposed to "control plane packets" which
target a precise destination port.

Towards that goal, the bridge hands TX packets to such ports with
skb->offload_fwd_mark = true and with the VLAN tag always present, so
that the accelerator can forward according to that VLAN's broadcast
domain.
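
This is why br_handle_vlan() now keeps the VLAN tag whenever the skb
will take the TX forwarding offload path (from the bridge VLAN hunk
below):

  if (v->flags & BRIDGE_VLAN_INFO_UNTAGGED &&
      !br_switchdev_frame_uses_tx_fwd_offload(skb))
          __vlan_hwaccel_clear_tag(skb);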

This work is not intended to cater to switches which can inject control
plane packets to a bit mask of destination ports. I see that as a more
difficult task to accomplish, with potentially fewer benefits (it
provides only replication offload). The reason it is more difficult is
that struct sk_buff would probably need to be extended to contain a
list of struct net_device pointers that the packet must be replicated
to. Sending data plane packets avoids that issue by keeping the
hardware and software FDB more or less in sync and looking it up twice.
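
In DSA terms, that distinction is visible in the tagger: data plane
packets are injected with a FORWARD tag sourced from a virtual switch
representing the bridge, while control packets keep using FROM_CPU with
a precise destination (condensed from the net/dsa/tag_dsa.c changes
below):

  if (skb->offload_fwd_mark) {
          /* Data plane: the switch looks up its own FDB/MDB and
           * replicates on behalf of a virtual switch numbered past
           * the physical ones.
           */
          cmd = DSA_CMD_FORWARD;
          tag_dev = dst->last_switch + 1 + dp->bridge_num;
          tag_port = 0;
  } else {
          /* Control plane: target one precise destination port */
          cmd = DSA_CMD_FROM_CPU;
          tag_dev = dp->ds->index;
          tag_port = dp->index;
  }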

Additionally, the ability of the software bridge to request that data
plane packets be sent brings the opportunity for "dumb switches" to
support traffic termination to/from the bridge. Such switches (DSA or
otherwise) typically use control packets only for link-local traps, and
sending or receiving a control packet is an expensive operation.

For this class of switches, this patch series makes the difference
between supporting and not supporting local IP termination through a
VLAN-aware bridge, bridging with a foreign interface, bridging with
software upper interfaces like LAG, etc. So instead of telling them
"oh, what a dumb switch you are!", we can now tell them "oh, what a
stark contrast you have between the control and data plane!".

Patches 1-3 were tested on Turris MOX (3 mv88e6xxx switches in a daisy
chain topology) and on a second DSA driver to be added soon. Patches
4-5 were tested only on Turris MOX.

===========================================================

Changes in v5:
- make sure the static key is decremented on bridge port unoffload
- rename functions and variables so that the "tx_fwd_offload" string is
  easy to grep across the git tree
- simplify DSA core bookkeeping of the bridge_num

===========================================================

Changes in v4:

The biggest change compared to the previous series is not in the
patches themselves, but rather in their absence. Previously we were
replaying switchdev objects on the public notifier chain, but that was
a mistake in my reasoning and it was reverted for v4. Therefore, we now
pass the notifier blocks as arguments to
switchdev_bridge_port_offload() for all drivers. This alone gets rid of
7 patches compared to v3.

Other changes are:
- Take more care for the case where mlxsw leaves a VLAN or LAG upper
  that is a bridge port, make sure that switchdev_bridge_port_unoffload()
  gets called for that case
- A couple of DSA bug fixes
- Add change logs for all patches
- Copy all switchdev driver maintainers on the changes relevant to them

===========================================================

Message for v3:
https://patchwork.kernel.org/project/netdevbpf/cover/20210712152142.800651-1-vladimir.oltean@nxp.com/

In this submission I have introduced a "native switchdev" driver API to
signal whether the TX forwarding offload is supported or not. This
comes after a third person said that the macvlan offload framework used
for v1 and v2 was simply too convoluted.

This large patch set is submitted for discussion purposes (it is
provided in its entirety so it can be applied & tested on net-next).
It is only minimally tested, so I will not copy all switchdev driver
maintainers until we agree on the viability of this approach.

The major changes compared to v2:
- The introduction of switchdev_bridge_port_offload() and
  switchdev_bridge_port_unoffload() as two major API changes from the
  perspective of a switchdev driver. All drivers were converted to call
  these.
- Augment switchdev_bridge_port_{,un}offload to also handle the
  switchdev object replays on port join/leave.
- Augment switchdev_bridge_port_offload to also signal whether the TX
  forwarding offload is supported.

===========================================================

Message for v2:
https://patchwork.kernel.org/project/netdevbpf/cover/20210703115705.1034112-1-vladimir.oltean@nxp.com/

For this series I have taken Tobias' work from here:
https://patchwork.kernel.org/project/netdevbpf/cover/20210426170411.1789186-1-tobias@waldekranz.com/
and made the following changes:
- I collected and integrated (hopefully all of) Nikolay's, Ido's and my
  feedback on the bridge driver changes. Otherwise, the structure of the
  bridge changes is pretty much the same as Tobias left it.
- I basically rewrote the DSA infrastructure for the data plane
  forwarding offload, based on the commonalities with another switch
  driver for which I implemented this feature (not submitted here)
- I adapted mv88e6xxx to use the new infrastructure, hopefully it still
  works but I didn't test that
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
commit 356ae88f83 (David S. Miller, 2021-07-23 16:32:50 +01:00)
19 changed files with 352 additions and 26 deletions


@@ -1221,14 +1221,36 @@ static u16 mv88e6xxx_port_vlan(struct mv88e6xxx_chip *chip, int dev, int port)
bool found = false;
u16 pvlan;
list_for_each_entry(dp, &dst->ports, list) {
if (dp->ds->index == dev && dp->index == port) {
/* dev is a physical switch */
if (dev <= dst->last_switch) {
list_for_each_entry(dp, &dst->ports, list) {
if (dp->ds->index == dev && dp->index == port) {
/* dp might be a DSA link or a user port, so it
* might or might not have a bridge_dev
* pointer. Use the "found" variable for both
* cases.
*/
br = dp->bridge_dev;
found = true;
break;
}
}
/* dev is a virtual bridge */
} else {
list_for_each_entry(dp, &dst->ports, list) {
if (dp->bridge_num < 0)
continue;
if (dp->bridge_num + 1 + dst->last_switch != dev)
continue;
br = dp->bridge_dev;
found = true;
break;
}
}
/* Prevent frames from unknown switch or port */
/* Prevent frames from unknown switch or virtual bridge */
if (!found)
return 0;
@@ -1236,7 +1258,6 @@ static u16 mv88e6xxx_port_vlan(struct mv88e6xxx_chip *chip, int dev, int port)
if (dp->type == DSA_PORT_TYPE_CPU || dp->type == DSA_PORT_TYPE_DSA)
return mv88e6xxx_port_mask(chip);
br = dp->bridge_dev;
pvlan = 0;
/* Frames from user ports can egress any local DSA links and CPU ports,
@@ -2422,6 +2443,44 @@ static void mv88e6xxx_crosschip_bridge_leave(struct dsa_switch *ds,
mv88e6xxx_reg_unlock(chip);
}
/* Treat the software bridge as a virtual single-port switch behind the
* CPU and map in the PVT. First dst->last_switch elements are taken by
* physical switches, so start from beyond that range.
*/
static int mv88e6xxx_map_virtual_bridge_to_pvt(struct dsa_switch *ds,
int bridge_num)
{
u8 dev = bridge_num + ds->dst->last_switch + 1;
struct mv88e6xxx_chip *chip = ds->priv;
int err;
mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_pvt_map(chip, dev, 0);
mv88e6xxx_reg_unlock(chip);
return err;
}
static int mv88e6xxx_bridge_tx_fwd_offload(struct dsa_switch *ds, int port,
struct net_device *br,
int bridge_num)
{
return mv88e6xxx_map_virtual_bridge_to_pvt(ds, bridge_num);
}
static void mv88e6xxx_bridge_tx_fwd_unoffload(struct dsa_switch *ds, int port,
struct net_device *br,
int bridge_num)
{
int err;
err = mv88e6xxx_map_virtual_bridge_to_pvt(ds, bridge_num);
if (err) {
dev_err(ds->dev, "failed to remap cross-chip Port VLAN: %pe\n",
ERR_PTR(err));
}
}
static int mv88e6xxx_software_reset(struct mv88e6xxx_chip *chip)
{
if (chip->info->ops->reset)
@@ -3025,6 +3084,15 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
chip->ds = ds;
ds->slave_mii_bus = mv88e6xxx_default_mdio_bus(chip);
/* Since virtual bridges are mapped in the PVT, the number we support
* depends on the physical switch topology. We need to let DSA figure
* that out and therefore we cannot set this at dsa_register_switch()
* time.
*/
if (mv88e6xxx_has_pvt(chip))
ds->num_fwd_offloading_bridges = MV88E6XXX_MAX_PVT_SWITCHES -
ds->dst->last_switch - 1;
mv88e6xxx_reg_lock(chip);
if (chip->info->ops->setup_errata) {
@@ -6128,6 +6196,8 @@ static const struct dsa_switch_ops mv88e6xxx_switch_ops = {
.crosschip_lag_change = mv88e6xxx_crosschip_lag_change,
.crosschip_lag_join = mv88e6xxx_crosschip_lag_join,
.crosschip_lag_leave = mv88e6xxx_crosschip_lag_leave,
.port_bridge_tx_fwd_offload = mv88e6xxx_bridge_tx_fwd_offload,
.port_bridge_tx_fwd_unoffload = mv88e6xxx_bridge_tx_fwd_unoffload,
};
static int mv88e6xxx_register_switch(struct mv88e6xxx_chip *chip)


@@ -1936,7 +1936,7 @@ static int dpaa2_switch_port_bridge_join(struct net_device *netdev,
err = switchdev_bridge_port_offload(netdev, netdev, NULL,
&dpaa2_switch_port_switchdev_nb,
&dpaa2_switch_port_switchdev_blocking_nb,
extack);
false, extack);
if (err)
goto err_switchdev_offload;


@@ -502,7 +502,7 @@ int prestera_bridge_port_join(struct net_device *br_dev,
}
err = switchdev_bridge_port_offload(br_port->dev, port->dev, NULL,
NULL, NULL, extack);
NULL, NULL, false, extack);
if (err)
goto err_switchdev_offload;


@@ -362,7 +362,7 @@ mlxsw_sp_bridge_port_create(struct mlxsw_sp_bridge_device *bridge_device,
bridge_port->ref_count = 1;
err = switchdev_bridge_port_offload(brport_dev, mlxsw_sp_port->dev,
NULL, NULL, NULL, extack);
NULL, NULL, NULL, false, extack);
if (err)
goto err_switchdev_offload;


@@ -113,7 +113,7 @@ static int sparx5_port_bridge_join(struct sparx5_port *port,
set_bit(port->portno, sparx5->bridge_mask);
err = switchdev_bridge_port_offload(ndev, ndev, NULL, NULL, NULL,
extack);
false, extack);
if (err)
goto err_switchdev_offload;


@@ -1200,7 +1200,7 @@ static int ocelot_netdevice_bridge_join(struct net_device *dev,
err = switchdev_bridge_port_offload(brport_dev, dev, priv,
&ocelot_netdevice_nb,
&ocelot_switchdev_blocking_nb,
extack);
false, extack);
if (err)
goto err_switchdev_offload;


@@ -2599,7 +2599,7 @@ static int ofdpa_port_bridge_join(struct ofdpa_port *ofdpa_port,
return err;
return switchdev_bridge_port_offload(dev, dev, NULL, NULL, NULL,
extack);
false, extack);
}
static int ofdpa_port_bridge_leave(struct ofdpa_port *ofdpa_port)


@@ -2097,7 +2097,7 @@ static int am65_cpsw_netdevice_port_link(struct net_device *ndev,
}
err = switchdev_bridge_port_offload(ndev, ndev, NULL, NULL, NULL,
extack);
false, extack);
if (err)
return err;


@@ -1518,7 +1518,7 @@ static int cpsw_netdevice_port_link(struct net_device *ndev,
}
err = switchdev_bridge_port_offload(ndev, ndev, NULL, NULL, NULL,
extack);
false, extack);
if (err)
return err;


@@ -57,6 +57,7 @@ struct br_ip_list {
#define BR_MRP_AWARE BIT(17)
#define BR_MRP_LOST_CONT BIT(18)
#define BR_MRP_LOST_IN_CONT BIT(19)
#define BR_TX_FWD_OFFLOAD BIT(20)
#define BR_DEFAULT_AGEING_TIME (300 * HZ)
@@ -182,6 +183,7 @@ int switchdev_bridge_port_offload(struct net_device *brport_dev,
struct net_device *dev, const void *ctx,
struct notifier_block *atomic_nb,
struct notifier_block *blocking_nb,
bool tx_fwd_offload,
struct netlink_ext_ack *extack);
void switchdev_bridge_port_unoffload(struct net_device *brport_dev,
const void *ctx,
@@ -195,6 +197,7 @@ switchdev_bridge_port_offload(struct net_device *brport_dev,
struct net_device *dev, const void *ctx,
struct notifier_block *atomic_nb,
struct notifier_block *blocking_nb,
bool tx_fwd_offload,
struct netlink_ext_ack *extack)
{
return -EINVAL;


@@ -159,6 +159,12 @@ struct dsa_switch_tree {
*/
struct net_device **lags;
unsigned int lags_len;
/* Track the largest switch index within a tree */
unsigned int last_switch;
/* Track the bridges with forwarding offload enabled */
unsigned long fwd_offloading_bridges;
};
#define dsa_lags_foreach_id(_id, _dst) \
@@ -259,6 +265,7 @@ struct dsa_port {
bool vlan_filtering;
u8 stp_state;
struct net_device *bridge_dev;
int bridge_num;
struct devlink_port devlink_port;
bool devlink_port_setup;
struct phylink *pl;
@@ -410,6 +417,12 @@ struct dsa_switch {
*/
unsigned int num_lag_ids;
/* Drivers that support bridge forwarding offload should set this to
* the maximum number of bridges spanning the same switch tree that can
* be offloaded.
*/
unsigned int num_fwd_offloading_bridges;
size_t num_ports;
};
@@ -693,6 +706,14 @@ struct dsa_switch_ops {
struct net_device *bridge);
void (*port_bridge_leave)(struct dsa_switch *ds, int port,
struct net_device *bridge);
/* Called right after .port_bridge_join() */
int (*port_bridge_tx_fwd_offload)(struct dsa_switch *ds, int port,
struct net_device *bridge,
int bridge_num);
/* Called right before .port_bridge_leave() */
void (*port_bridge_tx_fwd_unoffload)(struct dsa_switch *ds, int port,
struct net_device *bridge,
int bridge_num);
void (*port_stp_state_set)(struct dsa_switch *ds, int port,
u8 state);
void (*port_fast_age)(struct dsa_switch *ds, int port);


@@ -48,6 +48,8 @@ int br_dev_queue_push_xmit(struct net *net, struct sock *sk, struct sk_buff *skb
skb_set_network_header(skb, depth);
}
skb->offload_fwd_mark = br_switchdev_frame_uses_tx_fwd_offload(skb);
dev_queue_xmit(skb);
return 0;
@@ -76,6 +78,11 @@ static void __br_forward(const struct net_bridge_port *to,
struct net *net;
int br_hook;
/* Mark the skb for forwarding offload early so that br_handle_vlan()
* can know whether to pop the VLAN header on egress or keep it.
*/
nbp_switchdev_frame_mark_tx_fwd_offload(to, skb);
vg = nbp_vlan_group_rcu(to);
skb = br_handle_vlan(to->br, to, vg, skb);
if (!skb)
@@ -174,6 +181,8 @@ static struct net_bridge_port *maybe_deliver(
if (!should_deliver(p, skb))
return prev;
nbp_switchdev_frame_mark_tx_fwd_to_hwdom(p, skb);
if (!prev)
goto out;


@@ -552,12 +552,20 @@ struct br_input_skb_cb {
#endif
#ifdef CONFIG_NET_SWITCHDEV
/* Set if TX data plane offloading is used towards at least one
* hardware domain.
*/
u8 tx_fwd_offload:1;
/* The switchdev hardware domain from which this packet was received.
* If skb->offload_fwd_mark was set, then this packet was already
* forwarded by hardware to the other ports in the source hardware
* domain, otherwise it wasn't.
*/
int src_hwdom;
/* Bit mask of hardware domains towards this packet has already been
* transmitted using the TX data plane offload.
*/
unsigned long fwd_hwdoms;
#endif
};
@@ -1871,6 +1879,12 @@ static inline void br_sysfs_delbr(struct net_device *dev) { return; }
/* br_switchdev.c */
#ifdef CONFIG_NET_SWITCHDEV
bool br_switchdev_frame_uses_tx_fwd_offload(struct sk_buff *skb);
void nbp_switchdev_frame_mark_tx_fwd_offload(const struct net_bridge_port *p,
struct sk_buff *skb);
void nbp_switchdev_frame_mark_tx_fwd_to_hwdom(const struct net_bridge_port *p,
struct sk_buff *skb);
void nbp_switchdev_frame_mark(const struct net_bridge_port *p,
struct sk_buff *skb);
bool nbp_switchdev_allowed_egress(const struct net_bridge_port *p,
@@ -1891,6 +1905,23 @@ static inline void br_switchdev_frame_unmark(struct sk_buff *skb)
skb->offload_fwd_mark = 0;
}
#else
static inline bool br_switchdev_frame_uses_tx_fwd_offload(struct sk_buff *skb)
{
return false;
}
static inline void
nbp_switchdev_frame_mark_tx_fwd_offload(const struct net_bridge_port *p,
struct sk_buff *skb)
{
}
static inline void
nbp_switchdev_frame_mark_tx_fwd_to_hwdom(const struct net_bridge_port *p,
struct sk_buff *skb)
{
}
static inline void nbp_switchdev_frame_mark(const struct net_bridge_port *p,
struct sk_buff *skb)
{


@@ -8,6 +8,46 @@
#include "br_private.h"
static struct static_key_false br_switchdev_tx_fwd_offload;
static bool nbp_switchdev_can_offload_tx_fwd(const struct net_bridge_port *p,
const struct sk_buff *skb)
{
if (!static_branch_unlikely(&br_switchdev_tx_fwd_offload))
return false;
return (p->flags & BR_TX_FWD_OFFLOAD) &&
(p->hwdom != BR_INPUT_SKB_CB(skb)->src_hwdom);
}
bool br_switchdev_frame_uses_tx_fwd_offload(struct sk_buff *skb)
{
if (!static_branch_unlikely(&br_switchdev_tx_fwd_offload))
return false;
return BR_INPUT_SKB_CB(skb)->tx_fwd_offload;
}
/* Mark the frame for TX forwarding offload if this egress port supports it */
void nbp_switchdev_frame_mark_tx_fwd_offload(const struct net_bridge_port *p,
struct sk_buff *skb)
{
if (nbp_switchdev_can_offload_tx_fwd(p, skb))
BR_INPUT_SKB_CB(skb)->tx_fwd_offload = true;
}
/* Lazily adds the hwdom of the egress bridge port to the bit mask of hwdoms
* that the skb has been already forwarded to, to avoid further cloning to
* other ports in the same hwdom by making nbp_switchdev_allowed_egress()
* return false.
*/
void nbp_switchdev_frame_mark_tx_fwd_to_hwdom(const struct net_bridge_port *p,
struct sk_buff *skb)
{
if (nbp_switchdev_can_offload_tx_fwd(p, skb))
set_bit(p->hwdom, &BR_INPUT_SKB_CB(skb)->fwd_hwdoms);
}
void nbp_switchdev_frame_mark(const struct net_bridge_port *p,
struct sk_buff *skb)
{
@@ -18,8 +58,10 @@ void nbp_switchdev_frame_mark(const struct net_bridge_port *p,
bool nbp_switchdev_allowed_egress(const struct net_bridge_port *p,
const struct sk_buff *skb)
{
return !skb->offload_fwd_mark ||
BR_INPUT_SKB_CB(skb)->src_hwdom != p->hwdom;
struct br_input_skb_cb *cb = BR_INPUT_SKB_CB(skb);
return !test_bit(p->hwdom, &cb->fwd_hwdoms) &&
(!skb->offload_fwd_mark || cb->src_hwdom != p->hwdom);
}
/* Flags that can be offloaded to hardware */
@@ -164,8 +206,11 @@ static void nbp_switchdev_hwdom_put(struct net_bridge_port *leaving)
static int nbp_switchdev_add(struct net_bridge_port *p,
struct netdev_phys_item_id ppid,
bool tx_fwd_offload,
struct netlink_ext_ack *extack)
{
int err;
if (p->offload_count) {
/* Prevent unsupported configurations such as a bridge port
* which is a bonding interface, and the member ports are from
@@ -189,7 +234,16 @@ static int nbp_switchdev_add(struct net_bridge_port *p,
p->ppid = ppid;
p->offload_count = 1;
return nbp_switchdev_hwdom_set(p);
err = nbp_switchdev_hwdom_set(p);
if (err)
return err;
if (tx_fwd_offload) {
p->flags |= BR_TX_FWD_OFFLOAD;
static_branch_inc(&br_switchdev_tx_fwd_offload);
}
return 0;
}
static void nbp_switchdev_del(struct net_bridge_port *p)
@@ -204,6 +258,11 @@ static void nbp_switchdev_del(struct net_bridge_port *p)
if (p->hwdom)
nbp_switchdev_hwdom_put(p);
if (p->flags & BR_TX_FWD_OFFLOAD) {
p->flags &= ~BR_TX_FWD_OFFLOAD;
static_branch_dec(&br_switchdev_tx_fwd_offload);
}
}
static int nbp_switchdev_sync_objs(struct net_bridge_port *p, const void *ctx,
@@ -262,6 +321,7 @@ int switchdev_bridge_port_offload(struct net_device *brport_dev,
struct net_device *dev, const void *ctx,
struct notifier_block *atomic_nb,
struct notifier_block *blocking_nb,
bool tx_fwd_offload,
struct netlink_ext_ack *extack)
{
struct netdev_phys_item_id ppid;
@@ -278,7 +338,7 @@ int switchdev_bridge_port_offload(struct net_device *brport_dev,
if (err)
return err;
err = nbp_switchdev_add(p, ppid, extack);
err = nbp_switchdev_add(p, ppid, tx_fwd_offload, extack);
if (err)
return err;


@@ -465,7 +465,15 @@ struct sk_buff *br_handle_vlan(struct net_bridge *br,
u64_stats_update_end(&stats->syncp);
}
if (v->flags & BRIDGE_VLAN_INFO_UNTAGGED)
/* If the skb will be sent using forwarding offload, the assumption is
* that the switchdev will inject the packet into hardware together
* with the bridge VLAN, so that it can be forwarded according to that
* VLAN. The switchdev should deal with popping the VLAN header in
* hardware on each egress port as appropriate. So only strip the VLAN
* header if forwarding offload is not being used.
*/
if (v->flags & BRIDGE_VLAN_INFO_UNTAGGED &&
!br_switchdev_frame_uses_tx_fwd_offload(skb))
__vlan_hwaccel_clear_tag(skb);
if (p && (p->flags & BR_VLAN_TUNNEL) &&


@@ -1044,6 +1044,7 @@ static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index)
dp->ds = ds;
dp->index = index;
dp->bridge_num = -1;
INIT_LIST_HEAD(&dp->list);
list_add_tail(&dp->list, &dst->ports);
@@ -1265,6 +1266,9 @@ static int dsa_switch_parse_member_of(struct dsa_switch *ds,
return -EEXIST;
}
if (ds->dst->last_switch < ds->index)
ds->dst->last_switch = ds->index;
return 0;
}


@@ -14,6 +14,8 @@
#include <net/dsa.h>
#include <net/gro_cells.h>
#define DSA_MAX_NUM_OFFLOADING_BRIDGES BITS_PER_LONG
enum {
DSA_NOTIFIER_AGEING_TIME,
DSA_NOTIFIER_BRIDGE_JOIN,


@@ -230,6 +230,83 @@ static void dsa_port_switchdev_unsync_attrs(struct dsa_port *dp)
*/
}
static int dsa_tree_find_bridge_num(struct dsa_switch_tree *dst,
struct net_device *bridge_dev)
{
struct dsa_port *dp;
/* When preparing the offload for a port, it will have a valid
* dp->bridge_dev pointer but a not yet valid dp->bridge_num.
* However there might be other ports having the same dp->bridge_dev
* and a valid dp->bridge_num, so just ignore this port.
*/
list_for_each_entry(dp, &dst->ports, list)
if (dp->bridge_dev == bridge_dev && dp->bridge_num != -1)
return dp->bridge_num;
return -1;
}
static void dsa_port_bridge_tx_fwd_unoffload(struct dsa_port *dp,
struct net_device *bridge_dev)
{
struct dsa_switch_tree *dst = dp->ds->dst;
int bridge_num = dp->bridge_num;
struct dsa_switch *ds = dp->ds;
/* No bridge TX forwarding offload => do nothing */
if (!ds->ops->port_bridge_tx_fwd_unoffload || dp->bridge_num == -1)
return;
dp->bridge_num = -1;
/* Check if the bridge is still in use, otherwise it is time
* to clean it up so we can reuse this bridge_num later.
*/
if (!dsa_tree_find_bridge_num(dst, bridge_dev))
clear_bit(bridge_num, &dst->fwd_offloading_bridges);
/* Notify the chips only once the offload has been deactivated, so
* that they can update their configuration accordingly.
*/
ds->ops->port_bridge_tx_fwd_unoffload(ds, dp->index, bridge_dev,
bridge_num);
}
static bool dsa_port_bridge_tx_fwd_offload(struct dsa_port *dp,
struct net_device *bridge_dev)
{
struct dsa_switch_tree *dst = dp->ds->dst;
struct dsa_switch *ds = dp->ds;
int bridge_num, err;
if (!ds->ops->port_bridge_tx_fwd_offload)
return false;
bridge_num = dsa_tree_find_bridge_num(dst, bridge_dev);
if (bridge_num < 0) {
/* First port that offloads TX forwarding for this bridge */
bridge_num = find_first_zero_bit(&dst->fwd_offloading_bridges,
DSA_MAX_NUM_OFFLOADING_BRIDGES);
if (bridge_num >= ds->num_fwd_offloading_bridges)
return false;
set_bit(bridge_num, &dst->fwd_offloading_bridges);
}
dp->bridge_num = bridge_num;
/* Notify the driver */
err = ds->ops->port_bridge_tx_fwd_offload(ds, dp->index, bridge_dev,
bridge_num);
if (err) {
dsa_port_bridge_tx_fwd_unoffload(dp, bridge_dev);
return false;
}
return true;
}
int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
struct netlink_ext_ack *extack)
{
@@ -241,6 +318,7 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
};
struct net_device *dev = dp->slave;
struct net_device *brport_dev;
bool tx_fwd_offload;
int err;
/* Here the interface is already bridged. Reflect the current
@@ -254,10 +332,12 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
if (err)
goto out_rollback;
tx_fwd_offload = dsa_port_bridge_tx_fwd_offload(dp, br);
err = switchdev_bridge_port_offload(brport_dev, dev, dp,
&dsa_slave_switchdev_notifier,
&dsa_slave_switchdev_blocking_notifier,
extack);
tx_fwd_offload, extack);
if (err)
goto out_rollback_unbridge;
@@ -302,6 +382,8 @@ void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br)
*/
dp->bridge_dev = NULL;
dsa_port_bridge_tx_fwd_unoffload(dp, br);
err = dsa_broadcast(DSA_NOTIFIER_BRIDGE_LEAVE, &info);
if (err)
pr_err("DSA: failed to notify DSA_NOTIFIER_BRIDGE_LEAVE\n");


@@ -126,7 +126,42 @@ static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev,
u8 extra)
{
struct dsa_port *dp = dsa_slave_to_port(dev);
u8 tag_dev, tag_port;
enum dsa_cmd cmd;
u8 *dsa_header;
u16 pvid = 0;
int err;
if (skb->offload_fwd_mark) {
struct dsa_switch_tree *dst = dp->ds->dst;
struct net_device *br = dp->bridge_dev;
cmd = DSA_CMD_FORWARD;
/* When offloading forwarding for a bridge, inject FORWARD
* packets on behalf of a virtual switch device with an index
* past the physical switches.
*/
tag_dev = dst->last_switch + 1 + dp->bridge_num;
tag_port = 0;
/* If we are offloading forwarding for a VLAN-unaware bridge,
* inject packets to hardware using the bridge's pvid, since
* that's where the packets ingressed from.
*/
if (!br_vlan_enabled(br)) {
/* Safe because __dev_queue_xmit() runs under
* rcu_read_lock_bh()
*/
err = br_vlan_get_pvid_rcu(br, &pvid);
if (err)
return NULL;
}
} else {
cmd = DSA_CMD_FROM_CPU;
tag_dev = dp->ds->index;
tag_port = dp->index;
}
if (skb->protocol == htons(ETH_P_8021Q)) {
if (extra) {
@@ -134,10 +169,10 @@ static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev,
memmove(skb->data, skb->data + extra, 2 * ETH_ALEN);
}
/* Construct tagged FROM_CPU DSA tag from 802.1Q tag. */
/* Construct tagged DSA tag from 802.1Q tag. */
dsa_header = skb->data + 2 * ETH_ALEN + extra;
dsa_header[0] = (DSA_CMD_FROM_CPU << 6) | 0x20 | dp->ds->index;
dsa_header[1] = dp->index << 3;
dsa_header[0] = (cmd << 6) | 0x20 | tag_dev;
dsa_header[1] = tag_port << 3;
/* Move CFI field from byte 2 to byte 1. */
if (dsa_header[2] & 0x10) {
@@ -148,12 +183,13 @@ static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev,
skb_push(skb, DSA_HLEN + extra);
memmove(skb->data, skb->data + DSA_HLEN + extra, 2 * ETH_ALEN);
/* Construct untagged FROM_CPU DSA tag. */
/* Construct untagged DSA tag. */
dsa_header = skb->data + 2 * ETH_ALEN + extra;
dsa_header[0] = (DSA_CMD_FROM_CPU << 6) | dp->ds->index;
dsa_header[1] = dp->index << 3;
dsa_header[2] = 0x00;
dsa_header[3] = 0x00;
dsa_header[0] = (cmd << 6) | tag_dev;
dsa_header[1] = tag_port << 3;
dsa_header[2] = pvid >> 8;
dsa_header[3] = pvid & 0xff;
}
return skb;