Merge branch 'net-bridge-multiple-spanning-trees'

Tobias Waldekranz says:

====================
net: bridge: Multiple Spanning Trees

The bridge has had per-VLAN STP support for a while now, since:

https://lore.kernel.org/netdev/20200124114022.10883-1-nikolay@cumulusnetworks.com/

The current implementation has some problems:

- The mapping from VLAN to STP state is fixed as 1:1, i.e. each VLAN
  is managed independently. This is awkward from an MSTP (802.1Q-2018,
  Clause 13.5) point of view, where the model is that multiple VLANs
  are grouped into MST instances.

  Presumably because of the way that the standard is written, this is
  also reflected in hardware implementations. It is not uncommon for a
  switch to support the full 4k range of VIDs, while the pool of MST
  instances is much smaller. Some examples:

  Marvell LinkStreet (mv88e6xxx): 4k VLANs, but only 64 MSTIs
  Marvell Prestera: 4k VLANs, but only 128 MSTIs
  Microchip SparX-5i: 4k VLANs, but only 128 MSTIs

- By default, the feature is enabled, and there is no way to disable
  it. This makes it hard to add offloading in a backwards compatible
  way, since any underlying switchdevs have no way to refuse the
  feature if the hardware does not support it.

- The port-global STP state has precedence over per-VLAN states. In
  MSTP, as far as I understand it, all VLANs will use the common
  spanning tree (CST) by default - through traffic engineering you can
  then optimize your network to group subsets of VLANs to use
  different trees (MSTI). To my understanding, the way this is
  typically managed in silicon is roughly:

  Incoming packet:
  .----.----.--------------.----.-------------
  | DA | SA | 802.1Q VID=X | ET | Payload ...
  '----'----'--------------'----'-------------
                        |
                        '->|\     .----------------------------.
                           | +--> | VID | Members | ... | MSTI |
                   PVID -->|/     |-----|---------|-----|------|
                                  |   1 | 0001001 | ... |    0 |
                                  |   2 | 0001010 | ... |   10 |
                                  |   3 | 0001100 | ... |   10 |
                                  '----------------------------'
                                                             |
                               .-----------------------------'
                               |  .------------------------.
                               '->| MSTI | Fwding | Lrning |
                                  |------|--------|--------|
                                  |    0 | 111110 | 111110 |
                                  |   10 | 110111 | 110111 |
                                  '------------------------'

  What this is trying to show is that the STP state (whether MSTP is
  used, or ye olde STP) is always accessed via the VLAN table. If
  plain STP is running, all MSTI pointers in that table will reference
  the same index in the STP table - if MSTP is running, some VLANs may
  point to other trees (as in this example).

  The fact that, in the Linux bridge, the global state (think: index 0
  in most hardware implementations) is supposed to override the
  per-VLAN state is very awkward to offload. In effect, this means
  that when the global state changes to blocking, drivers will have to
  iterate over all MSTIs in use, and alter them all to match. This
  also means that you have to cache whether the hardware state is
  currently tracking the global state or the per-VLAN state. In the
  first case, you also have to cache the per-VLAN state so that you
  can restore it if the global state transitions back to forwarding.

This series adds a new mst_enable bridge setting (as suggested by Nik)
that can only be changed when no VLANs are configured on the
bridge. Enabling this mode has the following effect:

- The port-global STP state is used to represent the CST (Common
  Spanning Tree) (1/15)

- Ingress STP filtering is deferred until the frame's VLAN has been
  resolved (1/15)

- The preexisting per-VLAN states can no longer be controlled directly
  (1/15). They are instead placed under the MST module's control,
  which is managed using a new netlink interface (described in 3/15)

- VLANs can be mapped to MSTIs in an arbitrary M:N fashion, using a
  new global VLAN option (2/15)

Switchdev notifications are added so that a driver can track:
- MST enabled state
- VID to MSTI mappings
- MST port states

An offloading implementation is thus provided for mv88e6xxx.
====================

Link: https://lore.kernel.org/r/20220316150857.2442916-1-tobias@waldekranz.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 82e94d4144 by Jakub Kicinski, 2022-03-17 16:50:03 -07:00
23 changed files with 1388 additions and 159 deletions


@ -1667,24 +1667,31 @@ static int mv88e6xxx_pvt_setup(struct mv88e6xxx_chip *chip)
return 0;
}
static int mv88e6xxx_port_fast_age_fid(struct mv88e6xxx_chip *chip, int port,
u16 fid)
{
if (dsa_to_port(chip->ds, port)->lag)
/* Hardware is incapable of fast-aging a LAG through a
* regular ATU move operation. Until we have something
* more fancy in place this is a no-op.
*/
return -EOPNOTSUPP;
return mv88e6xxx_g1_atu_remove(chip, fid, port, false);
}
static void mv88e6xxx_port_fast_age(struct dsa_switch *ds, int port)
{
struct mv88e6xxx_chip *chip = ds->priv;
int err;
if (dsa_to_port(ds, port)->lag)
/* Hardware is incapable of fast-aging a LAG through a
* regular ATU move operation. Until we have something
* more fancy in place this is a no-op.
*/
return;
mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g1_atu_remove(chip, 0, port, false);
err = mv88e6xxx_port_fast_age_fid(chip, port, 0);
mv88e6xxx_reg_unlock(chip);
if (err)
dev_err(ds->dev, "p%d: failed to flush ATU\n", port);
dev_err(chip->ds->dev, "p%d: failed to flush ATU: %d\n",
port, err);
}
static int mv88e6xxx_vtu_setup(struct mv88e6xxx_chip *chip)
@ -1791,6 +1798,187 @@ static int mv88e6xxx_atu_new(struct mv88e6xxx_chip *chip, u16 *fid)
return mv88e6xxx_g1_atu_flush(chip, *fid, true);
}
static int mv88e6xxx_stu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry)
{
if (!chip->info->ops->stu_loadpurge)
return -EOPNOTSUPP;
return chip->info->ops->stu_loadpurge(chip, entry);
}
static int mv88e6xxx_stu_setup(struct mv88e6xxx_chip *chip)
{
struct mv88e6xxx_stu_entry stu = {
.valid = true,
.sid = 0
};
if (!mv88e6xxx_has_stu(chip))
return 0;
/* Make sure that SID 0 is always valid. This is used by VTU
* entries that do not make use of the STU, e.g. when creating
* a VLAN upper on a port that is also part of a VLAN
* filtering bridge.
*/
return mv88e6xxx_stu_loadpurge(chip, &stu);
}
static int mv88e6xxx_sid_get(struct mv88e6xxx_chip *chip, u8 *sid)
{
DECLARE_BITMAP(busy, MV88E6XXX_N_SID) = { 0 };
struct mv88e6xxx_mst *mst;
__set_bit(0, busy);
list_for_each_entry(mst, &chip->msts, node)
__set_bit(mst->stu.sid, busy);
*sid = find_first_zero_bit(busy, MV88E6XXX_N_SID);
return (*sid >= mv88e6xxx_max_sid(chip)) ? -ENOSPC : 0;
}
static int mv88e6xxx_mst_put(struct mv88e6xxx_chip *chip, u8 sid)
{
struct mv88e6xxx_mst *mst, *tmp;
int err;
if (!sid)
return 0;
list_for_each_entry_safe(mst, tmp, &chip->msts, node) {
if (mst->stu.sid != sid)
continue;
if (!refcount_dec_and_test(&mst->refcnt))
return 0;
mst->stu.valid = false;
err = mv88e6xxx_stu_loadpurge(chip, &mst->stu);
if (err) {
refcount_set(&mst->refcnt, 1);
return err;
}
list_del(&mst->node);
kfree(mst);
return 0;
}
return -ENOENT;
}
static int mv88e6xxx_mst_get(struct mv88e6xxx_chip *chip, struct net_device *br,
u16 msti, u8 *sid)
{
struct mv88e6xxx_mst *mst;
int err, i;
if (!mv88e6xxx_has_stu(chip)) {
err = -EOPNOTSUPP;
goto err;
}
if (!msti) {
*sid = 0;
return 0;
}
list_for_each_entry(mst, &chip->msts, node) {
if (mst->br == br && mst->msti == msti) {
refcount_inc(&mst->refcnt);
*sid = mst->stu.sid;
return 0;
}
}
err = mv88e6xxx_sid_get(chip, sid);
if (err)
goto err;
mst = kzalloc(sizeof(*mst), GFP_KERNEL);
if (!mst) {
err = -ENOMEM;
goto err;
}
INIT_LIST_HEAD(&mst->node);
refcount_set(&mst->refcnt, 1);
mst->br = br;
mst->msti = msti;
mst->stu.valid = true;
mst->stu.sid = *sid;
/* The bridge starts out all ports in the disabled state. But
* a STU state of disabled means to go by the port-global
* state. So we set all user port's initial state to blocking,
* to match the bridge's behavior.
*/
for (i = 0; i < mv88e6xxx_num_ports(chip); i++)
mst->stu.state[i] = dsa_is_user_port(chip->ds, i) ?
MV88E6XXX_PORT_CTL0_STATE_BLOCKING :
MV88E6XXX_PORT_CTL0_STATE_DISABLED;
err = mv88e6xxx_stu_loadpurge(chip, &mst->stu);
if (err)
goto err_free;
list_add_tail(&mst->node, &chip->msts);
return 0;
err_free:
kfree(mst);
err:
return err;
}
static int mv88e6xxx_port_mst_state_set(struct dsa_switch *ds, int port,
const struct switchdev_mst_state *st)
{
struct dsa_port *dp = dsa_to_port(ds, port);
struct mv88e6xxx_chip *chip = ds->priv;
struct mv88e6xxx_mst *mst;
u8 state;
int err;
if (!mv88e6xxx_has_stu(chip))
return -EOPNOTSUPP;
switch (st->state) {
case BR_STATE_DISABLED:
case BR_STATE_BLOCKING:
case BR_STATE_LISTENING:
state = MV88E6XXX_PORT_CTL0_STATE_BLOCKING;
break;
case BR_STATE_LEARNING:
state = MV88E6XXX_PORT_CTL0_STATE_LEARNING;
break;
case BR_STATE_FORWARDING:
state = MV88E6XXX_PORT_CTL0_STATE_FORWARDING;
break;
default:
return -EINVAL;
}
list_for_each_entry(mst, &chip->msts, node) {
if (mst->br == dsa_port_bridge_dev_get(dp) &&
mst->msti == st->msti) {
if (mst->stu.state[port] == state)
return 0;
mst->stu.state[port] = state;
mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_stu_loadpurge(chip, &mst->stu);
mv88e6xxx_reg_unlock(chip);
return err;
}
}
return -ENOENT;
}
static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
u16 vid)
{
@ -2410,6 +2598,12 @@ static int mv88e6xxx_port_vlan_leave(struct mv88e6xxx_chip *chip,
if (err)
return err;
if (!vlan.valid) {
err = mv88e6xxx_mst_put(chip, vlan.sid);
if (err)
return err;
}
return mv88e6xxx_g1_atu_remove(chip, vlan.fid, port, false);
}
@ -2455,6 +2649,69 @@ static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
return err;
}
static int mv88e6xxx_port_vlan_fast_age(struct dsa_switch *ds, int port, u16 vid)
{
struct mv88e6xxx_chip *chip = ds->priv;
struct mv88e6xxx_vtu_entry vlan;
int err;
mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_vtu_get(chip, vid, &vlan);
if (err)
goto unlock;
err = mv88e6xxx_port_fast_age_fid(chip, port, vlan.fid);
unlock:
mv88e6xxx_reg_unlock(chip);
return err;
}
static int mv88e6xxx_vlan_msti_set(struct dsa_switch *ds,
struct dsa_bridge bridge,
const struct switchdev_vlan_msti *msti)
{
struct mv88e6xxx_chip *chip = ds->priv;
struct mv88e6xxx_vtu_entry vlan;
u8 old_sid, new_sid;
int err;
mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_vtu_get(chip, msti->vid, &vlan);
if (err)
goto unlock;
if (!vlan.valid) {
err = -EINVAL;
goto unlock;
}
old_sid = vlan.sid;
err = mv88e6xxx_mst_get(chip, bridge.dev, msti->msti, &new_sid);
if (err)
goto unlock;
if (new_sid != old_sid) {
vlan.sid = new_sid;
err = mv88e6xxx_vtu_loadpurge(chip, &vlan);
if (err) {
mv88e6xxx_mst_put(chip, new_sid);
goto unlock;
}
}
err = mv88e6xxx_mst_put(chip, old_sid);
unlock:
mv88e6xxx_reg_unlock(chip);
return err;
}
static int mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid,
struct dsa_db db)
@ -3427,6 +3684,13 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
if (err)
goto unlock;
/* Must be called after mv88e6xxx_vtu_setup (which flushes the
* VTU, thereby also flushing the STU).
*/
err = mv88e6xxx_stu_setup(chip);
if (err)
goto unlock;
/* Setup Switch Port Registers */
for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {
if (dsa_is_unused_port(ds, i))
@ -3882,6 +4146,8 @@ static const struct mv88e6xxx_ops mv88e6097_ops = {
.vtu_getnext = mv88e6352_g1_vtu_getnext,
.vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
.phylink_get_caps = mv88e6095_phylink_get_caps,
.stu_getnext = mv88e6352_g1_stu_getnext,
.stu_loadpurge = mv88e6352_g1_stu_loadpurge,
.set_max_frame_size = mv88e6185_g1_set_max_frame_size,
};
@ -4968,6 +5234,8 @@ static const struct mv88e6xxx_ops mv88e6352_ops = {
.atu_set_hash = mv88e6165_g1_atu_set_hash,
.vtu_getnext = mv88e6352_g1_vtu_getnext,
.vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
.stu_getnext = mv88e6352_g1_stu_getnext,
.stu_loadpurge = mv88e6352_g1_stu_loadpurge,
.serdes_get_lane = mv88e6352_serdes_get_lane,
.serdes_pcs_get_state = mv88e6352_serdes_pcs_get_state,
.serdes_pcs_config = mv88e6352_serdes_pcs_config,
@ -5033,6 +5301,8 @@ static const struct mv88e6xxx_ops mv88e6390_ops = {
.atu_set_hash = mv88e6165_g1_atu_set_hash,
.vtu_getnext = mv88e6390_g1_vtu_getnext,
.vtu_loadpurge = mv88e6390_g1_vtu_loadpurge,
.stu_getnext = mv88e6390_g1_stu_getnext,
.stu_loadpurge = mv88e6390_g1_stu_loadpurge,
.serdes_power = mv88e6390_serdes_power,
.serdes_get_lane = mv88e6390_serdes_get_lane,
/* Check status register pause & lpa register */
@ -5098,6 +5368,8 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = {
.atu_set_hash = mv88e6165_g1_atu_set_hash,
.vtu_getnext = mv88e6390_g1_vtu_getnext,
.vtu_loadpurge = mv88e6390_g1_vtu_loadpurge,
.stu_getnext = mv88e6390_g1_stu_getnext,
.stu_loadpurge = mv88e6390_g1_stu_loadpurge,
.serdes_power = mv88e6390_serdes_power,
.serdes_get_lane = mv88e6390x_serdes_get_lane,
.serdes_pcs_get_state = mv88e6390_serdes_pcs_get_state,
@ -5166,6 +5438,8 @@ static const struct mv88e6xxx_ops mv88e6393x_ops = {
.atu_set_hash = mv88e6165_g1_atu_set_hash,
.vtu_getnext = mv88e6390_g1_vtu_getnext,
.vtu_loadpurge = mv88e6390_g1_vtu_loadpurge,
.stu_getnext = mv88e6390_g1_stu_getnext,
.stu_loadpurge = mv88e6390_g1_stu_loadpurge,
.serdes_power = mv88e6393x_serdes_power,
.serdes_get_lane = mv88e6393x_serdes_get_lane,
.serdes_pcs_get_state = mv88e6393x_serdes_pcs_get_state,
@ -5234,6 +5508,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_ports = 11,
.num_internal_phys = 8,
.max_vid = 4095,
.max_sid = 63,
.port_base_addr = 0x10,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5487,6 +5762,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_internal_phys = 9,
.num_gpio = 16,
.max_vid = 8191,
.max_sid = 63,
.port_base_addr = 0x0,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5510,6 +5786,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_internal_phys = 9,
.num_gpio = 16,
.max_vid = 8191,
.max_sid = 63,
.port_base_addr = 0x0,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5532,6 +5809,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.max_vid = 8191,
.max_sid = 63,
.port_base_addr = 0x0,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5554,6 +5832,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.max_vid = 8191,
.max_sid = 63,
.port_base_addr = 0x0,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5576,6 +5855,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.max_vid = 8191,
.max_sid = 63,
.port_base_addr = 0x0,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5815,6 +6095,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_internal_phys = 5,
.num_gpio = 15,
.max_vid = 4095,
.max_sid = 63,
.port_base_addr = 0x10,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5839,6 +6120,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_internal_phys = 9,
.num_gpio = 16,
.max_vid = 8191,
.max_sid = 63,
.port_base_addr = 0x0,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5863,6 +6145,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_internal_phys = 9,
.num_gpio = 16,
.max_vid = 8191,
.max_sid = 63,
.port_base_addr = 0x0,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5886,6 +6169,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.max_vid = 8191,
.max_sid = 63,
.port_base_addr = 0x0,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,
@ -5954,6 +6238,7 @@ static struct mv88e6xxx_chip *mv88e6xxx_alloc_chip(struct device *dev)
mutex_init(&chip->reg_lock);
INIT_LIST_HEAD(&chip->mdios);
idr_init(&chip->policies);
INIT_LIST_HEAD(&chip->msts);
return chip;
}
@ -6486,10 +6771,13 @@ static const struct dsa_switch_ops mv88e6xxx_switch_ops = {
.port_pre_bridge_flags = mv88e6xxx_port_pre_bridge_flags,
.port_bridge_flags = mv88e6xxx_port_bridge_flags,
.port_stp_state_set = mv88e6xxx_port_stp_state_set,
.port_mst_state_set = mv88e6xxx_port_mst_state_set,
.port_fast_age = mv88e6xxx_port_fast_age,
.port_vlan_fast_age = mv88e6xxx_port_vlan_fast_age,
.port_vlan_filtering = mv88e6xxx_port_vlan_filtering,
.port_vlan_add = mv88e6xxx_port_vlan_add,
.port_vlan_del = mv88e6xxx_port_vlan_del,
.vlan_msti_set = mv88e6xxx_vlan_msti_set,
.port_fdb_add = mv88e6xxx_port_fdb_add,
.port_fdb_del = mv88e6xxx_port_fdb_del,
.port_fdb_dump = mv88e6xxx_port_fdb_dump,


@ -20,6 +20,7 @@
#define EDSA_HLEN 8
#define MV88E6XXX_N_FID 4096
#define MV88E6XXX_N_SID 64
#define MV88E6XXX_FID_STANDALONE 0
#define MV88E6XXX_FID_BRIDGED 1
@ -130,6 +131,7 @@ struct mv88e6xxx_info {
unsigned int num_internal_phys;
unsigned int num_gpio;
unsigned int max_vid;
unsigned int max_sid;
unsigned int port_base_addr;
unsigned int phy_base_addr;
unsigned int global1_addr;
@ -181,6 +183,12 @@ struct mv88e6xxx_vtu_entry {
bool valid;
bool policy;
u8 member[DSA_MAX_PORTS];
u8 state[DSA_MAX_PORTS]; /* Older silicon has no STU */
};
struct mv88e6xxx_stu_entry {
u8 sid;
bool valid;
u8 state[DSA_MAX_PORTS];
};
@ -279,6 +287,7 @@ enum mv88e6xxx_region_id {
MV88E6XXX_REGION_GLOBAL2,
MV88E6XXX_REGION_ATU,
MV88E6XXX_REGION_VTU,
MV88E6XXX_REGION_STU,
MV88E6XXX_REGION_PVT,
_MV88E6XXX_REGION_MAX,
@ -288,6 +297,16 @@ struct mv88e6xxx_region_priv {
enum mv88e6xxx_region_id id;
};
struct mv88e6xxx_mst {
struct list_head node;
refcount_t refcnt;
struct net_device *br;
u16 msti;
struct mv88e6xxx_stu_entry stu;
};
struct mv88e6xxx_chip {
const struct mv88e6xxx_info *info;
@ -388,6 +407,9 @@ struct mv88e6xxx_chip {
/* devlink regions */
struct devlink_region *regions[_MV88E6XXX_REGION_MAX];
/* Bridge MST to SID mappings */
struct list_head msts;
};
struct mv88e6xxx_bus_ops {
@ -602,6 +624,12 @@ struct mv88e6xxx_ops {
int (*vtu_loadpurge)(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry);
/* Spanning Tree Unit operations */
int (*stu_getnext)(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int (*stu_loadpurge)(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
/* GPIO operations */
const struct mv88e6xxx_gpio_ops *gpio_ops;
@ -700,6 +728,11 @@ struct mv88e6xxx_hw_stat {
int type;
};
static inline bool mv88e6xxx_has_stu(struct mv88e6xxx_chip *chip)
{
return chip->info->max_sid > 0;
}
static inline bool mv88e6xxx_has_pvt(struct mv88e6xxx_chip *chip)
{
return chip->info->pvt;
@ -730,6 +763,11 @@ static inline unsigned int mv88e6xxx_max_vid(struct mv88e6xxx_chip *chip)
return chip->info->max_vid;
}
static inline unsigned int mv88e6xxx_max_sid(struct mv88e6xxx_chip *chip)
{
return chip->info->max_sid;
}
static inline u16 mv88e6xxx_port_mask(struct mv88e6xxx_chip *chip)
{
return GENMASK((s32)mv88e6xxx_num_ports(chip) - 1, 0);


@ -503,6 +503,85 @@ static int mv88e6xxx_region_vtu_snapshot(struct devlink *dl,
return 0;
}
/**
* struct mv88e6xxx_devlink_stu_entry - Devlink STU entry
* @sid: Global1/3: SID, unknown filters and learning.
* @vid: Global1/6: Valid bit.
* @data: Global1/7-9: Membership data and priority override.
* @resvd: Reserved. In case we forgot something.
*
* The STU entry format varies between chipset generations. Peridot
* and Amethyst packs the STU data into Global1/7-8. Older silicon
* spreads the information across all three VTU data registers -
* inheriting the layout of even older hardware that had no STU at
* all. Since this is a low-level debug interface, copy all data
* verbatim and defer parsing to the consumer.
*/
struct mv88e6xxx_devlink_stu_entry {
u16 sid;
u16 vid;
u16 data[3];
u16 resvd;
};
static int mv88e6xxx_region_stu_snapshot(struct devlink *dl,
const struct devlink_region_ops *ops,
struct netlink_ext_ack *extack,
u8 **data)
{
struct mv88e6xxx_devlink_stu_entry *table, *entry;
struct dsa_switch *ds = dsa_devlink_to_ds(dl);
struct mv88e6xxx_chip *chip = ds->priv;
struct mv88e6xxx_stu_entry stu;
int err;
table = kcalloc(mv88e6xxx_max_sid(chip) + 1,
sizeof(struct mv88e6xxx_devlink_stu_entry),
GFP_KERNEL);
if (!table)
return -ENOMEM;
entry = table;
stu.sid = mv88e6xxx_max_sid(chip);
stu.valid = false;
mv88e6xxx_reg_lock(chip);
do {
err = mv88e6xxx_g1_stu_getnext(chip, &stu);
if (err)
break;
if (!stu.valid)
break;
err = err ? : mv88e6xxx_g1_read(chip, MV88E6352_G1_VTU_SID,
&entry->sid);
err = err ? : mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_VID,
&entry->vid);
err = err ? : mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_DATA1,
&entry->data[0]);
err = err ? : mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_DATA2,
&entry->data[1]);
err = err ? : mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_DATA3,
&entry->data[2]);
if (err)
break;
entry++;
} while (stu.sid < mv88e6xxx_max_sid(chip));
mv88e6xxx_reg_unlock(chip);
if (err) {
kfree(table);
return err;
}
*data = (u8 *)table;
return 0;
}
static int mv88e6xxx_region_pvt_snapshot(struct devlink *dl,
const struct devlink_region_ops *ops,
struct netlink_ext_ack *extack,
@ -605,6 +684,12 @@ static struct devlink_region_ops mv88e6xxx_region_vtu_ops = {
.destructor = kfree,
};
static struct devlink_region_ops mv88e6xxx_region_stu_ops = {
.name = "stu",
.snapshot = mv88e6xxx_region_stu_snapshot,
.destructor = kfree,
};
static struct devlink_region_ops mv88e6xxx_region_pvt_ops = {
.name = "pvt",
.snapshot = mv88e6xxx_region_pvt_snapshot,
@ -640,6 +725,11 @@ static struct mv88e6xxx_region mv88e6xxx_regions[] = {
.ops = &mv88e6xxx_region_vtu_ops
/* calculated at runtime */
},
[MV88E6XXX_REGION_STU] = {
.ops = &mv88e6xxx_region_stu_ops,
.cond = mv88e6xxx_has_stu,
/* calculated at runtime */
},
[MV88E6XXX_REGION_PVT] = {
.ops = &mv88e6xxx_region_pvt_ops,
.size = MV88E6XXX_MAX_PVT_ENTRIES * sizeof(u16),
@ -706,6 +796,10 @@ int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds)
size = (mv88e6xxx_max_vid(chip) + 1) *
sizeof(struct mv88e6xxx_devlink_vtu_entry);
break;
case MV88E6XXX_REGION_STU:
size = (mv88e6xxx_max_sid(chip) + 1) *
sizeof(struct mv88e6xxx_devlink_stu_entry);
break;
}
region = dsa_devlink_region_create(ds, ops, 1, size);


@ -348,6 +348,16 @@ int mv88e6390_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
int mv88e6390_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry);
int mv88e6xxx_g1_vtu_flush(struct mv88e6xxx_chip *chip);
int mv88e6xxx_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6352_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6352_g1_stu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6390_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6390_g1_stu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6xxx_g1_vtu_prob_irq_setup(struct mv88e6xxx_chip *chip);
void mv88e6xxx_g1_vtu_prob_irq_free(struct mv88e6xxx_chip *chip);
int mv88e6xxx_g1_atu_get_next(struct mv88e6xxx_chip *chip, u16 fid);


@ -44,8 +44,7 @@ static int mv88e6xxx_g1_vtu_fid_write(struct mv88e6xxx_chip *chip,
/* Offset 0x03: VTU SID Register */
static int mv88e6xxx_g1_vtu_sid_read(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
static int mv88e6xxx_g1_vtu_sid_read(struct mv88e6xxx_chip *chip, u8 *sid)
{
u16 val;
int err;
@ -54,15 +53,14 @@ static int mv88e6xxx_g1_vtu_sid_read(struct mv88e6xxx_chip *chip,
if (err)
return err;
entry->sid = val & MV88E6352_G1_VTU_SID_MASK;
*sid = val & MV88E6352_G1_VTU_SID_MASK;
return 0;
}
static int mv88e6xxx_g1_vtu_sid_write(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
static int mv88e6xxx_g1_vtu_sid_write(struct mv88e6xxx_chip *chip, u8 sid)
{
u16 val = entry->sid & MV88E6352_G1_VTU_SID_MASK;
u16 val = sid & MV88E6352_G1_VTU_SID_MASK;
return mv88e6xxx_g1_write(chip, MV88E6352_G1_VTU_SID, val);
}
@ -91,7 +89,7 @@ static int mv88e6xxx_g1_vtu_op(struct mv88e6xxx_chip *chip, u16 op)
/* Offset 0x06: VTU VID Register */
static int mv88e6xxx_g1_vtu_vid_read(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
bool *valid, u16 *vid)
{
u16 val;
int err;
@ -100,25 +98,28 @@ static int mv88e6xxx_g1_vtu_vid_read(struct mv88e6xxx_chip *chip,
if (err)
return err;
entry->vid = val & 0xfff;
if (vid) {
*vid = val & 0xfff;
if (val & MV88E6390_G1_VTU_VID_PAGE)
entry->vid |= 0x1000;
if (val & MV88E6390_G1_VTU_VID_PAGE)
*vid |= 0x1000;
}
entry->valid = !!(val & MV88E6XXX_G1_VTU_VID_VALID);
if (valid)
*valid = !!(val & MV88E6XXX_G1_VTU_VID_VALID);
return 0;
}
static int mv88e6xxx_g1_vtu_vid_write(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
bool valid, u16 vid)
{
u16 val = entry->vid & 0xfff;
u16 val = vid & 0xfff;
if (entry->vid & 0x1000)
if (vid & 0x1000)
val |= MV88E6390_G1_VTU_VID_PAGE;
if (entry->valid)
if (valid)
val |= MV88E6XXX_G1_VTU_VID_VALID;
return mv88e6xxx_g1_write(chip, MV88E6XXX_G1_VTU_VID, val);
@ -147,7 +148,7 @@ static int mv88e6185_g1_vtu_stu_data_read(struct mv88e6xxx_chip *chip,
}
static int mv88e6185_g1_vtu_data_read(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
u8 *member, u8 *state)
{
u16 regs[3];
int err;
@ -160,36 +161,20 @@ static int mv88e6185_g1_vtu_data_read(struct mv88e6xxx_chip *chip,
/* Extract MemberTag data */
for (i = 0; i < mv88e6xxx_num_ports(chip); ++i) {
unsigned int member_offset = (i % 4) * 4;
unsigned int state_offset = member_offset + 2;
entry->member[i] = (regs[i / 4] >> member_offset) & 0x3;
}
if (member)
member[i] = (regs[i / 4] >> member_offset) & 0x3;
return 0;
}
static int mv88e6185_g1_stu_data_read(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
{
u16 regs[3];
int err;
int i;
err = mv88e6185_g1_vtu_stu_data_read(chip, regs);
if (err)
return err;
/* Extract PortState data */
for (i = 0; i < mv88e6xxx_num_ports(chip); ++i) {
unsigned int state_offset = (i % 4) * 4 + 2;
entry->state[i] = (regs[i / 4] >> state_offset) & 0x3;
if (state)
state[i] = (regs[i / 4] >> state_offset) & 0x3;
}
return 0;
}
static int mv88e6185_g1_vtu_data_write(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
u8 *member, u8 *state)
{
u16 regs[3] = { 0 };
int i;
@ -199,8 +184,11 @@ static int mv88e6185_g1_vtu_data_write(struct mv88e6xxx_chip *chip,
unsigned int member_offset = (i % 4) * 4;
unsigned int state_offset = member_offset + 2;
regs[i / 4] |= (entry->member[i] & 0x3) << member_offset;
regs[i / 4] |= (entry->state[i] & 0x3) << state_offset;
if (member)
regs[i / 4] |= (member[i] & 0x3) << member_offset;
if (state)
regs[i / 4] |= (state[i] & 0x3) << state_offset;
}
/* Write all 3 VTU/STU Data registers */
@ -268,48 +256,6 @@ static int mv88e6390_g1_vtu_data_write(struct mv88e6xxx_chip *chip, u8 *data)
/* VLAN Translation Unit Operations */
static int mv88e6xxx_g1_vtu_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
{
int err;
err = mv88e6xxx_g1_vtu_sid_write(chip, entry);
if (err)
return err;
err = mv88e6xxx_g1_vtu_op(chip, MV88E6XXX_G1_VTU_OP_STU_GET_NEXT);
if (err)
return err;
err = mv88e6xxx_g1_vtu_sid_read(chip, entry);
if (err)
return err;
return mv88e6xxx_g1_vtu_vid_read(chip, entry);
}
static int mv88e6xxx_g1_vtu_stu_get(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *vtu)
{
struct mv88e6xxx_vtu_entry stu;
int err;
err = mv88e6xxx_g1_vtu_sid_read(chip, vtu);
if (err)
return err;
stu.sid = vtu->sid - 1;
err = mv88e6xxx_g1_vtu_stu_getnext(chip, &stu);
if (err)
return err;
if (stu.sid != vtu->sid || !stu.valid)
return -EINVAL;
return 0;
}
int mv88e6xxx_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
{
@ -327,7 +273,7 @@ int mv88e6xxx_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
* write the VID only once, when the entry is given as invalid.
*/
if (!entry->valid) {
err = mv88e6xxx_g1_vtu_vid_write(chip, entry);
err = mv88e6xxx_g1_vtu_vid_write(chip, false, entry->vid);
if (err)
return err;
}
@ -336,7 +282,7 @@ int mv88e6xxx_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
if (err)
return err;
return mv88e6xxx_g1_vtu_vid_read(chip, entry);
return mv88e6xxx_g1_vtu_vid_read(chip, &entry->valid, &entry->vid);
}
int mv88e6185_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
@ -350,11 +296,7 @@ int mv88e6185_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
return err;
if (entry->valid) {
err = mv88e6185_g1_vtu_data_read(chip, entry);
if (err)
return err;
err = mv88e6185_g1_stu_data_read(chip, entry);
err = mv88e6185_g1_vtu_data_read(chip, entry->member, entry->state);
if (err)
return err;
@ -384,7 +326,7 @@ int mv88e6352_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
return err;
if (entry->valid) {
err = mv88e6185_g1_vtu_data_read(chip, entry);
err = mv88e6185_g1_vtu_data_read(chip, entry->member, NULL);
if (err)
return err;
@ -392,12 +334,7 @@ int mv88e6352_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
if (err)
return err;
/* Fetch VLAN PortState data from the STU */
err = mv88e6xxx_g1_vtu_stu_get(chip, entry);
if (err)
return err;
err = mv88e6185_g1_stu_data_read(chip, entry);
err = mv88e6xxx_g1_vtu_sid_read(chip, &entry->sid);
if (err)
return err;
}
@ -420,18 +357,13 @@ int mv88e6390_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
if (err)
return err;
/* Fetch VLAN PortState data from the STU */
err = mv88e6xxx_g1_vtu_stu_get(chip, entry);
if (err)
return err;
err = mv88e6390_g1_vtu_data_read(chip, entry->state);
if (err)
return err;
err = mv88e6xxx_g1_vtu_fid_read(chip, entry);
if (err)
return err;
err = mv88e6xxx_g1_vtu_sid_read(chip, &entry->sid);
if (err)
return err;
}
return 0;
@ -447,12 +379,12 @@ int mv88e6185_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
if (err)
return err;
err = mv88e6xxx_g1_vtu_vid_write(chip, entry);
err = mv88e6xxx_g1_vtu_vid_write(chip, entry->valid, entry->vid);
if (err)
return err;
if (entry->valid) {
err = mv88e6185_g1_vtu_data_write(chip, entry);
err = mv88e6185_g1_vtu_data_write(chip, entry->member, entry->state);
if (err)
return err;
@ -479,29 +411,23 @@ int mv88e6352_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
if (err)
return err;
err = mv88e6xxx_g1_vtu_vid_write(chip, entry);
err = mv88e6xxx_g1_vtu_vid_write(chip, entry->valid, entry->vid);
if (err)
return err;
if (entry->valid) {
/* Write MemberTag and PortState data */
err = mv88e6185_g1_vtu_data_write(chip, entry);
if (err)
return err;
err = mv88e6xxx_g1_vtu_sid_write(chip, entry);
if (err)
return err;
/* Load STU entry */
err = mv88e6xxx_g1_vtu_op(chip,
MV88E6XXX_G1_VTU_OP_STU_LOAD_PURGE);
/* Write MemberTag data */
err = mv88e6185_g1_vtu_data_write(chip, entry->member, NULL);
if (err)
return err;
err = mv88e6xxx_g1_vtu_fid_write(chip, entry);
if (err)
return err;
err = mv88e6xxx_g1_vtu_sid_write(chip, entry->sid);
if (err)
return err;
}
/* Load/Purge VTU entry */
@ -517,26 +443,11 @@ int mv88e6390_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
if (err)
return err;
err = mv88e6xxx_g1_vtu_vid_write(chip, entry);
err = mv88e6xxx_g1_vtu_vid_write(chip, entry->valid, entry->vid);
if (err)
return err;
if (entry->valid) {
/* Write PortState data */
err = mv88e6390_g1_vtu_data_write(chip, entry->state);
if (err)
return err;
err = mv88e6xxx_g1_vtu_sid_write(chip, entry);
if (err)
return err;
/* Load STU entry */
err = mv88e6xxx_g1_vtu_op(chip,
MV88E6XXX_G1_VTU_OP_STU_LOAD_PURGE);
if (err)
return err;
/* Write MemberTag data */
err = mv88e6390_g1_vtu_data_write(chip, entry->member);
if (err)
@ -545,6 +456,10 @@ int mv88e6390_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
err = mv88e6xxx_g1_vtu_fid_write(chip, entry);
if (err)
return err;
err = mv88e6xxx_g1_vtu_sid_write(chip, entry->sid);
if (err)
return err;
}
/* Load/Purge VTU entry */
@ -562,13 +477,139 @@ int mv88e6xxx_g1_vtu_flush(struct mv88e6xxx_chip *chip)
return mv88e6xxx_g1_vtu_op(chip, MV88E6XXX_G1_VTU_OP_FLUSH_ALL);
}
/* Spanning Tree Unit Operations */
int mv88e6xxx_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry)
{
int err;
err = mv88e6xxx_g1_vtu_op_wait(chip);
if (err)
return err;
/* To get the next higher active SID, the STU GetNext operation can be
* started again without setting the SID registers since it already
* contains the last SID.
*
* To save a few hardware accesses and abstract this to the caller,
* write the SID only once, when the entry is given as invalid.
*/
if (!entry->valid) {
err = mv88e6xxx_g1_vtu_sid_write(chip, entry->sid);
if (err)
return err;
}
err = mv88e6xxx_g1_vtu_op(chip, MV88E6XXX_G1_VTU_OP_STU_GET_NEXT);
if (err)
return err;
err = mv88e6xxx_g1_vtu_vid_read(chip, &entry->valid, NULL);
if (err)
return err;
if (entry->valid) {
err = mv88e6xxx_g1_vtu_sid_read(chip, &entry->sid);
if (err)
return err;
}
return 0;
}
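The single-SID-write convention documented in the comment above can be illustrated with a small userspace mock (hypothetical names, not driver code): the SID register is written only when the caller passes an invalid entry, and successive GetNext calls continue from the last SID the "hardware" returned.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_SID 64

static bool loaded[NUM_SID]; /* mock: which SIDs hold valid STU entries */
static uint8_t sid_reg;      /* mock of the global SID register */

struct stu_entry {
	bool valid;
	uint8_t sid;
};

static void stu_getnext(struct stu_entry *entry)
{
	/* Only load the SID register when the caller marks the entry
	 * invalid; otherwise continue from where the last op stopped. */
	if (!entry->valid)
		sid_reg = entry->sid;

	/* All-ones (63) is the conventional starting SID in this sketch:
	 * searching "above" it wraps to the lowest loaded entry. */
	uint8_t start = (sid_reg == NUM_SID - 1) ? 0 : sid_reg + 1;

	for (uint8_t sid = start; sid < NUM_SID; sid++) {
		if (loaded[sid]) {
			entry->valid = true;
			entry->sid = sid;
			sid_reg = sid;
			return;
		}
	}
	entry->valid = false;
}

/* Walk every loaded entry: start from an invalid entry holding the
 * all-ones SID, then iterate until GetNext reports an invalid entry. */
static int stu_count(void)
{
	struct stu_entry e = { .valid = false, .sid = NUM_SID - 1 };
	int n = 0;

	for (;;) {
		stu_getnext(&e);
		if (!e.valid)
			return n;
		n++;
	}
}
```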
int mv88e6352_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry)
{
int err;
err = mv88e6xxx_g1_stu_getnext(chip, entry);
if (err)
return err;
if (!entry->valid)
return 0;
return mv88e6185_g1_vtu_data_read(chip, NULL, entry->state);
}
int mv88e6390_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry)
{
int err;
err = mv88e6xxx_g1_stu_getnext(chip, entry);
if (err)
return err;
if (!entry->valid)
return 0;
return mv88e6390_g1_vtu_data_read(chip, entry->state);
}
int mv88e6352_g1_stu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry)
{
int err;
err = mv88e6xxx_g1_vtu_op_wait(chip);
if (err)
return err;
err = mv88e6xxx_g1_vtu_vid_write(chip, entry->valid, 0);
if (err)
return err;
err = mv88e6xxx_g1_vtu_sid_write(chip, entry->sid);
if (err)
return err;
if (entry->valid) {
err = mv88e6185_g1_vtu_data_write(chip, NULL, entry->state);
if (err)
return err;
}
/* Load/Purge STU entry */
return mv88e6xxx_g1_vtu_op(chip, MV88E6XXX_G1_VTU_OP_STU_LOAD_PURGE);
}
int mv88e6390_g1_stu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry)
{
int err;
err = mv88e6xxx_g1_vtu_op_wait(chip);
if (err)
return err;
err = mv88e6xxx_g1_vtu_vid_write(chip, entry->valid, 0);
if (err)
return err;
err = mv88e6xxx_g1_vtu_sid_write(chip, entry->sid);
if (err)
return err;
if (entry->valid) {
err = mv88e6390_g1_vtu_data_write(chip, entry->state);
if (err)
return err;
}
/* Load/Purge STU entry */
return mv88e6xxx_g1_vtu_op(chip, MV88E6XXX_G1_VTU_OP_STU_LOAD_PURGE);
}
/* VTU Violation Management */
static irqreturn_t mv88e6xxx_g1_vtu_prob_irq_thread_fn(int irq, void *dev_id)
{
struct mv88e6xxx_chip *chip = dev_id;
struct mv88e6xxx_vtu_entry entry;
u16 val, vid;
int spid;
int err;
u16 val;
mv88e6xxx_reg_lock(chip);
@@ -580,7 +621,7 @@ static irqreturn_t mv88e6xxx_g1_vtu_prob_irq_thread_fn(int irq, void *dev_id)
if (err)
goto out;
err = mv88e6xxx_g1_vtu_vid_read(chip, &entry);
err = mv88e6xxx_g1_vtu_vid_read(chip, NULL, &vid);
if (err)
goto out;
@@ -588,13 +629,13 @@ static irqreturn_t mv88e6xxx_g1_vtu_prob_irq_thread_fn(int irq, void *dev_id)
if (val & MV88E6XXX_G1_VTU_OP_MEMBER_VIOLATION) {
dev_err_ratelimited(chip->dev, "VTU member violation for vid %d, source port %d\n",
entry.vid, spid);
vid, spid);
chip->ports[spid].vtu_member_violation++;
}
if (val & MV88E6XXX_G1_VTU_OP_MISS_VIOLATION) {
dev_dbg_ratelimited(chip->dev, "VTU miss violation for vid %d, source port %d\n",
entry.vid, spid);
vid, spid);
chip->ports[spid].vtu_miss_violation++;
}


@@ -119,6 +119,9 @@ int br_vlan_get_info(const struct net_device *dev, u16 vid,
struct bridge_vlan_info *p_vinfo);
int br_vlan_get_info_rcu(const struct net_device *dev, u16 vid,
struct bridge_vlan_info *p_vinfo);
bool br_mst_enabled(const struct net_device *dev);
int br_mst_get_info(const struct net_device *dev, u16 msti, unsigned long *vids);
int br_mst_get_state(const struct net_device *dev, u16 msti, u8 *state);
#else
static inline bool br_vlan_enabled(const struct net_device *dev)
{
@@ -151,6 +154,22 @@ static inline int br_vlan_get_info_rcu(const struct net_device *dev, u16 vid,
{
return -EINVAL;
}
static inline bool br_mst_enabled(const struct net_device *dev)
{
return false;
}
static inline int br_mst_get_info(const struct net_device *dev, u16 msti,
unsigned long *vids)
{
return -EINVAL;
}
static inline int br_mst_get_state(const struct net_device *dev, u16 msti,
u8 *state)
{
return -EINVAL;
}
#endif
#if IS_ENABLED(CONFIG_BRIDGE)


@@ -957,7 +957,10 @@ struct dsa_switch_ops {
struct dsa_bridge bridge);
void (*port_stp_state_set)(struct dsa_switch *ds, int port,
u8 state);
int (*port_mst_state_set)(struct dsa_switch *ds, int port,
const struct switchdev_mst_state *state);
void (*port_fast_age)(struct dsa_switch *ds, int port);
int (*port_vlan_fast_age)(struct dsa_switch *ds, int port, u16 vid);
int (*port_pre_bridge_flags)(struct dsa_switch *ds, int port,
struct switchdev_brport_flags flags,
struct netlink_ext_ack *extack);
@@ -976,6 +979,9 @@ struct dsa_switch_ops {
struct netlink_ext_ack *extack);
int (*port_vlan_del)(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_vlan *vlan);
int (*vlan_msti_set)(struct dsa_switch *ds, struct dsa_bridge bridge,
const struct switchdev_vlan_msti *msti);
/*
* Forwarding database
*/


@@ -19,6 +19,7 @@
enum switchdev_attr_id {
SWITCHDEV_ATTR_ID_UNDEFINED,
SWITCHDEV_ATTR_ID_PORT_STP_STATE,
SWITCHDEV_ATTR_ID_PORT_MST_STATE,
SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS,
SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS,
SWITCHDEV_ATTR_ID_PORT_MROUTER,
@@ -27,7 +28,14 @@ enum switchdev_attr_id {
SWITCHDEV_ATTR_ID_BRIDGE_VLAN_PROTOCOL,
SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED,
SWITCHDEV_ATTR_ID_BRIDGE_MROUTER,
SWITCHDEV_ATTR_ID_BRIDGE_MST,
SWITCHDEV_ATTR_ID_MRP_PORT_ROLE,
SWITCHDEV_ATTR_ID_VLAN_MSTI,
};
struct switchdev_mst_state {
u16 msti;
u8 state;
};
struct switchdev_brport_flags {
@@ -35,6 +43,11 @@ struct switchdev_brport_flags {
unsigned long mask;
};
struct switchdev_vlan_msti {
u16 vid;
u16 msti;
};
struct switchdev_attr {
struct net_device *orig_dev;
enum switchdev_attr_id id;
@@ -43,13 +56,16 @@ struct switchdev_attr {
void (*complete)(struct net_device *dev, int err, void *priv);
union {
u8 stp_state; /* PORT_STP_STATE */
struct switchdev_mst_state mst_state; /* PORT_MST_STATE */
struct switchdev_brport_flags brport_flags; /* PORT_BRIDGE_FLAGS */
bool mrouter; /* PORT_MROUTER */
clock_t ageing_time; /* BRIDGE_AGEING_TIME */
bool vlan_filtering; /* BRIDGE_VLAN_FILTERING */
u16 vlan_protocol; /* BRIDGE_VLAN_PROTOCOL */
bool mst; /* BRIDGE_MST */
bool mc_disabled; /* MC_DISABLED */
u8 mrp_port_role; /* MRP_PORT_ROLE */
struct switchdev_vlan_msti vlan_msti; /* VLAN_MSTI */
} u;
};


@@ -122,6 +122,7 @@ enum {
IFLA_BRIDGE_VLAN_TUNNEL_INFO,
IFLA_BRIDGE_MRP,
IFLA_BRIDGE_CFM,
IFLA_BRIDGE_MST,
__IFLA_BRIDGE_MAX,
};
#define IFLA_BRIDGE_MAX (__IFLA_BRIDGE_MAX - 1)
@@ -453,6 +454,21 @@ enum {
#define IFLA_BRIDGE_CFM_CC_PEER_STATUS_MAX (__IFLA_BRIDGE_CFM_CC_PEER_STATUS_MAX - 1)
enum {
IFLA_BRIDGE_MST_UNSPEC,
IFLA_BRIDGE_MST_ENTRY,
__IFLA_BRIDGE_MST_MAX,
};
#define IFLA_BRIDGE_MST_MAX (__IFLA_BRIDGE_MST_MAX - 1)
enum {
IFLA_BRIDGE_MST_ENTRY_UNSPEC,
IFLA_BRIDGE_MST_ENTRY_MSTI,
IFLA_BRIDGE_MST_ENTRY_STATE,
__IFLA_BRIDGE_MST_ENTRY_MAX,
};
#define IFLA_BRIDGE_MST_ENTRY_MAX (__IFLA_BRIDGE_MST_ENTRY_MAX - 1)
struct bridge_stp_xstats {
__u64 transition_blk;
__u64 transition_fwd;
@@ -564,6 +580,7 @@ enum {
BRIDGE_VLANDB_GOPTS_MCAST_QUERIER,
BRIDGE_VLANDB_GOPTS_MCAST_ROUTER_PORTS,
BRIDGE_VLANDB_GOPTS_MCAST_QUERIER_STATE,
BRIDGE_VLANDB_GOPTS_MSTI,
__BRIDGE_VLANDB_GOPTS_MAX
};
#define BRIDGE_VLANDB_GOPTS_MAX (__BRIDGE_VLANDB_GOPTS_MAX - 1)
@@ -759,6 +776,7 @@ struct br_mcast_stats {
enum br_boolopt_id {
BR_BOOLOPT_NO_LL_LEARN,
BR_BOOLOPT_MCAST_VLAN_SNOOPING,
BR_BOOLOPT_MST_ENABLE,
BR_BOOLOPT_MAX
};


@@ -817,6 +817,7 @@ enum {
#define RTEXT_FILTER_MRP (1 << 4)
#define RTEXT_FILTER_CFM_CONFIG (1 << 5)
#define RTEXT_FILTER_CFM_STATUS (1 << 6)
#define RTEXT_FILTER_MST (1 << 7)
/* End of information exported to user level */


@@ -20,7 +20,7 @@ obj-$(CONFIG_BRIDGE_NETFILTER) += br_netfilter.o
bridge-$(CONFIG_BRIDGE_IGMP_SNOOPING) += br_multicast.o br_mdb.o br_multicast_eht.o
bridge-$(CONFIG_BRIDGE_VLAN_FILTERING) += br_vlan.o br_vlan_tunnel.o br_vlan_options.o
bridge-$(CONFIG_BRIDGE_VLAN_FILTERING) += br_vlan.o br_vlan_tunnel.o br_vlan_options.o br_mst.o
bridge-$(CONFIG_NET_SWITCHDEV) += br_switchdev.o


@@ -265,6 +265,9 @@ int br_boolopt_toggle(struct net_bridge *br, enum br_boolopt_id opt, bool on,
case BR_BOOLOPT_MCAST_VLAN_SNOOPING:
err = br_multicast_toggle_vlan_snooping(br, on, extack);
break;
case BR_BOOLOPT_MST_ENABLE:
err = br_mst_set_enabled(br, on, extack);
break;
default:
/* shouldn't be called with unsupported options */
WARN_ON(1);
@@ -281,6 +284,8 @@ int br_boolopt_get(const struct net_bridge *br, enum br_boolopt_id opt)
return br_opt_get(br, BROPT_NO_LL_LEARN);
case BR_BOOLOPT_MCAST_VLAN_SNOOPING:
return br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED);
case BR_BOOLOPT_MST_ENABLE:
return br_opt_get(br, BROPT_MST_ENABLED);
default:
/* shouldn't be called with unsupported options */
WARN_ON(1);


@@ -78,13 +78,22 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
u16 vid = 0;
u8 state;
if (!p || p->state == BR_STATE_DISABLED)
if (!p)
goto drop;
br = p->br;
if (br_mst_is_enabled(br)) {
state = BR_STATE_FORWARDING;
} else {
if (p->state == BR_STATE_DISABLED)
goto drop;
state = p->state;
}
brmctx = &p->br->multicast_ctx;
pmctx = &p->multicast_ctx;
state = p->state;
if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid,
&state, &vlan))
goto out;
@@ -370,9 +379,13 @@ static rx_handler_result_t br_handle_frame(struct sk_buff **pskb)
return RX_HANDLER_PASS;
forward:
if (br_mst_is_enabled(p->br))
goto defer_stp_filtering;
switch (p->state) {
case BR_STATE_FORWARDING:
case BR_STATE_LEARNING:
defer_stp_filtering:
if (ether_addr_equal(p->br->dev->dev_addr, dest))
skb->pkt_type = PACKET_HOST;

net/bridge/br_mst.c (new file)

@@ -0,0 +1,357 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Bridge Multiple Spanning Tree Support
*
* Authors:
* Tobias Waldekranz <tobias@waldekranz.com>
*/
#include <linux/kernel.h>
#include <net/switchdev.h>
#include "br_private.h"
DEFINE_STATIC_KEY_FALSE(br_mst_used);
bool br_mst_enabled(const struct net_device *dev)
{
if (!netif_is_bridge_master(dev))
return false;
return br_opt_get(netdev_priv(dev), BROPT_MST_ENABLED);
}
EXPORT_SYMBOL_GPL(br_mst_enabled);
int br_mst_get_info(const struct net_device *dev, u16 msti, unsigned long *vids)
{
const struct net_bridge_vlan_group *vg;
const struct net_bridge_vlan *v;
const struct net_bridge *br;
ASSERT_RTNL();
if (!netif_is_bridge_master(dev))
return -EINVAL;
br = netdev_priv(dev);
if (!br_opt_get(br, BROPT_MST_ENABLED))
return -EINVAL;
vg = br_vlan_group(br);
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (v->msti == msti)
__set_bit(v->vid, vids);
}
return 0;
}
EXPORT_SYMBOL_GPL(br_mst_get_info);
int br_mst_get_state(const struct net_device *dev, u16 msti, u8 *state)
{
const struct net_bridge_port *p = NULL;
const struct net_bridge_vlan_group *vg;
const struct net_bridge_vlan *v;
ASSERT_RTNL();
p = br_port_get_check_rtnl(dev);
if (!p || !br_opt_get(p->br, BROPT_MST_ENABLED))
return -EINVAL;
vg = nbp_vlan_group(p);
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (v->brvlan->msti == msti) {
*state = v->state;
return 0;
}
}
return -ENOENT;
}
EXPORT_SYMBOL_GPL(br_mst_get_state);
static void br_mst_vlan_set_state(struct net_bridge_port *p, struct net_bridge_vlan *v,
u8 state)
{
struct net_bridge_vlan_group *vg = nbp_vlan_group(p);
if (v->state == state)
return;
br_vlan_set_state(v, state);
if (v->vid == vg->pvid)
br_vlan_set_pvid_state(vg, state);
}
int br_mst_set_state(struct net_bridge_port *p, u16 msti, u8 state,
struct netlink_ext_ack *extack)
{
struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_ID_PORT_MST_STATE,
.orig_dev = p->dev,
.u.mst_state = {
.msti = msti,
.state = state,
},
};
struct net_bridge_vlan_group *vg;
struct net_bridge_vlan *v;
int err;
vg = nbp_vlan_group(p);
if (!vg)
return 0;
/* MSTI 0 (CST) state changes are notified via the regular
* SWITCHDEV_ATTR_ID_PORT_STP_STATE.
*/
if (msti) {
err = switchdev_port_attr_set(p->dev, &attr, extack);
if (err && err != -EOPNOTSUPP)
return err;
}
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (v->brvlan->msti != msti)
continue;
br_mst_vlan_set_state(p, v, state);
}
return 0;
}
static void br_mst_vlan_sync_state(struct net_bridge_vlan *pv, u16 msti)
{
struct net_bridge_vlan_group *vg = nbp_vlan_group(pv->port);
struct net_bridge_vlan *v;
list_for_each_entry(v, &vg->vlan_list, vlist) {
/* If this port already has a defined state in this
* MSTI (through some other VLAN membership), inherit
* it.
*/
if (v != pv && v->brvlan->msti == msti) {
br_mst_vlan_set_state(pv->port, pv, v->state);
return;
}
}
/* Otherwise, start out in a new MSTI with all ports disabled. */
return br_mst_vlan_set_state(pv->port, pv, BR_STATE_DISABLED);
}
int br_mst_vlan_set_msti(struct net_bridge_vlan *mv, u16 msti)
{
struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_ID_VLAN_MSTI,
.orig_dev = mv->br->dev,
.u.vlan_msti = {
.vid = mv->vid,
.msti = msti,
},
};
struct net_bridge_vlan_group *vg;
struct net_bridge_vlan *pv;
struct net_bridge_port *p;
int err;
if (mv->msti == msti)
return 0;
err = switchdev_port_attr_set(mv->br->dev, &attr, NULL);
if (err && err != -EOPNOTSUPP)
return err;
mv->msti = msti;
list_for_each_entry(p, &mv->br->port_list, list) {
vg = nbp_vlan_group(p);
pv = br_vlan_find(vg, mv->vid);
if (pv)
br_mst_vlan_sync_state(pv, msti);
}
return 0;
}
void br_mst_vlan_init_state(struct net_bridge_vlan *v)
{
/* VLANs always start out in MSTI 0 (CST) */
v->msti = 0;
if (br_vlan_is_master(v))
v->state = BR_STATE_FORWARDING;
else
v->state = v->port->state;
}
int br_mst_set_enabled(struct net_bridge *br, bool on,
struct netlink_ext_ack *extack)
{
struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_ID_BRIDGE_MST,
.orig_dev = br->dev,
.u.mst = on,
};
struct net_bridge_vlan_group *vg;
struct net_bridge_port *p;
int err;
list_for_each_entry(p, &br->port_list, list) {
vg = nbp_vlan_group(p);
if (!vg->num_vlans)
continue;
NL_SET_ERR_MSG(extack,
"MST mode can't be changed while VLANs exist");
return -EBUSY;
}
if (br_opt_get(br, BROPT_MST_ENABLED) == on)
return 0;
err = switchdev_port_attr_set(br->dev, &attr, extack);
if (err && err != -EOPNOTSUPP)
return err;
if (on)
static_branch_enable(&br_mst_used);
else
static_branch_disable(&br_mst_used);
br_opt_toggle(br, BROPT_MST_ENABLED, on);
return 0;
}
size_t br_mst_info_size(const struct net_bridge_vlan_group *vg)
{
DECLARE_BITMAP(seen, VLAN_N_VID) = { 0 };
const struct net_bridge_vlan *v;
size_t sz;
/* IFLA_BRIDGE_MST */
sz = nla_total_size(0);
list_for_each_entry_rcu(v, &vg->vlan_list, vlist) {
if (test_bit(v->brvlan->msti, seen))
continue;
/* IFLA_BRIDGE_MST_ENTRY */
sz += nla_total_size(0) +
/* IFLA_BRIDGE_MST_ENTRY_MSTI */
nla_total_size(sizeof(u16)) +
/* IFLA_BRIDGE_MST_ENTRY_STATE */
nla_total_size(sizeof(u8));
__set_bit(v->brvlan->msti, seen);
}
return sz;
}
int br_mst_fill_info(struct sk_buff *skb,
const struct net_bridge_vlan_group *vg)
{
DECLARE_BITMAP(seen, VLAN_N_VID) = { 0 };
const struct net_bridge_vlan *v;
struct nlattr *nest;
int err = 0;
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (test_bit(v->brvlan->msti, seen))
continue;
nest = nla_nest_start_noflag(skb, IFLA_BRIDGE_MST_ENTRY);
if (!nest ||
nla_put_u16(skb, IFLA_BRIDGE_MST_ENTRY_MSTI, v->brvlan->msti) ||
nla_put_u8(skb, IFLA_BRIDGE_MST_ENTRY_STATE, v->state)) {
err = -EMSGSIZE;
break;
}
nla_nest_end(skb, nest);
__set_bit(v->brvlan->msti, seen);
}
return err;
}
static const struct nla_policy br_mst_nl_policy[IFLA_BRIDGE_MST_ENTRY_MAX + 1] = {
[IFLA_BRIDGE_MST_ENTRY_MSTI] = NLA_POLICY_RANGE(NLA_U16,
1, /* 0 reserved for CST */
VLAN_N_VID - 1),
[IFLA_BRIDGE_MST_ENTRY_STATE] = NLA_POLICY_RANGE(NLA_U8,
BR_STATE_DISABLED,
BR_STATE_BLOCKING),
};
static int br_mst_process_one(struct net_bridge_port *p,
const struct nlattr *attr,
struct netlink_ext_ack *extack)
{
struct nlattr *tb[IFLA_BRIDGE_MST_ENTRY_MAX + 1];
u16 msti;
u8 state;
int err;
err = nla_parse_nested(tb, IFLA_BRIDGE_MST_ENTRY_MAX, attr,
br_mst_nl_policy, extack);
if (err)
return err;
if (!tb[IFLA_BRIDGE_MST_ENTRY_MSTI]) {
NL_SET_ERR_MSG_MOD(extack, "MSTI not specified");
return -EINVAL;
}
if (!tb[IFLA_BRIDGE_MST_ENTRY_STATE]) {
NL_SET_ERR_MSG_MOD(extack, "State not specified");
return -EINVAL;
}
msti = nla_get_u16(tb[IFLA_BRIDGE_MST_ENTRY_MSTI]);
state = nla_get_u8(tb[IFLA_BRIDGE_MST_ENTRY_STATE]);
return br_mst_set_state(p, msti, state, extack);
}
int br_mst_process(struct net_bridge_port *p, const struct nlattr *mst_attr,
struct netlink_ext_ack *extack)
{
struct nlattr *attr;
int err, msts = 0;
int rem;
if (!br_opt_get(p->br, BROPT_MST_ENABLED)) {
NL_SET_ERR_MSG_MOD(extack, "Can't modify MST state when MST is disabled");
return -EBUSY;
}
nla_for_each_nested(attr, mst_attr, rem) {
switch (nla_type(attr)) {
case IFLA_BRIDGE_MST_ENTRY:
err = br_mst_process_one(p, attr, extack);
break;
default:
continue;
}
msts++;
if (err)
break;
}
if (!msts) {
NL_SET_ERR_MSG_MOD(extack, "Found no MST entries to process");
err = -EINVAL;
}
return err;
}
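Taken together, the hooks in br_mst.c above are meant to be driven from user space roughly as follows. This is a sketch using the iproute2 support that accompanies this series; the exact command syntax is an assumption and may differ in your iproute2 version.

```shell
# Create a VLAN-aware bridge and opt in to MST mode
# (BR_BOOLOPT_MST_ENABLE) before any VLANs are configured.
ip link add dev br0 up type bridge vlan_filtering 1
ip link set dev br0 type bridge mst_enabled 1
ip link set dev eth0 up master br0

# Map VLAN 20 to MSTI 100 (BRIDGE_VLANDB_GOPTS_MSTI).
# VLANs start out in MSTI 0, the CST.
bridge vlan add dev eth0 vid 20
bridge vlan global set dev br0 vid 20 msti 100

# Drive and inspect the per-MSTI port states (IFLA_BRIDGE_MST).
bridge mst set dev eth0 msti 100 state blocking
bridge mst show
```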


@@ -119,6 +119,9 @@ static size_t br_get_link_af_size_filtered(const struct net_device *dev,
/* Each VLAN is returned in bridge_vlan_info along with flags */
vinfo_sz += num_vlan_infos * nla_total_size(sizeof(struct bridge_vlan_info));
if (filter_mask & RTEXT_FILTER_MST)
vinfo_sz += br_mst_info_size(vg);
if (!(filter_mask & RTEXT_FILTER_CFM_STATUS))
return vinfo_sz;
@@ -485,7 +488,8 @@ static int br_fill_ifinfo(struct sk_buff *skb,
RTEXT_FILTER_BRVLAN_COMPRESSED |
RTEXT_FILTER_MRP |
RTEXT_FILTER_CFM_CONFIG |
RTEXT_FILTER_CFM_STATUS)) {
RTEXT_FILTER_CFM_STATUS |
RTEXT_FILTER_MST)) {
af = nla_nest_start_noflag(skb, IFLA_AF_SPEC);
if (!af)
goto nla_put_failure;
@@ -564,7 +568,28 @@ static int br_fill_ifinfo(struct sk_buff *skb,
nla_nest_end(skb, cfm_nest);
}
if ((filter_mask & RTEXT_FILTER_MST) &&
br_opt_get(br, BROPT_MST_ENABLED) && port) {
const struct net_bridge_vlan_group *vg = nbp_vlan_group(port);
struct nlattr *mst_nest;
int err;
if (!vg || !vg->num_vlans)
goto done;
mst_nest = nla_nest_start(skb, IFLA_BRIDGE_MST);
if (!mst_nest)
goto nla_put_failure;
err = br_mst_fill_info(skb, vg);
if (err)
goto nla_put_failure;
nla_nest_end(skb, mst_nest);
}
done:
if (af)
nla_nest_end(skb, af);
nlmsg_end(skb, nlh);
@@ -803,6 +828,23 @@ static int br_afspec(struct net_bridge *br,
if (err)
return err;
break;
case IFLA_BRIDGE_MST:
if (!p) {
NL_SET_ERR_MSG(extack,
"MST states can only be set on bridge ports");
return -EINVAL;
}
if (cmd != RTM_SETLINK) {
NL_SET_ERR_MSG(extack,
"MST states can only be set through RTM_SETLINK");
return -EINVAL;
}
err = br_mst_process(p, attr, extack);
if (err)
return err;
break;
}
}


@@ -178,6 +178,7 @@ enum {
* @br_mcast_ctx: if MASTER flag set, this is the global vlan multicast context
* @port_mcast_ctx: if MASTER flag unset, this is the per-port/vlan multicast
* context
* @msti: if MASTER flag set, this holds the VLAN's MST instance
* @vlist: sorted list of VLAN entries
* @rcu: used for entry destruction
*
@@ -210,6 +211,8 @@ struct net_bridge_vlan {
struct net_bridge_mcast_port port_mcast_ctx;
};
u16 msti;
struct list_head vlist;
struct rcu_head rcu;
@@ -445,6 +448,7 @@ enum net_bridge_opts {
BROPT_NO_LL_LEARN,
BROPT_VLAN_BRIDGE_BINDING,
BROPT_MCAST_VLAN_SNOOPING_ENABLED,
BROPT_MST_ENABLED,
};
struct net_bridge {
@@ -1765,6 +1769,63 @@ static inline bool br_vlan_state_allowed(u8 state, bool learn_allow)
}
#endif
/* br_mst.c */
#ifdef CONFIG_BRIDGE_VLAN_FILTERING
DECLARE_STATIC_KEY_FALSE(br_mst_used);
static inline bool br_mst_is_enabled(struct net_bridge *br)
{
return static_branch_unlikely(&br_mst_used) &&
br_opt_get(br, BROPT_MST_ENABLED);
}
int br_mst_set_state(struct net_bridge_port *p, u16 msti, u8 state,
struct netlink_ext_ack *extack);
int br_mst_vlan_set_msti(struct net_bridge_vlan *v, u16 msti);
void br_mst_vlan_init_state(struct net_bridge_vlan *v);
int br_mst_set_enabled(struct net_bridge *br, bool on,
struct netlink_ext_ack *extack);
size_t br_mst_info_size(const struct net_bridge_vlan_group *vg);
int br_mst_fill_info(struct sk_buff *skb,
const struct net_bridge_vlan_group *vg);
int br_mst_process(struct net_bridge_port *p, const struct nlattr *mst_attr,
struct netlink_ext_ack *extack);
#else
static inline bool br_mst_is_enabled(struct net_bridge *br)
{
return false;
}
static inline int br_mst_set_state(struct net_bridge_port *p, u16 msti,
u8 state, struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
static inline int br_mst_set_enabled(struct net_bridge *br, bool on,
struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
static inline size_t br_mst_info_size(const struct net_bridge_vlan_group *vg)
{
return 0;
}
static inline int br_mst_fill_info(struct sk_buff *skb,
const struct net_bridge_vlan_group *vg)
{
return -EOPNOTSUPP;
}
static inline int br_mst_process(struct net_bridge_port *p,
const struct nlattr *mst_attr,
struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
#endif
struct nf_br_ops {
int (*br_dev_xmit_hook)(struct sk_buff *skb);
};


@@ -43,6 +43,12 @@ void br_set_state(struct net_bridge_port *p, unsigned int state)
return;
p->state = state;
if (br_opt_get(p->br, BROPT_MST_ENABLED)) {
err = br_mst_set_state(p, 0, state, NULL);
if (err)
br_warn(p->br, "error setting MST state on port %u(%s)\n",
p->port_no, netdev_name(p->dev));
}
err = switchdev_port_attr_set(p->dev, &attr, NULL);
if (err && err != -EOPNOTSUPP)
br_warn(p->br, "error setting offload STP state on port %u(%s)\n",


@@ -331,6 +331,46 @@ br_switchdev_fdb_replay(const struct net_device *br_dev, const void *ctx,
return err;
}
static int br_switchdev_vlan_attr_replay(struct net_device *br_dev,
const void *ctx,
struct notifier_block *nb,
struct netlink_ext_ack *extack)
{
struct switchdev_notifier_port_attr_info attr_info = {
.info = {
.dev = br_dev,
.extack = extack,
.ctx = ctx,
},
};
struct net_bridge *br = netdev_priv(br_dev);
struct net_bridge_vlan_group *vg;
struct switchdev_attr attr;
struct net_bridge_vlan *v;
int err;
attr_info.attr = &attr;
attr.orig_dev = br_dev;
vg = br_vlan_group(br);
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (v->msti) {
attr.id = SWITCHDEV_ATTR_ID_VLAN_MSTI;
attr.u.vlan_msti.vid = v->vid;
attr.u.vlan_msti.msti = v->msti;
err = nb->notifier_call(nb, SWITCHDEV_PORT_ATTR_SET,
&attr_info);
err = notifier_to_errno(err);
if (err)
return err;
}
}
return 0;
}
static int
br_switchdev_vlan_replay_one(struct notifier_block *nb,
struct net_device *dev,
@@ -425,6 +465,12 @@ static int br_switchdev_vlan_replay(struct net_device *br_dev,
return err;
}
if (adding) {
err = br_switchdev_vlan_attr_replay(br_dev, ctx, nb, extack);
if (err)
return err;
}
return 0;
}


@@ -226,6 +226,24 @@ static void nbp_vlan_rcu_free(struct rcu_head *rcu)
kfree(v);
}
static void br_vlan_init_state(struct net_bridge_vlan *v)
{
struct net_bridge *br;
if (br_vlan_is_master(v))
br = v->br;
else
br = v->port->br;
if (br_opt_get(br, BROPT_MST_ENABLED)) {
br_mst_vlan_init_state(v);
return;
}
v->state = BR_STATE_FORWARDING;
v->msti = 0;
}
/* This is the shared VLAN add function which works for both ports and bridge
* devices. There are four possible calls to this function in terms of the
* vlan entry type:
@@ -322,7 +340,7 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags,
}
/* set the state before publishing */
v->state = BR_STATE_FORWARDING;
br_vlan_init_state(v);
err = rhashtable_lookup_insert_fast(&vg->vlan_hash, &v->vnode,
br_vlan_rht_params);


@@ -99,6 +99,11 @@ static int br_vlan_modify_state(struct net_bridge_vlan_group *vg,
return -EBUSY;
}
if (br_opt_get(br, BROPT_MST_ENABLED)) {
NL_SET_ERR_MSG_MOD(extack, "Can't modify vlan state directly when MST is enabled");
return -EBUSY;
}
if (v->state == state)
return 0;
@@ -291,6 +296,7 @@ bool br_vlan_global_opts_can_enter_range(const struct net_bridge_vlan *v_curr,
const struct net_bridge_vlan *r_end)
{
return v_curr->vid - r_end->vid == 1 &&
v_curr->msti == r_end->msti &&
((v_curr->priv_flags ^ r_end->priv_flags) &
BR_VLFLAG_GLOBAL_MCAST_ENABLED) == 0 &&
br_multicast_ctx_options_equal(&v_curr->br_mcast_ctx,
@@ -379,6 +385,9 @@ bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range,
#endif
#endif
if (nla_put_u16(skb, BRIDGE_VLANDB_GOPTS_MSTI, v_opts->msti))
goto out_err;
nla_nest_end(skb, nest);
return true;
@@ -410,6 +419,7 @@ static size_t rtnl_vlan_global_opts_nlmsg_size(const struct net_bridge_vlan *v)
+ nla_total_size(0) /* BRIDGE_VLANDB_GOPTS_MCAST_ROUTER_PORTS */
+ br_rports_size(&v->br_mcast_ctx) /* BRIDGE_VLANDB_GOPTS_MCAST_ROUTER_PORTS */
#endif
+ nla_total_size(sizeof(u16)) /* BRIDGE_VLANDB_GOPTS_MSTI */
+ nla_total_size(sizeof(u16)); /* BRIDGE_VLANDB_GOPTS_RANGE */
}
@@ -559,6 +569,15 @@ static int br_vlan_process_global_one_opts(const struct net_bridge *br,
}
#endif
#endif
if (tb[BRIDGE_VLANDB_GOPTS_MSTI]) {
u16 msti;
msti = nla_get_u16(tb[BRIDGE_VLANDB_GOPTS_MSTI]);
err = br_mst_vlan_set_msti(v, msti);
if (err)
return err;
*changed = true;
}
return 0;
}
@@ -578,6 +597,7 @@ static const struct nla_policy br_vlan_db_gpol[BRIDGE_VLANDB_GOPTS_MAX + 1] = {
[BRIDGE_VLANDB_GOPTS_MCAST_QUERIER_INTVL] = { .type = NLA_U64 },
[BRIDGE_VLANDB_GOPTS_MCAST_STARTUP_QUERY_INTVL] = { .type = NLA_U64 },
[BRIDGE_VLANDB_GOPTS_MCAST_QUERY_RESPONSE_INTVL] = { .type = NLA_U64 },
[BRIDGE_VLANDB_GOPTS_MSTI] = NLA_POLICY_MAX(NLA_U16, VLAN_N_VID - 1),
};
int br_vlan_rtm_process_global_options(struct net_device *dev,


@@ -215,6 +215,9 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops);
int dsa_port_set_state(struct dsa_port *dp, u8 state, bool do_fast_age);
int dsa_port_set_mst_state(struct dsa_port *dp,
const struct switchdev_mst_state *state,
struct netlink_ext_ack *extack);
int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy);
int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy);
void dsa_port_disable_rt(struct dsa_port *dp);
@@ -234,6 +237,10 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
struct netlink_ext_ack *extack);
bool dsa_port_skip_vlan_configuration(struct dsa_port *dp);
int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock);
int dsa_port_mst_enable(struct dsa_port *dp, bool on,
struct netlink_ext_ack *extack);
int dsa_port_vlan_msti(struct dsa_port *dp,
const struct switchdev_vlan_msti *msti);
int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,
bool targeted_match);
int dsa_port_fdb_add(struct dsa_port *dp, const unsigned char *addr,


@@ -30,12 +30,11 @@ static int dsa_port_notify(const struct dsa_port *dp, unsigned long e, void *v)
return dsa_tree_notify(dp->ds->dst, e, v);
}
static void dsa_port_notify_bridge_fdb_flush(const struct dsa_port *dp)
static void dsa_port_notify_bridge_fdb_flush(const struct dsa_port *dp, u16 vid)
{
struct net_device *brport_dev = dsa_port_to_bridge_port(dp);
struct switchdev_notifier_fdb_info info = {
/* flush all VLANs */
.vid = 0,
.vid = vid,
};
/* When the port becomes standalone it has already left the bridge.
@@ -57,7 +56,42 @@ static void dsa_port_fast_age(const struct dsa_port *dp)
ds->ops->port_fast_age(ds, dp->index);
dsa_port_notify_bridge_fdb_flush(dp);
/* flush all VLANs */
dsa_port_notify_bridge_fdb_flush(dp, 0);
}
static int dsa_port_vlan_fast_age(const struct dsa_port *dp, u16 vid)
{
struct dsa_switch *ds = dp->ds;
int err;
if (!ds->ops->port_vlan_fast_age)
return -EOPNOTSUPP;
err = ds->ops->port_vlan_fast_age(ds, dp->index, vid);
if (!err)
dsa_port_notify_bridge_fdb_flush(dp, vid);
return err;
}
static int dsa_port_msti_fast_age(const struct dsa_port *dp, u16 msti)
{
DECLARE_BITMAP(vids, VLAN_N_VID) = { 0 };
int err, vid;
err = br_mst_get_info(dsa_port_bridge_dev_get(dp), msti, vids);
if (err)
return err;
for_each_set_bit(vid, vids, VLAN_N_VID) {
err = dsa_port_vlan_fast_age(dp, vid);
if (err)
return err;
}
return 0;
}
static bool dsa_port_can_configure_learning(struct dsa_port *dp)
@@ -118,6 +152,42 @@ static void dsa_port_set_state_now(struct dsa_port *dp, u8 state,
pr_err("DSA: failed to set STP state %u (%d)\n", state, err);
}
int dsa_port_set_mst_state(struct dsa_port *dp,
const struct switchdev_mst_state *state,
struct netlink_ext_ack *extack)
{
struct dsa_switch *ds = dp->ds;
u8 prev_state;
int err;
if (!ds->ops->port_mst_state_set)
return -EOPNOTSUPP;
err = br_mst_get_state(dsa_port_to_bridge_port(dp), state->msti,
&prev_state);
if (err)
return err;
err = ds->ops->port_mst_state_set(ds, dp->index, state);
if (err)
return err;
if (!(dp->learning &&
(prev_state == BR_STATE_LEARNING ||
prev_state == BR_STATE_FORWARDING) &&
(state->state == BR_STATE_DISABLED ||
state->state == BR_STATE_BLOCKING ||
state->state == BR_STATE_LISTENING)))
return 0;
err = dsa_port_msti_fast_age(dp, state->msti);
if (err)
NL_SET_ERR_MSG_MOD(extack,
"Unable to flush associated VLANs");
return 0;
}
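The guard in dsa_port_set_mst_state() above only triggers a flush when software learning is on and the port drops out of a learning-capable state. Read as a standalone predicate (illustrative names, not kernel code; the enum values match the kernel's BR_STATE_* numbering):

```c
#include <stdbool.h>

/* Bridge port states, numbered as the kernel's BR_STATE_* values. */
enum br_state {
	BR_DISABLED,	/* 0 */
	BR_LISTENING,	/* 1 */
	BR_LEARNING,	/* 2 */
	BR_FORWARDING,	/* 3 */
	BR_BLOCKING,	/* 4 */
};

/* FDB entries learned in an MSTI only need flushing when the port had
 * learning enabled and transitions from a state in which addresses
 * could be learned to one in which traffic must stop flowing. */
static bool mst_needs_fast_age(bool learning, enum br_state prev,
			       enum br_state next)
{
	if (!learning)
		return false;
	if (prev != BR_LEARNING && prev != BR_FORWARDING)
		return false;
	return next == BR_DISABLED || next == BR_BLOCKING ||
	       next == BR_LISTENING;
}
```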
int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy)
{
struct dsa_switch *ds = dp->ds;
@@ -321,6 +391,16 @@ static void dsa_port_bridge_destroy(struct dsa_port *dp,
kfree(bridge);
}
static bool dsa_port_supports_mst(struct dsa_port *dp)
{
struct dsa_switch *ds = dp->ds;
return ds->ops->vlan_msti_set &&
ds->ops->port_mst_state_set &&
ds->ops->port_vlan_fast_age &&
dsa_port_can_configure_learning(dp);
}
int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
struct netlink_ext_ack *extack)
{
@@ -334,6 +414,9 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
struct net_device *brport_dev;
int err;
if (br_mst_enabled(br) && !dsa_port_supports_mst(dp))
return -EOPNOTSUPP;
/* Here the interface is already bridged. Reflect the current
* configuration so that drivers can program their chips accordingly.
*/
@@ -735,6 +818,17 @@ int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock)
return 0;
}
int dsa_port_mst_enable(struct dsa_port *dp, bool on,
struct netlink_ext_ack *extack)
{
if (on && !dsa_port_supports_mst(dp)) {
NL_SET_ERR_MSG_MOD(extack, "Hardware does not support MST");
return -EINVAL;
}
return 0;
}
int dsa_port_pre_bridge_flags(const struct dsa_port *dp,
struct switchdev_brport_flags flags,
struct netlink_ext_ack *extack)
@@ -778,6 +872,17 @@ int dsa_port_bridge_flags(struct dsa_port *dp,
return 0;
}
int dsa_port_vlan_msti(struct dsa_port *dp,
const struct switchdev_vlan_msti *msti)
{
struct dsa_switch *ds = dp->ds;
if (!ds->ops->vlan_msti_set)
return -EOPNOTSUPP;
return ds->ops->vlan_msti_set(ds, *dp->bridge, msti);
}
int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,
bool targeted_match)
{


@@ -451,6 +451,12 @@ static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
ret = dsa_port_set_state(dp, attr->u.stp_state, true);
break;
case SWITCHDEV_ATTR_ID_PORT_MST_STATE:
if (!dsa_port_offloads_bridge_port(dp, attr->orig_dev))
return -EOPNOTSUPP;
ret = dsa_port_set_mst_state(dp, &attr->u.mst_state, extack);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
if (!dsa_port_offloads_bridge_dev(dp, attr->orig_dev))
return -EOPNOTSUPP;
@@ -464,6 +470,12 @@ static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
ret = dsa_port_ageing_time(dp, attr->u.ageing_time);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_MST:
if (!dsa_port_offloads_bridge_dev(dp, attr->orig_dev))
return -EOPNOTSUPP;
ret = dsa_port_mst_enable(dp, attr->u.mst, extack);
break;
case SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS:
if (!dsa_port_offloads_bridge_port(dp, attr->orig_dev))
return -EOPNOTSUPP;
@@ -477,6 +489,12 @@ static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
ret = dsa_port_bridge_flags(dp, attr->u.brport_flags, extack);
break;
case SWITCHDEV_ATTR_ID_VLAN_MSTI:
if (!dsa_port_offloads_bridge_dev(dp, attr->orig_dev))
return -EOPNOTSUPP;
ret = dsa_port_vlan_msti(dp, &attr->u.vlan_msti);
break;
default:
ret = -EOPNOTSUPP;
break;