Merge branch 'net-dcb-rewrite-table'

Daniel Machon says:

====================
net: Introduce new DCB rewrite table

There is currently no support for per-port egress mapping of priority to PCP and
priority to DSCP. Some egress mapping of PCP can be expressed through ip link,
with the 'egress-qos-map' option, however this only maps priority to PCP, and
only for VLAN interfaces. DCB APP already has support for per-port ingress
mapping of PCP/DEI, DSCP and a number of other selectors. So why not take
advantage of this fact, and add a new table that does the reverse.

This patch series introduces the new DCB rewrite table. Whereas the DCB
APP table deals with ingress mapping of PID (protocol identifier) to priority,
the rewrite table deals with egress mapping of priority to PID.
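
To make the direction concrete: a rewrite entry reuses struct dcb_app, but the
priority is now the key and the protocol field holds the PID to be written on
egress. A minimal, hypothetical sketch (values are illustrative only, not taken
from the patches) that maps priority 3 to DSCP 46 using the new exported helper:

#include <net/dcbnl.h>

static int example_add_dscp_rewrite(struct net_device *dev)
{
        struct dcb_app rewr = {
                .selector = IEEE_8021QAZ_APP_SEL_DSCP,
                .priority = 3,          /* key: egress priority */
                .protocol = 46,         /* value: DSCP codepoint to write */
        };

        return dcb_setrewr(dev, &rewr);
}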

It is indeed possible to integrate rewrite in the existing APP table, by
introducing new dedicated rewrite selectors, and altering existing functions
to treat rewrite entries specially. However, I feel this is not a good
solution, as it would pollute the APP namespace. APP is well-defined in IEEE,
and some userspace relies on the advertised entries - for this reason,
separating APP and rewrite into two completely separate objects seems to me
the best solution.

The new table shares much functionality with the APP table, and as such, much
existing code is reused, or slightly modified, to work for both.

================================================================================
DCB rewrite table in a nutshell
================================================================================
The table is implemented as a simple linked list, and uses the same lock as the
APP table. New functions for getting, setting and deleting entries have been
added, and these are exported, so they can be used by the stack or drivers.
Additionally, new dcbnl_setrewr and dcbnl_delrewr hooks have been added, to
support hardware offload of the entries.
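
A driver that wants to offload the table implements the two hooks and keeps the
stack's bookkeeping in sync with its hardware state. A rough sketch only (the
foo_* names are hypothetical; the pattern follows the sparx5 changes below,
which do the list bookkeeping with dcb_setrewr()/dcb_delrewr() and then
reprogram the hardware):

#include <net/dcbnl.h>

/* Hypothetical driver: push the current rewrite state to hardware. */
static int foo_rewr_hw_apply(struct net_device *dev)
{
        /* ... reprogram egress mapping tables from the rewrite list ... */
        return 0;
}

static int foo_dcbnl_setrewr(struct net_device *dev, struct dcb_app *app)
{
        int err = dcb_setrewr(dev, app);        /* add entry to the shared list */

        if (err)
                return err;
        return foo_rewr_hw_apply(dev);          /* then offload */
}

static int foo_dcbnl_delrewr(struct net_device *dev, struct dcb_app *app)
{
        int err = dcb_delrewr(dev, app);        /* remove entry from the list */

        if (err)
                return err;
        return foo_rewr_hw_apply(dev);
}

static const struct dcbnl_rtnl_ops foo_dcbnl_ops = {
        .dcbnl_setrewr = foo_dcbnl_setrewr,
        .dcbnl_delrewr = foo_dcbnl_delrewr,
};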

================================================================================
Sparx5 per-port PCP rewrite support
================================================================================
Sparx5 supports PCP egress mapping through two eight-entry switch tables.
One table maps QoS class 0-7 to PCP for DE0 (DP levels mapped to
drop-eligibility 0) and the other for DE1. DCB currently has no way of
expressing DP/color, so instead the tagged DEI bit will reflect the DP level,
for any rewrite entries > 7 ('de').

The driver takes apptrust (contributed earlier) into consideration, so that
the mapping tables are only used if PCP is trusted *and* the rewrite table has
active mappings; otherwise the classified PCP (same as the frame PCP) is used
instead.
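
The per-priority mapping is fetched with the new dcb_getrewr_prio_pcp_mask_map()
helper. A hedged sketch of how a driver might consume it - the interpretation
of values > 7 as the 'de' (DEI-set) variants is an assumption based on the
description above, since DCB itself does not model DP/color:

#include <net/dcbnl.h>

static void foo_collect_pcp_rewr(struct net_device *dev)
{
        struct dcb_rewr_prio_pcp_map pcp_map = {0};
        int prio, val;

        dcb_getrewr_prio_pcp_mask_map(dev, &pcp_map);
        for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; prio++) {
                if (!pcp_map.map[prio])
                        continue;       /* no active mapping for this prio */
                /* Take the highest mapped value, as the sparx5 driver does. */
                val = fls(pcp_map.map[prio]) - 1;
                pr_debug("prio %d -> pcp %d dei %d\n", prio, val & 7, val > 7);
        }
}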

================================================================================
Sparx5 per-port DSCP rewrite support
================================================================================
Sparx5 supports DSCP egress mapping through a single 32-entry table. This
table maps classified QoS class and DP level to classified DSCP, and is
consulted by the switch Analyzer Classifier at ingress. At egress, the frame
DSCP can either be rewritten to the classified DSCP or left as the frame DSCP.

The driver takes apptrust into consideration, so that the mapping table is
only used if DSCP is trusted *and* the rewrite table has active mappings;
otherwise the frame DSCP is used instead.
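
The DSCP side has an analogous helper, dcb_getrewr_prio_dscp_mask_map(). Since
the rewrite table carries no DP information, the mapping is replicated for all
four DP levels, as the sparx5 patch below does. A sketch, assuming a 32-entry
hw_map laid out like the sparx5 table (QoS class 0-7 plus 8 * DP level):

#include <net/dcbnl.h>

static void foo_collect_dscp_rewr(struct net_device *dev, u8 *hw_map)
{
        struct dcb_ieee_app_prio_map dscp_map = {0};
        int prio, dp;
        u16 dscp;

        dcb_getrewr_prio_dscp_mask_map(dev, &dscp_map);
        for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; prio++) {
                if (!dscp_map.map[prio])
                        continue;
                dscp = fls64(dscp_map.map[prio]) - 1;   /* highest mapped DSCP */
                for (dp = 0; dp < 4; dp++)
                        hw_map[prio + dp * 8] = dscp;   /* same DSCP per DP */
        }
}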

================================================================================
Patches
================================================================================
Patch #1 modifies dcb_app_add to work for both APP and rewrite.

Patch #2 adds dcbnl_app_table_setdel() for setting and deleting both APP and
         rewrite entries.

Patch #3 adds the rewrite table and all required functions, offload hooks and
         bookkeeping for maintaining it.

Patch #4 adds two new helper functions for getting a priority to PCP bitmask
         map, and a priority to DSCP bitmask map.

Patch #5 adds support for PCP rewrite in the Sparx5 driver.
Patch #6 adds support for DSCP rewrite in the Sparx5 driver.

================================================================================
v2 -> v3:
  In dcbnl_ieee_fill(), use nla_nest_start() instead of the _noflag() version.
  Also, cancel the rewrite nest in case of an error (Petr Machata).

v1 -> v2:
  In dcb_setrewr(), change proto to u16 as it ought to be, and remove the zero
  initialization of err (Dan Carpenter).
  Rename dcbnl_apprewr_setdel to dcbnl_app_table_setdel and change the
  function signature to take a single function pointer; update users
  accordingly (Petr Machata).

====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2023-01-20 09:33:22 +00:00
commit f533920954
7 changed files with 548 additions and 73 deletions

drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c

@ -133,12 +133,17 @@ static bool sparx5_dcb_apptrust_contains(int portno, u8 selector)
static int sparx5_dcb_app_update(struct net_device *dev)
{
struct dcb_ieee_app_prio_map dscp_rewr_map = {0};
struct dcb_rewr_prio_pcp_map pcp_rewr_map = {0};
struct sparx5_port *port = netdev_priv(dev);
struct sparx5_port_qos_dscp_map *dscp_map;
struct sparx5_port_qos_pcp_map *pcp_map;
struct sparx5_port_qos qos = {0};
struct dcb_app app_itr = {0};
int portno = port->portno;
bool dscp_rewr = false;
bool pcp_rewr = false;
u16 dscp;
int i;
dscp_map = &qos.dscp.map;
@ -163,31 +168,72 @@ static int sparx5_dcb_app_update(struct net_device *dev)
pcp_map->map[i] = dcb_getapp(dev, &app_itr);
}
/* Get pcp rewrite mapping */
dcb_getrewr_prio_pcp_mask_map(dev, &pcp_rewr_map);
for (i = 0; i < ARRAY_SIZE(pcp_rewr_map.map); i++) {
if (!pcp_rewr_map.map[i])
continue;
pcp_rewr = true;
qos.pcp_rewr.map.map[i] = fls(pcp_rewr_map.map[i]) - 1;
}
/* Get dscp rewrite mapping */
dcb_getrewr_prio_dscp_mask_map(dev, &dscp_rewr_map);
for (i = 0; i < ARRAY_SIZE(dscp_rewr_map.map); i++) {
if (!dscp_rewr_map.map[i])
continue;
/* The rewrite table of the switch has 32 entries; one for each
* priority for each DP level. Currently, the rewrite map does
* not indicate DP level, so we map classified QoS class to
* classified DSCP, for each classified DP level. Rewrite of
* DSCP is only enabled, if we have active mappings.
*/
dscp_rewr = true;
dscp = fls64(dscp_rewr_map.map[i]) - 1;
qos.dscp_rewr.map.map[i] = dscp; /* DP 0 */
qos.dscp_rewr.map.map[i + 8] = dscp; /* DP 1 */
qos.dscp_rewr.map.map[i + 16] = dscp; /* DP 2 */
qos.dscp_rewr.map.map[i + 24] = dscp; /* DP 3 */
}
/* Enable use of pcp for queue classification ? */
if (sparx5_dcb_apptrust_contains(portno, DCB_APP_SEL_PCP)) {
qos.pcp.qos_enable = true;
qos.pcp.dp_enable = qos.pcp.qos_enable;
/* Enable rewrite of PCP and DEI if PCP is trusted *and* rewrite
* table is not empty.
*/
if (pcp_rewr)
qos.pcp_rewr.enable = true;
}
/* Enable use of dscp for queue classification ? */
if (sparx5_dcb_apptrust_contains(portno, IEEE_8021QAZ_APP_SEL_DSCP)) {
qos.dscp.qos_enable = true;
qos.dscp.dp_enable = qos.dscp.qos_enable;
if (dscp_rewr)
/* Do not enable rewrite if no mappings are active, as
* classified DSCP will then be zero for all classified
* QoS class and DP combinations.
*/
qos.dscp_rewr.enable = true;
}
return sparx5_port_qos_set(port, &qos);
}
/* Set or delete dscp app entry.
/* Set or delete DSCP app entry.
*
* Dscp mapping is global for all ports, so set and delete app entries are
* DSCP mapping is global for all ports, so set and delete app entries are
* replicated for each port.
*/
static int sparx5_dcb_ieee_dscp_setdel_app(struct net_device *dev,
struct dcb_app *app, bool del)
static int sparx5_dcb_ieee_dscp_setdel(struct net_device *dev,
struct dcb_app *app,
int (*setdel)(struct net_device *,
struct dcb_app *))
{
struct sparx5_port *port = netdev_priv(dev);
struct dcb_app apps[SPX5_PORTS];
struct sparx5_port *port_itr;
int err, i;
@ -195,11 +241,7 @@ static int sparx5_dcb_ieee_dscp_setdel_app(struct net_device *dev,
port_itr = port->sparx5->ports[i];
if (!port_itr)
continue;
memcpy(&apps[i], app, sizeof(struct dcb_app));
if (del)
err = dcb_ieee_delapp(port_itr->ndev, &apps[i]);
else
err = dcb_ieee_setapp(port_itr->ndev, &apps[i]);
err = setdel(port_itr->ndev, app);
if (err)
return err;
}
@ -226,7 +268,7 @@ static int sparx5_dcb_ieee_setapp(struct net_device *dev, struct dcb_app *app)
}
if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
err = sparx5_dcb_ieee_dscp_setdel_app(dev, app, false);
err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_ieee_setapp);
else
err = dcb_ieee_setapp(dev, app);
@ -244,7 +286,7 @@ static int sparx5_dcb_ieee_delapp(struct net_device *dev, struct dcb_app *app)
int err;
if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
err = sparx5_dcb_ieee_dscp_setdel_app(dev, app, true);
err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_ieee_delapp);
else
err = dcb_ieee_delapp(dev, app);
@ -283,11 +325,60 @@ static int sparx5_dcb_getapptrust(struct net_device *dev, u8 *selectors,
return 0;
}
static int sparx5_dcb_delrewr(struct net_device *dev, struct dcb_app *app)
{
int err;
if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_delrewr);
else
err = dcb_delrewr(dev, app);
if (err < 0)
return err;
return sparx5_dcb_app_update(dev);
}
static int sparx5_dcb_setrewr(struct net_device *dev, struct dcb_app *app)
{
struct dcb_app app_itr;
int err = 0;
u16 proto;
err = sparx5_dcb_app_validate(dev, app);
if (err)
goto out;
/* Delete current mapping, if it exists. */
proto = dcb_getrewr(dev, app);
if (proto) {
app_itr = *app;
app_itr.protocol = proto;
sparx5_dcb_delrewr(dev, &app_itr);
}
if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_setrewr);
else
err = dcb_setrewr(dev, app);
if (err)
goto out;
sparx5_dcb_app_update(dev);
out:
return err;
}
const struct dcbnl_rtnl_ops sparx5_dcbnl_ops = {
.ieee_setapp = sparx5_dcb_ieee_setapp,
.ieee_delapp = sparx5_dcb_ieee_delapp,
.dcbnl_setapptrust = sparx5_dcb_setapptrust,
.dcbnl_getapptrust = sparx5_dcb_getapptrust,
.dcbnl_setrewr = sparx5_dcb_setrewr,
.dcbnl_delrewr = sparx5_dcb_delrewr,
};
int sparx5_dcb_init(struct sparx5 *sparx5)
@ -304,6 +395,12 @@ int sparx5_dcb_init(struct sparx5 *sparx5)
sparx5_port_apptrust[port->portno] =
&sparx5_dcb_apptrust_policies
[SPARX5_DCB_APPTRUST_DSCP_PCP];
/* Enable DSCP classification based on classified QoS class and
* DP, for all DSCP values, for all ports.
*/
sparx5_port_qos_dscp_rewr_mode_set(port,
SPARX5_PORT_REW_DSCP_ALL);
}
return 0;

drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h

@ -4,8 +4,8 @@
* Copyright (c) 2021 Microchip Technology Inc.
*/
/* This file is autogenerated by cml-utils 2022-09-28 11:17:02 +0200.
* Commit ID: 385c8a11d71a9f6a60368d3a3cb648fa257b479a
/* This file is autogenerated by cml-utils 2022-11-04 11:22:22 +0100.
* Commit ID: 498242727be5db9b423cc0923bc966fc7b40607e
*/
#ifndef _SPARX5_MAIN_REGS_H_
@ -885,6 +885,16 @@ enum sparx5_target {
#define ANA_CL_DSCP_CFG_DSCP_TRUST_ENA_GET(x)\
FIELD_GET(ANA_CL_DSCP_CFG_DSCP_TRUST_ENA, x)
/* ANA_CL:COMMON:QOS_MAP_CFG */
#define ANA_CL_QOS_MAP_CFG(r) \
__REG(TARGET_ANA_CL, 0, 1, 166912, 0, 1, 756, 512, r, 32, 4)
#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL GENMASK(9, 4)
#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_SET(x)\
FIELD_PREP(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, x)
#define ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_GET(x)\
FIELD_GET(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, x)
/* ANA_L2:COMMON:AUTO_LRN_CFG */
#define ANA_L2_AUTO_LRN_CFG __REG(TARGET_ANA_L2, 0, 1, 566024, 0, 1, 700, 24, 0, 1, 4)
@ -5345,6 +5355,62 @@ enum sparx5_target {
#define REW_PORT_VLAN_CFG_PORT_VID_GET(x)\
FIELD_GET(REW_PORT_VLAN_CFG_PORT_VID, x)
/* REW:PORT:PCP_MAP_DE0 */
#define REW_PCP_MAP_DE0(g, r) \
__REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 4, r, 8, 4)
#define REW_PCP_MAP_DE0_PCP_DE0 GENMASK(2, 0)
#define REW_PCP_MAP_DE0_PCP_DE0_SET(x)\
FIELD_PREP(REW_PCP_MAP_DE0_PCP_DE0, x)
#define REW_PCP_MAP_DE0_PCP_DE0_GET(x)\
FIELD_GET(REW_PCP_MAP_DE0_PCP_DE0, x)
/* REW:PORT:PCP_MAP_DE1 */
#define REW_PCP_MAP_DE1(g, r) \
__REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 36, r, 8, 4)
#define REW_PCP_MAP_DE1_PCP_DE1 GENMASK(2, 0)
#define REW_PCP_MAP_DE1_PCP_DE1_SET(x)\
FIELD_PREP(REW_PCP_MAP_DE1_PCP_DE1, x)
#define REW_PCP_MAP_DE1_PCP_DE1_GET(x)\
FIELD_GET(REW_PCP_MAP_DE1_PCP_DE1, x)
/* REW:PORT:DEI_MAP_DE0 */
#define REW_DEI_MAP_DE0(g, r) \
__REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 68, r, 8, 4)
#define REW_DEI_MAP_DE0_DEI_DE0 BIT(0)
#define REW_DEI_MAP_DE0_DEI_DE0_SET(x)\
FIELD_PREP(REW_DEI_MAP_DE0_DEI_DE0, x)
#define REW_DEI_MAP_DE0_DEI_DE0_GET(x)\
FIELD_GET(REW_DEI_MAP_DE0_DEI_DE0, x)
/* REW:PORT:DEI_MAP_DE1 */
#define REW_DEI_MAP_DE1(g, r) \
__REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 100, r, 8, 4)
#define REW_DEI_MAP_DE1_DEI_DE1 BIT(0)
#define REW_DEI_MAP_DE1_DEI_DE1_SET(x)\
FIELD_PREP(REW_DEI_MAP_DE1_DEI_DE1, x)
#define REW_DEI_MAP_DE1_DEI_DE1_GET(x)\
FIELD_GET(REW_DEI_MAP_DE1_DEI_DE1, x)
/* REW:PORT:DSCP_MAP */
#define REW_DSCP_MAP(g) \
__REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 136, 0, 1, 4)
#define REW_DSCP_MAP_DSCP_UPDATE_ENA BIT(1)
#define REW_DSCP_MAP_DSCP_UPDATE_ENA_SET(x)\
FIELD_PREP(REW_DSCP_MAP_DSCP_UPDATE_ENA, x)
#define REW_DSCP_MAP_DSCP_UPDATE_ENA_GET(x)\
FIELD_GET(REW_DSCP_MAP_DSCP_UPDATE_ENA, x)
#define REW_DSCP_MAP_DSCP_REMAP_ENA BIT(0)
#define REW_DSCP_MAP_DSCP_REMAP_ENA_SET(x)\
FIELD_PREP(REW_DSCP_MAP_DSCP_REMAP_ENA, x)
#define REW_DSCP_MAP_DSCP_REMAP_ENA_GET(x)\
FIELD_GET(REW_DSCP_MAP_DSCP_REMAP_ENA, x)
/* REW:PORT:TAG_CTRL */
#define REW_TAG_CTRL(g) __REG(TARGET_REW, 0, 1, 360448, g, 70, 256, 132, 0, 1, 4)

drivers/net/ethernet/microchip/sparx5/sparx5_port.c

@ -1151,11 +1151,69 @@ int sparx5_port_qos_set(struct sparx5_port *port,
{
sparx5_port_qos_dscp_set(port, &qos->dscp);
sparx5_port_qos_pcp_set(port, &qos->pcp);
sparx5_port_qos_pcp_rewr_set(port, &qos->pcp_rewr);
sparx5_port_qos_dscp_rewr_set(port, &qos->dscp_rewr);
sparx5_port_qos_default_set(port, qos);
return 0;
}
int sparx5_port_qos_pcp_rewr_set(const struct sparx5_port *port,
struct sparx5_port_qos_pcp_rewr *qos)
{
int i, mode = SPARX5_PORT_REW_TAG_CTRL_CLASSIFIED;
struct sparx5 *sparx5 = port->sparx5;
u8 pcp, dei;
/* Use mapping table, with classified QoS as index, to map QoS and DP
* to tagged PCP and DEI, if PCP is trusted. Otherwise use classified
* PCP. Classified PCP equals frame PCP.
*/
if (qos->enable)
mode = SPARX5_PORT_REW_TAG_CTRL_MAPPED;
spx5_rmw(REW_TAG_CTRL_TAG_PCP_CFG_SET(mode) |
REW_TAG_CTRL_TAG_DEI_CFG_SET(mode),
REW_TAG_CTRL_TAG_PCP_CFG | REW_TAG_CTRL_TAG_DEI_CFG,
port->sparx5, REW_TAG_CTRL(port->portno));
for (i = 0; i < ARRAY_SIZE(qos->map.map); i++) {
/* Extract PCP and DEI */
pcp = qos->map.map[i];
if (pcp > SPARX5_PORT_QOS_PCP_COUNT)
dei = 1;
else
dei = 0;
/* Rewrite PCP and DEI, for each classified QoS class and DP
* level. This table is only used if tag ctrl mode is set to
* 'mapped'.
*
* 0:0nd - prio=0 and dp:0 => pcp=0 and dei=0
* 0:0de - prio=0 and dp:1 => pcp=0 and dei=1
*/
if (dei) {
spx5_rmw(REW_PCP_MAP_DE1_PCP_DE1_SET(pcp),
REW_PCP_MAP_DE1_PCP_DE1, sparx5,
REW_PCP_MAP_DE1(port->portno, i));
spx5_rmw(REW_DEI_MAP_DE1_DEI_DE1_SET(dei),
REW_DEI_MAP_DE1_DEI_DE1, port->sparx5,
REW_DEI_MAP_DE1(port->portno, i));
} else {
spx5_rmw(REW_PCP_MAP_DE0_PCP_DE0_SET(pcp),
REW_PCP_MAP_DE0_PCP_DE0, sparx5,
REW_PCP_MAP_DE0(port->portno, i));
spx5_rmw(REW_DEI_MAP_DE0_DEI_DE0_SET(dei),
REW_DEI_MAP_DE0_DEI_DE0, port->sparx5,
REW_DEI_MAP_DE0(port->portno, i));
}
}
return 0;
}
int sparx5_port_qos_pcp_set(const struct sparx5_port *port,
struct sparx5_port_qos_pcp *qos)
{
@ -1184,6 +1242,45 @@ int sparx5_port_qos_pcp_set(const struct sparx5_port *port,
return 0;
}
void sparx5_port_qos_dscp_rewr_mode_set(const struct sparx5_port *port,
int mode)
{
spx5_rmw(ANA_CL_QOS_CFG_DSCP_REWR_MODE_SEL_SET(mode),
ANA_CL_QOS_CFG_DSCP_REWR_MODE_SEL, port->sparx5,
ANA_CL_QOS_CFG(port->portno));
}
int sparx5_port_qos_dscp_rewr_set(const struct sparx5_port *port,
struct sparx5_port_qos_dscp_rewr *qos)
{
struct sparx5 *sparx5 = port->sparx5;
bool rewr = false;
u16 dscp;
int i;
/* On egress, rewrite DSCP value to either classified DSCP or frame
* DSCP. If enabled; classified DSCP, if disabled; frame DSCP.
*/
if (qos->enable)
rewr = true;
spx5_rmw(REW_DSCP_MAP_DSCP_UPDATE_ENA_SET(rewr),
REW_DSCP_MAP_DSCP_UPDATE_ENA, sparx5,
REW_DSCP_MAP(port->portno));
/* On ingress, map each classified QoS class and DP to classified DSCP
* value. This mapping table is global for all ports.
*/
for (i = 0; i < ARRAY_SIZE(qos->map.map); i++) {
dscp = qos->map.map[i];
spx5_rmw(ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL_SET(dscp),
ANA_CL_QOS_MAP_CFG_DSCP_REWR_VAL, sparx5,
ANA_CL_QOS_MAP_CFG(i));
}
return 0;
}
int sparx5_port_qos_dscp_set(const struct sparx5_port *port,
struct sparx5_port_qos_dscp *qos)
{

drivers/net/ethernet/microchip/sparx5/sparx5_port.h

@ -9,6 +9,17 @@
#include "sparx5_main.h"
/* Port PCP rewrite mode */
#define SPARX5_PORT_REW_TAG_CTRL_CLASSIFIED 0
#define SPARX5_PORT_REW_TAG_CTRL_DEFAULT 1
#define SPARX5_PORT_REW_TAG_CTRL_MAPPED 2
/* Port DSCP rewrite mode */
#define SPARX5_PORT_REW_DSCP_NONE 0
#define SPARX5_PORT_REW_DSCP_IF_ZERO 1
#define SPARX5_PORT_REW_DSCP_SELECTED 2
#define SPARX5_PORT_REW_DSCP_ALL 3
static inline bool sparx5_port_is_2g5(int portno)
{
return portno >= 16 && portno <= 47;
@ -99,6 +110,15 @@ struct sparx5_port_qos_pcp_map {
u8 map[SPARX5_PORT_QOS_PCP_DEI_COUNT];
};
struct sparx5_port_qos_pcp_rewr_map {
u16 map[SPX5_PRIOS];
};
#define SPARX5_PORT_QOS_DP_NUM 4
struct sparx5_port_qos_dscp_rewr_map {
u16 map[SPX5_PRIOS * SPARX5_PORT_QOS_DP_NUM];
};
#define SPARX5_PORT_QOS_DSCP_COUNT 64
struct sparx5_port_qos_dscp_map {
u8 map[SPARX5_PORT_QOS_DSCP_COUNT];
@ -110,15 +130,27 @@ struct sparx5_port_qos_pcp {
bool dp_enable;
};
struct sparx5_port_qos_pcp_rewr {
struct sparx5_port_qos_pcp_rewr_map map;
bool enable;
};
struct sparx5_port_qos_dscp {
struct sparx5_port_qos_dscp_map map;
bool qos_enable;
bool dp_enable;
};
struct sparx5_port_qos_dscp_rewr {
struct sparx5_port_qos_dscp_rewr_map map;
bool enable;
};
struct sparx5_port_qos {
struct sparx5_port_qos_pcp pcp;
struct sparx5_port_qos_pcp_rewr pcp_rewr;
struct sparx5_port_qos_dscp dscp;
struct sparx5_port_qos_dscp_rewr dscp_rewr;
u8 default_prio;
};
@ -127,9 +159,18 @@ int sparx5_port_qos_set(struct sparx5_port *port, struct sparx5_port_qos *qos);
int sparx5_port_qos_pcp_set(const struct sparx5_port *port,
struct sparx5_port_qos_pcp *qos);
int sparx5_port_qos_pcp_rewr_set(const struct sparx5_port *port,
struct sparx5_port_qos_pcp_rewr *qos);
int sparx5_port_qos_dscp_set(const struct sparx5_port *port,
struct sparx5_port_qos_dscp *qos);
void sparx5_port_qos_dscp_rewr_mode_set(const struct sparx5_port *port,
int mode);
int sparx5_port_qos_dscp_rewr_set(const struct sparx5_port *port,
struct sparx5_port_qos_dscp_rewr *qos);
int sparx5_port_qos_default_set(const struct sparx5_port *port,
const struct sparx5_port_qos *qos);

include/net/dcbnl.h

@ -19,18 +19,32 @@ struct dcb_app_type {
u8 dcbx;
};
u16 dcb_getrewr(struct net_device *dev, struct dcb_app *app);
int dcb_setrewr(struct net_device *dev, struct dcb_app *app);
int dcb_delrewr(struct net_device *dev, struct dcb_app *app);
int dcb_setapp(struct net_device *, struct dcb_app *);
u8 dcb_getapp(struct net_device *, struct dcb_app *);
int dcb_ieee_setapp(struct net_device *, struct dcb_app *);
int dcb_ieee_delapp(struct net_device *, struct dcb_app *);
u8 dcb_ieee_getapp_mask(struct net_device *, struct dcb_app *);
struct dcb_rewr_prio_pcp_map {
u16 map[IEEE_8021QAZ_MAX_TCS];
};
void dcb_getrewr_prio_pcp_mask_map(const struct net_device *dev,
struct dcb_rewr_prio_pcp_map *p_map);
struct dcb_ieee_app_prio_map {
u64 map[IEEE_8021QAZ_MAX_TCS];
};
void dcb_ieee_getapp_prio_dscp_mask_map(const struct net_device *dev,
struct dcb_ieee_app_prio_map *p_map);
void dcb_getrewr_prio_dscp_mask_map(const struct net_device *dev,
struct dcb_ieee_app_prio_map *p_map);
struct dcb_ieee_app_dscp_map {
u8 map[64];
};
@ -113,6 +127,10 @@ struct dcbnl_rtnl_ops {
/* apptrust */
int (*dcbnl_setapptrust)(struct net_device *, u8 *, int);
int (*dcbnl_getapptrust)(struct net_device *, u8 *, int *);
/* rewrite */
int (*dcbnl_setrewr)(struct net_device *dev, struct dcb_app *app);
int (*dcbnl_delrewr)(struct net_device *dev, struct dcb_app *app);
};
#endif /* __NET_DCBNL_H__ */

include/uapi/linux/dcbnl.h

@ -411,6 +411,7 @@ enum dcbnl_attrs {
* @DCB_ATTR_IEEE_PEER_PFC: peer PFC configuration - get only
* @DCB_ATTR_IEEE_PEER_APP: peer APP tlv - get only
* @DCB_ATTR_DCB_APP_TRUST_TABLE: selector trust table
* @DCB_ATTR_DCB_REWR_TABLE: rewrite configuration
*/
enum ieee_attrs {
DCB_ATTR_IEEE_UNSPEC,
@ -425,6 +426,7 @@ enum ieee_attrs {
DCB_ATTR_IEEE_QCN_STATS,
DCB_ATTR_DCB_BUFFER,
DCB_ATTR_DCB_APP_TRUST_TABLE,
DCB_ATTR_DCB_REWR_TABLE,
__DCB_ATTR_IEEE_MAX
};
#define DCB_ATTR_IEEE_MAX (__DCB_ATTR_IEEE_MAX - 1)

net/dcb/dcbnl.c

@ -178,6 +178,7 @@ static const struct nla_policy dcbnl_featcfg_nest[DCB_FEATCFG_ATTR_MAX + 1] = {
};
static LIST_HEAD(dcb_app_list);
static LIST_HEAD(dcb_rewr_list);
static DEFINE_SPINLOCK(dcb_lock);
static enum ieee_attrs_app dcbnl_app_attr_type_get(u8 selector)
@ -1099,11 +1100,46 @@ out:
return err;
}
/* Set or delete APP table or rewrite table entries. The APP struct is validated
* and the appropriate callback function is called.
*/
static int dcbnl_app_table_setdel(struct nlattr *attr,
struct net_device *netdev,
int (*setdel)(struct net_device *dev,
struct dcb_app *app))
{
struct dcb_app *app_data;
enum ieee_attrs_app type;
struct nlattr *attr_itr;
int rem, err;
nla_for_each_nested(attr_itr, attr, rem) {
type = nla_type(attr_itr);
if (!dcbnl_app_attr_type_validate(type))
continue;
if (nla_len(attr_itr) < sizeof(struct dcb_app))
return -ERANGE;
app_data = nla_data(attr_itr);
if (!dcbnl_app_selector_validate(type, app_data->selector))
return -EINVAL;
err = setdel(netdev, app_data);
if (err)
return err;
}
return 0;
}
/* Handle IEEE 802.1Qaz/802.1Qau/802.1Qbb GET commands. */
static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev)
{
const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops;
struct nlattr *ieee, *app;
struct nlattr *ieee, *app, *rewr;
struct dcb_app_type *itr;
int dcbx;
int err;
@ -1206,6 +1242,27 @@ static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev)
spin_unlock_bh(&dcb_lock);
nla_nest_end(skb, app);
rewr = nla_nest_start(skb, DCB_ATTR_DCB_REWR_TABLE);
if (!rewr)
return -EMSGSIZE;
spin_lock_bh(&dcb_lock);
list_for_each_entry(itr, &dcb_rewr_list, list) {
if (itr->ifindex == netdev->ifindex) {
enum ieee_attrs_app type =
dcbnl_app_attr_type_get(itr->app.selector);
err = nla_put(skb, type, sizeof(itr->app), &itr->app);
if (err) {
spin_unlock_bh(&dcb_lock);
nla_nest_cancel(skb, rewr);
return -EMSGSIZE;
}
}
}
spin_unlock_bh(&dcb_lock);
nla_nest_end(skb, rewr);
if (ops->dcbnl_getapptrust) {
err = dcbnl_getapptrust(netdev, skb);
if (err)
@ -1567,37 +1624,20 @@ static int dcbnl_ieee_set(struct net_device *netdev, struct nlmsghdr *nlh,
goto err;
}
if (ieee[DCB_ATTR_DCB_REWR_TABLE]) {
err = dcbnl_app_table_setdel(ieee[DCB_ATTR_DCB_REWR_TABLE],
netdev,
ops->dcbnl_setrewr ?: dcb_setrewr);
if (err)
goto err;
}
if (ieee[DCB_ATTR_IEEE_APP_TABLE]) {
struct nlattr *attr;
int rem;
nla_for_each_nested(attr, ieee[DCB_ATTR_IEEE_APP_TABLE], rem) {
enum ieee_attrs_app type = nla_type(attr);
struct dcb_app *app_data;
if (!dcbnl_app_attr_type_validate(type))
continue;
if (nla_len(attr) < sizeof(struct dcb_app)) {
err = -ERANGE;
goto err;
}
app_data = nla_data(attr);
if (!dcbnl_app_selector_validate(type,
app_data->selector)) {
err = -EINVAL;
goto err;
}
if (ops->ieee_setapp)
err = ops->ieee_setapp(netdev, app_data);
else
err = dcb_ieee_setapp(netdev, app_data);
if (err)
goto err;
}
err = dcbnl_app_table_setdel(ieee[DCB_ATTR_IEEE_APP_TABLE],
netdev, ops->ieee_setapp ?:
dcb_ieee_setapp);
if (err)
goto err;
}
if (ieee[DCB_ATTR_DCB_APP_TRUST_TABLE]) {
@ -1684,31 +1724,19 @@ static int dcbnl_ieee_del(struct net_device *netdev, struct nlmsghdr *nlh,
return err;
if (ieee[DCB_ATTR_IEEE_APP_TABLE]) {
struct nlattr *attr;
int rem;
err = dcbnl_app_table_setdel(ieee[DCB_ATTR_IEEE_APP_TABLE],
netdev, ops->ieee_delapp ?:
dcb_ieee_delapp);
if (err)
goto err;
}
nla_for_each_nested(attr, ieee[DCB_ATTR_IEEE_APP_TABLE], rem) {
enum ieee_attrs_app type = nla_type(attr);
struct dcb_app *app_data;
if (!dcbnl_app_attr_type_validate(type))
continue;
app_data = nla_data(attr);
if (!dcbnl_app_selector_validate(type,
app_data->selector)) {
err = -EINVAL;
goto err;
}
if (ops->ieee_delapp)
err = ops->ieee_delapp(netdev, app_data);
else
err = dcb_ieee_delapp(netdev, app_data);
if (err)
goto err;
}
if (ieee[DCB_ATTR_DCB_REWR_TABLE]) {
err = dcbnl_app_table_setdel(ieee[DCB_ATTR_DCB_REWR_TABLE],
netdev,
ops->dcbnl_delrewr ?: dcb_delrewr);
if (err)
goto err;
}
err:
@ -1939,6 +1967,22 @@ out:
return ret;
}
static struct dcb_app_type *dcb_rewr_lookup(const struct dcb_app *app,
int ifindex, int proto)
{
struct dcb_app_type *itr;
list_for_each_entry(itr, &dcb_rewr_list, list) {
if (itr->app.selector == app->selector &&
itr->app.priority == app->priority &&
itr->ifindex == ifindex &&
((proto == -1) || itr->app.protocol == proto))
return itr;
}
return NULL;
}
static struct dcb_app_type *dcb_app_lookup(const struct dcb_app *app,
int ifindex, int prio)
{
@ -1955,7 +1999,8 @@ static struct dcb_app_type *dcb_app_lookup(const struct dcb_app *app,
return NULL;
}
static int dcb_app_add(const struct dcb_app *app, int ifindex)
static int dcb_app_add(struct list_head *list, const struct dcb_app *app,
int ifindex)
{
struct dcb_app_type *entry;
@ -1965,7 +2010,7 @@ static int dcb_app_add(const struct dcb_app *app, int ifindex)
memcpy(&entry->app, app, sizeof(*app));
entry->ifindex = ifindex;
list_add(&entry->list, &dcb_app_list);
list_add(&entry->list, list);
return 0;
}
@ -2028,7 +2073,7 @@ int dcb_setapp(struct net_device *dev, struct dcb_app *new)
}
/* App type does not exist add new application type */
if (new->priority)
err = dcb_app_add(new, dev->ifindex);
err = dcb_app_add(&dcb_app_list, new, dev->ifindex);
out:
spin_unlock_bh(&dcb_lock);
if (!err)
@ -2061,6 +2106,63 @@ u8 dcb_ieee_getapp_mask(struct net_device *dev, struct dcb_app *app)
}
EXPORT_SYMBOL(dcb_ieee_getapp_mask);
/* Get protocol value from rewrite entry. */
u16 dcb_getrewr(struct net_device *dev, struct dcb_app *app)
{
struct dcb_app_type *itr;
u16 proto = 0;
spin_lock_bh(&dcb_lock);
itr = dcb_rewr_lookup(app, dev->ifindex, -1);
if (itr)
proto = itr->app.protocol;
spin_unlock_bh(&dcb_lock);
return proto;
}
EXPORT_SYMBOL(dcb_getrewr);
/* Add rewrite entry to the rewrite list. */
int dcb_setrewr(struct net_device *dev, struct dcb_app *new)
{
int err;
spin_lock_bh(&dcb_lock);
/* Search for existing match and abort if found. */
if (dcb_rewr_lookup(new, dev->ifindex, new->protocol)) {
err = -EEXIST;
goto out;
}
err = dcb_app_add(&dcb_rewr_list, new, dev->ifindex);
out:
spin_unlock_bh(&dcb_lock);
return err;
}
EXPORT_SYMBOL(dcb_setrewr);
/* Delete rewrite entry from the rewrite list. */
int dcb_delrewr(struct net_device *dev, struct dcb_app *del)
{
struct dcb_app_type *itr;
int err = -ENOENT;
spin_lock_bh(&dcb_lock);
/* Search for existing match and remove it. */
itr = dcb_rewr_lookup(del, dev->ifindex, del->protocol);
if (itr) {
list_del(&itr->list);
kfree(itr);
err = 0;
}
spin_unlock_bh(&dcb_lock);
return err;
}
EXPORT_SYMBOL(dcb_delrewr);
/**
* dcb_ieee_setapp - add IEEE dcb application data to app list
* @dev: network interface
@ -2088,7 +2190,7 @@ int dcb_ieee_setapp(struct net_device *dev, struct dcb_app *new)
goto out;
}
err = dcb_app_add(new, dev->ifindex);
err = dcb_app_add(&dcb_app_list, new, dev->ifindex);
out:
spin_unlock_bh(&dcb_lock);
if (!err)
@ -2130,6 +2232,58 @@ int dcb_ieee_delapp(struct net_device *dev, struct dcb_app *del)
}
EXPORT_SYMBOL(dcb_ieee_delapp);
/* dcb_getrewr_prio_pcp_mask_map - For a given device, find mapping from
* priorities to the PCP and DEI values assigned to that priority.
*/
void dcb_getrewr_prio_pcp_mask_map(const struct net_device *dev,
struct dcb_rewr_prio_pcp_map *p_map)
{
int ifindex = dev->ifindex;
struct dcb_app_type *itr;
u8 prio;
memset(p_map->map, 0, sizeof(p_map->map));
spin_lock_bh(&dcb_lock);
list_for_each_entry(itr, &dcb_rewr_list, list) {
if (itr->ifindex == ifindex &&
itr->app.selector == DCB_APP_SEL_PCP &&
itr->app.protocol < 16 &&
itr->app.priority < IEEE_8021QAZ_MAX_TCS) {
prio = itr->app.priority;
p_map->map[prio] |= 1 << itr->app.protocol;
}
}
spin_unlock_bh(&dcb_lock);
}
EXPORT_SYMBOL(dcb_getrewr_prio_pcp_mask_map);
/* dcb_getrewr_prio_dscp_mask_map - For a given device, find mapping from
* priorities to the DSCP values assigned to that priority.
*/
void dcb_getrewr_prio_dscp_mask_map(const struct net_device *dev,
struct dcb_ieee_app_prio_map *p_map)
{
int ifindex = dev->ifindex;
struct dcb_app_type *itr;
u8 prio;
memset(p_map->map, 0, sizeof(p_map->map));
spin_lock_bh(&dcb_lock);
list_for_each_entry(itr, &dcb_rewr_list, list) {
if (itr->ifindex == ifindex &&
itr->app.selector == IEEE_8021QAZ_APP_SEL_DSCP &&
itr->app.protocol < 64 &&
itr->app.priority < IEEE_8021QAZ_MAX_TCS) {
prio = itr->app.priority;
p_map->map[prio] |= 1ULL << itr->app.protocol;
}
}
spin_unlock_bh(&dcb_lock);
}
EXPORT_SYMBOL(dcb_getrewr_prio_dscp_mask_map);
/*
* dcb_ieee_getapp_prio_dscp_mask_map - For a given device, find mapping from
* priorities to the DSCP values assigned to that priority. Initialize p_map