linux-stable/drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c

net: mvpp2: add a debugfs interface for the Header Parser

The Marvell PPv2 Packet Header Parser has a TCAM-based filter that is not
trivial to configure and debug. Being able to dump TCAM entries from
userspace can be really helpful for developing new features and debugging
existing ones.

This commit adds a basic debugfs interface for the PPv2 driver, focusing
on TCAM-related features:

  <mnt>/mvpp2/ --- f2000000.ethernet
               \- f4000000.ethernet --- parser --- 000 ...
                                    |           \- 001
                                    |           \- ...
                                    |           \- 255 --- ai
                                    |                  \- header_data
                                    |                  \- lookup_id
                                    |                  \- sram
                                    |                  \- valid
                                    \- eth1 ...
                                    \- eth2 --- mac_filter
                                            \- parser_entries
                                            \- vid_filter

There's one directory per PPv2 instance, named after pdev->name to make
sure names are unique. In each of these directories, there is:

- one directory per interface on the controller, each containing:
  - "mac_filter", which lists all filtered addresses for this port
    (based on the TCAM, not on the kernel's uc/mc lists)
  - "parser_entries", which lists the indices of all valid TCAM entries
    that have this port in their port map
  - "vid_filter", which lists the VIDs allowed on this port, based on
    the TCAM
- one "parser" directory (the parser is common to all ports), containing
  one directory per TCAM entry (256 of them, from 000 to 255), each with:
  - "ai": the 1-byte Additional Info field from the TCAM, along with its
    enable mask
  - "header_data": the 8 bytes of Header Data extracted from the packet
  - "lookup_id": the 4-bit LU_ID
  - "sram": the raw SRAM data, which is the result of the TCAM lookup;
    read-only at the moment
  - "valid": indicates whether the entry is valid or not

All entries are read-only, and everything is output in hex form.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2018-07-14 11:29:25 +00:00
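As a quick usage illustration, here is a minimal userspace sketch that walks
the parser directory and prints which TCAM entries are valid. The mount
point, the controller name "f2000000.ethernet", and the entry count of 256
are taken from the commit message above; adjust them for your platform.

/* Hypothetical userspace sketch: dump the "valid" flag of every parser
 * TCAM entry.  Paths assume debugfs is mounted at /sys/kernel/debug.
 */
#include <stdio.h>

int main(void)
{
	char path[128];
	char buf[16];
	int i;

	for (i = 0; i < 256; i++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/kernel/debug/mvpp2/f2000000.ethernet/parser/%03d/valid",
			 i);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(buf, sizeof(buf), f) && buf[0] == '1')
			printf("entry %03d is valid\n", i);
		fclose(f);
	}
	return 0;
}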
// SPDX-License-Identifier: GPL-2.0
/*
 * Driver for Marvell PPv2 network controller for Armada 375 SoC.
 *
 * Copyright (C) 2018 Marvell
 */
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/debugfs.h>

#include "mvpp2.h"
#include "mvpp2_prs.h"
#include "mvpp2_cls.h"
/* Private data for the files under each parser TCAM entry directory */
struct mvpp2_dbgfs_prs_entry {
	int tid;
	struct mvpp2 *priv;
};

/* Private data for the files under each classifier flow directory */
struct mvpp2_dbgfs_flow_entry {
	int flow;
	struct mvpp2 *priv;
};

/* Private data for the per-port files under each flow directory */
struct mvpp2_dbgfs_port_flow_entry {
	struct mvpp2_port *port;
	struct mvpp2_dbgfs_flow_entry *dbg_fe;
};
net: mvpp2: debugfs: add classifier hit counters

The classification operations used for RSS make use of several lookup
tables. Having hit counters for these tables is really helpful to
determine which flows were matched by ingress traffic, and to see the
path of packets among all the classifier tables.

This commit adds hit counters for the 3 tables used at the moment:

- The decoding table (also called the lookup_id table), which links flows
  identified by the Header Parser to the flow table. There's one entry
  per flow, located at:

    .../mvpp2/<controller>/flows/XX/dec_hits

  Note that there are 21 flows in the decoding table, whereas there are
  52 flows in the Header Parser. That's because several kinds of traffic
  will match a given flow. Reading the hit counter from one sub-flow will
  clear all hit counters that share the same flow_id. This also applies
  to flow_hits.

- The flow table, which contains all the different lookups to be
  performed by the classifier for each packet of a given flow. The match
  is done on the first entry of the flow sequence. There's one entry per
  flow, located at:

    .../mvpp2/<controller>/flows/XX/flow_hits

- The C2 engine entries, which are used to assign the default rx queue
  and to enable or disable RSS for a given port. There is one C2 entry
  per port, so the C2 hit counter is located at:

    .../mvpp2/<controller>/ethX/c2_hits

All hit counter values are 16-bit clear-on-read values.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2018-07-14 11:29:28 +00:00
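Because the counters are clear-on-read, polling them at a fixed interval
yields a per-interval hit rate. A minimal userspace sketch along those
lines, assuming the same hypothetical debugfs layout as above and an
interface named eth0:

/* Hypothetical userspace sketch: sample the per-port C2 hit counter once
 * per second.  Since the hardware counter is 16-bit clear-on-read, each
 * read returns the number of hits since the previous read.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path =
		"/sys/kernel/debug/mvpp2/f2000000.ethernet/eth0/c2_hits";

	for (;;) {
		unsigned int hits = 0;
		FILE *f = fopen(path, "r");

		if (!f)
			return 1;
		if (fscanf(f, "%u", &hits) == 1)
			printf("%u hits/s\n", hits);
		fclose(f);
		sleep(1);
	}
	return 0;
}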
static int mvpp2_dbgfs_flow_flt_hits_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_flow_entry *entry = s->private;
	int id = MVPP2_FLOW_C2_ENTRY(entry->flow);

	u32 hits = mvpp2_cls_flow_hits(entry->priv, id);

	seq_printf(s, "%u\n", hits);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_flow_flt_hits);

static int mvpp2_dbgfs_flow_dec_hits_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_flow_entry *entry = s->private;

	u32 hits = mvpp2_cls_lookup_hits(entry->priv, entry->flow);

	seq_printf(s, "%u\n", hits);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_flow_dec_hits);
static int mvpp2_dbgfs_flow_type_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_flow_entry *entry = s->private;
	struct mvpp2_cls_flow *f;
	const char *flow_name;

	f = mvpp2_cls_flow_get(entry->flow);
	if (!f)
		return -EINVAL;

	switch (f->flow_type) {
	case IPV4_FLOW:
		flow_name = "ipv4";
		break;
	case IPV6_FLOW:
		flow_name = "ipv6";
		break;
	case TCP_V4_FLOW:
		flow_name = "tcp4";
		break;
	case TCP_V6_FLOW:
		flow_name = "tcp6";
		break;
	case UDP_V4_FLOW:
		flow_name = "udp4";
		break;
	case UDP_V6_FLOW:
		flow_name = "udp6";
		break;
	default:
		flow_name = "other";
	}

	seq_printf(s, "%s\n", flow_name);

	return 0;
}
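
/* "type" cannot use DEFINE_SHOW_ATTRIBUTE(): its seq_file private data is
 * the mvpp2_dbgfs_flow_entry kmalloc'ed in mvpp2_dbgfs_flow_entry_init(),
 * which must be kfree'd when the file is released, so the open/release
 * ops are hand-rolled.  Note that every open of the file shares the same
 * entry pointer.
 */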
static int mvpp2_dbgfs_flow_type_open(struct inode *inode, struct file *file)
{
	return single_open(file, mvpp2_dbgfs_flow_type_show, inode->i_private);
}

static int mvpp2_dbgfs_flow_type_release(struct inode *inode, struct file *file)
{
	struct seq_file *seq = file->private_data;
	struct mvpp2_dbgfs_flow_entry *flow_entry = seq->private;

	kfree(flow_entry);
	return single_release(inode, file);
}

static const struct file_operations mvpp2_dbgfs_flow_type_fops = {
	.open = mvpp2_dbgfs_flow_type_open,
	.read = seq_read,
	.release = mvpp2_dbgfs_flow_type_release,
};
static int mvpp2_dbgfs_flow_id_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_flow_entry *entry = s->private;
	struct mvpp2_cls_flow *f;

	f = mvpp2_cls_flow_get(entry->flow);
	if (!f)
		return -EINVAL;

	seq_printf(s, "%d\n", f->flow_id);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_flow_id);
static int mvpp2_dbgfs_port_flow_hash_opt_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_port_flow_entry *entry = s->private;
	struct mvpp2_port *port = entry->port;
	struct mvpp2_cls_flow_entry fe;
	struct mvpp2_cls_flow *f;
	int flow_index;
	u16 hash_opts;

	f = mvpp2_cls_flow_get(entry->dbg_fe->flow);
	if (!f)
		return -EINVAL;

	flow_index = MVPP2_PORT_FLOW_HASH_ENTRY(entry->port->id, f->flow_id);

	mvpp2_cls_flow_read(port->priv, flow_index, &fe);

	hash_opts = mvpp2_flow_get_hek_fields(&fe);

	seq_printf(s, "0x%04x\n", hash_opts);

	return 0;
}

static int mvpp2_dbgfs_port_flow_hash_opt_open(struct inode *inode,
					       struct file *file)
{
	return single_open(file, mvpp2_dbgfs_port_flow_hash_opt_show,
			   inode->i_private);
}

static int mvpp2_dbgfs_port_flow_hash_opt_release(struct inode *inode,
						  struct file *file)
{
	struct seq_file *seq = file->private_data;
	struct mvpp2_dbgfs_port_flow_entry *flow_entry = seq->private;

	kfree(flow_entry);
	return single_release(inode, file);
}

static const struct file_operations mvpp2_dbgfs_port_flow_hash_opt_fops = {
	.open = mvpp2_dbgfs_port_flow_hash_opt_open,
	.read = seq_read,
	.release = mvpp2_dbgfs_port_flow_hash_opt_release,
};
static int mvpp2_dbgfs_port_flow_engine_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_port_flow_entry *entry = s->private;
	struct mvpp2_port *port = entry->port;
	struct mvpp2_cls_flow_entry fe;
	struct mvpp2_cls_flow *f;
	int flow_index, engine;

	f = mvpp2_cls_flow_get(entry->dbg_fe->flow);
	if (!f)
		return -EINVAL;

	flow_index = MVPP2_PORT_FLOW_HASH_ENTRY(entry->port->id, f->flow_id);

	mvpp2_cls_flow_read(port->priv, flow_index, &fe);

	engine = mvpp2_cls_flow_eng_get(&fe);

	seq_printf(s, "%d\n", engine);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_port_flow_engine);
static int mvpp2_dbgfs_flow_c2_hits_show(struct seq_file *s, void *unused)
{
	struct mvpp2_port *port = s->private;
	u32 hits;

	hits = mvpp2_cls_c2_hit_count(port->priv,
				      MVPP22_CLS_C2_RSS_ENTRY(port->id));

	seq_printf(s, "%u\n", hits);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_flow_c2_hits);

static int mvpp2_dbgfs_flow_c2_rxq_show(struct seq_file *s, void *unused)
{
	struct mvpp2_port *port = s->private;
	struct mvpp2_cls_c2_entry c2;
	u8 qh, ql;

	mvpp2_cls_c2_read(port->priv, MVPP22_CLS_C2_RSS_ENTRY(port->id), &c2);

	/* The rx queue number is split across two attribute fields: the
	 * 3 low bits (qlow) and the high bits (qhigh), so e.g. qh = 2 and
	 * ql = 5 means queue 21.
	 */
	qh = (c2.attr[0] >> MVPP22_CLS_C2_ATTR0_QHIGH_OFFS) &
	     MVPP22_CLS_C2_ATTR0_QHIGH_MASK;

	ql = (c2.attr[0] >> MVPP22_CLS_C2_ATTR0_QLOW_OFFS) &
	     MVPP22_CLS_C2_ATTR0_QLOW_MASK;

	seq_printf(s, "%d\n", (qh << 3 | ql));

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_flow_c2_rxq);

static int mvpp2_dbgfs_flow_c2_enable_show(struct seq_file *s, void *unused)
{
	struct mvpp2_port *port = s->private;
	struct mvpp2_cls_c2_entry c2;
	int enabled;

	mvpp2_cls_c2_read(port->priv, MVPP22_CLS_C2_RSS_ENTRY(port->id), &c2);

	enabled = !!(c2.attr[2] & MVPP22_CLS_C2_ATTR2_RSS_EN);

	seq_printf(s, "%d\n", enabled);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_flow_c2_enable);
static int mvpp2_dbgfs_port_vid_show(struct seq_file *s, void *unused)
{
	struct mvpp2_port *port = s->private;
	unsigned char byte[2], enable[2];
	struct mvpp2 *priv = port->priv;
	struct mvpp2_prs_entry pe;
	unsigned long pmap;
	u16 rvid;
	int tid;

	for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id);
	     tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) {
		mvpp2_prs_init_from_hw(priv, &pe, tid);

		pmap = mvpp2_prs_tcam_port_map_get(&pe);

		if (!priv->prs_shadow[tid].valid)
			continue;

		if (!test_bit(port->id, &pmap))
			continue;

		/* The 12-bit VID spans TCAM data bytes 2 and 3: the low
		 * nibble of byte 2 holds the 4 high bits, byte 3 the rest.
		 */
		mvpp2_prs_tcam_data_byte_get(&pe, 2, &byte[0], &enable[0]);
		mvpp2_prs_tcam_data_byte_get(&pe, 3, &byte[1], &enable[1]);

		rvid = ((byte[0] & 0xf) << 8) + byte[1];

		seq_printf(s, "%u\n", rvid);
	}

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_port_vid);
static int mvpp2_dbgfs_port_parser_show(struct seq_file *s, void *unused)
{
	struct mvpp2_port *port = s->private;
	struct mvpp2 *priv = port->priv;
	struct mvpp2_prs_entry pe;
	unsigned long pmap;
	int i;

	for (i = 0; i < MVPP2_PRS_TCAM_SRAM_SIZE; i++) {
		mvpp2_prs_init_from_hw(port->priv, &pe, i);

		pmap = mvpp2_prs_tcam_port_map_get(&pe);
		if (priv->prs_shadow[i].valid && test_bit(port->id, &pmap))
			seq_printf(s, "%03d\n", i);
	}

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_port_parser);
static int mvpp2_dbgfs_filter_show(struct seq_file *s, void *unused)
{
	struct mvpp2_port *port = s->private;
	struct mvpp2 *priv = port->priv;
	struct mvpp2_prs_entry pe;
	unsigned long pmap;
	int index, tid;

	for (tid = MVPP2_PE_MAC_RANGE_START;
	     tid <= MVPP2_PE_MAC_RANGE_END; tid++) {
		unsigned char da[ETH_ALEN], da_mask[ETH_ALEN];

		if (!priv->prs_shadow[tid].valid ||
		    priv->prs_shadow[tid].lu != MVPP2_PRS_LU_MAC ||
		    priv->prs_shadow[tid].udf != MVPP2_PRS_UDF_MAC_DEF)
			continue;

		mvpp2_prs_init_from_hw(priv, &pe, tid);

		pmap = mvpp2_prs_tcam_port_map_get(&pe);

		/* We only want entries active on this port */
		if (!test_bit(port->id, &pmap))
			continue;

		/* Read mac addr from entry */
		for (index = 0; index < ETH_ALEN; index++)
			mvpp2_prs_tcam_data_byte_get(&pe, index, &da[index],
						     &da_mask[index]);

		seq_printf(s, "%pM\n", da);
	}

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_filter);
static int mvpp2_dbgfs_prs_lu_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_prs_entry *entry = s->private;
	struct mvpp2 *priv = entry->priv;

	seq_printf(s, "%x\n", priv->prs_shadow[entry->tid].lu);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_prs_lu);

static int mvpp2_dbgfs_prs_pmap_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_prs_entry *entry = s->private;
	struct mvpp2_prs_entry pe;
	unsigned int pmap;

	mvpp2_prs_init_from_hw(entry->priv, &pe, entry->tid);

	pmap = mvpp2_prs_tcam_port_map_get(&pe);
	pmap &= MVPP2_PRS_PORT_MASK;

	seq_printf(s, "%02x\n", pmap);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_prs_pmap);

static int mvpp2_dbgfs_prs_ai_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_prs_entry *entry = s->private;
	struct mvpp2_prs_entry pe;
	unsigned char ai, ai_mask;

	mvpp2_prs_init_from_hw(entry->priv, &pe, entry->tid);

	/* AI value sits in the low byte of the AI word, its enable mask
	 * 16 bits up.
	 */
	ai = pe.tcam[MVPP2_PRS_TCAM_AI_WORD] & MVPP2_PRS_AI_MASK;
	ai_mask = (pe.tcam[MVPP2_PRS_TCAM_AI_WORD] >> 16) & MVPP2_PRS_AI_MASK;

	seq_printf(s, "%02x %02x\n", ai, ai_mask);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_prs_ai);

static int mvpp2_dbgfs_prs_hdata_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_prs_entry *entry = s->private;
	struct mvpp2_prs_entry pe;
	unsigned char data[8], mask[8];
	int i;

	mvpp2_prs_init_from_hw(entry->priv, &pe, entry->tid);

	for (i = 0; i < 8; i++)
		mvpp2_prs_tcam_data_byte_get(&pe, i, &data[i], &mask[i]);

	seq_printf(s, "%*phN %*phN\n", 8, data, 8, mask);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_prs_hdata);

static int mvpp2_dbgfs_prs_sram_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_prs_entry *entry = s->private;
	struct mvpp2_prs_entry pe;

	mvpp2_prs_init_from_hw(entry->priv, &pe, entry->tid);

	seq_printf(s, "%*phN\n", 14, pe.sram);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_prs_sram);

static int mvpp2_dbgfs_prs_hits_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_prs_entry *entry = s->private;
	int val;

	val = mvpp2_prs_hits(entry->priv, entry->tid);
	if (val < 0)
		return val;

	seq_printf(s, "%d\n", val);

	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mvpp2_dbgfs_prs_hits);
static int mvpp2_dbgfs_prs_valid_show(struct seq_file *s, void *unused)
{
	struct mvpp2_dbgfs_prs_entry *entry = s->private;
	struct mvpp2 *priv = entry->priv;
	int tid = entry->tid;

	seq_printf(s, "%d\n", priv->prs_shadow[tid].valid ? 1 : 0);

	return 0;
}

static int mvpp2_dbgfs_prs_valid_open(struct inode *inode, struct file *file)
{
	return single_open(file, mvpp2_dbgfs_prs_valid_show, inode->i_private);
}

static int mvpp2_dbgfs_prs_valid_release(struct inode *inode, struct file *file)
{
	struct seq_file *seq = file->private_data;
	struct mvpp2_dbgfs_prs_entry *entry = seq->private;

	kfree(entry);
	return single_release(inode, file);
}

static const struct file_operations mvpp2_dbgfs_prs_valid_fops = {
	.open = mvpp2_dbgfs_prs_valid_open,
	.read = seq_read,
	.release = mvpp2_dbgfs_prs_valid_release,
};
static int mvpp2_dbgfs_flow_port_init(struct dentry *parent,
				      struct mvpp2_port *port,
				      struct mvpp2_dbgfs_flow_entry *entry)
{
	struct mvpp2_dbgfs_port_flow_entry *port_entry;
	struct dentry *port_dir;

	port_dir = debugfs_create_dir(port->dev->name, parent);
	if (IS_ERR(port_dir))
		return PTR_ERR(port_dir);

	/* This will be freed by 'hash_opts' release op */
	port_entry = kmalloc(sizeof(*port_entry), GFP_KERNEL);
	if (!port_entry)
		return -ENOMEM;

	port_entry->port = port;
	port_entry->dbg_fe = entry;

	debugfs_create_file("hash_opts", 0444, port_dir, port_entry,
			    &mvpp2_dbgfs_port_flow_hash_opt_fops);

	debugfs_create_file("engine", 0444, port_dir, port_entry,
			    &mvpp2_dbgfs_port_flow_engine_fops);

	return 0;
}
static int mvpp2_dbgfs_flow_entry_init(struct dentry *parent,
				       struct mvpp2 *priv, int flow)
{
	struct mvpp2_dbgfs_flow_entry *entry;
	struct dentry *flow_entry_dir;
	char flow_entry_name[10];
	int i, ret;

	sprintf(flow_entry_name, "%02d", flow);

	flow_entry_dir = debugfs_create_dir(flow_entry_name, parent);
	/* debugfs_create_dir() reports failure as an ERR_PTR, never NULL */
	if (IS_ERR(flow_entry_dir))
		return PTR_ERR(flow_entry_dir);

	/* This will be freed by 'type' release op */
	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
	if (!entry)
		return -ENOMEM;

	entry->flow = flow;
	entry->priv = priv;
debugfs_create_file("flow_hits", 0444, flow_entry_dir, entry,
&mvpp2_dbgfs_flow_flt_hits_fops);
debugfs_create_file("dec_hits", 0444, flow_entry_dir, entry,
&mvpp2_dbgfs_flow_dec_hits_fops);
debugfs_create_file("type", 0444, flow_entry_dir, entry,
&mvpp2_dbgfs_flow_type_fops);
debugfs_create_file("id", 0444, flow_entry_dir, entry,
&mvpp2_dbgfs_flow_id_fops);
/* Create entry for each port */
for (i = 0; i < priv->port_count; i++) {
ret = mvpp2_dbgfs_flow_port_init(flow_entry_dir,
priv->port_list[i], entry);
if (ret)
return ret;
}
return 0;
}
static int mvpp2_dbgfs_flow_init(struct dentry *parent, struct mvpp2 *priv)
{
	struct dentry *flow_dir;
	int i, ret;

	flow_dir = debugfs_create_dir("flows", parent);
	if (IS_ERR(flow_dir))
		return PTR_ERR(flow_dir);

	for (i = 0; i < MVPP2_N_FLOWS; i++) {
		ret = mvpp2_dbgfs_flow_entry_init(flow_dir, priv, i);
		if (ret)
			return ret;
	}

	return 0;
}
static int mvpp2_dbgfs_prs_entry_init(struct dentry *parent,
				      struct mvpp2 *priv, int tid)
{
	struct mvpp2_dbgfs_prs_entry *entry;
	struct dentry *prs_entry_dir;
	char prs_entry_name[10];

	if (tid >= MVPP2_PRS_TCAM_SRAM_SIZE)
		return -EINVAL;

	sprintf(prs_entry_name, "%03d", tid);

	prs_entry_dir = debugfs_create_dir(prs_entry_name, parent);
	if (IS_ERR(prs_entry_dir))
		return PTR_ERR(prs_entry_dir);

	/* The 'valid' entry's ops will free that */
	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
	if (!entry)
		return -ENOMEM;

	entry->tid = tid;
	entry->priv = priv;

	/* Create each attr.  All of them are read-only (0444), since none
	 * of the fops implement .write.
	 */
	debugfs_create_file("sram", 0444, prs_entry_dir, entry,
			    &mvpp2_dbgfs_prs_sram_fops);

	debugfs_create_file("valid", 0444, prs_entry_dir, entry,
			    &mvpp2_dbgfs_prs_valid_fops);

	debugfs_create_file("lookup_id", 0444, prs_entry_dir, entry,
			    &mvpp2_dbgfs_prs_lu_fops);

	debugfs_create_file("ai", 0444, prs_entry_dir, entry,
			    &mvpp2_dbgfs_prs_ai_fops);

	debugfs_create_file("header_data", 0444, prs_entry_dir, entry,
			    &mvpp2_dbgfs_prs_hdata_fops);

	debugfs_create_file("hits", 0444, prs_entry_dir, entry,
			    &mvpp2_dbgfs_prs_hits_fops);
	return 0;
}
static int mvpp2_dbgfs_prs_init(struct dentry *parent, struct mvpp2 *priv)
{
	struct dentry *prs_dir;
	int i, ret;

	prs_dir = debugfs_create_dir("parser", parent);
	if (IS_ERR(prs_dir))
		return PTR_ERR(prs_dir);

	for (i = 0; i < MVPP2_PRS_TCAM_SRAM_SIZE; i++) {
		ret = mvpp2_dbgfs_prs_entry_init(prs_dir, priv, i);
		if (ret)
			return ret;
	}

	return 0;
}
static int mvpp2_dbgfs_port_init(struct dentry *parent,
				 struct mvpp2_port *port)
{
	struct dentry *port_dir;

	port_dir = debugfs_create_dir(port->dev->name, parent);
	if (IS_ERR(port_dir))
		return PTR_ERR(port_dir);

	debugfs_create_file("parser_entries", 0444, port_dir, port,
			    &mvpp2_dbgfs_port_parser_fops);

	debugfs_create_file("mac_filter", 0444, port_dir, port,
			    &mvpp2_dbgfs_filter_fops);

	debugfs_create_file("vid_filter", 0444, port_dir, port,
			    &mvpp2_dbgfs_port_vid_fops);
debugfs_create_file("c2_hits", 0444, port_dir, port,
&mvpp2_dbgfs_flow_c2_hits_fops);
debugfs_create_file("default_rxq", 0444, port_dir, port,
&mvpp2_dbgfs_flow_c2_rxq_fops);
debugfs_create_file("rss_enable", 0444, port_dir, port,
&mvpp2_dbgfs_flow_c2_enable_fops);
	return 0;
}
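
/* Top-level init/teardown.  mvpp2_dbgfs_init() builds the whole hierarchy
 * at probe time; mvpp2_dbgfs_cleanup() removes it recursively.  The
 * per-entry allocations made above are freed by their files' release ops,
 * not here.
 */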
void mvpp2_dbgfs_cleanup(struct mvpp2 *priv)
{
	debugfs_remove_recursive(priv->dbgfs_dir);
}
void mvpp2_dbgfs_init(struct mvpp2 *priv, const char *name)
{
	struct dentry *mvpp2_dir, *mvpp2_root;
	int ret, i;

	mvpp2_root = debugfs_lookup(MVPP2_DRIVER_NAME, NULL);
	if (!mvpp2_root) {
		mvpp2_root = debugfs_create_dir(MVPP2_DRIVER_NAME, NULL);
		if (IS_ERR(mvpp2_root))
			return;
	}

	mvpp2_dir = debugfs_create_dir(name, mvpp2_root);
	if (IS_ERR(mvpp2_dir))
		return;

	priv->dbgfs_dir = mvpp2_dir;

	ret = mvpp2_dbgfs_prs_init(mvpp2_dir, priv);
	if (ret)
		goto err;

	for (i = 0; i < priv->port_count; i++) {
		ret = mvpp2_dbgfs_port_init(mvpp2_dir, priv->port_list[i]);
		if (ret)
			goto err;
	}

	ret = mvpp2_dbgfs_flow_init(mvpp2_dir, priv);
	if (ret)
		goto err;
	return;
err:
	mvpp2_dbgfs_cleanup(priv);
}