linux-stable/drivers/net/ifb.c
Lukas Wunner 42df6e1d22 netfilter: Introduce egress hook
Support classifying packets with netfilter on egress to satisfy user
requirements such as:
* outbound security policies for containers (Laura)
* filtering and mangling intra-node Direct Server Return (DSR) traffic
  on a load balancer (Laura)
* filtering locally generated traffic coming in through AF_PACKET,
  such as local ARP traffic generated for clustering purposes or DHCP
  (Laura; the AF_PACKET plumbing is contained in a follow-up commit)
* L2 filtering from ingress and egress for AVB (Audio Video Bridging)
  and gPTP with nftables (Pablo)
* in the future: in-kernel NAT64/NAT46 (Pablo)

The egress hook introduced herein complements the ingress hook added by
commit e687ad60af ("netfilter: add netfilter ingress hook after
handle_ing() under unique static key").  A patch for nftables to hook up
egress rules from user space has been submitted separately, so users may
immediately take advantage of the feature.

Alternatively or in addition to netfilter, packets can be classified
with traffic control (tc).  On ingress, packets are classified first by
tc, then by netfilter.  On egress, the order is reversed for symmetry.
Conceptually, tc and netfilter can be thought of as layers, with
netfilter layered above tc.

Traffic control is capable of redirecting packets to another interface
(man 8 tc-mirred).  E.g., an ingress packet may be redirected from the
host namespace to a container via a veth connection:
tc ingress (host) -> tc egress (veth host) -> tc ingress (veth container)
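For illustration only (eth0 and veth0 are hypothetical names, veth0
being the host end of the container's veth pair), such a redirect can
be set up with:
tc qdisc add dev eth0 clsact
tc filter add dev eth0 ingress matchall action mirred egress redirect dev veth0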

In this case, netfilter egress classifying is not performed when leaving
the host namespace!  That's because the packet is still on the tc layer.
If tc redirects the packet to a physical interface in the host namespace
such that it leaves the system, the packet is never subjected to
netfilter egress classifying.  That is only logical since it hasn't
passed through netfilter ingress classifying either.

Packets can alternatively be redirected at the netfilter layer using
nft fwd.  Such a packet *is* subjected to netfilter egress classifying
since it has reached the netfilter layer.
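For example (table, chain and interface names here are purely
illustrative), a forward at the netfilter layer might look like:
nft add table netdev t
nft add chain netdev t ing '{ type filter hook ingress device eth0 priority 0; }'
nft add rule netdev t ing fwd to veth0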

Internally, the skb->nf_skip_egress flag controls whether netfilter is
invoked on egress by __dev_queue_xmit().  Because __dev_queue_xmit() may
be called recursively by tunnel drivers such as vxlan, the flag is
reverted to false after sch_handle_egress().  This ensures that
netfilter is applied both on the overlay and underlying network.
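Roughly, as a simplified sketch of that flow (not the verbatim
__dev_queue_xmit() code, but using the helpers introduced by this
patch):

	if (static_branch_unlikely(&egress_needed_key)) {
		if (nf_hook_egress_active()) {
			/* runs egress hooks unless skb->nf_skip_egress is set */
			skb = nf_hook_egress(skb, &rc, dev);
			if (!skb)
				goto out;
		}
		/* tc may redirect; skip netfilter while on the tc layer */
		nf_skip_egress(skb, true);
		skb = sch_handle_egress(skb, &rc, dev);
		if (!skb)
			goto out;
		/* re-arm for a recursive xmit, e.g. a vxlan underlay */
		nf_skip_egress(skb, false);
	}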

Interaction between tc and netfilter is possible by setting and querying
skb->mark.
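For example, tc could set a mark on ingress with
tc filter add dev eth0 ingress matchall action skbedit mark 42
and a netfilter egress rule could later match it with "meta mark 42"
(interface name and mark value are purely illustrative).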

If netfilter egress classifying is not enabled on any interface, it is
patched out of the data path by way of a static_key and doesn't make a
performance difference that is discernible from noise:

Before:             1537 1538 1538 1537 1538 1537 Mb/sec
After:              1536 1534 1539 1539 1539 1540 Mb/sec
Before + tc accept: 1418 1418 1418 1419 1419 1418 Mb/sec
After  + tc accept: 1419 1424 1418 1419 1422 1420 Mb/sec
Before + tc drop:   1620 1619 1619 1619 1620 1620 Mb/sec
After  + tc drop:   1616 1624 1625 1624 1622 1619 Mb/sec

When netfilter egress classifying is enabled on at least one interface,
a minimal performance penalty is incurred for every egress packet, even
if the interface it's transmitted over doesn't have any netfilter egress
rules configured.  That is caused by checking dev->nf_hooks_egress
against NULL.
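For example, attaching an egress base chain to an interface enables it
(assuming the nftables user space support mentioned above; table and
chain names are illustrative):
nft add table netdev t
nft add chain netdev t egr '{ type filter hook egress device foo priority 0; }'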

Measurements were performed on a Core i7-3615QM.  Commands to reproduce:
ip link add dev foo type dummy
ip link set dev foo up
modprobe pktgen
echo "add_device foo" > /proc/net/pktgen/kpktgend_3
samples/pktgen/pktgen_bench_xmit_mode_queue_xmit.sh -i foo -n 400000000 -m "11:11:11:11:11:11" -d 1.1.1.1

Accept all traffic with tc:
tc qdisc add dev foo clsact
tc filter add dev foo egress bpf da bytecode '1,6 0 0 0,'

Drop all traffic with tc:
tc qdisc add dev foo clsact
tc filter add dev foo egress bpf da bytecode '1,6 0 0 2,'

Apply this patch when measuring packet drops to avoid errors in dmesg:
https://lore.kernel.org/netdev/a73dda33-57f4-95d8-ea51-ed483abd6a7a@iogearbox.net/

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Laura García Liébana <nevola@gmail.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2021-10-14 23:06:28 +02:00

// SPDX-License-Identifier: GPL-2.0-or-later
/* drivers/net/ifb.c:

	The purpose of this driver is to provide a device that allows
	for sharing of resources:

	1) qdiscs/policies that are per device as opposed to system wide.
	   ifb allows for a device which can be redirected to thus providing
	   an impression of sharing.

	2) Allows for queueing incoming traffic for shaping instead of
	   dropping.

	The original concept is based on what is known as the IMQ
	driver initially written by Martin Devera, later rewritten
	by Patrick McHardy and then maintained by Andre Correa.

	You need the tc action mirror or redirect to feed this device
	packets.

	Authors: Jamal Hadi Salim (2005)
*/
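
/*
 * Typical usage to shape ingress traffic (illustrative only, the
 * interface names are examples):
 *
 *	modprobe ifb
 *	ip link set dev ifb0 up
 *	tc qdisc add dev eth0 handle ffff: ingress
 *	tc filter add dev eth0 parent ffff: matchall \
 *		action mirred egress redirect dev ifb0
 *	tc qdisc add dev ifb0 root sfq
 *
 * Packets arriving on eth0 are then queued through the qdisc attached
 * to ifb0 before being handed back to the stack.
 */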

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/moduleparam.h>
#include <linux/netfilter_netdev.h>
#include <net/pkt_sched.h>
#include <net/net_namespace.h>

#define TX_Q_LIMIT 32

struct ifb_q_private {
	struct net_device *dev;
	struct tasklet_struct ifb_tasklet;
	int tasklet_pending;
	int txqnum;
	struct sk_buff_head rq;
	u64 rx_packets;
	u64 rx_bytes;
	struct u64_stats_sync rsync;

	struct u64_stats_sync tsync;
	u64 tx_packets;
	u64 tx_bytes;
	struct sk_buff_head tq;
} ____cacheline_aligned_in_smp;

struct ifb_dev_private {
	struct ifb_q_private *tx_private;
};

static netdev_tx_t ifb_xmit(struct sk_buff *skb, struct net_device *dev);
static int ifb_open(struct net_device *dev);
static int ifb_close(struct net_device *dev);
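
/* Per-queue tasklet: drain the packets queued by ifb_xmit(), restore
 * skb->dev to the interface they were redirected from and hand them
 * back, either by transmitting them (egress redirect) or by
 * re-injecting them into the receive path (ingress redirect).
 */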
static void ifb_ri_tasklet(struct tasklet_struct *t)
{
	struct ifb_q_private *txp = from_tasklet(txp, t, ifb_tasklet);
	struct netdev_queue *txq;
	struct sk_buff *skb;

	txq = netdev_get_tx_queue(txp->dev, txp->txqnum);
	skb = skb_peek(&txp->tq);
	if (!skb) {
		if (!__netif_tx_trylock(txq))
			goto resched;
		skb_queue_splice_tail_init(&txp->rq, &txp->tq);
		__netif_tx_unlock(txq);
	}

	while ((skb = __skb_dequeue(&txp->tq)) != NULL) {
		/* Skip tc and netfilter to prevent redirection loop. */
		skb->redirected = 0;
		skb->tc_skip_classify = 1;
		nf_skip_egress(skb, true);

		u64_stats_update_begin(&txp->tsync);
		txp->tx_packets++;
		txp->tx_bytes += skb->len;
		u64_stats_update_end(&txp->tsync);

		rcu_read_lock();
		skb->dev = dev_get_by_index_rcu(dev_net(txp->dev), skb->skb_iif);
		if (!skb->dev) {
			rcu_read_unlock();
			dev_kfree_skb(skb);
			txp->dev->stats.tx_dropped++;
			if (skb_queue_len(&txp->tq) != 0)
				goto resched;
			break;
		}
		rcu_read_unlock();
		skb->skb_iif = txp->dev->ifindex;

		if (!skb->from_ingress) {
			dev_queue_xmit(skb);
		} else {
			skb_pull_rcsum(skb, skb->mac_len);
			netif_receive_skb(skb);
		}
	}

	if (__netif_tx_trylock(txq)) {
		skb = skb_peek(&txp->rq);
		if (!skb) {
			txp->tasklet_pending = 0;
			if (netif_tx_queue_stopped(txq))
				netif_tx_wake_queue(txq);
		} else {
			__netif_tx_unlock(txq);
			goto resched;
		}
		__netif_tx_unlock(txq);
	} else {
resched:
		txp->tasklet_pending = 1;
		tasklet_schedule(&txp->ifb_tasklet);
	}
}

static void ifb_stats64(struct net_device *dev,
			struct rtnl_link_stats64 *stats)
{
	struct ifb_dev_private *dp = netdev_priv(dev);
	struct ifb_q_private *txp = dp->tx_private;
	unsigned int start;
	u64 packets, bytes;
	int i;

	for (i = 0; i < dev->num_tx_queues; i++,txp++) {
		do {
			start = u64_stats_fetch_begin_irq(&txp->rsync);
			packets = txp->rx_packets;
			bytes = txp->rx_bytes;
		} while (u64_stats_fetch_retry_irq(&txp->rsync, start));
		stats->rx_packets += packets;
		stats->rx_bytes += bytes;

		do {
			start = u64_stats_fetch_begin_irq(&txp->tsync);
			packets = txp->tx_packets;
			bytes = txp->tx_bytes;
		} while (u64_stats_fetch_retry_irq(&txp->tsync, start));
		stats->tx_packets += packets;
		stats->tx_bytes += bytes;
	}
	stats->rx_dropped = dev->stats.rx_dropped;
	stats->tx_dropped = dev->stats.tx_dropped;
}

static int ifb_dev_init(struct net_device *dev)
{
	struct ifb_dev_private *dp = netdev_priv(dev);
	struct ifb_q_private *txp;
	int i;

	txp = kcalloc(dev->num_tx_queues, sizeof(*txp), GFP_KERNEL);
	if (!txp)
		return -ENOMEM;
	dp->tx_private = txp;
	for (i = 0; i < dev->num_tx_queues; i++,txp++) {
		txp->txqnum = i;
		txp->dev = dev;
		__skb_queue_head_init(&txp->rq);
		__skb_queue_head_init(&txp->tq);
		u64_stats_init(&txp->rsync);
		u64_stats_init(&txp->tsync);
		tasklet_setup(&txp->ifb_tasklet, ifb_ri_tasklet);
		netif_tx_start_queue(netdev_get_tx_queue(dev, i));
	}
	return 0;
}

static const struct net_device_ops ifb_netdev_ops = {
	.ndo_open		= ifb_open,
	.ndo_stop		= ifb_close,
	.ndo_get_stats64	= ifb_stats64,
	.ndo_start_xmit		= ifb_xmit,
	.ndo_validate_addr	= eth_validate_addr,
	.ndo_init		= ifb_dev_init,
};

#define IFB_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | NETIF_F_FRAGLIST | \
		      NETIF_F_GSO_SOFTWARE | NETIF_F_GSO_ENCAP_ALL | \
		      NETIF_F_HIGHDMA | NETIF_F_HW_VLAN_CTAG_TX | \
		      NETIF_F_HW_VLAN_STAG_TX)

static void ifb_dev_free(struct net_device *dev)
{
	struct ifb_dev_private *dp = netdev_priv(dev);
	struct ifb_q_private *txp = dp->tx_private;
	int i;

	for (i = 0; i < dev->num_tx_queues; i++,txp++) {
		tasklet_kill(&txp->ifb_tasklet);
		__skb_queue_purge(&txp->rq);
		__skb_queue_purge(&txp->tq);
	}
	kfree(dp->tx_private);
}

static void ifb_setup(struct net_device *dev)
{
	/* Initialize the device structure. */
	dev->netdev_ops = &ifb_netdev_ops;

	/* Fill in device structure with ethernet-generic values. */
	ether_setup(dev);
	dev->tx_queue_len = TX_Q_LIMIT;

	dev->features |= IFB_FEATURES;
	dev->hw_features |= dev->features;
	dev->hw_enc_features |= dev->features;
	dev->vlan_features |= IFB_FEATURES & ~(NETIF_F_HW_VLAN_CTAG_TX |
					       NETIF_F_HW_VLAN_STAG_TX);

	dev->flags |= IFF_NOARP;
	dev->flags &= ~IFF_MULTICAST;
	dev->priv_flags &= ~IFF_TX_SKB_SHARING;
	netif_keep_dst(dev);
	eth_hw_addr_random(dev);
	dev->needs_free_netdev = true;
	dev->priv_destructor = ifb_dev_free;

	dev->min_mtu = 0;
	dev->max_mtu = 0;
}
static netdev_tx_t ifb_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct ifb_dev_private *dp = netdev_priv(dev);
	struct ifb_q_private *txp = dp->tx_private + skb_get_queue_mapping(skb);

	u64_stats_update_begin(&txp->rsync);
	txp->rx_packets++;
	txp->rx_bytes += skb->len;
	u64_stats_update_end(&txp->rsync);

	if (!skb->redirected || !skb->skb_iif) {
		dev_kfree_skb(skb);
		dev->stats.rx_dropped++;
		return NETDEV_TX_OK;
	}

	if (skb_queue_len(&txp->rq) >= dev->tx_queue_len)
		netif_tx_stop_queue(netdev_get_tx_queue(dev, txp->txqnum));

	__skb_queue_tail(&txp->rq, skb);
	if (!txp->tasklet_pending) {
		txp->tasklet_pending = 1;
		tasklet_schedule(&txp->ifb_tasklet);
	}
	return NETDEV_TX_OK;
}

static int ifb_close(struct net_device *dev)
{
	netif_tx_stop_all_queues(dev);
	return 0;
}

static int ifb_open(struct net_device *dev)
{
	netif_tx_start_all_queues(dev);
	return 0;
}

static int ifb_validate(struct nlattr *tb[], struct nlattr *data[],
			struct netlink_ext_ack *extack)
{
	if (tb[IFLA_ADDRESS]) {
		if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN)
			return -EINVAL;
		if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS])))
			return -EADDRNOTAVAIL;
	}
	return 0;
}

static struct rtnl_link_ops ifb_link_ops __read_mostly = {
	.kind		= "ifb",
	.priv_size	= sizeof(struct ifb_dev_private),
	.setup		= ifb_setup,
	.validate	= ifb_validate,
};

/* Number of ifb devices to be set up by this module.
 * Note that these legacy devices have one queue.
 * Prefer something like : ip link add ifb10 numtxqueues 8 type ifb
 */
static int numifbs = 2;
module_param(numifbs, int, 0);
MODULE_PARM_DESC(numifbs, "Number of ifb devices");

static int __init ifb_init_one(int index)
{
	struct net_device *dev_ifb;
	int err;

	dev_ifb = alloc_netdev(sizeof(struct ifb_dev_private), "ifb%d",
			       NET_NAME_UNKNOWN, ifb_setup);
	if (!dev_ifb)
		return -ENOMEM;

	dev_ifb->rtnl_link_ops = &ifb_link_ops;
	err = register_netdevice(dev_ifb);
	if (err < 0)
		goto err;

	return 0;

err:
	free_netdev(dev_ifb);
	return err;
}

static int __init ifb_init_module(void)
{
	int i, err;

	down_write(&pernet_ops_rwsem);
	rtnl_lock();
	err = __rtnl_link_register(&ifb_link_ops);
	if (err < 0)
		goto out;

	for (i = 0; i < numifbs && !err; i++) {
		err = ifb_init_one(i);
		cond_resched();
	}
	if (err)
		__rtnl_link_unregister(&ifb_link_ops);

out:
	rtnl_unlock();
	up_write(&pernet_ops_rwsem);
	return err;
}

static void __exit ifb_cleanup_module(void)
{
	rtnl_link_unregister(&ifb_link_ops);
}

module_init(ifb_init_module);
module_exit(ifb_cleanup_module);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Jamal Hadi Salim");
MODULE_ALIAS_RTNL_LINK("ifb");