bpf-next-for-netdev

-----BEGIN PGP SIGNATURE-----
 
 iHUEABYIAB0WIQTFp0I1jqZrAX+hPRXbK58LschIgwUCZTp12QAKCRDbK58LschI
 g8BrAQDifqp5liEEdXV8jdReBwJtqInjrL5tzy5LcyHUMQbTaAEA6Ph3Ct3B+3oA
 mFnIW/y6UJiJrby0Xz4+vV5BXI/5WQg=
 =pLCV
 -----END PGP SIGNATURE-----

Merge tag 'for-netdev' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2023-10-26

We've added 51 non-merge commits during the last 10 day(s) which contain
a total of 75 files changed, 5037 insertions(+), 200 deletions(-).

The main changes are:

1) Add open-coded task, css_task and css iterator support.
   One of the use cases is customizable OOM victim selection via BPF,
   from Chuyi Zhou.

2) Fix BPF verifier's iterator convergence logic to use exact states
   comparison for convergence checks, from Eduard Zingerman,
   Andrii Nakryiko and Alexei Starovoitov.

3) Add a BPF-programmable network device (netkit) where bpf_mprog defines
   the logic of its xmit routine. It can operate in L3 and L2 mode,
   from Daniel Borkmann and Nikolay Aleksandrov.

4) Batch of fixes for BPF per-CPU kptr and re-enable unit_size checking
   for global per-CPU allocator, from Hou Tao.

5) Fix libbpf's eager assumption that an SHT_GNU_verdef ELF section is
   present whenever a binary has an SHT_GNU_versym section,
   from Andrii Nakryiko.

6) Fix BPF ringbuf correctness to fold smp_mb__before_atomic() into
   atomic_set_release(), from Paul E. McKenney.

7) Add a warning under CONFIG_DEBUG_NET if a NAPI callback missed
   xdp_do_flush(), which helps catch drivers that forgot to call it,
   from Sebastian Andrzej Siewior.

8) Fix a missing RCU read lock in bpf_task_under_cgroup() which triggered
   a warning under sleepable programs, from Yafang Shao.

9) Avoid unnecessary -EBUSY from htab_lock_bucket by disabling IRQ before
   checking map_locked, from Song Liu.

10) Make BPF CI linked_list failure test more robust,
    from Kumar Kartikeya Dwivedi.

11) Enable samples/bpf to be built as PIE in Fedora, from Viktor Malik.

12) Fix xsk starvation when multiple xsk sockets are associated with
    a single xsk_buff_pool, from Albert Huang.

13) Clarify in the BPF ISA standardization document that the signed modulo
    operation uses truncated division, from Dave Thaler.

14) Improve BPF verifier's JEQ/JNE branch taken logic to also consider
    signed bounds knowledge, from Andrii Nakryiko.

15) Add an option to XDP selftests to use multi-buffer AF_XDP
    xdp_hw_metadata and mark used XDP programs as capable to use frags,
    from Larysa Zaremba.

16) Fix bpftool's BTF dumper wrt printing a pointer value and another
    one to fix struct_ops dump in an array, from Manu Bretelle.

* tag 'for-netdev' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (51 commits)
  netkit: Remove explicit active/peer ptr initialization
  selftests/bpf: Fix selftests broken by mitigations=off
  samples/bpf: Allow building with custom bpftool
  samples/bpf: Fix passing LDFLAGS to libbpf
  samples/bpf: Allow building with custom CFLAGS/LDFLAGS
  bpf: Add more WARN_ON_ONCE checks for mismatched alloc and free
  selftests/bpf: Add selftests for netkit
  selftests/bpf: Add netlink helper library
  bpftool: Extend net dump with netkit progs
  bpftool: Implement link show support for netkit
  libbpf: Add link-based API for netkit
  tools: Sync if_link uapi header
  netkit, bpf: Add bpf programmable net device
  bpf: Improve JEQ/JNE branch taken logic
  bpf: Fold smp_mb__before_atomic() into atomic_set_release()
  bpf: Fix unnecessary -EBUSY from htab_lock_bucket
  xsk: Avoid starving the xsk further down the list
  bpf: print full verifier states on infinite loop detection
  selftests/bpf: test if state loops are detected in a tricky case
  bpf: correct loop detection for iterators convergence
  ...
====================

Link: https://lore.kernel.org/r/20231026150509.2824-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit c6f9b7138b
Jakub Kicinski, 2023-10-26 20:02:40 -07:00
75 changed files with 5039 additions and 202 deletions


@ -283,6 +283,14 @@ For signed operations (``BPF_SDIV`` and ``BPF_SMOD``), for ``BPF_ALU``,
is first :term:`sign extended<Sign Extend>` from 32 to 64 bits, and then
interpreted as a 64-bit signed value.
Note that there are varying definitions of the signed modulo operation
when the dividend or divisor is negative, where implementations often
vary by language such that Python, Ruby, etc. differ from C, Go, Java,
etc. This specification requires that signed modulo use truncated division
(where -13 % 3 == -1) as implemented in C, Go, etc.:

   a % n = a - n * trunc(a / n)
The ``BPF_MOVSX`` instruction does a move operation with sign extension.
``BPF_ALU | BPF_MOVSX`` :term:`sign extends<Sign Extend>` 8-bit and 16-bit operands into 32
bit operands, and zeroes the remaining upper 32 bits.
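
As a side illustration (not part of the quoted documentation), a small
standalone C sketch contrasting truncated modulo, which C's % operator and
BPF_SMOD use, with floored modulo, which e.g. Python's % produces for the
same operands; the trunc_mod()/floor_mod() helper names are illustrative only:

/* build: cc -o mod mod.c -lm */
#include <stdio.h>
#include <math.h>

/* a - n * trunc(a / n): C's integer division already truncates toward zero */
static long trunc_mod(long a, long n)
{
	return a - n * (a / n);
}

/* a - n * floor(a / n): the floored definition used by Python, Ruby, etc. */
static long floor_mod(long a, long n)
{
	return a - n * (long)floor((double)a / (double)n);
}

int main(void)
{
	printf("truncated: -13 %% 3 = %ld\n", trunc_mod(-13, 3)); /* prints -1 */
	printf("floored:   -13 %% 3 = %ld\n", floor_mod(-13, 3)); /* prints  2 */
	return 0;
}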


@ -3796,6 +3796,15 @@ L: bpf@vger.kernel.org
S: Odd Fixes
K: (?:\b|_)bpf(?:\b|_)
BPF [NETKIT] (BPF-programmable network device)
M: Daniel Borkmann <daniel@iogearbox.net>
M: Nikolay Aleksandrov <razor@blackwall.org>
L: bpf@vger.kernel.org
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/netkit.c
F: include/net/netkit.h
BPF [NETWORKING] (struct_ops, reuseport)
M: Martin KaFai Lau <martin.lau@linux.dev>
L: bpf@vger.kernel.org


@ -448,6 +448,15 @@ config NLMON
diagnostics, etc. This is mostly intended for developers or support
to debug netlink issues. If unsure, say N.
config NETKIT
bool "BPF-programmable network device"
depends on BPF_SYSCALL
help
The netkit device is a virtual networking device where BPF programs
can be attached to the device's transmission routine in order to
implement the driver's internal logic. The device can be configured
to operate in L3 or L2 mode. If unsure, say N.
config NET_VRF
tristate "Virtual Routing and Forwarding (Lite)"
depends on IP_MULTIPLE_TABLES


@ -22,6 +22,7 @@ obj-$(CONFIG_MDIO) += mdio.o
obj-$(CONFIG_NET) += loopback.o
obj-$(CONFIG_NETDEV_LEGACY_INIT) += Space.o
obj-$(CONFIG_NETCONSOLE) += netconsole.o
obj-$(CONFIG_NETKIT) += netkit.o
obj-y += phy/
obj-y += pse-pd/
obj-y += mdio/

drivers/net/netkit.c (new file, 936 lines)

@ -0,0 +1,936 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2023 Isovalent */
#include <linux/netdevice.h>
#include <linux/ethtool.h>
#include <linux/etherdevice.h>
#include <linux/filter.h>
#include <linux/netfilter_netdev.h>
#include <linux/bpf_mprog.h>
#include <net/netkit.h>
#include <net/dst.h>
#include <net/tcx.h>
#define DRV_NAME "netkit"
struct netkit {
/* Needed in fast-path */
struct net_device __rcu *peer;
struct bpf_mprog_entry __rcu *active;
enum netkit_action policy;
struct bpf_mprog_bundle bundle;
/* Needed in slow-path */
enum netkit_mode mode;
bool primary;
u32 headroom;
};
struct netkit_link {
struct bpf_link link;
struct net_device *dev;
u32 location;
};
static __always_inline int
netkit_run(const struct bpf_mprog_entry *entry, struct sk_buff *skb,
enum netkit_action ret)
{
const struct bpf_mprog_fp *fp;
const struct bpf_prog *prog;
bpf_mprog_foreach_prog(entry, fp, prog) {
bpf_compute_data_pointers(skb);
ret = bpf_prog_run(prog, skb);
if (ret != NETKIT_NEXT)
break;
}
return ret;
}
static void netkit_prep_forward(struct sk_buff *skb, bool xnet)
{
skb_scrub_packet(skb, xnet);
skb->priority = 0;
nf_skip_egress(skb, true);
}
static struct netkit *netkit_priv(const struct net_device *dev)
{
return netdev_priv(dev);
}
static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct netkit *nk = netkit_priv(dev);
enum netkit_action ret = READ_ONCE(nk->policy);
netdev_tx_t ret_dev = NET_XMIT_SUCCESS;
const struct bpf_mprog_entry *entry;
struct net_device *peer;
rcu_read_lock();
peer = rcu_dereference(nk->peer);
if (unlikely(!peer || !(peer->flags & IFF_UP) ||
!pskb_may_pull(skb, ETH_HLEN) ||
skb_orphan_frags(skb, GFP_ATOMIC)))
goto drop;
netkit_prep_forward(skb, !net_eq(dev_net(dev), dev_net(peer)));
skb->dev = peer;
entry = rcu_dereference(nk->active);
if (entry)
ret = netkit_run(entry, skb, ret);
switch (ret) {
case NETKIT_NEXT:
case NETKIT_PASS:
skb->protocol = eth_type_trans(skb, skb->dev);
skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
__netif_rx(skb);
break;
case NETKIT_REDIRECT:
skb_do_redirect(skb);
break;
case NETKIT_DROP:
default:
drop:
kfree_skb(skb);
dev_core_stats_tx_dropped_inc(dev);
ret_dev = NET_XMIT_DROP;
break;
}
rcu_read_unlock();
return ret_dev;
}
static int netkit_open(struct net_device *dev)
{
struct netkit *nk = netkit_priv(dev);
struct net_device *peer = rtnl_dereference(nk->peer);
if (!peer)
return -ENOTCONN;
if (peer->flags & IFF_UP) {
netif_carrier_on(dev);
netif_carrier_on(peer);
}
return 0;
}
static int netkit_close(struct net_device *dev)
{
struct netkit *nk = netkit_priv(dev);
struct net_device *peer = rtnl_dereference(nk->peer);
netif_carrier_off(dev);
if (peer)
netif_carrier_off(peer);
return 0;
}
static int netkit_get_iflink(const struct net_device *dev)
{
struct netkit *nk = netkit_priv(dev);
struct net_device *peer;
int iflink = 0;
rcu_read_lock();
peer = rcu_dereference(nk->peer);
if (peer)
iflink = peer->ifindex;
rcu_read_unlock();
return iflink;
}
static void netkit_set_multicast(struct net_device *dev)
{
/* Nothing to do, we receive whatever gets pushed to us! */
}
static void netkit_set_headroom(struct net_device *dev, int headroom)
{
struct netkit *nk = netkit_priv(dev), *nk2;
struct net_device *peer;
if (headroom < 0)
headroom = NET_SKB_PAD;
rcu_read_lock();
peer = rcu_dereference(nk->peer);
if (unlikely(!peer))
goto out;
nk2 = netkit_priv(peer);
nk->headroom = headroom;
headroom = max(nk->headroom, nk2->headroom);
peer->needed_headroom = headroom;
dev->needed_headroom = headroom;
out:
rcu_read_unlock();
}
static struct net_device *netkit_peer_dev(struct net_device *dev)
{
return rcu_dereference(netkit_priv(dev)->peer);
}
static void netkit_uninit(struct net_device *dev);
static const struct net_device_ops netkit_netdev_ops = {
.ndo_open = netkit_open,
.ndo_stop = netkit_close,
.ndo_start_xmit = netkit_xmit,
.ndo_set_rx_mode = netkit_set_multicast,
.ndo_set_rx_headroom = netkit_set_headroom,
.ndo_get_iflink = netkit_get_iflink,
.ndo_get_peer_dev = netkit_peer_dev,
.ndo_uninit = netkit_uninit,
.ndo_features_check = passthru_features_check,
};
static void netkit_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
strscpy(info->driver, DRV_NAME, sizeof(info->driver));
}
static const struct ethtool_ops netkit_ethtool_ops = {
.get_drvinfo = netkit_get_drvinfo,
};
static void netkit_setup(struct net_device *dev)
{
static const netdev_features_t netkit_features_hw_vlan =
NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_CTAG_RX |
NETIF_F_HW_VLAN_STAG_TX |
NETIF_F_HW_VLAN_STAG_RX;
static const netdev_features_t netkit_features =
netkit_features_hw_vlan |
NETIF_F_SG |
NETIF_F_FRAGLIST |
NETIF_F_HW_CSUM |
NETIF_F_RXCSUM |
NETIF_F_SCTP_CRC |
NETIF_F_HIGHDMA |
NETIF_F_GSO_SOFTWARE |
NETIF_F_GSO_ENCAP_ALL;
ether_setup(dev);
dev->max_mtu = ETH_MAX_MTU;
dev->flags |= IFF_NOARP;
dev->priv_flags &= ~IFF_TX_SKB_SHARING;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
dev->priv_flags |= IFF_PHONY_HEADROOM;
dev->priv_flags |= IFF_NO_QUEUE;
dev->ethtool_ops = &netkit_ethtool_ops;
dev->netdev_ops = &netkit_netdev_ops;
dev->features |= netkit_features | NETIF_F_LLTX;
dev->hw_features = netkit_features;
dev->hw_enc_features = netkit_features;
dev->mpls_features = NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE;
dev->vlan_features = dev->features & ~netkit_features_hw_vlan;
dev->needs_free_netdev = true;
netif_set_tso_max_size(dev, GSO_MAX_SIZE);
}
static struct net *netkit_get_link_net(const struct net_device *dev)
{
struct netkit *nk = netkit_priv(dev);
struct net_device *peer = rtnl_dereference(nk->peer);
return peer ? dev_net(peer) : dev_net(dev);
}
static int netkit_check_policy(int policy, struct nlattr *tb,
struct netlink_ext_ack *extack)
{
switch (policy) {
case NETKIT_PASS:
case NETKIT_DROP:
return 0;
default:
NL_SET_ERR_MSG_ATTR(extack, tb,
"Provided default xmit policy not supported");
return -EINVAL;
}
}
static int netkit_check_mode(int mode, struct nlattr *tb,
struct netlink_ext_ack *extack)
{
switch (mode) {
case NETKIT_L2:
case NETKIT_L3:
return 0;
default:
NL_SET_ERR_MSG_ATTR(extack, tb,
"Provided device mode can only be L2 or L3");
return -EINVAL;
}
}
static int netkit_validate(struct nlattr *tb[], struct nlattr *data[],
struct netlink_ext_ack *extack)
{
struct nlattr *attr = tb[IFLA_ADDRESS];
if (!attr)
return 0;
NL_SET_ERR_MSG_ATTR(extack, attr,
"Setting Ethernet address is not supported");
return -EOPNOTSUPP;
}
static struct rtnl_link_ops netkit_link_ops;
static int netkit_new_link(struct net *src_net, struct net_device *dev,
struct nlattr *tb[], struct nlattr *data[],
struct netlink_ext_ack *extack)
{
struct nlattr *peer_tb[IFLA_MAX + 1], **tbp = tb, *attr;
enum netkit_action default_prim = NETKIT_PASS;
enum netkit_action default_peer = NETKIT_PASS;
enum netkit_mode mode = NETKIT_L3;
unsigned char ifname_assign_type;
struct ifinfomsg *ifmp = NULL;
struct net_device *peer;
char ifname[IFNAMSIZ];
struct netkit *nk;
struct net *net;
int err;
if (data) {
if (data[IFLA_NETKIT_MODE]) {
attr = data[IFLA_NETKIT_MODE];
mode = nla_get_u32(attr);
err = netkit_check_mode(mode, attr, extack);
if (err < 0)
return err;
}
if (data[IFLA_NETKIT_PEER_INFO]) {
attr = data[IFLA_NETKIT_PEER_INFO];
ifmp = nla_data(attr);
err = rtnl_nla_parse_ifinfomsg(peer_tb, attr, extack);
if (err < 0)
return err;
err = netkit_validate(peer_tb, NULL, extack);
if (err < 0)
return err;
tbp = peer_tb;
}
if (data[IFLA_NETKIT_POLICY]) {
attr = data[IFLA_NETKIT_POLICY];
default_prim = nla_get_u32(attr);
err = netkit_check_policy(default_prim, attr, extack);
if (err < 0)
return err;
}
if (data[IFLA_NETKIT_PEER_POLICY]) {
attr = data[IFLA_NETKIT_PEER_POLICY];
default_peer = nla_get_u32(attr);
err = netkit_check_policy(default_peer, attr, extack);
if (err < 0)
return err;
}
}
if (ifmp && tbp[IFLA_IFNAME]) {
nla_strscpy(ifname, tbp[IFLA_IFNAME], IFNAMSIZ);
ifname_assign_type = NET_NAME_USER;
} else {
strscpy(ifname, "nk%d", IFNAMSIZ);
ifname_assign_type = NET_NAME_ENUM;
}
net = rtnl_link_get_net(src_net, tbp);
if (IS_ERR(net))
return PTR_ERR(net);
peer = rtnl_create_link(net, ifname, ifname_assign_type,
&netkit_link_ops, tbp, extack);
if (IS_ERR(peer)) {
put_net(net);
return PTR_ERR(peer);
}
netif_inherit_tso_max(peer, dev);
if (mode == NETKIT_L2)
eth_hw_addr_random(peer);
if (ifmp && dev->ifindex)
peer->ifindex = ifmp->ifi_index;
nk = netkit_priv(peer);
nk->primary = false;
nk->policy = default_peer;
nk->mode = mode;
bpf_mprog_bundle_init(&nk->bundle);
err = register_netdevice(peer);
put_net(net);
if (err < 0)
goto err_register_peer;
netif_carrier_off(peer);
if (mode == NETKIT_L2)
dev_change_flags(peer, peer->flags & ~IFF_NOARP, NULL);
err = rtnl_configure_link(peer, NULL, 0, NULL);
if (err < 0)
goto err_configure_peer;
if (mode == NETKIT_L2)
eth_hw_addr_random(dev);
if (tb[IFLA_IFNAME])
nla_strscpy(dev->name, tb[IFLA_IFNAME], IFNAMSIZ);
else
strscpy(dev->name, "nk%d", IFNAMSIZ);
nk = netkit_priv(dev);
nk->primary = true;
nk->policy = default_prim;
nk->mode = mode;
bpf_mprog_bundle_init(&nk->bundle);
err = register_netdevice(dev);
if (err < 0)
goto err_configure_peer;
netif_carrier_off(dev);
if (mode == NETKIT_L2)
dev_change_flags(dev, dev->flags & ~IFF_NOARP, NULL);
rcu_assign_pointer(netkit_priv(dev)->peer, peer);
rcu_assign_pointer(netkit_priv(peer)->peer, dev);
return 0;
err_configure_peer:
unregister_netdevice(peer);
return err;
err_register_peer:
free_netdev(peer);
return err;
}
static struct bpf_mprog_entry *netkit_entry_fetch(struct net_device *dev,
bool bundle_fallback)
{
struct netkit *nk = netkit_priv(dev);
struct bpf_mprog_entry *entry;
ASSERT_RTNL();
entry = rcu_dereference_rtnl(nk->active);
if (entry)
return entry;
if (bundle_fallback)
return &nk->bundle.a;
return NULL;
}
static void netkit_entry_update(struct net_device *dev,
struct bpf_mprog_entry *entry)
{
struct netkit *nk = netkit_priv(dev);
ASSERT_RTNL();
rcu_assign_pointer(nk->active, entry);
}
static void netkit_entry_sync(void)
{
synchronize_rcu();
}
static struct net_device *netkit_dev_fetch(struct net *net, u32 ifindex, u32 which)
{
struct net_device *dev;
struct netkit *nk;
ASSERT_RTNL();
switch (which) {
case BPF_NETKIT_PRIMARY:
case BPF_NETKIT_PEER:
break;
default:
return ERR_PTR(-EINVAL);
}
dev = __dev_get_by_index(net, ifindex);
if (!dev)
return ERR_PTR(-ENODEV);
if (dev->netdev_ops != &netkit_netdev_ops)
return ERR_PTR(-ENXIO);
nk = netkit_priv(dev);
if (!nk->primary)
return ERR_PTR(-EACCES);
if (which == BPF_NETKIT_PEER) {
dev = rcu_dereference_rtnl(nk->peer);
if (!dev)
return ERR_PTR(-ENODEV);
}
return dev;
}
int netkit_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
{
struct bpf_mprog_entry *entry, *entry_new;
struct bpf_prog *replace_prog = NULL;
struct net_device *dev;
int ret;
rtnl_lock();
dev = netkit_dev_fetch(current->nsproxy->net_ns, attr->target_ifindex,
attr->attach_type);
if (IS_ERR(dev)) {
ret = PTR_ERR(dev);
goto out;
}
entry = netkit_entry_fetch(dev, true);
if (attr->attach_flags & BPF_F_REPLACE) {
replace_prog = bpf_prog_get_type(attr->replace_bpf_fd,
prog->type);
if (IS_ERR(replace_prog)) {
ret = PTR_ERR(replace_prog);
replace_prog = NULL;
goto out;
}
}
ret = bpf_mprog_attach(entry, &entry_new, prog, NULL, replace_prog,
attr->attach_flags, attr->relative_fd,
attr->expected_revision);
if (!ret) {
if (entry != entry_new) {
netkit_entry_update(dev, entry_new);
netkit_entry_sync();
}
bpf_mprog_commit(entry);
}
out:
if (replace_prog)
bpf_prog_put(replace_prog);
rtnl_unlock();
return ret;
}
int netkit_prog_detach(const union bpf_attr *attr, struct bpf_prog *prog)
{
struct bpf_mprog_entry *entry, *entry_new;
struct net_device *dev;
int ret;
rtnl_lock();
dev = netkit_dev_fetch(current->nsproxy->net_ns, attr->target_ifindex,
attr->attach_type);
if (IS_ERR(dev)) {
ret = PTR_ERR(dev);
goto out;
}
entry = netkit_entry_fetch(dev, false);
if (!entry) {
ret = -ENOENT;
goto out;
}
ret = bpf_mprog_detach(entry, &entry_new, prog, NULL, attr->attach_flags,
attr->relative_fd, attr->expected_revision);
if (!ret) {
if (!bpf_mprog_total(entry_new))
entry_new = NULL;
netkit_entry_update(dev, entry_new);
netkit_entry_sync();
bpf_mprog_commit(entry);
}
out:
rtnl_unlock();
return ret;
}
int netkit_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
{
struct net_device *dev;
int ret;
rtnl_lock();
dev = netkit_dev_fetch(current->nsproxy->net_ns,
attr->query.target_ifindex,
attr->query.attach_type);
if (IS_ERR(dev)) {
ret = PTR_ERR(dev);
goto out;
}
ret = bpf_mprog_query(attr, uattr, netkit_entry_fetch(dev, false));
out:
rtnl_unlock();
return ret;
}
static struct netkit_link *netkit_link(const struct bpf_link *link)
{
return container_of(link, struct netkit_link, link);
}
static int netkit_link_prog_attach(struct bpf_link *link, u32 flags,
u32 id_or_fd, u64 revision)
{
struct netkit_link *nkl = netkit_link(link);
struct bpf_mprog_entry *entry, *entry_new;
struct net_device *dev = nkl->dev;
int ret;
ASSERT_RTNL();
entry = netkit_entry_fetch(dev, true);
ret = bpf_mprog_attach(entry, &entry_new, link->prog, link, NULL, flags,
id_or_fd, revision);
if (!ret) {
if (entry != entry_new) {
netkit_entry_update(dev, entry_new);
netkit_entry_sync();
}
bpf_mprog_commit(entry);
}
return ret;
}
static void netkit_link_release(struct bpf_link *link)
{
struct netkit_link *nkl = netkit_link(link);
struct bpf_mprog_entry *entry, *entry_new;
struct net_device *dev;
int ret = 0;
rtnl_lock();
dev = nkl->dev;
if (!dev)
goto out;
entry = netkit_entry_fetch(dev, false);
if (!entry) {
ret = -ENOENT;
goto out;
}
ret = bpf_mprog_detach(entry, &entry_new, link->prog, link, 0, 0, 0);
if (!ret) {
if (!bpf_mprog_total(entry_new))
entry_new = NULL;
netkit_entry_update(dev, entry_new);
netkit_entry_sync();
bpf_mprog_commit(entry);
nkl->dev = NULL;
}
out:
WARN_ON_ONCE(ret);
rtnl_unlock();
}
static int netkit_link_update(struct bpf_link *link, struct bpf_prog *nprog,
struct bpf_prog *oprog)
{
struct netkit_link *nkl = netkit_link(link);
struct bpf_mprog_entry *entry, *entry_new;
struct net_device *dev;
int ret = 0;
rtnl_lock();
dev = nkl->dev;
if (!dev) {
ret = -ENOLINK;
goto out;
}
if (oprog && link->prog != oprog) {
ret = -EPERM;
goto out;
}
oprog = link->prog;
if (oprog == nprog) {
bpf_prog_put(nprog);
goto out;
}
entry = netkit_entry_fetch(dev, false);
if (!entry) {
ret = -ENOENT;
goto out;
}
ret = bpf_mprog_attach(entry, &entry_new, nprog, link, oprog,
BPF_F_REPLACE | BPF_F_ID,
link->prog->aux->id, 0);
if (!ret) {
WARN_ON_ONCE(entry != entry_new);
oprog = xchg(&link->prog, nprog);
bpf_prog_put(oprog);
bpf_mprog_commit(entry);
}
out:
rtnl_unlock();
return ret;
}
static void netkit_link_dealloc(struct bpf_link *link)
{
kfree(netkit_link(link));
}
static void netkit_link_fdinfo(const struct bpf_link *link, struct seq_file *seq)
{
const struct netkit_link *nkl = netkit_link(link);
u32 ifindex = 0;
rtnl_lock();
if (nkl->dev)
ifindex = nkl->dev->ifindex;
rtnl_unlock();
seq_printf(seq, "ifindex:\t%u\n", ifindex);
seq_printf(seq, "attach_type:\t%u (%s)\n",
nkl->location,
nkl->location == BPF_NETKIT_PRIMARY ? "primary" : "peer");
}
static int netkit_link_fill_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
const struct netkit_link *nkl = netkit_link(link);
u32 ifindex = 0;
rtnl_lock();
if (nkl->dev)
ifindex = nkl->dev->ifindex;
rtnl_unlock();
info->netkit.ifindex = ifindex;
info->netkit.attach_type = nkl->location;
return 0;
}
static int netkit_link_detach(struct bpf_link *link)
{
netkit_link_release(link);
return 0;
}
static const struct bpf_link_ops netkit_link_lops = {
.release = netkit_link_release,
.detach = netkit_link_detach,
.dealloc = netkit_link_dealloc,
.update_prog = netkit_link_update,
.show_fdinfo = netkit_link_fdinfo,
.fill_link_info = netkit_link_fill_info,
};
static int netkit_link_init(struct netkit_link *nkl,
struct bpf_link_primer *link_primer,
const union bpf_attr *attr,
struct net_device *dev,
struct bpf_prog *prog)
{
bpf_link_init(&nkl->link, BPF_LINK_TYPE_NETKIT,
&netkit_link_lops, prog);
nkl->location = attr->link_create.attach_type;
nkl->dev = dev;
return bpf_link_prime(&nkl->link, link_primer);
}
int netkit_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
{
struct bpf_link_primer link_primer;
struct netkit_link *nkl;
struct net_device *dev;
int ret;
rtnl_lock();
dev = netkit_dev_fetch(current->nsproxy->net_ns,
attr->link_create.target_ifindex,
attr->link_create.attach_type);
if (IS_ERR(dev)) {
ret = PTR_ERR(dev);
goto out;
}
nkl = kzalloc(sizeof(*nkl), GFP_KERNEL_ACCOUNT);
if (!nkl) {
ret = -ENOMEM;
goto out;
}
ret = netkit_link_init(nkl, &link_primer, attr, dev, prog);
if (ret) {
kfree(nkl);
goto out;
}
ret = netkit_link_prog_attach(&nkl->link,
attr->link_create.flags,
attr->link_create.netkit.relative_fd,
attr->link_create.netkit.expected_revision);
if (ret) {
nkl->dev = NULL;
bpf_link_cleanup(&link_primer);
goto out;
}
ret = bpf_link_settle(&link_primer);
out:
rtnl_unlock();
return ret;
}
static void netkit_release_all(struct net_device *dev)
{
struct bpf_mprog_entry *entry;
struct bpf_tuple tuple = {};
struct bpf_mprog_fp *fp;
struct bpf_mprog_cp *cp;
entry = netkit_entry_fetch(dev, false);
if (!entry)
return;
netkit_entry_update(dev, NULL);
netkit_entry_sync();
bpf_mprog_foreach_tuple(entry, fp, cp, tuple) {
if (tuple.link)
netkit_link(tuple.link)->dev = NULL;
else
bpf_prog_put(tuple.prog);
}
}
static void netkit_uninit(struct net_device *dev)
{
netkit_release_all(dev);
}
static void netkit_del_link(struct net_device *dev, struct list_head *head)
{
struct netkit *nk = netkit_priv(dev);
struct net_device *peer = rtnl_dereference(nk->peer);
RCU_INIT_POINTER(nk->peer, NULL);
unregister_netdevice_queue(dev, head);
if (peer) {
nk = netkit_priv(peer);
RCU_INIT_POINTER(nk->peer, NULL);
unregister_netdevice_queue(peer, head);
}
}
static int netkit_change_link(struct net_device *dev, struct nlattr *tb[],
struct nlattr *data[],
struct netlink_ext_ack *extack)
{
struct netkit *nk = netkit_priv(dev);
struct net_device *peer = rtnl_dereference(nk->peer);
enum netkit_action policy;
struct nlattr *attr;
int err;
if (!nk->primary) {
NL_SET_ERR_MSG(extack,
"netkit link settings can be changed only through the primary device");
return -EACCES;
}
if (data[IFLA_NETKIT_MODE]) {
NL_SET_ERR_MSG_ATTR(extack, data[IFLA_NETKIT_MODE],
"netkit link operating mode cannot be changed after device creation");
return -EACCES;
}
if (data[IFLA_NETKIT_POLICY]) {
attr = data[IFLA_NETKIT_POLICY];
policy = nla_get_u32(attr);
err = netkit_check_policy(policy, attr, extack);
if (err)
return err;
WRITE_ONCE(nk->policy, policy);
}
if (data[IFLA_NETKIT_PEER_POLICY]) {
err = -EOPNOTSUPP;
attr = data[IFLA_NETKIT_PEER_POLICY];
policy = nla_get_u32(attr);
if (peer)
err = netkit_check_policy(policy, attr, extack);
if (err)
return err;
nk = netkit_priv(peer);
WRITE_ONCE(nk->policy, policy);
}
return 0;
}
static size_t netkit_get_size(const struct net_device *dev)
{
return nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_POLICY */
nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_PEER_POLICY */
nla_total_size(sizeof(u8)) + /* IFLA_NETKIT_PRIMARY */
nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_MODE */
0;
}
static int netkit_fill_info(struct sk_buff *skb, const struct net_device *dev)
{
struct netkit *nk = netkit_priv(dev);
struct net_device *peer = rtnl_dereference(nk->peer);
if (nla_put_u8(skb, IFLA_NETKIT_PRIMARY, nk->primary))
return -EMSGSIZE;
if (nla_put_u32(skb, IFLA_NETKIT_POLICY, nk->policy))
return -EMSGSIZE;
if (nla_put_u32(skb, IFLA_NETKIT_MODE, nk->mode))
return -EMSGSIZE;
if (peer) {
nk = netkit_priv(peer);
if (nla_put_u32(skb, IFLA_NETKIT_PEER_POLICY, nk->policy))
return -EMSGSIZE;
}
return 0;
}
static const struct nla_policy netkit_policy[IFLA_NETKIT_MAX + 1] = {
[IFLA_NETKIT_PEER_INFO] = { .len = sizeof(struct ifinfomsg) },
[IFLA_NETKIT_POLICY] = { .type = NLA_U32 },
[IFLA_NETKIT_MODE] = { .type = NLA_U32 },
[IFLA_NETKIT_PEER_POLICY] = { .type = NLA_U32 },
[IFLA_NETKIT_PRIMARY] = { .type = NLA_REJECT,
.reject_message = "Primary attribute is read-only" },
};
static struct rtnl_link_ops netkit_link_ops = {
.kind = DRV_NAME,
.priv_size = sizeof(struct netkit),
.setup = netkit_setup,
.newlink = netkit_new_link,
.dellink = netkit_del_link,
.changelink = netkit_change_link,
.get_link_net = netkit_get_link_net,
.get_size = netkit_get_size,
.fill_info = netkit_fill_info,
.policy = netkit_policy,
.validate = netkit_validate,
.maxtype = IFLA_NETKIT_MAX,
};
static __init int netkit_init(void)
{
BUILD_BUG_ON((int)NETKIT_NEXT != (int)TCX_NEXT ||
(int)NETKIT_PASS != (int)TCX_PASS ||
(int)NETKIT_DROP != (int)TCX_DROP ||
(int)NETKIT_REDIRECT != (int)TCX_REDIRECT);
return rtnl_link_register(&netkit_link_ops);
}
static __exit void netkit_exit(void)
{
rtnl_link_unregister(&netkit_link_ops);
}
module_init(netkit_init);
module_exit(netkit_exit);
MODULE_DESCRIPTION("BPF-programmable network device");
MODULE_AUTHOR("Daniel Borkmann <daniel@iogearbox.net>");
MODULE_AUTHOR("Nikolay Aleksandrov <razor@blackwall.org>");
MODULE_LICENSE("GPL");
MODULE_ALIAS_RTNL_LINK(DRV_NAME);
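
As a side illustration (not part of the driver sources above), a minimal
userspace sketch of attaching a program to the primary side of a netkit
device through the link-based libbpf API added in this series
(bpf_program__attach_netkit). The device name nk0, the object file
netkit_prog.bpf.o and the program name nk_xmit are hypothetical, and the
netkit pair is assumed to have been created beforehand (e.g. through the
IFLA_NETKIT rtnetlink attributes further below):

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only; error handling kept minimal. Assumes a libbpf
 * and uapi headers new enough to carry the netkit additions from this series.
 */
#include <errno.h>
#include <stdio.h>
#include <net/if.h>
#include <bpf/libbpf.h>

int main(void)
{
	LIBBPF_OPTS(bpf_netkit_opts, optl);	/* relative_fd/expected_revision left at defaults */
	struct bpf_program *prog;
	struct bpf_object *obj;
	struct bpf_link *link;
	int ifindex;

	ifindex = if_nametoindex("nk0");	/* hypothetical primary netkit device */
	if (!ifindex)
		return 1;

	obj = bpf_object__open_file("netkit_prog.bpf.o", NULL);	/* hypothetical object */
	if (!obj)
		return 1;

	prog = bpf_object__find_program_by_name(obj, "nk_xmit");	/* hypothetical prog name */
	if (!prog)
		return 1;

	/* Target the primary side of the pair (BPF_NETKIT_PEER for the peer side).
	 * Must be set before load so the link is created with this attach type.
	 */
	bpf_program__set_expected_attach_type(prog, BPF_NETKIT_PRIMARY);

	if (bpf_object__load(obj))
		return 1;

	link = bpf_program__attach_netkit(prog, ifindex, &optl);
	if (!link) {
		fprintf(stderr, "attach failed: %d\n", -errno);
		return 1;
	}

	/* Pin so the attachment outlives this process, then release local refs. */
	bpf_link__pin(link, "/sys/fs/bpf/nk0_primary");
	bpf_link__destroy(link);
	bpf_object__close(obj);
	return 0;
}

The BPF side could be as small as the following sketch; NETKIT_PASS comes
from the uapi if_link.h additions shown further below, again assuming
up-to-date headers:

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <linux/if_link.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

/* Invoked for every skb the primary netkit device transmits. Returning
 * NETKIT_PASS hands the skb to the peer; NETKIT_DROP would discard it.
 */
SEC("tc")
int nk_xmit(struct __sk_buff *skb)
{
	return NETKIT_PASS;
}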


@ -2058,6 +2058,7 @@ struct btf_record *btf_record_dup(const struct btf_record *rec);
bool btf_record_equal(const struct btf_record *rec_a, const struct btf_record *rec_b);
void bpf_obj_free_timer(const struct btf_record *rec, void *obj);
void bpf_obj_free_fields(const struct btf_record *rec, void *obj);
void __bpf_obj_drop_impl(void *p, const struct btf_record *rec, bool percpu);
struct bpf_map *bpf_map_get(u32 ufd);
struct bpf_map *bpf_map_get_with_uref(u32 ufd);
@ -2478,6 +2479,9 @@ void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
enum bpf_dynptr_type type, u32 offset, u32 size);
void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr);
bool dev_check_flush(void);
bool cpu_map_check_flush(void);
#else /* !CONFIG_BPF_SYSCALL */
static inline struct bpf_prog *bpf_prog_get(u32 ufd)
{


@ -11,6 +11,7 @@ struct bpf_mem_caches;
struct bpf_mem_alloc {
struct bpf_mem_caches __percpu *caches;
struct bpf_mem_cache __percpu *cache;
bool percpu;
struct work_struct work;
};


@ -373,10 +373,25 @@ struct bpf_verifier_state {
struct bpf_active_lock active_lock;
bool speculative;
bool active_rcu_lock;
/* If this state was ever pointed-to by other state's loop_entry field
* this flag would be set to true. Used to avoid freeing such states
* while they are still in use.
*/
bool used_as_loop_entry;
/* first and last insn idx of this verifier state */
u32 first_insn_idx;
u32 last_insn_idx;
/* If this state is a part of states loop this field points to some
* parent of this state such that:
* - it is also a member of the same states loop;
* - DFS states traversal starting from initial state visits loop_entry
* state before this state.
* Used to compute topmost loop entry for state loops.
* State loops might appear because of open coded iterators logic.
* See get_loop_entry() for more information.
*/
struct bpf_verifier_state *loop_entry;
/* jmp history recorded from first to last.
* backtracking is using it to go from last to first.
* For most states jmp_history_cnt is [0-3].
@ -384,21 +399,21 @@ struct bpf_verifier_state {
*/
struct bpf_idx_pair *jmp_history;
u32 jmp_history_cnt;
u32 dfs_depth;
};
#define bpf_get_spilled_reg(slot, frame) \
#define bpf_get_spilled_reg(slot, frame, mask) \
(((slot < frame->allocated_stack / BPF_REG_SIZE) && \
(frame->stack[slot].slot_type[0] == STACK_SPILL)) \
((1 << frame->stack[slot].slot_type[0]) & (mask))) \
? &frame->stack[slot].spilled_ptr : NULL)
/* Iterate over 'frame', setting 'reg' to either NULL or a spilled register. */
#define bpf_for_each_spilled_reg(iter, frame, reg) \
for (iter = 0, reg = bpf_get_spilled_reg(iter, frame); \
#define bpf_for_each_spilled_reg(iter, frame, reg, mask) \
for (iter = 0, reg = bpf_get_spilled_reg(iter, frame, mask); \
iter < frame->allocated_stack / BPF_REG_SIZE; \
iter++, reg = bpf_get_spilled_reg(iter, frame))
iter++, reg = bpf_get_spilled_reg(iter, frame, mask))
/* Invoke __expr over registers in __vst, setting __state and __reg */
#define bpf_for_each_reg_in_vstate(__vst, __state, __reg, __expr) \
#define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __mask, __expr) \
({ \
struct bpf_verifier_state *___vstate = __vst; \
int ___i, ___j; \
@ -410,7 +425,7 @@ struct bpf_verifier_state {
__reg = &___regs[___j]; \
(void)(__expr); \
} \
bpf_for_each_spilled_reg(___j, __state, __reg) { \
bpf_for_each_spilled_reg(___j, __state, __reg, __mask) { \
if (!__reg) \
continue; \
(void)(__expr); \
@ -418,6 +433,10 @@ struct bpf_verifier_state {
} \
})
/* Invoke __expr over registers in __vst, setting __state and __reg */
#define bpf_for_each_reg_in_vstate(__vst, __state, __reg, __expr) \
bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, 1 << STACK_SPILL, __expr)
/* linked list of verifier states used to prune search */
struct bpf_verifier_state_list {
struct bpf_verifier_state state;


@ -74,6 +74,7 @@
#define KF_ITER_NEW (1 << 8) /* kfunc implements BPF iter constructor */
#define KF_ITER_NEXT (1 << 9) /* kfunc implements BPF iter next method */
#define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */
#define KF_RCU_PROTECTED (1 << 11) /* kfunc should be protected by rcu cs when they are invoked */
/*
* Tag marking a kernel function as a kfunc. This is meant to minimize the


@ -40,13 +40,11 @@ struct kernel_clone_args;
#define CGROUP_WEIGHT_DFL 100
#define CGROUP_WEIGHT_MAX 10000
/* walk only threadgroup leaders */
#define CSS_TASK_ITER_PROCS (1U << 0)
/* walk all threaded css_sets in the domain */
#define CSS_TASK_ITER_THREADED (1U << 1)
/* internal flags */
#define CSS_TASK_ITER_SKIPPED (1U << 16)
enum {
CSS_TASK_ITER_PROCS = (1U << 0), /* walk only threadgroup leaders */
CSS_TASK_ITER_THREADED = (1U << 1), /* walk all threaded css_sets in the domain */
CSS_TASK_ITER_SKIPPED = (1U << 16), /* internal flags */
};
/* a css_task_iter should be treated as an opaque object */
struct css_task_iter {


@ -132,6 +132,7 @@ extern void __init setup_per_cpu_areas(void);
extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1);
extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1);
extern void free_percpu(void __percpu *__pdata);
extern size_t pcpu_alloc_size(void __percpu *__pdata);
DEFINE_FREE(free_percpu, void __percpu *, free_percpu(_T))

include/net/netkit.h (new file, 38 lines)

@ -0,0 +1,38 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2023 Isovalent */
#ifndef __NET_NETKIT_H
#define __NET_NETKIT_H
#include <linux/bpf.h>
#ifdef CONFIG_NETKIT
int netkit_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int netkit_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int netkit_prog_detach(const union bpf_attr *attr, struct bpf_prog *prog);
int netkit_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr);
#else
static inline int netkit_prog_attach(const union bpf_attr *attr,
struct bpf_prog *prog)
{
return -EINVAL;
}
static inline int netkit_link_attach(const union bpf_attr *attr,
struct bpf_prog *prog)
{
return -EINVAL;
}
static inline int netkit_prog_detach(const union bpf_attr *attr,
struct bpf_prog *prog)
{
return -EINVAL;
}
static inline int netkit_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr)
{
return -EINVAL;
}
#endif /* CONFIG_NETKIT */
#endif /* __NET_NETKIT_H */


@ -38,16 +38,11 @@ static inline struct tcx_entry *tcx_entry(struct bpf_mprog_entry *entry)
return container_of(bundle, struct tcx_entry, bundle);
}
static inline struct tcx_link *tcx_link(struct bpf_link *link)
static inline struct tcx_link *tcx_link(const struct bpf_link *link)
{
return container_of(link, struct tcx_link, link);
}
static inline const struct tcx_link *tcx_link_const(const struct bpf_link *link)
{
return tcx_link((struct bpf_link *)link);
}
void tcx_inc(void);
void tcx_dec(void);


@ -63,6 +63,13 @@ struct xdp_sock {
struct xsk_queue *tx ____cacheline_aligned_in_smp;
struct list_head tx_list;
/* record the number of tx descriptors sent by this xsk and
* when it exceeds MAX_PER_SOCKET_BUDGET, an opportunity needs
* to be given to other xsks for sending tx descriptors, thereby
* preventing other XSKs from being starved.
*/
u32 tx_budget_spent;
/* Protects generic receive. */
spinlock_t rx_lock;
@ -109,4 +116,13 @@ static inline void __xsk_map_flush(void)
#endif /* CONFIG_XDP_SOCKETS */
#if defined(CONFIG_XDP_SOCKETS) && defined(CONFIG_DEBUG_NET)
bool xsk_map_check_flush(void);
#else
static inline bool xsk_map_check_flush(void)
{
return false;
}
#endif
#endif /* _LINUX_XDP_SOCK_H */


@ -1052,6 +1052,8 @@ enum bpf_attach_type {
BPF_CGROUP_UNIX_RECVMSG,
BPF_CGROUP_UNIX_GETPEERNAME,
BPF_CGROUP_UNIX_GETSOCKNAME,
BPF_NETKIT_PRIMARY,
BPF_NETKIT_PEER,
__MAX_BPF_ATTACH_TYPE
};
@ -1071,6 +1073,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_NETFILTER = 10,
BPF_LINK_TYPE_TCX = 11,
BPF_LINK_TYPE_UPROBE_MULTI = 12,
BPF_LINK_TYPE_NETKIT = 13,
MAX_BPF_LINK_TYPE,
};
@ -1656,6 +1659,13 @@ union bpf_attr {
__u32 flags;
__u32 pid;
} uprobe_multi;
struct {
union {
__u32 relative_fd;
__u32 relative_id;
};
__u64 expected_revision;
} netkit;
};
} link_create;
@ -6576,6 +6586,10 @@ struct bpf_link_info {
__u32 ifindex;
__u32 attach_type;
} tcx;
struct {
__u32 ifindex;
__u32 attach_type;
} netkit;
};
} __attribute__((aligned(8)));


@ -758,6 +758,30 @@ struct tunnel_msg {
__u32 ifindex;
};
/* netkit section */
enum netkit_action {
NETKIT_NEXT = -1,
NETKIT_PASS = 0,
NETKIT_DROP = 2,
NETKIT_REDIRECT = 7,
};
enum netkit_mode {
NETKIT_L2,
NETKIT_L3,
};
enum {
IFLA_NETKIT_UNSPEC,
IFLA_NETKIT_PEER_INFO,
IFLA_NETKIT_PRIMARY,
IFLA_NETKIT_POLICY,
IFLA_NETKIT_PEER_POLICY,
IFLA_NETKIT_MODE,
__IFLA_NETKIT_MAX,
};
#define IFLA_NETKIT_MAX (__IFLA_NETKIT_MAX - 1)
/* VXLAN section */
/* include statistics in the dump */


@ -294,3 +294,68 @@ static int __init bpf_cgroup_iter_init(void)
}
late_initcall(bpf_cgroup_iter_init);
struct bpf_iter_css {
__u64 __opaque[3];
} __attribute__((aligned(8)));
struct bpf_iter_css_kern {
struct cgroup_subsys_state *start;
struct cgroup_subsys_state *pos;
unsigned int flags;
} __attribute__((aligned(8)));
__diag_push();
__diag_ignore_all("-Wmissing-prototypes",
"Global functions as their definitions will be in vmlinux BTF");
__bpf_kfunc int bpf_iter_css_new(struct bpf_iter_css *it,
struct cgroup_subsys_state *start, unsigned int flags)
{
struct bpf_iter_css_kern *kit = (void *)it;
BUILD_BUG_ON(sizeof(struct bpf_iter_css_kern) > sizeof(struct bpf_iter_css));
BUILD_BUG_ON(__alignof__(struct bpf_iter_css_kern) != __alignof__(struct bpf_iter_css));
kit->start = NULL;
switch (flags) {
case BPF_CGROUP_ITER_DESCENDANTS_PRE:
case BPF_CGROUP_ITER_DESCENDANTS_POST:
case BPF_CGROUP_ITER_ANCESTORS_UP:
break;
default:
return -EINVAL;
}
kit->start = start;
kit->pos = NULL;
kit->flags = flags;
return 0;
}
__bpf_kfunc struct cgroup_subsys_state *bpf_iter_css_next(struct bpf_iter_css *it)
{
struct bpf_iter_css_kern *kit = (void *)it;
if (!kit->start)
return NULL;
switch (kit->flags) {
case BPF_CGROUP_ITER_DESCENDANTS_PRE:
kit->pos = css_next_descendant_pre(kit->pos, kit->start);
break;
case BPF_CGROUP_ITER_DESCENDANTS_POST:
kit->pos = css_next_descendant_post(kit->pos, kit->start);
break;
case BPF_CGROUP_ITER_ANCESTORS_UP:
kit->pos = kit->pos ? kit->pos->parent : kit->start;
}
return kit->pos;
}
__bpf_kfunc void bpf_iter_css_destroy(struct bpf_iter_css *it)
{
}
__diag_pop();
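
As a side illustration (not part of the kernel sources above), a sketch of a
BPF program using the new open-coded css iterator to walk a cgroup subtree,
modeled on the selftests of this series. The kfunc externs would normally come
from the selftests' bpf_experimental.h; root_cg_id, nr_descendants and the
vfs_read attach point are hypothetical. Since the iterator kfuncs are
RCU-protected, the walk is wrapped in bpf_rcu_read_lock()/bpf_rcu_read_unlock():

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfuncs used by this sketch; declarations mirror bpf_experimental.h */
extern struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
extern void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
extern void bpf_rcu_read_lock(void) __ksym;
extern void bpf_rcu_read_unlock(void) __ksym;
extern int bpf_iter_css_new(struct bpf_iter_css *it,
			    struct cgroup_subsys_state *start,
			    unsigned int flags) __weak __ksym;
extern struct cgroup_subsys_state *bpf_iter_css_next(struct bpf_iter_css *it) __weak __ksym;
extern void bpf_iter_css_destroy(struct bpf_iter_css *it) __weak __ksym;

char _license[] SEC("license") = "GPL";

u64 root_cg_id;		/* cgroup id, set by userspace before attaching */
u64 nr_descendants;	/* result read back by userspace */

SEC("fentry.s/vfs_read")	/* any sleepable hook works for this demo */
int BPF_PROG(count_css_descendants)
{
	struct cgroup_subsys_state *pos;
	struct bpf_iter_css it;
	struct cgroup *cgrp;

	cgrp = bpf_cgroup_from_id(root_cg_id);
	if (!cgrp)
		return 0;

	/* bpf_iter_css_* are KF_RCU_PROTECTED, so hold the RCU read lock */
	bpf_rcu_read_lock();
	bpf_iter_css_new(&it, &cgrp->self, BPF_CGROUP_ITER_DESCENDANTS_PRE);
	while ((pos = bpf_iter_css_next(&it)))
		nr_descendants++;	/* pre-order walk, root css included */
	bpf_iter_css_destroy(&it);
	bpf_rcu_read_unlock();

	bpf_cgroup_release(cgrp);
	return 0;
}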


@ -764,6 +764,16 @@ void __cpu_map_flush(void)
}
}
#ifdef CONFIG_DEBUG_NET
bool cpu_map_check_flush(void)
{
if (list_empty(this_cpu_ptr(&cpu_map_flush_list)))
return false;
__cpu_map_flush();
return true;
}
#endif
static int __init cpu_map_init(void)
{
int cpu;


@ -418,6 +418,16 @@ void __dev_flush(void)
}
}
#ifdef CONFIG_DEBUG_NET
bool dev_check_flush(void)
{
if (list_empty(this_cpu_ptr(&dev_flush_list)))
return false;
__dev_flush();
return true;
}
#endif
/* Elements are kept alive by RCU; either by rcu_read_lock() (from syscall) or
* by local_bh_disable() (from XDP calls inside NAPI). The
* rcu_read_lock_bh_held() below makes lockdep accept both.


@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
preempt_disable();
local_irq_save(flags);
if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
__this_cpu_dec(*(htab->map_locked[hash]));
local_irq_restore(flags);
preempt_enable();
return -EBUSY;
}
raw_spin_lock_irqsave(&b->raw_lock, flags);
raw_spin_lock(&b->raw_lock);
*pflags = flags;
return 0;
@ -172,8 +174,9 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
unsigned long flags)
{
hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
raw_spin_unlock_irqrestore(&b->raw_lock, flags);
raw_spin_unlock(&b->raw_lock);
__this_cpu_dec(*(htab->map_locked[hash]));
local_irq_restore(flags);
preempt_enable();
}


@ -1811,8 +1811,6 @@ bpf_base_func_proto(enum bpf_func_id func_id)
}
}
void __bpf_obj_drop_impl(void *p, const struct btf_record *rec);
void bpf_list_head_free(const struct btf_field *field, void *list_head,
struct bpf_spin_lock *spin_lock)
{
@ -1844,7 +1842,7 @@ unlock:
* bpf_list_head which needs to be freed.
*/
migrate_disable();
__bpf_obj_drop_impl(obj, field->graph_root.value_rec);
__bpf_obj_drop_impl(obj, field->graph_root.value_rec, false);
migrate_enable();
}
}
@ -1883,7 +1881,7 @@ void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
migrate_disable();
__bpf_obj_drop_impl(obj, field->graph_root.value_rec);
__bpf_obj_drop_impl(obj, field->graph_root.value_rec, false);
migrate_enable();
}
}
@ -1915,8 +1913,10 @@ __bpf_kfunc void *bpf_percpu_obj_new_impl(u64 local_type_id__k, void *meta__ign)
}
/* Must be called under migrate_disable(), as required by bpf_mem_free */
void __bpf_obj_drop_impl(void *p, const struct btf_record *rec)
void __bpf_obj_drop_impl(void *p, const struct btf_record *rec, bool percpu)
{
struct bpf_mem_alloc *ma;
if (rec && rec->refcount_off >= 0 &&
!refcount_dec_and_test((refcount_t *)(p + rec->refcount_off))) {
/* Object is refcounted and refcount_dec didn't result in 0
@ -1928,10 +1928,14 @@ void __bpf_obj_drop_impl(void *p, const struct btf_record *rec)
if (rec)
bpf_obj_free_fields(rec, p);
if (rec && rec->refcount_off >= 0)
bpf_mem_free_rcu(&bpf_global_ma, p);
if (percpu)
ma = &bpf_global_percpu_ma;
else
bpf_mem_free(&bpf_global_ma, p);
ma = &bpf_global_ma;
if (rec && rec->refcount_off >= 0)
bpf_mem_free_rcu(ma, p);
else
bpf_mem_free(ma, p);
}
__bpf_kfunc void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
@ -1939,7 +1943,7 @@ __bpf_kfunc void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
struct btf_struct_meta *meta = meta__ign;
void *p = p__alloc;
__bpf_obj_drop_impl(p, meta ? meta->record : NULL);
__bpf_obj_drop_impl(p, meta ? meta->record : NULL, false);
}
__bpf_kfunc void bpf_percpu_obj_drop_impl(void *p__alloc, void *meta__ign)
@ -1983,7 +1987,7 @@ static int __bpf_list_add(struct bpf_list_node_kern *node,
*/
if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON)) {
/* Only called from BPF prog, no need to migrate_disable */
__bpf_obj_drop_impl((void *)n - off, rec);
__bpf_obj_drop_impl((void *)n - off, rec, false);
return -EINVAL;
}
@ -2082,7 +2086,7 @@ static int __bpf_rbtree_add(struct bpf_rb_root *root,
*/
if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON)) {
/* Only called from BPF prog, no need to migrate_disable */
__bpf_obj_drop_impl((void *)n - off, rec);
__bpf_obj_drop_impl((void *)n - off, rec, false);
return -EINVAL;
}
@ -2215,7 +2219,12 @@ __bpf_kfunc struct cgroup *bpf_cgroup_from_id(u64 cgid)
__bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task,
struct cgroup *ancestor)
{
return task_under_cgroup_hierarchy(task, ancestor);
long ret;
rcu_read_lock();
ret = task_under_cgroup_hierarchy(task, ancestor);
rcu_read_unlock();
return ret;
}
#endif /* CONFIG_CGROUPS */
@ -2555,6 +2564,15 @@ BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_iter_task_vma_new, KF_ITER_NEW | KF_RCU)
BTF_ID_FLAGS(func, bpf_iter_task_vma_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_task_vma_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_iter_css_task_new, KF_ITER_NEW | KF_TRUSTED_ARGS)
BTF_ID_FLAGS(func, bpf_iter_css_task_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_css_task_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_iter_task_new, KF_ITER_NEW | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
BTF_ID_FLAGS(func, bpf_iter_task_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_task_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_iter_css_new, KF_ITER_NEW | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
BTF_ID_FLAGS(func, bpf_iter_css_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_css_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_dynptr_adjust)
BTF_ID_FLAGS(func, bpf_dynptr_is_null)
BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)


@ -340,6 +340,7 @@ static void free_bulk(struct bpf_mem_cache *c)
int cnt;
WARN_ON_ONCE(tgt->unit_size != c->unit_size);
WARN_ON_ONCE(tgt->percpu_size != c->percpu_size);
do {
inc_active(c, &flags);
@ -365,6 +366,9 @@ static void __free_by_rcu(struct rcu_head *head)
struct bpf_mem_cache *tgt = c->tgt;
struct llist_node *llnode;
WARN_ON_ONCE(tgt->unit_size != c->unit_size);
WARN_ON_ONCE(tgt->percpu_size != c->percpu_size);
llnode = llist_del_all(&c->waiting_for_gp);
if (!llnode)
goto out;
@ -491,21 +495,17 @@ static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
struct llist_node *first;
unsigned int obj_size;
/* For per-cpu allocator, the size of free objects in free list doesn't
* match with unit_size and now there is no way to get the size of
* per-cpu pointer saved in free object, so just skip the checking.
*/
if (c->percpu_size)
return 0;
first = c->free_llist.first;
if (!first)
return 0;
obj_size = ksize(first);
if (c->percpu_size)
obj_size = pcpu_alloc_size(((void **)first)[1]);
else
obj_size = ksize(first);
if (obj_size != c->unit_size) {
WARN_ONCE(1, "bpf_mem_cache[%u]: unexpected object size %u, expect %u\n",
idx, obj_size, c->unit_size);
WARN_ONCE(1, "bpf_mem_cache[%u]: percpu %d, unexpected object size %u, expect %u\n",
idx, c->percpu_size, obj_size, c->unit_size);
return -EINVAL;
}
return 0;
@ -529,6 +529,7 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
/* room for llist_node and per-cpu pointer */
if (percpu)
percpu_size = LLIST_NODE_SZ + sizeof(void *);
ma->percpu = percpu;
if (size) {
pc = __alloc_percpu_gfp(sizeof(*pc), 8, GFP_KERNEL);
@ -878,6 +879,17 @@ void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
return !ret ? NULL : ret + LLIST_NODE_SZ;
}
static notrace int bpf_mem_free_idx(void *ptr, bool percpu)
{
size_t size;
if (percpu)
size = pcpu_alloc_size(*((void **)ptr));
else
size = ksize(ptr - LLIST_NODE_SZ);
return bpf_mem_cache_idx(size);
}
void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
{
int idx;
@ -885,7 +897,7 @@ void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
if (!ptr)
return;
idx = bpf_mem_cache_idx(ksize(ptr - LLIST_NODE_SZ));
idx = bpf_mem_free_idx(ptr, ma->percpu);
if (idx < 0)
return;
@ -899,7 +911,7 @@ void notrace bpf_mem_free_rcu(struct bpf_mem_alloc *ma, void *ptr)
if (!ptr)
return;
idx = bpf_mem_cache_idx(ksize(ptr - LLIST_NODE_SZ));
idx = bpf_mem_free_idx(ptr, ma->percpu);
if (idx < 0)
return;
@ -973,6 +985,12 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
return !ret ? NULL : ret + LLIST_NODE_SZ;
}
/* The alignment of dynamic per-cpu area is 8, so c->unit_size and the
* actual size of dynamic per-cpu area will always match and there is
* no need to adjust size_index for per-cpu allocation. However, for the
* simplicity of the implementation, use a unified size_index for both
* kmalloc and per-cpu allocation.
*/
static __init int bpf_mem_cache_adjust_size(void)
{
unsigned int size;


@ -770,8 +770,7 @@ schedule_work_return:
/* Prevent the clearing of the busy-bit from being reordered before the
* storing of any rb consumer or producer positions.
*/
smp_mb__before_atomic();
atomic_set(&rb->busy, 0);
atomic_set_release(&rb->busy, 0);
if (flags & BPF_RB_FORCE_WAKEUP)
irq_work_queue(&rb->work);


@ -35,8 +35,9 @@
#include <linux/rcupdate_trace.h>
#include <linux/memcontrol.h>
#include <linux/trace_events.h>
#include <net/netfilter/nf_bpf_link.h>
#include <net/netfilter/nf_bpf_link.h>
#include <net/netkit.h>
#include <net/tcx.h>
#define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
@ -626,8 +627,6 @@ void bpf_obj_free_timer(const struct btf_record *rec, void *obj)
bpf_timer_cancel_and_free(obj + rec->timer_off);
}
extern void __bpf_obj_drop_impl(void *p, const struct btf_record *rec);
void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
{
const struct btf_field *fields;
@ -662,8 +661,8 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
field->kptr.btf_id);
migrate_disable();
__bpf_obj_drop_impl(xchgd_field, pointee_struct_meta ?
pointee_struct_meta->record :
NULL);
pointee_struct_meta->record : NULL,
fields[i].type == BPF_KPTR_PERCPU);
migrate_enable();
} else {
field->kptr.dtor(xchgd_field);
@ -3732,6 +3731,8 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
return BPF_PROG_TYPE_LSM;
case BPF_TCX_INGRESS:
case BPF_TCX_EGRESS:
case BPF_NETKIT_PRIMARY:
case BPF_NETKIT_PEER:
return BPF_PROG_TYPE_SCHED_CLS;
default:
return BPF_PROG_TYPE_UNSPEC;
@ -3783,7 +3784,9 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
return 0;
case BPF_PROG_TYPE_SCHED_CLS:
if (attach_type != BPF_TCX_INGRESS &&
attach_type != BPF_TCX_EGRESS)
attach_type != BPF_TCX_EGRESS &&
attach_type != BPF_NETKIT_PRIMARY &&
attach_type != BPF_NETKIT_PEER)
return -EINVAL;
return 0;
default:
@ -3866,7 +3869,11 @@ static int bpf_prog_attach(const union bpf_attr *attr)
ret = cgroup_bpf_prog_attach(attr, ptype, prog);
break;
case BPF_PROG_TYPE_SCHED_CLS:
ret = tcx_prog_attach(attr, prog);
if (attr->attach_type == BPF_TCX_INGRESS ||
attr->attach_type == BPF_TCX_EGRESS)
ret = tcx_prog_attach(attr, prog);
else
ret = netkit_prog_attach(attr, prog);
break;
default:
ret = -EINVAL;
@ -3927,7 +3934,11 @@ static int bpf_prog_detach(const union bpf_attr *attr)
ret = cgroup_bpf_prog_detach(attr, ptype);
break;
case BPF_PROG_TYPE_SCHED_CLS:
ret = tcx_prog_detach(attr, prog);
if (attr->attach_type == BPF_TCX_INGRESS ||
attr->attach_type == BPF_TCX_EGRESS)
ret = tcx_prog_detach(attr, prog);
else
ret = netkit_prog_detach(attr, prog);
break;
default:
ret = -EINVAL;
@ -3994,6 +4005,9 @@ static int bpf_prog_query(const union bpf_attr *attr,
case BPF_TCX_INGRESS:
case BPF_TCX_EGRESS:
return tcx_prog_query(attr, uattr);
case BPF_NETKIT_PRIMARY:
case BPF_NETKIT_PEER:
return netkit_prog_query(attr, uattr);
default:
return -EINVAL;
}
@ -4975,7 +4989,11 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr)
ret = bpf_xdp_link_attach(attr, prog);
break;
case BPF_PROG_TYPE_SCHED_CLS:
ret = tcx_link_attach(attr, prog);
if (attr->link_create.attach_type == BPF_TCX_INGRESS ||
attr->link_create.attach_type == BPF_TCX_EGRESS)
ret = tcx_link_attach(attr, prog);
else
ret = netkit_link_attach(attr, prog);
break;
case BPF_PROG_TYPE_NETFILTER:
ret = bpf_nf_link_attach(attr, prog);


@ -894,6 +894,157 @@ __bpf_kfunc void bpf_iter_task_vma_destroy(struct bpf_iter_task_vma *it)
__diag_pop();
struct bpf_iter_css_task {
__u64 __opaque[1];
} __attribute__((aligned(8)));
struct bpf_iter_css_task_kern {
struct css_task_iter *css_it;
} __attribute__((aligned(8)));
__diag_push();
__diag_ignore_all("-Wmissing-prototypes",
"Global functions as their definitions will be in vmlinux BTF");
__bpf_kfunc int bpf_iter_css_task_new(struct bpf_iter_css_task *it,
struct cgroup_subsys_state *css, unsigned int flags)
{
struct bpf_iter_css_task_kern *kit = (void *)it;
BUILD_BUG_ON(sizeof(struct bpf_iter_css_task_kern) != sizeof(struct bpf_iter_css_task));
BUILD_BUG_ON(__alignof__(struct bpf_iter_css_task_kern) !=
__alignof__(struct bpf_iter_css_task));
kit->css_it = NULL;
switch (flags) {
case CSS_TASK_ITER_PROCS | CSS_TASK_ITER_THREADED:
case CSS_TASK_ITER_PROCS:
case 0:
break;
default:
return -EINVAL;
}
kit->css_it = bpf_mem_alloc(&bpf_global_ma, sizeof(struct css_task_iter));
if (!kit->css_it)
return -ENOMEM;
css_task_iter_start(css, flags, kit->css_it);
return 0;
}
__bpf_kfunc struct task_struct *bpf_iter_css_task_next(struct bpf_iter_css_task *it)
{
struct bpf_iter_css_task_kern *kit = (void *)it;
if (!kit->css_it)
return NULL;
return css_task_iter_next(kit->css_it);
}
__bpf_kfunc void bpf_iter_css_task_destroy(struct bpf_iter_css_task *it)
{
struct bpf_iter_css_task_kern *kit = (void *)it;
if (!kit->css_it)
return;
css_task_iter_end(kit->css_it);
bpf_mem_free(&bpf_global_ma, kit->css_it);
}
__diag_pop();
struct bpf_iter_task {
__u64 __opaque[3];
} __attribute__((aligned(8)));
struct bpf_iter_task_kern {
struct task_struct *task;
struct task_struct *pos;
unsigned int flags;
} __attribute__((aligned(8)));
enum {
/* all processes in the system */
BPF_TASK_ITER_ALL_PROCS,
/* all threads in the system */
BPF_TASK_ITER_ALL_THREADS,
/* all threads of a specific process */
BPF_TASK_ITER_PROC_THREADS
};
__diag_push();
__diag_ignore_all("-Wmissing-prototypes",
"Global functions as their definitions will be in vmlinux BTF");
__bpf_kfunc int bpf_iter_task_new(struct bpf_iter_task *it,
struct task_struct *task__nullable, unsigned int flags)
{
struct bpf_iter_task_kern *kit = (void *)it;
BUILD_BUG_ON(sizeof(struct bpf_iter_task_kern) > sizeof(struct bpf_iter_task));
BUILD_BUG_ON(__alignof__(struct bpf_iter_task_kern) !=
__alignof__(struct bpf_iter_task));
kit->task = kit->pos = NULL;
switch (flags) {
case BPF_TASK_ITER_ALL_THREADS:
case BPF_TASK_ITER_ALL_PROCS:
break;
case BPF_TASK_ITER_PROC_THREADS:
if (!task__nullable)
return -EINVAL;
break;
default:
return -EINVAL;
}
if (flags == BPF_TASK_ITER_PROC_THREADS)
kit->task = task__nullable;
else
kit->task = &init_task;
kit->pos = kit->task;
kit->flags = flags;
return 0;
}
__bpf_kfunc struct task_struct *bpf_iter_task_next(struct bpf_iter_task *it)
{
struct bpf_iter_task_kern *kit = (void *)it;
struct task_struct *pos;
unsigned int flags;
flags = kit->flags;
pos = kit->pos;
if (!pos)
return pos;
if (flags == BPF_TASK_ITER_ALL_PROCS)
goto get_next_task;
kit->pos = next_thread(kit->pos);
if (kit->pos == kit->task) {
if (flags == BPF_TASK_ITER_PROC_THREADS) {
kit->pos = NULL;
return pos;
}
} else
return pos;
get_next_task:
kit->pos = next_task(kit->pos);
kit->task = kit->pos;
if (kit->pos == &init_task)
kit->pos = NULL;
return pos;
}
__bpf_kfunc void bpf_iter_task_destroy(struct bpf_iter_task *it)
{
}
__diag_pop();
DEFINE_PER_CPU(struct mmap_unlock_irq_work, mmap_unlock_work);
static void do_mmap_read_unlock(struct irq_work *entry)
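
As a side illustration (not part of the kernel sources above), a corresponding
sketch for the new open-coded task iterator, counting the threads of the
current process. Declarations again mirror the selftests' bpf_experimental.h;
nr_threads and the vfs_write attach point are hypothetical, and
BPF_TASK_ITER_PROC_THREADS is assumed to be visible via vmlinux.h:

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern void bpf_rcu_read_lock(void) __ksym;
extern void bpf_rcu_read_unlock(void) __ksym;
extern int bpf_iter_task_new(struct bpf_iter_task *it,
			     struct task_struct *task, unsigned int flags) __weak __ksym;
extern struct task_struct *bpf_iter_task_next(struct bpf_iter_task *it) __weak __ksym;
extern void bpf_iter_task_destroy(struct bpf_iter_task *it) __weak __ksym;

char _license[] SEC("license") = "GPL";

u64 nr_threads;		/* result read back by userspace */

SEC("fentry.s/vfs_write")	/* hypothetical sleepable attach point */
int BPF_PROG(count_threads)
{
	struct task_struct *task = bpf_get_current_task_btf();
	struct task_struct *pos;
	struct bpf_iter_task it;

	/* task iterator kfuncs are KF_RCU_PROTECTED as well */
	bpf_rcu_read_lock();
	bpf_iter_task_new(&it, task, BPF_TASK_ITER_PROC_THREADS);
	while ((pos = bpf_iter_task_next(&it)))
		nr_threads++;	/* yields the group leader and every thread */
	bpf_iter_task_destroy(&it);
	bpf_rcu_read_unlock();
	return 0;
}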


@ -250,7 +250,7 @@ static void tcx_link_dealloc(struct bpf_link *link)
static void tcx_link_fdinfo(const struct bpf_link *link, struct seq_file *seq)
{
const struct tcx_link *tcx = tcx_link_const(link);
const struct tcx_link *tcx = tcx_link(link);
u32 ifindex = 0;
rtnl_lock();
@ -267,7 +267,7 @@ static void tcx_link_fdinfo(const struct bpf_link *link, struct seq_file *seq)
static int tcx_link_fill_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
const struct tcx_link *tcx = tcx_link_const(link);
const struct tcx_link *tcx = tcx_link(link);
u32 ifindex = 0;
rtnl_lock();


@ -1173,7 +1173,12 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg
static void __mark_reg_known_zero(struct bpf_reg_state *reg);
static bool in_rcu_cs(struct bpf_verifier_env *env);
static bool is_kfunc_rcu_protected(struct bpf_kfunc_call_arg_meta *meta);
static int mark_stack_slots_iter(struct bpf_verifier_env *env,
struct bpf_kfunc_call_arg_meta *meta,
struct bpf_reg_state *reg, int insn_idx,
struct btf *btf, u32 btf_id, int nr_slots)
{
@ -1194,6 +1199,12 @@ static int mark_stack_slots_iter(struct bpf_verifier_env *env,
__mark_reg_known_zero(st);
st->type = PTR_TO_STACK; /* we don't have dedicated reg type */
if (is_kfunc_rcu_protected(meta)) {
if (in_rcu_cs(env))
st->type |= MEM_RCU;
else
st->type |= PTR_UNTRUSTED;
}
st->live |= REG_LIVE_WRITTEN;
st->ref_obj_id = i == 0 ? id : 0;
st->iter.btf = btf;
@ -1268,7 +1279,7 @@ static bool is_iter_reg_valid_uninit(struct bpf_verifier_env *env,
return true;
}
static bool is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
static int is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
struct btf *btf, u32 btf_id, int nr_slots)
{
struct bpf_func_state *state = func(env, reg);
@ -1276,26 +1287,28 @@ static bool is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_
spi = iter_get_spi(env, reg, nr_slots);
if (spi < 0)
return false;
return -EINVAL;
for (i = 0; i < nr_slots; i++) {
struct bpf_stack_state *slot = &state->stack[spi - i];
struct bpf_reg_state *st = &slot->spilled_ptr;
if (st->type & PTR_UNTRUSTED)
return -EPROTO;
/* only main (first) slot has ref_obj_id set */
if (i == 0 && !st->ref_obj_id)
return false;
return -EINVAL;
if (i != 0 && st->ref_obj_id)
return false;
return -EINVAL;
if (st->iter.btf != btf || st->iter.btf_id != btf_id)
return false;
return -EINVAL;
for (j = 0; j < BPF_REG_SIZE; j++)
if (slot->slot_type[j] != STACK_ITER)
return false;
return -EINVAL;
}
return true;
return 0;
}
/* Check if given stack slot is "special":
@ -1789,6 +1802,8 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
dst_state->parent = src->parent;
dst_state->first_insn_idx = src->first_insn_idx;
dst_state->last_insn_idx = src->last_insn_idx;
dst_state->dfs_depth = src->dfs_depth;
dst_state->used_as_loop_entry = src->used_as_loop_entry;
for (i = 0; i <= src->curframe; i++) {
dst = dst_state->frame[i];
if (!dst) {
@ -1804,11 +1819,203 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
return 0;
}
static u32 state_htab_size(struct bpf_verifier_env *env)
{
return env->prog->len;
}
static struct bpf_verifier_state_list **explored_state(struct bpf_verifier_env *env, int idx)
{
struct bpf_verifier_state *cur = env->cur_state;
struct bpf_func_state *state = cur->frame[cur->curframe];
return &env->explored_states[(idx ^ state->callsite) % state_htab_size(env)];
}
static bool same_callsites(struct bpf_verifier_state *a, struct bpf_verifier_state *b)
{
int fr;
if (a->curframe != b->curframe)
return false;
for (fr = a->curframe; fr >= 0; fr--)
if (a->frame[fr]->callsite != b->frame[fr]->callsite)
return false;
return true;
}
/* Open coded iterators allow back-edges in the state graph in order to
* check unbounded loops that use iterators.
*
* In is_state_visited() it is necessary to know if explored states are
* part of some loops in order to decide whether non-exact states
* comparison could be used:
* - non-exact states comparison establishes sub-state relation and uses
* read and precision marks to do so, these marks are propagated from
* children states and thus are not guaranteed to be final in a loop;
* - exact states comparison just checks if current and explored states
* are identical (and thus form a back-edge).
*
* Paper "A New Algorithm for Identifying Loops in Decompilation"
* by Tao Wei, Jian Mao, Wei Zou and Yu Chen [1] presents a convenient
* algorithm for loop structure detection and gives an overview of
* relevant terminology. It also has helpful illustrations.
*
* [1] https://api.semanticscholar.org/CorpusID:15784067
*
* We use a similar algorithm, but because loop nesting structure is
* irrelevant for the verifier ours is significantly simpler and resembles
* the strongly connected components algorithm from Sedgewick's textbook.
*
* Define the topmost loop entry as the first node of the loop traversed in a
* depth first search starting from the initial state. The goal of the loop
* tracking algorithm is to associate topmost loop entries with states
* derived from these entries.
*
* At each step of the DFS state traversal the algorithm needs to identify
* the following situations:
*
* initial initial initial
* | | |
* V V V
* ... ... .---------> hdr
* | | | |
* V V | V
* cur .-> succ | .------...
* | | | | | |
* V | V | V V
* succ '-- cur | ... ...
* | | |
* | V V
* | succ <- cur
* | |
* | V
* | ...
* | |
* '----'
*
* (A) successor state of cur (B) successor state of cur or its entry
* not yet traversed are in current DFS path, thus cur and succ
* are members of the same outermost loop
*
* initial initial
* | |
* V V
* ... ...
* | |
* V V
* .------... .------...
* | | | |
* V V V V
* .-> hdr ... ... ...
* | | | | |
* | V V V V
* | succ <- cur succ <- cur
* | | |
* | V V
* | ... ...
* | | |
* '----' exit
*
* (C) successor state of cur is a part of some loop but this loop
* does not include cur or successor state is not in a loop at all.
*
* Algorithm could be described as the following python code:
*
* traversed = set() # Set of traversed nodes
* entries = {} # Mapping from node to loop entry
* depths = {} # Depth level assigned to graph node
* path = set() # Current DFS path
*
* # Find outermost loop entry known for n
* def get_loop_entry(n):
* h = entries.get(n, None)
* while h in entries and entries[h] != h:
* h = entries[h]
* return h
*
* # Update n's loop entry if h's outermost entry comes
* # before n's outermost entry in current DFS path.
* def update_loop_entry(n, h):
* n1 = get_loop_entry(n) or n
* h1 = get_loop_entry(h) or h
* if h1 in path and depths[h1] <= depths[n1]:
* entries[n] = h1
*
* def dfs(n, depth):
* traversed.add(n)
* path.add(n)
* depths[n] = depth
* for succ in G.successors(n):
* if succ not in traversed:
* # Case A: explore succ and update cur's loop entry
* # only if succ's entry is in current DFS path.
* dfs(succ, depth + 1)
* h = get_loop_entry(succ)
* update_loop_entry(n, h)
* else:
* # Case B or C depending on `h1 in path` check in update_loop_entry().
* update_loop_entry(n, succ)
* path.remove(n)
*
* To adapt this algorithm for use with verifier:
* - use st->branch == 0 as a signal that DFS of succ had been finished
* and cur's loop entry has to be updated (case A), handle this in
* update_branch_counts();
* - use st->branch > 0 as a signal that st is in the current DFS path;
* - handle cases B and C in is_state_visited();
* - update topmost loop entry for intermediate states in get_loop_entry().
*/
static struct bpf_verifier_state *get_loop_entry(struct bpf_verifier_state *st)
{
struct bpf_verifier_state *topmost = st->loop_entry, *old;
while (topmost && topmost->loop_entry && topmost != topmost->loop_entry)
topmost = topmost->loop_entry;
/* Update loop entries for intermediate states to avoid this
* traversal in future get_loop_entry() calls.
*/
while (st && st->loop_entry != topmost) {
old = st->loop_entry;
st->loop_entry = topmost;
st = old;
}
return topmost;
}
static void update_loop_entry(struct bpf_verifier_state *cur, struct bpf_verifier_state *hdr)
{
struct bpf_verifier_state *cur1, *hdr1;
cur1 = get_loop_entry(cur) ?: cur;
hdr1 = get_loop_entry(hdr) ?: hdr;
/* The hdr1->branches check decides between cases B and C in
* comment for get_loop_entry(). If hdr1->branches == 0 then
* hdr's topmost loop entry is not in current DFS path,
* hence 'cur' and 'hdr' are not in the same loop and there is
* no need to update cur->loop_entry.
*/
if (hdr1->branches && hdr1->dfs_depth <= cur1->dfs_depth) {
cur->loop_entry = hdr;
hdr->used_as_loop_entry = true;
}
}
static void update_branch_counts(struct bpf_verifier_env *env, struct bpf_verifier_state *st)
{
while (st) {
u32 br = --st->branches;
/* br == 0 signals that DFS exploration for 'st' is finished,
* thus it is necessary to update parent's loop entry if it
* turned out that st is a part of some loop.
* This is a part of 'case A' in get_loop_entry() comment.
*/
if (br == 0 && st->parent && st->loop_entry)
update_loop_entry(st->parent, st->loop_entry);
/* WARN_ON(br > 1) technically makes sense here,
* but see comment in push_stack(), hence:
*/
@ -7640,15 +7847,24 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
return err;
}
err = mark_stack_slots_iter(env, reg, insn_idx, meta->btf, btf_id, nr_slots);
err = mark_stack_slots_iter(env, meta, reg, insn_idx, meta->btf, btf_id, nr_slots);
if (err)
return err;
} else {
/* iter_next() or iter_destroy() expect initialized iter state */
if (!is_iter_reg_valid_init(env, reg, meta->btf, btf_id, nr_slots)) {
err = is_iter_reg_valid_init(env, reg, meta->btf, btf_id, nr_slots);
switch (err) {
case 0:
break;
case -EINVAL:
verbose(env, "expected an initialized iter_%s as arg #%d\n",
iter_type_str(meta->btf, btf_id), regno);
return -EINVAL;
return err;
case -EPROTO:
verbose(env, "expected an RCU CS when using %s\n", meta->func_name);
return err;
default:
return err;
}
spi = iter_get_spi(env, reg, nr_slots);
@ -7674,6 +7890,81 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
return 0;
}
/* Look for a previous loop entry at insn_idx: nearest parent state
* stopped at insn_idx with callsites matching those in cur->frame.
*/
static struct bpf_verifier_state *find_prev_entry(struct bpf_verifier_env *env,
struct bpf_verifier_state *cur,
int insn_idx)
{
struct bpf_verifier_state_list *sl;
struct bpf_verifier_state *st;
/* Explored states are pushed in stack order, most recent states come first */
sl = *explored_state(env, insn_idx);
for (; sl; sl = sl->next) {
/* If st->branches != 0, the state is a part of the current DFS verification path,
* hence cur & st form a loop.
*/
st = &sl->state;
if (st->insn_idx == insn_idx && st->branches && same_callsites(st, cur) &&
st->dfs_depth < cur->dfs_depth)
return st;
}
return NULL;
}
static void reset_idmap_scratch(struct bpf_verifier_env *env);
static bool regs_exact(const struct bpf_reg_state *rold,
const struct bpf_reg_state *rcur,
struct bpf_idmap *idmap);
static void maybe_widen_reg(struct bpf_verifier_env *env,
struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
struct bpf_idmap *idmap)
{
if (rold->type != SCALAR_VALUE)
return;
if (rold->type != rcur->type)
return;
if (rold->precise || rcur->precise || regs_exact(rold, rcur, idmap))
return;
__mark_reg_unknown(env, rcur);
}
static int widen_imprecise_scalars(struct bpf_verifier_env *env,
struct bpf_verifier_state *old,
struct bpf_verifier_state *cur)
{
struct bpf_func_state *fold, *fcur;
int i, fr;
reset_idmap_scratch(env);
for (fr = old->curframe; fr >= 0; fr--) {
fold = old->frame[fr];
fcur = cur->frame[fr];
for (i = 0; i < MAX_BPF_REG; i++)
maybe_widen_reg(env,
&fold->regs[i],
&fcur->regs[i],
&env->idmap_scratch);
for (i = 0; i < fold->allocated_stack / BPF_REG_SIZE; i++) {
if (!is_spilled_reg(&fold->stack[i]) ||
!is_spilled_reg(&fcur->stack[i]))
continue;
maybe_widen_reg(env,
&fold->stack[i].spilled_ptr,
&fcur->stack[i].spilled_ptr,
&env->idmap_scratch);
}
}
return 0;
}
/* process_iter_next_call() is called when verifier gets to iterator's next
* "method" (e.g., bpf_iter_num_next() for numbers iterator) call. We'll refer
* to it as just "iter_next()" in comments below.
@ -7715,25 +8006,47 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
* is some statically known limit on number of iterations (e.g., if there is
* an explicit `if n > 100 then break;` statement somewhere in the loop).
*
* One very subtle but very important aspect is that we *always* simulate NULL
* condition first (as the current state) before we simulate non-NULL case.
* This has to do with intricacies of scalar precision tracking. By simulating
* "exit condition" of iter_next() returning NULL first, we make sure all the
* relevant precision marks *that will be set **after** we exit iterator loop*
* are propagated backwards to common parent state of NULL and non-NULL
* branches. Thanks to that, state equivalence checks done later in forked
* state, when reaching iter_next() for ACTIVE iterator, can assume that
* precision marks are finalized and won't change. Because simulating another
* ACTIVE iterator iteration won't change them (because given same input
* states we'll end up with exactly same output states which we are currently
* comparing; and verification after the loop already propagated back what
* needs to be **additionally** tracked as precise). It's subtle, grok
* precision tracking for more intuitive understanding.
* Iteration convergence logic in is_state_visited() relies on exact
* states comparison, which ignores read and precision marks.
* This is necessary because read and precision marks are not finalized
* while in the loop. Exact comparison might preclude convergence for
* simple programs like below:
*
* i = 0;
* while(iter_next(&it))
* i++;
*
* At each iteration step i++ would produce a new distinct state and
* eventually instruction processing limit would be reached.
*
* To avoid such behavior, speculatively forget (widen) the range for
* imprecise scalar registers, if those registers were not precise at the
* end of the previous iteration and do not match exactly.
*
* This is a conservative heuristic that allows verification of a wide range
* of programs, however it precludes verification of programs that conjure an
* imprecise value on the first loop iteration and use it as precise on the second.
* For example, the following safe program would fail to verify:
*
* struct bpf_num_iter it;
* int arr[10];
* int i = 0, a = 0;
* bpf_iter_num_new(&it, 0, 10);
* while (bpf_iter_num_next(&it)) {
* if (a == 0) {
* a = 1;
* i = 7; // Because i changed, the verifier would forget
* // its range on the second loop entry.
* } else {
* arr[i] = 42; // This would fail to verify.
* }
* }
* bpf_iter_num_destroy(&it);
*/
static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
struct bpf_kfunc_call_arg_meta *meta)
{
struct bpf_verifier_state *cur_st = env->cur_state, *queued_st;
struct bpf_verifier_state *cur_st = env->cur_state, *queued_st, *prev_st;
struct bpf_func_state *cur_fr = cur_st->frame[cur_st->curframe], *queued_fr;
struct bpf_reg_state *cur_iter, *queued_iter;
int iter_frameno = meta->iter.frameno;
@ -7751,6 +8064,19 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
}
if (cur_iter->iter.state == BPF_ITER_STATE_ACTIVE) {
/* Because iter_next() call is a checkpoint, is_state_visited()
* should guarantee parent state with same call sites and insn_idx.
*/
if (!cur_st->parent || cur_st->parent->insn_idx != insn_idx ||
!same_callsites(cur_st->parent, cur_st)) {
verbose(env, "bug: bad parent state for iter next call");
return -EFAULT;
}
/* Note cur_st->parent in the call below, it is necessary to skip
* checkpoint created for cur_st by is_state_visited()
* right at this instruction.
*/
prev_st = find_prev_entry(env, cur_st->parent, insn_idx);
/* branch out active iter state */
queued_st = push_stack(env, insn_idx + 1, insn_idx, false);
if (!queued_st)
@ -7759,6 +8085,8 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
queued_iter = &queued_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
queued_iter->iter.state = BPF_ITER_STATE_ACTIVE;
queued_iter->iter.depth++;
if (prev_st)
widen_imprecise_scalars(env, prev_st, queued_st);
queued_fr = queued_st->frame[queued_st->curframe];
mark_ptr_not_null_reg(&queued_fr->regs[BPF_REG_0]);
@ -10231,6 +10559,11 @@ static bool is_kfunc_rcu(struct bpf_kfunc_call_arg_meta *meta)
return meta->kfunc_flags & KF_RCU;
}
static bool is_kfunc_rcu_protected(struct bpf_kfunc_call_arg_meta *meta)
{
return meta->kfunc_flags & KF_RCU_PROTECTED;
}
static bool __kfunc_param_match_suffix(const struct btf *btf,
const struct btf_param *arg,
const char *suffix)
@ -10305,6 +10638,11 @@ static bool is_kfunc_arg_refcounted_kptr(const struct btf *btf, const struct btf
return __kfunc_param_match_suffix(btf, arg, "__refcounted_kptr");
}
static bool is_kfunc_arg_nullable(const struct btf *btf, const struct btf_param *arg)
{
return __kfunc_param_match_suffix(btf, arg, "__nullable");
}
static bool is_kfunc_arg_scalar_with_name(const struct btf *btf,
const struct btf_param *arg,
const char *name)
@ -10447,6 +10785,7 @@ enum kfunc_ptr_arg_type {
KF_ARG_PTR_TO_CALLBACK,
KF_ARG_PTR_TO_RB_ROOT,
KF_ARG_PTR_TO_RB_NODE,
KF_ARG_PTR_TO_NULL,
};
enum special_kfunc_type {
@ -10472,6 +10811,7 @@ enum special_kfunc_type {
KF_bpf_percpu_obj_new_impl,
KF_bpf_percpu_obj_drop_impl,
KF_bpf_throw,
KF_bpf_iter_css_task_new,
};
BTF_SET_START(special_kfunc_set)
@ -10495,6 +10835,7 @@ BTF_ID(func, bpf_dynptr_clone)
BTF_ID(func, bpf_percpu_obj_new_impl)
BTF_ID(func, bpf_percpu_obj_drop_impl)
BTF_ID(func, bpf_throw)
BTF_ID(func, bpf_iter_css_task_new)
BTF_SET_END(special_kfunc_set)
BTF_ID_LIST(special_kfunc_list)
@ -10520,6 +10861,7 @@ BTF_ID(func, bpf_dynptr_clone)
BTF_ID(func, bpf_percpu_obj_new_impl)
BTF_ID(func, bpf_percpu_obj_drop_impl)
BTF_ID(func, bpf_throw)
BTF_ID(func, bpf_iter_css_task_new)
static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
{
@ -10600,6 +10942,8 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
if (is_kfunc_arg_callback(env, meta->btf, &args[argno]))
return KF_ARG_PTR_TO_CALLBACK;
if (is_kfunc_arg_nullable(meta->btf, &args[argno]) && register_is_null(reg))
return KF_ARG_PTR_TO_NULL;
if (argno + 1 < nargs &&
(is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1]) ||
@ -11050,6 +11394,20 @@ static int process_kf_arg_ptr_to_rbtree_node(struct bpf_verifier_env *env,
&meta->arg_rbtree_root.field);
}
static bool check_css_task_iter_allowlist(struct bpf_verifier_env *env)
{
enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
switch (prog_type) {
case BPF_PROG_TYPE_LSM:
return true;
case BPF_PROG_TYPE_TRACING:
if (env->prog->expected_attach_type != BPF_TRACE_ITER)
return false;
return env->prog->aux->sleepable;
default:
return false;
}
}
static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta,
int insn_idx)
{
@ -11136,7 +11494,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) &&
(register_is_null(reg) || type_may_be_null(reg->type))) {
(register_is_null(reg) || type_may_be_null(reg->type)) &&
!is_kfunc_arg_nullable(meta->btf, &args[i])) {
verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
return -EACCES;
}
@ -11161,6 +11520,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return kf_arg_type;
switch (kf_arg_type) {
case KF_ARG_PTR_TO_NULL:
continue;
case KF_ARG_PTR_TO_ALLOC_BTF_ID:
case KF_ARG_PTR_TO_BTF_ID:
if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta))
@ -11300,6 +11661,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
break;
}
case KF_ARG_PTR_TO_ITER:
if (meta->func_id == special_kfunc_list[KF_bpf_iter_css_task_new]) {
if (!check_css_task_iter_allowlist(env)) {
verbose(env, "css_task_iter is only allowed in bpf_lsm and bpf iter-s\n");
return -EINVAL;
}
}
ret = process_iter_arg(env, regno, insn_idx, meta);
if (ret < 0)
return ret;
@ -11559,6 +11926,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (env->cur_state->active_rcu_lock) {
struct bpf_func_state *state;
struct bpf_reg_state *reg;
u32 clear_mask = (1 << STACK_SPILL) | (1 << STACK_ITER);
if (in_rbtree_lock_required_cb(env) && (rcu_lock || rcu_unlock)) {
verbose(env, "Calling bpf_rcu_read_{lock,unlock} in unnecessary rbtree callback\n");
@ -11569,7 +11937,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
verbose(env, "nested rcu read lock (kernel function %s)\n", func_name);
return -EINVAL;
} else if (rcu_unlock) {
bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
bpf_for_each_reg_in_vstate_mask(env->cur_state, state, reg, clear_mask, ({
if (reg->type & MEM_RCU) {
reg->type &= ~(MEM_RCU | PTR_MAYBE_NULL);
reg->type |= PTR_UNTRUSTED;
@ -13663,12 +14031,16 @@ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
return !!tnum_equals_const(subreg, val);
else if (val < reg->u32_min_value || val > reg->u32_max_value)
return 0;
else if (sval < reg->s32_min_value || sval > reg->s32_max_value)
return 0;
break;
case BPF_JNE:
if (tnum_is_const(subreg))
return !tnum_equals_const(subreg, val);
else if (val < reg->u32_min_value || val > reg->u32_max_value)
return 1;
else if (sval < reg->s32_min_value || sval > reg->s32_max_value)
return 1;
break;
case BPF_JSET:
if ((~subreg.mask & subreg.value) & val)
@ -13740,12 +14112,16 @@ static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
return !!tnum_equals_const(reg->var_off, val);
else if (val < reg->umin_value || val > reg->umax_value)
return 0;
else if (sval < reg->smin_value || sval > reg->smax_value)
return 0;
break;
case BPF_JNE:
if (tnum_is_const(reg->var_off))
return !tnum_equals_const(reg->var_off, val);
else if (val < reg->umin_value || val > reg->umax_value)
return 1;
else if (sval < reg->smin_value || sval > reg->smax_value)
return 1;
break;
case BPF_JSET:
if ((~reg->var_off.mask & reg->var_off.value) & val)
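For context, the added signed-bounds checks let the verifier decide a JEQ/JNE branch even when the unsigned range alone cannot. A standalone userspace sketch of that reasoning (hypothetical register bounds, not part of this patch):

#include <stdint.h>
#include <stdio.h>

struct bounds { int64_t smin, smax; uint64_t umin, umax; };

/* Mirrors the JEQ handling above: 0 means "branch never taken",
 * -1 means "unknown". The second test is the newly added signed check. */
static int jeq_branch_taken(const struct bounds *r, uint64_t val)
{
	int64_t sval = (int64_t)val;

	if (val < r->umin || val > r->umax)
		return 0;
	if (sval < r->smin || sval > r->smax)
		return 0;
	return -1;
}

int main(void)
{
	/* A register known to hold only 0 or -1: its unsigned range
	 * degenerates to [0, U64_MAX], so only the signed range can
	 * prove that "if (r == 5)" is never taken. */
	struct bounds r = { .smin = -1, .smax = 0, .umin = 0, .umax = UINT64_MAX };

	printf("%d\n", jeq_branch_taken(&r, 5)); /* prints 0 */
	return 0;
}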
@ -14958,21 +15334,6 @@ enum {
BRANCH = 2,
};
static u32 state_htab_size(struct bpf_verifier_env *env)
{
return env->prog->len;
}
static struct bpf_verifier_state_list **explored_state(
struct bpf_verifier_env *env,
int idx)
{
struct bpf_verifier_state *cur = env->cur_state;
struct bpf_func_state *state = cur->frame[cur->curframe];
return &env->explored_states[(idx ^ state->callsite) % state_htab_size(env)];
}
static void mark_prune_point(struct bpf_verifier_env *env, int idx)
{
env->insn_aux_data[idx].prune_point = true;
@ -15849,18 +16210,14 @@ static void clean_live_states(struct bpf_verifier_env *env, int insn,
struct bpf_verifier_state *cur)
{
struct bpf_verifier_state_list *sl;
int i;
sl = *explored_state(env, insn);
while (sl) {
if (sl->state.branches)
goto next;
if (sl->state.insn_idx != insn ||
sl->state.curframe != cur->curframe)
!same_callsites(&sl->state, cur))
goto next;
for (i = 0; i <= cur->curframe; i++)
if (sl->state.frame[i]->callsite != cur->frame[i]->callsite)
goto next;
clean_verifier_state(env, &sl->state);
next:
sl = sl->next;
@ -15878,8 +16235,11 @@ static bool regs_exact(const struct bpf_reg_state *rold,
/* Returns true if (rold safe implies rcur safe) */
static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
struct bpf_reg_state *rcur, struct bpf_idmap *idmap, bool exact)
{
if (exact)
return regs_exact(rold, rcur, idmap);
if (!(rold->live & REG_LIVE_READ))
/* explored state didn't use this */
return true;
@ -15996,7 +16356,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
}
static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
struct bpf_func_state *cur, struct bpf_idmap *idmap)
struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
{
int i, spi;
@ -16009,7 +16369,12 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
spi = i / BPF_REG_SIZE;
if (!(old->stack[spi].spilled_ptr.live & REG_LIVE_READ)) {
if (exact &&
old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
cur->stack[spi].slot_type[i % BPF_REG_SIZE])
return false;
if (!(old->stack[spi].spilled_ptr.live & REG_LIVE_READ) && !exact) {
i += BPF_REG_SIZE - 1;
/* explored state didn't use this */
continue;
@ -16059,7 +16424,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
* return false to continue verification of this path
*/
if (!regsafe(env, &old->stack[spi].spilled_ptr,
&cur->stack[spi].spilled_ptr, idmap))
&cur->stack[spi].spilled_ptr, idmap, exact))
return false;
break;
case STACK_DYNPTR:
@ -16141,16 +16506,16 @@ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
* the current state will reach 'bpf_exit' instruction safely
*/
static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_state *old,
struct bpf_func_state *cur)
struct bpf_func_state *cur, bool exact)
{
int i;
for (i = 0; i < MAX_BPF_REG; i++)
if (!regsafe(env, &old->regs[i], &cur->regs[i],
&env->idmap_scratch))
&env->idmap_scratch, exact))
return false;
if (!stacksafe(env, old, cur, &env->idmap_scratch))
if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
return false;
if (!refsafe(old, cur, &env->idmap_scratch))
@ -16159,17 +16524,23 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
return true;
}
static void reset_idmap_scratch(struct bpf_verifier_env *env)
{
env->idmap_scratch.tmp_id_gen = env->id_gen;
memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch.map));
}
static bool states_equal(struct bpf_verifier_env *env,
struct bpf_verifier_state *old,
struct bpf_verifier_state *cur)
struct bpf_verifier_state *cur,
bool exact)
{
int i;
if (old->curframe != cur->curframe)
return false;
env->idmap_scratch.tmp_id_gen = env->id_gen;
memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch.map));
reset_idmap_scratch(env);
/* Verification state from speculative execution simulation
* must never prune a non-speculative execution one.
@ -16199,7 +16570,7 @@ static bool states_equal(struct bpf_verifier_env *env,
for (i = 0; i <= old->curframe; i++) {
if (old->frame[i]->callsite != cur->frame[i]->callsite)
return false;
if (!func_states_equal(env, old->frame[i], cur->frame[i]))
if (!func_states_equal(env, old->frame[i], cur->frame[i], exact))
return false;
}
return true;
@ -16453,10 +16824,11 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
{
struct bpf_verifier_state_list *new_sl;
struct bpf_verifier_state_list *sl, **pprev;
struct bpf_verifier_state *cur = env->cur_state, *new;
int i, j, err, states_cnt = 0;
struct bpf_verifier_state *cur = env->cur_state, *new, *loop_entry;
int i, j, n, err, states_cnt = 0;
bool force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx);
bool add_new_state = force_new_state;
bool force_exact;
/* bpf progs typically have pruning point every 4 instructions
* http://vger.kernel.org/bpfconf2019.html#session-1
@ -16509,9 +16881,33 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
* It's safe to assume that iterator loop will finish, taking into
* account iter_next() contract of eventually returning
* sticky NULL result.
*
* Note that states have to be compared exactly in this case because
* read and precision marks might not be finalized inside the loop.
* E.g. as in the program below:
*
* 1. r7 = -16
* 2. r6 = bpf_get_prandom_u32()
* 3. while (bpf_iter_num_next(&fp[-8])) {
* 4. if (r6 != 42) {
* 5. r7 = -32
* 6. r6 = bpf_get_prandom_u32()
* 7. continue
* 8. }
* 9. r0 = r10
* 10. r0 += r7
* 11. r8 = *(u64 *)(r0 + 0)
* 12. r6 = bpf_get_prandom_u32()
* 13. }
*
* Here verifier would first visit path 1-3, create a checkpoint at 3
* with r7=-16, continue to 4-7,3. Existing checkpoint at 3 does
* not have read or precision mark for r7 yet, thus inexact states
* comparison would discard current state with r7=-32
* => unsafe memory access at 11 would not be caught.
*/
if (is_iter_next_insn(env, insn_idx)) {
if (states_equal(env, &sl->state, cur)) {
if (states_equal(env, &sl->state, cur, true)) {
struct bpf_func_state *cur_frame;
struct bpf_reg_state *iter_state, *iter_reg;
int spi;
@ -16527,17 +16923,23 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
*/
spi = __get_spi(iter_reg->off + iter_reg->var_off.value);
iter_state = &func(env, iter_reg)->stack[spi].spilled_ptr;
if (iter_state->iter.state == BPF_ITER_STATE_ACTIVE)
if (iter_state->iter.state == BPF_ITER_STATE_ACTIVE) {
update_loop_entry(cur, &sl->state);
goto hit;
}
}
goto skip_inf_loop_check;
}
/* attempt to detect infinite loop to avoid unnecessary doomed work */
if (states_maybe_looping(&sl->state, cur) &&
states_equal(env, &sl->state, cur) &&
states_equal(env, &sl->state, cur, false) &&
!iter_active_depths_differ(&sl->state, cur)) {
verbose_linfo(env, insn_idx, "; ");
verbose(env, "infinite loop detected at insn %d\n", insn_idx);
verbose(env, "cur state:");
print_verifier_state(env, cur->frame[cur->curframe], true);
verbose(env, "old state:");
print_verifier_state(env, sl->state.frame[cur->curframe], true);
return -EINVAL;
}
/* if the verifier is processing a loop, avoid adding new state
@ -16559,7 +16961,36 @@ skip_inf_loop_check:
add_new_state = false;
goto miss;
}
if (states_equal(env, &sl->state, cur)) {
/* If sl->state is a part of a loop and this loop's entry is a part of
* current verification path then states have to be compared exactly.
* 'force_exact' is needed to catch the following case:
*
* initial Here state 'succ' was processed first,
* | it was eventually tracked to produce a
* V state identical to 'hdr'.
* .---------> hdr All branches from 'succ' had been explored
* | | and thus 'succ' has its .branches == 0.
* | V
* | .------... Suppose states 'cur' and 'succ' correspond
* | | | to the same instruction + callsites.
* | V V In such case it is necessary to check
* | ... ... if 'succ' and 'cur' are states_equal().
* | | | If 'succ' and 'cur' are a part of the
* | V V same loop exact flag has to be set.
* | succ <- cur To check if that is the case, verify
* | | if loop entry of 'succ' is in current
* | V DFS path.
* | ...
* | |
* '----'
*
* Additional details are in the comment before get_loop_entry().
*/
loop_entry = get_loop_entry(&sl->state);
force_exact = loop_entry && loop_entry->branches > 0;
if (states_equal(env, &sl->state, cur, force_exact)) {
if (force_exact)
update_loop_entry(cur, loop_entry);
hit:
sl->hit_cnt++;
/* reached equivalent register/stack state,
@ -16598,13 +17029,18 @@ miss:
* to keep checking from state equivalence point of view.
* Higher numbers increase max_states_per_insn and verification time,
* but do not meaningfully decrease insn_processed.
* 'n' controls how many times state could miss before eviction.
* Use bigger 'n' for checkpoints because evicting checkpoint states
* too early would hinder iterator convergence.
*/
if (sl->miss_cnt > sl->hit_cnt * 3 + 3) {
n = is_force_checkpoint(env, insn_idx) && sl->state.branches > 0 ? 64 : 3;
if (sl->miss_cnt > sl->hit_cnt * n + n) {
/* the state is unlikely to be useful. Remove it to
* speed up verification
*/
*pprev = sl->next;
if (sl->state.frame[0]->regs[0].live & REG_LIVE_DONE) {
if (sl->state.frame[0]->regs[0].live & REG_LIVE_DONE &&
!sl->state.used_as_loop_entry) {
u32 br = sl->state.branches;
WARN_ONCE(br,
@ -16673,6 +17109,7 @@ next:
cur->parent = new;
cur->first_insn_idx = insn_idx;
cur->dfs_depth = new->dfs_depth + 1;
clear_jmp_history(cur);
new_sl->next = *explored_state(env, insn_idx);
*explored_state(env, insn_idx) = new_sl;

View File

@ -4917,9 +4917,11 @@ repeat:
void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
struct css_task_iter *it)
{
unsigned long irqflags;
memset(it, 0, sizeof(*it));
spin_lock_irq(&css_set_lock);
spin_lock_irqsave(&css_set_lock, irqflags);
it->ss = css->ss;
it->flags = flags;
@ -4933,7 +4935,7 @@ void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
css_task_iter_advance(it);
spin_unlock_irq(&css_set_lock);
spin_unlock_irqrestore(&css_set_lock, irqflags);
}
/**
@ -4946,12 +4948,14 @@ void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
*/
struct task_struct *css_task_iter_next(struct css_task_iter *it)
{
unsigned long irqflags;
if (it->cur_task) {
put_task_struct(it->cur_task);
it->cur_task = NULL;
}
spin_lock_irq(&css_set_lock);
spin_lock_irqsave(&css_set_lock, irqflags);
/* @it may be half-advanced by skips, finish advancing */
if (it->flags & CSS_TASK_ITER_SKIPPED)
@ -4964,7 +4968,7 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it)
css_task_iter_advance(it);
}
spin_unlock_irq(&css_set_lock);
spin_unlock_irqrestore(&css_set_lock, irqflags);
return it->cur_task;
}
@ -4977,11 +4981,13 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it)
*/
void css_task_iter_end(struct css_task_iter *it)
{
unsigned long irqflags;
if (it->cur_cset) {
spin_lock_irq(&css_set_lock);
spin_lock_irqsave(&css_set_lock, irqflags);
list_del(&it->iters_node);
put_css_set_locked(it->cur_cset);
spin_unlock_irq(&css_set_lock);
spin_unlock_irqrestore(&css_set_lock, irqflags);
}
if (it->cur_dcset)

View File

@ -2244,6 +2244,37 @@ static void pcpu_balance_workfn(struct work_struct *work)
mutex_unlock(&pcpu_alloc_mutex);
}
/**
* pcpu_alloc_size - the size of the dynamic percpu area
* @ptr: pointer to the dynamic percpu area
*
* Returns the size of the @ptr allocation. This is undefined for statically
* defined percpu variables as there is no corresponding chunk->bound_map.
*
* RETURNS:
* The size of the dynamic percpu area.
*
* CONTEXT:
* Can be called from atomic context.
*/
size_t pcpu_alloc_size(void __percpu *ptr)
{
struct pcpu_chunk *chunk;
unsigned long bit_off, end;
void *addr;
if (!ptr)
return 0;
addr = __pcpu_ptr_to_addr(ptr);
/* No pcpu_lock here: ptr has not been freed, so chunk is still alive */
chunk = pcpu_chunk_addr_search(addr);
bit_off = (addr - chunk->base_addr) / PCPU_MIN_ALLOC_SIZE;
end = find_next_bit(chunk->bound_map, pcpu_chunk_map_bits(chunk),
bit_off + 1);
return (end - bit_off) * PCPU_MIN_ALLOC_SIZE;
}
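For illustration only (not taken from this patch), a kernel-side caller could pair the new helper with the regular dynamic percpu API; the sizes are arbitrary:

	/* Hypothetical caller sketch. */
	void __percpu *p = __alloc_percpu(64, 8);

	if (p) {
		/* At least the 64 bytes requested above, rounded up to the
		 * chunk's allocation granularity. */
		size_t sz = pcpu_alloc_size(p);

		pr_debug("dynamic percpu area spans %zu bytes\n", sz);
		free_percpu(p);
	}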
/**
* free_percpu - free percpu area
* @ptr: pointer to area to free
@ -2267,12 +2298,10 @@ void free_percpu(void __percpu *ptr)
kmemleak_free_percpu(ptr);
addr = __pcpu_ptr_to_addr(ptr);
spin_lock_irqsave(&pcpu_lock, flags);
chunk = pcpu_chunk_addr_search(addr);
off = addr - chunk->base_addr;
spin_lock_irqsave(&pcpu_lock, flags);
size = pcpu_free_area(chunk, off);
pcpu_memcg_free_hook(chunk, off, size);

View File

@ -6530,6 +6530,8 @@ static int __napi_poll(struct napi_struct *n, bool *repoll)
if (napi_is_scheduled(n)) {
work = n->poll(n, weight);
trace_napi_poll(n, work, weight);
xdp_do_check_flushed(n);
}
if (unlikely(work > weight))

View File

@ -139,4 +139,10 @@ static inline void netif_set_gro_ipv4_max_size(struct net_device *dev,
}
int rps_cpumask_housekeeping(struct cpumask *mask);
#if defined(CONFIG_DEBUG_NET) && defined(CONFIG_BPF_SYSCALL)
void xdp_do_check_flushed(struct napi_struct *napi);
#else
static inline void xdp_do_check_flushed(struct napi_struct *napi) { }
#endif
#endif

View File

@ -83,6 +83,8 @@
#include <net/netfilter/nf_conntrack_bpf.h>
#include <linux/un.h>
#include "dev.h"
static const struct bpf_func_proto *
bpf_sk_base_func_proto(enum bpf_func_id func_id);
@ -4208,6 +4210,20 @@ void xdp_do_flush(void)
}
EXPORT_SYMBOL_GPL(xdp_do_flush);
#if defined(CONFIG_DEBUG_NET) && defined(CONFIG_BPF_SYSCALL)
void xdp_do_check_flushed(struct napi_struct *napi)
{
bool ret;
ret = dev_check_flush();
ret |= cpu_map_check_flush();
ret |= xsk_map_check_flush();
WARN_ONCE(ret, "Missing xdp_do_flush() invocation after NAPI by %ps\n",
napi->poll);
}
#endif
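To illustrate what the check above enforces, a hypothetical driver poll routine (not from this patch; mydrv_clean_rx() is a made-up helper that may call xdp_do_redirect() internally):

static int mydrv_napi_poll(struct napi_struct *napi, int budget)
{
	int done = mydrv_clean_rx(napi, budget);

	/* Flush pending XDP redirects before leaving the callback;
	 * otherwise __napi_poll() now warns under CONFIG_DEBUG_NET
	 * via xdp_do_check_flushed(). */
	xdp_do_flush();

	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}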
void bpf_clear_redirect_map(struct bpf_map *map)
{
struct bpf_redirect_info *ri;

View File

@ -33,6 +33,7 @@
#include "xsk.h"
#define TX_BATCH_SIZE 32
#define MAX_PER_SOCKET_BUDGET (TX_BATCH_SIZE)
static DEFINE_PER_CPU(struct list_head, xskmap_flush_list);
@ -391,6 +392,16 @@ void __xsk_map_flush(void)
}
}
#ifdef CONFIG_DEBUG_NET
bool xsk_map_check_flush(void)
{
if (list_empty(this_cpu_ptr(&xskmap_flush_list)))
return false;
__xsk_map_flush();
return true;
}
#endif
void xsk_tx_completed(struct xsk_buff_pool *pool, u32 nb_entries)
{
xskq_prod_submit_n(pool->cq, nb_entries);
@ -413,16 +424,25 @@ EXPORT_SYMBOL(xsk_tx_release);
bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
{
bool budget_exhausted = false;
struct xdp_sock *xs;
rcu_read_lock();
again:
list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
if (xs->tx_budget_spent >= MAX_PER_SOCKET_BUDGET) {
budget_exhausted = true;
continue;
}
if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
if (xskq_has_descs(xs->tx))
xskq_cons_release(xs->tx);
continue;
}
xs->tx_budget_spent++;
/* This is the backpressure mechanism for the Tx path.
* Reserve space in the completion queue and only proceed
* if there is space in it. This avoids having to implement
@ -436,6 +456,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
return true;
}
if (budget_exhausted) {
list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list)
xs->tx_budget_spent = 0;
budget_exhausted = false;
goto again;
}
out:
rcu_read_unlock();
return false;

View File

@ -150,6 +150,9 @@ always-y += ibumad_kern.o
always-y += hbm_out_kern.o
always-y += hbm_edt_kern.o
TPROGS_CFLAGS = $(TPROGS_USER_CFLAGS)
TPROGS_LDFLAGS = $(TPROGS_USER_LDFLAGS)
ifeq ($(ARCH), arm)
# Strip all except -D__LINUX_ARM_ARCH__ option needed to handle linux
# headers when arm instruction set identification is requested.
@ -251,14 +254,15 @@ clean:
$(LIBBPF): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OUTPUT)
# Fix up variables inherited from Kbuild that tools/ build system won't like
$(MAKE) -C $(LIBBPF_SRC) RM='rm -rf' EXTRA_CFLAGS="$(TPROGS_CFLAGS)" \
LDFLAGS=$(TPROGS_LDFLAGS) srctree=$(BPF_SAMPLES_PATH)/../../ \
LDFLAGS="$(TPROGS_LDFLAGS)" srctree=$(BPF_SAMPLES_PATH)/../../ \
O= OUTPUT=$(LIBBPF_OUTPUT)/ DESTDIR=$(LIBBPF_DESTDIR) prefix= \
$@ install_headers
BPFTOOLDIR := $(TOOLS_PATH)/bpf/bpftool
BPFTOOL_OUTPUT := $(abspath $(BPF_SAMPLES_PATH))/bpftool
BPFTOOL := $(BPFTOOL_OUTPUT)/bootstrap/bpftool
$(BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) | $(BPFTOOL_OUTPUT)
DEFAULT_BPFTOOL := $(BPFTOOL_OUTPUT)/bootstrap/bpftool
BPFTOOL ?= $(DEFAULT_BPFTOOL)
$(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) | $(BPFTOOL_OUTPUT)
$(MAKE) -C $(BPFTOOLDIR) srctree=$(BPF_SAMPLES_PATH)/../../ \
OUTPUT=$(BPFTOOL_OUTPUT)/ bootstrap
@ -316,7 +320,7 @@ XDP_SAMPLE_CFLAGS += -Wall -O2 \
-I$(LIBBPF_INCLUDE) \
-I$(src)/../../tools/testing/selftests/bpf
$(obj)/$(XDP_SAMPLE): TPROGS_CFLAGS = $(XDP_SAMPLE_CFLAGS)
$(obj)/$(XDP_SAMPLE): TPROGS_CFLAGS = $(XDP_SAMPLE_CFLAGS) $(TPROGS_USER_CFLAGS)
$(obj)/$(XDP_SAMPLE): $(src)/xdp_sample_user.h $(src)/xdp_sample_shared.h
# Override includes for trace_helpers.o because __must_check won't be defined
# in our include path.

View File

@ -4,6 +4,7 @@
#include <uapi/linux/bpf.h>
#include <bpf/bpf_helpers.h>
#if !defined(__aarch64__)
struct syscalls_enter_open_args {
unsigned long long unused;
long syscall_nr;
@ -11,6 +12,7 @@ struct syscalls_enter_open_args {
long flags;
long mode;
};
#endif
struct syscalls_exit_open_args {
unsigned long long unused;
@ -18,6 +20,15 @@ struct syscalls_exit_open_args {
long ret;
};
struct syscalls_enter_open_at_args {
unsigned long long unused;
long syscall_nr;
long long dfd;
long filename_ptr;
long flags;
long mode;
};
struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
__type(key, u32);
@ -54,14 +65,14 @@ int trace_enter_open(struct syscalls_enter_open_args *ctx)
#endif
SEC("tracepoint/syscalls/sys_enter_openat")
int trace_enter_open_at(struct syscalls_enter_open_args *ctx)
int trace_enter_open_at(struct syscalls_enter_open_at_args *ctx)
{
count(&enter_open_map);
return 0;
}
SEC("tracepoint/syscalls/sys_enter_openat2")
int trace_enter_open_at2(struct syscalls_enter_open_args *ctx)
int trace_enter_open_at2(struct syscalls_enter_open_at_args *ctx)
{
count(&enter_open_map);
return 0;

View File

@ -37,7 +37,7 @@ DESCRIPTION
**bpftool net { show | list }** [ **dev** *NAME* ]
List bpf program attachments in the kernel networking subsystem.
Currently, device driver xdp attachments, tcx and old-style tc
Currently, device driver xdp attachments, tcx, netkit and old-style tc
classifier/action attachments, flow_dissector as well as netfilter
attachments are implemented, i.e., for
program types **BPF_PROG_TYPE_XDP**, **BPF_PROG_TYPE_SCHED_CLS**,
@ -52,11 +52,11 @@ DESCRIPTION
bpf programs, users should consult other tools, e.g., iproute2.
The current output will start with all xdp program attachments, followed by
all tcx, then tc class/qdisc bpf program attachments, then flow_dissector
and finally netfilter programs. Both xdp programs and tcx/tc programs are
all tcx, netkit, then tc class/qdisc bpf program attachments, then flow_dissector
and finally netfilter programs. Both xdp programs and tcx/netkit/tc programs are
ordered based on ifindex number. If multiple bpf programs are attached
to the same networking device through **tc**, the order will be first
all bpf programs attached to tcx, then tc classes, then all bpf programs
all bpf programs attached to tcx, netkit, then tc classes, then all bpf programs
attached to non clsact qdiscs, and finally all bpf programs attached
to root and clsact qdisc.

View File

@ -127,7 +127,7 @@ static void btf_dumper_ptr(const struct btf_dumper *d,
print_ptr_value:
if (d->is_plain_text)
jsonw_printf(d->jw, "%p", (void *)value);
jsonw_printf(d->jw, "\"%p\"", (void *)value);
else
jsonw_printf(d->jw, "%lu", value);
}

View File

@ -451,6 +451,10 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
show_link_ifindex_json(info->tcx.ifindex, json_wtr);
show_link_attach_type_json(info->tcx.attach_type, json_wtr);
break;
case BPF_LINK_TYPE_NETKIT:
show_link_ifindex_json(info->netkit.ifindex, json_wtr);
show_link_attach_type_json(info->netkit.attach_type, json_wtr);
break;
case BPF_LINK_TYPE_XDP:
show_link_ifindex_json(info->xdp.ifindex, json_wtr);
break;
@ -791,6 +795,11 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
show_link_ifindex_plain(info->tcx.ifindex);
show_link_attach_type_plain(info->tcx.attach_type);
break;
case BPF_LINK_TYPE_NETKIT:
printf("\n\t");
show_link_ifindex_plain(info->netkit.ifindex);
show_link_attach_type_plain(info->netkit.attach_type);
break;
case BPF_LINK_TYPE_XDP:
printf("\n\t");
show_link_ifindex_plain(info->xdp.ifindex);

View File

@ -79,6 +79,8 @@ static const char * const attach_type_strings[] = {
static const char * const attach_loc_strings[] = {
[BPF_TCX_INGRESS] = "tcx/ingress",
[BPF_TCX_EGRESS] = "tcx/egress",
[BPF_NETKIT_PRIMARY] = "netkit/primary",
[BPF_NETKIT_PEER] = "netkit/peer",
};
const size_t net_attach_type_size = ARRAY_SIZE(attach_type_strings);
@ -506,6 +508,9 @@ static void show_dev_tc_bpf(struct ip_devname_ifindex *dev)
{
__show_dev_tc_bpf(dev, BPF_TCX_INGRESS);
__show_dev_tc_bpf(dev, BPF_TCX_EGRESS);
__show_dev_tc_bpf(dev, BPF_NETKIT_PRIMARY);
__show_dev_tc_bpf(dev, BPF_NETKIT_PEER);
}
static int show_dev_tc_bpf_classic(int sock, unsigned int nl_pid,
@ -926,7 +931,7 @@ static int do_help(int argc, char **argv)
" ATTACH_TYPE := { xdp | xdpgeneric | xdpdrv | xdpoffload }\n"
" " HELP_SPEC_OPTIONS " }\n"
"\n"
"Note: Only xdp, tcx, tc, flow_dissector and netfilter attachments\n"
"Note: Only xdp, tcx, tc, netkit, flow_dissector and netfilter attachments\n"
" are currently supported.\n"
" For progs attached to cgroups, use \"bpftool cgroup\"\n"
" to dump program attachments. For program types\n"

View File

@ -276,6 +276,9 @@ static struct res do_one_id(const char *id_str, work_func func, void *data,
res.nr_maps++;
if (wtr)
jsonw_start_array(wtr);
if (func(fd, info, data, wtr))
res.nr_errs++;
else if (!wtr && json_output)
@ -288,6 +291,9 @@ static struct res do_one_id(const char *id_str, work_func func, void *data,
*/
jsonw_null(json_wtr);
if (wtr)
jsonw_end_array(wtr);
done:
free(info);
close(fd);

View File

@ -1052,6 +1052,8 @@ enum bpf_attach_type {
BPF_CGROUP_UNIX_RECVMSG,
BPF_CGROUP_UNIX_GETPEERNAME,
BPF_CGROUP_UNIX_GETSOCKNAME,
BPF_NETKIT_PRIMARY,
BPF_NETKIT_PEER,
__MAX_BPF_ATTACH_TYPE
};
@ -1071,6 +1073,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_NETFILTER = 10,
BPF_LINK_TYPE_TCX = 11,
BPF_LINK_TYPE_UPROBE_MULTI = 12,
BPF_LINK_TYPE_NETKIT = 13,
MAX_BPF_LINK_TYPE,
};
@ -1656,6 +1659,13 @@ union bpf_attr {
__u32 flags;
__u32 pid;
} uprobe_multi;
struct {
union {
__u32 relative_fd;
__u32 relative_id;
};
__u64 expected_revision;
} netkit;
};
} link_create;
@ -6576,6 +6586,10 @@ struct bpf_link_info {
__u32 ifindex;
__u32 attach_type;
} tcx;
struct {
__u32 ifindex;
__u32 attach_type;
} netkit;
};
} __attribute__((aligned(8)));
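As a usage sketch (not part of this patch), user space can read the new fields back through libbpf's bpf_link_get_info_by_fd(); link_fd is assumed to refer to a netkit link:

	struct bpf_link_info info = {};
	__u32 len = sizeof(info);

	if (!bpf_link_get_info_by_fd(link_fd, &info, &len) &&
	    info.type == BPF_LINK_TYPE_NETKIT)
		printf("netkit link: ifindex %u, attach type %u\n",
		       info.netkit.ifindex, info.netkit.attach_type);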

View File

@ -211,6 +211,9 @@ struct rtnl_link_stats {
* @rx_nohandler: Number of packets received on the interface
* but dropped by the networking stack because the device is
* not designated to receive packets (e.g. backup link in a bond).
*
* @rx_otherhost_dropped: Number of packets dropped due to mismatch
* in destination MAC address.
*/
struct rtnl_link_stats64 {
__u64 rx_packets;
@ -243,6 +246,23 @@ struct rtnl_link_stats64 {
__u64 rx_compressed;
__u64 tx_compressed;
__u64 rx_nohandler;
__u64 rx_otherhost_dropped;
};
/* Subset of link stats useful for in-HW collection. Meaning of the fields is as
* for struct rtnl_link_stats64.
*/
struct rtnl_hw_stats64 {
__u64 rx_packets;
__u64 tx_packets;
__u64 rx_bytes;
__u64 tx_bytes;
__u64 rx_errors;
__u64 tx_errors;
__u64 rx_dropped;
__u64 tx_dropped;
__u64 multicast;
};
/* The struct should be in sync with struct ifmap */
@ -350,7 +370,13 @@ enum {
IFLA_GRO_MAX_SIZE,
IFLA_TSO_MAX_SIZE,
IFLA_TSO_MAX_SEGS,
IFLA_ALLMULTI, /* Allmulti count: > 0 means acts ALLMULTI */
IFLA_DEVLINK_PORT,
IFLA_GSO_IPV4_MAX_SIZE,
IFLA_GRO_IPV4_MAX_SIZE,
IFLA_DPLL_PIN,
__IFLA_MAX
};
@ -539,6 +565,12 @@ enum {
IFLA_BRPORT_MRP_IN_OPEN,
IFLA_BRPORT_MCAST_EHT_HOSTS_LIMIT,
IFLA_BRPORT_MCAST_EHT_HOSTS_CNT,
IFLA_BRPORT_LOCKED,
IFLA_BRPORT_MAB,
IFLA_BRPORT_MCAST_N_GROUPS,
IFLA_BRPORT_MCAST_MAX_GROUPS,
IFLA_BRPORT_NEIGH_VLAN_SUPPRESS,
IFLA_BRPORT_BACKUP_NHID,
__IFLA_BRPORT_MAX
};
#define IFLA_BRPORT_MAX (__IFLA_BRPORT_MAX - 1)
@ -716,7 +748,79 @@ enum ipvlan_mode {
#define IPVLAN_F_PRIVATE 0x01
#define IPVLAN_F_VEPA 0x02
/* Tunnel RTM header */
struct tunnel_msg {
__u8 family;
__u8 flags;
__u16 reserved2;
__u32 ifindex;
};
/* netkit section */
enum netkit_action {
NETKIT_NEXT = -1,
NETKIT_PASS = 0,
NETKIT_DROP = 2,
NETKIT_REDIRECT = 7,
};
enum netkit_mode {
NETKIT_L2,
NETKIT_L3,
};
enum {
IFLA_NETKIT_UNSPEC,
IFLA_NETKIT_PEER_INFO,
IFLA_NETKIT_PRIMARY,
IFLA_NETKIT_POLICY,
IFLA_NETKIT_PEER_POLICY,
IFLA_NETKIT_MODE,
__IFLA_NETKIT_MAX,
};
#define IFLA_NETKIT_MAX (__IFLA_NETKIT_MAX - 1)
/* VXLAN section */
/* include statistics in the dump */
#define TUNNEL_MSG_FLAG_STATS 0x01
#define TUNNEL_MSG_VALID_USER_FLAGS TUNNEL_MSG_FLAG_STATS
/* Embedded inside VXLAN_VNIFILTER_ENTRY_STATS */
enum {
VNIFILTER_ENTRY_STATS_UNSPEC,
VNIFILTER_ENTRY_STATS_RX_BYTES,
VNIFILTER_ENTRY_STATS_RX_PKTS,
VNIFILTER_ENTRY_STATS_RX_DROPS,
VNIFILTER_ENTRY_STATS_RX_ERRORS,
VNIFILTER_ENTRY_STATS_TX_BYTES,
VNIFILTER_ENTRY_STATS_TX_PKTS,
VNIFILTER_ENTRY_STATS_TX_DROPS,
VNIFILTER_ENTRY_STATS_TX_ERRORS,
VNIFILTER_ENTRY_STATS_PAD,
__VNIFILTER_ENTRY_STATS_MAX
};
#define VNIFILTER_ENTRY_STATS_MAX (__VNIFILTER_ENTRY_STATS_MAX - 1)
enum {
VXLAN_VNIFILTER_ENTRY_UNSPEC,
VXLAN_VNIFILTER_ENTRY_START,
VXLAN_VNIFILTER_ENTRY_END,
VXLAN_VNIFILTER_ENTRY_GROUP,
VXLAN_VNIFILTER_ENTRY_GROUP6,
VXLAN_VNIFILTER_ENTRY_STATS,
__VXLAN_VNIFILTER_ENTRY_MAX
};
#define VXLAN_VNIFILTER_ENTRY_MAX (__VXLAN_VNIFILTER_ENTRY_MAX - 1)
enum {
VXLAN_VNIFILTER_UNSPEC,
VXLAN_VNIFILTER_ENTRY,
__VXLAN_VNIFILTER_MAX
};
#define VXLAN_VNIFILTER_MAX (__VXLAN_VNIFILTER_MAX - 1)
enum {
IFLA_VXLAN_UNSPEC,
IFLA_VXLAN_ID,
@ -748,6 +852,8 @@ enum {
IFLA_VXLAN_GPE,
IFLA_VXLAN_TTL_INHERIT,
IFLA_VXLAN_DF,
IFLA_VXLAN_VNIFILTER, /* only applicable with COLLECT_METADATA mode */
IFLA_VXLAN_LOCALBYPASS,
__IFLA_VXLAN_MAX
};
#define IFLA_VXLAN_MAX (__IFLA_VXLAN_MAX - 1)
@ -781,6 +887,7 @@ enum {
IFLA_GENEVE_LABEL,
IFLA_GENEVE_TTL_INHERIT,
IFLA_GENEVE_DF,
IFLA_GENEVE_INNER_PROTO_INHERIT,
__IFLA_GENEVE_MAX
};
#define IFLA_GENEVE_MAX (__IFLA_GENEVE_MAX - 1)
@ -826,6 +933,8 @@ enum {
IFLA_GTP_FD1,
IFLA_GTP_PDP_HASHSIZE,
IFLA_GTP_ROLE,
IFLA_GTP_CREATE_SOCKETS,
IFLA_GTP_RESTART_COUNT,
__IFLA_GTP_MAX,
};
#define IFLA_GTP_MAX (__IFLA_GTP_MAX - 1)
@ -1162,6 +1271,17 @@ enum {
#define IFLA_STATS_FILTER_BIT(ATTR) (1 << (ATTR - 1))
enum {
IFLA_STATS_GETSET_UNSPEC,
IFLA_STATS_GET_FILTERS, /* Nest of IFLA_STATS_LINK_xxx, each a u32 with
* a filter mask for the corresponding group.
*/
IFLA_STATS_SET_OFFLOAD_XSTATS_L3_STATS, /* 0 or 1 as u8 */
__IFLA_STATS_GETSET_MAX,
};
#define IFLA_STATS_GETSET_MAX (__IFLA_STATS_GETSET_MAX - 1)
/* These are embedded into IFLA_STATS_LINK_XSTATS:
* [IFLA_STATS_LINK_XSTATS]
* -> [LINK_XSTATS_TYPE_xxx]
@ -1179,10 +1299,21 @@ enum {
enum {
IFLA_OFFLOAD_XSTATS_UNSPEC,
IFLA_OFFLOAD_XSTATS_CPU_HIT, /* struct rtnl_link_stats64 */
IFLA_OFFLOAD_XSTATS_HW_S_INFO, /* HW stats info. A nest */
IFLA_OFFLOAD_XSTATS_L3_STATS, /* struct rtnl_hw_stats64 */
__IFLA_OFFLOAD_XSTATS_MAX
};
#define IFLA_OFFLOAD_XSTATS_MAX (__IFLA_OFFLOAD_XSTATS_MAX - 1)
enum {
IFLA_OFFLOAD_XSTATS_HW_S_INFO_UNSPEC,
IFLA_OFFLOAD_XSTATS_HW_S_INFO_REQUEST, /* u8 */
IFLA_OFFLOAD_XSTATS_HW_S_INFO_USED, /* u8 */
__IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX,
};
#define IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX \
(__IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX - 1)
/* XDP section */
#define XDP_FLAGS_UPDATE_IF_NOEXIST (1U << 0)
@ -1281,4 +1412,14 @@ enum {
#define IFLA_MCTP_MAX (__IFLA_MCTP_MAX - 1)
/* DSA section */
enum {
IFLA_DSA_UNSPEC,
IFLA_DSA_MASTER,
__IFLA_DSA_MAX,
};
#define IFLA_DSA_MAX (__IFLA_DSA_MAX - 1)
#endif /* _UAPI_LINUX_IF_LINK_H */

View File

@ -810,6 +810,22 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, tcx))
return libbpf_err(-EINVAL);
break;
case BPF_NETKIT_PRIMARY:
case BPF_NETKIT_PEER:
relative_fd = OPTS_GET(opts, netkit.relative_fd, 0);
relative_id = OPTS_GET(opts, netkit.relative_id, 0);
if (relative_fd && relative_id)
return libbpf_err(-EINVAL);
if (relative_id) {
attr.link_create.netkit.relative_id = relative_id;
attr.link_create.flags |= BPF_F_ID;
} else {
attr.link_create.netkit.relative_fd = relative_fd;
}
attr.link_create.netkit.expected_revision = OPTS_GET(opts, netkit.expected_revision, 0);
if (!OPTS_ZEROED(opts, netkit))
return libbpf_err(-EINVAL);
break;
default:
if (!OPTS_ZEROED(opts, flags))
return libbpf_err(-EINVAL);

View File

@ -415,6 +415,11 @@ struct bpf_link_create_opts {
__u32 relative_id;
__u64 expected_revision;
} tcx;
struct {
__u32 relative_fd;
__u32 relative_id;
__u64 expected_revision;
} netkit;
};
size_t :0;
};
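A minimal sketch (not from this patch) of driving the new opts through the low-level API; prog_fd and ifindex are assumed to come from the caller:

	LIBBPF_OPTS(bpf_link_create_opts, opts);
	int link_fd;

	link_fd = bpf_link_create(prog_fd, ifindex, BPF_NETKIT_PRIMARY, &opts);
	if (link_fd < 0)
		return link_fd;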

View File

@ -141,14 +141,15 @@ static int elf_sym_iter_new(struct elf_sym_iter *iter,
iter->versyms = elf_getdata(scn, 0);
scn = elf_find_next_scn_by_type(elf, SHT_GNU_verdef, NULL);
if (!scn) {
pr_debug("elf: failed to find verdef ELF sections in '%s'\n", binary_path);
return -ENOENT;
}
if (!gelf_getshdr(scn, &sh))
return -EINVAL;
iter->verdef_strtabidx = sh.sh_link;
if (!scn)
return 0;
iter->verdefs = elf_getdata(scn, 0);
if (!iter->verdefs || !gelf_getshdr(scn, &sh)) {
pr_warn("elf: failed to get verdef ELF section in '%s'\n", binary_path);
return -EINVAL;
}
iter->verdef_strtabidx = sh.sh_link;
return 0;
}
@ -199,6 +200,9 @@ static const char *elf_get_vername(struct elf_sym_iter *iter, int ver)
GElf_Verdef verdef;
int offset;
if (!iter->verdefs)
return NULL;
offset = 0;
while (gelf_getverdef(iter->verdefs, offset, &verdef)) {
if (verdef.vd_ndx != ver) {

View File

@ -126,6 +126,8 @@ static const char * const attach_type_name[] = {
[BPF_TCX_INGRESS] = "tcx_ingress",
[BPF_TCX_EGRESS] = "tcx_egress",
[BPF_TRACE_UPROBE_MULTI] = "trace_uprobe_multi",
[BPF_NETKIT_PRIMARY] = "netkit_primary",
[BPF_NETKIT_PEER] = "netkit_peer",
};
static const char * const link_type_name[] = {
@ -142,6 +144,7 @@ static const char * const link_type_name[] = {
[BPF_LINK_TYPE_NETFILTER] = "netfilter",
[BPF_LINK_TYPE_TCX] = "tcx",
[BPF_LINK_TYPE_UPROBE_MULTI] = "uprobe_multi",
[BPF_LINK_TYPE_NETKIT] = "netkit",
};
static const char * const map_type_name[] = {
@ -8915,6 +8918,8 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("tc", SCHED_CLS, 0, SEC_NONE), /* deprecated / legacy, use tcx */
SEC_DEF("classifier", SCHED_CLS, 0, SEC_NONE), /* deprecated / legacy, use tcx */
SEC_DEF("action", SCHED_ACT, 0, SEC_NONE), /* deprecated / legacy, use tcx */
SEC_DEF("netkit/primary", SCHED_CLS, BPF_NETKIT_PRIMARY, SEC_NONE),
SEC_DEF("netkit/peer", SCHED_CLS, BPF_NETKIT_PEER, SEC_NONE),
SEC_DEF("tracepoint+", TRACEPOINT, 0, SEC_NONE, attach_tp),
SEC_DEF("tp+", TRACEPOINT, 0, SEC_NONE, attach_tp),
SEC_DEF("raw_tracepoint+", RAW_TRACEPOINT, 0, SEC_NONE, attach_raw_tp),
@ -12126,6 +12131,40 @@ bpf_program__attach_tcx(const struct bpf_program *prog, int ifindex,
return bpf_program_attach_fd(prog, ifindex, "tcx", &link_create_opts);
}
struct bpf_link *
bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
const struct bpf_netkit_opts *opts)
{
LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);
__u32 relative_id;
int relative_fd;
if (!OPTS_VALID(opts, bpf_netkit_opts))
return libbpf_err_ptr(-EINVAL);
relative_id = OPTS_GET(opts, relative_id, 0);
relative_fd = OPTS_GET(opts, relative_fd, 0);
/* validate we don't have unexpected combinations of non-zero fields */
if (!ifindex) {
pr_warn("prog '%s': target netdevice ifindex cannot be zero\n",
prog->name);
return libbpf_err_ptr(-EINVAL);
}
if (relative_fd && relative_id) {
pr_warn("prog '%s': relative_fd and relative_id cannot be set at the same time\n",
prog->name);
return libbpf_err_ptr(-EINVAL);
}
link_create_opts.netkit.expected_revision = OPTS_GET(opts, expected_revision, 0);
link_create_opts.netkit.relative_fd = relative_fd;
link_create_opts.netkit.relative_id = relative_id;
link_create_opts.flags = OPTS_GET(opts, flags, 0);
return bpf_program_attach_fd(prog, ifindex, "netkit", &link_create_opts);
}
struct bpf_link *bpf_program__attach_freplace(const struct bpf_program *prog,
int target_fd,
const char *attach_func_name)

View File

@ -800,6 +800,21 @@ LIBBPF_API struct bpf_link *
bpf_program__attach_tcx(const struct bpf_program *prog, int ifindex,
const struct bpf_tcx_opts *opts);
struct bpf_netkit_opts {
/* size of this struct, for forward/backward compatibility */
size_t sz;
__u32 flags;
__u32 relative_fd;
__u32 relative_id;
__u64 expected_revision;
size_t :0;
};
#define bpf_netkit_opts__last_field expected_revision
LIBBPF_API struct bpf_link *
bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
const struct bpf_netkit_opts *opts);
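A usage sketch for the high-level API (not part of this patch): attach a program that uses the new SEC("netkit/primary") section to a device's primary side. The device name "nk0" and the prog variable are assumptions; if_nametoindex() comes from <net/if.h>:

	LIBBPF_OPTS(bpf_netkit_opts, optl); /* defaults: no relative prog, revision 0 */
	struct bpf_link *link;
	int ifindex;

	ifindex = if_nametoindex("nk0"); /* hypothetical netkit device */
	if (!ifindex)
		return -errno;

	link = bpf_program__attach_netkit(prog, ifindex, &optl);
	if (!link)
		return -errno;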
struct bpf_map;
LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);

View File

@ -398,6 +398,7 @@ LIBBPF_1.3.0 {
bpf_object__unpin;
bpf_prog_detach_opts;
bpf_program__attach_netfilter;
bpf_program__attach_netkit;
bpf_program__attach_tcx;
bpf_program__attach_uprobe_multi;
ring__avail_data_size;

View File

@ -585,11 +585,20 @@ endef
# Define test_progs test runner.
TRUNNER_TESTS_DIR := prog_tests
TRUNNER_BPF_PROGS_DIR := progs
TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \
network_helpers.c testing_helpers.c \
btf_helpers.c flow_dissector_load.h \
cap_helpers.c test_loader.c xsk.c disasm.c \
json_writer.c unpriv_helpers.c \
TRUNNER_EXTRA_SOURCES := test_progs.c \
cgroup_helpers.c \
trace_helpers.c \
network_helpers.c \
testing_helpers.c \
btf_helpers.c \
cap_helpers.c \
unpriv_helpers.c \
netlink_helpers.c \
test_loader.c \
xsk.c \
disasm.c \
json_writer.c \
flow_dissector_load.h \
ip_check_defrag_frags.h
TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \
$(OUTPUT)/liburandom_read.so \

View File

@ -458,4 +458,23 @@ extern void bpf_throw(u64 cookie) __ksym;
__bpf_assert_op(LHS, <=, END, value, false); \
})
struct bpf_iter_css_task;
struct cgroup_subsys_state;
extern int bpf_iter_css_task_new(struct bpf_iter_css_task *it,
struct cgroup_subsys_state *css, unsigned int flags) __weak __ksym;
extern struct task_struct *bpf_iter_css_task_next(struct bpf_iter_css_task *it) __weak __ksym;
extern void bpf_iter_css_task_destroy(struct bpf_iter_css_task *it) __weak __ksym;
struct bpf_iter_task;
extern int bpf_iter_task_new(struct bpf_iter_task *it,
struct task_struct *task, unsigned int flags) __weak __ksym;
extern struct task_struct *bpf_iter_task_next(struct bpf_iter_task *it) __weak __ksym;
extern void bpf_iter_task_destroy(struct bpf_iter_task *it) __weak __ksym;
struct bpf_iter_css;
extern int bpf_iter_css_new(struct bpf_iter_css *it,
struct cgroup_subsys_state *start, unsigned int flags) __weak __ksym;
extern struct cgroup_subsys_state *bpf_iter_css_next(struct bpf_iter_css *it) __weak __ksym;
extern void bpf_iter_css_destroy(struct bpf_iter_css *it) __weak __ksym;
#endif
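A minimal BPF-side sketch of the task iterator declared above (the tp_btf hook, the zero flags value and the counter are illustrative assumptions, not taken from this patch). tp_btf programs run inside an RCU read-side section, which RCU-protected iterator kfuncs require per the verifier changes earlier in this series:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_experimental.h"

char _license[] SEC("license") = "GPL";

int nr_tasks;

SEC("tp_btf/task_newtask")
int BPF_PROG(count_tasks, struct task_struct *task, u64 clone_flags)
{
	struct bpf_iter_task it;
	struct task_struct *cur;

	/* 0 is a placeholder flags value; the accepted values are defined
	 * by the kernel's task iterator implementation. */
	bpf_iter_task_new(&it, task, 0);
	while ((cur = bpf_iter_task_next(&it)))
		__sync_fetch_and_add(&nr_tasks, 1);
	bpf_iter_task_destroy(&it);
	return 0;
}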

View File

@ -71,6 +71,7 @@ CONFIG_NETFILTER_SYNPROXY=y
CONFIG_NETFILTER_XT_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_TARGET_CT=y
CONFIG_NETKIT=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_DEFRAG_IPV4=y

View File

@ -0,0 +1,358 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Taken & modified from iproute2's libnetlink.c
* Authors: Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
*/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <time.h>
#include <sys/socket.h>
#include "netlink_helpers.h"
static int rcvbuf = 1024 * 1024;
void rtnl_close(struct rtnl_handle *rth)
{
if (rth->fd >= 0) {
close(rth->fd);
rth->fd = -1;
}
}
int rtnl_open_byproto(struct rtnl_handle *rth, unsigned int subscriptions,
int protocol)
{
socklen_t addr_len;
int sndbuf = 32768;
int one = 1;
memset(rth, 0, sizeof(*rth));
rth->proto = protocol;
rth->fd = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, protocol);
if (rth->fd < 0) {
perror("Cannot open netlink socket");
return -1;
}
if (setsockopt(rth->fd, SOL_SOCKET, SO_SNDBUF,
&sndbuf, sizeof(sndbuf)) < 0) {
perror("SO_SNDBUF");
goto err;
}
if (setsockopt(rth->fd, SOL_SOCKET, SO_RCVBUF,
&rcvbuf, sizeof(rcvbuf)) < 0) {
perror("SO_RCVBUF");
goto err;
}
/* Older kernels may not support extended ACK reporting */
setsockopt(rth->fd, SOL_NETLINK, NETLINK_EXT_ACK,
&one, sizeof(one));
memset(&rth->local, 0, sizeof(rth->local));
rth->local.nl_family = AF_NETLINK;
rth->local.nl_groups = subscriptions;
if (bind(rth->fd, (struct sockaddr *)&rth->local,
sizeof(rth->local)) < 0) {
perror("Cannot bind netlink socket");
goto err;
}
addr_len = sizeof(rth->local);
if (getsockname(rth->fd, (struct sockaddr *)&rth->local,
&addr_len) < 0) {
perror("Cannot getsockname");
goto err;
}
if (addr_len != sizeof(rth->local)) {
fprintf(stderr, "Wrong address length %d\n", addr_len);
goto err;
}
if (rth->local.nl_family != AF_NETLINK) {
fprintf(stderr, "Wrong address family %d\n",
rth->local.nl_family);
goto err;
}
rth->seq = time(NULL);
return 0;
err:
rtnl_close(rth);
return -1;
}
int rtnl_open(struct rtnl_handle *rth, unsigned int subscriptions)
{
return rtnl_open_byproto(rth, subscriptions, NETLINK_ROUTE);
}
static int __rtnl_recvmsg(int fd, struct msghdr *msg, int flags)
{
int len;
do {
len = recvmsg(fd, msg, flags);
} while (len < 0 && (errno == EINTR || errno == EAGAIN));
if (len < 0) {
fprintf(stderr, "netlink receive error %s (%d)\n",
strerror(errno), errno);
return -errno;
}
if (len == 0) {
fprintf(stderr, "EOF on netlink\n");
return -ENODATA;
}
return len;
}
static int rtnl_recvmsg(int fd, struct msghdr *msg, char **answer)
{
struct iovec *iov = msg->msg_iov;
char *buf;
int len;
iov->iov_base = NULL;
iov->iov_len = 0;
len = __rtnl_recvmsg(fd, msg, MSG_PEEK | MSG_TRUNC);
if (len < 0)
return len;
if (len < 32768)
len = 32768;
buf = malloc(len);
if (!buf) {
fprintf(stderr, "malloc error: not enough buffer\n");
return -ENOMEM;
}
iov->iov_base = buf;
iov->iov_len = len;
len = __rtnl_recvmsg(fd, msg, 0);
if (len < 0) {
free(buf);
return len;
}
if (answer)
*answer = buf;
else
free(buf);
return len;
}
static void rtnl_talk_error(struct nlmsghdr *h, struct nlmsgerr *err,
nl_ext_ack_fn_t errfn)
{
fprintf(stderr, "RTNETLINK answers: %s\n",
strerror(-err->error));
}
static int __rtnl_talk_iov(struct rtnl_handle *rtnl, struct iovec *iov,
size_t iovlen, struct nlmsghdr **answer,
bool show_rtnl_err, nl_ext_ack_fn_t errfn)
{
struct sockaddr_nl nladdr = { .nl_family = AF_NETLINK };
struct iovec riov;
struct msghdr msg = {
.msg_name = &nladdr,
.msg_namelen = sizeof(nladdr),
.msg_iov = iov,
.msg_iovlen = iovlen,
};
unsigned int seq = 0;
struct nlmsghdr *h;
int i, status;
char *buf;
for (i = 0; i < iovlen; i++) {
h = iov[i].iov_base;
h->nlmsg_seq = seq = ++rtnl->seq;
if (answer == NULL)
h->nlmsg_flags |= NLM_F_ACK;
}
status = sendmsg(rtnl->fd, &msg, 0);
if (status < 0) {
perror("Cannot talk to rtnetlink");
return -1;
}
/* change msg to use the response iov */
msg.msg_iov = &riov;
msg.msg_iovlen = 1;
i = 0;
while (1) {
next:
status = rtnl_recvmsg(rtnl->fd, &msg, &buf);
++i;
if (status < 0)
return status;
if (msg.msg_namelen != sizeof(nladdr)) {
fprintf(stderr,
"Sender address length == %d!\n",
msg.msg_namelen);
exit(1);
}
for (h = (struct nlmsghdr *)buf; status >= sizeof(*h); ) {
int len = h->nlmsg_len;
int l = len - sizeof(*h);
if (l < 0 || len > status) {
if (msg.msg_flags & MSG_TRUNC) {
fprintf(stderr, "Truncated message!\n");
free(buf);
return -1;
}
fprintf(stderr,
"Malformed message: len=%d!\n",
len);
exit(1);
}
if (nladdr.nl_pid != 0 ||
h->nlmsg_pid != rtnl->local.nl_pid ||
h->nlmsg_seq > seq || h->nlmsg_seq < seq - iovlen) {
/* Don't forget to skip that message. */
status -= NLMSG_ALIGN(len);
h = (struct nlmsghdr *)((char *)h + NLMSG_ALIGN(len));
continue;
}
if (h->nlmsg_type == NLMSG_ERROR) {
struct nlmsgerr *err = (struct nlmsgerr *)NLMSG_DATA(h);
int error = err->error;
if (l < sizeof(struct nlmsgerr)) {
fprintf(stderr, "ERROR truncated\n");
free(buf);
return -1;
}
if (error) {
errno = -error;
if (rtnl->proto != NETLINK_SOCK_DIAG &&
show_rtnl_err)
rtnl_talk_error(h, err, errfn);
}
if (i < iovlen) {
free(buf);
goto next;
}
if (error) {
free(buf);
return -i;
}
if (answer)
*answer = (struct nlmsghdr *)buf;
else
free(buf);
return 0;
}
if (answer) {
*answer = (struct nlmsghdr *)buf;
return 0;
}
fprintf(stderr, "Unexpected reply!\n");
status -= NLMSG_ALIGN(len);
h = (struct nlmsghdr *)((char *)h + NLMSG_ALIGN(len));
}
free(buf);
if (msg.msg_flags & MSG_TRUNC) {
fprintf(stderr, "Message truncated!\n");
continue;
}
if (status) {
fprintf(stderr, "Remnant of size %d!\n", status);
exit(1);
}
}
}
static int __rtnl_talk(struct rtnl_handle *rtnl, struct nlmsghdr *n,
struct nlmsghdr **answer, bool show_rtnl_err,
nl_ext_ack_fn_t errfn)
{
struct iovec iov = {
.iov_base = n,
.iov_len = n->nlmsg_len,
};
return __rtnl_talk_iov(rtnl, &iov, 1, answer, show_rtnl_err, errfn);
}
int rtnl_talk(struct rtnl_handle *rtnl, struct nlmsghdr *n,
struct nlmsghdr **answer)
{
return __rtnl_talk(rtnl, n, answer, true, NULL);
}
int addattr(struct nlmsghdr *n, int maxlen, int type)
{
return addattr_l(n, maxlen, type, NULL, 0);
}
int addattr8(struct nlmsghdr *n, int maxlen, int type, __u8 data)
{
return addattr_l(n, maxlen, type, &data, sizeof(__u8));
}
int addattr16(struct nlmsghdr *n, int maxlen, int type, __u16 data)
{
return addattr_l(n, maxlen, type, &data, sizeof(__u16));
}
int addattr32(struct nlmsghdr *n, int maxlen, int type, __u32 data)
{
return addattr_l(n, maxlen, type, &data, sizeof(__u32));
}
int addattr64(struct nlmsghdr *n, int maxlen, int type, __u64 data)
{
return addattr_l(n, maxlen, type, &data, sizeof(__u64));
}
int addattrstrz(struct nlmsghdr *n, int maxlen, int type, const char *str)
{
return addattr_l(n, maxlen, type, str, strlen(str)+1);
}
int addattr_l(struct nlmsghdr *n, int maxlen, int type, const void *data,
int alen)
{
int len = RTA_LENGTH(alen);
struct rtattr *rta;
if (NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(len) > maxlen) {
fprintf(stderr, "%s: Message exceeded bound of %d\n",
__func__, maxlen);
return -1;
}
rta = NLMSG_TAIL(n);
rta->rta_type = type;
rta->rta_len = len;
if (alen)
memcpy(RTA_DATA(rta), data, alen);
n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(len);
return 0;
}
int addraw_l(struct nlmsghdr *n, int maxlen, const void *data, int len)
{
if (NLMSG_ALIGN(n->nlmsg_len) + NLMSG_ALIGN(len) > maxlen) {
fprintf(stderr, "%s: Message exceeded bound of %d\n",
__func__, maxlen);
return -1;
}
memcpy(NLMSG_TAIL(n), data, len);
memset((void *) NLMSG_TAIL(n) + len, 0, NLMSG_ALIGN(len) - len);
n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + NLMSG_ALIGN(len);
return 0;
}
struct rtattr *addattr_nest(struct nlmsghdr *n, int maxlen, int type)
{
struct rtattr *nest = NLMSG_TAIL(n);
addattr_l(n, maxlen, type, NULL, 0);
return nest;
}
int addattr_nest_end(struct nlmsghdr *n, struct rtattr *nest)
{
nest->rta_len = (void *)NLMSG_TAIL(n) - (void *)nest;
return n->nlmsg_len;
}


@ -0,0 +1,46 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef NETLINK_HELPERS_H
#define NETLINK_HELPERS_H
#include <string.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
struct rtnl_handle {
int fd;
struct sockaddr_nl local;
struct sockaddr_nl peer;
__u32 seq;
__u32 dump;
int proto;
FILE *dump_fp;
#define RTNL_HANDLE_F_LISTEN_ALL_NSID 0x01
#define RTNL_HANDLE_F_SUPPRESS_NLERR 0x02
#define RTNL_HANDLE_F_STRICT_CHK 0x04
int flags;
};
#define NLMSG_TAIL(nmsg) \
((struct rtattr *) (((void *) (nmsg)) + NLMSG_ALIGN((nmsg)->nlmsg_len)))
typedef int (*nl_ext_ack_fn_t)(const char *errmsg, uint32_t off,
const struct nlmsghdr *inner_nlh);
int rtnl_open(struct rtnl_handle *rth, unsigned int subscriptions)
__attribute__((warn_unused_result));
void rtnl_close(struct rtnl_handle *rth);
int rtnl_talk(struct rtnl_handle *rtnl, struct nlmsghdr *n,
struct nlmsghdr **answer)
__attribute__((warn_unused_result));
int addattr(struct nlmsghdr *n, int maxlen, int type);
int addattr8(struct nlmsghdr *n, int maxlen, int type, __u8 data);
int addattr16(struct nlmsghdr *n, int maxlen, int type, __u16 data);
int addattr32(struct nlmsghdr *n, int maxlen, int type, __u32 data);
int addattr64(struct nlmsghdr *n, int maxlen, int type, __u64 data);
int addattrstrz(struct nlmsghdr *n, int maxlen, int type, const char *data);
int addattr_l(struct nlmsghdr *n, int maxlen, int type, const void *data, int alen);
int addraw_l(struct nlmsghdr *n, int maxlen, const void *data, int len);
struct rtattr *addattr_nest(struct nlmsghdr *n, int maxlen, int type);
int addattr_nest_end(struct nlmsghdr *n, struct rtattr *nest);
#endif /* NETLINK_HELPERS_H */
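The header above gives no usage example; the real consumer is create_netkit() in the netkit selftest further below. As a minimal sketch (not part of the patch, assuming "netlink_helpers.h", <net/if.h> for if_nametoindex() and the usual libc headers), fetching a single RTM_NEWLINK reply for the loopback device would look roughly like this:

	static int get_loopback_link(void)
	{
		struct {
			struct nlmsghdr n;
			struct ifinfomsg i;
			char buf[1024];
		} req = {
			.n.nlmsg_len   = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
			.n.nlmsg_flags = NLM_F_REQUEST,
			.n.nlmsg_type  = RTM_GETLINK,
			.i.ifi_family  = AF_UNSPEC,
			.i.ifi_index   = if_nametoindex("lo"),
		};
		struct rtnl_handle rth = { .fd = -1 };
		struct nlmsghdr *answer = NULL;
		int err;

		if (rtnl_open(&rth, 0))
			return -1;
		/* on success, answer points at a malloc'ed RTM_NEWLINK reply */
		err = rtnl_talk(&rth, &req.n, &answer);
		if (!err && answer)
			free(answer);
		rtnl_close(&rth);
		return err;
	}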


@ -7,7 +7,7 @@
#include "bpf_iter_ipv6_route.skel.h"
#include "bpf_iter_netlink.skel.h"
#include "bpf_iter_bpf_map.skel.h"
#include "bpf_iter_task.skel.h"
#include "bpf_iter_tasks.skel.h"
#include "bpf_iter_task_stack.skel.h"
#include "bpf_iter_task_file.skel.h"
#include "bpf_iter_task_vmas.skel.h"
@ -215,12 +215,12 @@ static void *do_nothing_wait(void *arg)
static void test_task_common_nocheck(struct bpf_iter_attach_opts *opts,
int *num_unknown, int *num_known)
{
struct bpf_iter_task *skel;
struct bpf_iter_tasks *skel;
pthread_t thread_id;
void *ret;
skel = bpf_iter_task__open_and_load();
if (!ASSERT_OK_PTR(skel, "bpf_iter_task__open_and_load"))
skel = bpf_iter_tasks__open_and_load();
if (!ASSERT_OK_PTR(skel, "bpf_iter_tasks__open_and_load"))
return;
ASSERT_OK(pthread_mutex_lock(&do_nothing_mutex), "pthread_mutex_lock");
@ -239,7 +239,7 @@ static void test_task_common_nocheck(struct bpf_iter_attach_opts *opts,
ASSERT_FALSE(pthread_join(thread_id, &ret) || ret != NULL,
"pthread_join");
bpf_iter_task__destroy(skel);
bpf_iter_tasks__destroy(skel);
}
static void test_task_common(struct bpf_iter_attach_opts *opts, int num_unknown, int num_known)
@ -307,10 +307,10 @@ static void test_task_pidfd(void)
static void test_task_sleepable(void)
{
struct bpf_iter_task *skel;
struct bpf_iter_tasks *skel;
skel = bpf_iter_task__open_and_load();
if (!ASSERT_OK_PTR(skel, "bpf_iter_task__open_and_load"))
skel = bpf_iter_tasks__open_and_load();
if (!ASSERT_OK_PTR(skel, "bpf_iter_tasks__open_and_load"))
return;
do_dummy_read(skel->progs.dump_task_sleepable);
@ -320,7 +320,7 @@ static void test_task_sleepable(void)
ASSERT_GT(skel->bss->num_success_copy_from_user_task, 0,
"num_success_copy_from_user_task");
bpf_iter_task__destroy(skel);
bpf_iter_tasks__destroy(skel);
}
static void test_task_stack(void)


@ -1,7 +1,14 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
#include <sys/syscall.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <malloc.h>
#include <stdlib.h>
#include <test_progs.h>
#include "cgroup_helpers.h"
#include "iters.skel.h"
#include "iters_state_safety.skel.h"
@ -9,6 +16,10 @@
#include "iters_num.skel.h"
#include "iters_testmod_seq.skel.h"
#include "iters_task_vma.skel.h"
#include "iters_task.skel.h"
#include "iters_css_task.skel.h"
#include "iters_css.skel.h"
#include "iters_task_failure.skel.h"
static void subtest_num_iters(void)
{
@ -146,6 +157,138 @@ cleanup:
iters_task_vma__destroy(skel);
}
static pthread_mutex_t do_nothing_mutex;
static void *do_nothing_wait(void *arg)
{
pthread_mutex_lock(&do_nothing_mutex);
pthread_mutex_unlock(&do_nothing_mutex);
pthread_exit(arg);
}
#define thread_num 2
static void subtest_task_iters(void)
{
struct iters_task *skel = NULL;
pthread_t thread_ids[thread_num];
void *ret;
int err;
skel = iters_task__open_and_load();
if (!ASSERT_OK_PTR(skel, "open_and_load"))
goto cleanup;
skel->bss->target_pid = getpid();
err = iters_task__attach(skel);
if (!ASSERT_OK(err, "iters_task__attach"))
goto cleanup;
pthread_mutex_lock(&do_nothing_mutex);
for (int i = 0; i < thread_num; i++)
ASSERT_OK(pthread_create(&thread_ids[i], NULL, &do_nothing_wait, NULL),
"pthread_create");
syscall(SYS_getpgid);
iters_task__detach(skel);
ASSERT_EQ(skel->bss->procs_cnt, 1, "procs_cnt");
ASSERT_EQ(skel->bss->threads_cnt, thread_num + 1, "threads_cnt");
ASSERT_EQ(skel->bss->proc_threads_cnt, thread_num + 1, "proc_threads_cnt");
pthread_mutex_unlock(&do_nothing_mutex);
for (int i = 0; i < thread_num; i++)
ASSERT_OK(pthread_join(thread_ids[i], &ret), "pthread_join");
cleanup:
iters_task__destroy(skel);
}
extern int stack_mprotect(void);
static void subtest_css_task_iters(void)
{
struct iters_css_task *skel = NULL;
int err, cg_fd, cg_id;
const char *cgrp_path = "/cg1";
err = setup_cgroup_environment();
if (!ASSERT_OK(err, "setup_cgroup_environment"))
goto cleanup;
cg_fd = create_and_get_cgroup(cgrp_path);
if (!ASSERT_GE(cg_fd, 0, "create_and_get_cgroup"))
goto cleanup;
cg_id = get_cgroup_id(cgrp_path);
err = join_cgroup(cgrp_path);
if (!ASSERT_OK(err, "join_cgroup"))
goto cleanup;
skel = iters_css_task__open_and_load();
if (!ASSERT_OK_PTR(skel, "open_and_load"))
goto cleanup;
skel->bss->target_pid = getpid();
skel->bss->cg_id = cg_id;
err = iters_css_task__attach(skel);
if (!ASSERT_OK(err, "iters_task__attach"))
goto cleanup;
err = stack_mprotect();
if (!ASSERT_EQ(err, -1, "stack_mprotect") ||
!ASSERT_EQ(errno, EPERM, "stack_mprotect"))
goto cleanup;
iters_css_task__detach(skel);
ASSERT_EQ(skel->bss->css_task_cnt, 1, "css_task_cnt");
cleanup:
cleanup_cgroup_environment();
iters_css_task__destroy(skel);
}
static void subtest_css_iters(void)
{
struct iters_css *skel = NULL;
struct {
const char *path;
int fd;
} cgs[] = {
{ "/cg1" },
{ "/cg1/cg2" },
{ "/cg1/cg2/cg3" },
{ "/cg1/cg2/cg3/cg4" },
};
int err, cg_nr = ARRAY_SIZE(cgs);
int i;
err = setup_cgroup_environment();
if (!ASSERT_OK(err, "setup_cgroup_environment"))
goto cleanup;
for (i = 0; i < cg_nr; i++) {
cgs[i].fd = create_and_get_cgroup(cgs[i].path);
if (!ASSERT_GE(cgs[i].fd, 0, "create_and_get_cgroup"))
goto cleanup;
}
skel = iters_css__open_and_load();
if (!ASSERT_OK_PTR(skel, "open_and_load"))
goto cleanup;
skel->bss->target_pid = getpid();
skel->bss->root_cg_id = get_cgroup_id(cgs[0].path);
skel->bss->leaf_cg_id = get_cgroup_id(cgs[cg_nr - 1].path);
err = iters_css__attach(skel);
if (!ASSERT_OK(err, "iters_task__attach"))
goto cleanup;
syscall(SYS_getpgid);
ASSERT_EQ(skel->bss->pre_order_cnt, cg_nr, "pre_order_cnt");
ASSERT_EQ(skel->bss->first_cg_id, get_cgroup_id(cgs[0].path), "first_cg_id");
ASSERT_EQ(skel->bss->post_order_cnt, cg_nr, "post_order_cnt");
ASSERT_EQ(skel->bss->last_cg_id, get_cgroup_id(cgs[0].path), "last_cg_id");
ASSERT_EQ(skel->bss->tree_high, cg_nr - 1, "tree_high");
iters_css__detach(skel);
cleanup:
cleanup_cgroup_environment();
iters_css__destroy(skel);
}
void test_iters(void)
{
RUN_TESTS(iters_state_safety);
@ -161,4 +304,11 @@ void test_iters(void)
subtest_testmod_seq_iters();
if (test__start_subtest("task_vma"))
subtest_task_vma_iters();
if (test__start_subtest("task"))
subtest_task_iters();
if (test__start_subtest("css_task"))
subtest_css_task_iters();
if (test__start_subtest("css"))
subtest_css_iters();
RUN_TESTS(iters_task_failure);
}


@ -94,14 +94,8 @@ static struct {
{ "incorrect_head_var_off2", "variable ptr_ access var_off=(0x0; 0xffffffff) disallowed" },
{ "incorrect_head_off1", "bpf_list_head not found at offset=25" },
{ "incorrect_head_off2", "bpf_list_head not found at offset=1" },
{ "pop_front_off",
"15: (bf) r1 = r6 ; R1_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=48,imm=0) "
"R6_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=48,imm=0) refs=2,4\n"
"16: (85) call bpf_this_cpu_ptr#154\nR1 type=ptr_or_null_ expected=percpu_ptr_" },
{ "pop_back_off",
"15: (bf) r1 = r6 ; R1_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=48,imm=0) "
"R6_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=48,imm=0) refs=2,4\n"
"16: (85) call bpf_this_cpu_ptr#154\nR1 type=ptr_or_null_ expected=percpu_ptr_" },
{ "pop_front_off", "off 48 doesn't point to 'struct bpf_spin_lock' that is at 40" },
{ "pop_back_off", "off 48 doesn't point to 'struct bpf_spin_lock' that is at 40" },
};
static void test_linked_list_fail_prog(const char *prog_name, const char *err_msg)


@ -30,8 +30,15 @@ void test_task_under_cgroup(void)
if (!ASSERT_OK(ret, "test_task_under_cgroup__load"))
goto cleanup;
ret = test_task_under_cgroup__attach(skel);
if (!ASSERT_OK(ret, "test_task_under_cgroup__attach"))
/* Attach the LSM program first; it is then triggered when the
* TP_BTF program is attached.
*/
skel->links.lsm_run = bpf_program__attach_lsm(skel->progs.lsm_run);
if (!ASSERT_OK_PTR(skel->links.lsm_run, "attach_lsm"))
goto cleanup;
skel->links.tp_btf_run = bpf_program__attach_trace(skel->progs.tp_btf_run);
if (!ASSERT_OK_PTR(skel->links.tp_btf_run, "attach_tp_btf"))
goto cleanup;
pid = fork();


@ -4,6 +4,10 @@
#define TC_HELPERS
#include <test_progs.h>
#ifndef loopback
# define loopback 1
#endif
static inline __u32 id_from_prog_fd(int fd)
{
struct bpf_prog_info prog_info = {};


@ -0,0 +1,687 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Isovalent */
#include <uapi/linux/if_link.h>
#include <net/if.h>
#include <test_progs.h>
#define netkit_peer "nk0"
#define netkit_name "nk1"
#define ping_addr_neigh 0x0a000002 /* 10.0.0.2 */
#define ping_addr_noneigh 0x0a000003 /* 10.0.0.3 */
#include "test_tc_link.skel.h"
#include "netlink_helpers.h"
#include "tc_helpers.h"
#define ICMP_ECHO 8
struct icmphdr {
__u8 type;
__u8 code;
__sum16 checksum;
struct {
__be16 id;
__be16 sequence;
} echo;
};
struct iplink_req {
struct nlmsghdr n;
struct ifinfomsg i;
char buf[1024];
};
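/* Create a netkit device (netkit_name) with its peer over rtnetlink using the
* netlink helpers above, bring up and address the primary side, and either
* configure the peer in the same netns or move it into a fresh "foo" netns.
*/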
static int create_netkit(int mode, int policy, int peer_policy, int *ifindex,
bool same_netns)
{
struct rtnl_handle rth = { .fd = -1 };
struct iplink_req req = {};
struct rtattr *linkinfo, *data;
const char *type = "netkit";
int err;
err = rtnl_open(&rth, 0);
if (!ASSERT_OK(err, "open_rtnetlink"))
return err;
memset(&req, 0, sizeof(req));
req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL;
req.n.nlmsg_type = RTM_NEWLINK;
req.i.ifi_family = AF_UNSPEC;
addattr_l(&req.n, sizeof(req), IFLA_IFNAME, netkit_name,
strlen(netkit_name));
linkinfo = addattr_nest(&req.n, sizeof(req), IFLA_LINKINFO);
addattr_l(&req.n, sizeof(req), IFLA_INFO_KIND, type, strlen(type));
data = addattr_nest(&req.n, sizeof(req), IFLA_INFO_DATA);
addattr32(&req.n, sizeof(req), IFLA_NETKIT_POLICY, policy);
addattr32(&req.n, sizeof(req), IFLA_NETKIT_PEER_POLICY, peer_policy);
addattr32(&req.n, sizeof(req), IFLA_NETKIT_MODE, mode);
addattr_nest_end(&req.n, data);
addattr_nest_end(&req.n, linkinfo);
err = rtnl_talk(&rth, &req.n, NULL);
ASSERT_OK(err, "talk_rtnetlink");
rtnl_close(&rth);
*ifindex = if_nametoindex(netkit_name);
ASSERT_GT(*ifindex, 0, "retrieve_ifindex");
ASSERT_OK(system("ip netns add foo"), "create netns");
ASSERT_OK(system("ip link set dev " netkit_name " up"),
"up primary");
ASSERT_OK(system("ip addr add dev " netkit_name " 10.0.0.1/24"),
"addr primary");
if (same_netns) {
ASSERT_OK(system("ip link set dev " netkit_peer " up"),
"up peer");
ASSERT_OK(system("ip addr add dev " netkit_peer " 10.0.0.2/24"),
"addr peer");
} else {
ASSERT_OK(system("ip link set " netkit_peer " netns foo"),
"move peer");
ASSERT_OK(system("ip netns exec foo ip link set dev "
netkit_peer " up"), "up peer");
ASSERT_OK(system("ip netns exec foo ip addr add dev "
netkit_peer " 10.0.0.2/24"), "addr peer");
}
return err;
}
static void destroy_netkit(void)
{
ASSERT_OK(system("ip link del dev " netkit_name), "del primary");
ASSERT_OK(system("ip netns del foo"), "delete netns");
ASSERT_EQ(if_nametoindex(netkit_name), 0, netkit_name "_ifindex");
}
static int __send_icmp(__u32 dest)
{
struct sockaddr_in addr;
struct icmphdr icmp;
int sock, ret;
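/* The default ping_group_range disallows every group; open it up for group 0
* so the SOCK_DGRAM/IPPROTO_ICMP socket below can be created.
*/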
ret = write_sysctl("/proc/sys/net/ipv4/ping_group_range", "0 0");
if (!ASSERT_OK(ret, "write_sysctl(net.ipv4.ping_group_range)"))
return ret;
sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
if (!ASSERT_GE(sock, 0, "icmp_socket"))
return -errno;
ret = setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE,
netkit_name, strlen(netkit_name) + 1);
if (!ASSERT_OK(ret, "setsockopt(SO_BINDTODEVICE)"))
goto out;
memset(&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = htonl(dest);
memset(&icmp, 0, sizeof(icmp));
icmp.type = ICMP_ECHO;
icmp.echo.id = 1234;
icmp.echo.sequence = 1;
ret = sendto(sock, &icmp, sizeof(icmp), 0,
(struct sockaddr *)&addr, sizeof(addr));
if (!ASSERT_GE(ret, 0, "icmp_sendto"))
ret = -errno;
else
ret = 0;
out:
close(sock);
return ret;
}
static int send_icmp(void)
{
return __send_icmp(ping_addr_neigh);
}
void serial_test_tc_netkit_basic(void)
{
LIBBPF_OPTS(bpf_prog_query_opts, optq);
LIBBPF_OPTS(bpf_netkit_opts, optl);
__u32 prog_ids[2], link_ids[2];
__u32 pid1, pid2, lid1, lid2;
struct test_tc_link *skel;
struct bpf_link *link;
int err, ifindex;
err = create_netkit(NETKIT_L2, NETKIT_PASS, NETKIT_PASS,
&ifindex, false);
if (err)
return;
skel = test_tc_link__open();
if (!ASSERT_OK_PTR(skel, "skel_open"))
goto cleanup;
ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1,
BPF_NETKIT_PRIMARY), 0, "tc1_attach_type");
ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc2,
BPF_NETKIT_PEER), 0, "tc2_attach_type");
err = test_tc_link__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto cleanup;
pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1));
pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc2));
ASSERT_NEQ(pid1, pid2, "prog_ids_1_2");
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 0);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
link = bpf_program__attach_netkit(skel->progs.tc1, ifindex, &optl);
if (!ASSERT_OK_PTR(link, "link_attach"))
goto cleanup;
skel->links.tc1 = link;
lid1 = id_from_link_fd(bpf_link__fd(skel->links.tc1));
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 1);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
optq.prog_ids = prog_ids;
optq.link_ids = link_ids;
memset(prog_ids, 0, sizeof(prog_ids));
memset(link_ids, 0, sizeof(link_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex, BPF_NETKIT_PRIMARY, &optq);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup;
ASSERT_EQ(optq.count, 1, "count");
ASSERT_EQ(optq.revision, 2, "revision");
ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
ASSERT_EQ(optq.link_ids[0], lid1, "link_ids[0]");
ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
tc_skel_reset_all_seen(skel);
ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
link = bpf_program__attach_netkit(skel->progs.tc2, ifindex, &optl);
if (!ASSERT_OK_PTR(link, "link_attach"))
goto cleanup;
skel->links.tc2 = link;
lid2 = id_from_link_fd(bpf_link__fd(skel->links.tc2));
ASSERT_NEQ(lid1, lid2, "link_ids_1_2");
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 1);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 1);
memset(prog_ids, 0, sizeof(prog_ids));
memset(link_ids, 0, sizeof(link_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex, BPF_NETKIT_PEER, &optq);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup;
ASSERT_EQ(optq.count, 1, "count");
ASSERT_EQ(optq.revision, 2, "revision");
ASSERT_EQ(optq.prog_ids[0], pid2, "prog_ids[0]");
ASSERT_EQ(optq.link_ids[0], lid2, "link_ids[0]");
ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
tc_skel_reset_all_seen(skel);
ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
ASSERT_EQ(skel->bss->seen_tc2, true, "seen_tc2");
cleanup:
test_tc_link__destroy(skel);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 0);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
destroy_netkit();
}
static void serial_test_tc_netkit_multi_links_target(int mode, int target)
{
LIBBPF_OPTS(bpf_prog_query_opts, optq);
LIBBPF_OPTS(bpf_netkit_opts, optl);
__u32 prog_ids[3], link_ids[3];
__u32 pid1, pid2, lid1, lid2;
struct test_tc_link *skel;
struct bpf_link *link;
int err, ifindex;
err = create_netkit(mode, NETKIT_PASS, NETKIT_PASS,
&ifindex, false);
if (err)
return;
skel = test_tc_link__open();
if (!ASSERT_OK_PTR(skel, "skel_open"))
goto cleanup;
ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1,
target), 0, "tc1_attach_type");
ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc2,
target), 0, "tc2_attach_type");
err = test_tc_link__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto cleanup;
pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1));
pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc2));
ASSERT_NEQ(pid1, pid2, "prog_ids_1_2");
assert_mprog_count_ifindex(ifindex, target, 0);
ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
ASSERT_EQ(skel->bss->seen_eth, false, "seen_eth");
ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
link = bpf_program__attach_netkit(skel->progs.tc1, ifindex, &optl);
if (!ASSERT_OK_PTR(link, "link_attach"))
goto cleanup;
skel->links.tc1 = link;
lid1 = id_from_link_fd(bpf_link__fd(skel->links.tc1));
assert_mprog_count_ifindex(ifindex, target, 1);
optq.prog_ids = prog_ids;
optq.link_ids = link_ids;
memset(prog_ids, 0, sizeof(prog_ids));
memset(link_ids, 0, sizeof(link_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex, target, &optq);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup;
ASSERT_EQ(optq.count, 1, "count");
ASSERT_EQ(optq.revision, 2, "revision");
ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
ASSERT_EQ(optq.link_ids[0], lid1, "link_ids[0]");
ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
tc_skel_reset_all_seen(skel);
ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
ASSERT_EQ(skel->bss->seen_eth, true, "seen_eth");
ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
LIBBPF_OPTS_RESET(optl,
.flags = BPF_F_BEFORE,
.relative_fd = bpf_program__fd(skel->progs.tc1),
);
link = bpf_program__attach_netkit(skel->progs.tc2, ifindex, &optl);
if (!ASSERT_OK_PTR(link, "link_attach"))
goto cleanup;
skel->links.tc2 = link;
lid2 = id_from_link_fd(bpf_link__fd(skel->links.tc2));
ASSERT_NEQ(lid1, lid2, "link_ids_1_2");
assert_mprog_count_ifindex(ifindex, target, 2);
memset(prog_ids, 0, sizeof(prog_ids));
memset(link_ids, 0, sizeof(link_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex, target, &optq);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup;
ASSERT_EQ(optq.count, 2, "count");
ASSERT_EQ(optq.revision, 3, "revision");
ASSERT_EQ(optq.prog_ids[0], pid2, "prog_ids[0]");
ASSERT_EQ(optq.link_ids[0], lid2, "link_ids[0]");
ASSERT_EQ(optq.prog_ids[1], pid1, "prog_ids[1]");
ASSERT_EQ(optq.link_ids[1], lid1, "link_ids[1]");
ASSERT_EQ(optq.prog_ids[2], 0, "prog_ids[2]");
ASSERT_EQ(optq.link_ids[2], 0, "link_ids[2]");
tc_skel_reset_all_seen(skel);
ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
ASSERT_EQ(skel->bss->seen_eth, true, "seen_eth");
ASSERT_EQ(skel->bss->seen_tc2, true, "seen_tc2");
cleanup:
test_tc_link__destroy(skel);
assert_mprog_count_ifindex(ifindex, target, 0);
destroy_netkit();
}
void serial_test_tc_netkit_multi_links(void)
{
serial_test_tc_netkit_multi_links_target(NETKIT_L2, BPF_NETKIT_PRIMARY);
serial_test_tc_netkit_multi_links_target(NETKIT_L3, BPF_NETKIT_PRIMARY);
serial_test_tc_netkit_multi_links_target(NETKIT_L2, BPF_NETKIT_PEER);
serial_test_tc_netkit_multi_links_target(NETKIT_L3, BPF_NETKIT_PEER);
}
static void serial_test_tc_netkit_multi_opts_target(int mode, int target)
{
LIBBPF_OPTS(bpf_prog_attach_opts, opta);
LIBBPF_OPTS(bpf_prog_detach_opts, optd);
LIBBPF_OPTS(bpf_prog_query_opts, optq);
__u32 pid1, pid2, fd1, fd2;
__u32 prog_ids[3];
struct test_tc_link *skel;
int err, ifindex;
err = create_netkit(mode, NETKIT_PASS, NETKIT_PASS,
&ifindex, false);
if (err)
return;
skel = test_tc_link__open_and_load();
if (!ASSERT_OK_PTR(skel, "skel_load"))
goto cleanup;
fd1 = bpf_program__fd(skel->progs.tc1);
fd2 = bpf_program__fd(skel->progs.tc2);
pid1 = id_from_prog_fd(fd1);
pid2 = id_from_prog_fd(fd2);
ASSERT_NEQ(pid1, pid2, "prog_ids_1_2");
assert_mprog_count_ifindex(ifindex, target, 0);
ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
ASSERT_EQ(skel->bss->seen_eth, false, "seen_eth");
ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
err = bpf_prog_attach_opts(fd1, ifindex, target, &opta);
if (!ASSERT_EQ(err, 0, "prog_attach"))
goto cleanup;
assert_mprog_count_ifindex(ifindex, target, 1);
optq.prog_ids = prog_ids;
memset(prog_ids, 0, sizeof(prog_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex, target, &optq);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup_fd1;
ASSERT_EQ(optq.count, 1, "count");
ASSERT_EQ(optq.revision, 2, "revision");
ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
tc_skel_reset_all_seen(skel);
ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
ASSERT_EQ(skel->bss->seen_eth, true, "seen_eth");
ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
LIBBPF_OPTS_RESET(opta,
.flags = BPF_F_BEFORE,
.relative_fd = fd1,
);
err = bpf_prog_attach_opts(fd2, ifindex, target, &opta);
if (!ASSERT_EQ(err, 0, "prog_attach"))
goto cleanup_fd1;
assert_mprog_count_ifindex(ifindex, target, 2);
memset(prog_ids, 0, sizeof(prog_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex, target, &optq);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup_fd2;
ASSERT_EQ(optq.count, 2, "count");
ASSERT_EQ(optq.revision, 3, "revision");
ASSERT_EQ(optq.prog_ids[0], pid2, "prog_ids[0]");
ASSERT_EQ(optq.prog_ids[1], pid1, "prog_ids[1]");
ASSERT_EQ(optq.prog_ids[2], 0, "prog_ids[2]");
tc_skel_reset_all_seen(skel);
ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
ASSERT_EQ(skel->bss->seen_eth, true, "seen_eth");
ASSERT_EQ(skel->bss->seen_tc2, true, "seen_tc2");
cleanup_fd2:
err = bpf_prog_detach_opts(fd2, ifindex, target, &optd);
ASSERT_OK(err, "prog_detach");
assert_mprog_count_ifindex(ifindex, target, 1);
cleanup_fd1:
err = bpf_prog_detach_opts(fd1, ifindex, target, &optd);
ASSERT_OK(err, "prog_detach");
assert_mprog_count_ifindex(ifindex, target, 0);
cleanup:
test_tc_link__destroy(skel);
assert_mprog_count_ifindex(ifindex, target, 0);
destroy_netkit();
}
void serial_test_tc_netkit_multi_opts(void)
{
serial_test_tc_netkit_multi_opts_target(NETKIT_L2, BPF_NETKIT_PRIMARY);
serial_test_tc_netkit_multi_opts_target(NETKIT_L3, BPF_NETKIT_PRIMARY);
serial_test_tc_netkit_multi_opts_target(NETKIT_L2, BPF_NETKIT_PEER);
serial_test_tc_netkit_multi_opts_target(NETKIT_L3, BPF_NETKIT_PEER);
}
void serial_test_tc_netkit_device(void)
{
LIBBPF_OPTS(bpf_prog_query_opts, optq);
LIBBPF_OPTS(bpf_netkit_opts, optl);
__u32 prog_ids[2], link_ids[2];
__u32 pid1, pid2, lid1;
struct test_tc_link *skel;
struct bpf_link *link;
int err, ifindex, ifindex2;
err = create_netkit(NETKIT_L3, NETKIT_PASS, NETKIT_PASS,
&ifindex, true);
if (err)
return;
ifindex2 = if_nametoindex(netkit_peer);
ASSERT_NEQ(ifindex, ifindex2, "ifindex_1_2");
skel = test_tc_link__open();
if (!ASSERT_OK_PTR(skel, "skel_open"))
goto cleanup;
ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1,
BPF_NETKIT_PRIMARY), 0, "tc1_attach_type");
ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc2,
BPF_NETKIT_PEER), 0, "tc2_attach_type");
ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc3,
BPF_NETKIT_PRIMARY), 0, "tc3_attach_type");
err = test_tc_link__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto cleanup;
pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1));
pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc2));
ASSERT_NEQ(pid1, pid2, "prog_ids_1_2");
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 0);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
link = bpf_program__attach_netkit(skel->progs.tc1, ifindex, &optl);
if (!ASSERT_OK_PTR(link, "link_attach"))
goto cleanup;
skel->links.tc1 = link;
lid1 = id_from_link_fd(bpf_link__fd(skel->links.tc1));
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 1);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
optq.prog_ids = prog_ids;
optq.link_ids = link_ids;
memset(prog_ids, 0, sizeof(prog_ids));
memset(link_ids, 0, sizeof(link_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex, BPF_NETKIT_PRIMARY, &optq);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup;
ASSERT_EQ(optq.count, 1, "count");
ASSERT_EQ(optq.revision, 2, "revision");
ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
ASSERT_EQ(optq.link_ids[0], lid1, "link_ids[0]");
ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
tc_skel_reset_all_seen(skel);
ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
memset(prog_ids, 0, sizeof(prog_ids));
memset(link_ids, 0, sizeof(link_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex2, BPF_NETKIT_PRIMARY, &optq);
ASSERT_EQ(err, -EACCES, "prog_query_should_fail");
err = bpf_prog_query_opts(ifindex2, BPF_NETKIT_PEER, &optq);
ASSERT_EQ(err, -EACCES, "prog_query_should_fail");
link = bpf_program__attach_netkit(skel->progs.tc2, ifindex2, &optl);
if (!ASSERT_ERR_PTR(link, "link_attach_should_fail")) {
bpf_link__destroy(link);
goto cleanup;
}
link = bpf_program__attach_netkit(skel->progs.tc3, ifindex2, &optl);
if (!ASSERT_ERR_PTR(link, "link_attach_should_fail")) {
bpf_link__destroy(link);
goto cleanup;
}
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 1);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
cleanup:
test_tc_link__destroy(skel);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 0);
assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
destroy_netkit();
}
static void serial_test_tc_netkit_neigh_links_target(int mode, int target)
{
LIBBPF_OPTS(bpf_prog_query_opts, optq);
LIBBPF_OPTS(bpf_netkit_opts, optl);
__u32 prog_ids[2], link_ids[2];
__u32 pid1, lid1;
struct test_tc_link *skel;
struct bpf_link *link;
int err, ifindex;
err = create_netkit(mode, NETKIT_PASS, NETKIT_PASS,
&ifindex, false);
if (err)
return;
skel = test_tc_link__open();
if (!ASSERT_OK_PTR(skel, "skel_open"))
goto cleanup;
ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1,
BPF_NETKIT_PRIMARY), 0, "tc1_attach_type");
err = test_tc_link__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto cleanup;
pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1));
assert_mprog_count_ifindex(ifindex, target, 0);
ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
ASSERT_EQ(skel->bss->seen_eth, false, "seen_eth");
link = bpf_program__attach_netkit(skel->progs.tc1, ifindex, &optl);
if (!ASSERT_OK_PTR(link, "link_attach"))
goto cleanup;
skel->links.tc1 = link;
lid1 = id_from_link_fd(bpf_link__fd(skel->links.tc1));
assert_mprog_count_ifindex(ifindex, target, 1);
optq.prog_ids = prog_ids;
optq.link_ids = link_ids;
memset(prog_ids, 0, sizeof(prog_ids));
memset(link_ids, 0, sizeof(link_ids));
optq.count = ARRAY_SIZE(prog_ids);
err = bpf_prog_query_opts(ifindex, target, &optq);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup;
ASSERT_EQ(optq.count, 1, "count");
ASSERT_EQ(optq.revision, 2, "revision");
ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
ASSERT_EQ(optq.link_ids[0], lid1, "link_ids[0]");
ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
tc_skel_reset_all_seen(skel);
ASSERT_EQ(__send_icmp(ping_addr_noneigh), 0, "icmp_pkt");
ASSERT_EQ(skel->bss->seen_tc1, true /* L2: ARP */, "seen_tc1");
ASSERT_EQ(skel->bss->seen_eth, mode == NETKIT_L3, "seen_eth");
cleanup:
test_tc_link__destroy(skel);
assert_mprog_count_ifindex(ifindex, target, 0);
destroy_netkit();
}
void serial_test_tc_netkit_neigh_links(void)
{
serial_test_tc_netkit_neigh_links_target(NETKIT_L2, BPF_NETKIT_PRIMARY);
serial_test_tc_netkit_neigh_links_target(NETKIT_L3, BPF_NETKIT_PRIMARY);
}


@ -2471,7 +2471,7 @@ static void test_tc_opts_query_target(int target)
__u32 fd1, fd2, fd3, fd4, id1, id2, id3, id4;
struct test_tc_link *skel;
union bpf_attr attr;
__u32 prog_ids[5];
__u32 prog_ids[10];
int err;
skel = test_tc_link__open_and_load();
@ -2599,6 +2599,135 @@ static void test_tc_opts_query_target(int target)
ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
/* Test 3: Query with smaller prog_ids array */
memset(&attr, 0, attr_size);
attr.query.target_ifindex = loopback;
attr.query.attach_type = target;
memset(prog_ids, 0, sizeof(prog_ids));
attr.query.prog_ids = ptr_to_u64(prog_ids);
attr.query.count = 2;
err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
ASSERT_EQ(err, -1, "prog_query_should_fail");
ASSERT_EQ(errno, ENOSPC, "prog_query_should_fail");
ASSERT_EQ(attr.query.count, 4, "count");
ASSERT_EQ(attr.query.revision, 5, "revision");
ASSERT_EQ(attr.query.query_flags, 0, "query_flags");
ASSERT_EQ(attr.query.attach_flags, 0, "attach_flags");
ASSERT_EQ(attr.query.target_ifindex, loopback, "target_ifindex");
ASSERT_EQ(attr.query.attach_type, target, "attach_type");
ASSERT_EQ(attr.query.prog_ids, ptr_to_u64(prog_ids), "prog_ids");
ASSERT_EQ(prog_ids[0], id1, "prog_ids[0]");
ASSERT_EQ(prog_ids[1], id2, "prog_ids[1]");
ASSERT_EQ(prog_ids[2], 0, "prog_ids[2]");
ASSERT_EQ(prog_ids[3], 0, "prog_ids[3]");
ASSERT_EQ(prog_ids[4], 0, "prog_ids[4]");
ASSERT_EQ(attr.query.prog_attach_flags, 0, "prog_attach_flags");
ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
/* Test 4: Query with larger prog_ids array */
memset(&attr, 0, attr_size);
attr.query.target_ifindex = loopback;
attr.query.attach_type = target;
memset(prog_ids, 0, sizeof(prog_ids));
attr.query.prog_ids = ptr_to_u64(prog_ids);
attr.query.count = 10;
err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup4;
ASSERT_EQ(attr.query.count, 4, "count");
ASSERT_EQ(attr.query.revision, 5, "revision");
ASSERT_EQ(attr.query.query_flags, 0, "query_flags");
ASSERT_EQ(attr.query.attach_flags, 0, "attach_flags");
ASSERT_EQ(attr.query.target_ifindex, loopback, "target_ifindex");
ASSERT_EQ(attr.query.attach_type, target, "attach_type");
ASSERT_EQ(attr.query.prog_ids, ptr_to_u64(prog_ids), "prog_ids");
ASSERT_EQ(prog_ids[0], id1, "prog_ids[0]");
ASSERT_EQ(prog_ids[1], id2, "prog_ids[1]");
ASSERT_EQ(prog_ids[2], id3, "prog_ids[2]");
ASSERT_EQ(prog_ids[3], id4, "prog_ids[3]");
ASSERT_EQ(prog_ids[4], 0, "prog_ids[4]");
ASSERT_EQ(attr.query.prog_attach_flags, 0, "prog_attach_flags");
ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
/* Test 5: Query with NULL prog_ids array but with count > 0 */
memset(&attr, 0, attr_size);
attr.query.target_ifindex = loopback;
attr.query.attach_type = target;
memset(prog_ids, 0, sizeof(prog_ids));
attr.query.count = sizeof(prog_ids);
err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup4;
ASSERT_EQ(attr.query.count, 4, "count");
ASSERT_EQ(attr.query.revision, 5, "revision");
ASSERT_EQ(attr.query.query_flags, 0, "query_flags");
ASSERT_EQ(attr.query.attach_flags, 0, "attach_flags");
ASSERT_EQ(attr.query.target_ifindex, loopback, "target_ifindex");
ASSERT_EQ(attr.query.attach_type, target, "attach_type");
ASSERT_EQ(prog_ids[0], 0, "prog_ids[0]");
ASSERT_EQ(prog_ids[1], 0, "prog_ids[1]");
ASSERT_EQ(prog_ids[2], 0, "prog_ids[2]");
ASSERT_EQ(prog_ids[3], 0, "prog_ids[3]");
ASSERT_EQ(prog_ids[4], 0, "prog_ids[4]");
ASSERT_EQ(attr.query.prog_ids, 0, "prog_ids");
ASSERT_EQ(attr.query.prog_attach_flags, 0, "prog_attach_flags");
ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
/* Test 6: Query with non-NULL prog_ids array but with count == 0 */
memset(&attr, 0, attr_size);
attr.query.target_ifindex = loopback;
attr.query.attach_type = target;
memset(prog_ids, 0, sizeof(prog_ids));
attr.query.prog_ids = ptr_to_u64(prog_ids);
err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
if (!ASSERT_OK(err, "prog_query"))
goto cleanup4;
ASSERT_EQ(attr.query.count, 4, "count");
ASSERT_EQ(attr.query.revision, 5, "revision");
ASSERT_EQ(attr.query.query_flags, 0, "query_flags");
ASSERT_EQ(attr.query.attach_flags, 0, "attach_flags");
ASSERT_EQ(attr.query.target_ifindex, loopback, "target_ifindex");
ASSERT_EQ(attr.query.attach_type, target, "attach_type");
ASSERT_EQ(prog_ids[0], 0, "prog_ids[0]");
ASSERT_EQ(prog_ids[1], 0, "prog_ids[1]");
ASSERT_EQ(prog_ids[2], 0, "prog_ids[2]");
ASSERT_EQ(prog_ids[3], 0, "prog_ids[3]");
ASSERT_EQ(prog_ids[4], 0, "prog_ids[4]");
ASSERT_EQ(attr.query.prog_ids, ptr_to_u64(prog_ids), "prog_ids");
ASSERT_EQ(attr.query.prog_attach_flags, 0, "prog_attach_flags");
ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
/* Test 7: Query with invalid flags */
attr.query.attach_flags = 0;
attr.query.query_flags = 1;
err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
ASSERT_EQ(err, -1, "prog_query_should_fail");
ASSERT_EQ(errno, EINVAL, "prog_query_should_fail");
attr.query.attach_flags = 1;
attr.query.query_flags = 0;
err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
ASSERT_EQ(err, -1, "prog_query_should_fail");
ASSERT_EQ(errno, EINVAL, "prog_query_should_fail");
cleanup4:
err = bpf_prog_detach_opts(fd4, loopback, target, &optd);
ASSERT_OK(err, "prog_detach");


@ -9,9 +9,10 @@
#include "test_bpf_ma.skel.h"
void test_test_bpf_ma(void)
static void do_bpf_ma_test(const char *name)
{
struct test_bpf_ma *skel;
struct bpf_program *prog;
struct btf *btf;
int i, err;
@ -34,6 +35,11 @@ void test_test_bpf_ma(void)
skel->rodata->data_btf_ids[i] = id;
}
prog = bpf_object__find_program_by_name(skel->obj, name);
if (!ASSERT_OK_PTR(prog, "invalid prog name"))
goto out;
bpf_program__set_autoload(prog, true);
err = test_bpf_ma__load(skel);
if (!ASSERT_OK(err, "load"))
goto out;
@ -48,3 +54,15 @@ void test_test_bpf_ma(void)
out:
test_bpf_ma__destroy(skel);
}
void test_test_bpf_ma(void)
{
if (test__start_subtest("batch_alloc_free"))
do_bpf_ma_test("test_batch_alloc_free");
if (test__start_subtest("free_through_map_free"))
do_bpf_ma_test("test_free_through_map_free");
if (test__start_subtest("batch_percpu_alloc_free"))
do_bpf_ma_test("test_batch_percpu_alloc_free");
if (test__start_subtest("percpu_free_through_map_free"))
do_bpf_ma_test("test_percpu_free_through_map_free");
}


@ -14,6 +14,13 @@ int my_pid;
int arr[256];
int small_arr[16] SEC(".data.small_arr");
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(max_entries, 10);
__type(key, int);
__type(value, int);
} amap SEC(".maps");
#ifdef REAL_TEST
#define MY_PID_GUARD() if (my_pid != (bpf_get_current_pid_tgid() >> 32)) return 0
#else
@ -716,4 +723,692 @@ int iter_pass_iter_ptr_to_subprog(const void *ctx)
return 0;
}
SEC("?raw_tp")
__failure
__msg("R1 type=scalar expected=fp")
__naked int delayed_read_mark(void)
{
/* This is equivalent to C program below.
* The call to bpf_iter_num_next() is reachable with r7 values &fp[-16] and 0xdead.
* The state with r7=&fp[-16] is visited first and follows the r6 != 42 ... continue branch.
* At this point the iterator next() call is reached with an r7 that has no read mark.
* The loop body with r7=0xdead would only be visited if the verifier decided to continue
* with a second loop iteration. The absence of a read mark on r7 might affect the state
* equivalence logic used for iterator convergence tracking.
*
* r7 = &fp[-16]
* fp[-16] = 0
* r6 = bpf_get_prandom_u32()
* bpf_iter_num_new(&fp[-8], 0, 10)
* while (bpf_iter_num_next(&fp[-8])) {
* r6++
* if (r6 != 42) {
* r7 = 0xdead
* continue;
* }
* bpf_probe_read_user(r7, 8, 0xdeadbeef); // this is not safe
* }
* bpf_iter_num_destroy(&fp[-8])
* return 0
*/
asm volatile (
"r7 = r10;"
"r7 += -16;"
"r0 = 0;"
"*(u64 *)(r7 + 0) = r0;"
"call %[bpf_get_prandom_u32];"
"r6 = r0;"
"r1 = r10;"
"r1 += -8;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"1:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto 2f;"
"r6 += 1;"
"if r6 != 42 goto 3f;"
"r7 = 0xdead;"
"goto 1b;"
"3:"
"r1 = r7;"
"r2 = 8;"
"r3 = 0xdeadbeef;"
"call %[bpf_probe_read_user];"
"goto 1b;"
"2:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r0 = 0;"
"exit;"
:
: __imm(bpf_get_prandom_u32),
__imm(bpf_iter_num_new),
__imm(bpf_iter_num_next),
__imm(bpf_iter_num_destroy),
__imm(bpf_probe_read_user)
: __clobber_all
);
}
SEC("?raw_tp")
__failure
__msg("math between fp pointer and register with unbounded")
__naked int delayed_precision_mark(void)
{
/* This is equivalent to C program below.
* The test is similar to delayed_read_mark but verifies that incomplete
* precision marks don't fool the verifier.
* The call to bpf_iter_num_next() is reachable with r7 values -16 and -32.
* The state with r7=-16 is visited first and follows the r6 != 42 ... continue branch.
* At this point the iterator next() call is reached with an r7 that has no read
* or precision marks.
* The loop body with r7=-32 would only be visited if the verifier decided to continue
* with a second loop iteration. The absence of a precision mark on r7 might affect the
* state equivalence logic used for iterator convergence tracking.
*
* r8 = 0
* fp[-16] = 0
* r7 = -16
* r6 = bpf_get_prandom_u32()
* bpf_iter_num_new(&fp[-8], 0, 10)
* while (bpf_iter_num_next(&fp[-8])) {
* if (r6 != 42) {
* r7 = -32
* r6 = bpf_get_prandom_u32()
* continue;
* }
* r0 = r10
* r0 += r7
* r8 = *(u64 *)(r0 + 0) // this is not safe
* r6 = bpf_get_prandom_u32()
* }
* bpf_iter_num_destroy(&fp[-8])
* return r8
*/
asm volatile (
"r8 = 0;"
"*(u64 *)(r10 - 16) = r8;"
"r7 = -16;"
"call %[bpf_get_prandom_u32];"
"r6 = r0;"
"r1 = r10;"
"r1 += -8;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"1:"
"r1 = r10;"
"r1 += -8;\n"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto 2f;"
"if r6 != 42 goto 3f;"
"r7 = -32;"
"call %[bpf_get_prandom_u32];"
"r6 = r0;"
"goto 1b;\n"
"3:"
"r0 = r10;"
"r0 += r7;"
"r8 = *(u64 *)(r0 + 0);"
"call %[bpf_get_prandom_u32];"
"r6 = r0;"
"goto 1b;\n"
"2:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r0 = r8;"
"exit;"
:
: __imm(bpf_get_prandom_u32),
__imm(bpf_iter_num_new),
__imm(bpf_iter_num_next),
__imm(bpf_iter_num_destroy),
__imm(bpf_probe_read_user)
: __clobber_all
);
}
SEC("?raw_tp")
__failure
__msg("math between fp pointer and register with unbounded")
__flag(BPF_F_TEST_STATE_FREQ)
__naked int loop_state_deps1(void)
{
/* This is equivalent to C program below.
*
* The case turns out to be tricky in the sense that:
* - states with c=-25 are explored only on a second iteration
* of the outer loop;
* - states with read+precise mark on c are explored only on
* second iteration of the inner loop and in a state which
* is pushed to states stack first.
*
* Depending on the details of the iterator convergence logic,
* the verifier might stop state traversal too early and miss
* the unsafe c=-25 memory access.
*
* j = iter_new(); // fp[-16]
* a = 0; // r6
* b = 0; // r7
* c = -24; // r8
* while (iter_next(j)) {
* i = iter_new(); // fp[-8]
* a = 0; // r6
* b = 0; // r7
* while (iter_next(i)) {
* if (a == 1) {
* a = 0;
* b = 1;
* } else if (a == 0) {
* a = 1;
* if (random() == 42)
* continue;
* if (b == 1) {
* *(r10 + c) = 7; // this is not safe
* iter_destroy(i);
* iter_destroy(j);
* return;
* }
* }
* }
* iter_destroy(i);
* a = 0;
* b = 0;
* c = -25;
* }
* iter_destroy(j);
* return;
*/
asm volatile (
"r1 = r10;"
"r1 += -16;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"r6 = 0;"
"r7 = 0;"
"r8 = -24;"
"j_loop_%=:"
"r1 = r10;"
"r1 += -16;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto j_loop_end_%=;"
"r1 = r10;"
"r1 += -8;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"r6 = 0;"
"r7 = 0;"
"i_loop_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto i_loop_end_%=;"
"check_one_r6_%=:"
"if r6 != 1 goto check_zero_r6_%=;"
"r6 = 0;"
"r7 = 1;"
"goto i_loop_%=;"
"check_zero_r6_%=:"
"if r6 != 0 goto i_loop_%=;"
"r6 = 1;"
"call %[bpf_get_prandom_u32];"
"if r0 != 42 goto check_one_r7_%=;"
"goto i_loop_%=;"
"check_one_r7_%=:"
"if r7 != 1 goto i_loop_%=;"
"r0 = r10;"
"r0 += r8;"
"r1 = 7;"
"*(u64 *)(r0 + 0) = r1;"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r1 = r10;"
"r1 += -16;"
"call %[bpf_iter_num_destroy];"
"r0 = 0;"
"exit;"
"i_loop_end_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r6 = 0;"
"r7 = 0;"
"r8 = -25;"
"goto j_loop_%=;"
"j_loop_end_%=:"
"r1 = r10;"
"r1 += -16;"
"call %[bpf_iter_num_destroy];"
"r0 = 0;"
"exit;"
:
: __imm(bpf_get_prandom_u32),
__imm(bpf_iter_num_new),
__imm(bpf_iter_num_next),
__imm(bpf_iter_num_destroy)
: __clobber_all
);
}
SEC("?raw_tp")
__failure
__msg("math between fp pointer and register with unbounded")
__flag(BPF_F_TEST_STATE_FREQ)
__naked int loop_state_deps2(void)
{
/* This is equivalent to C program below.
*
* The case turns out to be tricky in the sense that:
* - states with read+precise mark on c are explored only on a second
* iteration of the first inner loop and in a state which is pushed to
* states stack first.
* - states with c=-25 are explored only on a second iteration of the
* second inner loop and in a state which is pushed to states stack
* first.
*
* Depending on the details of the iterator convergence logic,
* the verifier might stop state traversal too early and miss
* the unsafe c=-25 memory access.
*
* j = iter_new(); // fp[-16]
* a = 0; // r6
* b = 0; // r7
* c = -24; // r8
* while (iter_next(j)) {
* i = iter_new(); // fp[-8]
* a = 0; // r6
* b = 0; // r7
* while (iter_next(i)) {
* if (a == 1) {
* a = 0;
* b = 1;
* } else if (a == 0) {
* a = 1;
* if (random() == 42)
* continue;
* if (b == 1) {
* *(r10 + c) = 7; // this is not safe
* iter_destroy(i);
* iter_destroy(j);
* return;
* }
* }
* }
* iter_destroy(i);
* i = iter_new(); // fp[-8]
* a = 0; // r6
* b = 0; // r7
* while (iter_next(i)) {
* if (a == 1) {
* a = 0;
* b = 1;
* } else if (a == 0) {
* a = 1;
* if (random() == 42)
* continue;
* if (b == 1) {
* a = 0;
* c = -25;
* }
* }
* }
* iter_destroy(i);
* }
* iter_destroy(j);
* return;
*/
asm volatile (
"r1 = r10;"
"r1 += -16;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"r6 = 0;"
"r7 = 0;"
"r8 = -24;"
"j_loop_%=:"
"r1 = r10;"
"r1 += -16;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto j_loop_end_%=;"
/* first inner loop */
"r1 = r10;"
"r1 += -8;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"r6 = 0;"
"r7 = 0;"
"i_loop_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto i_loop_end_%=;"
"check_one_r6_%=:"
"if r6 != 1 goto check_zero_r6_%=;"
"r6 = 0;"
"r7 = 1;"
"goto i_loop_%=;"
"check_zero_r6_%=:"
"if r6 != 0 goto i_loop_%=;"
"r6 = 1;"
"call %[bpf_get_prandom_u32];"
"if r0 != 42 goto check_one_r7_%=;"
"goto i_loop_%=;"
"check_one_r7_%=:"
"if r7 != 1 goto i_loop_%=;"
"r0 = r10;"
"r0 += r8;"
"r1 = 7;"
"*(u64 *)(r0 + 0) = r1;"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r1 = r10;"
"r1 += -16;"
"call %[bpf_iter_num_destroy];"
"r0 = 0;"
"exit;"
"i_loop_end_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
/* second inner loop */
"r1 = r10;"
"r1 += -8;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"r6 = 0;"
"r7 = 0;"
"i2_loop_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto i2_loop_end_%=;"
"check2_one_r6_%=:"
"if r6 != 1 goto check2_zero_r6_%=;"
"r6 = 0;"
"r7 = 1;"
"goto i2_loop_%=;"
"check2_zero_r6_%=:"
"if r6 != 0 goto i2_loop_%=;"
"r6 = 1;"
"call %[bpf_get_prandom_u32];"
"if r0 != 42 goto check2_one_r7_%=;"
"goto i2_loop_%=;"
"check2_one_r7_%=:"
"if r7 != 1 goto i2_loop_%=;"
"r6 = 0;"
"r8 = -25;"
"goto i2_loop_%=;"
"i2_loop_end_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r6 = 0;"
"r7 = 0;"
"goto j_loop_%=;"
"j_loop_end_%=:"
"r1 = r10;"
"r1 += -16;"
"call %[bpf_iter_num_destroy];"
"r0 = 0;"
"exit;"
:
: __imm(bpf_get_prandom_u32),
__imm(bpf_iter_num_new),
__imm(bpf_iter_num_next),
__imm(bpf_iter_num_destroy)
: __clobber_all
);
}
SEC("?raw_tp")
__success
__naked int triple_continue(void)
{
/* This is equivalent to C program below.
* High branching factor of the loop body turned out to be
* problematic for one of the iterator convergence tracking
* algorithms explored.
*
* r6 = bpf_get_prandom_u32()
* bpf_iter_num_new(&fp[-8], 0, 10)
* while (bpf_iter_num_next(&fp[-8])) {
* if (bpf_get_prandom_u32() != 42)
* continue;
* if (bpf_get_prandom_u32() != 42)
* continue;
* if (bpf_get_prandom_u32() != 42)
* continue;
* r0 += 0;
* }
* bpf_iter_num_destroy(&fp[-8])
* return 0
*/
asm volatile (
"r1 = r10;"
"r1 += -8;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"loop_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto loop_end_%=;"
"call %[bpf_get_prandom_u32];"
"if r0 != 42 goto loop_%=;"
"call %[bpf_get_prandom_u32];"
"if r0 != 42 goto loop_%=;"
"call %[bpf_get_prandom_u32];"
"if r0 != 42 goto loop_%=;"
"r0 += 0;"
"goto loop_%=;"
"loop_end_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r0 = 0;"
"exit;"
:
: __imm(bpf_get_prandom_u32),
__imm(bpf_iter_num_new),
__imm(bpf_iter_num_next),
__imm(bpf_iter_num_destroy)
: __clobber_all
);
}
SEC("?raw_tp")
__success
__naked int widen_spill(void)
{
/* This is equivalent to C program below.
* The counter is stored in fp[-16]; if this counter is not widened,
* the verifier states representing loop iterations would never converge.
*
* fp[-16] = 0
* bpf_iter_num_new(&fp[-8], 0, 10)
* while (bpf_iter_num_next(&fp[-8])) {
* r0 = fp[-16];
* r0 += 1;
* fp[-16] = r0;
* }
* bpf_iter_num_destroy(&fp[-8])
* return 0
*/
asm volatile (
"r0 = 0;"
"*(u64 *)(r10 - 16) = r0;"
"r1 = r10;"
"r1 += -8;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"loop_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto loop_end_%=;"
"r0 = *(u64 *)(r10 - 16);"
"r0 += 1;"
"*(u64 *)(r10 - 16) = r0;"
"goto loop_%=;"
"loop_end_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r0 = 0;"
"exit;"
:
: __imm(bpf_iter_num_new),
__imm(bpf_iter_num_next),
__imm(bpf_iter_num_destroy)
: __clobber_all
);
}
SEC("raw_tp")
__success
__naked int checkpoint_states_deletion(void)
{
/* This is equivalent to C program below.
*
* int *a, *b, *c, *d, *e, *f;
* int i, sum = 0;
* bpf_for(i, 0, 10) {
* a = bpf_map_lookup_elem(&amap, &i);
* b = bpf_map_lookup_elem(&amap, &i);
* c = bpf_map_lookup_elem(&amap, &i);
* d = bpf_map_lookup_elem(&amap, &i);
* e = bpf_map_lookup_elem(&amap, &i);
* f = bpf_map_lookup_elem(&amap, &i);
* if (a) sum += 1;
* if (b) sum += 1;
* if (c) sum += 1;
* if (d) sum += 1;
* if (e) sum += 1;
* if (f) sum += 1;
* }
* return 0;
*
* The body of the loop spawns multiple simulation paths
* with different combinations of NULL/non-NULL information for a/b/c/d/e/f.
* Each combination is unique from the states_equal() point of view.
* An explored-states checkpoint is created after each iterator next call.
* The iterator convergence logic expects that the current state eventually
* becomes equal to one of the explored states, at which point loop
* exploration is finished (at least for a specific path).
* The verifier evicts explored states with a high miss-to-hit ratio
* to avoid comparing the current state with too many explored
* states per instruction.
* This test is designed to "stress test" the eviction policy defined by the formula:
*
* sl->miss_cnt > sl->hit_cnt * N + N // if true sl->state is evicted
*
* Currently N is set to 64, which allows for 6 variables in this test.
*/
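/* For example, with N = 64 a checkpoint state that was never hit
* (hit_cnt == 0) is evicted once it accumulates more than 64 misses, and a
* single hit raises that threshold to more than 128 misses. The 2^6 = 64
* distinct NULL/non-NULL combinations of a..f are meant to stay within this
* budget (assuming each combination reaches the checkpoint as a distinct
* state).
*/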
asm volatile (
"r6 = 0;" /* a */
"r7 = 0;" /* b */
"r8 = 0;" /* c */
"*(u64 *)(r10 - 24) = r6;" /* d */
"*(u64 *)(r10 - 32) = r6;" /* e */
"*(u64 *)(r10 - 40) = r6;" /* f */
"r9 = 0;" /* sum */
"r1 = r10;"
"r1 += -8;"
"r2 = 0;"
"r3 = 10;"
"call %[bpf_iter_num_new];"
"loop_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_next];"
"if r0 == 0 goto loop_end_%=;"
"*(u64 *)(r10 - 16) = r0;"
"r1 = %[amap] ll;"
"r2 = r10;"
"r2 += -16;"
"call %[bpf_map_lookup_elem];"
"r6 = r0;"
"r1 = %[amap] ll;"
"r2 = r10;"
"r2 += -16;"
"call %[bpf_map_lookup_elem];"
"r7 = r0;"
"r1 = %[amap] ll;"
"r2 = r10;"
"r2 += -16;"
"call %[bpf_map_lookup_elem];"
"r8 = r0;"
"r1 = %[amap] ll;"
"r2 = r10;"
"r2 += -16;"
"call %[bpf_map_lookup_elem];"
"*(u64 *)(r10 - 24) = r0;"
"r1 = %[amap] ll;"
"r2 = r10;"
"r2 += -16;"
"call %[bpf_map_lookup_elem];"
"*(u64 *)(r10 - 32) = r0;"
"r1 = %[amap] ll;"
"r2 = r10;"
"r2 += -16;"
"call %[bpf_map_lookup_elem];"
"*(u64 *)(r10 - 40) = r0;"
"if r6 == 0 goto +1;"
"r9 += 1;"
"if r7 == 0 goto +1;"
"r9 += 1;"
"if r8 == 0 goto +1;"
"r9 += 1;"
"r0 = *(u64 *)(r10 - 24);"
"if r0 == 0 goto +1;"
"r9 += 1;"
"r0 = *(u64 *)(r10 - 32);"
"if r0 == 0 goto +1;"
"r9 += 1;"
"r0 = *(u64 *)(r10 - 40);"
"if r0 == 0 goto +1;"
"r9 += 1;"
"goto loop_%=;"
"loop_end_%=:"
"r1 = r10;"
"r1 += -8;"
"call %[bpf_iter_num_destroy];"
"r0 = 0;"
"exit;"
:
: __imm(bpf_map_lookup_elem),
__imm(bpf_iter_num_new),
__imm(bpf_iter_num_next),
__imm(bpf_iter_num_destroy),
__imm_addr(amap)
: __clobber_all
);
}
char _license[] SEC("license") = "GPL";


@ -0,0 +1,72 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2023 Chuyi Zhou <zhouchuyi@bytedance.com> */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_misc.h"
#include "bpf_experimental.h"
char _license[] SEC("license") = "GPL";
pid_t target_pid;
u64 root_cg_id, leaf_cg_id;
u64 first_cg_id, last_cg_id;
int pre_order_cnt, post_order_cnt, tree_high;
struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
void bpf_cgroup_release(struct cgroup *p) __ksym;
void bpf_rcu_read_lock(void) __ksym;
void bpf_rcu_read_unlock(void) __ksym;
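/* Walk the css tree rooted at root_cg_id: count descendants in post- and
* pre-order, record the first/last cgroup ids seen, and derive the tree
* height by climbing from leaf_cg_id upwards; the user-space
* subtest_css_iters() test checks these counters after triggering getpgid().
*/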
SEC("fentry.s/" SYS_PREFIX "sys_getpgid")
int iter_css_for_each(const void *ctx)
{
struct task_struct *cur_task = bpf_get_current_task_btf();
struct cgroup_subsys_state *root_css, *leaf_css, *pos;
struct cgroup *root_cgrp, *leaf_cgrp, *cur_cgrp;
if (cur_task->pid != target_pid)
return 0;
root_cgrp = bpf_cgroup_from_id(root_cg_id);
if (!root_cgrp)
return 0;
leaf_cgrp = bpf_cgroup_from_id(leaf_cg_id);
if (!leaf_cgrp) {
bpf_cgroup_release(root_cgrp);
return 0;
}
root_css = &root_cgrp->self;
leaf_css = &leaf_cgrp->self;
pre_order_cnt = post_order_cnt = tree_high = 0;
first_cg_id = last_cg_id = 0;
bpf_rcu_read_lock();
bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_DESCENDANTS_POST) {
cur_cgrp = pos->cgroup;
post_order_cnt++;
last_cg_id = cur_cgrp->kn->id;
}
bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_DESCENDANTS_PRE) {
cur_cgrp = pos->cgroup;
pre_order_cnt++;
if (!first_cg_id)
first_cg_id = cur_cgrp->kn->id;
}
bpf_for_each(css, pos, leaf_css, BPF_CGROUP_ITER_ANCESTORS_UP)
tree_high++;
bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_ANCESTORS_UP)
tree_high--;
bpf_rcu_read_unlock();
bpf_cgroup_release(root_cgrp);
bpf_cgroup_release(leaf_cgrp);
return 0;
}


@ -0,0 +1,47 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2023 Chuyi Zhou <zhouchuyi@bytedance.com> */
#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_misc.h"
#include "bpf_experimental.h"
char _license[] SEC("license") = "GPL";
struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
void bpf_cgroup_release(struct cgroup *p) __ksym;
pid_t target_pid;
int css_task_cnt;
u64 cg_id;
SEC("lsm/file_mprotect")
int BPF_PROG(iter_css_task_for_each, struct vm_area_struct *vma,
unsigned long reqprot, unsigned long prot, int ret)
{
struct task_struct *cur_task = bpf_get_current_task_btf();
struct cgroup_subsys_state *css;
struct task_struct *task;
struct cgroup *cgrp;
if (cur_task->pid != target_pid)
return ret;
cgrp = bpf_cgroup_from_id(cg_id);
if (!cgrp)
return -EPERM;
css = &cgrp->self;
css_task_cnt = 0;
bpf_for_each(css_task, task, css, CSS_TASK_ITER_PROCS)
if (task->pid == target_pid)
css_task_cnt++;
bpf_cgroup_release(cgrp);
return -EPERM;
}

@@ -0,0 +1,41 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2023 Chuyi Zhou <zhouchuyi@bytedance.com> */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_misc.h"
#include "bpf_experimental.h"
char _license[] SEC("license") = "GPL";
pid_t target_pid;
int procs_cnt, threads_cnt, proc_threads_cnt;
void bpf_rcu_read_lock(void) __ksym;
void bpf_rcu_read_unlock(void) __ksym;
SEC("fentry.s/" SYS_PREFIX "sys_getpgid")
int iter_task_for_each_sleep(void *ctx)
{
struct task_struct *cur_task = bpf_get_current_task_btf();
struct task_struct *pos;
if (cur_task->pid != target_pid)
return 0;
procs_cnt = threads_cnt = proc_threads_cnt = 0;
bpf_rcu_read_lock();
bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_PROCS)
if (pos->pid == target_pid)
procs_cnt++;
bpf_for_each(task, pos, cur_task, BPF_TASK_ITER_PROC_THREADS)
proc_threads_cnt++;
bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_THREADS)
if (pos->tgid == target_pid)
threads_cnt++;
bpf_rcu_read_unlock();
return 0;
}
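
A minimal userspace driver for the program above could look like the following sketch, assuming the object is built into a libbpf skeleton named iters_task.skel.h (the actual selftest harness is not shown here and may differ):

/* Sketch of a userspace trigger for iter_task_for_each_sleep above.
 * Assumption: the BPF object builds into a skeleton "iters_task.skel.h".
 */
#include <unistd.h>
#include <sys/syscall.h>
#include "iters_task.skel.h"

static int run_task_iter_once(void)
{
	struct iters_task *skel;
	int err = -1;

	skel = iters_task__open_and_load();
	if (!skel)
		return -1;

	/* The fentry.s program bails out unless the caller matches. */
	skel->bss->target_pid = getpid();

	if (iters_task__attach(skel))
		goto out;

	/* Enter sys_getpgid so the sleepable fentry hook fires. */
	syscall(SYS_getpgid, 0);

	/* procs_cnt counts processes whose pid == target_pid, so 1 here. */
	err = skel->bss->procs_cnt == 1 ? 0 : -1;
out:
	iters_task__destroy(skel);
	return err;
}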

@@ -0,0 +1,105 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2023 Chuyi Zhou <zhouchuyi@bytedance.com> */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_misc.h"
#include "bpf_experimental.h"
char _license[] SEC("license") = "GPL";
struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
void bpf_cgroup_release(struct cgroup *p) __ksym;
void bpf_rcu_read_lock(void) __ksym;
void bpf_rcu_read_unlock(void) __ksym;
SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
__failure __msg("expected an RCU CS when using bpf_iter_task_next")
int BPF_PROG(iter_tasks_without_lock)
{
struct task_struct *pos;
bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_PROCS) {
}
return 0;
}
SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
__failure __msg("expected an RCU CS when using bpf_iter_css_next")
int BPF_PROG(iter_css_without_lock)
{
u64 cg_id = bpf_get_current_cgroup_id();
struct cgroup *cgrp = bpf_cgroup_from_id(cg_id);
struct cgroup_subsys_state *root_css, *pos;
if (!cgrp)
return 0;
root_css = &cgrp->self;
bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_DESCENDANTS_POST) {
}
bpf_cgroup_release(cgrp);
return 0;
}
SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
__failure __msg("expected an RCU CS when using bpf_iter_task_next")
int BPF_PROG(iter_tasks_lock_and_unlock)
{
struct task_struct *pos;
bpf_rcu_read_lock();
bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_PROCS) {
bpf_rcu_read_unlock();
bpf_rcu_read_lock();
}
bpf_rcu_read_unlock();
return 0;
}
SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
__failure __msg("expected an RCU CS when using bpf_iter_css_next")
int BPF_PROG(iter_css_lock_and_unlock)
{
u64 cg_id = bpf_get_current_cgroup_id();
struct cgroup *cgrp = bpf_cgroup_from_id(cg_id);
struct cgroup_subsys_state *root_css, *pos;
if (!cgrp)
return 0;
root_css = &cgrp->self;
bpf_rcu_read_lock();
bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_DESCENDANTS_POST) {
bpf_rcu_read_unlock();
bpf_rcu_read_lock();
}
bpf_rcu_read_unlock();
bpf_cgroup_release(cgrp);
return 0;
}
SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
__failure __msg("css_task_iter is only allowed in bpf_lsm and bpf iter-s")
int BPF_PROG(iter_css_task_for_each)
{
u64 cg_id = bpf_get_current_cgroup_id();
struct cgroup *cgrp = bpf_cgroup_from_id(cg_id);
struct cgroup_subsys_state *css;
struct task_struct *task;
if (cgrp == NULL)
return 0;
css = &cgrp->self;
bpf_for_each(css_task, task, css, CSS_TASK_ITER_PROCS) {
}
bpf_cgroup_release(cgrp);
return 0;
}

@@ -30,6 +30,7 @@ int iter_task_vma_for_each(const void *ctx)
bpf_for_each(task_vma, vma, task, 0) {
if (seen >= 1000)
break;
barrier_var(seen);
vm_ranges[seen].vm_start = vma->vm_start;
vm_ranges[seen].vm_end = vma->vm_end;

@@ -591,7 +591,9 @@ int pop_ptr_off(void *(*op)(void *head))
n = op(&p->head);
bpf_spin_unlock(&p->lock);
bpf_this_cpu_ptr(n);
if (!n)
return 0;
bpf_spin_lock((void *)n);
return 0;
}

@@ -37,10 +37,20 @@ int pid = 0;
__type(key, int); \
__type(value, struct map_value_##_size); \
__uint(max_entries, 128); \
} array_##_size SEC(".maps");
} array_##_size SEC(".maps")
static __always_inline void batch_alloc_free(struct bpf_map *map, unsigned int batch,
unsigned int idx)
#define DEFINE_ARRAY_WITH_PERCPU_KPTR(_size) \
struct map_value_percpu_##_size { \
struct bin_data_##_size __percpu_kptr * data; \
}; \
struct { \
__uint(type, BPF_MAP_TYPE_ARRAY); \
__type(key, int); \
__type(value, struct map_value_percpu_##_size); \
__uint(max_entries, 128); \
} array_percpu_##_size SEC(".maps")
static __always_inline void batch_alloc(struct bpf_map *map, unsigned int batch, unsigned int idx)
{
struct generic_map_value *value;
unsigned int i, key;
@@ -65,6 +75,14 @@ static __always_inline void batch_alloc_free(struct bpf_map *map, unsigned int b
return;
}
}
}
static __always_inline void batch_free(struct bpf_map *map, unsigned int batch, unsigned int idx)
{
struct generic_map_value *value;
unsigned int i, key;
void *old;
for (i = 0; i < batch; i++) {
key = i;
value = bpf_map_lookup_elem(map, &key);
@@ -81,8 +99,72 @@ static __always_inline void batch_alloc_free(struct bpf_map *map, unsigned int b
}
}
static __always_inline void batch_percpu_alloc(struct bpf_map *map, unsigned int batch,
unsigned int idx)
{
struct generic_map_value *value;
unsigned int i, key;
void *old, *new;
for (i = 0; i < batch; i++) {
key = i;
value = bpf_map_lookup_elem(map, &key);
if (!value) {
err = 1;
return;
}
/* per-cpu allocator may not be able to refill in time */
new = bpf_percpu_obj_new_impl(data_btf_ids[idx], NULL);
if (!new)
continue;
old = bpf_kptr_xchg(&value->data, new);
if (old) {
bpf_percpu_obj_drop(old);
err = 2;
return;
}
}
}
static __always_inline void batch_percpu_free(struct bpf_map *map, unsigned int batch,
unsigned int idx)
{
struct generic_map_value *value;
unsigned int i, key;
void *old;
for (i = 0; i < batch; i++) {
key = i;
value = bpf_map_lookup_elem(map, &key);
if (!value) {
err = 3;
return;
}
old = bpf_kptr_xchg(&value->data, NULL);
if (!old)
continue;
bpf_percpu_obj_drop(old);
}
}
#define CALL_BATCH_ALLOC(size, batch, idx) \
batch_alloc((struct bpf_map *)(&array_##size), batch, idx)
#define CALL_BATCH_ALLOC_FREE(size, batch, idx) \
batch_alloc_free((struct bpf_map *)(&array_##size), batch, idx)
do { \
batch_alloc((struct bpf_map *)(&array_##size), batch, idx); \
batch_free((struct bpf_map *)(&array_##size), batch, idx); \
} while (0)
#define CALL_BATCH_PERCPU_ALLOC(size, batch, idx) \
batch_percpu_alloc((struct bpf_map *)(&array_percpu_##size), batch, idx)
#define CALL_BATCH_PERCPU_ALLOC_FREE(size, batch, idx) \
do { \
batch_percpu_alloc((struct bpf_map *)(&array_percpu_##size), batch, idx); \
batch_percpu_free((struct bpf_map *)(&array_percpu_##size), batch, idx); \
} while (0)
DEFINE_ARRAY_WITH_KPTR(8);
DEFINE_ARRAY_WITH_KPTR(16);
@@ -97,8 +179,21 @@ DEFINE_ARRAY_WITH_KPTR(1024);
DEFINE_ARRAY_WITH_KPTR(2048);
DEFINE_ARRAY_WITH_KPTR(4096);
SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int test_bpf_mem_alloc_free(void *ctx)
/* per-cpu kptr doesn't support bin_data_8 which is a zero-sized array */
DEFINE_ARRAY_WITH_PERCPU_KPTR(16);
DEFINE_ARRAY_WITH_PERCPU_KPTR(32);
DEFINE_ARRAY_WITH_PERCPU_KPTR(64);
DEFINE_ARRAY_WITH_PERCPU_KPTR(96);
DEFINE_ARRAY_WITH_PERCPU_KPTR(128);
DEFINE_ARRAY_WITH_PERCPU_KPTR(192);
DEFINE_ARRAY_WITH_PERCPU_KPTR(256);
DEFINE_ARRAY_WITH_PERCPU_KPTR(512);
DEFINE_ARRAY_WITH_PERCPU_KPTR(1024);
DEFINE_ARRAY_WITH_PERCPU_KPTR(2048);
DEFINE_ARRAY_WITH_PERCPU_KPTR(4096);
SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
int test_batch_alloc_free(void *ctx)
{
if ((u32)bpf_get_current_pid_tgid() != pid)
return 0;
@@ -121,3 +216,76 @@ int test_bpf_mem_alloc_free(void *ctx)
return 0;
}
SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
int test_free_through_map_free(void *ctx)
{
if ((u32)bpf_get_current_pid_tgid() != pid)
return 0;
/* Alloc 128 8-bytes objects in batch to trigger refilling,
* then free these objects through map free.
*/
CALL_BATCH_ALLOC(8, 128, 0);
CALL_BATCH_ALLOC(16, 128, 1);
CALL_BATCH_ALLOC(32, 128, 2);
CALL_BATCH_ALLOC(64, 128, 3);
CALL_BATCH_ALLOC(96, 128, 4);
CALL_BATCH_ALLOC(128, 128, 5);
CALL_BATCH_ALLOC(192, 128, 6);
CALL_BATCH_ALLOC(256, 128, 7);
CALL_BATCH_ALLOC(512, 64, 8);
CALL_BATCH_ALLOC(1024, 32, 9);
CALL_BATCH_ALLOC(2048, 16, 10);
CALL_BATCH_ALLOC(4096, 8, 11);
return 0;
}
SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
int test_batch_percpu_alloc_free(void *ctx)
{
if ((u32)bpf_get_current_pid_tgid() != pid)
return 0;
/* Alloc 128 16-bytes per-cpu objects in batch to trigger refilling,
* then free 128 16-bytes per-cpu objects in batch to trigger freeing.
*/
CALL_BATCH_PERCPU_ALLOC_FREE(16, 128, 1);
CALL_BATCH_PERCPU_ALLOC_FREE(32, 128, 2);
CALL_BATCH_PERCPU_ALLOC_FREE(64, 128, 3);
CALL_BATCH_PERCPU_ALLOC_FREE(96, 128, 4);
CALL_BATCH_PERCPU_ALLOC_FREE(128, 128, 5);
CALL_BATCH_PERCPU_ALLOC_FREE(192, 128, 6);
CALL_BATCH_PERCPU_ALLOC_FREE(256, 128, 7);
CALL_BATCH_PERCPU_ALLOC_FREE(512, 64, 8);
CALL_BATCH_PERCPU_ALLOC_FREE(1024, 32, 9);
CALL_BATCH_PERCPU_ALLOC_FREE(2048, 16, 10);
CALL_BATCH_PERCPU_ALLOC_FREE(4096, 8, 11);
return 0;
}
SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
int test_percpu_free_through_map_free(void *ctx)
{
if ((u32)bpf_get_current_pid_tgid() != pid)
return 0;
/* Alloc 128 16-bytes per-cpu objects in batch to trigger refilling,
* then free these object through map free.
*/
CALL_BATCH_PERCPU_ALLOC(16, 128, 1);
CALL_BATCH_PERCPU_ALLOC(32, 128, 2);
CALL_BATCH_PERCPU_ALLOC(64, 128, 3);
CALL_BATCH_PERCPU_ALLOC(96, 128, 4);
CALL_BATCH_PERCPU_ALLOC(128, 128, 5);
CALL_BATCH_PERCPU_ALLOC(192, 128, 6);
CALL_BATCH_PERCPU_ALLOC(256, 128, 7);
CALL_BATCH_PERCPU_ALLOC(512, 64, 8);
CALL_BATCH_PERCPU_ALLOC(1024, 32, 9);
CALL_BATCH_PERCPU_ALLOC(2048, 16, 10);
CALL_BATCH_PERCPU_ALLOC(4096, 8, 11);
return 0;
}
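
Stripped of the batching macros, the per-cpu kptr round trip exercised above reduces to the following sketch (map and type names are hypothetical; the bpf_percpu_obj_new()/bpf_percpu_obj_drop() wrappers are assumed to come from bpf_experimental.h):

/* Illustrative sketch of the basic per-cpu kptr flow: allocate a
 * per-cpu object, publish it into a map value with bpf_kptr_xchg(),
 * and drop whatever was there before. Names are hypothetical.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
#include "bpf_experimental.h"

struct pcpu_data {
	long cnt;
};

struct pcpu_map_value {
	struct pcpu_data __percpu_kptr *data;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct pcpu_map_value);
} pcpu_array SEC(".maps");

SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
int pcpu_kptr_roundtrip(void *ctx)
{
	struct pcpu_data __percpu_kptr *new, *old;
	struct pcpu_map_value *v;
	int key = 0;

	v = bpf_map_lookup_elem(&pcpu_array, &key);
	if (!v)
		return 0;

	/* May fail when the per-cpu allocator cannot refill in time. */
	new = bpf_percpu_obj_new(struct pcpu_data);
	if (!new)
		return 0;

	old = bpf_kptr_xchg(&v->data, new);
	if (old)
		bpf_percpu_obj_drop(old);
	return 0;
}

char _license_sketch[] SEC("license") = "GPL";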

@@ -18,7 +18,7 @@ const volatile __u64 cgid;
int remote_pid;
SEC("tp_btf/task_newtask")
int BPF_PROG(handle__task_newtask, struct task_struct *task, u64 clone_flags)
int BPF_PROG(tp_btf_run, struct task_struct *task, u64 clone_flags)
{
struct cgroup *cgrp = NULL;
struct task_struct *acquired;
@@ -48,4 +48,30 @@ out:
return 0;
}
SEC("lsm.s/bpf")
int BPF_PROG(lsm_run, int cmd, union bpf_attr *attr, unsigned int size)
{
struct cgroup *cgrp = NULL;
struct task_struct *task;
int ret = 0;
task = bpf_get_current_task_btf();
if (local_pid != task->pid)
return 0;
if (cmd != BPF_LINK_CREATE)
return 0;
/* 1 is the root cgroup */
cgrp = bpf_cgroup_from_id(1);
if (!cgrp)
goto out;
if (!bpf_task_under_cgroup(task, cgrp))
ret = -1;
bpf_cgroup_release(cgrp);
out:
return ret;
}
char _license[] SEC("license") = "GPL";

@@ -1,7 +1,11 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Isovalent */
#include <stdbool.h>
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>
char LICENSE[] SEC("license") = "GPL";
@@ -12,10 +16,19 @@ bool seen_tc3;
bool seen_tc4;
bool seen_tc5;
bool seen_tc6;
bool seen_eth;
SEC("tc/ingress")
int tc1(struct __sk_buff *skb)
{
struct ethhdr eth = {};
if (skb->protocol != __bpf_constant_htons(ETH_P_IP))
goto out;
if (bpf_skb_load_bytes(skb, 0, &eth, sizeof(eth)))
goto out;
seen_eth = eth.h_proto == bpf_htons(ETH_P_IP);
out:
seen_tc1 = true;
return TCX_NEXT;
}

@@ -21,7 +21,7 @@ extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
enum xdp_rss_hash_type *rss_type) __ksym;
SEC("xdp")
SEC("xdp.frags")
int rx(struct xdp_md *ctx)
{
void *data, *data_meta, *data_end;

@@ -4,9 +4,40 @@
#include <stdlib.h>
#include <error.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include "unpriv_helpers.h"
static bool get_mitigations_off(void)
{
char cmdline[4096], *c;
int fd, ret = false;
fd = open("/proc/cmdline", O_RDONLY);
if (fd < 0) {
perror("open /proc/cmdline");
return false;
}
if (read(fd, cmdline, sizeof(cmdline) - 1) < 0) {
perror("read /proc/cmdline");
goto out;
}
cmdline[sizeof(cmdline) - 1] = '\0';
for (c = strtok(cmdline, " \n"); c; c = strtok(NULL, " \n")) {
if (strncmp(c, "mitigations=off", strlen(c)))
continue;
ret = true;
break;
}
out:
close(fd);
return ret;
}
bool get_unpriv_disabled(void)
{
bool disabled;
@@ -22,5 +53,5 @@ bool get_unpriv_disabled(void)
disabled = true;
}
return disabled;
return disabled ? true : get_mitigations_off();
}

@@ -26,6 +26,7 @@
#include <linux/sockios.h>
#include <sys/mman.h>
#include <net/if.h>
#include <ctype.h>
#include <poll.h>
#include <time.h>
@@ -47,6 +48,7 @@ struct xsk {
};
struct xdp_hw_metadata *bpf_obj;
__u16 bind_flags = XDP_COPY;
struct xsk *rx_xsk;
const char *ifname;
int ifindex;
@@ -60,7 +62,7 @@ static int open_xsk(int ifindex, struct xsk *xsk, __u32 queue_id)
const struct xsk_socket_config socket_config = {
.rx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
.bind_flags = XDP_COPY,
.bind_flags = bind_flags,
};
const struct xsk_umem_config umem_config = {
.fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
@@ -263,11 +265,14 @@ static int verify_metadata(struct xsk *rx_xsk, int rxq, int server_fd, clockid_t
verify_skb_metadata(server_fd);
for (i = 0; i < rxq; i++) {
bool first_seg = true;
bool is_eop = true;
if (fds[i].revents == 0)
continue;
struct xsk *xsk = &rx_xsk[i];
peek:
ret = xsk_ring_cons__peek(&xsk->rx, 1, &idx);
printf("xsk_ring_cons__peek: %d\n", ret);
if (ret != 1)
@@ -276,12 +281,19 @@ static int verify_metadata(struct xsk *rx_xsk, int rxq, int server_fd, clockid_t
rx_desc = xsk_ring_cons__rx_desc(&xsk->rx, idx);
comp_addr = xsk_umem__extract_addr(rx_desc->addr);
addr = xsk_umem__add_offset_to_addr(rx_desc->addr);
printf("%p: rx_desc[%u]->addr=%llx addr=%llx comp_addr=%llx\n",
xsk, idx, rx_desc->addr, addr, comp_addr);
verify_xdp_metadata(xsk_umem__get_data(xsk->umem_area, addr),
clock_id);
is_eop = !(rx_desc->options & XDP_PKT_CONTD);
printf("%p: rx_desc[%u]->addr=%llx addr=%llx comp_addr=%llx%s\n",
xsk, idx, rx_desc->addr, addr, comp_addr, is_eop ? " EoP" : "");
if (first_seg) {
verify_xdp_metadata(xsk_umem__get_data(xsk->umem_area, addr),
clock_id);
first_seg = false;
}
xsk_ring_cons__release(&xsk->rx, 1);
refill_rx(xsk, comp_addr);
if (!is_eop)
goto peek;
}
}
@@ -404,6 +416,53 @@ static void timestamping_enable(int fd, int val)
error(1, errno, "setsockopt(SO_TIMESTAMPING)");
}
static void print_usage(void)
{
const char *usage =
"Usage: xdp_hw_metadata [OPTIONS] [IFNAME]\n"
" -m Enable multi-buffer XDP for larger MTU\n"
" -h Display this help and exit\n\n"
"Generate test packets on the other machine with:\n"
" echo -n xdp | nc -u -q1 <dst_ip> 9091\n";
printf("%s", usage);
}
static void read_args(int argc, char *argv[])
{
int opt;
while ((opt = getopt(argc, argv, "mh")) != -1) {
switch (opt) {
case 'm':
bind_flags |= XDP_USE_SG;
break;
case 'h':
print_usage();
exit(0);
case '?':
if (isprint(optopt))
fprintf(stderr, "Unknown option: -%c\n", optopt);
fallthrough;
default:
print_usage();
error(-1, opterr, "Command line options error");
}
}
if (optind >= argc) {
fprintf(stderr, "No device name provided\n");
print_usage();
exit(-1);
}
ifname = argv[optind];
ifindex = if_nametoindex(ifname);
if (!ifindex)
error(-1, errno, "Invalid interface name");
}
int main(int argc, char *argv[])
{
clockid_t clock_id = CLOCK_TAI;
@@ -413,13 +472,8 @@ int main(int argc, char *argv[])
struct bpf_program *prog;
if (argc != 2) {
fprintf(stderr, "pass device name\n");
return -1;
}
read_args(argc, argv);
ifname = argv[1];
ifindex = if_nametoindex(ifname);
rxq = rxq_num(ifname);
printf("rxq: %d\n", rxq);