Daniel Borkmann says:

====================
bpf-next 2021-10-02

We've added 85 non-merge commits during the last 15 day(s) which contain
a total of 132 files changed, 13779 insertions(+), 6724 deletions(-).

The main changes are:

1) Massive update on test_bpf.ko coverage for JITs as preparatory work for
   an upcoming MIPS eBPF JIT, from Johan Almbladh.

2) Add a batched interface for RX buffer allocation in AF_XDP buffer pool,
   with driver support for i40e and ice from Magnus Karlsson.

3) Add legacy uprobe support to libbpf to complement recently merged legacy
   kprobe support, from Andrii Nakryiko.

4) Add bpf_trace_vprintk() as variadic printk helper, from Dave Marchevsky.

5) Support saving the register state in the verifier when spilling a <8-byte
   bounded scalar to the stack, from Martin Lau.

6) Add libbpf opt-in for stricter BPF program section name handling as part
   of libbpf 1.0 effort, from Andrii Nakryiko.

7) Add a document to help clarify BPF licensing, from Alexei Starovoitov.

8) Fix skel_internal.h to propagate errno if the loader indicates an internal
   error, from Kumar Kartikeya Dwivedi.

9) Fix build warnings with -Wcast-function-type so that the option can later
   be enabled by default for the kernel, from Kees Cook.

10) Fix libbpf to ignore STT_SECTION symbols in legacy map definitions as it
    otherwise errors out when encountering them, from Toke Høiland-Jørgensen.

11) Teach libbpf to recognize specialized maps (such as for perf RB) and
    internally remove BTF type IDs when creating them, from Hengqi Chen.

12) Various fixes and improvements to BPF selftests.
====================

Link: https://lore.kernel.org/r/20211002001327.15169-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 6b7b0c3091 (Jakub Kicinski, 2021-10-01 19:55:09 -07:00)
132 changed files with 8209 additions and 1154 deletions


@ -0,0 +1,92 @@
=============
BPF licensing
=============
Background
==========
* Classic BPF was BSD licensed
"BPF" was originally introduced as BSD Packet Filter in
http://www.tcpdump.org/papers/bpf-usenix93.pdf. The corresponding instruction
set and its implementation came from BSD with a BSD license. That original
instruction set is now known as "classic BPF".
However, an instruction set is a specification for machine-language interaction,
similar to a programming language. It is not code itself. Therefore, the
application of a BSD license may be misleading in certain contexts, as the
instruction set may enjoy no copyright protection.
* eBPF (extended BPF) instruction set continues to be BSD
In 2014, the classic BPF instruction set was significantly extended. We
typically refer to this instruction set as eBPF to disambiguate it from cBPF.
The eBPF instruction set is still BSD licensed.
Implementations of eBPF
=======================
Using the eBPF instruction set requires implementing code in both kernel space
and user space.
In Linux Kernel
---------------
The reference implementations of the eBPF interpreter and various just-in-time
compilers are part of Linux and are GPLv2 licensed. The implementation of
eBPF helper functions is also GPLv2 licensed. Interpreters, JITs, helpers,
and verifiers are collectively called the eBPF runtime.
In User Space
-------------
There are also implementations of eBPF runtime (interpreter, JITs, helper
functions) under
Apache2 (https://github.com/iovisor/ubpf),
MIT (https://github.com/qmonnet/rbpf), and
BSD (https://github.com/DPDK/dpdk/blob/main/lib/librte_bpf).
In HW
-----
Hardware can choose to execute eBPF instructions natively and provide the eBPF
runtime in hardware, or via firmware with a proprietary license.
In other operating systems
--------------------------
Other kernels or user space implementations of the eBPF instruction set and
runtime can have proprietary licenses.
Using BPF programs in the Linux kernel
======================================
The Linux kernel (while being GPLv2) allows linking of proprietary kernel
modules under these rules:
Documentation/process/license-rules.rst
When a kernel module is loaded, the Linux kernel checks which functions it
intends to use. If any function is marked as "GPL only," the corresponding
module or program has to have a GPL-compatible license.
Loading a BPF program into the Linux kernel is similar to loading a kernel
module. BPF is loaded at run time and not statically linked to the Linux
kernel. BPF program loading follows the same license checking rules as kernel
modules. BPF programs can be proprietary if they don't use "GPL only" BPF
helper functions.
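
For illustration only, a hedged sketch that is not part of this document: this
is roughly how a BPF program declares its license, which the kernel checks at
load time against the "GPL only" flag of every helper the program calls. The
program name and attach point below are made up:

.. code-block:: c

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* GPL-compatible license string checked by the kernel at load time. */
    char LICENSE[] SEC("license") = "GPL";

    SEC("tracepoint/syscalls/sys_enter_execve")
    int trace_execve(void *ctx)
    {
            /* bpf_printk() expands to bpf_trace_printk(), a "GPL only"
             * helper, so this program needs the GPL license above to load.
             */
            bpf_printk("execve called");
            return 0;
    }
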
Further, some BPF program types - Linux Security Modules (LSM) and TCP
Congestion Control (struct_ops), as of Aug 2021 - are required to be GPL
compatible even if they don't use "GPL only" helper functions directly. The
registration step of LSM and TCP congestion control modules of the Linux
kernel is done through EXPORT_SYMBOL_GPL kernel functions. In that sense LSM
and struct_ops BPF programs are implicitly calling "GPL only" functions.
The same restriction applies to BPF programs that call kernel functions
directly via the unstable interface also known as "kfunc".
Packaging BPF programs with user space applications
====================================================
Generally, proprietary-licensed applications and GPL licensed BPF programs
written for the Linux kernel in the same package can co-exist because they are
separate executable processes. This applies to both cBPF and eBPF programs.


@ -82,6 +82,15 @@ Testing and debugging BPF
s390
Licensing
=========
.. toctree::
:maxdepth: 1
bpf_licensing
Other
=====


@ -193,42 +193,40 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count)
{
u16 ntu = rx_ring->next_to_use;
union i40e_rx_desc *rx_desc;
struct xdp_buff **bi, *xdp;
struct xdp_buff **xdp;
u32 nb_buffs, i;
dma_addr_t dma;
bool ok = true;
rx_desc = I40E_RX_DESC(rx_ring, ntu);
bi = i40e_rx_bi(rx_ring, ntu);
do {
xdp = xsk_buff_alloc(rx_ring->xsk_pool);
if (!xdp) {
ok = false;
goto no_buffers;
}
*bi = xdp;
dma = xsk_buff_xdp_get_dma(xdp);
xdp = i40e_rx_bi(rx_ring, ntu);
nb_buffs = min_t(u16, count, rx_ring->count - ntu);
nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs);
if (!nb_buffs)
return false;
i = nb_buffs;
while (i--) {
dma = xsk_buff_xdp_get_dma(*xdp);
rx_desc->read.pkt_addr = cpu_to_le64(dma);
rx_desc->read.hdr_addr = 0;
rx_desc++;
bi++;
ntu++;
if (unlikely(ntu == rx_ring->count)) {
rx_desc = I40E_RX_DESC(rx_ring, 0);
bi = i40e_rx_bi(rx_ring, 0);
ntu = 0;
}
} while (--count);
no_buffers:
if (rx_ring->next_to_use != ntu) {
/* clear the status bits for the next_to_use descriptor */
rx_desc->wb.qword1.status_error_len = 0;
i40e_release_rx_desc(rx_ring, ntu);
xdp++;
}
return ok;
ntu += nb_buffs;
if (ntu == rx_ring->count) {
rx_desc = I40E_RX_DESC(rx_ring, 0);
xdp = i40e_rx_bi(rx_ring, 0);
ntu = 0;
}
/* clear the status bits for the next_to_use descriptor */
rx_desc->wb.qword1.status_error_len = 0;
i40e_release_rx_desc(rx_ring, ntu);
return count == nb_buffs ? true : false;
}
/**
@ -365,7 +363,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
break;
bi = *i40e_rx_bi(rx_ring, next_to_clean);
bi->data_end = bi->data + size;
xsk_buff_set_size(bi, size);
xsk_buff_dma_sync_for_cpu(bi, rx_ring->xsk_pool);
xdp_res = i40e_run_xdp_zc(rx_ring, bi);


@ -164,17 +164,10 @@ struct ice_tx_offload_params {
};
struct ice_rx_buf {
union {
struct {
dma_addr_t dma;
struct page *page;
unsigned int page_offset;
u16 pagecnt_bias;
};
struct {
struct xdp_buff *xdp;
};
};
dma_addr_t dma;
struct page *page;
unsigned int page_offset;
u16 pagecnt_bias;
};
struct ice_q_stats {
@ -270,6 +263,7 @@ struct ice_ring {
union {
struct ice_tx_buf *tx_buf;
struct ice_rx_buf *rx_buf;
struct xdp_buff **xdp_buf;
};
/* CL2 - 2nd cacheline starts here */
u16 q_index; /* Queue number of ring */


@ -364,45 +364,39 @@ bool ice_alloc_rx_bufs_zc(struct ice_ring *rx_ring, u16 count)
{
union ice_32b_rx_flex_desc *rx_desc;
u16 ntu = rx_ring->next_to_use;
struct ice_rx_buf *rx_buf;
bool ok = true;
struct xdp_buff **xdp;
u32 nb_buffs, i;
dma_addr_t dma;
if (!count)
return true;
rx_desc = ICE_RX_DESC(rx_ring, ntu);
rx_buf = &rx_ring->rx_buf[ntu];
xdp = &rx_ring->xdp_buf[ntu];
do {
rx_buf->xdp = xsk_buff_alloc(rx_ring->xsk_pool);
if (!rx_buf->xdp) {
ok = false;
break;
}
nb_buffs = min_t(u16, count, rx_ring->count - ntu);
nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs);
if (!nb_buffs)
return false;
dma = xsk_buff_xdp_get_dma(rx_buf->xdp);
i = nb_buffs;
while (i--) {
dma = xsk_buff_xdp_get_dma(*xdp);
rx_desc->read.pkt_addr = cpu_to_le64(dma);
rx_desc->wb.status_error0 = 0;
rx_desc++;
rx_buf++;
ntu++;
if (unlikely(ntu == rx_ring->count)) {
rx_desc = ICE_RX_DESC(rx_ring, 0);
rx_buf = rx_ring->rx_buf;
ntu = 0;
}
} while (--count);
if (rx_ring->next_to_use != ntu) {
/* clear the status bits for the next_to_use descriptor */
rx_desc->wb.status_error0 = 0;
ice_release_rx_desc(rx_ring, ntu);
xdp++;
}
return ok;
ntu += nb_buffs;
if (ntu == rx_ring->count) {
rx_desc = ICE_RX_DESC(rx_ring, 0);
xdp = rx_ring->xdp_buf;
ntu = 0;
}
/* clear the status bits for the next_to_use descriptor */
rx_desc->wb.status_error0 = 0;
ice_release_rx_desc(rx_ring, ntu);
return count == nb_buffs ? true : false;
}
/**
@ -421,19 +415,19 @@ static void ice_bump_ntc(struct ice_ring *rx_ring)
/**
* ice_construct_skb_zc - Create an sk_buff from zero-copy buffer
* @rx_ring: Rx ring
* @rx_buf: zero-copy Rx buffer
* @xdp_arr: Pointer to the SW ring of xdp_buff pointers
*
* This function allocates a new skb from a zero-copy Rx buffer.
*
* Returns the skb on success, NULL on failure.
*/
static struct sk_buff *
ice_construct_skb_zc(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
ice_construct_skb_zc(struct ice_ring *rx_ring, struct xdp_buff **xdp_arr)
{
unsigned int metasize = rx_buf->xdp->data - rx_buf->xdp->data_meta;
unsigned int datasize = rx_buf->xdp->data_end - rx_buf->xdp->data;
unsigned int datasize_hard = rx_buf->xdp->data_end -
rx_buf->xdp->data_hard_start;
struct xdp_buff *xdp = *xdp_arr;
unsigned int metasize = xdp->data - xdp->data_meta;
unsigned int datasize = xdp->data_end - xdp->data;
unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
struct sk_buff *skb;
skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard,
@ -441,13 +435,13 @@ ice_construct_skb_zc(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
if (unlikely(!skb))
return NULL;
skb_reserve(skb, rx_buf->xdp->data - rx_buf->xdp->data_hard_start);
memcpy(__skb_put(skb, datasize), rx_buf->xdp->data, datasize);
skb_reserve(skb, xdp->data - xdp->data_hard_start);
memcpy(__skb_put(skb, datasize), xdp->data, datasize);
if (metasize)
skb_metadata_set(skb, metasize);
xsk_buff_free(rx_buf->xdp);
rx_buf->xdp = NULL;
xsk_buff_free(xdp);
*xdp_arr = NULL;
return skb;
}
@ -521,7 +515,7 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
while (likely(total_rx_packets < (unsigned int)budget)) {
union ice_32b_rx_flex_desc *rx_desc;
unsigned int size, xdp_res = 0;
struct ice_rx_buf *rx_buf;
struct xdp_buff **xdp;
struct sk_buff *skb;
u16 stat_err_bits;
u16 vlan_tag = 0;
@ -544,18 +538,18 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
if (!size)
break;
rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
rx_buf->xdp->data_end = rx_buf->xdp->data + size;
xsk_buff_dma_sync_for_cpu(rx_buf->xdp, rx_ring->xsk_pool);
xdp = &rx_ring->xdp_buf[rx_ring->next_to_clean];
xsk_buff_set_size(*xdp, size);
xsk_buff_dma_sync_for_cpu(*xdp, rx_ring->xsk_pool);
xdp_res = ice_run_xdp_zc(rx_ring, rx_buf->xdp);
xdp_res = ice_run_xdp_zc(rx_ring, *xdp);
if (xdp_res) {
if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))
xdp_xmit |= xdp_res;
else
xsk_buff_free(rx_buf->xdp);
xsk_buff_free(*xdp);
rx_buf->xdp = NULL;
*xdp = NULL;
total_rx_bytes += size;
total_rx_packets++;
cleaned_count++;
@ -565,7 +559,7 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
}
/* XDP_PASS path */
skb = ice_construct_skb_zc(rx_ring, rx_buf);
skb = ice_construct_skb_zc(rx_ring, xdp);
if (!skb) {
rx_ring->rx_stats.alloc_buf_failed++;
break;
@ -813,12 +807,12 @@ void ice_xsk_clean_rx_ring(struct ice_ring *rx_ring)
u16 i;
for (i = 0; i < rx_ring->count; i++) {
struct ice_rx_buf *rx_buf = &rx_ring->rx_buf[i];
struct xdp_buff **xdp = &rx_ring->xdp_buf[i];
if (!rx_buf->xdp)
if (!xdp)
continue;
rx_buf->xdp = NULL;
*xdp = NULL;
}
}


@ -48,6 +48,7 @@ extern struct idr btf_idr;
extern spinlock_t btf_idr_lock;
extern struct kobject *btf_kobj;
typedef u64 (*bpf_callback_t)(u64, u64, u64, u64, u64);
typedef int (*bpf_iter_init_seq_priv_t)(void *private_data,
struct bpf_iter_aux_info *aux);
typedef void (*bpf_iter_fini_seq_priv_t)(void *private_data);
@ -142,7 +143,8 @@ struct bpf_map_ops {
int (*map_set_for_each_callback_args)(struct bpf_verifier_env *env,
struct bpf_func_state *caller,
struct bpf_func_state *callee);
int (*map_for_each_callback)(struct bpf_map *map, void *callback_fn,
int (*map_for_each_callback)(struct bpf_map *map,
bpf_callback_t callback_fn,
void *callback_ctx, u64 flags);
/* BTF name and id of struct allocated by map_alloc */
@ -1089,6 +1091,7 @@ bool bpf_prog_array_compatible(struct bpf_array *array, const struct bpf_prog *f
int bpf_prog_calc_tag(struct bpf_prog *fp);
const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void);
typedef unsigned long (*bpf_ctx_copy_t)(void *dst, const void *src,
unsigned long off, unsigned long len);
@ -2217,6 +2220,8 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
struct btf_id_set;
bool btf_id_set_contains(const struct btf_id_set *set, u32 id);
#define MAX_BPRINTF_VARARGS 12
int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
u32 **bin_buf, u32 num_args);
void bpf_bprintf_cleanup(void);


@ -360,10 +360,9 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
.off = 0, \
.imm = TGT })
/* Function call */
/* Convert function address to BPF immediate */
#define BPF_CAST_CALL(x) \
((u64 (*)(u64, u64, u64, u64, u64))(x))
#define BPF_CALL_IMM(x) ((void *)(x) - (void *)__bpf_call_base)
#define BPF_EMIT_CALL(FUNC) \
((struct bpf_insn) { \
@ -371,7 +370,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
.dst_reg = 0, \
.src_reg = 0, \
.off = 0, \
.imm = ((FUNC) - __bpf_call_base) })
.imm = BPF_CALL_IMM(FUNC) })
/* Raw code statement block */


@ -15,13 +15,13 @@
* level RX-ring queues. It is information that is specific to how
* the driver have configured a given RX-ring queue.
*
* Each xdp_buff frame received in the driver carry a (pointer)
* Each xdp_buff frame received in the driver carries a (pointer)
* reference to this xdp_rxq_info structure. This provides the XDP
* data-path read-access to RX-info for both kernel and bpf-side
* (limited subset).
*
* For now, direct access is only safe while running in NAPI/softirq
* context. Contents is read-mostly and must not be updated during
* context. Contents are read-mostly and must not be updated during
* driver NAPI/softirq poll.
*
* The driver usage API is a register and unregister API.
@ -30,8 +30,8 @@
* can be attached as long as it doesn't change the underlying
* RX-ring. If the RX-ring does change significantly, the NIC driver
* naturally need to stop the RX-ring before purging and reallocating
* memory. In that process the driver MUST call unregistor (which
* also apply for driver shutdown and unload). The register API is
* memory. In that process the driver MUST call unregister (which
* also applies for driver shutdown and unload). The register API is
* also mandatory during RX-ring setup.
*/


@ -77,6 +77,12 @@ static inline struct xdp_buff *xsk_buff_alloc(struct xsk_buff_pool *pool)
return xp_alloc(pool);
}
/* Returns as many entries as possible up to max. 0 <= N <= max. */
static inline u32 xsk_buff_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
{
return xp_alloc_batch(pool, xdp, max);
}
static inline bool xsk_buff_can_alloc(struct xsk_buff_pool *pool, u32 count)
{
return xp_can_alloc(pool, count);
@ -89,6 +95,13 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
xp_free(xskb);
}
static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
{
xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
xdp->data_meta = xdp->data;
xdp->data_end = xdp->data + size;
}
static inline dma_addr_t xsk_buff_raw_get_dma(struct xsk_buff_pool *pool,
u64 addr)
{
@ -212,6 +225,11 @@ static inline struct xdp_buff *xsk_buff_alloc(struct xsk_buff_pool *pool)
return NULL;
}
static inline u32 xsk_buff_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
{
return 0;
}
static inline bool xsk_buff_can_alloc(struct xsk_buff_pool *pool, u32 count)
{
return false;
@ -221,6 +239,10 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
{
}
static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
{
}
static inline dma_addr_t xsk_buff_raw_get_dma(struct xsk_buff_pool *pool,
u64 addr)
{


@ -7,6 +7,7 @@
#include <linux/if_xdp.h>
#include <linux/types.h>
#include <linux/dma-mapping.h>
#include <linux/bpf.h>
#include <net/xdp.h>
struct xsk_buff_pool;
@ -23,7 +24,6 @@ struct xdp_buff_xsk {
dma_addr_t dma;
dma_addr_t frame_dma;
struct xsk_buff_pool *pool;
bool unaligned;
u64 orig_addr;
struct list_head free_list_node;
};
@ -67,6 +67,7 @@ struct xsk_buff_pool {
u32 free_heads_cnt;
u32 headroom;
u32 chunk_size;
u32 chunk_shift;
u32 frame_len;
u8 cached_need_wakeup;
bool uses_need_wakeup;
@ -81,6 +82,13 @@ struct xsk_buff_pool {
struct xdp_buff_xsk *free_heads[];
};
/* Masks for xdp_umem_page flags.
* The low 12-bits of the addr will be 0 since this is the page address, so we
* can use them for flags.
*/
#define XSK_NEXT_PG_CONTIG_SHIFT 0
#define XSK_NEXT_PG_CONTIG_MASK BIT_ULL(XSK_NEXT_PG_CONTIG_SHIFT)
/* AF_XDP core. */
struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
struct xdp_umem *umem);
@ -89,7 +97,6 @@ int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *dev,
int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
struct net_device *dev, u16 queue_id);
void xp_destroy(struct xsk_buff_pool *pool);
void xp_release(struct xdp_buff_xsk *xskb);
void xp_get_pool(struct xsk_buff_pool *pool);
bool xp_put_pool(struct xsk_buff_pool *pool);
void xp_clear_dev(struct xsk_buff_pool *pool);
@ -99,12 +106,28 @@ void xp_del_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs);
/* AF_XDP, and XDP core. */
void xp_free(struct xdp_buff_xsk *xskb);
static inline void xp_init_xskb_addr(struct xdp_buff_xsk *xskb, struct xsk_buff_pool *pool,
u64 addr)
{
xskb->orig_addr = addr;
xskb->xdp.data_hard_start = pool->addrs + addr + pool->headroom;
}
static inline void xp_init_xskb_dma(struct xdp_buff_xsk *xskb, struct xsk_buff_pool *pool,
dma_addr_t *dma_pages, u64 addr)
{
xskb->frame_dma = (dma_pages[addr >> PAGE_SHIFT] & ~XSK_NEXT_PG_CONTIG_MASK) +
(addr & ~PAGE_MASK);
xskb->dma = xskb->frame_dma + pool->headroom + XDP_PACKET_HEADROOM;
}
/* AF_XDP ZC drivers, via xdp_sock_buff.h */
void xp_set_rxq_info(struct xsk_buff_pool *pool, struct xdp_rxq_info *rxq);
int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
unsigned long attrs, struct page **pages, u32 nr_pages);
void xp_dma_unmap(struct xsk_buff_pool *pool, unsigned long attrs);
struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool);
u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max);
bool xp_can_alloc(struct xsk_buff_pool *pool, u32 count);
void *xp_raw_get_data(struct xsk_buff_pool *pool, u64 addr);
dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr);
@ -180,4 +203,25 @@ static inline u64 xp_unaligned_add_offset_to_addr(u64 addr)
xp_unaligned_extract_offset(addr);
}
static inline u32 xp_aligned_extract_idx(struct xsk_buff_pool *pool, u64 addr)
{
return xp_aligned_extract_addr(pool, addr) >> pool->chunk_shift;
}
static inline void xp_release(struct xdp_buff_xsk *xskb)
{
if (xskb->pool->unaligned)
xskb->pool->free_heads[xskb->pool->free_heads_cnt++] = xskb;
}
static inline u64 xp_get_handle(struct xdp_buff_xsk *xskb)
{
u64 offset = xskb->xdp.data - xskb->xdp.data_hard_start;
offset += xskb->pool->headroom;
if (!xskb->pool->unaligned)
return xskb->orig_addr + offset;
return xskb->orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);
}
#endif /* XSK_BUFF_POOL_H_ */


@ -4046,7 +4046,7 @@ union bpf_attr {
* arguments. The *data* are a **u64** array and corresponding format string
* values are stored in the array. For strings and pointers where pointees
* are accessed, only the pointer values are stored in the *data* array.
* The *data_len* is the size of *data* in bytes.
* The *data_len* is the size of *data* in bytes - must be a multiple of 8.
*
* Formats **%s**, **%p{i,I}{4,6}** requires to read kernel memory.
* Reading kernel memory may fail due to either invalid address or
@ -4751,7 +4751,8 @@ union bpf_attr {
* Each format specifier in **fmt** corresponds to one u64 element
* in the **data** array. For strings and pointers where pointees
* are accessed, only the pointer values are stored in the *data*
* array. The *data_len* is the size of *data* in bytes.
* array. The *data_len* is the size of *data* in bytes - must be
* a multiple of 8.
*
* Formats **%s** and **%p{i,I}{4,6}** require to read kernel
* memory. Reading kernel memory may fail due to either invalid
@ -4898,6 +4899,16 @@ union bpf_attr {
* **-EINVAL** if *flags* is not zero.
*
* **-ENOENT** if architecture does not support branch records.
*
* long bpf_trace_vprintk(const char *fmt, u32 fmt_size, const void *data, u32 data_len)
* Description
* Behaves like **bpf_trace_printk**\ () helper, but takes an array of u64
* to format and can handle more format args as a result.
*
* Arguments are to be used as in **bpf_seq_printf**\ () helper.
* Return
* The number of bytes written to the buffer, or a negative error
* in case of failure.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@ -5077,6 +5088,7 @@ union bpf_attr {
FN(get_attach_cookie), \
FN(task_pt_regs), \
FN(get_branch_snapshot), \
FN(trace_vprintk), \
/* */
/* integer value in 'imm' field of BPF_CALL instruction selects which helper
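
As a hedged illustration (not part of the diff; the program below is
hypothetical and assumes BPF helper headers regenerated after this change),
this is the calling convention of the new helper as seen from a BPF program:
the variadic values are packed into a u64 array, and data_len, the array size
in bytes, must be a multiple of 8:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

SEC("tracepoint/syscalls/sys_enter_write")
int trace_write(void *ctx)
{
	const char fmt[] = "a=%d b=%d c=%d d=%d\n";
	__u64 args[] = { 1, 2, 3, 4 };

	/* sizeof(args) == 32 bytes, a multiple of 8 as the helper requires */
	bpf_trace_vprintk(fmt, sizeof(fmt), args, sizeof(args));
	return 0;
}
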


@ -645,7 +645,7 @@ static const struct bpf_iter_seq_info iter_seq_info = {
.seq_priv_size = sizeof(struct bpf_iter_seq_array_map_info),
};
static int bpf_for_each_array_elem(struct bpf_map *map, void *callback_fn,
static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn,
void *callback_ctx, u64 flags)
{
u32 i, key, num_elems = 0;
@ -668,9 +668,8 @@ static int bpf_for_each_array_elem(struct bpf_map *map, void *callback_fn,
val = array->value + array->elem_size * i;
num_elems++;
key = i;
ret = BPF_CAST_CALL(callback_fn)((u64)(long)map,
(u64)(long)&key, (u64)(long)val,
(u64)(long)callback_ctx, 0);
ret = callback_fn((u64)(long)map, (u64)(long)&key,
(u64)(long)val, (u64)(long)callback_ctx, 0);
/* return value: 0 - continue, 1 - stop and return */
if (ret)
break;
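
For context, a hedged BPF-program-side sketch (not part of the diff; map and
program names are hypothetical): the callback the kernel invokes above through
bpf_callback_t has the (map, key, value, ctx) shape used by
bpf_for_each_map_elem(), with the trailing argument fixed to 0:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} my_array SEC(".maps");

/* Called once per element; return 0 to continue, 1 to stop, as above. */
static __u64 sum_elem(struct bpf_map *map, __u32 *key, __u64 *val, void *ctx)
{
	*(__u64 *)ctx += *val;
	return 0;
}

SEC("tc")
int sum_array(struct __sk_buff *skb)
{
	__u64 sum = 0;

	bpf_for_each_map_elem(&my_array, sum_elem, &sum, 0);
	return 0;
}
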


@ -2357,6 +2357,11 @@ const struct bpf_func_proto * __weak bpf_get_trace_printk_proto(void)
return NULL;
}
const struct bpf_func_proto * __weak bpf_get_trace_vprintk_proto(void)
{
return NULL;
}
u64 __weak
bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)


@ -668,7 +668,7 @@ static int htab_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
(void *(*)(struct bpf_map *map, void *key))NULL));
*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 1);
*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
offsetof(struct htab_elem, key) +
@ -709,7 +709,7 @@ static int htab_lru_map_gen_lookup(struct bpf_map *map,
BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
(void *(*)(struct bpf_map *map, void *key))NULL));
*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 4);
*insn++ = BPF_LDX_MEM(BPF_B, ref_reg, ret,
offsetof(struct htab_elem, lru_node) +
@ -2049,7 +2049,7 @@ static const struct bpf_iter_seq_info iter_seq_info = {
.seq_priv_size = sizeof(struct bpf_iter_seq_hash_map_info),
};
static int bpf_for_each_hash_elem(struct bpf_map *map, void *callback_fn,
static int bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_fn,
void *callback_ctx, u64 flags)
{
struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
@ -2089,9 +2089,8 @@ static int bpf_for_each_hash_elem(struct bpf_map *map, void *callback_fn,
val = elem->key + roundup_key_size;
}
num_elems++;
ret = BPF_CAST_CALL(callback_fn)((u64)(long)map,
(u64)(long)key, (u64)(long)val,
(u64)(long)callback_ctx, 0);
ret = callback_fn((u64)(long)map, (u64)(long)key,
(u64)(long)val, (u64)(long)callback_ctx, 0);
/* return value: 0 - continue, 1 - stop and return */
if (ret) {
rcu_read_unlock();
@ -2397,7 +2396,7 @@ static int htab_of_map_gen_lookup(struct bpf_map *map,
BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
(void *(*)(struct bpf_map *map, void *key))NULL));
*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 2);
*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
offsetof(struct htab_elem, key) +


@ -979,15 +979,13 @@ out:
return err;
}
#define MAX_SNPRINTF_VARARGS 12
BPF_CALL_5(bpf_snprintf, char *, str, u32, str_size, char *, fmt,
const void *, data, u32, data_len)
{
int err, num_args;
u32 *bin_args;
if (data_len % 8 || data_len > MAX_SNPRINTF_VARARGS * 8 ||
if (data_len % 8 || data_len > MAX_BPRINTF_VARARGS * 8 ||
(data_len && !data))
return -EINVAL;
num_args = data_len / 8;
@ -1058,7 +1056,7 @@ static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
struct bpf_hrtimer *t = container_of(hrtimer, struct bpf_hrtimer, timer);
struct bpf_map *map = t->map;
void *value = t->value;
void *callback_fn;
bpf_callback_t callback_fn;
void *key;
u32 idx;
@ -1083,8 +1081,7 @@ static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
key = value - round_up(map->key_size, 8);
}
BPF_CAST_CALL(callback_fn)((u64)(long)map, (u64)(long)key,
(u64)(long)value, 0, 0);
callback_fn((u64)(long)map, (u64)(long)key, (u64)(long)value, 0, 0);
/* The verifier checked that return value is zero. */
this_cpu_write(hrtimer_running, NULL);
@ -1437,6 +1434,8 @@ bpf_base_func_proto(enum bpf_func_id func_id)
return &bpf_snprintf_proto;
case BPF_FUNC_task_pt_regs:
return &bpf_task_pt_regs_proto;
case BPF_FUNC_trace_vprintk:
return bpf_get_trace_vprintk_proto();
default:
return NULL;
}


@ -612,6 +612,20 @@ static const char *kernel_type_name(const struct btf* btf, u32 id)
return btf_name_by_offset(btf, btf_type_by_id(btf, id)->name_off);
}
/* The reg state of a pointer or a bounded scalar was saved when
* it was spilled to the stack.
*/
static bool is_spilled_reg(const struct bpf_stack_state *stack)
{
return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
}
static void scrub_spilled_slot(u8 *stype)
{
if (*stype != STACK_INVALID)
*stype = STACK_MISC;
}
static void print_verifier_state(struct bpf_verifier_env *env,
const struct bpf_func_state *state)
{
@ -717,7 +731,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
continue;
verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
print_liveness(env, state->stack[i].spilled_ptr.live);
if (state->stack[i].slot_type[0] == STACK_SPILL) {
if (is_spilled_reg(&state->stack[i])) {
reg = &state->stack[i].spilled_ptr;
t = reg->type;
verbose(env, "=%s", reg_type_str[t]);
@ -1730,7 +1744,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
desc = &tab->descs[tab->nr_descs++];
desc->func_id = func_id;
desc->imm = BPF_CAST_CALL(addr) - __bpf_call_base;
desc->imm = BPF_CALL_IMM(addr);
err = btf_distill_func_proto(&env->log, btf_vmlinux,
func_proto, func_name,
&desc->func_model);
@ -2373,7 +2387,7 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
reg->precise = true;
}
for (j = 0; j < func->allocated_stack / BPF_REG_SIZE; j++) {
if (func->stack[j].slot_type[0] != STACK_SPILL)
if (!is_spilled_reg(&func->stack[j]))
continue;
reg = &func->stack[j].spilled_ptr;
if (reg->type != SCALAR_VALUE)
@ -2415,7 +2429,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
}
while (spi >= 0) {
if (func->stack[spi].slot_type[0] != STACK_SPILL) {
if (!is_spilled_reg(&func->stack[spi])) {
stack_mask = 0;
break;
}
@ -2514,7 +2528,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
return 0;
}
if (func->stack[i].slot_type[0] != STACK_SPILL) {
if (!is_spilled_reg(&func->stack[i])) {
stack_mask &= ~(1ull << i);
continue;
}
@ -2626,15 +2640,21 @@ static bool __is_pointer_value(bool allow_ptr_leaks,
}
static void save_register_state(struct bpf_func_state *state,
int spi, struct bpf_reg_state *reg)
int spi, struct bpf_reg_state *reg,
int size)
{
int i;
state->stack[spi].spilled_ptr = *reg;
state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
if (size == BPF_REG_SIZE)
state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
for (i = 0; i < BPF_REG_SIZE; i++)
state->stack[spi].slot_type[i] = STACK_SPILL;
for (i = BPF_REG_SIZE; i > BPF_REG_SIZE - size; i--)
state->stack[spi].slot_type[i - 1] = STACK_SPILL;
/* size < 8 bytes spill */
for (; i; i--)
scrub_spilled_slot(&state->stack[spi].slot_type[i - 1]);
}
/* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
@ -2681,7 +2701,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
}
if (reg && size == BPF_REG_SIZE && register_is_bounded(reg) &&
if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
!register_is_null(reg) && env->bpf_capable) {
if (dst_reg != BPF_REG_FP) {
/* The backtracking logic can only recognize explicit
@ -2694,7 +2714,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
if (err)
return err;
}
save_register_state(state, spi, reg);
save_register_state(state, spi, reg, size);
} else if (reg && is_spillable_regtype(reg->type)) {
/* register containing pointer is being spilled into stack */
if (size != BPF_REG_SIZE) {
@ -2706,16 +2726,16 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
verbose(env, "cannot spill pointers to stack into stack frame of the caller\n");
return -EINVAL;
}
save_register_state(state, spi, reg);
save_register_state(state, spi, reg, size);
} else {
u8 type = STACK_MISC;
/* regular write of data into stack destroys any spilled ptr */
state->stack[spi].spilled_ptr.type = NOT_INIT;
/* Mark slots as STACK_MISC if they belonged to spilled ptr. */
if (state->stack[spi].slot_type[0] == STACK_SPILL)
if (is_spilled_reg(&state->stack[spi]))
for (i = 0; i < BPF_REG_SIZE; i++)
state->stack[spi].slot_type[i] = STACK_MISC;
scrub_spilled_slot(&state->stack[spi].slot_type[i]);
/* only mark the slot as written if all 8 bytes were written
* otherwise read propagation may incorrectly stop too soon
@ -2918,23 +2938,50 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
struct bpf_func_state *state = vstate->frame[vstate->curframe];
int i, slot = -off - 1, spi = slot / BPF_REG_SIZE;
struct bpf_reg_state *reg;
u8 *stype;
u8 *stype, type;
stype = reg_state->stack[spi].slot_type;
reg = &reg_state->stack[spi].spilled_ptr;
if (stype[0] == STACK_SPILL) {
if (is_spilled_reg(&reg_state->stack[spi])) {
if (size != BPF_REG_SIZE) {
u8 scalar_size = 0;
if (reg->type != SCALAR_VALUE) {
verbose_linfo(env, env->insn_idx, "; ");
verbose(env, "invalid size of register fill\n");
return -EACCES;
}
if (dst_regno >= 0) {
mark_reg_unknown(env, state->regs, dst_regno);
state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
}
mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
if (dst_regno < 0)
return 0;
for (i = BPF_REG_SIZE; i > 0 && stype[i - 1] == STACK_SPILL; i--)
scalar_size++;
if (!(off % BPF_REG_SIZE) && size == scalar_size) {
/* The earlier check_reg_arg() has decided the
* subreg_def for this insn. Save it first.
*/
s32 subreg_def = state->regs[dst_regno].subreg_def;
state->regs[dst_regno] = *reg;
state->regs[dst_regno].subreg_def = subreg_def;
} else {
for (i = 0; i < size; i++) {
type = stype[(slot - i) % BPF_REG_SIZE];
if (type == STACK_SPILL)
continue;
if (type == STACK_MISC)
continue;
verbose(env, "invalid read from stack off %d+%d size %d\n",
off, i, size);
return -EACCES;
}
mark_reg_unknown(env, state->regs, dst_regno);
}
state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
return 0;
}
for (i = 1; i < BPF_REG_SIZE; i++) {
@ -2965,8 +3012,6 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
}
mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
} else {
u8 type;
for (i = 0; i < size; i++) {
type = stype[(slot - i) % BPF_REG_SIZE];
if (type == STACK_MISC)
@ -4514,17 +4559,17 @@ static int check_stack_range_initialized(
goto mark;
}
if (state->stack[spi].slot_type[0] == STACK_SPILL &&
if (is_spilled_reg(&state->stack[spi]) &&
state->stack[spi].spilled_ptr.type == PTR_TO_BTF_ID)
goto mark;
if (state->stack[spi].slot_type[0] == STACK_SPILL &&
if (is_spilled_reg(&state->stack[spi]) &&
(state->stack[spi].spilled_ptr.type == SCALAR_VALUE ||
env->allow_ptr_leaks)) {
if (clobber) {
__mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
for (j = 0; j < BPF_REG_SIZE; j++)
state->stack[spi].slot_type[j] = STACK_MISC;
scrub_spilled_slot(&state->stack[spi].slot_type[j]);
}
goto mark;
}
@ -10356,9 +10401,9 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
* return false to continue verification of this path
*/
return false;
if (i % BPF_REG_SIZE)
if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
continue;
if (old->stack[spi].slot_type[0] != STACK_SPILL)
if (!is_spilled_reg(&old->stack[spi]))
continue;
if (!regsafe(env, &old->stack[spi].spilled_ptr,
&cur->stack[spi].spilled_ptr, idmap))
@ -10565,7 +10610,7 @@ static int propagate_precision(struct bpf_verifier_env *env,
}
for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
if (state->stack[i].slot_type[0] != STACK_SPILL)
if (!is_spilled_reg(&state->stack[i]))
continue;
state_reg = &state->stack[i].spilled_ptr;
if (state_reg->type != SCALAR_VALUE ||
@ -12469,8 +12514,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
if (!bpf_pseudo_call(insn))
continue;
subprog = insn->off;
insn->imm = BPF_CAST_CALL(func[subprog]->bpf_func) -
__bpf_call_base;
insn->imm = BPF_CALL_IMM(func[subprog]->bpf_func);
}
/* we use the aux data to keep a list of the start addresses
@ -12950,32 +12994,25 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
patch_map_ops_generic:
switch (insn->imm) {
case BPF_FUNC_map_lookup_elem:
insn->imm = BPF_CAST_CALL(ops->map_lookup_elem) -
__bpf_call_base;
insn->imm = BPF_CALL_IMM(ops->map_lookup_elem);
continue;
case BPF_FUNC_map_update_elem:
insn->imm = BPF_CAST_CALL(ops->map_update_elem) -
__bpf_call_base;
insn->imm = BPF_CALL_IMM(ops->map_update_elem);
continue;
case BPF_FUNC_map_delete_elem:
insn->imm = BPF_CAST_CALL(ops->map_delete_elem) -
__bpf_call_base;
insn->imm = BPF_CALL_IMM(ops->map_delete_elem);
continue;
case BPF_FUNC_map_push_elem:
insn->imm = BPF_CAST_CALL(ops->map_push_elem) -
__bpf_call_base;
insn->imm = BPF_CALL_IMM(ops->map_push_elem);
continue;
case BPF_FUNC_map_pop_elem:
insn->imm = BPF_CAST_CALL(ops->map_pop_elem) -
__bpf_call_base;
insn->imm = BPF_CALL_IMM(ops->map_pop_elem);
continue;
case BPF_FUNC_map_peek_elem:
insn->imm = BPF_CAST_CALL(ops->map_peek_elem) -
__bpf_call_base;
insn->imm = BPF_CALL_IMM(ops->map_peek_elem);
continue;
case BPF_FUNC_redirect_map:
insn->imm = BPF_CAST_CALL(ops->map_redirect) -
__bpf_call_base;
insn->imm = BPF_CALL_IMM(ops->map_redirect);
continue;
}


@ -398,7 +398,7 @@ static const struct bpf_func_proto bpf_trace_printk_proto = {
.arg2_type = ARG_CONST_SIZE,
};
const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
static void __set_printk_clr_event(void)
{
/*
* This program might be calling bpf_trace_printk,
@ -410,11 +410,57 @@ const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
*/
if (trace_set_clr_event("bpf_trace", "bpf_trace_printk", 1))
pr_warn_ratelimited("could not enable bpf_trace_printk events");
}
const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
{
__set_printk_clr_event();
return &bpf_trace_printk_proto;
}
#define MAX_SEQ_PRINTF_VARARGS 12
BPF_CALL_4(bpf_trace_vprintk, char *, fmt, u32, fmt_size, const void *, data,
u32, data_len)
{
static char buf[BPF_TRACE_PRINTK_SIZE];
unsigned long flags;
int ret, num_args;
u32 *bin_args;
if (data_len & 7 || data_len > MAX_BPRINTF_VARARGS * 8 ||
(data_len && !data))
return -EINVAL;
num_args = data_len / 8;
ret = bpf_bprintf_prepare(fmt, fmt_size, data, &bin_args, num_args);
if (ret < 0)
return ret;
raw_spin_lock_irqsave(&trace_printk_lock, flags);
ret = bstr_printf(buf, sizeof(buf), fmt, bin_args);
trace_bpf_trace_printk(buf);
raw_spin_unlock_irqrestore(&trace_printk_lock, flags);
bpf_bprintf_cleanup();
return ret;
}
static const struct bpf_func_proto bpf_trace_vprintk_proto = {
.func = bpf_trace_vprintk,
.gpl_only = true,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_MEM,
.arg2_type = ARG_CONST_SIZE,
.arg3_type = ARG_PTR_TO_MEM_OR_NULL,
.arg4_type = ARG_CONST_SIZE_OR_ZERO,
};
const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void)
{
__set_printk_clr_event();
return &bpf_trace_vprintk_proto;
}
BPF_CALL_5(bpf_seq_printf, struct seq_file *, m, char *, fmt, u32, fmt_size,
const void *, data, u32, data_len)
@ -422,7 +468,7 @@ BPF_CALL_5(bpf_seq_printf, struct seq_file *, m, char *, fmt, u32, fmt_size,
int err, num_args;
u32 *bin_args;
if (data_len & 7 || data_len > MAX_SEQ_PRINTF_VARARGS * 8 ||
if (data_len & 7 || data_len > MAX_BPRINTF_VARARGS * 8 ||
(data_len && !data))
return -EINVAL;
num_args = data_len / 8;
@ -1162,6 +1208,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
return &bpf_get_func_ip_proto_tracing;
case BPF_FUNC_get_branch_snapshot:
return &bpf_get_branch_snapshot_proto;
case BPF_FUNC_trace_vprintk:
return bpf_get_trace_vprintk_proto();
default:
return bpf_base_func_proto(func_id);
}

File diff suppressed because it is too large.


@ -807,7 +807,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
if (ret)
goto free_data;
bpf_prog_change_xdp(NULL, prog);
if (repeat > 1)
bpf_prog_change_xdp(NULL, prog);
ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);
/* We convert the xdp_buff back to an xdp_md before checking the return
* code so the reference count of any held netdevice will be decremented
@ -828,7 +829,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
sizeof(struct xdp_md));
out:
bpf_prog_change_xdp(prog, NULL);
if (repeat > 1)
bpf_prog_change_xdp(prog, NULL);
free_data:
kfree(data);
free_ctx:


@ -134,21 +134,6 @@ int xsk_reg_pool_at_qid(struct net_device *dev, struct xsk_buff_pool *pool,
return 0;
}
void xp_release(struct xdp_buff_xsk *xskb)
{
xskb->pool->free_heads[xskb->pool->free_heads_cnt++] = xskb;
}
static u64 xp_get_handle(struct xdp_buff_xsk *xskb)
{
u64 offset = xskb->xdp.data - xskb->xdp.data_hard_start;
offset += xskb->pool->headroom;
if (!xskb->pool->unaligned)
return xskb->orig_addr + offset;
return xskb->orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);
}
static int __xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
{
struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);


@ -44,12 +44,13 @@ void xp_destroy(struct xsk_buff_pool *pool)
struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
struct xdp_umem *umem)
{
bool unaligned = umem->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
struct xsk_buff_pool *pool;
struct xdp_buff_xsk *xskb;
u32 i;
u32 i, entries;
pool = kvzalloc(struct_size(pool, free_heads, umem->chunks),
GFP_KERNEL);
entries = unaligned ? umem->chunks : 0;
pool = kvzalloc(struct_size(pool, free_heads, entries), GFP_KERNEL);
if (!pool)
goto out;
@ -63,7 +64,8 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
pool->free_heads_cnt = umem->chunks;
pool->headroom = umem->headroom;
pool->chunk_size = umem->chunk_size;
pool->unaligned = umem->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
pool->chunk_shift = ffs(umem->chunk_size) - 1;
pool->unaligned = unaligned;
pool->frame_len = umem->chunk_size - umem->headroom -
XDP_PACKET_HEADROOM;
pool->umem = umem;
@ -81,7 +83,10 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
xskb = &pool->heads[i];
xskb->pool = pool;
xskb->xdp.frame_sz = umem->chunk_size - umem->headroom;
pool->free_heads[i] = xskb;
if (pool->unaligned)
pool->free_heads[i] = xskb;
else
xp_init_xskb_addr(xskb, pool, i * pool->chunk_size);
}
return pool;
@ -406,6 +411,12 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
if (pool->unaligned)
xp_check_dma_contiguity(dma_map);
else
for (i = 0; i < pool->heads_cnt; i++) {
struct xdp_buff_xsk *xskb = &pool->heads[i];
xp_init_xskb_dma(xskb, pool, dma_map->dma_pages, xskb->orig_addr);
}
err = xp_init_dma_info(pool, dma_map);
if (err) {
@ -448,12 +459,9 @@ static struct xdp_buff_xsk *__xp_alloc(struct xsk_buff_pool *pool)
if (pool->free_heads_cnt == 0)
return NULL;
xskb = pool->free_heads[--pool->free_heads_cnt];
for (;;) {
if (!xskq_cons_peek_addr_unchecked(pool->fq, &addr)) {
pool->fq->queue_empty_descs++;
xp_release(xskb);
return NULL;
}
@ -466,17 +474,17 @@ static struct xdp_buff_xsk *__xp_alloc(struct xsk_buff_pool *pool)
}
break;
}
xskq_cons_release(pool->fq);
xskb->orig_addr = addr;
xskb->xdp.data_hard_start = pool->addrs + addr + pool->headroom;
if (pool->dma_pages_cnt) {
xskb->frame_dma = (pool->dma_pages[addr >> PAGE_SHIFT] &
~XSK_NEXT_PG_CONTIG_MASK) +
(addr & ~PAGE_MASK);
xskb->dma = xskb->frame_dma + pool->headroom +
XDP_PACKET_HEADROOM;
if (pool->unaligned) {
xskb = pool->free_heads[--pool->free_heads_cnt];
xp_init_xskb_addr(xskb, pool, addr);
if (pool->dma_pages_cnt)
xp_init_xskb_dma(xskb, pool, pool->dma_pages, addr);
} else {
xskb = &pool->heads[xp_aligned_extract_idx(pool, addr)];
}
xskq_cons_release(pool->fq);
return xskb;
}
@ -507,6 +515,96 @@ struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool)
}
EXPORT_SYMBOL(xp_alloc);
static u32 xp_alloc_new_from_fq(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
{
u32 i, cached_cons, nb_entries;
if (max > pool->free_heads_cnt)
max = pool->free_heads_cnt;
max = xskq_cons_nb_entries(pool->fq, max);
cached_cons = pool->fq->cached_cons;
nb_entries = max;
i = max;
while (i--) {
struct xdp_buff_xsk *xskb;
u64 addr;
bool ok;
__xskq_cons_read_addr_unchecked(pool->fq, cached_cons++, &addr);
ok = pool->unaligned ? xp_check_unaligned(pool, &addr) :
xp_check_aligned(pool, &addr);
if (unlikely(!ok)) {
pool->fq->invalid_descs++;
nb_entries--;
continue;
}
if (pool->unaligned) {
xskb = pool->free_heads[--pool->free_heads_cnt];
xp_init_xskb_addr(xskb, pool, addr);
if (pool->dma_pages_cnt)
xp_init_xskb_dma(xskb, pool, pool->dma_pages, addr);
} else {
xskb = &pool->heads[xp_aligned_extract_idx(pool, addr)];
}
*xdp = &xskb->xdp;
xdp++;
}
xskq_cons_release_n(pool->fq, max);
return nb_entries;
}
static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 nb_entries)
{
struct xdp_buff_xsk *xskb;
u32 i;
nb_entries = min_t(u32, nb_entries, pool->free_list_cnt);
i = nb_entries;
while (i--) {
xskb = list_first_entry(&pool->free_list, struct xdp_buff_xsk, free_list_node);
list_del(&xskb->free_list_node);
*xdp = &xskb->xdp;
xdp++;
}
pool->free_list_cnt -= nb_entries;
return nb_entries;
}
u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
{
u32 nb_entries1 = 0, nb_entries2;
if (unlikely(pool->dma_need_sync)) {
/* Slow path */
*xdp = xp_alloc(pool);
return !!*xdp;
}
if (unlikely(pool->free_list_cnt)) {
nb_entries1 = xp_alloc_reused(pool, xdp, max);
if (nb_entries1 == max)
return nb_entries1;
max -= nb_entries1;
xdp += nb_entries1;
}
nb_entries2 = xp_alloc_new_from_fq(pool, xdp, max);
if (!nb_entries2)
pool->fq->queue_empty_descs++;
return nb_entries1 + nb_entries2;
}
EXPORT_SYMBOL(xp_alloc_batch);
bool xp_can_alloc(struct xsk_buff_pool *pool, u32 count)
{
if (pool->free_list_cnt >= count)


@ -111,14 +111,18 @@ struct xsk_queue {
/* Functions that read and validate content from consumer rings. */
static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
static inline void __xskq_cons_read_addr_unchecked(struct xsk_queue *q, u32 cached_cons, u64 *addr)
{
struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
u32 idx = cached_cons & q->ring_mask;
*addr = ring->desc[idx];
}
static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
{
if (q->cached_cons != q->cached_prod) {
u32 idx = q->cached_cons & q->ring_mask;
*addr = ring->desc[idx];
__xskq_cons_read_addr_unchecked(q, q->cached_cons, addr);
return true;
}


@ -155,7 +155,7 @@ static void read_route(struct nlmsghdr *nh, int nll)
printf("%d\n", nh->nlmsg_type);
memset(&route, 0, sizeof(route));
printf("Destination\t\tGateway\t\tGenmask\t\tMetric\t\tIface\n");
printf("Destination Gateway Genmask Metric Iface\n");
for (; NLMSG_OK(nh, nll); nh = NLMSG_NEXT(nh, nll)) {
rt_msg = (struct rtmsg *)NLMSG_DATA(nh);
rtm_family = rt_msg->rtm_family;
@ -207,6 +207,7 @@ static void read_route(struct nlmsghdr *nh, int nll)
int metric;
__be32 gw;
} *prefix_value;
struct in_addr dst_addr, gw_addr, mask_addr;
prefix_key = alloca(sizeof(*prefix_key) + 3);
prefix_value = alloca(sizeof(*prefix_value));
@ -234,14 +235,17 @@ static void read_route(struct nlmsghdr *nh, int nll)
for (i = 0; i < 4; i++)
prefix_key->data[i] = (route.dst >> i * 8) & 0xff;
printf("%3d.%d.%d.%d\t\t%3x\t\t%d\t\t%d\t\t%s\n",
(int)prefix_key->data[0],
(int)prefix_key->data[1],
(int)prefix_key->data[2],
(int)prefix_key->data[3],
route.gw, route.dst_len,
dst_addr.s_addr = route.dst;
printf("%-16s", inet_ntoa(dst_addr));
gw_addr.s_addr = route.gw;
printf("%-16s", inet_ntoa(gw_addr));
mask_addr.s_addr = htonl(~(0xffffffffU >> route.dst_len));
printf("%-16s%-7d%s\n", inet_ntoa(mask_addr),
route.metric,
route.iface_name);
if (bpf_map_lookup_elem(lpm_map_fd, prefix_key,
prefix_value) < 0) {
for (i = 0; i < 4; i++)
@ -393,8 +397,12 @@ static void read_arp(struct nlmsghdr *nh, int nll)
if (nh->nlmsg_type == RTM_GETNEIGH)
printf("READING arp entry\n");
printf("Address\tHwAddress\n");
printf("Address HwAddress\n");
for (; NLMSG_OK(nh, nll); nh = NLMSG_NEXT(nh, nll)) {
struct in_addr dst_addr;
char mac_str[18];
int len = 0, i;
rt_msg = (struct ndmsg *)NLMSG_DATA(nh);
rt_attr = (struct rtattr *)RTM_RTA(rt_msg);
ndm_family = rt_msg->ndm_family;
@ -415,7 +423,14 @@ static void read_arp(struct nlmsghdr *nh, int nll)
}
arp_entry.dst = atoi(dsts);
arp_entry.mac = atol(mac);
printf("%x\t\t%llx\n", arp_entry.dst, arp_entry.mac);
dst_addr.s_addr = arp_entry.dst;
for (i = 0; i < 6; i++)
len += snprintf(mac_str + len, 18 - len, "%02llx%s",
((arp_entry.mac >> i * 8) & 0xff),
i < 5 ? ":" : "");
printf("%-16s%s\n", inet_ntoa(dst_addr), mac_str);
if (ndm_family == AF_INET) {
if (bpf_map_lookup_elem(exact_match_map_fd,
&arp_entry.dst,
@ -672,7 +687,7 @@ int main(int ac, char **argv)
if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd))
return 1;
printf("\n**************loading bpf file*********************\n\n\n");
printf("\n******************loading bpf file*********************\n");
if (!prog_fd) {
printf("bpf_prog_load_xattr: %s\n", strerror(errno));
return 1;
@ -722,9 +737,9 @@ int main(int ac, char **argv)
signal(SIGINT, int_exit);
signal(SIGTERM, int_exit);
printf("*******************ROUTE TABLE*************************\n\n\n");
printf("\n*******************ROUTE TABLE*************************\n");
get_route_table(AF_INET);
printf("*******************ARP TABLE***************************\n\n\n");
printf("\n*******************ARP TABLE***************************\n");
get_arp_table(AF_INET);
if (monitor_route() < 0) {
printf("Error in receiving route update");


@ -624,6 +624,7 @@ probe_helpers_for_progtype(enum bpf_prog_type prog_type, bool supported_type,
*/
switch (id) {
case BPF_FUNC_trace_printk:
case BPF_FUNC_trace_vprintk:
case BPF_FUNC_probe_write_user:
if (!full_mode)
continue;


@ -803,7 +803,10 @@ static int do_skeleton(int argc, char **argv)
} \n\
\n\
err = %1$s__create_skeleton(obj); \n\
err = err ?: bpf_object__open_skeleton(obj->skeleton, opts);\n\
if (err) \n\
goto err_out; \n\
\n\
err = bpf_object__open_skeleton(obj->skeleton, opts);\n\
if (err) \n\
goto err_out; \n\
\n\


@ -4046,7 +4046,7 @@ union bpf_attr {
* arguments. The *data* are a **u64** array and corresponding format string
* values are stored in the array. For strings and pointers where pointees
* are accessed, only the pointer values are stored in the *data* array.
* The *data_len* is the size of *data* in bytes.
* The *data_len* is the size of *data* in bytes - must be a multiple of 8.
*
* Formats **%s**, **%p{i,I}{4,6}** requires to read kernel memory.
* Reading kernel memory may fail due to either invalid address or
@ -4751,7 +4751,8 @@ union bpf_attr {
* Each format specifier in **fmt** corresponds to one u64 element
* in the **data** array. For strings and pointers where pointees
* are accessed, only the pointer values are stored in the *data*
* array. The *data_len* is the size of *data* in bytes.
* array. The *data_len* is the size of *data* in bytes - must be
* a multiple of 8.
*
* Formats **%s** and **%p{i,I}{4,6}** require to read kernel
* memory. Reading kernel memory may fail due to either invalid
@ -4898,6 +4899,16 @@ union bpf_attr {
* **-EINVAL** if *flags* is not zero.
*
* **-ENOENT** if architecture does not support branch records.
*
* long bpf_trace_vprintk(const char *fmt, u32 fmt_size, const void *data, u32 data_len)
* Description
* Behaves like **bpf_trace_printk**\ () helper, but takes an array of u64
* to format and can handle more format args as a result.
*
* Arguments are to be used as in **bpf_seq_printf**\ () helper.
* Return
* The number of bytes written to the buffer, or a negative error
* in case of failure.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@ -5077,6 +5088,7 @@ union bpf_attr {
FN(get_attach_cookie), \
FN(task_pt_regs), \
FN(get_branch_snapshot), \
FN(trace_vprintk), \
/* */
/* integer value in 'imm' field of BPF_CALL instruction selects which helper


@ -14,14 +14,6 @@
#define __type(name, val) typeof(val) *name
#define __array(name, val) typeof(val) *name[]
/* Helper macro to print out debug messages */
#define bpf_printk(fmt, ...) \
({ \
char ____fmt[] = fmt; \
bpf_trace_printk(____fmt, sizeof(____fmt), \
##__VA_ARGS__); \
})
/*
* Helper macro to place programs, maps, license in
* different sections in elf_bpf file. Section names
@ -224,4 +216,47 @@ enum libbpf_tristate {
___param, sizeof(___param)); \
})
#ifdef BPF_NO_GLOBAL_DATA
#define BPF_PRINTK_FMT_MOD
#else
#define BPF_PRINTK_FMT_MOD static const
#endif
#define __bpf_printk(fmt, ...) \
({ \
BPF_PRINTK_FMT_MOD char ____fmt[] = fmt; \
bpf_trace_printk(____fmt, sizeof(____fmt), \
##__VA_ARGS__); \
})
/*
* __bpf_vprintk wraps the bpf_trace_vprintk helper with variadic arguments
* instead of an array of u64.
*/
#define __bpf_vprintk(fmt, args...) \
({ \
static const char ___fmt[] = fmt; \
unsigned long long ___param[___bpf_narg(args)]; \
\
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
___bpf_fill(___param, args); \
_Pragma("GCC diagnostic pop") \
\
bpf_trace_vprintk(___fmt, sizeof(___fmt), \
___param, sizeof(___param)); \
})
/* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
* Otherwise use __bpf_vprintk
*/
#define ___bpf_pick_printk(...) \
___bpf_nth(_, ##__VA_ARGS__, __bpf_vprintk, __bpf_vprintk, __bpf_vprintk, \
__bpf_vprintk, __bpf_vprintk, __bpf_vprintk, __bpf_vprintk, \
__bpf_vprintk, __bpf_vprintk, __bpf_printk /*3*/, __bpf_printk /*2*/,\
__bpf_printk /*1*/, __bpf_printk /*0*/)
/* Helper macro to print out debug messages */
#define bpf_printk(fmt, args...) ___bpf_pick_printk(args)(fmt, ##args)
#endif
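
A hedged usage sketch (not part of the diff; the tracepoint and values are made
up) of how the reworked bpf_printk() picks its backend: up to three format
arguments still go through __bpf_printk()/bpf_trace_printk(), more than three
go through __bpf_vprintk()/bpf_trace_vprintk():

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

SEC("tracepoint/syscalls/sys_enter_nanosleep")
int handle_nanosleep(void *ctx)
{
	int a = 1, b = 2, c = 3, d = 4;

	bpf_printk("three args: %d %d %d", a, b, c);      /* bpf_trace_printk() */
	bpf_printk("four args: %d %d %d %d", a, b, c, d); /* bpf_trace_vprintk() */
	return 0;
}
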


@ -5,6 +5,7 @@
#include <string.h>
#include <errno.h>
#include <linux/filter.h>
#include <sys/param.h>
#include "btf.h"
#include "bpf.h"
#include "libbpf.h"
@ -135,13 +136,17 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level)
static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
{
__u32 size8 = roundup(size, 8);
__u64 zero = 0;
void *prev;
if (realloc_data_buf(gen, size))
if (realloc_data_buf(gen, size8))
return 0;
prev = gen->data_cur;
memcpy(gen->data_cur, data, size);
gen->data_cur += size;
memcpy(gen->data_cur, &zero, size8 - size);
gen->data_cur += size8 - size;
return prev - gen->data_start;
}

File diff suppressed because it is too large.


@ -269,7 +269,7 @@ struct bpf_kprobe_opts {
/* custom user-provided value fetchable through bpf_get_attach_cookie() */
__u64 bpf_cookie;
/* function's offset to install kprobe to */
unsigned long offset;
size_t offset;
/* kprobe is return probe */
bool retprobe;
size_t :0;
@ -481,9 +481,13 @@ struct bpf_map_def {
unsigned int map_flags;
};
/*
* The 'struct bpf_map' in include/linux/bpf.h is internal to the kernel,
* so no need to worry about a name clash.
/**
* @brief **bpf_object__find_map_by_name()** returns BPF map of
* the given name, if it exists within the passed BPF object
* @param obj BPF object
* @param name name of the BPF map
* @return BPF map instance, if such map exists within the BPF object;
* or NULL otherwise.
*/
LIBBPF_API struct bpf_map *
bpf_object__find_map_by_name(const struct bpf_object *obj, const char *name);
@ -509,7 +513,12 @@ bpf_map__next(const struct bpf_map *map, const struct bpf_object *obj);
LIBBPF_API struct bpf_map *
bpf_map__prev(const struct bpf_map *map, const struct bpf_object *obj);
/* get/set map FD */
/**
* @brief **bpf_map__fd()** gets the file descriptor of the passed
* BPF map
* @param map the BPF map instance
* @return the file descriptor; or -EINVAL in case of an error
*/
LIBBPF_API int bpf_map__fd(const struct bpf_map *map);
LIBBPF_API int bpf_map__reuse_fd(struct bpf_map *map, int fd);
/* get map definition */
@ -550,6 +559,14 @@ LIBBPF_API int bpf_map__set_initial_value(struct bpf_map *map,
const void *data, size_t size);
LIBBPF_API const void *bpf_map__initial_value(struct bpf_map *map, size_t *psize);
LIBBPF_API bool bpf_map__is_offload_neutral(const struct bpf_map *map);
/**
* @brief **bpf_map__is_internal()** tells the caller whether or not the
* passed map is a special map created by libbpf automatically for things like
* global variables, __ksym externs, Kconfig values, etc
* @param map the bpf_map
* @return true, if the map is an internal map; false, otherwise
*/
LIBBPF_API bool bpf_map__is_internal(const struct bpf_map *map);
LIBBPF_API int bpf_map__set_pin_path(struct bpf_map *map, const char *path);
LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map);
@ -561,6 +578,38 @@ LIBBPF_API int bpf_map__unpin(struct bpf_map *map, const char *path);
LIBBPF_API int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd);
LIBBPF_API struct bpf_map *bpf_map__inner_map(struct bpf_map *map);
/**
* @brief **libbpf_get_error()** extracts the error code from the passed
* pointer
* @param ptr pointer returned from libbpf API function
* @return error code; or 0 if no error occurred
*
* Many libbpf API functions which return pointers have logic to encode error
* codes as pointers, and do not return NULL. Meaning **libbpf_get_error()**
* should be used on the return value from these functions immediately after
* calling the API function, with no intervening calls that could clobber the
* `errno` variable. Consult the individual function's documentation to verify
* whether this logic applies.
*
* For these API functions, if `libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS)`
* is enabled, NULL is returned on error instead.
*
* If ptr is NULL, then errno should be already set by the failing
* API, because libbpf never returns NULL on success and it now always
* sets errno on error.
*
* Example usage:
*
* struct perf_buffer *pb;
*
* pb = perf_buffer__new(bpf_map__fd(obj->maps.events), PERF_BUFFER_PAGES, &opts);
* err = libbpf_get_error(pb);
* if (err) {
* pb = NULL;
* fprintf(stderr, "failed to open perf buffer: %d\n", err);
* goto cleanup;
* }
*/
LIBBPF_API long libbpf_get_error(const void *ptr);
struct bpf_prog_load_attr {
@ -825,9 +874,10 @@ bpf_program__bpil_addr_to_offs(struct bpf_prog_info_linear *info_linear);
LIBBPF_API void
bpf_program__bpil_offs_to_addr(struct bpf_prog_info_linear *info_linear);
/*
* A helper function to get the number of possible CPUs before looking up
* per-CPU maps. Negative errno is returned on failure.
/**
* @brief **libbpf_num_possible_cpus()** is a helper function to get the
* number of possible CPUs that the host kernel supports and expects.
* @return number of possible CPUs; or error code on failure
*
* Example usage:
*
@ -837,7 +887,6 @@ bpf_program__bpil_offs_to_addr(struct bpf_prog_info_linear *info_linear);
* }
* long values[ncpus];
* bpf_map_lookup_elem(per_cpu_map_fd, key, values);
*
*/
LIBBPF_API int libbpf_num_possible_cpus(void);

View File

@ -89,6 +89,13 @@
(offsetof(TYPE, FIELD) + sizeof(((TYPE *)0)->FIELD))
#endif
/* Check whether a string `str` has prefix `pfx`, regardless if `pfx` is
* a string literal known at compilation time or char * pointer known only at
* runtime.
*/
#define str_has_pfx(str, pfx) \
(strncmp(str, pfx, __builtin_constant_p(pfx) ? sizeof(pfx) - 1 : strlen(pfx)) == 0)
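/*
* Example (illustrative; sec_name and user_pfx are hypothetical variables).
* With a string literal the prefix length is folded at compile time via
* sizeof(); with a runtime pointer it falls back to strlen():
*
*	bool is_kprobe = str_has_pfx(sec_name, "kprobe/");
*	bool has_user_pfx = str_has_pfx(sec_name, user_pfx);
*/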
/* Symbol versioning is different between static and shared library.
* Properly versioned symbols are needed for shared library, but
* only the symbol of the new version is needed for static library.

View File

@ -46,6 +46,15 @@ enum libbpf_strict_mode {
*/
LIBBPF_STRICT_DIRECT_ERRS = 0x02,
/*
* Enforce strict BPF program section (SEC()) names.
* E.g., while previously SEC("xdp_whatever") or SEC("perf_event_blah") were
* allowed, with LIBBPF_STRICT_SEC_NAME such names become
* unrecognized by libbpf and would have to be just SEC("xdp") and
* SEC("perf_event").
*/
LIBBPF_STRICT_SEC_NAME = 0x04,
__LIBBPF_STRICT_LAST,
};
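/*
* Example (illustrative): an application opts into the stricter section name
* handling early, before opening any BPF objects:
*
*	err = libbpf_set_strict_mode(LIBBPF_STRICT_SEC_NAME);
*	if (err)
*		return err;
*/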

View File

@ -105,10 +105,12 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
err = skel_sys_bpf(BPF_PROG_RUN, &attr, sizeof(attr));
if (err < 0 || (int)attr.test.retval < 0) {
opts->errstr = "failed to execute loader prog";
if (err < 0)
if (err < 0) {
err = -errno;
else
} else {
err = (int)attr.test.retval;
errno = -err;
}
goto out;
}
err = 0;

View File

@ -315,7 +315,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
linked_vars.skel.h linked_maps.skel.h
LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c
test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
trace_vprintk.c
SKEL_BLACKLIST += $$(LSKELS)
test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o

View File

@ -242,3 +242,16 @@ To fix this issue, user newer libbpf.
.. Links
.. _clang reloc patch: https://reviews.llvm.org/D102712
.. _kernel llvm reloc: /Documentation/bpf/llvm_reloc.rst
Clang dependencies for the u32 spill test (xdpwall)
===================================================
The xdpwall selftest requires a change in `Clang 14`__.
Without it, the xdpwall selftest will fail and the error message
from running test_progs will look like:
.. code-block:: console
test_xdpwall:FAIL:Does LLVM have https://reviews.llvm.org/D109073? unexpected error: -4007
__ https://reviews.llvm.org/D109073

View File

@ -14,6 +14,20 @@ void test_attach_probe(void)
struct test_attach_probe* skel;
size_t uprobe_offset;
ssize_t base_addr, ref_ctr_offset;
bool legacy;
/* Check if new-style kprobe/uprobe API is supported.
* Kernels that support new FD-based kprobe and uprobe BPF attachment
* through perf_event_open() syscall expose
* /sys/bus/event_source/devices/kprobe/type and
* /sys/bus/event_source/devices/uprobe/type files, respectively. They
* contain magic numbers that are passed as "type" field of
* perf_event_attr. The lack of such a file indicates a legacy kernel
* with the old-style kprobe/uprobe attach interface that creates a
* per-probe event through tracefs. In such cases the ref_ctr_offset
* feature is not supported, so we don't test it.
*/
legacy = access("/sys/bus/event_source/devices/kprobe/type", F_OK) != 0;
base_addr = get_base_addr();
if (CHECK(base_addr < 0, "get_base_addr",
@ -45,10 +59,11 @@ void test_attach_probe(void)
goto cleanup;
skel->links.handle_kretprobe = kretprobe_link;
ASSERT_EQ(uprobe_ref_ctr, 0, "uprobe_ref_ctr_before");
if (!legacy)
ASSERT_EQ(uprobe_ref_ctr, 0, "uprobe_ref_ctr_before");
uprobe_opts.retprobe = false;
uprobe_opts.ref_ctr_offset = ref_ctr_offset;
uprobe_opts.ref_ctr_offset = legacy ? 0 : ref_ctr_offset;
uprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uprobe,
0 /* self pid */,
"/proc/self/exe",
@ -58,11 +73,12 @@ void test_attach_probe(void)
goto cleanup;
skel->links.handle_uprobe = uprobe_link;
ASSERT_GT(uprobe_ref_ctr, 0, "uprobe_ref_ctr_after");
if (!legacy)
ASSERT_GT(uprobe_ref_ctr, 0, "uprobe_ref_ctr_after");
/* if uprobe uses ref_ctr, uretprobe has to use ref_ctr as well */
uprobe_opts.retprobe = true;
uprobe_opts.ref_ctr_offset = ref_ctr_offset;
uprobe_opts.ref_ctr_offset = legacy ? 0 : ref_ctr_offset;
uretprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uretprobe,
-1 /* any pid */,
"/proc/self/exe",

View File

@ -358,12 +358,27 @@ static void test_btf_dump_int_data(struct btf *btf, struct btf_dump *d,
TEST_BTF_DUMP_DATA_OVER(btf, d, NULL, str, int, sizeof(int)-1, "", 1);
#ifdef __SIZEOF_INT128__
TEST_BTF_DUMP_DATA(btf, d, NULL, str, __int128, BTF_F_COMPACT,
"(__int128)0xffffffffffffffff",
0xffffffffffffffff);
ASSERT_OK(btf_dump_data(btf, d, "__int128", NULL, 0, &i, 16, str,
"(__int128)0xfffffffffffffffffffffffffffffffe"),
"dump __int128");
/* GCC encodes the unsigned __int128 type with the name "__int128 unsigned" in DWARF,
* while clang encodes it as "unsigned __int128".
* Check which variant is available before running the actual test.
*/
if (btf__find_by_name(btf, "unsigned __int128") > 0) {
TEST_BTF_DUMP_DATA(btf, d, NULL, str, unsigned __int128, BTF_F_COMPACT,
"(unsigned __int128)0xffffffffffffffff",
0xffffffffffffffff);
ASSERT_OK(btf_dump_data(btf, d, "unsigned __int128", NULL, 0, &i, 16, str,
"(unsigned __int128)0xfffffffffffffffffffffffffffffffe"),
"dump unsigned __int128");
} else if (btf__find_by_name(btf, "__int128 unsigned") > 0) {
TEST_BTF_DUMP_DATA(btf, d, NULL, str, __int128 unsigned, BTF_F_COMPACT,
"(__int128 unsigned)0xffffffffffffffff",
0xffffffffffffffff);
ASSERT_OK(btf_dump_data(btf, d, "__int128 unsigned", NULL, 0, &i, 16, str,
"(__int128 unsigned)0xfffffffffffffffffffffffffffffffe"),
"dump unsigned __int128");
} else {
ASSERT_TRUE(false, "unsigned_int128_not_found");
}
#endif
}

View File

@ -458,9 +458,9 @@ static int init_prog_array(struct bpf_object *obj, struct bpf_map *prog_array)
return -1;
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "flow_dissector/%i", i);
snprintf(prog_name, sizeof(prog_name), "flow_dissector_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (!prog)
return -1;

View File

@ -38,10 +38,9 @@ static int create_perf_events(void)
static void close_perf_events(void)
{
int cpu = 0;
int fd;
int cpu, fd;
while (cpu++ < cpu_cnt) {
for (cpu = 0; cpu < cpu_cnt; cpu++) {
fd = pfd_array[cpu];
if (fd < 0)
break;

View File

@ -3,7 +3,7 @@
void test_probe_user(void)
{
const char *prog_name = "kprobe/__sys_connect";
const char *prog_name = "handle_sys_connect";
const char *obj_file = "./test_probe_user.o";
DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts, );
int err, results_map_fd, sock_fd, duration = 0;
@ -18,7 +18,7 @@ void test_probe_user(void)
if (!ASSERT_OK_PTR(obj, "obj_open_file"))
return;
kprobe_prog = bpf_object__find_program_by_title(obj, prog_name);
kprobe_prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK(!kprobe_prog, "find_probe",
"prog '%s' not found\n", prog_name))
goto cleanup;

View File

@ -1,6 +1,21 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
static void toggle_object_autoload_progs(const struct bpf_object *obj,
const char *name_load)
{
struct bpf_program *prog;
bpf_object__for_each_program(prog, obj) {
const char *name = bpf_program__name(prog);
if (!strcmp(name_load, name))
bpf_program__set_autoload(prog, true);
else
bpf_program__set_autoload(prog, false);
}
}
void test_reference_tracking(void)
{
const char *file = "test_sk_lookup_kern.o";
@ -9,44 +24,49 @@ void test_reference_tracking(void)
.object_name = obj_name,
.relaxed_maps = true,
);
struct bpf_object *obj;
struct bpf_object *obj_iter, *obj = NULL;
struct bpf_program *prog;
__u32 duration = 0;
int err = 0;
obj = bpf_object__open_file(file, &open_opts);
if (!ASSERT_OK_PTR(obj, "obj_open_file"))
obj_iter = bpf_object__open_file(file, &open_opts);
if (!ASSERT_OK_PTR(obj_iter, "obj_iter_open_file"))
return;
if (CHECK(strcmp(bpf_object__name(obj), obj_name), "obj_name",
if (CHECK(strcmp(bpf_object__name(obj_iter), obj_name), "obj_name",
"wrong obj name '%s', expected '%s'\n",
bpf_object__name(obj), obj_name))
bpf_object__name(obj_iter), obj_name))
goto cleanup;
bpf_object__for_each_program(prog, obj) {
const char *title;
bpf_object__for_each_program(prog, obj_iter) {
const char *name;
/* Ignore .text sections */
title = bpf_program__section_name(prog);
if (strstr(title, ".text") != NULL)
name = bpf_program__name(prog);
if (!test__start_subtest(name))
continue;
if (!test__start_subtest(title))
continue;
obj = bpf_object__open_file(file, &open_opts);
if (!ASSERT_OK_PTR(obj, "obj_open_file"))
goto cleanup;
toggle_object_autoload_progs(obj, name);
/* Expect verifier failure if test name has 'err' */
if (strstr(title, "err_") != NULL) {
if (strncmp(name, "err_", sizeof("err_") - 1) == 0) {
libbpf_print_fn_t old_print_fn;
old_print_fn = libbpf_set_print(NULL);
err = !bpf_program__load(prog, "GPL", 0);
err = !bpf_object__load(obj);
libbpf_set_print(old_print_fn);
} else {
err = bpf_program__load(prog, "GPL", 0);
err = bpf_object__load(obj);
}
CHECK(err, title, "\n");
ASSERT_OK(err, name);
bpf_object__close(obj);
obj = NULL;
}
cleanup:
bpf_object__close(obj);
bpf_object__close(obj_iter);
}

View File

@ -48,7 +48,7 @@ configure_stack(void)
return false;
sprintf(tc_cmd, "%s %s %s %s", "tc filter add dev lo ingress bpf",
"direct-action object-file ./test_sk_assign.o",
"section classifier/sk_assign_test",
"section tc",
(env.verbosity < VERBOSE_VERY) ? " 2>/dev/null" : "verbose");
if (CHECK(system(tc_cmd), "BPF load failed;",
"run with -vv for more info\n"))

View File

@ -2,7 +2,7 @@
#include <test_progs.h>
#include "cgroup_helpers.h"
static int prog_attach(struct bpf_object *obj, int cgroup_fd, const char *title)
static int prog_attach(struct bpf_object *obj, int cgroup_fd, const char *title, const char *name)
{
enum bpf_attach_type attach_type;
enum bpf_prog_type prog_type;
@ -15,23 +15,23 @@ static int prog_attach(struct bpf_object *obj, int cgroup_fd, const char *title)
return -1;
}
prog = bpf_object__find_program_by_title(obj, title);
prog = bpf_object__find_program_by_name(obj, name);
if (!prog) {
log_err("Failed to find %s BPF program", title);
log_err("Failed to find %s BPF program", name);
return -1;
}
err = bpf_prog_attach(bpf_program__fd(prog), cgroup_fd,
attach_type, BPF_F_ALLOW_MULTI);
if (err) {
log_err("Failed to attach %s BPF program", title);
log_err("Failed to attach %s BPF program", name);
return -1;
}
return 0;
}
static int prog_detach(struct bpf_object *obj, int cgroup_fd, const char *title)
static int prog_detach(struct bpf_object *obj, int cgroup_fd, const char *title, const char *name)
{
enum bpf_attach_type attach_type;
enum bpf_prog_type prog_type;
@ -42,7 +42,7 @@ static int prog_detach(struct bpf_object *obj, int cgroup_fd, const char *title)
if (err)
return -1;
prog = bpf_object__find_program_by_title(obj, title);
prog = bpf_object__find_program_by_name(obj, name);
if (!prog)
return -1;
@ -89,7 +89,7 @@ static int run_getsockopt_test(struct bpf_object *obj, int cg_parent,
* - child: 0x80 -> 0x90
*/
err = prog_attach(obj, cg_child, "cgroup/getsockopt/child");
err = prog_attach(obj, cg_child, "cgroup/getsockopt", "_getsockopt_child");
if (err)
goto detach;
@ -113,7 +113,7 @@ static int run_getsockopt_test(struct bpf_object *obj, int cg_parent,
* - parent: 0x90 -> 0xA0
*/
err = prog_attach(obj, cg_parent, "cgroup/getsockopt/parent");
err = prog_attach(obj, cg_parent, "cgroup/getsockopt", "_getsockopt_parent");
if (err)
goto detach;
@ -157,7 +157,7 @@ static int run_getsockopt_test(struct bpf_object *obj, int cg_parent,
* - parent: unexpected 0x40, EPERM
*/
err = prog_detach(obj, cg_child, "cgroup/getsockopt/child");
err = prog_detach(obj, cg_child, "cgroup/getsockopt", "_getsockopt_child");
if (err) {
log_err("Failed to detach child program");
goto detach;
@ -198,8 +198,8 @@ static int run_getsockopt_test(struct bpf_object *obj, int cg_parent,
}
detach:
prog_detach(obj, cg_child, "cgroup/getsockopt/child");
prog_detach(obj, cg_parent, "cgroup/getsockopt/parent");
prog_detach(obj, cg_child, "cgroup/getsockopt", "_getsockopt_child");
prog_detach(obj, cg_parent, "cgroup/getsockopt", "_getsockopt_parent");
return err;
}
@ -236,7 +236,7 @@ static int run_setsockopt_test(struct bpf_object *obj, int cg_parent,
/* Attach child program and make sure it adds 0x10. */
err = prog_attach(obj, cg_child, "cgroup/setsockopt");
err = prog_attach(obj, cg_child, "cgroup/setsockopt", "_setsockopt");
if (err)
goto detach;
@ -263,7 +263,7 @@ static int run_setsockopt_test(struct bpf_object *obj, int cg_parent,
/* Attach parent program and make sure it adds another 0x10. */
err = prog_attach(obj, cg_parent, "cgroup/setsockopt");
err = prog_attach(obj, cg_parent, "cgroup/setsockopt", "_setsockopt");
if (err)
goto detach;
@ -289,8 +289,8 @@ static int run_setsockopt_test(struct bpf_object *obj, int cg_parent,
}
detach:
prog_detach(obj, cg_child, "cgroup/setsockopt");
prog_detach(obj, cg_parent, "cgroup/setsockopt");
prog_detach(obj, cg_child, "cgroup/setsockopt", "_setsockopt");
prog_detach(obj, cg_parent, "cgroup/setsockopt", "_setsockopt");
return err;
}

View File

@ -21,7 +21,7 @@ static void test_tailcall_1(void)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -38,9 +38,9 @@ static void test_tailcall_1(void)
goto out;
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "classifier/%i", i);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;
@ -70,9 +70,9 @@ static void test_tailcall_1(void)
err, errno, retval);
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "classifier/%i", i);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;
@ -92,9 +92,9 @@ static void test_tailcall_1(void)
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
j = bpf_map__def(prog_array)->max_entries - 1 - i;
snprintf(prog_name, sizeof(prog_name), "classifier/%i", j);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", j);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;
@ -159,7 +159,7 @@ static void test_tailcall_2(void)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -176,9 +176,9 @@ static void test_tailcall_2(void)
goto out;
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "classifier/%i", i);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;
@ -233,7 +233,7 @@ static void test_tailcall_count(const char *which)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -249,7 +249,7 @@ static void test_tailcall_count(const char *which)
if (CHECK_FAIL(map_fd < 0))
goto out;
prog = bpf_object__find_program_by_title(obj, "classifier/0");
prog = bpf_object__find_program_by_name(obj, "classifier_0");
if (CHECK_FAIL(!prog))
goto out;
@ -329,7 +329,7 @@ static void test_tailcall_4(void)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -354,9 +354,9 @@ static void test_tailcall_4(void)
return;
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "classifier/%i", i);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;
@ -417,7 +417,7 @@ static void test_tailcall_5(void)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -442,9 +442,9 @@ static void test_tailcall_5(void)
return;
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "classifier/%i", i);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;
@ -503,7 +503,7 @@ static void test_tailcall_bpf2bpf_1(void)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -521,9 +521,9 @@ static void test_tailcall_bpf2bpf_1(void)
/* nop -> jmp */
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "classifier/%i", i);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;
@ -587,7 +587,7 @@ static void test_tailcall_bpf2bpf_2(void)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -603,7 +603,7 @@ static void test_tailcall_bpf2bpf_2(void)
if (CHECK_FAIL(map_fd < 0))
goto out;
prog = bpf_object__find_program_by_title(obj, "classifier/0");
prog = bpf_object__find_program_by_name(obj, "classifier_0");
if (CHECK_FAIL(!prog))
goto out;
@ -665,7 +665,7 @@ static void test_tailcall_bpf2bpf_3(void)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -682,9 +682,9 @@ static void test_tailcall_bpf2bpf_3(void)
goto out;
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "classifier/%i", i);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;
@ -762,7 +762,7 @@ static void test_tailcall_bpf2bpf_4(bool noise)
if (CHECK_FAIL(err))
return;
prog = bpf_object__find_program_by_title(obj, "classifier");
prog = bpf_object__find_program_by_name(obj, "entry");
if (CHECK_FAIL(!prog))
goto out;
@ -779,9 +779,9 @@ static void test_tailcall_bpf2bpf_4(bool noise)
goto out;
for (i = 0; i < bpf_map__def(prog_array)->max_entries; i++) {
snprintf(prog_name, sizeof(prog_name), "classifier/%i", i);
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
prog = bpf_object__find_program_by_title(obj, prog_name);
prog = bpf_object__find_program_by_name(obj, prog_name);
if (CHECK_FAIL(!prog))
goto out;

View File

@ -10,7 +10,7 @@
void test_trace_printk(void)
{
int err, iter = 0, duration = 0, found = 0;
int err = 0, iter = 0, found = 0;
struct trace_printk__bss *bss;
struct trace_printk *skel;
char *buf = NULL;
@ -18,25 +18,24 @@ void test_trace_printk(void)
size_t buflen;
skel = trace_printk__open();
if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
if (!ASSERT_OK_PTR(skel, "trace_printk__open"))
return;
ASSERT_EQ(skel->rodata->fmt[0], 'T', "invalid printk fmt string");
ASSERT_EQ(skel->rodata->fmt[0], 'T', "skel->rodata->fmt[0]");
skel->rodata->fmt[0] = 't';
err = trace_printk__load(skel);
if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err))
if (!ASSERT_OK(err, "trace_printk__load"))
goto cleanup;
bss = skel->bss;
err = trace_printk__attach(skel);
if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
if (!ASSERT_OK(err, "trace_printk__attach"))
goto cleanup;
fp = fopen(TRACEBUF, "r");
if (CHECK(fp == NULL, "could not open trace buffer",
"error %d opening %s", errno, TRACEBUF))
if (!ASSERT_OK_PTR(fp, "fopen(TRACEBUF)"))
goto cleanup;
/* We do not want to wait forever if this test fails... */
@ -46,14 +45,10 @@ void test_trace_printk(void)
usleep(1);
trace_printk__detach(skel);
if (CHECK(bss->trace_printk_ran == 0,
"bpf_trace_printk never ran",
"ran == %d", bss->trace_printk_ran))
if (!ASSERT_GT(bss->trace_printk_ran, 0, "bss->trace_printk_ran"))
goto cleanup;
if (CHECK(bss->trace_printk_ret <= 0,
"bpf_trace_printk returned <= 0 value",
"got %d", bss->trace_printk_ret))
if (!ASSERT_GT(bss->trace_printk_ret, 0, "bss->trace_printk_ret"))
goto cleanup;
/* verify our search string is in the trace buffer */
@ -66,8 +61,7 @@ void test_trace_printk(void)
break;
}
if (CHECK(!found, "message from bpf_trace_printk not found",
"no instance of %s in %s", SEARCHMSG, TRACEBUF))
if (!ASSERT_EQ(found, bss->trace_printk_ran, "found"))
goto cleanup;
cleanup:

View File

@ -0,0 +1,68 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <test_progs.h>
#include "trace_vprintk.lskel.h"
#define TRACEBUF "/sys/kernel/debug/tracing/trace_pipe"
#define SEARCHMSG "1,2,3,4,5,6,7,8,9,10"
void test_trace_vprintk(void)
{
int err = 0, iter = 0, found = 0;
struct trace_vprintk__bss *bss;
struct trace_vprintk *skel;
char *buf = NULL;
FILE *fp = NULL;
size_t buflen;
skel = trace_vprintk__open_and_load();
if (!ASSERT_OK_PTR(skel, "trace_vprintk__open_and_load"))
goto cleanup;
bss = skel->bss;
err = trace_vprintk__attach(skel);
if (!ASSERT_OK(err, "trace_vprintk__attach"))
goto cleanup;
fp = fopen(TRACEBUF, "r");
if (!ASSERT_OK_PTR(fp, "fopen(TRACEBUF)"))
goto cleanup;
/* We do not want to wait forever if this test fails... */
fcntl(fileno(fp), F_SETFL, O_NONBLOCK);
/* wait for tracepoint to trigger */
usleep(1);
trace_vprintk__detach(skel);
if (!ASSERT_GT(bss->trace_vprintk_ran, 0, "bss->trace_vprintk_ran"))
goto cleanup;
if (!ASSERT_GT(bss->trace_vprintk_ret, 0, "bss->trace_vprintk_ret"))
goto cleanup;
/* verify our search string is in the trace buffer */
while (getline(&buf, &buflen, fp) >= 0 || errno == EAGAIN) {
if (strstr(buf, SEARCHMSG) != NULL)
found++;
if (found == bss->trace_vprintk_ran)
break;
if (++iter > 1000)
break;
}
if (!ASSERT_EQ(found, bss->trace_vprintk_ran, "found"))
goto cleanup;
if (!ASSERT_LT(bss->null_data_vprintk_ret, 0, "bss->null_data_vprintk_ret"))
goto cleanup;
cleanup:
trace_vprintk__destroy(skel);
free(buf);
if (fp)
fclose(fp);
}

View File

@ -0,0 +1,15 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include "test_progs.h"
#include "xdpwall.skel.h"
void test_xdpwall(void)
{
struct xdpwall *skel;
skel = xdpwall__open_and_load();
ASSERT_OK_PTR(skel, "Does LLMV have https://reviews.llvm.org/D109073?");
xdpwall__destroy(skel);
}

View File

@ -19,9 +19,8 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
int _version SEC("version") = 1;
#define PROG(F) PROG_(F, _##F)
#define PROG_(NUM, NAME) SEC("flow_dissector/"#NUM) int bpf_func##NAME
#define PROG_(NUM, NAME) SEC("flow_dissector") int flow_dissector_##NUM
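/* For illustration: with the new definition, PROG(2) expands to
*
*	SEC("flow_dissector") int flow_dissector_2
*
* so the tail-call program names match the "flow_dissector_%d" lookup used in
* the updated prog_tests code above.
*/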
/* These are the identifiers of the BPF programs that will be used in tail
* calls. Name is limited to 16 characters, with the terminating character and

View File

@ -20,7 +20,7 @@ struct {
__u32 invocations = 0;
SEC("cgroup_skb/egress/1")
SEC("cgroup_skb/egress")
int egress1(struct __sk_buff *skb)
{
struct cgroup_value *ptr_cg_storage =
@ -32,7 +32,7 @@ int egress1(struct __sk_buff *skb)
return 1;
}
SEC("cgroup_skb/egress/2")
SEC("cgroup_skb/egress")
int egress2(struct __sk_buff *skb)
{
struct cgroup_value *ptr_cg_storage =

View File

@ -20,7 +20,7 @@ struct {
__u32 invocations = 0;
SEC("cgroup_skb/egress/1")
SEC("cgroup_skb/egress")
int egress1(struct __sk_buff *skb)
{
struct cgroup_value *ptr_cg_storage =
@ -32,7 +32,7 @@ int egress1(struct __sk_buff *skb)
return 1;
}
SEC("cgroup_skb/egress/2")
SEC("cgroup_skb/egress")
int egress2(struct __sk_buff *skb)
{
struct cgroup_value *ptr_cg_storage =

View File

@ -47,7 +47,7 @@ check_percpu_elem(struct bpf_map *map, __u32 *key, __u64 *val,
u32 arraymap_output = 0;
SEC("classifier")
SEC("tc")
int test_pkt_access(struct __sk_buff *skb)
{
struct callback_ctx data;

View File

@ -78,7 +78,7 @@ int hashmap_output = 0;
int hashmap_elems = 0;
int percpu_map_elems = 0;
SEC("classifier")
SEC("tc")
int test_pkt_access(struct __sk_buff *skb)
{
struct callback_ctx data;

View File

@ -9,8 +9,8 @@
char _license[] SEC("license") = "GPL";
struct {
__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
} perf_buf_map SEC(".maps");
#define _(P) (__builtin_preserve_access_index(P))

View File

@ -8,7 +8,7 @@ extern int bpf_kfunc_call_test2(struct sock *sk, __u32 a, __u32 b) __ksym;
extern __u64 bpf_kfunc_call_test1(struct sock *sk, __u32 a, __u64 b,
__u32 c, __u64 d) __ksym;
SEC("classifier")
SEC("tc")
int kfunc_call_test2(struct __sk_buff *skb)
{
struct bpf_sock *sk = skb->sk;
@ -23,7 +23,7 @@ int kfunc_call_test2(struct __sk_buff *skb)
return bpf_kfunc_call_test2((struct sock *)sk, 1, 2);
}
SEC("classifier")
SEC("tc")
int kfunc_call_test1(struct __sk_buff *skb)
{
struct bpf_sock *sk = skb->sk;

View File

@ -33,7 +33,7 @@ int __noinline f1(struct __sk_buff *skb)
return (__u32)bpf_kfunc_call_test1((struct sock *)sk, 1, 2, 3, 4);
}
SEC("classifier")
SEC("tc")
int kfunc_call_test1(struct __sk_buff *skb)
{
return f1(skb);

View File

@ -11,8 +11,8 @@ typedef __u64 stack_trace_t[PERF_MAX_STACK_DEPTH];
struct {
__uint(type, BPF_MAP_TYPE_STACK_TRACE);
__uint(max_entries, 16384);
__uint(key_size, sizeof(__u32));
__uint(value_size, sizeof(stack_trace_t));
__type(key, __u32);
__type(value, stack_trace_t);
} stackmap SEC(".maps");
struct {

View File

@ -25,7 +25,7 @@ out:
return ip;
}
SEC("classifier/cls")
SEC("tc")
int main_prog(struct __sk_buff *skb)
{
struct iphdr *ip = NULL;

View File

@ -7,22 +7,22 @@ int _version SEC("version") = 1;
struct {
__uint(type, BPF_MAP_TYPE_SOCKMAP);
__uint(max_entries, 20);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
} sock_map_rx SEC(".maps");
struct {
__uint(type, BPF_MAP_TYPE_SOCKMAP);
__uint(max_entries, 20);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
} sock_map_tx SEC(".maps");
struct {
__uint(type, BPF_MAP_TYPE_SOCKMAP);
__uint(max_entries, 20);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
} sock_map_msg SEC(".maps");
struct {

View File

@ -4,9 +4,8 @@
#include <bpf/bpf_helpers.h>
char _license[] SEC("license") = "GPL";
__u32 _version SEC("version") = 1;
SEC("cgroup/getsockopt/child")
SEC("cgroup/getsockopt")
int _getsockopt_child(struct bpf_sockopt *ctx)
{
__u8 *optval_end = ctx->optval_end;
@ -29,7 +28,7 @@ int _getsockopt_child(struct bpf_sockopt *ctx)
return 1;
}
SEC("cgroup/getsockopt/parent")
SEC("cgroup/getsockopt")
int _getsockopt_parent(struct bpf_sockopt *ctx)
{
__u8 *optval_end = ctx->optval_end;

View File

@ -11,8 +11,8 @@ struct {
} jmp_table SEC(".maps");
#define TAIL_FUNC(x) \
SEC("classifier/" #x) \
int bpf_func_##x(struct __sk_buff *skb) \
SEC("tc") \
int classifier_##x(struct __sk_buff *skb) \
{ \
return x; \
}
@ -20,7 +20,7 @@ TAIL_FUNC(0)
TAIL_FUNC(1)
TAIL_FUNC(2)
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
/* Multiple locations to make sure we patch
@ -45,4 +45,3 @@ int entry(struct __sk_buff *skb)
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -10,41 +10,41 @@ struct {
__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");
SEC("classifier/0")
int bpf_func_0(struct __sk_buff *skb)
SEC("tc")
int classifier_0(struct __sk_buff *skb)
{
bpf_tail_call_static(skb, &jmp_table, 1);
return 0;
}
SEC("classifier/1")
int bpf_func_1(struct __sk_buff *skb)
SEC("tc")
int classifier_1(struct __sk_buff *skb)
{
bpf_tail_call_static(skb, &jmp_table, 2);
return 1;
}
SEC("classifier/2")
int bpf_func_2(struct __sk_buff *skb)
SEC("tc")
int classifier_2(struct __sk_buff *skb)
{
return 2;
}
SEC("classifier/3")
int bpf_func_3(struct __sk_buff *skb)
SEC("tc")
int classifier_3(struct __sk_buff *skb)
{
bpf_tail_call_static(skb, &jmp_table, 4);
return 3;
}
SEC("classifier/4")
int bpf_func_4(struct __sk_buff *skb)
SEC("tc")
int classifier_4(struct __sk_buff *skb)
{
bpf_tail_call_static(skb, &jmp_table, 3);
return 4;
}
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
bpf_tail_call_static(skb, &jmp_table, 0);
@ -56,4 +56,3 @@ int entry(struct __sk_buff *skb)
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -12,15 +12,15 @@ struct {
int count = 0;
SEC("classifier/0")
int bpf_func_0(struct __sk_buff *skb)
SEC("tc")
int classifier_0(struct __sk_buff *skb)
{
count++;
bpf_tail_call_static(skb, &jmp_table, 0);
return 1;
}
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
bpf_tail_call_static(skb, &jmp_table, 0);
@ -28,4 +28,3 @@ int entry(struct __sk_buff *skb)
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -13,8 +13,8 @@ struct {
int selector = 0;
#define TAIL_FUNC(x) \
SEC("classifier/" #x) \
int bpf_func_##x(struct __sk_buff *skb) \
SEC("tc") \
int classifier_##x(struct __sk_buff *skb) \
{ \
return x; \
}
@ -22,7 +22,7 @@ TAIL_FUNC(0)
TAIL_FUNC(1)
TAIL_FUNC(2)
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
bpf_tail_call(skb, &jmp_table, selector);
@ -30,4 +30,3 @@ int entry(struct __sk_buff *skb)
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -13,8 +13,8 @@ struct {
int selector = 0;
#define TAIL_FUNC(x) \
SEC("classifier/" #x) \
int bpf_func_##x(struct __sk_buff *skb) \
SEC("tc") \
int classifier_##x(struct __sk_buff *skb) \
{ \
return x; \
}
@ -22,7 +22,7 @@ TAIL_FUNC(0)
TAIL_FUNC(1)
TAIL_FUNC(2)
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
int idx = 0;
@ -37,4 +37,3 @@ int entry(struct __sk_buff *skb)
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -12,8 +12,8 @@ struct {
int count, which;
SEC("classifier/0")
int bpf_func_0(struct __sk_buff *skb)
SEC("tc")
int classifier_0(struct __sk_buff *skb)
{
count++;
if (__builtin_constant_p(which))
@ -22,7 +22,7 @@ int bpf_func_0(struct __sk_buff *skb)
return 1;
}
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
if (__builtin_constant_p(which))

View File

@ -10,8 +10,8 @@ struct {
} jmp_table SEC(".maps");
#define TAIL_FUNC(x) \
SEC("classifier/" #x) \
int bpf_func_##x(struct __sk_buff *skb) \
SEC("tc") \
int classifier_##x(struct __sk_buff *skb) \
{ \
return x; \
}
@ -26,7 +26,7 @@ int subprog_tail(struct __sk_buff *skb)
return skb->len * 2;
}
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
bpf_tail_call_static(skb, &jmp_table, 1);
@ -35,4 +35,3 @@ int entry(struct __sk_buff *skb)
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -22,14 +22,14 @@ int subprog_tail(struct __sk_buff *skb)
int count = 0;
SEC("classifier/0")
int bpf_func_0(struct __sk_buff *skb)
SEC("tc")
int classifier_0(struct __sk_buff *skb)
{
count++;
return subprog_tail(skb);
}
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
bpf_tail_call_static(skb, &jmp_table, 0);
@ -38,4 +38,3 @@ int entry(struct __sk_buff *skb)
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -33,23 +33,23 @@ int subprog_tail(struct __sk_buff *skb)
return skb->len * 2;
}
SEC("classifier/0")
int bpf_func_0(struct __sk_buff *skb)
SEC("tc")
int classifier_0(struct __sk_buff *skb)
{
volatile char arr[128] = {};
return subprog_tail2(skb);
}
SEC("classifier/1")
int bpf_func_1(struct __sk_buff *skb)
SEC("tc")
int classifier_1(struct __sk_buff *skb)
{
volatile char arr[128] = {};
return skb->len * 3;
}
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
volatile char arr[128] = {};
@ -58,4 +58,3 @@ int entry(struct __sk_buff *skb)
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -50,30 +50,29 @@ int subprog_tail(struct __sk_buff *skb)
return skb->len;
}
SEC("classifier/1")
int bpf_func_1(struct __sk_buff *skb)
SEC("tc")
int classifier_1(struct __sk_buff *skb)
{
return subprog_tail_2(skb);
}
SEC("classifier/2")
int bpf_func_2(struct __sk_buff *skb)
SEC("tc")
int classifier_2(struct __sk_buff *skb)
{
count++;
return subprog_tail_2(skb);
}
SEC("classifier/0")
int bpf_func_0(struct __sk_buff *skb)
SEC("tc")
int classifier_0(struct __sk_buff *skb)
{
return subprog_tail_1(skb);
}
SEC("classifier")
SEC("tc")
int entry(struct __sk_buff *skb)
{
return subprog_tail(skb);
}
char __license[] SEC("license") = "GPL";
int _version SEC("version") = 1;

View File

@ -21,8 +21,8 @@ struct inner_map_sz2 {
struct outer_arr {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
__uint(max_entries, 3);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
/* it's possible to use anonymous struct as inner map definition here */
__array(values, struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
@ -61,8 +61,8 @@ struct inner_map_sz4 {
struct outer_arr_dyn {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
__uint(max_entries, 3);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
__array(values, struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
__uint(map_flags, BPF_F_INNER_MAP);
@ -81,7 +81,7 @@ struct outer_arr_dyn {
struct outer_hash {
__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
__uint(max_entries, 5);
__uint(key_size, sizeof(int));
__type(key, int);
/* Here everything works flawlessly due to reuse of struct inner_map,
* and the compiler will complain at any attempt to use non-inner_map
* references below. This is a great experience.
@ -111,8 +111,8 @@ struct sockarr_sz2 {
struct outer_sockarr_sz1 {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
__uint(max_entries, 1);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
__array(values, struct sockarr_sz1);
} outer_sockarr SEC(".maps") = {
.values = { (void *)&sockarr_sz1 },

View File

@ -145,7 +145,7 @@ release:
return TC_ACT_OK;
}
SEC("classifier/ingress")
SEC("tc")
int cls_ingress(struct __sk_buff *skb)
{
struct ipv6hdr *ip6h;

View File

@ -6,14 +6,14 @@
int calls = 0;
int alt_calls = 0;
SEC("cgroup_skb/egress1")
SEC("cgroup_skb/egress")
int egress(struct __sk_buff *skb)
{
__sync_fetch_and_add(&calls, 1);
return 1;
}
SEC("cgroup_skb/egress2")
SEC("cgroup_skb/egress")
int egress_alt(struct __sk_buff *skb)
{
__sync_fetch_and_add(&alt_calls, 1);

View File

@ -153,7 +153,7 @@ int xdp_input_len_exceed(struct xdp_md *ctx)
return retval;
}
SEC("classifier")
SEC("tc")
int tc_use_helper(struct __sk_buff *ctx)
{
int retval = BPF_OK; /* Expected retval on successful test */
@ -172,7 +172,7 @@ out:
return retval;
}
SEC("classifier")
SEC("tc")
int tc_exceed_mtu(struct __sk_buff *ctx)
{
__u32 ifindex = GLOBAL_USER_IFINDEX;
@ -196,7 +196,7 @@ int tc_exceed_mtu(struct __sk_buff *ctx)
return retval;
}
SEC("classifier")
SEC("tc")
int tc_exceed_mtu_da(struct __sk_buff *ctx)
{
/* SKB Direct-Access variant */
@ -223,7 +223,7 @@ int tc_exceed_mtu_da(struct __sk_buff *ctx)
return retval;
}
SEC("classifier")
SEC("tc")
int tc_minus_delta(struct __sk_buff *ctx)
{
int retval = BPF_OK; /* Expected retval on successful test */
@ -245,7 +245,7 @@ int tc_minus_delta(struct __sk_buff *ctx)
return retval;
}
SEC("classifier")
SEC("tc")
int tc_input_len(struct __sk_buff *ctx)
{
int retval = BPF_OK; /* Expected retval on successful test */
@ -265,7 +265,7 @@ int tc_input_len(struct __sk_buff *ctx)
return retval;
}
SEC("classifier")
SEC("tc")
int tc_input_len_exceed(struct __sk_buff *ctx)
{
int retval = BPF_DROP; /* Fail */

View File

@ -928,7 +928,7 @@ static INLINING verdict_t process_ipv6(buf_t *pkt, metrics_t *metrics)
}
}
SEC("classifier/cls_redirect")
SEC("tc")
int cls_redirect(struct __sk_buff *skb)
{
metrics_t *metrics = get_global_metrics();

View File

@ -68,7 +68,7 @@ static struct foo struct3 = {
bpf_map_update_elem(&result_##map, &key, var, 0); \
} while (0)
SEC("classifier/static_data_load")
SEC("tc")
int load_static_data(struct __sk_buff *skb)
{
static const __u64 bar = ~0;

View File

@ -38,7 +38,7 @@ int f3(int val, struct __sk_buff *skb, int var)
return skb->ifindex * val * var;
}
SEC("classifier/test")
SEC("tc")
int test_cls(struct __sk_buff *skb)
{
return f0(1, skb) + f1(skb) + f2(2, skb) + f3(3, skb, 4);

View File

@ -54,7 +54,7 @@ int f8(struct __sk_buff *skb)
}
#endif
SEC("classifier/test")
SEC("tc")
int test_cls(struct __sk_buff *skb)
{
#ifndef NO_FN8

View File

@ -24,7 +24,7 @@ int f3(int val, struct __sk_buff *skb)
return skb->ifindex * val;
}
SEC("classifier/test")
SEC("tc")
int test_cls(struct __sk_buff *skb)
{
return f1(skb) + f2(2, skb) + f3(3, skb);

View File

@ -24,7 +24,7 @@ int f3(int val, struct __sk_buff *skb)
return skb->ifindex * val;
}
SEC("classifier/test")
SEC("tc")
int test_cls(struct __sk_buff *skb)
{
return f1(skb) + f2(2, skb) + f3(3, skb);

View File

@ -10,7 +10,7 @@ void foo(struct __sk_buff *skb)
skb->tc_index = 0;
}
SEC("classifier/test")
SEC("tc")
int test_cls(struct __sk_buff *skb)
{
foo(skb);

View File

@ -9,21 +9,19 @@ struct {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
__uint(max_entries, 1);
__uint(map_flags, 0);
__uint(key_size, sizeof(__u32));
/* must be sizeof(__u32) for map in map */
__uint(value_size, sizeof(__u32));
__type(key, __u32);
__type(value, __u32);
} mim_array SEC(".maps");
struct {
__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
__uint(max_entries, 1);
__uint(map_flags, 0);
__uint(key_size, sizeof(int));
/* must be sizeof(__u32) for map in map */
__uint(value_size, sizeof(__u32));
__type(key, int);
__type(value, __u32);
} mim_hash SEC(".maps");
SEC("xdp_mimtest")
SEC("xdp")
int xdp_mimtest0(struct xdp_md *ctx)
{
int value = 123;

View File

@ -13,7 +13,7 @@ struct inner {
struct {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
__uint(max_entries, 0); /* This will make map creation to fail */
__uint(key_size, sizeof(__u32));
__type(key, __u32);
__array(values, struct inner);
} mim SEC(".maps");

View File

@ -293,7 +293,7 @@ static int handle_passive_estab(struct bpf_sock_ops *skops)
return check_active_hdr_in(skops);
}
SEC("sockops/misc_estab")
SEC("sockops")
int misc_estab(struct bpf_sock_ops *skops)
{
int true_val = 1;

View File

@ -7,15 +7,15 @@
struct {
__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
__uint(max_entries, 1);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
} array_1 SEC(".maps");
struct {
__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
__uint(max_entries, 1);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
__uint(map_flags, BPF_F_PRESERVE_ELEMS);
} array_2 SEC(".maps");

View File

@ -8,8 +8,8 @@
struct {
__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__type(key, int);
__type(value, int);
} perf_buf_map SEC(".maps");
SEC("tp/raw_syscalls/sys_enter")

View File

@ -97,7 +97,7 @@ int test_pkt_write_access_subprog(struct __sk_buff *skb, __u32 off)
return 0;
}
SEC("classifier/test_pkt_access")
SEC("tc")
int test_pkt_access(struct __sk_buff *skb)
{
void *data_end = (void *)(long)skb->data_end;

View File

@ -7,8 +7,6 @@
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
int _version SEC("version") = 1;
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define TEST_FIELD(TYPE, FIELD, MASK) \
{ \
@ -27,7 +25,7 @@ int _version SEC("version") = 1;
}
#endif
SEC("classifier/test_pkt_md_access")
SEC("tc")
int test_pkt_md_access(struct __sk_buff *skb)
{
TEST_FIELD(__u8, len, 0xFF);

View File

@ -8,13 +8,37 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#if defined(__TARGET_ARCH_x86)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX "__x64_"
#elif defined(__TARGET_ARCH_s390)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX "__s390x_"
#elif defined(__TARGET_ARCH_arm64)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX "__arm64_"
#else
#define SYSCALL_WRAPPER 0
#define SYS_PREFIX ""
#endif
static struct sockaddr_in old;
SEC("kprobe/__sys_connect")
SEC("kprobe/" SYS_PREFIX "sys_connect")
int BPF_KPROBE(handle_sys_connect)
{
void *ptr = (void *)PT_REGS_PARM2(ctx);
#if SYSCALL_WRAPPER == 1
struct pt_regs *real_regs;
#endif
struct sockaddr_in new;
void *ptr;
#if SYSCALL_WRAPPER == 0
ptr = (void *)PT_REGS_PARM2(ctx);
#else
real_regs = (struct pt_regs *)PT_REGS_PARM1(ctx);
bpf_probe_read_kernel(&ptr, sizeof(ptr), &PT_REGS_PARM2(real_regs));
#endif
bpf_probe_read_user(&old, sizeof(old), ptr);
__builtin_memset(&new, 0xab, sizeof(new));

View File

@ -24,8 +24,8 @@ int _version SEC("version") = 1;
struct {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
__uint(max_entries, 1);
__uint(key_size, sizeof(__u32));
__uint(value_size, sizeof(__u32));
__type(key, __u32);
__type(value, __u32);
} outer_map SEC(".maps");
struct {

View File

@ -36,7 +36,6 @@ struct {
.pinning = PIN_GLOBAL_NS,
};
int _version SEC("version") = 1;
char _license[] SEC("license") = "GPL";
/* Fill 'tuple' with L3 info, and attempt to find L4. On fail, return NULL. */
@ -159,7 +158,7 @@ assign:
return ret;
}
SEC("classifier/sk_assign_test")
SEC("tc")
int bpf_sk_assign_test(struct __sk_buff *skb)
{
struct bpf_sock_tuple *tuple, ln = {0};

View File

@ -72,32 +72,32 @@ static const __u16 DST_PORT = 7007; /* Host byte order */
static const __u32 DST_IP4 = IP4(127, 0, 0, 1);
static const __u32 DST_IP6[] = IP6(0xfd000000, 0x0, 0x0, 0x00000001);
SEC("sk_lookup/lookup_pass")
SEC("sk_lookup")
int lookup_pass(struct bpf_sk_lookup *ctx)
{
return SK_PASS;
}
SEC("sk_lookup/lookup_drop")
SEC("sk_lookup")
int lookup_drop(struct bpf_sk_lookup *ctx)
{
return SK_DROP;
}
SEC("sk_reuseport/reuse_pass")
SEC("sk_reuseport")
int reuseport_pass(struct sk_reuseport_md *ctx)
{
return SK_PASS;
}
SEC("sk_reuseport/reuse_drop")
SEC("sk_reuseport")
int reuseport_drop(struct sk_reuseport_md *ctx)
{
return SK_DROP;
}
/* Redirect packets destined for port DST_PORT to socket at redir_map[0]. */
SEC("sk_lookup/redir_port")
SEC("sk_lookup")
int redir_port(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -116,7 +116,7 @@ int redir_port(struct bpf_sk_lookup *ctx)
}
/* Redirect packets destined for DST_IP4 address to socket at redir_map[0]. */
SEC("sk_lookup/redir_ip4")
SEC("sk_lookup")
int redir_ip4(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -139,7 +139,7 @@ int redir_ip4(struct bpf_sk_lookup *ctx)
}
/* Redirect packets destined for DST_IP6 address to socket at redir_map[0]. */
SEC("sk_lookup/redir_ip6")
SEC("sk_lookup")
int redir_ip6(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -164,7 +164,7 @@ int redir_ip6(struct bpf_sk_lookup *ctx)
return err ? SK_DROP : SK_PASS;
}
SEC("sk_lookup/select_sock_a")
SEC("sk_lookup")
int select_sock_a(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -179,7 +179,7 @@ int select_sock_a(struct bpf_sk_lookup *ctx)
return err ? SK_DROP : SK_PASS;
}
SEC("sk_lookup/select_sock_a_no_reuseport")
SEC("sk_lookup")
int select_sock_a_no_reuseport(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -194,7 +194,7 @@ int select_sock_a_no_reuseport(struct bpf_sk_lookup *ctx)
return err ? SK_DROP : SK_PASS;
}
SEC("sk_reuseport/select_sock_b")
SEC("sk_reuseport")
int select_sock_b(struct sk_reuseport_md *ctx)
{
__u32 key = KEY_SERVER_B;
@ -205,7 +205,7 @@ int select_sock_b(struct sk_reuseport_md *ctx)
}
/* Check that bpf_sk_assign() returns -EEXIST if socket already selected. */
SEC("sk_lookup/sk_assign_eexist")
SEC("sk_lookup")
int sk_assign_eexist(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -238,7 +238,7 @@ out:
}
/* Check that bpf_sk_assign(BPF_SK_LOOKUP_F_REPLACE) can override selection. */
SEC("sk_lookup/sk_assign_replace_flag")
SEC("sk_lookup")
int sk_assign_replace_flag(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -270,7 +270,7 @@ out:
}
/* Check that bpf_sk_assign(sk=NULL) is accepted. */
SEC("sk_lookup/sk_assign_null")
SEC("sk_lookup")
int sk_assign_null(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk = NULL;
@ -313,7 +313,7 @@ out:
}
/* Check that selected sk is accessible through context. */
SEC("sk_lookup/access_ctx_sk")
SEC("sk_lookup")
int access_ctx_sk(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk1 = NULL, *sk2 = NULL;
@ -379,7 +379,7 @@ out:
* are not covered because they give bogus results, that is the
* verifier ignores the offset.
*/
SEC("sk_lookup/ctx_narrow_access")
SEC("sk_lookup")
int ctx_narrow_access(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -553,7 +553,7 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
}
/* Check that sk_assign rejects SERVER_A socket with -ESOCKNOSUPPORT */
SEC("sk_lookup/sk_assign_esocknosupport")
SEC("sk_lookup")
int sk_assign_esocknosupport(struct bpf_sk_lookup *ctx)
{
struct bpf_sock *sk;
@ -578,28 +578,28 @@ out:
return ret;
}
SEC("sk_lookup/multi_prog_pass1")
SEC("sk_lookup")
int multi_prog_pass1(struct bpf_sk_lookup *ctx)
{
bpf_map_update_elem(&run_map, &KEY_PROG1, &PROG_DONE, BPF_ANY);
return SK_PASS;
}
SEC("sk_lookup/multi_prog_pass2")
SEC("sk_lookup")
int multi_prog_pass2(struct bpf_sk_lookup *ctx)
{
bpf_map_update_elem(&run_map, &KEY_PROG2, &PROG_DONE, BPF_ANY);
return SK_PASS;
}
SEC("sk_lookup/multi_prog_drop1")
SEC("sk_lookup")
int multi_prog_drop1(struct bpf_sk_lookup *ctx)
{
bpf_map_update_elem(&run_map, &KEY_PROG1, &PROG_DONE, BPF_ANY);
return SK_DROP;
}
SEC("sk_lookup/multi_prog_drop2")
SEC("sk_lookup")
int multi_prog_drop2(struct bpf_sk_lookup *ctx)
{
bpf_map_update_elem(&run_map, &KEY_PROG2, &PROG_DONE, BPF_ANY);
@ -623,7 +623,7 @@ static __always_inline int select_server_a(struct bpf_sk_lookup *ctx)
return SK_PASS;
}
SEC("sk_lookup/multi_prog_redir1")
SEC("sk_lookup")
int multi_prog_redir1(struct bpf_sk_lookup *ctx)
{
int ret;
@ -633,7 +633,7 @@ int multi_prog_redir1(struct bpf_sk_lookup *ctx)
return SK_PASS;
}
SEC("sk_lookup/multi_prog_redir2")
SEC("sk_lookup")
int multi_prog_redir2(struct bpf_sk_lookup *ctx)
{
int ret;

View File

@ -15,7 +15,6 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
int _version SEC("version") = 1;
char _license[] SEC("license") = "GPL";
/* Fill 'tuple' with L3 info, and attempt to find L4. On fail, return NULL. */
@ -53,8 +52,8 @@ static struct bpf_sock_tuple *get_tuple(void *data, __u64 nh_off,
return result;
}
SEC("classifier/sk_lookup_success")
int bpf_sk_lookup_test0(struct __sk_buff *skb)
SEC("tc")
int sk_lookup_success(struct __sk_buff *skb)
{
void *data_end = (void *)(long)skb->data_end;
void *data = (void *)(long)skb->data;
@ -79,8 +78,8 @@ int bpf_sk_lookup_test0(struct __sk_buff *skb)
return sk ? TC_ACT_OK : TC_ACT_UNSPEC;
}
SEC("classifier/sk_lookup_success_simple")
int bpf_sk_lookup_test1(struct __sk_buff *skb)
SEC("tc")
int sk_lookup_success_simple(struct __sk_buff *skb)
{
struct bpf_sock_tuple tuple = {};
struct bpf_sock *sk;
@ -91,8 +90,8 @@ int bpf_sk_lookup_test1(struct __sk_buff *skb)
return 0;
}
SEC("classifier/err_use_after_free")
int bpf_sk_lookup_uaf(struct __sk_buff *skb)
SEC("tc")
int err_use_after_free(struct __sk_buff *skb)
{
struct bpf_sock_tuple tuple = {};
struct bpf_sock *sk;
@ -106,8 +105,8 @@ int bpf_sk_lookup_uaf(struct __sk_buff *skb)
return family;
}
SEC("classifier/err_modify_sk_pointer")
int bpf_sk_lookup_modptr(struct __sk_buff *skb)
SEC("tc")
int err_modify_sk_pointer(struct __sk_buff *skb)
{
struct bpf_sock_tuple tuple = {};
struct bpf_sock *sk;
@ -121,8 +120,8 @@ int bpf_sk_lookup_modptr(struct __sk_buff *skb)
return 0;
}
SEC("classifier/err_modify_sk_or_null_pointer")
int bpf_sk_lookup_modptr_or_null(struct __sk_buff *skb)
SEC("tc")
int err_modify_sk_or_null_pointer(struct __sk_buff *skb)
{
struct bpf_sock_tuple tuple = {};
struct bpf_sock *sk;
@ -135,8 +134,8 @@ int bpf_sk_lookup_modptr_or_null(struct __sk_buff *skb)
return 0;
}
SEC("classifier/err_no_release")
int bpf_sk_lookup_test2(struct __sk_buff *skb)
SEC("tc")
int err_no_release(struct __sk_buff *skb)
{
struct bpf_sock_tuple tuple = {};
@ -144,8 +143,8 @@ int bpf_sk_lookup_test2(struct __sk_buff *skb)
return 0;
}
SEC("classifier/err_release_twice")
int bpf_sk_lookup_test3(struct __sk_buff *skb)
SEC("tc")
int err_release_twice(struct __sk_buff *skb)
{
struct bpf_sock_tuple tuple = {};
struct bpf_sock *sk;
@ -156,8 +155,8 @@ int bpf_sk_lookup_test3(struct __sk_buff *skb)
return 0;
}
SEC("classifier/err_release_unchecked")
int bpf_sk_lookup_test4(struct __sk_buff *skb)
SEC("tc")
int err_release_unchecked(struct __sk_buff *skb)
{
struct bpf_sock_tuple tuple = {};
struct bpf_sock *sk;
@ -173,8 +172,8 @@ void lookup_no_release(struct __sk_buff *skb)
bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0);
}
SEC("classifier/err_no_release_subcall")
int bpf_sk_lookup_test5(struct __sk_buff *skb)
SEC("tc")
int err_no_release_subcall(struct __sk_buff *skb)
{
lookup_no_release(skb);
return 0;

View File

@ -14,7 +14,7 @@ struct {
char _license[] SEC("license") = "GPL";
SEC("classifier/test_skb_helpers")
SEC("tc")
int test_skb_helpers(struct __sk_buff *skb)
{
struct task_struct *task;

View File

@ -56,7 +56,7 @@ int prog_stream_verdict(struct __sk_buff *skb)
return verdict;
}
SEC("sk_skb/skb_verdict")
SEC("sk_skb")
int prog_skb_verdict(struct __sk_buff *skb)
{
unsigned int *count;

View File

@ -9,7 +9,7 @@ struct {
__type(value, __u64);
} sock_map SEC(".maps");
SEC("sk_skb/skb_verdict")
SEC("sk_skb")
int prog_skb_verdict(struct __sk_buff *skb)
{
return SK_DROP;

View File

@ -24,7 +24,7 @@ struct {
__type(value, __u64);
} dst_sock_hash SEC(".maps");
SEC("classifier/copy_sock_map")
SEC("tc")
int copy_sock_map(void *ctx)
{
struct bpf_sock *sk;

View File

@ -28,8 +28,8 @@ struct {
__uint(type, BPF_MAP_TYPE_STACK_TRACE);
__uint(max_entries, 128);
__uint(map_flags, BPF_F_STACK_BUILD_ID);
__uint(key_size, sizeof(__u32));
__uint(value_size, sizeof(stack_trace_t));
__type(key, __u32);
__type(value, stack_trace_t);
} stackmap SEC(".maps");
struct {

View File

@ -27,8 +27,8 @@ typedef __u64 stack_trace_t[PERF_MAX_STACK_DEPTH];
struct {
__uint(type, BPF_MAP_TYPE_STACK_TRACE);
__uint(max_entries, 16384);
__uint(key_size, sizeof(__u32));
__uint(value_size, sizeof(stack_trace_t));
__type(key, __u32);
__type(value, stack_trace_t);
} stackmap SEC(".maps");
struct {

View File

@ -5,7 +5,7 @@
/* Dummy prog to test TC-BPF API */
SEC("classifier")
SEC("tc")
int cls(struct __sk_buff *skb)
{
return 0;

View File

@ -70,7 +70,7 @@ static __always_inline bool is_remote_ep_v6(struct __sk_buff *skb,
return v6_equal(ip6h->daddr, addr);
}
SEC("classifier/chk_egress")
SEC("tc")
int tc_chk(struct __sk_buff *skb)
{
void *data_end = ctx_ptr(skb->data_end);
@ -83,7 +83,7 @@ int tc_chk(struct __sk_buff *skb)
return !raw[0] && !raw[1] && !raw[2] ? TC_ACT_SHOT : TC_ACT_OK;
}
SEC("classifier/dst_ingress")
SEC("tc")
int tc_dst(struct __sk_buff *skb)
{
__u8 zero[ETH_ALEN * 2];
@ -108,7 +108,7 @@ int tc_dst(struct __sk_buff *skb)
return bpf_redirect_neigh(IFINDEX_SRC, NULL, 0, 0);
}
SEC("classifier/src_ingress")
SEC("tc")
int tc_src(struct __sk_buff *skb)
{
__u8 zero[ETH_ALEN * 2];

Some files were not shown because too many files have changed in this diff