bpf-next pull-request 2023-08-09

-----BEGIN PGP SIGNATURE-----
 
 iHUEABYIAB0WIQRdM/uy1Ege0+EN1fNar9k/UBDW4wUCZNRx8QAKCRBar9k/UBDW
 46MBAQC3YDFsEfPzX4P7ZnlM5Lf1NynjNbso5bYW0TF/dp/Y+gD+M8wdM5Vj2Mb0
 Zr56TnwCJei0kGBemiel4sStt3e4qwY=
 =+0u+
 -----END PGP SIGNATURE-----

Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Martin KaFai Lau says:

====================
pull-request: bpf-next 2023-08-09

We've added 19 non-merge commits during the last 6 day(s) which contain
a total of 25 files changed, 369 insertions(+), 141 deletions(-).

The main changes are:

1) Fix an array-index-out-of-bounds access when detaching from an
   already empty mprog entry, from Daniel Borkmann.

2) Adjust a bpf selftest to accommodate a recent llvm change
   related to the cpu-v4 ISA, from Eduard Zingerman.

3) Add uprobe support for the bpf_get_func_ip helper, from Jiri Olsa
   (see the usage sketch after this list).

4) Fix a KASAN splat caused by the kernel incorrectly accepting
   an invalid program that uses the recent cpu-v4 instructions,
   from Yonghong Song.
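
As a usage sketch for change 3), a uprobe program can now read the probe
address through bpf_get_func_ip(); the snippet below mirrors the new
selftest further down in this commit, with a placeholder binary path and
symbol name (it is an editorial illustration, not part of the pull request):

  // SPDX-License-Identifier: GPL-2.0
  /* Minimal sketch: read the probe address from a uprobe program. */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  __u64 probed_addr;

  /* "/usr/bin/target:some_function" is a placeholder attach point. */
  SEC("uprobe//usr/bin/target:some_function")
  int BPF_UPROBE(on_entry)
  {
          /* With this series, bpf_get_func_ip() returns the probe
           * address for both entry and return uprobes.
           */
          probed_addr = bpf_get_func_ip(ctx);
          return 0;
  }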

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
  bpf: btf: Remove two unused function declarations
  bpf: lru: Remove unused declaration bpf_lru_promote()
  selftests/bpf: relax expected log messages to allow emitting BPF_ST
  selftests/bpf: remove duplicated functions
  bpf, docs: Fix small typo and define semantics of sign extension
  selftests/bpf: Add bpf_get_func_ip test for uprobe inside function
  selftests/bpf: Add bpf_get_func_ip tests for uprobe on function entry
  bpf: Add support for bpf_get_func_ip helper for uprobe program
  selftests/bpf: Add a movsx selftest for sign-extension of R10
  bpf: Fix an incorrect verification success with movsx insn
  bpf, docs: Formalize type notation and function semantics in ISA standard
  bpf: change bpf_alu_sign_string and bpf_movsx_string to static
  libbpf: Use local includes inside the library
  bpf: fix bpf_dynptr_slice() to stop return an ERR_PTR.
  bpf: fix inconsistent return types of bpf_xdp_copy_buf().
  selftests/bpf: fix the incorrect verification of port numbers.
  selftests/bpf: Add test for detachment on empty mprog entry
  bpf: Fix mprog detachment for empty mprog entry
  bpf: bpf_struct_ops: Remove unnecessary initial values of variables
====================

Link: https://lore.kernel.org/r/20230810055123.109578-1-martin.lau@linux.dev
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2023-08-10 14:12:34 -07:00
commit 6a1ed1430d
25 changed files with 368 additions and 140 deletions

View file

@ -10,9 +10,92 @@ This document specifies version 1.0 of the eBPF instruction set.
Documentation conventions
=========================
For brevity, this document uses the type notion "u64", "u32", etc.
to mean an unsigned integer whose width is the specified number of bits,
and "s32", etc. to mean a signed integer of the specified number of bits.
For brevity and consistency, this document refers to families
of types using a shorthand syntax and refers to several expository,
mnemonic functions when describing the semantics of instructions.
The range of valid values for those types and the semantics of those
functions are defined in the following subsections.
Types
-----
This document refers to integer types with the notation `SN` to specify
a type's signedness (`S`) and bit width (`N`), respectively.
.. table:: Meaning of signedness notation.
==== =========
`S` Meaning
==== =========
`u` unsigned
`s` signed
==== =========
.. table:: Meaning of bit-width notation.
===== =========
`N` Bit width
===== =========
`8` 8 bits
`16` 16 bits
`32` 32 bits
`64` 64 bits
`128` 128 bits
===== =========
For example, `u32` is a type whose valid values are all the 32-bit unsigned
numbers and `s16` is a type whose valid values are all the 16-bit signed
numbers.
Functions
---------
* `htobe16`: Takes an unsigned 16-bit number in host-endian format and
returns the equivalent number as an unsigned 16-bit number in big-endian
format.
* `htobe32`: Takes an unsigned 32-bit number in host-endian format and
returns the equivalent number as an unsigned 32-bit number in big-endian
format.
* `htobe64`: Takes an unsigned 64-bit number in host-endian format and
returns the equivalent number as an unsigned 64-bit number in big-endian
format.
* `htole16`: Takes an unsigned 16-bit number in host-endian format and
returns the equivalent number as an unsigned 16-bit number in little-endian
format.
* `htole32`: Takes an unsigned 32-bit number in host-endian format and
returns the equivalent number as an unsigned 32-bit number in little-endian
format.
* `htole64`: Takes an unsigned 64-bit number in host-endian format and
returns the equivalent number as an unsigned 64-bit number in little-endian
format.
* `bswap16`: Takes an unsigned 16-bit number in either big- or little-endian
format and returns the equivalent number with the same bit width but
opposite endianness.
* `bswap32`: Takes an unsigned 32-bit number in either big- or little-endian
format and returns the equivalent number with the same bit width but
opposite endianness.
* `bswap64`: Takes an unsigned 64-bit number in either big- or little-endian
format and returns the equivalent number with the same bit width but
opposite endianness.
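As an editorial aside, these expository functions correspond to the
byte-order helpers glibc exposes through <endian.h> and <byteswap.h>
(assuming _DEFAULT_SOURCE); a small C sketch, not part of the standard text:

  #include <endian.h>    /* htobe16(), htole16(), ... */
  #include <byteswap.h>  /* bswap_16(), bswap_32(), bswap_64() */
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint16_t v = 0x1234;

          /* Convert host byte order to a definite byte order. */
          printf("htobe16(0x%04x) = 0x%04x\n", v, htobe16(v));
          printf("htole16(0x%04x) = 0x%04x\n", v, htole16(v));
          /* Reverse the byte order unconditionally. */
          printf("bswap_16(0x%04x) = 0x%04x\n", v, bswap_16(v));
          return 0;
  }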
Definitions
-----------
.. glossary::
Sign Extend
To `sign extend an` ``X`` `-bit number, A, to a` ``Y`` `-bit number, B,` means to
#. Copy all ``X`` bits from `A` to the lower ``X`` bits of `B`.
#. Set the value of the remaining ``Y`` - ``X`` bits of `B` to the value of
the most-significant bit of `A`.
.. admonition:: Example
Sign extend an 8-bit number ``A`` to a 16-bit number ``B`` on a big-endian platform:
::
A: 10000110
B: 11111111 10000110
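The same example, rendered as a C sketch (an editorial aside; the cast
chain performs exactly the two steps of the definition):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint8_t a = 0x86;               /* 10000110 */
          /* Copy the low 8 bits, replicate bit 7 upward. */
          int16_t b = (int16_t)(int8_t)a;

          printf("A = 0x%02x, B = 0x%04x\n", a, (uint16_t)b);
          /* Prints: A = 0x86, B = 0xff86  (11111111 10000110) */
          return 0;
  }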
Registers and calling convention
================================
@ -172,7 +255,7 @@ BPF_SMOD 0x90 1 dst = (src != 0) ? (dst s% src) : dst
BPF_XOR 0xa0 0 dst ^= src
BPF_MOV 0xb0 0 dst = src
BPF_MOVSX 0xb0 8/16/32 dst = (s8,s16,s32)src
BPF_ARSH 0xc0 0 sign extending dst >>= (src & mask)
BPF_ARSH 0xc0 0 :term:`sign extending<Sign Extend>` dst >>= (src & mask)
BPF_END 0xd0 0 byte swap operations (see `Byte swap instructions`_ below)
========= ===== ======= ==========================================================
@ -204,22 +287,22 @@ where '(u32)' indicates that the upper 32 bits are zeroed.
Note that most instructions have instruction offset of 0. Only three instructions
(``BPF_SDIV``, ``BPF_SMOD``, ``BPF_MOVSX``) have a non-zero offset.
The devision and modulo operations support both unsigned and signed flavors.
The division and modulo operations support both unsigned and signed flavors.
For unsigned operations (``BPF_DIV`` and ``BPF_MOD``), for ``BPF_ALU``,
'imm' is interpreted as a 32-bit unsigned value. For ``BPF_ALU64``,
'imm' is first sign extended from 32 to 64 bits, and then interpreted as
a 64-bit unsigned value.
'imm' is first :term:`sign extended<Sign Extend>` from 32 to 64 bits, and then
interpreted as a 64-bit unsigned value.
For signed operations (``BPF_SDIV`` and ``BPF_SMOD``), for ``BPF_ALU``,
'imm' is interpreted as a 32-bit signed value. For ``BPF_ALU64``, 'imm'
is first sign extended from 32 to 64 bits, and then interpreted as a
64-bit signed value.
is first :term:`sign extended<Sign Extend>` from 32 to 64 bits, and then
interpreted as a 64-bit signed value.
The ``BPF_MOVSX`` instruction does a move operation with sign extension.
``BPF_ALU | BPF_MOVSX`` sign extends 8-bit and 16-bit operands into 32
``BPF_ALU | BPF_MOVSX`` :term:`sign extends<Sign Extend>` 8-bit and 16-bit operands into 32
bit operands, and zeroes the remaining upper 32 bits.
``BPF_ALU64 | BPF_MOVSX`` sign extends 8-bit, 16-bit, and 32-bit
``BPF_ALU64 | BPF_MOVSX`` :term:`sign extends<Sign Extend>` 8-bit, 16-bit, and 32-bit
operands into 64 bit operands.
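In C terms the two flavors behave roughly as below for an 8-bit source
operand (an illustrative sketch, not normative wording):

  #include <stdint.h>

  /* BPF_ALU | BPF_MOVSX, offset 8: sign extend into 32 bits, then
   * zero the upper 32 bits of the 64-bit destination register.
   */
  static uint64_t alu32_movsx8(uint64_t src)
  {
          return (uint32_t)(int32_t)(int8_t)src;
  }

  /* BPF_ALU64 | BPF_MOVSX, offset 8: sign extend into all 64 bits. */
  static uint64_t alu64_movsx8(uint64_t src)
  {
          return (uint64_t)(int64_t)(int8_t)src;
  }

For example, an all-ones byte yields 0x00000000ffffffff in the 32-bit
flavor and 0xffffffffffffffff in the 64-bit flavor.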
Shift operations use a mask of 0x3F (63) for 64-bit operations and 0x1F (31)
@ -252,19 +335,23 @@ are supported: 16, 32 and 64.
Examples:
``BPF_ALU | BPF_TO_LE | BPF_END`` with imm = 16 means::
``BPF_ALU | BPF_TO_LE | BPF_END`` with imm = 16/32/64 means::
dst = htole16(dst)
dst = htole32(dst)
dst = htole64(dst)
``BPF_ALU | BPF_TO_BE | BPF_END`` with imm = 64 means::
``BPF_ALU | BPF_TO_BE | BPF_END`` with imm = 16/32/64 means::
dst = htobe16(dst)
dst = htobe32(dst)
dst = htobe64(dst)
``BPF_ALU64 | BPF_TO_LE | BPF_END`` with imm = 16/32/64 means::
dst = bswap16 dst
dst = bswap32 dst
dst = bswap64 dst
dst = bswap16(dst)
dst = bswap32(dst)
dst = bswap64(dst)
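The practical distinction between the forms above: the ``BPF_ALU`` forms
convert from the CPU byte order to a definite byte order, while the
``BPF_ALU64`` form reverses the bytes unconditionally. A C sketch of the
16-bit flavors, using the glibc helpers mentioned earlier (an editorial
aside; register-width details elided):

  #include <endian.h>
  #include <byteswap.h>
  #include <stdint.h>

  /* BPF_ALU | BPF_TO_LE | BPF_END, imm = 16 */
  static uint16_t end_to_le16(uint16_t dst) { return htole16(dst); }

  /* BPF_ALU | BPF_TO_BE | BPF_END, imm = 16 */
  static uint16_t end_to_be16(uint16_t dst) { return htobe16(dst); }

  /* BPF_ALU64 | BPF_TO_LE | BPF_END, imm = 16 (unconditional byte swap) */
  static uint16_t end_bswap16(uint16_t dst) { return bswap_16(dst); }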
Jump instructions
-----------------
@ -400,7 +487,7 @@ Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW`` and
Sign-extension load operations
------------------------------
The ``BPF_MEMSX`` mode modifier is used to encode sign-extension load
The ``BPF_MEMSX`` mode modifier is used to encode :term:`sign-extension<Sign Extend>` load
instructions that transfer data between a register and memory.
``BPF_MEMSX | <size> | BPF_LDX`` means::

View file

@ -1819,6 +1819,7 @@ struct bpf_cg_run_ctx {
struct bpf_trace_run_ctx {
struct bpf_run_ctx run_ctx;
u64 bpf_cookie;
bool is_uprobe;
};
struct bpf_tramp_run_ctx {
@ -1867,6 +1868,8 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
if (unlikely(!array))
return ret;
run_ctx.is_uprobe = false;
migrate_disable();
old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
item = &array->items[0];
@ -1891,8 +1894,8 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
* rcu-protected dynamically sized maps.
*/
static __always_inline u32
bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
const void *ctx, bpf_prog_run_fn run_prog)
bpf_prog_run_array_uprobe(const struct bpf_prog_array __rcu *array_rcu,
const void *ctx, bpf_prog_run_fn run_prog)
{
const struct bpf_prog_array_item *item;
const struct bpf_prog *prog;
@ -1906,6 +1909,8 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
rcu_read_lock_trace();
migrate_disable();
run_ctx.is_uprobe = true;
array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
if (unlikely(!array))
goto out;

View file

@ -204,8 +204,6 @@ u32 btf_nr_types(const struct btf *btf);
bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
const struct btf_member *m,
u32 expected_offset, u32 expected_size);
int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t);
int btf_find_timer(const struct btf *btf, const struct btf_type *t);
struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type *t,
u32 field_mask, u32 value_size);
int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec);

View file

@ -1572,10 +1572,9 @@ static inline void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
return NULL;
}
static inline void *bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off, void *buf,
unsigned long len, bool flush)
static inline void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off, void *buf,
unsigned long len, bool flush)
{
return NULL;
}
#endif /* CONFIG_NET */

View file

@ -5086,9 +5086,14 @@ union bpf_attr {
* u64 bpf_get_func_ip(void *ctx)
* Description
* Get address of the traced function (for tracing and kprobe programs).
*
* When called for kprobe program attached as uprobe it returns
* probe address for both entry and return uprobe.
*
* Return
* Address of the traced function.
* Address of the traced function for kprobe.
* 0 for kprobes placed within the function (not at the entry).
* Address of the probe for uprobe and return uprobe.
*
* u64 bpf_get_attach_cookie(void *ctx)
* Description

View file

@ -75,6 +75,5 @@ void bpf_lru_populate(struct bpf_lru *lru, void *buf, u32 node_offset,
void bpf_lru_destroy(struct bpf_lru *lru);
struct bpf_lru_node *bpf_lru_pop_free(struct bpf_lru *lru, u32 hash);
void bpf_lru_push_free(struct bpf_lru *lru, struct bpf_lru_node *node);
void bpf_lru_promote(struct bpf_lru *lru, struct bpf_lru_node *node);
#endif

View file

@ -374,9 +374,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
struct bpf_struct_ops_value *uvalue, *kvalue;
const struct btf_member *member;
const struct btf_type *t = st_ops->type;
struct bpf_tramp_links *tlinks = NULL;
struct bpf_tramp_links *tlinks;
void *udata, *kdata;
int prog_fd, err = 0;
int prog_fd, err;
void *image, *image_end;
u32 i;
@ -815,7 +815,7 @@ static int bpf_struct_ops_map_link_update(struct bpf_link *link, struct bpf_map
struct bpf_struct_ops_map *st_map, *old_st_map;
struct bpf_map *old_map;
struct bpf_struct_ops_link *st_link;
int err = 0;
int err;
st_link = container_of(link, struct bpf_struct_ops_link, link);
st_map = container_of(new_map, struct bpf_struct_ops_map, map);

View file

@ -87,12 +87,12 @@ const char *const bpf_alu_string[16] = {
[BPF_END >> 4] = "endian",
};
const char *const bpf_alu_sign_string[16] = {
static const char *const bpf_alu_sign_string[16] = {
[BPF_DIV >> 4] = "s/=",
[BPF_MOD >> 4] = "s%=",
};
const char *const bpf_movsx_string[4] = {
static const char *const bpf_movsx_string[4] = {
[0] = "(s8)",
[1] = "(s16)",
[3] = "(s32)",

View file

@ -2270,7 +2270,7 @@ __bpf_kfunc void *bpf_dynptr_slice(const struct bpf_dynptr_kern *ptr, u32 offset
case BPF_DYNPTR_TYPE_XDP:
{
void *xdp_ptr = bpf_xdp_pointer(ptr->data, ptr->offset + offset, len);
if (xdp_ptr)
if (!IS_ERR_OR_NULL(xdp_ptr))
return xdp_ptr;
if (!buffer__opt)

View file

@ -337,6 +337,8 @@ int bpf_mprog_detach(struct bpf_mprog_entry *entry,
return -EINVAL;
if (revision && revision != bpf_mprog_revision(entry))
return -ESTALE;
if (!bpf_mprog_total(entry))
return -ENOENT;
ret = bpf_mprog_tuple_relative(&rtuple, id_or_fd, flags,
prog ? prog->type :
BPF_PROG_TYPE_UNSPEC);

View file

@ -13165,17 +13165,26 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
dst_reg->subreg_def = DEF_NOT_SUBREG;
} else {
/* case: R1 = (s8, s16 s32)R2 */
bool no_sext;
if (is_pointer_value(env, insn->src_reg)) {
verbose(env,
"R%d sign-extension part of pointer\n",
insn->src_reg);
return -EACCES;
} else if (src_reg->type == SCALAR_VALUE) {
bool no_sext;
no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
if (no_sext && need_id)
src_reg->id = ++env->id_gen;
copy_register_state(dst_reg, src_reg);
if (!no_sext)
dst_reg->id = 0;
coerce_reg_to_size_sx(dst_reg, insn->off >> 3);
dst_reg->live |= REG_LIVE_WRITTEN;
dst_reg->subreg_def = DEF_NOT_SUBREG;
no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
if (no_sext && need_id)
src_reg->id = ++env->id_gen;
copy_register_state(dst_reg, src_reg);
if (!no_sext)
dst_reg->id = 0;
coerce_reg_to_size_sx(dst_reg, insn->off >> 3);
dst_reg->live |= REG_LIVE_WRITTEN;
dst_reg->subreg_def = DEF_NOT_SUBREG;
} else {
mark_reg_unknown(env, regs, insn->dst_reg);
}
}
} else {
/* R1 = (u32) R2 */

View file

@ -1055,7 +1055,16 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
{
struct kprobe *kp = kprobe_running();
struct bpf_trace_run_ctx *run_ctx __maybe_unused;
struct kprobe *kp;
#ifdef CONFIG_UPROBES
run_ctx = container_of(current->bpf_ctx, struct bpf_trace_run_ctx, run_ctx);
if (run_ctx->is_uprobe)
return ((struct uprobe_dispatch_data *)current->utask->vaddr)->bp_addr;
#endif
kp = kprobe_running();
if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
return 0;

View file

@ -519,3 +519,8 @@ void __trace_probe_log_err(int offset, int err);
#define trace_probe_log_err(offs, err) \
__trace_probe_log_err(offs, TP_ERR_##err)
struct uprobe_dispatch_data {
struct trace_uprobe *tu;
unsigned long bp_addr;
};

View file

@ -88,11 +88,6 @@ static struct trace_uprobe *to_trace_uprobe(struct dyn_event *ev)
static int register_uprobe_event(struct trace_uprobe *tu);
static int unregister_uprobe_event(struct trace_uprobe *tu);
struct uprobe_dispatch_data {
struct trace_uprobe *tu;
unsigned long bp_addr;
};
static int uprobe_dispatcher(struct uprobe_consumer *con, struct pt_regs *regs);
static int uretprobe_dispatcher(struct uprobe_consumer *con,
unsigned long func, struct pt_regs *regs);
@ -1352,7 +1347,7 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
if (bpf_prog_array_valid(call)) {
u32 ret;
ret = bpf_prog_run_array_sleepable(call->prog_array, regs, bpf_prog_run);
ret = bpf_prog_run_array_uprobe(call->prog_array, regs, bpf_prog_run);
if (!ret)
return;
}

View file

@ -5086,9 +5086,14 @@ union bpf_attr {
* u64 bpf_get_func_ip(void *ctx)
* Description
* Get address of the traced function (for tracing and kprobe programs).
*
* When called for kprobe program attached as uprobe it returns
* probe address for both entry and return uprobe.
*
* Return
* Address of the traced function.
* Address of the traced function for kprobe.
* 0 for kprobes placed within the function (not at the entry).
* Address of the probe for uprobe and return uprobe.
*
* u64 bpf_get_attach_cookie(void *ctx)
* Description

View file

@ -2,7 +2,7 @@
#ifndef __BPF_TRACING_H__
#define __BPF_TRACING_H__
#include <bpf/bpf_helpers.h>
#include "bpf_helpers.h"
/* Scan the ARCH passed in from ARCH env variable (see Makefile) */
#if defined(__TARGET_ARCH_x86)

View file

@ -4,8 +4,8 @@
#define __USDT_BPF_H__
#include <linux/errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_helpers.h"
#include "bpf_tracing.h"
/* Below types and maps are internal implementation details of libbpf's USDT
* support and are subjects to change. Also, bpf_usdt_xxx() API helpers should

View file

@ -9,6 +9,7 @@
#include "testing_helpers.h"
#include "cgroup_tcp_skb.skel.h"
#include "cgroup_tcp_skb.h"
#include "network_helpers.h"
#define CGROUP_TCP_SKB_PATH "/test_cgroup_tcp_skb"
@ -58,79 +59,13 @@ static int create_client_sock_v6(void)
return fd;
}
static int create_server_sock_v6(void)
{
struct sockaddr_in6 addr = {
.sin6_family = AF_INET6,
.sin6_port = htons(0),
.sin6_addr = IN6ADDR_LOOPBACK_INIT,
};
int fd, err;
fd = socket(AF_INET6, SOCK_STREAM, 0);
if (fd < 0) {
perror("socket");
return -1;
}
err = bind(fd, (struct sockaddr *)&addr, sizeof(addr));
if (err < 0) {
perror("bind");
return -1;
}
err = listen(fd, 1);
if (err < 0) {
perror("listen");
return -1;
}
return fd;
}
static int get_sock_port_v6(int fd)
{
struct sockaddr_in6 addr;
socklen_t len;
int err;
len = sizeof(addr);
err = getsockname(fd, (struct sockaddr *)&addr, &len);
if (err < 0) {
perror("getsockname");
return -1;
}
return ntohs(addr.sin6_port);
}
static int connect_client_server_v6(int client_fd, int listen_fd)
{
struct sockaddr_in6 addr = {
.sin6_family = AF_INET6,
.sin6_addr = IN6ADDR_LOOPBACK_INIT,
};
int err;
addr.sin6_port = htons(get_sock_port_v6(listen_fd));
if (addr.sin6_port < 0)
return -1;
err = connect(client_fd, (struct sockaddr *)&addr, sizeof(addr));
if (err < 0) {
perror("connect");
return -1;
}
return 0;
}
/* Connect to the server in a cgroup from the outside of the cgroup. */
static int talk_to_cgroup(int *client_fd, int *listen_fd, int *service_fd,
struct cgroup_tcp_skb *skel)
{
int err, cp;
char buf[5];
int port;
/* Create client & server socket */
err = join_root_cgroup();
@ -142,14 +77,17 @@ static int talk_to_cgroup(int *client_fd, int *listen_fd, int *service_fd,
err = join_cgroup(CGROUP_TCP_SKB_PATH);
if (!ASSERT_OK(err, "join_cgroup"))
return -1;
*listen_fd = create_server_sock_v6();
*listen_fd = start_server(AF_INET6, SOCK_STREAM, NULL, 0, 0);
if (!ASSERT_GE(*listen_fd, 0, "listen_fd"))
return -1;
skel->bss->g_sock_port = get_sock_port_v6(*listen_fd);
port = get_socket_local_port(*listen_fd);
if (!ASSERT_GE(port, 0, "get_socket_local_port"))
return -1;
skel->bss->g_sock_port = ntohs(port);
/* Connect client to server */
err = connect_client_server_v6(*client_fd, *listen_fd);
if (!ASSERT_OK(err, "connect_client_server_v6"))
err = connect_fd_to_fd(*client_fd, *listen_fd, 0);
if (!ASSERT_OK(err, "connect_fd_to_fd"))
return -1;
*service_fd = accept(*listen_fd, NULL, NULL);
if (!ASSERT_GE(*service_fd, 0, "service_fd"))
@ -174,12 +112,13 @@ static int talk_to_outside(int *client_fd, int *listen_fd, int *service_fd,
{
int err, cp;
char buf[5];
int port;
/* Create client & server socket */
err = join_root_cgroup();
if (!ASSERT_OK(err, "join_root_cgroup"))
return -1;
*listen_fd = create_server_sock_v6();
*listen_fd = start_server(AF_INET6, SOCK_STREAM, NULL, 0, 0);
if (!ASSERT_GE(*listen_fd, 0, "listen_fd"))
return -1;
err = join_cgroup(CGROUP_TCP_SKB_PATH);
@ -191,11 +130,14 @@ static int talk_to_outside(int *client_fd, int *listen_fd, int *service_fd,
err = join_root_cgroup();
if (!ASSERT_OK(err, "join_root_cgroup"))
return -1;
skel->bss->g_sock_port = get_sock_port_v6(*listen_fd);
port = get_socket_local_port(*listen_fd);
if (!ASSERT_GE(port, 0, "get_socket_local_port"))
return -1;
skel->bss->g_sock_port = ntohs(port);
/* Connect client to server */
err = connect_client_server_v6(*client_fd, *listen_fd);
if (!ASSERT_OK(err, "connect_client_server_v6"))
err = connect_fd_to_fd(*client_fd, *listen_fd, 0);
if (!ASSERT_OK(err, "connect_fd_to_fd"))
return -1;
*service_fd = accept(*listen_fd, NULL, NULL);
if (!ASSERT_GE(*service_fd, 0, "service_fd"))

View file

@ -1,6 +1,11 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
#include "get_func_ip_test.skel.h"
#include "get_func_ip_uprobe_test.skel.h"
static noinline void uprobe_trigger(void)
{
}
static void test_function_entry(void)
{
@ -20,6 +25,8 @@ static void test_function_entry(void)
if (!ASSERT_OK(err, "get_func_ip_test__attach"))
goto cleanup;
skel->bss->uprobe_trigger = (unsigned long) uprobe_trigger;
prog_fd = bpf_program__fd(skel->progs.test1);
err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
@ -30,21 +37,31 @@ static void test_function_entry(void)
ASSERT_OK(err, "test_run");
uprobe_trigger();
ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
ASSERT_EQ(skel->bss->test7_result, 1, "test7_result");
ASSERT_EQ(skel->bss->test8_result, 1, "test8_result");
cleanup:
get_func_ip_test__destroy(skel);
}
/* test6 is x86_64 specific because of the instruction
* offset, disabling it for all other archs
*/
#ifdef __x86_64__
static void test_function_body(void)
extern void uprobe_trigger_body(void);
asm(
".globl uprobe_trigger_body\n"
".type uprobe_trigger_body, @function\n"
"uprobe_trigger_body:\n"
" nop\n"
" ret\n"
);
static void test_function_body_kprobe(void)
{
struct get_func_ip_test *skel = NULL;
LIBBPF_OPTS(bpf_test_run_opts, topts);
@ -56,6 +73,9 @@ static void test_function_body(void)
if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
return;
/* test6 is x86_64 specific and is disabled by default,
* enable it for body test.
*/
bpf_program__set_autoload(skel->progs.test6, true);
err = get_func_ip_test__load(skel);
@ -79,6 +99,35 @@ static void test_function_body(void)
bpf_link__destroy(link6);
get_func_ip_test__destroy(skel);
}
static void test_function_body_uprobe(void)
{
struct get_func_ip_uprobe_test *skel = NULL;
int err;
skel = get_func_ip_uprobe_test__open_and_load();
if (!ASSERT_OK_PTR(skel, "get_func_ip_uprobe_test__open_and_load"))
return;
err = get_func_ip_uprobe_test__attach(skel);
if (!ASSERT_OK(err, "get_func_ip_test__attach"))
goto cleanup;
skel->bss->uprobe_trigger_body = (unsigned long) uprobe_trigger_body;
uprobe_trigger_body();
ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
cleanup:
get_func_ip_uprobe_test__destroy(skel);
}
static void test_function_body(void)
{
test_function_body_kprobe();
test_function_body_uprobe();
}
#else
#define test_function_body()
#endif

View file

@ -124,7 +124,7 @@ static void missing_map(void)
ASSERT_FALSE(bpf_map__autocreate(skel->maps.missing_map), "missing_map_autocreate");
ASSERT_HAS_SUBSTR(log_buf,
"8: <invalid BPF map reference>\n"
": <invalid BPF map reference>\n"
"BPF map 'missing_map' is referenced but wasn't created\n",
"log_buf");

View file

@ -1,4 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
#include <regex.h>
#include <test_progs.h>
#include <network_helpers.h>
@ -19,12 +20,16 @@ static struct {
"; R1_w=map_value(off=0,ks=4,vs=4,imm=0)\n2: (85) call bpf_this_cpu_ptr#154\n"
"R1 type=map_value expected=percpu_ptr_" },
{ "lock_id_mapval_preserve",
"8: (bf) r1 = r0 ; R0_w=map_value(id=1,off=0,ks=4,vs=8,imm=0) "
"R1_w=map_value(id=1,off=0,ks=4,vs=8,imm=0)\n9: (85) call bpf_this_cpu_ptr#154\n"
"[0-9]\\+: (bf) r1 = r0 ;"
" R0_w=map_value(id=1,off=0,ks=4,vs=8,imm=0)"
" R1_w=map_value(id=1,off=0,ks=4,vs=8,imm=0)\n"
"[0-9]\\+: (85) call bpf_this_cpu_ptr#154\n"
"R1 type=map_value expected=percpu_ptr_" },
{ "lock_id_innermapval_preserve",
"13: (bf) r1 = r0 ; R0=map_value(id=2,off=0,ks=4,vs=8,imm=0) "
"R1_w=map_value(id=2,off=0,ks=4,vs=8,imm=0)\n14: (85) call bpf_this_cpu_ptr#154\n"
"[0-9]\\+: (bf) r1 = r0 ;"
" R0=map_value(id=2,off=0,ks=4,vs=8,imm=0)"
" R1_w=map_value(id=2,off=0,ks=4,vs=8,imm=0)\n"
"[0-9]\\+: (85) call bpf_this_cpu_ptr#154\n"
"R1 type=map_value expected=percpu_ptr_" },
{ "lock_id_mismatch_kptr_kptr", "bpf_spin_unlock of different lock" },
{ "lock_id_mismatch_kptr_global", "bpf_spin_unlock of different lock" },
@ -45,6 +50,24 @@ static struct {
{ "lock_id_mismatch_innermapval_mapval", "bpf_spin_unlock of different lock" },
};
static int match_regex(const char *pattern, const char *string)
{
int err, rc;
regex_t re;
err = regcomp(&re, pattern, REG_NOSUB);
if (err) {
char errbuf[512];
regerror(err, &re, errbuf, sizeof(errbuf));
PRINT_FAIL("Can't compile regex: %s\n", errbuf);
return -1;
}
rc = regexec(&re, string, 0, NULL, 0);
regfree(&re);
return rc == 0 ? 1 : 0;
}
static void test_spin_lock_fail_prog(const char *prog_name, const char *err_msg)
{
LIBBPF_OPTS(bpf_object_open_opts, opts, .kernel_log_buf = log_buf,
@ -74,7 +97,11 @@ static void test_spin_lock_fail_prog(const char *prog_name, const char *err_msg)
goto end;
}
if (!ASSERT_OK_PTR(strstr(log_buf, err_msg), "expected error message")) {
ret = match_regex(err_msg, log_buf);
if (!ASSERT_GE(ret, 0, "match_regex"))
goto end;
if (!ASSERT_TRUE(ret, "no match for expected error message")) {
fprintf(stderr, "Expected: %s\n", err_msg);
fprintf(stderr, "Verifier: %s\n", log_buf);
}

View file

@ -2237,3 +2237,34 @@ void serial_test_tc_opts_detach_after(void)
test_tc_opts_detach_after_target(BPF_TCX_INGRESS);
test_tc_opts_detach_after_target(BPF_TCX_EGRESS);
}
static void test_tc_opts_delete_empty(int target, bool chain_tc_old)
{
LIBBPF_OPTS(bpf_tc_hook, tc_hook, .ifindex = loopback);
LIBBPF_OPTS(bpf_prog_detach_opts, optd);
int err;
assert_mprog_count(target, 0);
if (chain_tc_old) {
tc_hook.attach_point = target == BPF_TCX_INGRESS ?
BPF_TC_INGRESS : BPF_TC_EGRESS;
err = bpf_tc_hook_create(&tc_hook);
ASSERT_OK(err, "bpf_tc_hook_create");
__assert_mprog_count(target, 0, true, loopback);
}
err = bpf_prog_detach_opts(0, loopback, target, &optd);
ASSERT_EQ(err, -ENOENT, "prog_detach");
if (chain_tc_old) {
tc_hook.attach_point = BPF_TC_INGRESS | BPF_TC_EGRESS;
bpf_tc_hook_destroy(&tc_hook);
}
assert_mprog_count(target, 0);
}
void serial_test_tc_opts_delete_empty(void)
{
test_tc_opts_delete_empty(BPF_TCX_INGRESS, false);
test_tc_opts_delete_empty(BPF_TCX_EGRESS, false);
test_tc_opts_delete_empty(BPF_TCX_INGRESS, true);
test_tc_opts_delete_empty(BPF_TCX_EGRESS, true);
}

View file

@ -1,8 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <stdbool.h>
char _license[] SEC("license") = "GPL";
@ -83,3 +82,25 @@ int test6(struct pt_regs *ctx)
test6_result = (const void *) addr == 0;
return 0;
}
unsigned long uprobe_trigger;
__u64 test7_result = 0;
SEC("uprobe//proc/self/exe:uprobe_trigger")
int BPF_UPROBE(test7)
{
__u64 addr = bpf_get_func_ip(ctx);
test7_result = (const void *) addr == (const void *) uprobe_trigger;
return 0;
}
__u64 test8_result = 0;
SEC("uretprobe//proc/self/exe:uprobe_trigger")
int BPF_URETPROBE(test8, int ret)
{
__u64 addr = bpf_get_func_ip(ctx);
test8_result = (const void *) addr == (const void *) uprobe_trigger;
return 0;
}

View file

@ -0,0 +1,18 @@
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
char _license[] SEC("license") = "GPL";
unsigned long uprobe_trigger_body;
__u64 test1_result = 0;
SEC("uprobe//proc/self/exe:uprobe_trigger_body+1")
int BPF_UPROBE(test1)
{
__u64 addr = bpf_get_func_ip(ctx);
test1_result = (const void *) addr == (const void *) uprobe_trigger_body + 1;
return 0;
}

View file

@ -198,6 +198,28 @@ l0_%=: \
: __clobber_all);
}
SEC("socket")
__description("MOV64SX, S16, R10 Sign Extension")
__failure __msg("R1 type=scalar expected=fp, pkt, pkt_meta, map_key, map_value, mem, ringbuf_mem, buf, trusted_ptr_")
__failure_unpriv __msg_unpriv("R10 sign-extension part of pointer")
__naked void mov64sx_s16_r10(void)
{
asm volatile (" \
r1 = 553656332; \
*(u32 *)(r10 - 8) = r1; \
r1 = (s16)r10; \
r1 += -8; \
r2 = 3; \
if r2 <= r1 goto l0_%=; \
l0_%=: \
call %[bpf_trace_printk]; \
r0 = 0; \
exit; \
" :
: __imm(bpf_trace_printk)
: __clobber_all);
}
#else
SEC("socket")