bpf-next-for-netdev

-----BEGIN PGP SIGNATURE-----
 
 iHUEABYIAB0WIQTFp0I1jqZrAX+hPRXbK58LschIgwUCZkGcZAAKCRDbK58LschI
 g6o6APwLsqhrM2w71VUN5ciCxu4H5VDtZp6wkdqtVbxxU4qNxQEApKgYgKt8ZLF3
 Kily5c7m+S4ZXhMX21rb8JhSAz0dfQk=
 =5Dk7
 -----END PGP SIGNATURE-----

Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2024-05-13

We've added 119 non-merge commits during the last 14 day(s) which contain
a total of 134 files changed, 9462 insertions(+), 4742 deletions(-).

The main changes are:

1) Add BPF JIT support for 32-bit ARCv2 processors, from Shahab Vahedi.

2) Add BPF range computation improvements to the verifier, in particular
   around the XOR and OR operators, refactor the checks for range computation,
   and relax MUL range computation so that src_reg can also be an unknown
   scalar, from Cupertino Miranda.

3) Add support to attach kprobe BPF programs through the kprobe_multi link
   in a session mode, meaning a BPF program is attached to both function
   entry and return; the entry program can decide whether the return program
   gets executed and can share a u64 cookie value with the return program.
   Session mode is a common use-case for tetragon and bpftrace,
   from Jiri Olsa.
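
   As an illustration, a session program could look roughly like the sketch
   below (the attach target and overall shape are only illustrative; the
   kfuncs and the "kprobe.session" attach type are the ones added here):

      #include <vmlinux.h>
      #include <bpf/bpf_helpers.h>
      #include <bpf/bpf_tracing.h>

      /* kfuncs added by this series */
      extern bool bpf_session_is_return(void) __ksym;
      extern __u64 *bpf_session_cookie(void) __ksym;

      char LICENSE[] SEC("license") = "GPL";

      SEC("kprobe.session/do_unlinkat")
      int handle_unlinkat(struct pt_regs *ctx)
      {
              __u64 *cookie = bpf_session_cookie();

              if (!bpf_session_is_return()) {
                      /* entry: stash a timestamp in the per-invocation cookie */
                      *cookie = bpf_ktime_get_ns();
                      /* a non-zero return here would skip the return probe */
                      return 0;
              }

              /* return: the cookie written at entry is visible here */
              bpf_printk("do_unlinkat took %llu ns",
                         bpf_ktime_get_ns() - *cookie);
              return 0;
      }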

4) Fix a potential overflow in libbpf's ring__consume_n() and improve libbpf
   as well as BPF selftest's struct_ops handling, from Andrii Nakryiko.

5) Improvements to BPF selftests in context of BPF gcc backend,
   from Jose E. Marchesi & David Faust.

6) Migrate remaining BPF selftest tests from test_sock_addr.c to the
   prog_tests style in order to retire the old test, run it in BPF CI and
   additionally expand test coverage, from Jordan Rife.

7) Big batch of BPF selftest refactoring in order to remove duplicate code
   around common network helpers, from Geliang Tang.

8) Another batch of improvements to BPF selftests to retire the obsolete
   bpf_tcp_helpers.h as everything needed is now available in vmlinux.h,
   from Martin KaFai Lau.

9) Fix BPF map tear-down to not walk the map twice on free when both timer
   and wq are used, from Benjamin Tissoires.

10) Fix the BPF verifier's incorrect assumption that socket->sk is always
    non-NULL, from Alexei Starovoitov.

11) Change BTF build scripts to use --btf_features for pahole v1.26+,
    from Alan Maguire.
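
    With pahole v1.26+, the BTF encoding step then boils down to a single
    feature list; roughly (the invocation is illustrative, the flags are the
    ones from the scripts/Makefile.btf hunk below):

       pahole -J -j --btf_features=encode_force,var,float,enum64,decl_tag,type_tag,optimized_func,consistent_func vmlinux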

12) Small improvements to BPF code by reusing struct_size() and
    krealloc_array(), from Andy Shevchenko.

13) Fix s390 JIT to emit a barrier for BPF_FETCH instructions,
    from Ilya Leoshkevich.

14) Extend the TCP ->cong_control() callback in order to feed in ack and
    flag parameters and to allow write access to tp->snd_cwnd_stamp from BPF
    programs, from Miao Xu.
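
    A BPF congestion-control implementation written via struct_ops then gets
    the extra arguments directly; a minimal sketch (program name and body are
    illustrative, the callback signature matches the tcp_congestion_ops
    change below):

       #include <vmlinux.h>
       #include <bpf/bpf_helpers.h>
       #include <bpf/bpf_tracing.h>

       char LICENSE[] SEC("license") = "GPL";

       SEC("struct_ops")
       void BPF_PROG(sample_cong_control, struct sock *sk, __u32 ack, int flag,
                     const struct rate_sample *rs)
       {
               struct tcp_sock *tp = (struct tcp_sock *)sk;

               /* ack and flag from tcp_cong_control() are now passed through;
                * snd_cwnd_stamp is writable thanks to the bpf_tcp_ca change.
                */
               tp->snd_cwnd_stamp = (__u32)bpf_jiffies64();
       }

       SEC(".struct_ops.link")
       struct tcp_congestion_ops sample_ca = {
               .cong_control = (void *)sample_cong_control,
               .name         = "bpf_sample_ca",
               /* ssthresh, undo_cwnd, etc. omitted for brevity */
       };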

15) Add support for internal-only per-CPU instructions to inline the
    bpf_get_smp_processor_id() helper call in the arm64 and riscv64 BPF JITs,
    from Puranjay Mohan.

16) Follow-up to remove the redundant ethtool.h from tooling infrastructure,
    from Tushar Vyavahare.

17) Extend libbpf to support "module:<function>" syntax for tracing
    programs, from Viktor Malik.
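
    For instance, a tracing program can now target a function in a specific
    kernel module straight from its section name; a small sketch (module and
    function names are placeholders):

       #include <vmlinux.h>
       #include <bpf/bpf_helpers.h>
       #include <bpf/bpf_tracing.h>

       char LICENSE[] SEC("license") = "GPL";

       SEC("fentry/my_module:my_module_func")
       int BPF_PROG(trace_module_func)
       {
               return 0;
       }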

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (119 commits)
  bpf: make list_for_each_entry portable
  bpf: ignore expected GCC warning in test_global_func10.c
  bpf: disable strict aliasing in test_global_func9.c
  selftests/bpf: Free strdup memory in xdp_hw_metadata
  selftests/bpf: Fix a few tests for GCC related warnings.
  bpf: avoid gcc overflow warning in test_xdp_vlan.c
  tools: remove redundant ethtool.h from tooling infra
  selftests/bpf: Expand ATTACH_REJECT tests
  selftests/bpf: Expand getsockname and getpeername tests
  selftests/bpf: Expand sockaddr hook deny tests
  selftests/bpf: Expand sockaddr program return value tests
  selftests/bpf: Retire test_sock_addr.(c|sh)
  selftests/bpf: Remove redundant sendmsg test cases
  selftests/bpf: Migrate ATTACH_REJECT test cases
  selftests/bpf: Migrate expected_attach_type tests
  selftests/bpf: Migrate wildcard destination rewrite test
  selftests/bpf: Migrate sendmsg6 v4 mapped address tests
  selftests/bpf: Migrate sendmsg deny test cases
  selftests/bpf: Migrate WILDCARD_IP test
  selftests/bpf: Handle SYSCALL_EPERM and SYSCALL_ENOTSUPP test cases
  ...
====================

Link: https://lore.kernel.org/r/20240513134114.17575-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 6e62702feb
Author: Jakub Kicinski <kuba@kernel.org>
Date:   2024-05-13 16:40:22 -07:00

134 changed files with 9483 additions and 4763 deletions


@ -72,6 +72,7 @@ two flavors of JITs, the newer eBPF JIT currently supported on:
- riscv64
- riscv32
- loongarch64
- arc
And the older cBPF JIT supported on the following archs:


@ -513,7 +513,7 @@ JIT compiler
------------
The Linux kernel has a built-in BPF JIT compiler for x86_64, SPARC,
PowerPC, ARM, ARM64, MIPS, RISC-V and s390 and can be enabled through
PowerPC, ARM, ARM64, MIPS, RISC-V, s390, and ARC and can be enabled through
CONFIG_BPF_JIT. The JIT compiler is transparently invoked for each
attached filter from user space or for internal kernel users if it has
been previously enabled by root::
@ -650,7 +650,7 @@ before a conversion to the new layout is being done behind the scenes!
Currently, the classic BPF format is being used for JITing on most
32-bit architectures, whereas x86-64, aarch64, s390x, powerpc64,
sparc64, arm32, riscv64, riscv32, loongarch64 perform JIT compilation
sparc64, arm32, riscv64, riscv32, loongarch64, arc perform JIT compilation
from eBPF instruction set.
Testing


@ -3712,6 +3712,12 @@ S: Maintained
F: Documentation/devicetree/bindings/iio/imu/bosch,bmi323.yaml
F: drivers/iio/imu/bmi323/
BPF JIT for ARC
M: Shahab Vahedi <shahab@synopsys.com>
L: bpf@vger.kernel.org
S: Maintained
F: arch/arc/net/
BPF JIT for ARM
M: Russell King <linux@armlinux.org.uk>
M: Puranjay Mohan <puranjay@kernel.org>


@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
obj-y += kernel/
obj-y += mm/
obj-y += net/
# for cleaning
subdir- += boot


@ -51,6 +51,7 @@ config ARC
select PCI_SYSCALL if PCI
select HAVE_ARCH_JUMP_LABEL if ISA_ARCV2 && !CPU_ENDIAN_BE32
select TRACE_IRQFLAGS_SUPPORT
select HAVE_EBPF_JIT if ISA_ARCV2
config LOCKDEP_SUPPORT
def_bool y

arch/arc/net/Makefile (new file, 6 lines)

@ -0,0 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
ifeq ($(CONFIG_ISA_ARCV2),y)
obj-$(CONFIG_BPF_JIT) += bpf_jit_core.o
obj-$(CONFIG_BPF_JIT) += bpf_jit_arcv2.o
endif

arch/arc/net/bpf_jit.h (new file, 164 lines)

@ -0,0 +1,164 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* The interface that a back-end should provide to bpf_jit_core.c.
*
* Copyright (c) 2024 Synopsys Inc.
* Author: Shahab Vahedi <shahab@synopsys.com>
*/
#ifndef _ARC_BPF_JIT_H
#define _ARC_BPF_JIT_H
#include <linux/bpf.h>
#include <linux/filter.h>
/* Print debug info and assert. */
//#define ARC_BPF_JIT_DEBUG
/* Determine the address type of the target. */
#ifdef CONFIG_ISA_ARCV2
#define ARC_ADDR u32
#endif
/*
* For the translation of some BPF instructions, a temporary register
* might be needed for some interim data.
*/
#define JIT_REG_TMP MAX_BPF_JIT_REG
/*
* Buffer access: If buffer "b" is not NULL, advance by "n" bytes.
*
* This macro must be used in any place that potentially requires a
* "buf + len". This way, we make sure that the "buf" argument for
* the underlying "arc_*(buf, ...)" ends up as NULL instead of something
* like "0+4" or "0+8", etc. Those "arc_*()" functions check their "buf"
* value to decide if instructions should be emitted or not.
*/
#define BUF(b, n) (((b) != NULL) ? ((b) + (n)) : (b))
/************** Functions that the back-end must provide **************/
/* Extension for 32-bit operations. */
inline u8 zext(u8 *buf, u8 rd);
/***** Moves *****/
u8 mov_r32(u8 *buf, u8 rd, u8 rs, u8 sign_ext);
u8 mov_r32_i32(u8 *buf, u8 reg, s32 imm);
u8 mov_r64(u8 *buf, u8 rd, u8 rs, u8 sign_ext);
u8 mov_r64_i32(u8 *buf, u8 reg, s32 imm);
u8 mov_r64_i64(u8 *buf, u8 reg, u32 lo, u32 hi);
/***** Loads and stores *****/
u8 load_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size, bool sign_ext);
u8 store_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size);
u8 store_i(u8 *buf, s32 imm, u8 rd, s16 off, u8 size);
/***** Addition *****/
u8 add_r32(u8 *buf, u8 rd, u8 rs);
u8 add_r32_i32(u8 *buf, u8 rd, s32 imm);
u8 add_r64(u8 *buf, u8 rd, u8 rs);
u8 add_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Subtraction *****/
u8 sub_r32(u8 *buf, u8 rd, u8 rs);
u8 sub_r32_i32(u8 *buf, u8 rd, s32 imm);
u8 sub_r64(u8 *buf, u8 rd, u8 rs);
u8 sub_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Multiplication *****/
u8 mul_r32(u8 *buf, u8 rd, u8 rs);
u8 mul_r32_i32(u8 *buf, u8 rd, s32 imm);
u8 mul_r64(u8 *buf, u8 rd, u8 rs);
u8 mul_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Division *****/
u8 div_r32(u8 *buf, u8 rd, u8 rs, bool sign_ext);
u8 div_r32_i32(u8 *buf, u8 rd, s32 imm, bool sign_ext);
/***** Remainder *****/
u8 mod_r32(u8 *buf, u8 rd, u8 rs, bool sign_ext);
u8 mod_r32_i32(u8 *buf, u8 rd, s32 imm, bool sign_ext);
/***** Bitwise AND *****/
u8 and_r32(u8 *buf, u8 rd, u8 rs);
u8 and_r32_i32(u8 *buf, u8 rd, s32 imm);
u8 and_r64(u8 *buf, u8 rd, u8 rs);
u8 and_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Bitwise OR *****/
u8 or_r32(u8 *buf, u8 rd, u8 rs);
u8 or_r32_i32(u8 *buf, u8 rd, s32 imm);
u8 or_r64(u8 *buf, u8 rd, u8 rs);
u8 or_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Bitwise XOR *****/
u8 xor_r32(u8 *buf, u8 rd, u8 rs);
u8 xor_r32_i32(u8 *buf, u8 rd, s32 imm);
u8 xor_r64(u8 *buf, u8 rd, u8 rs);
u8 xor_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Bitwise Negate *****/
u8 neg_r32(u8 *buf, u8 r);
u8 neg_r64(u8 *buf, u8 r);
/***** Bitwise left shift *****/
u8 lsh_r32(u8 *buf, u8 rd, u8 rs);
u8 lsh_r32_i32(u8 *buf, u8 rd, u8 imm);
u8 lsh_r64(u8 *buf, u8 rd, u8 rs);
u8 lsh_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Bitwise right shift (logical) *****/
u8 rsh_r32(u8 *buf, u8 rd, u8 rs);
u8 rsh_r32_i32(u8 *buf, u8 rd, u8 imm);
u8 rsh_r64(u8 *buf, u8 rd, u8 rs);
u8 rsh_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Bitwise right shift (arithmetic) *****/
u8 arsh_r32(u8 *buf, u8 rd, u8 rs);
u8 arsh_r32_i32(u8 *buf, u8 rd, u8 imm);
u8 arsh_r64(u8 *buf, u8 rd, u8 rs);
u8 arsh_r64_i32(u8 *buf, u8 rd, s32 imm);
/***** Frame related *****/
u32 mask_for_used_regs(u8 bpf_reg, bool is_call);
u8 arc_prologue(u8 *buf, u32 usage, u16 frame_size);
u8 arc_epilogue(u8 *buf, u32 usage, u16 frame_size);
/***** Jumps *****/
/*
* Different sorts of conditions (ARC enum as opposed to BPF_*).
*
* Do not change the order of enums here. ARC_CC_SLE+1 is used
* to determine the number of JCCs.
*/
enum ARC_CC {
ARC_CC_UGT = 0, /* unsigned > */
ARC_CC_UGE, /* unsigned >= */
ARC_CC_ULT, /* unsigned < */
ARC_CC_ULE, /* unsigned <= */
ARC_CC_SGT, /* signed > */
ARC_CC_SGE, /* signed >= */
ARC_CC_SLT, /* signed < */
ARC_CC_SLE, /* signed <= */
ARC_CC_AL, /* always */
ARC_CC_EQ, /* == */
ARC_CC_NE, /* != */
ARC_CC_SET, /* test */
ARC_CC_LAST
};
/*
* A few notes:
*
* - check_jmp_*() are prerequisites before calling the gen_jmp_*().
* They return "true" if the jump is possible and "false" otherwise.
*
* - The notion of "*_off" is to emphasize that these parameters are
* merely offsets in the JIT stream and not absolute addresses. One
* can look at them as addresses if the JIT code would start from
* address 0x0000_0000. Nonetheless, since the buffer address for the
* JIT is on a word-aligned address, this works and actually makes
* things simpler (offsets are in the range of u32 which is more than
* enough).
*/
bool check_jmp_32(u32 curr_off, u32 targ_off, u8 cond);
bool check_jmp_64(u32 curr_off, u32 targ_off, u8 cond);
u8 gen_jmp_32(u8 *buf, u8 rd, u8 rs, u8 cond, u32 c_off, u32 t_off);
u8 gen_jmp_64(u8 *buf, u8 rd, u8 rs, u8 cond, u32 c_off, u32 t_off);
/***** Miscellaneous *****/
u8 gen_func_call(u8 *buf, ARC_ADDR func_addr, bool external_func);
u8 arc_to_bpf_return(u8 *buf);
/*
* - Perform byte swaps on "rd" based on the "size".
* - If "force" is set, do it unconditionally. Otherwise, consider the
* desired "endian"ness and the host endianness.
* - For data "size"s up to 32 bits, perform a zero-extension if asked
* by the "do_zext" boolean.
*/
u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force, bool do_zext);
#endif /* _ARC_BPF_JIT_H */

arch/arc/net/bpf_jit_arcv2.c (new file, 3005 lines; diff suppressed because it is too large)

arch/arc/net/bpf_jit_core.c (new file, 1425 lines; diff suppressed because it is too large)


@ -135,6 +135,12 @@ enum aarch64_insn_special_register {
AARCH64_INSN_SPCLREG_SP_EL2 = 0xF210
};
enum aarch64_insn_system_register {
AARCH64_INSN_SYSREG_TPIDR_EL1 = 0x4684,
AARCH64_INSN_SYSREG_TPIDR_EL2 = 0x6682,
AARCH64_INSN_SYSREG_SP_EL0 = 0x4208,
};
enum aarch64_insn_variant {
AARCH64_INSN_VARIANT_32BIT,
AARCH64_INSN_VARIANT_64BIT
@ -686,6 +692,8 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
}
#endif
u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
enum aarch64_insn_system_register sysreg);
s32 aarch64_get_branch_offset(u32 insn);
u32 aarch64_set_branch_offset(u32 insn, s32 offset);


@ -1515,3 +1515,14 @@ u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
return insn;
}
u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
enum aarch64_insn_system_register sysreg)
{
u32 insn = aarch64_insn_get_mrs_value();
insn &= ~GENMASK(19, 0);
insn |= sysreg << 5;
return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT,
insn, result);
}


@ -297,4 +297,12 @@
#define A64_ADR(Rd, offset) \
aarch64_insn_gen_adr(0, offset, Rd, AARCH64_INSN_ADR_TYPE_ADR)
/* MRS */
#define A64_MRS_TPIDR_EL1(Rt) \
aarch64_insn_gen_mrs(Rt, AARCH64_INSN_SYSREG_TPIDR_EL1)
#define A64_MRS_TPIDR_EL2(Rt) \
aarch64_insn_gen_mrs(Rt, AARCH64_INSN_SYSREG_TPIDR_EL2)
#define A64_MRS_SP_EL0(Rt) \
aarch64_insn_gen_mrs(Rt, AARCH64_INSN_SYSREG_SP_EL0)
#endif /* _BPF_JIT_H */


@ -494,20 +494,26 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
{
const u8 code = insn->code;
const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
const u8 dst = bpf2a64[insn->dst_reg];
const u8 src = bpf2a64[insn->src_reg];
const u8 tmp = bpf2a64[TMP_REG_1];
const u8 tmp2 = bpf2a64[TMP_REG_2];
const bool isdw = BPF_SIZE(code) == BPF_DW;
const bool arena = BPF_MODE(code) == BPF_PROBE_ATOMIC;
const s16 off = insn->off;
u8 reg;
u8 reg = dst;
if (!off) {
reg = dst;
} else {
emit_a64_mov_i(1, tmp, off, ctx);
emit(A64_ADD(1, tmp, tmp, dst), ctx);
reg = tmp;
if (off || arena) {
if (off) {
emit_a64_mov_i(1, tmp, off, ctx);
emit(A64_ADD(1, tmp, tmp, dst), ctx);
reg = tmp;
}
if (arena) {
emit(A64_ADD(1, tmp, reg, arena_vm_base), ctx);
reg = tmp;
}
}
switch (insn->imm) {
@ -576,6 +582,12 @@ static int emit_ll_sc_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
u8 reg;
s32 jmp_offset;
if (BPF_MODE(code) == BPF_PROBE_ATOMIC) {
/* ll_sc based atomics don't support unsafe pointers yet. */
pr_err_once("unknown atomic opcode %02x\n", code);
return -EINVAL;
}
if (!off) {
reg = dst;
} else {
@ -777,7 +789,8 @@ static int add_exception_handler(const struct bpf_insn *insn,
if (BPF_MODE(insn->code) != BPF_PROBE_MEM &&
BPF_MODE(insn->code) != BPF_PROBE_MEMSX &&
BPF_MODE(insn->code) != BPF_PROBE_MEM32)
BPF_MODE(insn->code) != BPF_PROBE_MEM32 &&
BPF_MODE(insn->code) != BPF_PROBE_ATOMIC)
return 0;
if (!ctx->prog->aux->extable ||
@ -877,6 +890,15 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
emit(A64_ORR(1, tmp, dst, tmp), ctx);
emit(A64_MOV(1, dst, tmp), ctx);
break;
} else if (insn_is_mov_percpu_addr(insn)) {
if (dst != src)
emit(A64_MOV(1, dst, src), ctx);
if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
emit(A64_MRS_TPIDR_EL2(tmp), ctx);
else
emit(A64_MRS_TPIDR_EL1(tmp), ctx);
emit(A64_ADD(1, dst, dst, tmp), ctx);
break;
}
switch (insn->off) {
case 0:
@ -1206,6 +1228,21 @@ emit_cond_jmp:
const u8 r0 = bpf2a64[BPF_REG_0];
bool func_addr_fixed;
u64 func_addr;
u32 cpu_offset;
/* Implement helper call to bpf_get_smp_processor_id() inline */
if (insn->src_reg == 0 && insn->imm == BPF_FUNC_get_smp_processor_id) {
cpu_offset = offsetof(struct thread_info, cpu);
emit(A64_MRS_SP_EL0(tmp), ctx);
if (is_lsi_offset(cpu_offset, 2)) {
emit(A64_LDR32I(r0, tmp, cpu_offset), ctx);
} else {
emit_a64_mov_i(1, tmp2, cpu_offset, ctx);
emit(A64_LDR32(r0, tmp, tmp2), ctx);
}
break;
}
ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
&func_addr, &func_addr_fixed);
@ -1474,12 +1511,18 @@ emit_cond_jmp:
case BPF_STX | BPF_ATOMIC | BPF_W:
case BPF_STX | BPF_ATOMIC | BPF_DW:
case BPF_STX | BPF_PROBE_ATOMIC | BPF_W:
case BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
ret = emit_lse_atomic(insn, ctx);
else
ret = emit_ll_sc_atomic(insn, ctx);
if (ret)
return ret;
ret = add_exception_handler(insn, ctx, dst);
if (ret)
return ret;
break;
default:
@ -2527,6 +2570,34 @@ bool bpf_jit_supports_arena(void)
return true;
}
bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
{
if (!in_arena)
return true;
switch (insn->code) {
case BPF_STX | BPF_ATOMIC | BPF_W:
case BPF_STX | BPF_ATOMIC | BPF_DW:
if (!cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
return false;
}
return true;
}
bool bpf_jit_supports_percpu_insn(void)
{
return true;
}
bool bpf_jit_inlines_helper_call(s32 imm)
{
switch (imm) {
case BPF_FUNC_get_smp_processor_id:
return true;
default:
return false;
}
}
void bpf_jit_free(struct bpf_prog *prog)
{
if (prog->jited) {


@ -608,7 +608,7 @@ static inline u32 rv_nop(void)
return rv_i_insn(0, 0, 0, 0, 0x13);
}
/* RVC instrutions. */
/* RVC instructions. */
static inline u16 rvc_addi4spn(u8 rd, u32 imm10)
{
@ -737,7 +737,7 @@ static inline u16 rvc_swsp(u32 imm8, u8 rs2)
return rv_css_insn(0x6, imm, rs2, 0x2);
}
/* RVZBB instrutions. */
/* RVZBB instructions. */
static inline u32 rvzbb_sextb(u8 rd, u8 rs1)
{
return rv_i_insn(0x604, rs1, 1, rd, 0x13);


@ -12,6 +12,7 @@
#include <linux/stop_machine.h>
#include <asm/patch.h>
#include <asm/cfi.h>
#include <asm/percpu.h>
#include "bpf_jit.h"
#define RV_FENTRY_NINSNS 2
@ -503,33 +504,33 @@ static void emit_atomic(u8 rd, u8 rs, s16 off, s32 imm, bool is64,
break;
/* src_reg = atomic_fetch_<op>(dst_reg + off16, src_reg) */
case BPF_ADD | BPF_FETCH:
emit(is64 ? rv_amoadd_d(rs, rs, rd, 0, 0) :
rv_amoadd_w(rs, rs, rd, 0, 0), ctx);
emit(is64 ? rv_amoadd_d(rs, rs, rd, 1, 1) :
rv_amoadd_w(rs, rs, rd, 1, 1), ctx);
if (!is64)
emit_zextw(rs, rs, ctx);
break;
case BPF_AND | BPF_FETCH:
emit(is64 ? rv_amoand_d(rs, rs, rd, 0, 0) :
rv_amoand_w(rs, rs, rd, 0, 0), ctx);
emit(is64 ? rv_amoand_d(rs, rs, rd, 1, 1) :
rv_amoand_w(rs, rs, rd, 1, 1), ctx);
if (!is64)
emit_zextw(rs, rs, ctx);
break;
case BPF_OR | BPF_FETCH:
emit(is64 ? rv_amoor_d(rs, rs, rd, 0, 0) :
rv_amoor_w(rs, rs, rd, 0, 0), ctx);
emit(is64 ? rv_amoor_d(rs, rs, rd, 1, 1) :
rv_amoor_w(rs, rs, rd, 1, 1), ctx);
if (!is64)
emit_zextw(rs, rs, ctx);
break;
case BPF_XOR | BPF_FETCH:
emit(is64 ? rv_amoxor_d(rs, rs, rd, 0, 0) :
rv_amoxor_w(rs, rs, rd, 0, 0), ctx);
emit(is64 ? rv_amoxor_d(rs, rs, rd, 1, 1) :
rv_amoxor_w(rs, rs, rd, 1, 1), ctx);
if (!is64)
emit_zextw(rs, rs, ctx);
break;
/* src_reg = atomic_xchg(dst_reg + off16, src_reg); */
case BPF_XCHG:
emit(is64 ? rv_amoswap_d(rs, rs, rd, 0, 0) :
rv_amoswap_w(rs, rs, rd, 0, 0), ctx);
emit(is64 ? rv_amoswap_d(rs, rs, rd, 1, 1) :
rv_amoswap_w(rs, rs, rd, 1, 1), ctx);
if (!is64)
emit_zextw(rs, rs, ctx);
break;
@ -1089,6 +1090,24 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
emit_or(RV_REG_T1, rd, RV_REG_T1, ctx);
emit_mv(rd, RV_REG_T1, ctx);
break;
} else if (insn_is_mov_percpu_addr(insn)) {
if (rd != rs)
emit_mv(rd, rs, ctx);
#ifdef CONFIG_SMP
/* Load current CPU number in T1 */
emit_ld(RV_REG_T1, offsetof(struct thread_info, cpu),
RV_REG_TP, ctx);
/* << 3 because offsets are 8 bytes */
emit_slli(RV_REG_T1, RV_REG_T1, 3, ctx);
/* Load address of __per_cpu_offset array in T2 */
emit_addr(RV_REG_T2, (u64)&__per_cpu_offset, extra_pass, ctx);
/* Add offset of current CPU to __per_cpu_offset */
emit_add(RV_REG_T1, RV_REG_T2, RV_REG_T1, ctx);
/* Load __per_cpu_offset[cpu] in T1 */
emit_ld(RV_REG_T1, 0, RV_REG_T1, ctx);
/* Add the offset to Rd */
emit_add(rd, rd, RV_REG_T1, ctx);
#endif
}
if (imm == 1) {
/* Special mov32 for zext */
@ -1474,6 +1493,22 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
bool fixed_addr;
u64 addr;
/* Inline calls to bpf_get_smp_processor_id()
*
* RV_REG_TP holds the address of the current CPU's task_struct and thread_info is
* at offset 0 in task_struct.
* Load cpu from thread_info:
* Set R0 to ((struct thread_info *)(RV_REG_TP))->cpu
*
* This replicates the implementation of raw_smp_processor_id() on RISCV
*/
if (insn->src_reg == 0 && insn->imm == BPF_FUNC_get_smp_processor_id) {
/* Load current CPU number in R0 */
emit_ld(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu),
RV_REG_TP, ctx);
break;
}
mark_call(ctx);
ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
&addr, &fixed_addr);
@ -2038,3 +2073,18 @@ bool bpf_jit_supports_arena(void)
{
return true;
}
bool bpf_jit_supports_percpu_insn(void)
{
return true;
}
bool bpf_jit_inlines_helper_call(s32 imm)
{
switch (imm) {
case BPF_FUNC_get_smp_processor_id:
return true;
default:
return false;
}
}


@ -1427,8 +1427,12 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
EMIT6_DISP_LH(0xeb000000, is32 ? (op32) : (op64), \
(insn->imm & BPF_FETCH) ? src_reg : REG_W0, \
src_reg, dst_reg, off); \
if (is32 && (insn->imm & BPF_FETCH)) \
EMIT_ZERO(src_reg); \
if (insn->imm & BPF_FETCH) { \
/* bcr 14,0 - see atomic_fetch_{add,and,or,xor}() */ \
_EMIT2(0x07e0); \
if (is32) \
EMIT_ZERO(src_reg); \
} \
} while (0)
case BPF_ADD:
case BPF_ADD | BPF_FETCH:


@ -3,6 +3,8 @@
#ifndef _LINUX_BTF_IDS_H
#define _LINUX_BTF_IDS_H
#include <linux/types.h> /* for u32 */
struct btf_id_set {
u32 cnt;
u32 ids[];


@ -993,6 +993,7 @@ u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog);
void bpf_jit_compile(struct bpf_prog *prog);
bool bpf_jit_needs_zext(void);
bool bpf_jit_inlines_helper_call(s32 imm);
bool bpf_jit_supports_subprog_tailcalls(void);
bool bpf_jit_supports_percpu_insn(void);
bool bpf_jit_supports_kfunc_call(void);


@ -1164,7 +1164,7 @@ struct tcp_congestion_ops {
/* call when packets are delivered to update cwnd and pacing rate,
* after all the ca_state processing. (optional)
*/
void (*cong_control)(struct sock *sk, const struct rate_sample *rs);
void (*cong_control)(struct sock *sk, u32 ack, int flag, const struct rate_sample *rs);
/* new value of cwnd after loss (required) */


@ -1115,6 +1115,7 @@ enum bpf_attach_type {
BPF_CGROUP_UNIX_GETSOCKNAME,
BPF_NETKIT_PRIMARY,
BPF_NETKIT_PEER,
BPF_TRACE_KPROBE_SESSION,
__MAX_BPF_ATTACH_TYPE
};


@ -44,7 +44,7 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
obj-${CONFIG_BPF_LSM} += bpf_lsm.o
endif
ifeq ($(CONFIG_CRYPTO),y)
ifneq ($(CONFIG_CRYPTO),)
obj-$(CONFIG_BPF_SYSCALL) += crypto.o
endif
obj-$(CONFIG_BPF_PRELOAD) += preload/


@ -251,7 +251,7 @@ static vm_fault_t arena_vm_fault(struct vm_fault *vmf)
int ret;
kbase = bpf_arena_get_kern_vm_start(arena);
kaddr = kbase + (u32)(vmf->address & PAGE_MASK);
kaddr = kbase + (u32)(vmf->address);
guard(mutex)(&arena->lock);
page = vmalloc_to_page((void *)kaddr);


@ -436,13 +436,14 @@ static void array_map_free_timers_wq(struct bpf_map *map)
/* We don't reset or free fields other than timer and workqueue
* on uref dropping to zero.
*/
if (btf_record_has_field(map->record, BPF_TIMER))
for (i = 0; i < array->map.max_entries; i++)
bpf_obj_free_timer(map->record, array_map_elem_ptr(array, i));
if (btf_record_has_field(map->record, BPF_WORKQUEUE))
for (i = 0; i < array->map.max_entries; i++)
bpf_obj_free_workqueue(map->record, array_map_elem_ptr(array, i));
if (btf_record_has_field(map->record, BPF_TIMER | BPF_WORKQUEUE)) {
for (i = 0; i < array->map.max_entries; i++) {
if (btf_record_has_field(map->record, BPF_TIMER))
bpf_obj_free_timer(map->record, array_map_elem_ptr(array, i));
if (btf_record_has_field(map->record, BPF_WORKQUEUE))
bpf_obj_free_workqueue(map->record, array_map_elem_ptr(array, i));
}
}
}
/* Called when map->refcnt goes to zero, either from workqueue or from syscall */


@ -218,6 +218,7 @@ enum btf_kfunc_hook {
BTF_KFUNC_HOOK_SOCKET_FILTER,
BTF_KFUNC_HOOK_LWT,
BTF_KFUNC_HOOK_NETFILTER,
BTF_KFUNC_HOOK_KPROBE,
BTF_KFUNC_HOOK_MAX,
};
@ -8157,6 +8158,8 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
return BTF_KFUNC_HOOK_LWT;
case BPF_PROG_TYPE_NETFILTER:
return BTF_KFUNC_HOOK_NETFILTER;
case BPF_PROG_TYPE_KPROBE:
return BTF_KFUNC_HOOK_KPROBE;
default:
return BTF_KFUNC_HOOK_MAX;
}


@ -26,6 +26,7 @@
#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/objtool.h>
#include <linux/overflow.h>
#include <linux/rbtree_latch.h>
#include <linux/kallsyms.h>
#include <linux/rcupdate.h>
@ -849,7 +850,7 @@ int bpf_jit_add_poke_descriptor(struct bpf_prog *prog,
return -EINVAL;
}
tab = krealloc(tab, size * sizeof(*poke), GFP_KERNEL);
tab = krealloc_array(tab, size, sizeof(*poke), GFP_KERNEL);
if (!tab)
return -ENOMEM;
@ -2455,13 +2456,14 @@ EXPORT_SYMBOL(bpf_empty_prog_array);
struct bpf_prog_array *bpf_prog_array_alloc(u32 prog_cnt, gfp_t flags)
{
if (prog_cnt)
return kzalloc(sizeof(struct bpf_prog_array) +
sizeof(struct bpf_prog_array_item) *
(prog_cnt + 1),
flags);
struct bpf_prog_array *p;
return &bpf_empty_prog_array.hdr;
if (prog_cnt)
p = kzalloc(struct_size(p, items, prog_cnt + 1), flags);
else
p = &bpf_empty_prog_array.hdr;
return p;
}
void bpf_prog_array_free(struct bpf_prog_array *progs)
@ -2939,6 +2941,17 @@ bool __weak bpf_jit_needs_zext(void)
return false;
}
/* Return true if the JIT inlines the call to the helper corresponding to
* the imm.
*
* The verifier will not patch the insn->imm for the call to the helper if
* this returns true.
*/
bool __weak bpf_jit_inlines_helper_call(s32 imm)
{
return false;
}
/* Return TRUE if the JIT backend supports mixing bpf2bpf and tailcalls. */
bool __weak bpf_jit_supports_subprog_tailcalls(void)
{


@ -221,13 +221,11 @@ static bool htab_has_extra_elems(struct bpf_htab *htab)
return !htab_is_percpu(htab) && !htab_is_lru(htab);
}
static void htab_free_prealloced_timers(struct bpf_htab *htab)
static void htab_free_prealloced_timers_and_wq(struct bpf_htab *htab)
{
u32 num_entries = htab->map.max_entries;
int i;
if (!btf_record_has_field(htab->map.record, BPF_TIMER))
return;
if (htab_has_extra_elems(htab))
num_entries += num_possible_cpus();
@ -235,27 +233,12 @@ static void htab_free_prealloced_timers(struct bpf_htab *htab)
struct htab_elem *elem;
elem = get_htab_elem(htab, i);
bpf_obj_free_timer(htab->map.record, elem->key + round_up(htab->map.key_size, 8));
cond_resched();
}
}
static void htab_free_prealloced_wq(struct bpf_htab *htab)
{
u32 num_entries = htab->map.max_entries;
int i;
if (!btf_record_has_field(htab->map.record, BPF_WORKQUEUE))
return;
if (htab_has_extra_elems(htab))
num_entries += num_possible_cpus();
for (i = 0; i < num_entries; i++) {
struct htab_elem *elem;
elem = get_htab_elem(htab, i);
bpf_obj_free_workqueue(htab->map.record,
elem->key + round_up(htab->map.key_size, 8));
if (btf_record_has_field(htab->map.record, BPF_TIMER))
bpf_obj_free_timer(htab->map.record,
elem->key + round_up(htab->map.key_size, 8));
if (btf_record_has_field(htab->map.record, BPF_WORKQUEUE))
bpf_obj_free_workqueue(htab->map.record,
elem->key + round_up(htab->map.key_size, 8));
cond_resched();
}
}
@ -1515,7 +1498,7 @@ static void delete_all_elements(struct bpf_htab *htab)
migrate_enable();
}
static void htab_free_malloced_timers_or_wq(struct bpf_htab *htab, bool is_timer)
static void htab_free_malloced_timers_and_wq(struct bpf_htab *htab)
{
int i;
@ -1527,10 +1510,10 @@ static void htab_free_malloced_timers_or_wq(struct bpf_htab *htab, bool is_timer
hlist_nulls_for_each_entry(l, n, head, hash_node) {
/* We only free timer on uref dropping to zero */
if (is_timer)
if (btf_record_has_field(htab->map.record, BPF_TIMER))
bpf_obj_free_timer(htab->map.record,
l->key + round_up(htab->map.key_size, 8));
else
if (btf_record_has_field(htab->map.record, BPF_WORKQUEUE))
bpf_obj_free_workqueue(htab->map.record,
l->key + round_up(htab->map.key_size, 8));
}
@ -1544,17 +1527,11 @@ static void htab_map_free_timers_and_wq(struct bpf_map *map)
struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
/* We only free timer and workqueue on uref dropping to zero */
if (btf_record_has_field(htab->map.record, BPF_TIMER)) {
if (btf_record_has_field(htab->map.record, BPF_TIMER | BPF_WORKQUEUE)) {
if (!htab_is_prealloc(htab))
htab_free_malloced_timers_or_wq(htab, true);
htab_free_malloced_timers_and_wq(htab);
else
htab_free_prealloced_timers(htab);
}
if (btf_record_has_field(htab->map.record, BPF_WORKQUEUE)) {
if (!htab_is_prealloc(htab))
htab_free_malloced_timers_or_wq(htab, false);
else
htab_free_prealloced_wq(htab);
htab_free_prealloced_timers_and_wq(htab);
}
}


@ -4016,11 +4016,15 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI &&
attach_type != BPF_TRACE_KPROBE_MULTI)
return -EINVAL;
if (prog->expected_attach_type == BPF_TRACE_KPROBE_SESSION &&
attach_type != BPF_TRACE_KPROBE_SESSION)
return -EINVAL;
if (prog->expected_attach_type == BPF_TRACE_UPROBE_MULTI &&
attach_type != BPF_TRACE_UPROBE_MULTI)
return -EINVAL;
if (attach_type != BPF_PERF_EVENT &&
attach_type != BPF_TRACE_KPROBE_MULTI &&
attach_type != BPF_TRACE_KPROBE_SESSION &&
attach_type != BPF_TRACE_UPROBE_MULTI)
return -EINVAL;
return 0;
@ -5281,7 +5285,8 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr)
case BPF_PROG_TYPE_KPROBE:
if (attr->link_create.attach_type == BPF_PERF_EVENT)
ret = bpf_perf_link_attach(attr, prog);
else if (attr->link_create.attach_type == BPF_TRACE_KPROBE_MULTI)
else if (attr->link_create.attach_type == BPF_TRACE_KPROBE_MULTI ||
attr->link_create.attach_type == BPF_TRACE_KPROBE_SESSION)
ret = bpf_kprobe_multi_link_attach(attr, prog);
else if (attr->link_create.attach_type == BPF_TRACE_UPROBE_MULTI)
ret = bpf_uprobe_multi_link_attach(attr, prog);


@ -2368,6 +2368,8 @@ static void mark_btf_ld_reg(struct bpf_verifier_env *env,
regs[regno].type = PTR_TO_BTF_ID | flag;
regs[regno].btf = btf;
regs[regno].btf_id = btf_id;
if (type_may_be_null(flag))
regs[regno].id = ++env->id_gen;
}
#define DEF_NOT_SUBREG (0)
@ -5400,8 +5402,6 @@ static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
*/
mark_btf_ld_reg(env, cur_regs(env), value_regno, PTR_TO_BTF_ID, kptr_field->kptr.btf,
kptr_field->kptr.btf_id, btf_ld_kptr_type(env, kptr_field));
/* For mark_ptr_or_null_reg */
val_reg->id = ++env->id_gen;
} else if (class == BPF_STX) {
val_reg = reg_state(env, value_regno);
if (!register_is_null(val_reg) &&
@ -5719,7 +5719,8 @@ static bool is_trusted_reg(const struct bpf_reg_state *reg)
return true;
/* Types listed in the reg2btf_ids are always trusted */
if (reg2btf_ids[base_type(reg->type)])
if (reg2btf_ids[base_type(reg->type)] &&
!bpf_type_has_unsafe_modifiers(reg->type))
return true;
/* If a register is not referenced, it is trusted if it has the
@ -6339,6 +6340,7 @@ static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val,
#define BTF_TYPE_SAFE_RCU(__type) __PASTE(__type, __safe_rcu)
#define BTF_TYPE_SAFE_RCU_OR_NULL(__type) __PASTE(__type, __safe_rcu_or_null)
#define BTF_TYPE_SAFE_TRUSTED(__type) __PASTE(__type, __safe_trusted)
#define BTF_TYPE_SAFE_TRUSTED_OR_NULL(__type) __PASTE(__type, __safe_trusted_or_null)
/*
* Allow list few fields as RCU trusted or full trusted.
@ -6402,7 +6404,7 @@ BTF_TYPE_SAFE_TRUSTED(struct dentry) {
struct inode *d_inode;
};
BTF_TYPE_SAFE_TRUSTED(struct socket) {
BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct socket) {
struct sock *sk;
};
@ -6437,11 +6439,20 @@ static bool type_is_trusted(struct bpf_verifier_env *env,
BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct linux_binprm));
BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct file));
BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct dentry));
BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct socket));
return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, "__safe_trusted");
}
static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
struct bpf_reg_state *reg,
const char *field_name, u32 btf_id)
{
BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct socket));
return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id,
"__safe_trusted_or_null");
}
static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
struct bpf_reg_state *regs,
int regno, int off, int size,
@ -6550,6 +6561,8 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
*/
if (type_is_trusted(env, reg, field_name, btf_id)) {
flag |= PTR_TRUSTED;
} else if (type_is_trusted_or_null(env, reg, field_name, btf_id)) {
flag |= PTR_TRUSTED | PTR_MAYBE_NULL;
} else if (in_rcu_cs(env) && !type_may_be_null(reg->type)) {
if (type_is_rcu(env, reg, field_name, btf_id)) {
/* ignore __rcu tag and mark it MEM_RCU */
@ -11050,6 +11063,7 @@ enum special_kfunc_type {
KF_bpf_preempt_disable,
KF_bpf_preempt_enable,
KF_bpf_iter_css_task_new,
KF_bpf_session_cookie,
};
BTF_SET_START(special_kfunc_set)
@ -11110,6 +11124,7 @@ BTF_ID(func, bpf_iter_css_task_new)
#else
BTF_ID_UNUSED
#endif
BTF_ID(func, bpf_session_cookie)
static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
{
@ -12281,6 +12296,11 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
}
}
if (meta.func_id == special_kfunc_list[KF_bpf_session_cookie]) {
meta.r0_size = sizeof(u64);
meta.r0_rdonly = false;
}
if (is_bpf_wq_set_callback_impl_kfunc(meta.func_id)) {
err = push_callback_call(env, insn, insn_idx, meta.subprogno,
set_timer_callback_state);
@ -13858,6 +13878,46 @@ static void scalar_min_max_arsh(struct bpf_reg_state *dst_reg,
__update_reg_bounds(dst_reg);
}
static bool is_safe_to_compute_dst_reg_range(struct bpf_insn *insn,
const struct bpf_reg_state *src_reg)
{
bool src_is_const = false;
u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
if (insn_bitness == 32) {
if (tnum_subreg_is_const(src_reg->var_off)
&& src_reg->s32_min_value == src_reg->s32_max_value
&& src_reg->u32_min_value == src_reg->u32_max_value)
src_is_const = true;
} else {
if (tnum_is_const(src_reg->var_off)
&& src_reg->smin_value == src_reg->smax_value
&& src_reg->umin_value == src_reg->umax_value)
src_is_const = true;
}
switch (BPF_OP(insn->code)) {
case BPF_ADD:
case BPF_SUB:
case BPF_AND:
case BPF_XOR:
case BPF_OR:
case BPF_MUL:
return true;
/* Shift operators range is only computable if shift dimension operand
* is a constant. Shifts greater than 31 or 63 are undefined. This
* includes shifts by a negative number.
*/
case BPF_LSH:
case BPF_RSH:
case BPF_ARSH:
return (src_is_const && src_reg->umax_value < insn_bitness);
default:
return false;
}
}
/* WARNING: This function does calculations on 64-bit values, but the actual
* execution may occur on 32-bit values. Therefore, things like bitshifts
* need extra checks in the 32-bit case.
@ -13867,53 +13927,11 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
struct bpf_reg_state *dst_reg,
struct bpf_reg_state src_reg)
{
struct bpf_reg_state *regs = cur_regs(env);
u8 opcode = BPF_OP(insn->code);
bool src_known;
s64 smin_val, smax_val;
u64 umin_val, umax_val;
s32 s32_min_val, s32_max_val;
u32 u32_min_val, u32_max_val;
u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
int ret;
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
umin_val = src_reg.umin_value;
umax_val = src_reg.umax_value;
s32_min_val = src_reg.s32_min_value;
s32_max_val = src_reg.s32_max_value;
u32_min_val = src_reg.u32_min_value;
u32_max_val = src_reg.u32_max_value;
if (alu32) {
src_known = tnum_subreg_is_const(src_reg.var_off);
if ((src_known &&
(s32_min_val != s32_max_val || u32_min_val != u32_max_val)) ||
s32_min_val > s32_max_val || u32_min_val > u32_max_val) {
/* Taint dst register if offset had invalid bounds
* derived from e.g. dead branches.
*/
__mark_reg_unknown(env, dst_reg);
return 0;
}
} else {
src_known = tnum_is_const(src_reg.var_off);
if ((src_known &&
(smin_val != smax_val || umin_val != umax_val)) ||
smin_val > smax_val || umin_val > umax_val) {
/* Taint dst register if offset had invalid bounds
* derived from e.g. dead branches.
*/
__mark_reg_unknown(env, dst_reg);
return 0;
}
}
if (!src_known &&
opcode != BPF_ADD && opcode != BPF_SUB && opcode != BPF_AND) {
if (!is_safe_to_compute_dst_reg_range(insn, &src_reg)) {
__mark_reg_unknown(env, dst_reg);
return 0;
}
@ -13970,46 +13988,24 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
scalar_min_max_xor(dst_reg, &src_reg);
break;
case BPF_LSH:
if (umax_val >= insn_bitness) {
/* Shifts greater than 31 or 63 are undefined.
* This includes shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
if (alu32)
scalar32_min_max_lsh(dst_reg, &src_reg);
else
scalar_min_max_lsh(dst_reg, &src_reg);
break;
case BPF_RSH:
if (umax_val >= insn_bitness) {
/* Shifts greater than 31 or 63 are undefined.
* This includes shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
if (alu32)
scalar32_min_max_rsh(dst_reg, &src_reg);
else
scalar_min_max_rsh(dst_reg, &src_reg);
break;
case BPF_ARSH:
if (umax_val >= insn_bitness) {
/* Shifts greater than 31 or 63 are undefined.
* This includes shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
if (alu32)
scalar32_min_max_arsh(dst_reg, &src_reg);
else
scalar_min_max_arsh(dst_reg, &src_reg);
break;
default:
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
@ -20029,6 +20025,10 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
goto next_insn;
}
/* Skip inlining the helper call if the JIT does it. */
if (bpf_jit_inlines_helper_call(insn->imm))
goto next_insn;
if (insn->imm == BPF_FUNC_get_route_realm)
prog->dst_needed = 1;
if (insn->imm == BPF_FUNC_get_prandom_u32)


@ -1631,6 +1631,17 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
}
}
static bool is_kprobe_multi(const struct bpf_prog *prog)
{
return prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI ||
prog->expected_attach_type == BPF_TRACE_KPROBE_SESSION;
}
static inline bool is_kprobe_session(const struct bpf_prog *prog)
{
return prog->expected_attach_type == BPF_TRACE_KPROBE_SESSION;
}
static const struct bpf_func_proto *
kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
@ -1646,13 +1657,13 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
return &bpf_override_return_proto;
#endif
case BPF_FUNC_get_func_ip:
if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI)
if (is_kprobe_multi(prog))
return &bpf_get_func_ip_proto_kprobe_multi;
if (prog->expected_attach_type == BPF_TRACE_UPROBE_MULTI)
return &bpf_get_func_ip_proto_uprobe_multi;
return &bpf_get_func_ip_proto_kprobe;
case BPF_FUNC_get_attach_cookie:
if (prog->expected_attach_type == BPF_TRACE_KPROBE_MULTI)
if (is_kprobe_multi(prog))
return &bpf_get_attach_cookie_proto_kmulti;
if (prog->expected_attach_type == BPF_TRACE_UPROBE_MULTI)
return &bpf_get_attach_cookie_proto_umulti;
@ -2585,6 +2596,12 @@ static int __init bpf_event_init(void)
fs_initcall(bpf_event_init);
#endif /* CONFIG_MODULES */
struct bpf_session_run_ctx {
struct bpf_run_ctx run_ctx;
bool is_return;
void *data;
};
#ifdef CONFIG_FPROBE
struct bpf_kprobe_multi_link {
struct bpf_link link;
@ -2598,7 +2615,7 @@ struct bpf_kprobe_multi_link {
};
struct bpf_kprobe_multi_run_ctx {
struct bpf_run_ctx run_ctx;
struct bpf_session_run_ctx session_ctx;
struct bpf_kprobe_multi_link *link;
unsigned long entry_ip;
};
@ -2777,7 +2794,8 @@ static u64 bpf_kprobe_multi_cookie(struct bpf_run_ctx *ctx)
if (WARN_ON_ONCE(!ctx))
return 0;
run_ctx = container_of(current->bpf_ctx, struct bpf_kprobe_multi_run_ctx, run_ctx);
run_ctx = container_of(current->bpf_ctx, struct bpf_kprobe_multi_run_ctx,
session_ctx.run_ctx);
link = run_ctx->link;
if (!link->cookies)
return 0;
@ -2794,15 +2812,21 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx)
{
struct bpf_kprobe_multi_run_ctx *run_ctx;
run_ctx = container_of(current->bpf_ctx, struct bpf_kprobe_multi_run_ctx, run_ctx);
run_ctx = container_of(current->bpf_ctx, struct bpf_kprobe_multi_run_ctx,
session_ctx.run_ctx);
return run_ctx->entry_ip;
}
static int
kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
unsigned long entry_ip, struct pt_regs *regs)
unsigned long entry_ip, struct pt_regs *regs,
bool is_return, void *data)
{
struct bpf_kprobe_multi_run_ctx run_ctx = {
.session_ctx = {
.is_return = is_return,
.data = data,
},
.link = link,
.entry_ip = entry_ip,
};
@ -2817,7 +2841,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
migrate_disable();
rcu_read_lock();
old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
err = bpf_prog_run(link->link.prog, regs);
bpf_reset_run_ctx(old_run_ctx);
rcu_read_unlock();
@ -2834,10 +2858,11 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip,
void *data)
{
struct bpf_kprobe_multi_link *link;
int err;
link = container_of(fp, struct bpf_kprobe_multi_link, fp);
kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs);
return 0;
err = kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs, false, data);
return is_kprobe_session(link->link.prog) ? err : 0;
}
static void
@ -2848,7 +2873,7 @@ kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip,
struct bpf_kprobe_multi_link *link;
link = container_of(fp, struct bpf_kprobe_multi_link, fp);
kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs);
kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs, true, data);
}
static int symbols_cmp_r(const void *a, const void *b, const void *priv)
@ -2981,7 +3006,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
if (sizeof(u64) != sizeof(void *))
return -EOPNOTSUPP;
if (prog->expected_attach_type != BPF_TRACE_KPROBE_MULTI)
if (!is_kprobe_multi(prog))
return -EINVAL;
flags = attr->link_create.kprobe_multi.flags;
@ -3062,10 +3087,12 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
if (err)
goto error;
if (flags & BPF_F_KPROBE_MULTI_RETURN)
link->fp.exit_handler = kprobe_multi_link_exit_handler;
else
if (!(flags & BPF_F_KPROBE_MULTI_RETURN))
link->fp.entry_handler = kprobe_multi_link_handler;
if ((flags & BPF_F_KPROBE_MULTI_RETURN) || is_kprobe_session(prog))
link->fp.exit_handler = kprobe_multi_link_exit_handler;
if (is_kprobe_session(prog))
link->fp.entry_data_size = sizeof(u64);
link->addrs = addrs;
link->cookies = cookies;
@ -3491,3 +3518,54 @@ static u64 bpf_uprobe_multi_entry_ip(struct bpf_run_ctx *ctx)
return 0;
}
#endif /* CONFIG_UPROBES */
#ifdef CONFIG_FPROBE
__bpf_kfunc_start_defs();
__bpf_kfunc bool bpf_session_is_return(void)
{
struct bpf_session_run_ctx *session_ctx;
session_ctx = container_of(current->bpf_ctx, struct bpf_session_run_ctx, run_ctx);
return session_ctx->is_return;
}
__bpf_kfunc __u64 *bpf_session_cookie(void)
{
struct bpf_session_run_ctx *session_ctx;
session_ctx = container_of(current->bpf_ctx, struct bpf_session_run_ctx, run_ctx);
return session_ctx->data;
}
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(kprobe_multi_kfunc_set_ids)
BTF_ID_FLAGS(func, bpf_session_is_return)
BTF_ID_FLAGS(func, bpf_session_cookie)
BTF_KFUNCS_END(kprobe_multi_kfunc_set_ids)
static int bpf_kprobe_multi_filter(const struct bpf_prog *prog, u32 kfunc_id)
{
if (!btf_id_set8_contains(&kprobe_multi_kfunc_set_ids, kfunc_id))
return 0;
if (!is_kprobe_session(prog))
return -EACCES;
return 0;
}
static const struct btf_kfunc_id_set bpf_kprobe_multi_kfunc_set = {
.owner = THIS_MODULE,
.set = &kprobe_multi_kfunc_set_ids,
.filter = bpf_kprobe_multi_filter,
};
static int __init bpf_kprobe_multi_kfuncs_init(void)
{
return register_btf_kfunc_id_set(BPF_PROG_TYPE_KPROBE, &bpf_kprobe_multi_kfunc_set);
}
late_initcall(bpf_kprobe_multi_kfuncs_init);
#endif


@ -107,6 +107,9 @@ static int bpf_tcp_ca_btf_struct_access(struct bpf_verifier_log *log,
case offsetof(struct tcp_sock, snd_cwnd_cnt):
end = offsetofend(struct tcp_sock, snd_cwnd_cnt);
break;
case offsetof(struct tcp_sock, snd_cwnd_stamp):
end = offsetofend(struct tcp_sock, snd_cwnd_stamp);
break;
case offsetof(struct tcp_sock, snd_ssthresh):
end = offsetofend(struct tcp_sock, snd_ssthresh);
break;
@ -307,7 +310,8 @@ static u32 bpf_tcp_ca_min_tso_segs(struct sock *sk)
return 0;
}
static void bpf_tcp_ca_cong_control(struct sock *sk, const struct rate_sample *rs)
static void bpf_tcp_ca_cong_control(struct sock *sk, u32 ack, int flag,
const struct rate_sample *rs)
{
}


@ -1024,7 +1024,7 @@ static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
bbr_update_gains(sk);
}
__bpf_kfunc static void bbr_main(struct sock *sk, const struct rate_sample *rs)
__bpf_kfunc static void bbr_main(struct sock *sk, u32 ack, int flag, const struct rate_sample *rs)
{
struct bbr *bbr = inet_csk_ca(sk);
u32 bw;


@ -3542,7 +3542,7 @@ static void tcp_cong_control(struct sock *sk, u32 ack, u32 acked_sacked,
const struct inet_connection_sock *icsk = inet_csk(sk);
if (icsk->icsk_ca_ops->cong_control) {
icsk->icsk_ca_ops->cong_control(sk, rs);
icsk->icsk_ca_ops->cong_control(sk, ack, flag, rs);
return;
}


@ -337,7 +337,7 @@ $(obj)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL)
ifeq ($(VMLINUX_H),)
ifeq ($(VMLINUX_BTF),)
$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)",\
build the kernel or set VMLINUX_BTF or VMLINUX_H variable)
build the kernel or set VMLINUX_BTF like "VMLINUX_BTF=/sys/kernel/btf/vmlinux" or VMLINUX_H variable)
endif
$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
else


@ -3,6 +3,8 @@
pahole-ver := $(CONFIG_PAHOLE_VERSION)
pahole-flags-y :=
ifeq ($(call test-le, $(pahole-ver), 125),y)
# pahole 1.18 through 1.21 can't handle zero-sized per-CPU vars
ifeq ($(call test-le, $(pahole-ver), 121),y)
pahole-flags-$(call test-ge, $(pahole-ver), 118) += --skip_encoding_btf_vars
@ -12,8 +14,17 @@ pahole-flags-$(call test-ge, $(pahole-ver), 121) += --btf_gen_floats
pahole-flags-$(call test-ge, $(pahole-ver), 122) += -j
ifeq ($(pahole-ver), 125)
pahole-flags-y += --skip_encoding_btf_inconsistent_proto --btf_gen_optimized
endif
else
# Switch to using --btf_features for v1.26 and later.
pahole-flags-$(call test-ge, $(pahole-ver), 126) = -j --btf_features=encode_force,var,float,enum64,decl_tag,type_tag,optimized_func,consistent_func
endif
pahole-flags-$(CONFIG_PAHOLE_HAS_LANG_EXCLUDE) += --lang_exclude=rust
pahole-flags-$(call test-ge, $(pahole-ver), 125) += --skip_encoding_btf_inconsistent_proto --btf_gen_optimized
export PAHOLE_FLAGS := $(pahole-flags-y)


@ -147,7 +147,7 @@ ifeq ($(feature-llvm),1)
# If LLVM is available, use it for JIT disassembly
CFLAGS += -DHAVE_LLVM_SUPPORT
LLVM_CONFIG_LIB_COMPONENTS := mcdisassembler all-targets
CFLAGS += $(shell $(LLVM_CONFIG) --cflags --libs $(LLVM_CONFIG_LIB_COMPONENTS))
CFLAGS += $(shell $(LLVM_CONFIG) --cflags)
LIBS += $(shell $(LLVM_CONFIG) --libs $(LLVM_CONFIG_LIB_COMPONENTS))
ifeq ($(shell $(LLVM_CONFIG) --shared-mode),static)
LIBS += $(shell $(LLVM_CONFIG) --system-libs $(LLVM_CONFIG_LIB_COMPONENTS))


@ -1115,6 +1115,7 @@ enum bpf_attach_type {
BPF_CGROUP_UNIX_GETSOCKNAME,
BPF_NETKIT_PRIMARY,
BPF_NETKIT_PEER,
BPF_TRACE_KPROBE_SESSION,
__MAX_BPF_ATTACH_TYPE
};

File diff suppressed because it is too large.


@ -766,6 +766,7 @@ int bpf_link_create(int prog_fd, int target_fd,
return libbpf_err(-EINVAL);
break;
case BPF_TRACE_KPROBE_MULTI:
case BPF_TRACE_KPROBE_SESSION:
attr.link_create.kprobe_multi.flags = OPTS_GET(opts, kprobe_multi.flags, 0);
attr.link_create.kprobe_multi.cnt = OPTS_GET(opts, kprobe_multi.cnt, 0);
attr.link_create.kprobe_multi.syms = ptr_to_u64(OPTS_GET(opts, kprobe_multi.syms, 0));


@ -104,6 +104,7 @@ enum bpf_enum_value_kind {
case 2: val = *(const unsigned short *)p; break; \
case 4: val = *(const unsigned int *)p; break; \
case 8: val = *(const unsigned long long *)p; break; \
default: val = 0; break; \
} \
val <<= __CORE_RELO(s, field, LSHIFT_U64); \
if (__CORE_RELO(s, field, SIGNED)) \


@ -186,10 +186,21 @@ enum libbpf_tristate {
#define __kptr __attribute__((btf_type_tag("kptr")))
#define __percpu_kptr __attribute__((btf_type_tag("percpu_kptr")))
#define bpf_ksym_exists(sym) ({ \
_Static_assert(!__builtin_constant_p(!!sym), #sym " should be marked as __weak"); \
!!sym; \
#if defined (__clang__)
#define bpf_ksym_exists(sym) ({ \
_Static_assert(!__builtin_constant_p(!!sym), \
#sym " should be marked as __weak"); \
!!sym; \
})
#elif __GNUC__ > 8
#define bpf_ksym_exists(sym) ({ \
_Static_assert(__builtin_has_attribute (*sym, __weak__), \
#sym " should be marked as __weak"); \
!!sym; \
})
#else
#define bpf_ksym_exists(sym) !!sym
#endif
#define __arg_ctx __attribute__((btf_decl_tag("arg:ctx")))
#define __arg_nonnull __attribute((btf_decl_tag("arg:nonnull")))


@ -633,18 +633,18 @@ struct pt_regs;
#endif
#define ___bpf_ctx_cast0() ctx
#define ___bpf_ctx_cast1(x) ___bpf_ctx_cast0(), (void *)ctx[0]
#define ___bpf_ctx_cast2(x, args...) ___bpf_ctx_cast1(args), (void *)ctx[1]
#define ___bpf_ctx_cast3(x, args...) ___bpf_ctx_cast2(args), (void *)ctx[2]
#define ___bpf_ctx_cast4(x, args...) ___bpf_ctx_cast3(args), (void *)ctx[3]
#define ___bpf_ctx_cast5(x, args...) ___bpf_ctx_cast4(args), (void *)ctx[4]
#define ___bpf_ctx_cast6(x, args...) ___bpf_ctx_cast5(args), (void *)ctx[5]
#define ___bpf_ctx_cast7(x, args...) ___bpf_ctx_cast6(args), (void *)ctx[6]
#define ___bpf_ctx_cast8(x, args...) ___bpf_ctx_cast7(args), (void *)ctx[7]
#define ___bpf_ctx_cast9(x, args...) ___bpf_ctx_cast8(args), (void *)ctx[8]
#define ___bpf_ctx_cast10(x, args...) ___bpf_ctx_cast9(args), (void *)ctx[9]
#define ___bpf_ctx_cast11(x, args...) ___bpf_ctx_cast10(args), (void *)ctx[10]
#define ___bpf_ctx_cast12(x, args...) ___bpf_ctx_cast11(args), (void *)ctx[11]
#define ___bpf_ctx_cast1(x) ___bpf_ctx_cast0(), ctx[0]
#define ___bpf_ctx_cast2(x, args...) ___bpf_ctx_cast1(args), ctx[1]
#define ___bpf_ctx_cast3(x, args...) ___bpf_ctx_cast2(args), ctx[2]
#define ___bpf_ctx_cast4(x, args...) ___bpf_ctx_cast3(args), ctx[3]
#define ___bpf_ctx_cast5(x, args...) ___bpf_ctx_cast4(args), ctx[4]
#define ___bpf_ctx_cast6(x, args...) ___bpf_ctx_cast5(args), ctx[5]
#define ___bpf_ctx_cast7(x, args...) ___bpf_ctx_cast6(args), ctx[6]
#define ___bpf_ctx_cast8(x, args...) ___bpf_ctx_cast7(args), ctx[7]
#define ___bpf_ctx_cast9(x, args...) ___bpf_ctx_cast8(args), ctx[8]
#define ___bpf_ctx_cast10(x, args...) ___bpf_ctx_cast9(args), ctx[9]
#define ___bpf_ctx_cast11(x, args...) ___bpf_ctx_cast10(args), ctx[10]
#define ___bpf_ctx_cast12(x, args...) ___bpf_ctx_cast11(args), ctx[11]
#define ___bpf_ctx_cast(args...) ___bpf_apply(___bpf_ctx_cast, ___bpf_narg(args))(args)
/*
@ -786,14 +786,14 @@ ____##name(unsigned long long *ctx ___bpf_ctx_decl(args))
struct pt_regs;
#define ___bpf_kprobe_args0() ctx
#define ___bpf_kprobe_args1(x) ___bpf_kprobe_args0(), (void *)PT_REGS_PARM1(ctx)
#define ___bpf_kprobe_args2(x, args...) ___bpf_kprobe_args1(args), (void *)PT_REGS_PARM2(ctx)
#define ___bpf_kprobe_args3(x, args...) ___bpf_kprobe_args2(args), (void *)PT_REGS_PARM3(ctx)
#define ___bpf_kprobe_args4(x, args...) ___bpf_kprobe_args3(args), (void *)PT_REGS_PARM4(ctx)
#define ___bpf_kprobe_args5(x, args...) ___bpf_kprobe_args4(args), (void *)PT_REGS_PARM5(ctx)
#define ___bpf_kprobe_args6(x, args...) ___bpf_kprobe_args5(args), (void *)PT_REGS_PARM6(ctx)
#define ___bpf_kprobe_args7(x, args...) ___bpf_kprobe_args6(args), (void *)PT_REGS_PARM7(ctx)
#define ___bpf_kprobe_args8(x, args...) ___bpf_kprobe_args7(args), (void *)PT_REGS_PARM8(ctx)
#define ___bpf_kprobe_args1(x) ___bpf_kprobe_args0(), (unsigned long long)PT_REGS_PARM1(ctx)
#define ___bpf_kprobe_args2(x, args...) ___bpf_kprobe_args1(args), (unsigned long long)PT_REGS_PARM2(ctx)
#define ___bpf_kprobe_args3(x, args...) ___bpf_kprobe_args2(args), (unsigned long long)PT_REGS_PARM3(ctx)
#define ___bpf_kprobe_args4(x, args...) ___bpf_kprobe_args3(args), (unsigned long long)PT_REGS_PARM4(ctx)
#define ___bpf_kprobe_args5(x, args...) ___bpf_kprobe_args4(args), (unsigned long long)PT_REGS_PARM5(ctx)
#define ___bpf_kprobe_args6(x, args...) ___bpf_kprobe_args5(args), (unsigned long long)PT_REGS_PARM6(ctx)
#define ___bpf_kprobe_args7(x, args...) ___bpf_kprobe_args6(args), (unsigned long long)PT_REGS_PARM7(ctx)
#define ___bpf_kprobe_args8(x, args...) ___bpf_kprobe_args7(args), (unsigned long long)PT_REGS_PARM8(ctx)
#define ___bpf_kprobe_args(args...) ___bpf_apply(___bpf_kprobe_args, ___bpf_narg(args))(args)
/*
@ -821,7 +821,7 @@ static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args)
#define ___bpf_kretprobe_args0() ctx
#define ___bpf_kretprobe_args1(x) ___bpf_kretprobe_args0(), (void *)PT_REGS_RC(ctx)
#define ___bpf_kretprobe_args1(x) ___bpf_kretprobe_args0(), (unsigned long long)PT_REGS_RC(ctx)
#define ___bpf_kretprobe_args(args...) ___bpf_apply(___bpf_kretprobe_args, ___bpf_narg(args))(args)
/*
@ -845,24 +845,24 @@ static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
/* If kernel has CONFIG_ARCH_HAS_SYSCALL_WRAPPER, read pt_regs directly */
#define ___bpf_syscall_args0() ctx
#define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (void *)PT_REGS_PARM1_SYSCALL(regs)
#define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (void *)PT_REGS_PARM2_SYSCALL(regs)
#define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (void *)PT_REGS_PARM3_SYSCALL(regs)
#define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (void *)PT_REGS_PARM4_SYSCALL(regs)
#define ___bpf_syscall_args5(x, args...) ___bpf_syscall_args4(args), (void *)PT_REGS_PARM5_SYSCALL(regs)
#define ___bpf_syscall_args6(x, args...) ___bpf_syscall_args5(args), (void *)PT_REGS_PARM6_SYSCALL(regs)
#define ___bpf_syscall_args7(x, args...) ___bpf_syscall_args6(args), (void *)PT_REGS_PARM7_SYSCALL(regs)
#define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (unsigned long long)PT_REGS_PARM1_SYSCALL(regs)
#define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (unsigned long long)PT_REGS_PARM2_SYSCALL(regs)
#define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (unsigned long long)PT_REGS_PARM3_SYSCALL(regs)
#define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (unsigned long long)PT_REGS_PARM4_SYSCALL(regs)
#define ___bpf_syscall_args5(x, args...) ___bpf_syscall_args4(args), (unsigned long long)PT_REGS_PARM5_SYSCALL(regs)
#define ___bpf_syscall_args6(x, args...) ___bpf_syscall_args5(args), (unsigned long long)PT_REGS_PARM6_SYSCALL(regs)
#define ___bpf_syscall_args7(x, args...) ___bpf_syscall_args6(args), (unsigned long long)PT_REGS_PARM7_SYSCALL(regs)
#define ___bpf_syscall_args(args...) ___bpf_apply(___bpf_syscall_args, ___bpf_narg(args))(args)
/* If kernel doesn't have CONFIG_ARCH_HAS_SYSCALL_WRAPPER, we have to BPF_CORE_READ from pt_regs */
#define ___bpf_syswrap_args0() ctx
#define ___bpf_syswrap_args1(x) ___bpf_syswrap_args0(), (void *)PT_REGS_PARM1_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args2(x, args...) ___bpf_syswrap_args1(args), (void *)PT_REGS_PARM2_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args3(x, args...) ___bpf_syswrap_args2(args), (void *)PT_REGS_PARM3_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args4(x, args...) ___bpf_syswrap_args3(args), (void *)PT_REGS_PARM4_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args5(x, args...) ___bpf_syswrap_args4(args), (void *)PT_REGS_PARM5_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args6(x, args...) ___bpf_syswrap_args5(args), (void *)PT_REGS_PARM6_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args7(x, args...) ___bpf_syswrap_args6(args), (void *)PT_REGS_PARM7_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args1(x) ___bpf_syswrap_args0(), (unsigned long long)PT_REGS_PARM1_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args2(x, args...) ___bpf_syswrap_args1(args), (unsigned long long)PT_REGS_PARM2_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args3(x, args...) ___bpf_syswrap_args2(args), (unsigned long long)PT_REGS_PARM3_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args4(x, args...) ___bpf_syswrap_args3(args), (unsigned long long)PT_REGS_PARM4_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args5(x, args...) ___bpf_syswrap_args4(args), (unsigned long long)PT_REGS_PARM5_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args6(x, args...) ___bpf_syswrap_args5(args), (unsigned long long)PT_REGS_PARM6_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args7(x, args...) ___bpf_syswrap_args6(args), (unsigned long long)PT_REGS_PARM7_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args(args...) ___bpf_apply(___bpf_syswrap_args, ___bpf_narg(args))(args)
/*

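These wrapper-argument macros back BPF_KRETPROBE and BPF_KSYSCALL; switching the casts from (void *) to (unsigned long long) only changes how arguments are materialized for the program, not how programs are written. A minimal usage sketch, assuming a typical selftest-style setup (syscall name, argument list, and buffer size are illustrative, not taken from this series):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("ksyscall/openat")
int BPF_KSYSCALL(handle_openat, int dfd, const char *pathname, int flags)
{
        char buf[64];

        /* pathname is a user-space pointer; copy it before use */
        bpf_probe_read_user_str(buf, sizeof(buf), pathname);
        bpf_printk("openat dfd=%d flags=%d", dfd, flags);
        return 0;
}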

@ -132,6 +132,7 @@ static const char * const attach_type_name[] = {
[BPF_TRACE_UPROBE_MULTI] = "trace_uprobe_multi",
[BPF_NETKIT_PRIMARY] = "netkit_primary",
[BPF_NETKIT_PEER] = "netkit_peer",
[BPF_TRACE_KPROBE_SESSION] = "trace_kprobe_session",
};
static const char * const link_type_name[] = {
@ -1127,6 +1128,7 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
const struct btf_type *mtype, *kern_mtype;
__u32 mtype_id, kern_mtype_id;
void *mdata, *kern_mdata;
struct bpf_program *prog;
__s64 msize, kern_msize;
__u32 moff, kern_moff;
__u32 kern_member_idx;
@ -1144,18 +1146,28 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
kern_member = find_member_by_name(kern_btf, kern_type, mname);
if (!kern_member) {
/* Skip all zeros or null fields if they are not
* presented in the kernel BTF.
*/
if (libbpf_is_mem_zeroed(mdata, msize)) {
pr_info("struct_ops %s: member %s not found in kernel, skipping it as it's set to zero\n",
if (!libbpf_is_mem_zeroed(mdata, msize)) {
pr_warn("struct_ops init_kern %s: Cannot find member %s in kernel BTF\n",
map->name, mname);
continue;
return -ENOTSUP;
}
pr_warn("struct_ops init_kern %s: Cannot find member %s in kernel BTF\n",
if (st_ops->progs[i]) {
/* If we had declaratively set struct_ops callback, we need to
* force its autoload to false, because it doesn't have
* a chance of succeeding from POV of the current struct_ops map.
* If this program is still referenced somewhere else, though,
* then bpf_object_adjust_struct_ops_autoload() will update its
* autoload accordingly.
*/
st_ops->progs[i]->autoload = false;
st_ops->progs[i] = NULL;
}
/* Skip all-zero/NULL fields if they are not present in the kernel BTF */
pr_info("struct_ops %s: member %s not found in kernel, skipping it as it's set to zero\n",
map->name, mname);
return -ENOTSUP;
continue;
}
kern_member_idx = kern_member - btf_members(kern_type);
@ -1181,13 +1193,19 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
}
if (btf_is_ptr(mtype)) {
struct bpf_program *prog;
prog = *(void **)mdata;
/* just like for !kern_member case above, reset declaratively
* set (at compile time) program's autoload to false,
* if user replaced it with another program or NULL
*/
if (st_ops->progs[i] && st_ops->progs[i] != prog)
st_ops->progs[i]->autoload = false;
/* Update the value from the shadow type */
prog = *(void **)mdata;
st_ops->progs[i] = prog;
if (!prog)
continue;
if (!is_valid_st_ops_program(obj, prog)) {
pr_warn("struct_ops init_kern %s: member %s is not a struct_ops program\n",
map->name, mname);
@ -7354,7 +7372,11 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
__u32 log_level = prog->log_level;
int ret, err;
if (prog->type == BPF_PROG_TYPE_UNSPEC) {
/* Be more helpful by rejecting programs that can't be validated early
* with more meaningful and actionable error message.
*/
switch (prog->type) {
case BPF_PROG_TYPE_UNSPEC:
/*
* The program type must be set. Most likely we couldn't find a proper
* section definition at load time, and thus we didn't infer the type.
@ -7362,6 +7384,15 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
pr_warn("prog '%s': missing BPF prog type, check ELF section name '%s'\n",
prog->name, prog->sec_name);
return -EINVAL;
case BPF_PROG_TYPE_STRUCT_OPS:
if (prog->attach_btf_id == 0) {
pr_warn("prog '%s': SEC(\"struct_ops\") program isn't referenced anywhere, did you forget to use it?\n",
prog->name);
return -EINVAL;
}
break;
default:
break;
}
if (!insns || !insns_cnt)
@ -9272,6 +9303,7 @@ static int attach_tp(const struct bpf_program *prog, long cookie, struct bpf_lin
static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_trace(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_kprobe_session(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
@ -9288,6 +9320,7 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("uretprobe.s+", KPROBE, 0, SEC_SLEEPABLE, attach_uprobe),
SEC_DEF("kprobe.multi+", KPROBE, BPF_TRACE_KPROBE_MULTI, SEC_NONE, attach_kprobe_multi),
SEC_DEF("kretprobe.multi+", KPROBE, BPF_TRACE_KPROBE_MULTI, SEC_NONE, attach_kprobe_multi),
SEC_DEF("kprobe.session+", KPROBE, BPF_TRACE_KPROBE_SESSION, SEC_NONE, attach_kprobe_session),
SEC_DEF("uprobe.multi+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi),
SEC_DEF("uretprobe.multi+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi),
SEC_DEF("uprobe.multi.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi),
@ -9858,16 +9891,28 @@ static int find_kernel_btf_id(struct bpf_object *obj, const char *attach_name,
enum bpf_attach_type attach_type,
int *btf_obj_fd, int *btf_type_id)
{
int ret, i;
int ret, i, mod_len;
const char *fn_name, *mod_name = NULL;
ret = find_attach_btf_id(obj->btf_vmlinux, attach_name, attach_type);
if (ret > 0) {
*btf_obj_fd = 0; /* vmlinux BTF */
*btf_type_id = ret;
return 0;
fn_name = strchr(attach_name, ':');
if (fn_name) {
mod_name = attach_name;
mod_len = fn_name - mod_name;
fn_name++;
}
if (!mod_name || strncmp(mod_name, "vmlinux", mod_len) == 0) {
ret = find_attach_btf_id(obj->btf_vmlinux,
mod_name ? fn_name : attach_name,
attach_type);
if (ret > 0) {
*btf_obj_fd = 0; /* vmlinux BTF */
*btf_type_id = ret;
return 0;
}
if (ret != -ENOENT)
return ret;
}
if (ret != -ENOENT)
return ret;
ret = load_module_btfs(obj);
if (ret)
@ -9876,7 +9921,12 @@ static int find_kernel_btf_id(struct bpf_object *obj, const char *attach_name,
for (i = 0; i < obj->btf_module_cnt; i++) {
const struct module_btf *mod = &obj->btf_modules[i];
ret = find_attach_btf_id(mod->btf, attach_name, attach_type);
if (mod_name && strncmp(mod->name, mod_name, mod_len) != 0)
continue;
ret = find_attach_btf_id(mod->btf,
mod_name ? fn_name : attach_name,
attach_type);
if (ret > 0) {
*btf_obj_fd = mod->fd;
*btf_type_id = ret;
@ -11380,13 +11430,14 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
struct kprobe_multi_resolve res = {
.pattern = pattern,
};
enum bpf_attach_type attach_type;
struct bpf_link *link = NULL;
char errmsg[STRERR_BUFSIZE];
const unsigned long *addrs;
int err, link_fd, prog_fd;
bool retprobe, session;
const __u64 *cookies;
const char **syms;
bool retprobe;
size_t cnt;
if (!OPTS_VALID(opts, bpf_kprobe_multi_opts))
@ -11425,6 +11476,12 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
}
retprobe = OPTS_GET(opts, retprobe, false);
session = OPTS_GET(opts, session, false);
if (retprobe && session)
return libbpf_err_ptr(-EINVAL);
attach_type = session ? BPF_TRACE_KPROBE_SESSION : BPF_TRACE_KPROBE_MULTI;
lopts.kprobe_multi.syms = syms;
lopts.kprobe_multi.addrs = addrs;
@ -11439,7 +11496,7 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
}
link->detach = &bpf_link__detach_fd;
link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, &lopts);
link_fd = bpf_link_create(prog_fd, 0, attach_type, &lopts);
if (link_fd < 0) {
err = -errno;
pr_warn("prog '%s': failed to attach: %s\n",
@ -11536,7 +11593,7 @@ static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, stru
n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
if (n < 1) {
pr_warn("kprobe multi pattern is invalid: %s\n", pattern);
pr_warn("kprobe multi pattern is invalid: %s\n", spec);
return -EINVAL;
}
@ -11545,6 +11602,32 @@ static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, stru
return libbpf_get_error(*link);
}
static int attach_kprobe_session(const struct bpf_program *prog, long cookie,
struct bpf_link **link)
{
LIBBPF_OPTS(bpf_kprobe_multi_opts, opts, .session = true);
const char *spec;
char *pattern;
int n;
*link = NULL;
/* no auto-attach for SEC("kprobe.session") */
if (strcmp(prog->sec_name, "kprobe.session") == 0)
return 0;
spec = prog->sec_name + sizeof("kprobe.session/") - 1;
n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
if (n < 1) {
pr_warn("kprobe session pattern is invalid: %s\n", spec);
return -EINVAL;
}
*link = bpf_program__attach_kprobe_multi_opts(prog, pattern, &opts);
free(pattern);
return *link ? 0 : -errno;
}
static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
char *probe_type = NULL, *binary_path = NULL, *func_name = NULL;


@ -539,10 +539,12 @@ struct bpf_kprobe_multi_opts {
size_t cnt;
/* create return kprobes */
bool retprobe;
/* create session kprobes */
bool session;
size_t :0;
};
#define bpf_kprobe_multi_opts__last_field retprobe
#define bpf_kprobe_multi_opts__last_field session
LIBBPF_API struct bpf_link *
bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,

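Putting the pieces together (the SEC("kprobe.session") section definition, the new attach type, and the bpf_session_*() kfuncs declared further down in this series), a session program might look like the following sketch; the traced function, the cookie use, and the entry return-value convention are illustrative assumptions rather than text from this series:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern bool bpf_session_is_return(void) __ksym __weak;
extern long *bpf_session_cookie(void) __ksym __weak;

char _license[] SEC("license") = "GPL";

SEC("kprobe.session/do_unlinkat")
int BPF_KPROBE(handle_unlinkat)
{
        long *cookie = bpf_session_cookie();

        if (!bpf_session_is_return()) {
                /* entry: stash a timestamp for the return invocation */
                if (cookie)
                        *cookie = bpf_ktime_get_ns();
                return 0; /* a non-zero return would skip the return probe */
        }

        /* return: the cookie still holds the value written on entry */
        if (cookie)
                bpf_printk("do_unlinkat latency: %llu ns",
                           bpf_ktime_get_ns() - *cookie);
        return 0;
}

From user space the same link can be requested programmatically with bpf_program__attach_kprobe_multi_opts() and opts.session = true, which libbpf maps to BPF_TRACE_KPROBE_SESSION as shown above (combining session with retprobe is rejected).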

@ -301,7 +301,7 @@ int ring_buffer__consume_n(struct ring_buffer *rb, size_t n)
if (n == 0)
break;
}
return res;
return res > INT_MAX ? INT_MAX : res;
}
/* Consume available ring buffer(s) data without event polling.
@ -405,7 +405,7 @@ int ring__map_fd(const struct ring *r)
int ring__consume_n(struct ring *r, size_t n)
{
int res;
int64_t res;
res = ringbuf_process_ring(r, n);
if (res < 0)


@ -2,6 +2,7 @@
#undef _GNU_SOURCE
#include <string.h>
#include <stdio.h>
#include <errno.h>
#include "str_error.h"
/* make sure libbpf doesn't use kernel-only integer typedefs */
@ -15,7 +16,18 @@
char *libbpf_strerror_r(int err, char *dst, int len)
{
int ret = strerror_r(err < 0 ? -err : err, dst, len);
if (ret)
snprintf(dst, len, "ERROR: strerror_r(%d)=%d", err, ret);
/* on glibc <2.13, ret == -1 and errno is set, if strerror_r() can't
* handle the error, on glibc >=2.13 *positive* (errno-like) error
* code is returned directly
*/
if (ret == -1)
ret = errno;
if (ret) {
if (ret == EINVAL)
/* strerror_r() doesn't recognize this specific error */
snprintf(dst, len, "unknown error (%d)", err < 0 ? err : -err);
else
snprintf(dst, len, "ERROR: strerror_r(%d)=%d", err, ret);
}
return dst;
}


@ -214,18 +214,18 @@ long bpf_usdt_cookie(struct pt_regs *ctx)
/* we rely on ___bpf_apply() and ___bpf_narg() macros already defined in bpf_tracing.h */
#define ___bpf_usdt_args0() ctx
#define ___bpf_usdt_args1(x) ___bpf_usdt_args0(), ({ long _x; bpf_usdt_arg(ctx, 0, &_x); (void *)_x; })
#define ___bpf_usdt_args2(x, args...) ___bpf_usdt_args1(args), ({ long _x; bpf_usdt_arg(ctx, 1, &_x); (void *)_x; })
#define ___bpf_usdt_args3(x, args...) ___bpf_usdt_args2(args), ({ long _x; bpf_usdt_arg(ctx, 2, &_x); (void *)_x; })
#define ___bpf_usdt_args4(x, args...) ___bpf_usdt_args3(args), ({ long _x; bpf_usdt_arg(ctx, 3, &_x); (void *)_x; })
#define ___bpf_usdt_args5(x, args...) ___bpf_usdt_args4(args), ({ long _x; bpf_usdt_arg(ctx, 4, &_x); (void *)_x; })
#define ___bpf_usdt_args6(x, args...) ___bpf_usdt_args5(args), ({ long _x; bpf_usdt_arg(ctx, 5, &_x); (void *)_x; })
#define ___bpf_usdt_args7(x, args...) ___bpf_usdt_args6(args), ({ long _x; bpf_usdt_arg(ctx, 6, &_x); (void *)_x; })
#define ___bpf_usdt_args8(x, args...) ___bpf_usdt_args7(args), ({ long _x; bpf_usdt_arg(ctx, 7, &_x); (void *)_x; })
#define ___bpf_usdt_args9(x, args...) ___bpf_usdt_args8(args), ({ long _x; bpf_usdt_arg(ctx, 8, &_x); (void *)_x; })
#define ___bpf_usdt_args10(x, args...) ___bpf_usdt_args9(args), ({ long _x; bpf_usdt_arg(ctx, 9, &_x); (void *)_x; })
#define ___bpf_usdt_args11(x, args...) ___bpf_usdt_args10(args), ({ long _x; bpf_usdt_arg(ctx, 10, &_x); (void *)_x; })
#define ___bpf_usdt_args12(x, args...) ___bpf_usdt_args11(args), ({ long _x; bpf_usdt_arg(ctx, 11, &_x); (void *)_x; })
#define ___bpf_usdt_args1(x) ___bpf_usdt_args0(), ({ long _x; bpf_usdt_arg(ctx, 0, &_x); _x; })
#define ___bpf_usdt_args2(x, args...) ___bpf_usdt_args1(args), ({ long _x; bpf_usdt_arg(ctx, 1, &_x); _x; })
#define ___bpf_usdt_args3(x, args...) ___bpf_usdt_args2(args), ({ long _x; bpf_usdt_arg(ctx, 2, &_x); _x; })
#define ___bpf_usdt_args4(x, args...) ___bpf_usdt_args3(args), ({ long _x; bpf_usdt_arg(ctx, 3, &_x); _x; })
#define ___bpf_usdt_args5(x, args...) ___bpf_usdt_args4(args), ({ long _x; bpf_usdt_arg(ctx, 4, &_x); _x; })
#define ___bpf_usdt_args6(x, args...) ___bpf_usdt_args5(args), ({ long _x; bpf_usdt_arg(ctx, 5, &_x); _x; })
#define ___bpf_usdt_args7(x, args...) ___bpf_usdt_args6(args), ({ long _x; bpf_usdt_arg(ctx, 6, &_x); _x; })
#define ___bpf_usdt_args8(x, args...) ___bpf_usdt_args7(args), ({ long _x; bpf_usdt_arg(ctx, 7, &_x); _x; })
#define ___bpf_usdt_args9(x, args...) ___bpf_usdt_args8(args), ({ long _x; bpf_usdt_arg(ctx, 8, &_x); _x; })
#define ___bpf_usdt_args10(x, args...) ___bpf_usdt_args9(args), ({ long _x; bpf_usdt_arg(ctx, 9, &_x); _x; })
#define ___bpf_usdt_args11(x, args...) ___bpf_usdt_args10(args), ({ long _x; bpf_usdt_arg(ctx, 10, &_x); _x; })
#define ___bpf_usdt_args12(x, args...) ___bpf_usdt_args11(args), ({ long _x; bpf_usdt_arg(ctx, 11, &_x); _x; })
#define ___bpf_usdt_args(args...) ___bpf_apply(___bpf_usdt_args, ___bpf_narg(args))(args)
/*

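With the (void *) casts dropped from the USDT argument macros above, arguments now arrive as plain integers. A hedged usage sketch (provider, probe, and argument names are illustrative):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/usdt.bpf.h>

char _license[] SEC("license") = "GPL";

SEC("usdt")
int BPF_USDT(handle_usdt, long arg0, long arg1)
{
        /* arguments can be used directly, no pointer round-trip needed */
        bpf_printk("usdt args: %ld %ld", arg0, arg1);
        return 0;
}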

@ -17,7 +17,6 @@ test_dev_cgroup
test_verifier_log
feature
test_sock
test_sock_addr
urandom_read
test_sockmap
test_lirc_mode2_user


@ -10,4 +10,3 @@ fill_link_info/kprobe_multi_link_info # bpf_program__attach_kprobe_mu
fill_link_info/kretprobe_multi_link_info # bpf_program__attach_kprobe_multi_opts unexpected error: -95
fill_link_info/kprobe_multi_invalid_ubuff # bpf_program__attach_kprobe_multi_opts unexpected error: -95
missed/kprobe_recursion # missed_kprobe_recursion__attach unexpected error: -95 (errno 95)
arena_atomics


@ -53,6 +53,7 @@ progs/syscall.c-CFLAGS := -fno-strict-aliasing
progs/test_pkt_md_access.c-CFLAGS := -fno-strict-aliasing
progs/test_sk_lookup.c-CFLAGS := -fno-strict-aliasing
progs/timer_crash.c-CFLAGS := -fno-strict-aliasing
progs/test_global_func9.c-CFLAGS := -fno-strict-aliasing
ifneq ($(LLVM),)
# Silence some warnings when compiled with clang
@ -81,11 +82,24 @@ TEST_INST_SUBDIRS += bpf_gcc
# The following tests contain C code that, although technically legal,
# triggers GCC warnings that cannot be disabled: declaration of
# anonymous struct types in function parameter lists.
progs/btf_dump_test_case_bitfields.c-CFLAGS := -Wno-error
progs/btf_dump_test_case_namespacing.c-CFLAGS := -Wno-error
progs/btf_dump_test_case_packing.c-CFLAGS := -Wno-error
progs/btf_dump_test_case_padding.c-CFLAGS := -Wno-error
progs/btf_dump_test_case_syntax.c-CFLAGS := -Wno-error
progs/btf_dump_test_case_bitfields.c-bpf_gcc-CFLAGS := -Wno-error
progs/btf_dump_test_case_namespacing.c-bpf_gcc-CFLAGS := -Wno-error
progs/btf_dump_test_case_packing.c-bpf_gcc-CFLAGS := -Wno-error
progs/btf_dump_test_case_padding.c-bpf_gcc-CFLAGS := -Wno-error
progs/btf_dump_test_case_syntax.c-bpf_gcc-CFLAGS := -Wno-error
# The following tests do type-punning, via the __imm_insn macro, from
# `struct bpf_insn' to long and then uses the value. This triggers an
# "is used uninitialized" warning in GCC due to strict-aliasing
# rules.
progs/verifier_ref_tracking.c-bpf_gcc-CFLAGS := -fno-strict-aliasing
progs/verifier_unpriv.c-bpf_gcc-CFLAGS := -fno-strict-aliasing
progs/verifier_cgroup_storage.c-bpf_gcc-CFLAGS := -fno-strict-aliasing
progs/verifier_ld_ind.c-bpf_gcc-CFLAGS := -fno-strict-aliasing
progs/verifier_map_ret_val.c-bpf_gcc-CFLAGS := -fno-strict-aliasing
progs/verifier_spill_fill.c-bpf_gcc-CFLAGS := -fno-strict-aliasing
progs/verifier_subprog_precision.c-bpf_gcc-CFLAGS := -fno-strict-aliasing
progs/verifier_uninit.c-bpf_gcc-CFLAGS := -fno-strict-aliasing
endif
ifneq ($(CLANG_CPUV4),)
@ -102,7 +116,6 @@ TEST_PROGS := test_kmod.sh \
test_xdp_redirect_multi.sh \
test_xdp_meta.sh \
test_xdp_veth.sh \
test_sock_addr.sh \
test_tunnel.sh \
test_lwt_seg6local.sh \
test_lirc_mode2.sh \
@ -127,7 +140,7 @@ TEST_PROGS_EXTENDED := with_addr.sh \
test_xdp_vlan.sh test_bpftool.py
# Compile but not part of 'make run_tests'
TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \
TEST_GEN_PROGS_EXTENDED = test_skb_cgroup_id_user \
flow_dissector_load test_flow_dissector test_tcp_check_syncookie_user \
test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \
xskxceiver xdp_redirect_multi xdp_synproxy veristat xdp_hw_metadata \
@ -262,7 +275,7 @@ $(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL) $(RUNQSLOWER_OUTPUT)
$(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/runqslower \
OUTPUT=$(RUNQSLOWER_OUTPUT) VMLINUX_BTF=$(VMLINUX_BTF) \
BPFTOOL_OUTPUT=$(HOST_BUILD_DIR)/bpftool/ \
BPFOBJ_OUTPUT=$(BUILD_DIR)/libbpf \
BPFOBJ_OUTPUT=$(BUILD_DIR)/libbpf/ \
BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) \
EXTRA_CFLAGS='-g $(OPT_FLAGS) $(SAN_CFLAGS)' \
EXTRA_LDFLAGS='$(SAN_LDFLAGS)' && \
@ -283,7 +296,6 @@ NETWORK_HELPERS := $(OUTPUT)/network_helpers.o
$(OUTPUT)/test_dev_cgroup: $(CGROUP_HELPERS) $(TESTING_HELPERS)
$(OUTPUT)/test_skb_cgroup_id_user: $(CGROUP_HELPERS) $(TESTING_HELPERS)
$(OUTPUT)/test_sock: $(CGROUP_HELPERS) $(TESTING_HELPERS)
$(OUTPUT)/test_sock_addr: $(CGROUP_HELPERS) $(TESTING_HELPERS) $(NETWORK_HELPERS)
$(OUTPUT)/test_sockmap: $(CGROUP_HELPERS) $(TESTING_HELPERS)
$(OUTPUT)/test_tcpnotify_user: $(CGROUP_HELPERS) $(TESTING_HELPERS) $(TRACE_HELPERS)
$(OUTPUT)/get_cgroup_id_user: $(CGROUP_HELPERS) $(TESTING_HELPERS)
@ -297,6 +309,7 @@ $(OUTPUT)/flow_dissector_load: $(TESTING_HELPERS)
$(OUTPUT)/test_maps: $(TESTING_HELPERS)
$(OUTPUT)/test_verifier: $(TESTING_HELPERS) $(CAP_HELPERS) $(UNPRIV_HELPERS)
$(OUTPUT)/xsk.o: $(BPFOBJ)
$(OUTPUT)/test_tcp_check_syncookie_user: $(NETWORK_HELPERS)
BPFTOOL ?= $(DEFAULT_BPFTOOL)
$(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \
@ -431,7 +444,7 @@ endef
# Build BPF object using GCC
define GCC_BPF_BUILD_RULE
$(call msg,GCC-BPF,$(TRUNNER_BINARY),$2)
$(Q)$(BPF_GCC) $3 -O2 -c $1 -o $2
$(Q)$(BPF_GCC) $3 -DBPF_NO_PRESERVE_ACCESS_INDEX -Wno-attributes -O2 -c $1 -o $2
endef
SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
@ -470,7 +483,7 @@ LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(ske
# $eval()) and pass control to DEFINE_TEST_RUNNER_RULES.
# Parameters:
# $1 - test runner base binary name (e.g., test_progs)
# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, gcc-bpf, etc)
# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, bpf_gcc, etc)
define DEFINE_TEST_RUNNER
TRUNNER_OUTPUT := $(OUTPUT)$(if $2,/)$2
@ -498,7 +511,7 @@ endef
# Using TRUNNER_XXX variables, provided by callers of DEFINE_TEST_RUNNER and
# set up by DEFINE_TEST_RUNNER itself, create test runner build rules with:
# $1 - test runner base binary name (e.g., test_progs)
# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, gcc-bpf, etc)
# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, bpf_gcc, etc)
define DEFINE_TEST_RUNNER_RULES
ifeq ($($(TRUNNER_OUTPUT)-dir),)
@ -521,7 +534,8 @@ $(TRUNNER_BPF_OBJS): $(TRUNNER_OUTPUT)/%.bpf.o: \
| $(TRUNNER_OUTPUT) $$(BPFOBJ)
$$(call $(TRUNNER_BPF_BUILD_RULE),$$<,$$@, \
$(TRUNNER_BPF_CFLAGS) \
$$($$<-CFLAGS))
$$($$<-CFLAGS) \
$$($$<-$2-CFLAGS))
$(TRUNNER_BPF_SKELS): %.skel.h: %.bpf.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)


@ -29,6 +29,7 @@ static inline void *bpf_iter_num_new(struct bpf_iter_num *it, int i, int j) { re
static inline void bpf_iter_num_destroy(struct bpf_iter_num *it) {}
static inline bool bpf_iter_num_next(struct bpf_iter_num *it) { return true; }
#define cond_break ({})
#define can_loop true
#endif
/* Safely walk link list elements. Deletion of elements is allowed. */
@ -36,8 +37,7 @@ static inline bool bpf_iter_num_next(struct bpf_iter_num *it) { return true; }
for (void * ___tmp = (pos = list_entry_safe((head)->first, \
typeof(*(pos)), member), \
(void *)0); \
pos && ({ ___tmp = (void *)pos->member.next; 1; }); \
cond_break, \
pos && ({ ___tmp = (void *)pos->member.next; 1; }) && can_loop; \
pos = list_entry_safe((void __arena *)___tmp, typeof(*(pos)), member))
static inline void list_add_head(arena_list_node_t *n, arena_list_head_t *h)


@ -326,19 +326,48 @@ l_true: \
})
#endif
/*
* Note that cond_break can only be portably used in the body of a breakable
* construct, whereas can_loop can be used anywhere.
*/
#ifdef __BPF_FEATURE_MAY_GOTO
#define can_loop \
({ __label__ l_break, l_continue; \
bool ret = true; \
asm volatile goto("may_goto %l[l_break]" \
:::: l_break); \
goto l_continue; \
l_break: ret = false; \
l_continue:; \
ret; \
})
#define cond_break \
({ __label__ l_break, l_continue; \
asm volatile goto("may_goto %l[l_break]" \
asm volatile goto("may_goto %l[l_break]" \
:::: l_break); \
goto l_continue; \
l_break: break; \
l_continue:; \
})
#else
#define can_loop \
({ __label__ l_break, l_continue; \
bool ret = true; \
asm volatile goto("1:.byte 0xe5; \
.byte 0; \
.long ((%l[l_break] - 1b - 8) / 8) & 0xffff; \
.short 0" \
:::: l_break); \
goto l_continue; \
l_break: ret = false; \
l_continue:; \
ret; \
})
#define cond_break \
({ __label__ l_break, l_continue; \
asm volatile goto("1:.byte 0xe5; \
asm volatile goto("1:.byte 0xe5; \
.byte 0; \
.long ((%l[l_break] - 1b - 8) / 8) & 0xffff; \
.short 0" \


@ -75,4 +75,7 @@ extern void bpf_key_put(struct bpf_key *key) __ksym;
extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
struct bpf_dynptr *sig_ptr,
struct bpf_key *trusted_keyring) __ksym;
extern bool bpf_session_is_return(void) __ksym __weak;
extern long *bpf_session_cookie(void) __ksym __weak;
#endif


@ -1,241 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __BPF_TCP_HELPERS_H
#define __BPF_TCP_HELPERS_H
#include <stdbool.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_tracing.h>
#define BPF_STRUCT_OPS(name, args...) \
SEC("struct_ops/"#name) \
BPF_PROG(name, args)
#ifndef SOL_TCP
#define SOL_TCP 6
#endif
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
#endif
#define tcp_jiffies32 ((__u32)bpf_jiffies64())
struct sock_common {
unsigned char skc_state;
__u16 skc_num;
} __attribute__((preserve_access_index));
enum sk_pacing {
SK_PACING_NONE = 0,
SK_PACING_NEEDED = 1,
SK_PACING_FQ = 2,
};
struct sock {
struct sock_common __sk_common;
#define sk_state __sk_common.skc_state
unsigned long sk_pacing_rate;
__u32 sk_pacing_status; /* see enum sk_pacing */
} __attribute__((preserve_access_index));
struct inet_sock {
struct sock sk;
} __attribute__((preserve_access_index));
struct inet_connection_sock {
struct inet_sock icsk_inet;
__u8 icsk_ca_state:6,
icsk_ca_setsockopt:1,
icsk_ca_dst_locked:1;
struct {
__u8 pending;
} icsk_ack;
__u64 icsk_ca_priv[104 / sizeof(__u64)];
} __attribute__((preserve_access_index));
struct request_sock {
struct sock_common __req_common;
} __attribute__((preserve_access_index));
struct tcp_sock {
struct inet_connection_sock inet_conn;
__u32 rcv_nxt;
__u32 snd_nxt;
__u32 snd_una;
__u32 window_clamp;
__u8 ecn_flags;
__u32 delivered;
__u32 delivered_ce;
__u32 snd_cwnd;
__u32 snd_cwnd_cnt;
__u32 snd_cwnd_clamp;
__u32 snd_ssthresh;
__u8 syn_data:1, /* SYN includes data */
syn_fastopen:1, /* SYN includes Fast Open option */
syn_fastopen_exp:1,/* SYN includes Fast Open exp. option */
syn_fastopen_ch:1, /* Active TFO re-enabling probe */
syn_data_acked:1,/* data in SYN is acked by SYN-ACK */
save_syn:1, /* Save headers of SYN packet */
is_cwnd_limited:1,/* forward progress limited by snd_cwnd? */
syn_smc:1; /* SYN includes SMC */
__u32 max_packets_out;
__u32 lsndtime;
__u32 prior_cwnd;
__u64 tcp_mstamp; /* most recent packet received/sent */
bool is_mptcp;
} __attribute__((preserve_access_index));
static __always_inline struct inet_connection_sock *inet_csk(const struct sock *sk)
{
return (struct inet_connection_sock *)sk;
}
static __always_inline void *inet_csk_ca(const struct sock *sk)
{
return (void *)inet_csk(sk)->icsk_ca_priv;
}
static __always_inline struct tcp_sock *tcp_sk(const struct sock *sk)
{
return (struct tcp_sock *)sk;
}
static __always_inline bool before(__u32 seq1, __u32 seq2)
{
return (__s32)(seq1-seq2) < 0;
}
#define after(seq2, seq1) before(seq1, seq2)
#define TCP_ECN_OK 1
#define TCP_ECN_QUEUE_CWR 2
#define TCP_ECN_DEMAND_CWR 4
#define TCP_ECN_SEEN 8
enum inet_csk_ack_state_t {
ICSK_ACK_SCHED = 1,
ICSK_ACK_TIMER = 2,
ICSK_ACK_PUSHED = 4,
ICSK_ACK_PUSHED2 = 8,
ICSK_ACK_NOW = 16 /* Send the next ACK immediately (once) */
};
enum tcp_ca_event {
CA_EVENT_TX_START = 0,
CA_EVENT_CWND_RESTART = 1,
CA_EVENT_COMPLETE_CWR = 2,
CA_EVENT_LOSS = 3,
CA_EVENT_ECN_NO_CE = 4,
CA_EVENT_ECN_IS_CE = 5,
};
struct ack_sample {
__u32 pkts_acked;
__s32 rtt_us;
__u32 in_flight;
} __attribute__((preserve_access_index));
struct rate_sample {
__u64 prior_mstamp; /* starting timestamp for interval */
__u32 prior_delivered; /* tp->delivered at "prior_mstamp" */
__s32 delivered; /* number of packets delivered over interval */
long interval_us; /* time for tp->delivered to incr "delivered" */
__u32 snd_interval_us; /* snd interval for delivered packets */
__u32 rcv_interval_us; /* rcv interval for delivered packets */
long rtt_us; /* RTT of last (S)ACKed packet (or -1) */
int losses; /* number of packets marked lost upon ACK */
__u32 acked_sacked; /* number of packets newly (S)ACKed upon ACK */
__u32 prior_in_flight; /* in flight before this ACK */
bool is_app_limited; /* is sample from packet with bubble in pipe? */
bool is_retrans; /* is sample from retransmission? */
bool is_ack_delayed; /* is this (likely) a delayed ACK? */
} __attribute__((preserve_access_index));
#define TCP_CA_NAME_MAX 16
#define TCP_CONG_NEEDS_ECN 0x2
struct tcp_congestion_ops {
char name[TCP_CA_NAME_MAX];
__u32 flags;
/* initialize private data (optional) */
void (*init)(struct sock *sk);
/* cleanup private data (optional) */
void (*release)(struct sock *sk);
/* return slow start threshold (required) */
__u32 (*ssthresh)(struct sock *sk);
/* do new cwnd calculation (required) */
void (*cong_avoid)(struct sock *sk, __u32 ack, __u32 acked);
/* call before changing ca_state (optional) */
void (*set_state)(struct sock *sk, __u8 new_state);
/* call when cwnd event occurs (optional) */
void (*cwnd_event)(struct sock *sk, enum tcp_ca_event ev);
/* call when ack arrives (optional) */
void (*in_ack_event)(struct sock *sk, __u32 flags);
/* new value of cwnd after loss (required) */
__u32 (*undo_cwnd)(struct sock *sk);
/* hook for packet ack accounting (optional) */
void (*pkts_acked)(struct sock *sk, const struct ack_sample *sample);
/* override sysctl_tcp_min_tso_segs */
__u32 (*min_tso_segs)(struct sock *sk);
/* returns the multiplier used in tcp_sndbuf_expand (optional) */
__u32 (*sndbuf_expand)(struct sock *sk);
/* call when packets are delivered to update cwnd and pacing rate,
* after all the ca_state processing. (optional)
*/
void (*cong_control)(struct sock *sk, const struct rate_sample *rs);
void *owner;
};
#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))
#define min_not_zero(x, y) ({ \
typeof(x) __x = (x); \
typeof(y) __y = (y); \
__x == 0 ? __y : ((__y == 0) ? __x : min(__x, __y)); })
static __always_inline bool tcp_in_slow_start(const struct tcp_sock *tp)
{
return tp->snd_cwnd < tp->snd_ssthresh;
}
static __always_inline bool tcp_is_cwnd_limited(const struct sock *sk)
{
const struct tcp_sock *tp = tcp_sk(sk);
/* If in slow start, ensure cwnd grows to twice what was ACKed. */
if (tcp_in_slow_start(tp))
return tp->snd_cwnd < 2 * tp->max_packets_out;
return !!BPF_CORE_READ_BITFIELD(tp, is_cwnd_limited);
}
static __always_inline bool tcp_cc_eq(const char *a, const char *b)
{
int i;
for (i = 0; i < TCP_CA_NAME_MAX; i++) {
if (a[i] != b[i])
return false;
if (!a[i])
break;
}
return true;
}
extern __u32 tcp_slow_start(struct tcp_sock *tp, __u32 acked) __ksym;
extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
struct mptcp_sock {
struct inet_connection_sock sk;
__u32 token;
struct sock *first;
char ca_name[TCP_CA_NAME_MAX];
} __attribute__((preserve_access_index));
#endif


@ -10,18 +10,30 @@
#include <linux/percpu-defs.h>
#include <linux/sysfs.h>
#include <linux/tracepoint.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/nsproxy.h>
#include <linux/inet.h>
#include <linux/in.h>
#include <linux/in6.h>
#include <linux/un.h>
#include <net/sock.h>
#include "bpf_testmod.h"
#include "bpf_testmod_kfunc.h"
#define CREATE_TRACE_POINTS
#include "bpf_testmod-events.h"
#define CONNECT_TIMEOUT_SEC 1
typedef int (*func_proto_typedef)(long);
typedef int (*func_proto_typedef_nested1)(func_proto_typedef);
typedef int (*func_proto_typedef_nested2)(func_proto_typedef_nested1);
DEFINE_PER_CPU(int, bpf_testmod_ksym_percpu) = 123;
long bpf_testmod_test_struct_arg_result;
static DEFINE_MUTEX(sock_lock);
static struct socket *sock;
struct bpf_testmod_struct_arg_1 {
int a;
@ -501,6 +513,237 @@ __bpf_kfunc void bpf_kfunc_call_test_sleepable(void)
{
}
__bpf_kfunc int bpf_kfunc_init_sock(struct init_sock_args *args)
{
int proto;
int err;
mutex_lock(&sock_lock);
if (sock) {
pr_err("%s called without releasing old sock", __func__);
err = -EPERM;
goto out;
}
switch (args->af) {
case AF_INET:
case AF_INET6:
proto = args->type == SOCK_STREAM ? IPPROTO_TCP : IPPROTO_UDP;
break;
case AF_UNIX:
proto = PF_UNIX;
break;
default:
pr_err("invalid address family %d\n", args->af);
err = -EINVAL;
goto out;
}
err = sock_create_kern(current->nsproxy->net_ns, args->af, args->type,
proto, &sock);
if (!err)
/* Set timeout for call to kernel_connect() to prevent it from hanging,
* and consider the connection attempt failed if it returns
* -EINPROGRESS.
*/
sock->sk->sk_sndtimeo = CONNECT_TIMEOUT_SEC * HZ;
out:
mutex_unlock(&sock_lock);
return err;
}
__bpf_kfunc void bpf_kfunc_close_sock(void)
{
mutex_lock(&sock_lock);
if (sock) {
sock_release(sock);
sock = NULL;
}
mutex_unlock(&sock_lock);
}
__bpf_kfunc int bpf_kfunc_call_kernel_connect(struct addr_args *args)
{
int err;
if (args->addrlen > sizeof(args->addr))
return -EINVAL;
mutex_lock(&sock_lock);
if (!sock) {
pr_err("%s called without initializing sock", __func__);
err = -EPERM;
goto out;
}
err = kernel_connect(sock, (struct sockaddr *)&args->addr,
args->addrlen, 0);
out:
mutex_unlock(&sock_lock);
return err;
}
__bpf_kfunc int bpf_kfunc_call_kernel_bind(struct addr_args *args)
{
int err;
if (args->addrlen > sizeof(args->addr))
return -EINVAL;
mutex_lock(&sock_lock);
if (!sock) {
pr_err("%s called without initializing sock", __func__);
err = -EPERM;
goto out;
}
err = kernel_bind(sock, (struct sockaddr *)&args->addr, args->addrlen);
out:
mutex_unlock(&sock_lock);
return err;
}
__bpf_kfunc int bpf_kfunc_call_kernel_listen(void)
{
int err;
mutex_lock(&sock_lock);
if (!sock) {
pr_err("%s called without initializing sock", __func__);
err = -EPERM;
goto out;
}
err = kernel_listen(sock, 128);
out:
mutex_unlock(&sock_lock);
return err;
}
__bpf_kfunc int bpf_kfunc_call_kernel_sendmsg(struct sendmsg_args *args)
{
struct msghdr msg = {
.msg_name = &args->addr.addr,
.msg_namelen = args->addr.addrlen,
};
struct kvec iov;
int err;
if (args->addr.addrlen > sizeof(args->addr.addr) ||
args->msglen > sizeof(args->msg))
return -EINVAL;
iov.iov_base = args->msg;
iov.iov_len = args->msglen;
mutex_lock(&sock_lock);
if (!sock) {
pr_err("%s called without initializing sock", __func__);
err = -EPERM;
goto out;
}
err = kernel_sendmsg(sock, &msg, &iov, 1, args->msglen);
args->addr.addrlen = msg.msg_namelen;
out:
mutex_unlock(&sock_lock);
return err;
}
__bpf_kfunc int bpf_kfunc_call_sock_sendmsg(struct sendmsg_args *args)
{
struct msghdr msg = {
.msg_name = &args->addr.addr,
.msg_namelen = args->addr.addrlen,
};
struct kvec iov;
int err;
if (args->addr.addrlen > sizeof(args->addr.addr) ||
args->msglen > sizeof(args->msg))
return -EINVAL;
iov.iov_base = args->msg;
iov.iov_len = args->msglen;
iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, args->msglen);
mutex_lock(&sock_lock);
if (!sock) {
pr_err("%s called without initializing sock", __func__);
err = -EPERM;
goto out;
}
err = sock_sendmsg(sock, &msg);
args->addr.addrlen = msg.msg_namelen;
out:
mutex_unlock(&sock_lock);
return err;
}
__bpf_kfunc int bpf_kfunc_call_kernel_getsockname(struct addr_args *args)
{
int err;
mutex_lock(&sock_lock);
if (!sock) {
pr_err("%s called without initializing sock", __func__);
err = -EPERM;
goto out;
}
err = kernel_getsockname(sock, (struct sockaddr *)&args->addr);
if (err < 0)
goto out;
args->addrlen = err;
err = 0;
out:
mutex_unlock(&sock_lock);
return err;
}
__bpf_kfunc int bpf_kfunc_call_kernel_getpeername(struct addr_args *args)
{
int err;
mutex_lock(&sock_lock);
if (!sock) {
pr_err("%s called without initializing sock", __func__);
err = -EPERM;
goto out;
}
err = kernel_getpeername(sock, (struct sockaddr *)&args->addr);
if (err < 0)
goto out;
args->addrlen = err;
err = 0;
out:
mutex_unlock(&sock_lock);
return err;
}
BTF_KFUNCS_START(bpf_testmod_check_kfunc_ids)
BTF_ID_FLAGS(func, bpf_testmod_test_mod_kfunc)
BTF_ID_FLAGS(func, bpf_kfunc_call_test1)
@ -528,6 +771,15 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_offset)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_sleepable, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_init_sock, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_close_sock, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_call_kernel_connect, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_call_kernel_bind, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_call_kernel_listen, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_call_kernel_sendmsg, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_call_sock_sendmsg, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_call_kernel_getsockname, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_call_kernel_getpeername, KF_SLEEPABLE)
BTF_KFUNCS_END(bpf_testmod_check_kfunc_ids)
static int bpf_testmod_ops_init(struct btf *btf)
@ -658,6 +910,8 @@ static int bpf_testmod_init(void)
return ret;
if (bpf_fentry_test1(0) < 0)
return -EINVAL;
sock = NULL;
mutex_init(&sock_lock);
return sysfs_create_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
}
@ -671,6 +925,7 @@ static void bpf_testmod_exit(void)
while (refcount_read(&prog_test_struct.cnt) > 1)
msleep(20);
bpf_kfunc_close_sock();
sysfs_remove_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
}


@ -64,6 +64,22 @@ struct prog_test_fail3 {
char arr2[];
};
struct init_sock_args {
int af;
int type;
};
struct addr_args {
char addr[sizeof(struct __kernel_sockaddr_storage)];
int addrlen;
};
struct sendmsg_args {
struct addr_args addr;
char msg[10];
int msglen;
};
struct prog_test_ref_kfunc *
bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr) __ksym;
void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
@ -107,4 +123,15 @@ void bpf_kfunc_call_test_fail3(struct prog_test_fail3 *p);
void bpf_kfunc_call_test_mem_len_fail1(void *mem, int len);
void bpf_kfunc_common_test(void) __ksym;
int bpf_kfunc_init_sock(struct init_sock_args *args) __ksym;
void bpf_kfunc_close_sock(void) __ksym;
int bpf_kfunc_call_kernel_connect(struct addr_args *args) __ksym;
int bpf_kfunc_call_kernel_bind(struct addr_args *args) __ksym;
int bpf_kfunc_call_kernel_listen(void) __ksym;
int bpf_kfunc_call_kernel_sendmsg(struct sendmsg_args *args) __ksym;
int bpf_kfunc_call_sock_sendmsg(struct sendmsg_args *args) __ksym;
int bpf_kfunc_call_kernel_getsockname(struct addr_args *args) __ksym;
int bpf_kfunc_call_kernel_getpeername(struct addr_args *args) __ksym;
#endif /* _BPF_TESTMOD_KFUNC_H */

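These declarations are what the BPF-side test programs link against. A minimal sketch of how the selftests drive the new socket kfuncs from sleepable SEC("syscall") programs executed via bpf_prog_test_run(); program names and the include path follow the selftests' conventions but are illustrative here:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include "../bpf_testmod/bpf_testmod_kfunc.h"

char _license[] SEC("license") = "GPL";

SEC("syscall")
int init_sock(struct init_sock_args *args)
{
        return bpf_kfunc_init_sock(args);
}

SEC("syscall")
int kernel_connect(struct addr_args *args)
{
        return bpf_kfunc_call_kernel_connect(args);
}

SEC("syscall")
int close_sock(void *ctx)
{
        bpf_kfunc_close_sock();
        return 0;
}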

@ -508,6 +508,9 @@ int cgroup_setup_and_join(const char *path) {
/**
* setup_classid_environment() - Setup the cgroupv1 net_cls environment
*
* This function should only be called in a custom mount namespace, e.g.
* created by running setup_cgroup_environment.
*
* After calling this function, cleanup_classid_environment should be called
* once testing is complete.
*


@ -80,24 +80,22 @@ int settimeo(int fd, int timeout_ms)
#define save_errno_close(fd) ({ int __save = errno; close(fd); errno = __save; })
static int __start_server(int type, int protocol, const struct sockaddr *addr,
socklen_t addrlen, int timeout_ms, bool reuseport)
static int __start_server(int type, const struct sockaddr *addr, socklen_t addrlen,
const struct network_helper_opts *opts)
{
int on = 1;
int fd;
fd = socket(addr->sa_family, type, protocol);
fd = socket(addr->sa_family, type, opts->proto);
if (fd < 0) {
log_err("Failed to create server socket");
return -1;
}
if (settimeo(fd, timeout_ms))
if (settimeo(fd, opts->timeout_ms))
goto error_close;
if (reuseport &&
setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on))) {
log_err("Failed to set SO_REUSEPORT");
if (opts->post_socket_cb && opts->post_socket_cb(fd, NULL)) {
log_err("Failed to call post_socket_cb");
goto error_close;
}
@ -120,35 +118,35 @@ error_close:
return -1;
}
static int start_server_proto(int family, int type, int protocol,
const char *addr_str, __u16 port, int timeout_ms)
int start_server(int family, int type, const char *addr_str, __u16 port,
int timeout_ms)
{
struct network_helper_opts opts = {
.timeout_ms = timeout_ms,
};
struct sockaddr_storage addr;
socklen_t addrlen;
if (make_sockaddr(family, addr_str, port, &addr, &addrlen))
return -1;
return __start_server(type, protocol, (struct sockaddr *)&addr,
addrlen, timeout_ms, false);
return __start_server(type, (struct sockaddr *)&addr, addrlen, &opts);
}
int start_server(int family, int type, const char *addr_str, __u16 port,
int timeout_ms)
static int reuseport_cb(int fd, const struct post_socket_opts *opts)
{
return start_server_proto(family, type, 0, addr_str, port, timeout_ms);
}
int on = 1;
int start_mptcp_server(int family, const char *addr_str, __u16 port,
int timeout_ms)
{
return start_server_proto(family, SOCK_STREAM, IPPROTO_MPTCP, addr_str,
port, timeout_ms);
return setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on));
}
int *start_reuseport_server(int family, int type, const char *addr_str,
__u16 port, int timeout_ms, unsigned int nr_listens)
{
struct network_helper_opts opts = {
.timeout_ms = timeout_ms,
.post_socket_cb = reuseport_cb,
};
struct sockaddr_storage addr;
unsigned int nr_fds = 0;
socklen_t addrlen;
@ -164,8 +162,7 @@ int *start_reuseport_server(int family, int type, const char *addr_str,
if (!fds)
return NULL;
fds[0] = __start_server(type, 0, (struct sockaddr *)&addr, addrlen,
timeout_ms, true);
fds[0] = __start_server(type, (struct sockaddr *)&addr, addrlen, &opts);
if (fds[0] == -1)
goto close_fds;
nr_fds = 1;
@ -174,8 +171,7 @@ int *start_reuseport_server(int family, int type, const char *addr_str,
goto close_fds;
for (; nr_fds < nr_listens; nr_fds++) {
fds[nr_fds] = __start_server(type, 0, (struct sockaddr *)&addr,
addrlen, timeout_ms, true);
fds[nr_fds] = __start_server(type, (struct sockaddr *)&addr, addrlen, &opts);
if (fds[nr_fds] == -1)
goto close_fds;
}
@ -193,8 +189,7 @@ int start_server_addr(int type, const struct sockaddr_storage *addr, socklen_t l
if (!opts)
opts = &default_opts;
return __start_server(type, 0, (struct sockaddr *)addr, len,
opts->timeout_ms, 0);
return __start_server(type, (struct sockaddr *)addr, len, opts);
}
void free_fds(int *fds, unsigned int nr_close_fds)


@ -21,6 +21,8 @@ typedef __u16 __sum16;
#define VIP_NUM 5
#define MAGIC_BYTES 123
struct post_socket_opts {};
struct network_helper_opts {
const char *cc;
int timeout_ms;
@ -28,6 +30,7 @@ struct network_helper_opts {
bool noconnect;
int type;
int proto;
int (*post_socket_cb)(int fd, const struct post_socket_opts *opts);
};
/* ipv4 test vector */
@ -49,8 +52,6 @@ extern struct ipv6_packet pkt_v6;
int settimeo(int fd, int timeout_ms);
int start_server(int family, int type, const char *addr, __u16 port,
int timeout_ms);
int start_mptcp_server(int family, const char *addr, __u16 port,
int timeout_ms);
int *start_reuseport_server(int family, int type, const char *addr_str,
__u16 port, int timeout_ms,
unsigned int nr_listens);


@ -14,6 +14,7 @@
#include "tcp_ca_incompl_cong_ops.skel.h"
#include "tcp_ca_unsupp_cong_op.skel.h"
#include "tcp_ca_kfunc.skel.h"
#include "bpf_cc_cubic.skel.h"
#ifndef ENOTSUPP
#define ENOTSUPP 524
@ -452,6 +453,27 @@ static void test_tcp_ca_kfunc(void)
tcp_ca_kfunc__destroy(skel);
}
static void test_cc_cubic(void)
{
struct bpf_cc_cubic *cc_cubic_skel;
struct bpf_link *link;
cc_cubic_skel = bpf_cc_cubic__open_and_load();
if (!ASSERT_OK_PTR(cc_cubic_skel, "bpf_cc_cubic__open_and_load"))
return;
link = bpf_map__attach_struct_ops(cc_cubic_skel->maps.cc_cubic);
if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
bpf_cc_cubic__destroy(cc_cubic_skel);
return;
}
do_test("bpf_cc_cubic", NULL);
bpf_link__destroy(link);
bpf_cc_cubic__destroy(cc_cubic_skel);
}
void test_bpf_tcp_ca(void)
{
if (test__start_subtest("dctcp"))
@ -482,4 +504,6 @@ void test_bpf_tcp_ca(void)
test_link_replace();
if (test__start_subtest("tcp_ca_kfunc"))
test_tcp_ca_kfunc();
if (test__start_subtest("cc_cubic"))
test_cc_cubic();
}


@ -87,9 +87,12 @@ void test_cgroup1_hierarchy(void)
goto destroy;
/* Setup cgroup1 hierarchy */
err = setup_cgroup_environment();
if (!ASSERT_OK(err, "setup_cgroup_environment"))
goto destroy;
err = setup_classid_environment();
if (!ASSERT_OK(err, "setup_classid_environment"))
goto destroy;
goto cleanup_cgroup;
err = join_classid();
if (!ASSERT_OK(err, "join_cgroup1"))
@ -153,6 +156,8 @@ void test_cgroup1_hierarchy(void)
cleanup:
cleanup_classid_environment();
cleanup_cgroup:
cleanup_cgroup_environment();
destroy:
test_cgroup1_hierarchy__destroy(skel);
}


@ -4,6 +4,8 @@
#include "trace_helpers.h"
#include "kprobe_multi_empty.skel.h"
#include "kprobe_multi_override.skel.h"
#include "kprobe_multi_session.skel.h"
#include "kprobe_multi_session_cookie.skel.h"
#include "bpf/libbpf_internal.h"
#include "bpf/hashmap.h"
@ -326,6 +328,74 @@ cleanup:
kprobe_multi__destroy(skel);
}
static void test_session_skel_api(void)
{
struct kprobe_multi_session *skel = NULL;
LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
LIBBPF_OPTS(bpf_test_run_opts, topts);
struct bpf_link *link = NULL;
int i, err, prog_fd;
skel = kprobe_multi_session__open_and_load();
if (!ASSERT_OK_PTR(skel, "kprobe_multi_session__open_and_load"))
return;
skel->bss->pid = getpid();
err = kprobe_multi_session__attach(skel);
if (!ASSERT_OK(err, " kprobe_multi_session__attach"))
goto cleanup;
prog_fd = bpf_program__fd(skel->progs.trigger);
err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
ASSERT_EQ(topts.retval, 0, "test_run");
/* bpf_fentry_test1-4 trigger return probe, result is 2 */
for (i = 0; i < 4; i++)
ASSERT_EQ(skel->bss->kprobe_session_result[i], 2, "kprobe_session_result");
/* bpf_fentry_test5-8 trigger only entry probe, result is 1 */
for (i = 4; i < 8; i++)
ASSERT_EQ(skel->bss->kprobe_session_result[i], 1, "kprobe_session_result");
cleanup:
bpf_link__destroy(link);
kprobe_multi_session__destroy(skel);
}
static void test_session_cookie_skel_api(void)
{
struct kprobe_multi_session_cookie *skel = NULL;
LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
LIBBPF_OPTS(bpf_test_run_opts, topts);
struct bpf_link *link = NULL;
int err, prog_fd;
skel = kprobe_multi_session_cookie__open_and_load();
if (!ASSERT_OK_PTR(skel, "fentry_raw_skel_load"))
return;
skel->bss->pid = getpid();
err = kprobe_multi_session_cookie__attach(skel);
if (!ASSERT_OK(err, " kprobe_multi_wrapper__attach"))
goto cleanup;
prog_fd = bpf_program__fd(skel->progs.trigger);
err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
ASSERT_EQ(topts.retval, 0, "test_run");
ASSERT_EQ(skel->bss->test_kprobe_1_result, 1, "test_kprobe_1_result");
ASSERT_EQ(skel->bss->test_kprobe_2_result, 2, "test_kprobe_2_result");
ASSERT_EQ(skel->bss->test_kprobe_3_result, 3, "test_kprobe_3_result");
cleanup:
bpf_link__destroy(link);
kprobe_multi_session_cookie__destroy(skel);
}
static size_t symbol_hash(long key, void *ctx __maybe_unused)
{
return str_hash((const char *) key);
@ -690,4 +760,8 @@ void test_kprobe_multi_test(void)
test_attach_api_fails();
if (test__start_subtest("attach_override"))
test_attach_override();
if (test__start_subtest("session"))
test_session_skel_api();
if (test__start_subtest("session_cookie"))
test_session_cookie_skel_api();
}


@ -51,6 +51,10 @@ void test_module_attach(void)
0, "bpf_testmod_test_read");
ASSERT_OK(err, "set_attach_target");
err = bpf_program__set_attach_target(skel->progs.handle_fentry_explicit_manual,
0, "bpf_testmod:bpf_testmod_test_read");
ASSERT_OK(err, "set_attach_target_explicit");
err = test_module_attach__load(skel);
if (CHECK(err, "skel_load", "failed to load skeleton\n"))
return;
@ -70,6 +74,8 @@ void test_module_attach(void)
ASSERT_EQ(bss->tp_btf_read_sz, READ_SZ, "tp_btf");
ASSERT_EQ(bss->fentry_read_sz, READ_SZ, "fentry");
ASSERT_EQ(bss->fentry_manual_read_sz, READ_SZ, "fentry_manual");
ASSERT_EQ(bss->fentry_explicit_read_sz, READ_SZ, "fentry_explicit");
ASSERT_EQ(bss->fentry_explicit_manual_read_sz, READ_SZ, "fentry_explicit_manual");
ASSERT_EQ(bss->fexit_read_sz, READ_SZ, "fexit");
ASSERT_EQ(bss->fexit_ret, -EIO, "fexit_tet");
ASSERT_EQ(bss->fmod_ret_read_sz, READ_SZ, "fmod_ret");

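The explicit "module:function" target exercised above via bpf_program__set_attach_target() can also be spelled directly in the section name, thanks to the libbpf change earlier in this series; a hedged sketch:

SEC("fentry/bpf_testmod:bpf_testmod_test_read")
int BPF_PROG(handle_fentry_explicit)
{
        /* attaches to bpf_testmod_test_read() inside the bpf_testmod module */
        return 0;
}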

@ -82,6 +82,22 @@ static void cleanup_netns(struct nstoken *nstoken)
SYS_NOFAIL("ip netns del %s", NS_TEST);
}
static int start_mptcp_server(int family, const char *addr_str, __u16 port,
int timeout_ms)
{
struct network_helper_opts opts = {
.timeout_ms = timeout_ms,
.proto = IPPROTO_MPTCP,
};
struct sockaddr_storage addr;
socklen_t addrlen;
if (make_sockaddr(family, addr_str, port, &addr, &addrlen))
return -1;
return start_server_addr(SOCK_STREAM, &addr, addrlen, &opts);
}
static int verify_tsk(int map_fd, int client_fd)
{
int err, cfd = client_fd;

[file diff suppressed because it is too large to display]


@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
#include "cgroup_helpers.h"
#include "network_helpers.h"
#include "sockopt_inherit.skel.h"
@ -9,35 +10,6 @@
#define CUSTOM_INHERIT2 1
#define CUSTOM_LISTENER 2
static int connect_to_server(int server_fd)
{
struct sockaddr_storage addr;
socklen_t len = sizeof(addr);
int fd;
fd = socket(AF_INET, SOCK_STREAM, 0);
if (fd < 0) {
log_err("Failed to create client socket");
return -1;
}
if (getsockname(server_fd, (struct sockaddr *)&addr, &len)) {
log_err("Failed to get server addr");
goto out;
}
if (connect(fd, (const struct sockaddr *)&addr, len) < 0) {
log_err("Fail to connect to server");
goto out;
}
return fd;
out:
close(fd);
return -1;
}
static int verify_sockopt(int fd, int optname, const char *msg, char expected)
{
socklen_t optlen = 1;
@ -98,47 +70,36 @@ static void *server_thread(void *arg)
return (void *)(long)err;
}
static int start_server(void)
static int custom_cb(int fd, const struct post_socket_opts *opts)
{
struct sockaddr_in addr = {
.sin_family = AF_INET,
.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
};
char buf;
int err;
int fd;
int i;
fd = socket(AF_INET, SOCK_STREAM, 0);
if (fd < 0) {
log_err("Failed to create server socket");
return -1;
}
for (i = CUSTOM_INHERIT1; i <= CUSTOM_LISTENER; i++) {
buf = 0x01;
err = setsockopt(fd, SOL_CUSTOM, i, &buf, 1);
if (err) {
log_err("Failed to call setsockopt(%d)", i);
close(fd);
return -1;
}
}
if (bind(fd, (const struct sockaddr *)&addr, sizeof(addr)) < 0) {
log_err("Failed to bind socket");
close(fd);
return -1;
}
return fd;
return 0;
}
static void run_test(int cgroup_fd)
{
struct bpf_link *link_getsockopt = NULL;
struct bpf_link *link_setsockopt = NULL;
struct network_helper_opts opts = {
.post_socket_cb = custom_cb,
};
int server_fd = -1, client_fd;
struct sockaddr_in addr = {
.sin_family = AF_INET,
.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
};
struct sockopt_inherit *obj;
void *server_err;
pthread_t tid;
@ -160,7 +121,8 @@ static void run_test(int cgroup_fd)
if (!ASSERT_OK_PTR(link_setsockopt, "cg-attach-setsockopt"))
goto close_bpf_object;
server_fd = start_server();
server_fd = start_server_addr(SOCK_STREAM, (struct sockaddr_storage *)&addr,
sizeof(addr), &opts);
if (!ASSERT_GE(server_fd, 0, "start_server"))
goto close_bpf_object;
@ -173,7 +135,7 @@ static void run_test(int cgroup_fd)
pthread_cond_wait(&server_started, &server_started_mtx);
pthread_mutex_unlock(&server_started_mtx);
client_fd = connect_to_server(server_fd);
client_fd = connect_to_fd(server_fd, 0);
if (!ASSERT_GE(client_fd, 0, "connect_to_server"))
goto close_server_fd;


@ -4,6 +4,8 @@
#include <time.h>
#include "struct_ops_module.skel.h"
#include "struct_ops_nulled_out_cb.skel.h"
#include "struct_ops_forgotten_cb.skel.h"
static void check_map_info(struct bpf_map_info *info)
{
@ -66,6 +68,7 @@ static void test_struct_ops_load(void)
* auto-loading, or it will fail to load.
*/
bpf_program__set_autoload(skel->progs.test_2, false);
bpf_map__set_autocreate(skel->maps.testmod_zeroed, false);
err = struct_ops_module__load(skel);
if (!ASSERT_OK(err, "struct_ops_module_load"))
@ -103,6 +106,10 @@ static void test_struct_ops_not_zeroed(void)
if (!ASSERT_OK_PTR(skel, "struct_ops_module_open"))
return;
skel->struct_ops.testmod_zeroed->zeroed = 0;
/* zeroed_op prog should be not loaded automatically now */
skel->struct_ops.testmod_zeroed->zeroed_op = NULL;
err = struct_ops_module__load(skel);
ASSERT_OK(err, "struct_ops_module_load");
@ -118,6 +125,7 @@ static void test_struct_ops_not_zeroed(void)
* value of "zeroed" is non-zero.
*/
skel->struct_ops.testmod_zeroed->zeroed = 0xdeadbeef;
skel->struct_ops.testmod_zeroed->zeroed_op = NULL;
err = struct_ops_module__load(skel);
ASSERT_ERR(err, "struct_ops_module_load_not_zeroed");
@ -148,25 +156,103 @@ static void test_struct_ops_incompatible(void)
{
struct struct_ops_module *skel;
struct bpf_link *link;
int err;
skel = struct_ops_module__open_and_load();
if (!ASSERT_OK_PTR(skel, "open_and_load"))
skel = struct_ops_module__open();
if (!ASSERT_OK_PTR(skel, "struct_ops_module_open"))
return;
bpf_map__set_autocreate(skel->maps.testmod_zeroed, false);
err = struct_ops_module__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto cleanup;
link = bpf_map__attach_struct_ops(skel->maps.testmod_incompatible);
if (ASSERT_OK_PTR(link, "attach_struct_ops"))
bpf_link__destroy(link);
cleanup:
struct_ops_module__destroy(skel);
}
/* validate that it's ok to "turn off" callback that kernel supports */
static void test_struct_ops_nulled_out_cb(void)
{
struct struct_ops_nulled_out_cb *skel;
int err;
skel = struct_ops_nulled_out_cb__open();
if (!ASSERT_OK_PTR(skel, "skel_open"))
return;
/* kernel knows about test_1, but we still null it out */
skel->struct_ops.ops->test_1 = NULL;
err = struct_ops_nulled_out_cb__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto cleanup;
ASSERT_FALSE(bpf_program__autoload(skel->progs.test_1_turn_off), "prog_autoload");
ASSERT_LT(bpf_program__fd(skel->progs.test_1_turn_off), 0, "prog_fd");
cleanup:
struct_ops_nulled_out_cb__destroy(skel);
}
/* validate that libbpf generates reasonable error message if struct_ops is
* not referenced in any struct_ops map
*/
static void test_struct_ops_forgotten_cb(void)
{
struct struct_ops_forgotten_cb *skel;
char *log;
int err;
skel = struct_ops_forgotten_cb__open();
if (!ASSERT_OK_PTR(skel, "skel_open"))
return;
start_libbpf_log_capture();
err = struct_ops_forgotten_cb__load(skel);
if (!ASSERT_ERR(err, "skel_load"))
goto cleanup;
log = stop_libbpf_log_capture();
ASSERT_HAS_SUBSTR(log,
"prog 'test_1_forgotten': SEC(\"struct_ops\") program isn't referenced anywhere, did you forget to use it?",
"libbpf_log");
free(log);
struct_ops_forgotten_cb__destroy(skel);
/* now let's programmatically use it, we should be fine now */
skel = struct_ops_forgotten_cb__open();
if (!ASSERT_OK_PTR(skel, "skel_open"))
return;
skel->struct_ops.ops->test_1 = skel->progs.test_1_forgotten; /* not anymore */
err = struct_ops_forgotten_cb__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto cleanup;
cleanup:
struct_ops_forgotten_cb__destroy(skel);
}
void serial_test_struct_ops_module(void)
{
if (test__start_subtest("test_struct_ops_load"))
if (test__start_subtest("struct_ops_load"))
test_struct_ops_load();
if (test__start_subtest("test_struct_ops_not_zeroed"))
if (test__start_subtest("struct_ops_not_zeroed"))
test_struct_ops_not_zeroed();
if (test__start_subtest("test_struct_ops_incompatible"))
if (test__start_subtest("struct_ops_incompatible"))
test_struct_ops_incompatible();
if (test__start_subtest("struct_ops_null_out_cb"))
test_struct_ops_nulled_out_cb();
if (test__start_subtest("struct_ops_forgotten_cb"))
test_struct_ops_forgotten_cb();
}


@ -66,6 +66,7 @@
#include "verifier_sdiv.skel.h"
#include "verifier_search_pruning.skel.h"
#include "verifier_sock.skel.h"
#include "verifier_sock_addr.skel.h"
#include "verifier_spill_fill.skel.h"
#include "verifier_spin_lock.skel.h"
#include "verifier_stack_ptr.skel.h"
@ -181,6 +182,7 @@ void test_verifier_scalar_ids(void) { RUN(verifier_scalar_ids); }
void test_verifier_sdiv(void) { RUN(verifier_sdiv); }
void test_verifier_search_pruning(void) { RUN(verifier_search_pruning); }
void test_verifier_sock(void) { RUN(verifier_sock); }
void test_verifier_sock_addr(void) { RUN(verifier_sock_addr); }
void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); }
void test_verifier_spin_lock(void) { RUN(verifier_spin_lock); }
void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); }


@ -36,7 +36,5 @@ void serial_test_wq(void)
void serial_test_failures_wq(void)
{
LIBBPF_OPTS(bpf_test_run_opts, topts);
RUN_TESTS(wq_failures);
}


@ -107,8 +107,8 @@ void test_xdp_do_redirect(void)
.attach_point = BPF_TC_INGRESS);
memcpy(&data[sizeof(__u64)], &pkt_udp, sizeof(pkt_udp));
*((__u32 *)data) = 0x42; /* metadata test value */
*((__u32 *)data + 4) = 0;
((__u32 *)data)[0] = 0x42; /* metadata test value */
((__u32 *)data)[1] = 0;
skel = test_xdp_do_redirect__open();
if (!ASSERT_OK_PTR(skel, "skel"))

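(For context on the change just above: the old *((__u32 *)data + 4) pointed 16 bytes into the buffer, past the two metadata words the test intends to initialize, while indexing the cast pointer writes the two words at offsets 0 and 4 as intended.)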

@ -49,7 +49,7 @@ int arena_list_add(void *ctx)
list_head = &global_head;
for (i = zero; i < cnt; cond_break, i++) {
for (i = zero; i < cnt && can_loop; i++) {
struct elem __arena *n = bpf_alloc(sizeof(*n));
test_val++;


@ -61,14 +61,15 @@ SEC("lsm.s/socket_post_create")
int BPF_PROG(socket_post_create, struct socket *sock, int family, int type,
int protocol, int kern)
{
struct sock *sk = sock->sk;
struct storage *stg;
__u32 pid;
pid = bpf_get_current_pid_tgid() >> 32;
if (pid != bench_pid)
if (pid != bench_pid || !sk)
return 0;
stg = bpf_sk_storage_get(&sk_storage_map, sock->sk, NULL,
stg = bpf_sk_storage_get(&sk_storage_map, sk, NULL,
BPF_LOCAL_STORAGE_GET_F_CREATE);
if (stg)


@ -12,6 +12,8 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include "bind_prog.h"
#define SERV4_IP 0xc0a801feU /* 192.168.1.254 */
#define SERV4_PORT 4040
#define SERV4_REWRITE_IP 0x7f000001U /* 127.0.0.1 */
@ -118,23 +120,23 @@ int bind_v4_prog(struct bpf_sock_addr *ctx)
// u8 narrow loads:
user_ip4 = 0;
user_ip4 |= ((volatile __u8 *)&ctx->user_ip4)[0] << 0;
user_ip4 |= ((volatile __u8 *)&ctx->user_ip4)[1] << 8;
user_ip4 |= ((volatile __u8 *)&ctx->user_ip4)[2] << 16;
user_ip4 |= ((volatile __u8 *)&ctx->user_ip4)[3] << 24;
user_ip4 |= load_byte(ctx->user_ip4, 0, sizeof(user_ip4));
user_ip4 |= load_byte(ctx->user_ip4, 1, sizeof(user_ip4));
user_ip4 |= load_byte(ctx->user_ip4, 2, sizeof(user_ip4));
user_ip4 |= load_byte(ctx->user_ip4, 3, sizeof(user_ip4));
if (ctx->user_ip4 != user_ip4)
return 0;
user_port = 0;
user_port |= ((volatile __u8 *)&ctx->user_port)[0] << 0;
user_port |= ((volatile __u8 *)&ctx->user_port)[1] << 8;
user_port |= load_byte(ctx->user_port, 0, sizeof(user_port));
user_port |= load_byte(ctx->user_port, 1, sizeof(user_port));
if (ctx->user_port != user_port)
return 0;
// u16 narrow loads:
user_ip4 = 0;
user_ip4 |= ((volatile __u16 *)&ctx->user_ip4)[0] << 0;
user_ip4 |= ((volatile __u16 *)&ctx->user_ip4)[1] << 16;
user_ip4 |= load_word(ctx->user_ip4, 0, sizeof(user_ip4));
user_ip4 |= load_word(ctx->user_ip4, 1, sizeof(user_ip4));
if (ctx->user_ip4 != user_ip4)
return 0;
@ -156,4 +158,10 @@ int bind_v4_prog(struct bpf_sock_addr *ctx)
return 1;
}
SEC("cgroup/bind4")
int bind_v4_deny_prog(struct bpf_sock_addr *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";

View File

@ -12,6 +12,8 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include "bind_prog.h"
#define SERV6_IP_0 0xfaceb00c /* face:b00c:1234:5678::abcd */
#define SERV6_IP_1 0x12345678
#define SERV6_IP_2 0x00000000
@ -129,25 +131,25 @@ int bind_v6_prog(struct bpf_sock_addr *ctx)
// u8 narrow loads:
for (i = 0; i < 4; i++) {
user_ip6 = 0;
user_ip6 |= ((volatile __u8 *)&ctx->user_ip6[i])[0] << 0;
user_ip6 |= ((volatile __u8 *)&ctx->user_ip6[i])[1] << 8;
user_ip6 |= ((volatile __u8 *)&ctx->user_ip6[i])[2] << 16;
user_ip6 |= ((volatile __u8 *)&ctx->user_ip6[i])[3] << 24;
user_ip6 |= load_byte(ctx->user_ip6[i], 0, sizeof(user_ip6));
user_ip6 |= load_byte(ctx->user_ip6[i], 1, sizeof(user_ip6));
user_ip6 |= load_byte(ctx->user_ip6[i], 2, sizeof(user_ip6));
user_ip6 |= load_byte(ctx->user_ip6[i], 3, sizeof(user_ip6));
if (ctx->user_ip6[i] != user_ip6)
return 0;
}
user_port = 0;
user_port |= ((volatile __u8 *)&ctx->user_port)[0] << 0;
user_port |= ((volatile __u8 *)&ctx->user_port)[1] << 8;
user_port |= load_byte(ctx->user_port, 0, sizeof(user_port));
user_port |= load_byte(ctx->user_port, 1, sizeof(user_port));
if (ctx->user_port != user_port)
return 0;
// u16 narrow loads:
for (i = 0; i < 4; i++) {
user_ip6 = 0;
user_ip6 |= ((volatile __u16 *)&ctx->user_ip6[i])[0] << 0;
user_ip6 |= ((volatile __u16 *)&ctx->user_ip6[i])[1] << 16;
user_ip6 |= load_word(ctx->user_ip6[i], 0, sizeof(user_ip6));
user_ip6 |= load_word(ctx->user_ip6[i], 1, sizeof(user_ip6));
if (ctx->user_ip6[i] != user_ip6)
return 0;
}
@ -173,4 +175,10 @@ int bind_v6_prog(struct bpf_sock_addr *ctx)
return 1;
}
SEC("cgroup/bind6")
int bind_v6_deny_prog(struct bpf_sock_addr *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";

View File

@ -0,0 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __BIND_PROG_H__
#define __BIND_PROG_H__
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define load_byte(src, b, s) \
(((volatile __u8 *)&(src))[b] << 8 * b)
#define load_word(src, w, s) \
(((volatile __u16 *)&(src))[w] << 16 * w)
#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define load_byte(src, b, s) \
(((volatile __u8 *)&(src))[(b) + (sizeof(src) - (s))] << 8 * ((s) - (b) - 1))
#define load_word(src, w, s) \
(((volatile __u16 *)&(src))[w] << 16 * (((s) / 2) - (w) - 1))
#else
# error "Fix your compiler's __BYTE_ORDER__?!"
#endif
#endif
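For illustration, with a hypothetical value src = 0x11223344 on a little-endian host (bytes in memory: 44 33 22 11), the macros above yield:

load_byte(src, 0, 4) == 0x00000044
load_byte(src, 1, 4) == 0x00003300
load_byte(src, 2, 4) == 0x00220000
load_byte(src, 3, 4) == 0x11000000

OR-ing the four partial loads reconstructs 0x11223344. On a big-endian host the index offset and shift count are mirrored so the OR produces the same result, which is what lets bind_v4_prog/bind_v6_prog compare the byte-by-byte (and half-word) reassembly against the full-width context read independent of endianness.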

View File

@ -0,0 +1,189 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Highlights:
* 1. The major difference between this bpf program and tcp_cubic.c
* is that this bpf program relies on `cong_control` rather than
* `cong_avoid` in the struct tcp_congestion_ops.
* 2. Logic such as tcp_cwnd_reduction, tcp_cong_avoid, and
* tcp_update_pacing_rate is bypassed when `cong_control` is
* defined, so that logic is moved into `cong_control` here.
* 3. WARNING: This bpf program is NOT the same as tcp_cubic.c.
* The main purpose is to show use cases of the arguments in
* `cong_control`. For simplicity's sake, it reuses tcp cubic's
* kernel functions.
*/
#include "bpf_tracing_net.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#define USEC_PER_SEC 1000000UL
#define TCP_PACING_SS_RATIO (200)
#define TCP_PACING_CA_RATIO (120)
#define TCP_REORDERING (12)
#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))
#define after(seq2, seq1) before(seq1, seq2)
extern void cubictcp_init(struct sock *sk) __ksym;
extern void cubictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event) __ksym;
extern __u32 cubictcp_recalc_ssthresh(struct sock *sk) __ksym;
extern void cubictcp_state(struct sock *sk, __u8 new_state) __ksym;
extern __u32 tcp_reno_undo_cwnd(struct sock *sk) __ksym;
extern void cubictcp_acked(struct sock *sk, const struct ack_sample *sample) __ksym;
extern void cubictcp_cong_avoid(struct sock *sk, __u32 ack, __u32 acked) __ksym;
static bool before(__u32 seq1, __u32 seq2)
{
return (__s32)(seq1-seq2) < 0;
}
static __u64 div64_u64(__u64 dividend, __u64 divisor)
{
return dividend / divisor;
}
static void tcp_update_pacing_rate(struct sock *sk)
{
const struct tcp_sock *tp = tcp_sk(sk);
__u64 rate;
/* set sk_pacing_rate to 200 % of current rate (mss * cwnd / srtt) */
rate = (__u64)tp->mss_cache * ((USEC_PER_SEC / 100) << 3);
/* current rate is (cwnd * mss) / srtt
* In Slow Start [1], set sk_pacing_rate to 200 % the current rate.
* In Congestion Avoidance phase, set it to 120 % the current rate.
*
* [1] : Normal Slow Start condition is (tp->snd_cwnd < tp->snd_ssthresh)
* If snd_cwnd >= (tp->snd_ssthresh / 2), we are approaching
* end of slow start and should slow down.
*/
if (tp->snd_cwnd < tp->snd_ssthresh / 2)
rate *= TCP_PACING_SS_RATIO;
else
rate *= TCP_PACING_CA_RATIO;
rate *= max(tp->snd_cwnd, tp->packets_out);
if (tp->srtt_us)
rate = div64_u64(rate, (__u64)tp->srtt_us);
sk->sk_pacing_rate = min(rate, sk->sk_max_pacing_rate);
}
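As a rough worked example of the arithmetic above (all numbers hypothetical): with mss_cache = 1448, snd_cwnd = 10, packets_out <= 10 and a smoothed RTT of 10 ms (tp->srtt_us = 80000, i.e. microseconds << 3), rate starts at 1448 * ((1000000 / 100) << 3) = 115,840,000; in slow start it is scaled by TCP_PACING_SS_RATIO (200) to 23,168,000,000, multiplied by the cwnd of 10 to 231,680,000,000, and divided by srtt_us to give about 2,896,000 bytes/sec, i.e. 200% of cwnd * mss / RTT = 1,448,000 bytes/sec, as the comment describes.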
static void tcp_cwnd_reduction(struct sock *sk, int newly_acked_sacked,
int newly_lost, int flag)
{
struct tcp_sock *tp = tcp_sk(sk);
int sndcnt = 0;
__u32 pkts_in_flight = tp->packets_out - (tp->sacked_out + tp->lost_out) + tp->retrans_out;
int delta = tp->snd_ssthresh - pkts_in_flight;
if (newly_acked_sacked <= 0 || !tp->prior_cwnd)
return;
__u32 prr_delivered = tp->prr_delivered + newly_acked_sacked;
if (delta < 0) {
__u64 dividend =
(__u64)tp->snd_ssthresh * prr_delivered + tp->prior_cwnd - 1;
sndcnt = (__u32)div64_u64(dividend, (__u64)tp->prior_cwnd) - tp->prr_out;
} else {
sndcnt = max(prr_delivered - tp->prr_out, newly_acked_sacked);
if (flag & FLAG_SND_UNA_ADVANCED && !newly_lost)
sndcnt++;
sndcnt = min(delta, sndcnt);
}
/* Force a fast retransmit upon entering fast recovery */
sndcnt = max(sndcnt, (tp->prr_out ? 0 : 1));
tp->snd_cwnd = pkts_in_flight + sndcnt;
}
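A hedged worked example of the reduction path (hypothetical numbers): assume prior_cwnd = 20, snd_ssthresh = 10, 16 packets in flight (so delta = -6), newly_acked_sacked = 2, and prr_delivered and prr_out previously 0. The dividend is 10 * 2 + 20 - 1 = 39, sndcnt = 39 / 20 - 0 = 1, and snd_cwnd becomes 16 + 1 = 17: roughly one new segment is released for every two delivered, so the amount in flight drifts down toward ssthresh in proportion to delivery rather than collapsing all at once.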
/* Decide whether to run the increase function of congestion control. */
static bool tcp_may_raise_cwnd(const struct sock *sk, const int flag)
{
if (tcp_sk(sk)->reordering > TCP_REORDERING)
return flag & FLAG_FORWARD_PROGRESS;
return flag & FLAG_DATA_ACKED;
}
SEC("struct_ops")
void BPF_PROG(bpf_cubic_init, struct sock *sk)
{
cubictcp_init(sk);
}
SEC("struct_ops")
void BPF_PROG(bpf_cubic_cwnd_event, struct sock *sk, enum tcp_ca_event event)
{
cubictcp_cwnd_event(sk, event);
}
SEC("struct_ops")
void BPF_PROG(bpf_cubic_cong_control, struct sock *sk, __u32 ack, int flag,
const struct rate_sample *rs)
{
struct tcp_sock *tp = tcp_sk(sk);
if (((1<<TCP_CA_CWR) | (1<<TCP_CA_Recovery)) &
(1 << inet_csk(sk)->icsk_ca_state)) {
/* Reduce cwnd if state mandates */
tcp_cwnd_reduction(sk, rs->acked_sacked, rs->losses, flag);
if (!before(tp->snd_una, tp->high_seq)) {
/* Reset cwnd to ssthresh in CWR or Recovery (unless it's undone) */
if (tp->snd_ssthresh < TCP_INFINITE_SSTHRESH &&
inet_csk(sk)->icsk_ca_state == TCP_CA_CWR) {
tp->snd_cwnd = tp->snd_ssthresh;
tp->snd_cwnd_stamp = tcp_jiffies32;
}
}
} else if (tcp_may_raise_cwnd(sk, flag)) {
/* Advance cwnd if state allows */
cubictcp_cong_avoid(sk, ack, rs->acked_sacked);
tp->snd_cwnd_stamp = tcp_jiffies32;
}
tcp_update_pacing_rate(sk);
}
SEC("struct_ops")
__u32 BPF_PROG(bpf_cubic_recalc_ssthresh, struct sock *sk)
{
return cubictcp_recalc_ssthresh(sk);
}
SEC("struct_ops")
void BPF_PROG(bpf_cubic_state, struct sock *sk, __u8 new_state)
{
cubictcp_state(sk, new_state);
}
SEC("struct_ops")
void BPF_PROG(bpf_cubic_acked, struct sock *sk, const struct ack_sample *sample)
{
cubictcp_acked(sk, sample);
}
SEC("struct_ops")
__u32 BPF_PROG(bpf_cubic_undo_cwnd, struct sock *sk)
{
return tcp_reno_undo_cwnd(sk);
}
SEC(".struct_ops")
struct tcp_congestion_ops cc_cubic = {
.init = (void *)bpf_cubic_init,
.ssthresh = (void *)bpf_cubic_recalc_ssthresh,
.cong_control = (void *)bpf_cubic_cong_control,
.set_state = (void *)bpf_cubic_state,
.undo_cwnd = (void *)bpf_cubic_undo_cwnd,
.cwnd_event = (void *)bpf_cubic_cwnd_event,
.pkts_acked = (void *)bpf_cubic_acked,
.name = "bpf_cc_cubic",
};
char _license[] SEC("license") = "GPL";
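A minimal userspace sketch of how this congestion control could be registered and used, assuming the object is built into a skeleton named bpf_cc_cubic (the skeleton and type names below are assumptions based on the .name field above, not part of the diff):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <bpf/libbpf.h>
#include "bpf_cc_cubic.skel.h"

static int use_bpf_cc_cubic(int sockfd)
{
	static const char cc[] = "bpf_cc_cubic";	/* must match .name above */
	struct bpf_cc_cubic *skel;
	struct bpf_link *link;
	int err;

	skel = bpf_cc_cubic__open_and_load();
	if (!skel)
		return -1;

	/* Registers the cc_cubic tcp_congestion_ops map with the kernel. */
	link = bpf_map__attach_struct_ops(skel->maps.cc_cubic);
	if (!link) {
		bpf_cc_cubic__destroy(skel);
		return -1;
	}

	/* Point the socket at the BPF congestion control by name. A real
	 * loader would keep link and skel around; destroying them
	 * unregisters the congestion control again. */
	err = setsockopt(sockfd, IPPROTO_TCP, TCP_CONGESTION, cc, sizeof(cc));
	return err;
}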

View File

@ -14,14 +14,22 @@
* "ca->ack_cnt / delta" operation.
*/
#include <linux/bpf.h>
#include <linux/stddef.h>
#include <linux/tcp.h>
#include "bpf_tcp_helpers.h"
#include "bpf_tracing_net.h"
#include <bpf/bpf_tracing.h>
char _license[] SEC("license") = "GPL";
#define clamp(val, lo, hi) min((typeof(val))max(val, lo), hi)
#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))
static bool before(__u32 seq1, __u32 seq2)
{
return (__s32)(seq1-seq2) < 0;
}
#define after(seq2, seq1) before(seq1, seq2)
extern __u32 tcp_slow_start(struct tcp_sock *tp, __u32 acked) __ksym;
extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
#define BICTCP_BETA_SCALE 1024 /* Scale factor beta calculation
* max_cwnd = snd_cwnd * beta
@ -70,7 +78,7 @@ static const __u64 cube_factor = (__u64)(1ull << (10+3*BICTCP_HZ))
/ (bic_scale * 10);
/* BIC TCP Parameters */
struct bictcp {
struct bpf_bictcp {
__u32 cnt; /* increase cwnd by 1 after ACKs */
__u32 last_max_cwnd; /* last maximum snd_cwnd */
__u32 last_cwnd; /* the last snd_cwnd */
@ -91,7 +99,7 @@ struct bictcp {
__u32 curr_rtt; /* the minimum rtt of current round */
};
static inline void bictcp_reset(struct bictcp *ca)
static void bictcp_reset(struct bpf_bictcp *ca)
{
ca->cnt = 0;
ca->last_max_cwnd = 0;
@ -112,7 +120,7 @@ extern unsigned long CONFIG_HZ __kconfig;
#define USEC_PER_SEC 1000000UL
#define USEC_PER_JIFFY (USEC_PER_SEC / HZ)
static __always_inline __u64 div64_u64(__u64 dividend, __u64 divisor)
static __u64 div64_u64(__u64 dividend, __u64 divisor)
{
return dividend / divisor;
}
@ -120,7 +128,7 @@ static __always_inline __u64 div64_u64(__u64 dividend, __u64 divisor)
#define div64_ul div64_u64
#define BITS_PER_U64 (sizeof(__u64) * 8)
static __always_inline int fls64(__u64 x)
static int fls64(__u64 x)
{
int num = BITS_PER_U64 - 1;
@ -153,15 +161,15 @@ static __always_inline int fls64(__u64 x)
return num + 1;
}
static __always_inline __u32 bictcp_clock_us(const struct sock *sk)
static __u32 bictcp_clock_us(const struct sock *sk)
{
return tcp_sk(sk)->tcp_mstamp;
}
static __always_inline void bictcp_hystart_reset(struct sock *sk)
static void bictcp_hystart_reset(struct sock *sk)
{
struct tcp_sock *tp = tcp_sk(sk);
struct bictcp *ca = inet_csk_ca(sk);
struct bpf_bictcp *ca = inet_csk_ca(sk);
ca->round_start = ca->last_ack = bictcp_clock_us(sk);
ca->end_seq = tp->snd_nxt;
@ -169,11 +177,10 @@ static __always_inline void bictcp_hystart_reset(struct sock *sk)
ca->sample_cnt = 0;
}
/* "struct_ops/" prefix is a requirement */
SEC("struct_ops/bpf_cubic_init")
SEC("struct_ops")
void BPF_PROG(bpf_cubic_init, struct sock *sk)
{
struct bictcp *ca = inet_csk_ca(sk);
struct bpf_bictcp *ca = inet_csk_ca(sk);
bictcp_reset(ca);
@ -184,12 +191,11 @@ void BPF_PROG(bpf_cubic_init, struct sock *sk)
tcp_sk(sk)->snd_ssthresh = initial_ssthresh;
}
/* "struct_ops" prefix is a requirement */
SEC("struct_ops/bpf_cubic_cwnd_event")
SEC("struct_ops")
void BPF_PROG(bpf_cubic_cwnd_event, struct sock *sk, enum tcp_ca_event event)
{
if (event == CA_EVENT_TX_START) {
struct bictcp *ca = inet_csk_ca(sk);
struct bpf_bictcp *ca = inet_csk_ca(sk);
__u32 now = tcp_jiffies32;
__s32 delta;
@ -230,7 +236,7 @@ static const __u8 v[] = {
* Newton-Raphson iteration.
* Avg err ~= 0.195%
*/
static __always_inline __u32 cubic_root(__u64 a)
static __u32 cubic_root(__u64 a)
{
__u32 x, b, shift;
@ -263,8 +269,7 @@ static __always_inline __u32 cubic_root(__u64 a)
/*
* Compute congestion window to use.
*/
static __always_inline void bictcp_update(struct bictcp *ca, __u32 cwnd,
__u32 acked)
static void bictcp_update(struct bpf_bictcp *ca, __u32 cwnd, __u32 acked)
{
__u32 delta, bic_target, max_cnt;
__u64 offs, t;
@ -377,11 +382,11 @@ tcp_friendliness:
ca->cnt = max(ca->cnt, 2U);
}
/* Or simply use the BPF_STRUCT_OPS to avoid the SEC boiler plate. */
void BPF_STRUCT_OPS(bpf_cubic_cong_avoid, struct sock *sk, __u32 ack, __u32 acked)
SEC("struct_ops")
void BPF_PROG(bpf_cubic_cong_avoid, struct sock *sk, __u32 ack, __u32 acked)
{
struct tcp_sock *tp = tcp_sk(sk);
struct bictcp *ca = inet_csk_ca(sk);
struct bpf_bictcp *ca = inet_csk_ca(sk);
if (!tcp_is_cwnd_limited(sk))
return;
@ -397,10 +402,11 @@ void BPF_STRUCT_OPS(bpf_cubic_cong_avoid, struct sock *sk, __u32 ack, __u32 acke
tcp_cong_avoid_ai(tp, ca->cnt, acked);
}
__u32 BPF_STRUCT_OPS(bpf_cubic_recalc_ssthresh, struct sock *sk)
SEC("struct_ops")
__u32 BPF_PROG(bpf_cubic_recalc_ssthresh, struct sock *sk)
{
const struct tcp_sock *tp = tcp_sk(sk);
struct bictcp *ca = inet_csk_ca(sk);
struct bpf_bictcp *ca = inet_csk_ca(sk);
ca->epoch_start = 0; /* end of epoch */
@ -414,7 +420,8 @@ __u32 BPF_STRUCT_OPS(bpf_cubic_recalc_ssthresh, struct sock *sk)
return max((tp->snd_cwnd * beta) / BICTCP_BETA_SCALE, 2U);
}
void BPF_STRUCT_OPS(bpf_cubic_state, struct sock *sk, __u8 new_state)
SEC("struct_ops")
void BPF_PROG(bpf_cubic_state, struct sock *sk, __u8 new_state)
{
if (new_state == TCP_CA_Loss) {
bictcp_reset(inet_csk_ca(sk));
@ -433,7 +440,7 @@ void BPF_STRUCT_OPS(bpf_cubic_state, struct sock *sk, __u8 new_state)
* We apply another 100% factor because @rate is doubled at this point.
* We cap the cushion to 1ms.
*/
static __always_inline __u32 hystart_ack_delay(struct sock *sk)
static __u32 hystart_ack_delay(struct sock *sk)
{
unsigned long rate;
@ -444,10 +451,10 @@ static __always_inline __u32 hystart_ack_delay(struct sock *sk)
div64_ul((__u64)GSO_MAX_SIZE * 4 * USEC_PER_SEC, rate));
}
static __always_inline void hystart_update(struct sock *sk, __u32 delay)
static void hystart_update(struct sock *sk, __u32 delay)
{
struct tcp_sock *tp = tcp_sk(sk);
struct bictcp *ca = inet_csk_ca(sk);
struct bpf_bictcp *ca = inet_csk_ca(sk);
__u32 threshold;
if (hystart_detect & HYSTART_ACK_TRAIN) {
@ -492,11 +499,11 @@ static __always_inline void hystart_update(struct sock *sk, __u32 delay)
int bpf_cubic_acked_called = 0;
void BPF_STRUCT_OPS(bpf_cubic_acked, struct sock *sk,
const struct ack_sample *sample)
SEC("struct_ops")
void BPF_PROG(bpf_cubic_acked, struct sock *sk, const struct ack_sample *sample)
{
const struct tcp_sock *tp = tcp_sk(sk);
struct bictcp *ca = inet_csk_ca(sk);
struct bpf_bictcp *ca = inet_csk_ca(sk);
__u32 delay;
bpf_cubic_acked_called = 1;
@ -524,7 +531,8 @@ void BPF_STRUCT_OPS(bpf_cubic_acked, struct sock *sk,
extern __u32 tcp_reno_undo_cwnd(struct sock *sk) __ksym;
__u32 BPF_STRUCT_OPS(bpf_cubic_undo_cwnd, struct sock *sk)
SEC("struct_ops")
__u32 BPF_PROG(bpf_cubic_undo_cwnd, struct sock *sk)
{
return tcp_reno_undo_cwnd(sk);
}

View File

@ -6,15 +6,23 @@
* the kernel BPF logic.
*/
#include <stddef.h>
#include <linux/bpf.h>
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/tcp.h>
#include <errno.h>
#include "bpf_tracing_net.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_tcp_helpers.h"
#ifndef EBUSY
#define EBUSY 16
#endif
#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))
#define min_not_zero(x, y) ({ \
typeof(x) __x = (x); \
typeof(y) __y = (y); \
__x == 0 ? __y : ((__y == 0) ? __x : min(__x, __y)); })
static bool before(__u32 seq1, __u32 seq2)
{
return (__s32)(seq1-seq2) < 0;
}
char _license[] SEC("license") = "GPL";
@ -35,7 +43,7 @@ struct {
#define DCTCP_MAX_ALPHA 1024U
struct dctcp {
struct bpf_dctcp {
__u32 old_delivered;
__u32 old_delivered_ce;
__u32 prior_rcv_nxt;
@ -48,8 +56,7 @@ struct dctcp {
static unsigned int dctcp_shift_g = 4; /* g = 1/2^4 */
static unsigned int dctcp_alpha_on_init = DCTCP_MAX_ALPHA;
static __always_inline void dctcp_reset(const struct tcp_sock *tp,
struct dctcp *ca)
static void dctcp_reset(const struct tcp_sock *tp, struct bpf_dctcp *ca)
{
ca->next_seq = tp->snd_nxt;
@ -57,11 +64,11 @@ static __always_inline void dctcp_reset(const struct tcp_sock *tp,
ca->old_delivered_ce = tp->delivered_ce;
}
SEC("struct_ops/dctcp_init")
SEC("struct_ops")
void BPF_PROG(dctcp_init, struct sock *sk)
{
const struct tcp_sock *tp = tcp_sk(sk);
struct dctcp *ca = inet_csk_ca(sk);
struct bpf_dctcp *ca = inet_csk_ca(sk);
int *stg;
if (!(tp->ecn_flags & TCP_ECN_OK) && fallback[0]) {
@ -104,21 +111,21 @@ void BPF_PROG(dctcp_init, struct sock *sk)
dctcp_reset(tp, ca);
}
SEC("struct_ops/dctcp_ssthresh")
SEC("struct_ops")
__u32 BPF_PROG(dctcp_ssthresh, struct sock *sk)
{
struct dctcp *ca = inet_csk_ca(sk);
struct bpf_dctcp *ca = inet_csk_ca(sk);
struct tcp_sock *tp = tcp_sk(sk);
ca->loss_cwnd = tp->snd_cwnd;
return max(tp->snd_cwnd - ((tp->snd_cwnd * ca->dctcp_alpha) >> 11U), 2U);
}
SEC("struct_ops/dctcp_update_alpha")
SEC("struct_ops")
void BPF_PROG(dctcp_update_alpha, struct sock *sk, __u32 flags)
{
const struct tcp_sock *tp = tcp_sk(sk);
struct dctcp *ca = inet_csk_ca(sk);
struct bpf_dctcp *ca = inet_csk_ca(sk);
/* Expired RTT */
if (!before(tp->snd_una, ca->next_seq)) {
@ -144,16 +151,16 @@ void BPF_PROG(dctcp_update_alpha, struct sock *sk, __u32 flags)
}
}
static __always_inline void dctcp_react_to_loss(struct sock *sk)
static void dctcp_react_to_loss(struct sock *sk)
{
struct dctcp *ca = inet_csk_ca(sk);
struct bpf_dctcp *ca = inet_csk_ca(sk);
struct tcp_sock *tp = tcp_sk(sk);
ca->loss_cwnd = tp->snd_cwnd;
tp->snd_ssthresh = max(tp->snd_cwnd >> 1U, 2U);
}
SEC("struct_ops/dctcp_state")
SEC("struct_ops")
void BPF_PROG(dctcp_state, struct sock *sk, __u8 new_state)
{
if (new_state == TCP_CA_Recovery &&
@ -164,7 +171,7 @@ void BPF_PROG(dctcp_state, struct sock *sk, __u8 new_state)
*/
}
static __always_inline void dctcp_ece_ack_cwr(struct sock *sk, __u32 ce_state)
static void dctcp_ece_ack_cwr(struct sock *sk, __u32 ce_state)
{
struct tcp_sock *tp = tcp_sk(sk);
@ -179,9 +186,8 @@ static __always_inline void dctcp_ece_ack_cwr(struct sock *sk, __u32 ce_state)
* S: 0 <- last pkt was non-CE
* 1 <- last pkt was CE
*/
static __always_inline
void dctcp_ece_ack_update(struct sock *sk, enum tcp_ca_event evt,
__u32 *prior_rcv_nxt, __u32 *ce_state)
static void dctcp_ece_ack_update(struct sock *sk, enum tcp_ca_event evt,
__u32 *prior_rcv_nxt, __u32 *ce_state)
{
__u32 new_ce_state = (evt == CA_EVENT_ECN_IS_CE) ? 1 : 0;
@ -201,10 +207,10 @@ void dctcp_ece_ack_update(struct sock *sk, enum tcp_ca_event evt,
dctcp_ece_ack_cwr(sk, new_ce_state);
}
SEC("struct_ops/dctcp_cwnd_event")
SEC("struct_ops")
void BPF_PROG(dctcp_cwnd_event, struct sock *sk, enum tcp_ca_event ev)
{
struct dctcp *ca = inet_csk_ca(sk);
struct bpf_dctcp *ca = inet_csk_ca(sk);
switch (ev) {
case CA_EVENT_ECN_IS_CE:
@ -220,17 +226,17 @@ void BPF_PROG(dctcp_cwnd_event, struct sock *sk, enum tcp_ca_event ev)
}
}
SEC("struct_ops/dctcp_cwnd_undo")
SEC("struct_ops")
__u32 BPF_PROG(dctcp_cwnd_undo, struct sock *sk)
{
const struct dctcp *ca = inet_csk_ca(sk);
const struct bpf_dctcp *ca = inet_csk_ca(sk);
return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
}
extern void tcp_reno_cong_avoid(struct sock *sk, __u32 ack, __u32 acked) __ksym;
SEC("struct_ops/dctcp_reno_cong_avoid")
SEC("struct_ops")
void BPF_PROG(dctcp_cong_avoid, struct sock *sk, __u32 ack, __u32 acked)
{
tcp_reno_cong_avoid(sk, ack, acked);

View File

@ -1,19 +1,15 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <stddef.h>
#include <linux/bpf.h>
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/tcp.h>
#include "bpf_tracing_net.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_tcp_helpers.h"
char _license[] SEC("license") = "GPL";
const char cubic[] = "cubic";
void BPF_STRUCT_OPS(dctcp_nouse_release, struct sock *sk)
SEC("struct_ops")
void BPF_PROG(dctcp_nouse_release, struct sock *sk)
{
bpf_setsockopt(sk, SOL_TCP, TCP_CONGESTION,
(void *)cubic, sizeof(cubic));

View File

@ -1,14 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>
#include "bpf_tracing_net.h"
#include <bpf/bpf_tracing.h>
#include "bpf_tcp_helpers.h"
char _license[] SEC("license") = "X";
void BPF_STRUCT_OPS(nogpltcp_init, struct sock *sk)
SEC("struct_ops")
void BPF_PROG(nogpltcp_init, struct sock *sk)
{
}

View File

@ -2,6 +2,9 @@
#ifndef __BPF_TRACING_NET_H__
#define __BPF_TRACING_NET_H__
#include <vmlinux.h>
#include <bpf/bpf_core_read.h>
#define AF_INET 2
#define AF_INET6 10
@ -22,6 +25,7 @@
#define IP_TOS 1
#define SOL_IPV6 41
#define IPV6_TCLASS 67
#define IPV6_AUTOFLOWLABEL 70
@ -46,6 +50,13 @@
#define TCP_CA_NAME_MAX 16
#define TCP_NAGLE_OFF 1
#define TCP_ECN_OK 1
#define TCP_ECN_QUEUE_CWR 2
#define TCP_ECN_DEMAND_CWR 4
#define TCP_ECN_SEEN 8
#define TCP_CONG_NEEDS_ECN 0x2
#define ICSK_TIME_RETRANS 1
#define ICSK_TIME_PROBE0 3
#define ICSK_TIME_LOSS_PROBE 5
@ -80,6 +91,14 @@
#define TCP_INFINITE_SSTHRESH 0x7fffffff
#define TCP_PINGPONG_THRESH 3
#define FLAG_DATA_ACKED 0x04 /* This ACK acknowledged new data. */
#define FLAG_SYN_ACKED 0x10 /* This ACK acknowledged SYN. */
#define FLAG_DATA_SACKED 0x20 /* New SACK. */
#define FLAG_SND_UNA_ADVANCED \
0x400 /* Snd_una was changed (!= FLAG_DATA_ACKED) */
#define FLAG_ACKED (FLAG_DATA_ACKED | FLAG_SYN_ACKED)
#define FLAG_FORWARD_PROGRESS (FLAG_ACKED | FLAG_DATA_SACKED)
#define fib_nh_dev nh_common.nhc_dev
#define fib_nh_gw_family nh_common.nhc_gw_family
#define fib_nh_gw6 nh_common.nhc_gw.ipv6
@ -119,4 +138,37 @@
#define tw_v6_daddr __tw_common.skc_v6_daddr
#define tw_v6_rcv_saddr __tw_common.skc_v6_rcv_saddr
#define tcp_jiffies32 ((__u32)bpf_jiffies64())
static inline struct inet_connection_sock *inet_csk(const struct sock *sk)
{
return (struct inet_connection_sock *)sk;
}
static inline void *inet_csk_ca(const struct sock *sk)
{
return (void *)inet_csk(sk)->icsk_ca_priv;
}
static inline struct tcp_sock *tcp_sk(const struct sock *sk)
{
return (struct tcp_sock *)sk;
}
static inline bool tcp_in_slow_start(const struct tcp_sock *tp)
{
return tp->snd_cwnd < tp->snd_ssthresh;
}
static inline bool tcp_is_cwnd_limited(const struct sock *sk)
{
const struct tcp_sock *tp = tcp_sk(sk);
/* If in slow start, ensure cwnd grows to twice what was ACKed. */
if (tcp_in_slow_start(tp))
return tp->snd_cwnd < 2 * tp->max_packets_out;
return !!BPF_CORE_READ_BITFIELD(tp, is_cwnd_limited);
}
#endif

View File

@ -14,8 +14,6 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include "bpf_tcp_helpers.h"
#define SRC_REWRITE_IP4 0x7f000004U
#define DST_REWRITE_IP4 0x7f000001U
#define DST_REWRITE_PORT4 4444
@ -32,6 +30,10 @@
#define IFNAMSIZ 16
#endif
#ifndef SOL_TCP
#define SOL_TCP 6
#endif
__attribute__ ((noinline)) __weak
int do_bind(struct bpf_sock_addr *ctx)
{
@ -197,4 +199,10 @@ int connect_v4_prog(struct bpf_sock_addr *ctx)
return do_bind(ctx) ? 1 : 0;
}
SEC("cgroup/connect4")
int connect_v4_deny_prog(struct bpf_sock_addr *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";

View File

@ -90,4 +90,10 @@ int connect_v6_prog(struct bpf_sock_addr *ctx)
return 1;
}
SEC("cgroup/connect6")
int connect_v6_deny_prog(struct bpf_sock_addr *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";

View File

@ -36,4 +36,10 @@ int connect_unix_prog(struct bpf_sock_addr *ctx)
return 1;
}
SEC("cgroup/connect_unix")
int connect_unix_deny_prog(struct bpf_sock_addr *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";

View File

@ -9,7 +9,7 @@
int err;
#define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8)))
#define private(name) SEC(".bss." #name) __attribute__((aligned(8)))
private(MASK) static struct bpf_cpumask __kptr * global_mask;
struct __cpumask_map_value {

View File

@ -61,11 +61,8 @@ SEC("tp_btf/task_newtask")
__failure __msg("bpf_cpumask_set_cpu args#1 expected pointer to STRUCT bpf_cpumask")
int BPF_PROG(test_mutate_cpumask, struct task_struct *task, u64 clone_flags)
{
struct bpf_cpumask *cpumask;
/* Can't set the CPU of a non-struct bpf_cpumask. */
bpf_cpumask_set_cpu(0, (struct bpf_cpumask *)task->cpus_ptr);
__sink(cpumask);
return 0;
}

View File

@ -80,7 +80,7 @@ SEC("?raw_tp")
__failure __msg("Unreleased reference id=2")
int ringbuf_missing_release1(void *ctx)
{
struct bpf_dynptr ptr;
struct bpf_dynptr ptr = {};
bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
@ -1385,7 +1385,7 @@ SEC("?raw_tp")
__failure __msg("Expected an initialized dynptr as arg #1")
int dynptr_adjust_invalid(void *ctx)
{
struct bpf_dynptr ptr;
struct bpf_dynptr ptr = {};
/* this should fail */
bpf_dynptr_adjust(&ptr, 1, 2);
@ -1398,7 +1398,7 @@ SEC("?raw_tp")
__failure __msg("Expected an initialized dynptr as arg #1")
int dynptr_is_null_invalid(void *ctx)
{
struct bpf_dynptr ptr;
struct bpf_dynptr ptr = {};
/* this should fail */
bpf_dynptr_is_null(&ptr);
@ -1411,7 +1411,7 @@ SEC("?raw_tp")
__failure __msg("Expected an initialized dynptr as arg #1")
int dynptr_is_rdonly_invalid(void *ctx)
{
struct bpf_dynptr ptr;
struct bpf_dynptr ptr = {};
/* this should fail */
bpf_dynptr_is_rdonly(&ptr);
@ -1424,7 +1424,7 @@ SEC("?raw_tp")
__failure __msg("Expected an initialized dynptr as arg #1")
int dynptr_size_invalid(void *ctx)
{
struct bpf_dynptr ptr;
struct bpf_dynptr ptr = {};
/* this should fail */
bpf_dynptr_size(&ptr);
@ -1437,7 +1437,7 @@ SEC("?raw_tp")
__failure __msg("Expected an initialized dynptr as arg #1")
int clone_invalid1(void *ctx)
{
struct bpf_dynptr ptr1;
struct bpf_dynptr ptr1 = {};
struct bpf_dynptr ptr2;
/* this should fail */

View File

@ -3,8 +3,8 @@
#include <linux/types.h>
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
#include "bpf_tracing_net.h"
struct bpf_fib_lookup fib_params = {};
int fib_lookup_ret = 0;

View File

@ -0,0 +1,24 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2024 Google LLC */
#include "vmlinux.h"
#include <string.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_core_read.h>
#include "bpf_kfuncs.h"
#define REWRITE_ADDRESS_IP4 0xc0a801fe // 192.168.1.254
#define REWRITE_ADDRESS_PORT4 4040
SEC("cgroup/getpeername4")
int getpeername_v4_prog(struct bpf_sock_addr *ctx)
{
ctx->user_ip4 = bpf_htonl(REWRITE_ADDRESS_IP4);
ctx->user_port = bpf_htons(REWRITE_ADDRESS_PORT4);
return 1;
}
char _license[] SEC("license") = "GPL";

View File

@ -0,0 +1,31 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2024 Google LLC */
#include "vmlinux.h"
#include <string.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_core_read.h>
#include "bpf_kfuncs.h"
#define REWRITE_ADDRESS_IP6_0 0xfaceb00c
#define REWRITE_ADDRESS_IP6_1 0x12345678
#define REWRITE_ADDRESS_IP6_2 0x00000000
#define REWRITE_ADDRESS_IP6_3 0x0000abcd
#define REWRITE_ADDRESS_PORT6 6060
SEC("cgroup/getpeername6")
int getpeername_v6_prog(struct bpf_sock_addr *ctx)
{
ctx->user_ip6[0] = bpf_htonl(REWRITE_ADDRESS_IP6_0);
ctx->user_ip6[1] = bpf_htonl(REWRITE_ADDRESS_IP6_1);
ctx->user_ip6[2] = bpf_htonl(REWRITE_ADDRESS_IP6_2);
ctx->user_ip6[3] = bpf_htonl(REWRITE_ADDRESS_IP6_3);
ctx->user_port = bpf_htons(REWRITE_ADDRESS_PORT6);
return 1;
}
char _license[] SEC("license") = "GPL";

View File

@ -0,0 +1,24 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2024 Google LLC */
#include "vmlinux.h"
#include <string.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_core_read.h>
#include "bpf_kfuncs.h"
#define REWRITE_ADDRESS_IP4 0xc0a801fe // 192.168.1.254
#define REWRITE_ADDRESS_PORT4 4040
SEC("cgroup/getsockname4")
int getsockname_v4_prog(struct bpf_sock_addr *ctx)
{
ctx->user_ip4 = bpf_htonl(REWRITE_ADDRESS_IP4);
ctx->user_port = bpf_htons(REWRITE_ADDRESS_PORT4);
return 1;
}
char _license[] SEC("license") = "GPL";

View File

@ -0,0 +1,31 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2024 Google LLC */
#include "vmlinux.h"
#include <string.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_core_read.h>
#include "bpf_kfuncs.h"
#define REWRITE_ADDRESS_IP6_0 0xfaceb00c
#define REWRITE_ADDRESS_IP6_1 0x12345678
#define REWRITE_ADDRESS_IP6_2 0x00000000
#define REWRITE_ADDRESS_IP6_3 0x0000abcd
#define REWRITE_ADDRESS_PORT6 6060
SEC("cgroup/getsockname6")
int getsockname_v6_prog(struct bpf_sock_addr *ctx)
{
ctx->user_ip6[0] = bpf_htonl(REWRITE_ADDRESS_IP6_0);
ctx->user_ip6[1] = bpf_htonl(REWRITE_ADDRESS_IP6_1);
ctx->user_ip6[2] = bpf_htonl(REWRITE_ADDRESS_IP6_2);
ctx->user_ip6[3] = bpf_htonl(REWRITE_ADDRESS_IP6_3);
ctx->user_port = bpf_htons(REWRITE_ADDRESS_PORT6);
return 1;
}
char _license[] SEC("license") = "GPL";
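For context, a hedged userspace sketch of what these rewrites look like from the caller's side: once the getsockname4 program above is attached to the socket's cgroup (BPF_CGROUP_INET4_GETSOCKNAME), an ordinary getsockname() call returns the rewritten address rather than the real one (the helper name below is hypothetical):

#include <assert.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static void check_rewritten_sockname(int sockfd)
{
	struct sockaddr_in sa;
	socklen_t len = sizeof(sa);

	if (getsockname(sockfd, (struct sockaddr *)&sa, &len))
		return;

	/* The hook rewrote user_ip4/user_port, so userspace observes
	 * 192.168.1.254:4040 regardless of the socket's real local address. */
	assert(sa.sin_addr.s_addr == htonl(0xc0a801fe));
	assert(sa.sin_port == htons(4040));
}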

View File

@ -4,6 +4,10 @@
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
#ifndef __clang__
#pragma GCC diagnostic ignored "-Warray-bounds"
#endif
char _license[] SEC("license") = "GPL";
struct {

View File

@ -0,0 +1,79 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <stdbool.h>
#include "bpf_kfuncs.h"
#define ARRAY_SIZE(x) (int)(sizeof(x) / sizeof((x)[0]))
char _license[] SEC("license") = "GPL";
extern const void bpf_fentry_test1 __ksym;
extern const void bpf_fentry_test2 __ksym;
extern const void bpf_fentry_test3 __ksym;
extern const void bpf_fentry_test4 __ksym;
extern const void bpf_fentry_test5 __ksym;
extern const void bpf_fentry_test6 __ksym;
extern const void bpf_fentry_test7 __ksym;
extern const void bpf_fentry_test8 __ksym;
int pid = 0;
__u64 kprobe_session_result[8];
static int session_check(void *ctx)
{
unsigned int i;
__u64 addr;
const void *kfuncs[] = {
&bpf_fentry_test1,
&bpf_fentry_test2,
&bpf_fentry_test3,
&bpf_fentry_test4,
&bpf_fentry_test5,
&bpf_fentry_test6,
&bpf_fentry_test7,
&bpf_fentry_test8,
};
if (bpf_get_current_pid_tgid() >> 32 != pid)
return 1;
addr = bpf_get_func_ip(ctx);
for (i = 0; i < ARRAY_SIZE(kfuncs); i++) {
if (kfuncs[i] == (void *) addr) {
kprobe_session_result[i]++;
break;
}
}
/*
* Force probes for functions bpf_fentry_test[5-8] not to
* install and execute the return probe
*/
if (((const void *) addr == &bpf_fentry_test5) ||
((const void *) addr == &bpf_fentry_test6) ||
((const void *) addr == &bpf_fentry_test7) ||
((const void *) addr == &bpf_fentry_test8))
return 1;
return 0;
}
/*
* No tests in here, just to trigger 'bpf_fentry_test*'
* through tracing test_run
*/
SEC("fentry/bpf_modify_return_test")
int BPF_PROG(trigger)
{
return 0;
}
SEC("kprobe.session/bpf_fentry_test*")
int test_kprobe(struct pt_regs *ctx)
{
return session_check(ctx);
}

View File

@ -0,0 +1,58 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <stdbool.h>
#include "bpf_kfuncs.h"
char _license[] SEC("license") = "GPL";
int pid = 0;
__u64 test_kprobe_1_result = 0;
__u64 test_kprobe_2_result = 0;
__u64 test_kprobe_3_result = 0;
/*
* No tests in here, just to trigger 'bpf_fentry_test*'
* through tracing test_run
*/
SEC("fentry/bpf_modify_return_test")
int BPF_PROG(trigger)
{
return 0;
}
static int check_cookie(__u64 val, __u64 *result)
{
long *cookie;
if (bpf_get_current_pid_tgid() >> 32 != pid)
return 1;
cookie = bpf_session_cookie();
if (bpf_session_is_return())
*result = *cookie == val ? val : 0;
else
*cookie = val;
return 0;
}
SEC("kprobe.session/bpf_fentry_test1")
int test_kprobe_1(struct pt_regs *ctx)
{
return check_cookie(1, &test_kprobe_1_result);
}
SEC("kprobe.session/bpf_fentry_test1")
int test_kprobe_2(struct pt_regs *ctx)
{
return check_cookie(2, &test_kprobe_2_result);
}
SEC("kprobe.session/bpf_fentry_test1")
int test_kprobe_3(struct pt_regs *ctx)
{
return check_cookie(3, &test_kprobe_3_result);
}
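A rough sketch of how a userspace driver could exercise the session programs above (the skeleton and global names are assumptions derived from the object name; the real selftest harness may differ):

#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include "kprobe_multi_session_cookie.skel.h"

static int run_session_cookie_test(void)
{
	LIBBPF_OPTS(bpf_test_run_opts, topts);
	struct kprobe_multi_session_cookie *skel;
	int err;

	skel = kprobe_multi_session_cookie__open_and_load();
	if (!skel)
		return -1;

	/* Only react to probes fired by this process. */
	skel->bss->pid = getpid();

	/* kprobe.session programs attach to both entry and return of
	 * bpf_fentry_test1; entry stores the cookie, return checks it. */
	err = kprobe_multi_session_cookie__attach(skel);
	if (err)
		goto out;

	/* Test-running the fentry "trigger" program makes the kernel call
	 * bpf_fentry_test*(), which fires the session probes above. */
	err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.trigger), &topts);
	if (err)
		goto out;

	err = (skel->bss->test_kprobe_1_result == 1 &&
	       skel->bss->test_kprobe_2_result == 2 &&
	       skel->bss->test_kprobe_3_result == 3) ? 0 : -1;
out:
	kprobe_multi_session_cookie__destroy(skel);
	return err;
}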

View File

@ -140,11 +140,12 @@ int BPF_PROG(socket_bind, struct socket *sock, struct sockaddr *address,
{
__u32 pid = bpf_get_current_pid_tgid() >> 32;
struct local_storage *storage;
struct sock *sk = sock->sk;
if (pid != monitored_pid)
if (pid != monitored_pid || !sk)
return 0;
storage = bpf_sk_storage_get(&sk_storage_map, sock->sk, 0, 0);
storage = bpf_sk_storage_get(&sk_storage_map, sk, 0, 0);
if (!storage)
return 0;
@ -155,24 +156,24 @@ int BPF_PROG(socket_bind, struct socket *sock, struct sockaddr *address,
/* This tests that we can associate multiple elements
* with the local storage.
*/
storage = bpf_sk_storage_get(&sk_storage_map2, sock->sk, 0,
storage = bpf_sk_storage_get(&sk_storage_map2, sk, 0,
BPF_LOCAL_STORAGE_GET_F_CREATE);
if (!storage)
return 0;
if (bpf_sk_storage_delete(&sk_storage_map2, sock->sk))
if (bpf_sk_storage_delete(&sk_storage_map2, sk))
return 0;
storage = bpf_sk_storage_get(&sk_storage_map2, sock->sk, 0,
storage = bpf_sk_storage_get(&sk_storage_map2, sk, 0,
BPF_LOCAL_STORAGE_GET_F_CREATE);
if (!storage)
return 0;
if (bpf_sk_storage_delete(&sk_storage_map, sock->sk))
if (bpf_sk_storage_delete(&sk_storage_map, sk))
return 0;
/* Ensure that the sk_storage_map is disconnected from the storage. */
if (!sock->sk->sk_bpf_storage || sock->sk->sk_bpf_storage->smap)
if (!sk->sk_bpf_storage || sk->sk_bpf_storage->smap)
return 0;
sk_storage_result = 0;
@ -185,11 +186,12 @@ int BPF_PROG(socket_post_create, struct socket *sock, int family, int type,
{
__u32 pid = bpf_get_current_pid_tgid() >> 32;
struct local_storage *storage;
struct sock *sk = sock->sk;
if (pid != monitored_pid)
if (pid != monitored_pid || !sk)
return 0;
storage = bpf_sk_storage_get(&sk_storage_map, sock->sk, 0,
storage = bpf_sk_storage_get(&sk_storage_map, sk, 0,
BPF_LOCAL_STORAGE_GET_F_CREATE);
if (!storage)
return 0;

View File

@ -103,11 +103,15 @@ static __always_inline int real_bind(struct socket *sock,
int addrlen)
{
struct sockaddr_ll sa = {};
struct sock *sk = sock->sk;
if (sock->sk->__sk_common.skc_family != AF_PACKET)
if (!sk)
return 1;
if (sock->sk->sk_kern_sock)
if (sk->__sk_common.skc_family != AF_PACKET)
return 1;
if (sk->sk_kern_sock)
return 1;
bpf_probe_read_kernel(&sa, sizeof(sa), address);

View File

@ -2,9 +2,9 @@
/* Copyright (c) 2020, Tessares SA. */
/* Copyright (c) 2022, SUSE. */
#include <linux/bpf.h>
#include "bpf_tracing_net.h"
#include <bpf/bpf_helpers.h>
#include "bpf_tcp_helpers.h"
#include <bpf/bpf_tracing.h>
char _license[] SEC("license") = "GPL";
__u32 token = 0;

View File

@ -49,4 +49,10 @@ int sendmsg_v4_prog(struct bpf_sock_addr *ctx)
return 1;
}
SEC("cgroup/sendmsg4")
int sendmsg_v4_deny_prog(struct bpf_sock_addr *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";

View File

@ -20,6 +20,11 @@
#define DST_REWRITE_IP6_2 0
#define DST_REWRITE_IP6_3 1
#define DST_REWRITE_IP6_V4_MAPPED_0 0
#define DST_REWRITE_IP6_V4_MAPPED_1 0
#define DST_REWRITE_IP6_V4_MAPPED_2 0x0000FFFF
#define DST_REWRITE_IP6_V4_MAPPED_3 0xc0a80004 // 192.168.0.4
#define DST_REWRITE_PORT6 6666
SEC("cgroup/sendmsg6")
@ -59,4 +64,56 @@ int sendmsg_v6_prog(struct bpf_sock_addr *ctx)
return 1;
}
SEC("cgroup/sendmsg6")
int sendmsg_v6_v4mapped_prog(struct bpf_sock_addr *ctx)
{
/* Rewrite source. */
ctx->msg_src_ip6[0] = bpf_htonl(SRC_REWRITE_IP6_0);
ctx->msg_src_ip6[1] = bpf_htonl(SRC_REWRITE_IP6_1);
ctx->msg_src_ip6[2] = bpf_htonl(SRC_REWRITE_IP6_2);
ctx->msg_src_ip6[3] = bpf_htonl(SRC_REWRITE_IP6_3);
/* Rewrite destination. */
ctx->user_ip6[0] = bpf_htonl(DST_REWRITE_IP6_V4_MAPPED_0);
ctx->user_ip6[1] = bpf_htonl(DST_REWRITE_IP6_V4_MAPPED_1);
ctx->user_ip6[2] = bpf_htonl(DST_REWRITE_IP6_V4_MAPPED_2);
ctx->user_ip6[3] = bpf_htonl(DST_REWRITE_IP6_V4_MAPPED_3);
ctx->user_port = bpf_htons(DST_REWRITE_PORT6);
return 1;
}
SEC("cgroup/sendmsg6")
int sendmsg_v6_wildcard_prog(struct bpf_sock_addr *ctx)
{
/* Rewrite source. */
ctx->msg_src_ip6[0] = bpf_htonl(SRC_REWRITE_IP6_0);
ctx->msg_src_ip6[1] = bpf_htonl(SRC_REWRITE_IP6_1);
ctx->msg_src_ip6[2] = bpf_htonl(SRC_REWRITE_IP6_2);
ctx->msg_src_ip6[3] = bpf_htonl(SRC_REWRITE_IP6_3);
/* Rewrite destination. */
ctx->user_ip6[0] = bpf_htonl(0);
ctx->user_ip6[1] = bpf_htonl(0);
ctx->user_ip6[2] = bpf_htonl(0);
ctx->user_ip6[3] = bpf_htonl(0);
ctx->user_port = bpf_htons(DST_REWRITE_PORT6);
return 1;
}
SEC("cgroup/sendmsg6")
int sendmsg_v6_preserve_dst_prog(struct bpf_sock_addr *ctx)
{
return 1;
}
SEC("cgroup/sendmsg6")
int sendmsg_v6_deny_prog(struct bpf_sock_addr *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";

View File

@ -36,4 +36,10 @@ int sendmsg_unix_prog(struct bpf_sock_addr *ctx)
return 1;
}
SEC("cgroup/sendmsg_unix")
int sendmsg_unix_deny_prog(struct bpf_sock_addr *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";

Some files were not shown because too many files have changed in this diff.