Merge branch 'bpf-arm64-use-bpf-prog-pack-allocator-in-bpf-jit'

Puranjay Mohan says:

====================
bpf, arm64: use BPF prog pack allocator in BPF JIT

Changes in V8 => V9:
V8: https://lore.kernel.org/bpf/20240221145106.105995-1-puranjay12@gmail.com/
1. Rebased on bpf-next/master
2. Added Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Changes in V7 => V8:
V7: https://lore.kernel.org/bpf/20240125133159.85086-1-puranjay12@gmail.com/
1. Rebase on bpf-next/master
2. Fix __text_poke() by removing usage of 'ret' that was never set.

Changes in V6 => V7:
V6: https://lore.kernel.org/all/20240124164917.119997-1-puranjay12@gmail.com/
1. Rebase on bpf-next/master.

Changes in V5 => V6:
V5: https://lore.kernel.org/all/20230908144320.2474-1-puranjay12@gmail.com/
1. Implement a text poke API to reduce code repetition.
2. Use flush_icache_range() in place of caches_clean_inval_pou() in the
   functions that modify code.
3. Optimize bpf_jit_free() by not copying all the instructions from the
   rw image to the ro_image.

Changes in V4 => V5:
1. Remove the patch for making prog pack allocator portable as it will come
   through the RISCV tree[1].

2. Add a new function aarch64_insn_set() to be used in
   bpf_arch_text_invalidate() for putting illegal instructions after a
   program is removed. The earlier implementation of bpf_arch_text_invalidate()
   called aarch64_insn_patch_text_nosync() in a loop, which was slow because
   each call invalidated the cache.

   Here is test_tag now:
   [root@ip-172-31-6-176 bpf]# time ./test_tag
   test_tag: OK (40945 tests)

   real    0m19.695s
   user    0m1.514s
   sys     0m17.841s

   test_tag without these patches:
   [root@ip-172-31-6-176 bpf]# time ./test_tag
   test_tag: OK (40945 tests)

   real    0m21.487s
   user    0m1.647s
   sys     0m19.106s

   test_tag with the previous version was really slow (> 2 minutes); see [2].

3. Add cache invalidation in aarch64_insn_copy() so other users can call the
   function without worrying about the cache. Currently only bpf_arch_text_copy()
   is using it, but there might be more users in the future.

Changes in V3 => V4: Changes only in the 3rd patch
1. Fix the I-cache maintenance: Clean the data cache and invalidate the I-cache
   only *after* the instructions have been copied to the ROX region.

Changes in V2 => V3: Changes only in the 3rd patch
1. Set prog = orig_prog; in the failure path of the
   bpf_jit_binary_pack_finalize() call.
2. Add comments explaining the usage of the offsets in the exception table.

Changes in V1 => V2:
1. Make the naming consistent in the 3rd patch:
   ro_image and image
   ro_header and header
   ro_image_ptr and image_ptr
2. Use names dst/src in place of addr/opcode in second patch.
3. Add Acked-by: Song Liu <song@kernel.org> in the 1st and 2nd patches.

BPF programs currently consume a page each on ARM64. For systems with many BPF
programs, this adds significant pressure on the instruction TLB. High iTLB
pressure usually slows down the whole system.

Song Liu introduced the BPF prog pack allocator[3] to mitigate the above issue.
It packs multiple BPF programs into a single huge page. It is currently only
enabled for the x86_64 BPF JIT.

This patch series enables the BPF prog pack allocator for the ARM64 BPF JIT.

====================================================
Performance Analysis of prog pack allocator on ARM64
====================================================

To test the performance of the BPF prog pack allocator on ARM64, a stresser
tool[4] was built. This tool loads 8 BPF programs on the system and triggers
5 of them in an infinite loop by making system calls.

The runner script starts 20 instances of the above, which loads 8*20=160 BPF
programs on the system, 5*20=100 of which are constantly triggered.
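
For illustration, here is a hypothetical minimal sketch of such a load
generator. It is not the actual stresser from [4]; the program type, program
count, and trigger mechanism are placeholders. It loads a handful of trivial
BPF programs through the bpf(2) syscall so that each one gets JITed (and,
without the prog pack allocator, occupies its own page), then spins on a
syscall; the real tool additionally attaches some of the programs so that the
syscalls actually trigger them.

/* Hypothetical minimal load generator -- not the stresser from [4]. */
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* Smallest valid BPF program: r0 = 0; exit. */
	struct bpf_insn insns[] = {
		{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 0 },
		{ .code = BPF_JMP | BPF_EXIT },
	};
	union bpf_attr attr;
	int i, fd;

	for (i = 0; i < 8; i++) {
		memset(&attr, 0, sizeof(attr));
		attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
		attr.insns     = (unsigned long)insns;
		attr.insn_cnt  = sizeof(insns) / sizeof(insns[0]);
		attr.license   = (unsigned long)"GPL";

		/* fds are intentionally kept open so the programs stay loaded */
		fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
		if (fd < 0)
			perror("BPF_PROG_LOAD");
	}

	for (;;)	/* keep issuing syscalls */
		getpid();
}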

In the above environment, Python-3.8.4 is built with gcc-12.2.0 and various
iTLB metrics are measured for the compilation.

The source code[5] is configured with the following command:
./configure --enable-optimizations --with-ensurepip=install

Then the runner script is executed with the following command:
./run.sh "perf stat -e ITLB_WALK,L1I_TLB,INST_RETIRED,iTLB-load-misses -a make -j32"

This builds Python while 160 BPF programs are loaded and 100 of them are being
constantly triggered, and measures the iTLB-related metrics.

The output of the above command is discussed below before and after enabling the
BPF prog pack allocator.

The tests were run on qemu-system-aarch64 with 32 CPUs, 4G of memory, -machine virt,
-cpu host, and -enable-kvm.

Results
-------

Before enabling prog pack allocator:
------------------------------------

Performance counter stats for 'system wide':

         333278635      ITLB_WALK
     6762692976558      L1I_TLB
    25359571423901      INST_RETIRED
       15824054789      iTLB-load-misses

     189.029769053 seconds time elapsed

After enabling prog pack allocator:
-----------------------------------

Performance counter stats for 'system wide':

         190333544      ITLB_WALK
     6712712386528      L1I_TLB
    25278233304411      INST_RETIRED
        5716757866      iTLB-load-misses

     185.392650561 seconds time elapsed

Improvements in metrics
-----------------------

Compilation time                             ---> 1.92% faster
iTLB-load-misses/Sec (Less is better)        ---> 63.16% decrease
ITLB_WALK/1000 INST_RETIRED (Less is better) ---> 42.71% decrease
ITLB_WALK/L1I_TLB (Less is better)           ---> 42.47% decrease
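
For reference, these percentages follow directly from the raw counters above.
Taking ITLB_WALK per 1000 retired instructions as an example:

Before:   333278635 / 25359571423901 * 1000 ~= 0.01314
After:    190333544 / 25278233304411 * 1000 ~= 0.00753
Decrease: (0.01314 - 0.00753) / 0.01314     ~= 42.7%

The compilation time improvement is (189.03 - 185.39) / 189.03 ~= 1.92%.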

[1] https://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git/commit/?h=for-next&id=20e490adea279d49d57b800475938f5b67926d98
[2] https://lore.kernel.org/all/CANk7y0gcP3dF2mESLp5JN1+9iDfgtiWRFGqLkCgZD6wby1kQOw@mail.gmail.com/
[3] https://lore.kernel.org/bpf/20220204185742.271030-1-song@kernel.org/
[4] https://github.com/puranjaymohan/BPF-Allocator-Bench
[5] https://www.python.org/ftp/python/3.8.4/Python-3.8.4.tgz
====================

Link: https://lore.kernel.org/r/20240228141824.119877-1-puranjay12@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
commit b9a6299848 by Alexei Starovoitov, 2024-02-28 13:44:48 -08:00
3 changed files with 192 additions and 24 deletions

@ -8,6 +8,8 @@ int aarch64_insn_read(void *addr, u32 *insnp);
int aarch64_insn_write(void *addr, u32 insn);
int aarch64_insn_write_literal_u64(void *addr, u64 val);
void *aarch64_insn_set(void *dst, u32 insn, size_t len);
void *aarch64_insn_copy(void *dst, void *src, size_t len);
int aarch64_insn_patch_text_nosync(void *addr, u32 insn);
int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);

@ -105,6 +105,81 @@ noinstr int aarch64_insn_write_literal_u64(void *addr, u64 val)
	return ret;
}

typedef void text_poke_f(void *dst, void *src, size_t patched, size_t len);

static void *__text_poke(text_poke_f func, void *addr, void *src, size_t len)
{
	unsigned long flags;
	size_t patched = 0;
	size_t size;
	void *waddr;
	void *ptr;

	raw_spin_lock_irqsave(&patch_lock, flags);

	while (patched < len) {
		ptr = addr + patched;
		size = min_t(size_t, PAGE_SIZE - offset_in_page(ptr),
			     len - patched);

		waddr = patch_map(ptr, FIX_TEXT_POKE0);
		func(waddr, src, patched, size);
		patch_unmap(FIX_TEXT_POKE0);

		patched += size;
	}
	raw_spin_unlock_irqrestore(&patch_lock, flags);

	flush_icache_range((uintptr_t)addr, (uintptr_t)addr + len);

	return addr;
}

static void text_poke_memcpy(void *dst, void *src, size_t patched, size_t len)
{
	copy_to_kernel_nofault(dst, src + patched, len);
}

static void text_poke_memset(void *dst, void *src, size_t patched, size_t len)
{
	u32 c = *(u32 *)src;

	memset32(dst, c, len / 4);
}

/**
* aarch64_insn_copy - Copy instructions into (an unused part of) RX memory
* @dst: address to modify
* @src: source of the copy
* @len: length to copy
*
* Useful for JITs to dump new code blocks into unused regions of RX memory.
*/
noinstr void *aarch64_insn_copy(void *dst, void *src, size_t len)
{
	/* A64 instructions must be word aligned */
	if ((uintptr_t)dst & 0x3)
		return NULL;

	return __text_poke(text_poke_memcpy, dst, src, len);
}
/**
* aarch64_insn_set - memset for RX memory regions.
* @dst: address to modify
* @insn: value to set
* @len: length of memory region.
*
* Useful for JITs to fill regions of RX memory with illegal instructions.
*/
noinstr void *aarch64_insn_set(void *dst, u32 insn, size_t len)
{
	if ((uintptr_t)dst & 0x3)
		return NULL;

	return __text_poke(text_poke_memset, dst, &insn, len);
}

int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
{
	u32 *tp = addr;

@ -76,6 +76,7 @@ struct jit_ctx {
int *offset;
int exentry_idx;
__le32 *image;
__le32 *ro_image;
u32 stack_size;
int fpb_offset;
};
@ -205,6 +206,14 @@ static void jit_fill_hole(void *area, unsigned int size)
		*ptr++ = cpu_to_le32(AARCH64_BREAK_FAULT);
}

int bpf_arch_text_invalidate(void *dst, size_t len)
{
	if (!aarch64_insn_set(dst, AARCH64_BREAK_FAULT, len))
		return -EINVAL;

	return 0;
}

static inline int epilogue_offset(const struct jit_ctx *ctx)
{
	int to = ctx->epilogue_offset;
@ -746,7 +755,8 @@ static int add_exception_handler(const struct bpf_insn *insn,
struct jit_ctx *ctx,
int dst_reg)
{
off_t offset;
off_t ins_offset;
off_t fixup_offset;
unsigned long pc;
struct exception_table_entry *ex;
@ -763,12 +773,17 @@ static int add_exception_handler(const struct bpf_insn *insn,
return -EINVAL;
ex = &ctx->prog->aux->extable[ctx->exentry_idx];
pc = (unsigned long)&ctx->image[ctx->idx - 1];
pc = (unsigned long)&ctx->ro_image[ctx->idx - 1];
offset = pc - (long)&ex->insn;
if (WARN_ON_ONCE(offset >= 0 || offset < INT_MIN))
/*
* This is the relative offset of the instruction that may fault from
* the exception table itself. This will be written to the exception
* table and if this instruction faults, the destination register will
* be set to '0' and the execution will jump to the next instruction.
*/
ins_offset = pc - (long)&ex->insn;
if (WARN_ON_ONCE(ins_offset >= 0 || ins_offset < INT_MIN))
return -ERANGE;
ex->insn = offset;
/*
* Since the extable follows the program, the fixup offset is always
@ -777,12 +792,25 @@ static int add_exception_handler(const struct bpf_insn *insn,
* bits. We don't need to worry about buildtime or runtime sort
* modifying the upper bits because the table is already sorted, and
* isn't part of the main exception table.
*
* The fixup_offset is set to the next instruction from the instruction
* that may fault. The execution will jump to this after handling the
* fault.
*/
offset = (long)&ex->fixup - (pc + AARCH64_INSN_SIZE);
if (!FIELD_FIT(BPF_FIXUP_OFFSET_MASK, offset))
fixup_offset = (long)&ex->fixup - (pc + AARCH64_INSN_SIZE);
if (!FIELD_FIT(BPF_FIXUP_OFFSET_MASK, fixup_offset))
return -ERANGE;
ex->fixup = FIELD_PREP(BPF_FIXUP_OFFSET_MASK, offset) |
/*
* The offsets above have been calculated using the RO buffer but we
* need to use the R/W buffer for writes.
* switch ex to rw buffer for writing.
*/
ex = (void *)ctx->image + ((void *)ex - (void *)ctx->ro_image);
ex->insn = ins_offset;
ex->fixup = FIELD_PREP(BPF_FIXUP_OFFSET_MASK, fixup_offset) |
FIELD_PREP(BPF_FIXUP_REG_MASK, dst_reg);
ex->type = EX_TYPE_BPF;
@ -1550,7 +1578,8 @@ static inline void bpf_flush_icache(void *start, void *end)
struct arm64_jit_data {
struct bpf_binary_header *header;
u8 *image;
u8 *ro_image;
struct bpf_binary_header *ro_header;
struct jit_ctx ctx;
};
@ -1559,12 +1588,14 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
int image_size, prog_size, extable_size, extable_align, extable_offset;
struct bpf_prog *tmp, *orig_prog = prog;
struct bpf_binary_header *header;
struct bpf_binary_header *ro_header;
struct arm64_jit_data *jit_data;
bool was_classic = bpf_prog_was_classic(prog);
bool tmp_blinded = false;
bool extra_pass = false;
struct jit_ctx ctx;
u8 *image_ptr;
u8 *ro_image_ptr;
if (!prog->jit_requested)
return orig_prog;
@ -1591,8 +1622,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
}
if (jit_data->ctx.offset) {
ctx = jit_data->ctx;
image_ptr = jit_data->image;
ro_image_ptr = jit_data->ro_image;
ro_header = jit_data->ro_header;
header = jit_data->header;
image_ptr = (void *)header + ((void *)ro_image_ptr
- (void *)ro_header);
extra_pass = true;
prog_size = sizeof(u32) * ctx.idx;
goto skip_init_ctx;
@ -1637,18 +1671,27 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
/* also allocate space for plt target */
extable_offset = round_up(prog_size + PLT_TARGET_SIZE, extable_align);
image_size = extable_offset + extable_size;
header = bpf_jit_binary_alloc(image_size, &image_ptr,
sizeof(u32), jit_fill_hole);
if (header == NULL) {
ro_header = bpf_jit_binary_pack_alloc(image_size, &ro_image_ptr,
sizeof(u32), &header, &image_ptr,
jit_fill_hole);
if (!ro_header) {
prog = orig_prog;
goto out_off;
}
/* 2. Now, the actual pass. */
/*
* Use the image(RW) for writing the JITed instructions. But also save
* the ro_image(RX) for calculating the offsets in the image. The RW
* image will be later copied to the RX image from where the program
* will run. The bpf_jit_binary_pack_finalize() will do this copy in the
* final step.
*/
ctx.image = (__le32 *)image_ptr;
ctx.ro_image = (__le32 *)ro_image_ptr;
if (extable_size)
prog->aux->extable = (void *)image_ptr + extable_offset;
prog->aux->extable = (void *)ro_image_ptr + extable_offset;
skip_init_ctx:
ctx.idx = 0;
ctx.exentry_idx = 0;
@ -1656,9 +1699,8 @@ skip_init_ctx:
build_prologue(&ctx, was_classic, prog->aux->exception_cb);
if (build_body(&ctx, extra_pass)) {
bpf_jit_binary_free(header);
prog = orig_prog;
goto out_off;
goto out_free_hdr;
}
build_epilogue(&ctx, prog->aux->exception_cb);
@ -1666,34 +1708,44 @@ skip_init_ctx:
/* 3. Extra pass to validate JITed code. */
if (validate_ctx(&ctx)) {
bpf_jit_binary_free(header);
prog = orig_prog;
goto out_off;
goto out_free_hdr;
}
/* And we're done. */
if (bpf_jit_enable > 1)
bpf_jit_dump(prog->len, prog_size, 2, ctx.image);
bpf_flush_icache(header, ctx.image + ctx.idx);
if (!prog->is_func || extra_pass) {
if (extra_pass && ctx.idx != jit_data->ctx.idx) {
pr_err_once("multi-func JIT bug %d != %d\n",
ctx.idx, jit_data->ctx.idx);
bpf_jit_binary_free(header);
prog->bpf_func = NULL;
prog->jited = 0;
prog->jited_len = 0;
goto out_free_hdr;
}
if (WARN_ON(bpf_jit_binary_pack_finalize(prog, ro_header,
header))) {
/* ro_header has been freed */
ro_header = NULL;
prog = orig_prog;
goto out_off;
}
bpf_jit_binary_lock_ro(header);
/*
* The instructions have now been copied to the ROX region from
* where they will execute. Now the data cache has to be cleaned to
* the PoU and the I-cache has to be invalidated for the VAs.
*/
bpf_flush_icache(ro_header, ctx.ro_image + ctx.idx);
} else {
jit_data->ctx = ctx;
jit_data->image = image_ptr;
jit_data->ro_image = ro_image_ptr;
jit_data->header = header;
jit_data->ro_header = ro_header;
}
prog->bpf_func = (void *)ctx.image;
prog->bpf_func = (void *)ctx.ro_image;
prog->jited = 1;
prog->jited_len = prog_size;
@ -1714,6 +1766,14 @@ out:
bpf_jit_prog_release_other(prog, prog == orig_prog ?
tmp : orig_prog);
return prog;
out_free_hdr:
if (header) {
bpf_arch_text_copy(&ro_header->size, &header->size,
sizeof(header->size));
bpf_jit_binary_pack_free(ro_header, header);
}
goto out_off;
}
bool bpf_jit_supports_kfunc_call(void)
@ -1721,6 +1781,13 @@ bool bpf_jit_supports_kfunc_call(void)
return true;
}
void *bpf_arch_text_copy(void *dst, void *src, size_t len)
{
	if (!aarch64_insn_copy(dst, src, len))
		return ERR_PTR(-EINVAL);

	return dst;
}

u64 bpf_jit_alloc_exec_limit(void)
{
	return VMALLOC_END - VMALLOC_START;
@ -2359,3 +2426,27 @@ bool bpf_jit_supports_exceptions(void)
*/
return true;
}
void bpf_jit_free(struct bpf_prog *prog)
{
	if (prog->jited) {
		struct arm64_jit_data *jit_data = prog->aux->jit_data;
		struct bpf_binary_header *hdr;

		/*
		 * If we fail the final pass of JIT (from jit_subprogs),
		 * the program may not be finalized yet. Call finalize here
		 * before freeing it.
		 */
		if (jit_data) {
			bpf_arch_text_copy(&jit_data->ro_header->size, &jit_data->header->size,
					   sizeof(jit_data->header->size));
			kfree(jit_data);
		}
		hdr = bpf_jit_binary_pack_hdr(prog);
		bpf_jit_binary_pack_free(hdr, NULL);
		WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(prog));
	}

	bpf_prog_unlock_free(prog);
}