linux-stable/kernel/bpf
Yonghong Song e36373dc5e bpf: Mark bpf_spin_{lock,unlock}() helpers with notrace correctly
[ Upstream commit 178c54666f ]

Currently, tracing programs are not supposed to be allowed to attach to
the bpf_spin_{lock,unlock}() helpers. This is to prevent deadlocks in
cases like the following:
  - there is a prog (prog-A) calling bpf_spin_{lock,unlock}().
  - there is a tracing program (prog-B), e.g., fentry, attached
    to bpf_spin_lock() and/or bpf_spin_unlock().
  - prog-B calls bpf_spin_{lock,unlock}().
In such a case, when prog-A calls bpf_spin_{lock,unlock}(), prog-B runs
from within the helper. For example, with prog-B attached to
bpf_spin_unlock(): the lock is still held by prog-A when the fentry
fires, so a bpf_spin_lock() on the same lock inside prog-B spins
forever and the kernel deadlocks, as sketched below.
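
Here is a minimal sketch of such a prog-B. The map name (vals), value
type (struct val) and program name (on_spin_unlock) are hypothetical;
this only illustrates the scenario above and is not the reporter's
actual reproducer from [2]:

  /* prog-B: an fentry program attached to the bpf_spin_unlock() helper.
   * Before the fix such an attachment was accepted, because the visible
   * bpf_spin_unlock() symbol was not actually marked notrace.
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct val {
          struct bpf_spin_lock lock;   /* lock embedded in a map value */
          long counter;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, __u32);
          __type(value, struct val);
  } vals SEC(".maps");

  SEC("fentry/bpf_spin_unlock")
  int BPF_PROG(on_spin_unlock)
  {
          __u32 key = 0;
          struct val *v = bpf_map_lookup_elem(&vals, &key);

          /* If prog-A is currently inside bpf_spin_unlock(&v->lock),
           * the lock is still held when this fentry fires, so the
           * bpf_spin_lock() below spins forever: prog-A cannot release
           * the lock until this program returns.
           */
          if (v) {
                  bpf_spin_lock(&v->lock);
                  v->counter++;
                  bpf_spin_unlock(&v->lock);
          }
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";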

The relevant code in kernel/bpf/helpers.c is:
  notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
  notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
The notrace attribute is supposed to prevent fentry programs from
attaching to bpf_spin_{lock,unlock}().

But this is actually not the case: an fentry program can successfully
attach to bpf_spin_lock(). Siddharth Chintamaneni reported the issue
in [1]. The following is the macro definition behind the above
BPF_CALL_1:
  #define BPF_CALL_x(x, name, ...)                                               \
        static __always_inline                                                 \
        u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__));   \
        typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
        u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__));         \
        u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))          \
        {                                                                      \
                return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
        }                                                                      \
        static __always_inline                                                 \
        u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))

  #define BPF_CALL_1(name, ...)   BPF_CALL_x(1, name, __VA_ARGS__)
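
To see where the attribute actually lands, here is a rough hand
expansion of notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *,
lock) (simplified: the __BPF_* argument-mapping macros are expanded by
hand, using generic r1-r5 names for the helper register arguments):

  notrace static __always_inline
  u64 ____bpf_spin_lock(struct bpf_spin_lock *lock);  /* notrace binds here */

  typedef u64 (*btf_bpf_spin_lock)(struct bpf_spin_lock *lock);

  u64 bpf_spin_lock(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
  u64 bpf_spin_lock(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)  /* not notrace */
  {
          return ((btf_bpf_spin_lock)____bpf_spin_lock)((struct bpf_spin_lock *)r1);
  }

  static __always_inline  /* inherits notrace from the first declaration */
  u64 ____bpf_spin_lock(struct bpf_spin_lock *lock)
  {
          /* helper body */
  }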

As the expansion shows, the notrace attribute is actually applied to
the static always_inline function ____bpf_spin_{lock,unlock}(). The
actual callback function bpf_spin_{lock,unlock}() is not marked with
notrace, which allows an fentry program to attach to the two helpers
and may cause the above-mentioned deadlock. Siddharth Chintamaneni has
a reproducer in [2].

To fix the issue, introduce a new macro, NOTRACE_BPF_CALL_1, which
adds the notrace attribute to the visible function itself instead of
the hidden always_inline function. This fixes the problem.
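
Reconstructed from the description above (see the upstream commit for
the exact patch), the new macro mirrors BPF_CALL_x but places notrace
on the visible function, roughly:

  #define NOTRACE_BPF_CALL_x(x, name, ...)                                       \
        static __always_inline                                                 \
        u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__));   \
        typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
        notrace u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__)); \
        notrace u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))  \
        {                                                                      \
                return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
        }                                                                      \
        static __always_inline                                                 \
        u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))

  #define NOTRACE_BPF_CALL_1(name, ...)   NOTRACE_BPF_CALL_x(1, name, __VA_ARGS__)

with kernel/bpf/helpers.c switched over to:

  NOTRACE_BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
  NOTRACE_BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)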

  [1] https://lore.kernel.org/bpf/CAE5sdEigPnoGrzN8WU7Tx-h-iFuMZgW06qp0KHWtpvoXxf1OAQ@mail.gmail.com/
  [2] https://lore.kernel.org/bpf/CAE5sdEg6yUc_Jz50AnUXEEUh6O73yQ1Z6NV2srJnef0ZrQkZew@mail.gmail.com/

Fixes: d83525ca62 ("bpf: introduce bpf_spin_lock")
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/20240207070102.335167-1-yonghong.song@linux.dev
Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-03-26 18:19:29 -04:00
preload
arraymap.c bpf: Set need_defer as false when clearing fd array during map free 2024-02-05 20:14:20 +00:00
bloom_filter.c
bpf_cgrp_storage.c
bpf_inode_storage.c
bpf_iter.c
bpf_local_storage.c bpf: bpf_sk_storage: Fix the missing uncharge in sk_omem_alloc 2023-09-06 11:08:14 +02:00
bpf_lru_list.c
bpf_lru_list.h
bpf_lsm.c
bpf_struct_ops.c
bpf_struct_ops_types.h
bpf_task_storage.c
btf.c bpf: Add bpf_sock_addr_set_sun_path() to allow writing unix sockaddr from bpf 2024-01-31 16:19:04 -08:00
cgroup.c bpf: Propagate modified uaddrlen from cgroup sockaddr programs 2024-01-31 16:19:04 -08:00
cgroup_iter.c
core.c bpf: Fix a verifier bug due to incorrect branch offset comparison with cpu=v4 2023-12-13 18:45:04 +01:00
cpumap.c cpumap: Zero-initialise xdp_rxq_info struct before running XDP program 2024-03-15 10:48:18 -04:00
cpumask.c
devmap.c
disasm.c
disasm.h
dispatcher.c
hashtab.c bpf: Add map and need_defer parameters to .map_fd_put_ptr() 2024-01-25 15:35:22 -08:00
helpers.c bpf: Mark bpf_spin_{lock,unlock}() helpers with notrace correctly 2024-03-26 18:19:29 -04:00
inode.c
Kconfig
link_iter.c
local_storage.c
log.c
lpm_trie.c bpf, lpm: Fix check prefixlen before walking trie 2024-01-25 15:35:19 -08:00
Makefile
map_in_map.c bpf: Defer the free of inner map when necessary 2024-01-25 15:35:22 -08:00
map_in_map.h bpf: Add map and need_defer parameters to .map_fd_put_ptr() 2024-01-25 15:35:22 -08:00
map_iter.c
memalloc.c bpf: Use c->unit_size to select target cache during free 2024-01-25 15:35:28 -08:00
mmap_unlock_work.h
mprog.c bpf: Handle bpf_mprog_query with NULL entry 2023-10-06 17:11:20 -07:00
net_namespace.c
offload.c bpf: Avoid dummy bpf_offload_netdev in __bpf_prog_dev_bound_init 2023-09-11 22:06:06 -07:00
percpu_freelist.c
percpu_freelist.h
prog_iter.c
queue_stack_maps.c bpf: Avoid deadlock when using queue and stack maps from NMI 2023-09-11 19:04:49 -07:00
reuseport_array.c
ringbuf.c
stackmap.c bpf: Add crosstask check to __bpf_get_stack 2024-01-25 15:35:19 -08:00
syscall.c bpf: Set uattr->batch.count as zero before batched update or deletion 2024-02-05 20:14:21 +00:00
sysfs_btf.c
task_iter.c
tcx.c bpf: Handle bpf_mprog_query with NULL entry 2023-10-06 17:11:20 -07:00
tnum.c
trampoline.c bpf, x64: Fix tailcall infinite loop 2023-11-20 11:58:55 +01:00
verifier.c bpf: check bpf_func_state->callback_depth when pruning states 2024-03-15 10:48:17 -04:00