Merge branch 'bpf-formatting-fixes-helpers'

Quentin Monnet says:

====================
Here is a follow-up set for eBPF helper documentation, with two patches
to fix formatting issues:

- One to fix an error I introduced with the initial set for the doc.
- One for the newly added bpf_get_stack(), which is currently not parsed
  correctly by the Python script (the function signature is fine, but the
  description and return value appear empty).

There is no change to text contents in this set.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel Borkmann 2018-04-30 13:53:12 +02:00
commit 081023a311
2 changed files with 52 additions and 52 deletions

@@ -828,12 +828,12 @@ union bpf_attr {
 *
 * Also, be aware that the newer helper
 * **bpf_perf_event_read_value**\ () is recommended over
-* **bpf_perf_event_read*\ () in general. The latter has some ABI
+* **bpf_perf_event_read**\ () in general. The latter has some ABI
 * quirks where error and counter value are used as a return code
 * (which is wrong to do since ranges may overlap). This issue is
-* fixed with bpf_perf_event_read_value(), which at the same time
-* provides more features over the **bpf_perf_event_read**\ ()
-* interface. Please refer to the description of
+* fixed with **bpf_perf_event_read_value**\ (), which at the same
+* time provides more features over the **bpf_perf_event_read**\
+* () interface. Please refer to the description of
 * **bpf_perf_event_read_value**\ () for details.
 * Return
 * The value of the perf event counter read from the map, or a
@@ -1770,33 +1770,33 @@ union bpf_attr {
 *
 * int bpf_get_stack(struct pt_regs *regs, void *buf, u32 size, u64 flags)
 * Description
-* Return a user or a kernel stack in bpf program provided buffer.
-* To achieve this, the helper needs *ctx*, which is a pointer
-* to the context on which the tracing program is executed.
-* To store the stacktrace, the bpf program provides *buf* with
-* a nonnegative *size*.
+* Return a user or a kernel stack in bpf program provided buffer.
+* To achieve this, the helper needs *ctx*, which is a pointer
+* to the context on which the tracing program is executed.
+* To store the stacktrace, the bpf program provides *buf* with
+* a nonnegative *size*.
 *
-* The last argument, *flags*, holds the number of stack frames to
-* skip (from 0 to 255), masked with
-* **BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
-* the following flags:
+* The last argument, *flags*, holds the number of stack frames to
+* skip (from 0 to 255), masked with
+* **BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
+* the following flags:
 *
-* **BPF_F_USER_STACK**
-* Collect a user space stack instead of a kernel stack.
-* **BPF_F_USER_BUILD_ID**
-* Collect buildid+offset instead of ips for user stack,
-* only valid if **BPF_F_USER_STACK** is also specified.
+* **BPF_F_USER_STACK**
+* Collect a user space stack instead of a kernel stack.
+* **BPF_F_USER_BUILD_ID**
+* Collect buildid+offset instead of ips for user stack,
+* only valid if **BPF_F_USER_STACK** is also specified.
 *
-* **bpf_get_stack**\ () can collect up to
-* **PERF_MAX_STACK_DEPTH** both kernel and user frames, subject
-* to sufficient large buffer size. Note that
-* this limit can be controlled with the **sysctl** program, and
-* that it should be manually increased in order to profile long
-* user stacks (such as stacks for Java programs). To do so, use:
+* **bpf_get_stack**\ () can collect up to
+* **PERF_MAX_STACK_DEPTH** both kernel and user frames, subject
+* to sufficient large buffer size. Note that
+* this limit can be controlled with the **sysctl** program, and
+* that it should be manually increased in order to profile long
+* user stacks (such as stacks for Java programs). To do so, use:
 *
-* ::
+* ::
 *
-* # sysctl kernel.perf_event_max_stack=<new value>
+* # sysctl kernel.perf_event_max_stack=<new value>
 *
 * Return
 * a non-negative value equal to or less than size on success, or
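
As a side note for readers of this doc: a minimal sketch of how the
recommended **bpf_perf_event_read_value**\ () call looks from a BPF program,
assuming current libbpf conventions. The map name, probe target, and map
size are illustrative only and not part of this patch; user space is assumed
to populate the perf event array with perf event FDs before the program runs.

    #include <linux/types.h>
    #include <linux/ptrace.h>
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Hypothetical perf event array, filled in by the loader. */
    struct {
            __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
            __uint(key_size, sizeof(__u32));
            __uint(value_size, sizeof(__u32));
            __uint(max_entries, 64);
    } counters SEC(".maps");

    SEC("kprobe/do_sys_open")
    int read_counter(struct pt_regs *ctx)
    {
            struct bpf_perf_event_value val = {};
            long err;

            /* Unlike bpf_perf_event_read(), the error code and the
             * counter value travel separately: the return value is
             * only an error code, and the data lands in val. */
            err = bpf_perf_event_read_value(&counters, BPF_F_CURRENT_CPU,
                                            &val, sizeof(val));
            if (err)
                    return 0;
            /* val.counter, val.enabled and val.running are valid here. */
            return 0;
    }

    char _license[] SEC("license") = "GPL";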

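Similarly, a hedged sketch of calling **bpf_get_stack**\ () for a user space
stack, per the documentation above. The 32-entry buffer and the probe target
are arbitrary choices for illustration; the buffer lives on the BPF stack, so
it has to stay within the 512-byte BPF stack limit.

    #include <linux/types.h>
    #include <linux/ptrace.h>
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #define STACK_ENTRIES 32    /* 32 * 8 = 256 bytes, fits the BPF stack */

    SEC("kprobe/do_sys_open")
    int collect_stack(struct pt_regs *ctx)
    {
            __u64 stack[STACK_ENTRIES];
            long ret;

            /* Low byte of flags holds the number of frames to skip
             * (0 here); BPF_F_USER_STACK selects the user stack. */
            ret = bpf_get_stack(ctx, stack, sizeof(stack), BPF_F_USER_STACK);
            if (ret < 0)
                    return 0;
            /* On success, ret is the number of bytes written to stack[],
             * i.e. ret / sizeof(__u64) instruction pointers. */
            return 0;
    }

    char _license[] SEC("license") = "GPL";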