
Merge tag 'trace-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing updates from Steven Rostedt:

 - Remove eventfs_file descriptor

   This is the biggest change, and the second part of making eventfs
   create its files dynamically.

   In 6.6 the first part was added, and that maintained a one-to-one
   mapping between eventfs meta descriptors and the directories and file
   inodes and dentries that were dynamically created. The directories
   were represented by an eventfs_inode and the files were represented by
   an eventfs_file.

   In v6.7 the eventfs_file is removed. As all events have the same
   directory layout (sched_switch has "enable", "id", "format", etc.
   files), the handling of what files are underneath each leaf eventfs
   directory is moved back to the tracing subsystem via a callback.

   When an event is added to the eventfs, it registers an array of
   eventfs_entry structures. These hold the names of the files and the
   callbacks to call when the file is referenced. The callback gets the
   name so that the same callback may be used by multiple files. The
   callback then supplies the file_operations structure needed to create
   this file.

   This has brought the memory footprint of creating multiple eventfs
   instances down by 2 MB each!

 - User events now have persistent events that are not associated with a
   single process. These are privileged events that hang around even if
   no process is attached to them

 - Clean up of seq_buf

   There's talk about using seq_buf more to replace strscpy() and
   friends. But this also requires some minor modifications of seq_buf
   to be able to do this

 - Expand instance ring buffers individually

   Currently, if boot-up creates an instance and a trace event is enabled
   on that instance, both that instance's ring buffer and the top-level
   ring buffer are expanded (1.4 MB per CPU). This wastes memory, as it
   happens even when nothing is using the top-level instance

 - Other minor clean ups and fixes

* tag 'trace-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (34 commits)
  seq_buf: Export seq_buf_puts()
  seq_buf: Export seq_buf_putc()
  eventfs: Use simple_recursive_removal() to clean up dentries
  eventfs: Remove special processing of dput() of events directory
  eventfs: Delete eventfs_inode when the last dentry is freed
  eventfs: Hold eventfs_mutex when calling callback functions
  eventfs: Save ownership and mode
  eventfs: Test for ei->is_freed when accessing ei->dentry
  eventfs: Have a free_ei() that just frees the eventfs_inode
  eventfs: Remove "is_freed" union with rcu head
  eventfs: Fix kerneldoc of eventfs_remove_rec()
  tracing: Have the user copy of synthetic event address use correct context
  eventfs: Remove extra dget() in eventfs_create_events_dir()
  tracing: Have trace_event_file have ref counters
  seq_buf: Introduce DECLARE_SEQ_BUF and seq_buf_str()
  eventfs: Fix typo in eventfs_inode union comment
  eventfs: Fix WARN_ON() in create_file_dentry()
  powerpc: Remove initialisation of readpos
  tracing/histograms: Simplify last_cmd_set()
  seq_buf: fix a misleading comment
  ...
Commit 31e5f934ff by Linus Torvalds, 2023-11-03 07:41:18 -10:00
24 changed files with 1310 additions and 724 deletions


@@ -14,6 +14,11 @@ Programs can view status of the events via
/sys/kernel/tracing/user_events_status and can both register and write
data out via /sys/kernel/tracing/user_events_data.
Programs can also use /sys/kernel/tracing/dynamic_events to register and
delete user based events via the u: prefix. The format of the command to
dynamic_events is the same as the ioctl with the u: prefix applied. This
requires CAP_PERFMON due to the event persisting, otherwise -EPERM is returned.
Typically programs will register a set of events that they wish to expose to
tools that can read trace_events (such as ftrace and perf). The registration
process tells the kernel which address and bit to reflect if any tool has
@@ -45,7 +50,7 @@ This command takes a packed struct user_reg as an argument::
/* Input: Enable size in bytes at address */
__u8 enable_size;
/* Input: Flags for future use, set to 0 */
/* Input: Flags to use, if any */
__u16 flags;
/* Input: Address to update when enabled */
@@ -69,7 +74,7 @@ The struct user_reg requires all the above inputs to be set appropriately.
This must be 4 (32-bit) or 8 (64-bit). 64-bit values are only allowed to be
used on 64-bit kernels, however, 32-bit can be used on all kernels.
+ flags: The flags to use, if any. For the initial version this must be 0.
+ flags: The flags to use, if any.
Callers should first attempt to use flags and retry without flags to ensure
support for lower versions of the kernel. If a flag is not supported -EINVAL
is returned.
@@ -80,6 +85,13 @@ The struct user_reg requires all the above inputs to be set appropriately.
+ name_args: The name and arguments to describe the event, see command format
for details.
The following flags are currently supported.
+ USER_EVENT_REG_PERSIST: The event will not delete upon the last reference
closing. Callers may use this if an event should exist even after the
process closes or unregisters the event. Requires CAP_PERFMON otherwise
-EPERM is returned.
Upon successful registration the following is set.
+ write_index: The index to use for this file descriptor that represents this
@@ -141,7 +153,10 @@ event (in both user and kernel space). User programs should use a separate file
to request deletes than the one used for registration due to this.
**NOTE:** By default events will auto-delete when there are no references left
to the event. Flags in the future may change this logic.
to the event. If programs do not want auto-delete, they must use the
USER_EVENT_REG_PERSIST flag when registering the event. Once that flag is used
the event exists until DIAG_IOCSDEL is invoked. Both register and delete of an
event that persists requires CAP_PERFMON, otherwise -EPERM is returned.
Unregistering
-------------


@@ -601,7 +601,6 @@ struct seq_buf ppc_hw_desc __initdata = {
.buffer = ppc_hw_desc_buf,
.size = sizeof(ppc_hw_desc_buf),
.len = 0,
.readpos = 0,
};
static __init void probe_machine(void)

File diff suppressed because it is too large.


@@ -385,7 +385,7 @@ static void tracefs_dentry_iput(struct dentry *dentry, struct inode *inode)
ti = get_tracefs(inode);
if (ti && ti->flags & TRACEFS_EVENT_INODE)
eventfs_set_ef_status_free(ti, dentry);
eventfs_set_ei_status_free(ti, dentry);
iput(inode);
}


@@ -13,6 +13,58 @@ struct tracefs_inode {
struct inode vfs_inode;
};
/*
* struct eventfs_attr - cache the mode and ownership of an eventfs entry
* @mode: saved mode plus flags of what is saved
* @uid: saved uid if changed
* @gid: saved gid if changed
*/
struct eventfs_attr {
int mode;
kuid_t uid;
kgid_t gid;
};
/*
* struct eventfs_inode - hold the properties of the eventfs directories.
* @list: linked list into the parent directory
* @entries: the array of entries representing the files in the directory
* @name: the name of the directory to create
* @children: linked list into the child eventfs_inode
* @dentry: the dentry of the directory
* @d_parent: pointer to the parent's dentry
* @d_children: The array of dentries to represent the files when created
* @entry_attrs: Saved mode and ownership of the @d_children
* @attr: Saved mode and ownership of eventfs_inode itself
* @data: The private data to pass to the callbacks
* @is_freed: Flag set if the eventfs is on its way to be freed
* Note if is_freed is set, then dentry is corrupted.
* @nr_entries: The number of items in @entries
*/
struct eventfs_inode {
struct list_head list;
const struct eventfs_entry *entries;
const char *name;
struct list_head children;
struct dentry *dentry; /* Check is_freed to access */
struct dentry *d_parent;
struct dentry **d_children;
struct eventfs_attr *entry_attrs;
struct eventfs_attr attr;
void *data;
/*
* Union - used for deletion
* @llist: for calling dput() if needed after RCU
* @rcu: eventfs_inode to delete in RCU
*/
union {
struct llist_node llist;
struct rcu_head rcu;
};
unsigned int is_freed:1;
unsigned int nr_entries:31;
};
static inline struct tracefs_inode *get_tracefs(const struct inode *inode)
{
return container_of(inode, struct tracefs_inode, vfs_inode);
@@ -25,6 +77,6 @@ struct inode *tracefs_get_inode(struct super_block *sb);
struct dentry *eventfs_start_creating(const char *name, struct dentry *parent);
struct dentry *eventfs_failed_creating(struct dentry *dentry);
struct dentry *eventfs_end_creating(struct dentry *dentry);
void eventfs_set_ef_status_free(struct tracefs_inode *ti, struct dentry *dentry);
void eventfs_set_ei_status_free(struct tracefs_inode *ti, struct dentry *dentry);
#endif /* _TRACEFS_INTERNAL_H */


@@ -14,19 +14,25 @@
* @buffer: pointer to the buffer
* @size: size of the buffer
* @len: the amount of data inside the buffer
* @readpos: The next position to read in the buffer.
*/
struct seq_buf {
char *buffer;
size_t size;
size_t len;
loff_t readpos;
};
#define DECLARE_SEQ_BUF(NAME, SIZE) \
char __ ## NAME ## _buffer[SIZE] = ""; \
struct seq_buf NAME = { \
.buffer = &__ ## NAME ## _buffer, \
.size = SIZE, \
}
static inline void seq_buf_clear(struct seq_buf *s)
{
s->len = 0;
s->readpos = 0;
if (s->size)
s->buffer[0] = '\0';
}
static inline void
@@ -39,7 +45,7 @@ seq_buf_init(struct seq_buf *s, char *buf, unsigned int size)
/*
* seq_buf have a buffer that might overflow. When this happens
* the len and size are set to be equal.
* len is set to be greater than size.
*/
static inline bool
seq_buf_has_overflowed(struct seq_buf *s)
@@ -72,8 +78,8 @@ static inline unsigned int seq_buf_used(struct seq_buf *s)
}
/**
* seq_buf_terminate - Make sure buffer is nul terminated
* @s: the seq_buf descriptor to terminate.
* seq_buf_str - get %NUL-terminated C string from seq_buf
* @s: the seq_buf handle
*
* This makes sure that the buffer in @s is nul terminated and
* safe to read as a string.
@@ -84,16 +90,20 @@ static inline unsigned int seq_buf_used(struct seq_buf *s)
*
* After this function is called, s->buffer is safe to use
* in string operations.
*
* Returns @s->buffer after making sure it is terminated.
*/
static inline void seq_buf_terminate(struct seq_buf *s)
static inline const char *seq_buf_str(struct seq_buf *s)
{
if (WARN_ON(s->size == 0))
return;
return "";
if (seq_buf_buffer_left(s))
s->buffer[s->len] = 0;
else
s->buffer[s->size - 1] = 0;
return s->buffer;
}
/**
@@ -143,7 +153,7 @@ extern __printf(2, 0)
int seq_buf_vprintf(struct seq_buf *s, const char *fmt, va_list args);
extern int seq_buf_print_seq(struct seq_file *m, struct seq_buf *s);
extern int seq_buf_to_user(struct seq_buf *s, char __user *ubuf,
int cnt);
size_t start, int cnt);
extern int seq_buf_puts(struct seq_buf *s, const char *str);
extern int seq_buf_putc(struct seq_buf *s, unsigned char c);
extern int seq_buf_putmem(struct seq_buf *s, const void *mem, unsigned int len);


@@ -492,6 +492,7 @@ enum {
EVENT_FILE_FL_TRIGGER_COND_BIT,
EVENT_FILE_FL_PID_FILTER_BIT,
EVENT_FILE_FL_WAS_ENABLED_BIT,
EVENT_FILE_FL_FREED_BIT,
};
extern struct trace_event_file *trace_get_event_file(const char *instance,
@@ -630,6 +631,7 @@ extern int __kprobe_event_add_fields(struct dynevent_cmd *cmd, ...);
* TRIGGER_COND - When set, one or more triggers has an associated filter
* PID_FILTER - When set, the event is filtered based on pid
* WAS_ENABLED - Set when enabled to know to clear trace on module removal
* FREED - File descriptor is freed, all fields should be considered invalid
*/
enum {
EVENT_FILE_FL_ENABLED = (1 << EVENT_FILE_FL_ENABLED_BIT),
@@ -643,13 +645,14 @@ enum {
EVENT_FILE_FL_TRIGGER_COND = (1 << EVENT_FILE_FL_TRIGGER_COND_BIT),
EVENT_FILE_FL_PID_FILTER = (1 << EVENT_FILE_FL_PID_FILTER_BIT),
EVENT_FILE_FL_WAS_ENABLED = (1 << EVENT_FILE_FL_WAS_ENABLED_BIT),
EVENT_FILE_FL_FREED = (1 << EVENT_FILE_FL_FREED_BIT),
};
struct trace_event_file {
struct list_head list;
struct trace_event_call *event_call;
struct event_filter __rcu *filter;
struct eventfs_file *ef;
struct eventfs_inode *ei;
struct trace_array *tr;
struct trace_subsystem_dir *system;
struct list_head triggers;
@@ -671,6 +674,7 @@ struct trace_event_file {
* caching and such. Which is mostly OK ;-)
*/
unsigned long flags;
atomic_t ref; /* ref count for opened files */
atomic_t sm_ref; /* soft-mode reference counter */
atomic_t tm_ref; /* trigger-mode reference counter */
};


@@ -14,6 +14,7 @@
struct trace_seq {
char buffer[PAGE_SIZE];
struct seq_buf seq;
size_t readpos;
int full;
};
@@ -22,6 +23,7 @@ trace_seq_init(struct trace_seq *s)
{
seq_buf_init(&s->seq, s->buffer, PAGE_SIZE);
s->full = 0;
s->readpos = 0;
}
/**


@@ -23,26 +23,69 @@ struct file_operations;
struct eventfs_file;
struct dentry *eventfs_create_events_dir(const char *name,
struct dentry *parent);
/**
* eventfs_callback - A callback function to create dynamic files in eventfs
* @name: The name of the file that is to be created
* @mode: return the file mode for the file (RW access, etc)
* @data: data to pass to the created file ops
* @fops: the file operations of the created file
*
* The eventfs files are dynamically created. The struct eventfs_entry array
* is passed to eventfs_create_dir() or eventfs_create_events_dir() that will
* be used to create the files within those directories. When a lookup
* or access to a file within the directory is made, the struct eventfs_entry
* array is used to find a callback() with the matching name that is being
* referenced (for lookups, the entire array is iterated and each callback
* will be called).
*
* The callback will be called with @name for the name of the file to create.
* The callback can return less than 1 to indicate that no file should be
* created.
*
* If a file is to be created, then @mode should be populated with the file
* mode (permissions) that the file is created with. This would be
* used to set the created inode i_mode field.
*
* The @data should be set to the data passed to the other file operations
* (read, write, etc). Note, @data will also point to the data passed in
* to eventfs_create_dir() or eventfs_create_events_dir(), but the callback
* can replace the data if it chooses to. Otherwise, the original data
* will be used for the file operation functions.
*
* The @fops should be set to the file operations that will be used to create
* the inode.
*
* NB. This callback is called while holding internal locks of the eventfs
* system. The callback must not call any code that might also call into
* the tracefs or eventfs system or it will risk creating a deadlock.
*/
typedef int (*eventfs_callback)(const char *name, umode_t *mode, void **data,
const struct file_operations **fops);
struct eventfs_file *eventfs_add_subsystem_dir(const char *name,
struct dentry *parent);
/**
* struct eventfs_entry - dynamically created eventfs file call back handler
* @name: The name of the dynamic file in an eventfs directory
* @callback: The callback to get the fops of the file when it is created
*
* See the eventfs_callback() typedef for how to set up @callback.
*/
struct eventfs_entry {
const char *name;
eventfs_callback callback;
};
struct eventfs_file *eventfs_add_dir(const char *name,
struct eventfs_file *ef_parent);
struct eventfs_inode;
int eventfs_add_file(const char *name, umode_t mode,
struct eventfs_file *ef_parent, void *data,
const struct file_operations *fops);
struct eventfs_inode *eventfs_create_events_dir(const char *name, struct dentry *parent,
const struct eventfs_entry *entries,
int size, void *data);
int eventfs_add_events_file(const char *name, umode_t mode,
struct dentry *parent, void *data,
const struct file_operations *fops);
struct eventfs_inode *eventfs_create_dir(const char *name, struct eventfs_inode *parent,
const struct eventfs_entry *entries,
int size, void *data);
void eventfs_remove(struct eventfs_file *ef);
void eventfs_remove_events_dir(struct dentry *dentry);
void eventfs_remove_events_dir(struct eventfs_inode *ei);
void eventfs_remove_dir(struct eventfs_inode *ei);
struct dentry *tracefs_create_file(const char *name, umode_t mode,
struct dentry *parent, void *data,


@@ -17,6 +17,15 @@
/* Create dynamic location entry within a 32-bit value */
#define DYN_LOC(offset, size) ((size) << 16 | (offset))
/* List of supported registration flags */
enum user_reg_flag {
/* Event will not delete upon last reference closing */
USER_EVENT_REG_PERSIST = 1U << 0,
/* This value or above is currently non-ABI */
USER_EVENT_REG_MAX = 1U << 1,
};
/*
* Describes an event registration and stores the results of the registration.
* This structure is passed to the DIAG_IOCSREG ioctl, callers at a minimum
@@ -33,7 +42,7 @@ struct user_reg {
/* Input: Enable size in bytes at address */
__u8 enable_size;
/* Input: Flags for future use, set to 0 */
/* Input: Flags to use, if any */
__u16 flags;
/* Input: Address to update when enabled */


@@ -2056,7 +2056,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
retries = 10;
success = false;
while (retries--) {
struct list_head *head_page, *prev_page, *r;
struct list_head *head_page, *prev_page;
struct list_head *last_page, *first_page;
struct list_head *head_page_with_bit;
struct buffer_page *hpage = rb_set_head_page(cpu_buffer);
@@ -2075,9 +2075,9 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
last_page->next = head_page_with_bit;
first_page->prev = prev_page;
r = cmpxchg(&prev_page->next, head_page_with_bit, first_page);
if (r == head_page_with_bit) {
/* caution: head_page_with_bit gets updated on cmpxchg failure */
if (try_cmpxchg(&prev_page->next,
&head_page_with_bit, first_page)) {
/*
* yay, we replaced the page pointer to our new list,
* now, we just have to update to head page's prev


@@ -54,12 +54,6 @@
#include "trace.h"
#include "trace_output.h"
/*
* On boot up, the ring buffer is set to the minimum size, so that
* we do not waste memory on systems that are not using tracing.
*/
bool ring_buffer_expanded;
#ifdef CONFIG_FTRACE_STARTUP_TEST
/*
* We need to change this state when a selftest is running.
@@ -202,7 +196,7 @@ static int __init set_cmdline_ftrace(char *str)
strscpy(bootup_tracer_buf, str, MAX_TRACER_SIZE);
default_bootup_tracer = bootup_tracer_buf;
/* We are using ftrace early, expand it */
ring_buffer_expanded = true;
trace_set_ring_buffer_expanded(NULL);
return 1;
}
__setup("ftrace=", set_cmdline_ftrace);
@@ -247,7 +241,7 @@ static int __init boot_alloc_snapshot(char *str)
} else {
allocate_snapshot = true;
/* We also need the main ring buffer expanded */
ring_buffer_expanded = true;
trace_set_ring_buffer_expanded(NULL);
}
return 1;
}
@@ -490,6 +484,13 @@ static struct trace_array global_trace = {
.trace_flags = TRACE_DEFAULT_FLAGS,
};
void trace_set_ring_buffer_expanded(struct trace_array *tr)
{
if (!tr)
tr = &global_trace;
tr->ring_buffer_expanded = true;
}
LIST_HEAD(ftrace_trace_arrays);
int trace_array_get(struct trace_array *this_tr)
@@ -1730,15 +1731,15 @@ static ssize_t trace_seq_to_buffer(struct trace_seq *s, void *buf, size_t cnt)
{
int len;
if (trace_seq_used(s) <= s->seq.readpos)
if (trace_seq_used(s) <= s->readpos)
return -EBUSY;
len = trace_seq_used(s) - s->seq.readpos;
len = trace_seq_used(s) - s->readpos;
if (cnt > len)
cnt = len;
memcpy(buf, s->buffer + s->seq.readpos, cnt);
memcpy(buf, s->buffer + s->readpos, cnt);
s->seq.readpos += cnt;
s->readpos += cnt;
return cnt;
}
@@ -2012,7 +2013,7 @@ static int run_tracer_selftest(struct tracer *type)
#ifdef CONFIG_TRACER_MAX_TRACE
if (type->use_max_tr) {
/* If we expanded the buffers, make sure the max is expanded too */
if (ring_buffer_expanded)
if (tr->ring_buffer_expanded)
ring_buffer_resize(tr->max_buffer.buffer, trace_buf_size,
RING_BUFFER_ALL_CPUS);
tr->allocated_snapshot = true;
@@ -2038,7 +2039,7 @@ static int run_tracer_selftest(struct tracer *type)
tr->allocated_snapshot = false;
/* Shrink the max buffer again */
if (ring_buffer_expanded)
if (tr->ring_buffer_expanded)
ring_buffer_resize(tr->max_buffer.buffer, 1,
RING_BUFFER_ALL_CPUS);
}
@@ -3403,7 +3404,7 @@ void trace_printk_init_buffers(void)
pr_warn("**********************************************************\n");
/* Expand the buffers to set size */
tracing_update_buffers();
tracing_update_buffers(&global_trace);
buffers_allocated = 1;
@@ -3827,15 +3828,6 @@ static bool trace_safe_str(struct trace_iterator *iter, const char *str,
return false;
}
static const char *show_buffer(struct trace_seq *s)
{
struct seq_buf *seq = &s->seq;
seq_buf_terminate(seq);
return seq->buffer;
}
static DEFINE_STATIC_KEY_FALSE(trace_no_verify);
static int test_can_verify_check(const char *fmt, ...)
@@ -3975,7 +3967,7 @@ void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
*/
if (WARN_ONCE(!trace_safe_str(iter, str, star, len),
"fmt: '%s' current_buffer: '%s'",
fmt, show_buffer(&iter->seq))) {
fmt, seq_buf_str(&iter->seq.seq))) {
int ret;
/* Try to safely read the string */
@@ -4986,6 +4978,20 @@ int tracing_open_file_tr(struct inode *inode, struct file *filp)
if (ret)
return ret;
mutex_lock(&event_mutex);
/* Fail if the file is marked for removal */
if (file->flags & EVENT_FILE_FL_FREED) {
trace_array_put(file->tr);
ret = -ENODEV;
} else {
event_file_get(file);
}
mutex_unlock(&event_mutex);
if (ret)
return ret;
filp->private_data = inode->i_private;
return 0;
@@ -4996,6 +5002,7 @@ int tracing_release_file_tr(struct inode *inode, struct file *filp)
struct trace_event_file *file = inode->i_private;
trace_array_put(file->tr);
event_file_put(file);
return 0;
}
@@ -6374,7 +6381,7 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
* we use the size that was given, and we can forget about
* expanding it later.
*/
ring_buffer_expanded = true;
trace_set_ring_buffer_expanded(tr);
/* May be called before buffers are initialized */
if (!tr->array_buffer.buffer)
@@ -6452,6 +6459,7 @@ out:
/**
* tracing_update_buffers - used by tracing facility to expand ring buffers
* @tr: The tracing instance
*
* To save on memory when the tracing is never used on a system with it
* configured in. The ring buffers are set to a minimum size. But once
@@ -6460,13 +6468,13 @@ out:
*
* This function is to be called when a tracer is about to be used.
*/
int tracing_update_buffers(void)
int tracing_update_buffers(struct trace_array *tr)
{
int ret = 0;
mutex_lock(&trace_types_lock);
if (!ring_buffer_expanded)
ret = __tracing_resize_ring_buffer(&global_trace, trace_buf_size,
if (!tr->ring_buffer_expanded)
ret = __tracing_resize_ring_buffer(tr, trace_buf_size,
RING_BUFFER_ALL_CPUS);
mutex_unlock(&trace_types_lock);
@@ -6520,7 +6528,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
mutex_lock(&trace_types_lock);
if (!ring_buffer_expanded) {
if (!tr->ring_buffer_expanded) {
ret = __tracing_resize_ring_buffer(tr, trace_buf_size,
RING_BUFFER_ALL_CPUS);
if (ret < 0)
@@ -7006,7 +7014,7 @@ waitagain:
/* Now copy what we have to the user */
sret = trace_seq_to_user(&iter->seq, ubuf, cnt);
if (iter->seq.seq.readpos >= trace_seq_used(&iter->seq))
if (iter->seq.readpos >= trace_seq_used(&iter->seq))
trace_seq_init(&iter->seq);
/*
@@ -7192,7 +7200,7 @@ tracing_entries_read(struct file *filp, char __user *ubuf,
}
if (buf_size_same) {
if (!ring_buffer_expanded)
if (!tr->ring_buffer_expanded)
r = sprintf(buf, "%lu (expanded: %lu)\n",
size >> 10,
trace_buf_size >> 10);
@@ -7249,10 +7257,10 @@ tracing_total_entries_read(struct file *filp, char __user *ubuf,
mutex_lock(&trace_types_lock);
for_each_tracing_cpu(cpu) {
size += per_cpu_ptr(tr->array_buffer.data, cpu)->entries >> 10;
if (!ring_buffer_expanded)
if (!tr->ring_buffer_expanded)
expanded_size += trace_buf_size >> 10;
}
if (ring_buffer_expanded)
if (tr->ring_buffer_expanded)
r = sprintf(buf, "%lu\n", size);
else
r = sprintf(buf, "%lu (expanded: %lu)\n", size, expanded_size);
@@ -7646,7 +7654,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
unsigned long val;
int ret;
ret = tracing_update_buffers();
ret = tracing_update_buffers(tr);
if (ret < 0)
return ret;
@@ -9550,6 +9558,9 @@ static struct trace_array *trace_array_create(const char *name)
if (allocate_trace_buffers(tr, trace_buf_size) < 0)
goto out_free_tr;
/* The ring buffer is expanded by default */
trace_set_ring_buffer_expanded(tr);
if (ftrace_allocate_ftrace_ops(tr) < 0)
goto out_free_tr;
@@ -9759,7 +9770,6 @@ static __init void create_trace_instances(struct dentry *d_tracer)
static void
init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
{
struct trace_event_file *file;
int cpu;
trace_create_file("available_tracers", TRACE_MODE_READ, d_tracer,
@@ -9792,11 +9802,7 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
trace_create_file("trace_marker", 0220, d_tracer,
tr, &tracing_mark_fops);
file = __find_event_file(tr, "ftrace", "print");
if (file && file->ef)
eventfs_add_file("trigger", TRACE_MODE_WRITE, file->ef,
file, &event_trigger_fops);
tr->trace_marker_file = file;
tr->trace_marker_file = __find_event_file(tr, "ftrace", "print");
trace_create_file("trace_marker_raw", 0220, d_tracer,
tr, &tracing_mark_raw_fops);
@@ -10444,7 +10450,7 @@ __init static int tracer_alloc_buffers(void)
trace_printk_init_buffers();
/* To save memory, keep the ring buffer size to its minimum */
if (ring_buffer_expanded)
if (global_trace.ring_buffer_expanded)
ring_buf_size = trace_buf_size;
else
ring_buf_size = 1;


@@ -381,7 +381,7 @@ struct trace_array {
struct dentry *dir;
struct dentry *options;
struct dentry *percpu_dir;
struct dentry *event_dir;
struct eventfs_inode *event_dir;
struct trace_options *topts;
struct list_head systems;
struct list_head events;
@@ -410,6 +410,11 @@ struct trace_array {
struct cond_snapshot *cond_snapshot;
#endif
struct trace_func_repeats __percpu *last_func_repeats;
/*
* On boot up, the ring buffer is set to the minimum size, so that
* we do not waste memory on systems that are not using tracing.
*/
bool ring_buffer_expanded;
};
enum {
@@ -761,7 +766,7 @@ extern int DYN_FTRACE_TEST_NAME(void);
#define DYN_FTRACE_TEST_NAME2 trace_selftest_dynamic_test_func2
extern int DYN_FTRACE_TEST_NAME2(void);
extern bool ring_buffer_expanded;
extern void trace_set_ring_buffer_expanded(struct trace_array *tr);
extern bool tracing_selftest_disabled;
#ifdef CONFIG_FTRACE_STARTUP_TEST
@@ -1305,7 +1310,7 @@ static inline void trace_branch_disable(void)
#endif /* CONFIG_BRANCH_TRACER */
/* set ring buffers to default size if not already done so */
int tracing_update_buffers(void);
int tracing_update_buffers(struct trace_array *tr);
union trace_synth_field {
u8 as_u8;
@@ -1344,7 +1349,7 @@ struct trace_subsystem_dir {
struct list_head list;
struct event_subsystem *subsystem;
struct trace_array *tr;
struct eventfs_file *ef;
struct eventfs_inode *ei;
int ref_count;
int nr_events;
};
@@ -1664,6 +1669,9 @@ extern void event_trigger_unregister(struct event_command *cmd_ops,
char *glob,
struct event_trigger_data *trigger_data);
extern void event_file_get(struct trace_event_file *file);
extern void event_file_put(struct trace_event_file *file);
/**
* struct event_trigger_ops - callbacks for trace event triggers
*


@@ -984,19 +984,41 @@ static void remove_subsystem(struct trace_subsystem_dir *dir)
return;
if (!--dir->nr_events) {
eventfs_remove(dir->ef);
eventfs_remove_dir(dir->ei);
list_del(&dir->list);
__put_system_dir(dir);
}
}
void event_file_get(struct trace_event_file *file)
{
atomic_inc(&file->ref);
}
void event_file_put(struct trace_event_file *file)
{
if (WARN_ON_ONCE(!atomic_read(&file->ref))) {
if (file->flags & EVENT_FILE_FL_FREED)
kmem_cache_free(file_cachep, file);
return;
}
if (atomic_dec_and_test(&file->ref)) {
/* Count should only go to zero when it is freed */
if (WARN_ON_ONCE(!(file->flags & EVENT_FILE_FL_FREED)))
return;
kmem_cache_free(file_cachep, file);
}
}
static void remove_event_file_dir(struct trace_event_file *file)
{
eventfs_remove(file->ef);
eventfs_remove_dir(file->ei);
list_del(&file->list);
remove_subsystem(file->system);
free_event_filter(file->filter);
kmem_cache_free(file_cachep, file);
file->flags |= EVENT_FILE_FL_FREED;
event_file_put(file);
}
/*
@@ -1166,7 +1188,7 @@ ftrace_event_write(struct file *file, const char __user *ubuf,
if (!cnt)
return 0;
ret = tracing_update_buffers();
ret = tracing_update_buffers(tr);
if (ret < 0)
return ret;
@@ -1369,7 +1391,7 @@ event_enable_read(struct file *filp, char __user *ubuf, size_t cnt,
flags = file->flags;
mutex_unlock(&event_mutex);
if (!file)
if (!file || flags & EVENT_FILE_FL_FREED)
return -ENODEV;
if (flags & EVENT_FILE_FL_ENABLED &&
@@ -1397,18 +1419,20 @@ event_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
if (ret)
return ret;
ret = tracing_update_buffers();
if (ret < 0)
return ret;
switch (val) {
case 0:
case 1:
ret = -ENODEV;
mutex_lock(&event_mutex);
file = event_file_data(filp);
if (likely(file))
if (likely(file && !(file->flags & EVENT_FILE_FL_FREED))) {
ret = tracing_update_buffers(file->tr);
if (ret < 0) {
mutex_unlock(&event_mutex);
return ret;
}
ret = ftrace_event_enable_disable(file, val);
}
mutex_unlock(&event_mutex);
break;
@@ -1482,7 +1506,7 @@ system_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
if (ret)
return ret;
ret = tracing_update_buffers();
ret = tracing_update_buffers(dir->tr);
if (ret < 0)
return ret;
@@ -1681,7 +1705,7 @@ event_filter_read(struct file *filp, char __user *ubuf, size_t cnt,
mutex_lock(&event_mutex);
file = event_file_data(filp);
if (file)
if (file && !(file->flags & EVENT_FILE_FL_FREED))
print_event_filter(file, s);
mutex_unlock(&event_mutex);
@@ -1956,7 +1980,7 @@ event_pid_write(struct file *filp, const char __user *ubuf,
if (!cnt)
return 0;
ret = tracing_update_buffers();
ret = tracing_update_buffers(tr);
if (ret < 0)
return ret;
@@ -2280,14 +2304,40 @@ create_new_subsystem(const char *name)
return NULL;
}
static struct eventfs_file *
static int system_callback(const char *name, umode_t *mode, void **data,
const struct file_operations **fops)
{
if (strcmp(name, "filter") == 0)
*fops = &ftrace_subsystem_filter_fops;
else if (strcmp(name, "enable") == 0)
*fops = &ftrace_system_enable_fops;
else
return 0;
*mode = TRACE_MODE_WRITE;
return 1;
}
static struct eventfs_inode *
event_subsystem_dir(struct trace_array *tr, const char *name,
struct trace_event_file *file, struct dentry *parent)
struct trace_event_file *file, struct eventfs_inode *parent)
{
struct event_subsystem *system, *iter;
struct trace_subsystem_dir *dir;
struct eventfs_file *ef;
int res;
struct eventfs_inode *ei;
int nr_entries;
static struct eventfs_entry system_entries[] = {
{
.name = "filter",
.callback = system_callback,
},
{
.name = "enable",
.callback = system_callback,
}
};
/* First see if we did not already create this dir */
list_for_each_entry(dir, &tr->systems, list) {
@@ -2295,7 +2345,7 @@ event_subsystem_dir(struct trace_array *tr, const char *name,
if (strcmp(system->name, name) == 0) {
dir->nr_events++;
file->system = dir;
return dir->ef;
return dir->ei;
}
}
@@ -2319,39 +2369,29 @@ event_subsystem_dir(struct trace_array *tr, const char *name,
} else
__get_system(system);
ef = eventfs_add_subsystem_dir(name, parent);
if (IS_ERR(ef)) {
/* ftrace only has directories, no files */
if (strcmp(name, "ftrace") == 0)
nr_entries = 0;
else
nr_entries = ARRAY_SIZE(system_entries);
ei = eventfs_create_dir(name, parent, system_entries, nr_entries, dir);
if (IS_ERR(ei)) {
pr_warn("Failed to create system directory %s\n", name);
__put_system(system);
goto out_free;
}
dir->ef = ef;
dir->ei = ei;
dir->tr = tr;
dir->ref_count = 1;
dir->nr_events = 1;
dir->subsystem = system;
file->system = dir;
/* the ftrace system is special, do not create enable or filter files */
if (strcmp(name, "ftrace") != 0) {
res = eventfs_add_file("filter", TRACE_MODE_WRITE,
dir->ef, dir,
&ftrace_subsystem_filter_fops);
if (res) {
kfree(system->filter);
system->filter = NULL;
pr_warn("Could not create tracefs '%s/filter' entry\n", name);
}
eventfs_add_file("enable", TRACE_MODE_WRITE, dir->ef, dir,
&ftrace_system_enable_fops);
}
list_add(&dir->list, &tr->systems);
return dir->ef;
return dir->ei;
out_free:
kfree(dir);
@@ -2400,15 +2440,134 @@ event_define_fields(struct trace_event_call *call)
return ret;
}
static int event_callback(const char *name, umode_t *mode, void **data,
const struct file_operations **fops)
{
struct trace_event_file *file = *data;
struct trace_event_call *call = file->event_call;
if (strcmp(name, "format") == 0) {
*mode = TRACE_MODE_READ;
*fops = &ftrace_event_format_fops;
*data = call;
return 1;
}
/*
* Only event directories that can be enabled should have
* triggers or filters, with the exception of the "print"
* event that can have a "trigger" file.
*/
if (!(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE)) {
if (call->class->reg && strcmp(name, "enable") == 0) {
*mode = TRACE_MODE_WRITE;
*fops = &ftrace_enable_fops;
return 1;
}
if (strcmp(name, "filter") == 0) {
*mode = TRACE_MODE_WRITE;
*fops = &ftrace_event_filter_fops;
return 1;
}
}
if (!(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE) ||
strcmp(trace_event_name(call), "print") == 0) {
if (strcmp(name, "trigger") == 0) {
*mode = TRACE_MODE_WRITE;
*fops = &event_trigger_fops;
return 1;
}
}
#ifdef CONFIG_PERF_EVENTS
if (call->event.type && call->class->reg &&
strcmp(name, "id") == 0) {
*mode = TRACE_MODE_READ;
*data = (void *)(long)call->event.type;
*fops = &ftrace_event_id_fops;
return 1;
}
#endif
#ifdef CONFIG_HIST_TRIGGERS
if (strcmp(name, "hist") == 0) {
*mode = TRACE_MODE_READ;
*fops = &event_hist_fops;
return 1;
}
#endif
#ifdef CONFIG_HIST_TRIGGERS_DEBUG
if (strcmp(name, "hist_debug") == 0) {
*mode = TRACE_MODE_READ;
*fops = &event_hist_debug_fops;
return 1;
}
#endif
#ifdef CONFIG_TRACE_EVENT_INJECT
if (call->event.type && call->class->reg &&
strcmp(name, "inject") == 0) {
*mode = 0200;
*fops = &event_inject_fops;
return 1;
}
#endif
return 0;
}
static int
event_create_dir(struct dentry *parent, struct trace_event_file *file)
event_create_dir(struct eventfs_inode *parent, struct trace_event_file *file)
{
struct trace_event_call *call = file->event_call;
struct eventfs_file *ef_subsystem = NULL;
struct trace_array *tr = file->tr;
struct eventfs_file *ef;
struct eventfs_inode *e_events;
struct eventfs_inode *ei;
const char *name;
int nr_entries;
int ret;
static struct eventfs_entry event_entries[] = {
{
.name = "enable",
.callback = event_callback,
},
{
.name = "filter",
.callback = event_callback,
},
{
.name = "trigger",
.callback = event_callback,
},
{
.name = "format",
.callback = event_callback,
},
#ifdef CONFIG_PERF_EVENTS
{
.name = "id",
.callback = event_callback,
},
#endif
#ifdef CONFIG_HIST_TRIGGERS
{
.name = "hist",
.callback = event_callback,
},
#endif
#ifdef CONFIG_HIST_TRIGGERS_DEBUG
{
.name = "hist_debug",
.callback = event_callback,
},
#endif
#ifdef CONFIG_TRACE_EVENT_INJECT
{
.name = "inject",
.callback = event_callback,
},
#endif
};
/*
* If the trace point header did not define TRACE_SYSTEM
@@ -2418,29 +2577,20 @@ event_create_dir(struct dentry *parent, struct trace_event_file *file)
if (WARN_ON_ONCE(strcmp(call->class->system, TRACE_SYSTEM) == 0))
return -ENODEV;
ef_subsystem = event_subsystem_dir(tr, call->class->system, file, parent);
if (!ef_subsystem)
e_events = event_subsystem_dir(tr, call->class->system, file, parent);
if (!e_events)
return -ENOMEM;
nr_entries = ARRAY_SIZE(event_entries);
name = trace_event_name(call);
ef = eventfs_add_dir(name, ef_subsystem);
if (IS_ERR(ef)) {
ei = eventfs_create_dir(name, e_events, event_entries, nr_entries, file);
if (IS_ERR(ei)) {
pr_warn("Could not create tracefs '%s' directory\n", name);
return -1;
}
file->ef = ef;
if (call->class->reg && !(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE))
eventfs_add_file("enable", TRACE_MODE_WRITE, file->ef, file,
&ftrace_enable_fops);
#ifdef CONFIG_PERF_EVENTS
if (call->event.type && call->class->reg)
eventfs_add_file("id", TRACE_MODE_READ, file->ef,
(void *)(long)call->event.type,
&ftrace_event_id_fops);
#endif
file->ei = ei;
ret = event_define_fields(call);
if (ret < 0) {
@@ -2448,35 +2598,6 @@ event_create_dir(struct dentry *parent, struct trace_event_file *file)
return ret;
}
/*
* Only event directories that can be enabled should have
* triggers or filters.
*/
if (!(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE)) {
eventfs_add_file("filter", TRACE_MODE_WRITE, file->ef,
file, &ftrace_event_filter_fops);
eventfs_add_file("trigger", TRACE_MODE_WRITE, file->ef,
file, &event_trigger_fops);
}
#ifdef CONFIG_HIST_TRIGGERS
eventfs_add_file("hist", TRACE_MODE_READ, file->ef, file,
&event_hist_fops);
#endif
#ifdef CONFIG_HIST_TRIGGERS_DEBUG
eventfs_add_file("hist_debug", TRACE_MODE_READ, file->ef, file,
&event_hist_debug_fops);
#endif
eventfs_add_file("format", TRACE_MODE_READ, file->ef, call,
&ftrace_event_format_fops);
#ifdef CONFIG_TRACE_EVENT_INJECT
if (call->event.type && call->class->reg)
eventfs_add_file("inject", 0200, file->ef, file,
&event_inject_fops);
#endif
return 0;
}
@@ -2803,6 +2924,7 @@ trace_create_new_event(struct trace_event_call *call,
atomic_set(&file->tm_ref, 0);
INIT_LIST_HEAD(&file->triggers);
list_add(&file->list, &tr->events);
event_file_get(file);
return file;
}
@@ -2824,7 +2946,7 @@ static __init int setup_trace_triggers(char *str)
int i;
strscpy(bootup_trigger_buf, str, COMMAND_LINE_SIZE);
ring_buffer_expanded = true;
trace_set_ring_buffer_expanded(NULL);
disable_tracing_selftest("running event triggers");
buf = bootup_trigger_buf;
@@ -3614,37 +3736,72 @@ static char bootup_event_buf[COMMAND_LINE_SIZE] __initdata;
static __init int setup_trace_event(char *str)
{
strscpy(bootup_event_buf, str, COMMAND_LINE_SIZE);
ring_buffer_expanded = true;
trace_set_ring_buffer_expanded(NULL);
disable_tracing_selftest("running event tracing");
return 1;
}
__setup("trace_event=", setup_trace_event);
static int events_callback(const char *name, umode_t *mode, void **data,
const struct file_operations **fops)
{
if (strcmp(name, "enable") == 0) {
*mode = TRACE_MODE_WRITE;
*fops = &ftrace_tr_enable_fops;
return 1;
}
if (strcmp(name, "header_page") == 0)
*data = ring_buffer_print_page_header;
else if (strcmp(name, "header_event") == 0)
*data = ring_buffer_print_entry_header;
else
return 0;
*mode = TRACE_MODE_READ;
*fops = &ftrace_show_header_fops;
return 1;
}
/* Expects to have event_mutex held when called */
static int
create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
{
struct dentry *d_events;
struct eventfs_inode *e_events;
struct dentry *entry;
int error = 0;
int nr_entries;
static struct eventfs_entry events_entries[] = {
{
.name = "enable",
.callback = events_callback,
},
{
.name = "header_page",
.callback = events_callback,
},
{
.name = "header_event",
.callback = events_callback,
},
};
entry = trace_create_file("set_event", TRACE_MODE_WRITE, parent,
tr, &ftrace_set_event_fops);
if (!entry)
return -ENOMEM;
d_events = eventfs_create_events_dir("events", parent);
if (IS_ERR(d_events)) {
nr_entries = ARRAY_SIZE(events_entries);
e_events = eventfs_create_events_dir("events", parent, events_entries,
nr_entries, tr);
if (IS_ERR(e_events)) {
pr_warn("Could not create tracefs 'events' directory\n");
return -ENOMEM;
}
error = eventfs_add_events_file("enable", TRACE_MODE_WRITE, d_events,
tr, &ftrace_tr_enable_fops);
if (error)
return -ENOMEM;
/* These are not as crucial; just warn if they are not created */
trace_create_file("set_event_pid", TRACE_MODE_WRITE, parent,
@@ -3654,16 +3811,7 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
TRACE_MODE_WRITE, parent, tr,
&ftrace_set_event_notrace_pid_fops);
/* ring buffer internal formats */
eventfs_add_events_file("header_page", TRACE_MODE_READ, d_events,
ring_buffer_print_page_header,
&ftrace_show_header_fops);
eventfs_add_events_file("header_event", TRACE_MODE_READ, d_events,
ring_buffer_print_entry_header,
&ftrace_show_header_fops);
tr->event_dir = d_events;
tr->event_dir = e_events;
return 0;
}


@@ -2349,6 +2349,9 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
struct event_filter *filter = NULL;
int err;
if (file->flags & EVENT_FILE_FL_FREED)
return -ENODEV;
if (!strcmp(strstrip(filter_string), "0")) {
filter_disable(file);
filter = event_filter(file);


@@ -774,23 +774,16 @@ static void last_cmd_set(struct trace_event_file *file, char *str)
{
const char *system = NULL, *name = NULL;
struct trace_event_call *call;
int len;
if (!str)
return;
/* sizeof() contains the nul byte */
len = sizeof(HIST_PREFIX) + strlen(str);
kfree(last_cmd);
last_cmd = kzalloc(len, GFP_KERNEL);
last_cmd = kasprintf(GFP_KERNEL, HIST_PREFIX "%s", str);
if (!last_cmd)
return;
strcpy(last_cmd, HIST_PREFIX);
/* Again, sizeof() contains the nul byte */
len -= sizeof(HIST_PREFIX);
strncat(last_cmd, str, len);
if (file) {
call = file->event_call;
system = call->class->system;


@@ -452,7 +452,7 @@ static unsigned int trace_string(struct synth_trace_event *entry,
#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
if ((unsigned long)str_val < TASK_SIZE)
ret = strncpy_from_user_nofault(str_field, str_val, STR_VAR_LEN_MAX);
ret = strncpy_from_user_nofault(str_field, (const void __user *)str_val, STR_VAR_LEN_MAX);
else
#endif
ret = strncpy_from_kernel_nofault(str_field, str_val, STR_VAR_LEN_MAX);


@@ -49,18 +49,6 @@
#define EVENT_STATUS_PERF BIT(1)
#define EVENT_STATUS_OTHER BIT(7)
/*
* User register flags are not allowed yet, keep them here until we are
* ready to expose them out to the user ABI.
*/
enum user_reg_flag {
/* Event will not delete upon last reference closing */
USER_EVENT_REG_PERSIST = 1U << 0,
/* This value or above is currently non-ABI */
USER_EVENT_REG_MAX = 1U << 1,
};
/*
* Stores the system name, tables, and locks for a group of events. This
* allows isolation for events by various means.
@@ -220,6 +208,17 @@ static u32 user_event_key(char *name)
return jhash(name, strlen(name), 0);
}
static bool user_event_capable(u16 reg_flags)
{
/* Persistent events require CAP_PERFMON / CAP_SYS_ADMIN */
if (reg_flags & USER_EVENT_REG_PERSIST) {
if (!perfmon_capable())
return false;
}
return true;
}
static struct user_event *user_event_get(struct user_event *user)
{
refcount_inc(&user->refcnt);
@@ -1811,6 +1810,9 @@ static int user_event_free(struct dyn_event *ev)
if (!user_event_last_ref(user))
return -EBUSY;
if (!user_event_capable(user->reg_flags))
return -EPERM;
return destroy_user_event(user);
}
@@ -1926,10 +1928,13 @@ static int user_event_parse(struct user_event_group *group, char *name,
int argc = 0;
char **argv;
/* User register flags are not ready yet */
if (reg_flags != 0 || flags != NULL)
/* Currently don't support any text based flags */
if (flags != NULL)
return -EINVAL;
if (!user_event_capable(reg_flags))
return -EPERM;
/* Prevent dyn_event from racing */
mutex_lock(&event_mutex);
user = find_user_event(group, name, &key);
@@ -2062,6 +2067,9 @@ static int delete_user_event(struct user_event_group *group, char *name)
if (!user_event_last_ref(user))
return -EBUSY;
if (!user_event_capable(user->reg_flags))
return -EPERM;
return destroy_user_event(user);
}


@@ -370,8 +370,12 @@ EXPORT_SYMBOL_GPL(trace_seq_path);
*/
int trace_seq_to_user(struct trace_seq *s, char __user *ubuf, int cnt)
{
int ret;
__trace_seq_init(s);
return seq_buf_to_user(&s->seq, ubuf, cnt);
ret = seq_buf_to_user(&s->seq, ubuf, s->readpos, cnt);
if (ret > 0)
s->readpos += ret;
return ret;
}
EXPORT_SYMBOL_GPL(trace_seq_to_user);


@@ -109,9 +109,7 @@ void seq_buf_do_printk(struct seq_buf *s, const char *lvl)
if (s->size == 0 || s->len == 0)
return;
seq_buf_terminate(s);
start = s->buffer;
start = seq_buf_str(s);
while ((lf = strchr(start, '\n'))) {
int len = lf - start + 1;
@@ -189,6 +187,7 @@ int seq_buf_puts(struct seq_buf *s, const char *str)
seq_buf_set_overflow(s);
return -1;
}
EXPORT_SYMBOL_GPL(seq_buf_puts);
/**
* seq_buf_putc - sequence printing of simple character
@@ -210,6 +209,7 @@ int seq_buf_putc(struct seq_buf *s, unsigned char c)
seq_buf_set_overflow(s);
return -1;
}
EXPORT_SYMBOL_GPL(seq_buf_putc);
/**
* seq_buf_putmem - write raw data into the sequence buffer
@@ -324,23 +324,24 @@ int seq_buf_path(struct seq_buf *s, const struct path *path, const char *esc)
* seq_buf_to_user - copy the sequence buffer to user space
* @s: seq_buf descriptor
* @ubuf: The userspace memory location to copy to
* @start: The first byte in the buffer to copy
* @cnt: The amount to copy
*
* Copies the sequence buffer into the userspace memory pointed to
* by @ubuf. It starts from the last read position (@s->readpos)
* and writes up to @cnt characters or till it reaches the end of
* the content in the buffer (@s->len), which ever comes first.
* by @ubuf. It starts from @start and writes up to @cnt characters
* or until it reaches the end of the content in the buffer (@s->len),
* whichever comes first.
*
* On success, it returns the number of bytes it copied.
*
* On failure it returns -EBUSY if all of the content in the
* sequence has already been read, which includes nothing in the
* sequence (@s->len == @s->readpos).
* sequence (@s->len == @start).
*
* Returns -EFAULT if the copy to userspace fails.
*/
int seq_buf_to_user(struct seq_buf *s, char __user *ubuf, int cnt)
int seq_buf_to_user(struct seq_buf *s, char __user *ubuf, size_t start, int cnt)
{
int len;
int ret;
@@ -350,20 +351,17 @@ int seq_buf_to_user(struct seq_buf *s, char __user *ubuf, int cnt)
len = seq_buf_used(s);
if (len <= s->readpos)
if (len <= start)
return -EBUSY;
len -= s->readpos;
len -= start;
if (cnt > len)
cnt = len;
ret = copy_to_user(ubuf, s->buffer + s->readpos, cnt);
ret = copy_to_user(ubuf, s->buffer + start, cnt);
if (ret == cnt)
return -EFAULT;
cnt -= ret;
s->readpos += cnt;
return cnt;
return cnt - ret;
}
/**


@@ -40,7 +40,9 @@ riscv*)
esac
: "Test get argument (1)"
if grep -q eventfs_add_dir available_filter_functions; then
if grep -q eventfs_create_dir available_filter_functions; then
DIR_NAME="eventfs_create_dir"
elif grep -q eventfs_add_dir available_filter_functions; then
DIR_NAME="eventfs_add_dir"
else
DIR_NAME="tracefs_create_dir"


@@ -40,7 +40,9 @@ riscv*)
esac
: "Test get argument (1)"
if grep -q eventfs_add_dir available_filter_functions; then
if grep -q eventfs_create_dir available_filter_functions; then
DIR_NAME="eventfs_create_dir"
elif grep -q eventfs_add_dir available_filter_functions; then
DIR_NAME="eventfs_add_dir"
else
DIR_NAME="tracefs_create_dir"


@@ -24,6 +24,18 @@
const char *data_file = "/sys/kernel/tracing/user_events_data";
const char *enable_file = "/sys/kernel/tracing/events/user_events/__abi_event/enable";
static bool event_exists(void)
{
int fd = open(enable_file, O_RDWR);
if (fd < 0)
return false;
close(fd);
return true;
}
static int change_event(bool enable)
{
int fd = open(enable_file, O_RDWR);
@@ -47,7 +59,22 @@ static int change_event(bool enable)
return ret;
}
static int reg_enable(void *enable, int size, int bit)
static int event_delete(void)
{
int fd = open(data_file, O_RDWR);
int ret;
if (fd < 0)
return -1;
ret = ioctl(fd, DIAG_IOCSDEL, "__abi_event");
close(fd);
return ret;
}
static int reg_enable_flags(void *enable, int size, int bit, int flags)
{
struct user_reg reg = {0};
int fd = open(data_file, O_RDWR);
@@ -58,6 +85,7 @@ static int reg_enable(void *enable, int size, int bit)
reg.size = sizeof(reg);
reg.name_args = (__u64)"__abi_event";
reg.flags = flags;
reg.enable_bit = bit;
reg.enable_addr = (__u64)enable;
reg.enable_size = size;
@@ -69,6 +97,11 @@ static int reg_enable(void *enable, int size, int bit)
return ret;
}
static int reg_enable(void *enable, int size, int bit)
{
return reg_enable_flags(enable, size, bit, 0);
}
static int reg_disable(void *enable, int bit)
{
struct user_unreg reg = {0};
@@ -128,6 +161,26 @@ TEST_F(user, enablement) {
ASSERT_EQ(0, change_event(false));
}
TEST_F(user, flags) {
/* USER_EVENT_REG_PERSIST is allowed */
ASSERT_EQ(0, reg_enable_flags(&self->check, sizeof(int), 0,
USER_EVENT_REG_PERSIST));
ASSERT_EQ(0, reg_disable(&self->check, 0));
/* Ensure it exists after close and disable */
ASSERT_TRUE(event_exists());
/* Ensure we can delete it */
ASSERT_EQ(0, event_delete());
/* USER_EVENT_REG_MAX or above is not allowed */
ASSERT_EQ(-1, reg_enable_flags(&self->check, sizeof(int), 0,
USER_EVENT_REG_MAX));
/* Ensure it does not exist after invalid flags */
ASSERT_FALSE(event_exists());
}
TEST_F(user, bit_sizes) {
/* Allow 0-31 bits for 32-bit */
ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0));


@@ -17,9 +17,25 @@
#include "../kselftest_harness.h"
#include "user_events_selftests.h"
const char *dyn_file = "/sys/kernel/tracing/dynamic_events";
const char *abi_file = "/sys/kernel/tracing/user_events_data";
const char *enable_file = "/sys/kernel/tracing/events/user_events/__test_event/enable";
static int event_delete(void)
{
int fd = open(abi_file, O_RDWR);
int ret;
if (fd < 0)
return -1;
ret = ioctl(fd, DIAG_IOCSDEL, "__test_event");
close(fd);
return ret;
}
static bool wait_for_delete(void)
{
int i;
@@ -64,7 +80,31 @@ static int unreg_event(int fd, int *check, int bit)
return ioctl(fd, DIAG_IOCSUNREG, &unreg);
}
static int parse(int *check, const char *value)
static int parse_dyn(const char *value)
{
int fd = open(dyn_file, O_RDWR | O_APPEND);
int len = strlen(value);
int ret;
if (fd == -1)
return -1;
ret = write(fd, value, len);
if (ret == len)
ret = 0;
else
ret = -1;
close(fd);
if (ret == 0)
event_delete();
return ret;
}
static int parse_abi(int *check, const char *value)
{
int fd = open(abi_file, O_RDWR);
int ret;
@@ -90,6 +130,18 @@ static int parse(int *check, const char *value)
return ret;
}
static int parse(int *check, const char *value)
{
int abi_ret = parse_abi(check, value);
int dyn_ret = parse_dyn(value);
/* Ensure both ABI and DYN parse the same way */
if (dyn_ret != abi_ret)
return -1;
return dyn_ret;
}
static int check_match(int *check, const char *first, const char *second, bool *match)
{
int fd = open(abi_file, O_RDWR);