linux-stable/include/linux/kcsan.h
Marco Elver 9756f64c8f kcsan: Avoid checking scoped accesses from nested contexts
Avoid checking scoped accesses from nested contexts (such as nested
interrupts or scheduler code) that share the same kcsan_ctx.

This avoids detecting false-positive races between accesses in the same
thread and currently active scoped accesses: consider setting up a
watchpoint for a non-scoped (normal) access that also "conflicts" with a
current scoped access. A nested interrupt (or the scheduler), which
shares the same kcsan_ctx, cannot safely check scoped accesses set up in
the parent context -- simply ignore them in this case.
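
Concretely: while a watchpoint is armed for a normal access, scoped-access
checking is suppressed for everything sharing this kcsan_ctx. A minimal
sketch of the pattern (illustrative only -- the real logic lives in
kernel/kcsan/core.c, and the function name here is made up):

static void watchpoint_window(struct kcsan_ctx *ctx)
{
        /*
         * Suppress scoped-access checking while the watchpoint is
         * armed: a nested interrupt sharing this kcsan_ctx would
         * otherwise report the parent's scoped accesses as racing.
         */
        ctx->disable_scoped++;

        /* ... arm watchpoint, stall, detect concurrent accesses ... */

        ctx->disable_scoped--;
}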

With the introduction of kcsan_ctx::disable_scoped, we can also clean up
the recursion guard in kcsan_check_scoped_accesses(), which no longer
needs to temporarily modify the list's prev pointer.
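
Under that assumption, the cleaned-up function might look as follows
(simplified sketch; get_ctx(), check_access(), and struct
kcsan_scoped_access are internals of the KCSAN runtime, and details may
differ from kernel/kcsan/core.c):

static void kcsan_check_scoped_accesses(void)
{
        struct kcsan_ctx *ctx = get_ctx();
        struct kcsan_scoped_access *sa;

        /* Both recursion and nested contexts take this early return. */
        if (ctx->disable_scoped)
                return;

        ctx->disable_scoped++;
        list_for_each_entry(sa, &ctx->scoped_accesses, list)
                check_access(sa->ptr, sa->size, sa->type, sa->ip);
        ctx->disable_scoped--;
}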

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-12-09 16:42:26 -08:00

/* SPDX-License-Identifier: GPL-2.0 */
/*
 * The Kernel Concurrency Sanitizer (KCSAN) infrastructure. Public interface and
 * data structures to set up runtime. See kcsan-checks.h for explicit checks and
 * modifiers. For more info please see Documentation/dev-tools/kcsan.rst.
 *
 * Copyright (C) 2019, Google LLC.
 */

#ifndef _LINUX_KCSAN_H
#define _LINUX_KCSAN_H

#include <linux/kcsan-checks.h>
#include <linux/types.h>

#ifdef CONFIG_KCSAN

/*
 * Context for each thread of execution: for tasks, this is stored in
 * task_struct, and interrupts access internal per-CPU storage.
 */
struct kcsan_ctx {
        int disable_count; /* disable counter */
        int disable_scoped; /* disable scoped access counter */
        int atomic_next; /* number of following atomic ops */

        /*
         * We distinguish between: (a) nestable atomic regions that may contain
         * other nestable regions; and (b) flat atomic regions that do not keep
         * track of nesting. Both (a) and (b) are entirely independent of each
         * other, and a flat region may be started in a nestable region or
         * vice-versa.
         *
         * This is required because, for example, in the annotations for
         * seqlocks, we declare seqlock writer critical sections as (a) nestable
         * atomic regions and reader critical sections as (b) flat atomic
         * regions; however, we have encountered cases where seqlock reader
         * critical sections are contained within writer critical sections (the
         * opposite may be possible, too).
         *
         * To support these cases, we independently track the depth of nesting
         * for (a), and whether the leaf level is flat for (b).
         */
        int atomic_nest_count;
        bool in_flat_atomic;

        /*
         * Access mask for all accesses if non-zero.
         */
        unsigned long access_mask;

        /* List of scoped accesses. */
        struct list_head scoped_accesses;
};

/**
 * kcsan_init - initialize KCSAN runtime
 */
void kcsan_init(void);

#else /* CONFIG_KCSAN */

static inline void kcsan_init(void) { }

#endif /* CONFIG_KCSAN */

#endif /* _LINUX_KCSAN_H */
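
As an aside on the nestable/flat comment in struct kcsan_ctx above: the
two fields are driven by the annotation API declared in kcsan-checks.h.
A sketch of the writer-contains-reader nesting the comment describes
(the per-call field updates noted in the comments are assumed
bookkeeping, not a guaranteed implementation detail):

#include <linux/kcsan-checks.h>

static void seqlock_style_nesting(void)
{
        kcsan_nestable_atomic_begin();  /* (a): atomic_nest_count 0 -> 1 */
        kcsan_nestable_atomic_begin();  /* (a) nests freely: 1 -> 2 */

        kcsan_flat_atomic_begin();      /* (b): in_flat_atomic = true */
        /* Accesses here are treated as atomic by KCSAN. */
        kcsan_flat_atomic_end();        /* (b): in_flat_atomic = false */

        kcsan_nestable_atomic_end();    /* (a): 2 -> 1 */
        kcsan_nestable_atomic_end();    /* (a): 1 -> 0 */
}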