linux-stable/arch/ia64/include/asm/cache.h
Joe Perches 33def8498f treewide: Convert macro and uses of __section(foo) to __section("foo")
Use a more generic form for __section that requires quotes to avoid
complications with clang and gcc differences.

Remove the quote operator # from compiler_attributes.h __section macro.

Convert all unquoted __section(foo) uses to quoted __section("foo").
Also convert __attribute__((section("foo"))) uses to __section("foo")
even if the __attribute__ has multiple list entry forms.
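
For reference, the core change to the __section macro in
include/linux/compiler_attributes.h is roughly the following sketch
(spacing approximated, not the verbatim diff):

    /* Before: the macro stringified its argument with #, so callers
     * passed an unquoted section name. */
    #define __section(S) __attribute__((__section__(#S)))

    /* After: the argument is passed through unchanged, so callers
     * must supply a quoted string literal. */
    #define __section(section) __attribute__((__section__(section)))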

Conversion done using the script at:

    https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl
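
Applied to the file below, the conversion is a one-line change along
these lines (illustrative, reconstructed from the macro change above):

    -#define __read_mostly __section(.data..read_mostly)
    +#define __read_mostly __section(".data..read_mostly")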

Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-25 14:51:49 -07:00

30 lines
752 B
C

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_CACHE_H
#define _ASM_IA64_CACHE_H

/*
 * Copyright (C) 1998-2000 Hewlett-Packard Co
 * David Mosberger-Tang <davidm@hpl.hp.com>
 */

/* Bytes per L1 (data) cache line. */
#define L1_CACHE_SHIFT CONFIG_IA64_L1_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)

#ifdef CONFIG_SMP
# define SMP_CACHE_SHIFT L1_CACHE_SHIFT
# define SMP_CACHE_BYTES L1_CACHE_BYTES
#else
  /*
   * The "aligned" directive can only _increase_ alignment, so this is
   * safe and provides an easy way to avoid wasting space on a
   * uni-processor:
   */
# define SMP_CACHE_SHIFT 3
# define SMP_CACHE_BYTES (1 << 3)
#endif

#define __read_mostly __section(".data..read_mostly")

#endif /* _ASM_IA64_CACHE_H */
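
A minimal usage sketch (the variable name is hypothetical, and this
assumes normal kernel context where linux/cache.h is available):
marking a variable __read_mostly places it in .data..read_mostly,
grouping read-mostly data away from frequently written data to reduce
cache-line bouncing between CPUs:

    /* Hypothetical example: a flag read on every call but written
     * only once during init; __read_mostly keeps it on a cache line
     * shared with other rarely-written data. */
    static bool feature_enabled __read_mostly;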