linux-stable/arch/arm/Kconfig

2096 lines
65 KiB
Text
Raw Normal View History

License cleanup: add SPDX GPL-2.0 license identifier to files with no license Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license. By default all files without license information are under the default license of the kernel, which is GPL version 2. Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text. This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne. How this work was done: Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases: - file had no licensing information it it. - file was a */uapi/* one with no licensing information in it, - file was a */uapi/* one with existing licensing information, Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords. The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side by side results from of the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files. The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation. Criteria used to select files for SPDX license identifier tagging was: - Files considered eligible had to be source code files. - Make and config files were included as candidates if they contained >5 lines of source - File already had some variant of a license header in it (even if <5 lines). All documentation files were explicitly excluded. The following heuristics were used to determine which SPDX license identifiers to apply. - when both scanners couldn't find any license traces, file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was: SPDX license identifier # files ---------------------------------------------------|------- GPL-2.0 11139 and resulted in the first patch in this series. If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was: SPDX license identifier # files ---------------------------------------------------|------- GPL-2.0 WITH Linux-syscall-note 930 and resulted in the second patch in this series. - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary: SPDX license identifier # files ---------------------------------------------------|------ GPL-2.0 WITH Linux-syscall-note 270 GPL-2.0+ WITH Linux-syscall-note 169 ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21 ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17 LGPL-2.1+ WITH Linux-syscall-note 15 GPL-1.0+ WITH Linux-syscall-note 14 ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5 LGPL-2.0+ WITH Linux-syscall-note 4 LGPL-2.1 WITH Linux-syscall-note 3 ((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3 ((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1 and that resulted in the third patch in this series. - when the two scanners agreed on the detected license(s), that became the concluded license(s). - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred. - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics). - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation. - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time. In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there was new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related. Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files. In initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier. Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with: - a full scancode scan run, collecting the matched texts, detected license ids and scores - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types.) Finally Greg ran the script using the .csv files to generate the patches. Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 14:07:57 +00:00
# SPDX-License-Identifier: GPL-2.0
config ARM
bool
default y
select ARCH_32BIT_OFF_T
select ARCH_CLOCKSOURCE_DATA
select ARCH_HAS_BINFMT_FLAT
ARM: 8738/1: Disable CONFIG_DEBUG_VIRTUAL for NOMMU While running MPS2 platform (NOMMU) with DTB placed below PHYS_OFFSET following warning poped up: ------------[ cut here ]------------ WARNING: CPU: 0 PID: 0 at arch/arm/mm/physaddr.c:42 __virt_to_phys+0x2f/0x40 virt_to_phys used for non-linear address: 00004000 (0x4000) CPU: 0 PID: 0 Comm: swapper Not tainted 4.15.0-rc1-5a31bf2-clean+ #2767 Hardware name: MPS2 (Device Tree Support) [<2100bf39>] (unwind_backtrace) from [<2100b3ff>] (show_stack+0xb/0xc) [<2100b3ff>] (show_stack) from [<2100e697>] (__warn+0x87/0xac) [<2100e697>] (__warn) from [<2100e6db>] (warn_slowpath_fmt+0x1f/0x28) [<2100e6db>] (warn_slowpath_fmt) from [<2100c603>] (__virt_to_phys+0x2f/0x40) [<2100c603>] (__virt_to_phys) from [<2116a499>] (early_init_fdt_reserve_self+0xd/0x24) [<2116a499>] (early_init_fdt_reserve_self) from [<2116222d>] (arm_memblock_init+0xb5/0xf8) [<2116222d>] (arm_memblock_init) from [<21161cad>] (setup_arch+0x38b/0x50e) [<21161cad>] (setup_arch) from [<21160455>] (start_kernel+0x31/0x280) [<21160455>] (start_kernel) from [<00000000>] ( (null)) random: get_random_bytes called from init_oops_id+0x17/0x2c with crng_init=0 ---[ end trace 0000000000000000 ]--- Platforms without MMU support run with 1:1 (i.e. linear) memory mapping, so disable CONFIG_DEBUG_VIRTUAL. Fixes: e377cd8221eb ("ARM: 8640/1: Add support for CONFIG_DEBUG_VIRTUAL") Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-12-18 10:48:42 +00:00
select ARCH_HAS_DEBUG_VIRTUAL if MMU
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
mm: expose arch_mmap_rnd when available When an architecture fully supports randomizing the ELF load location, a per-arch mmap_rnd() function is used to find a randomized mmap base. In preparation for randomizing the location of ET_DYN binaries separately from mmap, this renames and exports these functions as arch_mmap_rnd(). Additionally introduces CONFIG_ARCH_HAS_ELF_RANDOMIZE for describing this feature on architectures that support it (which is a superset of ARCH_BINFMT_ELF_RANDOMIZE_PIE, since s390 already supports a separated ET_DYN ASLR from mmap ASLR without the ARCH_BINFMT_ELF_RANDOMIZE_PIE logic). Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Hector Marco-Gisbert <hecmargi@upv.es> Cc: Russell King <linux@arm.linux.org.uk> Reviewed-by: Ingo Molnar <mingo@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: "David A. Long" <dave.long@linaro.org> Cc: Andrey Ryabinin <a.ryabinin@samsung.com> Cc: Arun Chandran <achandran@mvista.com> Cc: Yann Droneaud <ydroneaud@opteya.com> Cc: Min-Hua Chen <orca.chen@gmail.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Alex Smith <alex@alex-smith.me.uk> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Vineeth Vijayan <vvijayan@mvista.com> Cc: Jeff Bailey <jeffbailey@google.com> Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Behan Webster <behanw@converseincode.com> Cc: Ismael Ripoll <iripoll@upv.es> Cc: Jan-Simon Mller <dl9pf@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-04-14 22:48:00 +00:00
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_KEEPINITRD
select ARCH_HAS_KCOV
select ARCH_HAS_MEMBARRIER_SYNC_CORE
mm: introduce ARCH_HAS_PTE_SPECIAL Currently the PTE special supports is turned on in per architecture header files. Most of the time, it is defined in arch/*/include/asm/pgtable.h depending or not on some other per architecture static definition. This patch introduce a new configuration variable to manage this directly in the Kconfig files. It would later replace __HAVE_ARCH_PTE_SPECIAL. Here notes for some architecture where the definition of __HAVE_ARCH_PTE_SPECIAL is not obvious: arm __HAVE_ARCH_PTE_SPECIAL which is currently defined in arch/arm/include/asm/pgtable-3level.h which is included by arch/arm/include/asm/pgtable.h when CONFIG_ARM_LPAE is set. So select ARCH_HAS_PTE_SPECIAL if ARM_LPAE. powerpc __HAVE_ARCH_PTE_SPECIAL is defined in 2 files: - arch/powerpc/include/asm/book3s/64/pgtable.h - arch/powerpc/include/asm/pte-common.h The first one is included if (PPC_BOOK3S & PPC64) while the second is included in all the other cases. So select ARCH_HAS_PTE_SPECIAL all the time. sparc: __HAVE_ARCH_PTE_SPECIAL is defined if defined(__sparc__) && defined(__arch64__) which are defined through the compiler in sparc/Makefile if !SPARC32 which I assume to be if SPARC64. So select ARCH_HAS_PTE_SPECIAL if SPARC64 There is no functional change introduced by this patch. Link: http://lkml.kernel.org/r/1523433816-14460-2-git-send-email-ldufour@linux.vnet.ibm.com Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Suggested-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Rich Felker <dalias@libc.org> Cc: David S. Miller <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Albert Ou <albert@sifive.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: David Rientjes <rientjes@google.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Christophe LEROY <christophe.leroy@c-s.fr> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-06-08 00:06:08 +00:00
select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
select ARCH_HAS_PHYS_TO_DMA
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
select ARCH_HAS_STRICT_MODULE_RWX if MMU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE if SWIOTLB
select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB
select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_HAS_GCOV_PROFILE_ALL
mm: memblock: make keeping memblock memory opt-in rather than opt-out Most architectures do not need the memblock memory after the page allocator is initialized, but only few enable ARCH_DISCARD_MEMBLOCK in the arch Kconfig. Replacing ARCH_DISCARD_MEMBLOCK with ARCH_KEEP_MEMBLOCK and inverting the logic makes it clear which architectures actually use memblock after system initialization and skips the necessity to add ARCH_DISCARD_MEMBLOCK to the architectures that are still missing that option. Link: http://lkml.kernel.org/r/1556102150-32517-1-git-send-email-rppt@linux.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: James Hogan <jhogan@kernel.org> Cc: Ley Foon Tan <lftan@altera.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Rich Felker <dalias@libc.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Eric Biederman <ebiederm@xmission.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 00:22:59 +00:00
select ARCH_KEEP_MEMBLOCK if HAVE_ARCH_PFN_VALID || KEXEC
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_NO_SG_CHAIN if !ARM_HAS_SG_CHAIN
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ARCH_WANT_IPC_PARSE_VERSION
select BINFMT_FLAT_ARGVP_ENVP_ON_STACK
select BUILDTIME_TABLE_SORT if MMU
select CLONE_BACKWARDS
select CPU_PM if SUSPEND || CPU_IDLE
select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
select DMA_DECLARE_COHERENT
select DMA_REMAP if MMU
EDAC: Cleanup atomic_scrub mess So first of all, this atomic_scrub() function's naming is bad. It looks like an atomic_t helper. Change it to edac_atomic_scrub(). The bigger problem is that this function is arch-specific and every new arch which doesn't necessarily need that functionality still needs to define it, otherwise EDAC doesn't compile. So instead of doing that and including arch-specific headers, have each arch define an EDAC_ATOMIC_SCRUB symbol which can be used in edac_mc.c for ifdeffery. Much cleaner. And we already are doing this with another symbol - EDAC_SUPPORT. This is also much cleaner than having CONFIG_EDAC enumerate all the arches which need/have EDAC support and drivers. This way I can kill the useless edac.h header in tile too. Acked-by: Ralf Baechle <ralf@linux-mips.org> Acked-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Chris Metcalf <cmetcalf@ezchip.com> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Doug Thompson <dougthompson@xmission.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-edac@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linuxppc-dev@lists.ozlabs.org Cc: "Maciej W. Rozycki" <macro@codesourcery.com> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Cc: Paul Mackerras <paulus@samba.org> Cc: "Steven J. Hill" <Steven.Hill@imgtec.com> Cc: x86@kernel.org Signed-off-by: Borislav Petkov <bp@suse.de>
2015-05-21 17:59:31 +00:00
select EDAC_SUPPORT
select EDAC_ATOMIC_SCRUB
select GENERIC_ALLOCATOR
select GENERIC_ARCH_TOPOLOGY if ARM_CPU_TOPOLOGY
select GENERIC_ATOMIC64 if CPU_V7M || CPU_V6 || !CPU_32v6K || !AEABI
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select GENERIC_CLOCKEVENTS_BROADCAST if SMP
select GENERIC_CPU_AUTOPROBE
select GENERIC_EARLY_IOREMAP
select GENERIC_IDLE_POLL_SETUP
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select GENERIC_SMP_IDLE_THREAD
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
select HANDLE_DOMAIN_IRQ
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select HARDIRQS_SW_RESEND
select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
arm: mm: support ARCH_MMAP_RND_BITS arm: arch_mmap_rnd() uses a hard-code value of 8 to generate the random offset for the mmap base address. This value represents a compromise between increased ASLR effectiveness and avoiding address-space fragmentation. Replace it with a Kconfig option, which is sensibly bounded, so that platform developers may choose where to place this compromise. Keep 8 as the minimum acceptable value. [arnd@arndb.de: ARM: avoid ARCH_MMAP_RND_BITS for NOMMU] Signed-off-by: Daniel Cashman <dcashman@google.com> Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Kees Cook <keescook@chromium.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Don Zickus <dzickus@redhat.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Heinrich Schuchardt <xypron.glpk@gmx.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: David Rientjes <rientjes@google.com> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Jeff Vander Stoep <jeffv@google.com> Cc: Nick Kralevich <nnk@google.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Hector Marco-Gisbert <hecmargi@upv.es> Cc: Borislav Petkov <bp@suse.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-14 23:19:57 +00:00
select HAVE_ARCH_MMAP_RND_BITS if MMU
select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
select HAVE_ARCH_THREAD_STRUCT_WHITELIST
select HAVE_ARCH_TRACEHOOK
select HAVE_ARM_SMCCC if CPU_V7
select HAVE_EBPF_JIT if !CPU_ENDIAN_BE32
select HAVE_CONTEXT_TRACKING
select HAVE_COPY_THREAD_TLS
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select HAVE_C_RECORDMCOUNT
select HAVE_DEBUG_KMEMLEAK if !XIP_KERNEL
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_DYNAMIC_FTRACE if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
exit_thread: remove empty bodies Define HAVE_EXIT_THREAD for archs which want to do something in exit_thread. For others, let's define exit_thread as an empty inline. This is a cleanup before we change the prototype of exit_thread to accept a task parameter. [akpm@linux-foundation.org: fix mips] Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: "David S. Miller" <davem@davemloft.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Aurelien Jacquiot <a-jacquiot@ti.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chen Liqin <liqin.linux@gmail.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Chris Zankel <chris@zankel.net> Cc: David Howells <dhowells@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Haavard Skinnemoen <hskinnemoen@gmail.com> Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: James Hogan <james.hogan@imgtec.com> Cc: Jeff Dike <jdike@addtoit.com> Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Jonas Bonn <jonas@southpole.se> Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com> Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mikael Starvik <starvik@axis.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Rich Felker <dalias@libc.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Russell King <linux@arm.linux.org.uk> Cc: Steven Miao <realmz6@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-21 00:00:16 +00:00
select HAVE_EXIT_THREAD
select HAVE_FAST_GUP if ARM_LPAE
select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL && !CC_IS_CLANG
ARM: 8905/1: Emit __gnu_mcount_nc when using Clang 10.0.0 or newer Currently, multi_v7_defconfig + CONFIG_FUNCTION_TRACER fails to build with clang: arm-linux-gnueabi-ld: kernel/softirq.o: in function `_local_bh_enable': softirq.c:(.text+0x504): undefined reference to `mcount' arm-linux-gnueabi-ld: kernel/softirq.o: in function `__local_bh_enable_ip': softirq.c:(.text+0x58c): undefined reference to `mcount' arm-linux-gnueabi-ld: kernel/softirq.o: in function `do_softirq': softirq.c:(.text+0x6c8): undefined reference to `mcount' arm-linux-gnueabi-ld: kernel/softirq.o: in function `irq_enter': softirq.c:(.text+0x75c): undefined reference to `mcount' arm-linux-gnueabi-ld: kernel/softirq.o: in function `irq_exit': softirq.c:(.text+0x840): undefined reference to `mcount' arm-linux-gnueabi-ld: kernel/softirq.o:softirq.c:(.text+0xa50): more undefined references to `mcount' follow clang can emit a working mcount symbol, __gnu_mcount_nc, when '-meabi gnu' is passed to it. Until r369147 in LLVM, this was broken and caused the kernel not to boot with '-pg' because the calling convention was not correct. Always build with '-meabi gnu' when using clang but ensure that '-pg' (which is added with CONFIG_FUNCTION_TRACER and its prereq CONFIG_HAVE_FUNCTION_TRACER) cannot be added with it unless this is fixed (which means using clang 10.0.0 and newer). Link: https://github.com/ClangBuiltLinux/linux/issues/35 Link: https://bugs.llvm.org/show_bug.cgi?id=33845 Link: https://github.com/llvm/llvm-project/commit/16fa8b09702378bacfa3d07081afe6b353b99e60 Reviewed-by: Matthias Kaehlcke <mka@chromium.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-09-04 00:13:15 +00:00
select HAVE_FUNCTION_TRACER if !XIP_KERNEL && (CC_IS_GCC || CLANG_VERSION >= 100000)
GCC plugin infrastructure This patch allows to build the whole kernel with GCC plugins. It was ported from grsecurity/PaX. The infrastructure supports building out-of-tree modules and building in a separate directory. Cross-compilation is supported too. Currently the x86, arm, arm64 and uml architectures enable plugins. The directory of the gcc plugins is scripts/gcc-plugins. You can use a file or a directory there. The plugins compile with these options: * -fno-rtti: gcc is compiled with this option so the plugins must use it too * -fno-exceptions: this is inherited from gcc too * -fasynchronous-unwind-tables: this is inherited from gcc too * -ggdb: it is useful for debugging a plugin (better backtrace on internal errors) * -Wno-narrowing: to suppress warnings from gcc headers (ipa-utils.h) * -Wno-unused-variable: to suppress warnings from gcc headers (gcc_version variable, plugin-version.h) The infrastructure introduces a new Makefile target called gcc-plugins. It supports all gcc versions from 4.5 to 6.0. The scripts/gcc-plugin.sh script chooses the proper host compiler (gcc-4.7 can be built by either gcc or g++). This script also checks the availability of the included headers in scripts/gcc-plugins/gcc-common.h. The gcc-common.h header contains frequently included headers for GCC plugins and it has a compatibility layer for the supported gcc versions. The gcc-generate-*-pass.h headers automatically generate the registration structures for GIMPLE, SIMPLE_IPA, IPA and RTL passes. Note that 'make clean' keeps the *.so files (only the distclean or mrproper targets clean all) because they are needed for out-of-tree modules. Based on work created by the PaX Team. Signed-off-by: Emese Revfy <re.emese@gmail.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Michal Marek <mmarek@suse.com>
2016-05-23 22:09:38 +00:00
select HAVE_GCC_PLUGINS
select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select HAVE_IDE if PCI || ISA || PCMCIA
select HAVE_IRQ_TIME_ACCOUNTING
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZ4
select HAVE_KERNEL_LZMA
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select HAVE_KERNEL_LZO
select HAVE_KERNEL_XZ
select HAVE_KPROBES if !XIP_KERNEL && !CPU_ENDIAN_BE32 && !CPU_V7M
select HAVE_KRETPROBES if HAVE_KPROBES
select HAVE_MOD_ARCH_SPECIFIC
printk/nmi: generic solution for safe printk in NMI printk() takes some locks and could not be used a safe way in NMI context. The chance of a deadlock is real especially when printing stacks from all CPUs. This particular problem has been addressed on x86 by the commit a9edc8809328 ("x86/nmi: Perform a safe NMI stack trace on all CPUs"). The patchset brings two big advantages. First, it makes the NMI backtraces safe on all architectures for free. Second, it makes all NMI messages almost safe on all architectures (the temporary buffer is limited. We still should keep the number of messages in NMI context at minimum). Note that there already are several messages printed in NMI context: WARN_ON(in_nmi()), BUG_ON(in_nmi()), anything being printed out from MCE handlers. These are not easy to avoid. This patch reuses most of the code and makes it generic. It is useful for all messages and architectures that support NMI. The alternative printk_func is set when entering and is reseted when leaving NMI context. It queues IRQ work to copy the messages into the main ring buffer in a safe context. __printk_nmi_flush() copies all available messages and reset the buffer. Then we could use a simple cmpxchg operations to get synchronized with writers. There is also used a spinlock to get synchronized with other flushers. We do not longer use seq_buf because it depends on external lock. It would be hard to make all supported operations safe for a lockless use. It would be confusing and error prone to make only some operations safe. The code is put into separate printk/nmi.c as suggested by Steven Rostedt. It needs a per-CPU buffer and is compiled only on architectures that call nmi_enter(). This is achieved by the new HAVE_NMI Kconfig flag. The are MN10300 and Xtensa architectures. We need to clean up NMI handling there first. Let's do it separately. The patch is heavily based on the draft from Peter Zijlstra, see https://lkml.org/lkml/2015/6/10/327 [arnd@arndb.de: printk-nmi: use %zu format string for size_t] [akpm@linux-foundation.org: min_t->min - all types are size_t here] Signed-off-by: Petr Mladek <pmladek@suse.com> Suggested-by: Peter Zijlstra <peterz@infradead.org> Suggested-by: Steven Rostedt <rostedt@goodmis.org> Cc: Jan Kara <jack@suse.cz> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> [arm part] Cc: Daniel Thompson <daniel.thompson@linaro.org> Cc: Jiri Kosina <jkosina@suse.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: David Miller <davem@davemloft.net> Cc: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-21 00:00:33 +00:00
select HAVE_NMI
select HAVE_OPROFILE if HAVE_PERF_EVENTS
select HAVE_OPTPROBES if !THUMB2_KERNEL
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select MMU_GATHER_RCU_TABLE_FREE if SMP && ARM_LPAE
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RSEQ
select HAVE_STACKPROTECTOR
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_UID16
select HAVE_VIRT_CPU_ACCOUNTING_GEN
select IRQ_FORCED_THREADING
select MODULES_USE_ELF_REL
select NEED_DMA_MAP_STATE
select OF_EARLY_FLATTREE if OF
select OLD_SIGACTION
select OLD_SIGSUSPEND3
select PCI_SYSCALL if PCI
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select PERF_USE_VMALLOC
select RTC_LIB
select SYS_SUPPORTS_APM_EMULATION
# Above selects are sorted alphabetically; please add new ones
# according to that. Thanks.
help
The ARM series is a line of low-power-consumption RISC chip designs
licensed by ARM Ltd and targeted at embedded applications and
handhelds such as the Compaq IPAQ. ARM-based PCs are no longer
manufactured, but legacy ARM-based PC hardware remains popular in
Europe. There is an ARM Linux project with a web page at
<http://www.arm.linux.org.uk/>.
config ARM_HAS_SG_CHAIN
bool
ARM: dma-mapping: add support for IOMMU mapper This patch add a complete implementation of DMA-mapping API for devices which have IOMMU support. This implementation tries to optimize dma address space usage by remapping all possible physical memory chunks into a single dma address space chunk. DMA address space is managed on top of the bitmap stored in the dma_iommu_mapping structure stored in device->archdata. Platform setup code has to initialize parameters of the dma address space (base address, size, allocation precision order) with arm_iommu_create_mapping() function. To reduce the size of the bitmap, all allocations are aligned to the specified order of base 4 KiB pages. dma_alloc_* functions allocate physical memory in chunks, each with alloc_pages() function to avoid failing if the physical memory gets fragmented. In worst case the allocated buffer is composed of 4 KiB page chunks. dma_map_sg() function minimizes the total number of dma address space chunks by merging of physical memory chunks into one larger dma address space chunk. If requested chunk (scatter list entry) boundaries match physical page boundaries, most calls to dma_map_sg() requests will result in creating only one chunk in dma address space. dma_map_page() simply creates a mapping for the given page(s) in the dma address space. All dma functions also perform required cache operation like their counterparts from the arm linear physical memory mapping version. This patch contains code and fixes kindly provided by: - Krishna Reddy <vdumpa@nvidia.com>, - Andrzej Pietrasiewicz <andrzej.p@samsung.com>, - Hiroshi DOYU <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
2012-05-16 13:48:21 +00:00
config ARM_DMA_USE_IOMMU
bool
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ARM_HAS_SG_CHAIN
select NEED_SG_DMA_LENGTH
ARM: dma-mapping: add support for IOMMU mapper This patch add a complete implementation of DMA-mapping API for devices which have IOMMU support. This implementation tries to optimize dma address space usage by remapping all possible physical memory chunks into a single dma address space chunk. DMA address space is managed on top of the bitmap stored in the dma_iommu_mapping structure stored in device->archdata. Platform setup code has to initialize parameters of the dma address space (base address, size, allocation precision order) with arm_iommu_create_mapping() function. To reduce the size of the bitmap, all allocations are aligned to the specified order of base 4 KiB pages. dma_alloc_* functions allocate physical memory in chunks, each with alloc_pages() function to avoid failing if the physical memory gets fragmented. In worst case the allocated buffer is composed of 4 KiB page chunks. dma_map_sg() function minimizes the total number of dma address space chunks by merging of physical memory chunks into one larger dma address space chunk. If requested chunk (scatter list entry) boundaries match physical page boundaries, most calls to dma_map_sg() requests will result in creating only one chunk in dma address space. dma_map_page() simply creates a mapping for the given page(s) in the dma address space. All dma functions also perform required cache operation like their counterparts from the arm linear physical memory mapping version. This patch contains code and fixes kindly provided by: - Krishna Reddy <vdumpa@nvidia.com>, - Andrzej Pietrasiewicz <andrzej.p@samsung.com>, - Hiroshi DOYU <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
2012-05-16 13:48:21 +00:00
if ARM_DMA_USE_IOMMU
config ARM_DMA_IOMMU_ALIGNMENT
int "Maximum PAGE_SIZE order of alignment for DMA IOMMU buffers"
range 4 9
default 8
help
DMA mapping framework by default aligns all buffers to the smallest
PAGE_SIZE order which is greater than or equal to the requested buffer
size. This works well for buffers up to a few hundreds kilobytes, but
for larger buffers it just a waste of address space. Drivers which has
relatively small addressing window (like 64Mib) might run out of
virtual space with just a few allocations.
With this parameter you can specify the maximum PAGE_SIZE order for
DMA IOMMU buffers. Larger buffers will be aligned only to this
specified order. The order is expressed as a power of two multiplied
by the PAGE_SIZE.
endif
config SYS_SUPPORTS_APM_EMULATION
bool
config HAVE_TCM
bool
select GENERIC_ALLOCATOR
config HAVE_PROC_CPU
bool
config NO_IOPORT_MAP
bool
config SBUS
bool
config STACKTRACE_SUPPORT
bool
default y
config LOCKDEP_SUPPORT
bool
default y
config TRACE_IRQFLAGS_SUPPORT
bool
default !CPU_V7M
config ARCH_HAS_ILOG2_U32
bool
config ARCH_HAS_ILOG2_U64
bool
config ARCH_HAS_BANDGAP
bool
config FIX_EARLYCON_MEM
def_bool y if MMU
config GENERIC_HWEIGHT
bool
default y
config GENERIC_CALIBRATE_DELAY
bool
default y
config ARCH_MAY_HAVE_PC_FDC
bool
config ZONE_DMA
bool
config ARCH_SUPPORTS_UPROBES
def_bool y
config ARCH_HAS_DMA_SET_COHERENT_MASK
bool
config GENERIC_ISA_DMA
bool
config FIQ
bool
config NEED_RET_TO_USER
bool
config ARCH_MTD_XIP
bool
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching This idea came from Nicolas, Eric Miao produced an initial version, which was then rewritten into this. Patch the physical to virtual translations at runtime. As we modify the code, this makes it incompatible with XIP kernels, but allows us to achieve this with minimal loss of performance. As many translations are of the form: physical = virtual + (PHYS_OFFSET - PAGE_OFFSET) virtual = physical - (PHYS_OFFSET - PAGE_OFFSET) we generate an 'add' instruction for __virt_to_phys(), and a 'sub' instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET - PAGE_OFFSET) by comparing the address prior to MMU initialization with where it should be once the MMU has been initialized, and place this constant into the above add/sub instructions. Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for the C-mode PHYS_OFFSET variable definition to use. At present, we are unable to support Realview with Sparsemem enabled as this uses a complex mapping function, and MSM as this requires a constant which will not fit in our math instruction. Add a module version magic string for this feature to prevent incompatible modules being loaded. Tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org> Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-04 19:09:43 +00:00
config ARM_PATCH_PHYS_VIRT
bool "Patch physical to virtual translations at runtime" if EMBEDDED
default y
depends on !XIP_KERNEL && MMU
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching This idea came from Nicolas, Eric Miao produced an initial version, which was then rewritten into this. Patch the physical to virtual translations at runtime. As we modify the code, this makes it incompatible with XIP kernels, but allows us to achieve this with minimal loss of performance. As many translations are of the form: physical = virtual + (PHYS_OFFSET - PAGE_OFFSET) virtual = physical - (PHYS_OFFSET - PAGE_OFFSET) we generate an 'add' instruction for __virt_to_phys(), and a 'sub' instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET - PAGE_OFFSET) by comparing the address prior to MMU initialization with where it should be once the MMU has been initialized, and place this constant into the above add/sub instructions. Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for the C-mode PHYS_OFFSET variable definition to use. At present, we are unable to support Realview with Sparsemem enabled as this uses a complex mapping function, and MSM as this requires a constant which will not fit in our math instruction. Add a module version magic string for this feature to prevent incompatible modules being loaded. Tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org> Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-04 19:09:43 +00:00
help
Patch phys-to-virt and virt-to-phys translation functions at
boot and module load time according to the position of the
kernel in system memory.
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching This idea came from Nicolas, Eric Miao produced an initial version, which was then rewritten into this. Patch the physical to virtual translations at runtime. As we modify the code, this makes it incompatible with XIP kernels, but allows us to achieve this with minimal loss of performance. As many translations are of the form: physical = virtual + (PHYS_OFFSET - PAGE_OFFSET) virtual = physical - (PHYS_OFFSET - PAGE_OFFSET) we generate an 'add' instruction for __virt_to_phys(), and a 'sub' instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET - PAGE_OFFSET) by comparing the address prior to MMU initialization with where it should be once the MMU has been initialized, and place this constant into the above add/sub instructions. Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for the C-mode PHYS_OFFSET variable definition to use. At present, we are unable to support Realview with Sparsemem enabled as this uses a complex mapping function, and MSM as this requires a constant which will not fit in our math instruction. Add a module version magic string for this feature to prevent incompatible modules being loaded. Tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org> Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-04 19:09:43 +00:00
This can only be used with non-XIP MMU kernels where the base
of physical memory is at a 16MB boundary.
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching This idea came from Nicolas, Eric Miao produced an initial version, which was then rewritten into this. Patch the physical to virtual translations at runtime. As we modify the code, this makes it incompatible with XIP kernels, but allows us to achieve this with minimal loss of performance. As many translations are of the form: physical = virtual + (PHYS_OFFSET - PAGE_OFFSET) virtual = physical - (PHYS_OFFSET - PAGE_OFFSET) we generate an 'add' instruction for __virt_to_phys(), and a 'sub' instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET - PAGE_OFFSET) by comparing the address prior to MMU initialization with where it should be once the MMU has been initialized, and place this constant into the above add/sub instructions. Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for the C-mode PHYS_OFFSET variable definition to use. At present, we are unable to support Realview with Sparsemem enabled as this uses a complex mapping function, and MSM as this requires a constant which will not fit in our math instruction. Add a module version magic string for this feature to prevent incompatible modules being loaded. Tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org> Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-04 19:09:43 +00:00
Only disable this option if you know that you do not require
this feature (eg, building a kernel for a single machine) and
you need to shrink the kernel to the minimal size.
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching This idea came from Nicolas, Eric Miao produced an initial version, which was then rewritten into this. Patch the physical to virtual translations at runtime. As we modify the code, this makes it incompatible with XIP kernels, but allows us to achieve this with minimal loss of performance. As many translations are of the form: physical = virtual + (PHYS_OFFSET - PAGE_OFFSET) virtual = physical - (PHYS_OFFSET - PAGE_OFFSET) we generate an 'add' instruction for __virt_to_phys(), and a 'sub' instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET - PAGE_OFFSET) by comparing the address prior to MMU initialization with where it should be once the MMU has been initialized, and place this constant into the above add/sub instructions. Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for the C-mode PHYS_OFFSET variable definition to use. At present, we are unable to support Realview with Sparsemem enabled as this uses a complex mapping function, and MSM as this requires a constant which will not fit in our math instruction. Add a module version magic string for this feature to prevent incompatible modules being loaded. Tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org> Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-04 19:09:43 +00:00
config NEED_MACH_IO_H
bool
help
Select this when mach/io.h is required to provide special
definitions for this platform. The need for mach/io.h should
be avoided when possible.
config NEED_MACH_MEMORY_H
bool
help
Select this when mach/memory.h is required to provide special
definitions for this platform. The need for mach/memory.h should
be avoided when possible.
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching This idea came from Nicolas, Eric Miao produced an initial version, which was then rewritten into this. Patch the physical to virtual translations at runtime. As we modify the code, this makes it incompatible with XIP kernels, but allows us to achieve this with minimal loss of performance. As many translations are of the form: physical = virtual + (PHYS_OFFSET - PAGE_OFFSET) virtual = physical - (PHYS_OFFSET - PAGE_OFFSET) we generate an 'add' instruction for __virt_to_phys(), and a 'sub' instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET - PAGE_OFFSET) by comparing the address prior to MMU initialization with where it should be once the MMU has been initialized, and place this constant into the above add/sub instructions. Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for the C-mode PHYS_OFFSET variable definition to use. At present, we are unable to support Realview with Sparsemem enabled as this uses a complex mapping function, and MSM as this requires a constant which will not fit in our math instruction. Add a module version magic string for this feature to prevent incompatible modules being loaded. Tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org> Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-04 19:09:43 +00:00
config PHYS_OFFSET
hex "Physical address of main memory" if MMU
depends on !ARM_PATCH_PHYS_VIRT
default DRAM_BASE if !MMU
default 0x00000000 if ARCH_EBSA110 || \
ARCH_FOOTBRIDGE || \
ARCH_INTEGRATOR || \
ARCH_REALVIEW
default 0x10000000 if ARCH_OMAP1 || ARCH_RPC
default 0x20000000 if ARCH_S5PV210
default 0xc0000000 if ARCH_SA1100
help
Please provide the physical address corresponding to the
location of main memory in your system.
ARM: 7017/1: Use generic BUG() handler ARM uses its own BUG() handler which makes its output slightly different from other archtectures. One of the problems is that the ARM implementation doesn't report the function with the BUG() in it, but always reports the PC being in __bug(). The generic implementation doesn't have this problem. Currently we get something like: kernel BUG at fs/proc/breakme.c:35! Unable to handle kernel NULL pointer dereference at virtual address 00000000 ... PC is at __bug+0x20/0x2c With this patch it displays: kernel BUG at fs/proc/breakme.c:35! Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP ... PC is at write_breakme+0xd0/0x1b4 This implementation uses an undefined instruction to implement BUG, and sets up a bug table containing the relevant information. Many versions of gcc do not support %c properly for ARM (inserting a # when they shouldn't) so we work around this using distasteful macro magic. v1: Initial version to replace existing ARM BUG() implementation with something more similar to other architectures. v2: Add Thumb support, remove backtrace whitespace output changes. Change to use macros instead of requiring the asm %d flag to work (thanks to Dave Martin <dave.martin@linaro.org>) v3: Remove old BUG() implementation in favor of this one. Remove the Backtrace: message (will submit this separately). Use ARM_EXIT_KEEP() so that some architectures can dump exit text at link time thanks to Stephen Boyd <sboyd@codeaurora.org> (although since we always define GENERIC_BUG this might be academic.) Rebase to linux-2.6.git master. v4: Allow BUGS in modules (these were not reported correctly in v3) (thanks to Stephen Boyd <sboyd@codeaurora.org> for suggesting that.) Remove __bug() as this is no longer needed. v5: Add %progbits as the section flags. Signed-off-by: Simon Glass <sjg@chromium.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Tested-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-08-16 22:44:26 +00:00
config GENERIC_BUG
def_bool y
depends on BUG
config PGTABLE_LEVELS
int
default 3 if ARM_LPAE
default 2
menu "System Type"
config MMU
bool "MMU-based Paged Memory Management Support"
default y
help
Select if you want MMU-based virtualised addressing space
support by paged memory management. If unsure, say 'Y'.
arm: mm: support ARCH_MMAP_RND_BITS arm: arch_mmap_rnd() uses a hard-code value of 8 to generate the random offset for the mmap base address. This value represents a compromise between increased ASLR effectiveness and avoiding address-space fragmentation. Replace it with a Kconfig option, which is sensibly bounded, so that platform developers may choose where to place this compromise. Keep 8 as the minimum acceptable value. [arnd@arndb.de: ARM: avoid ARCH_MMAP_RND_BITS for NOMMU] Signed-off-by: Daniel Cashman <dcashman@google.com> Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Kees Cook <keescook@chromium.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Don Zickus <dzickus@redhat.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Heinrich Schuchardt <xypron.glpk@gmx.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: David Rientjes <rientjes@google.com> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Jeff Vander Stoep <jeffv@google.com> Cc: Nick Kralevich <nnk@google.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Hector Marco-Gisbert <hecmargi@upv.es> Cc: Borislav Petkov <bp@suse.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-14 23:19:57 +00:00
config ARCH_MMAP_RND_BITS_MIN
default 8
config ARCH_MMAP_RND_BITS_MAX
default 14 if PAGE_OFFSET=0x40000000
default 15 if PAGE_OFFSET=0x80000000
default 16
#
# The "ARM system type" choice list is ordered alphabetically by option
# text. Please add new entries in the option alphabetic order.
#
choice
prompt "ARM system type"
default ARM_SINGLE_ARMV7M if !MMU
default ARCH_MULTIPLATFORM if MMU
config ARCH_MULTIPLATFORM
bool "Allow multiple platforms to be selected"
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
depends on MMU
select ARM_HAS_SG_CHAIN
select ARM_PATCH_PHYS_VIRT
select AUTO_ZRELADDR
select TIMER_OF
select COMMON_CLK
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_MULTI_HANDLER
select HAVE_PCI
select PCI_DOMAINS_GENERIC if PCI
select SPARSE_IRQ
select USE_OF
config ARM_SINGLE_ARMV7M
bool "ARMv7-M based platforms (Cortex-M0/M3/M4)"
depends on !MMU
select ARM_NVIC
select AUTO_ZRELADDR
select TIMER_OF
select COMMON_CLK
select CPU_V7M
select GENERIC_CLOCKEVENTS
select NO_IOPORT_MAP
select SPARSE_IRQ
select USE_OF
config ARCH_EBSA110
bool "EBSA-110"
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ARCH_USES_GETTIMEOFFSET
select CPU_SA110
select ISA
select NEED_MACH_IO_H
select NEED_MACH_MEMORY_H
select NO_IOPORT_MAP
help
This is an evaluation board for the StrongARM processor available
from Digital. It has limited hardware on-board, including an
Ethernet interface, two PCMCIA sockets, two serial ports and a
parallel port.
config ARCH_EP93XX
bool "EP93xx-based"
ARM: ep93xx: switch to SPARSEMEM The EP93xx has four chip selects that can be used for the SDRAM memory. These chip selects are decoded to specify an address domain: SDCS3 0x00000000-0x0fffffff with Boot Option ASDO=1 SDCS0 0xc0000000-0xcfffffff SDCS1 0xd0000000-0xdfffffff SDCS2 0xe0000000-x0efffffff SDCS3 0xf0000000-0xffffffff with Boot Option ASDO=0 Because of the row/column/bank architecture of SDRAM, the mapping of these memories into the processor's memory space is discontiguous. Most ep93xx systems only use one of the chip selects. For these systems, ARCH_HAS_HOLES_MEMORYMODEL has worked fine to handle the discontiguous memory. But, some of the TS-72xx boards use multiple chip selects. The TS-7300 in particular uses SDCS3 (with ASDO=1) and SDCS2. On that system with ARCH_HAS_HOLES_MEMORYMODEL the SDCS2 memory does not get handled correctly and results in the system not booting. Change the EP93xx to ARCH_SPARSEMEM_ENABLE. This handles the discontiguous memory for all configurations. This has been tested on the following ep93xx platforms: EDB9307A with 64 MiB on SDCS0 Vision EP9307 with 64 MiB on SDCS0 TS-7300 with 64 MiB on SDCS3 (with ASDO=1) and 64 MiB on SDCS2 sim.one with 64 MiB on SDCS0 Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Cc: Russell King <linux@armlinux.org.uk> Tested-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Alexander Sverdlin <alexander.sverdlin@gmail.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2017-09-03 17:43:44 +00:00
select ARCH_SPARSEMEM_ENABLE
select ARM_AMBA
imply ARM_PATCH_PHYS_VIRT
select ARM_VIC
select AUTO_ZRELADDR
select CLKDEV_LOOKUP
select CLKSRC_MMIO
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select CPU_ARM920T
select GENERIC_CLOCKEVENTS
select GPIOLIB
help
This enables support for the Cirrus EP93xx series of CPUs.
config ARCH_FOOTBRIDGE
bool "FootBridge"
select CPU_SA110
select FOOTBRIDGE
select GENERIC_CLOCKEVENTS
select HAVE_IDE
select NEED_MACH_IO_H if !MMU
select NEED_MACH_MEMORY_H
help
Support for systems based on the DC21285 companion chip
("FootBridge"), such as the Simtec CATS and the Rebel NetWinder.
config ARCH_IOP32X
bool "IOP32x-based"
depends on MMU
select CPU_XSCALE
select GPIO_IOP
select GPIOLIB
select NEED_RET_TO_USER
select FORCE_PCI
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select PLAT_IOP
help
Support for Intel's 80219 and IOP32X (XScale) family of
processors.
config ARCH_IXP4XX
bool "IXP4xx-based"
depends on MMU
select ARCH_HAS_DMA_SET_COHERENT_MASK
select ARCH_SUPPORTS_BIG_ENDIAN
select CPU_XSCALE
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select DMABOUNCE if PCI
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_MULTI_HANDLER
select GPIO_IXP4XX
select GPIOLIB
select HAVE_PCI
select IXP4XX_IRQ
select IXP4XX_TIMER
select NEED_MACH_IO_H
select USB_EHCI_BIG_ENDIAN_DESC
select USB_EHCI_BIG_ENDIAN_MMIO
help
Support for Intel's IXP4XX (XScale) family of processors.
config ARCH_DOVE
bool "Marvell Dove"
select CPU_PJ4
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_MULTI_HANDLER
select GPIOLIB
select HAVE_PCI
select MVEBU_MBUS
select PINCTRL
select PINCTRL_DOVE
arm: plat-orion: introduce PLAT_ORION_LEGACY hidden config option Until now, the PLAT_ORION configuration option was common to all the Marvell EBU SoCs, and selecting this option had the effect of enabling the MPP code, GPIO code, address decoding and PCIe code from plat-orion, as well as providing access to driver-specific header files from plat-orion/include. However, the Armada 370 and XP SoCs will not use the MPP and GPIO code (instead some proper pinctrl and gpio drivers are in preparation), and generally, we want to move away from plat-orion and instead have everything in mach-mvebu. That said, in the mean time, we want to leverage the driver-specific headers as well as the address decoding code, so we introduce PLAT_ORION_LEGACY. The older Marvell SoCs need to select PLAT_ORION_LEGACY, while the newer Marvell SoCs need to select PLAT_ORION. Of course, when PLAT_ORION_LEGACY is selected, it automatically selects PLAT_ORION. Then, with just PLAT_ORION, you have the address decoding code plus the driver-specific headers. If you add PLAT_ORION_LEGACY to this, you gain the old MPP, GPIO and PCIe code. Again, this is only a temporary solution until we make all Marvell EBU platforms converge into the mach-mvebu directory. This solution avoids duplicating the existing address decoding code into mach-mvebu. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2012-09-11 12:27:27 +00:00
select PLAT_ORION_LEGACY
select SPARSE_IRQ
select PM_GENERIC_DOMAINS if PM
help
Support for the Marvell Dove SoC 88AP510
config ARCH_PXA
bool "PXA2xx/PXA3xx-based"
depends on MMU
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ARCH_MTD_XIP
select ARM_CPU_SUSPEND if PM
select AUTO_ZRELADDR
select COMMON_CLK
select CLKDEV_LOOKUP
select CLKSRC_PXA
select CLKSRC_MMIO
select TIMER_OF
select CPU_XSCALE if !CPU_XSC3
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_MULTI_HANDLER
select GPIO_PXA
select GPIOLIB
select HAVE_IDE
select IRQ_DOMAIN
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select PLAT_PXA
select SPARSE_IRQ
help
Support for Intel/Marvell's PXA2xx/PXA3xx processor line.
config ARCH_RPC
bool "RiscPC"
depends on MMU
select ARCH_ACORN
select ARCH_MAY_HAVE_PC_FDC
select ARCH_SPARSEMEM_ENABLE
select ARM_HAS_SG_CHAIN
select CPU_SA110
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select FIQ
select HAVE_IDE
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select HAVE_PATA_PLATFORM
select ISA_DMA_API
select NEED_MACH_IO_H
select NEED_MACH_MEMORY_H
select NO_IOPORT_MAP
help
On the Acorn Risc-PC, Linux can support the internal IDE disk and
CD-ROM interface, serial and parallel port, and the floppy drive.
config ARCH_SA1100
bool "SA1100-based"
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ARCH_MTD_XIP
select ARCH_SPARSEMEM_ENABLE
select CLKDEV_LOOKUP
select CLKSRC_MMIO
select CLKSRC_PXA
select TIMER_OF if OF
select COMMON_CLK
select CPU_FREQ
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select CPU_SA1100
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_MULTI_HANDLER
select GPIOLIB
select HAVE_IDE
select IRQ_DOMAIN
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ISA
select NEED_MACH_MEMORY_H
select SPARSE_IRQ
help
Support for StrongARM 11x0 based boards.
config ARCH_S3C24XX
bool "Samsung S3C24XX SoCs"
select ATAGS
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select CLKDEV_LOOKUP
select CLKSRC_SAMSUNG_PWM
select GENERIC_CLOCKEVENTS
select GPIO_SAMSUNG
select GPIOLIB
select GENERIC_IRQ_MULTI_HANDLER
select HAVE_S3C2410_I2C if I2C
select HAVE_S3C2410_WATCHDOG if WATCHDOG
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select HAVE_S3C_RTC if RTC_CLASS
select NEED_MACH_IO_H
select SAMSUNG_ATAGS
select USE_OF
help
Samsung S3C2410, S3C2412, S3C2413, S3C2416, S3C2440, S3C2442, S3C2443
and S3C2450 SoCs based systems, such as the Simtec Electronics BAST
(<http://www.simtec.co.uk/products/EB110ITX/>), the IPAQ 1940 or the
Samsung SMDK2410 development board (and derivatives).
config ARCH_OMAP1
bool "TI OMAP1"
depends on MMU
select ARCH_HAS_HOLES_MEMORYMODEL
select ARCH_OMAP
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select CLKDEV_LOOKUP
select CLKSRC_MMIO
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_CHIP
select GENERIC_IRQ_MULTI_HANDLER
select GPIOLIB
select HAVE_IDE
select IRQ_DOMAIN
select NEED_MACH_IO_H if PCCARD
select NEED_MACH_MEMORY_H
select SPARSE_IRQ
help
Support for older TI OMAP1 (omap7xx, omap15xx or omap16xx)
endchoice
menu "Multiple platform selection"
depends on ARCH_MULTIPLATFORM
comment "CPU Core family selection"
config ARCH_MULTI_V4
bool "ARMv4 based platforms (FA526)"
depends on !ARCH_MULTI_V6_V7
select ARCH_MULTI_V4_V5
select CPU_FA526
config ARCH_MULTI_V4T
bool "ARMv4T based platforms (ARM720T, ARM920T, ...)"
depends on !ARCH_MULTI_V6_V7
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ARCH_MULTI_V4_V5
select CPU_ARM920T if !(CPU_ARM7TDMI || CPU_ARM720T || \
CPU_ARM740T || CPU_ARM9TDMI || CPU_ARM922T || \
CPU_ARM925T || CPU_ARM940T)
config ARCH_MULTI_V5
bool "ARMv5 based platforms (ARM926T, XSCALE, PJ1, ...)"
depends on !ARCH_MULTI_V6_V7
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ARCH_MULTI_V4_V5
select CPU_ARM926T if !(CPU_ARM946E || CPU_ARM1020 || \
CPU_ARM1020E || CPU_ARM1022 || CPU_ARM1026 || \
CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_FEROCEON)
config ARCH_MULTI_V4_V5
bool
config ARCH_MULTI_V6
bool "ARMv6 based platforms (ARM11)"
select ARCH_MULTI_V6_V7
select CPU_V6K
config ARCH_MULTI_V7
bool "ARMv7 based platforms (Cortex-A, PJ4, Scorpion, Krait)"
default y
select ARCH_MULTI_V6_V7
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select CPU_V7
select HAVE_SMP
config ARCH_MULTI_V6_V7
bool
select MIGHT_HAVE_CACHE_L2X0
config ARCH_MULTI_CPU_AUTO
def_bool !(ARCH_MULTI_V4 || ARCH_MULTI_V4T || ARCH_MULTI_V6_V7)
select ARCH_MULTI_V5
endmenu
config ARCH_VIRT
ARM: use "depends on" for SoC configs instead of "if" after prompt Many ARM sub-architectures use prompts followed by "if" conditional, but it is wrong. Please notice the difference between config ARCH_FOO bool "Foo SoCs" if ARCH_MULTI_V7 and config ARCH_FOO bool "Foo SoCs" depends on ARCH_MULTI_V7 These two are *not* equivalent! In the former statement, it is not ARCH_FOO, but its prompt that depends on ARCH_MULTI_V7. So, it is completely valid that ARCH_FOO is selected by another, but ARCH_MULTI_V7 is still disabled. As it is not unmet dependency, Kconfig never warns. This is probably not what you want. The former should be used only when you need to do so, and you really understand what you are doing. (In most cases, it should be wrong!) For enabling/disabling sub-architectures, the latter is always correct. As a good side effect, this commit fixes some entries over 80 columns (mach-imx, mach-integrator, mach-mbevu). [Arnd: I note that there is not really a bug here, according to the discussion that followed, but I can see value in being consistent and in making the lines shorter] Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Acked-by: Heiko Stuebner <heiko@sntech.de> Acked-by: Patrice Chotard <patrice.chotard@st.com> Acked-by: Liviu Dudau <Liviu.Dudau@arm.com> Acked-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Acked-by: Jun Nie <jun.nie@linaro.org> Acked-by: Matthias Brugger <matthias.bgg@gmail.com> Acked-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Shawn Guo <shawnguo@kernel.org> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Acked-by: Krzysztof Halasa <khc@piap.pl> Acked-by: Maxime Coquelin <maxime.coquelin@st.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2015-11-16 03:06:10 +00:00
bool "Dummy Virtual Machine"
depends on ARCH_MULTI_V7
select ARM_AMBA
select ARM_GIC
PCI/MSI: irqchip: Fix PCI_MSI dependencies The PCI_MSI symbol is used inconsistently throughout the tree, with some drivers using 'select' and others using 'depends on', or using conditional selects. This keeps causing problems; the latest one is a result of ARCH_ALPINE using a 'select' statement to enable its platform-specific MSI driver without enabling MSI: warning: (ARCH_ALPINE) selects ALPINE_MSI which has unmet direct dependencies (PCI && PCI_MSI) drivers/irqchip/irq-alpine-msi.c:104:15: error: variable 'alpine_msix_domain_info' has initializer but incomplete type static struct msi_domain_info alpine_msix_domain_info = { ^~~~~~~~~~~~~~~ drivers/irqchip/irq-alpine-msi.c:105:2: error: unknown field 'flags' specified in initializer .flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | ^ drivers/irqchip/irq-alpine-msi.c:105:11: error: 'MSI_FLAG_USE_DEF_DOM_OPS' undeclared here (not in a function) .flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | ^~~~~~~~~~~~~~~~~~~~~~~~ There is little reason to enable PCI support for a platform that uses MSI but then leave MSI disabled at compile time. Select PCI_MSI from irqchips that implement MSI, and make PCI host bridges that use MSI on ARM depend on PCI_MSI_IRQ_DOMAIN. For all three architectures that support PCI_MSI_IRQ_DOMAIN (ARM, ARM64, X86), enable it by default whenever MSI is enabled. [bhelgaas: changelog, omit crypto config change] Suggested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
2016-06-15 20:47:33 +00:00
select ARM_GIC_V2M if PCI
select ARM_GIC_V3
select ARM_GIC_V3_ITS if PCI
select ARM_PSCI
select HAVE_ARM_ARCH_TIMER
select ARCH_SUPPORTS_BIG_ENDIAN
#
# This is sorted alphabetically by mach-* pathname. However, plat-*
# Kconfigs may be included either alphabetically (according to the
# plat- suffix) or along side the corresponding mach-* source.
#
source "arch/arm/mach-actions/Kconfig"
source "arch/arm/mach-alpine/Kconfig"
source "arch/arm/mach-artpec/Kconfig"
source "arch/arm/mach-asm9260/Kconfig"
source "arch/arm/mach-aspeed/Kconfig"
source "arch/arm/mach-at91/Kconfig"
source "arch/arm/mach-axxia/Kconfig"
source "arch/arm/mach-bcm/Kconfig"
source "arch/arm/mach-berlin/Kconfig"
source "arch/arm/mach-clps711x/Kconfig"
source "arch/arm/mach-cns3xxx/Kconfig"
source "arch/arm/mach-davinci/Kconfig"
source "arch/arm/mach-digicolor/Kconfig"
source "arch/arm/mach-dove/Kconfig"
source "arch/arm/mach-ep93xx/Kconfig"
source "arch/arm/mach-exynos/Kconfig"
source "arch/arm/plat-samsung/Kconfig"
source "arch/arm/mach-footbridge/Kconfig"
source "arch/arm/mach-gemini/Kconfig"
source "arch/arm/mach-highbank/Kconfig"
source "arch/arm/mach-hisi/Kconfig"
source "arch/arm/mach-imx/Kconfig"
source "arch/arm/mach-integrator/Kconfig"
source "arch/arm/mach-iop32x/Kconfig"
source "arch/arm/mach-ixp4xx/Kconfig"
source "arch/arm/mach-keystone/Kconfig"
source "arch/arm/mach-lpc32xx/Kconfig"
source "arch/arm/mach-mediatek/Kconfig"
source "arch/arm/mach-meson/Kconfig"
source "arch/arm/mach-milbeaut/Kconfig"
source "arch/arm/mach-mmp/Kconfig"
source "arch/arm/mach-moxart/Kconfig"
source "arch/arm/mach-mv78xx0/Kconfig"
source "arch/arm/mach-mvebu/Kconfig"
source "arch/arm/mach-mxs/Kconfig"
source "arch/arm/mach-nomadik/Kconfig"
source "arch/arm/mach-npcm/Kconfig"
source "arch/arm/mach-nspire/Kconfig"
source "arch/arm/plat-omap/Kconfig"
source "arch/arm/mach-omap1/Kconfig"
source "arch/arm/mach-omap2/Kconfig"
source "arch/arm/mach-orion5x/Kconfig"
source "arch/arm/mach-oxnas/Kconfig"
source "arch/arm/mach-picoxcell/Kconfig"
source "arch/arm/mach-prima2/Kconfig"
source "arch/arm/mach-pxa/Kconfig"
source "arch/arm/plat-pxa/Kconfig"
source "arch/arm/mach-qcom/Kconfig"
source "arch/arm/mach-rda/Kconfig"
source "arch/arm/mach-realview/Kconfig"
source "arch/arm/mach-rockchip/Kconfig"
source "arch/arm/mach-s3c24xx/Kconfig"
source "arch/arm/mach-s3c64xx/Kconfig"
source "arch/arm/mach-s5pv210/Kconfig"
source "arch/arm/mach-sa1100/Kconfig"
source "arch/arm/mach-shmobile/Kconfig"
source "arch/arm/mach-socfpga/Kconfig"
source "arch/arm/mach-spear/Kconfig"
source "arch/arm/mach-sti/Kconfig"
source "arch/arm/mach-stm32/Kconfig"
source "arch/arm/mach-sunxi/Kconfig"
source "arch/arm/mach-tango/Kconfig"
source "arch/arm/mach-tegra/Kconfig"
source "arch/arm/mach-u300/Kconfig"
source "arch/arm/mach-uniphier/Kconfig"
source "arch/arm/mach-ux500/Kconfig"
source "arch/arm/mach-versatile/Kconfig"
source "arch/arm/mach-vexpress/Kconfig"
source "arch/arm/plat-versatile/Kconfig"
source "arch/arm/mach-vt8500/Kconfig"
source "arch/arm/mach-zx/Kconfig"
source "arch/arm/mach-zynq/Kconfig"
# ARMv7-M architecture
config ARCH_EFM32
bool "Energy Micro efm32"
depends on ARM_SINGLE_ARMV7M
select GPIOLIB
help
Support for Energy Micro's (now Silicon Labs) efm32 Giant Gecko
processors.
config ARCH_LPC18XX
bool "NXP LPC18xx/LPC43xx"
depends on ARM_SINGLE_ARMV7M
select ARCH_HAS_RESET_CONTROLLER
select ARM_AMBA
select CLKSRC_LPC32XX
select PINCTRL
help
Support for NXP's LPC18xx Cortex-M3 and LPC43xx Cortex-M4
high performance microcontrollers.
config ARCH_MPS2
bool "ARM MPS2 platform"
depends on ARM_SINGLE_ARMV7M
select ARM_AMBA
select CLKSRC_MPS2
help
Support for Cortex-M Prototyping System (or V2M-MPS2) which comes
with a range of available cores like Cortex-M3/M4/M7.
Please, note that depends which Application Note is used memory map
for the platform may vary, so adjustment of RAM base might be needed.
# Definitions to make life easier
config ARCH_ACORN
bool
config PLAT_IOP
bool
select GENERIC_CLOCKEVENTS
config PLAT_ORION
bool
select CLKSRC_MMIO
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select COMMON_CLK
select GENERIC_IRQ_CHIP
select IRQ_DOMAIN
arm: plat-orion: introduce PLAT_ORION_LEGACY hidden config option Until now, the PLAT_ORION configuration option was common to all the Marvell EBU SoCs, and selecting this option had the effect of enabling the MPP code, GPIO code, address decoding and PCIe code from plat-orion, as well as providing access to driver-specific header files from plat-orion/include. However, the Armada 370 and XP SoCs will not use the MPP and GPIO code (instead some proper pinctrl and gpio drivers are in preparation), and generally, we want to move away from plat-orion and instead have everything in mach-mvebu. That said, in the mean time, we want to leverage the driver-specific headers as well as the address decoding code, so we introduce PLAT_ORION_LEGACY. The older Marvell SoCs need to select PLAT_ORION_LEGACY, while the newer Marvell SoCs need to select PLAT_ORION. Of course, when PLAT_ORION_LEGACY is selected, it automatically selects PLAT_ORION. Then, with just PLAT_ORION, you have the address decoding code plus the driver-specific headers. If you add PLAT_ORION_LEGACY to this, you gain the old MPP, GPIO and PCIe code. Again, this is only a temporary solution until we make all Marvell EBU platforms converge into the mach-mvebu directory. This solution avoids duplicating the existing address decoding code into mach-mvebu. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2012-09-11 12:27:27 +00:00
config PLAT_ORION_LEGACY
bool
select PLAT_ORION
config PLAT_PXA
bool
config PLAT_VERSATILE
bool
source "arch/arm/mm/Kconfig"
[ARM] 3881/4: xscale: clean up cp0/cp1 handling XScale cores either have a DSP coprocessor (which contains a single 40 bit accumulator register), or an iWMMXt coprocessor (which contains eight 64 bit registers.) Because of the small amount of state in the DSP coprocessor, access to the DSP coprocessor (CP0) is always enabled, and DSP context switching is done unconditionally on every task switch. Access to the iWMMXt coprocessor (CP0/CP1) is enabled only when an iWMMXt instruction is first issued, and iWMMXt context switching is done lazily. CONFIG_IWMMXT is supposed to mean 'the cpu we will be running on will have iWMMXt support', but boards are supposed to select this config symbol by hand, and at least one pxa27x board doesn't get this right, so on that board, proc-xscale.S will incorrectly assume that we have a DSP coprocessor, enable CP0 on boot, and we will then only save the first iWMMXt register (wR0) on context switches, which is Bad. This patch redefines CONFIG_IWMMXT as 'the cpu we will be running on might have iWMMXt support, and we will enable iWMMXt context switching if it does.' This means that with this patch, running a CONFIG_IWMMXT=n kernel on an iWMMXt-capable CPU will no longer potentially corrupt iWMMXt state over context switches, and running a CONFIG_IWMMXT=y kernel on a non-iWMMXt capable CPU will still do DSP context save/restore. These changes should make iWMMXt work on PXA3xx, and as a side effect, enable proper acc0 save/restore on non-iWMMXt capable xsc3 cores such as IOP13xx and IXP23xx (which will not have CONFIG_CPU_XSCALE defined), as well as setting and using HWCAP_IWMMXT properly. Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org> Acked-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-12-03 17:51:14 +00:00
config IWMMXT
bool "Enable iWMMXt support"
depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_PJ4 || CPU_PJ4B
default y if PXA27x || PXA3xx || ARCH_MMP || CPU_PJ4 || CPU_PJ4B
[ARM] 3881/4: xscale: clean up cp0/cp1 handling XScale cores either have a DSP coprocessor (which contains a single 40 bit accumulator register), or an iWMMXt coprocessor (which contains eight 64 bit registers.) Because of the small amount of state in the DSP coprocessor, access to the DSP coprocessor (CP0) is always enabled, and DSP context switching is done unconditionally on every task switch. Access to the iWMMXt coprocessor (CP0/CP1) is enabled only when an iWMMXt instruction is first issued, and iWMMXt context switching is done lazily. CONFIG_IWMMXT is supposed to mean 'the cpu we will be running on will have iWMMXt support', but boards are supposed to select this config symbol by hand, and at least one pxa27x board doesn't get this right, so on that board, proc-xscale.S will incorrectly assume that we have a DSP coprocessor, enable CP0 on boot, and we will then only save the first iWMMXt register (wR0) on context switches, which is Bad. This patch redefines CONFIG_IWMMXT as 'the cpu we will be running on might have iWMMXt support, and we will enable iWMMXt context switching if it does.' This means that with this patch, running a CONFIG_IWMMXT=n kernel on an iWMMXt-capable CPU will no longer potentially corrupt iWMMXt state over context switches, and running a CONFIG_IWMMXT=y kernel on a non-iWMMXt capable CPU will still do DSP context save/restore. These changes should make iWMMXt work on PXA3xx, and as a side effect, enable proper acc0 save/restore on non-iWMMXt capable xsc3 cores such as IOP13xx and IXP23xx (which will not have CONFIG_CPU_XSCALE defined), as well as setting and using HWCAP_IWMMXT properly. Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org> Acked-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-12-03 17:51:14 +00:00
help
Enable support for iWMMXt context switching at run time if
running on a CPU that supports it.
if !MMU
source "arch/arm/Kconfig-nommu"
endif
config PJ4B_ERRATA_4742
bool "PJ4B Errata 4742: IDLE Wake Up Commands can Cause the CPU Core to Cease Operation"
depends on CPU_PJ4B && MACH_ARMADA_370
default y
help
When coming out of either a Wait for Interrupt (WFI) or a Wait for
Event (WFE) IDLE states, a specific timing sensitivity exists between
the retiring WFI/WFE instructions and the newly issued subsequent
instructions. This sensitivity can result in a CPU hang scenario.
Workaround:
The software must insert either a Data Synchronization Barrier (DSB)
or Data Memory Barrier (DMB) command immediately after the WFI/WFE
instruction
config ARM_ERRATA_326103
bool "ARM errata: FSR write bit incorrect on a SWP to read-only memory"
depends on CPU_V6
help
Executing a SWP instruction to read-only memory does not set bit 11
of the FSR on the ARM 1136 prior to r1p0. This causes the kernel to
treat the access as a read, preventing a COW from occurring and
causing the faulting task to livelock.
config ARM_ERRATA_411920
bool "ARM errata: Invalidation of the Instruction Cache operation can fail"
depends on CPU_V6 || CPU_V6K
help
Invalidation of the Instruction Cache operation can
fail. This erratum is present in 1136 (before r1p4), 1156 and 1176.
It does not affect the MPCore. This option enables the ARM Ltd.
recommended workaround.
config ARM_ERRATA_430973
bool "ARM errata: Stale prediction on replaced interworking branch"
depends on CPU_V7
help
This option enables the workaround for the 430973 Cortex-A8
r1p* erratum. If a code sequence containing an ARM/Thumb
interworking branch is replaced with another code sequence at the
same virtual address, whether due to self-modifying code or virtual
to physical address re-mapping, Cortex-A8 does not recover from the
stale interworking branch prediction. This results in Cortex-A8
executing the new code sequence in the incorrect ARM or Thumb state.
The workaround enables the BTB/BTAC operations by setting ACTLR.IBE
and also flushes the branch target cache at every context switch.
Note that setting specific bits in the ACTLR register may not be
available in non-secure mode.
config ARM_ERRATA_458693
bool "ARM errata: Processor deadlock when a false hazard is created"
depends on CPU_V7
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 458693 Cortex-A8 (r2p0)
erratum. For very specific sequences of memory operations, it is
possible for a hazard condition intended for a cache line to instead
be incorrectly associated with a different cache line. This false
hazard might then cause a processor deadlock. The workaround enables
the L1 caching of the NEON accesses and disables the PLD instruction
in the ACTLR register. Note that setting specific bits in the ACTLR
register may not be available in non-secure mode.
config ARM_ERRATA_460075
bool "ARM errata: Data written to the L2 cache can be overwritten with stale data"
depends on CPU_V7
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 460075 Cortex-A8 (r2p0)
erratum. Any asynchronous access to the L2 cache may encounter a
situation in which recent store transactions to the L2 cache are lost
and overwritten with stale memory contents from external memory. The
workaround disables the write-allocate mode for the L2 cache via the
ACTLR register. Note that setting specific bits in the ACTLR register
may not be available in non-secure mode.
config ARM_ERRATA_742230
bool "ARM errata: DMB operation may be faulty"
depends on CPU_V7 && SMP
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 742230 Cortex-A9
(r1p0..r2p2) erratum. Under rare circumstances, a DMB instruction
between two write operations may not ensure the correct visibility
ordering of the two writes. This workaround sets a specific bit in
the diagnostic register of the Cortex-A9 which causes the DMB
instruction to behave as a DSB, ensuring the correct behaviour of
the two writes.
config ARM_ERRATA_742231
bool "ARM errata: Incorrect hazard handling in the SCU may lead to data corruption"
depends on CPU_V7 && SMP
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 742231 Cortex-A9
(r2p0..r2p2) erratum. Under certain conditions, specific to the
Cortex-A9 MPCore micro-architecture, two CPUs working in SMP mode,
accessing some data located in the same cache line, may get corrupted
data due to bad handling of the address hazard when the line gets
replaced from one of the CPUs at the same time as another CPU is
accessing it. This workaround sets specific bits in the diagnostic
register of the Cortex-A9 which reduces the linefill issuing
capabilities of the processor.
config ARM_ERRATA_643719
bool "ARM errata: LoUIS bit field in CLIDR register is incorrect"
depends on CPU_V7 && SMP
default y
help
This option enables the workaround for the 643719 Cortex-A9 (prior to
r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR
register returns zero when it should return one. The workaround
corrects this value, ensuring cache maintenance operations which use
it behave as intended and avoiding data corruption.
config ARM_ERRATA_720789
bool "ARM errata: TLBIASIDIS and TLBIMVAIS operations can broadcast a faulty ASID"
depends on CPU_V7
help
This option enables the workaround for the 720789 Cortex-A9 (prior to
r2p0) erratum. A faulty ASID can be sent to the other CPUs for the
broadcasted CP15 TLB maintenance operations TLBIASIDIS and TLBIMVAIS.
As a consequence of this erratum, some TLB entries which should be
invalidated are not, resulting in an incoherency in the system page
tables. The workaround changes the TLB flushing routines to invalidate
entries regardless of the ASID.
config ARM_ERRATA_743622
bool "ARM errata: Faulty hazard checking in the Store Buffer may lead to data corruption"
depends on CPU_V7
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 743622 Cortex-A9
(r2p*) erratum. Under very rare conditions, a faulty
optimisation in the Cortex-A9 Store Buffer may lead to data
corruption. This workaround sets a specific bit in the diagnostic
register of the Cortex-A9 which disables the Store Buffer
optimisation, preventing the defect from occurring. This has no
visible impact on the overall performance or power consumption of the
processor.
config ARM_ERRATA_751472
bool "ARM errata: Interrupted ICIALLUIS may prevent completion of broadcasted operation"
depends on CPU_V7
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 751472 Cortex-A9 (prior
to r3p0) erratum. An interrupted ICIALLUIS operation may prevent the
completion of a following broadcasted operation if the second
operation is received by a CPU before the ICIALLUIS has completed,
potentially leading to corrupted entries in the cache or TLB.
config ARM_ERRATA_754322
bool "ARM errata: possible faulty MMU translations following an ASID switch"
depends on CPU_V7
help
This option enables the workaround for the 754322 Cortex-A9 (r2p*,
r3p*) erratum. A speculative memory access may cause a page table walk
which starts prior to an ASID switch but completes afterwards. This
can populate the micro-TLB with a stale entry which may be hit with
the new ASID. This workaround places two dsb instructions in the mm
switching code so that no page table walks can cross the ASID switch.
config ARM_ERRATA_754327
bool "ARM errata: no automatic Store Buffer drain"
depends on CPU_V7 && SMP
help
This option enables the workaround for the 754327 Cortex-A9 (prior to
r2p0) erratum. The Store Buffer does not have any automatic draining
mechanism and therefore a livelock may occur if an external agent
continuously polls a memory location waiting to observe an update.
This workaround defines cpu_relax() as smp_mb(), preventing correctly
written polling loops from denying visibility of updates to memory.
config ARM_ERRATA_364296
bool "ARM errata: Possible cache data corruption with hit-under-miss enabled"
depends on CPU_V6
help
This options enables the workaround for the 364296 ARM1136
r0p2 erratum (possible cache data corruption with
hit-under-miss enabled). It sets the undocumented bit 31 in
the auxiliary control register and the FI bit in the control
register, thus disabling hit-under-miss without putting the
processor into full low interrupt latency mode. ARM11MPCore
is not affected.
config ARM_ERRATA_764369
bool "ARM errata: Data cache line maintenance operation by MVA may not succeed"
depends on CPU_V7 && SMP
help
This option enables the workaround for erratum 764369
affecting Cortex-A9 MPCore with two or more processors (all
current revisions). Under certain timing circumstances, a data
cache line maintenance operation by MVA targeting an Inner
Shareable memory region may fail to proceed up to either the
Point of Coherency or to the Point of Unification of the
system. This workaround adds a DSB instruction before the
relevant cache maintenance functions and sets a specific bit
in the diagnostic control register of the SCU.
config ARM_ERRATA_775420
bool "ARM errata: A data cache maintenance operation which aborts, might lead to deadlock"
depends on CPU_V7
help
This option enables the workaround for the 775420 Cortex-A9 (r2p2,
r2p6,r2p8,r2p10,r3p0) erratum. In case a data cache maintenance
operation aborts with MMU exception, it might cause the processor
to deadlock. This workaround puts DSB before executing ISB if
an abort may occur on cache maintenance.
config ARM_ERRATA_798181
bool "ARM errata: TLBI/DSB failure on Cortex-A15"
depends on CPU_V7 && SMP
help
On Cortex-A15 (r0p0..r3p2) the TLBI*IS/DSB operations are not
adequately shooting down all use of the old entries. This
option enables the Linux kernel workaround for this erratum
which sends an IPI to the CPUs that are running the same ASID
as the one being invalidated.
config ARM_ERRATA_773022
bool "ARM errata: incorrect instructions may be executed from loop buffer"
depends on CPU_V7
help
This option enables the workaround for the 773022 Cortex-A15
(up to r0p4) erratum. In certain rare sequences of code, the
loop buffer may deliver incorrect instructions. This
workaround disables the loop buffer to avoid the erratum.
ARM: 8558/1: errata: Workaround errata A12 818325/852422 A17 852423 There are several similar errata on Cortex A12 and A17 that all have the same workaround: setting bit[12] of the Feature Register. Technically the list of errata are: - A12 818325: Execution of an UNPREDICTABLE STR or STM instruction might deadlock. Fixed in r0p1. - A12 852422: Execution of a sequence of instructions might lead to either a data corruption or a CPU deadlock. Not fixed in any A12s yet. - A17 852423: Execution of a sequence of instructions might lead to either a data corruption or a CPU deadlock. Not fixed in any A17s yet. Since A12 got renamed to A17 it seems likely that there won't be any future Cortex-A12 cores, so we'll enable for all Cortex-A12. For Cortex-A17 I believe that all known revisions are affected and that all knows revisions means <= r1p2. Presumably if a new A17 was released it would have this problem fixed. Note that in <https://patchwork.kernel.org/patch/4735341/> folks previously expressed opposition to this change because: A) It was thought to only apply to r0p0 and there were no known r0p0 boards supported in mainline. B) It was argued that such a workaround beloned in firmware. Now that this same fix solves other errata on real boards (like rk3288) point A) is addressed. Point B) is impossible to address on boards like rk3288. On rk3288 the firmware doesn't stay resident in RAM and isn't involved at all in the suspend/resume process nor in the SMP bringup process. That means that the most the firmware could do would be to set the bit on "core 0" and this bit would be lost at suspend/resume time. It is true that we could write a "generic" solution that saved the boot-time "core 0" value of this register and applied it at SMP bringup / resume time. However, since this register (described as the "Feature Register" in errata) appears to be undocumented (as far as I can tell) and is only modified for these errata, that "generic" solution seems questionably cleaner. The generic solution also won't fix existing users that haven't happened to do a FW update. Note that in ARM64 presumably PSCI will be universal and fixes like this will end up in ATF. Hopefully we are nearing the end of this style of errata workaround. Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Huang Tao <huangtao@rock-chips.com> Signed-off-by: Kever Yang <kever.yang@rock-chips.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-04-06 23:25:00 +00:00
config ARM_ERRATA_818325_852422
bool "ARM errata: A12: some seqs of opposed cond code instrs => deadlock or corruption"
depends on CPU_V7
help
This option enables the workaround for:
- Cortex-A12 818325: Execution of an UNPREDICTABLE STR or STM
instruction might deadlock. Fixed in r0p1.
- Cortex-A12 852422: Execution of a sequence of instructions might
lead to either a data corruption or a CPU deadlock. Not fixed in
any Cortex-A12 cores yet.
This workaround for all both errata involves setting bit[12] of the
Feature Register. This bit disables an optimisation applied to a
sequence of 2 instructions that use opposing condition codes.
config ARM_ERRATA_821420
bool "ARM errata: A12: sequence of VMOV to core registers might lead to a dead lock"
depends on CPU_V7
help
This option enables the workaround for the 821420 Cortex-A12
(all revs) erratum. In very rare timing conditions, a sequence
of VMOV to Core registers instructions, for which the second
one is in the shadow of a branch or abort, can lead to a
deadlock when the VMOV instructions are issued out-of-order.
config ARM_ERRATA_825619
bool "ARM errata: A12: DMB NSHST/ISHST mixed ... might cause deadlock"
depends on CPU_V7
help
This option enables the workaround for the 825619 Cortex-A12
(all revs) erratum. Within rare timing constraints, executing a
DMB NSHST or DMB ISHST instruction followed by a mix of Cacheable
and Device/Strongly-Ordered loads and stores might cause deadlock
config ARM_ERRATA_857271
bool "ARM errata: A12: CPU might deadlock under some very rare internal conditions"
depends on CPU_V7
help
This option enables the workaround for the 857271 Cortex-A12
(all revs) erratum. Under very rare timing conditions, the CPU might
hang. The workaround is expected to have a < 1% performance impact.
config ARM_ERRATA_852421
bool "ARM errata: A17: DMB ST might fail to create order between stores"
depends on CPU_V7
help
This option enables the workaround for the 852421 Cortex-A17
(r1p0, r1p1, r1p2) erratum. Under very rare timing conditions,
execution of a DMB ST instruction might fail to properly order
stores from GroupA and stores from GroupB.
ARM: 8558/1: errata: Workaround errata A12 818325/852422 A17 852423 There are several similar errata on Cortex A12 and A17 that all have the same workaround: setting bit[12] of the Feature Register. Technically the list of errata are: - A12 818325: Execution of an UNPREDICTABLE STR or STM instruction might deadlock. Fixed in r0p1. - A12 852422: Execution of a sequence of instructions might lead to either a data corruption or a CPU deadlock. Not fixed in any A12s yet. - A17 852423: Execution of a sequence of instructions might lead to either a data corruption or a CPU deadlock. Not fixed in any A17s yet. Since A12 got renamed to A17 it seems likely that there won't be any future Cortex-A12 cores, so we'll enable for all Cortex-A12. For Cortex-A17 I believe that all known revisions are affected and that all knows revisions means <= r1p2. Presumably if a new A17 was released it would have this problem fixed. Note that in <https://patchwork.kernel.org/patch/4735341/> folks previously expressed opposition to this change because: A) It was thought to only apply to r0p0 and there were no known r0p0 boards supported in mainline. B) It was argued that such a workaround beloned in firmware. Now that this same fix solves other errata on real boards (like rk3288) point A) is addressed. Point B) is impossible to address on boards like rk3288. On rk3288 the firmware doesn't stay resident in RAM and isn't involved at all in the suspend/resume process nor in the SMP bringup process. That means that the most the firmware could do would be to set the bit on "core 0" and this bit would be lost at suspend/resume time. It is true that we could write a "generic" solution that saved the boot-time "core 0" value of this register and applied it at SMP bringup / resume time. However, since this register (described as the "Feature Register" in errata) appears to be undocumented (as far as I can tell) and is only modified for these errata, that "generic" solution seems questionably cleaner. The generic solution also won't fix existing users that haven't happened to do a FW update. Note that in ARM64 presumably PSCI will be universal and fixes like this will end up in ATF. Hopefully we are nearing the end of this style of errata workaround. Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Huang Tao <huangtao@rock-chips.com> Signed-off-by: Kever Yang <kever.yang@rock-chips.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-04-06 23:25:00 +00:00
config ARM_ERRATA_852423
bool "ARM errata: A17: some seqs of opposed cond code instrs => deadlock or corruption"
depends on CPU_V7
help
This option enables the workaround for:
- Cortex-A17 852423: Execution of a sequence of instructions might
lead to either a data corruption or a CPU deadlock. Not fixed in
any Cortex-A17 cores yet.
This is identical to Cortex-A12 erratum 852422. It is a separate
config option from the A12 erratum due to the way errata are checked
for and handled.
config ARM_ERRATA_857272
bool "ARM errata: A17: CPU might deadlock under some very rare internal conditions"
depends on CPU_V7
help
This option enables the workaround for the 857272 Cortex-A17 erratum.
This erratum is not known to be fixed in any A17 revision.
This is identical to Cortex-A12 erratum 857271. It is a separate
config option from the A12 erratum due to the way errata are checked
for and handled.
endmenu
source "arch/arm/common/Kconfig"
menu "Bus support"
config ISA
bool
help
Find out whether you have ISA slots on your motherboard. ISA is the
name of a bus system, i.e. the way the CPU talks to the other stuff
inside your box. Other bus systems are PCI, EISA, MicroChannel
(MCA) or VESA. ISA is an older system, now being displaced by PCI;
newer boards don't support it. If you have ISA, say Y, otherwise N.
# Select ISA DMA controller support
config ISA_DMA
bool
select ISA_DMA_API
# Select ISA DMA interface
config ISA_DMA_API
bool
config PCI_NANOENGINE
bool "BSE nanoEngine PCI support"
depends on SA1100_NANOENGINE
help
Enable PCI on the BSE nanoEngine board.
config PCI_HOST_ITE8152
bool
depends on PCI && MACH_ARMCORE
default y
select DMABOUNCE
config ARM_ERRATA_814220
bool "ARM errata: Cache maintenance by set/way operations can execute out of order"
depends on CPU_V7
help
The v7 ARM states that all cache and branch predictor maintenance
operations that do not specify an address execute, relative to
each other, in program order.
However, because of this erratum, an L2 set/way cache maintenance
operation can overtake an L1 set/way cache maintenance operation.
This ERRATA only affected the Cortex-A7 and present in r0p2, r0p3,
r0p4, r0p5.
endmenu
menu "Kernel Features"
config HAVE_SMP
bool
help
This option should be selected by machines which have an SMP-
capable CPU.
The only effect of this option is to make the SMP-related
options available to the user for configuration.
config SMP
bool "Symmetric Multi-Processing"
depends on CPU_V6K || CPU_V7
depends on GENERIC_CLOCKEVENTS
depends on HAVE_SMP
depends on MMU || ARM_MPU
select IRQ_WORK
help
This enables support for systems with more than one CPU. If you have
a system with only one CPU, say N. If you have a system with more
than one CPU, say Y.
If you say N here, the kernel will run on uni- and multiprocessor
machines, but will use only one CPU of a multiprocessor machine. If
you say Y here, the kernel will run on many, but not all,
uniprocessor machines. On a uniprocessor machine, the kernel
will run faster if you say N here.
See also <file:Documentation/x86/i386/IO-APIC.rst>,
<file:Documentation/admin-guide/lockup-watchdogs.rst> and the SMP-HOWTO available at
<http://tldp.org/HOWTO/SMP-HOWTO.html>.
If you don't know what to do here, say N.
config SMP_ON_UP
bool "Allow booting SMP kernel on uniprocessor systems"
depends on SMP && !XIP_KERNEL && MMU
default y
help
SMP kernels contain instructions which fail on non-SMP processors.
Enabling this option allows the kernel to modify itself to make
these instructions safe. Disabling it allows about 1K of space
savings.
If you don't know what to do here, say Y.
config ARM_CPU_TOPOLOGY
bool "Support cpu topology definition"
depends on SMP && CPU_V7
default y
help
Support ARM cpu topology definition. The MPIDR register defines
affinity between processors which is then used to describe the cpu
topology of an ARM System.
config SCHED_MC
bool "Multi-core scheduler support"
depends on ARM_CPU_TOPOLOGY
help
Multi-core scheduler support improves the CPU scheduler's decision
making when dealing with multi-core CPU chips at a cost of slightly
increased overhead in some places. If unsure say N here.
config SCHED_SMT
bool "SMT scheduler support"
depends on ARM_CPU_TOPOLOGY
help
Improves the CPU scheduler's decision making when dealing with
MultiThreading at a cost of slightly increased overhead in some
places. If unsure say N here.
config HAVE_ARM_SCU
bool
help
This option enables support for the ARM snoop control unit
config HAVE_ARM_ARCH_TIMER
bool "Architected timer support"
depends on CPU_V7
select ARM_ARCH_TIMER
select GENERIC_CLOCKEVENTS
help
This option enables support for the ARM architected timer
config HAVE_ARM_TWD
bool
help
This options enables support for the ARM timer and watchdog unit
config MCPM
bool "Multi-Cluster Power Management"
depends on CPU_V7 && SMP
help
This option provides the common power management infrastructure
for (multi-)cluster based systems, such as big.LITTLE based
systems.
config MCPM_QUAD_CLUSTER
bool
depends on MCPM
help
To avoid wasting resources unnecessarily, MCPM only supports up
to 2 clusters by default.
Platforms with 3 or 4 clusters that use MCPM must select this
option to allow the additional clusters to be managed.
ARM: b.L: core switcher code This is the core code implementing big.LITTLE switcher functionality. Rationale for this code is available here: http://lwn.net/Articles/481055/ The main entry point for a switch request is: void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id) If the calling CPU is not the wanted one, this wrapper takes care of sending the request to the appropriate CPU with schedule_work_on(). At the moment the core switch operation is handled by bL_switch_to() which must be called on the CPU for which a switch is requested. What this code does: * Return early if the current cluster is the wanted one. * Close the gate in the kernel entry vector for both the inbound and outbound CPUs. * Wake up the inbound CPU so it can perform its reset sequence in parallel up to the kernel entry vector gate. * Migrate all interrupts in the GIC targeting the outbound CPU interface to the inbound CPU interface, including SGIs. This is performed by gic_migrate_target() in drivers/irqchip/irq-gic.c. * Call cpu_pm_enter() which takes care of flushing the VFP state to RAM and save the CPU interface config from the GIC to RAM. * Modify the cpu_logical_map to refer to the inbound physical CPU. * Call cpu_suspend() which saves the CPU state (general purpose registers, page table address) onto the stack and store the resulting stack pointer in an array indexed by the updated cpu_logical_map, then call the provided shutdown function. This happens in arch/arm/kernel/sleep.S. At this point, the provided shutdown function executed by the outbound CPU ungates the inbound CPU. Therefore the inbound CPU: * Picks up the saved stack pointer in the array indexed by its MPIDR in arch/arm/kernel/sleep.S. * The MMU and caches are re-enabled using the saved state on the provided stack, just like if this was a resume operation from a suspended state. * Then cpu_suspend() returns, although this is on the inbound CPU rather than the outbound CPU which called it initially. * The function cpu_pm_exit() is called which effect is to restore the CPU interface state in the GIC using the state previously saved by the outbound CPU. * Exit of bL_switch_to() to resume normal kernel execution on the new CPU. However, the outbound CPU is potentially still running in parallel while the inbound CPU is resuming normal kernel execution, hence we need per CPU stack isolation to execute bL_do_switch(). After the outbound CPU has ungated the inbound CPU, it calls mcpm_cpu_power_down() to: * Clean its L1 cache. * If it is the last CPU still alive in its cluster (last man standing), it also cleans its L2 cache and disables cache snooping from the other cluster. * Power down the CPU (or whole cluster). Code called from bL_do_switch() might end up referencing 'current' for some reasons. However, 'current' is derived from the stack pointer. With any arbitrary stack, the returned value for 'current' and any dereferenced values through it are just random garbage which may lead to segmentation faults. The active page table during the execution of bL_do_switch() is also a problem. There is no guarantee that the inbound CPU won't destroy the corresponding task which would free the attached page table while the outbound CPU is still running and relying on it. To solve both issues, we borrow some of the task space belonging to the init/idle task which, by its nature, is lightly used and therefore is unlikely to clash with our usage. The init task is also never going away. Right now the logical CPU number is assumed to be equivalent to the physical CPU number within each cluster. The kernel should also be booted with only one cluster active. These limitations will be lifted eventually. Signed-off-by: Nicolas Pitre <nico@linaro.org>
2012-04-12 06:56:10 +00:00
config BIG_LITTLE
bool "big.LITTLE support (Experimental)"
depends on CPU_V7 && SMP
select MCPM
help
This option enables support selections for the big.LITTLE
system architecture.
config BL_SWITCHER
bool "big.LITTLE switcher support"
depends on BIG_LITTLE && MCPM && HOTPLUG_CPU && ARM_GIC
select CPU_PM
ARM: b.L: core switcher code This is the core code implementing big.LITTLE switcher functionality. Rationale for this code is available here: http://lwn.net/Articles/481055/ The main entry point for a switch request is: void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id) If the calling CPU is not the wanted one, this wrapper takes care of sending the request to the appropriate CPU with schedule_work_on(). At the moment the core switch operation is handled by bL_switch_to() which must be called on the CPU for which a switch is requested. What this code does: * Return early if the current cluster is the wanted one. * Close the gate in the kernel entry vector for both the inbound and outbound CPUs. * Wake up the inbound CPU so it can perform its reset sequence in parallel up to the kernel entry vector gate. * Migrate all interrupts in the GIC targeting the outbound CPU interface to the inbound CPU interface, including SGIs. This is performed by gic_migrate_target() in drivers/irqchip/irq-gic.c. * Call cpu_pm_enter() which takes care of flushing the VFP state to RAM and save the CPU interface config from the GIC to RAM. * Modify the cpu_logical_map to refer to the inbound physical CPU. * Call cpu_suspend() which saves the CPU state (general purpose registers, page table address) onto the stack and store the resulting stack pointer in an array indexed by the updated cpu_logical_map, then call the provided shutdown function. This happens in arch/arm/kernel/sleep.S. At this point, the provided shutdown function executed by the outbound CPU ungates the inbound CPU. Therefore the inbound CPU: * Picks up the saved stack pointer in the array indexed by its MPIDR in arch/arm/kernel/sleep.S. * The MMU and caches are re-enabled using the saved state on the provided stack, just like if this was a resume operation from a suspended state. * Then cpu_suspend() returns, although this is on the inbound CPU rather than the outbound CPU which called it initially. * The function cpu_pm_exit() is called which effect is to restore the CPU interface state in the GIC using the state previously saved by the outbound CPU. * Exit of bL_switch_to() to resume normal kernel execution on the new CPU. However, the outbound CPU is potentially still running in parallel while the inbound CPU is resuming normal kernel execution, hence we need per CPU stack isolation to execute bL_do_switch(). After the outbound CPU has ungated the inbound CPU, it calls mcpm_cpu_power_down() to: * Clean its L1 cache. * If it is the last CPU still alive in its cluster (last man standing), it also cleans its L2 cache and disables cache snooping from the other cluster. * Power down the CPU (or whole cluster). Code called from bL_do_switch() might end up referencing 'current' for some reasons. However, 'current' is derived from the stack pointer. With any arbitrary stack, the returned value for 'current' and any dereferenced values through it are just random garbage which may lead to segmentation faults. The active page table during the execution of bL_do_switch() is also a problem. There is no guarantee that the inbound CPU won't destroy the corresponding task which would free the attached page table while the outbound CPU is still running and relying on it. To solve both issues, we borrow some of the task space belonging to the init/idle task which, by its nature, is lightly used and therefore is unlikely to clash with our usage. The init task is also never going away. Right now the logical CPU number is assumed to be equivalent to the physical CPU number within each cluster. The kernel should also be booted with only one cluster active. These limitations will be lifted eventually. Signed-off-by: Nicolas Pitre <nico@linaro.org>
2012-04-12 06:56:10 +00:00
help
The big.LITTLE "switcher" provides the core functionality to
transparently handle transition between a cluster of A15's
and a cluster of A7's in a big.LITTLE system.
config BL_SWITCHER_DUMMY_IF
tristate "Simple big.LITTLE switcher user interface"
depends on BL_SWITCHER && DEBUG_KERNEL
help
This is a simple and dummy char dev interface to control
the big.LITTLE switcher core code. It is meant for
debugging purposes only.
choice
prompt "Memory split"
depends on MMU
default VMSPLIT_3G
help
Select the desired split between kernel and user memory.
If you are not absolutely sure what you are doing, leave this
option alone!
config VMSPLIT_3G
bool "3G/1G user/kernel split"
config VMSPLIT_3G_OPT
depends on !ARM_LPAE
bool "3G/1G user/kernel split (for full 1G low memory)"
config VMSPLIT_2G
bool "2G/2G user/kernel split"
config VMSPLIT_1G
bool "1G/3G user/kernel split"
endchoice
config PAGE_OFFSET
hex
default PHYS_OFFSET if !MMU
default 0x40000000 if VMSPLIT_1G
default 0x80000000 if VMSPLIT_2G
default 0xB0000000 if VMSPLIT_3G_OPT
default 0xC0000000
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 32
depends on SMP
default "4"
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
depends on SMP
select GENERIC_IRQ_MIGRATION
help
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu.
config ARM_PSCI
bool "Support for the ARM Power State Coordination Interface (PSCI)"
depends on HAVE_ARM_SMCCC
select ARM_PSCI_FW
help
Say Y here if you want Linux to communicate with system firmware
implementing the PSCI specification for CPU-centric power
management operations described in ARM document number ARM DEN
0022A ("Power State Coordination Interface System Software on
ARM processors").
# The GPIO number here must be sorted by descending number. In case of
# a multiplatform kernel, we just want the highest value required by the
# selected platforms.
config ARCH_NR_GPIO
int
default 2048 if ARCH_SOCFPGA
default 1024 if ARCH_BRCMSTB || ARCH_RENESAS || ARCH_TEGRA || \
ARCH_ZYNQ || ARCH_ASPEED
default 512 if ARCH_EXYNOS || ARCH_KEYSTONE || SOC_OMAP5 || \
SOC_DRA7XX || ARCH_S3C24XX || ARCH_S3C64XX || ARCH_S5PV210
default 416 if ARCH_SUNXI
default 392 if ARCH_U8500
default 352 if ARCH_VT8500
default 288 if ARCH_ROCKCHIP
default 264 if MACH_H4700
default 0
help
Maximum number of GPIOs in the system.
If unsure, leave the default value.
config HZ_FIXED
int
ARM: Drop fixed 200 Hz timer requirement from Samsung platforms All Samsung platforms, including the Exynos, are selecting HZ_FIXED with 200 Hz. Unfortunately in case of multiplatform image this affects also other platforms when Exynos is enabled. This looks like an very old legacy code, dating back to initial upstreaming of S3C24xx. Probably it was required for s3c24xx timer driver, which was removed in commit ad38bdd15d5b ("ARM: SAMSUNG: Remove unused plat-samsung/time.c"). Since then, this fixed 200 Hz spread everywhere, including out-of-tree Samsung kernels (SoC vendor's and Tizen's). I believe this choice was rather an effect of coincidence instead of conscious choice. On S3C24xx, the PWM counter is only 16 bit wide, and with the typical 12MHz input clock that overflows every 5.5ms. This works with HZ=200 or higher but not with HZ=100 which needs a 10ms interval between ticks. On Later chips (S3C64xx, S5P and EXYNOS), the counter is 32 bits and does not have this problem. The new samsung_pwm_timer driver solves the problem by scaling the input clock by a factor of 50 on S3C24xx, which makes it less accurate but allows HZ=100 as well as CONFIG_NO_HZ with fewer wakeups. Few perf mem and sched tests on Odroid XU3 board (Exynos5422, 4x Cortex A7, 4x Cortex A15) show no regressions when switching from 200 Hz to other values. Reported-by: Lee Jones <lee.jones@linaro.org> [Dropping of 200_HZ from S3C/S5P was suggested by Arnd] Reported-by: Arnd Bergmann <arnd@arndb.de> Cc: Kukjin Kim <kgene@kernel.org> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> [Tested on Exynos5800] Tested-by: Javier Martinez Canillas <javier@osg.samsung.com> Acked-by: Kukjin Kim <kgene@kernel.org> [Tested on S3C2440] Tested-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Acked-by: Lee Jones <lee.jones@linaro.org>
2016-11-18 11:15:12 +00:00
default 200 if ARCH_EBSA110
default 128 if SOC_AT91RM9200
default 0
choice
depends on HZ_FIXED = 0
prompt "Timer frequency"
config HZ_100
bool "100 Hz"
config HZ_200
bool "200 Hz"
config HZ_250
bool "250 Hz"
config HZ_300
bool "300 Hz"
config HZ_500
bool "500 Hz"
config HZ_1000
bool "1000 Hz"
endchoice
config HZ
int
default HZ_FIXED if HZ_FIXED != 0
default 100 if HZ_100
default 200 if HZ_200
default 250 if HZ_250
default 300 if HZ_300
default 500 if HZ_500
default 1000
config SCHED_HRTICK
def_bool HIGH_RES_TIMERS
config THUMB2_KERNEL
bool "Compile the kernel in Thumb-2 mode" if !CPU_THUMBONLY
depends on (CPU_V7 || CPU_V7M) && !CPU_V6 && !CPU_V6K
default y if CPU_THUMBONLY
select ARM_UNWIND
help
By enabling this option, the kernel will be compiled in
Thumb-2 mode.
If unsure, say N.
config THUMB2_AVOID_R_ARM_THM_JUMP11
bool "Work around buggy Thumb-2 short branch relocations in gas"
depends on THUMB2_KERNEL && MODULES
default y
help
Various binutils versions can resolve Thumb-2 branches to
locally-defined, preemptible global symbols as short-range "b.n"
branch instructions.
This is a problem, because there's no guarantee the final
destination of the symbol, or any candidate locations for a
trampoline, are within range of the branch. For this reason, the
kernel does not support fixing up the R_ARM_THM_JUMP11 (102)
relocation in modules at all, and it makes little sense to add
support.
The symptom is that the kernel fails with an "unsupported
relocation" error when loading some modules.
Until fixed tools are available, passing
-fno-optimize-sibling-calls to gcc should prevent gcc generating
code which hits this problem, at the cost of a bit of extra runtime
stack usage in some cases.
The problem is described in more detail at:
https://bugs.launchpad.net/binutils-linaro/+bug/725126
Only Thumb-2 kernels are affected.
Unless you are sure your tools don't have this problem, say Y.
config ARM_PATCH_IDIV
bool "Runtime patch udiv/sdiv instructions into __aeabi_{u}idiv()"
depends on CPU_32v7 && !XIP_KERNEL
default y
help
The ARM compiler inserts calls to __aeabi_idiv() and
__aeabi_uidiv() when it needs to perform division on signed
and unsigned integers. Some v7 CPUs have support for the sdiv
and udiv instructions that can be used to implement those
functions.
Enabling this option allows the kernel to modify itself to
replace the first two instructions of these library functions
with the sdiv or udiv plus "bx lr" instructions when the CPU
it is running on supports them. Typically this will be faster
and less power intensive than running the original library
code to do integer division.
config AEABI
bool "Use the ARM EABI to compile the kernel" if !CPU_V7 && \
!CPU_V7M && !CPU_V6 && !CPU_V6K && !CC_IS_CLANG
default CPU_V7 || CPU_V7M || CPU_V6 || CPU_V6K || CC_IS_CLANG
help
This option allows for the kernel to be compiled using the latest
ARM ABI (aka EABI). This is only useful if you are using a user
space environment that is also compiled with EABI.
Since there are major incompatibilities between the legacy ABI and
EABI, especially with regard to structure member alignment, this
option also changes the kernel syscall calling convention to
disambiguate both ABIs and allow for backward compatibility support
(selected with CONFIG_OABI_COMPAT).
To use this you need GCC version 4.0.0 or later.
config OABI_COMPAT
bool "Allow old ABI binaries to run with this kernel (EXPERIMENTAL)"
depends on AEABI && !THUMB2_KERNEL
help
This option preserves the old syscall interface along with the
new (ARM EABI) one. It also provides a compatibility layer to
intercept syscalls that have structure arguments which layout
in memory differs between the legacy ABI and the new ARM EABI
(only for non "thumb" binaries). This option adds a tiny
overhead to all syscalls and produces a slightly larger kernel.
The seccomp filter system will not be available when this is
selected, since there is no way yet to sensibly distinguish
between calling conventions during filtering.
If you know you'll be using only pure EABI user space then you
can say N here. If this option is not selected and you attempt
to execute a legacy ABI binary then the result will be
UNPREDICTABLE (in fact it can be predicted that it won't work
at all). If in doubt say N.
[ARM] Double check memmap is actually valid with a memmap has unexpected holes V2 pfn_valid() is meant to be able to tell if a given PFN has valid memmap associated with it or not. In FLATMEM, it is expected that holes always have valid memmap as long as there is valid PFNs either side of the hole. In SPARSEMEM, it is assumed that a valid section has a memmap for the entire section. However, ARM and maybe other embedded architectures in the future free memmap backing holes to save memory on the assumption the memmap is never used. The page_zone linkages are then broken even though pfn_valid() returns true. A walker of the full memmap must then do this additional check to ensure the memmap they are looking at is sane by making sure the zone and PFN linkages are still valid. This is expensive, but walkers of the full memmap are extremely rare. This was caught before for FLATMEM and hacked around but it hits again for SPARSEMEM because the page_zone linkages can look ok where the PFN linkages are totally screwed. This looks like a hatchet job but the reality is that any clean solution would end up consumning all the memory saved by punching these unexpected holes in the memmap. For example, we tried marking the memmap within the section invalid but the section size exceeds the size of the hole in most cases so pfn_valid() starts returning false where valid memmap exists. Shrinking the size of the section would increase memory consumption offsetting the gains. This patch identifies when an architecture is punching unexpected holes in the memmap that the memory model cannot automatically detect and sets ARCH_HAS_HOLES_MEMORYMODEL. At the moment, this is restricted to EP93xx which is the model sub-architecture this has been reported on but may expand later. When set, walkers of the full memmap must call memmap_valid_within() for each PFN and passing in what it expects the page and zone to be for that PFN. If it finds the linkages to be broken, it assumes the memmap is invalid for that PFN. Signed-off-by: Mel Gorman <mel@csn.ul.ie> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-05-13 16:34:48 +00:00
config ARCH_HAS_HOLES_MEMORYMODEL
bool
config ARCH_SPARSEMEM_ENABLE
bool
config ARCH_SPARSEMEM_DEFAULT
def_bool ARCH_SPARSEMEM_ENABLE
ARM: 6913/1: sparsemem: allow pfn_valid to be overridden when using SPARSEMEM In commit eb33575c ("[ARM] Double check memmap is actually valid with a memmap has unexpected holes V2"), a new function, memmap_valid_within, was introduced to mmzone.h so that holes in the memmap which pass pfn_valid in SPARSEMEM configurations can be detected and avoided. The fix to this problem checks that the pfn <-> page linkages are correct by calculating the page for the pfn and then checking that page_to_pfn on that page returns the original pfn. Unfortunately, in SPARSEMEM configurations, this results in reading from the page flags to determine the correct section. Since the memmap here has been freed, junk is read from memory and the check is no longer robust. In the best case, reading from /proc/pagetypeinfo will give you the wrong answer. In the worst case, you get SEGVs, Kernel OOPses and hung CPUs. Furthermore, ioremap implementations that use pfn_valid to disallow the remapping of normal memory will break. This patch allows architectures to provide their own pfn_valid function instead of using the default implementation used by sparsemem. The architecture-specific version is aware of the memmap state and will return false when passed a pfn for a freed page within a valid section. Acked-by: Mel Gorman <mgorman@suse.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-19 12:21:14 +00:00
config HAVE_ARCH_PFN_VALID
def_bool ARCH_HAS_HOLES_MEMORYMODEL || !SPARSEMEM
config HIGHMEM
bool "High Memory Support"
depends on MMU
help
The address space of ARM processors is only 4 Gigabytes large
and it has to accommodate user address space, kernel address
space as well as some memory mapped IO. That means that, if you
have a large amount of physical memory and/or IO, not all of the
memory can be "permanently mapped" by the kernel. The physical
memory that is not permanently mapped is called "high memory".
Depending on the selected kernel/user memory split, minimum
vmalloc space and actual amount of RAM, you may not need this
option which should result in a slightly faster kernel.
If unsure, say n.
config HIGHPTE
bool "Allocate 2nd-level pagetables from highmem" if EXPERT
depends on HIGHMEM
default y
help
The VM uses one page of physical memory for each page table.
For systems with a lot of processes, this can use a lot of
precious low memory, eventually leading to low memory being
consumed by page tables. Setting this option will allow
user-space 2nd level page tables to reside in high memory.
config CPU_SW_DOMAIN_PAN
bool "Enable use of CPU domains to implement privileged no-access"
depends on MMU && !ARM_LPAE
default y
help
Increase kernel security by ensuring that normal kernel accesses
are unable to access userspace addresses. This can help prevent
use-after-free bugs becoming an exploitable privilege escalation
by ensuring that magic values (such as LIST_POISON) will always
fault when dereferenced.
CPUs with low-vector mappings use a best-efforts implementation.
Their lower 1MB needs to remain accessible for the vectors, but
the remainder of userspace will become appropriately inaccessible.
config HW_PERF_EVENTS
def_bool y
depends on ARM_PMU
config SYS_SUPPORTS_HUGETLBFS
def_bool y
depends on ARM_LPAE
config HAVE_ARCH_TRANSPARENT_HUGEPAGE
def_bool y
depends on ARM_LPAE
config ARCH_WANT_GENERAL_HUGETLB
def_bool y
config ARM_MODULE_PLTS
bool "Use PLTs to allow module memory to spill over into vmalloc area"
depends on MODULES
default y
help
Allocate PLTs when loading modules so that jumps and calls whose
targets are too far away for their relative offsets to be encoded
in the instructions themselves can be bounced via veneers in the
module's PLT. This allows modules to be allocated in the generic
vmalloc area after the dedicated module memory area has been
exhausted. The modules will use slightly more memory, but after
rounding up to page size, the actual memory footprint is usually
the same.
Disabling this is usually safe for small single-platform
configurations. If unsure, say y.
config FORCE_MAX_ZONEORDER
int "Maximum zone order"
default "12" if SOC_AM33XX
default "9" if SA1111 || ARCH_EFM32
default "11"
help
The kernel memory allocator divides physically contiguous memory
blocks into "zones", where each zone is a power of two number of
pages. This option selects the largest power of two that the kernel
keeps in the memory allocator. If you need to allocate very large
blocks of physically contiguous memory, then you may need to
increase this value.
This config option is actually maximum order plus one. For example,
a value of 11 means that the largest free memory block is 2^10 pages.
config ALIGNMENT_TRAP
bool
depends on CPU_CP15_MMU
default y if !ARCH_EBSA110
select HAVE_PROC_CPU if PROC_FS
help
ARM processors cannot fetch/store information which is not
naturally aligned on the bus, i.e., a 4 byte fetch must start at an
address divisible by 4. On 32-bit ARM processors, these non-aligned
fetch/store instructions will be emulated in software if you say
here, which has a severe performance impact. This is necessary for
correct operation of some network protocols. With an IP-only
configuration it is safe to say N, otherwise say Y.
[ARM] alternative copy_to_user/clear_user implementation This implements {copy_to,clear}_user() by faulting in the userland pages and then using the regular kernel mem{cpy,set}() to copy the data (while holding the page table lock). This is a win if the regular mem{cpy,set}() implementations are faster than the user copy functions, which is the case e.g. on Feroceon, where 8-word STMs (which memcpy() uses under the right conditions) give significantly higher memory write throughput than a sequence of individual 32bit stores. Here are numbers for page sized buffers on some Feroceon cores: - copy_to_user on Orion5x goes from 51 MB/s to 83 MB/s - clear_user on Orion5x goes from 89MB/s to 314MB/s - copy_to_user on Kirkwood goes from 240 MB/s to 356 MB/s - clear_user on Kirkwood goes from 367 MB/s to 1108 MB/s - copy_to_user on Disco-Duo goes from 248 MB/s to 398 MB/s - clear_user on Disco-Duo goes from 328 MB/s to 1741 MB/s Because the setup cost is non negligible, this is worthwhile only if the amount of data to copy is large enough. The operation falls back to the standard implementation when the amount of data is below a certain threshold. This threshold was determined empirically, however some targets could benefit from a lower runtime determined value for optimal results eventually. In the copy_from_user() case, this technique does not provide any worthwhile performance gain due to the fact that any kind of read access allocates the cache and subsequent 32bit loads are just as fast as the equivalent 8-word LDM. Signed-off-by: Lennert Buytenhek <buytenh@marvell.com> Signed-off-by: Nicolas Pitre <nico@marvell.com> Tested-by: Martin Michlmayr <tbm@cyrius.com>
2009-03-09 18:30:09 +00:00
config UACCESS_WITH_MEMCPY
bool "Use kernel mem{cpy,set}() for {copy_to,clear}_user()"
depends on MMU
[ARM] alternative copy_to_user/clear_user implementation This implements {copy_to,clear}_user() by faulting in the userland pages and then using the regular kernel mem{cpy,set}() to copy the data (while holding the page table lock). This is a win if the regular mem{cpy,set}() implementations are faster than the user copy functions, which is the case e.g. on Feroceon, where 8-word STMs (which memcpy() uses under the right conditions) give significantly higher memory write throughput than a sequence of individual 32bit stores. Here are numbers for page sized buffers on some Feroceon cores: - copy_to_user on Orion5x goes from 51 MB/s to 83 MB/s - clear_user on Orion5x goes from 89MB/s to 314MB/s - copy_to_user on Kirkwood goes from 240 MB/s to 356 MB/s - clear_user on Kirkwood goes from 367 MB/s to 1108 MB/s - copy_to_user on Disco-Duo goes from 248 MB/s to 398 MB/s - clear_user on Disco-Duo goes from 328 MB/s to 1741 MB/s Because the setup cost is non negligible, this is worthwhile only if the amount of data to copy is large enough. The operation falls back to the standard implementation when the amount of data is below a certain threshold. This threshold was determined empirically, however some targets could benefit from a lower runtime determined value for optimal results eventually. In the copy_from_user() case, this technique does not provide any worthwhile performance gain due to the fact that any kind of read access allocates the cache and subsequent 32bit loads are just as fast as the equivalent 8-word LDM. Signed-off-by: Lennert Buytenhek <buytenh@marvell.com> Signed-off-by: Nicolas Pitre <nico@marvell.com> Tested-by: Martin Michlmayr <tbm@cyrius.com>
2009-03-09 18:30:09 +00:00
default y if CPU_FEROCEON
help
Implement faster copy_to_user and clear_user methods for CPU
cores where a 8-word STM instruction give significantly higher
memory write throughput than a sequence of individual 32bit stores.
A possible side effect is a slight increase in scheduling latency
between threads sharing the same address space if they invoke
such copy operations with large buffers.
However, if the CPU data cache is using a write-allocate mode,
this option is unlikely to provide any performance gain.
config SECCOMP
bool
prompt "Enable seccomp to safely compute untrusted bytecode"
---help---
This kernel feature is useful for number crunching applications
that may need to compute untrusted bytecode during their
execution. By using pipes or other transports made available to
the process as file descriptors supporting the read/write
syscalls, it's possible to isolate those applications in
their own address space using seccomp. Once seccomp is
enabled via prctl(PR_SET_SECCOMP), it cannot be disabled
and the task is only allowed to execute a few safe syscalls
defined by each seccomp mode.
config PARAVIRT
bool "Enable paravirtualization code"
help
This changes the kernel so it can modify itself when it is run
under a hypervisor, potentially improving performance significantly
over full virtualization.
config PARAVIRT_TIME_ACCOUNTING
bool "Paravirtual steal time accounting"
select PARAVIRT
help
Select this option to enable fine granularity task steal time
accounting. Time spent executing other tasks in parallel with
the current vCPU is discounted from the vCPU power. To account for
that, there can be a small performance impact.
If in doubt, say N here.
config XEN_DOM0
def_bool y
depends on XEN
config XEN
bool "Xen guest support on ARM"
xen: arm: mandate EABI and use generic atomic operations. Rob Herring has observed that c81611c4e96f "xen: event channel arrays are xen_ulong_t and not unsigned long" introduced a compile failure when building without CONFIG_AEABI: /tmp/ccJaIZOW.s: Assembler messages: /tmp/ccJaIZOW.s:831: Error: even register required -- `ldrexd r5,r6,[r4]' Will Deacon pointed out that this is because OABI does not require even base registers for 64-bit values. We can avoid this by simply using the existing atomic64_xchg operation and the same containerof trick as used by the cmpxchg macros. However since this code is used on memory which is shared with the hypervisor we require proper atomic instructions and cannot use the generic atomic64 callbacks (which are based on spinlocks), therefore add a dependency on !GENERIC_ATOMIC64. Since we already depend on !CPU_V6 there isn't much downside to this. While thinking about this we also observed that OABI has different struct alignment requirements to EABI, which is a problem for hypercall argument structs which are shared with the hypervisor and which must be in EABI layout. Since I don't expect people to want to run OABI kernels on Xen depend on CONFIG_AEABI explicitly too (although it also happens to be enforced by the !GENERIC_ATOMIC64 requirement too). Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Rob Herring <robherring2@gmail.com> Acked-by: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2013-03-07 07:17:25 +00:00
depends on ARM && AEABI && OF
depends on CPU_V7 && !CPU_V6
xen: arm: mandate EABI and use generic atomic operations. Rob Herring has observed that c81611c4e96f "xen: event channel arrays are xen_ulong_t and not unsigned long" introduced a compile failure when building without CONFIG_AEABI: /tmp/ccJaIZOW.s: Assembler messages: /tmp/ccJaIZOW.s:831: Error: even register required -- `ldrexd r5,r6,[r4]' Will Deacon pointed out that this is because OABI does not require even base registers for 64-bit values. We can avoid this by simply using the existing atomic64_xchg operation and the same containerof trick as used by the cmpxchg macros. However since this code is used on memory which is shared with the hypervisor we require proper atomic instructions and cannot use the generic atomic64 callbacks (which are based on spinlocks), therefore add a dependency on !GENERIC_ATOMIC64. Since we already depend on !CPU_V6 there isn't much downside to this. While thinking about this we also observed that OABI has different struct alignment requirements to EABI, which is a problem for hypercall argument structs which are shared with the hypervisor and which must be in EABI layout. Since I don't expect people to want to run OABI kernels on Xen depend on CONFIG_AEABI explicitly too (although it also happens to be enforced by the !GENERIC_ATOMIC64 requirement too). Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Rob Herring <robherring2@gmail.com> Acked-by: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2013-03-07 07:17:25 +00:00
depends on !GENERIC_ATOMIC64
depends on MMU
select ARCH_DMA_ADDR_T_64BIT
select ARM_PSCI
select SWIOTLB
xen/arm,arm64: enable SWIOTLB_XEN Xen on arm and arm64 needs SWIOTLB_XEN: when running on Xen we need to program the hardware with mfns rather than pfns for dma addresses. Remove SWIOTLB_XEN dependency on X86 and PCI and make XEN select SWIOTLB_XEN on arm and arm64. At the moment always rely on swiotlb-xen, but when Xen starts supporting hardware IOMMUs we'll be able to avoid it conditionally on the presence of an IOMMU on the platform. Implement xen_create_contiguous_region on arm and arm64: for the moment we assume that dom0 has been mapped 1:1 (physical addresses == machine addresses) therefore we don't need to call XENMEM_exchange. Simply return the physical address as dma address. Initialize the xen-swiotlb from xen_early_init (before the native dma_ops are initialized), set xen_dma_ops to &xen_swiotlb_dma_ops. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Changes in v8: - assume dom0 is mapped 1:1, no need to call XENMEM_exchange. Changes in v7: - call __set_phys_to_machine_multi from xen_create_contiguous_region and xen_destroy_contiguous_region to update the P2M; - don't call XENMEM_unpin, it has been removed; - call XENMEM_exchange instead of XENMEM_exchange_and_pin; - set nr_exchanged to 0 before calling the hypercall. Changes in v6: - introduce and export xen_dma_ops; - call xen_mm_init from as arch_initcall. Changes in v4: - remove redefinition of DMA_ERROR_CODE; - update the code to use XENMEM_exchange_and_pin and XENMEM_unpin; - add a note about hardware IOMMU in the commit message. Changes in v3: - code style changes; - warn on XENMEM_put_dma_buf failures.
2013-10-10 13:40:44 +00:00
select SWIOTLB_XEN
select PARAVIRT
help
Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
config STACKPROTECTOR_PER_TASK
bool "Use a unique stack canary value for each task"
depends on GCC_PLUGINS && STACKPROTECTOR && SMP && !XIP_DEFLATED_DATA
select GCC_PLUGIN_ARM_SSP_PER_TASK
default y
help
Due to the fact that GCC uses an ordinary symbol reference from
which to load the value of the stack canary, this value can only
change at reboot time on SMP systems, and all tasks running in the
kernel's address space are forced to use the same canary value for
the entire duration that the system is up.
Enable this option to switch to a different method that uses a
different canary value for each task.
endmenu
menu "Boot options"
config USE_OF
bool "Flattened Device Tree support"
ARM: config: sort select statements alphanumerically As suggested by Andrew Morton: This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list. lets sort all our select statements alphanumerically. This commit was created by the following perl: while (<>) { while (/\\\s*$/) { $_ .= <>; } undef %selects if /^\s*config\s+/; if (/^\s+select\s+(\w+).*/) { if (defined($selects{$1})) { if ($selects{$1} eq $_) { print STDERR "Warning: removing duplicated $1 entry\n"; } else { print STDERR "Error: $1 differently selected\n". "\tOld: $selects{$1}\n". "\tNew: $_\n"; exit 1; } } $selects{$1} = $_; next; } if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or /^endif/ or /^endchoice/)) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } undef %selects; } print; } if (%selects) { foreach $k (sort (keys %selects)) { print "$selects{$k}"; } } It found two duplicates: Warning: removing duplicated S5P_SETUP_MIPIPHY entry Warning: removing duplicated HARDIRQS_SW_RESEND entry and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.) Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Stephen Warren <swarren@nvidia.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select IRQ_DOMAIN
select OF
help
Include support for flattened device tree machine descriptions.
config ATAGS
bool "Support for the traditional ATAGS boot data passing" if USE_OF
default y
help
This is the traditional way of passing data to the kernel at boot
time. If you are solely relying on the flattened device tree (or
the ARM_ATAG_DTB_COMPAT option) then you may unselect this option
to remove ATAGS support from your kernel binary. If unsure,
leave this to y.
config DEPRECATED_PARAM_STRUCT
bool "Provide old way to pass kernel parameters"
depends on ATAGS
help
This was deprecated in 2001 and announced to live on for 5 years.
Some old boot loaders still use this way.
# Compressed boot loader in ROM. Yes, we really want to ask about
# TEXT and BSS so we preserve their values in the config files.
config ZBOOT_ROM_TEXT
hex "Compressed ROM boot loader base address"
default "0"
help
The physical address at which the ROM-able zImage is to be
placed in the target. Platforms which normally make use of
ROM-able zImage formats normally set this to a suitable
value in their defconfig file.
If ZBOOT_ROM is not enabled, this has no effect.
config ZBOOT_ROM_BSS
hex "Compressed ROM boot loader BSS address"
default "0"
help
The base address of an area of read/write memory in the target
for the ROM-able zImage which must be available while the
decompressor is running. It must be large enough to hold the
entire decompressed kernel plus an additional 128 KiB.
Platforms which normally make use of ROM-able zImage formats
normally set this to a suitable value in their defconfig file.
If ZBOOT_ROM is not enabled, this has no effect.
config ZBOOT_ROM
bool "Compressed boot loader in ROM/flash"
depends on ZBOOT_ROM_TEXT != ZBOOT_ROM_BSS
depends on !ARM_APPENDED_DTB && !XIP_KERNEL && !AUTO_ZRELADDR
help
Say Y here if you intend to execute your compressed kernel image
(zImage) directly from ROM or flash. If unsure, say N.
config ARM_APPENDED_DTB
bool "Use appended device tree blob to zImage (EXPERIMENTAL)"
depends on OF
help
With this option, the boot code will look for a device tree binary
(DTB) appended to zImage
(e.g. cat zImage <filename>.dtb > zImage_w_dtb).
This is meant as a backward compatibility convenience for those
systems with a bootloader that can't be upgraded to accommodate
the documented boot protocol using a device tree.
Beware that there is very little in terms of protection against
this option being confused by leftover garbage in memory that might
look like a DTB header after a reboot if no actual DTB is appended
to zImage. Do not leave this option active in a production kernel
if you don't intend to always append a DTB. Proper passing of the
location into r2 of a bootloader provided DTB is always preferable
to this option.
config ARM_ATAG_DTB_COMPAT
bool "Supplement the appended DTB with traditional ATAG information"
depends on ARM_APPENDED_DTB
help
Some old bootloaders can't be updated to a DTB capable one, yet
they provide ATAGs with memory configuration, the ramdisk address,
the kernel cmdline string, etc. Such information is dynamically
provided by the bootloader and can't always be stored in a static
DTB. To allow a device tree enabled kernel to be used with such
bootloaders, this option allows zImage to extract the information
from the ATAG list and store it at run time into the appended DTB.
choice
prompt "Kernel command line type" if ARM_ATAG_DTB_COMPAT
default ARM_ATAG_DTB_COMPAT_CMDLINE_FROM_BOOTLOADER
config ARM_ATAG_DTB_COMPAT_CMDLINE_FROM_BOOTLOADER
bool "Use bootloader kernel arguments if available"
help
Uses the command-line options passed by the boot loader instead of
the device tree bootargs property. If the boot loader doesn't provide
any, the device tree bootargs property will be used.
config ARM_ATAG_DTB_COMPAT_CMDLINE_EXTEND
bool "Extend with bootloader kernel arguments"
help
The command-line arguments provided by the boot loader will be
appended to the the device tree bootargs property.
endchoice
config CMDLINE
string "Default kernel command string"
default ""
help
On some architectures (EBSA110 and CATS), there is currently no way
for the boot loader to pass arguments to the kernel. For these
architectures, you should supply some command-line options at build
time by entering them here. As a minimum, you should specify the
memory size and the root device (e.g., mem=64M root=/dev/nfs).
choice
prompt "Kernel command line type" if CMDLINE != ""
default CMDLINE_FROM_BOOTLOADER
depends on ATAGS
config CMDLINE_FROM_BOOTLOADER
bool "Use bootloader kernel arguments if available"
help
Uses the command-line options passed by the boot loader. If
the boot loader doesn't provide any, the default kernel command
string provided in CMDLINE will be used.
config CMDLINE_EXTEND
bool "Extend bootloader kernel arguments"
help
The command-line arguments provided by the boot loader will be
appended to the default kernel command string.
config CMDLINE_FORCE
bool "Always use the default kernel command string"
help
Always use the default kernel command string, even if the boot
loader passes other arguments to the kernel.
This is useful if you cannot or don't want to change the
command-line options your boot loader passes to the kernel.
endchoice
config XIP_KERNEL
bool "Kernel Execute-In-Place from ROM"
depends on !ARM_LPAE && !ARCH_MULTIPLATFORM
help
Execute-In-Place allows the kernel to run from non-volatile storage
directly addressable by the CPU, such as NOR flash. This saves RAM
space since the text section of the kernel is not loaded from flash
to RAM. Read-write sections, such as the data section and stack,
are still copied to RAM. The XIP kernel is not compressed since
it has to run directly from flash, so it will take more space to
store it. The flash address used to link the kernel object files,
and for storing it, is configuration dependent. Therefore, if you
say Y here, you must know the proper physical address where to
store the kernel image depending on your own flash memory usage.
Also note that the make target becomes "make xipImage" rather than
"make zImage" or "make Image". The final kernel binary to put in
ROM memory will be arch/arm/boot/xipImage.
If unsure, say N.
config XIP_PHYS_ADDR
hex "XIP Kernel Physical Location"
depends on XIP_KERNEL
default "0x00080000"
help
This is the physical address in your flash memory the kernel will
be linked for and stored to. This address is dependent on your
own flash usage.
config XIP_DEFLATED_DATA
bool "Store kernel .data section compressed in ROM"
depends on XIP_KERNEL
select ZLIB_INFLATE
help
Before the kernel is actually executed, its .data section has to be
copied to RAM from ROM. This option allows for storing that data
in compressed form and decompressed to RAM rather than merely being
copied, saving some precious ROM space. A possible drawback is a
slightly longer boot delay.
config KEXEC
bool "Kexec system call (EXPERIMENTAL)"
ARM: 7759/1: decouple CPU offlining from reboot/shutdown Add comments to machine_shutdown()/halt()/power_off()/restart() that describe their purpose and/or requirements re: CPUs being active/not. In machine_shutdown(), replace the call to smp_send_stop() with a call to disable_nonboot_cpus(). This completely disables all but one CPU, thus satisfying the requirement that only a single CPU be active for kexec. Adjust Kconfig dependencies for this change. In machine_halt()/power_off()/restart(), call smp_send_stop() directly, rather than via machine_shutdown(); these functions don't need to completely de-activate all CPUs using hotplug, but rather just quiesce them. Remove smp_kill_cpus(), and its call from smp_send_stop(). smp_kill_cpus() was indirectly calling smp_ops.cpu_kill() without calling smp_ops.cpu_die() on the target CPUs first. At least some implementations of smp_ops had issues with this; it caused cpu_kill() to hang on Tegra, for example. Since smp_send_stop() is only used for shutdown, halt, and power-off, there is no need to attempt any kind of CPU hotplug here. Adjust Kconfig to reflect that machine_shutdown() (and hence kexec) relies upon disable_nonboot_cpus(). However, this alone doesn't guarantee that hotplug will work, or even that hotplug is implemented for a particular piece of HW that a multi-platform zImage runs on. Hence, add error-checking to machine_kexec() to determine whether it did work. Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Stephen Warren <swarren@nvidia.com> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Zhangfei Gao <zhangfei.gao@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-14 15:14:14 +00:00
depends on (!SMP || PM_SLEEP_SMP)
depends on MMU
2015-09-09 22:38:55 +00:00
select KEXEC_CORE
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
but it is independent of the system firmware. And like a reboot
you can start any kernel with it, not just Linux.
It is an ongoing process to be certain the hardware in a machine
is properly shutdown, so do not be surprised if this code does not
initially work for you.
config ATAGS_PROC
bool "Export atags in procfs"
depends on ATAGS && KEXEC
default y
help
Should the atags used to boot the kernel be exported in an "atags"
file in procfs. Useful with kexec.
config CRASH_DUMP
bool "Build kdump crash kernel (EXPERIMENTAL)"
help
Generate crash dump after being started by kexec. This should
be normally only set in special crash dump kernels which are
loaded in the main kernel with kexec-tools into a specially
reserved region and then later executed after a crash by
kdump/kexec. The crash dump kernel must be compiled to a
memory address not used by the main kernel
For more details see Documentation/admin-guide/kdump/kdump.rst
config AUTO_ZRELADDR
bool "Auto calculation of the decompressed kernel image address"
help
ZRELADDR is the physical address where the decompressed kernel
image will be placed. If AUTO_ZRELADDR is selected, the address
will be determined at run-time by masking the current IP with
0xf8000000. This assumes the zImage being placed in the first 128MB
from start of memory.
config EFI_STUB
bool
config EFI
bool "UEFI runtime support"
depends on OF && !CPU_BIG_ENDIAN && MMU && AUTO_ZRELADDR && !XIP_KERNEL
select UCS2_STRING
select EFI_PARAMS_FROM_FDT
select EFI_STUB
select EFI_ARMSTUB
select EFI_RUNTIME_WRAPPERS
---help---
This option provides support for runtime services provided
by UEFI firmware (such as non-volatile variables, realtime
clock, and platform reset). A UEFI stub is also provided to
allow the kernel to be booted as an EFI application. This
is only useful for kernels that may run on systems that have
UEFI firmware.
efi/arm: Enable DMI/SMBIOS Wire up the existing arm64 support for SMBIOS tables (aka DMI) for ARM as well, by moving the arm64 init code to drivers/firmware/efi/arm-runtime.c (which is shared between ARM and arm64), and adding a asm/dmi.h header to ARM that defines the mapping routines for the firmware tables. This allows userspace to access these tables to discover system information exposed by the firmware. It also sets the hardware name used in crash dumps, e.g.: Unable to handle kernel NULL pointer dereference at virtual address 00000000 pgd = ed3c0000 [00000000] *pgd=bf1f3835 Internal error: Oops: 817 [#1] SMP THUMB2 Modules linked in: CPU: 0 PID: 759 Comm: bash Not tainted 4.10.0-09601-g0e8f38792120-dirty #112 Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 ^^^ NOTE: This does *NOT* enable or encourage the use of DMI quirks, i.e., the the practice of identifying the platform via DMI to decide whether certain workarounds for buggy hardware and/or firmware need to be enabled. This would require the DMI subsystem to be enabled much earlier than we do on ARM, which is non-trivial. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/20170602135207.21708-14-ard.biesheuvel@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-06-02 13:52:07 +00:00
config DMI
bool "Enable support for SMBIOS (DMI) tables"
depends on EFI
default y
help
This enables SMBIOS/DMI feature for systems.
This option is only useful on systems that have UEFI firmware.
However, even with this option, the resultant kernel should
continue to boot on existing non-UEFI platforms.
NOTE: This does *NOT* enable or encourage the use of DMI quirks,
i.e., the the practice of identifying the platform via DMI to
decide whether certain workarounds for buggy hardware and/or
firmware need to be enabled. This would require the DMI subsystem
to be enabled much earlier than we do on ARM, which is non-trivial.
endmenu
menu "CPU Power Management"
source "drivers/cpufreq/Kconfig"
source "drivers/cpuidle/Kconfig"
endmenu
menu "Floating point emulation"
comment "At least one emulation must be selected"
config FPE_NWFPE
bool "NWFPE math emulation"
depends on (!AEABI || OABI_COMPAT) && !THUMB2_KERNEL
---help---
Say Y to include the NWFPE floating point emulator in the kernel.
This is necessary to run most binaries. Linux does not currently
support floating point hardware so you need to say Y here even if
your machine has an FPA or floating point co-processor podule.
You may say N here if you are going to load the Acorn FPEmulator
early in the bootup.
config FPE_NWFPE_XP
bool "Support extended precision"
depends on FPE_NWFPE
help
Say Y to include 80-bit support in the kernel floating-point
emulator. Otherwise, only 32 and 64-bit support is compiled in.
Note that gcc does not generate 80-bit operations by default,
so in most cases this option only enlarges the size of the
floating point emulator without any good reason.
You almost surely want to say N here.
config FPE_FASTFPE
bool "FastFPE math emulation (EXPERIMENTAL)"
depends on (!AEABI || OABI_COMPAT) && !CPU_32v3
---help---
Say Y here to include the FAST floating point emulator in the kernel.
This is an experimental much faster emulator which now also has full
precision for the mantissa. It does not support any exceptions.
It is very simple, and approximately 3-6 times faster than NWFPE.
It should be sufficient for most programs. It may be not suitable
for scientific calculations, but you have to check this for yourself.
If you do not feel you need a faster FP emulation you should better
choose NWFPE.
config VFP
bool "VFP-format floating point maths"
depends on CPU_V6 || CPU_V6K || CPU_ARM926T || CPU_V7 || CPU_FEROCEON
help
Say Y to include VFP support code in the kernel. This is needed
if your hardware includes a VFP unit.
Please see <file:Documentation/arm/vfp/release-notes.rst> for
release notes and additional status information.
Say N if your target does not have VFP hardware.
config VFPv3
bool
depends on VFP
default y if CPU_V7
config NEON
bool "Advanced SIMD (NEON) Extension support"
depends on VFPv3 && CPU_V7
help
Say Y to include support code for NEON, the ARMv7 Advanced SIMD
Extension.
config KERNEL_MODE_NEON
bool "Support for NEON in kernel mode"
depends on NEON && AEABI
help
Say Y to include support for NEON in kernel mode.
endmenu
menu "Power management options"
source "kernel/power/Kconfig"
config ARCH_SUSPEND_POSSIBLE
depends on CPU_ARM920T || CPU_ARM926T || CPU_FEROCEON || CPU_SA1100 || \
CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M || CPU_XSC3 || CPU_XSCALE || CPU_MOHAWK
def_bool y
config ARM_CPU_SUSPEND
ARM: 8511/1: ARM64: kernel: PSCI: move PSCI idle management code to drivers/firmware ARM64 PSCI kernel interfaces that initialize idle states and implement the suspend API to enter them are generic and can be shared with the ARM architecture. To achieve that goal, this patch moves ARM64 PSCI idle management code to drivers/firmware, so that the interface to initialize and enter idle states can actually be shared by ARM and ARM64 arches back-ends. The ARM generic CPUidle implementation also requires the definition of a cpuidle_ops section entry for the kernel to initialize the CPUidle operations at boot based on the enable-method (ie ARM64 has the statically initialized cpu_ops counterparts for that purpose); therefore this patch also adds the required section entry on CONFIG_ARM for PSCI so that the kernel can initialize the PSCI CPUidle back-end when PSCI is the probed enable-method. On ARM64 this patch provides no functional change. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arch/arm64] Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jisheng Zhang <jszhang@marvell.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-02-01 17:01:30 +00:00
def_bool PM_SLEEP || BL_SWITCHER || ARM_PSCI_FW
depends on ARCH_SUSPEND_POSSIBLE
config ARCH_HIBERNATION_POSSIBLE
bool
depends on MMU
default y if ARCH_SUSPEND_POSSIBLE
endmenu
source "drivers/firmware/Kconfig"
if CRYPTO
source "arch/arm/crypto/Kconfig"
endif
source "arch/arm/kvm/Kconfig"