linux-stable/arch/arc/include/asm/highmem.h
Thomas Gleixner 39cac191ff arc/mm/highmem: Use generic kmap atomic implementation
Adopt the map ordering to match the other architectures and the generic
code. Also limit the maximum number of entries so that it no longer depends
on the number of CPUs. The original implementation did the following
calculation:

   nr_slots = mapsize >> PAGE_SHIFT;

This results in either 512 or 1024 total slots depending on the
configuration. The total slots have to be divided by the number of CPUs to
get the number of slots per CPU (the former KM_TYPE_NR). ARC supports up to
4k CPUs, so this just falls apart in random ways depending on the number of
CPUs and the actual kmap (atomic) nesting; worked numbers follow below. The
comment in highmem.c:

 * - fixmap anyhow needs a limited number of mappings. So 2M kvaddr == 256 PTE
 *   slots across NR_CPUS would be more than sufficient (generic code defines
 *   KM_TYPE_NR as 20).

is just wrong. KM_TYPE_NR (now KM_MAX_IDX) is the number of slots per CPU
because kmap_local/atomic() needs to support nested mappings (thread,
softirq, interrupt). While KM_MAX_IDX might be an overestimate, the above
reasoning is simply incorrect, and the highmem code was clearly never
tested on any system with more than a few CPUs.
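
To make the breakage concrete, take one assumed configuration (a 2 MiB
fixmap with 4 KiB pages and the maximum NR_CPUS; the exact numbers depend
on the page-size config):

   nr_slots      = FIXMAP_SIZE >> PAGE_SHIFT;  /* 2 MiB / 4 KiB = 512     */
   slots_per_cpu = nr_slots / NR_CPUS;         /* former KM_TYPE_NR:
                                                  512 / 4096    = 0 (!)   */

With anything close to the supported 4k CPUs each CPU ends up with zero
(or far too few) slots, and nested mappings silently overwrite each other.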

Use the default number of slots and fail the build when it does not
fit. Failing randomly at runtime is not really a good option.
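
The build-time check is roughly of the following shape (a sketch only,
written in terms of the new FIX_KMAP_SLOTS from the header below; the
actual hunk lives in the arch code, not in this header):

   /* all per-CPU slots must fit into the single fixmap pte page */
   BUILD_BUG_ON(FIX_KMAP_SLOTS > PTRS_PER_PTE);

i.e. a KM_MAX_IDX * NR_CPUS product that does not fit into the fixmap area
becomes a compile-time error instead of random runtime corruption.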

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20201103095857.472289952@linutronix.de
2020-11-06 23:14:55 +01:00

53 lines
1.4 KiB
C

/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (C) 2015 Synopsys, Inc. (www.synopsys.com)
 */

#ifndef _ASM_HIGHMEM_H
#define _ASM_HIGHMEM_H

#ifdef CONFIG_HIGHMEM

#include <uapi/asm/page.h>
#include <asm/kmap_size.h>

#define FIXMAP_SIZE		PGDIR_SIZE
#define PKMAP_SIZE		PGDIR_SIZE

/* start after vmalloc area */
#define FIXMAP_BASE		(PAGE_OFFSET - FIXMAP_SIZE - PKMAP_SIZE)

#define FIX_KMAP_SLOTS		(KM_MAX_IDX * NR_CPUS)
#define FIX_KMAP_BEGIN		(0UL)
#define FIX_KMAP_END		((FIX_KMAP_BEGIN + FIX_KMAP_SLOTS) - 1)

#define FIXADDR_TOP		(FIXMAP_BASE + (FIX_KMAP_END << PAGE_SHIFT))

/*
 * This should be converted to the asm-generic version, but of course this
 * is needlessly different from all other architectures. Sigh - tglx
 */
#define __fix_to_virt(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))
#define __virt_to_fix(x)	(((FIXADDR_TOP - ((x) & PAGE_MASK))) >> PAGE_SHIFT)

/* start after fixmap area */
#define PKMAP_BASE		(FIXMAP_BASE + FIXMAP_SIZE)
#define LAST_PKMAP		(PKMAP_SIZE >> PAGE_SHIFT)
#define LAST_PKMAP_MASK		(LAST_PKMAP - 1)
#define PKMAP_ADDR(nr)		(PKMAP_BASE + ((nr) << PAGE_SHIFT))
#define PKMAP_NR(virt)		(((virt) - PKMAP_BASE) >> PAGE_SHIFT)

#include <asm/cacheflush.h>

extern void kmap_init(void);

#define arch_kmap_local_post_unmap(vaddr)			\
	local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE)

static inline void flush_cache_kmaps(void)
{
	flush_cache_all();
}

#endif

#endif
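
For context, a minimal sketch of why every CPU needs its own stack of
KM_MAX_IDX slots (rather than a share of one global pool): kmap_local/
atomic() mappings nest on the local CPU, e.g. a thread-level mapping can
be interrupted by code that takes another one, and each level consumes one
more slot on the same CPU. The helper below is hypothetical and only
illustrates that nesting; it is not part of the patch:

#include <linux/highmem.h>
#include <linux/string.h>

/* Illustration only: copy one highmem page into another. */
static void example_copy_highpage(struct page *dst, struct page *src)
{
	void *d = kmap_atomic(dst);	/* takes one slot on this CPU */
	void *s = kmap_atomic(src);	/* nests: takes the next slot */

	memcpy(d, s, PAGE_SIZE);

	kunmap_atomic(s);		/* release in reverse order */
	kunmap_atomic(d);
}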