Commit Graph

8 Commits

Author SHA1 Message Date
Alexandre Ghiti 7a92fc8b4d mm: Introduce flush_cache_vmap_early()
The pcpu setup when using the page allocator sets up a new vmalloc
mapping very early in the boot process, so early that it cannot use the
flush_cache_vmap() function, which may depend on structures that are
not yet initialized (for example on riscv, we currently send an IPI to
flush the other CPUs' TLBs).

But on some architectures, we must call flush_cache_vmap(): for example,
on riscv, some uarchs can cache invalid TLB entries, so we need to flush
the newly established mapping to avoid taking an exception.

So fix this by introducing a new function, flush_cache_vmap_early(),
which is called right after setting the new page table entry and before
accessing this new mapping. This new function performs a local TLB flush
on riscv and is a no-op on other architectures (same as today).
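
A minimal sketch of the resulting shape (helper names taken from the
riscv sources; treat the exact placement as an assumption, not the
verbatim diff):

    /* asm-generic side: the default stays a no-op, same as today */
    #ifndef flush_cache_vmap_early
    static inline void flush_cache_vmap_early(unsigned long start,
    					      unsigned long end)
    {
    }
    #endif

    /* riscv side: flush only the local hart's TLB for the new range */
    #define flush_cache_vmap_early(start, end) \
    	local_flush_tlb_kernel_range(start, end)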

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Dennis Zhou <dennis@kernel.org>
2023-12-14 00:23:17 -08:00
Matthew Wilcox (Oracle) 203b7b6aad mm: rationalise flush_icache_pages() and flush_icache_page()
Move the default (no-op) implementation of flush_icache_pages() from
<asm-generic/cacheflush.h> to <linux/cacheflush.h>.  Remove the
flush_icache_page() wrapper from each architecture and provide a single
definition in <linux/cacheflush.h>.
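
In sketch form, the consolidated header gives flush_icache_pages() a
no-op default and derives the single-page wrapper from it (a sketch of
the generic header, not the verbatim diff):

    /* <linux/cacheflush.h>: one generic no-op unless the arch overrides */
    #ifndef flush_icache_pages
    static inline void flush_icache_pages(struct vm_area_struct *vma,
    				          struct page *page, unsigned int nr)
    {
    }
    #endif

    /* flush_icache_page() is now just the nr == 1 case */
    #define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)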

Link: https://lkml.kernel.org/r/20230802151406.3735276-32-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-24 16:20:25 -07:00
Matthew Wilcox (Oracle) e724e7aaf9 csky: implement the new page table range API
Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
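
A sketch of the per-folio flush this enables on csky (the csky helpers
dcache_wbinv_all()/icache_inv_all() and the mm helper
folio_flush_mapping() are assumptions drawn from the surrounding
series, not the verbatim patch):

    void flush_dcache_folio(struct folio *folio)
    {
    	struct address_space *mapping;

    	if (is_zero_pfn(folio_pfn(folio)))
    		return;

    	mapping = folio_flush_mapping(folio);

    	/* Unmapped page-cache folio: defer the flush, just mark it dirty */
    	if (mapping && !folio_mapped(folio))
    		clear_bit(PG_dcache_clean, &folio->flags);
    	else {
    		dcache_wbinv_all();
    		if (mapping)
    			icache_inv_all();
    		set_bit(PG_dcache_clean, &folio->flags);
    	}
    }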

Link: https://lkml.kernel.org/r/20230802151406.3735276-12-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Guo Ren <guoren@kernel.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-24 16:20:20 -07:00
Guo Ren 997153b9a7 csky: Add flush_icache_mm to defer flush icache all
Some CPUs don't support the icache.va instruction for maintaining the
icache across all the SMP cores. Using icache.all + IPI costs a lot of
performance, and using a defer mechanism reduces the number of calls to
the icache flush-all routines.
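
A simplified sketch of the defer mechanism (the per-mm stale mask and
helper names are assumptions modelled on the csky sources): mark every
core's icache stale, invalidate only the local one now, and let the
other cores invalidate lazily, e.g. on their next context switch into
this mm.

    void flush_icache_mm_range(struct mm_struct *mm,
    			       unsigned long start, unsigned long end)
    {
    	cpumask_t *mask = &mm->context.icache_stale_mask;

    	preempt_disable();

    	/* Mark every core's icache as stale for this mm ... */
    	cpumask_setall(mask);

    	/* ... then invalidate ours now and clear our bit. */
    	cpumask_clear_cpu(smp_processor_id(), mask);
    	icache_inv_all();

    	preempt_enable();
    }

    /* switch_mm() checks the stale bit and invalidates only if it is set. */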

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-02-21 15:43:24 +08:00
Guo Ren cc1f6563a9 csky: Optimize abiv2 copy_to_user_page with VM_EXEC
We only need to sync the dcache & icache when the vma is for VM_EXEC, e.g.:
 - gdb using ptrace to modify a user-space instruction code area.

Add the VM_EXEC condition to reduce unnecessary cache flushes.

The abiv1 CPUs' caches are all VIPT, so we still need to deal with the
dcache aliasing problem. But there is an optimized way of doing that
with cache coloring, just like what's done in
arch/csky/abiv1/inc/abi/page.h.
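
In sketch form, the abiv2 copy_to_user_page() only pays for the cache
maintenance on executable vmas (dcache_wb_range()/icache_inv_range()
follow the csky abiv2 naming and are assumptions here):

    #define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
    do {							\
    	memcpy(dst, src, len);					\
    	if (vma->vm_flags & VM_EXEC) {				\
    		dcache_wb_range((unsigned long)dst,		\
    				(unsigned long)dst + len);	\
    		icache_inv_range((unsigned long)dst,		\
    				 (unsigned long)dst + len);	\
    	}							\
    } while (0)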

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-02-21 15:43:24 +08:00
Guo Ren d936a7e708 csky: Enable defer flush_dcache_page for abiv2 cpus (807/810/860)
Instead of flushing the cache on every update_mmu_cache() call, we use
flush_dcache_page() to reduce the frequency of cache flushes.

As the abiv2 CPUs are all PIPT for both icache & dcache, we needn't
handle the dcache aliasing problem. But their icache can't snoop the
dcache, so we still need to sync the icache and dcache in
update_mmu_cache().
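
A sketch of the deferred scheme (simplified; the flush helpers stand in
for csky's dcache writeback and icache invalidate routines):
update_mmu_cache() flushes only the first time a page is faulted in
after flush_dcache_page() marked it dirty, using PG_dcache_clean as the
marker.

    void update_mmu_cache(struct vm_area_struct *vma,
    		          unsigned long address, pte_t *pte)
    {
    	unsigned long pfn = pte_pfn(*pte);
    	struct page *page;

    	if (!pfn_valid(pfn))
    		return;

    	page = pfn_to_page(pfn);
    	if (page == ZERO_PAGE(0))
    		return;

    	/* Clean bit already set: nothing to sync until it is dirtied again */
    	if (test_and_set_bit(PG_dcache_clean, &page->flags))
    		return;

    	dcache_wbinv_all();

    	/* abiv2 icache can't snoop the dcache, so sync it for code pages */
    	if (vma->vm_flags & VM_EXEC)
    		icache_inv_all();
    }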

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-02-21 15:43:24 +08:00
Guo Ren a117673413 csky: Remove unnecessary flush_icache_* implementation
The abiv2 CPUs all have PIPT caches, so there is no need to implement
the flush_icache_page() function.

The function flush_icache_user_range() hasn't been used, so just
remove it.

The function flush_cache_range() is not necessary for PIPT caches when
a TLB mapping changes.
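
With PIPT caches the removed hooks reduce to no-ops; a sketch of the
resulting definitions (illustrative, not the verbatim diff):

    /* PIPT icache/dcache: nothing to do when only the mapping changes */
    #define flush_icache_page(vma, page)		do { } while (0)
    #define flush_cache_range(vma, start, end)	do { } while (0)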

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-02-21 15:43:24 +08:00
Guo Ren 00a9730e10 csky: Cache and TLB routines
This patch adds the cache and TLB synchronization code for abiv1 & abiv2.

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
2018-10-25 23:36:19 +08:00