mm: remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO

Current best practice is to reuse the name of the function as a define to
indicate that the function is implemented by the architecture.

Link: https://lkml.kernel.org/r/20230802151406.3735276-6-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Author: Matthew Wilcox (Oracle), 2023-08-02 16:13:33 +01:00; committed by Andrew Morton
parent bc60abfbe6
commit 29d26f1215
3 changed files with 12 additions and 18 deletions


@@ -273,7 +273,7 @@ maps this page at its virtual address.
 
 	If D-cache aliasing is not an issue, these two routines may
 	simply call memcpy/memset directly and do nothing more.
 
-``void flush_dcache_page(struct page *page)``
+``void flush_dcache_folio(struct folio *folio)``
 
 	This routines must be called when:
@@ -281,7 +281,7 @@ maps this page at its virtual address.
 	  and / or in high memory
 	b) the kernel is about to read from a page cache page and user space
 	   shared/writable mappings of this page potentially exist.  Note
-	   that {get,pin}_user_pages{_fast} already call flush_dcache_page
+	   that {get,pin}_user_pages{_fast} already call flush_dcache_folio
 	   on any page found in the user address space and thus driver
 	   code rarely needs to take this into account.
 
@@ -295,7 +295,7 @@ maps this page at its virtual address.
 
 	The phrase "kernel writes to a page cache page" means, specifically,
 	that the kernel executes store instructions that dirty data in that
-	page at the page->virtual mapping of that page.  It is important to
+	page at the kernel virtual mapping of that page.  It is important to
 	flush here to handle D-cache aliasing, to make sure these kernel stores
 	are visible to user space mappings of that page.
@@ -306,18 +306,18 @@ maps this page at its virtual address.
 	If D-cache aliasing is not an issue, this routine may simply be defined
 	as a nop on that architecture.
 
-	There is a bit set aside in page->flags (PG_arch_1) as "architecture
+	There is a bit set aside in folio->flags (PG_arch_1) as "architecture
 	private".  The kernel guarantees that, for pagecache pages, it will
 	clear this bit when such a page first enters the pagecache.
 
 	This allows these interfaces to be implemented much more
 	efficiently.  It allows one to "defer" (perhaps indefinitely) the
 	actual flush if there are currently no user processes mapping this
-	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	page.  See sparc64's flush_dcache_folio and update_mmu_cache_range
 	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if
-	page_file_mapping() returns a mapping, and mapping_mapped on that
+	The idea is, first at flush_dcache_folio() time, if
+	folio_flush_mapping() returns a mapping, and mapping_mapped() on that
 	mapping returns %false, just mark the architecture private page
 	flag bit.  Later, in update_mmu_cache_range(), a check is made
 	of this flag bit, and if set the flush is done and the flag bit
@@ -331,12 +331,6 @@ maps this page at its virtual address.
 	dirty.  Again, see sparc64 for examples of how
 	to deal with this.
 
-``void flush_dcache_folio(struct folio *folio)``
-	This function is called under the same circumstances as
-	flush_dcache_page().  It allows the architecture to
-	optimise for flushing the entire folio of pages instead
-	of flushing one page at a time.
-
 ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 	unsigned long user_vaddr, void *dst, void *src, int len)``
 ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
@@ -357,7 +351,7 @@ maps this page at its virtual address.
 
 	When the kernel needs to access the contents of an anonymous
 	page, it calls this function (currently only
-	get_user_pages()).  Note: flush_dcache_page() deliberately
+	get_user_pages()).  Note: flush_dcache_folio() deliberately
 	doesn't work for an anonymous page.  The default
 	implementation is a nop (and should remain so for all coherent
 	architectures).  For incoherent architectures, it should flush
@@ -374,7 +368,7 @@ maps this page at its virtual address.
 ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache_range.  In the future, the hope
+	flush_dcache_folio and update_mmu_cache_range.  In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address


@@ -7,14 +7,14 @@
 struct folio;
 
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio);
 #endif
 #else
 static inline void flush_dcache_folio(struct folio *folio)
 {
 }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0
+#define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 #endif /* _LINUX_CACHEFLUSH_H */


@@ -1119,7 +1119,7 @@ void page_offline_end(void)
 }
 EXPORT_SYMBOL(page_offline_end);
 
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio)
 {
 	long i, nr = folio_nr_pages(folio);