linux-stable/arch/powerpc/mm/book3s64
Suren Baghdasaryan 49b0638502 mm: enable page walking API to lock vmas during the walk
walk_page_range() and friends often operate under write-locked mmap_lock.
With the introduction of per-vma locks, the vmas have to be locked as well
during such walks to prevent concurrent page faults in these areas.  Add an
additional member to mm_walk_ops to indicate the locking requirements of
the walk.
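A minimal sketch of the new interface as described above; the PGWALK_*
names and the exact field placement follow my reading of the change and
should be treated as illustrative:

	/* Locking requirement a walker declares for the vmas it visits. */
	enum page_walk_lock {
		/* mmap_lock held for read is enough; vmas stay unlocked */
		PGWALK_RDLOCK = 0,
		/* each vma is write-locked by the walk before callbacks run */
		PGWALK_WRLOCK = 1,
		/* caller already write-locked the vma; the walk only asserts it */
		PGWALK_WRLOCK_VERIFY = 2,
	};

	struct mm_walk_ops {
		...
		enum page_walk_lock walk_lock;	/* new member */
	};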

The change ensures that page walks which prevent concurrent page faults
by write-locking mmap_lock continue to operate correctly after the
introduction of per-vma locks.  With per-vma locks, page faults can be
handled under the vma lock without taking mmap_lock at all, so
write-locking mmap_lock alone no longer stops them.  Locking the vmas
during the walk closes that gap.
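Roughly, before the callbacks run for a vma, the walk applies the
requested lock.  A sketch along these lines, with the helper name being
an assumption here; vma_start_write() and vma_assert_write_locked() are
the existing per-vma-lock primitives:

	/* Hypothetical helper: lock one vma as requested by the walker. */
	static void walk_vma_lock(struct vm_area_struct *vma,
				  enum page_walk_lock walk_lock)
	{
		switch (walk_lock) {
		case PGWALK_WRLOCK:
			/* Block faults handled under the per-vma lock. */
			vma_start_write(vma);
			break;
		case PGWALK_WRLOCK_VERIFY:
			/* Caller promised the vma is already write-locked. */
			vma_assert_write_locked(vma);
			break;
		case PGWALK_RDLOCK:
			/* Read walk: mmap_lock held for read is sufficient. */
			break;
		}
	}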

A sample issue this solves is do_mbind() performing queue_pages_range()
to queue pages for migration.  Without this change, a page can be
faulted into the area concurrently and be left out of the migration.

Link: https://lkml.kernel.org/r/20230804152724.3090321-2-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org>
Suggested-by: Jann Horn <jannh@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michel Lespinasse <michel@lespinasse.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-21 13:07:20 -07:00
Makefile powerpc: Book3S 64-bit outline-only KASAN support 2022-05-22 15:58:29 +10:00
hash_4k.c powerpc/64s/hash: add stress_hpt kernel boot option to increase hash faults 2022-12-02 18:04:25 +11:00
hash_64k.c powerpc/64s/hash: add stress_hpt kernel boot option to increase hash faults 2022-12-02 18:04:25 +11:00
hash_hugepage.c
hash_native.c powerpc/64s: Fix native_hpte_remove() to be irq-safe 2023-07-10 09:47:46 +10:00
hash_pgtable.c powerpc/64s: Fix hash__change_memory_range preemption warning 2022-10-18 22:46:18 +11:00
hash_tlb.c powerpc: allow pte_offset_map[_lock]() to fail 2023-06-19 16:19:08 -07:00
hash_utils.c Merge branch 'fixes' into next 2023-02-12 22:11:56 +11:00
hugetlbpage.c powerpc/mm: Update default hugetlb size early 2022-02-12 22:47:44 +11:00
internal.h powerpc/64s/hash: add stress_hpt kernel boot option to increase hash faults 2022-12-02 18:04:25 +11:00
iommu_api.c mm/gup: remove vmas parameter from pin_user_pages() 2023-06-09 16:25:26 -07:00
mmu_context.c powerpc/mm: Add __init attribute to eligible functions 2021-12-23 22:33:11 +11:00
pgtable.c powerpc: Remove find_current_mm_pte() 2022-11-24 23:12:18 +11:00
pkeys.c powerpc: Include asm/firmware.h in all users of firmware_has_feature() 2022-06-29 16:45:05 +10:00
radix_hugetlbpage.c powerpc/64s: POWER10 nest MMU can upgrade PTE access authority without TLB flush 2022-07-27 21:36:04 +10:00
radix_pgtable.c powerpc/book3s64/mm: Use PAGE_KERNEL instead of opencoding 2023-06-21 14:08:53 +10:00
radix_tlb.c powerpc/64s/radix: Fix exit lazy tlb mm switch with irqs enabled 2023-06-09 16:35:52 +10:00
slb.c powerpc: fix typos in comments 2022-05-05 22:12:44 +10:00
slice.c powerpc/mm: Enable full randomisation of memory mappings 2022-05-05 22:11:58 +10:00
subpage_prot.c mm: enable page walking API to lock vmas during the walk 2023-08-21 13:07:20 -07:00
trace.c mm/migration: add trace events for THP migrations 2022-03-24 19:06:45 -07:00