mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()
Architectures may need to do special things when flushing hugepage TLB entries, so use the more applicable flush_hugetlb_tlb_range() instead of flush_tlb_range().
Link: https://lkml.kernel.org/r/20230801023145.17026-2-wangkefeng.wang@huawei.com
Fixes: 550a7d60bd ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Muchun Song <songmuchun@bytedance.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 13cfd63f3f
commit f720b471fd
@@ -5279,9 +5279,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	}

 	if (shared_pmd)
-		flush_tlb_range(vma, range.start, range.end);
+		flush_hugetlb_tlb_range(vma, range.start, range.end);
 	else
-		flush_tlb_range(vma, old_end - len, old_end);
+		flush_hugetlb_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
 	i_mmap_unlock_write(mapping);
 	hugetlb_vma_unlock_write(vma);