From 13e4ad2ce8df6e058ef482a31fdd81c725b0f7ea Mon Sep 17 00:00:00 2001
From: Nadav Amit <namit@vmware.com>
Date: Sun, 21 Nov 2021 12:40:08 -0800
Subject: [PATCH] hugetlbfs: flush before unlock on move_hugetlb_page_tables()

We must flush the TLB before releasing i_mmap_rwsem to avoid the
potential reuse of an unshared PMD page table page.  This ordering is
not respected in move_hugetlb_page_tables(): the last reference on the
page table can therefore be dropped before the TLB flush takes place.

Prevent it by reordering the operations and flushing the TLB before
releasing i_mmap_rwsem.

Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2ccebe1ca9f4..abcd1785c629 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4919,9 +4919,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 		move_huge_pte(vma, old_addr, new_addr, src_pte);
 	}
-	i_mmap_unlock_write(mapping);
 	flush_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
+	i_mmap_unlock_write(mapping);
 
 	return len + old_addr - old_end;
 }
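
For reference, the resulting tail of move_hugetlb_page_tables() is shown
below with added explanatory comments (a sketch of the fixed ordering;
the calls are kernel-internal and the excerpt is not compilable outside
the kernel tree):

	/*
	 * Flush while still holding i_mmap_rwsem.  Once the lock is
	 * released, a concurrent huge_pmd_unshare() may drop the last
	 * reference on a previously shared PMD page table page and free
	 * it, so any stale TLB entries translating through that page
	 * must already be gone by then.
	 */
	flush_tlb_range(vma, old_end - len, old_end);
	mmu_notifier_invalidate_range_end(&range);
	i_mmap_unlock_write(mapping);	/* only now may the page be freed and reused */

	return len + old_addr - old_end;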