mm/rmap: move folio_test_anon() check out of __folio_set_anon()

Let's handle it in the caller; no need for the "first" check based on the
mapcount.

We really only end up with !anon pages in page_add_anon_rmap() via
do_swap_page(), where we hold the folio lock.  So races are not possible. 
Add a VM_WARN_ON_FOLIO() to make sure that we really hold the folio lock.
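
For context, a minimal sketch of the relevant call pattern, heavily
abridged from do_swap_page() in mm/memory.c of this era (all surrounding
logic elided):

	folio_lock(folio);
	...
	/* The folio may not be anon yet; the folio lock serializes that. */
	page_add_anon_rmap(page, vma, vmf->address, rmap_flags);
	...
	folio_unlock(folio);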

In the future, we might want to let do_swap_page() use
folio_add_new_anon_rmap() on new pages instead; however, we might then
have to pass whether the folio is exclusive or not.  So keep that
handling in page_add_anon_rmap() for now.
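
For illustration only: the existing prototype, plus a hypothetical
variant (not part of this patch) that such a conversion might need in
order to communicate exclusivity:

	/* Existing: new anon folios are considered exclusive. */
	void folio_add_new_anon_rmap(struct folio *folio,
				     struct vm_area_struct *vma,
				     unsigned long address);

	/* Hypothetical variant passing exclusivity explicitly: */
	void folio_add_new_anon_rmap(struct folio *folio,
				     struct vm_area_struct *vma,
				     unsigned long address, bool exclusive);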

For hugetlb we never expect to have a non-anon page in
hugepage_add_anon_rmap().  Remove that code, along with some other checks
that are either not required or were checked in
hugepage_add_new_anon_rmap() already.
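
For reference, hugepage_add_anon_rmap() then reduces to roughly the
following (reconstructed from the hunk below):

	void hugepage_add_anon_rmap(struct page *page,
				    struct vm_area_struct *vma,
				    unsigned long address, rmap_t flags)
	{
		struct folio *folio = page_folio(page);
		int first;

		VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);

		first = atomic_inc_and_test(&folio->_entire_mapcount);
		VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
		VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
		if (flags & RMAP_EXCLUSIVE)
			SetPageAnonExclusive(page);
	}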

Link: https://lkml.kernel.org/r/20230913125113.313322-4-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Author:    David Hildenbrand <david@redhat.com>
Date:      2023-09-13 14:51:10 +02:00
Committer: Andrew Morton
Commit:    c5c5400347 (parent c66db8c070)
1 changed file with 8 additions and 15 deletions

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1135,9 +1135,6 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
 
 	BUG_ON(!anon_vma);
 
-	if (folio_test_anon(folio))
-		return;
-
 	/*
 	 * If the folio isn't exclusive to this vma, we must use the _oldest_
 	 * possible anon_vma for the folio mapping!
@@ -1239,12 +1236,12 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	if (nr)
 		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
 
-	if (likely(!folio_test_ksm(folio))) {
-		if (first)
-			__folio_set_anon(folio, vma, address,
-					 !!(flags & RMAP_EXCLUSIVE));
-		else
-			__page_check_anon_rmap(folio, page, vma, address);
+	if (unlikely(!folio_test_anon(folio))) {
+		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+		__folio_set_anon(folio, vma, address,
+				 !!(flags & RMAP_EXCLUSIVE));
+	} else if (likely(!folio_test_ksm(folio))) {
+		__page_check_anon_rmap(folio, page, vma, address);
 	}
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
@@ -2541,17 +2538,13 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 			    unsigned long address, rmap_t flags)
 {
 	struct folio *folio = page_folio(page);
-	struct anon_vma *anon_vma = vma->anon_vma;
 	int first;
 
-	BUG_ON(!folio_test_locked(folio));
-	BUG_ON(!anon_vma);
+	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
 	first = atomic_inc_and_test(&folio->_entire_mapcount);
 	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
 	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
-	if (first)
-		__folio_set_anon(folio, vma, address,
-				 !!(flags & RMAP_EXCLUSIVE));
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
 }