mm/memcg: minor cleanup for mc_handle_present_pte()

When the page table lock is held, the page is always page_mapped(), so the
page_mapped() check is unneeded and can be removed.  The page also cannot be
freed from under us in this case, so a plain get_page() is enough to take the
extra page reference, which simplifies the code.  No functional change
intended.

Link: https://lkml.kernel.org/r/20230717113644.3026478-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/memcontrol.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5639,7 +5639,7 @@ static struct page *mc_handle_present_pte(struct vm_area_struct *vma,
 {
 	struct page *page = vm_normal_page(vma, addr, ptent);
 
-	if (!page || !page_mapped(page))
+	if (!page)
 		return NULL;
 	if (PageAnon(page)) {
 		if (!(mc.flags & MOVE_ANON))
@@ -5648,8 +5648,7 @@ static struct page *mc_handle_present_pte(struct vm_area_struct *vma,
 		if (!(mc.flags & MOVE_FILE))
 			return NULL;
 	}
-	if (!get_page_unless_zero(page))
-		return NULL;
+	get_page(page);
 
 	return page;
 }
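
For reference, here is a sketch of how mc_handle_present_pte() reads with this
change applied, reconstructed from the diff above.  It is not a standalone
snippet: mc, MOVE_ANON and MOVE_FILE are memcontrol.c internals, and the
function relies on the caller holding the page table lock for the PTE being
walked.

/*
 * Sketch of mc_handle_present_pte() after this change, reconstructed
 * from the diff above (memcontrol.c internals not shown).
 */
static struct page *mc_handle_present_pte(struct vm_area_struct *vma,
		unsigned long addr, pte_t ptent)
{
	struct page *page = vm_normal_page(vma, addr, ptent);

	if (!page)
		return NULL;
	if (PageAnon(page)) {
		if (!(mc.flags & MOVE_ANON))
			return NULL;
	} else {
		if (!(mc.flags & MOVE_FILE))
			return NULL;
	}
	/*
	 * The caller holds the page table lock, so the page is still
	 * mapped and cannot be freed from under us; a plain get_page()
	 * is sufficient, no need for get_page_unless_zero().
	 */
	get_page(page);

	return page;
}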