mm/damon/paddr: avoid unnecessary page level access check for pageout DAMOS action

Patch series "mm/damon/paddr: simplify page level access re-check for
pageout".

The 'pageout' DAMOS action implementation of 'paddr' asks reclaim_pages()
to do page level access check again.  But the user can ask 'paddr' to do
the page level access check on its own, using DAMOS filter of 'young page'
type.  Meanwhile, 'paddr' is the only user of reclaim_pages() that asks
the page level access check.
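For reference, such a filter can be installed with the in-kernel DAMON API
as sketched below.  This is only an illustration and not part of the
patch; 'scheme' stands for the 'struct damos' of interest.

	/* illustrative only: install a 'young page' type filter on the scheme */
	struct damos_filter *filter;

	filter = damos_new_filter(DAMOS_FILTER_TYPE_YOUNG, true);
	if (!filter)
		return -ENOMEM;
	damos_add_filter(scheme, filter);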

Make 'paddr' always do the page level access check on its own, and
simplify reclaim_pages() by removing the page level access check request
handling logic.  As a result of the change to reclaim_pages(),
reclaim_folio_list(), which is called by reclaim_pages(), also no longer
needs to do the page level access check.  Simplify that function, too.
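The interface this series aims at is sketched below; the exact prototype
is defined by the later patches of the series, so take this only as an
outline.

	/* intended interface after the series (sketch) */
	unsigned long reclaim_pages(struct list_head *folio_list);

	/* the caller in damon_pa_pageout() would then simply do */
	applied = reclaim_pages(&folio_list);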


This patch (of 4):

The 'pageout' DAMOS action implementation of 'paddr' asks reclaim_pages()
to do the page level access check.  The user could ask DAMOS to do the
page level access check on its own using a 'young page' type DAMOS
filter.  In that case, the pageout DAMOS action unnecessarily asks
reclaim_pages() to do the check again.  Ask for the page level access
check only if the scheme does not have such a filter.

Link: https://lkml.kernel.org/r/20240429224451.67081-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20240429224451.67081-2-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

@@ -244,6 +244,16 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 {
 	unsigned long addr, applied;
 	LIST_HEAD(folio_list);
+	bool ignore_references = false;
+	struct damos_filter *filter;
+
+	/* respect user's page level reference check handling request */
+	damos_for_each_filter(filter, s) {
+		if (filter->type == DAMOS_FILTER_TYPE_YOUNG) {
+			ignore_references = true;
+			break;
+		}
+	}
 
 	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
 		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
@@ -265,7 +275,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 put_folio:
 		folio_put(folio);
 	}
-	applied = reclaim_pages(&folio_list, false);
+	applied = reclaim_pages(&folio_list, ignore_references);
 	cond_resched();
 	return applied * PAGE_SIZE;
 }