mm: compaction: update pageblock skip when first migration candidate is not at the start

isolate_migratepages_block should mark a pageblock as skip if scanning
started on an aligned pageblock boundary but it only updates the skip flag
if the first migration candidate is also aligned.  Tracing during a
compaction stress load (mmtests: workload-usemem-stress-numa-compact) showed
that many pageblocks are not marked skip, causing excessive scanning of
blocks that had been recently checked.  Update the pageblock skip flag based
on "valid_page", which is set if scanning started on a pageblock boundary.

[mgorman@techsingularity.net: fix handling of skip bit]
  Link: https://lkml.kernel.org/r/20230602111622.swtxhn6lu2qwgrwq@techsingularity.net
Link: https://lkml.kernel.org/r/20230515113344.6869-4-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Tested-by: Raghavendra K T <raghavendra.kt@amd.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Chuyi Zhou <zhouchuyi@bytedance.com>
Cc: Jiri Slaby <jirislaby@kernel.org>
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Mel Gorman 2023-05-15 12:33:43 +01:00 committed by Andrew Morton
parent 9ecc5fc50a
commit 590ccea80a
1 changed file with 12 additions and 11 deletions


@@ -392,18 +392,14 @@ void reset_isolation_suitable(pg_data_t *pgdat)
  * Sets the pageblock skip bit if it was clear. Note that this is a hint as
  * locks are not required for read/writers. Returns true if it was already set.
  */
-static bool test_and_set_skip(struct compact_control *cc, struct page *page,
-					unsigned long pfn)
+static bool test_and_set_skip(struct compact_control *cc, struct page *page)
 {
 	bool skip;
 
-	/* Do no update if skip hint is being ignored */
+	/* Do not update if skip hint is being ignored */
 	if (cc->ignore_skip_hint)
 		return false;
 
-	if (!pageblock_aligned(pfn))
-		return false;
-
 	skip = get_pageblock_skip(page);
 	if (!skip && !cc->no_set_skip_hint)
 		set_pageblock_skip(page);
@@ -470,8 +466,7 @@ static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
 {
 }
 
-static bool test_and_set_skip(struct compact_control *cc, struct page *page,
-					unsigned long pfn)
+static bool test_and_set_skip(struct compact_control *cc, struct page *page)
 {
 	return false;
 }
@@ -1074,11 +1069,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			lruvec_memcg_debug(lruvec, page_folio(page));
 
-			/* Try get exclusive access under lock */
-			if (!skip_updated) {
+			/*
+			 * Try get exclusive access under lock. If marked for
+			 * skip, the scan is aborted unless the current context
+			 * is a rescan to reach the end of the pageblock.
+			 */
+			if (!skip_updated && valid_page) {
 				skip_updated = true;
-				if (test_and_set_skip(cc, page, low_pfn))
+				if (test_and_set_skip(cc, valid_page) &&
+				    !cc->finish_pageblock) {
 					goto isolate_abort;
+				}
 			}
 
 			/*