mm: page_alloc: fix move_freepages_block() range error

When a block is partially outside the zone of the cursor page, the
function cuts the range to the pivot page instead of the zone start.  This
can leave large parts of the block behind, which encourages incompatible
page mixing down the line (ask for one type, get another), and thus
long-term fragmentation.

This triggers reliably on the first block in the DMA zone, whose start_pfn
is 1.  The block is stolen, but everything before the pivot page (which
was often hundreds of pages) is left on the old list.

Link: https://lkml.kernel.org/r/20240320180429.678181-6-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Author:    Johannes Weiner 2024-03-20 14:02:10 -04:00
Committer: Andrew Morton
parent b54ccd3c6b
commit 2dd482ba62
1 changed file with 8 additions and 2 deletions


@@ -1650,9 +1650,15 @@ int move_freepages_block(struct zone *zone, struct page *page,
 	start_pfn = pageblock_start_pfn(pfn);
 	end_pfn = pageblock_end_pfn(pfn) - 1;
 
-	/* Do not cross zone boundaries */
+	/*
+	 * The caller only has the lock for @zone, don't touch ranges
+	 * that straddle into other zones. While we could move part of
+	 * the range that's inside the zone, this call is usually
+	 * accompanied by other operations such as migratetype updates
+	 * which also should be locked.
+	 */
 	if (!zone_spans_pfn(zone, start_pfn))
-		start_pfn = pfn;
+		return 0;
 	if (!zone_spans_pfn(zone, end_pfn))
 		return 0;
 