mm/page_alloc: remove tracking of active PCP lists range in bulk free

Patch series "Two minor cleanups for pcp list in page_alloc".

There are two minor cleanups for the pcp lists in page_alloc. More
details can be found in the respective patches.


This patch (of 2):

After commit fd56eef258 ("mm/page_alloc: simplify how many pages are
selected per pcp list during bulk free"), we drain all pages in the
selected pcp list, and the passed count is guaranteed to be < pcp->count.
The search therefore finishes before any wrap-around, so the tracking of
the active PCP lists range, which was only needed for the wrap-around
case, is no longer required.
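
To illustrate the termination argument, here is a small userspace sketch
(not kernel code; the array and all names below are illustrative
stand-ins for pcp->lists): as long as count does not exceed the total
number of queued pages, the plain wrap-around search always finds a
non-empty list, so no active-range bookkeeping is needed.

	/* toy_model.c - userspace sketch of the simplified round-robin drain */
	#include <assert.h>
	#include <stdio.h>

	#define NR_PCP_LISTS 12

	int main(void)
	{
		/* Page counts per list; stands in for the real pcp->lists. */
		int lists[NR_PCP_LISTS] = { 0, 3, 0, 0, 2, 0, 0, 0, 1 };
		int count = 5;	/* invariant from the commit: count < total (6) */
		int pindex = 0;

		while (count > 0) {
			/* Selection loop as it reads after this patch. */
			do {
				if (++pindex > NR_PCP_LISTS - 1)
					pindex = 0;
			} while (lists[pindex] == 0);

			/* Drain the selected list until it or count is exhausted. */
			while (count > 0 && lists[pindex] > 0) {
				lists[pindex]--;
				count--;
			}
			printf("drained from list %d\n", pindex);
		}
		assert(count == 0);	/* terminated without range tracking */
		return 0;
	}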

Link: https://lkml.kernel.org/r/20230809100754.3094517-1-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20230809100754.3094517-2-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

@@ -1207,8 +1207,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 					int pindex)
 {
 	unsigned long flags;
-	int min_pindex = 0;
-	int max_pindex = NR_PCP_LISTS - 1;
 	unsigned int order;
 	bool isolated_pageblocks;
 	struct page *page;
@@ -1231,17 +1229,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	/* Remove pages from lists in a round-robin fashion. */
 	do {
-		if (++pindex > max_pindex)
-			pindex = min_pindex;
+		if (++pindex > NR_PCP_LISTS - 1)
+			pindex = 0;
 		list = &pcp->lists[pindex];
-		if (!list_empty(list))
-			break;
-
-		if (pindex == max_pindex)
-			max_pindex--;
-		if (pindex == min_pindex)
-			min_pindex++;
-	} while (1);
+	} while (list_empty(list));
 	order = pindex_to_order(pindex);
 	nr_pages = 1 << order;
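
For reference, with the hunks above applied, the selection loop in
free_pcppages_bulk() reads:

	/* Remove pages from lists in a round-robin fashion. */
	do {
		if (++pindex > NR_PCP_LISTS - 1)
			pindex = 0;
		list = &pcp->lists[pindex];
	} while (list_empty(list));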