mm/page_alloc: move pages to tail in move_to_free_list()
Whenever we move pages between freelists via move_to_free_list()/
move_freepages_block(), we don't actually touch the pages:

1. Page isolation doesn't actually touch the pages, it simply isolates
   pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
   When undoing isolation, we move the pages back to the target list.

2. Page stealing (steal_suitable_fallback()) moves free pages directly
   between lists without touching them.

3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()
   moves free pages directly between freelists without touching them.

We already place pages to the tail of the freelists when undoing
isolation via __putback_isolated_page(), let's do it in any case
(e.g., if order <= pageblock_order) and document the behavior.

To simplify, let's move the pages to the tail for all
move_to_free_list()/move_freepages_block() users. In 2., the target
list is empty, so there should be no change. In 3., we might observe
a change, however, highatomic is more concerned about allocations
succeeding than cache hotness - if we ever realize this change
degrades a workload, we can special-case this instance and add a
proper comment.

This change results in all pages getting onlined via online_pages()
to be placed to the tail of the freelist.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Link: https://lkml.kernel.org/r/20201005121534.15649-4-david@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 47b6a24a23
commit 293ffa5ebb

2 changed files with 12 additions and 3 deletions
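To make the effect of the change below concrete, here is a minimal userspace sketch (an illustration only, not kernel code) of a circular doubly-linked freelist in the style of the kernel's struct list_head. The allocator hands out entries from the head of the list, so an entry re-added at the tail - which is what move_to_free_list() now does via list_move_tail() - is the last one to be handed out again. The list_node type and the helper names below are hypothetical and exist only for this sketch.

#include <stdio.h>

/* Hypothetical stand-in for the kernel's struct list_head + an id. */
struct list_node {
        struct list_node *prev, *next;
        int page_id;
};

static void list_init(struct list_node *head)
{
        head->prev = head->next = head;
}

static void list_del_entry(struct list_node *n)
{
        n->prev->next = n->next;
        n->next->prev = n->prev;
}

/* Insert right before the head: this entry is considered last on allocation. */
static void list_add_tail_entry(struct list_node *head, struct list_node *n)
{
        n->prev = head->prev;
        n->next = head;
        head->prev->next = n;
        head->prev = n;
}

int main(void)
{
        struct list_node freelist;
        struct list_node pages[3] = {
                { .page_id = 0 }, { .page_id = 1 }, { .page_id = 2 },
        };

        list_init(&freelist);
        for (int i = 0; i < 3; i++)
                list_add_tail_entry(&freelist, &pages[i]);

        /* Like the patched move_to_free_list(): detach page 0, re-add at the tail. */
        list_del_entry(&pages[0]);
        list_add_tail_entry(&freelist, &pages[0]);

        /* Allocation takes from the head, so page 0 now comes out last. */
        for (struct list_node *n = freelist.next; n != &freelist; n = n->next)
                printf("allocate page %d\n", n->page_id);

        return 0;
}

Run, the walk prints pages 1 and 2 before page 0, mirroring how a just-onlined (or just-unisolated) page now sits behind the pages that were already free.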
mm/page_alloc.c

@@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 	area->nr_free++;
 }
 
-/* Used for pages which are on another list */
+/*
+ * Used for pages which are on another list. Move the pages to the tail
+ * of the list - so the moved pages won't immediately be considered for
+ * allocation again (e.g., optimization for memory onlining).
+ */
 static inline void move_to_free_list(struct page *page, struct zone *zone,
 				     unsigned int order, int migratetype)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_move(&page->lru, &area->free_list[migratetype]);
+	list_move_tail(&page->lru, &area->free_list[migratetype]);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
@@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
 #endif
 
 /*
- * Move the free pages in a range to the free lists of the requested type.
+ * Move the free pages in a range to the freelist tail of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
mm/page_isolation.c

@@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelist. This is an optimization for memory
+	 * onlining - just onlined memory won't immediately be considered for
+	 * allocation.
 	 */
 	if (!isolated_page) {
 		nr_pages = move_freepages_block(zone, page, migratetype, NULL);