mm, slub: restructure new page checks in ___slab_alloc()

When we allocate a slab object from a newly acquired page (from the node's
partial list or the page allocator), we usually also retain the page as the new
percpu slab. There are two exceptions: when the pfmemalloc status of the page
doesn't match our gfp flags, and when the cache has debugging enabled.

The current code for these decisions is not easy to follow, so restructure it
and add comments. The new structure will also help with the following changes.
No functional change.
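
As an illustration of the two exceptions above, here is a minimal userspace
sketch of the retain-or-not decision. It is not the kernel code: struct cache,
the pfmemalloc field as used here, pfmemalloc_ok(), retain_as_percpu_slab() and
gfp_can_use_reserves are simplified stand-ins invented for this sketch; only
the decision logic mirrors the description above.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for struct kmem_cache and struct page. */
struct cache { bool debug_enabled; };
struct page  { bool pfmemalloc; };

/*
 * Stand-in for pfmemalloc_match(): a page taken from the memory
 * reserves may only serve allocations that are themselves allowed
 * to dip into those reserves.
 */
static bool pfmemalloc_ok(const struct page *page, bool gfp_can_use_reserves)
{
	return !page->pfmemalloc || gfp_can_use_reserves;
}

/*
 * true  -> retain the page as the new percpu slab (the usual case)
 * false -> serve this one object, then give the page back
 */
static bool retain_as_percpu_slab(const struct cache *s,
				  const struct page *page,
				  bool gfp_can_use_reserves)
{
	/* Debugging: every allocation must pass the consistency checks. */
	if (s->debug_enabled)
		return false;

	/* Reserve mismatch: don't make further mismatched allocs easy. */
	if (!pfmemalloc_ok(page, gfp_can_use_reserves))
		return false;

	return true;
}

int main(void)
{
	struct cache debug_cache = { .debug_enabled = true };
	struct page normal_page = { .pfmemalloc = false };

	/* A debug cache never retains the page: prints "retain? 0". */
	printf("retain? %d\n",
	       retain_as_percpu_slab(&debug_cache, &normal_page, false));
	return 0;
}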

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Author: Vlastimil Babka <vbabka@suse.cz>
Date:   2021-05-11 18:25:09 +02:00
Commit: 1572df7cbc
Parent: 75c8ff281d

1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2782,13 +2782,29 @@ new_slab:
 	c->page = page;
 
 check_new_page:
-	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
-		goto load_freelist;
 
-	/* Only entered in the debug case */
-	if (kmem_cache_debug(s) &&
-	    !alloc_debug_processing(s, page, freelist, addr))
-		goto new_slab;	/* Slab failed checks. Next slab needed */
+	if (kmem_cache_debug(s)) {
+		if (!alloc_debug_processing(s, page, freelist, addr))
+			/* Slab failed checks. Next slab needed */
+			goto new_slab;
+		else
+			/*
+			 * For debug case, we don't load freelist so that all
+			 * allocations go through alloc_debug_processing()
+			 */
+			goto return_single;
+	}
+
+	if (unlikely(!pfmemalloc_match(page, gfpflags)))
+		/*
+		 * For !pfmemalloc_match() case we don't load freelist so that
+		 * we don't make further mismatched allocations easier.
+		 */
+		goto return_single;
+
+	goto load_freelist;
+
+return_single:
 
 	deactivate_slab(s, page, get_freepointer(s, freelist), c);
 	return freelist;
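
Note the shape of the result: the common case falls straight through to
load_freelist, while both exceptional cases now share the new return_single
exit, where deactivate_slab() gives the page back rather than keeping it as
the percpu slab, and the single object obtained from it is returned to the
caller.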