mm/dmapool: use might_alloc()

Now that the might_alloc() helper has landed, use it here as well.  On top of
the existing might_sleep_if() check this also brings in lockdep coverage
through the fs_reclaim annotations.
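
For reference, might_alloc() as added to include/linux/sched/mm.h is roughly:

	static inline void might_alloc(gfp_t gfp_mask)
	{
		fs_reclaim_acquire(gfp_mask);
		fs_reclaim_release(gfp_mask);

		might_sleep_if(gfpflags_allow_blocking(gfp_mask));
	}

i.e. it keeps the old might_sleep_if() check and additionally acquires and
releases the fs_reclaim lockdep map for reclaim-capable gfp masks.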

Link: https://lkml.kernel.org/r/20210113135009.3606813-1-daniel.vetter@ffwll.ch
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Daniel Vetter 2021-02-25 17:18:41 -08:00 committed by Linus Torvalds
parent 4be408cec2
commit 0f2f89b6de
1 changed file with 2 additions and 1 deletion

@@ -28,6 +28,7 @@
 #include <linux/mutex.h>
 #include <linux/poison.h>
 #include <linux/sched.h>
+#include <linux/sched/mm.h>
 #include <linux/slab.h>
 #include <linux/stat.h>
 #include <linux/spinlock.h>
@@ -319,7 +320,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 	size_t offset;
 	void *retval;
 
-	might_sleep_if(gfpflags_allow_blocking(mem_flags));
+	might_alloc(mem_flags);
 
 	spin_lock_irqsave(&pool->lock, flags);
 	list_for_each_entry(page, &pool->page_list, page_list) {
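
A minimal sketch of the kind of bug the extra lockdep coverage catches
(hypothetical driver code, not part of this patch; my_drv, drv->lock and
my_drv_get_buf() are made-up names): allocating with a reclaim-capable mask
while holding a lock that is also taken on the reclaim path, e.g. from a
shrinker.

	#include <linux/dma-mapping.h>
	#include <linux/dmapool.h>
	#include <linux/gfp.h>
	#include <linux/mutex.h>

	/* Hypothetical driver state, for illustration only. */
	struct my_drv {
		struct mutex lock;	/* also taken from the driver's shrinker */
		struct dma_pool *pool;
	};

	static void *my_drv_get_buf(struct my_drv *drv, dma_addr_t *dma)
	{
		void *buf;

		mutex_lock(&drv->lock);
		/*
		 * GFP_KERNEL may enter reclaim.  might_alloc() lets lockdep
		 * record the drv->lock <-> fs_reclaim dependency here even
		 * though this context is allowed to sleep, so
		 * might_sleep_if() alone stays silent.
		 */
		buf = dma_pool_alloc(drv->pool, GFP_KERNEL, dma);
		mutex_unlock(&drv->lock);

		return buf;
	}

Because the dependency is recorded unconditionally, the report fires even on
runs where reclaim never actually recurses into the shrinker, which is the
point of annotating the allocation site.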