drm/ttm: make sure pool pages are cleared

The old implementation wasn't consistent about this.

But it looks like we depend on it, so better bring it back.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reported-and-tested-by: Mike Galbraith <efault@gmx.de>
Fixes: d099fc8f54 ("drm/ttm: new TT backend allocation pool v3")
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210210160549.1462-1-christian.koenig@amd.com
commit 811ee9dff5 (parent 1926a0508d)
Christian König, 2021-02-10 14:24:27 +01:00
1 file changed, 10 insertions(+)

--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -33,6 +33,7 @@
 
 #include <linux/module.h>
 #include <linux/dma-mapping.h>
+#include <linux/highmem.h>
 
 #ifdef CONFIG_X86
 #include <asm/set_memory.h>
@@ -218,6 +219,15 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
 /* Give pages into a specific pool_type */
 static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
 {
+	unsigned int i, num_pages = 1 << pt->order;
+
+	for (i = 0; i < num_pages; ++i) {
+		if (PageHighMem(p))
+			clear_highpage(p + i);
+		else
+			clear_page(page_address(p + i));
+	}
+
 	spin_lock(&pt->lock);
 	list_add(&p->lru, &pt->pages);
 	spin_unlock(&pt->lock);
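
For context, the contract this patch restores is that every page of a (possibly high-order) allocation is zeroed before it goes back into the pool, so a later allocation never observes stale data from a previous user. Below is a minimal userspace sketch of that clear-on-recycle pattern; PAGE_SIZE, give_to_pool() and the order bookkeeping are illustrative stand-ins, not kernel API.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096u  /* illustrative; the real value is per-arch */

/*
 * Zero all 2^order pages of a block before recycling it, mirroring
 * the loop ttm_pool_type_give() runs over 1 << pt->order pages.
 */
static void give_to_pool(unsigned char *block, unsigned int order)
{
	unsigned int i, num_pages = 1u << order;

	for (i = 0; i < num_pages; ++i)
		memset(block + (size_t)i * PAGE_SIZE, 0, PAGE_SIZE);
}

int main(void)
{
	unsigned int order = 2; /* a 4-page (16 KiB) allocation */
	size_t size = (size_t)(1u << order) * PAGE_SIZE;
	unsigned char *block = aligned_alloc(PAGE_SIZE, size);

	if (!block)
		return 1;

	memset(block, 0xAA, size);  /* simulate stale data from a prior user */
	give_to_pool(block, order); /* recycle: every page is cleared */
	printf("first byte after recycle: 0x%02x\n", block[0]); /* 0x00 */

	free(block);
	return 0;
}

The kernel version needs the two-way branch the sketch omits: clear_highpage() temporarily maps a highmem page (which has no permanent kernel mapping) before zeroing it, while clear_page(page_address(p + i)) only works for lowmem pages that are always mapped; that is also why the patch adds the linux/highmem.h include.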