mtd: create function to perform large allocations

Introduce a common function to handle large, contiguous kmalloc buffer
allocations by exponentially backing off on the size of the requested
kernel transfer buffer until it succeeds or until the requested
transfer buffer size falls below the page size.

This helps ensure the operation can succeed under low-memory,
highly-fragmented conditions, albeit somewhat more slowly.
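
To illustrate the intended calling pattern, here is a sketch only, not part
of this patch: the function name example_read_big() and its error handling
are made up, and it assumes this kernel version's mtd->read() method and the
usual <linux/mtd/mtd.h>, <linux/slab.h> and <linux/string.h> headers.

/*
 * Hypothetical caller: ask for an ideally-sized buffer, accept whatever
 * smaller size could actually be allocated, and transfer in chunks.
 */
static int example_read_big(struct mtd_info *mtd, loff_t from,
			    size_t total, u_char *dst)
{
	size_t chunk = total;
	size_t retlen;
	size_t done = 0;
	int err = 0;
	void *buf;

	/* May return a buffer smaller than 'total'; 'chunk' holds the size */
	buf = mtd_kmalloc_up_to(mtd, &chunk);
	if (!buf)
		return -ENOMEM;

	while (done < total) {
		size_t len = min(chunk, total - done);

		err = mtd->read(mtd, from + done, len, &retlen, buf);
		if (err || !retlen)
			break;

		memcpy(dst + done, buf, retlen);
		done += retlen;
	}

	kfree(buf);
	return err;
}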

Artem: so this patch solves the problem that the kernel tries to kmalloc too
large a buffer, which (a) may fail and does fail - people complain about
this - and (b) slows down the system in case of high memory fragmentation,
because the kernel starts dropping caches, writing back, swapping, etc. But we
do not really have to allocate a lot of memory to do the I/O; we can do it with
as little as one min. I/O unit (one NAND page) of RAM. So the idea of this patch
is that if the user asks to read or write a lot, we try to kmalloc a lot, with
GFP flags which make the kernel _not_ drop caches, etc. If we can allocate it -
good; if not, we try to allocate half as much, and so on, until we reach the
min. I/O unit size, which is our last-resort allocation and uses the normal
GFP_KERNEL flags.

Artem: re-wrote the allocation function so that it makes sure the allocated
buffer size is aligned to the min. I/O size of the flash.
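
To illustrate the back-off and the size alignment with hypothetical numbers
(writesize = 2048 bytes, requested size = 1000000 bytes):

	1000000 fails -> 1000000 >> 1 = 500000, ALIGN(500000, 2048) = 501760
	501760  fails -> 501760 >> 1  = 250880, ALIGN(250880, 2048) = 251904
	...

and so on, until the size would drop below max(writesize, PAGE_SIZE), at which
point one last allocation is attempted with plain GFP_KERNEL. Every candidate
size passed to kmalloc() after the first is therefore a multiple of the min.
I/O unit.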

Signed-off-by: Grant Erickson <marathon96@gmail.com>
Tested-by: Ben Gardiner <bengardiner@nanometrics.ca>
Tested-by: Stefano Babic <sbabic@denx.de>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Grant Erickson 2011-04-08 08:51:32 -07:00 committed by David Woodhouse
parent 431e1ecabd
commit 33b53716bc
2 changed files with 51 additions and 0 deletions


@@ -638,6 +638,54 @@ int default_mtd_writev(struct mtd_info *mtd, const struct kvec *vecs,
return ret;
}
/**
 * mtd_kmalloc_up_to - allocate a contiguous buffer up to the specified size
 * @mtd: the MTD device the buffer is allocated for; its min. I/O unit
 *       ("mtd->writesize") is used to align the allocation size
 * @size: A pointer to the ideal or maximum size of the allocation. Points
 *        to the actual allocation size on success.
 *
 * This routine attempts to allocate a contiguous kernel buffer up to
 * the specified size, backing off the size of the request exponentially
 * until the request succeeds or until the allocation size falls below
 * the system page size. This attempts to make sure it does not adversely
 * impact system performance, so when allocating more than one page, we
 * ask the memory allocator to avoid re-trying, swapping, writing back
 * or performing I/O.
 *
 * Note, this function also makes sure that the size of the allocated
 * buffer is a multiple of the MTD device's min. I/O unit, i.e. the
 * "mtd->writesize" value.
 *
 * This is called, for example, by mtd_{read,write} and jffs2_scan_medium,
 * to handle smaller (i.e. degraded) buffer allocations under low- or
 * fragmented-memory situations where such reduced allocations, from a
 * requested ideal, are allowed.
 *
 * Returns a pointer to the allocated buffer on success; otherwise, NULL.
 */
void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size)
{
	gfp_t flags = __GFP_NOWARN | __GFP_WAIT |
		       __GFP_NORETRY | __GFP_NO_KSWAPD;
	size_t min_alloc = max_t(size_t, mtd->writesize, PAGE_SIZE);
	void *kbuf;

	*size = min_t(size_t, *size, KMALLOC_MAX_SIZE);

	while (*size > min_alloc) {
		kbuf = kmalloc(*size, flags);
		if (kbuf)
			return kbuf;

		*size >>= 1;
		*size = ALIGN(*size, mtd->writesize);
	}

	/*
	 * For the last resort allocation allow 'kmalloc()' to do all sorts of
	 * things (write-back, dropping caches, etc) by using GFP_KERNEL.
	 */
	return kmalloc(*size, GFP_KERNEL);
}
EXPORT_SYMBOL_GPL(add_mtd_device);
EXPORT_SYMBOL_GPL(del_mtd_device);
EXPORT_SYMBOL_GPL(get_mtd_device);
@@ -648,6 +696,7 @@ EXPORT_SYMBOL_GPL(__put_mtd_device);
EXPORT_SYMBOL_GPL(register_mtd_user);
EXPORT_SYMBOL_GPL(unregister_mtd_user);
EXPORT_SYMBOL_GPL(default_mtd_writev);
EXPORT_SYMBOL_GPL(mtd_kmalloc_up_to);
#ifdef CONFIG_PROC_FS


@@ -348,6 +348,8 @@ int default_mtd_writev(struct mtd_info *mtd, const struct kvec *vecs,
int default_mtd_readv(struct mtd_info *mtd, struct kvec *vecs,
unsigned long count, loff_t from, size_t *retlen);
void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size);
#ifdef CONFIG_MTD_PARTITIONS
void mtd_erase_callback(struct erase_info *instr);
#else