Subject: Re: [PATCH] dma-contiguous: do not allocate a single page from CMA area
From: Marek Szyprowski <m.szyprowski@samsung.com>
Date: 2019-02-18
Hi Nicolin,

On 2019-02-15 21:06, Nicolin Chen wrote:
> The addresses within a single page are always contiguous, so there
> is no real need to allocate a single page from the CMA area. Since
> the CMA area has a limited, predefined size, it may run out of space
> under heavy use, where many CMA pages end up being consumed by
> single-page allocations.
>
> However, there is also a concern that a device might care where a
> page comes from -- it might expect a page from its CMA area and
> behave differently if the page does not come from there.
>
> This patch skips one-page allocations and returns NULL, so that
> callers allocate normal pages instead, unless the device has its own
> CMA area. This saves space in the CMA area for larger CMA
> allocations, and it also reduces the CMA fragmentation caused by
> trivial allocations.
>
> Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>

Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
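
For reference, the caller-side pattern this change relies on looks roughly
like the sketch below (illustrative only, not taken from the patch;
my_dma_alloc() is a made-up name): callers such as dma_direct_alloc()
already fall back to the normal page allocator when
dma_alloc_from_contiguous() returns NULL.

static struct page *my_dma_alloc(struct device *dev, size_t size, gfp_t gfp)
{
	size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	struct page *page;

	/* Try the per-device or default CMA area first. */
	page = dma_alloc_from_contiguous(dev, count, get_order(size),
					 gfp & __GFP_NOWARN);

	/* With this patch, NULL for a single page means: use the buddy allocator. */
	if (!page)
		page = alloc_pages(gfp, get_order(size));

	return page;
}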

> ---
> kernel/dma/contiguous.c | 22 +++++++++++++++++++---
> 1 file changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index b2a87905846d..09074bd04793 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -186,16 +186,32 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
> *
> * This function allocates memory buffer for specified device. It uses
> * device specific contiguous memory area if available or the default
> - * global one. Requires architecture specific dev_get_cma_area() helper
> - * function.
> + * global one.
> + *
> + * However, it skips one-page allocations from the global area. Since
> + * the addresses within a single page are always contiguous, there is
> + * no need to consume CMA pages for them, and skipping them also helps
> + * reduce fragmentation in the CMA area. A caller should then fall back
> + * to allocating a normal page upon a NULL return value.
> + *
> + * Requires architecture specific dev_get_cma_area() helper function.
> */
> struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
> unsigned int align, bool no_warn)
> {
> + struct cma *cma;
> +
> if (align > CONFIG_CMA_ALIGNMENT)
> align = CONFIG_CMA_ALIGNMENT;
>
> - return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
> + if (dev && dev->cma_area)
> + cma = dev->cma_area;
> + else if (count > 1)
> + cma = dma_contiguous_default_area;
> + else
> + return NULL;
> +
> + return cma_alloc(cma, count, align, no_warn);
> }
>
> /**
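
On the free side, the existing return value of dma_release_from_contiguous()
already lets a caller tell the two cases apart, so the fallback pages are
handled cleanly as well. A minimal sketch under the same assumptions
(my_dma_free() is again a made-up name):

static void my_dma_free(struct device *dev, struct page *page, size_t size)
{
	size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;

	/*
	 * dma_release_from_contiguous() returns false for pages that were
	 * not allocated from a CMA area, so pages obtained from the normal
	 * allocator fallback are freed with __free_pages() instead.
	 */
	if (!dma_release_from_contiguous(dev, page, count))
		__free_pages(page, get_order(size));
}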

Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
