Subject: Re: [PATCH] mm,cma: remove pfn_range_valid_contig
On 3/6/20 11:06 PM, Rik van Riel wrote:
> The function pfn_range_valid_contig checks whether all memory in the
> target area is free. This causes unnecessary CMA failures, since
> alloc_contig_range will migrate movable memory out of a target range,
> and has its own sanity check early on in has_unmovable_pages, which
> is called from start_isolate_page_range & set_migratetype_isolate.
>
> Relying on that has_unmovable_pages call simplifies the CMA code and
> results in an increased success rate of CMA allocations.
>
> Signed-off-by: Rik van Riel <riel@surriel.com>

Yeah, the page_count and PageHuge checks are harmful. Not sure about
PageReserved. And is anything later in alloc_contig_range() making sure that
we always stay within the same zone?
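
For reference, the isolation path the changelog relies on is roughly this
(simplified call-chain sketch from my reading, arguments omitted):

    alloc_contig_range()
      -> start_isolate_page_range()
           -> set_migratetype_isolate()      /* per pageblock */
                -> has_unmovable_pages()     /* bails out on unmovable pages */

i.e. movable pages that merely happen to be allocated don't fail the range
there, they just get migrated, which is what the changelog is counting on.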

> ---
> mm/page_alloc.c | 47 +++--------------------------------------------
> 1 file changed, 3 insertions(+), 44 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0fb3c1719625..75e84907d8c6 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8539,32 +8539,6 @@ static int __alloc_contig_pages(unsigned long start_pfn,
>  				  gfp_mask);
>  }
>  
> -static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
> -				   unsigned long nr_pages)
> -{
> -	unsigned long i, end_pfn = start_pfn + nr_pages;
> -	struct page *page;
> -
> -	for (i = start_pfn; i < end_pfn; i++) {
> -		page = pfn_to_online_page(i);
> -		if (!page)
> -			return false;
> -
> -		if (page_zone(page) != z)
> -			return false;
> -
> -		if (PageReserved(page))
> -			return false;
> -
> -		if (page_count(page) > 0)
> -			return false;
> -
> -		if (PageHuge(page))
> -			return false;
> -	}
> -	return true;
> -}
> -
>  static bool zone_spans_last_pfn(const struct zone *zone,
>  				unsigned long start_pfn, unsigned long nr_pages)
>  {
> @@ -8605,28 +8579,13 @@ struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
>  	zonelist = node_zonelist(nid, gfp_mask);
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist,
>  					gfp_zone(gfp_mask), nodemask) {
> -		spin_lock_irqsave(&zone->lock, flags);
> -
>  		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
>  		while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
> -			if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
> -				/*
> -				 * We release the zone lock here because
> -				 * alloc_contig_range() will also lock the zone
> -				 * at some point. If there's an allocation
> -				 * spinning on this lock, it may win the race
> -				 * and cause alloc_contig_range() to fail...
> -				 */
> -				spin_unlock_irqrestore(&zone->lock, flags);
> -				ret = __alloc_contig_pages(pfn, nr_pages,
> -							   gfp_mask);
> -				if (!ret)
> -					return pfn_to_page(pfn);
> -				spin_lock_irqsave(&zone->lock, flags);
> -			}
> +			ret = __alloc_contig_pages(pfn, nr_pages, gfp_mask);
> +			if (!ret)
> +				return pfn_to_page(pfn);
>  			pfn += nr_pages;
>  		}
> -		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
>  	return NULL;
>  }
> 
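
For context, callers of alloc_contig_pages() only ever see the simpler
behavior anyway; roughly (sketch only, not from this patch: a 1GB allocation
on node nid, released with free_contig_range()):

	/* illustrative values, not from the patch */
	nr_pages = 1UL << (30 - PAGE_SHIFT);
	page = alloc_contig_pages(nr_pages, GFP_KERNEL | __GFP_NOWARN, nid, NULL);
	if (page)
		free_contig_range(page_to_pfn(page), nr_pages);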
