Subject: Re: [net-next v5 PATCH] page_pool: handle page recycle for NUMA_NO_NODE condition
Hi Saeed, 
> which is the MSI-X affinity... the pool has no knowledge of that on
> initialization.
>
> > The reason I want this behavior is that during driver init/boot, it can
> > easily happen that a driver allocates RX-pages from the wrong NUMA node.
> > This will cause a performance slowdown that normally doesn't happen,
> > because without a cache (like page_pool) RX-pages would fairly quickly
> > transition over to the RX NUMA node (instead we keep recycling these,
> > in your case #2, where you suggest recycling blindly in the case of
> > NUMA_NO_NODE). IMHO page_pool should hide this border-line case from
> > driver developers.
> >
>
> So, Ilias's #1 suggestion makes sense: always store a valid nid value.
> The question is which value to store on initialization if the user
> provided NUMA_NO_NODE? I don't think the pool is capable of choosing
> the right value, so let's just use NUMA node 0.

Again, I don't mind using the current solution. We could use 0 or whatever
NUMA node is chosen by alloc_pages_node().
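
Just to illustrate (a rough sketch only, not the posted patch, and the helper
name here is made up), resolving NUMA_NO_NODE to a concrete node once at pool
init would look something like:

#include <linux/topology.h>	/* numa_mem_id() */
#include <linux/numa.h>		/* NUMA_NO_NODE */
#include <net/page_pool.h>	/* struct page_pool */

/* Hypothetical helper: pin pool->p.nid to a usable node at init time,
 * so later code never has to special-case NUMA_NO_NODE.
 */
static void page_pool_resolve_nid(struct page_pool *pool)
{
	if (pool->p.nid == NUMA_NO_NODE)
		pool->p.nid = numa_mem_id();	/* or simply node 0 */
}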

>
> If the developer cares, he would have picked the right affinity on
> initialization, or he can just call pool_update_nid() when the affinity
> is determined, and everything will be fine after that point.
>
> My 2 cents is that you just can't provide perfect performance when
> the user uses NUMA_NO_NODE, so just pick any default concrete node id
> and avoid dealing with NUMA_NO_NODE in the pool fast or even slow path.

I don't have a strong preference for any of those. I just prefer the homogeneous
approach of always storing a normal, usable memory node id. Either way the rest
of the code seems fine, so I'll approve this once you manage to test it on your setup.
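
To spell out what I mean by the homogeneous approach (again only a sketch with
a made-up helper, not the code in this patch): with a concrete node always
stored in pool->p.nid, the recycle decision could be a plain node comparison:

#include <linux/mm.h>		/* page_to_nid() */
#include <net/page_pool.h>	/* struct page_pool */

/* Hypothetical check: recycle directly only when the page already
 * sits on the pool's (always concrete) node.
 */
static bool page_pool_page_is_local(const struct page_pool *pool,
				    const struct page *page)
{
	return page_to_nid(page) == pool->p.nid;
}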

I did test it on my netsec card using NUMA_NO_NODE. On that machine, though, it
doesn't make any difference, since page_to_nid(page) and numa_mem_id() always
return 0 there. So the allocation is already 'correct'; the only thing that
changes once I call page_pool_update_nid() is pool->p.nid.
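
For reference, this is roughly how a driver would report its node from NAPI
context so the pool follows the RX IRQ affinity (a sketch with hypothetical
driver names, using the page_pool_nid_changed() helper discussed in this thread):

#include <linux/netdevice.h>	/* struct napi_struct, napi_complete_done() */
#include <linux/topology.h>	/* numa_mem_id() */
#include <net/page_pool.h>	/* page_pool_nid_changed() */

/* Hypothetical driver private struct, just enough for the sketch. */
struct my_priv {
	struct napi_struct napi;
	struct page_pool *page_pool;
};

static int my_poll(struct napi_struct *napi, int budget)
{
	struct my_priv *priv = container_of(napi, struct my_priv, napi);

	/* No-op unless the node actually changed since the last poll. */
	page_pool_nid_changed(priv->page_pool, numa_mem_id());

	/* ... normal RX processing would go here ... */

	napi_complete_done(napi, 0);
	return 0;
}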

Thanks
/Ilias
>
> > --Jesper
> >
> >
> > > On Fri, Dec 20, 2019 at 06:06:49PM +0200, Ilias Apalodimas wrote:
> > > > On Fri, Dec 20, 2019 at 04:22:54PM +0100, Jesper Dangaard Brouer wrote:
> > > > > On Fri, 20 Dec 2019 12:49:37 +0200
> > > > > Ilias Apalodimas <ilias.apalodimas@linaro.org> wrote:
> > > > >
> > > > > > On Fri, Dec 20, 2019 at 11:41:16AM +0100, Jesper Dangaard Brouer wrote:
> > > > > > > On Fri, 20 Dec 2019 12:23:14 +0200
> > > > > > > Ilias Apalodimas <ilias.apalodimas@linaro.org> wrote:
> > > > > > >
> > > > > > > > Hi Jesper,
> > > > > > > >
> > > > > > > > I like the overall approach, since this moves the check out
> > > > > > > > of the hotpath. @Saeed, since I have no hardware to test
> > > > > > > > this on, would it be possible to check that it still works
> > > > > > > > fine for mlx5?
> > > > > > > >
> > > > > > > > [...]
> > > > > > > > > +	struct ptr_ring *r = &pool->ring;
> > > > > > > > > +	struct page *page;
> > > > > > > > > +	int pref_nid; /* preferred NUMA node */
> > > > > > > > > +
> > > > > > > > > +	/* Quicker fallback, avoid locks when ring is empty */
> > > > > > > > > +	if (__ptr_ring_empty(r))
> > > > > > > > > +		return NULL;
> > > > > > > > > +
> > > > > > > > > +	/* Softirq guarantee CPU and thus NUMA node is stable. This,
> > > > > > > > > +	 * assumes CPU refilling driver RX-ring will also run RX-NAPI.
> > > > > > > > > +	 */
> > > > > > > > > +	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;
> > > > > > > >
> > > > > > > > One of the use cases for this is that during the allocation
> > > > > > > > we are not guaranteed to pick up the correct NUMA node.
> > > > > > > > This will get automatically fixed once the driver starts
> > > > > > > > recycling packets.
> > > > > > > >
> > > > > > > > I don't feel strongly about this, since I don't usually
> > > > > > > > like hiding value changes from the user, but would it make
> > > > > > > > sense to move this into __page_pool_alloc_pages_slow() and
> > > > > > > > change the pool->p.nid there?
> > > > > > > >
> > > > > > > > Since alloc_pages_node() will replace NUMA_NO_NODE with
> > > > > > > > numa_mem_id() regardless, why not store the actual node in
> > > > > > > > our page pool information? You can then skip this and check
> > > > > > > > pool->p.nid == numa_mem_id(), regardless of what's configured.
> > > > > > >
> > > > > > > This single code line is what lets drivers control the nid
> > > > > > > themselves. This is a feature that is only used by mlx5 AFAIK.
> > > > > > >
> > > > > > > I do think it is useful to allow the driver to "control"
> > > > > > > the nid, as pinning/preferring the pages to come from the
> > > > > > > NUMA node that the PCIe controller hardware is installed in
> > > > > > > does have benefits.
> > > > > >
> > > > > > Sure, you can keep the if statement as-is, it won't break
> > > > > > anything. Would we want to store the actual NUMA id in
> > > > > > pool->p.nid if the user selects 'NUMA_NO_NODE'?
> > > > >
> > > > > No. pool->p.nid should stay as NUMA_NO_NODE, because that makes it
> > > > > dynamic. If someone moves an RX IRQ to another CPU on another
> > > > > NUMA node, then this 'NUMA_NO_NODE' setting makes pages
> > > > > transition automatically.
> > > >
> > > > OK, this assumed that drivers were going to use
> > > > page_pool_nid_changed(), but with the current code we don't have to
> > > > force them to do that. Let's keep this as-is.
> > > >
> > > > I'll run a few more tests and wait in case Saeed gets a chance to
> > > > test it, and then send my Reviewed-by.
> >
> >
