From: "Li,Rongqing" <lirongqing@baidu.com>
Subject: Re: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE condition
Date: Mon, 16 Dec 2019 10:57:56 +0000
> -----Original Message-----
> From: Ilias Apalodimas [mailto:ilias.apalodimas@linaro.org]
> Sent: 16 December 2019 18:17
> To: Li,Rongqing <lirongqing@baidu.com>
> Cc: Yunsheng Lin <linyunsheng@huawei.com>; Jesper Dangaard Brouer
> <brouer@redhat.com>; Saeed Mahameed <saeedm@mellanox.com>;
> jonathan.lemon@gmail.com; netdev@vger.kernel.org; mhocko@kernel.org;
> peterz@infradead.org; Greg Kroah-Hartman <gregkh@linuxfoundation.org>;
> bhelgaas@google.com; linux-kernel@vger.kernel.org; Björn Töpel
> <bjorn.topel@intel.com>
> Subject: Re: Re: [PATCH][v2] page_pool: handle page recycle for
> NUMA_NO_NODE condition
>
> > > > Simply clearing the pool->alloc.cache when calling
> > > > page_pool_update_nid() seems better.
> > >
> > > How about the code below? The driver can configure p.nid to anything;
> > > it will be adjusted during NAPI polling, so IRQ migration will not be a
> > > problem, but it does add a check to the hot path.
> >
> > We'll have to check the impact on some high-speed (i.e. 100 Gbit)
> > interfaces before doing anything like that. Saeed's current patch runs
> > once per NAPI. This runs once per packet. The load might be measurable.
> > The READ_ONCE is needed in case all producers/consumers run on the
> > same CPU
>
> I meant different cpus!
>
Without the READ_ONCE check, pool->p.nid would be written unconditionally and its cache line would be dirtied on every call, even though it is not shared by multiple CPUs.

See Eric's patch:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=503978aca46124cd714703e180b9c8292ba50ba7
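
(Illustrative only, not part of the patch: a minimal sketch of the same check-before-write pattern with a hypothetical helper name, showing how the compare avoids dirtying a cache line whose value is already correct.)

/* Hypothetical helper, for illustration only.  Needs <linux/compiler.h>
 * for READ_ONCE/WRITE_ONCE/unlikely and <linux/topology.h> for
 * numa_mem_id().  The compare means the store, and the dirty cache line
 * it causes, only happens when the preferred node really changed.
 */
static inline void refresh_pref_nid(int *pref_nid)
{
	int nid = numa_mem_id();

	if (unlikely(READ_ONCE(*pref_nid) != nid))
		WRITE_ONCE(*pref_nid, nid);
}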
-Li

> > right?
> >
> > Thanks
> > /Ilias
> >
> > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > > index a6aefe989043..4374a6239d17 100644
> > > --- a/net/core/page_pool.c
> > > +++ b/net/core/page_pool.c
> > > @@ -108,6 +108,10 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
> > >  	if (likely(pool->alloc.count)) {
> > >  		/* Fast-path */
> > >  		page = pool->alloc.cache[--pool->alloc.count];
> > > +
> > > +		if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
> > > +			WRITE_ONCE(pool->p.nid, numa_mem_id());
> > > +
> > >  		return page;
> > >  	}
> > >  	refill = true;
> > > @@ -155,6 +159,10 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> > >  	if (pool->p.order)
> > >  		gfp |= __GFP_COMP;
> > >
> > > +
> > > +	if (unlikely(READ_ONCE(pool->p.nid) != numa_mem_id()))
> > > +		WRITE_ONCE(pool->p.nid, numa_mem_id());
> > > +
> > >  	/* FUTURE development:
> > >  	 *
> > >  	 * Current slow-path essentially falls back to single page
> > >
> > > Thanks
> > >
> > > -Li
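
For contrast with the per-packet check in the diff above, here is a minimal sketch of the once-per-NAPI approach discussed in the thread; the helper names are assumed from Saeed's series and are not part of this patch.

/* Sketch only, helper names assumed from Saeed's series: do the NUMA-node
 * check once per NAPI poll instead of once per recycled page, so the hot
 * recycle path stays untouched.
 */
static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
{
	if (unlikely(pool->p.nid != new_nid))
		page_pool_update_nid(pool, new_nid);
}

/* Hypothetical driver usage, called from the NAPI poll routine:
 *
 *	page_pool_nid_changed(rq->page_pool, numa_mem_id());
 */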