Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3
On Wed, May 18, 2022 at 02:26:08PM -0300, Marcelo Tosatti wrote:
> On Tue, May 17, 2022 at 07:35:07PM -0400, Qian Cai wrote:
> > On Thu, May 12, 2022 at 09:50:37AM +0100, Mel Gorman wrote:
> > > Changelog since v2
> > > o More conversions from page->lru to page->[pcp_list|buddy_list]
> > > o Additional test results in changelogs
> > >
> > > Changelog since v1
> > > o Fix unsafe RT locking scheme
> > > o Use spin_trylock on UP PREEMPT_RT
> > >
> > > This series has the same intent as Nicolas' series "mm/page_alloc: Remote
> > > per-cpu lists drain support" -- avoid interference of a high priority
> > > task due to a workqueue item draining per-cpu page lists. While many
> > > workloads can tolerate a brief interruption, it may cause a real-time
> > > task running on a NOHZ_FULL CPU to miss a deadline and, at minimum,
> > > the draining is non-deterministic.
> > >
> > > Currently an IRQ-safe local_lock protects the page allocator per-cpu lists.
> > > The local_lock on its own prevents migration and the IRQ disabling protects
> > > from corruption due to an interrupt arriving while a page allocation is
> > > in progress. The locking is inherently unsafe for remote access unless
> > > the CPU is hot-removed.
> > >
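For illustration, the pre-series fast path looks roughly like this (a
sketch with names approximated from mm/page_alloc.c around v5.18, not
taken verbatim from the patches):

    struct pagesets {
            local_lock_t lock;
    };
    static DEFINE_PER_CPU(struct pagesets, pagesets) = {
            .lock = INIT_LOCAL_LOCK(lock),
    };

    /* Freeing to the per-cpu list: local_lock pins the task to this CPU
     * and the irqsave masks interrupts, so only local, non-interrupted
     * code ever touches the lists. A remote CPU has no safe way in. */
    void free_unref_page(struct page *page, unsigned int order)
    {
            unsigned long flags;

            local_lock_irqsave(&pagesets.lock, flags);
            /* ... manipulate this_cpu_ptr(zone->per_cpu_pageset) lists ... */
            local_unlock_irqrestore(&pagesets.lock, flags);
    }
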
> > > This series adjusts the locking. A spinlock is added to struct
> > > per_cpu_pages to protect the list contents while local_lock_irq continues
> > > to prevent migration and IRQ reentry. This allows a remote CPU to safely
> > > drain a remote per-cpu list.
> > >
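A minimal sketch of the adjusted scheme, again with names assumed rather
than copied from the actual patches:

    struct per_cpu_pages {
            spinlock_t lock;        /* protects the lists below */
            int count;              /* pages on the lists */
            /* ... */
            struct list_head lists[NR_PCP_LISTS];
    };

    /* Local fast path: local_lock_irqsave still pins the CPU and masks
     * IRQs; the new spinlock additionally serialises against a remote
     * drainer. */
    local_lock_irqsave(&pagesets.lock, flags);
    pcp = this_cpu_ptr(zone->per_cpu_pageset);
    spin_lock(&pcp->lock);
    /* ... add/remove pages on pcp->lists ... */
    spin_unlock(&pcp->lock);
    local_unlock_irqrestore(&pagesets.lock, flags);

    /* Remote drain: take only the target CPU's pcp spinlock; no
     * workqueue item needs to run on that CPU. */
    spin_lock_irqsave(&pcp->lock, flags);
    /* ... free pcp->count pages back to the buddy lists ... */
    spin_unlock_irqrestore(&pcp->lock, flags);
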
> > > This series is a partial series. Follow-on work should allow the
> > > local_irq_save to be converted to a local_irq to avoid IRQs being
> > > disabled/enabled in most cases. Consequently, there are some TODO comments
> > > highlighting the places that would change if local_irq was used. However,
> > > there are enough corner cases that it deserves a series on its own
> > > separated by one kernel release and the priority right now is to avoid
> > > interference of high priority tasks.
> >
> > Reverting the whole series fixed an issue where offlining a memory
> > section blocks for hours on today's linux-next tree.
> >
> > __wait_rcu_gp
> > synchronize_rcu at kernel/rcu/tree.c:3915
> > lru_cache_disable at mm/swap.c:886
> > __alloc_contig_migrate_range at mm/page_alloc.c:9078
> > isolate_single_pageblock at mm/page_isolation.c:405
> > start_isolate_page_range
> > offline_pages
> > memory_subsys_offline
> > device_offline
> > online_store
> > dev_attr_store
> > sysfs_kf_write
> > kernfs_fop_write_iter
> > new_sync_write
> > vfs_write
> > ksys_write
> > __arm64_sys_write
> > invoke_syscall
> > el0_svc_common.constprop.0
> > do_el0_svc
> > el0_svc
> > el0t_64_sync_handler
> > el0t_64_sync
> >
> > For full disclosure, I have also reverted the commit 0d523026abd4
> > ("mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked()"), so the
> > series can be reverted cleanly. But I can't see how commit
> > 0d523026abd4 could cause this issue at all.
>
> Hi Qian,
>
> The issue is probably due to lack of the following:
>
> https://lore.kernel.org/linux-mm/YmrWK%2FKoU1zrAxPI@fuller.cnet/
>
> Can you please give the patch on the URL a try?
>
> Thanks!

Oops, sorry, I don't think the above URL has anything to do with this
problem.

