Subject: Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU
On Wed, Jan 13, 2021 at 09:28:13PM +0800, Lai Jiangshan wrote:
> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
> > * of all workers first and then clear UNBOUND. As we're called
> > * from CPU_ONLINE, the following shouldn't fail.
> > */
> > - for_each_pool_worker(worker, pool)
> > + for_each_pool_worker(worker, pool) {
> > WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> > pool->attrs->cpumask) < 0);
> > + kthread_set_per_cpu(worker->task, true);
>
> Will a schedule break affinity in the middle of these two lines, due to
> patch4 allowing it, and result in Paul's reported splat?
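
Spelling out the window being asked about, purely as an illustrative
sketch (the helper is hypothetical; only the two calls come from the
quoted hunk):

	/* Hypothetical sketch of the suspected race, not code from the series. */
	static void rebind_worker_racy(struct worker *worker,
				       struct worker_pool *pool)
	{
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);
		/*
		 * If the worker gets scheduled here, it is affine to the
		 * pool but not yet marked KTHREAD_IS_PER_CPU, so with
		 * patch4 a concurrent hotplug may still break its
		 * affinity, which would explain the splat Paul reported.
		 */
		kthread_set_per_cpu(worker->task, true);
	}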

So something like the below _should_ work, except I'm seeing odd WARNs.
I'll prod at it some more.

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2371,6 +2371,7 @@ static int worker_thread(void *__worker)
/* tell the scheduler that this is a workqueue worker */
set_pf_worker(true);
woke_up:
+ kthread_parkme();
raw_spin_lock_irq(&pool->lock);

/* am I supposed to die? */
@@ -2428,6 +2429,7 @@ static int worker_thread(void *__worker)
move_linked_works(work, &worker->scheduled, NULL);
process_scheduled_works(worker);
}
+ kthread_parkme();
} while (keep_working(pool));

worker_set_flags(worker, WORKER_PREP);
@@ -4978,9 +4980,9 @@ static void rebind_workers(struct worker
* from CPU_ONLINE, the following shouldn't fail.
*/
for_each_pool_worker(worker, pool) {
- WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
- pool->attrs->cpumask) < 0);
+ kthread_park(worker->task);
kthread_set_per_cpu(worker->task, true);
+ kthread_unpark(worker->task);
}

raw_spin_lock_irq(&pool->lock);
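
My reading of why the parking closes the window (kthread_park() and
kthread_unpark() are the stock kthread APIs; the comments are mine):

	/* Parking waits until the worker sits in one of the
	 * kthread_parkme() calls added above, so the flag changes
	 * while the task cannot run. */
	kthread_park(worker->task);
	kthread_set_per_cpu(worker->task, true);
	/* Unpark re-binds a KTHREAD_IS_PER_CPU task before it runs again. */
	kthread_unpark(worker->task);

Since the flag is flipped while the task is parked, the scheduler never
observes a half-updated worker, and the explicit set_cpus_allowed_ptr()
can go because unpark restores the CPU binding.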