 
Subject: Re: [PATCH 3/3] task: Clean house now that tasks on the runqueue are rcu protected
Hi "Eric,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[cannot apply to v5.3-rc7 next-20190902]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Eric-W-Biederman/task-Making-tasks-on-the-runqueue-rcu-protected/20190903-130546
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.1-rc1-7-g2b96cd8-dirty
        make ARCH=x86_64 allmodconfig
        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add the following tag
Reported-by: kbuild test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

>> kernel/sched/membarrier.c:74:21: sparse: sparse: incompatible types in comparison expression (different address spaces):
>> kernel/sched/membarrier.c:74:21: sparse: struct task_struct [noderef] <asn:4> *
>> kernel/sched/membarrier.c:74:21: sparse: struct task_struct *
kernel/sched/membarrier.c:153:21: sparse: sparse: incompatible types in comparison expression (different address spaces):
kernel/sched/membarrier.c:153:21: sparse: struct task_struct [noderef] <asn:4> *
kernel/sched/membarrier.c:153:21: sparse: struct task_struct *

vim +74 kernel/sched/membarrier.c

  32	
  33	static int membarrier_global_expedited(void)
  34	{
  35		int cpu;
  36		bool fallback = false;
  37		cpumask_var_t tmpmask;
  38	
  39		if (num_online_cpus() == 1)
  40			return 0;
  41	
  42		/*
  43		 * Matches memory barriers around rq->curr modification in
  44		 * scheduler.
  45		 */
  46		smp_mb();	/* system call entry is not a mb. */
  47	
  48		/*
  49		 * Expedited membarrier commands guarantee that they won't
  50		 * block, hence the GFP_NOWAIT allocation flag and fallback
  51		 * implementation.
  52		 */
  53		if (!zalloc_cpumask_var(&tmpmask, GFP_NOWAIT)) {
  54			/* Fallback for OOM. */
  55			fallback = true;
  56		}
  57	
  58		cpus_read_lock();
  59		for_each_online_cpu(cpu) {
  60			struct task_struct *p;
  61	
  62			/*
  63			 * Skipping the current CPU is OK even through we can be
  64			 * migrated at any point. The current CPU, at the point
  65			 * where we read raw_smp_processor_id(), is ensured to
  66			 * be in program order with respect to the caller
  67			 * thread. Therefore, we can skip this CPU from the
  68			 * iteration.
  69			 */
  70			if (cpu == raw_smp_processor_id())
  71				continue;
  72	
  73			rcu_read_lock();
> 74			p = rcu_dereference(cpu_rq(cpu)->curr);
  75			if (p && p->mm && (atomic_read(&p->mm->membarrier_state) &
  76					   MEMBARRIER_STATE_GLOBAL_EXPEDITED)) {
  77				if (!fallback)
  78					__cpumask_set_cpu(cpu, tmpmask);
  79				else
  80					smp_call_function_single(cpu, ipi_mb, NULL, 1);
  81			}
  82			rcu_read_unlock();
  83		}
  84		if (!fallback) {
  85			preempt_disable();
  86			smp_call_function_many(tmpmask, ipi_mb, NULL, 1);
  87			preempt_enable();
  88			free_cpumask_var(tmpmask);
  89		}
  90		cpus_read_unlock();
  91	
  92		/*
  93		 * Memory barrier on the caller thread _after_ we finished
  94		 * waiting for the last IPI. Matches memory barriers around
  95		 * rq->curr modification in scheduler.
  96		 */
  97		smp_mb();	/* exit from system call is not a mb */
  98		return 0;
  99	}
 100	
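For context on the diagnostic itself: sparse prints "incompatible types in comparison expression (different address spaces)" when rcu_dereference() is handed a pointer whose declaration lacks the __rcu annotation, because the macro's internal check compares the pointer against a __rcu-qualified cast of itself. Below is a self-contained toy reduction, with hypothetical names (toy_rq, toy_rcu_dereference(), and so on, none of which exist in the kernel), that should reproduce the same class of warning when run through sparse:

/*
 * toy_rcu_sparse.c - hypothetical, self-contained reduction (not kernel
 * code; every name is made up for illustration) of how the warning at
 * membarrier.c:74 arises.
 *
 * Check with:  sparse toy_rcu_sparse.c
 */
#ifdef __CHECKER__
# define __rcu		__attribute__((noderef, address_space(4)))
# define __force	__attribute__((force))
#else
# define __rcu
# define __force
#endif

struct task_struct { int dummy; };

/* Reduced stand-in for the scheduler's runqueue. */
struct toy_rq {
	struct task_struct *curr_plain;		/* like rq->curr without annotation */
	struct task_struct __rcu *curr_rcu;	/* annotated as rcu_dereference() expects */
};

/* Mimics the sparse type check done inside rcu_dereference(). */
#define toy_rcu_check_sparse(p) \
	((void)(((typeof(*(p)) __rcu *)(p)) == (p)))

/* Mimics rcu_dereference(): sparse-check the pointer, then strip __rcu. */
#define toy_rcu_dereference(p) ({		\
	toy_rcu_check_sparse(p);		\
	(typeof(*(p)) __force *)(p);		\
})

struct task_struct *read_curr(struct toy_rq *rq)
{
	/* Expected to warn like membarrier.c:74: the field is not __rcu. */
	struct task_struct *bad = toy_rcu_dereference(rq->curr_plain);

	/* Expected to be sparse-clean: the field carries __rcu. */
	struct task_struct *good = toy_rcu_dereference(rq->curr_rcu);

	return bad ? bad : good;
}

Under that reading, the new rcu_dereference(cpu_rq(cpu)->curr) call would only stay sparse-clean if the curr field in struct rq is itself declared with __rcu; otherwise every such dereference site will trip this warning.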

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
