From: Vincent Guittot <>
Date: Fri, 25 Aug 2017 15:39:25 +0200
Subject: Re: [PATCH v2 4/5] sched/fair: Fix use of find_idlest_group when no groups are allowed
On 25 August 2017 at 12:16, Brendan Jackman <brendan.jackman@arm.com> wrote:
> When p is allowed on none of the CPUs in the sched_domain, we
> currently return NULL from find_idlest_group, and pointlessly
> continue the search on lower sched_domain levels (where p is also not
> allowed) before returning prev_cpu regardless (as we have not updated
> new_cpu).
>
> Add an explicit check for this case, and a comment to
> find_idlest_group. Now when find_idlest_group returns NULL, it always
> means that the local group is allowed and idlest.
>
> Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Josef Bacik <josef@toxicpanda.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Morten Rasmussen <morten.rasmussen@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0ce75bbcde45..26080917ff8d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5380,6 +5380,8 @@ static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
>  /*
>   * find_idlest_group finds and returns the least busy CPU group within the
>   * domain.
> + *
> + * Assumes p is allowed on at least one CPU in sd.
>   */
>  static struct sched_group *
>  find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> @@ -5567,6 +5569,9 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
>  {
> 	int new_cpu = prev_cpu;
>
> +	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
> +		return prev_cpu;
> +
> 	while (sd) {
> 		struct sched_group *group;
> 		struct sched_domain *tmp;
> --
> 2.14.1
>
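For reference, a minimal userspace sketch of the early-bailout logic being added. This is not kernel code: the sketch_domain/sketch_task structures and plain bitmasks are stand-ins for sched_domain_span() and p->cpus_allowed, used only to illustrate why checking the intersection once at the top lets us skip the group walk entirely when the task is not allowed anywhere in the domain.

/*
 * Standalone sketch (assumed names, not kernel code): if the task's
 * allowed CPUs do not intersect the domain span, return prev_cpu at
 * once instead of walking the groups.
 */
#include <stdio.h>

/* Hypothetical stand-ins for sched_domain_span() and p->cpus_allowed. */
struct sketch_domain { unsigned long span; };
struct sketch_task   { unsigned long cpus_allowed; };

static int sketch_find_idlest_cpu(const struct sketch_domain *sd,
				  const struct sketch_task *p,
				  int prev_cpu)
{
	/* Mirrors the cpumask_intersects() check added by the patch. */
	if (!(sd->span & p->cpus_allowed))
		return prev_cpu;

	/* ... group walk would go here; omitted in this sketch ... */
	return 0; /* pretend CPU 0 turned out to be the idlest allowed CPU */
}

int main(void)
{
	struct sketch_domain sd = { .span = 0x0f };         /* CPUs 0-3 */
	struct sketch_task   p  = { .cpus_allowed = 0xf0 }; /* CPUs 4-7 */

	/* Task is allowed nowhere in the domain: keep prev_cpu (5). */
	printf("chosen cpu = %d\n", sketch_find_idlest_cpu(&sd, &p, 5));
	return 0;
}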