From: Vincent Guittot <>
Date: Fri, 25 Aug 2017 15:39:50 +0200
Subject: Re: [PATCH v2 3/5] sched/fair: Fix find_idlest_group when local group is not allowed
On 25 August 2017 at 12:16, Brendan Jackman <brendan.jackman@arm.com> wrote:
> When the local group is not allowed we do not modify this_*_load from
> their initial value of 0. That means that the load checks at the end
> of find_idlest_group cause us to incorrectly return NULL. Fixing the
> initial values to ULONG_MAX means we will instead return the idlest
> remote group in that case.
>
> Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Josef Bacik <josef@toxicpanda.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Morten Rasmussen <morten.rasmussen@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4ccecbf825bf..0ce75bbcde45 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5387,8 +5387,9 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
>  {
>  	struct sched_group *idlest = NULL, *group = sd->groups;
>  	struct sched_group *most_spare_sg = NULL;
> -	unsigned long min_runnable_load = ULONG_MAX, this_runnable_load = 0;
> -	unsigned long min_avg_load = ULONG_MAX, this_avg_load = 0;
> +	unsigned long min_runnable_load = ULONG_MAX;
> +	unsigned long this_runnable_load = ULONG_MAX;
> +	unsigned long min_avg_load = ULONG_MAX, this_avg_load = ULONG_MAX;
>  	unsigned long most_spare = 0, this_spare = 0;
>  	int load_idx = sd->forkexec_idx;
>  	int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
> --
> 2.14.1
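
A minimal standalone sketch of the effect, for reference: this is not the
kernel code itself, and the simplified comparison, helper name and the load
and imbalance values below are made up for illustration only.

/*
 * Toy model of the final decision in find_idlest_group(): keep the
 * local group unless the idlest remote group is less loaded by more
 * than the allowed imbalance.
 */
#include <limits.h>
#include <stdio.h>

static int stay_local(unsigned long this_runnable_load,
		      unsigned long min_runnable_load,
		      unsigned long imbalance)
{
	return this_runnable_load < min_runnable_load + imbalance;
}

int main(void)
{
	unsigned long remote_load = 512, imbalance = 128;

	/* Old init: local group disallowed, so this_runnable_load stays 0,
	 * the local group looks perfectly idle and the remote group is
	 * (wrongly) rejected. */
	printf("init 0:         stay_local=%d\n",
	       stay_local(0, remote_load, imbalance));

	/* Patched init: ULONG_MAX makes the disallowed local group look
	 * maximally loaded, so the idlest remote group is picked instead. */
	printf("init ULONG_MAX: stay_local=%d\n",
	       stay_local(ULONG_MAX, remote_load, imbalance));

	return 0;
}

With the 0 initialisation the toy check keeps the (disallowed) local group;
with ULONG_MAX it picks the remote group, which is what the patch is after.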