From: Vincent Guittot <>
Date: Mon, 6 Jan 2020 14:55:00 +0100
Subject: Re: [PATCH] sched, fair: Allow a small degree of load imbalance between SD_NUMA domains v2
On Fri, 3 Jan 2020 at 15:31, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Mon, Dec 23, 2019 at 02:31:40PM +0100, Vincent Guittot wrote:
> > > +	/* Consider allowing a small imbalance between NUMA groups */
> > > +	if (env->sd->flags & SD_NUMA) {
> > > +		unsigned int imbalance_adj, imbalance_max;
> > > +
> > > +		/*
> > > +		 * imbalance_adj is the allowable degree of imbalance
> > > +		 * to exist between two NUMA domains. It's calculated
> > > +		 * relative to imbalance_pct with a minimum of two
> > > +		 * tasks or idle CPUs. The choice of two is due to
> > > +		 * the most basic case of two communicating tasks
> > > +		 * that should remain on the same NUMA node after
> > > +		 * wakeup.
> > > +		 */
> > > +		imbalance_adj = max(2U, (busiest->group_weight *
> > > +				(env->sd->imbalance_pct - 100) / 100) >> 1);
> > > +
> > > +		/*
> > > +		 * Ignore small imbalances unless the busiest sd has
> > > +		 * almost half as many busy CPUs as there are
> > > +		 * available CPUs in the busiest group. Note that
> > > +		 * it is not exactly half as imbalance_adj must be
> > > +		 * accounted for or the two domains do not converge
> > > +		 * as equally balanced if the number of busy tasks is
> > > +		 * roughly the size of one NUMA domain.
> > > +		 */
> > > +		imbalance_max = (busiest->group_weight >> 1) + imbalance_adj;
> > > +		if (env->imbalance <= imbalance_adj &&
> >
> > AFAICT, env->imbalance is undefined there. I have tried your patch
> > with the below instead
> >
> Even when fixed, other corner cases were hit for parallelised loads that
> benefit from spreading early. At the moment I'm testing a variant of the
> first approach except it adjusts small imbalances at low utilisation and
> otherwise balances as normal. It appears to work for basic communicating
> tasks at relatively low utilisation that fit within a node without
> obviously impacting higher utilisation non-communicating workloads, but
> more testing time will be needed to be sure.
>
> It's still based on sum_nr_running as a cut-off instead of idle_cpus as,
> at low utilisation, there is not much of a material difference in the
> cut-offs given that either approach can be misleading depending on the
> load of the individual tasks, any cpu binding configurations and the
> degree the tasks are memory-bound.
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ba749f579714..58ba2f0c6363 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8648,10 +8648,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  	/*
>  	 * Try to use spare capacity of local group without overloading it or
>  	 * emptying busiest.
> -	 * XXX Spreading tasks across NUMA nodes is not always the best policy
> -	 * and special care should be taken for SD_NUMA domain level before
> -	 * spreading the tasks. For now, load_balance() fully relies on
> -	 * NUMA_BALANCING and fbq_classify_group/rq to override the decision.
>  	 */
>  	if (local->group_type == group_has_spare) {
>  		if (busiest->group_type > group_fully_busy) {
> @@ -8691,16 +8686,39 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  			env->migration_type = migrate_task;
>  			lsub_positive(&nr_diff, local->sum_nr_running);
>  			env->imbalance = nr_diff >> 1;
> -			return;
> -		}
> +		} else {
>
> -		/*
> -		 * If there is no overload, we just want to even the number of
> -		 * idle cpus.
> -		 */
> -		env->migration_type = migrate_task;
> -		env->imbalance = max_t(long, 0, (local->idle_cpus -
> +			/*
> +			 * If there is no overload, we just want to even the number of
> +			 * idle cpus.
> +			 */
> +			env->migration_type = migrate_task;
> +			env->imbalance = max_t(long, 0, (local->idle_cpus -
>  						 busiest->idle_cpus) >> 1);
> +		}
> +
> +		/* Consider allowing a small imbalance between NUMA groups */
> +		if (env->sd->flags & SD_NUMA) {
> +			long imbalance_adj, imbalance_max;
> +
> +			/*
> +			 * imbalance_adj is the allowable degree of imbalance
> +			 * to exist between two NUMA domains. imbalance_pct
> +			 * is used to estimate the number of active tasks
> +			 * needed before memory bandwidth may be as important
> +			 * as memory locality.
> +			 */
> +			imbalance_adj = (100 / (env->sd->imbalance_pct - 100)) - 1;
This looks weird to me because you use imbalance_pct, which is only
meaningful when comparing ratios, to define a number that is then compared
against a number of tasks without taking the weight of the node into
account. So whatever the node size, 32 or 128 CPUs, imbalance_adj will be
the same: 3 with the default NUMA-level imbalance_pct, which is 125 AFAICT.
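To make that concrete, here is a small standalone calculation (illustrative
only, not kernel code; the 32- and 128-CPU node widths are made-up examples)
comparing the proposed formula with the group_weight-based one quoted at the
top of the thread:

#include <stdio.h>

int main(void)
{
	unsigned int imbalance_pct = 125;	/* default at the NUMA level */

	/* proposed formula: independent of the node width */
	unsigned int imbalance_adj = (100 / (imbalance_pct - 100)) - 1;

	/* v2 formula (ignoring the max(2U, ...) floor) for two node widths */
	unsigned int adj_32  = (32  * (imbalance_pct - 100) / 100) >> 1;
	unsigned int adj_128 = (128 * (imbalance_pct - 100) / 100) >> 1;

	/* prints "3 4 16": the proposed value stays 3 for any node size */
	printf("%u %u %u\n", imbalance_adj, adj_32, adj_128);
	return 0;
}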
> +
> +			/*
> +			 * Allow small imbalances when the busiest group has
> +			 * low utilisation.
> +			 */
> +			imbalance_max = imbalance_adj << 1;
Why do you add this shift?
So according to the above, imbalance_max = 6 regardless of the size of the node.
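For reference, with the default NUMA-level imbalance_pct of 125 the two
values work out as:

	imbalance_adj = (100 / (125 - 100)) - 1 = 3
	imbalance_max = 3 << 1                  = 6

so the "if (busiest->sum_nr_running < imbalance_max)" cut-off quoted further
down stops forgiving the imbalance once six tasks are running in the busiest
group, whether that group spans 32 or 128 CPUs.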
Regards,
Vincent
> +			if (busiest->sum_nr_running < imbalance_max)
> +				env->imbalance -= min(env->imbalance, imbalance_adj);
> +		}
> +
>  		return;
>  	}
>
>
> --
> Mel Gorman
> SUSE Labs
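For completeness, below is a self-contained sketch of the whole no-overload
path as modified by the diff above, assuming a pair of 32-CPU nodes and the
default imbalance_pct of 125; the helper name and the task counts are made
up purely for illustration and this is not kernel code:

#include <stdio.h>

/*
 * Sketch of the proposed behaviour: even out idle CPUs, then forgive a
 * small NUMA imbalance while the busiest group runs fewer than
 * imbalance_max tasks.
 */
static long sketch_imbalance(long local_idle, long busiest_idle,
			     long busiest_running, int imbalance_pct)
{
	long imbalance = local_idle - busiest_idle;

	/* "even the number of idle cpus" step */
	imbalance = imbalance > 0 ? imbalance >> 1 : 0;

	/* proposed SD_NUMA adjustment */
	long imbalance_adj = (100 / (imbalance_pct - 100)) - 1;	/* 3 for pct=125 */
	long imbalance_max = imbalance_adj << 1;		/* 6 for pct=125 */

	if (busiest_running < imbalance_max)
		imbalance -= imbalance < imbalance_adj ? imbalance : imbalance_adj;

	return imbalance;
}

int main(void)
{
	/* 32-CPU nodes: 4 tasks on the busiest node, local node idle */
	printf("%ld\n", sketch_imbalance(32, 28, 4, 125));	/* 0: imbalance forgiven */

	/* 32-CPU nodes: 10 tasks on the busiest node, local node idle */
	printf("%ld\n", sketch_imbalance(32, 22, 10, 125));	/* 5: spread as usual */
	return 0;
}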