From: Vincent Guittot
Subject: [PATCH 0/5] sched/fair: rework the CFS load balance
Date: 2019-07-19

Several wrong task placements have been reported with the current load
balance algorithm, but their fixes are not always straightforward and
end up using biased values to force migrations. A cleanup and rework
of the load balance code will help to handle such use cases and make
it possible to fine tune the behavior of the scheduler for other cases.

Patch 1 has already been sent separately; it only consolidates the asym
packing policy in one place and helps the review of the changes in
load_balance().

Patch 2 renames sum_nr_running to sum_h_nr_running, since what it sums
is cfs_rq->h_nr_running, and also adds the sum of nr_running.
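
For reference, the distinction the rename makes explicit: h_nr_running
counts CFS tasks through the task-group hierarchy, while rq->nr_running
also includes tasks of the other scheduling classes. A minimal sketch
of the two counters kept in the per-group stats (my own simplification,
not the full struct sg_lb_stats from kernel/sched/fair.c):

/*
 * Sketch only: the two counters the load balancer aggregates per
 * sched_group. sum_nr_running sums rq->nr_running (tasks of all
 * sched classes), sum_h_nr_running sums cfs_rq->h_nr_running (CFS
 * tasks only, walked through the task-group hierarchy).
 */
struct sg_lb_stats_sketch {
	unsigned int sum_nr_running;	/* all sched classes */
	unsigned int sum_h_nr_running;	/* CFS tasks only */
};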

Patch 3 reworks the load_balance() algorithm and fixes some wrong task
placements while trying to stay conservative.
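
The gist of the rework: instead of comparing biased load values, the
groups are first classified, and the classification then decides what
load_balance() tries to move: an amount of load, an amount of
utilization, or a number of tasks. A rough sketch of the idea (the
names below are approximate; see the patch for the real definitions):

/* Group states, roughly from least to most imbalanced. */
enum group_type {
	group_has_spare = 0,
	group_fully_busy,
	group_misfit_task,
	group_imbalanced,
	group_overloaded,
};

/* What to migrate once the busiest group has been classified. */
enum migration_type {
	migrate_load,	/* move some load: groups are overloaded */
	migrate_util,	/* move some utilization */
	migrate_task,	/* move a number of tasks */
	migrate_misfit,	/* move a task too big for its CPU */
};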

Patch 4 replaces runnable_load with load, now that the value is only
used when a group is overloaded.
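
In other words, where the balancer used to sum runnable_load_avg per
CPU, it can now sum load_avg, because after patch 3 this value is only
consulted for overloaded groups. A self-contained sketch of the
substitution, using a stand-in for the kernel's sched_avg so it
compiles on its own (helper names are illustrative):

/* Stand-in for the relevant sched_avg fields, sketch only. */
struct sched_avg_sketch {
	unsigned long load_avg;
	unsigned long runnable_load_avg;
};

/* Before: biased toward tasks that are currently runnable. */
static unsigned long cpu_runnable_load(struct sched_avg_sketch *sa)
{
	return sa->runnable_load_avg;
}

/* After: plain load, only looked at when the group is overloaded. */
static unsigned long cpu_load(struct sched_avg_sketch *sa)
{
	return sa->load_avg;
}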

Patch 5 improves the spread of tasks at the 1st scheduling level.
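
The intuition: when no CPU is overloaded, balancing the number of
running tasks rather than load spreads tasks evenly across the domain.
A self-contained sketch of the imbalance computation in that case (my
own simplification, not the actual kernel code):

/*
 * Sketch only: when both the local and the busiest group have spare
 * capacity, move half the difference in task counts so both sides
 * end up with an even number of tasks.
 */
static unsigned int task_count_imbalance(unsigned int busiest_nr,
					 unsigned int local_nr)
{
	return busiest_nr > local_nr ? (busiest_nr - local_nr) / 2 : 0;
}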

Some benchmark results, based on 8 iterations of each test:
- small arm64 dual quad-core system

             tip/sched/core       w/ this patchset     improvement
schedpipe    53326 +/-0.32%       54494 +/-0.33%       (+2.19%)

hackbench
 1 groups    0.914 +/-1.82%       0.903 +/-2.10%       (+1.24%)

- large arm64 2-node / 224-core system

             tip/sched/core       w/ this patchset     improvement
schedpipe    123373.625 +/-0.88%  124277.125 +/-1.34%  (+0.73%)

hackbench -l (256000/#grp) -g #grp
  1 groups   14.886 +/-2.31%      14.504 +/-2.54%      (+2.56%)
  4 groups   5.725  +/-7.26%      5.332  +/-9.05%      (+6.85%)
 16 groups   3.041  +/-0.99%      3.221  +/-0.45%      (-5.92%)
 32 groups   2.859  +/-1.04%      2.812  +/-1.25%      (+1.64%)
 64 groups   2.740  +/-1.33%      2.662  +/-1.55%      (+2.84%)
128 groups   3.090  +/-13.22%     2.808  +/-12.90%     (+9.11%)
256 groups   3.629  +/-21.20%     3.063  +/-12.86%     (+15.60%)

dbench
  1 groups   337.703  +/-0.13%    333.729  +/-0.40%    (-1.18%)
  4 groups   944.095  +/-1.09%    967.050  +/-0.96%    (+2.43%)
 16 groups   1923.760 +/-3.62%    1981.926 +/-0.48%    (+3.02%)
 32 groups   2243.161 +/-8.40%    2453.247 +/-0.56%    (+9.37%)
 64 groups   2351.472 +/-10.64%   2621.137 +/-1.97%    (+11.47%)
128 groups   2070.117 +/-4.87%    2310.451 +/-2.45%    (+11.61%)
256 groups   1277.402 +/-3.03%    1691.865 +/-6.34%    (+32.45%)
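
A note on reading the tables: the improvement column is signed so that
positive is always better. The hackbench figures are durations, so a
lower time counts as an improvement, while the schedpipe and dbench
figures improve when they go up. A small standalone helper showing how
such a column can be recomputed (my own code, not part of the
patchset):

#include <stdio.h>

/* Signed improvement in percent; positive means better. */
static double improvement(double base, double patched,
			  int higher_is_better)
{
	double gain = higher_is_better ? patched - base
				       : base - patched;
	return 100.0 * gain / base;
}

int main(void)
{
	/* dbench, 256 groups: 1277.402 -> 1691.865 */
	printf("%+.2f%%\n", improvement(1277.402, 1691.865, 1));
	/* prints +32.45%, matching the table above */
	return 0;
}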

tip/sched/core sha1:
af24bde8df20('sched/uclamp: Add uclamp support to energy_compute()')

Vincent Guittot (5):
sched/fair: clean up asym packing
sched/fair: rename sum_nr_running to sum_h_nr_running
sched/fair: rework load_balance
sched/fair: use load instead of runnable load
sched/fair: evenly spread tasks when not overloaded

kernel/sched/fair.c | 614 +++++++++++++++++++++++++++-------------------------
1 file changed, 323 insertions(+), 291 deletions(-)

--
2.7.4
