Subject: Re: [RFC PATCH 1/5] sched/fair: record overloaded cpus
From: Aubrey Li <>
Date: Sun, 27 Feb 2022 16:08:12 +0800
On 2/24/22 3:10 PM, Gautham R. Shenoy wrote:
> Hello Abel,
>
> (+ Aubrey Li, Srikar)
>
> On Thu, Feb 17, 2022 at 11:43:57PM +0800, Abel Wu wrote:
>> A CFS runqueue is considered overloaded when there is
>> more than one pullable non-idle task on it (since sched-
>> idle cpus are treated as idle cpus). Idle tasks are the
>> ones counted towards rq->cfs.idle_h_nr_running, i.e. tasks
>> either assigned the SCHED_IDLE policy or placed under
>> idle cgroups.
>>
>> Overloaded cfs rqs can cause performance issues for
>> both task types:
>>
>> - for latency-critical tasks like SCHED_NORMAL,
>>   time spent waiting in the rq will increase and
>>   result in higher pct99 latency, and
>>
>> - batch tasks may not be able to make full use
>>   of cpu capacity if sched-idle rqs exist, thus
>>   presenting poorer throughput.
>>
>> The mask of overloaded cpus is updated in the periodic tick
>> and in the idle path, on a per-LLC domain basis. This cpumask
>> will also be used in SIS as a filter, improving idle cpu
>> searching.
>
> This is an interesting approach to minimise the tail latencies by
> keeping track of the overloaded cpus in the LLC so that
> idle/sched-idle CPUs can pull from them. This approach contrasts
> with the following approaches that were previously tried:
>
> 1. Maintain the idle cpumask at the LLC level, by Aubrey Li:
>    https://lore.kernel.org/all/1615872606-56087-1-git-send-email-aubrey.li@intel.com/
>
> 2. Maintain the identity of the idle core itself at the LLC level, by Srikar:
>    https://lore.kernel.org/lkml/20210513074027.543926-3-srikar@linux.vnet.ibm.com/
>
> There have been concerns in the past about having to update the shared
> mask/counter at regular intervals. Srikar, Aubrey, any thoughts on this?

https://lkml.org/lkml/2022/2/7/1129
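
[Editor's note: for readers unfamiliar with the filtering idea being discussed, below is a minimal user-space sketch of the concept, assuming a shared per-LLC bitmask. It is not the kernel patch itself: the mask, the update_overloaded() and select_idle_cpu_filtered() helpers, and all parameters are invented for this illustration. Each CPU sets its bit when its runqueue holds more than one pullable non-idle task, and the idle-CPU search skips CPUs whose bit is set.]

/*
 * Illustrative sketch of the "overloaded cpumask" idea (NOT the kernel
 * implementation; names and structure are invented for this example).
 *
 * A CPU marks itself overloaded when it has more than one pullable
 * non-idle task.  The idle-CPU search then skips overloaded CPUs,
 * shrinking the scan the way the proposed SIS filter would.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8

/* One bit per CPU in the LLC: set => more than one non-idle runnable task. */
static uint64_t overloaded_mask;

/* In the proposal this happens in the tick and the idle path. */
static void update_overloaded(int cpu, int nr_running, int idle_h_nr_running)
{
	/* Non-idle tasks = runnable tasks minus SCHED_IDLE/idle-cgroup tasks. */
	bool overloaded = (nr_running - idle_h_nr_running) > 1;

	if (overloaded)
		overloaded_mask |= 1ULL << cpu;
	else
		overloaded_mask &= ~(1ULL << cpu);
}

/* Search for an idle CPU, skipping those flagged as overloaded. */
static int select_idle_cpu_filtered(const bool *cpu_is_idle)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (overloaded_mask & (1ULL << cpu))
			continue;	/* filtered out: known to be busy */
		if (cpu_is_idle[cpu])
			return cpu;
	}
	return -1;
}

int main(void)
{
	bool idle[NR_CPUS] = { false, false, false, true, false, false, true, false };

	/* CPUs 0-2 report three runnable non-idle tasks; the rest report one. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		update_overloaded(cpu, cpu < 3 ? 3 : 1, 0);

	printf("picked cpu %d\n", select_idle_cpu_filtered(idle));
	return 0;
}

[Editor's note: because the shared mask is only refreshed at tick/idle time, it can be momentarily stale; that staleness-versus-update-cost trade-off is the same class of concern raised above about updating a shared mask/counter at regular intervals.]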