From: "Rajendran, Jaishankar" <>
Subject: Core Scheduling - Concurrent VMs
Date: Mon, 26 Jul 2021 05:41:17 +0000
Please refer to the experiments below, performed using core scheduling with concurrent VMs. We found that the benchmark scores measured in the two VMs degrade after core scheduling is enabled. Both the host and the guests have core scheduling enabled.
Environment:
++++++++++
Platform: CML NUC - i5-10600 @ 3.3 GHz
Core assignment for host OS: 4 cores, 4 threads (i.e. 8 logical cores)
Host OS: Ubuntu 20.1 with Chromium kernel 5.4
Guest OS: Android running as a VM under qemu/kvm with Chromium kernel 5.4
Each VM is assigned 4 cores and 4 threads (i.e. 8 logical cores)
Kernel Config:
+++++++++++
Host and guest OS kernels are built with CONFIG_SCHED_CORE=y
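For reference, this is how we confirm the option on a running system (a minimal sketch; it assumes the config is exposed via /proc/config.gz or /boot/config-<release>, which may not hold on every build):

    import gzip
    import os
    import platform

    def kernel_config_value(option):
        """Return the value of a kernel config option, or None if not found."""
        release = platform.uname().release
        for path in ("/proc/config.gz", f"/boot/config-{release}"):
            if not os.path.exists(path):
                continue
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt") as f:
                for line in f:
                    if line.startswith(option + "="):
                        return line.strip().split("=", 1)[1]
        return None

    print("CONFIG_SCHED_CORE =", kernel_config_value("CONFIG_SCHED_CORE"))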
CGroup Mapping:
++++++++++++++++
We followed the cgroup approach for core scheduling described in https://lkml.org/lkml/2021/1/22/1469 and configured the cgroups accordingly, but we are not able to find the cpu.core_tag file in the cgroup hierarchy.
VM1 (Android running under qemu/KVM) is started and its default cgroup is changed to caas1.
VM2 (Android running under qemu/KVM) is started and its default cgroup is changed to caas2.
The changes were validated using the htop command (see the tagging sketch below).
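For completeness, this is the tagging step we attempted, as a minimal sketch. It assumes the cgroup-interface patches from the linked series are applied (cpu.core_tag only exists with those patches; it is not in the upstream prctl-based interface), and the caas1/caas2 paths follow our setup above:

    import os

    CGROUP_ROOT = "/sys/fs/cgroup/cpu"  # cgroup v1 cpu controller mount

    def tag_cgroup(name):
        """Write 1 to cpu.core_tag so tasks in this cgroup share a cookie."""
        tag_file = os.path.join(CGROUP_ROOT, name, "cpu.core_tag")
        if not os.path.exists(tag_file):
            raise FileNotFoundError(
                f"{tag_file} missing - kernel lacks the cgroup core-sched interface")
        with open(tag_file, "w") as f:
            f.write("1")

    for cg in ("caas1", "caas2"):
        tag_cgroup(cg)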
Workload:
+++++++++
GeekBench 5.3.1 - multi-core test executed on both VMs concurrently

Observations:
++++++++++++
Per the core scheduling documentation, (vCPU) threads of VM1 and VM2 (belonging to two different tagged cgroups) should never be scheduled on the same core at the same time. However, we observe vCPU threads of the two VMs running on the same core (verified with htop and ps; a verification sketch follows below).
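A minimal sketch of the check, assuming the two QEMU PIDs are known (the PIDs below are placeholders): it reads the last-run CPU of every thread from /proc/<pid>/task/<tid>/stat and reports physical cores touched by both VMs. Note this samples the last-run CPU, so a single snapshot only suggests overlap; repeated sampling under load is needed to show truly concurrent sharing.

    import os

    def thread_cpus(pid):
        """Map each thread of pid to the CPU it last ran on (stat field 39)."""
        cpus = {}
        for tid in os.listdir(f"/proc/{pid}/task"):
            with open(f"/proc/{pid}/task/{tid}/stat") as f:
                stat = f.read()
            # fields after the comm name, which is enclosed in parentheses
            fields = stat.rsplit(")", 1)[1].split()
            cpus[int(tid)] = int(fields[36])  # "processor" is field 39 overall
        return cpus

    def core_of(cpu):
        """Physical core id of a logical CPU, from sysfs topology."""
        with open(f"/sys/devices/system/cpu/cpu{cpu}/topology/core_id") as f:
            return int(f.read())

    VM1_PID, VM2_PID = 1111, 2222  # placeholders: the two QEMU process ids

    cores1 = {core_of(c) for c in thread_cpus(VM1_PID).values()}
    cores2 = {core_of(c) for c in thread_cpus(VM2_PID).values()}
    print("cores shared between VMs:", (cores1 & cores2) or "none")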
Benchmark Scores:
+++++++++++++++
With the above changes we see a degradation in the multi-threaded scores.
Without core scheduling:
++++++++++++++++++++
GeekBench multi-core test, concurrent run (two VMs) - performance gap is 2%

With core scheduling:
+++++++++++++++++
GeekBench multi-core test, concurrent run (two VMs) - performance gap is 18%
Please advise.
Thanks, Jaishankar Rajendran / Raju