Subject: Re: [RESEND RFC PATCH V3] sched: Improve scalability of select_idle_sibling using SMT balance
From: Steven Sistare <>
Date: Fri, 2 Feb 2018 12:36:47 -0500
On 2/2/2018 12:17 PM, Peter Zijlstra wrote:
> On Fri, Feb 02, 2018 at 11:53:40AM -0500, Steven Sistare wrote:
>>>> +static int select_idle_smt(struct task_struct *p, struct sched_group *sg)
>>>> {
>>>> +	int i, rand_index, rand_cpu;
>>>> +	int this_cpu = smp_processor_id();
>>>>
>>>> +	rand_index = CPU_PSEUDO_RANDOM(this_cpu) % sg->group_weight;
>>>> +	rand_cpu = sg->cp_array[rand_index];
>>>
>>> Right, so yuck.. I know why you need that, but that extra array and
>>> dereference is the reason I never went there.
>>>
>>> How much difference does it really make vs the 'normal' wrapping search
>>> from last CPU ?
>>>
>>> This really should be a separate patch with separate performance numbers
>>> on.
>>
>> For the benefit of other readers, if we always search and choose starting from
>> the first CPU in a core, then later searches will often need to traverse the first
>> N busy CPU's to find the first idle CPU. Choosing a random starting point avoids
>> such bias. It is probably a win for processors with 4 to 8 CPUs per core, and
>> a slight but hopefully negligible loss for 2 CPUs per core, and I agree we need
>> to see performance data for this as a separate patch to decide. We have SPARC
>> systems with 8 CPUs per core.
>
> Which is why the current code already doesn't start from the first cpu
> in the mask. We start at whatever CPU the task ran last on, which is
> effectively 'random' if the system is busy.
>
> So how is a per-cpu rotor better than that?
The current code is:

	for_each_cpu(cpu, cpu_smt_mask(target)) {
For an 8-CPU-per-core processor, all 8 values of target map to the same cpu_smt_mask, so 8 different tasks will traverse the mask in the same order.
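To make that concrete, below is a minimal user-space sketch of a wrapping search that starts at a per-searcher pseudo-random slot within the core, which is the effect the per-cpu rotor is after. This is not the kernel code: cpu_busy[], select_idle_smt_random() and the rotor update are made-up stand-ins for illustration only.

	#include <stdio.h>

	#define CORE_WIDTH 8

	static int cpu_busy[CORE_WIDTH];	/* 1 = busy, 0 = idle */

	static int cpu_is_idle(int cpu)
	{
		return !cpu_busy[cpu];
	}

	/* Pick an idle CPU in the core, starting at a random slot and wrapping. */
	static int select_idle_smt_random(unsigned int *rotor)
	{
		int start = *rotor % CORE_WIDTH;
		int i;

		*rotor = *rotor * 1103515245u + 12345u;	/* cheap per-searcher PRNG step */
		for (i = 0; i < CORE_WIDTH; i++) {
			int cpu = (start + i) % CORE_WIDTH;
			if (cpu_is_idle(cpu))
				return cpu;
		}
		return -1;
	}

	int main(void)
	{
		unsigned int rotor = 1;
		int t;

		cpu_busy[0] = cpu_busy[1] = cpu_busy[2] = 1;	/* first 3 CPUs busy */

		/* Eight searchers: a fixed-order walk would send them all to CPU 3. */
		for (t = 0; t < CORE_WIDTH; t++)
			printf("searcher %d picks cpu %d\n", t, select_idle_smt_random(&rotor));
		return 0;
	}

With the fixed-order for_each_cpu() walk, every searcher scans CPU 0, 1, 2, ... and converges on the first idle CPU; with a random starting slot the searchers fan out across the idle CPUs of the core.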
- Steve