Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
sched/fair: Reduce minimal imbalance threshold
The 25% default imbalance threshold for the DIE and NUMA domains is large enough to generate significant unfairness between threads. A typical example is 11 threads running on 2x4 CPUs. The imbalance of 20% between the two groups of 4 cores is just low enough not to trigger load balancing between the groups: the same 6 threads always stay on one group of 4 CPUs and the other 5 threads on the other group. With fair time sharing within each group, we end up with +20% running time for the group of 5 threads.

Decrease the imbalance threshold for the overloaded case, where load is used to balance tasks, to ensure fair time sharing.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Acked-by: Hillf Danton <hdanton@sina.com>
Link: https://lkml.kernel.org/r/20200921072424.14813-3-vincent.guittot@linaro.org
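To make the changelog numbers concrete, here is a minimal, self-contained C sketch (not the kernel's code; the helper name and load values are invented for illustration) of an imbalance_pct-style threshold check: with 6 vs 5 equally weighted threads, the busiest group carries 120% of the local group's load, which stays within the old 125% threshold but exceeds the new 117% one.

/*
 * Standalone sketch, not kernel code: a simplified model of the
 * threshold check (balance only when the busiest group's load exceeds
 * imbalance_pct percent of the local group's load). The function name
 * and load values below are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

static bool should_balance(unsigned long busiest_load,
			   unsigned long local_load,
			   unsigned int imbalance_pct)
{
	/* Balance when busiest load exceeds imbalance_pct% of local load. */
	return 100 * busiest_load > imbalance_pct * local_load;
}

int main(void)
{
	/* 6 threads vs 5 threads of equal weight (1024 each): 120% imbalance. */
	unsigned long busiest = 6 * 1024, local = 5 * 1024;

	/* Old threshold: 120% <= 125%, so the 6/5 split is left in place. */
	printf("imbalance_pct=125 -> balance? %d\n",
	       (int)should_balance(busiest, local, 125));
	/* New threshold: 120% > 117%, so the balancer evens the groups out. */
	printf("imbalance_pct=117 -> balance? %d\n",
	       (int)should_balance(busiest, local, 117));
	return 0;
}

Lowering the threshold to 117 thus lets the periodic load balancer correct the 6/5 split that 125 tolerated.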
parent 5a7f555904
commit 2208cdaa56
1 changed file with 1 addition and 1 deletion
kernel/sched/topology.c

@@ -1349,7 +1349,7 @@ sd_init(struct sched_domain_topology_level *tl,
 		.min_interval		= sd_weight,
 		.max_interval		= 2*sd_weight,
 		.busy_factor		= 32,
-		.imbalance_pct		= 125,
+		.imbalance_pct		= 117,
 
 		.cache_nice_tries	= 0,
 