linux-stable/kernel/sched
Michael Wang 26cf52229e sched: Avoid scale real weight down to zero
During our testing, we found a case where shares no longer work
correctly; the cgroup topology is:

  /sys/fs/cgroup/cpu/A		(shares=102400)
  /sys/fs/cgroup/cpu/A/B	(shares=2)
  /sys/fs/cgroup/cpu/A/B/C	(shares=1024)

  /sys/fs/cgroup/cpu/D		(shares=1024)
  /sys/fs/cgroup/cpu/D/E	(shares=1024)
  /sys/fs/cgroup/cpu/D/E/F	(shares=1024)
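
For reference, a sketch (not part of the patch) of how this topology can
be recreated with the cgroup v1 cpu controller; the paths and share
values come straight from the listing above, error handling is minimal:

  #include <stdio.h>
  #include <sys/stat.h>
  #include <sys/types.h>

  static void make_group(const char *path, const char *shares)
  {
      char file[256];
      FILE *f;

      mkdir(path, 0755);                    /* create the cgroup        */
      snprintf(file, sizeof(file), "%s/cpu.shares", path);
      f = fopen(file, "w");                 /* set its cpu.shares value */
      if (f) {
          fputs(shares, f);
          fclose(f);
      }
  }

  int main(void)
  {
      make_group("/sys/fs/cgroup/cpu/A",     "102400");
      make_group("/sys/fs/cgroup/cpu/A/B",   "2");
      make_group("/sys/fs/cgroup/cpu/A/B/C", "1024");

      make_group("/sys/fs/cgroup/cpu/D",     "1024");
      make_group("/sys/fs/cgroup/cpu/D/E",   "1024");
      make_group("/sys/fs/cgroup/cpu/D/E/F", "1024");
      return 0;
  }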

The same benchmark runs in groups C and F, no other tasks are running,
and the benchmark is capable of consuming all the CPUs.

We expected group C to win more CPU resources, since it can enjoy all
the shares of group A, but it is F that wins much more.

The reason is that group B has its shares set to 2. Since
A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
A->cfs_rq.load.weight becomes very small.

And in calc_group_shares() we calculate shares as:

  load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
  shares = (tg_shares * load) / tg_weight;

Since 'cfs_rq->load.weight' is so small, the load becomes 0 after the
scale down; although 'tg_shares' is 102400, the shares of the se that
stands for group A on the root cfs_rq become 2.

Meanwhile the se of D on the root cfs_rq is far bigger than 2, so it
wins the battle.
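
Plugging in concrete numbers makes the collapse obvious. Below is a
minimal user-space sketch of the arithmetic, assuming a 96-CPU machine
and the 10-bit fixed-point shift that 64-bit kernels use for load
weights (both values are only illustrative):

  #include <stdio.h>

  #define SCHED_FIXEDPOINT_SHIFT 10
  #define MIN_SHARES             2UL

  static unsigned long scale_load(unsigned long w)
  {
      return w << SCHED_FIXEDPOINT_SHIFT;
  }

  static unsigned long scale_load_down(unsigned long w)
  {
      return w >> SCHED_FIXEDPOINT_SHIFT;
  }

  int main(void)
  {
      unsigned long nr_cpus   = 96;       /* assumed machine size */
      unsigned long tg_shares = 102400;   /* group A's shares     */

      /*
       * A's cfs_rq carries only B's per-CPU sched entity, whose weight
       * is roughly B->shares scaled up and spread across the CPUs.
       */
      unsigned long cfs_rq_weight = scale_load(2) / nr_cpus; /* 2048 / 96 = 21 */
      unsigned long load = scale_load_down(cfs_rq_weight);   /* 21 >> 10 = 0   */

      /* (tg_shares * load) / tg_weight, pretending tg_weight == load */
      unsigned long shares = load ? tg_shares * load / load : 0;

      if (shares < MIN_SHARES)
          shares = MIN_SHARES;            /* the clamp leaves nothing but 2 */

      printf("scaled-down load = %lu, shares of A's se = %lu\n", load, shares);
      return 0;
  }

It prints a scaled-down load of 0 and shares of 2, matching what we
observed.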

Thus when scale_load_down() scales the real weight down to 0, it no
longer tells the real story: the caller gets the wrong information and
the calculation becomes buggy.

This patch adds a check in scale_load_down() so that the real weight
is always >= MIN_SHARES after scaling; with it applied, group C wins as
expected.
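
A user-space sketch of the idea (not the exact kernel macro, which
lives in kernel/sched/sched.h): a non-zero weight must never scale down
to 0, so clamp it to at least MIN_SHARES, while a weight that really is
0 stays 0:

  #include <stdio.h>

  #define SCHED_FIXEDPOINT_SHIFT 10
  #define MIN_SHARES             2UL

  static unsigned long scale_load_down(unsigned long w)
  {
      unsigned long scaled = w >> SCHED_FIXEDPOINT_SHIFT;

      if (w && scaled < MIN_SHARES)
          scaled = MIN_SHARES;    /* a tiny but real weight stays visible */

      return scaled;
  }

  int main(void)
  {
      /* 21 is roughly B->shares scaled up and split across 96 CPUs */
      printf("%lu\n", scale_load_down(21));        /* 2 instead of 0         */
      printf("%lu\n", scale_load_down(0));         /* genuinely idle stays 0 */
      printf("%lu\n", scale_load_down(1UL << 20)); /* a large weight: 1024   */
      return 0;
  }

With that clamp, the simplified example above computes
102400 * 2 / 2 = 102400 for group A's se instead of 2, so C beats F as
expected.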

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/38e8e212-59a1-64b2-b247-b6d0b52d8dc1@linux.alibaba.com
2020-03-20 13:06:19 +01:00
autogroup.c sched/autogroup: Make autogroup_path() always available 2019-06-24 19:23:40 +02:00
autogroup.h sched/headers: Simplify and clean up header usage in the scheduler 2018-03-04 12:39:29 +01:00
clock.c sched/clock: Use static_branch_likely() with sched_clock_running 2019-11-29 08:10:54 +01:00
completion.c sched/Documentation: Update wake_up() & co. memory-barrier guarantees 2018-07-17 09:30:34 +02:00
core.c psi: Fix cpu.pressure for cpu.max and competing cgroups 2020-03-20 13:06:18 +01:00
cpuacct.c sched/headers: Simplify and clean up header usage in the scheduler 2018-03-04 12:39:29 +01:00
cpudeadline.c Linux 5.2-rc5 2019-06-17 12:12:27 +02:00
cpudeadline.h sched/headers: Simplify and clean up header usage in the scheduler 2018-03-04 12:39:29 +01:00
cpufreq.c cpufreq: Avoid leaving stale IRQ work items during CPU offline 2019-12-12 17:59:43 +01:00
cpufreq_schedutil.c sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with() 2019-12-25 10:42:08 +01:00
cpupri.c sched/rt: Optimize cpupri_find() on non-heterogenous systems 2020-03-06 12:57:27 +01:00
cpupri.h sched/rt: Optimize cpupri_find() on non-heterogenous systems 2020-03-06 12:57:27 +01:00
cputime.c sched/vtime: Prevent unstable evaluation of WARN(vtime->state) 2020-03-06 12:57:16 +01:00
deadline.c sched/deadline: Make two functions static 2020-03-06 12:57:24 +01:00
debug.c sched/pelt: Add a new runnable average signal 2020-02-24 11:36:36 +01:00
fair.c sched/fair: Fix enqueue_task_fair warning 2020-03-20 13:06:18 +01:00
features.h sched/fair/util_est: Implement faster ramp-up EWMA on utilization increases 2019-10-29 10:01:07 +01:00
idle.c idle: fix spelling mistake "iterrupts" -> "interrupts" 2020-01-17 10:19:22 +01:00
isolation.c genirq, sched/isolation: Isolate from handling managed interrupts 2020-01-22 16:29:49 +01:00
loadavg.c timers/nohz: Update NOHZ load in remote tick 2020-01-28 21:36:44 +01:00
Makefile psi: pressure stall information for CPU, memory, and IO 2018-10-26 16:26:32 -07:00
membarrier.c membarrier: Fix RCU locking bug caused by faulty merge 2019-10-01 21:27:50 +02:00
pelt.c sched/pelt: Add support to track thermal pressure 2020-03-06 12:57:17 +01:00
pelt.h sched/pelt: Add support to track thermal pressure 2020-03-06 12:57:17 +01:00
psi.c psi: Move PF_MEMSTALL out of task->flags 2020-03-20 13:06:19 +01:00
rt.c sched/rt: Remove unnecessary push for unfit tasks 2020-03-06 12:57:29 +01:00
sched-pelt.h sched/fair: Fix "runnable_avg_yN_inv" not used warnings 2019-06-17 12:15:58 +02:00
sched.h sched: Avoid scale real weight down to zero 2020-03-20 13:06:19 +01:00
stats.c proc: introduce proc_create_seq{,_data} 2018-05-16 07:23:35 +02:00
stats.h psi: Move PF_MEMSTALL out of task->flags 2020-03-20 13:06:19 +01:00
stop_task.c sched/core: Further clarify sched_class::set_next_task() 2019-11-11 08:35:21 +01:00
swait.c kernel/sched/: remove caller signal_pending branch predictions 2019-01-04 13:13:48 -08:00
topology.c sched/topology: Don't enable EAS on SMT systems 2020-03-06 12:57:23 +01:00
wait.c Add wake_up_interruptible_sync_poll_locked() 2019-10-31 15:12:23 +00:00
wait_bit.c sched/wait: fix ___wait_var_event(exclusive) 2019-12-17 13:32:50 +01:00