Tejun Heo 968f7a239c blk-iolatency: Fix inflight count imbalances and IO hangs on offline
commit 8a177a36da upstream.

iolatency needs to track the number of inflight IOs per cgroup. As this
tracking can be expensive, it is disabled when no cgroup has iolatency
configured for the device. To ensure that the inflight counters stay
balanced, iolatency_set_limit() freezes the request_queue while manipulating
the enabled counter, which ensures that no IO is in flight and thus all
counters are zero.

Unfortunately, iolatency_set_limit() isn't the only place where the enabled
counter is manipulated. iolatency_pd_offline() can also decrement the
counter and trigger disabling. As this disabling happens without freezing
the queue, it can easily take place while some IOs are in flight, leaking
the counts.
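The imbalance can be modeled outside the kernel. Below is a minimal Python
sketch (purely illustrative; `IolatModel` and its methods are made up for
this example, assuming both the issue and completion paths check the live
enable state): disabling between an IO's issue and its completion strands
the increment.

```python
# Toy model of the accounting bug; names are illustrative, not actual
# kernel identifiers.  Both the issue and completion paths are gated on
# the live `enabled` flag, mirroring the imbalance described above.
class IolatModel:
    def __init__(self):
        self.enabled = True   # iolatency configured for the device
        self.inflight = 0     # rq_wait.inflight counterpart

    def issue(self):
        if self.enabled:      # count the IO only while tracking is on
            self.inflight += 1

    def complete(self):
        if self.enabled:      # completion is gated on the *current* state
            self.inflight -= 1

def leak_demo():
    m = IolatModel()
    m.issue()          # IO counted while tracking is enabled
    m.enabled = False  # offline path disables without freezing the queue
    m.complete()       # the matching decrement never happens
    return m.inflight  # leaked: stuck at 1
```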

This can easily be demonstrated by turning on iolatency for one empty
cgroup while IOs are in flight in other cgroups and then removing the
cgroup. Note that iolatency shouldn't have been enabled elsewhere in the
system to ensure that removing the cgroup disables iolatency for the whole
device.

The following keeps flipping on and off iolatency on sda:

  echo +io > /sys/fs/cgroup/cgroup.subtree_control
  while true; do
      mkdir -p /sys/fs/cgroup/test
      echo '8:0 target=100000' > /sys/fs/cgroup/test/io.latency
      sleep 1
      rmdir /sys/fs/cgroup/test
      sleep 1
  done

while a concurrent fio generates direct random reads:

  fio --name test --filename=/dev/sda --direct=1 --rw=randread \
      --runtime=600 --time_based --iodepth=256 --numjobs=4 --bs=4k

while monitoring with the following drgn script:

  # Run with `drgn script.py`; drgn provides `prog` in script context.
  import time

  from drgn import container_of
  from drgn.helpers.linux import (cgroup_path, css_for_each_descendant_pre,
                                  disk_name, hlist_for_each)

  while True:
    for css in css_for_each_descendant_pre(prog['blkcg_root'].css.address_of_()):
        for pos in hlist_for_each(container_of(css, 'struct blkcg', 'css').blkg_list):
            blkg = container_of(pos, 'struct blkcg_gq', 'blkcg_node')
            pd = blkg.pd[prog['blkcg_policy_iolatency'].plid]
            if pd.value_() == 0:
                continue
            iolat = container_of(pd, 'struct iolatency_grp', 'pd')
            inflight = iolat.rq_wait.inflight.counter.value_()
            if inflight:
                print(f'inflight={inflight} {disk_name(blkg.q.disk).decode("utf-8")} '
                      f'{cgroup_path(css.cgroup).decode("utf-8")}')
    time.sleep(1)

The monitoring output looks like the following:

  inflight=1 sda /user.slice
  inflight=1 sda /user.slice
  ...
  inflight=14 sda /user.slice
  inflight=13 sda /user.slice
  inflight=17 sda /user.slice
  inflight=15 sda /user.slice
  inflight=18 sda /user.slice
  inflight=17 sda /user.slice
  inflight=20 sda /user.slice
  inflight=19 sda /user.slice <- fio stopped, inflight stuck at 19
  inflight=19 sda /user.slice
  inflight=19 sda /user.slice

If a cgroup with a stuck inflight count ends up getting throttled, the
throttled IOs will never be issued, as there is no completion event to
wake them up, leading to an indefinite hang.

This patch fixes the bug by unifying enable handling into a work item which
is automatically kicked off from iolatency_set_min_lat_nsec() which is
called from both iolatency_set_limit() and iolatency_pd_offline() paths.
Punting to a work item is necessary as iolatency_pd_offline() is called
under spinlocks while freezing a request_queue requires a sleepable context.
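The fix pattern described above — punt the enable/disable transition to a
sleepable context, and drain all in-flight IOs (the "freeze") before
flipping the flag — can be sketched in the same illustrative terms. This is
a toy Python model, not the kernel code; the worker thread stands in for
the work item, and waiting for `inflight == 0` stands in for freezing the
request_queue.

```python
import threading

# Toy model of the fixed flow; names are illustrative, not kernel identifiers.
class IolatModelFixed:
    def __init__(self):
        self.enabled = True    # iolatency configured for the device
        self.inflight = 0      # rq_wait.inflight counterpart
        self._cond = threading.Condition()

    def issue(self):
        if self.enabled:
            with self._cond:
                self.inflight += 1

    def complete(self):
        if self.enabled:
            with self._cond:
                self.inflight -= 1
                self._cond.notify_all()

    def _enable_work(self, value):
        # Work item body: runs in a sleepable context, so it can "freeze
        # the queue", i.e. wait until no IO is in flight, before flipping
        # the flag.  This keeps the counters balanced at the transition.
        with self._cond:
            self._cond.wait_for(lambda: self.inflight == 0)
            self.enabled = value

    def set_enabled(self, value):
        # Callers (e.g. the offline path) may hold spinlocks, so the state
        # change is punted to a worker instead of being done inline.
        worker = threading.Thread(target=self._enable_work, args=(value,))
        worker.start()
        return worker
```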

This also simplifies the code, reducing the line count (sans comments), and
avoids the unnecessary freezes that used to happen whenever a cgroup's
latency target was newly set or cleared.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Liu Bo <bo.liu@linux.alibaba.com>
Fixes: 8c772a9bfc ("blk-iolatency: fix IO hang due to negative inflight counter")
Cc: stable@vger.kernel.org # v5.0+
Link: https://lore.kernel.org/r/Yn9ScX6Nx2qIiQQi@slm.duckdns.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-06-09 10:26:30 +02:00