bfq: Drop pointless unlock-lock pair

commit fc84e1f941 upstream.

In bfq_insert_request() we unlock bfqd->lock only to call
trace_block_rq_insert() and then lock bfqd->lock again. This is
pointless: tracing is disabled when we really care about performance,
and even when the tracepoint is enabled, it is a quick call.

CC: stable@vger.kernel.org
Tested-by: "yukuai (C)" <yukuai3@huawei.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220401102752.8599-5-jack@suse.cz
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -5529,11 +5529,8 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 		return;
 	}
 
-	spin_unlock_irq(&bfqd->lock);
 	blk_mq_sched_request_inserted(rq);
-	spin_lock_irq(&bfqd->lock);
-
 	bfqq = bfq_init_rq(rq);
 	if (!bfqq || at_head || blk_rq_is_passthrough(rq)) {
 		if (at_head)
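
For context, the change boils down to the following before/after shape of
bfq_insert_request(). This is a paraphrased sketch reconstructed from the
hunk above; the surrounding merge check is shown only for orientation,
taken from the 5.10-era function, and is not part of this patch:

	/* Before: bfqd->lock was dropped around the tracepoint call. */
	spin_lock_irq(&bfqd->lock);
	if (blk_mq_sched_try_insert_merge(q, rq)) {
		spin_unlock_irq(&bfqd->lock);
		return;
	}

	spin_unlock_irq(&bfqd->lock);
	blk_mq_sched_request_inserted(rq);	/* ends up in trace_block_rq_insert() */
	spin_lock_irq(&bfqd->lock);

	bfqq = bfq_init_rq(rq);

	/* After: the call stays under bfqd->lock. A disabled tracepoint is a
	 * static-branch no-op, and an enabled one is quick, so holding the
	 * lock across it is cheap and saves an unlock/lock pair per insert.
	 */
	spin_lock_irq(&bfqd->lock);
	if (blk_mq_sched_try_insert_merge(q, rq)) {
		spin_unlock_irq(&bfqd->lock);
		return;
	}

	blk_mq_sched_request_inserted(rq);
	bfqq = bfq_init_rq(rq);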