Date: Tue, 4 Feb 2020 11:37:11 -0800
Subject: [PATCH] block: Limit number of items taken from the I/O scheduler in one go
From: Salman Qazi <sqazi@google.com>
Flushes bypass the I/O scheduler and are added to hctx->dispatch in blk_mq_sched_bypass_insert. This can happen while a kworker is running the hctx->run_work work item and is already past the point in blk_mq_sched_dispatch_requests where hctx->dispatch is checked.
The blk_mq_do_dispatch_sched call is not guaranteed to finish in bounded time, because the I/O scheduler can feed it an arbitrary number of commands.
Since there is only one hctx->run_work, the commands waiting in hctx->dispatch can therefore wait an arbitrarily long time for run_work to be rerun.
A similar phenomenon exists with dispatches from the software queue.
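To make the starvation window concrete, here is a rough, paraphrased sketch of the run_work dispatch path (based on blk_mq_sched_dispatch_requests; the sketch_ name is illustrative, and restart handling and several branches are elided):

/*
 * Paraphrased sketch of blk_mq_sched_dispatch_requests(); not the
 * literal kernel code.
 */
static void sketch_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
{
	LIST_HEAD(rq_list);

	/* Requests already sitting on hctx->dispatch are drained first. */
	if (!list_empty_careful(&hctx->dispatch)) {
		spin_lock(&hctx->lock);
		list_splice_init(&hctx->dispatch, &rq_list);
		spin_unlock(&hctx->lock);
	}

	/*
	 * A flush that blk_mq_sched_bypass_insert() adds to
	 * hctx->dispatch after this point is not seen again until
	 * run_work is rerun, and the loops below can run for an
	 * arbitrarily long time.
	 */
	if (hctx->queue->elevator)
		blk_mq_do_dispatch_sched(hctx);	/* pulls from the I/O scheduler */
	else
		blk_mq_do_dispatch_ctx(hctx);	/* pulls from the software queues */
}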
The solution is to poll hctx->dispatch in blk_mq_do_dispatch_sched and blk_mq_do_dispatch_ctx and, when it is non-empty, return from the run_work handler so that it can be rerun.
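With the change in place, the scheduler dispatch loop takes roughly the following shape (an abridged sketch of blk_mq_do_dispatch_sched after the patch, not the literal code; the sketch_ name is illustrative):

/*
 * Abridged sketch of blk_mq_do_dispatch_sched() with the new early
 * exit; not the literal post-patch code.
 */
static void sketch_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
{
	struct request_queue *q = hctx->queue;
	struct elevator_queue *e = q->elevator;
	LIST_HEAD(rq_list);

	do {
		struct request *rq;

		if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
			break;

		/*
		 * New check: requests (e.g. a flush) have accumulated
		 * on hctx->dispatch, so stop pulling from the I/O
		 * scheduler and let run_work be rerun to service them.
		 */
		if (!list_empty_careful(&hctx->dispatch))
			break;

		if (!blk_mq_get_dispatch_budget(hctx))
			break;

		rq = e->type->ops.dispatch_request(hctx);
		if (!rq) {
			blk_mq_put_dispatch_budget(hctx);
			break;
		}

		list_add(&rq->queuelist, &rq_list);
	} while (blk_mq_dispatch_rq_list(q, &rq_list, true));
}

Note that list_empty_careful keeps the check lockless; a racing insertion is at worst picked up on the next rerun of run_work.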
Signed-off-by: Salman Qazi <sqazi@google.com>
---
 block/blk-mq-sched.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index ca22afd47b3d..d1b8b31bc3d4 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -97,6 +97,9 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 		if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
 			break;
 
+		if (!list_empty_careful(&hctx->dispatch))
+			break;
+
 		if (!blk_mq_get_dispatch_budget(hctx))
 			break;
 
@@ -140,6 +143,9 @@ static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
 	do {
 		struct request *rq;
 
+		if (!list_empty_careful(&hctx->dispatch))
+			break;
+
 		if (!sbitmap_any_bit_set(&hctx->ctx_map))
 			break;
 
-- 
2.25.0.341.g760bfbb309-goog