iomap: clear the per-folio dirty bits on all writeback failures

write_cache_pages always clears the folio dirty bit before calling into the
file systems, and leaves folios with a writeback failure without the
dirty bit after return.  We also clear the per-block dirty bits on
writeback failures unless no I/O has been submitted, which will leave the
folio in an inconsistent state where the folio dirty bit is clear but one
or more per-block dirty bits are still set.  This seems to be due to the
place where the iomap_clear_range_dirty call was inserted into the
existing, not very clearly structured code when per-block dirty bit
support was added, and not actually intentional.  Switch to always
clearing the per-block dirty bits on writeback failure.

Fixes: 4ce02c6797 ("iomap: Add per-block dirty state tracking to improve performance")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20231207072710.176093-2-hch@lst.de
Signed-off-by: Christian Brauner <brauner@kernel.org>
commit 7ea1d9b4a8
parent 6613476e22
Author:    Christoph Hellwig <hch@lst.de>, 2023-12-07 08:26:57 +01:00
Committer: Christian Brauner <brauner@kernel.org>
1 changed file with 11 additions and 7 deletions

fs/iomap/buffered-io.c
@@ -1833,16 +1833,10 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	if (unlikely(error)) {
 		/*
 		 * Let the filesystem know what portion of the current page
-		 * failed to map. If the page hasn't been added to ioend, it
-		 * won't be affected by I/O completion and we must unlock it
-		 * now.
+		 * failed to map.
 		 */
 		if (wpc->ops->discard_folio)
 			wpc->ops->discard_folio(folio, pos);
-		if (!count) {
-			folio_unlock(folio);
-			goto done;
-		}
 	}
 
 	/*
@@ -1851,6 +1845,16 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * all the dirty bits in the folio here.
 	 */
 	iomap_clear_range_dirty(folio, 0, folio_size(folio));
+
+	/*
+	 * If the page hasn't been added to the ioend, it won't be affected by
+	 * I/O completion and we must unlock it now.
+	 */
+	if (error && !count) {
+		folio_unlock(folio);
+		goto done;
+	}
+
 	folio_start_writeback(folio);
 	folio_unlock(folio);
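
For readers who want the end state rather than the diff, below is a condensed
sketch of how the error path at the end of iomap_writepage_map() reads with
this patch applied (in fs/iomap/buffered-io.c).  It is reconstructed from the
hunks above, elides the rest of the function, and the comments are explanatory
additions, not the kernel's own.

	if (unlikely(error)) {
		/* Tell the filesystem which part of the folio failed to map. */
		if (wpc->ops->discard_folio)
			wpc->ops->discard_folio(folio, pos);
	}

	/*
	 * write_cache_pages has already cleared the folio dirty bit, so clear
	 * the per-block dirty bits unconditionally, before the early unlock
	 * below, to keep the folio-level and per-block dirty state consistent
	 * even on writeback failure.
	 */
	iomap_clear_range_dirty(folio, 0, folio_size(folio));

	/*
	 * Nothing was added to an ioend, so no I/O completion will come along
	 * to unlock the folio; unlock it now and skip starting writeback.
	 */
	if (error && !count) {
		folio_unlock(folio);
		goto done;
	}

	folio_start_writeback(folio);
	folio_unlock(folio);

The ordering is the whole fix: before this patch, the error-with-no-I/O case
took the goto before iomap_clear_range_dirty ran, leaving per-block dirty bits
set on a folio whose dirty flag was already gone.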