btrfs: adjust overcommit logic when very close to full

A user reported some unpleasant behavior with very small file systems.
The reproducer is this:

  $ mkfs.btrfs -f -m single -b 8g /dev/vdb
  $ mount /dev/vdb /mnt/test
  $ dd if=/dev/zero of=/mnt/test/testfile bs=512M count=20

This will result in usage that looks like this:

  Overall:
      Device size:                   8.00GiB
      Device allocated:              8.00GiB
      Device unallocated:            1.00MiB
      Device missing:                  0.00B
      Device slack:                  2.00GiB
      Used:                          5.47GiB
      Free (estimated):              2.52GiB      (min: 2.52GiB)
      Free (statfs, df):               0.00B
      Data ratio:                       1.00
      Metadata ratio:                   1.00
      Global reserve:                5.50MiB      (used: 0.00B)
      Multiple profiles:                  no

  Data,single: Size:7.99GiB, Used:5.46GiB (68.41%)
     /dev/vdb        7.99GiB

  Metadata,single: Size:8.00MiB, Used:5.77MiB (72.07%)
     /dev/vdb        8.00MiB

  System,single: Size:4.00MiB, Used:16.00KiB (0.39%)
     /dev/vdb        4.00MiB

  Unallocated:
     /dev/vdb        1.00MiB

As you can see, we've gotten ourselves quite full of metadata, while all
of the disk has been allocated for data.

On smaller file systems there's not a lot of time before we get full, so
our overcommit behavior bites us here.  Generally speaking, data
reservations result in chunk allocations, because we assume reservation ==
actual use for data.  This means that at any point we could end up with a
chunk allocation for data, and if we're very close to full we could do
this before we have a chance to figure out that we need another metadata
chunk.
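
To make the failure mode concrete, here is a minimal userspace sketch of
the metadata overcommit check this patch tunes; the names and signature
are simplified stand-ins, not the actual btrfs_can_overcommit() code:

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Simplified model: a metadata reservation of "bytes" may exceed the
   * space_info's allocated chunks ("total") as long as the shortfall
   * could still be covered by unallocated disk space ("avail"), which is
   * what calc_available_free_space() estimates.
   */
  bool can_overcommit(uint64_t used, uint64_t total, uint64_t avail,
                      uint64_t bytes)
  {
          return used + bytes < total + avail;
  }

The trouble is that the same unallocated space counted in "avail" can be
claimed by a data chunk allocation at any moment, at which point the
metadata reservations we already granted have nothing left to fall back
on.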

Address this by adjusting the overcommit logic.  Simply put, we need to
take away 1 chunk's worth from the available chunk space in case of a
data reservation.  This will allow us to stop overcommitting before we
potentially lose this space to a data allocation.  With this fix in
place we properly allocate a metadata chunk before we're completely
full, allowing for enough slack space in metadata.
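
For the 8g reproducer above, the amount held back works out to roughly
819MiB.  A standalone sketch of the arithmetic mirrored from the hunk
below (the starting chunk_size of 1G is an assumption for illustration,
not a value taken from the report):

  #include <stdint.h>
  #include <stdio.h>

  #define SZ_1G (1024ULL * 1024 * 1024)

  static uint64_t min_u64(uint64_t a, uint64_t b)
  {
          return a < b ? a : b;
  }

  int main(void)
  {
          uint64_t total_rw_bytes = 8 * SZ_1G;   /* 8g device from the reproducer */
          uint64_t chunk_size = SZ_1G;           /* assumed data space_info chunk_size */
          uint64_t data_chunk_size;

          /* no more than 10% of the disk or 1G, whichever is smaller */
          data_chunk_size = min_u64(chunk_size, total_rw_bytes / 10);
          data_chunk_size = min_u64(data_chunk_size, SZ_1G);

          /* prints 819 MiB; this is what gets subtracted from "avail" */
          printf("data_chunk_size = %llu MiB\n",
                 (unsigned long long)(data_chunk_size >> 20));
          return 0;
  }

In the reported state (1MiB unallocated), avail <= data_chunk_size, so
the function now returns 0 and metadata stops overcommitting instead of
racing a data allocation for that last sliver.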

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
commit cb6cbab790
parent 6f2d3c0196
Author:    Josef Bacik, 2023-09-27 13:47:01 -04:00
Committer: David Sterba

@@ -345,8 +345,10 @@ static u64 calc_available_free_space(struct btrfs_fs_info *fs_info,
 				      struct btrfs_space_info *space_info,
 				      enum btrfs_reserve_flush_enum flush)
 {
+	struct btrfs_space_info *data_sinfo;
 	u64 profile;
 	u64 avail;
+	u64 data_chunk_size;
 	int factor;
 
 	if (space_info->flags & BTRFS_BLOCK_GROUP_SYSTEM)
@@ -364,6 +366,36 @@ static u64 calc_available_free_space(struct btrfs_fs_info *fs_info,
 	 */
 	factor = btrfs_bg_type_to_factor(profile);
 	avail = div_u64(avail, factor);
+	if (avail == 0)
+		return 0;
+
+	/*
+	 * Calculate the data_chunk_size, space_info->chunk_size is the
+	 * "optimal" chunk size based on the fs size.  However when we actually
+	 * allocate the chunk we will strip this down further, making it no more
+	 * than 10% of the disk or 1G, whichever is smaller.
+	 */
+	data_sinfo = btrfs_find_space_info(fs_info, BTRFS_BLOCK_GROUP_DATA);
+	data_chunk_size = min(data_sinfo->chunk_size,
+			      mult_perc(fs_info->fs_devices->total_rw_bytes, 10));
+	data_chunk_size = min_t(u64, data_chunk_size, SZ_1G);
+
+	/*
+	 * Since data allocations immediately use block groups as part of the
+	 * reservation, because we assume that data reservations will == actual
+	 * usage, we could potentially overcommit and then immediately have that
+	 * available space used by a data allocation, which could put us in a
+	 * bind when we get close to filling the file system.
+	 *
+	 * To handle this simply remove the data_chunk_size from the available
+	 * space.  If we are relatively empty this won't affect our ability to
+	 * overcommit much, and if we're very close to full it'll keep us from
+	 * getting into a position where we've given ourselves very little
+	 * metadata wiggle room.
+	 */
+	if (avail <= data_chunk_size)
+		return 0;
+	avail -= data_chunk_size;
 
 	/*
 	 * If we aren't flushing all things, let us overcommit up to