[systemd-devel] Antw: Re: [EXT] Re: consider dropping defrag of journals on btrfs

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Tue Feb 9 15:29:40 UTC 2021


>>> Phillip Susi <phill at thesusis.net> schrieb am 09.02.2021 um 15:53 in Nachricht
<87o8gtz3m1.fsf at vps.thesusis.net>:

> Chris Murphy writes:
> 
>> Basically correct. It will merge random writes such that they become
>> sequential writes. But it means inserts/appends/overwrites for a file
>> won't be located with the original extents.
> 
> Wait, I thought that was only true for metadata, not normal file data
> blocks?  Well, maybe it becomes true for normal data if you enable
> compression.  Or small files that get leaf packed into the metadata
> chunk.
> 
> If it's really combining streaming writes from two different files into
> a single interleaved write to the disk, that would be really silly.

Why? The idea of BtrFS was that any block written (or at least any block that is used "frequently enough") will be in the RAM cache, so the actual on-disk location of a block does not matter much for reads. Performance-killing synchronous random writes would benefit instead, since they get merged into sequential writes. At least that is my understanding (AFAIK).
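To illustrate the behaviour being debated, here is a toy copy-on-write allocator (a hypothetical sketch, not btrfs code): every overwrite is redirected to the next free sequential slot on the device, so interleaved random writes to two files become one sequential device write, while each file's own blocks end up scattered (fragmented) on disk.

```python
# Toy model (NOT btrfs code): a bump allocator standing in for
# copy-on-write block placement. Names are illustrative only.

class CowDisk:
    def __init__(self):
        self.next_free = 0   # always hand out the next sequential block
        self.extents = {}    # file name -> {logical block: physical block}

    def write(self, name, logical_block):
        phys = self.next_free          # never overwrite in place
        self.next_free += 1
        self.extents.setdefault(name, {})[logical_block] = phys
        return phys

disk = CowDisk()
# Interleaved "random" writes from two files (e.g. two journals)...
for blk in (7, 2, 9):
    disk.write("journal-a", blk)
    disk.write("journal-b", blk)

# ...land as one sequential run of physical blocks 0..5, but each
# file's logical blocks are now interleaved with the other file's.
print(disk.extents["journal-a"])   # {7: 0, 2: 2, 9: 4}
print(disk.extents["journal-b"])   # {7: 1, 2: 3, 9: 5}
```

The point of the toy: the device sees one sequential stream (good for write throughput), but a later sequential read of either file alone has to seek, which is why defragmenting append-heavy files like journals came up in the first place.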



