[systemd-devel] consider dropping defrag of journals on btrfs
Dave Howorth
systemd at howorth.org.uk
Fri Feb 5 16:06:05 UTC 2021
On Fri, 5 Feb 2021 16:23:02 +0100
Lennart Poettering <lennart at poettering.net> wrote:
> I don't think that makes much sense: we rotate and start new files for
> a multitude of reasons, such as size overrun, time jumps, abnormal
> shutdown and so on. If we'd always leave a fully allocated file
> around, people would hate us...
I'm not sure about that. The file is eventually going to grow to 128 MB,
so if there isn't space for it I might as well find out now rather than
later. And it's not as if the space would be available for anything
else; it's left free for exactly this log file.
Or are you talking about left-over files after some exceptional event
that are only partly full? If so, then just deallocate the unwanted
empty space from them once you've recovered from the exceptional event.
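To put that in concrete terms, something along these lines would do it.
This is just a sketch with made-up numbers and a made-up scratch path,
not journald's actual code: reserve the full size up front, then give
back the unused tail once you know how much was really written.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define JOURNAL_TARGET_SIZE (128 * 1024 * 1024)  /* the 128 MB discussed above */

int main(void) {
        /* Scratch file standing in for a journal file. */
        int fd = open("/tmp/fake-journal", O_RDWR | O_CREAT, 0600);
        if (fd < 0)
                return 1;

        /* Reserve the full 128 MB immediately, so the "no space" case
         * shows up now rather than half-way through writing the file.  */
        int r = posix_fallocate(fd, 0, JOURNAL_TARGET_SIZE);
        if (r != 0)
                fprintf(stderr, "posix_fallocate: %s\n", strerror(r));

        /* If an exceptional event left the file only partly used, drop
         * the allocation past the last byte written (the real offset
         * would come from the journal header; 12 MB is invented here). */
        off_t used = 12 * 1024 * 1024;
        if (ftruncate(fd, used) < 0)
                perror("ftruncate");

        close(fd);
        return 0;
}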
> Also, we vacuum old journals when allocating and the size constraints
> are hit. i.e. if we detect that adding 8M to journal file X would mean
> the space used by all journals together would be above the configured
> disk usage limits, we'll delete the oldest journal files we can, until
> we can allocate 8M again. And we do this each time. If we'd allocate
> the full file all the time this means we'll likely remove ~256M of
> logs whenever we start a new file. And that's just shitty behaviour.
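(Just to be sure I'm reading that right: as I understand it, the step
you describe amounts to roughly the following. All the names here are
invented for illustration; this is not the actual journald code.)

#include <stdbool.h>
#include <stdint.h>

#define GROW_CHUNK (8ULL * 1024 * 1024)       /* journals grow in 8M steps */

/* Hypothetical helpers, standing in for journald's own bookkeeping. */
extern uint64_t total_journal_usage(void);    /* bytes used by all journals */
extern bool vacuum_oldest_journal(void);      /* false once none are left   */

/* Before growing the active file by another 8M, delete the oldest
 * archived journals one by one, but only until that 8M fits again. */
static bool make_room(uint64_t configured_max) {
        while (total_journal_usage() + GROW_CHUNK > configured_max)
                if (!vacuum_oldest_journal())
                        return false;
        return true;
}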
No it's not; it's exactly what happens most of the time, because all
the old log files are the same size, which is precisely why they were
rolled over in the first place. So freeing just one of them gives
exactly the right amount of space for the new log file. Why would you
want to free two?
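As a toy illustration of that, with invented numbers and assuming every
archived file was rotated at exactly the per-file limit:

#include <assert.h>
#include <stdint.h>

int main(void) {
        const uint64_t file_size = 128;           /* MB per journal file       */
        const uint64_t max_use   = 4 * file_size; /* overall usage cap         */
        uint64_t used            = 4 * file_size; /* archive is currently full */

        /* Vacuum exactly ONE old file ... */
        used -= file_size;

        /* ... and one new, fully preallocated file of the same size fits. */
        assert(used + file_size <= max_use);
        return 0;
}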