[systemd-devel] [PATCH] journal, coredump: allow relative values in some configuration options

Lennart Poettering lennart at poettering.net
Thu May 28 04:24:01 PDT 2015


On Thu, 28.05.15 11:49, Jan Synacek (jsynacek at redhat.com) wrote:

> Lennart Poettering <lennart at poettering.net> writes:
> 
> > Hmm, this doesn't look right. Here we choose the hash table sizes to
> > use for a file, and I doubt we should base this on the currently
> > available disk space, since sizing the hashtable will have an effect
> > on the entire lifetime of the file, during which the available disk
> > space might change wildly.
> >
> > I think it would be best not to do relative sizes for the journal file
> > max size at all, and continue to only support an absolute value for
> > that. 
> >
> >> +
> >> +uint64_t size_parameter_evaluate(const SizeParameter *sp, uint64_t available) {
> >> +        if (sp->value == (uint64_t) -1)
> >> +                return (uint64_t) -1;
> >> +
> >> +        if (sp->relative)
> >> +                return sp->value * 0.01 * available;
> >
> > Hmm, so this implements this as percentage after all. as mentioned in
> > my earlier mail, I think this should be normalized to 2^32 instead, so
> > that 2^32 maps to 100%...
> 
> I realized that I got the patch wrong. What I really wanted was to take
> percentage values of *disk size*, not available space. Using disk size
> would make it constant. 

Not really. On btrfs and suchlike you can easily add or remove a disk
at runtime, making this dynamic...

> Having said that, is it ok to change even the options that you said
> were the bad idea?

Well, for some of them you'd need to do an extra statfs(), which we'd
better avoid if we don't have to, since it would sit in a relatively
"inner" loop. I'd hence rather avoid this...

> Also, does it really make sense to implement the relative values as
> a mapping as you have suggested? To me it really doesn't, since
> taking more than 100% of disk space is not possible (I don't
> really count thin LVs), and mapping to a huge interval is just not
> as readable as using percentage. What is the advantage of the
> mapping again? Sorry if I'm being thick.

Well, storing them as fixed-point factor with 32bit before and 32bit
after the radix point rather than as percentage is mostly just a
question of accuracy and of being generic or not...

I'd always keep our basic structures as generic as possible, and as
close to the way CPUs work. Hence: store things as fixed-point
32bit/32bit internally, but make it configurable in percent in the
user-facing UI.

Sure, actually using factors > 1.0 (or > 100%) doesn't make much sense,
but I'd still allow them to be encoded, simply to have the basic types
as generic as possible...

Lennart

-- 
Lennart Poettering, Red Hat

