[systemd-devel] [systemd-commits] 2 commits - src/journal src/shared
Lennart Poettering
lennart at poettering.net
Wed Jan 28 18:53:06 PST 2015
On Thu, 29.01.15 03:22, Zbigniew Jędrzejewski-Szmek (zbyszek at in.waw.pl) wrote:
> On Wed, Jan 28, 2015 at 05:48:01PM -0800, Lennart Poettering wrote:
> > Revert "journal: do not check for number of files"
> >
> > This reverts commit b914ea8d379b446c4c9fac4ba181771676ef38cd.
> >
> > We really need to put a limit on all our resources, everywhere, and in
> > particular if we operate on external data.
> >
> > Hence, let's reintroduce the limit, but bump it substantially, so that
> > it is guaranteed to be higher than any realistic RLIMIT_NOFILE setting.
> Hm, each journal file requires a descriptor. How could we open more
> than RLIMIT_NOFILE files?
Hmm, I see what you mean. You are right: the fact that fds are
limited means that JournalFile objects in an sd_journal are implicitly
limited too.
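For reference, the implicit bound Zbigniew points out can be queried
at runtime; a minimal standalone sketch (plain getrlimit(), nothing
journal-specific):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
            struct rlimit rl;

            /* Each open journal file pins one fd, so RLIMIT_NOFILE is
             * an implicit upper bound on simultaneously open
             * JournalFile objects. */
            if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
                    perror("getrlimit");
                    return 1;
            }

            printf("fd limit: soft=%llu hard=%llu\n",
                   (unsigned long long) rl.rlim_cur,
                   (unsigned long long) rl.rlim_max);
            return 0;
    }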
I think we should still leave this in though, in case people set
RLIMIT_NOFILE to 65K. We should be careful not to let the number of
journal files to read grow without bounds, since most operations then
scale as O(n) in the number of files...
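A minimal sketch of the kind of hard cap meant here; the constant
value and the helper name are made up for illustration, not the
actual systemd source:

    #include <errno.h>

    #define JOURNAL_FILES_MAX 7168  /* assumption: well above any realistic RLIMIT_NOFILE */

    typedef struct Journal {
            unsigned n_files;
    } Journal;

    /* Refuse to track yet another journal file once the hard cap is
     * reached, so that a directory stuffed with files cannot drive
     * memory use and O(n) scans without bound. */
    static int journal_add_file(Journal *j, const char *path) {
            if (j->n_files >= JOURNAL_FILES_MAX)
                    return -EMFILE;

            /* ... open, mmap and verify 'path' here ... */

            j->n_files++;
            return 0;
    }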
Also, one thing I still want to do is track more journal files than
we have open, and encode enough information in the journal file name
that we can skip over files without having to open them for many
operations. For example: if we look at the most recent log entries
and find journal files whose names already indicate that they contain
only really old data, it's not worth opening them at all, and we
don't pay the O(n) price for them. In that case we'd have more
JournalFile objects than actual open fds, but we should still put a
limit on the number of JournalFile objects we allocate.
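To make that concrete, here is a purely hypothetical sketch. Assume
archived files carried the timestamps (usec, CLOCK_REALTIME) of their
oldest and newest entries in the name, say
"system@<head>-<tail>.journal"; a reader could then reject files from
the name alone. The naming scheme is invented for this example:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical naming scheme, invented for this example: the
     * file name encodes the realtime timestamps of its oldest and
     * newest entries. */
    static bool parse_time_range(const char *name, uint64_t *head, uint64_t *tail) {
            return sscanf(name, "system@%" SCNu64 "-%" SCNu64 ".journal",
                          head, tail) == 2;
    }

    /* Decide from the name alone whether the file can contain
     * entries newer than 'since'; files that cannot are skipped
     * without ever being opened, costing neither an fd nor O(n)
     * work. */
    static bool file_may_match(const char *name, uint64_t since) {
            uint64_t head, tail;

            if (!parse_time_range(name, &head, &tail))
                    return true;  /* unknown name, play it safe and open it */

            return tail >= since;
    }

With something like this, a query for "the most recent entries" would
only ever open the files at the tail of the time range.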
Anyway, that code doesn't exist yet of course, so it's mostly a
made-up reason...
Lennart
--
Lennart Poettering, Red Hat