[systemd-devel] Linux Journal API/client lib

Rainer Gerhards rgerhards at gmail.com
Fri Dec 2 08:02:12 PST 2011


On Fri, Dec 2, 2011 at 4:39 PM, Kay Sievers <kay.sievers at vrfy.org> wrote:
> On Fri, Dec 2, 2011 at 16:14, Rainer Gerhards <rgerhards at gmail.com> wrote:
>> On Fri, Dec 2, 2011 at 2:49 PM, Kay Sievers <kay.sievers at vrfy.org> wrote:
>>> On Fri, Dec 2, 2011 at 13:59, Rainer Gerhards <rgerhards at gmail.com> wrote:
>>>> as you probably know, I am not a big fan of the journald proposal, but
>>>> that's not the point of my question. I am thinking about how to
>>>> integrate journal data into a syslog logging solution.
>>>
>>> You know that the syslog daemon will still see exactly the same log
>>> messages from all clients as it did before, right? The /dev/log file
>>> descriptor that systemd passed to the syslog daemon at startup will
>>> still carry all the same things regardless of journald's actions.
>>
>> Does that mean /dev/log will also receive messages submitted via the
>> *new* API you define? If so, is the format documented somewhere (or
>> intended to be)?
>
> /dev/log will be read by journald. The syslog.socket file descriptor
> that the syslog daemon receives will be provided by journald and carry
> all the messages received via /dev/log as well as the human-readable
> part of what is received over the native journal interface.

This excludes the metadata, right? I'd say you should reconsider this,
probably as a config option. The metadata is especially useful when
developers rely on the new capabilities and provide no human-readable
form at all, or only partial information in it.

I guess if this is provided, there would hardly be a need for syslogd
to pull the information from the journal.
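
To make the concern concrete, here is a minimal sketch of what I
understand a native submission to look like, based on the announced
sd_journal_send() client call (the DISK_* field names are made up for
illustration). If only the human-readable MESSAGE part is forwarded to
the syslog socket, the structured fields below never reach syslogd:

/* Sketch of a native journal submission; my assumption of the API.
 * Build with -lsystemd-journal (or -lsystemd on later versions). */
#include <systemd/sd-journal.h>

int main(void)
{
        /* MESSAGE is the human-readable part; the other fields are
         * structured metadata that a /dev/log-style forward drops. */
        sd_journal_send("MESSAGE=Disk /dev/sdb is failing",
                        "PRIORITY=3",
                        "DISK_DEVICE=/dev/sdb",  /* hypothetical field */
                        "DISK_ERRNO=5",          /* hypothetical field */
                        NULL);
        return 0;
}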

>> But there will be one journal that a root admin can pull for all log
>> entries? Or does this mean that, in order to obtain all entries, the
>> system journal file plus all journal files for all users must be read?
>> If so, is it intended that the API/lib handles that?
>
> The library already merges all files the caller has access to (Unix
> file permissions). It's transparent to the reader: if it can read
> them, they will be included in the stream.

That's nice and useful.
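
For reference, this is roughly how I would expect a consumer to walk
that merged stream with the client library; the calls and the
SD_JOURNAL_LOCAL_ONLY flag are my assumptions from what has been
announced so far:

/* Sketch: iterate the merged journal stream and print MESSAGE. */
#include <stdio.h>
#include <systemd/sd-journal.h>

int main(void)
{
        sd_journal *j;
        const void *data;
        size_t len;

        /* Opens the system journal plus any per-user files the
         * calling UID may read; the merge is transparent. */
        if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                return 1;

        while (sd_journal_next(j) > 0)
                /* Each entry is a set of FIELD=value pairs. */
                if (sd_journal_get_data(j, "MESSAGE", &data, &len) >= 0)
                        printf("%.*s\n", (int) len, (const char *) data);

        sd_journal_close(j);
        return 0;
}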

>>> Yes, sure, it can just forward things to the journal. Along with what
>>> they log, they will just have some metadata of the forwarder added.
>>
>> So now let's assume I have pulled some log messages from system A and
>> transported them via syslog to system B. Now I want to consolidate the
>> log on system B. So what I need is an exact duplicate of what is
>> present on A also present on B (especially the metadata). Does that
>> mean I can write into B's journal exactly what was on A, including
>> the *trusted fields*? (Or let's for a moment assume that A does not
>> run journald, but I know A's hostname via RFC 5425 X.509-based auth,
>> so this info is known to be correct - many scenarios along these
>> lines.)
>
> We haven't thought about any of the details of how to handle a
> possible trusted log merge. It's surely possible to do something like
> that, but we have no specific ideas as of now.
>
> In the end it's again a bit like git, and the model the journal can do
> across the network is like 'git pull' from other hosts. So there is
> certainly the possibility of having a 'syslog journal gateway' that
> provides the syslog 'commits' which are to be merged.

IMHO, if the journal intends to be *the* log store on Linux, it needs
such an import facility. Even on a low-end system, you will probably
want logs from your SOHO router included, and then it makes much more
sense to query that database. The problem is that the trustworthiness
of the trusted fields must somehow be ensured, and this is where it
gets complicated. Maybe it's not something to target for the initial
release, but it is definitely something you should think about.
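
Just to illustrate what a naive import path could look like today: a
relay on B could resubmit A's messages natively and record the
verified origin in ordinary fields (the RELAY_* names are purely
hypothetical). The catch is exactly the point above - the
underscore-prefixed trusted fields would describe the relay process,
not host A, so a real merge facility needs more than this:

/* Hypothetical relay sketch: resubmit a message received via
 * syslog/TLS from host A into B's local journal.  verified_host would
 * come from e.g. RFC 5425 X.509 authentication.  journald attaches
 * the _-prefixed trusted fields itself; they would describe this
 * relay, not the original sender. */
#include <systemd/sd-journal.h>

static void relay_message(const char *verified_host, const char *msg,
                          int pri)
{
        sd_journal_send("MESSAGE=%s", msg,
                        "PRIORITY=%d", pri,
                        "RELAY_ORIG_HOST=%s", verified_host, /* hypothetical */
                        "RELAY_TRANSPORT=syslog-tls",        /* hypothetical */
                        NULL);
}

int main(void)
{
        /* Example call as it might occur in the relay's receive loop. */
        relay_message("routerA.example.net", "link down on eth1", 4);
        return 0;
}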

I admit that I am quite interested in that part of the journal idea --
it would probably save me from writing a full-blown, fast-query-time
"log store" myself. I am not sure if that's on your agenda at all (the
current circular buffer is questionable for some of these use cases).
But such a central store, with a standardized API, would probably be
worth supporting in log analysis projects (I am specifically thinking
about our log analyzer, a web interface, but there are probably many
more). In that sense, I'd (ab?)use the journal as a database, even for
cases where I am not interested in local events at all... (but of
course there are many factors to consider, like volume of data, etc. -
just wanted to convey the idea).
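
As an example of the kind of "database" query I have in mind for an
analysis front-end - filtering by a structured field instead of
grepping text - here is a sketch; the add-match call is my assumption
from the announced API, and DISK_DEVICE is a hypothetical application
field:

/* Sketch: field-indexed query over the journal store. */
#include <stdio.h>
#include <string.h>
#include <systemd/sd-journal.h>

int main(void)
{
        sd_journal *j;
        const void *data;
        size_t len;
        const char *match = "DISK_DEVICE=/dev/sdb";

        if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                return 1;

        /* Restrict iteration to entries carrying this FIELD=value pair. */
        sd_journal_add_match(j, match, strlen(match));

        while (sd_journal_next(j) > 0)
                if (sd_journal_get_data(j, "MESSAGE", &data, &len) >= 0)
                        printf("%.*s\n", (int) len, (const char *) data);

        sd_journal_close(j);
        return 0;
}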

Rainer
