[systemd-devel] Linux Journal API/client lib

Kay Sievers kay.sievers at vrfy.org
Fri Dec 2 09:59:00 PST 2011


On Fri, Dec 2, 2011 at 17:02, Rainer Gerhards <rgerhards at gmail.com> wrote:
> On Fri, Dec 2, 2011 at 4:39 PM, Kay Sievers <kay.sievers at vrfy.org> wrote:

>> /dev/log will be read by journald. The syslog.socket filedescriptor
>> that the syslog daemon receives, will be provided by journald and have
>> all the messages which are received by /dev/log and the human readable
>> part that is received over the native journal interface.
>
> This excludes the metadata, right? I'd say you should reconsider this,
> probably as a config option (metadata is especially useful if
> developers use the new capabilities and do not provide any
> human-readable form, or provide only partial info in the
> human-readable form).
>
> I guess if this is provided, there would hardly be a need for syslogd
> to pull the information from the journal.

Well, if syslogd, or any other consumer, is interested in the
metadata, it should not rely on /dev/log. /dev/log will probably stay
what it is: mostly plain old syslog, with a header, a timestamp, and
the human-readable string. Anything that wants the metadata should use
the proper API and get the records from there. The '/dev/log proxy' is
just there to ensure full syslogd compatibility, not to provide any
new data that does not really fit into the plaintext file format.
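To make the contrast concrete, here is a minimal Python sketch of the two shapes of data being discussed. The real client library is C and was still being designed at the time; the field names (MESSAGE, PRIORITY, _PID) are illustrative, not a committed interface:

```python
import time

def format_devlog_line(priority, tag, message, when=None):
    """Plain old /dev/log syslog line: a <pri> header, a timestamp, a
    tag, and the human-readable text. No structured metadata survives."""
    when = time.time() if when is None else when
    stamp = time.strftime("%b %e %H:%M:%S", time.localtime(when))
    return "<%d>%s %s: %s" % (priority, stamp, tag, message)

def journal_record(priority, message, **metadata):
    """A journal-style record: the same human-readable MESSAGE plus
    arbitrary key=value metadata fields alongside it."""
    record = {"PRIORITY": priority, "MESSAGE": message}
    record.update(metadata)
    return record
```

A consumer reading /dev/log only ever sees the first form; the second form, with its extra fields, is only reachable through the journal API.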

>> In the end it's again a bit like git, and what the journal can do
>> across the network is like a 'git pull' from other hosts. So there is
>> certainly the possibility of having a 'syslog journal gateway' that
>> provides the syslog 'commits' which are to be merged.
>
> IMHO if the journal intends to be *the* log store on Linux, it needs
> to have such an import facility. Even on a low-end system, you will
> probably want to have logs from your SOHO router included. Then it
> makes much more sense to query that database. The problem is that the
> trustedness of trusted fields must somehow be ensured, and this is
> where it gets complicated. Maybe it's not something to target for the
> initial release, but definitely something that you should think about.
>
> I admit that I am quite interested in that part of the journal idea --
> it will probably save me from writing a full-blown, fast-query-time
> "log store". I am not sure if that's on your agenda at all (the
> current circular buffer is questionable for some of these use cases).
> But such a central store, with a standardized API, would probably make
> sense to support in log analysis projects (I am specifically thinking
> about our log analyzer, a web interface, but there are probably many
> more). In that sense, I'd (ab?)use the journal as a database, even for
> cases where I would not be interested at all in local events... (but
> of course there are many factors to consider, like volume of data,
> etc. -- just wanted to convey the idea).

All that should be possible, though many of the details would still
need to be figured out. The SOHO router's logging should probably just
end up in its own journal files: it would get a machine-id assigned
and just log on the same machine, but to different files. Similar to
how we split the login UIDs into their own files. We will figure that
out when we get there.
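As a sketch of what such an on-disk split might look like, here is a hypothetical naming scheme in Python. The directory layout and file names are purely illustrative, not a committed ABI:

```python
def journal_file_path(machine_id, uid=None, root="/var/log/journal"):
    """Hypothetical layout: one directory per machine-id, with the
    system journal and per-UID user journals split into separate files.
    A remote host's stream would simply land under its own machine-id."""
    name = "system.journal" if uid is None else "user-%d.journal" % uid
    return "%s/%s/%s" % (root, machine_id, name)
```

Under a scheme like this, a router's forwarded logs and the local system's logs never share a file; they are only brought together at read time.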

All these separated journal streams are identified by machine-id and
can be transparently merged in the client library when the data is
retrieved.
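The transparent merge described above can be sketched in a few lines of Python: each per-machine journal stream is already time-ordered on disk, so the client library only has to interleave them by timestamp at read time (this is a conceptual sketch, not the real C implementation):

```python
import heapq

def merge_streams(streams):
    """Merge per-machine journal streams into one time-ordered view.
    Each stream is an iterable of (timestamp, machine_id, message)
    tuples that is already sorted by timestamp, as a journal file
    would be. heapq.merge interleaves them lazily without re-sorting."""
    return list(heapq.merge(*streams, key=lambda entry: entry[0]))

# Example: a local journal and a router's journal, merged on retrieval.
local = [(1.0, "aaaa", "boot"), (3.0, "aaaa", "login")]
router = [(2.0, "bbbb", "dhcp lease")]
merged = merge_streams([local, router])
```

The machine-id travels with every entry, so a consumer of the merged view can still tell which host each record came from.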

Kay

