<html>
<head>
<base href="https://bugs.freedesktop.org/" />
</head>
<body>
<p>
<div>
<b><a class="bz_bug_link
bz_status_REOPENED "
title="REOPENED --- - RFE: journald to send logs via network"
href="https://bugs.freedesktop.org/show_bug.cgi?id=77013#c7">Comment # 7</a>
on <a class="bz_bug_link
bz_status_REOPENED "
title="REOPENED --- - RFE: journald to send logs via network"
href="https://bugs.freedesktop.org/show_bug.cgi?id=77013">bug 77013</a>
from <span class="vcard"><a class="email" href="mailto:zbyszek@in.waw.pl" title="Zbigniew Jedrzejewski-Szmek <zbyszek@in.waw.pl>"> <span class="fn">Zbigniew Jedrzejewski-Szmek</span></a>
</span></b>
<pre>(In reply to <a href="show_bug.cgi?id=77013#c3">comment #3</a>)
<span class="quote">> Zbigniew, you're right. I misread the systemd-journal-remote man page and
> thought it did support push.</span >
The man page could probably use some polishing :)
systemd-journal-remote supports pulling, but that support is rather primitive
and certainly not enough for sustained transfer of logs.
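For reference, pull mode today looks roughly like this (a sketch only; the
hostname and output path are placeholders, and the source host is assumed to
run systemd-journal-gatewayd, whose default port is 19531):
  # on the collecting host: fetch entries from a remote gatewayd instance
  systemd-journal-remote --url=http://source.example.com:19531/ \
      --output=/var/log/journal/remote/source.journal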
<span class="quote">> At my company we use journal2gelf [1] to push messages. Of course, that
> pushes in GELF format, which is for Logstash aggregation, not journal
> aggregation. I'd be concerned about the performance implications of push
> aggregation to the journal right now.</span >
Journald is fairly slow because it does a lot of /proc trawling for each
message. When receiving messages over the network, all possible data is already
there, so it should be reasonably fast. I expect HTTP and especially TLS to be
the bottlenecks, not the journal writing code. Running benchmarks is on my TODO
list.
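A very rough sketch of such a benchmark (nothing rigorous; the identifier and
message count are arbitrary, and journald ingests the stream asynchronously,
so the count afterwards is what actually confirms delivery):
  # push 100000 trivial messages through journald's stream transport
  time seq 100000 | systemd-cat -t journal-bench
  # once things settle, check how many made it into the journal
  journalctl -t journal-bench | wc -l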
(In reply to <a href="show_bug.cgi?id=77013#c4">comment #4</a>)
<span class="quote">> The use case for JSON formatting is to send logs to alternative aggregators
> (such as Logstash as mentioned in <a href="show_bug.cgi?id=77013#c3">comment #3</a>). The ability to receive logs
> in separated format rather than log lines makes it much easier for these
> systems to parse entries and stick them in whatever database is being used.</span >
Adding JSON support to systemd-journal-upload (the sender part, which is
currently unmerged) would probably be quite simple... But for this to be
useful, it has to support whatever protocol the receiver uses. I had a look at
the logstash docs, and it seems that the json_lines codec should work. I'm not
sure about the details, but it looks like something that could be added without
too much trouble. Maybe some interested party will write a patch :)
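To illustrate the framing: journalctl's existing JSON export already emits one
object per line, which is exactly what json_lines expects, so something along
these lines can approximate it today (a sketch only; the host and port are
placeholders for wherever logstash has a tcp input with codec => json_lines):
  # stream the local journal, one JSON object per line, to a logstash tcp input
  journalctl -o json -f | nc logstash.example.com 5000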
<span class="quote">> The use case for extra tags I would say is similar to Puppet/Foreman
> hostgroups or classes. Systems know quite a lot about themselves which the
> log aggregator is going to have a hard time figuring out.</span >
OK. This sounds useful (and easy to implement).
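To make the idea concrete: journal entries are flat sets of KEY=VALUE fields,
so an uploader-added tag would simply be one more field on each forwarded
entry. HOSTGROUP below is a hypothetical field name, used only as an
illustration:
  MESSAGE=something interesting happened
  _HOSTNAME=web03
  HOSTGROUP=web-frontends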
(In reply to <a href="show_bug.cgi?id=77013#c6">comment #6</a>)
<span class="quote">> Final question: is there failover/load balancing ability on the cards for
> the remote sending?</span >
So far no.
<span class="quote">> i.e. setting up 2 log destinations, possibly with round robin or plain
> failover when 1 destination is out of action?
>
> Would journald be capable of remembering the last successfully sent entry in
> event of all destinations being offline? Rather than buffering output to
> disk in event of network failure, just point to the last sent log entry and
> restart from there when the destinations become available.</span >
journald is not directly involved. The uploader (systemd-journal-upload) is a
program totally separate from journald, and is simply another journal client.
It keeps the cursor of the last successfully sent entry in a file on disk, and
when started, by default, uploads all entries after that cursor and then new
ones as they come in.
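The cursor mechanism itself is easy to poke at with plain journalctl (a
sketch; the cursor value is whatever --show-cursor printed):
  # print the newest entry together with its cursor
  journalctl -n 1 --show-cursor
  # later, continue with everything after that cursor
  journalctl --after-cursor="PASTE_CURSOR_HERE"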
<span class="quote">> Too much for one bugzilla? Split out into 2 or more?</span >
No, it's fine.</pre>
</div>
</p>
</body>
</html>