[systemd-devel] [RFC] [PATCH 0/3] journal: Add deferred log processing to reduce synchronous IO overhead

Lennart Poettering lennart at poettering.net
Fri Dec 13 19:47:14 PST 2013


On Fri, 13.12.13 22:16, Karol Lewandowski (lmctlx at gmail.com) wrote:

> 
> On Fri, Dec 13, 2013 at 03:45:36PM +0100, Lennart Poettering wrote:
> > On Fri, 13.12.13 12:46, Karol Lewandowski (k.lewandowsk at samsung.com) wrote:
> 
> > Well, are you suggesting that the AF_UNIX/SOCK_DGRAM code actually hands
> > off the timeslice to the other side as soon as it queued something in?
> 
> If by the other side you mean the receiving one, then no - QNX seems
> to do that, but it isn't the case here. What I'm trying to say is
> that the kernel puts the process doing send(2) to sleep when (a) the
> queue fills up and (b) the fd is a blocking one (otherwise we just
> get EAGAIN).  That's expected, I presume.
> 
> One of the problems I see, though, is that no matter how deep I make
> the queue (`max_dgram_qlen') I still see the process sleeping on
> send() way earlier than the configured queue depth would suggest.

It would be interesting to find out why this happens. I mean, there are
three parameters I can think of that matter here: the qlen, SO_SNDBUF
on the sender, and SO_RCVBUF on the receiver (though the latter two might
actually change the same value on AF_UNIX? or maybe one of the latter
two is a NOP on AF_UNIX?). If any of them reaches its limit, the
sender will necessarily have to block.
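A quick way to watch this in practice might be something like the
sketch below (the socketpair() setup and the 64-byte payload are my own
illustrative choices, nothing from journald): it queues datagrams on a
non-blocking AF_UNIX/SOCK_DGRAM pair until it would block, and prints
how many fit. Whichever of qlen/SO_SNDBUF/SO_RCVBUF is hit first caps
the count.

```python
import socket

# Count how many datagrams can sit queued on an AF_UNIX/SOCK_DGRAM
# pair before send() would block.  On Linux the observed limit should
# be governed by net.unix.max_dgram_qlen and the socket buffer sizes,
# whichever bites first.
rx, tx = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
tx.setblocking(False)  # so we get EAGAIN instead of sleeping

queued = 0
try:
    while True:
        tx.send(b"x" * 64)  # small payload, like a short log record
        queued += 1
except BlockingIOError:  # EAGAIN/EWOULDBLOCK: the queue is "full"
    pass

print("datagrams queued before EAGAIN:", queued)
rx.close()
tx.close()
```

Repeating the run after shrinking SO_SNDBUF with setsockopt(), or after
bumping max_dgram_qlen, should show which of the limits is actually the
one being hit first.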

(SO_SNDBUF and SO_RCVBUF can also be controlled via
/proc/sys/net/core/rmem* and ../wmem*... For testing purposes it might
be easier to play around with these and set them to ludicrously high
values...)
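For reference, the knobs in question can be inspected (and, for a
test run, raised) roughly like this; paths as on current Linux, and the
concrete values are arbitrary test choices of mine:

```shell
# Datagram queue depth for AF_UNIX sockets
cat /proc/sys/net/unix/max_dgram_qlen

# Default/max socket buffer sizes (the rmem*/wmem* files mentioned above)
cat /proc/sys/net/core/rmem_default /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_default /proc/sys/net/core/wmem_max

# For testing only (needs root): set them to ludicrously high values
sysctl -w net.unix.max_dgram_qlen=4096
sysctl -w net.core.rmem_max=33554432
sysctl -w net.core.wmem_max=33554432
```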

I mean, this appears to be the crux here: this blocks too early, but it
really shouldn't block at all. We should find the reason for this and
fix it.

Lennart

-- 
Lennart Poettering, Red Hat

