Question about implementation of Unix FD passing

Alban Crequy alban.crequy at collabora.co.uk
Mon Feb 11 02:02:39 PST 2013


On Sun, 10 Feb 2013 09:41:46 -0800,
Thiago Macieira <thiago at kde.org> wrote:

> On Sunday, 10 February 2013 14.34.38, Georg Reinke wrote:
> > However, because of the special semantics of out-of-band data on
> > Unix sockets, one has to read all the data in one operation, as
> > remaining data is discarded. That means that the buffer for the
> > out-of-band data must be big enough to hold the greatest number of
> > file descriptors that can possibly be transferred, which is 2^32
> > (according to the spec), in order to guarantee that no data is
> > discarded.
> > 
> > While the actual maximum is probably much lower because of some
> > kernel limit, this behaviour still seems odd to me. So my question
> > is: is this intended, or have I misunderstood something or made a
> > wrong assumption somewhere?
> 
> It's not possible to query the system to find out how many file
> descriptors were passed. Since the number is most often very small,
> the reference implementation passes a buffer of 1024 entries.
> 

It should be possible to call recvmsg(2) with MSG_PEEK set in the flags
argument and then check whether MSG_CTRUNC is set in msg_flags on return.
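
Something along these lines (an untested sketch, not code from any existing
implementation; "sock" and "peek_would_truncate" are hypothetical names, and
sock is assumed to be the connected AF_UNIX socket):

/* Untested sketch: peek at the head of the queue with a deliberately
 * small control buffer and report whether the kernel had to truncate
 * the ancillary data (MSG_CTRUNC), i.e. whether more fds are waiting
 * than we made room for. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int peek_would_truncate(int sock)
{
    char data;                            /* payload byte, only peeked */
    char ctrl[CMSG_SPACE(sizeof(int))];   /* room for a single fd */
    struct iovec iov = { .iov_base = &data, .iov_len = sizeof(data) };
    struct msghdr msg;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(sock, &msg, MSG_PEEK) < 0)
        return -1;                        /* caller checks errno */

    return (msg.msg_flags & MSG_CTRUNC) ? 1 : 0;
}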

When recvmsg() is called without MSG_PEEK, fds are passed up to the limit
of the control buffer, and any remaining fds are closed by the kernel.

When recvmsg() is called with MSG_PEEK, fds are duplicated up to the limit
of the control buffer and can be fetched again by the next recvmsg().
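
Once the peek no longer reports truncation, a non-peeking recvmsg() can
consume the message and collect the SCM_RIGHTS payloads. Again only a
sketch: "recv_fds" is a hypothetical helper and "max_fds" a bound chosen
by the caller (e.g. grown until the peek above stops returning 1):

/* Untested sketch: consume the message for real, sizing the control
 * buffer for up to max_fds descriptors. Returns the number of fds
 * copied into fds[], or -1 on error. If MSG_CTRUNC is set on this
 * call, the kernel has already closed the fds that did not fit. */
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int recv_fds(int sock, void *buf, size_t buflen, int *fds, int max_fds)
{
    size_t ctrl_len = CMSG_SPACE(max_fds * sizeof(int));
    char *ctrl = malloc(ctrl_len);
    struct iovec iov = { .iov_base = buf, .iov_len = buflen };
    struct msghdr msg;
    struct cmsghdr *cmsg;
    int nfds = 0;

    if (ctrl == NULL)
        return -1;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = ctrl_len;

    if (recvmsg(sock, &msg, 0) < 0) {
        free(ctrl);
        return -1;
    }

    /* Collect every SCM_RIGHTS payload from the control messages. */
    for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
        if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS) {
            int n = (cmsg->cmsg_len - CMSG_LEN(0)) / sizeof(int);
            memcpy(fds + nfds, CMSG_DATA(cmsg), n * sizeof(int));
            nfds += n;
        }
    }

    free(ctrl);
    return nfds;
}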

See comment on
http://lxr.free-electrons.com/source/net/unix/af_unix.c#L1828

Of course, calling recvmsg() several times per message is not very
efficient...

Alban
