[Bug 737316] Add support for sending file-descriptors over Unix domain sockets

GStreamer (bugzilla.gnome.org) bugzilla at gnome.org
Fri Oct 24 05:54:52 PDT 2014


https://bugzilla.gnome.org/show_bug.cgi?id=737316
  GStreamer | gstreamer (core) | git

--- Comment #18 from Will Manley <gnome at williammanley.net> 2014-10-24 12:54:45 UTC ---
(In reply to comment #17)
> (In reply to comment #16)
> > (In reply to comment #5)
> > > fdsink/src are specialized for writing data into file descriptors, not sockets.
> > > So as Sebastien says, this should be a new element. I'm not particularly a fan
> > > of a hybrid between tcpsrc/sink and shmsrc/sink.
> > 
> > Note, there is no hybrid proposed here.  As before, fdsink/fdsrc/multisocketsink
> > are used to talk to sockets; the only change is that they understand more of
> > the capabilities that sockets have, e.g. the fact that they support sending
> > ancillary data alongside the data payloads.
> > 
> > I think some of the confusion is caused by both sockets and memfds being
> > referred to by a file-descriptor from user-space, despite the fact that they
> > are very different beasts in kernel-space.
> 
> fdsrc/fdsink are not specialized for sockets; normal file descriptors (files,
> pipes, etc.) are expected to work.

Agreed, and these different types of file descriptor worked fine with my
patches too.  The patch is a pure generalisation of fdsink and fdsrc.  All
previous behaviour is preserved unless I've messed up in some way.
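
To make the "ancillary data" point concrete, here's a minimal sketch (in Vala,
not code from my patches) of the capability being discussed: passing a file
descriptor over an AF_UNIX socket as SCM_RIGHTS ancillary data.  GLib already
wraps the sendmsg()/recvmsg() machinery in GUnixConnection, and the elements
end up doing the moral equivalent of this:

    // Assumes "conn" is a GLib.UnixConnection over an AF_UNIX socket,
    // e.g. one end of a socketpair.  Build with --pkg gio-unix-2.0.
    void pass_fd (GLib.UnixConnection conn, int fd) throws Error {
        // The fd travels as ancillary data; GLib sends a single byte of
        // ordinary payload alongside it so the receiver has something to read.
        conn.send_fd (fd);
    }

    int take_fd (GLib.UnixConnection conn) throws Error {
        // Reads the payload byte and returns the fd delivered with it.
        return conn.receive_fd ();
    }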

> > > I would personally prefer a
> > > design closer to shmsrc/sink, but with a memfd allocator (hence memfd Memory).
> > > This same element, in reverse, should be able to zero-copy memory that is
> > > backed by FDs and can be mmapped.
> > 
> > Indeed.  The intention was to cover the use cases that `shmsrc`/`shmsink`
> > cover, and expand on them.  `shmsink` and `shmsrc` work by using a listening
> > unix socket to create connections between sender and receiver and then
> > communicating over that.  `shmsink` behaves like "fdpay ! unixserversink" and
> > `shmsrc` like "unixclientsrc ! fddepay".  OTOH the "PulseVideo" use-case uses
> > `socketpair` and DBus for creating connections, so the pipelines look like
> > "fdpay ! multisocketsink" and "fdsrc ! fddepay".  This is beyond the
> > capabilities of `shmsrc` and `shmsink` because they implement their own
> > socket code rather than reusing the code from other elements, whereas it's
> > natural in the fdpay/fddepay model as you can re-use the connection creation
> > mechanisms already implemented in other elements.
> 
> For reference, the shmsrc/shmsink design was tailored to very small pieces
> of data and framing (RTP). So shmsrc/shmsink implement a protocol that allows
> multiplexing a larger SHM area, reducing the overhead. The problem I've hit so
> far is that its protocol isn't easily extensible, so adding the ability to pass
> sub-regions of the main SHM area, along with passing any other FDs that come
> by, would break the protocol's backward compatibility.

Right.  Compatibility is a concern for me also.  I'm creating a system that
allows running test scripts that my clients provide inside a Docker container.
Our clients depend on being able to run the same test script and being sure
they get exactly the same result.  There is no scope for breaking or changing
the behaviour of these scripts.

I don't want to be constrained by backwards compatibility when writing new
versions of stb-tester though, which is why I run user scripts in Docker
containers.

My solution to protocol versioning is to separate connection establishment
from the general communication once video is being passed.  This allows feature
negotiation to happen out-of-band before the connection is established; once
this negotiation is complete, fdpay/fddepay can be configured to speak a
particular version of the protocol, or an entirely different payloader can be
used.

More concretely, I've defined a DBus interface that looks like this (in Vala):

    [DBus (name = "com.stb-tester.VideoSource1")]
    interface VideoSource : GLib.Object {

        public abstract string caps { owned get; }
        public abstract GLib.UnixInputStream attach () throws Error;
    }

The caps property has DBus type "s" and the attach() method has return type "h"
(a Unix FD).

The idea is that if I come up with a new protocol (corresponding to some
properties set on fdpay, or to a different payloader) I can also come up with a
new interface name, e.g. com.stb-tester.VideoSource2.  I then need to ensure the
server side can offer both the VideoSource1 and VideoSource2 interfaces and the
job's a good'un.
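
For illustration only, a client consuming this interface would look roughly
like the sketch below.  It is a sketch, not the code from the gist further
down: the bus name, object path and downstream pipeline are placeholders, and
fddepay refers to the element proposed in these patches.

    // Client-side sketch: fetch the caps and the socket over D-Bus, then feed
    // the received fd into an ordinary fdsrc-based pipeline.
    // Build with e.g.: valac --pkg gio-unix-2.0 --pkg gstreamer-1.0 client.vala

    [DBus (name = "com.stb-tester.VideoSource1")]
    interface VideoSource : GLib.Object {
        public abstract string caps { owned get; }
        public abstract GLib.UnixInputStream attach () throws Error;
    }

    int main (string[] args) {
        Gst.init (ref args);
        try {
            // Bus name and object path are placeholders for this sketch.
            VideoSource source = Bus.get_proxy_sync (
                BusType.SESSION, "com.stb-tester.VideoSource1",
                "/com/stb_tester/VideoSource");

            // Out-of-band negotiation: read the caps, then receive the socket.
            var caps = Gst.Caps.from_string (source.caps);
            var stream = source.attach ();

            var bin = (Gst.Bin) Gst.parse_launch (
                "fdsrc name=src ! fddepay ! capsfilter name=filter ! autovideosink");
            bin.get_by_name ("src").set ("fd", stream.fd);
            // Where exactly the negotiated caps get applied depends on the
            // payloader design; a capsfilter stands in for that here.
            bin.get_by_name ("filter").set ("caps", caps);

            bin.set_state (Gst.State.PLAYING);
            new MainLoop ().run ();
        } catch (Error e) {
            stderr.printf ("Failed to attach: %s\n", e.message);
            return 1;
        }
        return 0;
    }

The attach() call is the only point where anything crosses DBus; everything
after that is a plain fdsrc-based pipeline reading from the received socket.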

I've put the PulseVideo source that I used for the presentation on github:

https://gist.github.com/wmanley/76974b124588c669c3b1

I'm also considering implementing a "zerocopydbusserversink", which would be a
bin capable of exposing a stream on DBus using GDBus.  The bin would be
responsible for the negotiation and configuration, and the elements contained
within it would be responsible for actually sending the data.  This is in
contrast to the approach of multisocketsink and tcpserversink, for instance,
where tcpserversink derives from multisocketsink and is thus responsible for
both connection establishment and sending the data.
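
Until such a bin exists, the same thing can be wired up by hand.  Roughly, and
again only as a sketch: the bus name, object path and caps below are
placeholders, fdpay is the element proposed in these patches, SOCK_STREAM is
used just to keep it short, and error handling is minimal.

    // Server-side sketch: a "videotestsrc ! fdpay ! multisocketsink" pipeline
    // plus a GDBus object whose attach() hands one end of a socketpair to
    // multisocketsink and returns the other end to the caller as an fd.
    // Build with e.g.: valac --pkg gio-unix-2.0 --pkg gstreamer-1.0 --pkg posix server.vala

    [DBus (name = "com.stb-tester.VideoSource1")]
    public class VideoSourceServer : GLib.Object {
        private Gst.Element sink;

        public VideoSourceServer (Gst.Element multisocketsink) {
            sink = multisocketsink;
        }

        // Placeholder caps; in practice these come from the running pipeline.
        public string caps {
            owned get { return "video/x-raw,format=RGB,width=1280,height=720,framerate=25/1"; }
        }

        public GLib.UnixInputStream attach () throws Error {
            var fds = new int[2];
            if (Posix.socketpair (Posix.AF_UNIX, Posix.SOCK_STREAM, 0, fds) != 0)
                throw new IOError.FAILED ("socketpair() failed");

            // One end of the pair goes to multisocketsink via its "add" action
            // signal; the other end goes back to the client over D-Bus as "h".
            var socket = new Socket.from_fd (fds[0]);
            GLib.Signal.emit_by_name (sink, "add", socket);
            return new GLib.UnixInputStream (fds[1], true);
        }
    }

    int main (string[] args) {
        Gst.init (ref args);
        try {
            var bin = (Gst.Bin) Gst.parse_launch (
                "videotestsrc is-live=true ! fdpay ! multisocketsink name=sink");
            var server = new VideoSourceServer (bin.get_by_name ("sink"));
            bin.set_state (Gst.State.PLAYING);

            Bus.own_name (BusType.SESSION, "com.stb-tester.VideoSource1",
                BusNameOwnerFlags.NONE, (conn) => {
                    try {
                        conn.register_object ("/com/stb_tester/VideoSource", server);
                    } catch (IOError e) {
                        stderr.printf ("register_object failed: %s\n", e.message);
                    }
                });

            new MainLoop ().run ();
        } catch (Error e) {
            stderr.printf ("Failed to start: %s\n", e.message);
            return 1;
        }
        return 0;
    }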

In summary, I believe the keys to compatibility are:

* separation between connection establishment and streaming, and
* out-of-band feature negotiation.

> Apart from that, I'm not convinced about overloading fdsrc and fdsink, but I'd
> like to thank you for taking the time to experiment with this and for sharing
> your experience at the GStreamer Conference. Until your talk, it wasn't that
> clear to me what this design was about. My interest in this is mainly DMABUF fd
> passing, though looking at what is going on on the kernel side, I'm starting to
> foresee something that would allow a combination of memfd and KDBus to fulfil
> this task.

Thanks for the kind words.

Indeed, using KDBus certainly has an appeal.  Video frames could then be sent
individually as DBus signals, rather than, as in my design, only the socket
(i.e. the conduit for the video) being sent at stream-setup time.  The reason I
didn't take this approach is that I need to be making use of it now-ish rather
than waiting for KDBus.
