[gst-devel] muxers

Ronald Bultje rbultje at ronald.bitfreak.net
Tue Oct 15 12:37:02 CEST 2002


Hi all,

some thoughts about muxer elements in GStreamer; let me know what you
think of them.

What's a muxer:
for now, anything that takes input from >=1 sink pads, where those
pads don't necessarily have the same input rate.

Current situation:
muxers don't work. ;-). We've actually got one (avimux), and another one
is in development (mplex-based or so, mpegmux?). If we take a simple
pipeline, like:

[videosrc -> queue]
                    -> avimux -> filesink
[audiosrc -> queue]

[] = thread

it will actually run. According to Wim, the muxer will take one buffer
from each pad in turn, which means you'll end up with more video than
audio in the output (or the other way around), unless you're lucky
enough to have time_of(audio_buffer)==time_of(video_buffer), which
usually isn't true. This happens because the element is chainbased.
According to Wim (again), it would work if the element was loopbased
and calculated which sink pad to pull the next frame from, i.e., if it
took care of handling its own input.
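
To make the loopbased idea concrete, here's a minimal sketch in plain
C of what such an element's pad selection could look like. MuxPad, its
fields and select_pad() are made up for illustration; this is not the
real GStreamer API:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        int      eos;            /* no more data will arrive here */
        uint64_t next_timestamp; /* timestamp of the next buffer */
    } MuxPad;

    /* Return the index of the pad whose next buffer has the
     * earliest timestamp, or -1 if all pads are at EOS. */
    static int
    select_pad (const MuxPad *pads, size_t n_pads)
    {
        int best = -1;
        uint64_t best_ts = UINT64_MAX;

        for (size_t i = 0; i < n_pads; i++) {
            if (pads[i].eos)
                continue;
            if (pads[i].next_timestamp < best_ts) {
                best_ts = pads[i].next_timestamp;
                best = (int) i;
            }
        }
        return best;
    }

The loop function would call select_pad(), pull one buffer from that
pad, write it out and repeat. Note that this stands or falls with the
timestamps being correct, which brings us to the side-effects.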

Okay, the first situation (chainbased doesn't work) sucks, and the
second one has severe side-effects:
[*] it depends on the timestamps being correct
[*] it won't work if the audio and video sources don't sync well
[*] it doesn't handle the case where one of the N input streams is
longer than the others

For point 3, Wim suggested listening to EOS signals. That only works
*if* the element provides an EOS signal. For osssrc/v4lsrc, for
example, this isn't the case, so badly-synced streams (which happen
with 99% of the soundcards because of broken video clocks) will cause
this to fail horribly as well.
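
In terms of the sketch above, all an EOS signal would do is flip the
pad's flag so select_pad() stops considering it (again made up, not
real API):

    /* EOS handler for the sketch above. Once a pad is marked EOS,
     * select_pad() skips it, so the remaining (longer) streams keep
     * getting muxed. If a source never fires EOS, this flag is never
     * set and select_pad() will happily keep waiting for data on a
     * pad that is effectively dead. */
    static void
    on_eos (MuxPad *pads, size_t i)
    {
        pads[i].eos = 1;
    }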

Conclusion: muxing currently sucks.

Okay, so what should change:
In the ideal case, we'd have the option to use chainbased as well as
loopbased muxing elements, and both would work well, regardless of the
input streams. This is a bit hard. So let's try to figure out some ways
to get there.

First of all, why would chainbased elements work? Look at this:

videosource
            -> muxer -> filesink
audiosource

Notice that the queues are missing. This will run, too: if the muxer
is loopbased, it just pulls buffers, and if the muxer is chainbased,
buffers get pushed to it sequentially, one pad after the other. That's
not any better than with the queues, and that's exactly the problem!
Now, let's imagine that every input stream pipeline has a queue:

[videosource -> queue]
                       -> muxer -> filesink
[audiosource -> queue]

The video/audio source threads would fill both queues, and each time a
buffer lands in a queue, it could immediately be forwarded to the
muxer. This would make chainbased elements work too, and loopbased
elements would still work the old way. Basically, this requires the
scheduler to listen to queue signals and respond to them in a specific
way.
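
As a rough sketch (same disclaimer: made-up types, not the real
GStreamer API), a chainbased muxer that can cope with buffers arriving
in any order would hold on to one buffer per sink pad and only mux
once every live pad has one:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint64_t timestamp;
        /* ... payload ... */
    } Buffer;

    typedef struct {
        int     eos;
        Buffer *pending;  /* buffer delivered by the chain function */
    } MuxPad;

    static void
    write_out (const Buffer *buf)
    {
        /* write the buffer into the output file */
        (void) buf;
    }

    /* Chain function for sink pad `i`: store the buffer, and only
     * mux once every live pad has one pending, because only then do
     * we know which buffer really is the earliest. Assumes the
     * scheduler never pushes to a pad that still holds a buffer. */
    static void
    mux_chain (MuxPad *pads, size_t n_pads, size_t i, Buffer *buf)
    {
        int best = -1;
        uint64_t best_ts = UINT64_MAX;

        pads[i].pending = buf;

        for (size_t j = 0; j < n_pads; j++) {
            if (pads[j].eos)
                continue;
            if (pads[j].pending == NULL)
                return;  /* some live pad is still empty: wait */
            if (pads[j].pending->timestamp < best_ts) {
                best_ts = pads[j].pending->timestamp;
                best = (int) j;
            }
        }
        if (best >= 0) {
            write_out (pads[best].pending);
            pads[best].pending = NULL;  /* may be refilled now */
        }
    }

This only works if the scheduler keeps feeding the muxer from all
queues in a reasonably interleaved way, which is exactly why the queue
signals matter.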

Another way of making chainbased elements work is for the scheduler to
look at the buffers in the queues and simply forward the one with the
earliest timestamp. Again, this requires a queue in front of the muxer
for every input, and it requires the scheduler to do something
specific.
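
Scheduler-side, that could look something like this (Queue,
queue_peek() and so on are toy stand-ins, not our real queue element):

    #include <stdint.h>
    #include <stddef.h>

    typedef struct { uint64_t timestamp; } Buffer;
    typedef struct { Buffer *head; } Queue;  /* toy one-slot queue */

    static Buffer *
    queue_peek (Queue *q)
    {
        return q->head;
    }

    static Buffer *
    queue_pop (Queue *q)
    {
        Buffer *b = q->head;
        q->head = NULL;
        return b;
    }

    static void
    muxer_chain (size_t pad, Buffer *buf)
    {
        /* hand the buffer to the muxer's chain function */
        (void) pad;
        (void) buf;
    }

    /* One scheduling step: if every queue has a buffer waiting,
     * forward the one with the earliest timestamp. Returns 1 if a
     * buffer was forwarded, 0 if some stream is still lagging. */
    static int
    dispatch_one (Queue *queues, size_t n_queues)
    {
        size_t best = 0;
        uint64_t best_ts = UINT64_MAX;

        for (size_t i = 0; i < n_queues; i++) {
            Buffer *head = queue_peek (&queues[i]);
            if (head == NULL)
                return 0;  /* wait for the slow stream to catch up */
            if (head->timestamp < best_ts) {
                best_ts = head->timestamp;
                best = i;
            }
        }
        muxer_chain (best, queue_pop (&queues[best]));
        return 1;
    }

The nice thing is that the muxer's chain function then receives
buffers already in timestamp order and can simply write each one out
as it comes in.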

Both ways have their downsides, but they'd do a good job. They have
advantages over the loopbased-only approach, mainly because the
scheduler knows what's going on in the whole pipeline, rather than
just our own element. Maybe there are better ways... Please comment if
you know of any. The point I'm trying to make is that loopbased-only
won't work in all cases for muxers, so we need a working concept for
chainbased muxers too. I don't really care how. ;-).

Comments?

Ronald




