[gst-devel] One-autoplugger-fits-all ?

Richard Boulton richard at tartarus.org
Sat May 19 12:31:25 CEST 2001

On Fri, May 18, 2001 at 11:18:03PM -0700, Erik Walthinsen wrote:
> So, the current autoplugger implements the 1->1, known output case.  This
> is good, but I'd like to find a way to make it usable in all cases.  The
> default is to have a single source pad available for connection.  What if
> you hit PLAYING without connecting it?  That could be the hint necessary
> for the autoplugger to render.  Or there could be a flag.
A flag would be better: otherwise, if an app only wants to render one of
the streams (for example, only the video stream of an MPEG), all the other
streams need to be connected to a sink that throws the data away.

> Ack, any thoughts on how to proceed?

It strikes me that trying to put the decision of which renderer to use into
the autoplug system would make the autoplugger more complicated than
necessary.  I would resubmit my ideas on Meta Elements (see

This scheme allows the user to specify what should be done with the output,
without specifying how to do it.  For example, a meta-playaudio element
would be expected to send audio output to an audible place (as configured
by a config file), whereas a meta-saveaudio element would save it to a file
(in a format determined by the config file).  With an autoplugger, the
decision of what to do is entirely dependent on the format of the stream: I
can't see how these two cases could be distinguished.

This scheme also has the advantage that a meta element could have
parameters which can be easily set by the user.  For example, the host to
play audio on could be specified by setting a parameter on the
meta-playaudio element (the available parameters would be specified
together with the specification of what the meta-element is for, what
output types it accepts, etc).
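To make the idea concrete, here is a minimal Python sketch of what a meta
element might look like.  All of the names here (MetaElement, set_param,
the config mapping, "osssink") are hypothetical illustrations of the
proposal, not real GStreamer API: the meta element names *what* to do with
a stream, a config file decides *how*, and parameters like the playback
host live on the meta element itself.

```python
# Hypothetical sketch of the meta-element idea (not real GStreamer API).
# A config file maps each meta-element type to a concrete element,
# so the user specifies intent and the config specifies mechanism.
CONFIG = {
    "meta-playaudio": "osssink",   # send audio to an audible place
    "meta-saveaudio": "filesink",  # save audio to a file
}

class MetaElement:
    """A placeholder that is resolved to a concrete sink at plug time."""
    def __init__(self, kind):
        self.kind = kind
        self.params = {}           # user-settable parameters, e.g. host

    def set_param(self, name, value):
        self.params[name] = value

    def resolve(self):
        """Look up the concrete element this meta element stands for,
        carrying the user's parameters along with it."""
        return (CONFIG[self.kind], dict(self.params))

# The host to play audio on is just a parameter on the meta element:
audio_out = MetaElement("meta-playaudio")
audio_out.set_param("host", "remote.example.org")
```

The point of the sketch is that meta-playaudio and meta-saveaudio accept
the same stream formats but resolve to different concrete elements, which
is exactly the distinction a format-driven autoplugger cannot make on its
own.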

This also makes the issue of deciding which outputs to render much simpler:
the autoplugger no longer needs to make a decision.

For example, if an application is trying to render an mpeg (audio+video)
stream, the autoplugger will be given a source and two sinks; it just has
to connect them together appropriately.  If only the audio is desired, then
it will only be given one sink, and it will be clear what to do.
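A rough sketch of that simplified autoplugger, in Python with hypothetical
pad and sink names (again, not real GStreamer code): it is handed only the
sinks the application cares about, so it connects what it is given and
leaves the rest alone.

```python
# Conceptual sketch: an autoplugger that never decides *which* substreams
# to render -- it only connects the sinks it was handed.

def autoplug(source_pads, sinks):
    """Connect each sink to the source pad whose media type it accepts.

    source_pads: media type -> pad name, e.g. {"video/mpeg": "src_video"}
    sinks: media type -> sink name; only the requested streams appear.
    Returns (pad, sink) connections; unrequested pads are simply left
    unconnected rather than routed to a throw-away sink.
    """
    connections = []
    for media_type, sink in sinks.items():
        if media_type in source_pads:
            connections.append((source_pads[media_type], sink))
    return connections

# An MPEG source exposes audio and video, but the app only asked for audio:
pads = {"video/mpeg": "src_video", "audio/mpeg": "src_audio"}
links = autoplug(pads, {"audio/mpeg": "meta-playaudio"})
```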

For the user's convenience, it would be possible to make a higher-level
autoplugger which did the rendering by using a known-output autoplugger,
giving it the appropriate meta-element to use for output, so the user can
still say "here's the source; autoplug me a full system; render it".  This
wouldn't have the added flexibility described above, but it would be
possible to get that flexibility without having to go right down to the
level of specifying the exact elements.

> Should we even go so far as to not have a 'renderer autoplug' and just
> have output pads for those cases?  Leave it to the app to decide exactly
> how it wants to render them?  If so, we have the problem of determining
> what the base format is for each substream.  What if, for instance,
> there's a hardware MPEG2 video decoder available?  If you split the
> stream, then render to video/raw, you've got a problem when the renderer
> is connected.  I suppose the autoplugger can be smart enough to consider
> removing the bits of the pipeline that are now irrelevant....

Or you could simply require that the autoplugger is used before you've
split the stream.  So:

          /-> video -> decode -> Autoplugger -> meta-playvideo
MPEG2 in -
          \-> audio -> decode -> Autoplugger -> meta-playaudio

This arrangement won't use the hardware decoder, but

          /-> video -> Autoplugger -> meta-playvideo
MPEG2 in -
          \-> audio -> Autoplugger -> meta-playaudio

or even

                         /-> meta-playvideo
MPEG2 in -> Autoplugger -
                         \-> meta-playaudio
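One way to picture why running the autoplugger before the split helps with
the hardware-decoder case: the autoplugger can treat plugging as a search
over candidate elements, and if a hardware MPEG2 decoder converts the
still-encoded stream straight to the sink's format more cheaply than the
software path, it simply wins the search.  The element names and costs
below are invented for illustration.

```python
# Sketch: element selection as a cheapest-path choice.  Each hypothetical
# element is (input type, output type, cost); lower cost is preferred.
ELEMENTS = {
    "mpeg2dec_sw": ("video/mpeg", "video/raw", 2),   # software decode
    "mpeg2dec_hw": ("video/mpeg", "video/raw", 1),   # hardware decode
}

def pick_decoder(src_type, sink_type):
    """Choose the cheapest element converting src_type to sink_type,
    or None if no element matches."""
    candidates = [
        (cost, name)
        for name, (inp, out, cost) in ELEMENTS.items()
        if inp == src_type and out == sink_type
    ]
    return min(candidates)[1] if candidates else None

# With the encoded stream still intact, the hardware path is chosen:
best = pick_decoder("video/mpeg", "video/raw")
```

If the stream had already been split and partially decoded, the encoded
input the hardware decoder needs would no longer exist, which is the
problem the quoted message describes.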


