[gst-devel] One-autoplugger-fits-all ?

Bastien Nocera hadess at hadess.net
Sat May 19 14:16:04 CEST 2001


I'll give my thoughts as an application developer here. If you think my
ideas are unreasonable, please tell me.

On 18 May 2001 23:18:03 -0700, Erik Walthinsen wrote:
> The current dynamic autoplugger I'm building is designed initially to act
> as an [un]known-to-known magic element.  Attach it in the middle of a
> pipeline, it will construct a filler pipeline, determining the source data
> type as necessary.
> There are two basic types of data to be autoplugged: an elementary stream
> (audio XOR video), or a system stream (audio AND video).  Variants exist
> when you're talking about DVD's, with multiple audio streams, as well as a
> control stream, but it's basically a 1 vs. N difference.
> There are also two goals of autoplug: autoplug to a known output, and
> autoplug to some number of renderers.  gstplay will want to make use of
> the latter case.
> So, the current autoplugger implements the 1->1, known output case.  This
> is good, but I'd like to find a way to make it usable in all cases.  The
> default is to have a single source pad available for connection.  What if
> you hit PLAYING without connecting it?  That could be the hint necessary
> for the autoplugger to render.  Or there could be a flag.  If there's a
> flag, and it's a system stream, do we then take the source pad away and
> replace it with a bunch of pads for the different outputs, firing signals
> as it goes?
> Should we even go so far as to not have a 'renderer autoplug' and just
> have output pads for those cases?  Leave it to the app to decide exactly
> how it wants to render them?  If so, we have the problem of determining
> what the base format is for each substream.  What if, for instance,
> there's a hardware MPEG2 video decoder available?  If you split the
> stream, then render to video/raw, you've got a problem when the renderer
> is connected.  I suppose the autoplugger can be smart enough to consider
> removing the bits of the pipeline that are now irrelevant....
> Ack, any thoughts on how to proceed?

I want the output to be user-configurable. I guess it would be the
easiest way to deal with sources that could use multiple outputs.

For example, a configuration file under /etc/gstreamer/ would give the
application (or a desktop-wide control-center applet for example) the
default sound output: will it be aRts, or esd, or the alsasink, or the
osssink? Same for video outputs (xvideosink, aasink, sdlsink...).

And we could also apply the same kind of scheme to "data processors":
for example, should the "default" mp3 decoder be mad or "mp3parse !
mpg123", or should the "default" MPEG layer 3 audio encoder be lame or
something else?
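To make the idea concrete, here is a purely hypothetical sketch of what such a configuration file could look like. The path, section names, and keys are all my invention for illustration; no such file or format exists in GStreamer today:

```ini
; /etc/gstreamer/defaults -- hypothetical example, not a real file
[outputs]
audio = esdsink        ; alternatives: artsdsink, alsasink, osssink
video = xvideosink     ; alternatives: aasink, sdlsink

[processors]
mp3-decode = mad       ; alternative: mp3parse ! mpg123
```

A desktop-wide capplet would edit this file, and every GStreamer application would pick up the same defaults.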
I guess all of this could be done on the application side without any
more code being needed in gstreamer. What would be nice would be for
gstreamer to "fill in the blanks". For example, I would provide it with
an unfinished pipeline, like:

gnomevfssrc location=file:///unknown ! ???? ! volume ! osssink

And gstreamer would fill in the gap, giving me either nothing (the file
type is not supported), the Ogg decoder (it's an Ogg file), or the
default mp3 decoder selected by the user (be it via the application or
a capplet) if it's an mp3.
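The "fill in the gap" step can be modeled as a search over the element registry: find a chain of elements whose pads connect the typefound source format to what the rest of the pipeline accepts. A toy sketch of that idea, assuming a made-up registry where each element has exactly one input and one output type (the real registry and caps system are far richer):

```python
from collections import deque

# Toy element registry: name -> (input type, output type).
# The names and media types here are illustrative only.
REGISTRY = {
    "mad":       ("audio/mp3", "audio/raw"),
    "vorbisdec": ("audio/ogg", "audio/raw"),
    "mp3parse":  ("audio/mp3", "audio/mp3-framed"),
    "mpg123":    ("audio/mp3-framed", "audio/raw"),
}

def fill_gap(src_type, sink_type, registry=REGISTRY):
    """Breadth-first search for the shortest element chain whose
    first input matches src_type and last output matches sink_type.
    Returns the chain as a list of element names, or None."""
    queue = deque([(src_type, [])])
    seen = {src_type}
    while queue:
        mtype, chain = queue.popleft()
        if mtype == sink_type:
            return chain
        for name, (inp, out) in registry.items():
            if inp == mtype and out not in seen:
                seen.add(out)
                queue.append((out, chain + [name]))
    return None

print(fill_gap("audio/mp3", "audio/raw"))   # ["mad"]
print(fill_gap("audio/ogg", "audio/raw"))   # ["vorbisdec"]
print(fill_gap("video/avi", "audio/raw"))   # None
```

A user preference like the one above could simply reorder the candidates, so the search prefers mad over "mp3parse ! mpg123" (or vice versa) according to the configured default.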

What do you think? I know this seems quite unreasonable, but I won't be
the one implementing it, will I?


PS: No, I'm not volunteering.

/Bastien Nocera running
