[gst-devel] AVI parser

Erik Walthinsen omega at cse.ogi.edu
Mon Mar 20 05:33:17 CET 2000

On Sun, 19 Mar 2000, Wim Taymans wrote:

> - The avi parser itself finds out what codec to use, adds this to the
>   pad and sends off a new_pad event. The new pad will always be an
>   uncompressed audio/video stream.
> - The avi parser sends out a new_pad event and lets the application find out
>   what codec to connect to it. The avi parser will create a compressed
>   pad.
> What would be the best choice? I prefer 2, but is there a way to find out
> the type of the pad? There is a get_type_id but what is it supposed to do?
What I've got in mind, and a little bit of code for, is the ability to
autoconstruct a stream by starting to stream data, allowing for a
`typefind` element to attach to each new pad (under the control of the
application, of course) and cause streaming to occur in a non-destructive
fashion (the DISCOVERY state).  This means that source elements should
eventually be capable of doing these non-destructive reads, which may mean
some kind of core or library code that can handle the buffering necessary.
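The non-destructive read could be sketched as a small peek buffer; everything below (the `peek_buf_t` type, the helper names) is illustrative only, not actual GStreamer code:

```c
#include <string.h>

/* Hypothetical sketch (not real GStreamer API): a peek buffer that
 * lets a typefind routine inspect the head of a stream without
 * consuming it -- the "non-destructive read" described above. */
typedef struct {
  const unsigned char *data;   /* buffered head of the stream */
  size_t len;                  /* bytes available for peeking */
} peek_buf_t;

/* Copy up to n bytes without advancing the stream position. */
static size_t peek_bytes(const peek_buf_t *pb, unsigned char *out, size_t n) {
  size_t take = n < pb->len ? n : pb->len;
  memcpy(out, pb->data, take);
  return take;
}

/* A typefind check can then test a magic number non-destructively. */
static int looks_like_riff(const peek_buf_t *pb) {
  unsigned char hdr[4];
  return peek_bytes(pb, hdr, 4) == 4 && memcmp(hdr, "RIFF", 4) == 0;
}
```

Because the peek never advances the stream, the same buffered bytes can be handed to the real element once the type is settled.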

The actual name for a pad should be as specific as it can be, but in the
case of AVI, it may be the RIFF tag and a sequence number, i.e. WAV_0,
WAV_1, etc.  The actual data type can be set later, if at all.  A lot of
applications may never care what the official data type is, since they
"know" what things are based on the RIFF tags.
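The tag-plus-sequence naming could look like this trivial helper (a sketch; the function name is made up):

```c
#include <stdio.h>

/* Illustrative helper (not GStreamer API): build a pad name from the
 * RIFF tag plus a per-tag sequence number, e.g. WAV_0, WAV_1. */
static void make_pad_name(char *out, size_t outlen, const char *tag, int seq) {
  snprintf(out, outlen, "%s_%d", tag, seq);
}
```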

To actually figure out what the thing should be, each element can provide
a typecheck function that would process just enough data to determine one
way or another if that data stream is one type or another.  I was
originally thinking a boolean return, but obviously that won't work with
plugins that deal with multiple types.  In fact, the whole typing scheme
is a bit screwy.  Thoughts?
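One way out of the boolean problem is for the typecheck to report *which* type matched. A minimal sketch, with made-up type ids:

```c
#include <string.h>

/* Sketch: a typecheck that returns the matched type rather than a
 * boolean, so one plugin can cover several related formats.  The
 * enum values are illustrative only. */
enum { TYPE_UNKNOWN = 0, TYPE_AVI, TYPE_WAV };

static int riff_typecheck(const unsigned char *buf, size_t len) {
  if (len < 12 || memcmp(buf, "RIFF", 4) != 0)
    return TYPE_UNKNOWN;                      /* not RIFF at all */
  if (memcmp(buf + 8, "AVI ", 4) == 0) return TYPE_AVI;
  if (memcmp(buf + 8, "WAVE", 4) == 0) return TYPE_WAV;
  return TYPE_UNKNOWN;                        /* RIFF, but unknown form */
}
```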

> The mpeg parser currently uses option 2, but that is easy because we
> always know we have to connect the mpg123 parser to the audio pad...
> Also it encodes the type of pad in the name (audio_x, video_x, ..) which
> is not too good either. 
The case of the MPEG parser is special because the streams will *always*
be of a certain basic type.  However, each audio or video stream could be
any of several formats, so the above is still necessary.

> I have the library thing working quite nicely. All the basic codec stuff
> of the avi parser should go in the libraries so that they can be reused
> for a quicktime parser. The avi specific stuff for the codecs will go
> into the avi directory. These avi codecs use the libraries to get the
> (de)compression done. 
One problem is that all your testing has been done with plugin_load_all().
If you were to just demand-load the AVI parser, it would promptly fail
with unresolved gst_riff_* symbols during plugin load.  We need to turn
on lazy loading in gstplugin.c, and any plugin that is based on a library
or another plugin must have critical code at the top of the plugin_init()
routine that will go load all those plugins.  If they don't load, it's all
over.  That reminds me, there's no sane way to unload a plugin right now,
and there should be (think of a more complex interactive setup where
you're actually writing/testing plugins without shutting down the
application in question).  Refcounting should be done, and maybe some way
to keep track of all elements based on that plugin so some application
could forcibly yank them in a frozen pipeline (that's another idea I've
been kicking around).
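The "load your library plugins first, or give up" rule could be sketched like this; the static registry below merely stands in for real lazy dlopen() of plugins, and all names are hypothetical:

```c
#include <string.h>

/* Stand-in "registry" of library plugins that loaded successfully. */
static const char *loaded_libs[] = { "gstriff", "gstgetbits" };

static int load_dependency(const char *name) {
  for (size_t i = 0; i < sizeof(loaded_libs) / sizeof(loaded_libs[0]); i++)
    if (strcmp(loaded_libs[i], name) == 0)
      return 1;                 /* dependency available */
  return 0;                     /* missing: plugin load must fail */
}

/* A dependent plugin pulls in its libraries at the top of init. */
static int avi_plugin_init(void) {
  if (!load_dependency("gstriff"))
    return 0;                   /* it's all over, as noted above */
  return 1;                     /* element registration would follow */
}
```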

Freezing: to modify a pipeline live, there has to be some way of forcing
data flow and scheduling to stop.  My thought is that RUNNING and PLAYING
really are orthogonal, with normal playing state being RUNNING && PLAYING.
Frozen == !RUNNING && PLAYING.  This is a short-term state, which means
that any queues shouldn't be affected too much, and the scheduling doesn't
get screwed.  An application wanting to freeze something to replace it
should always have the replacement ready to go.
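The orthogonal-states idea boils down to two independent flags; a minimal sketch, with made-up bit values:

```c
/* Sketch of the orthogonal state bits described above: normal playback
 * is RUNNING|PLAYING, a freeze keeps PLAYING set but clears RUNNING. */
#define STATE_RUNNING 0x1
#define STATE_PLAYING 0x2

static int is_frozen(int state) {
  return (state & STATE_PLAYING) && !(state & STATE_RUNNING);
}
```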

> When I get this proof of concept working, we might think about moving
> the 4 different implementations of getbits into a library. We still need
> one indirect call though, but we do not need MMX checking and
> dispatching to the right getbits implementation because we can load the
> specific library right away. 
Good point.  The indirect function call gets replaced with a ld.so-based
rewrite of the code (which is an indirect function call itself, but
hey...).  That way we have the getbits_mmx.c, getbits_int.c, etc. code,
and the getbits plugin itself could do a forcible load of the appropriate
version, nothing more.  Literally:

GstPlugin *init_plugin(GModule *module) {
  GstPlugin *plugin;
  if (mm_supported & SSE)       plugin = load_getbits("sse");
  else if (mm_supported & MMX)  plugin = load_getbits("mmx");
  else                          plugin = load_getbits("int");
  return plugin;
}

Of course, this has to occur with proper error checking, so it's more
likely the switchout would just set a string pointer and the load happens
once with checking.  And the switched plugins all provide the same
interface.
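The "set a string pointer, then load once with checking" variant might look like this; the capability flag bits and names are illustrative:

```c
/* Sketch: pick the implementation name first, do one checked load later. */
#define CAP_MMX 0x1
#define CAP_SSE 0x2

static const char *pick_getbits(unsigned caps) {
  if (caps & CAP_SSE) return "getbits_sse";
  if (caps & CAP_MMX) return "getbits_mmx";
  return "getbits_int";   /* plain-integer fallback */
}
```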

         Erik Walthinsen <omega at cse.ogi.edu> - Staff Programmer @ OGI
        Quasar project - http://www.cse.ogi.edu/DISC/projects/quasar/
   Video4Linux Two drivers and stuff - http://www.cse.ogi.edu/~omega/v4l2/
       /  \             SEUL: Simple End-User Linux - http://www.seul.org/
      |    | M E G A           Helping Linux become THE choice
      _\  /_                          for the home or office user
