[gst-devel] mpeg2dec plugin development notes
Erik Walthinsen
omega at temple-baptist.com
Wed Mar 7 20:38:02 CET 2001
On Wed, 7 Mar 2001, David I. Lehn wrote:
> That's also another issue... the plugin is doing the memory management
> itself. At the moment there doesn't seem to be functionality for an
> element to ask other elements for a more efficient way to handle
> buffers. I don't know what is required for this.
This is what the BufferPool concept is for. The idea is that an element
would attach a BufferPool to its sink pad, and the peer element would use
it to allocate the buffers it then sends to that pad.
A simple bufferpool, functionally identical to what we do right now,
would simply g_malloc() the buffers requested. This may be how things are
done throughout in the future (in the absence of a better BufferPool), just
so we're using BufferPools everywhere: you ask the 'NULL' bufferpool for a
buffer, and that's what happens.
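To make that concrete, here's a rough sketch of what such a pool interface
could look like, with the 'NULL' pool filled in. Every type and function
name below is made up for illustration, it's not the real API:

/* Hypothetical sketch only -- none of these names are real GStreamer API. */
#include <glib.h>

typedef struct _BufferPool BufferPool;

struct _BufferPool {
  /* hand out a buffer of the requested size */
  gpointer (*buffer_new)  (BufferPool *pool, gsize size);
  /* give a buffer back to the pool when the peer is done with it */
  void     (*buffer_free) (BufferPool *pool, gpointer data);
  gpointer  user_data;    /* pool-specific state (DMA region, free list, ...) */
};

/* The 'NULL' bufferpool: behaves exactly like what we do right now,
 * it just g_malloc()s whatever is asked for. */
static gpointer
null_pool_buffer_new (BufferPool *pool, gsize size)
{
  (void) pool;
  return g_malloc (size);
}

static void
null_pool_buffer_free (BufferPool *pool, gpointer data)
{
  (void) pool;
  g_free (data);
}

static BufferPool null_pool = { null_pool_buffer_new, null_pool_buffer_free, NULL };

/* A peer element that wants to push to a sink pad would ask that pad for
 * its pool (falling back to the NULL pool) and allocate from it. */
static gpointer
request_output_buffer (BufferPool *pad_pool, gsize size)
{
  BufferPool *pool = pad_pool ? pad_pool : &null_pool;
  return pool->buffer_new (pool, size);
}

A smarter pool only has to swap in different function pointers behind the
same interface.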
A more interesting bufferpool would be one provided by the audiosink. If
you have a sound card capable of DMA, you would create a bufferpool that
gives out properly timed buffers which are simply located in the right
place in the DMA space. That way the peer would write directly into the
final output space. There are significant problems with that approach in
the general case, though, because of the very tight timing you have to
hold to. For audio-specific applications that need to be real-time, and
can hold to those requirements, it's an ideal solution.
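As a rough illustration of the idea (the layout and names are assumptions,
not how any existing audiosink actually works), such a pool would just hand
out successive slices of the memory-mapped DMA ring:

/* Hypothetical DMA-ring pool, made up for illustration. */
#include <glib.h>

typedef struct {
  guint8 *dma_base;        /* start of the memory-mapped DMA region */
  gsize   fragment_size;   /* size of one hardware fragment in bytes */
  guint   num_fragments;   /* how many fragments the card gives us */
  guint   next;            /* fragment that will be played furthest in the future */
} DmaPool;

/* Hand out a pointer straight into DMA space; the peer then writes its
 * samples directly into the final output location, no copy. */
static guint8 *
dma_pool_buffer_new (DmaPool *pool)
{
  guint8 *buf = pool->dma_base + pool->next * pool->fragment_size;

  pool->next = (pool->next + 1) % pool->num_fragments;
  return buf;
}

Note this is exactly where the timing problem bites: nothing in that sketch
stops an element from scribbling into a fragment the card is playing right
now.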
Now to the problem at hand: something similar can be done with video
frames. This assumes that the videosink can pre-allocate frames, which
doesn't really apply with Xv ;-(, but the idea still holds.
Then the trick is to decide who drives the buffer count, etc. If the
mpeg2dec element allocates two frames (forward and back reference), plus a
third which gets handed back immediately, it will theoretically never ask
for more than three in total. If that happens to be a hardware limit and
something asks for more frames, the pool can either respond with a failure
(which the element would have to figure out how to deal with), or it could
simply malloc something and deal with it as appropriate later.
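A sketch of that policy (again with made-up names, and assuming the
videosink really can pre-allocate a fixed set of frames) might look like:

/* Hypothetical frame pool with a hard hardware limit and a soft fallback. */
#include <glib.h>

#define MAX_HW_FRAMES 3   /* e.g. forward ref + back ref + one handed back */

typedef struct {
  gpointer frames[MAX_HW_FRAMES];   /* pre-allocated hardware frames */
  gboolean in_use[MAX_HW_FRAMES];
  gsize    frame_size;
} FramePool;

static gpointer
frame_pool_buffer_new (FramePool *pool)
{
  guint i;

  for (i = 0; i < MAX_HW_FRAMES; i++) {
    if (!pool->in_use[i]) {
      pool->in_use[i] = TRUE;
      return pool->frames[i];
    }
  }

  /* Hardware limit hit: either fail here and make the element cope,
   * or quietly fall back to a plain allocation and sort it out later. */
  return g_malloc (pool->frame_size);
}

A real pool would also have to tell the malloc'd fallback buffers apart
from hardware frames when they come back in.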
The key here, especially for audio, is that these buffers need to be
requested with a timestamp or other offset value that represents where the
media is in time. It wouldn't help if the audiosink randomly passed out
buffers from DMA space, since they might end up mangled in time, and then
you'd get some really strange stuff coming from your speakers.
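So the request itself has to carry the time, and the pool maps that onto a
position in the ring. Roughly (rates, sizes and names are all assumptions):

#include <glib.h>

/* Hypothetical helper: map a timestamp onto a fragment index in the
 * audiosink's DMA ring, so buffers are handed out in play order. */
static guint
fragment_for_timestamp (guint64 timestamp_usec,
                        guint   rate,              /* samples per second     */
                        guint   bytes_per_sample,  /* e.g. 4 = 16-bit stereo */
                        gsize   fragment_size,     /* bytes per fragment     */
                        guint   num_fragments)
{
  guint64 sample      = timestamp_usec * rate / G_USEC_PER_SEC;
  guint64 byte_offset = sample * bytes_per_sample;

  return (guint) ((byte_offset / fragment_size) % num_fragments);
}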
One major feature here would be the ability for elements that modify
buffers in place to proxy the bufferpool, so their left-hand peer can get
a bufferpool from their right-hand peer. That can avoid many copies...
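Roughly, such an element wouldn't offer a pool of its own at all; when
asked on its sink pad, it would just forward the question downstream
(hypothetical names again):

/* Hypothetical sketch of bufferpool proxying for an in-place element. */
#include <glib.h>

typedef struct _BufferPool BufferPool;   /* opaque here */
typedef struct _Pad Pad;

struct _Pad {
  Pad        *peer;                          /* pad on the other end */
  BufferPool *(*get_bufferpool) (Pad *pad);  /* how a pad exposes its pool */
  gpointer    element;                       /* back-pointer to the element */
};

typedef struct {
  Pad *sinkpad;   /* left-hand side: data comes in here  */
  Pad *srcpad;    /* right-hand side: data goes out here */
} InPlaceElement;

/* The element modifies buffers in place, so instead of providing its own
 * pool it forwards the request to its right-hand peer.  The left-hand
 * peer then writes straight into buffers owned by the downstream element,
 * and nothing needs to be copied in between. */
static BufferPool *
inplace_get_bufferpool (Pad *sinkpad)
{
  InPlaceElement *elem = (InPlaceElement *) sinkpad->element;
  Pad *downstream = elem->srcpad->peer;

  if (downstream != NULL && downstream->get_bufferpool != NULL)
    return downstream->get_bufferpool (downstream);

  return NULL;   /* no pool downstream: caller falls back to g_malloc() */
}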
Anyway, there's a little more to this, but I need to send this off...
Erik Walthinsen <omega at temple-baptist.com> - System Administrator
        __
       /  \          GStreamer - The only way to stream!
      |    | M E G A   ***** http://gstreamer.net/ *****
      _\  /_