[gst-devel] proposal of new core element

Ronald S. Bultje R.S.Bultje at students.uu.nl
Wed Apr 21 21:19:55 CEST 2004


Hi Thomas,

On Thu, 2004-04-22 at 00:49, Thomas Vander Stichele wrote:
> > audio and video are slightly different. The purpose of the audio one is
> > to make sure that the buffer has a specific, fixed number of samples.
> > Incoming buffers might have different numbers of samples, but will
> > still be aligned. For video, however, each buffer is one, and exactly
> > one, frame. It makes no sense to have buffers of zero or three or X
> > frames.
> 
> I know, but in practice this cannot be guaranteed.  Raw video coming in
> from a file, a pipe, or a socket just doesn't know how many bytes a
> buffer should contain.

True. But if you're talking about raw video, then you're talking about a
specific type of data. Doesn't that need a specific plugin, say a
videorawparse, just like mpeg1videoparse, mpegparse, etc.? The primary
purpose of all of those is to parse one specific data type on top of
bytestream (unfortunately, mpeg1videoparse doesn't use bytestream yet;
that's a TODO item on my list for some day, same for mp3parse). My point
is that I don't see why the core needs a general cutter element for
that. ;)
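
Something like this would be enough for the frame-size part of such a
videorawparse - a rough, untested sketch (0.8-style caps calls from
memory, the function name is made up, and only packed RGB is handled;
YUV would need a per-fourcc size calculation):

#include <gst/gst.h>

static guint
videorawparse_get_frame_size (const GstCaps *caps)
{
  GstStructure *s = gst_caps_get_structure (caps, 0);
  gint width = 0, height = 0, bpp = 0;

  /* video/x-raw-rgb carries width, height and bits per pixel directly */
  if (!gst_structure_get_int (s, "width", &width) ||
      !gst_structure_get_int (s, "height", &height) ||
      !gst_structure_get_int (s, "bpp", &bpp))
    return 0;   /* unusable caps, refuse the link */

  return width * height * (bpp / 8);
}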

> Also, the core basically contains a set of general pipe elements, which
> have "pipe theory" operations.  A reframing one is missing there IMO. 
> With the current core, I also don't see another way of doing this
> nicely.  If I subclass the generic core element and calculate the frame
> byte size from the video caps this works quite well.

I think bytestream is better suited for this than subclassing...
Subclassing is nice, but it doesn't really add anything to this use
case. Rather, it complicates things, because the frame size differs for
most data types. Bytestream is more applicable here.
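
For example, a loop-based element on top of bytestream gets the "one
buffer == one frame" behaviour almost for free. Rough, untested sketch
(0.8-era bytestream calls from memory; the struct and function names
are invented for illustration):

#include <gst/gst.h>
#include <gst/bytestream/bytestream.h>

typedef struct {
  GstByteStream *bs;   /* wraps the sink pad, from gst_bytestream_new() */
  GstPad *srcpad;
  guint frame_size;    /* derived from the negotiated caps */
} Reframe;

/* called from the element's loop function: push exactly one frame */
static void
reframe_push_one_frame (Reframe *reframe)
{
  GstBuffer *buf = NULL;
  guint32 got;

  /* bytestream merges or splits whatever buffer sizes come in from
   * upstream, so we can simply ask for one frame's worth of bytes */
  got = gst_bytestream_read (reframe->bs, &buf, reframe->frame_size);

  if (got < reframe->frame_size) {
    /* short read: an event (EOS, flush, ...) is pending */
    GstEvent *event = NULL;
    guint32 avail;

    gst_bytestream_get_status (reframe->bs, &avail, &event);
    if (event)
      gst_pad_push (reframe->srcpad, GST_DATA (event));
    return;
  }

  /* from here on, one buffer really is one frame */
  gst_pad_push (reframe->srcpad, GST_DATA (buf));
}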

> > For the compressed case, it's even more complicated because of different
> > sizes per frame.
> 
> Agreed, but it might still be possible to do this.

Possible != preferable. ;)

> > An extension to bytestream would make much more sense here imo,
> > especially if we make it usable from chain-based functions (or we go
> > ahead with Dave/Benjamin's idea of making everything iterate-based,
> > in which case this is not needed).
> 
> I don't see how bytestream is solving this, esp. for elements that
> aren't using bytestream in the first place :)

They should use bytestream, or they should have a bytestream-based
parser in front of them.
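
I.e. the pipeline would look roughly like this (untested 0.8-style
sketch; videorawparse doesn't exist yet, and the caps values are only
there to show that raw video needs its frame geometry forced onto the
link before any parser can know the frame size):

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *src, *parse, *sink;
  GstCaps *caps;

  gst_init (&argc, &argv);

  pipeline = gst_pipeline_new ("pipeline");
  src = gst_element_factory_make ("filesrc", "src");
  parse = gst_element_factory_make ("videorawparse", "parse"); /* hypothetical */
  sink = gst_element_factory_make ("fakesink", "sink");

  g_object_set (G_OBJECT (src), "location", "raw.rgb", NULL);

  gst_bin_add_many (GST_BIN (pipeline), src, parse, sink, NULL);

  /* raw video doesn't describe itself, so force the frame geometry
   * onto the first link; the parser derives the frame size from it */
  caps = gst_caps_new_simple ("video/x-raw-rgb",
      "width", G_TYPE_INT, 320, "height", G_TYPE_INT, 240,
      "bpp", G_TYPE_INT, 24, NULL);
  gst_element_link_filtered (src, parse, caps);
  gst_element_link (parse, sink);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  while (gst_bin_iterate (GST_BIN (pipeline)))
    ;
  gst_element_set_state (pipeline, GST_STATE_NULL);

  return 0;
}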

> Anyway, even with all of this in place, for testing purposes and generic
> stuff a simple reframer still sounds like a good idea to me :)

Sounds good.

Btw, I can see where you're heading with this, so the obvious next
question: why do you want to stream raw video? Wouldn't you rather
stream *hint* ogg/theora? :) It's much easier to parse, too, since you
don't need to tell the parser the frame size to cut the stream
correctly.

Ronald



