[gst-devel] RFC: A Timeline

Edward Hervey bilboed at gmail.com
Mon Jun 27 02:20:01 CEST 2005


Hi :)

On 6/27/05, MDK <mdk at mdk.org.pl> wrote:
> Hi,
> 
> I'm thinking about the ways to implement a "Timeline" in GStreamer (many
> clips aligned in time at given positions). I know we have gnonlin
> (probe_fired based), but I'm wondering if there are better ways to do
> that.

 There are definitely many ideas I've got lying around which are going
to be implemented as I port gnonlin to gstreamer 0.9.

> 
> I'm thinking about the following:
> 
> Write a bin element (let's call it a SOURCE) that manages another
> supplied bin, making it "infinite" in time. In the simplest form it has
> "In" and "Out" points. As soon as playback reaches "In" it adds/links
> the managed bin to itself. As soon as it goes past "Out" it
> unlinks/removes the bin from itself. For all the time < "In" || time >
> "Out" it somehow (?) generates "no data" on its src pad. It's seek
> aware and "does the right thing" on seek events (add/link,
> unlink/remove). SOURCE never generates EOS.

  Having an element generate data when it's not doing anything seems
to be a waste of CPU.

> 
> An arbitrary number of SOURCES is linked to a TIMELINE element - which
> is a modified videomixer thing. It has an infinite number of sink pads
> and one src pad. In a loopfunc, TIMELINE pulls from all of its sink
> pads, trying to find the first one that returns a buffer ( != "no
> data"), and pushes that buffer out on its src pad.

  Two problems here:
   1) gnonlin tries to be as media-agnostic as possible; in your case
you would have to write a different kind of timeline per media class.
   2) the timeline loopfunc would have to go over more pads than are
actually needed.
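
  To give a rough idea of the direction gnonlin takes instead, here is a
small sketch in plain C: each source only carries a start/duration on
the timeline, and the composition looks up which one covers the current
position instead of polling every pad. The struct and helper below are
purely illustrative, not the real gnonlin API.

#include <gst/gst.h>

typedef struct {
  const gchar *name;
  GstClockTime start;     /* where the clip begins on the timeline */
  GstClockTime duration;  /* how long it lasts on the timeline */
} TimelineEntry;

/* Return the entry covering @position, or NULL if the timeline has a
 * gap there. */
static const TimelineEntry *
timeline_entry_at (const TimelineEntry *entries, guint n_entries,
                   GstClockTime position)
{
  guint i;

  for (i = 0; i < n_entries; i++) {
    if (position >= entries[i].start &&
        position < entries[i].start + entries[i].duration)
      return &entries[i];
  }
  return NULL;
}

int
main (int argc, char *argv[])
{
  const TimelineEntry clips[] = {
    { "clip-a", 0, 10 * GST_SECOND },
    { "clip-b", 10 * GST_SECOND, 5 * GST_SECOND },
  };
  const TimelineEntry *cur;

  gst_init (&argc, &argv);

  cur = timeline_entry_at (clips, G_N_ELEMENTS (clips), 12 * GST_SECOND);
  g_print ("active at 12s: %s\n", cur ? cur->name : "none (gap)");
  return 0;
}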

> 
> Explaining it the other way round - let's say I've got a composition
> (movie) that consists of 50 clips aligned one after another (one
> "track", they do not overlap). This gives me 50 SOURCE bins linked to a
> TIMELINE all the time. All 50 are iterated at each moment, but only 1
> actually performs any real work; 49 are "idle", returning "no data".
> 
> The advantages I see:
> 
>         * Simplicity

  Simplicity in what sense? Code-wise? API-wise?

>         * Each element (SOURCE) is responsible only for itself. Operations like
> changing "In" and "Out" points require minimal pipeline rebuilding.
> TIMELINE doesn't need to know anything about the SOURCES.

  The overhead in timeline rebuilding is known and being worked on.
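
  Moving a clip's "In"/"Out" points is meant to boil down to updating a
few properties on its source rather than rebuilding the pipeline. A
rough sketch, where the property names are my assumption of how the 0.9
API will look:

#include <gst/gst.h>

/* Sketch: moving a clip's In/Out points as plain property updates on
 * the source.  "start"/"duration" (position on the timeline) and
 * "media-start"/"media-duration" (position inside the wrapped clip)
 * are assumed property names, not a confirmed 0.9 API. */
static void
move_clip (GstElement *source,
           GstClockTime timeline_start, GstClockTime in_point,
           GstClockTime length)
{
  g_object_set (source,
      "start", timeline_start,       /* where it sits on the timeline */
      "duration", length,
      "media-start", in_point,       /* In point inside the wrapped clip */
      "media-duration", length,
      NULL);
}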

>         * It fits with my concept of just two elements (no "tracks", no
> "compositions"). Possible effects (like: transitions) can happen at the
> TIMELINE level (video mixing).

 Indeed, it suits your concept, but the aim of gnonlin is to be as
versatile as possible.

> 
> Possible disadvantages:
> 
>         * Performance. All the clips are iterated, even if they're not used.
> It's a dummy iteration (without any data processing), but it might be a
> significant overhead (let's imagine I have a movie composed of 100
> shots...)
> 
>         * Having an element with 100 sink pads allocated might be a problem
> (???)
> 
>         * Is it actually possible to push "no data" on a pad?

  Yes, Yes and yes (filler event)
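
  On the last point: that is exactly what the filler event is for.
Rather than producing dummy buffers, the idle source pushes one
lightweight event saying "nothing happens between these timestamps",
and downstream can skip the whole stretch. A rough sketch of the idea,
using a custom event as a stand-in since the exact filler API may still
change during the 0.9 port:

#include <gst/gst.h>

/* Sketch of the "no data" idea: push one lightweight event covering
 * the empty stretch instead of dummy buffers.  A custom downstream
 * event is used here as a stand-in for the real filler event. */
static gboolean
push_filler (GstPad *srcpad, GstClockTime start, GstClockTime duration)
{
  GstStructure *s;
  GstEvent *event;

  s = gst_structure_new ("timeline-filler",
      "start", G_TYPE_UINT64, (guint64) start,
      "duration", G_TYPE_UINT64, (guint64) duration,
      NULL);
  event = gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM, s);

  /* Downstream elements that recognise the event can skip ahead
   * without any per-buffer work. */
  return gst_pad_push_event (srcpad, event);
}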

> 
> 
> Let me know if that sounds sane to you. Perhaps there are other ways to
> do that?

  

> 
> In other news: I've got the playskipper plugin ready (for automatic
> frame dropping if playback isn't "smooth"). I'm going to post it for
> wider testing soon.

  If I'm not mistaken, that's what videorate does.
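
  A minimal sketch of what videorate gives you: duplicating or dropping
frames so the stream comes out at a fixed framerate. The pipeline string
is illustrative, and the caps may need adjusting for the GStreamer
version at hand:

#include <gst/gst.h>

/* Minimal sketch: videorate drops or duplicates frames so the stream
 * comes out at the framerate forced by the caps filter. */
int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GError *error = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "videotestsrc num-buffers=100 ! videorate "
      "! video/x-raw-yuv,framerate=15/1 ! fakesink", &error);
  if (pipeline == NULL) {
    g_printerr ("parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* a real program would run a main loop and wait for EOS here */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}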

> 
>> Back to the topic, I'd suggest just going with gnonlin and fixing the
>> parts that are buggy or missing. Doing twenty slightly different
>> timeline implementations doesn't help anyone.

> Well, my problems with gnonlin are also related to performance.
> GnlSource (the basic source element in gnonlin) can wrap a bin that has
> just one src pad. That means you need two GnlSources for an AVI clip
> that has audio and video. And that means doubling the filesrc ! avidemux
> part. I don't know if the scheduler optimizes that (so that double
> disk-reading doesn't happen), but it seems strange to me.

  Indeed, the demuxer ends up being instantiated several times. But in
order to suit every scenario, this is needed: imagine you play the audio
and the video at different points in the timeline; you'll need separate
sources anyway. The overhead of an extra demuxer is small compared to
the timeline possibilities you gain this way, and it is certainly less
than having N source elements generate dummy data at every loop :)
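
  To make the two-sources-per-clip point concrete, here is roughly what
building one track per stream looks like in my head. The element and
property names below ("gnlfilesource", "uri", "caps", "start",
"duration", "media-start", "media-duration") are how I expect the 0.9
elements to turn out, so treat them as assumptions rather than a fixed
API:

#include <gst/gst.h>

/* Sketch: one composition per output stream, each stream getting its
 * own source that wraps the same file (and therefore its own
 * filesrc ! demuxer).  Element/property names are assumptions about
 * the 0.9 gnonlin API, and the file path is just an example. */
static GstElement *
make_track_source (const gchar *uri, const gchar *caps_str,
                   GstClockTime start, GstClockTime duration)
{
  GstElement *src = gst_element_factory_make ("gnlfilesource", NULL);
  GstCaps *caps = gst_caps_from_string (caps_str);

  g_object_set (src,
      "uri", uri,                        /* wraps filesrc ! demuxer */
      "caps", caps,                      /* select one stream */
      "start", start,                    /* position on the timeline */
      "duration", duration,
      "media-start", (GstClockTime) 0,   /* position inside the file */
      "media-duration", duration,
      NULL);
  gst_caps_unref (caps);
  return src;
}

/* One composition for video, one for audio, both reading the same
 * clip but selecting different streams. */
static void
build_tracks (GstBin *video_comp, GstBin *audio_comp)
{
  gst_bin_add (video_comp,
      make_track_source ("file:///tmp/clip.avi", "video/x-raw-yuv",
          0, 10 * GST_SECOND));
  gst_bin_add (audio_comp,
      make_track_source ("file:///tmp/clip.avi", "audio/x-raw-int",
          0, 10 * GST_SECOND));
}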

> There are no examples for gnonlin, so I might be plain wrong here.

True, I need to create some new ones; I'm adding that to the gnonlin 0.9 TODO.

> Best regards,

Don't hesitate to mail me your comments/ideas or come and chat on
#gstreamer (irc.freenode.net)

  Edward

> 
> 
> --
> MDK
> mdk at mdk.org.pl
> www.mdk.org.pl



