[gst-devel] GANSO for gstreamer

Wim Taymans wim.taymans at chello.be
Fri Mar 23 00:13:34 CET 2001


On 19 Mar 2001 20:04:42 +0100, Ruben wrote:
> 
>       I've been thinking a bit about the port of ganso to gstreamer; I
> will rewrite the whole application. The core of ganso must be rewritten
> because it's tree-based while gstreamer is graph-based (which is more
> powerful, of course). The user interface is anything but friendly X-)
> I've been reading some Adobe Premiere tutorials and it's a million
> times easier than ganso, and its UI allows people to be much more
> productive. And the biggest problem is that ganso is frame-based
> instead of time-based :(

The timeline hierarchy and the object hierarchy of the rendering engine
are different things. I believe it does make sense to use a tree-based
setup for the timeline. The interesting part is how one would convert
the timeline hierarchy to a graph-based implementation...

> 
>       I need some questions answered to make the "port":
> 
>       - Most of the time, the user will need to play back small
> portions of the composition, or even single frames. The user will
> click on the timeline and wants the frame corresponding to that
> instant to appear in a preview window. One solution is to seek, but
> you said that isn't a good idea, so what should I do then?

I think seeking is a good idea. With a clever caching mechanism this
will work quite nicely.
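
For example, grabbing the frame at a given timeline position could look
roughly like this. This is only a sketch: it assumes a time-based seek
call like gst_element_seek_simple, and it pauses the pipeline so
exactly one frame comes out after the seek.

  #include <gst/gst.h>

  /* Sketch: seek a preview pipeline to a timeline position.
   * The seek flags are illustrative; KEY_UNIT snaps to the nearest
   * keyframe, which is faster but less exact. */
  static void
  preview_frame (GstElement *pipeline, gdouble seconds)
  {
    gst_element_set_state (pipeline, GST_STATE_PAUSED);

    gst_element_seek_simple (pipeline, GST_FORMAT_TIME,
        GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_KEY_UNIT,
        (GstClockTime) (seconds * GST_SECOND));
  }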

> 
>       - The user wants a quick response from the program when
> manipulating small portions of the composition (dragging with the
> cursor, for example). The current version of ganso manages this with
> an image cache: all the media sources are linked to the cache, and
> when some agent wants a frame from a video source, it asks the cache.
> It works like a processor cache, but oriented to images. Is there any
> standard way of doing this in gstreamer, or should I write a plugin
> for it and link it to all the decompressors' outputs?

I would implement an indexing/caching mechanism on top of the
pipelines... ok, I'll try to visualize my ideas:

1) the timeline:


 (-------------)
 ! vtrack1     !
 !   fade      !
 !     vtrack2 !   
 (-------------)

 (-------------)
 ! atrack1     !
 !   mixer     !
 !   atrack2   !   
 (-------------)

0    5   10
!----!----!---->
timeline

I think you currently use a hierarchy like this:

+ fade (start 5, end 10)
!- vtrack1 (start 0, end 10)
!- vtrack2 (start 5, end 15)
+ mixer (start 5, end 10)
!- atrack1 (start 0, end 10)
!- atrack2 (start 5, end 10)
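
In code, a node in that tree could be something like this (a rough
sketch; the TimelineNode type and its fields are hypothetical, just to
fix ideas):

  #include <gst/gst.h>

  /* Hypothetical timeline node: a tree where the leaves are media
   * tracks and the inner nodes are effects/transitions that apply
   * over a time range. */
  typedef struct _TimelineNode TimelineNode;

  struct _TimelineNode {
    gchar      *name;     /* "fade", "vtrack1", ... */
    guint64     start;    /* start time on the timeline */
    guint64     stop;     /* stop time on the timeline */
    GList      *children; /* TimelineNode*, empty for plain tracks */
    GstElement *bin;      /* the gstreamer bin this node maps to */
  };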


we have two video tracks (vtrack1&2) and two audio tracks (atrack1&2)
that are going to be crossfaded and mixed together in the timeline.
Each of the atracks and vtracks is translated into a pipeline, for
example:

 vtrack1 = fdsrc -> avidecoder -> jpegdec 
 vtrack2 = fdsrc -> mpeg2dec 

 atrack1 = fdsrc -> wavdec
 atrack2 = fdsrc -> wavdec

etc. For each A/V track you will have a pipeline in gstreamer. Applying
an effect (the fade) to two tracks would involve constructing a
pipeline like this (atrack1&2 are now bins with regular ghost pads):

 (----------------------)
 !  atrack1 --)         !
 !            ->  mixer ->
 !  atrack2 --)         !
 (----------------------)

same for the video mixer.
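
In (sketch) code, building one of those atrack bins and wiring it into
the mixer could look something like this. The element names are the
ones from the pipelines above, 'adder' stands in for the audio mixer,
and the ghost pad calls follow the current API:

  #include <gst/gst.h>

  /* Wrap one audio track in a bin and expose the decoder's src pad
   * as a ghost pad on the bin. */
  static GstElement *
  make_atrack (const gchar *name)
  {
    GstElement *bin = gst_bin_new (name);
    GstElement *src = gst_element_factory_make ("fdsrc", NULL);
    GstElement *dec = gst_element_factory_make ("wavdec", NULL);
    GstPad *pad;

    gst_bin_add_many (GST_BIN (bin), src, dec, NULL);
    gst_element_link (src, dec);

    pad = gst_element_get_static_pad (dec, "src");
    gst_element_add_pad (bin, gst_ghost_pad_new ("src", pad));
    gst_object_unref (pad);

    return bin;
  }

  /* Put both track bins and the mixer in an outer pipeline;
   * gst_element_link() requests mixer sink pads as needed. */
  static GstElement *
  make_mix (void)
  {
    GstElement *pipe  = gst_pipeline_new ("mix");
    GstElement *a1    = make_atrack ("atrack1");
    GstElement *a2    = make_atrack ("atrack2");
    GstElement *mixer = gst_element_factory_make ("adder", "mixer");

    gst_bin_add_many (GST_BIN (pipe), a1, a2, mixer, NULL);
    gst_element_link (a1, mixer);
    gst_element_link (a2, mixer);

    return pipe;
  }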

when you render this bin, you will activate each of the inner bins at the 
right time. For example, doing the atrack1 to atrack2 mix:

 timeline
   !
   !
0  - activate atrack1
   !
   !
5  - activate (or insert) mixer and atrack2
   !  (mixing happens here)
   !
10 - deactivate (or remove) atrack1
   !
   V

This would involve having an accurate clock and a scheduler updating
the pipeline at the specific clock ticks.
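
As a sketch of how that could work with the clock API (the function
names here are from the current core; the exact scheduler hookup is
still an open question):

  #include <gst/gst.h>

  /* Sketch: ask the pipeline clock for a callback at t = 5s, where
   * we would insert the mixer and atrack2. */
  static gboolean
  insert_mixer_cb (GstClock *clock, GstClockTime time,
                   GstClockID id, gpointer user_data)
  {
    GstElement *pipeline = GST_ELEMENT (user_data);

    /* here: gst_bin_add() the mixer and atrack2, link them, and
     * bring them to the pipeline's state */
    g_print ("t=5s reached, updating %s\n",
        GST_ELEMENT_NAME (pipeline));
    return TRUE;
  }

  static void
  schedule_update (GstElement *pipeline)
  {
    GstClock *clock = gst_pipeline_get_clock (GST_PIPELINE (pipeline));
    GstClockTime when =
        gst_element_get_base_time (pipeline) + 5 * GST_SECOND;
    GstClockID id = gst_clock_new_single_shot_id (clock, when);

    /* keep 'id' around if you need to unschedule/unref it later */
    gst_clock_id_wait_async (id, insert_mixer_cb, pipeline, NULL);
    gst_object_unref (clock);
  }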

rendering a thumbnail preview of atrack1 would involve running the
atrack1 bin and sending the output to the screen (using an element with
50x40 video capabilities and a scaler, possibly seeking to every nth
frame, etc.). I would tee these frames into a cache (a preview cache
with an index, etc.).
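
A possible shape for that preview pipeline, written as a launch-style
description (the element names are illustrative; 'appsink' stands in
for whatever element would feed the cache):

  #include <gst/gst.h>

  /* Sketch: decode a track, scale it to thumbnail size and tee it
   * to both the screen and a cache sink. */
  static GstElement *
  make_preview (const gchar *location)
  {
    gchar *desc = g_strdup_printf (
        "filesrc location=%s ! decodebin ! videoconvert ! "
        "videoscale ! video/x-raw,width=50,height=40 ! "
        "tee name=t "
        "t. ! queue ! autovideosink "       /* the preview window */
        "t. ! queue ! appsink name=cache",  /* frames for the cache */
        location);
    GstElement *pipe = gst_parse_launch (desc, NULL);

    g_free (desc);
    return pipe;
  }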

This is all still theory though.. I would say that a key component of 
the system is one that converts the timeline hierarchy into a gstreamer
bin and updates this bin over time.
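
A first stab at the shape of that component, reusing the hypothetical
TimelineNode type sketched earlier:

  /* Hypothetical converter: walk the timeline tree bottom-up and
   * build a gstreamer bin per node (leaves become track bins,
   * inner nodes become effect bins wrapping their children). */
  static GstElement *
  timeline_to_bin (TimelineNode *node)
  {
    GstElement *bin = gst_bin_new (node->name);
    GList *walk;

    for (walk = node->children; walk; walk = walk->next) {
      TimelineNode *child = walk->data;

      child->bin = timeline_to_bin (child);
      gst_bin_add (GST_BIN (bin), child->bin);
      /* here: link the child into this node's effect element and
       * record start/stop so the scheduler can (de)activate it */
    }
    node->bin = bin;
    return bin;
  }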


> 
>       - If I have two video sources and I want to see them one after
> another, which structure should I use?

not sure what you mean; I assume you want to play one media stream
after the other... In this case you would have the scheduler activate
the first bin until EOS and then activate the second one.
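
Sketched with a bus watch (again assuming the current message API; the
clip switch itself is left as a comment):

  #include <gst/gst.h>

  /* Sketch: when the first clip posts EOS, swap in the second one.
   * Needs a running GLib main loop to dispatch the watch. */
  static gboolean
  bus_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
  {
    GstElement *pipeline = GST_ELEMENT (user_data);

    if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_EOS) {
      g_print ("EOS on %s, switching clips\n",
          GST_ELEMENT_NAME (pipeline));
      /* here: deactivate/remove the first bin, add and activate
       * the second one */
    }
    return TRUE;  /* keep the watch installed */
  }

  static void
  watch_for_eos (GstElement *pipeline)
  {
    GstBus *bus = gst_element_get_bus (pipeline);

    gst_bus_add_watch (bus, bus_cb, pipeline);
    gst_object_unref (bus);
  }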



> 
>       - What is the format of the video/raw data?

It's still undefined :( We currently use fourccs to define the type
(RGB, YUV, etc.).
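
For reference, a fourcc is just four ASCII characters packed into a
32-bit integer, and there is a macro for building them:

  #include <gst/gst.h>

  /* e.g. the YUY2 and I420 YUV layouts as fourcc codes */
  guint32 yuy2 = GST_MAKE_FOURCC ('Y', 'U', 'Y', '2');
  guint32 i420 = GST_MAKE_FOURCC ('I', '4', '2', '0');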

> 
>       - Is there any plugin to alpha-blend two sources into one? And
> one to stretch a video in time (keeping the framerate)?

not yet.


my 0.02 EUR

Wim

> 
> Thanks
> -- 
>  __
>  )_) \/ / /  email: mailto:ryu at gpul.org
> / \  / (_/   www  : http://pinguino.dyndns.org
> [ GGL + GAnSO developer ] & [ GPUL member ]
> 




