[gst-devel] GNOME Animation Studio

Ruben ryu at gpul.org
Sat Feb 3 19:57:53 CET 2001


On 2001/Feb/03, Wim Taymans wrote:

> I'm glad you are looking at GStreamer. Video editing is one of my goals
> with GStreamer. I'm currently downloading and inspecting your code, so
> more feedback will follow soon...

	I'm waiting impatiently :D
 
> >       * file: either an image (png, ppm, etc) or a video (currently only
> > mpeg-1). They are the leaves of the tree.
> 
> We also have those elements as leaf nodes (GstElement). We actually are a
> bit more modular, you'll have an element for reading from disk or network
> and a separate element for decoding the media.

	Yes, this part isn't very well thought out, but I don't have many
alternatives :(. I only have file sources because I need to know the size of
the source. It's already hard to know the size of an MPEG-1 file, I can't
figure out what I should do for network sources, and it would be impossible
for a video-for-linux source, where the user decides when he wants to start
saving and when he wants to stop. So the solution I have taken is that ganso
only reads from files, and the user is responsible for saving to files the
media sources from HTTP, from RTP, from his webcam or from wherever he
wants.

> >       * sequential composition: can have sub-nodes that will be played
> > sequentially, with the possibility of padding before each node.
> > 
> >       * concurrent composition: it's like the layer-system of The GIMP,
> > but applied to videos. Of course, as in GIMP, it's useful if you have top
> > layers with alpha.
> 
> I'll have to look at the code to understand what you mean with the
> composition.

	It's easy:

	Sequential (defines composition of media sources in time):

---------------------------------------------
padding 0| Video 0 | padding 1 | Video 1| ...
---------------------------------------------
------ time ---->

	When the user uses the preview window to see what he will get, he
first sees a black image (padding 0) for some frames, then video 0, then
another black image (padding 1) for some frames, then video 1.
	This black image is really a "transparent" image, so if the sequence
is composed concurrently with something else, during the paddings you will
see the "bottom" video.
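
	In code, the lookup a sequential node has to do would be something
like this (all the names here are invented, it's only a sketch of the idea,
not ganso's actual code):

struct Node;                      /* a file leaf or another composition */

typedef struct {
    int          padding;         /* transparent frames before the child */
    int          length;          /* frames the child itself lasts       */
    struct Node *node;
} SeqChild;

/* Returns the child that owns global frame n and stores the child-local
 * frame offset in *offset; returns NULL when n falls inside a padding
 * (the caller then emits a fully transparent frame). */
struct Node *seq_lookup(SeqChild *c, int n_children, int n, int *offset)
{
    int i;

    for (i = 0; i < n_children; i++) {
        if (n < c[i].padding)
            return NULL;          /* inside padding i: transparent frame */
        n -= c[i].padding;
        if (n < c[i].length) {
            *offset = n;          /* inside child i, at local offset n */
            return c[i].node;
        }
        n -= c[i].length;
    }
    return NULL;                  /* past the end of the sequence */
}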


	Concurrent (defines composition of media sources in space):

-------------------------------
<===========Video 0===========>
-------------------------------
<======Video 1======>
-------------------------------
<=========Video 2=========>
-------------------------------
------ time ----->

	The three videos are composed: if you use the preview window, in the
first frame you get video 0, video 1 over it, and video 2 over both. If the
upper videos have transparent zones, in those zones you will see what
happens below. For example, imagine that Video 2 is an animation of a window
opening; behind the window there is nothing, only transparency, so when it's
combined with Video 1 you will see a window opening and letting you see the
second video. If Video 1 is a transparent hole that gets bigger after the
window is completely opened, the result will be a window that opens and
reveals a hole that gets bigger, letting you see Video 0.
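
	The merging itself is just the classic "over" operation on the
internal RGBA format. A minimal sketch (again with invented names, and
ignoring what happens to the result's alpha channel):

typedef struct {
    unsigned char r, g, b, a;     /* the 8,8,8,8 internal format */
} Pixel;

/* Blend the top layer onto the bottom layer in place.  Where the top
 * alpha is 0 the bottom shows through untouched; where it is 255 the
 * top replaces the bottom completely. */
void compose_over(Pixel *bottom, const Pixel *top, int n_pixels)
{
    int i;

    for (i = 0; i < n_pixels; i++) {
        int a = top[i].a;

        bottom[i].r = (top[i].r * a + bottom[i].r * (255 - a)) / 255;
        bottom[i].g = (top[i].g * a + bottom[i].g * (255 - a)) / 255;
        bottom[i].b = (top[i].b * a + bottom[i].b * (255 - a)) / 255;
    }
}

	So for the three videos above you start with a frame of video 0,
call compose_over() with video 1's frame, then again with video 2's frame.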


> The GStreamer core is media type independent and agnostic, we have a
> property system to describe the media types. The meaning of the media data
> is entirely handled by the plugins so in theory you can use whatever media
> type you like.

	When ganso reads any video media, each time it reads a frame (well,
not ganso itself, its plugins, of course) it copies the frame to an internal
32-bit format (8,8,8,8 -> red, green, blue, alpha). All filter plugins work
on this format, so it's very easy to make a new filter, and you don't have
to worry about what kind of representation the media uses.
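
	To give an idea of why filters become trivial, a hypothetical
grayscale filter on that format could be as small as this (the Frame type
and the function name are invented, the real plugin API surely looks a bit
different):

typedef struct {
    int            width, height;
    unsigned char *data;          /* width * height * 4 bytes: R,G,B,A */
} Frame;

/* Average the three color channels into gray, leaving alpha alone. */
void filter_grayscale(Frame *f)
{
    unsigned char *p   = f->data;
    unsigned char *end = p + f->width * f->height * 4;

    for (; p < end; p += 4) {
        unsigned char y = (p[0] + p[1] + p[2]) / 3;  /* cheap luma */
        p[0] = p[1] = p[2] = y;          /* p[3] (alpha) is untouched */
    }
}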

> You should use GStreamer, no doubt about that. Since our goals are exactly
> the same as yours, we should be able to integrate GStreamer. You have to
> take into account though that this will be a first real-life use of
> GStreamer in an app like GANSO, so you might have problems or requirements
> that we need to add to the core.

	Yes, this is what I thought. Then I think it's better to make two
branches of GAnSO, one traditional and another trying to use GStreamer. I
hope ganso is well designed enough to support both branches without very
much cut&paste...

> A quick look at the site tells me that the tree you use is mostly for
> describing the video sequences and how they will be merged together.

	Yes, the tree is only an internal representation of the data
definition. When the resulting video gets built, the codec asks this tree
for images (of course the shape of the tree is transparent to the codec, it
only asks the root) and each node knows how to merge its child nodes,
spatially or temporally, if it has anything to merge.
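
	As a sketch of what "the codec only asks the root" means (invented
names again, simplified to just the dispatch):

typedef struct Frame Frame;       /* the internal RGBA image */
typedef struct Node  Node;

struct Node {
    /* Each node type (file leaf, sequential, concurrent) supplies its
     * own implementation; composition nodes recurse into their children
     * and merge the results temporally or spatially. */
    Frame *(*get_frame)(Node *self, int n);
    Node   **children;
    int      n_children;
};

/* The codec never sees the shape of the tree, only this call: */
Frame *render_frame(Node *root, int n)
{
    return root->get_frame(root, n);
}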

> the GStreamer architecture uses a tree to do the actual rendering of those
> sequences, there is a slight difference. I think the tree you use can
> still be used to serve as input for a source element in GStreamer.

	But imagine that I have the GStreamer tree built up, and I can show
the user a render of that tree. Can I then save it to a file? (I mean to an
MPEG-1 file or a DivX file or the like.) If I can, then there is no
difference...

Regards
-- 
 __
 )_) \/ / /  email: mailto:ryu at gpul.org
/ \  / (_/   www  : http://pinguino.dyndns.org
[ GGL + GAnSO developer ] & [ GPUL member ]



