[gst-devel] [RFC] Video blending interface

Gergely Nagy gergely.nagy at neteyes.hu
Fri Jan 14 10:23:08 CET 2005


On Fri, 2005-01-14 at 14:42 +0100, Benjamin Otte wrote:
> Some random comments that came to my mind while reading the mail:
> 
> On Fri, 14 Jan 2005, Gergely Nagy wrote:
> 
> > The third requirement is that blending should be an interface, not one
> > plugin that does all kinds of blending. There are a few reasons I think
> > this is a good idea: first, it makes the plugin far less complex;
> > second, makes it easier to reuse; third, this design still allows a
> > larger, bin-like plugin which has on-demand sinks, and dispatches the
> > work to other blenders (together with z-order & stuff).
> >
> I'm assuming you are talking about a base class here, when you talk about
> an "interface" - say GstVideoblender, kind of like GstVideofilter
> already is a base class for all sorts of filter effects.
> The word interface in the GStreamer sense means a GObject interface used
> to provide a common API from an element to applications (see GstOverlay or
> GstMixer for examples)

Yes, I am. Sorry for the confusion.

> > Videomixer supports the xpos/ypos pad properties, which is one half of
> > the bounding_box approach I need. It also supports an alpha pad
> > property, which I did not mention, but is important too. However, as far
> > as I see, it always adds a background to the image at its srcpad. It is
> > also one single plugin doing too much (imho, at least). It would make
> > the code simpler and more reusable if different blending methods were
> > different plugins implementing the same interface. Then videomixer's job
> > would only be co-ordination.
> >
> I'm not sure I understand why there's need for a bounding box. Given two
> video streams, you just need the xpos and ypos to know how to compose
> them. If you want to compose only a partial image, you could use an
> element beforehand that handles removing the unwanted parts.

Well, that's a possibility too... but I'm after a solution that is more
resource-friendly. For example, imagine the following scenario: we have
a video feed from a webcam, a pre-rendered text, and a logo. Obviously,
the logo and the pre-rendered text are static, so I'd rather not touch
them, nor re-generate them every frame.

Now, let's assume there's no bounding box, and I use a separate element
to do the cropping: that means the original image either gets
clobbered, which is bad, or gets copied, which is only slightly better.
I want to avoid copying. Copying a 320x24 buffer once is okay (copying
in the sense that it gets blended onto the background), but copying it
twice is a waste. The hardware this will run on needs every
optimisation I can possibly make.

Now, if the blender plugin handles the cropping, there is no need to
copy the static buffer. And the plugin could be implemented so that,
given a flag, it does not copy the background before merging the other
buffer onto it, but blends into it in place. That way, there's no
copying at all.

> > Plugins implementing this would take I420 or AYUV as input,
> > and output AYUV.
> >
> I'm not sure why a blending plugin should be bothered doing colorspace
> conversion from I420 to AYUV. That's clearly the job of a colorspace
> converter.
> Also it sounds useful to abstract away colorspaces so that you can later
> on extend it to provide other implementations, like for example RGBA or
> even color key blending with non-alpha colorspaces. (Or even OpenGL or
> cairo, if we ever start passing data this abstracted)

Hrm, true enough... The main reason I had was that my video source does
not need a full alpha plane, and adding one would just take time and
memory... Though not much, so this point may well be moot.
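
For what it's worth, restricting the sinks to AYUV would then just be a
matter of the pad templates; roughly something like this (a sketch with
made-up pad names, written against the 0.8-style static templates):

    #include <gst/gst.h>

    static GstStaticPadTemplate bg_sink_template =
    GST_STATIC_PAD_TEMPLATE ("bg_sink",
        GST_PAD_SINK,
        GST_PAD_ALWAYS,
        GST_STATIC_CAPS ("video/x-raw-yuv, format = (fourcc) AYUV"));

    static GstStaticPadTemplate fg_sink_template =
    GST_STATIC_PAD_TEMPLATE ("fg_sink",
        GST_PAD_SINK,
        GST_PAD_ALWAYS,
        GST_STATIC_CAPS ("video/x-raw-yuv, format = (fourcc) AYUV"));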

> > They would not have
> > on-demand sinks, but two static ones: the background, and the
> > foreground. On the srcpad, the result would always have the same size as
> > the background (this is because I want to be able to merge a part of a
> > larger element onto a smaller one). Basically, that's all. Except, I am
> > not sure how the framerate should be handled...
> >
> Using two static pads sounds like a good idea here. I would make the pad
> providing the background image be the "primary" pad (for lack of a better
> word) and take output image size and framerate from this plugin.

Yeah, I had a similar idea... However, this would present a tough
situation when the two framerates differ, and I want to avoid copying
if possible. Hmm... Maybe not, if I correctly _ref() them... Yeah, with
proper refcounting, this sounds doable.
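
What I have in mind for the reuse is roughly this (a sketch; last_fg is
a hypothetical field in the element's struct):

    /* When a new foreground buffer arrives, drop our reference to the
     * old one and keep a reference to the new one; otherwise we keep
     * blending the last buffer we saw, without copying it. */
    if (new_fg != NULL) {
      if (blend->last_fg != NULL)
        gst_buffer_unref (blend->last_fg);
      blend->last_fg = new_fg;
    }
    /* ... blend blend->last_fg onto the background buffer ... */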

> Now there's two approaches for the framerate on the secondary pad:
> 1) force it to be equal to the framerate of the primary pad. This sounds
> most useful given that we already have the videorate element and that it
> makes the blending plugin not be a rate plugin.

This sounds best to me too.
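
So a pipeline would look something like this ("blender" and its pad
names are hypothetical, and I'm writing the caps syntax from memory):

    gst-launch blender name=blend ! xvimagesink \
      videotestsrc ! video/x-raw-yuv,framerate=(double)25.0 \
        ! blend.bg_sink \
      videotestsrc ! videorate \
        ! video/x-raw-yuv,framerate=(double)25.0 ! blend.fg_sink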

> 2) simply duplicate/drop images from the secondary pad to match the
> timestamp of buffers coming in from the primary pad. This approach is
> certainly more complicated, but may provide more flexibility.
> I'd start with approach 1 and only change if I encounter serious
> obstacles.

Agreed.

> > One idea is to not care about it at all, and on the src pad, always have
> > a frame rate of 0 (a variant of this is to have the framerate of the
> > slower sink pad). This would mean that if one of the sinks does not have
> > new data, the previous buffer we got from it would be reused.
> >
> This is probably not a good idea because there's elements expecting a
> constant frame rate (like encoder elements). I think it's better to just
> use the frame rate from the primary pad here.

Wouldn't a videorate element after the blender solve this issue?
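
That is, something like this fragment (blender again hypothetical), so
the encoder always sees a constant rate:

    ... blender name=blend ! videorate \
        ! video/x-raw-yuv,framerate=(double)25.0 ! <encoder> ! ...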

> PS: To any GStreamer dev desperately looking for something to do: I
> think we still need support for AYUV (and probably RGBA) in quite some
> important elements, like colorspace converters or videotestsrc.
> Also, shouldn't videotestsrc provide a "transparent" pattern?

I'm already looking into the AYUV/RGBA colorspace stuff, as I've seen
problems with them, and need them fixed. (Though, with only 3 patches
accepted, I can hardly be considered a GStreamer dev... O:)

-- 
Gergely Nagy <gergely.nagy at neteyes.hu>
NetEyes Kft.




