[gst-devel] Text rendering

Maciej Katafiasz ml at mathrick.org
Sun Feb 20 14:27:28 CET 2005

On Sun, 20-02-2005 at 22:58 +0100, Gergely Nagy wrote:
> > > > Is it really going to be expensive if most of that image will be 100%
> > > > alpha anyway?
> > >
> > > Yes, unless you do some RLE, in which case you're overdoing stuff,
> > > methinks. If the image you generate has empty spaces, you could just
> > > skip generating those, and tell the mixer where to merge the image,
> > > instead of positioning it yourself. (This way, the user can have
> > > subtitles at the top of the video if he so wants, and the renderer
> > > does not need to know about it at all).
> > 
> > Aaah, you mean generating empty space is expensive. I get it now. Oh
> > well, if it really poses a problem, we can use what Jan proposed,
> > regions similar to how X represents damage areas.
> Generating them is not too expensive, blending a large image is. An
> image with lots of empty lines is large.

Hmm, is going over empty pixels really that much of an overhead? I didn't
suspect that; if so, we'll use cropping info.
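To illustrate the cropping idea in plain C (an illustrative sketch, not GStreamer or imagemixer code; all names here are made up): the renderer hands the mixer only the non-empty rectangle plus its position, and the mixer walks just that region instead of a full-size, mostly-transparent image:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: blend a small RGBA overlay into an RGBA frame,
 * only inside a crop rectangle supplied by the renderer. Blending cost
 * is proportional to crop_w * crop_h, not to the frame size, which is
 * why cropping info beats a full-frame, mostly-transparent image. */
static void
blend_cropped (uint8_t *frame, int frame_w,
               const uint8_t *overlay, int crop_x, int crop_y,
               int crop_w, int crop_h)
{
  for (int y = 0; y < crop_h; y++) {
    const uint8_t *src = overlay + (size_t) y * crop_w * 4;
    uint8_t *dst = frame + ((size_t) (crop_y + y) * frame_w + crop_x) * 4;
    for (int x = 0; x < crop_w; x++) {
      uint8_t a = src[3];
      /* Straight "over" blend on the colour channels. */
      for (int c = 0; c < 3; c++)
        dst[c] = (uint8_t) ((src[c] * a + dst[c] * (255 - a)) / 255);
      src += 4;
      dst += 4;
    }
  }
}
```

With a full-frame overlay the same loop would have to visit every pixel of the frame, alpha 0 or not; the crop rectangle is what makes the empty area genuinely free.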

> > Cairo is nice, because we get support for basically everything we want
> > to do in one place, and most importantly, we can combine those ops.
> > Which saves us huge, clunky pipelines as described above. Oh, did I
> > mention SSA/ASS also includes rendering arbitrary shapes? :) Not that
> > anyone supports or uses that, but it does.
> Sounds nice! Rendering arbitrary shapes is something I might even use
> (think something like that "bar" you can see on, eg, MTV, on which
> they print the song title, author, and so on).

Oh, someone with an application for that shapes stuff. Cool, first time
I've seen one ;).

> I think I'm persuaded that my original idea was inappropriate for many jobs.
> So, the only thing that remains, and on which we don't seem to agree,
> is the way of blending the rendered stuff onto a video frame. I'd prefer
> using imagemixer, that leads to the least amount of code duplication
> (not to mention that it can be pretty easily extended to be able to
> mix images in various interesting formats, so it is not limited to
> RGBA or AYUV; one of the things I want to do with imagemixer is to be
> able to push I420+alpha buffers to it, and get I420 as output. That
> way I won't have to do any colorspace conversions in my application at
> all).

If the buffer is cropped, I don't think there's any significant overhead
compared to direct-to-video output, so it's probably OK to make it the
primary mechanism of blitting.
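The I420+alpha idea quoted above could look roughly like this plain-C sketch (assumed layout: separate Y/U/V planes plus a full-resolution alpha plane; this is not imagemixer's actual code). Because I420's chroma planes are subsampled 2x2, the alpha is averaged over each 2x2 block when blending U and V, and no RGBA/AYUV conversion is needed anywhere:

```c
#include <stdint.h>
#include <stddef.h>

/* Blend one full-resolution plane with per-pixel alpha. */
static void
blend_plane (uint8_t *dst, const uint8_t *src, const uint8_t *alpha,
             int w, int h)
{
  for (size_t i = 0; i < (size_t) w * h; i++)
    dst[i] = (uint8_t) ((src[i] * alpha[i] + dst[i] * (255 - alpha[i])) / 255);
}

/* Hypothetical sketch: blend an I420 overlay carrying a separate alpha
 * plane into an I420 frame, producing I420 output directly. */
static void
blend_i420_with_alpha (uint8_t *dst_y, uint8_t *dst_u, uint8_t *dst_v,
                       const uint8_t *src_y, const uint8_t *src_u,
                       const uint8_t *src_v, const uint8_t *alpha,
                       int w, int h)
{
  /* Luma: full resolution, per-pixel alpha. */
  blend_plane (dst_y, src_y, alpha, w, h);

  /* Chroma: quarter-size planes, so average alpha over each 2x2 block. */
  int cw = w / 2, ch = h / 2;
  for (int y = 0; y < ch; y++) {
    for (int x = 0; x < cw; x++) {
      const uint8_t *a = alpha + (size_t) (2 * y) * w + 2 * x;
      int av = (a[0] + a[1] + a[w] + a[w + 1]) / 4;
      size_t i = (size_t) y * cw + x;
      dst_u[i] = (uint8_t) ((src_u[i] * av + dst_u[i] * (255 - av)) / 255);
      dst_v[i] = (uint8_t) ((src_v[i] * av + dst_v[i] * (255 - av)) / 255);
    }
  }
}
```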

> On the other hand, it seems to me, there is no easy way in 0.8 to pass
> bounding box parameters from the renderer to the mixer. I hope this
> will change in 0.9. So, for 0.8, it might be better to have a
> cairooverlay element. (Or a cairorenderer, that outputs something
> representing a cairo canvas, embedded in a GstBuffer, and a
> cairooverlay that takes it, and overlays it onto a video frame. This
> latter would make it easier to port the thing later to the
> param-passing way :)

We can add cropping info to the buffers themselves, that's no problem. In
general, it's no problem to build a mechanism today that works almost like
dparams-in-stream using regular buffers; it just won't be generic (and
won't enjoy any special scheduler love either, if the scheduler were to
know about and treat dparams specially).
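A minimal sketch of what per-buffer cropping info might look like (hypothetical structs, not the 0.8 GstBuffer API): the renderer fills in the bounding box of what it actually drew, and the mixer blends only inside it:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-buffer cropping info, in the spirit of the idea
 * above. Field names are illustrative only. */
typedef struct {
  int x, y;           /* top-left of the rendered region in the frame */
  int width, height;  /* extent of the region actually drawn          */
} CropInfo;

typedef struct {
  uint8_t *data;      /* cropped RGBA pixels, width * height * 4 bytes */
  CropInfo crop;      /* where the mixer should blend them             */
} OverlayBuffer;

/* How many pixel bytes the cropped payload carries. */
static size_t
overlay_size (const OverlayBuffer *buf)
{
  return (size_t) buf->crop.width * buf->crop.height * 4;
}
```

Since the position travels with the buffer, the renderer never needs to know where the subtitles end up; the user can move them to the top of the frame and only the mixer cares.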


Maciej Katafiasz <ml at mathrick.org>
