[gst-devel] Text rendering

Jan Schmidt thaytan at noraisin.net
Tue Feb 22 04:06:37 CET 2005

On Tue, 2005-02-22 at 12:07 +0100, Gergely Nagy wrote:

>On Mon, 2005-02-21 at 15:34 +1100, Jan Schmidt wrote:
>> On Sun, 2005-02-20 at 21:58 +0100, Maciej Katafiasz wrote:
>> >On Sun, 20-02-2005 at 18:51 +0100, Gergely Nagy wrote:
>> >> Note that the text renderer element would only create a buffer that
>> >> is large enough to hold the text, not as large as the whole frame
>> >> it will be rendered onto. This way the extra overhead coming from
>> >> the fact that renderer and blender are separated is insignificant.
>> >
>> >Not really, you kinda need to render to full frame (at least logically
>> >so), because subtitles are to be positioned precisely. If we were to
>> >render only "necessary" parts, it will make correct positioning
>> >impossible. As I refuse to pass positioning info via caps, it pretty
>> >much reduces to either RLE or what Jan proposed -- equivalent of X
>> >damage regions. I favor the latter, having RLE decoder inside imagemixer
>> >feels wrong :)
>> Actually, I think an RLE input format will be useful - in fact I'm
>> planning to implement one for DVD subtitling, because RLE is the format
>> the subtitles are given as on DVD. I'm planning to do this with a helper
>> library in gst-libs to avoid duplicating the code everywhere, of course.
>I'd think an RLE decoder would be more useful... The only case where an
>RLE input format might be better is when you can memcpy a bunch of
>pixels over the background. That is the most common case for subtitles,
>true, but even then, blitting an uncompressed image over the picture
>is pretty fast too (and has the added benefit of making it possible to do
>some tricks like transparency, anti-aliasing and so on).

I guess you mean having an RLE decoder separate from the videomixer. My
motivation is currently geared entirely toward DVD: I want to avoid
taking RLE-compressed subtitles off the DVD, writing out an
uncompressed region (which may be as large as the entire framebuffer),
and then reading that back in from main memory to blend it, when a
simple RLE encoding makes the number of bytes that have to move in and
out of main memory quite a lot smaller.
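
The bandwidth argument can be made concrete with a toy sketch: a run-length
encoded subtitle row stores (run length, pixel) pairs, and the blender expands
runs directly onto the frame instead of reading back a full uncompressed
region. Everything here (the `RleRun` struct, `blend_rle_row`) is hypothetical
illustration, not the DVD subpicture wire format or any GStreamer API.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical run: 'count' consecutive pixels of one AYUV value. */
typedef struct {
    uint16_t count;
    uint8_t  a, y, u, v;   /* alpha + colour shared by the whole run */
} RleRun;

/* Blend a row of runs onto an AYUV destination row (4 bytes/pixel).
 * Fully transparent runs are skipped entirely - this is where RLE
 * saves the memory traffic of touching every destination pixel. */
static void blend_rle_row(uint8_t *dst, size_t dst_pixels,
                          const RleRun *runs, size_t n_runs)
{
    size_t x = 0;
    for (size_t i = 0; i < n_runs && x < dst_pixels; i++) {
        size_t end = x + runs[i].count;
        if (end > dst_pixels) end = dst_pixels;
        if (runs[i].a == 0) { x = end; continue; }  /* skip transparent run */
        for (; x < end; x++) {
            uint8_t *p = dst + 4 * x;
            unsigned a = runs[i].a;
            p[0] = 255;  /* result is opaque */
            p[1] = (uint8_t)((a * runs[i].y + (255 - a) * p[1]) / 255);
            p[2] = (uint8_t)((a * runs[i].u + (255 - a) * p[2]) / 255);
            p[3] = (uint8_t)((a * runs[i].v + (255 - a) * p[3]) / 255);
        }
    }
}
```

Note that the transparent-run fast path never reads the destination at all,
which is exactly the memory-traffic saving argued for above.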

There's no reason videomixer can't accept several buffer formats on its
input pads and have internal functions that know how to blend each of
them, within reason - I think RLE is simple enough, but a JPEG format
would be silly.

>> Also, I don't think we need anything as complex as a full XDamage
>> structure, we just need some values in the first bytes of the buffer
>> that specify the x/y offset and have each buffer represent a single
>> output rectangle to draw. To draw multiple regions in an output frame,
>> the text renderers could supply a stream of buffers with 0 duration and
>> identical timestamps. Only the last buffer of the set would have a
>> non-zero duration.
>I don't really like this... this places additional burden on the mixer,
>which will have to care about buffer timestamps and durations. Couldn't
>we have the text renderer output a container format that has a header
>prepended to each region-buffer, and have a regionmixer that parses the
>format, and calls into imagemixer appropriately? This way, the mixer
>would not need extensive modifications to deal with 0 duration buffers.
>It wouldn't even know about it, as the regionmixer takes care of that

I don't know whether you're suggesting a subclass of videomixer here, or
whether you're confused about how GStreamer elements interact.
If anything, I think we'd add your 'multiple regions overlay' format as
a frame encoding that the videomixer knows how to read and blend onto
an AYUV framebuffer, and then add that mime-type to the pad capabilities
for the sink pads of videomixer.
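
One possible layout for such a region buffer: a small header carrying the
x/y offset and size in front of raw AYUV pixels. The struct and parser below
are purely hypothetical, sketching the idea rather than any agreed format.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical per-region header at the start of each buffer. */
typedef struct {
    uint16_t x, y;          /* offset into the output frame */
    uint16_t width, height; /* size of the region in pixels  */
} RegionHeader;

/* Copy an AYUV region into an AYUV frame (4 bytes/pixel) at the offset
 * given in the header. This is an opaque blit for brevity; a real mixer
 * would alpha-blend each pixel instead. */
static void blit_region(uint8_t *frame, size_t frame_w, size_t frame_h,
                        const uint8_t *buf)
{
    RegionHeader h;
    memcpy(&h, buf, sizeof h);                 /* avoid alignment issues */
    const uint8_t *pixels = buf + sizeof h;

    for (size_t row = 0; row < h.height && h.y + row < frame_h; row++) {
        size_t cols = h.width;
        if (h.x + cols > frame_w)
            cols = frame_w - h.x;              /* clip to the frame edge */
        memcpy(frame + 4 * ((h.y + row) * frame_w + h.x),
               pixels + 4 * row * h.width, 4 * cols);
    }
}
```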

>I mean something like, text_renderer outputs this container format,
>regionmixer parses it into individual buffers, then passes each one in
>turn to gst_imagemixer_mix(). This sounds pretty easy to do.

What is 'gst_imagemixer_mix()'? I don't know whether you're describing
having 2 separate GStreamer elements named 'regionmixer' and
'imagemixer'. If you are, well - GStreamer elements are only allowed to
interact by pushing GStreamer buffers and events to each other; they
can't call functions in another element.
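
A toy model of that contract, with no real GStreamer types involved:
downstream registers a chain function on its sink pad, and upstream can only
hand buffers to that pad, never call into the element's internals. The
`Mixer`/`pad_push` names are invented for this sketch.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct { const uint8_t *data; size_t size; } Buffer;
typedef void (*ChainFunc)(void *element, Buffer *buf);

typedef struct {
    void     *element;  /* opaque downstream element */
    ChainFunc chain;    /* its buffer-handling entry point */
} SinkPad;

/* The only way data crosses an element boundary: hand the buffer to
 * whatever chain function the downstream element registered. */
static void pad_push(SinkPad *pad, Buffer *buf)
{
    pad->chain(pad->element, buf);
}

/* A minimal downstream "element" that just counts bytes it receives. */
typedef struct { size_t bytes_seen; } Mixer;

static void mixer_chain(void *element, Buffer *buf)
{
    ((Mixer *)element)->bytes_seen += buf->size;
}
```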

>However, what would be best, is to push a whole buffer set over on a
>pad. That way, one wouldn't have to do the splitting into individual
>buffers part (far more resource friendly, and faster, since it means far
>less buffer copying), nor would we need the 0-duration checking.

There's no way in GStreamer to pass a group of buffers as one. Are you
using 'buffer' in a different way than we use it in GStreamer circles?

Jan Schmidt <thaytan at noraisin.net>