[gst-devel] Text rendering
Gergely Nagy
gergely.nagy at neteyes.hu
Tue Feb 22 04:45:23 CET 2005
On Tue, 2005-02-22 at 23:05 +1100, Jan Schmidt wrote:
> On Tue, 2005-02-22 at 12:07 +0100, Gergely Nagy wrote:
> > On Mon, 2005-02-21 at 15:34 +1100, Jan Schmidt wrote:
> > > On Sun, 2005-02-20 at 21:58 +0100, Maciej Katafiasz wrote:
> > > >On Sun, 2005-02-20 at 18:51 +0100, Gergely Nagy wrote:
> > > >> Note, that the text renderer element would only create a buffer that
> > > >> is large enough to hold the text, not as large as the whole frame
> > > >> it will be rendered onto. This way the extra overhead coming from
> > > >> the fact that renderer and blender are separated is insignificant.
> > > >
> > > >Not really, you kinda need to render to full frame (at least logically
> > > >so), because subtitles are to be positioned precisely. If we were to
> > > >render only "necessary" parts, it would make correct positioning
> > > >impossible. As I refuse to pass positioning info via caps, it pretty
> > > >much reduces to either RLE or what Jan proposed -- equivalent of X
> > > >damage regions. I favor the latter, having RLE decoder inside imagemixer
> > > >feels wrong :)
> > >
> > > Actually, I think an RLE input format will be useful - in fact I'm
> > > planning to implement one for DVD subtitling, because RLE is the format
> > > the subtitles are given as on DVD. I'm planning to do this with a helper
> > > library in gst-libs to avoid duplicating the code everywhere, of course.
> >
> > I'd think an RLE decoder would be more useful. The only case where an
> > RLE input format might be better is when you can memcpy a bunch of
> > pixels over the background. That is the most common case for subtitles,
> > true, but even then, blitting an uncompressed image over the picture
> > is pretty fast too (and has the added benefit of making it possible to
> > do some tricks, like transparency, anti-aliasing and the like).
>
> I guess you mean having an RLE decoder separate to the videomixer. My
> motivation is entirely geared toward DVD, currently - I want to avoid
> taking RLE compressed subtitles off the DVD and writing out an
> uncompressed region (which may be as large as the entire framebuffer)
> and then reading that back in from main memory to blend it, when a
> simple RLE encoding makes the number of bytes that need to move in and
> out of main memory quite a lot smaller.
True enough.
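And the bandwidth numbers are easy to believe: a full PAL frame at one byte per pixel is 720*576 = 414720 bytes (about 405 KiB), while the RLE subpicture data on a DVD is typically only a few KiB per frame. To make the RLE-aware blend path concrete, here is a minimal sketch in C; the (length, value) run format and the `rle_blend_line` helper are made up for illustration - actual DVD subpicture RLE is more involved, and nothing like this exists in gst-libs yet:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Expand (length, value) byte pairs directly onto one line of the
 * background; value 0 is treated as fully transparent, so those
 * pixels are skipped and the background shows through.  Returns the
 * number of pixels covered. */
size_t
rle_blend_line (uint8_t *dst, size_t dst_len,
                const uint8_t *runs, size_t n_runs)
{
  size_t x = 0;

  for (size_t i = 0; i < n_runs && x < dst_len; i++) {
    size_t len = runs[2 * i];
    uint8_t val = runs[2 * i + 1];

    if (x + len > dst_len)
      len = dst_len - x;
    if (val != 0)               /* opaque run: overwrite the background */
      memset (dst + x, val, len);
    x += len;                   /* transparent run: just advance */
  }
  return x;
}
```

The point of the sketch is that the mixer never touches the transparent spans at all - exactly the memcpy-over-background case, without first expanding the runs into a frame-sized scratch buffer.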
> There's no reason videomixer can't accept several available buffer
> formats on the input pads and have some internal functions that know
> how to blend those, within reason - I think RLE is simple enough, but
> a JPEG format would be silly.
Point taken.
> > > Also, I don't think we need anything as complex as a full XDamage
> > > structure, we just need some values in the first bytes of the buffer
> > > that specify the x/y offset and have each buffer represent a single
> > > output rectangle to draw. To draw multiple regions in an output frame,
> > > the text renderers could supply a stream of buffers with 0 duration and
> > > identical timestamps. Only the last buffer of the set would have a
> > > non-zero duration.
> >
> > I don't really like this... this places additional burden on the mixer,
> > which will have to care about buffer timestamps and durations. Couldn't
> > we have the text renderer output a container format that has a header
> > prepended to each region-buffer, and have a regionmixer that parses the
> > format and calls into imagemixer appropriately? This way, the mixer
> > would not need extensive modifications to deal with 0-duration buffers.
> > It wouldn't even know about them, as the regionmixer takes care of that
> > stuff.
>
> I don't know whether you're suggesting a subclass of videomixer here,
> or whether you're confused about how GStreamer elements interact.
> If anything, I think we'd add your 'multiple regions overlay' format
> as a frame encoding that the videomixer knows how to read and blend
> onto an AYUV framebuffer, and then add that mime-type to the pad
> capabilities for the sink pads of videomixer.
This has the drawback of having to specify the format of the buffers
inside the container too. (E.g., image/x-gst-regions, format=AYUV or
format=I420... and once you get to RGB, this gets hairy. Yuck.)
True enough, that can work just fine, but it seems to me it would be
much clearer if we did not need a new MIME type at all, and could just
pull buffers in as needed. Not to mention that one of the things I love
about GStreamer is that in most cases demuxing and decoding are
separated - and that is exactly what bothers me here. I don't mind a
subclass of imagemixer that does some demuxing too, but I'd rather keep
that separate from the basic imagemixer.
My main aim with imagemixer was to make it simple and easy to build
upon (I still have work to do on this front, though). My vision is that
imagemixer handles only two pads and pulls one buffer from each. Then
there would be videomixer, whose purpose is to handle the fancier
stuff, like multiple sinks and the ability to pull multiple buffers
from each pad if need be - thus supporting what you outlined before.
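For reference, the "values in the first bytes of the buffer" header
Jan described earlier in the thread could be as small as this; the
struct and field names are only illustrative, not an agreed-upon
format:

```c
#include <stdint.h>

/* One rectangle per buffer: a fixed header giving the position and
 * size of the region, with the pixel data (e.g. AYUV) following
 * immediately after it in the same buffer. */
typedef struct {
  uint32_t x_offset;   /* left edge of the rectangle in the frame */
  uint32_t y_offset;   /* top edge of the rectangle in the frame */
  uint32_t width;      /* rectangle width in pixels */
  uint32_t height;     /* rectangle height in pixels */
} RegionHeader;
```

The mixer would read the header, then blend width x height pixels at
(x_offset, y_offset); in Jan's scheme, several such buffers sharing one
timestamp (all but the last with 0 duration) describe the multiple
regions of a single output frame.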
> > I mean something like, text_renderer outputs this container format,
> > regionmixer parses it into individual buffers, then passes each one in
> > turn to gst_imagemixer_mix(). This sounds pretty easy to do.
> What is 'gst_imagemixer_mix()'? I don't know whether you're describing
> having 2 separate GStreamer elements named 'regionmixer' and
> 'imagemixer'. If you are, well - GStreamer elements are only allowed
> to interact by pushing GStreamer buffers and events to each other,
> they can't call functions in another element.
I'm aware of that - I meant regionmixer to be a subclass of imagemixer.
Sorry if I wasn't clear.
> > However, what would be best is to push a whole buffer set over a
> > pad. That way, one wouldn't have to split things into individual
> > buffers (far more resource-friendly, and faster, since it means far
> > less buffer copying), nor would we need the 0-duration checking.
> >
> There's no way in GStreamer to pass a group of buffers as one. Are you
> using 'buffer' in a different way than we use it in GStreamer circles?
I know there is no such way, but it would come in handy, methinks.
--
Gergely Nagy <gergely.nagy at neteyes.hu>
NetEyes Kft.