[Cogl] cogl-pango API

Robert Bragg robert at sixbynine.org
Mon Jul 21 13:05:18 PDT 2014


Hi Reza,

On Fri, Jul 18, 2014 at 3:18 AM, Reza Ghassemi <reza.robin1 at gmail.com> wrote:
> Hi Robert,
> Thanks again for your info.  Our project is running into increasing
> issues with transparency and depth testing, both for more 3D UIs and
> for UIs that simply want to use depth rather than simulating it by
> changing the draw order, so I'm seriously considering modifying the
> Clutter rendering code to implement the multi-pass game rendering
> technique you mentioned above.
> It seems that there would be several steps on the way to achieving
> that goal. The first would be to make sure render nodes are used for
> all painting, and then to build a full scene tree of render nodes so
> that it can be split and sorted.  Render nodes would also need
> additional information, such as the full transformation matrix, the
> depth-testing state, and whether there is any transparency, which can
> sometimes be difficult to determine, e.g. when it lives in the alpha
> channel of a texture.

To be honest, I never really familiarised myself with how render nodes
are implemented, and I haven't worked on Clutter for some years now.
Render nodes probably make this kind of change much easier than it used
to be, but I would still guess that it would be a pretty big
undertaking, especially if you want to maintain backwards compatibility
and get the change upstream.

Some of the things you mention do seem to be along the right lines,
though.

> Then there
> are ClutterEffects to deal with.

Right; fragment-shader-based effects are applied in screen space, so it
would be pretty tricky to define some way of affecting the depth buffer
for the whole scene when an effect is in use.

> Then there are other state changes happening in clutter_actor_paint()
> that need to be captured in the render nodes.  There are probably many
> more things to deal with.  How much work do you think this is?  Am I
> going to hit difficult roadblocks?

I expect you will hit roadblocks, considering that Clutter wasn't
designed with this kind of rendering model in mind; it's a very
fundamental change.

If I were to take a stab at this, I imagine:
- I'd want some way to constrain the scenegraph to only have actors
that can support multi-pass rendering. Hopefully anything just using
simple render nodes could be made to support this, but more custom
actors would probably need to be updated in some way.
- the paint method will somehow need to specify which pass is currently
running. Adding a paint2() method that could be passed an extensible
PaintContext object might make sense.
- there would need to be some way to flag that an actor is opaque so it
can be skipped when rendering translucent objects that need blending.
- you will need to clearly define the semantics of different rendering
passes and where depth writing, depth testing and blending should be
enabled.
- there will be lots of fun re-architecting the backend rendering code
to work in terms of explicitly traversing the scenegraph, performing
culling, identifying opaque actors and possibly (depending on the GPU)
sorting those on the CPU before running an opaque, front-to-back,
depth-write+test, non-blended pass followed by a translucent,
back-to-front, depth-test, blended pass (see the sketch after this
list).
- picking will also need to be aware of this new design, so that either
picking changes to be handled geometrically, taking into account the
depth and occlusion of the pick geometry, or the pick render is handled
like an opaque, depth-write+test pass.
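
To make those pass semantics concrete, here is a very rough sketch of
how the backend paint might look. The CoglDepthState calls are real
Cogl API, but the Scene type and the scene_* helpers are purely
hypothetical stand-ins for the re-architected traversal, so treat this
as a sketch of the idea rather than a design:

  static void
  paint_scene_two_pass (Scene *scene, CoglFramebuffer *fb)
  {
    CoglDepthState depth_state;

    /* Pass 1: opaque actors, sorted front-to-back, with depth testing
     * and depth writing enabled and no blending, so occluded fragments
     * can be rejected early. */
    cogl_depth_state_init (&depth_state);
    cogl_depth_state_set_test_enabled (&depth_state, TRUE);
    cogl_depth_state_set_write_enabled (&depth_state, TRUE);
    scene_sort_opaque_front_to_back (scene);      /* hypothetical */
    scene_paint_opaque (scene, fb, &depth_state); /* hypothetical */

    /* Pass 2: translucent actors, sorted back-to-front, with depth
     * testing still enabled but depth writes disabled, and blending
     * enabled. */
    cogl_depth_state_set_write_enabled (&depth_state, FALSE);
    scene_sort_translucent_back_to_front (scene);      /* hypothetical */
    scene_paint_translucent (scene, fb, &depth_state); /* hypothetical */
  }

Note that depth state in Cogl is per pipeline rather than per
framebuffer, so the paint helpers would be expected to apply the state
to each actor's pipeline with cogl_pipeline_set_depth_state().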

I don't really feel confident that this represents the full picture of
what would be involved, though.

I would also say that these kinds of changes only lead to a fairly
primitive 3D rendering model, and I don't know whether that will be
enough for your use case.

If you want a more fully 3D UI scenegraph, will you want to be able to
move the 'camera' around at times (i.e. animate your view of the scene
separately from animating the positions of the actors)? Although you
can change the stage's transform, that may not be enough for some use
cases. I've implemented cameras before on a stereoscopic branch of
Clutter, though that was a pretty tricky change in itself too.

Besides getting depth testing to work, are you interested in techniques
such as lighting effects that span actors? This kind of thing also
doesn't fit with the current Clutter design. Some techniques might, for
example, require you to render the depth buffer to a texture, possibly
from a different viewpoint too.
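
If I remember right, more recent Cogl can also expose a framebuffer's
depth buffer as a texture, which is the kind of building block those
techniques need. A minimal sketch, assuming that depth-texture API (I
believe it landed around Cogl 1.14, but treat that as an assumption)
and a CoglTexture 'tex' created elsewhere:

  /* Paint into an offscreen framebuffer, then read its depth buffer
   * back as a texture (assumes Cogl's depth-texture support). */
  CoglOffscreen *offscreen = cogl_offscreen_new_with_texture (tex);
  CoglFramebuffer *fb = COGL_FRAMEBUFFER (offscreen);

  cogl_framebuffer_set_depth_texture_enabled (fb, TRUE);

  /* ... paint the scene (or a depth pre-pass) into fb ... */

  CoglTexture *depth = cogl_framebuffer_get_depth_texture (fb);
  /* 'depth' could then be bound as a pipeline layer and sampled, e.g.
   * for shadow-mapping style lighting across actors. */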

I'm of course biased, but given the direction you are looking to go
with Clutter, I'd be curious to hear more about your use cases, which I
can keep in mind as I develop Rig. Considering the major changes I
think you would need to make to Clutter, something like Rig could
eventually be a good fit for you, as it has been developed with these
kinds of things in mind from the beginning.

Cheers,
Robert

>
> Thanks,
> Reza
>
>
> On Mon, Jan 6, 2014 at 3:37 PM, Robert Bragg <robert at sixbynine.org> wrote:
>>
>> Hi Reza,
>>
>> I'm afraid this is going to be awkward to handle for a few different
>> reasons...
>>
>> One issue here is that cogl-pango currently abstracts away which
>> CoglPipeline gets used internally during _draw_layout(). This is
>> necessary to some extent because pango supports rich text, so
>> cogl-pango at least needs to be able to change the color, but also
>> because glyphs are stored internally in a number of atlas textures,
>> and for any given text cogl-pango needs to figure out which texture
>> holds the glyphs you need and set up the CoglPipeline so it reads
>> from the right texture.
>>
>> It would be good to add a way for a user to specify a template
>> pipeline that cogl-pango could then derive from before making any
>> changes, or alternatively to add some hooks that give you enough
>> information to completely own how all CoglPipelines are created.
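>>
>> For example, a hook along these lines might work (to be clear, no
>> such entry point exists today; the name and signature are purely
>> illustrative):
>>
>>   /* Hypothetical: let the application supply a template pipeline
>>    * that cogl-pango copies before setting the glyph-atlas texture
>>    * and the per-run color, instead of starting from its own
>>    * default. */
>>   void
>>   cogl_pango_renderer_set_pipeline_template (CoglPangoRenderer *renderer,
>>                                              CoglPipeline *template_pipeline);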
>>
>> Besides this missing API capability though, you should understand
>> that each glyph is basically drawn as a textured rectangle that maps
>> a sub-region of a glyph-cache atlas texture to the rectangle that
>> bounds that glyph. This means that in terms of geometry each glyph is
>> represented as a rectangle, so you can't just rely on depth testing
>> based on the geometry; you also need to make sure that you discard
>> transparent and semi-transparent fragments so they don't affect depth
>> testing. If you do that, though, you will be left with horrible
>> jagged edges on all of your glyphs, which I think will be difficult
>> to address given the design of Clutter.
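>>
>> Cogl does at least have pipeline API for the discard part; something
>> like the following drops fragments below an arbitrary alpha threshold
>> (0.5 here) before they reach the depth test:
>>
>>   cogl_pipeline_set_alpha_test_function (pipeline,
>>                                          COGL_PIPELINE_ALPHA_FUNC_GREATER,
>>                                          0.5f);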
>>
>> I'm afraid there is no silver bullet for this kind of problem, and
>> Clutter and cogl-pango aren't currently well geared to support your
>> use case. Although we created toys in the past that used depth
>> testing with Clutter, it was never really considered a priority when
>> there was no pressing use case, which means that Clutter isn't
>> designed to take advantage of GPU depth testing.
>>
>> Often applications using the GPU (e.g. games using OpenGL/D3D) that
>> want to use depth testing to avoid overdraw, but that also need to
>> draw transparent objects, will use a two-pass approach. They will do
>> an opaque pass that draws all opaque objects in a front-to-back order
>> with depth writing and testing enabled, and then a blending pass that
>> draws all transparent objects in a back-to-front order with blending
>> enabled, depth testing enabled and depth writing disabled. The way
>> Clutter is currently designed, it only has one pass with no depth
>> sorting, and it doesn't know about GPU depth testing itself (actors
>> can use Cogl to enable depth testing without Clutter being aware of
>> it). This means it will be difficult to solve this in the same way
>> that something like a game engine could.
>>
>> As it happens, I'm currently working on a project called Rig, which
>> is trying to create a UI technology and design tool that can
>> hopefully help us better utilize modern GPUs in UI design, but I'm
>> afraid it's such early days for that project that it probably
>> wouldn't be helpful to you yet. From the sounds of what you're trying
>> to do with Clutter, though, it seems you're treading uncharted
>> territory, and I think this is an example of something that's going
>> to be difficult to handle well.
>>
>> Sorry that this answer might not be very helpful to you.
>>
>> kind regards,
>> Robert
>>
>>
>> On Sat, Dec 14, 2013 at 3:10 AM, Reza Ghassemi <reza.robin1 at gmail.com>
>> wrote:
>> > Hi,
>> >
>> > If I have two ClutterText actors at different depths and have depth
>> > testing
>> > enabled, the text itself still draws only in the order the actors are
>> > added
>> > to the stage, regardless of their depths.
>> >
>> > Is this a known limitation due to how the cogl-pango API is implemented?
>> > How hard would this be to fix?
>> > I notice in cogl-pango-render.c in cogl_pango_show_layout:
>> >
>> >  if (qdata->display_list == NULL)
>> >     {
>> >      ...
>> >      ...
>> >       pango_renderer_draw_layout (PANGO_RENDERER (priv), layout, 0, 0);
>> >       priv->display_list = NULL;
>> >
>> >       qdata->mipmapping_used = priv->use_mipmapping;
>> >     }
>> >
>> >   cogl_framebuffer_push_matrix (fb);
>> >   cogl_framebuffer_translate (fb, x, y, 0);
>> >
>> >   _cogl_pango_display_list_render (fb,
>> >                                    qdata->display_list,
>> >                                    color);
>> >
>> >   cogl_framebuffer_pop_matrix (fb);
>> >
>> > This looks like pango_renderer_draw_layout is drawing into an
>> > offscreen buffer which is then drawn to the Cogl framebuffer.  So
>> > why does it not obey depth testing?
>> >
>> >
>> > Thanks,
>> > Reza
>> >
>> > _______________________________________________
>> > Cogl mailing list
>> > Cogl at lists.freedesktop.org
>> > http://lists.freedesktop.org/mailman/listinfo/cogl
>> >
>
>

