Xgl/Xegl future?
Michel Dänzer
michel at daenzer.net
Sun Aug 21 17:17:42 PDT 2005
On Sun, 2005-08-21 at 18:07 -0400, Adam Jackson wrote:
> On Saturday 20 August 2005 19:46, Michel Dänzer wrote:
> > On Fri, 2005-08-19 at 11:55 +0200, Christian Parpart wrote:
> > > Will it be possible to do such amazing things w/o hardware-OpenGL-based
> > > X server?
> >
> > Yes. The major toolkits seem to be moving to GL backends, and there are
> > proofs of concept of GL based compositing managers. I'm wondering what
> > effect these trends will have on the usefulness of Xgl, has anybody else
> > considered that?
>
> There are a few major issues there, in my view.
>
> First is that for any sort of decent performance for image- or
> texture-intensive GL clients, Xgl will need to gain support for the DRI
> protocol. This is real close to trivial. The GLX backend would actually
> have it toughest since it would have to translate the cliprects, window
> positions, etc. by the Xglx window's position, and there might be some other
> issues with the nesting there that I haven't thought through yet. But for
> Xegl it shouldn't be difficult at all. There's one or two latent
> dependencies in the libdri code in the server on things in the xfree86 ddx,
> but nothing huge.
I'm afraid you're oversimplifying things. E.g., how will Xgl know the
locations of redirected windows in the framebuffer, so it can pass them
on to the client-side driver as rendering buffer locations? I can
definitely see this being solvable via vendor-specific extensions, but I
think it will be tricky to do in a vendor-neutral way. Maybe something
like sharing framebuffer objects between the server and clients (is that
possible with just GL_EXT_framebuffer_object?), but we're still quite a
way from supporting FBOs at all in the free drivers...
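
To make that concrete, here is a minimal sketch of a texture-backed FBO
as GL_EXT_framebuffer_object defines it (assuming a current GL context
and GLEW for the extension entry points; the helper name is made up).
Note that the extension itself says nothing about making the object
visible to another process, which is exactly the sharing problem above:

/* Minimal sketch: a framebuffer object rendering into a texture via the
 * GL_EXT_framebuffer_object entry points.  Assumes a current context and
 * GLEW for the function pointers; nothing here addresses cross-process
 * sharing. */
#include <GL/glew.h>

static GLuint create_window_fbo(GLsizei width, GLsizei height)
{
    GLuint tex, fbo;

    /* Texture that will back the redirected window's pixels. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Framebuffer object that renders into that texture. */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
        GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0; /* incomplete: caller must fall back to something else */

    return fbo;
}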
> But, backing up from simple issues of GL performance, I think the real reason
> toolkits are attempting to go through the GL path is because you can fake
> Render in terms of GL (hence glitz), and the normal Render path through the
> protocol is dog-slow because it's effectively all software. If we had a fast
> Render path on the server side, toolkits would use that instead of invoking
> the pain of full GL.
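
For illustration, this is roughly what "faking Render in terms of GL"
boils down to: a RENDER Over composite becomes a blended, textured quad.
A minimal sketch only, assuming the source picture already lives in a
texture with premultiplied alpha; this is the idea behind glitz, not its
actual API:

/* Rough sketch of a RENDER PictOpOver composite expressed as GL.  The
 * source picture is assumed to already live in 'src_tex' (premultiplied
 * alpha); the destination is the current draw buffer. */
#include <GL/gl.h>

static void composite_over(GLuint src_tex,
                           int dst_x, int dst_y, int w, int h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, src_tex);

    /* OVER with premultiplied alpha: dst = src + (1 - src.a) * dst */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2i(dst_x,     dst_y);
    glTexCoord2f(1.0f, 0.0f); glVertex2i(dst_x + w, dst_y);
    glTexCoord2f(1.0f, 1.0f); glVertex2i(dst_x + w, dst_y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2i(dst_x,     dst_y + h);
    glEnd();

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}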
I'm not sure it's that simple. E.g., using GL directly also gives you
direct rendering, which, while probably not crucial for most
applications, may be crucial for some, e.g. the compositing manager.
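
As a small illustration of that point, a client like the compositing
manager can at least check whether the GLX context it gets back is
direct; everything here except the standard GLX calls is made up for the
example:

/* Sketch: create a GLX context asking for direct rendering and report
 * whether that is what we actually got. */
#include <stdio.h>
#include <GL/glx.h>

static int context_is_direct(Display *dpy, XVisualInfo *vis)
{
    GLXContext ctx = glXCreateContext(dpy, vis, NULL, True /* want direct */);
    int direct;

    if (!ctx)
        return 0;

    direct = glXIsDirect(dpy, ctx);
    fprintf(stderr, "GLX context is %s\n",
            direct ? "direct" : "indirect (round trips through the server)");

    glXDestroyContext(dpy, ctx);
    return direct;
}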
--
Earthling Michel Dänzer | Debian (powerpc), X and DRI developer
Libre software enthusiast | http://svcs.affero.net/rm.php?r=daenzer