GLX and Xgl

David Reveman davidr at novell.com
Mon Apr 11 17:56:52 PDT 2005


On Mon, 2005-04-11 at 13:18 -0400, Adam Jackson wrote: 
> On Monday 11 April 2005 12:33, David Reveman wrote:
> > I've got GLX and indirect rendering working with Xgl. It's accelerated
> > and works fine with Composite. There's of course a lot more work to be
> > done but I don't plan on going much further until we're using
> > framebuffer objects in Xgl as it would mean adding code that will be
> > thrown away later.
> >
> > The glitz and Xgl code needed to get this working is in pretty good
> > shape and it should land in CVS in a few days.
> 
> Way cool.
> 
> > But I had to do some pretty drastic changes to server side GLX code and
> > I'm not sure that my current solutions are the best way to go. Here's
> > what I've done:
> >
> > 1. Made glcore use MGL namespace. This allows me to always have software
> > mesa available and this is currently necessary as there might not be
> > enough resources to use the *real* GL stack with Composite. It might not
> > be necessary when we're using framebuffer objects but I still think it's
> > a good idea. This works fine when running Xgl on top of nvidia's GL
> > stack or software mesa, but I haven't been able to get it running on top
> > of mesa/DRI yet.
> 
> This is reasonable given that it's GLcore.  DRI drivers are better for this, 
> they have their own dispatch table built in so you don't have to worry about 
> namespace mangling.  I think all you'd have to do to make DRI drivers work is 
> fill in glRenderTable{,EXT} from the driver's dispatch table.
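
To illustrate what the MGL namespace change does (a sketch only; the
real change renames every entry point, along the lines of Mesa's
USE_MGL_NAMESPACE convention):

    #ifdef USE_MGL_NAMESPACE
    /* Rename glcore's public entry points at compile time so the
       software renderer never collides with the native libGL symbols. */
    #define glViewport mglViewport
    #define glClear    mglClear
    /* ... one rename per GL entry point ... */
    #endif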
> 
> > 2. Made all GL calls in server side GLX go through another dispatch
> > table. Allows me to switch between software mesa and *real* GL stack as
> > I like. This is also necessary as extension function pointers might be
> > different between contexts and we need to wrap some GL calls. e.g.
> > glViewport needs an offset.
> 
> Any function pointer you can query from glXGetProcAddress is explicitly 
> context-independent.  From the spec:
> 
> #    * Are function pointers context-independent?
> #
> #        Yes. The pointer to an extension function can be used with any
> #        context which supports the extension.
> 

OK, so that's not a problem then.
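
To illustrate the kind of wrapping I mean (a sketch only; the table and
names are illustrative, not the actual Xgl code):

    typedef struct {
        void (*Viewport) (GLint x, GLint y, GLsizei w, GLsizei h);
        /* ... one entry per GL call made by server side GLX ... */
    } GlxDispatch;

    static GlxDispatch nativeTable;   /* native GL stack entry points */
    static GlxDispatch softwareTable; /* software mesa (MGL) entries  */

    /* Offset of the current drawable within its backing surface. */
    static GLint xOff, yOff;

    static void
    viewportWrapper (GLint x, GLint y, GLsizei w, GLsizei h)
    {
        (*nativeTable.Viewport) (x + xOff, y + yOff, w, h);
    }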

> I'm not quite clear yet on how you decide whether to use the software or 
> hardware paths.  Is it per context?  Per client?  Per drawable?

I think it would have to be per context. 

The temporary solution I'm using right now is to make this decision when
the client creates its first buffer: if we can accelerate drawing to the
buffer, a native context is used; otherwise a software context is used.
This is not a solid solution and it can't be used in the future.
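
Roughly, the current heuristic looks like this (a sketch only; the
client record and helper names are illustrative, not the actual Xgl
code):

    static void
    clientCreateFirstBuffer (ClientRec *client, BufferRec *buffer)
    {
        if (!client->contextChosen) {
            /* Latch the decision on the first buffer: native context if
               drawing to this buffer can be accelerated, software mesa
               otherwise.  Once latched it never changes, which is
               exactly why this can't cope with later redirection. */
            client->useNative     = canAccelerate (buffer);
            client->contextChosen = TRUE;
        }
    }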

The problem is that with the Composite extension present a drawable can
at any time be redirected to a pixmap. So what do we do if the native GL
stack can't handle this? With framebuffer objects available we can
probably always allocate another texture and redirect drawing to it, and
the native GL stack will handle software fall-back if necessary (see the
sketch after the list below). What do we do when framebuffer objects are
not available?

1. Don't support GLX at all? I think this would be a major drawback.

2. Use software GL, and possibly use native GL for the root window, as
it can't be redirected and it would let a compositing manager run
accelerated. This is what I hoped we could get working.

3. Move a native context to software when a window is redirected. Seems
like a really bad idea to me; I don't think we could ever get this
working properly.
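
For reference, the framebuffer object path mentioned above would look
roughly like this (a sketch only, using EXT_framebuffer_object; in
practice these entry points are obtained through glXGetProcAddress):

    /* Redirect rendering into the texture backing a window. */
    static void
    redirectToTexture (GLuint tex)
    {
        GLuint fbo;

        glGenFramebuffersEXT (1, &fbo);
        glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT,
                                   GL_COLOR_ATTACHMENT0_EXT,
                                   GL_TEXTURE_2D, tex, 0);

        if (glCheckFramebufferStatusEXT (GL_FRAMEBUFFER_EXT) !=
            GL_FRAMEBUFFER_COMPLETE_EXT) {
            /* the native GL stack couldn't handle it; fall back */
        }
    }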

> 
> I think you'll have major issues trying to use two rendering engines at once.

That's bad, as I think not getting this working will mean we have to go
with option 1 from above.

I've had no trouble using both GLcore and nvidia's GL stack in Xgl so
far... I think it could be worth investigating the possibilities of
getting this working with all GL stacks. Isn't there anyone with some
experience in this? It seems like something someone would have tried
before...

If we can get this working, GLX visuals that always use software could
also be available. I think that can be useful as well. 

> 
> > Both these changes are available as patches from here:
> > http://www.cs.umu.se/~c99drn/xgl-glx/
> >
> > xserver-mesa.diff also includes some changes required to get xserver
> > compiling with mesa CVS and a few lines to support ARGB visuals.
> > xserver-glx.diff modifies files that seem to be auto-generated, but I
> > didn't find the source for that so I just made the changes directly.
> 
> Most of the server-side GLX code was (at one point) autogenerated from some 
> scripts at SGI.  We don't have those scripts though.

OK.

> 
> > I had to add an 8A8R8G8B pixel format to XMesa for ARGB visuals to work
> > properly. This patch should do that:
> > http://www.cs.umu.se/~c99drn/xgl-glx/Mesa-PF_8A8R8G8B.diff
> 
> This would actually be really cool to land on its own.

That patch is in good shape. Anyone with proper access to Mesa source
can commit it if they like.

> 
> > The following is not working:
> > - Context Sharing (all contexts are currently shared)
> > - Drawing to front buffer
> > - CopyPixels
> >
> > All contexts need to be shared inside Xgl so we're going to have to keep
> > hash tables in Xgl to deal with GLX contexts.
> 
> Is this an artifact of using glitz, or is this something we'd see with other 
> backends too?

As long as we're using textures for drawables, all contexts will have
to be shared. We need to be able to render to a drawable using both the
client-specific context and Xgl's core drawing context, which is used
for regular X11 drawing requests.
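
Concretely, it comes down to something like this (a sketch only; the
variable and function names are illustrative):

    static GLXContext coreContext; /* Xgl's context for X11 rendering */

    static GLXContext
    createClientContext (Display *dpy, XVisualInfo *vis)
    {
        /* Passing the core context as the share list makes the
           drawable's backing texture usable from both contexts. */
        return glXCreateContext (dpy, vis, coreContext, True);
    }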

> 
> > This is just what I believe is the best way to go, it's not in any way
> > set in stone, it's all open for discussion. Comments and suggestions are
> > of course much appreciated.
> 
> Sounds pretty sane, I'll think on it a bit.  Very nice work!

thx.

-David