GLX and Xgl

Matthias Hopf mhopf at suse.de
Wed Apr 13 11:37:20 PDT 2005


This is getting to be a longer post, again...

On Apr 12, 05 21:12:48 +0200, David Reveman wrote:
> On Tue, 2005-04-12 at 17:49 +0200, Matthias Hopf wrote: 
> > - You are only able to render redirected OpenGL apps accelerated, when
> >   the driver has PBuffer support, because you need a rendering context
> >   for the application (if you do not want to re-invent the wheel
> >   everywhere and do state tracking yourself).
> >   Do you need PBuffers for non-redirected windows as well?
> The back buffer is currently allocated in the same way as pixmaps and
> Xgl is currently using the *real* back buffer for pixmaps so you might
> be able to run non-redirected windows accelerated without pbuffers. This

Again, with a scissor test, I imagine?

> stupid use of the *real* back buffer for pixmaps is only there so that
> we can test and get these things (XRender, Composite...) running before
> we have framebuffer object support. Once we can start to use framebuffer
> objects I'll change so that the *real* back buffer is used as back
> buffer for non-redirected windows.

Then we can actually intercept glXSwapBuffers() and account for
overlapping windows and so on. I guess this will make things much easier.

> > - As soon as framebuffer objects exist and should be used for off-screen
> >   rendering contexts, we would need something like ARB_render_target,
> >   otherwise we couldn't provide the applications with a context to
> >   render into. Or we have to intercept all BindRenderbufferEXT() etc.
> >   calls from the application.
> We'll have to intercept BindRenderbuffer, BindFramebuffer... but I think
> we can get that working without too much trouble. I don't see how

I guess so.

> ARB_render_target could be of help here... is that extension still
> considered? 

Don't think so.  I think I misunderstood something WRT this extension -
you already explained that to someone else, and I guess I see a bit more
clearly now.

> > So how can we - in the long term - make direct rendering with Xgl
> > possible? So far I think we basically need
> > - EXT_framebuffer_object for rendering X requests into a texture in
> >   the server
> > - some extension yet to be specified, which allows sharing of textures
> >   between processes (Xgl and application)
> Yes.

Let's look at an application that is just about to create a new context
/ bind the context to a drawable:
(glXCreateNewContext/glXCreateContext/glXMakeContextCurrent/glXMakeCurrent)

For indirect rendering we can just create a context, BindRenderbuffer()
the current Pixmap texture to it, and after that(!) return the context
id to the application. Calls to BindRenderbuffer() etc. have to be
intercepted, so that binding to 0 actually binds to the Pixmap texture
again.
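
Roughly, the server-side interception could look like this - just a
sketch, with the pixmap_rbo bookkeeping made up for illustration and the
real entry point resolved via glXGetProcAddressARB():

    #include <GL/glx.h>
    #include <GL/glext.h>

    static PFNGLBINDRENDERBUFFEREXTPROC real_bind_renderbuffer;
    static GLuint pixmap_rbo;   /* renderbuffer backing the Pixmap texture */

    static void
    init_wrapper (void)
    {
        real_bind_renderbuffer = (PFNGLBINDRENDERBUFFEREXTPROC)
            glXGetProcAddressARB ((const GLubyte *) "glBindRenderbufferEXT");
    }

    /* Called by the GLX dispatch instead of the real entry point:
     * binding renderbuffer 0 has to rebind the Pixmap texture's
     * buffer, not the window-system-provided default. */
    static void
    xgl_bind_renderbuffer (GLenum target, GLuint rb)
    {
        real_bind_renderbuffer (target, rb == 0 ? pixmap_rbo : rb);
    }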

For direct rendering things get more complicated. libGL has to be
changed so that it supports a mechanism to let the application create a
direct rendering context that renders not into the framebuffer but into
the Pixmap texture - even for a regular X window. This cannot be done
without code changes in the library.

That doesn't matter, because we need additional functionality (an
extension) anyway: a way of sharing a texture with another process.

As all of this cannot be done in a backwards-compatible manner,
applications linked to old / unextended versions of the library can only
be supported using indirect rendering. Not really a problem (who links
statically against libGL anyway?).

So I currently see the need for two extensions:

- Share a texture with another process, that is with another address
  space (GL_shared_texture).

  For security reasons and ease of implementation this would have to be
  twofold: the Xserver would have to export the texture, creating an
  unambiguous ID, and transfer this ID to the client; the client would
  then have to import the texture using this ID. We would have to think
  about whether we need locking mechanisms here as well, at least for
  format/size changes. (A rough sketch of both extensions follows this
  list.)

- Identify displays that may correspond with shared textures and change
  the way contexts are created and bound (GLX_shared_texture_context).

  The client would have to identify displays with windows associated
  with off-screen buffers. If (and only if) a direct rendering context
  is to be created for one of these windows, it should contact the
  Xserver and ask it for a texture export ID, then bind this texture to
  its rendering context. So this extension is tightly coupled with
  GL_shared_texture.
  On the client side this could actually be coupled with framebuffer
  objects.
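
To make the intent of the two extensions a bit more concrete, here is a
rough sketch of the handshake. Note that glExportTextureXGL() and
glImportTextureXGL() are invented names - nothing like this exists yet:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Invented prototypes for GL_shared_texture - purely illustrative. */
    GLuint glExportTextureXGL (GLuint texture);     /* Xserver side */
    GLuint glImportTextureXGL (GLuint export_id);   /* client side  */

    /* Client side of GLX_shared_texture_context: import the Pixmap
     * texture the Xserver exported for this window and attach it as
     * the color buffer of the context's framebuffer object, so that
     * all direct rendering lands in the shared texture. */
    static void
    bind_shared_texture (GLuint export_id)
    {
        GLuint tex = glImportTextureXGL (export_id);

        glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT,
                                   GL_COLOR_ATTACHMENT0_EXT,
                                   GL_TEXTURE_2D, tex, 0);
    }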
  
I would write proposals as soon as we agree on what we actually need,
but I think we are a bit too early in the discussion process for that.

Right now I have no good solution for what should happen to OpenGL
applications when a composite manager redirects / unredirects windows. I
guess the easiest way would be to always render into offscreen buffers,
whether a window is redirected or not. For indirect rendering this
shouldn't be a problem, as the Xserver is in control of where to
render.

> > - ARB_render_texture to create a context on the application side that
> >   renders into a texture
> So far I've only thought of using GLX_ARB_render_texture for indirect
> rendering. Don't know if it's a good idea to use it for direct rendering
> as well but maybe that's possible. Either way, that's all transparent to
> the client so it wouldn't be too bad if it turns out we need
> something else, right? 

Very right. The only way to use this extension for creating the context
would be to patch the system libGL as well. I don't like this idea; we
would have to have special cases for NVidia/ATI here as well.

> > That is, textures currently only work correctly if all
> > applications use GenTextures(), right?
> Exactly.

So most of mine won't :-]  I was lazy until now...
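
For the record, a minimal sketch of the safe pattern - hardcoded texture
names risk clobbering names Xgl has already handed out for Pixmap
textures in the same context:

    #include <GL/gl.h>

    static GLuint
    create_texture (void)
    {
        GLuint tex;

        /* Risky: glBindTexture (GL_TEXTURE_2D, 1) with a hardcoded
         * name may clobber a texture Xgl allocated for a Pixmap. */

        /* Safe: let the implementation hand out an unused name. */
        glGenTextures (1, &tex);
        glBindTexture (GL_TEXTURE_2D, tex);

        return tex;
    }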

> > > Getting front buffer drawing to work is a bit harder. We need to report
> > > damage and do the pixel ownership test. Using the current scissor box when
> > > reporting damage is probably good enough but I don't have a good solution
> > > for the pixel ownership test. I guess we're going to have to do multiple
> > > drawing operations with different scissor boxes but that will make
> > > display lists much harder to handle...
> > What happens if the application wants to use the scissor test on its own?
> > For indirect rendering we could always interpret the protocol ourselves
> > (Ugh) and adapt the test to our needs, but for direct rendering I have
> > no clue.
> I'm intersecting the client scissor box with window bounds right now.

So you already implemented the (Ugh) case - very impressive :)
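
For the archives, the intersection itself is straightforward - a sketch
with a made-up box type, certainly not your actual code:

    #include <GL/gl.h>

    typedef struct { int x, y, w, h; } box_t;   /* made up */

    /* Clip the client's scissor box against the window bounds and
     * hand the intersection to GL. */
    static void
    set_clipped_scissor (box_t client, box_t win)
    {
        int x1 = client.x > win.x ? client.x : win.x;
        int y1 = client.y > win.y ? client.y : win.y;
        int x2 = client.x + client.w < win.x + win.w ?
                 client.x + client.w : win.x + win.w;
        int y2 = client.y + client.h < win.y + win.h ?
                 client.y + client.h : win.y + win.h;

        if (x2 < x1) x2 = x1;   /* empty intersection */
        if (y2 < y1) y2 = y1;

        glScissor (x1, y1, x2 - x1, y2 - y1);
    }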

> Means trouble when display lists are used but I think that can be solved
> for indirect rendering by splitting up display lists on the server-side.

Oh, right, now I understand why display lists could be a problem.
Luckily, scissor test changes are typically not compiled into display
lists. I'm not saying we shouldn't care ;)

> Don't know about direct rendering. When we're drawing to framebuffer
> objects we don't have to do this.

Right. For double buffered applications this should be easy to
implement (copy to screen on glXSwapBuffers()); for single buffered
applications we could emulate single buffering with a copy to screen on
glXFlush()...
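
A sketch of what such an intercepted glXSwapBuffers() could do for an
FBO-backed window - win_tex, win_w and win_h are made up, matrix state
handling is simplified, and I assume the EXT_framebuffer_object entry
points are already resolved:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Copy an FBO-backed window to the real framebuffer by drawing
     * its color texture as a screen-sized quad (no framebuffer blit
     * extension around yet, so a textured quad has to do). */
    static void
    copy_window_to_screen (GLuint win_tex, int win_w, int win_h)
    {
        glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);  /* real framebuffer */

        glMatrixMode (GL_PROJECTION);
        glLoadIdentity ();
        glOrtho (0, win_w, 0, win_h, -1, 1);
        glMatrixMode (GL_MODELVIEW);
        glLoadIdentity ();

        glEnable (GL_TEXTURE_2D);
        glBindTexture (GL_TEXTURE_2D, win_tex);

        glBegin (GL_QUADS);
        glTexCoord2f (0, 0); glVertex2i (0,     0);
        glTexCoord2f (1, 0); glVertex2i (win_w, 0);
        glTexCoord2f (1, 1); glVertex2i (win_w, win_h);
        glTexCoord2f (0, 1); glVertex2i (0,     win_h);
        glEnd ();

        glDisable (GL_TEXTURE_2D);
    }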

CU

Matthias

-- 
Matthias Hopf <mhopf at suse.de>       __        __   __
Maxfeldstr. 5 / 90409 Nuernberg    (_   | |  (_   |__         mat at mshopf.de
Phone +49-911-74053-715            __)  |_|  __)  |__  labs   www.mshopf.de


