[Mesa3d-dev] Re: GLX and Xgl
Brian Paul
brian.paul at tungstengraphics.com
Wed Apr 13 08:11:55 PDT 2005
Matthias Hopf wrote:
> On Apr 12, 05 14:59:07 -0400, Owen Taylor wrote:
>
>>On Tue, 2005-04-12 at 17:49 +0200, Matthias Hopf wrote:
>>
>>>So how can we - in the long term - make direct rendering with Xgl
>>>possible? So far I think we basically need
>>>
>>>- EXT_framebuffer_object for rendering X requests into a texture in
>>> the server
Just FYI, I've been plugging away at GL_EXT_framebuffer_object support in
Mesa in my spare time. It'll probably be a few more weeks before I
check in something that works.
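For reference, the extension's core entry points look roughly like this (a
minimal sketch of render-to-texture per the EXT_framebuffer_object spec;
texture setup and error handling mostly omitted):

  GLuint fbo, tex;

  /* create a texture to serve as the color buffer */
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, NULL);

  /* create a framebuffer object and attach the texture as its color buffer */
  glGenFramebuffersEXT(1, &fbo);
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                            GL_TEXTURE_2D, tex, 0);

  if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
      GL_FRAMEBUFFER_COMPLETE_EXT) {
     /* subsequent rendering lands in 'tex' instead of a window */
  }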
>>>- some extension yet to be specified, which allows sharing of textures
>>> between processes (Xgl and application)
>>
>>I think it is important to note that this isn't exactly arbitrary
>>peer-to-peer sharing of textures; the setup of the shared "textures"
>>(really render targets) is always part of the server, and the server
>
>
> Yes, I intentionally didn't specify concrete details.
>
>
>>is special in the GLX protocol. In the simplest model, the differences
>>from how the DRI works are:
>
>
> I've never done much work with DRI, so I'm not influenced that much from
> that direction (I used to work a lot with SGIs).
>
>
>>In the DRI/Egl/Xgl world, it clearly is a fairly different problem,
>>but still doesn't seem essentially different from the problem of
>>non-redirected direct rendering. The server has to tell the clients
>>where to render in memory, and there must be locking so that the
>>client doesn't render to memory that is being used for something
>>else.
>
>
> I guess I have to dig a bit into the GLX code and read the specs more
> thoroughly. Right now there is no notion in OpenGL of a memory pointer to
> be rendered to. So we might need an extension to get these low-level
> rendering parameters from the OpenGL layer in order to implement the GLX
> rendering context negotiation / redirection completely in user space
> (which we have to, because we no longer have access to low-level routines
> the way regular X servers do).
>
>
>>One obvious hard problem is framebuffer memory exhaustion ... nothing
>>prevents an application from just creating more and more GL windows,
>>and that would require more and more video memory given independent
>>backbuffers. You might need a framebuffer ejection mechanism much like
>>the current texture ejection mechanism, except that it's more
>>complex ... restoring the framebuffer requires cooperation between the
>>ejector and the ejectee.
>
>
> Agreed.
> AFAIR 3Dlabs had MMIO on their chips which could easily deal with this
> problem, but AFAIK neither NVidia nor ATI has anything like this or even
> plans to implement it.
>
>
>>>- ARB_render_texture to create a context on the application side that
>>> renders into a texture
>>
>>To the client it must look precisely as if they are rendering to a
>>window. No client-exposed extension can be involved.
>
>
> That should be the plan.
> I wanted to read the GLX specs more thoroughly for the bytestream
> protocol that initiates direct rendering, but I couldn't find anything
> related to that. Do you know whether this part is vendor-specific?
> Guess I have to read the Mesa sources.
>
>
>>>One alternative would be another extension that would allow the
>>>transport of one context to another process, so the context for
>>>rendering into a texture could be created on the Xgl side, and the
>>>context could then be transferred to the application side. This sounds
>>>scary as well. I doubt that an extension for shared contexts would work
>>>without patching the application-side libGL, either.
>>
>>Hmm, sounds like the hard way to do things. I'd think a GLcontext is a
>>much more complex object than "there is a framebuffer at this
>>address in video memory with this fbconfig"
>
>
> Yes it is. That's what makes me quite a bit uncomfortable.
I often see people referring to a "GL context" without really knowing
what it is. An OpenGL rendering context is not a drawing surface;
it's a state record which keeps track of things like current blend
mode, current drawing color, current texture parameters, etc.
A rendering context can't be changed or used until it's bound to a
drawing surface. A single rendering context can be used with any
number of (compatible) drawing surfaces. And a drawing surface can be
drawn to by any number of (compatible) rendering contexts.
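In GLX terms that looks something like this (a minimal sketch; 'dpy',
'visinfo', 'win1' and 'win2' are assumed to have been set up already):

  /* create a context -- just state, no drawing surface yet */
  GLXContext ctx = glXCreateContext(dpy, visinfo, NULL, True);

  /* bind the context to one window and draw */
  glXMakeCurrent(dpy, win1, ctx);
  /* ... GL calls here draw into win1 ... */

  /* re-bind the same context to a second window; the blend mode,
   * current color, texture state, etc. carry over unchanged */
  glXMakeCurrent(dpy, win2, ctx);
  /* ... GL calls here draw into win2 ... */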
-Brian