[Mesa-dev] [RFC] GLX_MESA_query_renderer

Ian Romanick idr at freedesktop.org
Tue Mar 12 09:46:27 PDT 2013


On 03/05/2013 06:58 AM, Henri Verbeet wrote:
> On 2 March 2013 00:14, Ian Romanick <idr at freedesktop.org> wrote:
>>
> I added some comments, but I think the extension is pretty much fine
> for at least Wine's purposes.
>
>>      GLX_ARB_create_context and GLX_ARB_create_context_profile are required.
>>
> It's probably not a big deal since just about everyone implements
> these, but I think most of the extension could be implemented without
> these.

Right... the extension also adds an attribute that can only be used with 
glXCreateContextAttribsARB.
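
To sketch the intended usage (the attribute name GLX_RENDERER_ID_MESA and
its token value are placeholders here until the draft settles; the
renderer index would come from the query functions below):

    #include <GL/glx.h>

    #ifndef GLX_RENDERER_ID_MESA
    #define GLX_RENDERER_ID_MESA 0x818E   /* placeholder token value */
    #endif

    /* Hypothetical sketch: request a specific renderer at context-creation
     * time via glXCreateContextAttribsARB. */
    static GLXContext
    create_context_for_renderer(Display *dpy, GLXFBConfig fbconfig,
                                int renderer)
    {
        const int attribs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
            GLX_CONTEXT_MINOR_VERSION_ARB, 0,
            GLX_RENDERER_ID_MESA,          renderer,
            None
        };
        PFNGLXCREATECONTEXTATTRIBSARBPROC create_context =
            (PFNGLXCREATECONTEXTATTRIBSARBPROC)
            glXGetProcAddress((const GLubyte *) "glXCreateContextAttribsARB");

        return create_context(dpy, fbconfig, NULL, True, attribs);
    }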

>>      There are also cases where more than one renderer may be available per
>>      display.  For example, there is typically a hardware implementation and
>>      a software based implementation.  There are cases where an application
>>      may want to pick one over the other.  One such situation is when the
>>      software implementation supports more features than the hardware
>>      implementation.
>>
> I think that makes sense in some cases (although the more common case
> may turn out to be setups like PRIME where you actually have two
> different hardware renderers and want to switch between them), but
> wouldn't you also want to look at the (GL) extension string before
> creating a context in such a case? I realize issue 9 resolves this as
> basically "not worth the effort", but doesn't that then contradict the
> text above? (For Wine creating the GLX context is no big deal at this
> point since we already have that code anyway, but it seems like useful
> information for (new) applications that want to avoid that.)

My thinking was that it will be very rare for multiple renderers to 
support the same GL versions but different extension strings... at least 
in a way that would cause apps to make different context creation decisions.

>> Additions to the OpenGL / WGL Specifications
>>
>>      None. This specification is written for GLX.
>>
> I think we'd like a WGL spec for wined3d, since it's written on top of
> Wine's WGL implementation instead of directly on top of GLX. If needed
> we could also solve that with a Wine internal extension, but we'd like
> to avoid those where possible.

Well... I don't do Windows. :)

>>      To obtain information about the available renderers for a particular
>>      display and screen,
>>
>>          void glXQueryRendererIntegerMESA(Display *dpy, int screen,
>>                                           int renderer, int attribute,
>>                                           unsigned int *value);
>>
> This returned a Bool above. I don't see the glXQueryCurrent*()
> functions specified at all, but I assume that will be added before the
> final version of the spec.

Yes... it seems I neglected to propagate those changes into the body of 
the spec.  I'll fix that.
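
For what it's worth, from the application side I'd expect renderer
enumeration to look roughly like the sketch below.  It assumes the
Bool-returning form, that the query simply fails for an out-of-range
renderer index, and that the GLX_RENDERER_* tokens are defined somewhere;
the prototypes are written out because no header provides them yet.

    #include <stdio.h>
    #include <GL/glx.h>

    /* Prototypes as discussed above; not yet in any system header. */
    Bool glXQueryRendererIntegerMESA(Display *dpy, int screen, int renderer,
                                     int attribute, unsigned int *value);
    const char *glXQueryRendererStringMESA(Display *dpy, int screen,
                                           int renderer, int attribute);

    /* Walk the renderers of a screen and dump a couple of attributes.
     * Assumes GLX_RENDERER_VIDEO_MEMORY_MESA and
     * GLX_RENDERER_DEVICE_ID_MESA are defined. */
    static void
    dump_renderers(Display *dpy, int screen)
    {
        unsigned int vram;
        int renderer = 0;

        while (glXQueryRendererIntegerMESA(dpy, screen, renderer,
                                           GLX_RENDERER_VIDEO_MEMORY_MESA,
                                           &vram)) {
            const char *device =
                glXQueryRendererStringMESA(dpy, screen, renderer,
                                           GLX_RENDERER_DEVICE_ID_MESA);

            printf("renderer %d: %s, video memory %u\n",
                   renderer, device, vram);
            renderer++;
        }
    }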

>>      GLX_RENDERER_VERSION_MESA     3           Major, minor, and patch level
>>                                                of the renderer implementation
> I guess the trade-off here is that it avoids having to parse version
> strings in the application, but on the other hand it leaves no room
> for things like the git sha1 or e.g. "beta" or "rc" that you sometimes
> see in version strings. That probably isn't a big deal for
> applications themselves, but it may be relevant when a version string
> is included in a bug report.

Part of the thinking is that it would force regularity in how the 
version is advertised.  Otherwise everyone will have a different kind of 
string, and the currently annoying situation of parsing 
implementation-dependent strings continues.

Maybe GLX_RENDERER_VERSION_MESA should also be allowed with 
glXQueryRendererStringMESA?
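
Either way, on the integer side I'd expect something along these lines (a
sketch assuming the attribute fills a three-element array, per the "3" in
the table above, and reusing the prototypes from the enumeration sketch):

    static void
    print_renderer_version(Display *dpy, int screen, int renderer)
    {
        unsigned int version[3];

        /* GLX_RENDERER_VERSION_MESA is assumed to write major, minor, and
         * patch level into the array. */
        if (glXQueryRendererIntegerMESA(dpy, screen, renderer,
                                        GLX_RENDERER_VERSION_MESA, version))
            printf("renderer version %u.%u.%u\n",
                   version[0], version[1], version[2]);
    }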

>>      The string returned for GLX_RENDERER_VENDOR_ID_MESA will have the same
>>      format as the string that would be returned by glGetString of GL_VENDOR.
>>      It may, however, have a different value.
>>
>>      The string returned for GLX_RENDERER_DEVICE_ID_MESA will have the same
>>      format as the string that would be returned by glGetString of
>>      GL_RENDERER.
>>      It may, however, have a different value.
>>
> But the GL_VENDOR and GL_RENDERER "formats" are implementation
> defined, so I'm not sure that wording it like this really adds much
> over just saying the format for these is implementation-defined.

Fair. :)

>>      1) How should the difference between on-card and GART memory be exposed?
>>
>>          UNRESOLVED.
>>
> Somewhat related, dxgi / d3d10 distinguishes between
> "DedicatedVideoMemory" and "SharedSystemMemory" (and
> "DedicatedSystemMemory"). I'm not sure how much we really care, but I
> figured I'd at least mention it.

The feedback that I have received so far is that apps want to know how 
much memory they can use without falling off a performance cliff.  As 
far as I can tell, that means on-card memory.  I'm still soliciting 
feedback. :)  If there is a use for advertising SharedSystemMemory, I'll 
gladly add it.

>>      5) How can applications tell the difference between different hardware
>>      renderers for the same device?  For example, whether the renderer is the
>>      open-source driver or the closed-source driver.
>>
>>          RESOLVED.  Assuming this extension is ever implemented outside Mesa,
>>          applications can query GLX_RENDERER_VENDOR_ID_MESA from
>>          glXQueryRendererStringMESA.  This will almost certainly return
>>          different strings for open-source and closed-source drivers.
>>
> For what it's worth, internally in wined3d we distinguish between the
> GL vendor and the hardware vendor. So you can have e.g. Mesa / AMD,
> fglrx / AMD or Apple / AMD for the same hardware. That's all derived
> from the VENDOR and RENDERER strings, so that approach is certainly
> possible, but on the other hand perhaps it also makes sense to
> explicitly make that distinction in the API itself.

I also based this on ISV feedback.  Some just wanted to know what the 
hardware was, and others wanted to know that and who made the driver.  I 
was really trying to get away from "just parse this random string" for 
as much of the API as possible.  It seems like this should only make 
things easier for apps... should.
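
The kind of thing I have in mind, assuming (and this is only an assumption
on my side) that the integer form of GLX_RENDERER_VENDOR_ID_MESA identifies
the hardware vendor, e.g. by PCI ID, while the string form names the driver
vendor:

    /* Distinguish hardware vendor from driver vendor without parsing the
     * GL_VENDOR / GL_RENDERER strings.  Builds on the prototypes declared
     * in the enumeration sketch above. */
    static void
    identify_renderer(Display *dpy, int screen, int renderer)
    {
        unsigned int hw_vendor;
        const char *driver_vendor;

        if (!glXQueryRendererIntegerMESA(dpy, screen, renderer,
                                         GLX_RENDERER_VENDOR_ID_MESA,
                                         &hw_vendor))
            return;

        driver_vendor = glXQueryRendererStringMESA(dpy, screen, renderer,
                                                   GLX_RENDERER_VENDOR_ID_MESA);

        printf("hardware vendor 0x%04x, driver from %s\n",
               hw_vendor, driver_vendor);
    }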

>>      6) What is the value of GLX_RENDERER_UNIFIED_MEMORY_ARCHITECTURE_MESA
>>      for software renderers?
>>
>>          UNRESOLVED.  Video (display) memory and texture memory are not
>>          unified for software implementations, so it seems reasonable for
>>          this to be False.
>>
> Related to that, are e.g. GLX_RENDERER_VENDOR_ID_MESA,
> GLX_RENDERER_DEVICE_ID_MESA (integer versions for both) or
> GLX_RENDERER_VIDEO_MEMORY_MESA really meaningful for software
> renderers?

Probably not... but they should return something.  I can imagine apps 
doing something really dumb if GLX_RENDERER_VIDEO_MEMORY_MESA returns 0.
The other two probably could (should?) return zero... since anything 
else will be misleading.  I'll add an issue for that.


