[Mesa-dev] [RFC] GLX_MESA_query_renderer

Henri Verbeet hverbeet at gmail.com
Tue Mar 5 06:58:01 PST 2013


On 2 March 2013 00:14, Ian Romanick <idr at freedesktop.org> wrote:
>
I added some comments, but I think the extension is pretty much fine
for at least Wine's purposes.

>     GLX_ARB_create_context and GLX_ARB_create_context_profile are required.
>
It's probably not a big deal since just about everyone implements
these, but I think most of the extension could be implemented without
them.

>     There are also cases where more than one renderer may be available per
>     display.  For example, there is typically a hardware implementation and
>     a software based implementation.  There are cases where an application
>     may want to pick one over the other.  One such situation is when the
>     software implementation supports more features than the hardware
>     implementation.
>
I think that makes sense in some cases (although the more common case
may turn out to be setups like PRIME where you actually have two
different hardware renderers and want to switch between them), but
wouldn't you also want to look at the (GL) extension string before
creating a context in such a case? I realize issue 9 resolves this as
basically "not worth the effort", but doesn't that then contradict the
text above? (For Wine creating the GLX context is no big deal at this
point since we already have that code anyway, but it seems like useful
information for (new) applications that want to avoid that.)
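
(Something like the following is what I have in mind, just as a rough
sketch; the fbconfig / drawable setup and all error handling are left
out, and the naive strstr() match is only good enough for illustration.)

    #include <string.h>
    #include <GL/gl.h>
    #include <GL/glx.h>

    static Bool renderer_has_extension(Display *dpy, GLXFBConfig fbconfig,
                                       GLXDrawable draw, const char *ext)
    {
        /* GLX_ARB_create_context entry point, fetched the usual way.  The
         * PFNGLXCREATECONTEXTATTRIBSARBPROC typedef comes from glxext.h. */
        PFNGLXCREATECONTEXTATTRIBSARBPROC create_context_attribs =
                (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddress(
                (const GLubyte *)"glXCreateContextAttribsARB");
        GLXContext ctx;
        Bool found;

        /* Throwaway context, just to get at the extension string. */
        ctx = create_context_attribs(dpy, fbconfig, NULL, True, NULL);
        glXMakeCurrent(dpy, draw, ctx);
        /* Compatibility-profile style query; a core context would use
         * glGetStringi() instead. */
        found = !!strstr((const char *)glGetString(GL_EXTENSIONS), ext);
        glXMakeCurrent(dpy, None, NULL);
        glXDestroyContext(dpy, ctx);

        return found;
    }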

> Additions to the OpenGL / WGL Specifications
>
>     None. This specification is written for GLX.
>
I think we'd like a WGL spec for wined3d, since it's written on top of
Wine's WGL implementation instead of directly on top of GLX. If needed
we could also solve that with a Wine internal extension, but we'd like
to avoid those where possible.

>     To obtain information about the available renderers for a particular
>     display and screen,
>
>         void glXQueryRendererIntegerMESA(Display *dpy, int screen,
>                                          int renderer, int attribute,
>                                          unsigned int *value);
>
This returned a Bool above. I don't see the glXQueryCurrent*()
functions specified at all, but I assume that will be added before the
final version of the spec.
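
(Roughly how I'd imagine an application using it, assuming the Bool
return, that renderer indices start at 0, and that the integer query
fails once you run past the last renderer; in practice the entry points
would of course be retrieved with glXGetProcAddress() first.)

    #include <stdio.h>
    #include <GL/glx.h>

    static void dump_renderers(Display *dpy, int screen)
    {
        unsigned int video_memory;
        int renderer;

        for (renderer = 0; ; ++renderer)
        {
            /* Assumed to return False for an out-of-range renderer index. */
            if (!glXQueryRendererIntegerMESA(dpy, screen, renderer,
                    GLX_RENDERER_VIDEO_MEMORY_MESA, &video_memory))
                break;

            printf("renderer %d: %s / %s, video memory %u\n", renderer,
                    glXQueryRendererStringMESA(dpy, screen, renderer,
                    GLX_RENDERER_VENDOR_ID_MESA),
                    glXQueryRendererStringMESA(dpy, screen, renderer,
                    GLX_RENDERER_DEVICE_ID_MESA),
                    video_memory);
        }
    }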

>     GLX_RENDERER_VERSION_MESA     3           Major, minor, and patch level of
>                                               the renderer implementation

I guess the trade-off here is that it avoids having to parse version
strings in the application, but on the other hand it leaves no room
for things like the git sha1 or e.g. "beta" or "rc" that you sometimes
see in version strings. That probably isn't a big deal for
applications themselves, but it may be relevant when a version string
is included in a bug report.
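
(Continuing the sketch above, the query side would presumably look
something like this:)

    static void print_renderer_version(Display *dpy, int screen, int renderer)
    {
        /* A count of 3 means the application provides room for major,
         * minor and patch level. */
        unsigned int version[3];

        if (glXQueryRendererIntegerMESA(dpy, screen, renderer,
                GLX_RENDERER_VERSION_MESA, version))
            printf("renderer version %u.%u.%u\n",
                    version[0], version[1], version[2]);
    }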

>     The string returned for GLX_RENDERER_VENDOR_ID_MESA will have the same
>     format as the string that would be returned by glGetString of GL_VENDOR.
>     It may, however, have a different value.
>
>     The string returned for GLX_RENDERER_DEVICE_ID_MESA will have the same
>     format as the string that would be returned by glGetString of GL_RENDERER.
>     It may, however, have a different value.
>
But the GL_VENDOR and GL_RENDERER "formats" are implementation
defined, so I'm not sure that wording it like this really adds much
over just saying the formats for these are implementation defined.

>     1) How should the difference between on-card and GART memory be exposed?
>
>         UNRESOLVED.
>
Somewhat related, dxgi / d3d10 distinguishes between
"DedicatedVideoMemory" and "SharedSystemMemory" (and
"DedicatedSystemMemory"). I'm not sure how much we really care, but I
figured I'd at least mention it.
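
(For reference, these are the DXGI_ADAPTER_DESC fields I mean; the
comments are just my reading of the categories, nothing authoritative:)

    #include <stdio.h>
    #include <dxgi.h>

    static void print_adapter_memory(const DXGI_ADAPTER_DESC *desc)
    {
        /* Memory on the card itself. */
        printf("DedicatedVideoMemory:  %llu\n",
                (unsigned long long)desc->DedicatedVideoMemory);
        /* System memory set aside for the GPU, i.e. roughly the GART case. */
        printf("DedicatedSystemMemory: %llu\n",
                (unsigned long long)desc->DedicatedSystemMemory);
        /* System memory the GPU can use, shared with everything else. */
        printf("SharedSystemMemory:    %llu\n",
                (unsigned long long)desc->SharedSystemMemory);
    }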

>     5) How can applications tell the difference between different hardware
>     renderers for the same device?  For example, whether the renderer is the
>     open-source driver or the closed-source driver.
>
>         RESOLVED.  Assuming this extension is ever implemented outside Mesa,
>         applications can query GLX_RENDERER_VENDOR_ID_MESA from
>         glXQueryRendererStringMESA.  This will almost certainly return
>         different strings for open-source and closed-source drivers.
>
For what it's worth, internally in wined3d we distinguish between the
GL vendor and the hardware vendor. So you can have e.g. Mesa / AMD,
fglrx / AMD or Apple / AMD for the same hardware. That's all derived
from the VENDOR and RENDERER strings, so that approach is certainly
possible, but on the other hand perhaps it also makes sense to
explicitly make that distinction in the API itself.
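
(Purely as an illustration of the idea, not the actual wined3d code;
the enum and the string matches are made up for the example:)

    #include <string.h>

    enum hw_vendor
    {
        HW_VENDOR_AMD,
        HW_VENDOR_NVIDIA,
        HW_VENDOR_INTEL,
        HW_VENDOR_UNKNOWN,
    };

    static enum hw_vendor guess_hw_vendor(const char *gl_vendor, const char *gl_renderer)
    {
        /* "Mesa / AMD", "fglrx / AMD" and "Apple / AMD" all end up here. */
        if (strstr(gl_renderer, "Radeon") || strstr(gl_vendor, "ATI")
                || strstr(gl_vendor, "Advanced Micro Devices"))
            return HW_VENDOR_AMD;
        if (strstr(gl_renderer, "GeForce") || strstr(gl_vendor, "NVIDIA"))
            return HW_VENDOR_NVIDIA;
        if (strstr(gl_renderer, "Intel") || strstr(gl_vendor, "Intel"))
            return HW_VENDOR_INTEL;
        return HW_VENDOR_UNKNOWN;
    }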

>     6) What is the value of GLX_RENDERER_UNIFIED_MEMORY_ARCHITECTURE_MESA for
>     software renderers?
>
>         UNRESOLVED.  Video (display) memory and texture memory is not unified
>         for software implementations, so it seems reasonable for this to be
>         False.
>
Related to that, are e.g. GLX_RENDERER_VENDOR_ID_MESA,
GLX_RENDERER_DEVICE_ID_MESA (integer versions for both) or
GLX_RENDERER_VIDEO_MEMORY_MESA really meaningful for software
renderers?

