[Mesa-dev] Extension to get Mesa IRs (Was: [Bug 91173])
Jose Fonseca
jfonseca at vmware.com
Wed Jul 1 22:55:08 PDT 2015
On 01/07/15 22:30, bugzilla-daemon at freedesktop.org wrote:
> Comment #14 <https://bugs.freedesktop.org/show_bug.cgi?id=91173#c14>
> on bug 91173 <https://bugs.freedesktop.org/show_bug.cgi?id=91173> from
> Ilia Mirkin <imirkin at alum.mit.edu>:
>
> Erm... ok...
>
> MOV R0.zw, c[A0.x + 9];
> MOV R1.x, c[0].w;
> ADD R0.x, c[A0.x + 9].y, R1;
> FLR R0.y, R0.x;
>
> vs
>
> 0: MAD TEMP[0].xy, IN[1], CONST[7].yyyy, CONST[7].xxxx
> 3: MOV TEMP[0].zw, CONST[ADDR[0].x+9]
> 7: FLR TEMP[0].y, CONST[0].wwww
>
> Could be that I'm matching the wrong shaders. But this seems highly
> suspect.
> Need to see if there's a good way of dumping mesa ir... I wonder if
> it doesn't notice the write-mask on the MOV R0.zw and thinks that R0
> contains the value it wants.
Nice detective work on this bug, Ilia.
> Could be that I'm matching the wrong shaders.
I think it could be quite useful if there was a
"GL_MESAX_get_internal_representation" Mesa-specific extension to
extract a text representation of the currently bound GLSL, TGSI,
hardware-specific, etc. shaders, exclusively for debugging purposes.
It wouldn't even need to be advertised on non-debug builds of Mesa. But
merely being able to see all the IRs side by side at a given call in a
trace would probably save some time/grief for us developers in similar
situations.
I did something akin to this for the NVIDIA proprietary drivers in
https://github.com/apitrace/apitrace/commit/49192a4e48d080e44a0d66f059e6897f07cf67f8
but I don't think GetProgramBinary is appropriate for Mesa (only one format).
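For reference, the NVIDIA trick boils down to something like the sketch
below. It uses the standard ARB_get_program_binary entry points; the
helper name is made up (this is not the actual apitrace code), and it
relies on the NVIDIA-specific fact that the returned blob happens to
contain human-readable assembly:

   #include <stdio.h>
   #include <stdlib.h>
   #define GL_GLEXT_PROTOTYPES
   #include <GL/gl.h>
   #include <GL/glext.h>

   /* Dump a linked program's binary blob to stderr; on the NVIDIA
    * proprietary driver the blob contains readable assembly text. */
   static void
   dumpProgramBinary(GLuint program)
   {
      GLint length = 0;
      glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
      if (length <= 0)
         return;

      void *binary = malloc(length);
      GLenum format = 0;
      glGetProgramBinary(program, length, NULL, &format, binary);

      fwrite(binary, 1, length, stderr);
      free(binary);
   }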
Instead, for Mesa we could have something like:

   GLint n;
   // this will trigger the IRs being collected into an array internally
   glGetIntegerv(GL_NUM_ACTIVE_IRS, &n);
   for (GLint i = 0; i < n; ++i) {
      GLint nameLength;
      char *name;
      GLint sourceLength;
      char *source;
      glGetActiveInternalRepr(i, &nameLength, NULL, &sourceLength, NULL);
      name = malloc(nameLength);
      source = malloc(sourceLength);
      glGetActiveInternalRepr(i, NULL, name, NULL, source);
   }
And this would need to be plumbed all the way through to the drivers;
each layer would advertise its additional IRs.
And the information here would only be obtainable/valid immediately
after a draw call.
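To illustrate the plumbing, each layer could stash its textual IR in a
per-context list while processing the draw, and the query above would
just read that list back. Everything below is invented for illustration
(the struct and function names are not existing Mesa interfaces):

   struct gl_context;

   /* Hypothetical internal interface; one entry per IR. */
   struct debug_ir {
      char *name;   /* e.g. "GLSL", "TGSI", "HW" */
      char *text;   /* textual representation of the shader */
   };

   /* Called by the GLSL linker, the state tracker, the gallium driver,
    * etc., while handling a draw call, so the recorded list reflects
    * exactly what was used for that draw. */
   void
   _mesa_debug_record_ir(struct gl_context *ctx,
                         const char *name, const char *text);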
A completely different tack is that apitrace's glretrace would
advertise a unique environment variable (e.g., MESA_IR_DUMP_ALL=fd), and
all drivers/layers would write their shader representations when they
are bound/unbound/destroyed, in a pre-established format:
CREATE "GLSL/123"
...
EOF
CREATE TGSI/456
EOF
BIND GLSL/123
BIND TGSI/456
BIND HW/789
UNBIND GLSL/123
UNBIND TGSI/456
UNBIND HW/789
DESTROY GLSL/123
DESTROY TGSI/456
DESTROY HW/789
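On the driver side that could be as little as a helper along these
lines (a sketch; the helper name is made up, and it assumes the
MESA_IR_DUMP_ALL value is a writable file descriptor number, as
sketched above):

   #include <stdio.h>
   #include <stdlib.h>

   /* Hypothetical helper a layer would call when a shader is created;
    * BIND/UNBIND/DESTROY records would be emitted the same way. */
   static void
   dump_ir_create(const char *kind, unsigned id, const char *text)
   {
      static FILE *out;

      if (!out) {
         const char *fd = getenv("MESA_IR_DUMP_ALL");
         if (!fd)
            return;
         out = fdopen(atoi(fd), "w");
         if (!out)
            return;
      }

      fprintf(out, "CREATE %s/%u\n%s\nEOF\n", kind, id, text);
      fflush(out);
   }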
I don't feel strongly either way, but I suspect that having a proper
extension, even if it's a little more work at the start, will be more
robust in the long term, and have less runtime overhead. GL extensions
also give us a mechanism to revise/deprecate this functionality in the
future.
Jose