[Mesa-dev] Miscompilation on Intel?
org.mesa3d.mesa-users at io7m.com
Sun Apr 23 18:12:47 UTC 2017
Hello.
I have the following apitrace that seems to indicate a uniform variable
being erroneously optimized out:
http://ataxia.io7m.com/2017/04/23/r2.trace
Note the (rather large) shader sources at frame 1, call 8383
(glShaderSource). Trimming the sources to the relevant parts gives the
following (abbreviated) code:
--8<--
struct R2_light_ambient_t {
  /// The light color. The components are assumed to be in the range `[0, 1]`.
  vec3 color;
  /// The light intensity.
  float intensity;
  /// The occlusion map.
  sampler2D occlusion;
};

vec3
R2_lightAmbientTerm (
  const R2_light_ambient_t light,
  const vec2 uv)
{
  float occ = texture (light.occlusion, uv).x;
  return light.color * (light.intensity * occ);
}

uniform R2_light_ambient_t R2_light_ambient;

R2_light_output_t
R2_deferredLightMain(
  const R2_reconstructed_surface_t surface)
{
  vec3 diffuse =
    R2_lightAmbientTerm (R2_light_ambient, surface.uv);
  vec3 specular =
    vec3 (0.0);
  return R2_light_output_t (diffuse, specular);
}

layout(location = 0) out vec4 R2_out_image;

void
R2_lightShaderWrite(
  const R2_reconstructed_surface_t surface,
  const R2_light_output_t o)
{
  R2_out_image = vec4 (surface.albedo * (o.diffuse + o.specular), 1.0);
}
uniform R2_viewport_t R2_light_viewport;
uniform R2_gbuffer_input_t R2_light_gbuffer;
uniform float R2_light_depth_coefficient;
uniform R2_view_rays_t R2_light_view_rays;
in float R2_light_volume_positive_eye_z;
void
main (void)
{
  // Rendering of light volumes is expected to occur with depth
  // writes disabled. However, it's necessary to calculate the
  // correct logarithmic depth value for each fragment of the light
  // volume in order to get correct depth testing with respect to the
  // contents of the G-Buffer.
  float depth_log = R2_logDepthEncodePartial(
    R2_light_volume_positive_eye_z,
    R2_light_depth_coefficient);

  // Reconstruct the surface
  R2_reconstructed_surface_t surface =
    R2_deferredSurfaceReconstruct(
      R2_light_gbuffer,
      R2_light_viewport,
      R2_light_view_rays,
      R2_light_depth_coefficient,
      gl_FragCoord.xy);

  // Evaluate light
  R2_light_output_t o = R2_deferredLightMain(surface);
  R2_lightShaderWrite(surface, o);
  gl_FragDepth = depth_log;
}
--8<--
However, at call 8404 in the trace, querying the linked program reports
only four active uniforms, and R2_light_ambient isn't among them: the
compiler appears to have optimized it out. R2_light_ambient is
referenced from R2_deferredLightMain and is involved in a computation
whose result is returned from R2_deferredLightMain. That result is then
passed to R2_lightShaderWrite, where it is involved in an addition whose
result is written to the output variable R2_out_image. I'm having a
hard time coming up with a reason that the compiler could legitimately
optimize out R2_light_ambient.
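For what it's worth, here is a minimal sketch of how the active
uniforms can be enumerated to confirm this, assuming an extension
loader such as libepoxy provides the GL 2.0+ entry points (the program
handle and helper name are placeholders). Since R2_light_ambient is a
struct, I'd expect the linker to report one active uniform per member,
e.g. "R2_light_ambient.color", "R2_light_ambient.intensity" and
"R2_light_ambient.occlusion":

--8<--
#include <stdio.h>
#include <epoxy/gl.h>

// Minimal sketch: list every active uniform of a linked program.
// 'program' is assumed to be a valid, successfully linked program
// object. A struct uniform such as R2_light_ambient should appear
// as one entry per member, e.g. "R2_light_ambient.occlusion".
static void
list_active_uniforms (GLuint program)
{
  GLint count = 0;
  glGetProgramiv (program, GL_ACTIVE_UNIFORMS, &count);

  for (GLint i = 0; i < count; ++i) {
    GLchar name[256];
    GLsizei length = 0;
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform (program, (GLuint) i, sizeof name,
                        &length, &size, &type, name);
    printf ("uniform %d: %s (type 0x%x, size %d)\n",
            (int) i, name, (unsigned) type, (int) size);
  }
}
--8<--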
Can anyone shed any light on this?
Relevant system info:
Linux copperhead 4.10.11-1-ARCH #1 SMP PREEMPT Tue Apr 18 08:39:42 CEST 2017 x86_64 GNU/Linux
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Sandybridge Mobile
OpenGL core profile version string: 3.3 (Core Profile) Mesa 17.0.4
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 17.0.4
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 17.0.4
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
I'm using a 3.3 context and don't seem to get any compilation errors or
warnings.
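For reference, this is roughly the kind of status check that would
surface such errors (a minimal sketch; the shader and program handles
are placeholders), and both queries come back clean with empty info
logs:

--8<--
#include <stdio.h>
#include <epoxy/gl.h>

// Minimal sketch: report shader compile diagnostics.
static void
check_shader (GLuint shader)
{
  GLint status = GL_FALSE;
  GLchar log[4096] = { 0 };
  GLsizei length = 0;

  glGetShaderiv (shader, GL_COMPILE_STATUS, &status);
  glGetShaderInfoLog (shader, sizeof log, &length, log);
  if (status != GL_TRUE || length > 0)
    fprintf (stderr, "compile: %s\n", log);
}

// Minimal sketch: report program link diagnostics.
static void
check_program (GLuint program)
{
  GLint status = GL_FALSE;
  GLchar log[4096] = { 0 };
  GLsizei length = 0;

  glGetProgramiv (program, GL_LINK_STATUS, &status);
  glGetProgramInfoLog (program, sizeof log, &length, log);
  if (status != GL_TRUE || length > 0)
    fprintf (stderr, "link: %s\n", log);
}
--8<--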
M