[Mesa-dev] [RFC 2/2] i965: add support for image AoA

Timothy Arceri t_arceri at yahoo.com.au
Sat Aug 15 17:46:11 PDT 2015


On Sat, 2015-08-15 at 17:33 +0300, Francisco Jerez wrote:
> Timothy Arceri <t_arceri at yahoo.com.au> writes:
> 
> > On Wed, 2015-08-12 at 19:39 +1000, Timothy Arceri wrote:
> > > Cc: Francisco Jerez <currojerez at riseup.net>
> > > ---
> > >  This patch works for direct indexing of images, but I'm having some
> > >  trouble getting indirect to work.
> > > 
> > >  For example for:
> > >  tests/spec/arb_arrays_of_arrays/execution/image_store/basic-imageStore-non-const-uniform-index.shader_test
> > > 
> > >  Which has an image declared as: writeonly uniform image2D tex[2][2]
> > > 
> > >  Indirect indexing will work for tex[0][0] and tex[0][1] but not for
> > >  tex[1][0] and tex[1][1]; they seem to always end up referring to the
> > >  image in tex[0].
> > 
> > Just to add some more to this, I'm pretty sure my code is generating the
> > correct offsets. If I hardcode img_offset to 72 to get the uniform value
> > of tex[1][0], I get the value I expected, but if I set image.reladdr to a
> > register that contains 72, I don't get what I expect.
> > 
> > If I change the array to a single dimension, e.g. tex[4], and hardcode the
> > offset as described above, then it works as expected for both scenarios. It
> > also works if I split the offset across img_offset and image.reladdr, so
> > there is something going on with image.reladdr for multi-dimensional arrays
> > that I can't quite put my finger on.
> > 
> > Any hints appreciated.
> > 
> Odd, can you attach an assembly dump?
> 
> Thanks.

I wasn't sure what would be the most helpful, so I've attached a few
different dumps.

image_dump = 1D array indirect piglit test, without this patch (Result=pass)
image_dump2 = 2D array indirect piglit test, with this patch (Result=fail)
image_dump3 = 1D array indirect piglit test, with this patch (Result=pass)

image_dump4 = 1D array indirect piglit test, hardcoded register with 72 offset
  (Result=pass)
image_dump5 = 2D array indirect piglit test, hardcoded register with 72 offset
  (Result=fail)

image_dump4 vs image_dump5 is interesting because the generated assembly
matches, which is what I would have expected, but the test result differs.
Then with the offset hardcoded in img_offset (below) it seems to work as
expected, suggesting everything else is set up correctly.

image_dump6 = 1D array indirect piglit test, hardcoded 72 offset in img_offset
   (Result=pass)
image_dump7 = 2D array indirect piglit test, hardcoded 72 offset in img_offset
   (Result=pass)
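
Just to spell out the relation I'm assuming here (this is how I read the
backend, so treat it as a sketch rather than fact):

   uniform component read = var->data.driver_location + img_offset
                            + (*image.reladdr, if set)

so putting 72 in img_offset or loading 72 into the reladdr register should
end up reading the same brw_image_param, which is why dump4/dump5 producing
the same assembly but a different result is the confusing part.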
 

> 
> > 
> > > 
> > >  I can't quite see either my mistake or what I'm missing, so I thought
> > >  I'd send this out and see if anyone has any ideas. I've also sent some
> > >  tests with mixed direct/indirect indexing which seem to calculate the
> > >  correct offset for the direct part, but the indirect indexing is not
> > >  working there either.
> > > 
> > >  src/mesa/drivers/dri/i965/brw_fs_nir.cpp | 53 ++++++++++++++++++++++----------
> > >  1 file changed, 36 insertions(+), 17 deletions(-)
> > > 
> > > diff --git a/src/mesa/drivers/dri/i965/brw_fs_nir.cpp b/src/mesa/drivers/dri/i965/brw_fs_nir.cpp
> > > index d7a2500..a49c230 100644
> > > --- a/src/mesa/drivers/dri/i965/brw_fs_nir.cpp
> > > +++ b/src/mesa/drivers/dri/i965/brw_fs_nir.cpp
> > > @@ -226,6 +226,7 @@ fs_visitor::nir_setup_uniform(nir_variable *var)
> > >        * our name.
> > >        */
> > >     unsigned index = var->data.driver_location;
> > > +   bool set_image_location = true;
> > >     for (unsigned u = 0; u < shader_prog->NumUniformStorage; u++) {
> > >        struct gl_uniform_storage *storage = &shader_prog->UniformStorage[u];
> > >  
> > > @@ -244,7 +245,13 @@ fs_visitor::nir_setup_uniform(nir_variable *var)
> > >            * because their size is driver-specific, so we need to allocate
> > >            * space for them here at the end of the parameter array.
> > >            */
> > > -         var->data.driver_location = uniforms;
> > > +         if (set_image_location) {
> > > +            /* For arrays of arrays we only want to set this once at the base
> > > +             * location.
> > > +             */
> > > +            var->data.driver_location = uniforms;
> > > +            set_image_location = false;
> > > +         }
> > >           param_size[uniforms] =
> > >              BRW_IMAGE_PARAM_SIZE * MAX2(storage->array_elements, 1);
> > >  
> > > @@ -1165,19 +1172,27 @@ fs_visitor::get_nir_image_deref(const nir_deref_var *deref)
> > >  {
> > >     fs_reg image(UNIFORM, deref->var->data.driver_location,
> > >                  BRW_REGISTER_TYPE_UD);
> > > -
> > > -   if (deref->deref.child) {
> > > -      const nir_deref_array *deref_array =
> > > -         nir_deref_as_array(deref->deref.child);
> > > -      assert(deref->deref.child->deref_type == nir_deref_type_array &&
> > > -             deref_array->deref.child == NULL);
> > > -      const unsigned size = glsl_get_length(deref->var->type);
> > > +   fs_reg *indirect_offset = NULL;
> > > +
> > > +   unsigned img_offset = 0;
> > > +   const nir_deref *tail = &deref->deref;
> > > +   while (tail->child) {
> > > +      const nir_deref_array *deref_array = nir_deref_as_array(tail->child);
> > > +      assert(tail->child->deref_type == nir_deref_type_array);
> > > +      tail = tail->child;
> > > +      const unsigned size = glsl_get_length(tail->type);
> > > +      const unsigned child_array_elements = tail->child != NULL ?
> > > +         glsl_get_aoa_size(tail->type) : 1;
> > >        const unsigned base = MIN2(deref_array->base_offset, size - 1);
> > > -
> > > -      image = offset(image, bld, base * BRW_IMAGE_PARAM_SIZE);
> > > +      const unsigned aoa_size = child_array_elements * BRW_IMAGE_PARAM_SIZE;
> > > +      img_offset += base * aoa_size;
> > >  
> > >        if (deref_array->deref_array_type == nir_deref_array_type_indirect) {
> > > -         fs_reg *tmp = new(mem_ctx) fs_reg(vgrf(glsl_type::int_type));
> > > +         fs_reg tmp = vgrf(glsl_type::int_type);
> > > +         if (indirect_offset == NULL) {
> > > +            indirect_offset = new(mem_ctx) fs_reg(vgrf(glsl_type::int_type));
> > > +            bld.MOV(*indirect_offset, fs_reg(0));
> > > +         }
> > >  
> > >           if (devinfo->gen == 7 && !devinfo->is_haswell) {
> > >              /* IVB hangs when trying to access an invalid surface index with
> > > @@ -1188,18 +1203,22 @@ fs_visitor::get_nir_image_deref(const nir_deref_var *deref)
> > >               * of the possible outcomes of the hang.  Clamp the index to
> > >               * prevent access outside of the array bounds.
> > >               */
> > > -            bld.emit_minmax(*tmp, retype(get_nir_src(deref_array->indirect),
> > > -                                         BRW_REGISTER_TYPE_UD),
> > > +            bld.emit_minmax(tmp, retype(get_nir_src(deref_array->indirect),
> > > +                                        BRW_REGISTER_TYPE_UD),
> > >                              fs_reg(size - base - 1), BRW_CONDITIONAL_L);
> > >           } else {
> > > -            bld.MOV(*tmp, get_nir_src(deref_array->indirect));
> > > +            bld.MOV(tmp, get_nir_src(deref_array->indirect));
> > >           }
> > > -
> > > -         bld.MUL(*tmp, *tmp, fs_reg(BRW_IMAGE_PARAM_SIZE));
> > > -         image.reladdr = tmp;
> > > +         bld.MUL(tmp, tmp, fs_reg(aoa_size));
> > > +         bld.ADD(*indirect_offset, *indirect_offset, tmp);
> > >        }
> > >     }
> > >  
> > > +   if (indirect_offset) {
> > > +      image.reladdr = indirect_offset;
> > > +   }
> > > +   image = offset(image, bld, img_offset);
> > > +
> > >     return image;
> > >  }
> > >  
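
For anyone following along, here is a standalone sketch (not driver code) of
the offset arithmetic the new loop in get_nir_image_deref() above is meant to
implement for an arrays-of-arrays access like tex[i][j]. IMAGE_PARAM_SIZE is
just a stand-in for BRW_IMAGE_PARAM_SIZE and the array sizes/indices are made
up; the point is only how the constant part (img_offset) and the indirect part
(what gets accumulated into image.reladdr) are supposed to combine:

/* Standalone illustration only -- IMAGE_PARAM_SIZE is a placeholder for
 * BRW_IMAGE_PARAM_SIZE (see brw_context.h), and the sizes/indices below
 * are made up.
 */
#include <cstdio>

static const unsigned IMAGE_PARAM_SIZE = 8; /* placeholder value */

int main()
{
   const unsigned dims[2] = { 2, 2 };  /* image2D tex[2][2] */
   const unsigned idx[2]  = { 1, 0 };  /* accessing tex[1][0] */

   unsigned img_offset = 0;  /* constant part, folded in at compile time */
   unsigned reladdr    = 0;  /* what the MUL/ADDs into image.reladdr should sum to */

   /* Walk from the outermost array level inwards, as the while (tail->child)
    * loop does: each level contributes index * (number of images below this
    * level) * IMAGE_PARAM_SIZE.
    */
   unsigned elems_below = dims[0] * dims[1];
   for (unsigned level = 0; level < 2; level++) {
      elems_below /= dims[level];
      const unsigned aoa_size = elems_below * IMAGE_PARAM_SIZE;

      /* Pretend the outer index is a compile-time constant and the inner one
       * is indirect, as in the mixed direct/indirect tests.
       */
      if (level == 0)
         img_offset += idx[level] * aoa_size;
      else
         reladdr += idx[level] * aoa_size;
   }

   printf("img_offset = %u, reladdr = %u, total slots from base = %u\n",
          img_offset, reladdr, img_offset + reladdr);
   return 0;
}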
-------------- next part --------------
NIR (SSA form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  image2D[4] tex (4294967295, 5)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (4)
	vec4 ssa_6 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_7 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_6, ssa_4, ssa_7) (tex[ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

NIR (final form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  image2D[4] tex (4294967295, 5)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (4)
	vec4 ssa_6 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_7 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_6, ssa_4, ssa_7) (tex[ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) (array image2D 4) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (var_ref tex) (var_ref n) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor at 2)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 27 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 432 to 336 bytes (22%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
mov(8)          g7<1>UD         0x00000000UD                    { align1 1Q compacted };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g13<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g16<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g17<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g19<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
add(8)          g12<1>D         g7<8,8,1>D      0x00000048UD    { align1 WE_all 1Q };
mov(1)          g13.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g10<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g11<1>F         -g4<8,8,1>F     249.5F          { align1 1Q };
send(8)         g2<1>UW         g12<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(8)          g14<1>D         g10<8,8,1>F                     { align1 1Q compacted };
mov(8)          g15<1>D         g11<8,8,1>F                     { align1 1Q compacted };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g9<1>UD         g[a0 64]<0,1,0>UD               { align1 WE_all };
and(1)          a0<1>UD         g9<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g13<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 42 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 672 to 512 bytes (24%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g14<1>UW        g1.5<2,4,0>UW   0x11001100V     { align1 1H };
mov(16)         g17<1>UD        0x00000000UD                    { align1 1H compacted };
mov(1)          g20<1>UD        0D                              { align1 WE_all };
mov(8)          g10<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g11<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g12<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g13<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g7<1>UD         0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g14<8,8,1>UW                    { align1 1H };
add(16)         g14<1>D         g17<8,8,1>D     0x00000048UD    { align1 WE_all 1H };
mov(1)          g7.7<1>UD       g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g16<1>F         g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g18<1>F         -g4<8,8,1>F     249.5F          { align1 1H };
send(16)        g22<1>UW        g14<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(16)         g14<1>D         g16<8,8,1>F                     { align1 1H compacted };
mov(16)         g16<1>D         g18<8,8,1>F                     { align1 1H compacted };
shl(1)          a0<1>UD         g20<0,1,0>UD    0x00000002UD    { align1 WE_all compacted };
add(1)          a0<1>UD         a0<0,1,0>UD     0x00000200UD    { align1 WE_all };
mov(1)          g21<1>UD        g[a0 192]<0,1,0>UD              { align1 WE_all };
mov(8)          g8<1>UD         g14<8,8,1>UD                    { align1 1Q compacted };
mov(8)          g9<1>UD         g16<8,8,1>UD                    { align1 1Q compacted };
and(1)          a0<1>UD         g21<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g7<8,8,1>UD     a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g8<1>UD         g15<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g9<1>UD         g17<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g10<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g11<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g12<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g13<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g7<1>UD         0D                              { align1 WE_all 2Q };
mov(1)          g7.7<1>UD       g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g21<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g7<8,8,1>UD     a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0


GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) (array image2D 4) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (var_ref tex) (var_ref n) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor at 3)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 27 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 432 to 336 bytes (22%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
mov(8)          g7<1>UD         0x00000000UD                    { align1 1Q compacted };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g13<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g16<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g17<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g19<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
add(8)          g12<1>D         g7<8,8,1>D      0x00000048UD    { align1 WE_all 1Q };
mov(1)          g13.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g10<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g11<1>F         g4<8,8,1>F      0.5F            { align1 1Q };
send(8)         g2<1>UW         g12<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(8)          g14<1>D         g10<8,8,1>F                     { align1 1Q compacted };
mov(8)          g15<1>D         g11<8,8,1>F                     { align1 1Q compacted };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g9<1>UD         g[a0 64]<0,1,0>UD               { align1 WE_all };
and(1)          a0<1>UD         g9<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g13<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 42 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 672 to 512 bytes (24%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g14<1>UW        g1.5<2,4,0>UW   0x11001100V     { align1 1H };
mov(16)         g17<1>UD        0x00000000UD                    { align1 1H compacted };
mov(1)          g20<1>UD        0D                              { align1 WE_all };
mov(8)          g10<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g11<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g12<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g13<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g7<1>UD         0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g14<8,8,1>UW                    { align1 1H };
add(16)         g14<1>D         g17<8,8,1>D     0x00000048UD    { align1 WE_all 1H };
mov(1)          g7.7<1>UD       g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g16<1>F         g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g18<1>F         g4<8,8,1>F      0.5F            { align1 1H };
send(16)        g22<1>UW        g14<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(16)         g14<1>D         g16<8,8,1>F                     { align1 1H compacted };
mov(16)         g16<1>D         g18<8,8,1>F                     { align1 1H compacted };
shl(1)          a0<1>UD         g20<0,1,0>UD    0x00000002UD    { align1 WE_all compacted };
add(1)          a0<1>UD         a0<0,1,0>UD     0x00000200UD    { align1 WE_all };
mov(1)          g21<1>UD        g[a0 192]<0,1,0>UD              { align1 WE_all };
mov(8)          g8<1>UD         g14<8,8,1>UD                    { align1 1Q compacted };
mov(8)          g9<1>UD         g16<8,8,1>UD                    { align1 1Q compacted };
and(1)          a0<1>UD         g21<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g7<8,8,1>UD     a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g8<1>UD         g15<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g9<1>UD         g17<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g10<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g11<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g12<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g13<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g7<1>UD         0D                              { align1 WE_all 2Q };
mov(1)          g7.7<1>UD       g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g21<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g7<8,8,1>UD     a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0
-------------- next part --------------
NIR (SSA form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  int m (4294967295, 5)
decl_var uniform  image2D[2][2] tex (4294967295, 6)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (5)
	vec1 ssa_6 = intrinsic load_uniform () () (4)
	vec4 ssa_7 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_8 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_7, ssa_4, ssa_8) (tex[ssa_6][ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

NIR (final form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  int m (4294967295, 5)
decl_var uniform  image2D[2][2] tex (4294967295, 6)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (5)
	vec1 ssa_6 = intrinsic load_uniform () () (4)
	vec4 ssa_7 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_8 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_7, ssa_4, ssa_8) (tex[ssa_6][ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) int m)
(declare (uniform ) (array (array image2D 2) 2) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (array_ref (var_ref tex) (var_ref n) ) (var_ref m) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor at 2)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 23 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 368 to 288 bytes (22%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g12<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g15<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g16<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g17<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g5<1>F          g2.4<8,4,1>UW                   { align1 1Q };
mov(1)          g9<1>UD         g4.6<0,1,0>UD                   { align1 WE_all };
mov(1)          g12.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g10<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g11<1>F         -g5<8,8,1>F     249.5F          { align1 1Q };
mov(8)          g13<1>D         g10<8,8,1>F                     { align1 1Q compacted };
mov(8)          g14<1>D         g11<8,8,1>F                     { align1 1Q compacted };
and(1)          a0<1>UD         g9<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g12<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 37 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 592 to 448 bytes (24%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g17<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g18<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g19<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g20<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g14<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
mov(1)          g13<1>UD        g6.6<0,1,0>UD                   { align1 WE_all };
mov(1)          g14.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         -g4<8,8,1>F     249.5F          { align1 1H };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
mov(8)          g15<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g16<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g14<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g15<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g16<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g17<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g18<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g19<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g14<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g14.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g14<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0


GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) int m)
(declare (uniform ) (array (array image2D 2) 2) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (array_ref (var_ref tex) (var_ref n) ) (var_ref m) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor at 3)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 23 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 368 to 288 bytes (22%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g12<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g15<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g16<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g17<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g5<1>F          g2.4<8,4,1>UW                   { align1 1Q };
mov(1)          g9<1>UD         g4.6<0,1,0>UD                   { align1 WE_all };
mov(1)          g12.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g10<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g11<1>F         g5<8,8,1>F      0.5F            { align1 1Q };
mov(8)          g13<1>D         g10<8,8,1>F                     { align1 1Q compacted };
mov(8)          g14<1>D         g11<8,8,1>F                     { align1 1Q compacted };
and(1)          a0<1>UD         g9<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g12<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 37 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 592 to 448 bytes (24%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g17<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g18<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g19<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g20<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g14<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
mov(1)          g13<1>UD        g6.6<0,1,0>UD                   { align1 WE_all };
mov(1)          g14.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         g4<8,8,1>F      0.5F            { align1 1H };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
mov(8)          g15<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g16<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g14<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g15<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g16<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g17<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g18<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g19<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g14<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g14.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g14<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0

-------------- next part --------------
NIR (SSA form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  int m (4294967295, 5)
decl_var uniform  image2D[2][2] tex (4294967295, 6)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (5)
	vec1 ssa_6 = intrinsic load_uniform () () (4)
	vec4 ssa_7 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_8 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_7, ssa_4, ssa_8) (tex[ssa_6][ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

NIR (final form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  int m (4294967295, 5)
decl_var uniform  image2D[2][2] tex (4294967295, 6)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (5)
	vec1 ssa_6 = intrinsic load_uniform () () (4)
	vec4 ssa_7 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_8 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_7, ssa_4, ssa_8) (tex[ssa_6][ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) int m)
(declare (uniform ) (array (array image2D 2) 2) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (array_ref (var_ref tex) (var_ref n) ) (var_ref m) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor at 2)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 26 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 416 to 320 bytes (23%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
mov(1)          g7<1>UD         0D                              { align1 WE_all };
mov(8)          g11<1>UD        0x00000048UD                    { align1 WE_all 1Q compacted };
mov(8)          g15<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g18<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g19<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g20<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g21<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
send(8)         g11<1>UW        g11<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(1)          g15.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g9<1>F          g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g10<1>F         -g4<8,8,1>F     249.5F          { align1 1Q };
shl(1)          a0<1>UD         g7<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g8<1>UD         g[a0 352]<0,1,0>UD              { align1 WE_all };
mov(8)          g16<1>D         g9<8,8,1>F                      { align1 1Q compacted };
mov(8)          g17<1>D         g10<8,8,1>F                     { align1 1Q compacted };
and(1)          a0<1>UD         g8<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g15<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 40 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 640 to 480 bytes (25%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(16)         g9<1>UD         0x00000048UD                    { align1 WE_all 1H compacted };
mov(8)          g25<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g26<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g27<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g28<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g22<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
send(16)        g14<1>UW        g9<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(1)          g22.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         -g4<8,8,1>F     249.5F          { align1 1H };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g13<1>UD        g[a0 448]<0,1,0>UD              { align1 WE_all };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
mov(8)          g23<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g24<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g22<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g15<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g16<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g17<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g18<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g19<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g14<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g14.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g14<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0


GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) int m)
(declare (uniform ) (array (array image2D 2) 2) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (array_ref (var_ref tex) (var_ref n) ) (var_ref m) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor at 3)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 26 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 416 to 320 bytes (23%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
mov(1)          g7<1>UD         0D                              { align1 WE_all };
mov(8)          g11<1>UD        0x00000048UD                    { align1 WE_all 1Q compacted };
mov(8)          g15<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g18<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g19<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g20<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g21<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
send(8)         g11<1>UW        g11<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(1)          g15.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g9<1>F          g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g10<1>F         g4<8,8,1>F      0.5F            { align1 1Q };
shl(1)          a0<1>UD         g7<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g8<1>UD         g[a0 352]<0,1,0>UD              { align1 WE_all };
mov(8)          g16<1>D         g9<8,8,1>F                      { align1 1Q compacted };
mov(8)          g17<1>D         g10<8,8,1>F                     { align1 1Q compacted };
and(1)          a0<1>UD         g8<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g15<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 40 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 640 to 480 bytes (25%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(16)         g9<1>UD         0x00000048UD                    { align1 WE_all 1H compacted };
mov(8)          g25<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g26<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g27<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g28<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g22<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
send(16)        g14<1>UW        g9<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(1)          g22.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         g4<8,8,1>F      0.5F            { align1 1H };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g13<1>UD        g[a0 448]<0,1,0>UD              { align1 WE_all };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
mov(8)          g23<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g24<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g22<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g15<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g16<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g17<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g18<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g19<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g14<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g14.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g14<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0
-------------- next part --------------
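The dumps that follow are from the 1D-array test (image2D tex[4] indexed by the uniform n). Reconstructed from the GLSL IR below, the shader is roughly this; the exact piglit source may differ a little, and the extension directives are my guess at what the test enables:

#version 150
#extension GL_ARB_shader_image_load_store : enable
#extension GL_ARB_gpu_shader5 : enable

uniform vec4 color;
uniform int n;
writeonly uniform image2D tex[4];

out vec4 outcolor;

void main()
{
   /* Store into a dynamically indexed element of a 1D image array. */
   imageStore(tex[n], ivec2(gl_FragCoord.xy), color);
   outcolor = vec4(0.0, 0.0, 0.0, 1.0);
}
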
NIR (SSA form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  image2D[4] tex (4294967295, 5)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (4)
	vec4 ssa_6 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_7 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_6, ssa_4, ssa_7) (tex[ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

NIR (final form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  image2D[4] tex (4294967295, 5)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (4)
	vec4 ssa_6 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_7 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_6, ssa_4, ssa_7) (tex[ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) (array image2D 4) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (var_ref tex) (var_ref n) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor@2)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 26 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 416 to 320 bytes (23%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
mov(1)          g7<1>UD         0D                              { align1 WE_all };
mov(8)          g11<1>UD        0x00000048UD                    { align1 WE_all 1Q compacted };
mov(8)          g15<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g18<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g19<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g20<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g21<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
send(8)         g11<1>UW        g11<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(1)          g15.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g9<1>F          g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g10<1>F         -g4<8,8,1>F     249.5F          { align1 1Q };
shl(1)          a0<1>UD         g7<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g8<1>UD         g[a0 352]<0,1,0>UD              { align1 WE_all };
mov(8)          g16<1>D         g9<8,8,1>F                      { align1 1Q compacted };
mov(8)          g17<1>D         g10<8,8,1>F                     { align1 1Q compacted };
and(1)          a0<1>UD         g8<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g15<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 40 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 640 to 480 bytes (25%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(16)         g9<1>UD         0x00000048UD                    { align1 WE_all 1H compacted };
mov(8)          g25<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g26<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g27<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g28<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g22<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
send(16)        g14<1>UW        g9<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(1)          g22.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         -g4<8,8,1>F     249.5F          { align1 1H };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g13<1>UD        g[a0 448]<0,1,0>UD              { align1 WE_all };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
mov(8)          g23<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g24<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g22<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g15<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g16<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g17<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g18<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g19<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g14<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g14.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g14<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0


GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) (array image2D 4) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (var_ref tex) (var_ref n) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor@3)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 26 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 416 to 320 bytes (23%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
mov(1)          g7<1>UD         0D                              { align1 WE_all };
mov(8)          g11<1>UD        0x00000048UD                    { align1 WE_all 1Q compacted };
mov(8)          g15<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g18<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g19<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g20<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g21<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
send(8)         g11<1>UW        g11<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(1)          g15.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g9<1>F          g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g10<1>F         g4<8,8,1>F      0.5F            { align1 1Q };
shl(1)          a0<1>UD         g7<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g8<1>UD         g[a0 352]<0,1,0>UD              { align1 WE_all };
mov(8)          g16<1>D         g9<8,8,1>F                      { align1 1Q compacted };
mov(8)          g17<1>D         g10<8,8,1>F                     { align1 1Q compacted };
and(1)          a0<1>UD         g8<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g15<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 40 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 640 to 480 bytes (25%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(16)         g9<1>UD         0x00000048UD                    { align1 WE_all 1H compacted };
mov(8)          g25<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g26<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g27<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g28<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g22<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
send(16)        g14<1>UW        g9<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(1)          g22.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         g4<8,8,1>F      0.5F            { align1 1H };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g13<1>UD        g[a0 448]<0,1,0>UD              { align1 WE_all };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
mov(8)          g23<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g24<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g22<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g15<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g16<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g17<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g18<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g19<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g14<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g14.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g13<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g14<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0
-------------- next part --------------
NIR (SSA form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  image2D[4] tex (4294967295, 5)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (4)
	vec4 ssa_6 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_7 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_6, ssa_4, ssa_7) (tex[ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

NIR (final form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  image2D[4] tex (4294967295, 5)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (4)
	vec4 ssa_6 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_7 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_6, ssa_4, ssa_7) (tex[ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) (array image2D 4) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (var_ref tex) (var_ref n) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor@2)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 27 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 432 to 352 bytes (19%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
sel.l(8)        g7<1>D          g4.4<0,1,0>UD   0xffffffffUD    { align1 1Q };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g12<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g15<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g16<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g17<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
mul(8)          g2<1>D          g7<8,8,1>D      0x00000018UD    { align1 1Q };
mov(1)          g12.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g10<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g11<1>F         -g4<8,8,1>F     249.5F          { align1 1Q };
send(8)         g2<1>UW         g2<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(8)          g13<1>D         g10<8,8,1>F                     { align1 1Q compacted };
mov(8)          g14<1>D         g11<8,8,1>F                     { align1 1Q compacted };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g9<1>UD         g[a0 64]<0,1,0>UD               { align1 WE_all };
and(1)          a0<1>UD         g9<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g12<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
nop                                                             ;
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 41 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 656 to 512 bytes (22%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
sel.l(16)       g10<1>D         g6.4<0,1,0>UD   0xffffffffUD    { align1 1H };
mov(1)          g13<1>UD        0D                              { align1 WE_all };
mov(8)          g26<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g27<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g28<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g29<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g23<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
mul(16)         g7<1>D          g10<8,8,1>D     0x00000018UD    { align1 1H };
mov(1)          g23.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         -g4<8,8,1>F     249.5F          { align1 1H };
send(16)        g15<1>UW        g7<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
shl(1)          a0<1>UD         g13<0,1,0>UD    0x00000002UD    { align1 WE_all compacted };
mov(1)          g14<1>UD        g[a0 480]<0,1,0>UD              { align1 WE_all };
mov(8)          g24<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g25<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g14<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g23<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g16<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g17<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g18<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g19<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g21<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g15<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g15.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g14<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g15<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
nop                                                             ;
   END B0


GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) (array image2D 4) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (var_ref tex) (var_ref n) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor@3)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 27 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 432 to 352 bytes (19%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
sel.l(8)        g7<1>D          g4.4<0,1,0>UD   0xffffffffUD    { align1 1Q };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g12<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g15<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g16<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g17<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
mul(8)          g2<1>D          g7<8,8,1>D      0x00000018UD    { align1 1Q };
mov(1)          g12.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g10<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g11<1>F         g4<8,8,1>F      0.5F            { align1 1Q };
send(8)         g2<1>UW         g2<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(8)          g13<1>D         g10<8,8,1>F                     { align1 1Q compacted };
mov(8)          g14<1>D         g11<8,8,1>F                     { align1 1Q compacted };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g9<1>UD         g[a0 64]<0,1,0>UD               { align1 WE_all };
and(1)          a0<1>UD         g9<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g12<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
nop                                                             ;
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 41 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 656 to 512 bytes (22%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
sel.l(16)       g10<1>D         g6.4<0,1,0>UD   0xffffffffUD    { align1 1H };
mov(1)          g13<1>UD        0D                              { align1 WE_all };
mov(8)          g26<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g27<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g28<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g29<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g23<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
mul(16)         g7<1>D          g10<8,8,1>D     0x00000018UD    { align1 1H };
mov(1)          g23.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         g4<8,8,1>F      0.5F            { align1 1H };
send(16)        g15<1>UW        g7<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
shl(1)          a0<1>UD         g13<0,1,0>UD    0x00000002UD    { align1 WE_all compacted };
mov(1)          g14<1>UD        g[a0 480]<0,1,0>UD              { align1 WE_all };
mov(8)          g24<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g25<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g14<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g23<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g16<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g17<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g18<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g19<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g21<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g15<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g15.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g14<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g15<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
nop                                                             ;
   END B0
-------------- next part --------------
NIR (SSA form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  image2D[4] tex (4294967295, 5)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (4)
	vec4 ssa_6 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_7 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_6, ssa_4, ssa_7) (tex[ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

NIR (final form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  image2D[4] tex (4294967295, 5)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (4)
	vec4 ssa_6 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_7 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_6, ssa_4, ssa_7) (tex[ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) (array image2D 4) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (var_ref tex) (var_ref n) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor@2)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 27 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 432 to 336 bytes (22%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
sel.l(8)        g7<1>D          g4.4<0,1,0>UD   0x00000003UD    { align1 1Q };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g12<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g15<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g16<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g17<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
mul(8)          g2<1>D          g7<8,8,1>D      24D             { align1 1Q compacted };
mov(1)          g12.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g10<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g11<1>F         -g4<8,8,1>F     249.5F          { align1 1Q };
send(8)         g2<1>UW         g2<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(8)          g13<1>D         g10<8,8,1>F                     { align1 1Q compacted };
mov(8)          g14<1>D         g11<8,8,1>F                     { align1 1Q compacted };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g9<1>UD         g[a0 64]<0,1,0>UD               { align1 WE_all };
and(1)          a0<1>UD         g9<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g12<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 41 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 656 to 496 bytes (24%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
sel.l(16)       g10<1>D         g6.4<0,1,0>UD   0x00000003UD    { align1 1H };
mov(1)          g13<1>UD        0D                              { align1 WE_all };
mov(8)          g26<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g27<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g28<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g29<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g23<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
mul(16)         g7<1>D          g10<8,8,1>D     24D             { align1 1H compacted };
mov(1)          g23.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         -g4<8,8,1>F     249.5F          { align1 1H };
send(16)        g15<1>UW        g7<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
shl(1)          a0<1>UD         g13<0,1,0>UD    0x00000002UD    { align1 WE_all compacted };
mov(1)          g14<1>UD        g[a0 480]<0,1,0>UD              { align1 WE_all };
mov(8)          g24<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g25<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g14<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g23<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g16<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g17<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g18<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g19<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g21<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g15<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g15.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g14<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g15<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0


GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) (array image2D 4) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (var_ref tex) (var_ref n) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor@3)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 27 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 432 to 336 bytes (22%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
sel.l(8)        g7<1>D          g4.4<0,1,0>UD   0x00000003UD    { align1 1Q };
mov(1)          g8<1>UD         0D                              { align1 WE_all };
mov(8)          g12<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g15<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g16<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g17<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
mul(8)          g2<1>D          g7<8,8,1>D      24D             { align1 1Q compacted };
mov(1)          g12.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g10<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g11<1>F         g4<8,8,1>F      0.5F            { align1 1Q };
send(8)         g2<1>UW         g2<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
mov(8)          g13<1>D         g10<8,8,1>F                     { align1 1Q compacted };
mov(8)          g14<1>D         g11<8,8,1>F                     { align1 1Q compacted };
shl(1)          a0<1>UD         g8<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g9<1>UD         g[a0 64]<0,1,0>UD               { align1 WE_all };
and(1)          a0<1>UD         g9<0,1,0>UD     0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g12<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 41 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 656 to 496 bytes (24%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g7<1>UW         g1.5<2,4,0>UW   0x11001100V     { align1 1H };
sel.l(16)       g10<1>D         g6.4<0,1,0>UD   0x00000003UD    { align1 1H };
mov(1)          g13<1>UD        0D                              { align1 WE_all };
mov(8)          g26<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g27<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g28<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g29<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g23<1>UD        0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g7<8,8,1>UW                     { align1 1H };
mul(16)         g7<1>D          g10<8,8,1>D     24D             { align1 1H compacted };
mov(1)          g23.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g9<1>F          g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g11<1>F         g4<8,8,1>F      0.5F            { align1 1H };
send(16)        g15<1>UW        g7<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(16)         g7<1>D          g9<8,8,1>F                      { align1 1H compacted };
mov(16)         g9<1>D          g11<8,8,1>F                     { align1 1H compacted };
shl(1)          a0<1>UD         g13<0,1,0>UD    0x00000002UD    { align1 WE_all compacted };
mov(1)          g14<1>UD        g[a0 480]<0,1,0>UD              { align1 WE_all };
mov(8)          g24<1>UD        g7<8,8,1>UD                     { align1 1Q compacted };
mov(8)          g25<1>UD        g9<8,8,1>UD                     { align1 1Q compacted };
and(1)          a0<1>UD         g14<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g23<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g16<1>UD        g8<8,8,1>UD                     { align1 2Q compacted };
mov(8)          g17<1>UD        g10<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g18<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g19<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g20<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g21<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g15<1>UD        0D                              { align1 WE_all 2Q };
mov(1)          g15.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g14<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g15<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0
-------------- next part --------------
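The dumps after this separator are from the 2D AoA test (image2D tex[2][2] indexed by the uniforms n and m). Again reconstructed from the GLSL IR, the shader looks roughly like this; the extension directives are assumed:

#version 150
#extension GL_ARB_shader_image_load_store : enable
#extension GL_ARB_gpu_shader5 : enable
#extension GL_ARB_arrays_of_arrays : enable

uniform vec4 color;
uniform int n;
uniform int m;
writeonly uniform image2D tex[2][2];

out vec4 outcolor;

void main()
{
   /* tex[n][m] selects one image out of the 2x2 array of arrays, so the
    * indirect offset has to cover both dimensions (n * 2 + m elements).
    */
   imageStore(tex[n][m], ivec2(gl_FragCoord.xy), color);
   outcolor = vec4(0.0, 0.0, 0.0, 1.0);
}
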
NIR (SSA form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  int m (4294967295, 5)
decl_var uniform  image2D[2][2] tex (4294967295, 6)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (5)
	vec1 ssa_6 = intrinsic load_uniform () () (4)
	vec4 ssa_7 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_8 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_7, ssa_4, ssa_8) (tex[ssa_6][ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

NIR (final form) for fragment shader:
decl_var uniform  vec4 color (4294967295, 0)
decl_var uniform  int n (4294967295, 4)
decl_var uniform  int m (4294967295, 5)
decl_var uniform  image2D[2][2] tex (4294967295, 6)
decl_var shader_in  vec4 gl_FragCoord (0, 0)
decl_var shader_out  vec4 outcolor (4, 0)
decl_overload main returning void

impl main {
	block block_0:
	/* preds: */
	vec4 ssa_0 = load_const (0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x00000000 /* 0.000000 */, 0x3f800000 /* 1.000000 */)
	vec4 ssa_1 = intrinsic load_input () () (0)
	vec1 ssa_2 = f2i ssa_1
	vec1 ssa_3 = f2i ssa_1.y
	vec1 ssa_4 = undefined
	vec1 ssa_5 = intrinsic load_uniform () () (5)
	vec1 ssa_6 = intrinsic load_uniform () () (4)
	vec4 ssa_7 = vec4 ssa_2, ssa_3, ssa_4, ssa_4
	vec4 ssa_8 = intrinsic load_uniform () () (0)
	intrinsic image_store (ssa_7, ssa_4, ssa_8) (tex[ssa_6][ssa_5]) ()
	intrinsic store_output (ssa_0) () (0)
	/* succs: block_1 */
	block block_1:
}

GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) int m)
(declare (uniform ) (array (array image2D 2) 2) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (array_ref (var_ref tex) (var_ref n) ) (var_ref m) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor@2)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 30 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 480 to 384 bytes (20%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
sel.l(8)        g7<1>D          g4.4<0,1,0>UD   0x00000001UD    { align1 1Q };
sel.l(8)        g8<1>D          g4.5<0,1,0>UD   0xffffffffUD    { align1 1Q };
mov(1)          g9<1>UD         0D                              { align1 WE_all };
mov(8)          g13<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g16<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g17<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g19<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
mul(8)          g2<1>D          g7<8,8,1>D      0x00000030UD    { align1 1Q };
mul(8)          g5<1>D          g8<8,8,1>D      0x00000018UD    { align1 1Q };
mov(1)          g13.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g11<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g12<1>F         -g4<8,8,1>F     249.5F          { align1 1Q };
add(8)          g6<1>D          g2<8,8,1>D      g5<8,8,1>D      { align1 1Q compacted };
mov(8)          g14<1>D         g11<8,8,1>F                     { align1 1Q compacted };
mov(8)          g15<1>D         g12<8,8,1>F                     { align1 1Q compacted };
send(8)         g2<1>UW         g6<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
shl(1)          a0<1>UD         g9<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g10<1>UD        g[a0 64]<0,1,0>UD               { align1 WE_all };
and(1)          a0<1>UD         g10<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g13<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 45 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 720 to 560 bytes (22%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g14<1>UW        g1.5<2,4,0>UW   0x11001100V     { align1 1H };
sel.l(16)       g17<1>D         g6.4<0,1,0>UD   0x00000001UD    { align1 1H };
sel.l(16)       g19<1>D         g6.5<0,1,0>UD   0xffffffffUD    { align1 1H };
mov(1)          g21<1>UD        0D                              { align1 WE_all };
mov(8)          g10<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g11<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g12<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g13<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g7<1>UD         0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g14<8,8,1>UW                    { align1 1H };
mul(16)         g14<1>D         g17<8,8,1>D     0x00000030UD    { align1 1H };
mul(16)         g23<1>D         g19<8,8,1>D     0x00000018UD    { align1 1H };
mov(1)          g7.7<1>UD       g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g16<1>F         g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g18<1>F         -g4<8,8,1>F     249.5F          { align1 1H };
add(16)         g31<1>D         g14<8,8,1>D     g23<8,8,1>D     { align1 1H compacted };
mov(16)         g14<1>D         g16<8,8,1>F                     { align1 1H compacted };
mov(16)         g16<1>D         g18<8,8,1>F                     { align1 1H compacted };
send(16)        g23<1>UW        g31<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(8)          g8<1>UD         g14<8,8,1>UD                    { align1 1Q compacted };
mov(8)          g9<1>UD         g16<8,8,1>UD                    { align1 1Q compacted };
shl(1)          a0<1>UD         g21<0,1,0>UD    0x00000002UD    { align1 WE_all compacted };
add(1)          a0<1>UD         a0<0,1,0>UD     0x00000200UD    { align1 WE_all };
mov(1)          g22<1>UD        g[a0 224]<0,1,0>UD              { align1 WE_all };
and(1)          a0<1>UD         g22<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g7<8,8,1>UD     a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g8<1>UD         g15<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g9<1>UD         g17<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g10<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g11<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g12<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g13<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g7<1>UD         0D                              { align1 WE_all 2Q };
mov(1)          g7.7<1>UD       g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g22<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g7<8,8,1>UD     a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0


GLSL IR for native fragment shader 3:
(
(declare (location=0 shader_in ) vec4 gl_FragCoord)
(declare (uniform ) vec4 color)
(declare (uniform ) int n)
(declare (uniform ) int m)
(declare (uniform ) (array (array image2D 2) 2) tex)
(declare (location=4 shader_out ) vec4 outcolor)
(declare (temporary ) vec4 outcolor)
( function main
  (signature void
    (parameters
    )
    (
      (declare (temporary ) ivec2 flattening_tmp)
      (assign  (x) (var_ref flattening_tmp)  (expression int f2i (swiz x (var_ref gl_FragCoord) )) ) 
      (assign  (y) (var_ref flattening_tmp)  (expression int f2i (swiz y (var_ref gl_FragCoord) )) ) 
      (call __intrinsic_image_store  ((array_ref (array_ref (var_ref tex) (var_ref n) ) (var_ref m) ) (var_ref flattening_tmp) (var_ref color) ))

      (assign  (xyzw) (var_ref outcolor)  (constant vec4 (0.000000 0.000000 0.000000 1.000000)) ) 
      (assign  (xyzw) (var_ref outcolor@3)  (var_ref outcolor) ) 
    ))

)

( function __intrinsic_image_store
  (signature void
    (parameters
      (declare (in ) image2D image)
      (declare (in ) ivec2 coord)
      (declare (in ) vec4 arg0)
    )
    (
    ))

)

)


Native code for unnamed fragment shader 3
SIMD8 shader: 30 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 480 to 384 bytes (20%)
   START B0
add(16)         g2<1>UW         g1.4<1,4,0>UW   0x11001010V     { align1 WE_all 1H };
sel.l(8)        g7<1>D          g4.4<0,1,0>UD   0x00000001UD    { align1 1Q };
sel.l(8)        g8<1>D          g4.5<0,1,0>UD   0xffffffffUD    { align1 1Q };
mov(1)          g9<1>UD         0D                              { align1 WE_all };
mov(8)          g13<1>UD        0D                              { align1 WE_all 1Q };
mov(8)          g16<1>F         g4<0,1,0>F                      { align1 1Q compacted };
mov(8)          g17<1>F         g4.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g18<1>F         g4.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g19<1>F         g4.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g3<1>F          g2<8,4,1>UW                     { align1 1Q };
mov(8)          g4<1>F          g2.4<8,4,1>UW                   { align1 1Q };
mul(8)          g2<1>D          g7<8,8,1>D      0x00000030UD    { align1 1Q };
mul(8)          g5<1>D          g8<8,8,1>D      0x00000018UD    { align1 1Q };
mov(1)          g13.7<1>UD      g1.7<0,1,0>UD                   { align1 WE_all };
add(8)          g11<1>F         g3<8,8,1>F      0.5F            { align1 1Q };
add(8)          g12<1>F         g4<8,8,1>F      0.5F            { align1 1Q };
add(8)          g6<1>D          g2<8,8,1>D      g5<8,8,1>D      { align1 1Q compacted };
mov(8)          g14<1>D         g11<8,8,1>F                     { align1 1Q compacted };
mov(8)          g15<1>D         g12<8,8,1>F                     { align1 1Q compacted };
send(8)         g2<1>UW         g6<8,8,1>D
                            sampler ld SIMD8 Surface = 5 Sampler = 0 mlen 1 rlen 4 { align1 WE_all 1Q };
shl(1)          a0<1>UD         g9<0,1,0>UD     0x00000002UD    { align1 WE_all compacted };
mov(1)          g10<1>UD        g[a0 64]<0,1,0>UD               { align1 WE_all };
and(1)          a0<1>UD         g10<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g13<8,8,1>UD    a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g125<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g126<1>F        [0F, 0F, 0F, 0F]VF              { align1 1Q compacted };
mov(8)          g127<1>F        1F                              { align1 1Q };
sendc(8)        null            g124<8,8,1>F
                            render RT write SIMD8 LastRT Surface = 0 mlen 4 rlen 0 { align1 1Q EOT };
   END B0

Native code for unnamed fragment shader 3
SIMD16 shader: 45 instructions. 0 loops. 0:0 spills:fills. Promoted 0 constants. Compacted 720 to 560 bytes (22%)
   START B0
add(16)         g4<1>UW         g1.4<2,4,0>UW   0x10101010V     { align1 1H };
add(16)         g14<1>UW        g1.5<2,4,0>UW   0x11001100V     { align1 1H };
sel.l(16)       g17<1>D         g6.4<0,1,0>UD   0x00000001UD    { align1 1H };
sel.l(16)       g19<1>D         g6.5<0,1,0>UD   0xffffffffUD    { align1 1H };
mov(1)          g21<1>UD        0D                              { align1 WE_all };
mov(8)          g10<1>F         g6<0,1,0>F                      { align1 1Q compacted };
mov(8)          g11<1>F         g6.1<0,1,0>F                    { align1 1Q compacted };
mov(8)          g12<1>F         g6.2<0,1,0>F                    { align1 1Q compacted };
mov(8)          g13<1>F         g6.3<0,1,0>F                    { align1 1Q compacted };
mov(8)          g7<1>UD         0D                              { align1 WE_all 1Q };
mov(16)         g2<1>F          g4<8,8,1>UW                     { align1 1H };
mov(16)         g4<1>F          g14<8,8,1>UW                    { align1 1H };
mul(16)         g14<1>D         g17<8,8,1>D     0x00000030UD    { align1 1H };
mul(16)         g23<1>D         g19<8,8,1>D     0x00000018UD    { align1 1H };
mov(1)          g7.7<1>UD       g1.7<0,1,0>UD                   { align1 WE_all };
add(16)         g16<1>F         g2<8,8,1>F      0.5F            { align1 1H };
add(16)         g18<1>F         g4<8,8,1>F      0.5F            { align1 1H };
add(16)         g31<1>D         g14<8,8,1>D     g23<8,8,1>D     { align1 1H compacted };
mov(16)         g14<1>D         g16<8,8,1>F                     { align1 1H compacted };
mov(16)         g16<1>D         g18<8,8,1>F                     { align1 1H compacted };
send(16)        g23<1>UW        g31<8,8,1>D
                            sampler ld SIMD16 Surface = 5 Sampler = 0 mlen 2 rlen 8 { align1 WE_all 1H };
mov(8)          g8<1>UD         g14<8,8,1>UD                    { align1 1Q compacted };
mov(8)          g9<1>UD         g16<8,8,1>UD                    { align1 1Q compacted };
shl(1)          a0<1>UD         g21<0,1,0>UD    0x00000002UD    { align1 WE_all compacted };
add(1)          a0<1>UD         a0<0,1,0>UD     0x00000200UD    { align1 WE_all };
mov(1)          g22<1>UD        g[a0 224]<0,1,0>UD              { align1 WE_all };
and(1)          a0<1>UD         g22<0,1,0>UD    0x000000ffUD    { align1 WE_all compacted };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b4000UD    { align1 WE_all };
send(8)         null            g7<8,8,1>UD     a0<0,1,0>UD
                            render indirect                                 { align1 1Q };
mov(8)          g8<1>UD         g15<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g9<1>UD         g17<8,8,1>UD                    { align1 2Q compacted };
mov(8)          g10<1>F         g6<0,1,0>F                      { align1 2Q compacted };
mov(8)          g11<1>F         g6.1<0,1,0>F                    { align1 2Q compacted };
mov(8)          g12<1>F         g6.2<0,1,0>F                    { align1 2Q compacted };
mov(8)          g13<1>F         g6.3<0,1,0>F                    { align1 2Q compacted };
mov(8)          g7<1>UD         0D                              { align1 WE_all 2Q };
mov(1)          g7.7<1>UD       g1.7<0,1,0>UD                   { align1 WE_all };
and(1)          a0<1>UD         g22<0,1,0>UD    0x000000ffUD    { align1 WE_all };
or(1)           a0<1>UD         a0<0,1,0>UD     0x0e0b6000UD    { align1 WE_all };
send(8)         null            g7<8,8,1>UD     a0<0,1,0>UD
                            render indirect                                 { align1 2Q };
mov(16)         g120<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g122<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g124<1>F        [0F, 0F, 0F, 0F]VF              { align1 1H compacted };
mov(16)         g126<1>F        1F                              { align1 1H };
sendc(16)       null            g120<8,8,1>F
                            render RT write SIMD16 LastRT Surface = 0 mlen 8 rlen 0 { align1 1H EOT };
   END B0


