I left some cleanup comments below and one request for an additional comment.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

On Wed, Nov 29, 2017 at 6:57 PM, Jose Maria Casanova Crespo
<jmcasanova@igalia.com> wrote:
> From: Eduardo Lima Mitev <elima@igalia.com>
>
> Currently, we use byte-scattered write messages for storing 16-bit
> values into an SSBO. This is because untyped surface messages have a
> fixed 32-bit size.
>
> This patch optimizes these 16-bit writes by combining 2 values (e.g.,
> two consecutive components aligned with 32-bits) into a 32-bit register,
> packing the two 16-bit words.
>
> 16-bit single-component values will continue to use byte-scattered
> write messages. The same happens when the first consecutive component
> is not 32-bit aligned.
>
> This optimization reduces the number of SEND messages used for storing
> 16-bit values, potentially by a factor of 2 or 4, which cuts down
> execution time significantly because byte-scattered writes are an
> expensive operation: they only write one component per message.
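
An illustrative aside, not part of the patch: for one SIMD channel the
packing is conceptually just placing the even component in the low word
and the odd component in the high word of a dword. A minimal stand-alone
sketch, where pack_16bit_pair is a made-up name and the real shuffle
helper additionally has to deal with the SIMD register layout:

   #include <stdint.h>

   /* Pack two consecutive 16-bit SSBO components into the single 32-bit
    * dword that one untyped surface write stores: first component in
    * bits 0..15, second component in bits 16..31 (little endian). */
   static inline uint32_t
   pack_16bit_pair(uint16_t even_comp, uint16_t odd_comp)
   {
      return (uint32_t)even_comp | ((uint32_t)odd_comp << 16);
   }
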
>
> v2: Removed use of stride = 2 on sources (Jason Ekstrand)
>     Rework optimization using shuffle 16 write and enable writes
>     of 16-bit vec4 with only one 32-bit message. (Chema Casanova)
> v3: - Fix coding style (Eduardo Lima)
>     - Reorganize code to avoid duplication. (Jason Ekstrand)
>     - Include new comments to explain the length calculations to
>       fix alignment issues of components. (Jason Ekstrand)
>     - Fix issues with writemask yz with 16-bit writes. (Jason Ekstrand)
>
> Signed-off-by: Jose Maria Casanova Crespo <jmcasanova@igalia.com>
> Signed-off-by: Eduardo Lima <elima@igalia.com>
> ---
>  src/intel/compiler/brw_fs_nir.cpp | 61 +++++++++++++++++++++++++++++----------
>  1 file changed, 46 insertions(+), 15 deletions(-)
>
> diff --git a/src/intel/compiler/brw_fs_nir.cpp b/src/intel/compiler/brw_fs_nir.cpp
> index c091241132..2c344ec7df 100644
> --- a/src/intel/compiler/brw_fs_nir.cpp
> +++ b/src/intel/compiler/brw_fs_nir.cpp
> @@ -4088,14 +4088,14 @@ fs_visitor::nir_emit_intrinsic(const fs_builder &bld, nir_intrinsic_instr *instr
>         */
>        unsigned bit_size = nir_src_bit_size(instr->src[0]);
>        unsigned type_size = bit_size / 8;
> +      unsigned slots_per_component = 1;
> +
>        if (bit_size == 64) {
>           val_reg = shuffle_64bit_data_for_32bit_write(bld,
>              val_reg, instr->num_components);
> +         slots_per_component = 2;

Instead of messing around with slots_per_component, I think it would be
easier if the shuffle were just moved inside the loop.

>        }
>
> -      /* 16-bit types would use a minimum of 1 slot */
> -      unsigned type_slots = MAX2(type_size / 4, 1);
> -
>        /* Combine groups of consecutive enabled channels in one write
>         * message. We use ffs to find the first enabled channel and then ffs on
>         * the bit-inverse, down-shifted writemask to determine the length of
> @@ -4105,18 +4105,48 @@ fs_visitor::nir_emit_intrinsic(const fs_builder &bld, nir_intrinsic_instr *instr
>           unsigned first_component = ffs(writemask) - 1;
>           unsigned length = ffs(~(writemask >> first_component)) - 1;
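
Just to illustrate the writemask walk with concrete numbers (assuming the
usual ffs() semantics): for writemask = 0b1110,

   first_component = ffs(0b1110) - 1 = 1
   length          = ffs(~(0b1110 >> 1)) - 1 = ffs(~0b0111) - 1 = 3

so the y, z and w channels get grouped into a single write on that
iteration.
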
>
> +         fs_reg current_val_reg =
> +            offset(val_reg, bld, first_component * slots_per_component);
> +
>           if (type_size > 4) {
>              /* We can't write more than 2 64-bit components at once. Limit
>               * the length of the write to what we can do and let the next
>               * iteration handle the rest.
>               */
>              length = MIN2(2, length);

You could put it here to match 16-bit.  Then current_val_reg above could be
replaced with

   fs_reg write_src = offset(val_reg, bld, first_component);

and we could put the following here:

   write_src = shuffle_64bit_data_for_32bit_write(bld, write_src, length);
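
Pulling those together, the 64-bit branch could end up looking roughly
like this (untested sketch, using the write_src name suggested above):

            fs_reg write_src = offset(val_reg, bld, first_component);

            if (type_size > 4) {
               /* We can't write more than 2 64-bit components at once. */
               length = MIN2(2, length);
               write_src =
                  shuffle_64bit_data_for_32bit_write(bld, write_src, length);
            }

with the bit_size == 64 shuffle (and slots_per_component) before the loop
going away.
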
> -         } else if (type_size == 2) {
> -            /* For 16-bit types we are using byte scattered writes, that can
> -             * only write one component per call. So we limit the length, and
> -             * let the write happening in several iterations.
> +         } else if (type_size < 4) {
> +            assert(type_size == 2);
> +            /* For 16-bit types we pack two consecutive values into a 32-bit
> +             * word and use an untyped write message. For single values, or
> +             * values not aligned to 32 bits, we need to use byte-scattered
> +             * writes, because untyped writes only work with 32-bit components
> +             * at 32-bit alignment. byte_scattered_write messages only support
> +             * one 16-bit component at a time.
> +             *
> +             * For example, if there is a 3-component vector we submit one
> +             * untyped-write message of 32-bit (first two components), and one
> +             * byte-scattered write message (the last component).
>               */
> -            length = 1;
> +
> +            if (first_component % 2) {
> +               /* If we use a .yz writemask we also need to emit 2
> +                * byte-scattered write messages because the y component is
> +                * not aligned to 32-bit.
> +                */
> +               length = 1;
> +            } else if (length > 2 && (length % 2)) {
> +               /* If there is an odd number of consecutive components we
> +                * leave the unpaired component for a following emit of
> +                * length == 1 with byte_scattered_write.
> +                */
> +               length--;
> +            }
> +
> +            fs_reg tmp = bld.vgrf(BRW_REGISTER_TYPE_D,
> +                                  DIV_ROUND_UP(length, 2));
> +            shuffle_16bit_data_for_32bit_write(bld, tmp, current_val_reg,
> +                                               length);
> +            current_val_reg = tmp;

It would be worth a comment to say why you're not wrapping this in an
"if (length > 1)".  It's not immediately obvious that it's because you need
things strided out for the byte scattered write.
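
Maybe something along these lines, wording entirely up to you:

            /* Note that we go through the shuffle even for length == 1:
             * the byte_scattered_write path below expects the 16-bit value
             * in the low word of a 32-bit-strided register, which is
             * exactly the layout the shuffle produces.
             */
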
>           }
>
>           fs_reg offset_reg;
> @@ -4131,24 +4161,25 @@ fs_visitor::nir_emit_intrinsic(const fs_builder &bld, nir_intrinsic_instr *instr
>                      brw_imm_ud(type_size * first_component));
>           }
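
(For what it's worth, a concrete trace of that offset adjustment: for a
16-bit store with a .zw writemask, first_component is 2, so the message
offset is advanced by type_size * first_component = 2 * 2 = 4 bytes past
the base offset, i.e. to where the z component lives.)
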
>
> -         if (type_size == 2) {
> +         if (type_size < 4 && length == 1) {
> +            assert(type_size == 2);
>              /* Untyped Surface messages have a fixed 32-bit size, so we need
>               * to rely on byte scattered in order to write 16-bit elements.
>               * The byte_scattered_write message needs that every written 16-bit
>               * type to be aligned 32-bits (stride=2).
>               */
> -            fs_reg tmp = bld.vgrf(BRW_REGISTER_TYPE_D);
> -            bld.MOV(subscript(tmp, BRW_REGISTER_TYPE_W, 0),
> -                     offset(val_reg, bld, first_component));
>              emit_byte_scattered_write(bld, surf_index, offset_reg,
> -                                      tmp,
> +                                      current_val_reg,
>                                        1 /* dims */, 1,
>                                        bit_size,
>                                        BRW_PREDICATE_NONE);
>           } else {
> +            unsigned write_size = (length * type_size) / 4;
> +            assert (write_size > 0);
> +
>              emit_untyped_write(bld, surf_index, offset_reg,
> -                               offset(val_reg, bld, first_component * type_slots),
> -                               1 /* dims */, length * type_slots,
> +                               current_val_reg,
> +                               1 /* dims */, write_size,
>                                 BRW_PREDICATE_NONE);
>           }
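
(Again just to illustrate the sizes: for a pair of packed 16-bit
components write_size = (2 * 2) / 4 = 1 dword, and for two 64-bit
components write_size = (2 * 8) / 4 = 4 dwords, matching the old
length * type_slots.)
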
<span class="HOEnZb"><font color="#888888"><br>
--<br>
2.14.3<br>
<br>