<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Fri, Dec 1, 2017 at 2:46 AM, Chema Casanova <span dir="ltr"><<a href="mailto:jmcasanova@igalia.com" target="_blank">jmcasanova@igalia.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 01/12/17 11:12, Jason Ekstrand wrote:<br>
> I've left some comments below that I think clean things up and make this<br>
> better, but I believe it is correct as-is.<br>
><br>
> Reviewed-by: Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>><br>
</span>
<span class="">><br>
> On Wed, Nov 29, 2017 at 6:42 PM, Jose Maria Casanova Crespo<br>
</span><span class="">> <<a href="mailto:jmcasanova@igalia.com">jmcasanova@igalia.com</a>> wrote:<br>
><br>
> From: Alejandro Piñeiro <<a href="mailto:apinheiro@igalia.com">apinheiro@igalia.com</a>><br>
</span>
<span class="">><br>
> We need to rely on byte scattered writes as untyped writes are 32-bit<br>
> sized. We could try to keep using 32-bit messages when we have two or<br>
> four 16-bit elements, but for simplicity's sake, we use the same message<br>
> for any number of components. We revisit this approach in the following<br>
> patches.<br>
><br>
> v2: Removed use of stride = 2 on 16-bit sources (Jason Ekstrand)<br>
><br>
> v3: (Jason Ekstrand)<br>
> - Include bit_size in the scattered write message and remove the<br>
> namespace specific for scattered messages.<br>
> - Move comment to proper place.<br>
> - Squashed with i965/fs: Adjust type_size/type_slots on store_ssbo.<br>
> (Jose Maria Casanova)<br>
> - Take into account that get_nir_src returns now WORD types for<br>
> 16-bit sources instead of DWORD.<br>
><br>
> Signed-off-by: Jose Maria Casanova Crespo <<a href="mailto:jmcasanova@igalia.com">jmcasanova@igalia.com</a>><br>
</span>
> Signed-off-by: Alejandro Piñeiro <<a href="mailto:apinheiro@igalia.com">apinheiro@igalia.com</a>><br>
<div><div class="h5">> ---<br>
> src/intel/compiler/brw_fs_nir.cpp | 51 ++++++++++++++++++++++++++++-----------<br>
> 1 file changed, 37 insertions(+), 14 deletions(-)<br>
><br>
> diff --git a/src/intel/compiler/brw_fs_nir.cpp b/src/intel/compiler/brw_fs_nir.cpp<br>
> index d6ab286147..ff04e2468b 100644<br>
> --- a/src/intel/compiler/brw_fs_nir.cpp<br>
> +++ b/src/intel/compiler/brw_fs_nir.cpp<br>
> @@ -4075,14 +4075,15 @@ fs_visitor::nir_emit_intrinsic(const fs_builder &bld, nir_intrinsic_instr *instr<br>
> * Also, we have to shuffle 64-bit data to be in the appropriate layout<br>
> * expected by our 32-bit write messages.<br>
> */<br>
> - unsigned type_size = 4;<br>
> - if (nir_src_bit_size(instr->src[0]) == 64) {<br>
> - type_size = 8;<br>
> + unsigned bit_size = nir_src_bit_size(instr->src[0]);<br>
> + unsigned type_size = bit_size / 8;<br>
> + if (bit_size == 64) {<br>
> val_reg = shuffle_64bit_data_for_32bit_write(bld,<br>
> val_reg, instr->num_components);<br>
> }<br>
><br>
> - unsigned type_slots = type_size / 4;<br>
> + /* 16-bit types would use a minimum of 1 slot */<br>
> + unsigned type_slots = MAX2(type_size / 4, 1);<br>
><br>
><br>
> Given that this is only used for emit_untyped_write, maybe we should just<br>
> move it next to the emit_untyped_write call and just get rid of the<br>
> MAX2(). More on that later.<br>
<br>
</div></div>That makes sense. I partially follow this approach in "[PATCH v4 26/44]<br>
i965/fs: Optimize 16-bit SSBO stores by packing two into a 32-bit reg",<br>
using a slots_per_component that is just 2 for 64-bit and 1 for the other<br>
bit sizes. But I like your approach.<br>
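<br>
Roughly, what 26/44 ends up doing is something like this (just a sketch,<br>
the names in the actual patch may differ):<br>
<br>
   unsigned slots_per_component = bit_size == 64 ? 2 : 1;<br>
   ...<br>
   emit_untyped_write(bld, surf_index, offset_reg,<br>
                      offset(val_reg, bld, first_component * slots_per_component),<br>
                      1 /* dims */, length * slots_per_component,<br>
                      BRW_PREDICATE_NONE);<br>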
<span class=""><br>
> /* Combine groups of consecutive enabled channels in one write<br>
> * message. We use ffs to find the first enabled channel and then ffs on<br>
> @@ -4093,12 +4094,19 @@ fs_visitor::nir_emit_intrinsic(const fs_builder &bld, nir_intrinsic_instr *instr<br>
> unsigned first_component = ffs(writemask) - 1;<br>
> unsigned length = ffs(~(writemask >> first_component)) - 1;<br>
><br>
><br>
> If the one above is first_component, num_components would be a better<br>
> name for this one. It's very confusing to have something generically<br>
> named "length" in a piece of code with so many different possible units.<br>
<br>
</span>It was also confussing to me. What about a rename to<br>
num_consecutive_components as that what is really calculating? so we<br>
don't confuse it with the num_components of instr.<span class=""><br></span></blockquote><div><br></div><div>Hrm... That would work I suppose. Not a huge deal in any case.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
> - /* We can't write more than 2 64-bit components at once. Limit the<br>
> - * length of the write to what we can do and let the next iteration<br>
> - * handle the rest<br>
> - */<br>
> - if (type_size > 4)<br>
> + if (type_size > 4) {<br>
> + /* We can't write more than 2 64-bit components at once. Limit<br>
> + * the length of the write to what we can do and let the next<br>
> + * iteration handle the rest.<br>
> + */<br>
> length = MIN2(2, length);<br>
> + } else if (type_size == 2) {<br>
><br>
><br>
> Maybe type_size < 4?<br>
<br>
</span>I should have brought this change forward into this patch; you already<br>
commented on it for the current [PATCH v4 26/44]. See the sketch below.<br>
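<br>
That is, the branch would end up looking roughly like this (a sketch; the<br>
actual patch may differ):<br>
<br>
   if (type_size > 4) {<br>
      /* We can't write more than 2 64-bit components at once. */<br>
      length = MIN2(2, length);<br>
   } else if (type_size < 4) {<br>
      /* Byte scattered writes can only write one component per message,<br>
       * so limit the length and let the next iterations handle the rest.<br>
       */<br>
      length = 1;<br>
   }<br>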
<span class=""><br>
<br>
> + /* For 16-bit types we are using byte scattered writes, that can<br>
> + * only write one component per call. So we limit the length, and<br>
> + * let the write happen in several iterations.<br>
> + */<br>
> + length = 1;<br>
> + }<br>
><br>
> fs_reg offset_reg;<br>
> nir_const_value *const_offset =<br>
> nir_src_as_const_value(instr->src[2]);<br>
> @@ -4112,11 +4120,26 @@ fs_visitor::nir_emit_intrinsic(const fs_builder &bld, nir_intrinsic_instr *instr<br>
> brw_imm_ud(type_size * first_component));<br>
> }<br>
><br>
> -<br>
> - emit_untyped_write(bld, surf_index, offset_reg,<br>
> - offset(val_reg, bld, first_component * type_slots),<br>
> - 1 /* dims */, length * type_slots,<br>
> - BRW_PREDICATE_NONE);<br>
> + if (type_size == 2) {<br>
><br>
><br>
> maybe type_size < 4?<br>
<br>
</span>Agree<br>
<span class=""><br>
> + /* Untyped Surface messages have a fixed 32-bit size, so we need<br>
> + * to rely on byte scattered writes in order to write 16-bit elements.<br>
> + * The byte_scattered_write message needs every written 16-bit<br>
> + * element to be 32-bit aligned (stride=2).<br>
> + */<br>
> + fs_reg tmp = bld.vgrf(BRW_REGISTER_TYPE_D);<br>
> + bld.MOV(subscript(tmp, BRW_REGISTER_TYPE_W, 0),<br>
> + offset(val_reg, bld, first_component));<br>
> + emit_byte_scattered_write(bld, surf_index, offset_reg,<br>
> + tmp,<br>
> + 1 /* dims */, 1,<br>
> + bit_size,<br>
> + BRW_PREDICATE_NONE);<br>
> + } else {<br>
><br>
><br>
> If we moved type_slots here, I think we could very nicely future-proof<br>
> things as follows:<br>
<br>
</span>I think we cannot move it here, because it is also used when calculating<br>
the offset into val_reg for the 64-bit components (first_component *<br>
type_slots).<br>
<br>
> assert(num_components * type_size < 16);<br>
<br>
Shouldn't it be <=?<br>
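(For example, a 4-component 32-bit write, or a 2-component 64-bit one, is<br>
exactly num_components * type_size == 16 bytes and still fits in a single<br>
untyped write message, so 16 should be allowed.)<br>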
<span class=""><br>
> assert((num_components * type_size) % 4 == 0);<br>
> assert((first_component * type_size) % 4 == 0);<br>
> unsigned first_slot = (first_component * type_size) / 4;<br>
> unsigned num_slots = (num_components * type_size) / 4;<br>
> emit_untyped_write(bld, surf_index, reg_offset,<br>
> offset(val_reg, bld, first_slot),<br>
> 1 /* dims */, num_slots,<br>
> BRW_PREDICATE_NONE);<br>
<br>
<br>
> That said, let's not get ahead of ourselves. This can all be done as a<br>
> later clean-up on top of the optimization patch if that's easier. :)<br>
<br>
</span>We were considering this together with the following optimization in the<br>
past, but at that time we thought it was easier to understand done in two<br>
steps. I'll wait for you to review [PATCH v4 26/44] and try to move as<br>
much as I can into this first patch to avoid rewriting lines.</blockquote><div><br></div><div>That's fine. After reading your later optimization patch, it seems like it lands in a reasonable state. Forget this comment if you'd like.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
> + emit_untyped_write(bld, surf_index, offset_reg,<br>
> + offset(val_reg, bld, first_component * type_slots),<br>
> + 1 /* dims */, length * type_slots,<br>
> + BRW_PREDICATE_NONE);<br>
> + }<br>
><br>
> /* Clear the bits in the writemask that we just wrote, then try<br>
> * again to see if more channels are left.<br>
> --<br>
> 2.14.3<br>
><br>
</span>
><br>
><br>
</blockquote></div><br></div></div>