<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Jul 5, 2018 at 11:03 AM, Francisco Jerez <span dir="ltr"><<a href="mailto:currojerez@riseup.net" target="_blank">currojerez@riseup.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>> writes:<br>
<br>
> On Wed, Jul 4, 2018 at 1:20 PM, Francisco Jerez <<a href="mailto:currojerez@riseup.net">currojerez@riseup.net</a>><br>
> wrote:<br>
><br>
>> Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>> writes:<br>
>><br>
>> > Many fragment shaders do a discard using relatively little information<br>
>> > but still put the discard fairly far down in the shader for no good<br>
>> > reason. If the discard is moved higher up, we can possibly avoid doing<br>
>> > some or almost all of the work in the shader. When this lets us skip<br>
>> > texturing operations, it's an especially high win.<br>
>> ><br>
>> > One of the biggest offenders here is DXVK. The D3D APIs have different<br>
>> > rules for discards than OpenGL and Vulkan. One effective way (which is<br>
>> > what DXVK uses) to implement DX behavior on top of GL or Vulkan is to<br>
>> > wait until the very end of the shader to discard. This ends up in the<br>
>> > pessimal case where we always do all of the work before discarding.<br>
>> > This pass helps some DXVK shaders significantly.<br>
>> ><br>
>><br>
>>> One thing to keep in mind is that this sort of transformation is
>>> trading off run-time of fragment shader invocations that don't call
>>> discard (or do so non-uniformly, which means that the code the discard
>>> jump is protecting will be executed anyway, so doing this can actually
>>> increase the critical path of the program) in favour of invocations
>>> that call discard uniformly (so executing discard early will
>>> effectively terminate the program early).
>>
>> It's not really a uniform vs. non-uniform thing. Even if a shader only
>> discards some of the fragments, it still reduces the number of live
>> channels, which reduces the cost of later non-uniform control-flow.
>>
>
> Which only helps if the shader's control flow is sufficiently
> non-uniform that the additional cost from performing those computations
> early pays off -- or not at all if the discarded fragments need to be
> executed (non-compliantly) anyway in order to provide
> derivatives_safe_after_discard. However, if the discard condition is
> uniform (across a warp), the back-end can most certainly terminate the
> thread early, which gives you the maximum pay-off. Uniform discard
> conditions are therefore the best-case scenario for this optimization
> pass.

Yes, that is correct. Fortunately, things that discard tend to discard
fairly large chunks of the polygon at one time, so this case is fairly
common.
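
As a concrete illustration of the pattern the pass targets (a made-up
alpha-test-style fragment shader, not taken from any of the workloads
mentioned here):

   // As translated from D3D: all of the shading work, including a
   // second texture fetch, happens before the discard at the bottom.
   vec4 albedo = texture(u_albedo, v_uv);
   vec4 detail = texture(u_detail, v_uv * 8.0);
   vec4 color = albedo * detail;   // ... more shading math ...
   if (albedo.a < 0.5)
      discard;
   f_color = color;

The discard condition depends only on the first fetch, so the pass can
hoist the discard above everything else. Warps that fail the alpha test
uniformly then terminate after a single texture fetch, though executing
the second fetch below a discard is only legal when
derivatives_safe_after_discard is set.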
>>
>>> Optimizing for the latter case is an essentially heuristic assumption
>>> that needs to be verified experimentally. Have you tested the effect of
>>> this pass on non-DX workloads extensively?
>>
>> Yes, it is a trade-off. No, I have not done particularly extensive
>> testing. We do, however, know of non-DXVK workloads that would benefit
>> from this. I believe Manhattan is one such example though I have not yet
>> benchmarked it.
>>
>
> You should grab some numbers then to make sure there are no
> regressions...

I'm working on that. Unfortunately the perf system is giving me trouble,
so I don't have the numbers yet.

> But keep in mind that the i965 scheduler is already performing a similar
> optimization (locally, but with cycle-count information). This will only
> help over the existing optimization if the shaders that represent a
> bottleneck in Manhattan have sufficient control flow for the basic block
> boundaries to represent a problem to the (local) scheduler.

I'm not sure about the Manhattan shader, but the Skyrim shader does have
control flow which the discard has to get moved above.
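
Roughly this shape (a made-up sketch, not the actual Skyrim shader):

   vec4 base;
   if (v_fade > 0.5)                  // block boundaries that a local
      base = texture(u_tex0, v_uv);   // scheduler can't move code across
   else
      base = texture(u_tex1, v_uv);

   if (v_fade < 0.01)   // condition depends only on a shader input
      discard;

   f_color = base * v_fade;

nir_opt_discard_if first turns the trailing if into a top-level
discard_if, and since its condition doesn't depend on anything computed
inside the if/else, nir_opt_move_discards_to_top can then hoist the
discard above both texture fetches.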
>>
>>> > v2 (Jason Ekstrand):
>>> >  - Fix a couple of typos (Grazvydas, Ian)
>>> >  - Use the new nir_instr_move helper
>>> >  - Find all movable discards before moving anything so we don't
>>> >    accidentally re-order anything and break dependencies
>>> > ---
>>> >  src/compiler/Makefile.sources      |   1 +
>>> >  src/compiler/nir/meson.build       |   1 +
>>> >  src/compiler/nir/nir.h             |  10 +
>>> >  src/compiler/nir/nir_opt_discard.c | 396 +++++++++++++++++++++++++++++
>>> >  4 files changed, 408 insertions(+)
>>> >  create mode 100644 src/compiler/nir/nir_opt_discard.c
>>> >
>>> > diff --git a/src/compiler/Makefile.sources b/src/compiler/Makefile.sources
>>> > index 9e3fbdc2612..8600ce81281 100644
>>> > --- a/src/compiler/Makefile.sources
>>> > +++ b/src/compiler/Makefile.sources
>>> > @@ -271,6 +271,7 @@ NIR_FILES = \
>>> >     nir/nir_opt_cse.c \
>>> >     nir/nir_opt_dce.c \
>>> >     nir/nir_opt_dead_cf.c \
>>> > +   nir/nir_opt_discard.c \
>>> >     nir/nir_opt_gcm.c \
>>> >     nir/nir_opt_global_to_local.c \
>>> >     nir/nir_opt_if.c \
>>> > diff --git a/src/compiler/nir/meson.build b/src/compiler/nir/meson.build
>>> > index 28aa8de7014..e339258bb94 100644
>>> > --- a/src/compiler/nir/meson.build
>>> > +++ b/src/compiler/nir/meson.build
>>> > @@ -156,6 +156,7 @@ files_libnir = files(
>>> >    'nir_opt_cse.c',
>>> >    'nir_opt_dce.c',
>>> >    'nir_opt_dead_cf.c',
>>> > +  'nir_opt_discard.c',
>>> >    'nir_opt_gcm.c',
>>> >    'nir_opt_global_to_local.c',
>>> >    'nir_opt_if.c',
>>> > diff --git a/src/compiler/nir/nir.h b/src/compiler/nir/nir.h
>>> > index c40a88c8ccc..dac019c17e8 100644
>>> > --- a/src/compiler/nir/nir.h
>>> > +++ b/src/compiler/nir/nir.h
>>> > @@ -2022,6 +2022,13 @@ typedef struct nir_shader_compiler_options {
>>> >     */
>>> >    bool vs_inputs_dual_locations;
>>> >
>>> > +   /**
>>> > +    * Whether or not derivatives are still a safe operation after a
>>> > +    * discard has occurred. Optimization passes may be able to be a bit
>>> > +    * more aggressive if this is true.
>>> > +    */
>>> > +   bool derivatives_safe_after_discard;
>>> > +
>>>
>>> It's worth noting in the comment above that any driver that is in a
>>> position to enable this option (e.g. i965) is, strictly speaking,
>>> non-compliant with GLSL and SPIR-V, whether or not this optimization
>>> pass is used. The reason is that derivatives being safe after a
>>> non-uniform discard implies that any invocations involved in derivative
>>> computations must be executed even though they aren't supposed to
>>> according to the spec, and even though doing so might lead to undefined
>>> behaviour that wasn't present in the original program, e.g.:
>>>
>>> | int delta = non_uniform_computation();
>>> | if (delta == 0)
>>> |    discard;
>>> |
>>> | for (int i = 0; i < N; i += delta) {
>>> |    // Will loop forever if discarded fragments are incorrectly
>>> |    // executed by the back-end.
>>> | }
>>>
>>> The above shader is specified to terminate if the semantics of discard
>>> are as defined by GLSL or SPIR-V, but not necessarily as defined by DX.
>>>
>> That is an interesting point. One possible solution would be to define
>> the NIR discard intrinsic to have the DX behavior and then make the
>> SPIR-V and GLSL to NIR converters emit a discard followed immediately by
>> a return (or possibly a stronger "halt" instruction which could be
>> called inside a helper function and still kill the whole program). A
>> pass such as this could then freely move the discard higher while
>> leaving the control-flow effects in place. In any case, that doesn't
>> really affect the correctness of this pass, just the correctness of our
>> current handling of discard.
>>
>
> Yes, that should work.
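
For the archives, the idea in source terms (a sketch; "discard" here
stands for the re-defined NIR intrinsic with DX semantics, i.e. it only
marks the invocation as dead):

   if (alpha < 0.5) {
      discard;   // DX-style: flags the invocation, no control-flow effect
      return;    // or a stronger "halt"; keeps the GLSL/SPIR-V semantics
   }

A pass like this one is then free to hoist the discard itself to the top
of the shader while the return stays put, so the infinite-loop example
above can never be reached by a discarded invocation.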
<span class=""><br>
><br>
>> This makes me think that DXVK is in a privileged position to decide<br>
>> where the discard jump should end up at, since it can make assumptions<br>
>> about code lexically after a discard being well-defined even if the<br>
>> discard condition evaluates to true. It's unfortunate that it behaves<br>
>> so suboptimally currently that you need to work around it here.<br>
>><br>
><br>
> You can make all sorts of arguments about where the optimal place to put a<br>
> pass is. Really, the "optimal" thing would be for people to hand-write<br>
> their shaders in carefully optimized GEN assembly but that isn't going to<br>
> happen. :-) The reality is that we get lots of garbage from shaders and we<br>
> have to be able to clean it up. This particular bit of garbage clean-up is<br>
> needed for DXVK but it's also needed for other clients so having a pass is<br>
> useful regardless of whether or not the problem could be solved by the<br>
> client providing better shader code.<br>
><br>
<br>
</span>I wasn't implying that it would be more optimal for DXVK to perform this<br>
optimization: It would possibly have the same performance as this<br>
solution, but it wouldn't run the risk of introducing undefined behavior<br>
into the program by attempting to support derivatives after a<br>
non-uniform discard. It would be valid for DXVK to implement the DX<br>
semantics in terms of the SPIRV kill instruction and subgroup operations<br>
in roughly the same way that the i965 back-end implements it, because it<br>
can make assumptions about the code lexically after a discard being<br>
well-defined even for discarded invocations thanks to the D3D spec.<br>
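
Something along those lines, presumably (a sketch assuming
GL_KHR_shader_subgroup_vote, not DXVK's actual code):

   bool dx_discarded = false;

   void dx_discard()
   {
      dx_discarded = true;
      // Once every invocation in the subgroup is flagged, killing them
      // for real can no longer break anyone's derivatives.
      if (subgroupAll(dx_discarded))
         discard;
   }

The translated shader would call dx_discard() instead of discard and
predicate its stores on !dx_discarded, so code lexically after a
non-uniform discard keeps running as D3D requires.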
<div class="HOEnZb"><div class="h5"><br>
><br>
>> > unsigned max_unroll_iterations;<br>
>> > } nir_shader_compiler_options;<br>
>> ><br>
>> > @@ -2901,6 +2908,9 @@ bool nir_opt_dce(nir_shader *shader);<br>
>> ><br>
>> > bool nir_opt_dead_cf(nir_shader *shader);<br>
>> ><br>
>> > +bool nir_opt_discard_if(nir_shader *shader);<br>
>> > +bool nir_opt_move_discards_to_top(<wbr>nir_shader *shader);<br>
>> > +<br>
>> > bool nir_opt_gcm(nir_shader *shader, bool value_number);<br>
>> ><br>
>> > bool nir_opt_if(nir_shader *shader);<br>
>> > diff --git a/src/compiler/nir/nir_opt_<wbr>discard.c<br>
>> b/src/compiler/nir/nir_opt_<wbr>discard.c<br>
>> > new file mode 100644<br>
>> > index 00000000000..c61af163707<br>
>> > --- /dev/null<br>
>> > +++ b/src/compiler/nir/nir_opt_<wbr>discard.c<br>
>> > @@ -0,0 +1,396 @@<br>
>> > +/*<br>
>> > + * Copyright © 2018 Intel Corporation<br>
>> > + *<br>
>> > + * Permission is hereby granted, free of charge, to any person<br>
>> obtaining a<br>
>> > + * copy of this software and associated documentation files (the<br>
>> "Software"),<br>
>> > + * to deal in the Software without restriction, including without<br>
>> limitation<br>
>> > + * the rights to use, copy, modify, merge, publish, distribute,<br>
>> sublicense,<br>
>> > + * and/or sell copies of the Software, and to permit persons to whom the<br>
>> > + * Software is furnished to do so, subject to the following conditions:<br>
>> > + *<br>
>> > + * The above copyright notice and this permission notice (including the<br>
>> next<br>
>> > + * paragraph) shall be included in all copies or substantial portions<br>
>> of the<br>
>> > + * Software.<br>
>> > + *<br>
>> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,<br>
>> EXPRESS OR<br>
>> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF<br>
>> MERCHANTABILITY,<br>
>> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT<br>
>> SHALL<br>
>> > + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR<br>
>> OTHER<br>
>> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,<br>
>> ARISING<br>
>> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER<br>
>> DEALINGS<br>
>> > + * IN THE SOFTWARE.<br>
>> > + */<br>
>> > +<br>
>> > +#include "nir.h"<br>
>> > +#include "nir_builder.h"<br>
>> > +#include "nir_control_flow.h"<br>
>> > +#include "nir_worklist.h"<br>
>> > +<br>
>> > +static bool<br>
>> > +block_has_only_discard(nir_<wbr>block *block)<br>
>> > +{<br>
>> > + nir_instr *instr = nir_block_first_instr(block);<br>
>> > + if (instr == NULL || instr != nir_block_last_instr(block))<br>
>> > + return false;<br>
>> > +<br>
>> > + if (instr->type != nir_instr_type_intrinsic)<br>
>> > + return false;<br>
>> > +<br>
>> > + nir_intrinsic_instr *intrin = nir_instr_as_intrinsic(instr);<br>
>> > + return intrin->intrinsic == nir_intrinsic_discard;<br>
>> > +}<br>
>> > +<br>
>>> > +static bool
>>> > +opt_discard_if_impl(nir_function_impl *impl)
>>> > +{
>>> > +   bool progress = false;
>>> > +
>>> > +   nir_builder b;
>>> > +   nir_builder_init(&b, impl);
>>> > +
>>> > +   nir_foreach_block(block, impl) {
>>> > +      nir_if *nif = nir_block_get_following_if(block);
>>> > +      if (!nif)
>>> > +         continue;
>>> > +
>>> > +      bool discard_in_then;
>>> > +      if (block_has_only_discard(nir_if_first_then_block(nif)))
>>> > +         discard_in_then = true;
>>> > +      else if (block_has_only_discard(nir_if_first_else_block(nif)))
>>> > +         discard_in_then = false;
>>> > +      else
>>> > +         continue;
>>> > +
>>> > +      b.cursor = nir_after_block(block);
>>> > +      nir_ssa_def *cond = nir_ssa_for_src(&b, nif->condition, 1);
>>> > +      if (!discard_in_then)
>>> > +         cond = nir_inot(&b, cond);
>>> > +
>>> > +      nir_intrinsic_instr *discard_if =
>>> > +         nir_intrinsic_instr_create(b.shader, nir_intrinsic_discard_if);
>>> > +      discard_if->src[0] = nir_src_for_ssa(cond);
>>> > +      nir_builder_instr_insert(&b, &discard_if->instr);
>>> > +
>>> > +      nir_lower_phis_to_regs_block(nir_cf_node_as_block(
>>> > +         nir_cf_node_next(&nif->cf_node)));
>>> > +
>>> > +      nir_cf_list list;
>>> > +      if (discard_in_then)
>>> > +         nir_cf_list_extract(&list, &nif->else_list);
>>> > +      else
>>> > +         nir_cf_list_extract(&list, &nif->then_list);
>>> > +      nir_cf_reinsert(&list, nir_after_instr(&discard_if->instr));
>>> > +
>>> > +      nir_cf_node_remove(&nif->cf_node);
>>> > +
>>> > +      progress = true;
>>> > +   }
>>> > +
>>> > +   /* If we modified control-flow, metadata is toast. Also, we may have
>>> > +    * lowered some phis to registers so we need to get back into SSA.
>>> > +    */
>>> > +   if (progress) {
>>> > +      nir_metadata_preserve(impl, 0);
>>> > +      nir_lower_regs_to_ssa_impl(impl);
>>> > +   }
>>> > +
>>> > +   return progress;
>>> > +}
>>> > +
>>> > +bool
>>> > +nir_opt_discard_if(nir_shader *shader)
>>> > +{
>>> > +   assert(shader->info.stage == MESA_SHADER_FRAGMENT);
>>> > +
>>> > +   bool progress = false;
>>> > +
>>> > +   nir_foreach_function(function, shader) {
>>> > +      if (function->impl &&
>>> > +          opt_discard_if_impl(function->impl))
>>> > +         progress = true;
>>> > +   }
>>> > +
>>> > +   return progress;
>>> > +}
>>> > +
>>> > +static bool
>>> > +nir_variable_mode_is_read_only(nir_variable_mode mode)
>>> > +{
>>> > +   return mode == nir_var_shader_in ||
>>> > +          mode == nir_var_uniform ||
>>> > +          mode == nir_var_system_value;
>>> > +}
>>> > +
>>> > +static bool
>>> > +nir_op_is_derivative(nir_op op)
>>> > +{
>>> > +   return op == nir_op_fddx ||
>>> > +          op == nir_op_fddy ||
>>> > +          op == nir_op_fddx_fine ||
>>> > +          op == nir_op_fddy_fine ||
>>> > +          op == nir_op_fddx_coarse ||
>>> > +          op == nir_op_fddy_coarse;
>>> > +}
>>> > +
>>> > +static bool
>>> > +nir_texop_implies_derivative(nir_texop op)
>>> > +{
>>> > +   return op == nir_texop_tex ||
>>> > +          op == nir_texop_txb ||
>>> > +          op == nir_texop_lod;
>>> > +}
>>> > +
>>> > +static bool
>>> > +nir_intrinsic_writes_external_memory(nir_intrinsic_op intrin)
>>> > +{
>>> > +   switch (intrin) {
>>> > +   case nir_intrinsic_store_deref:
>>> > +   case nir_intrinsic_copy_deref:
>>> > +   case nir_intrinsic_deref_atomic_add:
>>> > +   case nir_intrinsic_deref_atomic_imin:
>>> > +   case nir_intrinsic_deref_atomic_umin:
>>> > +   case nir_intrinsic_deref_atomic_imax:
>>> > +   case nir_intrinsic_deref_atomic_umax:
>>> > +   case nir_intrinsic_deref_atomic_and:
>>> > +   case nir_intrinsic_deref_atomic_or:
>>> > +   case nir_intrinsic_deref_atomic_xor:
>>> > +   case nir_intrinsic_deref_atomic_exchange:
>>> > +   case nir_intrinsic_deref_atomic_comp_swap:
>>> > +      /* If we ever start using variables for SSBO ops, we'll need to do
>>> > +       * something here. For now, they're safe.
>>> > +       */
>>> > +      return false;
>>> > +
>>> > +   case nir_intrinsic_store_ssbo:
>>> > +   case nir_intrinsic_ssbo_atomic_add:
>>> > +   case nir_intrinsic_ssbo_atomic_imin:
>>> > +   case nir_intrinsic_ssbo_atomic_umin:
>>> > +   case nir_intrinsic_ssbo_atomic_imax:
>>> > +   case nir_intrinsic_ssbo_atomic_umax:
>>> > +   case nir_intrinsic_ssbo_atomic_and:
>>> > +   case nir_intrinsic_ssbo_atomic_or:
>>> > +   case nir_intrinsic_ssbo_atomic_xor:
>>> > +   case nir_intrinsic_ssbo_atomic_exchange:
>>> > +   case nir_intrinsic_ssbo_atomic_comp_swap:
>>> > +      return true;
>>> > +
>>> > +   case nir_intrinsic_image_deref_store:
>>> > +   case nir_intrinsic_image_deref_atomic_add:
>>> > +   case nir_intrinsic_image_deref_atomic_min:
>>> > +   case nir_intrinsic_image_deref_atomic_max:
>>> > +   case nir_intrinsic_image_deref_atomic_and:
>>> > +   case nir_intrinsic_image_deref_atomic_or:
>>> > +   case nir_intrinsic_image_deref_atomic_xor:
>>> > +   case nir_intrinsic_image_deref_atomic_exchange:
>>> > +   case nir_intrinsic_image_deref_atomic_comp_swap:
>>> > +      return true;
>>> > +
>>> > +   default:
>>> > +      return false;
>>> > +   }
>>> > +}
>>> > +
>>> > +static bool
>>> > +add_src_instr_to_worklist(nir_src *src, void *wl)
>>> > +{
>>> > +   if (!src->is_ssa)
>>> > +      return false;
>>> > +
>>> > +   nir_instr_worklist_push_tail(wl, src->ssa->parent_instr);
>>> > +   return true;
>>> > +}
>>> > +
>>> > +/** Try to mark a discard instruction for moving
>>> > + *
>>> > + * This function does two things. One is that it searches through the
>>> > + * dependency chain to see if this discard is an instruction that we can
>>> > + * move up to the top. Second, if the discard is one we can move, it adds
>>> > + * the discard and its dependencies to discards_and_deps.
>>> > + */
>>> > +static void
>>> > +try_move_discard(nir_intrinsic_instr *discard,
>>> > +                 struct set *discards_and_deps)
>>> > +{
>>> > +   /* We require the discard to be in the top level of control flow. We
>>> > +    * could, in theory, move discards that are inside ifs or loops but that
>>> > +    * would be a lot more work.
>>> > +    */
>>> > +   if (discard->instr.block->cf_node.parent->type != nir_cf_node_function)
>>> > +      return;
>>> > +
>>> > +   /* Build the set of all instructions discard depends on. We'll union this
>>> > +    * one later with discards_and_deps if the discard is movable.
>>> > +    */
>>> > +   struct set *instrs = _mesa_set_create(NULL, _mesa_hash_pointer,
>>> > +                                        _mesa_key_pointer_equal);
>>> > +   nir_instr_worklist *work = nir_instr_worklist_create();
>>> > +
>>> > +   _mesa_set_add(instrs, &discard->instr);
>>> > +   add_src_instr_to_worklist(&discard->src[0], work);
>>> > +
>>> > +   bool can_move_discard = true;
>>> > +   nir_foreach_instr_in_worklist(instr, work) {
>>> > +      /* Don't process an instruction twice */
>>> > +      if (_mesa_set_search(instrs, instr))
>>> > +         continue;
>>> > +
>>> > +      _mesa_set_add(instrs, instr);
>>> > +
>>> > +      /* Phi instructions can't be moved at all. Also, if we're dependent on
>>> > +       * a phi then we are dependent on some other bit of control flow and
>>> > +       * it's hard to figure out the proper condition.
>>> > +       */
>>> > +      if (instr->type == nir_instr_type_phi) {
>>> > +         can_move_discard = false;
>>> > +         break;
>>> > +      }
>>> > +
>>> > +      if (instr->type == nir_instr_type_intrinsic) {
>>> > +         nir_intrinsic_instr *intrin = nir_instr_as_intrinsic(instr);
>>> > +         if (intrin->intrinsic == nir_intrinsic_load_deref) {
>>> > +            nir_deref_instr *deref = nir_src_as_deref(intrin->src[0]);
>>> > +            if (!nir_variable_mode_is_read_only(deref->mode)) {
>>> > +               can_move_discard = false;
>>> > +               break;
>>> > +            }
>>> > +         } else if (!(nir_intrinsic_infos[intrin->intrinsic].flags &
>>> > +                      NIR_INTRINSIC_CAN_REORDER)) {
>>> > +            can_move_discard = false;
>>> > +            break;
>>> > +         }
>>> > +      }
>>> > +
>>> > +      if (!nir_foreach_src(instr, add_src_instr_to_worklist, work)) {
>>> > +         can_move_discard = false;
>>> > +         break;
>>> > +      }
>>> > +   }
>>> > +
>>> > +   if (can_move_discard) {
>>> > +      struct set_entry *entry;
>>> > +      set_foreach(instrs, entry)
>>> > +         _mesa_set_add(discards_and_deps, entry->key);
>>> > +   }
>>> > +
>>> > +   nir_instr_worklist_destroy(work);
>>> > +   _mesa_set_destroy(instrs, NULL);
>>> > +}
>>> > +
>>> > +static bool
>>> > +opt_move_discards_to_top_impl(nir_function_impl *impl)
>>> > +{
>>> > +   const nir_shader_compiler_options *options =
>>> > +      impl->function->shader->options;
>>> > +
>>> > +   /* This optimization only operates on discard_if. Run the discard_if
>>> > +    * optimization (it's very cheap if it doesn't make progress) so that we
>>> > +    * have some hope of move_discards_to_top making progress.
>>> > +    */
>>> > +   bool progress = opt_discard_if_impl(impl);
>>> > +
>>> > +   struct set *move_instrs = _mesa_set_create(NULL, _mesa_hash_pointer,
>>> > +                                            _mesa_key_pointer_equal);
>>> > +
>>> > +   /* Walk through the instructions and look for a discard that we can move
>>> > +    * to the top of the program. If we hit any operation along the way that
>>> > +    * we cannot safely move a discard above, break out of the loop and stop
>>> > +    * trying to move any more discards.
>>> > +    */
>>> > +   nir_foreach_block(block, impl) {
>>> > +      nir_foreach_instr_safe(instr, block) {
>>> > +         switch (instr->type) {
>>> > +         case nir_instr_type_alu: {
>>> > +            nir_alu_instr *alu = nir_instr_as_alu(instr);
>>> > +            if (nir_op_is_derivative(alu->op) &&
>>> > +                !options->derivatives_safe_after_discard)
>>> > +               goto break_all;
>>> > +            continue;
>>> > +         }
>>> > +
>>> > +         case nir_instr_type_deref:
>>> > +         case nir_instr_type_load_const:
>>> > +         case nir_instr_type_ssa_undef:
>>> > +         case nir_instr_type_phi:
>>> > +            /* These are all safe */
>>> > +            continue;
>>> > +
>>> > +         case nir_instr_type_call:
>>> > +            /* We don't know what the function will do */
>>> > +            goto break_all;
>>> > +
>>> > +         case nir_instr_type_tex: {
>>> > +            nir_tex_instr *tex = nir_instr_as_tex(instr);
>>> > +            if (nir_texop_implies_derivative(tex->op) &&
>>> > +                !options->derivatives_safe_after_discard)
>>> > +               goto break_all;
>>> > +            continue;
>>> > +         }
>>> > +
>>> > +         case nir_instr_type_intrinsic: {
>>> > +            nir_intrinsic_instr *intrin = nir_instr_as_intrinsic(instr);
>>> > +            if (nir_intrinsic_writes_external_memory(intrin->intrinsic))
>>> > +               goto break_all;
>>> > +
>>> > +            if (intrin->intrinsic == nir_intrinsic_discard_if)
>>> > +               try_move_discard(intrin, move_instrs);
>>> > +            continue;
>>> > +         }
>>> > +
>>> > +         case nir_instr_type_jump: {
>>> > +            nir_jump_instr *jump = nir_instr_as_jump(instr);
>>> > +            /* A return would cause the discard to not get executed */
>>> > +            if (jump->type == nir_jump_return)
>>> > +               goto break_all;
>>> > +            continue;
>>> > +         }
>>> > +
>>> > +         case nir_instr_type_parallel_copy:
>>> > +            unreachable("Unhandled instruction type");
>>> > +         }
>>> > +      }
>>> > +   }
>>> > +break_all:
>>> > +
>>> > +   if (move_instrs->entries) {
>>> > +      /* Walk the list of instructions and move the discard and everything
>>> > +       * it depends on to the top. We walk the instruction list here because
>>> > +       * it ensures that everything stays in its original order. This
>>> > +       * provides stability for the algorithm and ensures that we don't
>>> > +       * accidentally get dependencies out-of-order.
>>> > +       */
>>> > +      nir_cursor cursor = nir_before_block(nir_start_block(impl));
>>> > +      nir_foreach_block(block, impl) {
>>> > +         nir_foreach_instr_safe(instr, block) {
>>> > +            if (_mesa_set_search(move_instrs, instr)) {
>>> > +               nir_instr_move(cursor, instr);
>>> > +               cursor = nir_after_instr(instr);
>>> > +            }
>>> > +         }
>>> > +      }
>>> > +      progress = true;
>>> > +   }
>>> > +
>>> > +   _mesa_set_destroy(move_instrs, NULL);
>>> > +
>>> > +   if (progress) {
>>> > +      nir_metadata_preserve(impl, nir_metadata_block_index |
>>> > +                                  nir_metadata_dominance);
>>> > +   }
>>> > +
>>> > +   return progress;
>>> > +}
>>> > +
>>> > +bool
>>> > +nir_opt_move_discards_to_top(nir_shader *shader)
>>> > +{
>>> > +   assert(shader->info.stage == MESA_SHADER_FRAGMENT);
>>> > +
>>> > +   bool progress = false;
>>> > +
>>> > +   nir_foreach_function(function, shader) {
>>> > +      if (function->impl &&
>>> > +          opt_move_discards_to_top_impl(function->impl))
>>> > +         progress = true;
>>> > +   }
>>> > +
>>> > +   return progress;
>>> > +}
>>> > --
>>> > 2.17.1
>>> >
>>> > _______________________________________________
>>> > mesa-dev mailing list
>>> > mesa-dev@lists.freedesktop.org
>>> > https://lists.freedesktop.org/mailman/listinfo/mesa-dev
>>>