<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 17, 2014 at 8:42 PM, Connor Abbott <span dir="ltr"><<a href="mailto:cwabbott0@gmail.com" target="_blank">cwabbott0@gmail.com</a>></span> wrote:<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5">On Wed, Dec 17, 2014 at 10:50 PM, Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>> wrote:<br>
><br>
><br>
> On Wed, Dec 17, 2014 at 7:13 PM, Connor Abbott <<a href="mailto:cwabbott0@gmail.com">cwabbott0@gmail.com</a>> wrote:<br>
>><br>
>> On Tue, Dec 16, 2014 at 1:11 AM, Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>><br>
>> wrote:<br>
>> > This pass analyzes all of the load/store operations and, when a variable<br>
>> > is<br>
>> > never aliased (potentially used by an indirect operation), it is lowered<br>
>> > directly to an SSA value. This pass translates to SSA directly and does<br>
>> > not require any fixup by the original to-SSA pass.<br>
>> > ---<br>
>> > src/glsl/Makefile.sources | 1 +<br>
>> > src/glsl/nir/nir.h | 2 +<br>
>> > src/glsl/nir/nir_lower_variables.c | 1071<br>
>> > ++++++++++++++++++++++++++++++++++++<br>
>> > 3 files changed, 1074 insertions(+)<br>
>> > create mode 100644 src/glsl/nir/nir_lower_variables.c<br>
>> ><br>
>> > diff --git a/src/glsl/Makefile.sources b/src/glsl/Makefile.sources<br>
>> > index 84245bc..1d3b049 100644<br>
>> > --- a/src/glsl/Makefile.sources<br>
>> > +++ b/src/glsl/Makefile.sources<br>
>> > @@ -24,6 +24,7 @@ NIR_FILES = \<br>
>> > $(GLSL_SRCDIR)/nir/nir_lower_atomics.c \<br>
>> > $(GLSL_SRCDIR)/nir/nir_lower_samplers.cpp \<br>
>> > $(GLSL_SRCDIR)/nir/nir_lower_system_values.c \<br>
>> > + $(GLSL_SRCDIR)/nir/nir_lower_variables.c \<br>
>> > $(GLSL_SRCDIR)/nir/nir_lower_variables_scalar.c \<br>
>> > $(GLSL_SRCDIR)/nir/nir_lower_vec_to_movs.c \<br>
>> > $(GLSL_SRCDIR)/nir/nir_metadata.c \<br>
>> > diff --git a/src/glsl/nir/nir.h b/src/glsl/nir/nir.h<br>
>> > index 86cda07..b3abfb9 100644<br>
>> > --- a/src/glsl/nir/nir.h<br>
>> > +++ b/src/glsl/nir/nir.h<br>
>> > @@ -1358,6 +1358,8 @@ void nir_dump_cfg(nir_shader *shader, FILE *fp);<br>
>> ><br>
>> > void nir_split_var_copies(nir_shader *shader);<br>
>> ><br>
>> > +void nir_lower_variables(nir_shader *shader);<br>
>> > +<br>
>> > void nir_lower_variables_scalar(nir_shader *shader, bool lower_globals,<br>
>> > bool lower_io, bool add_names,<br>
>> > bool native_integers);<br>
>> > diff --git a/src/glsl/nir/nir_lower_variables.c<br>
>> > b/src/glsl/nir/nir_lower_variables.c<br>
>> > new file mode 100644<br>
>> > index 0000000..052b021<br>
>> > --- /dev/null<br>
>> > +++ b/src/glsl/nir/nir_lower_variables.c<br>
>> > @@ -0,0 +1,1071 @@<br>
>> > +/*<br>
>> > + * Copyright © 2014 Intel Corporation<br>
>> > + *<br>
>> > + * Permission is hereby granted, free of charge, to any person<br>
>> > obtaining a<br>
>> > + * copy of this software and associated documentation files (the<br>
>> > "Software"),<br>
>> > + * to deal in the Software without restriction, including without<br>
>> > limitation<br>
>> > + * the rights to use, copy, modify, merge, publish, distribute,<br>
>> > sublicense,<br>
>> > + * and/or sell copies of the Software, and to permit persons to whom<br>
>> > the<br>
>> > + * Software is furnished to do so, subject to the following conditions:<br>
>> > + *<br>
>> > + * The above copyright notice and this permission notice (including the<br>
>> > next<br>
>> > + * paragraph) shall be included in all copies or substantial portions<br>
>> > of the<br>
>> > + * Software.<br>
>> > + *<br>
>> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,<br>
>> > EXPRESS OR<br>
>> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF<br>
>> > MERCHANTABILITY,<br>
>> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT<br>
>> > SHALL<br>
>> > + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR<br>
>> > OTHER<br>
>> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,<br>
>> > ARISING<br>
>> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER<br>
>> > DEALINGS<br>
>> > + * IN THE SOFTWARE.<br>
>> > + *<br>
>> > + * Authors:<br>
>> > + * Jason Ekstrand (<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>)<br>
>> > + *<br>
>> > + */<br>
>> > +<br>
>> > +#include "nir.h"<br>
>> > +<br>
>> > +struct deref_node {<br>
>> > + struct deref_node *parent;<br>
>> > + const struct glsl_type *type;<br>
>> > +<br>
>> > + bool lower_to_ssa;<br>
>> > +<br>
>> > + struct set *loads;<br>
>> > + struct set *stores;<br>
>> > + struct set *copies;<br>
>> > +<br>
>> > + nir_ssa_def **def_stack;<br>
>> > + nir_ssa_def **def_stack_tail;<br>
>> > +<br>
>> > + struct deref_node *wildcard;<br>
>> > + struct deref_node *indirect;<br>
>> > + struct deref_node *children[0];<br>
>> > +};<br>
>><br>
>> I think this should be a typedef'd struct, that's the way everything<br>
>> else is in NIR, including all the other pass-specific structures (all<br>
>> bikesheds aside).<br>
><br>
><br>
> Sure, and I think I certainly will if it gets moved to a header file. In<br>
> general, I haven't been typedef'ing pass-specific stuff. Maybe you were, but<br>
> I haven't.<br>
><br>
>><br>
>><br>
>> > +<br>
>> > +struct lower_variables_state {<br>
>> > + void *mem_ctx;<br>
>> > + void *dead_ctx;<br>
>> > + nir_function_impl *impl;<br>
>> > +<br>
>> > + /* A hash table mapping variables to deref_node data */<br>
>> > + struct hash_table *deref_var_nodes;<br>
>> > + /* A hash table mapping dereference leaves to deref_node data */<br>
>> > + struct hash_table *deref_leaves;<br>
>> > +<br>
>> > + /* A hash table mapping phi nodes to deref_state data */<br>
>> > + struct hash_table *phi_table;<br>
>> > +};<br>
>> > +<br>
>> > +/* The following two functions implement a hash and equality check for<br>
>> > + * variable dereferences. When the hash or equality function encounters<br>
>> > an<br>
>> > + * array, all indirects are treated as equal and are never equal to a<br>
>> > + * direct dereference or a wildcard.<br>
>> > + */<br>
>> > +static uint32_t<br>
>> > +hash_deref(const void *void_deref)<br>
>> > +{<br>
>> > + const nir_deref *deref = void_deref;<br>
>> > +<br>
>> > + uint32_t hash;<br>
>> > + if (deref->child) {<br>
>> > + hash = hash_deref(deref->child);<br>
>> > + } else {<br>
>> > + hash = 2166136261ul;<br>
>> > + }<br>
>> > +<br>
>> > + switch (deref->deref_type) {<br>
>> > + case nir_deref_type_var:<br>
>> > + hash ^= _mesa_hash_pointer(nir_deref_as_var(deref)->var);<br>
>> > + break;<br>
>> > + case nir_deref_type_array: {<br>
>> > + nir_deref_array *array = nir_deref_as_array(deref);<br>
>> > + hash += 268435183 * array->deref_array_type;<br>
>> > + if (array->deref_array_type == nir_deref_array_type_direct)<br>
>> > + hash ^= array->base_offset; /* Some prime */<br>
>> > + break;<br>
>> > + }<br>
>> > + case nir_deref_type_struct:<br>
>> > + hash ^= nir_deref_as_struct(deref)->index;<br>
>> > + break;<br>
>> > + }<br>
>> > +<br>
>> > + return hash * 0x01000193;<br>
>> > +}<br>
>><br>
>> I seem to recall you saying these magic numbers aren't really<br>
>> necessary. If they are, can we document why you chose them and what<br>
>> they're doing?<br>
><br>
><br>
> They're some primes I grabbed off the internet. Hashing is magic. I don't<br>
> claim that this hash is good. Just that it's a hash.<br>
><br>
>><br>
>><br>
>> > +<br>
>> > +static bool<br>
>> > +derefs_equal(const void *void_a, const void *void_b)<br>
>> > +{<br>
>> > + const nir_deref *a = void_a;<br>
>> > + const nir_deref *b = void_b;<br>
>> > +<br>
>> > + if (a->deref_type != b->deref_type)<br>
>> > + return false;<br>
>> > +<br>
>> > + switch (a->deref_type) {<br>
>> > + case nir_deref_type_var:<br>
>> > + if (nir_deref_as_var(a)->var != nir_deref_as_var(b)->var)<br>
>> > + return false;<br>
>> > + break;<br>
>> > + case nir_deref_type_array: {<br>
>> > + nir_deref_array *a_arr = nir_deref_as_array(a);<br>
>> > + nir_deref_array *b_arr = nir_deref_as_array(b);<br>
>> > +<br>
>> > + if (a_arr->deref_array_type != b_arr->deref_array_type)<br>
>> > + return false;<br>
>> > +<br>
>> > + if (a_arr->deref_array_type == nir_deref_array_type_direct &&<br>
>> > + a_arr->base_offset != b_arr->base_offset)<br>
>> > + return false;<br>
>> > + break;<br>
>> > + }<br>
>> > + case nir_deref_type_struct:<br>
>> > + if (nir_deref_as_struct(a)->index !=<br>
>> > nir_deref_as_struct(b)->index)<br>
>> > + return false;<br>
>> > + break;<br>
>> > + default:<br>
>> > + unreachable("Invalid dreference type");<br>
>> > + }<br>
>> > +<br>
>> > + assert((a->child == NULL) == (b->child == NULL));<br>
>> > + if (a->child)<br>
>> > + return derefs_equal(a->child, b->child);<br>
>> > + else<br>
>> > + return true;<br>
>> > +}<br>
>> > +<br>
>> > +static int<br>
>> > +type_get_length(const struct glsl_type *type)<br>
>> > +{<br>
>> > + switch (glsl_get_base_type(type)) {<br>
>> > + case GLSL_TYPE_STRUCT:<br>
>> > + case GLSL_TYPE_ARRAY:<br>
>> > + return glsl_get_length(type);<br>
>> > + case GLSL_TYPE_FLOAT:<br>
>> > + case GLSL_TYPE_INT:<br>
>> > + case GLSL_TYPE_UINT:<br>
>> > + case GLSL_TYPE_BOOL:<br>
>> > + if (glsl_type_is_matrix(type))<br>
>> > + return glsl_get_matrix_columns(type);<br>
>> > + else<br>
>> > + return glsl_get_vector_elements(type);<br>
>> > + default:<br>
>> > + unreachable("Invalid deref base type");<br>
>> > + }<br>
>> > +}<br>
>> > +<br>
>> > +static struct deref_node *<br>
>> > +deref_node_create(struct deref_node *parent,<br>
>> > + const struct glsl_type *type, void *mem_ctx)<br>
>> > +{<br>
>> > + size_t size = sizeof(struct deref_node) +<br>
>> > + type_get_length(type) * sizeof(struct deref_node *);<br>
>> > +<br>
>> > + struct deref_node *node = rzalloc_size(mem_ctx, size);<br>
>> > + node->type = type;<br>
>> > + node->parent = parent;<br>
>> > +<br>
>> > + return node;<br>
>> > +}<br>
>> > +<br>
>> > +static struct deref_node *<br>
>> > +get_deref_node(nir_deref_var *deref, bool add_to_leaves,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + bool is_leaf = true;<br>
>> > + struct deref_node *parent = NULL;<br>
>> > + nir_deref *tail = &deref->deref;<br>
>> > + while (tail) {<br>
>> > + struct deref_node *node;<br>
>> > +<br>
>> > + switch (tail->deref_type) {<br>
>> > + case nir_deref_type_var: {<br>
>> > + assert(tail == &deref->deref);<br>
>> > + assert(parent == NULL);<br>
>> > +<br>
>> > + uint32_t var_hash = _mesa_hash_pointer(deref->var);<br>
>> > + struct hash_entry *entry =<br>
>> > + _mesa_hash_table_search(state->deref_var_nodes,<br>
>> > + var_hash, deref->var);<br>
>> > + if (entry) {<br>
>> > + node = entry->data;<br>
>> > + } else {<br>
>> > + node = deref_node_create(NULL, tail->type,<br>
>> > state->dead_ctx);<br>
>> > + _mesa_hash_table_insert(state->deref_var_nodes,<br>
>> > + var_hash, deref->var, node);<br>
>> > + }<br>
>> > + break;<br>
>> > + }<br>
>> > +<br>
>> > + case nir_deref_type_struct: {<br>
>> > + assert(parent != NULL);<br>
>> > +<br>
>> > + nir_deref_struct *deref_struct = nir_deref_as_struct(tail);<br>
>> > + assert(deref_struct->index < type_get_length(parent->type));<br>
>> > + if (parent->children[deref_struct->index]) {<br>
>> > + node = parent->children[deref_struct->index];<br>
>> > + } else {<br>
>> > + node = deref_node_create(parent, tail->type,<br>
>> > state->dead_ctx);<br>
>> > + parent->children[deref_struct->index] = node;<br>
>> > + }<br>
>> > + break;<br>
>> > + }<br>
>> > +<br>
>> > + case nir_deref_type_array: {<br>
>> > + assert(parent != NULL);<br>
>> > +<br>
>> > + nir_deref_array *arr = nir_deref_as_array(tail);<br>
>> > + switch (arr->deref_array_type) {<br>
>> > + case nir_deref_array_type_direct:<br>
>> > + if (arr->base_offset >= type_get_length(parent->type)) {<br>
>> > + /* This is possible if a loop unrolls and generates an<br>
>> > + * out-of-bounds offset. We need to handle this at<br>
>> > least<br>
>> > + * somewhat gracefully.<br>
>> > + */<br>
>> > + return NULL;<br>
>> > + } else if (parent->children[arr->base_offset]) {<br>
>> > + node = parent->children[arr->base_offset];<br>
>> > + } else {<br>
>> > + node = deref_node_create(parent, tail->type,<br>
>> > state->dead_ctx);<br>
>> > + parent->children[arr->base_offset] = node;<br>
>> > + }<br>
>> > + break;<br>
>> > + case nir_deref_array_type_indirect:<br>
>> > + if (parent->indirect) {<br>
>> > + node = parent->indirect;<br>
>> > + } else {<br>
>> > + node = deref_node_create(parent, tail->type,<br>
>> > state->dead_ctx);<br>
>> > + parent->indirect = node;<br>
>> > + }<br>
>> > + is_leaf = false;<br>
>><br>
>> Maybe I'm not understanding something, but I don't think how you're<br>
>> setting is_leaf makes sense. For example, if I have<br>
>><br>
>> vec4 foo[10];<br>
>><br>
>> and I have a dereference like foo[i], then that sounds like it should<br>
>> be a leaf dereference (since it's not dereferencing a composite type<br>
>> like a struct or array). But here you'd set is_leaf to false in that<br>
>> case. I also see nothing that would set is_leaf to false if I just<br>
>> dereference foo by itself. On second thought, by that definition all<br>
>> of our derefs should be leaf dereferences, so maybe is_leaf is<br>
>> supposed to mean something else?<br>
<br>
</div></div>You didn't respond here. Can you explain what is_leaf is supposed to be doing?<br></blockquote><div><br></div><div>Looking at it again, it is, as you said below, "entirely direct". is_leaf is a bad name. I should change that and comment it better.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
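</blockquote><div><br></div><div>Roughly, the property it's actually tracking is this (sketch only, not in the patch; deref_is_entirely_direct() is just a placeholder name):<br></div><pre>
/* Hedged sketch: the flag currently called "is_leaf" records whether the
 * chain contains only direct derefs, i.e. no indirect or wildcard array
 * derefs anywhere in it.
 */
static bool
deref_is_entirely_direct(nir_deref *deref)
{
   for (nir_deref *tail = deref; tail != NULL; tail = tail->child) {
      if (tail->deref_type != nir_deref_type_array)
         continue;

      nir_deref_array *arr = nir_deref_as_array(tail);
      if (arr->deref_array_type != nir_deref_array_type_direct)
         return false;
   }

   return true;
}
</pre><div>So, in your example, foo[3] would qualify while foo[i] (indirect) and foo[*] (wildcard) would not.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">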
<div><div class="h5"><br>
>><br>
>> > + break;<br>
>> > + case nir_deref_array_type_wildcard:<br>
>> > + if (parent->wildcard) {<br>
>> > + node = parent->wildcard;<br>
>> > + } else {<br>
>> > + node = deref_node_create(parent, tail->type,<br>
>> > state->dead_ctx);<br>
>> > + parent->wildcard = node;<br>
>> > + }<br>
>> > + is_leaf = false;<br>
>> > + break;<br>
>> > + default:<br>
>> > + unreachable("Invalid array deref type");<br>
>> > + }<br>
>> > + break;<br>
>> > + }<br>
>> > + default:<br>
>> > + unreachable("Invalid deref type");<br>
>> > + }<br>
>> > +<br>
>> > + parent = node;<br>
>> > + tail = tail->child;<br>
>> > + }<br>
>> > +<br>
>> > + assert(parent);<br>
>> > +<br>
>> > + if (is_leaf && add_to_leaves)<br>
>> > + _mesa_hash_table_insert(state->deref_leaves,<br>
>> > + hash_deref(deref), deref, parent);<br>
>> > +<br>
>> > + return parent;<br>
>> > +}<br>
>> > +<br>
>> > +static void<br>
>> > +register_load_instr(nir_intrinsic_instr *load_instr, bool create_node,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + struct deref_node *node = get_deref_node(load_instr->variables[0],<br>
>> > + create_node, state);<br>
>> > + if (node == NULL)<br>
>> > + return;<br>
>> > +<br>
>> > + if (node->loads == NULL)<br>
>> > + node->loads = _mesa_set_create(state->dead_ctx,<br>
>> > + _mesa_key_pointer_equal);<br>
>> > +<br>
>> > + _mesa_set_add(node->loads, _mesa_hash_pointer(load_instr),<br>
>> > load_instr);<br>
>> > +}<br>
>> > +<br>
>> > +static void<br>
>> > +register_store_instr(nir_intrinsic_instr *store_instr, bool<br>
>> > create_node,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + struct deref_node *node = get_deref_node(store_instr->variables[0],<br>
>> > + create_node, state);<br>
>> > + if (node == NULL)<br>
>> > + return;<br>
>> > +<br>
>> > + if (node->stores == NULL)<br>
>> > + node->stores = _mesa_set_create(state->dead_ctx,<br>
>> > + _mesa_key_pointer_equal);<br>
>> > +<br>
>> > + _mesa_set_add(node->stores, _mesa_hash_pointer(store_instr),<br>
>> > store_instr);<br>
>> > +}<br>
>> > +<br>
>> > +static void<br>
>> > +register_copy_instr(nir_intrinsic_instr *copy_instr, bool create_node,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + for (unsigned idx = 0; idx < 2; idx++) {<br>
>> > + struct deref_node *node =<br>
>> > get_deref_node(copy_instr->variables[idx],<br>
>> > + create_node, state);<br>
>> > + if (node == NULL)<br>
>> > + continue;<br>
>> > +<br>
>> > + if (node->copies == NULL)<br>
>> > + node->copies = _mesa_set_create(state->dead_ctx,<br>
>> > + _mesa_key_pointer_equal);<br>
>> > +<br>
>> > + _mesa_set_add(node->copies, _mesa_hash_pointer(copy_instr),<br>
>> > copy_instr);<br>
>> > + }<br>
>> > +}<br>
>> > +<br>
>> > +static bool<br>
>> > +foreach_deref_node_worker(struct deref_node *node, nir_deref *deref,<br>
>> > + bool (* cb)(struct deref_node *node,<br>
>> > + struct lower_variables_state<br>
>> > *state),<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + if (deref->child == NULL) {<br>
>> > + return cb(node, state);<br>
>> > + } else {<br>
>> > + switch (deref->child->deref_type) {<br>
>> > + case nir_deref_type_array: {<br>
>> > + nir_deref_array *arr = nir_deref_as_array(deref->child);<br>
>> > + assert(arr->deref_array_type == nir_deref_array_type_direct);<br>
>> > + if (node->children[arr->base_offset] &&<br>
>> > +<br>
>> > !foreach_deref_node_worker(node->children[arr->base_offset],<br>
>> > + deref->child, cb, state))<br>
>> > + return false;<br>
>> > +<br>
>> > + if (node->wildcard &&<br>
>> > + !foreach_deref_node_worker(node->wildcard,<br>
>> > + deref->child, cb, state))<br>
>> > + return false;<br>
>> > +<br>
>> > + return true;<br>
>> > + }<br>
>> > +<br>
>> > + case nir_deref_type_struct: {<br>
>> > + nir_deref_struct *str = nir_deref_as_struct(deref->child);<br>
>> > + return foreach_deref_node_worker(node->children[str->index],<br>
>> > + deref->child, cb, state);<br>
>> > + }<br>
>> > +<br>
>> > + default:<br>
>> > + unreachable("Invalid deref child type");<br>
>> > + }<br>
>> > + }<br>
>> > +}<br>
>> > +<br>
>> > +static bool<br>
>> > +foreach_deref_node_match(nir_deref_var *deref,<br>
>> > + bool (* cb)(struct deref_node *node,<br>
>> > + struct lower_variables_state<br>
>> > *state),<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + nir_deref_var var_deref = *deref;<br>
>> > + var_deref.deref.child = NULL;<br>
>> > + struct deref_node *node = get_deref_node(&var_deref, false, state);<br>
>> > +<br>
>> > + if (node == NULL)<br>
>> > + return false;<br>
>> > +<br>
>> > + return foreach_deref_node_worker(node, &deref->deref, cb, state);<br>
>> > +}<br>
>><br>
>> Hmm, it seems we never use this function... and even if we did,<br>
>> get_deref_node() seems to provide basically the same functionality.<br>
>> Remove them?<br>
><br>
><br>
> We do use it.<br>
<br>
</div></div>Where? And why can't you use get_deref_node() instead? If this fn is<br>
necessary, we should still get rid of the recursion and calling a<br>
callback by just passing back the deref_node * value and returning<br>
NULL instead of false if it doesn't exist. Also, foreach_* is a pretty<br>
bad name since its job seems to be to return something, i.e. the<br>
deref_node pointer.<br></blockquote><div><br></div><div>Down below. I pointed it out. What this function does is not entirely obvious. Given a fully-qualified deref, it walks the deref_node tree and calls the callback on every node that matches. For example, a[1].bar[3].baz will match all of the following:<br><br>a[1].bar[3].baz<br>a[1].bar[*].baz<br>a[*].bar[3].baz<br>a[*].bar[*].baz<br><br></div><div>We use this to lower all of the copy_var intrinsics touching those derefs to actual load/store intrinsics.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
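</blockquote><div><br></div><div>For concreteness, this is the call near the bottom of the patch that relies on that matching behaviour (the comment is mine, added here for illustration):<br></div><pre>
/* Once a leaf deref has been picked for SSA lowering, rewrite every
 * copy_var that could touch it through any mix of wildcards into
 * load/store pairs.  For deref == a[1].bar[3].baz this visits the nodes
 * for a[1].bar[3].baz, a[1].bar[*].baz, a[*].bar[3].baz and
 * a[*].bar[*].baz, calling lower_copies_to_load_store on each one that
 * exists.
 */
foreach_deref_node_match(deref, lower_copies_to_load_store, &state);
</pre><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">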
<span class=""><br>
><br>
>><br>
>><br>
>> > +<br>
>> > +/* This question can only be asked about leaves. Searching down the<br>
>> > tree<br>
>> > + * is much harder than searching up.<br>
>> > + */<br>
>> > +static bool<br>
>> > +deref_may_be_aliased_node(struct deref_node *node, nir_deref *deref,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + if (deref->child == NULL) {<br>
>> > + return false;<br>
>> > + } else {<br>
>> > + switch (deref->child->deref_type) {<br>
>> > + case nir_deref_type_array: {<br>
>> > + nir_deref_array *arr = nir_deref_as_array(deref->child);<br>
>> > + if (arr->deref_array_type == nir_deref_array_type_indirect)<br>
>> > + return true;<br>
>> > +<br>
>> > + assert(arr->deref_array_type == nir_deref_array_type_direct);<br>
>><br>
>> Why is this assert here? You're handling the wildcard case a few lines<br>
>> down.<br>
><br>
><br>
> The assert is here because you can only ask the "is this aliased" question<br>
> (at the moment) about an entirely direct reference. The stuff below is for<br>
> if there is a wildcard reference of this somewhere.<br>
<br>
</span>Ok, so by "leaf" you mean "entirely direct reference"? And why,<br>
exactly, can't you answer this question? I thought we were using this<br>
to figure out if we could lower the variables to SSA, and the rule<br>
there is just "if there are no indirect dereferences, then we're<br>
fine"?<br></blockquote><div><br></div><div>Yes, but in order to ask the question "are there any indirects to this" we need to know what we're talking about. It's very clear what "are there any indirects to a[1].foo[3].bar?" but "are there any indirects to a[*].foo[4].bar?" is a much harder question and it's not at all clear that it's even useful to ask that question.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div><div class="h5"><br>
><br>
>><br>
>><br>
>> > +<br>
>> > + if (node->children[arr->base_offset] &&<br>
>> > +<br>
>> > deref_may_be_aliased_node(node->children[arr->base_offset],<br>
>> > + deref->child, state))<br>
>> > + return true;<br>
>> > +<br>
>> > + if (node->wildcard &&<br>
>> > + deref_may_be_aliased_node(node->wildcard, deref->child,<br>
>> > state))<br>
>> > + return true;<br>
>> > +<br>
>> > + return false;<br>
>> > + }<br>
>> > +<br>
>> > + case nir_deref_type_struct: {<br>
>> > + nir_deref_struct *str = nir_deref_as_struct(deref->child);<br>
>> > + if (node->children[str->index]) {<br>
>> > + return<br>
>> > deref_may_be_aliased_node(node->children[str->index],<br>
>> > + deref->child, state);<br>
>> > + } else {<br>
>> > + return false;<br>
>> > + }<br>
>> > + }<br>
>> > +<br>
>> > + default:<br>
>> > + unreachable("Invalid nir_deref child type");<br>
>> > + }<br>
>> > + }<br>
>> > +}<br>
>> > +<br>
>> > +static bool<br>
>> > +deref_may_be_aliased(nir_deref_var *deref,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + nir_deref_var var_deref = *deref;<br>
>> > + var_deref.deref.child = NULL;<br>
>> > + struct deref_node *node = get_deref_node(&var_deref, false, state);<br>
>><br>
>> Instead of doing all these shenanigans, can we just look up the hash<br>
>> table itself? You're only using a very small part of get_deref_node()<br>
>> here.<br>
><br>
><br>
> Nope. Every time we hit an array reference, a wildcard may be floating<br>
> around somewhere and, if it is, we have to take two paths: One for the<br>
> direct reference and one for the wildcard.<br>
<br>
</div></div>Wait, I'm confused... looking at get_deref_node, it's always going to<br>
hit the deref_type_var case then immediately return the entry it found<br>
or created. I wasn't talking about deref_may_be_aliased_node(), just<br>
the 3 lines above. What I mean is that we can pull out the "searching<br>
or creating the root of the deref_node tree" part of get_deref_node()<br>
into a separate function and then just use that here.</blockquote><div><br></div><div>Right... I missed what you were saying. Sure, we could just look up the root.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
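</blockquote><div><br></div><div>Something like this, as a sketch of that factoring (get_deref_node_for_var() is a made-up name; the body just mirrors the nir_deref_type_var case of get_deref_node()):<br></div><pre>
/* Hedged sketch of the suggested refactor, not part of this patch: pull
 * the "find or create the root deref_node for a variable" step out of
 * get_deref_node() so callers that only need the root, like
 * deref_may_be_aliased(), can call it directly.
 */
static struct deref_node *
get_deref_node_for_var(nir_variable *var,
                       struct lower_variables_state *state)
{
   uint32_t var_hash = _mesa_hash_pointer(var);
   struct hash_entry *entry =
      _mesa_hash_table_search(state->deref_var_nodes, var_hash, var);
   if (entry)
      return entry->data;

   struct deref_node *node =
      deref_node_create(NULL, var->type, state->dead_ctx);
   _mesa_hash_table_insert(state->deref_var_nodes, var_hash, var, node);
   return node;
}
</pre><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">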
<div><div class="h5"><br>
><br>
>><br>
>><br>
>> > +<br>
>> > + /* An invalid dereference can't be aliased. */<br>
>> > + if (node == NULL)<br>
>> > + return false;<br>
>> > +<br>
>> > + return deref_may_be_aliased_node(node, &deref->deref, state);<br>
>> > +}<br>
>> > +<br>
>> > +static bool<br>
>> > +fill_deref_tables_block(nir_block *block, void *void_state)<br>
>> > +{<br>
>> > + struct lower_variables_state *state = void_state;<br>
>> > +<br>
>> > + nir_foreach_instr_safe(block, instr) {<br>
>> > + if (instr->type != nir_instr_type_intrinsic)<br>
>> > + continue;<br>
>> > +<br>
>> > + nir_intrinsic_instr *intrin = nir_instr_as_intrinsic(instr);<br>
>> > +<br>
>> > + switch (intrin->intrinsic) {<br>
>> > + case nir_intrinsic_load_var_vec1:<br>
>> > + case nir_intrinsic_load_var_vec2:<br>
>> > + case nir_intrinsic_load_var_vec3:<br>
>> > + case nir_intrinsic_load_var_vec4:<br>
>> > + register_load_instr(intrin, true, state);<br>
>> > + break;<br>
>> > +<br>
>> > + case nir_intrinsic_store_var_vec1:<br>
>> > + case nir_intrinsic_store_var_vec2:<br>
>> > + case nir_intrinsic_store_var_vec3:<br>
>> > + case nir_intrinsic_store_var_vec4:<br>
>> > + register_store_instr(intrin, true, state);<br>
>> > + break;<br>
>> > +<br>
>> > + case nir_intrinsic_copy_var:<br>
>> > + register_copy_instr(intrin, true, state);<br>
>> > + break;<br>
>> > +<br>
>> > + default:<br>
>> > + continue;<br>
>> > + }<br>
>> > + }<br>
>> > +<br>
>> > + return true;<br>
>> > +}<br>
>> > +<br>
>> > +static nir_deref *<br>
>> > +deref_next_wildcard_parent(nir_deref *deref)<br>
>> > +{<br>
>> > + for (nir_deref *tail = deref; tail->child; tail = tail->child) {<br>
>> > + if (tail->child->deref_type != nir_deref_type_array)<br>
>> > + continue;<br>
>> > +<br>
>> > + nir_deref_array *arr = nir_deref_as_array(tail->child);<br>
>> > +<br>
>> > + if (arr->deref_array_type == nir_deref_array_type_wildcard)<br>
>> > + return tail;<br>
>> > + }<br>
>> > +<br>
>> > + return NULL;<br>
>> > +}<br>
>> > +<br>
>> > +static nir_deref *<br>
>> > +get_deref_tail(nir_deref *deref)<br>
>> > +{<br>
>> > + while (deref->child)<br>
>> > + deref = deref->child;<br>
>> > +<br>
>> > + return deref;<br>
>> > +}<br>
>><br>
>> I'm getting some deja vu, it's like I've seen this function before...<br>
>> ;) really, need to move it to nir.c.<br>
><br>
><br>
> Yeah. nir_deref needs some helpers. That can be done.<br>
><br>
>><br>
>><br>
>> > +<br>
>> > +static void<br>
>> > +emit_copy_load_store(nir_intrinsic_instr *copy_instr,<br>
>> > + nir_deref_var *dest_head, nir_deref_var *src_head,<br>
>> > + nir_deref *dest_tail, nir_deref *src_tail,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + nir_deref *src_arr_parent = deref_next_wildcard_parent(src_tail);<br>
>> > + nir_deref *dest_arr_parent = deref_next_wildcard_parent(dest_tail);<br>
>> > +<br>
>> > + if (src_arr_parent || dest_arr_parent) {<br>
>> > + assert(src_arr_parent && dest_arr_parent);<br>
>> > +<br>
>> > + nir_deref_array *src_arr =<br>
>> > nir_deref_as_array(src_arr_parent->child);<br>
>> > + nir_deref_array *dest_arr =<br>
>> > nir_deref_as_array(dest_arr_parent->child);<br>
>> > +<br>
>> > + unsigned length = type_get_length(src_arr_parent->type);<br>
>> > + assert(length == type_get_length(dest_arr_parent->type));<br>
>> > + assert(length > 0);<br>
>> > +<br>
>> > + src_arr->deref_array_type = nir_deref_array_type_direct;<br>
>> > + dest_arr->deref_array_type = nir_deref_array_type_direct;<br>
>> > + for (unsigned i = 0; i < length; i++) {<br>
>> > + src_arr->base_offset = i;<br>
>> > + dest_arr->base_offset = i;<br>
>> > + emit_copy_load_store(copy_instr, dest_head, src_head,<br>
>> > + &dest_arr->deref, &src_arr->deref,<br>
>> > state);<br>
>> > + }<br>
>> > + src_arr->deref_array_type = nir_deref_array_type_wildcard;<br>
>> > + dest_arr->deref_array_type = nir_deref_array_type_wildcard;<br>
>> > + } else {<br>
>> > + /* Base case. Actually do the copy */<br>
>> > + src_tail = get_deref_tail(src_tail);<br>
>> > + dest_tail = get_deref_tail(dest_tail);<br>
>> > +<br>
>> > + assert(src_tail->type == dest_tail->type);<br>
>> > +<br>
>> > + unsigned num_components =<br>
>> > glsl_get_vector_elements(src_tail->type);<br>
>> > +<br>
>> > + nir_deref *src_deref = nir_copy_deref(state->mem_ctx,<br>
>> > &src_head->deref);<br>
>> > + nir_deref *dest_deref = nir_copy_deref(state->mem_ctx,<br>
>> > &dest_head->deref);<br>
>> > +<br>
>> > + nir_intrinsic_op load_op;<br>
>> > + switch (num_components) {<br>
>> > + case 1: load_op = nir_intrinsic_load_var_vec1; break;<br>
>> > + case 2: load_op = nir_intrinsic_load_var_vec2; break;<br>
>> > + case 3: load_op = nir_intrinsic_load_var_vec3; break;<br>
>> > + case 4: load_op = nir_intrinsic_load_var_vec4; break;<br>
>> > + default: unreachable("Invalid number of components"); break;<br>
>> > + }<br>
>> > +<br>
>> > + nir_intrinsic_instr *load =<br>
>> > nir_intrinsic_instr_create(state->mem_ctx,<br>
>> > + load_op);<br>
>> > + load->variables[0] = nir_deref_as_var(src_deref);<br>
>> > + load->dest.is_ssa = true;<br>
>> > + nir_ssa_def_init(&load->instr, &load->dest.ssa, num_components,<br>
>> > NULL);<br>
>> > +<br>
>> > + nir_instr_insert_before(&copy_instr->instr, &load->instr);<br>
>> > + register_load_instr(load, false, state);<br>
>> > +<br>
>> > + nir_intrinsic_op store_op;<br>
>> > + switch (num_components) {<br>
>> > + case 1: store_op = nir_intrinsic_store_var_vec1; break;<br>
>> > + case 2: store_op = nir_intrinsic_store_var_vec2; break;<br>
>> > + case 3: store_op = nir_intrinsic_store_var_vec3; break;<br>
>> > + case 4: store_op = nir_intrinsic_store_var_vec4; break;<br>
>> > + default: unreachable("Invalid number of components"); break;<br>
>> > + }<br>
>> > +<br>
>> > + nir_intrinsic_instr *store =<br>
>> > nir_intrinsic_instr_create(state->mem_ctx,<br>
>> > +<br>
>> > store_op);<br>
>> > + store->variables[0] = nir_deref_as_var(dest_deref);<br>
>> > + store->src[0].is_ssa = true;<br>
>> > + store->src[0].ssa = &load->dest.ssa;<br>
>> > +<br>
>> > + if (copy_instr->has_predicate) {<br>
>> > + store->has_predicate = true;<br>
>> > + store->predicate = nir_src_copy(copy_instr->predicate,<br>
>> > state->mem_ctx);<br>
>> > + }<br>
>> > +<br>
>> > + nir_instr_insert_before(&copy_instr->instr, &store->instr);<br>
>> > + register_store_instr(store, false, state);<br>
>> > + }<br>
>> > +}<br>
>> > +<br>
>> > +static bool<br>
>> > +lower_copies_to_load_store(struct deref_node *node,<br>
>> > + struct lower_variables_state *state)<br>
>><br>
>> This never returns false, so just make it return void.<br>
><br>
><br>
> No, we do use the return value. It's passed into foreach_deref_node_match down<br>
> there<br>
<br>
</div></div>Ok, depending on your answer to my comment about foreach_deref_node_match<br>
and foreach_deref_node_worker, I think we can still clean it up then.</blockquote><div><br></div><div>Oh, don't worry. This entire file needs to be cleaned up. It's just about as clean as I was able to get it at the time.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div><div class="h5"><br>
><br>
>><br>
>><br>
>> > +{<br>
>> > + if (!node->copies)<br>
>> > + return true;<br>
>> > +<br>
>> > + struct set_entry *copy_entry;<br>
>> > + set_foreach(node->copies, copy_entry) {<br>
>> > + nir_intrinsic_instr *copy = (void *)copy_entry->key;<br>
>> > +<br>
>> > + emit_copy_load_store(copy, copy->variables[0],<br>
>> > copy->variables[1],<br>
>> > + &copy->variables[0]->deref,<br>
>> > + &copy->variables[1]->deref,<br>
>> > + state);<br>
>> > +<br>
>> > + for (unsigned i = 0; i < 2; ++i) {<br>
>> > + struct deref_node *arg_node =<br>
>> > get_deref_node(copy->variables[i],<br>
>> > + false, state);<br>
>> > + if (arg_node == NULL)<br>
>> > + continue;<br>
>> > +<br>
>> > + struct set_entry *arg_entry =<br>
>> > _mesa_set_search(arg_node->copies,<br>
>> > +<br>
>> > copy_entry->hash,<br>
>> > + copy);<br>
>> > + assert(arg_entry);<br>
>> > + _mesa_set_remove(node->copies, arg_entry);<br>
>> > + }<br>
>> > +<br>
>> > + nir_instr_remove(&copy->instr);<br>
>> > + }<br>
>> > +<br>
>> > + return true;<br>
>> > +}<br>
>> > +<br>
>> > +static nir_load_const_instr *<br>
>> > +get_const_initializer_load(const nir_deref_var *deref,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + nir_constant *constant = deref->var->constant_initializer;<br>
>> > + const nir_deref *tail = &deref->deref;<br>
>> > + unsigned matrix_offset = 0;<br>
>> > + while (tail->child) {<br>
>> > + switch (tail->child->deref_type) {<br>
>> > + case nir_deref_type_array: {<br>
>> > + nir_deref_array *arr = nir_deref_as_array(tail->child);<br>
>> > + assert(arr->deref_array_type == nir_deref_array_type_direct);<br>
>> > + if (glsl_type_is_matrix(tail->type)) {<br>
>> > + assert(arr->deref.child == NULL);<br>
>> > + matrix_offset = arr->base_offset;<br>
>> > + } else {<br>
>> > + constant = constant->elements[arr->base_offset];<br>
>> > + }<br>
>> > + break;<br>
>> > + }<br>
>> > +<br>
>> > + case nir_deref_type_struct: {<br>
>> > + constant =<br>
>> > constant->elements[nir_deref_as_struct(tail->child)->index];<br>
>> > + break;<br>
>> > + }<br>
>> > +<br>
>> > + default:<br>
>> > + unreachable("Invalid deref child type");<br>
>> > + }<br>
>> > +<br>
>> > + tail = tail->child;<br>
>> > + }<br>
>> > +<br>
>> > + nir_load_const_instr *load =<br>
>> > nir_load_const_instr_create(state->mem_ctx);<br>
>> > + load->array_elems = 0;<br>
>> > + load->num_components = glsl_get_vector_elements(tail->type);<br>
>> > +<br>
>> > + matrix_offset *= load->num_components;<br>
>> > + for (unsigned i = 0; i < load->num_components; i++) {<br>
>> > + switch (glsl_get_base_type(tail->type)) {<br>
>> > + case GLSL_TYPE_FLOAT:<br>
>> > + case GLSL_TYPE_INT:<br>
>> > + case GLSL_TYPE_UINT:<br>
>> > + load->value.u[i] = constant->value.u[matrix_offset + i];<br>
>> > + break;<br>
>> > + case GLSL_TYPE_BOOL:<br>
>> > + load->value.u[i] = constant->value.u[matrix_offset + i] ?<br>
>> > + NIR_TRUE : NIR_FALSE;<br>
>> > + break;<br>
>> > + default:<br>
>> > + unreachable("Invalid immediate type");<br>
>> > + }<br>
>> > + }<br>
>> > +<br>
>> > + return load;<br>
>> > +}<br>
>> > +<br>
>> > +static void<br>
>> > +def_stack_push(struct deref_node *node, nir_ssa_def *def,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + if (node->def_stack == NULL) {<br>
>> > + node->def_stack = ralloc_array(state->dead_ctx, nir_ssa_def *,<br>
>> > + state->impl->num_blocks);<br>
>> > + node->def_stack_tail = node->def_stack - 1;<br>
>> > + }<br>
>> > +<br>
>> > + if (node->def_stack_tail >= node->def_stack) {<br>
>> > + nir_ssa_def *top_def = *node->def_stack_tail;<br>
>> > +<br>
>> > + if (def->parent_instr->block == top_def->parent_instr->block) {<br>
>> > + /* They're in the same block, just replace the top */<br>
>> > + *node->def_stack_tail = def;<br>
>> > + return;<br>
>> > + }<br>
>> > + }<br>
>> > +<br>
>> > + *(++node->def_stack_tail) = def;<br>
>> > +}<br>
>> > +<br>
>> > +static nir_ssa_def *<br>
>> > +get_ssa_def_for_block(struct deref_node *node, nir_block *block,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + if (node->def_stack) {<br>
>> > + while (node->def_stack_tail >= node->def_stack) {<br>
>> > + nir_ssa_def *def = *node->def_stack_tail;<br>
>> > +<br>
>> > + for (nir_block *dom = block; dom != NULL; dom = dom->imm_dom)<br>
>> > {<br>
>> > + if (def->parent_instr->block == dom)<br>
>> > + return def;<br>
>> > + }<br>
>> > +<br>
>> > + node->def_stack_tail--;<br>
>> > + }<br>
>> > + }<br>
>> > +<br>
>> > + /* If we got here then we don't have a definition that dominates the<br>
>> > + * given block. This means that we need to add an undef and use<br>
>> > that.<br>
>> > + */<br>
>> > + nir_ssa_undef_instr *undef =<br>
>> > nir_ssa_undef_instr_create(state->mem_ctx);<br>
>> > + nir_ssa_def_init(&undef->instr, &undef->def,<br>
>> > + glsl_get_vector_elements(node->type), NULL);<br>
>> > + nir_instr_insert_before_cf_list(&state->impl->body, &undef->instr);<br>
>> > + def_stack_push(node, &undef->def, state);<br>
>> > + return &undef->def;<br>
>> > +}<br>
>><br>
>> I've said this before, but I really think we should be doing exactly<br>
>> what the paper says. It might be just an optimization, but it gives<br>
>> paranoid people like me much greater confidence that what we're doing<br>
>> is actually correct. Also, we should be referencing the paper in a<br>
>> comment somewhere...<br>
>><br>
>> Here are the changes I think you'll need to make:<br>
>><br>
>> -Add a hash table to the state that maps from SSA definition to deref<br>
>> node, and make def_stack_push() update it (but only when we don't<br>
>> replace the top of the stack to prevent popping more than once per<br>
>> stack)<br>
>> -Add the for loop in deref_to_ssa_block() that goes backwards, looking<br>
>> at each SSA definition and if it corresponds to a deref node, pops<br>
>> that node's stack<br>
>> -Make get_ssa_def_for_block() not pop the stack<br>
>><br>
>> I don't think it'll be that difficult, and you may even end up with<br>
>> less code after doing the third step.<br>
><br>
><br>
> Given that we know that, at any given point, the top of the stack belongs to<br>
> either the current block or one of its parents in the dominance tree, why<br>
> can't we just do "if (stack_top->block == block) pop()" after walking the<br>
> children?<br>
<br>
</div></div>Because then we'd have to walk every node after visiting every block<br>
to make that check, something that's probably much more expensive than<br>
walking every instruction. At least, that's why the paper's written<br>
that way.</blockquote><div><br></div><div>Right... I thought through that earlier today and, apparently, forgot.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
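</blockquote><div><br></div><div>For reference, here's roughly what I understand that to look like (sketch only, not in this patch: ssa_def_to_node would be the new hash table, filled by def_stack_push() only when it actually pushes rather than replacing the top, and I'm assuming a nir_foreach_instr_reverse counterpart to nir_foreach_instr):<br></div><pre>
/* Hedged sketch of the Cytron et al. style unwind step: after a block's
 * dominator-tree children have been renamed, walk the block's
 * instructions in reverse and pop exactly the defs that were pushed
 * while processing this block.
 */
static void
pop_defs_pushed_in_block(nir_block *block,
                         struct lower_variables_state *state)
{
   nir_foreach_instr_reverse(block, instr) {
      /* Only the instruction kinds this pass pushes can be on a stack. */
      nir_ssa_def *def;
      switch (instr->type) {
      case nir_instr_type_alu:
         def = &nir_instr_as_alu(instr)->dest.dest.ssa;
         break;
      case nir_instr_type_phi:
         def = &nir_instr_as_phi(instr)->dest.ssa;
         break;
      case nir_instr_type_load_const:
         def = &nir_instr_as_load_const(instr)->dest.ssa;
         break;
      case nir_instr_type_ssa_undef:
         def = &nir_instr_as_ssa_undef(instr)->def;
         break;
      default:
         continue;
      }

      /* ssa_def_to_node maps each pushed def back to its deref_node;
       * if this def was pushed, pop that node's stack.
       */
      struct hash_entry *entry =
         _mesa_hash_table_search(state->ssa_def_to_node,
                                 _mesa_hash_pointer(def), def);
      if (entry) {
         struct deref_node *node = entry->data;
         node->def_stack_tail--;
      }
   }
}
</pre><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">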
<div class=""><div class="h5"><br>
><br>
>><br>
>><br>
>> > +<br>
>> > +static void<br>
>> > +add_phi_sources(nir_block *block, nir_block *pred,<br>
>> > + struct lower_variables_state *state)<br>
>> > +{<br>
>> > + nir_foreach_instr(block, instr) {<br>
>> > + if (instr->type != nir_instr_type_phi)<br>
>> > + break;<br>
>> > +<br>
>> > + nir_phi_instr *phi = nir_instr_as_phi(instr);<br>
>> > +<br>
>> > + struct hash_entry *entry =<br>
>> > + _mesa_hash_table_search(state->phi_table,<br>
>> > + _mesa_hash_pointer(phi), phi);<br>
>> > + if (!entry)<br>
>> > + continue;<br>
>> > +<br>
>> > + struct deref_node *node = entry->data;<br>
>> > +<br>
>> > + nir_phi_src *src = ralloc(state->mem_ctx, nir_phi_src);<br>
>> > + src->pred = pred;<br>
>> > + src->src.is_ssa = true;<br>
>> > + src->src.ssa = get_ssa_def_for_block(node, pred, state);<br>
>> > +<br>
>> > + _mesa_set_add(src->src.ssa->uses, _mesa_hash_pointer(instr),<br>
>> > instr);<br>
>> > +<br>
>> > + exec_list_push_tail(&phi->srcs, &src->node);<br>
>> > + }<br>
>> > +}<br>
>> > +<br>
>> > +static bool<br>
>> > +lower_deref_to_ssa_block(nir_block *block, void *void_state)<br>
>> > +{<br>
>> > + struct lower_variables_state *state = void_state;<br>
>> > +<br>
>> > + nir_foreach_instr_safe(block, instr) {<br>
>> > + if (instr->type == nir_instr_type_phi) {<br>
>> > + nir_phi_instr *phi = nir_instr_as_phi(instr);<br>
>> > +<br>
>> > + struct hash_entry *entry =<br>
>> > + _mesa_hash_table_search(state->phi_table,<br>
>> > + _mesa_hash_pointer(phi), phi);<br>
>> > +<br>
>> > + /* This can happen if we already have phi nodes in the program<br>
>> > + * that were not created in this pass.<br>
>> > + */<br>
>> > + if (!entry)<br>
>> > + continue;<br>
>> > +<br>
>> > + struct deref_node *node = entry->data;<br>
>> > +<br>
>> > + def_stack_push(node, &phi->dest.ssa, state);<br>
>> > + } else if (instr->type == nir_instr_type_intrinsic) {<br>
>> > + nir_intrinsic_instr *intrin = nir_instr_as_intrinsic(instr);<br>
>> > +<br>
>> > + switch (intrin->intrinsic) {<br>
>> > + case nir_intrinsic_load_var_vec1:<br>
>> > + case nir_intrinsic_load_var_vec2:<br>
>> > + case nir_intrinsic_load_var_vec3:<br>
>> > + case nir_intrinsic_load_var_vec4: {<br>
>> > + struct deref_node *node =<br>
>> > get_deref_node(intrin->variables[0],<br>
>> > + false, state);<br>
>> > + unsigned num_chans =<br>
>> > + nir_intrinsic_infos[intrin->intrinsic].dest_components;<br>
>> > +<br>
>> > + if (node == NULL) {<br>
>> > + /* If we hit this path then we are referencing an<br>
>> > invalid<br>
>> > + * value. Most likely, we unrolled something and are<br>
>> > + * reading past the end of some array. In any case,<br>
>> > this<br>
>> > + * should result in an undefined value.<br>
>> > + */<br>
>> > + nir_ssa_undef_instr *undef =<br>
>> > + nir_ssa_undef_instr_create(state->mem_ctx);<br>
>> > + nir_ssa_def_init(&undef->instr, &undef->def, num_chans,<br>
>> > NULL);<br>
>> > +<br>
>> > + nir_instr_insert_before(&intrin->instr, &undef->instr);<br>
>> > + nir_instr_remove(&intrin->instr);<br>
>> > +<br>
>> > + nir_src new_src = {<br>
>> > + .is_ssa = true,<br>
>> > + .ssa = &undef->def,<br>
>> > + };<br>
>> > +<br>
>> > + nir_ssa_def_rewrite_uses(&intrin->dest.ssa, new_src,<br>
>> > + state->mem_ctx);<br>
>> > + continue;<br>
>> > + }<br>
>> > +<br>
>> > + if (!node->lower_to_ssa)<br>
>> > + continue;<br>
>> > +<br>
>> > + nir_alu_instr *mov = nir_alu_instr_create(state->mem_ctx,<br>
>> > + nir_op_imov);<br>
>> > + mov->src[0].src.is_ssa = true;<br>
>> > + mov->src[0].src.ssa = get_ssa_def_for_block(node, block,<br>
>> > state);<br>
>> > + for (unsigned i = num_chans; i < 4; i++)<br>
>> > + mov->src[0].swizzle[i] = 0;<br>
>> > +<br>
>> > + assert(intrin->dest.is_ssa);<br>
>> > +<br>
>> > + mov->dest.write_mask = (1 << num_chans) - 1;<br>
>> > + mov->dest.dest.is_ssa = true;<br>
>> > + nir_ssa_def_init(&mov->instr, &mov->dest.dest.ssa,<br>
>> > num_chans, NULL);<br>
>> > +<br>
>> > + nir_instr_insert_before(&intrin->instr, &mov->instr);<br>
>> > + nir_instr_remove(&intrin->instr);<br>
>> > +<br>
>> > + nir_src new_src = {<br>
>> > + .is_ssa = true,<br>
>> > + .ssa = &mov->dest.dest.ssa,<br>
>> > + };<br>
>> > +<br>
>> > + nir_ssa_def_rewrite_uses(&intrin->dest.ssa, new_src,<br>
>> > + state->mem_ctx);<br>
>> > + break;<br>
>> > + }<br>
>> > +<br>
>> > + case nir_intrinsic_store_var_vec1:<br>
>> > + case nir_intrinsic_store_var_vec2:<br>
>> > + case nir_intrinsic_store_var_vec3:<br>
>> > + case nir_intrinsic_store_var_vec4: {<br>
>> > + struct deref_node *node =<br>
>> > get_deref_node(intrin->variables[0],<br>
>> > + false, state);<br>
>> > +<br>
>> > + if (node == NULL) {<br>
>> > + /* Probably an out-of-bounds array store. That should<br>
>> > be a<br>
>> > + * no-op. */<br>
>> > + nir_instr_remove(&intrin->instr);<br>
>> > + continue;<br>
>> > + }<br>
>> > +<br>
>> > + if (!node->lower_to_ssa)<br>
>> > + continue;<br>
>> > +<br>
>> > + unsigned num_chans = glsl_get_vector_elements(node->type);<br>
>> > +<br>
>> > + assert(intrin->src[0].is_ssa);<br>
>> > +<br>
>> > + nir_alu_instr *mov;<br>
>> > + if (intrin->has_predicate) {<br>
>> > + mov = nir_alu_instr_create(state->mem_ctx,<br>
>> > nir_op_bcsel);<br>
>> > + mov->src[0].src = nir_src_copy(intrin->predicate,<br>
>> > + state->mem_ctx);<br>
>> > + memset(mov->src[0].swizzle, 0, sizeof<br>
>> > mov->src[0].swizzle);<br>
>> > +<br>
>> > + mov->src[1].src.is_ssa = true;<br>
>> > + mov->src[1].src.ssa = intrin->src[0].ssa;<br>
>> > + for (unsigned i = num_chans; i < 4; i++)<br>
>> > + mov->src[1].swizzle[i] = 0;<br>
>> > +<br>
>> > + mov->src[2].src.is_ssa = true;<br>
>> > + mov->src[2].src.ssa = get_ssa_def_for_block(node, block,<br>
>> > state);<br>
>> > + for (unsigned i = num_chans; i < 4; i++)<br>
>> > + mov->src[2].swizzle[i] = 0;<br>
>> > +<br>
>> > + } else {<br>
>> > + mov = nir_alu_instr_create(state->mem_ctx, nir_op_imov);<br>
>> > +<br>
>> > + mov->src[0].src.is_ssa = true;<br>
>> > + mov->src[0].src.ssa = intrin->src[0].ssa;<br>
>> > + for (unsigned i = num_chans; i < 4; i++)<br>
>> > + mov->src[0].swizzle[i] = 0;<br>
>> > + }<br>
>> > +<br>
>> > + mov->dest.write_mask = (1 << num_chans) - 1;<br>
>> > + mov->dest.dest.is_ssa = true;<br>
>> > + nir_ssa_def_init(&mov->instr, &mov->dest.dest.ssa,<br>
>> > num_chans, NULL);<br>
>> > +<br>
>> > + nir_instr_insert_before(&intrin->instr, &mov->instr);<br>
>> > + nir_instr_remove(&intrin->instr);<br>
>> > +<br>
>> > + def_stack_push(node, &mov->dest.dest.ssa, state);<br>
>> > + break;<br>
>> > + }<br>
>> > +<br>
>> > + default:<br>
>> > + break;<br>
>> > + }<br>
>> > + }<br>
>> > + }<br>
>> > +<br>
>> > + if (block->successors[0])<br>
>> > + add_phi_sources(block->successors[0], block, state);<br>
>> > + if (block->successors[1])<br>
>> > + add_phi_sources(block->successors[1], block, state);<br>
>> > +<br>
>> > + return true;<br>
>> > +}<br>
>> > +<br>
>> > +static void<br>
>> > +insert_phi_nodes(struct lower_variables_state *state)<br>
>> > +{<br>
>> > + unsigned work[state->impl->num_blocks];<br>
>> > + unsigned has_already[state->impl->num_blocks];<br>
>> > + nir_block *W[state->impl->num_blocks];<br>
>> > +<br>
>> > + memset(work, 0, sizeof work);<br>
>> > + memset(has_already, 0, sizeof has_already);<br>
>> > +<br>
>> > + unsigned w_start, w_end;<br>
>> > + unsigned iter_count = 0;<br>
>> > +<br>
>> > + struct hash_entry *deref_entry;<br>
>> > + hash_table_foreach(state->deref_leaves, deref_entry) {<br>
>> > + struct deref_node *node = deref_entry->data;<br>
>> > +<br>
>> > + if (node->stores == NULL)<br>
>> > + continue;<br>
>> > +<br>
>> > + if (!node->lower_to_ssa)<br>
>> > + continue;<br>
>> > +<br>
>> > + w_start = w_end = 0;<br>
>> > + iter_count++;<br>
>> > +<br>
>> > + struct set_entry *store_entry;<br>
>> > + set_foreach(node->stores, store_entry) {<br>
>> > + nir_intrinsic_instr *store = (nir_intrinsic_instr<br>
>> > *)store_entry->key;<br>
>> > + if (work[store->instr.block->index] < iter_count)<br>
>> > + W[w_end++] = store->instr.block;<br>
>> > + work[store->instr.block->index] = iter_count;<br>
>> > + }<br>
>> > +<br>
>> > + while (w_start != w_end) {<br>
>> > + nir_block *cur = W[w_start++];<br>
>> > + struct set_entry *dom_entry;<br>
>> > + set_foreach(cur->dom_frontier, dom_entry) {<br>
>> > + nir_block *next = (nir_block *) dom_entry->key;<br>
>> > +<br>
>> > + /*<br>
>> > + * If there's more than one return statement, then the end<br>
>> > block<br>
>> > + * can be a join point for some definitions. However, there<br>
>> > are<br>
>> > + * no instructions in the end block, so nothing would use<br>
>> > those<br>
>> > + * phi nodes. Of course, we couldn't place those phi nodes<br>
>> > + * anyways due to the restriction of having no instructions<br>
>> > in the<br>
>> > + * end block...<br>
>> > + */<br>
>> > + if (next == state->impl->end_block)<br>
>> > + continue;<br>
>> > +<br>
>> > + if (has_already[next->index] < iter_count) {<br>
>> > + nir_phi_instr *phi =<br>
>> > nir_phi_instr_create(state->mem_ctx);<br>
>> > + phi->dest.is_ssa = true;<br>
>> > + nir_ssa_def_init(&phi->instr, &phi->dest.ssa,<br>
>> > + glsl_get_vector_elements(node->type),<br>
>> > NULL);<br>
>> > + nir_instr_insert_before_block(next, &phi->instr);<br>
>> > +<br>
>> > + _mesa_hash_table_insert(state->phi_table,<br>
>> > + _mesa_hash_pointer(phi), phi,<br>
>> > node);<br>
>> > +<br>
>> > + has_already[next->index] = iter_count;<br>
>> > + if (work[next->index] < iter_count) {<br>
>> > + work[next->index] = iter_count;<br>
>> > + W[w_end++] = next;<br>
>> > + }<br>
>> > + }<br>
>> > + }<br>
>> > + }<br>
>> > + }<br>
>> > +}<br>
>> > +<br>
>> > +static bool<br>
>> > +nir_lower_variables_impl(nir_function_impl *impl)<br>
>> > +{<br>
>> > + struct lower_variables_state state;<br>
>> > +<br>
>> > + state.mem_ctx = ralloc_parent(impl);<br>
>> > + state.dead_ctx = ralloc_context(state.mem_ctx);<br>
>> > + state.impl = impl;<br>
>> > +<br>
>> > + state.deref_var_nodes = _mesa_hash_table_create(state.dead_ctx,<br>
>> > +<br>
>> > _mesa_key_pointer_equal);<br>
>> > + state.deref_leaves = _mesa_hash_table_create(state.dead_ctx,<br>
>> > + derefs_equal);<br>
>> > + state.phi_table = _mesa_hash_table_create(state.dead_ctx,<br>
>> > + _mesa_key_pointer_equal);<br>
>> > +<br>
>> > + nir_foreach_block(impl, fill_deref_tables_block, &state);<br>
>> > +<br>
>> > + struct set *outputs = _mesa_set_create(state.dead_ctx,<br>
>> > + _mesa_key_pointer_equal);<br>
>> > +<br>
>> > + bool progress = false;<br>
>> > +<br>
>> > + nir_metadata_require(impl, nir_metadata_block_index);<br>
>> > +<br>
>> > + struct hash_entry *entry;<br>
>> > + hash_table_foreach(state.deref_leaves, entry) {<br>
>> > + nir_deref_var *deref = (void *)entry->key;<br>
>> > + struct deref_node *node = entry->data;<br>
>> > +<br>
>> > + if (deref->var->data.mode != nir_var_local) {<br>
>> > + _mesa_hash_table_remove(state.deref_leaves, entry);<br>
>> > + continue;<br>
>> > + }<br>
>> > +<br>
>> > + if (deref_may_be_aliased(deref, &state)) {<br>
>> > + _mesa_hash_table_remove(state.deref_leaves, entry);<br>
>> > + continue;<br>
>> > + }<br>
>> > +<br>
>> > + node->lower_to_ssa = true;<br>
>> > + progress = true;<br>
>> > +<br>
>> > + if (deref->var->constant_initializer) {<br>
>> > + nir_load_const_instr *load = get_const_initializer_load(deref,<br>
>> > &state);<br>
>> > + load->dest.is_ssa = true;<br>
>> > + nir_ssa_def_init(&load->instr, &load->dest.ssa,<br>
>> > + glsl_get_vector_elements(node->type), NULL);<br>
>> > + nir_instr_insert_before_cf_list(&impl->body, &load->instr);<br>
>> > + def_stack_push(node, &load->dest.ssa, &state);<br>
>> > + }<br>
>> > +<br>
>> > + if (deref->var->data.mode == nir_var_shader_out)<br>
>> > + _mesa_set_add(outputs, _mesa_hash_pointer(node), node);<br>
>> > +<br>
>> > + foreach_deref_node_match(deref, lower_copies_to_load_store,<br>
>> > &state);<br></div></div></blockquote><div><br></div><div>It's used right here ^^<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5">
>> > + }<br>
>> > +<br>
>> > + if (!progress)<br>
>> > + return false;<br>
>> > +<br>
>> > + nir_metadata_require(impl, nir_metadata_dominance);<br>
>> > +<br>
>> > + insert_phi_nodes(&state);<br>
>> > + nir_foreach_block(impl, lower_deref_to_ssa_block, &state);<br>
>> > +<br>
>> > + nir_metadata_dirty(impl, nir_metadata_block_index |<br>
>> > + nir_metadata_dominance);<br>
>> > +<br>
>> > + ralloc_free(state.dead_ctx);<br>
>> > +<br>
>> > + return progress;<br>
>> > +}<br>
>> > +<br>
>> > +void<br>
>> > +nir_lower_variables(nir_shader *shader)<br>
>> > +{<br>
>> > + nir_foreach_overload(shader, overload) {<br>
>> > + if (overload->impl)<br>
>> > + nir_lower_variables_impl(overload->impl);<br>
>> > + }<br>
>> > +}<br>
>> > --<br>
>> > 2.2.0<br>
>> ><br>
>> > _______________________________________________<br>
>> > mesa-dev mailing list<br>
>> > <a href="mailto:mesa-dev@lists.freedesktop.org">mesa-dev@lists.freedesktop.org</a><br>
>> > <a href="http://lists.freedesktop.org/mailman/listinfo/mesa-dev" target="_blank">http://lists.freedesktop.org/mailman/listinfo/mesa-dev</a><br>
</div></div></blockquote></div></div></div>