[Mesa-dev] Mesa (shader-work): glsl: introduce ir_binop_all_equal and ir_binop_any_nequal, allow vector cmps

Ian Romanick idr at freedesktop.org
Tue Sep 7 20:15:17 PDT 2010


Luca Barbieri wrote:
> I think my patch can be seen as an intermediate step towards that.
> 
> I'm not sure that it's a good idea to implement them in terms of all/any.
> 
> ir_binop_all_equal and ir_binop_any_nequal represent the concept of
> "whole object equality", which seems quite useful in itself.

We already lower structure and array comparisons to comparisons of the
individual components.  This is a necessary and useful step in
optimization, so I don't see it going away.
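
Roughly, in source terms (the struct and field names here are made up
for illustration), the lowering turns a whole-object comparison into
per-field comparisons ANDed together:

        struct light { vec3 pos; float cutoff; };

        // whole-object comparison in the source:
        bool same = (a == b);

        // gets lowered to something like:
        bool same = all(equal(a.pos, b.pos)) && (a.cutoff == b.cutoff);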

> For instance, it can be trivially extended to records and any other
> data structure, while the all()/any() approach is more cumbersome in
> that case.
> 
>> Using ir_unop_any, we get stuck with:
>>
>>       bvec4 a = bvec4(true, true, x > y, true);
>>       if (!any(not(a))) {
>>               ...
>>       }
>>
>> It never gets reduced because an expression is either completely
>> constant or it's not constant at all.
> 
> Perhaps this should be fixed.

Yes.  We need real ud-chains instead of the ad-hoc mechanisms that we
currently have.
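
To make the desired end state concrete, here is a sketch of what
per-component folding ought to give us (not what any current pass
actually produces):

        bvec4 a = bvec4(true, true, x > y, true);
        if (!any(not(a))) {
                ...
        }

        // should reduce to just:
        if (x > y) {
                ...
        }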

> But in general I'm wondering whether attempting to write a great
> optimizer for GLSL IR is a good idea, or whether we should just
> convert the GLSL IR to LLVM, let the LLVM optimizers do their job and
> then convert back until all hardware has an LLVM backend.

Too bad LLVM doesn't have a clue about hardware that requires structured
branching.  Any decent optimizer for general purpose CPUs generates
spaghetti code.  It is, in the best case, really hard to convert
spaghetti code back into structured code.  Even worse, once you do that
you ruin a lot of the optimizations that you just worked so hard to get.
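
Loosely, the difference looks like this; the branch-to-label form in
the comments is only illustrative pseudo-assembly, not any real ISA:

        // What structured hardware wants: IF/ELSE/ENDIF with an
        // explicit join point.
        if (x > 0.0) {
                color = a;
        } else {
                color = b;
        }

        // What a CFG-based optimizer is happy to hand back:
        //      ble  x, 0.0, L1
        //      color = a
        //      jmp  L2
        // L1:  color = b
        // L2:  ...
        // Recovering the if/else from arbitrary branches is the hard
        // (and lossy) part.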

At least for fragment shaders, hardware is going to continue to look
like this for the foreseeable future.  Vertex shaders, geometry shaders,
and OpenCL kernels don't have the same issues.  Fragment shaders are
pretty important, though. :)

> In particular, I suspect that writing an optimizer that can decently
> handle long and complex OpenGL 4.1 shaders using
> EXT_shader_image_load_store, or OpenCL compute shaders (which come as
> LLVM IR anyway), and a code generator that can truly generate optimal
> hardware code, will need as much work as writing GCC or LLVM from
> scratch.
> 
> I (mostly) did the GLSL->LLVM conversion code, but the other side,
> which is harder, is still missing.

One of our first projects after 7.9 is to add support for using LLVM to
generate software vertex shaders.

> Why did you not choose to do that straight away, instead of opting
> for writing GLSL IR optimization passes?

GLSL requires a certain level of optimization just to perform semantic
checking on a shader.  We really haven't done very much beyond that.
That's in addition to my comments above about structured branching.  We
tried to take the path with the fewest unknowns.  That's also why we're
still generating the low-level Mesa IR.
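
As an example of the kind of thing I mean (just an illustration, not a
quote from the spec), array sizes have to be constant expressions, so
the front end has to fold constants before it can even tell whether a
declaration is legal:

        const int N = 4;
        float weights[N * 2 + 1];   // must fold to 9 to validate the size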