[Mesa-dev] nir: find_msb vs clz

Erik Faye-Lund erik.faye-lund at collabora.com
Wed Apr 1 18:39:22 UTC 2020


While working on the NIR to DXIL conversion code for D3D12, I've
noticed that we're not exactly doing the best we could here.

First some background:

NIR currently has a few instructions that do kinda the same thing
(reference sketches below):

1. nir_op_ufind_msb: Finds the index of the most significant set
bit, counting from the least significant bit. It returns -1 on zero
input.

2. nir_op_ifind_msb: A signed version of ufind_msb; it looks for the
first bit that differs from the sign bit. It's not terribly
interesting in this context, as it can be trivially lowered if
missing, and no hardware seems to support it natively. I'm just
mentioning it for completeness.

3. nir_op_uclz: Counts the number of leading zeroes, counting from
the most significant bit. It returns 32 on zero input, and only
exists in an unsigned 32-bit variant.
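
For reference, here's roughly what the first and third of these mean,
as a plain-C sketch (not the actual implementations):

    #include <stdint.h>

    /* nir_op_ufind_msb: index of the most significant set bit,
     * counting from the LSB; -1 when no bit is set. */
    int ufind_msb(uint32_t x)
    {
       for (int i = 31; i >= 0; i--) {
          if (x & (1u << i))
             return i;
       }
       return -1;
    }

    /* nir_op_uclz: number of leading zeroes, counting from the
     * MSB; 32 when no bit is set. */
    int uclz(uint32_t x)
    {
       int n = 0;
       while (n < 32 && !(x & (0x80000000u >> n)))
          n++;
       return n;
    }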

ufind_msb is kinda the O.G. here; uclz was added recently, and as far
as I can see it's only used by an Intel-specific SPIR-V instruction.

Additionally, there's the OpenCLstd_Clz SPIR-V instruction, which we
lower to ufind_msb using nir_clz_u(), regardless of whether the
backend supports nir_op_uclz or not.
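
If I read nir_builtin_builder right, that lowering is essentially the
following (sketch, 32-bit case). Note the identity is exact even for
zero input, since 31 - ufind_msb(0) = 31 - (-1) = 32:

    /* Sketch of what nir_clz_u() builds: clz(x) = 31 - ufind_msb(x). */
    static nir_ssa_def *
    build_clz(nir_builder *b, nir_ssa_def *x)
    {
       return nir_isub(b, nir_imm_int(b, 31), nir_ufind_msb(b, x));
    }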

It seems only nouveau's NV50 backend actually wants ufind_msb;
everything else seems to convert ufind_msb to some clz-variant while
emitting code. Some have to special-case zero input, and some
don't...

All of this is not really awesome in my eyes.

So, while adding support for DXIL, I need to figure out how to map
these (well, ufind_msb at least) onto the DXIL intrinsics. DXIL
doesn't have a ufind_msb, but it has a firstbit_hi that is identical
to nir_op_uclz... except that it returns -1 on zero input :(
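
So a correct mapping needs a zero-guard, something like this (sketch;
firstbit_hi() stands in for the DXIL intrinsic):

    extern int firstbit_hi(uint32_t x); /* stand-in for the intrinsic */

    /* ufind_msb in terms of firstbit_hi. For non-zero x,
     * 31 - firstbit_hi(x) is the MSB index; for zero input it would
     * be 31 - (-1) = 32, but ufind_msb wants -1, hence the select. */
    int ufind_msb_via_firstbit_hi(uint32_t x)
    {
       return x == 0 ? -1 : 31 - firstbit_hi(x);
    }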

For now, I'm lowering ufind_msb to firstbit_hi (with fix-ups) while
emitting code, like everyone else. But this feels a bit dirty,
*especially* since we have a clz-instruction that *almost* fits. And
since we're targeting OpenCL, which uses clz as its primitive, we end
up doing 32 - (32 - x), and since that inner isub happens while
emitting, we can't easily optimize it away without introducing an
optimizing backend...
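
Spelled out (modulo the exact constants and the zero-guard), the
chain for an OpenCL clz(x) ends up as:

    clz(x) = 31 - ufind_msb(x)            after vtn / nir_clz_u()
           = 31 - (31 - firstbit_hi(x))   after emitting
           = firstbit_hi(x)               what we'd like to emit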

The solution seems obvious; use nir_op_uclz instead.

But that's also a bit annoying, for a few reasons:

1. Only *one* backend actually implements support for it. So this
either means a lot of work, or making it an opt-in feature somehow.

2. We would probably have to support lowering in either direction to
support what all hardware prefers.

3. That zero-case still needs special treatment in several backends, it
seems. We could alternatively declare that nir_op_uclz is undefined for
zero-input, and handle this when lowering...?

4. It seems some (Intel?) hardware only supports 32-bit clz, so we
would have to lower to something else for other bit-sizes (see the
sketch below). That's not too hard, though.
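
For completeness, here's how a 64-bit uclz could be lowered to the
32-bit one, in terms of the uclz() sketch from above:

    /* If the high half has any bit set, its leading zeroes are the
     * answer; otherwise add 32 for the all-zero high half. Note
     * that uclz64(0) = 32 + 32 = 64, matching the
     * bit-size-on-zero convention. */
    int uclz64(uint64_t x)
    {
       uint32_t hi = x >> 32;
       return hi != 0 ? uclz(hi) : 32 + uclz((uint32_t)x);
    }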

So yeah...

I guess the first step would be to add a switch to use nir_uclz()
instead of nir_clz_u() when handling OpenCLstd_Clz in vtn.
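
Something like this, I imagine (completely hypothetical sketch; the
"has_uclz" option doesn't exist, and the surrounding vtn code is
simplified):

    /* In the OpenCLstd_Clz case: emit a native uclz if the backend
     * opted in, otherwise fall back to the ufind_msb-based helper. */
    nir_ssa_def *result;
    if (b->nb.shader->options->has_uclz)    /* hypothetical flag */
       result = nir_uclz(&b->nb, src);
    else
       result = nir_clz_u(&b->nb, src);     /* 31 - ufind_msb(src) */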

Next, I guess I would add a lower_ufind_msb flag to
nir_shader_compiler_options, and make nir_opt_algebraic.py lower
ufind_msb to uclz.
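
The rule itself would just be the inverse of the identity above, and
conveniently it's also exact for zero input, since uclz(0) = 32 and
31 - 32 = -1, which is exactly what ufind_msb wants. In plain C terms
(sketch, reusing the uclz() reference from above):

    /* What a lower_ufind_msb-guarded algebraic rule would encode: */
    int ufind_msb_from_uclz(uint32_t x)
    {
       return 31 - uclz(x);
    }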

Finally, we can start implementing support for this in more drivers,
and flip on some switches.

I'm still not really sold on what to do about the special case for
zero... By making it undefined, I think we're just punishing the
backends where the hardware already handles zero fine, in the name of
making the other backends a bit simpler, so that doesn't seem like
too good of an idea either.

Does anyone have a better idea? I would kinda love to optimize away
the zero-case when it's obviously impossible, e.g. in cases like
"clz(x | 1)"...



