[Mesa-dev] prep work for 64-bit integer support

Ilia Mirkin imirkin at alum.mit.edu
Thu Jun 9 18:26:47 UTC 2016


On Thu, Jun 9, 2016 at 2:07 PM, Ian Romanick <idr at freedesktop.org> wrote:
> On 06/08/2016 02:15 PM, Dave Airlie wrote:
>> While writing ARB_gpu_shader_int64 I realised I needed to change
>> a lot of existing checks for doubles to 64bit, so I decided to
>> do that as much in advance as possible.
>
> I didn't know you were working on that.  I just started poking at more
> general sized integer support too.  I wanted to add support for 8, 16,
> and 64-bit types.

Might be worth noting that NVIDIA has some support for "SIMD"
operations on 16- and 8-bit values packed into a 32-bit integer.
You can see what operations are supported by looking up "video
instructions" in the PTX ISA - those roughly map 1:1 with the
hardware. However, I've never seen the NVIDIA blob actually generate
them, even with NV_gpu_shader5's u8vec4 and such. I don't know how this
changes on Pascal, which is rumored to support fp16 ALU natively.

>
> What's your hardware support plan?  I think that any hardware that can
> do uaddCarry, usubBorrow, [ui]mulExtended, and findMSB can implement
> everything in a relatively efficient manner.  I've coded almost all of
> the possible 64-bit operations in GLSL using ivec2 or uvec2 and these
> primitives as a proof of concept.  Less efficient implementations of
> everything are possible if any of those primitives are missing.
> Technically speaking, it ought to be possible to expose 64-bit integer
> support on *any* hardware that has true integers.
>
> I'm currently leaning towards implementing these as a NIR lowering pass,
> but there are other possibilities.  There are advantages to doing the
> lowering after most or all of the device independent optimizations.  In
> addition, doing it completely in NIR means that we can get 64-bit
> integer support for SPIR-V nearly for free.  I've also considered GLSL
> IR lowering or lowering while translating GLSL IR to NIR.

While I can't speak for AMD hw, NVIDIA has some limited support for 64-bit ints:

(a) atomics
(b) shifts (so you don't have to use temporaries plus bitfield
manipulation to shift bits from one 32-bit value to another)
(c) conversion between float/double and 64-bit ints
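For reference, the "temporaries plus bitfield manipulation" fallback
that native 64-bit shifts let you avoid might look roughly like this in
C (an illustrative sketch only, not Mesa code; the function name and
pair-of-halves representation are made up for the example):

```c
#include <stdint.h>

/* Lower a 64-bit left shift to 32-bit operations, with the value
 * represented as a (lo, hi) pair of 32-bit words.  Shift counts of
 * 0..63 are handled; counts >= 32 move low bits into the high word. */
static void shl64_lowered(uint32_t lo, uint32_t hi, unsigned s,
                          uint32_t *rlo, uint32_t *rhi)
{
    if (s == 0) {
        *rlo = lo;
        *rhi = hi;
    } else if (s < 32) {
        /* Bits shifted out of the low word carry into the high word. */
        *rhi = (hi << s) | (lo >> (32 - s));
        *rlo = lo << s;
    } else {
        /* The whole low word moves into the high word. */
        *rhi = lo << (s - 32);
        *rlo = 0;
    }
}
```

Right shifts and the arithmetic variant need the mirror-image sequence,
which is why having the operation natively (as on NVIDIA) is convenient.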

And operations like addition can be done using carry bits. We
have a pass to auto-lower 64-bit integer ops at the "end" so that
splitting them up doesn't affect things like constant propagation and
other optimizations. [I'm sure it'll need adjusting for a full 64-bit
int implementation; it mostly ends up getting used for address
calculations.] So I'd be highly in favor of (a) letting the backend
deal with it and (b) having the requisite TGSI opcodes to express it
all cleanly [which is what Dave has done].
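The carry-based addition mentioned above can be sketched in C as
follows (an illustrative example of the uaddCarry-style sequence, not
actual Mesa or driver code; the function name and split representation
are made up):

```c
#include <stdint.h>

/* Lower a 64-bit add to two 32-bit adds plus a carry, with each
 * operand represented as a (lo, hi) pair of 32-bit words. */
static void add64_lowered(uint32_t alo, uint32_t ahi,
                          uint32_t blo, uint32_t bhi,
                          uint32_t *rlo, uint32_t *rhi)
{
    uint32_t lo = alo + blo;
    /* Unsigned wraparound detects the carry out of the low word,
     * the same bit a uaddCarry-style instruction would produce. */
    uint32_t carry = lo < alo;

    *rlo = lo;
    *rhi = ahi + bhi + carry;
}
```

Doing this split late, as the pass described above does, keeps the
64-bit op visible as a single value through constant propagation and
friends.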

  -ilia
