Replacing NIR with SPIR-V?

Dave Airlie airlied at
Sun Jan 23 20:10:28 UTC 2022

On Sun, 23 Jan 2022 at 22:58, Abel Bernabeu
<abel.bernabeu at> wrote:
>> Yes, NIR has arrays and structs and nir_deref to deal with them, but by the time you get into the back-end, all the nir_derefs are gone and you're left with load/store messages with actual addresses (either a 64-bit memory address or an index+offset pair for a bound resource).  Again, unless you're going to dump straight into LLVM, you really don't want to handle that in your back-end unless you really have to.
> That is the thing: there is already a community-maintained LLVM backend for RISC-V and I need to see how to get value from that effort. And that is a very typical scenario for new architectures. There is already an LLVM backend for a programmable device and someone asks: could you do some graphics around this without spending millions?


If you want something useful, it's going to cost millions over the
lifetime of creating it. This stuff is hard; it needs engineers who
understand it, and they usually have to be paid.

RISC-V as-is isn't going to make a good compute core for a GPU. I
don't think any of the implementations are the right design. As long
as people get sucked into thinking it might, there'll be millions
wasted. Picking SIMT vs SIMD here is just re-making AVX-512-type
decisions or recreating Intel's Larrabee effort, and nobody has made
an effective GPU in this fashion. You'd need someone to create a new
GPU with its own instruction set (maybe derived from RISC-V), but
with its own specialised compute core.
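To make the SIMT-vs-SIMD distinction above concrete, here is a small
illustrative sketch (not from any real ISA or driver; all names are
hypothetical) of why the models differ: in a SIMT programming model every
thread runs the same kernel and branches freely, while SIMD hardware must
execute *both* sides of a divergent branch under per-lane execution masks.

```python
import numpy as np

def simt_kernel_on_simd(x):
    """Per-thread source would be:  y = x*2 if x > 0 else x - 1.
    On SIMD hardware the compiler must predicate both paths."""
    mask = x > 0                   # per-lane predicate for the branch
    y = np.empty_like(x)
    y[mask] = x[mask] * 2          # "then" side runs on active lanes only
    y[~mask] = x[~mask] - 1        # "else" side runs on the remaining lanes
    return y

lanes = np.array([3, -1, 0, 5])    # four "threads" packed into one vector
print(simt_kernel_on_simd(lanes))  # [ 6 -2 -1 10]
```

Generating and tracking those masks (and handling nested divergence,
reconvergence, etc.) is exactly the compiler and hardware machinery a
plain scalar RISC-V core doesn't give you for free.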

The alternative, more tractable project is to make sw rendering
(with llvmpipe) on RISC-V more palatable, but that's really just
optimising llvmpipe and the LLVM backend and maybe finding a few
instructions to enhance things. It might be possible to use a texture
unit to speed things up, and really, for both software and hardware
rendering, memory bandwidth is a large part of the problem to solve.


More information about the mesa-dev mailing list