Replacing NIR with SPIR-V?
abel.bernabeu at esperantotech.com
Sun Jan 23 20:51:52 UTC 2022
I am glad the Mesa community is supporting the RISC-V effort with the advice
given so far.
I hear your concerns regarding performance. I am familiar with the Larrabee
case and know some of the people who worked on it. However, I am not here
to discuss the RISC-V strategy for graphics beyond the fact that a NIR
backend is planned; that would be off-topic for this list.
Join the Graphics and ML SIG at RISC-V and let us discuss it there.
First join RISC-V as an individual contributor or strategic member, then
join the Graphics and ML SIG through the Working Groups Portal. Feel
free to ask for details privately.
On Sun, Jan 23, 2022 at 9:10 PM Dave Airlie <airlied at gmail.com> wrote:
> On Sun, 23 Jan 2022 at 22:58, Abel Bernabeu
> <abel.bernabeu at esperantotech.com> wrote:
> >> Yes, NIR has arrays and structs and nir_deref to deal with them but, by the
> time you get into the back-end, all the nir_derefs are gone and you're left
> with load/store messages carrying actual addresses (either a 64-bit memory
> address or an index+offset pair for a bound resource). Again, unless you're
> going to dump straight into LLVM, you really don't want to handle that in
> your back-end unless you really have to.
> > That is the thing: there is already a community-maintained LLVM backend
> for RISC-V and I need to see how to get value from that effort. And that is
> a very typical scenario for new architectures. There is already an LLVM
> backend for a programmable device and someone asks: could you do some
> graphics around this without spending millions?
> If you want something useful, it's going to cost millions over the
> lifetime of creating it. This stuff is hard; it needs engineers who
> understand it, and they usually have to be paid.
> RISC-V as-is isn't going to make a good compute core for a GPU. I
> don't think any of the implementations are the right design. As long
> as people get sucked into thinking it might, there'll be millions
> wasted. SIMT vs SIMD is just making AVX-512 type decisions or
> recreating Intel Larrabee efforts. Nobody has made an effective GPU in
> this fashion. You'd need someone to create a new GPU with its own
> instruction set (maybe derived from RISC-V), but with its own
> specialised compute core.
> The alternative, more tractable project is to make software rendering
> (with llvmpipe) on RISC-V more palatable, but that's really just
> optimising llvmpipe and the LLVM backend and maybe finding a few
> instructions to enhance things. It might be possible to use a texture
> unit to speed things up, and for both software and hardware rendering,
> memory bandwidth is a large part of the problem to solve.
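
For anyone following along, the deref lowering described in the quoted
discussion corresponds roughly to running nir_lower_explicit_io with the
address formats a back-end wants to see. A minimal sketch, using the pass
and enum names from Mesa's NIR; the riscv_lower_io wrapper itself is
hypothetical and the choice of address formats is only an assumption:

/* Sketch: before the back-end runs, memory derefs are rewritten into
 * explicit load/store intrinsics carrying either a raw 64-bit address
 * or an index+offset pair for a bound resource. */
#include "nir.h"

static void
riscv_lower_io(nir_shader *nir)
{
   /* Global and SSBO accesses become plain 64-bit memory addresses. */
   NIR_PASS_V(nir, nir_lower_explicit_io,
              nir_var_mem_global | nir_var_mem_ssbo,
              nir_address_format_64bit_global);

   /* UBO accesses keep a (binding index, byte offset) pair, i.e. the
    * "index+offset pair for a bound resource" case mentioned above. */
   NIR_PASS_V(nir, nir_lower_explicit_io,
              nir_var_mem_ubo,
              nir_address_format_32bit_index_offset);
}

After such lowering, the back-end only ever sees load/store intrinsics with
explicit addresses, which is the point being made above.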