Replacing NIR with SPIR-V?

Abel Bernabeu abel.bernabeu at esperantotech.com
Sun Jan 23 23:24:43 UTC 2022


I can tease you all with the promise of SIMT and fixed-function units.

In the meantime you can hear me talking about the work we do at the
Graphics and ML Special Interest Group within RISC-V:
https://www.youtube.com/watch?v=kM0lsWjqOaw

I still need to make our site a bit more useful, but here is the GitHub
repo where I put our meeting minutes:
https://github.com/riscv-admin/graphics

Regards.

On Sun, Jan 23, 2022 at 11:07 PM Ian Romanick <
idr at paranormal-entertainment.com> wrote:

> On 1/23/22 12:10 PM, Dave Airlie wrote:
> > On Sun, 23 Jan 2022 at 22:58, Abel Bernabeu
> > <abel.bernabeu at esperantotech.com> wrote:
> >>>
> >>> Yes, NIR has arrays and structs and nir_deref to deal with them but, by the
> time you get into the back-end, all the nir_derefs are gone and you're left
> with load/store messages carrying actual addresses (either a 64-bit memory
> address or an index+offset pair for a bound resource).  Again, unless you're
> going to dump straight into LLVM, you really don't want to handle that in
> your back-end unless you really have to.
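
To make the point concrete: once the derefs are lowered, a back-end only
sees intrinsics such as nir_intrinsic_load_global (carrying a full 64-bit
address) or nir_intrinsic_load_ssbo (carrying a bound-resource index plus
a byte offset). A minimal sketch of that dispatch, assuming Mesa's NIR
headers; the emit_*() helpers are hypothetical stand-ins for real
instruction selection:

    #include "nir.h"

    /* Hypothetical back-end hooks standing in for real instruction
     * selection. */
    static void emit_load_from_address(nir_ssa_def *addr);
    static void emit_load_from_binding(nir_ssa_def *index,
                                       nir_ssa_def *offset);

    static void
    backend_emit_load(nir_intrinsic_instr *instr)
    {
       switch (instr->intrinsic) {
       case nir_intrinsic_load_global:
          /* src[0] holds a complete 64-bit memory address */
          emit_load_from_address(instr->src[0].ssa);
          break;
       case nir_intrinsic_load_ssbo:
          /* src[0] is the bound resource index, src[1] the byte offset */
          emit_load_from_binding(instr->src[0].ssa, instr->src[1].ssa);
          break;
       default:
          break; /* other load/store intrinsics elided */
       }
    }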
> >>
> >>
> >> That is the thing: there is already a community-maintained LLVM backend
> for RISC-V and I need to see how to get value from that effort. And that is
> a very typical scenario for new architectures: there is already an LLVM
> backend for a programmable device and someone asks, could you do some
> graphics around this without spending millions?
> >
> > No.
> >
> > If you want something useful, it's going to cost millions over the
> > lifetime of creating it. This stuff is hard; it needs engineers who
> > understand it, and they usually have to be paid.
> >
> > RISC-V as-is isn't going to make a good compute core for a GPU. I
> > don't think any of the implementations are the right design, and as
> > long as people get sucked into thinking it might, millions will be
> > wasted. Choosing SIMT vs SIMD here is just making AVX-512-type
> > decisions or recreating Intel's Larrabee effort. Nobody has made an
> > effective GPU in this fashion. You'd need someone to create a new GPU
> > with its own instruction set (maybe derived from RISC-V), but with
> > its own specialised compute core.
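
The SIMT-vs-SIMD distinction is worth spelling out: a scalar branch in a
shader becomes execution-mask bookkeeping on the hardware, where every
lane runs both sides of the branch and a mask decides which lanes commit;
plain RISC-V cores have no architectural support for this. A minimal C
model of one divergent if/else (the lane count and names are purely
illustrative):

    #include <stdint.h>

    #define LANES 8   /* illustrative warp/wavefront width */

    /* Both sides of the branch execute; divergence costs the sum of
     * the two paths instead of whichever one was taken. */
    static void
    simt_if_else(uint8_t cond_mask, const int a[LANES],
                 const int b[LANES], int x[LANES])
    {
       const uint8_t exec = 0xff;                 /* all lanes active */
       for (int lane = 0; lane < LANES; lane++)   /* "then" lanes */
          if (exec & cond_mask & (1u << lane))
             x[lane] = a[lane];
       for (int lane = 0; lane < LANES; lane++)   /* "else" lanes */
          if (exec & ~cond_mask & (1u << lane))
             x[lane] = b[lane];
    }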
> >
> > The alternate, more tractable project is to make software rendering
> > (with llvmpipe) on RISC-V more palatable, but that's really just
> > optimising llvmpipe and the LLVM backend, and maybe finding a few
> > instructions to enhance things. It might be possible to use a texture
> > unit to speed things up, and really, for both software and hardware
> > rendering, memory bandwidth is a lot of the problem to solve.
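
To give a sense of what a texture unit would buy llvmpipe: even the
simplest bilinear fetch is four dependent memory reads plus interpolation
arithmetic per sample, per channel, and software rendering pays for all
of it in instructions and bandwidth. A single-channel sketch, assuming
clamp-to-edge addressing (purely illustrative):

    #include <stdint.h>

    /* One bilinear sample from an 8-bit single-channel texture.  A
     * texture unit does this (plus addressing modes, format decode
     * and caching) in fixed-function hardware. */
    static uint8_t
    bilinear_sample(const uint8_t *tex, int w, int h, float u, float v)
    {
       float x = u * (float)(w - 1), y = v * (float)(h - 1);
       int x0 = (int)x, y0 = (int)y;
       int x1 = x0 + 1 < w ? x0 + 1 : x0;   /* clamp-to-edge */
       int y1 = y0 + 1 < h ? y0 + 1 : y0;
       float fx = x - (float)x0, fy = y - (float)y0;
       float t00 = tex[y0 * w + x0], t10 = tex[y0 * w + x1];
       float t01 = tex[y1 * w + x0], t11 = tex[y1 * w + x1];
       float top = t00 + (t10 - t00) * fx;
       float bot = t01 + (t11 - t01) * fx;
       return (uint8_t)(top + (bot - top) * fy + 0.5f);
    }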
>
> For the love of all that is good in the world, no! :) That was my
> original master's project that I gave up on.
>
> Executive summary: There's a reason GPUs have huge piles of
> fixed-function blocks.  It's the only way to get enough power
> efficiency, and power consumption (and the heat it generates) is *the*
> problem.
>
> > Dave.
>
>