[Mesa-dev] R600g LLVM shader backend
jfonseca at vmware.com
Mon Dec 12 07:05:26 PST 2011
----- Original Message -----
> I have just pushed a branch containing an LLVM shader backend for
> r600g to my
> personal git repo:
> http://cgit.freedesktop.org/~tstellar/mesa/ (branch: r600g-llvm-shader)
This is pretty cool stuff. The fact that you have a similar passing rate to the existing compiler makes it quite remarkable. I was aware of several closed- and open-source projects to develop GPU backends for LLVM, and LunarG made a middle end, but this is the first working OpenGL stack based on an LLVM GPU backend that I'm aware of.
> There are three main components to this branch:
> 1. A TGSI->LLVM converter (commit
> The goal of this converter is to give all gallium drivers a
> way to convert from TGSI to LLVM. The interface is still evolving,
> and I'm interested in getting some feedback for it.
The interface looks like a good start, but I'd like it to be even more overridable. I don't think there's even anything GPU-specific there -- I also had some plans to do TGSI translation in llvmpipe in two stages: first TGSI -> high-level LLVM IR with custom gallivm/tgsi intrinsics, then high-level IR -> low-level LLVM IR with x86 SIMD intrinsics, to allow optimizations and other decisions to happen at a higher level, before starting to emit lower-level code.
So I'd like us to have a flexible hierarchy of TGSI translators that can be shared for GPU/CPUs alike.
BTW, I'd prefer that all reusable Gallium+LLVM code (be it meant for GPUs or CPUs) live in src/gallium/auxiliary/gallivm , as that makes code maintenance and build integration simpler. So tgsi_llvm.c should be moved into gallivm. Also, beware that the ability to build core gallium/mesa without LLVM must be preserved. (Whether particular drivers take a hard dependency on LLVM is of course at the discretion of their developers.)
> 2. Changes to gallivm so that code can be shared between it and
> the TGSI->LLVM converter. These changes are attached, please review.
I'll review them separately.
> 3. An LLVM backend for r600g.
> This backend is mostly based on AMD's AMDIL LLVM backend for OpenCL
> with a
> few changes added for emitting machine code. Currently, it passes
> 99% of the piglit tests that pass with the current r600g shader
> backend. Most of the failures are due to some unimplemented texture
> instructions. Indirect addressing is also missing from the LLVM
> backend, and it relies on the current r600g shader code to do this.
There's a 30K line file src/gallium/drivers/radeon/macrodb_gen.h . Is this generated code?
Also, maybe it would make sense to have amdil backend distributed separately from mesa, as it looks like a component that has other consumers beyond mesa/gallium/r600g, right?
> In reality, the LLVM backend does not emit actual machine code,
> but rather a byte stream that is converted to struct r600_bytecode.
> The final transformations are done by r600_asm.c, just like in the
> shader backend. The LLVM backend is not optimized for VLIW, and it
> emits one instruction per group. The optimizations in r600_asm.c are
> able to do some instruction packing, but the resulting code is not
> as good as the current backend.
Why is the resulting code worse: due to limitations in the assembler, in the AMDIL LLVM backend, or in LLVM itself?
> The main motivation for this LLVM backend is to help bring compute
> support to r600g by making it easier to support different compiler
> frontends. I don't have a concrete plan for integrating this into
> mainline Mesa yet, but I don't expect it to be in the next release.
> I would really like to make it compatible with LLVM 3.0 before it is
> merged (it only works with LLVM 2.9 now), but if compute support
> lands quickly, I might be tempted to push the 2.9 version into the
> master branch.
Which state trackers do you envision using this? (e.g., are you targeting clover? Do you hope this will become the default compiler backend for Mesa? Or is Mesa/Gallium just a way to test the AMDIL backend?)
I'm also interested in your general opinion on using LLVM for GPUs.