[Mesa-dev] R600g LLVM shader backend

Tom Stellard thomas.stellard at amd.com
Mon Dec 12 11:21:38 PST 2011


On Mon, 2011-12-12 at 07:05 -0800, Jose Fonseca wrote:
> ----- Original Message -----
> > Hi,
> > 
> > I have just pushed a branch containing an LLVM shader backend for
> > r600g to my
> > personal git repo:
> > 
> > http://cgit.freedesktop.org/~tstellar/mesa/ r600g-llvm-shader
> 
> Hi Tom,
> 
> This is pretty cool stuff.  The fact that you have a similar passing rate to the existing compiler makes it quite remarkable.  I was aware of several closed/open-source projects to develop GPU backends for LLVM, and LunarG made a middle end, but this is the first working OpenGL stack based on an LLVM GPU backend that I'm aware of. 
> 
> > There are three main components to this branch:
> > 
> > 1. A TGSI->LLVM converter (commit
> > ec9bb644cf7dde055c6c3ee5b8890a2d367337e5)
> >
> > The goal of this converter is to give all gallium drivers a
> > convenient
> > way to convert from TGSI to LLVM.  The interface is still evolving,
> > and I'm interested in getting some feedback for it.
> 
> The interface looks like a good start, but I'd like it to be even more overridable.  I don't even think there's anything GPU-specific there -- I also had some plans to do TGSI translation in llvmpipe in two stages: TGSI -> high-level LLVM IR w/ custom gallivm/tgsi intrinsics -> low-level LLVM IR w/ x86 SIMD intrinsics, to allow optimizations and other decisions to happen at a higher level, before starting to emit lower-level code.
> 
What else would you like to see overridable?

I think it might be nice to map TGSI opcodes to C functions rather than
intrinsic strings.


> So I'd like us to have a flexible hierarchy of TGSI translators that can be shared for GPU/CPUs alike.
> 
> BTW, I'd prefer that all reusable Gallium+LLVM code (be it meant for GPU or CPU) lives in src/gallium/auxiliary/gallivm , as it makes code maintenance and build integration simpler.  So tgsi_llvm.c should be moved into gallivm.  Also, beware that the ability to build core gallium/mesa without LLVM must be preserved. (Whether particular drivers have hard dependencies on LLVM is of course at the discretion of the driver developers, though.) 
> 
> > 2. Changes to gallivm so that code can be shared between it and
> > the TGSI->LLVM converter.  These changes are attached, please review.
> 
> I'll review them separately.
> 
> > 3. An LLVM backend for r600g.
> > 
> > This backend is mostly based on AMD's AMDIL LLVM backend for OpenCL
> > with a
> > few changes added for emitting machine code.  Currently, it passes
> > about
> > 99% of the piglit tests that pass with the current r600g shader
> > backend.
> > Most of the failures are due to some unimplemented texture
> > instructions.
> > Indirect addressing is also missing from the LLVM backend, and it
> > relies
> > on the current r600g shader code to do this.
> 
> There's a 30K line file src/gallium/drivers/radeon/macrodb_gen.h . Is this generated code?

I'm pretty sure this can be removed.  I think it's only useful for
generating AMDIL assembly, but I need to examine it more closely to make
sure.

> 
> Also, maybe it would make sense to have amdil backend distributed separately from mesa, as it looks like a component that has other consumers beyond mesa/gallium/r600g, right?
> 

Eventually, the AMDIL backend will be distributed as a part of llvm [1],
but we still have a lot of work to do to make that happen.  The r600g
backend is basically a subclass of the AMDIL backend, so if the AMDIL
backend is in LLVM the r600g backend would probably have to be too.

The AMDIL code is most likely to stay in Mesa until it's upstream in
LLVM.  I think it does make sense to explore having some sort of
libAMDIL, but whether or not that happens depends a lot on what other
people want to do with the AMDIL backend.


> > In reality, the LLVM backend does not emit actual machine code,
> > but rather a byte stream that is converted to struct r600_bytecode.
> > The final transformations are done by r600_asm.c, just like in the
> > current
> > shader backend.  The LLVM backend is not optimized for VLIW, and it
> > only
> > emits one instruction per group.  The optimizations in r600_asm.c are
> > able to do some instruction packing, but the resulting code is not
> > yet
> > as good as the current backend.
> 
> Why is the result code worse: due to limitations in the assembler, in the AMDIL LLVM backend, or in LLVM itself?
> 

I guess it's due to limitations in the assembler.  When the code is
translated directly from TGSI, the vector instructions already fit the
VLIW architecture well, so the lack of a proper assembler is not very
noticeable.  The r600g LLVM backend, however, assumes non-VLIW hardware
and emits scalar code, which makes the lack of a good assembler really
noticeable.


> > The main motivation for this LLVM backend is to help bring
> > compute/OpenCL
> > support to r600g by making it easier to support different compiler
> > frontends.  I don't have a concrete plan for integrating this into
> > mainline Mesa yet, but I don't expect it to be in the next release.
> > I would really like to make it compatible with LLVM 3.0 before it
> > gets
> > merged (it only works with LLVM 2.9 now), but if compute support
> > evolves
> > quickly, I might be tempted to push the 2.9 version into the master
> > branch.
> 
> What are the state trackers that you envision this will use? (e.g., Are you targeting clover? do you hope this will be the default compiler backend for Mesa? Or is Mesa/Gallium just a way to test AMDIL backend?)
> 

For r600g we are targeting clover, but future chip generations will use
LLVM for all state trackers.

> I'm also interested in your general opinion on using LLVM for GPU.
> 


The thing that was most difficult for me to model with LLVM was
preloading values (e.g. shader inputs) into registers.  I had to try a
few different ways of implementing this before I got it to work.  There
are also some features missing from tablegen (the language for defining
hw instructions and registers), such as support for instructions whose
source operands can be either a register or a float immediate, which
made things a little difficult.  Otherwise, the implementation went
pretty smoothly.  I think LLVM is flexible enough to be used for GPUs,
and it is certainly a much better starting point than trying to write a
compiler from scratch.

-Tom

[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-December/046136.html

> Jose
> _______________________________________________
> mesa-dev mailing list
> mesa-dev at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/mesa-dev
> 
