[Mesa-dev] Proposal for a long-term shader compiler (and IR) architecture
johnk at lunarg.com
Mon Oct 18 12:21:48 PDT 2010
Yes, nicely put, Keith.

If the effort to make this work is not worth the benefit of standardizing
and picking up the LLVM optimizations, finding out sooner is better than
later. Hence, any specific reasons why that would be the case are most
welcome.
Jerome has a good point about the final register allocation and scheduling
for a specific GPU. Note this is optional; LLVM can be used just as the
middle end, with a well-defined IR above and below it, and the final
back-end targeting can be done by a target-specific translator that consumes
the bottom IR and focuses on target-specific optimizations.
Showing that is also part of proving this will work. But the idea is that
every back-end should not have to re-invent the many optimizations in LLVM
from which all back-ends would benefit, even if LLVM doesn't do the final
register allocation and scheduling.
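To make the two-level-IR idea concrete, here is a purely illustrative sketch (mine, not taken from the attached proposal; the function names are hypothetical) of how a dot product might appear at each level. In a top IR close to the source language the operation stays in vector form, preserving the dot product Jerome mentions below; a bottom IR aimed at a scalar target could carry the fully scalarized equivalent:

```llvm
; Hypothetical top-IR form: the multiply stays a vector operation, so a
; back-end for vector/VLIW hardware can still recognize the dot product.
define float @dot3(<3 x float> %a, <3 x float> %b) {
entry:
  %m  = fmul <3 x float> %a, %b
  %m0 = extractelement <3 x float> %m, i32 0
  %m1 = extractelement <3 x float> %m, i32 1
  %m2 = extractelement <3 x float> %m, i32 2
  %s0 = fadd float %m0, %m1
  %s1 = fadd float %s0, %m2
  ret float %s1
}

; Hypothetical bottom-IR form for a scalar target: the same computation
; fully scalarized, ready for target-specific scheduling and packing.
define float @dot3_scalar(float %a0, float %a1, float %a2,
                          float %b0, float %b1, float %b2) {
entry:
  %p0 = fmul float %a0, %b0
  %p1 = fmul float %a1, %b1
  %p2 = fmul float %a2, %b2
  %s0 = fadd float %p0, %p1
  %s1 = fadd float %s0, %p2
  ret float %s1
}
```

Either form is valid LLVM IR; the open question in this thread is whether enough target information (packing restrictions, register-file structure) can be expressed around the bottom level.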
On Mon, Oct 18, 2010 at 11:52 AM, Keith Whitwell
<keith.whitwell at gmail.com> wrote:
> On Mon, Oct 18, 2010 at 9:18 AM, Jerome Glisse <j.glisse at gmail.com> wrote:
> > On Fri, Oct 15, 2010 at 7:44 PM, John Kessenich <johnk at lunarg.com> wrote:
> >> Hi,
> >> LunarG has decided to work on an open source, long-term,
> >> and modular shader and kernel compiler stack. Attached is our high-level
> >> proposal for this compiler architecture (LunarGLASS). We would like to
> >> solicit feedback from the open source community on doing this.
> >> I have read several posts here where it seems the time has come for
> >> something like this, and in that spirit, I hope this is consistent with the
> >> desire and direction many contributors to this list have already alluded to.
> >> Perhaps the biggest point of the proposal is to standardize on LLVM as the
> >> intermediate representation. This is actually done at two levels within the
> >> proposal: one at a high-level IR close to the source language and one at a
> >> low-level IR close to the target architecture. The full picture is in the
> >> attached document.
> >> Based on feedback to this proposal, our next step is to more precisely
> >> define the two forms of LLVM IR.
> >> Please let me know if you have any trouble reading the attached, or any
> >> questions, or any feedback regarding the proposal.
> >> Thanks,
> >> JohnK
> > Just a quick reply (I won't have carefully read through this proposition for
> > a couple of weeks): last time I checked, LLVM didn't seem to fit the bill for
> > GPUs. Newer GPUs can be seen as close to scalar, but not completely; there
> > are restrictions on instruction packing and on the amount of data a
> > computation unit of the GPU can access per cycle. Register allocation is
> > also different from a normal CPU: you don't want to do register spilling on
> > a GPU. So from my POV, instruction scheduling & packing and register
> > allocation are interlaced processes (where you store a variable impacts
> > instruction packing). Also, on newer GPUs it makes sense to use a mixed
> > scalar/vector representation to preserve things like dot products. Lastly,
> > loops, jumps, and functions have unusual restrictions unlike any CPU
> > (though I don't have broad CPU knowledge).
> > Bottom line is I don't think LLVM is anywhere near what would help us.
> I think this is the big question mark with this proposal -- basically
> can it be done?
> I believe John feels the answer to that is yes, it can, with some
> work. From my point of view, I think I need to actually see it - but
> it sounds like this is what John is saying they're going to do.
> At a high level, LLVM is very compelling - there's a lot of work going
> on for it, a lot of people enhancing it, etc. Now, if it's possible
> to leverage that for shader compilation, I think that's very attractive.
> So basically I think it's necessary to figure out what would
> constitute evidence that LLVM is capable of doing the job, and make
> getting to that point a priority.
> If it can't be done, we'll find out quickly; if it can, then we can
> stop debating whether or not it's possible.