Yes, nicely put, Keith.<div><br></div><div>If the effort to make this work is not worth the benefit of standardizing and picking up the LLVM optimizations, finding that out sooner is better than later. Hence, any specific reasons why that might be the case are most appreciated.</div>
<div><br></div><div>Jerome has a good point about the final register allocation and scheduling for a specific GPU. Note that this is optional; LLVM can be used just as a middle end, with a well-defined IR above and below it, and the final back-end targeting can be done by handing the bottom IR to a target-specific translator that focuses on target-specific optimizations. Showing that is also part of proving this will work. But the idea is that back-ends do not each have to re-invent the many optimizations in LLVM that all of them would benefit from, even if LLVM doesn't do the final targeting.</div>
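To make the layering concrete, here is a toy sketch of that split: a generic middle-end pass that every back-end shares, followed by a per-target translator that consumes the bottom IR. This is purely illustrative Python; none of the names below are actual LunarGLASS or LLVM API, and the IR is a stand-in tuple format, not real LLVM IR.

```python
# Toy sketch of the two-level flow: shared middle end + per-target
# translator. All names are hypothetical; IR ops are plain tuples.

def fold_constants(ir):
    """A generic middle-end optimization every back-end benefits from:
    fold ('add', const, const) into a single ('const', value) node."""
    out = []
    for op in ir:
        if op[0] == 'add' and isinstance(op[1], float) and isinstance(op[2], float):
            out.append(('const', op[1] + op[2]))
        else:
            out.append(op)
    return out

def translate(ir, target):
    """Target-specific translator consuming the bottom IR.
    A vector GPU keeps 'dot4' as one native instruction; a scalar
    target expands it into per-component muls plus accumulating adds."""
    out = []
    for op in ir:
        if op[0] == 'dot4' and target == 'scalar':
            a, b = op[1], op[2]
            # Expand dot4 into four component multiplies...
            out.extend(('mul', f'{a}.{c}', f'{b}.{c}') for c in 'xyzw')
            # ...and three accumulating adds to sum the products.
            out.extend([('addacc',)] * 3)
        else:
            out.append(op)
    return out

hir = [('add', 1.0, 2.0), ('dot4', 'r0', 'r1')]
lir = fold_constants(hir)        # shared middle-end work, done once
vec = translate(lir, 'vector')   # dot4 survives for a vector GPU
scl = translate(lir, 'scalar')   # dot4 expanded for a scalar GPU
print(lir)
print(vec)
print(scl)
```

The point of the sketch is only the shape: the constant fold happens once, above the split, while the dot-product decision Jerome raises stays below it, in the translator that knows the target.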
<div><br></div><div>Thanks,</div><div>JohnK</div><div><br><br><div class="gmail_quote">On Mon, Oct 18, 2010 at 11:52 AM, Keith Whitwell <span dir="ltr"><<a href="mailto:keith.whitwell@gmail.com">keith.whitwell@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div><div></div><div class="h5">On Mon, Oct 18, 2010 at 9:18 AM, Jerome Glisse <<a href="mailto:j.glisse@gmail.com">j.glisse@gmail.com</a>> wrote:<br>
> On Fri, Oct 15, 2010 at 7:44 PM, John Kessenich <<a href="mailto:johnk@lunarg.com">johnk@lunarg.com</a>> wrote:<br>
>> Hi,<br>
>> LunarG has decided to work on an open source, long-term, highly-functional,<br>
>> and modular shader and kernel compiler stack. Attached is our high-level<br>
>> proposal for this compiler architecture (LunarGLASS). We would like to<br>
>> solicit feedback from the open source community on doing this.<br>
>> I have read several posts here where it seems the time has come for<br>
>> something like this, and in that spirit, I hope this is consistent with the<br>
>> desire and direction many contributors to this list have already alluded to.<br>
>> Perhaps the biggest point of the proposal is to standardize on LLVM as an<br>
>> intermediate representation. This is actually done at two levels within the<br>
>> proposal; one at a high-level IR close to the source language and one at a<br>
>> low-level IR close to the target architecture. The full picture is in the<br>
>> attached document.<br>
>> Based on feedback to this proposal, our next step is to more precisely<br>
>> define the two forms of LLVM IR.<br>
>> Please let me know if you have any trouble reading the attached, or any<br>
>> questions, or any feedback regarding the proposal.<br>
>> Thanks,<br>
>> JohnK<br>
><br>
><br>
> Just a quick reply (I won't have carefully read through this proposition for a<br>
> couple of weeks). Last time I checked, LLVM didn't seem to fit the bill for GPUs.<br>
> Newer GPUs can be seen as close to scalar, but not completely: there are<br>
> restrictions on instruction packing and on the amount of data a computation<br>
> unit of the GPU can access per cycle. Also, register allocation is different<br>
> from a normal CPU; you don't want register spilling on a GPU. So from<br>
> my POV, instruction scheduling & packing and register allocation are<br>
> interlaced processes (where you store a variable impacts instruction packing).<br>
> Also, on newer GPUs it makes sense to use a mixed scalar/vector representation<br>
> to preserve things like dot products. Lastly, loops, jumps, and functions have<br>
> unusual restrictions unlike any CPU (though I don't have broad CPU knowledge).<br>
><br>
> Bottom line is, I don't think LLVM is anywhere near what would help us.<br>
<br>
<br>
</div></div>I think this is the big question mark with this proposal -- basically<br>
can it be done?<br>
<br>
I believe John feels the answer to that is yes, it can, with some<br>
work. From my point of view, I think I need to actually see it - but<br>
it sounds like this is what John is saying they're going to do.<br>
<br>
At a high level, LLVM is very compelling - there's a lot of work going<br>
on for it, a lot of people enhancing it, etc. Now, if it's possible<br>
to leverage that for shader compilation, I think that's very<br>
interesting.<br>
<br>
So basically I think it's necessary to figure out what would<br>
constitute evidence that LLVM is capable of doing the job, and make<br>
getting to that point a priority.<br>
<br>
If it can't be done, we'll find out quickly; if it can, then we can<br>
stop debating whether or not it's possible.<br>
<font color="#888888"><br>
Keith<br>
</font></blockquote></div><br></div>