[Mesa-dev] [RFC PATCH 00/16] A new IR for Mesa

Connor Abbott cwabbott0 at gmail.com
Sat Aug 23 08:11:53 PDT 2014


On Sat, Aug 23, 2014 at 5:14 AM, Christian König
<deathsimple at vodafone.de> wrote:
> Am 22.08.2014 um 18:01 schrieb Connor Abbott:
>
>> On Fri, Aug 22, 2014 at 11:27 AM, Christian König
>> <deathsimple at vodafone.de> wrote:
>>>
>>> Am 22.08.2014 um 17:13 schrieb Connor Abbott:
>>>
>>>> On Thu, Aug 21, 2014 at 11:08 PM, Dave Airlie <airlied at gmail.com> wrote:
>>>>>
>>>>> On 22 August 2014 12:46, Jason Ekstrand <jason at jlekstrand.net> wrote:
>>>>>>
>>>>>> On Thu, Aug 21, 2014 at 7:36 PM, Dave Airlie <airlied at gmail.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> On 21 August 2014 19:10, Henri Verbeet <hverbeet at gmail.com> wrote:
>>>>>>>>
>>>>>>>> On 21 August 2014 04:56, Michel Dänzer <michel at daenzer.net> wrote:
>>>>>>>>>
>>>>>>>>> On 21.08.2014 04:29, Henri Verbeet wrote:
>>>>>>>>>>
>>>>>>>>>> For whatever it's worth, I have been avoiding radeonsi in part
>>>>>>>>>> because
>>>>>>>>>> of the LLVM dependency. Some of the other issues already mentioned
>>>>>>>>>> aside, I also think it makes it just painful to do bisects over
>>>>>>>>>> moderate/longer periods of time.
>>>>>>>>>
>>>>>>>>> More painful, sure, but not too bad IME. In particular, if you know
>>>>>>>>> the
>>>>>>>>> regression is in Mesa, you can always use a stable release of LLVM
>>>>>>>>> for
>>>>>>>>> the bisect. You only need to change the --with-llvm-prefix=
>>>>>>>>> parameter
>>>>>>>>> to
>>>>>>>>> Mesa's configure for that. Of course, it could still be mildly
>>>>>>>>> painful
>>>>>>>>> if you need to go so far back that the current stable LLVM release
>>>>>>>>> wasn't supported yet. But how often does that happen? Very rarely
>>>>>>>>> for
>>>>>>>>> me.
>>>>>>>>>
>>>>>>>> Sure, it's not impossible, but is that really the kind of process
>>>>>>>> you
>>>>>>>> want users to go through when bisecting a regression? Perhaps throw
>>>>>>>> in
>>>>>>>> building 32-bit versions of both Mesa and LLVM on 64-bit as well if
>>>>>>>> they want to run 32-bit applications.
>>>>>>>>
>>>>>>>>> Without LLVM, I'm not sure there would be a driver you could avoid.
>>>>>>>>> :)
>>>>>>>>>
>>>>>>>> R600g didn't really exist either, and that one seems to have worked
>>>>>>>> out fine. I think in a large part because of work done by Jerome and
>>>>>>>> Dave in the early days, but regardless. From what I've seen from SI,
>>>>>>>> I
>>>>>>>> don't think radeonsi needed to be a separate driver to start with,
>>>>>>>> and
>>>>>>>> while its ISA is certainly different from R600-Cayman, it doesn't
>>>>>>>> particularly strike me as much harder to work with.
>>>>>>>>
>>>>>>>> Back to the more immediate topic though, I think that on
>>>>>>>> occasion the discussion is framed as "Is there any reason using LLVM
>>>>>>>> IR wouldn't work?", while it would perhaps be more appropriate to
>>>>>>>> think of as "Would using LLVM IR provide enough advantages to
>>>>>>>> justify
>>>>>>>> adding a LLVM dependency to core Mesa?".
>>>>>>>
>>>>>>> "Could we use an LLVM-compatible IR?" is also a question I'd like
>>>>>>> to see
>>>>>>> answered.
>>>>>>
>>>>>>
>>>>>> What do you mean by llvm compatible?  Do you mean forking their IR
>>>>>> inside
>>>>>> mesa or just something that's easy to translate back and forth?
>>>>>>
>>>>> Importing/forking the LLVM IR code with a different symbol set, and
>>>>> trying not to intentionally
>>>>> be incompatible with their LLVM.
>>>>
>>>> That sounds like a huge amount of work, possibly even more work than
>>>> going it on our own because the LLVM project moves really quickly and
>>>> we'd have to import all of the changes. Also, it seems pretty ugly and
>>>> I'm sure distro maintainers would just looooooove a solution like that
>>>> /s. Just look at the situation with Chromium and Fedora now (or at
>>>> least last I checked).
>>>
>>>
>>> Actually the LLVM IR is considered stable and as Dave explained we
>>> wouldn't
>>> depend on LLVM, but rather just use the same concept for the IR.
>>
>> Except the optimization passes aren't, and those are what we would
>> actually use the IR for...
>>
>>> This actually sounds like a pretty good idea to me. And I would say we
>>> should just continue moving the GLSL IR towards SSA and all the
>>> specialized
>>> GL opcodes into something similar to LLVM intrinsics.
>>
>> So, in other words, using NIR ;) NIR already has intrinsics, and while
>> it does have some extra things (swizzles, writemasks, modifiers) those
>> are only there to make things a little easier on the drivers that want
>> to use them and absolutely aren't necessary. As of now, we already
>> don't care about writemasks in the optimization passes because they
>> don't matter with SSA, and we can avoid caring about the others as
>> well if it makes optimizations easier.
>
>
> And that's exactly what you don't want in an IR. The IR should cover only a
> single form of representation, no optionals or other stuff that driver can
> use when they want to. That's stuff for the driver internal representation.

No, not when the vast majority of driver internal representations will
want them, when for some drivers (e.g. i965 vec4) it would be difficult
to handle them on their own, and when we control exactly who
produces and consumes the IR.
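[Editor's note: a minimal sketch of the point made above about writemasks
not mattering in SSA. This is hypothetical illustration, not actual NIR
code or API; the function and names are invented for the example.]

```python
# Hypothetical sketch: why per-component writemasks become unnecessary
# once a program is in SSA form.
#
# With mutable registers, a masked write only updates some components,
# so every pass must track which components each write touches:
#
#     MOV v0.xy, (a, b)    ; z and w keep their old values
#
# In SSA every value is defined exactly once, so the same operation is
# expressed as a brand-new full vector that copies the untouched
# components from the previous definition. No writemask survives.

def masked_write_ssa(old_vec, mask, new_components):
    """Lower a masked write to a full SSA definition.

    old_vec        -- previous SSA definition, e.g. (x, y, z, w)
    mask           -- set of component indices being written
    new_components -- dict mapping component index -> new value
    """
    return tuple(
        new_components[i] if i in mask else old_vec[i]
        for i in range(len(old_vec))
    )

v0 = (1.0, 2.0, 3.0, 4.0)
# "MOV v0.xy, (9.0, 8.0)" becomes a new full definition v1:
v1 = masked_write_ssa(v0, {0, 1}, {0: 9.0, 1: 8.0})
print(v1)  # (9.0, 8.0, 3.0, 4.0)
```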

>
> Christian.
>
>
>>
>> Connor
>>
>>> Christian.
>>>
>>>
>>>> Connor
>>>>
>>>>> Dave.
>>>>> _______________________________________________
>>>>> mesa-dev mailing list
>>>>> mesa-dev at lists.freedesktop.org
>>>>> http://lists.freedesktop.org/mailman/listinfo/mesa-dev
>>>>
>>>
>>>
>

