[Mesa-dev] [PATCH 2/2] mesa: add hard limits for the number of varyings and uniforms for the linker

Kenneth Graunke kenneth at whitecape.org
Wed Nov 23 09:23:13 PST 2011


On 11/23/2011 05:42 AM, Marek Olšák wrote:
> On Wed, Nov 23, 2011 at 6:59 AM, Kenneth Graunke <kenneth at whitecape.org> wrote:
>> On 11/22/2011 07:27 PM, Marek Olšák wrote:
>>> On Tue, Nov 22, 2011 at 11:11 PM, Ian Romanick <idr at freedesktop.org> wrote:
>>>> All of this discussion is largely moot.  The failure that you're so angry
>>>> about was caused by a bug in the check, not by the check itself. That bug
>>>> has already been fixed (commit 151867b).
>>>>
>>>> The exact same check was previously performed in st_glsl_to_tgsi (or
>>>> ir_to_mesa), and the exact same set of shaders would have been rejected.
>>>>  The check is now done in the linker instead.
>>>
>>> Actually, the bug only caught my attention and then I realized what
>>> is actually happening in the linker. I probably wouldn't have
>>> noticed otherwise, because I no longer do any 3D on my laptop with
>>> r500. I have to admit, I didn't know the checks were so... well,
>>> "not ready for a release", to say the least, and that's true
>>> regardless of the bug.
>>
>> Well.  Whether they're "ready for a release" or not, the truth is that
>> we've been shipping them in releases for quite some time.  So we haven't
>> regressed anything; it already didn't work.
>>
>> I am fully in support of doing additional optimizations in the compiler
>> to reduce resource usage and make more applications run on older
>> hardware.  Splitting uniform arrays and pruning unused sections could
>> definitely happen in the compiler, which as an added bonus, makes the
>> optimization available on all backends.  It would also eliminate unused
>> resources prior to the link-time checks.
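>>
>> For illustration, a minimal hypothetical shader (the names and
>> indices are made up) where splitting a uniform array would help:
>>
>>   uniform vec4 material[32];  /* counted as 32*4 = 128 components */
>>
>>   void main()
>>   {
>>       /* only elements 3 and 17 are actually read */
>>       gl_FragColor = material[3] + material[17];
>>   }
>>
>> Splitting the array into independent elements and pruning the unused
>> ones would leave 2*4 = 8 components for the link-time check.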
>>
>> Also, we desperately need to pack varyings.  Currently, we don't pack
>> them at all, which is both severely non-spec-compliant and very likely
>> to break real world applications.  We can all agree on this, so let's
>> start here.  Varyings are precious resources even on modern GPUs.
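>>
>> For example, in this made-up fragment shader each small varying
>> occupies a full vec4 slot when nothing is packed:
>>
>>   varying float fog;    /* one full vec4 slot */
>>   varying vec2 coord;   /* one full vec4 slot */
>>   varying float shade;  /* one full vec4 slot */
>>
>>   void main()
>>   {
>>       gl_FragColor = vec4(coord, fog * shade, 1.0);
>>   }
>>
>> That is three of the (often only eight or so) vec4 slots available
>> on D3D9-level hardware for data that would fit, packed, into one.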
>>
>> However, I cannot in good conscience just disable resource checking
>> altogether (which is what I believe you're proposing).  There are so
>> many cases where applications could _actually need_ more resources than
>> the hardware supports, and in that case, giving an error is the only
>> sensible option.  What else would you do?  Crash, render nothing,
>> replace those uniforms/varyings with zero and render garbage?  Those are
>> the very behaviors you're trying to avoid.
>>
>> By giving an error, the application at least has the -chance- to try and
>> drop down from its "High Quality" shaders to Medium/Low quality settings
>> for older cards.  I know many apps don't, but some do, so we should give
>> them the chance.
>>
>> There has to be resource checking somewhere.  Perhaps the backend is a
>> more suitable place than the linker; I don't know.  (I can see wanting
>> to move optimizations before checks or checks after optimizations...)
>> But we can't just remove checking entirely; that's just broken.
>>
>>> Let's analyze the situation a bit, open-minded.
>>>
>>> The checks can be enabled for OpenGL ES 2.0 with no problem; we're
>>> unlikely to get a failure there.
>>>
>>> They can also be enabled for D3D10-level and later hardware, because
>>> its limits are high enough that the checks are unlikely to fail. The
>>> problem is with D3D9-level hardware (and probably with the vmware
>>> driver too).
>>>
>>> We also have to consider that a lot of applications are now
>>> developed on D3D10-level or later hardware, and even though the
>>> expected hardware requirements for such an app are meant to be low,
>>> there can be, say, programming mistakes which raise the hardware
>>> requirements quite a lot. The app developer has no way to know about
>>> it, because it just works on their machine. For example, some
>>> compositing managers had such mistakes, and there's been a lot of
>>> whining about that on Phoronix.
>>
>> And returning a decent error makes the mistake abundantly clear to the
>> application developer: "I used...wait, 500 of those?  That can't be
>> right."  To use your example of compositors (typically open source), the
>> compositor may not run for some people, but the application developer
>> can fix it.
>>
>> In contrast, incorrect rendering provides _no_ useful information and
>> will likely make said application developer think it's a driver bug,
>> blame Mesa, and not even bother to investigate further.
>>
>>> We should also take into account that hardly any app has a fallback
>>> if a shader program fails to link. VDrift has one, but that's rather
>>> an exception to the rule (VDrift is an interesting example though;
>>> it falls back to fixed-function because Mesa is too strict about
>>> obeying the specs, just that, really). Most apps just abort, crash,
>>> or completely ignore that linking failed and render garbage or
>>> nothing.
>>> Wine, our biggest user of Mesa, can't fail. D3D shaders must compile
>>> successfully or it's game over.
>>>
>>> Although the possibility of a linker failure is a nice feature in
>>> theory, the reality is nobody wants it, because it's the primary cause
>>> of apps aborting themselves or just rendering nothing (and, of course,
>>> everybody blames Mesa, or worse: Linux).
>>>
>>> There is quite a large possibility that if those linker checks were
>>> disabled, more apps would work, especially those where the limits
>>> are exceeded by only a little and the excess is eliminated by the
>>> driver. Sure, some apps would still be broken or render garbage,
>>> but it's either this or nothing, don't you think?
>>>
>>> Marek
>>
>> So, again, if the interest is in making more apps succeed, we should
>> start with varying packing.  That's useful all around, and I doubt
>> anyone can dispute that it's necessary.
> 
> No, that's not the problem. Varying packing is indeed important, but
> it's far less important than the problem this discussion is all about.
> We should start with breaking arrays into their elements, where
> possible, when doing those checks. Consider this shader:
> 
> varying vec4 array[5];
> ...
> gl_TexCoord[7] = ...; /* adds 8*4 varying components (index 7 implies array size 8) */
> array[4] = ...; /* adds 5*4 varying components */
> 
> /* linker stats: 13*4 varying components used --> FAIL */
> /* r300g stats: 2*4 components used -> PASS */
> 
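> After splitting, the driver effectively sees something like this
> hypothetical rewritten interface:
> 
>   varying vec4 texcoord_7;  /* the only gl_TexCoord element written */
>   varying vec4 array_4;     /* the only array element written */
> 
> which is where the 2*4 figure above comes from.
> 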
> It's the exact same problem with uniforms. The thing is, r300g
> already has these optimizations implemented for both varyings and
> uniforms. Disabling not just one but two optimizations at the same
> time, just because they should be done in the GLSL compiler and not
> in the driver, seems quite unfriendly to me; almost like you didn't
> want me to enable them. I would probably implement them in the GLSL
> compiler, say, next year (I don't and can't have deadlines), but
> there is no reason for me to do so, because I already have them.
> 
> Marek

Well, as I said, perhaps the checks should be done in the backend:
either moving the optimizations before the checks (doing them in the
GLSL compiler) or moving the checks after the optimizations (doing
them in the backend) should solve the problem, no?

I think doing the optimizations in the compiler is the ideal solution,
but perhaps doing the checks in the backend is easier in the short
term.

