[Mesa-dev] [PATCH 2/2] mesa: add hard limits for the number of varyings and uniforms for the linker

Kenneth Graunke kenneth at whitecape.org
Tue Nov 22 21:59:36 PST 2011


On 11/22/2011 07:27 PM, Marek Olšák wrote:
> On Tue, Nov 22, 2011 at 11:11 PM, Ian Romanick <idr at freedesktop.org> wrote:
>> All of this discussion is largely moot.  The failure that you're so angry
>> about was caused by a bug in the check, not by the check itself. That bug
>> has already been fixed (commit 151867b).
>>
>> The exact same check was previously performed in st_glsl_to_tgsi (or
>> ir_to_mesa), and the exact same set of shaders would have been rejected.
>>  The check is now done in the linker instead.
> 
> Actually, the bug only got my attention and then I realized what is
> actually happening in the linker. I probably wouldn't even have noticed
> because I no longer do any 3D on my laptop with r500. I gotta admit, I
> didn't know the checks were so... well, "not ready for a release" to
> say the least, and I mean that regardless of the bug.

Well.  Whether they're "ready for a release" or not, the truth is that
we've been shipping them in releases for quite some time.  So we haven't
regressed anything; it already didn't work.

I am fully in support of doing additional optimizations in the compiler
to reduce resource usage and make more applications run on older
hardware.  Splitting uniform arrays and pruning unused sections could
definitely happen in the compiler, which, as an added bonus, makes the
optimization available on all backends.  It would also eliminate unused
resources prior to the link-time checks.
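
To make that concrete, here's a hypothetical example (the shader is
made up, not from any real app): a fragment shader that declares a
large uniform array but only ever reads two elements.  Splitting the
array and pruning the unused elements would cut its uniform
requirement from 128 vec4 slots down to 2 before any limit check runs:

    /* Hypothetical GLSL, shown as a C string; not from any real app. */
    static const char *fs_source =
        "uniform vec4 light[128];\n"                 /* declared: 128 slots */
        "void main() {\n"
        "    gl_FragColor = light[0] + light[1];\n"  /* read: 2 slots */
        "}\n";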

Also, we desperately need to pack varyings.  Currently, we don't pack
them at all, which is both severely non-spec-compliant and very likely
to break real-world applications.  We can all agree on this, so let's
start here.  Varyings are precious resources even on modern GPUs.
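
As a sketch of what packing buys us (made-up shaders; the exact slot
accounting varies by hardware), compare:

    /* Unpacked: each varying can occupy a full vec4 slot, so this
     * hypothetical shader eats three of the handful of slots that
     * D3D9-level hardware provides. */
    static const char *unpacked_varyings =
        "varying vec2 uv0;\n"
        "varying vec2 uv1;\n"
        "varying float fog;\n";

    /* Packed: the same data fits in two slots. */
    static const char *packed_varyings =
        "varying vec4 uv01;\n"   /* uv0 in .xy, uv1 in .zw */
        "varying float fog;\n";  /* still one slot; 2 total vs. 3 */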

However, I cannot in good conscience just disable resource checking
altogether (which is what I believe you're proposing).  There are so
many cases where applications could _actually need_ more resources than
the hardware supports, and in that case, giving an error is the only
sensible option.  What else would you do?  Crash, render nothing,
replace those uniforms/varyings with zero and render garbage?  Those are
the very behaviors you're trying to avoid.

By giving an error, the application at least has the -chance- to try and
drop down from its "High Quality" shaders to Medium/Low quality settings
for older cards.  I know many apps don't, but some do, so we should give
them the chance.
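
For illustration, the fallback can be as simple as this sketch,
assuming GL 2.0 entry points; try_link's internals and the shader
source names are made up:

    #define GL_GLEXT_PROTOTYPES 1  /* expose GL 2.0 prototypes in Mesa's gl.h */
    #include <stdio.h>
    #include <GL/gl.h>

    /* Hypothetical shader sources at three quality levels. */
    extern const char *vs_src, *fs_high, *fs_medium, *fs_low;

    /* Compile, attach, and link; return 0 if the program doesn't link
     * (e.g. because it exceeds the varying/uniform limits). */
    static GLuint try_link(const char *vs, const char *fs)
    {
        GLuint prog = glCreateProgram();
        /* ... compile vs/fs, glAttachShader, bind attributes ... */
        glLinkProgram(prog);

        GLint ok = GL_FALSE;
        glGetProgramiv(prog, GL_LINK_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetProgramInfoLog(prog, sizeof(log), NULL, log);
            fprintf(stderr, "link failed: %s\n", log);
            glDeleteProgram(prog);
            return 0;
        }
        return prog;
    }

    GLuint pick_program(void)
    {
        GLuint prog;
        if ((prog = try_link(vs_src, fs_high)))
            return prog;
        if ((prog = try_link(vs_src, fs_medium)))
            return prog;
        return try_link(vs_src, fs_low);  /* 0 if even this fails */
    }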

There has to be resource checking somewhere.  Perhaps the backend is a
more suitable place than the linker; I don't know.  (I can see wanting
to move optimizations before checks or checks after optimizations...)
But we can't just remove checking entirely; that's just broken.
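
For the record, the check itself is trivial; the whole argument is
about when it runs relative to optimization.  A simplified sketch
(identifiers invented; this is not Mesa's actual linker code):

    #include <stdbool.h>

    struct limits {
        unsigned max_varying_components;
        unsigned max_uniform_components;
    };

    /* Reject the program if it needs more resources than the hardware
     * exposes; the linker then sets GL_LINK_STATUS to GL_FALSE and
     * puts the message in the info log. */
    static bool
    check_resources(unsigned varyings_used, unsigned uniforms_used,
                    const struct limits *lim, const char **error)
    {
        if (varyings_used > lim->max_varying_components) {
            *error = "too many varying components";
            return false;
        }
        if (uniforms_used > lim->max_uniform_components) {
            *error = "too many uniform components";
            return false;
        }
        return true;
    }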

> Let's analyze the situation a bit, open-minded.
> 
> The checks can be enabled for OpenGL ES 2.0 with no problem; we're
> unlikely to get a failure there.
> 
> They can also be enabled for D3D10-level and later hardware, because
> its limits are pretty high, so the checks are unlikely to fail. The
> problem is with D3D9-level hardware (and probably the vmware driver
> too).
> 
> We also have to consider that a lot of applications are now developed
> with D3D10-level or later hardware and even though the expected
> hardware requirements for such an app are meant to be low, there can
> be, say, programming mistakes, which raise hardware requirements quite
> a lot. The app developer has no way to know about it, because it just
> works on his machine. For example, some compositing managers had such
> mistakes and there's been a lot of whining about that on Phoronix.

And returning a decent error makes the mistake abundantly clear to the
application developer: "I used...wait, 500 of those?  That can't be
right."  To use your example of compositors (typically open source), the
compositor may not run for some people, but the application developer
can fix it.

In contrast, incorrect rendering provides _no_ useful information and
will likely make said application developer think it's a driver bug,
blame Mesa, and not even bother to investigate further.

> We also should take into account that hardly any app has a fallback if
> a shader program fails to link. VDrift has one, but that's rather an
> exception to the rule (VDrift is an interesting example though; it
> falls back to fixed-function simply because Mesa is too strict about
> obeying specs, nothing more). Most apps just abort, crash, or
> completely ignore that linking failed and render garbage or nothing.
> Wine, our biggest user of Mesa, can't fail. D3D shaders must compile
> successfully or it's game over.
> 
> Although the possibility of a linker failure is a nice feature in
> theory, the reality is nobody wants it, because it's the primary cause
> of apps aborting themselves or just rendering nothing (and, of course,
> everybody blames Mesa, or worse: Linux).
> 
> There is quite a large possibility that if those linker checks were
> disabled, more apps would work, especially those where the limits are
> exceeded by a little bit and the difference would be eliminated by the
> driver. Sure, some apps would still be broken or render garbage, but
> it's either this or nothing, don't you think?
> 
> Marek

So, again, if the interest is in making more apps succeed, we should
start with varying packing.  That's useful all around, and I doubt
anyone can dispute that it's necessary.

