[Piglit] [PATCH] mingw: use default 2MB stack size instead of 1MB

Jose Fonseca jfonseca at vmware.com
Thu Oct 12 22:13:40 UTC 2017


On 12/10/17 23:08, Roland Scheidegger wrote:
> Am 12.10.2017 um 21:19 schrieb Brian Paul:
>> On 10/12/2017 12:11 PM, Jose Fonseca wrote:
>>> On 12/10/17 17:51, Brian Paul wrote:
>>>> On 10/12/2017 08:04 AM, Jose Fonseca wrote:
>>>>> The intent here was not so much to match the piglit MSVC build, but
>>>>> apps built with MSVC in general.
>>>>>
>>>>> After all, nothing ever prevented us from setting a huge stack size on
>>>>> both MinGW and MSVC alike, as both toolchains allow configuring the
>>>>> stack size to whatever we want.
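
(For the record, and only as an illustrative sketch rather than the exact
flags in piglit's build system: both toolchains take the stack reserve size
on the link line, e.g.

  # MinGW / GNU ld: reserve a 1 MiB stack
  gcc -o some_test some_test.c -Wl,--stack,1048576

  # MSVC link.exe: same reserve size
  link /STACK:1048576 some_test.obj

where the 1048576 value and file names are just placeholders.)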
>>>>>
>>>>>
>>>>> The key issue here is that OpenGL drivers don't get to pick the apps
>>>>> they are loaded into, and real OpenGL applications will likely be built
>>>>> with MSVC instead of MinGW, and therefore will likely only have the MSVC
>>>>> default stack size.  And we should err on the side of caution when
>>>>> testing.
>>>>>
>>>>>
>>>>> Regardless of the compiler used, if we bump the stack size in piglit,
>>>>> we just increase the chance that wasteful stack allocations go
>>>>> undetected in piglit and blow up in real applications.
>>>>>
>>>>>
>>>>> Therefore I suggest we continue to keep the 1MB default, and try to fix
>>>>> Mesa to be less stack-hungry.  If that's not practical here,
>>>>
>>>> The ir_expression::constant_expression_value() function's local vars
>>>> are pretty minimal.  The two that stand out:
>>>>
>>>>      ir_constant *op[ARRAY_SIZE(this->operands)] = { NULL, };
>>>>      ir_constant_data data;
>>>>
>>>> are only 32 and 128 bytes, respectively (on a 32-bit build).  I'm not
>>>> sure what else accounts for the roughly 2 KB activation record.
>>>>
>>>> I don't see an obvious way to fix the problem.  Even if we could
>>>> reduce per-call stack memory, someone could write an evil shader that
>>>> adds a thousand or more terms and we'd overflow the stack again.
>>>
>>> I'm not following...
>>>
>>> If the application is malicious, it can overflow the stack without the
>>> OpenGL driver's help.  We have to trust the application.  After all, the
>>> OpenGL driver is in the same process.
>>>
>>> If the application is a browser, which needs to handle untrusted shaders,
>>> then it's the application's responsibility to ensure it has enough stack
>>> to cope with big shaders.
>>>
>>> And for the sake of argument, if increasing the stack size is the solution
>>> for malicious shaders, where would one stop?  If increasing the stack size
>>> is not a solution to malicious shaders, then why is it relevant to the
>>> discussion?
>>
>> I'm just saying that if we found a way to "fix Mesa to be less stack
>> hungry" someone could generate a new "evil" Piglit test that'd still
>> break things.  Any construct which generates a really deep IR tree and
>> is evaluated recursively could run out of stack space.
> 
> I think to be safe there, it would be necessary for the compiler to
> detect this and refuse compilation (once some recursive call limit is
> reached).  (But of course shaders that are not intentionally malicious
> should still compile if they are at least semi-reasonable.)
> Not sure though if that's practical...  (I don't know if that piglit test
> is really "sane", although it doesn't look like it's really meant to stress
> this in particular, so it probably would be nice if it would just work.)

Yeah, D3D Shader Models establish limits on recursion to keep things 
sane, etc.

But the case here seems somewhat self-inflicted: it's not that the shader
has recursion, but merely that the expression is deep.  I suspect the
optimization pass should either avoid recursion, or indeed cut short
after a certain depth.
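
To illustrate what I mean, here is a minimal, self-contained sketch (not
Mesa's actual IR code; Expr, fold_recursive, and fold_iterative are made-up
stand-ins for the real types and passes).  A constant-folding walk can either
give up once it passes a fixed depth, or use an explicit worklist so that the
native stack stays shallow no matter how deep the expression chain is, even
for a chain with thousands of terms like the one Brian describes:

// Minimal sketch: fold a tree of '+' expressions to a constant, either
// recursively with a depth cut-off, or iteratively with an explicit stack.

#include <iostream>
#include <memory>
#include <optional>
#include <vector>

struct Expr {                     // hypothetical stand-in for ir_expression
    char op = 0;                  // 0 = leaf; '+' is the only operator here
    double value = 0.0;           // leaf value
    const Expr *lhs = nullptr;
    const Expr *rhs = nullptr;
};

// Option 1: recursive fold that bails out past a fixed depth.  Returning
// nullopt means "don't fold", which is safe: the expression is left as-is.
std::optional<double> fold_recursive(const Expr &e, int depth = 0)
{
    constexpr int max_depth = 256;          // arbitrary limit for the sketch
    if (depth > max_depth)
        return std::nullopt;
    if (e.op == 0)
        return e.value;
    auto l = fold_recursive(*e.lhs, depth + 1);
    auto r = fold_recursive(*e.rhs, depth + 1);
    if (!l || !r)
        return std::nullopt;
    return *l + *r;
}

// Option 2: iterative post-order fold; native stack usage stays constant
// no matter how deep the expression is, since the worklist lives on the heap.
double fold_iterative(const Expr &root)
{
    struct Frame { const Expr *node; bool expanded; };
    std::vector<Frame> work{{&root, false}};
    std::vector<double> values;             // operand value stack

    while (!work.empty()) {
        Frame f = work.back();
        work.pop_back();
        if (f.node->op == 0) {
            values.push_back(f.node->value);
        } else if (!f.expanded) {
            work.push_back({f.node, true}); // revisit after the children
            work.push_back({f.node->rhs, false});
            work.push_back({f.node->lhs, false});
        } else {
            double r = values.back(); values.pop_back();
            double l = values.back(); values.pop_back();
            values.push_back(l + r);        // only '+' in this sketch
        }
    }
    return values.back();
}

int main()
{
    // An arena keeps destruction non-recursive even for a very deep chain.
    std::vector<std::unique_ptr<Expr>> arena;
    auto make = [&](char op, double v, const Expr *l, const Expr *r) {
        arena.push_back(std::make_unique<Expr>(Expr{op, v, l, r}));
        return arena.back().get();
    };

    // Build ((...((1 + 1) + 1) ...) + 1): a deep chain of additions, the
    // kind of expression that blows a recursive walk on a small stack.
    const Expr *tree = make(0, 1.0, nullptr, nullptr);
    for (int i = 0; i < 100000; ++i)
        tree = make('+', 0.0, tree, make(0, 1.0, nullptr, nullptr));

    std::cout << "iterative fold: " << fold_iterative(*tree) << "\n";
    std::cout << "recursive fold gave up: "
              << (fold_recursive(*tree) ? "no" : "yes") << "\n";
}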

Jose

