[virglrenderer-devel] Float/integer interaction with undefined behavior

Lepton Wu lepton at chromium.org
Mon Jun 8 21:52:31 UTC 2020


On Tue, May 19, 2020 at 4:46 PM Lepton Wu <lepton at chromium.org> wrote:
>
> FYI, I uploaded a WIP CL to show the idea I have in mind to fix this issue:
>
> My plan is: introduce integer types in the generated GLSL. I also plan to change the default type from float to int for all temp registers, and,
> where possible (for example, a flat in/out between shaders), also change those generic registers to integer. The idea here is: it's always safe
> to call floatBitsToInt, while intBitsToFloat doesn't work all the time.
>
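To illustrate the direction (a hedged sketch with made-up names, not the
actual generated code): with int-typed storage, float data is written
through floatBitsToInt, which always preserves the exact bit pattern, and
read back through intBitsToFloat, which may canonicalize a NaN encoding,
but for float data any NaN behaves like any other NaN. With float-typed
storage the directions are reversed, and integer data can be silently
corrupted:

    #version 300 es
    precision highp float;
    precision highp int;
    uniform float some_float;  // hypothetical example inputs
    uniform int some_int;
    out vec4 color;
    void main() {
        // int-typed storage: safe for both float and integer data.
        ivec4 temp0;
        temp0.x = floatBitsToInt(some_float); // exact bit copy, always
        float f = intBitsToFloat(temp0.x);    // a NaN may be canonicalized,
                                              // but any NaN still acts as NaN
        // float-typed storage: unsafe for integer data.
        vec4 temp1;
        temp1.x = intBitsToFloat(some_int);   // NaN-patterned ints can be
                                              // quieted here (undefined)
        int i = floatBitsToInt(temp1.x);      // may no longer equal some_int
        color = vec4(f) + vec4(float(i));
    }
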
The first step is to use the correct types in the generated GLSL. This
fixes most of the broken dEQP tests on Mali and makes things work, even
though we still have floatBitsToInt(intBitsToFloat(x)) != x. The CL can
be reviewed at:
https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/395
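
As a rough sketch of what this first step changes (a hand-written
illustration, not the exact emitter output), reusing the in1/out1 names
from the mail below: for a guest vertex shader input declared
"in ivec4 in1", the generated GLSL goes from a float declaration plus
bitcasts to a real integer declaration:

    // Before (illustrative): integer data smuggled through a float type.
    //   in vec4 in1;
    //   ... floatBitsToInt(in1) ...    // every use needs a bitcast
    // After: the declared type matches the guest shader.
    #version 300 es
    in ivec4 in1;          // real integer type, no float round trip
    flat out ivec4 out1;   // integer outputs must be flat-qualified
    void main() {
        out1 = in1;
        gl_Position = vec4(0.0);
    }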
>
>
>
> On Tue, May 12, 2020 at 6:55 PM Tao Wu(吴涛@Eng) <lepton at google.com> wrote:
>>
>> Hi,
>>
>> I am trying to run virglrenderer on Mali, and this uncovered an issue in virgl. I am trying to get a fix, but I'd like to hear from you first,
>> since that could save a lot of time.
>>
>> Currently in virgl, most registers are treated as "float", so for in/out declarations in a guest shader like "in ivec4 in1" or "out ivec4 out1",
>> we actually create shaders with "in vec4 in1" and "out vec4 out1". This only happens to work because of two undefined behaviors (according to
>> the OpenGL documentation):
>>
>> The 1st thing is: intBitsToFloat and uintBitsToFloat return the encoding passed in parameter x as a highp floating-point value. If the encoding of a NaN is passed in x, it will not signal and the resulting value will be undefined. That means, on some GPUs, we can't actually get x back with floatBitsToInt(intBitsToFloat(x)) for some values of x.
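
A minimal fragment shader showing where the round trip can break (a
hypothetical sketch; v_bits and f_out are made-up names): if v_bits.x
holds a NaN bit pattern such as 0x7fc00001, intBitsToFloat may return a
canonicalized NaN, so floatBitsToInt no longer recovers the original
integer:

    #version 300 es
    precision highp float;
    precision highp int;
    flat in ivec4 v_bits;  // integer data arriving with its real int type
    out ivec4 f_out;
    void main() {
        // If v_bits.x encodes a NaN, the intermediate float is undefined
        // per the GLSL spec; some GPUs hand back a different NaN encoding.
        int roundtrip = floatBitsToInt(intBitsToFloat(v_bits.x));
        f_out = ivec4(roundtrip);  // not guaranteed to equal v_bits.x
    }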
>>
>> The 2nd thing is: The general type of attribute used in the vertex shader must match the general type provided by the attribute array. This is governed by which glVertexAttribPointer function you use. For floating-point attributes, you must use glVertexAttribPointer. For integer (both signed and unsigned), you must use glVertexAttribIPointer. And for double-precision attributes, where available, you must use glVertexAttribLPointer. The issue I am hitting is slightly different,
>> but I believe it's related: it seems we are calling glVertexAttribIFormat while the attribute in the vertex shader is defined as float. We also have a similar problem for outputs.
>> For example, fsout_c0 is defined as a float in the fragment shader while the surface is in GL_R32I format.
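
To make the required pairing concrete (a sketch with made-up attribute
names; the comments name the entry points from the rule quoted above):

    #version 330
    in vec4  a_pos;   // float attribute: host sets it up with
                      //   glVertexAttribPointer / glVertexAttribFormat
    in ivec4 a_cell;  // integer attribute: host must use
                      //   glVertexAttribIPointer / glVertexAttribIFormat
    void main() {
        gl_Position = a_pos + vec4(a_cell);
    }

The output side is analogous: a fragment shader output written to a
GL_R32I surface needs an integer declaration like "out ivec4 fsout_c0",
not a float one.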
>>
>> All of this works fine with Intel GPU backends, but according to the spec it's undefined behavior, and it causes trouble on Mali.
>>
>> It looks like there is no type information in TGSI (or is it actually there already?) to use for fixing this. Any suggestions?

