[Mesa-dev] gallium scaled types
Roland Scheidegger
sroland at vmware.com
Tue Sep 13 07:07:22 PDT 2011
On 13.09.2011 00:33, Jose Fonseca wrote:
>
>
> ----- Original Message -----
>> On 12.09.2011 21:14, Jose Fonseca wrote:
>>>
>>> ----- Original Message -----
>>>> On 12.09.2011 19:05, Christoph Bumiller wrote:
>>>>> On 12.09.2011 18:41, Jose Fonseca wrote:
>>>>>> ----- Original Message -----
>>>>>>> On Mon, Sep 12, 2011 at 5:48 PM, Roland Scheidegger
>>>>>>> <sroland at vmware.com> wrote:
>>>>>>>> On 11.09.2011 19:17, Dave Airlie wrote:
>>>>>>>>> On Sun, Sep 11, 2011 at 10:11 AM, Dave Airlie
>>>>>>>>> <airlied at gmail.com> wrote:
>>>>>>>>>> Hi guys,
>>>>>>>>>>
>>>>>>>>>> I'm not really finding a great explanation in my
>>>>>>>>>> 2-minute search of what the USCALED and SSCALED
>>>>>>>>>> types represent.
>>>>>>>>>>
>>>>>>>>>> On r600 hw at least we have a SCALED type, which
>>>>>>>>>> seems to be an integer in floating-point format, as
>>>>>>>>>> well as an INT type which holds natural integers.
>>>>>>>>> Talked on IRC with calim and mareko; it makes sense
>>>>>>>>> now. I need to add UINT/SINT types, and will maybe
>>>>>>>>> document things a bit more on my way past.
>>>>>>>>>
>>>>>>>>> will also rename the stencil types.
>>>>>>>> Hmm, what's wrong with them? USCALED is an unsigned int
>>>>>>>> type which, in contrast to UNORM, isn't normalized but
>>>>>>>> "scaled" to the actual value (so the same as UINT,
>>>>>>>> really). Same for SSCALED, which is just signed instead
>>>>>>>> of unsigned. And the stencil types seem to fit already.
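(For illustration, the UNORM vs. USCALED decode difference in a minimal
C sketch; the helper names here are made up and are not the util_format
API:)

    #include <stdint.h>

    /* Decoding a single R8 component, illustrative only. */
    static float decode_r8_unorm(uint8_t v)
    {
        return v / 255.0f;   /* UNORM: normalized to [0, 1] */
    }

    static float decode_r8_uscaled(uint8_t v)
    {
        return (float)v;     /* USCALED: the value itself, just as a float */
    }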
>>>>>>> No, they are not.
>>>>>>>
>>>>>>> SCALED is an int that is automatically converted to float
>>>>>>> when fetched by a shader.
>>>>>>>
>>>>>>> The SCALED types are OpenGL's non-normalized *float*
>>>>>>> vertex formats that are stored in memory as ints, e.g.
>>>>>>> glVertexAttribPointer(... GL_INT ...). There are no
>>>>>>> SCALED textures or renderbuffers supported by any
>>>>>>> hardware or exposed by any API known to me. Radeons seem
>>>>>>> to be able to do SCALED types according to the ISA docs,
>>>>>>> but in practice it only works with vertex formats and
>>>>>>> only with SCALED8 and SCALED16 (AFAIK).
>>>>>>>
>>>>>>> Then there should be the standard INT types that are not
>>>>>>> converted to float upon shader reads. Those can be
>>>>>>> specified as vertices by glVertexAttribIPointer(...
>>>>>>> GL_INT ...) (note the *I*), or as integer textures. This
>>>>>>> is really missing in Gallium.
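(To make that distinction concrete in GL terms, a minimal sketch; this
assumes a bound VBO holding GL_INT data and uses only standard GL entry
points:)

    #include <stddef.h>
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void setup_attribs(void)
    {
        /* SCALED path: GL_INT data is converted to float on fetch;
         * the shader declares the input as vec4. */
        glVertexAttribPointer(0, 4, GL_INT, GL_FALSE, 0, NULL);

        /* INT path (note the I): GL_INT data is fetched as-is;
         * the shader declares the input as ivec4. */
        glVertexAttribIPointer(1, 4, GL_INT, 0, NULL);
    }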
>>>>>> Pipe formats describe how the data should be interpreted.
>>>>>>
>>>>>> IMO, the type of register they will be stored in after
>>>>>> interpretation is beyond the scope of pipe_format. I
>>>>>> think that is purely in the realm of shaders.
>>>>>>
>>>>>> For example, when doing texture sampling, whether
>>>>>> PIPE_FORMAT_R32G32B32A32_SSCALED should be read into
>>>>>> integer or float registers should be decided by the
>>>>>> texture sample opcode, not the pipe_format.
>>>>>>
>>>>>> And in the case of vertex shader inputs, the desired
>>>>>> register type (float, int, double) should not be in
>>>>>> pipe_vertex_element at all, but probably in the shader
>>>>>> input declaration, given that it ties more closely to the
>>>>>> shader itself: an integer vertex input will usually be
>>>>>> used with integer opcodes, and vice versa, independent of
>>>>>> whether the vertices are actually stored in the vertex
>>>>>> buffer as integers or not.
>>>>> No. If you declare a shader input as float and you use
>>>>> VertexAttribIPointer, you do NOT get a float, even if the
>>>>> shader expects it.
>>>>>
>>>>> The vertex format describes a property of the vertex fetch
>>>>> stage (input assembler) and determines how data is brought
>>>>> from a vertex buffer into vertex attribute memory / cache;
>>>>> what the shader does with the data is completely unrelated.
>>>> Ah, I see the problem now. This boils down to the implicit
>>>> convert-to-float which earlier GL (and hw) did, but which you
>>>> most likely don't want (well, the non-normalizing case) if you
>>>> support native integers (though you still need to be able to
>>>> do it for GL).
>>> Exactly, but not just int-as-float + native integer support +
>>> VertexAttribIPointer
>>>
>>> There's also GL 4's double-as-float + native double support +
>>> VertexAttribLPointer
>>>
>>>> I think the non-normalized ints-as-floats is something d3d10
>>>> ditched. I'm not really thrilled about seeing more formats
>>>> which are essentially the same (the values don't actually
>>>> change; it's just float vs. int type), but it seems GL
>>>> actually wants this and hw can actually do it, so I don't
>>>> really see a better solution. I guess it would be possible
>>>> to make the int-as-float bit part of pipe_vertex_buffer or
>>>> something instead, but I'm not sure it would work nicely.
>>> I'd still think an additional state in pipe_vertex_element is by
>>> far preferable to the duplication of formats. I'd like us to
>>> make an honest attempt at that. Maybe a single flag "as-float"
>>> would do it.
>>>
>>> Otherwise we might as well just start naming the formats as
>>> PIPE_foo_INT_WHATEVER, PIPE_foo_INT_I_REALLY_MEANT_IT_NOW,
>>> PIPE_foo_DOUBLE_BUT_NOT_QUITE, and
>>> PIPE_foo_DOUBLE_DONT_YOU_DARE_DOWNCAST_TO_FLOAT_OR_THE_BOOGIE_MAN_WILL_GET_YOU!
>>>
>>> Jose
>>
>> :DDD
>>
>> Well then, I think I've warmed to that idea ... let's go
>> glReadPixels style and add a fetch-as-float bit, but please still
>> rename SCALED to INT, since a lot of people working on hardware
>> drivers are used to terminology where SCALED implies
>> conversion.
>
> I'd be entirely fine with that.
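Something like this, then? (The fetch_as_float field name is only a
placeholder to illustrate the idea, not a concrete proposal; the other
fields are what p_state.h has today.)

    /* Sketch: pipe_vertex_element with the discussed bit added. */
    struct pipe_vertex_element
    {
       unsigned src_offset;          /* offset into the vertex buffer */
       unsigned instance_divisor;    /* 0 means per-vertex data */
       unsigned vertex_buffer_index; /* which vertex buffer to fetch from */
       enum pipe_format src_format;  /* e.g. PIPE_FORMAT_R32G32B32A32_SINT */
       unsigned fetch_as_float:1;    /* placeholder: convert to float on fetch */
    };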
>
>> The extra formats would really only make sense for vertices (and
>> allow for simple use of a to-hw mapping table), but it really does
>> get awkward with doubles. Initially I had thought that a dvec4
>> attribute could be implemented as 2 float (i.e. typeless / no
>> conversion) attributes - nv50 and nvc0 hardware have to be
>> configured that way - but from an interface perspective this is
>> quite ugly.
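(Purely illustrative, a sketch of what that two-attribute trick would
look like at the interface level; the float format serves only as a
raw-bits carrier here, which is exactly the ugly part:)

    /* A dvec4 at offset 0, expressed as two 4 x 32-bit "typeless"
     * fetches; no conversion is meant to happen. */
    struct pipe_vertex_element dvec4_as_two[2] = {
       { .src_offset = 0,  .vertex_buffer_index = 0,
         .src_format = PIPE_FORMAT_R32G32B32A32_FLOAT },  /* x and y bits */
       { .src_offset = 16, .vertex_buffer_index = 0,
         .src_format = PIPE_FORMAT_R32G32B32A32_FLOAT },  /* z and w bits */
    };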
>
> I admit I'm not very familiar with how hardware currently supports
> native double formats, or how likely it is for hardware to have
> native 4 x double register support in the future. But even if we
> ignore the 4 x double case, there are still the 2 x double and
> 1 x double vertex attributes, which can be converted to float or
> double depending on the VertexAttrib call.
FWIW it's interesting to note that d3d11 does not seem to support any
native double (or 64-bit, for that matter) formats. From the HLSL docs
(http://msdn.microsoft.com/de-de/library/bb509646%28v=vs.85%29.aspx):
"You cannot use double precision values as inputs and outputs for a
stream. To pass double precision values between shaders, declare each
double as a pair of uint data types. Then, use the asdouble function to
pack each double into the pair of uints and the asuint function to
unpack the pair of uints back into the double."
That probably implies hw doesn't support it either.
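For reference, the CPU-side equivalent of that asuint/asdouble round
trip is plain bit-pattern reinterpretation, e.g. in C:

    #include <stdint.h>
    #include <string.h>

    /* Pack a double into two 32-bit uints and back -- what HLSL's
     * asuint()/asdouble() do on the GPU side. */
    static void double_to_uints(double d, uint32_t u[2])
    {
       memcpy(u, &d, sizeof d);
    }

    static double uints_to_double(const uint32_t u[2])
    {
       double d;
       memcpy(&d, u, sizeof d);
       return d;
    }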
Roland