[Mesa-dev] gallium scaled types

Christian König deathsimple at vodafone.de
Sat Sep 17 02:04:43 PDT 2011


On Friday, 16.09.2011 at 12:58 -0700, Jose Fonseca wrote:
[SNIP]
> > This change would best describe a vertex and texture fetch as
> > implemented by Radeons:
> > 
> > diff --git a/src/gallium/auxiliary/util/u_format.h
> > b/src/gallium/auxiliary/util/u_format.h
> > index 2eb3e1b..8c4d67f 100644
> > --- a/src/gallium/auxiliary/util/u_format.h
> > +++ b/src/gallium/auxiliary/util/u_format.h
> > @@ -91,10 +91,11 @@ struct util_format_block
> > 
> >  enum util_format_type {
> >     UTIL_FORMAT_TYPE_VOID = 0,
> > -   UTIL_FORMAT_TYPE_UNSIGNED = 1,
> > -   UTIL_FORMAT_TYPE_SIGNED = 2,
> > -   UTIL_FORMAT_TYPE_FIXED = 3,
> > -   UTIL_FORMAT_TYPE_FLOAT = 4
> > +   UTIL_FORMAT_TYPE_FLOAT = 1,
> > +   UTIL_FORMAT_TYPE_INT = 2,     /* signed/unsigned */
> > +   UTIL_FORMAT_TYPE_NORM = 3,    /* signed/unsigned */
> > +   UTIL_FORMAT_TYPE_SCALED = 4,  /* signed/unsigned */
> > +   UTIL_FORMAT_TYPE_FIXED = 5,
> >  };
> >
> > @@ -121,7 +122,7 @@ enum util_format_colorspace {
> >  struct util_format_channel_description
> >  {
> >     unsigned type:6;        /**< UTIL_FORMAT_TYPE_x */
> > -   unsigned normalized:1;
> > +   unsigned is_signed:1;
> >     unsigned size:9;        /**< bits per channel */
> >  };
> 
> Moving the sign out of util_format_type into util_format_channel_description is fine.
> Likewise, NORM/FIXED can be done either way.  (For example, llvmpipe's struct lp_type has them all as 1-bit flags.)
> 
> But fundamentally you're proposing to introduce a new type, SCALED, into util_format_type.
I don't think Marek's intention is to actually commit this patch; he
just wanted to describe how Radeon hardware works internally.

But yes, in the end we need something to distinguish between SCALED
and INT types.
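
To make that concrete, here is a rough sketch (hypothetical driver
code; the num_format values are the ones Marek describes below) of how
r600 could derive its numeric format from the proposed enum:

   /* Hypothetical sketch: map the proposed util_format types to the
    * r600 "num_format" field (0 = NORM, 1 = INT, 2 = SCALED). */
   const struct util_format_description *desc =
      util_format_description(format);

   switch (desc->channel[0].type) {
   case UTIL_FORMAT_TYPE_NORM:   *num_format = 0; break;
   case UTIL_FORMAT_TYPE_INT:    *num_format = 1; break;
   case UTIL_FORMAT_TYPE_SCALED: *num_format = 2; break;
   default:
      break; /* FLOAT/FIXED are handled elsewhere */
   }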

[SNIP]
> > Now you can take a look at the function r600_vertex_data_type in
> > r600_asm.c:2020. It's a good example of what is going on. At the end
> > of the function, there is this piece of code:
> > 
> > 	if (desc->channel[i].normalized) {
> > 		*num_format = 0;
> > 	} else {
> > 		*num_format = 2;
> > 	}
> > 
> > 0 means NORM, 2 means SCALED. And guess what... 1 means INT. I think
> > "num_format" stands for "numeric format". The function which is
> > called
> > in is_format_supported is r600_is_vertex_format_supported in
> > r600_formats.h:84.
> 
> I got the picture.
> 
> I understand that having the INT vs SCALED flag in desc would be nice. On the other hand,
> it doesn't look like passing the as_float flag to r600_vertex_data_type as an additional parameter would force a big rewrite.
At least it makes things a lot more complicated, because how a value
ends up in a shader register is programmed in the same place
(register/opcode) as the bits per channel. So if we are going to have an
as_float flag in the fetch opcode or r600_vertex_data, then we would
need to delay compiling the shader until the actual bits per channel
are known, and possibly even recompile a shader quite often.
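
Just to illustrate, a sketch (all names hypothetical) of what the
shader key would have to grow if the conversion becomes part of the
fetch opcode:

   /* Hypothetical sketch: with as_float in the fetch opcode, the
    * compiled shader suddenly depends on per-element vertex state. */
   struct r600_vs_shader_key {
      struct {
         unsigned bits_per_channel; /* only known at draw time */
         unsigned as_float:1;       /* requested representation */
      } element[PIPE_MAX_ATTRIBS];
   };

Every time one of those fields changes we would have to look up, and
possibly recompile, the shader at draw time.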

> > Note that r600 might be able to support SCALED textures too (I know I
> > had said the opposite, but I am not sure now after I read the docs
> > again). We only know SCALED doesn't work for 32 bits per channel,
> > same
> > as with vertices. 8 bits and 16 bits per-channel SCALED textures
> > might
> > work, but nobody tried that yet.
Oh, that's not true: I tried it for the g3dvl iDCT transformation and
it worked flawlessly.

Actually the hardware part that does the load of a vertex element (and
with it the conversion from memory to shader register representation) is
the same hardware part that does the load from a texture, so both are
programmed very similarly.

> > So now we are back to the discussion about the convert-to-float flag
> > in pipe_vertex_element and is_format_supported. I am not saying it's
> > wrong, I am just saying it's not clean. Either way, I will respect
> > the
> > final decision of the implementers of this functionality.
> 
> As I was playing out the changes that you suggest to u_format and llvmpipe in
> my head, I realized one issue I hadn't until now.
> 
> 
> Imagine format conversion use cases like translate, u_blit, stretch blitters, or llvmpipe,
> where we generate complex state (x86 assembly, blit shaders, LLVM IR) that deals with the conversion,
> and we put the source format in a key to a hash table to avoid regenerating code.
> 
>   struct {
>       enum pipe_format src_format;
>       enum pipe_format dst_format;
>       
>       boolean stretch;
>       int filtering_mode;
>       ...
>   };
> 
> Having different enums for PIPE_xxx_INT and PIPE_xxx_SCALED will cause unnecessary cache misses,
> because for these use cases INT and SCALED formats are one and the same.
> 
> And the only solution to avoid this is to have a canonicalizing mapping function, such as:
> 
>    enum pipe_format
>    get_canonical_format(enum pipe_format format)
>    {
>       switch (format) {
>       case PIPE_FORMAT_FOO_SINT:
>       case PIPE_FORMAT_FOO_SSCALED:
>          return PIPE_FORMAT_FOO_SINT;
>       case PIPE_FORMAT_FOO_UINT:
>       case PIPE_FORMAT_FOO_USCALED:
>          return PIPE_FORMAT_FOO_UINT;
>       /* ... */
>       default:
>          return format;
>       }
>    }
> 
> which is a bit painful.

Actually we have a bit of the same problem in the Radeon drivers: for
simple copy operations, the way values end up in registers doesn't
actually matter; only the total bit width of a pixel matters.

Since r600 doesn't support 32-bit scaled types and r300 supports even
fewer, we quite often fall back to just copying an R8G8B8A8_* texture
as R32_FLOAT.
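
Roughly like this (just a sketch; util_format_get_blocksizebits() is
the existing u_format helper, the remapping itself is only an example):

   static enum pipe_format
   copy_format_for_bpp(enum pipe_format format)
   {
      /* For a plain copy only the bits per pixel matter, so remap
       * to a simple format of the same total width. */
      switch (util_format_get_blocksizebits(format)) {
      case 32:  return PIPE_FORMAT_R32_FLOAT; /* e.g. R8G8B8A8_* */
      case 64:  return PIPE_FORMAT_R32G32_FLOAT;
      case 128: return PIPE_FORMAT_R32G32B32A32_FLOAT;
      default:  return format;
      }
   }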

> Sure, GL, D3D, and a lot of hardware do not support integer -> float conversion in several places
> of the graphics pipeline, but modules such as translate and u_format, and software renderers, are
> not bound by those limitations, and these cases can & do happen.
> 
> 
> So, while I agree that distinguishing SCALED vs INT would indeed simplify vertex translation, I think it
> would impair the usefulness of pipe_format as a derived-state cache lookup key, one of its important uses today.
Yeah, pipe_format actually sums up two separate pieces of information:
a) the memory layout of the channels, and b) how those channels end up
inside shader registers.

But that is actually a good thing, because we have some very nice
functions in u_format to split that information apart again, and some
combinations of memory layout and shader register representation don't
make any sense at all.
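
For example (a sketch, assuming the proposed enum), a driver can query
both aspects independently:

   const struct util_format_description *desc =
      util_format_description(format);

   /* a) memory layout of the channels */
   unsigned bits = desc->channel[0].size;

   /* b) how the values end up in shader registers */
   enum util_format_type type = desc->channel[0].type;

And nonsensical combinations, like a normalized FLOAT channel, simply
never get a pipe_format in the first place.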

> But ignore all I wrote and said until now.  If I simply imagine the total work necessary to plumb a flag into
> the vertex fetch paths, vs the total work necessary to update for the INT/SCALED split of pipe_formats and
> u_format, the latter seems like much more grunt work to me.
That might be true, but I'm beginning to get the feeling that it might
become a performance problem in r600 if we do it with a flag somewhere
in the vertex path, since that could lead to shader recompilation.

> Anyway, you guys are doing it, and it doesn't seem like Keith or Brian have an opinion, so as far as I'm concerned,
> feel free to decide among yourselves and start coding away in a branch.
That sounds like a good idea: let's set up a branch, implement it
there, and hammer out all the bugs and regressions before it gets
merged into master.

> I only have two requirements:
> 
> - there must be no regression on llvmpipe upon merging.  (svga is equally important, but its capabilities in terms
> of formats are quite constrained, so I doubt there will be any issues. llvmpipe, on the other hand, supports pretty
> much all of the formats out there, most of them through LLVM IR, and the remainder through u_format's code).
> 
> - u_format's current functionality must not degrade -- i.e., don't remove the ability to pack/unpack/fetch _any_ integer
> format from/to 4 x floats or 4 x unorm8, just because GL/D3D doesn't allow it.
> 
> That's all!
Agreed.

So who is going to do that? Dave and Marek seem to have already
volunteered. I would also like to help, but at least officially I
don't have time to do so.

Christian.


