[Mesa-dev] [PATCH v3 27/43] anv/pipeline: Use 32-bit surface formats for 16-bit formats

Pohjolainen, Topi topi.pohjolainen at gmail.com
Sun Oct 15 10:14:04 UTC 2017


On Thu, Oct 12, 2017 at 08:38:16PM +0200, Jose Maria Casanova Crespo wrote:
> From: Alejandro Piñeiro <apinheiro at igalia.com>
> 
> From Vulkan 1.0.50 spec, Section 3.30.1. Format Definition:
>     VK_FORMAT_R16G16_SFLOAT
> 
>     A two-component, 32-bit signed floating-point format that has a
>     16-bit R component in bytes 0..1, and a 16-bit G component in
>     bytes 2..3.
> 
> So this format expects those 16-bit floats to be passed without any
> conversion (the same applies when using 2/3/4 components, and to the
> integer formats).
> 
> But the SKL PRM, vol 07, section Format Conversion, page 445, contains
> a table indicating that *16*FLOAT formats are converted to FLOAT, which
> in that context means a 32-bit float. This is similar to the *64*FLOAT
> formats, which convert 64-bit floats to 32-bit floats.
> 
> Unfortunately, while for 64-bit floats we have the alternative of the
> *64*PASSTHRU formats, no such option exists for 16 bits.
> 
> The same issue occurs with the 16-bit integer surface formats.
> 
> As a workaround, if the shader uses a 16-bit location, we use 32-bit
> formats to avoid the conversion, and we fix up the contents to get the
> proper values later. Note that since we are using 32-bit formats, we
> can use formats with fewer components (for example, *R32* for
> *R16G16*).
> 
> Signed-off-by: Jose Maria Casanova Crespo <jmcasanova at igalia.com>
> Signed-off-by: Alejandro Piñeiro <apinheiro at igalia.com>
> ---
>  src/intel/vulkan/genX_pipeline.c | 47 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 47 insertions(+)
> 
> diff --git a/src/intel/vulkan/genX_pipeline.c b/src/intel/vulkan/genX_pipeline.c
> index c2fa9c0ff7..8b2d472787 100644
> --- a/src/intel/vulkan/genX_pipeline.c
> +++ b/src/intel/vulkan/genX_pipeline.c
> @@ -83,6 +83,44 @@ vertex_element_comp_control(enum isl_format format, unsigned comp)
>     }
>  }
>  
> +#if GEN_GEN >= 8
> +static enum isl_format
> +adjust_16bit_format(enum isl_format format)
> +{
> +   switch(format) {
> +   case ISL_FORMAT_R16_FLOAT:
> +      return ISL_FORMAT_R32_FLOAT;
> +   case ISL_FORMAT_R16G16_FLOAT:
> +      return ISL_FORMAT_R32_FLOAT;
> +   case ISL_FORMAT_R16G16B16_FLOAT:
> +      return ISL_FORMAT_R32G32_FLOAT;
> +   case ISL_FORMAT_R16G16B16A16_FLOAT:
> +      return ISL_FORMAT_R32G32_FLOAT;
> +
> +   case ISL_FORMAT_R16_SINT:
> +      return ISL_FORMAT_R32_SINT;
> +   case ISL_FORMAT_R16G16_SINT:
> +      return ISL_FORMAT_R32_SINT;
> +   case ISL_FORMAT_R16G16B16_SINT:
> +      return ISL_FORMAT_R32G32_SINT;
> +   case ISL_FORMAT_R16G16B16A16_SINT:
> +      return ISL_FORMAT_R32G32_SINT;
> +
> +   case ISL_FORMAT_R16_UINT:
> +      return ISL_FORMAT_R32_UINT;
> +   case ISL_FORMAT_R16G16_UINT:
> +      return ISL_FORMAT_R32_UINT;
> +   case ISL_FORMAT_R16G16B16_UINT:
> +      return ISL_FORMAT_R32G32_UINT;
> +   case ISL_FORMAT_R16G16B16A16_UINT:
> +      return ISL_FORMAT_R32G32_UINT;
> +
> +   default:
> +      return format;
> +   }
> +}

Just wondering aloud. As we are going to reinterpret the data in any case,
we could simply use the _UINT variants even for FLOAT and SINT. It doesn't
really make any difference; it's just that here someone might think it is
somehow relevant to keep the base type. In blorp we simply use UINTs, see,
for example, blorp_blit.c::get_ccs_compatible_uint_format().
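To make the suggestion concrete, here is a minimal sketch of what that
collapsed, UINT-only mapping might look like. The enum values below are
hypothetical stand-ins for illustration (the real ones are the
ISL_FORMAT_* enums in src/intel/isl/isl.h), and the function name is
invented; this is not part of the patch:

```c
#include <assert.h>

/* Stand-in format enum for illustration only; the real driver uses the
 * ISL_FORMAT_* values from src/intel/isl/isl.h. */
enum fmt {
   FMT_R16_FLOAT, FMT_R16G16_FLOAT,
   FMT_R16G16B16_FLOAT, FMT_R16G16B16A16_FLOAT,
   FMT_R16_SINT, FMT_R16G16_SINT,
   FMT_R16G16B16_SINT, FMT_R16G16B16A16_SINT,
   FMT_R16_UINT, FMT_R16G16_UINT,
   FMT_R16G16B16_UINT, FMT_R16G16B16A16_UINT,
   FMT_R32_UINT, FMT_R32G32_UINT,
   FMT_R8G8B8A8_UNORM,          /* unrelated format, passed through */
};

/* Since the fetched bits are reinterpreted in the shader anyway, every
 * 16-bit format can map to a _UINT variant of the matching width,
 * regardless of base type: 1-2 components fit in one 32-bit dword,
 * 3-4 components in two. */
static enum fmt
adjust_16bit_format_uint_only(enum fmt f)
{
   switch (f) {
   case FMT_R16_FLOAT:
   case FMT_R16G16_FLOAT:
   case FMT_R16_SINT:
   case FMT_R16G16_SINT:
   case FMT_R16_UINT:
   case FMT_R16G16_UINT:
      return FMT_R32_UINT;
   case FMT_R16G16B16_FLOAT:
   case FMT_R16G16B16A16_FLOAT:
   case FMT_R16G16B16_SINT:
   case FMT_R16G16B16A16_SINT:
   case FMT_R16G16B16_UINT:
   case FMT_R16G16B16A16_UINT:
      return FMT_R32G32_UINT;
   default:
      return f;
   }
}
```

This halves the number of distinct return values compared to the patch's
per-base-type switch, at the cost of no longer hinting at the original
base type in the chosen surface format.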
