[Mesa-dev] [PATCH 08/11] mesa: Implement the GL entry points defined by ARB_shader_image_load_store.

Francisco Jerez currojerez at riseup.net
Sat Dec 7 17:11:23 PST 2013


Paul Berry <stereotype441 at gmail.com> writes:

> On 24 November 2013 21:00, Francisco Jerez <currojerez at riseup.net> wrote:
>[...]
>> +
>> +   case GL_RGBA16UI:
>> +      return MESA_FORMAT_RGBA_UINT16;
>> +
>> +   case GL_RGB10_A2UI:
>> +      return MESA_FORMAT_ABGR2101010_UINT;
>>
>
> I don't understand the naming conventions of the GL and MESA formats well
> enough to be sure about this, but I'm troubled by the inconsistency between
> the two case statements above.  In the GL_RGBA16UI case, the MESA format
> lists the components in the same order as the GL format (RGBA).  But in the
> GL_RGB10_A2UI case, the MESA format lists the components in the opposite
> order as the GL format (ABGR instead of RGBA).  Unless there are some
> really counterintuitive naming conventions at work here, it seems like
> these cases can't both be right.
>

Yes.  Mesa's formats mix two naming conventions: MESA_FORMAT_XYZW_N
specifies the XYZW components in memory order, each of N bits, while
MESA_FORMAT_XYZWNMPQ specifies the XYZW components from MSB to LSB in an
endianness-dependent manner.  The GL_RGB10_A2UI internal format refers
to an RGBA_INTEGER format of type UNSIGNED_INT_2_10_10_10_REV, which
matches the Mesa format above -- see table 8.8 in the GL 4.4 spec.
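
Just to illustrate the second convention (a sketch for this email, not
part of the patch): a GL_RGB10_A2UI texel of type
GL_UNSIGNED_INT_2_10_10_10_REV packs into one 32-bit word with A in the
two most significant bits and R in the ten least significant ones, i.e.
ABGR from MSB to LSB, which is exactly what MESA_FORMAT_ABGR2101010_UINT
spells out:

   #include <stdint.h>

   /* Sketch only: pack one GL_RGB10_A2UI texel the way
    * GL_UNSIGNED_INT_2_10_10_10_REV lays it out in a 32-bit word. */
   static uint32_t
   pack_rgb10_a2ui(uint32_t r, uint32_t g, uint32_t b, uint32_t a)
   {
      return (a & 0x3) << 30 | (b & 0x3ff) << 20 |
             (g & 0x3ff) << 10 | (r & 0x3ff);
   }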

> Assuming the above code is really correct, could you add some comments
> explaining the apparent inconsistency?  Also, do you have piglit tests that
> validate the above code?
>
No, sorry.

>[...]
>> +   IMAGE_FORMAT_CLASS_11_11_10,
>>
>
> The ARB_shader_image_load_store spec describes this class as "for 11/11/10 packed
> floating-point formats"
>
>
>> +   IMAGE_FORMAT_CLASS_4X8,
>> +   IMAGE_FORMAT_CLASS_4X16,
>> +   IMAGE_FORMAT_CLASS_4X32,
>> +   IMAGE_FORMAT_CLASS_2_10_10_10
>>
>
> and it describes this class as "for 10/10/10/2 packed formats".  Is there a
> good reason why we're reversing the order of the bit counts in one case but
> not the other?
>

Yeah...  We should probably follow the same naming convention in both
classes.
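For instance (purely illustrative naming, I haven't settled on it yet),
both packed classes could follow the spec's MSB-to-LSB wording:

   /* Name both packed classes from MSB to LSB, matching the spec's
    * "11/11/10" and "10/10/10/2" wording. */
   IMAGE_FORMAT_CLASS_11_11_10,   /* 11/11/10 packed floating-point */
   IMAGE_FORMAT_CLASS_4X8,
   IMAGE_FORMAT_CLASS_4X16,
   IMAGE_FORMAT_CLASS_4X32,
   IMAGE_FORMAT_CLASS_10_10_10_2  /* 10/10/10/2 packed */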

>
>> +   /** \} */
>> +};
>> +
>> +
>> +static GLboolean
>> +validate_image_unit(struct gl_context *ctx, struct gl_image_unit *u)
>> +{
>> +   struct gl_texture_object *t = u->TexObj;
>> +   struct gl_texture_image *img;
>> +
>> +   if (!t || u->Level < t->BaseLevel ||
>> +       u->Level > t->_MaxLevel)
>> +      return GL_FALSE;
>> +
>> +   _mesa_test_texobj_completeness(ctx, t);
>> +
>> +   if ((u->Level == t->BaseLevel && !t->_BaseComplete) ||
>> +       (u->Level != t->BaseLevel && !t->_MipmapComplete))
>> +      return GL_FALSE;
>> +
>> +   if (u->Layered)
>> +      img = t->Image[0][u->Level];
>> +   else
>> +      img = t->Image[u->Layer][u->Level];
>>
>
> This can't be right, because u->Layer can be much larger than 6, but the
> size of the t->Image[] array is 6.  I believe what we need to do instead is:
>
> (a) for cubemaps, we need to validate that 0 <= u->Layer < 6 before
> accessing t->Image, otherwise we will access garbage memory.
>
> (b) for non-cubemaps (e.g. 2D arrays), all the layers for each miplevel are
> stored in a single gl_texture_image, so we need to access
> t->Image[0][u->Level], and then check that 0 <= u->Layer < t->Depth.
>
> (c) I'm not sure how cubemap arrays are handled by the Mesa front end, so
> I'm not sure what is correct to do for them.
>

I've addressed these issues here [1]; I can send a v2 to the mailing
list if you have more suggestions specific to the new version.
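
Roughly, the fix amounts to something like this (a sketch for the sake
of discussion -- the actual code in [1] may differ in the details):

   /* Hypothetical helper: pick the gl_texture_image for an image unit
    * binding and reject out-of-range layers. */
   static struct gl_texture_image *
   image_unit_tex_image(struct gl_texture_object *t, struct gl_image_unit *u)
   {
      if (t->Target == GL_TEXTURE_CUBE_MAP) {
         /* Cube maps keep one gl_texture_image per face, so the layer
          * selects the face and must stay below 6. */
         if (u->Layer >= 6)
            return NULL;
         return t->Image[u->Layer][u->Level];
      } else {
         /* All other targets keep every layer of a miplevel in a single
          * gl_texture_image, so the layer is bounded by the image depth. */
         struct gl_texture_image *img = t->Image[0][u->Level];

         if (img && !u->Layered && u->Layer >= img->Depth)
            return NULL;
         return img;
      }
   }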

> Also, the spec says:
>
>     If the texture identified by <texture> does not have multiple layers or
>     faces, the entire texture level is bound, regardless of the values
>     specified by <layered> and <layer>.
>
> I think that means that if the texture is non-layered, we should always set
> img = t->Image[0][u->Level], regardless of the setting of u->Layered.
>
> Note: as a follow up, it might be worth creating a
> _mesa_tex_target_is_layered() function so that we don't have to duplicate
> the knowledge of which texture targets are layered in multiple places.
>

OK, I've done that.  It looks like the code dealing with layered
framebuffers could benefit from the same function.
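
Something like this, roughly (a sketch; the version in the branch may
enumerate the targets a bit differently):

   /* True for targets whose images have multiple layers or faces. */
   GLboolean
   _mesa_tex_target_is_layered(GLenum target)
   {
      switch (target) {
      case GL_TEXTURE_1D_ARRAY:
      case GL_TEXTURE_2D_ARRAY:
      case GL_TEXTURE_2D_MULTISAMPLE_ARRAY:
      case GL_TEXTURE_3D:
      case GL_TEXTURE_CUBE_MAP:
      case GL_TEXTURE_CUBE_MAP_ARRAY:
         return GL_TRUE;
      default:
         return GL_FALSE;
      }
   }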

>
>> +
>> +   if (!img || img->Border ||
>> +       get_image_format_class(img->TexFormat) == IMAGE_FORMAT_CLASS_NONE ||
>> +       img->NumSamples > ctx->Const.MaxImageSamples)
>> +      return GL_FALSE;
>> +
>> +   if (t->ImageFormatCompatibility == GL_IMAGE_FORMAT_COMPATIBILITY_BY_SIZE &&
>> +       _mesa_get_format_bytes(img->TexFormat) !=
>> +       _mesa_get_format_bytes(u->_ActualFormat))
>> +      return GL_FALSE;
>> +
>> +   if (t->ImageFormatCompatibility == GL_IMAGE_FORMAT_COMPATIBILITY_BY_CLASS &&
>> +       get_image_format_class(img->TexFormat) !=
>> +       get_image_format_class(u->_ActualFormat))
>> +      return GL_FALSE;
>>
>
> If some bug elsewhere in Mesa causes t->ImageFormatCompatibility to get set
> to something other than GL_IMAGE_FORMAT_COMPATIBILITY_BY_SIZE or
> GL_IMAGE_FORMAT_COMPATIBILITY_BY_CLASS, then the code above will silently
> accept the texture and we probably won't notice the bug.  How about if we
> do something like this instead?
>
> switch (t->ImageFormatCompatibility) {
> case GL_IMAGE_FORMAT_COMPATIBILITY_BY_SIZE:
>    if (_mesa_get_format_bytes(img->TexFormat) !=
>        _mesa_get_format_bytes(u->_ActualFormat)) {
>       return GL_FALSE;
>    }
>    break;
> case GL_IMAGE_FORMAT_COMPATIBILITY_BY_CLASS:
>    if (get_image_format_class(img->TexFormat) !=
>        get_image_format_class(u->_ActualFormat)) {
>       return GL_FALSE;
>    }
>    break;
> default:
>    assert(!"Unexpected image format compatibility type");
> }
>

Hm...  Why not.

>
>> +
>> +   return GL_TRUE;
>> +}
>> +
>> +static GLboolean
>> +validate_bind_image_texture(struct gl_context *ctx, GLuint unit,
>> +                            GLuint texture, GLint level, GLboolean layered,
>> +                            GLint layer, GLenum access, GLenum format)
>> +{
>>
>
> Can we add the following assert:
>
> assert(ctx->Const.MaxImageUnits <= MAX_IMAGE_UNITS);
>
> Otherwise there's a danger that a driver might accidentally set
> ctx->Const.MaxImageUnits to a value greater than MAX_IMAGE_UNITS, and then
> some of the array accesses below would overflow.
>

Okay.

>
>> +   if (unit > ctx->Const.MaxImageUnits) {
>>
>
> This should be ">=".
>

I could swear I had fixed that one already.

>
>> +      _mesa_error(ctx, GL_INVALID_VALUE, "glBindImageTexture(unit)");
>> +      return GL_FALSE;
>> +   }
>> +
>> +   if (level < 0) {
>> +      _mesa_error(ctx, GL_INVALID_VALUE, "glBindImageTexture(level)");
>> +      return GL_FALSE;
>> +   }
>> +
>> +   if (layer < 0) {
>> +      _mesa_error(ctx, GL_INVALID_VALUE, "glBindImageTexture(layer)");
>> +      return GL_FALSE;
>> +   }
>> +
>> +   if (access != GL_READ_ONLY &&
>> +       access != GL_WRITE_ONLY &&
>> +       access != GL_READ_WRITE) {
>> +      _mesa_error(ctx, GL_INVALID_VALUE, "glBindImageTexture(access)");
>> +      return GL_FALSE;
>> +   }
>> +
>> +   if (!get_image_format(format)) {
>> +      _mesa_error(ctx, GL_INVALID_VALUE, "glBindImageTexture(format)");
>> +      return GL_FALSE;
>> +   }
>> +
>> +   return GL_TRUE;
>> +}
>> +
>> +void GLAPIENTRY
>> +_mesa_BindImageTexture(GLuint unit, GLuint texture, GLint level,
>> +                       GLboolean layered, GLint layer, GLenum access,
>> +                       GLenum format)
>> +{
>> +   GET_CURRENT_CONTEXT(ctx);
>> +   struct gl_texture_object *t = NULL;
>> +   struct gl_image_unit *u;
>> +
>> +   if (!validate_bind_image_texture(ctx, unit, texture, level,
>> +                                    layered, layer, access, format))
>> +      return;
>> +
>> +   u = &ctx->ImageUnits[unit];
>> +
>> +   FLUSH_VERTICES(ctx, 0);
>> +   ctx->NewDriverState |= ctx->DriverFlags.NewImageUnits;
>> +
>> +   if (texture) {
>> +      t = _mesa_lookup_texture(ctx, texture);
>> +      if (!t) {
>> +         _mesa_error(ctx, GL_INVALID_VALUE, "glBindImageTexture(texture)");
>> +         return;
>> +      }
>> +
>> +      _mesa_reference_texobj(&u->TexObj, t);
>> +      u->Level = level;
>> +      u->Layered = layered;
>> +      u->Layer = (layered ? 0 : layer);
>> +      u->Access = access;
>> +      u->Format = format;
>> +      u->_ActualFormat = get_image_format(format);
>> +      u->_Valid = validate_image_unit(ctx, u);
>> +
>> +   } else {
>> +      _mesa_reference_texobj(&u->TexObj, NULL);
>> +      u->_Valid = GL_FALSE;
>> +   }
>>
>
> I'd prefer to move the call to validate_image_unit() outside the if/then
> statement.  That way we exercise the code in validate_image_unit() that
> marks an image unit as invalid when its TexObj is NULL.
>

The code looks slightly clearer to me as it is, but I don't really have
a strong preference.
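
If we did hoist it out, I suppose it would end up looking roughly like
this (sketch):

   if (texture) {
      t = _mesa_lookup_texture(ctx, texture);
      if (!t) {
         _mesa_error(ctx, GL_INVALID_VALUE, "glBindImageTexture(texture)");
         return;
      }

      _mesa_reference_texobj(&u->TexObj, t);
      u->Level = level;
      u->Layered = layered;
      u->Layer = (layered ? 0 : layer);
      u->Access = access;
      u->Format = format;
      u->_ActualFormat = get_image_format(format);
   } else {
      _mesa_reference_texobj(&u->TexObj, NULL);
   }

   /* Validate unconditionally so the !TexObj early-out in
    * validate_image_unit() is what marks the unit invalid. */
   u->_Valid = validate_image_unit(ctx, u);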

>
>> +
>> +   if (ctx->Driver.BindImageTexture)
>> +      ctx->Driver.BindImageTexture(ctx, u, t, level, layered,
>> +                                   layer, access, format);
>> +}
>> +
>>

Thanks.

[1] http://cgit.freedesktop.org/~currojerez/mesa/commit/?h=image-load-store&id=10026c1e80371a27438930d7134f76dec56da53d