[Mesa-dev] [PATCH 0/3] verify max vertex attrib stride

Erik Faye-Lund erik.faye-lund at collabora.com
Mon Jul 9 07:54:35 UTC 2018

On 06. juli 2018 18:43, Roland Scheidegger wrote:
> Am 06.07.2018 um 12:03 schrieb Erik Faye-Lund:
>> OpenGL 4.4 and OpenGL ES 3.1 both require the maximum
>> vertex attrib stride to be at least 2048. If this isn't
>> the case, we shouldn't expose these API versions.
>> Unfortunately, the r600 driver only supports 2047. To
>> avoid regressions in the supported GL version, the
>> first patch modifies the returned value.
>> I'm not sure if that last one is a good idea or not, as
>> it's strictly speaking non-conformant. But applications
>> won't expect GL errors generated when using strides of
>> 2048 either, which is what currently happens.
>> The initial motivation for this patch-series is to avoid
>> exposing a too high spec version in virgl and then get
>> dEQP failures when running on old hardware. The virgl
>> specific bits are being sent separately, because they
>> depend on some other not-yet-upstream things ATM.
>> Thoughts?
>> Erik Faye-Lund (3):
>>    r600: report incorrect max-vertex-attrib for GL 4.4
>>    mesa: verify MaxVertexAttribStride for GL 4.4
>>    mesa: verify MaxVertexAttribStride for GLES 3.1
>>   src/gallium/drivers/r600/r600_pipe.c | 3 ++-
>>   src/mesa/main/version.c              | 2 ++
>>   2 files changed, 4 insertions(+), 1 deletion(-)
> Personally I think it's _much_ better to lie about the supported GL
> version rather than the maximum vertex attrib stride (I don't know if
> dEQP would actually have a test which tests for the max stride also
> working than just being advertized, which would be way more relevant and
> fail in any case).

I don't think I agree in this case; OpenGL 4.4 applications that use
*exactly* strides of 2048 probably won't even check the max-stride, as it's
guaranteed to work by the standard anyway. Applications that use *more*
might check for this value, but as the query was introduced in 4.4 it's
likely that applications will just get a gl-error down an unexpected
code-path, which can lead to the wrong attribute bound (possibly pointing
to client-memory, which can lead to segfaults). Lying about the max
attribute stride will "just" lead to rendering glitches.

But even *if* it was better to lie about the GL version, I don't think we
have a good way to contain something like this to just the driver in
question, and I *certainly* don't think it's better to have hacks for a
single driver in the generic code that affects *all* drivers. But if we
were to lie about the GL version rather than the max-stride, we'd IMO need
to introduce something like a driver-switch to control the GL version,
which just seems like a bad idea to me, as it's easy to get stuck on an
old version for artificial reasons, or to miss something else important
and get an artificially high version (and risk breaking apps on working
systems when fixing it again).

There's of course a third option here, which is to go by the book and
leave r600 on OpenGL 4.3 / OpenGL ES 3.0. That feels like the "correct
thing to do" to me, but I'm sure someone will be annoyed if they can no
longer play games they've already bought. But perhaps it's better for
the users to work around this in applications where it's known to work
than to have *any* sort of deceptive code in here?

> (Because if you're going to use stride 2048, it is hugely unlikely you'd
> just rely on GL 4.4 for that, hence it wouldn't work with older versions
> of GL anyway neither, where ALL strides are just supposed to work.)

I think what the ARB realized for GL 4.4 is that not all implementations
support all strides. In that light, I think it's reasonable to say
that for GL 4.3 and before, an *undefined* stride was supposed to work.
> Roland
> FWIW, I think it should be possible to work around that limitation on
> r600. Since vertex fetch just uses a fetch shader, you could have a
> shader key bit indicating strides of 2048, program a stride of 1024
> instead and multiply the index to fetch by 2 in the shader. Of course
> that could cause shader recompiles (never in practice...) and there's
> some overhead associated with it, so might not be worth it...

I don't think it would be a very high priority thing to fix, but it would
certainly be interesting. Sadly, I don't have an r600 card myself, so I
can't really work on that.

Another option would be to do some CPU-side reshuffling of vertex-
attributes that are overly distant. Dunno.

If we decided to go down the "let's stick to the spec"-approach here, one
of these approaches might be appropriate to regain the 4.4 feature...
