A texture-data alignment bug when tracing
José Fonseca
jose.r.fonseca at gmail.com
Fri Sep 14 09:03:24 PDT 2012
Hi Carl,
On Mon, Sep 10, 2012 at 11:11 PM, Carl Worth <cworth at cworth.org> wrote:
> I've attached a trace that demonstrates a bug in capturing texture data
> blobs while tracing. The code being traced does the following:
>
> /* A 2x2 RGB texture requires 16 bytes with the default
> * GL_UNPACK_ALIGNMENT value of 4. */
> uint8_t data[16];
>
> /* ... Initialize data, gen and bind texture here ... */
>
> glTexImage2D (GL_TEXTURE_2D,
>               0, GL_RGB8,
>               2, 2, 0,
>               GL_RGB, GL_UNSIGNED_BYTE, data);
>
> As you can see, the resulting trace has:
>
> 850 glTexImage2D(target = GL_TEXTURE_2D, level = 0,
> internalformat = GL_RGB8, width = 2, height = 2, border = 0,
> format = GL_RGB, type = GL_UNSIGNED_BYTE, pixels = blob(12))
>
> That is, the trace has captured only 12 bytes of data. But 14 bytes of
> data are required, (and for full alignment 16 bytes are
> required).
>
> It appears that _gl_image_size in glsize.hpp is doing lots of work to
> compute the correct image size (and checking GL_UNPACK_ALIGNMENT). I
> believe that the code that should be performing the alignment is the
> following:
>
> if ((GLint)bits_per_pixel < alignment*8 &&
>     (bits_per_pixel & 7) == 0 &&
>     _is_pot(bits_per_pixel)) {
>     row_stride = _align(row_stride, alignment);
> }
>
> The second and third sub-conditions of that condition appear to be
> causing the problem here. In this case bits_per_pixel is 12, so
> (bits_per_pixel & 7) != 0. Also, bits_per_pixel is not a power of 2.
Actually, bits_per_pixel here should be 3*8 == 24. But indeed, that's not a POT.
> I'm not sure why either of these sub-conditions are present. Why would
> we not want to always satisfy the alignment requirements?
>
> I'll follow up with a patch that just removes these
> sub-conditions. Please let me know if there's some more appropriate
> solution here.
glspec21.20061201.pdf, pg 131, says
"If the number of bits per element is not 1, 2, 4, or 8 times
the number of bits in a GL ubyte, then k = nl for all values of a."
where a is the alignment. That is, when the number of bits per element
is not 8, 16, 32, or 64 bits, the alignment should be ignored when
computing row_stride.
The "(bits_per_pixel & 7) == 0 && _is_pot(bits_per_pixel)" check is not
fully correct either. It should be:
diff --git a/helpers/glsize.hpp b/helpers/glsize.hpp
index a589199..cc31414 100644
--- a/helpers/glsize.hpp
+++ b/helpers/glsize.hpp
@@ -683,9 +683,11 @@ _gl_image_size(GLenum format, GLenum type, GLsizei width, GLsizei height, GLsize
     size_t row_stride = (row_length*bits_per_pixel + 7)/8;
 
-    if ((GLint)bits_per_pixel < alignment*8 &&
-        (bits_per_pixel & 7) == 0 &&
-        _is_pot(bits_per_pixel)) {
+    if ((bits_per_pixel == 1*8 ||
+         bits_per_pixel == 2*8 ||
+         bits_per_pixel == 4*8 ||
+         bits_per_pixel == 8*8) &&
+        (GLint)bits_per_pixel < alignment*8) {
         row_stride = _align(row_stride, alignment);
     }
But still, AFAICT, unless I'm misinterpreting the meaning of "bits per
element", apitrace is correct, and the driver/application is wrong.
Jose