[Mesa-dev] [Bug 109816] glGenerateMipmap makes deepest level far from average texel value for NPOT textures

bugzilla-daemon at freedesktop.org bugzilla-daemon at freedesktop.org
Sat Mar 2 14:03:33 UTC 2019


https://bugs.freedesktop.org/show_bug.cgi?id=109816

            Bug ID: 109816
           Summary: glGenerateMipmap makes deepest level far from average
                    texel value for NPOT textures
           Product: Mesa
           Version: 18.2
          Hardware: Other
                OS: All
            Status: NEW
          Severity: normal
          Priority: medium
         Component: Mesa core
          Assignee: mesa-dev at lists.freedesktop.org
          Reporter: b7.10110111 at gmail.com
        QA Contact: mesa-dev at lists.freedesktop.org

Created attachment 143513
  --> https://bugs.freedesktop.org/attachment.cgi?id=143513&action=edit
Test project

While trying to compute the average value of the pixels rendered in a scene,
I came across a problem with the way Mesa's implementation of
glGenerateMipmap handles NPOT textures.
To estimate the average, I render the scene into an FBO texture and then run
the following C++ code (a full compilable test project is attached to this
bug report):

// BEGIN code
// Assumes an OpenGL context is current; the GL loader, `texFBO` (the FBO
// colour texture) and `vec4` (a 4-component float vector, e.g. glm::vec4)
// come from the attached test project.
#include <algorithm>
#include <cassert>
#include <cmath>
#include <iostream>
#include <vector>

void getMeanPixelValue(int texW, int texH)
{
    // Get the average value of the rendered pixels as the value of the
    // deepest mipmap level
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texFBO);
    glGenerateMipmap(GL_TEXTURE_2D);

    using namespace std;
    // Formula from the GL spec, "Mipmapping" subsection of section 3.8.11
    // "Texture Minification"
    const int totalMipmapLevels = 1 + int(floor(log2(max(texW, texH))));
    const int deepestLevel = totalMipmapLevels - 1;

    // Sanity check: the deepest level must be 1x1
    int deepestMipmapLevelWidth = -1, deepestMipmapLevelHeight = -1;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, deepestLevel,
        GL_TEXTURE_WIDTH, &deepestMipmapLevelWidth);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, deepestLevel,
        GL_TEXTURE_HEIGHT, &deepestMipmapLevelHeight);
    assert(deepestMipmapLevelWidth == 1);
    assert(deepestMipmapLevelHeight == 1);

    vec4 pixel;
    glGetTexImage(GL_TEXTURE_2D, deepestLevel, GL_RGBA, GL_FLOAT, &pixel[0]);

    // Get the average value in an actual summing loop over all the pixels
    std::vector<vec4> data(texW * texH);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, data.data());
    vec4 avg(0, 0, 0, 0);
    for (const auto& v : data)
        avg += v;
    avg /= texW * texH;

    std::cerr << "Mipmap value: " << pixel[0] << ", " << pixel[1]
              << ", " << pixel[2] << ", " << pixel[3] << "\n";
    std::cerr << "True average: " << avg[0] << ", " << avg[1]
              << ", " << avg[2] << ", " << avg[3] << "\n";
}
// END code

Here is what I get from this code with the Mesa software renderer (using
LIBGL_ALWAYS_SOFTWARE=1):

New size: 512x512
Mipmap value: 0.195312, 0.390625, 0.585938, 0.78125
True average: 0.195312, 0.390625, 0.585938, 0.78125
New size: 512x511
Mipmap value: 0, 1.19209e-07, 1, 1
True average: 0.195695, 0.391389, 0.587084, 0.782779

With Intel Haswell (Core i7-4765T) hardware acceleration, the 1.19209e-07 value
becomes simply 0.

For comparison, below is what I get on an NVidia GPU with the proprietary
"nvidia" driver. This is what I would expect from Mesa and any other decent
OpenGL implementation.

New size: 512x512
Mipmap value: 0.195312, 0.390625, 0.585938, 0.78125
True average: 0.195312, 0.390625, 0.585938, 0.78125
New size: 512x511
Mipmap value: 0.195606, 0.39101, 0.586946, 0.782537
True average: 0.195695, 0.391389, 0.587084, 0.782779
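
Judging by these numbers, the expected behaviour matches what a reduction
filter gives when it weights each source texel by how much of the
destination texel's footprint it covers; such a filter preserves the image
mean even when a dimension is odd. The sketch below is purely illustrative
(it is not taken from Mesa, the nvidia driver, or the attached project) and
shows the idea in 1D; applying it along both axes at every level would keep
the deepest 1x1 level close to the true average.

// BEGIN illustration (hypothetical, not from any driver)
#include <algorithm>
#include <cmath>
#include <vector>

// Downsample one row from src.size() texels to dstW texels with a
// coverage-weighted box filter. Output texel i has a footprint of width
// scale = srcW/dstW source texels; every source texel contributes in
// proportion to its overlap with that footprint, so the mean of dst
// equals the mean of src even for odd (NPOT) widths.
std::vector<float> downsampleRow(const std::vector<float>& src, const int dstW)
{
    const int srcW = int(src.size());
    const double scale = double(srcW) / dstW;   // e.g. 511/255 = 2.0039...
    std::vector<float> dst(dstW);
    for (int i = 0; i < dstW; ++i)
    {
        const double begin = i * scale, end = (i + 1) * scale;
        const int jEnd = std::min(srcW, int(std::ceil(end)));
        double sum = 0;
        for (int j = int(std::floor(begin)); j < jEnd; ++j)
        {
            // Overlap of source texel [j, j+1) with the footprint [begin, end)
            const double weight = std::min<double>(j + 1, end)
                                - std::max<double>(j, begin);
            sum += weight * src[j];
        }
        dst[i] = float(sum / scale);
    }
    return dst;
}
// END illustration

By contrast, a filter that always averages exactly two neighbouring texels
and drops the odd trailing one loses a growing fraction of the image at each
level, which would explain a deepest level that reflects only a corner of
the texture rather than its mean.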
