[Mesa-dev] Spec interpretation question: glBlitFramebuffer behavior when src rectangle out of range

Paul Berry stereotype441 at gmail.com
Wed May 16 06:47:00 PDT 2012


On 14 May 2012 13:42, Paul Berry <stereotype441 at gmail.com> wrote:

> On 14 May 2012 11:06, Ian Romanick <idr at freedesktop.org> wrote:
>
>> On 05/14/2012 07:30 AM, Paul Berry wrote:
>>
>>> I'm trying to figure out how glBlitFramebuffer() is supposed to behave
>>> when the source rectangle exceeds the bounds of the read framebuffer.
>>> For example, if the read framebuffer has width 4 and height 1, and the
>>> draw framebuffer has width 100 and height 100, what should be the result
>>> of these two calls?
>>>
>>> glBlitFramebuffer(-1, 0, 3, 1, 0, 0, 9, 1, GL_COLOR_BUFFER_BIT,
>>> GL_NEAREST);
>>> glBlitFramebuffer(-1, 0, 3, 1, 0, 0, 9, 1, GL_COLOR_BUFFER_BIT,
>>> GL_LINEAR);
>>>
>>> (In other words, the source rect is 4 pixels wide with 1 pixel falling
>>> off the left hand edge of the read framebuffer, and the destination rect
>>> is 9 pixels wide with all pixels falling within the draw framebuffer).
>>>
>>>
>>> Here is the relevant text from the spec (e.g. GL 4.2 core spec, p316):
>>>
>>> "The actual region taken from the read framebuffer is limited to the
>>> intersection of the source buffers being transferred, which may include
>>> the color buffer selected by the read buffer, the depth buffer, and/or
>>> the stencil buffer depending on mask. The actual region written to the
>>> draw framebuffer is limited to the intersection of the destination
>>> buffers being written, which may include multiple draw buffers, the
>>> depth buffer, and/or the stencil buffer depending on mask. Whether or
>>> not the source or destination regions are altered due to these limits,
>>> the scaling and offset applied to pixels being transferred is performed
>>> as though no such limits were present."
>>>
>>
>> This is trying to describe the case where buffers attached to the source
>> FBO have different sizes.  If the color buffer is 64x64, the depth buffer
>> is 32x32, and the selected buffer mask is GL_COLOR_BUFFER_BIT |
>> GL_DEPTH_BUFFER_BIT, then the source region is treated as though it's
>> 32x32.  If the color buffer is 64x64, the depth buffer is 32x32, and the
>> selected buffer mask is only GL_COLOR_BUFFER_BIT, then the source region is
>> treated as though it's 64x64.
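(Restating that rule as a quick Python sketch, using the standard GL mask bit values; the helper function is mine, for illustration only:)

```python
# Standard GL mask bit values (from gl.h)
GL_COLOR_BUFFER_BIT = 0x00004000
GL_DEPTH_BUFFER_BIT = 0x00000100
GL_STENCIL_BUFFER_BIT = 0x00000400

def effective_source_region(buffer_sizes, mask):
    """Intersection of the sizes of the buffers selected by mask.

    buffer_sizes maps a mask bit -> (width, height) of the attached buffer.
    """
    selected = [wh for bit, wh in buffer_sizes.items() if mask & bit]
    return (min(w for w, h in selected), min(h for w, h in selected))
```

With a 64x64 color buffer and a 32x32 depth buffer, this gives (32, 32) when both bits are set and (64, 64) when only the color bit is set, matching the two cases described above.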
>>
>>
>>  And then later:
>>>
>>> "If a linear filter is selected and the rules of LINEAR sampling would
>>> require sampling outside the bounds of a source buffer, it is as though
>>> CLAMP_TO_EDGE texture sampling were being performed. If a linear filter
>>> is selected and sampling would be required outside the bounds of the
>>> specified source region, but within the bounds of a source buffer, the
>>> implementation may choose to clamp while sampling or not."
>>>
>>
>> The last sentence here is also about the mismatched buffer size scenario.
>>  If there are two color buffers attached to the source FBO, one being 32x32
>> and the other being 64x64, a blit of
>>
>>        glBlitFramebuffer(0, 0, 64, 64, 0, 0, 128, 128,
>>                        GL_COLOR_BUFFER_BIT,
>>                        GL_LINEAR);
>>
>> would sample outside the 32x32 intersection region of the source buffers.
>>  However, one of the color buffers has pixel data outside that region.  The
>> implementation may or may not sample those pixels.
>
>
> What's curious about your interpretation of this sentence is that if it's
> trying to address mismatched buffer sizes, why does it begin with "If a linear
> filter is selected..."?  It seems like the question of how to handle
> mismatched buffer sizes ought to apply regardless of whether a linear
> filter is selected.
>
> I had a different interpretation of that sentence.  I thought it was
> talking about the situation where linear interpolation causes the image to
> be sampled at a position outside of the source rectangle.  For example, if
> the source and destination framebuffer are 100x100, and you do the blit:
>
> glBlitFramebuffer(10, 10, 12, 12, 20, 20, 24, 24, GL_COLOR_BUFFER_BIT,
> GL_NEAREST)
>
> Then according to the rules of nearest-neighbor interpolation (considering
> just the X dimension), the pixels will get copied like so:
>
> dst[20] = src[10]
> dst[21] = src[10]
> dst[22] = src[11]
> dst[23] = src[11]
>
> But if you do a GL_LINEAR blit, then according to the rules of linear
> interpolation, the pixels will get copied like so:
>
> dst[20] = 0.25*src[9] + 0.75*src[10]
> dst[21] = 0.75*src[10] + 0.25*src[11]
> dst[22] = 0.25*src[10] + 0.75*src[11]
> dst[23] = 0.75*src[11] + 0.25*src[12]
>
> So we have the counter-intuitive situation where even in the absence of
> mismatched buffer sizes, some of the pixels within the destination
> rectangle wind up partially depending on pixels outside the source
> rectangle.
>
> My interpretation of the last sentence is that an implementation is
> permitted, but not required, to clamp src[9] and src[12] to within the
> bounds of the source rectangle, to get:
>
> dst[20] = 0.25*src[10] + 0.75*src[10] = src[10]
> dst[21] = 0.75*src[10] + 0.25*src[11]
> dst[22] = 0.25*src[10] + 0.75*src[11]
> dst[23] = 0.75*src[11] + 0.25*src[11] = src[11]
>
> If my interpretation is correct and this language was only intended to
> cover clamping by one pixel to cover this counter-intuitive side effect of
> linear interpolation, then I'm not sure it necessarily follows that
> clamping is the intended behavior when the source rectangle is outside the
> bounds of the source fbo.
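To make the sampling arithmetic above concrete, here is a small Python sketch of the 1-D GL_LINEAR blit math (my own simulation, not driver code; the function name and the clamp_to_rect flag are mine).  With clamping disabled it reproduces the src[9]/src[12] weights above; with it enabled it reproduces the clamped results.

```python
import math

def linear_blit_1d(src, src_x0, src_x1, dst_x0, dst_x1, clamp_to_rect=False):
    """Simulate GL_LINEAR sampling for a 1-D blit of [src_x0, src_x1)
    onto [dst_x0, dst_x1).  clamp_to_rect optionally clamps the sample
    footprint to the source rectangle, as the spec permits."""
    scale = (src_x1 - src_x0) / (dst_x1 - dst_x0)
    out = {}
    for dx in range(dst_x0, dst_x1):
        # source coordinate of this destination pixel's center
        s = src_x0 + (dx + 0.5 - dst_x0) * scale
        # linear filter: blend the two texels whose centers straddle s
        lo = int(math.floor(s - 0.5))
        frac = (s - 0.5) - lo
        hi = lo + 1
        if clamp_to_rect:
            lo = min(max(lo, src_x0), src_x1 - 1)
            hi = min(max(hi, src_x0), src_x1 - 1)
        out[dx] = (1 - frac) * src[lo] + frac * src[hi]
    return out
```

For the blit (10, 12) -> (20, 24) this yields dst[20] = 0.25*src[9] + 0.75*src[10] without clamping, and dst[20] = src[10] with it.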
>
>
>>
>>  The behaviour I observe on my nVidia system is: in GL_NEAREST mode,
>>> destination pixels that map to a source pixel outside the read
>>> framebuffer are clipped out of the blit, and are left unchanged.  So, in
>>>
>>
>> So, the source framebuffer has a single 32x32 color buffer, the blit
>>
>>        glBlitFramebuffer(0, 0, 64, 64, 0, 0, 128, 128,
>>                        GL_COLOR_BUFFER_BIT,
>>                        GL_LINEAR);
>>
>> only modifies the destination pixels (0, 0) - (64, 64)?
>>
>> While I can see that as being a valid design choice, it's not the one the
>> ARB made.  I can't find anything in the spec, even going back to
>> GL_EXT_framebuffer_blit, to support this behavior.
>>
>>
>>  the GL_NEAREST call above, the first two destination pixels are left
>>> unchanged.  In GL_LINEAR mode, the same set of pixels is clipped off as
>>> in GL_NEAREST mode, and the remaining pixels are interpolated as though
>>> no clipping had occurred, with CLAMP_TO_EDGE behaviour occurring for
>>> situations where linear interpolation would have required reading a
>>> non-existent pixel from the read framebuffer.  Notably, this means that
>>> the nVidia driver is not simply reducing the size of the source and
>>> destination rectangles to eliminate the clipped off pixels, because
>>>
>>> glBlitFramebuffer(-1, 0, 3, 1, 0, 0, 9, 1, GL_COLOR_BUFFER_BIT,
>>> GL_LINEAR);
>>>
>>> does *not* produce equivalent interpolation to
>>>
>>> glBlitFramebuffer(0, 0, 3, 1, 2, 0, 9, 1, GL_COLOR_BUFFER_BIT,
>>> GL_LINEAR);
>>>
>>>
>>> Mesa, on the other hand, never clips.  The behaviour of destination
>>> pixels that map to a source pixel outside the read framebuffer depends
>>> on whether the read framebuffer is backed by a texture or a
>>> renderbuffer.  If it's backed by a texture, then those pixels are
>>> rendered with CLAMP_TO_EDGE behaviour, regardless of whether the blit is
>>> GL_LINEAR or GL_NEAREST.  If it's backed by a renderbuffer, then garbage
>>> is written to those pixels.
>>>
>>
>> I can't find anything in the spec to support writing garbage either. Any
>> idea what the garbage is?  I'm a little surprised that the behavior is
>> different for textures and renderbuffers.
>
>
> The garbage happens because Mesa implements non-1:1 blits using a meta-op,
> and the meta-op code is written in terms of the GL API, which doesn't allow
> texturing from a renderbuffer-backed framebuffer.  So, in order to blit
> from a renderbuffer-backed framebuffer, the meta-op actually does two
> operations: first it copies from the source framebuffer to a temporary
> texture using glCopyTexSubImage2D(), then it does the blit by drawing from
> the texture.  If the source rectangle is outside the bounds of the source
> framebuffer, then glCopyTexSubImage2D() doesn't overwrite the whole temporary
> texture.  So the garbage is whatever was in the temporary texture before
> the blit happened (in my tests, it's usually leftover data from a previous
> blit).
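(A sketch of why that produces garbage, as a 1-D Python simulation of the meta-op's copy step; the names are mine, not Mesa's:)

```python
def copy_to_temp_texture(src, src_x0, src_width, temp):
    """Model the meta-op's glCopyTexSubImage2D step in 1-D: only the
    in-bounds part of the source rect is copied into the temporary
    texture, so out-of-bounds texels keep whatever they held before."""
    for i in range(src_width):
        sx = src_x0 + i
        if 0 <= sx < len(src):
            temp[i] = src[sx]  # in-bounds pixel: copied
        # out-of-bounds pixel: temp[i] is left as-is -- stale "garbage"
    return temp
```

A source rect starting at -1 leaves the first temp texel untouched, which is exactly the leftover data observed in the tests.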
>
> When we talked about this in person, you expressed interest in finding out
> what happens on other implementations (e.g. Apple, Catalyst).  I'll work up
> a Piglit test so we can try it out on several implementations.
>

Ok, I wrote up a quick and dirty Piglit test, and with the help of Chad
Versace and Kenneth Graunke I've now run it on the Mesa driver for i965,
the proprietary Linux drivers for nVidia and AMD, and a Mac OS X system.
 The consensus seems to be in support of Ian's interpretation: if the
source rectangle of a blit extends outside the bounds of the source framebuffer,
the missing source pixels are supplied in a fashion analogous
to CLAMP_TO_EDGE texturing, regardless of whether the interpolation mode is
NEAREST or LINEAR.  The only implementation that appears to differ from
this interpretation on purpose is nVidia (which clips the out-of-range
pixels out of the blit).  Mesa's "render garbage when the source fbo is
backed by a renderbuffer" is clearly a bug.

Since all the implementations except nVidia, plus Ian, seem to agree on the
intent, I think that's an adequate consensus as to which behaviour is
correct.  I'll follow up with a Piglit test that verifies the correct
behaviour, a patch to fix the Mesa bug, and a patch to make my new i965
"blorp" blitting code do the right thing.
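For the record, the consensus behaviour for the original example can be sketched as follows (a Python simulation of a 1-D GL_NEAREST blit; the function name is mine).  A source rect of (-1, 3) into a width-4 read framebuffer, blitted to (0, 9), fills every destination pixel, with the out-of-range column clamped to the edge:

```python
import math

def nearest_blit_1d(src, src_x0, src_x1, dst_x0, dst_x1):
    """Simulate a 1-D GL_NEAREST blit with the consensus behaviour:
    out-of-range source pixels are clamped to the framebuffer edge
    (CLAMP_TO_EDGE), and no destination pixels are clipped away."""
    scale = (src_x1 - src_x0) / (dst_x1 - dst_x0)
    out = {}
    for dx in range(dst_x0, dst_x1):
        s = src_x0 + (dx + 0.5 - dst_x0) * scale
        i = int(math.floor(s))
        i = min(max(i, 0), len(src) - 1)  # CLAMP_TO_EDGE
        out[dx] = src[i]
    return out
```

Under the nVidia behaviour, by contrast, dst[0] and dst[1] would simply be left untouched.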