[Mesa-dev] Looking for advice on how MSAA should behave under sRGB

Paul Berry stereotype441 at gmail.com
Thu Jun 7 01:27:11 CEST 2012


In my current implementation of MSAA for i965, I'm facing a choice in how
to perform MSAA resolves when sRGB is in use.  Should I:

(a) always combine samples using a linear average.  For example, if
multisampling by a factor of 4, and the 4 samples corresponding to a pixel
have color values of a, b, c, and d, resolve them to a pixel color of
(a+b+c+d)/4.

(b) first convert the samples from sRGB to linear, then combine them using
a linear average, then convert back to sRGB (a per-channel sketch of both
approaches follows this list).

(c) try to make an educated guess between (a) and (b) based on what the
client program is doing.
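
To make the difference between (a) and (b) concrete, here is a rough
per-channel sketch of the two resolve strategies.  The function and helper
names are made up for illustration (they are not Mesa or GL API); the
conversion formulas are the standard IEC 61966-2-1 sRGB ones:

#include <math.h>

/* Illustrative helpers only: standard sRGB <-> linear conversion per
 * IEC 61966-2-1, applied to one color channel in [0, 1]. */
static float srgb_to_linear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

static float linear_to_srgb(float c)
{
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}

/* Option (a): average the stored (sRGB-encoded) sample values directly. */
static float resolve_linear_average(const float *samples, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    return sum / n;
}

/* Option (b): decode each sample to linear, average, then re-encode. */
static float resolve_srgb_correct(const float *samples, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += srgb_to_linear(samples[i]);
    return linear_to_srgb(sum / n);
}

Option (c) would just be a heuristic choice between the two functions
above, based on what the client program appears to be doing.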


The GL 3.0 spec seems to pull towards (a), but it leaves a great deal of
latitude to the implementation.  From 4.1.12 (Additional Multisample
Fragment Operations):

"After all operations have been completed on the multisample buffer, the
sample values for each color in the multisample buffer are combined to
produce a single color value, and that value is written into the
corresponding color buffers selected by DrawBuffer or DrawBuffers. An
implementation may defer the writing of the color buffers until a later
time, but the state of the framebuffer must behave as if the color buffers
were updated as each fragment was processed. The method of combination is
not specified, though a simple average computed independently for each
color component is recommended."


By contrast, both EXT_framebuffer_sRGB and ARB_framebuffer_sRGB seem to
pull towards (b), again leaving a great deal of latitude.  From the
"issues" section, in response to the question "How does this extension
interact with multisampling?":

"RESOLVED:  There are no explicit interactions.  However, arguably if the
color samples for multisampling are sRGB encoded, the samples should be
linearized before being "resolved" for display and then reconverted to sRGB
if the output device expects sRGB encoded color components.

"This is really a video scan-out issue and beyond the scope of this
extension which is focused on the rendering issues. However some
implementation advice is provided:

"The implementation sufficiently aware of the gamma correction configured
for the display device could decide to perform an sRGB-correct multisample
resolve.  Whether this occurs or not could be determined by a control panel
setting or inferred by the application's use of this extension."


Arguments in favor of (a) (always use a linear average):

- Conceptually simple.
- Easy to test.

Arguments in favor of (b) (convert sRGB to linear, then linear average,
then convert back to sRGB):

- Easy to implement (Sandy Bridge hardware has a built-in resolve operation
which behaves this way by default; Ivy Bridge requires the driver to
implement the resolve operation, and a naive implementation has this
behavior).
- Produces visually superior results to (a): with (a), an edge between two
solid colors appears dark in the antialiased region (see the worked example
after this list).
- If, instead of using glBlitFramebuffer to do the resolve, the client used
ARB_texture_multisample to texel fetch each sample and manually blend them
together, this is the behaviour they would get (because the samples would
be converted from sRGB to linear during the texel fetch).
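
To put a number on the edge-darkening point (assuming the standard sRGB
conversion, so these figures are illustrative only): for a pixel whose
samples are half sRGB white (1.0) and half sRGB black (0.0), approach (a)
stores 0.5, which decodes to roughly 21% linear intensity on scan-out,
whereas approach (b) averages to 0.5 in linear space and re-encodes to
roughly 0.735 sRGB, i.e. 50% linear intensity.  The (a) result lands well
below the perceptual midpoint, which is what produces the dark fringe along
antialiased edges.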

Arguments in favor of (c) (do (a) sometimes, (b) other times):

- My nVidia reference system appears to do this (AFAICT, it uses approach
(b) when the resolve occurs during a blit to the window system framebuffer,
and approach (a) in all other situations).


My current implementation in Mesa uses (b), but the Piglit tests I've been
developing assume (a), so laziness is no help in making this decision.  But
I'm leaning towards (b).  Does anyone have strong opinions?

Paul