[Mesa-dev] [PATCH V2 4/4] i965: Enable ext_framebuffer_multisample_blit_scaled on intel h/w

Paul Berry stereotype441 at gmail.com
Fri May 24 12:18:18 PDT 2013


On 16 May 2013 11:44, Anuj Phogat <anuj.phogat at gmail.com> wrote:

> This patch enables the ext_framebuffer_multisample_blit_scaled
> extension on Intel h/w >= gen6.
>
> Note: Patches for piglit tests to verify this functionality are out
> for review on the piglit mailing list. The tests pass for all scaling
> factors from 0.1 to 2.4.
>
> Comment from Paul Berry:
> I have some concerns about the image quality of the method you've
> implemented.  As I understand it, the primary use case of this extension
> is to allow the client to do multisampled rendering at slightly less
> than screen resolution (e.g. 720p instead of 1080p), and then blit the
> result to the screen in one step while keeping most of the quality
> benefits of multisampling.  Since your implementation is effectively
> equivalent to downsampling and then blitting using GL_NEAREST filtering,
> my fear is that it will lead to blocky artifacts that are severe enough
> to negate the benefit of multisampling in the first place.
>
> Before we turn this extension on in the Intel driver, I'd like to look
> at a comparison of:
>
> (1) your technique
> (2) downsampling followed by scaling with GL_LINEAR filtering
> (3) The nVidia implementation, in GL_SCALED_RESOLVE_FASTEST_EXT mode
> (4) The nVidia implementation, in GL_SCALED_RESOLVE_NICEST_EXT mode
> (5) Just rendering the image directly to the single-sampled destination
> buffer
>
> Observation: Image quality is better in cases 2, 3, 4 and 5 than in
> case 1. Although the extension's implementation meets the
> specification's requirements, using it leads to blocky artifacts due
> to nearest filtering.
>
> I'll work on implementing a better filtering technique in blorp.
>

Thanks for quoting my comment here.  It's good to have context so that we
can continue the discussion.
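
For anyone following along without the spec handy, the one-step blit
this extension allows looks roughly like this on the client side (a
sketch only, untested; the FBO name and the 720p/1080p sizes are made
up for illustration, while the filter tokens come from the extension
spec):

   /* Resolve a 1280x720 multisampled FBO and scale it to a 1920x1080
    * window in a single blit.  The extension relaxes the usual rule
    * that a multisample resolve must use matching source and dest
    * rectangles, and adds two new filter tokens. */
   glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_720p_msaa);
   glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
   glBlitFramebuffer(0, 0, 1280, 720,    /* source rect (multisampled) */
                     0, 0, 1920, 1080,   /* dest rect (window) */
                     GL_COLOR_BUFFER_BIT,
                     GL_SCALED_RESOLVE_NICEST_EXT /* or _FASTEST_EXT */);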

My preference would be to go ahead and land patches 1-3 now, but hold
patch 4 back until we've figured out how to get image quality comparable
to the nVidia implementation's.  It seems like it would be nice to go
out of the gate with our best-looking implementation.
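
For comparison, case 2 above (the best an application can do today
without the extension) is a 1:1 resolve into a temporary single-sampled
FBO followed by a GL_LINEAR stretch.  Again just a sketch, with
hypothetical FBO names:

   /* Step 1: ordinary 1:1 multisample resolve; the rectangles must
    * match when the read framebuffer is multisampled. */
   glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_720p_msaa);
   glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo_tmp);
   glBlitFramebuffer(0, 0, 1280, 720, 0, 0, 1280, 720,
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);

   /* Step 2: scale up to the window with linear filtering. */
   glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_tmp);
   glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
   glBlitFramebuffer(0, 0, 1280, 720, 0, 0, 1920, 1080,
                     GL_COLOR_BUFFER_BIT, GL_LINEAR);

As I read patch 4, it currently gives quality equivalent to running
step 2 with GL_NEAREST instead, which is where the blocky artifacts
come from.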

Does that seem reasonable to other folks?


>
> Signed-off-by: Anuj Phogat <anuj.phogat at gmail.com>
> ---
>  src/mesa/drivers/dri/intel/intel_extensions.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/src/mesa/drivers/dri/intel/intel_extensions.c b/src/mesa/drivers/dri/intel/intel_extensions.c
> index 8d8e325..de12ec3 100644
> --- a/src/mesa/drivers/dri/intel/intel_extensions.c
> +++ b/src/mesa/drivers/dri/intel/intel_extensions.c
> @@ -97,6 +97,7 @@ intelInitExtensions(struct gl_context *ctx)
>
>     if (intel->gen >= 6) {
>        ctx->Extensions.EXT_framebuffer_multisample = true;
> +      ctx->Extensions.EXT_framebuffer_multisample_blit_scaled = true;
>        ctx->Extensions.ARB_blend_func_extended =
>           !driQueryOptionb(&intel->optionCache, "disable_blend_func_extended");
>        ctx->Extensions.ARB_draw_buffers_blend = true;
>        ctx->Extensions.ARB_ES3_compatibility = true;
> --
> 1.8.1.4
>
>