[Mesa-dev] r600: performance reading from staging texture

Vic Lee llyzs.vic at gmail.com
Fri Aug 31 09:57:56 PDT 2012


Hi,

In my application, I need to read the rendered pixels back to system memory 
for every frame. My approach is to create a chain of textures with the 
PIPE_USAGE_STAGING flag and copy the render target into them before reading, 
so that the readback does not stall the pipeline.
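
For reference, the pattern looks roughly like this (a simplified sketch, not 
my actual code; render_target, width, height, pipe and screen are placeholder 
names, error handling is omitted, and the usual Gallium headers such as 
pipe/p_context.h, pipe/p_state.h and util/u_box.h are assumed):

/* Create a staging texture with the same size/format as the render target. */
struct pipe_resource templ = *render_target;
templ.usage = PIPE_USAGE_STAGING;
templ.bind = 0;   /* only used for transfers, no sampling/rendering needed */
struct pipe_resource *staging = screen->resource_create(screen, &templ);

/* Queue an asynchronous GPU copy into the staging texture for this frame... */
struct pipe_box box;
u_box_2d(0, 0, width, height, &box);
pipe->resource_copy_region(pipe, staging, 0, 0, 0, 0, render_target, 0, &box);

/* ...then map a staging texture that was filled a few frames earlier, which
 * the GPU has already finished writing, so the map should not stall. */

The point is that the texture being mapped is always one of these staging 
copies, never the render target itself.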

As shown in the patch below, I found a check that forces the use of a 
staging texture whenever the transfer box being read covers more than 1024 
pixels. However, the texture I am mapping is already a staging texture and 
can safely be mapped directly regardless of its volume. This check makes no 
sense to me and causes a significant performance penalty. I tried removing 
it and nothing broke; performance goes up 10-20% in my case.

Please point it out if I have missed anything here; otherwise I suggest 
these two lines of code be removed.

Thanks in advance.

Vic

diff --git a/src/gallium/drivers/r600/r600_texture.c b/src/gallium/drivers/r600/r600_texture.c
index 6de3d6a..536f88f 100644
--- a/src/gallium/drivers/r600/r600_texture.c
+++ b/src/gallium/drivers/r600/r600_texture.c
@@ -622,9 +622,6 @@ struct pipe_transfer* r600_texture_get_transfer(struct pipe_context *ctx,
                 use_staging_texture = TRUE;
         }

-       if ((usage & PIPE_TRANSFER_READ) && u_box_volume(box) > 1024)
-               use_staging_texture = TRUE;
-
        /* Use a staging texture for uploads if the underlying BO is busy. */
        if (!(usage & PIPE_TRANSFER_READ) &&
            (rctx->ws->cs_is_buffer_referenced(rctx->cs, rtex->resource.cs_buf, RADEON_USAGE_READWRITE) ||


