[Mesa-dev] [PATCH 1/2] virgl: avoid large inline transfers

Gert Wollny gert.wollny at collabora.com
Thu Nov 22 12:41:10 UTC 2018


I think Erik already pointed out the minor problems with this series
and with "virgl: quadruple command buffer size".

I've tested the performance impact of these three patches, and the
results look great: Unigine Valley went from ~9 fps to 20 (host: 50),
and Unigine Heaven (no tessellation) from 12 fps to 26 (host: 68).
(All on r600 - 6870 HD.)

Tested-by: Gert Wollny <gert.wollny at collabora.com>

On Wednesday, 2018-11-21 at 20:08 -0800, Gurchetan Singh wrote:
> We flush every time the command buffer (16 kB) is full, which is
> quite costly.
> 
> This improves
> 
> dEQP-GLES3.performance.buffer.data_upload.function_call.buffer_data.new_buffer.usage_stream_draw
> 
> from 111.16 MB/s to 1930.36 MB/s.
> 
> In addition, I made the benchmark produce buffers from 0 up to
> VIRGL_MAX_CMDBUF_DWORDS * 4 bytes, and tried thresholds of
> ((VIRGL_MAX_CMDBUF_DWORDS * 4) / 2), ((VIRGL_MAX_CMDBUF_DWORDS * 4) / 4),
> etc.
> 
> I didn't notice any clear differences, so let's just go with the
> most obvious heuristic.
> ---
>  src/gallium/drivers/virgl/virgl_resource.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/src/gallium/drivers/virgl/virgl_resource.c b/src/gallium/drivers/virgl/virgl_resource.c
> index db5e7dd61a..9174ec5cbb 100644
> --- a/src/gallium/drivers/virgl/virgl_resource.c
> +++ b/src/gallium/drivers/virgl/virgl_resource.c
> @@ -95,7 +95,11 @@ static void virgl_buffer_subdata(struct pipe_context *pipe,
>        usage |= PIPE_TRANSFER_DISCARD_RANGE;
>  
>     u_box_1d(offset, size, &box);
> -   virgl_transfer_inline_write(pipe, resource, 0, usage, &box, data, 0, 0);
> +
> +   if (size >= (VIRGL_MAX_CMDBUF_DWORDS * 4))
> +      u_default_buffer_subdata(pipe, resource, usage, offset, size, data);
> +   else
> +      virgl_transfer_inline_write(pipe, resource, 0, usage, &box, data, 0, 0);
>  }
>  
>  void virgl_init_context_resource_functions(struct pipe_context *ctx)
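
For anyone reading without the file open, the patched function ends up
roughly as below. This is a sketch: only the lines shown in the hunk
above are verbatim; the signature and the elided usage-flag handling
are reconstructed from gallium conventions and may not match the file
exactly.

static void virgl_buffer_subdata(struct pipe_context *pipe,
                                 struct pipe_resource *resource,
                                 unsigned usage, unsigned offset,
                                 unsigned size, const void *data)
{
   struct pipe_box box;

   /* ... earlier usage-flag checks elided; the hunk context shows
    * that PIPE_TRANSFER_DISCARD_RANGE can be set here ... */

   u_box_1d(offset, size, &box);

   /* An inline transfer larger than one command buffer
    * (VIRGL_MAX_CMDBUF_DWORDS dwords = 16 kB) would force a flush,
    * so route large uploads through the default staging path. */
   if (size >= (VIRGL_MAX_CMDBUF_DWORDS * 4))
      u_default_buffer_subdata(pipe, resource, usage, offset, size, data);
   else
      virgl_transfer_inline_write(pipe, resource, 0, usage, &box,
                                  data, 0, 0);
}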

