[Mesa-stable] [Mesa-dev] [PATCH v2 1/1] eg/compute: Use reference counting to handle compute memory pool.

Dave Airlie airlied at gmail.com
Tue May 15 22:56:41 UTC 2018


On 16 May 2018 at 08:06, Jan Vesely <jan.vesely at rutgers.edu> wrote:
> On Wed, 2018-05-09 at 15:54 -0400, Jan Vesely wrote:
>> Use pipe_reference to release old RAT surfaces.
>> RAT surface adds a reference to pool bo. So use reference counting for pool->bo
>> as well.
>>
>> v2: Use the same pattern for both defrag paths
>>     Drop confusing comment
>>
>> CC: <mesa-stable at lists.freedesktop.org>
>> Signed-off-by: Jan Vesely <jan.vesely at rutgers.edu>
>> ---

Reviewed-by: Dave Airlie <airlied at redhat.com>

>
> ping. This one fixes a memory corruption.
>
> Jan
>
>>  src/gallium/drivers/r600/compute_memory_pool.c | 16 +++++-----------
>>  src/gallium/drivers/r600/evergreen_compute.c   |  3 ++-
>>  2 files changed, 7 insertions(+), 12 deletions(-)
>>
>> diff --git a/src/gallium/drivers/r600/compute_memory_pool.c b/src/gallium/drivers/r600/compute_memory_pool.c
>> index bcda155c71..4b0e0044f5 100644
>> --- a/src/gallium/drivers/r600/compute_memory_pool.c
>> +++ b/src/gallium/drivers/r600/compute_memory_pool.c
>> @@ -91,10 +91,7 @@ void compute_memory_pool_delete(struct compute_memory_pool* pool)
>>  {
>>       COMPUTE_DBG(pool->screen, "* compute_memory_pool_delete()\n");
>>       free(pool->shadow);
>> -     if (pool->bo) {
>> -             pool->screen->b.b.resource_destroy((struct pipe_screen *)
>> -                     pool->screen, (struct pipe_resource *)pool->bo);
>> -     }
>> +     pipe_resource_reference(&pool->bo, NULL);
>>       /* In theory, all of the items were freed in compute_memory_free.
>>        * Just delete the list heads
>>        */
>> @@ -213,10 +210,8 @@ int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
>>
>>                       compute_memory_defrag(pool, src, dst, pipe);
>>
>> -                     pool->screen->b.b.resource_destroy(
>> -                                     (struct pipe_screen *)pool->screen,
>> -                                     src);
>> -
>> +                     /* Release the old buffer */
>> +                     pipe_resource_reference(&pool->bo, NULL);
>>                       pool->bo = temp;
>>                       pool->size_in_dw = new_size_in_dw;
>>               }
>> @@ -230,9 +225,8 @@ int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
>>                               return -1;
>>
>>                       pool->size_in_dw = new_size_in_dw;
>> -                     pool->screen->b.b.resource_destroy(
>> -                                     (struct pipe_screen *)pool->screen,
>> -                                     (struct pipe_resource *)pool->bo);
>> +                     /* Release the old buffer */
>> +                     pipe_resource_reference(&pool->bo, NULL);
>>                       pool->bo = r600_compute_buffer_alloc_vram(pool->screen, pool->size_in_dw * 4);
>>                       compute_memory_shadow(pool, pipe, 0);
>>
>> diff --git a/src/gallium/drivers/r600/evergreen_compute.c b/src/gallium/drivers/r600/evergreen_compute.c
>> index 5168267d51..5070243914 100644
>> --- a/src/gallium/drivers/r600/evergreen_compute.c
>> +++ b/src/gallium/drivers/r600/evergreen_compute.c
>> @@ -122,7 +122,8 @@ static void evergreen_set_rat(struct r600_pipe_compute *pipe,
>>       rat_templ.u.tex.first_layer = 0;
>>       rat_templ.u.tex.last_layer = 0;
>>
>> -     /* Add the RAT the list of color buffers */
>> +     /* Add the RAT to the list of color buffers. Drop the old buffer first. */
>> +     pipe_surface_reference(&pipe->ctx->framebuffer.state.cbufs[id], NULL);
>>       pipe->ctx->framebuffer.state.cbufs[id] = pipe->ctx->b.b.create_surface(
>>               (struct pipe_context *)pipe->ctx,
>>               (struct pipe_resource *)bo, &rat_templ);
>
> --
> Jan Vesely <jan.vesely at rutgers.edu>

