[PATCH 1/2] dma-buf: keep only not signaled fence in reservation_object_add_shared_replace v2

Chunming Zhou zhoucm1 at amd.com
Tue Oct 31 09:10:42 UTC 2017



On 2017-10-31 16:55, Christian König wrote:
> Am 31.10.2017 um 09:51 schrieb Chunming Zhou:
>>
>>
>> On 2017-10-31 16:43, Christian König wrote:
>>> The amdgpu issue that required keeping signaled fences in the
>>> reservation objects should be fixed by now.
>>>
>>> Optimize the list by keeping around only the fences that are not yet
>>> signaled.
>>>
>>> v2: temporarily put the signaled fences at the end of the new container
>>>
>>> Signed-off-by: Christian König <christian.koenig at amd.com>
>>> ---
>>>   drivers/dma-buf/reservation.c | 36 ++++++++++++++++++++++++++----------
>>>   1 file changed, 26 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
>>> index b44d9d7db347..6fc794576997 100644
>>> --- a/drivers/dma-buf/reservation.c
>>> +++ b/drivers/dma-buf/reservation.c
>>> @@ -145,8 +145,8 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
>>>                         struct reservation_object_list *fobj,
>>>                         struct dma_fence *fence)
>>>   {
>>> -    unsigned i;
>>>       struct dma_fence *old_fence = NULL;
>>> +    unsigned i, j, k;
>>>
>>>       dma_fence_get(fence);
>>>
>>> @@ -162,9 +162,7 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
>>>        * references from the old struct are carried over to
>>>        * the new.
>>>        */
>>> -    fobj->shared_count = old->shared_count;
>>> -
>>> -    for (i = 0; i < old->shared_count; ++i) {
>>> +    for (i = 0, j = 0, k = fobj->shared_max; i < old->shared_count; ++i) {
>>>           struct dma_fence *check;
>>>
>>>           check = rcu_dereference_protected(old->shared[i],
>>> @@ -172,10 +170,18 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
>>>
>>>           if (!old_fence && check->context == fence->context) {
>>>               old_fence = check;
>>> -            RCU_INIT_POINTER(fobj->shared[i], fence);
>>> -        } else
>>> -            RCU_INIT_POINTER(fobj->shared[i], check);
>>> +            RCU_INIT_POINTER(fobj->shared[j++], fence);
>>> +        } else if (!dma_fence_is_signaled(check)) {
>>> +            RCU_INIT_POINTER(fobj->shared[j++], check);
>>> +        } else {
>>> +            /*
>>> +             * Temporary save the signaled fences at the end of the
>>> +             * new container
>>> +             */
>>> +            RCU_INIT_POINTER(fobj->shared[--k], check);
>>> +        }
>>>       }
>>> +    fobj->shared_count = j;
>>>
>>>       if (!old_fence) {
>>>           RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
>> It seems you still haven't addressed this here: how do you make sure
>> fobj->shared[fobj->shared_count] is NULL? If it isn't NULL, the old
>> fence will be leaked.
>
> I've put them at the end of the container, see "k = fobj->shared_max"
> above. Since shared_max >> shared_count (it is at least twice as large
> in this situation), we should be on the safe side.
Ah, oops, Reviewed-by: Chunming Zhou <david1.zhou at amd.com> for the series.

Thanks,
David Zhou
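
(As an aside, a small userspace sketch of the j/k fill pattern discussed
above may make the layout easier to see. It is only an illustration under
simplified assumptions, not the kernel code: fake_fence, slots and
MAX_SLOTS are made-up stand-ins for struct dma_fence, fobj->shared and
fobj->shared_max. Fences that are still active are packed from the front
with j, already signaled ones are temporarily stashed from shared_max
downwards with k, so the slot at index shared_count stays free for the
new fence.)

#include <stdbool.h>
#include <stdio.h>

#define MAX_SLOTS 8        /* stands in for fobj->shared_max */

struct fake_fence {        /* stands in for struct dma_fence */
        int context;
        bool signaled;
};

int main(void)
{
        struct fake_fence old[] = {
                { .context = 1, .signaled = true  },
                { .context = 2, .signaled = false },
                { .context = 3, .signaled = true  },
        };
        struct fake_fence new_fence = { .context = 4, .signaled = false };
        struct fake_fence *slots[MAX_SLOTS] = { NULL };  /* stands in for fobj->shared */
        unsigned int i, j = 0, k = MAX_SLOTS;

        /* Pack still active fences at the front, stash signaled ones at the back. */
        for (i = 0; i < sizeof(old) / sizeof(old[0]); ++i) {
                if (!old[i].signaled)
                        slots[j++] = &old[i];
                else
                        slots[--k] = &old[i];
        }

        /* j plays the role of shared_count: slot j is guaranteed to be free. */
        slots[j++] = &new_fence;

        for (i = 0; i < j; ++i)
                printf("kept    slot %u: context %d\n", i, slots[i]->context);
        for (i = k; i < MAX_SLOTS; ++i)
                printf("stashed slot %u: context %d (reference would be dropped here)\n",
                       i, slots[i]->context);

        return 0;
}

Running it prints the kept fences first and the stashed ones last, which
mirrors the layout that the cleanup loop in the patch relies on before
kfree_rcu().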
>
> Regards,
> Christian.
>
>>
>> Regards,
>> David Zhou
>>>           fobj->shared_count++;
>>> @@ -192,10 +198,20 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
>>>       write_seqcount_end(&obj->seq);
>>>       preempt_enable();
>>>
>>> -    if (old)
>>> -        kfree_rcu(old, rcu);
>>> -
>>>       dma_fence_put(old_fence);
>>> +
>>> +    if (!old)
>>> +        return;
>>> +
>>> +    /* Drop the references to the signaled fences */
>>> +    for (i = k; i < fobj->shared_max; ++i) {
>>> +        struct dma_fence *f;
>>> +
>>> +        f = rcu_dereference_protected(fobj->shared[i],
>>> +                          reservation_object_held(obj));
>>> +        dma_fence_put(f);
>>> +    }
>>> +    kfree_rcu(old, rcu);
>>>   }
>>>     /**
>>
>


