[Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
Christian König
christian.koenig at amd.com
Wed Dec 1 10:32:01 UTC 2021
On 01.12.21 at 11:15, Thomas Hellström (Intel) wrote:
> [SNIP]
>>
>> What we could do is to avoid all this by not calling the callback
>> with the lock held in the first place.
>
> If that's possible, that might be a good idea; please also see below.
The problem with that is
dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we
could avoid using those, or at least allow them to drop the lock, then
we could call the callbacks without holding it.
Somebody would need to audit the drivers and see if holding the lock is
really necessary anywhere.
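For reference, the signalling path boils down to roughly the following
(heavily simplified from drivers/dma-buf/dma-fence.c; NULL checks,
timestamp bookkeeping and tracing dropped). The point is that every
registered dma_fence_cb is invoked while fence->lock is still held:

#include <linux/dma-fence.h>

int dma_fence_signal(struct dma_fence *fence)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(fence->lock, flags);
	ret = dma_fence_signal_timestamp_locked(fence, ktime_get());
	spin_unlock_irqrestore(fence->lock, flags);

	return ret;
}

int dma_fence_signal_timestamp_locked(struct dma_fence *fence,
				      ktime_t timestamp)
{
	struct dma_fence_cb *cur, *tmp;

	if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return -EINVAL;

	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
		list_del_init(&cur->node);
		/* The callback runs with fence->lock still held. */
		cur->func(fence, cur);
	}

	return 0;
}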
>>
>>>>
>>>>>>
>>>>>> /Thomas
>>>>>
>>>>> Oh, and a follow up question:
>>>>>
>>>>> If there were a way to break the recursion on final put() (using
>>>>> the same basic approach as patch 2 in this series uses to break
>>>>> recursion in enable_signaling()), so that none of these containers
>>>>> required any special treatment, would it be worth pursuing? I
>>>>> guess it might be possible by having the callbacks drop the
>>>>> references rather than the loop in the final put, plus a couple of
>>>>> changes in the code iterating over the fence pointers.
>>>>
>>>> That won't really help, you just move the recursion from the final
>>>> put into the callback.
>>>
>>> How do we recurse from the callback? The introduced fence_put() of
>>> individual fence pointers
>>> doesn't recurse anymore (at most 1 level), and any callback
>>> recursion is broken by the irq_work?
>>
>> Yeah, but then you would need to take another lock to avoid racing
>> with dma_fence_array_signaled().
>>
>>>
>>> I figure the bulk of the work would be to adjust code that
>>> iterates over the individual fence pointers to recognize that they
>>> are RCU-protected.
>>
>> Could be that we could solve this with RCU, but that sounds like a
>> lot of churn for no gain at all.
>>
>> In other words even with the problems solved I think it would be a
>> really bad idea to allow chaining of dma_fence_array objects.
>
> Yes, that was really the question: is it worth pursuing this? I'm not
> really suggesting we should allow this as an intentional feature. I'm
> worried, however, that if we allow these containers to start floating
> around cross-driver (or even internally) disguised as ordinary
> dma_fences, they would require a lot of driver special casing, or else
> completely unexpected WARN_ON()s and lockdep splats would start to turn
> up, scaring people off from using them. And that would be a breeding
> ground for hairy driver-private constructs.
Well, the question is: why would we want to do it?
If it's to avoid inter-driver lock dependencies by not calling the
callbacks with the spinlock held, then yes, please. We had tons of
problems with that, resulting in irq_work and work_item delegation all
over the place.
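Just to illustrate what that delegation usually looks like, here is a
minimal sketch (the my_waiter names are made up for illustration): the
dma_fence_cb itself only queues an irq_work, and the driver's real
processing then runs without the signalling fence's lock held, so no
locking order against that fence class is established:

#include <linux/dma-fence.h>
#include <linux/irq_work.h>

struct my_waiter {
	struct dma_fence_cb cb;
	struct irq_work work;
};

static void my_waiter_work_func(struct irq_work *work)
{
	struct my_waiter *waiter = container_of(work, struct my_waiter, work);

	/*
	 * Driver-specific processing of 'waiter' goes here; fence->lock
	 * is not held, so taking driver locks here cannot create an
	 * ordering against the signalling fence's spinlock.
	 */
}

static void my_waiter_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	struct my_waiter *waiter = container_of(cb, struct my_waiter, cb);

	/* Called with fence->lock held -- just delegate and return. */
	irq_work_queue(&waiter->work);
}

static int my_waiter_arm(struct my_waiter *waiter, struct dma_fence *fence)
{
	init_irq_work(&waiter->work, my_waiter_work_func);
	return dma_fence_add_callback(fence, &waiter->cb, my_waiter_cb);
}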
If it's to allow nesting of dma_fence_array instances, then it's most
likely a really bad idea even if we fix all the locking order problems.
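For the record, the final put is roughly this (simplified from
drivers/dma-buf/dma-fence-array.c); with nested arrays the
dma_fence_put() on a member can drop the last reference to another
array and end up in its release, so the recursion depth grows with the
nesting depth:

#include <linux/dma-fence-array.h>
#include <linux/slab.h>

static void dma_fence_array_release(struct dma_fence *fence)
{
	struct dma_fence_array *array = to_dma_fence_array(fence);
	unsigned int i;

	for (i = 0; i < array->num_fences; ++i)
		/* This put may recurse into a nested array's release. */
		dma_fence_put(array->fences[i]);

	kfree(array->fences);
	dma_fence_free(fence);
}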
Christian.
>
> /Thomas
>
>
>>
>> Christian.
>>
>>>
>>>
>>> Thanks,
>>>
>>> /Thomas
>>>
>>>