[Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes

Christian König christian.koenig at amd.com
Wed Dec 1 11:25:09 UTC 2021


On 01.12.21 at 12:04, Thomas Hellström (Intel) wrote:
>
> On 12/1/21 11:32, Christian König wrote:
>> Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
>>> [SNIP]
>>>>
>>>> What we could do is to avoid all this by not calling the callback 
>>>> with the lock held in the first place.
>>>
>>> If that's possible that might be a good idea, pls also see below.
>>
>> The problem with that is 
>> dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we 
>> could avoid using those, or at least allow them to drop the lock, 
>> then we could call the callbacks without holding it.
>>
>> Somebody would need to audit the drivers and see if holding the lock 
>> is really necessary anywhere.
>>
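Just to illustrate the idea, and not what the current code does: the
signaling path could splice the callback list off under the lock and
then run the callbacks with the lock dropped, roughly like the sketch
below (timestamp handling, tracing and error paths are left out):

#include <linux/dma-fence.h>

/*
 * Sketch only: splice the callback list off while holding fence->lock
 * and invoke the callbacks after dropping it.
 */
static int example_signal_drop_lock(struct dma_fence *fence)
{
        struct dma_fence_cb *cur, *tmp;
        struct list_head cb_list;
        unsigned long flags;

        spin_lock_irqsave(fence->lock, flags);
        if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
                spin_unlock_irqrestore(fence->lock, flags);
                return -EINVAL;
        }
        list_replace_init(&fence->cb_list, &cb_list);
        spin_unlock_irqrestore(fence->lock, flags);

        /* The callbacks now run without fence->lock held. */
        list_for_each_entry_safe(cur, tmp, &cb_list, node) {
                INIT_LIST_HEAD(&cur->node);
                cur->func(fence, cur);
        }
        return 0;
}
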
>>>>
>>>>>>
>>>>>>>>
>>>>>>>> /Thomas
>>>>>>>
>>>>>>> Oh, and a follow up question:
>>>>>>>
>>>>>>> If there were a way to break the recursion on final put() (using 
>>>>>>> the same basic approach as patch 2 in this series uses to break 
>>>>>>> recursion in enable_signaling()), so that none of these 
>>>>>>> containers required any special treatment, would it be worth 
>>>>>>> pursuing? I guess it might be possible by having the callbacks 
>>>>>>> drop the references rather than the loop in the final put, plus 
>>>>>>> a couple of changes in code iterating over the fence pointers.
>>>>>>
>>>>>> That won't really help; you just move the recursion from the 
>>>>>> final put into the callback.
>>>>>
>>>>> How do we recurse from the callback? The introduced fence_put() of 
>>>>> individual fence pointers doesn't recurse anymore (at most 1 
>>>>> level), and any callback recursion is broken by the irq_work?
>>>>
>>>> Yeah, but then you would need to take another lock to avoid racing 
>>>> with dma_fence_array_signaled().
>>>>
>>>>>
>>>>> I figure the bulk of the work would be to adjust code that 
>>>>> iterates over the individual fence pointers to recognize that they 
>>>>> are RCU protected.
>>>>
>>>> It could be that we can solve this with RCU, but that sounds like a 
>>>> lot of churn for no gain at all.
>>>>
>>>> In other words even with the problems solved I think it would be a 
>>>> really bad idea to allow chaining of dma_fence_array objects.
>>>
>>> Yes, that was really the question: is it worth pursuing this? I'm 
>>> not really suggesting we should allow this as an intentional 
>>> feature. I'm worried, however, that if we allow these containers to 
>>> start floating around cross-driver (or even internally) disguised as 
>>> ordinary dma_fences, they would require a lot of driver special 
>>> casing, or else completely unexpected WARN_ON()s and lockdep splats 
>>> would start to turn up, scaring people off from using them. And that 
>>> would be a breeding ground for hairy driver-private constructs.
>>
>> Well, the question is: why would we want to do it?
>>
>> If it's to avoid inter-driver lock dependencies by not calling the 
>> callback with the spinlock held, then yes please. We had tons of 
>> problems with that, resulting in irq_work and work_item delegation 
>> all over the place.
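
For reference, that delegation pattern usually ends up looking something
like the sketch below: the fence callback, which runs with fence->lock
held, only punts to an irq_work which then does the real work outside
the lock. The my_wait structure and its handlers are made up here:

#include <linux/dma-fence.h>
#include <linux/irq_work.h>

/* Made-up driver state, only to show the delegation pattern. */
struct my_wait {
        struct dma_fence_cb cb;
        struct irq_work work;
};

static void my_wait_irq_work(struct irq_work *work)
{
        struct my_wait *wait = container_of(work, struct my_wait, work);

        /*
         * Runs outside fence->lock; the real processing (taking other
         * locks, signaling other fences, freeing wait, ...) goes here.
         */
        (void)wait;
}

static void my_wait_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
        struct my_wait *wait = container_of(cb, struct my_wait, cb);

        /* Called with fence->lock held, so only punt to the irq_work. */
        irq_work_queue(&wait->work);
}

static int my_wait_arm(struct my_wait *wait, struct dma_fence *fence)
{
        init_irq_work(&wait->work, my_wait_irq_work);
        return dma_fence_add_callback(fence, &wait->cb, my_wait_fence_cb);
}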
>
> Yes, that sounds like something desirable, but for these containers, 
> what's causing the lock dependencies is the enable_signaling() 
> callback, which is typically called with the lock held.
>
>
>>
>> If it's to allow nesting of dma_fence_array instances, then it's most 
>> likely a really bad idea even if we fix all the locking order problems.
>
> Well, I think my use-case, where I hit a dead end, may illustrate 
> what worries me here:
>
> 1) We use a dma-fence-array to coalesce all dependencies for TTM 
> object migration.
> 2) We use a dma-fence-chain to order the resulting dma_fence into a 
> timeline because the TTM resource manager code requires that.
>
> Initially seemingly harmless to me.
>
> But after an evict->alloc->clear sequence, the dma-fence-chain feeds 
> into the dma-fence-array for the clearing operation. The code still 
> works fine, with no deep recursion and no warnings. But if I were to 
> add another driver to the system that instead feeds a dma-fence-array 
> into a dma-fence-chain, this would give me a lockdep splat.
>
> So if somebody were to come up with the splendid idea of using a 
> dma-fence-chain to initially coalesce fences, I'd hit the same problem 
> or risk illegally joining two dma-fence-chains together.
>
> To fix this, I would need to look at the incoming fences and iterate 
> over any dma-fence-array or dma-fence-chain that is fed into the 
> dma-fence-array to flatten out the input. In fact, all dma-fence-array 
> users would need to do that, and even dma-fence-chain users would need 
> to watch out not to join chains together or accidentally add an array 
> that perhaps came as a disguised dma_fence from another driver.
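
Such a flattening pass would have to look roughly like the sketch
below; it only handles one container level, the collect() callback and
its naming are made up, and reference counting of the leaves is left to
the caller:

#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>

/*
 * Sketch only: hand every "leaf" fence behind one input fence to a
 * made-up collect() callback.
 */
static void flatten_input(struct dma_fence *fence,
                          void (*collect)(struct dma_fence *leaf, void *data),
                          void *data)
{
        struct dma_fence_array *array = to_dma_fence_array(fence);
        struct dma_fence *iter;
        unsigned int i;

        if (array) {
                for (i = 0; i < array->num_fences; ++i)
                        collect(array->fences[i], data);
                return;
        }

        if (to_dma_fence_chain(fence)) {
                struct dma_fence_chain *chain;

                dma_fence_chain_for_each(iter, fence) {
                        chain = to_dma_fence_chain(iter);
                        /*
                         * Each link contains a fence; the bottom of the
                         * chain may be a plain, non-chain fence.
                         */
                        collect(chain ? chain->fence : iter, data);
                }
                return;
        }

        collect(fence, data);
}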
>
> So the purpose to me would be to allow these containers as input to 
> each other without a lot of in-driver special-casing, be it by 
> breaking recursion or by built-in flattening, in order to
>
> a) Avoid hitting issues in the future or with existing interoperating 
> drivers.
> b) Avoid driver-private containers that also might break 
> interoperability. (For example, the i915 driver's currently 
> driver-private dma_fence_work avoids all these problems, but we're 
> attempting to address the issues in common code rather than 
> re-inventing stuff internally.)

I don't think that a dma_fence_array or dma_fence_chain is the right 
thing to begin with in those use cases.

When you want to coalesce the dependencies for a job, you could either 
use an xarray like Daniel did for the scheduler or a hashtable like we 
use in amdgpu. But I don't see the need to expose the dma_fence 
interface for those.
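
As a rough sketch of what I mean (names made up, the xarray is assumed
to have been xa_init()ed by the caller, keyed by fence context and
keeping only the latest fence per timeline):

#include <linux/dma-fence.h>
#include <linux/xarray.h>

/* Sketch of coalescing dependencies without a dma_fence container. */
static int deps_add(struct xarray *deps, struct dma_fence *fence)
{
        unsigned long ctx = fence->context;
        struct dma_fence *old;

        old = xa_load(deps, ctx);
        if (old && dma_fence_is_later(old, fence))
                return 0;       /* we already track a later fence */

        old = xa_store(deps, ctx, dma_fence_get(fence), GFP_KERNEL);
        if (xa_is_err(old)) {
                dma_fence_put(fence);
                return xa_err(old);
        }
        dma_fence_put(old);     /* drop whatever we replaced, if anything */
        return 0;
}

static void deps_release(struct xarray *deps)
{
        struct dma_fence *fence;
        unsigned long ctx;

        /* The consumer would wait on or hand over each fence here. */
        xa_for_each(deps, ctx, fence)
                dma_fence_put(fence);
        xa_destroy(deps);
}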

And why do you use a dma_fence_chain to generate a timeline for TTM? 
The timeline should come naturally because all the moves must be ordered.
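
In other words, if the moves are ordered anyway, the timeline is just a
matter of remembering the last move fence, roughly like this
(my_resource and its lock are made-up placeholders):

#include <linux/dma-fence.h>
#include <linux/spinlock.h>

/* Made-up placeholder: the resource just remembers its last move. */
struct my_resource {
        spinlock_t lock;
        struct dma_fence *last_move;
};

static void my_resource_set_move_fence(struct my_resource *res,
                                       struct dma_fence *fence)
{
        struct dma_fence *old;

        spin_lock(&res->lock);
        old = res->last_move;
        res->last_move = dma_fence_get(fence);
        spin_unlock(&res->lock);

        /* The previous move is ordered before the new one anyway. */
        dma_fence_put(old);
}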

Regards,
Christian.




