[PATCH 6/7] drm/amdgpu: unwrap fence chains in the explicit sync fence
Christian König
ckoenig.leichtzumerken at gmail.com
Mon Jun 14 07:25:44 UTC 2021
On 11.06.21 17:18, Daniel Vetter wrote:
> On Fri, Jun 11, 2021 at 12:09:19PM +0200, Christian König wrote:
>> On 11.06.21 11:07, Daniel Vetter wrote:
>>> On Thu, Jun 10, 2021 at 11:17:59AM +0200, Christian König wrote:
>>>> Unwrap the explicit fence if it is a dma_fence_chain and
>>>> sync to the first fence not matching the owner rules.
>>>>
>>>> Signed-off-by: Christian König <christian.koenig at amd.com>
>>>> ---
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 118 +++++++++++++----------
>>>> 1 file changed, 68 insertions(+), 50 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
>>>> index 1b2ceccaf5b0..862eb3c1c4c5 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
>>>> @@ -28,6 +28,8 @@
>>>> * Christian König <christian.koenig at amd.com>
>>>> */
>>>> +#include <linux/dma-fence-chain.h>
>>>> +
>>>> #include "amdgpu.h"
>>>> #include "amdgpu_trace.h"
>>>> #include "amdgpu_amdkfd.h"
>>>> @@ -186,6 +188,55 @@ int amdgpu_sync_vm_fence(struct amdgpu_sync *sync, struct dma_fence *fence)
>>>> return amdgpu_sync_fence(sync, fence);
>>>> }
>>>> +/* Determine based on the owner and mode if we should sync to a fence or not */
>>>> +static bool amdgpu_sync_test_fence(struct amdgpu_device *adev,
>>>> + enum amdgpu_sync_mode mode,
>>>> + void *owner, struct dma_fence *f)
>>>> +{
>>>> + void *fence_owner = amdgpu_sync_get_owner(f);
>>>> +
>>>> + /* Always sync to moves, no matter what */
>>>> + if (fence_owner == AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> + return true;
>>>> +
>>>> + /* We only want to trigger KFD eviction fences on
>>>> + * evict or move jobs. Skip KFD fences otherwise.
>>>> + */
>>>> + if (fence_owner == AMDGPU_FENCE_OWNER_KFD &&
>>>> + owner != AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> + return false;
>>>> +
>>>> + /* Never sync to VM updates either. */
>>>> + if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
>>>> + owner != AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> + return false;
>>>> +
>>>> + /* Ignore fences depending on the sync mode */
>>>> + switch (mode) {
>>>> + case AMDGPU_SYNC_ALWAYS:
>>>> + return true;
>>>> +
>>>> + case AMDGPU_SYNC_NE_OWNER:
>>>> + if (amdgpu_sync_same_dev(adev, f) &&
>>>> + fence_owner == owner)
>>>> + return false;
>>>> + break;
>>>> +
>>>> + case AMDGPU_SYNC_EQ_OWNER:
>>>> + if (amdgpu_sync_same_dev(adev, f) &&
>>>> + fence_owner != owner)
>>>> + return false;
>>>> + break;
>>>> +
>>>> + case AMDGPU_SYNC_EXPLICIT:
>>>> + return false;
>>>> + }
>>>> +
>>>> + WARN(debug_evictions && fence_owner == AMDGPU_FENCE_OWNER_KFD,
>>>> + "Adding eviction fence to sync obj");
>>>> + return true;
>>>> +}
>>>> +
>>>> /**
>>>> * amdgpu_sync_resv - sync to a reservation object
>>>> *
>>>> @@ -211,67 +262,34 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, struct amdgpu_sync *sync,
>>>> /* always sync to the exclusive fence */
>>>> f = dma_resv_excl_fence(resv);
>>>> - r = amdgpu_sync_fence(sync, f);
>>>> + dma_fence_chain_for_each(f, f) {
>>> Jason has some helper for deep-walking fence chains/arrays here I think.
>>> Might want to look into that, so that we have some consistency in how we
>>> pile up multiple exclusive fences.
>> Well those helpers are not from Jason, but from me :)
>>
>> But no, for now the deep inspection is not really helpful here, since
>> grabbing a reference to a certain chain node is what makes the handling
>> easier and faster here.
>>
>> Thinking more about it, that should also make it possible for the garbage
>> collection to kick in properly.
> Hm this is tricky to reason about, but yeah with this here it's a true
> chain, and you just need to connect them. But then if a buffer is on
> multiple engines, collapsing things down occasionally might be useful.
>
> But maybe we need to do that in the bigger rework where exclusive fences
> are also just in the dma_fence_list with a "this is an exclusive one btw"
> tag.
>
> I think for the vk import case doing the deep scan makes more sense, it's
> a once-per-frame thing, and there's a much bigger chance that you have a
> pile of fences from different engines on it already.
The problem with Jason's IOCTL is that you *must* do a deep dive and
flatten out the fences.
Otherwise somebody could use it to create a deeply nested fence structure:
dma_fence_arrays containing dma_fence_arrays containing dma_fence_arrays,
and so on.
When you then release that structure you overflow the kernel stack, because
each dma_fence_array does a dma_fence_put() on its entries and the release
recurses one level per nesting :)
The dma_fence_chain container is intentionally designed to prevent exactly
that.
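To make the difference concrete, here is a purely illustrative sketch
(hypothetical helpers, not the actual release code): any recursive traversal
of nested dma_fence_arrays needs one kernel stack frame per nesting level,
and with such an IOCTL the nesting depth would be under userspace control,
while a dma_fence_chain can always be walked with a flat loop.

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>

/* Hypothetical helper, for illustration only: walking nested
 * dma_fence_arrays recursively needs one stack frame per nesting
 * level, and the same per-level recursion happens implicitly when
 * the containers drop the references to their entries on release.
 */
static unsigned int count_fences_deep(struct dma_fence *f)
{
	struct dma_fence_array *array = to_dma_fence_array(f);
	unsigned int i, count = 0;

	if (!array)
		return 1;

	for (i = 0; i < array->num_fences; ++i)
		count += count_fences_deep(array->fences[i]);

	return count;
}

/* A dma_fence_chain is walked with a flat loop instead; the iterator
 * holds a reference to exactly one chain node at a time, so stack
 * usage stays constant no matter how long the chain gets.
 */
static unsigned int count_fences_chain(struct dma_fence *head)
{
	struct dma_fence *f;
	unsigned int count = 0;

	dma_fence_chain_for_each(f, head)
		++count;

	return count;
}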
> I think a comment explaining why we think deep scan isn't a good idea here
> would be good, just so we can appreciate our foolishness when it all goes
> wrong :-)
Ok, good point.
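Maybe something along these lines on top of the new loop in
amdgpu_sync_resv() (just a sketch of the wording, the loop itself is the
one from the patch above):

	/* Always sync to the exclusive fence, but deliberately without a
	 * deep inspection of the chain: syncing to one chain node implicitly
	 * waits for everything before it, and holding a reference to just
	 * that node lets the chain garbage collection drop the older links.
	 */
	f = dma_resv_excl_fence(resv);
	dma_fence_chain_for_each(f, f) {
		struct dma_fence_chain *chain = to_dma_fence_chain(f);

		if (amdgpu_sync_test_fence(adev, mode, owner, chain ?
					   chain->fence : f)) {
			r = amdgpu_sync_fence(sync, f);
			dma_fence_put(f);
			if (r)
				return r;
			break;
		}
	}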
Thanks,
Christian.
> -Daniel
>
>
>>> Anyway pretty much one of the versions I had in mind too, except I didn't
>>> type it up.
>>>
>>> Acked-by: Daniel Vetter <daniel.vetter at ffwll.ch>
>> Thanks,
>> Christian.
>>
>>>> + struct dma_fence_chain *chain = to_dma_fence_chain(f);
>>>> +
>>>> + if (amdgpu_sync_test_fence(adev, mode, owner, chain ?
>>>> + chain->fence : f)) {
>>>> + r = amdgpu_sync_fence(sync, f);
>>>> + dma_fence_put(f);
>>>> + if (r)
>>>> + return r;
>>>> + break;
>>>> + }
>>>> + }
>>>> flist = dma_resv_shared_list(resv);
>>>> - if (!flist || r)
>>>> - return r;
>>>> + if (!flist)
>>>> + return 0;
>>>> for (i = 0; i < flist->shared_count; ++i) {
>>>> - void *fence_owner;
>>>> -
>>>> f = rcu_dereference_protected(flist->shared[i],
>>>> dma_resv_held(resv));
>>>> - fence_owner = amdgpu_sync_get_owner(f);
>>>> -
>>>> - /* Always sync to moves, no matter what */
>>>> - if (fence_owner == AMDGPU_FENCE_OWNER_UNDEFINED) {
>>>> + if (amdgpu_sync_test_fence(adev, mode, owner, f)) {
>>>> r = amdgpu_sync_fence(sync, f);
>>>> if (r)
>>>> - break;
>>>> - }
>>>> -
>>>> - /* We only want to trigger KFD eviction fences on
>>>> - * evict or move jobs. Skip KFD fences otherwise.
>>>> - */
>>>> - if (fence_owner == AMDGPU_FENCE_OWNER_KFD &&
>>>> - owner != AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> - continue;
>>>> -
>>>> - /* Never sync to VM updates either. */
>>>> - if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
>>>> - owner != AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> - continue;
>>>> -
>>>> - /* Ignore fences depending on the sync mode */
>>>> - switch (mode) {
>>>> - case AMDGPU_SYNC_ALWAYS:
>>>> - break;
>>>> -
>>>> - case AMDGPU_SYNC_NE_OWNER:
>>>> - if (amdgpu_sync_same_dev(adev, f) &&
>>>> - fence_owner == owner)
>>>> - continue;
>>>> - break;
>>>> -
>>>> - case AMDGPU_SYNC_EQ_OWNER:
>>>> - if (amdgpu_sync_same_dev(adev, f) &&
>>>> - fence_owner != owner)
>>>> - continue;
>>>> - break;
>>>> -
>>>> - case AMDGPU_SYNC_EXPLICIT:
>>>> - continue;
>>>> + return r;
>>>> }
>>>> -
>>>> - WARN(debug_evictions && fence_owner == AMDGPU_FENCE_OWNER_KFD,
>>>> - "Adding eviction fence to sync obj");
>>>> - r = amdgpu_sync_fence(sync, f);
>>>> - if (r)
>>>> - break;
>>>> }
>>>> - return r;
>>>> + return 0;
>>>> }
>>>> /**
>>>> --
>>>> 2.25.1
>>>>