[PATCH 2/3] drm/xe: share bo dma-resv with backup object
Matthew Auld
matthew.auld at intel.com
Mon Apr 14 10:32:11 UTC 2025
Hi,
On 11/04/2025 16:12, Thomas Hellström wrote:
> On Thu, 2025-04-10 at 17:20 +0100, Matthew Auld wrote:
>> We end up needing to grab both locks together anyway and keep them held
>> until we complete the copy or add the fence. Plus the backup_obj is
>> short lived and tied to the parent object, so seems reasonable to share
>> the same dma-resv. This will simplify the locking here, and in follow
>> up patches.
>>
>> Signed-off-by: Matthew Auld <matthew.auld at intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>
> Is there any chance that the bo dma-resv is freed before the backup
> object's resv is individualized?
>
> If not, perhaps a short description why that can never happen?
Thanks for reviewing. My thinking was that there should only ever be one
reference on the backup bo, which is dropped by the parent bo either in the
unpin step or in the unprepare step, whichever comes first. In both cases
there is still a reference on the parent at the point where the backup ref
is dropped, and the individualize step looks to be synchronous in TTM, so it
should complete while the parent lock is still held and the parent bo can't
disappear underneath it.

But as you say, maybe that is inviting trouble later, if there is some
hypothetical way for something to grab an extra reference on the backup bo.
What about if, in addition, we also hold a reference on the parent bo, which
is then only dropped after the individualize step:
https://gitlab.freedesktop.org/mwa/kernel/-/commit/3b72a079a1cd9da9590da82912798785ede8c97f
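
Roughly what I mean, as a sketch only (the parent_obj field name and the
exact spot where the ref gets dropped are illustrative here; the linked
commit is the actual version):

	/* When creating the backup with the shared resv, also pin the parent: */
	backup->parent_obj = xe_bo_get(bo);	/* illustrative field name */

/*
 * The backup's destroy callback should only run once ttm_bo_release() has
 * already individualized the shared resv, so dropping the parent ref here
 * would mean the parent (and its resv) can't go away before that step.
 */
static void xe_ttm_bo_destroy(struct ttm_buffer_object *ttm_bo)
{
	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);

	if (bo->parent_obj)
		xe_bo_put(bo->parent_obj);

	/* ... existing teardown ... */
}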
>
> /Thomas
>
>
>> ---
>>  drivers/gpu/drm/xe/xe_bo.c | 24 +++++++++---------------
>>  1 file changed, 9 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index c337790c81ae..3eab6352d9dc 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -1120,9 +1120,10 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
>>  	if (bo->flags & XE_BO_FLAG_PINNED_NORESTORE)
>>  		goto out_unlock_bo;
>>
>> -	backup = xe_bo_create_locked(xe, NULL, NULL, bo->size, ttm_bo_type_kernel,
>> -				     XE_BO_FLAG_SYSTEM | XE_BO_FLAG_NEEDS_CPU_ACCESS |
>> -				     XE_BO_FLAG_PINNED);
>> +	backup = ___xe_bo_create_locked(xe, NULL, NULL, bo->ttm.base.resv, NULL, bo->size,
>> +					DRM_XE_GEM_CPU_CACHING_WB, ttm_bo_type_kernel,
>> +					XE_BO_FLAG_SYSTEM | XE_BO_FLAG_NEEDS_CPU_ACCESS |
>> +					XE_BO_FLAG_PINNED);
>>  	if (IS_ERR(backup)) {
>>  		ret = PTR_ERR(backup);
>>  		goto out_unlock_bo;
>> @@ -1177,7 +1178,6 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
>>
>>  out_backup:
>>  	xe_bo_vunmap(backup);
>> -	xe_bo_unlock(backup);
>>  	if (ret)
>>  		xe_bo_put(backup);
>>  out_unlock_bo:
>> @@ -1212,17 +1212,12 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
>>  	if (!backup)
>>  		return 0;
>>
>> -	xe_bo_lock(backup, false);
>> +	xe_bo_lock(bo, false);
>>
>>  	ret = ttm_bo_validate(&backup->ttm, &backup->placement, &ctx);
>>  	if (ret)
>>  		goto out_backup;
>>
>> -	if (WARN_ON(!dma_resv_trylock(bo->ttm.base.resv))) {
>> -		ret = -EBUSY;
>> -		goto out_backup;
>> -	}
>> -
>>  	if (xe_bo_is_user(bo) || (bo->flags & XE_BO_FLAG_PINNED_LATE_RESTORE)) {
>>  		struct xe_migrate *migrate;
>>  		struct dma_fence *fence;
>> @@ -1271,15 +1266,14 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
>>
>>  	bo->backup_obj = NULL;
>>
>> +out_backup:
>> +	xe_bo_vunmap(backup);
>> +	if (!bo->backup_obj)
>> +		xe_bo_put(backup);
>>  out_unlock_bo:
>>  	if (unmap)
>>  		xe_bo_vunmap(bo);
>>  	xe_bo_unlock(bo);
>> -out_backup:
>> -	xe_bo_vunmap(backup);
>> -	xe_bo_unlock(backup);
>> -	if (!bo->backup_obj)
>> -		xe_bo_put(backup);
>>  	return ret;
>>  }
>>
>
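
To spell out the locking simplification the commit message is talking about:
with the backup sharing the parent's dma-resv, the restore path boils down to
roughly the following (abridged from the hunks above, error paths trimmed):

	xe_bo_lock(bo, false);	/* one lock now covers both bo and backup */

	ret = ttm_bo_validate(&backup->ttm, &backup->placement, &ctx);
	if (ret)
		goto out_backup;

	/* ... copy backup -> bo, or kick off the migrate job and add its fence ... */

	bo->backup_obj = NULL;

out_backup:
	xe_bo_vunmap(backup);
	if (!bo->backup_obj)
		xe_bo_put(backup);
out_unlock_bo:
	if (unmap)
		xe_bo_vunmap(bo);
	xe_bo_unlock(bo);	/* drops the shared resv lock for both objects */
	return ret;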