[PATCH 1/8] drm: execution context for GEM buffers v2
Christian König
christian.koenig at amd.com
Thu Jun 9 07:18:22 UTC 2022
On 09.06.22 at 02:05, Felix Kuehling wrote:
> [SNIP]
>> + *
>> + * ret = drm_exec_lock(&exec, boB, 1);
>
> Where is drm_exec_lock? It's not in this patch.
I've renamed this function to drm_exec_prepare_obj() but forgot to
update the documentation. Going to fix this, thanks.
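For reference, the fixed kernel-doc example would presumably read roughly like this (drm_exec_init(), drm_exec_while_not_all_locked() and boA are taken from the rest of the series and may differ in the final patch):

```c
/*
 * drm_exec_init(&exec, true);
 * drm_exec_while_not_all_locked(&exec) {
 *	ret = drm_exec_prepare_obj(&exec, boA, 1);
 *	drm_exec_continue_on_contention(&exec);
 *	if (ret)
 *		goto error;
 *
 *	ret = drm_exec_prepare_obj(&exec, boB, 1);
 *	drm_exec_continue_on_contention(&exec);
 *	if (ret)
 *		goto error;
 * }
 *
 * drm_exec_for_each_locked_object(&exec, index, obj) {
 *	dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_READ);
 *	...
 * }
 * drm_exec_fini(&exec);
 */
```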
>
>
>> + * drm_exec_continue_on_contention(&exec);
>> + * if (ret)
>> + * goto error;
>> + * }
>> + *
>> + * drm_exec_for_each_locked_object(&exec, index, obj) {
>> + * dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_READ);
>> + * ...
>> + * }
>> + * drm_exec_fini(&exec);
>> + *
>> + * See struct drm_exec for more details.
>> + */
>> +
>> +/* Dummy value used to initially enter the retry loop */
>> +#define DRM_EXEC_DUMMY (void *)~0
>> +
>> +/* Initialize the drm_exec_objects container */
>> +static void drm_exec_objects_init(struct drm_exec_objects *container)
>> +{
>> + container->objects = kmalloc(PAGE_SIZE, GFP_KERNEL);
>
> Should this be kvmalloc? You're using kvrealloc and kvfree elsewhere.
The initial number of objects tracked should be small enough so that we
can use kmalloc() directly.
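To illustrate the intent with a standalone model (plain malloc()/realloc() standing in for the kernel's kmalloc()/kvrealloc(); all names hypothetical): the container starts with one page worth of pointers and only the growth path needs the kv* variants, since the buffer may then exceed what contiguous pages can serve.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical model of the container: one small initial allocation,
 * grown geometrically once the first page's worth of pointers is full. */
struct obj_array {
	void **objects;
	unsigned int num_objects;
	unsigned int max_objects;
};

static int obj_array_init(struct obj_array *a, size_t page_size)
{
	/* kmalloc(PAGE_SIZE, GFP_KERNEL) in the patch */
	a->objects = malloc(page_size);
	if (!a->objects)
		return -1;
	a->num_objects = 0;
	a->max_objects = page_size / sizeof(void *);
	return 0;
}

static int obj_array_add(struct obj_array *a, void *obj)
{
	if (a->num_objects == a->max_objects) {
		/* the patch switches to kvrealloc() here, since the buffer
		 * may now be too large for physically contiguous pages */
		unsigned int max = a->max_objects * 2;
		void **tmp = realloc(a->objects, max * sizeof(void *));

		if (!tmp)
			return -1;
		a->objects = tmp;
		a->max_objects = max;
	}
	a->objects[a->num_objects++] = obj;
	return 0;
}
```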
> [SNIP]
>>
>> +/**
>> + * drm_exec_prepare_obj - prepare a GEM object for use
>> + * @exec: the drm_exec object with the state
>> + * @obj: the GEM object to prepare
>> + * @num_fences: how many fences to reserve
>> + *
>> + * Prepare a GEM object for use by locking it and reserving fence
>> + * slots. All successfully locked objects are put into the locked
>> + * container. Duplicates are detected as well and automatically moved
>> + * into the duplicates container.
>> + *
>> + * Returns: -EDEADLK if a contention is detected, -ENOMEM when memory
>> + * allocation failed and zero for success.
>> + */
>> +int drm_exec_prepare_obj(struct drm_exec *exec, struct drm_gem_object *obj,
>> +			 unsigned int num_fences)
>> +{
>> + int ret;
>> +
>> + ret = drm_exec_lock_contended(exec);
>
> If this succeeds, it won't reserve any fence slots for the object. Is
> that a problem?
No, the contended object should be presented again later on; we allocate
the fence slots at that point.
>>
>> + if (unlikely(ret == -EALREADY)) {
>> + struct drm_exec_objects *container = &exec->duplicates;
>> +
>> + /*
>> + * If this is the first locked GEM object it was most likely
>> + * just contended. So don't add it to the duplicates, just
>> + * reserve the fence slots.
>
> I don't understand this. Seems a bit arbitrary. Is it even legal to
> try to add the same object twice? I thought duplicates was for
> different objects that share the same reservation, not actually the
> same object on the same list twice.
Yes, it's legal to try the same object twice. That can easily happen
when userspace specifies the same handle twice for a submission.
The other possibility is that we had a contention, backed off, locked the
contended object first and then re-tried everything else once more.
> Maybe you meant to compare with
> exec->locked.objects[exec->locked.num_objects-1], assuming that
> drm_exec_lock_contended just succeeded locking a previously contended
> object, and the caller retried locking that same object again?
Yes, that's pretty much the idea. But then exec->locked.num_objects is
just 1, so that's equal to checking exec->locked.objects[0].
On the other hand we would also catch cases where we lock the same
object multiple times directly behind each other. Need to think about
that a bit.
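As a userspace-testable model of that idea (comparing against the *last* locked object instead of objects[0]; all names are hypothetical stand-ins and the fence-slot reservation is omitted): this also catches the same handle being presented twice back to back.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define MAX_OBJS 16

/* Hypothetical model: two flat lists instead of struct drm_exec_objects */
struct exec_model {
	void *locked[MAX_OBJS];
	unsigned int num_locked;
	void *duplicates[MAX_OBJS];
	unsigned int num_duplicates;
};

/* A ww_mutex we already hold reports -EALREADY on a second trylock */
static int model_trylock(struct exec_model *e, void *obj)
{
	for (unsigned int i = 0; i < e->num_locked; i++)
		if (e->locked[i] == obj)
			return -EALREADY;
	return 0;
}

/* Mirrors the quoted hunk, but the -EALREADY branch compares against the
 * last locked object rather than the first one. */
static int model_prepare(struct exec_model *e, void *obj)
{
	int ret = model_trylock(e, obj);

	if (ret == -EALREADY) {
		if (e->num_locked && e->locked[e->num_locked - 1] == obj)
			return 0;	/* re-presented contended object */
		e->duplicates[e->num_duplicates++] = obj;
		return 0;
	}
	e->locked[e->num_locked++] = obj;
	return 0;
}
```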
>> + */
>> + if (exec->locked.num_objects && exec->locked.objects[0] == obj)
>> + container = NULL;
>> +
>> + ret = drm_exec_obj_locked(container, obj, num_fences);
>> + if (ret)
>> + return ret;
>> +
>> + } else if (unlikely(ret)) {
>> + return ret;
>> +
>> + } else {
>> + ret = drm_exec_obj_locked(&exec->locked, obj, num_fences);
>> + if (ret)
>> + goto error_unlock;
>> + }
>> +
>> + drm_gem_object_get(obj);
>
> The container already gets a reference to obj. What is this extra
> reference for? And where does it get dropped?
Need to double check this, might be that it's just a leftover.
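The suspected imbalance can be modeled with a plain counter (hypothetical helpers standing in for drm_gem_object_get()/put() and the container bookkeeping): if both the container and the extra get take a reference, but fini only drops the container's, one reference per object leaks.

```c
#include <assert.h>

/* Hypothetical stand-in for struct drm_gem_object's kref */
struct obj { int refcount; };

static void obj_get(struct obj *o) { o->refcount++; }
static void obj_put(struct obj *o) { o->refcount--; }

/* Model: the container takes its own reference on add ... */
static void container_add(struct obj *o) { obj_get(o); }
/* ... and fini drops exactly one reference per tracked object */
static void container_fini(struct obj *o) { obj_put(o); }

static int leaked_refs(void)
{
	struct obj o = { .refcount = 1 };	/* caller's reference */

	container_add(&o);	/* ref taken inside drm_exec_obj_locked() */
	obj_get(&o);		/* the extra drm_gem_object_get() in question */
	container_fini(&o);	/* drm_exec_fini() drops the container's ref */
	return o.refcount - 1;	/* anything beyond the caller's ref leaks */
}
```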
Thanks,
Christian.
>
> Regards,
> Felix