[Intel-gfx] [PATCH 17/28] drm/i915: use the new iterator in i915_gem_busy_ioctl v2
Christian König
ckoenig.leichtzumerken at gmail.com
Tue Oct 5 12:44:50 UTC 2021
On 05.10.21 at 14:40, Tvrtko Ursulin wrote:
>
> On 05/10/2021 12:37, Christian König wrote:
>> This makes the function much simpler since the complex
>> retry logic is now handled elsewhere.
>>
>> Signed-off-by: Christian König <christian.koenig at amd.com>
>> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>
> Reminder - the r-b was retracted until at least more text is added to
> the commit message about the pros and cons. But really some discussion
> should be had inside the i915 team on the topic.
Sure, going to move those to a different branch.
But I really only see the following options:
1. Grab the lock.
2. Use the _unlocked variant with get/put.
3. Add another _rcu iterator just for this case.
I'm fine with any of them, but Daniel pretty much already rejected #3,
and #1/#2 have more overhead than the original approach.
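For illustration, a rough sketch of what #1 could look like here,
assuming the locked dma_resv_for_each_fence() iterator from this series
(error handling abbreviated, this is just a sketch and not a tested
implementation):

	struct dma_resv_iter cursor;
	struct dma_fence *fence;

	err = dma_resv_lock_interruptible(obj->base.resv, NULL);
	if (err)
		goto out;

	args->busy = 0;
	dma_resv_for_each_fence(&cursor, obj->base.resv, true, fence) {
		if (dma_resv_iter_is_exclusive(&cursor))
			/* exclusive fence counts as READ *and* WRITE */
			args->busy |= busy_check_writer(fence);
		else
			/* shared fences only add to the READ set */
			args->busy |= busy_check_reader(fence);
	}
	dma_resv_unlock(obj->base.resv);

No restart handling is needed in that variant since the reservation
object can't change while the lock is held, but it trades that for the
lock/unlock overhead and potential contention mentioned above.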
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>> ---
>> drivers/gpu/drm/i915/gem/i915_gem_busy.c | 35 ++++++++++--------------
>> 1 file changed, 14 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>> index 6234e17259c1..dc72b36dae54 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
>> @@ -82,8 +82,8 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>> {
>> struct drm_i915_gem_busy *args = data;
>> struct drm_i915_gem_object *obj;
>> - struct dma_resv_list *list;
>> - unsigned int seq;
>> + struct dma_resv_iter cursor;
>> + struct dma_fence *fence;
>> int err;
>> err = -ENOENT;
>> @@ -109,27 +109,20 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>> * to report the overall busyness. This is what the wait-ioctl does.
>> *
>> */
>> -retry:
>> - seq = raw_read_seqcount(&obj->base.resv->seq);
>> -
>> - /* Translate the exclusive fence to the READ *and* WRITE engine */
>> - args->busy = busy_check_writer(dma_resv_excl_fence(obj->base.resv));
>> -
>> - /* Translate shared fences to READ set of engines */
>> - list = dma_resv_shared_list(obj->base.resv);
>> - if (list) {
>> - unsigned int shared_count = list->shared_count, i;
>> -
>> - for (i = 0; i < shared_count; ++i) {
>> - struct dma_fence *fence =
>> - rcu_dereference(list->shared[i]);
>> -
>> + args->busy = 0;
>> + dma_resv_iter_begin(&cursor, obj->base.resv, true);
>> + dma_resv_for_each_fence_unlocked(&cursor, fence) {
>> + if (dma_resv_iter_is_restarted(&cursor))
>> + args->busy = 0;
>> +
>> + if (dma_resv_iter_is_exclusive(&cursor))
>> + /* Translate the exclusive fence to the READ *and* WRITE engine */
>> + args->busy |= busy_check_writer(fence);
>> + else
>> + /* Translate shared fences to READ set of engines */
>> args->busy |= busy_check_reader(fence);
>> - }
>> }
>> -
>> - if (args->busy && read_seqcount_retry(&obj->base.resv->seq, seq))
>> - goto retry;
>> + dma_resv_iter_end(&cursor);
>> err = 0;
>> out:
>>