[PATCH v5 14/20] drm/sched: Don't store self-dependencies

Christian König christian.koenig at amd.com
Thu Aug 5 13:18:15 UTC 2021



Am 05.08.21 um 12:46 schrieb Daniel Vetter:
> This is essentially part of drm_sched_dependency_optimized(), which
> only amdgpu seems to make use of. Use it a bit more.
>
> This would mean that as-is amdgpu can't use the dependency helpers, at
> least not with the current approach amdgpu has for deciding whether a
> vm_flush is needed. Since amdgpu also has very special rules around
> implicit fencing it can't use those helpers either, and adding a
> drm_sched_job_await_fence_always or similar for amdgpu wouldn't be too
> onerous. That way the special case handling for amdgpu sticks out even
> more and we have a higher chance that reviewers who go across all
> drivers won't miss it.

Well, you should probably drop the sentence about the vm_flush; it is 
completely unrelated.

In addition to that, I still don't think that this is a good idea. 
Dependency handling is something completely driver specific.

E.g. even when you have submitted jobs back to back, they might still 
need a cache flush in between, and that is not the case only for amdgpu.

What you can do is optimize while looking at the fences later on, and 
then note that you have done so and which hw fence you last relied on 
instead.

Regards,
Christian.

>
> Reviewed-by: Lucas Stach <l.stach at pengutronix.de>
> Acked-by: Melissa Wen <mwen at igalia.com>
> Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>
> Cc: "Christian König" <christian.koenig at amd.com>
> Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
> Cc: Luben Tuikov <luben.tuikov at amd.com>
> Cc: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
> Cc: Alex Deucher <alexander.deucher at amd.com>
> ---
>   drivers/gpu/drm/scheduler/sched_main.c | 7 +++++++
>   1 file changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index f77456929139..49e507f91ec0 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -660,6 +660,13 @@ int drm_sched_job_add_dependency(struct drm_sched_job *job,
>   	if (!fence)
>   		return 0;
>   
> +	/* if it's a fence from us it's guaranteed to be earlier */
> +	if (fence->context == job->entity->fence_context ||
> +	    fence->context == job->entity->fence_context + 1) {
> +		dma_fence_put(fence);
> +		return 0;
> +	}
> +
>   	/* Deduplicate if we already depend on a fence from the same context.
>   	 * This lets the size of the array of deps scale with the number of
>   	 * engines involved, rather than the number of BOs.
