[PATCH] drm/scheduler: fix last_scheduled handling
Christian König
ckoenig.leichtzumerken at gmail.com
Wed Aug 8 10:50:29 UTC 2018
Ping, Nayan. Any comments on this, or can I commit it?
This is just a stripped-down version of my original last_scheduled
improvement patch.
Christian.
Am 07.08.2018 um 14:54 schrieb Christian König:
> Make sure we access last_scheduled only after checking that there are no
> more jobs on the entity.
>
> Signed-off-by: Christian König <christian.koenig at amd.com>
> ---
> drivers/gpu/drm/scheduler/gpu_scheduler.c | 21 +++++++++++----------
> 1 file changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 8ee249886473..bd7883d1b964 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -568,19 +568,20 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
> struct drm_sched_entity *entity)
> {
> struct drm_sched_rq *rq = entity->rq;
> - bool first, reschedule, idle;
> + bool first;
>
> - idle = entity->last_scheduled == NULL ||
> - dma_fence_is_signaled(entity->last_scheduled);
> first = spsc_queue_count(&entity->job_queue) == 0;
> - reschedule = idle && first && (entity->num_rq_list > 1);
> + if (first && (entity->num_rq_list > 1)) {
> + struct dma_fence *fence;
>
> - if (reschedule) {
> - rq = drm_sched_entity_get_free_sched(entity);
> - spin_lock(&entity->rq_lock);
> - drm_sched_rq_remove_entity(entity->rq, entity);
> - entity->rq = rq;
> - spin_unlock(&entity->rq_lock);
> + fence = READ_ONCE(entity->last_scheduled);
> + if (fence == NULL || dma_fence_is_signaled(fence)) {
> + rq = drm_sched_entity_get_free_sched(entity);
> + spin_lock(&entity->rq_lock);
> + drm_sched_rq_remove_entity(entity->rq, entity);
> + entity->rq = rq;
> + spin_unlock(&entity->rq_lock);
> + }
> }
>
> sched_job->sched = entity->rq->sched;
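
For context, the ordering the hunk enforces can be illustrated outside
the kernel. The sketch below is a minimal userspace model, not the real
scheduler code: struct entity, struct fence, entity_is_idle() and the
atomic job_count are stand-ins for drm_sched_entity, dma_fence, the idle
check and spsc_queue_count(). It shows why the queue-empty check has to
come before the READ_ONCE-style load of last_scheduled.

/*
 * Minimal userspace sketch of the pattern in the patch; all types and
 * helpers here are stand-ins, not the real DRM scheduler structures.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct fence { bool signaled; };

struct entity {
	_Atomic(struct fence *) last_scheduled;	/* swapped by the scheduler thread */
	atomic_int job_count;			/* stand-in for spsc_queue_count() */
	int num_rq_list;
};

static bool entity_is_idle(struct entity *e)
{
	struct fence *fence;

	/* Check for queued jobs first: while jobs are pending, the
	 * scheduler thread keeps replacing last_scheduled, so reading
	 * it before this check (as the old code did) races with that
	 * update. */
	if (atomic_load(&e->job_count) != 0)
		return false;

	/* Equivalent of READ_ONCE(entity->last_scheduled): a single
	 * load, so the compiler cannot re-read a pointer that another
	 * thread may be changing. */
	fence = atomic_load(&e->last_scheduled);
	return fence == NULL || fence->signaled;
}

int main(void)
{
	struct fence done = { .signaled = true };
	struct entity e = { .num_rq_list = 2 };

	atomic_store(&e.last_scheduled, &done);
	atomic_store(&e.job_count, 0);

	/* Idle and no queued jobs: only now is it safe to consider
	 * moving the entity to a less loaded run queue. */
	printf("may reschedule: %d\n", entity_is_idle(&e) && e.num_rq_list > 1);
	return 0;
}

Compiled with any C11 compiler (e.g. gcc -std=c11), this prints
"may reschedule: 1"; bumping job_count flips the result, mirroring the
"first" check in drm_sched_entity_push_job().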