[PATCH 1/2] drm/scheduler: only kill entity if last user is killed v2
Andrey Grodzovsky
Andrey.Grodzovsky at amd.com
Mon Jul 30 20:42:18 UTC 2018
I believe that in this case the

    if (!entity->rq) {
            DRM_ERROR(...);
            return;
    }

clause in drm_sched_entity_push_job() will take effect, so the push is
refused with an error instead of reaching a dead entity.
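For reference, a minimal sketch of how that guard would sit at the top of
drm_sched_entity_push_job() -- paraphrased from memory, with the exact
DRM_ERROR message elided:

    void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
                                   struct drm_sched_entity *entity)
    {
            /* drm_sched_entity_flush() calls
             * drm_sched_entity_set_rq(entity, NULL) once the last user
             * of the entity has been killed, so a later push from some
             * other user lands here and is rejected. */
            if (!entity->rq) {
                    DRM_ERROR(...);  /* exact message elided */
                    return;
            }

            /* ... normal push path continues ... */
    }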
P.S. I remember we planned to actually propagate the error back to the
caller, so I guess we should take care of this sooner or later.
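Something along these lines, perhaps -- purely a hypothetical sketch, the
int return type and the -ENOENT value are my illustrative picks, not
anything we agreed on:

    int drm_sched_entity_push_job(struct drm_sched_job *sched_job,
                                  struct drm_sched_entity *entity)
    {
            /* Hypothetical: report the killed entity to the caller
             * instead of only logging it. */
            if (!entity->rq)
                    return -ENOENT;

            /* ... normal push path ... */
            return 0;
    }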
The change is Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
Andrey
On 07/30/2018 09:34 AM, Nayan Deshmukh wrote:
> Hi Christian,
>
> The code looks good to me. But I was just wondering what will happen
> when the last user is killed and some other user tries to push to the
> entity.
>
> Regards,
> Nayan Deshmukh
>
> On Mon, Jul 30, 2018 at 4:33 PM Christian König
> <ckoenig.leichtzumerken at gmail.com> wrote:
>
> Note which task is using the entity and only kill it if the last user of
> the entity is killed. This should prevent problems when entities are
> leaked to child processes.
>
> v2: add missing kernel doc
>
> Signed-off-by: Christian König <christian.koenig at amd.com>
> ---
>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 6 +++++-
>  include/drm/gpu_scheduler.h               | 2 ++
>  2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 3f2fc5e8242a..f563e4fbb4b6 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -275,6 +275,7 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
>  long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
>  {
>          struct drm_gpu_scheduler *sched;
> +        struct task_struct *last_user;
>          long ret = timeout;
>
>          sched = entity->rq->sched;
> @@ -295,7 +296,9 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
>
>
>          /* For killed process disable any more IBs enqueue right now */
> -        if ((current->flags & PF_EXITING) && (current->exit_code == SIGKILL))
> +        last_user = cmpxchg(&entity->last_user, current->group_leader, NULL);
> +        if ((!last_user || last_user == current->group_leader) &&
> +            (current->flags & PF_EXITING) && (current->exit_code == SIGKILL))
>                  drm_sched_entity_set_rq(entity, NULL);
>
>          return ret;
> @@ -541,6 +544,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>
>          trace_drm_sched_job(sched_job, entity);
>
> +        WRITE_ONCE(entity->last_user, current->group_leader);
>          first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>
>          /* first job wakes up scheduler */
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 091b9afcd184..21c648b0b2a1 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -66,6 +66,7 @@ enum drm_sched_priority {
>   * @guilty: points to ctx's guilty.
>   * @fini_status: contains the exit status in case the process was signalled.
>   * @last_scheduled: points to the finished fence of the last scheduled job.
> + * @last_user: last group leader pushing a job into the entity.
>   *
>   * Entities will emit jobs in order to their corresponding hardware
>   * ring, and the scheduler will alternate between entities based on
> @@ -85,6 +86,7 @@ struct drm_sched_entity {
>          struct dma_fence_cb cb;
>          atomic_t *guilty;
>          struct dma_fence *last_scheduled;
> +        struct task_struct *last_user;
>  };
>
>  /**
> --
> 2.14.1
>
>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel