Hi Christian,

The code looks good to me. But I was just wondering what will happen when
the last user is killed and some other user then tries to push to the
entity (I have sketched the interleaving I am thinking of below the quoted
patch).

Regards,
Nayan Deshmukh

On Mon, Jul 30, 2018 at 4:33 PM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:

Note which task is using the entity and only kill it if the last user of
the entity is killed. This should prevent problems when entities are
leaked to child processes.

v2: add missing kernel doc

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 6 +++++-
 include/drm/gpu_scheduler.h               | 2 ++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 3f2fc5e8242a..f563e4fbb4b6 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -275,6 +275,7 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
 long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
 {
 	struct drm_gpu_scheduler *sched;
+	struct task_struct *last_user;
 	long ret = timeout;
 
 	sched = entity->rq->sched;
@@ -295,7 +296,9 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
 
 
 	/* For killed process disable any more IBs enqueue right now */
-	if ((current->flags & PF_EXITING) && (current->exit_code == SIGKILL))
+	last_user = cmpxchg(&entity->last_user, current->group_leader, NULL);
+	if ((!last_user || last_user == current->group_leader) &&
+	    (current->flags & PF_EXITING) && (current->exit_code == SIGKILL))
 		drm_sched_entity_set_rq(entity, NULL);
 
 	return ret;
@@ -541,6 +544,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 
 	trace_drm_sched_job(sched_job, entity);
 
+	WRITE_ONCE(entity->last_user, current->group_leader);
 	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
 
 	/* first job wakes up scheduler */
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 091b9afcd184..21c648b0b2a1 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -66,6 +66,7 @@ enum drm_sched_priority {
  * @guilty: points to ctx's guilty.
  * @fini_status: contains the exit status in case the process was signalled.
  * @last_scheduled: points to the finished fence of the last scheduled job.
+ * @last_user: last group leader pushing a job into the entity.
  *
  * Entities will emit jobs in order to their corresponding hardware
  * ring, and the scheduler will alternate between entities based on
@@ -85,6 +86,7 @@ struct drm_sched_entity {
 	struct dma_fence_cb		cb;
 	atomic_t			*guilty;
 	struct dma_fence		*last_scheduled;
+	struct task_struct		*last_user;
 };
 
 /**
-- 
2.14.1
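
P.S.: To make the question concrete, the interleaving I have in mind is
roughly the following (a hypothetical sequence, not taken from the patch;
assume process B inherited the entity through a leaked fd):

	/*
	 * Process A (last user)           Process B (leaked fd)
	 *
	 * receives SIGKILL
	 * drm_sched_entity_flush()
	 *   cmpxchg(&entity->last_user,
	 *           A->group_leader, NULL)
	 *     -> matches, NULL swapped in
	 *   drm_sched_entity_set_rq(entity, NULL)
	 *                                  drm_sched_entity_push_job()
	 *                                    WRITE_ONCE(entity->last_user,
	 *                                               B->group_leader)
	 *                                    spsc_queue_push(...)
	 *                                      -> first job, but
	 *                                         entity->rq is NULL now
	 */

With entity->rq already NULL at that point, what is supposed to happen to
B's job?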
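
And to double check my reading of the last_user handshake itself, a
minimal userspace model (C11 atomics standing in for the kernel's
cmpxchg()/WRITE_ONCE(); all names here are invented for illustration):

#include <stdatomic.h>
#include <stdio.h>

static int task_a, task_b;		/* stand-ins for group leaders */
static _Atomic(int *) last_user;	/* models entity->last_user    */

/* Models the WRITE_ONCE() in drm_sched_entity_push_job(). */
static void push_job(int *me)
{
	atomic_store(&last_user, me);
}

/* Models the flush-side check: may task "me", exiting with SIGKILL,
 * kill the entity? */
static int may_kill(int *me)
{
	int *old = me;

	/* Like cmpxchg(&entity->last_user, me, NULL): NULL is swapped
	 * in only if "me" is still the recorded last user; either way,
	 * "old" afterwards holds the value last_user had before. */
	atomic_compare_exchange_strong(&last_user, &old, NULL);

	/* "!old" covers a repeated flush after NULL was already swapped
	 * in, e.g. by another thread of the same exiting process. */
	return !old || old == me;
}

int main(void)
{
	push_job(&task_a);
	printf("%d\n", may_kill(&task_a));	/* 1: A was the last user */
	printf("%d\n", may_kill(&task_a));	/* 1: already NULL        */

	push_job(&task_a);
	push_job(&task_b);
	printf("%d\n", may_kill(&task_a));	/* 0: B pushed afterwards */
	return 0;
}

So with the patch, A getting killed leaves the entity alive once B has
pushed a job, which matches the leaked-entity case the commit message
describes; my question above is only about the window after the entity has
already been killed.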