drm_sched with panfrost crash on T820

Hillf Danton hdanton at sina.com
Mon Sep 30 14:52:28 UTC 2019


On Mon, 30 Sep 2019 11:17:45 +0200 Neil Armstrong wrote:
> 
> Did a new run from 5.3:
> 
> [   35.971972] Call trace:
> [   35.974391]  drm_sched_increase_karma+0x5c/0xf0
>			ffff000010667f38	FFFF000010667F94
>			drivers/gpu/drm/scheduler/sched_main.c:335
> 
> The crashing line is :
>                                 if (bad->s_fence->scheduled.context ==
>                                     entity->fence_context) {
> 
> Doesn't seem related to guilty job.

Bail out if the job's s_fence has already been torn down (set to NULL by
drm_sched_job_cleanup()), instead of dereferencing a stale pointer.

--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -333,6 +333,10 @@ void drm_sched_increase_karma(struct drm
 
 			spin_lock(&rq->lock);
 			list_for_each_entry_safe(entity, tmp, &rq->entities, list) {
+				if (!smp_load_acquire(&bad->s_fence)) {
+					spin_unlock(&rq->lock);
+					return;
+				}
 				if (bad->s_fence->scheduled.context ==
 				    entity->fence_context) {
 					if (atomic_read(&bad->karma) >
@@ -543,7 +547,7 @@ EXPORT_SYMBOL(drm_sched_job_init);
 void drm_sched_job_cleanup(struct drm_sched_job *job)
 {
 	dma_fence_put(&job->s_fence->finished);
-	job->s_fence = NULL;
+	smp_store_release(&job->s_fence, NULL);
 }
 EXPORT_SYMBOL(drm_sched_job_cleanup);
 
--
