blocking ops in drm_sched_cleanup_jobs()

Steven Price steven.price at arm.com
Wed Sep 25 14:40:37 UTC 2019


On 25/09/2019 15:12, Koenig, Christian wrote:
> Am 25.09.19 um 16:06 schrieb Steven Price:
>> On 24/09/2019 10:55, Koenig, Christian wrote:
>>> Sorry for the delayed response, have been busy on other stuff last week.
>>>
>>> Am 17.09.19 um 14:46 schrieb Steven Price:
>>>> On 17/09/2019 09:42, Koenig, Christian wrote:
>>>>> Hi Steven,
>>>>>
>>>>> thought about that issue a bit more and I think I came up with a
>>>>> solution.
>>>>>
>>>>> What you could do is to split up drm_sched_cleanup_jobs() into two
>>>>> functions.
>>>>>
>>>>> One that checks if jobs to be cleaned up are present and one which does
>>>>> the actual cleanup.
>>>>>
>>>>> This way we could call drm_sched_cleanup_jobs() outside of the
>>>>> wait_event_interruptible().
>>>> Yes that seems like a good solution - there doesn't seem to be a good
>>>> reason why the actual job cleanup needs to be done within the
>>>> wait_event_interruptible() condition. I did briefly attempt that
>>>> before, but I couldn't work out exactly what the condition is which
>>>> should cause the wake (my initial attempt caused continuous wake-ups).
>>> Basically you need something like the following:
>>>
>>> 1. Test if the timeout worker is running:
>>>
>>> if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
>>>       !cancel_delayed_work(&sched->work_tdr))
>>>           return false;
>>>
>>> 2. Test if there is any job ready to be cleaned up.
>>>
>>> job = list_first_entry_or_null(&sched->ring_mirror_list, struct
>>> drm_sched_job, node);
>>> if (!job || !dma_fence_is_signaled(&job->s_fence->finished))
>>>       return false;
>>>
>>> That should basically do it.
>> Thanks for the pointers. I wasn't sure if the "queue timeout for next
>> job" part was necessary or not if step 2 above returns false.
>>
>> I've been testing the following patch which simply pulls the
>> sched->ops->free_job() out of the wait_event_interruptible().
>>
>> I'll try with just the tests you've described.

It looks like it is necessary to queue the timeout for the next job even
if there isn't a job to be cleaned up (i.e. even if we wouldn't exit
from wait_event_interruptible()), so I'll go with a patch like below.
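
In particular the "queue timeout for next job" part stays even when no
job is returned; the tail of the helper ends up looking roughly like
this (sketch only, using the same locking the current function already
takes):

	/* No finished job found: the delayed TDR work may have been
	 * cancelled above, so re-arm it for the job now at the head of
	 * ring_mirror_list, otherwise a hung job would never trigger
	 * the timeout handler.
	 */
	spin_lock_irqsave(&sched->job_list_lock, flags);
	drm_sched_start_timeout(sched);
	spin_unlock_irqrestore(&sched->job_list_lock, flags);

	return NULL;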

>> ----8<-----
>>  From 873c1816394beee72904e64aa2ee0f169e768d76 Mon Sep 17 00:00:00 2001
>> From: Steven Price <steven.price at arm.com>
>> Date: Mon, 23 Sep 2019 11:08:50 +0100
>> Subject: [PATCH] drm: Don't free jobs in wait_event_interruptible()
>>
>> drm_sched_cleanup_jobs() attempts to free finished jobs, but because it
>> is called as the condition of wait_event_interruptible() it must not
>> sleep. Unfortunately some free callbacks (notably for Panfrost) do sleep.
>>
>> Instead let's rename drm_sched_cleanup_jobs() to
>> drm_sched_get_cleanup_job() and simply return a job for processing if
>> there is one. The caller can then call the free_job() callback outside
>> the wait_event_interruptible() where sleeping is possible before
>> re-checking and returning to sleep if necessary.
>>
>> Signed-off-by: Steven Price <steven.price at arm.com>
>> ---
>>   drivers/gpu/drm/scheduler/sched_main.c | 22 +++++++++++++++-------
>>   1 file changed, 15 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>> index 9a0ee74d82dc..bf9b4931ddfd 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -622,20 +622,21 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
>>   }
>>   
>>   /**
>> - * drm_sched_cleanup_jobs - destroy finished jobs
>> + * drm_sched_get_cleanup_job - fetch the next finished job to be destroyed
>>    *
>>    * @sched: scheduler instance
>>    *
>> - * Remove all finished jobs from the mirror list and destroy them.
>> + * Returns the next finished job from the mirror list (if there is one)
>> + * ready for it to be destroyed.
>>    */
>> -static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched)
>> +static struct drm_sched_job *drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
>>   {
>>   	unsigned long flags;
>>   
>>   	/* Don't destroy jobs while the timeout worker is running */
>>   	if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
>>   	    !cancel_delayed_work(&sched->work_tdr))
>> -		return;
>> +		return NULL;
>>   
>>   
>>   	while (!list_empty(&sched->ring_mirror_list)) {
> 
> Yeah, you should probably clean that up a bit here, but apart from that 
> this should work as well.

Good point - this function can be tidied up quite a bit; I'll post a new
patch shortly.
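
Roughly what I have in mind for the tidied-up version (untested sketch,
so the final patch may differ):

static struct drm_sched_job *
drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
{
	struct drm_sched_job *job;
	unsigned long flags;

	/* Don't destroy jobs while the timeout worker is running */
	if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
	    !cancel_delayed_work(&sched->work_tdr))
		return NULL;

	spin_lock_irqsave(&sched->job_list_lock, flags);

	job = list_first_entry_or_null(&sched->ring_mirror_list,
				       struct drm_sched_job, node);

	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
		/* Remove the finished job from the mirror list and
		 * hand it back to the caller for free_job(). */
		list_del_init(&job->node);
	} else {
		/* Nothing to clean up, re-arm the timeout for the job
		 * at the head of the list (if any). */
		job = NULL;
		drm_sched_start_timeout(sched);
	}

	spin_unlock_irqrestore(&sched->job_list_lock, flags);

	return job;
}

That drops the while loop entirely and takes the lock once for both the
lookup and the timeout re-arm.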

Thanks,

Steve

> Regards,
> Christian.
> 
>> @@ -651,7 +652,7 @@ static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched)
>>   		list_del_init(&job->node);
>>   		spin_unlock_irqrestore(&sched->job_list_lock, flags);
>>   
>> -		sched->ops->free_job(job);
>> +		return job;
>>   	}
>>   
>>   	/* queue timeout for next job */
>> @@ -659,6 +660,7 @@ static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched)
>>   	drm_sched_start_timeout(sched);
>>   	spin_unlock_irqrestore(&sched->job_list_lock, flags);
>>   
>> +	return NULL;
>>   }
>>   
>>   /**
>> @@ -698,12 +700,18 @@ static int drm_sched_main(void *param)
>>   		struct drm_sched_fence *s_fence;
>>   		struct drm_sched_job *sched_job;
>>   		struct dma_fence *fence;
>> +		struct drm_sched_job *cleanup_job = NULL;
>>   
>>   		wait_event_interruptible(sched->wake_up_worker,
>> -					 (drm_sched_cleanup_jobs(sched),
>> +					 (cleanup_job = drm_sched_get_cleanup_job(sched)) ||
>>   					 (!drm_sched_blocked(sched) &&
>>   					  (entity = drm_sched_select_entity(sched))) ||
>> -					 kthread_should_stop()));
>> +					 kthread_should_stop());
>> +
>> +		while (cleanup_job) {
>> +			sched->ops->free_job(cleanup_job);
>> +			cleanup_job = drm_sched_get_cleanup_job(sched);
>> +		}
>>   
>>   		if (!entity)
>>   			continue;
> 


