[PATCH v5 03/16] drm/sched: De-clutter drm_sched_init

Tvrtko Ursulin tvrtko.ursulin at igalia.com
Fri Jul 4 13:02:53 UTC 2025


On 04/07/2025 13:59, Philipp Stanner wrote:
> On Mon, 2025-06-23 at 13:27 +0100, Tvrtko Ursulin wrote:
>> Move work queue allocation into a helper for a more streamlined
>> function body.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin at igalia.com>
>> Cc: Christian König <christian.koenig at amd.com>
>> Cc: Danilo Krummrich <dakr at kernel.org>
>> Cc: Matthew Brost <matthew.brost at intel.com>
>> Cc: Philipp Stanner <phasta at kernel.org>
>> ---
>>   drivers/gpu/drm/scheduler/sched_main.c | 33 ++++++++++++++++++++-------------
>>   1 file changed, 20 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>> index a1b445c3b4db..1f077782ec12 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -84,12 +84,6 @@
>>   #define CREATE_TRACE_POINTS
>>   #include "gpu_scheduler_trace.h"
>>   
>> -#ifdef CONFIG_LOCKDEP
>> -static struct lockdep_map drm_sched_lockdep_map = {
>> -	.name = "drm_sched_lockdep_map"
>> -};
>> -#endif
>> -
>>   int drm_sched_policy = DRM_SCHED_POLICY_FIFO;
>>   
>>   /**
>> @@ -1263,6 +1257,25 @@ static void drm_sched_run_job_work(struct work_struct *w)
>>   	drm_sched_run_job_queue(sched);
>>   }
>>   
>> +static struct workqueue_struct *drm_sched_alloc_wq(const char *name)
>> +{
>> +#if (IS_ENABLED(CONFIG_LOCKDEP))
>> +	static struct lockdep_map map = {
>> +		.name = "drm_sched_lockdep_map"
>> +	};
>> +
>> +	/*
>> +	/*
>> +	 * Avoid leaking a lockdep map on each drm sched creation and
>> +	 * destruction by using a single lockdep map for all drm sched
>> +	 * allocated submit_wq.
>> +	 */
>> +
>> +	return alloc_ordered_workqueue_lockdep_map(name, WQ_MEM_RECLAIM, &map);
>> +#else
>> +	return alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
>> +#endif
>> +}
>> +
>>   /**
>>    * drm_sched_init - Init a gpu scheduler instance
>>    *
>> @@ -1303,13 +1316,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
>>   		sched->submit_wq = args->submit_wq;
>>   		sched->own_submit_wq = false;
>>   	} else {
>> -#ifdef CONFIG_LOCKDEP
>> -		sched->submit_wq = alloc_ordered_workqueue_lockdep_map(args->name,
>> -								       WQ_MEM_RECLAIM,
>> -								       &drm_sched_lockdep_map);
>> -#else
>> -		sched->submit_wq = alloc_ordered_workqueue(args->name, WQ_MEM_RECLAIM);
>> -#endif
>> +		sched->submit_wq = drm_sched_alloc_wq(args->name);
>>   		if (!sched->submit_wq)
>>   			return -ENOMEM;
> 
> You could send this patch separately any time *wink wink*
> 
> We definitely wanna merge that, and you could then just rebase your RFC
> series on drm-misc-next.

Will do. I was waiting for acks or r-bs on the easy patches at the head 
of the series before extracting them. As mentioned before, the series is 
structured so that there are logical stopping points up to which merging 
makes sense, even without going all the way to the end.
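For anyone skimming the thread: the helper does not change the 
caller-facing behaviour. A driver that does not supply its own submit_wq 
still just leaves it NULL and the scheduler allocates an ordered 
workqueue for itself. Roughly like this (an illustrative sketch from my 
reading of struct drm_sched_init_args, not copy-paste code; field values 
and "my_sched_ops"/"my-ring" are made up):

	struct drm_sched_init_args args = {
		.ops          = &my_sched_ops,		/* driver's backend ops */
		.submit_wq    = NULL,			/* scheduler allocates its own ordered wq */
		.credit_limit = 64,
		.timeout      = MAX_SCHEDULE_TIMEOUT,
		.name         = "my-ring",
	};

	ret = drm_sched_init(&sched, &args);

With submit_wq == NULL, drm_sched_init() goes through the new 
drm_sched_alloc_wq() helper, so under lockdep every such workqueue shares 
the one function-local static lockdep map instead of registering (and 
leaking) a fresh map per scheduler.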

Regards,

Tvrtko


