[PATCH v2 3/5] drm/xe: Add exec_queue.sched_props.priority

Welty, Brian brian.welty at intel.com
Thu Jan 4 19:03:44 UTC 2024



On 1/4/2024 8:46 AM, Matthew Brost wrote:
> On Wed, Jan 03, 2024 at 10:44:06AM -0800, Brian Welty wrote:
>> The purpose here is to allow exec_queue_set_priority() to be optimized
>> in a follow-on patch.  Currently it calls q->ops->set_priority(...),
>> but we'd like to apply exec_queue_user_extensions much earlier, and
>> q->ops cannot be called before __xe_exec_queue_init().
>>
>> It is much more efficient to instead only set q->sched_props.priority
>> when applying user extensions; the value is then already set to the
>> user-requested priority.  So the setting of the default value is moved
>> from q->ops->init() to __xe_exec_queue_alloc().
>>
>> v2: fix existing bug such that q->sched_props.priority is now set
>>      before guc_exec_queue_add_msg() (Matt)
>>      fix existing bug where xe_migrate_init() should use the exec_queue's
>>      vfunc for updating the priority.
>>
>> Signed-off-by: Brian Welty <brian.welty at intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_exec_queue.c       | 1 +
>>   drivers/gpu/drm/xe/xe_exec_queue_types.h | 4 ++--
>>   drivers/gpu/drm/xe/xe_guc_submit.c       | 7 +++----
>>   drivers/gpu/drm/xe/xe_migrate.c          | 2 +-
>>   4 files changed, 7 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>> index e78b13845417..9891cddba71c 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> @@ -67,6 +67,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
>>   				hwe->eclass->sched_props.preempt_timeout_us;
>>   	q->sched_props.job_timeout_ms =
>>   				hwe->eclass->sched_props.job_timeout_ms;
>> +	q->sched_props.priority = XE_EXEC_QUEUE_PRIORITY_NORMAL;
>>   
>>   	if (xe_exec_queue_is_parallel(q)) {
>>   		q->parallel.composite_fence_ctx = dma_fence_context_alloc(1);
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> index 882eb5373980..6ae4f4e2ddca 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> @@ -52,8 +52,6 @@ struct xe_exec_queue {
>>   	struct xe_vm *vm;
>>   	/** @class: class of this exec queue */
>>   	enum xe_engine_class class;
>> -	/** @priority: priority of this exec queue */
>> -	enum xe_exec_queue_priority priority;
>>   	/**
>>   	 * @logical_mask: logical mask of where job submitted to exec queue can run
>>   	 */
>> @@ -144,6 +142,8 @@ struct xe_exec_queue {
>>   		u32 preempt_timeout_us;
>>   		/** @job_timeout_ms: job timeout in milliseconds */
>>   		u32 job_timeout_ms;
>> +		/** @priority: priority of this exec queue */
>> +		enum xe_exec_queue_priority priority;
>>   	} sched_props;
>>   
>>   	/** @compute: compute exec queue state */
>> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
>> index 56c0a7bf554f..392cbde62957 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
>> @@ -421,7 +421,7 @@ static void init_policies(struct xe_guc *guc, struct xe_exec_queue *q)
>>   {
>>   	struct exec_queue_policy policy;
>>   	struct xe_device *xe = guc_to_xe(guc);
>> -	enum xe_exec_queue_priority prio = q->priority;
>> +	enum xe_exec_queue_priority prio = q->sched_props.priority;
>>   	u32 timeslice_us = q->sched_props.timeslice_us;
>>   	u32 preempt_timeout_us = q->sched_props.preempt_timeout_us;
>>   
>> @@ -1231,7 +1231,6 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
>>   	err = xe_sched_entity_init(&ge->entity, sched);
>>   	if (err)
>>   		goto err_sched;
>> -	q->priority = XE_EXEC_QUEUE_PRIORITY_NORMAL;
>>   
>>   	if (xe_exec_queue_is_lr(q))
>>   		INIT_WORK(&q->guc->lr_tdr, xe_guc_exec_queue_lr_cleanup);
>> @@ -1301,15 +1300,15 @@ static int guc_exec_queue_set_priority(struct xe_exec_queue *q,
>>   {
>>   	struct xe_sched_msg *msg;
>>   
>> -	if (q->priority == priority || exec_queue_killed_or_banned(q))
>> +	if (q->sched_props.priority == priority || exec_queue_killed_or_banned(q))
>>   		return 0;
>>   
>>   	msg = kmalloc(sizeof(*msg), GFP_KERNEL);
>>   	if (!msg)
>>   		return -ENOMEM;
>>   
>> +	q->sched_props.priority = priority;
>>   	guc_exec_queue_add_msg(q, msg, SET_SCHED_PROPS);
>> -	q->priority = priority;
> 
> This probably should be its own patch and maybe with a Fixes tag too.
> 
>>   
>>   	return 0;
>>   }
>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
>> index adf1dab5eba2..f967fa69769e 100644
>> --- a/drivers/gpu/drm/xe/xe_migrate.c
>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
>> @@ -356,7 +356,7 @@ struct xe_migrate *xe_migrate_init(struct xe_tile *tile)
>>   		return ERR_CAST(m->q);
>>   	}
>>   	if (xe->info.has_usm)
>> -		m->q->priority = XE_EXEC_QUEUE_PRIORITY_KERNEL;
>> +		m->q->ops->set_priority(m->q, XE_EXEC_QUEUE_PRIORITY_KERNEL);
> 
> Same here.

Okay, will pursue that.  But then I'd rather pass a new flag into
xe_exec_queue_create() and just set the priority upfront, rather than
use the vfunc to modify it here afterwards.  Will code it up and see how
it looks.  That way we don't have to deal with the vfunc possibly failing.

Would it be wrong to simply always use XE_EXEC_QUEUE_PRIORITY_KERNEL
for exec_queues created with the flags
	EXEC_QUEUE_FLAG_KERNEL | EXEC_QUEUE_FLAG_PERMANENT ?
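
For illustration, a rough and completely untested sketch of what that
could look like inside __xe_exec_queue_alloc() (it assumes the creation
flags are already stored in q->flags at that point; whether to key off
the existing flags or add a new dedicated flag is exactly the open
question above):

	/* Default; may be overridden by user extensions later on */
	q->sched_props.priority = XE_EXEC_QUEUE_PRIORITY_NORMAL;

	/*
	 * Kernel-internal permanent queues (e.g. the migrate queue) get
	 * kernel priority at creation time, so callers such as
	 * xe_migrate_init() no longer need to call the set_priority vfunc
	 * afterwards and handle its possible failure.
	 */
	if ((q->flags & EXEC_QUEUE_FLAG_KERNEL) &&
	    (q->flags & EXEC_QUEUE_FLAG_PERMANENT))
		q->sched_props.priority = XE_EXEC_QUEUE_PRIORITY_KERNEL;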


> 
> Otherwise LGTM.
> 
> Matt
> 
>>   
>>   	mutex_init(&m->job_mutex);
>>   
>> -- 
>> 2.43.0
>>

