[PATCH 2/4] drm/xe: Add exec_queue.sched_props.job_timeout_ms

Matthew Brost matthew.brost at intel.com
Wed Jan 3 08:14:45 UTC 2024


On Tue, Jan 02, 2024 at 01:17:29PM -0800, Brian Welty wrote:
> The purpose here is to allow exec_queue_set_job_timeout() to be
> optimized in a follow-on patch.  Currently it calls
> q->ops->set_job_timeout(...), but we'd like to apply
> exec_queue_user_extensions much earlier, and q->ops cannot be called
> before __xe_exec_queue_init().
> 
> It will be much more efficient to instead only set
> q->sched_props.job_timeout_ms when applying user extensions.  That
> value will then be used during q->ops->init().
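
For illustration, the intended flow might end up looking something like the
sketch below: the user-extension handler only records the value, and
guc_exec_queue_init() picks it up when the queue is initialized.  This is a
sketch based on the names used in this series, not the actual follow-on
patch; the handler signature and the !create check are assumptions modeled
on the other exec_queue_set_* helpers in xe_exec_queue.c.

	/*
	 * Hypothetical sketch (not the real follow-on patch): with the
	 * value stored in sched_props, the extension handler no longer
	 * needs to go through q->ops->set_job_timeout(), which is only
	 * valid after __xe_exec_queue_init().
	 */
	static int exec_queue_set_job_timeout(struct xe_device *xe,
					      struct xe_exec_queue *q,
					      u64 value, bool create)
	{
		if (XE_IOCTL_DBG(xe, !create))
			return -EINVAL;

		/* Consumed later by guc_exec_queue_init() via q->sched_props */
		q->sched_props.job_timeout_ms = value;

		return 0;
	}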
> 
> Signed-off-by: Brian Welty <brian.welty at intel.com>
> ---
>  drivers/gpu/drm/xe/xe_exec_queue.c       | 2 ++
>  drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 ++
>  drivers/gpu/drm/xe/xe_guc_submit.c       | 3 ++-
>  3 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 94ae87540854..e78b13845417 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -65,6 +65,8 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
>  	q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
>  	q->sched_props.preempt_timeout_us =
>  				hwe->eclass->sched_props.preempt_timeout_us;
> +	q->sched_props.job_timeout_ms =
> +				hwe->eclass->sched_props.job_timeout_ms;
>  
>  	if (xe_exec_queue_is_parallel(q)) {
>  		q->parallel.composite_fence_ctx = dma_fence_context_alloc(1);
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index 3d7e704ec3d9..882eb5373980 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -142,6 +142,8 @@ struct xe_exec_queue {
>  		u32 timeslice_us;
>  		/** @preempt_timeout_us: preemption timeout in micro-seconds */
>  		u32 preempt_timeout_us;
> +		/** @job_timeout_ms: job timeout in milliseconds */
> +		u32 job_timeout_ms;
>  	} sched_props;
>  
>  	/** @compute: compute exec queue state */
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> index 21ac68e3246f..6cbf41ad9c8c 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> @@ -1218,7 +1218,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
>  	init_waitqueue_head(&ge->suspend_wait);
>  
>  	timeout = (q->vm && xe_vm_in_lr_mode(q->vm)) ? MAX_SCHEDULE_TIMEOUT :
> -		  q->hwe->eclass->sched_props.job_timeout_ms;
> +		  q->sched_props.job_timeout_ms;
>  	err = xe_sched_init(&ge->sched, &drm_sched_ops, &xe_sched_ops,
>  			    get_submit_wq(guc),
>  			    q->lrc[0].ring.size / MAX_JOB_SIZE_BYTES, 64,
> @@ -1361,6 +1361,7 @@ static int guc_exec_queue_set_job_timeout(struct xe_exec_queue *q, u32 job_timeo
>  	xe_assert(xe, !exec_queue_banned(q));
>  	xe_assert(xe, !exec_queue_killed(q));
>  
> +	q->sched_props.job_timeout_ms = job_timeout_ms;

Patch LGTM, but per my comment in [1] I think this vfunc can be deleted.
If we agree it can be, there's no point in adding this change.

Matt

[1] https://patchwork.freedesktop.org/patch/573052/?series=128128&rev=1

>  	sched->base.timeout = job_timeout_ms;
>  
>  	return 0;
> -- 
> 2.43.0
> 

