[PATCH] tests/intel/xe_exec_balancer: use xe_wait_ufence to replace sleep action

Dandamudi, Priyanka priyanka.dandamudi at intel.com
Thu Aug 14 04:36:34 UTC 2025



> -----Original Message-----
> From: Dandamudi, Priyanka
> Sent: 13 August 2025 11:15 AM
> To: Zongyao Bai <zongyao.bai at intel.com>; intel-xe at lists.freedesktop.org
> Cc: Bai, Zongyao <zongyao.bai at intel.com>
> Subject: RE: [PATCH] tests/intel/xe_exec_balancer: use xe_wait_ufence to
> replace sleep action
> 
> 
> 
> > -----Original Message-----
> > From: Intel-xe <intel-xe-bounces at lists.freedesktop.org> On Behalf Of
> > Zongyao Bai
> > Sent: 13 July 2025 12:11 AM
> > To: intel-xe at lists.freedesktop.org
> > Cc: Bai, Zongyao <zongyao.bai at intel.com>
> > Subject: [PATCH] tests/intel/xe_exec_balancer: use xe_wait_ufence to
> > replace sleep action
> >
> >     In the test_cm() function, cases with the INVALIDATE flag use a
> >     0.25 second sleep to allow all tasks to complete their jobs.
> >     However, this time is sometimes insufficient.
> >
> >     In this patch, xe_wait_ufence waits (referred to as "fence" for
> >     short) are added for the second half of the n_execs tasks in
> >     INVALIDATE-RACE scenarios. The changes are as follows:
> >         When no flags are set: => No change
> >              -- all n_execs tasks operate with a fence.
> >         When flags include INVALIDATE but not RACE: => No change
> >              -- only the last task operates with a fence.
> >         When flags include both INVALIDATE and RACE: => New in this patch
> >              -- tasks from index n_execs/2 + 1 onward operate with a fence.
> >              -- at least the last task always operates with a fence.
> >     Furthermore, change the xe_wait_ufence timeout value from 1s
> >     (NSEC_PER_SEC) to INT64_MAX, which helps avoid spurious fence
> >     timeout errors.
> >
> >     With the above changes, all n_execs tasks operate with a fence,
> >     allowing us to remove the sleep.
> >
> > Signed-off-by: Zongyao Bai <zongyao.bai at intel.com>
> > ---
> >  tests/intel/xe_exec_balancer.c | 15 +++++++++------
> >  1 file changed, 9 insertions(+), 6 deletions(-)
> >
> > diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> > index 1747e207c..8b397038a 100644
> > --- a/tests/intel/xe_exec_balancer.c
> > +++ b/tests/intel/xe_exec_balancer.c
> > @@ -527,14 +527,17 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> >  		}
> >  	}
> >
> > -	j = flags & INVALIDATE && n_execs ? n_execs - 1 : 0;
> > +	/* Wait for all execs to complete; xe_wait_ufence needs to run at least once. */
> > +	if (flags & INVALIDATE && n_execs) {
> > +		j = flags & RACE ? n_execs / 2 + 1 : n_execs - 1;
> > +		if (j >= n_execs)
> > +			j = n_execs - 1;
> > +	} else {
> > +		j = 0;
> > +	}
> j never goes beyond n_execs; at most it equals n_execs (for example, with
> n_execs = 2, n_execs/2 + 1 = 2). So the whole computation can be collapsed
> into a single expression:
> 
>     j = flags & INVALIDATE && n_execs ?
>             (!(flags & RACE) ? n_execs - 1
>                              : min(n_execs / 2 + 1, n_execs - 1)) : 0;
> 
> >  	for (i = j; i < n_execs; i++)
> >  		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE,
> > -			       exec_queues[i % n_exec_queues], NSEC_PER_SEC);
> > -
> > -	/* Wait for all execs to complete */
> > -	if (flags & INVALIDATE)
> > -		usleep(250000);
> > +			       exec_queues[i % n_exec_queues], INT64_MAX);
> >
> >  	sync[0].addr = to_user_pointer(&data[0].vm_sync);
> >  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
> > --
> > 2.43.0
