[PATCH v2 5/9] drm/i915/gvt: factor out the scheduler

Gao, Ping A ping.a.gao at intel.com
Tue Feb 21 06:30:18 UTC 2017


On 2017/2/17 17:36, Tian, Kevin wrote:
>> From: Ping Gao
>> Sent: Tuesday, February 14, 2017 12:26 PM
>>
>> Factor out the scheduler into a clearer structure. The basic logic
>> is to find the next vGPU first and then schedule it. Deciding which
>> vGPU to pick next splits into two parts:
>> first, find the right sched head to keep fairness; second, choose
>> a vGPU that has timeslice left.
> the pick-up policy needs some elaboration.

How about this:

When deciding which vGPU to pick next, there are two sequential parts:
first, find a sched head with an urgent requirement, i.e. one that is
close to TDR because it has been out of service for a long time; second,
choose the vGPU that has timeslice left, following round-robin style.

>
>> Signed-off-by: Ping Gao <ping.a.gao at intel.com>
>> ---
>>  drivers/gpu/drm/i915/gvt/sched_policy.c | 73 ++++++++++++++++++++++++---------
>>  1 file changed, 54 insertions(+), 19 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gvt/sched_policy.c b/drivers/gpu/drm/i915/gvt/sched_policy.c
>> index 6c53bf0..c174ce6 100644
>> --- a/drivers/gpu/drm/i915/gvt/sched_policy.c
>> +++ b/drivers/gpu/drm/i915/gvt/sched_policy.c
>> @@ -118,28 +118,12 @@ static void try_to_schedule_next_vgpu(struct intel_gvt *gvt)
>>  		wake_up(&scheduler->waitq[i]);
>>  }
>>
>> -#define GVT_DEFAULT_TIME_SLICE 1000000
>> -
>> -static void tbs_sched_func(struct tbs_sched_data *sched_data)
>> +static struct intel_vgpu *get_vgpu_timeslice_left(struct list_head *head,
>> +					struct tbs_sched_data *sched_data)
>>  {
> is it the right name? From the code comment, the remaining logic is
> about finding a vGPU with pending workload...

The name reflects the final purpose, but at this point it only finds a
vGPU with pending workload. I will change it to a more accurate name in
the next version.
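
For context, below is a minimal sketch of the search loop the quoted
hunk elides. It assumes the vgpu_has_pending_workload() helper and the
tbs_vgpu_data list layout used elsewhere in sched_policy.c, so treat it
as illustrative rather than the literal patch content:

	/* Walk the round-robin queue starting just after the sched head
	 * and stop at the first vGPU that has work queued.
	 */
	list_for_each(pos, head) {
		if (pos == &sched_data->runq_head)
			continue;	/* skip the list anchor itself */

		vgpu_data = container_of(pos, struct tbs_vgpu_data, list);
		if (!vgpu_has_pending_workload(vgpu_data->vgpu))
			continue;	/* nothing pending, try the next */

		vgpu = vgpu_data->vgpu;	/* first runnable vGPU wins */
		break;
	}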

>>  	struct tbs_vgpu_data *vgpu_data;
>> -
>> -	struct intel_gvt *gvt = sched_data->gvt;
>> -	struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;
>> -
>>  	struct intel_vgpu *vgpu = NULL;
>> -	struct list_head *pos, *head;
>> -
>> -	/* no vgpu or has already had a target */
>> -	if (gvt->num_vgpu_sched <= 1 || scheduler->next_vgpu)
>> -		goto out;
>> -
>> -	if (scheduler->current_vgpu) {
>> -		vgpu_data = scheduler->current_vgpu->sched_data;
>> -		head = &vgpu_data->list;
>> -	} else {
>> -		head = &sched_data->runq_head;
>> -	}
>> +	struct list_head *pos;
>>
>>  	/* search a vgpu with pending workload */
>>  	list_for_each(pos, head) {
>> @@ -154,6 +138,57 @@ static void tbs_sched_func(struct tbs_sched_data *sched_data)
>>  		break;
>>  	}
>>
>> +	return vgpu;
>> +}
>> +
>> +static struct list_head *get_sched_head(struct tbs_sched_data *sched_data)
>> +{
>> +	struct intel_gvt *gvt = sched_data->gvt;
>> +	struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;
>> +	struct tbs_vgpu_data *cur_vgpu_data;
>> +	struct list_head *head;
>> +
>> +	if (scheduler->current_vgpu) {
>> +		cur_vgpu_data = scheduler->current_vgpu->sched_data;
>> +		head = &cur_vgpu_data->list;
>> +	} else {
>> +		gvt_dbg_sched("no current vgpu search from q head\n");
>> +		head = &sched_data->runq_head;
>> +	}
>> +
>> +	return head;
>> +}
> We need some background somewhere, e.g. how many sched queues we have
> now and what their purpose is; otherwise the purpose here is not quite
> clear.

There are only two queues: one is the normal round-robin vGPU queue, the
other is the LRU vGPU queue. The round-robin queue maintains the default
round-robin scheduling behavior; the LRU queue records, in order, which
vGPUs were serviced recently, so the vGPU most at risk of TDR can be
found quickly.
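
For illustration, here is a minimal sketch of how both queues could hang
off the scheduler data; the lru_runq_head and lru_list field names are
assumptions made for this sketch, not necessarily what the series uses:

	struct tbs_sched_data {
		struct intel_gvt *gvt;
		struct hrtimer timer;
		unsigned long period;
		struct list_head runq_head;	/* round-robin queue */
		struct list_head lru_runq_head;	/* least recently serviced first (assumed) */
	};

	struct tbs_vgpu_data {
		struct list_head list;		/* node in the round-robin queue */
		struct list_head lru_list;	/* node in the LRU queue (assumed) */
		struct intel_vgpu *vgpu;
		/* timeslice accounting fields, etc. */
	};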

get_sched_head() happens before the timeslice check on the round-robin
queue; it decides which vGPU comes first in the round-robin queue at
this schedule check point. It has two purposes (see the sketch after
this list):
1. Normally next_vgpu should be the one after current_vgpu in the
round-robin queue, but if current_vgpu was set to NULL because of a vGPU
reset/destroy, we need to find a proper vGPU in the round-robin queue to
start from, to avoid skipping the schedule chance of some vGPU in the
queue.
2. It also adapts to a future priority implementation: a higher-priority
vGPU can be chosen in this function before the timeslice check.
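
As a rough illustration of the urgency handling described above,
get_sched_head() might be extended along these lines; the LRU walk and
the vgpu_is_near_tdr() helper are hypothetical, invented for this
sketch:

	static struct list_head *get_sched_head(struct tbs_sched_data *sched_data)
	{
		struct intel_gvt_workload_scheduler *scheduler =
						&sched_data->gvt->scheduler;
		struct tbs_vgpu_data *vgpu_data;

		/* Urgent case: walk the LRU queue from the least recently
		 * serviced vGPU; if one is close to TDR, return the node
		 * just before it so the round-robin search picks it first.
		 */
		list_for_each_entry(vgpu_data, &sched_data->lru_runq_head,
				    lru_list) {
			if (vgpu_is_near_tdr(vgpu_data->vgpu))
				return vgpu_data->list.prev;
		}

		/* Normal case: continue round-robin after the current vGPU. */
		if (scheduler->current_vgpu) {
			vgpu_data = scheduler->current_vgpu->sched_data;
			return &vgpu_data->list;
		}
		return &sched_data->runq_head;
	}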

>> +
>> +static struct intel_vgpu *pickup_next_vgpu(struct tbs_sched_data *sched_data)
>> +{
>> +	struct intel_vgpu *next_vgpu = NULL;
>> +	struct list_head *head = NULL;
>> +
>> +	/* The scheduler follows round-robin style; the sched
>> +	 * head is where we start choosing the next vGPU, which
>> +	 * is important to keep fairness. */
>> +	head = get_sched_head(sched_data);
>> +
>> +	/* Choose the vGPU which has timeslice left */
>> +	next_vgpu = get_vgpu_timeslice_left(head, sched_data);
>> +
>> +	return next_vgpu;
>> +}
>> +
>> +#define GVT_DEFAULT_TIME_SLICE 1000000
>> +
>> +static void tbs_sched_func(struct tbs_sched_data *sched_data)
>> +{
>> +	struct intel_gvt *gvt = sched_data->gvt;
>> +	struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;
>> +	struct intel_vgpu *vgpu = NULL;
>> +
>> +	/* no vgpu or has already had a target */
>> +	if (gvt->num_vgpu_sched <= 1 || scheduler->next_vgpu)
>> +		goto out;
>> +
>> +	/* determine which vGPU to choose next */
>> +	vgpu = pickup_next_vgpu(sched_data);
>>  	if (vgpu) {
>>  		scheduler->next_vgpu = vgpu;
>>  		gvt_dbg_sched("pick next vgpu %d\n", vgpu->id);
>> --
>> 2.7.4


