[PATCH v3 2/2] drm/i915/gvt: Trigger scheduling after context complete
Gao, Ping A
ping.a.gao at intel.com
Wed May 24 12:45:42 UTC 2017
Mis-sent; this is the same as v2.
On 2017/5/24 20:29, Gao, Ping A wrote:
> The time-based scheduler polls the context busy status every
> micro-second during a vGPU switch, which leaves the GPU idle for a
> while when a context is very small and completes before the next
> micro-second arrives. Triggering scheduling immediately after a
> context completes eliminates this GPU idle time and improves
> performance.
>
> Create two vGPUs of the same type and run Heaven simultaneously:
> Before this patch:
> +---------+----------+----------+
> |         | vGPU1    | vGPU2    |
> +---------+----------+----------+
> | Heaven  | 357      | 354      |
> +---------+----------+----------+
>
> After this patch:
> +---------+----------+----------+
> |         | vGPU1    | vGPU2    |
> +---------+----------+----------+
> | Heaven  | 397      | 398      |
> +---------+----------+----------+
>
> v2: Protect need_reschedule with the gvt lock.
>
> Signed-off-by: Ping Gao <ping.a.gao at intel.com>
> Signed-off-by: Weinan Li <weinan.z.li at intel.com>
> ---
> drivers/gpu/drm/i915/gvt/scheduler.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> index 6ae286c..e63b1d8 100644
> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> @@ -431,6 +431,10 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
>
> atomic_dec(&vgpu->running_workload_num);
> wake_up(&scheduler->workload_complete_wq);
> +
> + if (gvt->scheduler.need_reschedule)
> + intel_gvt_request_service(gvt, INTEL_GVT_REQUEST_EVENT_SCHED);
> +
> mutex_unlock(&gvt->lock);
> }
>