[Intel-gfx] [PATCH v2 2/2] drm/i915/scheduler: add gvt notification for guc

Dong, Chuanxiao chuanxiao.dong at intel.com
Wed Apr 12 09:11:46 UTC 2017



> -----Original Message-----
> From: intel-gvt-dev [mailto:intel-gvt-dev-bounces at lists.freedesktop.org] On
> Behalf Of Chris Wilson
> Sent: Wednesday, April 12, 2017 4:22 PM
> To: Dong, Chuanxiao <chuanxiao.dong at intel.com>
> Cc: Tian, Kevin <kevin.tian at intel.com>; intel-gvt-dev at lists.freedesktop.org;
> intel-gfx at lists.freedesktop.org; joonas.lahtinen at linux.intel.com; Zheng,
> Xiao <xiao.zheng at intel.com>; Wang, Zhi A <zhi.a.wang at intel.com>
> Subject: Re: [PATCH v2 2/2] drm/i915/scheduler: add gvt notification for guc
> 
> On Mon, Apr 10, 2017 at 02:40:24AM +0000, Dong, Chuanxiao wrote:
> > > -----Original Message-----
> > > From: intel-gvt-dev
> > > [mailto:intel-gvt-dev-bounces at lists.freedesktop.org] On Behalf Of
> > > Dong, Chuanxiao
> > > Sent: Thursday, April 6, 2017 11:19 PM
> > > To: Chris Wilson <chris at chris-wilson.co.uk>
> > > Cc: Tian, Kevin <kevin.tian at intel.com>;
> > > intel-gvt-dev at lists.freedesktop.org;
> > > intel-gfx at lists.freedesktop.org; joonas.lahtinen at linux.intel.com;
> > > Zheng, Xiao <xiao.zheng at intel.com>; Wang, Zhi A
> > > <zhi.a.wang at intel.com>
> > > Subject: RE: [PATCH v2 2/2] drm/i915/scheduler: add gvt notification
> > > for guc
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Chris Wilson [mailto:chris at chris-wilson.co.uk]
> > > > Sent: Thursday, April 6, 2017 11:07 PM
> > > > To: Dong, Chuanxiao
> > > > Cc: intel-gfx at lists.freedesktop.org;
> > > > intel-gvt-dev at lists.freedesktop.org;
> > > > Zheng, Xiao; Tian, Kevin; joonas.lahtinen at linux.intel.com; Wang,
> > > > Zhi A
> > > > Subject: Re: [PATCH v2 2/2] drm/i915/scheduler: add gvt
> > > > notification for guc
> > > >
> > > > On Thu, Apr 06, 2017 at 02:49:54PM +0000, Dong, Chuanxiao wrote:
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Chris Wilson [mailto:chris at chris-wilson.co.uk]
> > > > > > Sent: Thursday, April 6, 2017 10:19 PM
> > > > > > To: Dong, Chuanxiao
> > > > > > Cc: intel-gfx at lists.freedesktop.org;
> > > > > > intel-gvt-dev at lists.freedesktop.org;
> > > > > > Zheng, Xiao; Tian, Kevin; joonas.lahtinen at linux.intel.com
> > > > > > Subject: Re: [PATCH v2 2/2] drm/i915/scheduler: add gvt
> > > > > > notification for guc
> > > > > >
> > > > > > On Thu, Apr 06, 2017 at 02:05:15PM +0000, Dong, Chuanxiao wrote:
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Chris Wilson [mailto:chris at chris-wilson.co.uk]
> > > > > > > > Sent: Thursday, April 6, 2017 9:32 PM
> > > > > > > > To: Dong, Chuanxiao
> > > > > > > > Cc: intel-gfx at lists.freedesktop.org;
> > > > > > > > intel-gvt-dev at lists.freedesktop.org;
> > > > > > > > Zheng, Xiao; Tian, Kevin; joonas.lahtinen at linux.intel.com
> > > > > > > > Subject: Re: [PATCH v2 2/2] drm/i915/scheduler: add gvt
> > > > > > > > notification for guc
> > > > > > > >
> > > > > > > > On Tue, Mar 28, 2017 at 05:38:41PM +0800, Chuanxiao Dong wrote:
> > > > > > > > > A GVT request needs a manual mmio load/restore. Before GuC
> > > > > > > > > submits a request, send a notification to gvt for mmio loading,
> > > > > > > > > and after GuC finishes this GVT request, notify gvt again for
> > > > > > > > > mmio restore. This follows the usage with execlists submission.
> > > > > > > > >
> > > > > > > > > Cc: xiao.zheng at intel.com
> > > > > > > > > Cc: kevin.tian at intel.com
> > > > > > > > > Cc: joonas.lahtinen at linux.intel.com
> > > > > > > > > Cc: chris at chris-wilson.co.uk
> > > > > > > > > Signed-off-by: Chuanxiao Dong <chuanxiao.dong at intel.com>
> > > > > > > > > ---
> > > > > > > > >  drivers/gpu/drm/i915/i915_guc_submission.c |  4 ++++
> > > > > > > > >  drivers/gpu/drm/i915/intel_gvt.h           | 12 ++++++++++++
> > > > > > > > >  drivers/gpu/drm/i915/intel_lrc.c           | 21 +++------------------
> > > > > > > > >  3 files changed, 19 insertions(+), 18 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> > > > > > > > > index 58087630..d8a5942 100644
> > > > > > > > > --- a/drivers/gpu/drm/i915/i915_guc_submission.c
> > > > > > > > > +++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> > > > > > > > > @@ -606,6 +606,8 @@ static void __i915_guc_submit(struct drm_i915_gem_request *rq)
> > > > > > > > >  	unsigned long flags;
> > > > > > > > >  	int b_ret;
> > > > > > > > >
> > > > > > > > > +	intel_gvt_notify_context_status(rq, INTEL_CONTEXT_SCHEDULE_IN);
> > > > > > > > > +
> > > > > > > > >  	/* WA to flush out the pending GMADR writes to ring buffer. */
> > > > > > > > >  	if (i915_vma_is_map_and_fenceable(rq->ring->vma))
> > > > > > > > >  		POSTING_READ_FW(GUC_STATUS);
> > > > > > > > > @@ -725,6 +727,8 @@ static void i915_guc_irq_handler(unsigned long data)
> > > > > > > > >  		rq = port[0].request;
> > > > > > > > >  		while (rq && i915_gem_request_completed(rq)) {
> > > > > > > > >  			trace_i915_gem_request_out(rq);
> > > > > > > > > +			intel_gvt_notify_context_status(rq,
> > > > > > > > > +					INTEL_CONTEXT_SCHEDULE_OUT);
> > > > > > > >
> > > > > > > > This is incorrect though. This is no better than just
> > > > > > > > waiting for the request, which is not enough since the
> > > > > > > > idea is that you need to wait for the context image to be
> > > > > > > > completely written to memory before you read it.
> > > > > > > > -Chris
> > > > > > >
> > > > > > > The wait for the context image to be completely written will
> > > > > > > be done in the notification from the GVT, by checking the CSB.
> > > > > > > Putting the wait here would make every i915 request wait, which
> > > > > > > seems unnecessary.
> > > > > >
> > > > > > Urm, no. I hope you mean the wait will be on some other thread
> > > > > > than inside this interrupt handler.
> > > > >
> > > > > The SCHEDULE_OUT is meant to stop the GuC from submitting another
> > > > > request before the current one is completed by GVT, so that GVT can
> > > > > manually restore the MMIO. So this irq handler should wait until
> > > > > SCHEDULE_OUT is completed. How is it possible to make this irq
> > > > > handler wait for another thread? In the current software
> > > > > architecture there is no other thread....
> > > >
> > > > No. It is not acceptable to have any blocking here. Rather you
> > > > delegate the polling of CSB to a thread/worker that you kick off
> > > > from this notify.
> > > > -Chris
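
To make sure I follow the non-blocking shape being asked for here: the SCHEDULE_OUT notification would only kick a worker, and that worker does the CSB polling. A minimal sketch, where gvt_csb_poll_work(), gvt->wq, gvt->csb_work, gvt->schedule_out_done and gvt_context_save_done() are made-up names for illustration, not code from this patch:

/* Sketch only: kick a worker from the SCHEDULE_OUT notification instead of
 * waiting in the GuC irq handler. All gvt members and helpers used here are
 * hypothetical.
 */
#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/delay.h>

static void gvt_csb_poll_work(struct work_struct *work)
{
	struct intel_gvt *gvt = container_of(work, struct intel_gvt, csb_work);

	/* Poll the CSB until the context save is complete (may sleep),
	 * then let the GVT side go ahead with the guest MMIO restore.
	 */
	while (!gvt_context_save_done(gvt))
		usleep_range(10, 20);
	complete(&gvt->schedule_out_done);
}

/* Called from the SCHEDULE_OUT notification: queue only, never block. */
static void gvt_notify_schedule_out(struct intel_gvt *gvt)
{
	queue_work(gvt->wq, &gvt->csb_work);
}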
> > >
> > > The major issue is that we should wait for the context image to be
> > > completely written to memory before GVT reads it. I will double check
> > > whether we really read from the context image in this SCHEDULE_OUT
> > > event and get back to you later.
> >
> > Hi Chris,
> >
> > Had a discussion with Zhi: the context image is actually accessed from
> > the workload_thread to update the guest context, not directly from the
> > SCHEDULE_OUT event. So my previous comment was wrong, and the CSB
> > waiting should be done in the workload_thread instead of this IRQ handler.
> 
> That's better; I'm still a little worried about having to remember to
> review these hooks later.
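
To spell out the split we seem to have agreed on, a rough sketch of the two sides (the completion, gvt_on_schedule_out(), gvt_complete_workload() and the guest-update call are placeholders for illustration, not the actual gvt/scheduler.c code):

/* Sketch: the GuC irq handler only signals on SCHEDULE_OUT; the GVT
 * workload_thread does the actual wait before it touches the saved
 * context image. All names below are placeholders.
 */
#include <linux/completion.h>

struct intel_vgpu_workload;

static DECLARE_COMPLETION(ctx_switch_done);

/* SCHEDULE_OUT path (irq context): signal only, never wait. */
static void gvt_on_schedule_out(void)
{
	complete(&ctx_switch_done);
}

/* workload_thread path (process context): safe to sleep here. */
static void gvt_complete_workload(struct intel_vgpu_workload *workload)
{
	/* Wait until the HW context image has been fully written back
	 * before reading it to update the guest context / restore MMIO.
	 */
	wait_for_completion(&ctx_switch_done);
	gvt_update_guest_context(workload);	/* placeholder for the guest update */
}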
> 
> The other concern I expressed at the start of this was:
Sorry, I missed this one.

> 
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 9e327049b735..5a86abd715df 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -430,16 +430,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>                         if (port != engine->execlist_port)
>                                 break;
> 
> -                       /* If GVT overrides us we only ever submit port[0],
> -                        * leaving port[1] empty. Note that we also have
> -                        * to be careful that we don't queue the same
> -                        * context (even though a different request) to
> -                        * the second port.
> -                        */
> -                       if (ctx_single_port_submission(last->ctx) ||
> -                           ctx_single_port_submission(cursor->ctx))
> -                               break;
> -
> 
> 
> Afaict that requirement is completely bogus (since your workloads only go
> active on gvt requests and there will only ever be one scheduled in at any
> time) and it is burdensome on non-gvt.
> -Chris

My understanding is that this check prevents both port[0] and port[1] from being submitted when either last or cursor is a GVT request. Without the check:
1) if last is a GVT request in port[0] and cursor is an i915 request, that i915 request will be submitted to port[1];
2) if last is an i915 request in port[0] and cursor is a GVT request, that GVT request will be submitted to port[1];
3) if last is a GVT request from vgpu1 in port[0] and cursor is a GVT request from vgpu2, the request from vgpu2 will be submitted to port[1].
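
For reference, my recollection of the helper this check keys off (current intel_lrc.c, not part of this patch, quoted from memory so the exact form may differ):

/* A context counts as a "GVT request" context here once GVT marks it for
 * forced single-port submission; the removed check above then stops the
 * dequeue loop before anything is coalesced into port[1].
 */
static bool ctx_single_port_submission(const struct i915_gem_context *ctx)
{
	return (IS_ENABLED(CONFIG_DRM_I915_GVT) &&
		i915_gem_context_force_single_submission(ctx));
}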

Is this OK without the check, or is my understanding wrong? Please help me understand this. :)

Thanks
Chuanxiao

> 
> --
> Chris Wilson, Intel Open Source Technology Centre
> _______________________________________________
> intel-gvt-dev mailing list
> intel-gvt-dev at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev

