[PATCH] drm/i915/gvt: GVT pin/unpin shadow context

Dong, Chuanxiao chuanxiao.dong at intel.com
Thu Mar 16 07:20:49 UTC 2017



> -----Original Message-----
> From: intel-gvt-dev [mailto:intel-gvt-dev-bounces at lists.freedesktop.org] On
> Behalf Of Zhenyu Wang
> Sent: Thursday, March 16, 2017 3:14 PM
> To: Dong, Chuanxiao <chuanxiao.dong at intel.com>
> Cc: intel-gvt-dev at lists.freedesktop.org
> Subject: Re: [PATCH] drm/i915/gvt: GVT pin/unpin shadow context
> 
> On 2017.03.16 07:03:07 +0000, Dong, Chuanxiao wrote:
> > >
> > > On 2017.03.16 09:47:58 +0800, Chuanxiao Dong wrote:
> > > > When handling a guest request, GVT needs to populate/update the
> > > > shadow_ctx with the guest context, which requires the shadow_ctx to
> > > > be pinned. The current implementation relies on the i915 request
> > > > allocation to pin it, but that cannot guarantee that i915 will not
> > > > unpin the shadow_ctx while GVT is updating the guest context from
> > > > the shadow_ctx. So GVT should pin/unpin the shadow_ctx by itself.
> > > >
> > > > Signed-off-by: Chuanxiao Dong <chuanxiao.dong at intel.com>
> > > > ---
> > >
> > > My previous comment on Pei's old change was that the shadow_ctx might
> > > not be pinned while gvt still accesses its memory, so this could be
> > > fixed by changing the shadow ctx manipulating code. Although this one
> > > is better than the previous version, could we still add the pin only
> > > around the required steps, to ensure it is pinned when it is used?
> >
> > Hi Zhenyu, pinning/unpinning for the entire life of the shadow context
> > within one i915 request may require less code change than pinning/unpinning
> > only when it is used.
> >
> 
> For the record, the current issue is that when handling workload completion,
> the shadow ctx might already be unpinned due to request retire. So this
> change tries to ensure the shadow ctx stays pinned during this process.

Yes, that is true. Thanks, Zhenyu, for the explanation.
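
To keep the pairing in one place, here is a condensed sketch of what the patch
does, based only on the hunks quoted below. The *_sketch function names are
mine, and the ring-buffer population, status handling and some error paths are
elided, so please treat it as an illustration rather than the actual
implementation:

/* Sketch of the gvt pin/unpin pairing around the shadow context.
 * Assumes the engine->context_pin()/context_unpin() callbacks and the
 * struct layouts shown in the quoted hunks.
 */
static int dispatch_workload_sketch(struct intel_vgpu_workload *workload)
{
	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
	struct i915_gem_context *shadow_ctx = workload->vgpu->shadow_ctx;
	struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];
	struct drm_i915_gem_request *rq;
	int ret;

	mutex_lock(&dev_priv->drm.struct_mutex);

	/* Take gvt's own pin so the context stays resident even after the
	 * request retires and drops the pin taken at request allocation.
	 */
	ret = engine->context_pin(engine, shadow_ctx);
	if (ret)
		goto out_unlock;

	rq = i915_gem_request_alloc(engine, shadow_ctx);
	if (IS_ERR(rq)) {
		/* No request will retire for us, so drop our pin right away. */
		engine->context_unpin(engine, shadow_ctx);
		ret = PTR_ERR(rq);
		goto out_unlock;
	}

	/* ... populate the shadow ring buffer, then submit ... */
	i915_add_request_no_flush(rq);

out_unlock:
	mutex_unlock(&dev_priv->drm.struct_mutex);
	return ret;
}

static void complete_current_workload_sketch(struct intel_vgpu_workload *workload)
{
	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
	struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];

	/* ... wait for shadow_ctx_status_wq and write the shadow context
	 * back into the guest context ...
	 */

	/* The write-back is done, so gvt's extra pin can be dropped now. */
	mutex_lock(&dev_priv->drm.struct_mutex);
	engine->context_unpin(engine, workload->vgpu->shadow_ctx);
	mutex_unlock(&dev_priv->drm.struct_mutex);
}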

> 
> > >
> > > >  drivers/gpu/drm/i915/gvt/scheduler.c | 27 +++++++++++++++++++++++++++
> > > >  1 file changed, 27 insertions(+)
> > > >
> > > > diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> > > > index d29e435..cc76f26 100644
> > > > --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> > > > +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> > > > @@ -175,6 +175,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> > > >  	int ring_id = workload->ring_id;
> > > >  	struct i915_gem_context *shadow_ctx = workload->vgpu->shadow_ctx;
> > > >  	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
> > > > +	struct intel_engine_cs *engine = dev_priv->engine[ring_id];
> > > >  	struct drm_i915_gem_request *rq;
> > > >  	struct intel_vgpu *vgpu = workload->vgpu;
> > > >  	int ret;
> > > > @@ -188,6 +189,21 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> > > >
> > > >  	mutex_lock(&dev_priv->drm.struct_mutex);
> > > >
> > > > +	/* pin shadow context by gvt even though the shadow context will be pinned
> > > > +	 * when i915 allocates a request. That is because gvt will update the guest
> > > > +	 * context from the shadow context when the workload is completed, and at
> > > > +	 * that moment i915 may have already unpinned the shadow context, making
> > > > +	 * the shadow_ctx pages invalid. So gvt needs to pin it itself. After updating
> > > > +	 * the guest context, gvt can unpin the shadow_ctx safely.
> > > > +	 */
> > > > +	ret = engine->context_pin(engine, shadow_ctx);
> > > > +	if (ret) {
> > > > +		gvt_vgpu_err("fail to pin shadow context\n");
> > > > +		workload->status = ret;
> > > > +		mutex_unlock(&dev_priv->drm.struct_mutex);
> > > > +		return ret;
> > > > +	}
> > > > +
> > > >  	rq = i915_gem_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
> > > >  	if (IS_ERR(rq)) {
> > > >  		gvt_vgpu_err("fail to allocate gem request\n");
> > > > @@ -228,6 +244,9 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
> > > >
> > > >  	if (!IS_ERR_OR_NULL(rq))
> > > >  		i915_add_request_no_flush(rq);
> > > > +	else
> > > > +		engine->context_unpin(engine, shadow_ctx);
> > > > +
> > > >  	mutex_unlock(&dev_priv->drm.struct_mutex);
> > > >  	return ret;
> > > >  }
> > > > @@ -377,6 +396,10 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
> > > >  	 * For the workload w/o request, directly complete the workload.
> > > >  	 */
> > > >  	if (workload->req) {
> > > > +		struct drm_i915_private *dev_priv =
> > > > +			workload->vgpu->gvt->dev_priv;
> > > > +		struct intel_engine_cs *engine =
> > > > +			dev_priv->engine[workload->ring_id];
> > > >  		wait_event(workload->shadow_ctx_status_wq,
> > > >  			   !atomic_read(&workload->shadow_ctx_active));
> > > >
> > > > @@ -389,6 +412,10 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
> > > >  					 INTEL_GVT_EVENT_MAX)
> > > >  				intel_vgpu_trigger_virtual_event(vgpu, event);
> > > >  		}
> > > > +		mutex_lock(&dev_priv->drm.struct_mutex);
> > > > +		/* unpin shadow ctx as the shadow_ctx update is done */
> > > > +		engine->context_unpin(engine, workload->vgpu->shadow_ctx);
> > > > +		mutex_unlock(&dev_priv->drm.struct_mutex);
> > > >  	}
> > > >
> > > >  	gvt_dbg_sched("ring id %d complete workload %p status %d\n",
> > > > --
> > > > 2.7.4
> > > >
> > >
> 
> --
> Open Source Technology Center, Intel ltd.
> 
> $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

