[PATCH] drm/i915/gvt: Add shadow context descriptor updating

Zhang, Tina tina.zhang at intel.com
Thu Jul 27 02:28:49 UTC 2017



> -----Original Message-----
> From: Zhenyu Wang [mailto:zhenyuw at linux.intel.com]
> Sent: Thursday, July 27, 2017 9:41 AM
> To: Zhang, Tina <tina.zhang at intel.com>
> Cc: intel-gvt-dev at lists.freedesktop.org; Lv, Zhiyuan <zhiyuan.lv at intel.com>;
> Wang, Zhi A <zhi.a.wang at intel.com>; Lu, Kechen <kechen.lu at intel.com>
> Subject: Re: [PATCH] drm/i915/gvt: Add shadow context descriptor updating
> 
> On 2017.07.26 18:09:18 +0800, Tina Zhang wrote:
> > From: Kechen Lu <kechen.lu at intel.com>
> >
> > The current context logic only updates the descriptor of a context
> > when the context is being pinned to graphics memory space. But this
> > cannot satisfy the requirement of shadow contexts: the addressing
> > mode of a pinned shadow context descriptor may need to change
> > according to the guest addressing mode, and the already-pinned
> > shadow context gets no chance to update its descriptor. This leads
> > to a GPU hang, as the shadow context is used with a wrong
> > descriptor. This patch fixes the issue by letting the pinned shadow
> > context descriptor update its addressing mode on demand.
> >
> > This patch fixes a GPU HANG issue which happens after changing the
> > grub parameter i915.enable_ppgtt from 0x01 to 0x03 or vice versa
> > and then rebooting the guest.
> >
> > Signed-off-by: Tina Zhang <tina.zhang at intel.com>
> > Signed-off-by: Kechen Lu <kechen.lu at intel.com>
> >
> > diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c
> > b/drivers/gpu/drm/i915/gvt/scheduler.c
> > index ca1926d..4fda2f7 100644
> > --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> > +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> > @@ -184,6 +184,23 @@ static int shadow_context_status_change(struct notifier_block *nb,
> >  	return NOTIFY_OK;
> >  }
> >
> > +static void shadow_context_descriptor_update(struct i915_gem_context *ctx,
> > +		struct intel_engine_cs *engine)
> > +{
> > +	struct intel_context *ce = &ctx->engine[engine->id];
> > +	u64 desc = 0;
> > +
> > +	desc = ce->lrc_desc;
> > +
> > +	/* Update bits 0-11 of the context descriptor which includes flags
> > +	 * like GEN8_CTX_* cached in desc_template
> > +	 */
> > +	desc &= U64_MAX << 12;
> > +	desc |= ctx->desc_template & ((1ULL << 12) - 1);
> > +
> > +	ce->lrc_desc = desc;
> > +}
> > +
> >  /**
> >   * intel_gvt_scan_and_shadow_workload - audit the workload by scanning and
> >   * shadow it as well, include ringbuffer,wa_ctx and ctx.
> > @@ -210,6 +227,8 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
> >  	shadow_ctx->desc_template |= workload->ctx_desc.addressing_mode <<
> >  				    GEN8_CTX_ADDRESSING_MODE_SHIFT;
> >
> > +	shadow_context_descriptor_update(shadow_ctx, dev_priv->engine[ring_id]);
> > +
> >  	rq = i915_gem_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
> >  	if (IS_ERR(rq)) {
> >  		gvt_vgpu_err("fail to allocate gem request\n");
> 
> Initially I thought we might need to re-create the shadow context for vGPU
> reset, but that has other side effects, e.g. a new context hw id, which might
> not be welcomed by e.g. a profiling tool. But this is mostly a waste when the
> context is not pinned, or once it has been adjusted after reset. Could we
> have some flag for this, set after vGPU reset and checked for any new
> workload? That should be a bitmap flag on each engine for the descriptor
> update.
Would a guest mix 64-bit and 32-bit context workloads? If that is the case, we
may need to check each guest workload's descriptor; the addressing mode sits in
the descriptor's low flag bits, as the illustration below shows.
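
Just to make the bit layout concrete, here is a standalone illustration of
the splice done in shadow_context_descriptor_update(). The shift mirrors
GEN8_CTX_ADDRESSING_MODE_SHIFT from the patch; the mode encodings and the
descriptor contents are made-up placeholders, not values from the driver:

#include <stdint.h>
#include <stdio.h>

#define CTX_ADDRESSING_MODE_SHIFT  3  /* stand-in for GEN8_CTX_ADDRESSING_MODE_SHIFT */
#define MODE_32B                   1  /* placeholder encoding, see intel_lrc.h */
#define MODE_64B                   3  /* placeholder encoding, see intel_lrc.h */

/* Same splice as shadow_context_descriptor_update(): keep bits 12+
 * (LRCA, context id), replace flag bits 0-11 from the template. */
static uint64_t update_low_bits(uint64_t desc, uint64_t tmpl)
{
	desc &= UINT64_MAX << 12;
	desc |= tmpl & ((1ULL << 12) - 1);
	return desc;
}

int main(void)
{
	/* made-up pinned descriptor still carrying the stale 32-bit mode */
	uint64_t desc = (0xfeedULL << 12) |
			((uint64_t)MODE_32B << CTX_ADDRESSING_MODE_SHIFT);
	/* guest rebooted with i915.enable_ppgtt=0x03, now in 64-bit mode */
	uint64_t tmpl = (uint64_t)MODE_64B << CTX_ADDRESSING_MODE_SHIFT;

	printf("before: 0x%llx\n", (unsigned long long)desc);
	printf("after:  0x%llx\n",
	       (unsigned long long)update_low_bits(desc, tmpl));
	return 0;
}

Without the patch, the pinned descriptor keeps the stale mode bits and the
hardware interprets the shadow context with the wrong addressing mode.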
If that isn't the case, we can add some flag to do the update only when needed
(e.g. after device model reset), roughly along the lines of the sketch below.
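
Every name in this sketch is hypothetical, just to make the per-engine bitmap
proposal concrete; in the driver proper this would presumably use
set_bit()/test_and_clear_bit() on the real vGPU structure:

#include <stdbool.h>
#include <stdio.h>

#define NUM_ENGINES 5  /* placeholder engine count */

struct vgpu_sketch {
	unsigned long desc_update_pending;  /* one bit per engine */
};

/* vGPU reset path: mark every engine's shadow descriptor stale */
static void vgpu_mark_desc_stale(struct vgpu_sketch *vgpu)
{
	vgpu->desc_update_pending = (1UL << NUM_ENGINES) - 1;
}

/* scan-and-shadow path: true only for the first workload per engine
 * after a reset, so the descriptor splice runs once, not every time */
static bool vgpu_desc_needs_update(struct vgpu_sketch *vgpu, int ring_id)
{
	if (!(vgpu->desc_update_pending & (1UL << ring_id)))
		return false;
	vgpu->desc_update_pending &= ~(1UL << ring_id);
	return true;
}

int main(void)
{
	struct vgpu_sketch vgpu = { 0 };

	vgpu_mark_desc_stale(&vgpu);
	printf("first workload on ring 0 updates: %d\n",
	       vgpu_desc_needs_update(&vgpu, 0));
	printf("second workload on ring 0 updates: %d\n",
	       vgpu_desc_needs_update(&vgpu, 0));
	return 0;
}

shadow_context_descriptor_update() would then run only when
vgpu_desc_needs_update() returns true, instead of for every workload.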
Thanks.

Tina

> 
> --
> Open Source Technology Center, Intel ltd.
> 
> $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

