[PATCH 2/3 - small lock v2] drm/i915/gvt: use per vgpu lock

Zhang, Pei pei.zhang at intel.com
Thu Feb 1 01:11:10 UTC 2018


> -----Original Message-----
> From: Zhenyu Wang [mailto:zhenyuw at linux.intel.com]
> Sent: Wednesday, January 31, 2018 10:28 AM
> To: Zhang, Pei <pei.zhang at intel.com>
> Cc: intel-gvt-dev at lists.freedesktop.org; Wang, Zhi A
> <zhi.a.wang at intel.com>
> Subject: Re: [PATCH 2/3 - small lock v2] drm/i915/gvt: use per vgpu lock
> 
> On 2018.01.30 10:43:31 +0000, Zhang, Pei wrote:
> > > > diff --git a/drivers/gpu/drm/i915/gvt/display.c
> > > > b/drivers/gpu/drm/i915/gvt/display.c
> > > > index dd96ffc..1ff88c1 100644
> > > > --- a/drivers/gpu/drm/i915/gvt/display.c
> > > > +++ b/drivers/gpu/drm/i915/gvt/display.c
> > > > @@ -327,26 +327,28 @@ void intel_gvt_check_vblank_emulation(struct
> > > intel_gvt *gvt)
> > > >  	struct intel_gvt_irq *irq = &gvt->irq;
> > > >  	struct intel_vgpu *vgpu;
> > > >  	int pipe, id;
> > > > +	int found = false;
> > > >
> > > > -	if (WARN_ON(!mutex_is_locked(&gvt->lock)))
> > > > -		return;
> > > > -
> > > > +	mutex_lock(&gvt->lock);
> > > >  	for_each_active_vgpu(gvt, vgpu, id) {
> > > >  		for (pipe = 0; pipe < I915_MAX_PIPES; pipe++) {
> > > > -			if (pipe_is_enabled(vgpu, pipe))
> > > > -				goto out;
> > > > +			if (pipe_is_enabled(vgpu, pipe)) {
> > > > +				found = true;
> > > > +				break;
> > > > +			}
> > >
> > > The problem is you would walk all vgpus and check other vgpus' state
> > > without taking vgpu_lock, which could possibly be inconsistent, as
> > > the mmio path lock below would be per-vgpu only.
> > >
> > > So for the pipeconf mmio handler lock change below: for pipe enable,
> > > just start the vbl timer. For pipe deactivate, we might send a request
> > > for the service thread to check whether any active vbl timer is required instead.
> >
> > [Pei] The GVT lock is used to protect the vGPU list and to make sure
> > that when a vGPU instance is used, it is still alive and the vGPU
> > structure pointer is valid to dereference. So here the GVT global lock
> > is held. At the same time, I think vgpu_lock is not required in this
> > code segment, because pipe_is_enabled(vgpu..) only reads the vGPU's
> > state and no write happens. What do you think?
> >
> 
> yeah, reading pipe status should be ok. My previous thought was to protect
> against another vgpu's mmio access, but it seems even with those changes,
> we can still handle that later with another vbl check call.
[Pei] Yes, that's it. The vbl check is called whenever any pipe of any vGPU is enabled; this logic is the same for both the big and small locks. The difference is that the current implementation allows this function to be called multiple times concurrently (the callers come from the MMIO trap path and take only the per-vGPU lock). Holding the global GVT lock inside it makes sure the calls are serialized.

> 
> > But your method is also very good; I would be happy to make the
> > modification if you still have concerns.
> >
> 
> I also thought of changing the vbl timer to a per-vgpu mode, but we need
> to evaluate how much benefit that can bring, and that would also make
> this change easier.
> 
> --
> Open Source Technology Center, Intel ltd.
> 
> $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827
