[PATCH] drm/i915/gvt: free workload in vgpu release

Zhang, Xiong Y xiong.y.zhang at intel.com
Mon Aug 6 05:49:29 UTC 2018


> On 08/02/2018 09:37 AM, Zhang, Xiong Y wrote:
> >> From: Hang Yuan <hang.yuan at linux.intel.com>
> >>
> >> Some workloads may be prepared in the vgpu's queue but not yet
> >> scheduled to run. If the vgpu is released at this time, they will not
> >> be freed in the workload complete callback and so need to be freed in
> >> the vgpu release operation.
> >>
> >> Signed-off-by: Hang Yuan <hang.yuan at linux.intel.com>
> >> ---
> >>   drivers/gpu/drm/i915/gvt/scheduler.c | 7 ++++---
> >>   drivers/gpu/drm/i915/gvt/scheduler.h | 3 +++
> >>   drivers/gpu/drm/i915/gvt/vgpu.c      | 1 +
> >>   3 files changed, 8 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> >> index 1ead1cc..9e845f1 100644
> >> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> >> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> >> @@ -784,7 +784,8 @@ static void update_guest_context(struct intel_vgpu_workload *workload)
> >>   	kunmap(page);
> >>   }
> >>
> >> -static void clean_workloads(struct intel_vgpu *vgpu, unsigned long engine_mask)
> >> +void intel_vgpu_clean_workloads(struct intel_vgpu *vgpu,
> >> +				unsigned long engine_mask)
> >>   {
> >>   	struct intel_vgpu_submission *s = &vgpu->submission;
> >>   	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
> >> @@ -879,7 +880,7 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
> >>   		 * cleaned up during the resetting process later, so doing
> >>   		 * the workload clean up here doesn't have any impact.
> >>   		 **/
> >> -		clean_workloads(vgpu, ENGINE_MASK(ring_id));
> >> +		intel_vgpu_clean_workloads(vgpu, ENGINE_MASK(ring_id));
> >>   	}
> >>
> >>   	workload->complete(workload);
> >> @@ -1081,7 +1082,7 @@ void intel_vgpu_reset_submission(struct intel_vgpu *vgpu,
> >>   	if (!s->active)
> >>   		return;
> >>
> >> -	clean_workloads(vgpu, engine_mask);
> >> +	intel_vgpu_clean_workloads(vgpu, engine_mask);
> >>   	s->ops->reset(vgpu, engine_mask);
> >>   }
> >>
> >> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.h b/drivers/gpu/drm/i915/gvt/scheduler.h
> >> index 21eddab..ca5529d 100644
> >> --- a/drivers/gpu/drm/i915/gvt/scheduler.h
> >> +++ b/drivers/gpu/drm/i915/gvt/scheduler.h
> >> @@ -158,4 +158,7 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
> >>
> >>   void intel_vgpu_destroy_workload(struct intel_vgpu_workload *workload);
> >>
> >> +void intel_vgpu_clean_workloads(struct intel_vgpu *vgpu,
> >> +				unsigned long engine_mask);
> >> +
> >>   #endif
> >> diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
> >> index 0bc1f1e..8256e54 100644
> >> --- a/drivers/gpu/drm/i915/gvt/vgpu.c
> >> +++ b/drivers/gpu/drm/i915/gvt/vgpu.c
> >> @@ -238,6 +238,7 @@ void intel_gvt_deactivate_vgpu(struct intel_vgpu *vgpu)
> >>   	}
> >>
> >>   	intel_vgpu_stop_schedule(vgpu);
> >> +	intel_vgpu_clean_workloads(vgpu, ALL_ENGINES);
> >>   	intel_vgpu_dmabuf_cleanup(vgpu);
> > [Zhang, Xiong Y] deactivate_vgpu just stops the vgpu; we can't clean up
> > resources here. That would break live migration, which stops the vgpu
> > first and then migrates the vgpu state.
> > But clean_workloads is only called from reset_vgpu(); vgpu_release() and
> > vgpu_destroy() don't do this. We should add it through another path.
> So it sounds like a new gvt op is needed to support vgpu_release, which
> would stop the vgpu and release its resources, while the deactivate_vgpu
> op keeps only stopping the vgpu for live migration needs. How about this
> solution? - Henry
[Zhang, Xiong Y] Yes, a new gvt op is needed to clean up the vgpu's running state; the vgpu's static state should be cleaned up in vgpu_destroy.
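
For illustration, a minimal sketch of what such a release op might look
like, reusing only helpers visible in this patch (intel_vgpu_stop_schedule,
intel_vgpu_clean_workloads, intel_vgpu_dmabuf_cleanup); the op name
intel_gvt_release_vgpu and the exact ordering here are assumptions, not
part of the posted diff:

	/* Hypothetical sketch, not part of the posted patch: a separate
	 * "release" gvt op that stops the vgpu and frees its runtime
	 * state, leaving deactivate as a pure stop for live migration.
	 */
	static void intel_gvt_release_vgpu(struct intel_vgpu *vgpu)
	{
		mutex_lock(&vgpu->vgpu_lock);

		/* Stop scheduling first so no new workloads are queued. */
		intel_vgpu_stop_schedule(vgpu);

		/* Drain workloads that were queued but never scheduled;
		 * they would otherwise never reach the complete() callback
		 * that normally frees them.
		 */
		intel_vgpu_clean_workloads(vgpu, ALL_ENGINES);

		intel_vgpu_dmabuf_cleanup(vgpu);

		mutex_unlock(&vgpu->vgpu_lock);
	}

The vgpu's static state would stay untouched here and be freed later in
vgpu_destroy, as discussed above.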

thanks
> 
> >>
> >>   	mutex_unlock(&vgpu->vgpu_lock);
> >> --
> >> 2.7.4
> >>

