[PATCH v2] drm/i915/gvt: add a spin_lock to protect the current_workload
Dong, Chuanxiao
chuanxiao.dong at intel.com
Thu Feb 16 07:35:14 UTC 2017
> -----Original Message-----
> From: He, Min
> Sent: Thursday, February 16, 2017 3:08 PM
> To: Dong, Chuanxiao <chuanxiao.dong at intel.com>; intel-gvt-dev at lists.freedesktop.org
> Cc: Zhang, Pei <pei.zhang at intel.com>; Wang, Zhi A <zhi.a.wang at intel.com>
> Subject: Re: [PATCH v2] drm/i915/gvt: add a spin_lock to protect the
> current_workload
>
> Zhenyu already gave you some comments.
Sorry for missing this. Will handle it.
>
>
> And I don’t think this spin lock can actually protect the current_workload.
>
> For example: thread A calls shadow_context_status_change: the current
> workload is not NULL, and we get the pointer, save it and continue the
> register save/restore, which may take some time. But at this time, another
> thread B is in complete_current_workload and frees the content of the
> current workload. After that, thread A wants to call
> wake_up(&workload->shadow_ctx_status_wq), but at this time, the content of
> the workload is already cleared.
Actually the spin_lock is not the only synchronization mechanism between shadow_context_status_change() and complete_current_workload(). There is also a wait queue between them.
There are two scenarios: a successfully completed workload and a failed workload.
In the successfully completed workload case: this is actually the case Min mentioned, and here the spin lock is not needed. While shadow_context_status_change() is doing the register save/restore, complete_current_workload() waits for the wake_up from shadow_context_status_change(). Only after shadow_context_status_change() has completed will complete_current_workload() continue and free the workload. This is the current solution we have in the upstream code.
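To make that handshake concrete, here is a minimal sketch of the wait-queue pairing, using the shadow_ctx_active and shadow_ctx_status_wq fields already referenced in this thread (the register save/restore is abbreviated; this only illustrates the existing upstream flow, it is not new code):

    /* shadow_context_status_change(), INTEL_CONTEXT_SCHEDULE_OUT path */
    /* ... register save/restore ... */
    atomic_set(&workload->shadow_ctx_active, 0);
    wake_up(&workload->shadow_ctx_status_wq);

    /* complete_current_workload(): block until the notifier above is done,
     * so the workload cannot be freed while it is still in use.
     */
    wait_event(workload->shadow_ctx_status_wq,
               !atomic_read(&workload->shadow_ctx_active));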
In the failed workload case, normally there should be no context switch interrupt, so shadow_context_status_change() has no chance to run. But in some corner cases a context switch interrupt still comes later. The spin_lock is used to protect this case; here shadow_context_status_change() won't do any register save/restore.
For a failed workload that still gets a context switch interrupt: assume shadow_context_status_change() takes the spin_lock first. It reads the current_workload, then returns without any register save/restore. Meanwhile, complete_current_workload() spins on the lock before clearing the current_workload. So these two functions are serialized against each other and there is no conflict.
Now assume complete_current_workload() takes the spin_lock first: it clears the current_workload and then releases the lock. Meanwhile, shadow_context_status_change() spins on the lock, and once it gets it, it finds the current_workload is NULL, so it does nothing and just returns. In this case there is still no conflict.
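Condensed, the only shared state the lock has to keep consistent is the current_workload pointer itself. A minimal sketch of the two sides (the failed-workload test here stands in for the status/resetting checks in the patch below):

    /* shadow_context_status_change() side */
    spin_lock_irqsave(&scheduler->cur_workload_lock, flags);
    workload = scheduler->current_workload[ring_id];
    if (!workload /* || workload already failed, see the patch */) {
            spin_unlock_irqrestore(&scheduler->cur_workload_lock, flags);
            return NOTIFY_OK;   /* skip the register save/restore */
    }
    spin_unlock_irqrestore(&scheduler->cur_workload_lock, flags);

    /* complete_current_workload() side */
    spin_lock_irqsave(&scheduler->cur_workload_lock, flags);
    scheduler->current_workload[ring_id] = NULL;
    spin_unlock_irqrestore(&scheduler->cur_workload_lock, flags);

Whichever side takes the lock first, the other side either sees a consistent pointer or sees NULL and backs off.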
>
> Maybe we can just get the mutex_lock(&gvt->lock) in
> shadow_context_status_change, don’t know if there's any side effect.
shadow_context_status_change() is called from a tasklet, and it may also be called from a context switch interrupt in the future, so a mutex is not suitable for it.
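For reference, a tasklet runs in softirq (atomic) context where sleeping is not allowed, so only a non-sleeping lock can be taken there. A minimal sketch of the constraint, assuming the notifier is invoked from that tasklet:

    /* called from tasklet (atomic) context */
    spin_lock_irqsave(&scheduler->cur_workload_lock, flags);   /* OK: never sleeps */
    /* ... */
    spin_unlock_irqrestore(&scheduler->cur_workload_lock, flags);

    mutex_lock(&gvt->lock);   /* not OK here: mutex_lock() may sleep */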
>
>
>
> On 2/16/2017 2:32 PM, Dong, Chuanxiao wrote:
> > Any comments?
> >
> >> -----Original Message-----
> >> From: intel-gvt-dev [mailto:intel-gvt-dev-bounces at lists.freedesktop.org]
> >> On Behalf Of Chuanxiao Dong
> >> Sent: Friday, February 10, 2017 5:30 PM
> >> To: intel-gvt-dev at lists.freedesktop.org
> >> Cc: Zhang, Pei <pei.zhang at intel.com>; Wang, Zhi A <zhi.a.wang at intel.com>
> >> Subject: [PATCH v2] drm/i915/gvt: add a spin_lock to protect the
> >> current_workload
> >>
> >> There is a corner case which can cause a kernel panic after a GPU reset.
> >> The reason for this kernel panic is that sometimes a context switch
> >> notification still comes from HW to SW for a failed workload. But as it is a
> >> failed workload, GVT has already freed it and the current_workload in the
> >> scheduler is NULL. Accessing NULL caused the kernel panic. So this issue is
> >> caused by the lack of synchronization between freeing the workload and
> >> handling the unexpected notification.
> >>
> >> To protect the current_workload, add a spin_lock. It gives SW a chance
> >> to synchronize and avoid this kind of conflict.
> >>
> >> v2: update the commit message;
> >> add EINPROGRESS check for workload->status
> >>
> >> Signed-off-by: Chuanxiao Dong <chuanxiao.dong at intel.com>
> >> ---
> >>  drivers/gpu/drm/i915/gvt/scheduler.c | 30 ++++++++++++++++++++++++++++--
> >>  drivers/gpu/drm/i915/gvt/scheduler.h |  2 ++
> >>  2 files changed, 30 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> >> index 7ea68a7..c300197 100644
> >> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> >> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> >> @@ -136,8 +136,30 @@ static int shadow_context_status_change(struct notifier_block *nb,
> >> (struct drm_i915_gem_request *)data;
> >> struct intel_gvt_workload_scheduler *scheduler =
> >> &vgpu->gvt->scheduler;
> >> - struct intel_vgpu_workload *workload =
> >> - scheduler->current_workload[req->engine->id];
> >> + struct intel_vgpu_workload *workload;
> >> + unsigned long flags;
> >> +
> >> + spin_lock_irqsave(&scheduler->cur_workload_lock, flags);
> >> + workload = scheduler->current_workload[req->engine->id];
> >> + /* Sometimes a failed workload still gets this notification
> >> + * from the HW side. At this point the failed workload may
> >> + * have already been freed; in that case, do nothing. Even if
> >> + * the failed workload is not freed yet, still do nothing.
> >> + *
> >> + * In another case, when we get here the workload is actually
> >> + * already a new workload and we cannot do the context schedule
> >> + * out based on the last failed one.
> >> + */
> >> + if (unlikely(!workload ||
> >> + ((workload->status != -EINPROGRESS && workload->status) ||
> >> + vgpu->resetting) ||
> >> + (action == INTEL_CONTEXT_SCHEDULE_OUT &&
> >> + !atomic_read(&workload->shadow_ctx_active)))) {
> >> + spin_unlock_irqrestore(&scheduler->cur_workload_lock, flags);
> >> + return NOTIFY_OK;
> >> + }
> >> + spin_unlock_irqrestore(&scheduler->cur_workload_lock, flags);
> >>
> >> switch (action) {
> >> case INTEL_CONTEXT_SCHEDULE_IN:
> >> @@ -352,6 +374,7 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
> >> struct intel_vgpu_workload *workload;
> >> struct intel_vgpu *vgpu;
> >> int event;
> >> + unsigned long flags;
> >>
> >> mutex_lock(&gvt->lock);
> >>
> >> @@ -372,7 +395,9 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
> >> gvt_dbg_sched("ring id %d complete workload %p status %d\n",
> >> ring_id, workload, workload->status);
> >>
> >> + spin_lock_irqsave(&scheduler->cur_workload_lock, flags);
> >> scheduler->current_workload[ring_id] = NULL;
> >> + spin_unlock_irqrestore(&scheduler->cur_workload_lock, flags);
> >>
> >> list_del_init(&workload->list);
> >> workload->complete(workload);
> >> @@ -513,6 +538,7 @@ int intel_gvt_init_workload_scheduler(struct intel_gvt *gvt)
> >>
> >> gvt_dbg_core("init workload scheduler\n");
> >>
> >> + spin_lock_init(&scheduler->cur_workload_lock);
> >> init_waitqueue_head(&scheduler->workload_complete_wq);
> >>
> >> for (i = 0; i < I915_NUM_ENGINES; i++) {
> >>
> >> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.h b/drivers/gpu/drm/i915/gvt/scheduler.h
> >> index 2833dfa..b446ea9 100644
> >> --- a/drivers/gpu/drm/i915/gvt/scheduler.h
> >> +++ b/drivers/gpu/drm/i915/gvt/scheduler.h
> >> @@ -40,6 +40,8 @@ struct intel_gvt_workload_scheduler {
> >> struct intel_vgpu *current_vgpu;
> >> struct intel_vgpu *next_vgpu;
> >> struct intel_vgpu_workload *current_workload[I915_NUM_ENGINES];
> >> + /* For current_workload handling sync */
> >> + spinlock_t cur_workload_lock;
> >> bool need_reschedule;
> >>
> >> wait_queue_head_t workload_complete_wq;
> >> --
> >> 2.7.4
> >>
> >> _______________________________________________
> >> intel-gvt-dev mailing list
> >> intel-gvt-dev at lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev
> > _______________________________________________
> > intel-gvt-dev mailing list
> > intel-gvt-dev at lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev