[PATCH] drm/i915/gvt: move intel_runtime_pm_get out of spin_lock in stop_schedule
Hang Yuan
hang.yuan at linux.intel.com
Tue Aug 28 10:16:20 UTC 2018
Hi Zhenyu,
Do you have any other comments on the patch? It only takes effect in
stop_schedule; it doesn't impact shadow_context_status_change, because the
device is already active when that function is called.
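To make the problem concrete, here is a minimal sketch of the two orderings
(just the pattern, not the literal GVT call chain): intel_runtime_pm_get()
may end up in pm_runtime_get_sync(), which can sleep to resume the device,
but spin_lock_bh() has already entered atomic context.

    /* Buggy ordering: a possibly-sleeping call inside an atomic section. */
    spin_lock_bh(&scheduler->mmio_context_lock);
    intel_runtime_pm_get(dev_priv);    /* may sleep -> bug under spinlock */
    /* ... switch mmio context ... */
    intel_runtime_pm_put(dev_priv);
    spin_unlock_bh(&scheduler->mmio_context_lock);

    /* Fixed ordering: take the runtime PM reference before the lock. */
    intel_runtime_pm_get(dev_priv);    /* device stays active below */
    spin_lock_bh(&scheduler->mmio_context_lock);
    /* ... switch mmio context ... */
    spin_unlock_bh(&scheduler->mmio_context_lock);
    intel_runtime_pm_put(dev_priv);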
Thanks,
Henry
On 08/22/2018 06:25 PM, intel-gvt-dev-bounces at lists.freedesktop.org wrote:
> On 08/07/2018 10:56 AM, Zhenyu Wang wrote:
>> On 2018.07.31 18:05:46 +0800,
>> intel-gvt-dev-bounces at lists.freedesktop.org wrote:
>>> From: Hang Yuan <hang.yuan at linux.intel.com>
>>>
>>> pm_runtime_get_sync in intel_runtime_pm_get might sleep if the i915
>>> device is not active. When vGPU scheduling is stopped, the device may
>>> be inactive, so intel_runtime_pm_get needs to be moved outside of the
>>> spin_lock/unlock section.
>>>
>>> Fixes: b24881e0b0b6 ("drm/i915/gvt: Add runtime_pm_get/put into gvt_switch_mmio")
>>> Signed-off-by: Hang Yuan <hang.yuan at linux.intel.com>
>>> Signed-off-by: Xiong Zhang <xiong.y.zhang at intel.com>
>>> ---
>>> drivers/gpu/drm/i915/gvt/mmio_context.c | 2 --
>>> drivers/gpu/drm/i915/gvt/sched_policy.c | 3 +++
>>> 2 files changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gvt/mmio_context.c b/drivers/gpu/drm/i915/gvt/mmio_context.c
>>> index 7e702c6..10e63ee 100644
>>> --- a/drivers/gpu/drm/i915/gvt/mmio_context.c
>>> +++ b/drivers/gpu/drm/i915/gvt/mmio_context.c
>>> @@ -549,11 +549,9 @@ void intel_gvt_switch_mmio(struct intel_vgpu *pre,
>>> * performace for batch mmio read/write, so we need
>>> * handle forcewake mannually.
>>> */
>>> - intel_runtime_pm_get(dev_priv);
>>> intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
>>> switch_mmio(pre, next, ring_id);
>>> intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
>>> - intel_runtime_pm_put(dev_priv);
>>> }
>>
>> If this is removed, what about other users calling it? Or do we need a
>> flag to tell whether runtime pm must be taken when switching mmio?
>>
> Henry: What kind of flag do you mean? How about adding a code comment to
> note that this function accesses HW, so a runtime PM reference is needed
> when the device may not be active? intel_gvt_switch_mmio is also called
> from an atomic notification handler, so I can't protect it with a mutex
> as I originally intended; it still has to be protected by the spin_lock,
> which makes intel_runtime_pm_get inside the function buggy when the
> device is not active.
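>
> Something like this is what I have in mind (a sketch only; whether the
> assertion is worth adding is open, and I'm assuming the existing i915
> helper assert_rpm_wakelock_held() is usable from here):
>
>     /*
>      * NOTE: this function accesses HW registers and may be called from
>      * atomic context (e.g. the shadow context status notifier), so it
>      * cannot take the runtime PM reference itself. The caller must
>      * guarantee the device is active, e.g. by wrapping the call in
>      * intel_runtime_pm_get()/intel_runtime_pm_put() outside any spinlock.
>      */
>     void intel_gvt_switch_mmio(struct intel_vgpu *pre,
>                                struct intel_vgpu *next, int ring_id)
>     {
>             struct drm_i915_private *dev_priv = pre ? pre->gvt->dev_priv
>                                                     : next->gvt->dev_priv;
>
>             /* Catch callers that forgot to keep the device awake. */
>             assert_rpm_wakelock_held(dev_priv);
>             /* ... forcewake get, switch_mmio(pre, next, ring_id), put ... */
>     }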
>
>>> /**
>>> diff --git a/drivers/gpu/drm/i915/gvt/sched_policy.c b/drivers/gpu/drm/i915/gvt/sched_policy.c
>>> index 09d7bb7..985fe81 100644
>>> --- a/drivers/gpu/drm/i915/gvt/sched_policy.c
>>> +++ b/drivers/gpu/drm/i915/gvt/sched_policy.c
>>> @@ -426,6 +426,7 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
>>> &vgpu->gvt->scheduler;
>>> int ring_id;
>>> struct vgpu_sched_data *vgpu_data = vgpu->sched_data;
>>> + struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
>>> if (!vgpu_data->active)
>>> return;
>>> @@ -444,6 +445,7 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
>>> scheduler->current_vgpu = NULL;
>>> }
>>> + intel_runtime_pm_get(dev_priv);
>>> spin_lock_bh(&scheduler->mmio_context_lock);
>>> for (ring_id = 0; ring_id < I915_NUM_ENGINES; ring_id++) {
>>> if (scheduler->engine_owner[ring_id] == vgpu) {
>>> @@ -452,5 +454,6 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
>>> }
>>> }
>>> spin_unlock_bh(&scheduler->mmio_context_lock);
>>> + intel_runtime_pm_put(dev_priv);
>>> mutex_unlock(&vgpu->gvt->sched_lock);
>>> }
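Henry: Piecing the hunks together, intel_vgpu_stop_schedule() after the
patch reads roughly as below (the unchanged middle of the function and the
loop body are elided):

    void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
    {
            struct intel_gvt_workload_scheduler *scheduler =
                    &vgpu->gvt->scheduler;
            int ring_id;
            struct vgpu_sched_data *vgpu_data = vgpu->sched_data;
            struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;

            if (!vgpu_data->active)
                    return;

            /* ... deactivate vgpu_data, clear current_vgpu (unchanged) ... */

            /* May sleep to resume the device, so do it before the lock. */
            intel_runtime_pm_get(dev_priv);
            spin_lock_bh(&scheduler->mmio_context_lock);
            for (ring_id = 0; ring_id < I915_NUM_ENGINES; ring_id++) {
                    if (scheduler->engine_owner[ring_id] == vgpu) {
                            /* ... hand the engine back to host (unchanged) ... */
                    }
            }
            spin_unlock_bh(&scheduler->mmio_context_lock);
            intel_runtime_pm_put(dev_priv);
            mutex_unlock(&vgpu->gvt->sched_lock);
    }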