[Intel-gfx] [PATCH 19/19] drm/i915: Sync against the GuC log buffer flush work item on system suspend

Goel, Akash akash.goel at intel.com
Thu Aug 18 03:45:46 UTC 2016



On 8/17/2016 9:07 PM, Goel, Akash wrote:
>
>
> On 8/17/2016 6:41 PM, Imre Deak wrote:
>> On ke, 2016-08-17 at 18:15 +0530, Goel, Akash wrote:
>>>
>>> On 8/17/2016 5:11 PM, Chris Wilson wrote:
>>>> On Wed, Aug 17, 2016 at 12:27:30PM +0100, Tvrtko Ursulin wrote:
>>>>>

>>>>>> +int intel_guc_suspend(struct drm_device *dev, bool rpm_suspend)
>>>>>>  {
>>>>>>      struct drm_i915_private *dev_priv = to_i915(dev);
>>>>>>      struct intel_guc *guc = &dev_priv->guc;
>>>>>> @@ -1530,6 +1530,12 @@ int intel_guc_suspend(struct drm_device *dev)
>>>>>>          return 0;
>>>>>>
>>>>>>      gen9_disable_guc_interrupts(dev_priv);
>>>>>> +    /* Sync is needed only for the system suspend case, runtime suspend
>>>>>> +     * case is covered due to rpm get/put calls used around Hw access in
>>>>>> +     * the work item function.
>>>>>> +     */
>>>>>> +    if (!rpm_suspend && (i915.guc_log_level >= 0))
>>>>>> +        flush_work(&dev_priv->guc.log.flush_work);
>>>>
>>>> In which case (rpm suspend) the flush_work is idle and this is a
>>>> noop. That you have to pass around such state suggests that you are
>>>> papering over a bug?
>>> In the rpm suspend case the flush_work may not be a NOOP.
>>> We could call flush_work for runtime suspend as well, but even then we
>>> can't prevent the 'RPM wakelock' asserts, as the work item can get
>>> executed after the rpm ref count drops to zero and before runtime
>>> suspend kicks in (i.e. after the autosuspend delay).
>>>
>>> For that you had earlier suggested using rpm get/put in the work item
>>> function, around the register access, but with that the flush_work had
>>> to be removed from the suspend hook, otherwise a deadlock could occur
>>> (the suspend path would wait for a work item that itself waits for the
>>> device to resume). So the flush_work is done conditionally, only for
>>> the system suspend case, where rpm get/put won't cause a resume of the
>>> device.
>>>
>>> Actually I had discussed this with Imre and prepared this patch as per
>>> his inputs.
>>
>> There would be this alternative:
>>
> Thanks a lot for suggesting the alternative approach.
>
> Just to confirm whether I understood everything correctly,
>
>> in gen9_guc_irq_handler():
>>    WARN_ON(!intel_runtime_pm_get_if_in_use());
> WARN is used because we don't expect the device to be suspended at this
> point, so intel_runtime_pm_get_if_in_use() should return true.
>
>>    if (!queue_work(log.flush_work))
> If queue_work returns 0, the work item is already pending, so it won't
> be queued again, and hence we can release the rpm reference right away.
>>        intel_runtime_pm_put();
>
>>
>> and dropping the reference at the end of the work item.
> This will be just like __intel_autoenable_gt_powersave().
>
>> This would make the flush_work() a nop in case of runtime_suspend().
> So flush_work can be called unconditionally.
>
> Hope I understood it correctly.
>
Hi Imre,

You had suggested using the code below in the irq handler, suspecting
that intel_runtime_pm_get_if_in_use() can return false if the interrupt
gets handled just after the device goes out of use.

	if (intel_runtime_pm_get_if_in_use()) {
		if (!queue_work(log.flush_work))
			intel_runtime_pm_put();
	}
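
For my own understanding, the complete scheme would then look roughly
like the sketch below. The field and helper names (flush_wq,
guc_log_flush_work_func(), guc_read_update_log_buffer() etc.) are just
my assumptions for illustration, not the actual patch:

	static void gen9_guc_irq_handler(struct drm_i915_private *dev_priv,
					 u32 gt_iir)
	{
		if (!(gt_iir & GEN9_GUC_TO_HOST_INT_EVENT))
			return;

		/* Take an rpm reference on behalf of the work item; if the
		 * device is already out of use, skip queuing altogether.
		 */
		if (!intel_runtime_pm_get_if_in_use(dev_priv))
			return;

		/* Work already pending: it already owns an rpm reference. */
		if (!queue_work(dev_priv->guc.log.flush_wq,
				&dev_priv->guc.log.flush_work))
			intel_runtime_pm_put(dev_priv);
	}

	static void guc_log_flush_work_func(struct work_struct *work)
	{
		struct intel_guc *guc =
			container_of(work, struct intel_guc, log.flush_work);
		struct drm_i915_private *dev_priv =
			container_of(guc, struct drm_i915_private, guc);

		/* Hw access is safe here: the rpm reference taken in the irq
		 * handler keeps the device awake until we are done.
		 */
		guc_read_update_log_buffer(dev_priv);

		intel_runtime_pm_put(dev_priv);
	}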

Do you mean that the interrupt can come when rpm suspend has already
started but before the interrupt is disabled from the suspend hook?
For example, if the interrupt comes between 1) and 4) below, then
intel_runtime_pm_get_if_in_use() will return false.
1)	Autosuspend delay elapses (device is marked as suspending)
2)		intel_runtime_suspend
3)			intel_guc_suspend
4)				gen9_disable_guc_interrupts(dev_priv);

If the above hypothesis is correct, then it implies that the interrupt
has to arrive after the autosuspend delay has elapsed for this scenario
to arise.

I think it is unlikely for the interrupt to come that late, because the
device would have gone idle before the autosuspend period started and so
no GuC submissions would have been made after that.
So the probability of missing a work item should be very low, and we can
live with that.
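
If we do go with that scheme, the suspend hook could then drop the
rpm_suspend parameter and call flush_work() for both system and runtime
suspend. Again only a rough sketch of the idea, with the untouched parts
of the function elided:

	int intel_guc_suspend(struct drm_device *dev)
	{
		struct drm_i915_private *dev_priv = to_i915(dev);
		struct intel_guc *guc = &dev_priv->guc;

		/* (existing early return if the GuC firmware is not loaded) */

		gen9_disable_guc_interrupts(dev_priv);

		/* Safe in both cases: during runtime suspend the work item has
		 * either already completed (it held its own rpm reference) or
		 * was never queued, so this flush is effectively a nop.
		 */
		if (i915.guc_log_level >= 0)
			flush_work(&guc->log.flush_work);

		/* (rest of the existing suspend sequence unchanged) */
		return 0;
	}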

Best regards
Akash

> Best regards
> Akash
>
>> --Imre
>>

