[Intel-gfx] [PATCH] drm/i915/guc/slpc: Use non-blocking H2G for waitboost

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Thu Jun 23 08:12:40 UTC 2022


On 22/06/2022 22:28, Dixit, Ashutosh wrote:
> On Wed, 22 Jun 2022 13:30:23 -0700, Belgaumkar, Vinay wrote:
>> On 6/21/2022 5:26 PM, Dixit, Ashutosh wrote:
>>> On Sat, 14 May 2022 23:05:06 -0700, Vinay Belgaumkar wrote:
>>> The issue I have is what happens when we de-boost (restore the min freq to
>>> its previous value in intel_guc_slpc_dec_waiters()). That call seems fairly
>>> important for getting the min freq back down when there are no pending
>>> requests. So what do we do in that case?
>>>
>>> This is the function:
>>>
>>> void intel_guc_slpc_dec_waiters(struct intel_guc_slpc *slpc)
>>> {
>>>           mutex_lock(&slpc->lock);
>>>           if (atomic_dec_and_test(&slpc->num_waiters))
>>>                   slpc_force_min_freq(slpc, slpc->min_freq_softlimit);
>>>           mutex_unlock(&slpc->lock);
>>> }
>>>
>>>
>>> 1. First, it would seem that at a minimum we need a similar drm_notice()
>>>      in intel_guc_slpc_dec_waiters(). That would mean putting the
>>>      drm_notice() back in slpc_force_min_freq() (replacing
>>>      i915_probe_error()) rather than in slpc_boost_work() above?
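>>>
>>>      Roughly something like this in slpc_force_min_freq() -- an untested
>>>      sketch, assuming the non-blocking slpc_set_param_nb() helper this
>>>      patch adds and the local i915 pointer that function already has:
>>>
>>>           /* Non-blocking request will avoid stalls */
>>>           ret = slpc_set_param_nb(slpc,
>>>                                   SLPC_PARAM_GLOBAL_MIN_GT_FREQ,
>>>                                   freq);
>>>           if (ret)
>>>                   drm_notice(&i915->drm,
>>>                              "Failed to send set param for min freq(%d): %d\n",
>>>                              freq, ret);
>>>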
>> Sure.
>>>
>>> 2. Further, if de-boosting is important then, as was being discussed in
>>>      v1 of this patch (see the bottom of
>>>      https://patchwork.freedesktop.org/patch/485004/?series=103598&rev=1),
>>>      maybe we need to use intel_guc_send_busy_loop() in the
>>>      intel_guc_slpc_dec_waiters() code path?
>>
>> Using a busy_loop here would essentially be the same as blocking, right?
> 
> Well, blocking waits for a response from GuC (so all previous requests need
> to be processed by GuC), whereas busy_loop() just waits for space to become
> available at the back of the queue (so only a few requests, or maybe just
> one, have to be processed by GuC).
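> 
> For reference, the core of intel_guc_send_busy_loop() is roughly the
> following (paraphrased from memory and minus the atomic-context handling,
> so treat it as a sketch) -- note that it retries forever on -EBUSY and
> never logs anything:
> 
>       static inline int intel_guc_send_busy_loop(struct intel_guc *guc,
>                                                  const u32 *action, u32 len,
>                                                  u32 g2h_len_dw, bool loop)
>       {
>               unsigned int sleep_period_ms = 1;
>               int err;
> 
>       retry:
>               err = intel_guc_send_nb(guc, action, len, g2h_len_dw);
>               if (unlikely(err == -EBUSY && loop)) {
>                       /* Back off, then retry until the CTB has space */
>                       if (msleep_interruptible(sleep_period_ms))
>                               return -EINTR;
>                       sleep_period_ms <<= 1;
>                       goto retry;
>               }
> 
>               return err;
>       }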
> 
>> And it could still fail/timeout with blocking as well (which is the problem
>> we are trying to solve here).
> 
> intel_guc_send_busy_loop() has an infinite wait without a drm_err()!! :)
> 
>> De-boosting is important, but in the worst case scenario, let's say this
>> request was not processed by GuC. That would happen only if the system
>> were really busy, which would mean there is a high likelihood we would
>> boost/de-boost again anyway, and the request would probably go through at
>> that point.
> 
> Not sure about this. The system was busy but might now have gone idle, which
> is why we are trying to de-boost. But the GuC queue might still be full, so
> we may drop the de-boost request. Or, if the system has gone really idle,
> there will be space in the GuC queue.
> 
> Also, the problem with intel_guc_send_busy_loop() is that it just has a
> sleep in it, so others might be adding requests to the GuC queue while
> busy_loop() is sleeping (to avoid such situations we'd need a SW queue in
> front of the real GuC queue).
> 
> So I am ok if we don't want to add intel_guc_send_busy_loop() for now and
> "wait and watch", unless John suggests otherwise, since I don't have any
> idea how likely this is to happen. If we change drm_notice to drm_err, the
> CI will quickly tell us if this is happening.
> 
> Anyway, at least let's move drm_notice (or drm_err) into
> slpc_force_min_freq() and I can ok the patch. Thanks.

I got a bit lost, but I thought I suggested notice level? Is it the same 
log message you are discussing here? If so, I don't think it is strictly 
speaking an error, just an unexpected condition which should be noted 
(the claim being it should never happen outside IGT). Maybe warning level 
if you think notice is too low?

Regards,

Tvrtko

> 
>>> At least we need to do 1. But for 2., we might as well just put
>>> intel_guc_send_busy_loop() in guc_action_slpc_set_param_nb()? In both cases
>>> (boost and de-boost) intel_guc_send_busy_loop() would be called from a work
>>> item, so it looks doable (the way we were previously doing the blocking call
>>> from the two places). Thoughts?
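>>>
>>>      To illustrate 2. -- an untested sketch, with the
>>>      intel_guc_send_busy_loop() signature written from memory, so please
>>>      double-check it:
>>>
>>> static int guc_action_slpc_set_param_nb(struct intel_guc *guc, u8 id, u32 value)
>>> {
>>>         u32 request[] = {
>>>                 GUC_ACTION_HOST2GUC_PC_SLPC_REQUEST,
>>>                 SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2),
>>>                 id,
>>>                 value,
>>>         };
>>>         int ret;
>>>
>>>         /* Retry on -EBUSY until there is CTB space; no G2H response expected */
>>>         ret = intel_guc_send_busy_loop(guc, request, ARRAY_SIZE(request), 0, true);
>>>
>>>         return ret > 0 ? -EPROTO : ret;
>>> }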
>>>
>>> Thanks.
>>> --
>>> Ashutosh

