[Intel-gfx] [PATCH] drm/i915: Reduce duplicated forcewake logic

Dave Gordon david.s.gordon at intel.com
Fri Nov 7 19:55:22 CET 2014


On 07/11/14 15:46, Ville Syrjälä wrote:
> On Wed, Sep 10, 2014 at 07:34:54PM +0100, Chris Wilson wrote:
>> Introduce a structure to track the individual forcewake domains and use
>> that to eliminated duplicate logic.
>> ---
>>  drivers/gpu/drm/i915/i915_debugfs.c |  41 +++---
>>  drivers/gpu/drm/i915/i915_drv.h     |  32 +++--
>>  drivers/gpu/drm/i915/intel_uncore.c | 265 ++++++++++++++++--------------------
>>  3 files changed, 157 insertions(+), 181 deletions(-)

Hi Chris,

this looks useful -- I was looking at refactoring the VLV vs GEN9
versions of the forcewake code too. A few comments below:

>> diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
>> index 3b3d3e0..641950b 100644
>> --- a/drivers/gpu/drm/i915/intel_uncore.c
>> +++ b/drivers/gpu/drm/i915/intel_uncore.c

>> @@ -145,7 +145,7 @@ static void __gen6_gt_force_wake_put(struct drm_i915_private *dev_priv,
>>  }
>>  
>>  static void __gen7_gt_force_wake_mt_put(struct drm_i915_private *dev_priv,
>> -							int fw_engine)
>> +					int fw_engine)

Here and in all the other versions: how about renaming the parameter to
"fw_domains", in line with your new enum and struct naming? That would
make it clearer that it can have multiple bits set ...

>> @@ -389,50 +358,60 @@ void intel_uncore_sanitize(struct drm_device *dev)
>>   * be called at the beginning of the sequence followed by a call to
>>   * gen6_gt_force_wake_put() at the end of the sequence.
>>   */
>> -void gen6_gt_force_wake_get(struct drm_i915_private *dev_priv, int fw_engine)
>> +void gen6_gt_force_wake_get(struct drm_i915_private *dev_priv,
>> +			    unsigned fw_domains)
>>  {
>>  	unsigned long irqflags;
>> +	int i;
>>  
>>  	if (!dev_priv->uncore.funcs.force_wake_get)
>>  		return;
>>  
>>  	WARN_ON(!pm_runtime_active(&dev_priv->dev->pdev->dev));
>>  
>> +	fw_domains &= dev_priv->uncore.fw_domains;
>> +
>>  	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
>>  
>> -	/* Redirect to VLV specific routine */
>> -	if (IS_VALLEYVIEW(dev_priv->dev)) {
>> -		vlv_force_wake_get(dev_priv, fw_engine);
>> -	} else {
>> -		if (dev_priv->uncore.forcewake_count++ == 0)
>> -			dev_priv->uncore.funcs.force_wake_get(dev_priv,
>> -							      FORCEWAKE_ALL);
>> +	for (i = 0; i < FW_DOMAIN_COUNT; i++) {
>> +		if ((fw_domains & (1 << i)) == 0)
>> +			continue;
>> +
>> +		if (dev_priv->uncore.fw_domain[i].wake_count++)
>> +			fw_domains &= ~(1 << i);
>>  	}
>> +
>> +	if (fw_domains)
>> +		dev_priv->uncore.funcs.force_wake_get(dev_priv, fw_domains);
>> +
>>  	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
>>  }
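The per-domain refcount filtering in the loop above can be modeled in
user space, which makes the intended semantics easy to check: only
domains whose count was zero are passed down to the vfunc. This is just
an illustrative sketch -- the struct and function names here are
invented, not the actual i915 ones:

```c
/* User-space model of the refcount filtering in the quoted
 * gen6_gt_force_wake_get(); names are illustrative only. */
#include <assert.h>

#define FW_DOMAIN_COUNT 3

struct fw_domain { unsigned wake_count; };

struct fake_uncore {
	unsigned fw_domains;		/* domains present on this platform */
	struct fw_domain fw_domain[FW_DOMAIN_COUNT];
	unsigned hw_awake;		/* bits "written to hardware" so far */
};

/* Returns the mask actually passed down to the low-level vfunc. */
static unsigned fake_force_wake_get(struct fake_uncore *u, unsigned fw_domains)
{
	int i;

	fw_domains &= u->fw_domains;	/* ignore domains this gen lacks */

	for (i = 0; i < FW_DOMAIN_COUNT; i++) {
		if ((fw_domains & (1u << i)) == 0)
			continue;

		/* Domains that were already awake need no h/w write. */
		if (u->fw_domain[i].wake_count++)
			fw_domains &= ~(1u << i);
	}

	if (fw_domains)
		u->hw_awake |= fw_domains;	/* stands in for ->force_wake_get() */

	return fw_domains;
}
```

So a second get on an already-awake domain only bumps the refcount and
never reaches the hardware, which is exactly the duplicate logic the
patch removes from the per-platform wrappers.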

Nice to get rid of the "Redirect to XXX specific routine" stuff -- one
of the things I don't like about the current forcewake code is that the
VLV and GEN9 versions of the outer wrapper are inconsistent w.r.t. how
(and how many times) they call the ->force_wake_get() vfunc: vlv
collects all the bits and makes one call, while gen9 makes a separate
call for each bit. This code would handle all gens consistently, which
is much nicer. And I prefer a single call with multiple bits, so that
the low-level code, which knows where the corresponding h/w bits are,
can deal efficiently with multiple bits mapping to the same register,
or (for example) issue all the forcewake writes first and then poll
them all for completion, rather than doing the write/poll pairs
sequentially.
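To make that last point concrete, here is a sketch of the batched
write-then-poll shape against a fake register file -- the register
names and the instant-ack behaviour are invented for illustration; real
code would poll with a timeout:

```c
/* Sketch: one pass of forcewake request writes, then one pass of
 * ack polls, instead of a write+poll pair per domain.  All names
 * and the fake "hardware" are illustrative only. */
#include <assert.h>

#define NDOM 3

static unsigned fake_req[NDOM];	/* forcewake request registers */
static unsigned fake_ack[NDOM];	/* forcewake ack registers */

static void fake_write_req(int d, unsigned v)
{
	fake_req[d] = v;
	fake_ack[d] = v;	/* fake h/w acks instantly */
}

static int batched_force_wake_get(unsigned fw_domains)
{
	int d;

	/* Issue every request first ... */
	for (d = 0; d < NDOM; d++)
		if (fw_domains & (1u << d))
			fake_write_req(d, 1);

	/* ... then check all the acks, so the per-domain wake
	 * latencies overlap instead of adding up. */
	for (d = 0; d < NDOM; d++)
		if ((fw_domains & (1u << d)) && !fake_ack[d])
			return -1;	/* would be a timeout in real code */

	return 0;
}
```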

[snip]
> 
> Also the code in nightly still has the runtime pm stuff in the forcewake
> code. I guess you cleaned those out in your tree, but I don't remember
> seeing a recent patch on that front.

Yes, this is another area we wanted clarified and/or tidied up too.
I posted a message earlier today asking whether the PM calls were
necessary, along with a patch that allowed them to be avoided. But if
they can be removed completely, that would be better still :)

> Otherwise this looked very nice IMO, so would be great to get it in
> finally.

Agreed -
.Dave.
