[Intel-gfx] [PATCH 1/2] drm/i915: prepare for multiple power wells
Imre Deak
imre.deak at intel.com
Wed Oct 23 16:46:03 CEST 2013
On Wed, 2013-10-23 at 11:56 -0200, Paulo Zanoni wrote:
> 2013/10/22 Imre Deak <imre.deak at intel.com>:
> > In the future we'll need to support multiple power wells, so prepare for
> > that here. Create a new power domains struct which contains all
> > power domain/well specific fields. Since we'll have one lock protecting
> > all power wells, move power_well->lock to the new struct too.
> >
> > No functional change.
> >
> > Signed-off-by: Imre Deak <imre.deak at intel.com>
> > ---
> > drivers/gpu/drm/i915/i915_drv.h | 11 ++++++---
> > drivers/gpu/drm/i915/intel_pm.c | 55 +++++++++++++++++++++++++----------------
> > 2 files changed, 42 insertions(+), 24 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 80957ca..7315095 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -911,12 +911,18 @@ struct intel_ilk_power_mgmt {
> > /* Power well structure for haswell */
> > struct i915_power_well {
> > struct drm_device *device;
>
> Can't we also move "device" to i915_power_domains?
Yes, that's a better place for it, especially since there is no point in
duplicating it in every instance of the power well struct. AFAICS the
only real reason we need to store device is
i915_{request,release}_power_well, but those could take it from
power_domains and pass it down to __intel_power_well_{get,put}. This is
a somewhat bigger/independent change, so I'd follow up with a separate
patch for it.
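To illustrate the direction, here is a rough standalone sketch (not the actual i915 code; the stand-in types and the missing locking are simplifications) of what moving `device` out of the per-well struct and into `i915_power_domains` could look like, with the exported helpers passing it down explicitly:

```c
/* Sketch only: mock types stand in for the real driver structures,
 * and locking is omitted for brevity. */
#include <assert.h>
#include <stddef.h>

struct drm_device { int id; };               /* stand-in for the DRM device */

struct i915_power_well {
	int count;                           /* power well enable/disable usage count */
};

struct i915_power_domains {
	struct drm_device *device;           /* moved here: one copy, not one per well */
	struct i915_power_well power_wells[1];
};

/* The low-level helpers would now receive the device explicitly. */
static void __intel_power_well_get(struct drm_device *dev,
				   struct i915_power_well *power_well)
{
	if (power_well->count++ == 0) {
		/* real code: program the hardware through dev */
		(void)dev;
	}
}

static void __intel_power_well_put(struct drm_device *dev,
				   struct i915_power_well *power_well)
{
	assert(power_well->count > 0);
	if (--power_well->count == 0) {
		/* real code: power the well down through dev */
		(void)dev;
	}
}

static struct i915_power_domains *hsw_pwr;

/* The exported entry points take the device from power_domains
 * and pass it down, instead of reading it from the well. */
void i915_request_power_well(void)
{
	__intel_power_well_get(hsw_pwr->device, &hsw_pwr->power_wells[0]);
}

void i915_release_power_well(void)
{
	__intel_power_well_put(hsw_pwr->device, &hsw_pwr->power_wells[0]);
}
```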
> > - struct mutex lock;
> > /* power well enable/disable usage count */
> > int count;
> > int i915_request;
> > };
> >
> > +#define I915_MAX_POWER_WELLS 1
>
> How about this?
>
> enum intel_power_wells {
> HASWELL_POWER_WELL = 0, /* Or any other more creative name,
> since I guess we'll have an equivalent power well on other gens */
> I915_MAX_POWER_WELLS,
> };
I was also thinking about naming the power wells somehow. They are
different on different platforms, so the above enum would have the same
values with different names, something like:
enum intel_power_wells {
HASWELL_POWER_WELL = 0,
VALLEYVIEW_POWER_WELL = 0,
XX_POWER_WELL0 = 0,
XX_POWER_WELL1 = 1,
XX_POWER_WELL2 = 2,
I915_MAX_POWER_WELLS,
};
Which would work, but is a bit messy IMO. At the moment I don't see the
point of having these names, since we will only reference the power
wells in terms of the power domains they support.
> And then you switch all the code below that uses index zero for the
> actual enum name.
Yeah, that's not so nice, but it would go away as soon as we switch to
referencing the power wells in the above way. I have a preliminary patch
for this, see: https://github.com/ideak/linux/commit/5be8b8b4
> Maybe "NON_LPSP_POWER_WELL = 0" since the docs describe LPSP (Low
> Power Single Pipe) as a feature that is enabled when the power well is
> disabled. Feel free to invent better names :)
See above, but perhaps for debugging purposes we could add a char *name,
and make the state of power wells available through debugfs.
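A minimal sketch of that debugging idea (names and fields are illustrative; in the driver the output would go through a debugfs seq_file rather than a plain buffer):

```c
/* Sketch only: give each power well a name and format its state into
 * a buffer, mimicking what a debugfs entry could show. */
#include <stdio.h>
#include <string.h>

struct i915_power_well {
	const char *name;   /* e.g. "display"; purely for debugging */
	int count;          /* power well enable/disable usage count */
};

/* Write one "name: refcount N" line per well; returns bytes written. */
static int power_wells_status(char *buf, size_t len,
			      const struct i915_power_well *wells, int n)
{
	int pos = 0;
	for (int i = 0; i < n; i++)
		pos += snprintf(buf + pos, len - pos, "%s: refcount %d\n",
				wells[i].name, wells[i].count);
	return pos;
}
```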
> > +
> > +struct i915_power_domains {
> > + struct mutex lock;
> > + struct i915_power_well power_wells[I915_MAX_POWER_WELLS];
> > +};
> > +
> > struct i915_dri1_state {
> > unsigned allow_batchbuffer : 1;
> > u32 __iomem *gfx_hws_cpu_addr;
> > @@ -1412,8 +1418,7 @@ typedef struct drm_i915_private {
> > * mchdev_lock in intel_pm.c */
> > struct intel_ilk_power_mgmt ips;
> >
> > - /* Haswell power well */
> > - struct i915_power_well power_well;
> > + struct i915_power_domains power_domains;
> >
> > struct i915_psr psr;
> >
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > index 3e140ab..09ca6f9 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -5625,7 +5625,7 @@ void intel_display_power_get(struct drm_device *dev,
> > enum intel_display_power_domain domain)
> > {
> > struct drm_i915_private *dev_priv = dev->dev_private;
> > - struct i915_power_well *power_well = &dev_priv->power_well;
> > + struct i915_power_domains *power_domains;
> >
> > if (!HAS_POWER_WELL(dev))
> > return;
> > @@ -5633,16 +5633,18 @@ void intel_display_power_get(struct drm_device *dev,
> > if (is_always_on_power_domain(dev, domain))
> > return;
> >
> > - mutex_lock(&power_well->lock);
> > - __intel_power_well_get(power_well);
> > - mutex_unlock(&power_well->lock);
> > + power_domains = &dev_priv->power_domains;
> > +
> > + mutex_lock(&power_domains->lock);
> > + __intel_power_well_get(&power_domains->power_wells[0]);
> > + mutex_unlock(&power_domains->lock);
> > }
> >
> > void intel_display_power_put(struct drm_device *dev,
> > enum intel_display_power_domain domain)
> > {
> > struct drm_i915_private *dev_priv = dev->dev_private;
> > - struct i915_power_well *power_well = &dev_priv->power_well;
> > + struct i915_power_domains *power_domains;
> >
> > if (!HAS_POWER_WELL(dev))
> > return;
> > @@ -5650,12 +5652,14 @@ void intel_display_power_put(struct drm_device *dev,
> > if (is_always_on_power_domain(dev, domain))
> > return;
> >
> > - mutex_lock(&power_well->lock);
> > - __intel_power_well_put(power_well);
> > - mutex_unlock(&power_well->lock);
> > + power_domains = &dev_priv->power_domains;
> > +
> > + mutex_lock(&power_domains->lock);
> > + __intel_power_well_put(&power_domains->power_wells[0]);
> > + mutex_unlock(&power_domains->lock);
> > }
> >
> > -static struct i915_power_well *hsw_pwr;
> > +static struct i915_power_domains *hsw_pwr;
> >
> > /* Display audio driver power well request */
> > void i915_request_power_well(void)
> > @@ -5664,7 +5668,7 @@ void i915_request_power_well(void)
> > return;
> >
> > mutex_lock(&hsw_pwr->lock);
> > - __intel_power_well_get(hsw_pwr);
> > + __intel_power_well_get(&hsw_pwr->power_wells[0]);
> > mutex_unlock(&hsw_pwr->lock);
> > }
> > EXPORT_SYMBOL_GPL(i915_request_power_well);
> > @@ -5676,7 +5680,7 @@ void i915_release_power_well(void)
> > return;
> >
> > mutex_lock(&hsw_pwr->lock);
> > - __intel_power_well_put(hsw_pwr);
> > + __intel_power_well_put(&hsw_pwr->power_wells[0]);
> > mutex_unlock(&hsw_pwr->lock);
> > }
> > EXPORT_SYMBOL_GPL(i915_release_power_well);
> > @@ -5684,12 +5688,15 @@ EXPORT_SYMBOL_GPL(i915_release_power_well);
> > int i915_init_power_well(struct drm_device *dev)
> > {
> > struct drm_i915_private *dev_priv = dev->dev_private;
> > + struct i915_power_domains *power_domains = &dev_priv->power_domains;
> > + struct i915_power_well *power_well;
> >
> > - hsw_pwr = &dev_priv->power_well;
> > + mutex_init(&power_domains->lock);
> > + hsw_pwr = power_domains;
> >
> > - hsw_pwr->device = dev;
> > - mutex_init(&hsw_pwr->lock);
> > - hsw_pwr->count = 0;
> > + power_well = &power_domains->power_wells[0];
> > + power_well->device = dev;
> > + power_well->count = 0;
>
> How about this?
>
> for (i = 0; i < I915_MAX_POWER_WELLS; i++) {
> power_well = &power_domains->power_wells[0];
> power_well->device = dev; /* If you don't move this to the
> power_domains struct. */
> power_well->count = 0;
> }
>
> Then we'll always be safe :)
IMO this is safe now, since we only have a single power well. But the
preliminary patch above has something similar to what you suggest, so if
that's OK I'd address it there.
> With or without my 3 bikesheds (although I would really like to see them):
> Reviewed-by: Paulo Zanoni <paulo.r.zanoni at intel.com>
>
> By the way, do you also plan to change some functions so they take
> struct i915_power_well (or an index representing the power well
> number) as a parameter? E.g., intel_set_power_well.
See the above patch. But basically yes, I would find the power well(s)
based on the requested power domain and call
__intel_power_well_{get,put} for each, and these would in turn call a
platform-dependent .set hook if needed.
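To make the direction concrete, here is a hypothetical standalone sketch (everything in it — the domain mask, the mock hook, the enum values — is illustrative, not the actual i915 code): each power well carries a mask of the domains it supports and a platform .set hook, and get/put walk all wells matching the requested domain, toggling the hardware only on the 0->1 / 1->0 refcount transitions:

```c
/* Sketch only: domain-based power well lookup with a platform hook. */
#include <assert.h>
#include <stdbool.h>

enum intel_display_power_domain {
	POWER_DOMAIN_PIPE_A,
	POWER_DOMAIN_AUDIO,
};

struct i915_power_well {
	unsigned long domains;                       /* bitmask of supported domains */
	int count;                                   /* usage count */
	void (*set)(struct i915_power_well *, bool); /* platform-dependent hook */
	bool enabled;                                /* mock hardware state */
};

static void mock_set(struct i915_power_well *well, bool enable)
{
	well->enabled = enable;                      /* real code: MMIO writes */
}

static void intel_display_power_get(struct i915_power_well *wells, int n,
				    enum intel_display_power_domain domain)
{
	for (int i = 0; i < n; i++) {
		struct i915_power_well *well = &wells[i];

		if (!(well->domains & (1UL << domain)))
			continue;
		if (well->count++ == 0 && well->set)
			well->set(well, true);       /* first user powers it up */
	}
}

static void intel_display_power_put(struct i915_power_well *wells, int n,
				    enum intel_display_power_domain domain)
{
	for (int i = 0; i < n; i++) {
		struct i915_power_well *well = &wells[i];

		if (!(well->domains & (1UL << domain)))
			continue;
		assert(well->count > 0);
		if (--well->count == 0 && well->set)
			well->set(well, false);      /* last user powers it down */
	}
}
```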
Thanks for the review,
Imre