[Intel-gfx] [RFC] drm/i915: downclock support
Fu Michael
michael_fu at linux.intel.com
Thu Aug 27 09:16:53 CEST 2009
Matthew Garrett wrote:
> Here's my current version - tested working on 945 and 965.
>
> commit 914cbeb9dee6a75140a79b617206a6f7b253593e
> Author: Matthew Garrett <mjg59 at vaio.localdomain>
> Date: Sun Jul 26 19:40:21 2009 +0100
>
> drm: Add dynamic power management to Intel
>
> There are several sources of unnecessary power consumption on Intel
> graphics systems. The first is the LVDS clock. TFTs don't suffer from
> persistence issues like CRTs, and so we can reduce the LVDS refresh
> rate when the screen is idle. It will be automatically upclocked when
> userspace triggers graphical activity. Beyond that, we can enable
> memory self refresh. This allows the memory to go into a lower power state
> when the graphics are idle. Finally, we can drop some clocks on the gpu
> itself. All of these things can be reenabled between frames, and so there
> should be no user visible graphical changes.
>
> Signed-off-by: Jesse Barnes <jesse.barnes at intel.com>
> Signed-off-by: Matthew Garrett <mjg at redhat.com>
>
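For anyone reading along, here is a rough userspace-style sketch of the idle-timer policy the commit message describes: mark the GPU busy on activity, and if nothing has happened for a while, drop to the reduced LVDS clock (on real hardware this is also where self-refresh would be enabled). The identifiers (gfx_state, mark_busy, idle_tick, IDLE_TIMEOUT_SEC) are invented for illustration and are not the driver's actual API; the real patch presumably drives this from a timer or delayed work inside i915 rather than a polling loop.

/*
 * Illustrative sketch only -- not the i915 implementation.  It models
 * the policy from the commit message: after a period with no rendering
 * activity, downclock the LVDS panel (and, on real hardware, enable
 * memory self-refresh and gate GPU clocks); any new activity upclocks
 * again before the next frame.  All identifiers here are invented.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define IDLE_TIMEOUT_SEC 2              /* hypothetical idle threshold */

struct gfx_state {
	time_t last_activity;           /* last GPU activity seen */
	bool   downclocked;             /* running at reduced LVDS clock? */
};

/* Called on any rendering/flip activity: restore full clocks at once. */
static void mark_busy(struct gfx_state *s)
{
	s->last_activity = time(NULL);
	if (s->downclocked) {
		printf("upclock: full LVDS refresh rate, self-refresh off\n");
		s->downclocked = false;
	}
}

/* Called periodically (a real driver would use a timer/work item). */
static void idle_tick(struct gfx_state *s)
{
	if (!s->downclocked &&
	    time(NULL) - s->last_activity >= IDLE_TIMEOUT_SEC) {
		printf("idle: reduce LVDS refresh, enable self-refresh\n");
		s->downclocked = true;
	}
}

int main(void)
{
	struct gfx_state s = { .last_activity = time(NULL) };

	mark_busy(&s);          /* activity: stay at full clock */
	sleep(3);               /* no activity for a while... */
	idle_tick(&s);          /* past the timeout: downclock */
	mark_busy(&s);          /* new frame arrives: upclock again */
	return 0;
}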
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index a58bfad..f9984fe 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -190,7 +193,7 @@ struct intel_limit {
> #define G4X_P2_SINGLE_CHANNEL_LVDS_LIMIT 0
>
> /*The parameter is for DUAL_CHANNEL_LVDS on G4x platform*/
> -#define G4X_DOT_DUAL_CHANNEL_LVDS_MIN 80000
> +#define G4X_DOT_DUAL_CHANNEL_LVDS_MIN 20000
>
This looks wrong: 80000 is correct for G4x. Where does the 20000 come from?
> @@ -666,15 +682,16 @@ intel_find_best_PLL(const intel_limit_t *limit, struct drm_crtc *crtc,
>
> memset (best_clock, 0, sizeof (*best_clock));
>
> - for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; clock.m1++) {
> - for (clock.m2 = limit->m2.min; clock.m2 <= limit->m2.max; clock.m2++) {
> - /* m1 is always 0 in IGD */
> - if (clock.m2 >= clock.m1 && !IS_IGD(dev))
> - break;
> - for (clock.n = limit->n.min; clock.n <= limit->n.max;
> - clock.n++) {
> - for (clock.p1 = limit->p1.min;
> - clock.p1 <= limit->p1.max; clock.p1++) {
> + for (clock.p1 = limit->p1.max; clock.p1 >= limit->p1.min; clock.p1--) {
> + for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max;
> + clock.m1++) {
> + for (clock.m2 = limit->m2.min;
> + clock.m2 <= limit->m2.max; clock.m2++) {
> + /* m1 is always 0 in IGD */
> + if (clock.m2 >= clock.m1 && !IS_IGD(dev))
> + break;
> + for (clock.n = limit->n.min;
> + clock.n <= limit->n.max; clock.n++) {
> int this_err;
>
>
I don't see why this change is needed. Are there any bugs with the old code?
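If I read the loops right, the set of (m, n, p) combinations tried is unchanged and only the visiting order differs, so the functional effect should be limited to tie-breaking: best_clock is replaced only on a strictly smaller error, so among equally good candidates the one found first wins, and walking p1 from max to min biases the result toward larger p1. Below is a toy example (made-up numbers, nothing to do with the real PLL formula) showing that iteration order alone can change the answer.

/* Toy demo: same candidate set and same "keep strictly better" rule as
 * intel_find_best_PLL, but the direction of the p1 loop decides which
 * of two equally good candidates is returned.  The 48 / p1 "clock" is
 * a made-up stand-in for the real divider maths. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define TARGET 20

static int search(int p1_start, int p1_end, int step)
{
	int best_p1 = 0, best_err = INT_MAX;

	for (int p1 = p1_start; p1 != p1_end + step; p1 += step) {
		int err = abs(48 / p1 - TARGET);
		if (err < best_err) {   /* strict '<': first of a tie wins */
			best_err = err;
			best_p1 = p1;
		}
	}
	return best_p1;
}

int main(void)
{
	/* p1 = 2 and p1 = 3 both miss TARGET by 4. */
	printf("ascending p1 picks  %d\n", search(1, 3, 1));   /* -> 2 */
	printf("descending p1 picks %d\n", search(3, 1, -1));  /* -> 3 */
	return 0;
}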