The ATI petition

Marek Wawrzyczny marekw1977 at yahoo.com.au
Mon Sep 6 17:44:16 PDT 2004


On Tue, 7 Sep 2004 10:31, Vladimir Dergachev wrote:
> On Mon, 6 Sep 2004, Mathieu Lacage wrote:
> > On Mon, 2004-09-06 at 02:23, Vladimir Dergachev wrote:
> >>>>> to the same power-performance tradeoff for your graphics card and
> >>>>> saving power.
> >>>>
> >>>> Good point. I would imagine that since most of GPU is inactive under
> >>>> regular X drivers I am running at minimum power ;)
> >>>
> >>> Unlikely.  I suspect that, like with the CPU, the GPU speed must be
> >>>  manually slowed.
> >>
> >> Unlike a regular CPU, a GPU consists of many blocks. I would expect that
> >> if the 3d engine is not used at all it consumes little power - kinda
> >> like running HLT continuously on a spare CPU.
> >
> > Actually, that is pretty unlikely. A lot of the power consumption of
> > these devices (even when they are running at full speed) comes from the
> > clock tree, which distributes the clock throughout the chip. Unless the
> > designers have added special multiplexers to completely stop the clock
> > tree of the unused blocks, these will consume a lot of power even when
> > idle.
>
> Possibly they did. There are registers in the documentation that deal with
> power management and they have bits like this:
>
>     RE_CLK:     0   - Dynamic
>                 1   - Force on
>
> To me this would mean that the RE (Render Engine) clock is not active if
> there is nothing for the render engine to do.
>
> Keep in mind that this is mere speculation. Also, some ATI sample code
> writes 1 to such registers with a comment that "some versions of ASIC do
> not properly implement dynamic clocks". The Linux drivers do not have this
> code, though, so this might have been fixed in production versions.

I do remember discussions in my engineering classes about power-saving 
techniques. Switching the clock on/off to unused subsystems is definitely a 
common technique, though of course I don't know whether it is used in GPUs.
I wouldn't be surprised if modern notebook/laptop versions of GPUs have this 
sort of functionality built in, and I suspect the documentation is referring 
to exactly that... desktop GPUs probably sacrifice power saving for 
performance, while mobile GPUs do the opposite.
Again... pure speculation... I've never actually practiced engineering; I went 
straight into programming instead.
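
That said, just to make the idea concrete, here is roughly what I imagine the 
driver-side handling of a bit like RE_CLK would look like. The register offset, 
bit position and revision check below are entirely made up for illustration, 
and the MMIO aperture is faked with an array so the sketch compiles standalone:

#include <stdint.h>
#include <stdio.h>

#define CLK_PWRMGT_CNTL   0x0014u            /* hypothetical register offset        */
#define RE_CLK_FORCE_ON   (1u << 3)          /* hypothetical bit: 1 = force clock on */

static uint32_t fake_mmio[0x100];            /* stand-in for the real MMIO aperture */

static uint32_t reg_read(uint32_t off)              { return fake_mmio[off / 4]; }
static void     reg_write(uint32_t off, uint32_t v) { fake_mmio[off / 4] = v; }

/*
 * Select the render engine clock mode: dynamic (hardware gates the clock
 * whenever the RE has nothing to do) or forced on, which is what the ATI
 * sample code apparently does for ASIC revisions with broken dynamic clocks.
 */
static void set_re_clock_mode(int broken_dynamic_clocks)
{
    uint32_t v = reg_read(CLK_PWRMGT_CNTL);

    if (broken_dynamic_clocks)
        v |= RE_CLK_FORCE_ON;                /* 1 - Force on */
    else
        v &= ~RE_CLK_FORCE_ON;               /* 0 - Dynamic  */

    reg_write(CLK_PWRMGT_CNTL, v);
}

int main(void)
{
    set_re_clock_mode(0);                    /* prefer dynamic clocking */
    printf("CLK_PWRMGT_CNTL = 0x%08x\n", (unsigned)reg_read(CLK_PWRMGT_CNTL));
    return 0;
}

In other words, dynamic mode would let the hardware gate the render engine 
clock by itself, while forcing the clock on is the safe fallback for the ASIC 
revisions that the sample code works around.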

Marek Wawrzyczny


