RandR 1.3 additions?
alexdeucher at gmail.com
Tue Jul 17 08:15:57 PDT 2007
On 7/17/07, Jesse Barnes <jbarnes at virtuousgeek.org> wrote:
> > > I think we could save a lot of power by downclocking the GPU and
> > > RAM whenever there is not much activity on screen, which is why I
> > > thought about using the Damage extension to provide information on
> > > how much the screen changes. If there is only the cursor or slowly
> > > refreshing content, you don't want the GPU running at 400 MHz;
> > > 200 MHz should be more than enough. The thing is, I would like to
> > > avoid using a daemon that queries the Damage extension in order to
> > > set the GPU clock. I guess it would require adding some new
> > > extension that reports to a daemon whenever there is a major change
> > > in graphics activity; we might also want to avoid downclocking and
> > > upclocking the GPU every second.
> > >
> > > Anyway, I guess the first step would be providing the
> > > infrastructure to change the GPU clock and VRAM clock (or any
> > > other power-saving features the card offers). Then we could think
> > > about adding more cleverness, either by adding new things to
> > > Damage or by adding a new extension that can take advantage of
> > > Damage information. By the way, I don't know much about the Damage
> > > extension, so maybe I am wrong in thinking that it could provide
> > > useful information about how much graphics activity we have.
> > This is starting to move into the chip's control domain. Many newer
> > GPUs already scale their clocks, voltage, etc., or toggle special
> > power modes automatically, without user-space intervention.
> Right, the chips will do much of this automatically, and in many other
> cases the driver should be automatically taking advantage of power
> saving features if possible (disabling parts of the chip, going into
> appropriate PCI Dn states, etc.). But I see Jerome's point: some
> features just have a performance slider, e.g. clock speed. Allowing
> clients to tweak that value might be a good idea since inferring
> performance requirements from the client load isn't really possible
> (i.e. the user may *want* slow performance to save power, even though
> the GPU is working flat out).
> It would help if we could come up with a list of such tunables and
> figure out which ones should be handled automatically vs. exported to
> clients.
I definitely agree. I think it will depend largely on the hardware.
I think generic attributes will be the most flexible. We could
standardize a few of the common ones as is done for Xv attributes like
brightness or hue.