[PATCH] Ensure blitter quiescence before reading pixels from the framebuffer

Daniel Stone daniel at fooishbar.org
Tue Jul 31 06:58:32 PDT 2007

On Tue, Jul 31, 2007 at 03:07:48PM +0200, Bernardo Innocenti wrote:
> Michel Dänzer wrote:
> > Probably, but does it incur a measurable penalty? The CPU is supposed to
> > be ahead of the GPU anyway.
> On the OLPC, it may not be the case: we have a very weak CPU along with a
> somewhat better blitter.  It's probably the same with most embedded devices.

Not really (and not that the OLPC is at all embedded: it's a laptop in
both physical and power profile).  On the N800, at least, the CPU is
vastly ahead of what the in-built GPU can do, and we don't see that
changing at all further down the track, until we start using the PowerVR
3D core.

(Due to an architectural limitation, this turns out not to matter
 anyway, as we can only push so many pixels to the screen per second,
 but, details.)

> > I guess it's just not feasible to accurately estimate performance from
> > code inspection. It needs to be measured.
> I wanted to do it at some point, but running oprofile on slow hardware is
> quite painful.  And, still, you need to do some guessing when you interpret
> the results.

How is it painful?  It works just fine for me.  Of course, running
oprofile's UI on the device itself is insanity, but running oprofile
itself on the device and interpreting the results somewhere else works
just fine.
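For reference, that split workflow might look roughly like the following.
This is only a sketch: the hostnames, paths, and the use of the legacy
opcontrol interface (current as of 2007) are assumptions, and the commands
need root on the device.

```shell
# --- on the device: collect samples only ---
opcontrol --init
opcontrol --no-vmlinux        # skip kernel symbols if no vmlinux is at hand
opcontrol --start
# ... exercise the workload under test ...
opcontrol --dump
opcontrol --shutdown

# --- copy the raw sample data to a faster host (path is an assumption) ---
scp -r root@device:/var/lib/oprofile ./oprofile-session

# --- on the host: interpret the results, pointing opreport at the
# --- device's binaries so symbols resolve ---
opreport --session-dir=./oprofile-session \
         --image-path=/path/to/device/rootfs/usr/bin \
         --symbols
```

The point is that only the cheap sampling step runs on the slow hardware;
all the symbol resolution and report generation happens elsewhere.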

> For instance, I expect to see a lot of time spent in the driver, but mostly
> because EXA is asking it to do spurious uploads of small bitmaps.

Reality frequently fails to match expectations.
