DMA scheduling
Felix Kühling
fxkuehl at gmx.de
Thu Mar 16 11:03:51 PST 2006
On Thursday, 16 March 2006 at 15:52 +0000, Keith Whitwell wrote:
>
> I've been thinking a little bit about DMA scheduling for the graphics
> hardware.
>
> Currently we have a situation where any 3d app which happens to be
> able to grab the DRI lock can enqueue as many commands on the hardware
> dma queue as it sees fit. The Linux scheduler is the only arbiter
> between competing 3D clients, and it has no information regarding the
> GPU usage of these clients.
>
> Even if it did, there are benefits to be reaped from keeping the 3d
> DMA streams separate and explicitly scheduling the dma rather than
> allowing clients to inject it in arbitrary quantities and orders.
>
> Why do we want a GPU scheduler?
>
> 1) Fairness. We can currently have situations where one 3d
> application manages to dominate the GPU while a second app in
> another window is locked out entirely.
>
> 2) Interactivity. It is quite possible to have one application which
> does so little rendering per frame that it can run at 3000fps while
> another, e.g. a video-based application, does a lot more and can just
> about keep up a 30fps framerate. Consider a situation where both
> applications are running at once. Simple fairness criteria would
> have them running at 1500fps and 15fps respectively - but it seems
> that fairness isn't what is required here. It would be preferable to
> give the slower application a greater percentage of the GPU, so
> that it manages e.g. 27fps, while the other is scaled down to "only"
> 300fps or so.
>
> Note that we currently don't even have the "fair" situation...
>
> 3) Resource management. Imagine two applications each of which has a
> texture working set of 90% of the available video ram. Even a
> smart replacement algorithm will end up thrashing if the
> applications are able to rapidly grab the DRI lock from each other
> and enqueue buffer loads and rendering. A scheduler could
> recognize the thrashing and improve matters by giving each app a
> longer timeslice on the GPU to minimize texture reloads.
4) Latency. There are currently frame throttling hacks in place to limit
the latency, or IOW how far the CPU can be ahead of the GPU. If the
scheduler were to get involved in this activity it would not only
schedule queued commands to the hardware but also throttle (block)
clients whose command stream processing is too far behind.
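A crude sketch of that throttling idea, in Python for illustration only (all names here are hypothetical, not from any actual DRM code): clients block in submit() once the hardware queue holds more than a fixed number of unretired commands, which bounds how far the CPU can run ahead of the GPU.

```python
import threading

class DmaThrottle:
    """Hypothetical sketch: bound the number of commands queued on the
    hardware, blocking clients that try to get too far ahead."""

    def __init__(self, max_outstanding):
        self.max_outstanding = max_outstanding
        self.outstanding = 0          # commands queued but not yet retired
        self.cond = threading.Condition()

    def submit(self, n_commands):
        with self.cond:
            # Block the client while the GPU is too far behind.
            while self.outstanding + n_commands > self.max_outstanding:
                self.cond.wait()
            self.outstanding += n_commands

    def retire(self, n_commands):
        # Called when the hardware signals completion (e.g. from an IRQ
        # handler); wakes any clients blocked in submit().
        with self.cond:
            self.outstanding -= n_commands
            self.cond.notify_all()
```

The same bounded queue is what makes the vrefresh case below workable: with a known, small backlog, a swap command can be injected close to the refresh without draining the whole stream first.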
Latency is also important for buffer swapping synchronized with the
vrefresh. The only way to do this reliably right now is to lock the
hardware, drain the DMA stream, wait for the refresh, swap, and unlock.
A low-latency scheduler could make sure that the hardware never queues
too many commands, so that it would be able to inject buffer swapping
commands synchronously with the vertical refresh without ever stalling
the hardware or other clients.
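The arithmetic behind point 2 above can be checked with a small sketch (the function name and the share-assignment are my own, purely illustrative): each app's resulting frame rate is its standalone rate scaled by its fraction of GPU time.

```python
def fps_under_shares(solo_fps, shares):
    """Given each app's standalone frame rate (what it achieves with
    the GPU to itself) and its relative share of GPU time, return the
    frame rates when all apps run concurrently.  Assumes frame cost is
    pure GPU time, i.e. rate scales linearly with GPU share."""
    total = sum(shares)
    return [fps * (s / total) for fps, s in zip(solo_fps, shares)]

# The two apps from point 2: 3000fps solo vs. 30fps solo.
fair = fps_under_shares([3000, 30], [1, 1])      # equal shares
weighted = fps_under_shares([3000, 30], [1, 9])  # favour the heavy app
```

With equal shares this reproduces the 1500fps/15fps split from the mail; weighting the slower app 9:1 yields 300fps/27fps, matching the "only 300fps or so" outcome a smarter scheduler would aim for.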
Regards,
Felix
>
[snip]
--
| Felix Kühling <fxkuehl at gmx.de> http://fxk.de.vu |
| PGP Fingerprint: 6A3C 9566 5B30 DDED 73C3 B152 151C 5CC1 D888 E595 |
More information about the xorg mailing list