[Libdlo] [PATCH] udlfb: high-throughput urb pool
Bernie Thompson
bernie.thompson at gmail.com
Sat Dec 18 14:14:10 PST 2010
Hi Andrew,
Thanks so much for these datapoints. They're great -- there's
definitely a bunch to learn here.
On Saturday, December 18, 2010, akephart at akephart.org
<akephart at akephart.org> wrote:
> [AK] So here's where my test results diverge from the expected -- with
> the blocking model, we lost pixels constantly, regardless of pool size.
You know, I haven't been thinking clearly about the defio case, even
just conceptually.
In the damage case, we're usually blocking in the application's
context - so we get a kind of natural, brute-force signal back to the
app that it's sending more data than we can push across USB. The app
then sends less data (AFAIK mplayer drops frames somewhat intelligently).
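
To make that concrete, here's roughly what the blocking acquire from
the URB pool looks like - a simplified sketch, with the names
(dlfb_data, urbs.limit_sem, lost_pixels, GET_URB_TIMEOUT) written from
memory rather than copied from the source, so treat it as illustrative:

#include <linux/jiffies.h>
#include <linux/semaphore.h>
#include <linux/usb.h>

#define GET_URB_TIMEOUT	(HZ)	/* assumed: on the order of a second */

static struct urb *dlfb_get_urb(struct dlfb_data *dev)
{
	struct urb *urb = NULL;
	int ret;

	/*
	 * In the damage path this runs in the writing app's context, so
	 * the app itself stalls here whenever all pool buffers are still
	 * in flight - that stall is the brute-force signal above.
	 */
	ret = down_timeout(&dev->urbs.limit_sem, GET_URB_TIMEOUT);
	if (ret) {
		/* timed out: flag it and let the caller drop this update */
		atomic_set(&dev->lost_pixels, 1);
		return NULL;
	}

	/* elided: pop an idle urb_node off dev->urbs.list under a spinlock */
	return urb;
}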
In the defio case, because all the processing happens in a deferred
context, the app itself never gets blocked. That may be why so many
incoming rendering requests pile up, and why we end up waiting on the
semaphore for so long (a second or two, effectively an eternity). It
would be interesting to log how many waiters we have on that semaphore
at the point where we hit the timeout.
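
Something along these lines would tell us - note the urb_waiters
counter is purely hypothetical, added only for this sketch, not a field
udlfb has today:

static struct urb *dlfb_get_urb_instrumented(struct dlfb_data *dev)
{
	struct urb *urb = NULL;
	int waiters, ret;

	/* count callers currently parked on the pool semaphore */
	waiters = atomic_inc_return(&dev->urb_waiters);
	ret = down_timeout(&dev->urbs.limit_sem, GET_URB_TIMEOUT);
	atomic_dec(&dev->urb_waiters);

	if (ret) {
		pr_warn("udlfb: urb pool wait timed out with %d waiter(s)\n",
			waiters);
		return NULL;
	}

	/* elided: dequeue and return an idle urb, exactly as before */
	return urb;
}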
So for defio, it's looking like:

1) We lack a signal back to the app that we've run through our whole
pool.
2) So we need to start dropping pixels ourselves.
3) Increasing the defio delay (defio's scheduled time until dirty
pages are handed to the driver for rendering) is the first and most
natural way to do that. The dirty-page method naturally drops pixels
that have been rendered more than once during the period, and doesn't
leave out-of-date pixels on screen indefinitely (see the sketch after
this list).
4) If we can't make #3 work, then we can look at dropping pixels later
in rendering, and then somehow triggering full/selective repaints. But
this would get tricky ...
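
For #3, the knob is the delay field of struct fb_deferred_io, in
jiffies. A minimal sketch - the handler name dlfb_dpy_deferred_io here
is just a stand-in for whatever udlfb registers as its deferred_io
callback, and 200 ms is only an example value to experiment with:

#include <linux/fb.h>
#include <linux/jiffies.h>

/* the real handler lives elsewhere in the driver */
void dlfb_dpy_deferred_io(struct fb_info *info, struct list_head *pagelist);

static struct fb_deferred_io dlfb_defio = {
	/* wait longer before handing dirty pages to the driver, so more
	 * repeated writes to the same pages collapse into one render */
	.delay		= msecs_to_jiffies(200),
	.deferred_io	= dlfb_dpy_deferred_io,
};

/* wired up at probe time, roughly:
 *	info->fbdefio = &dlfb_defio;
 *	fb_deferred_io_init(info);
 */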
But based on your test results (where we're timing out waiting for
buffers), at least on that class of system, the way udlfb currently
works on the defio path isn't acceptable. So we'll need to make some
changes to improve it.
Thanks again,
Bernie