[PATCH 3/3] exa/mixed: Exclude frontbuffer from deferred pixmap handling.

Maarten Maathuis madman2003 at gmail.com
Tue Jan 18 06:25:20 PST 2011


2011/1/18 Maarten Maathuis <madman2003 at gmail.com>:
> 2011/1/18 Michel Dänzer <michel at daenzer.net>:
>> On Mon, 2011-01-17 at 21:45 +0100, Maarten Maathuis wrote:
>>> 2010/12/20 Michel Dänzer <michel at daenzer.net>:
>>> > On Mon, 2010-12-20 at 15:54 +0100, Maarten Maathuis wrote:
>>> >> 2010/12/20 Michel Dänzer <michel at daenzer.net>:
>>> >> > On Mon, 2010-12-20 at 15:46 +0100, Maarten Maathuis wrote:
>>> >> >> 2010/12/14 Michel Dänzer <michel at daenzer.net>:
>>> >> >> > On Mon, 2010-12-13 at 19:42 +0100, Maarten Maathuis wrote:
>>> >> >> >> - Apps like xterm can trigger a lot of fallback rendering.
>>> >> >> >> - This can lead to (annoyingly) high latencies, because you
>>> >> >> >>   have to wait for the block handler.
>>> >> >> >> - You need a driver that doesn't directly access the front
>>> >> >> >>   buffer to trigger this (NV50+ nouveau for example).
>>> >> >> >> - Repeatedly running dmesg in an xterm with a bitmap font
>>> >> >> >>   will reveal that you never see part of the text.
>>> >> >> >> - I have received at least one complaint in the past about
>>> >> >> >>   slow terminal performance, which was related to core font
>>> >> >> >>   rendering.
>>> >> >> >> - This does sacrifice some throughput, not sure how much,
>>> >> >> >
>>> >> >> > Shouldn't be hard to measure.
>>> >> >>
>>> >> >> I did a little test (catting a saved copy of dmesg) and the throughput
>>> >> >> loss is about 25%.
>>> >> >
>>> >> > What are the absolute numbers?
>>> >>
>>> >> Roughly 250 ms vs 330 ms (error margin is about 20-30 ms if I had to guess).
>>> >
>>> > That seems rather inaccurate, can you try something at least an order of
>>> > magnitude bigger?
>>>
>>> Forgot about it for a while, but it remains about 33% slower for 10
>>> times the text. Times are typically 2.7 - 2.8 s vs 3.6 - 3.7 s.
>>
>> Okay, thanks. I guess this might be acceptable for a substantial
>> decrease in latency, but I can't help wondering if we couldn't get that
>> with less, if any, sacrifice in throughput. Have you tried or thought
>> about anything else? Some random ideas offhand:
>>
>>      * Limit the amount of deferred dirtiness, be it wall clock based
>>        or even just a simple counter of deferred ops.
>>      * Flushing the deferred dirtiness in other places in addition to
>>        (or instead of) the BlockHandler, e.g. a flush callback.
>>
>
> I kind of went for the "best" solution in terms of latency
> (considering it doesn't happen all that often for most people). The
> second best would probably be a frontbuffer area counter, requiring a
> certain damaged area (in square pixels) before flushing. The downside
> is that latency while typing in a console will still be there. Some
> kind of maximum latency timer might work as well, although I don't
> know if the xserver has those. But that won't reduce latency to a
> minimum in low throughput situations.
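
To sketch what such an area counter could look like (the helper, the
fallback_area field and the threshold are made up for illustration, not
existing EXA symbols; it would need exa_priv.h for ScreenPtr, BoxPtr and
the ExaScreenPriv macro):

#define EXA_FRONT_FLUSH_AREA_THRESHOLD (64 * 64) /* square pixels, made up */

static void
exaAccountFrontbufferDamage(ScreenPtr pScreen, BoxPtr pBox)
{
    ExaScreenPriv(pScreen); /* declares pExaScr */

    /* Accumulate the area touched by fallback rendering to the
     * frontbuffer and flush once it crosses the threshold, instead of
     * always waiting for the BlockHandler. */
    pExaScr->fallback_area += (pBox->x2 - pBox->x1) * (pBox->y2 - pBox->y1);

    if (pExaScr->fallback_area >= EXA_FRONT_FLUSH_AREA_THRESHOLD) {
        exaFlushDeferredFrontbuffer(pScreen); /* hypothetical helper */
        pExaScr->fallback_area = 0;
    }
}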

Another idea would be to use a rate limit. That keeps latency low in
the low throughput case, but still gives decent behaviour at higher
throughput. I just need to think of the right timer to use and what
maximum rate to pick; a rough sketch of what I have in mind follows.
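
Roughly something like this, using GetTimeInMillis() from the server's
os.h as the millisecond clock; exaFlushDeferredFrontbuffer() and the
last_front_flush field are again made up for illustration:

#define EXA_FRONT_FLUSH_INTERVAL_MS 5 /* at most ~200 flushes/second, made up */

static void
exaMaybeFlushFrontbuffer(ScreenPtr pScreen)
{
    ExaScreenPriv(pScreen); /* declares pExaScr */
    CARD32 now = GetTimeInMillis();

    /* Low throughput: the last flush is long past, so flush right away
     * for minimal latency.  High throughput: at most one flush per
     * interval, everything else keeps waiting for the BlockHandler. */
    if (now - pExaScr->last_front_flush >= EXA_FRONT_FLUSH_INTERVAL_MS) {
        exaFlushDeferredFrontbuffer(pScreen); /* hypothetical helper */
        pExaScr->last_front_flush = now;
    }
}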




-- 
Far away from the primal instinct, the song seems to fade away, the
river get wider between your thoughts and the things we do and say.

