[Spice-devel] Performance of Xspice - some results, and a potential patch

Jeremy White jwhite at codeweavers.com
Tue Aug 7 06:08:40 PDT 2012


> I don't know what network conditions you tested under, but it would
> be great if you could repeat your test with lower bandwidth (you can
> use tc), and also try disabling off-screen surfaces in the driver.

I do have a test network constructed for just that purpose, so I
can manage and measure all aspects of the network.

It was with the latency set to 80 ms that I probed the issue with the
ack window size.
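
For anyone who wants to reproduce that 80 ms setup, netem is the
relevant tc qdisc; something like this (eth0 and the exact numbers
are just examples, adjust for your own setup) should do it:

    # add 80 ms of delay to everything leaving this interface
    tc qdisc add dev eth0 root netem delay 80ms

    # remove it again when finished
    tc qdisc del dev eth0 root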

For the tests I ran, though, I was on an uncrippled network, using
all default settings for Xspice.  I do not know what form of image
compression was in use, only that it was whatever default gets
chosen (I believe the default is auto lz, and my sense is that I was
mostly seeing lz and quic images going across the wire, but don't
quote me on that).

As far as I recall, disabling surfaces prevented Xspice from
working properly, so I did not test that mode.  I did not really
understand how to analyze the effectiveness of surface caching,
so while I saw the ability to gather those statistics, I didn't
do that exercise.

I haven't tried throttling bandwidth; it's interesting to know that the code
may respond to that condition.
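
Should I get to it, tc can throttle bandwidth as well; a simple token
bucket filter is one way to sketch it (again, eth0 and the numbers
are only examples):

    # cap the interface at roughly 1 Mbit/s
    tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms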

> 
> On a related note, another improvement we planned is to change the
> limit on the size of the queue of messages directed to the client:
> currently it is limited to a constant number of messages.  When we
> reach the limit, we stop reading from the driver command ring.  We
> plan to change it to a limit based on an estimate of the total size
> pending to be sent to the client.  Maybe we should also consider
> limiting it by the time span from head to tail; that would give us
> more control over the refresh rate, and might let us drop more
> commands than we do today.
> 
> That said, if the scenario is composed of commands that don't hide
> one another, none of the above would help.

Right; the worst-case scenario I hit was a set of 27,000 draw operations,
each one next to the other, none hiding another.

What I found as I probed typical use cases for most Linux apps was
that solid() calls dominated: a typical run contained a few thousand
'other' calls and 100,000 or more calls to solid().

(Of course, a case I care strongly about, MS Office in Wine, turns
out to devolve into a lot of surfaces and copies, so that pattern
does not strictly hold there :-/).

> How are you verifying that the user experience remains the same?
> That there is no tearing, distortion, missing elements, etc.?
> 

That's a good question; I don't have a great answer.  It Works For Me (TM)
is pretty lame <grin>.  Seriously, we plan to do some QA around this whole
stack, so I expect to uncover a few further issues.

The test code I posted does include an option to record the user session
to a movie file; I plan to use a video diff tool to compare the resulting
movie to a capture of a session running purely on an Xvfb server.  That
will give me a number indicating how far each run is from 'correct',
which I hope to use to quantify the 'quality' of various solutions.

Note that this sounds nice on paper; I have yet to actually do this, so it
may not work all that well in practice. The session capture software seems
to work only so-so, and the diff software costs money that I haven't spent yet.
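
One cheap approach I may try first is to explode both recordings into
frames and take a per-frame difference with stock tools; a rough
sketch, assuming ffmpeg and ImageMagick are available and that both
sessions were captured at the same resolution and frame rate (the
file names here are just placeholders):

    mkdir -p frames-xspice frames-xvfb
    ffmpeg -i xspice-session.mkv frames-xspice/%05d.png
    ffmpeg -i xvfb-session.mkv   frames-xvfb/%05d.png

    # per-frame root-mean-square error; 0 means the frames are identical
    for f in frames-xspice/*.png; do
        compare -metric RMSE "$f" "frames-xvfb/$(basename "$f")" null: 2>&1
        echo "  $(basename "$f")"
    done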

But I would very much appreciate other suggestions on how best to test
and verify that the solution is working well.

Cheers,

Jeremy

