[Spice-devel] Unfair comparisons with RDP

John A. Sullivan III jsullivan at opensourcedevel.com
Sat Jul 2 08:10:56 PDT 2011


On Sat, 2011-07-02 at 13:43 +0200, Alon Levy wrote:
> On Sat, Jul 02, 2011 at 07:24:59AM -0400, John A. Sullivan III wrote:
> > On Sat, 2011-07-02 at 12:27 +0200, Alon Levy wrote:
> > > On Sat, Jul 02, 2011 at 05:45:34AM -0400, John A. Sullivan III wrote:
> > > > On Sat, 2011-07-02 at 04:56 +0200, Alon Levy wrote:
> > > > > On Fri, Jul 01, 2011 at 09:40:41PM -0400, John A. Sullivan III wrote:
> > > > > > On Fri, 2011-07-01 at 17:05 +0200, Alon Levy wrote:
> > > > > > > On Fri, Jul 01, 2011 at 03:00:32PM +0200, Gianluca Cecchi wrote:
> > > > > > > > On Fri, Jul 1, 2011 at 1:04 PM, John A. Sullivan III  wrote:
> > > > > > > > > Interesting observation. That is true; we did not create separate VM
> > > > > > > > > definitions for SPICE and TSPlus thus the TSPlus environment is using
> > > > > > > > > the QXL driver.  Would we expect that to have any "supercharging" effect
> > > > > > > > > on RDP?
> > > > > > > > >
> > > > > > > > >
> > > > > > > > 
> > > > > > > > Probably not, because afaik (that is not so much ;-) Remote Desktop
> > > > > > > > (and probably tsplus too) works at the GDI call level, so it should
> > > > > > > > not depend much on the video adapter or video driver...
> > > > > > > > It was simply a question that arose while analysing how to correctly
> > > > > > > > replicate the comparisons...
> > > > > > > > Coming back to the test case and these operations:
> > > > > > > > 
> > > > > > > > rdp
> > > > > > > > 17:             display desktop, i.e., minimize all open applications
> > > > > > > > 42:             Paint existing LibreOffice document, i.e., restore from minimize
> > > > > > > > 
> > > > > > > > spice
> > > > > > > > 61:             display desktop, i.e., minimize all open applications
> > > > > > > > 92:             Paint existing LibreOffice document, i.e., restore from minimize
> > > > > > > > 
> > > > > > > > I think they are GDI calls, so naturally when using rdp they are
> > > > > > > > executed locally on the client desktop (only the gdi directives are sent),
> > > > > > > > while in spice (raster?) they will be network intensive (from a slow
> > > > > > > spice implements a display driver (the qxl driver), which covers a large part of
> > > > > > > the gdi api. Any operation the driver doesn't implement is done via the Windows gdi
> > > > > > > software rendering, and the result is handed to the driver (which is spice) as an image.
> > > > > > > 
> > > > > > > So in cases where the specific operations are all implemented by the driver the
> > > > > > > performance should be identical. In other cases spice will be suboptimal, since
> > > > > > > it will send the image and not the operation. In both cases the rendering should
> > > > > > > be correct.
> > > > > > > 
> > > > > > > > link point of view).
> > > > > > > > So probably an optimized rdp could never be beaten on very slow links?
> > > > > > > > 
> > > > > > > ><snip>
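The fallback Alon describes above can be modeled in a few lines. This is a hypothetical sketch in Python (the real qxl driver is C code running inside the Windows guest; the operation names and the `send`/`software_render` hooks are invented purely for illustration):

```python
# Hypothetical model of the qxl driver's GDI dispatch.  All names here
# are invented for illustration; the real driver is C code inside the
# Windows guest.

IMPLEMENTED_OPS = {"BitBlt", "TextOut", "FillRect"}  # assumed subset

def handle_gdi_call(op, args, software_render, send):
    """Forward supported GDI calls as drawing commands; otherwise let
    GDI software-render and ship the resulting pixels as an image."""
    if op in IMPLEMENTED_OPS:
        send(("command", op, args))        # cheap: the operation itself
    else:
        image = software_render(op, args)  # Windows renders in software
        send(("image", image))             # expensive: a bitmap goes out

# Either way the client renders correctly; only the wire cost differs.
sent = []
handle_gdi_call("TextOut", ("hello",), lambda op, a: b"<pixels>", sent.append)
handle_gdi_call("GradientFill", (), lambda op, a: b"<pixels>", sent.append)
```

This is why performance matches rdp when every operation in a workload hits the first branch, and degrades on slow links whenever the second branch has to push bitmaps.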
> > > > > > Hmm . . . I remember you saying that the Windows product was actually
> > > > > > more developed than the Linux product.  Could it be that you have
> > > > > True
> > > > > 
> > > > > > implemented more of the GDI API than the X API (or whatever one uses for
> > > > > > Linux) and thus my Linux client is more regularly falling back to
> > > > > > sending images rather than directives?
> > > > > Client != Guest. A confusion that arises all the time here :) The client
> > > > > is *using* the graphics api of whatever platform it runs on. The linux client uses
> > > > > pixman mostly. The windows client uses gdi. The gdi canvas (as the graphics
> > > > > backend for the clients is called) has seen more usage / optimization I think,
> > > > > so you are not wrong in your conclusion. There are actually two different clients
> > > > > right now: spicec, and any client based on spice-gtk, such as vinagre or spicy.
> > > > > Could you try one of the latter to see if you get 100% cpu with them as well?
> > > > <snip>
> > > > Sorry - I realize I stated that backwards! However, the 100% CPU problem
> > > > is a different one.  We are noticing that the Windows server viewed via
> > > > the Debian client lags, but CPU utilization is fine on both client
> > > > and server.  The problem with 100% CPU utilization is when we have a
> > > > Fedora 15 server.
> > > by server you mean guest? So this is the driver taking 100% cpu?
> > > 
> > <snip>
> > Yes, I hope I have my terminology right.  Host is the system running
> > KVM, server is the system running on the KVM host, and client is the
> > device I am using to see the server by connecting to the host :) If
> > there is a more official terminology, do please correct me as the right
> > vocabulary seems to be one of the most difficult things to master in
> > SPICE <grin>
> > 
> The terminology is:
>  host - machine running the vm processes
>  spice server - part of the vm process
>  guest - whatever is running in the vm
>  qxl driver - the part of spice running in the guest
>  spice client - the spice viewer, possibly on another machine
> 
> So I was trying to understand whether you meant that you were running top on the host
> and seeing the vm process take 100% cpu, or running top inside the guest and seeing
> X (we are talking about an F15 guest, right?) take 100%. The latter suggests a driver
> problem, while the former suggests a server problem (or just a guest doing a bloody lot of work).
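The two cases Alon distinguishes can be told apart with ordinary process tools. A minimal sketch, assuming a typical procps `ps` (note the vm process name varies by distribution -- it may show up as qemu-kvm or qemu-system-x86_64):

```shell
# Same command, run in two places.  On the host it shows whether the
# vm process is the one burning cpu (a spice-server-side problem);
# inside the F15 guest it shows whether Xorg, which exercises the
# qxl driver, is pegged at 100% (a driver problem).
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 5
```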
Thank you for clarifying that - in fact, we'll make it part of our
internal documentation! Now I understand why what I was saying was so
confusing.  Yes, I was looking at top on the Fedora 15 guest, not the
Fedora 15 host.  CPU utilization on the Windows guest accessed via a
Debian client is fine.  On the Fedora guest accessed via the Debian
client, it is unusably high - John


