[Spice-devel] rfc seamless migration

Yonit Halperin yhalperi at redhat.com
Mon Jun 11 01:07:52 PDT 2012


Hi,
On 06/11/2012 10:25 AM, Gerd Hoffmann wrote:
>    Hi,
>
>> I'm still not a big fan of the concept of server data going through the
>> client; this means the server will need to seriously sanity-check what
>> it receives to avoid potentially new attacks on it.
>>
>> I'm wondering why not do the following:
>>
>> 1) The spicevmc device gets a savevm call and tells spice-server to
>> send a message to the client telling it to stop sending more data to
>> *this* server.
>> 2) The client sends an ack in response to the stop-sending-data message.
>> 3) The server waits for the ack.
>> 4) savevm continues only after the ack, which means all data that was
>> in flight has been received.
>
> That isn't going to fly.  The extra round trip adds to the guest's
> downtime; we will not get that upstream.
>
> I think we don't need that though.  The guest -> client path isn't
> critical; we can just send any pending buffers before sending the
> MIGRATE_MSG.
>
> The client -> guest messages can be handled like this:
>
>    * The client keeps a copy of the last few messages.
>    * On migration the server informs the client which message was
>      the last one committed to the guest.
>    * After migration the client can simply resend messages if needed.
>
> We could also extend the spicevmc protocol and have the spice server
> explicitly ack each message after it was passed successfully to the
> guest.  Then the client can just free the messages once acked instead of
> using heuristics for the number of messages to keep.  We also don't have
> any special migration state then; after migration the client simply
> replays all the (un-acked) messages it has.
>
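To make the cost concrete, per-message acks would mean the client keeps
bookkeeping roughly like this (all names below are made up; nothing here
is existing spice code):

/*
 * Hypothetical sketch of the per-message ack scheme described above.
 */
#include <stdint.h>
#include <stdlib.h>

typedef struct PendingMsg PendingMsg;
struct PendingMsg {
    uint32_t    serial;   /* monotonically increasing per channel */
    uint8_t    *data;
    size_t      len;
    PendingMsg *next;
};

typedef struct {
    PendingMsg *head;     /* sent but not yet acked, oldest first */
    PendingMsg *tail;
} SendQueue;

/* Server acked everything up to acked_serial: free those copies. */
static void send_queue_ack(SendQueue *q, uint32_t acked_serial)
{
    while (q->head && q->head->serial <= acked_serial) {
        PendingMsg *m = q->head;
        q->head = m->next;
        free(m->data);
        free(m);
    }
    if (!q->head) {
        q->tail = NULL;
    }
}

/* After migration: replay everything that was never acked. */
static void send_queue_replay(SendQueue *q,
                              void (*send)(const uint8_t *, size_t))
{
    PendingMsg *m;
    for (m = q->head; m; m = m->next) {
        send(m->data, m->len);
    }
}
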
I think this is too much overhead for handling a scenario that happens 
only once in a while.

> This also removes the somewhat silly buffer passing: data sent from the
> client to the src spice server (but not yet passed to the guest) is
> passed from the src spice server back to the client so it can forward
> it to the dst spice server.
The amount of data that can reach the server without being consumed by 
the guest should be limited.

First, in any case we should extend the spice char device implementation 
to (a rough sketch follows the list):
(1) Handle stop/start of the devices. Mainly, don't write to the guest 
while the vm is stopped; keep a write buffer instead.
(2) Limit the transfer rate with tokens on both sides, plus a bounded 
message size. Note that this way, when the guest is stopped, the client 
will stop sending data to the server. In addition, we won't read data 
from the guest if the client can't consume it.
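
A minimal sketch of what I mean by (2), with hypothetical names and
constants (nothing here is existing spice-server API):

/*
 * Hypothetical token scheme for a spice char device.
 */
#include <stdbool.h>
#include <stdint.h>

#define VMC_MAX_MSG_SIZE 2048u  /* bounded message size */

typedef struct {
    bool     vm_running;
    uint32_t deferred_tokens;   /* tokens to grant once the vm resumes */
} CharDeviceState;

/* A message is accepted only if it fits the bounded size; the client
 * enforces the token window on its side and stops sending when it
 * runs out of tokens. */
static bool char_device_can_accept(uint32_t msg_size)
{
    return msg_size <= VMC_MAX_MSG_SIZE;
}

/* Return a token to the client once the guest consumed a message.
 * While the vm is stopped no tokens flow back, so the client's window
 * drains and it stops sending - this bounds the data that can pile up
 * on the server. */
static void char_device_on_guest_read(CharDeviceState *s,
                                      void (*grant_tokens)(uint32_t))
{
    if (s->vm_running) {
        grant_tokens(1);
    } else {
        s->deferred_tokens++;
    }
}

static void char_device_on_vm_start(CharDeviceState *s,
                                    void (*grant_tokens)(uint32_t))
{
    s->vm_running = true;
    if (s->deferred_tokens > 0) {
        grant_tokens(s->deferred_tokens);
        s->deferred_tokens = 0;
    }
}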

Second, the client holds a message queue. Upon migration, it can simply 
hold the queue back and push only the FLUSH_MARK message - then we wait 
only for the last message that was actually sent, and not for all the 
pending messages.
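
Roughly, again with made-up names (FLUSH_MARK and these helpers are
illustrative, not an existing protocol message):

/*
 * Hypothetical flush-on-migration logic on the client side.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t last_sent_serial;  /* highest serial put on the wire */
    bool     migrating;         /* while true, the queue is held back */
    /* ... plus the queue of messages not yet sent ... */
} ClientMsgQueue;

/* On migration start: stop draining the queue and ask the server to
 * confirm only what is already in flight. */
static void queue_begin_migration(ClientMsgQueue *q,
                                  void (*send_flush_mark)(uint32_t))
{
    q->migrating = true;
    send_flush_mark(q->last_sent_serial);
}

/* The server replies once the guest consumed everything up to the
 * marked serial; only then does the client switch to the destination
 * and resume sending the (never-sent) queued messages there. */
static void queue_on_flush_done(ClientMsgQueue *q)
{
    q->migrating = false;
    /* connect to the destination and resume draining the queue */
}

The point is that the wait is bounded by the data already in flight, not
by the length of the queue.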


>
>>> - list of off-screen surface ids that have been sent to the client,
>>> and their lossy regions.
>>> By keeping this data we will avoid resending, on demand, surfaces that
>>> already exist on the client side.
>>
>> The client already knows which off-screen surface ids it has received,
>> so it can just send these to the new server without having to receive
>> them from the old one first.
>
> Agree.  Any state which the client knows anyway doesn't need to be part
> of the MIGRATE_DATA.
>
I don't think the client needs to know implementation details of the 
server. In order to reload the complete server state, it would need to 
keep track of data it has no use for itself (like lossy regions, the 
LRU order of the cache, etc.).

Besides, I'm not sure how effective restoring the cache and surface 
list will be, taking into account the delay it adds and the fact that 
most surfaces are short-lived.

Regards,
Yonit
> cheers,
>    Gerd


