[Spice-devel] [Qemu-devel] seamless migration with spice

Yonit Halperin yhalperi at redhat.com
Sun Mar 18 06:28:42 PDT 2012


Hi,
On 03/15/2012 04:07 PM, Yonit Halperin wrote:
> Hi,
> On 03/15/2012 02:36 PM, Hans de Goede wrote:
>> Hi,
>>
>> On 03/15/2012 01:11 PM, Yonit Halperin wrote:
>>> On 03/13/2012 09:40 AM, Gerd Hoffmann wrote:
>>>> Hi,
>>>>
>>>>> It is not easy when you have 2 components, and it is much less easy
>>>>> when
>>>>> you have 3 or 4 components. So why make it more complicated if you can
>>>>> avoid it. Especially since there is no functional reason for making
>>>>> the
>>>>> qemu/client capabilities/versions dependent on the server internal
>>>>> data.
>>>>
>>>> qemu has ways to handle compatibility in the vmstate format. We can use
>>>> those capabilities. That of course requires exposing the structs to be
>>>> saved to qemu and adds some complexity to the qemu<-> spice interface.
>>>>
>>>> What session state is needed by the target?
>>>> What of this can be negotiated between client and target host without
>>>> bothering the source?
>>>> What needs be transfered from source to target, either directly or via
>>>> client?
>>>>
>>>>>> If this is a hard requirement then using the vmstate channel isn't
>>>>>> going
>>>>>> to work. The vmstate is a one-way channel, no way to negotiate
>>>>>> anything
>>>>>> between source and target.
>>>>>>
>>>>> We can do this via the client.
>>>>
>>>> Then you can send the actual state via client too.
>>>> Out-of-band negotiation for the blob send via vmstate scares me.
>>>>
>>>> Can we please start with a look at which state we actually have to send
>>>> over?
>>> Ok, I can take the display and sound channels.
Display channel
---------------
(A) cache
Cache migration is a bit tricky since the cache is shared between the 
display channels, and each display channel can be in a different state 
with respect to migration. The possible states are: (1) the source 
still sends pending messages, (2) migration transition - messages 
accumulate in the pipe, (3) the destination sends display messages.
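
For illustration, a minimal sketch (with hypothetical names, not 
existing spice-server code) of how these per-channel states could be 
tracked:

/* Hypothetical per-display-channel migration state (illustrative only) */
typedef enum {
    DISPLAY_MIG_SOURCE_ACTIVE, /* (1) source still sends pending messages */
    DISPLAY_MIG_TRANSITION,    /* (2) messages accumulate in the pipe     */
    DISPLAY_MIG_DEST_ACTIVE    /* (3) destination sends display messages  */
} DisplayMigState;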

We can either store and migrate the cache, or choose to reset it.
In the old (now removed) spice seamless migration solution, the cache 
was reset. To implement this approach, I think the first display 
channel that handles migration can freeze the source cache, and send 
SPICE_MSG_DISPLAY_INVAL_ALL_PIXMAPS to the client (together with the 
corresponding "wait list" - i.e., the other display channels' message 
serials we should wait for before resetting the cache); see the sketch 
below.
In the old solution, resetting the client side cache was performed only 
after the channel that froze the cache had completely switched to the 
destination. This required migrating the "wait list" and the last 
message serial. Then, the freezer channel sent 
SPICE_MSG_DISPLAY_INVAL_ALL_PIXMAPS with the MAX(migrated_wait_list, 
current_cache_wait_list_serial).
I'm not sure why the old solution initiated the reset from the 
destination and not from the source. Maybe to cover the case where, for 
some reason, the client stayed connected to the source and the vm was 
started on the source?

Of course, resetting the cache has the obvious consequence of resending 
images and rebuilding the cache.
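
A rough sketch of how the reset flow could look on the source side; the 
helper names and the wait-list struct below are hypothetical, not the 
actual spice-server API:

/* Illustrative only - hypothetical names, not real spice-server code. */
typedef struct WaitForSerial {
    uint8_t  channel_id;      /* which display channel to wait for */
    uint64_t message_serial;  /* last serial sent by that channel  */
} WaitForSerial;

static void display_channel_migrate_reset_cache(DisplayChannel *channel)
{
    WaitForSerial wait_list[MAX_DISPLAY_CHANNELS];
    int wait_count;

    /* Freeze the shared cache so no channel adds or evicts items while
     * the client still processes in-flight messages. */
    shared_cache_freeze(channel->shared_cache);

    /* Collect the last message serial of every other display channel;
     * the client must wait for them before actually resetting the cache. */
    wait_count = collect_display_wait_list(channel, wait_list);

    /* SPICE_MSG_DISPLAY_INVAL_ALL_PIXMAPS carries the wait list, so the
     * client drops its pixmap cache only once it has caught up. */
    send_inval_all_pixmaps(channel, wait_list, wait_count);
}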

If we choose to restore the complete cache on the destination side we 
need to:
(1) freeze the cache
(2) send the cache to the destination. The cache holds the ids of the 
images stored in the client side cache, and their LRU list.
In addition, for each such image we store the serial of the last message 
that accessed it from each display channel.
(3) start the destination cache in freeze mode
(4) unfreeze the cache after it is restored from the migration data.

In any case, the migration data should also hold the cache size (which 
is set by the client upon connection initialization).
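
If we go with restoring the cache on the destination, the migration data 
could look roughly like this (a sketch with hypothetical names and 
layout; the real format would depend on how the blob is serialized):

/* Hypothetical layout of the cache part of the migration data. */
typedef struct CacheItemMigrateData {
    uint64_t image_id;                                 /* id stored in the client cache      */
    uint64_t last_access_serial[NUM_DISPLAY_CHANNELS]; /* last accessing serial, per channel */
} CacheItemMigrateData;

typedef struct CacheMigrateData {
    uint64_t cache_size;          /* set by the client on connection init      */
    uint32_t num_items;           /* items follow, ordered from MRU to LRU,    */
    CacheItemMigrateData items[]; /* so the LRU list can be rebuilt implicitly */
} CacheMigrateData;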

(B) Glz dictionary
The dictionary is also shared between the display channels. It holds 
references to qxl images.
As in the old implementation, I think we should reset it after 
migration. Unlike the cache, the client doesn't need to know anything 
about it. The only data that should be migrated to the destination 
server is (1) the dictionary size (also set by the client upon 
connection), and (2) the last image id in the dictionary (otherwise we 
would need a message for resetting the dictionary on the client side).
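
Since only those two values need to travel, the Glz part of the 
migration data could be as small as (hypothetical sketch):

/* Hypothetical migration data for the shared Glz dictionary; the
 * dictionary itself is reset on the destination, so only these two
 * values are sent. */
typedef struct GlzDictMigrateData {
    uint64_t dict_window_size;  /* set by the client upon connection             */
    uint64_t last_image_id;     /* lets the destination continue the id sequence
                                   without a client-side dictionary reset msg    */
} GlzDictMigrateData;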

(C) Surfaces:
Again, 2 options:
(1) Don't migrate anything related to the client's off-screen surfaces. 
Consequence: we might send the client off-screen surfaces that we have 
already sent.
(2) Migrate the list of surfaces that the client holds and their lossy 
regions (or just the regions' extents, for simplicity).
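
Option (2) could be serialized roughly like this (hypothetical names; 
for simplicity only the lossy region extents are kept, as suggested 
above):

/* Hypothetical per-surface record for option (2). */
typedef struct SurfaceMigrateData {
    uint32_t  surface_id;     /* off-screen surface the client already holds */
    SpiceRect lossy_extents;  /* extents of the lossily-compressed region    */
} SurfaceMigrateData;

typedef struct SurfacesMigrateData {
    uint32_t num_surfaces;
    SurfaceMigrateData surfaces[];
} SurfacesMigrateData;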


(D) In order to guarantee that in-flight data from/to the src server 
won't get lost, we still need to ensure that the src server is not 
killed before spice completes its work - and then we are back to the 
original problem that started this thread. This is relevant to other 
channels as well, e.g., spicevmc.


Sound channels:
---------------
There is a 16K buffer in the record channel. However, since it can be 
overwritten by newer samples anyhow, I don't think it is necessary to 
migrate it.
The old solution migrated the record start time, and also the time of 
its mode change (celt/raw), but I don't find any use for them.
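
To make the "nothing to migrate" point concrete, a minimal sketch 
(hypothetical struct, not the actual record channel code):

/* Hypothetical: the destination record channel simply starts with an
 * empty ring buffer; samples lost around the switch would have been
 * overwritten by newer ones shortly anyway. */
#define RECORD_BUF_SIZE (16 * 1024)

typedef struct RecordBuffer {
    uint8_t  data[RECORD_BUF_SIZE];
    uint32_t write_pos;
    uint32_t read_pos;
} RecordBuffer;

static void record_buffer_reset_on_dest(RecordBuffer *buf)
{
    buf->write_pos = 0;
    buf->read_pos = 0;
}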





>>> Alon, can you take the smartcard?
>>> Hans, spicevmc?
>>
>> Easy, the spicevmc channel has no state which needs to be migrated,
>> except
>> for things in the red_channel_client base class:
>>
>> 1) Partially received spice messages
>> 2) Possible pending pipe items when writes from qemu -> client have
>> blocked.
>>
>> I assume that the red_channel_client base class will handle migrating
>> them,
>> if we migrate them at all.
>>
>> Instead of migrating we could:
>> For 1. expect the client to stop sending new messages at a certain point
>> during the migration, and ensure we've processed any pending messages
>> after this point.
>>
>> For 2. we could flush pending items and set a flag to stop channel
>> implementations from queuing new ones, at which point for spicevmc the
>> data will get queued inside qemu and migrating it no longer is
>> a spice-server problem to migrate it (and we need migration support for
>> the data possibly queued inside qemu anyways).
>>
> We have an implementation for this: After migration had completed,
> each spice-server channel sent MSG_MIGRATE to the corresponding client
> channel. The msg was sent after all the pending msgs to the client had
> already been sent.
> In response, the client sent SPICE_MSGC_MIGRATE_FLUSH_MARK to the
> server, after it completed sending all its pending messages.
> Then the "blob" data transfer and the completion of the socket
> switching occurred.
>
> Regarding the usb data in the server that should be flushed to qemu: we
> need to save it after the source vm is stopped. So I think it is too
> late for flushing it to qemu, unless you referred to the special vmstate
> we will have for spice, if we go in that solution direction.
>
> Cheers,
> Yonit.
>> Regards,
>>
>> Hans
>
> _______________________________________________
> Spice-devel mailing list
> Spice-devel at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/spice-devel


