[Spice-devel] [Qemu-devel] seamless migration with spice
Yonit Halperin
yhalperi at redhat.com
Sun Mar 18 06:25:32 PDT 2012
Hi,
On 03/15/2012 04:23 PM, Hans de Goede wrote:
> Hi,
>
> On 03/15/2012 03:07 PM, Yonit Halperin wrote:
>> Hi,
>> On 03/15/2012 02:36 PM, Hans de Goede wrote:
>>> Hi,
>>>
>>> On 03/15/2012 01:11 PM, Yonit Halperin wrote:
>>>> On 03/13/2012 09:40 AM, Gerd Hoffmann wrote:
>>>>> Hi,
>>>>>
>>>>>> It is not easy when you have 2 components, and it is even harder when
>>>>>> you have 3 or 4 components. So why make it more complicated if you can
>>>>>> avoid it? Especially since there is no functional reason for making the
>>>>>> qemu/client capabilities/versions dependent on the server's internal
>>>>>> data.
>>>>>
>>>>> qemu has ways to handle compatibility in the vmstate format. We can
>>>>> use those capabilities. That of course requires exposing the structs
>>>>> to be saved to qemu and adds some complexity to the qemu <-> spice
>>>>> interface.
>>>>>
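>>>>> To illustrate, roughly, what I mean (the SpiceMigState struct and its
>>>>> fields are invented for this example, they are not actual spice state;
>>>>> the VMStateDescription machinery itself is the real qemu mechanism):
>>>>>
>>>>>     /* hypothetical spice state exposed to qemu for illustration */
>>>>>     typedef struct SpiceMigState {
>>>>>         uint32_t n_pipe_items;
>>>>>         uint8_t partial_msg[1024];
>>>>>     } SpiceMigState;
>>>>>
>>>>>     static const VMStateDescription vmstate_spice = {
>>>>>         .name = "spice-server-state",
>>>>>         .version_id = 2,         /* bump when the saved format changes */
>>>>>         .minimum_version_id = 1, /* oldest format we can still load */
>>>>>         .fields = (VMStateField[]) {
>>>>>             VMSTATE_UINT32(n_pipe_items, SpiceMigState),
>>>>>             VMSTATE_UINT8_ARRAY(partial_msg, SpiceMigState, 1024),
>>>>>             VMSTATE_END_OF_LIST()
>>>>>         }
>>>>>     };
>>>>>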
>>>>> What session state is needed by the target?
>>>>> What of this can be negotiated between client and target host without
>>>>> bothering the source?
>>>>> What needs to be transferred from source to target, either directly
>>>>> or via the client?
>>>>>
>>>>>>> If this is a hard requirement then using the vmstate channel isn't
>>>>>>> going to work. The vmstate is a one-way channel; there is no way to
>>>>>>> negotiate anything between source and target.
>>>>>>>
>>>>>> We can do this via the client.
>>>>>
>>>>> Then you can send the actual state via the client too.
>>>>> Out-of-band negotiation for the blob sent via vmstate scares me.
>>>>>
>>>>> Can we please start with a look at which state we actually have to
>>>>> send over?
>>>> Ok, I can take the display and sound channels.
>>>> Alon, can you take the smartcard?
>>>> Hans, spicevmc?
>>>
>>> Easy: the spicevmc channel has no state which needs to be migrated,
>>> except for things in the red_channel_client base class:
>>>
>>> 1) Partially received spice messages
>>> 2) Possible pending pipe items when writes from qemu -> client have
>>> blocked.
>>>
>>> I assume that the red_channel_client base class will handle migrating
>>> them, if we migrate them at all.
>>>
>>> Instead of migrating we could:
>>>
>>> For 1, expect the client to stop sending new messages at a certain point
>>> during the migration, and ensure we've processed any pending messages
>>> after this point.
>>>
>>> For 2, we could flush pending items and set a flag to stop channel
>>> implementations from queuing new ones. At that point, for spicevmc, the
>>> data will get queued inside qemu, and migrating it is no longer a
>>> spice-server problem (we need migration support for the data possibly
>>> queued inside qemu anyway); see the sketch below.
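>>>
>>> Roughly like this (the function and flag names are illustrative, not
>>> the exact red_channel API):
>>>
>>>     /* sketch of option 2: flush what is queued, then block new items */
>>>     static void spicevmc_pre_migrate(RedChannelClient *rcc,
>>>                                      SpiceVmcState *state)
>>>     {
>>>         /* push all currently queued pipe items out to the client */
>>>         red_channel_client_push(rcc);
>>>
>>>         /* hypothetical flag checked by the chardev wakeup handler;
>>>          * while set, incoming data stays queued inside qemu and is
>>>          * migrated by qemu itself */
>>>         state->migration_started = TRUE;
>>>     }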
>>>
>> We have an implementation for this: after migration completed, each
>> spice-server channel sent MSG_MIGRATE to the corresponding client
>> channel. The message was sent only after all the pending messages to
>> the client had already been sent. In response, the client sent
>> SPICE_MSGC_MIGRATE_FLUSH_MARK to the server, after it had completed
>> sending all its own pending messages.
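>>
>> Schematically, on the server side (the names below are from memory and
>> only illustrative, not necessarily the exact spice-server code):
>>
>>     /* once all pending messages to the client have been sent: */
>>     red_channel_client_pipe_add_type(rcc, PIPE_ITEM_TYPE_MIGRATE);
>>
>>     /* and later, in the channel's incoming-message handler: */
>>     case SPICE_MSGC_MIGRATE_FLUSH_MARK:
>>         /* the client has flushed its pending messages; it is now safe
>>          * to transfer the "blob" and complete the socket switching */
>>         handle_migrate_flush_mark(rcc);
>>         break;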
>
> Yes, that is exactly what I thought we did, but I was too lazy to check
> the source :) So that would mean that, other than ensuring no data gets
> queued up in spicevmc after sending the MSG_MIGRATE to the client (see
> below), no changes are needed to spicevmc, as it is essentially stateless.
>
>> Then the "blob" data transfer and completion of socket switching has
>> occurred.
>>
>> Regarding the usb data in the server that should be flushed to qemu:
>> we need to save it after the source vm is stopped. So I think it is
>> too late for flushing it to qemu, unless you refereed to the special
>> vmstate we will have for spice, if we go in that solution direction.
>
> I was talking about the other direction, i.e. data queued in qemu which
> should be flushed to the server (and then forwarded to the client). IOW,
> what I mean is that after the spice-server channel has sent MSG_MIGRATE,
> it should no longer read data from the qemu chardev, even if it receives
> a chardev wakeup from qemu, leaving the data inside qemu to be migrated
> using qemu's standard migration mechanism.
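>
> In code terms, something like this (the names are illustrative, and the
> migration_started flag is the hypothetical one from my sketch above):
>
>     static void spicevmc_chardev_wakeup(SpiceVmcState *state)
>     {
>         if (state->migration_started) {
>             /* don't read: leave the data queued inside qemu so that
>              * qemu's standard migration mechanism carries it over */
>             return;
>         }
>         /* normal path: read from the chardev and queue pipe items */
>         spicevmc_handle_chardev_input(state);
>     }
>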
However, since data from the client can reach the spicevmc server channel
after savevm was called, we will have to migrate this data ourselves,
unless we manage to flush the connection before savevm is performed.
Regards,
Yonit.
>
> Regards,
>
> Hans