[Spice-devel] Seamless mode for switch_host migration?
Yonit Halperin
yhalperi at redhat.com
Fri Mar 1 06:10:34 PST 2013
Hi,
On 02/28/2013 10:14 AM, David Jaša wrote:
> Hi,
>
> this email is a wrap-up of in-person discussion with Hans about bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=884631
>
> The problem occurs when the client tries to connect to the destination
> host while the VM is already migrating. It can be triggered:
> 1. using the steps in the bug: start migration, then connect to the VM.
> In this scenario, the client can never receive client_migrate_info
> because it is not connected at the time the message is sent
Actually, I don't think this scenario should be supported. I think we
just shouldn't allow connections after migration starts.
Connecting to the destination server after migration has completed is not a
reliable option because of the ticket expiration time: if migration_time >
ticket expiration time, we can't connect to the server. That is why the
connection is done before migration in the first place (this is what moved us
from "switch_host" to "semi-seamless"). So, as long as qemu doesn't keep the
main loop active during migration, I think we should prevent such connections.
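To make that concrete, something along these lines is what I mean by
"prevent such connections" (just a rough sketch; all names below are
invented and not actual spice-server code):

/* Illustrative only -- not actual spice-server code; names are invented. */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    MIG_NONE,       /* no migration in progress */
    MIG_STARTED,    /* qemu migration started; its main loop may be blocked */
    MIG_COMPLETED
} MigState;

static MigState mig_state = MIG_NONE;

/* Decide whether a brand new client connection may be accepted.
 * Once migration has started we refuse: such a client would never have
 * received client_migrate_info, and waiting for migration to finish
 * risks the ticket expiring before it can connect to the destination. */
static bool new_client_allowed(void)
{
    return mig_state != MIG_STARTED;
}

int main(void)
{
    mig_state = MIG_STARTED;
    printf("accept new client? %s\n", new_client_allowed() ? "yes" : "no");
    return 0;
}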
> 2. by high latency between an already connected client and the destination
> host (not tested, but anticipated by Hans). The cause is that
> libvirt starts qemu migration right after it receives the return
> message of client_migrate_info, regardless of the
> connected/seamless status
client_migrate_info is an async command. The src spice-server doesn't
complete it until it receives a message from the client saying that it is
connected to the dst server, or until a timeout expires (and in that case,
the client will be disconnected after migration).
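To illustrate that async flow (a sketch only, with invented names; this is
not the actual spice-server code):

/* Sketch of the async completion logic described above; invented names. */
#include <stdbool.h>

/* Callback that reports the async command result back to qemu/libvirt. */
typedef void (*migrate_info_complete_cb)(bool client_connected_to_dst);

struct migrate_info_request {
    migrate_info_complete_cb complete;
    bool pending;
};

/* The client reported that it connected to the dst server. */
static void on_client_connected_to_dst(struct migrate_info_request *req)
{
    if (req->pending) {
        req->pending = false;
        req->complete(true);   /* libvirt may now start qemu migration */
    }
}

/* The timeout expired before the client confirmed the dst connection. */
static void on_connect_timeout(struct migrate_info_request *req)
{
    if (req->pending) {
        req->pending = false;
        req->complete(false);  /* migration proceeds; the client will be
                                  disconnected once it finishes */
    }
}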
Regards,
Yonit.
>
> At this point, the client is connected to the source qemu but it cannot
> connect to the destination qemu, because client_migrate_info was never sent
> to it in the first case, and because qemu is migrating and has stopped its
> event loop in the second case.
> This stage of migration can last varying amounts of time, from just a few
> seconds for small-memory VMs migrating over a fast, unlimited network, to
> minutes at least when migrating large-memory VMs over a highly-utilized
> or capped network.
>
> When the final synchronization happens, all features that require seamless
> migration (USB redirection, smartcard, c&p transfer, ...) are
> interrupted, some of them temporarily (c&p), some of them more or less
> permanently (the USB device is unshared and the user has to share it
> manually again).
>
> Hans's idea to address this issue was to leverage the fact that libvirt
> doesn't terminate the src qemu until it receives the QMP message indicating
> that seamless migration is done:
> 1. after migration synchronization finishes, the src spice-server
> checks the client's connection status to the dst spice-server:
> 1. if connected, go on
> 2. if not, send the client client_migrate_info and wait until
> the client connects to the dst spice-server
> 2. perform the actual seamless migration. We can do this because the
> client is now connected to both servers, regardless of what the
> connection status was at the end of the final synchronization
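For illustration, the proposed flow on the src side could look roughly like
this (every name below is hypothetical, not actual spice-server API):

/* Self-contained sketch of the proposed flow; every name is made up for
 * illustration and does not correspond to the real spice-server API. */
#include <stdbool.h>
#include <stdio.h>

struct client_state {
    bool connected_to_dst;   /* has the client reported a dst connection? */
};

static void send_client_migrate_info(struct client_state *c)
{
    printf("-> client_migrate_info (dst host/port/ticket)\n");
}

static void wait_until_connected_to_dst(struct client_state *c)
{
    /* In reality: block/poll until the client reports it has connected
     * to the dst server; the src qemu stays alive because libvirt waits
     * for the "seamless migration done" message before tearing it down. */
    c->connected_to_dst = true;
}

static void send_seamless_migration_data(struct client_state *c)
{
    printf("-> seamless migration data (agent, usbredir, smartcard, ...)\n");
}

/* Steps 1 and 2 of the proposal, run after the final migration sync. */
static void migrate_end_proposed(struct client_state *c)
{
    if (!c->connected_to_dst) {      /* 1.2: client not connected yet */
        send_client_migrate_info(c);
        wait_until_connected_to_dst(c);
    }
    send_seamless_migration_data(c); /* 2: the actual seamless part */
}

int main(void)
{
    struct client_state c = { .connected_to_dst = false };
    migrate_end_proposed(&c);
    return 0;
}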
>
> This code path should fix the gap with the least amount of effort overall.
> It doesn't seem suitable as the default, though, because the
> several src <--> client <--> dst roundtrips may add considerably to the time
> the VM console is unresponsive to the user -- but for cases when the client
> is connected, it's a small price to pay for having all spice features
> available.
>
> Yonit, what do you think about all of this? Is it doable to have this
> work finished for the next release of spice-server?
>
> David
>