[Spice-devel] [RFC PATCH spice 0.8 01/19] server/spice.h: semi-seamless migration interface, RHBZ #738266

Yonit Halperin yhalperi at redhat.com
Mon Sep 19 21:57:35 PDT 2011


On 09/19/2011 04:24 PM, Alon Levy wrote:
> On Mon, Sep 19, 2011 at 04:06:25PM +0300, Yonit Halperin wrote:
>> On 09/19/2011 03:15 PM, Gerd Hoffmann wrote:
>>> On 09/19/11 11:46, Yonit Halperin wrote:
>>>> semi-seamless migration details:
>>>>
>>>> migration source side
>>>> ---------------------
>>>> (1) spice_server_migrate_info (*): tell the client to link
>>>> to the target side - send SPICE_MSG_MAIN_MIGRATE_BEGIN.
>>>> The client_migrate_info cmd is asynchronous.
>>>> (2) Complete client_migrate_info only when the client has been connected
>>>> to the target - wait for
>>>> SPICE_MSGC_MAIN_MIGRATE_(CONNECTED|CONNECT_ERROR) or a timeout.
>>>> (3) spice_server_migrate_end: tell the client it can switch to
>>>> the target - send
>>>> SPICE_MSG_MAIN_MIGRATE_END.
>>>> (4) client cleans up all data related to the connection to the source
>>>> and switches to the target.
>>>> It sends SPICE_MSGC_MAIN_MIGRATE_END.
>>>
>>> So the switch-host message will not be used any more? How do you handle
>>> compatibility with spice servers which don't support this migration scheme?
>>>
>> Older qemu/spice-server will still use the switch-host message, and
>> the spice client still supports it.
>> With respect to spice-servers that don't support this scheme:
>> qemu checks #ifdef SPICE_INTERFACE_MIGRATION and acts accordingly
>> (calls switch-host or migrate_end).
>> The spice client responds according to the server:
>> if <= RHEL 6.0: seamless migration
>> if RHEL 6.1: switch host
>> if RHEL 6.2: semi-seamless
>
> We need to handle another case:
>   New client.
>   New source server.
>   Old target server.
>
> Server and client both have the SEMI_SEAMLESS cap, so the server uses semi-seamless,
> and the client connects to the target, but the target doesn't support semi-seamless and proceeds
> to send updates as usual without waiting for a SPICEC_MSG_MIGRATE_END.
>
> Two possible solutions:
>   1. client_migrate_info - add an extra parameter for "target version". Spice server will
>   see the target is too old, and use switch host. Requires change to libvirt, so
>   ignored for now.
>   2. When the client connects to the target server it sees that the server doesn't have the
>   SEMI_SEAMLESS_CAP. So it disconnects and sends the source a
>   SPICEC_MSG_MIGRATE_TARGET_CONNECTION_FAILED_TARGET_TOO_OLD,
>   or maybe just a generic error message (if we have something like that) with that as the reason.
>   The server sees this and falls back to the old switch-host behavior. This doesn't require libvirt
>   changes; it does create two connections to the target, but at least the vm is never stopped during that time.
>
Actually the connection to the old target server will fail anyway with 
SPICE_LINK_ERR_BAD_CONNECTION_ID (*) and the client will send 
SPICE_MSGC_MAIN_MIGRATE_CONNECT_ERROR. So I guess the only change needed 
is to fall back to switch host when 
SPICE_MSGC_MAIN_MIGRATE_CONNECT_ERROR happens (currently the client is 
disconnected).

(*) the client sends a connection_id != 0 to the target server. The 
target server compares it to reds->link_id, which is 0 (in RHEL5 we used 
to send the connection id from the src to the target), and rejects the 
connection.


>>
>> Each scheme has its own code path in the client.
>>
>>> This looks a bit like seamless migration half-done. What is missing to
>>> full seamless migration support? The spice channels need to transfer the
>>> state via client. Also qemu on the source needs to wait for that to
>>> complete I guess? Anything else?
>> We still need to keep the target stopped until the target
>> spice server is restored from the data passed through the client.
>> Thanks to the fix Alon sent for RHBZ #729621, we could block the
>> target from starting using a vm state change handler, but it is a
>> workaround...
>> IMHO, if we could have async notifiers for migration before it
>> starts and just before it completes (as we had in RHEL 5), that would
>> be the best solution (it would also allow us not to use
>> client_migrate_info for the pre-connection to the target). By async
>> notifiers I mean
>> (1) notifiers that hold the migration process till they complete.
>> (2) to add notifications (a) before migration starts (today we
>> receive one only after it starts), (b) before migration completes -
>> when the src and target are both stopped (today we don't have
>> control over the target state).
>>
>>>
>>>> enum {
>>>>     SPICE_MIGRATE_CLIENT_NONE = 1,
>>>>     SPICE_MIGRATE_CLIENT_WAITING,
>>>>     SPICE_MIGRATE_CLIENT_READY,
>>>> };
>>>
>>> Is that still needed?
>>>
>> Don't think so.
>>
>>>> +/* migration interface */
>>>> +#define SPICE_INTERFACE_MIGRATION "migration"
>>>> +#define SPICE_INTERFACE_MIGRATION_MAJOR 1
>>>> +#define SPICE_INTERFACE_MIGRATION_MINOR 1
>>>> +typedef struct SpiceMigrateInterface SpiceMigrateInterface;
>>>> +typedef struct SpiceMigrateInstance SpiceMigrateInstance;
>>>> +typedef struct SpiceMigrateState SpiceMigrateState;
>>>> +
>>>> +struct SpiceMigrateInterface {
>>>> +    SpiceBaseInterface base;
>>>> +    void (*migrate_info_complete)(SpiceMigrateInstance *sin);
>>>> +};
>>>>
>>>> +struct SpiceMigrateInstance {
>>>> +    SpiceBaseInstance base;
>>>> +    SpiceMigrateState *st;
>>>> +};
>>>
>>> Why a new interface? The only purpose it seems to serve is adding a
>>> single callback. It is registered unconditionally by the spice core code
>>> in qemu. I fail to see why we can't just add the callback to the core
>>> interface. That would save a bunch of code to handle the new interface.
>>>
>>> I think we should try to make sure the qemu<->  spice-server migration
>>> API can handle both semi-seamless and real seamless client migration.
>> I added a new interface since I think that for supporting seamless
>> migration we may need more callbacks (e.g., if the above async
>> notifiers are implemented). IMHO, it is more readable than
>> adding it to the core interface.
>>> Then the two spice servers can negotiate themselves which method they
>>> pick, without qemu being involved here. I think the minimum we'll need
>>> here is a second notification (when spice channel state migration via
>>> client is finished), either using a second callback or by adding a
>>> reason argument to the one we have. But maybe just blocking in
>>> spice_server_migrate_end() works too (the guest is stopped at that point
>>> anyway, so going async doesn't buy us that much here ...).
>>>
>>> What is the plan for the master branch? How will this work with
>>> multiclient enabled?
>>>
>> For one client - the same.
>> For multiclient - not sure yet; maybe for a small number of clients do
>> the same, and for the rest fall back to switch_host? (with the risk
>> of losing the ticket).
>>> cheers,
>>> Gerd
>>>
>>
