[Spice-devel] [spice-server PATCH 1/2] display seamless migration: don't process both cmd ring and dispatcher queue till migration data is received
Uri Lublin
uril at redhat.com
Wed Nov 7 07:25:11 PST 2012
On 11/05/2012 11:33 PM, Yonit Halperin wrote:
> fix: rhbz#866929
>
> At migration destination side, we need to restore the client's surfaces
> state, before sending surfaces related messages.
> Before this patch, we stopped processing only the cmd ring until the
> migration data arrived.
> However, some QXL_IOs require reading and rendering the cmd ring (e.g.,
> update_area). Moreover, when the device is reset, after destroying all
> surfaces, we assert (in qemu) if the cmd ring is not empty (see
> rhbz#866929).
> This fix makes the red_worker thread wait till the migration data arrives
> (or till a timeout), and not process any input from the device after the
> vm is started.
Hi Yonit,
This patch makes sense to me.
While red_worker is waiting, the qxl device (qemu-kvm) can potentially
keep adding requests to the worker's dispatcher queue. Hopefully it will
not fill up.
See two minor comments/questions below.
> ---
> server/red_worker.c | 45 +++++++++++++++++++++++++++++++++++++--------
> 1 files changed, 37 insertions(+), 8 deletions(-)
>
> diff --git a/server/red_worker.c b/server/red_worker.c
> index 9edd5d4..dd27872 100644
> --- a/server/red_worker.c
> +++ b/server/red_worker.c
> @@ -102,6 +102,7 @@
> #define CHANNEL_PUSH_SLEEP_DURATION 10000 //micro
>
> #define DISPLAY_CLIENT_TIMEOUT 15000000000ULL //nano
> +#define DISPLAY_CLIENT_MIGRATE_DATA_TIMEOUT 10000000000ULL //nano
Nitpick, maybe: to make it clear, add "10 seconds" after the //nano comment.
> #define DISPLAY_CLIENT_RETRY_INTERVAL 10000 //micro
>
> #define DISPLAY_FREE_LIST_DEFAULT_SIZE 128
> @@ -4892,13 +4893,7 @@ static int red_process_commands(RedWorker *worker, uint32_t max_pipe_size, int *
> int n = 0;
> uint64_t start = red_now();
>
> - /* we don't process the command ring if we are at the migration target and we
> - * are expecting migration data. The migration data includes the list
> - * of surfaces that are held by the client. We need to have it before
> - * processing commands in order not to render and send surfaces unnecessarily
> - */
> - if (!worker->running || (worker->display_channel &&
> - red_channel_waits_for_migrate_data(&worker->display_channel->common.base))) {
> + if (!worker->running) {
> *ring_is_empty = TRUE;
> return n;
> }
> @@ -11162,18 +11157,52 @@ void handle_dev_stop(void *opaque, void *payload)
> red_wait_outgoing_items(&worker->cursor_channel->common.base);
> }
>
> +static int display_channel_wait_for_migrate_data(DisplayChannel *display)
> +{
> + uint64_t end_time = red_now() + DISPLAY_CLIENT_MIGRATE_DATA_TIMEOUT;
> + RedChannel *channel = &display->common.base;
> + RedChannelClient *rcc;
> +
> + spice_debug(NULL);
> + spice_assert(channel->clients_num == 1);
> +
> + rcc = SPICE_CONTAINEROF(ring_get_head(&channel->clients), RedChannelClient, channel_link);
> + spice_assert(red_channel_client_waits_for_migrate_data(rcc));
> +
> + for (;;) {
> + red_channel_client_receive(rcc);
> + if (!red_channel_client_is_connected(rcc)) {
> + break;
> + }
> +
> + if (!red_channel_client_waits_for_migrate_data(rcc)) {
> + return TRUE;
> + }
> + if (red_now() > end_time) {
> + spice_warning("timeout");
> + red_channel_client_disconnect(rcc);
> + break;
> + }
> + usleep(DISPLAY_CLIENT_RETRY_INTERVAL);
> + }
> + return FALSE;
> +}
> +
> void handle_dev_start(void *opaque, void *payload)
> {
> RedWorker *worker = opaque;
>
> spice_assert(!worker->running);
> + worker->running = TRUE;
Why does worker->running have to be set before the waiting?
> if (worker->cursor_channel) {
> worker->cursor_channel->common.during_target_migrate = FALSE;
> }
> if (worker->display_channel) {
> worker->display_channel->common.during_target_migrate = FALSE;
> + if (red_channel_waits_for_migrate_data(&worker->display_channel->common.base)) {
> + display_channel_wait_for_migrate_data(worker->display_channel);
> + }
> }
> - worker->running = TRUE;
> guest_set_client_capabilities(worker);
> }
>