[Spice-devel] [spice-gtk] main: Don't delay update_display_timer(0) for up to 1 second

Marc-André Lureau mlureau at redhat.com
Wed May 25 16:31:09 UTC 2016


Hi

----- Original Message -----
> When using remote-viewer --full-screen with a VM/client with multiple
> monitors, a race can be observed during auto-configuration. First, the
> client monitors config is sent to the guest:
> (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1166 main-1:0: sending
> new monitors config to guest
> (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1183 main-1:0: monitor
> #0: 1920x1080+0+0 @ 32 bpp
> (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1183 main-1:0: monitor
> #1: 1920x1080+1920+0 @ 32 bpp
> 
> Then we receive messages from the agent which trigger a call to
> update_display_timer(0)
> 
> This should cause the current monitors state to be sent to the server
> very soon after this call. However, in the racy case, it is delayed
> for nearly a second, and timer_set_display() ends up being called at the
> wrong time, when information about the first display channel has been
> received from the server, but not about the second one.

I don't fully understand what's going on, perhaps Jonathon will be able to help here.

> 
> When this happens, we inform the server that only one monitor is active,
> then the server sends a MonitorsConfig message with 2 monitors (first
> request we sent), and then with just 1 monitor (second request we sent).
> 
> This causes remote-viewer to show one fullscreen black window indicating
> "Connected to server" on one monitor and a working fullscreen window on
> the second monitor.
> 
> update_display_timer(0) schedules a timeout to be run with
> g_timeout_add_seconds(0). However, g_timeout_add_seconds() schedules
> timers with a granularity of a second, so the timeout we scheduled with
> g_timeout_add_seconds() may fire up to 1 second later than when we
> called it, while we really wanted it to fire as soon as possible.

Why not call timer_set_display() there in this particular case?

> Special-casing update_display_timer(0) and using g_timeout_add() in that
> case avoids this issue. In theory, the race could probably still happen
> with very unlucky timing, but in practice I don't think it will be
> possible to trigger it after this change.
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1323092
> ---
>  src/channel-main.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/src/channel-main.c b/src/channel-main.c
> index e5a70af..946b280 100644
> --- a/src/channel-main.c
> +++ b/src/channel-main.c
> @@ -1560,7 +1560,16 @@ static void update_display_timer(SpiceMainChannel *channel, guint seconds)
>      if (c->timer_id)
>          g_source_remove(c->timer_id);
>  
> -    c->timer_id = g_timeout_add_seconds(seconds, timer_set_display, channel);
> +    if (seconds != 0) {
> +        c->timer_id = g_timeout_add_seconds(seconds, timer_set_display, channel);
> +    } else {
> +        /* We need to special case 0, as we want the callback to fire as soon
> +         * as possible. g_timeout_add_seconds(0) would set up a timer which would fire
> +         * at the next second boundary, which might be nearly 1 full second later.
> +         */
> +        c->timer_id = g_timeout_add(0, timer_set_display, channel);
> +    }
> +
>  }
>  
>  /* coroutine context  */
> --
> 2.7.4
> 
> _______________________________________________
> Spice-devel mailing list
> Spice-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/spice-devel
> 
