[Spice-devel] [spice-gtk] main: Don't delay update_display_timer(0) for up to 1 second

Pavel Grunt pgrunt at redhat.com
Wed Jun 1 13:47:11 UTC 2016


Hi Christophe,

On Wed, 2016-05-25 at 18:14 +0200, Christophe Fergeau wrote:
> When using remote-viewer --full-screen with a VM/client with multiple
> monitors, a race can be observed during auto-configuration. First, the
> client monitors config is sent to the guest:
> (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1166 main-1:0: sending new monitors config to guest
> (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1183 main-1:0: monitor #0: 1920x1080+0+0 @ 32 bpp
> (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1183 main-1:0: monitor #1: 1920x1080+1920+0 @ 32 bpp
> 
> Then we receive messages from the agent which trigger a call to
> update_display_timer(0)
> 
> This should cause the current monitors state to be sent to the server
> very soon after this call. However, in the racy case, this is delayed
> for nearly a second, and timer_set_display() ends up being called at the
> wrong time, when information about the first display channel has been
> received from the server, but not about the second one.
This is strange.
> 
> When this happens, we inform the server that only one monitor is active,
> then the server sends a MonitorsConfig message with 2 monitors (first
> request we sent), and then with just 1 monitor (second request we sent).
Is the server doing everything correctly?
> 
> This causes remote-viewer to show one fullscreen black window indicating
> "Connected to server" on one monitor and a working fullscreen window on
> the second monitor.
> 
> update_display_timer(0) schedules a timeout to be run with
> g_timeout_add_seconds(0). However, g_timeout_add_seconds() schedules
> timers with a granularity of a second, so the timeout we scheduled with
> g_timeout_add_seconds() may fire up to 1 second later than when we
> called it, while we really wanted it to fire as soon as possible.
> Special-casing update_display_timer(0) and using g_timeout_add()
What about only using g_timeout_add()?
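Something like this (untested sketch, just to illustrate the idea; it reuses the
existing timer_set_display callback from the patch context):

    static void update_display_timer(SpiceMainChannel *channel, guint seconds)
    {
        SpiceMainChannelPrivate *c = channel->priv;

        if (c->timer_id)
            g_source_remove(c->timer_id);

        /* g_timeout_add() takes milliseconds, so seconds == 0 fires on the
         * next main loop iteration instead of at the next second boundary */
        c->timer_id = g_timeout_add(seconds * 1000, timer_set_display, channel);
    }

(One downside may be losing the timer coalescing that g_timeout_add_seconds()
provides for the non-zero case.)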

>  in that
> case avoids this issue. In theory, the race could probably still happen
> with very, very bad timing, 

I think we should focus on the race (and the behaviour of the server?).

So if I understand it correctly, the client sends a resize request, but at the
same time the server sends info about the monitors, and this causes the
problem. What about waiting to send the resize request until we know that the
display channels are ready?
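Very roughly something like the sketch below (completely untested, and
spice_main_all_displays_ready() is a made-up placeholder for "we already got
the initial info for every display channel the server announced"):

    /* sketch only: do not send a monitors config based on partial info */
    static gboolean timer_set_display(gpointer data)
    {
        SpiceMainChannel *channel = data;
        SpiceMainChannelPrivate *c = channel->priv;

        if (!spice_main_all_displays_ready(channel)) {
            /* retry shortly instead of reporting only one active monitor */
            c->timer_id = g_timeout_add(50, timer_set_display, channel);
            return FALSE;
        }

        c->timer_id = 0;
        spice_main_send_monitor_config(channel);
        return FALSE;
    }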

I have a feeling it would be nice to have some mechanism to confirm that the
resize request was processed by the server. And I also have a feeling that we
have discussed this before.
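Purely hypothetical, there is no such signal today, but I am thinking of
something along these lines on the client side:

    /* hypothetical "monitors-config-applied" signal, emitted by the main
     * channel once the server acknowledges the client's monitors config */
    static void on_monitors_config_applied(SpiceMainChannel *main_channel,
                                           gpointer user_data)
    {
        SPICE_DEBUG("monitors config was applied by the server");
    }

    ...
    g_signal_connect(main_channel, "monitors-config-applied",
                     G_CALLBACK(on_monitors_config_applied), NULL);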

Pavel

> but in practice I don't think it will be
> possible to trigger it after this change.
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1323092
> ---
>  src/channel-main.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/src/channel-main.c b/src/channel-main.c
> index e5a70af..946b280 100644
> --- a/src/channel-main.c
> +++ b/src/channel-main.c
> @@ -1560,7 +1560,16 @@ static void update_display_timer(SpiceMainChannel *channel, guint seconds)
>      if (c->timer_id)
>          g_source_remove(c->timer_id);
>  
> -    c->timer_id = g_timeout_add_seconds(seconds, timer_set_display, channel);
> +    if (seconds != 0) {
> +        c->timer_id = g_timeout_add_seconds(seconds, timer_set_display, channel);
> +    } else {
> +        /* We need to special case 0, as we want the callback to fire as soon
> +         * as possible. g_timeout_add_seconds(0) would set up a timer which would fire
> +         * at the next second boundary, which might be nearly 1 full second later.
> +         */
> +        c->timer_id = g_timeout_add(0, timer_set_display, channel);
> +    }
> +
>  }
>  
>  /* coroutine context  */

