[Spice-commits] src/channel-main.c

Christophe Fergeau teuf at kemper.freedesktop.org
Wed Jun 15 13:35:00 UTC 2016


 src/channel-main.c |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

New commits:
commit 3e3b37b672743453d829264144c1c63a83b1387c
Author: Christophe Fergeau <cfergeau at redhat.com>
Date:   Wed May 25 17:36:30 2016 +0200

    main: Don't delay update_display_timer(0) for up to 1 second
    
    When using remote-viewer --full-screen with a VM/client with multiple
    monitors, a race can be observed during auto-configuration. First, the
    client monitors config is sent to the guest:
    (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1166 main-1:0: sending new monitors config to guest
    (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1183 main-1:0: monitor #0: 1920x1080+0+0 @ 32 bpp
    (remote-viewer:19480): GSpice-DEBUG: channel-main.c:1183 main-1:0: monitor #1: 1920x1080+1920+0 @ 32 bpp
    
    Then we receive messages from the agent which trigger a call to
    update_display_timer(0).
    
    This should cause the current monitors state to be sent to the server
    very soon after this call. However, in the racy case, this is delayed
    for nearly a second, and timer_set_display() ends up being called at the
    wrong time, when information about the first display channel has been
    received from the server, but not the second one.
    
    When this happens, we inform the server that only one monitor is active,
    then the server sends a MonitorsConfig message with 2 monitors (matching
    the first request we sent), and then one with just 1 monitor (matching
    the second request).
    
    This causes remote-viewer to show one fullscreen black window indicating
    "Connected to server" on one monitor and a working fullscreen window on
    the second monitor.
    
    update_display_timer(0) schedules a timeout to be run with
    g_timeout_add_seconds(0). However, g_timeout_add_seconds() schedules
    timers with a granularity of one second, so a timeout scheduled this way
    may fire up to 1 second after the call, while we really want it to fire
    as soon as possible. Special-casing update_display_timer(0) and using
    g_timeout_add() in that case avoids this issue. In theory, the race could
    still happen with very bad timing, but in practice I don't think it will
    be possible to trigger it after this change.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1323092
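
The one-second granularity is easy to see with a minimal standalone GLib
sketch (not part of this commit; the callback names, the sketch.c file name
and the 2-second quit timeout are illustrative only):

/* sketch.c - contrast g_timeout_add_seconds(0) with g_timeout_add(0).
 * Assumes GLib >= 2.32; build with:
 *   gcc sketch.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>

static gint64 start_us;

static gboolean report_cb(gpointer data)
{
    /* Print how long after scheduling this timeout actually fired. */
    g_print("%s fired after %" G_GINT64_FORMAT " ms\n",
            (const gchar *)data,
            (g_get_monotonic_time() - start_us) / 1000);
    return G_SOURCE_REMOVE;
}

static gboolean quit_cb(gpointer data)
{
    g_main_loop_quit(data);
    return G_SOURCE_REMOVE;
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    start_us = g_get_monotonic_time();

    /* Second granularity: may be delayed by up to ~1 second. */
    g_timeout_add_seconds(0, report_cb, (gpointer) "g_timeout_add_seconds(0)");

    /* Millisecond granularity: dispatched on the next main loop iteration. */
    g_timeout_add(0, report_cb, (gpointer) "g_timeout_add(0)");

    /* Give both timeouts time to fire, then quit. */
    g_timeout_add_seconds(2, quit_cb, loop);
    g_main_loop_run(loop);

    g_main_loop_unref(loop);
    return 0;
}

On a typical run, the g_timeout_add(0) callback is reported after a few
milliseconds, while the g_timeout_add_seconds(0) one can be reported up to
about a second later, which is the delay the patch below avoids.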

diff --git a/src/channel-main.c b/src/channel-main.c
index efe7f4e..2d717c5 100644
--- a/src/channel-main.c
+++ b/src/channel-main.c
@@ -1559,7 +1559,16 @@ static void update_display_timer(SpiceMainChannel *channel, guint seconds)
     if (c->timer_id)
         g_source_remove(c->timer_id);
 
-    c->timer_id = g_timeout_add_seconds(seconds, timer_set_display, channel);
+    if (seconds != 0) {
+        c->timer_id = g_timeout_add_seconds(seconds, timer_set_display, channel);
+    } else {
+        /* We need to special case 0, as we want the callback to fire as soon
+         * as possible. g_timeout_add_seconds(0) would set up a timer which would fire
+         * at the next second boundary, which might be nearly 1 full second later.
+         */
+        c->timer_id = g_timeout_add(0, timer_set_display, channel);
+    }
+
 }
 
 /* coroutine context  */

