[Spice-devel] [spice] Enable mm_time adjustments on startup

Francois Gouget fgouget at codeweavers.com
Thu May 9 11:17:25 UTC 2019


On Fri, 3 May 2019, Frediano Ziglio wrote:
[...]
> > > What will be the default? what will happen to late video frames?
> > 
> > Late frames will be dropped as has always been the case (at least in
> > the mjpeg case otherwise it's a bit more complex).
> > 
> 
> I think the terminology should really be changed.
> "Late frame": what does it mean?

The objective definition is that if the server captures two frames at a 25 
millisecond interval, then the client should display the second frame 25 
milliseconds after displaying the first one. If the second frame is 
displayed before that it is early, and if it is displayed after that it is 
late.

Both are bad because they result in jerky motion. Jerky motion likely 
won't be noticeable if the video being displayed is that of an Xterm, but 
it will be bad for any kind of 'TV content' (movies, sports, shows, etc.) 
and for games.


> Ignore that we know the protocol and implementation it does not make 
> sense, all frames are late, 
[...]

The protocol is there to ensure that the client will be able to 
respect the interval between frames. 


> Honestly, we receive a frame, this is the best and more updated screen
> we have. What do you want to do? Drop it, that's obvious. Are we really
> sure about that?

I agree that dropping 'late' frames is not necessarily the right thing to 
do. That's why in the GStreamer decoder schedule_frame() always displays 
the least out of date frame in the queue. The MJPEG decoder does not do 
that and I did not have the heart to modify it when I was working on the 
GStreamer decoder.
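
As a sketch of that idea (invented names, not the actual spice-gtk code):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical frame record; mm_time is when the frame is due on screen. */
typedef struct frame {
    uint32_t mm_time;
    struct frame *next;
} frame_t;

/* Among the frames that are already due, return the most recent one (the
 * "least out of date") instead of dropping them all; NULL if none is due. */
static frame_t *pick_least_out_of_date(frame_t *queue, uint32_t now)
{
    frame_t *best = NULL;
    for (frame_t *f = queue; f != NULL; f = f->next) {
        if (f->mm_time <= now && (best == NULL || f->mm_time > best->mm_time))
            best = f;
    }
    return best;
}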


> Is that the reason why we "must" increase this "latency"?

No. We need to increase the latency to make sure the client can display 
the frames at the right intervals and thus produce a smooth video.

That implies some buffering on the client side to cover for network 
jitter, delays tied to the frame size (particularly if there is a mix of I 
and B frames), and scheduling delays in the client.
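
A back-of-the-envelope sketch of what that buffering has to cover (names 
invented, this is not how the client actually computes it):

#include <stddef.h>
#include <stdint.h>

/* Illustrative only: one way to size the client-side buffering.
 * arrival_margin[i] is how many ms before its due time frame i arrived;
 * a negative value means the frame arrived after it was already due. */
static uint32_t suggest_playback_delay(const int32_t *arrival_margin, size_t n,
                                       uint32_t decode_and_sched_headroom_ms)
{
    int32_t worst = 0;
    for (size_t i = 0; i < n; i++) {
        if (arrival_margin[i] < worst)
            worst = arrival_margin[i];
    }
    /* Enough extra delay to turn the worst late arrival into an on-time
     * one, plus headroom for decoding and scheduling on the client. */
    return (uint32_t)(-worst) + decode_and_sched_headroom_ms;
}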

Note that this is all for the no-sound case. Otherwise the frames must be 
displayed at the same time as the corresponding sound fragment is played 
on the speakers so that we have proper lip-sync. But in that case the 
latency is controlled by the audio side.


> "latency": this term is really confusing. Taking into account that we
> need to deal also with network latency but this is not it I find always
> really confusing. Can't we start using "delay" or something else?

I agree that the terminology being used is confusing. For me that's 
because we use different terms for the same thing. For instance, what 
reds.c [reds_set_client_mm_time_latency(), RedsState.mm_time_latency] and 
dcc.c [dcc_set_max_stream_latency()] call 'latency', stream.c calls 
'delay' [update_client_playback_delay()].

To me both are about setting the 'offset' between the server's mm_time 
clock and the client's.
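
To illustrate what I mean by 'offset' (again a sketch with invented names, 
not the actual reds.c / dcc.c / stream.c code):

#include <stdint.h>

/* Whatever it is called ('latency', 'delay'), it boils down to one offset
 * used when turning a frame's server timestamp into a client deadline. */
typedef struct {
    uint32_t mm_time_offset;   /* extra delay applied to every frame, in ms */
} playback_clock_t;

static uint32_t frame_due_time(const playback_clock_t *clk, uint32_t frame_mm_time)
{
    /* A larger offset pushes every frame further into the future,
     * i.e. more client-side buffering. */
    return frame_mm_time + clk->mm_time_offset;
}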


> What's also confusing is the computation. Usually when you have
> latency issues the more latency you have the worst it is.
> In client you compute a value and the _less_ it is the worst it is.

Yes, I dislike that name in the client too. To me it's the margin we have 
between when we receive the frame and when it should be displayed. This is 
sent back to the encoder, and in the GStreamer traces I call it the 'video 
margin' (but the MJPEG encoder calls it the 'video-delay').

This margin actually plays a very important role in letting the server 
determine the available network bandwidth: when the encoder increases 
the video bitrate the frames get bigger, sending them on the network 
takes longer, they arrive at the client later, and this margin decreases. 
The server periodically receives this margin in the 
SpiceMsgcDisplayStreamReport messages. When the margin goes steadily down 
it knows it has exceeded the network bandwidth; and when it goes steadily 
up it knows that it could increase the video bitrate to provide better 
quality.
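
Roughly, the loop looks like this (a sketch with invented names, not the 
actual stream report handling):

#include <stdbool.h>
#include <stdint.h>

/* The client reports the margin it measured (due time minus receive time)
 * and the server nudges the bitrate based on the trend. */
typedef struct {
    int32_t  last_margin;   /* previously reported margin, in ms */
    bool     has_last;
    uint64_t bit_rate;      /* current target video bitrate, in bits/s */
} rate_ctl_t;

static void on_stream_report(rate_ctl_t *ctl, int32_t reported_margin)
{
    if (ctl->has_last) {
        if (reported_margin < ctl->last_margin) {
            /* Margin shrinking: frames take longer and longer to reach the
             * client, we are probably exceeding the available bandwidth. */
            ctl->bit_rate -= ctl->bit_rate / 10;
        } else if (reported_margin > ctl->last_margin) {
            /* Margin growing: there is headroom, quality can be increased. */
            ctl->bit_rate += ctl->bit_rate / 20;
        }
    }
    ctl->last_margin = reported_margin;
    ctl->has_last = true;
}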

This mechanism is not really satisfactory though because the feedback 
comes too late: stream reports are sent every 5 frames, so at 20 fps the 
server only becomes aware of a bandwidth drop after at least 250 ms (and 
the 1 s report_timeout does not help).


> > Increasing the latency means a frame queued on the client will be
> > displayed even later. So the latency can be increased freely.
> > 
> 
> If I'm drawing with the mouse something on the screen and the
> latency is 10 minutes I don't really think that increase "freely"
> is really good.

I did not mean 'freely' as in 'as much as we want', but as in 'with no 
risk of causing frame drops given how existing clients work'.

Indeed the MJPEG encoder has a built-in maximum 'playback delay' of 5 
seconds [MJPEG_MAX_CLIENT_PLAYBACK_DELAY]. I find this way too high 
because it does not correspond to any realistic combination of encoding 
time or network latency. So if the MJPEG encoder settles on such a high 
value it means something went seriously wrong.


> > Decreasing the latency means a bunch of frames queued on the client may
> > suddenly become late, causing them to be dropped. So more care must be
> > taken.
> > 
> 
> Is the problem of "care to take" something not avoidable or is it
> something we create with our hands? I mean, I know that current
> implementation will drop frames which we don't want but is it
> not a workaround of a bad implementation?

The answer depends in part on whether you consider the existing clients to 
be a given that the server has to deal with, or something that can be 
replaced to suit the server (or to not hold back the protocol).


-- 
Francois Gouget <fgouget at codeweavers.com>, 


