[Spice-devel] [spice v2] streaming: Always delegate bit rate control to the video encoder

Francois Gouget fgouget at codeweavers.com
Fri Oct 28 11:00:53 UTC 2016


On Thu, 27 Oct 2016, Francois Gouget wrote:
[...]
> Testing this you'll notice that reducing the available bandwidth results 
> in freezes and recovery times that take a dozen seconds or more. This is 
> because it's only after a sufficient backlog has been created (possibly 
> a single large video frame in the queue) that the server drops a frame 
> which makes the bitrate control aware that something is wrong. This 
> issue is common to all algorithms when all they have to rely on is 
> server frame drop notifications.

I'll add some more thoughts on this because I think the server could do 
better here.

In some place that I have not precisely identified, the server has to 
write each frame to the network socket. How long this takes depends on 
the state of the network queue:

- If there is enough room in the queue for the frame, the write should 
  complete almost instantly.
- If the queue is full or does not have enough room, the write will only 
  complete once enough data has been sent to and received by the client. 
  How long this takes depends on how much space had to be freed and on 
  the network bandwidth.
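
As a very rough illustration, the measurement itself could look something 
like this. This is only a sketch: stream_write_frame_timed() and 
video_encoder_notify_write_time() are hypothetical names, and in the real 
server the timed call would be the existing marshalling/write path rather 
than a bare write():

#include <stddef.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical hook into the encoder's bit rate control code. */
void video_encoder_notify_write_time(size_t frame_size,
                                     uint64_t write_time_us);

static uint64_t monotonic_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

static void stream_write_frame_timed(int socket_fd, const uint8_t *frame,
                                     size_t size)
{
    uint64_t start = monotonic_us();

    /* Blocks until the network queue has room for the whole frame.
     * Error and partial-write handling omitted for brevity. */
    ssize_t sent = write(socket_fd, frame, size);
    (void)sent;

    video_encoder_notify_write_time(size, monotonic_us() - start);
}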

Dividing the frame size by the time to write gives an upper bound on 
the available bandwidth. That value is probably not directly usable (one 
would have to subtract the bandwidth used by other traffic like audio) 
but its variations could prove informative.
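
For example, with purely illustrative numbers: if a 60 KB frame takes 
40 ms to write, then at most 60 KB / 0.040 s = 1.5 MB/s, i.e. about 
12 Mbps, was available while the queue drained. Conversely, if the write 
completes in ~0 ms no meaningful bound can be derived; it only shows that 
the queue had room for the frame.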

Thus notifying the video encoder of the time it took to push each frame 
to the network could provide early and useful information on the network 
state (a rough sketch of how the encoder could use this follows the list 
below):

* If the time to write is ~0, it means there is plenty of bandwidth 
  available, so the stream bitrate can be increased. This type of 
  information is currently completely unavailable if the client does not 
  send stream reports.

* If the time to write shoots up from ~0, it means the queue is now 
  full, so the stream bitrate should not be increased further.

* If the time to write was already high and the calculated 
  bandwidth dropped, then it means the available network bandwidth 
  dropped. So decrease the stream bitrate.

* Since this gives an upper bound on the bandwidth, it should normally 
  be higher than the stream bitrate. If that's not the case it's another 
  indicator that the stream bitrate may be too high.

* What makes this interesting is that catching congestion conditions 
  early is key to preventing them from escalating into frame drops: if 
  you don't, large frames will keep accumulating in the network queue 
  until you get a lag of at least one frame interval, or until you get 
  a client report back, which only happens once every 166 ms (5 frames 
  at 30 fps, and it is also stale by RTT/2 ms). Here you would get 
  feedback as soon as the frame is in the network queue, likely even 
  before the client has received it.

* Of course that source of data is going to be quite noisy, and dealing 
  with that noise is likely to reintroduce some lag. But at least that 
  lag is not built in, so this approach still has the potential to be 
  more reactive.
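
To make the heuristics above a bit more concrete, here is a minimal 
sketch of what the hypothetical video_encoder_notify_write_time() hook 
from the earlier snippet could do with each measurement. All field 
names, thresholds and step sizes are invented for illustration, and this 
is not the server's existing rate control code:

#include <stddef.h>
#include <stdint.h>

#define WRITE_TIME_NOISE_US 1000  /* below this the write is "instant" */

static struct {
    uint64_t bit_rate;        /* current target bit rate, in bps */
    uint64_t last_bandwidth;  /* previous bandwidth upper bound, in bps */
} rate_control = { .bit_rate = 3000000 };  /* arbitrary 3 Mbps start */

void video_encoder_notify_write_time(size_t frame_size,
                                     uint64_t write_time_us)
{
    if (write_time_us < WRITE_TIME_NOISE_US) {
        /* The queue had room: probe for more bandwidth. */
        rate_control.bit_rate += rate_control.bit_rate / 20;  /* +5% */
        return;
    }

    /* frame_size bytes drained in write_time_us microseconds gives an
     * upper bound on the available bandwidth, in bits per second. */
    uint64_t bandwidth = (uint64_t)frame_size * 8u * 1000000u / write_time_us;

    if (rate_control.last_bandwidth &&
        bandwidth < rate_control.last_bandwidth) {
        /* The available bandwidth dropped: decrease the bit rate. */
        rate_control.bit_rate -= rate_control.bit_rate / 10;  /* -10% */
    }
    if (bandwidth < rate_control.bit_rate) {
        /* Even the upper bound is below the target: clamp it. */
        rate_control.bit_rate = bandwidth;
    }
    /* Otherwise the queue is simply full: hold the bit rate steady
     * rather than increasing it further. */

    rate_control.last_bandwidth = bandwidth;
}

In practice the raw measurements would need some smoothing (a moving 
average or similar) before driving the bit rate, which is the noise 
issue mentioned in the last point above.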


-- 
Francois Gouget <fgouget at codeweavers.com>

