[Spice-devel] [spice v2] streaming: Always delegate bit rate control to the video encoder

Frediano Ziglio fziglio at redhat.com
Wed Nov 16 13:02:09 UTC 2016


> 
> On Thu, 27 Oct 2016, Francois Gouget wrote:
> [...]
> > Testing this you'll notice that reducing the available bandwidth results
> > in freezes and recovery times that take a dozen seconds or more. This is
> > because it's only after a sufficient backlog has been created (possibly
> > a single large video frame in the queue) that the server drops a frame,
> > which makes the bitrate control aware that something is wrong. This
> > issue is common to all algorithms when all they have to rely on is
> > server frame drop notifications.
> 
> I'll add some more thoughts on this because I think the server could do
> better on this.
> 
> In some place that I have not precisely identified, the server has to
> write each frame to the network socket. This will take more or less
> time depending on the state of the network queue.
> 
> - If there is enough room in the queue for the frame the write
>   should complete instantly.
> - If the queue is full or there is not enough space, the write will only
>   complete once enough data has been sent and received by the client.
>   How long this takes will depend on how much space needed to be freed
>   and on the network bandwidth.
> 
> Dividing the frame size by the time to write gives an upper bound on
> the available bandwidth. That value is probably not directly usable (one
> would have to subtract the bandwidth used by other traffic like audio)
> but its variations could prove informative.
> 

I don't know how much information the upper bound can give. But
I think that if you continue to see the queue full, the upper bound
should approach the real bandwidth, which is much more useful.
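
To illustrate, something like this could compute that upper bound from
the write time. It is only a sketch: the names are made up and this is
not actual spice-server code, just the measurement idea.

/* Hypothetical sketch: derive an upper-bound bandwidth estimate from
 * the time a blocking frame write takes. */
#include <stddef.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

/* write_frame_fn stands for whatever pushes the encoded frame to the
 * network socket. */
typedef void (*write_frame_fn)(const uint8_t *data, size_t size);

/* Returns an upper bound in bits per second, or 0 if the write was
 * effectively instantaneous (the queue had room, so no information). */
static uint64_t write_frame_and_estimate(write_frame_fn write_frame,
                                         const uint8_t *data, size_t size)
{
    uint64_t start = now_us();
    write_frame(data, size);
    uint64_t elapsed = now_us() - start;
    if (elapsed < 1000) /* ~0: plenty of space in the queue */
        return 0;
    return (uint64_t)size * 8 * 1000000 / elapsed;
}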

> Thus notifying the video encoder of the time it took to push each frame
> to the network could provide useful and early information on the network
> state:
> 
> * If the time to write is ~0 then it means there is plenty of bandwidth
>   available so the stream bitrate can be increased. This type of
>   information is currently completely unavailable if the client does not
>   send stream reports.
> 

I think you are not considering the proxy case. In that case the proxy
provides extra buffering, potentially reducing the server-side queue.
The queue (in this case the TCP send queue) depends on many aspects, like:
- system settings;
- average network usage (so not only this connection);
- network latency;
- proxy presence.
But if you can send X bytes to the client and the client sends an ACK
after S seconds, you had a bandwidth of X/S, and this is a lower bound,
not an upper bound. Here the time S can be bigger due to a preexisting
queue (due to previous data on this connection or on others sharing the
same network path).
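
To make the X/S computation concrete, a minimal sketch (the structure
and function names are invented for illustration, they don't exist in
spice-server):

/* Hypothetical sketch: lower-bound bandwidth from the bytes sent versus
 * the time the client took to acknowledge them. X bytes acknowledged in
 * S seconds gives X/S, a lower bound on the available bandwidth. */
#include <stdint.h>

typedef struct {
    uint64_t bytes_sent;     /* X: bytes sent since the last ACK */
    uint64_t send_start_us;  /* when the first of those bytes was sent */
} AckWindow;

/* Called when the client acknowledges all outstanding data.
 * Returns a lower bound in bits per second. */
static uint64_t on_client_ack(AckWindow *w, uint64_t ack_time_us)
{
    uint64_t elapsed = ack_time_us - w->send_start_us;
    if (elapsed == 0 || w->bytes_sent == 0)
        return 0;
    uint64_t lower_bound_bps = w->bytes_sent * 8 * 1000000 / elapsed;
    w->bytes_sent = 0; /* start a new measurement window */
    return lower_bound_bps;
}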

> * If the time to write shoots up from ~0 then it means the queue is now
>   full so the stream bitrate should not be increased further.
> 

Agreed, basically you are using more bandwidth than is available.

> * If the time to write was already high and the calculated
>   bandwidth dropped, then it means the available network bandwidth
>   dropped. So decrease the stream bitrate.
> 

Here the problem, I think, is the calculated bandwidth.
We should compute it using more global data, so as to include
all possible streams (like sound) and the other channels' usage
(even images and cursor, for instance).
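
What I mean by "more global" is roughly this kind of accounting (again
a sketch with made-up names, not the actual server code): count the
bytes of every channel on the connection before computing the consumed
bandwidth.

/* Hypothetical sketch: account for all traffic on a connection (video,
 * sound, images, cursor, ...) when computing the consumed bandwidth,
 * instead of only one stream's bytes. */
#include <stdint.h>

#define N_CHANNELS 8  /* assumed number of channels to track */

typedef struct {
    uint64_t bytes[N_CHANNELS]; /* bytes sent per channel in the window */
    uint64_t window_start_us;
} GlobalUsage;

static void account_bytes(GlobalUsage *u, int channel, uint64_t bytes)
{
    u->bytes[channel] += bytes;
}

/* Total bits per second used by every channel over the elapsed window. */
static uint64_t total_bps(const GlobalUsage *u, uint64_t now_us)
{
    uint64_t elapsed = now_us - u->window_start_us;
    uint64_t total = 0;
    for (int i = 0; i < N_CHANNELS; i++)
        total += u->bytes[i];
    return elapsed ? total * 8 * 1000000 / elapsed : 0;
}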

> * Since we have a bandwidth upper bound it should be higher than the
>   stream bitrate. If that's not the case it's another indicator that the
>   stream bitrate may be too high.
> 

Here you basically mean that you cannot support such a high bitrate
on the stream and you should decrease the bitrate, right?
Maybe there is some terminology confusion: here by bitrate you mean the
configured stream bitrate (set on GStreamer if GStreamer is used) and
not the network one (bandwidth).
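
To be clear about that distinction: with GStreamer the configured stream
bitrate is just a property set on the encoder element. A minimal sketch,
assuming an x264enc-style encoder whose "bitrate" property is expressed
in kbit/s (other encoders use different property names or units):

/* Hedged sketch: telling the encoder how many bits per second it may
 * produce. This is the configured stream bitrate, not the network
 * bandwidth, which is whatever the link can actually carry. */
#include <gst/gst.h>

static void set_stream_bitrate(GstElement *encoder, guint64 bit_rate)
{
    /* x264enc takes "bitrate" in kbit/s; adjust for other encoders. */
    g_object_set(encoder, "bitrate", (guint)(bit_rate / 1000), NULL);
}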

> * What makes this interesting is that catching congestion conditions
>   early is key to avoid them escalating to frame drops: if you don't
>   then large frames will keep accumulating in the network queue until
>   you get a lag of at least 1 frame interval, or until you get a client
>   report back which you only get once every 166ms (5 frames at 30 fps,
>   plus it's also stale by RTT/2 ms). Here you'd get feedback as soon as
>   the frame is in the network queue, likely even before the client has
>   received it.
> 

Currently we don't use it, but besides the streaming report there is
a PING/PONG protocol that we could use for better bandwidth computation.
It's used at the beginning to get the low/high bandwidth estimation
and to do some weird bandwidth limitation (IMHO wrong) later.
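
For example, something along these lines could turn PING/PONG timings
into a bandwidth estimate. The names are illustrative only, this is not
the actual code; it just assumes we have the RTT of a small ping and of
a ping carrying a known payload.

/* Hypothetical sketch: subtract the latency seen on a tiny ping from
 * the round-trip time of a ping carrying a payload, and derive the
 * bandwidth from the extra transfer time. */
#include <stdint.h>

typedef struct {
    uint64_t small_rtt_us;   /* RTT of a tiny ping: ~network latency */
    uint64_t large_rtt_us;   /* RTT of a ping carrying payload_bytes */
    uint64_t payload_bytes;
} PingSample;

/* Bits per second estimated from the extra time the payload needed. */
static uint64_t ping_bandwidth_bps(const PingSample *s)
{
    if (s->large_rtt_us <= s->small_rtt_us)
        return 0; /* payload transfer time lost in the noise */
    uint64_t transfer_us = s->large_rtt_us - s->small_rtt_us;
    return s->payload_bytes * 8 * 1000000 / transfer_us;
}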

> * Of course that source of data is going to be quite noisy and it's
>   likely dealing with that noise will reintroduce some lag. But at least
>   the lag is not built-in so it still has the potential of being more
>   reactive.
> 

When you test the streaming, do you look at the network queues?
I find it very interesting; I usually keep a terminal window open
with a command like "watch 'netstat -anp | grep 5900'".

Frediano

