[Spice-devel] [spice v2] streaming: Always delegate bit rate control to the video encoder
Francois Gouget
fgouget at codeweavers.com
Fri Nov 18 17:27:58 UTC 2016
On Wed, 16 Nov 2016, Frediano Ziglio wrote:
[...]
> I don't know how much information the upper bound can give.
When the available bandwidth drops suddenly (e.g. degraded wifi / 3G
connection or multiple competing network streams starting) it can take
quite a few iterations before the video stream's bitrate is slashed
sufficiently to fit. A network bandwidth upper bound could let us
immediately drop the stream bitrate to a fitting value. Of course there's
no point if the bandwidth upper-bound estimate takes too long to react
to network changes, or if it is unreliable.
> But I think that if you continue to see the queue full, the upper bound
> should approach the real one, which is much more useful.
Yes.
[...]
> > * If the time to write is ~0 then it means there is plenty of bandwidth
> > available so the stream bitrate can be increased.
[...]
> But if you can send X bytes to the client and the client sends an ACK
> after S seconds, you had a bandwidth of X/S, and this is a lower bound,
> not an upper bound.
Note that in my thought experiment the time S would be the time it
takes to put the data in the kernel's network buffer, not the time
it takes for the client to acknowledge receipt of that data.
The latter would indeed give a lower bound. But the former gives an
upper bound because if there is already sufficient space in the kernel
buffer then that time will essentially be 0 resulting in an infinite
bandwidth.
It's only when the buffer is already full and we need to wait for the
TCP stack to receive ACKs for old data that the calculated value may
be too low. But in that case I expect it will still be close to the
right value.
> > * If the time to write was already high and the calculated
> > bandwidth dropped, then it means the available network bandwidth
> > dropped. So decrease the stream bitrate.
>
> Here the problem, I think, is the calculated bandwidth.
> We should compute it using more global data, so as to include
> all possible streams (like sound) and connection usage
> (even image and cursor, for instance).
It would be nice, but my feeling is that the network usage of image and
cursor data is pretty bursty and unpredictable. Audio bandwidth is the
opposite: it should be pretty constant and predictable, so knowing it
would help.
Other video streams are a bit in the middle: if there is lots of
bandwidth available, their bitrate will be limited by the quantizer
cap, meaning it will depend a lot on the scene: low bandwidth on
simple scenes and higher bandwidth on complex ones. If bandwidth is
limited, all scenes will bump against the bitrate limit we impose on
the stream, meaning it should be more constant, and thus known and
predictable for the other streams.
> > * Since we have a bandwidth upper bound it should be higher than the
> > stream bitrate. If that's not the case it's another indicator that the
> > stream bitrate may be too high.
> >
>
> Here you mean basically that you cannot support such high bitrate
> on the stream and you should decrease the bitrate, right?
Yes.
> Maybe there is some terminology confusion: here by bitrate you mean the
> configured stream bitrate (set on gstreamer if gstreamer is used) and
> not the network one (bandwidth).
Yes.
[...]
> When you test the streaming do you look at the network queues?
> I find it very interesting, I usually keep a terminal window open
> with a command like "watch 'netstat -anp | grep 5900'".
I did not, but it could be interesting. In my tests I just tried to
limit the interface queue length to avoid excessive queuing in the
kernel (ifconfig lo txqueuelen 1), but I did not notice a clear impact.
It feels like getting data such as round-trip times for TCP-level ACKs
straight from the TCP stack could provide valuable information. I don't
know if other streaming applications have tried that.
--
Francois Gouget <fgouget at codeweavers.com>