adaptive bitrate elements - flawed bitrate calculation mechanism?

Duncan Palmer dpalmer at digisoft.tv
Sun Feb 8 23:19:41 PST 2015


Hi guys,

I've found that the bitrate calculated by hlsdemux can be wildly
inaccurate under some circumstances. The newer adaptive bitrate base
class uses the same algorithm, so I imagine it suffers from the same
problem.

The current mechanism for calculating the bitrate relies on cumulative
measurements of the time taken to do small read operations (I find
that _src_chain() generally gets buffers about 1K in size).
Individually, these measurements can be inaccurate, as data often
comes directly from the socket receive buffer. If hlsdemux spends most
of its time waiting for data from souphttpsrc, the individual
inaccuracies even out somewhat over time, because we regularly have to
wait for data to be fetched over the network. However, if hlsdemux
does not spend much of its time waiting for data from souphttpsrc
(e.g. because it's blocked pushing data to a downstream queue while
the pipeline is in a steady buffering state), then these inaccuracies
do not even out: we never have to wait for data to be downloaded, and
the download bitrate calculation produces a number which is way too
high.

I've experimented with setting the socket receive buffer size in
souphttpsrc to 2k. This vastly improves the accuracy of the bitrate
figures, though I still find they are generally too high. I'm testing
by using tc to shape port 80 on my server, and comparing the bitrates
calculated by hlsdemux against both my tc configuration and the
figures I get by timing how long wget takes to download a fragment.

I've observed a couple of behaviours resulting from this:
- We keep trying to switch to a variant whose bitrate is too high.
  This happens when we start with a pipeline in which all queues are
  full, and so are often in overrun.
- We sit on one variant while the queue in the pipeline drains over a
  period of a few fragments, then emit a BUFFERING message, whereupon
  I pause. Repeat ad infinitum. I think this occurs because the first
  multiqueue inserted by decodebin has a max-buffers limit of 2, and
  so can overrun even though the downstream multiqueue has space. I'm
  not entirely sure of the exact cause, but I do observe that the
  calculated bitrate in this circumstance is way too high, which
  prevents a variant switch.

Can anyone comment on this?

I'd like to implement a proper solution. The only solutions which
spring to mind are:
- Have an internal queue in hlsdemux whose sole purpose is to
  facilitate the bitrate calculation. We never start downloading a
  fragment until this queue is empty, so the measured download time
  reflects the network rather than downstream backpressure. We could
  end up using quite a lot of memory doing this.
- Look at increasing the souphttpsrc buffer-size. I haven't tried
  this, but imagine it may improve things. The default receive buffer
  size on my machine is 87380 bytes, so we'd want to make buffer-size
  fairly large to have an impact.

Can anyone offer other alternatives?

Regards,
Dunk
