[Bug 733959] hlsdemux: download bitrate algorithm doesn't reflect real download rate

GStreamer (GNOME Bugzilla) bugzilla at gnome.org
Thu Feb 12 21:22:54 PST 2015


https://bugzilla.gnome.org/show_bug.cgi?id=733959

--- Comment #11 from Thiago Sousa Santos <thiagossantos at gmail.com> ---
(In reply to Duncan Palmer from comment #9)
> 
> In testing, where I've shaped port 80 on my http server to 5mbit/s using tc:
> - hlsdemux takes 2.93s to download a 3s hls segment. Of this, 2.44s is spent
> pushing buffers downstream, and 0.49s is spent retrieving buffers from
> souphttpsrc. Segment size is 1528440 bytes.
> - hlsdemux reports a bitrate of 23 mbit/s for this particular hls segment. 
> - Using wget to download the same segment on my device indicates the
> download rate is 4 mbit/s. 
> - If I mess about with the socket read buffer sizes, setting min to 1024 and
> max to 2048 using /proc/sys/net/ipv4/tcp_rmem , /proc/sys/net/core/rmem_max
> and /proc/sys/net/core/rmem_default , hlsdemux reports the rate as 8 mbit/s
> 
> As msko indicated, this behaviour was triggered by the multiqueue fix. If I
> set max-size-buffers on the first multiqueue to 0 (unlimited), the
> calculated bitrate is not too bad (though it is a bit high). This is because
> we're continually reading from souphttpsrc, and so spend a lot of that time
> fetching data over the network.
> 
> I'm not sure what the solution is. One simple solution would be to buffer up
> some amount of data in hlsdemux, and measure how long it takes to fill the
> buffer. Empty the buffer before fetching more data. The buffer would need to
> be large enough to reduce the effects of the socket read buffers. Using a
> larger blocksize for souphttpsrc would help as well, but there are downsides
> to this I imagine.
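
Just to put the quoted numbers side by side (plain arithmetic on the figures
above):

  1528440 bytes * 8              = ~12.2 Mbit
  12.2 Mbit / 0.49 s (soup time) = ~25 Mbit/s, the same ballpark as the
                                   23 Mbit/s hlsdemux reports
  12.2 Mbit / 2.93 s (total)     = ~4.2 Mbit/s, matching the wget figure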

This is a tricky issue. At some point we had an infinite queue inside hlsdemux
so we could safely measure the bitrate as we would never block on a full queue.
The problem is that this can be used to explode our memory usage with a
malicious stream.

Currently our algorithm counts only the time spent getting the data from
souphttpsrc: we stop our chronometer before pushing and restart it after the
push returns. This saves us from measuring the blocking time, but the network
and soup keep running while our thread is blocked anyway, so we cheat a little
by not counting time that was actually used to deliver us a few buffers.
Especially while we are blocked, the kernel can receive and hold some data for
us, and we never take that time into account.
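
In code it is roughly this pattern (a sketch only, with made-up names like
Demux and demux_chain; it is not the actual hlsdemux code):

#include <gst/gst.h>

/* Illustrative state only; the real element keeps equivalent fields. */
typedef struct
{
  GstPad *srcpad;
  gint64 clock_start;           /* when we last started waiting on souphttpsrc */
  gint64 download_time;         /* accumulated download time, in microseconds */
  guint64 download_bytes;       /* bytes received so far for this fragment */
} Demux;

static GstFlowReturn
demux_chain (Demux * demux, GstBuffer * buffer)
{
  GstFlowReturn ret;

  /* Stop the chronometer: everything up to this point was time spent
   * waiting on souphttpsrc for this buffer. */
  demux->download_time += g_get_monotonic_time () - demux->clock_start;
  demux->download_bytes += gst_buffer_get_size (buffer);

  /* This may block for a long time on a full downstream queue; that
   * time is deliberately not counted. */
  ret = gst_pad_push (demux->srcpad, buffer);

  /* Restart the chronometer once the push returns. */
  demux->clock_start = g_get_monotonic_time ();

  return ret;
}

/* Bits downloaded over the time actually spent waiting on souphttpsrc. */
static guint64
demux_get_bitrate (Demux * demux)
{
  if (demux->download_time == 0)
    return 0;
  return gst_util_uint64_scale (demux->download_bytes * 8,
      G_USEC_PER_SEC, demux->download_time);
}

Because the kernel keeps filling its socket buffers while we are blocked
inside gst_pad_push(), the next few buffers arrive almost instantly once we
return, so download_time stays much smaller than the real transfer time and a
4 Mbit/s link can look like 23 Mbit/s.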

I don't see any other option for a good enough measurement that doesn't
involve an infinite (or very, very large) queue. As I said, that can be
exploited to consume the machine's RAM, so we would need to write to disk,
where our limits are much larger. The downside is that disk is also much
slower. We could, however, create a new hybrid type of queue that keeps its
oldest data always available in memory while the newest data goes to disk
storage.

It would actually work like 2 queues:

-> [ disk queue ] -> [ mem queue ] ->

souphttpsrc would push data into the first (disk) queue, another thread would
take data from the mem queue and push it downstream, and a third thread would
sit in the middle, taking data from disk and putting it into the mem queue.
The trick is that the input thread skips the disk queue whenever it is empty
and there is space in the mem queue; the disk is only used once memory is full
and we have enough buffering not to care about the disk latency.
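
Something along these lines, as a sketch with invented names (HybridQueue and
friends), where a plain GQueue stands in for real disk-backed storage and only
the input thread's routing decision is shown:

#include <gst/gst.h>

typedef struct
{
  GQueue mem_queue;             /* oldest data, always ready to push from RAM */
  GQueue disk_queue;            /* overflow; a real version would spill to a temp file */
  gsize mem_level;              /* bytes currently held in the mem queue */
  gsize mem_max;                /* RAM budget */
  GMutex lock;
  GCond cond;
} HybridQueue;

static void
hybrid_queue_init (HybridQueue * q, gsize mem_max)
{
  g_queue_init (&q->mem_queue);
  g_queue_init (&q->disk_queue);
  q->mem_level = 0;
  q->mem_max = mem_max;
  g_mutex_init (&q->lock);
  g_cond_init (&q->cond);
}

/* Input thread: called for every buffer coming from souphttpsrc. */
static void
hybrid_queue_push (HybridQueue * q, GstBuffer * buf)
{
  gsize size = gst_buffer_get_size (buf);

  g_mutex_lock (&q->lock);
  if (g_queue_is_empty (&q->disk_queue) && q->mem_level + size <= q->mem_max) {
    /* Fast path: nothing is waiting on disk and RAM has room, so the
     * disk queue is skipped entirely. */
    g_queue_push_tail (&q->mem_queue, buf);
    q->mem_level += size;
  } else {
    /* RAM is full, or older data already sits on disk and ordering must
     * be preserved: spill this buffer to the disk queue instead. */
    g_queue_push_tail (&q->disk_queue, buf);
  }
  g_cond_signal (&q->cond);
  g_mutex_unlock (&q->lock);
}

/* The other two threads are omitted here: one moves buffers from disk_queue
 * into mem_queue whenever mem_level drops below mem_max, the other pops from
 * mem_queue and pushes downstream. */

mem_max is the knob here: it bounds RAM usage while still letting the common
case avoid the disk entirely.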


