Ideal pipe for streaming live with minimal delay
Edward Hervey
bilboed at gmail.com
Wed Mar 14 01:21:39 PDT 2012
On Tue, 2012-03-13 at 21:40 +0100, Wim Taymans wrote:
> On 03/13/2012 08:59 PM, W.A. Garrett Weaver wrote:
> > You are right about latency being either from the network or the
> > sender. In my case it is from the sender. Changing the sender script
> > dramatically reduced latency. The thing that was changed was adjusting
> > the speed-preset option in the x264enc. My latest sender script is:
> >
> > gst-launch-0.10 v4l2src ! 'video/x-raw-yuv,width=640,height=480' !
> > x264enc bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink
> > host=244.1.1.1 port=5000 auto-multicast=true
>
> If you are using x264enc you should consider using tune=zerolatency as well.
To expand on that, what Wim is pointing out is that there is a
difference between processing speed and latency.
What you did only increased the processing speed... but the encoder
was still using reordered frame encoding (i.e. using information from
multiple neighboring frames to get potentially more
information-per-bit).
But that doesn't reduce the latency. The encoder still has to delay
its output by the number of reordered frames (which can go quite high,
like 15, 25 or even more frames). So it will only output encoded frame
X once it has received frame X+25. At 30 fps, those 25 frames alone
account for over 800 ms of latency.
By using tune=zerolatency, you are essentially telling the encoder to
produce a stream without any reordered frames (i.e. no B-frames),
allowing it to push out each encoded frame as soon as it's processed.
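For example, adapting your sender pipeline (untested sketch, keeping
your caps and bitrate):

  gst-launch-0.10 v4l2src ! 'video/x-raw-yuv,width=640,height=480' ! \
      x264enc bitrate=500 speed-preset=superfast tune=zerolatency ! \
      rtph264pay ! udpsink host=244.1.1.1 port=5000 auto-multicast=true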
That is the drastic solution. Depending on your goal, you could also
find a middle ground by allowing a lower-than-default number of
reordered frames (3-5), giving the encoder a chance to achieve a
better compression rate while still not introducing too much latency.
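For example (untested; the bframes property of x264enc limits the
number of consecutive B-frames):

  gst-launch-0.10 v4l2src ! 'video/x-raw-yuv,width=640,height=480' ! \
      x264enc bitrate=500 speed-preset=superfast bframes=3 ! \
      rtph264pay ! udpsink host=244.1.1.1 port=5000 auto-multicast=true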
Summary: processing speed != latency :)
Edward
P.S. You also want to put some queues just after v4l2src and just
after the encoder, otherwise you are processing everything in one
thread. By adding those queues, you are essentially putting capture,
encoding and payloading/sending into dedicated threads, reducing the
latency even further (the encoder doesn't need to block the capture,
and the payloading/sending doesn't need to block the encoding and
capture).
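Putting it all together, the full sender could look something like
this (untested sketch):

  gst-launch-0.10 v4l2src ! 'video/x-raw-yuv,width=640,height=480' ! \
      queue ! x264enc bitrate=500 speed-preset=superfast tune=zerolatency ! \
      queue ! rtph264pay ! udpsink host=244.1.1.1 port=5000 auto-multicast=true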