Ideal pipe for streaming live with minimal delay

W.A. Garrett Weaver weaverg at
Fri Mar 16 01:09:45 PDT 2012

Wim, Edward, thank you for replying. Sorry it's been a few days; I've gotten
caught up in other projects.

The wealth of information you've given me has been amazing. Thank you all
so much.

First of all: I tried to get rid of the sync=false attribute in the
receiving script. For some reason, just this time, merely removing it did
not create a very choppy video feed like it did the last time I had the
receiving computer on (which is an Atom-based net-top). However, I still
got the "this computer may be too slow or there may be a timestamp error"
message, and the video had tremendous amounts of artifacts during fast
movement. I had to increase the latency of gstrtpjitterbuffer to about
500 ms to eliminate that message. To reduce the artifacts, I set
drop-on-latency to false.

So my receiving script now looks like:

gst-launch-0.10 udpsrc multicast-group= auto-multicast=true
port=5000 caps=application/x-rtp ! gstrtpjitterbuffer drop-on-latency=false
latency=500  ! rtph264depay ! ffdec_h264 ! xvimagesink

Although it does add some latency, it's not too bad.

Now onto the sending script:

Adding in the queues seemed to help, but not significantly, and I couldn't
quantitatively measure a difference. If adding queues adds more threads to
the process, would this only benefit multi-core systems? I've heard that
multi-threaded applications can work better even on single-core machines.
I'd like to know whether this would be any benefit on a single core,
because I'd like to port these scripts to a little ARM-based computer
platform called the BeagleBoard.
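My rough intuition for why the extra threads can help even on one core: an
ordinary shell pipeline also runs each stage concurrently, much like queues
give each part of a GStreamer pipeline its own thread. A toy timing demo
(plain sleep stages standing in for blocking work, nothing GStreamer-specific):

```shell
# Toy demo: two stages run back-to-back vs. connected by a pipe.
# A shell pipeline starts all its stages at once, so they overlap.
t0=$(date +%s%N)
sleep 1; sleep 1              # sequential: one "thread", ~2 s total
t1=$(date +%s%N)
sleep 1 | sleep 1             # piped: both stages run at once, ~1 s total
t2=$(date +%s%N)
echo "sequential: $(( (t1 - t0) / 1000000 )) ms"
echo "piped:      $(( (t2 - t1) / 1000000 )) ms"
```

If that analogy holds, a single core should still gain whenever one stage is
blocked on I/O (like the capture element waiting on the camera) rather than
on the CPU; when every stage is CPU-bound, there's much less to overlap.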

Setting tune to zerolatency sped things up tremendously, cutting the delay
roughly in half.

So my current send script looks like this:
gst-launch-0.10 v4l2src ! queue ! 'video/x-raw-yuv,width=640,height=480' !
x264enc bitrate=500 speed-preset=superfast tune=zerolatency ! queue !
rtph264pay ! udpsink host= port=5000 auto-multicast=true

I did some research about the speed-preset option. Apparently it adjusts a
lot of individual parameters of the video compression as an easy way to
trade quality for speed. One of the parameters it changes is the number of
B-frames. I don't know very much about video compression, but I believe
B-frames are frames predicted from both previous and future frames. By
eliminating them you're only going to have I- and P-frames, which removes
one of the encoder's ways of compressing the video, so there is the
potential for the stream to take up more bandwidth.
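If I understand Edward's explanation below correctly, the latency cost of
reordered frames is easy to estimate: the encoder has to hold back roughly
(number of reordered frames) / (framerate) seconds of video before it can
emit anything. A quick back-of-the-envelope check, using an assumed 30 fps
capture rate and the 25-frame high-end case Edward mentions:

```shell
# Rough estimate: extra encoder latency = reordered frames / framerate.
fps=30        # capture rate (my assumption)
reorder=25    # reordered frames, high-end case from Edward's reply
awk -v f="$fps" -v r="$reorder" \
    'BEGIN { printf "%.0f ms of added latency\n", r / f * 1000 }'
```

By the same arithmetic, the 3-5 reordered frames of Edward's middle ground
would only cost about 100-170 ms at 30 fps, which seems like a reasonable
trade next to the 500 ms I'm already spending in the jitterbuffer.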

Here are some of my sources:

It also seems like there are a tremendous number of parameters available
for tailoring video compression to the needs of an application.
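One thing I may try next is Edward's middle-ground suggestion from below.
This is an untested sketch; I'm assuming x264enc's bframes property is the
right knob for capping the number of reordered frames once tune=zerolatency
is dropped:

```shell
# Untested sketch: allow a few B-frames (bframes=3) instead of none,
# trading ~100 ms of encoder latency at 30 fps for better compression.
gst-launch-0.10 v4l2src ! queue ! 'video/x-raw-yuv,width=640,height=480' ! \
  x264enc bitrate=500 speed-preset=superfast bframes=3 ! queue ! \
  rtph264pay ! udpsink host= port=5000 auto-multicast=true
```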

On Wed, Mar 14, 2012 at 1:21 AM, Edward Hervey <bilboed at> wrote:

> On Tue, 2012-03-13 at 21:40 +0100, Wim Taymans wrote:
> > On 03/13/2012 08:59 PM, W.A. Garrett Weaver wrote:
> > > You are right about latency being either from the network or the
> > > sender. In my case it is from the sender. Changing the sender script
> > > dramatically reduced latency. The thing that was changed was adjusting
> > > the speed-preset option in the x264enc. My latest sender script is:
> > >
> > > gst-launch-0.10 v4l2src ! 'video/x-raw-yuv,width=640,height=480' !
> > > x264enc bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink
> > > host= port=5000 auto-multicast=true
> >
> > If you are using x264enc you should consider using tune=zerolatency as
> well.
>   To expand on that, what Wim is pointing out is that there is a
> difference between processing speed and latency.
>  What you did only reduced the processing time... but the encoder was
> still using reordered frame encoding (i.e. use the information from
> multiple neighboring frames to end up with a potentially better
> information-per-bit).
>  But that doesn't reduce the latency. The encoder will have to delay
> the output by the number of reordered frames (which can go quite high,
> like 15, 25 or even more frames). So it will only output the encoded
> frame X when it received frame X+25.
>  By using the zerolatency preset, you are essentially telling the
> encoder to produce a stream without any reordered frames (i.e. no B
> frames), allowing it to push out the encoded frame as soon as it's
> processed.
>  This is the very drastic solution, you could also, depending on what
> your goal is, find a middle ground by allowing a lower-than-default
> number of reordered frames (3-5), thereby allowing the encoder a chance
> to produce a better compression rate while at the same time not
> introducing a too high latency.
>  Summary : processing speed != latency :)
>    Edward
> P.S. You also want to put some queues just after v4l2src and just after
> the encoder. Otherwise you are processing everything in one thread. By
> adding those queues, you are essentially putting capture, encoding and
> payloading/sending into dedicated threads, reducing even more the
> latency (the encoder doesn't need to block the capture, and the
> payloading/sending doesn't need to block the encoding and capture).
> _______________________________________________
> gstreamer-devel mailing list
> gstreamer-devel at

W.A. Garrett Weaver
weaverg at
