[gst-devel] HowTo: low latency video screen (dv1394src --> xvimagesink)?

Ola Theander ola.theander at otsystem.com
Thu Jan 12 09:46:06 CET 2006


Dear subscribers

I'm working on a video recording device that will be used for inspection
work, i.e. objects will be filmed and stored as video files on disk. Since
the camera is connected to the recording device by a long cable, the
operator uses the video image on the device's screen to guide the
navigation of the camera. To make that task as easy as possible, the
latency between the actual movement of the camera and the response on the
screen should be as low as possible, preferably on the order of 1/10 of a
second. My current implementation is decent, but to squeeze the last bit
of performance out of the box I hope I can get some feedback on my
pipeline design. Maybe it even needs to be redesigned.

My design looks basically like this (I'll try some ASCII art):

                                     --------------------------------
                                     |  ---> queue --> xvimagesink  |
                                     | /                            |
                                     -/------------------------------
                     ---> decodebin --
                    /                 -\-----------------------------
                   /                 |  \                           |
dv1394src --> tee -                  |   ---> queue --> alsasink    |
                   \                 --------------------------------
                    \
                     \
                      ---> filesink

I.e. my video source is the dv1394src, which fetches data from an external
composite-video-to-DV converter connected to the computer's FireWire port.
In the sketch, the dashed boxes mean that the boxed parts of the pipeline
run in their own threads, i.e. the videosink and the alsasink each have
their own thread. Later I will perhaps add an MPEG encoder on the branch
that goes to the filesink, to store the video in a more compact form than
DV.
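
For reference, here is roughly how I build the graph in code today. This is
a simplified, untested sketch: error handling, caps ownership and the
explicit 0.8 thread containers are left out, the output file name is made
up, and the decodebin signal name/signature is taken from its
documentation:

#include <gst/gst.h>

/* Link decodebin's dynamically created pads to the matching queue.
 * Signal name/signature assumed from the decodebin documentation;
 * caps ownership handling is omitted (it differs between 0.8 and 0.10). */
static void
on_new_decoded_pad (GstElement *decodebin, GstPad *pad,
                    gboolean last, gpointer user_data)
{
  GstElement **queues = user_data;   /* [0] = video queue, [1] = audio queue */
  GstCaps *caps = gst_pad_get_caps (pad);
  const gchar *type;
  GstElement *target;

  if (caps == NULL)
    return;
  type = gst_structure_get_name (gst_caps_get_structure (caps, 0));
  target = g_str_has_prefix (type, "video/") ? queues[0] : queues[1];

  gst_pad_link (pad, gst_element_get_pad (target, "sink"));
}

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *src, *tee, *dec, *vqueue, *aqueue;
  GstElement *vsink, *asink, *fsink;
  GstElement *queues[2];

  gst_init (&argc, &argv);

  pipeline = gst_pipeline_new ("capture");
  src    = gst_element_factory_make ("dv1394src",   "src");
  tee    = gst_element_factory_make ("tee",         "split");
  dec    = gst_element_factory_make ("decodebin",   "dec");
  vqueue = gst_element_factory_make ("queue",       "vqueue");
  aqueue = gst_element_factory_make ("queue",       "aqueue");
  vsink  = gst_element_factory_make ("xvimagesink", "vsink");
  asink  = gst_element_factory_make ("alsasink",    "asink");
  fsink  = gst_element_factory_make ("filesink",    "fsink");
  g_object_set (fsink, "location", "capture.dv", NULL);  /* example name */

  gst_bin_add_many (GST_BIN (pipeline), src, tee, dec, vqueue, aqueue,
                    vsink, asink, fsink, NULL);

  gst_element_link (src, tee);
  gst_element_link (tee, dec);       /* monitor branch                */
  gst_element_link (tee, fsink);     /* recording branch (raw DV)     */
  gst_element_link (vqueue, vsink);  /* decodebin -> queue links are  */
  gst_element_link (aqueue, asink);  /* made in the callback above    */

  queues[0] = vqueue;
  queues[1] = aqueue;
  g_signal_connect (dec, "new-decoded-pad",
                    G_CALLBACK (on_new_decoded_pad), queues);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}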

In the general case of recording it doesn't really matter whether there is
a short delay between the video data becoming available from the source
and it being stored on disk, as long as no frames are dropped. But in my
case, as I mentioned above, the device screen is used to guide the
operator, so low latency is essential. If there is a long delay before the
operator gets feedback on the screen from his movements, it is
considerably more difficult to navigate a long camera wire in tight
channels. Note that it doesn't matter if there is a slight delay in the
part of the pipeline that ends up in the filesink; the important thing is
the feedback on the screen. Therefore I need to improve the latency as
much as possible, and I hope you can provide me with some feedback.

Here are my basic questions/ideas:

- Minimize the number of threads? In an earlier stage of development I had
more threads in my pipeline, but I noticed that more threads seemed to
increase the delay. I am well aware that there is an overhead in switching
threads, but my early experiments showed that to get smooth
recording/playback you need to use threads to some extent; otherwise the
video/sound playback was very jerky.

- Shorten the amount of data cached in the queues? I'm not sure whether the
amount of data stored in the queue elements has a considerable impact on
the delay, but it seems reasonable to me that it would. Can anybody
confirm this, and perhaps even provide some recommendations on how to
dimension the queues? It's acceptable to drop a frame every now and then
if it improves the latency. The first sketch after these questions shows
roughly how I imagine capping the queues.

- Increase thread priority? I could increase the priority of the threads.
Would this be beneficial, or is there some built-in timing in the pipeline
that makes thread priority less relevant? The second sketch after these
questions shows the kind of thing I had in mind.

- Hardware performance? The device is based on a mini-ITX motherboard,
which basically means I can choose processors ranging from a VIA C3 up to
the Intel Pentium M/Pentium 4 class. In other words, I have quite capable
hardware available, but I don't want to overdo it either, since I want to
keep the price of each device down. How much will the hardware affect the
latency? Which class of hardware is enough?

- Should I expect any performance improvement when I upgrade to GStreamer
0.10 from my current 0.8.11? I plan to convert to the latest version this
spring.
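
Regarding the queue dimensioning question above, this is the kind of thing
I imagine: capping each display queue so it holds almost nothing and leaks
old data instead of backing up. The property names are taken from the
queue element documentation, the helper name is just something I made up,
and I haven't verified the exact property set on 0.8:

#include <gst/gst.h>

/* Hypothetical helper: make a queue as shallow as possible so it adds
 * little buffering latency and drops old data rather than backing up. */
static void
make_queue_shallow (GstElement *queue)
{
  g_object_set (queue,
                "max-size-buffers", 1,                      /* at most one buffer */
                "max-size-bytes",   0,                      /* 0 = no byte limit  */
                "max-size-time",    G_GUINT64_CONSTANT (0), /* 0 = no time limit  */
                "leaky",            2,                      /* leak downstream,
                                                               drop oldest when full */
                NULL);
}

In the pipeline sketch above this would be called on vqueue (and possibly
aqueue) right after the elements are created.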
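
And regarding thread priority, what I had in mind is plain POSIX realtime
scheduling for the capture process, something along these lines (needs
root, and it is only a sketch; an unbounded realtime process can lock up
the machine if it spins):

#include <sched.h>
#include <stdio.h>

/* Switch the calling thread to SCHED_FIFO; threads created afterwards
 * normally inherit the policy.  Priority range is 1 (low) to 99 (high). */
static int
enable_realtime (int priority)
{
  struct sched_param sp = { 0 };

  sp.sched_priority = priority;        /* e.g. 10 */
  if (sched_setscheduler (0, SCHED_FIFO, &sp) != 0) {
    perror ("sched_setscheduler");     /* typically EPERM when not root */
    return -1;
  }
  return 0;
}

I would call this (or something like it) early, before building the
pipeline, but I don't know whether it actually buys anything here or
whether the clocking inside the pipeline dominates the latency anyway.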

The software on the box is currently:

- Gentoo 2005.0 Linux
- Kernel 2.6.12
- X.Org 6.8.2
- GStreamer 0.8.11

Eventually I might skip X and use a framebuffer (e.g. DirectFB) instead. I
noticed that there is a DirectFB sink available, which I'll evaluate as
soon as possible.

Any help on this matter would be greatly appreciated.

Kind regards, Ola Theander




