Latency compensation

Wolfgang Grandegger wg at grandegger.com
Wed Dec 12 21:40:48 UTC 2018


On 12.12.2018 at 22:08, Nicolas Dufresne wrote:
> On Wednesday, 12 December 2018 at 21:51 +0100, Wolfgang Grandegger
> wrote:
>> Hello,
>>
>> On 12.12.2018 at 20:50, Nicolas Dufresne wrote:
>>> On Wednesday, 12 December 2018 at 20:38 +0100, Wolfgang Grandegger
>>> wrote:
>>>> Hello,
>>>>
>>>> I'm trying to understand how the processing latency is compensated for
>>>> in this simple pipeline:
>>>>
>>>>   $ gst-launch-1.0 -v udpsrc port=50004 buffer-size=180000000 \
>>>>     caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, \
>>>>     encoding-name=(string)JPEG, payload=(int)26, framerate=(fraction)60/1" \
>>>>     ! rtpjitterbuffer latency=0 \
>>>>     ! rtpjpegdepay \
>>>>     ! vaapijpegdec \
>>>>     ! vaapisink
>>>>
>>>> GST_TRACERS="latency" reports the following latency from the src to the sink:
>>>>
>>>>   61495822
>>>>   61865042
>>>>   61219613
>>>>   61537702
>>>>
>>>> If I use "sync=false" on the "vaapisink", I see that the latency is just 2 ms.
>>>> Is there some latency adaptation taking place? How does it work? How can I
>>>> change it?
>>>
>>> We are still working on improving the latency tracer. Which version of
>>> GStreamer are you running?
>>
>> The latest version 1.14.4
>>
>>> Note that the latency reported by the latency tracer is the effective
>>> latency. So it's a combination of the processing and the buffering latency. With
>>
>> That is also my thinking. With "latency=20" instead of "latency=0",
>> the reported latency is approx. 20 ms longer.
>>
>>> udpsrc, depending on which start time has been decided and how much
>>> latency is declared by each element, you'll always get some more
>>> latency than when disabling sync on the sink, which keeps the queues
>>> low but renders in bursts, and not every image might reach the screen.
>>
>> How do the elements calculate their latency?
> 
> It depends on the context. Some don't, like the jitterbuffer; you
> simply let the app configure it. Most of the latency reported by
> elements is buffering latency, so the size of the buffers creates the
> latency. Some of this buffering can be part of the decoding process
> and depends on how the stream was encoded in the first place.

I see. I have two systems, an Atom E3950 @ 1.6 GHz and an i7-6700TE @
2.40 GHz. The same latencies are used for both, even though the faster
i7-6700TE could do the job in less time (i.e. with a shorter latency).

I then need to figure out how the app can configure the latency.
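For what it's worth, one way an application can take control of the
overall latency, rather than relying on the value computed by the
automatic LATENCY query, is gst_pipeline_set_latency() (available since
GStreamer 1.6). A minimal sketch using the PyGObject bindings; the
launch line here is a placeholder, not the full pipeline from this
thread:

```python
# Sketch, assuming GStreamer >= 1.6 and the PyGObject bindings are
# installed; the pipeline string is a placeholder.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch("videotestsrc ! autovideosink")

# Override the latency computed by the automatic LATENCY query and
# force 20 ms for the whole pipeline.
pipeline.set_latency(20 * Gst.MSECOND)

pipeline.set_state(Gst.State.PLAYING)
```

Whether forcing a lower latency than the elements declare is safe
depends on the pipeline; too small a value can cause late buffers to be
dropped at the sink.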

>> What I find with GST_DEBUG="basesink:7":
>>
>>   0:00:14.179298552 ... basesink gstbasesink.c:4509:gst_base_sink_send_event:<vaapisink0>. 
>>   sending event 0x7ff5940065d0 latency event: 0x7ff5940065d0, time 99:99:99.999999999, 
>>   seq-num 65, GstEventLatency, latency=(guint64)80000000;
>>
>> I wonder where the 80ms come from?
>>
>> If I add "render-delay=40" to the "vaapisink", I find 120 ms in the log 
>> line above. I think that's the extra processing delay you mentioned.
>> This indeed reduces the jerking, at the price of a longer latency.
> 
> No, there is a new processing-deadline= property. If my memory is
> right, render-delay creates a delay between sinks and is used to fix
> A/V synchronisation manually, while processing-deadline adds to the
> global latency and is shared among sinks.

OK, it's not in 1.14.4 but in the master branch. Is there a schedule
for the next release?
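For reference, the distinction Nicolas describes would look like this
on a launch line (the 40 ms values are illustrative only, the pipeline
prefix is abbreviated, and processing-deadline requires a release newer
than 1.14):

```
# render-delay: per-sink offset in nanoseconds, for manual A/V alignment
... ! vaapisink render-delay=40000000

# processing-deadline: counts toward the global pipeline latency (ns)
... ! vaapisink processing-deadline=40000000
```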

Thanks,

Wolfgang.


