Debugging video jerking problem
Wolfgang Grandegger
wg at grandegger.com
Tue Dec 11 18:03:55 UTC 2018
Hello,
On 11.12.2018 at 16:35, Nicolas Dufresne wrote:
> On Tuesday, 11 December 2018 at 09:41 +0100, Wolfgang Grandegger wrote:
>> Hello,
>>
>> I made some progress on that topic...
>>
>>> On 08.12.2018 at 19:34, Wolfgang Grandegger wrote:
>>>> On 07.12.2018 at 18:25, Wolfgang Grandegger wrote:
>>>> Hello,
>>>>
>>>> On 07.12.2018 at 14:43, Nicolas Dufresne wrote:
>>>>>
>>>>> On Fri, 7 Dec 2018 at 05:11, Wolfgang Grandegger <wg at grandegger.com> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> hmm, the long story: My camera records AVCHD video with 1920x1080 pixels
>>>>> and 50 frames/sec. Then I convert it to a 50 FPS raw-video AVI using:
>>>>>
>>>>> $ ffmpeg -i input.mts -vcodec rawvideo -y output-50fps.avi
>>>>>
>>>>> To get the same video running at 60 FPS I do:
>>>>>
>>>>> $ ffmpeg -i input.mts -filter:v "setpts=0.83333333*PTS" \
>>>>> -r 60 -vcodec rawvideo -y output-60fps.avi
>>>>>
>>>>> I assume that this video uses the same raw frames, just with different
>>>>> time stamps. What am I missing?
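>>>>>
>>>>> (One way to check that assumption, roughly sketched, is to compare the
>>>>> frame timestamps that ffprobe reports for the two files:
>>>>>
>>>>> $ ffprobe -select_streams v:0 -show_frames output-60fps.avi \
>>>>>       | grep pkt_pts_time | head
>>>>> )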
>>>>>
>>>>>
>>>>> Understand that this could also be the limit of what your receiving socket
>>>>> can hold with its current setting. It costs nothing to try to increase
>>>>> it; see buffer-size on udp-sink and the sysctl net.core.rmem_max setting of
>>>>> your Linux kernel (if Linux is your host OS).
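>>>>>
>>>>> (Just as a sketch, assuming the receiving element is udpsrc and picking an
>>>>> arbitrary 8 MB buffer; the port and the rest of the pipeline are
>>>>> placeholders:
>>>>>
>>>>> $ sudo sysctl -w net.core.rmem_max=8388608
>>>>> $ gst-launch-1.0 udpsrc port=5000 buffer-size=8388608 ! ...
>>>>> )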
>>>>
>>>> I already doubled both values. It does not help.
>>>>
>>>> I debugged the sink with "GST_DEBUG=basesink:6". If I look at the
>>>> timestamps of the lines:
>>>>
>>>> 0:00:03.215045610 basesink gstbasesink.c:2921:gst_base_sink_is_too_late:<vaapisink0> object was scheduled in time
>>>>
>>>> and calculate the time difference to the previous line in ms, I get:
>>>>
>>>> 16.631028
>>>> 33.438903
>>>> 233.507839
>>>> 16.668671
>>>> ...
>>>> 16.941117
>>>> 33.009663
>>>> 183.428079
>>>> 16.470837
>>>>
>>>> This is the jerking I see, I guess. With 50 FPS, it's always 20ms.
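>>>>
>>>> (For reference, the deltas above can be computed with a quick one-liner
>>>> along these lines, assuming the debug output was saved to sink.log:
>>>>
>>>> $ grep 'is_too_late' sink.log | awk '{
>>>>       split($1, t, /[:.]/)
>>>>       ts = (t[1]*3600 + t[2]*60 + t[3])*1000 + t[4]/1000000
>>>>       if (prev) print ts - prev
>>>>       prev = ts }'
>>>> )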
>>>> What I also notice at 60 FPS is that not all frames are recorded,
>>>> only 42.96 fps:
>>>>
>>>> # ffprobe /tmp/rec0.mp4
>>>>
>>>> encoder : VA-API H264 encoder
>>>> Duration: 00:00:43.20, start: 0.000000, bitrate: 33813 kb/s
>>>> Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 33809 kb/s, 42.96 fps, 60 tbr, 10k tbn, 20k tbc (default)
>>>> Metadata:
>>>> creation_time : 2018-12-07T17:18:53.000000Z
>>>> handler_name : VideoHandler
>>>> [h264 @ 0x62dae0] time_scale/num_units_in_tick invalid or unsupported (0/1)
>>>>
>>>> It looks like there is a resource limit somewhere :(.
>>>
>>> I now tried measuring the pipeline latency with GST_DEBUG="GST_TRACER:7"
>>> GST_TRACERS=latency. It shows me:
>>>
>>> 0:00:10.824079787 ... latency, src=(string)udpsrc0_src, sink=(string)display sink_sink, time=(guint64)78779611, ts=(guint64)10824035191;
>>> 0:00:10.837392036 ... latency, src=(string)udpsrc0_src, sink=(string)filesink0_sink, time=(guint64)52504281, ts=(guint64)10837353922;
>>> 0:00:10.843501996 ... latency, src=(string)udpsrc0_src, sink=(string)display sink_sink, time=(guint64)78006259, ts=(guint64)10843436324;
>>> 0:00:10.857081850 ... latency, src=(string)udpsrc0_src, sink=(string)filesink0_sink, time=(guint64)51733514, ts=(guint64)10857042971;
>>>
>>> Here it takes approx. 80 ms to display and 50 ms to record a frame. This
>>> is really cool stuff! I can now measure the effect of recording and text
>>> overlay, but it does not tell me the resource usage.
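>>>
>>> (For the record, the invocation looked roughly like this; the log file
>>> name and the pipeline itself are placeholders:
>>>
>>> $ GST_DEBUG="GST_TRACER:7" GST_TRACERS=latency GST_DEBUG_FILE=latency.log \
>>>       gst-launch-1.0 ...
>>> )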
>>>
>>> For that I tried "GST_TRACERS=stats;rusage". I have attached the output
>>> of "gst-stats-1.0". Does it tell me the CPU usage? Are these numbers
>>> documented somewhere?
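>>>
>>> (Roughly what I ran, with the pipeline itself as a placeholder:
>>>
>>> $ GST_TRACERS="stats;rusage" GST_DEBUG="GST_TRACER:7" \
>>>       GST_DEBUG_FILE=trace.log gst-launch-1.0 ...
>>> $ gst-stats-1.0 trace.log
>>> )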
>>
>> The tracing output of "rusage" analysed with "gst-plot-traces.sh" didn't
>> give meaningful results. It seems that it cannot deal properly with
>> multiple CPUs. Is this a known issue?
>>
>> Anyway, "GST_TRACERS=latency" and "GST_DEBUG=basesink:5" helped me to
>> understand a bit better what's going on. With 50 FPS, the latencies
>> reported by the latency tracer are rather constant, around 80 ms,
>> while with 60 FPS they fluctuate much more. Also, if I look at the
>> "rendering" lines of the "GST_DEBUG=basesink:5" log:
>>
>> 00:03.665643643 ... basesink gstbasesink.c:3538:gst_base_sink_chain_unlocked:<display sink> rendering object 0x7f6c8c0752a0
>>
>> I see that for 50 FPS, the difference between the timestamps is always
>> close to 20 ms:
>>
>> 19.986890
>> 19.950369
>> 20.245006
>> 19.663664
>> 20.054239
>> 19.919339
>>
>> while with 60 FPS, the time difference is often greater than 16.7 ms:
>>
>> 16.663874
>> 16.182956
>> 17.571549
>> 27.771846
>> 16.442217
>> 28.842019
>>
>> How does the default GStreamer pipeline compensate for the processing latency?
>
> It's not adaptive. You can increase it in your application using the
> processing-latency property on the sink (it's a recent property).
> Increasing that value will make the queues a bit more efficient at
> hiding the spikes and the processing load.

Should I see that property with "gst-inspect-1.0"? I'm using the
"vaapisink" sink and GStreamer version 1.14.4.

I just ran the pipeline on a more powerful x86 system... and the jerking
is gone! It's an Atom E3950 @ 1.6 GHz vs. an i7-6700TE @ 2.40 GHz.
Wolfgang.