Tracer subsystem, how to interpret these measures?

solo_juan solo_juan at gmx.com
Sat Feb 1 16:30:05 UTC 2020


Hi,

I'm learning GStreamer. I'm trying to use the *Tracer* subsystem, but I don't
know how to interpret the measurements.

The data:

- Raspberry Pi 3
- Raspbian Buster
- Raspberry Camera Module V2
- gstreamer, version 1.17.1
- Capture data: Motion-JPEG, 426x240, at a bitrate of 429000 bps

*Pipeline 1 (15 fps):*

/opt/vc/bin/raspivid -t 60000 -cd MJPEG -w 426 -h 240 -fps 15 -b 429000 -vf
-hf \
-o - | GST_DEBUG="GST_TRACER:7" GST_DEBUG_FILE=240.log
GST_TRACERS="latency(flags=pipeline+element+reported)" \
gst-launch-1.0 -v  fdsrc do-timestamp=true ! \
"image/jpeg,width=426,height=240,framerate=15/1,payload=(int)26" ! \
jpegparse ! rtpjpegpay ! \
udpsink port=13000 host=192.168.1.111 && gst-stats-1.0 240.log

*Measure (pipeline 1):*

Latency Statistics:
0x20340f0.fdsrc0.src|0x20ff150.udpsink0.sink: mean=0:00:00.001291400
min=0:00:00.000751041 max=0:00:00.007009887

Element Latency Statistics:
0x21040f0.capsfilter0.src: mean=0:00:00.000236961 min=0:00:00.000096875
max=0:00:00.000554530
0x20ec2d0.jpegparse0.src: mean=0:00:00.000558282 min=0:00:00.000346250
max=0:00:00.006139732
0x20f80b8.rtpjpegpay0.src: mean=0:00:00.000458047 min=0:00:00.000248542
max=0:00:00.004859044

Element Reported Latency:
0x20340f0.fdsrc0: min=0:00:00.000000000 max=0:00:00.000000000
ts=0:00:00.644349393
0x21040f0.capsfilter0: min=0:00:00.000000000 max=0:00:00.000000000
ts=0:00:00.644468508
0x20ec2d0.jpegparse0: min=0:00:00.000000000 max=0:00:00.000000000
ts=0:00:00.644528403
0x20f80b8.rtpjpegpay0: min=0:00:00.000000000 max=99:99:99.999999999
ts=0:00:00.644595383
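As an aside, the clock-time strings in these statistics use GStreamer's H:MM:SS.NNNNNNNNN format with nanosecond precision. A small hypothetical helper (not a GStreamer API, just a parsing sketch) makes them easier to eyeball in milliseconds:

```python
# Convert a GStreamer clock-time string (H:MM:SS.NNNNNNNNN) to milliseconds.
# Hypothetical helper for reading gst-stats output; not part of GStreamer.
def clocktime_to_ms(s: str) -> float:
    hours, minutes, seconds = s.split(":")
    return (int(hours) * 3600 + int(minutes) * 60) * 1000.0 + float(seconds) * 1000.0

# End-to-end mean latency of pipeline 1 from the statistics above:
print(clocktime_to_ms("0:00:00.001291400"))  # ~1.29 ms
```

So the pipeline 1 mean fdsrc-to-udpsink latency is roughly 1.29 ms, while the `ts` fields are on the order of hundreds of milliseconds.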


*Pipeline 2 (rpicamsrc, 30 fps):*

GST_DEBUG="GST_TRACER:7" GST_DEBUG_FILE=240p.log
GST_TRACERS="latency(flags=pipeline+element+reported)" \
 gst-launch-1.0 rpicamsrc preview=0 rotation=180 bitrate=4429000 ! \
 "image/jpeg,width=1270,height=720,framerate=30/1" ! \
 jpegparse ! rtpjpegpay ! \
 udpsink port=13000 host=192.168.1.111 && gst-stats-1.0 240p.log

*Measure (pipeline 2):*

Latency Statistics:
0x485738.rpicamsrc0.src|0x550e98.udpsink0.sink: mean=0:00:00.002029413
min=0:00:00.000996300 max=0:00:00.007922902

Element Latency Statistics:
0x556118.capsfilter0.src: mean=0:00:00.000220204 min=0:00:00.000116354
max=0:00:00.000882290
0x53c2e8.jpegparse0.src: mean=0:00:00.000840342 min=0:00:00.000421510
max=0:00:00.006688112
0x54a0c8.rtpjpegpay0.src: mean=0:00:00.000968794 min=0:00:00.000458436
max=0:00:00.006141604

Element Reported Latency:
0x485738.rpicamsrc0: min=0:00:00.000000000 max=0:00:00.000000000
ts=0:00:00.473207970
0x556118.capsfilter0: min=0:00:00.000000000 max=0:00:00.000000000
ts=0:00:00.473302137
0x53c2e8.jpegparse0: min=0:00:00.000000000 max=0:00:00.000000000
ts=0:00:00.473359064
0x54a0c8.rtpjpegpay0: min=0:00:00.000000000 max=0:00:00.000000000
ts=0:00:00.473419220

I understand that in the first pipeline it takes *644.3* ms to receive the
data stream from the camera, and in the second pipeline it takes *473.2* ms.

Is this interpretation of the data correct? If so, why does it take so long
to acquire the stream?

Cheers.




