Pipeline Help
jawieman
jonathan.wieman at deepanalyticsllc.com
Wed Aug 5 21:09:58 UTC 2020
I am trying to adapt Google's Coral GStreamer example code to use a different
4K camera from e-con Systems (the See3CAM_CU135).
https://github.com/google-coral/examples-camera
I have a working pipeline that takes raw video input and writes to a video
loopback device (which then streams to the web).
Original working pipeline:

gst-launch-1.0 v4l2src device=/dev/video0 \
  ! video/x-raw,width=1280,height=720,framerate=30/1 ! tee name=t \
  t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! videoscale \
     ! video/x-raw,width=640,height=360 ! videobox name=box autocrop=true \
     ! video/x-raw,format=RGB,width=640,height=640 \
     ! appsink name=appsink emit-signals=true max-buffers=1 drop=true \
  t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert \
     ! rsvgoverlay name=overlay ! videoconvert \
     ! video/x-raw,format=UYVY ! v4l2sink device=/dev/video99
Is there any standard / preferred / simpler way to verify that linked
elements are compatible with each other before actually testing with
gst-launch-1.0?
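One common approach (not the only one) is to compare pad templates with gst-inspect-1.0 and then dry-run negotiation with a fakesink so no hardware output is touched; the device path and caps below are just the ones from this thread, adjust to taste:

```shell
# Show an element's pad templates -- the caps it can accept (sink pads)
# and produce (src pads). Two elements can link only if the upstream
# src caps intersect the downstream sink caps.
gst-inspect-1.0 videoconvert
gst-inspect-1.0 jpegdec

# Dry-run caps negotiation: -v prints the caps actually negotiated on
# every link; num-buffers stops the pipeline after a second of frames,
# and fakesink discards the output instead of writing to a device.
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=30 \
  ! video/x-raw,width=1280,height=720,framerate=30/1 \
  ! videoconvert ! fakesink
```

If negotiation fails you get a "could not link" or "not-negotiated" error naming the two elements, which is usually enough to spot the incompatible caps.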
I'd like to optimize the pipeline to use the camera's JPEG output instead,
in the hope of reducing the processor utilization spent on conversion.
My current stream:
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! video/x-raw,width=1280,height=720,framerate=30/1 ! tee name=t \
  t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! videoscale \
     ! video/x-raw,width=300,height=168 \
     ! videobox name=box autocrop=true \
     ! video/x-raw,format=RGB,width=300,height=300 \
     ! appsink name=appsink emit-signals=true max-buffers=1 drop=true \
  t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert \
     ! rsvgoverlay name=overlay ! videoconvert \
     ! jpegenc ! jpegdec ! videoconvert ! fpsdisplaysink sync=false
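For reference, a sketch of what the JPEG-input path might look like, assuming the camera advertises an image/jpeg (MJPEG) format at 1280x720@30 on /dev/video0 -- the caps and device paths here are assumptions, so verify them first with v4l2-ctl:

```shell
# List the formats the camera actually exposes before hard-coding caps.
v4l2-ctl -d /dev/video0 --list-formats-ext

# Request JPEG from the camera, decode it once, then tee as before.
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! image/jpeg,width=1280,height=720,framerate=30/1 \
  ! jpegdec ! videoconvert ! tee name=t \
  t. ! queue max-size-buffers=1 leaky=downstream ! videoscale \
     ! video/x-raw,width=300,height=168 \
     ! videobox name=box autocrop=true \
     ! video/x-raw,format=RGB,width=300,height=300 \
     ! appsink name=appsink emit-signals=true max-buffers=1 drop=true \
  t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert \
     ! rsvgoverlay name=overlay ! videoconvert \
     ! video/x-raw,format=UYVY ! v4l2sink device=/dev/video99
```

Whether this actually lowers CPU load depends on the hardware: jpegdec is a software decoder, so the win over raw capture comes mainly from the reduced USB bandwidth, not from skipping conversion.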