Decoupling branches

c_mac caroline.mckee at tealdrones.com
Wed Aug 28 19:28:58 UTC 2019


I'm a GStreamer newbie. I'm working on real-time object detection
inference on a video stream. My GPU is not powerful enough to process
every frame in real time (30 fps), so I am wondering whether it would be
possible to send only every 3rd frame down the inference branch after
the tee, and then perhaps introduce some sort of delay in displaying the
detection overlay. I'm using GstInference
(https://github.com/RidgeRun/gst-inference); I'm currently reading from
a file because the rest of the video pipeline is still in development.
I would like to read from the file at 30 fps but run inference at only
10 fps, displaying the overlay on every third frame. Here is my current
pipeline:

gst-launch-1.0 filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! tee name=t \
  t. ! queue ! videoscale ! net.sink_model \
  t. ! queue ! net.sink_bypass \
  tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tensorflow \
    backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
  net.src_bypass ! detectionoverlay labels="$(cat $LABELS)" font-scale=2 \
    thickness=2 ! videoconvert ! xvimagesink sync=false
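
In case it clarifies what I mean, here is roughly the change I was
imagining. I'm not sure videorate is the right tool for this, so treat
it as a sketch: drop the model branch down to 10 fps with videorate
while leaving the bypass branch at the full 30 fps (and maybe later add
a min-threshold-time on the bypass queue for the display delay):

gst-launch-1.0 filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! tee name=t \
  t. ! queue ! videorate ! video/x-raw,framerate=10/1 ! videoscale ! net.sink_model \
  t. ! queue ! net.sink_bypass \
  tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tensorflow \
    backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
  net.src_bypass ! detectionoverlay labels="$(cat $LABELS)" font-scale=2 \
    thickness=2 ! videoconvert ! xvimagesink sync=false

I don't know whether GstInference copes with the model pad seeing fewer
frames than the bypass pad, or whether the overlay would still line up
with the right frames, so any guidance on that would be great.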

Is this possible? Any help or alternative approaches would be much
appreciated. Right now the entire video rendering is slowed down to the
speed of the inference. 
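
One more idea I had (again, just a guess on my part): making the queue
on the model branch leaky, so that when the inference element can't keep
up the extra frames are dropped at that queue instead of backing up
through the tee and stalling the display branch, i.e. changing the model
branch to something like

  t. ! queue leaky=downstream max-size-buffers=2 ! videoscale ! net.sink_model \

with the rest of the pipeline unchanged. I haven't tried this yet, so I
don't know whether it interacts badly with GstInference.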




