Best way to use OpenCV/TensorFlow detections to set videocrop limits
Anil Ramachandran
cloud9ine at gmail.com
Mon Dec 28 19:22:56 UTC 2020
Hi everyone,
I have been playing with GStreamer for the last few weeks, mostly with
gst-launch on the command line. My platform is a Raspberry Pi 4B.

Now I am trying to build a system where the video from the camera is split
into two paths: one runs object detection and returns bounding-box
coordinates for all detected objects in the frame, while the other passes
through a videocrop element. I want to set the videocrop element's
properties dynamically based on some processing of the detections (for
example, find all persons and pets, take their combined outer edges as the
top, bottom, left, and right limits, then normalize to a specific aspect
ratio). I do not want to route the video through OpenCV and crop it there
on the way to the output, because then the frame rate would be limited by
the processing rate of the OpenCV detector (4 or 5 fps on the RPi 4).
Instead, I want OpenCV to work at its own pace or even slower (for my
purpose, even 1 fps is fine) while the main pipeline keeps streaming the
video; every couple of seconds, the crop just gets adjusted to keep all
objects of interest in the frame.
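
For reference, here is a rough sketch of the pipeline shape I have in mind,
in Python with PyGObject. This is untested and the element choices are my
guesses: a tee splits the camera feed, one branch streams through videocrop
at full rate, and the other is throttled to 1 fps into an appsink for the
detector.

#!/usr/bin/env python3
# Rough sketch (untested guesswork on my part): one tee branch streams
# through videocrop at full rate, the other is throttled to 1 fps and
# delivered to an appsink for the detector.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import GLib, Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! videoconvert ! tee name=t "
    "t. ! queue ! videocrop name=crop top=0 bottom=0 left=0 right=0 "
    "! videoconvert ! autovideosink "
    "t. ! queue leaky=downstream ! videorate drop-only=true "
    "! video/x-raw,framerate=1/1 ! videoconvert ! video/x-raw,format=BGR "
    "! appsink name=detect_sink emit-signals=true max-buffers=1 drop=true"
)
crop = pipeline.get_by_name("crop")
detect_sink = pipeline.get_by_name("detect_sink")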
I see that you can put an OpenCV element in a GStreamer pipeline, or embed
a GStreamer pipeline in an OpenCV program. I also see that you can pass
data between elements using the GStreamer message bus. I am looking for
some general guidance on how someone with much more GStreamer mileage than
me would approach this. Once I have a general direction, I am happy to dive
in, learn the details, and figure it out.
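
To show where I am leaning, here is how I imagine the detector branch
feeding back into the crop (again untested; run_detector is just a
placeholder for whatever OpenCV/TensorFlow model I end up with, and I have
left out the aspect-ratio normalization for now):

import numpy as np

def on_new_sample(appsink):
    # Pull the latest low-rate frame from the detection branch.
    sample = appsink.emit("pull-sample")
    caps = sample.get_caps().get_structure(0)
    width = caps.get_value("width")
    height = caps.get_value("height")
    buf = sample.get_buffer()
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if not ok:
        return Gst.FlowReturn.ERROR
    try:
        frame = np.ndarray((height, width, 3), dtype=np.uint8,
                           buffer=mapinfo.data)
        # Placeholder for my model: returns the combined outer box
        # (x0, y0, x1, y1) of all persons and pets in the frame.
        x0, y0, x1, y1 = run_detector(frame)
    finally:
        buf.unmap(mapinfo)
    # videocrop takes pixels to remove from each edge, so convert the
    # bounding box into crop amounts and update the live element.
    crop.set_property("left", x0)
    crop.set_property("top", y0)
    crop.set_property("right", width - x1)
    crop.set_property("bottom", height - y1)
    return Gst.FlowReturn.OK

detect_sink.connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()  # keep the pipeline streaming

Does updating videocrop's properties on a playing pipeline like this make
sense, or would posting the box on the bus as an application message and
handling it in a bus watch be the more idiomatic route?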
Appreciate all help!
Anil