v4l2src MJPEG to Videomixer slow - alternative pathways?

ashwath ashwath at bodega.ai
Thu Sep 7 20:10:52 UTC 2017


Hi all,

I'm currently working with some UVC cameras that support outputting both raw
frames (YUV) and MJPEG.  Due to USB 2.0 bandwidth restrictions, I'm hoping to
switch over to MJPEG so I can support higher-resolution video.

My current pipeline takes raw feeds from X cameras, composites them via
videomixer, and pushes the result into an appsink, which I read from my
Python application.
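
For reference, here's a simplified two-camera version of what I'm running;
device paths, caps, and layout positions are placeholders:

    v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 \
        ! videoconvert ! mix.sink_0
    v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 \
        ! videoconvert ! mix.sink_1
    videomixer name=mix sink_1::xpos=640 \
        ! videoconvert ! appsink name=out sync=false

(I build this with Gst.parse_launch and pull frames from the appsink.)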

When I try to switch over from raw feeds to MJPEG, the stream is extremely
laggy if I use more than 3 cameras; I imagine this has to do with the cost of
decoding the MJPEG frames, as well as with the timing involved with videomixer
(I decode the MJPEG feeds back to raw video because that's all videomixer
can support).
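
For concreteness, each MJPEG branch looks roughly like this (caps are again
placeholders):

    v4l2src device=/dev/video0 ! image/jpeg,width=1280,height=720,framerate=30/1 \
        ! jpegdec ! videoconvert ! mix.sink_0

i.e. every branch has to run a full JPEG decode before the mixer sees it.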

My question: is there a way to stream MJPEG frames into videomixer more
smoothly?  I've tried placing queues in different locations; sync is set to
false (on the appsink), and is-live and do-timestamp are set to true (on
v4l2src).
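
For example, one of the placements was a queue per branch, just before the
mixer:

    ... ! jpegdec ! videoconvert ! queue ! mix.sink_0

My understanding is that a leaky variant (queue leaky=downstream
max-size-buffers=1) would drop stale frames rather than let them pile up,
though I haven't confirmed whether that actually helps here.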

Otherwise, would it be possible to design my pipeline differently to support
MJPEG?  For example, for X cameras, stream frames into X separate appsinks and
composite them manually in Python/C (a rough sketch of this follows below),
though I worry that reading from X appsinks in Python will be slow.  Perhaps
there's a way to use tees I'm not thinking of?  Or I could send each MJPEG
feed over UDP on the local machine and read the streams back locally as well?
Any suggestions or thoughts on how this could work?
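
To make the multi-appsink idea concrete, here's a minimal sketch of what I
have in mind (two cameras, hypothetical caps; the compositing step is only
indicated by a comment):

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    Gst.init(None)

    # One pipeline per camera, each ending in its own appsink.
    # Device paths and caps are examples, not my real configuration.
    TEMPLATE = (
        "v4l2src device=/dev/video{n} do-timestamp=true "
        "! image/jpeg,width=1280,height=720,framerate=30/1 "
        "! jpegdec ! videoconvert ! video/x-raw,format=RGB "
        "! appsink name=sink emit-signals=true sync=false "
        "max-buffers=1 drop=true"
    )

    def on_new_sample(sink, cam_index):
        sample = sink.emit("pull-sample")
        buf = sample.get_buffer()
        ok, info = buf.map(Gst.MapFlags.READ)
        if ok:
            # info.data is one raw RGB frame from camera cam_index;
            # copy it into the right tile of a shared canvas here
            # (e.g. a numpy array) instead of using videomixer.
            buf.unmap(info)
        return Gst.FlowReturn.OK

    pipelines = []
    for n in range(2):
        p = Gst.parse_launch(TEMPLATE.format(n=n))
        sink = p.get_by_name("sink")
        sink.connect("new-sample", on_new_sample, n)
        p.set_state(Gst.State.PLAYING)
        pipelines.append(p)

    GLib.MainLoop().run()

I'd hope max-buffers=1 drop=true on each appsink keeps the branches from
backing up, but I don't know whether the per-frame Python callback overhead
becomes the bottleneck as the camera count grows.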

Thanks in advance,
Ashwath


