Re: Drop frames if a filter (neural networks) is too slow

Carlos Rafael Giani crg7475 at mailbox.org
Mon Jul 30 07:59:49 UTC 2018


I guess you could play around with the QoS features in GStreamer, but 
for this I think relying on queue is actually much simpler. 
max-size-buffers=1,max-size-bytes=0,max-size-time=0,leaky=2 gives you a 
queue that stores just one frame, and you don't need more than that 
anyway. Why is an additional thread a problem?
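
In a launch line, that would look roughly like the untested sketch 
below. The tensor_converter/tensor_filter elements are the nnstreamer 
ones from your pipeline; v4l2src, videoconvert and fakesink are just 
placeholders here, and the tensor_filter configuration (framework, 
model, and so on) is omitted:

  # 1-buffer, downstream-leaky queue: while the filter is busy, old
  # frames are dropped, so it always picks up the newest camera frame.
  gst-launch-1.0 v4l2src ! videoconvert ! \
      queue max-size-buffers=1 max-size-bytes=0 max-size-time=0 leaky=2 ! \
      tensor_converter ! tensor_filter ! fakesink

leaky=2 is the downstream-leaky mode: when the queue is full, the 
oldest queued buffer is dropped to make room for the new one.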


On 2018-07-30 08:39, MyungJoo Ham wrote:
> Hi.
>
> Oh... it appears that with an additional queue element in front of the neural network filter, it works! Thanks!
>
> However, is there a way to do this without adding a queue between elements?
> Maybe because we would need a new thread to do this, I guess the answer is "no" if we want a simple solution.
>
> Thanks so much!
>
> Cheers,
> MyungJoo
>   
> --------- Original Message ---------
> Sender : Thornton, Keith <keith.thornton at zeiss.com>
> Date   : 2018-07-30 15:28 (GMT+9)
> Title  : Re: Drop frames if a filter (neural networks) is too slow
>   
> Hi, have you tried a queue with max-size-buffers=1,max-size-bytes=0,max-size-time=0,leaky=2?
>   
> -----Original Message-----
> From: gstreamer-devel [mailto:gstreamer-devel-bounces at lists.freedesktop.org] On behalf of MyungJoo Ham
> Sent: Monday, 30 July 2018 02:54
> To: gstreamer-devel at lists.freedesktop.org
> Cc: JIJOONG MOON <jijoong.moon at samsung.com>; Geunsik Lim <geunsik.lim at samsung.com>; Wook Song <wook16.song at samsung.com>; Jaeyun Jung <jy1210.jung at samsung.com>; Sangjung Woo <sangjung.woo at samsung.com>; Hyoungjoo Ahn <hello.ahn at samsung.com>; Jinhyuck Park <jinhyuck83.park at samsung.com>
> Subject: Drop frames if a filter (neural networks) is too slow
>   
>   
> Dear GStreamer Developers,
>   
>   
> I'm developing GStreamer filters that either use general neural network models as media filters or support such filters (by transforming, muxing/demuxing, or converting tensors).
>   
> https://github.com/nnsuite/nnstreamer
>   
>   
> One concern is that we have many use cases with heavy neural networks (e.g., latencies over 100 ms, and fluctuating) on live video streams from cameras, and we want to drop old pending video frames when a new video frame arrives while the filter is still processing the previous one (but not drop frames that are already being processed).
>   
>   
> In other words, in a stream like this:
>   
> Camera(v4l2) --> Neural Network (tensor_converter + tensor_filter) --> sink
>   
> let's assume that the camera is operating at 60 FPS and the neural network is processing at 1 FPS (although it's not really realistic to quote a fixed FPS for these networks, as they fluctuate a lot).
>   
> Then we want to process the 0th camera frame, then the 60th, then the 120th, and so on.
>   
> With common configurations (large queues), it processes the 0th, 1st, and 2nd frames, and drops newer frames rather than older ones when the queue is full.
>   
> Could you please enlighten me on which document to look at, or which part to implement, for this?
>   
>   
> Cheers,
> MyungJoo
> _______________________________________________
> gstreamer-devel mailing list
> gstreamer-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
>   
>   
> --
> MyungJoo Ham (함명주), Ph.D.
> Autonomous Machine Lab., AI Center, Samsung Research.
> Cell: +82-10-6714-2858
> _______________________________________________
> gstreamer-devel mailing list
> gstreamer-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel


