Using a queue element to store video frames
Ranti Endeley
ranti_endeley at hotmail.co.uk
Fri Nov 19 13:36:24 UTC 2021
Hi,
Thanks for all your input, guys. I have made a lot of progress and my
pipeline appears to be mostly working. Having placed the queue element
between my two plugins, I am now getting a consistent error message
from the queue:
ERROR queue gstqueue.c:1028:gst_queue_handle_sink_event:<tqueue> Failed
to push event
The downstream writer plugin still seems to receive the event. Does
this error mean I have done something wrong, or can it be ignored?
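For context, the trigger plugin pushes the event roughly like this (a
simplified sketch, not my actual code; the event name is made up):

#include <gst/gst.h>

/* Sketch: emit a custom serialized event downstream from the trigger
 * plugin's source pad. */
static gboolean
push_trigger_event (GstPad * srcpad)
{
  GstStructure *s = gst_structure_new_empty ("trigger/out-of-bounds");
  GstEvent *event = gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM, s);

  /* gst_pad_push_event() returns FALSE if an element downstream
   * refuses the event; the queue appears to log "Failed to push
   * event" in that case. */
  return gst_pad_push_event (srcpad, event);
}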
On 14/11/2021 14:05, Matthew Waters wrote:
> Hi,
>
> On 11/11/21 04:16, Ranti Endeley via gstreamer-devel wrote:
>> Hi,
>>
>> I am quite new to GStreamer. I am trying to develop a couple of plugins:
>>
>> * one which examines video frames in a buffer alongside inference
>>   metadata (object detection and classification) - this plugin then
>>   emits a custom out-of-bounds event downstream which is acted upon
>>   by the second plugin
>> * the second plugin is separated from the first by a queue (the idea
>>   being to block the queue until the custom event notifies it to
>>   start capturing the frames in the buffer - see the sketch after
>>   this list)
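>> Roughly, I imagine the capture plugin's sink event handler looking
>> something like this (sketch only; the names and the state flag are
>> placeholders):
>>
>>   #include <gst/gst.h>
>>
>>   static gboolean capturing = FALSE;   /* placeholder state flag */
>>
>>   static gboolean
>>   capture_sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
>>   {
>>     if (GST_EVENT_TYPE (event) == GST_EVENT_CUSTOM_DOWNSTREAM &&
>>         gst_event_has_name (event, "trigger/out-of-bounds")) {
>>       /* From here on, the chain function starts passing buffers
>>        * through to the writer. */
>>       capturing = TRUE;
>>     }
>>     return gst_pad_event_default (pad, parent, event);
>>   }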
>>
>> My idea is to use the queue (leaky=downstream) between the two plugins
>> to store the buffers until an event is detected by the first plugin. I
>> would like the queue to fill up with buffers and drop the oldest ones
>> until the downstream plugin is prepared to accept them. In theory this
>> should allow the second plugin to capture the video frames emitted
>> prior to the event, so that I can keep a record of some seconds of
>> video from before the event which triggered the capture (by increasing
>> the max-size-time property of the queue to match the amount of time I
>> want to store before an event).
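>> In code I would configure the queue along these lines (sketch; for
>> the queue element a max-size of 0 means "no limit"):
>>
>>   GstElement *queue = gst_element_factory_make ("queue", "tqueue");
>>
>>   g_object_set (queue,
>>       "leaky", 2,                        /* 2 = downstream */
>>       "max-size-buffers", 0,             /* unlimited buffer count */
>>       "max-size-bytes", 0,               /* unlimited byte count */
>>       "max-size-time", 60 * GST_SECOND,  /* keep ~60 s of history */
>>       NULL);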
>>
>
> Yes, this is possible and doable as you describe.
>
>> My pipeline is like this:
>> video/x-raw + inference metadata -> videoconvert ! trigger plugin !
>> queue max-size-buffers=G_MAXUINT max-size-bytes=G_MAXUINT
>> max-size-time=60000000000 leaky=downstream ! capture plugin !
>> videoconvert ! videoscale ! videorate ! theoraenc ! oggmux ! filesink
>> [hoping to store up to 60 seconds of video frames]
>>
>> Testing the above pipeline using videotestsrc (no metadata yet, just
>> passing through all frames) has brought up some issues that are a
>> little hard for me to understand.
>>
>> * The output video runs for much longer than expected (for example, 2
>>   seconds of runtime results in about 30 seconds of video)
>> * When the leaky=downstream option is set on the queue, frames are
>>   dropped much earlier than I would expect (leading to a very choppy
>>   output video - which, incidentally, is still longer than expected).
>>
>> My questions:
>>
>> * Is what I am trying to do possible with the pipeline I have
>>   described above? If not, why not, and what am I missing?
>> * Why is the length of the output video disproportionate to the run
>>   time of the pipeline?
>>
>
> If you are running a source that just outputs data as fast as possible
> (videotestsrc) with a sink element that doesn't synchronise on the
> clock (filesink), then you don't have anything 'realtime' here blocking
> the execution based on the buffer timestamps. The amount of resulting
> data depends on how fast it can be generated. Whether or not the queue
> drops data depends on whether downstream processes data faster or slower
> than upstream.
>
> You can either use videotestsrc is-live=true or insert a clocksync
> element at an appropriate place before the leaky queue.
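> Something like either of these, for example (sketch; fakesink stands
> in for the rest of your pipeline):
>
>   /* Option 1: a live source produces buffers in real time. */
>   GstElement *p1 = gst_parse_launch (
>       "videotestsrc is-live=true ! queue leaky=downstream "
>       "max-size-time=60000000000 ! fakesink", NULL);
>
>   /* Option 2: clocksync synchronises a non-live source's buffers
>    * to the clock before they reach the leaky queue. */
>   GstElement *p2 = gst_parse_launch (
>       "videotestsrc ! clocksync ! queue leaky=downstream "
>       "max-size-time=60000000000 ! fakesink", NULL);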
>
> Cheers
> -Matt
>
>> Thanks in advance for your assistance.
>>
>>
>