Using a queue element to store video frames

Eslam Ahmed eslam.ahmed at avidbeam.com
Sun Nov 14 09:40:05 UTC 2021


Hi,

In the spirit of joining your discussion,

   - What makes you think that these queues will always hold the number of
   buffers that you require at any given moment? Aren't you just setting
   their maximum limit?

On the other hand, I would be curious to know whether your method works,
because I have tackled this problem via a totally different approach.
The idea is to have two pipelines. One acts as the trigger: it contains just
the inference plugin, and whenever it finds something interesting it signals
(via a method of your choice) the application controlling the second
pipeline. The second pipeline runs concurrently with the first and ends in
an appsink, which lets you retain the most recent frames in a data-structure
queue of whatever size you choose. Once signalled, that application spins up
a new on-demand pipeline with an appsrc just to write/record the event.
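
A rough sketch of the buffering/replay side, in case it makes the idea
clearer (this assumes Python/PyGObject; RING_SIZE, on_trigger() and the
pipeline strings are placeholders rather than anything from a real
implementation):

    import collections
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import GLib, Gst

    Gst.init(None)

    RING_SIZE = 300  # how many samples to retain, e.g. ~10 s at 30 fps
    ring = collections.deque(maxlen=RING_SIZE)

    # the always-running second pipeline ends in an appsink
    pipeline = Gst.parse_launch(
        "videotestsrc is-live=true ! videoconvert ! "
        "appsink name=sink emit-signals=true")
    appsink = pipeline.get_by_name("sink")

    def on_new_sample(sink):
        sample = sink.emit("pull-sample")  # GstSample = buffer + caps
        ring.append(sample)                # deque drops the oldest sample itself
        return Gst.FlowReturn.OK

    appsink.connect("new-sample", on_new_sample)
    pipeline.set_state(Gst.State.PLAYING)

    # once the trigger pipeline signals, an on-demand pipeline with an appsrc
    # replays the retained samples and records them, roughly like this:
    def on_trigger():
        rec = Gst.parse_launch(
            "appsrc name=src format=time ! videoconvert ! "
            "theoraenc ! oggmux ! filesink location=event.ogg")
        appsrc = rec.get_by_name("src")
        appsrc.set_property("caps", ring[0].get_caps())
        rec.set_state(Gst.State.PLAYING)
        for sample in list(ring):
            appsrc.emit("push-buffer", sample.get_buffer())
        # keep pushing live samples until the event is over, then:
        appsrc.emit("end-of-stream")

    GLib.MainLoop().run()  # keep the buffering pipeline running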

Here's my post on stackoverflow to get the full idea:
https://stackoverflow.com/questions/68066985/how-to-write-specific-time-interval-of-gstsamples-rtp-over-udp-h264-packets

Hope that helps!

Best Regards,
Eslam Ahmed


On Wed, Nov 10, 2021 at 8:30 PM Ranti Endeley via gstreamer-devel <
gstreamer-devel at lists.freedesktop.org> wrote:

> Hi,
>
> I am quite new to gstreamer. I am trying to develop a couple of plugins.
>
>    - one which examines video frames in a buffer alongside inference
>    metadata (object detection and classification) - this plugin then emits a
>    custom out-of-bounds event downstream which is acted upon by the second
>    plugin (a rough sketch of what I mean by the custom event follows this
>    list)
>    - the second plugin is separated from the first by a queue (the idea
>    being to block the queue until the custom event notifies it to start
>    capturing the frames in the buffer)
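>
> Something like the following is what I have in mind for the custom event
> (a rough sketch assuming Python/PyGObject - I am not sure it is exactly
> right; the structure name "trigger-capture", the pads trigger_srcpad /
> capture_sinkpad and start_capturing() are placeholders):
>
>     import gi
>     gi.require_version("Gst", "1.0")
>     from gi.repository import Gst
>
>     # trigger plugin side: push a serialized custom downstream event
>     # (trigger_srcpad would be the trigger plugin's src pad)
>     structure = Gst.Structure.new_empty("trigger-capture")
>     event = Gst.Event.new_custom(Gst.EventType.CUSTOM_DOWNSTREAM, structure)
>     trigger_srcpad.push_event(event)
>
>     # capture plugin side: watch for that event on the sink pad
>     def sink_event_probe(pad, info):
>         event = info.get_event()
>         if (event.type == Gst.EventType.CUSTOM_DOWNSTREAM
>                 and event.get_structure().get_name() == "trigger-capture"):
>             start_capturing()  # placeholder for whatever starts writing frames
>         return Gst.PadProbeReturn.OK
>
>     capture_sinkpad.add_probe(Gst.PadProbeType.EVENT_DOWNSTREAM,
>                               sink_event_probe)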
>
> My idea is to use the queue (leaky=downstream) between the two plugins to
> store the buffers until an event is detected by the first plugin. I would
> like the queue to fill up with buffers and drop the oldest ones until the
> downstream plugin is prepared to accept them. In theory this should allow
> the second plugin to capture the video frames emitted prior to the event,
> so that I have a record of some seconds of video from before the event
> that triggered the capture (by increasing the max-size-time property of
> the queue to match the amount of time I want to store before an event).
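>
> For reference, configuring the queue programmatically would look roughly
> like this (a sketch in Python/PyGObject, assuming the pipeline was built
> elsewhere and the queue was named "buf"; setting max-size-buffers and
> max-size-bytes to 0 disables those limits entirely, which is what I am
> trying to approximate with G_MAXUINT below):
>
>     queue = pipeline.get_by_name("buf")
>     queue.set_property("max-size-buffers", 0)             # 0 = no buffer-count limit
>     queue.set_property("max-size-bytes", 0)               # 0 = no byte limit
>     queue.set_property("max-size-time", 60 * Gst.SECOND)  # keep up to 60 seconds
>     queue.set_property("leaky", 2)                        # 2 = leaky=downstream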
>
> My pipeline is like this:
> video/x-raw + inference metadata -> videoconvert ! trigger plugin ! queue
> max-size-buffers=G_MAXUINT max-size-bytes=G_MAXUINT
> max-size-time=60000000000 leaky=downstream ! capture plugin ! videoconvert
> ! videoscale ! videorate ! theoraenc ! oggmux ! filesink
> [hoping to store up to 60 seconds of video frames]
>
> Testing the above pipeline using videotestsrc (no metadata yet, just
> passing through all frames) has brought up some issues that are a little
> hard for me to understand.
>
>    - The output video runs for much longer than expected (for example 2
>    seconds of runtime results in about 30 seconds of video)
>    - When leaky=downstream is set on the queue, frames are dropped much
>    earlier than I would expect (leading to a very choppy output video -
>    which, incidentally, is still longer than expected).
>
> My questions:
>
>    - Is what I am trying to do possible with the pipeline I have
>    described above? If not, why not, and what am I missing?
>    - Why is the length of the output video disproportionate to the run
>    time of the pipeline?
>
> Thanks in advance for your assistance.
>
>
>