Optimize serving images to disk using a queue of H264 encoded GstSamples.

Eslam Ahmed eslam.ahmed at avidbeam.com
Sun Sep 25 07:38:21 UTC 2022


Hello,

With reference to this post
<https://stackoverflow.com/questions/68066985/how-to-write-specific-time-interval-oaf-gstsamples-rtp-over-udp-h264-packets>
which records video from a queue of h264 GstSamples based on user request.
I wish to adapt and optimize that approach to write JPEG images on request
as well.

So a sample pipeline to start with is as follows:
gst-launch-1.0 appsrc ! h264parse ! avdec_h264 ! jpegenc ! filesink
location=image.jpeg

Next, we add a pad probe callback on avdec_h264's src pad and check whether
the buffer is the requested frame (by matching the timestamp in the
GstMeta). If it is, we let it pass; otherwise we drop it.
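In pure-Python terms, the probe's decision might look like the sketch below. The helper name `match_frame`, the ~30 fps frame duration, and the half-frame tolerance are all assumptions for illustration, not the original code; in the real callback the timestamp would come from the buffer's PTS / GstMeta.

```python
# Sketch of the pad-probe decision only (hypothetical helper, not GStreamer API).
# Timestamps are in nanoseconds, as GStreamer PTS values are.

FRAME_DURATION_NS = 33_333_333  # ~30 fps; an assumption for this example


def match_frame(buffer_pts_ns, requested_pts_ns,
                tolerance_ns=FRAME_DURATION_NS // 2):
    """Return True when the decoded frame's PTS is within half a frame
    of the requested time (let it reach jpegenc), False otherwise (drop)."""
    return abs(buffer_pts_ns - requested_pts_ns) <= tolerance_ns
```

In the actual probe callback you would return Gst.PadProbeReturn.OK when this is True and Gst.PadProbeReturn.DROP otherwise.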

The problem with this approach is that to write even a single image, we
end up feeding a larger interval of GstSamples so that h264parse can work
out the codec data and/or the NAL units. So in most cases we parse and
decode far more frames than necessary, which hurts performance. Is it
possible to optimize this?

P.S. Images are requested via an RPC at a later time after the stream has
been processed. So persistent decoding of the stream is not an option due
to the size of raw frames.

Best Regards,
Eslam Ahmed
