Regulate speed of decoded frames using presentation timestamp

Nicolas Dufresne nicolas at ndufresne.ca
Tue Mar 16 14:28:46 UTC 2021


On Monday, March 15, 2021 at 18:06 -0500, Andressio wrote:
> Hi all. I am developing an application that consumes H.264 streams from two
> different models of IP cameras via RTSP. I am using a GStreamer pipeline that
> looks like this:
> 
> rtspsrc location=camera_rtsp protocols=GST_RTSP_LOWER_TRANS_TCP !
> rtph264depay ! h264parse !
> video/x-h264,stream-format=byte-stream,alignment=au ! appsink
> 
> Buffers pulled from appsink hold encoded H.264 packets. I feed these packets
> directly into the NVDEC decoder of an NVIDIA GPU to obtain the decoded raw
> frames using OpenCV + the NVIDIA Video Codec SDK (note that I don't use the
> nvdec element from gst-plugins-bad).
> 
> What happens is that with one of the cameras everything works fine: the
> frames are decoded at the right speed (in line with the video framerate) and
> displaying them results in a smooth video. On the other hand the second
> camera gives poor results: frames are decoded at a variable speed, leading to
> a jerky video when displayed, but if I average the times between all
> consecutive frames from one keyframe to the next, the speed is in line with
> the video framerate. To sum up this last case: all frames are decoded
> correctly, but not at a constant speed.
> 
> I am quite confident that the decoder works properly. I guess it is simply a
> matter of correctly using the presentation timestamp that can be extracted
> from appsink when pulling buffers (N.B. for the first camera the
> presentation timestamp is mostly monotonically increasing but sometimes goes
> back in time; for the second camera it is monotonically increasing).

This going back in time indicates the presence of B-frames. The decoder will
reorder them for you. Note the DTS, which should be monotonic before the
decoder.
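
A minimal sketch of how you could observe this on the encoded buffers pulled
from your appsink (the function name dump_timestamps is mine, not from your
code). With B-frames present, the DTS stays monotonic while the PTS jumps
around:

  #include <gst/gst.h>
  #include <gst/app/gstappsink.h>

  static void
  dump_timestamps (GstAppSink *appsink)
  {
    GstSample *sample;

    /* Pull encoded H.264 buffers until the stream ends (EOS returns NULL). */
    while ((sample = gst_app_sink_pull_sample (appsink)) != NULL) {
      GstBuffer *buf = gst_sample_get_buffer (sample);

      /* DTS is the decode order; PTS is the display order. */
      g_print ("DTS %" GST_TIME_FORMAT "  PTS %" GST_TIME_FORMAT "\n",
          GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
          GST_TIME_ARGS (GST_BUFFER_PTS (buf)));

      gst_sample_unref (sample);
    }
  }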

> 
> In an early stage of the application I put GStreamer's nvdec element between
> h264parse and appsink. I obtained regular streams with raw frames
> emitted at a regular speed for both cameras but unfortunately it was not
> very reliable.
> 
> How does GStreamer's nvdec regulate the emission of decoded frames? How can
> I implement a similar behavior using the presentation timestamps from the
> appsink while keeping the above pipeline unchanged? I guess I should use
> some buffers, but are you aware of any examples or resources that can be
> used as a starting point?

In GStreamer, decoders are not responsible for smoothing the presentation. They
simply process as fast as possible. Downstream of the decoder, playback
pipelines will usually contain a short raw video queue (which allows buffering
when decoding runs faster than display). After that queue, there is a display
component (it could be glimagesink). These elements use the GstBaseSink base
class, which implements most of the synchronisation code. It translates the PTS
into running-time, and then translates this into clock time in order to
determine how long to wait before displaying. Audio sinks are much more
complex.
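
A minimal sketch of that synchronisation step, in case you want to reproduce
it in your own code. It assumes you keep a reference to the pipeline and to
the current segment (taken from the SEGMENT event, or from the GstSample);
real sinks additionally handle playback rate, latency, QoS and flushing. The
function name wait_for_display is my own invention:

  #include <gst/gst.h>

  static void
  wait_for_display (GstElement *pipeline, const GstSegment *segment,
      GstClockTime pts)
  {
    GstClock *clock = gst_element_get_clock (pipeline);
    GstClockTime base_time = gst_element_get_base_time (pipeline);
    GstClockTime running_time;

    if (clock == NULL)
      return;                   /* no clock selected yet: show immediately */

    /* PTS (stream position) -> running-time, using the current segment. */
    running_time = gst_segment_to_running_time (segment, GST_FORMAT_TIME, pts);

    if (GST_CLOCK_TIME_IS_VALID (running_time)) {
      /* running-time -> absolute clock time, then block until it is
       * reached before displaying the frame. */
      GstClockID id =
          gst_clock_new_single_shot_id (clock, base_time + running_time);
      gst_clock_id_wait (id, NULL);
      gst_clock_id_unref (id);
    }

    gst_object_unref (clock);

    /* ...display the decoded frame here... */
  }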

p.s. Remember that decoding complexity varies within a stream, so not all
frames decode at equal speed. Some HW decoders will smooth this out, but that
is atypical for GPU decoders and PC software.

> 
> Thanks
> 



