Recording, with proper timestamps

Juan Navarro juan.navarro at gmx.es
Tue Oct 13 07:40:05 UTC 2020


On 12/10/20 17:50, Juan Navarro wrote:
> I'd like to know if there is an element, or possibly a set of
> elements, or a library API, that can help with providing smooth and
> robust timestamping for recording GStreamer pipelines.
>
> The two main needs I'm thinking of for recordings are:
>
> 1. Provide an initial timestamp of 0:00:00 in the output file,
> regardless of what was the running time of the pipeline when recording
> started.

I'm wondering whether this should be done separately for audio and video,
or whether doing it just for the first buffer would be enough.

For 1 video & 1 audio track, the code I inherited monitors both branches
of the pipeline and stores as "offset" the PTS of the first buffer that
passes through towards the filesink, regardless of which track it belongs
to (video or audio; typically audio). It then subtracts that offset from
the PTS of all following buffers (both video and audio).
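For reference, this is roughly what that looks like as a buffer pad probe
(a minimal sketch in my own words, not the literal inherited code; the
function name rebase_timestamps is mine):

    #include <gst/gst.h>

    /* Shared by the probes on both branches; first valid PTS seen wins. */
    static GstClockTime offset = GST_CLOCK_TIME_NONE;

    static GstPadProbeReturn
    rebase_timestamps (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      GstBuffer *buf = gst_buffer_make_writable (GST_PAD_PROBE_INFO_BUFFER (info));
      GST_PAD_PROBE_INFO_DATA (info) = buf;

      /* Latch the offset from the first timestamped buffer, whichever track. */
      if (offset == GST_CLOCK_TIME_NONE && GST_BUFFER_PTS_IS_VALID (buf))
        offset = GST_BUFFER_PTS (buf);

      /* Shift both timestamps so the recording starts at 0:00:00. */
      if (GST_BUFFER_PTS_IS_VALID (buf) && GST_BUFFER_PTS (buf) >= offset)
        GST_BUFFER_PTS (buf) -= offset;
      if (GST_BUFFER_DTS_IS_VALID (buf) && GST_BUFFER_DTS (buf) >= offset)
        GST_BUFFER_DTS (buf) -= offset;

      return GST_PAD_PROBE_OK;
    }

    /* Installed on the pads feeding the muxer/filesink on both branches:
     * gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
     *     rebase_timestamps, NULL, NULL);
     */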

This alone raises several questions:

* Assuming both PTS and DTS are available, and without assuming whether
B-frames will be present or not: should the PTS be stored as the offset?
The DTS? Or neither, using the pipeline's running time instead? (A sketch
of that conversion follows after this list.)

* If the PTS or DTS is used as the offset: is a single offset (taken from
the very first buffer that passes through) enough, or should each track
keep its own offset (one for audio and a separate one for video)?
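In case the running-time route turns out to be the better option, this is
the kind of conversion I have in mind inside the same probe (again just a
sketch; buf and pad are the probe's buffer and pad from above):

    /* Convert the buffer PTS to pipeline running time using the pad's
     * sticky SEGMENT event, instead of latching the raw PTS/DTS. */
    GstEvent *ev = gst_pad_get_sticky_event (pad, GST_EVENT_SEGMENT, 0);
    if (ev != NULL) {
      const GstSegment *segment;
      GstClockTime running;

      gst_event_parse_segment (ev, &segment);
      running = gst_segment_to_running_time (segment, GST_FORMAT_TIME,
          GST_BUFFER_PTS (buf));
      gst_event_unref (ev);
      /* 'running' would then be the candidate per-recording offset. */
    }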


