Synchronizing audio component with appsrc-ed video

zanbri zan.bridi at gmail.com
Tue Jan 12 18:44:34 UTC 2021


Hi all,

I have the following 2 pipelines:

[P1] videosrc (MP4 file or RTMP stream) ! decodebin ! queue ! videoconvert ! appsink

... processing video frames ...

[P2] appsrc ! queue ! videoconvert ! sink (MP4 file or RTMP stream)
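
For reference, this is roughly the shape of the two pipelines in code (Python/PyGObject; the element choices such as x264enc, the BGR caps and the file paths are placeholders for my actual setup):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# [P1] decode the source and hand raw video frames to my app
p1 = Gst.parse_launch(
    "uridecodebin uri=file:///path/to/input.mp4 name=dec "
    "dec. ! queue ! videoconvert ! video/x-raw,format=BGR "
    "! appsink name=video_sink emit-signals=true sync=false"
)

# [P2] take the processed frames back in via appsrc and write them out
p2 = Gst.parse_launch(
    "appsrc name=video_src format=time "
    "! queue ! videoconvert ! x264enc ! mp4mux ! filesink location=out.mp4"
)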

Objective: I currently discard the audio component of the videosrc (when handling pads for decodebin in P1), which naturally results in no audio in the final sink (i.e. the sink of P2). I would now like to include the audio component in the final sink. The pad handling I mentioned looks roughly like the sketch below.
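
In my real code I build P1 by hand so that I can handle decodebin's pads myself; the relevant part looks roughly like this, and you can see where the audio gets dropped:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

p1 = Gst.Pipeline.new("p1")
src = Gst.ElementFactory.make("filesrc", None)
src.set_property("location", "/path/to/input.mp4")
dec = Gst.ElementFactory.make("decodebin", None)
queue = Gst.ElementFactory.make("queue", None)
conv = Gst.ElementFactory.make("videoconvert", None)
appsink = Gst.ElementFactory.make("appsink", "video_sink")
appsink.set_property("emit-signals", True)

for e in (src, dec, queue, conv, appsink):
    p1.add(e)
src.link(dec)
queue.link(conv)
conv.link(appsink)

def on_pad_added(element, pad):
    name = pad.get_current_caps().get_structure(0).get_name()
    if name.startswith("video/"):
        pad.link(queue.get_static_pad("sink"))
    # audio pads are never linked here, so the audio is silently dropped

dec.connect("pad-added", on_pad_added)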

Question: What is the best way to achieve this?

Some details: The processing I do between the pipelines can be thought of as a normal video filter: I effectively process each frame (roughly as in the sketch below), though I am not looking to package this processing into a GStreamer filter element, even though that would make adding audio much simpler. The audio does not need to be processed or altered at all; it just needs to be passed through the pipelines and kept synchronized with the video component.
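
Concretely, the per-frame processing lives in the appsink's new-sample callback and pushes a fresh buffer into P2's appsrc, copying the timestamps across. A minimal sketch, assuming the p1/p2 pipelines and the video_sink/video_src names from the earlier sketch:

appsink = p1.get_by_name("video_sink")
appsrc = p2.get_by_name("video_src")

def on_new_sample(sink):
    sample = sink.emit("pull-sample")
    buf = sample.get_buffer()

    # tell the appsrc what it will be receiving, once
    if appsrc.get_property("caps") is None:
        appsrc.set_property("caps", sample.get_caps())

    ok, info = buf.map(Gst.MapFlags.READ)
    data = bytes(info.data)  # ... process the raw frame here ...
    buf.unmap(info)

    out = Gst.Buffer.new_wrapped(data)
    # carry the timing metadata over so downstream stays in sync
    out.pts = buf.pts
    out.dts = buf.dts
    out.duration = buf.duration

    appsrc.emit("push-buffer", out)
    return Gst.FlowReturn.OK

appsink.connect("new-sample", on_new_sample)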

My guess is that I need to somehow pass the audio buffers through my "processing video frames" code, into the appsrc, and mux them back together with the video. If that is right:
- Should I pass the audio buffers into the same appsink as my video buffers, or should I tee the first pipeline and handle audio and video separately?
- What metadata do I need to carry over from the audio and video buffers at the appsink in order to synchronize them again at the appsrc?
- Similar to the first point, should I push the audio buffers into the same appsrc in the second pipeline, or should I have two appsrcs and merge their branches in a muxing operation, roughly as in the sketch after this list? Is this even possible?
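
To make the last question concrete, this is the kind of second pipeline I am imagining: one appsrc per stream, with the two branches meeting again at a muxer (the encoder choices x264enc/avenc_aac are just guesses on my part):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# two appsrcs whose branches are joined again by the muxer
p2 = Gst.parse_launch(
    "mp4mux name=mux ! filesink location=out.mp4 "
    "appsrc name=video_src format=time "
    "! queue ! videoconvert ! x264enc ! mux. "
    "appsrc name=audio_src format=time "
    "! queue ! audioconvert ! avenc_aac ! mux."
)

The idea would be to push the processed video buffers into video_src and the untouched audio buffers into audio_src, each keeping the pts/dts/duration they had coming out of P1, and let the muxer interleave them.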

I know very little about audio buffers (I don't even know whether there is one audio buffer for every video frame buffer), so any clarity/pointers on what I should do would be much appreciated!


