Use of Queue in constructing Gstreamer Pipeline for Playback
Sebastian Dröge
sebastian at centricular.com
Sat Oct 1 08:28:24 UTC 2016
On Fri, 2016-09-30 at 07:41 -0700, DeepakRohan wrote:
> Hi Sebastian,
>
> Again thanks a lot for the quick reply.
>
> My application is always going to do playback-related operations. It may
> never be used for any capturing purposes.
> The pipeline looks very similar to the below:
>
>                                      -----> Audio elements (except Audio Queue) ....... AudioSink
>                                     |
> source --> typefind --> Demuxer ---
>                                     |
>                                      -----> Video elements (with Video Queue) .......... VideoSink
>
>
> So, with the above pipeline, is there any chance or even a slight
> possibility that I may face issues later on?
>
> From my gst-launch-1.0 command-line testing it has worked out so far, but I
> have not tested all possible cases (different audio, video and subtitle
> codecs, with the following properties: audio - sample rate, bit-rate,
> channels; video - resolution, framerate, level and profile).
>
> I am not sure of the consequences of removing the audio queue, because for
> me it worked on the command line as well as with the application. My
> application creates exactly the same pipeline as shown in the diagram
> above.
>
> Could you please mention the cases where removing the audio queue may cause
> issues with the above way of constructing the pipeline?
Without the queue, the demuxer will push directly from its own thread
to the audio sink. By default, all sinks block on preroll until every
sink has received a buffer, so if your container happens to have audio
first and only then video, the demuxer will push the audio, the audio
sink will wait for the video sink and thereby block the demuxer, and the
demuxer then has no way to push the video to the video sink.
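To make that concrete, here is a minimal sketch in C, with illustrative
names, of what putting a queue on each demuxer branch looks like in
application code; the pipeline is passed as user_data, and the audio and
video branch elements are assumed to exist elsewhere in your application:

#include <gst/gst.h>

/* Sketch of a "pad-added" handler for the demuxer: every new source pad
 * gets its own queue.  The queue runs its own streaming thread, so the
 * demuxer just pushes into the queue and returns immediately instead of
 * being blocked by a prerolling sink. */
static void
demux_pad_added (GstElement * demux, GstPad * srcpad, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);
  GstElement *queue = gst_element_factory_make ("queue", NULL);
  GstPad *sinkpad;

  gst_bin_add (GST_BIN (pipeline), queue);
  gst_element_sync_state_with_parent (queue);

  sinkpad = gst_element_get_static_pad (queue, "sink");
  gst_pad_link (srcpad, sinkpad);
  gst_object_unref (sinkpad);

  /* ...then link the queue's src pad to the matching audio or video
   * branch, e.g. based on the pad's caps (not shown here). */
}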
Another case where this is problematic is when your sinks are
synchronising to the clock and the container does not have perfect
interleave. Consider a file that always contains 1s of audio, then 1s of
video, then 1s of audio again, and so on. What will happen is that the
audio sink first consumes the 1s of audio while the video sink starves
for 1s. Then the video queue fills up with 1s of video, the video sink
can play 1s (which is all too late by now), and the audio sink again
makes the demuxer output 1s of audio while the video sink starves.
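If you keep building the pipeline by hand, this means every demuxer
branch needs a queue that can hold at least the container's interleave
gap. A rough sketch with example values only (the default queue limits
may be too small for badly interleaved files):

/* Example values only: let each branch buffer a few seconds of data so
 * one branch can keep playing while the demuxer is producing data for
 * the other. */
GstElement *audio_queue = gst_element_factory_make ("queue", "audio-queue");
GstElement *video_queue = gst_element_factory_make ("queue", "video-queue");

g_object_set (audio_queue,
    "max-size-time", 3 * GST_SECOND,    /* ~3s of buffering */
    "max-size-buffers", 0,              /* 0 = no limit on buffer count */
    "max-size-bytes", 0,                /* 0 = no limit on bytes */
    NULL);
g_object_set (video_queue,
    "max-size-time", 3 * GST_SECOND,
    "max-size-buffers", 0,
    "max-size-bytes", 0,
    NULL);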
There are more possible scenarios like this. Generally, use a queue
after each demuxer source pad to prevent this. Or, even better in your
case, use a single multiqueue with one pad per demuxer source pad. Or,
better yet, use uridecodebin or decodebin for your pipeline, which will
automatically insert queues/multiqueues as needed.
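As a rough sketch of that last suggestion (illustrative URI and variable
names, error handling trimmed), a uridecodebin-based pipeline could look
like the following; uridecodebin takes care of inserting the
queues/multiqueue itself:

#include <gst/gst.h>

typedef struct
{
  GstElement *audio_branch;     /* audioconvert ! audioresample ! autoaudiosink */
  GstElement *video_branch;     /* videoconvert ! autovideosink */
} Branches;

/* Link each pad that uridecodebin exposes to the matching branch. */
static void
on_pad_added (GstElement * dec, GstPad * pad, gpointer user_data)
{
  Branches *b = user_data;
  GstCaps *caps = gst_pad_get_current_caps (pad);
  const gchar *name;
  GstElement *target = NULL;

  if (caps == NULL)
    caps = gst_pad_query_caps (pad, NULL);
  name = gst_structure_get_name (gst_caps_get_structure (caps, 0));

  if (g_str_has_prefix (name, "audio/"))
    target = b->audio_branch;
  else if (g_str_has_prefix (name, "video/"))
    target = b->video_branch;

  if (target != NULL) {
    GstPad *sinkpad = gst_element_get_static_pad (target, "sink");
    if (!gst_pad_is_linked (sinkpad))
      gst_pad_link (pad, sinkpad);
    gst_object_unref (sinkpad);
  }
  gst_caps_unref (caps);
}

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *dec;
  Branches b;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  pipeline = gst_pipeline_new ("playback");
  dec = gst_element_factory_make ("uridecodebin", "dec");
  g_object_set (dec, "uri", "file:///path/to/media.mp4", NULL);

  /* Pre-built sink branches, wrapped in bins with a ghost "sink" pad. */
  b.audio_branch = gst_parse_bin_from_description
      ("audioconvert ! audioresample ! autoaudiosink", TRUE, NULL);
  b.video_branch = gst_parse_bin_from_description
      ("videoconvert ! autovideosink", TRUE, NULL);

  gst_bin_add_many (GST_BIN (pipeline), dec, b.audio_branch,
      b.video_branch, NULL);
  g_signal_connect (dec, "pad-added", G_CALLBACK (on_pad_added), &b);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}

With uridecodebin the demuxer inside is already decoupled from the sinks
by an internal multiqueue, which is exactly the decoupling described
above, so neither branch needs an explicit queue of its own.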
--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com