audiotestsrc vs. multifilesrc
Marianna Smidth Buschle
msb at qtec.com
Thu Jan 13 13:17:48 UTC 2022
Hello,
I believe I have experienced similar issues, though related to
images/video instead of audio.
Your basic problem is that 'multifilesrc' is not a live source, while
you are using 'audiotestsrc' as a live source (is-live=true).
That means that while a live source produces buffers according to the
configured timing (from the caps, i.e. in real time), a non-live source
produces buffers as fast as possible (or as fast as the downstream
elements will accept them).
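You can also confirm this with gst-inspect-1.0 (assuming a standard
GStreamer install): 'audiotestsrc' exposes an 'is-live' property, while
'multifilesrc' does not, so the second command should print nothing:
gst-inspect-1.0 audiotestsrc | grep is-live
gst-inspect-1.0 multifilesrc | grep is-live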
You can try checking the difference by doing:
audiotestsrc is-live=true wave=ticks ! audio/x-raw,format=S16LE,rate=48000,channels=1 ! autoaudiosink sync=true
And
audiotestsrc is-live=true wave=ticks ! audio/x-raw,format=S16LE,rate=48000,channels=1 ! autoaudiosink sync=false
For the live source you shouldn't notice any difference.
But for the non-live source I expect you will, depending on whether you
use 'autoaudiosink sync=true' or 'autoaudiosink sync=false':
multifilesrc do-timestamp=true loop=true location=count.wav ! wavparse ignore-length=1 ! audio/x-raw,format=S16LE,rate=48000,channels=1 ! autoaudiosink sync=false
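Another quick way to see the "as fast as possible" behaviour (an
untested sketch, only changing 'is-live'): with 'fakesink sync=false'
the non-live variant below finishes almost instantly, while the live
one still paces itself in real time (roughly 10-12 seconds for 500
buffers, assuming the default 1024 samples per buffer at 44.1 kHz):
gst-launch-1.0 audiotestsrc num-buffers=500 ! fakesink sync=false
gst-launch-1.0 audiotestsrc is-live=true num-buffers=500 ! fakesink sync=false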
Now, the way I managed to get file sources working as "live sources"
was by adding either an 'identity sync=true' or a 'clocksync' element
to the pipeline.
Something like:
multifilesrc do-timestamp=true loop=true location=count.wav ! wavparse ignore-length=1 ! audio/x-raw,format=S16LE,rate=48000,channels=1 ! clocksync ! autoaudiosink sync=false
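The 'identity sync=true' variant would be the same pipeline with just
that element swapped in (untested):
multifilesrc do-timestamp=true loop=true location=count.wav ! wavparse ignore-length=1 ! audio/x-raw,format=S16LE,rate=48000,channels=1 ! identity sync=true ! autoaudiosink sync=false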
Now, I do remember some issues with 'multifilesrc', so I would also
recommend trying 'filesrc' instead.
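For example, something like this (untested; note that a plain 'filesrc'
cannot loop the file the way 'multifilesrc loop=true' does):
filesrc location=count.wav ! wavparse ! audio/x-raw,format=S16LE,rate=48000,channels=1 ! clocksync ! autoaudiosink sync=false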
In my case I was using H.264 streams muxed into MPEG-TS, and there it
would only work with the 'identity sync=true' placed after the demuxer:
gst-launch-1.0 filesrc location=/tmp/test1.ts ! tsdemux name=demux ! queue ! identity sync=true ! h264parse ! avdec_h264 qos=false ! videoconvert ! ximagesink demux. ! queue ! identity sync=true ! decodebin ! audioconvert ! autoaudiosink
Note that I haven't tested any of the pipelines besides this last one,
which comes from my own project...
Best Regards
Marianna S. Buschle
On 13.01.2022 13.00, gstreamer-devel-request at lists.freedesktop.org wrote:
> Hello everyone,
>
> I'm having an issue here that's probably very simple, but I can't see what's wrong.
>
> I've been using the following audio source for testing in my larger WebRTC pipeline:
>
> audiotestsrc is-live=true wave=ticks !
> audio/x-raw,format=S16LE,rate=48000,channels=1 ! tee allow-not-linked=true
> name=audiotestsrc
>
> Now I've tried to replace it with an audio file of a voice counting (to estimate
> delay etc.):
>
> multifilesrc do-timestamp=true loop=true location=count.wav ! wavparse
> ignore-length=1 ! audio/x-raw,format=S16LE,rate=48000,channels=1 ! tee
> allow-not-linked=true name=audiotestsrc
>
> The audio file is indeed S16LE/48kHz/mono, so there shouldn't be any format
> issues. Both variants work when I run them in gst-launch-1.0 and append an
> autoaudiosink at the end; I can even replicate the encoder pipeline by appending
> "... ! queue ! opusenc ! rtpopuspay ! queue max-size-time=100000000 !
> rtpopusdepay ! opusdec ! autoaudiosink" and it still works for both of them.
>
> However, when I connect the encoder output to webrtcbin within my larger
> pipeline, then the multifilesrc seems to never start streaming. Caps negotiation
> and WebRTC SDP negotiation both complete and seem to be fine, but I'm never
> actually getting an audio stream unless I keep using audiotestsrc (or e.g.
> alsasrc/pulsesrc, those work as well).
>
> For the curious, the audio file in question is here:
> https://floe.butterbrot.org/external/count.wav
>
> Any suggestions?
>
> Thanks and best regards, Florian
--
Best regards / Med venlig hilsen
“Marianna Smidth Buschle”