Appsrc doesn't play audio to autoaudiosink
nirbheek.chauhan at gmail.com
Thu Nov 25 12:14:46 UTC 2021
Opus frames do not contain timestamps. It sounds like you're not
pushing buffers correctly, or the opus frames are being
misdetected (wrong sample rate or channel count, perhaps). You haven't shared
your code so we can only guess. You definitely need to push data
continuously, though, since this is a live pipeline.
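Since the frames carry no timestamps of their own, one option is to stamp each pushed buffer yourself from a running frame count. A minimal sketch of that arithmetic (assuming 20 ms frames; Opus always uses a 48 kHz timestamp clock, but the frame size must match your encoder):

```python
SAMPLE_RATE = 48000   # Opus always runs on a 48 kHz clock
FRAME_SAMPLES = 960   # 20 ms frames -- an assumption; match your encoder's frame size

def frame_duration_ns() -> int:
    """Duration of one Opus frame in nanoseconds (20 ms here)."""
    return FRAME_SAMPLES * 1_000_000_000 // SAMPLE_RATE

def frame_pts_ns(frame_index: int) -> int:
    """PTS in nanoseconds for the nth pushed frame, counting from zero."""
    return frame_index * frame_duration_ns()
```

In a real application you would set these values as the pts and duration of each buffer before pushing it into appsrc, instead of relying on do-timestamp.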
I recommend using the "need-data" signal to know when to push data
into the pipeline, and if you do not have data ready to push, the
simplest thing would be to push an opus frame containing silence.
There are other things you can do, such as using audiomixer to ensure
that pulsesink gets a continuous stream, etc.
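The silence-substitution idea above can be sketched as follows. This is only illustrative: the function name is made up, the silence bytes are a placeholder (a real handler would push a pre-encoded Opus silence frame), and the GStreamer calls are left as comments.

```python
from collections import deque

SAMPLE_RATE = 48000
FRAME_SAMPLES = 960  # 20 ms frames (assumption; match your encoder)
FRAME_DURATION_NS = FRAME_SAMPLES * 1_000_000_000 // SAMPLE_RATE

# Placeholder only -- NOT a valid Opus packet. Substitute a real
# pre-encoded silence frame from your encoder.
SILENCE_FRAME = b"\x00\x00\x00"

def next_buffer(pending: deque, frame_count: int):
    """Return (data, pts_ns, duration_ns) for the next buffer to push.

    `pending` holds Opus frames received from the network; when it is
    empty we substitute silence so the live pipeline never starves.
    """
    data = pending.popleft() if pending else SILENCE_FRAME
    pts = frame_count * FRAME_DURATION_NS
    return data, pts, FRAME_DURATION_NS
    # In the "need-data" callback you would then wrap `data` in a
    # Gst.Buffer, set buf.pts / buf.duration, and emit "push-buffer"
    # on the appsrc element.
```

The point is simply that the handler always has something to push, so pulsesink never underruns between transmissions.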
On Wed, Nov 24, 2021 at 8:15 PM Kyle Gibbons via gstreamer-devel
<gstreamer-devel at lists.freedesktop.org> wrote:
> I am finally making some progress! I set min-latency to 8000000000, which obviously causes a huge delay but does allow audio to play. When I stop sending audio I get a "Got Underflow" error from pulsesink, and then audio does not play again until I restart the application. Also, the audio does not sound great: it's almost as if it's playing below speed, and it sounds a bit lower-pitched than expected. I have to set the volume to at least 2 to be able to hear the audio well.
> Is there a way to compensate for the timestamps coming in from the source without introducing a large delay? I am guessing that since I am basically just passing the opus from Zello through my application, the original opus timestamps are being used, which of course would be well past the point when my app starts playing.
> All the best,
> Kyle Gibbons
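One way to compensate for upstream timestamps like the ones described above (a sketch, not an existing GStreamer facility; the class name and API are illustrative) is to rebase them against the first timestamp you see, so the stream starts at zero on your pipeline's clock rather than at Zello's:

```python
class TimestampRebaser:
    """Shift incoming PTS values so the first buffer starts at 0.

    Assumes upstream timestamps are monotonic nanoseconds.
    """

    def __init__(self):
        self.base = None  # first timestamp seen, captured lazily

    def rebase(self, pts_ns: int) -> int:
        if self.base is None:
            self.base = pts_ns
        return pts_ns - self.base
```

Each incoming frame's timestamp would be passed through rebase() before being set on the buffer pushed into appsrc.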
> On Wed, Nov 24, 2021 at 8:02 AM Kyle Gibbons <kyle at kylegibbons.com> wrote:
>> I wanted to add that when there is data coming in, the buffers and sample counts should be consistent, but because the ultimate source is a walkie-talkie-like interface, there is not always audio coming in. We only send data to gstreamer when there is audio coming into the system over the network; we do not send silence. I did try starting the stream before the application so there was essentially always audio flowing in, but that made no difference.
>> All the best,
>> Kyle Gibbons
>> On Wed, Nov 24, 2021 at 7:00 AM Kyle Gibbons <kyle at kylegibbons.com> wrote:
>>> Thanks for the reply. I tried adding min-latency of 40000000, 60000000, 100000000, and 1000000000 to no avail.
>>> The buffers and number of samples should be consistent. The audio comes from another service I wrote using Go and Pion, which gets its audio from the Zello API (zello.com).
>>> All the best,
>>> Kyle Gibbons
>>> On Wed, Nov 24, 2021 at 6:48 AM Tim-Philipp Müller via gstreamer-devel <gstreamer-devel at lists.freedesktop.org> wrote:
>>>> Hi Kyle,
>>>> > But this doesn't:
>>>> > appsrc is-live=true do-timestamp=true name=src ! queue ! opusparse !
>>>> > opusdec ! audioconvert ! audioresample ! queue ! pulsesink
>>>> Try adding appsrc min-latency=40000000 (=40ms in nanoseconds) or such.
>>>> You might have to experiment with the values.
>>>> Do you always push in buffers of the same size / number of samples?
>>>> Where do you get the audio data from?
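For reference, Tim's min-latency suggestion applied to the pipeline quoted above would read something like this (40000000 ns = 40 ms is only a starting point to experiment with, not a verified value):

```
appsrc is-live=true do-timestamp=true min-latency=40000000 name=src ! queue ! opusparse ! opusdec ! audioconvert ! audioresample ! queue ! pulsesink
```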