Appsrc doesn't play audio to autoaudiosink

Kyle Gibbons kyle at kylegibbons.com
Thu Nov 25 12:53:38 UTC 2021


Hi Nirbheek,

That's good to know that Opus frames don't carry timestamps. Why, then, would
adding latency to the appsrc make a difference? The incoming Opus data has a
16000 Hz sample rate, 1 frame per packet, and a 60 ms frame size.

Also good to know that I need to push data continuously; I will work on
that. What is the recommended way to push silence?

I am happy to share the code, it's just really basic so I wasn't sure it
was necessary.

*This is the current pipeline:*

appsrc min-latency=4000000000 is-live=true do-timestamp=true name=src !
queue2 ! opusparse ! opusdec ! audioconvert ! audioresample ! queue2 !
pulsesink volume=2

I've been messing around with different min-latency values, queue types, and
timestamp properties.

*Here is the Go code that feeds the audio coming in over WebRTC*

sinkPipelineStr := viper.GetString("sinkPipeline")

sinkPipeline := gst.CreatePipeline(sinkPipelineStr, "src", "", nil)

m.log.Infof("Sink pipeline: %s", sinkPipelineStr)

sinkPipeline.Start()
defer sinkPipeline.Stop()


fmt.Println("Waiting for audio...")
for {
  select {
  case data := <-m.audioChans.Out:
    fmt.Println("audio OUT")
    // fmt.Println(data)
    sinkPipeline.Push(data)
    //f.Write(data)
  case <-ctx.Done():
    return
  }
}

*Here is the code on the CreatePipeline function:*

func CreatePipeline(pipelineStr string, srcName string, sinkName string,
sink chan []byte) *Pipeline {

  pipelineStrUnsafe := C.CString(pipelineStr)
  defer C.free(unsafe.Pointer(pipelineStrUnsafe))

  pipelinesLock.Lock()
  defer pipelinesLock.Unlock()

  pipeline := &Pipeline{
    id:       len(pipelines),
    pipeline: C.gstreamer_create_pipeline(pipelineStrUnsafe),
    sink:     sink,
    srcName:  srcName,
    sinkName: sinkName,
  }

  pipelines[pipeline.id] = pipeline

  return pipeline
}

*Here is the code for the start function:*

func (p *Pipeline) Start() {
  // Free the C strings after the call (assuming the C side copies them);
  // the previous version leaked both C.CString allocations.
  srcName := C.CString(p.srcName)
  defer C.free(unsafe.Pointer(srcName))
  sinkName := C.CString(p.sinkName)
  defer C.free(unsafe.Pointer(sinkName))
  C.gstreamer_start_pipeline(p.pipeline, C.int(p.id), srcName, sinkName)
}

*Here is the push function:*

func (p *Pipeline) Push(buffer []byte) {
  b := C.CBytes(buffer)
  defer C.free(b)
  srcName := C.CString(p.srcName)
  defer C.free(unsafe.Pointer(srcName)) // was leaked on every push
  C.gstreamer_receive_push_buffer(p.pipeline, b, C.int(len(buffer)), srcName)
}


All the best,
Kyle Gibbons



On Thu, Nov 25, 2021 at 7:15 AM Nirbheek Chauhan <nirbheek.chauhan at gmail.com>
wrote:

> Hi Kyle,
>
> Opus frames do not contain timestamps. It sounds like you're not
> pushing buffers correctly or maybe the opus frames are being
> misdetected (wrong sample rate / channels, maybe). You haven't shared
> your code so we can only guess. You definitely need to push data
> continuously, though, since this is a live pipeline.
>
> I recommend using the "need-data" signal to know when to push data
> into the pipeline, and if you do not have data ready to push, the
> simplest thing would be to push an opus frame containing silence.
>
> There's other things you can do, like using audiomixer to ensure that
> pulsesink gets a continuous stream, etc.
>
> Cheers,
> Nirbheek
>
> On Wed, Nov 24, 2021 at 8:15 PM Kyle Gibbons via gstreamer-devel
> <gstreamer-devel at lists.freedesktop.org> wrote:
> >
> > I am finally making some progress! I set the min-latency to 8000000000
> which obviously causes a huge delay, but does allow audio to play. When I
> stop sending audio I get a "Got Underflow" error from pulsesink and then
> audio does not play again until I restart the application. Also, the audio
> does not sound great. It's almost like it's playing under speed, sounds a
> bit lower than expected. I have to set the volume to at least 2 to be able
> to hear the audio well.
> >
> > Is there a way to compensate for the timestamps coming in from the
> source without introducing a large delay? I am guessing that since I am
> basically just passing the Opus from Zello through my application that the
> original Opus timestamp is being used, which of course would be well past
> when my app starts playing.
> >
> > All the best,
> > Kyle Gibbons
> >
> >
> >
> > On Wed, Nov 24, 2021 at 8:02 AM Kyle Gibbons <kyle at kylegibbons.com>
> wrote:
> >>
> >> I wanted to add that when there is data coming in the samples and
> buffers should be consistent, but because the ultimate source is a
> walkie-talkie like interface, there is not always audio coming in. We only
> send data to gstreamer when there is audio coming into the system over the
> network, we do not send silence. I did try starting the stream before the
> application so there was essentially always audio flowing in, but that made
> no difference.
> >>
> >> All the best,
> >> Kyle Gibbons
> >>
> >>
> >>
> >> On Wed, Nov 24, 2021 at 7:00 AM Kyle Gibbons <kyle at kylegibbons.com>
> wrote:
> >>>
> >>> Tim,
> >>>
> >>> Thanks for the reply. I tried adding min-latency of 40000000,
> 60000000, 100000000, and 1000000000 to no avail.
> >>>
> >>> The buffers and number of samples should be consistent. The audio
> comes from another service I wrote using Go and Pion which gets its audio
> from the Zello API (zello.com)
> >>>
> >>> All the best,
> >>> Kyle Gibbons
> >>>
> >>>
> >>>
> >>> On Wed, Nov 24, 2021 at 6:48 AM Tim-Philipp Müller via gstreamer-devel
> <gstreamer-devel at lists.freedesktop.org> wrote:
> >>>>
> >>>> Hi Kyle,
> >>>>
> >>>> > But this doesn't:
> >>>> >
> >>>> > appsrc is-live=true do-timestamp=true name=src ! queue ! opusparse !
> >>>> > opusdec ! audioconvert ! audioresample ! queue ! pulsesink
> >>>>
> >>>>
> >>>> Try adding appsrc min-latency=40000000 (=40ms in nanoseconds) or such.
> >>>>
> >>>> You might have to experiment with the values.
> >>>>
> >>>> Do you always push in buffers of the same size / number of samples?
> >>>> Where do you get the audio data from?
> >>>>
> >>>> Cheers
> >>>>  Tim
>

