Best way to implement is-live/do-timestamp appsrc that respects duration of incoming buffers

Sean DuBois sean at siobud.com
Fri Mar 17 01:32:41 UTC 2017


Hey list!

I am consuming an API that provides me with raw AAC buffers, and all I
know about them is their duration. They arrive in bursts (so I get
roughly 30 seconds' worth of buffers at a time).

However, when I push them into an appsrc that is live with do-timestamp
enabled, they all bunch up: since each buffer is stamped with the pipeline
time at the moment it is pushed, the PTS is wrong. If I then run those
buffers through audioresample or audiorate they are destroyed by all the
dropping, because once decoded each buffer has a duration that makes it
overlap with the next buffer's PTS.

The best solution I have found so far is to do something like this:
```
    std::thread([aac_appsrc]() {
        GstBuffer *buff;
        for (;;) {
          // Allocate my GstBuffer and fill it with data, this will
          // block until data is available

          // Clear all timestamps so that do-timestamp stamps the buffer
          // with the pipeline running time at the moment it is pushed.
          GST_BUFFER_DTS(buff) = GST_BUFFER_PTS(buff) = GST_BUFFER_DURATION(buff) = GST_CLOCK_TIME_NONE;
          gst_app_src_push_buffer(GST_APP_SRC(aac_appsrc), buff);

          // Space the pushes out by roughly one buffer duration
          // (21333 us ~= 21.3 ms, i.e. one 1024-sample AAC frame at 48 kHz).
          g_usleep(21333);
        }
    }).detach();
```

There is some discontinuity with this solution, but it is the closest I
have gotten. The only other thing I could think of is filling in the PTS
myself in the need-data callback for the appsrc, but I can't get the
timestamps right: I don't know the latency, the pipeline time, etc., so my
buffers always arrive late at the sink and get dropped.
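
For what it's worth, the manual-timestamping direction I have in mind looks
roughly like the sketch below (untested, and pushing straight from my feeding
thread rather than from need-data): put the appsrc in time format with
do-timestamp off, and stamp each buffer from a running accumulator of the
durations the API reports. configure_appsrc and push_timed are names I made
up for the example.
```
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

// Call once after creating the appsrc: downstream should trust our timestamps.
static void configure_appsrc(GstElement *aac_appsrc) {
  g_object_set(aac_appsrc,
               "format", GST_FORMAT_TIME,  // PTS/DTS are in stream time
               "is-live", TRUE,
               "do-timestamp", FALSE,      // we stamp the buffers ourselves
               NULL);
}

// Called from the single feeding thread for every buffer in a burst, with the
// duration the API reports for that buffer.
static void push_timed(GstAppSrc *aac_appsrc, GstBuffer *buff,
                       GstClockTime duration) {
  static GstClockTime next_pts = 0;  // running stream time of the next buffer

  GST_BUFFER_PTS(buff)      = next_pts;
  GST_BUFFER_DTS(buff)      = next_pts;
  GST_BUFFER_DURATION(buff) = duration;
  next_pts += duration;

  // appsrc takes ownership of the buffer here.
  gst_app_src_push_buffer(aac_appsrc, buff);
}
```
That would space the PTS by the buffers' own durations no matter how fast a
burst arrives, but I am not sure how it interacts with is-live and the sink's
latency, which is really what I am asking about.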

So: does anyone have a better solution for pushing bursts of data into the
appsrc while making sure the PTS are properly spaced out, so that I can
audioresample/audiorate afterwards?

thanks

