Bytestream or sample jitterbuffer
nicolas at ndufresne.ca
Thu Jan 27 21:05:12 UTC 2022
Le jeudi 27 janvier 2022 à 20:38 +0100, Peter Maersk-Moller via gstreamer-devel
a écrit :
> Hi Nicolas
> Thx for taking the time
> On Thu, Jan 27, 2022 at 8:12 PM Nicolas Dufresne <nicolas at ndufresne.ca> wrote:
> > Somewhat your pipeline is live, or can be made live. One way to reach this
> > effect could be to place a clocksync element after your parser (since you
> > need time information). This will make your pipeline live, and then you can
> > control the amount of jitter you allow with the processing-deadline property
> > on the "real sink", and ts-offset on the clocksync (or something along these
> > lines).
> Ok, so clocksync adds timestamps and we can delay the start until the first
> sample. I thought audiorate also did this?
audioparse should in theory add timestamps. clocksync will read them and wait
until the time has been reached before forwarding the buffer.
> And is it still the case that fdsrc do-timestamp=true does not actually add a
do-timestamp is not great, because it introduces jittery timestamps. To handle
this type of timestamp, you'd actually need a real jitterbuffer, but a
jitterbuffer must be aware of the buffer duration, or be able to estimate it
(RTP), to actually smooth the timestamps.
> Ok, so now we have a timestamped bytestream (audio or not does not matter) and
> we can add a ts-offset, even negative if needed.
> But how does the processing-deadline work? If we set that to, let's say,
> 100 ms, does it mean that the sink (let's use alsasink as a case study)
> doesn't start processing and outputting the first sample until after 100 ms,
> and that subsequently, on average, any sample received after that can be up to
> 100 ms delayed relative to a perfect stream, thus in effect giving a
> jitterbuffer of 100 ms? Is that what you mean and how it works?
The processing-deadline is meant to accommodate the "processing" time. This is
a bit unpredictable, but the idea is that applications usually have an idea of
what an acceptable delay is. Of course, that delay must stay under the duration
of one buffer (on average) so you remain real-time. Increasing the deadline
will increase the pipeline's configured latency, which is basically the time we
wait before we start rendering.
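To make the "under one buffer duration" constraint concrete, here is a small back-of-the-envelope check (my own illustration, not from the thread): for 48 kHz audio delivered in 1024-sample buffers, one buffer lasts about 21.3 ms, so a 100 ms processing deadline would exceed it, while 10 ms would not.

```python
# Back-of-the-envelope check: a processing deadline only keeps the pipeline
# real-time if it stays under the average duration of one buffer.

def buffer_duration_ms(samples_per_buffer: int, sample_rate: int) -> float:
    """Duration of one audio buffer, in milliseconds."""
    return samples_per_buffer / sample_rate * 1000.0

def deadline_is_realtime(deadline_ms: float, samples_per_buffer: int,
                         sample_rate: int) -> bool:
    """True if the deadline stays under one buffer duration."""
    return deadline_ms < buffer_duration_ms(samples_per_buffer, sample_rate)

duration = buffer_duration_ms(1024, 48000)
print(round(duration, 2))                         # → 21.33
print(deadline_is_realtime(100.0, 1024, 48000))   # → False (too large)
print(deadline_is_realtime(10.0, 1024, 48000))    # → True
```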
Increasing that when there is no delay will cause the queues to fill; a higher
fill level in the queues is what you are looking for to accommodate jitter (the
case where a set of buffers is actually delayed). I'm not certain how this
would be controlled in parallel with other settings, but I think you should
monitor the queue levels to validate your configuration.
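A sketch of what such a buffering queue could look like (the element chain and sizes are illustrative, not from the thread): give the queue enough max-size-time headroom for the jitter you expect, and read its current-level-time property from application code at runtime to see how full it actually runs.

```shell
# Hypothetical sketch: a queue sized to absorb ~100 ms of jitter.
# max-size-time is in nanoseconds; the buffer/byte limits are disabled so
# only the time limit applies. The queue's current-level-time property
# (also nanoseconds) can be read at runtime to validate the configuration.
gst-launch-1.0 fdsrc fd=0 \
    ! rawaudioparse format=pcm pcm-format=s16le \
          sample-rate=48000 num-channels=2 \
    ! clocksync \
    ! queue max-size-time=100000000 max-size-buffers=0 max-size-bytes=0 \
    ! audioconvert \
    ! alsasink
```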
> Thanks for taking the time.
> Peter Maersk-Moller
> > > Thanks in advance.
> > > Peter Maersk-Moller