[pulseaudio-discuss] [PATCH 07/13] loopback: Refactor latency initialization

Tanu Kaskinen tanuk at iki.fi
Wed Nov 25 16:49:57 PST 2015


On Wed, 2015-11-25 at 22:58 +0100, Georg Chini wrote:
> On 25.11.2015 19:49, Tanu Kaskinen wrote:
> > On Wed, 2015-11-25 at 16:05 +0100, Georg Chini wrote:
> > > On 25.11.2015 09:00, Georg Chini wrote:
> > > > OK, understood. It's strange that you talk about 75% and 25%
> > > > average buffer fills. Doesn't that hint at a connection between
> > > > sink latency and buffer_latency?
> > > > I believe I found something in the sink or alsa code back in February
> > > > which at least supported my choice of the 0.75, but I have to admit
> > > > that I can't find it anymore.
> > > Let's take the case I mentioned in my last mail. I have requested
> > > 20 ms for the sink/source latency and 5 ms for the memblockq.
> > What does it mean that you request 20 ms "sink/source latency"? There
> > is the sink latency and the source latency. Does 20 ms "sink/source
> > latency" mean that you want to give 10 ms to the sink and 10 ms to the
> > source? Or 20 ms to both?
> 
> I try to configure the source and sink to the same latency, so when I
> say source/sink latency = 20 ms, I mean that I configure both to
> 20 ms.
> In the end they may still be configured to different latencies
> (for example HDA -> USB).
> The minimum necessary buffer_latency is determined by the larger
> of the two.
> For simplicity, in this thread I always assume they are equal.
> 
> > 
> > > The
> > > 20 ms cannot be satisfied, I get 25 ms as sink/source latency when
> > > I try to configure it (USB device).
> > I don't understand how you get 25 ms. default_fragment_size was 5 ms
> > and default_fragments was 4, multiply those and you get 20 ms.
> 
> You are right. The configured latency is 20 ms but in fact I am seeing
> up to 25 ms.

25 ms reported as the sink latency? If the buffer size is 20 ms, then
that would mean that there's 5 ms buffered later in the audio path.
That sounds a bit high to me, but not impossible. My understanding is
that USB transfers audio in 1 ms packets, so there has to be at least
1 ms of extra buffering after the basic alsa ringbuffer; maybe that
extra buffer holds several packets.
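
To put numbers on that guess (a rough sketch, not actual alsa code;
the function and the packet count are my own illustration):

    #include <stdint.h>

    /* Hypothetical decomposition of the reported sink latency: the
     * alsa ring buffer plus whatever sits in a USB packet FIFO
     * behind it, with USB moving audio in 1 ms packets. */
    static uint64_t reported_sink_latency_usec(uint64_t ringbuffer_usec,
                                               unsigned queued_usb_packets) {
        return ringbuffer_usec + queued_usb_packets * 1000u;
    }

    /* 20000 us ring buffer + 5 queued packets -> 25000 us reported. */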

> > 
> > > For the loopback code it means that the target latency is not what
> > > I specified on the command line but the average sum of source and
> > > sink latency + buffer_latency.
> > The target latency should be "configured source latency +
> > buffer_latency + configured sink latency". The average latencies of
> > the sink and source don't matter, because you need to be prepared
> > for the worst case scenario, in which the source buffer is full and
> > the sink wants to refill its buffer before the source pushes its
> > buffered audio to the memblockq.
> 
> Using your suggestion would again drastically raise the achievable
> lower latency limit. Obviously it is not necessary to cover the full
> range.

How is that obviously not necessary? For an interrupt-driven alsa
source I see how that is not necessary, hence the suggestion for
optimization, but other than that, I don't see the obvious reason.

> That special case is also difficult to explain. There are two
> situations where I use the average sum of source and sink latency:
> 1) The latency specified cannot be satisfied.
> 2) sink/source latency and buffer_latency are both specified.
> 
> In case 1) the sink/source latency will be set as low as possible
> and buffer_latency will be derived from the sink/source latency
> using my safeguards.
> In case 2) sink/source latency will be set to the nearest possible
> value (which may be higher than specified), and buffer_latency is
> set to the command-line value.
> 
> Now in both cases you have sink/source latency + buffer_latency
> as the target value for the controller - at least if you want to
> handle it similarly to normal operation.
> The problem is that the configured sink/source latency is
> possibly different from what you get on average. So I replaced
> the sink/source latency with the average sum of the measured
> latencies.

Of course the average measured latency of a sink or source is lower
than the configured latency. The configured latency represents the
situation where the sink or source buffer is full, and the buffers
won't be full most of the time. But the total latency still needs to
be big enough to contain both of the configured latencies, because you
need to handle the case where both buffers happen to be full at the
same time.
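
As code, the rule I'm arguing for looks like this (a minimal sketch,
not the actual module-loopback code; the names are mine):

    #include <stdint.h>

    /* Worst case: the source buffer is full and the sink wants a
     * full refill before the source pushes anything to the
     * memblockq, so the target must hold all three parts at once. */
    static uint64_t min_safe_target_usec(uint64_t configured_source_latency,
                                         uint64_t buffer_latency,
                                         uint64_t configured_sink_latency) {
        return configured_source_latency + buffer_latency
               + configured_sink_latency;
    }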

> The average is also used to compare the "real"
> source/sink latency + buffer_latency
> against the configured overall latency, and the larger of the two
> values is the controller target. This is the mechanism used
> to increase the overall latency in case of underruns.

I don't understand this paragraph. I thought the reason why the
measured total latency is compared against the configured total latency
is that you then know whether you should increase or decrease the sink
input rate. I don't see how averaging the measurements helps here.

And what does this have to do with increasing the latency on underruns?
If you get an underrun, then you know buffer_latency is too low, so you
bump it up by 5 ms (if I recall your earlier email correctly), causing
the configured total latency to go up by 5 ms as well. As far as I can
see, the measured latency is not needed for anything in this operation.
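
In code, the whole operation would be something like this (a sketch;
the struct and field names are my own invention, and the 5 ms step is
the one from your earlier mail):

    #include <pulse/sample.h>   /* pa_usec_t */
    #include <pulse/timeval.h>  /* PA_USEC_PER_MSEC */

    struct loopback_state {
        pa_usec_t configured_source_latency;
        pa_usec_t configured_sink_latency;
        pa_usec_t buffer_latency;
        pa_usec_t target_latency;
    };

    /* On underrun, grow the safety margin by a fixed 5 ms and
     * recompute the target; no measured latency is consulted. */
    static void handle_underrun(struct loopback_state *s) {
        s->buffer_latency += 5 * PA_USEC_PER_MSEC;
        s->target_latency = s->configured_source_latency
                            + s->buffer_latency
                            + s->configured_sink_latency;
    }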

----

Using your example (usb sound card with 4 * 5 ms sink and source
buffers), my algorithm combined with the alsa source optimization
yields the following results:

configured sink latency = 20 ms
configured source latency = 20 ms
maximum source buffer fill level = 5 ms
buffer_latency = 0 ms
target latency = 25 ms

So you see that the results aren't necessarily overly conservative.
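
Spelled out as code (again just a sketch of the idea, not existing
code):

    #include <stdint.h>

    /* With an interrupt-driven alsa source the buffer fill never
     * exceeds one fragment, so only that much has to be reserved
     * for the source instead of its full configured latency. */
    static uint64_t target_latency_usec(uint64_t configured_sink_latency,
                                        uint64_t max_source_fill,
                                        uint64_t buffer_latency) {
        return configured_sink_latency + max_source_fill + buffer_latency;
    }

    /* 20 ms + 5 ms + 0 ms = 25 ms target latency. */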

buffer_latency shouldn't be zero, of course, if you want to protect
against rate errors, scheduling delays and jitter[1], but my point is
that buffer_latency shouldn't be proportional to the fragment size
(unless you can show how the rate errors, scheduling delays or jitter
are proportional to the fragment size).

In your example the user explicitly configured buffer_latency to 5 ms,
but I ignored that, because that seemed pointless. If you really want
to override the default in this case, then fine, buffer_latency can be
set to 5 ms, that will just mean that the total target latency will
increase to 30 ms, because 25 ms is the minimum latency supported by
the sink and source.
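
With the sketch from above, that is simply:

    /* target_latency_usec(20000, 5000, 5000) == 30000 us: the
     * user-supplied 5 ms buffer_latency pushes the total from the
     * 25 ms minimum up to 30 ms. */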

[1] I guess by "jitter" you mean latency measurement jitter? I didn't
initially consider that, and after trying to think about how to
calculate a safe margin against the effects of the jitter, I decided to
give up due to brain hurting too much. If you think the jitter causes
big enough problems to warrant a safety margin in buffer_latency,
you're welcome to add it. If you want to make it relative to some other
number, like the fragment size, total latency or adjust time, then I
want to understand on what basis that association is made.

-- 
Tanu

