[pulseaudio-discuss] [PATCH v6 14/25] source.c, sink.c: Implement pa_{source, sink}_get_raw_latency_within_thread() calls
Georg Chini
georg at chini.tk
Thu Aug 18 17:17:44 UTC 2016
On 18.08.2016 18:45, Tanu Kaskinen wrote:
> On Wed, 2016-08-17 at 17:06 +0200, Georg Chini wrote:
>> On 17.08.2016 15:22, Tanu Kaskinen wrote:
>>> On Wed, 2016-08-17 at 14:58 +0200, Georg Chini wrote:
>>>
>>> As said above, it does not look more complex to me, but that's a
>>> matter of opinion. Anyway, the additional complexity only arises
>>> because the latencies returned by pa_*_get_latency_*() may not be
>>> negative, so I have to add a new function to get the unbiased values.
>>> If you simply accepted negative numbers, there would be even less
>>> code.
> I think changing all the get_latency functions to deal with signed
> numbers would be fine. Maybe they should have a bool flag to tell
> whether negative values should be filtered out, if there are many call
> sites that need to do the filtering, but from a quick glance there
> also seem to be many places where negative values are fine.
OK, I'll do it that way in the next version.
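For reference, a minimal sketch of the filtering semantics with a bool
flag like the one suggested (the names here are illustrative, not
settled API; in the real code this logic would sit inside the
pa_{source,sink}_get_latency_*() calls themselves):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: the latency is computed in a signed type and may drop
     * below zero; callers that cannot handle negative values opt
     * into the old clamping behaviour. */
    int64_t get_latency_filtered(int64_t raw_usec, bool allow_negative) {
        if (!allow_negative && raw_usec < 0)
            return 0;

        return raw_usec;
    }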
>
>>>> It is much easier to accept them as they are because you do not gain
>>>> anything by the transformation.
>>>> Your approach would not even deliver more accurate results: due to
>>>> the error of the measurements and your idea to accumulate those
>>>> negative values, you would end up always overestimating the offset
>>>> by the maximum error of the measurement.
>>> Over-adjusting the offset by the maximum error of the measurement
>>> only occurs if the maximum error happens on a measurement that is done
>>> when the real latency is exactly zero (which it never is, but it may be
>>> close). What magnitude do you expect the maximum non-offset-related
>>> error to be? I expect it to be less than the offset error, in which
>>> case shifting the offset should result in more accurate results
>>> than not doing that, but maybe I'm wrong about the relative magnitudes
>>> of offset errors and other errors?
>> I think the overall error will be (negligibly) smaller if you live
>> with the offset.
>> Why should the error of the reference point be any different from
>> the error of following measurements? In the end, the reference
>> point consists of a single measurement.
> The initial reset of the smoother is done in a different situation than
> regular latency queries, and I got the impression that you observed bad
> accuracy specifically when starting a sink or source. Maybe that
> impression was wrong.
>
Yes, your impression was wrong. The magnitude of the negative values
does not change over time, which is why I think it is mainly an error
in the initial conditions.
When you choose a starting point, you basically set
card_time = system_time = 0
The implementation may be slightly different, but that's the principle.
If this condition has an error, it will never be corrected, because all
following calculations are based on the initial assumption.
The error is the same for the first data point as for all subsequent
data points. The system time may be considered the exact reference
time; any measurement of the card time relative to the system time
will therefore show the same statistical deviation from the correct
value. It does not matter in what situation the data is captured.
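A toy sketch of that point (all names made up, this is not actual
smoother code): the reference point is a single paired reading of the
two clocks, and any error in that one reading shifts every later
estimate by the same amount:

    #include <stdint.h>

    /* Toy illustration: if the card_time0 reading was off by some
     * error e0, every value returned below is shifted by that same
     * e0 forever; the bias never averages out. */
    int64_t clock_offset(uint64_t card_time, uint64_t system_time,
                         uint64_t card_time0, uint64_t system_time0) {
        int64_t card_elapsed = (int64_t) (card_time - card_time0);
        int64_t system_elapsed = (int64_t) (system_time - system_time0);

        /* With perfect clocks this is 0; per-measurement noise makes
         * it scatter around -e0, sometimes going negative. */
        return card_elapsed - system_elapsed;
    }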
In practice the situation is even more complicated, because neither
system time nor card time is completely stable. Glitches in one of
the time scales might also be partly responsible for negative latencies.
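To spell out the overestimation argument from above with a toy sketch
(made-up names, not code from the patch): if every negative reading is
folded into the offset, the offset converges to the largest negative
error ever observed, and from then on all reported latencies are
overestimated by that worst-case error:

    #include <stdint.h>

    static int64_t offset = 0;

    int64_t filtered_latency(int64_t raw_latency) {
        int64_t l = raw_latency + offset;

        if (l < 0) {
            /* Absorb the negative part into the offset. After enough
             * samples near zero latency, offset equals the largest
             * negative excursion seen so far, so later values carry
             * that bias. */
            offset -= l;
            l = 0;
        }

        return l;
    }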