[pulseaudio-discuss] [PATCH v6 14/25] source.c, sink.c: Implement pa_{source, sink}_get_raw_latency_within_thread() calls
Georg Chini
georg at chini.tk
Wed Aug 17 15:06:21 UTC 2016
On 17.08.2016 15:22, Tanu Kaskinen wrote:
> On Wed, 2016-08-17 at 14:58 +0200, Georg Chini wrote:
>> On 17.08.2016 13:48, Tanu Kaskinen wrote:
>>> On Wed, 2016-08-17 at 06:52 +0200, Georg Chini wrote:
>>>> On 17.08.2016 00:30, Tanu Kaskinen wrote:
>>>>> What would you think about the following idea:
>>>>>
>>>>> Let's say that the latency calculation in the alsa code results in
>>>>> -123 usecs. Negative latencies are impossible, so we know that the
>>>>> value is wrong by at least 123 usecs. If the error is due to a wrong
>>>>> reference point that was set in the beginning, we can improve this
>>>>> and all subsequent reports by moving the smoother's reference point
>>>>> by 123 usecs. It will cause a jump in the latency reports, but it
>>>>> will happen only once (well, there's no guarantee that the reference
>>>>> point error isn't bigger than 123 usecs, so negative values can
>>>>> still occur and more jumps can happen, but each jump makes all
>>>>> subsequent jumps smaller, until there's no more error left to fix).
>>>>>
>>>>> The values produced by the smoother are estimates, so they can
>>>>> contain also other errors than the error in the reference point, so
>>>>> I guess overcompensation is possible in this scheme, but I'd expect
>>>>> it to be negligible in magnitude.
>>>>>
>>>> What would this be good for? Only not to have negative numbers?
>>>> The final end-to-end latency is not what you configure anyway, there
>>>> are (as we discussed) other offsets around that we cannot see. So
>>>> why bother to correct half a millisecond here? Accepting negative
>>>> latencies is just to keep the reports smooth and continuous.
>>>> If you prefer, I can try your approach, but I do not see the benefit.
>>> The benefit is the same as with your "accept negative values" patch:
>>> more smooth and continuous reports than with the current code (and
>>> also slightly more accurate, but the difference is negligible). The
>>> drawback of this idea compared to your approach is that there are
>>> discontinuities in the beginning, but on the plus side, the core
>>> doesn't have to be complicated just in order to pass known-incorrect
>>> values around for little practical benefit (or maybe there is some
>>> significant practical benefit, but so far your stated goal of 50-usec
>>> latency stability seems arbitrary, and irrelevant to end users).
>> Yes, the benefit would be the same, but with much more overhead
>> and also with the drawback of discontinuities in the beginning.
> What overhead? Do you think there's more overhead than one if check?
>
> if (delay < 0) {
>     pa_smoother_adjust_reference_point(smoother, delay);
>     delay = 0;
> }
What about the adjustment function itself? There is a
pa_smoother_set_time_offset() function, but it does not add up
consecutive changes as would be necessary, so we would have to store
the current offset elsewhere. This is an additional variable that must
be tracked and initialized. To me this seems more complex than copying
an already existing function.
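
Roughly, the caller side would then have to look something like this
(just a sketch, not tested; "smoother_offset" is a made-up field in the
ALSA sink's userdata, and the sign convention of
pa_smoother_set_time_offset() would have to be double-checked against
the smoother code):

if (delay < 0) {
    /* Accumulate the correction ourselves, because
     * pa_smoother_set_time_offset() replaces the offset rather than
     * adding to it. u->smoother_offset would start out at 0. */
    u->smoother_offset += (pa_usec_t) -delay;
    pa_smoother_set_time_offset(u->smoother, u->smoother_offset);
    delay = 0;
}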
>> What
>> is your problem with negative numbers? They do not have more error
>> than the positive numbers
> On what basis do you say that? If we spot a negative number, we know
> that the error of the measurement is at least that big. Changing the
> number to zero removes that amount of error. If the error is mostly
> due to a constant offset error, correcting the offset will also
> improve subsequent latency reports.
You will only remove the error from that single measurement. A
single measurement does not give you any information about the
actual offset; it might be that the offset is 0 and just that single
data point has an error. Then all subsequent values would have a
larger error, so you cannot be sure that you improve anything.
And since positive and negative values are subject to the same
systematic offset error, I can't see why the positive values should
be any "better" than the negative ones.
>
>> and going through the effort of making
>> them positive just because it looks nicer does not make sense to me.
> "Looking nicer" in this case means less complexity in the core. Maybe
> you think that has no value? I think it does have value.
As said above, it does not look more complex to me, but that's a matter
of opinion. Anyway, the additional complexity only arises because the
latencies returned by pa_*_get_latency_*() may not be negative, so I
have to introduce a new function to get the unbiased values.
If you simply accepted negative numbers, there would be even less code.
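
Just to make explicit what I mean by "unbiased values": conceptually
the new raw call passes the signed value through unchanged, while the
existing call stays clamped, roughly along these lines (illustrative
only, names and signatures simplified; sink_calculate_latency() is a
placeholder for whatever the sink does today, not a real function):

/* Illustrative sketch, not the actual patch code. */
int64_t pa_sink_get_raw_latency_within_thread(pa_sink *s) {
    /* May be negative; the loopback controller wants this unclamped value. */
    return sink_calculate_latency(s);
}

pa_usec_t pa_sink_get_latency_within_thread(pa_sink *s) {
    int64_t latency = pa_sink_get_raw_latency_within_thread(s);

    /* Behaviour for all existing callers stays the same: never negative. */
    return latency < 0 ? 0 : (pa_usec_t) latency;
}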
>
>> It is much easier to accept them as they are because you do not gain
>> anything by the transformation.
>> Your approach would not even deliver more accurate results, because
>> due to the error of the measurements and your idea to accumulate
>> those negative values, you would end up always overestimating
>> the offset by the maximum error of the measurement.
> Over-adjusting the offset by the maximum error of the measurement only
> occurs if the maximum error happens on a measurement that is done
> when the real latency is exactly zero (which it never is, but it may be
> close). What magnitude do you expect the maximum non-offset-related
> error to be? I expect it to be less than the offset error, in
> which case shifting the offset should result in more accurate results
> than not doing that, but maybe I'm wrong about the relative magnitudes
> of offset errors and other errors?
I think the overall error will be (negligibly) smaller if you live with
the offset.
Why should the error of the reference point be any different from
the error of the following measurements? In the end, the reference
point consists of a single measurement, which means its error is
somewhere between 0 and the maximum error.
Following your approach, you would end up with an error that is
always close to the maximum error if you wait long enough.
As long as the maximum is not reached, you have the disadvantage
that discontinuities are still possible. Another disadvantage is that
the calculated offset cannot grow smaller again. If something
strange happens and you get a very negative latency just once, it
will mess up all future data.
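
As a toy illustration (a standalone program with made-up numbers,
nothing PulseAudio-specific): a single spurious -2000 usec reading
ratchets the accumulated correction up to 2 ms, and every later report
stays shifted by that amount.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical raw latency readings in usec, with one outlier. */
    int64_t readings[] = { 50, -80, 30, -2000, 40, -60, 70 };
    int64_t offset = 0;    /* accumulated correction, starts at 0 */

    for (size_t i = 0; i < sizeof(readings) / sizeof(readings[0]); i++) {
        int64_t corrected = readings[i] + offset;
        if (corrected < 0) {
            offset += -corrected;    /* the offset only ever grows */
            corrected = 0;
        }
        printf("raw %6lld usec -> reported %6lld usec (offset now %lld)\n",
               (long long) readings[i], (long long) corrected,
               (long long) offset);
    }
    return 0;
}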
If you insist, I will implement your approach, but I still believe that
accepting negative latencies produces the more reliable result.
>
>> Regarding the goal: The correct goal is to make the latency as stable
>> as possible; 50 usec is the limit I hit with my approach and USB
>> devices, so you are right, the number is completely arbitrary and
>> only valid for a very special case. I just mentioned it to show that
>> sub-millisecond jumps in the latency do matter for the implemented
>> controller and that a few hundred usec can be considered a rather
>> large jump in that context. In fact, I would have been content with
>> one of my previous controllers, but Alexander Patrakov objected, so I
>> tried to make it as precise as zita-ajbridge, which he always used as
>> an example.
> I'm sorry if you get mixed messages. If we can match zita-ajbridge's
> performance, that would be nice, but in my opinion the benefits of
> doing that need to be weighed against the costs.
>
OK, my results show that it is possible to achieve about the same
stability, but in the end you have to decide whether you accept the
costs.