[pulseaudio-discuss] [PATCH v9] loopback: Initialize latency at startup and during source/sink changes

Georg Chini georg at chini.tk
Tue Feb 28 15:30:47 UTC 2017


On 28.02.2017 16:11, Tanu Kaskinen wrote:
> On Mon, 2017-02-27 at 13:54 +0100, Georg Chini wrote:
>>           case SINK_INPUT_MESSAGE_POST:
>>   
>> -            pa_sink_input_assert_io_context(u->sink_input);
>> +            pa_memblockq_push_align(u->memblockq, chunk);
>> +
>> +            /* If push has not been called yet, latency adjustments in sink_input_pop_cb()
>> +             * are enabled. Disable them on first push and correct the memblockq. If pop
>> +             * has not been called yet, wait until the pop_cb() requests the adjustment */
>> +            if (u->output_thread_info.pop_called && (!u->output_thread_info.push_called || u->output_thread_info.pop_adjust)) {
>> +                pa_usec_t time_delta;
>> +
>> +                /* This is the source latency at the time push was called */
>> +                time_delta = PA_PTR_TO_UINT(data);
>> +                /* Add the time between push and post */
>> +                time_delta += pa_rtclock_now() - offset;
>> +                /* Add the sink latency */
>> +                time_delta += pa_sink_get_latency_within_thread(u->sink_input->sink);
>> +
>> +                /* The source latency report includes the audio in the chunk,
>> +                 * but since we already pushed the chunk to the memblockq, we need
>> +                 * to subtract the chunk size from the source latency so that it
>> +                 * won't be counted towards both the memblockq latency and the
>> +                 * source latency.
>> +                 * If the source has overrun, assume that the maximum it should have pushed is
>> +                 * one full source latency. It may still be possible that the next push also
>> +                 * contains too much data, then the resulting latency will be wrong. */
>> +                if (pa_bytes_to_usec(chunk->length, &u->sink_input->sample_spec) > u->output_thread_info.effective_source_latency)
>> +                    time_delta = PA_CLIP_SUB(time_delta, u->output_thread_info.effective_source_latency);
>> +                else
>> +                    time_delta = PA_CLIP_SUB(time_delta, pa_bytes_to_usec(chunk->length, &u->sink_input->sample_spec));
> Using effective_source_latency is starting to make some sense to me,
> but I think the comment is not easy to understand. Are you ok with it
> if I modify the comment like this:
>
> @@ -701,9 +701,18 @@ static int sink_input_process_msg_cb(pa_msgobject *obj, int code, void *data, in
>                    * to subtract the chunk size from the source latency so that it
>                    * won't be counted towards both the memblockq latency and the
>                    * source latency.
> -                 * If the source has overrun, assume that the maximum it should have pushed is
> -                 * one full source latency. It may still be possible that the next push also
> -                 * contains too much data, then the resulting latency will be wrong. */
> +                 *
> +                 * Sometimes the alsa source reports way too low latency (might
> +                 * be a bug in the alsa source code). This seems to happen when
> +                 * there's an overrun. As an attempt to detect overruns, we
> +                 * check if the chunk size is larger than the configured source
> +                 * latency. If so, we assume that the source should have pushed
> +                 * a chunk whose size equals the configured latency, so we
> +                 * modify time_delta only by that amount, which makes
> +                 * memblockq_adjust() drop more data than it would otherwise.
> +                 * This seems to work quite well, but it's possible that the
> +                 * next push also contains too much data, and in that case the
> +                 * resulting latency will be wrong. */
>                   if (pa_bytes_to_usec(chunk->length, &u->sink_input->sample_spec) > u->output_thread_info.effective_source_latency)
>                       time_delta = PA_CLIP_SUB(time_delta, u->output_thread_info.effective_source_latency);
>                   else
>
OK for me, except that "configured source latency" is somewhat misleading,
because we have a configured_source_latency variable, and that is not what we
are using here. However, I don't know how to rephrase it without using many
words, so I'm OK with your comment.
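
Just to make the arithmetic above concrete with some made-up numbers
(purely illustrative, the local variables below do not exist in the patch):

    /* Assume the source reported 20 ms latency at push time, 1 ms passed
     * between push and post, the sink reports 25 ms and
     * effective_source_latency is 20 ms. All values are in microseconds. */
    pa_usec_t source_latency_at_push   = 20000;
    pa_usec_t push_to_post             =  1000; /* pa_rtclock_now() - offset */
    pa_usec_t sink_latency             = 25000;
    pa_usec_t effective_source_latency = 20000;
    pa_usec_t chunk_usec               = 35000; /* overrun: more than 20 ms pushed */
    pa_usec_t time_delta;

    time_delta = source_latency_at_push + push_to_post + sink_latency; /* 46 ms */

    if (chunk_usec > effective_source_latency)
        /* Overrun: subtract only 20 ms, so memblockq_adjust() later drops
         * the extra 15 ms the source pushed on top of its latency. */
        time_delta = PA_CLIP_SUB(time_delta, effective_source_latency); /* -> 26 ms */
    else
        /* Normal case: subtract the full chunk so it is not counted towards
         * both the memblockq latency and the source latency. */
        time_delta = PA_CLIP_SUB(time_delta, chunk_usec);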

I also discovered a small bug:
In update_latency_boundaries() I test for alsa devices with fixed 
latency like this:

             s = pa_proplist_gets(source->proplist, PA_PROP_DEVICE_API);
             if (pa_streq(s, "alsa"))
                 u->fixed_alsa_source = true;

This will crash PulseAudio if the property is not set. It should be:

             if ((s = pa_proplist_gets(source->proplist, PA_PROP_DEVICE_API))) {
                 if (pa_streq(s, "alsa"))
                     u->fixed_alsa_source = true;
             }
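
(The crash comes from pa_streq() being called with NULL, because
pa_proplist_gets() returns NULL when the property is not set.) If I
remember correctly there is also pa_safe_streq(), which treats NULL as
unequal to any non-NULL string, so the check could alternatively be
written as:

             s = pa_proplist_gets(source->proplist, PA_PROP_DEVICE_API);
             if (pa_safe_streq(s, "alsa"))
                 u->fixed_alsa_source = true;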

Shall I send a new version or can you fix it when pushing the patch?
