Why is wav file mixed by soundmixer plugin truncated?

toub etienne.laurent at pa.cotte.fr
Fri Dec 8 15:19:22 UTC 2017


Nicolas Dufresne wrote:
> On Thursday, 7 December 2017 at 02:45 -0700, toub wrote:
>> 
>> In any case, why do I have to set the latency? As far as I know, buffers
>> produced by the mixer at a time T are not delayed before arriving at
>> alsasink.
> 
> As you have live sources, you need latency, because by the time we have
> captured the audio, the data is already late. The latency is the amount
> of time you give your pipeline to transport data from the source to the
> sink and render it, plus the extra time needed to synchronize to the
> sink that renders last.

But in my case there are no live sources, only filesrc elements, which are
linked to the pipeline at random times. Should I consider these sources to
be live sources?


>> 
>> Also, I could not find how to use the do-latency signal to modify the
>> alsa sink.
>> Could you give me a sample?
> 
> There is no example that I know of; I usually start from the default
> implementation:
> 
> https://cgit.freedesktop.org/gstreamer/gstreamer/tree/gst/gstpipeline.c#n619

OK, I'll try to adapt the default implementation next week. What latency
should I apply? It turns out that the longer it takes before a new sound is
triggered, the more truncated the sound is. So I expect that the latency
should be adapted dynamically, but I cannot see how I could compute the
latency to apply if it is not constant.
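For what it's worth, here is a minimal sketch of what a custom "do-latency"
handler could look like, modelled on the default implementation linked
above. It queries the pipeline's latency and redistributes it with some
extra headroom; the 50 ms value is an arbitrary illustrative choice, not a
recommendation:

```c
#include <gst/gst.h>

/* Sketch of a custom "do-latency" handler (a GstBin signal), modelled on
 * the default implementation in gstpipeline.c. Returning TRUE tells the
 * bin that latency has been handled, so the default logic is skipped. */
static gboolean
do_latency_cb (GstBin *bin, gpointer user_data)
{
  GstQuery *query = gst_query_new_latency ();
  gboolean live;
  GstClockTime min_latency, max_latency;

  if (gst_element_query (GST_ELEMENT (bin), query)) {
    gst_query_parse_latency (query, &live, &min_latency, &max_latency);
    /* Distribute the queried minimum latency plus some headroom.
     * 50 ms here is an arbitrary value for illustration only. */
    gst_element_send_event (GST_ELEMENT (bin),
        gst_event_new_latency (min_latency + 50 * GST_MSECOND));
  }
  gst_query_unref (query);
  return TRUE;
}

/* Somewhere after creating the pipeline:
 *   g_signal_connect (pipeline, "do-latency",
 *       G_CALLBACK (do_latency_cb), NULL);
 */
```

This just replaces the constant headroom in the default handler; whether a
dynamically computed value is needed in your case is exactly the open
question above.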




More information about the gstreamer-devel mailing list