[gst-devel] Re: [linux-audio-dev] Toward a modularization of audio component
Paul Davis
pbd at Op.Net
Fri May 4 13:22:05 CEST 2001
>suddenly have a very low-latency graph. That's because the only thing
>GStreamer does is enable the actual data flow, in a very direct manner
>(put the buffer pointer in a holding pen, *cothread* switch in the worst
>case, call on the stack best case, hand the buffer to peer element).
>
>If the application decides to use explicit pthreads (as opposed to *just*
>cothreads), then you're gonna have latency problems, unless you have a
>kernel that likes you (and even then....).
why on earth are there *any* kinds of threads in the processing chain?
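A minimal sketch of the hand-off costs being compared, with made-up names
(push_direct, chain, etc.) rather than anything from the actual GStreamer
API:

/* Hypothetical illustration, not GStreamer code: the hand-off costs
 * under discussion, from cheapest upward. */
#include <stdio.h>

typedef struct element element;
struct element {
    const char *name;
    void      (*chain)(element *self, float *buf, int nframes);
    element    *peer;
};

/* Best case: source and peer run in the same context, so handing over
 * a buffer is an ordinary call on the stack -- no scheduling at all. */
static void push_direct(element *src, float *buf, int nframes)
{
    src->peer->chain(src->peer, buf, nframes);
}

static void gain_chain(element *self, float *buf, int nframes)
{
    for (int i = 0; i < nframes; i++)
        buf[i] *= 0.5f;
    printf("%s processed %d frames\n", self->name, nframes);
}

int main(void)
{
    float   buf[64] = { 0 };
    element gain = { "gain", gain_chain, NULL };
    element src  = { "src",  NULL,       &gain };

    push_direct(&src, buf, 64);

    /* Worst case within one OS thread: park the buffer and do a
     * userspace cothread (stack) switch to the peer -- still no
     * kernel involvement.  Only when elements live in separate
     * pthreads does the hand-off go through the kernel scheduler,
     * and that wake-up latency is what threatens a 1-2 ms budget. */
    return 0;
}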
>Several recent changes make it very easy to construct a new 'scheduler'
>that decides what order to run the elements in. If you have a large mixer
>pipeline with the same chain of elements for each of N channels, you then
>have a decision to make, depending on whether you're more interested in
>keeping the code or the data in cache. If you're dealing with 64 samples
>at a time with lots of effects, you want to run all the effects of the
>same type at the same time, then go to the next one.
How could you do that when the inputs to some of them may not have
been computed yet?
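For the simple case the quoted paragraph describes -- N identical
per-channel chains -- the reordering is only legal because stage s of every
channel reads nothing but stage s-1 of the same channel, so one whole stage
is finished before the next begins; an arbitrary graph would need a
topological order. A sketch of the two orders, with hypothetical names
rather than actual scheduler code:

/* Two valid execution orders for NCHAN identical chains of DEPTH
 * effects; both respect the per-channel data dependencies. */
#define NCHAN   64
#define DEPTH    4
#define NFRAMES 64

typedef void (*effect_fn)(float *buf, int nframes);

/* One channel at a time: that channel's data stays hot in cache,
 * but DEPTH different effect bodies are pulled through the i-cache. */
void run_per_channel(effect_fn fx[DEPTH], float bufs[NCHAN][NFRAMES])
{
    for (int c = 0; c < NCHAN; c++)
        for (int s = 0; s < DEPTH; s++)
            fx[s](bufs[c], NFRAMES);
}

/* One effect type at a time across all channels: the effect's code
 * stays hot while the small per-channel buffers stream through it. */
void run_per_effect(effect_fn fx[DEPTH], float bufs[NCHAN][NFRAMES])
{
    for (int s = 0; s < DEPTH; s++)
        for (int c = 0; c < NCHAN; c++)
            fx[s](bufs[c], NFRAMES);
}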
>> imagine: an audio interface is asking you to supply 64 frames of audio
>> data at a time, generated/mutated/processed on demand. you've got
>> 1.3ms (best case) in which to do it, maybe just half that.
>
>This is the application I had in mind when I originally started the
>GStreamer project some 2 years ago. I want to eventually have a
>fully-automated mixing surface that controls a computer (and vice versa),
>in order to do *live* mixing. When someone steps to a specific mic, a
>script fires that lowers all the other channels, for instance. Large
>pipelines for this kind of stuff are going to be the norm, and that's why I
>built GStreamer the way I did.
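For reference, the 1.3 ms figure quoted earlier is just 64 frames at
48 kHz, and "maybe just half that" is the same period at 96 kHz:

/* Period deadline = frames / sample_rate. */
#include <stdio.h>

int main(void)
{
    const int frames = 64;
    printf("%.2f ms at 48 kHz\n", 1000.0 * frames / 48000.0); /* 1.33 ms */
    printf("%.2f ms at 96 kHz\n", 1000.0 * frames / 96000.0); /* 0.67 ms */
    return 0;
}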