[pulseaudio-discuss] sink/source implementation for pull-io audio processing
daniel at caiaq.de
Wed Nov 4 04:02:15 PST 2009
On Wed, Nov 04, 2009 at 12:02:27AM +0100, Lennart Poettering wrote:
> On Tue, 03.11.09 10:54, Daniel Mack (daniel at caiaq.de) wrote:
> > CoreAudio is implemented in an asynchronous pull-io fashion, which means
> > that the user registers a callback (IOProc) to be called for the device
> > whenever there is a specified amount of data available and/or wanted,
> > respectively. That's all fairly straightforward.
> > As the PA sinks/sources are set up to match the sample format used by
> > CoreAudio's IOProc, there is actually nothing more left to do inside
> > this callback than copying the buffers from one end to the other when
> > I'm called and then inform PA about new data arrival.
> > Which API would cause as little overhead as possible? I haven't fully
> > understood the magic behind the PA RT threads yet, and I doubt I need
> > that at all, as CoreAudio's IOProc is already called from a _very_
> > highly prioritized thread.
> This is actually similar to the JACK situation. On JACK too, the API
> allocates the thread and we need to make the best of it. In the Jack
> case we work around that by playing ping-pong between two RT
> threads: the one that is created by libjack and the one that is
> created by PA. This is actually really bad, since this means one
> additional context switch, and we really would prefer to do without
That's what I thought, too.
> The PA core actually does not require that it runs an RT thread that
> was created by itself, we are actually very flexible on this and could
> run fine with a foreign thread. However, what is important is that
> there is a way so that the PA core can wake up the RT thread at any time
> and cause it to execute code then. Unfortunately JACK currently does
> not allow that, the RT thread can only be woken up by the JACK server,
> not by our PA core.
> Now the question is: how much control does CoreAudio actually give you
> for that high prio thread? Is there any chance you can trigger from
> the PA main thread that some code is run inside the RT thread
> CoreAudio maintains? That means two things: firstly, there needs to be
> a way to wake up the RT thread from another thread, and secondly that
> some arbitrary code can be executed in the RT thread then.
Executing arbitrary code from that thread wouldn't be a problem, as I'm
simply dropped into a callback function to do my audio processing there.
It's up to the user whether to memcpy() a chunk of data or to do
real-time audio rendering in this callback.
However, the thread cannot be woken up from userspace, AFAIK. It is
purely driven by the audio callback of the corresponding hardware, and
the callbacks will even stop occurring entirely in case the hardware
stops clocking for whatever reason. So it's not more than the userspace
part of the audio
The question I have about this is: why does PA necessarily need to
process events within that very thread? Wouldn't it be possible to let
the CoreAudio RT thread do all the audio stuff and create a PA RT thread
to handle everything else? That way, we wouldn't have a context switch
for all the audio material.