[pulseaudio-discuss] Converting sound output with FFMPEG
rfapereira at gmail.com
Fri May 23 03:00:26 PDT 2008
I've just started playing with PulseAudio, so sorry if I'm missing something
really obvious here.
Here's the scenario - I want to capture the sound of different client
applications connected to PulseAudio and encode each sound stream
independently using FFMPEG.
I'm using Ubuntu, which has version 0.9.10 installed. I've done all the
configuration needed to make sure I'm getting everything into PA (ALSA, ESD,
etc...). This works fine.
I know that FFMPEG has a built-in OSS grabbing feature, so one option would
be to configure an OSS sink and route all sound to it.
But I don't want to go down that route, since that defeats the "independent
stream encoding" goal.
So, another idea is using pipe-sinks. In theory, I could set up N different
pipe-sinks for N running clients and move each stream to a separate pipe.
Then, since FFMPEG can get its input from a pipe, I could have N instances
of FFMPEG running and encoding in real-time.
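For the record, this is roughly the setup I have in mind (the sink name, the
fifo path and the sink-input index 42 below are just placeholders - and the
exact pactl/pacmd syntax may differ on 0.9.10):

```shell
# Create one pipe sink per client (repeat with stream2, stream3, ...)
pactl load-module module-pipe-sink sink_name=stream1 \
    file=/tmp/stream1.fifo format=s16le rate=22050 channels=2

# Move a client's stream onto it (find the index with
# "pactl list sink-inputs")
pactl move-sink-input 42 stream1

# Encode the raw samples coming out of the pipe in real time
ffmpeg -f s16le -ar 22050 -ac 2 -i /tmp/stream1.fifo \
    -acodec libmp3lame stream1.mp3
```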
But here's my problem - when reading the pipe that PA creates, I get an
enormous data rate out of it. I configured it for 22050 Hz, s16le and 2
channels, so I expect around 705 kbit/s.
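For what it's worth, the expected rate works out like this (just shell
arithmetic to show the math):

```shell
# 22050 Hz * 2 channels * 2 bytes per s16le sample
bytes_per_sec=$((22050 * 2 * 2))              # 88200 bytes/s
kbits_per_sec=$((bytes_per_sec * 8 / 1000))   # 705 kbit/s
echo "$bytes_per_sec bytes/s = $kbits_per_sec kbit/s"
```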
Even without anything playing, it seems that the output is just flooding
with zeros. As a quick test, if I "cat pipname > grabbed" for a couple of
seconds, the file easily reaches 200-300 MB. So my guess is that the
pipe-sink module is outputting data as fast as it can, even if it is
just zeros.
Is this the case? Is it on purpose? Can someone explain what's going on
(or perhaps suggest a better way to achieve the same goal)?
I can create and compile my own module if necessary (I've managed to make a
copy of module-pipe-sink, deploy it and load it into PA), so I can tweak the
code if needed...