[pulseaudio-discuss] Network Audio with Pulse

Jim Duda jim at duda.tzo.com
Mon Apr 7 16:36:30 PDT 2008


Thanks for the feedback.

I understand your point about using the command line interface module. 
I actually would end up using the socket approach from perl once I had 
it all working properly.
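As a sketch of that socket approach: something like the following python would do the sending (the same idea would work from perl). The socket path is an assumption; module-cli-protocol-unix has to be loaded for the socket to exist at all, and the indices/sink names in the example are hypothetical.

```python
import socket

# Assumed socket path; module-cli-protocol-unix must be loaded, and the
# actual path depends on how it was configured. Adjust to taste.
CLI_SOCKET = "/tmp/pulse-cli"

def cli_command(*parts):
    """Format one PulseAudio CLI command; the protocol is line-oriented."""
    return " ".join(str(p) for p in parts) + "\n"

def send_commands(path, commands):
    """Connect to the CLI socket and send each command in turn."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    try:
        for cmd in commands:
            sock.sendall(cmd.encode("ascii"))
    finally:
        sock.close()

# Example (hypothetical index and sink name; look them up first by
# sending "list-sink-inputs" over the same socket):
# send_commands(CLI_SOCKET, [
#     cli_command("set-sink-input-mute", 3, 1),
#     cli_command("move-sink-input", 3, "tunnel_b"),
# ])
```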

I believe I understand how the rtp approach would work. I think what you 
are doing is as follows.  All stream senders would send on the rtp_send 
side, connecting the rtp_send.monitor to the default alsa sink (for 
local sound).  All other machines would have an rtp_recv, and send the 
output of rtp_recv to the default alsa sink.  Each of these rtp_recv 
would be muted by default.  If machine B wants to join in, machine B 
would unmute its rtp_recv and thereby get the stream.  Do I have a 
basic understanding of how this approach would work?  I have played with 
this to some degree, so I think I understand.
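For reference, the recipe I've been playing with looks roughly like this (default.pa sketch; sink and source names are placeholders and may differ per machine):

```
# Sender: play into a null sink and multicast its monitor.
load-module module-null-sink sink_name=rtp
load-module module-rtp-send source=rtp.monitor
set-default-sink rtp
# (For local playback too, the rtp.monitor source still needs to be
# routed to the local ALSA sink somehow.)

# Receivers: pick the stream up off the network.  It appears as a
# sink input on the default sink, which can then be muted by default
# and unmuted to join.
load-module module-rtp-recv
```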

I assume using the tunnels would work in a similar fashion.  However, 
you need to build a mesh of tunnel connections.  In my case, with 4 
nodes, the mesh is 3 tunnels for each node, 4*3=12 tunnels in total. 
Each receiving node would then mute each tunnel by default, turning on 
the one it wants.  The annoying part of this approach is that you have 
to decide which source you want to connect to, whereas, with the rtp 
approach, you simply join the "collective".
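In default.pa terms, each node would carry one tunnel per peer it might send to; e.g. on machine A (hostnames and sink names are placeholders):

```
# Machine A: one tunnel-sink per peer; streams are then muted, unmuted,
# or moved between tunnels to pick where the audio comes out.
load-module module-tunnel-sink server=machine-b sink_name=tunnel_b
load-module module-tunnel-sink server=machine-c sink_name=tunnel_c
load-module module-tunnel-sink server=machine-d sink_name=tunnel_d
```

Repeated on all four machines, that gives the 4*3=12 tunnels above.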

I have played with the combined_sinks somewhat too.  However, since 
upgrading to FC8, the pulseaudio server keeps crashing when I attempt to 
use a combined sink.  I've been trying to get a core dump to Lennart, 
but I haven't been able to get gdb to help me out; I keep running into 
problems with some threading library (or something of that nature).
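For completeness, the combined-sink setup I've been attempting is along these lines (default.pa sketch; sink names are placeholders and the exact module arguments may differ on 0.9.8):

```
# Sender: one combined sink spanning the local ALSA sink and a tunnel
# to machine B, so a single stream plays in both rooms at once.
load-module module-tunnel-sink server=machine-b sink_name=tunnel_b
load-module module-combine sink_name=shared slaves=alsa_output,tunnel_b
```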

I'm now trying to understand how the paprefs gui mechanism works.  I 
haven't been able to get any of the options enabled; all the controls 
are grayed out, and I'm trying to understand why.


Matt Patterson wrote:
> I played with something similar, but my goal was an audio multiplex 
> switch all on the same machine, so the rtp lag issue was less apparent. 
> As for controlling it, I just wrote a simple python app that connects to 
> the unix socket (same thing pacmd does) and I issue commands to load 
> modules, mute inputs, etc so things can be controlled. I then wrote a 
> php wrapper around the python app so my web based audio control could 
> come about.
> To go this route you have to make sure the command line interface is 
> available either via TCP or Unix socket (I chose unix socket). If you 
> like I would be happy to send my hacktastic python code to help get 
> things moving.
> I believe that using the tunnels allows you to have the sync feature 
> where rtp doesn't, so maybe play around with getting them working???
> Matt
> Jim Duda wrote:
>> There was a similar thread, back around New Year's regarding Network 
>> Audio.  I've read the entire thread a few times.  I'm having similar 
>> problems, yet different.
>> I'm looking for some advice as to how best to use network audio with pulse.
>> I have multiple linux computers in my house, four to be specific.  One 
>> operates as a file server, one as a desktop, and the other two as 
>> diskless thin clients which basically operate as media players.
>> I use these computers in a home automation network in my house using the 
>> misterhouse home automation software (misterhouse.net).
>> All machines are running stock Fedora 8.  The two thin clients are not 
>> running the full suite of services which a desktop would.  For example, 
>> they are not currently running avahi or hal (but could if necessary).  I 
>> can certainly turn on what needs to be running.
>> I'm hoping to perform the following using pulseaudio.
>> Let's call my machines A, B, C, D.
>> Let's assume that some stream is started on machine A, playing in the 
>> living room.  I would like to be able to have that same stream play on 
>> machines A and B simultaneously.  I don't care if I have to go to stream 
>> A and say send to machine B now, or, go to machine B and ask B to fetch 
>> a stream from machine A.  I can make both work.  I want to be able to 
>> drop the stream to B at anytime.  I realize that if the source stream 
>> stops, then all streams would in essence stop too.
>> I need to be able to access the controls to switch streams using a 
>> command line application which I can call from perl using the system 
>> call.  I've seen the stream switch in pavucontrol.  I've seen the 
>> move-sink-input in pactl (but failed to get it to work; I guess I don't 
>> understand how the params work, as I always get some error message).
>> At some other time, I may want to have machine C join the stream 
>> along with machines A and B.
>> How is this best accomplished?
>> 1) Should I use combine_sink on the source machine?
>> 2) Should I use rtp?
>> 3) Should I use tunnel_sink?
>> I've played with rtp.  Although it works, the audio isn't synchronized. 
>> Maybe it should be synchronized, but I haven't found that to be 
>> true.  I can hear a latency delay between multiple machines.
>> I know how to play across the network, using the pulseaudio alsa plugin.
>> I'm now trying to play with the network options in the paprefs 
>> application.  On my main server and desktop, all the network audio 
>> options in paprefs, configure local sound server, are all grayed out.
>> Each machine has these modules installed from FC8.
>> sudo yum list '*pulse*'
>> Installed Packages
>> akode-pulseaudio.i386             2.0.2-4.fc8            installed
>> alsa-plugins-pulseaudio.i386      1.0.15-3.fc8.1         installed
>> pulseaudio.i386                   0.9.8-5.fc8            installed
>> pulseaudio-core-libs.i386         0.9.8-5.fc8            installed
>> pulseaudio-esound-compat.i386     0.9.8-5.fc8            installed
>> pulseaudio-libs.i386              0.9.8-5.fc8            installed
>> pulseaudio-libs-devel.i386        0.9.8-5.fc8            installed
>> pulseaudio-libs-glib2.i386        0.9.8-5.fc8            installed
>> pulseaudio-libs-zeroconf.i386     0.9.8-5.fc8            installed
>> pulseaudio-module-gconf.i386      0.9.8-5.fc8            installed
>> pulseaudio-module-jack.i386       0.9.8-5.fc8            installed
>> pulseaudio-module-x11.i386        0.9.8-5.fc8            installed
>> pulseaudio-module-zeroconf.i386   0.9.8-5.fc8            installed
>> pulseaudio-utils.i386             0.9.8-5.fc8            installed
>> Available Packages
>> audacious-plugins-pulseaudio.i386 1.3.5-3.fc8            fedora
>> fluxbox-pulseaudio.i386           1.0.0-2.fc8            updates
>> gstreamer-plugins-pulse.i386      0.9.5-0.4.svn20070924. fedora
>> kde-settings-pulseaudio.noarch    3.5-38.fc8             updates
>> pulseaudio-module-bluetooth.i386  0.9.8-5.fc8            updates
>> pulseaudio-module-lirc.i386       0.9.8-5.fc8            updates
>> Both the avahi and gconf modules are loaded as displayed in the Modules 
>> section of the Paprefs Manager display.  What else is necessary?
>> I have auth-anonymous=1 set for both module-native-protocol-unix and 
>> module-native-protocol-tcp.
>> I've read all the documentation on the pulse wiki many times.  I've 
>> browsed through all the postings on the mailing list over the past 6 months.
>> I'm just playing now with the server and desktop which have full blown 
>> stock fc8 installs, just to figure out how all this works, then I'll 
>> incorporate the thin clients later.
>> The whole package is rather complicated and I haven't had much success 
>> in putting it all together.
>> I've done my homework.  I just cannot get it working ...
>> Thanks,
>> Jim
>> _______________________________________________
>> pulseaudio-discuss mailing list
>> pulseaudio-discuss at mail.0pointer.de
>> https://tango.0pointer.de/mailman/listinfo/pulseaudio-discuss
