[pulseaudio-discuss] Network Audio with Pulse
Jim Duda
jim at duda.tzo.com
Tue Apr 8 11:26:21 PDT 2008
Wow! But I'm not sure I'm there yet.
When you play from some player, do you play to sink zone1..., or to p1...?
Where does the sound come out? Don't you just have one sound card here, with two front channels?
Thanks for your patience, it hasn't quite clicked in my head yet.
Jim
"Matthew Patterson" <matt at v8zman.com> wrote in message news:47FB86E6.1010606 at v8zman.com...
I'm sorry, I think I misspoke in the last email; I meant the combine sink. Here's some sample config:
When simulating my matrix switch idea, before I got the multiple sound cards, I used remap to make it seem like I had 4 stereo sound cards. You can use this same idea to make one 6 channel card appear as 3 stereo cards (a sketch of that follows the 4-zone config below):
# remap things so it seems like we have 4 stereo zones
load-module module-remap-sink sink_name=zone1 master=alsa_output.pci_8086_2668_alsa_playback_0 channels=2 master_channel_map=front-left,front-right channel_map=front-left,front-right
load-module module-remap-sink sink_name=zone2 master=alsa_output.pci_8086_2668_alsa_playback_0 channels=2 master_channel_map=front-left,front-right channel_map=front-left,front-right
load-module module-remap-sink sink_name=zone3 master=alsa_output.pci_8086_2668_alsa_playback_0 channels=2 master_channel_map=front-left,front-right channel_map=front-left,front-right
load-module module-remap-sink sink_name=zone4 master=alsa_output.pci_8086_2668_alsa_playback_0 channels=2 master_channel_map=front-left,front-right channel_map=front-left,front-right
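If you only have the one 6 channel card, the same trick carves it into three independent stereo sinks by pointing each remap at a different pair of master channels. A rough sketch, untested here; the channel names have to match whatever map your card actually reports:
# split one 5.1 card into three stereo zones
load-module module-remap-sink sink_name=front master=alsa_output.pci_8086_2668_alsa_playback_0 channels=2 master_channel_map=front-left,front-right channel_map=front-left,front-right
load-module module-remap-sink sink_name=rear master=alsa_output.pci_8086_2668_alsa_playback_0 channels=2 master_channel_map=rear-left,rear-right channel_map=front-left,front-right
load-module module-remap-sink sink_name=center_lfe master=alsa_output.pci_8086_2668_alsa_playback_0 channels=2 master_channel_map=front-center,lfe channel_map=front-left,front-right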
Then I used the combine module four times to join all the zones together, so there are four inputs per zone; these can be muted to control what is heard (a note on the index numbers follows the config):
# we leave only one of the outputs unmuted at startup; that one is our player selection
load-module module-combine sink_name=p1 master=zone1 slaves=zone2,zone3,zone4
#set-sink-input-mute 4 1
set-sink-input-mute 5 1
set-sink-input-mute 6 1
set-sink-input-mute 7 1
load-module module-combine sink_name=p2 master=zone1 slaves=zone2,zone3,zone4
set-sink-input-mute 8 1
#set-sink-input-mute 9 1
set-sink-input-mute 10 1
set-sink-input-mute 11 1
load-module module-combine sink_name=p3 master=zone1 slaves=zone2,zone3,zone4
set-sink-input-mute 12 1
set-sink-input-mute 13 1
#set-sink-input-mute 14 1
set-sink-input-mute 15 1
load-module module-combine sink_name=p4 master=zone1 slaves=zone2,zone3,zone4
set-sink-input-mute 16 1
set-sink-input-mute 17 1
set-sink-input-mute 18 1
#set-sink-input-mute 19 1
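One caveat worth spelling out: those index numbers are just whatever the sink inputs happened to be assigned when the modules loaded, so they can come out differently on another box or after a restart. You can always check the live numbers and mute by whatever you actually see, e.g.:
# list everything, including sink inputs and their index numbers
pacmd list
# then mute (1) or unmute (0) by the index shown
pacmd set-sink-input-mute 5 1
pacmd set-sink-input-mute 5 0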
Does that help?
Matt
Jim Duda wrote:
Matt,
I don't understand how the remap_sink module helps (or works for that matter). I'm having trouble getting my head around the inputs and outputs of this module.
Would you mind posting an example of how you use remap_sink?
Thanks,
Jim
"Matt Patterson" <matt at v8zman.com> wrote in message news:47FAB22D.7000408 at v8zman.com...
Yeah, sounds like you have the rtp thing. I assume you realize you can have multiple multicast addresses so there can be simultaneous streams that don't collide/get mixed.
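A sketch of what that could look like on the sending side (the sink names and multicast addresses here are just placeholders, not from a real setup): each sender gets its own null sink and its own destination address, and a receiver picks a stream by matching the address.
# sender 1
load-module module-null-sink sink_name=stream_a
load-module module-rtp-send source=stream_a.monitor destination=224.0.0.56 port=46000
# sender 2, on a different multicast address so the streams don't get mixed
load-module module-null-sink sink_name=stream_b
load-module module-rtp-send source=stream_b.monitor destination=224.0.0.57 port=46000
# a receiver subscribes to one particular stream
load-module module-rtp-recv sap_address=224.0.0.56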
I don't think there is a way you can avoid the mesh (multicast is basically a mesh, just for free) unless you designate a server machine, in which case you could set up a single tunnel sink to each client machine and then have all the switching happen on that machine. I use the remap module to split each sink into 4 inputs (the sink could be a tunnel sink), then connect each input to a different mpd instance, and control what is heard out of each device by muting 3 of the 4 inputs. I end up with 16 remapped sinks in this case (4 output devices * 4 remapped sinks each). I will be adding a 5th zone to my whole-home audio soon, so that will make it 25. The 16 sinks/streams seem to cause no undue load on the system (Core 2 Duo 2180); we'll see how 25 does :)
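Roughly, the server layout I'm describing would look something like this (untested on my end; host names and sink names are placeholders, and the clients would need module-native-protocol-tcp loaded so the tunnels can connect):
# one tunnel sink per client machine; audio played to it comes out on that machine
load-module module-tunnel-sink server=machine-b sink_name=zone_b
load-module module-tunnel-sink server=machine-c sink_name=zone_c
# split a zone into selectable inputs, one per player; mute all but the one you want to hear
load-module module-remap-sink sink_name=zone_b_p1 master=zone_b channels=2 master_channel_map=front-left,front-right channel_map=front-left,front-right
load-module module-remap-sink sink_name=zone_b_p2 master=zone_b channels=2 master_channel_map=front-left,front-right channel_map=front-left,front-right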
I haven't played with the tunnel sink module.
Matt
Jim Duda wrote:
Matt,
Thanks for the feedback.
I understand your point about using the command line interface module.
I actually would end up using the socket approach from perl once I had
it all working properly.
I believe I understand how the rtp approach would work. I think what you
are doing is as follows. All stream senders would send on the rtp_send
side, connecting the rtp_send.monitor to the default alsa sink (for
local sound). All other machines would have an rtp_recv, and send the
output of rtp_recv to the default alsa sink. Each of these rtp_recv
would be muted by default. If machine B wants to join in, machine B
would unmute its rtp_recv and thereby receive the stream. Do I have a
basic understanding of how this approach would work? I have played with
this to some degree, so I think I understand.
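In config terms, I picture it roughly like this (untested; the monitor source name is just an example for whatever the local alsa sink is called, and the sink-input index is a placeholder I'd look up with 'pacmd list'):
# machine A (sender): multicast whatever is playing on the local card
load-module module-rtp-send source=alsa_output.pci_8086_2668_alsa_playback_0.monitor
# machines B, C, D (receivers): the incoming stream shows up as a sink input on the default sink
load-module module-rtp-recv
# on a receiver, mute (1) or unmute (0) that sink input at runtime to leave or join
pacmd set-sink-input-mute 4 1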
I assume using the tunnels would work in a similar fashion. However,
you need to build a mesh of tunnel connections. In my case, with 4
nodes, the mesh is 3 tunnels for each node, 4*3=12 tunnels in total.
Each receiving node would then mute each tunnel by default, turning on
the one it wants. The annoying part of this approach is that you have
to decide which source you want to connect to, whereas, with the rtp
approach, you simply join the "collective".
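For concreteness, I imagine each node would carry something like this (host names and the local sink name are placeholders; untested):
# on node A: one tunnel sink per other node
load-module module-tunnel-sink server=node-b sink_name=to_b
load-module module-tunnel-sink server=node-c sink_name=to_c
load-module module-tunnel-sink server=node-d sink_name=to_d
# optionally tie the local card and the tunnels together so one stream feeds them all
load-module module-combine sink_name=shared master=alsa_output.pci_8086_2668_alsa_playback_0 slaves=to_b,to_c,to_d
# each receiving node then mutes the incoming sink input until it wants the stream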
I have played with the combined_sinks somewhat too. However, since
upgrading to FC8, the pulseaudio server keeps crashing when I attempt to
use a combined sink. I've been trying to get a core dump to Lennart,
but I haven't been able to get gdb to help me out; I keep running into
problems with some threading library (or something of that nature).
I'm now trying to understand how the paprefs gui mechanism works. I
haven't been able to get any of the options enabled for operation; all
the controls are grayed out, and I'm trying to understand why.
Jim
Matt Patterson wrote:
I played with something similar but my goal was an audio multiplex
switch all on the same machine, so the rtp lag issue was less apparent.
As for controlling it, I just wrote a simple python app that connects to
the unix socket (the same thing pacmd does), and I issue commands to load
modules, mute inputs, etc., so things can be controlled. I then wrote a
php wrapper around the python app so my web-based audio control could
come about.
To go this route you have to make sure the command line interface is
available via either TCP or a Unix socket (I chose the Unix socket). If you
like, I would be happy to send my hacktastic python code to help get
things moving.
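For reference, the bit that makes that possible is the CLI protocol module; a minimal sketch (the socket path and port are just examples):
# in default.pa: expose the command-line interface for scripts to talk to
load-module module-cli-protocol-unix socket=/tmp/pulse-cli
load-module module-cli-protocol-tcp port=4712
# a script can then push the same commands pacmd understands, e.g.:
#   echo "set-sink-input-mute 5 1" | pacmd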
I believe that using the tunnels allows you to have the sync feature
whereas rtp doesn't, so maybe play around with getting them working?
Matt
Jim Duda wrote:
There was a similar thread back around New Year's regarding Network
Audio. I've read the entire thread a few times. I'm having similar
problems, yet different ones.
I'm looking for some advice as to how best to use network audio with pulse.
I have multiple linux computers in my house, four to be specific. One
operates as a file server, one as a desktop, and the other two as
diskless thin clients which basically operate as media players.
I use these computers in a home automation network in my house, running the
misterhouse home automation software (misterhouse.net).
All machines are running stock Fedora 8. The two thin clients are not
running the full suite of services that a desktop would. For example,
they are not currently running avahi or hal (but could if necessary). I
can certainly turn on what needs to be running.
I'm hoping to perform the following using pulseaudio.
Let's call my machines A, B, C, D.
Let's assume that some stream is started on machine A, playing in the
living room. I would like to be able to have that same stream play on
machines A and B simultaneously. I don't care if I have to go to the stream
on A and say "send to machine B now", or go to machine B and ask it to fetch
a stream from machine A. I can make both work. I want to be able to
drop the stream to B at any time. I realize that if the source stream
stops, then all streams would in essence stop too.
I need to be able to access the controls to switch streams using a
command line application which I can call from perl using the system
call. I've seen the stream switch in pavucontrol. I've seen the
move-sink-input in pactl (but failed to get it to work; I guess I don't
understand how the params work, as I always get some error message).
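If I understand the usage right, it wants the sink input's index plus the target sink's name (or index); the index and sink name below are just examples, with the index taken from the list output:
# find the sink input index and the target sink name
pactl list
# then move that stream, e.g. sink input 3 onto the sink named zone2
pactl move-sink-input 3 zone2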
At some other time, I may want to have machine C join the stream along with
machines A and B.
What is the best way to accomplish this?
1) Should I use combine_sink on the source machine?
2) Should I use rtp?
3) Should I use tunnel_sink?
I've played with rtp. Although it works, the audio isn't synchronized.
Maybe it should be synchronized, but I haven't found that to be
true; I can hear a delay between the machines.
I know how to play across the network, using the pulseaudio alsa plugin.
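(Concretely, I mean an ~/.asoundrc along these lines; the host name is just an example:)
# route ALSA apps through pulse, optionally pointing at a remote server
pcm.!default {
    type pulse
    server "machine-a"
}
ctl.!default {
    type pulse
    server "machine-a"
}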
I'm now trying to play with the network options in the paprefs
application. On my main server and desktop, all the network audio
options under paprefs' "configure local sound server" section are grayed out.
Each machine has these modules installed from FC8.
sudo yum list '*pulse*'
Installed Packages
akode-pulseaudio.i386 2.0.2-4.fc8 installed
alsa-plugins-pulseaudio.i386 1.0.15-3.fc8.1 installed
pulseaudio.i386 0.9.8-5.fc8 installed
pulseaudio-core-libs.i386 0.9.8-5.fc8 installed
pulseaudio-esound-compat.i3 0.9.8-5.fc8 installed
pulseaudio-libs.i386 0.9.8-5.fc8 installed
pulseaudio-libs-devel.i386 0.9.8-5.fc8 installed
pulseaudio-libs-glib2.i386 0.9.8-5.fc8 installed
pulseaudio-libs-zeroconf.i386 0.9.8-5.fc8 installed
pulseaudio-module-gconf.i386 0.9.8-5.fc8 installed
pulseaudio-module-jack.i386 0.9.8-5.fc8 installed
pulseaudio-module-x11.i386 0.9.8-5.fc8 installed
pulseaudio-module-zeroconf.i386 0.9.8-5.fc8 installed
pulseaudio-utils.i386 0.9.8-5.fc8 installed
Available Packages
audacious-plugins-pulseaudio.i386 1.3.5-3.fc8 fedora
fluxbox-pulseaudio.i386 1.0.0-2.fc8 updates
gstreamer-plugins-pulse.i386 0.9.5-0.4.svn20070924. fedora
kde-settings-pulseaudio.noarch 3.5-38.fc8 updates
pulseaudio-module-bluetooth.i386 0.9.8-5.fc8 updates
pulseaudio-module-lirc.i386 0.9.8-5.fc8 updates
Both the avahi and gconf modules are loaded as displayed in the Modules
section of the Paprefs Manager display. What else is necessary?
I have auth-anonymous=1 loaded for both native-protocol-unix and
native-protocol-tcp.
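In default.pa terms that amounts to lines like these (assuming the TCP module is actually loaded, which is what the network side needs):
load-module module-native-protocol-unix auth-anonymous=1
load-module module-native-protocol-tcp auth-anonymous=1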
I've read all the documentation on the pulse wiki many times. I've
browsed through all the postings on the mailing list over the past 6 months.
I'm just playing now with the server and desktop, which have full-blown
stock FC8 installs, to figure out how all this works; then I'll
incorporate the thin clients later.
The whole package is rather complicated and I haven't had much success
in putting it all together.
I've done my homework. I just cannot get it working ...
Thanks,
Jim