[pulseaudio-discuss] asoundrc, configure one virtual device for both input and output
amar.akshat at gmail.com
Fri Jun 29 11:37:30 PDT 2012
I did not realize my messages were going out as HTML; I've changed it
to plain text now.
On Sat, Jun 30, 2012 at 2:57 AM, Tanu Kaskinen <tanuk at iki.fi> wrote:
> Could you send your messages as plain text instead of HTML in the
> future? I hear Gmail has a setting for that. Quoting more than one level
> doesn't seem to work properly with HTML (and it's anyway against the
> mailing list etiquette to send HTML).
> On Sat, 2012-06-30 at 00:58 +0900, Amar Akshat wrote:
> > Let me explain: I am in the process of building an application ("my
> > program") which allows the user to switch sound cards (as in left
> > USB handset, right USB handset and onboard), and I am going to store
> > the sound cards in variables like the following:
> > onboard = "pulse_onboard"
> > right_handset = "pulse_right"
> > left_handset = "pulse_left"
> > Depending on the user's current state, I am going to marshal a
> > request to my back-end program, which takes the sound card as a
> > plain string. So I can't use "default"; I need to be able to specify
> > the sound card name, which can be any of the three (namely onboard,
> > right and left).
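The three names above could be wired up in ~/.asoundrc as virtual ALSA
devices backed by the "pulse" ALSA plugin. A minimal sketch, assuming
hypothetical device names (the real sink/source names come from
"pactl list short sinks" and "pactl list short sources"):

```
# One pcm/ctl pair per sound card; repeat for pulse_right and pulse_left.
pcm.pulse_onboard {
    type pulse
    # The plugin uses this name for the sink on playback and for the
    # source on capture, so one virtual device serves both input and
    # output only if matching sink and source names exist.
    device "onboard"
}
ctl.pulse_onboard {
    type pulse
}
```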
> It's hard to imagine what your program does... I guess you can count on
> every user to have specific hardware (an onboard sound card and two USB
> handsets, whatever those are)? Is it out of the question to require the
> user to use e.g. pavucontrol to change the routing? Is it out of the
> question for you to use PulseAudio's native API to control the routing?
Yes, I am providing the hardware as well as the software. The user
requirement is that he should be able to switch handsets at the press
of a button. Consider a device with two handsets and one main speaker,
and a user with a graphical interface to control his calls: he can
switch between handsets and talk to several people at a time. The
telephony program is the back-end (SIP stack), which is passed a
sound_device; media flows through it.
Having said that, pavucontrol is (I think) out of the picture. Using
PulseAudio's native API to control routing may be an approach, as long
as it can provide me with exactly one "sound_device" to be used as the
source.
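One way to use the native routing without binding against libpulse is
to drive pactl, PulseAudio's command-line client. A sketch, where the
stream indices and device names are placeholders (the real ones come
from "pactl list short sink-inputs" / "pactl list short source-outputs"):

```python
import subprocess

def move_stream_cmds(sink_input, sink, source_output, source):
    """Build the pactl commands that re-route a live call.

    pactl's move-sink-input / move-source-output move a running stream
    to another sink/source, so the SIP stack would not even need to
    reopen its stream for this approach.
    """
    return [
        ["pactl", "move-sink-input", str(sink_input), sink],
        ["pactl", "move-source-output", str(source_output), source],
    ]

def switch_handset(sink_input, sink, source_output, source):
    # Actually runs the commands; requires a running PulseAudio daemon.
    for cmd in move_stream_cmds(sink_input, sink, source_output, source):
        subprocess.check_call(cmd)
```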
> Do you reopen the audio stream when the routing changes? I guess you
> have to do that, since ALSA doesn't provide a mechanism to move a live
> stream from one device to another. You could then set the PULSE_SINK and
> PULSE_SOURCE environment variables prior to opening a new stream - that
> way you could use just "default", and the routing would be controlled by
> those environment variables. I asked about reopening the streams,
> because the environment variables only have an effect when you create
> a new stream.
That's true; my SIP stack reopens the audio stream when routing
changes. In fact, my SIP stack creates an audio stream for every call.
I have a doubt: when we pass "default" as the sound device, does
PulseAudio internally refer to the PULSE_SINK and PULSE_SOURCE
environment variables to identify the sink/source? If yes, then I
guess setting these variables will do the trick for me.
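If the environment-variable route works out, the switching logic could
be as small as the sketch below. The sink/source names are assumptions;
the real ones come from "pactl list short sinks" and "pactl list short
sources":

```python
import os

# Hypothetical mapping from the three sound cards to sink/source names.
DEVICES = {
    "onboard":       ("onboard_sink",       "onboard_source"),
    "right_handset": ("right_handset_sink", "right_handset_source"),
    "left_handset":  ("left_handset_sink",  "left_handset_source"),
}

def select_device(name):
    """Point PULSE_SINK/PULSE_SOURCE at the chosen handset.

    PulseAudio clients read these variables when opening a new stream
    on the "default" device, so this must run before the SIP stack
    (re)opens its stream for the next call.
    """
    sink, source = DEVICES[name]
    os.environ["PULSE_SINK"] = sink
    os.environ["PULSE_SOURCE"] = source
```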
> > Am I a bit clearer this time?
> Yes, your use case is becoming a bit clearer :)
Amar Akshat (アマール)
"Walking on water and developing software from a specification are easy if
both are frozen."