[pulseaudio-discuss] why command_cork_playback_stream() will be invoked many times?
marcandre.lureau at gmail.com
Wed Feb 11 06:55:59 PST 2009
On Wed, Feb 11, 2009 at 4:33 PM, Lennart Poettering
<lennart at poettering.net> wrote:
> On Fri, 06.02.09 16:04, pl bossart (bossart.nospam at gmail.com) wrote:
>> Hi Lennart,
>> I like the idea of modules being able to send events to a client. That would
>> work for clients who connect directly to pulseaudio, with some additional
>> modifications internally. For example, pulsesink would send a message on
>> gst_bus to request the app to pause.
> At FOSDEM I talked to the gst folks about that, and yes, the plan is to
> map this to a message on gstbus.
Hmm, that's how GSmartMix was doing it, but it's probably not enough.
We need to discuss that thoroughly with the interested parties at the
developer meeting. I can't wait!
>> However in the case where apps use ALSA and see the PCM routed to PulseAudio
>> by the ALSA-lib pulse plugin, that wouldn't help at all, would it? The
>> cork request would need to be sent to the original application, using DBUS
>> or something.
> Yes, of course. Folks using an abstraction layer that abstracts
> features away won't be able to make use of this directly. That is not
> particularly surprising, is it?
Please Lennart, don't ignore 96% of applications (yes, a very accurate
figure, eh!) by implying it's their fault. At the very least, let's try
to convince them, not ignore them.
> Marc-Andre pointed me to MPRIS, and suggested implementing
> pause/resume based on that:
Yes, because they started long ago, although without the "policy" case
or the "distributed" case in mind. In many ways I dislike this API; I
even started a pet project called MEPRIS ("mépris", French for
disdain), which is now called EPRIS (roughly "in love"),
http://code.google.com/p/epris/ (check the "Why?" section).
> The MPRIS API is very much flawed in my eyes (i.e. racy: the
> definition of the Pause() call is just stupid, and makes it impossible
> to use this for software that gets the events in question some other
> way as well). Also, I am always a bit unsure about having PA connect
> to the session bus, since the session bus runs on the machine that
> runs the rest of the session, while PA is usually run alongside X on
> the client side; hence relying on this kind of communication will
> break network transparency/thin-client stuff. In addition to that,
> none of the relevant media players really supports MPRIS. (Where's
> Rhythmbox? Where is Totem? Where is Banshee?) I am also a bit unsure
> how to map the connection to the right service, i.e. is prefixing the
> executable name with org.mpris good enough?
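To make the "racy Pause()" complaint concrete: as I recall, MPRIS 1's
Pause() has toggle semantics (pause if playing, unpause if paused), so two
independent policy agents asking for "paused" can cancel each other out. A
toy model in Python (hypothetical illustration, not real MPRIS or D-Bus
code) shows the difference against an idempotent pause request:

```python
# Toy model of toggle-style Pause() vs. an idempotent pause request.
# Hypothetical sketch: these classes are illustrations, not real MPRIS code.

class TogglePlayer:
    """Pause() flips the state, like MPRIS 1's Pause()."""
    def __init__(self):
        self.playing = True
    def pause(self):
        self.playing = not self.playing

class ExplicitPlayer:
    """pause() always ends up paused, so repeated requests are safe."""
    def __init__(self):
        self.playing = True
    def pause(self):
        self.playing = False

# Two agents (say, PulseAudio corking and a headset button handler)
# both request a pause around the same time:
t = TogglePlayer()
t.pause(); t.pause()
print(t.playing)   # True -- the second toggle undid the first: still playing!

e = ExplicitPlayer()
e.pause(); e.pause()
print(e.playing)   # False -- idempotent request, safe to send twice
```

This is exactly why software that also receives these events "some other
way" cannot safely use a toggle-style call.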
Can you raise your questions on their mailing lists? I think the MPRIS
requirements are pretty loose, and so far the interest has been a bit
too low (like the notification API, for instance). These things deserve
much more attention for the mobile case, for instance. I hate companies
reinventing the wheel on their own, without participating. But I am one
of those to blame, since I implemented the so-called "libplayback" in
Maemo. Though, believe me, I did try to discuss and propose things
publicly with GSmartMix. Maemo has its own schedule, as you might know.
> But OTOH, having said all this it should be easy to implement a module that
> forwards the pause events to MPRIS as well, by adding a tiny hook that
> allows interception of the request-pause/request-resume events.
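The proposed hook could look something like the following conceptual
sketch (written in Python for brevity, though PulseAudio itself is C; the
hook names and the forwarder module are assumptions, not existing PA API):

```python
# Conceptual model of the proposed hook: the core fires
# request-pause/request-resume events, and any module -- e.g. a
# hypothetical module-mpris-forwarder -- can intercept them.

class HookList:
    """Minimal analogue of an ordered callback list (a la pa_hook)."""
    def __init__(self):
        self._slots = []
    def connect(self, callback):
        self._slots.append(callback)
    def fire(self, data):
        for cb in self._slots:
            cb(data)

# Core side: hooks fired when a stream should be corked/uncorked.
request_pause = HookList()
request_resume = HookList()

# Module side: a forwarder would connect here and translate the event
# into a D-Bus call; we just record it for illustration.
forwarded = []
request_pause.connect(lambda stream: forwarded.append(("pause", stream)))
request_resume.connect(lambda stream: forwarded.append(("resume", stream)))

request_pause.fire("sink-input#7")
request_resume.fire("sink-input#7")
print(forwarded)   # [('pause', 'sink-input#7'), ('resume', 'sink-input#7')]
```

The point is that the core stays ignorant of MPRIS: only the module that
connects to the hook knows how to translate the event.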
Yep, that is a possibility. We are getting used to having dozens of
different paths in the audio stack, so one more does not shock me at
all ;) But I am surely perverted already.
> There's also another option: we could use XTEST to synthesize
> XF86XK_AudioPause and XF86XK_AudioPlay key events on X. That way
> gnome-settings-daemon will pick them up and dispatch them to the media
> player apps. This would be pretty simple and thin-client-safe. OTOH
> it doesn't allow targeting the events to specific applications.
Right. We actually have this kind of problem with other accessory
inputs, such as +/- buttons on mobile devices, and headsets. One should
be overridable by some applications (for zooming); the other should
only target the audio volume. And both should be hooked regardless of
application focus. My hope is that XInput2 will also expose more
information about the device the input comes from (which is a pretty
weak area of XInput right now).
PS: on Fremantle, we will still be using and abusing libplayback (the
org.maemo.Playback interface) for audio policy targeting the UI. We
also have other, lower-level ways to enforce policy thanks to
PulseAudio.