[gst-devel] GNOME Sound server arch
Owen Fraser-Green
owen at discobabe.net
Mon May 21 15:01:01 CEST 2001
Hi,
> Now this interfaces with the more general media-server API I'm not sure.
> Ow3n, you want to comment on that?
I've kept my head low for the last couple of weeks amid the GNOME sound
server debate. My main reason for this is that I think the whole argument
is a little futile since I don't really see the need for a sound server at
all. The whole thing is founded on the existing philosophy of having an
application which outputs some audio data to some sound server that
delivers that data to some audio hardware somewhere. I believe this need
can be met fairly adequately with GnoStream although I admit it introduces
a lot of overhead to a fairly simple problem.
My argument on the other hand is that this whole philosophy is flawed for
the following reasons:
1) This is (provably) insufficient for multimedia streams since it
involves separating the audio flow from the other flows before
travelling over the wire.
2) It is duplication of effort for client applications to participate
in parsing streams for the sound server when gstreamer is quite
capable of being told just the location of the file and "go and play it"
(see the sketch just after this list). And how about an audio codec which
requires the data to be read non-sequentially (as DivX does with video)?
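To make "just tell it the location" concrete, here is a minimal sketch of
what the client side could shrink to. It assumes a playbin-style playback
element and calls along today's GStreamer lines, so treat the names as
illustrative rather than settled API:

/* Minimal "hand it a URI and say go" client.  The playbin element and
 * the exact calls here are assumptions for illustration, not settled
 * API. */
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GstElement *player;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  /* The client only names the stream; demuxing, decoding and output
   * are worked out inside gstreamer, not in the application. */
  player = gst_element_factory_make ("playbin", "player");
  g_object_set (player, "uri", "file:///usr/share/sounds/beep.wav", NULL);
  gst_element_set_state (player, GST_STATE_PLAYING);

  /* Block until the stream finishes or errors out. */
  bus = gst_element_get_bus (player);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_element_set_state (player, GST_STATE_NULL);
  gst_object_unref (player);
  return 0;
}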
Many people are drawing parallels between sound servers and X, and I think
it's a good analogy to make; it's just that people aren't taking it far
enough. A sound server you just throw _audio data_ at is like using X by
sending only the individual pixels you want displayed. I think we should
bring some of the higher X layers into the picture, where the media server
gives us bigger building blocks. Then we can just tell it the file name of
the beep we want it to play and it retrieves it for us.
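Sketched as a client interface, the contrast might look like this. Every
name below is hypothetical, invented purely to show the two levels side
by side:

/* media_server.h -- hypothetical client interface, for illustration
 * only; none of these names exist in GNOME or gstreamer. */
#ifndef MEDIA_SERVER_H
#define MEDIA_SERVER_H

typedef struct _MediaServer MediaServer;

/* Connect to the media server on 'host' (e.g. taken from $AUDIO). */
MediaServer *media_server_connect   (const char *host);

/* Low level, esd-style: push raw samples the client has already
 * decoded itself -- the "sending individual pixels" approach. */
int          media_server_write     (MediaServer *ms,
                                     const void  *pcm,
                                     unsigned     n_bytes);

/* High level: name the sound and let the server fetch, decode and
 * render it -- the "X drawing primitive" approach argued for here. */
int          media_server_play_file (MediaServer *ms,
                                     const char  *uri);

void         media_server_close     (MediaServer *ms);

#endif /* MEDIA_SERVER_H */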
That, however, is where we risk splitting the community, because the best
way of controlling out-of-process servers at a high level is through
something like CORBA or, better still (from GNOME's point of view), Bonobo.
That's where GnoStream comes up against some opposition. However, if we
leave the networking in gstreamer and just view GnoStream as the "gstreamer-
based media server implementation" which offers client applications the
mechanism for interacting with the media server, then the KDE people could
build KStream, which is also a "gstreamer-based media server
implementation". Or, in ASCII-diagram form, gnome_play_sound can play a
sound where the $AUDIO variable identifies a KDE host as follows:
+------------------+
| gnome_play_sound |
+-----+------------+
      |
      | (Bonobo)
      v
+-----+------+          +-----------+
|  GnoStream |          |  KStream  |
+-----+------+          +-----+-----+
      |                       |
      |                       |
      v                       |
+-----+-----+           +-----+-----+
| gstreamer |           | gstreamer +---> Sound card
+-----------+           +-----------+
      |                       |
      .                       .
      .                       .
===================== network =====================
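In the same spirit, gnome_play_sound itself could be little more than the
following. This builds on the hypothetical media_server.h sketched above,
so everything here is again illustrative:

/* Hypothetical gnome_play_sound() on top of the media_server.h sketch
 * above.  $AUDIO names the target host, X-display style; whether
 * GnoStream or KStream answers there is invisible to the caller. */
#include <stdio.h>
#include <stdlib.h>
#include "media_server.h"   /* the hypothetical interface above */

int gnome_play_sound (const char *uri)
{
  const char  *host = getenv ("AUDIO");   /* e.g. "kdehost" */
  MediaServer *ms;

  if (host == NULL)
    host = "localhost";

  ms = media_server_connect (host);
  if (ms == NULL) {
    fprintf (stderr, "no media server on %s\n", host);
    return -1;
  }

  /* The server-side gstreamer graph does the fetching, the decoding
   * and, where necessary, the hop across the network to the card. */
  media_server_play_file (ms, uri);
  media_server_close (ms);
  return 0;
}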
Then I think the requirements are:
1) GnoStream/gstreamer implements legacy interfaces (e.g. esd).
2) gstreamer can be fully distributed, i.e. it can not just send streams
back and forth but is actually capable of scheduling dispersed gstreamer
servlets (a minimal form of this is sketched after this list). This will
be a fairly hefty undertaking and will require auditing to be accepted.
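The "send streams back and forth" baseline of requirement 2 could look
like the sketch below: one logical pipeline with its halves on different
hosts. It assumes tcp source/sink elements and a parse-launch style API
along today's lines, so again the names are illustrative; actually
scheduling dispersed servlets is a much bigger job than this:

/* Sending half, run on the GNOME host: serve the file's bytes to a
 * peer over TCP.  The receiving half, run on the KDE host, would be:
 *   tcpclientsrc host=gnomehost port=5000 ! wavparse
 *     ! audioconvert ! autoaudiosink
 * Element names are assumptions for illustration. */
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GstElement *pipe;
  GstBus     *bus;
  GstMessage *msg;
  GError     *err = NULL;

  gst_init (&argc, &argv);

  pipe = gst_parse_launch (
      "filesrc location=/usr/share/sounds/beep.wav "
      "! tcpserversink port=5000", &err);
  if (pipe == NULL) {
    g_printerr ("pipeline error: %s\n", err->message);
    g_clear_error (&err);
    return 1;
  }

  gst_element_set_state (pipe, GST_STATE_PLAYING);

  /* Wait until the whole file has been served or an error occurs. */
  bus = gst_element_get_bus (pipe);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_element_set_state (pipe, GST_STATE_NULL);
  gst_object_unref (pipe);
  return 0;
}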
What this does mean, however, is that all environments wishing to play ball
have to implement a gstreamer-based server (artsd could achieve this by
implementing a gstreamer plugin for aRts), which will of course be
unpalatable for others (e.g. KDE).
Cheers,
Owen
---------------------------------------------------------------------------
Owen Fraser-Green                        "Hard work never killed anyone,
owen at discobabe.net                      but why give it a chance?"
---------------------------------------------------------------------------