[Spice-devel] RFC - Make sound receiving and decoding independent from the gtk widget
Marc-André Lureau
mlureau at redhat.com
Tue Jun 24 08:06:03 PDT 2014
----- Original Message -----
> Hi,
>
> I am just trying to implement sound in my SDL client. This client uses
> spice-glib but not the gtk library (as gtk is horribly slow on OSX).
Yeah, it would be nice if somebody could make the drawing updates faster.
> Here I noticed a design break in spice. Every channel is more or less
> implemented in two parts:
>
> 1.) Spice Protocol decoding is implemented in the spice-glib
> 2.) Spice Data presentation and grabbing is implemented in spice-gtk
Yes, the spice-gtk library provides the GUI and the integration with gtk.
Similarly, other "frontend" libraries can be implemented for other toolkits
(somebody worked on a Qt one, for example).
> However, if I only need spice-glib, it works like a charm until I try
> to implement sound. Sound decoding is impossible because the function
> spice_audio_new() in spice-audio.c does not return a valid pointer if
> there is no backend. But why is the sound backend implemented
> (partially) in the glib library, and why does it depend on a sound
> presentation library?
Because spice-glib is everything except gtk integration atm.
Well, SpiceAudio is just a helper: it connects the audio channels to
an audio backend. You can use the public spice-glib API to implement the
same functionality; it doesn't have to live in spice-glib.
>
> Here is a draft that works for me when I want to use sound in my client
> even if the gtk library does not support sound:
>
> /***** spice-audio.c *****/
> [...]
> SpiceAudio *spice_audio_new(SpiceSession *session, GMainContext *context,
>                             const char *name)
> {
>     SpiceAudio *self = NULL;
>
>     if (context == NULL)
>         context = g_main_context_default();
>     if (name == NULL)
>         name = g_get_application_name();
>
> #if defined(WITH_PULSE)
>     /* use the pulse backend, as the gtk library does */
>     self = SPICE_AUDIO(spice_pulse_new(session, context, name));
> #elif defined(WITH_GSTAUDIO)
>     /* use the gstreamer backend, as the gtk library does */
>     self = SPICE_AUDIO(spice_gstaudio_new(session, context, name));
> #else
>     /* no specific backend: return a dummy object so that the glib
>      * functionality from <channel-playback.h> remains usable */
>     self = SPICE_AUDIO(g_object_new(SPICE_TYPE_FAKE_AUDIO, NULL));
> #endif
>     if (!self)
>         return NULL;
>
>     spice_g_signal_connect_object(session, "notify::enable-audio",
>                                   G_CALLBACK(session_enable_audio), self, 0);
>     spice_g_signal_connect_object(session, "channel-new",
>                                   G_CALLBACK(channel_new), self, 0);
>     update_audio_channels(self, session);
>
>     return self;
> }
> [...]
>
> SPICE_TYPE_FAKE_AUDIO is just a dummy object class to have a valid
> object 'self'.
I don't understand the benefit of having that "fake" object. Why do
you need it?
> Is it possible to have an upstream version of spice that does not depend
> on gstreamer or pulse but that is able to register callbacks for sound
> playback?
You can disable backends --with-audio=none, and you can implement your
own audio backend by connecting to audio channels events using the
public spice-glib API.
cheers