[gst-devel] about base classes
in7y118 at public.uni-hamburg.de
Tue Mar 8 11:55:07 CET 2005
Some general ideas about base classes:
My idea of a base class is that it provides a number of required functions
that need to be implemented (and maybe some optional ones). Additionally
it provides a set of functions to access other needed properties.
Subclasses of such a base class should _never_ have the need to poke
inside the actual base class or its object. As an example, I'd consider an
audio sink base class that requires its subclasses to ever touch the GstPad
of the sink to be a broken design.
Given this as the basic constraint, base classes should be slowly built on
top of each other where possible. So if you have two different sets of
functionalities that can be separated, do it. (A general sink and an audio
sink class may or may not be such cases.)
Given this, an audio sink base class to me still looks like it would
provide these functions:
/* REQUIRED */
/* write given amount of data, return amount of bytes written */
guint (* write) (GstAudioSink *sink, const guint8 *data,
                 guint length);
/* OPTIONAL */
/* try to open device */
gboolean (* open) (GstAudioSink *sink);
/* close device */
void (* close) (GstAudioSink *sink);
/* get possibly supported formats from device */
GstCaps * (* get_formats) (GstAudioSink *sink);
/* try to set given format */
gboolean (* set_format) (GstAudioSink *sink, const GstCaps *caps);
/* query delay for clocking */
guint (* get_delay) (GstAudioSink *sink);
This looks like it would work for any case we have without any loss of
functionality, while making it as simple as possible to write a new
subclass for $soundsystem.
Or did I miss something?
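To make the shape of that interface concrete, here is a self-contained sketch of such a class struct with a trivial subclass. All names and types here are placeholders (the real class would use GstAudioSink, GstCaps and the GObject class machinery); only the vtable layout from the list above is illustrated.

```c
#include <stddef.h>

/* Placeholder sketch of the proposed audio sink vtable.  The real
 * thing would be a GObject class struct using GStreamer types. */
typedef struct AudioSink AudioSink;

typedef struct {
    /* REQUIRED: write `length` bytes, return bytes actually written */
    unsigned int (*write)       (AudioSink *sink, const unsigned char *data,
                                 unsigned int length);
    /* OPTIONAL: device lifecycle, format handling, clock delay query */
    int          (*open)        (AudioSink *sink);
    void         (*close)       (AudioSink *sink);
    const char * (*get_formats) (AudioSink *sink);
    int          (*set_format)  (AudioSink *sink, const char *format);
    unsigned int (*get_delay)   (AudioSink *sink);
} AudioSinkClass;

struct AudioSink {
    const AudioSinkClass *klass;
};

/* A trivial "null" subclass: implements only the required write vfunc */
static unsigned int
null_write (AudioSink *sink, const unsigned char *data, unsigned int length)
{
    (void) sink; (void) data;
    return length;              /* pretend everything was consumed */
}

static const AudioSinkClass null_sink_class =
    { null_write, NULL, NULL, NULL, NULL, NULL };

/* Dispatch helper the base class might use: a safe default takes over
 * when an optional vfunc is left unimplemented */
static unsigned int
audio_sink_get_delay (AudioSink *sink)
{
    return sink->klass->get_delay ? sink->klass->get_delay (sink) : 0;
}
```

The point of the sketch is that a new sink for $soundsystem only has to fill in `write`; every optional slot has a sane fallback in the base class.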
On Sat, 5 Mar 2005, Ronald S. Bultje wrote:
> Andy requested some thoughts on base classes after a discussion (..) on
> IRC. Here's a shot at starting such a discussion on the mailinglist (so
> everyone, even those not online on IRC, can read it).
> I'm going to discuss audio sink base classes here. Similar thoughts may
> apply to audio/video source base classes or video sink base classes. I'm
> intending to propose a base class that just works, without magic,
> required code or any buts or yets, for most common audio systems out
> there.
> 1) functions of an audio sink in 0.9
> * clock provider (since we have a count of processed samples).
> * clock accepter (we can work with another clock as master).
> * synchronization of streams against foreign clocks and non-perfect
> streams (cutting off samples from a buffer, dropping buffers, inserting
> silence).
> * writing actual data to whatever system we're abstracting (OSS, ALSA,
> esound, jack, polypaudio, artsd, ...).
> * negotiation of a format that the audio output supports.
> * opening/closing/setting up the system we're abstracting.
> * preroll.
> I may be missing stuff, but this should be pretty much what an audiosink
> does nowadays. The problem here is that it's too much, and a lot of code
> is shared between different implementations (e.g. preroll, a/v sync),
> which is why we proposed and agreed on using base classes.
> 2) Base class scope
> Here's where the rambling starts. :). What should an audio base class
> do? I've sat down to think about this, seeing what else is out there,
> how different systems handle this, and came up with this:
> 2a - general sink base class
> * preroll
> Why? Because it's the same for video and audio. Wim already
> implemented this in his -threaded branch.
> 2b - audio sink base class
> * a/v sync, clock providing, clock accepting.
> Why? Because those are simple systems that can easily be shared
> in a single base class without being specific towards one
> implementation. It provides a write virtual function through
> which it writes data to the specific implementation.
> 2c - audio sink implementation
> * format negotiation, backend setup.
> Because this cannot easily be generalized without generalizing
> too much, or we don't gain anything by generalizing it.
> 3) Example base class design
> I've written an audio base class example [1] which is actually
> format-agnostic. It acts based on timestamps and durations set on
> buffers (for rounding, see [2]). It will calculate differences of
> current-time and expected-time and insert silence or drop samples (both
> through virtual functions, see [3]) and write all resulting data through
> a write virtual function. There's default implementations for the first
> two virtual functions, which will keep sync, but may give a small hiccup
> on an asynchrony (see [4]). I consider that acceptable, given that
> implementing one virtual function can be one single line of code (again,
> see [1], it implements it for osssink).
> Since we keep track of durations, we can provide a clock. The fourth
> (and last) virtual function that implementations can implement is to get
> the 'delay', i.e. the amount of data that was written but not yet
> played. The default implementation returns 0. Through this, the provided
> clock can always present the exact time.
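The delay-based clock described above can be sketched in a few lines. This is a hypothetical helper with made-up names and units (samples in, nanoseconds out), not code from the patch: the reported time corresponds to what was actually played, i.e. samples written minus the device delay (which defaults to 0).

```c
#include <stdint.h>

/* Hypothetical sketch: the provided clock reports the time of the audio
 * actually played, i.e. samples handed to write() minus the samples the
 * get_delay vfunc says are still queued in the device (0 by default). */
static uint64_t
clock_time_ns (uint64_t samples_written, uint64_t delay_samples,
               uint32_t rate)
{
    uint64_t played = samples_written - delay_samples;
    return played * 1000000000ull / rate;   /* samples -> nanoseconds */
}
```

With a real `get_delay` implementation the clock is exact; with the default of 0 it simply runs a buffer's worth of audio early.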
> My patch-provided base class does not derive from basesink yet, because
> it didn't exist at the time when I wrote the patch, but it's intended to
> derive from it (which is why any sort of preroll handling is missing).
> I'll do that sometime later.
> The result of this base class is that if we are clock provider, we are
> always in sync. If we are clock receiver and all virtual functions are
> (correctly) implemented, we are also always in sync. If we are clock
> receiver and only the write virtual function is implemented, then we may
> be the duration of one buffer out-of-sync. I think this is an acceptable
> trade-off.
> 4) Implementations
> I've implemented osssink ([1]). The hard part (and largest part of the
> patch) is to refactor the osselement base class out of osssink while not
> losing any functionality (mixer/device handling shared with other oss
> elements). The actual implementation is tiny and simple. Esdsink would
> be less than an hour to implement using this base class, piece of cake.
> Alsasink would need the same refactoring as osssink; I'm willing to do
> the work required to get that going (since I myself use ALSA, in the
> end). Artsdsink or polypaudiosink would be dead easy, too. The nice
> thing is that even though the implementations are dead easy
> (essentially, they only need to implement one virtual function, the
> 'write' one), they will still keep perfect sync, because the base class
> does all the hard work there.
> I don't know how difficult jacksink would be. It probably wouldn't fit
> in this model very well, since jack wants to drive the pipeline rather
> than be driven.
> That's about all. I hope this explains an example design, I hope people
> have nice ideas on how to improve the above or comments on mistakes I've
> made. Alternative implementations are welcome, too.
> [1] http://ronald.bitfreak.net/priv/audiosink.patch
> [2] Now, this will indeed give rounding errors, but this will only
> become noticeable after a few weeks of use under normal conditions (and
> even require several hours (up to 24) of use before it shows in
> pro-audio circumstances). That's a design decision, it can be changed if
> there's a strong opposition. I can explain the math behind this if
> anyone cares.
> [3] OK, so if we're format-agnostic, then that means we don't know how
> to generate silent samples or how to cut off samples. Therefore, there's
> virtual functions for both. The cut-off always returns 0 (i.e. "don't
> cut off anything"), and the insert-silence always returns NULL (i.e. "no
> silence"). See [4] for the effects on a/v sync.
> [4] On an asynchrony, we will either drop samples or insert silence.
> If we drop samples, the default implementation will drop zero samples.
> This means that even though the clock advances only half a buffer, we
> play the full buffer (to keep sync). If we are the clock provider, this
> means that other streams waiting for us will halt for a short while
> because the clock-advance takes longer than realtime. I.e., there will
> be a small 'halt' in playback. If we are clock receiver, then we'll be
> out-of-sync until we drop a full buffer (actually, I may not have
> implemented the drop-full-buffer part yet; FIXME). If we insert silence,
> the default implementation will do nothing. This means that the clock
> advances, but in no-time (since we play nothing), so other streams (if
> we're clock provider) may skip a frame here. If we're clock receiver, we
> could g_usleep() here, but that's currently unimplemented (i.e.: FIXME).
> Obviously, if the virtual functions are implemented, all of this does
> not apply and it just works.
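The two default implementations described above amount to "do nothing" stubs. A sketch with hypothetical names, purely to pin down the described behavior:

```c
#include <stddef.h>

/* Hypothetical sketches of the base class defaults described above:
 * drop nothing and insert no silence.  Sync is kept either way; the
 * cost is the small hiccup the text describes. */
static unsigned int
default_cut_off (const unsigned char *data, unsigned int length)
{
    (void) data; (void) length;
    return 0;           /* "don't cut off anything": play the full buffer */
}

static unsigned char *
default_insert_silence (unsigned int samples)
{
    (void) samples;     /* "no silence": the clock advances in no time */
    return NULL;
}
```

A real subclass overrides these with format-aware versions (e.g. generating zero-valued samples for its negotiated format), which removes the hiccup entirely.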
> Ronald S. Bultje <rbultje at ronald.bitfreak.net>
> gstreamer-devel mailing list
> gstreamer-devel at lists.sourceforge.net
More information about the gstreamer-devel mailing list