[gst-devel] Re: [linux-audio-dev] Toward a modularization of audio component

Erik Walthinsen omega at temple-baptist.com
Fri May 4 21:09:52 CEST 2001


On Fri, 4 May 2001, Paul Davis wrote:

> >If the application decides to use explicit pthreads (as opposed to *just*
> >cothreads), then you're gonna have latency problems, unless you have a
> >kernel that likes you (and even then....).
>
> why on earth are there *any* kinds of threads in the processing chain?

Cothreads are used because they have very low overhead, and allow
significant latitude and simplification.  In a time-critical pipeline
with a better scheduler (the current one has been deliberately simplified
to reduce the number of things that can go wrong at once while other parts
of the system change), the scheduler will not use cothreads except where
absolutely necessary.
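For those who haven't met cothreads before: they are cooperatively
scheduled user-space contexts, so a switch is essentially a register and
stack swap with no trip through the kernel.  GStreamer's cothreads are a
hand-rolled implementation tuned for this, but the idea can be sketched
with POSIX ucontext (illustrative only, not our actual code):

#include <stdio.h>
#include <ucontext.h>

static ucontext_t sched_ctx, elem_ctx;

/* An "element" that yields back to the scheduler between buffers. */
static void element_loop(void)
{
  int i;
  for (i = 0; i < 3; i++) {
    printf("element: processed buffer %d\n", i);
    swapcontext(&elem_ctx, &sched_ctx);   /* yield: user-space switch only */
  }
}

int main(void)
{
  static char stack[64 * 1024];
  int i;

  getcontext(&elem_ctx);
  elem_ctx.uc_stack.ss_sp = stack;
  elem_ctx.uc_stack.ss_size = sizeof(stack);
  elem_ctx.uc_link = &sched_ctx;          /* resume here if the element returns */
  makecontext(&elem_ctx, element_loop, 0);

  for (i = 0; i < 3; i++)
    swapcontext(&sched_ctx, &elem_ctx);   /* "schedule" the element */
  return 0;
}

Because the element's stack survives each switch, a loop-based element can
block in the middle of its processing loop waiting for data, which is what
makes the scheduling described below possible.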

Pthreads are made available because this is not just a real-time audio
framework.  The media player we have (which will automatically play any
media for which well-formed plugins exist) makes use of threads to
guarantee smoothness and the ability to tolerate system-level scheduling
interference.
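Concretely, a queue element is what bridges two threads in a pipeline: the
buffers it holds absorb scheduling hiccups on either side.  A rough sketch
(the element names and file path are placeholders, and the exact function
names may differ as the API evolves):

#include <gst/gst.h>

int main(int argc, char *argv[])
{
  GstElement *pipeline;

  gst_init(&argc, &argv);

  /* The queue introduces a thread boundary: the decoder upstream of it
   * and the audio output downstream of it each run in their own thread,
   * and the queued buffers absorb scheduling jitter from either side. */
  pipeline = gst_parse_launch(
      "filesrc location=song.ogg ! oggdemux ! vorbisdec ! "
      "audioconvert ! queue ! alsasink", NULL);

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  g_main_loop_run(g_main_loop_new(NULL, FALSE));  /* play until killed */
  return 0;
}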

> >Several recent changes make it very easy to construct a new 'scheduler'
> >that decides what order to run the elements in.  If you have a large mixer
> >pipeline with the same chain of elements for each of N channels, you then
> >have a decision to make, depending on whether you're more interested in
> >keeping the code or the data in cache.  If you're dealing with 64 samples
> >at a time with lots of effects, you want to run all the effects of the
> >same type at the same time, then go to the next one.
>
> How could you do that when the inputs to some of them may not have
> been computed yet?

The example is an intentionally simple one, where that does not occur:

-----                           --------
    |--- gain --- EQ --- pan ---|      |
    |--- gain --- EQ --- pan ---|      |   -----
alsa|--- gain --- EQ --- pan ---| mix  |---|
src |--- gain --- EQ --- pan ---|      |---|alsa
    |--- gain --- EQ --- pan ---|matrix|---|sink
    |--- gain --- EQ --- pan ---|      |---|
    |--- gain --- EQ --- pan ---|      |   -----
-----                           --------
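For reference, here is roughly how such a pipeline gets built in code.
This is a sketch only: audiotestsrc stands in for the per-channel capture,
and volume/equalizer-10bands/audiopanorama/audiomixer are stand-in names
for the gain/EQ/pan/mix stages (substitute whatever compatible elements
you have installed):

#include <gst/gst.h>

#define N_CHANNELS 4

int main(int argc, char *argv[])
{
  GstElement *pipeline, *mix, *sink;
  int i;

  gst_init(&argc, &argv);

  pipeline = gst_pipeline_new("mixer");
  mix  = gst_element_factory_make("audiomixer", NULL);
  sink = gst_element_factory_make("alsasink", NULL);
  gst_bin_add_many(GST_BIN(pipeline), mix, sink, NULL);
  gst_element_link(mix, sink);

  /* One gain -> EQ -> pan strip per channel, all feeding the mixer. */
  for (i = 0; i < N_CHANNELS; i++) {
    GstElement *src  = gst_element_factory_make("audiotestsrc", NULL);
    GstElement *gain = gst_element_factory_make("volume", NULL);
    GstElement *eq   = gst_element_factory_make("equalizer-10bands", NULL);
    GstElement *pan  = gst_element_factory_make("audiopanorama", NULL);

    gst_bin_add_many(GST_BIN(pipeline), src, gain, eq, pan, NULL);
    gst_element_link_many(src, gain, eq, pan, mix, NULL);
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  g_main_loop_run(g_main_loop_new(NULL, FALSE));
  return 0;
}

Since every strip is identical, a scheduler that knows the shape of the
graph can run all the gains back-to-back, then all the EQs, and so on,
keeping the code hot in the cache.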

If there are variations in the pipeline, this is exactly what cothreads
solve very elegantly:
                                   ----------
------        ---- EQ --- adder ---|        |   -----
alsa |-------/              ^      | matrix |---|alsa
src  |--- EQ --- delay --- tee ----|        |---|sink
------                             ----------   -----

In the above case, if the scheduler decides to run the upper channel
first, it gets to the adder and finds that it needs data from the other
channel.  It switches to the tee, which pulls from the delay, which pulls
from the EQ, which finally pulls from the alsa source.  The data then
works its way back, and satisfies the adder, then the matrix.
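The control flow is easy to mimic in plain C if you flatten the cothread
switches into ordinary function calls: every element exposes a pull
function, and pulling on the downstream end recursively walks back to the
source.  A toy version (the real thing switches stacks instead of
recursing, so a half-finished element can be suspended and resumed):

#include <stdio.h>

typedef struct _Element Element;
typedef float (*PullFunc)(Element *self);

struct _Element {
  PullFunc pull;            /* "give me one sample" */
  Element *peer_a, *peer_b; /* upstream connections, if any */
};

static float src_pull(Element *self)
{
  (void)self;
  return 1.0f;              /* pretend we read a sample from alsa */
}

static float eq_pull(Element *self)
{
  /* Needs input first, so pull on the upstream peer. */
  return self->peer_a->pull(self->peer_a) * 0.5f;  /* trivial "EQ" */
}

static float adder_pull(Element *self)
{
  /* Needs data from BOTH inputs; each pull recurses toward the source. */
  return self->peer_a->pull(self->peer_a)
       + self->peer_b->pull(self->peer_b);
}

int main(void)
{
  Element src   = { src_pull,   NULL, NULL };
  Element eq    = { eq_pull,    &src, NULL };
  Element adder = { adder_pull, &eq,  &src };

  printf("adder output: %f\n", adder.pull(&adder));  /* 0.5 + 1.0 = 1.5 */
  return 0;
}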

As the scheduler gets smarter (the entire architecture is designed around
making the scheduler as smart as possible over time), it will be able to
anticipate this, avoid the chain of switches back from the tee towards the
source, and jump there in one go.  There are also tricks the application
can play to make sure it gets the best schedule, depending on the behavior
of certain plugins.

> From what I've seen of GStreamer it will burn CPU cycles on supporting
> properties that you will rarely (if ever) be using, stealing them from
> FX processing etc.

A critical design decision made at the very beginning was that everything
would be highly specializable.  All element operations are function
pointers, including the data processing code.  If an element is
chain-based (its left-hand peer calls it directly on the stack), it can
trivially accomplish this by replacing the function pointer.  Therefore,
if some specific property is not in use, the element simply installs a
pointer to the version that doesn't worry about that property.  If at a
later date someone sets (via the g[tk]object set_arg system) the property
to something else, the element can immediately replace that function
pointer.  It does have to take care to avoid corrupting its live state
while replugging, but support mechanisms are being added now that will
help significantly with that.
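In code the pattern looks something like this (all names here are
illustrative, not the actual element API):

/* The element ships several chain functions and installs whichever one
 * matches its current properties. */

typedef struct _Gain Gain;
typedef void (*ChainFunc)(Gain *elem, float *buf, int n);

struct _Gain {
  ChainFunc chain;          /* the scheduler always calls through this */
  float     gain;
};

/* Fast path: gain is 1.0, so leave the buffer untouched. */
static void chain_passthrough(Gain *elem, float *buf, int n)
{
  (void)elem; (void)buf; (void)n;
}

/* General path: actually scale every sample. */
static void chain_scale(Gain *elem, float *buf, int n)
{
  int i;
  for (i = 0; i < n; i++)
    buf[i] *= elem->gain;
}

/* Called from the property-setting path (set_arg): swap the pointer so a
 * property that isn't in use costs nothing per buffer. */
static void gain_set_gain(Gain *elem, float gain)
{
  elem->gain  = gain;
  elem->chain = (gain == 1.0f) ? chain_passthrough : chain_scale;
}

int main(void)
{
  float buf[4] = { 1, 2, 3, 4 };
  Gain g;

  gain_set_gain(&g, 1.0f);  /* installs the passthrough */
  g.chain(&g, buf, 4);
  gain_set_gain(&g, 0.5f);  /* property changed: install the real code */
  g.chain(&g, buf, 4);
  return 0;
}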

> However, the great thing about these projects is that there are a
> multiplicity of approaches, and that can only be a good thing. I may
> sound rather dogmatic sometimes, but that's only because I don't know
> of other people who have actually written (linux) software that does
> what my stuff does. I hope I'm open to enlightenment, still.

My intent from the beginning has been to write the kind of software you're
writing, as well as provide a general multimedia framework.  I don't have
the broad sound production and computer music background you have, but I
believe I have taken the best of many architectures, and added a number of
new features, such that GStreamer is capable of a much wider range of
applications than any previous framework.

I'm willing to explain anything about the system you want, but I also
suggest that anyone who's interested simply take a look at the slides on
the website, the docs, and the example code.  Most of the capabilities of
the system are fairly well documented.

Also, the developers live in IRC on irc.openprojects.net, in #gstreamer,
if you have any other questions.

      Erik Walthinsen <omega at temple-baptist.com> - System Administrator
        __
       /  \                GStreamer - The only way to stream!
      |    | M E G A        ***** http://gstreamer.net/ *****
      _\  /_
