[gst-devel] gst-controller and parameter change rate

Stefan Kost ensonic at hora-obscura.de
Sat May 5 20:14:30 CEST 2007


hi,

this issue has plagued me for quite a while. I'll try to describe the facts as
best I can. Sorry for the long email, and thanks for reading :)

In buzztard I extensively use gst-controller to dynamically update element
properties during playback. This works quite well, but only because of one
'hack': the elements have to actively pull the property changes. For
interpolated value changes (e.g. linear fades) this means they should call
gst_object_sync_values at small intervals (e.g. every 10 samples). For trigger
parameters (a note-on that starts an envelope) gst_object_sync_values needs to
be called at the exact time. In a music application all parameter changes are
quantized. The quantization is based on the tempo, the measure and a subtick
rate (the higher the latter, the smoother the fades are).

As all gst-elements that support the gst-controller call gst_object_sync_values
before processing a buffer, I need to make sure that buffers only span one
quantum of the parameter-change grid. In gst-buzztard I've added a tempo
interface [1] which basically adds three iface properties: beats-per-minute,
ticks-per-beat and subticks-per-tick. All my source elements implement this and
receive the tempo information this way. Based on that, the sources calculate
the optimal buffer size:

  samples_per_buffer = (samplerate * 60.0) / (gdouble) (beats_per_minute * ticks_per_beat);
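
For example (numbers just for illustration): at samplerate = 44100,
beats_per_minute = 125 and ticks_per_beat = 4 this gives
(44100 * 60) / (125 * 4) = 5292 samples per buffer, i.e. one buffer per tick.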
For each basesrc::create call they send a buffer that spans this size. Now
there is one problem: they ignore the 'length' parameter of the create call. I
totally overlooked this; even audiotestsrc does that, and it seems to sometimes
cause noise in the audio output.
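
To make this concrete, here is a minimal sketch of such a tempo-aware create
function. I'm writing it against today's GStreamer 1.x API for readability; the
MySrc type and its cached tempo fields are hypothetical, and a real
implementation would of course also have to honor the requested length:

  #include <gst/gst.h>
  #include <gst/base/gstbasesrc.h>

  /* hypothetical element; only the fields used below are shown */
  typedef struct {
    GstBaseSrc parent;
    gint samplerate, channels;     /* e.g. 44100, 2 */
    gdouble beats_per_minute;      /* cached from the tempo iface */
    gdouble ticks_per_beat;
    guint64 next_sample;           /* running sample counter */
  } MySrc;

  static GstFlowReturn
  my_src_create (GstBaseSrc * basesrc, guint64 offset, guint length,
      GstBuffer ** buffer)
  {
    MySrc *src = (MySrc *) basesrc;
    guint samples_per_buffer = (guint)
        ((src->samplerate * 60.0) /
         (src->beats_per_minute * src->ticks_per_beat));
    gsize size = samples_per_buffer * sizeof (gint16) * src->channels;
    GstBuffer *buf = gst_buffer_new_allocate (NULL, size, NULL);

    /* timestamp the buffer so that it spans exactly one tick */
    GST_BUFFER_PTS (buf) =
        gst_util_uint64_scale (src->next_sample, GST_SECOND, src->samplerate);
    GST_BUFFER_DURATION (buf) =
        gst_util_uint64_scale (samples_per_buffer, GST_SECOND,
        src->samplerate);

    /* pull the pending controlled-property changes for this tick */
    gst_object_sync_values (GST_OBJECT (src), GST_BUFFER_PTS (buf));

    /* ... map the buffer and synthesize samples_per_buffer frames ... */

    src->next_sample += samples_per_buffer;
    *buffer = buf;
    return GST_FLOW_OK;
  }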

Basetransform-inherited elements automatically benefit from the buffer size
chosen by the sources. They only need the tempo iface if they want to base some
parameters on the tempo; e.g. for an audiodelay (echo) it's quite nice to be
able to sync the echoes with the tempo.

Using an interface for these three parameters seems to be overkill, so I
thought about using tags instead. A BPM tag is already in -base, as this can
also be written to vorbiscomments or ID3 tags. Unfortunately tags are not that
flexible: e.g. if one adds more elements while playing, the application needs
to resend the tempo tags.
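
For illustration, announcing the tempo as a tag would look roughly like this
(again 1.x-style API; send_bpm_tag is just a made-up helper name,
GST_TAG_BEATS_PER_MINUTE comes from the tag library in -base):

  #include <gst/gst.h>
  #include <gst/tag/tag.h>

  /* hypothetical helper: announce the tempo downstream as a tag event */
  static void
  send_bpm_tag (GstElement * src, gdouble bpm)
  {
    GstTagList *tags = gst_tag_list_new (GST_TAG_BEATS_PER_MINUTE, bpm, NULL);

    /* the event takes ownership of the tag list */
    gst_element_send_event (src, gst_event_new_tag (tags));
  }

The drawback from above applies: an element linked in later never sees this
event unless the application resends it.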

Music apps usually support changing the tempo dynamically (e.g. slowing down
the song at the end). This would work with the tempo iface, but not with the
tags. On the other hand, I should probably use the playback rate of the segment
for that (still, it's hard to do a smooth playback-rate change).
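
A rate change via the segment could be done with a rate-only seek; sketched
here with the modern 1.x API (the non-flushing INSTANT_RATE_CHANGE flag only
appeared much later, in 1.18):

  #include <gst/gst.h>

  /* sketch: change the playback rate without flushing or repositioning */
  static gboolean
  set_playback_rate (GstElement * pipeline, gdouble rate)
  {
    return gst_element_seek (pipeline, rate, GST_FORMAT_TIME,
        GST_SEEK_FLAG_INSTANT_RATE_CHANGE,
        GST_SEEK_TYPE_NONE, -1, GST_SEEK_TYPE_NONE, -1);
  }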

Finally, if one uses jacksink, the sink could use the tempo information to
correctly sync other JACK clients.

That's it for the background. Now I'd like to get feedback to decide about
these issues:

= buffer-length fitting:
Should the audio base classes do the buffer-size fitting? If we tell them in
one way or another about the tempo, they could call the subclass's create with
right-sized chunks. The base classes could do this in an intelligent way:
* don't subdivide buffers if no parameter is attached to a GstController
* subdivide using subticks-per-tick only if interpolation is used on a parameter
* use sub-buffers if possible
To ease the implementation in the base classes we could add one function to
GstController:
  next_ts = gst_controller_suggest_next_sync (cur_ts);
as described in [2] and sketched below.
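
A rough sketch of how a base class loop could use the proposed function
(hypothetical code: gst_controller_suggest_next_sync doesn't exist yet, and I'm
again borrowing the 1.x-style gst_object_sync_values for readability):

  #include <gst/gst.h>
  #include <gst/controller/gstcontroller.h>

  /* sketch: process [start, start + duration) in chunks that never span a
   * controller sync point; assumes the suggested next_ts is always > ts */
  static void
  process_in_sync_chunks (GstElement * element, GstController * ctrl,
      GstClockTime start, GstClockTime duration)
  {
    GstClockTime ts = start, end = start + duration;

    while (ts < end) {
      /* proposed call: when is the next scheduled parameter change? */
      GstClockTime next_ts = gst_controller_suggest_next_sync (ctrl, ts);
      GstClockTime chunk_end = MIN (next_ts, end);

      /* apply the parameter values valid at ts ... */
      gst_object_sync_values (GST_OBJECT (element), ts);
      /* ... and process a sub-buffer covering [ts, chunk_end) */
      ts = chunk_end;
    }
  }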
If all this sounds good, should we have a GstBaseAudioSource and
GstBaseAudioTransform because of that, or just a variant of
gst_base_src_get_range() in GstBaseSrc (and the same in GstBaseTransform)?

= how to supply the tempo information:
Interface or tags?


I tend towards new base classes in gst-plugins-base/gst-libs/audio for source,
transform and filter, plus a tempo interface.

Awaiting your thoughts,
  Stefan

[1] http://buzztard.cvs.sourceforge.net/buzztard/gst-buzztard/src/tempo/
[2] http://www.buzztard.org/index.php/Dynamic_parameters#Trigger



