[gst-devel] Midi and GStreamer
steve at stevebaker.org
Tue Jul 15 22:54:01 CEST 2003
On Tue, 2003-07-15 at 07:56, Leif Johnson wrote:
> Hi all -
> It seems like GStreamer could benefit greatly from a different subclass of
> GstPad, something like GstControlPad. Pads of this type could contain
> control data like parameters for oscillators/filters, MIDI events, text
> information for subtitles, etc. The defining characteristic of this type of
> data is that it operates at a much lower sample rate than the multimedia
> data that GStreamer currently handles.
I think that control data can be sent down existing pads without making
a new subclass of GstPad.
> GstControlPad instances could also contain a default value like Wingo has
> been pondering, so apps wouldn't need to connect actual data to the pads if
> the default value sufficed. There could also be some sweet integration with
> dparams, it seems like.
If you want a default value on a control pad, just make the source
element send the value when the state changes.
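To illustrate the idea, here is a minimal Python sketch (not the real GStreamer API; `ControlSource` and its methods are hypothetical) of a control source that pushes its default value once when it starts playing, then only pushes again when the value changes:

```python
# Hypothetical sketch, not GStreamer code: a control source that emits
# its default value on the transition to PLAYING.
class ControlSource:
    def __init__(self, default):
        self.value = default
        self.sent = []          # stands in for buffers pushed on a pad

    def set_state_playing(self):
        # send the default as soon as the element starts
        self.sent.append(self.value)

    def set_value(self, value):
        # only push when the control value actually changes
        if value != self.value:
            self.value = value
            self.sent.append(value)

src = ControlSource(default=0.5)
src.set_state_playing()
src.set_value(0.8)
```

Downstream elements then always see a value without the app having to connect anything.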
> Elements that have control pads could also have standard GstPads, and I'd
> imagine there would need to be some scheduler modifications to enable the
> lower processing demands of control pads.
> Unfortunately, as is probably obvious, I don't know enough of the GStreamer
> core to tell if this is a good idea or not, but I'd really appreciate
> comments. This would be cool if it worked out.
It was always my intention for dparams to be able to send values to and
get values from pads. All we need is some simple elements to do the
conversion between the two.
And now, on to the comments about MIDI:
> On Wed, 09 Jul 2003, nick wrote:
> > Hi All
> > The thing I am thinking about is how a gstreamer plugin would handle
> > MIDI and audio at the same time... In my mind, this requires the midi
> > and audio buffers to be processed on a 1-to-1 basis (so 1 buffer of
> > audio and 1 buffer of midi cover the same duration of time).. Does what
> > I'm saying make sense to you?
All buffers are timestamped and MIDI buffers should be no exception. A
buffer with MIDI data will have a timestamp which says exactly when the
data should be played. In some cases this would mean a buffer contains
just a couple of bytes (eg, note-on). So be it - if this turns out to be
inefficient we can deal with that later.
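As a sketch of what such a buffer might look like (the `MidiBuffer` type is hypothetical; real GstBuffers carry nanosecond timestamps in much the same way):

```python
from dataclasses import dataclass

@dataclass
class MidiBuffer:
    timestamp: int      # in nanoseconds, like a GstBuffer timestamp
    data: bytes         # raw MIDI bytes

# A note-on message is only three bytes, so a buffer may be tiny:
note_on = MidiBuffer(timestamp=1_000_000_000,
                     data=bytes([0x90, 60, 100]))  # note-on, middle C
```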
> > (For me, I would want to be able to write amSynth as a plugin - this
> > would require that when my process function is called, I have a midi
> > buffer as input, containing how ever many midi events occurred in, say,
> > 1/100 sec for example, and then I generate an audio buffer of the same
> > time duration...)
> > Any ideas? Maybe this will indicate the kind of problems to be faced.
GStreamer has solved this problem for audio/video syncing, so you should
probably do it the same way.
The first task would be to make this pipeline work:
filesrc ! amSynth ! osssink
An amSynth element should be a loop element. It would read MIDI buffers
until it has more than enough to produce audio for the duration of 1
audio buffer. It knows it has enough MIDI buffers by looking at the
timestamp. Because amSynth is setting the timestamps on the audio
buffers going out, osssink knows when to play them.
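The accumulation logic described above could be sketched like this (a Python simulation of the loop-element behaviour, not actual GStreamer code; timestamps are in nanoseconds):

```python
def pull_midi_for_block(midi_queue, block_start, block_dur):
    """Collect every MIDI buffer whose timestamp falls inside the audio
    block [block_start, block_start + block_dur). The element knows it
    has 'more than enough' MIDI as soon as it peeks a buffer timestamped
    at or past the end of the block, which stays queued for next time."""
    events = []
    while midi_queue and midi_queue[0][0] < block_start + block_dur:
        events.append(midi_queue.pop(0))
    return events

# Three (timestamp, data) buffers; one audio block covers 10 ms.
queue = [(2_000_000, b'\x90\x3c\x64'),    # note-on at 2 ms
         (8_000_000, b'\x80\x3c\x40'),    # note-off at 8 ms
         (12_000_000, b'\x90\x3e\x64')]   # note-on at 12 ms, next block
block = pull_midi_for_block(queue, block_start=0, block_dur=10_000_000)
```

Here the first two events land in the current block and the third waits in the queue; the element would then synthesize 10 ms of audio, stamp it with `block_start`, and push it downstream.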
Once this is working, a more challenging pipeline might be:
alsamidisrc ! amSynth ! alsasink
This would be a real-time pipeline - any MIDI input should instantly be
transformed into audio. You would have small audio buffers for low
latency (64 samples seems to be typical). This is a problem for amSynth
because it can't sit there waiting for more MIDI just in case there is
more than one MIDI event per audio buffer. In this case you could
- listen to the clock so you know when it's time to output the buffer
- have some kind of real-time mode for amSynth which doesn't wait for
MIDI events which may never come
- have alsamidisrc produce empty timestamped MIDI buffers so that
amSynth knows that it is time to spit out some audio.
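The third option could be sketched as follows (a hypothetical simulation, not the alsamidisrc API): on every tick the source pushes whatever events it has, and if there are none it pushes an empty timestamped buffer so the synth downstream knows nothing more is coming for this block and can emit audio immediately.

```python
def source_tick(pending_events, now):
    """Push all pending MIDI events stamped with the current time; if
    there are none, push one empty buffer carrying only the timestamp."""
    if pending_events:
        return [(now, data) for data in pending_events]
    return [(now, b'')]    # empty heartbeat buffer: timestamp, no data
```

With this scheme the synth never blocks waiting for MIDI that may never come, at the cost of a little extra buffer traffic.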
I hope this clarifies things a bit. amSynth sounds very cool ;)
Steve Baker <steve at stevebaker.org>