[gst-devel] "bounties"/...
Andy Wingo
wingo at pobox.com
Sat Jul 17 01:55:13 CEST 2004
Hey Josh (and Jonathan),
I really wonder what you want to do with MIDI. Do you want to write
elements that deal with MIDI and alter MIDI events, or do you want to
turn MIDI events into sound via synthesis? The "bounty" in this case
isn't clear at all, but to be fair, neither is the idea of integrating
MIDI into gstreamer.
In the first case, I would go by the timestamped-events model, or the
filled-buffer model, but to be honest I don't care ;)
I eventually plan on supporting MIDI in soundscrape[0], although it will
be along the lines of the Voicer unit generator in SuperCollider (check
the docs on www.audiosynth.com for details on that). What I think that
means is that I will have a pipeline like voicer ! audiosink, where the
voicer is actually a bin that contains a mixer, and when an event comes
in, it plugs a new element chain into the mixer.
This is effectively the "spawn" idea from SuperCollider. I have the
spawner implemented in soundscrape, and it works quite well.
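To make that shape concrete, here is a minimal sketch of what a voicer
bin might do on a note-on. This is not soundscrape's actual code: the
element names ("audiotestsrc", "volume", an "adder"-style mixer) and the
helper name are my own stand-ins, and it's written against a
present-day GStreamer C API rather than anything version-specific.

    #include <gst/gst.h>

    /* Sketch: when a note-on arrives, build a small synthesis chain
     * and plug it into the voicer's internal mixer.  The element
     * names are stand-ins for whatever the real voicer would use. */
    static void
    voicer_note_on (GstBin *voicer, GstElement *mixer, gdouble freq)
    {
      GstElement *osc = gst_element_factory_make ("audiotestsrc", NULL);
      GstElement *env = gst_element_factory_make ("volume", NULL);

      g_object_set (osc, "freq", freq, NULL);

      gst_bin_add_many (voicer, osc, env, NULL);
      gst_element_link (osc, env);

      /* An adder-style mixer exposes request sink pads, so linking
       * the new chain to it sums it with the running voices. */
      gst_element_link (env, mixer);

      gst_element_sync_state_with_parent (osc);
      gst_element_sync_state_with_parent (env);
    }

Tearing the chain back out when the voice finishes is the fiddly part,
but that's the general idea.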
I just wanted to let you know what I was thinking.
[0] http://ambient.2y.net/soundscrape/ -- the web site is out of date,
but the arch archive lives :-)
> - Should we use the GStreamer devel email list for discussion of this
> topic?
There are smart kids on this list, and I for one would be interested in
hearing your designs.
> - Defining GstMidiEvent is step one and getting it encapsulated in
> GstData.
In my studies of the Oshiwambo language, I came across the proverb,
"londa omukwa noongaku." Literally it means "climb a baobab with shoes",
but its figurative meaning is that you will experience problems with
this way of doing things. (In the case of the baobab, you'll fall.)
We've never, to my knowledge, had a pipeline that _only_ passes events.
This will be tricky. I'm not saying that it's a bad idea, only that
passing GstBuffers whose data you can cast to GstMidiEvent would give
you fewer problems.
(Certainly your solution is cleaner.)
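For illustration, the buffer route could look something like this.
GstMidiEvent here is a made-up struct, not a proposal for the actual
layout, and the buffer calls are the ordinary 0.x-era ones:

    #include <string.h>
    #include <gst/gst.h>

    /* Hypothetical MIDI event layout; the struct and its fields are
     * illustrative only. */
    typedef struct {
      guint8 status;  /* e.g. 0x90: note on, channel 0 */
      guint8 data1;   /* note number */
      guint8 data2;   /* velocity */
    } GstMidiEvent;

    static GstBuffer *
    midi_event_to_buffer (const GstMidiEvent *ev, GstClockTime ts)
    {
      GstBuffer *buf = gst_buffer_new_and_alloc (sizeof (*ev));

      /* Downstream elements cast the buffer data back to a
       * GstMidiEvent; the timestamp says when the event applies. */
      memcpy (GST_BUFFER_DATA (buf), ev, sizeof (*ev));
      GST_BUFFER_TIMESTAMP (buf) = ts;

      return buf;
    }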
> - GStreamer MIDI could be modeled after the ALSA sequencer, since it's
> pretty sweet.
Yes, it's pretty sweet ;)
(Which leads to the question, why duplicate it if it's so nice? Why pass
MIDI data in a pipeline if the ALSA sequencer can do it much more
generally?)
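To give a sense of what the ALSA sequencer side looks like, here is a
bare-bones client that just waits for events and prints note-ons; error
handling is omitted and the client/port names are arbitrary:

    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int
    main (void)
    {
      snd_seq_t *seq;

      snd_seq_open (&seq, "default", SND_SEQ_OPEN_INPUT, 0);
      snd_seq_set_client_name (seq, "midi-dump");
      snd_seq_create_simple_port (seq, "in",
          SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
          SND_SEQ_PORT_TYPE_APPLICATION);

      for (;;) {
        snd_seq_event_t *ev;

        /* Blocks until another client delivers an event. */
        snd_seq_event_input (seq, &ev);
        if (ev->type == SND_SEQ_EVENT_NOTEON)
          printf ("note on: key %d vel %d\n",
                  ev->data.note.note, ev->data.note.velocity);
      }

      return 0;
    }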
> - The MIDI proposition document you gave me the link for above mentions
> that blank events need to be sent at regular intervals, this seems
> hackish. A proposal was in there for control type pads, has this been
> created yet?
Soundscrape defines a special kind of pad that has a rate: audio or
control.
Src rate      | Sink rate | Behaviour
--------------+-----------+-----------
Audio         | Audio     | Pass audio GstBuffer
Control       | Audio     | Pass buffer linearly interpolated between
              |           | successive values
Audio         | Control   | Hackishly take the first sample (rare in
              |           | practice)
Control       | Control   | Set the 'control property on the sink
[unconnected] | Control   | Return the 'value property of the sink
[unconnected] | Audio     | Return a buffer with only 'value
As you see from the unconnected case, it effectively allows a third
value, set by an object property _on the pad_.
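The control-to-audio row above is really just per-buffer linear
interpolation. In plain C, assuming float samples (a sketch, not
soundscrape's actual code), it amounts to:

    /* Fill one audio-rate buffer by ramping linearly from the previous
     * control value to the newly received one. */
    static void
    control_to_audio (float *samples, unsigned n_samples,
                      float prev_value, float new_value)
    {
      unsigned i;

      for (i = 0; i < n_samples; i++) {
        float t = (float) i / (float) n_samples;
        samples[i] = prev_value + t * (new_value - prev_value);
      }
    }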
> - Some thought needs to be put into the idea of immediate MIDI events
> versus queued (emitted at a specific time) events.
Indeed.
> It seems like this is not much code initially and it will help get things
> started. I'll get myself up to speed on current GStreamer CVS and API so I
> can be of more help in actually coding on this. Cheers.
Let me know if you have a question. I have my own project I'm sinking my
time into (and we should collaborate when things are stable), but I do
know gstreamer and I'd like to get more "pro" audio people looking at
it. So keep me informed.
Cheers, and good luck,
--
Andy Wingo <wingo at pobox.com>
http://ambient.2y.net/wingo/