[Fwd: Re: [gst-devel] Midi and GStreamer]

nick nixx at nixx.org.uk
Wed Jul 16 15:26:16 CEST 2003


On Wed, 2003-07-16 at 21:26, Christian Fredrik Kalager Schaller wrote:
> Hi Nick,
> Are you on gst-devel? Quite a few mails have gone back and forth on the
> list now, but I am worried you won't see them.

I am now ;-)

> Christian
> 
> ______________________________________________________________________
> 
> From: Steve Baker <steve at stevebaker.org>
> To: gst-devel <gstreamer-devel at lists.sourceforge.net>
> Subject: Re: [gst-devel] Midi and GStreamer
> Date: 16 Jul 2003 17:52:07 +1200
> 
> On Tue, 2003-07-15 at 07:56, Leif Johnson wrote:
> > Hi all -
> > 
> > It seems like GStreamer could benefit greatly from a different subclass of
> > GstPad, something like GstControlPad. Pads of this type could contain
> > control data like parameters for oscillators/filters, MIDI events, text
> > information for subtitles, etc. The defining characteristic of this type of
> > data is that it operates at a much lower sample rate than the multimedia
> > data that GStreamer currently handles.
> 
> I think that control data can be sent down existing pads without making
> any changes.
> 
> > GstControlPad instances could also contain a default value like Wingo has
> > been pondering, so apps wouldn't need to connect actual data to the pads if
> > the default value sufficed. There could also be some sweet integration with
> > dparams, it seems like.
> 
> If you want a default value on a control pad, just make the source
> element send the value when the state changes.
> 
> > Elements that have control pads could also have standard GstPads, and I'd
> > imagine there would need to be some scheduler modifications to enable the
> > lower processing demands of control pads.
> > 
> > Unfortunately, as is probably obvious, I don't know enough of the GStreamer
> > core to tell if this is a good idea or not, but I'd really appreciate
> > comments. This would be cool if it worked out.
> 
> It was always my intention for dparams to be able to send values to and
> get values from pads. All we need is some simple elements to do the
> forwarding.
> 
> And now, on to the comments about MIDI:
> > On Wed, 09 Jul 2003, nick wrote:
> > 
> > > Hi All
> > > 
> > > The thing I am thinking about is how a GStreamer plugin would handle
> > > MIDI and audio at the same time... In my mind, this requires the MIDI
> > > and audio buffers to be processed on a 1-to-1 basis (so one buffer of
> > > audio and one buffer of MIDI cover the same duration of time)... Does
> > > what I'm saying make sense to you?
> 
> All buffers are timestamped and MIDI buffers should be no exception.  A
> buffer with MIDI data will have a timestamp which says exactly when the
> data should be played. In some cases this would mean a buffer contains
> just a couple of bytes (e.g. a note-on). So be it - if this turns out to be
> inefficient we can deal with that later.
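
To make that concrete: conceptually such a buffer is nothing more than a
timestamp saying when to play it, plus a couple of raw MIDI bytes. A toy
illustration (plain C, not actual GStreamer structures):

    /* Toy illustration only - not GStreamer API. A "MIDI buffer" boils down
     * to a timestamp saying when the bytes should be played, plus the bytes. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
      uint64_t timestamp_ns;   /* when to play, in nanoseconds from stream start */
      uint8_t  data[3];        /* the raw MIDI message */
      size_t   size;           /* how many of those bytes are used */
    } ToyMidiBuffer;

    int main (void)
    {
      /* a note-on (channel 1, middle C, velocity 100) half a second in */
      ToyMidiBuffer note_on = { 500000000ULL, { 0x90, 60, 100 }, 3 };

      printf ("%zu-byte buffer stamped at %llu ns\n",
              note_on.size, (unsigned long long) note_on.timestamp_ns);
      return 0;
    }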
>  
> 
> > > (For me, I would want to be able to write amSynth as a plugin - this
> > > would require that when my process function is called, I have a midi
> > > buffer as input, containing how ever many midi events occurred in, say,
> > > 1/100 sec for example, and then I generate an audio buffer of the same
> > > time duration...)
> > > 
> > > Any ideas? Maybe this will indicate the kind of problems to be faced.
> 
> GStreamer has solved this problem for audio/video syncing, so you should
> probably do it the same way.

I admit I still have to delve into the guts of GStreamer as it stands...

> The first task would be to make this pipeline work:
> filesrc ! amSynth ! osssink
> 
> An amSynth element should be a loop element. It would read MIDI buffers
> until it has more than enough to produce audio for the duration of 1
> audio buffer. It knows it has enough MIDI buffers by looking at the
> timestamp.  Because amSynth is setting the timestamps on the audio
> buffers going out, osssink knows when to play them.

I'm not sure - doesn't this mean we could wait indefinitely for a MIDI
message that will never arrive, and risk not generating the audio in
time?
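
To make that worry concrete, here is a toy version of the
accumulate-then-render idea (plain C; every name and number is
illustrative, none of this is GStreamer code): events are consumed by
timestamp until one falls past the end of the current audio block, then
the block is rendered and stamped. With a live source, the inner while
loop is exactly the spot that could stall if no later-stamped MIDI ever
shows up.

    /* Toy model of "read MIDI until we have enough for one audio buffer". */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct { uint64_t ts_ns; uint8_t status, data1, data2; } Event;

    #define BLOCK_NS 10000000ULL            /* pretend one audio buffer covers 10 ms */

    int main (void)
    {
      /* incoming MIDI, already timestamped by the source element */
      Event in[] = {
        {  1000000ULL, 0x90, 60, 100 },     /* note-on  at  1 ms   */
        {  2500000ULL, 0x90, 64,  90 },     /* note-on  at  2.5 ms */
        { 12000000ULL, 0x80, 60,   0 },     /* note-off at 12 ms   */
        { 31000000ULL, 0x90, 67, 110 },     /* note-on  at 31 ms   */
      };
      size_t n = sizeof in / sizeof in[0], next = 0;

      for (uint64_t start = 0; start < 40000000ULL; start += BLOCK_NS) {
        uint64_t end = start + BLOCK_NS;

        /* collect every event stamped inside this block; with a live source
         * this is the loop that would block waiting for an event past 'end' */
        while (next < n && in[next].ts_ns < end) {
          printf ("  event %02x at %llu ns belongs to this block\n",
                  (unsigned) in[next].status, (unsigned long long) in[next].ts_ns);
          next++;
        }

        /* here the synth would render BLOCK_NS worth of audio from those
         * events and push it out stamped with 'start', so the audio sink
         * knows exactly when to play it */
        printf ("render audio block [%llu, %llu) ns\n",
                (unsigned long long) start, (unsigned long long) end);
      }
      return 0;
    }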

> Once this is working, a more challenging pipeline might be:
> alsamidisrc ! amSynth ! alsasink
> 
> This would be a real-time pipeline - any MIDI input should instantly be
> transformed into audio. You would have small audio buffers for low
> latency (64 samples seems to be typical). This is a problem for amSynth
> because it can't sit there waiting for more MIDI just in case there is
> more than one MIDI event per audio buffer. In this case you could
> either:
> - listen to the clock so you know when it's time to output the buffer
> - have some kind of real-time mode for amSynth which doesn't wait for
> MIDI events which may never come
> - have alsamidisrc produce empty timestamped MIDI buffers so that
> amSynth knows that it is time to spit out some audio.

The way I see this working (coming from my audio/MIDI programming
experience) is:

- wait until amSynth is required to generate another buffer of audio
- collect all waiting MIDI messages (from alsamidisrc)
- generate my audio data given those MIDI messages

This is based around a 'pull' structure for audio programming... is this
compatible with GStreamer? (There is a rough sketch of what I mean below.)

Otherwise I think a multi-threaded approach may be needed (one thread to
collect all MIDI messages, ready for the audio thread).
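
Either way, the audio side I have in mind looks roughly like this toy
(plain C; the 'pending' queue just stands in for whatever alsamidisrc or
a separate MIDI-collecting thread would fill, and none of this is
GStreamer or ALSA API): the audio callback drains whatever MIDI is
already waiting and then generates its block, without ever waiting for
more.

    /* Toy version of the pull-style loop described above. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_FRAMES 64                 /* small buffers for low latency */

    typedef struct { unsigned char status, data1, data2; } Event;

    static Event pending[32];               /* filled between callbacks */
    static int   n_pending = 0;

    static void audio_callback (float *out, int frames)
    {
      /* 1. collect all waiting MIDI messages - non-blocking, so audio is
       *    never held up by a message that may never arrive */
      for (int i = 0; i < n_pending; i++)
        printf ("  applying event %02x %d %d\n",
                (unsigned) pending[i].status, pending[i].data1, pending[i].data2);
      n_pending = 0;

      /* 2. generate exactly 'frames' samples from the synth's current
       *    state (amSynth's actual DSP would go here) */
      memset (out, 0, (size_t) frames * sizeof (float));
      printf ("rendered %d frames\n", frames);
    }

    int main (void)
    {
      float block[BLOCK_FRAMES];

      /* a note-on arrives before the first callback... */
      pending[n_pending++] = (Event) { 0x90, 60, 100 };
      audio_callback (block, BLOCK_FRAMES);

      /* ...and nothing before the second - audio still goes out on time */
      audio_callback (block, BLOCK_FRAMES);
      return 0;
    }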

> I hope this clarifies things a bit. amSynth sounds very cool ;)

:D

> cheers
-- 
nixx at nixx.org.uk          |     amSynth lead developer
JabberID: nixx at jabber.org |     http://amsynthe.sf.net




