[gst-devel] Midi and GStreamer

Steve Baker steve at stevebaker.org
Thu Jul 17 00:28:01 CEST 2003

On Thu, 2003-07-17 at 18:58, Ronald Bultje wrote:
> Hey Nick,
> On Thu, 2003-07-17 at 00:30, nick wrote:
> > I'm not sure - doesn't this mean we could wait indefinitely for a midi
> > message which will never arrive, and risk not generating the audio in
> > time?
> You'd make it loopbased. This means your plugin has its own (sort of)
> main function, which requests the next data buffer (well, message) from
> the previous element. Note that this is not synced or anything (so
> nothing is waited for), so this buffer (message) will arrive far before
> the sound is actually played and you'll have more than enough time to
> create the audio that belongs to this message timestamp - or even
> earlier! Look at the timestamp of this message, and generate the buffer
> of the timestamp that you are supposed to create a data buffer for now
> (which could be earlier than the timestamp of the message).
> You can request two messages from the previous element with timestamps of
> 0 and 100 seconds (that'd be weird, but oh well), and then generate sound
> from 0 up to 99 seconds and start requesting new messages to see what
> you're supposed to do next. That's all valid.
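To make the loop-based scheme concrete, here is a minimal stand-in in plain Python (not real GStreamer API; the render() helper and the (timestamp, note) message format are made up for illustration). The loop pulls each message ahead of time and fills the gap up to its timestamp with audio buffers rendered from the current state:

```python
BUFFER_SECONDS = 1  # size of each generated audio buffer, in seconds

def render(start, end, note):
    """Stand-in for synthesis: describe the audio covering [start, end)."""
    return ("audio", start, end, note)

def loop(messages):
    """Pull (timestamp, note) messages; emit audio buffers up to each one."""
    out = []
    pos = 0      # timestamp of the next audio buffer to generate
    note = None  # currently sounding note (None = silence)
    for ts, new_note in messages:
        # The message can arrive far ahead of the audio clock; fill the
        # gap with buffers rendered from the state that held until now.
        while pos < ts:
            out.append(render(pos, pos + BUFFER_SECONDS, note))
            pos += BUFFER_SECONDS
        note = new_note
    return out

# Messages at 0 and 100 seconds, as in the example above: the note-on at 0
# yields 100 one-second buffers covering 0 up to 100.
bufs = loop([(0, "C4"), (100, None)])
```

Pulling the second message first is what makes this work: the element learns it must cover 0-100 seconds before any of that audio is due.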

Just to clarify, this would work when your source is a midi file, so you
have access to all of your midi buffers ahead of time.

In the case where you have a real-time input such as alsamidisrc, then
alsamidisrc will have to generate timestamped filler events so that
amSynth knows when it needs to spit out an audio buffer. In this case
latency would be minimised by matching the period of the filler events
with the size of your audio buffers.
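A rough sketch of the filler idea, again as a plain Python stand-in (the function names and event format are illustrative, not the alsamidisrc/amSynth API): the source interleaves periodic filler events with the real MIDI events, and the synth emits exactly one audio buffer per filler tick, so it never waits indefinitely for a message that may not come:

```python
PERIOD = 10  # filler period in ms, matched to the audio buffer size

def with_fillers(midi_events, duration_ms):
    """Merge (timestamp, note) MIDI events with periodic filler events
    so the downstream synth always knows time is advancing."""
    fillers = [(t, "filler") for t in range(0, duration_ms, PERIOD)]
    return sorted(fillers + midi_events)

def synth(events):
    """Emit one audio buffer per filler tick, applying MIDI as it arrives."""
    note, out = None, []
    for ts, kind in events:
        if kind == "filler":
            out.append((ts, ts + PERIOD, note))  # render current state
        else:
            note = None if kind == "off" else kind  # update synth state
    return out

# A note-on at 5 ms and note-off at 25 ms, over 40 ms of fillers:
# four 10 ms buffers come out, and the note change at 25 ms is only
# heard from the 30 ms buffer on - the filler period bounds the latency.
bufs = synth(with_fillers([(5, "C4"), (25, "off")], 40))
```

This is why matching the filler period to the audio buffer size minimises latency: a MIDI event can be delayed by at most one filler interval before it affects the output.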

Steve Baker <steve at stevebaker.org>
