[Fwd: Re: [gst-devel] Midi and GStreamer]
Ronald Bultje
rbultje at ronald.bitfreak.net
Wed Jul 16 23:44:36 CEST 2003
Hey Nick,
On Thu, 2003-07-17 at 00:30, nick wrote:
> I'm not sure - doesn't this mean we could wait indefinitely for a midi
> message which will never arrive, and risk not generating the audio in
> time?
You'd make it loop-based. This means your plugin has its own (sort of)
main function, which requests the next data buffer (well, message) from
the previous element. Note that this is not synced to the output clock
or anything (so nothing is waited for): the buffer (message) will arrive
far before the sound is actually played, so you'll have more than enough
time to create the audio that belongs to this message's timestamp - or
even earlier! Look at the timestamp of the message, and generate the
buffer for the timestamp that you are supposed to produce data for now
(which could be earlier than the timestamp of the message).
You can request two messages from the previous element with timestamps
of 0 and 100 seconds (that'd be weird, but oh well), then generate sound
from 0 up to 99 seconds and start requesting new messages to see what
you're supposed to do next. That's all valid.
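To make the loop-based idea concrete, here's a minimal plain-C sketch of
that model (hypothetical names throughout - `pull_message` stands in for
pulling from the upstream pad, and the toy source replays exactly the
0-and-100-seconds example above; this is not real GStreamer API):

```c
#include <stdio.h>

/* A timestamped MIDI message, as the synth element would receive it. */
typedef struct { double timestamp; int note; } MidiMsg;

/* Toy upstream source: two messages at 0 and 100 seconds. */
static const MidiMsg stream[] = { {0.0, 60}, {100.0, 64} };
static int pos = 0;

/* Stand-in for pulling the next message from the previous element. */
static const MidiMsg *pull_message(void) {
    return pos < 2 ? &stream[pos++] : NULL;
}

/* The element's own "main function": pull each message, render audio
 * from the current position up to that message's timestamp, then apply
 * the message. Nothing here waits on a clock; the loop can run far
 * ahead of playback. Returns how far audio has been rendered. */
static double run_loop(void) {
    double rendered_until = 0.0;
    const MidiMsg *msg;
    while ((msg = pull_message()) != NULL) {
        if (msg->timestamp > rendered_until) {
            /* generate_audio(rendered_until, msg->timestamp); */
            rendered_until = msg->timestamp;
        }
        /* apply_message(msg);  e.g. note-on for msg->note */
    }
    return rendered_until;
}
```

The point is just that the element drives its own loop: it decides when
to pull and how much audio to render, rather than reacting to pushes.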
> The way I see this working (coming from my audio+midi programming
> experience) is:
>
> - wait until amSynth is required to generate another buffer of audio
> - collect all waiting MIDI messages (from alsamidisrc)
> - generate my audio data given those midi messages
>
> This is based around a 'pull' structure for audio programming.. is this
> something compatible with gstreamer?
amSynth can just request each of these messages and queue them
internally until they're needed. That's what the loop-based approach
gives you.
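The internal queueing could be as simple as this (a sketch with made-up
names, not anything in GStreamer): messages are pulled eagerly and only
consumed once the render clock reaches their timestamp.

```c
#include <stddef.h>

/* A pulled MIDI message waiting for its timestamp to come up. */
typedef struct { double timestamp; int note; } QueuedMsg;

/* Tiny fixed-size FIFO of pending messages. */
#define QCAP 16
static QueuedMsg queue[QCAP];
static int qhead = 0, qtail = 0;

/* Store a message pulled from upstream, ahead of when it's needed. */
static void enqueue(QueuedMsg m) { queue[qtail++ % QCAP] = m; }

/* Return the next message due at or before time `now`, or NULL if the
 * head of the queue is still in the future (or the queue is empty). */
static const QueuedMsg *dequeue_due(double now) {
    if (qhead < qtail && queue[qhead % QCAP].timestamp <= now)
        return &queue[qhead++ % QCAP];
    return NULL;
}
```

Each time the synth renders a buffer, it drains only the messages whose
timestamps fall inside that buffer - no second thread required.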
> Otherwise I think a multi-threaded approach may be needed (1 thread to
> collect all midi messages ready for the audio thread.)
Nah, shouldn't be needed.
HTH,
Ronald
--
Ronald Bultje <rbultje at ronald.bitfreak.net>