<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<meta http-equiv="Content-Type" content="text/html;
charset=ISO-8859-1">
<p>
I'm working on a dataflow interface for live media generation,
similar to tools like PureData, vvvv and Isadora (VJing, sound
production, interactivity, etc.). Currently our dataflow is
synchronous, but we want to make it asynchronous and
multi-threaded.<br>
</p>
<p>Here is an example screenshot: <a href="http://ur1.ca/cl992">http://ur1.ca/cl992</a>
</p>
<p>
We would probably choose either a push or a pull model
(not both), although we're not sure which one yet. Also, our data
is "hard-typed": we support just a few types to make things
easy for the user (e.g. one type of audio, one type of video).
Finally, we provide a high-level C++ interface for the effects
(elements) that tries to make it easy for inexperienced
developers to contribute.<br>
</p>
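<p>
To make that concrete, here is a rough sketch of the kind of
high-level interface we have in mind. All the names here (Effect,
Buffer, Gain) are placeholders for illustration, not an existing
API:<br>
</p>
<pre>
// Hypothetical sketch of a high-level effect interface: a contributor
// subclasses Effect and only implements process().
#include &lt;map>
#include &lt;string>
#include &lt;vector>

// One hard-typed value flowing through the graph.
struct Buffer {
    enum Type { AUDIO, VIDEO, FLOAT, INT, STRING, TRIGGER };
    Type type;
    std::vector&lt;float> samples;  // payload (audio samples, or a single float)
    Buffer() : type(TRIGGER) {}
};

// Base class for all effects/elements in the graph.
class Effect {
public:
    virtual ~Effect() {}
    // Called by the scheduler (push or pull) when inputs are ready.
    virtual void process() = 0;
protected:
    Buffer&amp; input(const std::string&amp; name) { return pads_[name]; }
    void output(const std::string&amp; name, const Buffer&amp; b) { pads_[name] = b; }
private:
    std::map&lt;std::string, Buffer> pads_;
};

// Example contribution: a gain applied to the single audio type.
class Gain : public Effect {
public:
    void process() {
        Buffer buf = input("in");
        float gain = input("gain").samples.empty()
                         ? 1.0f : input("gain").samples[0];
        for (size_t i = 0; i &lt; buf.samples.size(); ++i)
            buf.samples[i] *= gain;  // scale every sample
        output("out", buf);
    }
};
</pre>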
<p>
My question is: Could I use GStreamer as the backend (i.e. managing
the dataflow)?<br>
</p>
<p>
Here are the characteristics/constraints of the software:<br>
- Qt-based GUI<br>
- allow for many data types: <br>
— audio (we'll support just one type through the flow, like
audio/raw-float at 44.1 kHz)<br>
— video (again, we'll support just one type here as well, 32-bit
RGBA)<br>
— floats and integers<br>
— strings<br>
— triggers / "bangs"<br>
— datastructures such as vectors and maps<br>
— vector graphics (SVG)<br>
— 3d scenes and textures<br>
— lambda (i.e. elements/plugins can send a copy of themselves in a
buffer) (see this example: <a
href="http://www.cs.bgu.ac.il/%7Eshemeas/juks/"
title="http://www.cs.bgu.ac.il/~shemeas/juks/">http://www.cs.bgu.ac.il/~shemeas/juks/</a>)<br>
— any other user-defined datatype should be easily implementable
through a high-level interface<br>
- composite elements (abstractions)<br>
- possibility to change the graph in real time (see the sketch after
this list):<br>
— unplug and replug pads<br>
— remove elements from the pipeline<br>
- multi-threaded<br>
— we're not sure how it will be implemented yet: either each element
will run on its own thread, or parts of the graph will<br>
— freely moving elements from one thread to another<br>
- low latency, especially for audio, e.g. the possibility of
prioritizing audio threads over video threads<br>
- eventually portable to mobile platforms<br>
</p>
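<p>
Regarding real-time graph changes in particular, what we would need
is roughly the following. This is only a minimal sketch against the
GStreamer 1.0 C API, using the usual pad-blocking idiom; the concrete
elements (videotestsrc, identity, videoflip, autovideosink) are just
stand-ins for our effects:<br>
</p>
<pre>
// Sketch: swap one effect for another in a playing pipeline by
// blocking the upstream pad, rewiring, then letting data flow again.
#include &lt;gst/gst.h>

static GstElement *pipeline, *src, *old_fx, *new_fx, *sink;

// Runs once the source pad is safely blocked between buffers.
static GstPadProbeReturn
swap_effect (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  // Detach the old effect and drop it from the running pipeline.
  gst_element_unlink (src, old_fx);
  gst_element_unlink (old_fx, sink);
  gst_element_set_state (old_fx, GST_STATE_NULL);
  gst_bin_remove (GST_BIN (pipeline), old_fx);

  // Insert the replacement and relink src -> new_fx -> sink.
  gst_bin_add (GST_BIN (pipeline), new_fx);
  gst_element_link_many (src, new_fx, sink, NULL);
  gst_element_sync_state_with_parent (new_fx);

  return GST_PAD_PROBE_REMOVE;  // removes the block; data flows again
}

int
main (int argc, char *argv[])
{
  gst_init (&amp;argc, &amp;argv);

  pipeline = gst_pipeline_new ("live");
  src    = gst_element_factory_make ("videotestsrc", NULL);
  old_fx = gst_element_factory_make ("identity", NULL);
  new_fx = gst_element_factory_make ("videoflip", NULL);
  sink   = gst_element_factory_make ("autovideosink", NULL);

  gst_bin_add_many (GST_BIN (pipeline), src, old_fx, sink, NULL);
  gst_element_link_many (src, old_fx, sink, NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  // Later, e.g. from the GUI: block the flow, then rewire in the probe.
  GstPad *srcpad = gst_element_get_static_pad (src, "src");
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
                     swap_effect, NULL, NULL);
  gst_object_unref (srcpad);

  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
</pre>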
<p>
Example of application:<br>
</p>
<p>
A live performance with a dancer and a video projection. The
movements of the dancer are analysed with OpenCV, which influences
both the audio and the video in real time. A hardware interface on the
dancer sends accelerometer data over Wi-Fi through OSC messages, also
influencing the audio and video. The video is generated by a mix
of video effects applied to a camera image. The resulting image is
projected as a texture on a 3D object, which is then rendered to the
projection screen. Finally, sentences submitted through a webpage are
also sent to the application: the text is rendered as vector objects
which are then modified using vector effects and projected on the
floor. The application is controlled live: the performer manipulates
the graph in real time, rewiring connections between effects, playing
with sliders and generating sound using a microphone.<br>
</p>
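<p>
For the accelerometer input in that scenario, the OSC side could be as
small as the sketch below. I'm assuming liblo here, and the "/accel"
path and the three-float typespec are guesses about the hardware, not
a fixed spec:<br>
</p>
<pre>
// Sketch: receive OSC accelerometer messages in a background thread
// and hand the values to the dataflow graph as floats.
#include &lt;lo/lo.h>
#include &lt;cstdio>

// Called by liblo's server thread for each incoming /accel message.
static int
accel_handler (const char *path, const char *types, lo_arg **argv,
               int argc, lo_message msg, void *user_data)
{
  float x = argv[0]->f, y = argv[1]->f, z = argv[2]->f;
  // ... push (x, y, z) into the graph as float buffers ...
  printf ("accel: %f %f %f\n", x, y, z);
  return 0;  // message handled
}

int
main ()
{
  // Listen for OSC over UDP on port 9000 (arbitrary choice).
  lo_server_thread st = lo_server_thread_new ("9000", NULL);
  lo_server_thread_add_method (st, "/accel", "fff", accel_handler, NULL);
  lo_server_thread_start (st);
  getchar ();  // run until a key is pressed
  lo_server_thread_free (st);
  return 0;
}
</pre>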
<p>Thanks,<br>
</p>
<p>Tats<br>
</p>
</body>
</html>