ANNOUNCE: vidi, MIDI-driven video creation

Antonio Ospite ao2 at ao2.it
Fri Dec 16 09:42:35 UTC 2016


Hi,

I would like to announce that I published "vidi": a set of experimental
tools to assemble videos from MIDI; an introduction is at
https://ao2.it/126

The code is here:
https://git.ao2.it/vidi-player.git/

The code is based on GES and GStreamer; there are three main tools:

- vidi-timeline: analyzes a MIDI file and creates a GES timeline, taking
  the assets from a "VideoFont", which has one video sample per note
  value (a rough sketch of this mapping is shown after the list of
  tools).
  
  The README.md file has instructions on how to create a VideoFont,
  either synthetically with the GStreamer test sources or from an
  actual recording.

  The created timeline can be saved as an xges project and opened in
  Pitivi. Such timelines can be long and use quite a lot of clips, so
  maybe they could also serve as test cases to improve the robustness
  of Pitivi.

  I saw crashes in Pitivi with some project files created by
  vidi-timeline; I intend to file bug reports eventually.

- vidi-player: plays a MIDI file, taking video samples from a VideoFont.

- vidi-sampler: plays MIDI events interactively from a MIDI controller,
  taking video samples from a VideoFont.
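
To make the vidi-timeline mapping more concrete, here is a minimal
Python sketch of how note events could be turned into GES clips and
saved as an xges project. The VideoFont layout (one file per MIDI note,
e.g. videofont/note-60.webm) and the hard-coded events list are
illustrative assumptions, not vidi's actual formats:

#!/usr/bin/env python3
# Minimal sketch: map (note, start, duration) events to GES clips.
# The VideoFont layout (one file per MIDI note) and the hard-coded
# events below are illustrative assumptions, not vidi's actual formats.
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GES', '1.0')
from gi.repository import Gst, GES

Gst.init(None)
GES.init()

timeline = GES.Timeline.new_audio_video()
layer = timeline.append_layer()

# (midi_note, start_in_seconds, duration_in_seconds)
events = [(60, 0.0, 0.5), (64, 0.5, 0.5), (67, 1.0, 1.0)]

for note, start, duration in events:
    uri = Gst.filename_to_uri('videofont/note-%d.webm' % note)
    asset = GES.UriClipAsset.request_sync(uri)
    layer.add_asset(asset,
                    int(start * Gst.SECOND),     # position on the timeline
                    0,                           # inpoint inside the sample
                    int(duration * Gst.SECOND),  # how much of the sample to use
                    GES.TrackType.UNKNOWN)       # take the asset's own tracks

# Save the result as an xges project that Pitivi can open.
timeline.save_to_uri(Gst.filename_to_uri('song.xges'), None, True)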

The project is still in a proof-of-concept phase, so with this message
I would also like to discuss some technical matters.

Questions
=========

1. What is the best way to switch in a gapless fashion between the video
   samples?

   Right now vidi-sampler and vidi-player use a playbin and rely on the
   "about-to-finish" signal to switch between video samples; to trigger
   the signal interactively, a flushing seek to the END of the current
   stream is performed when a new note event is detected. This seems to
   work mostly fine for a proof of concept, but I feel it could be
   fragile for anything more serious (a sketch of the approach is shown
   after the questions).

   Can there be races if two events occur too close in time to one
   another?

2. What would be a better VideoFont format?

   Right now each video sample is in a separate file; even if they all
   use the same container and codecs, there is still a setup time for
   the decoders each time the video sample changes.

   Maybe using a single file and seeking around to play "regions" of it
   could improve things? The pad probe examples show how to define
   a region; is this an approach worth exploring? There would be one
   region per note, and an index of the regions could be provided either
   externally or as metadata in the container. Can the concept of
   "chapters" available in some containers be abused to represent
   different video samples in a single file? (A rough sketch of a region
   seek is included after the questions.)

   Could using a single file work for vidi-timeline as well?
   The same file would be added multiple times to the GES timeline, but
   with different clip inpoints depending on the region/sample.
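
For reference, question 1 is about code along the lines of the
following sketch; note_to_uri() and the way note_on() would be called
from the MIDI side are placeholders, not vidi's actual code:

#!/usr/bin/env python3
# Sketch of the "about-to-finish" switching described in question 1.
# note_to_uri() and the way note_on() gets called from the MIDI side
# are placeholders, not vidi's actual code.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

playbin = Gst.ElementFactory.make('playbin', None)
next_uri = None

def note_to_uri(note):
    # Illustrative VideoFont layout: one file per MIDI note.
    return Gst.filename_to_uri('videofont/note-%d.webm' % note)

def on_about_to_finish(pb):
    # Runs from a streaming thread just before the current stream ends;
    # setting "uri" here is what makes the transition gapless.
    if next_uri is not None:
        pb.set_property('uri', next_uri)

playbin.connect('about-to-finish', on_about_to_finish)

def note_on(note):
    # A new note arrived: remember which sample comes next, then do
    # a flushing seek to the end of the current sample so that
    # "about-to-finish" fires (almost) immediately.
    global next_uri
    next_uri = note_to_uri(note)
    ok, duration = playbin.query_duration(Gst.Format.TIME)
    if ok:
        playbin.seek_simple(Gst.Format.TIME, Gst.SeekFlags.FLUSH, duration)

playbin.set_property('uri', note_to_uri(60))
playbin.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()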
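
For question 2, the "regions" idea could perhaps be prototyped with
a single flushing seek that sets both a start and a stop position, so
that only the slice of the file corresponding to one note plays. The
regions table below is a made-up stand-in for the external index (or
container metadata) mentioned above:

#!/usr/bin/env python3
# Sketch for question 2: play one "region" of a single VideoFont file
# by seeking with both a start and a stop position. The regions table
# is an illustrative stand-in for the external index (or container
# metadata) mentioned above.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# note -> (start, stop) in nanoseconds inside the single VideoFont file
regions = {
    60: (0 * Gst.SECOND, 2 * Gst.SECOND),
    64: (2 * Gst.SECOND, 4 * Gst.SECOND),
}

playbin = Gst.ElementFactory.make('playbin', None)
playbin.set_property('uri', Gst.filename_to_uri('videofont.webm'))
playbin.set_state(Gst.State.PAUSED)
playbin.get_state(Gst.CLOCK_TIME_NONE)  # wait for preroll so seeks succeed

def play_region(note):
    start, stop = regions[note]
    # A single flushing seek selects the region; the decoders keep
    # running on the same file, so no per-sample setup is repeated.
    playbin.seek(1.0, Gst.Format.TIME, Gst.SeekFlags.FLUSH,
                 Gst.SeekType.SET, start,
                 Gst.SeekType.SET, stop)
    playbin.set_state(Gst.State.PLAYING)

play_region(60)
GLib.MainLoop().run()

For vidi-timeline the equivalent would presumably be to keep one asset
and only vary the inpoint passed to GES.Layer.add_asset(), as mentioned
above.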


Ideas for further development
=============================

- The current code can only deal with linear timelines; however, showing
  overlapping notes on a composited canvas could be supported, so as to
  create videos like the following in a semi-automatic way (a rough GES
  sketch of this follows the list):
  https://www.youtube.com/watch?v=opg4VGvyi3M

- The first example videos I made use some very trivial VideoFonts;
  maybe someone here could put me in touch with artists interested in
  producing more engaging VideoFonts?
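
As a very rough sketch of the compositing idea, two overlapping notes
could be placed on separate GES layers and positioned side by side
through the frame positioner child properties; the file names and sizes
below are made up:

#!/usr/bin/env python3
# Rough sketch of compositing two overlapping notes on separate GES
# layers. The child properties ("posx", "posy", "width", "height") come
# from GES's frame positioner; the sample files and sizes are made up.
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GES', '1.0')
from gi.repository import Gst, GES

Gst.init(None)
GES.init()

timeline = GES.Timeline.new_audio_video()
upper = timeline.append_layer()   # first layer is composited on top
lower = timeline.append_layer()

def add_note(layer, filename, start, duration, posx):
    asset = GES.UriClipAsset.request_sync(Gst.filename_to_uri(filename))
    clip = layer.add_asset(asset, start, 0, duration, GES.TrackType.UNKNOWN)
    # Shrink and place the clip so simultaneous notes stay visible.
    clip.set_child_property('posx', posx)
    clip.set_child_property('posy', 0)
    clip.set_child_property('width', 320)
    clip.set_child_property('height', 240)
    return clip

# Two notes overlapping in time, shown side by side.
add_note(upper, 'videofont/note-60.webm', 0, 2 * Gst.SECOND, 0)
add_note(lower, 'videofont/note-64.webm', 1 * Gst.SECOND, 2 * Gst.SECOND, 320)

timeline.save_to_uri(Gst.filename_to_uri('chords.xges'), None, True)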

Thanks,
   Antonio

-- 
Antonio Ospite
https://ao2.it
https://twitter.com/ao2it

A: Because it messes up the order in which people normally read text.
   See http://en.wikipedia.org/wiki/Posting_style
Q: Why is top-posting such a bad thing?

