Summary of waveform widget project
ensonic at hora-obscura.de
Sun Apr 29 12:56:02 PDT 2012
A couple of comments inline. I think it would also be good to just
discuss this on gstreamer-devel and thus move the thread there.
On 04/27/2012 11:26 AM, pecisk at gmail.com wrote:
> Hi everyone!
> This is a summary for me and you of the ideas expressed about the waveform widget.
> Complete picture:
> First of all, shortly about my project (citing from my proposal): It
> would have three major parts - first one will be element which reads
> digital audio data from media for levels. Second part will be data
> model which will store level data and will provide interfaces for
> calculating data for different scenarios like zoom changes and
> offsets. It will also cache data to operative or persistent memory
> (e.g. hard
> disk). Third then would be Gtk+ canvas widget using Cairo with options
> how to draw it.
> There's a nice snippet from a discussion with Stefan about this:
> "<pecisk> So it nutshell it would be like this - we have bin, we set
> property uri and tree additional params start, stop and interval, set
> to play and we get results which we return to data model (which could
> cache data and/or calculate needed data for zoom levels) and that
> feeds those data to widget
> <ensonic> yep, the widget will like to draw a waveform or parts of it,
> it will ask the model for the data, the model acts as a wrapper that
> hides how we get to the data
> this way the model can act as a cache, provide data from a file or
> get live data from gstreamer
> in gtk the treeview uses such a split"
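The model-as-wrapper idea above could be sketched roughly like this in
Python (all names here are hypothetical, the real API is still to be
designed; the point is only the GtkTreeView-like view/model split):

```python
class WaveformModel:
    """Hypothetical data model: hides where the level data comes from.

    The widget only asks for peaks; the model may serve them from a
    cache, from a file, or from a live GStreamer pipeline.
    """

    def __init__(self, source):
        # 'source' is any callable returning the full list of peak values
        self._source = source
        self._cache = None

    def get_peaks(self, start, stop):
        """Return peak values for the range [start, stop), caching lazily."""
        if self._cache is None:
            self._cache = list(self._source())
        return self._cache[start:stop]

# usage: the widget never sees where the data comes from
model = WaveformModel(lambda: [0.1, 0.5, 0.3, 0.9, 0.2])
print(model.get_peaks(1, 4))  # → [0.5, 0.3, 0.9]
```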
> * About level reading element:
> + bemasc has an idea of something like a "level" element which creates
> buffers for level reading. While I found this idea confusing, it has
> some solid points, and bemasc wrote a nice summary about storing peak data
> for caching on this wiki page:
> http://gstreamer.freedesktop.org/wiki/AudioPeaksElement so this
> information will be quite useful for me during implementation.
> + I will start with my original idea of separate modules, so level
> reading element will be implemented in C using pipeline bus and level
> element messages. Idea is to keep all things separated, so if any
> other more effective way of reading levels comes up, it could be
> easily swapped, leaving data model and widget parts intact.
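One small detail worth noting here: the level element posts its readings
(the "rms"/"peak" fields of its element messages) in dB, while drawing
code usually wants linear amplitude in [0, 1]. A tiny pure-Python helper
for the conversion (no GStreamer required):

```python
def db_to_linear(db):
    """Convert a level reading in dB (as found in the 'level' element's
    "rms"/"peak" message fields) to linear amplitude.
    0 dB -> 1.0, -6 dB -> ~0.5, -inf (silence) -> 0.0."""
    if db == float("-inf"):
        return 0.0
    return 10.0 ** (db / 20.0)

print(db_to_linear(0.0))             # → 1.0
print(round(db_to_linear(-6.0), 3))  # → 0.501
print(db_to_linear(float("-inf")))   # → 0.0
```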
For this part we need to think a bit about performance. Using a bin with
level inside is a good start. For the future we need to figure out how we
could speed up the analysis part. Using an audiopeaks element instead of
level does not sound like it would give us a large boost. I would
rather consider e.g. running 4 analysis pipelines in parallel, where each
of them would analyse 1/4 of the audio. Anyway, food for thought.
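The parallel-analysis idea could start from a segment split like the one
below (a sketch only; real code would additionally seek each pipeline to
its assigned range):

```python
def split_segments(duration_ns, n):
    """Split a stream of 'duration_ns' nanoseconds into n contiguous
    (start, stop) ranges, so that n analysis pipelines can each scan
    one part of the audio in parallel."""
    edges = [duration_ns * i // n for i in range(n + 1)]
    return list(zip(edges, edges[1:]))

# e.g. a 10-second stream split for 4 pipelines
print(split_segments(10_000_000_000, 4))
```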
> * Data model:
> This would be a data structure holding the level readings, with additional
> functions for a resampling filter (medians for each n samples, etc.). It
> would let me request data for given start/stop values.
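The "medians for each n samples" resampling could look something like
this (a sketch; a max-peak variant would work the same way, and the
function name is just for illustration):

```python
import statistics

def resample_levels(levels, n):
    """Reduce level readings for a coarser zoom step: one median per
    group of n readings."""
    return [statistics.median(levels[i:i + n])
            for i in range(0, len(levels), n)]

print(resample_levels([0.1, 0.2, 0.9, 0.4, 0.5, 0.6], 3))  # → [0.2, 0.5]
```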
> * Gtk+ Widget:
> From my proposal I have ideas for such visual customisation:
> "* Is waveform centered around middle line, or it's relative (as Jokosher does);
> * Waveform does/doesn't have outline, and how thick and what color it has;
> * Waveform background color/image;"
A couple of variants I've seen too are discrete channels vs. a mono
mixdown view, or a special stereo view, Jokosher-like, with left on the
top and right upside-down below it. One thing to keep in mind is that the
drawing code needs to provide an aggregate view for zoom levels > 1
sample = 1 pixel. Many audio editors start to do cubic interpolation for
zoom levels < 1 sample = 1 pixel, but highlight the samples (e.g. with a
small dot). Audacity does that quite well.
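The aggregate view for zoomed-out drawing typically means one vertical
min/max bar per pixel column. A minimal sketch of that mapping (pure
Python, no Cairo; the function name is hypothetical):

```python
def minmax_columns(samples, width):
    """Aggregate raw samples into one (min, max) pair per pixel column,
    for drawing a waveform when zoomed out past 1 sample = 1 pixel:
    each column is then drawn as a vertical bar from min to max."""
    cols = []
    for x in range(width):
        lo = len(samples) * x // width
        hi = len(samples) * (x + 1) // width
        chunk = samples[lo:hi] or [0.0]  # guard against empty columns
        cols.append((min(chunk), max(chunk)))
    return cols

print(minmax_columns([0.0, 0.8, -0.5, 0.2], 2))  # → [(0.0, 0.8), (-0.5, 0.2)]
```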
> Excerpt from a discussion in #pitivi around waveform implementations:
> "<bemasc> huh
> pitivi is doing a lot of this stuff the "right" way, with maximum
> abstraction, but it sure ain't easy.
> <emdash> we use i think pixels per nanosecond, which is stupid
> nanoseconds per pixel would make a bit more sense
> maybe stupid isn't the right word
> but it's been pointed out to me that the way we do zooming
> calculations makes rounding errors more likely"
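emdash's point can be made concrete: storing the zoom as integer
nanoseconds per pixel keeps all conversions in exact integer arithmetic,
whereas a float pixels-per-nanosecond factor accumulates rounding errors
at deep zoom. A sketch (function names are just for illustration):

```python
def pixel_to_time(x, ns_per_pixel):
    """Map a pixel column to a stream time, staying in integer
    nanoseconds throughout (the 'nanoseconds per pixel' scheme)."""
    return x * ns_per_pixel

def time_to_pixel(t_ns, ns_per_pixel):
    """Inverse mapping; floor division keeps the result an exact int."""
    return t_ns // ns_per_pixel

print(pixel_to_time(640, 1_000_000))          # → 640000000 (0.64 s)
print(time_to_pixel(640_000_000, 1_000_000))  # → 640
```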
> * There are several advanced and post GSoC topics to think about:
> + There's a notion going around about a simple widget collection for
> GStreamer-based multimedia applications. This could be a starting point,
> but let's see how I handle this one first :) I fully support the whole
> idea though.
> + What could be more challenging is how to read live audio stream data
> (and how to make the whole workflow respond to that), but I plan to
> investigate that as the project progresses.
> I will start to work on some code this weekend, starting with simple
> things like reading levels in C (and through gi passing them to Python),
> and drawing simple waveforms from it. You will find code here
> Peteris Krisjanis.