video whale redux

Douglas Bagnall douglas at
Tue Mar 15 16:25:07 PDT 2011

It is sort of well known that long ago Zeeshan Ali Khattak and others
made a video wall using GStreamer 0.4 and Xinerama [1,2].  Since then
video cards have sprouted extra outputs and xrandr has pushed Xinerama
into a dark corner.  What worked in 2002 seems not to work now and,
perhaps due to the rise of projectors, nobody does video walls any
more.  So this message describes my approach to multiple synchronised
video using newish hardware and software.  (When I started writing
this I had questions, but I worked them out so I'm continuing to fill
the hole in the internet).


The core of the Video Whale approach was to make a single video window
which would cover a 2x2 array of monitors, and let Xinerama handle the
cropping for each monitor.  As I understand it, with modern X this
would require the whole buffer to be duplicated in each graphics card
and generally muck up hardware acceleration.  I don't know for sure,
because I abandoned Xinerama before I got it working, choosing instead
to crop the video in the pipeline and send each projector only what it
needs to know.  With the radeon driver and some xorg.conf fiddling, I
got two ATI 4350 cards (the cheapest thing available) to work in a
kind of hybrid Zaphod/xrandr mode, with each dual-head card acting as
an independent xrandr-manipulable X screen. It's a crazy set-up for
ordinary use, with X input never going where you want it, but video is
fine.

After poking the monitors into a conventional side-by-side setup with
xrandr, I put xvimagesinks on GTK windows arranged like this:

Monitor      X display    window x position
1            :0.0         0
2            :0.0         X screen 0 width / 2
3            :0.1         0
4            :0.1         X screen 1 width / 2
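The table above is just arithmetic on the X screen widths.  As a
sketch (in Python, with made-up screen widths -- two dual-head X
screens of 2048 pixels each, i.e. two 1024-wide projectors per card):

```python
def window_layout(screen_widths):
    """Return (monitor, display, window x position) tuples for a
    4-monitor wall: one fullscreen window per projector, with the
    right-hand head of each card offset by half the screen width."""
    layout = []
    monitor = 1
    for display in sorted(screen_widths):
        width = screen_widths[display]
        layout.append((monitor, display, 0))               # left head
        layout.append((monitor + 1, display, width // 2))  # right head
        monitor += 2
    return layout

# Two X screens, each 2048 wide (hypothetical numbers, not queried from X):
print(window_layout({":0.0": 2048, ":0.1": 2048}))
```

In real use the widths would come from the X server rather than being
hard-coded.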

I then made each window fullscreen using GTK calls.  The total output
on standard projectors is 4 x 1024x768.  Scaling and dividing an
800x600 25fps v4l2 source across all windows uses ~10% of a dirt-cheap
AMD X4
core.  To play synchronised video I join them together in a 4096x768
file.  With x264 and "fastdecode" tuning, CPU usage hovers around 75%;
VP8, MPEG-4, and MJPEG take a bit more (as does x264 with standard
options).

To put it another way, I use approximately "playbin2 ! tee name=t",
followed by four of these: "t. ! queue ! videocrop ! xvimagesink".
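Each videocrop branch keeps one 1024-wide slice of the 4096x768 frame,
which fixes its left/right crop values.  A sketch of that calculation
(a hypothetical helper, not taken from the Opo code):

```python
def crop_params(total_width, slices):
    """Left/right pixel counts for each videocrop element when
    splitting a frame of total_width into equal vertical slices."""
    slice_width = total_width // slices
    params = []
    for i in range(slices):
        left = i * slice_width
        right = total_width - left - slice_width
        params.append({"left": left, "right": right})
    return params

# Four 1024x768 slices of a 4096x768 frame:
for p in crop_params(4096, 4):
    print(p)
```

The first branch crops nothing on the left and 3072 pixels on the
right, the last the reverse.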


I was doing this for an art installation that involved a 5 minute
loop.  Looping the video via playbin2's "about-to-finish" signal
worked fine with silent video but not at all with sound.  On IRC,
__tim suggested using segment seeks, which worked as long as I always
used flushing seeks.

I first tried with two different ATI cards (4350 and 5-something) but
the X drivers were not particularly happy with that.

The fglrx driver seemed like it would also work with the right
xorg.conf.  Nvidia users who discuss multiple cards in internet
forums insist they need old versions of X, but then they also want
draggable windows and compiz effects and other things unnecessary for
fullscreen video.

Zeeshan Ali's original Video Whale showed a 320x240 video stream
across a 16-monitor display, with each quarter run by a separate
computer.  Each of these sub-whales got the video stream via the
network and cropped it to its particular 160x120 corner.  This was
scaled in software to 640x480, then by XV to 1280x960, for Xinerama to
distribute across its four 640x480 monitors in 16-bit colour.  Thus,
overall, the 320x240 image was scaled to 2560x1920, and the effective
resolution of each monitor was 80x60.  Now I get 1024x768 per monitor,
which suggests a 163.84-fold improvement in cheap hardware and/or
video plumbing over 9 years, if you don't take into account Video
Whale's extra decoding and network overhead.
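The improvement factor is just the ratio of effective per-monitor
resolutions, as in the paragraph above:

```python
# Original Video Whale: each monitor showed an 80x60 patch of the source.
old_effective = 80 * 60
# Now: each monitor shows a native 1024x768 patch.
new_effective = 1024 * 768
improvement = new_effective / old_effective
print(improvement)  # 163.84
```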

Synchronised projection seems to be a common problem for video artists
and galleries.  According to galleries in New Zealand and Australia,
current best practice is "start the DVDs at the same time" and the
result is rarely satisfactory for the artists.

It seems likely that a combination of a motherboard with more PCI-E
slots, a faster CPU, and better video cards would be able to drive 6
or 8 projectors.  It also looks easy to scale using a server, as Video
Whale did -- thanks to playbin2 magic, the client would barely need
to change.

Opo, a small video whale

Opo was a famous New Zealand dolphin[3], and with just a small
taxonomic stretch, I so named my code.  It is here, under GPLv3:

The pipeline itself is built in C.  There is also an example
xorg.conf, and a Python GUI to select and launch the video.  The idea
is that the art gallery people only have to press the power button
twice a day, but can do more if they really need to.


Douglas Bagnall

More information about the gstreamer-devel mailing list