[CREATE] Open Source Panorama Creation Workflow Revisited
Yuval Levy
create07 at sfina.com
Fri Jul 13 00:52:47 PDT 2007
Hi all,
The 14th edition of the World Wide Panorama (WWP) has been officially
launched. My self-set challenge for the event was to use as much
OpenSource software as possible, and I ended up using even more of it
than during my presentation at LGM. My report is below.
First things first, my gratitude goes to all those who have patiently
answered my technical questions, particularly to Pablo d'Angelo (hugin),
Giuseppe Rota (qtpfsgui), Cyrille Berger and Boudewijn Rempt (Krita),
Leon Moctezuma (freepv).
A big thank you also to Simon Jacobs for arranging shooting permissions
and access to the site, and for being very flexible about my delivery
schedule for his project.
Last but not least, a big thank you to the people who make the WWP
happen, Don Bain, Landis Bennett and Markus Altendorff.
Next: instant gratification.
<http://geoimages.berkeley.edu/worldwidepanorama/wwp607/html/YuvalLevy.html>
If the VR does not show because the QTVR format is not yet supported on
your computer, alternative formats, including Flash (warning: work in
progress, the smileys are not part of the published work) and Java, are
available at <http://www.photopla.net/070618assnat>
If you absolutely want to see it in context with OpenSource software,
install FreePV <http://freepv.sf.net/> on your favorite Linux distro.
Instructions for Ubuntu
<http://tech.groups.yahoo.com/group/PanoToolsNG/message/10772> - they
might be outdated, as libxml2 and maybe other libraries are now needed
to build FreePV.
Leon Moctezuma is a Google Summer of Code student taking FreePV to the
next level. He will appreciate any help / bug reports on FreePV,
particularly from people who are knowledgeable of the Mozilla plugin
architecture. Hubert: did you get my mail last week?
Still reading? Thank you for your patience. Bear with me; here is the story.
The World Wide Panorama is a quarterly event that brings together
professional and amateur virtual reality (VR) photographers from all
over the world around a single topic, which each VR artist interprets
in one VR. The topic for the current edition was "Community".
I decided to interpret "Community" on several levels. First, I wanted to
depict the place where a democratic community of seven million makes
its rules. Then, I wanted to use as much community software as possible,
inspired by the exciting exchanges at LGM.
The procedure for getting in and out of that location is necessarily
controlled, and the people there were all very nice to me. Once I got
over the feeling of awe, taking control of the camera and shooting the
VR was technically uneventful. Since the location is not generally
accessible for photography, I decided to make good use of the (little)
time allocated to me, and I shot multiple exposures.
On the same evening as the shooting I came back to the office and
produced an initial panorama using my known and tested workflow (80%
proprietary software). There it was, an HDR panorama ready for
submission almost two weeks ahead of the deadline.
This left me with some time to revisit my workflow. A disclaimer before
continuing. Panorama-making workflows are very personal. Many ways lead
to Rome and there are many ways to create a stitched panorama. I hereby
describe my preferred way. Other photographers might come to different
conclusions than I did. If you try to replicate this process, or to
apply some of my lessons to your own panorama-making process, the usual
disclaimers apply. YMMV.
As a reminder, the generic process is:
- convert the RAWs
- register the images' positions in space (and, for a real HDR process,
on the additional dimension of exposure value)
- output panorama (merge the images)
- edit the seams and retouch the image
- tonemap the HDR panorama
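To make the five generic steps concrete, here is a dry-run sketch that only
prints the kind of command line each step could map to in the OpenSource
stack. The tool names and flags (dcraw, hugin's autooptimiser and nona,
enblend, pfstmo_drago03) are my assumptions about the current CLI
front-ends, not the exact commands of my workflow.

```python
# Dry-run sketch of the generic pipeline. Each entry is only printed,
# never executed; the tools and flags are assumptions, not my exact workflow.
PIPELINE = [
    ("convert RAWs",            "dcraw -w -4 -T img_0001.raw"),
    ("register image positions","autooptimiser -a -o optimised.pto project.pto"),
    ("remap to panorama space", "nona -o remapped_ optimised.pto"),
    ("merge / blend the images","enblend -o pano.tif remapped_0000.tif remapped_0001.tif"),
    ("tonemap the HDR panorama","pfstmo_drago03 < pano.hdr"),  # in practice inside a pfsin/pfsout pipe
]

def dry_run(pipeline):
    """Return the commands as text, one per step, without running anything."""
    return ["# %s\n%s" % (step, cmd) for step, cmd in pipeline]

if __name__ == "__main__":
    print("\n".join(dry_run(PIPELINE)))
```

The list form makes it easy to comment a step out or swap one tool for a
proprietary equivalent, which matches the mix-and-match approach below.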
And the tools in my old workflow are:
- Adobe Lightroom to convert the RAWs
- PTgui for registration and output
- Adobe Photoshop to edit the seams and retouch
- Photomatix or the new PTgui Pro to create an HDR panorama and tone map it
In replacing tools, I worked through the areas where I felt my process
could improve the most. Beyond this challenge, I want to mix and match
proprietary tools with OpenSource tools to achieve the workflow that is
optimal for me.
The first thing I was unhappy with was tonemapping. These days,
HDR/tonemapping is once again a hot topic in the VR community, mainly
because of the introduction of the technique into two popular commercial
panorama-making products. I simply did not like the limited range of
variations available to me, nor the idea that tonemapping is only about
trying to reconstruct a "natural" image.
The OpenSource tool for this is qtpfsgui. I had already tried it a few
months ago, but memory limitations and crashes of the Windows version
had kept me from fully adopting it. This time I upgraded my old spare
workstation (Athlon XP 2500+, 1GB) from Ubuntu 6.06 LTS to Ubuntu 7.04,
and qtpfsgui was more stable to work with. Qtpfsgui gives me latitude
where the proprietary tools limit my choices. It seems tailored to my
philosophy that tonemapping is an added set of tools in the artist's
toolbox, and that they should be available without restriction, to push
the envelope.
Using qtpfsgui for the tonemapping still requires a little bit of extra
attention.
1. It seems to me that the program has some memory leaks. After a few
operations, it would quit without warning. The solution? Start the
program clean, make one tonemapping operation, save, close, repeat.
2. I upgraded my newer workstation (Athlon X2 4200+, 2GB) to Ubuntu 7.04
(soon to say goodbye, Windows) and tried to compile qtpfsgui on it. The
resulting tonemapping looks very different from the output on the 32-bit
box. A report is being prepared for Giuseppe.
3. The tool is not yet aware of the 360° seam. The temporary workaround
(Giuseppe is working on a real fix) is to extend the image beyond the
seam, do the tonemapping, and then cut it at the seam again.
4. Batch operation is not yet as smooth as Photomatix, but Giuseppe told
me that's the next feature coming up.
=> USABLE. Results are (subjectively) superior to those of the
proprietary tools tested, and usability is improving.
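The seam workaround in point 3 can be sketched in a few lines of numpy;
the helper name and the pad size are mine, and the tonemapping operator is
left abstract:

```python
import numpy as np

def tonemap_without_seam(pano, op, pad=64):
    """Point-3 workaround, sketched: extend the panorama past its
    360-degree edge by `pad` wrapped columns, apply the tonemapping
    operator `op`, then cut the result back at the seam."""
    h, w = pano.shape[:2]
    # columns wrapped from the opposite edge, so `op` sees continuous data
    extended = np.concatenate([pano[:, -pad:], pano, pano[:, :pad]], axis=1)
    mapped = op(extended)
    return mapped[:, pad:pad + w]  # crop back to the original width
```

With a local operator, the left and right edges now receive the same
neighbourhood context they would in the wrapped panorama, so the two ends
of the 360° image match again after cropping.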
While panoramas, with their wider dynamic range, seem a natural
application for HDR techniques, a number of obstacles have made those
techniques impractical in many situations.
Movement in the scene (ghosts) is one obstacle - not really applicable
in this case. Well, almost. The trained eye will notice that the lights
on the ceiling were swinging lightly in the draft; luckily for me, not
too much. Jing Jin is a Google Summer of Code student working on a
deghosting algorithm for hugin.
Another obstacle is the lack of editing tools. Two types of editing are
usually applied to stitched panoramas. One is mask editing, to determine
the exact placement of the seam between two overlapping images. The
other one is retouching.
Currently I do the retouching on the tonemapped version of the panorama,
but this is sub-optimal: if I want to apply a different type of
tonemapping, the retouching is lost. Unfortunately there was no solution
to this. Cyrille and Boudewijn have been very helpful in answering my
questions, but Krita is not ready yet.
For the masks, I tried to use Cinepaint, but failed at memory limits,
time limits, and a user interface that I just can't get used to. Sorry.
=> Photoshop is still unmatched.
The most important piece of OpenSource code to me in this experiment was
hugin, which replaced PTgui almost completely.
For the registration, I did not bother to re-create control points,
because Zoran Mesec's Google Summer of Code project, which promises a
superior and patent-free feature detection/matching algorithm, was not
ready yet. For the manual setting of control points, hugin has an
excellent CP fine-tuner. Unfortunately, it also increases the
click count per CP pair.
PTgui's project file was loaded into hugin, and hugin re-optimized it
(in PTgui I always use the PTOptimizer from the original Panotools,
which has given me better results than PTgui's own optimizer).
Then there is the extra step that gives hugin an advantage over all
commercial stitchers: photometric adjustment. It can automatically
correct for vignetting and it helps when producing "partial HDR",
relaxing even further the old rule of shooting all images to a stitched
pano with the same exposure.
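To illustrate what a vignetting correction of this kind does, here is a
generic radial-polynomial sketch in numpy. The model shape and the
coefficient names are my assumptions for illustration, not hugin's exact
photometric model:

```python
import numpy as np

def correct_vignetting(img, b, c, d):
    """Divide out a radial falloff 1 + b*r^2 + c*r^4 + d*r^6, with r the
    radius normalised to the half-diagonal (0 at the centre, 1 at the
    corners). A generic sketch, not hugin's exact model."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(x - cx, y - cy) / np.hypot(cx, cy)
    falloff = 1.0 + b * r**2 + c * r**4 + d * r**6
    if img.ndim == 3:
        falloff = falloff[..., None]  # broadcast over colour channels
    return img / falloff
```

Negative coefficients model the usual light falloff towards the corners,
so dividing by the polynomial brightens the corners back up; the centre
pixel, where r = 0, is left untouched.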
On the HDR output side, there is still room for improvement. The ideal
output would be a set of masked HDR layers, each representing a stack of
different exposures of the same shot. This was still a manual process
this time: deselect all images; select the images belonging to the same
stack; produce HDR output; repeat. And there are no masks yet, although
Pablo told me that the output of stacks is already in the hugin code
now, and with Ippei Ukai's Google Summer of Code project of a modular
GUI rewrite, I hope that these functionalities will soon become easy to
use.
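The select-stack / produce-output / repeat loop can be scripted once per-stack
merging is available. The merge below is my own simplified stand-in, a
hat-weighted radiance average over pixel values in [0, 1], not hugin's
algorithm:

```python
import numpy as np

def merge_stack(images, exposure_times):
    """Merge one stack of registered exposures of the same shot into a
    radiance map. Hat-shaped weights favour well-exposed mid-tone pixels;
    a simplified stand-in for a real HDR merge, not hugin's algorithm."""
    acc = np.zeros(images[0].shape, dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - 2.0 * np.abs(img - 0.5)  # 1 at mid-grey, 0 at the extremes
        w = np.clip(w, 1e-3, None)         # keep clipped pixels from zeroing the divisor
        acc += w * (img / t)               # per-exposure radiance estimate, weighted
        wsum += w
    return acc / wsum

def merge_all_stacks(stacks, exposure_times):
    """The manual repeat loop: one HDR output per stack."""
    return [merge_stack(stack, exposure_times) for stack in stacks]
```

Two exposures of the same scene, one twice as long as the other, should
agree on the radiance after dividing by their exposure times; the weighted
average then just confirms that value.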
HDR editing was also where Photoshop (CS3) hit its limits. The magic
wand and other tools do not work on an HDR layer, so I had to copy each
layer into a separate file, reduce it to 16 or 8 bits, make the masks,
and copy them over the HDR layer in the final document.
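That copy / reduce / mask / copy-back detour can be mimicked in a few
lines of numpy. The helper name is mine, and a simple threshold stands in
for the magic-wand selection:

```python
import numpy as np

def mask_via_preview(hdr, threshold=128):
    """Mimic the CS3 workaround: reduce the HDR layer to an 8-bit proxy,
    build the selection there (a threshold stands in for the magic wand),
    then bring the mask back onto the float HDR layer."""
    preview = (np.clip(hdr, 0.0, 1.0) * 255).round().astype(np.uint8)
    mask = preview >= threshold            # selection made on the 8-bit copy
    masked_hdr = np.where(mask, hdr, 0.0)  # mask applied back to the HDR layer
    return masked_hdr, mask
```

The point of the detour is visible in the return values: the mask is built
from the clipped 8-bit proxy, but the pixels it keeps retain their full
floating-point range, including values above 1.0.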
By the time I was done with all this, I had run out of time, so I did
not get to look at RAW conversion.
Overall, while there is still a lot of work to do, my impression is that
the workflow to create stitched panoramas with OpenSource code is very
close to being viable and competitive in a productivity-driven environment.
Until then, I will keep mixing and matching.
Kudos to all developers for the great progress in your code in the last
few months!
Yuv