Work by Vincent Rabaud and Serge Belongie
The team (Kristi Tsukida, Natacha Sagalovsky, Javier Molina and Daniel Johnson) led by Maurizio Seracini built a robot that scans a painting under different wavelengths and at very high resolution.
The only problem was: how do you stitch everything back together? Well, find the answers on this page :)
Several high-resolution images are taken one by one, and each is stitched to the previous ones to form an overall panorama.
To this end, SIFT features are computed for every new image and matched against the previously taken images. The overall optimal homographies to apply to each image are then computed using block bundle adjustment.
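The page does not show code for this step, but the core pairwise computation can be sketched as follows: given matched feature coordinates between two images, estimate the 3x3 homography relating them with the direct linear transform (DLT). This is a minimal NumPy illustration, not the actual program; the real pipeline additionally uses SIFT matching, outlier rejection, and bundle adjustment over all images.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H mapping src points to dst points via the DLT.

    src, dst: (N, 2) arrays of matched point coordinates, N >= 4.
    Note: Hartley-style coordinate normalization is omitted for brevity;
    a production version should include it for numerical stability.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the
        # nine entries of H (up to scale).
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to (N, 2) points using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In the full pipeline, many such pairwise estimates feed the block bundle adjustment, which refines all homographies jointly.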
It takes roughly 10 to 20 seconds to add a new image to the panorama, and the program eats up 200 MB of RAM.
In this image, the boundaries of each added image are made explicit.
Another output of the program is a mask of where information exists. In the side image, white indicates a spot where information is present. This is useful for the robot to know where a picture still needs to be taken.
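One way such a coverage mask can be built (a sketch, assuming each image's homography maps source pixel coordinates to panorama coordinates; the actual program's method is not described on this page) is to inverse-warp every panorama pixel and test whether it lands inside the source frame:

```python
import numpy as np

def coverage_mask(H, src_shape, pano_shape):
    """Mark which panorama pixels are covered by a source image warped by H.

    H: 3x3 homography mapping source (x, y) to panorama coordinates.
    src_shape, pano_shape: (height, width) of the source image and panorama.
    Returns a boolean (h_pano, w_pano) array: True where information exists.
    """
    h_pano, w_pano = pano_shape
    h_src, w_src = src_shape
    Hinv = np.linalg.inv(H)
    # Homogeneous coordinates of every panorama pixel.
    ys, xs = np.mgrid[0:h_pano, 0:w_pano]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    # Map back into the source image and test the bounds.
    back = Hinv @ pts
    bx = back[0] / back[2]
    by = back[1] / back[2]
    inside = (bx >= 0) & (bx <= w_src - 1) & (by >= 0) & (by <= h_src - 1)
    return inside.reshape(h_pano, w_pano)
```

OR-ing these masks over all stitched images gives the white/black coverage map shown on the side, which tells the robot where shots are still missing.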
Gain compensation and Frequency Blending
Once the geometry is computed, the different gains of the images are adjusted and all the images are merged using frequency blending. This step takes longer to complete (a few minutes) but can be done off-site.
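The gain adjustment can be sketched in the style of the standard least-squares gain compensation from panorama stitching: solve for one multiplicative gain per image so that mean intensities agree in overlap regions, with a prior pulling each gain toward 1. This is an illustrative NumPy implementation, not the program described above; the `sigma_n` and `sigma_g` weights are assumed values.

```python
import numpy as np

def gain_compensation(means, counts, sigma_n=10.0, sigma_g=0.1):
    """Solve for per-image gains g that make overlapping regions agree.

    means[i][j]: mean intensity of image i inside its overlap with image j.
    counts[i][j]: number of overlapping pixels (0 if images i, j do not overlap).
    Minimizes, over all ordered overlapping pairs (i, j):
        counts[i][j] * ((g_i*means[i][j] - g_j*means[j][i])**2 / sigma_n**2
                        + (1 - g_i)**2 / sigma_g**2)
    by solving the normal equations A g = b.
    """
    n = len(means)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j or counts[i][j] == 0:
                continue
            Nij = counts[i][j]
            # Data term: derivative of (g_i*m_ij - g_j*m_ji)^2 w.r.t. g_i and g_j.
            A[i, i] += Nij * means[i][j] ** 2 / sigma_n**2
            A[i, j] -= Nij * means[i][j] * means[j][i] / sigma_n**2
            A[j, j] += Nij * means[j][i] ** 2 / sigma_n**2
            A[j, i] -= Nij * means[i][j] * means[j][i] / sigma_n**2
            # Prior term pulling g_i toward 1 (avoids the trivial g = 0 solution).
            A[i, i] += Nij / sigma_g**2
            b[i] += Nij / sigma_g**2
    return np.linalg.solve(A, b)
```

After the gains are applied, the frequency (multi-band) blending step merges the images across scales so that seams disappear without blurring fine detail.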
The obtained results are pretty good but could be improved by implementing three very small algorithms (compared to what has been coded so far): color calibration, lighting calibration, and lens distortion removal (basically the steps that take into account the other images I took: chess pattern, color grid, white board). These are fairly simple, but I did not have time to implement them; also, as the images were not taken with the same settings, it would have been impossible for me to test the accuracy of these programs.

The results are shown using the IIPImage interface.