Abstract: We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Because the background and the obstructing elements typically lie at different distances from the camera, they move differently across the sequence, which allows us to separate them based on their motions and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.
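The core idea (align the frames to one layer, then reject the other layer as a per-pixel outlier) can be sketched with a toy NumPy example. To be clear, this is my own illustration, not the authors' algorithm: the synthetic scene, the pre-aligned frames, and the simple median step are all assumptions standing in for the paper's actual motion estimation and layer decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 32, 48, 7  # frame size and sequence length

# Static background pattern (what we want to recover).
background = rng.random((H, W))

frames = []
for t in range(T):
    frame = background.copy()
    # Obstruction: a bright vertical bar that slides across the scene
    # at a different rate than the (already background-aligned) frames,
    # standing in for a fence post or reflection.
    x = (5 * t) % W
    frame[:, x:x + 4] = 1.0
    frames.append(frame)

stack = np.stack(frames)  # shape (T, H, W)

# After alignment the background is consistent across frames, while the
# occluder lands on different pixels each time, so a per-pixel median
# rejects it and recovers the background.
recovered = np.median(stack, axis=0)

print(float(np.abs(recovered - background).max()))  # → 0.0
```

In this toy setup each pixel is occluded in at most one of the seven frames, so the median is exactly the background value; the real paper has to do much harder work because camera motion must be estimated, and reflections are additive rather than opaque.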
There’s a neat video at https://www.youtube.com/watch?v=xoyNiatRIh4
(Not me, I found this just now on Metafilter: http://www.metafilter.com/151863/ENHANCE and thought it would be interesting to folks here. I couldn’t see a separate Image Processing tag …)