Virtual Visual Hulls: Example-Based 3D Shape Estimation from a Single Silhouette
Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard, and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a ``virtual visual hull'' -- an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer the virtual visual hull of a novel single-view input silhouette, we search the database for 3D shapes that are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
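As a rough illustration of the temporal disambiguation step, the sketch below implements a generic Viterbi-style dynamic program over per-frame hypothesis sets, where minimizing summed negative log-likelihood costs corresponds to finding a maximum likelihood path. The functions `unary_cost` (agreement of a hypothesis with the observed frame) and `pairwise_cost` (consistency between hypotheses in consecutive frames) are hypothetical placeholders, not the paper's actual likelihood terms.

```python
# Illustrative sketch (not the paper's code): dynamic programming over
# per-frame visual hull hypotheses to recover the best hypothesis sequence.
def best_hypothesis_path(hypotheses, unary_cost, pairwise_cost):
    """hypotheses: list over frames; hypotheses[t] is the list of feasible
    visual hull hypotheses for frame t. Returns the minimum-cost
    (maximum-likelihood) sequence of hypothesis indices, one per frame."""
    T = len(hypotheses)
    # cost[t][j]: best cumulative cost of any path ending in hypothesis j at frame t
    cost = [[unary_cost(0, h) for h in hypotheses[0]]]
    back = []  # back[t-1][j]: predecessor index chosen for hypothesis j at frame t
    for t in range(1, T):
        cost_t, back_t = [], []
        for h in hypotheses[t]:
            # best way to reach hypothesis h from any hypothesis at frame t-1
            i_best = min(range(len(hypotheses[t - 1])),
                         key=lambda i: cost[t - 1][i]
                         + pairwise_cost(hypotheses[t - 1][i], h))
            cost_t.append(cost[t - 1][i_best]
                          + pairwise_cost(hypotheses[t - 1][i_best], h)
                          + unary_cost(t, h))
            back_t.append(i_best)
        cost.append(cost_t)
        back.append(back_t)
    # trace back the optimal path from the best final hypothesis
    j = min(range(len(hypotheses[-1])), key=lambda j: cost[-1][j])
    path = [j]
    for t in range(T - 1, 0, -1):
        j = back[t - 1][j]
        path.append(j)
    return list(reversed(path))
```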