Mesostructure from Specularity

Tongbo Chen¹     Michael Goesele²     Hans-Peter Seidel¹
¹MPI Informatik     ²University of Washington

Abstract

We describe a simple and robust method for surface mesostructure acquisition. Our method builds on the observation that specular reflection is a reliable visual cue for surface mesostructure perception. In contrast to most photometric stereo methods, which treat specularities as outliers and discard them, we propose a progressive acquisition system that captures a dense specularity field as the sole information source for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects with a wide variety of reflection properties, including translucent, low-albedo, and highly specular objects. We show results for a variety of objects, including skin, apricot, orange, jelly candy, black leather, and dark chocolate.

Paper

Tongbo Chen, Michael Goesele, and Hans-Peter Seidel. Mesostructure from Specularity. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), New York, NY, USA, June 17-22, 2006, pp. 1825-1832.

Background and motivation

Complex real-world objects that are translucent, have low albedo, or are highly specular pose a challenging problem for 3D reconstruction, and especially for the acquisition of mesostructure, i.e., fine-scale surface geometric detail.

Human vision can perceive the fine-scale geometric details of a surface, its mesostructure, from a single photograph of an object, i.e., from one viewpoint under constant illumination. If we move our head or change the lighting, the 3D information we obtain about the object becomes incrementally more complete. The most significant visual cue the human visual system uses to perceive such objects is specularity. Our motivation for this project was to exploit this basic observation; several practical issues had to be addressed to turn it into a working system.

System Overview

The basic setup consists of one digital camera and one point light source. We used a 12-bit, 1300×1030-pixel Jenoptik ProgRes C14 digital camera for image acquisition and a 5 W Luxeon Star white LED as the point light source. A checkerboard is used for camera calibration, and four specular spheres positioned at its corners are used for light source estimation. The sample object is placed on a support base at the center of the checkerboard. The camera faces downward with its optical axis perpendicular to the checkerboard plane, at a distance of 1.5 meters. Compared to this distance, the magnitude of the mesostructure is negligible, and we likewise assume that the base geometry of the sample object is small. During acquisition, we also keep the light source about 1.5 meters from the object; at such a distance, the LED is well approximated by a point light source. To keep the illumination consistent, we always point the light at the sample object.

We capture one image for each position of the point light source. Using histogram thresholding, we extract the specular reflection component of the sample object in real time and update the specularity field, which records how much specularity data has been captured so far. A pixel is marked red if a specular peak has been detected there in at least one image; otherwise it is marked black. As acquisition proceeds, a growing portion of the specularity field turns red, and the user can use this feedback to move the light source so as to improve coverage. This incremental refinement allows flexible control over the quality of the resulting mesostructure: a very dense specularity field yields an accurate, highly detailed reconstruction, whereas a sparse field yields a reconstruction dominated by low-frequency features.
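To make the per-image update concrete: since the camera and the point light source positions are both known (from the checkerboard and the specular spheres), a detected specular peak at a pixel constrains the surface normal there to be the half-vector between the viewing and lighting directions. The sketch below illustrates this idea in Python/NumPy; the fixed threshold, the array layout, and the constant view direction (justified by the 1.5 m camera distance) are our own simplifying assumptions, not the authors' implementation.

    import numpy as np

    def update_specularity_field(image, light_pos, pixel_xyz, normals, field,
                                 view_dir=np.array([0.0, 0.0, 1.0]),
                                 threshold=0.9):
        """Record normals at specular peaks via the mirror-reflection constraint.

        image     : (H, W) grayscale frame, values in [0, 1]
        light_pos : (3,) point-light position estimated from the four spheres
        pixel_xyz : (H, W, 3) world coordinates of each pixel on the base plane
        normals   : (H, W, 3) accumulated normal field, updated in place
        field     : (H, W) boolean coverage mask, updated in place
        """
        # Histogram thresholding, simplified here to a fixed cutoff.
        specular = image > threshold
        for y, x in zip(*np.nonzero(specular)):
            to_light = light_pos - pixel_xyz[y, x]
            to_light /= np.linalg.norm(to_light)
            # At 1.5 m the viewing rays are nearly parallel, so a single
            # constant view direction is a reasonable approximation.
            h = to_light + view_dir
            normals[y, x] = h / np.linalg.norm(h)  # half-vector = surface normal
            field[y, x] = True                     # shown red in the live feedback

A standard normal-integration step (e.g., a Frankot-Chellappa or Poisson solver) would then convert the dense normal field into the height fields shown in the results below.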

Experimental results

Low-albedo glossy objects: black leather, chocolate

Translucent glossy objects: orange, apricot, skin, jelly candy

Black leather

(left to right: photograph, normal map, recovered 3D)

Download: input images in a video (DivX, 392K), reconstructed mesostructure (Wavefront OBJ, OFF, PLY)

Chocolate

(left to right: photograph, normal map, recovered 3D)

Download: input images in a video (DivX, 2.9M), reconstructed mesostructure (Wavefront OBJ, OFF, PLY)

Orange


(a-d) Four cropped input images of orange skin.

(e) Recovered normal field (RGB-encoded).

(f) Filtered normal field.

(g) Simple rendering using Ward's isotropic BRDF model (Ward '92); a sketch of this model follows the downloads below.

(h) Reconstructed 3D surface rendered at a novel viewpoint.

Download: input images in a video (DivX, 312K), reconstructed mesostructure (Wavefront OBJ, OFF, PLY), rendering with Ward's isotropic BRDF model (DivX, 312K)
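Panel (g) above is rendered with Ward's isotropic BRDF (Ward '92). For reference, a minimal evaluation of that model is sketched below; the parameter values are illustrative placeholders, not the ones used for the renderings above.

    import numpy as np

    def ward_isotropic(n, wi, wo, rho_d=0.2, rho_s=0.1, alpha=0.15):
        """Ward's isotropic BRDF (Ward '92).

        n, wi, wo : unit surface normal, light direction, and view direction
        rho_d     : diffuse albedo (placeholder value)
        rho_s     : specular albedo (placeholder value)
        alpha     : RMS surface slope, i.e. roughness (placeholder value)
        """
        cos_i, cos_o = np.dot(n, wi), np.dot(n, wo)
        if cos_i <= 0.0 or cos_o <= 0.0:
            return 0.0
        h = wi + wo
        h /= np.linalg.norm(h)
        cos_h = np.dot(n, h)
        tan2_delta = (1.0 - cos_h ** 2) / cos_h ** 2  # tan^2 of the half angle
        spec = (rho_s / (4.0 * np.pi * alpha ** 2 * np.sqrt(cos_i * cos_o))
                * np.exp(-tan2_delta / alpha ** 2))
        return rho_d / np.pi + spec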

Apricot

(left to right: two photographs, three views of the recovered 3D surface)

Download: input images in a video (DivX, 1.2M), reconstructed mesostructure (Wavefront OBJ, OFF, PLY)

Jelly candy

(left to right: two photographs, three views of the recovered 3D surface)

Download: input images in a video (DivX, 1.3M), reconstructed mesostructure (Wavefront OBJ, OFF, PLY)

Jelly candy (piece)


(a) A photograph of a piece of jelly candy (ca. 18 mm × 28 mm), which is highly translucent.

(b) Laser scanner result.

(c) Laser scanner result after covering the jelly candy with a fine Lambertian powder.

(d) Reconstructed mesostructure by using our method.

Download: input images in a video (DivX, 552K), reconstructed mesostructure (Wavefront OBJ, OFF, PLY)

Skin

(left to right: two photographs, three views of the recovered 3D surface)

Download: input images in a video (DivX, 5.7M), reconstructed mesostructure (Wavefront OBJ, OFF, PLY)

Acknowledgement

We would like to thank the anonymous reviewers for their insightful comments. Thanks to Christian Fuchs for his help in preparing the rebuttal. We are grateful to Yan Wang for her assistance in the project.