Multi-View Hair Capture Using Orientation Fields

[Teaser figure: orientation-teaser]

We begin with many high-resolution photographs captured under unconstrained lighting, compute an orientation field for each, and perform multi-view stereo matching with a metric based on orientation similarity. The resulting depth maps capture high-resolution detail of individual hair strands, and we integrate them into a single merged model. In contrast, conventional multi-view stereo and merging techniques [4, 7] fail to capture these fine hair structures.
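To make the orientation-similarity metric concrete, here is a minimal sketch. It assumes orientations are angles in [0, π) (a strand at angle θ is indistinguishable from one at θ + π); the function names and the idea of swapping orientation distance in for color difference in the stereo cost are illustrative, not taken from the paper.

```python
import numpy as np

def orientation_distance(theta_a, theta_b):
    """Distance between orientations defined modulo pi: a strand at angle
    theta looks identical to one at theta + pi, so the distance wraps."""
    d = np.abs(theta_a - theta_b) % np.pi
    return np.minimum(d, np.pi - d)

def matching_cost(orient_ref, orient_src):
    """Per-pixel stereo matching cost between two orientation fields,
    using orientation distance in place of the usual color difference.
    (Hypothetical helper for illustration.)"""
    return orientation_distance(orient_ref, orient_src)
```

Because the distance wraps at π, nearly horizontal strands seen as 0.1 rad in one view and π − 0.1 rad in another are correctly treated as similar (distance 0.2), where a naive angular difference would report them as nearly opposite.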


Reconstructing realistic 3D hair geometry is challenging due to omnipresent occlusions, complex discontinuities, and specular appearance. To address these challenges, we propose a multi-view hair reconstruction algorithm based on orientation fields with structure-aware aggregation. Our key insight is that while hair's color appearance is view-dependent, its response to oriented filters, which captures the local hair orientation, is far more stable. We apply structure-aware aggregation to the MRF matching energy to enforce the structural continuities implied by the local hair orientations. Multiple depth maps from the MRF optimization are then fused into a globally consistent hair geometry with a template refinement procedure. Compared to state-of-the-art color-based methods, our method faithfully reconstructs detailed hair structures. We demonstrate results for a number of hair styles, ranging from straight to curly, and show that our framework is suitable for capturing hair in motion.
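The oriented-filter response mentioned above can be sketched as follows. This is a minimal illustration using a Gabor quadrature pair, one common choice of oriented filter; the paper does not prescribe this exact kernel, and the filter parameters (size, sigma, frequency, number of angles) are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, phase=0.0, size=9, sigma=2.0, freq=0.25):
    """Oriented Gabor kernel at angle theta. The carrier oscillates
    perpendicular to theta, so the filter responds to strands running
    along theta. (Illustrative parameters, not from the paper.)"""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # axis along the strand
    yr = -x * np.sin(theta) + y * np.cos(theta)   # axis across the strand
    gauss = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    return gauss * np.cos(2.0 * np.pi * freq * yr - phase)

def orientation_field(image, n_angles=16):
    """Per-pixel orientation estimate: filter with a bank of oriented
    quadrature pairs and keep the angle of strongest response magnitude."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    mags = []
    for t in angles:
        even = convolve(image, gabor_kernel(t, phase=0.0))
        odd = convolve(image, gabor_kernel(t, phase=np.pi / 2))
        mags.append(np.hypot(even, odd))  # phase-invariant energy
    return angles[np.argmax(np.stack(mags), axis=0)]
```

Taking the magnitude of the even/odd quadrature pair makes the response insensitive to where a strand's bright ridge falls relative to the pixel grid, which is one reason such responses are more stable across views than raw color.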

Resources:    Paper »    Video »

@inproceedings{Luo:2012:MHC,
author = "Linjie Luo and Hao Li and Sylvain Paris and Thibaut Weise and Mark Pauly and
Szymon Rusinkiewicz",
title = "Multi-View Hair Capture Using Orientation Fields",
booktitle = "Computer Vision and Pattern Recognition (CVPR)",
year = "2012",
month = jun
}