3D Self-Portraits

[Teaser figure]

With our system, users can scan themselves with a single 3D sensor by rotating in place while holding the same pose, capturing a few views (typically eight, 45 degrees apart) to cover the full body. Our method robustly registers and merges the different scans into a watertight surface with consistent texture, in spite of shape changes during repositioning and lighting differences between the scans. These surfaces are suitable for applications such as online avatars or 3D printing (the miniature shown here was printed using a ZPrinter 650).
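
For intuition, here is a minimal sketch, assuming the capture protocol above (eight views, roughly 45 degrees apart): since the relative rotation between consecutive scans is approximately known, each scan can be coarsely pre-aligned by undoing a multiple of 45 degrees about the vertical axis. The toy data, axis convention, and function names below are illustrative assumptions, not the paper's code; such a result would only initialize the finer registration described next.

import numpy as np

def rotation_about_y(angle_deg):
    # 3x3 rotation about the vertical (y) axis.
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def pre_align(scans, step_deg=45.0):
    # Undo the k-th subject rotation so all scans share one frame.
    # `scans` is a list of (N_k, 3) point arrays, one per view; this is
    # only a coarse initialization -- pose drift between views is left
    # for the subsequent non-rigid registration to absorb.
    return [pts @ rotation_about_y(-k * step_deg).T
            for k, pts in enumerate(scans)]

# Toy data: eight noisy copies of a ring, each "captured" after the
# subject turned a further 45 degrees.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
ring = np.stack([np.cos(theta), np.zeros_like(theta), np.sin(theta)], axis=1)
scans = [ring @ rotation_about_y(45.0 * k).T
         + 0.002 * rng.standard_normal(ring.shape)
         for k in range(8)]
aligned = pre_align(scans)
print(max(np.abs(a - ring).max() for a in aligned))  # noise-level residual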

We develop an automatic pipeline that allows ordinary users to capture complete and fully textured 3D models of themselves in minutes, using only a single Kinect sensor, in the uncontrolled lighting environment of their own home. Our method requires neither a turntable nor a second operator, and is robust to the small deformations and changes of pose that inevitably arise during scanning. After users rotate in place while holding the same pose, capturing a few scans from different views, our system stitches the captured scans together using multi-view non-rigid registration and produces watertight final models. To ensure consistent texturing, we recover the underlying albedo from each scanned texture and generate seamless global textures using Poisson blending. Despite the minimal requirements we place on the hardware and users, our method is suitable for full-body capture of challenging scenes that cannot be handled well by previous methods, such as those involving loose clothing, complex poses, and props.
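
To make the texturing step concrete, here is a minimal 1-D sketch of the general gradient-domain (Poisson) stitching idea, assuming two views of the same signal captured under different exposures; it illustrates the technique named above, not the paper's actual implementation, and every name in it is hypothetical. Gradients are copied from whichever view owns each sample, then a discrete Poisson equation is solved so the exposure gap is diffused smoothly instead of appearing as a hard seam.

import numpy as np

def poisson_stitch_1d(left, right, seam):
    # Target gradients: take each one from the view that owns the sample.
    n = len(left)
    g = np.empty(n - 1)
    g[:seam] = np.diff(left)[:seam]
    g[seam:] = np.diff(right)[seam:]
    div = np.diff(g)  # divergence of the target gradient field
    # Tridiagonal 1-D Laplacian over the n-2 interior unknowns.
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    b = div.copy()
    b[0] -= left[0]    # Dirichlet boundaries: pin the two endpoints
    b[-1] -= right[-1]
    x = np.linalg.solve(A, b)
    return np.concatenate([[left[0]], x, [right[-1]]])

# Two "scans" of the same smooth signal with different exposure offsets.
t = np.linspace(0.0, 1.0, 64)
signal = np.sin(2.0 * np.pi * t)
left, right = signal + 0.3, signal - 0.2
stitched = poisson_stitch_1d(left, right, seam=32)
# ~0.11: the 0.5 exposure gap is spread across the whole signal,
# versus a 0.5 jump at the seam if the two halves were simply cut.
print(np.abs(np.diff(stitched)).max())

In a 2-D texture atlas the same construction applies per texel, with the seams between per-scan textures playing the role of the cut above.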

Resources: Paper » Video »

@article{Li:2013:TSP,
  author  = "Hao Li and Etienne Vouga and Anton Gudym and Linjie Luo and
             Jonathan T. Barron and Gleb Gusev",
  title   = "3D Self-Portraits",
  journal = "ACM Transactions on Graphics (Proc. SIGGRAPH Asia)",
  year    = "2013",
  month   = dec,
  volume  = "32",
  number  = "6"
}