DynOcc: Learning Single-View Depth from Dynamic Occlusion Cues

Recently, significant progress has been made in single-view depth estimation thanks to increasingly large and diverse depth datasets. However, these datasets are largely limited to specific application domains (e.g., indoor scenes, autonomous driving) or to static in-the-wild scenes due to hardware constraints or technical limitations of 3D reconstruction. In this paper, we introduce DynOcc, the first depth dataset consisting of dynamic in-the-wild scenes. Our approach leverages occlusion cues in these dynamic scenes to infer depth relationships between points in selected video frames. To achieve accurate occlusion detection and depth order estimation, we employ a novel occlusion boundary detection, filtering, and thinning scheme followed by a robust foreground/background classification method. In total, our DynOcc dataset contains 22M depth pairs from 91K frames drawn from a diverse set of videos. Using our dataset, we achieve state-of-the-art results as measured by weighted human disagreement rate (WHDR). We also show that depth maps inferred by a model trained on DynOcc preserve sharper depth boundaries.
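Since results are reported in weighted human disagreement rate (WHDR), the minimal sketch below illustrates how this metric is commonly computed over weighted ordinal depth pairs such as those in DynOcc, where a foreground point on one side of an occlusion boundary is labeled closer than the background point it occludes. The pair layout, the ratio threshold tau, and the function name whdr are illustrative assumptions, not the authors' released evaluation code.

import numpy as np

def whdr(pred_depth, pairs, tau=0.03):
    """Weighted Human Disagreement Rate over ordinal depth pairs (sketch).

    pred_depth : np.ndarray of shape (H, W), predicted depth (larger = farther).
    pairs      : iterable of (y1, x1, y2, x2, gt, w), where gt is the
                 ground-truth ordinal label (+1 if point 1 is farther,
                 -1 if point 1 is closer, 0 if roughly equal) and w is
                 the pair's confidence weight.
    tau        : relative-depth ratio threshold below which a pair is
                 treated as "equal" (value here is an assumption).
    """
    disagree, total = 0.0, 0.0
    for y1, x1, y2, x2, gt, w in pairs:
        z1, z2 = pred_depth[y1, x1], pred_depth[y2, x2]
        # Derive the predicted ordinal relation from the depth ratio.
        if z1 / z2 > 1.0 + tau:
            pred = +1      # point 1 predicted farther
        elif z2 / z1 > 1.0 + tau:
            pred = -1      # point 1 predicted closer
        else:
            pred = 0       # predicted roughly equal depth
        disagree += w * (pred != gt)
        total += w
    return disagree / max(total, 1e-8)

Lower WHDR is better: it is the weighted fraction of annotated pairs whose predicted depth order disagrees with the ground-truth ordering.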

Resources: Paper »

@inproceedings{Wang:2020:DynOcc,
author = "Yifan Wang and Linjie Luo and Xiaohui Shen and Xing Mei",
title = "Dynamic Kernel Distillation for Efficient Pose Estimation in Videos",
booktitle = "International Conference on 3D Vision (3DV)",
year = "2020",
}