Deep Generative Modeling for Scene Synthesis via Hybrid Representations

We present a deep generative scene modeling technique for indoor environments. Our goal is to train a generative model, using a feed-forward neural network, that maps a prior distribution (e.g., a normal distribution) to the distribution of primary objects in indoor scenes. We introduce a 3D object arrangement representation that models the locations and orientations of objects, based on their size and shape attributes. Moreover, our scene representation is applicable to 3D objects with different multiplicities (repetition counts), selected from a database. We show a principled way to train this model by combining discriminative losses for both a 3D object arrangement representation and a 2D image-based representation. We demonstrate the effectiveness of our scene representation and the network training method.
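As a rough illustration of the generator described above, the sketch below maps a latent vector drawn from a normal prior to a fixed number of object slots, each with a location, an orientation, and an existence probability. All names, dimensions, and the specific per-slot attributes here are hypothetical; this is a minimal NumPy stand-in for a trained feed-forward network, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_OBJECTS = 8    # hypothetical number of object slots per scene
LATENT_DIM = 32  # hypothetical latent dimensionality
HIDDEN = 64
ATTRS = 4        # per slot: x, y, orientation, existence logit (illustrative)

# Randomly initialized weights stand in for a trained generator.
W1 = rng.normal(0.0, 0.1, (LATENT_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_OBJECTS * ATTRS))
b2 = np.zeros(N_OBJECTS * ATTRS)

def generate_scene(z):
    """Map a latent vector z ~ N(0, I) to an (N_OBJECTS, ATTRS) arrangement."""
    h = np.maximum(0.0, z @ W1 + b1)            # ReLU hidden layer
    out = (h @ W2 + b2).reshape(N_OBJECTS, ATTRS)
    xy = out[:, :2]                             # 2D floor-plan locations
    theta = np.tanh(out[:, 2:3]) * np.pi        # orientation in (-pi, pi)
    exists = 1.0 / (1.0 + np.exp(-out[:, 3:]))  # existence probability in (0, 1)
    return np.concatenate([xy, theta, exists], axis=1)

z = rng.standard_normal(LATENT_DIM)
scene = generate_scene(z)
print(scene.shape)
```

Thresholding the existence probabilities yields a variable number of objects per sampled scene, which is one simple way to handle differing object multiplicities.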

Resources: Paper »

@article{Zhang:2020:DGM,
author = "Zaiwei Zhang and Zhenpei Yang and Chongyang Ma and Linjie Luo and Alexander Huth and Etienne Vouga and Qixing Huang",
title = "Deep Generative Modeling for Scene Synthesis via Hybrid Representations",
journal = "ACM Transactions on Graphics",
year = "2020",
month = "apr",
volume = "39",
number = "2"
}