SceneNet: A Library of Synthetic Indoor Scenes for Semantic Understanding
SceneNet is a library of labelled synthetic 3D scenes for rendering depth maps and their corresponding annotations. The scenes span several semantic categories and have been compiled from various online 3D repositories (e.g. www.crazy3dfree.com) and manually annotated.
There are five scene categories in total: bedroom, office, kitchen, living-room, and bathroom, with at least 10 annotated scenes per category.
All the 3D models are in metric scale. Each scene contains roughly 50–150 objects, and scene complexity can be controlled algorithmically.
The granularity of the annotations can also be adapted by the user depending on the application. The models are provided in .obj format. Virtual cameras can be placed at desired locations in a scene using POV-Ray or OpenGL, defining a trajectory along which depth maps are rendered from different viewpoints.
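As a minimal sketch, a camera trajectory for rendering a scene from multiple viewpoints could be generated as a list of eye/look-at poses on a circle around the scene centre. The helper name and the pose format below are illustrative assumptions, not part of the SceneNet distribution; the resulting poses would be passed to whichever renderer (POV-Ray or OpenGL) is used.

```python
import math

def circular_trajectory(center, radius, height, n_views):
    """Generate n_views camera poses on a circle around `center`.
    Each pose is an eye position, a look-at target, and an up vector.
    (Hypothetical helper for illustration, not a SceneNet API.)"""
    poses = []
    for i in range(n_views):
        theta = 2.0 * math.pi * i / n_views  # angle of this viewpoint
        eye = (center[0] + radius * math.cos(theta),
               center[1] + radius * math.sin(theta),
               center[2] + height)  # raise the camera above scene centre
        poses.append({"eye": eye, "look_at": center, "up": (0.0, 0.0, 1.0)})
    return poses

# Eight viewpoints, 3 m from the scene centre, 1.5 m above it.
poses = circular_trajectory(center=(0.0, 0.0, 0.0), radius=3.0,
                            height=1.5, n_views=8)
for pose in poses:
    print(pose["eye"])
```

Denser trajectories (larger `n_views`, or interpolated poses between keyframes) yield smoother rendered sequences for SLAM-style evaluation.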
SceneNet can be particularly useful for:
- Generating potentially unlimited quantities of high-quality annotated depth data for different scene types.
- Benchmarking large-scale depth-only SLAM systems on complex scenes.
- Training generative models that learn common scene layouts and object relationships.