Title
HoME: a Household Multimodal Environment.
Abstract
We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
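The abstract notes that HoME is OpenAI Gym-compatible, meaning agents interact with it through the standard `reset()`/`step()` loop. The sketch below illustrates that interface contract with a trivial stand-in environment (a 1-D "walk to the goal" task); the class and task here are illustrative assumptions, not HoME's actual API.

```python
# Illustrative sketch of the Gym-style interface a platform like HoME exposes.
# ToyHouseholdEnv is a hypothetical stand-in, NOT HoME's real environment class.

class ToyHouseholdEnv:
    """Gym-style env: reset() -> obs, step(action) -> (obs, reward, done, info)."""

    def __init__(self, goal=5):
        self.goal = goal       # position the agent must reach
        self.position = 0

    def reset(self):
        # Return the initial observation, as Gym's reset() does.
        self.position = 0
        return self.position

    def step(self, action):
        # action: +1 moves toward the goal, -1 moves away.
        self.position += action
        done = self.position >= self.goal
        reward = 1.0 if done else 0.0
        return self.position, reward, done, {}

# The standard agent-environment interaction loop used by Gym-compatible code.
env = ToyHouseholdEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(+1)  # fixed "move forward" policy
    total_reward += reward
```

Because HoME follows this same contract, existing reinforcement-learning agents written against the Gym interface can in principle be pointed at its household tasks with little adaptation.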
Year: 2017
Venue: ICLR
Field: Computer science, Human–computer interaction, Artificial intelligence, Semantics, Robotics, Machine learning, Reinforcement learning
DocType:
Volume: abs/1711.11017
Citations: 14
Journal:
PageRank: 0.69
References: 11
Authors: 9
Name                Order  Citations  PageRank
Simon Brodeur       1      16         2.77
Ethan Perez         2      99         6.03
Ankesh Anand        3      15         2.39
Florian Golemo      4      14         0.69
Luca Celotti        5      14         0.69
Florian Strub       6      175        11.20
Jean Rouat          7      515        50.25
Hugo Larochelle     8      7692       488.99
Aaron C. Courville  9      6671       348.46