Abstract |
---|
We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting. |
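The abstract describes HoME as an OpenAI Gym-compatible platform, i.e. an environment driven through the standard `reset`/`step` loop. As a hedged illustration of what that interface contract looks like, here is a minimal sketch: `ToyHouseEnv` is a stand-in class invented for this example, not HoME's real API, and the multimodal observation keys (`vision`, `audio`) are assumptions based on the modalities the abstract lists.

```python
import random

class ToyHouseEnv:
    """Toy stand-in for a Gym-compatible environment (NOT HoME's real API)."""

    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.steps = 0
        # Multimodal observation keys are illustrative assumptions.
        return {"vision": [0.0], "audio": [0.0]}

    def step(self, action):
        """Advance one timestep; return (observation, reward, done, info)."""
        self.steps += 1
        obs = {"vision": [float(action)], "audio": [0.0]}
        reward = 1.0 if action == 1 else 0.0
        done = self.steps >= self.max_steps
        return obs, reward, done, {}

# Standard Gym-style interaction loop with a random policy.
env = ToyHouseEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.choice([0, 1])
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```

Any agent written against this `reset`/`step` contract can be pointed at a Gym-compatible environment without code changes, which is the extensibility the abstract claims.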
Year | Venue | Field
---|---|---
2017 | ICLR | Computer science, Human–computer interaction, Artificial intelligence, Semantics, Robotics, Machine learning, Reinforcement learning

DocType | Volume | Citations
---|---|---
Journal | abs/1711.11017 | 14

PageRank | References | Authors
---|---|---
0.69 | 11 | 9
Name | Order | Citations | PageRank
---|---|---|---
Simon Brodeur | 1 | 16 | 2.77 |
Ethan Perez | 2 | 99 | 6.03 |
Ankesh Anand | 3 | 15 | 2.39 |
Florian Golemo | 4 | 14 | 0.69 |
Luca Celotti | 5 | 14 | 0.69 |
Florian Strub | 6 | 175 | 11.20 |
Jean Rouat | 7 | 515 | 50.25 |
Hugo Larochelle | 8 | 7692 | 488.99 |
Aaron C. Courville | 9 | 6671 | 348.46 |