Title
Playing Doom with SLAM-Augmented Deep Reinforcement Learning.
Abstract
A number of recent approaches to policy learning in 2D game domains have been successful in going directly from raw input images to actions. However, when employed in complex 3D environments, they typically suffer from challenges related to partial observability, combinatorial exploration spaces, path planning, and a scarcity of rewarding scenarios. Inspired by prior work in human cognition that indicates how humans employ a variety of semantic concepts and abstractions (object categories, localisation, etc.) to reason about the world, we build an agent model that incorporates such abstractions into its policy-learning framework. We augment the raw image input to a Deep Q-Learning Network (DQN) by adding details of objects and structural elements encountered, along with the agent's localisation. The different components are automatically extracted and composed into a topological representation using on-the-fly object detection and 3D-scene reconstruction. We evaluate the efficacy of our approach in Doom, a 3D first-person combat game that exhibits a number of the challenges discussed, and show that our augmented framework consistently learns better, more effective policies.
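As a rough illustration of the input augmentation the abstract describes (a sketch, not the authors' implementation), the snippet below feeds a convolutional Q-network a raw game frame concatenated with extra channels encoding detected objects and the agent's estimated localisation; the framework (PyTorch), layer sizes, channel meanings, and action count are all illustrative assumptions.

# Minimal sketch: a DQN whose input stacks raw pixels with semantic/topological channels.
# All shapes and hyperparameters below are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class AugmentedDQN(nn.Module):
    def __init__(self, num_actions: int, frame_channels: int = 3, aux_channels: int = 2):
        super().__init__()
        in_channels = frame_channels + aux_channels  # RGB frame + object map + localisation map
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.q_head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, frame, object_map, localisation_map):
        # Concatenate raw pixels with the auxiliary channels along the channel dimension.
        x = torch.cat([frame, object_map, localisation_map], dim=1)
        return self.q_head(self.features(x))

# Example usage with illustrative 84x84 inputs and 8 discrete actions.
net = AugmentedDQN(num_actions=8)
q_values = net(torch.rand(1, 3, 84, 84), torch.rand(1, 1, 84, 84), torch.rand(1, 1, 84, 84))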
Year
2016
Venue
arXiv: Artificial Intelligence
Field
Motion planning, Object detection, Observability, Abstraction, Scarcity, Computer science, Policy learning, Artificial intelligence, Cognition, Machine learning, Reinforcement learning
DocType
Journal
Volume
abs/1612.00380
Citations
2
PageRank
0.38
References
0
Authors
6
Name, Order, Citations, PageRank
Shehroze Bhatti, 1, 2, 0.38
Alban Desmaison, 2, 17, 4.63
Ondrej Miksik, 3, 403, 14.28
Nantas Nardelli, 4, 2, 1.05
N. Siddharth, 5, 23, 5.16
Philip H. S. Torr, 6, 9140, 636.18