Title
Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty
Abstract
Exploration based on state novelty has brought great success in challenging reinforcement learning problems with sparse rewards. However, existing novelty-based strategies become inefficient in real-world problems where an observation contains not only the task-dependent state novelty of interest but also task-irrelevant information that should be ignored. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck that distills task-relevant information from observations. Based on the information bottleneck principle, our exploration bonus is quantified as the compressiveness of an observation with respect to the learned representation of a compressive value network. With extensive experiments on static image classification, a grid-world, and three hard-exploration Atari games, we show that Curiosity-Bottleneck learns an effective exploration strategy by robustly measuring state novelty in distractive environments where state-of-the-art exploration methods often degenerate.
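The following is a minimal PyTorch sketch, not the authors' released code: it assumes the "compressiveness" bonus is instantiated as the per-observation KL divergence between a Gaussian encoder posterior and a unit-Gaussian prior, in the spirit of a variational information bottleneck applied to a value network. The class name CompressiveValueNet, the coefficient beta, and the toy dimensions are illustrative assumptions.

```python
# Hedged sketch (not the paper's implementation): an IB-style value network
# whose per-observation KL term serves as a task-specific novelty bonus.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompressiveValueNet(nn.Module):
    def __init__(self, obs_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.log_var = nn.Linear(128, latent_dim)
        self.value_head = nn.Linear(latent_dim, 1)

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterize
        value = self.value_head(z)
        # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions:
        # measures how much information about the observation the code keeps.
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0).sum(dim=-1)
        return value, kl


def loss_and_bonus(net, obs, value_target, beta: float = 1e-3):
    """IB-style loss: regress the value target while compressing the code.
    The detached KL term doubles as an exploration bonus per observation."""
    value, kl = net(obs)
    loss = F.mse_loss(value.squeeze(-1), value_target) + beta * kl.mean()
    return loss, kl.detach()


if __name__ == "__main__":
    net = CompressiveValueNet(obs_dim=8)
    obs, target = torch.randn(4, 8), torch.randn(4)
    loss, bonus = loss_and_bonus(net, obs, target)
    print(loss.item(), bonus.tolist())
```

Under this reading, observations that the value network cannot compress without losing value-relevant information receive a large KL term and hence a large bonus, while task-irrelevant variation is compressed away and contributes little.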
Year
2019
Venue
International Conference on Machine Learning
Field
Bottleneck, Curiosity, Computer science, Human–computer interaction, Artificial intelligence, Novelty, Machine learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
5
Name           Order   Citations   PageRank
Youngjin Kim   1       19          5.12
Daniel Nam     2       0           0.34
Hyun-Woo Kim   3       21          6.72
Ji-Hoon Kim    4       68          10.13
Gunhee Kim     5       632         47.17