Title
Policy Design for Active Sequential Hypothesis Testing using Deep Learning
Abstract
Information theory has been very successful in obtaining performance limits for various problems such as communication, compression and hypothesis testing. Likewise, stochastic control theory provides a characterization of optimal policies for Partially Observable Markov Decision Processes (POMDPs) using dynamic programming. However, finding optimal policies for these problems is computationally hard in general, and thus heuristic solutions are employed in practice. Deep learning can be used as a tool for designing better heuristics in such problems. In this paper, the problem of active sequential hypothesis testing is considered. The goal is to design a policy that can reliably infer the true hypothesis using as few samples as possible by adaptively selecting appropriate queries. This problem can be modeled as a POMDP, and bounds on its value function exist in the literature. However, optimal policies have not been identified, and various heuristics are used. In this paper, two new heuristics are proposed: one based on deep reinforcement learning and another based on a KL-divergence zero-sum game. These heuristics are compared with state-of-the-art solutions, and numerical experiments demonstrate that the proposed heuristics can achieve significantly better performance than existing methods in some scenarios.
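For intuition, the sequential decision problem described in the abstract can be sketched in a few lines: a Bayesian belief over the hypotheses is updated after each observation, and the next query is chosen to maximize expected information gain. The sketch below is illustrative only and is not the deep-RL or game-theoretic policy proposed in the paper; the greedy selection rule, the discrete observation model, and all names and parameters (belief_update, expected_info_gain, run_test, threshold) are assumptions made for exposition.

import numpy as np

def belief_update(belief, likelihoods, query, obs):
    """Bayes rule: posterior is proportional to prior * P(obs | hypothesis, query)."""
    posterior = belief * likelihoods[query, :, obs]
    return posterior / posterior.sum()

def expected_info_gain(belief, likelihoods, query):
    """Expected KL divergence between posterior and prior belief for a query."""
    gain = 0.0
    p_obs = belief @ likelihoods[query]  # marginal distribution over observations
    for obs, p in enumerate(p_obs):
        if p > 0:
            post = belief_update(belief, likelihoods, query, obs)
            gain += p * np.sum(post * np.log(np.maximum(post, 1e-12)
                                             / np.maximum(belief, 1e-12)))
    return gain

def run_test(likelihoods, prior, rng, true_hyp, threshold=0.99, max_steps=200):
    """Greedy active testing loop. likelihoods[q, h, o] = P(obs o | hyp h, query q)."""
    belief = prior.copy()
    for step in range(max_steps):
        # Stop once the belief in some hypothesis is sufficiently confident.
        if belief.max() >= threshold:
            break
        # Greedily pick the query with the largest expected information gain.
        q = max(range(likelihoods.shape[0]),
                key=lambda q: expected_info_gain(belief, likelihoods, q))
        obs = rng.choice(likelihoods.shape[2], p=likelihoods[q, true_hyp])
        belief = belief_update(belief, likelihoods, q, obs)
    return belief.argmax(), step

rng = np.random.default_rng(0)
L = rng.dirichlet(np.ones(3), size=(4, 5))  # 4 queries, 5 hypotheses, 3 observations
decision, steps = run_test(L, np.full(5, 0.2), rng, true_hyp=2)
print(f"declared hypothesis {decision} after {steps} samples")

The paper's contribution is precisely to replace this kind of myopic rule with learned policies; the sketch only fixes the POMDP structure (belief state, query, observation) that those policies operate on.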
Year
2018
Venue
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
Keywords
Testing, Neural networks, Reinforcement learning, Bayes methods, Deep learning, Information theory, Control theory
DocType
Conference
Volume
abs/1810.04859
ISSN
2474-0195
Citations
1
PageRank
0.43
References
9
Authors
4
Name                 Order  Citations  PageRank
Dhruva Kartik        1      5          2.90
Ekraam Sabir         2      15         2.42
Urbashi Mitra        3      1336       229.37
Premkumar Natarajan  4      874        79.46