Title
Discretionary Lane Change Decision Making using Reinforcement Learning with Model-Based Exploration
Abstract
Deep reinforcement learning (DRL) techniques have been applied to the discretionary lane change decision-making problem with promising results. However, because the input for this problem is continuous and can be high-dimensional, optimizing the exploration-exploitation trade-off remains an open challenge for DRL. Conventional model-free exploration methods lack a systematic way to incorporate additional engineering or model-based knowledge of the application; as a result, training can be inefficient and may settle on an impractical policy, e.g., an impractical lane change strategy. Much previous related work used a rule-based safety check policy to guide exploration and collect input data. However, this approach is not guaranteed to yield the optimal policy, and its performance depends on the safety check policy selected. In this paper, we developed an explicit statistical aggregated environment model using a conditional variational auto-encoder, together with a model-based exploration strategy that leverages it: the agent is guided to explore by a surprise-based intrinsic reward derived from the environment model. The results are compared with annealing epsilon-greedy exploration and with rule-based safety check exploration. We demonstrate that the performance of the developed model-based exploration method is comparable to the best rule-based safety check exploration and substantially better than epsilon-greedy exploration.
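The record gives no implementation details beyond the abstract, so the following is only a minimal sketch of how a surprise-based intrinsic reward could be derived from a conditional variational auto-encoder environment model, under the assumption that the CVAE models p(next_state | state, action) and that surprise is the model's per-transition ELBO loss. All class names, network sizes, and the scale weight below are hypothetical illustrations, not taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CVAE(nn.Module):
        """Sketch of a CVAE environment model for p(next_state | state, action).

        The condition is the concatenated (state, action) vector; the decoder
        reconstructs the observed next state from a latent sample.
        """
        def __init__(self, state_dim, action_dim, latent_dim=8, hidden=64):
            super().__init__()
            cond_dim = state_dim + action_dim
            self.enc = nn.Sequential(
                nn.Linear(state_dim + cond_dim, hidden), nn.ReLU(),
            )
            self.mu = nn.Linear(hidden, latent_dim)
            self.logvar = nn.Linear(hidden, latent_dim)
            self.dec = nn.Sequential(
                nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, state_dim),
            )

        def forward(self, next_state, cond):
            h = self.enc(torch.cat([next_state, cond], dim=-1))
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: sample z ~ N(mu, sigma^2).
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            recon = self.dec(torch.cat([z, cond], dim=-1))
            return recon, mu, logvar

    def elbo_loss(recon, target, mu, logvar):
        # Reconstruction term plus KL divergence to the standard normal prior.
        rec = F.mse_loss(recon, target, reduction="none").sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return rec + kl

    def surprise_reward(model, state, action, next_state, scale=0.1):
        """Intrinsic reward: the worse the model predicts the observed
        transition, the larger the 'surprise', encouraging exploration."""
        with torch.no_grad():
            cond = torch.cat([state, action], dim=-1)
            recon, mu, logvar = model(next_state, cond)
            return scale * elbo_loss(recon, next_state, mu, logvar)

In use, the agent would add surprise_reward(...) to the environment reward for each observed transition and periodically refit the CVAE on replayed transitions by minimizing elbo_loss, so that frequently visited transitions become predictable and stop earning an exploration bonus.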
Year
2019
DOI
10.1109/ICMLA.2019.00147
Venue
ICMLA
Field
Information data, Computer science, Artificial intelligence, Surprise, Instrumental and intrinsic value, Machine learning, Reinforcement learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name                     Order  Citations  PageRank
Songan Zhang             1      2          1.71
Huei Peng                2      8051       50.82
Subramanya P. Nageshrao  3      45         4.95
H. Eric Tseng            4      0          0.34