Title: Meta-Deep Q-Learning for Eco-Routing
Abstract
In this paper, a multi-objective meta-deep Q-learning (MOM-DQL) method is developed to solve the eco-routing problem in a signalized traffic network. The problem is formulated as a dynamic multi-objective Markov decision process (MOMDP). MOM-DQL can explore optimal eco-routes with respect to drivers' differing preferences for saving travel time and fuel. The MOM-DQL agent is trained on a series of learning environments built from historical vehicle trajectories, fuel consumption records, and traffic signal status stored in a remote data center. The trained model, which represents the action-value function under historical dynamic driving conditions, can be downloaded to a vehicle requesting eco-routing service. Through online one-shot learning, the model quickly adapts to the most recent driving condition and predicts optimal eco-routes for subsequent, unseen driving conditions of the signalized traffic network. Simulations show that the MOM-DQL method discovers optimal eco-routes, saving 52% travel time and 33% fuel compared to the shortest-path strategy widely used in navigation systems.
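The abstract describes learning eco-routes under a driver preference that trades off travel time against fuel. A common way to realize this is to scalarize the two objectives with a preference weight and learn route values by Q-learning. The sketch below is a minimal, hypothetical illustration of that idea on a toy road network — the graph, edge costs, and hyperparameters are invented for illustration, and it uses tabular Q-learning rather than the paper's deep network or meta-learning components:

```python
import random

# Hypothetical toy road network: each edge carries a (travel_time, fuel) cost.
# Node names and costs are illustrative, not from the paper.
GRAPH = {
    "A": {"B": (5.0, 2.0), "C": (2.0, 4.0)},
    "B": {"D": (1.0, 1.0)},
    "C": {"D": (1.0, 1.0)},
    "D": {},
}
START, GOAL = "A", "D"

def scalarize(time_cost, fuel_cost, w):
    """Weighted sum of the two objectives; w is the driver's
    preference for saving time (1 - w weights fuel)."""
    return w * time_cost + (1.0 - w) * fuel_cost

def train(w, episodes=500, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning that minimizes the scalarized route cost."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in GRAPH for a in GRAPH[s]}
    for _ in range(episodes):
        s = START
        while s != GOAL:
            acts = list(GRAPH[s])
            # Epsilon-greedy: explore occasionally, otherwise pick cheapest.
            a = rng.choice(acts) if rng.random() < eps else min(acts, key=lambda x: Q[(s, x)])
            t, f = GRAPH[s][a]
            cost = scalarize(t, f, w)
            nxt = a  # actions are "drive to neighbor", so next state = neighbor
            future = 0.0 if nxt == GOAL else min(Q[(nxt, b)] for b in GRAPH[nxt])
            Q[(s, a)] += alpha * (cost + gamma * future - Q[(s, a)])
            s = nxt
    return Q

def best_route(Q):
    """Greedy rollout of the learned Q-values from START to GOAL."""
    s, route = START, [START]
    while s != GOAL:
        s = min(GRAPH[s], key=lambda a: Q[(s, a)])
        route.append(s)
    return route
```

With `w=1.0` (pure time preference) the greedy route takes the faster branch; with `w=0.0` (pure fuel preference) it takes the more economical one — one learned policy per preference, which the paper's multi-objective formulation generalizes across preferences.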
Year: 2019
DOI: 10.1109/CAVS.2019.8887764
Venue: 2019 IEEE 2nd Connected and Automated Vehicles Symposium (CAVS)
Keywords: meta-deep Q-learning, multiobjective deep Q-learning method, signalized traffic network, multiobjective Markov decision processes, MOM-DQL, travel time, MOM-DQL agent, historical vehicle trajectories, traffic signal status, historical dynamic driving conditions, eco-routing service, one-shot learning, subsequent unseen driving condition, MOM-DQL method, action value function, eco-routing problem
Field: Traffic signal, Computer science, Q-learning, Markov decision process, Bellman equation, Real-time computing, Traffic network, Fuel efficiency, Travel time, Data center
DocType: Conference
ISBN: 978-1-7281-3617-2
Citations: 0
PageRank: 0.34
References: 2
Authors: 3
Name              Order  Citations  PageRank
Xin Ma            1      0          0.68
Yuanchang Xie     2      80         9.93
Chunxiao Chigan   3      195        20.62