Title
Reinforcement Learning Driven Heuristic Optimization.
Abstract
Heuristic algorithms such as simulated annealing, Concorde, and METIS are effective and widely used approaches for finding solutions to combinatorial optimization problems. However, they are limited by the high sample complexity required to reach a reasonable solution from a cold start. In this paper, we introduce RLHO, a novel framework that uses reinforcement learning (RL) to generate better initial solutions for heuristic algorithms. The heuristic greedily improves upon the initial solution produced by RL, and we demonstrate novel results in which RL leverages the heuristic's performance as a learning signal to generate better initializations. We apply this framework to Proximal Policy Optimization (PPO) and Simulated Annealing (SA). We conduct a series of experiments on the well-known NP-complete bin packing problem and show that RLHO outperforms our baselines. On bin packing, RL learns to help the heuristic perform even better, allowing us to combine the best parts of both approaches.
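The abstract describes RLHO at a high level: an RL policy proposes an initial solution and a heuristic (SA) refines it, with the heuristic's improvement feeding back as the policy's learning signal. The Python sketch below illustrates that loop on bin packing; it is an assumption-laden toy, not the authors' implementation. In particular, policy_initial_solution is a random first-fit stand-in for PPO, simulated_annealing is a minimal single-move SA, and BIN_CAPACITY, the move set, and the reward definition are all illustrative choices.

import math
import random

BIN_CAPACITY = 1.0  # assumed normalized bin capacity


def num_bins(assignment):
    """Number of distinct bins used by an item -> bin assignment."""
    return len(set(assignment))


def is_feasible(assignment, sizes):
    """Check that no bin's total load exceeds the capacity."""
    loads = {}
    for item, b in enumerate(assignment):
        loads[b] = loads.get(b, 0.0) + sizes[item]
        if loads[b] > BIN_CAPACITY + 1e-9:
            return False
    return True


def policy_initial_solution(sizes):
    """Stand-in for the RL policy (PPO in the paper): pack a random
    permutation of the items first-fit. A learned policy would replace this."""
    order = list(range(len(sizes)))
    random.shuffle(order)
    assignment = [0] * len(sizes)
    loads = []
    for item in order:
        for b, load in enumerate(loads):
            if load + sizes[item] <= BIN_CAPACITY:
                loads[b] += sizes[item]
                assignment[item] = b
                break
        else:
            loads.append(sizes[item])
            assignment[item] = len(loads) - 1
    return assignment


def simulated_annealing(assignment, sizes, steps=5000, t0=1.0, cooling=0.999):
    """Refine an initial solution: move a random item to a random bin and
    accept worsening moves with a temperature-dependent probability."""
    cur, best = list(assignment), list(assignment)
    t = t0
    for _ in range(steps):
        cand = list(cur)
        item = random.randrange(len(sizes))
        cand[item] = random.randrange(len(sizes))  # at most len(sizes) bins are ever needed
        if not is_feasible(cand, sizes):
            continue
        delta = num_bins(cand) - num_bins(cur)
        if delta <= 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
            cur = cand
            if num_bins(cur) < num_bins(best):
                best = list(cur)
        t *= cooling
    return best


if __name__ == "__main__":
    random.seed(0)
    sizes = [random.uniform(0.1, 0.7) for _ in range(30)]
    init = policy_initial_solution(sizes)
    refined = simulated_annealing(init, sizes)
    # In RLHO, the improvement SA achieves on top of the RL-produced
    # initialization is fed back to the policy as (part of) its reward.
    reward = num_bins(init) - num_bins(refined)
    print("bins before SA:", num_bins(init),
          "bins after SA:", num_bins(refined),
          "reward signal for the policy:", reward)

In the paper's setup this reward would be used to update the PPO policy so that it learns to emit initializations that SA can improve the most; here the stub policy is fixed and the script only demonstrates one generate-then-refine iteration.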
Year
2019
Venue
CoRR
DocType
Journal
Volume
abs/1906.06639
Citations
0
PageRank
0.34
References
0
Authors
6
Name                Order  Citations  PageRank
Qingpeng Cai        1      19         4.86
Will Hang           2      1          0.68
Azalia Mirhoseini   3      238        18.68
George Tucker       4      160        15.17
Jingtao Wang        5      407        41.87
Wei Wei             6      201        16.60