Title
A Lagrangian propagator for artificial neural networks in constraint programming
Abstract
This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. It has been shown that Neural Networks can be embedded in a Constraint Programming model by simply encoding each neuron as a global constraint, which is then propagated individually. Unfortunately, this decomposition approach may lead to weak bounds. To overcome this limitation, we propose a new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm. The method proved capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs. The overhead of optimizing the Lagrangian multipliers is kept within a reasonable level via a few simple techniques. This paper is an extended version of [], featuring an improved structure, a new filtering technique for the network inputs, a set of overhead reduction techniques, and thorough experimentation.
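For context, the sketch below illustrates a generic projected-subgradient scheme for optimizing Lagrangian multipliers, the kind of procedure the abstract refers to. It is a minimal Python illustration under simplifying assumptions: the callbacks solve_relaxation and constraint_violations are hypothetical placeholders, and this is not the paper's actual network-level propagator.

def subgradient_ascent(solve_relaxation, constraint_violations,
                       n_multipliers, n_iters=50, step0=1.0):
    # Maximize the Lagrangian dual L(lam) = min_x [ f(x) + lam . g(x) ],
    # where the relaxed constraints are g(x) <= 0 and lam >= 0.
    lam = [0.0] * n_multipliers
    best_bound = float("-inf")
    for k in range(1, n_iters + 1):
        x, bound = solve_relaxation(lam)     # optimum of the relaxed problem
        best_bound = max(best_bound, bound)  # every dual value is a valid bound
        g = constraint_violations(x)         # a subgradient of L at lam
        step = step0 / k                     # diminishing step-size rule
        # Projected update: multipliers of inequality constraints stay >= 0.
        lam = [max(0.0, l + step * gi) for l, gi in zip(lam, g)]
    return lam, best_bound

The diminishing step size step0 / k is one standard convergent choice; the paper's overhead-reduction techniques concern how cheaply such an iteration can be run inside propagation, which this sketch does not attempt to model.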
Year
2016
DOI
10.1007/s10601-015-9234-6
Venue
Constraints
Keywords
Constraint programming, Lagrangian relaxation, Neural networks
DocType
Journal
Volume
21
Issue
4
ISSN
1383-7133
Citations
3
PageRank
0.35
References
20
Authors
2
Name                Order  Citations  PageRank
Michele Lombardi    1      270        28.86
Stefano Gualandi    2      156        15.95