Abstract |
---|
This paper investigates the policy optimization paradigm, in which a learning model is trained to solve complex Markov decision problems, as a tool to address the berth allocation problem in multimodal terminals. To this end, we drop the typical formulation of the latter as a static mixed-integer scheduling problem and instead model it as an evolving scenario in which berths are assigned to ships according to a parameterized policy function that drives the temporal evolution of the environment. We adopt a cross-entropy optimization scheme, a simple and highly parallelizable gradient-free technique, to optimize the policy parameters. Compared to the static mixed-integer formulation, the proposed approach relies on a much lighter optimization problem in the continuous space of the policy parameters, making it feasible to replan in real time when needed. Furthermore, the generality of the policy optimization approach makes it possible to take into account any performance metric and any specific feature of the scenario straightforwardly, without the need to devise ad hoc heuristics. Simulation tests showcase the good performance of the policy approach under various conditions. |
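The cross-entropy scheme mentioned in the abstract can be illustrated with a minimal sketch: sample candidate policy-parameter vectors from a Gaussian, score each one, keep the top-scoring elites, and refit the Gaussian to them. The score function below is a hypothetical toy quadratic, standing in for the berth-allocation simulator the paper actually evaluates; all names and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cross_entropy_optimize(score_fn, dim, iters=50, pop_size=64,
                           elite_frac=0.2, seed=0):
    """Generic cross-entropy method over a continuous parameter space:
    sample a population from a Gaussian, keep the best-scoring elites,
    and refit the Gaussian's mean and std to them."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(iters):
        # Candidates are independent, so scoring parallelizes trivially
        # (the method needs no gradients of score_fn).
        pop = rng.normal(mean, std, size=(pop_size, dim))
        scores = np.array([score_fn(theta) for theta in pop])
        elites = pop[np.argsort(scores)[-n_elite:]]  # highest scores
        mean = elites.mean(axis=0)
        std = elites.std(axis=0) + 1e-6  # small floor to keep exploring
    return mean

# Hypothetical stand-in objective: maximize -||theta - target||^2,
# whose optimum is exactly `target`.
target = np.array([1.0, -2.0, 0.5])
best = cross_entropy_optimize(lambda th: -np.sum((th - target) ** 2), dim=3)
```

In the paper's setting, `score_fn` would run the terminal simulation under the policy induced by `theta` and return the chosen performance metric, which is why arbitrary metrics can be plugged in without ad hoc heuristics.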
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/IJCNN52387.2021.9533891 | 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) |
Keywords | DocType | ISSN
---|---|---|
Berth allocation problem, dynamic optimization, machine learning, policy optimization | Conference | 2161-4393 |
Citations | PageRank | References
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Cristiano Cervellera | 1 | 226 | 23.63 |
M. Gaggero | 2 | 41 | 3.30 |
Danilo Macciò | 3 | 64 | 10.95 |