Abstract |
---|
This paper proposes a new method for generating the Pareto front in multi-objective Markov chains that overcomes several drawbacks of existing multi-objective methods. A fundamental issue is finding strong Pareto policies: policies whose cost-function value is closest, in Euclidean norm, to the utopian point. A strong Pareto policy is reached when no cost function, constrained by the strategies of the others, can further improve its own criterion. Constraints associated with the objective functions are implemented by formulating the problem as a bi-level optimization problem. We convert it into a single-level optimization problem by introducing a generalized Lagrangian function that represents the original multi-objective problem as a related nonlinear programming problem. We then apply the Tikhonov regularization method to the objective function; the regularization ensures that every Pareto policy generated along the Pareto front is a strong Pareto policy. To solve the problem we employ the extra-proximal method, which effectively approximates every optimal Pareto point (in this case, a strong Pareto point) on the Pareto front. Experimental results on a route-selection problem for counter-kidnapping validate the effectiveness and usefulness of the method. |
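As an illustration of the central notion only (not the paper's algorithm), a strong Pareto point over a finite set of cost vectors is the Pareto-optimal point closest in Euclidean norm to the utopian point, i.e. the component-wise minimum of all objectives. A minimal sketch, with hypothetical two-objective cost vectors:

```python
import math

# Hypothetical cost vectors, one per policy, two objectives to minimize.
costs = [(4.0, 1.0), (3.0, 2.0), (1.0, 4.0), (2.0, 2.5), (3.5, 3.5)]

def is_dominated(c, others):
    # c is dominated if some other point is <= in every objective
    # and differs from c (hence strictly better in at least one).
    return any(all(o[i] <= c[i] for i in range(len(c))) and o != c
               for o in others)

# Pareto front: the non-dominated cost vectors.
pareto = [c for c in costs if not is_dominated(c, costs)]

# Utopian point: component-wise minimum over all cost vectors
# (usually unattainable by any single policy).
utopia = tuple(min(c[i] for c in costs) for i in range(2))

# Strong Pareto point: the Pareto point closest to the utopian
# point in Euclidean norm.
strong = min(pareto, key=lambda c: math.dist(c, utopia))
print(strong)  # → (2.0, 2.5)
```

The paper's contribution is to reach such points over Markov-chain policy spaces via a regularized Lagrangian and the extra-proximal method; the exhaustive filtering above only illustrates the selection criterion on a toy discrete set.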
Year | DOI | Venue |
---|---|---|
2018 | 10.1007/s10479-018-2755-9 | Annals OR |
Keywords | Field | DocType
---|---|---|
Multi-objective, Strong Pareto optimal, Euclidean norm, Nash, Markov chains, Route selection | Tikhonov regularization, Mathematical optimization, Lagrangian, Nonlinear programming, Euclidean distance, Markov chain, Multi-objective optimization, Regularization (mathematics), Pareto principle, Mathematics | Journal
Volume | Issue | ISSN
---|---|---|
271 | 2 | 0254-5330
Citations | PageRank | References
---|---|---|
1 | 0.38 | 25
Authors |
---|
1 |
Name | Order | Citations | PageRank |
---|---|---|---|
Julio B. Clempner | 1 | 91 | 20.11 |