Title
Diminishing Regret for Online Nonconvex Optimization
Abstract
A single nonconvex optimization problem is NP-hard in the worst case, and so is a sequence of nonconvex problems viewed separately. For online nonconvex optimization (ONO) problems, widely used local search algorithms are guaranteed to track a sequence of local optima, but they offer no promises about global optimality. In this paper, we introduce the concept of nonconvexity regret to measure the performance of a local search method against a global optimization solver for ONO. We define the notion of the depth of a global minimum and show that memory and random exploration drive the nonconvexity regret to zero when the variability of the objective function is low compared to the depth of its global minima. We prove probabilistic guarantees on the regret bound that depend on the evolution of the landscapes of the time-varying objective functions. Then, based on the notions of missing mass and 1-occupancy set, we develop a practical algorithm that works even when no such landscape information is available. The theoretical results imply that the existence of a low-complexity optimization problem at any single time instance of ONO can nullify the NP-hardness of the entire ONO problem. The results are verified through numerical simulations.
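The "memory plus random exploration" idea from the abstract can be sketched in a few lines: at each round, continue a local search from the best remembered point and from a fresh random restart, and keep whichever candidate scores better under the current objective. The sketch below is purely illustrative (all names, the finite-difference local search, and the grid baseline used to estimate the per-round regret are assumptions, not the paper's algorithm).

```python
import random


def local_search(f, x0, lr=0.01, steps=200, eps=1e-6):
    # Simple 1-D local descent using a finite-difference gradient
    # (stand-in for any local search oracle; not the paper's method).
    x = x0
    for _ in range(steps):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= lr * g
    return x


def online_nonconvex(fs, domain=(-3.0, 3.0), seed=0):
    """Illustrative memory + random-exploration heuristic for ONO.

    Each round: run local search from the remembered point (memory)
    and from a uniform random restart (exploration), then play the
    better of the two. Returns the per-round gap to a crude grid
    estimate of the global minimum (a proxy for nonconvexity regret).
    """
    rng = random.Random(seed)
    lo, hi = domain
    x_mem = rng.uniform(lo, hi)
    regrets = []
    for f in fs:
        x_explore = local_search(f, rng.uniform(lo, hi))  # exploration
        x_mem = local_search(f, x_mem)                    # memory
        if f(x_explore) < f(x_mem):
            x_mem = x_explore
        # Grid baseline approximating the global optimum (illustration only).
        xs = [lo + (hi - lo) * i / 999 for i in range(1000)]
        f_star = min(f(x) for x in xs)
        regrets.append(f(x_mem) - f_star)
    return regrets
```

On a slowly drifting double-well family such as f_t(x) = (x^2 - 1)^2 + 0.1 t x, the exploration step gives the iterate a chance to escape the shallower basin, matching the intuition that regret vanishes when the objective's variability is small relative to the depth of the global minima.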
Year
2021
DOI
10.23919/ACC50511.2021.9482986
Venue
2021 AMERICAN CONTROL CONFERENCE (ACC)
DocType
Conference
ISSN
0743-1619
Citations
0
PageRank
0.34
References
0
Authors
4
Name	Order	Citations	PageRank
Sangwoo Park	1	0	0.68
Julie Mulvaney-Kemp	2	0	0.34
Ming Jin	3	0	1.69
Javad Lavaei	4	5877	1.90