Abstract |
---|
In this paper, we analyze gradient-free methods with one-point feedback for stochastic saddle point problems $\min_{x}\max_{y} \varphi(x, y)$. For the non-smooth and smooth cases, we present an analysis in a general geometric setup with an arbitrary Bregman divergence. For problems with higher-order smoothness, the analysis is carried out only in the Euclidean case. The estimates we obtain match the best currently known estimates for gradient-free methods with one-point feedback for problems of minimizing a convex or strongly convex function. The paper uses three main approaches to recovering the gradient through finite differences: the standard one with a random direction, as well as its modifications with kernels and with residual feedback. We also provide experiments comparing these approaches on a matrix game. |
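The core building block the abstract refers to is the one-point finite-difference gradient estimator with a random direction: the oracle returns a single (possibly noisy) function value per query, and an unbiased estimate of the gradient of a smoothed version of the function is formed from it. A minimal sketch of that standard estimator (the function name, the smoothing parameter `tau`, and the use of a sphere-uniform direction are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def one_point_grad_estimate(f, x, tau=1e-2, rng=None):
    """One-point finite-difference gradient estimate (sketch).

    Uses a single evaluation of f per query:
        g = (d / tau) * f(x + tau * e) * e,
    where e is drawn uniformly from the unit sphere. In expectation,
    g equals the gradient of a smoothed version of f at x.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)          # uniform random direction on the unit sphere
    return (d / tau) * f(x + tau * e) * e
```

For a saddle point problem $\min_x \max_y \varphi(x, y)$, such an estimator would be applied to the joint variable $(x, y)$, with the $y$-block of the estimate negated for the ascent step. A single estimate is very noisy (its variance scales like $d^2/\tau^2$), which is why the convergence rates for one-point feedback are worse than for two-point feedback.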
Year | DOI | Venue |
---|---|---|
2021 | 10.1007/978-3-030-77876-7_10 | MOTOR |

DocType | Citations | PageRank |
---|---|---|
Conference | 0 | 0.34 |

References | Authors |
---|---|
0 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Aleksandr Beznosikov | 1 | 0 | 0.68 |
Vasilii Novitskii | 2 | 0 | 0.34 |
Alexander Gasnikov | 3 | 9 | 4.23 |