| Title | | | Year |
|---|---|---|---|
| An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints | 0 | 0.34 | 2022 |
| IntSGD: Adaptive Floatless Compression of Stochastic Gradients | 0 | 0.34 | 2022 |
| ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks | 0 | 0.34 | 2021 |
| Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks | 0 | 0.34 | 2021 |
| Towards Accelerated Rates for Distributed Optimization over Time-Varying Networks | 0 | 0.34 | 2021 |
| A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! | 0 | 0.34 | 2021 |
| Variance Reduced Coordinate Descent with Acceleration: New Method with a Surprising Application to Finite-Sum Problems | 0 | 0.34 | 2020 |
| Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization | 0 | 0.34 | 2020 |
| From Local SGD to Local Fixed-Point Methods for Federated Learning | 0 | 0.34 | 2020 |
| Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization | 0 | 0.34 | 2020 |
| Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better without the Outer Loop | 1 | 0.34 | 2020 |
| Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates | 0 | 0.34 | 2019 |
| Stochastic Spectral and Conjugate Descent Methods | 0 | 0.34 | 2018 |