Name: Peter Richtárik
Affiliation: University of Edinburgh
Papers: 108
Collaborators: 116
Citations: 1314
PageRank: 84.53
Referrers: 1897
Referees: 944
References: 1141
Title | Citations | PageRank | Year
Shifted compression framework: generalizations and improvements. | 0 | 0.34 | 2022
Proximal and Federated Random Reshuffling. | 0 | 0.34 | 2022
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! | 0 | 0.34 | 2022
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning | 0 | 0.34 | 2022
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning | 0 | 0.34 | 2022
3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation. | 0 | 0.34 | 2022
An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints | 0 | 0.34 | 2022
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization. | 0 | 0.34 | 2022
Permutation Compressors for Provably Faster Distributed Nonconvex Optimization | 0 | 0.34 | 2022
Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms | 0 | 0.34 | 2022
IntSGD: Adaptive Floatless Compression of Stochastic Gradients | 0 | 0.34 | 2022
FedNL: Making Newton-Type Methods Applicable to Federated Learning. | 0 | 0.34 | 2022
Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information | 0 | 0.34 | 2022
Error Compensated Distributed SGD Can Be Accelerated. | 0 | 0.34 | 2021
Marina: Faster Non-Convex Distributed Learning With Compression | 0 | 0.34 | 2021
Adom: Accelerated Decentralized Optimization Method For Time-Varying Networks | 0 | 0.34 | 2021
Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols | 1 | 0.35 | 2021
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning | 0 | 0.34 | 2021
Local Sgd: Unified Theory And New Efficient Methods | 0 | 0.34 | 2021
L-Svrg And L-Katyusha With Arbitrary Sampling | 0 | 0.34 | 2021
Stochastic Sign Descent Methods: New Algorithms and Better Theory | 0 | 0.34 | 2021
A Linearly Convergent Algorithm For Decentralized Optimization: Sending Less Bits For Free! | 0 | 0.34 | 2021
A Stochastic Derivative-Free Optimization Method With Importance Sampling: Theory And Learning To Control | 0 | 0.34 | 2020
Stochastic Subspace Cubic Newton Method | 0 | 0.34 | 2020
Variance-Reduced Methods for Machine Learning | 3 | 0.36 | 2020
99% of Worker-Master Communication in Distributed Optimization Is Not Needed. | 0 | 0.34 | 2020
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems | 0 | 0.34 | 2020
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization | 0 | 0.34 | 2020
From Local SGD to Local Fixed-Point Methods for Federated Learning. | 0 | 0.34 | 2020
Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm | 0 | 0.34 | 2020
Random Reshuffling: Simple Analysis with Vast Improvements | 0 | 0.34 | 2020
Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization | 0 | 0.34 | 2020
Lower Bounds and Optimal Algorithms for Personalized Federated Learning | 0 | 0.34 | 2020
Convergence Analysis Of Inexact Randomized Iterative Methods | 1 | 0.35 | 2020
Don't Jump Through Hoops and Remove Those Loops - SVRG and Katyusha are Better Without the Outer Loop. | 1 | 0.34 | 2020
Best Pair Formulation & Accelerated Scheme for Non-convex Principal Component Pursuit. | 0 | 0.34 | 2019
Convergence Analysis of Inexact Randomized Iterative Methods. | 0 | 0.34 | 2019
Revisiting Stochastic Extragradient. | 0 | 0.34 | 2019
Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates. | 0 | 0.34 | 2019
Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols. | 0 | 0.34 | 2019
Online and Batch Supervised Background Estimation via L1 Regression. | 1 | 0.35 | 2019
One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods. | 0 | 0.34 | 2019
99% of Parallel Optimization is Inevitably a Waste of Time. | 0 | 0.34 | 2019
Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates | 1 | 0.35 | 2019
SAGA with Arbitrary Sampling. | 0 | 0.34 | 2019
Nonconvex Variance Reduced Optimization with Arbitrary Sampling | 2 | 0.36 | 2019
Scaling Distributed Machine Learning with In-Network Aggregation. | 3 | 0.38 | 2019
Distributed Learning with Compressed Gradient Differences. | 2 | 0.36 | 2019
Stochastic Convolutional Sparse Coding | 0 | 0.34 | 2019
A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. | 0 | 0.34 | 2019