Title
Learning of Gaussian Processes in Distributed and Communication Limited Systems
Abstract
Finding algorithms that attain optimal performance for learning statistical models in distributed, communication-limited systems is of fundamental importance. Aiming to characterize the optimal strategies, we consider learning of Gaussian Processes (GPs) in distributed systems as a pivotal example. We first address a basic problem: how many bits are required to estimate the inner products of Gaussian vectors across distributed machines? Using information-theoretic bounds, we obtain an optimal solution to this problem based on vector quantization. Two suboptimal but more practical schemes are also presented as substitutes for the vector quantization scheme. In particular, the performance of one of these practical schemes, called per-symbol quantization, is shown to be very close to optimal. The schemes devised for inner-product calculation are then incorporated into our proposed distributed learning methods for GPs. Experimental results show that, by spending a few bits per symbol in our communication scheme, the proposed methods outperform previous zero-rate distributed GP learning schemes such as the Bayesian Committee Machine (BCM) and Product of Experts (PoE).
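The per-symbol quantization scheme mentioned in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's actual algorithm: the uniform quantizer, its range, and the bit budget are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_per_symbol(x, bits):
    """Uniformly quantize each entry of x with `bits` bits over [-4, 4].

    Illustrative scalar (per-symbol) quantizer; the paper's actual
    quantizer design and rate allocation may differ.
    """
    levels = 2 ** bits
    lo, hi = -4.0, 4.0                       # covers ~4 std devs of N(0, 1)
    step = (hi - lo) / levels
    idx = np.clip(((x - lo) / step).astype(int), 0, levels - 1)
    return lo + (idx + 0.5) * step           # mid-point reconstruction

n = 10_000
x = rng.standard_normal(n)                   # vector held by machine A
y = rng.standard_normal(n)                   # vector held by machine B

# Machine B sends only 4 bits per symbol; machine A then estimates
# the (normalized) inner product from the quantized vector.
true_ip = x @ y / n
approx_ip = x @ quantize_per_symbol(y, bits=4) / n
print(abs(true_ip - approx_ip))
```

Even this naive scalar quantizer recovers the normalized inner product closely at a few bits per symbol, which is the kind of rate/accuracy trade-off the paper analyzes rigorously.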
Year
2017
DOI
10.1109/TPAMI.2019.2906207
Venue
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords
Kernel, Training, Machine learning, Gaussian processes, Task analysis, Optimization, Distortion
Field
Mathematical optimization, Gaussian, Product of experts, Vector quantization, Global Positioning System, Artificial intelligence, Gaussian process, Statistical model, Quantization (signal processing), Mathematics, Machine learning, Bayesian probability
DocType
Journal
Volume
42
Issue
8
ISSN
0162-8828
Citations
1
PageRank
0.38
References
5
Authors
3

Name                      Order   Citations   PageRank
Mostafa Tavassolipour     1       28          2.15
Abolfazl S. Motahari      2       255         33.01
Mohammad Taghi Manzuri    3       18          4.78