Title
Evaluating Parallel Minibatch Training For Machine Learning Applications
Abstract
The amount of data available for analytics applications continues to rise. At the same time, there are some application areas where security and privacy concerns prevent liberal dissemination of data. Both of these factors motivate the hypothesis that machine learning algorithms may benefit from parallelizing the training process (for large amounts of data) and/or distributing the training process (for sensitive data that cannot be shared). We investigate this hypothesis by considering two real-world machine learning tasks (logistic regression and sparse autoencoder), and empirically test how a model's performance changes when its parameters are set to the arithmetic means of the parameters of models trained on minibatches, i.e., horizontally split portions of the data set. We observe that iterating the minibatch training and parameter averaging process a small number of times results in models whose performance is only slightly worse than that of models trained on the full data sets.
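The following is a minimal sketch (not the paper's code) of the iterated minibatch-training-and-parameter-averaging scheme the abstract describes, illustrated with plain NumPy logistic regression; names such as train_on_split, averaged_training, k_splits, and rounds are illustrative assumptions.

```python
# Sketch: train one model per horizontal split of the data, average the
# parameters, and repeat for a small number of rounds (hypothetical names).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_on_split(X, y, w, lr=0.1, epochs=50):
    """Continue training weights w on one horizontal split via gradient descent."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def averaged_training(X, y, k_splits=4, rounds=5):
    """Split the data horizontally, train a model per split, average parameters,
    and iterate the process for a small number of rounds."""
    splits = np.array_split(np.random.permutation(len(y)), k_splits)
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        # Each split resumes from the current averaged parameters.
        per_split = [train_on_split(X[idx], y[idx], w) for idx in splits]
        w = np.mean(per_split, axis=0)  # arithmetic mean of parameters
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
    y = (sigmoid(X @ true_w) > rng.random(1000)).astype(float)
    w = averaged_training(X, y)
    acc = np.mean((sigmoid(X @ w) > 0.5) == y)
    print("training accuracy of averaged model:", round(acc, 3))
```

The same loop structure applies to the sparse autoencoder case, with the gradient step replaced by that model's update rule.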
Year
2017
DOI
10.1007/978-3-319-74718-7_48
Venue
COMPUTER AIDED SYSTEMS THEORY - EUROCAST 2017, PT I
Keywords
Distributed machine learning, Logistic regression, Sparse autoencoders, Minibatch training
Field
Computer science, Artificial intelligence, Analytics, Machine learning
DocType
Conference
Volume
10671
ISSN
0302-9743
Citations
0
PageRank
0.34
References
10
Authors
1
Name
Stephan Dreiseitl
Order
1
Citations, PageRank
33834.80