Title
Marginalized neural network mixtures for large-scale regression.
Abstract
For regression tasks, traditional neural networks (NNs) have been superseded by Gaussian processes, which provide probabilistic predictions (input-dependent error bars), improved accuracy, and virtually no overfitting. However, their high computational cost means that, for massive data sets, one has to resort to sparse Gaussian processes, which strive to achieve similar performance at a much smaller computational cost. In this context, we introduce a mixture of NNs with marginalized output weights that can both provide probabilistic predictions and improve on the performance of sparse Gaussian processes, at the same computational cost. The effectiveness of this approach is shown experimentally on several representative large data sets.
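The abstract does not spell out the model, but the core idea of marginalizing an NN's output weights admits a compact illustration: with a Gaussian prior on the output-layer weights and Gaussian observation noise, those weights can be integrated out analytically (Bayesian linear regression on the hidden-layer features), yielding a predictive mean and an input-dependent variance; a mixture then combines the Gaussian predictions of several such networks. The sketch below is an assumption-laden illustration of this general construction, not the authors' algorithm: the feature map hidden_features, all function names, and all hyperparameter values are hypothetical.

```python
import numpy as np

def hidden_features(X, W, b):
    # Hypothetical fixed hidden layer: tanh(X W + b) features, shape (N, H).
    return np.tanh(X @ W + b)

def marginalized_nn_predict(X_train, y_train, X_test, W, b,
                            prior_var=1.0, noise_var=0.1):
    # Output weights with a Gaussian prior are integrated out analytically
    # (Bayesian linear regression on the hidden features), giving a Gaussian
    # predictive distribution for every test input.
    Phi = hidden_features(X_train, W, b)        # (N, H)
    Phi_star = hidden_features(X_test, W, b)    # (M, H)
    H = Phi.shape[1]
    A = Phi.T @ Phi / noise_var + np.eye(H) / prior_var   # posterior precision
    A_inv = np.linalg.inv(A)
    w_mean = A_inv @ Phi.T @ y_train / noise_var          # posterior weight mean
    mean = Phi_star @ w_mean                              # predictive mean
    # Input-dependent predictive variance (noise + model uncertainty).
    var = noise_var + np.einsum('ij,jk,ik->i', Phi_star, A_inv, Phi_star)
    return mean, var

def mixture_predict(X_train, y_train, X_test, components, weights=None):
    # Combine the Gaussian predictions of several marginalized NNs.
    K = len(components)
    w = np.full(K, 1.0 / K) if weights is None else np.asarray(weights)
    means, variances = [], []
    for (W, b) in components:
        m, v = marginalized_nn_predict(X_train, y_train, X_test, W, b)
        means.append(m)
        variances.append(v)
    means = np.array(means)          # (K, M)
    variances = np.array(variances)  # (K, M)
    mix_mean = np.einsum('k,km->m', w, means)
    # Variance of a Gaussian mixture: E[var + mean^2] - (E[mean])^2.
    mix_var = np.einsum('k,km->m', w, variances + means**2) - mix_mean**2
    return mix_mean, mix_var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.uniform(-3, 3, size=(200, 1))
    y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(200)
    X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
    # Three components with randomly drawn (fixed) hidden layers.
    components = [(rng.standard_normal((1, 20)), rng.standard_normal(20))
                  for _ in range(3)]
    mean, var = mixture_predict(X_train, y_train, X_test, components)
    print(mean.shape, var.shape)  # (50,) (50,)
```

In this toy version the hidden layers and mixture weights are simply fixed at random; how they are learned is exactly where a concrete method such as the paper's would differ.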
Year
2010
DOI
10.1109/TNN.2010.2049859
Venue
IEEE Transactions on Neural Networks
Keywords
large-scale regression, computational cost, smaller computational effort, gaussian process, high computational cost, improved accuracy, marginalized neural network mixture, representative large data set, similar performance, sparse gaussian, probabilistic prediction, massive data set, neural networks, uncertainty, neural nets, regression analysis, bayesian model, neural network, high performance computing, testing, regression, multilayer perceptron, gaussian processes
Field
Data set, Regression, Pattern recognition, Supercomputer, Regression analysis, Computer science, Artificial intelligence, Gaussian process, Probabilistic logic, Overfitting, Artificial neural network, Machine learning
DocType
Journal
Volume
21
Issue
8
ISSN
1941-0093
Citations
5
PageRank
0.62
References
7
Authors
4