Title: Collaborative learning by boosting in distributed environments
Abstract
In this paper we propose a new distributed learning method, the distributed network boosting (DNB) algorithm, for distributed applications. The learned hypotheses are exchanged between neighboring sites during the learning process. Theoretical analysis shows that the DNB algorithm minimizes the cost function through collaborative functional gradient descent in hypothesis space. Comparisons of the DNB algorithm with other distributed learning methods on real data sets of different sizes show its effectiveness.
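The abstract describes each site boosting on its local data while exchanging learned hypotheses with neighboring sites. A minimal sketch of that idea in Python, assuming decision stumps as weak learners and re-weighting a received hypothesis by its error on the receiver's local data; the `Node` class, `train_stump`, and the topology are illustrative assumptions, not the paper's exact DNB formulation:

```python
# Illustrative sketch of distributed boosting with neighbor hypothesis
# exchange. Assumption: decision stumps as weak learners; weights follow
# the functional-gradient view of the exponential loss, as in AdaBoost.
import math

def train_stump(X, y, w):
    """Pick the (feature, threshold, sign) stump minimizing weighted error."""
    best, best_err = None, float("inf")
    for j in range(len(X[0])):
        for thr in sorted({x[j] for x in X}):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if sign * (1 if xi[j] >= thr else -1) != yi)
                if err < best_err:
                    best, best_err = (j, thr, sign), err
    return best, best_err

def stump_predict(stump, x):
    j, thr, sign = stump
    return sign * (1 if x[j] >= thr else -1)

class Node:
    """One site: local data, a growing ensemble, and a list of neighbors."""
    def __init__(self, X, y):
        self.X, self.y = X, y
        self.ensemble = []          # list of (alpha, stump)
        self.neighbors = []

    def margin(self, x):
        return sum(a * stump_predict(h, x) for a, h in self.ensemble)

    def boost_round(self):
        # Example weights are (normalized) gradients of the exp-loss
        # at the current ensemble: w_i ∝ exp(-y_i * F(x_i)).
        w = [math.exp(-yi * self.margin(xi)) for xi, yi in zip(self.X, self.y)]
        s = sum(w)
        w = [wi / s for wi in w]
        stump, err = train_stump(self.X, self.y, w)
        err = min(max(err, 1e-9), 1 - 1e-9)
        alpha = 0.5 * math.log((1 - err) / err)
        self.ensemble.append((alpha, stump))
        return alpha, stump          # broadcast to neighbors

    def receive(self, alpha, stump):
        # Re-weight a neighbor's hypothesis by its error on local data,
        # then absorb it into the local ensemble.
        err = sum(1 for xi, yi in zip(self.X, self.y)
                  if stump_predict(stump, xi) != yi) / len(self.y)
        err = min(max(err, 1e-9), 1 - 1e-9)
        self.ensemble.append((0.5 * math.log((1 - err) / err), stump))

    def predict(self, x):
        return 1 if self.margin(x) >= 0 else -1
```

A typical round would call `boost_round()` on every node and pass the returned `(alpha, stump)` to each of that node's neighbors via `receive`, so that every site's ensemble accumulates both local and neighboring hypotheses.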
Year: 2010
DOI: 10.1109/ICPR.2008.4761440
Venue: IJPRAI
Keywords: distributed algorithms, collaborative learning algorithm, cost function minimization, learning (artificial intelligence), pattern classification, distributed learning algorithm, classification problem, hypotheses space, gradient methods, minimisation, groupware, distributed network boosting algorithm, collaborative functional gradient descent, distributed environment, distributed databases, collaborative learning, algorithm design and analysis, gradient descent, collaboration, learning artificial intelligence, classification algorithms, cost function, distributed application, boosting
Field: Gradient descent, Collaborative learning, Algorithm design, Computer science, Collaborative software, Distributed algorithm, Artificial intelligence, Boosting (machine learning), Distributed database, Statistical classification, Machine learning
DocType: Journal
Volume: 24
Issue: 05
ISSN: 1051-4651
E-ISBN: 978-1-4244-2175-6
ISBN: 978-1-4244-2175-6
Citations: 0
PageRank: 0.34
References: 16
Authors: 2
Name             Order  Citations  PageRank
Shijun Wang      1      2392       2.83
Changshui Zhang  2      55063      23.40