Title
Learning from networked examples.
Abstract
Many machine learning algorithms are based on the assumption that training examples are drawn independently and identically. However, this assumption no longer holds when learning from a networked sample, because two or more training examples may share common objects and hence share the features of those objects. We first show that the classic approach of ignoring this problem can have a disastrous effect on the accuracy of statistics, and then consider alternatives. One of these is to use only independent examples and discard the remaining information; however, this is clearly suboptimal. We analyze sample error bounds in a networked setting, providing both improved and new results. Next, we propose an efficient weighting method that achieves a better sample error bound than those of previous methods. Our approach is based on novel concentration inequalities for networked variables.
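One of the baselines the abstract mentions is to keep only independent examples, i.e., examples that share no common objects. A minimal, hypothetical sketch of that baseline (not the paper's weighting method): model each example as a set of object identifiers and greedily keep a pairwise-disjoint subset.

```python
def independent_subsample(examples):
    """Greedily select examples whose object sets are pairwise disjoint.

    `examples` is a list of (objects, value) pairs, where `objects` is a
    set of shared-object identifiers (names and structure are illustrative
    assumptions, not from the paper). Returns the kept subset, which forms
    an i.i.d.-like sample since no two kept examples share an object.
    """
    used = set()   # objects already covered by a kept example
    kept = []
    for objs, value in examples:
        if used.isdisjoint(objs):   # independent of every kept example
            kept.append((objs, value))
            used |= objs
    return kept

examples = [
    ({"a", "b"}, 1.0),   # shares object "a" with the next example
    ({"a", "c"}, 2.0),   # dropped: depends on the first example
    ({"d", "e"}, 3.0),   # disjoint from everything kept, so retained
]
subset = independent_subsample(examples)
```

As the abstract notes, this discards information (here, the second example), which is why a weighting scheme over all examples can achieve a better sample error bound.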
Year: 2014
Venue: ALT
Field: Weighting, Computer science, Sampling error, Artificial intelligence, Machine learning
DocType:
Volume: abs/1405.2600
Citations: 2
Journal:
PageRank: 0.51
References: 13
Authors: 3
Name           Order  Citations  PageRank
Yuyi Wang      1      11         3.69
Jan Ramon      2      14         4.84
Zheng-Chu Guo  3      26         2.66