Abstract |
---|
Distributed machine learning (DML) is widely used in resource-constrained networks that require intelligent decisions, since no single node can compute accurate results from a massive dataset within acceptable time. However, distribution inevitably exposes more potential targets to adversaries than a non-distributed environment does, especially when the dataset is periodically sent to distributed nodes to update the decision model, which must adapt to changing environments and requirements. To prevent the dataset from being manipulated, we propose a novel scheme that relies on partial training results to detect and correct manipulated training samples for DML. Based on variable training loops, the proposed scheme uses a cross-learning mechanism and a manipulated-data detection algorithm to ensure data security and learning-model correctness in an unsafe environment. Furthermore, a mathematical model is established to find the optimal number of training loops. Simulation experiments show that the mathematical model matches the simulation results well, with the optimal training loops achieving the best performance. Moreover, with the optimal training loops, the proposed data-manipulation avoidance scheme significantly improves the accuracy of the final trained model. |
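The abstract does not give the scheme's algorithmic details, but the core idea it names (one node's partial training result is used to screen another node's training samples before they enter the model update) can be sketched at toy scale. Everything below is a hypothetical illustration: the one-dimensional per-class-mean "model", the distance threshold, and the two-node setup are assumptions, not the paper's method.

```python
# Hedged sketch of cross-checking training data with a partial training
# result, loosely mirroring the cross-learning idea from the abstract.
# All function names, the toy "model", and the threshold are hypothetical.

def train_mean_classifier(samples):
    """Toy 1-D 'partial training': compute the per-class feature mean."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def detect_manipulated(samples, model, threshold):
    """Flag samples whose feature lies far from the model's mean for their label."""
    return [i for i, (x, y) in enumerate(samples)
            if abs(x - model[y]) > threshold]

# Node A holds a clean partition: class 0 around 0, class 1 around 5.
clean = [(0.1 * i - 2.5, 0) for i in range(50)] + \
        [(2.5 + 0.1 * i, 1) for i in range(50)]

# Node B's partition: 20 genuine class-0 samples, then 5 samples whose
# labels were manipulated (class-1 features relabeled as class 0).
node_b = [(0.2 * i - 2.0, 0) for i in range(20)] + \
         [(5.0 + 0.1 * j, 0) for j in range(5)]

model = train_mean_classifier(clean)               # partial result from node A
flagged = detect_manipulated(node_b, model, 3.0)   # cross-check node B's data
print(flagged)  # → [20, 21, 22, 23, 24]
```

A real DML deployment would replace the per-class mean with the actual partially trained model and a loss- or gradient-based anomaly score, but the control flow (train on one partition, screen the other) is the same.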
Year | DOI | Venue
---|---|---
2019 | 10.1109/ICC.2019.8761772 | IEEE International Conference on Communications
Field | DocType | ISSN
---|---|---
Intelligent decision, Data security, Computer science, Data detection, Correctness, Unsafe environment, Artificial intelligence, Decision model, Data manipulation language, Time complexity, Machine learning | Conference | 1550-3607
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
6 |
Name | Order | Citations | PageRank
---|---|---|---
Yijin Chen | 1 | 16 | 2.18 |
Dongpo Chen | 2 | 0 | 0.34 |
Yunkai Wei | 3 | 4 | 2.49 |
Supeng Leng | 4 | 702 | 67.45 |
Yuming Mao | 5 | 1 | 2.73 |
Jing Lin | 6 | 57 | 8.71 |