Title
Adversarial Training Regularization For Negative Sampling Based Network Embedding
Abstract
The aim of network embedding is to learn compact node representations, which have been shown to be effective in various downstream learning tasks such as link prediction and node classification. Most methods focus on preserving different network structures and properties while ignoring the fact that networks are usually noisy and incomplete; such methods therefore may lack robustness and suffer from overfitting. Recently, generative adversarial network (GAN) based methods have been exploited to impose a prior distribution on node embeddings to encourage global smoothness, but their model architectures are complicated and they suffer from non-convergence. Here, we propose adversarial training (AdvT), a more succinct and effective local regularization method for negative-sampling-based network embedding, to improve model robustness and generalization ability. Specifically, we first define adversarial perturbations in the embedding space rather than in the discrete graph domain, circumventing the challenge of generating discrete adversarial examples. Then, to enable more effective regularization, we design adaptive L2-norm constraints on the adversarial perturbations that depend on the connectivity pattern of each node pair. We integrate AdvT into several well-known models, including DeepWalk, LINE and node2vec, and conduct extensive experiments on benchmark datasets to verify its effectiveness. (C) 2021 Elsevier Inc. All rights reserved.
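The abstract's core idea can be illustrated with a minimal sketch (not the authors' implementation): under the negative-sampling loss for a single node pair, compute the loss gradient with respect to one node's embedding and scale it onto an L2 ball of radius eps. The function names and the fixed eps are illustrative; AdvT as described adapts eps to the connectivity pattern of each node pair.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def adversarial_perturbation(u, v, label, eps):
    """FGSM-style L2-bounded perturbation of node embedding u.

    Negative-sampling loss for one (u, v) pair:
        L = -log sigmoid(label * <u, v>)   # label: +1 positive, -1 negative
    The perturbation is the loss gradient w.r.t. u, rescaled onto an
    L2 ball of radius eps (illustrative fixed eps; AdvT adapts it per pair).
    """
    score = label * dot(u, v)
    coeff = -label * (1.0 - sigmoid(score))   # dL/d<u, v>
    grad = [coeff * vi for vi in v]           # dL/du
    norm = math.sqrt(sum(g * g for g in grad)) + 1e-12
    return [eps * g / norm for g in grad]

# Usage: the regularizer is the loss evaluated at the perturbed embedding,
# added to the clean loss during training.
u = [0.5, -0.2, 0.1]
v = [0.3, 0.4, -0.1]
delta = adversarial_perturbation(u, v, label=1, eps=0.1)
u_adv = [a + b for a, b in zip(u, delta)]
```

Because the perturbation follows the loss gradient, the loss at `u_adv` is locally maximized within the eps-ball, so training against it penalizes embeddings that are sensitive to small perturbations.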
Year
2021
DOI
10.1016/j.ins.2021.07.018
Venue
INFORMATION SCIENCES
Keywords
Network Embedding, Adversarial Training, Robustness
DocType
Journal
Volume
579
ISSN
0020-0255
Citations
0
PageRank
0.34
References
0
Authors
6
Name, Order, Citations, PageRank
Quanyu Dai, 1, 28, 5.28
Xiao Shen, 2, 0, 0.34
Zimu Zheng, 3, 22, 6.00
Liang Zhang, 4, 176, 11.95
Qiang Li, 5, 18, 1.56
Dan Wang, 6, 686, 58.70