Title: Improving Textual Network Learning with Variational Homophilic Embeddings
Abstract
The performance of many network learning applications crucially hinges on the success of network embedding algorithms, which aim to encode rich network information into low-dimensional vertex-based vector representations. This paper considers a novel variational formulation of network embeddings, with special focus on textual networks. Different from most existing methods that optimize a discriminative objective, we introduce Variational Homophilic Embedding (VHE), a fully generative model that learns network embeddings by modeling the semantic (textual) information with a variational autoencoder, while accounting for the structural (topology) information through a novel homophilic prior design. Homophilic vertex embeddings encourage similar embedding vectors for related (connected) vertices. The proposed VHE promises better generalization for downstream tasks, robustness to incomplete observations, and the ability to generalize to unseen vertices. Extensive experiments on real-world networks, for multiple tasks, demonstrate that the proposed method consistently achieves superior performance relative to competing state-of-the-art approaches.
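The homophilic prior described above encourages connected vertices to share similar embedding distributions. As a rough illustration of that idea (not the paper's actual model), the sketch below couples the variational posteriors of two linked vertices with a symmetric KL term; the linear "encoder", the unit-variance prior, and all names are illustrative assumptions.

```python
import numpy as np

def kl_to_unit_var_gaussian(mu_q, logvar_q, mu_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, I) ) for diagonal Gaussians."""
    var_q = np.exp(logvar_q)
    return 0.5 * np.sum(var_q + (mu_q - mu_p) ** 2 - 1.0 - logvar_q)

def encode(x, W_mu, W_logvar):
    """Toy linear 'encoder' mapping a text feature vector to posterior parameters."""
    return x @ W_mu, x @ W_logvar

def homophilic_kl(x_i, x_j, W_mu, W_logvar):
    """Symmetric KL that pulls the posteriors of two connected vertices together:
    each vertex's posterior is measured against a prior centered at its
    neighbor's posterior mean (an illustrative stand-in for a homophilic prior)."""
    mu_i, lv_i = encode(x_i, W_mu, W_logvar)
    mu_j, lv_j = encode(x_j, W_mu, W_logvar)
    return (kl_to_unit_var_gaussian(mu_i, lv_i, mu_j)
            + kl_to_unit_var_gaussian(mu_j, lv_j, mu_i))

# Demo: two vertices with 5-dim text features mapped to 3-dim embeddings.
rng = np.random.default_rng(0)
W_mu = rng.normal(size=(5, 3))
W_logvar = np.zeros((5, 3))  # unit posterior variance, for simplicity
x_a = rng.normal(size=5)
x_b = rng.normal(size=5)

print(homophilic_kl(x_a, x_a, W_mu, W_logvar))  # identical text: penalty is 0
print(homophilic_kl(x_a, x_b, W_mu, W_logvar))  # differing text: penalty > 0
```

Minimizing such a term over observed edges drives connected vertices toward similar embedding vectors, which is the homophily property the abstract refers to.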
Year: 2019
Venue: Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Keywords: special focus
Field: Computer science, Artificial intelligence, Machine learning
DocType: Conference
Volume: 32
ISSN: 1049-5258
Citations: 0
PageRank: 0.34
References: 0
Authors: 10
Authors (in order):
1. Wenlin Wang
2. Chenyang Tao
3. Zhe Gan
4. Guoyin Wang
5. Liqun Chen
6. Xinyuan Zhang
7. Ruiyi Zhang
8. Qian Yang
9. Ricardo Henao
10. Lawrence Carin