Title
Word Embedding Revisited: A New Representation Learning and Explicit Matrix Factorization Perspective
Abstract
Significant advances have recently been made in neural-network-based distributed word representations, also known as word embeddings. Among the new word embedding models, skip-gram with negative sampling (SGNS) in the word2vec toolbox has attracted much attention for its simplicity and effectiveness. However, the principles underlying SGNS are still not well understood, apart from a recent work that explains SGNS as an implicit matrix factorization of the pointwise mutual information (PMI) matrix. In this paper, we provide a new perspective for further understanding SGNS. We point out that SGNS is essentially a representation learning method, which learns to represent the co-occurrence vector of a word. From this representation learning view, SGNS is in fact an explicit matrix factorization (EMF) of the words' co-occurrence matrix. Furthermore, extended supervised word embedding can be established based on the proposed representation learning view.
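The factorization view summarized in the abstract can be illustrated with a minimal sketch. This is not the paper's EMF method: it follows the implicit-matrix-factorization interpretation the abstract cites (factorizing a positive PMI matrix with truncated SVD to obtain word vectors). The toy corpus, window size, and embedding dimension are invented for illustration.

```python
import numpy as np

# Toy corpus and vocabulary (arbitrary, for illustration only).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "a cat and a dog played".split(),
]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Symmetric word-context co-occurrence counts within a +/-2 window.
window = 2
C = np.zeros((V, V))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                C[idx[w], idx[sent[j]]] += 1

# Positive PMI: PPMI(w, c) = max(0, log P(w, c) - log P(w) - log P(c)).
total = C.sum()
Pw = C.sum(axis=1) / total
Pc = C.sum(axis=0) / total
with np.errstate(divide="ignore"):
    pmi = np.log(C / total) - np.log(Pw)[:, None] - np.log(Pc)[None, :]
ppmi = np.maximum(pmi, 0.0)  # -inf entries (zero counts) clip to 0

# Explicit factorization by truncated SVD; rows of W are word vectors.
d = 4
U, S, Vt = np.linalg.svd(ppmi)
W = U[:, :d] * np.sqrt(S[:d])
```

Each row of `W` is a `d`-dimensional embedding; the SGNS-as-EMF view in the paper replaces this SVD objective with the SGNS loss applied directly to the co-occurrence matrix.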
Year
2015
Venue
IJCAI
Field
Matrix (mathematics), Computer science, Matrix decomposition, Toolbox, Theoretical computer science, Artificial intelligence, Word embedding, Word2vec, Artificial neural network, Pointwise mutual information, Feature learning
DocType
Conference
Citations
19
PageRank
0.89
References
10
Authors
6
Name           Order  Citations  PageRank
Yitan Li       1      32         3.11
Linli Xu       2      790        42.51
Fei Tian       3      160        11.88
Jiang Liang    4      40         4.92
Xiaowei Zhong  5      24         1.30
Enhong Chen    6      2106       165.57