Title
An Unsupervised Neural Attention Model For Aspect Extraction
Abstract
Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models to this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models, which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks.
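The abstract's attention mechanism, which down-weights words irrelevant to a sentence's dominant aspect, can be sketched as attention-weighted averaging of word embeddings: each word is scored against a global context vector (the sentence's mean embedding) through a learned transform, and a softmax over those scores weights the final sentence representation. The minimal sketch below uses random embeddings and a random transform matrix `M` purely for illustration; the dimensions, initialization, and variable names are assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8                          # embedding dimension (illustrative)
n = 5                          # number of words in the sentence
E = rng.normal(size=(n, d))    # word embeddings e_1..e_n (random stand-ins)
M = rng.normal(size=(d, d))    # attention transform (learned in practice)

y = E.mean(axis=0)             # global context: average word embedding
scores = E @ M @ y             # relevance of each word to the context
a = np.exp(scores - scores.max())
a /= a.sum()                   # softmax attention weights over the words
z = a @ E                      # attention-weighted sentence embedding
```

Words whose transformed embeddings align poorly with the sentence context receive small weights, so they contribute little to `z`; in training, this is what lets the model suppress aspect-irrelevant words.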
Year
2017
DOI
10.18653/v1/P17-1036
Venue
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vol. 1
Field
Feature vector, Embedding, Computer science, Raw data, Attention model, Speech recognition, Natural language processing, Artificial intelligence, Vocabulary, Sentence, Machine learning
DocType
Conference
Volume
P17-1
Citations
32
PageRank
1.49
References
19
Authors
4
Name              Order  Citations  PageRank
Ruidan He         1      45         5.71
Wee Sun Lee       2      3325       382.37
Hwee Tou Ng       3      4092       300.40
Daniel Dahlmeier  4      460        29.67