Title
Application of the distributed document representation in the authorship attribution task for small corpora.
Abstract
Distributed word representation in a vector space (word embeddings) is a novel technique that represents a word in terms of the words in its neighborhood (its context). Distributed representations can be extended to larger language structures such as phrases, sentences, paragraphs, and documents. The capability to encode the semantic information of texts and the ability to handle high-dimensional datasets are the reasons why this representation is widely used in natural language processing tasks such as text summarization, sentiment analysis, and syntactic parsing. In this paper, we propose to use the distributed representation at the document level to solve the task of authorship attribution. The proposed method learns distributed vector representations at the document level and then uses an SVM classifier to perform automatic authorship attribution. We also propose to use word n-grams (instead of words alone) as the input data type when learning the distributed representation model. We conducted experiments on six datasets used in state-of-the-art works and obtained comparable or better results for the majority of them. Our best results were obtained using the combination of words and word n-grams as input data types. Notably, the distributed representation was not adversely affected by the relative scarcity of the training data.
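The abstract describes a two-stage pipeline: learn document-level distributed representations over words and word n-grams, then train an SVM on the resulting vectors to attribute authorship. A minimal sketch of that idea, using gensim's Doc2Vec and scikit-learn's LinearSVC with a hypothetical toy corpus, helper function, and hyperparameters (none of which come from the paper), might look like this:

```python
# Illustrative sketch (not the authors' exact pipeline): learn document-level
# embeddings with gensim's Doc2Vec over words plus word n-grams, then classify
# authors with a linear SVM. Corpus, helper names and hyperparameters are
# hypothetical placeholders.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import LinearSVC

def word_ngrams(tokens, n=2):
    """Build word n-grams (bigrams by default) joined by '_'."""
    return ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical toy corpus: (tokenized document, author label) pairs.
corpus = [
    (["the", "old", "man", "walked", "slowly", "home"], "author_a"),
    (["she", "wrote", "letters", "every", "single", "day"], "author_b"),
    (["the", "man", "walked", "home", "every", "day"], "author_a"),
]

# Combine words and word bigrams as the input units for the document model.
tagged = [
    TaggedDocument(words=tokens + word_ngrams(tokens), tags=[str(i)])
    for i, (tokens, _) in enumerate(corpus)
]

# Learn the distributed document representation.
model = Doc2Vec(tagged, vector_size=50, window=3, min_count=1, epochs=40)

# Infer one vector per document and train the SVM author classifier.
X = [model.infer_vector(tokens + word_ngrams(tokens)) for tokens, _ in corpus]
y = [label for _, label in corpus]
clf = LinearSVC().fit(X, y)

# Attribute a new (toy) text fragment to one of the known authors.
test = ["the", "man", "walked", "home"]
print(clf.predict([model.infer_vector(test + word_ngrams(test))]))
```

The `word_ngrams` helper stands in for the paper's proposal to feed word n-grams, alongside plain words, into the representation model; real experiments would use proper tokenization and tuned hyperparameters.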
Year
2017
DOI
10.1007/s00500-016-2446-x
Venue
Soft Comput.
Keywords
Distributed representation, Authorship attribution, Author identification, Embeddings, Word embeddings, Stylometry, Machine learning, SVM, Scarce training data
Field
Automatic summarization, ENCODE, Vector space, Sentiment analysis, Computer science, Support vector machine, Attribution, Data type, Stylometry, Artificial intelligence, Natural language processing, Machine learning
DocType
Journal
Volume
21
Issue
3
ISSN
1433-7479
Citations
7
PageRank
0.48
References
29
Authors
6