Title
Similar Text Fragments Extraction For Identifying Common Wikipedia Communities
Abstract
The extraction of similar text fragments from weakly formalized data is a natural language processing and intelligent data analysis task used to automatically identify connected knowledge fields. To search for such common communities in Wikipedia, we propose using, as an additional stage, a logical-algebraic model for extracting similar collocations. With the Stanford Part-Of-Speech tagger and the Stanford Universal Dependencies parser, we identify the grammatical characteristics of collocation words; with WordNet synsets, we choose their synonyms. Our dataset includes Wikipedia articles from different portals and projects. The experimental results show the frequencies of synonymous text fragments in Wikipedia articles that form common information spaces. The number of highly frequent synonymous collocations can provide an indication of key common up-to-date Wikipedia communities.
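The abstract describes matching collocations by pairing their words and checking synonymy via WordNet synsets. The following is a minimal sketch of that idea, not the paper's actual implementation: a small hand-made synonym dictionary (`SYNONYMS`, illustrative data only) stands in for WordNet, and two collocations are treated as synonymous when every aligned word pair shares at least one synonym.

```python
# Hedged sketch of synonym-based collocation matching.
# SYNONYMS is an illustrative stand-in for WordNet synsets,
# not the paper's resource or data.

SYNONYMS = {
    "big": {"big", "large", "great"},
    "city": {"city", "town"},
    "fast": {"fast", "quick", "rapid"},
    "growth": {"growth", "increase"},
}

def synsets(word):
    """Return the synonym set for a word (the word itself if unknown)."""
    return SYNONYMS.get(word, {word})

def collocations_similar(a, b):
    """Two collocations match if they have equal length and every
    aligned word pair shares at least one synonym."""
    words_a, words_b = a.lower().split(), b.lower().split()
    if len(words_a) != len(words_b):
        return False
    return all(synsets(x) & synsets(y) for x, y in zip(words_a, words_b))

print(collocations_similar("big city", "large town"))        # True
print(collocations_similar("fast growth", "rapid increase")) # True
print(collocations_similar("big city", "fast growth"))       # False
```

In the paper's pipeline, POS tagging and dependency parsing would additionally constrain which word pairs are compared; this sketch omits that step for brevity.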
Year
2018
DOI
10.3390/data3040066
Venue
DATA
Keywords
information extraction, short text fragment similarity, Wikipedia communities, NLP
DocType
Journal
Volume
3
Issue
4
ISSN
2306-5729
Citations
0
PageRank
0.34
References
6
Authors
5