Title
MiNgMatch: A Fast N-Gram Model for Word Segmentation of the Ainu Language
Abstract
Word segmentation is an essential task in automatic language processing for languages where there are no explicit word boundary markers, or where space-delimited orthographic words are too coarse-grained. In this paper we introduce the MiNgMatch Segmenter, a fast word segmentation algorithm which reduces the problem of identifying word boundaries to finding the shortest sequence of lexical n-grams matching the input text. In order to validate our method in a low-resource scenario involving extremely sparse data, we tested it with a small corpus of text in the critically endangered language of the Ainu people living in northern parts of Japan. Furthermore, we performed a series of experiments comparing our algorithm with systems utilizing state-of-the-art lexical n-gram-based language modelling techniques (namely, the Stupid Backoff model and a model with modified Kneser-Ney smoothing), as well as a neural model performing word segmentation as character sequence labelling. The experimental results we obtained demonstrate the high performance of our algorithm, comparable with the other best-performing models. Given its low computational cost and competitive results, we believe that the proposed approach could be extended to other languages, and possibly also to other Natural Language Processing tasks, such as speech recognition.
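The core idea stated in the abstract, finding the shortest sequence of lexical n-grams whose concatenation matches the unsegmented input, can be illustrated with a minimal dynamic-programming sketch. This is an assumption-laden toy, not the authors' implementation: the lexicon here is a hypothetical dictionary mapping space-free surface strings to their segmented forms, and ties and out-of-vocabulary handling are ignored.

```python
def min_ngram_match(text, ngram_lexicon):
    """Segment `text` using the fewest lexicon n-grams (illustrative sketch).

    ngram_lexicon: hypothetical dict mapping a space-free surface string to
    its segmented form, e.g. {"abc": "a b c"}. Returns the list of segmented
    n-grams covering the input, or None if no full cover exists.
    """
    n = len(text)
    # best[i] = (n-gram count, segments) for the best cover of text[:i]
    best = [None] * (n + 1)
    best[0] = (0, [])
    for i in range(1, n + 1):
        for j in range(i):
            if best[j] is None:
                continue
            chunk = text[j:i]
            if chunk in ngram_lexicon:
                cand = (best[j][0] + 1, best[j][1] + [ngram_lexicon[chunk]])
                if best[i] is None or cand[0] < best[i][0]:
                    best[i] = cand
    return None if best[n] is None else best[n][1]
```

With a lexicon like `{"ab": "a b", "c": "c", "abc": "a b c"}`, the input `"abc"` is covered by the single trigram `"a b c"` rather than the two-piece cover `["a b", "c"]`, matching the shortest-sequence criterion.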
Year
2019
DOI
10.3390/info10100317
Venue
INFORMATION
Keywords
word segmentation, tokenization, language modelling, n-gram models, Ainu language, endangered languages, under-resourced languages
Field
Tokenization (data security), Orthographic projection, Computer science, Text segmentation, Smoothing, n-gram, Artificial intelligence, Natural language processing, Language modelling, Machine learning, Sparse matrix
DocType
Journal
Volume
10
Issue
10
Citations
0
PageRank
0.34
References
0
Authors
3
Name | Order | Citations | PageRank
Karol Nowakowski | 1 | 0 | 0.68
Michal Ptaszynski | 2 | 132 | 25.47
Fumito Masui | 3 | 87 | 27.22