Title
Expected Sequence Similarity Maximization
Abstract
This paper presents efficient algorithms for expected similarity maximization, which coincides with minimum Bayes decoding for a similarity-based loss function. Our algorithms are designed for similarity functions that are sequence kernels in a general class of positive definite symmetric kernels. We discuss both a general algorithm and a more efficient algorithm applicable in a common unambiguous scenario. We also describe the application of our algorithms to machine translation and report the results of experiments with several translation data sets which demonstrate a substantial speed-up. In particular, our results show a speed-up by two orders of magnitude with respect to the original method of Tromble et al. (2008) and by a factor of 3 or more even with respect to an approximate algorithm specifically designed for that task. These results open the path for the exploration of more appropriate or optimal kernels for the specific tasks considered.
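The objective described in the abstract can be illustrated with a minimal sketch: choose the hypothesis whose expected similarity to the other hypotheses, weighted by their posterior probabilities, is largest. The snippet below is only an illustration of that idea over an n-best list; the unigram-overlap similarity, the hypothesis strings, and the probabilities are assumptions for the example, not the sequence kernels or the efficient algorithms contributed by the paper, and the quadratic enumeration corresponds to the naive baseline rather than the paper's speed-ups.

```python
# Illustrative sketch: expected-similarity (minimum-Bayes-risk style) decoding over an n-best list.
# The similarity below is a toy unigram-overlap count standing in for a sequence kernel.
from collections import Counter


def similarity(y, y_prime):
    """Toy similarity: number of shared word occurrences between two hypotheses."""
    cy, cyp = Counter(y.split()), Counter(y_prime.split())
    return sum(min(cy[w], cyp[w]) for w in cy)


def expected_similarity_decode(hypotheses, probs):
    """Return the hypothesis maximizing its expected similarity under the given distribution."""
    def expected_sim(y):
        return sum(p * similarity(y, y_prime) for y_prime, p in zip(hypotheses, probs))
    return max(hypotheses, key=expected_sim)


if __name__ == "__main__":
    # Hypothetical n-best list and posterior probabilities, for illustration only.
    hyps = ["the cat sat on the mat", "a cat sat on a mat", "the cat is on the mat"]
    probs = [0.5, 0.3, 0.2]
    print(expected_similarity_decode(hyps, probs))
```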
Year
2010
Venue
North American Chapter of the Association for Computational Linguistics
Keywords
translation data set, expected sequence similarity maximization, similarity function, common unambiguous scenario, general algorithm, expected similarity maximization, substantial speed-up, machine translation, general class, efficient algorithm, approximate algorithm, generic algorithm, loss function
Field
Data set, General algorithm, Computer science, Positive-definite matrix, Machine translation, Artificial intelligence, Decoding methods, Order of magnitude, Maximization, Machine learning, Bayes' theorem
DocType
Conference
ISBN
1-932432-65-5
Citations
2
PageRank
0.39
References
16
Authors
5
Name | Order | Citations | PageRank
Cyril Allauzen | 1 | 690 | 47.64
Shankar Kumar | 2 | 131 | 6.04
Wolfgang Macherey | 3 | 617 | 45.06
Mehryar Mohri | 4 | 4502 | 448.21
Michael Riley | 5 | 102 | 7.13