Title
Unsupervised word alignment with arbitrary features
Abstract
We introduce a discriminatively trained, globally normalized, log-linear variant of the lexical translation models proposed by Brown et al. (1993). In our model, arbitrary, non-independent features may be freely incorporated, thereby overcoming the inherent limitation of generative models, which require that features be sensitive to the conditional independencies of the generative process. However, unlike previous work on discriminative modeling of word alignment (which also permits the use of arbitrary features), the parameters in our models are learned from unannotated parallel sentences, rather than from supervised word alignments. Using a variety of intrinsic and extrinsic measures, including translation performance, we show our model yields better alignments than generative baselines in a number of language pairs.
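As a rough illustration of the modeling idea described in the abstract, the sketch below writes a log-linear reparameterization of the IBM Model 1 lexical translation distribution and the unsupervised marginal-likelihood objective over latent alignments. This is a hedged exposition only: the feature function phi, the weight vector theta, the per-link (local) normalization, and the Model 1-style uniform alignment prior are assumptions for clarity, not the paper's exact globally normalized formulation.

% Hedged sketch: a log-linear lexical translation distribution with
% arbitrary features phi(e, f) and weights theta. Shown with per-link
% (local) normalization for brevity; the paper's model is globally
% normalized, so its partition function differs.
\[
  p_\theta(e \mid f) \;=\;
  \frac{\exp\bigl(\theta^\top \phi(e, f)\bigr)}
       {\sum_{e'} \exp\bigl(\theta^\top \phi(e', f)\bigr)}
\]
% Unsupervised training maximizes the marginal likelihood of the observed
% target sentences in the parallel data, summing out latent alignments;
% for a sentence pair (e_1..e_m, f_1..f_n) with a null word f_0:
\[
  \mathcal{L}(\theta) \;=\;
  \sum_{(\mathbf{e},\,\mathbf{f})} \log
  \prod_{j=1}^{m} \sum_{i=0}^{n} \frac{1}{n+1}\, p_\theta(e_j \mid f_i)
\]

Under this sketch, the gradient of the objective reduces to expected feature counts under the alignment posterior, the same quantities computed by EM for Model 1, which is what makes feature-rich unsupervised training of this kind tractable.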
Year
2011
Venue
ACL
Keywords
generative baselines, conditional independence, generative process, arbitrary feature, unsupervised word alignment, generative model, supervised word alignment, translation performance, lexical translation model, word alignment
Field
Normalization (statistics), Pattern recognition, Computer science, Speech recognition, Natural language processing, Artificial intelligence, Generative grammar, Discriminative model, Machine learning
DocType
Conference
Volume
P11-1
Citations
21
PageRank
0.68
References
38
Authors
4
Name                 Order   Citations   PageRank
Chris Dyer           1       5438        232.28
Jonathan H. Clark    2       411         16.42
Alon Lavie           3       2606        177.91
Noah A. Smith        4       5867        314.27