Abstract |
---|
We propose a new framework for N-best reranking on sparse feature sets. The idea is to reformulate the reranking problem as a Multitask Learning problem, where each N-best list corresponds to a distinct task. This is motivated by the observation that N-best lists often show significant differences in feature distributions. Training a single reranker directly on this heterogeneous data can be difficult. Our proposed meta-algorithm solves this challenge by using multitask learning (such as ℓ1/ℓ2 regularization) to discover common feature representations across N-best lists. This meta-algorithm is simple to implement, and its modular approach allows one to plug in different learning algorithms from the existing literature. As a proof of concept, we show statistically significant improvements on a machine translation system involving millions of features. |
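The ℓ1/ℓ2 regularizer mentioned in the abstract (also known as group lasso) encourages features to be either used across all tasks or dropped entirely, which is how shared feature representations emerge across N-best lists. A minimal sketch of this mechanism, assuming NumPy and a toy weight matrix; the function names (`l1_l2_penalty`, `group_soft_threshold`) are illustrative, not the paper's implementation:

```python
import numpy as np

def l1_l2_penalty(W):
    """l1/l2 (group lasso) penalty: sum over features (rows) of the
    l2 norm of that feature's weights across all tasks (columns)."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def group_soft_threshold(W, tau):
    """Proximal operator of the l1/l2 penalty: shrinks each feature's
    weight vector (row) toward zero; rows whose l2 norm falls below
    tau are zeroed entirely, so surviving features are shared across
    all tasks."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale

# Toy weight matrix: 4 features x 3 tasks (one task per N-best list).
W = np.array([[ 2.0,  1.5,  2.5],   # strong feature, shared by all tasks
              [ 0.1,  0.0,  0.2],   # weak everywhere -> pruned as a group
              [ 0.0,  3.0,  0.0],   # feature used by a single task
              [-1.0, -1.2, -0.8]])  # moderate shared feature

W_shrunk = group_soft_threshold(W, tau=0.5)
# The weak feature's entire row is zeroed, removing it from every task.
```

In a full reranking setup this proximal step would alternate with gradient updates on each task's ranking loss; here it only illustrates the group-wise feature selection that the regularizer induces.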
Year | Venue | Keywords |
---|---|---|
2010 | WMT@ACL | multitask learning, multitask learning problem, common feature representation, n-best list, proposed meta-algorithm, n-best reranking, feature distribution, n-best list corresponds, plug-in different learning algorithm, sparse feature set, proof of concept |
Field | DocType | Citations |
---|---|---|
Multi-task learning, Computer science, Machine translation system, Proof of concept, Regularization (mathematics), Artificial intelligence, Modular design, Machine learning | Conference | 6 |
PageRank | References | Authors |
---|---|---|
0.65 | 29 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Kevin Duh | 1 | 819 | 72.94 |
Katsuhito Sudoh | 2 | 326 | 34.44 |
Hajime Tsukada | 3 | 449 | 29.46 |
Hideki Isozaki | 4 | 934 | 64.50 |
Masaaki Nagata | 5 | 573 | 77.86 |