Abstract |
---|
Coarse-to-fine inference has been shown to be a robust approximate method for improving the efficiency of structured prediction models while preserving their accuracy. We propose a multi-pass coarse-to-fine architecture for dependency parsing using linear-time vine pruning and structured prediction cascades. Our first-, second-, and third-order models achieve accuracies comparable to those of their unpruned counterparts, while exploring only a fraction of the search space. We observe speed-ups of up to two orders of magnitude compared to exhaustive search. Our pruned third-order model is twice as fast as an unpruned first-order model and also compares favorably to a state-of-the-art transition-based parser for multiple languages. |
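The "linear-time vine pruning" mentioned in the abstract builds on the vine-parsing idea of restricting candidate dependency arcs to short distances, with an exception for arcs from the root. The sketch below is only an illustration of that basic constraint, not the paper's cascade-based implementation; `vine_prune` and the bound `b` are hypothetical names introduced here.

```python
def vine_prune(n, b):
    """Enumerate candidate (head, modifier) arcs for an n-word sentence
    under a vine constraint: keep only arcs of length <= b, plus arcs
    headed by the artificial root (index 0).

    Illustrative sketch of the vine constraint only; the paper combines
    this with structured prediction cascades to set the pruning threshold.
    """
    arcs = []
    for h in range(n + 1):            # 0 is the artificial root
        for m in range(1, n + 1):     # words are 1..n
            if m == h:
                continue
            if h == 0 or abs(h - m) <= b:
                arcs.append((h, m))
    return arcs

# For a 10-word sentence, the full arc set is O(n^2), while a small
# bound b shrinks the candidate set to O(n * b).
full = vine_prune(10, 10)    # 100 arcs
pruned = vine_prune(10, 2)   # 44 arcs
```

Because the surviving arc set grows linearly in `n` for fixed `b`, first-order parsing over it runs in linear time, which is the source of the speed-ups the abstract reports.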
Year | Venue | Keywords
---|---|---
2012 | HLT-NAACL | structured prediction cascade, unpruned counterpart, third-order model, structured prediction model, coarse-to-fine inference, exhaustive search, unpruned first-order model, search space, efficient multi-pass dependency parsing, multi-pass coarse-to-fine architecture, linear-time vine pruning
Field | DocType | Citations
---|---|---
Brute-force search, Computer science, Inference, Structured prediction, Vine, Dependency grammar, Theoretical computer science, Artificial intelligence, Parsing, Machine learning, Pruning | Conference | 22
PageRank | References | Authors
---|---|---
0.95 | 29 | 2
Name | Order | Citations | PageRank
---|---|---|---
Alexander M. Rush | 1 | 1499 | 67.53
Slav Petrov | 2 | 2405 | 107.56