Abstract |
---|
In this section, we motivate the use of n-best reranking of supertags. Although we give multiple motivations, we focus on justifying our approach as a promising way to improve the performance of a full parser. First, we review the supertagging task and its applications. Because supertagging requires a particular TAG, we then introduce automatically extracted TAGs and motivate their use. Despite their advantages, supertagging with automatically extracted TAGs runs into damaging sparse data problems. We review n-best supertagging as one means of alleviating these problems. Lastly, we run experiments showing that supertagging is potentially a viable means of speeding up a full parser. Throughout this section, we describe the linguistic resources used in all of our experiments and the notation employed in the rest of this paper. |
Year | Venue | Keywords
---|---|---
2002 | TAG+ | sparse data

Field | DocType | Citations
---|---|---
Ranking, Trigram, Computer science, n-gram, Boosting (machine learning), Natural language processing, Artificial intelligence, Parsing, Ambiguity, Sentence, Sparse matrix | Conference | 10
PageRank | References | Authors
---|---|---
1.11 | 14 | 4

Name | Order | Citations | PageRank
---|---|---|---
John Chen | 1 | 197 | 26.31
Srinivas Bangalore | 2 | 1319 | 157.37
Michael Collins | 3 | 6788 | 785.35
Owen Rambow | 4 | 2256 | 247.69