Abstract
---
of new document retrieval and duplicate removal strategies for 'list' and 'other' questions, established a baseline for other systems in the interactive task, and focused on question analysis and paraphrasing, rather than incorporation of external knowledge, in the factoid task. Many of the individual subsystems are largely unchanged from last year. We found that document retrieval strategy has an influence on performance in the different kinds of tasks later in the pipeline. Our other changes from last year did not immediately yield clear lessons. We present a question analysis data set and interannotator agreement indicators for the ciQA task that we hope will spur further evaluation.
Year | Venue | Keywords
---|---|---|
2006 | TREC | question answering, document retrieval

Field | DocType | Citations
---|---|---|
Question answering, Information retrieval, Computer science, Question analysis, Document retrieval, Factoid | Conference | 2

PageRank | References | Authors
---|---|---|
0.43 | 5 | 13
Name | Order | Citations | PageRank |
---|---|---|---|
Boris Katz | 1 | 501 | 49.78 |
Gregory Marton | 2 | 139 | 13.19 |
Sue Felshin | 3 | 148 | 14.79 |
Daniel Loreto | 4 | 65 | 5.58 |
Ben Lu | 5 | 2 | 0.43 |
Federico Mora | 6 | 69 | 6.72 |
Özlem Uzuner | 7 | 1045 | 67.09 |
Michael McGraw-Herdeg | 8 | 2 | 0.43
Natalie Cheung | 9 | 2 | 0.43 |
Alexey Radul | 10 | 35 | 8.90 |
Yuan Kui Shen | 11 | 12 | 1.27 |
Yuan Luo | 12 | 8 | 7.02 |
Gabriel Zaccak | 13 | 8 | 1.25 |