Abstract
---
For TREC-10, we participated in the ad hoc and manual web tracks and in both the site-finding and cross-lingual tracks. For the ad hoc track, we did extensive calibrations and learned that combining similarity measures yields little improvement. This year, we focused on a single high-performance similarity measure. For site finding, we implemented several algorithms that did well on the data provided for calibration but poorly on the real dataset. For the cross-lingual track, we calibrated on the monolingual collection and developed new Arabic stemming algorithms as well as a novel dictionary-based means of cross-lingual retrieval. Our results in this track were quite promising, with seventeen of our queries performing at or above the median.
Year | Venue | Field
---|---|---
2001 | TREC | Arabic, Similarity measure, Information retrieval, Computer science

DocType | Citations | PageRank
---|---|---
Conference | 13 | 1.43

References | Authors
---|---
5 | 8
Name | Order | Citations | PageRank |
---|---|---|---
Mohammed Aljlayl | 1 | 102 | 7.58 |
Steven M. Beitzel | 2 | 696 | 46.72 |
Eric C. Jensen | 3 | 696 | 46.72 |
Abdur Chowdhury | 4 | 2013 | 160.59 |
David O. Holmes | 5 | 163 | 20.38 |
M. Lee | 6 | 144 | 19.86 |
David A. Grossman | 7 | 399 | 46.60
Ophir Frieder | 8 | 433 | 46.10 |