Title
IIT at TREC-10
Abstract
For TREC-10, we participated in the adhoc and manual web tracks as well as in both the site-finding and cross-lingual tracks. For the adhoc track, we performed extensive calibrations and learned that combining similarity measures yields little improvement; this year, we therefore focused on a single high-performance similarity measure. For site finding, we implemented several algorithms that did well on the data provided for calibration but poorly on the real dataset. For the cross-lingual track, we calibrated on the monolingual collection and developed new Arabic stemming algorithms as well as a novel dictionary-based means of cross-lingual retrieval. Our results in this track were quite promising, with seventeen of our queries performing at or above the median.
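The abstract mentions a dictionary-based means of cross-lingual retrieval. As an illustration only, the sketch below shows generic dictionary-based query translation (map each source-language query term through a bilingual term list, then retrieve with the translated terms). The toy dictionary entries, tokenization, and overlap score are assumptions for demonstration and are not the system described in the paper.

```python
# Illustrative sketch only: generic dictionary-based query translation for
# cross-lingual retrieval. The bilingual dictionary, tokenizer, and scoring
# function below are hypothetical placeholders, not the paper's actual method.

from collections import Counter

# Hypothetical English -> Arabic (romanized) bilingual dictionary with toy entries.
BILINGUAL_DICT = {
    "book": ["kitab"],
    "library": ["maktaba"],
    "new": ["jadid"],
}

def translate_query(english_query: str) -> list[str]:
    """Replace each source-language term with its dictionary translations.
    Terms with no dictionary entry are dropped in this toy version."""
    translated = []
    for term in english_query.lower().split():
        translated.extend(BILINGUAL_DICT.get(term, []))
    return translated

def score(document_terms: list[str], query_terms: list[str]) -> int:
    """Toy overlap score: count occurrences of translated query terms in the document."""
    counts = Counter(document_terms)
    return sum(counts[t] for t in query_terms)

if __name__ == "__main__":
    arabic_query = translate_query("new library book")
    document = ["kitab", "jadid", "fi", "al", "maktaba"]
    print(arabic_query, score(document, arabic_query))
```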
Year: 2001
Venue: TREC
Field: Arabic, Similarity measure, Information retrieval, Computer science
DocType: Conference
Citations: 13
PageRank: 1.43
References: 5
Authors: 8
Name                 Order  Citations  PageRank
Mohammed Aljlayl     1      102        7.58
Steven M. Beitzel    2      696        46.72
Eric C. Jensen       3      696        46.72
Abdur Chowdhury      4      2013       160.59
David O. Holmes      5      163        20.38
M. Lee               6      144        19.86
David A. Grossman    7      399        46.60
Ophir Frieder        8      433        46.10