Title
Fudan University at TRECVID 2007
Abstract
In this notebook paper we describe our participation in the NIST TRECVID 2007 evaluation. We took part in two benchmark tasks this year: high-level feature extraction and interactive search. For high-level feature extraction, we submitted 4 runs: FD_SVM, using SVM; FD_SVM_BN, using SVM and ontology; FD_MTL, using multi-task learning; and FD_MTL_BN, using multi-task learning and ontology. Evaluation results show that each method has both advantages and disadvantages. For search, we submitted 5 interactive runs: Fudan_P, using multi-modal retrieval with AP-based fusion; Fudan_R, using multi-modal retrieval with MGR fusion; Fudan_C, cross-system retrieval; Fudan_T, textual retrieval; and Fudan_I, image retrieval. Evaluation results show that the AP-based fusion method yields higher precision, while the MGR fusion method finds more positive shots than the other runs. We also experimented with simple cross-system interactive retrieval to estimate the impact of manual browsing on the results.
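The abstract names SVM-based concept detection for the FD_SVM run but gives no implementation details. The following minimal Python sketch illustrates the general idea of training one binary SVM per high-level feature (concept) over shot-level visual descriptors and ranking test shots by classifier score; the feature data, concept names, and parameter choices are hypothetical placeholders, not the Fudan system's actual configuration.

# Minimal sketch of per-concept SVM detection in the spirit of the FD_SVM run.
# The feature matrices, concept labels, and SVM settings below are hypothetical;
# the abstract does not specify the system's actual features or parameters.
import numpy as np
from sklearn.svm import SVC

def train_concept_detectors(X_train, y_train_per_concept):
    """Train one binary SVM per high-level feature (concept)."""
    detectors = {}
    for concept, labels in y_train_per_concept.items():
        clf = SVC(kernel="rbf", probability=True, class_weight="balanced")
        clf.fit(X_train, labels)  # labels: 1 = concept present in the shot
        detectors[concept] = clf
    return detectors

def rank_shots(detectors, X_test):
    """Score every test shot for every concept and return ranked shot indices."""
    rankings = {}
    for concept, clf in detectors.items():
        scores = clf.predict_proba(X_test)[:, 1]  # probability of the positive class
        rankings[concept] = np.argsort(-scores)   # highest-scoring shots first
    return rankings

# Hypothetical usage with random placeholder data (a real run would use
# visual descriptors extracted from TRECVID 2007 keyframes).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train, X_test = rng.normal(size=(200, 64)), rng.normal(size=(50, 64))
    y_train = {"Airplane": rng.integers(0, 2, 200), "Weather": rng.integers(0, 2, 200)}
    rankings = rank_shots(train_concept_detectors(X_train, y_train), X_test)
    print({c: idx[:5].tolist() for c, idx in rankings.items()})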
Year
2008
Venue
TREC Video Retrieval Evaluation
Field
Concept map, Visual search, Computer science, TRECVID, Full text search, Speech recognition, NIST
DocType
Conference
Citations
2
PageRank
0.39
References
11
Authors
18
Name            Order  Citations  PageRank
Xiangyang Xue   1      2466       154.25
Hui Yu          2      13         7.78
Hong Lu         3      43         8.61
Yue-Fei Guo     4      172        13.22
Yuejie Zhang    5      127        25.82
Shile Zhang     6      10         1.93
Bin Li          7      68         15.59
Bolan Su        8      472        22.37
Yingbin Zheng   9      191        16.70
Wenjian Zhou    10     4          1.12
Lei Cen         11     2          1.74
Jie Zhang       12     47         15.01
Yu Jiang        13     11         3.60
Jiahui Qi       14     2          0.39
Jiaojiao Lu     15     2          0.39
Qian Diao       16     2          0.39
Zhenzhen Shi    17     3          1.16
Zichen Sun      18     36         2.51