Abstract |
---|
A focused web crawler harvests a collection of web documents centered on a topical subspace. The central difficulty for a focused crawler is identifying the next most important and relevant link to follow; focused crawlers mostly rely on probabilistic models to predict document relevance. Web documents are well characterized by hypertext, which can be used to determine a document's relevance to the search domain: the semantics of a link characterize the semantics of the document it refers to. In this article, a novel focused crawler named LSCrawler is proposed. LSCrawler retrieves documents by estimating their relevance from the keywords in each link and the text surrounding the link. Relevance is computed by measuring the semantic similarity between the link keywords and the taxonomy hierarchy of the specific domain. The system exhibits better recall because it exploits the semantics of the link keywords. |
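The abstract's core idea, scoring a hyperlink by the semantic similarity between its anchor keywords and a domain taxonomy, can be sketched as follows. This is an illustrative assumption, not the paper's exact algorithm: the toy taxonomy, the Wu-Palmer-style similarity measure, and the function names are all hypothetical.

```python
# Hypothetical sketch of LSCrawler-style link scoring.
# The taxonomy and the similarity measure are illustrative assumptions.

# Toy domain taxonomy as a child -> parent map (root maps to None).
TAXONOMY = {
    "computing": None,
    "information retrieval": "computing",
    "web search": "information retrieval",
    "crawler": "web search",
    "focused crawler": "crawler",
    "databases": "computing",
}

def path_to_root(term):
    """Return the list of concepts from `term` up to the taxonomy root."""
    path = []
    while term is not None:
        path.append(term)
        term = TAXONOMY[term]
    return path

def similarity(a, b):
    """Wu-Palmer-style similarity: 2*depth(lca) / (depth(a) + depth(b))."""
    ancestors_a = set(path_to_root(a))
    # Lowest common ancestor: first concept on b's path also above a.
    lca = next(t for t in path_to_root(b) if t in ancestors_a)
    depth = lambda t: len(path_to_root(t))  # the root has depth 1
    return 2 * depth(lca) / (depth(a) + depth(b))

def score_link(anchor_keywords, topic="focused crawler"):
    """Score a hyperlink by its best-matching anchor keyword.

    Keywords absent from the taxonomy contribute nothing, so a link
    with no recognized keyword scores 0.0.
    """
    matches = [similarity(k, topic)
               for k in anchor_keywords if k in TAXONOMY]
    return max(matches, default=0.0)
```

A crawler frontier could then be kept as a priority queue ordered by this score, so the link whose anchor text is semantically closest to the domain taxonomy is fetched next.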
Year | DOI | Venue |
---|---|---|
2006 | 10.1109/WI.2006.112 | Web Intelligence |
Keywords | Field | DocType |
---|---|---|
semantic similarity,focused crawler,distinctive focused crawler,focused crawlers,search domain,specific domain,lscrawler system retrieves document,focused web crawler,relevant link,web document,link semantics,enhanced focused web crawler,web crawler,information retrieval,probabilistic model,internet | Web search engine,Data mining,World Wide Web,Semantic Web Stack,Information retrieval,Web page,Computer science,Semantic Web,Focused crawler,Social Semantic Web,HTML,Web crawler | Conference |
ISBN | Citations | PageRank |
---|---|---|
0-7695-2747-7 | 18 | 0.74 |
References | Authors |
---|---|
5 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
M. Yuvarani | 1 | 19 | 1.12 |
N. Ch. S. N. Iyengar | 2 | 84 | 11.24 |
A. Kannan | 3 | 195 | 25.98 |