Title: Cross-supervised synthesis of web-crawlers
Abstract
A web-crawler is a program that automatically and systematically follows the links of a website and extracts information from its pages. Because websites vary widely in format, the crawling scheme can differ dramatically from site to site. Manually customizing a crawler for each site is time-consuming and error-prone. Furthermore, because sites periodically change their format and presentation, crawling schemes have to be manually updated and adjusted. In this paper, we present a technique for automatic synthesis of web-crawlers from examples. The main idea is to use hand-crafted (possibly partial) crawlers for some websites as the basis for crawling other sites that contain the same kind of information. Technically, we use the data on one site to identify data on another site. We then use the identified data to learn the website structure and synthesize an appropriate extraction scheme. We iterate this process, as synthesized extraction schemes yield additional data that can be used to re-learn the website structure. We implemented our approach and automatically synthesized 30 crawlers for websites from nine categories: books, TVs, conferences, universities, cameras, phones, movies, songs, and hotels.
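The iterative loop described in the abstract (match known data on a new site, learn an extraction scheme from the match locations, extract further data, and repeat to a fixpoint) can be illustrated with a minimal sketch. The sketch below is hypothetical: the names (find_anchors, learn_scheme, crawl) and the character-context "scheme" are illustrative stand-ins for the paper's DOM-based extraction schemes, not the authors' implementation.

import re
from collections import Counter

K = 5  # context-window size in characters; arbitrary for this toy

def find_anchors(pages, known_values):
    """Locate known data values on the target site's pages and record
    the text immediately surrounding each occurrence."""
    anchors = []
    for page in pages:
        for value in known_values:
            for m in re.finditer(re.escape(value), page):
                prefix = page[max(0, m.start() - K):m.start()]
                suffix = page[m.end():m.end() + K]
                anchors.append((prefix, suffix))
    return anchors

def learn_scheme(anchors):
    """Generalize the observed contexts into an extraction pattern;
    here, simply the most frequent (prefix, suffix) pair."""
    if not anchors:
        return None
    (prefix, suffix), _ = Counter(anchors).most_common(1)[0]
    return re.compile(re.escape(prefix) + "(.+?)" + re.escape(suffix))

def crawl(pages, seed_values, max_rounds=5):
    """Iterate: match known data, learn a scheme, extract new data,
    and re-learn until extraction reaches a fixpoint."""
    known = set(seed_values)
    for _ in range(max_rounds):
        scheme = learn_scheme(find_anchors(pages, known))
        if scheme is None:
            break
        extracted = {m.group(1) for p in pages for m in scheme.finditer(p)}
        if extracted <= known:  # fixpoint: nothing new found
            break
        known |= extracted
    return known

# Toy usage: a title known from one site ("Dune") locates the matching
# structure on another site and pulls in the unseen title "Hyperion".
pages = ["<li>Title: Dune</li><li>Title: Hyperion</li>"]
print(crawl(pages, {"Dune"}))  # {'Dune', 'Hyperion'}

A real synthesizer would generalize over DOM paths and handle multi-field records rather than raw character contexts, but the fixpoint structure of the loop is the same.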
Year: 2016
DOI: 10.1145/2884781.2884842
Venue: ICSE
Keywords: Data extraction, wrapper, synthesis, scraper
Field: Program slicing, Query language, World Wide Web, Code mining, Crawling, Information retrieval, Internet, Computer science, Web crawler
DocType: Conference
ISSN: 0270-5257
ISBN: 978-1-5090-2071-3
Citations: 3
PageRank: 0.38
References: 39
Authors: 3
Name           Order  Citations  PageRank
Adi Omari      1      21         1.66
Sharon Shoham  2      342        26.67
Eran Yahav     3      1706       79.49