Title
Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study
Abstract
Large generative language models such as GPT-2 are well-known for their ability to generate text as well as their utility in supervised downstream tasks via fine-tuning. Their prevalence on the web, however, is still not well understood: if we run GPT-2 detectors across the web, what will we find? Our work is twofold: first, we demonstrate via human evaluation that classifiers trained to discriminate between human- and machine-generated text emerge as unsupervised predictors of "page quality", able to detect low-quality content without any training. This enables fast bootstrapping of quality indicators in a low-resource setting. Second, curious to understand the prevalence and nature of low-quality pages in the wild, we conduct extensive qualitative and quantitative analysis over 500 million web articles, making this the largest-scale study ever conducted on the topic.
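A minimal sketch of the idea described in the abstract: score a web page with an off-the-shelf human-vs-machine text detector and treat the machine-generated probability as a low-quality signal. This is not the authors' code; it assumes the publicly released RoBERTa-based GPT-2 output detector is available on the Hugging Face Hub under the ID openai-community/roberta-base-openai-detector, and it reads the label mapping from the model config rather than hard-coding it.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint ID for OpenAI's RoBERTa-based GPT-2 output detector.
MODEL_ID = "openai-community/roberta-base-openai-detector"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def low_quality_score(text: str) -> float:
    """Probability that `text` is machine-generated, used here as a proxy
    for low page quality (the paper's central observation)."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)
    # The detector maps one label to machine-generated ("Fake") text; verify
    # the mapping for whichever checkpoint you actually load.
    labels = {i: name.lower() for i, name in model.config.id2label.items()}
    fake_idx = next((i for i, name in labels.items() if "fake" in name), 0)
    return probs[fake_idx].item()

if __name__ == "__main__":
    spammy = "Buy cheap meds online best price guaranteed click here now. " * 20
    print(f"low-quality score: {low_quality_score(spammy):.3f}")

Usage note: scores near 1.0 flag likely machine-generated (and, per the paper's finding, often low-quality) text; the 512-token truncation means long pages would need to be chunked and aggregated.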
Year
2021
DOI
10.1145/3437963.3441809
Venue
WSDM
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name             Order  Citations  PageRank
Dara Bahri       1      2          4.78
Yi Tay           2      229        28.97
Che Zheng        3      0          0.34
Cliff Brunk      4      0          0.34
Donald Metzler   5      3138       141.39
Andrew Tomkins   6      93881      401.23