Abstract
---
Most of today's video quality assessment (VQA) databases contain very limited content and distortion diversity and fail to adequately represent real-world video impairments. This is partly because conducting subjective studies in the lab is a slow, inefficient, and expensive process. Crowdsourcing quality scores is a more scalable solution. However, given that viewers operate under innumerable viewing conditions (including display resolutions, viewing distances, and internet connection speeds) and because they are not closely supervised, multiple technical challenges arise. We carefully designed a framework in Amazon Mechanical Turk (AMT) to address the many technical issues that are faced. We launched the largest available VQA study, collecting more than 205,000 opinion scores provided by more than 4,700 unique participants. We have verified that our framework provided results that are highly consistent with those obtained in a lab environment under controlled conditions.
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/ICIP.2018.8451467 | 2018 25th IEEE International Conference on Image Processing (ICIP) |
Keywords | Field | DocType
---|---|---
Video Quality Assessment, Subjective Study, Crowdsourcing | Computer vision, Video recording, Display resolution, Crowdsourcing, Computer science, Subjective video quality, Artificial intelligence, Internet access, Distortion, Video quality, Multimedia, Scalability | Conference

ISSN | ISBN | Citations
---|---|---
1522-4880 | 978-1-4799-7062-9 | 1

PageRank | References | Authors
---|---|---
0.36 | 11 | 2

Name | Order | Citations | PageRank
---|---|---|---
Zeina Sinno | 1 | 28 | 3.52 |
Alan C. Bovik | 2 | 5062 | 349.55 |