Title
No-Reference Video Quality Assessment Based on the Temporal Pooling of Deep Features
Abstract
Video quality assessment (VQA) is an important element of various applications ranging from automatic video streaming to display technology. Visual quality measurement requires a balanced investigation of visual content and features. Previous studies have shown that features extracted from a pretrained convolutional neural network are highly effective for a wide range of applications in image processing and computer vision. In this study, we developed a novel architecture for no-reference VQA based on features obtained from pretrained convolutional neural networks, transfer learning, temporal pooling, and regression. In particular, we obtained solutions using only temporally pooled deep features, without any manually derived features. The proposed architecture was trained on the recently published Konstanz natural video quality database (KoNViD-1k), which, unlike other publicly available databases, contains 1200 video sequences with authentic distortions. The experimental results on KoNViD-1k demonstrated that the proposed method performs better than other state-of-the-art algorithms. These results were further confirmed by tests on the LIVE VQA database, which contains artificially distorted videos.
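The abstract outlines a pipeline of pretrained CNN feature extraction, temporal pooling, and regression. The sketch below is a minimal illustration of that general idea, not the paper's exact implementation: it assumes a ResNet-50 backbone from torchvision, mean/std temporal pooling, and a support vector regressor mapping the pooled descriptor to a quality score; all of these specific choices are assumptions for illustration.

```python
# Sketch: temporal pooling of deep CNN features for no-reference VQA.
# Assumptions (not taken from the paper): ResNet-50 backbone, mean/std
# temporal pooling, and an SVR quality regressor.
import numpy as np
import torch
from torch import nn
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.svm import SVR

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone = nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier head
backbone.eval()
preprocess = weights.transforms()  # resize/normalize as expected by the backbone

@torch.no_grad()
def video_descriptor(frames: torch.Tensor) -> np.ndarray:
    """frames: (T, 3, H, W) float tensor in [0, 1].
    Returns a fixed-length descriptor via temporal mean/std pooling."""
    feats = backbone(preprocess(frames)).flatten(1)            # (T, 2048) frame-level features
    pooled = torch.cat([feats.mean(dim=0), feats.std(dim=0)])  # (4096,) video-level descriptor
    return pooled.numpy()

# Toy example: each video is reduced to one descriptor, then a support
# vector regressor maps descriptors to subjective quality scores (MOS).
videos = [torch.rand(8, 3, 224, 224) for _ in range(4)]  # placeholder clips
mos = np.array([2.1, 3.4, 4.0, 1.8])                     # placeholder MOS labels
X = np.stack([video_descriptor(v) for v in videos])
regressor = SVR(kernel="rbf", C=1.0).fit(X, mos)
print(regressor.predict(X[:1]))                          # predicted quality score
```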
Year
2019
DOI
10.1007/s11063-019-10036-6
Venue
Neural Processing Letters
Keywords
No-reference video quality assessment, Convolutional neural network
DocType
Journal
Volume
50
Issue
3
ISSN
1370-4621
Citations
2
PageRank
0.37
References
0
Authors
1
Name
Domonkos Varga
Order
1
Citations
13
PageRank
4.29