Title
Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network
Abstract
Image quality assessment (IQA) algorithms aim to quantify human perception of image quality. Unfortunately, their performance drops when assessing distorted images generated by generative adversarial networks (GANs) with seemingly realistic textures. In this work, we conjecture that this maladaptation lies in the backbone of IQA models, where patch-level prediction methods take independent image patches as input and compute their scores separately, lacking spatial relationship modeling among patches. Therefore, we propose an Attention-based Hybrid Image Quality Assessment Network (AHIQ) to address this challenge and achieve better performance on GAN-based IQA tasks. First, we adopt a two-branch architecture, comprising a vision transformer (ViT) branch and a convolutional neural network (CNN) branch, for feature extraction. The hybrid architecture combines the interaction information among image patches captured by the ViT with local texture details from the CNN. To make the features from the shallow CNN focus more on visually salient regions, a deformable convolution is applied with the help of semantic information from the ViT branch. Finally, a patch-wise score prediction module produces the final quality score. Experiments show that our model outperforms state-of-the-art methods on four standard IQA datasets, and AHIQ ranked first on the Full Reference (FR) track of the NTIRE 2022 Perceptual Image Quality Assessment Challenge. Code and pretrained models are publicly available at https://github.com/IIGROUP/AHIQ.
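The sketch below illustrates the kind of two-branch design the abstract describes: a ViT-style branch models interactions among patches, its features predict offsets for a deformable convolution over shallow CNN features, and a patch-wise head pools per-patch scores into one quality score. It is a minimal, assumed PyTorch sketch and not the authors' implementation (see the repository above); the tiny transformer/CNN stand-ins, channel sizes, and the use of a single distorted image (rather than a full-reference pair) are simplifications made for a self-contained example.

```python
# Minimal sketch of a hybrid ViT+CNN IQA model with deformable convolution
# and patch-wise score prediction. Illustrative only; all module choices
# and sizes are assumptions, not the AHIQ reference implementation.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class TinyHybridIQA(nn.Module):
    def __init__(self, cnn_ch=64, vit_dim=128, patch=16, img=224):
        super().__init__()
        self.grid = img // patch  # 14x14 patch grid for a 224x224 input
        # Shallow CNN branch: local texture details, downsampled to the grid.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, cnn_ch, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(cnn_ch, cnn_ch, 3, stride=4, padding=1), nn.ReLU(),
        )
        # ViT-style branch: patch embedding + transformer encoder for
        # long-range interactions among image patches.
        self.embed = nn.Conv2d(3, vit_dim, patch, stride=patch)
        layer = nn.TransformerEncoderLayer(vit_dim, nhead=4, batch_first=True)
        self.vit = nn.TransformerEncoder(layer, num_layers=2)
        # Offsets for the deformable conv come from ViT semantics, steering
        # the shallow CNN features toward visually salient regions.
        self.offset = nn.Conv2d(vit_dim, 2 * 3 * 3, 3, padding=1)
        self.deform = DeformConv2d(cnn_ch, cnn_ch, 3, padding=1)
        # Patch-wise prediction: a per-patch score and a per-patch weight.
        self.score = nn.Conv2d(cnn_ch + vit_dim, 1, 1)
        self.weight = nn.Conv2d(cnn_ch + vit_dim, 1, 1)

    def forward(self, x):
        b = x.size(0)
        cnn_feat = self.cnn(x)                             # (B, C, 14, 14)
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, 196, D)
        vit_feat = self.vit(tokens).transpose(1, 2)
        vit_feat = vit_feat.reshape(b, -1, self.grid, self.grid)
        cnn_feat = self.deform(cnn_feat, self.offset(vit_feat))
        fused = torch.cat([cnn_feat, vit_feat], dim=1)
        s, w = self.score(fused), torch.sigmoid(self.weight(fused))
        # Weighted average of patch scores -> one quality score per image.
        return (s * w).flatten(1).sum(1) / w.flatten(1).sum(1).clamp(min=1e-6)


if __name__ == "__main__":
    model = TinyHybridIQA()
    print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2])
```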
Year: 2022
DOI: 10.1109/CVPRW56347.2022.00123
Venue: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DocType: Conference
Volume: 2022
Issue: 1
Citations: 0
PageRank: 0.34
References: 0
Authors (8)
Name            Order  Citations  PageRank
Shanshan Lao      1        0        0.68
Yuan Gong         2        0        0.34
Shuwei Shi        3        4        1.74
Sidi Yang         4        0        1.01
Tianhe Wu         5        0        1.01
Jiahao Wang       6        4        1.74
Weihao Xia        7        6        2.09
Yang Yu-Jiu       8       89       19.30