Title
LKASR: Large kernel attention for lightweight image super-resolution
Abstract
Image super-resolution (SR) aims to recover a high-resolution image from a given low-resolution image. While most state-of-the-art methods only consider fixed small-size convolution kernels (e.g., 1 × 1, 3 × 3) for extracting image features, few works have explored large-size convolution kernels for SR. In this paper, we propose a novel lightweight baseline model, LKASR, based on large kernel attention (LKA). LKASR consists of three parts: shallow feature extraction, deep feature extraction, and high-quality image reconstruction. In particular, the deep feature extraction module consists of multiple cascaded visual attention modules (VAM), each of which comprises a 1 × 1 convolution, a large kernel attention module (acting as a Transformer), and a feature refinement module (FRM, acting as a CNN). Specifically, VAM adopts a lightweight architecture similar to Swin Transformer to iteratively extract global and local image features, which greatly improves the efficiency of the SR method (0.049 s per image on the Urban100 dataset). For different scales ( × 2,  × 3,  × 4), extensive experimental results on benchmarks demonstrate that LKASR outperforms most lightweight SR methods by up to 0.17∼0.34 dB, while the total number of parameters and FLOPs remains lightweight.
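The large kernel attention the abstract builds on is commonly decomposed (as in Guo et al.'s Visual Attention Network, which introduced LKA) into a small depthwise convolution, a dilated depthwise convolution, and a 1 × 1 pointwise convolution, whose output gates the input elementwise. The sketch below is an illustrative NumPy implementation of that decomposition, not the authors' code; the kernel sizes (5 × 5 depthwise, 7 × 7 dilated depthwise with dilation 3) and weight names are assumptions based on the standard LKA design.

```python
import numpy as np

def depthwise_conv2d(x, kernel, dilation=1):
    """Per-channel 'same' convolution.
    x: (C, H, W), kernel: (C, k, k) with k odd, one filter per channel."""
    C, H, W = x.shape
    k = kernel.shape[1]
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    # Accumulate one shifted, weighted copy of the input per kernel tap.
    for i in range(k):
        for j in range(k):
            out += kernel[:, i:i + 1, j:j + 1] * \
                   xp[:, i * dilation:i * dilation + H,
                      j * dilation:j * dilation + W]
    return out

def large_kernel_attention(x, dw_kernel, dwd_kernel, pw_weight, dilation=3):
    """LKA-style block (hypothetical sketch): a large receptive field is
    approximated by depthwise conv -> dilated depthwise conv -> 1x1 conv,
    and the result gates the input elementwise."""
    attn = depthwise_conv2d(x, dw_kernel)                   # local context
    attn = depthwise_conv2d(attn, dwd_kernel, dilation)     # long-range context
    C, H, W = attn.shape
    attn = (pw_weight @ attn.reshape(C, -1)).reshape(C, H, W)  # channel mixing
    return x * attn                                         # attention gating

# Example: 3 channels, 10x10 feature map, random small weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 10, 10))
dw = rng.normal(size=(3, 5, 5)) * 0.1    # 5x5 depthwise
dwd = rng.normal(size=(3, 7, 7)) * 0.1   # 7x7 depthwise, dilation 3
pw = rng.normal(size=(3, 3)) * 0.1       # 1x1 pointwise (channel mix)
y = large_kernel_attention(x, dw, dwd, pw)
print(y.shape)  # (3, 10, 10): output keeps the input's spatial size
```

With a 5 × 5 depthwise kernel followed by a 7 × 7 depthwise kernel at dilation 3, the stack covers an effective 21 × 21 receptive field at a fraction of the parameter cost of a dense 21 × 21 convolution, which is the efficiency argument behind large-kernel designs.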
Year
2022
DOI
10.1016/j.knosys.2022.109376
Venue
Knowledge-Based Systems
Keywords
Image super-resolution, Large kernel attention, Feature refinement
DocType
Journal
Volume
252
ISSN
0950-7051
Citations
0
PageRank
0.34
References
0
Authors
4
Name          Order  Citations  PageRank
Hao Feng      1      0          0.34
Liejun Wang   2      0          0.34
Yongming Li   3      0          0.34
Anyu Du       4      4          4.19