Title
Simple Yet Effective Way for Improving the Performance of Lossy Image Compression
Abstract
Lossy image compression methods based on deep neural networks (DNNs) include a quantization step between the encoder and decoder networks as an essential part of increasing the compression rate. However, the quantization operation impedes the flow of gradients and often disturbs the optimal learning of the encoder, which results in distortion in the reconstructed images. To alleviate this problem, this paper presents a simple yet effective way to enhance the performance of lossy image compression without imposing training overhead or modifying the original network architecture. In the proposed method, we utilize an auxiliary branch, called a shortcut, which directly connects the encoder and decoder. Since the shortcut does not include the quantization process, it supports the optimal learning of the encoder by allowing accurate gradients to flow. Furthermore, to assist the decoder, which must handle the additional feature maps obtained via the shortcut, we also propose a residual refinement unit (RRU) following the quantizer. The experimental results show that an image compression network trained with the proposed method markedly improves performance in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and multi-scale structural similarity (MS-SSIM).
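To make the idea in the abstract concrete, the sketch below shows one plausible way such a shortcut and RRU could be wired up in PyTorch. It is a minimal illustration, not the paper's implementation: the layer sizes, the straight-through quantizer, the RRU structure, and the test-time substitution for the shortcut input are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Quantize(nn.Module):
    """Rounding with a straight-through gradient estimate (a common
    stand-in; the paper's exact quantizer may differ)."""
    def forward(self, x):
        return x + (torch.round(x) - x).detach()

class RRU(nn.Module):
    """Hypothetical residual refinement unit: a small residual block
    applied to the quantized features."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class ShortcutCompressor(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.quantize = Quantize()
        self.rru = RRU(ch)
        # The decoder consumes the quantized-and-refined features
        # concatenated with the shortcut features.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1))

    def forward(self, img, use_shortcut=True):
        y = self.encoder(img)
        y_hat = self.rru(self.quantize(y))
        # Shortcut: un-quantized features bypass the quantizer, so the
        # encoder receives exact gradients during training. At test time
        # only y_hat is available, so it stands in for the shortcut here
        # (one plausible choice; the paper may handle this differently).
        side = y if use_shortcut else y_hat
        return self.decoder(torch.cat([y_hat, side], dim=1))

x = torch.rand(1, 3, 64, 64)
model = ShortcutCompressor()
loss = F.mse_loss(model(x), x)
loss.backward()  # gradients reach the encoder through the shortcut
```

Calling `loss.backward()` here back-propagates through both branches: the quantized path contributes only approximate (straight-through) gradients, while the shortcut path carries exact ones to the encoder, which is the effect the abstract describes.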
Year
2020
DOI
10.1109/LSP.2020.2982561
Venue
IEEE Signal Processing Letters
Keywords
Convolutional neural network, deep learning, image compression
DocType
Journal
Volume
27
ISSN
1070-9908
Citations
0
PageRank
0.34
References
0
Authors
5
Name               Order   Citations   PageRank
Yoon-Jae Yeo       1       5           2.17
Yong-Goo Shin      2       15          6.00
Min-Cheol Sagong   3       9           3.23
Seung Wook Kim     4       10          4.19
Sung-Jea Ko        5       1051        114.34