Title
Learned Image Compression with Fixed-point Arithmetic
Abstract
Learned image compression (LIC) has achieved coding performance superior to traditional image compression standards such as HEVC intra in terms of both PSNR and MS-SSIM. However, most LIC frameworks are based on floating-point arithmetic, which has two potential problems. First, traditional 32-bit floating-point incurs a large memory and computational cost. Second, decoding may fail because of floating-point errors that differ across encoding/decoding platforms. To solve these two problems: 1) We linearly quantize the weights in the main path to 8-bit fixed-point arithmetic, and propose a fine-tuning scheme to reduce the coding loss caused by the quantization; the analysis transform and synthesis transform are fine-tuned layer by layer. 2) We use a look-up table (LUT) for the cumulative distribution function (CDF) to avoid floating-point error. When a latent node follows a non-zero-mean Gaussian distribution, we restrict the latent value to a fixed range around the mean so that the CDF LUT can be shared across different mean values. As a result, 8-bit weight quantization achieves negligible coding loss compared with the 32-bit floating-point anchor. In addition, the proposed CDF LUT ensures correct coding across various CPU and GPU hardware platforms.
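The abstract outlines two mechanisms: per-layer linear quantization of the transform weights to 8-bit fixed point (followed by layer-by-layer fine-tuning), and a shared integer CDF look-up table obtained by clipping each latent to a fixed range around its predicted mean. The sketch below illustrates both ideas in Python/NumPy; the function names, the symmetric per-layer scale, the half-range of 32, and the 16-bit CDF precision are illustrative assumptions, not the authors' actual implementation.

import math
import numpy as np

def quantize_weights_int8(w):
    # Symmetric per-layer linear quantization of a float32 weight tensor to
    # 8-bit fixed point (assumed scheme; the paper fine-tunes the analysis
    # and synthesis transforms layer by layer afterwards to recover the loss).
    scale = max(float(np.abs(w).max()) / 127.0, 1e-12)
    w_q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return w_q, scale          # dequantize during fine-tuning as w_q * scale

def build_cdf_lut(sigma, half_range=32, precision=16):
    # Pre-compute an integer CDF table for a zero-mean Gaussian with the
    # given sigma. Latents are assumed to be clipped to
    # [mu - half_range, mu + half_range], so the same table can be reused
    # for any predicted mean mu by indexing with (x - mu).
    edges = np.arange(-half_range, half_range + 1) + 0.5   # bin upper edges
    cdf = np.array([0.5 * (1.0 + math.erf(e / (sigma * math.sqrt(2.0))))
                    for e in edges])
    cdf = np.concatenate(([0.0], cdf))
    cdf[-1] = 1.0
    # A real range coder would also enforce strictly increasing entries.
    return np.round(cdf * (1 << precision)).astype(np.int64)

def lut_symbol(x, mu, half_range=32):
    # Map a latent x with predicted mean mu to an index into the shared LUT,
    # clipping the offset to the allowed range around the mean.
    offset = int(np.clip(round(x - mu), -half_range, half_range))
    return offset + half_range

With sigma estimated per latent element, encoder and decoder can both rebuild the same integer table, so no floating-point CDF evaluation is needed at decode time, which is the property the paper relies on for cross-platform correctness.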
Year
2021
DOI
10.1109/PCS50896.2021.9477496
Venue
2021 Picture Coding Symposium (PCS)
Keywords
Image compression, neural networks, quantization, fixed-point, fine-tuning
DocType
Conference
ISSN
2330-7935
ISBN
978-1-6654-3078-4
Citations
2
PageRank
0.46
References
0
Authors
3
Name          Order  Citations  PageRank
Heming Sun    1      92         22.50
Lu Yu         2      444        55.90
Jiro Katto    3      262        66.14