Abstract
---
Vector quantization is a popular data compression technique because of its theoretical advantage over scalar quantization: it can exploit the dependencies between neighboring samples. However, the complexity of the encoding process limits the size of the codebook and/or the dimensions of the processed blocks. In this paper, we show that this complexity can be conveniently distributed, as subcodebooks, over general-purpose MIMD parallel processors, providing almost linearly scalable throughput and flexible configurability. A particular advantage of this approach is that it makes the use of higher-dimensional image blocks and/or larger codebooks feasible, leading to improved coding performance with no penalty in execution speed compared with the original sequential implementation. As an example, we show that an implementation with 12 transputers using 8 × 8 blocks and 4096 codebook entries reduces the bit-rate by a factor of 2.625 and runs faster than a sequential implementation based upon 4 × 4 blocks and 256 codebook entries, while producing a similar PSNR.
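The distribution scheme the abstract describes can be illustrated with a minimal sketch: the codebook is partitioned into subcodebooks, each searched independently (one per processor in the paper's MIMD setting), and the partial winners are merged. All names below are hypothetical; the paper's actual implementation ran on transputers, not Python.

```python
def distortion(block, codeword):
    # Squared Euclidean distance between an image block and a codeword.
    return sum((b - c) ** 2 for b, c in zip(block, codeword))

def encode_block(block, subcodebooks):
    # Each "worker" searches only its own subcodebook; merging the partial
    # winners divides the full-search complexity across the workers.
    best_index, best_dist = None, float("inf")
    for part, subcodebook in enumerate(subcodebooks):
        for local_index, codeword in enumerate(subcodebook):
            d = distortion(block, codeword)
            if d < best_dist:
                best_dist = d
                best_index = part * len(subcodebook) + local_index
    return best_index

# Toy example: 4 codewords split across 2 subcodebooks, 2-D "blocks".
codebook = [(0, 0), (0, 8), (8, 0), (8, 8)]
subcodebooks = [codebook[:2], codebook[2:]]
print(encode_block((7, 1), subcodebooks))  # nearest codeword is (8, 0) -> index 2
```

Because each subcodebook search is independent, throughput scales almost linearly with the number of processors, which is what makes the larger 8 × 8 / 4096-entry configuration competitive in speed with a sequential 4 × 4 / 256-entry encoder.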
Year | DOI | Venue |
---|---|---|
1996 | 10.1006/rtim.1996.0025 | Real-Time Imaging |
Keywords | Field | DocType
---|---|---
scalable parallel approach, vector quantization | Population, Linde–Buzo–Gray algorithm, Computer science, Parallel computing, Real-time computing, Theoretical computer science, Vector quantization, Data compression, Encoding (memory), MIMD, Scalability, Codebook | Journal
Volume | Issue | ISSN
---|---|---
2 | 4 | Real-Time Imaging
Citations | PageRank | References
---|---|---
8 | 0.59 | 2
Authors (3)
---
Name | Order | Citations | PageRank |
---|---|---|---|
Aysegul Cuhadar | 1 | 63 | 5.65 |
Demetrios G. Sampson | 2 | 1310 | 247.68 |
Andy Downton | 3 | 110 | 17.35 |