Title
Generalized Deduplication: Lossless Compression by Clustering Similar Data
Abstract
This paper proposes generalized deduplication, a concept where similar data is systematically deduplicated by first transforming chunks of each file into two parts: a basis and a deviation. This increases the potential for compression, as more chunks can share a common basis that can be deduplicated by the system. The deviation is kept small and stored together with an identifier for its chunk, e.g., the hash of the chunk, in order to recover the original data without errors or distortions. This paper characterizes the performance of generalized deduplication using Golomb-Rice codes as a suitable data transform function to discover similarities across all files stored in the system. Considering different synthetic data distributions, we show in theory and simulations that generalized deduplication can result in compression factors of 300 (high compression), i.e., 300 times less storage space, and that this compression is achieved with 60,000 times fewer data chunks inserted into the system compared to classic deduplication (compression gains start earlier). Finally, we show that the table/registry used to recognize similar chunks is 10,000 times smaller for generalized deduplication than the table in classic deduplication techniques, which results in less RAM usage in the storage system.
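The basis/deviation split described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the Rice parameter `k`, the class and method names, and the integer-chunk representation are all illustrative assumptions. Here each chunk (modeled as an integer) is split Golomb-Rice style into high-order bits (the basis, deduplicated via a registry) and low-order bits (the deviation, stored per chunk), so similar chunks collapse onto one shared basis while remaining losslessly recoverable.

```python
# Hedged sketch of generalized deduplication with a Golomb-Rice-style
# transform. All names and parameters below are illustrative assumptions,
# not taken from the paper.

RICE_K = 4  # assumed Rice parameter: low-order bits kept as the deviation


def rice_split(chunk: int, k: int = RICE_K) -> tuple[int, int]:
    """Split an integer chunk into a basis (high bits) and a deviation (low bits)."""
    basis = chunk >> k                    # shared by chunks differing only in low bits
    deviation = chunk & ((1 << k) - 1)    # small per-chunk residual
    return basis, deviation


def rice_join(basis: int, deviation: int, k: int = RICE_K) -> int:
    """Losslessly reassemble the original chunk from its two parts."""
    return (basis << k) | deviation


class GeneralizedDedupStore:
    """Toy store: deduplicates bases, keeps (basis id, deviation) per chunk."""

    def __init__(self) -> None:
        self.basis_id: dict[int, int] = {}  # registry: basis value -> basis id
        self.basis_values: list[int] = []   # basis id -> basis value
        self.recipes: list[tuple[int, int]] = []  # per-chunk (basis id, deviation)

    def put(self, chunk: int) -> int:
        basis, dev = rice_split(chunk)
        if basis not in self.basis_id:      # new basis: register it once
            self.basis_id[basis] = len(self.basis_values)
            self.basis_values.append(basis)
        self.recipes.append((self.basis_id[basis], dev))
        return len(self.recipes) - 1        # chunk index

    def get(self, idx: int) -> int:
        bid, dev = self.recipes[idx]
        return rice_join(self.basis_values[bid], dev)


# Three "similar" chunks: identical high bits, differing low bits.
store = GeneralizedDedupStore()
chunks = [0b10110000, 0b10110011, 0b10110111]
ids = [store.put(c) for c in chunks]
assert [store.get(i) for i in ids] == chunks  # lossless recovery
assert len(store.basis_values) == 1           # one shared basis for all three
```

Classic deduplication would store all three chunks separately (no two are byte-identical); here the registry holds a single basis, which is the mechanism behind the smaller table and earlier compression gains the abstract reports.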
Year: 2019
DOI: 10.1109/CloudNet47604.2019.9064140
Venue: 2019 IEEE 8th International Conference on Cloud Networking (CloudNet)
Keywords: generalized deduplication, Golomb-Rice, geometric distribution, data deduplication
DocType: Conference
ISSN: 2374-3239
ISBN: 978-1-7281-4833-5
Citations: 0
PageRank: 0.34
References: 8
Authors: 2
Name             | Order | Citations | PageRank
Prasad Talasila  | 1     | 0         | 0.68
Daniel E. Lucani | 2     | 2364      | 2.29