Title
Addressing Hadoop's Small File Problem With an Appendable Archive File Format
Abstract
Hadoop is widely used for data analytics tasks in various domains, and data volumes are expected to grow further in the coming years. Hadoop recently introduced the concept of Archival Storage, an automated tiered storage technique that increases capacity for long-term storage. However, the scalability of the Hadoop Distributed File System (HDFS) is limited by the total number of files it can store, and this number is likely to grow quickly when HDFS is used for archival purposes. This paper presents an approach for improving HDFS's scalability when it serves as archival storage. We present a tool that extends Hadoop Archive to an appendable file format: new files are appended to one of the existing archive data files efficiently, without rewriting the whole archive. To this end, a first-fit algorithm fills up the often underutilized fixed-size data blocks of the archive data files, and the index files are updated using a red-black tree, which guarantees logarithmic lookup and insert performance. We show that the tool performs well for different archive sizes and numbers of files to add. By distributing new files efficiently, we also reduce the number of data blocks needed for archiving and thus the memory footprint on the NameNode.
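As a rough, self-contained illustration of the mechanism the abstract describes, the Java sketch below models the two core data structures: a first-fit search over the partially filled last blocks of the archive's data ("part") files, and a red-black tree index (java.util.TreeMap, which is implemented as a red-black tree and therefore gives the guaranteed logarithmic bounds mentioned above). All class and method names here are hypothetical; the actual tool operates on HDFS part files and Hadoop Archive index files rather than on in-memory objects.

import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

/** Hypothetical in-memory model of an appendable archive; not the paper's implementation. */
public class AppendableArchiveSketch {

    static final long BLOCK_SIZE = 128L * 1024 * 1024; // HDFS default block size (128 MiB)

    /** One archive data ("part") file; only its length matters for block accounting. */
    static class PartFile {
        final String name;
        long length; // bytes currently stored

        PartFile(String name, long length) { this.name = name; this.length = length; }

        /** Free space left in the last, partially filled block of this part file. */
        long tailFreeSpace() {
            long used = length % BLOCK_SIZE;
            return used == 0 ? 0 : BLOCK_SIZE - used;
        }
    }

    /** Index entry: where an archived file lives (part file, offset, length). */
    record Entry(String partFile, long offset, long length) {}

    final List<PartFile> parts = new ArrayList<>();
    // Red-black tree index: sorted keys, O(log n) lookup and insert.
    final TreeMap<String, Entry> index = new TreeMap<>();

    /** Append a new file: first fit into an existing tail block, else allocate new block(s). */
    void append(String fileName, long fileLength) {
        for (PartFile p : parts) {
            if (fileLength <= p.tailFreeSpace()) { // first fit: take the first part file whose
                place(p, fileName, fileLength);    // last block still has enough room
                return;
            }
        }
        // No tail block fits: append to the first part file, allocating new block(s).
        // (This fallback policy is an assumption; the paper may choose differently.)
        if (parts.isEmpty()) parts.add(new PartFile("part-0", 0));
        place(parts.get(0), fileName, fileLength);
    }

    void place(PartFile p, String fileName, long fileLength) {
        index.put(fileName, new Entry(p.name, p.length, fileLength));
        p.length += fileLength;
    }

    public static void main(String[] args) {
        AppendableArchiveSketch archive = new AppendableArchiveSketch();
        archive.append("logs/2017-01-01.log", 3L * 1024 * 1024);
        archive.append("logs/2017-01-02.log", 200L * 1024 * 1024);
        System.out.println(archive.index.get("logs/2017-01-01.log"));
    }
}

The first-fit scan stops at the first part file whose tail block has enough room, so existing, partially filled blocks are consumed before new blocks are allocated; fewer blocks overall is what reduces the metadata footprint on the NameNode.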
Year
2017
DOI
10.1145/3075564.3078888
Venue
Conf. Computing Frontiers
Keywords
File Systems, Hadoop Distributed File System, HDFS, Metadata Management, Archival Storage
Field
Distributed File System, File format, File Control Block, Computer science, Parallel computing, Archive file, Data file, Automated tiered storage, File system fragmentation, Operating system, Database, Computer file
DocType
Conference
Citations
0
PageRank
0.34
References
9
Authors
4
Name             Order  Citations  PageRank
Thomas Renner    1      18         5.47
Johannes Müller  2      0          0.34
Lauritz Thamsen  3      43         9.26
Odej Kao         4      1066       96.19