Title
Hadoop Massive Small File Merging Technology Based on Visiting Hot-Spot and Associated File Optimization.
Abstract
Hadoop Distributed File System (HDFS) is designed to reliably store and manage large-scale files. All the files in HDFS are managed by a single server, the NameNode, which keeps metadata in its main memory for every file stored in HDFS. HDFS therefore suffers a performance penalty as the number of small files grows: a mass of small files places a heavy burden on the NameNode, and the number of files that HDFS can store is constrained by the size of the NameNode's main memory. To improve the efficiency of storing and accessing small files on HDFS, we propose the Small Hadoop Distributed File System (SHDFS), which is built on the original HDFS. Compared to the original HDFS, SHDFS adds two novel modules: a merging module and a caching module. In the merging module, a correlated-files model is proposed; it finds correlated files by user-based collaborative filtering and then merges them into a single large file to reduce the total number of files. In the caching module, a log-linear model is used to identify hot-spot data that users frequently access, and a special memory subsystem is designed to cache these hot-spot data. This caching mechanism speeds up access to hot-spot data.
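The abstract does not give the exact formulation of the correlated-files model, so the following is only a minimal sketch of the merging idea: it treats each file as a binary access vector over users, takes cosine similarity between those vectors as a stand-in correlation measure, and greedily groups files above a threshold into merge candidates. The class name, the 0.5 threshold, and the binary access matrix are all illustrative assumptions, not the paper's method.

```java
import java.util.*;

/** Illustrative sketch: group small files whose user-access patterns are
 *  similar, so each group can be merged into one large HDFS file.
 *  The access matrix, cosine measure, and threshold are assumptions;
 *  the paper's exact correlated-files model is not reproduced here. */
public class CorrelatedFileMerger {

    /** Cosine similarity between two binary access vectors. */
    static double cosine(boolean[] a, boolean[] b) {
        int dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] && b[i]) dot++;
            if (a[i]) na++;
            if (b[i]) nb++;
        }
        return (na == 0 || nb == 0) ? 0.0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    /** Greedily group files whose pairwise similarity exceeds the threshold. */
    static List<List<Integer>> groupCorrelated(boolean[][] accessByFile, double threshold) {
        int n = accessByFile.length;
        boolean[] used = new boolean[n];
        List<List<Integer>> groups = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            if (used[i]) continue;
            List<Integer> group = new ArrayList<>(List.of(i));
            used[i] = true;
            for (int j = i + 1; j < n; j++) {
                if (!used[j] && cosine(accessByFile[i], accessByFile[j]) >= threshold) {
                    group.add(j);
                    used[j] = true;
                }
            }
            groups.add(group);
        }
        return groups;
    }

    public static void main(String[] args) {
        // Rows: files; columns: users; true = this user accessed this file.
        boolean[][] access = {
            {true,  true,  false},
            {true,  true,  false},
            {false, false, true}
        };
        // Files 0 and 1 share the same users, so they form one merge group:
        System.out.println(groupCorrelated(access, 0.5));  // [[0, 1], [2]]
    }
}
```

Each resulting group would then be concatenated into a single large file (with an index mapping small-file names to offsets), which is what actually reduces the NameNode's metadata load.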
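For the caching module, a log-linear model means the log of a file's hotness score is linear in its access features. The sketch below assumes two features (access frequency and recency) with hand-picked weights and simply caches the top-k scorers; the paper's actual features, weights, and memory subsystem design are not given in the abstract.

```java
import java.util.*;

/** Illustrative sketch of the caching idea: rank files by a log-linear
 *  score over access features and keep the top-k "hot" files in memory.
 *  Feature choice (frequency, recency) and weights are assumptions. */
public class HotSpotCache {
    record FileStats(String name, double accessCount, double hoursSinceAccess) {}

    /** Log-linear score: log s = w0 + w1*log(1 + count) - w2*age. */
    static double score(FileStats f, double w0, double w1, double w2) {
        return Math.exp(w0 + w1 * Math.log(1 + f.accessCount()) - w2 * f.hoursSinceAccess());
    }

    /** Return the k highest-scoring files, i.e. the hot set to cache. */
    static List<String> hotSet(List<FileStats> stats, int k) {
        return stats.stream()
            .sorted(Comparator.comparingDouble((FileStats f) -> -score(f, 0.0, 1.0, 0.1)))
            .limit(k)
            .map(FileStats::name)
            .toList();
    }

    public static void main(String[] args) {
        List<FileStats> stats = List.of(
            new FileStats("a.dat", 120, 1.0),   // frequent and recent -> hot
            new FileStats("b.dat", 3,   0.5),   // recent but rarely accessed
            new FileStats("c.dat", 95,  24.0)); // frequent but stale
        // With these illustrative weights this prints [a.dat, c.dat].
        System.out.println(hotSet(stats, 2));
    }
}
```

Scoring in log space keeps the frequency and recency contributions multiplicative, so a stale file needs a much higher access count to stay in the hot set.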
Year
2018
Venue
BICS
Field
Distributed File System, Small files, Hot spot, Metadata, Collaborative filtering, Cache, Computer science, Merge, Operating system
DocType
Conference
Citations
1
PageRank
0.35
References
10
Authors
7
Name            Order  Citations  PageRank
Jian-feng Peng  1      1          0.35
Wenguo Wei      2      6          0.82
Huimin Zhao     3      206        23.43
Qingyun Dai     4      148        23.91
Gui-yuan Xie    5      1          0.35
Jun Cai         6      1          1.03
Ke-jing He      7      1          0.35