Title
Improved performance optimization for massive small files in cloud computing environment.
Abstract
Hadoop stores big data in the Hadoop Distributed File System (HDFS) and processes it with MapReduce in cloud computing environments. Because Hadoop is optimized for relatively large files, it has difficulty handling large numbers of small files, where a small file is any file significantly smaller than the Hadoop block size (typically 64 MB). When processing many small files, Hadoop therefore suffers from NameNode memory insufficiency and increased scheduling and processing time. This study proposes a performance improvement method for MapReduce processing that integrates the CombineFileInputFormat method with the reuse feature of the Java Virtual Machine (JVM). Existing methods create a mapper for every small file. In contrast, the proposed method reduces the number of mappers by combining many files into a single split using CombineFileInputFormat. Moreover, to further improve MapReduce performance, the proposed method reduces JVM creation time by reusing a single JVM to run multiple mappers rather than creating a new JVM for every mapper.
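The abstract combines two standard Hadoop mechanisms, and a minimal driver can illustrate how they fit together. The sketch below is an assumption-laden illustration, not the paper's implementation (which is not published here): the class and mapper names are hypothetical, CombineTextInputFormat (a concrete subclass of CombineFileInputFormat) is used to pack many small files into each input split, and the MRv1 property mapred.job.reuse.jvm.num.tasks set to -1 lets one task JVM run an unlimited number of map tasks. Note that YARN (MRv2) ignores the JVM-reuse property, so the sketch assumes a classic MRv1 cluster.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver name; stands in for whatever job the paper evaluates.
public class CombineSmallFilesDriver {

    // Pass-through mapper: emits each input line unchanged. A real job
    // would do its per-record work here.
    public static class LineMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // MRv1 JVM-reuse knob: -1 lets one task JVM run an unlimited
        // number of map tasks instead of forking a new JVM per task.
        conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);

        Job job = Job.getInstance(conf, "combine-small-files");
        job.setJarByClass(CombineSmallFilesDriver.class);

        // Pack many small files into each split so a single mapper
        // processes a whole group of files rather than one file.
        job.setInputFormatClass(CombineTextInputFormat.class);
        // Cap each combined split at the 64 MB block size cited above.
        CombineTextInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);

        job.setMapperClass(LineMapper.class);
        job.setNumReduceTasks(0); // map-only, for brevity
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With the 64 MB cap, the job creates roughly one mapper per 64 MB of combined input rather than one mapper per file, which is the mapper-count reduction the abstract describes.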
Year
2018
DOI
10.1007/s10479-016-2376-0
Venue
Annals OR
Keywords
Massive small files, Hadoop, MapReduce, JVM reuse, CombineFileInputFormat
Field
Small files, Distributed File System, Block size, Computer science, Reuse, Scheduling (computing), Parallel computing, Big data, Operating system, Cloud computing, Performance improvement
DocType
Journal
Volume
265
Issue
2
ISSN
1572-9338
Citations
2
PageRank
0.37
References
7
Authors
4
Name            Order  Citations  PageRank
Chang Choi      1      261        39.04
Chulwoong Choi  2      2          0.37
Junho Choi      3      366        60.87
Pan-Koo Kim     4      199        31.13