Abstract |
---|
The performance of a distributed file system significantly affects data-intensive applications that frequently execute I/O operations on large amounts of data. Although many modern distributed file systems are designed to provide highly efficient I/O performance, their operations are nonetheless affected by runtime overhead in data transfer between client nodes and I/O servers. A large part of this overhead is caused by memory copies executed in the client interface, which uses the FUSE framework or a special kernel module. In this paper, we propose a method based on InfiniBand RDMA that improves data transfer performance between client and server in a distributed file system. The key characteristic of the method is that it transfers file data directly from a server's memory to the page cache of a client node, minimizing the memory copies that are otherwise executed in the client interface or the operating system kernel. We implemented the proposed method in the Gfarm distributed file system and evaluated it using I/O benchmark software and real applications. The experimental results showed that our method achieved performance improvements of up to 78.4% and 256.0% in sequential and random file reads, respectively, and of up to 6.3% in data-intensive applications. |
Year | DOI | Venue |
---|---|---|
2015 | 10.1109/CLUSTER.2015.40 | Cluster Computing |
Keywords | Field | DocType
---|---|---|
distributed file systems, InfiniBand, RDMA, high-performance computing, storage | Distributed File System, SSH File Transfer Protocol, File Control Block, Virtual file system, Self-certifying File System, Computer science, Parallel computing, Device file, Versioning file system, Memory-mapped file, Operating system | Conference
ISSN | Citations | PageRank
---|---|---|
1552-5244 | 1 | 0.35
References | Authors
---|---|
17 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Shin Sasaki | 1 | 5 | 1.47 |
Kazushi Takahashi | 2 | 3 | 1.42 |
Yoshihiro Oyama | 3 | 243 | 20.62 |
Osamu Tatebe | 4 | 309 | 42.94 |