Abstract |
---|
GPGPU has attracted attention for its arithmetic performance, but the difficulty of programming it remains an obstacle. We have proposed a GPGPU programming approach that uses existing parallel programming techniques. As a concrete realization of this proposal, we are developing an OpenMP framework for GPUs based on the Omni OpenMP Compiler, named “OMPCUDA”. In this paper we describe the design and implementation of OMPCUDA. We evaluated it with test programs and confirmed that parallel speedup can be obtained easily with the same code as existing OpenMP programs. |
Year | DOI | Venue |
---|---|---|
2010 | 10.1007/978-3-642-13217-9_13 | IWOMP |
Keywords | Field | DocType |
---|---|---|
test program, omni openmp compiler, arithmetic performance, existing parallel programming technique, openmp execution framework, parallel improvement, existing openmp, openmp framework, gpgpu programming | Computer architecture, Programming language, Shared memory, CUDA, Computer science, Parallel computing, Compiler, Runtime library, General-purpose computing on graphics processing units | Conference |
Volume | ISSN | ISBN |
---|---|---|
6132 | 0302-9743 | 3-642-13216-2 |
Citations | PageRank | References |
---|---|---|
7 | 0.78 | 4 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Satoshi Ohshima | 1 | 53 | 8.47 |
Shoichi Hirasawa | 2 | 21 | 8.38 |
Hiroki Honda | 3 | 84 | 8.72 |