Abstract
---
To make scientific middleware and applications more scalable, they must be designed to exploit the evolving multi-core processor architectures available in grid and cloud computing environments. In this paper, we analyze processing and scheduling techniques on multi-core architectures based on scientific data characteristics and access patterns. More specifically, we conduct fine-grained analysis of scientific datasets, such as HDF5, to make effective processing and scheduling decisions in multi-threaded programs. We present a performance analysis of how processing threads can be scheduled on multi-core nodes to enhance the performance of scientific applications that process HDF5 data. To accomplish this, we introduce a dynamic marking scheme that tracks the progress of threads on each core; this information can guide work allocation, reducing overall application execution time.
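The abstract names but does not detail the dynamic marking scheme. A minimal illustrative sketch of the general idea, assuming per-worker progress "marks" guide which worker receives the next chunk of work (all names and the chunk simulation are hypothetical; a real implementation would read actual HDF5 chunks, e.g. via h5py):

```python
import threading
from queue import Queue

NUM_WORKERS = 4
# Simulated dataset chunks; a real version would read HDF5 chunks instead.
CHUNKS = [list(range(i * 100, (i + 1) * 100)) for i in range(32)]

progress = [0] * NUM_WORKERS          # "marks": chunks completed per worker
lock = threading.Lock()
queues = [Queue() for _ in range(NUM_WORKERS)]
results = []

def worker(wid: int) -> None:
    while True:
        chunk = queues[wid].get()
        if chunk is None:             # sentinel: no more work
            return
        total = sum(chunk)            # stand-in for real chunk processing
        with lock:
            progress[wid] += 1        # update this worker's mark
            results.append(total)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

# Dispatcher: hand each chunk to the worker with the fewest completed
# marks plus pending items, a crude proxy for "least loaded".
for chunk in CHUNKS:
    with lock:
        target = min(range(NUM_WORKERS),
                     key=lambda i: progress[i] + queues[i].qsize())
    queues[target].put(chunk)

for q in queues:
    q.put(None)
for t in threads:
    t.join()

print(len(results), sum(progress))    # → 32 32
```

The marks here are simple counters updated under a lock; the paper's scheme may track richer per-core state, but the principle of using observed per-thread progress to steer work allocation is the same.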

Year | DOI | Venue
---|---|---
2013 | 10.1109/AINA.2013.153 | AINA

Keywords | Field | DocType
---|---|---
processing hdf5 datasets, scientific data characteristic, scientific datasets, scientific application, multi-core architectures, multi-core node, various processing, multi-core architecture, effective processing, scientific middleware, processing thread, multi-core processor, hierarchical data format, multi core, grid computing, multi threading, hdf5, middleware, cloud computing, multithreaded programming | Middleware, Multithreading, Grid computing, Computer science, Scheduling (computing), Thread (computing), Multi-core processor, Distributed computing, Scalability, Cloud computing | Conference

ISSN | Citations | PageRank
---|---|---
1550-445X | 0 | 0.34

References | Authors
---|---
15 | 3

Name | Order | Citations | PageRank
---|---|---|---
Rajdeep Bhowmik | 1 | 21 | 4.21
Jessica Hartog | 2 | 46 | 4.31
Madhusudhan Govindaraju | 3 | 854 | 96.53