Abstract |
---|
In the parallel execution of queries on Non-Uniform Memory Access (NUMA) machines, the operating system maps database processes/threads (i.e., workers) to the available cores across the NUMA nodes. However, when workers and data are allocated on different NUMA nodes, this mapping results in poor cache activity, many minor page faults, and slower query response time: the system must move large volumes of data across the NUMA nodes to catch up with the running workers. Our hypothesis is that we can mitigate this data movement, boosting cache hits and response time, by handing the system only the local optimum number of cores instead of all the available ones. In this paper we present a Petri net mechanism that models the load of the database workers to dynamically compute and allocate the local optimum number of CPU cores for that load. Preliminary results show that data movement diminishes with the local optimum number of CPU cores. |
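The abstract's core idea, handing the operating system only a local optimum subset of cores rather than all of them, can be illustrated with CPU affinity. This is a minimal Linux-only sketch under stated assumptions, not the paper's mechanism: the helper name `allocate_cores` and the use of `os.sched_setaffinity` are illustrative, and the paper derives the number of cores dynamically from a Petri net model of the worker load rather than taking it as a fixed argument.

```python
import os

def allocate_cores(num_cores: int) -> None:
    """Pin the current process (and its worker threads) to the first
    `num_cores` CPUs it may run on, instead of all available ones.

    Illustrative helper only: the paper computes `num_cores`
    dynamically from a Petri net model of the database worker load.
    """
    # CPUs the scheduler currently allows this process to use.
    available = sorted(os.sched_getaffinity(0))
    # Restrict the process to a local-optimum-sized subset of them.
    os.sched_setaffinity(0, set(available[:num_cores]))

# Hand only 4 cores to the database workers (fewer if the machine has
# fewer), so the OS stops spreading them across all NUMA nodes.
allocate_cores(4)
```

With fewer cores handed out, the scheduler keeps workers closer to the data they touch, which is what the preliminary results attribute the reduced data movement to.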
Year | DOI | Venue |
---|---|---
2017 | http://dl.acm.org/citation.cfm?doid=3076113.3076121 | DaMoN |
Field | DocType | ISSN
---|---|---
Computer science, Local optimum, Cache, Parallel computing, Response time, Cache-only memory architecture, Real-time computing, Thread (computing), Page fault, Online analytical processing, Multi-core processor, Operating system | Conference | 7
Citations | PageRank | References
---|---|---
0 | 0.34 | 10
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---
Simone Dominico | 1 | 0 | 0.68 |
Eduardo Cunha De Almeida | 2 | 64 | 17.33 |
Jorge Augusto Meira | 3 | 18 | 5.77 |