Abstract
---
Network throughput is scaling "up" to higher data transfer rates while processors are scaling "out" to multiple cores. As a result, network adapter "offloads" and performance "tuning" have received a good deal of attention lately. However, much of this attention focuses on the "how" rather than the "why" of performance efficiency. There are two efficiency mechanisms that we have found particularly intriguing: First, processor core "affinity," or "binding," is fundamentally the choice of which processor core or cores handle certain tasks in a network- or I/O-heavy application running on a MIMD machine. Second, Ethernet "pause frames" slightly violate the "end-to-end" nature of TCP/IP in order to perform link-to-link flow control. The goal of our research is to delve deeper into why these tuning suggestions and this offload exist, and how they affect the end-to-end performance and efficiency of a single, large TCP flow.
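
The record above carries no code, but the core "binding" the abstract describes maps directly onto the Linux sched_setaffinity(2) interface. The sketch below is illustrative only and not drawn from the paper; the choice of core 2 and the single-threaded setup are assumptions for demonstration.

```c
/* Minimal sketch of processor core affinity ("binding") on Linux:
 * pin the calling thread to one core with sched_setaffinity(2).
 * Core 2 is an arbitrary illustrative choice, not from the paper. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);               /* request core 2 only */

    /* pid 0 means "the calling thread" */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    printf("pinned to core 2; network-heavy work now runs there\n");
    return EXIT_SUCCESS;
}
```

The second mechanism, Ethernet pause frames, is typically toggled from user space rather than application code; on Linux, `ethtool -A <dev> rx on tx on` enables receive and transmit link-level flow control on an interface (a standard ethtool option, not something specified by this paper).
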
Year | DOI | Venue
---|---|---
2014 | 10.1145/2658260.2661772 | ANCS

Keywords | Field | DocType
---|---|---
40gbps network, high-speed networks, miscellaneous, rfs, affinitization, network protocols, network performance analysis, core binding, rps, esnet | Computer science, Computer network, Real-time computing, Link layer, Ethernet, Flow control (data), Throughput, End system, Network interface controller, Multi-core processor, Performance tuning, Embedded system | Conference

ISBN | Citations | PageRank
---|---|---
978-1-4799-6534-2 | 1 | 0.36

References | Authors
---|---
3 | 7

Name | Order | Citations | PageRank
---|---|---|---
Nathan Hanford | 1 | 11 | 2.33 |
Vishal Ahuja | 2 | 25 | 3.76 |
Matthew Farrens | 3 | 515 | 69.21 |
Dipak Ghosal | 4 | 2848 | 163.40 |
Mehmet Balman | 5 | 144 | 10.73 |
Eric Pouyoul | 6 | 82 | 10.27 |
Brian Tierney | 7 | 611 | 70.38 |