Title: Direct Cache Access for High Bandwidth Network I/O
Abstract: Recent I/O technologies such as PCI Express and 10 Gb Ethernet enable unprecedented levels of I/O bandwidth in mainstream platforms. However, in traditional architectures, memory latency alone can prevent processors from matching 10 Gb inbound network I/O traffic. We propose a platform-wide method called Direct Cache Access (DCA) to deliver inbound I/O data directly into processor caches. We demonstrate that DCA provides a significant reduction in memory latency and memory bandwidth for receive-intensive network I/O applications. Analysis of benchmarks such as SPECweb99, TPC-W and TPC-C shows that the overall benefit depends on the relative volume of I/O to memory traffic as well as the spatial and temporal relationship between processor and I/O memory accesses. A system-level perspective for the efficient implementation of DCA is presented.
Year: 2005
DOI: 10.1109/ISCA.2005.23
Venue: ISCA
Keywords: bandwidth, pci express, memory latency, processor cache, throughput, tcpip, ethernet, local area networks, protocols, computer architecture, internet, application software, memory bandwidth
Field: Multipath I/O, Memory bandwidth, Computer science, CPU cache, Parallel computing, Computer network, Real-time computing, Input/output, Direct memory access, I/O Acceleration Technology, Memory-mapped I/O, Memory architecture
DocType: Conference
Volume: 33
Issue: 2
ISSN: 0163-5964
ISBN: 0-7695-2270-X
Citations: 84
PageRank: 4.26
References: 8
Authors: 3
Name                 Order  Citations  PageRank
Ram Huggahalli       1      358        20.94
Ravishankar K. Iyer  2      1119       75.72
Scott Tetrick        3      84         4.26