Abstract |
---|
In this paper, the viability of the throughput and frame loss rate benchmarking procedures of RFC 8219 is tested by applying them to examine the performance of three free software SIIT (also called stateless NAT64) implementations: Jool, TAYGA, and map646. An important methodological problem of the two tested benchmarking procedures is pointed out: they use an improper timeout setting. As a remedy, checking the timeout individually for each frame is proposed to obtain more reasonable results, and its feasibility is demonstrated. The unreliability of the results caused by the lack of a requirement for repeated tests is also pointed out, and the need for a sufficient number of repetitions is demonstrated. The possibility of an optional non-zero frame loss acceptance criterion for the throughput measurement is also discussed. The benchmarking measurements are performed on two different hardware platforms, and all relevant results are disclosed and compared. The performance of the kernel-based Jool was found to scale up well with the number of active CPU cores, and Jool also significantly outperformed the other two SIIT implementations, which work in user space. |
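The throughput procedure the abstract refers to (defined in RFC 2544 and adopted by RFC 8219) is a binary search for the highest frame rate the device under test can sustain. A minimal sketch of that search, extended with the optional non-zero frame loss acceptance criterion discussed in the abstract, might look as follows; the function and parameter names, as well as the toy trial model, are illustrative assumptions and not taken from the paper or the RFCs:

```python
def throughput_binary_search(run_trial, rate_min, rate_max,
                             loss_tolerance=0.0, resolution=1):
    """Binary search for the highest frame rate (frames/s) whose measured
    loss ratio does not exceed loss_tolerance.

    loss_tolerance=0.0 reproduces the classic zero-loss throughput rule;
    a small positive value implements the optional non-zero acceptance
    criterion. run_trial(rate) must return the fraction of frames lost
    when offering traffic at 'rate'.
    """
    best = 0
    lo, hi = rate_min, rate_max
    while lo < hi:
        mid = (lo + hi) // 2
        if run_trial(mid) <= loss_tolerance:
            best = mid                 # rate sustained: search higher
            lo = mid + resolution
        else:
            hi = mid                   # rate failed: search lower
    return best

# Hypothetical device model: frames are lost only above 70,000 fps.
def fake_trial(rate, capacity=70_000):
    return 0.0 if rate <= capacity else (rate - capacity) / rate

print(throughput_binary_search(fake_trial, 1, 100_000))  # → 70000
```

With a positive `loss_tolerance`, the same search converges on a higher rate, which is exactly why the choice of acceptance criterion matters when comparing implementations.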
Year | DOI | Venue |
---|---|---|
2020 | 10.1016/j.comcom.2020.03.034 | Computer Communications |
Keywords | DocType | Volume
---|---|---
Benchmarking, IPv6 deployment, IPv6 transition solutions, SIIT, Stateless NAT64, Performance analysis | Journal | 156
ISSN | Citations | PageRank
---|---|---
0140-3664 | 1 | 0.38
References | Authors
---|---
0 | 2
Name | Order | Citations | PageRank |
---|---|---|---|
Gabor Lencse | 1 | 53 | 11.71 |
Keiichi Shima | 2 | 15 | 3.90 |