Title
Assessing computer performance with SToCS
Abstract
Several aspects of a computer system cause performance measurements to include random errors. Moreover, these systems are typically composed of a non-trivial combination of individual components that may cause one system to perform better or worse than another depending on the workload. Hence, properly measuring and comparing computer system performance is a non-trivial task. The majority of the work published at recent major computer architecture conferences does not report the random errors measured in the experiments. The few remaining authors quantify and factor out random errors using only confidence intervals or standard deviations. Recent publications claim that this approach can still lead to misleading conclusions. In this work, we reproduce and discuss the results obtained in a previous study. Finally, we propose SToCS, a tool that integrates several statistical frameworks and facilitates the analysis of computer science experiments.
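For illustration only, and not the SToCS tool itself (whose interface is not given here): a minimal Python sketch of the kind of analysis the abstract refers to, i.e. standard deviations and confidence intervals to quantify the random error of repeated benchmark runs, plus a hypothesis test (Welch's t-test) to compare two systems. The run-time samples and the system names are made-up assumptions.

```python
# Illustrative sketch: quantify random error in repeated benchmark runs with a
# confidence interval and compare two systems with a hypothesis test.
# The wall-clock times (seconds) below are hypothetical example data.
import numpy as np
from scipy import stats

system_a = np.array([12.1, 11.8, 12.4, 12.0, 12.2, 11.9, 12.3, 12.1])
system_b = np.array([11.6, 11.9, 11.7, 12.0, 11.5, 11.8, 11.6, 11.7])

def mean_ci(samples, confidence=0.95):
    """Sample mean and a t-based confidence interval for the mean."""
    mean = samples.mean()
    sem = stats.sem(samples)  # standard error of the mean
    low, high = stats.t.interval(confidence, df=len(samples) - 1,
                                 loc=mean, scale=sem)
    return mean, low, high

for name, runs in [("A", system_a), ("B", system_b)]:
    mean, low, high = mean_ci(runs)
    print(f"system {name}: mean={mean:.2f}s  95% CI=[{low:.2f}, {high:.2f}]  "
          f"std dev={runs.std(ddof=1):.2f}")

# Welch's t-test: are the two mean run times significantly different?
t_stat, p_value = stats.ttest_ind(system_a, system_b, equal_var=False)
print(f"Welch t-test: t={t_stat:.2f}, p={p_value:.4f}")
```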
Year
2013
DOI
10.1145/2479871.2479915
Venue
ICPE
Keywords
non-trivial combination, individual component, recent major computer architecture, recent publication, confidence interval, assessing computer performance, computer system cause performance, random error, non-trivial task, computer science experiment, computer systems performance, statistics, hypothesis tests
Field
Random error, Data mining, Computer performance, Workload, Computer science, Confidence interval, Standard deviation, Statistical hypothesis testing
DocType
Conference
Citations
1
PageRank
0.36
References
9
Authors
6
Name                 Order  Citations  PageRank
Leonardo Piga        1      26         3.90
Gabriel F.T. Gomes   2      1          0.36
Rafael Auler         3      20         3.19
Bruno Rosa           4      1          0.36
Sandro Rigo          5      185        24.91
Edson Borin          6      131        10.48