Pilot: A Framework that Understands How to Do Performance Benchmarks The Right Way

Appeared in Proceedings of the 24th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2016).

Abstract

Carrying out even the simplest performance benchmark requires considerable knowledge of statistics and computer systems, and painstakingly following many error-prone steps; these are distinct skill sets, yet both are essential for getting statistically valid results. As a result, many performance measurements in peer-reviewed publications are flawed. Among other problems, they fall short in one or more of the following requirements: accuracy, precision, comparability, repeatability, and control of overhead. This is a serious problem because poor performance measurements mislead system design and optimization. We propose a collection of algorithms and heuristics to automate these steps, covering the collection, storage, analysis, and comparison of performance measurements. We implement these methods as a readily usable open source software framework called Pilot, which can help reduce human error and shorten benchmarking time. Evaluation of Pilot on various benchmarks shows that it can reduce the cost and complexity of running benchmarks and can produce better measurement results.
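
To give a flavor of the kind of step the abstract says Pilot automates, below is a minimal sketch (in Python, not Pilot's actual API) of one common heuristic: keep collecting benchmark samples until the confidence interval of the mean is narrow enough relative to the mean. The function names, thresholds, and the i.i.d. assumption are illustrative only; the full framework also addresses complications, such as autocorrelated samples, that this sketch ignores.

    # Illustrative sketch only (not Pilot's API): run a benchmark repeatedly
    # until the confidence interval of the mean is tight enough.
    import math
    import random
    import statistics

    def mean_ci_halfwidth(samples, z=1.96):
        """Half-width of an approximate 95% CI for the mean, assuming
        independent, identically distributed samples."""
        s = statistics.stdev(samples)
        return z * s / math.sqrt(len(samples))

    def measure_until_precise(run_once, target_rel_precision=0.05,
                              min_rounds=5, max_rounds=1000):
        """Collect samples from run_once() until the CI half-width is at
        most target_rel_precision times the sample mean."""
        samples = []
        for _ in range(max_rounds):
            samples.append(run_once())
            if len(samples) >= min_rounds:
                m = statistics.mean(samples)
                if m != 0 and mean_ci_halfwidth(samples) <= target_rel_precision * abs(m):
                    break
        return statistics.mean(samples), samples

    if __name__ == "__main__":
        # Stand-in workload: a noisy "throughput" measurement.
        noisy_throughput = lambda: random.gauss(100.0, 8.0)
        mean, samples = measure_until_precise(noisy_throughput)
        print(f"mean = {mean:.2f} after {len(samples)} rounds")

Automating a stopping rule like this is one way to balance two of the requirements listed above, precision and control of overhead, since the benchmark stops as soon as the result is precise enough rather than running for a fixed, guessed-at duration.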

Publication date:
September 2016

Authors:
Yan Li
Yash Gupta
Ethan L. Miller
Darrell D. E. Long

Projects:
Tracing and Benchmarking
Ultra-Large Scale Storage
Storage QoS

Available media

Full paper text: PDF

BibTeX entry

@inproceedings{li-mascots16,
  author       = {Yan Li and Yash Gupta and Ethan L. Miller and Darrell D. E. Long},
  title        = {Pilot: A Framework that Understands How to Do Performance Benchmarks The Right Way},
  booktitle    = {Proceedings of the 24th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2016)},
  month        = sep,
  year         = {2016},
}