TR-08-03 Workflows for Performance Evaluation and Tuning

Jeffrey L. Tilson, Mark S.C. Reed, and Robert J. Fowler. Workflows for Performance Evaluation and Tuning. Technical Report TR-08-03, RENCI, North Carolina, May 2008.

We report our experiences with using high-throughput techniques to run large sets of performance experiments on collections of grid-accessible parallel computer systems for the purpose of deploying optimally compiled and configured scientific applications. In these environments, the set of variable parameters (compiler, link, and runtime flags; application and library options; partition size) can be very large, so running the performance ensembles is labor-intensive, tedious, and error-prone. Automating this process improves productivity, reduces barriers to deploying and maintaining multi-platform codes, and facilitates the tracking of application and system performance over time. We describe the design and implementation of our system for running performance ensembles, and we use two case studies as the basis for evaluating the long-term potential of this approach. The architecture of a prototype benchmarking system is presented along with results on the efficacy of the workflow approach.
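To make the scale of such ensembles concrete, the following Python sketch enumerates the Cartesian product of a small hypothetical parameter space. The compilers, flag sets, and partition sizes shown are illustrative assumptions, not values from the report, and the report's actual system drives such runs through a workflow engine rather than a standalone script.

    import itertools

    # Hypothetical parameter space for a performance ensemble; a real
    # deployment would configure the compiler, link, and runtime flags,
    # library options, and partition sizes per application.
    COMPILERS = ["gcc", "icc"]
    OPT_FLAGS = ["-O2", "-O3", "-O3 -funroll-loops"]
    PARTITION_SIZES = [16, 32, 64, 128]

    def enumerate_ensemble():
        """Yield one experiment configuration per point in the
        Cartesian product of the tunable parameters."""
        for compiler, flags, nprocs in itertools.product(
                COMPILERS, OPT_FLAGS, PARTITION_SIZES):
            yield {"compiler": compiler, "flags": flags, "nprocs": nprocs}

    if __name__ == "__main__":
        experiments = list(enumerate_ensemble())
        # 2 compilers x 3 flag sets x 4 partition sizes = 24 runs, each
        # of which must be built, submitted, and timed; automating this
        # is what makes the ensemble approach tractable.
        print(f"{len(experiments)} experiments to schedule")
        for exp in experiments[:3]:
            print(exp)

Even this toy space yields 24 distinct build-and-run configurations, which illustrates why hand-managed ensembles quickly become labor-intensive and error-prone.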

@TECHREPORT{LFRT2008:tr0803,
AUTHOR = {Jeffrey L. Tilson and Mark S.C. Reed and Robert J. Fowler},
TITLE = {Workflows for Performance Evaluation and Tuning},
INSTITUTION = {RENCI},
YEAR = {2008},
NUMBER = {TR-08-03},
ADDRESS = {North Carolina},
MONTH = {May},
ABSTRACT = {We report our experiences with using high-throughput techniques to run large sets of performance experiments on collections of grid-accessible parallel computer systems for the purpose of deploying optimally compiled and configured scientific applications. In these environments, the set of variable parameters (compiler, link, and runtime flags; application and library options; partition size) can be very large, so running the performance ensembles is labor-intensive, tedious, and error-prone. Automating this process improves productivity, reduces barriers to deploying and maintaining multi-platform codes, and facilitates the tracking of application and system performance over time. We describe the design and implementation of our system for running performance ensembles, and we use two case studies as the basis for evaluating the long-term potential of this approach. The architecture of a prototype benchmarking system is presented along with results on the efficacy of the workflow approach.},
URL = {http://www.renci.org/publications/techreports/TR0803.pdf}
}