Pablo

Overview

The Pablo Research Group started in the Department of Computer Science at the University of Illinois at Urbana-Champaign under the direction of Professor Dan Reed (former RENCI director). The group investigates the interaction of architecture, system software, and applications on large-scale parallel and distributed computer systems. It created SvPablo, a graphical environment for instrumenting application source code and browsing dynamic performance data.

A Graphical Performance Analysis Environment for Performance Tuning and Visualization

To capture dynamic performance data, SvPablo supports both interactive instrumentation via its graphical user interface and automatic instrumentation using the standalone SvPablo Parser. During execution of the instrumented code, the SvPablo library captures performance data and computes performance metrics, including general software statistics and hardware counter data, on the execution dynamics of each instrumented construct on each processor. The hardware counter data is captured through the integration of SvPablo with PAPI, a portable hardware counter API. Because only statistics, rather than detailed event traces, are maintained, the SvPablo library can capture the execution behavior of codes that run for hours or days on hundreds of processors.

Following execution, performance data from each processor is integrated into a single performance file with additional statistics. This file is then loaded into the SvPablo GUI and displayed alongside the application source code. The SvPablo browser provides a hierarchy of color-coded performance displays, including a high-level routine profile and source code scrollboxes. Clicking on a source code line opens pop-up dialogs showing further statistics and detailed per-processor metrics for each routine or line. By browsing performance data correlated with the source code, users can rapidly identify performance bottlenecks in their applications. The GUI also provides convenient facilities for load-balancing analysis and scalability studies.
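
The statistics-only design described above can be illustrated with a short sketch. This is not SvPablo's actual implementation; the `LineStats` name and fields are assumptions. The idea is that each instrumented construct keeps O(1) running statistics (here via Welford's online algorithm), so no per-event trace record is ever stored and data volume stays flat no matter how long the code runs.

```python
# Illustrative sketch (not SvPablo's real code): per-construct running
# statistics instead of an event trace. Memory cost is constant per
# instrumented construct, regardless of how many events occur.
import math

class LineStats:
    """Running count/mean/variance/min/max for one instrumented construct."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0            # sum of squared deviations (Welford)
        self.min = math.inf
        self.max = -math.inf

    def record(self, duration):
        # Update running statistics in O(1); the event itself is discarded.
        self.count += 1
        delta = duration - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (duration - self.mean)
        self.min = min(self.min, duration)
        self.max = max(self.max, duration)

    def variance(self):
        return self.m2 / self.count if self.count else 0.0

stats = LineStats()
for d in [1.0, 2.0, 3.0, 4.0]:
    stats.record(d)
print(stats.count, stats.mean, stats.min, stats.max)  # 4 2.5 1.0 4.0
```

A real instrumentation library would keep one such record per source construct per processor and merge them into a single file after the run, as the text describes.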

Beyond static, post-mortem analysis, integration with Autopilot allows performance data to be captured in time windows at runtime using Autopilot sensors. This data can be rendered in a three-dimensional visualization environment in real time, letting users investigate application behavior during execution and steer the application effectively via Autopilot actuators that manipulate individual variables and initiate independent threads of control. The same data can also drive runtime adaptation techniques for performance monitoring and adaptive control.

SvPablo New Developments

  • System health and power consumption: as the processor count in large systems grows to tens of thousands, system reliability and continued operation in the face of component failures become increasingly challenging. We have integrated SvPablo with a Health Application Programming Interface (HAPI) for system health monitoring. This integration enables capture of temperature and power-consumption profiles of application execution, which are key indicators of potential failure modes and of code sites for hybrid power-performance optimization.
  • Performance data reduction: on systems containing tens of thousands of processors, task-level performance monitoring can produce prodigious volumes of performance data, which adversely affects the performance of the applications being measured. To retain the benefits of detailed measurement while reducing data volume, we are developing an adaptive performance monitoring library based on a stratified population sampling technique and integrating it with SvPablo. The sampling library captures data from an identified subset of nodes, and the SvPablo GUI displays the data together with error estimates derived from the sample.
  • Performance signature modeling: to retain the benefits of event tracing at lower overhead, we are integrating performance signature modeling with SvPablo. The compact application signature method uses curve fitting to form, from a set of trace data, a trajectory in the time domain for a specific metric. SvPablo will generate performance signatures during the execution of an instrumented application, display them in the SvPablo GUI, and provide a graphical user interface for convenient signature comparison.
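
The stratified-sampling idea above can be sketched briefly. All names, the sampling fraction, and the stratum layout here are illustrative assumptions, not the actual library's API: each stratum of nodes contributes a weighted sample mean, and a standard error quantifies the uncertainty introduced by measuring only a subset of the machine.

```python
# Hedged sketch of stratified population sampling for performance data
# reduction: estimate a machine-wide mean metric from per-stratum node
# samples, with a standard-error estimate for the GUI to display.
import random
import statistics

def stratified_estimate(strata, fraction=0.25, seed=0):
    """strata: {stratum name: [per-node metric values]}.
    Returns (estimated overall mean, standard error of the estimate)."""
    rng = random.Random(seed)
    total_nodes = sum(len(v) for v in strata.values())
    est_mean = 0.0
    est_var = 0.0
    for values in strata.values():
        n_h = len(values)
        k = min(max(2, int(n_h * fraction)), n_h)  # sample size in stratum
        sample = rng.sample(values, k)
        w = n_h / total_nodes                      # stratum weight
        est_mean += w * statistics.fmean(sample)
        est_var += (w ** 2) * statistics.variance(sample) / len(sample)
    return est_mean, est_var ** 0.5

# Hypothetical per-node metric values, grouped into two strata.
strata = {"compute": [10.0 + i * 0.1 for i in range(100)],
          "io":      [25.0 + i * 0.2 for i in range(20)]}
mean, se = stratified_estimate(strata)
```

Only a quarter of the nodes are ever queried, yet the weighted estimate tracks the population mean, with `se` expressing the cost of sampling.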
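
The curve-fitting idea behind compact signatures can also be sketched. The closed-form linear fit and the signature-distance measure below are illustrative assumptions; the actual method may fit richer curve families. The point is that a long metric trajectory reduces to a few coefficients, so two runs can be compared by their fitted parameters rather than by raw traces.

```python
# Illustrative sketch of compact signature modeling: reduce a metric's
# time-domain trajectory to least-squares line coefficients, then
# compare runs by coefficient distance instead of full event traces.
def fit_line(times, values):
    """Closed-form least-squares fit: values ~= a * t + b."""
    n = len(times)
    st, sv = sum(times), sum(values)
    stt = sum(t * t for t in times)
    stv = sum(t * v for t, v in zip(times, values))
    a = (n * stv - st * sv) / (n * stt - st * st)
    b = (sv - a * st) / n
    return a, b

# Two hypothetical runs of the same metric, sampled at the same times.
t = [0, 1, 2, 3, 4]
run1 = [2.0, 4.1, 5.9, 8.0, 10.1]
run2 = [2.0, 4.0, 6.2, 7.9, 10.0]
a1, b1 = fit_line(t, run1)
a2, b2 = fit_line(t, run2)
distance = abs(a1 - a2) + abs(b1 - b2)  # small => similar signatures
```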

Acknowledgements

This research is supported in part by the Defense Advanced Research Projects Agency and by the Department of Energy.
Related SvPablo Projects

PERI

National Facility for I/O Characterization and Optimization

I/O delays often limit the performance achieved by application codes on today’s high-performance computers. Although the performance of both microprocessors and disks is increasing rapidly, the two are not increasing at the same rate: disk performance improves far more slowly, so the gap between computation and I/O continues to widen.

Without a substantial base of empirical data on the I/O access patterns of single and multiple processor systems, researchers and educators can do little to overcome limitations imposed by I/O on the overall performance of emerging systems.

The National Facility for High-Performance I/O Characterization and Optimization, or CADRE, is extending, documenting, archiving, and disseminating tools, sample applications, and experimental data to stimulate education and research on I/O system design, analysis, and optimization for large-scale commodity and workstation clusters, petascale systems, and distributed computational grids.

CADRE is part of the Pablo Research Group and operates under a grant from the U.S. National Science Foundation. It is supported by funds from the National Science Foundation and the Department of Energy.

General Information

For many next-generation applications, constraints imposed by I/O limit the level of achievable performance. A large and important class of resource-intensive applications is irregular, containing complex, data-dependent execution behavior, and dynamic, with resource demands that shift over time. Because the interactions between application and system software change across applications and during a single application’s execution, analysts aiming to optimize performance need runtime libraries and analysis tools that can reveal an application’s I/O behavior.

The Pablo project has developed a portable performance data analysis environment, used to capture and reveal the I/O patterns of applications executing on a variety of high-performance single- and multiple-processor systems. To catalyze further research and education aimed at optimizing I/O, the National Science Foundation is funding the extension, documentation, and deployment of Pablo tools and data. You may request any of these tools on DVD or CD-ROM.
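
The kind of library-level I/O characterization described here can be illustrated with a small sketch. The `TracingFile` wrapper below is an assumption for illustration, not the Pablo library's API: it wraps file operations to accumulate per-file request counts and byte totals, the sort of access-pattern statistics that instrumented I/O libraries collect.

```python
# Illustrative sketch (not Pablo's real API): wrap file operations to
# record per-file I/O statistics such as operation counts and bytes moved.
import os
import tempfile
from collections import defaultdict

class TracingFile:
    # Shared per-path statistics, keyed by file path.
    stats = defaultdict(lambda: {"reads": 0, "writes": 0,
                                 "bytes_read": 0, "bytes_written": 0})

    def __init__(self, path, mode="r"):
        self._f = open(path, mode)
        self._path = path

    def read(self, n=-1):
        data = self._f.read(n)
        s = TracingFile.stats[self._path]
        s["reads"] += 1
        s["bytes_read"] += len(data)
        return data

    def write(self, data):
        written = self._f.write(data)
        s = TracingFile.stats[self._path]
        s["writes"] += 1
        s["bytes_written"] += written
        return written

    def close(self):
        self._f.close()

# Demonstration on a temporary file.
path = os.path.join(tempfile.mkdtemp(), "demo.dat")
f = TracingFile(path, "w")
f.write("hello world")
f.close()
g = TracingFile(path, "r")
payload = g.read()
g.close()
report = dict(TracingFile.stats[path])
```

A production facility would additionally timestamp each operation and record request offsets, which is what makes access-pattern analysis (sequential vs. strided vs. random) possible.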

Furthermore, a collection of files is made available through this facility for use in the research and development of I/O optimization methodologies and tools. These files trace the I/O activity of data-bound applications executed on various high-performance platforms as part of ongoing scientific and engineering research. Reflecting a variety of hardware and file system configurations, the data are available both in their entirety and through a database that can be queried online for statistical analysis.

I/O Characterization Tools
  • I/O Characterization Libraries
  • Performance Capture Facility: Tutorial, Code
  • Java PCF: Tutorial, Code
  • Physical I/O Tracing Facility: Tutorial, Code
  • UNIX I/O Library: Tutorial, Code, Manual
  • MPI I/O Library: Tutorial, Code, Manual
  • HDF Library: Tutorial, Code, Manual
  • SDDF to XML Conversion Tool
  • SCSI Disk Feature Extraction Facility
I/O Analysis Tools
  • Software Tools for Analyzing I/O Behavior in Data-Intensive Applications
  • Instrumented, Analyzed Code Samples
  • Applications and Analyses
  • Trace Database-Search & Query
I/O Trace Archive

Files that detail the I/O behavior of advanced applications in their respective fields have been indexed in a portable data metaformat and are available in a database that can be queried to produce a list of trace files matching any of a number of search criteria. Users can perform statistical analyses on trace file data using a forms-based web interface. Trace files of interest can be ordered by email for delivery on CD-ROM or DVD.

Acknowledgements

This research is supported in part by the Defense Advanced Research Projects Agency and by the Department of Energy.

Grid Application Development Software Project

The transient, rarely repeatable behavior of the Grid indicates the need to replace standard models of post-mortem performance optimization with a real-time model, one that optimizes application and runtime behavior during program execution. Led by a team at Rice University, the Grid Application Development Software (GrADS) project aims to enable the routine development and performance tuning of Grid applications by simplifying distributed heterogeneous computing. Key areas investigated as part of GrADS include:

  • Grid software architectures that facilitate information flow and resource negotiation among applications, libraries, compilers, linkers, and runtime systems;
  • Base software technologies, such as scheduling, resource discovery, and communication tools, to support the development and execution of performance-efficient Grid applications;
  • Policies and software mechanisms that support performance analysis, the exchange of performance information, and performance contract brokering;
  • Languages, compilers, environments, and tools supporting the creation of applications for the Grid and the solution of problems via the Grid;
  • Mathematical and data-structure libraries for Grid applications, including numerical methods for controlling accuracy and latency tolerance;
  • System software and communication libraries for aligning distributed computer collections as unified computing configurations; and
  • Simulation and modeling tools to enable systematic, scientific study of the dynamic properties of Grid middleware, application software, and configurations.

Pablo team members are looking into the feasibility, design, and implementation of a system of performance contracts, software that would allow the level of performance expected of system modules to be quantified and then measured during execution. The early vision of performance contracts includes software that uses a fuzzy ruleset to quantify the level of performance expected as a function of available resources. Such contracts would include the criteria (as could be measured in FLOPS, expected completion time, or some other quantitative characteristic that the compiler could derive) needed to guide run-time environments in configuring object programs to available resources and in deciding when to interrupt execution and reconfigure to achieve better performance.
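
A fuzzy ruleset of the kind described above can be sketched in a few lines. The thresholds, membership shapes, and rule names below are illustrative assumptions, not the project's actual contracts: trapezoidal membership functions map a measured-to-expected performance ratio onto degrees of "low" and "acceptable" performance, and the contract is flagged as violated when the low membership dominates.

```python
# Illustrative sketch of a fuzzy performance contract: trapezoidal
# membership functions over the ratio of measured to expected FLOPS.
# All thresholds are hypothetical, not GrADS's actual ruleset.
def trapezoid(x, a, b, c, d):
    """Membership rising on [a, b], flat at 1 on [b, c], falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def contract_violated(measured_flops, expected_flops):
    ratio = measured_flops / expected_flops
    low = trapezoid(ratio, -1.0, 0.0, 0.4, 0.6)   # "performance is low"
    ok = trapezoid(ratio, 0.4, 0.6, 2.0, 3.0)     # "performance is acceptable"
    return low > ok

contract_violated(3.0e8, 1.0e9)   # ratio 0.3 -> True (contract violated)
```

The fuzzy formulation avoids a brittle hard threshold: near the boundary, both memberships are partially active, which gives the runtime latitude to delay reconfiguration until a violation is clear.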

As part of this research, the Pablo group is defining and experimenting with fuzzy rulesets to express expected levels of performance. The group’s objective is to integrate fuzzy rulesets with Markov and time-series models to predict resource requirements and identify optimal resource allocation.

Some experimental results include:
  • Early results for potential contracts (2nd quarter, 2000), including fuzzy rulesets and decision outcomes
  • Results from Jan 2001 ScaLAPACK runs (1st quarter, 2001)
  • Results from Fall 2001 ScaLAPACK runs initiated at UIUC with revised library-developer models (3rd quarter, 2001)
More details:
  • Specifying and Monitoring GrADS Contracts – Second Draft
  • Introduction to Performance Contracts and Fuzzy Logic

Fredrik Vraalsen, Ruth A. Aydt, Celso L. Mendes, and Daniel A. Reed, “Performance Contracts: Predicting and Monitoring Grid Application Behavior,” Proceedings of the 2nd International Workshop on Grid Computing/LNCS (GRID 2001), Springer-Verlag Lecture Notes in Computer Science, Denver, Colorado, November 12, 2001, Volume 2242, pp. 154-165. [PDF]

Jeffrey S. Vetter and Daniel A. Reed, “Real-time Performance Monitoring, Adaptive Control, and Interactive Steering of Computational Grids,” The International Journal of High Performance Computing Applications, Winter 2000, Volume 14, No. 4,  pp. 357-366. [DOC]

Ruth Aydt’s presentation, GrADS Site Visit, April 30, 2001, Rice
Dan Reed’s presentation, GrADS meeting, January 2001, ISI

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. 9975020. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Partners

Rice University
University of California at San Diego
University of Tennessee
University of Chicago
Indiana University
University of Houston
University of Southern California

Links

GrADS Project Website
Macro Grid testbed details and other helpful information at ISI
GGF Document: Grid Monitoring Architecture (pdf)
GGF Document: A Simple Case Study of a Grid Performance System (pdf)