Computers are growing more complex
due to the introduction of new architectural features and the widening processor/memory gap.
As a result, the development of highly optimized programs is becoming increasingly difficult and costly.
Automatic program optimization is one of the most promising avenues to alleviate this problem.
Our goal in this project is to study methodologies for implementing program optimization tools.
Our initial focus has been the study of program optimization techniques within library generators.
Earlier results are reported in our PLDI'03 paper listed below. The lessons learned from this study
will also be applied to compiler optimization. A general optimization methodology is the last great
frontier in program optimization. Although we have a solid understanding of most compiler algorithms,
including those for parsing, program analysis, and code transformation, there has never been a methodology
to drive transformations in a reliable way towards an optimal program. In fact, the design of transformation
drivers is poorly understood to the point that, in some cases, selecting a higher compiler optimization level
degrades the performance of the target code.
We see program optimization as a search in the space of possible program versions for a form that is as
close to optimal as possible in terms of its execution time, the power it consumes, the space it occupies,
or a combination of these. These versions could be generated by applying compiler transformations to conventional
programs or to domain-specific representations such as formulas, code templates, or very high-level languages.
This search has to be as efficient as possible given that the search space is typically quite large.
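As a rough illustration (a sketch with made-up parameter names, not code from this project), the search can be pictured as enumerating program versions defined by a few transformation parameters and scoring each one with a weighted combination of the measured criteria:

```python
import itertools

# Hypothetical transformation parameters spanning the space of program
# versions; real tools search many more dimensions than these two.
UNROLL_FACTORS = [1, 2, 4, 8]
TILE_SIZES = [16, 32, 64, 128]

def objective(metrics, w_time=1.0, w_power=0.0, w_space=0.0):
    # Weighted combination of the criteria named above: execution time,
    # power consumed, and space occupied.
    return (w_time * metrics["time"]
            + w_power * metrics["power"]
            + w_space * metrics["space"])

def exhaustive_search(measure):
    """Enumerate every program version and keep the best one.

    `measure(version)` stands in for generating, compiling, and running
    one version and returning its measured metrics.
    """
    best_version, best_score = None, float("inf")
    for unroll, tile in itertools.product(UNROLL_FACTORS, TILE_SIZES):
        version = {"unroll": unroll, "tile": tile}
        score = objective(measure(version))
        if score < best_score:
            best_version, best_score = version, score
    return best_version
```

Exhaustive enumeration like this is exactly what becomes too expensive as more transformations and parameter values are considered, which motivates the two strategies described next.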
We are using two complementary search strategies. Our first strategy is to use architectural
models to prune the search space either by completely avoiding the search along one or more of the
dimensions of the space of program versions or by finding bounds beyond which a search would be useless.
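For instance, a simple cache-capacity model can remove the tile-size dimension from such a search, or at least bound it, before anything is run. The sketch below assumes a working-set formula for a tiled matrix-matrix multiplication; the formula and sizes are illustrative assumptions, not the project's actual model:

```python
def working_set_bytes(tile, element_size=8):
    # Rough working set of a tiled matrix-matrix multiplication:
    # three tile-by-tile blocks of double-precision elements.
    return 3 * tile * tile * element_size

def prune_tile_sizes(candidate_tiles, cache_bytes):
    """Keep only tile sizes whose working set fits in cache.

    A model like this can remove an entire dimension of the search
    (pick the largest surviving tile) or provide a bound beyond which
    an empirical search would be useless.
    """
    return [t for t in candidate_tiles if working_set_bytes(t) <= cache_bytes]

# Example: a 32 KB data cache admits tiles up to roughly 36x36, so the
# larger candidates never need to be compiled or timed.
print(prune_tile_sizes([16, 32, 64, 128], cache_bytes=32 * 1024))
```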
Our second strategy is to apply explanation-based learning (EBL) techniques to dramatically reduce the number of
points that need to be evaluated. In EBL, an empirical search is used in conjunction with background knowledge.
This knowledge consists of information about transformations and their effect on performance. The results of the
training examples collected during the search are used with the background knowledge through an explanation process
that allows the learning system to confidently select a good hypothesis based on far fewer observed training examples.
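The sketch below is only meant to convey the flavor of this interaction. The background rule it encodes, that once unrolling starts to cause register spilling every larger unroll factor is also worse, is a hypothetical piece of knowledge chosen for illustration, not the project's actual knowledge base:

```python
def search_unroll_with_background_knowledge(time_version, unroll_factors):
    """Empirical search guided by a (hypothetical) background rule.

    Each timing run is one training example. The rule "explains" the
    first observed slowdown as the onset of register spilling, so every
    larger unroll factor can be pruned without being evaluated.
    """
    best_factor, best_time = None, float("inf")
    for factor in sorted(unroll_factors):
        t = time_version(factor)          # one observed training example
        if t < best_time:
            best_factor, best_time = factor, t
        else:
            # Explanation: spilling started; larger factors are pruned.
            break
    return best_factor

# Toy usage with synthetic timings standing in for real runs.
fake_times = {1: 1.00, 2: 0.71, 4: 0.55, 8: 0.62, 16: 0.90}
print(search_unroll_with_background_knowledge(fake_times.get, [1, 2, 4, 8, 16]))
```

With the rule in place, the search above commits to a hypothesis after four timings instead of five; over a larger space of unroll factors the savings grow accordingly.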
Today's experimental library generators
usually focus on specific problem domains and tune their routines to the target platform
during installation. To be effective, a library generator should produce
routines that match the performance of manually written routines.
Some experimental library generators are listed below.
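Independently of any particular generator, a minimal sketch of what install-time tuning might look like follows; the file names, compiler command, and flags are assumptions for illustration. The generator emits several source variants of a kernel, compiles each on the machine it is being installed on, times them, and keeps the fastest:

```python
import subprocess
import time

def tune_at_install(kernel_variants, compiler="cc", flags=("-O2",)):
    """Compile and time each generated kernel variant; keep the fastest.

    `kernel_variants` maps a variant name to a C source file that the
    generator has already emitted (hypothetical file names).
    """
    best_name, best_time = None, float("inf")
    for name, source in kernel_variants.items():
        exe = f"./{name}"
        subprocess.run([compiler, *flags, source, "-o", exe], check=True)
        start = time.perf_counter()
        subprocess.run([exe], check=True)   # run the benchmark binary
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_name, best_time = name, elapsed
    return best_name

# Example: variants emitted with different blocking factors (hypothetical).
# best = tune_at_install({"dgemm_b32": "dgemm_b32.c", "dgemm_b64": "dgemm_b64.c"})
```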
Goals:
- Improve the quality of library generators.
- Extend the application domain.
- Extend the methodology for use in continuous optimization.
- Apply the methodology to conventional compilers.
People
Senior Personnel
Students
Publications
Kamen Yotov, Xiaoming Li, Gang Ren, Michael Cibulskis, Gerald DeJong, María Jesús Garzarán,
David Padua, Keshav Pingali, Paul Stodghill, and Peng Wu. A Comparison of Empirical and
Model-driven Optimization. In Proceedings of the ACM SIGPLAN Conference on Programming
Language Design and Implementation (PLDI), pages 63-76, June 2003.