Sparse Analysis Framework
For precise, sound, global, yet also scalable static analysis
Hakjoo Oh,
Kihong Heo,
Wonchan Lee,
Woosuk Lee, and
Kwangkeun Yi
Programming Research Lab,
ROSAEC Center,
Seoul National University
Introduction
Our sparse analysis framework provides a general method for building global static analyzers that are precise, sound, and yet scalable. Built on top of the abstract interpretation framework, our method is a general sparse analysis technique that supports relational as well as non-relational semantic properties, for various programming languages. Analysis designers first use the abstract interpretation framework to obtain a global and correct static analyzer, leaving scalability aside. On top of this underlying sound static analyzer, they then apply our generalized sparse analysis techniques to improve scalability while preserving the precision of the underlying analysis. Our method prescribes what to prove in order to guarantee that the resulting sparse version preserves the precision of the underlying analyzer.
What is Sparse Analysis?
The following slides capture the key idea of sparse analysis and how it differs from conventional non-sparse analysis.
Papers
- Hakjoo Oh, Kihong Heo, Daejun Park, Jeehun Kang, and Kwangkeun Yi. Global Sparse Analysis Framework. Submitted.
- Hakjoo Oh, Kihong Heo, Wonchan Lee, Woosuk Lee, and Kwangkeun Yi. Design and Implementation of Sparse Global Analyses for C-like Languages. PLDI 2012.
- Hakjoo Oh and Kwangkeun Yi. Access-based Localization with Bypassing. APLAS 2012.
- Hakjoo Oh, Lucas Brutschy, and Kwangkeun Yi. Access Analysis-based Tight Localization of Abstract Memories. VMCAI 2011.
Talks
- Talk slides, Hakjoo Oh, @Dagstuhl Seminar on Pointer Analysis, April 2013
- Talk slides, Hakjoo Oh, @PLDI 2012, June 2012
- Poster, Kihong Heo, @PLDI 2012, June 2012
- Talk slides, Kwangkeun Yi, @MIT, April 2012
- Talk slides, Wonchan Lee, @Berkeley, April 2012
Our theoretical framework also has practical significance. Based on the framework, we have derived a sparse version of Sparrow and found that the sparse Sparrow is 175x more scalable than the baseline in terms of the lines of code it can handle. Furthermore, the sparse Sparrow is 1500x faster than the initial version of Sparrow.
What's different from the conventional sparse analysis?
- Our sparse analysis technique is general: it applies to various programming languages (e.g., imperative languages, functional languages), arbitrary semantic properties (e.g., relational and non-relational analyses), and arbitrary trace partitioning (e.g., context-sensitivity, path-sensitivity, loop unrolling). In contrast, previous sparse analysis techniques are mostly algorithmic and tightly coupled with particular analyses.
- Our sparse analysis technique is based on new notions of data dependency and its safe approximation, which are the key to precision-preserving sparse analysis. Unlike the conventional def-use chains on which conventional sparse analyses are based, sparse analysis with our data dependency preserves the precision of the original non-sparse analysis.
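As a rough illustration of the contrast between dense and sparse propagation, the sketch below (ours, not the authors' implementation) runs a toy constant propagation on a straight-line program both densely, pushing the whole abstract memory along every CFG edge, and sparsely, pushing each value only along def-use data dependencies. All names (`prog`, `dense_analysis`, `sparse_analysis`) and the toy domain are illustrative assumptions, and the example sidesteps the subtleties (safe approximation of data dependencies, loops, joins) that the framework itself addresses.

```python
# Toy constant propagation on a straight-line program: each node defines
# one variable from either an integer literal or (var, '+', int).
prog = {
    1: ('a', 1),
    2: ('b', ('a', '+', 1)),
    3: ('c', 2),                 # irrelevant to the value of 'd' below
    4: ('d', ('b', '+', 3)),
}
edges = [(1, 2), (2, 3), (3, 4)]  # CFG edges

def eval_expr(e, env):
    if isinstance(e, int):
        return e
    v, _, k = e
    return env.get(v, 0) + k

def dense_analysis():
    """Non-sparse: the whole abstract memory flows along every CFG edge."""
    mem = {n: {} for n in prog}   # abstract memory at each node's entry
    out = {}
    for n in sorted(prog):        # straight-line code: one pass suffices
        env = dict(mem[n])
        var, e = prog[n]
        env[var] = eval_expr(e, env)
        out[n] = env
        for src, dst in edges:
            if src == n:
                mem[dst].update(env)  # even unused bindings (e.g. 'c') flow
    return out

def sparse_analysis():
    """Sparse: values flow only along data dependencies (def to use)."""
    # A data dependency (m, n, v) links the node m defining v to a node n
    # using v, with no intervening redefinition of v.
    last_def, deps = {}, []
    for n in sorted(prog):
        var, e = prog[n]
        if not isinstance(e, int) and e[0] in last_def:
            deps.append((last_def[e[0]], n, e[0]))
        last_def[var] = n
    val = {}                      # value of the variable defined at each node
    for n in sorted(prog):
        env = {v: val[m] for (m, dst, v) in deps if dst == n}
        val[n] = eval_expr(prog[n][1], env)
    return val
```

Both analyses agree that `d` is 5 at node 4, but the sparse run consults only the single binding for `b` there, whereas the dense run has carried the entire memory (including the unused `c`) along every edge.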
People
- Hakjoo Oh
- Kihong Heo
- Wonchan Lee
- Woosuk Lee
- Kwangkeun Yi
Contact
Acknowledgement
This work was supported by the Engineering Research Center of Excellence Program of the Korea Ministry of Education, Science and Technology (MEST) / National Research Foundation of Korea (NRF) (Grant 2012-0000468) and the Brain Korea 21 Project, School of Electrical Engineering and Computer Science, Seoul National University in 2011.