Dr. Eric Mercer, Department of Computer Science
Evaluation of how well the academic objectives of the proposal were met
The proposed academic objectives are included below for convenience. Each objective was met under the guidance of a senior researcher, Neha Rungta, who has since completed her Ph.D. and joined NASA Ames, where she continues to work with students in the SMC-lab. Neha actively mentored the students on the project. The students participated by writing models to test, reading and reviewing publications, writing code to support the algorithms, and putting together a complete demo application illustrating how the guided search algorithm behaves. A total of four undergraduate students participated in the research.
- A new guidance algorithm that quickly localizes deadlocks in concurrent programs by using the output of classical lockset analysis to direct a guided test. The approach requires no manual verification of the error: it reports only real errors, each demonstrated by an actual execution of the program. Neha Rungta (the senior researcher in the mentoring environment) published several papers on an improved algorithm for deadlock detection in Java programs.
- An Eclipse plug-in that fully integrates a precise lockset analysis into a comprehensive Java development environment. Neha Rungta fully integrated the algorithm into the Java PathFinder framework, which is itself integrated into Eclipse. The algorithms can be accessed directly from the Eclipse interface in a manner similar to the “Run As” option for Java programs.
- A complete empirical study of the new guided test lockset analysis on the Java model checking benchmark suite, the Java 1.4.2 concurrent libraries, and the JBoss application server. Neha Rungta’s work included a detailed analysis of the Java 1.4.2 libraries that uncovered several concurrency errors. The findings include a deadlock involving nested concurrent containers (illustrated in the sketch following this list).
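To make the nested-container deadlock concrete, the following is a minimal illustrative Java program of the kind flagged by the analysis; it is a hedged reconstruction, not the exact code from the study. Each `addAll` on a synchronized wrapper collection locks the destination list's monitor and then must lock the source list's monitor to copy its elements, so two threads calling `addAll` in opposite directions can acquire the same pair of monitors in reverse order and deadlock.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch of a nested concurrent container deadlock
// (a hypothetical reconstruction, not the program from the study).
public class NestedContainerDeadlock {
    public static void main(String[] args) {
        final List<Integer> a = Collections.synchronizedList(new ArrayList<Integer>());
        final List<Integer> b = Collections.synchronizedList(new ArrayList<Integer>());
        a.add(1);
        b.add(2);

        // Thread 1 locks a's monitor, then needs b's monitor to copy b's elements.
        Thread t1 = new Thread(new Runnable() {
            public void run() { a.addAll(b); }
        });
        // Thread 2 locks b's monitor, then needs a's monitor to copy a's elements.
        Thread t2 = new Thread(new Runnable() {
            public void run() { b.addAll(a); }
        });
        t1.start();
        t2.start();
        // Whether the deadlock manifests depends on the schedule; a guided
        // search drives execution toward the schedule that exhibits it.
    }
}
```

A classical lockset analysis flags the reversed lock-acquisition order hidden inside the library wrappers; guided test then searches for a schedule that actually produces the deadlock, so only a real, replayable error is reported.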
Evaluation of the mentoring environment
Neha Rungta proved to be an ideal senior research mentor for the students, and she continues to mentor now that she has graduated and joined NASA Ames. To be specific, Neha never worked with more than two students at a time. Steve Morley and Neil Self were the two principal students who worked very closely with Neha. Both completed the first three steps of the mentoring process (background knowledge, benchmarks, and ownership in part of an ongoing research project). After stage three, Neil realized he did not have time for research during his undergraduate studies and left the lab to focus on his coursework. Steve has continued in the lab and is completing his portion of an ongoing research project under Dr. Neha Rungta of NASA Ames. Once he finalizes the visualization tool and publishes the work with Neha and me early next year, he will be ready to begin a small independent research project in stage four. Neil Self and Steve Morley have presented their work at the college spring research conference, and Steve has given several presentations to outside researchers, including Ganesh Gopalakrishnan of the University of Utah, as part of the fifth step in the mentoring process: presenting the work to outside researchers.
Travis Andelin and Sophia Zenzius did not have enough time in the lab to move beyond the second step in the mentoring process, which involves generating benchmark programs and running simple tests. They both started late in their undergraduate degrees, graduated, and accepted full-time jobs before they were able to progress beyond the early stages of development.
List of students who participated and the academic deliverables they have produced or are anticipated to produce
- Neil Self, Sep-2008 through Aug-2009: wrote test programs in both Java and C# to evaluate the guided test algorithms and compared the results with competing technologies. Deliverable: N. Rungta and E. G. Mercer, “Clash of the Titans: Tools and Techniques for Hunting Bugs in Concurrent Programs,” Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD), in Proceedings of the International Symposium on Software Testing and Analysis (ISSTA), pp. 71–81, Chicago, IL, July 2009. Neil’s schedule did not permit him to participate in the writing, but he is acknowledged for running the tests, developing the benchmarks, and publishing the results on the wiki (Concurrency Tool Comparison).
- Travis Andelin, Jul-2008 through Aug-2008 and Jan-2009 through Mar-2009: read papers and filled in background knowledge before accepting a full-time job off-campus.
- Sophia Zenzius, Mar-2008 through Aug-2008: worked on integration and examples before graduating and accepting a full-time job off-campus.
- Steve Morley, Apr-2009 to present: building a visualization toolkit to demonstrate algorithm behavior and assist in understanding and debugging error traces; a paper is anticipated in early 2011 (in preparation).
Description of the results/findings of the project
The quality and reliability of software systems, in terms of their functional correctness, depend critically on the effectiveness of the testing tools and techniques used to detect errors in the system before deployment. The lack of testing tools for concurrent programs that systematically control thread scheduling choices has kept concurrent software development from keeping pace with the hardware trend toward multi-core and multi-processor technologies. This motivates the need for systematic testing techniques that detect errors in concurrent programs.

The work in this MEG grant produced a potentially scalable technique for detecting concurrency errors in production code, giving software engineers and testers a viable way to detect errors in multi-threaded programs before deployment. The guided testing technique combines static analysis, systematic verification, and heuristics to efficiently detect errors in concurrent programs.

An abstraction-refinement technique lies at the heart of guided test. It takes as input potential errors in the program generated by imprecise, but scalable, static analysis tools. The abstraction further leverages static analyses to generate a set of program locations relevant to verifying the reachability of each potential error. Program execution is guided along these points by ranking both thread and data non-determinism, and the set of relevant locations is refined whenever program execution is unable to make progress. The work discusses various heuristics for effectively guiding program execution.

We implemented the guided test technique to detect errors in Java programs. Guided test successfully detects errors caused by thread schedules and data input values in Java benchmarks and the JDK concurrent libraries, including errors that other state-of-the-art analysis and testing tools for concurrent programs are unable to find.
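To summarize the structure of the technique, the following is a minimal compilable sketch of the guided test loop. All of the interfaces here (State, Analysis, the integer location identifiers, and the distance heuristic) are hypothetical placeholders standing in for the actual Java PathFinder machinery; the sketch illustrates the abstraction-refinement loop described above, not the project's real implementation.

```java
import java.util.List;
import java.util.Set;

public final class GuidedTestSketch {

    // Hypothetical stand-in for a program state under a controlled scheduler.
    interface State {
        boolean isError();                  // does this state exhibit the potential error?
        List<State> successors();           // thread and data non-determinism choices
        int distanceTo(Set<Integer> locs);  // heuristic distance to the relevant locations
    }

    // Hypothetical stand-in for the static analysis component.
    interface Analysis {
        Set<Integer> relevantLocations();           // initial abstraction: locations relevant to the error
        Set<Integer> refine(Set<Integer> current);  // grow the abstraction when guidance stalls
    }

    // Guide execution toward the potential error, refining when stuck;
    // returns true only when a concrete error execution is found.
    static boolean guidedTest(State initial, Analysis analysis) {
        Set<Integer> relevant = analysis.relevantLocations();
        while (true) {
            State s = initial;
            while (!s.isError()) {
                List<State> succs = s.successors();
                if (succs.isEmpty()) {
                    break; // execution cannot make progress under this abstraction
                }
                // Rank both thread and data choices by the heuristic distance.
                State best = succs.get(0);
                for (State c : succs) {
                    if (c.distanceTo(relevant) < best.distanceTo(relevant)) {
                        best = c;
                    }
                }
                s = best;
            }
            if (s.isError()) {
                return true; // an actual execution reaches the error: a real error
            }
            Set<Integer> refined = analysis.refine(relevant);
            if (refined.equals(relevant)) {
                return false; // refinement exhausted without reaching the error
            }
            relevant = refined;
        }
    }
}
```

Because the search only ever reports states reached by an actual execution, every reported error comes with a concrete witness schedule, which is what eliminates the need for manual verification of the analysis output.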
Description of how the budget was spent
The budget was spent solely on undergraduate student wages for the students listed above, with Steve Morley being a long-time student who joined the lab early in his undergraduate career.