Spacecraft Autonomy and AI Planning
One of the main obstacles hindering the adoption of autonomous software for man-rated
critical applications is the lack of a rigorous means of establishing the safety of these systems. Although AI-based autonomous software can provide enormous capability and flexibility, this power comes at a price: the verification and validation problem is greatly exacerbated. Traditional simulation-based testing methods fall short of providing the needed level of confidence. In other application domains, formal methods have been developed that explore the entire state space of the application, providing rigorous guarantees about its critical safety properties. Unfortunately, little progress has been made to date in applying formal methods to AI-based autonomous
software systems. Some groundbreaking work has been done by Charles Pecheur of NASA Ames [1,2,3], but there are huge challenges in making techniques such as these practical for the system being developed under the Spacecraft Autonomy for Vehicles and Habitats Project. Menzies and Pecheur write:
The main challenge in verifying AI software (or, for that matter, any kind of complex system) comes from the number of different possible executions that have to be taken into account. We refer broadly to this uncertainty on a system's future behavior as non-determinism. Non-deterministic choices come from incoming external events, scheduling of concurrent tasks, or even intentional random choices in the program, to name a few. Every choice point in the execution results in as many possible subsequent execution branches. Typically, those choices compound into exponentially many possible executions. This is known as the state space explosion phenomenon.
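The compounding of choice points described above can be made concrete with a small sketch. The Python fragment below is a toy model invented purely for illustration (it is not taken from any of the tools or papers referenced here): each step of a plan has a non-deterministic binary outcome, every possible execution is enumerated exhaustively in the manner of an explicit-state model checker, and a simple safety property is checked on each one. The number of executions grows as 2^n, which is the state space explosion in miniature.

```python
from itertools import product

# Toy model (illustrative only): a plan with n steps, each of which
# non-deterministically succeeds ("ok") or fails ("fail").

def all_executions(n_steps):
    """Enumerate every possible execution as a tuple of step outcomes.

    With two outcomes per step there are 2**n_steps executions,
    so exhaustive enumeration blows up exponentially in plan length.
    """
    return list(product(("ok", "fail"), repeat=n_steps))

def violates_safety(execution):
    """Toy safety property: two consecutive failures are unsafe."""
    return any(a == "fail" and b == "fail"
               for a, b in zip(execution, execution[1:]))

def check(n_steps):
    """Exhaustively check the property over all executions.

    Returns (total executions, number of unsafe executions).
    """
    execs = all_executions(n_steps)
    bad = [e for e in execs if violates_safety(e)]
    return len(execs), len(bad)
```

For a 3-step plan there are 8 executions, 3 of which contain back-to-back failures; doubling the plan to 30 steps already yields over a billion executions, which is why practical model checkers rely on symbolic representations and abstraction rather than explicit enumeration at this scale.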
- Radu Siminiceanu, Ricky Butler, and César Muñoz,
Experimental Evaluation of a Planning Language Suitable for Formal Verification,
5th International Workshop on Model Checking and Artificial Intelligence (MoChArt 2008),
Lecture Notes in Artificial Intelligence, Volume 5348, 2009. BibTeX Reference.
- Ricky Butler, Radu Siminiceanu, and César Muñoz,
The ANMLite Language and Logic for Specifying Planning Problems,
NASA TM-2007-215088, November 2007. BibTeX Reference.
- Ricky Butler, César Muñoz, and Radu Siminiceanu,
Solving the AI Planning Plus Scheduling Problem Using Model Checking via Automatic Translation from the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL),
NASA/TM-2007-215089, November 2007. BibTeX Reference.
- Ricky Butler and César Muñoz,
An Abstract Plan Preparation Language, NASA TM-2006-214518, November 2006. BibTeX Reference.
Procedure Representation Language (PRL)
Plan Execution Interchange Language (PLEXIL)
- Camilo Rocha, César Muñoz, and Héctor Cadavid,
A Graphical Environment for the Semantic Validation of a Plan Execution Language,
Third IEEE International Conference on Space Mission Challenges
for Information Technology (SMC-IT 2009), 2009.
- Gilles Dowek, César Muñoz, and Camilo Rocha, Rewriting Logic Semantics of a Plan Execution Language, NASA TM-2009-215770,
2009. BibTeX Reference.
- Gilles Dowek, César Muñoz, and Corina Pasareanu,
A Small-Step Semantics of PLEXIL, NIA Technical Report 2008-11, November 2008. BibTeX Reference.
- César Muñoz and Corina Pasareanu,
A Formal Analysis Framework for PLEXIL, 3rd Workshop on Planning and Plan Execution for Real-World Systems,
2007. BibTeX Reference.
- Radu Siminiceanu, Model Checking Abstract PLEXIL Programs with SMART,
NASA CR-2007-214542, April 2007.
Curator and Responsible NASA Official:
Ricky W. Butler
last modified: 26 September 2003 (10:10:40)