 Automated Testing 
For safety-critical applications such as avionics systems, writing the
    source code is usually less than 10% of the total project cost. For this
    reason, the value of automatically generating code from the models is often
    downplayed. However, automatic code generation does reduce total project
    cost and can eliminate errors introduced by manual coding. More
    importantly, it places the focus of development back on the models
    and ensures that the models are maintained as the system evolves over time.
 
  The main difficulty with code generation for commercial avionics systems lies
  in meeting the stringent FAA certification objectives defined in 
  the RTCA standard DO-178B. 
  One way to do this is to produce code generators that can be qualified as
  software development tools as defined in DO-178B. This is a very expensive
  and difficult process, and only a few software tools have met this standard.
 
  
  Another approach is to reduce the effort needed to verify the correctness of the
  auto-generated code. Commercial tools have already been developed that can 
  automatically generate test cases from models. While such test cases can be
  used to show that the executable object code is correctly generated from the
  model, they do little to show that the model itself is correct. 
  
   
   A third approach is to state the system or high-level software requirements
   as formal properties of the model and to automatically generate test cases
   from these properties. This approach should be feasible even if the
   properties cannot be formally proven due to the size or complexity of the
   model. If the properties can be proven, such test cases are still valuable
   in showing that the final implementation executes correctly on its target
   platform.
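
   As a purely illustrative sketch (the model, signal names, and search
   procedure below are invented for this page and are not the MT-FCS
   toolchain), the Python fragment shows the basic idea: a requirement is
   stated as a property over execution traces of a small autopilot mode
   model, and a bounded search over input sequences, standing in for the
   counterexample search a model checker would perform on a negated "trap"
   property, yields an input sequence and expected mode sequence that can
   serve as a test case for the generated code.

# Minimal sketch of property-based test generation (hypothetical model and
# names; a real flow would hand the model and a negated "trap" property to a
# model checker and use the counterexample trace as the test case).
from collections import deque
from itertools import product

# Toy model: a two-mode autopilot. It engages only when "engage" is pressed
# while the aircraft is in the air; "disengage" always forces it off.
def step(mode, inputs):
    engage, disengage, in_air = inputs
    if disengage:
        return "OFF"
    if engage and in_air:
        return "ENGAGED"
    return mode

INITIAL = "OFF"
INPUT_SPACE = list(product([False, True], repeat=3))  # (engage, disengage, in_air)

# Requirement stated as a formal property over a finite trace: the autopilot
# is never engaged on the step after "disengage" is pressed.
def property_holds(trace):
    # Each trace entry pairs a mode with the inputs that produced it.
    return all(not inputs[1] or mode != "ENGAGED" for mode, inputs in trace[1:])

# Test generation: bounded breadth-first search for an input sequence that
# exercises the property (engage, then disengage).  The resulting input and
# expected-mode sequences together form a test case for the generated code.
def generate_test(depth=4):
    queue = deque([[(INITIAL, None)]])
    while queue:
        trace = queue.popleft()
        modes = [mode for mode, _ in trace]
        last_mode, last_inputs = trace[-1]
        if "ENGAGED" in modes and last_inputs is not None and last_inputs[1]:
            return trace
        if len(trace) > depth:
            continue
        for inputs in INPUT_SPACE:
            queue.append(trace + [(step(last_mode, inputs), inputs)])
    return None

if __name__ == "__main__":
    test = generate_test()
    for mode, inputs in test:
        print(inputs, "->", mode)
    # The trace doubles as evidence that the model satisfies the property.
    assert property_holds(test)

   In practice such a test case would be replayed against the auto-generated
   object code on the target platform, comparing the observed modes to the
   expected trace.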
   
    
    On the MT-FCS project, we have investigated techniques for automatically 
	generating test cases from formal properties of the system.