Tenth NASA Formal Methods Symposium (NFM 2018)

30 Years of Formal Methods at NASA

Newport News Marriott at City Center

Newport News, VA, USA, April 17-19, 2018

Accepted Papers

The program committee selected 24 regular papers and 7 short papers from over 90 submissions. Each paper was read by at least 3 reviewers.

Proceedings of the Tenth NASA Formal Methods Symposium appear in Springer's Lecture Notes in Computer Science, Formal Methods subline, as Volume 10811.

  • Ansgar Fehnker, Kaylash Chaudhary and Vinay Mehta. An Even Better Approach - Improving the B.A.T.M.A.N. Protocol Through Formal Modelling and Analysis

    Previous work formalised the B.A.T.M.A.N. protocol in Uppaal and found several ambiguities and inconsistencies. More importantly, explicit choices in the RFC had, unfortunately, a negative impact on route discovery. That work compared a literal model based on the RFC with an incremental improvement. This paper goes one step further and proposes an alternative that departs from the RFC. We compare the performance using simulations in Uppaal, for static as well as dynamic topologies. The analysis shows that the proposed alternative significantly reduces the number of suboptimal routes, and recovers better from routing errors introduced by mobility.

  • Massimo Narizzano, Luca Pulina, Armando Tacchella and Simone Vuotto. Consistency of Property Specification Patterns with Boolean and Constrained Numerical Signals

    Property Specification Patterns (PSPs) have been proposed to address recurring specification needs, to ease the formalization of requirements, and to enable their automated verification. In this paper, we extend PSPs by considering Boolean as well as atomic assertions from a constraint system. This extension enables us to reason about functional requirements which would not be captured by basic PSPs. We contribute an encoding from constrained PSPs to LTL formulas, and we show experimental results demonstrating that our approach scales on requirements of realistic size generated using an artificial probabilistic model. Finally, we show that our extension enables us to prove the (in)consistency of requirements for an embedded controller for a robotic manipulator.

  • Marcus Gerhold, Arnd Hartmanns and Mariëlle Stoelinga. Model-Based Testing for General Stochastic Time

    Many systems are inherently stochastic: they interact with unpredictable environments or use randomised algorithms. Then classical model-based testing is insufficient: it only covers functional correctness. In this paper, we present a new model-based testing framework that additionally covers the stochastic aspects in hard and soft real-time systems. Using the theory of stochastic automata for specifications, test cases and a formal notion of conformance, it provides clean mechanisms for underspecification, randomisation, and stochastic timing. Supporting arbitrary continuous and discrete probability distributions, the framework generalises previous work based on purely Markovian models. We cleanly define its theoretical foundations, and then outline a practical algorithm for statistical conformance testing based on the Kolmogorov-Smirnov test. We exemplify the framework's capabilities and tradeoffs by testing timing aspects of the Bluetooth device discovery protocol.
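The statistical conformance step in this framework rests on the Kolmogorov-Smirnov (KS) test. As a rough, generic illustration (not the paper's algorithm; class and method names are invented), the two-sample KS statistic, the maximum gap between two empirical CDFs, can be computed by a single merge pass over the sorted samples:

```java
import java.util.Arrays;

public class KsDemo {
    // Two-sample Kolmogorov-Smirnov statistic: the maximum absolute gap
    // between the empirical CDFs of the two samples.
    static double ksStatistic(double[] xs, double[] ys) {
        double[] a = xs.clone(), b = ys.clone();
        Arrays.sort(a);
        Arrays.sort(b);
        int i = 0, j = 0;
        double d = 0;
        while (i < a.length && j < b.length) {
            // Advance past all copies of the smaller value, then compare CDFs.
            double t = Math.min(a[i], b[j]);
            while (i < a.length && a[i] == t) i++;
            while (j < b.length && b[j] == t) j++;
            d = Math.max(d, Math.abs((double) i / a.length - (double) j / b.length));
        }
        return d;
    }

    public static void main(String[] args) {
        // Hypothetical timing samples: observed delays vs. delays sampled
        // from the specification's distribution.
        double[] observed = {0.1, 0.2, 0.35, 0.5};
        double[] model = {0.15, 0.25, 0.4, 0.55};
        System.out.println(ksStatistic(observed, model)); // 0.25
    }
}
```

A conformance verdict would then compare this statistic against the critical value for the chosen significance level to accept or reject the observed timing distribution.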

  • Ruben Giaquinta, Ruth Hoffmann, Murray Ireland, Alice Miller, and Gethin Norman. Strategy Synthesis for Autonomous Agents using PRISM

    We present probabilistic models for autonomous agent search and retrieve missions derived from Simulink models for an Unmanned Aerial Vehicle (UAV), and show how probabilistic model checking and the probabilistic model checker PRISM are used for optimal controller generation. We introduce a sequence of scenarios relevant to UAVs and other autonomous agents such as autonomous underwater vehicles and autonomous ground vehicles. For each scenario we demonstrate how it can be modelled using the PRISM language, give the model checking statistics and present the synthesised optimal controllers. We show how the controllers can be returned to the UAV and adapted for use on larger search areas. We conclude with a discussion of the limitations of using probabilistic model checking and PRISM in this context and what steps can be taken to overcome them.

  • Miguel Romero and Camilo Rocha. Symbolic Execution and Reachability Analysis using Rewriting Modulo SMT for Spatial Concurrent Constraint Systems with Extrusion

    The usual high degree of assurance in safety-critical systems is now challenged by a new incarnation of distributed systems exposed to the presence of hierarchical computation (e.g., virtualization resources such as container and virtual machine technology). This paper addresses the issue of symbolically specifying and verifying properties of distributed hierarchical systems using rewriting modulo SMT, a symbolic approach for rewriting logic that seamlessly combines rewriting modulo theories, SMT solving, and model checking. It presents a rewrite theory **R** implementing a symbolic executable semantics of an algebraic model of spatially constrained concurrent processes with extrusion. The underlying constraint system in **R** is materialized with the help of SMT-solving technology, where the constraints are quantifier-free formulas interpreted over the Booleans and integers, and information entailment is queried via semantic inference. Symbolic rewrite steps with **-->R** capture *all* possible traces from ground instances of the source state to ground instances of the target state. This approach, as illustrated with examples throughout the paper, is well-suited for specifying and proving (or disproving) existential reachability properties of distributed hierarchical systems, such as fault-tolerance, consistency, and privacy.

  • Lola Masson, Jérémie Guiochet, Hélène Waeselynck, Kalou Cabrera, Sofia Cassel and Martin Törngren. Tuning Permissiveness of Active Safety Monitors for Autonomous Systems

    Robots and autonomous systems have become a part of our everyday life, so guaranteeing their safety is a crucial issue. Among the possible methods for guaranteeing safety, monitoring is widely used, but few methods exist to generate the safety rules that implement such monitors. In particular, it is necessary to build safety monitors that do not excessively constrain the system's ability to perform its tasks, as these systems operate with little human intervention. In this paper we propose a method that takes the system's desired tasks into account when specifying strategies for monitors, and we apply it to a case study. We show that we can synthesize a larger number of strategies and facilitate reasoning about the trade-off between safety and functionality.

  • Radha Nakade, Eric Mercer, Peter Aldous and Jay McCarthy. Model Checking Task Parallel Programs for Data-race

    Data-race detection is the problem of determining whether a concurrent program has a data race in some execution and input; it has been long studied and often solved. The research in this paper reprises the problem in the context of task parallel programs, with the intent to prove, via model checking, the absence of data races on any feasible schedule for a given input. Many of the correctness properties afforded by task parallel programming models such as OpenMP, Cilk, X10, Chapel, and Habanero rely on data-race freedom. Model checking for data races contrasts with recent work using run-time monitoring, log analysis, or static analysis, which is complete or sound but never both. The model checking algorithm builds a happens-before relation from the program execution and uses that relation to detect data races, similar to many solutions that reason over a single observed execution. Unlike those solutions, though, model checking generates additional program schedules sufficient to prove data-race freedom over all schedules on the given input. The approach is evaluated in a Java implementation of Habanero using the Java Pathfinder model checker. The results, when compared to existing data-race detectors in Java Pathfinder, show a significant reduction in the time required to prove data-race freedom.
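As a generic sketch of the happens-before reasoning the abstract refers to (a textbook vector-clock formulation, not the paper's algorithm; all names are illustrative):

```java
public class VectorClockDemo {
    // Each event carries a vector clock with one counter per task.
    // Event e1 happens-before e2 iff e1's clock is component-wise <= e2's
    // and strictly smaller in at least one component.
    static boolean happensBefore(int[] a, int[] b) {
        boolean strictlyLess = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyLess = true;
        }
        return strictlyLess;
    }

    // Two accesses to the same location race if at least one is a write
    // and neither happens-before the other.
    static boolean races(int[] c1, boolean write1, int[] c2, boolean write2) {
        return (write1 || write2) && !happensBefore(c1, c2) && !happensBefore(c2, c1);
    }

    public static void main(String[] args) {
        int[] write = {1, 0};     // a write by task 0
        int[] read = {0, 1};      // a concurrent read by task 1 (clocks incomparable)
        System.out.println(races(write, true, read, false)); // true: data race

        int[] laterRead = {2, 1}; // a read ordered after the write (e.g. via a join)
        System.out.println(races(write, true, laterRead, false)); // false
    }
}
```

A detector over a single observed execution checks exactly this relation; the model checking approach described above additionally explores enough alternative schedules to make the verdict hold for all schedules on the given input.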

  • Aymeric Fromherz, Abdelraouf Ouadjaout and Antoine Miné. Static Value Analysis of Python Programs by Abstract Interpretation

    We propose a static analysis by abstract interpretation for a significant subset of Python to infer variable values, run-time errors, and uncaught exceptions. Python is a high-level language with dynamic typing, a class-based object system, complex control structures such as generators, and a large library of builtin objects. This makes static reasoning on Python programs challenging. The control flow is highly dependent on the type of values, which we thus infer accurately. As Python lacks a formal specification, we first present a concrete collecting semantics of reachable program states. We then propose a non-relational flow-sensitive type and value analysis based on simple abstract domains for each type, and handle non-local control such as exceptions through continuations. We show how to infer relational numeric invariants by leveraging the type information we gather. Finally, we propose a relational abstraction of generators to count the number of available elements and prove that no StopIteration exception is raised. Our prototype implementation, still under heavy development, does not support some Python features, such as compile, and handles only a small part of the builtin objects and standard library. Nevertheless, we are able to present preliminary experimental results on analyzing actual, if small, Python code from a benchmarking application and a regression test suite.

  • Flavio M. de Paula, Arvind Haran and Brad Bingham. An Efficient Rewriting Framework for Trace Coverage of Symmetric Systems

    Verification coverage is an important metric in any hardware verification effort. Coverage models are proposed as sets of events the hardware may exhibit, intended to be possible under a test scenario. Usually, each of these events corresponds to a visited state or taken transition in some higher-level transition system of the underlying hardware. A more sophisticated approach is to check that tests exercise specific sequences of events, corresponding to traces through the transition system. However, such trace-based coverage models are inherently expensive to consider in practice, as the number of traces is exponential in trace length. We present a novel framework that combines conservative abstraction with rewriting to construct a concise trace-based coverage model of a class of parameterized symmetric systems. First, we leverage both symmetry and rewriting to construct abstractions that can be tailored by user-defined rewriting rules. Then, under this abstraction, a coverage model for a larger system can be generated from traces of a smaller system. This coverage model is of tractable size, is tractable to generate, and can be used to identify coverage holes in large systems. Our experiments on the cache coherence protocol implementation of a multi-billion-transistor industrial design demonstrate the viability and effectiveness of this approach.

  • Ansgar Fehnker and Kaylash Chaudhary. Twenty Percent and a Few Days - Optimising a Bitcoin Majority Attack (short paper)

    Bitcoin is a distributed online payment system that organises transactions into blocks. The size of blocks is limited to 1 megabyte, which also limits the number of transactions that can be confirmed. This year, several attempts have been made to create a fork or split that removes this restriction. One such alternative is Bitcoin Unlimited (BTU). Proponents of BTU have suggested using a type of majority attack to force other Bitcoin miners to adopt BTU. In this paper we model this attack in Uppaal and analyse how long it will take for an attack to succeed, depending on the attacker's share of the total network and the so-called confirmation depth. The analysis shows that with a share of 20%, an attack will be successful within a few days. This paper also looks at the effect of increasing the confirmation depth as a countermeasure.

  • Jeroen Meijer and Jaco van de Pol. Sound Black-Box Checking in the LearnLib

    In black-box checking (BBC), incremental hypotheses of a system are learned in the form of finite automata. LTL formulae are verified on these automata, or their counterexamples are validated on the actual system. We extend LearnLib's system-under-learning API for sound BBC by means of state equivalence, in contrast to the original proposal, which assumes an upper bound on the number of states in the system. We show how LearnLib's new BBC algorithms can be used in practice, as well as how one can experiment with different model checkers and BBC algorithms. Using the RERS 2017 challenge, we provide experimental results on the performance of all of LearnLib's active learning algorithms when applied in a BBC setting. The performance of learning algorithms was previously unknown for this setting. We show that the novel incremental algorithms TTT and ADT perform best.

  • Philipp Koerner and Jens Bendisposto. Distributed Model Checking Using ProB

    Model checking larger specifications with ProB (or other model checkers) can take a lot of time, from several minutes up to weeks. Naturally, this renders the development of a correct specification very cumbersome. If the model offers enough non-determinism, however, we can distribute the workload onto multiple computers in order to reduce the runtime. In this paper, we present distb, a distributed version of ProB's model checker. Furthermore, we show possible speed-ups for real-life formal models on both a single workstation and a high-performance cluster.

  • Saksham Chand and Yanhong A. Liu. Simpler Specifications and Easier Proofs of Distributed Algorithms Using History Variables

    This paper empirically evaluates specifications and proofs of distributed algorithms specified using only message history variables. We argue that while pseudocode and implementations of these algorithms define and maintain other state variables, using them in formal specifications is not a good choice. We develop specifications in TLA+ for Basic Paxos, Multi-Paxos, and Multi-Paxos with Preemption that use only message history variables, and we show that not only are they conceptually simpler than previous specifications, but they also allow us to develop easier proofs that are mechanically checked using TLAPS, and the proofs are checked more efficiently. We show how history variables lead to shorter proofs that need fewer manually written invariants, and we outline how the simpler specifications can help in systematically generating useful invariants for the proofs.

  • Siddhartha Bhattacharyya, Thomas Eskridge, Natasha Neogi, Marco Carvalho and Milton Stafford. Formal Assurance for Cooperative Intelligent Autonomous Agents

    Developing trust in intelligent agents requires understanding the full capabilities of the agent, including the boundaries beyond which the agent is not designed to operate. This paper focuses on applying formal verification methods to identify these boundary conditions in order to ensure the proper design for the effective operation of the human-agent team. The approach involves creating an executable specification of the human-machine interaction in a cognitive architecture, which incorporates the expression of learning behavior. The model is then translated into a formal language, where verification and validation activities can occur in an automated fashion. We illustrate our approach through the design of an intelligent co-pilot that teams with a human in a takeoff operation in which a contingency scenario involving an engine-out is potentially executed. The formal verification and counterexample generation enable increased confidence in the designed procedures and behavior of the intelligent co-pilot system.

  • Yassmeen Elderhalli, Osman Hasan, Waqar Ahmed and Sofiène Tahar. Formal Dynamic Fault Trees Analysis using an Integration of Theorem Proving and Model Checking

    Dynamic fault trees (DFTs) have emerged as an important tool for capturing the dynamic behavior of system failure. These DFTs are analyzed qualitatively and quantitatively using stochastic or algebraic methods. Model checking has recently been proposed to conduct the failure analysis of systems using DFTs. However, it has not been used for DFT qualitative analysis. Moreover, its analysis time grows exponentially with the number of states, and its reduction algorithms are usually not formally verified. To overcome these limitations, we propose a methodology to perform the formal analysis of DFTs using an integration of theorem proving and model checking. We formalize all basic DFT gates in higher-order logic and formally verify many algebraic simplification properties using theorem proving. Based on this, we are able to prove the equivalence between raw DFTs and their reduced forms, enabling the formal qualitative analysis (determining the cut sets and sequences) of DFTs with theorem proving. We then use model checking to perform the quantitative analysis (computing probabilities of failure) of the formally verified reduced DFT. We applied our methodology to five benchmarks; the results show that the formally verified reduced DFT was analyzed using model checking with up to six times fewer states and up to 133,000 times faster.

  • Sarah Benyagoub, Meriem Ouederni, Yamine Ait Ameur, and Atif Mashkoor. Incremental Construction of Realizable Choreographies

    This paper proposes a correct-by-construction method to build realizable choreographies described using conversation protocols (CPs). We define a new language consisting of a set of operators for the incremental construction of CPs. We suggest an asynchronous model described with the Event-B method and its refinement strategy, ensuring the scalability of our approach.

  • César Augusto Ochoa Escudero, Rémi Delmas, Thomas Bochot, Matthieu David, and Virginie Wiels. Automatic Generation of DO-178 Test Procedures

    Liebherr-Aerospace Toulouse is a provider of safety-critical Air Management Systems for civilian and military aircraft. The applicative software of such systems is developed following DO-178 guidelines, using a model-based approach built on the SCADE modeling language. In the current V&V process, Test Cases (TCs) specify test conditions and expected test outcomes on internal SCADE data-flows. TCs are then implemented in the form of concrete Test Procedures (TPs) that are run on the target, which can only drive the model's main inputs. Today, TP implementation is a complex and time-consuming task when performed manually. This paper proposes an approach to assist the generation of TPs. We propose a formal and declarative test case specification language, from which synchronous SCADE observers are generated and composed with the SCADE model of the applicative software. Model checking techniques are then used to assist the production of the applicative software's main inputs.

  • Andrew Ireland, Maria Teresa Llano and Simon Colton. The Use of Automated Theory Formation in Support of Hazard Analysis (short paper)

    Model checking and simulation are powerful techniques for developing and verifying the design of reactive systems. Here we propose the use of a complementary technique -- automated theory formation. In particular, we report on an experiment in which we used a general-purpose automated theory formation tool, HR, to explore properties of a model written in Promela. Our use of HR is constrained by meta-knowledge about the model that is relevant to hazard analysis. Moreover, we argue that such meta-knowledge will enable us to explore how safety properties could be violated.

  • Zhuo Chen and Werner Dietl. Don't Miss the End: Preventing Unsafe End-of-File Comparisons (short paper)

    Reading from an InputStream or Reader in Java either returns the read byte/character or -1 if the end-of-file (EOF) has been reached. To support the additional -1 as a return value, the read method returns an int. After comparing the value against EOF, it is converted to byte or char. If the return value is converted to byte or char before the comparison to -1, the loop will either exit prematurely or be stuck in an infinite loop. The SEI CERT Oracle Coding Standard for Java rule FIO08-J, "Distinguish between characters or bytes read from a stream and -1", describes this issue in detail. This paper presents a type system that statically prevents unsafe EOF value comparisons and is implemented for Java using the Checker Framework. In an evaluation of 35 projects (9 million LOC) it detected 3 defects in production software, 8 bad coding practices, and no false positives. The overall annotation effort is very low: overrides of the read method needed to be annotated, requiring a total of 44 annotations, and 3 additional annotations were needed for fields and method parameters. To the best of our knowledge this is the first freely available tool to prevent this security issue.
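The defect class targeted by rule FIO08-J can be reproduced in a few lines. A minimal sketch (class and method names are invented; only the narrowing-before-comparison pattern comes from the abstract):

```java
import java.io.ByteArrayInputStream;

public class EofDemo {
    // BUGGY: the int returned by read() is narrowed to byte before the EOF
    // check. (byte) 0xFF == -1, so a legitimate 0xFF byte looks like EOF
    // and the loop exits prematurely.
    static int countBytesBuggy(ByteArrayInputStream in) {
        int count = 0;
        byte b;
        while ((b = (byte) in.read()) != -1) {
            count++;
        }
        return count;
    }

    // CORRECT: compare the int return value against -1 first, then narrow.
    static int countBytesSafe(ByteArrayInputStream in) {
        int count = 0;
        int r;
        while ((r = in.read()) != -1) {
            byte b = (byte) r; // safe to narrow after the EOF check
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        byte[] data = {1, 2, (byte) 0xFF, 3}; // the 0xFF byte triggers the bug
        System.out.println(countBytesBuggy(new ByteArrayInputStream(data))); // 2
        System.out.println(countBytesSafe(new ByteArrayInputStream(data)));  // 4
    }
}
```

With a Reader and a char variable the failure mode flips: char is unsigned, so the narrowed value can never equal -1 and the loop never terminates.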

  • András Vörös, Márton Búr, István Ráth, Ákos Horváth, Zoltán Micskei, László Balogh, Bálint Hegyi, Benedek Horváth, Zsolt Mázló and Dániel Varró. MoDeS3: Model-based Demonstrator for Smart and Safe Cyber-Physical Systems (short paper)

    We present MoDeS3, a complex research demonstrator illustrating the combined use of model-driven development, formal verification, safety engineering and IoT technologies for smart and safe cyber-physical systems. MoDeS3 represents a smart transportation system-of-systems composed of a model railway and a crane that can automatically load and unload cargo from trains, where both subsystems need to fulfill functional and safety requirements. The demonstrator is built following model-based software engineering principles, while system-level safety is ensured by the combined use of design-time and runtime verification and validation techniques.

  • Giovanna Broccia, Paolo Milazzo and Peter Csaba Ölveczky. An Executable Formal Framework for Safety-Critical Human Multitasking

    When a person is concurrently interacting with different systems, the amount of cognitive resources required (cognitive load) could be too high and might prevent some tasks from being completed. When such human multitasking involves safety-critical tasks, for example in an airplane, a spacecraft, or a car, failure to devote sufficient attention to the different tasks could have serious consequences. To study this problem, we define an executable formal model of human attention and multitasking in Real-Time Maude. It includes a description of the human working memory and the cognitive processes involved in the interaction with a device. Our framework enables us to analyze human multitasking through simulation, reachability analysis, and LTL and timed CTL model checking, and we show how a number of prototypical multitasking problems can be analyzed in Real-Time Maude. We illustrate our modeling and analysis framework by studying the interaction with a GPS navigation system while driving, and apply model checking to show that in some cases the cognitive load of the navigation system could cause the driver to keep the focus away from driving for too long.

  • Marco Feliú and Mariano Moscato. Towards a Formal Safety Framework for Trajectories (short paper)

    Trajectory generation is at the heart of an autonomous Unmanned Aerial System (UAS). Given a navigation context, UAS have to conceive and perform trajectories that fulfill their mission while being safe from collisions with obstacles and surrounding traffic. These intended trajectories are idealizations of the various actual physical trajectories that the UAS may perform during flight. The validation of actual physical trajectories w.r.t. their intended counterparts is challenging due to the interaction over time of several factors such as the environmental conditions, measurement errors and the cyber-physical components of the UAS. For this reason, the most common validation technique for trajectory generators is flight simulation, which is not exhaustive and thus cannot prove the actual absence of collisions. This paper presents a preliminary formal framework to reason about the collision-safety of trajectories with uncertainty and its application as an oracle for black-box testing validation of trajectory generators. The proposed strategy would allow the collision-safety evaluation of trajectories while removing the need for simulation.

  • Rui Qiu, Sarfraz Khurshid, Corina Pasareanu, Junye Wen and Guowei Yang. Using Test Ranges to Improve Symbolic Execution

    Symbolic execution is a systematic technique for checking programs, which has received a lot of research attention during the last decade. In principle, symbolic execution provides a powerful analysis for bug finding. In practice, however, the technique remains computationally expensive and hard to scale. This paper introduces Synergise, a novel approach to improving symbolic execution by tackling a key bottleneck to its wider adoption: costly and incomplete constraint solving. To mitigate the cost, Synergise introduces a succinct encoding of constraint solving results, thereby enabling symbolic execution to be distributed among different workers while sharing and re-using constraint solving results among them, without having to communicate databases of such results. To mitigate the incompleteness, Synergise integrates complementary testing approaches, e.g., search-based test generation, with symbolic execution, thereby enabling symbolic execution and other techniques to be applied in tandem, say, to create higher-quality tests. Experimental results using a suite of Java programs show that Synergise presents a promising approach for improving symbolic execution.

  • Allan Blanchard, Nikolai Kosmatov and Frédéric Loulergue. Ghosts for Lists: A Critical Module of Contiki Verified in Frama-C

    Internet of Things (IoT) applications are becoming increasingly critical and require rigorous formal verification. In this paper we target Contiki, a widely used open-source OS for IoT, and present a verification case study of one of its most critical modules: that of linked lists. Its API and list representation differ from the classical linked list implementations, and are particularly challenging for deductive verification. The proposed verification technique relies on a parallel view of a list through a companion ghost array. This approach makes it possible to perform most proofs automatically using the Frama-C/WP tool, only a small number of auxiliary lemmas being proved interactively in the Coq proof assistant. We present an elegant segment-based reasoning over the companion array developed for the proof. Finally, we validate the proposed specification by proving a few functions manipulating lists.

  • Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan and Ashish Tiwari. Output Range Analysis for Deep Feedforward Neural Networks

    Given a neural network (NN) and a set of possible inputs to the network described by polyhedral constraints, we wish to compute a safe over-approximation of the possible output values. This operation is a fundamental primitive enabling the formal analysis of neural networks that are extensively used for a variety of machine learning tasks such as image classification, perception, and control of autonomous systems. Increasingly, they are deployed in high-assurance applications, leading to a compelling use case for formal verification approaches. In this paper, we present an efficient range estimation algorithm that iterates between an expensive "global" combinatorial search using mixed integer linear programming, and a relatively inexpensive "local" optimization that repeatedly seeks a local optimum of the function represented by the NN. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate applications of our approach to computing flowpipes for neural network-based feedback controllers. We show that the use of local search in conjunction with mixed integer linear programming solvers effectively reduces the combinatorial search over possible combinations of active neurons in the network by pruning away suboptimal nodes.
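For contrast with the MILP-plus-local-search algorithm described above, the simplest sound output range estimate is plain interval arithmetic pushed layer by layer through the network; it is cheap but can be very coarse. A sketch under that assumption (all names invented; this is not the paper's method):

```java
public class IntervalNN {
    // Propagate per-neuron intervals [lo, hi] through one affine layer,
    // optionally followed by ReLU. Sound but coarse: it ignores all
    // correlations between neurons.
    static double[][] layer(double[][] in, double[][] W, double[] b, boolean relu) {
        double[][] out = new double[W.length][2];
        for (int i = 0; i < W.length; i++) {
            double lo = b[i], hi = b[i];
            for (int j = 0; j < in.length; j++) {
                double w = W[i][j];
                // A positive weight maps lo->lo and hi->hi; a negative one swaps them.
                if (w >= 0) { lo += w * in[j][0]; hi += w * in[j][1]; }
                else        { lo += w * in[j][1]; hi += w * in[j][0]; }
            }
            if (relu) { lo = Math.max(0, lo); hi = Math.max(0, hi); }
            out[i] = new double[]{lo, hi};
        }
        return out;
    }

    public static void main(String[] args) {
        // Tiny network computing relu(x) + relu(-x) = |x| for x in [-1, 1].
        double[][] x = {{-1, 1}};
        double[][] h = layer(x, new double[][]{{1}, {-1}}, new double[]{0, 0}, true);
        double[][] y = layer(h, new double[][]{{1, 1}}, new double[]{0}, false);
        // True output range is [0, 1]; the interval bound is the safe but
        // loose over-approximation [0, 2].
        System.out.println(y[0][0] + " " + y[0][1]); // 0.0 2.0
    }
}
```

Tightening exactly this kind of gap is what motivates the combinatorial search over combinations of active neurons.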

  • Bruno Dutertre, Dejan Jovanović and Jorge Navas. Verification of Fault-Tolerant Protocols with Sally (short paper)

    Recent developments in model-checking technology make it possible to automatically verify complex, fault-tolerant protocols. We present a modeling approach for specifying such protocols and its application to a variety of examples. This approach is supported by our new model checker called Sally.

  • Alfons Laarman. Optimal Storage of Combinatorial State Spaces

    Efficiently deciding reachability for model checking problems requires storing the entire state space. We provide an information theoretical lower bound for these storage requirements and demonstrate how it can be reached in practice using a simple binary tree in combination with a compact hash table. Extensive experiments confirm that the lower bound is reached in practice in a majority of cases, demonstrating the combinatorial nature of state spaces.

  • Alfons Laarman. Stubborn Transaction Reduction

    The exponential explosion of parallel interleavings remains a fundamental challenge to model checking of concurrent programs. Both partial-order reduction (POR) and transaction reduction (TR) decrease the number of interleavings in a concurrent system. Unlike POR, transactions also reduce the number of intermediate states. Modern POR techniques, on the other hand, offer more dynamic ways of identifying commutative behavior, a crucial task for obtaining good reductions. We show that transaction reduction can use the same dynamic commutativity as found in stubborn set POR. We also compare reductions obtained by POR and TR, demonstrating with several examples that these techniques complement each other. With an implementation of the dynamic transactions in the model checker LTSmin, we compare its effectiveness with the original static TR and two POR approaches. Several inputs, including realistic case studies, demonstrate that the new dynamic TR can surpass POR in practice.

  • Cumhur Erkan Tuncali, Bardh Hoxha, Guohui Ding, Georgios Fainekos and Sriram Sankaranarayanan. Experience Report: Application of Falsification Methods on the UxAS System (short paper)

    In this report, we present our experiences in applying falsification methods over the Unmanned Systems Autonomy Services (UxAS) system. UxAS is a collection of software modules that enables complex mission planning for multiple vehicles. To test the system, we utilized the tool S-TaLiRo to generate mission scenarios for both UxAS and the underlying vehicle simulators, with the goal of finding behaviors which do not meet system specifications.

  • Hendrik Maarand and Tarmo Uustalu. Certified Foata Normalization for Generalized Traces

    Mazurkiewicz traces are a well-known model of concurrency with a notion of equivalence for interleaving executions. Interleaving executions of a concurrent system are represented as strings over an alphabet equipped with an independence relation, and two strings are taken to be equivalent if they can be transformed into each other by repeatedly commuting independent consecutive letters. Analyzing all behaviors of the system can be reduced to analyzing one canonical representative from each equivalence class; well-known normal forms such as the Foata normal form can be used for this purpose. In some applications, it is useful to have commutability of two adjacent letters in a string depend on their left context. We develop Foata normal forms and normalization for Sassone et al.’s context-dependent generalization of traces, formalize this development in the dependently typed programming language Agda and show generalized Foata normalization in action on an example from relaxed shared-memory concurrency (local reads in TSO).

  • Francesco Marconi, Giovanni Quattrocchi, Luciano Baresi, Marcello M. Bersani and Matteo Rossi. On the Timed Analysis of Big-Data Applications

    Apache Spark is one of the best-known frameworks for executing big-data batch applications over a cluster of (virtual) machines. Defining the cluster (i.e., the number of machines and CPUs) to attain guarantees on the execution times (deadlines) of the application is indeed a trade-off between the cost of the infrastructure and the time needed to execute the application. Sizing the computational resources in order to prevent cost overruns can benefit from the use of formal models as a means to capture the execution time of applications. Our model of Spark applications, based on the CLTLoc logic, is defined by considering the directed acyclic graph around which Spark programs are organized, the number of available CPUs, the number of tasks elaborated by the application, and the execution time of a single task. If the outcome of the analysis is positive, then the execution is feasible; that is, it can be completed within a given time span. The analysis tool has been implemented on top of the Zot formal verification tool. A preliminary evaluation shows that our model is sufficiently accurate: the formal analysis identifies execution times that are close (the error is less than 10%) to those obtained by actually running the application.


Curator and Responsible NASA Official: César Muñoz
Last modified: January 2018