PPDP & LOPSTR

08:30
Registration
[Location: 2nd floor, Hörsaalzentrum]
The registration desk will be open from 8:30 a.m. It is located on the second floor of the building, while the conference rooms (lecture halls HZ 10 and HZ 11) are on the third floor.

09:00
PPDP & LOPSTR: Joint Invited Talk
[Chair: Peter Stuckey • Location: HZ 10 (third floor, Hörsaalzentrum)]

Constrained Horn Clauses for Verification (Invited Talk)
Jorge Navas
Developing scalable software verification tools is a very difficult task. First, due to the undecidability of the verification problem, these tools must be highly tuned and engineered to provide reasonable efficiency and precision trade-offs. Second, different programming languages come with very diverse assortments of syntactic and semantic features. Third, the diverse encodings of the verification problem make integration with other powerful solvers and verifiers difficult. This talk presents SeaHorn, an open-source automated reasoning framework based on Constrained Horn Clauses. SeaHorn combines advanced automated solving techniques based on Satisfiability Modulo Theories (SMT) and Abstract Interpretation. SeaHorn is built on top of LLVM, using its front-end(s) to deal with the idiosyncrasies of the source syntax, and it benefits greatly from LLVM optimizations to reduce the verification effort. SeaHorn uses Constrained Horn Clauses (CHC), which are a uniform way to formally represent a broad variety of transition systems while allowing many encoding styles of verification conditions. Moreover, the recent popularity of CHC as an intermediate language for verification engines makes it possible to interface SeaHorn with a variety of new and emerging tools. All of these features make SeaHorn a versatile and highly customizable tool which allows researchers to easily build or experiment with new verification techniques.

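To give a flavor of the CHC encoding style the talk is about, here is a minimal sketch (a hand-written toy, not SeaHorn's output; it assumes the z3py bindings, `pip install z3-solver`) that encodes a counting loop as Constrained Horn Clauses and checks safety with Z3's fixedpoint engine:

```python
# Toy CHC encoding of:  assume(n >= 0); x = 0; while (x < n) x++; assert(x <= n);
# inv(x, n) is the unknown loop invariant; err(x) is derivable iff the
# assertion can fail, so query(err) == unsat means the program is safe.
from z3 import Fixedpoint, Function, IntSort, BoolSort, Ints, And

fp = Fixedpoint()
fp.set(engine='spacer')
inv = Function('inv', IntSort(), IntSort(), BoolSort())
err = Function('err', IntSort(), BoolSort())
x, n = Ints('x n')
fp.register_relation(inv, err)
fp.declare_var(x, n)

fp.rule(inv(0, n), n >= 0)                        # initialization
fp.rule(inv(x + 1, n), And(inv(x, n), x < n))     # loop iteration
fp.rule(err(x), And(inv(x, n), x >= n, x > n))    # loop exit, assert violated
print(fp.query(err(x)))                           # unsat: the program is safe
```
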
10:00
Coffee Break
[Location: 2nd floor, Hörsaalzentrum]

10:30
PPDP: Contributed Talks #6
[Chair: Alberto Pardo • Location: HZ 10 (third floor, Hörsaalzentrum)]

10:30
Declarative GUIs: Simple, Consistent, and Verified
Stephan Adelsberger, Anton Setzer and Eric Walkingshaw
Graphical user interfaces (GUIs) are ubiquitous in real-world software and a notorious source of bugs that are difficult to catch through software testing. Model checking has been used to prove the absence of certain kinds of bugs, but model checking works on an abstract model of the GUI application, which might be inconsistent with its implementation. We present a library for developing directly verified, state-dependent GUI applications in the dependently typed programming language Agda. In the library, the type of a GUI's controller depends on a specification of the GUI itself, statically enforcing consistency between them. Arbitrary properties can be defined and proved in terms of user interactions and state transitions. Our library connects to a custom-built Haskell back-end for declarative vector-based GUI elements. Compared to an earlier version of our library built on an existing imperative GUI framework, the more declarative back-end supports simpler definitions and proofs. As a practical application of our library to a safety-critical domain, we present a case study developed in cooperation with the Medical University of Vienna. The case study implements a healthcare process for prescribing anticoagulants, which is highly error-prone when followed manually. Our implementation generates GUIs from an abstract description of a data-aware business process, making our approach easy to reuse and adapt to other safety-critical processes. We prove medically relevant safety properties about the executable GUI application, such as that given certain inputs, certain states must or must not be reached.

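The library itself is written in Agda, so the following is only a loose, dynamically checked Python analogue (all names invented) of the consistency property described above: a controller is accepted only if it supplies exactly one handler per interface element of the GUI it claims to control.

```python
# A run-time check standing in for what the Agda library enforces in the
# type of the controller: handlers must match the GUI specification.
def make_controller(buttons, handlers):
    if set(handlers) != set(buttons):
        raise TypeError("controller inconsistent with GUI specification")
    return handlers

start = make_controller(["prescribe", "cancel"], {
    "prescribe": lambda state: {**state, "screen": "dose-entry"},
    "cancel":    lambda state: {**state, "screen": "start"},
})
print(start["prescribe"]({"screen": "start"}))   # {'screen': 'dose-entry'}
```
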
11:00
Combinatorics of Explicit Substitutions
Maciej Bendkowski and Pierre Lescanne
lambda-upsilon is an extension of the lambda-calculus which internalises the calculus of substitutions. In the current paper, we investigate the combinatorial properties of lambda-upsilon, focusing on the quantitative aspects of substitution resolution. We exhibit an unexpected correspondence between the counting sequence for lambda-upsilon terms and the famous Catalan numbers. As a by-product, we establish effective sampling schemes for random lambda-upsilon terms. We show that typical lambda-upsilon terms represent, in a strong sense, non-strict computations in the classic lambda-calculus. Moreover, typically almost all substitutions are in fact suspended, i.e., unevaluated, under closures. Consequently, we argue that lambda-upsilon is an intrinsically non-strict calculus of explicit substitutions. Finally, we investigate the distribution of various redexes governing substitution resolution in lambda-upsilon and the quantitative contribution of the various substitution primitives.

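For reference, the Catalan numbers mentioned above form the sequence 1, 1, 2, 5, 14, 42, ...; a quick sketch of the closed form (the paper's correspondence itself is not reproduced here):

```python
# C_n = binom(2n, n) / (n + 1), the n-th Catalan number.
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(8)])   # [1, 1, 2, 5, 14, 42, 132, 429]
```
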
11:30
Schematic Polymorphism in the Abella Proof Assistant
Gopalan Nadathur and Yuting Wang
The Abella interactive theorem prover has proven to be an effective vehicle for reasoning about relational specifications. However, the system has a limitation that arises from the fact that it is based on a simply typed logic: formalizations that are identical except that they apply to different types have to be repeated at each type. We develop an approach that overcomes this limitation while preserving the logical underpinnings of the system. In this approach, object constructors, formulas and other relevant logical notions are allowed to be parameterized by types, with the interpretation that they stand for the (infinite) collection of corresponding constructs obtained by instantiating the type parameters. The proof structures that we consider for formulas that are schematized in this fashion are limited to ones whose type instances are valid proofs in the simply typed logic. We develop schematic proof rules that ensure this property, a task that is complicated by the fact that type information influences the notion of unification that plays a key role in the logic. Our ideas, which have been implemented in an updated version of the system, accommodate schematic polymorphism both in the core logic of Abella and in the executable specification logic that it embeds.

10:30
LOPSTR: Analysis of LP
[Chair: Michael Hanus • Location: HZ 11 (third floor, Hörsaalzentrum)]

10:30
Compiling Control as Offline Conjunctive Partial Deduction
Vincent Nys and Danny De Schreye
We present a new approach to a technique known as compiling control, whose aim is to compile away special mechanisms for non-standard atom selection in logic programs. It has previously been conjectured that compiling control could be implemented as an instance of the first Futamura projection, in which an interpreter is specialized for an input program. However, the exact nature of such an interpreter and of the required technique for specialization were never specified. In this work, we propose a Prolog meta-interpreter which applies the desired non-standard selection rule and which is amenable to specialization using offline partial deduction. After the initial analysis phase of compiling control, we collect annotations to specialize the interpreter using the Logen system for offline partial deduction. We also show that the result of the specialization is equivalent to the program obtained using the traditional approach to compiling control. In this way, we simplify the synthesis step.

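The role of the selection rule is easy to see in a toy setting. The sketch below (a hypothetical propositional core in Python, not the paper's Prolog meta-interpreter) is a Horn-clause interpreter parameterised by the atom-selection rule, the control mechanism that compiling control specializes away:

```python
# A Horn-clause interpreter parameterised by the atom selection rule
# (propositional toy, for illustration only). A program maps an atom to
# the list of alternative clause bodies defining it.
def solve(program, goals, select):
    if not goals:
        return True                      # empty goal: success
    i = select(goals)                    # the selection rule picks an atom
    atom, rest = goals[i], goals[:i] + goals[i + 1:]
    return any(solve(program, body + rest, select)
               for body in program.get(atom, []))

PROGRAM = {"p": [["q", "r"]], "q": [[]], "r": [["q"]]}
leftmost = lambda goals: 0                   # Prolog's standard rule
rightmost = lambda goals: len(goals) - 1     # a non-standard rule
assert solve(PROGRAM, ["p"], leftmost) and solve(PROGRAM, ["p"], rightmost)
```

Specializing such an interpreter with respect to a fixed program and selection rule is, in the spirit of the first Futamura projection, what yields a program with the non-standard control compiled in.
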
11:00
Predicate Specialization for Definitional Higher-order Logic Programs
Antonis Troumpoukis and Angelos Charalambidis
Higher-order logic programming is an interesting extension of traditional logic programming that allows predicates to appear as arguments and variables to be used where predicates typically occur. Higher-order characteristics are indeed desirable, but on the other hand they are usually more expensive to support. In this paper we propose a program specialization technique based on partial evaluation that can be applied to a modest but useful class of higher-order logic programs, transforming them into first-order programs without introducing additional data structures. The resulting first-order programs can be executed by conventional logic programming interpreters and benefit from other optimizations these interpreters might offer. We provide an implementation and experimental results suggesting that the transformation is efficient.

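As a rough functional analogue (a hypothetical example, not taken from the paper), specializing a predicate-valued parameter to a known predicate removes the higher-order argument, which is the essence of the transformation:

```python
# Higher-order: the relation r is passed as a parameter.
def closure(r, x, y, universe):           # transitive closure over r
    return r(x, y) or any(r(x, z) and closure(r, z, y, universe)
                          for z in universe)   # acyclic data assumed

EDGES = {(1, 2), (2, 3)}
def edge(x, y):
    return (x, y) in EDGES

# First-order specialization for the concrete call closure(edge, ...):
# the predicate argument disappears and no extra data structures appear.
def closure_edge(x, y, universe):
    return edge(x, y) or any(edge(x, z) and closure_edge(z, y, universe)
                             for z in universe)

assert closure(edge, 1, 3, {1, 2, 3}) == closure_edge(1, 3, {1, 2, 3})
```
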
11:30
An Assertion Language for Slicing Constraint Logic Languages
Moreno Falaschi and Carlos Olarte
Constraint Logic Programming (CLP) is a language scheme for combining two declarative paradigms: constraint solving and logic programming. Concurrent Constraint Programming (CCP) is a declarative model for concurrency where agents interact by telling and asking constraints in a shared store. In a previous paper, we developed a framework for dynamic slicing of CCP where the user first identifies that a (partial) computation is wrong. Then, she marks (selects) the parts of the final state corresponding to the data (constraints) and processes that she wants to study more deeply. An automatic slicing process begins, and the partial computation is cleaned up by removing irrelevant information. In this paper we make two major contributions. First, we extend the framework to CLP, thus generalizing the previous work. Second, we provide an assertion language, suitable for both CCP and CLP, which allows the user to specify properties of the computations in her program. If a state in a computation does not satisfy an assertion, then some wrong information has been identified and an automatic slicing process can start. In this way we take a step further towards automating the slicing process. We show that our framework can be integrated with the previous semi-automatic one, giving the user more choices and flexibility. We show by means of examples and experiments the usefulness of our approach.

12:00
Lunch Break
[Location: Mensa Casino]

13:30
PPDP: Contributed Talks #7
[Chair: Dietmar Seipel • Location: HZ 10 (third floor, Hörsaalzentrum)]

13:30
Implicit Parameters for Logic Programming
Magnus Madsen and Ondřej Lhoták
Implicit parameters allow programmers to omit certain arguments from function calls and have them automatically inferred by the compiler based on their types. At every call site, the compiler determines the values of the implicit parameters based on their declared types and the bindings currently in implicit scope. The programmer controls this mechanism in two ways: by adding bindings to the implicit scope, or by explicitly providing the implicit parameters for the function call. Implicit parameters are known from functional and object-oriented languages such as Haskell and Scala. In recent years, more languages have added support for implicit parameters, including Agda, Coq, and Idris. Implicit parameters have played an impressive role as the foundation for a broad range of language features such as type classes, capability and effect systems, software transactional memory, macros, and more. In this paper, we propose a design of implicit parameters for typed Horn clause based logic programming languages, such as Datalog and Prolog. We illustrate the usefulness of implicit parameters and show how they support logic programming in the large. We explore some of the differences that arise between implicit parameters in functional languages and in logic languages.

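The mechanism is easiest to see in a small dynamically scoped sketch (a Python toy using the standard contextvars module; the names are invented, and a real design resolves implicits statically by type, as the paper does for typed Horn clauses):

```python
# An "implicit scope" for a database connection: callers may omit the
# argument (it is resolved from the ambient scope), extend the scope with
# a new binding, or pass the parameter explicitly.
from contextvars import ContextVar

db: ContextVar[str] = ContextVar("db", default="test.sqlite")

def query(sql, conn=None):
    conn = conn if conn is not None else db.get()   # inferred when omitted
    return f"run {sql!r} on {conn}"

print(query("select 1"))                  # resolved from the implicit scope
token = db.set("prod.sqlite")             # add a binding to the scope
print(query("select 1"))
db.reset(token)                           # leave the extended scope
print(query("select 1", conn="other.db")) # explicitly provided
```
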
14:00
Verifying Fail-Free Declarative Programs
Michael Hanus
Failed computations are a frequent problem in software system development. Some failures have external reasons (e.g., missing files) that can be caught by exception handlers. Many other failures have internal reasons, such as calling a partially defined operation with unintended arguments. In order to avoid the latter kind of failure, one can try to analyze the program at compile time for potential occurrences of these failures at run time. In this paper we present an approach to verify the absence of such failures in functional logic programs. Since programming with failures is a typical technique in logic programming, we do not want to abandon partially defined operations altogether. Instead, we want to verify conditions which ensure that operations can be executed without running into a failure. For this purpose, we propose to annotate operations with non-fail conditions that are verified at compile time with an SMT solver. For successfully verified programs, it is ensured that computations never fail, provided that the non-fail condition of the main operation is satisfied.

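A minimal sketch of the idea (a hypothetical example, not the paper's tool chain; it assumes the z3py bindings): a factorial operation defined only for non-negative arguments gets the non-fail condition n >= 0, and the proof obligation for its recursive call is discharged with the SMT solver:

```python
# fac(0) = 1; fac(n) = n * fac(n - 1) if n > 0.  fac is partial: no rule
# covers n < 0. Non-fail condition: n >= 0. The recursive call fac(n - 1)
# occurs under the guard n > 0, so we must prove:
#     n >= 0  and  n > 0   ==>   n - 1 >= 0
from z3 import Int, Solver, And, Not, Implies, unsat

n = Int("n")
nonfail = lambda m: m >= 0
s = Solver()
s.add(Not(Implies(And(nonfail(n), n > 0), nonfail(n - 1))))
print("non-fail condition verified" if s.check() == unsat else "may fail")
```
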
14:30
Static Performance Guarantees for Programs with Run-time Checks
Maximiliano Klemen, Nataliia Stulova, Pedro Lopez-Garcia, Jose F. Morales and Manuel V. Hermenegildo
Instrumenting programs for run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing this overhead, such as eliminating the checks that can statically be proved to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimizations, the remaining checks may still introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice for estimating run-time checking overhead is profiling, which is not exhaustive by nature. Instead, we propose a method that uses static analysis to estimate such overhead, with the advantage that the estimations are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces, and allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express “admissible” overheads, and statically and automatically checks whether the instrumented program conforms with such specifications. Finally, we present an experimental evaluation of our approach that suggests that our method is feasible and promising.

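The following sketch (invented cost functions, purely illustrative) shows the shape of such a static check: overheads are expressed as functions of the input size n, and an admissible-overhead specification is verified for all n rather than profiled on samples:

```python
# Symbolic costs as functions of input size n (made-up numbers), plus a
# static check that the run-time checks stay within a 20% overhead budget
# for every input size n >= 2, i.e. for all possible executions.
from z3 import Int, Solver, Implies, Not, unsat

n = Int("n")
cost_program = 25 * n + 40                    # cost of the original program
cost_checks = 4 * n + 8                       # cost of the inserted checks
admissible = 5 * cost_checks <= cost_program  # overhead at most 20%
s = Solver()
s.add(Not(Implies(n >= 2, admissible)))
print("within budget" if s.check() == unsat else "budget may be exceeded")
```
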
13:30
LOPSTR: Invited Tutorial #1
[Chair: Moreno Falaschi • Location: HZ 11 (third floor, Hörsaalzentrum)]

The VeriMAP System for program transformation and verification (Invited Tutorial)
Fabio Fioravanti
Constrained Horn Clauses (CHC) are becoming very popular for representing programs and verification problems, and several tools have been developed for checking their satisfiability.

15:00
Coffee Break
[Location: 2nd floor, Hörsaalzentrum]

15:30
PPDP: Contributed Talks #8
[Chair: David Sabel • Location: HZ 10 (third floor, Hörsaalzentrum)]

15:30
Three Improvements to the Top-Down Solver
Helmut Seidl and Ralf Vogler
The local solver TD is a generic fixpoint engine which explores a given system of equations on demand. It has been successfully applied to the interprocedural analysis of procedural languages. The solver TD gains efficiency by detecting variable dependencies on the fly. This algorithm has recently been extended to deal with widening and narrowing as well. In particular, it has been equipped with automatic detection of widening and narrowing points. That version, however, is only guaranteed to terminate under two conditions: only finitely many variables are encountered, and all right-hand sides are monotonic. While the first condition is unavoidable, the second limits the applicability of the solver. Another limitation is that the solver maintains the current abstract values of all encountered variables in a single data structure, thus preventing interprocedural analyses from scaling to larger programs. In the present paper, we therefore extend the top-down solver TD in three ways. First, we indicate how the restriction to monotonic right-hand sides can be lifted. Then we show how the solver can be tuned to store abstract values only when it turns out to be inevitable. Finally, we also show how the solver can be extended to side-effecting equation systems, which have successfully been applied to a seamless combination of context-sensitive and flow-insensitive analyses.

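The demand-driven core of such a solver fits in a few lines. This is a simplified sketch (not TD itself, and without widening or narrowing): right-hand sides access other variables through a callback, which is what lets the solver record dependencies on the fly:

```python
# A local fixpoint solver over monotone equations on non-negative integers:
# it evaluates only the variables the query demands, records dependencies
# while right-hand sides run, and re-solves dependents when a value grows.
def local_solve(rhs, wanted):
    value, infl, stable = {}, {}, set()
    def solve(x):
        if x in stable:
            return
        stable.add(x)
        def get(y):                       # rhs reads y: solve it on demand
            solve(y)
            infl.setdefault(y, set()).add(x)   # and record y -> x
            return value.get(y, 0)
        new = rhs[x](get)
        if new > value.get(x, 0):         # value grew: destabilize dependents
            value[x] = new
            for z in infl.pop(x, set()):
                stable.discard(z)
                solve(z)
    solve(wanted)
    return value

# x = min(y + 1, 5), y = x: the least solution is x = y = 5.
print(local_solve({"x": lambda get: min(get("y") + 1, 5),
                   "y": lambda get: get("x")}, "x"))
```
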
16:00
Lazy Abstraction for Higher-Order Program Verification
Taku Terao
This paper proposes a lazy abstraction algorithm for the verification of functional programs. The feature of the lazy abstraction method is that predicate abstraction and model checking are fused, and that abstractions for unreachable configurations are pruned. We define an abstract semantics that characterizes the precision of the lazy abstraction algorithm, and prove the soundness of our verification method and the progress property of our abstraction refinement algorithm. We have implemented a prototype of our method, and confirmed through experiments that the total efficiency of verification is improved compared with previous eager abstraction methods.

16:30
An Infrastructure for Combining Domain Knowledge with Automated Theorem Provers
Sylvia Grewe, Sebastian Erdweg, André Pacak and Mira Mezini
Computer science has seen much progress in the area of automated verification in the last decades. Yet, there are many domains where abstract strategies for verifying standard properties are well understood by domain experts, but still not automated to a satisfactory degree. One example of such a domain is type soundness proofs. Being able to express domain-specific verification strategies using domain-specific terminology and concepts can help to narrow this gap toward more automated verification. We present the requirements, design, and implementation of a configurable verification infrastructure that allows for expressing domain knowledge about proofs and for interfacing with existing automated theorem provers and solvers to verify individual proof steps. As an application scenario for our infrastructure, we present the development of a standard type soundness proof for a typed subset of SQL.

17:00
Pattern Matching and Fixed Points: Resource Types and Strong Call-By-Need
Pablo Barenbaum, Eduardo Bonelli and Kareem Mohamed
Resource types are types that statically quantify some aspect of program execution. They come in various guises; this paper focuses on a manifestation of resource types known as non-idempotent intersection types. We use them to characterize weak normalisation for a type-erased lambda calculus for the Calculus of Inductive Constructions (λe), as introduced by Grégoire and Leroy. The λe calculus consists of the lambda calculus together with constructors, pattern matching and a fixed-point operator. The characterization is then used to prove the completeness of a strong call-by-need strategy for λe. This strategy operates on open terms: rather than having evaluation stop when it reaches an abstraction, as in weak call-by-need, it computes strong normal forms by admitting reduction inside the body of abstractions and substitutions. Moreover, argument evaluation is by need: arguments are evaluated only when needed and at most once. Such a notion of reduction is of interest in areas such as partial evaluation and in proof checkers such as Coq.

15:30
LOPSTR: Program Analysis
[Chair: Maja Kirkeby • Location: HZ 11 (third floor, Hörsaalzentrum)]

15:30
Multivariant Assertion-based Guidance in Abstract Interpretation
Isabel Garcia-Contreras, Jose F. Morales, and Manuel V. Hermenegildo Approximations during program analysis are a necessary evil, as they ensure essential properties, such as analysis soundness and termination, but they also imply that the analysis is not guaranteed to always produce useful results. In such cases it is necessary to have some means for users to provide information to guide analysis and thus to improve precision and/or shorten analysis time. This allows dealing with, e.g., constructs for which the analysis is not complete and loses precision or for which the source is only partially available. We present techniques for supporting within an abstract interpretation framework a rich set of assertions that can deal with multivariance / context-sensitivity, and can handle different run-time semantics for those assertions that cannot be discharged at compile time. We show how the proposed approach can be applied to both improving precision and accelerating convergence, and provide a procedure to decide whether the invariants used to guide the analyzer are checked, incompatible, or neither. We also provide some formal results on the effects of such assertions on the analysis results. 16:00
16:00
Eliminating Unstable Tests in Floating-Point Programs
Laura Titolo, César Muñoz, Marco A. Feliú, and Mariano M. Moscato
Round-off errors arising from the difference between real numbers and their floating-point representation cause the control flow of conditional floating-point statements to deviate from the ideal flow of the real-number computation. This problem, which is called test instability, may result in a significant difference between the computation of a floating-point program and the expected output in real arithmetic. In this paper, a formally proven program transformation is proposed to detect and correct the effects of unstable tests. The output of this transformation is a floating-point program that is guaranteed to return either the result of the original floating-point program, when it can be assured that both its real and its floating-point flows agree, or a warning when these flows may diverge. The proposed approach is illustrated with the transformation of the core computation of a polygon containment algorithm developed at NASA that is used in a geofencing system for unmanned aircraft systems.

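The essence of the transformation can be sketched in a few lines (a toy guard, not the paper's formally verified transformation; eps stands for a sound round-off error bound for the tested expression):

```python
# If the computed value of the test expression is farther from zero than
# the accumulated round-off bound eps, the float branch provably matches
# the real-arithmetic branch; otherwise the transformed program warns.
def transformed_branch(e, eps):
    if e >= eps:
        return "then"                     # float and real flows agree
    if e < -eps:
        return "else"                     # float and real flows agree
    return "warning: unstable test"       # the flows may diverge

x = 0.1 + 0.2 - 0.3          # about 5.6e-17 in floats, exactly 0 in the reals
print(x >= 0, transformed_branch(x, eps=1e-12))  # True, but flagged unstable
```
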
16:30
Formalisation of a Frame Stack Semantics for a Java-like Language
Aleksy Schubert and Jacek Chrząszcz
We present a Coq formalisation of the small-step operational semantics of Jafun, a small Java-like language with classes. This format of semantics makes it possible to naturally specify and prove invariants that should hold at each computation step. In contrast to the Featherweight Java approach, the semantics explicitly manipulates the frame stack of method calls. Thanks to that, one can express properties of a computation that depend on the execution of particular methods. On the basis of the semantics, we developed a type system that makes it possible to delineate a notion of complex value and to classify certain methods as extensional functions operating on such values. In our formalisation, we give a mechanised proof that the operational semantics for the untyped version of the language agrees with the one for the typed version. We discuss different methods of carrying out such a formalisation effort and provide experiments that substantiate our choices.

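To illustrate what a frame-stack style of small-step semantics buys, here is a toy bytecode machine (hypothetical, unrelated to Jafun or the Coq development): because the stack of call frames is explicit in every configuration, a per-step invariant about method calls can be stated and checked directly:

```python
# Each frame is (function, pc, operand stack, argument). call pushes a
# frame, ret pops one; the invariant below is checked after every step.
PROG = {
    "main":   [("push", 3), ("call", "double"), ("halt",)],
    "double": [("arg",), ("arg",), ("add",), ("ret",)],
}

def step(frames):
    fun, pc, ops, arg = frames[-1]
    op = PROG[fun][pc]
    if op[0] == "push":
        return frames[:-1] + [(fun, pc + 1, ops + [op[1]], arg)]
    if op[0] == "arg":
        return frames[:-1] + [(fun, pc + 1, ops + [arg], arg)]
    if op[0] == "add":
        return frames[:-1] + [(fun, pc + 1, ops[:-2] + [ops[-2] + ops[-1]], arg)]
    if op[0] == "call":                   # push a fresh frame for the callee
        return frames[:-1] + [(fun, pc + 1, ops[:-1], arg),
                              (op[1], 0, [], ops[-1])]
    if op[0] == "ret":                    # pop; push result on caller's stack
        cf, cpc, cops, carg = frames[-2]
        return frames[:-2] + [(cf, cpc, cops + [ops[-1]], carg)]

frames = [("main", 0, [], None)]
while PROG[frames[-1][0]][frames[-1][1]][0] != "halt":
    frames = step(frames)
    assert frames, "invariant: the frame stack is never empty"
print(frames[-1][2])                      # [6]
```
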